Dec 06 05:42:09 localhost kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec 06 05:42:09 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 06 05:42:09 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 06 05:42:09 localhost kernel: BIOS-provided physical RAM map:
Dec 06 05:42:09 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 06 05:42:09 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 06 05:42:09 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 06 05:42:09 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 06 05:42:09 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 06 05:42:09 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 06 05:42:09 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 06 05:42:09 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 06 05:42:09 localhost kernel: NX (Execute Disable) protection: active
Dec 06 05:42:09 localhost kernel: APIC: Static calls initialized
Dec 06 05:42:09 localhost kernel: SMBIOS 2.8 present.
Dec 06 05:42:09 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 06 05:42:09 localhost kernel: Hypervisor detected: KVM
Dec 06 05:42:09 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 06 05:42:09 localhost kernel: kvm-clock: using sched offset of 3392070371 cycles
Dec 06 05:42:09 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 06 05:42:09 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 06 05:42:09 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 06 05:42:09 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 06 05:42:09 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 06 05:42:09 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 06 05:42:09 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 06 05:42:09 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 06 05:42:09 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 06 05:42:09 localhost kernel: Using GB pages for direct mapping
Dec 06 05:42:09 localhost kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec 06 05:42:09 localhost kernel: ACPI: Early table checksum verification disabled
Dec 06 05:42:09 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 06 05:42:09 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 05:42:09 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 05:42:09 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 05:42:09 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 06 05:42:09 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 05:42:09 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 05:42:09 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 06 05:42:09 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 06 05:42:09 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 06 05:42:09 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 06 05:42:09 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 06 05:42:09 localhost kernel: No NUMA configuration found
Dec 06 05:42:09 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 06 05:42:09 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 06 05:42:09 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 06 05:42:09 localhost kernel: Zone ranges:
Dec 06 05:42:09 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 06 05:42:09 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 06 05:42:09 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 06 05:42:09 localhost kernel:   Device   empty
Dec 06 05:42:09 localhost kernel: Movable zone start for each node
Dec 06 05:42:09 localhost kernel: Early memory node ranges
Dec 06 05:42:09 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 06 05:42:09 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 06 05:42:09 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 06 05:42:09 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 06 05:42:09 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 06 05:42:09 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 06 05:42:09 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 06 05:42:09 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 06 05:42:09 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 06 05:42:09 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 06 05:42:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 06 05:42:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 06 05:42:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 06 05:42:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 06 05:42:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 06 05:42:09 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 06 05:42:09 localhost kernel: TSC deadline timer available
Dec 06 05:42:09 localhost kernel: CPU topo: Max. logical packages:   8
Dec 06 05:42:09 localhost kernel: CPU topo: Max. logical dies:       8
Dec 06 05:42:09 localhost kernel: CPU topo: Max. dies per package:   1
Dec 06 05:42:09 localhost kernel: CPU topo: Max. threads per core:   1
Dec 06 05:42:09 localhost kernel: CPU topo: Num. cores per package:     1
Dec 06 05:42:09 localhost kernel: CPU topo: Num. threads per package:   1
Dec 06 05:42:09 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 06 05:42:09 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 06 05:42:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 06 05:42:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 06 05:42:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 06 05:42:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 06 05:42:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 06 05:42:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 06 05:42:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 06 05:42:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 06 05:42:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 06 05:42:09 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 06 05:42:09 localhost kernel: Booting paravirtualized kernel on KVM
Dec 06 05:42:09 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 06 05:42:09 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 06 05:42:09 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 06 05:42:09 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 06 05:42:09 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 06 05:42:09 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 06 05:42:09 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 06 05:42:09 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec 06 05:42:09 localhost kernel: random: crng init done
Dec 06 05:42:09 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 06 05:42:09 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 06 05:42:09 localhost kernel: Fallback order for Node 0: 0 
Dec 06 05:42:09 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 06 05:42:09 localhost kernel: Policy zone: Normal
Dec 06 05:42:09 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 06 05:42:09 localhost kernel: software IO TLB: area num 8.
Dec 06 05:42:09 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 06 05:42:09 localhost kernel: ftrace: allocating 49335 entries in 193 pages
Dec 06 05:42:09 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 06 05:42:09 localhost kernel: Dynamic Preempt: voluntary
Dec 06 05:42:09 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 06 05:42:09 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 06 05:42:09 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 06 05:42:09 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 06 05:42:09 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 06 05:42:09 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 06 05:42:09 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 06 05:42:09 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 06 05:42:09 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 06 05:42:09 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 06 05:42:09 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 06 05:42:09 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 06 05:42:09 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 06 05:42:09 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 06 05:42:09 localhost kernel: Console: colour VGA+ 80x25
Dec 06 05:42:09 localhost kernel: printk: console [ttyS0] enabled
Dec 06 05:42:09 localhost kernel: ACPI: Core revision 20230331
Dec 06 05:42:09 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 06 05:42:09 localhost kernel: x2apic enabled
Dec 06 05:42:09 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 06 05:42:09 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 06 05:42:09 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 06 05:42:09 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 06 05:42:09 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 06 05:42:09 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 06 05:42:09 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 06 05:42:09 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 06 05:42:09 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 06 05:42:09 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 06 05:42:09 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 06 05:42:09 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 06 05:42:09 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 06 05:42:09 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 06 05:42:09 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 06 05:42:09 localhost kernel: x86/bugs: return thunk changed
Dec 06 05:42:09 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 06 05:42:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 06 05:42:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 06 05:42:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 06 05:42:09 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 06 05:42:09 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 06 05:42:09 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 06 05:42:09 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 06 05:42:09 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 06 05:42:09 localhost kernel: landlock: Up and running.
Dec 06 05:42:09 localhost kernel: Yama: becoming mindful.
Dec 06 05:42:09 localhost kernel: SELinux:  Initializing.
Dec 06 05:42:09 localhost kernel: LSM support for eBPF active
Dec 06 05:42:09 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 06 05:42:09 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 06 05:42:09 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 06 05:42:09 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 06 05:42:09 localhost kernel: ... version:                0
Dec 06 05:42:09 localhost kernel: ... bit width:              48
Dec 06 05:42:09 localhost kernel: ... generic registers:      6
Dec 06 05:42:09 localhost kernel: ... value mask:             0000ffffffffffff
Dec 06 05:42:09 localhost kernel: ... max period:             00007fffffffffff
Dec 06 05:42:09 localhost kernel: ... fixed-purpose events:   0
Dec 06 05:42:09 localhost kernel: ... event mask:             000000000000003f
Dec 06 05:42:09 localhost kernel: signal: max sigframe size: 1776
Dec 06 05:42:09 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 06 05:42:09 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 06 05:42:09 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 06 05:42:09 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 06 05:42:09 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 06 05:42:09 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 06 05:42:09 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 06 05:42:09 localhost kernel: node 0 deferred pages initialised in 9ms
Dec 06 05:42:09 localhost kernel: Memory: 7764144K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618204K reserved, 0K cma-reserved)
Dec 06 05:42:09 localhost kernel: devtmpfs: initialized
Dec 06 05:42:09 localhost kernel: x86/mm: Memory block size: 128MB
Dec 06 05:42:09 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 06 05:42:09 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 06 05:42:09 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 06 05:42:09 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 06 05:42:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 06 05:42:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 06 05:42:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 06 05:42:09 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 06 05:42:09 localhost kernel: audit: type=2000 audit(1764999727.109:1): state=initialized audit_enabled=0 res=1
Dec 06 05:42:09 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 06 05:42:09 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 06 05:42:09 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 06 05:42:09 localhost kernel: cpuidle: using governor menu
Dec 06 05:42:09 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 06 05:42:09 localhost kernel: PCI: Using configuration type 1 for base access
Dec 06 05:42:09 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 06 05:42:09 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 06 05:42:09 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 06 05:42:09 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 06 05:42:09 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 06 05:42:09 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 06 05:42:09 localhost kernel: Demotion targets for Node 0: null
Dec 06 05:42:09 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 06 05:42:09 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 06 05:42:09 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 06 05:42:09 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 06 05:42:09 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 06 05:42:09 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 06 05:42:09 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 06 05:42:09 localhost kernel: ACPI: Interpreter enabled
Dec 06 05:42:09 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 06 05:42:09 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 06 05:42:09 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 06 05:42:09 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 06 05:42:09 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 06 05:42:09 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 06 05:42:09 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [3] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [4] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [5] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [6] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [7] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [8] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [9] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [10] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [11] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [12] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [13] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [14] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [15] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [16] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [17] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [18] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [19] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [20] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [21] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [22] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [23] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [24] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [25] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [26] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [27] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [28] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [29] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [30] registered
Dec 06 05:42:09 localhost kernel: acpiphp: Slot [31] registered
Dec 06 05:42:09 localhost kernel: PCI host bridge to bus 0000:00
Dec 06 05:42:09 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 06 05:42:09 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 06 05:42:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 06 05:42:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 06 05:42:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 06 05:42:09 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 06 05:42:09 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 06 05:42:09 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 06 05:42:09 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 06 05:42:09 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 06 05:42:09 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 06 05:42:09 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 06 05:42:09 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 06 05:42:09 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 06 05:42:09 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 06 05:42:09 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 06 05:42:09 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 06 05:42:09 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 06 05:42:09 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 06 05:42:09 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 06 05:42:09 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 06 05:42:09 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 06 05:42:09 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 06 05:42:09 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 06 05:42:09 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 06 05:42:09 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 06 05:42:09 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 06 05:42:09 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 06 05:42:09 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 06 05:42:09 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 06 05:42:09 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 06 05:42:09 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 06 05:42:09 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 06 05:42:09 localhost kernel: iommu: Default domain type: Translated
Dec 06 05:42:09 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 06 05:42:09 localhost kernel: SCSI subsystem initialized
Dec 06 05:42:09 localhost kernel: ACPI: bus type USB registered
Dec 06 05:42:09 localhost kernel: usbcore: registered new interface driver usbfs
Dec 06 05:42:09 localhost kernel: usbcore: registered new interface driver hub
Dec 06 05:42:09 localhost kernel: usbcore: registered new device driver usb
Dec 06 05:42:09 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 06 05:42:09 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 06 05:42:09 localhost kernel: PTP clock support registered
Dec 06 05:42:09 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 06 05:42:09 localhost kernel: NetLabel: Initializing
Dec 06 05:42:09 localhost kernel: NetLabel:  domain hash size = 128
Dec 06 05:42:09 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 06 05:42:09 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 06 05:42:09 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 06 05:42:09 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 06 05:42:09 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 06 05:42:09 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 06 05:42:09 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 06 05:42:09 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 06 05:42:09 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 06 05:42:09 localhost kernel: vgaarb: loaded
Dec 06 05:42:09 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 06 05:42:09 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 06 05:42:09 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 06 05:42:09 localhost kernel: pnp: PnP ACPI init
Dec 06 05:42:09 localhost kernel: pnp 00:03: [dma 2]
Dec 06 05:42:09 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 06 05:42:09 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 06 05:42:09 localhost kernel: NET: Registered PF_INET protocol family
Dec 06 05:42:09 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 06 05:42:09 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 06 05:42:09 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 06 05:42:09 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 06 05:42:09 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 06 05:42:09 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 06 05:42:09 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 06 05:42:09 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 06 05:42:09 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 06 05:42:09 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 06 05:42:09 localhost kernel: NET: Registered PF_XDP protocol family
Dec 06 05:42:09 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 06 05:42:09 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 06 05:42:09 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 06 05:42:09 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 06 05:42:09 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 06 05:42:09 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 06 05:42:09 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 06 05:42:09 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73260 usecs
Dec 06 05:42:09 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 06 05:42:09 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 06 05:42:09 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 06 05:42:09 localhost kernel: ACPI: bus type thunderbolt registered
Dec 06 05:42:09 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 06 05:42:09 localhost kernel: Initialise system trusted keyrings
Dec 06 05:42:09 localhost kernel: Key type blacklist registered
Dec 06 05:42:09 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 06 05:42:09 localhost kernel: zbud: loaded
Dec 06 05:42:09 localhost kernel: integrity: Platform Keyring initialized
Dec 06 05:42:09 localhost kernel: integrity: Machine keyring initialized
Dec 06 05:42:09 localhost kernel: Freeing initrd memory: 87804K
Dec 06 05:42:09 localhost kernel: NET: Registered PF_ALG protocol family
Dec 06 05:42:09 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 06 05:42:09 localhost kernel: Key type asymmetric registered
Dec 06 05:42:09 localhost kernel: Asymmetric key parser 'x509' registered
Dec 06 05:42:09 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 06 05:42:09 localhost kernel: io scheduler mq-deadline registered
Dec 06 05:42:09 localhost kernel: io scheduler kyber registered
Dec 06 05:42:09 localhost kernel: io scheduler bfq registered
Dec 06 05:42:09 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 06 05:42:09 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 06 05:42:09 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 06 05:42:09 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 06 05:42:09 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 06 05:42:09 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 06 05:42:09 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 06 05:42:09 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 06 05:42:09 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 06 05:42:09 localhost kernel: Non-volatile memory driver v1.3
Dec 06 05:42:09 localhost kernel: rdac: device handler registered
Dec 06 05:42:09 localhost kernel: hp_sw: device handler registered
Dec 06 05:42:09 localhost kernel: emc: device handler registered
Dec 06 05:42:09 localhost kernel: alua: device handler registered
Dec 06 05:42:09 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 06 05:42:09 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 06 05:42:09 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 06 05:42:09 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 06 05:42:09 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 06 05:42:09 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 06 05:42:09 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 06 05:42:09 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec 06 05:42:09 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 06 05:42:09 localhost kernel: hub 1-0:1.0: USB hub found
Dec 06 05:42:09 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 06 05:42:09 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 06 05:42:09 localhost kernel: usbserial: USB Serial support registered for generic
Dec 06 05:42:09 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 06 05:42:09 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 06 05:42:09 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 06 05:42:09 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 06 05:42:09 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 06 05:42:09 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 06 05:42:09 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 06 05:42:09 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 06 05:42:09 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 06 05:42:09 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-06T05:42:08 UTC (1764999728)
Dec 06 05:42:09 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 06 05:42:09 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 06 05:42:09 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 06 05:42:09 localhost kernel: usbcore: registered new interface driver usbhid
Dec 06 05:42:09 localhost kernel: usbhid: USB HID core driver
Dec 06 05:42:09 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 06 05:42:09 localhost kernel: Initializing XFRM netlink socket
Dec 06 05:42:09 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 06 05:42:09 localhost kernel: Segment Routing with IPv6
Dec 06 05:42:09 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 06 05:42:09 localhost kernel: mpls_gso: MPLS GSO support
Dec 06 05:42:09 localhost kernel: IPI shorthand broadcast: enabled
Dec 06 05:42:09 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 06 05:42:09 localhost kernel: AES CTR mode by8 optimization enabled
Dec 06 05:42:09 localhost kernel: sched_clock: Marking stable (1183003344, 156290066)->(1455779190, -116485780)
Dec 06 05:42:09 localhost kernel: registered taskstats version 1
Dec 06 05:42:09 localhost kernel: Loading compiled-in X.509 certificates
Dec 06 05:42:09 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 06 05:42:09 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 06 05:42:09 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 06 05:42:09 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 06 05:42:09 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 06 05:42:09 localhost kernel: Demotion targets for Node 0: null
Dec 06 05:42:09 localhost kernel: page_owner is disabled
Dec 06 05:42:09 localhost kernel: Key type .fscrypt registered
Dec 06 05:42:09 localhost kernel: Key type fscrypt-provisioning registered
Dec 06 05:42:09 localhost kernel: Key type big_key registered
Dec 06 05:42:09 localhost kernel: Key type encrypted registered
Dec 06 05:42:09 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 06 05:42:09 localhost kernel: Loading compiled-in module X.509 certificates
Dec 06 05:42:09 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec 06 05:42:09 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 06 05:42:09 localhost kernel: ima: No architecture policies found
Dec 06 05:42:09 localhost kernel: evm: Initialising EVM extended attributes:
Dec 06 05:42:09 localhost kernel: evm: security.selinux
Dec 06 05:42:09 localhost kernel: evm: security.SMACK64 (disabled)
Dec 06 05:42:09 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 06 05:42:09 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 06 05:42:09 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 06 05:42:09 localhost kernel: evm: security.apparmor (disabled)
Dec 06 05:42:09 localhost kernel: evm: security.ima
Dec 06 05:42:09 localhost kernel: evm: security.capability
Dec 06 05:42:09 localhost kernel: evm: HMAC attrs: 0x1
Dec 06 05:42:09 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 06 05:42:09 localhost kernel: Running certificate verification RSA selftest
Dec 06 05:42:09 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 06 05:42:09 localhost kernel: Running certificate verification ECDSA selftest
Dec 06 05:42:09 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 06 05:42:09 localhost kernel: clk: Disabling unused clocks
Dec 06 05:42:09 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 06 05:42:09 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec 06 05:42:09 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 06 05:42:09 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec 06 05:42:09 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 06 05:42:09 localhost kernel: Run /init as init process
Dec 06 05:42:09 localhost kernel:   with arguments:
Dec 06 05:42:09 localhost kernel:     /init
Dec 06 05:42:09 localhost kernel:   with environment:
Dec 06 05:42:09 localhost kernel:     HOME=/
Dec 06 05:42:09 localhost kernel:     TERM=linux
Dec 06 05:42:09 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64
Dec 06 05:42:09 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 06 05:42:09 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 06 05:42:09 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 06 05:42:09 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 06 05:42:09 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 06 05:42:09 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 06 05:42:09 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 06 05:42:09 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 06 05:42:09 localhost systemd[1]: Detected virtualization kvm.
Dec 06 05:42:09 localhost systemd[1]: Detected architecture x86-64.
Dec 06 05:42:09 localhost systemd[1]: Running in initrd.
Dec 06 05:42:09 localhost systemd[1]: No hostname configured, using default hostname.
Dec 06 05:42:09 localhost systemd[1]: Hostname set to <localhost>.
Dec 06 05:42:09 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 06 05:42:09 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 06 05:42:09 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 06 05:42:09 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 06 05:42:09 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 06 05:42:09 localhost systemd[1]: Reached target Local File Systems.
Dec 06 05:42:09 localhost systemd[1]: Reached target Path Units.
Dec 06 05:42:09 localhost systemd[1]: Reached target Slice Units.
Dec 06 05:42:09 localhost systemd[1]: Reached target Swaps.
Dec 06 05:42:09 localhost systemd[1]: Reached target Timer Units.
Dec 06 05:42:09 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 06 05:42:09 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 06 05:42:09 localhost systemd[1]: Listening on Journal Socket.
Dec 06 05:42:09 localhost systemd[1]: Listening on udev Control Socket.
Dec 06 05:42:09 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 06 05:42:09 localhost systemd[1]: Reached target Socket Units.
Dec 06 05:42:09 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 06 05:42:09 localhost systemd[1]: Starting Journal Service...
Dec 06 05:42:09 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 06 05:42:09 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 06 05:42:09 localhost systemd[1]: Starting Create System Users...
Dec 06 05:42:09 localhost systemd[1]: Starting Setup Virtual Console...
Dec 06 05:42:09 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 06 05:42:09 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 06 05:42:09 localhost systemd[1]: Finished Create System Users.
Dec 06 05:42:09 localhost systemd-journald[306]: Journal started
Dec 06 05:42:09 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/dc45738e2bb04417914ca006d79f6275) is 8.0M, max 153.6M, 145.6M free.
Dec 06 05:42:09 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec 06 05:42:09 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec 06 05:42:09 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 06 05:42:09 localhost systemd[1]: Started Journal Service.
Dec 06 05:42:09 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 06 05:42:09 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 06 05:42:09 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 06 05:42:09 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 06 05:42:09 localhost systemd[1]: Finished Setup Virtual Console.
Dec 06 05:42:09 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 06 05:42:09 localhost systemd[1]: Starting dracut cmdline hook...
Dec 06 05:42:09 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Dec 06 05:42:09 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 06 05:42:09 localhost systemd[1]: Finished dracut cmdline hook.
Dec 06 05:42:09 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 06 05:42:09 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 06 05:42:09 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 06 05:42:09 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 06 05:42:09 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 06 05:42:09 localhost kernel: RPC: Registered udp transport module.
Dec 06 05:42:09 localhost kernel: RPC: Registered tcp transport module.
Dec 06 05:42:09 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 06 05:42:09 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 06 05:42:09 localhost rpc.statd[442]: Version 2.5.4 starting
Dec 06 05:42:09 localhost rpc.statd[442]: Initializing NSM state
Dec 06 05:42:10 localhost rpc.idmapd[447]: Setting log level to 0
Dec 06 05:42:10 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 06 05:42:10 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 06 05:42:10 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Dec 06 05:42:10 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 06 05:42:10 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 06 05:42:10 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 06 05:42:10 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 06 05:42:10 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 06 05:42:10 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 06 05:42:10 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 06 05:42:10 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 06 05:42:10 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 06 05:42:10 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 06 05:42:10 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 06 05:42:10 localhost systemd[1]: Reached target Network.
Dec 06 05:42:10 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 06 05:42:10 localhost systemd[1]: Starting dracut initqueue hook...
Dec 06 05:42:10 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 06 05:42:10 localhost systemd[1]: Reached target System Initialization.
Dec 06 05:42:10 localhost systemd[1]: Reached target Basic System.
Dec 06 05:42:10 localhost kernel: libata version 3.00 loaded.
Dec 06 05:42:10 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 06 05:42:10 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 06 05:42:10 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 06 05:42:10 localhost kernel: scsi host0: ata_piix
Dec 06 05:42:10 localhost kernel: scsi host1: ata_piix
Dec 06 05:42:10 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 06 05:42:10 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 06 05:42:10 localhost kernel:  vda: vda1
Dec 06 05:42:10 localhost kernel: ata1: found unknown device (class 0)
Dec 06 05:42:10 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 06 05:42:10 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 06 05:42:10 localhost systemd-udevd[479]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 05:42:10 localhost systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 06 05:42:10 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 06 05:42:10 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 06 05:42:10 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 06 05:42:10 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 06 05:42:10 localhost systemd[1]: Reached target Initrd Root Device.
Dec 06 05:42:10 localhost systemd[1]: Finished dracut initqueue hook.
Dec 06 05:42:10 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 06 05:42:10 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 06 05:42:10 localhost systemd[1]: Reached target Remote File Systems.
Dec 06 05:42:10 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 06 05:42:10 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 06 05:42:10 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec 06 05:42:10 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Dec 06 05:42:10 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 06 05:42:10 localhost systemd[1]: Mounting /sysroot...
Dec 06 05:42:11 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 06 05:42:11 localhost kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec 06 05:42:11 localhost kernel: XFS (vda1): Ending clean mount
Dec 06 05:42:11 localhost systemd[1]: Mounted /sysroot.
Dec 06 05:42:11 localhost systemd[1]: Reached target Initrd Root File System.
Dec 06 05:42:11 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 06 05:42:11 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 06 05:42:11 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 06 05:42:11 localhost systemd[1]: Reached target Initrd File Systems.
Dec 06 05:42:11 localhost systemd[1]: Reached target Initrd Default Target.
Dec 06 05:42:11 localhost systemd[1]: Starting dracut mount hook...
Dec 06 05:42:11 localhost systemd[1]: Finished dracut mount hook.
Dec 06 05:42:11 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 06 05:42:11 localhost rpc.idmapd[447]: exiting on signal 15
Dec 06 05:42:11 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 06 05:42:11 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 06 05:42:12 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 06 05:42:12 localhost systemd[1]: Stopped target Network.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Timer Units.
Dec 06 05:42:12 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 06 05:42:12 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Basic System.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Path Units.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Remote File Systems.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Slice Units.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Socket Units.
Dec 06 05:42:12 localhost systemd[1]: Stopped target System Initialization.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Local File Systems.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Swaps.
Dec 06 05:42:12 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped dracut mount hook.
Dec 06 05:42:12 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 06 05:42:12 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 06 05:42:12 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 06 05:42:12 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 06 05:42:12 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 06 05:42:12 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 06 05:42:12 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 06 05:42:12 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 06 05:42:12 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 06 05:42:12 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 06 05:42:12 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 06 05:42:12 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 06 05:42:12 localhost systemd[1]: systemd-udevd.service: Consumed 1.202s CPU time.
Dec 06 05:42:12 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Closed udev Control Socket.
Dec 06 05:42:12 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Closed udev Kernel Socket.
Dec 06 05:42:12 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 06 05:42:12 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 06 05:42:12 localhost systemd[1]: Starting Cleanup udev Database...
Dec 06 05:42:12 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 06 05:42:12 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 06 05:42:12 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Stopped Create System Users.
Dec 06 05:42:12 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 06 05:42:12 localhost systemd[1]: Finished Cleanup udev Database.
Dec 06 05:42:12 localhost systemd[1]: Reached target Switch Root.
Dec 06 05:42:12 localhost systemd[1]: Starting Switch Root...
Dec 06 05:42:12 localhost systemd[1]: Switching root.
Dec 06 05:42:12 localhost systemd-journald[306]: Journal stopped
Dec 06 06:23:59 compute-0 python3.9[71148]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:23:59 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:23:59 compute-0 sshd-session[70828]: Invalid user admin from 45.135.232.92 port 44202
Dec 06 06:23:59 compute-0 python3.9[71299]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:23:59 compute-0 sshd-session[70828]: Connection reset by invalid user admin 45.135.232.92 port 44202 [preauth]
Dec 06 06:24:00 compute-0 sshd-session[70300]: Connection closed by 192.168.122.30 port 33822
Dec 06 06:24:00 compute-0 sshd-session[70297]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:24:00 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 06 06:24:00 compute-0 systemd[1]: session-17.scope: Consumed 5.971s CPU time.
Dec 06 06:24:00 compute-0 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Dec 06 06:24:00 compute-0 systemd-logind[798]: Removed session 17.
Dec 06 06:24:01 compute-0 sshd-session[71324]: Connection reset by authenticating user root 45.135.232.92 port 44224 [preauth]
Dec 06 06:24:03 compute-0 sshd-session[71326]: Invalid user user from 45.135.232.92 port 44248
Dec 06 06:24:03 compute-0 sshd-session[71326]: Connection reset by invalid user user 45.135.232.92 port 44248 [preauth]
Dec 06 06:24:09 compute-0 sshd-session[71328]: Accepted publickey for zuul from 38.102.83.248 port 52316 ssh2: RSA SHA256:aHyVcRaDK3hfZLzaCXAUf9WeLucbkCfDdQjLc4/bZwE
Dec 06 06:24:09 compute-0 systemd-logind[798]: New session 18 of user zuul.
Dec 06 06:24:09 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 06 06:24:09 compute-0 sshd-session[71328]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:24:09 compute-0 sudo[71404]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giadawwowtiyesklikrnwohsdhwchgpp ; /usr/bin/python3'
Dec 06 06:24:09 compute-0 sudo[71404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:11 compute-0 useradd[71408]: new group: name=ceph-admin, GID=42478
Dec 06 06:24:11 compute-0 useradd[71408]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 06 06:24:12 compute-0 sudo[71404]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:13 compute-0 sudo[71490]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waeigkqjcvbueiibkvjnffnchvsrunvm ; /usr/bin/python3'
Dec 06 06:24:13 compute-0 sudo[71490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:13 compute-0 sudo[71490]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:13 compute-0 sudo[71563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuaxqlzfhkdbbijkpblwtghqmnsddorg ; /usr/bin/python3'
Dec 06 06:24:13 compute-0 sudo[71563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:13 compute-0 sudo[71563]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:14 compute-0 sudo[71613]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlzzbxegaetbvjssdqvtlaihmkrqohag ; /usr/bin/python3'
Dec 06 06:24:14 compute-0 sudo[71613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:14 compute-0 sudo[71613]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:14 compute-0 sudo[71639]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlaapbijqkdjjotggduxkaphbjdswozt ; /usr/bin/python3'
Dec 06 06:24:14 compute-0 sudo[71639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:14 compute-0 sudo[71639]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:14 compute-0 sudo[71665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvvtidsskgnfmslcdrhztyxonxcdyygf ; /usr/bin/python3'
Dec 06 06:24:14 compute-0 sudo[71665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:14 compute-0 sudo[71665]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:15 compute-0 sudo[71691]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmrfamxmmqzhktwqjvvbdhyegjewhjml ; /usr/bin/python3'
Dec 06 06:24:15 compute-0 sudo[71691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:15 compute-0 sudo[71691]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:15 compute-0 sudo[71769]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otrrfcjzikarprszgmcipktpnvcwfgmg ; /usr/bin/python3'
Dec 06 06:24:15 compute-0 sudo[71769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:16 compute-0 sudo[71769]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:16 compute-0 sudo[71842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwasyquxtxfbprecvzgwzqyavwxsfqqe ; /usr/bin/python3'
Dec 06 06:24:16 compute-0 sudo[71842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:16 compute-0 sudo[71842]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:16 compute-0 sudo[71944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmwyfwxjjtopvczxvurhdrtzcayqlgmt ; /usr/bin/python3'
Dec 06 06:24:16 compute-0 sudo[71944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:16 compute-0 sudo[71944]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:17 compute-0 sudo[72017]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooyqutsumrndoaiqjnfnsysdjfbvxiyh ; /usr/bin/python3'
Dec 06 06:24:17 compute-0 sudo[72017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:17 compute-0 sudo[72017]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:17 compute-0 sudo[72067]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnubywqlmlxugozyzhrvovegwqxmmbpu ; /usr/bin/python3'
Dec 06 06:24:17 compute-0 sudo[72067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:18 compute-0 python3[72069]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:24:18 compute-0 chronyd[58533]: Selected source 23.128.92.19 (pool.ntp.org)
Dec 06 06:24:19 compute-0 sudo[72067]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:19 compute-0 sudo[72162]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejbuljtpnblshqjnpohqdezoegsuyqsz ; /usr/bin/python3'
Dec 06 06:24:19 compute-0 sudo[72162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:19 compute-0 python3[72164]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 06:24:21 compute-0 sudo[72162]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:21 compute-0 sudo[72189]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbkendoacfcmnjrzwkngumvewccqjlkz ; /usr/bin/python3'
Dec 06 06:24:21 compute-0 sudo[72189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:21 compute-0 python3[72191]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 06:24:21 compute-0 sudo[72189]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:22 compute-0 sudo[72215]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyxnnrxuxwbjvqczbobetvdmtlxmuwub ; /usr/bin/python3'
Dec 06 06:24:22 compute-0 sudo[72215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:22 compute-0 python3[72217]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:24:22 compute-0 kernel: loop: module loaded
Dec 06 06:24:22 compute-0 kernel: loop3: detected capacity change from 0 to 14680064
Dec 06 06:24:22 compute-0 sudo[72215]: pam_unix(sudo:session): session closed for user root
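[Annotation] The entry above (pid 72217) backs a Ceph OSD with a file-based loop device. The step can be reproduced as below; the `/tmp` path is a substitution for the logged `/var/lib/ceph-osd-0.img`, and the `losetup` call is left commented because it needs root and the log pins a specific device:

```shell
# Create a 7 GiB sparse file: count=0 with seek=7G writes no data blocks,
# it only sets the apparent file size (same trick as the logged dd call).
dd if=/dev/zero of=/tmp/ceph-osd-0.img bs=1 count=0 seek=7G

# Attach it to a loop device. The playbook hardcodes /dev/loop3; outside a
# controlled CI node, --find picks a free device instead (requires root):
# sudo losetup --find --show /tmp/ceph-osd-0.img

# Apparent size in bytes: 7 GiB = 7516192768 = 14680064 512-byte sectors,
# matching the kernel's "detected capacity change from 0 to 14680064" line.
stat -c %s /tmp/ceph-osd-0.img
```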
Dec 06 06:24:22 compute-0 sudo[72250]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohpwuvlwpfqhjqosxvnlmkoujtrxwcty ; /usr/bin/python3'
Dec 06 06:24:22 compute-0 sudo[72250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:22 compute-0 python3[72252]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:24:24 compute-0 lvm[72255]: PV /dev/loop3 not used.
Dec 06 06:24:25 compute-0 lvm[72257]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 06:24:25 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 06 06:24:25 compute-0 lvm[72259]:   0 logical volume(s) in volume group "ceph_vg0" now active
Dec 06 06:24:25 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 06 06:24:25 compute-0 lvm[72262]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 06:24:25 compute-0 lvm[72262]: VG ceph_vg0 finished
Dec 06 06:24:25 compute-0 lvm[72271]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 06:24:25 compute-0 lvm[72271]: VG ceph_vg0 finished
Dec 06 06:24:25 compute-0 sudo[72250]: pam_unix(sudo:session): session closed for user root
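[Annotation] The entry for pid 72252 builds the LVM stack on the loop device. The commands below are copied from the log; they are destructive and need root, so they stay commented here, and the sketch only derives the two equivalent device paths the resulting LV gets (the `/dev/mapper` name joins VG and LV with a hyphen):

```shell
# Destructive sequence from the log (root + a scratch loop device only):
#   pvcreate /dev/loop3                        # label the device as an LVM PV
#   vgcreate ceph_vg0 /dev/loop3               # one-PV volume group
#   lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0 # single LV spanning the VG
#   lvs                                        # confirm the LV exists
VG=ceph_vg0
LV=ceph_lv0
LV_PATH="/dev/${VG}/${LV}"           # canonical per-VG path
DM_PATH="/dev/mapper/${VG}-${LV}"    # device-mapper alias
echo "$LV_PATH"
echo "$DM_PATH"
```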
Dec 06 06:24:26 compute-0 sudo[72347]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wybpflvevzixvexbvskmxlklxhwrrrtd ; /usr/bin/python3'
Dec 06 06:24:26 compute-0 sudo[72347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:26 compute-0 python3[72349]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:24:26 compute-0 sudo[72347]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:26 compute-0 sudo[72420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vculzsgqxrlayraxjewoluflyzoprlpv ; /usr/bin/python3'
Dec 06 06:24:26 compute-0 sudo[72420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:26 compute-0 python3[72422]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765002266.2032428-36993-156689590977825/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:24:26 compute-0 sudo[72420]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:27 compute-0 sudo[72470]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raflgdxtdixpfduyfvapdwncptwdlbmp ; /usr/bin/python3'
Dec 06 06:24:27 compute-0 sudo[72470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:27 compute-0 python3[72472]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:24:27 compute-0 systemd[1]: Reloading.
Dec 06 06:24:27 compute-0 systemd-rc-local-generator[72501]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:24:27 compute-0 systemd-sysv-generator[72505]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:24:27 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 06 06:24:27 compute-0 bash[72512]: /dev/loop3: [64513]:4327955 (/var/lib/ceph-osd-0.img)
Dec 06 06:24:28 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 06 06:24:28 compute-0 lvm[72513]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 06:24:28 compute-0 lvm[72513]: VG ceph_vg0 finished
Dec 06 06:24:28 compute-0 sudo[72470]: pam_unix(sudo:session): session closed for user root
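[Annotation] The unit started above is rendered from `ceph-osd-losetup.service.j2`; its contents are not in the log (`content=NOT_LOGGING_PARAMETER`), so the fragment below is a hypothetical reconstruction only. It is consistent with the observable behavior: a oneshot "Ceph OSD losetup" unit whose bash step prints the `losetup /dev/loop3` listing seen at 06:24:27 (`/dev/loop3: [64513]:4327955 (/var/lib/ceph-osd-0.img)`).

```ini
# HYPOTHETICAL sketch of /etc/systemd/system/ceph-osd-losetup-0.service --
# the real template's contents were not logged.
[Unit]
Description=Ceph OSD losetup
DefaultDependencies=no
After=systemd-udev-settle.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Re-attach the backing file if needed; `losetup /dev/loop3` alone lists an
# existing attachment, which matches the bash output in the journal.
ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 || /sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'
ExecStop=/sbin/losetup -d /dev/loop3

[Install]
WantedBy=multi-user.target
```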
Dec 06 06:24:30 compute-0 python3[72537]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:24:33 compute-0 sudo[72628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqzvxyjgsdyqdyaranvwwpxekixntbho ; /usr/bin/python3'
Dec 06 06:24:33 compute-0 sudo[72628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:34 compute-0 python3[72630]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 06:24:35 compute-0 groupadd[72636]: group added to /etc/group: name=cephadm, GID=992
Dec 06 06:24:35 compute-0 groupadd[72636]: group added to /etc/gshadow: name=cephadm
Dec 06 06:24:35 compute-0 groupadd[72636]: new group: name=cephadm, GID=992
Dec 06 06:24:35 compute-0 useradd[72643]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 06 06:24:35 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 06:24:35 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 06 06:24:36 compute-0 sudo[72628]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:36 compute-0 sudo[72738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klggtubnidxlxxxhudpkmtdbjkbbszmy ; /usr/bin/python3'
Dec 06 06:24:36 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 06:24:36 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 06 06:24:36 compute-0 systemd[1]: run-reb8ca0c9645748899188583e8c355aa5.service: Deactivated successfully.
Dec 06 06:24:36 compute-0 sudo[72738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:36 compute-0 python3[72741]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 06:24:36 compute-0 sudo[72738]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:36 compute-0 sudo[72767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jntxydkwthblmcrdrdcdlggjvpkmnkpt ; /usr/bin/python3'
Dec 06 06:24:36 compute-0 sudo[72767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:37 compute-0 python3[72769]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:24:37 compute-0 sudo[72767]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:37 compute-0 sudo[72832]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifhggwcymbfzzwyoevffsajtsocpfayp ; /usr/bin/python3'
Dec 06 06:24:37 compute-0 sudo[72832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:38 compute-0 python3[72834]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:24:38 compute-0 sudo[72832]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:38 compute-0 sudo[72858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brrmwnoeayqtojtnjhcgasfndadlgowm ; /usr/bin/python3'
Dec 06 06:24:38 compute-0 sudo[72858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:38 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:24:38 compute-0 python3[72860]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:24:38 compute-0 sudo[72858]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:38 compute-0 sudo[72936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjwqyxvmxmwsjujwyfuejjjyhfagybcd ; /usr/bin/python3'
Dec 06 06:24:38 compute-0 sudo[72936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:39 compute-0 python3[72938]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:24:39 compute-0 sudo[72936]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:39 compute-0 sudo[73009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luoldcmezsscwmdgsdplehrdjalvcynq ; /usr/bin/python3'
Dec 06 06:24:39 compute-0 sudo[73009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:39 compute-0 python3[73011]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765002278.7918136-37184-65775607859846/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:24:39 compute-0 sudo[73009]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:40 compute-0 sudo[73111]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlmhpiurhgaaqbteierctdtlnlwmogeo ; /usr/bin/python3'
Dec 06 06:24:40 compute-0 sudo[73111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:40 compute-0 python3[73113]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:24:40 compute-0 sudo[73111]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:40 compute-0 sudo[73184]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahtnpvwzyhfesqwwlkupbusnxuyrwsmi ; /usr/bin/python3'
Dec 06 06:24:40 compute-0 sudo[73184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:40 compute-0 python3[73186]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765002279.9679842-37202-40658018362919/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:24:40 compute-0 sudo[73184]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:41 compute-0 sudo[73234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoplbezcmoxcuawirrpyimasmgherria ; /usr/bin/python3'
Dec 06 06:24:41 compute-0 sudo[73234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:41 compute-0 python3[73236]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 06:24:41 compute-0 sudo[73234]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:41 compute-0 sudo[73262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awsijwppclyyqhggacficlwxvgetkywt ; /usr/bin/python3'
Dec 06 06:24:41 compute-0 sudo[73262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:41 compute-0 python3[73264]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 06:24:41 compute-0 sudo[73262]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:41 compute-0 sudo[73290]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cebkaiaipqdttipxlnhxcmlzsamzkwqs ; /usr/bin/python3'
Dec 06 06:24:41 compute-0 sudo[73290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:41 compute-0 python3[73292]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 06:24:41 compute-0 sudo[73290]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:42 compute-0 sudo[73318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygotjrueszdaxanolgcivkucgtgbgldl ; /usr/bin/python3'
Dec 06 06:24:42 compute-0 sudo[73318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:24:42 compute-0 python3[73320]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
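[Annotation] The bootstrap invocation for pid 73320 is hard to read as one journal line. The dry-run below reprints it one flag per line; every flag and value is taken verbatim from the log (the stray `\` before `--skip-monitoring-stack` is line-continuation residue in the raw params and is dropped). Remove the `echo` to actually run it, as root, on a disposable host only:

```shell
# Dry-run: print the logged cephadm bootstrap command in readable form.
bootstrap_cmd() {
  echo /usr/sbin/cephadm bootstrap \
    --skip-firewalld \
    --skip-prepare-host \
    --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
    --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
    --ssh-user ceph-admin \
    --allow-fqdn-hostname \
    --output-keyring /etc/ceph/ceph.client.admin.keyring \
    --output-config /etc/ceph/ceph.conf \
    --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb \
    --config /home/ceph-admin/assimilate_ceph.conf \
    --skip-monitoring-stack \
    --skip-dashboard \
    --mon-ip 192.168.122.100
}
bootstrap_cmd
```

The `--ssh-user`/key flags explain the immediate SSH login by `ceph-admin` from 192.168.122.100 that follows in the journal: bootstrap verifies it can reach the host over its own SSH orchestration path before deploying daemons.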
Dec 06 06:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:24:42 compute-0 sshd-session[73335]: Accepted publickey for ceph-admin from 192.168.122.100 port 56974 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:24:42 compute-0 systemd-logind[798]: New session 19 of user ceph-admin.
Dec 06 06:24:42 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 06 06:24:42 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 06 06:24:42 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 06 06:24:42 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 06 06:24:42 compute-0 systemd[73339]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:24:42 compute-0 systemd[73339]: Queued start job for default target Main User Target.
Dec 06 06:24:42 compute-0 systemd[73339]: Created slice User Application Slice.
Dec 06 06:24:42 compute-0 systemd[73339]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 06:24:42 compute-0 systemd[73339]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 06:24:42 compute-0 systemd[73339]: Reached target Paths.
Dec 06 06:24:42 compute-0 systemd[73339]: Reached target Timers.
Dec 06 06:24:42 compute-0 systemd[73339]: Starting D-Bus User Message Bus Socket...
Dec 06 06:24:42 compute-0 systemd[73339]: Starting Create User's Volatile Files and Directories...
Dec 06 06:24:42 compute-0 systemd[73339]: Finished Create User's Volatile Files and Directories.
Dec 06 06:24:42 compute-0 systemd[73339]: Listening on D-Bus User Message Bus Socket.
Dec 06 06:24:42 compute-0 systemd[73339]: Reached target Sockets.
Dec 06 06:24:42 compute-0 systemd[73339]: Reached target Basic System.
Dec 06 06:24:42 compute-0 systemd[73339]: Reached target Main User Target.
Dec 06 06:24:42 compute-0 systemd[73339]: Startup finished in 127ms.
Dec 06 06:24:42 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 06 06:24:42 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Dec 06 06:24:42 compute-0 sshd-session[73335]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:24:42 compute-0 sudo[73355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 06 06:24:42 compute-0 sudo[73355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:24:42 compute-0 sudo[73355]: pam_unix(sudo:session): session closed for user root
Dec 06 06:24:42 compute-0 sshd-session[73354]: Received disconnect from 192.168.122.100 port 56974:11: disconnected by user
Dec 06 06:24:42 compute-0 sshd-session[73354]: Disconnected from user ceph-admin 192.168.122.100 port 56974
Dec 06 06:24:42 compute-0 sshd-session[73335]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 06:24:42 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 06 06:24:42 compute-0 systemd-logind[798]: Session 19 logged out. Waiting for processes to exit.
Dec 06 06:24:42 compute-0 systemd-logind[798]: Removed session 19.
Dec 06 06:24:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1843795726-lower\x2dmapped.mount: Deactivated successfully.
Dec 06 06:24:52 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 06 06:24:52 compute-0 systemd[73339]: Activating special unit Exit the Session...
Dec 06 06:24:52 compute-0 systemd[73339]: Stopped target Main User Target.
Dec 06 06:24:52 compute-0 systemd[73339]: Stopped target Basic System.
Dec 06 06:24:52 compute-0 systemd[73339]: Stopped target Paths.
Dec 06 06:24:52 compute-0 systemd[73339]: Stopped target Sockets.
Dec 06 06:24:52 compute-0 systemd[73339]: Stopped target Timers.
Dec 06 06:24:52 compute-0 systemd[73339]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 06:24:52 compute-0 systemd[73339]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 06:24:52 compute-0 systemd[73339]: Closed D-Bus User Message Bus Socket.
Dec 06 06:24:52 compute-0 systemd[73339]: Stopped Create User's Volatile Files and Directories.
Dec 06 06:24:52 compute-0 systemd[73339]: Removed slice User Application Slice.
Dec 06 06:24:52 compute-0 systemd[73339]: Reached target Shutdown.
Dec 06 06:24:52 compute-0 systemd[73339]: Finished Exit the Session.
Dec 06 06:24:52 compute-0 systemd[73339]: Reached target Exit the Session.
Dec 06 06:24:52 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 06 06:24:52 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 06 06:24:52 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 06 06:24:52 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 06 06:24:52 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 06 06:24:52 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 06 06:24:52 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 06 06:25:14 compute-0 podman[73393]: 2025-12-06 06:25:14.408404493 +0000 UTC m=+31.600629892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:25:14 compute-0 podman[73454]: 2025-12-06 06:25:14.479831758 +0000 UTC m=+0.044050354 container create 3797e5b7bc2a44cd14f3826e2ca87bdc9d092fa40567588680d0bf66cbd5f2ff (image=quay.io/ceph/ceph:v18, name=dazzling_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2995841053-merged.mount: Deactivated successfully.
Dec 06 06:25:14 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 06 06:25:14 compute-0 systemd[1]: Started libpod-conmon-3797e5b7bc2a44cd14f3826e2ca87bdc9d092fa40567588680d0bf66cbd5f2ff.scope.
Dec 06 06:25:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:14 compute-0 podman[73454]: 2025-12-06 06:25:14.458948742 +0000 UTC m=+0.023167348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:14 compute-0 podman[73454]: 2025-12-06 06:25:14.589405277 +0000 UTC m=+0.153623883 container init 3797e5b7bc2a44cd14f3826e2ca87bdc9d092fa40567588680d0bf66cbd5f2ff (image=quay.io/ceph/ceph:v18, name=dazzling_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 06:25:14 compute-0 podman[73454]: 2025-12-06 06:25:14.599021067 +0000 UTC m=+0.163239653 container start 3797e5b7bc2a44cd14f3826e2ca87bdc9d092fa40567588680d0bf66cbd5f2ff (image=quay.io/ceph/ceph:v18, name=dazzling_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 06:25:14 compute-0 podman[73454]: 2025-12-06 06:25:14.602883342 +0000 UTC m=+0.167101928 container attach 3797e5b7bc2a44cd14f3826e2ca87bdc9d092fa40567588680d0bf66cbd5f2ff (image=quay.io/ceph/ceph:v18, name=dazzling_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 06:25:14 compute-0 dazzling_brahmagupta[73471]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec 06 06:25:14 compute-0 systemd[1]: libpod-3797e5b7bc2a44cd14f3826e2ca87bdc9d092fa40567588680d0bf66cbd5f2ff.scope: Deactivated successfully.
Dec 06 06:25:14 compute-0 podman[73454]: 2025-12-06 06:25:14.952429721 +0000 UTC m=+0.516648307 container died 3797e5b7bc2a44cd14f3826e2ca87bdc9d092fa40567588680d0bf66cbd5f2ff (image=quay.io/ceph/ceph:v18, name=dazzling_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 06:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-37017bb8551b20251bdae9b09c4b0b49c6a45b3f856b8b048715567328bcf6ec-merged.mount: Deactivated successfully.
Dec 06 06:25:15 compute-0 podman[73454]: 2025-12-06 06:25:15.00295306 +0000 UTC m=+0.567171646 container remove 3797e5b7bc2a44cd14f3826e2ca87bdc9d092fa40567588680d0bf66cbd5f2ff (image=quay.io/ceph/ceph:v18, name=dazzling_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 06:25:15 compute-0 systemd[1]: libpod-conmon-3797e5b7bc2a44cd14f3826e2ca87bdc9d092fa40567588680d0bf66cbd5f2ff.scope: Deactivated successfully.
Dec 06 06:25:15 compute-0 podman[73490]: 2025-12-06 06:25:15.071780815 +0000 UTC m=+0.043836199 container create 9fe198c3d091449bf0f578cab17b2f4798ee024e7141e0ffc5f680f3b07dd5e2 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 06:25:15 compute-0 systemd[1]: Started libpod-conmon-9fe198c3d091449bf0f578cab17b2f4798ee024e7141e0ffc5f680f3b07dd5e2.scope.
Dec 06 06:25:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:15 compute-0 podman[73490]: 2025-12-06 06:25:15.05280943 +0000 UTC m=+0.024864834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:15 compute-0 podman[73490]: 2025-12-06 06:25:15.151046132 +0000 UTC m=+0.123101546 container init 9fe198c3d091449bf0f578cab17b2f4798ee024e7141e0ffc5f680f3b07dd5e2 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:25:15 compute-0 podman[73490]: 2025-12-06 06:25:15.156286474 +0000 UTC m=+0.128341848 container start 9fe198c3d091449bf0f578cab17b2f4798ee024e7141e0ffc5f680f3b07dd5e2 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 06:25:15 compute-0 podman[73490]: 2025-12-06 06:25:15.160415096 +0000 UTC m=+0.132470480 container attach 9fe198c3d091449bf0f578cab17b2f4798ee024e7141e0ffc5f680f3b07dd5e2 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 06:25:15 compute-0 optimistic_merkle[73506]: 167 167
Dec 06 06:25:15 compute-0 systemd[1]: libpod-9fe198c3d091449bf0f578cab17b2f4798ee024e7141e0ffc5f680f3b07dd5e2.scope: Deactivated successfully.
Dec 06 06:25:15 compute-0 podman[73490]: 2025-12-06 06:25:15.16242265 +0000 UTC m=+0.134478034 container died 9fe198c3d091449bf0f578cab17b2f4798ee024e7141e0ffc5f680f3b07dd5e2 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 06:25:15 compute-0 podman[73490]: 2025-12-06 06:25:15.197834 +0000 UTC m=+0.169889384 container remove 9fe198c3d091449bf0f578cab17b2f4798ee024e7141e0ffc5f680f3b07dd5e2 (image=quay.io/ceph/ceph:v18, name=optimistic_merkle, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:15 compute-0 systemd[1]: libpod-conmon-9fe198c3d091449bf0f578cab17b2f4798ee024e7141e0ffc5f680f3b07dd5e2.scope: Deactivated successfully.
Dec 06 06:25:15 compute-0 podman[73523]: 2025-12-06 06:25:15.258665448 +0000 UTC m=+0.041039533 container create 60a505e52837462c29edf178b77e9c6d64797bf0b9e75f7c0ededb5610b5bc31 (image=quay.io/ceph/ceph:v18, name=condescending_lamport, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:15 compute-0 systemd[1]: Started libpod-conmon-60a505e52837462c29edf178b77e9c6d64797bf0b9e75f7c0ededb5610b5bc31.scope.
Dec 06 06:25:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:15 compute-0 podman[73523]: 2025-12-06 06:25:15.318692484 +0000 UTC m=+0.101066579 container init 60a505e52837462c29edf178b77e9c6d64797bf0b9e75f7c0ededb5610b5bc31 (image=quay.io/ceph/ceph:v18, name=condescending_lamport, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 06:25:15 compute-0 podman[73523]: 2025-12-06 06:25:15.324305666 +0000 UTC m=+0.106679751 container start 60a505e52837462c29edf178b77e9c6d64797bf0b9e75f7c0ededb5610b5bc31 (image=quay.io/ceph/ceph:v18, name=condescending_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 06:25:15 compute-0 podman[73523]: 2025-12-06 06:25:15.327751099 +0000 UTC m=+0.110125454 container attach 60a505e52837462c29edf178b77e9c6d64797bf0b9e75f7c0ededb5610b5bc31 (image=quay.io/ceph/ceph:v18, name=condescending_lamport, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:25:15 compute-0 podman[73523]: 2025-12-06 06:25:15.241665197 +0000 UTC m=+0.024039302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:15 compute-0 condescending_lamport[73540]: AQBLzDNpuvyiFBAAmRU1Kuuqb0SE4BomOKpXHw==
Dec 06 06:25:15 compute-0 systemd[1]: libpod-60a505e52837462c29edf178b77e9c6d64797bf0b9e75f7c0ededb5610b5bc31.scope: Deactivated successfully.
Dec 06 06:25:15 compute-0 podman[73523]: 2025-12-06 06:25:15.349685673 +0000 UTC m=+0.132059768 container died 60a505e52837462c29edf178b77e9c6d64797bf0b9e75f7c0ededb5610b5bc31 (image=quay.io/ceph/ceph:v18, name=condescending_lamport, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:15 compute-0 podman[73523]: 2025-12-06 06:25:15.388384492 +0000 UTC m=+0.170758577 container remove 60a505e52837462c29edf178b77e9c6d64797bf0b9e75f7c0ededb5610b5bc31 (image=quay.io/ceph/ceph:v18, name=condescending_lamport, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 06:25:15 compute-0 systemd[1]: libpod-conmon-60a505e52837462c29edf178b77e9c6d64797bf0b9e75f7c0ededb5610b5bc31.scope: Deactivated successfully.
Dec 06 06:25:15 compute-0 podman[73559]: 2025-12-06 06:25:15.444936854 +0000 UTC m=+0.038649809 container create c00cfc5edb86a7f55bd297f4c67597c302c4b142e455c5534c46fcc58045e2c2 (image=quay.io/ceph/ceph:v18, name=brave_yalow, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 06:25:15 compute-0 systemd[1]: Started libpod-conmon-c00cfc5edb86a7f55bd297f4c67597c302c4b142e455c5534c46fcc58045e2c2.scope.
Dec 06 06:25:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:15 compute-0 podman[73559]: 2025-12-06 06:25:15.505535456 +0000 UTC m=+0.099248431 container init c00cfc5edb86a7f55bd297f4c67597c302c4b142e455c5534c46fcc58045e2c2 (image=quay.io/ceph/ceph:v18, name=brave_yalow, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:15 compute-0 podman[73559]: 2025-12-06 06:25:15.510953163 +0000 UTC m=+0.104666108 container start c00cfc5edb86a7f55bd297f4c67597c302c4b142e455c5534c46fcc58045e2c2 (image=quay.io/ceph/ceph:v18, name=brave_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:15 compute-0 podman[73559]: 2025-12-06 06:25:15.514648972 +0000 UTC m=+0.108361947 container attach c00cfc5edb86a7f55bd297f4c67597c302c4b142e455c5534c46fcc58045e2c2 (image=quay.io/ceph/ceph:v18, name=brave_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 06:25:15 compute-0 podman[73559]: 2025-12-06 06:25:15.427336607 +0000 UTC m=+0.021049582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:15 compute-0 brave_yalow[73575]: AQBLzDNpYM2VHxAAHBqAh+si2btWQVL7cNSj1g==
Dec 06 06:25:15 compute-0 systemd[1]: libpod-c00cfc5edb86a7f55bd297f4c67597c302c4b142e455c5534c46fcc58045e2c2.scope: Deactivated successfully.
Dec 06 06:25:15 compute-0 podman[73559]: 2025-12-06 06:25:15.533549625 +0000 UTC m=+0.127262580 container died c00cfc5edb86a7f55bd297f4c67597c302c4b142e455c5534c46fcc58045e2c2 (image=quay.io/ceph/ceph:v18, name=brave_yalow, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-037bfd0af5744247607438bdaf4c5514c6fabe2b4d5b6c2857f4d5dd30bd5c97-merged.mount: Deactivated successfully.
Dec 06 06:25:15 compute-0 podman[73559]: 2025-12-06 06:25:15.572354696 +0000 UTC m=+0.166067651 container remove c00cfc5edb86a7f55bd297f4c67597c302c4b142e455c5534c46fcc58045e2c2 (image=quay.io/ceph/ceph:v18, name=brave_yalow, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:25:15 compute-0 systemd[1]: libpod-conmon-c00cfc5edb86a7f55bd297f4c67597c302c4b142e455c5534c46fcc58045e2c2.scope: Deactivated successfully.
Dec 06 06:25:15 compute-0 podman[73594]: 2025-12-06 06:25:15.632471964 +0000 UTC m=+0.040793875 container create 612c464876cb68d5dc0b546739f84326194acad8dc524bf761e1696659b5f933 (image=quay.io/ceph/ceph:v18, name=wizardly_mayer, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 06:25:15 compute-0 systemd[1]: Started libpod-conmon-612c464876cb68d5dc0b546739f84326194acad8dc524bf761e1696659b5f933.scope.
Dec 06 06:25:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:15 compute-0 podman[73594]: 2025-12-06 06:25:15.707362723 +0000 UTC m=+0.115684664 container init 612c464876cb68d5dc0b546739f84326194acad8dc524bf761e1696659b5f933 (image=quay.io/ceph/ceph:v18, name=wizardly_mayer, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:15 compute-0 podman[73594]: 2025-12-06 06:25:15.614531269 +0000 UTC m=+0.022853190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:15 compute-0 podman[73594]: 2025-12-06 06:25:15.712314068 +0000 UTC m=+0.120635979 container start 612c464876cb68d5dc0b546739f84326194acad8dc524bf761e1696659b5f933 (image=quay.io/ceph/ceph:v18, name=wizardly_mayer, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 06:25:15 compute-0 podman[73594]: 2025-12-06 06:25:15.716961313 +0000 UTC m=+0.125283244 container attach 612c464876cb68d5dc0b546739f84326194acad8dc524bf761e1696659b5f933 (image=quay.io/ceph/ceph:v18, name=wizardly_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:25:15 compute-0 wizardly_mayer[73610]: AQBLzDNp4QPrKxAA5w7YG2nU3AyFEocjb7L2hw==
Dec 06 06:25:15 compute-0 systemd[1]: libpod-612c464876cb68d5dc0b546739f84326194acad8dc524bf761e1696659b5f933.scope: Deactivated successfully.
Dec 06 06:25:15 compute-0 podman[73594]: 2025-12-06 06:25:15.740639015 +0000 UTC m=+0.148960926 container died 612c464876cb68d5dc0b546739f84326194acad8dc524bf761e1696659b5f933 (image=quay.io/ceph/ceph:v18, name=wizardly_mayer, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:15 compute-0 podman[73594]: 2025-12-06 06:25:15.777460313 +0000 UTC m=+0.185782224 container remove 612c464876cb68d5dc0b546739f84326194acad8dc524bf761e1696659b5f933 (image=quay.io/ceph/ceph:v18, name=wizardly_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 06:25:15 compute-0 systemd[1]: libpod-conmon-612c464876cb68d5dc0b546739f84326194acad8dc524bf761e1696659b5f933.scope: Deactivated successfully.
Dec 06 06:25:15 compute-0 podman[73632]: 2025-12-06 06:25:15.839735449 +0000 UTC m=+0.041342041 container create 852c6a3cd338329aee2ef692153bc10eeb13a30ab5c78a36394674210805ea65 (image=quay.io/ceph/ceph:v18, name=keen_knuth, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:15 compute-0 systemd[1]: Started libpod-conmon-852c6a3cd338329aee2ef692153bc10eeb13a30ab5c78a36394674210805ea65.scope.
Dec 06 06:25:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e2d3a0139a5d37e363df0aa1295f6501dcad7afc637f0de9f0e2adbf72ab49/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:15 compute-0 podman[73632]: 2025-12-06 06:25:15.896149338 +0000 UTC m=+0.097755950 container init 852c6a3cd338329aee2ef692153bc10eeb13a30ab5c78a36394674210805ea65 (image=quay.io/ceph/ceph:v18, name=keen_knuth, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 06:25:15 compute-0 podman[73632]: 2025-12-06 06:25:15.900720712 +0000 UTC m=+0.102327314 container start 852c6a3cd338329aee2ef692153bc10eeb13a30ab5c78a36394674210805ea65 (image=quay.io/ceph/ceph:v18, name=keen_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:25:15 compute-0 podman[73632]: 2025-12-06 06:25:15.90434264 +0000 UTC m=+0.105949232 container attach 852c6a3cd338329aee2ef692153bc10eeb13a30ab5c78a36394674210805ea65 (image=quay.io/ceph/ceph:v18, name=keen_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:15 compute-0 podman[73632]: 2025-12-06 06:25:15.820171999 +0000 UTC m=+0.021778621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:15 compute-0 keen_knuth[73648]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 06 06:25:15 compute-0 keen_knuth[73648]: setting min_mon_release = pacific
Dec 06 06:25:15 compute-0 keen_knuth[73648]: /usr/bin/monmaptool: set fsid to 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:25:15 compute-0 keen_knuth[73648]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 06 06:25:15 compute-0 systemd[1]: libpod-852c6a3cd338329aee2ef692153bc10eeb13a30ab5c78a36394674210805ea65.scope: Deactivated successfully.
Dec 06 06:25:15 compute-0 podman[73632]: 2025-12-06 06:25:15.930001795 +0000 UTC m=+0.131608397 container died 852c6a3cd338329aee2ef692153bc10eeb13a30ab5c78a36394674210805ea65 (image=quay.io/ceph/ceph:v18, name=keen_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:25:15 compute-0 podman[73632]: 2025-12-06 06:25:15.978596442 +0000 UTC m=+0.180203024 container remove 852c6a3cd338329aee2ef692153bc10eeb13a30ab5c78a36394674210805ea65 (image=quay.io/ceph/ceph:v18, name=keen_knuth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 06:25:15 compute-0 systemd[1]: libpod-conmon-852c6a3cd338329aee2ef692153bc10eeb13a30ab5c78a36394674210805ea65.scope: Deactivated successfully.
Dec 06 06:25:16 compute-0 podman[73666]: 2025-12-06 06:25:16.053782149 +0000 UTC m=+0.044177548 container create 4562078bbc63a3a307cd5b72886e0f58451db17e9ee9e018dd1a8a2af6d5d89d (image=quay.io/ceph/ceph:v18, name=adoring_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 06:25:16 compute-0 systemd[1]: Started libpod-conmon-4562078bbc63a3a307cd5b72886e0f58451db17e9ee9e018dd1a8a2af6d5d89d.scope.
Dec 06 06:25:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db321765777ef210a6b764c7342eacac2c6fbbd4ee1d1b4111b15423a7888580/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db321765777ef210a6b764c7342eacac2c6fbbd4ee1d1b4111b15423a7888580/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db321765777ef210a6b764c7342eacac2c6fbbd4ee1d1b4111b15423a7888580/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db321765777ef210a6b764c7342eacac2c6fbbd4ee1d1b4111b15423a7888580/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:16 compute-0 podman[73666]: 2025-12-06 06:25:16.109505357 +0000 UTC m=+0.099900786 container init 4562078bbc63a3a307cd5b72886e0f58451db17e9ee9e018dd1a8a2af6d5d89d (image=quay.io/ceph/ceph:v18, name=adoring_khayyam, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:25:16 compute-0 podman[73666]: 2025-12-06 06:25:16.11547673 +0000 UTC m=+0.105872139 container start 4562078bbc63a3a307cd5b72886e0f58451db17e9ee9e018dd1a8a2af6d5d89d (image=quay.io/ceph/ceph:v18, name=adoring_khayyam, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 06:25:16 compute-0 podman[73666]: 2025-12-06 06:25:16.120734052 +0000 UTC m=+0.111129661 container attach 4562078bbc63a3a307cd5b72886e0f58451db17e9ee9e018dd1a8a2af6d5d89d (image=quay.io/ceph/ceph:v18, name=adoring_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:25:16 compute-0 podman[73666]: 2025-12-06 06:25:16.035560375 +0000 UTC m=+0.025955804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:16 compute-0 systemd[1]: libpod-4562078bbc63a3a307cd5b72886e0f58451db17e9ee9e018dd1a8a2af6d5d89d.scope: Deactivated successfully.
Dec 06 06:25:16 compute-0 podman[73666]: 2025-12-06 06:25:16.209873497 +0000 UTC m=+0.200268936 container died 4562078bbc63a3a307cd5b72886e0f58451db17e9ee9e018dd1a8a2af6d5d89d (image=quay.io/ceph/ceph:v18, name=adoring_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 06:25:16 compute-0 podman[73666]: 2025-12-06 06:25:16.253429047 +0000 UTC m=+0.243824456 container remove 4562078bbc63a3a307cd5b72886e0f58451db17e9ee9e018dd1a8a2af6d5d89d (image=quay.io/ceph/ceph:v18, name=adoring_khayyam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 06:25:16 compute-0 systemd[1]: libpod-conmon-4562078bbc63a3a307cd5b72886e0f58451db17e9ee9e018dd1a8a2af6d5d89d.scope: Deactivated successfully.
Dec 06 06:25:17 compute-0 systemd[1]: Reloading.
Dec 06 06:25:17 compute-0 systemd-rc-local-generator[73747]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:25:17 compute-0 systemd-sysv-generator[73751]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:25:17 compute-0 systemd[1]: Reloading.
Dec 06 06:25:17 compute-0 systemd-rc-local-generator[73786]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:25:17 compute-0 systemd-sysv-generator[73789]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:25:17 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 06 06:25:17 compute-0 systemd[1]: Reloading.
Dec 06 06:25:17 compute-0 systemd-rc-local-generator[73825]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:25:17 compute-0 systemd-sysv-generator[73829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:25:17 compute-0 systemd[1]: Reached target Ceph cluster 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:25:17 compute-0 systemd[1]: Reloading.
Dec 06 06:25:17 compute-0 systemd-rc-local-generator[73865]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:25:17 compute-0 systemd-sysv-generator[73868]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:25:18 compute-0 systemd[1]: Reloading.
Dec 06 06:25:18 compute-0 systemd-rc-local-generator[73902]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:25:18 compute-0 systemd-sysv-generator[73908]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:25:18 compute-0 systemd[1]: Created slice Slice /system/ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:25:18 compute-0 systemd[1]: Reached target System Time Set.
Dec 06 06:25:18 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec 06 06:25:18 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:25:18 compute-0 podman[73963]: 2025-12-06 06:25:18.585624629 +0000 UTC m=+0.074872019 container create f79f1b292e0c4cf230855f6eb87d5dd0462a1cbf189760efd5e094f6364cf3c7 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:18 compute-0 podman[73963]: 2025-12-06 06:25:18.532830399 +0000 UTC m=+0.022077779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a73951d86340f2ac0ae42b563d1d5df57644131a45260ad273a81d9c194d41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a73951d86340f2ac0ae42b563d1d5df57644131a45260ad273a81d9c194d41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a73951d86340f2ac0ae42b563d1d5df57644131a45260ad273a81d9c194d41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a73951d86340f2ac0ae42b563d1d5df57644131a45260ad273a81d9c194d41/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:18 compute-0 podman[73963]: 2025-12-06 06:25:18.765665197 +0000 UTC m=+0.254912577 container init f79f1b292e0c4cf230855f6eb87d5dd0462a1cbf189760efd5e094f6364cf3c7 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:18 compute-0 podman[73963]: 2025-12-06 06:25:18.77167643 +0000 UTC m=+0.260923790 container start f79f1b292e0c4cf230855f6eb87d5dd0462a1cbf189760efd5e094f6364cf3c7 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:18 compute-0 ceph-mon[73982]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:25:18 compute-0 ceph-mon[73982]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec 06 06:25:18 compute-0 ceph-mon[73982]: pidfile_write: ignore empty --pid-file
Dec 06 06:25:18 compute-0 ceph-mon[73982]: load: jerasure load: lrc 
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: RocksDB version: 7.9.2
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Git sha 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: DB SUMMARY
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: DB Session ID:  3KE3VT0E2URLAV22Y7VG
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: CURRENT file:  CURRENT
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                         Options.error_if_exists: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                       Options.create_if_missing: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                                     Options.env: 0x55fdd4969c40
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                                Options.info_log: 0x55fdd58e4ec0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                              Options.statistics: (nil)
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                               Options.use_fsync: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                              Options.db_log_dir: 
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                                 Options.wal_dir: 
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                    Options.write_buffer_manager: 0x55fdd58f4b40
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                  Options.unordered_write: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                               Options.row_cache: None
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                              Options.wal_filter: None
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.two_write_queues: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.wal_compression: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.atomic_flush: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.max_background_jobs: 2
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.max_background_compactions: -1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.max_subcompactions: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.max_total_wal_size: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                          Options.max_open_files: -1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:       Options.compaction_readahead_size: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Compression algorithms supported:
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         kZSTD supported: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         kXpressCompression supported: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         kBZip2Compression supported: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         kLZ4Compression supported: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         kZlibCompression supported: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         kSnappyCompression supported: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:           Options.merge_operator: 
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:        Options.compaction_filter: None
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fdd58e4aa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fdd58dd1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:        Options.write_buffer_size: 33554432
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:  Options.max_write_buffer_number: 2
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:          Options.compression: NoCompression
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.num_levels: 7
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3233f64f-2a9e-4588-b218-4397d62e9b9f
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002318812716, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002318832898, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "3KE3VT0E2URLAV22Y7VG", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002318833026, "job": 1, "event": "recovery_finished"}
Dec 06 06:25:18 compute-0 ceph-mon[73982]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 06 06:25:18 compute-0 bash[73963]: f79f1b292e0c4cf230855f6eb87d5dd0462a1cbf189760efd5e094f6364cf3c7
Dec 06 06:25:18 compute-0 systemd[1]: Started Ceph mon.compute-0 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:25:19 compute-0 ceph-mon[73982]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:25:19 compute-0 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fdd5906e00
Dec 06 06:25:19 compute-0 ceph-mon[73982]: rocksdb: DB pointer 0x55fdd5990000
Dec 06 06:25:19 compute-0 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:25:19 compute-0 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.020       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fdd58dd1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 06:25:19 compute-0 ceph-mon[73982]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@-1(???) e0 preinit fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 06 06:25:19 compute-0 ceph-mon[73982]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:25:19 compute-0 ceph-mon[73982]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 06 06:25:19 compute-0 ceph-mon[73982]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:25:19 compute-0 ceph-mon[73982]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 06 06:25:19 compute-0 ceph-mon[73982]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-12-06T06:25:16.154833Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).mds e1 new map
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 06 06:25:19 compute-0 ceph-mon[73982]: log_channel(cluster) log [DBG] : fsmap 
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mkfs 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 06 06:25:19 compute-0 ceph-mon[73982]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 06 06:25:19 compute-0 ceph-mon[73982]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 06 06:25:19 compute-0 ceph-mon[73982]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 06 06:25:19 compute-0 podman[74021]: 2025-12-06 06:25:19.179046895 +0000 UTC m=+0.053547751 container create ed78811c94f5bba90ae351af26f8aef29cd0f0c0c0585c680d054df79f8e4327 (image=quay.io/ceph/ceph:v18, name=exciting_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:19 compute-0 systemd[1]: Started libpod-conmon-ed78811c94f5bba90ae351af26f8aef29cd0f0c0c0585c680d054df79f8e4327.scope.
Dec 06 06:25:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb01305fbd7cc4099358ac0bbb2fb854893d200e9f4e919f6768b7eaaae5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb01305fbd7cc4099358ac0bbb2fb854893d200e9f4e919f6768b7eaaae5e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:19 compute-0 podman[74021]: 2025-12-06 06:25:19.160630567 +0000 UTC m=+0.035131453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09cdb01305fbd7cc4099358ac0bbb2fb854893d200e9f4e919f6768b7eaaae5e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:19 compute-0 podman[74021]: 2025-12-06 06:25:19.612358745 +0000 UTC m=+0.486859631 container init ed78811c94f5bba90ae351af26f8aef29cd0f0c0c0585c680d054df79f8e4327 (image=quay.io/ceph/ceph:v18, name=exciting_davinci, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:19 compute-0 podman[74021]: 2025-12-06 06:25:19.620345342 +0000 UTC m=+0.494846208 container start ed78811c94f5bba90ae351af26f8aef29cd0f0c0c0585c680d054df79f8e4327 (image=quay.io/ceph/ceph:v18, name=exciting_davinci, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:19 compute-0 podman[74021]: 2025-12-06 06:25:19.682207467 +0000 UTC m=+0.556708363 container attach ed78811c94f5bba90ae351af26f8aef29cd0f0c0c0585c680d054df79f8e4327 (image=quay.io/ceph/ceph:v18, name=exciting_davinci, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 06:25:20 compute-0 ceph-mon[73982]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 06 06:25:20 compute-0 ceph-mon[73982]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4181131635' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:   cluster:
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:     id:     40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:     health: HEALTH_OK
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:  
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:   services:
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:     mon: 1 daemons, quorum compute-0 (age 0.918581s)
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:     mgr: no daemons active
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:     osd: 0 osds: 0 up, 0 in
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:  
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:   data:
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:     pools:   0 pools, 0 pgs
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:     objects: 0 objects, 0 B
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:     usage:   0 B used, 0 B / 0 B avail
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:     pgs:     
Dec 06 06:25:20 compute-0 exciting_davinci[74038]:  
Dec 06 06:25:20 compute-0 systemd[1]: libpod-ed78811c94f5bba90ae351af26f8aef29cd0f0c0c0585c680d054df79f8e4327.scope: Deactivated successfully.
Dec 06 06:25:20 compute-0 podman[74021]: 2025-12-06 06:25:20.065000747 +0000 UTC m=+0.939501643 container died ed78811c94f5bba90ae351af26f8aef29cd0f0c0c0585c680d054df79f8e4327 (image=quay.io/ceph/ceph:v18, name=exciting_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:20 compute-0 ceph-mon[73982]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 06 06:25:20 compute-0 ceph-mon[73982]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:25:20 compute-0 ceph-mon[73982]: fsmap 
Dec 06 06:25:20 compute-0 ceph-mon[73982]: osdmap e1: 0 total, 0 up, 0 in
Dec 06 06:25:20 compute-0 ceph-mon[73982]: mgrmap e1: no daemons active
Dec 06 06:25:20 compute-0 ceph-mon[73982]: from='client.? 192.168.122.100:0/4181131635' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 06:25:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-09cdb01305fbd7cc4099358ac0bbb2fb854893d200e9f4e919f6768b7eaaae5e-merged.mount: Deactivated successfully.
Dec 06 06:25:20 compute-0 podman[74021]: 2025-12-06 06:25:20.465188119 +0000 UTC m=+1.339688985 container remove ed78811c94f5bba90ae351af26f8aef29cd0f0c0c0585c680d054df79f8e4327 (image=quay.io/ceph/ceph:v18, name=exciting_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 06:25:20 compute-0 systemd[1]: libpod-conmon-ed78811c94f5bba90ae351af26f8aef29cd0f0c0c0585c680d054df79f8e4327.scope: Deactivated successfully.
Dec 06 06:25:20 compute-0 podman[74077]: 2025-12-06 06:25:20.529546302 +0000 UTC m=+0.043083467 container create 116f14fd03fbdc819c90c7f4db76e6b94fcb9d19cb99a0d30882b70108cf16a0 (image=quay.io/ceph/ceph:v18, name=festive_merkle, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:25:20 compute-0 systemd[1]: Started libpod-conmon-116f14fd03fbdc819c90c7f4db76e6b94fcb9d19cb99a0d30882b70108cf16a0.scope.
Dec 06 06:25:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be48c9700065a560d5a2e5035efce25f25d230dc2a84f651f878b798bf2fd46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be48c9700065a560d5a2e5035efce25f25d230dc2a84f651f878b798bf2fd46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be48c9700065a560d5a2e5035efce25f25d230dc2a84f651f878b798bf2fd46/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be48c9700065a560d5a2e5035efce25f25d230dc2a84f651f878b798bf2fd46/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:20 compute-0 podman[74077]: 2025-12-06 06:25:20.590377381 +0000 UTC m=+0.103914576 container init 116f14fd03fbdc819c90c7f4db76e6b94fcb9d19cb99a0d30882b70108cf16a0 (image=quay.io/ceph/ceph:v18, name=festive_merkle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 06:25:20 compute-0 podman[74077]: 2025-12-06 06:25:20.596474576 +0000 UTC m=+0.110011731 container start 116f14fd03fbdc819c90c7f4db76e6b94fcb9d19cb99a0d30882b70108cf16a0 (image=quay.io/ceph/ceph:v18, name=festive_merkle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:20 compute-0 podman[74077]: 2025-12-06 06:25:20.599334763 +0000 UTC m=+0.112871928 container attach 116f14fd03fbdc819c90c7f4db76e6b94fcb9d19cb99a0d30882b70108cf16a0 (image=quay.io/ceph/ceph:v18, name=festive_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:20 compute-0 podman[74077]: 2025-12-06 06:25:20.510220469 +0000 UTC m=+0.023757654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:21 compute-0 ceph-mon[73982]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec 06 06:25:21 compute-0 ceph-mon[73982]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2944137216' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 06 06:25:21 compute-0 ceph-mon[73982]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2944137216' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 06 06:25:21 compute-0 festive_merkle[74093]: 
Dec 06 06:25:21 compute-0 festive_merkle[74093]: [global]
Dec 06 06:25:21 compute-0 festive_merkle[74093]:         fsid = 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:25:21 compute-0 festive_merkle[74093]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 06 06:25:21 compute-0 systemd[1]: libpod-116f14fd03fbdc819c90c7f4db76e6b94fcb9d19cb99a0d30882b70108cf16a0.scope: Deactivated successfully.
Dec 06 06:25:21 compute-0 conmon[74093]: conmon 116f14fd03fbdc819c90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-116f14fd03fbdc819c90c7f4db76e6b94fcb9d19cb99a0d30882b70108cf16a0.scope/container/memory.events
Dec 06 06:25:21 compute-0 podman[74077]: 2025-12-06 06:25:21.04252325 +0000 UTC m=+0.556060415 container died 116f14fd03fbdc819c90c7f4db76e6b94fcb9d19cb99a0d30882b70108cf16a0 (image=quay.io/ceph/ceph:v18, name=festive_merkle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6be48c9700065a560d5a2e5035efce25f25d230dc2a84f651f878b798bf2fd46-merged.mount: Deactivated successfully.
Dec 06 06:25:21 compute-0 podman[74077]: 2025-12-06 06:25:21.091148667 +0000 UTC m=+0.604685832 container remove 116f14fd03fbdc819c90c7f4db76e6b94fcb9d19cb99a0d30882b70108cf16a0 (image=quay.io/ceph/ceph:v18, name=festive_merkle, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:21 compute-0 systemd[1]: libpod-conmon-116f14fd03fbdc819c90c7f4db76e6b94fcb9d19cb99a0d30882b70108cf16a0.scope: Deactivated successfully.
Dec 06 06:25:21 compute-0 podman[74131]: 2025-12-06 06:25:21.14661364 +0000 UTC m=+0.038147115 container create 2945a37556a1f4b976ef292a61ee079e0d888a865cbf6c1897df84890bafcc6a (image=quay.io/ceph/ceph:v18, name=brave_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:25:21 compute-0 systemd[1]: Started libpod-conmon-2945a37556a1f4b976ef292a61ee079e0d888a865cbf6c1897df84890bafcc6a.scope.
Dec 06 06:25:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d9e6eb8599b0228975f459393c7ff659c24e3527ebd417931d617adb4a9dc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d9e6eb8599b0228975f459393c7ff659c24e3527ebd417931d617adb4a9dc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d9e6eb8599b0228975f459393c7ff659c24e3527ebd417931d617adb4a9dc1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d9e6eb8599b0228975f459393c7ff659c24e3527ebd417931d617adb4a9dc1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:21 compute-0 podman[74131]: 2025-12-06 06:25:21.213546083 +0000 UTC m=+0.105079588 container init 2945a37556a1f4b976ef292a61ee079e0d888a865cbf6c1897df84890bafcc6a (image=quay.io/ceph/ceph:v18, name=brave_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 06:25:21 compute-0 podman[74131]: 2025-12-06 06:25:21.218959969 +0000 UTC m=+0.110493444 container start 2945a37556a1f4b976ef292a61ee079e0d888a865cbf6c1897df84890bafcc6a (image=quay.io/ceph/ceph:v18, name=brave_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:21 compute-0 podman[74131]: 2025-12-06 06:25:21.223343079 +0000 UTC m=+0.114876554 container attach 2945a37556a1f4b976ef292a61ee079e0d888a865cbf6c1897df84890bafcc6a (image=quay.io/ceph/ceph:v18, name=brave_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:25:21 compute-0 podman[74131]: 2025-12-06 06:25:21.130478473 +0000 UTC m=+0.022011968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:21 compute-0 ceph-mon[73982]: from='client.? 192.168.122.100:0/2944137216' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 06 06:25:21 compute-0 ceph-mon[73982]: from='client.? 192.168.122.100:0/2944137216' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 06 06:25:21 compute-0 ceph-mon[73982]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:25:21 compute-0 ceph-mon[73982]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779138589' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:25:21 compute-0 systemd[1]: libpod-2945a37556a1f4b976ef292a61ee079e0d888a865cbf6c1897df84890bafcc6a.scope: Deactivated successfully.
Dec 06 06:25:21 compute-0 podman[74131]: 2025-12-06 06:25:21.641396164 +0000 UTC m=+0.532929639 container died 2945a37556a1f4b976ef292a61ee079e0d888a865cbf6c1897df84890bafcc6a (image=quay.io/ceph/ceph:v18, name=brave_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:25:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-20d9e6eb8599b0228975f459393c7ff659c24e3527ebd417931d617adb4a9dc1-merged.mount: Deactivated successfully.
Dec 06 06:25:21 compute-0 podman[74131]: 2025-12-06 06:25:21.681035218 +0000 UTC m=+0.572568693 container remove 2945a37556a1f4b976ef292a61ee079e0d888a865cbf6c1897df84890bafcc6a (image=quay.io/ceph/ceph:v18, name=brave_hodgkin, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:25:21 compute-0 systemd[1]: libpod-conmon-2945a37556a1f4b976ef292a61ee079e0d888a865cbf6c1897df84890bafcc6a.scope: Deactivated successfully.
Dec 06 06:25:21 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:25:21 compute-0 ceph-mon[73982]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 06 06:25:21 compute-0 ceph-mon[73982]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 06 06:25:21 compute-0 ceph-mon[73982]: mon.compute-0@0(leader) e1 shutdown
Dec 06 06:25:21 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[73978]: 2025-12-06T06:25:21.850+0000 7f4c1fbff640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 06 06:25:21 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[73978]: 2025-12-06T06:25:21.850+0000 7f4c1fbff640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 06 06:25:21 compute-0 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 06 06:25:21 compute-0 ceph-mon[73982]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 06 06:25:22 compute-0 podman[74215]: 2025-12-06 06:25:22.025200302 +0000 UTC m=+0.205562850 container died f79f1b292e0c4cf230855f6eb87d5dd0462a1cbf189760efd5e094f6364cf3c7 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:25:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8a73951d86340f2ac0ae42b563d1d5df57644131a45260ad273a81d9c194d41-merged.mount: Deactivated successfully.
Dec 06 06:25:22 compute-0 podman[74215]: 2025-12-06 06:25:22.063328585 +0000 UTC m=+0.243691103 container remove f79f1b292e0c4cf230855f6eb87d5dd0462a1cbf189760efd5e094f6364cf3c7 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:22 compute-0 bash[74215]: ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0
Dec 06 06:25:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:25:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 06:25:22 compute-0 systemd[1]: ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb@mon.compute-0.service: Deactivated successfully.
Dec 06 06:25:22 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:25:22 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:25:22 compute-0 podman[74319]: 2025-12-06 06:25:22.347854053 +0000 UTC m=+0.037974529 container create 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec43b0c383b2e27e592cf0f077b4045b137fc7129a6330e53776b54d9def86b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec43b0c383b2e27e592cf0f077b4045b137fc7129a6330e53776b54d9def86b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec43b0c383b2e27e592cf0f077b4045b137fc7129a6330e53776b54d9def86b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec43b0c383b2e27e592cf0f077b4045b137fc7129a6330e53776b54d9def86b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:22 compute-0 podman[74319]: 2025-12-06 06:25:22.398919277 +0000 UTC m=+0.089039783 container init 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:25:22 compute-0 podman[74319]: 2025-12-06 06:25:22.403963233 +0000 UTC m=+0.094083709 container start 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 06:25:22 compute-0 bash[74319]: 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0
Dec 06 06:25:22 compute-0 podman[74319]: 2025-12-06 06:25:22.333067622 +0000 UTC m=+0.023188118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:22 compute-0 systemd[1]: Started Ceph mon.compute-0 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:25:22 compute-0 ceph-mon[74339]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:25:22 compute-0 ceph-mon[74339]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec 06 06:25:22 compute-0 ceph-mon[74339]: pidfile_write: ignore empty --pid-file
Dec 06 06:25:22 compute-0 ceph-mon[74339]: load: jerasure load: lrc 
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: RocksDB version: 7.9.2
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Git sha 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: DB SUMMARY
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: DB Session ID:  TCRYIVRGAQK56L3E52U0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: CURRENT file:  CURRENT
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54094 ; 
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                         Options.error_if_exists: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                       Options.create_if_missing: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                                     Options.env: 0x5596d0669c40
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                                Options.info_log: 0x5596d2c2f040
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                              Options.statistics: (nil)
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                               Options.use_fsync: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                              Options.db_log_dir: 
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                                 Options.wal_dir: 
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                    Options.write_buffer_manager: 0x5596d2c3eb40
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                  Options.unordered_write: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                               Options.row_cache: None
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                              Options.wal_filter: None
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.two_write_queues: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.wal_compression: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.atomic_flush: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.max_background_jobs: 2
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.max_background_compactions: -1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.max_subcompactions: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.max_total_wal_size: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                          Options.max_open_files: -1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:       Options.compaction_readahead_size: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Compression algorithms supported:
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         kZSTD supported: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         kXpressCompression supported: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         kBZip2Compression supported: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         kLZ4Compression supported: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         kZlibCompression supported: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         kSnappyCompression supported: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:           Options.merge_operator: 
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:        Options.compaction_filter: None
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5596d2c2ec40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5596d2c271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:        Options.write_buffer_size: 33554432
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:  Options.max_write_buffer_number: 2
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:          Options.compression: NoCompression
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.num_levels: 7
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3233f64f-2a9e-4588-b218-4397d62e9b9f
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002322451503, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002322455705, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 53705, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 135, "table_properties": {"data_size": 52260, "index_size": 151, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2906, "raw_average_key_size": 29, "raw_value_size": 49934, "raw_average_value_size": 509, "num_data_blocks": 7, "num_entries": 98, "num_filter_entries": 98, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002322, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002322455816, "job": 1, "event": "recovery_finished"}
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5596d2c50e00
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: DB pointer 0x5596d2cda000
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   54.34 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   54.34 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.80 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.80 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 512.00 MB usage: 0.77 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 06:25:22 compute-0 ceph-mon[74339]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@-1(???) e1 preinit fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@-1(???).mds e1 new map
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 06 06:25:22 compute-0 ceph-mon[74339]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:25:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 06 06:25:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:25:22 compute-0 podman[74340]: 2025-12-06 06:25:22.476241581 +0000 UTC m=+0.041052332 container create d3a7bbf1bc9e06c0a91258e6f68227375fc5f89859d18c0e78a1c86b0c940812 (image=quay.io/ceph/ceph:v18, name=blissful_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:25:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap 
Dec 06 06:25:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 06 06:25:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 06 06:25:22 compute-0 systemd[1]: Started libpod-conmon-d3a7bbf1bc9e06c0a91258e6f68227375fc5f89859d18c0e78a1c86b0c940812.scope.
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 06 06:25:22 compute-0 ceph-mon[74339]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:25:22 compute-0 ceph-mon[74339]: fsmap 
Dec 06 06:25:22 compute-0 ceph-mon[74339]: osdmap e1: 0 total, 0 up, 0 in
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mgrmap e1: no daemons active
Dec 06 06:25:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5601b8cdc8466f31150e58db0de53b771adf4e87041545a411fdebb7088d9c92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5601b8cdc8466f31150e58db0de53b771adf4e87041545a411fdebb7088d9c92/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5601b8cdc8466f31150e58db0de53b771adf4e87041545a411fdebb7088d9c92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:22 compute-0 podman[74340]: 2025-12-06 06:25:22.459070416 +0000 UTC m=+0.023881187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:22 compute-0 podman[74340]: 2025-12-06 06:25:22.555406766 +0000 UTC m=+0.120217537 container init d3a7bbf1bc9e06c0a91258e6f68227375fc5f89859d18c0e78a1c86b0c940812 (image=quay.io/ceph/ceph:v18, name=blissful_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:25:22 compute-0 podman[74340]: 2025-12-06 06:25:22.561579114 +0000 UTC m=+0.126389865 container start d3a7bbf1bc9e06c0a91258e6f68227375fc5f89859d18c0e78a1c86b0c940812 (image=quay.io/ceph/ceph:v18, name=blissful_proskuriakova, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:22 compute-0 podman[74340]: 2025-12-06 06:25:22.564540394 +0000 UTC m=+0.129351165 container attach d3a7bbf1bc9e06c0a91258e6f68227375fc5f89859d18c0e78a1c86b0c940812 (image=quay.io/ceph/ceph:v18, name=blissful_proskuriakova, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Dec 06 06:25:22 compute-0 systemd[1]: libpod-d3a7bbf1bc9e06c0a91258e6f68227375fc5f89859d18c0e78a1c86b0c940812.scope: Deactivated successfully.
Dec 06 06:25:22 compute-0 podman[74340]: 2025-12-06 06:25:22.98080209 +0000 UTC m=+0.545612841 container died d3a7bbf1bc9e06c0a91258e6f68227375fc5f89859d18c0e78a1c86b0c940812 (image=quay.io/ceph/ceph:v18, name=blissful_proskuriakova, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 06:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-5601b8cdc8466f31150e58db0de53b771adf4e87041545a411fdebb7088d9c92-merged.mount: Deactivated successfully.
Dec 06 06:25:23 compute-0 podman[74340]: 2025-12-06 06:25:23.027033683 +0000 UTC m=+0.591844444 container remove d3a7bbf1bc9e06c0a91258e6f68227375fc5f89859d18c0e78a1c86b0c940812 (image=quay.io/ceph/ceph:v18, name=blissful_proskuriakova, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:23 compute-0 systemd[1]: libpod-conmon-d3a7bbf1bc9e06c0a91258e6f68227375fc5f89859d18c0e78a1c86b0c940812.scope: Deactivated successfully.
Dec 06 06:25:23 compute-0 podman[74432]: 2025-12-06 06:25:23.082421914 +0000 UTC m=+0.036158621 container create a4cf28ca0c47093e53aeca68c7bc7e161baa22a6894b920f94a9f854fa9f27fd (image=quay.io/ceph/ceph:v18, name=pedantic_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:25:23 compute-0 systemd[1]: Started libpod-conmon-a4cf28ca0c47093e53aeca68c7bc7e161baa22a6894b920f94a9f854fa9f27fd.scope.
Dec 06 06:25:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294767239eedf94a5f49b2baab20812a12586dfb05aa4dd8e3bc05114db0bcd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294767239eedf94a5f49b2baab20812a12586dfb05aa4dd8e3bc05114db0bcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294767239eedf94a5f49b2baab20812a12586dfb05aa4dd8e3bc05114db0bcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:23 compute-0 podman[74432]: 2025-12-06 06:25:23.156469139 +0000 UTC m=+0.110205856 container init a4cf28ca0c47093e53aeca68c7bc7e161baa22a6894b920f94a9f854fa9f27fd (image=quay.io/ceph/ceph:v18, name=pedantic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:25:23 compute-0 podman[74432]: 2025-12-06 06:25:23.16425077 +0000 UTC m=+0.117987477 container start a4cf28ca0c47093e53aeca68c7bc7e161baa22a6894b920f94a9f854fa9f27fd (image=quay.io/ceph/ceph:v18, name=pedantic_liskov, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:23 compute-0 podman[74432]: 2025-12-06 06:25:23.067822158 +0000 UTC m=+0.021558885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:23 compute-0 podman[74432]: 2025-12-06 06:25:23.168198737 +0000 UTC m=+0.121935464 container attach a4cf28ca0c47093e53aeca68c7bc7e161baa22a6894b920f94a9f854fa9f27fd (image=quay.io/ceph/ceph:v18, name=pedantic_liskov, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 06:25:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Dec 06 06:25:23 compute-0 systemd[1]: libpod-a4cf28ca0c47093e53aeca68c7bc7e161baa22a6894b920f94a9f854fa9f27fd.scope: Deactivated successfully.
Dec 06 06:25:23 compute-0 podman[74432]: 2025-12-06 06:25:23.573333233 +0000 UTC m=+0.527069940 container died a4cf28ca0c47093e53aeca68c7bc7e161baa22a6894b920f94a9f854fa9f27fd (image=quay.io/ceph/ceph:v18, name=pedantic_liskov, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 06:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-3294767239eedf94a5f49b2baab20812a12586dfb05aa4dd8e3bc05114db0bcd-merged.mount: Deactivated successfully.
Dec 06 06:25:23 compute-0 podman[74432]: 2025-12-06 06:25:23.636169535 +0000 UTC m=+0.589906242 container remove a4cf28ca0c47093e53aeca68c7bc7e161baa22a6894b920f94a9f854fa9f27fd (image=quay.io/ceph/ceph:v18, name=pedantic_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 06:25:23 compute-0 systemd[1]: libpod-conmon-a4cf28ca0c47093e53aeca68c7bc7e161baa22a6894b920f94a9f854fa9f27fd.scope: Deactivated successfully.
Dec 06 06:25:23 compute-0 systemd[1]: Reloading.
Dec 06 06:25:23 compute-0 systemd-rc-local-generator[74512]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:25:23 compute-0 systemd-sysv-generator[74516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:25:23 compute-0 systemd[1]: Reloading.
Dec 06 06:25:24 compute-0 systemd-rc-local-generator[74554]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:25:24 compute-0 systemd-sysv-generator[74557]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:25:24 compute-0 systemd[1]: Starting Ceph mgr.compute-0.sfzyix for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:25:24 compute-0 podman[74610]: 2025-12-06 06:25:24.429025292 +0000 UTC m=+0.049045975 container create 2b2d7108e778e1b76c7c13239e0b5eeec55e4c383a951549f7b8abc894ed7e25 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928e952008318d8dedd99686cf2d47985fc7b5c7a6070b26e0539a28468b215d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928e952008318d8dedd99686cf2d47985fc7b5c7a6070b26e0539a28468b215d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928e952008318d8dedd99686cf2d47985fc7b5c7a6070b26e0539a28468b215d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/928e952008318d8dedd99686cf2d47985fc7b5c7a6070b26e0539a28468b215d/merged/var/lib/ceph/mgr/ceph-compute-0.sfzyix supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:24 compute-0 podman[74610]: 2025-12-06 06:25:24.405421753 +0000 UTC m=+0.025442446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:24 compute-0 podman[74610]: 2025-12-06 06:25:24.511309865 +0000 UTC m=+0.131330588 container init 2b2d7108e778e1b76c7c13239e0b5eeec55e4c383a951549f7b8abc894ed7e25 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 06:25:24 compute-0 podman[74610]: 2025-12-06 06:25:24.515817742 +0000 UTC m=+0.135838415 container start 2b2d7108e778e1b76c7c13239e0b5eeec55e4c383a951549f7b8abc894ed7e25 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 06:25:24 compute-0 bash[74610]: 2b2d7108e778e1b76c7c13239e0b5eeec55e4c383a951549f7b8abc894ed7e25
Dec 06 06:25:24 compute-0 systemd[1]: Started Ceph mgr.compute-0.sfzyix for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:25:24 compute-0 ceph-mgr[74630]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:25:24 compute-0 ceph-mgr[74630]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec 06 06:25:24 compute-0 ceph-mgr[74630]: pidfile_write: ignore empty --pid-file
Dec 06 06:25:24 compute-0 podman[74631]: 2025-12-06 06:25:24.598407531 +0000 UTC m=+0.046974386 container create c785c7ff93541b975dbfc1cfc95a130b1171ded9f4ae47572544835a783c2fb8 (image=quay.io/ceph/ceph:v18, name=nifty_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:24 compute-0 systemd[1]: Started libpod-conmon-c785c7ff93541b975dbfc1cfc95a130b1171ded9f4ae47572544835a783c2fb8.scope.
Dec 06 06:25:24 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'alerts'
Dec 06 06:25:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d4c0032671111f75b243f473c6295a7e92ea6fa21b3e207102b16d6d76d69a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d4c0032671111f75b243f473c6295a7e92ea6fa21b3e207102b16d6d76d69a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d4c0032671111f75b243f473c6295a7e92ea6fa21b3e207102b16d6d76d69a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:24 compute-0 podman[74631]: 2025-12-06 06:25:24.576675037 +0000 UTC m=+0.025241872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:24 compute-0 podman[74631]: 2025-12-06 06:25:24.687937753 +0000 UTC m=+0.136504588 container init c785c7ff93541b975dbfc1cfc95a130b1171ded9f4ae47572544835a783c2fb8 (image=quay.io/ceph/ceph:v18, name=nifty_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 06:25:24 compute-0 podman[74631]: 2025-12-06 06:25:24.695933879 +0000 UTC m=+0.144500694 container start c785c7ff93541b975dbfc1cfc95a130b1171ded9f4ae47572544835a783c2fb8 (image=quay.io/ceph/ceph:v18, name=nifty_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:24 compute-0 podman[74631]: 2025-12-06 06:25:24.705133608 +0000 UTC m=+0.153700443 container attach c785c7ff93541b975dbfc1cfc95a130b1171ded9f4ae47572544835a783c2fb8 (image=quay.io/ceph/ceph:v18, name=nifty_lovelace, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 06:25:24 compute-0 ceph-mgr[74630]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 06:25:24 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'balancer'
Dec 06 06:25:24 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:24.997+0000 7f54cbbf4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 06:25:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 06 06:25:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/108555338' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]: 
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]: {
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "health": {
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "status": "HEALTH_OK",
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "checks": {},
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "mutes": []
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     },
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "election_epoch": 5,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "quorum": [
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         0
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     ],
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "quorum_names": [
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "compute-0"
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     ],
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "quorum_age": 2,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "monmap": {
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "epoch": 1,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "min_mon_release_name": "reef",
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "num_mons": 1
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     },
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "osdmap": {
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "epoch": 1,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "num_osds": 0,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "num_up_osds": 0,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "osd_up_since": 0,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "num_in_osds": 0,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "osd_in_since": 0,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "num_remapped_pgs": 0
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     },
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "pgmap": {
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "pgs_by_state": [],
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "num_pgs": 0,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "num_pools": 0,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "num_objects": 0,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "data_bytes": 0,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "bytes_used": 0,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "bytes_avail": 0,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "bytes_total": 0
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     },
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "fsmap": {
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "epoch": 1,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "by_rank": [],
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "up:standby": 0
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     },
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "mgrmap": {
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "available": false,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "num_standbys": 0,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "modules": [
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:             "iostat",
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:             "nfs",
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:             "restful"
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         ],
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "services": {}
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     },
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "servicemap": {
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "epoch": 1,
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "modified": "2025-12-06T06:25:19.129259+0000",
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:         "services": {}
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     },
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]:     "progress_events": {}
Dec 06 06:25:25 compute-0 nifty_lovelace[74672]: }
Dec 06 06:25:25 compute-0 systemd[1]: libpod-c785c7ff93541b975dbfc1cfc95a130b1171ded9f4ae47572544835a783c2fb8.scope: Deactivated successfully.
Dec 06 06:25:25 compute-0 podman[74631]: 2025-12-06 06:25:25.130698733 +0000 UTC m=+0.579265588 container died c785c7ff93541b975dbfc1cfc95a130b1171ded9f4ae47572544835a783c2fb8 (image=quay.io/ceph/ceph:v18, name=nifty_lovelace, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-34d4c0032671111f75b243f473c6295a7e92ea6fa21b3e207102b16d6d76d69a-merged.mount: Deactivated successfully.
Dec 06 06:25:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/108555338' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:25 compute-0 podman[74631]: 2025-12-06 06:25:25.1778119 +0000 UTC m=+0.626378715 container remove c785c7ff93541b975dbfc1cfc95a130b1171ded9f4ae47572544835a783c2fb8 (image=quay.io/ceph/ceph:v18, name=nifty_lovelace, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:25:25 compute-0 systemd[1]: libpod-conmon-c785c7ff93541b975dbfc1cfc95a130b1171ded9f4ae47572544835a783c2fb8.scope: Deactivated successfully.
Dec 06 06:25:25 compute-0 ceph-mgr[74630]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 06:25:25 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:25.274+0000 7f54cbbf4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 06:25:25 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'cephadm'
Dec 06 06:25:27 compute-0 podman[74721]: 2025-12-06 06:25:27.253815366 +0000 UTC m=+0.049679199 container create 12f89df4441d414454336df26f5f2b03e1c86afe81c2c74d4d59a91acaefc600 (image=quay.io/ceph/ceph:v18, name=elated_perlman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 06:25:27 compute-0 systemd[1]: Started libpod-conmon-12f89df4441d414454336df26f5f2b03e1c86afe81c2c74d4d59a91acaefc600.scope.
Dec 06 06:25:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:27 compute-0 podman[74721]: 2025-12-06 06:25:27.233049661 +0000 UTC m=+0.028913514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b69df3718cae1d05faafb54958ba7e60b72da47676881fdf212f12ed1531c46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b69df3718cae1d05faafb54958ba7e60b72da47676881fdf212f12ed1531c46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b69df3718cae1d05faafb54958ba7e60b72da47676881fdf212f12ed1531c46/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:27 compute-0 podman[74721]: 2025-12-06 06:25:27.349407317 +0000 UTC m=+0.145271150 container init 12f89df4441d414454336df26f5f2b03e1c86afe81c2c74d4d59a91acaefc600 (image=quay.io/ceph/ceph:v18, name=elated_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:27 compute-0 podman[74721]: 2025-12-06 06:25:27.354871643 +0000 UTC m=+0.150735476 container start 12f89df4441d414454336df26f5f2b03e1c86afe81c2c74d4d59a91acaefc600 (image=quay.io/ceph/ceph:v18, name=elated_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 06:25:27 compute-0 podman[74721]: 2025-12-06 06:25:27.358222568 +0000 UTC m=+0.154086431 container attach 12f89df4441d414454336df26f5f2b03e1c86afe81c2c74d4d59a91acaefc600 (image=quay.io/ceph/ceph:v18, name=elated_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 06:25:27 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'crash'
Dec 06 06:25:27 compute-0 ceph-mgr[74630]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 06:25:27 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'dashboard'
Dec 06 06:25:27 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:27.700+0000 7f54cbbf4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 06:25:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 06 06:25:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412721058' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:27 compute-0 elated_perlman[74737]: 
Dec 06 06:25:27 compute-0 elated_perlman[74737]: {
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "health": {
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "status": "HEALTH_OK",
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "checks": {},
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "mutes": []
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     },
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "election_epoch": 5,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "quorum": [
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         0
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     ],
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "quorum_names": [
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "compute-0"
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     ],
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "quorum_age": 5,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "monmap": {
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "epoch": 1,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "min_mon_release_name": "reef",
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "num_mons": 1
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     },
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "osdmap": {
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "epoch": 1,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "num_osds": 0,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "num_up_osds": 0,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "osd_up_since": 0,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "num_in_osds": 0,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "osd_in_since": 0,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "num_remapped_pgs": 0
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     },
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "pgmap": {
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "pgs_by_state": [],
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "num_pgs": 0,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "num_pools": 0,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "num_objects": 0,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "data_bytes": 0,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "bytes_used": 0,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "bytes_avail": 0,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "bytes_total": 0
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     },
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "fsmap": {
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "epoch": 1,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "by_rank": [],
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "up:standby": 0
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     },
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "mgrmap": {
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "available": false,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "num_standbys": 0,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "modules": [
Dec 06 06:25:27 compute-0 elated_perlman[74737]:             "iostat",
Dec 06 06:25:27 compute-0 elated_perlman[74737]:             "nfs",
Dec 06 06:25:27 compute-0 elated_perlman[74737]:             "restful"
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         ],
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "services": {}
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     },
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "servicemap": {
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "epoch": 1,
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "modified": "2025-12-06T06:25:19.129259+0000",
Dec 06 06:25:27 compute-0 elated_perlman[74737]:         "services": {}
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     },
Dec 06 06:25:27 compute-0 elated_perlman[74737]:     "progress_events": {}
Dec 06 06:25:27 compute-0 elated_perlman[74737]: }
Dec 06 06:25:27 compute-0 systemd[1]: libpod-12f89df4441d414454336df26f5f2b03e1c86afe81c2c74d4d59a91acaefc600.scope: Deactivated successfully.
Dec 06 06:25:27 compute-0 podman[74721]: 2025-12-06 06:25:27.789323761 +0000 UTC m=+0.585187594 container died 12f89df4441d414454336df26f5f2b03e1c86afe81c2c74d4d59a91acaefc600 (image=quay.io/ceph/ceph:v18, name=elated_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b69df3718cae1d05faafb54958ba7e60b72da47676881fdf212f12ed1531c46-merged.mount: Deactivated successfully.
Dec 06 06:25:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1412721058' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:27 compute-0 podman[74721]: 2025-12-06 06:25:27.843592827 +0000 UTC m=+0.639456660 container remove 12f89df4441d414454336df26f5f2b03e1c86afe81c2c74d4d59a91acaefc600 (image=quay.io/ceph/ceph:v18, name=elated_perlman, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:25:27 compute-0 systemd[1]: libpod-conmon-12f89df4441d414454336df26f5f2b03e1c86afe81c2c74d4d59a91acaefc600.scope: Deactivated successfully.
Dec 06 06:25:29 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'devicehealth'
Dec 06 06:25:29 compute-0 ceph-mgr[74630]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 06:25:29 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'diskprediction_local'
Dec 06 06:25:29 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:29.554+0000 7f54cbbf4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 06:25:29 compute-0 podman[74775]: 2025-12-06 06:25:29.951177457 +0000 UTC m=+0.068378902 container create ce82436925889a82544b247acdbbe2f0c2fdf9047a7ae96201edbd0256460b08 (image=quay.io/ceph/ceph:v18, name=zealous_solomon, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 06:25:29 compute-0 systemd[1]: Started libpod-conmon-ce82436925889a82544b247acdbbe2f0c2fdf9047a7ae96201edbd0256460b08.scope.
Dec 06 06:25:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007d47be86acc5920a17f129515829f56730a70fd116f1fe33104490e1b6c75d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007d47be86acc5920a17f129515829f56730a70fd116f1fe33104490e1b6c75d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007d47be86acc5920a17f129515829f56730a70fd116f1fe33104490e1b6c75d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:30 compute-0 podman[74775]: 2025-12-06 06:25:29.924312505 +0000 UTC m=+0.041513970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:30 compute-0 podman[74775]: 2025-12-06 06:25:30.02325447 +0000 UTC m=+0.140455925 container init ce82436925889a82544b247acdbbe2f0c2fdf9047a7ae96201edbd0256460b08 (image=quay.io/ceph/ceph:v18, name=zealous_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:25:30 compute-0 podman[74775]: 2025-12-06 06:25:30.029294358 +0000 UTC m=+0.146495793 container start ce82436925889a82544b247acdbbe2f0c2fdf9047a7ae96201edbd0256460b08 (image=quay.io/ceph/ceph:v18, name=zealous_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:25:30 compute-0 podman[74775]: 2025-12-06 06:25:30.032741656 +0000 UTC m=+0.149943081 container attach ce82436925889a82544b247acdbbe2f0c2fdf9047a7ae96201edbd0256460b08 (image=quay.io/ceph/ceph:v18, name=zealous_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:25:30 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 06 06:25:30 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 06 06:25:30 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]:   from numpy import show_config as show_numpy_config
Dec 06 06:25:30 compute-0 ceph-mgr[74630]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 06:25:30 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:30.173+0000 7f54cbbf4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 06:25:30 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'influx'
Dec 06 06:25:30 compute-0 ceph-mgr[74630]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 06:25:30 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'insights'
Dec 06 06:25:30 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:30.443+0000 7f54cbbf4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 06:25:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 06 06:25:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/291215412' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:30 compute-0 zealous_solomon[74792]: 
Dec 06 06:25:30 compute-0 zealous_solomon[74792]: {
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "health": {
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "status": "HEALTH_OK",
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "checks": {},
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "mutes": []
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     },
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "election_epoch": 5,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "quorum": [
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         0
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     ],
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "quorum_names": [
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "compute-0"
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     ],
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "quorum_age": 7,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "monmap": {
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "epoch": 1,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "min_mon_release_name": "reef",
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "num_mons": 1
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     },
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "osdmap": {
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "epoch": 1,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "num_osds": 0,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "num_up_osds": 0,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "osd_up_since": 0,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "num_in_osds": 0,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "osd_in_since": 0,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "num_remapped_pgs": 0
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     },
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "pgmap": {
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "pgs_by_state": [],
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "num_pgs": 0,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "num_pools": 0,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "num_objects": 0,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "data_bytes": 0,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "bytes_used": 0,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "bytes_avail": 0,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "bytes_total": 0
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     },
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "fsmap": {
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "epoch": 1,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "by_rank": [],
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "up:standby": 0
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     },
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "mgrmap": {
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "available": false,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "num_standbys": 0,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "modules": [
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:             "iostat",
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:             "nfs",
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:             "restful"
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         ],
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "services": {}
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     },
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "servicemap": {
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "epoch": 1,
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "modified": "2025-12-06T06:25:19.129259+0000",
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:         "services": {}
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     },
Dec 06 06:25:30 compute-0 zealous_solomon[74792]:     "progress_events": {}
Dec 06 06:25:30 compute-0 zealous_solomon[74792]: }
Dec 06 06:25:30 compute-0 systemd[1]: libpod-ce82436925889a82544b247acdbbe2f0c2fdf9047a7ae96201edbd0256460b08.scope: Deactivated successfully.
Dec 06 06:25:30 compute-0 podman[74775]: 2025-12-06 06:25:30.482025542 +0000 UTC m=+0.599226977 container died ce82436925889a82544b247acdbbe2f0c2fdf9047a7ae96201edbd0256460b08 (image=quay.io/ceph/ceph:v18, name=zealous_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:25:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-007d47be86acc5920a17f129515829f56730a70fd116f1fe33104490e1b6c75d-merged.mount: Deactivated successfully.
Dec 06 06:25:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/291215412' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:30 compute-0 podman[74775]: 2025-12-06 06:25:30.535204227 +0000 UTC m=+0.652405652 container remove ce82436925889a82544b247acdbbe2f0c2fdf9047a7ae96201edbd0256460b08 (image=quay.io/ceph/ceph:v18, name=zealous_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:25:30 compute-0 systemd[1]: libpod-conmon-ce82436925889a82544b247acdbbe2f0c2fdf9047a7ae96201edbd0256460b08.scope: Deactivated successfully.
Dec 06 06:25:30 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'iostat'
Dec 06 06:25:30 compute-0 ceph-mgr[74630]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 06:25:30 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'k8sevents'
Dec 06 06:25:30 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:30.971+0000 7f54cbbf4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 06:25:32 compute-0 podman[74832]: 2025-12-06 06:25:32.60437575 +0000 UTC m=+0.042481499 container create e142a0d3bc1cfe3b77145824b6a5c9c2c34e4f79d4dfd482f429ea9827fd1148 (image=quay.io/ceph/ceph:v18, name=vigilant_ardinghelli, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:32 compute-0 systemd[1]: Started libpod-conmon-e142a0d3bc1cfe3b77145824b6a5c9c2c34e4f79d4dfd482f429ea9827fd1148.scope.
Dec 06 06:25:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f411c4a9d74cef488ad882b6b907672a087506a900c46a6757b49c4f3191107d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:32 compute-0 podman[74832]: 2025-12-06 06:25:32.58692042 +0000 UTC m=+0.025026209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f411c4a9d74cef488ad882b6b907672a087506a900c46a6757b49c4f3191107d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f411c4a9d74cef488ad882b6b907672a087506a900c46a6757b49c4f3191107d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:32 compute-0 podman[74832]: 2025-12-06 06:25:32.703579561 +0000 UTC m=+0.141685390 container init e142a0d3bc1cfe3b77145824b6a5c9c2c34e4f79d4dfd482f429ea9827fd1148 (image=quay.io/ceph/ceph:v18, name=vigilant_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Dec 06 06:25:32 compute-0 podman[74832]: 2025-12-06 06:25:32.70867252 +0000 UTC m=+0.146778279 container start e142a0d3bc1cfe3b77145824b6a5c9c2c34e4f79d4dfd482f429ea9827fd1148 (image=quay.io/ceph/ceph:v18, name=vigilant_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:25:32 compute-0 podman[74832]: 2025-12-06 06:25:32.712630607 +0000 UTC m=+0.150736406 container attach e142a0d3bc1cfe3b77145824b6a5c9c2c34e4f79d4dfd482f429ea9827fd1148 (image=quay.io/ceph/ceph:v18, name=vigilant_ardinghelli, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:32 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'localpool'
Dec 06 06:25:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 06 06:25:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/500708881' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]: 
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]: {
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "health": {
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "status": "HEALTH_OK",
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "checks": {},
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "mutes": []
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     },
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "election_epoch": 5,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "quorum": [
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         0
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     ],
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "quorum_names": [
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "compute-0"
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     ],
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "quorum_age": 10,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "monmap": {
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "epoch": 1,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "min_mon_release_name": "reef",
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "num_mons": 1
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     },
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "osdmap": {
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "epoch": 1,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "num_osds": 0,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "num_up_osds": 0,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "osd_up_since": 0,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "num_in_osds": 0,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "osd_in_since": 0,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "num_remapped_pgs": 0
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     },
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "pgmap": {
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "pgs_by_state": [],
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "num_pgs": 0,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "num_pools": 0,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "num_objects": 0,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "data_bytes": 0,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "bytes_used": 0,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "bytes_avail": 0,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "bytes_total": 0
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     },
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "fsmap": {
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "epoch": 1,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "by_rank": [],
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "up:standby": 0
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     },
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "mgrmap": {
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "available": false,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "num_standbys": 0,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "modules": [
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:             "iostat",
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:             "nfs",
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:             "restful"
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         ],
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "services": {}
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     },
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "servicemap": {
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "epoch": 1,
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "modified": "2025-12-06T06:25:19.129259+0000",
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:         "services": {}
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     },
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]:     "progress_events": {}
Dec 06 06:25:33 compute-0 vigilant_ardinghelli[74849]: }
Dec 06 06:25:33 compute-0 systemd[1]: libpod-e142a0d3bc1cfe3b77145824b6a5c9c2c34e4f79d4dfd482f429ea9827fd1148.scope: Deactivated successfully.
Dec 06 06:25:33 compute-0 podman[74832]: 2025-12-06 06:25:33.191671283 +0000 UTC m=+0.629777042 container died e142a0d3bc1cfe3b77145824b6a5c9c2c34e4f79d4dfd482f429ea9827fd1148 (image=quay.io/ceph/ceph:v18, name=vigilant_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Dec 06 06:25:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f411c4a9d74cef488ad882b6b907672a087506a900c46a6757b49c4f3191107d-merged.mount: Deactivated successfully.
Dec 06 06:25:33 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'mds_autoscaler'
Dec 06 06:25:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/500708881' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:33 compute-0 podman[74832]: 2025-12-06 06:25:33.237757071 +0000 UTC m=+0.675862830 container remove e142a0d3bc1cfe3b77145824b6a5c9c2c34e4f79d4dfd482f429ea9827fd1148 (image=quay.io/ceph/ceph:v18, name=vigilant_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 06:25:33 compute-0 systemd[1]: libpod-conmon-e142a0d3bc1cfe3b77145824b6a5c9c2c34e4f79d4dfd482f429ea9827fd1148.scope: Deactivated successfully.
Dec 06 06:25:33 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'mirroring'
Dec 06 06:25:34 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'nfs'
Dec 06 06:25:34 compute-0 ceph-mgr[74630]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 06:25:34 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'orchestrator'
Dec 06 06:25:34 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:34.983+0000 7f54cbbf4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 06:25:35 compute-0 podman[74887]: 2025-12-06 06:25:35.298379616 +0000 UTC m=+0.038173183 container create f2c49c4d3f1d9af84fae21e521d1b6752c6d28011715886e066f0193236d02fb (image=quay.io/ceph/ceph:v18, name=determined_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:25:35 compute-0 systemd[1]: Started libpod-conmon-f2c49c4d3f1d9af84fae21e521d1b6752c6d28011715886e066f0193236d02fb.scope.
Dec 06 06:25:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c16e5ec551ec445bddda996ebbec93b3748d971f9df09a47d608293470fd5a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c16e5ec551ec445bddda996ebbec93b3748d971f9df09a47d608293470fd5a9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c16e5ec551ec445bddda996ebbec93b3748d971f9df09a47d608293470fd5a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:35 compute-0 podman[74887]: 2025-12-06 06:25:35.37100838 +0000 UTC m=+0.110801967 container init f2c49c4d3f1d9af84fae21e521d1b6752c6d28011715886e066f0193236d02fb (image=quay.io/ceph/ceph:v18, name=determined_hermann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:35 compute-0 podman[74887]: 2025-12-06 06:25:35.377599409 +0000 UTC m=+0.117392976 container start f2c49c4d3f1d9af84fae21e521d1b6752c6d28011715886e066f0193236d02fb (image=quay.io/ceph/ceph:v18, name=determined_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 06:25:35 compute-0 podman[74887]: 2025-12-06 06:25:35.282011927 +0000 UTC m=+0.021805514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:35 compute-0 podman[74887]: 2025-12-06 06:25:35.381881882 +0000 UTC m=+0.121675449 container attach f2c49c4d3f1d9af84fae21e521d1b6752c6d28011715886e066f0193236d02fb (image=quay.io/ceph/ceph:v18, name=determined_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 06:25:35 compute-0 ceph-mgr[74630]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 06:25:35 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'osd_perf_query'
Dec 06 06:25:35 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:35.767+0000 7f54cbbf4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 06:25:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 06 06:25:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3892708027' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:35 compute-0 determined_hermann[74903]: 
Dec 06 06:25:35 compute-0 determined_hermann[74903]: {
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "health": {
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "status": "HEALTH_OK",
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "checks": {},
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "mutes": []
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     },
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "election_epoch": 5,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "quorum": [
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         0
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     ],
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "quorum_names": [
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "compute-0"
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     ],
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "quorum_age": 13,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "monmap": {
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "epoch": 1,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "min_mon_release_name": "reef",
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "num_mons": 1
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     },
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "osdmap": {
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "epoch": 1,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "num_osds": 0,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "num_up_osds": 0,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "osd_up_since": 0,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "num_in_osds": 0,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "osd_in_since": 0,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "num_remapped_pgs": 0
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     },
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "pgmap": {
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "pgs_by_state": [],
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "num_pgs": 0,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "num_pools": 0,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "num_objects": 0,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "data_bytes": 0,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "bytes_used": 0,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "bytes_avail": 0,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "bytes_total": 0
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     },
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "fsmap": {
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "epoch": 1,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "by_rank": [],
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "up:standby": 0
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     },
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "mgrmap": {
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "available": false,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "num_standbys": 0,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "modules": [
Dec 06 06:25:35 compute-0 determined_hermann[74903]:             "iostat",
Dec 06 06:25:35 compute-0 determined_hermann[74903]:             "nfs",
Dec 06 06:25:35 compute-0 determined_hermann[74903]:             "restful"
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         ],
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "services": {}
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     },
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "servicemap": {
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "epoch": 1,
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "modified": "2025-12-06T06:25:19.129259+0000",
Dec 06 06:25:35 compute-0 determined_hermann[74903]:         "services": {}
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     },
Dec 06 06:25:35 compute-0 determined_hermann[74903]:     "progress_events": {}
Dec 06 06:25:35 compute-0 determined_hermann[74903]: }
Dec 06 06:25:35 compute-0 systemd[1]: libpod-f2c49c4d3f1d9af84fae21e521d1b6752c6d28011715886e066f0193236d02fb.scope: Deactivated successfully.
Dec 06 06:25:35 compute-0 podman[74887]: 2025-12-06 06:25:35.833383712 +0000 UTC m=+0.573177289 container died f2c49c4d3f1d9af84fae21e521d1b6752c6d28011715886e066f0193236d02fb (image=quay.io/ceph/ceph:v18, name=determined_hermann, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c16e5ec551ec445bddda996ebbec93b3748d971f9df09a47d608293470fd5a9-merged.mount: Deactivated successfully.
Dec 06 06:25:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3892708027' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:35 compute-0 podman[74887]: 2025-12-06 06:25:35.890035814 +0000 UTC m=+0.629829381 container remove f2c49c4d3f1d9af84fae21e521d1b6752c6d28011715886e066f0193236d02fb (image=quay.io/ceph/ceph:v18, name=determined_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 06:25:35 compute-0 systemd[1]: libpod-conmon-f2c49c4d3f1d9af84fae21e521d1b6752c6d28011715886e066f0193236d02fb.scope: Deactivated successfully.
Dec 06 06:25:36 compute-0 ceph-mgr[74630]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 06:25:36 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'osd_support'
Dec 06 06:25:36 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:36.040+0000 7f54cbbf4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 06:25:36 compute-0 ceph-mgr[74630]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 06:25:36 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'pg_autoscaler'
Dec 06 06:25:36 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:36.303+0000 7f54cbbf4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 06:25:36 compute-0 ceph-mgr[74630]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 06:25:36 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'progress'
Dec 06 06:25:36 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:36.598+0000 7f54cbbf4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 06:25:36 compute-0 ceph-mgr[74630]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 06:25:36 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'prometheus'
Dec 06 06:25:36 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:36.868+0000 7f54cbbf4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 06:25:37 compute-0 ceph-mgr[74630]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 06:25:37 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'rbd_support'
Dec 06 06:25:37 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:37.970+0000 7f54cbbf4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 06:25:38 compute-0 podman[74943]: 2025-12-06 06:25:37.943332078 +0000 UTC m=+0.030511754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:38 compute-0 podman[74943]: 2025-12-06 06:25:38.064378475 +0000 UTC m=+0.151558161 container create 89818c94b76a64fbfa9b45f3941c13eb59163e9d5398202cc0822c6f7442c4c5 (image=quay.io/ceph/ceph:v18, name=inspiring_cori, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:38 compute-0 systemd[1]: Started libpod-conmon-89818c94b76a64fbfa9b45f3941c13eb59163e9d5398202cc0822c6f7442c4c5.scope.
Dec 06 06:25:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2788d3aa90e87d61a483e08e1172035633ac1f51b21ba420d33ed72d0db8dfb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2788d3aa90e87d61a483e08e1172035633ac1f51b21ba420d33ed72d0db8dfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2788d3aa90e87d61a483e08e1172035633ac1f51b21ba420d33ed72d0db8dfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:38 compute-0 podman[74943]: 2025-12-06 06:25:38.162986544 +0000 UTC m=+0.250166280 container init 89818c94b76a64fbfa9b45f3941c13eb59163e9d5398202cc0822c6f7442c4c5 (image=quay.io/ceph/ceph:v18, name=inspiring_cori, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:25:38 compute-0 podman[74943]: 2025-12-06 06:25:38.169132584 +0000 UTC m=+0.256312250 container start 89818c94b76a64fbfa9b45f3941c13eb59163e9d5398202cc0822c6f7442c4c5 (image=quay.io/ceph/ceph:v18, name=inspiring_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:38 compute-0 podman[74943]: 2025-12-06 06:25:38.173363397 +0000 UTC m=+0.260543073 container attach 89818c94b76a64fbfa9b45f3941c13eb59163e9d5398202cc0822c6f7442c4c5 (image=quay.io/ceph/ceph:v18, name=inspiring_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 06:25:38 compute-0 ceph-mgr[74630]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 06:25:38 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'restful'
Dec 06 06:25:38 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:38.295+0000 7f54cbbf4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 06:25:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 06 06:25:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1924055505' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:38 compute-0 inspiring_cori[74959]: 
Dec 06 06:25:38 compute-0 inspiring_cori[74959]: {
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "health": {
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "status": "HEALTH_OK",
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "checks": {},
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "mutes": []
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     },
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "election_epoch": 5,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "quorum": [
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         0
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     ],
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "quorum_names": [
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "compute-0"
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     ],
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "quorum_age": 16,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "monmap": {
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "epoch": 1,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "min_mon_release_name": "reef",
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "num_mons": 1
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     },
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "osdmap": {
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "epoch": 1,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "num_osds": 0,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "num_up_osds": 0,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "osd_up_since": 0,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "num_in_osds": 0,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "osd_in_since": 0,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "num_remapped_pgs": 0
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     },
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "pgmap": {
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "pgs_by_state": [],
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "num_pgs": 0,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "num_pools": 0,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "num_objects": 0,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "data_bytes": 0,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "bytes_used": 0,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "bytes_avail": 0,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "bytes_total": 0
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     },
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "fsmap": {
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "epoch": 1,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "by_rank": [],
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "up:standby": 0
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     },
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "mgrmap": {
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "available": false,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "num_standbys": 0,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "modules": [
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:             "iostat",
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:             "nfs",
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:             "restful"
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         ],
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "services": {}
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     },
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "servicemap": {
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "epoch": 1,
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "modified": "2025-12-06T06:25:19.129259+0000",
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:         "services": {}
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     },
Dec 06 06:25:38 compute-0 inspiring_cori[74959]:     "progress_events": {}
Dec 06 06:25:38 compute-0 inspiring_cori[74959]: }
Dec 06 06:25:38 compute-0 systemd[1]: libpod-89818c94b76a64fbfa9b45f3941c13eb59163e9d5398202cc0822c6f7442c4c5.scope: Deactivated successfully.
Dec 06 06:25:38 compute-0 podman[74943]: 2025-12-06 06:25:38.590262272 +0000 UTC m=+0.677441938 container died 89818c94b76a64fbfa9b45f3941c13eb59163e9d5398202cc0822c6f7442c4c5 (image=quay.io/ceph/ceph:v18, name=inspiring_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:25:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2788d3aa90e87d61a483e08e1172035633ac1f51b21ba420d33ed72d0db8dfb-merged.mount: Deactivated successfully.
Dec 06 06:25:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1924055505' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:38 compute-0 podman[74943]: 2025-12-06 06:25:38.646691542 +0000 UTC m=+0.733871198 container remove 89818c94b76a64fbfa9b45f3941c13eb59163e9d5398202cc0822c6f7442c4c5 (image=quay.io/ceph/ceph:v18, name=inspiring_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 06:25:38 compute-0 systemd[1]: libpod-conmon-89818c94b76a64fbfa9b45f3941c13eb59163e9d5398202cc0822c6f7442c4c5.scope: Deactivated successfully.
Dec 06 06:25:39 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'rgw'
Dec 06 06:25:39 compute-0 ceph-mgr[74630]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 06:25:39 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'rook'
Dec 06 06:25:39 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:39.896+0000 7f54cbbf4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 06:25:40 compute-0 podman[74999]: 2025-12-06 06:25:40.719943073 +0000 UTC m=+0.048104248 container create c9a80f5c58e554760035f337c07dfba0bf197885dd3e5716f824e22f314e9f4a (image=quay.io/ceph/ceph:v18, name=unruffled_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 06:25:40 compute-0 systemd[1]: Started libpod-conmon-c9a80f5c58e554760035f337c07dfba0bf197885dd3e5716f824e22f314e9f4a.scope.
Dec 06 06:25:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b25f4448ccfe4287113531483ffc72b53d1a79cd1d28cdd7f206f22bcde41d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b25f4448ccfe4287113531483ffc72b53d1a79cd1d28cdd7f206f22bcde41d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b25f4448ccfe4287113531483ffc72b53d1a79cd1d28cdd7f206f22bcde41d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:40 compute-0 podman[74999]: 2025-12-06 06:25:40.693039819 +0000 UTC m=+0.021201024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:40 compute-0 podman[74999]: 2025-12-06 06:25:40.795462484 +0000 UTC m=+0.123623699 container init c9a80f5c58e554760035f337c07dfba0bf197885dd3e5716f824e22f314e9f4a (image=quay.io/ceph/ceph:v18, name=unruffled_banach, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 06:25:40 compute-0 podman[74999]: 2025-12-06 06:25:40.802019571 +0000 UTC m=+0.130180746 container start c9a80f5c58e554760035f337c07dfba0bf197885dd3e5716f824e22f314e9f4a (image=quay.io/ceph/ceph:v18, name=unruffled_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:25:40 compute-0 podman[74999]: 2025-12-06 06:25:40.807043719 +0000 UTC m=+0.135204924 container attach c9a80f5c58e554760035f337c07dfba0bf197885dd3e5716f824e22f314e9f4a (image=quay.io/ceph/ceph:v18, name=unruffled_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:25:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 06 06:25:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/176550574' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:41 compute-0 unruffled_banach[75015]: 
Dec 06 06:25:41 compute-0 unruffled_banach[75015]: {
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "health": {
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "status": "HEALTH_OK",
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "checks": {},
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "mutes": []
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     },
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "election_epoch": 5,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "quorum": [
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         0
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     ],
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "quorum_names": [
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "compute-0"
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     ],
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "quorum_age": 18,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "monmap": {
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "epoch": 1,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "min_mon_release_name": "reef",
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "num_mons": 1
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     },
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "osdmap": {
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "epoch": 1,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "num_osds": 0,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "num_up_osds": 0,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "osd_up_since": 0,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "num_in_osds": 0,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "osd_in_since": 0,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "num_remapped_pgs": 0
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     },
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "pgmap": {
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "pgs_by_state": [],
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "num_pgs": 0,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "num_pools": 0,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "num_objects": 0,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "data_bytes": 0,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "bytes_used": 0,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "bytes_avail": 0,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "bytes_total": 0
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     },
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "fsmap": {
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "epoch": 1,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "by_rank": [],
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "up:standby": 0
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     },
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "mgrmap": {
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "available": false,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "num_standbys": 0,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "modules": [
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:             "iostat",
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:             "nfs",
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:             "restful"
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         ],
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "services": {}
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     },
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "servicemap": {
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "epoch": 1,
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "modified": "2025-12-06T06:25:19.129259+0000",
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:         "services": {}
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     },
Dec 06 06:25:41 compute-0 unruffled_banach[75015]:     "progress_events": {}
Dec 06 06:25:41 compute-0 unruffled_banach[75015]: }
Dec 06 06:25:41 compute-0 systemd[1]: libpod-c9a80f5c58e554760035f337c07dfba0bf197885dd3e5716f824e22f314e9f4a.scope: Deactivated successfully.
Dec 06 06:25:41 compute-0 podman[74999]: 2025-12-06 06:25:41.246476534 +0000 UTC m=+0.574637719 container died c9a80f5c58e554760035f337c07dfba0bf197885dd3e5716f824e22f314e9f4a (image=quay.io/ceph/ceph:v18, name=unruffled_banach, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 06:25:42 compute-0 ceph-mgr[74630]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 06:25:42 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:42.135+0000 7f54cbbf4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 06:25:42 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'selftest'
Dec 06 06:25:42 compute-0 ceph-mgr[74630]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 06:25:42 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'snap_schedule'
Dec 06 06:25:42 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:42.402+0000 7f54cbbf4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 06:25:42 compute-0 ceph-mgr[74630]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 06:25:42 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'stats'
Dec 06 06:25:42 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:42.687+0000 7f54cbbf4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 06:25:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/176550574' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-31b25f4448ccfe4287113531483ffc72b53d1a79cd1d28cdd7f206f22bcde41d-merged.mount: Deactivated successfully.
Dec 06 06:25:42 compute-0 podman[74999]: 2025-12-06 06:25:42.960298358 +0000 UTC m=+2.288459543 container remove c9a80f5c58e554760035f337c07dfba0bf197885dd3e5716f824e22f314e9f4a (image=quay.io/ceph/ceph:v18, name=unruffled_banach, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:25:42 compute-0 systemd[1]: libpod-conmon-c9a80f5c58e554760035f337c07dfba0bf197885dd3e5716f824e22f314e9f4a.scope: Deactivated successfully.
Dec 06 06:25:42 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'status'
Dec 06 06:25:43 compute-0 ceph-mgr[74630]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 06:25:43 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'telegraf'
Dec 06 06:25:43 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:43.276+0000 7f54cbbf4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 06:25:43 compute-0 ceph-mgr[74630]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 06:25:43 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'telemetry'
Dec 06 06:25:43 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:43.538+0000 7f54cbbf4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 06:25:44 compute-0 ceph-mgr[74630]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 06:25:44 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'test_orchestrator'
Dec 06 06:25:44 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:44.187+0000 7f54cbbf4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 06:25:44 compute-0 ceph-mgr[74630]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 06:25:44 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'volumes'
Dec 06 06:25:44 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:44.909+0000 7f54cbbf4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 06:25:45 compute-0 podman[75057]: 2025-12-06 06:25:45.040894483 +0000 UTC m=+0.045866564 container create 2c9b975df7218f93bd4880872e462644bca1d2df103cc90fe49c70f8c405844c (image=quay.io/ceph/ceph:v18, name=pedantic_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:25:45 compute-0 systemd[1]: Started libpod-conmon-2c9b975df7218f93bd4880872e462644bca1d2df103cc90fe49c70f8c405844c.scope.
Dec 06 06:25:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af151e994dce294abe54751883315f9b0cb5e86c16e282838abb2163e52a8976/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af151e994dce294abe54751883315f9b0cb5e86c16e282838abb2163e52a8976/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af151e994dce294abe54751883315f9b0cb5e86c16e282838abb2163e52a8976/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:45 compute-0 podman[75057]: 2025-12-06 06:25:45.019596748 +0000 UTC m=+0.024568859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:45 compute-0 podman[75057]: 2025-12-06 06:25:45.12037402 +0000 UTC m=+0.125346121 container init 2c9b975df7218f93bd4880872e462644bca1d2df103cc90fe49c70f8c405844c (image=quay.io/ceph/ceph:v18, name=pedantic_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 06:25:45 compute-0 podman[75057]: 2025-12-06 06:25:45.125713794 +0000 UTC m=+0.130685875 container start 2c9b975df7218f93bd4880872e462644bca1d2df103cc90fe49c70f8c405844c (image=quay.io/ceph/ceph:v18, name=pedantic_goldstine, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 06:25:45 compute-0 podman[75057]: 2025-12-06 06:25:45.13009458 +0000 UTC m=+0.135066681 container attach 2c9b975df7218f93bd4880872e462644bca1d2df103cc90fe49c70f8c405844c (image=quay.io/ceph/ceph:v18, name=pedantic_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 06:25:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 06 06:25:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1158767641' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]: 
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]: {
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "health": {
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "status": "HEALTH_OK",
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "checks": {},
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "mutes": []
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     },
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "election_epoch": 5,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "quorum": [
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         0
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     ],
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "quorum_names": [
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "compute-0"
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     ],
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "quorum_age": 23,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "monmap": {
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "epoch": 1,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "min_mon_release_name": "reef",
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "num_mons": 1
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     },
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "osdmap": {
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "epoch": 1,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "num_osds": 0,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "num_up_osds": 0,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "osd_up_since": 0,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "num_in_osds": 0,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "osd_in_since": 0,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "num_remapped_pgs": 0
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     },
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "pgmap": {
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "pgs_by_state": [],
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "num_pgs": 0,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "num_pools": 0,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "num_objects": 0,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "data_bytes": 0,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "bytes_used": 0,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "bytes_avail": 0,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "bytes_total": 0
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     },
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "fsmap": {
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "epoch": 1,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "by_rank": [],
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "up:standby": 0
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     },
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "mgrmap": {
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "available": false,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "num_standbys": 0,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "modules": [
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:             "iostat",
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:             "nfs",
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:             "restful"
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         ],
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "services": {}
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     },
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "servicemap": {
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "epoch": 1,
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "modified": "2025-12-06T06:25:19.129259+0000",
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:         "services": {}
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     },
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]:     "progress_events": {}
Dec 06 06:25:45 compute-0 pedantic_goldstine[75073]: }
Dec 06 06:25:45 compute-0 systemd[1]: libpod-2c9b975df7218f93bd4880872e462644bca1d2df103cc90fe49c70f8c405844c.scope: Deactivated successfully.
Dec 06 06:25:45 compute-0 podman[75057]: 2025-12-06 06:25:45.600501608 +0000 UTC m=+0.605473699 container died 2c9b975df7218f93bd4880872e462644bca1d2df103cc90fe49c70f8c405844c (image=quay.io/ceph/ceph:v18, name=pedantic_goldstine, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 06:25:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-af151e994dce294abe54751883315f9b0cb5e86c16e282838abb2163e52a8976-merged.mount: Deactivated successfully.
Dec 06 06:25:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1158767641' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:45 compute-0 podman[75057]: 2025-12-06 06:25:45.647639345 +0000 UTC m=+0.652611426 container remove 2c9b975df7218f93bd4880872e462644bca1d2df103cc90fe49c70f8c405844c (image=quay.io/ceph/ceph:v18, name=pedantic_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 06:25:45 compute-0 systemd[1]: libpod-conmon-2c9b975df7218f93bd4880872e462644bca1d2df103cc90fe49c70f8c405844c.scope: Deactivated successfully.
Dec 06 06:25:45 compute-0 ceph-mgr[74630]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 06:25:45 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'zabbix'
Dec 06 06:25:45 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:45.725+0000 7f54cbbf4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 06:25:45 compute-0 ceph-mgr[74630]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 06:25:45 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:45.991+0000 7f54cbbf4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 06:25:45 compute-0 ceph-mgr[74630]: ms_deliver_dispatch: unhandled message 0x562d30b9af20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec 06 06:25:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.sfzyix
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr handle_mgr_map Activating!
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr handle_mgr_map I am now activating
Dec 06 06:25:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.sfzyix(active, starting, since 0.014802s)
Dec 06 06:25:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec 06 06:25:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e1 all = 1
Dec 06 06:25:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 06 06:25:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec 06 06:25:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec 06 06:25:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.sfzyix", "id": "compute-0.sfzyix"} v 0) v1
Dec 06 06:25:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr metadata", "who": "compute-0.sfzyix", "id": "compute-0.sfzyix"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: balancer
Dec 06 06:25:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Manager daemon compute-0.sfzyix is now available
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: crash
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [balancer INFO root] Starting
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:25:46
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [balancer INFO root] No pools available
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: devicehealth
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Starting
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: iostat
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: nfs
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: orchestrator
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: pg_autoscaler
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: progress
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [progress INFO root] Loading...
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [progress INFO root] No stored events to load
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [progress INFO root] Loaded [] historic events
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [progress INFO root] Loaded OSDMap, ready.
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [rbd_support INFO root] recovery thread starting
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [rbd_support INFO root] starting setup
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: rbd_support
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: restful
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [restful INFO root] server_addr: :: server_port: 8003
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: status
Dec 06 06:25:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.sfzyix/mirror_snapshot_schedule"} v 0) v1
Dec 06 06:25:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.sfzyix/mirror_snapshot_schedule"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: telemetry
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [restful WARNING root] server not running: no certificate configured
Dec 06 06:25:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [rbd_support INFO root] PerfHandler: starting
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TaskHandler: starting
Dec 06 06:25:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.sfzyix/trash_purge_schedule"} v 0) v1
Dec 06 06:25:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.sfzyix/trash_purge_schedule"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' 
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:25:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: [rbd_support INFO root] setup complete
Dec 06 06:25:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' 
Dec 06 06:25:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Dec 06 06:25:46 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: volumes
Dec 06 06:25:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' 
Dec 06 06:25:46 compute-0 ceph-mon[74339]: Activating manager daemon compute-0.sfzyix
Dec 06 06:25:46 compute-0 ceph-mon[74339]: mgrmap e2: compute-0.sfzyix(active, starting, since 0.014802s)
Dec 06 06:25:46 compute-0 ceph-mon[74339]: from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mon[74339]: from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mon[74339]: from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mon[74339]: from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mon[74339]: from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr metadata", "who": "compute-0.sfzyix", "id": "compute-0.sfzyix"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mon[74339]: Manager daemon compute-0.sfzyix is now available
Dec 06 06:25:46 compute-0 ceph-mon[74339]: from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.sfzyix/mirror_snapshot_schedule"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mon[74339]: from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.sfzyix/trash_purge_schedule"}]: dispatch
Dec 06 06:25:46 compute-0 ceph-mon[74339]: from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' 
Dec 06 06:25:46 compute-0 ceph-mon[74339]: from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' 
Dec 06 06:25:46 compute-0 ceph-mon[74339]: from='mgr.14102 192.168.122.100:0/3690080275' entity='mgr.compute-0.sfzyix' 
Dec 06 06:25:47 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.sfzyix(active, since 1.02513s)
Dec 06 06:25:47 compute-0 podman[75191]: 2025-12-06 06:25:47.717395049 +0000 UTC m=+0.046853323 container create 413e649866ac888ce3d97b2d5264a5d81dab0c318bd0cdba08a6cac81df6df4c (image=quay.io/ceph/ceph:v18, name=confident_bhabha, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:25:47 compute-0 systemd[1]: Started libpod-conmon-413e649866ac888ce3d97b2d5264a5d81dab0c318bd0cdba08a6cac81df6df4c.scope.
Dec 06 06:25:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe577fa2e39ed59a403addf6e05b39b494b05338f30ae64cc30d11c2378cddc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe577fa2e39ed59a403addf6e05b39b494b05338f30ae64cc30d11c2378cddc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe577fa2e39ed59a403addf6e05b39b494b05338f30ae64cc30d11c2378cddc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:47 compute-0 podman[75191]: 2025-12-06 06:25:47.784678038 +0000 UTC m=+0.114136322 container init 413e649866ac888ce3d97b2d5264a5d81dab0c318bd0cdba08a6cac81df6df4c (image=quay.io/ceph/ceph:v18, name=confident_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:25:47 compute-0 podman[75191]: 2025-12-06 06:25:47.694596145 +0000 UTC m=+0.024054449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:47 compute-0 podman[75191]: 2025-12-06 06:25:47.789698266 +0000 UTC m=+0.119156540 container start 413e649866ac888ce3d97b2d5264a5d81dab0c318bd0cdba08a6cac81df6df4c (image=quay.io/ceph/ceph:v18, name=confident_bhabha, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:47 compute-0 podman[75191]: 2025-12-06 06:25:47.793739906 +0000 UTC m=+0.123198210 container attach 413e649866ac888ce3d97b2d5264a5d81dab0c318bd0cdba08a6cac81df6df4c (image=quay.io/ceph/ceph:v18, name=confident_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 06 06:25:48 compute-0 ceph-mgr[74630]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 06:25:48 compute-0 ceph-mon[74339]: mgrmap e3: compute-0.sfzyix(active, since 1.02513s)
Dec 06 06:25:48 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.sfzyix(active, since 2s)
Dec 06 06:25:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec 06 06:25:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898197868' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:48 compute-0 confident_bhabha[75207]: 
Dec 06 06:25:48 compute-0 confident_bhabha[75207]: {
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "health": {
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "status": "HEALTH_OK",
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "checks": {},
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "mutes": []
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     },
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "election_epoch": 5,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "quorum": [
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         0
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     ],
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "quorum_names": [
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "compute-0"
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     ],
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "quorum_age": 26,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "monmap": {
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "epoch": 1,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "min_mon_release_name": "reef",
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "num_mons": 1
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     },
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "osdmap": {
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "epoch": 1,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "num_osds": 0,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "num_up_osds": 0,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "osd_up_since": 0,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "num_in_osds": 0,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "osd_in_since": 0,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "num_remapped_pgs": 0
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     },
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "pgmap": {
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "pgs_by_state": [],
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "num_pgs": 0,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "num_pools": 0,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "num_objects": 0,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "data_bytes": 0,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "bytes_used": 0,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "bytes_avail": 0,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "bytes_total": 0
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     },
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "fsmap": {
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "epoch": 1,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "by_rank": [],
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "up:standby": 0
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     },
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "mgrmap": {
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "available": true,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "num_standbys": 0,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "modules": [
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:             "iostat",
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:             "nfs",
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:             "restful"
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         ],
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "services": {}
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     },
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "servicemap": {
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "epoch": 1,
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "modified": "2025-12-06T06:25:19.129259+0000",
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:         "services": {}
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     },
Dec 06 06:25:48 compute-0 confident_bhabha[75207]:     "progress_events": {}
Dec 06 06:25:48 compute-0 confident_bhabha[75207]: }
Dec 06 06:25:48 compute-0 systemd[1]: libpod-413e649866ac888ce3d97b2d5264a5d81dab0c318bd0cdba08a6cac81df6df4c.scope: Deactivated successfully.
Dec 06 06:25:48 compute-0 podman[75191]: 2025-12-06 06:25:48.606979638 +0000 UTC m=+0.936437912 container died 413e649866ac888ce3d97b2d5264a5d81dab0c318bd0cdba08a6cac81df6df4c (image=quay.io/ceph/ceph:v18, name=confident_bhabha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fe577fa2e39ed59a403addf6e05b39b494b05338f30ae64cc30d11c2378cddc-merged.mount: Deactivated successfully.
Dec 06 06:25:48 compute-0 podman[75191]: 2025-12-06 06:25:48.653213068 +0000 UTC m=+0.982671342 container remove 413e649866ac888ce3d97b2d5264a5d81dab0c318bd0cdba08a6cac81df6df4c (image=quay.io/ceph/ceph:v18, name=confident_bhabha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 06:25:48 compute-0 systemd[1]: libpod-conmon-413e649866ac888ce3d97b2d5264a5d81dab0c318bd0cdba08a6cac81df6df4c.scope: Deactivated successfully.
Dec 06 06:25:48 compute-0 podman[75243]: 2025-12-06 06:25:48.719317004 +0000 UTC m=+0.045750111 container create 443c6f746d7d6b4c75878e0d37fd89c8287320225eed6188958ad5d72b899cee (image=quay.io/ceph/ceph:v18, name=happy_nightingale, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 06:25:48 compute-0 systemd[1]: Started libpod-conmon-443c6f746d7d6b4c75878e0d37fd89c8287320225eed6188958ad5d72b899cee.scope.
Dec 06 06:25:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f5357f1bd877512f60a7eeb12d069791c882361b7f1e527d825616dcdf9ae/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f5357f1bd877512f60a7eeb12d069791c882361b7f1e527d825616dcdf9ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f5357f1bd877512f60a7eeb12d069791c882361b7f1e527d825616dcdf9ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5f5357f1bd877512f60a7eeb12d069791c882361b7f1e527d825616dcdf9ae/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:48 compute-0 podman[75243]: 2025-12-06 06:25:48.699775824 +0000 UTC m=+0.026208941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:48 compute-0 podman[75243]: 2025-12-06 06:25:48.795387755 +0000 UTC m=+0.121820852 container init 443c6f746d7d6b4c75878e0d37fd89c8287320225eed6188958ad5d72b899cee (image=quay.io/ceph/ceph:v18, name=happy_nightingale, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:25:48 compute-0 podman[75243]: 2025-12-06 06:25:48.80022872 +0000 UTC m=+0.126661807 container start 443c6f746d7d6b4c75878e0d37fd89c8287320225eed6188958ad5d72b899cee (image=quay.io/ceph/ceph:v18, name=happy_nightingale, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:25:48 compute-0 podman[75243]: 2025-12-06 06:25:48.804178806 +0000 UTC m=+0.130611923 container attach 443c6f746d7d6b4c75878e0d37fd89c8287320225eed6188958ad5d72b899cee (image=quay.io/ceph/ceph:v18, name=happy_nightingale, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:25:49 compute-0 ceph-mon[74339]: mgrmap e4: compute-0.sfzyix(active, since 2s)
Dec 06 06:25:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2898197868' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 06:25:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec 06 06:25:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1922759667' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 06 06:25:49 compute-0 systemd[1]: libpod-443c6f746d7d6b4c75878e0d37fd89c8287320225eed6188958ad5d72b899cee.scope: Deactivated successfully.
Dec 06 06:25:49 compute-0 podman[75243]: 2025-12-06 06:25:49.346324021 +0000 UTC m=+0.672757118 container died 443c6f746d7d6b4c75878e0d37fd89c8287320225eed6188958ad5d72b899cee (image=quay.io/ceph/ceph:v18, name=happy_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d5f5357f1bd877512f60a7eeb12d069791c882361b7f1e527d825616dcdf9ae-merged.mount: Deactivated successfully.
Dec 06 06:25:49 compute-0 podman[75243]: 2025-12-06 06:25:49.401014605 +0000 UTC m=+0.727447712 container remove 443c6f746d7d6b4c75878e0d37fd89c8287320225eed6188958ad5d72b899cee (image=quay.io/ceph/ceph:v18, name=happy_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:25:49 compute-0 systemd[1]: libpod-conmon-443c6f746d7d6b4c75878e0d37fd89c8287320225eed6188958ad5d72b899cee.scope: Deactivated successfully.
Dec 06 06:25:49 compute-0 podman[75297]: 2025-12-06 06:25:49.482495583 +0000 UTC m=+0.050022085 container create 4fb4377835f23c20a6958332441ac31e53e15360576b11f6564d58ba8fcae225 (image=quay.io/ceph/ceph:v18, name=blissful_beaver, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:25:49 compute-0 systemd[1]: Started libpod-conmon-4fb4377835f23c20a6958332441ac31e53e15360576b11f6564d58ba8fcae225.scope.
Dec 06 06:25:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d75e224535aaf8cd871a77f214325b169f000ee9c2298980f57b8f888baef9d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d75e224535aaf8cd871a77f214325b169f000ee9c2298980f57b8f888baef9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d75e224535aaf8cd871a77f214325b169f000ee9c2298980f57b8f888baef9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:49 compute-0 podman[75297]: 2025-12-06 06:25:49.463547063 +0000 UTC m=+0.031073595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:49 compute-0 podman[75297]: 2025-12-06 06:25:49.567871874 +0000 UTC m=+0.135398406 container init 4fb4377835f23c20a6958332441ac31e53e15360576b11f6564d58ba8fcae225 (image=quay.io/ceph/ceph:v18, name=blissful_beaver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 06:25:49 compute-0 podman[75297]: 2025-12-06 06:25:49.573716658 +0000 UTC m=+0.141243160 container start 4fb4377835f23c20a6958332441ac31e53e15360576b11f6564d58ba8fcae225 (image=quay.io/ceph/ceph:v18, name=blissful_beaver, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:25:49 compute-0 podman[75297]: 2025-12-06 06:25:49.577618165 +0000 UTC m=+0.145144707 container attach 4fb4377835f23c20a6958332441ac31e53e15360576b11f6564d58ba8fcae225 (image=quay.io/ceph/ceph:v18, name=blissful_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:25:50 compute-0 ceph-mgr[74630]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 06:25:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1922759667' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 06 06:25:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Dec 06 06:25:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3206223454' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 06 06:25:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3206223454' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec 06 06:25:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3206223454' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  1: '-n'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  2: 'mgr.compute-0.sfzyix'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  3: '-f'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  4: '--setuser'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  5: 'ceph'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  6: '--setgroup'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  7: 'ceph'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  8: '--default-log-to-file=false'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  9: '--default-log-to-journald=true'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr respawn  exe_path /proc/self/exe
Dec 06 06:25:51 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.sfzyix(active, since 5s)
Dec 06 06:25:51 compute-0 systemd[1]: libpod-4fb4377835f23c20a6958332441ac31e53e15360576b11f6564d58ba8fcae225.scope: Deactivated successfully.
Dec 06 06:25:51 compute-0 podman[75297]: 2025-12-06 06:25:51.086338077 +0000 UTC m=+1.653864599 container died 4fb4377835f23c20a6958332441ac31e53e15360576b11f6564d58ba8fcae225 (image=quay.io/ceph/ceph:v18, name=blissful_beaver, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 06:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d75e224535aaf8cd871a77f214325b169f000ee9c2298980f57b8f888baef9d-merged.mount: Deactivated successfully.
Dec 06 06:25:51 compute-0 podman[75297]: 2025-12-06 06:25:51.158818678 +0000 UTC m=+1.726345180 container remove 4fb4377835f23c20a6958332441ac31e53e15360576b11f6564d58ba8fcae225 (image=quay.io/ceph/ceph:v18, name=blissful_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:25:51 compute-0 systemd[1]: libpod-conmon-4fb4377835f23c20a6958332441ac31e53e15360576b11f6564d58ba8fcae225.scope: Deactivated successfully.
Dec 06 06:25:51 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: ignoring --setuser ceph since I am not root
Dec 06 06:25:51 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: ignoring --setgroup ceph since I am not root
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: pidfile_write: ignore empty --pid-file
Dec 06 06:25:51 compute-0 podman[75353]: 2025-12-06 06:25:51.223815434 +0000 UTC m=+0.041201734 container create 9466023f1f825cb38f47c053c362a10eaf906d06cfa1ef50a84cc6457eccbf3c (image=quay.io/ceph/ceph:v18, name=beautiful_goldberg, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:25:51 compute-0 systemd[1]: Started libpod-conmon-9466023f1f825cb38f47c053c362a10eaf906d06cfa1ef50a84cc6457eccbf3c.scope.
Dec 06 06:25:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8319362fb95c8f6335069bb3a9c72bea841b619c680d17133251f5a6b36a91b3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8319362fb95c8f6335069bb3a9c72bea841b619c680d17133251f5a6b36a91b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8319362fb95c8f6335069bb3a9c72bea841b619c680d17133251f5a6b36a91b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:51 compute-0 podman[75353]: 2025-12-06 06:25:51.29449989 +0000 UTC m=+0.111886210 container init 9466023f1f825cb38f47c053c362a10eaf906d06cfa1ef50a84cc6457eccbf3c (image=quay.io/ceph/ceph:v18, name=beautiful_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:25:51 compute-0 podman[75353]: 2025-12-06 06:25:51.299914205 +0000 UTC m=+0.117300495 container start 9466023f1f825cb38f47c053c362a10eaf906d06cfa1ef50a84cc6457eccbf3c (image=quay.io/ceph/ceph:v18, name=beautiful_goldberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:25:51 compute-0 podman[75353]: 2025-12-06 06:25:51.204378585 +0000 UTC m=+0.021764905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:51 compute-0 podman[75353]: 2025-12-06 06:25:51.304374162 +0000 UTC m=+0.121760462 container attach 9466023f1f825cb38f47c053c362a10eaf906d06cfa1ef50a84cc6457eccbf3c (image=quay.io/ceph/ceph:v18, name=beautiful_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'alerts'
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'balancer'
Dec 06 06:25:51 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:51.644+0000 7f67c37bb140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 06:25:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec 06 06:25:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1839503818' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 06:25:51 compute-0 beautiful_goldberg[75392]: {
Dec 06 06:25:51 compute-0 beautiful_goldberg[75392]:     "epoch": 5,
Dec 06 06:25:51 compute-0 beautiful_goldberg[75392]:     "available": true,
Dec 06 06:25:51 compute-0 beautiful_goldberg[75392]:     "active_name": "compute-0.sfzyix",
Dec 06 06:25:51 compute-0 beautiful_goldberg[75392]:     "num_standby": 0
Dec 06 06:25:51 compute-0 beautiful_goldberg[75392]: }
Dec 06 06:25:51 compute-0 systemd[1]: libpod-9466023f1f825cb38f47c053c362a10eaf906d06cfa1ef50a84cc6457eccbf3c.scope: Deactivated successfully.
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 06:25:51 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'cephadm'
Dec 06 06:25:51 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:51.926+0000 7f67c37bb140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 06:25:51 compute-0 podman[75418]: 2025-12-06 06:25:51.951660484 +0000 UTC m=+0.023380156 container died 9466023f1f825cb38f47c053c362a10eaf906d06cfa1ef50a84cc6457eccbf3c (image=quay.io/ceph/ceph:v18, name=beautiful_goldberg, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8319362fb95c8f6335069bb3a9c72bea841b619c680d17133251f5a6b36a91b3-merged.mount: Deactivated successfully.
Dec 06 06:25:51 compute-0 podman[75418]: 2025-12-06 06:25:51.997350653 +0000 UTC m=+0.069070315 container remove 9466023f1f825cb38f47c053c362a10eaf906d06cfa1ef50a84cc6457eccbf3c (image=quay.io/ceph/ceph:v18, name=beautiful_goldberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:25:52 compute-0 systemd[1]: libpod-conmon-9466023f1f825cb38f47c053c362a10eaf906d06cfa1ef50a84cc6457eccbf3c.scope: Deactivated successfully.
Dec 06 06:25:52 compute-0 podman[75433]: 2025-12-06 06:25:52.060646606 +0000 UTC m=+0.038148484 container create e7043783ca016813dec1424d7a9d670bce14df13734a9b85d821eb805375969c (image=quay.io/ceph/ceph:v18, name=focused_sinoussi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 06:25:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3206223454' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 06 06:25:52 compute-0 ceph-mon[74339]: mgrmap e5: compute-0.sfzyix(active, since 5s)
Dec 06 06:25:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1839503818' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 06:25:52 compute-0 systemd[1]: Started libpod-conmon-e7043783ca016813dec1424d7a9d670bce14df13734a9b85d821eb805375969c.scope.
Dec 06 06:25:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f531a5ad6f24375cd5fd54976f18fedead0b5bb60e1179a9caf38fd983dfe30/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f531a5ad6f24375cd5fd54976f18fedead0b5bb60e1179a9caf38fd983dfe30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:52 compute-0 podman[75433]: 2025-12-06 06:25:52.043302568 +0000 UTC m=+0.020804476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:25:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f531a5ad6f24375cd5fd54976f18fedead0b5bb60e1179a9caf38fd983dfe30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:25:52 compute-0 podman[75433]: 2025-12-06 06:25:52.151869461 +0000 UTC m=+0.129371359 container init e7043783ca016813dec1424d7a9d670bce14df13734a9b85d821eb805375969c (image=quay.io/ceph/ceph:v18, name=focused_sinoussi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Dec 06 06:25:52 compute-0 podman[75433]: 2025-12-06 06:25:52.156834848 +0000 UTC m=+0.134336726 container start e7043783ca016813dec1424d7a9d670bce14df13734a9b85d821eb805375969c (image=quay.io/ceph/ceph:v18, name=focused_sinoussi, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 06:25:52 compute-0 podman[75433]: 2025-12-06 06:25:52.161455438 +0000 UTC m=+0.138957346 container attach e7043783ca016813dec1424d7a9d670bce14df13734a9b85d821eb805375969c (image=quay.io/ceph/ceph:v18, name=focused_sinoussi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:25:54 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'crash'
Dec 06 06:25:54 compute-0 ceph-mgr[74630]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 06:25:54 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'dashboard'
Dec 06 06:25:54 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:54.368+0000 7f67c37bb140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 06:25:55 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'devicehealth'
Dec 06 06:25:56 compute-0 ceph-mgr[74630]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 06:25:56 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'diskprediction_local'
Dec 06 06:25:56 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:56.244+0000 7f67c37bb140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 06:25:56 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 06 06:25:56 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 06 06:25:56 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]:   from numpy import show_config as show_numpy_config
Dec 06 06:25:56 compute-0 ceph-mgr[74630]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 06:25:56 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:56.898+0000 7f67c37bb140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 06:25:56 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'influx'
Dec 06 06:25:57 compute-0 ceph-mgr[74630]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 06:25:57 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'insights'
Dec 06 06:25:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:57.175+0000 7f67c37bb140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 06:25:57 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'iostat'
Dec 06 06:25:57 compute-0 ceph-mgr[74630]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 06:25:57 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'k8sevents'
Dec 06 06:25:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:25:57.715+0000 7f67c37bb140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 06:25:59 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'localpool'
Dec 06 06:26:00 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'mds_autoscaler'
Dec 06 06:26:00 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'mirroring'
Dec 06 06:26:01 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'nfs'
Dec 06 06:26:01 compute-0 ceph-mgr[74630]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 06:26:01 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'orchestrator'
Dec 06 06:26:01 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:01.719+0000 7f67c37bb140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 06:26:02 compute-0 ceph-mgr[74630]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 06:26:02 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'osd_perf_query'
Dec 06 06:26:02 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:02.461+0000 7f67c37bb140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 06:26:02 compute-0 ceph-mgr[74630]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 06:26:02 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:02.757+0000 7f67c37bb140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 06:26:02 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'osd_support'
Dec 06 06:26:03 compute-0 ceph-mgr[74630]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 06:26:03 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:03.015+0000 7f67c37bb140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 06:26:03 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'pg_autoscaler'
Dec 06 06:26:03 compute-0 ceph-mgr[74630]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 06:26:03 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:03.301+0000 7f67c37bb140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 06:26:03 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'progress'
Dec 06 06:26:03 compute-0 ceph-mgr[74630]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 06:26:03 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'prometheus'
Dec 06 06:26:03 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:03.566+0000 7f67c37bb140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 06:26:04 compute-0 ceph-mgr[74630]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 06:26:04 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:04.613+0000 7f67c37bb140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 06:26:04 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'rbd_support'
Dec 06 06:26:04 compute-0 ceph-mgr[74630]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 06:26:04 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'restful'
Dec 06 06:26:04 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:04.935+0000 7f67c37bb140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 06:26:05 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'rgw'
Dec 06 06:26:06 compute-0 ceph-mgr[74630]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 06:26:06 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:06.505+0000 7f67c37bb140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 06:26:06 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'rook'
Dec 06 06:26:08 compute-0 ceph-mgr[74630]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 06:26:08 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:08.845+0000 7f67c37bb140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 06:26:08 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'selftest'
Dec 06 06:26:09 compute-0 ceph-mgr[74630]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 06:26:09 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'snap_schedule'
Dec 06 06:26:09 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:09.132+0000 7f67c37bb140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 06:26:09 compute-0 ceph-mgr[74630]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 06:26:09 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'stats'
Dec 06 06:26:09 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:09.415+0000 7f67c37bb140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 06 06:26:09 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'status'
Dec 06 06:26:10 compute-0 ceph-mgr[74630]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 06:26:10 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'telegraf'
Dec 06 06:26:10 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:10.015+0000 7f67c37bb140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 06:26:10 compute-0 ceph-mgr[74630]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 06:26:10 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'telemetry'
Dec 06 06:26:10 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:10.283+0000 7f67c37bb140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 06:26:10 compute-0 ceph-mgr[74630]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 06:26:10 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'test_orchestrator'
Dec 06 06:26:10 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:10.966+0000 7f67c37bb140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 06:26:11 compute-0 ceph-mgr[74630]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 06:26:11 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'volumes'
Dec 06 06:26:11 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:11.716+0000 7f67c37bb140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr[py] Loading python module 'zabbix'
Dec 06 06:26:12 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:12.516+0000 7f67c37bb140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 06:26:12 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:26:12.766+0000 7f67c37bb140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Active manager daemon compute-0.sfzyix restarted
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: ms_deliver_dispatch: unhandled message 0x55bf97240420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.sfzyix
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr handle_mgr_map Activating!
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr handle_mgr_map I am now activating
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.sfzyix(active, starting, since 0.0280577s)
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.sfzyix", "id": "compute-0.sfzyix"} v 0) v1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr metadata", "who": "compute-0.sfzyix", "id": "compute-0.sfzyix"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e1 all = 1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: balancer
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Manager daemon compute-0.sfzyix is now available
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Starting
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:26:12
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [balancer INFO root] No pools available
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: Active manager daemon compute-0.sfzyix restarted
Dec 06 06:26:12 compute-0 ceph-mon[74339]: Activating manager daemon compute-0.sfzyix
Dec 06 06:26:12 compute-0 ceph-mon[74339]: osdmap e2: 0 total, 0 up, 0 in
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mgrmap e6: compute-0.sfzyix(active, starting, since 0.0280577s)
Dec 06 06:26:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr metadata", "who": "compute-0.sfzyix", "id": "compute-0.sfzyix"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mon[74339]: Manager daemon compute-0.sfzyix is now available
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: cephadm
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: crash
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: devicehealth
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Starting
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: iostat
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: nfs
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: orchestrator
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: pg_autoscaler
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: progress
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [progress INFO root] Loading...
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [progress INFO root] No stored events to load
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [progress INFO root] Loaded [] historic events
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [progress INFO root] Loaded OSDMap, ready.
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] recovery thread starting
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] starting setup
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: rbd_support
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: restful
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.sfzyix/mirror_snapshot_schedule"} v 0) v1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.sfzyix/mirror_snapshot_schedule"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [restful INFO root] server_addr: :: server_port: 8003
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: status
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: telemetry
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] PerfHandler: starting
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TaskHandler: starting
Dec 06 06:26:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.sfzyix/trash_purge_schedule"} v 0) v1
Dec 06 06:26:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.sfzyix/trash_purge_schedule"}]: dispatch
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [restful WARNING root] server not running: no certificate configured
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] setup complete
Dec 06 06:26:12 compute-0 ceph-mgr[74630]: mgr load Constructed class from module: volumes
Dec 06 06:26:13 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.sfzyix(active, since 1.03869s)
Dec 06 06:26:13 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 06 06:26:13 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 06 06:26:13 compute-0 focused_sinoussi[75450]: {
Dec 06 06:26:13 compute-0 focused_sinoussi[75450]:     "mgrmap_epoch": 7,
Dec 06 06:26:13 compute-0 focused_sinoussi[75450]:     "initialized": true
Dec 06 06:26:13 compute-0 focused_sinoussi[75450]: }
Dec 06 06:26:13 compute-0 systemd[1]: libpod-e7043783ca016813dec1424d7a9d670bce14df13734a9b85d821eb805375969c.scope: Deactivated successfully.
Dec 06 06:26:13 compute-0 podman[75433]: 2025-12-06 06:26:13.848268658 +0000 UTC m=+21.825770636 container died e7043783ca016813dec1424d7a9d670bce14df13734a9b85d821eb805375969c (image=quay.io/ceph/ceph:v18, name=focused_sinoussi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:13 compute-0 ceph-mon[74339]: Found migration_current of "None". Setting to last migration.
Dec 06 06:26:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:26:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:26:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.sfzyix/mirror_snapshot_schedule"}]: dispatch
Dec 06 06:26:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.sfzyix/trash_purge_schedule"}]: dispatch
Dec 06 06:26:13 compute-0 ceph-mon[74339]: mgrmap e7: compute-0.sfzyix(active, since 1.03869s)
Dec 06 06:26:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f531a5ad6f24375cd5fd54976f18fedead0b5bb60e1179a9caf38fd983dfe30-merged.mount: Deactivated successfully.
Dec 06 06:26:13 compute-0 podman[75433]: 2025-12-06 06:26:13.905129145 +0000 UTC m=+21.882631033 container remove e7043783ca016813dec1424d7a9d670bce14df13734a9b85d821eb805375969c (image=quay.io/ceph/ceph:v18, name=focused_sinoussi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 06:26:13 compute-0 systemd[1]: libpod-conmon-e7043783ca016813dec1424d7a9d670bce14df13734a9b85d821eb805375969c.scope: Deactivated successfully.
Dec 06 06:26:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Dec 06 06:26:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Dec 06 06:26:14 compute-0 podman[75609]: 2025-12-06 06:26:14.004216664 +0000 UTC m=+0.072357440 container create a820820ac00e3cc981686bf0c3872a377fd17f8c5da8749d7f63e5e12e20d772 (image=quay.io/ceph/ceph:v18, name=eloquent_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:14 compute-0 systemd[1]: Started libpod-conmon-a820820ac00e3cc981686bf0c3872a377fd17f8c5da8749d7f63e5e12e20d772.scope.
Dec 06 06:26:14 compute-0 podman[75609]: 2025-12-06 06:26:13.958994174 +0000 UTC m=+0.027135050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f7502470e4227a0fa8fc2fe0540e08f5c45dbc3aaf9bb6963b62de1009e8fb2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f7502470e4227a0fa8fc2fe0540e08f5c45dbc3aaf9bb6963b62de1009e8fb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f7502470e4227a0fa8fc2fe0540e08f5c45dbc3aaf9bb6963b62de1009e8fb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:14 compute-0 podman[75609]: 2025-12-06 06:26:14.09787478 +0000 UTC m=+0.166015576 container init a820820ac00e3cc981686bf0c3872a377fd17f8c5da8749d7f63e5e12e20d772 (image=quay.io/ceph/ceph:v18, name=eloquent_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 06:26:14 compute-0 podman[75609]: 2025-12-06 06:26:14.103834472 +0000 UTC m=+0.171975248 container start a820820ac00e3cc981686bf0c3872a377fd17f8c5da8749d7f63e5e12e20d772 (image=quay.io/ceph/ceph:v18, name=eloquent_merkle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:26:14 compute-0 podman[75609]: 2025-12-06 06:26:14.106874845 +0000 UTC m=+0.175015641 container attach a820820ac00e3cc981686bf0c3872a377fd17f8c5da8749d7f63e5e12e20d772 (image=quay.io/ceph/ceph:v18, name=eloquent_merkle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:26:14 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Dec 06 06:26:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:14 compute-0 ceph-mgr[74630]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 06:26:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 06 06:26:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:26:14 compute-0 systemd[1]: libpod-a820820ac00e3cc981686bf0c3872a377fd17f8c5da8749d7f63e5e12e20d772.scope: Deactivated successfully.
Dec 06 06:26:14 compute-0 podman[75609]: 2025-12-06 06:26:14.830940669 +0000 UTC m=+0.899081445 container died a820820ac00e3cc981686bf0c3872a377fd17f8c5da8749d7f63e5e12e20d772 (image=quay.io/ceph/ceph:v18, name=eloquent_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: [cephadm INFO cherrypy.error] [06/Dec/2025:06:26:15] ENGINE Bus STARTING
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : [06/Dec/2025:06:26:15] ENGINE Bus STARTING
Dec 06 06:26:15 compute-0 ceph-mon[74339]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 06 06:26:15 compute-0 ceph-mon[74339]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 06 06:26:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: [cephadm INFO cherrypy.error] [06/Dec/2025:06:26:15] ENGINE Serving on http://192.168.122.100:8765
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : [06/Dec/2025:06:26:15] ENGINE Serving on http://192.168.122.100:8765
Dec 06 06:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f7502470e4227a0fa8fc2fe0540e08f5c45dbc3aaf9bb6963b62de1009e8fb2-merged.mount: Deactivated successfully.
Dec 06 06:26:15 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.sfzyix(active, since 2s)
Dec 06 06:26:15 compute-0 podman[75609]: 2025-12-06 06:26:15.246470216 +0000 UTC m=+1.314611002 container remove a820820ac00e3cc981686bf0c3872a377fd17f8c5da8749d7f63e5e12e20d772 (image=quay.io/ceph/ceph:v18, name=eloquent_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 06:26:15 compute-0 systemd[1]: libpod-conmon-a820820ac00e3cc981686bf0c3872a377fd17f8c5da8749d7f63e5e12e20d772.scope: Deactivated successfully.
Dec 06 06:26:15 compute-0 podman[75687]: 2025-12-06 06:26:15.32121331 +0000 UTC m=+0.053198232 container create 05225410099ab43723716426ad838a48add3cee145a817cba0bd9ecd92f6035c (image=quay.io/ceph/ceph:v18, name=hungry_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: [cephadm INFO cherrypy.error] [06/Dec/2025:06:26:15] ENGINE Serving on https://192.168.122.100:7150
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : [06/Dec/2025:06:26:15] ENGINE Serving on https://192.168.122.100:7150
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: [cephadm INFO cherrypy.error] [06/Dec/2025:06:26:15] ENGINE Bus STARTED
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : [06/Dec/2025:06:26:15] ENGINE Bus STARTED
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: [cephadm INFO cherrypy.error] [06/Dec/2025:06:26:15] ENGINE Client ('192.168.122.100', 41018) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 06:26:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 06 06:26:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : [06/Dec/2025:06:26:15] ENGINE Client ('192.168.122.100', 41018) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 06:26:15 compute-0 systemd[1]: Started libpod-conmon-05225410099ab43723716426ad838a48add3cee145a817cba0bd9ecd92f6035c.scope.
Dec 06 06:26:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c18c47eead219e028377c3defce24a76108203d19365b6656e01f0cb752ee03/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c18c47eead219e028377c3defce24a76108203d19365b6656e01f0cb752ee03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c18c47eead219e028377c3defce24a76108203d19365b6656e01f0cb752ee03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:15 compute-0 podman[75687]: 2025-12-06 06:26:15.388052681 +0000 UTC m=+0.120037583 container init 05225410099ab43723716426ad838a48add3cee145a817cba0bd9ecd92f6035c (image=quay.io/ceph/ceph:v18, name=hungry_stonebraker, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:15 compute-0 podman[75687]: 2025-12-06 06:26:15.297479408 +0000 UTC m=+0.029464340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:15 compute-0 podman[75687]: 2025-12-06 06:26:15.393753455 +0000 UTC m=+0.125738337 container start 05225410099ab43723716426ad838a48add3cee145a817cba0bd9ecd92f6035c (image=quay.io/ceph/ceph:v18, name=hungry_stonebraker, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:26:15 compute-0 podman[75687]: 2025-12-06 06:26:15.397195908 +0000 UTC m=+0.129180790 container attach 05225410099ab43723716426ad838a48add3cee145a817cba0bd9ecd92f6035c (image=quay.io/ceph/ceph:v18, name=hungry_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Dec 06 06:26:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: [cephadm INFO root] Set ssh ssh_user
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 06 06:26:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Dec 06 06:26:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: [cephadm INFO root] Set ssh ssh_config
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 06 06:26:15 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 06 06:26:15 compute-0 hungry_stonebraker[75704]: ssh user set to ceph-admin. sudo will be used
Dec 06 06:26:16 compute-0 systemd[1]: libpod-05225410099ab43723716426ad838a48add3cee145a817cba0bd9ecd92f6035c.scope: Deactivated successfully.
Dec 06 06:26:16 compute-0 podman[75687]: 2025-12-06 06:26:16.009800204 +0000 UTC m=+0.741785096 container died 05225410099ab43723716426ad838a48add3cee145a817cba0bd9ecd92f6035c (image=quay.io/ceph/ceph:v18, name=hungry_stonebraker, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c18c47eead219e028377c3defce24a76108203d19365b6656e01f0cb752ee03-merged.mount: Deactivated successfully.
Dec 06 06:26:16 compute-0 podman[75687]: 2025-12-06 06:26:16.062659806 +0000 UTC m=+0.794644698 container remove 05225410099ab43723716426ad838a48add3cee145a817cba0bd9ecd92f6035c (image=quay.io/ceph/ceph:v18, name=hungry_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 06:26:16 compute-0 systemd[1]: libpod-conmon-05225410099ab43723716426ad838a48add3cee145a817cba0bd9ecd92f6035c.scope: Deactivated successfully.
Dec 06 06:26:16 compute-0 podman[75742]: 2025-12-06 06:26:16.1477423 +0000 UTC m=+0.054319202 container create f57b0c4ea9a0eab4d0afb473694216810c48e95711e2eab4f43d1c5e5d59eb1a (image=quay.io/ceph/ceph:v18, name=flamboyant_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 06:26:16 compute-0 ceph-mon[74339]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:16 compute-0 ceph-mon[74339]: [06/Dec/2025:06:26:15] ENGINE Bus STARTING
Dec 06 06:26:16 compute-0 ceph-mon[74339]: [06/Dec/2025:06:26:15] ENGINE Serving on http://192.168.122.100:8765
Dec 06 06:26:16 compute-0 ceph-mon[74339]: mgrmap e8: compute-0.sfzyix(active, since 2s)
Dec 06 06:26:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:26:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:16 compute-0 systemd[1]: Started libpod-conmon-f57b0c4ea9a0eab4d0afb473694216810c48e95711e2eab4f43d1c5e5d59eb1a.scope.
Dec 06 06:26:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a5036457c6595d304eea467c160e2ebf3fb0be4ca13d7fffb7fd7f414bf941/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a5036457c6595d304eea467c160e2ebf3fb0be4ca13d7fffb7fd7f414bf941/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:16 compute-0 podman[75742]: 2025-12-06 06:26:16.128356095 +0000 UTC m=+0.034933027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a5036457c6595d304eea467c160e2ebf3fb0be4ca13d7fffb7fd7f414bf941/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a5036457c6595d304eea467c160e2ebf3fb0be4ca13d7fffb7fd7f414bf941/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62a5036457c6595d304eea467c160e2ebf3fb0be4ca13d7fffb7fd7f414bf941/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:16 compute-0 podman[75742]: 2025-12-06 06:26:16.2429801 +0000 UTC m=+0.149557102 container init f57b0c4ea9a0eab4d0afb473694216810c48e95711e2eab4f43d1c5e5d59eb1a (image=quay.io/ceph/ceph:v18, name=flamboyant_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 06:26:16 compute-0 podman[75742]: 2025-12-06 06:26:16.249818426 +0000 UTC m=+0.156395368 container start f57b0c4ea9a0eab4d0afb473694216810c48e95711e2eab4f43d1c5e5d59eb1a (image=quay.io/ceph/ceph:v18, name=flamboyant_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 06:26:16 compute-0 podman[75742]: 2025-12-06 06:26:16.253819764 +0000 UTC m=+0.160396716 container attach f57b0c4ea9a0eab4d0afb473694216810c48e95711e2eab4f43d1c5e5d59eb1a (image=quay.io/ceph/ceph:v18, name=flamboyant_robinson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:26:16 compute-0 ceph-mgr[74630]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 06:26:16 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Dec 06 06:26:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:16 compute-0 ceph-mgr[74630]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 06 06:26:16 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 06 06:26:16 compute-0 ceph-mgr[74630]: [cephadm INFO root] Set ssh private key
Dec 06 06:26:16 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 06 06:26:16 compute-0 systemd[1]: libpod-f57b0c4ea9a0eab4d0afb473694216810c48e95711e2eab4f43d1c5e5d59eb1a.scope: Deactivated successfully.
Dec 06 06:26:16 compute-0 podman[75742]: 2025-12-06 06:26:16.949408007 +0000 UTC m=+0.855984939 container died f57b0c4ea9a0eab4d0afb473694216810c48e95711e2eab4f43d1c5e5d59eb1a (image=quay.io/ceph/ceph:v18, name=flamboyant_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 06:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-62a5036457c6595d304eea467c160e2ebf3fb0be4ca13d7fffb7fd7f414bf941-merged.mount: Deactivated successfully.
Dec 06 06:26:17 compute-0 podman[75742]: 2025-12-06 06:26:17.018893749 +0000 UTC m=+0.925470661 container remove f57b0c4ea9a0eab4d0afb473694216810c48e95711e2eab4f43d1c5e5d59eb1a (image=quay.io/ceph/ceph:v18, name=flamboyant_robinson, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 06:26:17 compute-0 systemd[1]: libpod-conmon-f57b0c4ea9a0eab4d0afb473694216810c48e95711e2eab4f43d1c5e5d59eb1a.scope: Deactivated successfully.
Dec 06 06:26:17 compute-0 podman[75797]: 2025-12-06 06:26:17.098345302 +0000 UTC m=+0.058347511 container create bf15716423e6af58d643b791c476319bff9ed3778e0dabb16a1e635f2b2bdb9d (image=quay.io/ceph/ceph:v18, name=suspicious_visvesvaraya, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:26:17 compute-0 systemd[1]: Started libpod-conmon-bf15716423e6af58d643b791c476319bff9ed3778e0dabb16a1e635f2b2bdb9d.scope.
Dec 06 06:26:17 compute-0 podman[75797]: 2025-12-06 06:26:17.066606032 +0000 UTC m=+0.026608291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:17 compute-0 ceph-mon[74339]: [06/Dec/2025:06:26:15] ENGINE Serving on https://192.168.122.100:7150
Dec 06 06:26:17 compute-0 ceph-mon[74339]: [06/Dec/2025:06:26:15] ENGINE Bus STARTED
Dec 06 06:26:17 compute-0 ceph-mon[74339]: [06/Dec/2025:06:26:15] ENGINE Client ('192.168.122.100', 41018) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 06:26:17 compute-0 ceph-mon[74339]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:17 compute-0 ceph-mon[74339]: Set ssh ssh_user
Dec 06 06:26:17 compute-0 ceph-mon[74339]: Set ssh ssh_config
Dec 06 06:26:17 compute-0 ceph-mon[74339]: ssh user set to ceph-admin. sudo will be used
Dec 06 06:26:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63488aa130f5cc7781ddf9088bd162db2003575f90290bc0e6119743c6d1f79c/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63488aa130f5cc7781ddf9088bd162db2003575f90290bc0e6119743c6d1f79c/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63488aa130f5cc7781ddf9088bd162db2003575f90290bc0e6119743c6d1f79c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63488aa130f5cc7781ddf9088bd162db2003575f90290bc0e6119743c6d1f79c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63488aa130f5cc7781ddf9088bd162db2003575f90290bc0e6119743c6d1f79c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:17 compute-0 podman[75797]: 2025-12-06 06:26:17.184460514 +0000 UTC m=+0.144462793 container init bf15716423e6af58d643b791c476319bff9ed3778e0dabb16a1e635f2b2bdb9d (image=quay.io/ceph/ceph:v18, name=suspicious_visvesvaraya, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:26:17 compute-0 podman[75797]: 2025-12-06 06:26:17.191467435 +0000 UTC m=+0.151469654 container start bf15716423e6af58d643b791c476319bff9ed3778e0dabb16a1e635f2b2bdb9d (image=quay.io/ceph/ceph:v18, name=suspicious_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:26:17 compute-0 podman[75797]: 2025-12-06 06:26:17.195521314 +0000 UTC m=+0.155523533 container attach bf15716423e6af58d643b791c476319bff9ed3778e0dabb16a1e635f2b2bdb9d (image=quay.io/ceph/ceph:v18, name=suspicious_visvesvaraya, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 06:26:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019926134 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:26:17 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Dec 06 06:26:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:17 compute-0 ceph-mgr[74630]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 06 06:26:17 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 06 06:26:17 compute-0 systemd[1]: libpod-bf15716423e6af58d643b791c476319bff9ed3778e0dabb16a1e635f2b2bdb9d.scope: Deactivated successfully.
Dec 06 06:26:17 compute-0 podman[75797]: 2025-12-06 06:26:17.783524473 +0000 UTC m=+0.743526722 container died bf15716423e6af58d643b791c476319bff9ed3778e0dabb16a1e635f2b2bdb9d (image=quay.io/ceph/ceph:v18, name=suspicious_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-63488aa130f5cc7781ddf9088bd162db2003575f90290bc0e6119743c6d1f79c-merged.mount: Deactivated successfully.
Dec 06 06:26:17 compute-0 podman[75797]: 2025-12-06 06:26:17.844348511 +0000 UTC m=+0.804350690 container remove bf15716423e6af58d643b791c476319bff9ed3778e0dabb16a1e635f2b2bdb9d (image=quay.io/ceph/ceph:v18, name=suspicious_visvesvaraya, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:26:17 compute-0 systemd[1]: libpod-conmon-bf15716423e6af58d643b791c476319bff9ed3778e0dabb16a1e635f2b2bdb9d.scope: Deactivated successfully.
Dec 06 06:26:18 compute-0 ceph-mon[74339]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:18 compute-0 ceph-mon[74339]: Set ssh ssh_identity_key
Dec 06 06:26:18 compute-0 ceph-mon[74339]: Set ssh private key
Dec 06 06:26:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:18 compute-0 podman[75852]: 2025-12-06 06:26:18.193712315 +0000 UTC m=+0.057590711 container create c48018e14b95ff99703a612cca891d68e27670ae6cc7f65d61aadce90be764df (image=quay.io/ceph/ceph:v18, name=intelligent_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 06:26:18 compute-0 systemd[1]: Started libpod-conmon-c48018e14b95ff99703a612cca891d68e27670ae6cc7f65d61aadce90be764df.scope.
Dec 06 06:26:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f6d0290a77a610c57dc9a20a2df94570380f8ea38cc160833f76f3f83e94d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f6d0290a77a610c57dc9a20a2df94570380f8ea38cc160833f76f3f83e94d4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f6d0290a77a610c57dc9a20a2df94570380f8ea38cc160833f76f3f83e94d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:18 compute-0 podman[75852]: 2025-12-06 06:26:18.269408626 +0000 UTC m=+0.133287072 container init c48018e14b95ff99703a612cca891d68e27670ae6cc7f65d61aadce90be764df (image=quay.io/ceph/ceph:v18, name=intelligent_carson, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 06:26:18 compute-0 podman[75852]: 2025-12-06 06:26:18.176716605 +0000 UTC m=+0.040595031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:18 compute-0 podman[75852]: 2025-12-06 06:26:18.275157242 +0000 UTC m=+0.139035638 container start c48018e14b95ff99703a612cca891d68e27670ae6cc7f65d61aadce90be764df (image=quay.io/ceph/ceph:v18, name=intelligent_carson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 06:26:18 compute-0 podman[75852]: 2025-12-06 06:26:18.278168933 +0000 UTC m=+0.142047329 container attach c48018e14b95ff99703a612cca891d68e27670ae6cc7f65d61aadce90be764df (image=quay.io/ceph/ceph:v18, name=intelligent_carson, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 06:26:18 compute-0 ceph-mgr[74630]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 06:26:18 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:18 compute-0 intelligent_carson[75868]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOCjUZFruo0d1hR237wtN2tvlg1zJ3vbxw5cZ+nnF4TV9ph2UIeWx/vdYzEfOeOJuwkBN0aNcPfxcOsGyxei8RERBBaaQgdxYrvVN1lh/aOOGostbmfoaRVMyiAil2jRU59mJ+bYo+95/Tfcp+L22fwW5pcQ0ExcNXeR7gVxNU4B5b0wwTJcwBwF2rcpv9Du26p5STnGVwzf17F/QcIck2Y/7UfGacHNNwUm2wVCKaf96QQh9vcF4XR72ZMC7ezGlhfhunyFSRLQNghsijvxPyfHmOPVHw70mFU2kTVbJD7f8/lzf+W+HMWPWiUki+0LilWZ8LtWvGSBAPvffaWdtXEmd9uagxqJG+KxxdogRLNN59gznu2soykPPSqDG63hWSCbKAs7B+iHIT8Z19AunWtcXOgpKyybEO7sN+oBcHfCGOqTYjVbbbXQvAHps5d6fD7whNNEk7Hx7voRsZReXZs7Ca3af86v2yHoXUfxwa0A7xDwwqNphkA2+YNasusTU= zuul@controller
Dec 06 06:26:19 compute-0 systemd[1]: libpod-c48018e14b95ff99703a612cca891d68e27670ae6cc7f65d61aadce90be764df.scope: Deactivated successfully.
Dec 06 06:26:19 compute-0 podman[75894]: 2025-12-06 06:26:19.065832961 +0000 UTC m=+0.040701385 container died c48018e14b95ff99703a612cca891d68e27670ae6cc7f65d61aadce90be764df (image=quay.io/ceph/ceph:v18, name=intelligent_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 06:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-54f6d0290a77a610c57dc9a20a2df94570380f8ea38cc160833f76f3f83e94d4-merged.mount: Deactivated successfully.
Dec 06 06:26:19 compute-0 podman[75894]: 2025-12-06 06:26:19.109604877 +0000 UTC m=+0.084473271 container remove c48018e14b95ff99703a612cca891d68e27670ae6cc7f65d61aadce90be764df (image=quay.io/ceph/ceph:v18, name=intelligent_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 06:26:19 compute-0 systemd[1]: libpod-conmon-c48018e14b95ff99703a612cca891d68e27670ae6cc7f65d61aadce90be764df.scope: Deactivated successfully.
Dec 06 06:26:19 compute-0 ceph-mon[74339]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:19 compute-0 ceph-mon[74339]: Set ssh ssh_identity_pub
Dec 06 06:26:19 compute-0 podman[75908]: 2025-12-06 06:26:19.218138386 +0000 UTC m=+0.067827718 container create 18a08402a72f6c3f2124278e18192b2c9dd56fe338428db43df41772b43eaa46 (image=quay.io/ceph/ceph:v18, name=peaceful_hopper, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:26:19 compute-0 systemd[1]: Started libpod-conmon-18a08402a72f6c3f2124278e18192b2c9dd56fe338428db43df41772b43eaa46.scope.
Dec 06 06:26:19 compute-0 podman[75908]: 2025-12-06 06:26:19.191650519 +0000 UTC m=+0.041339931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120da0b5df8788eb629473ae3bc601d7deaf476dfda4c648757a4f2c1020b436/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120da0b5df8788eb629473ae3bc601d7deaf476dfda4c648757a4f2c1020b436/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120da0b5df8788eb629473ae3bc601d7deaf476dfda4c648757a4f2c1020b436/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:19 compute-0 podman[75908]: 2025-12-06 06:26:19.326603275 +0000 UTC m=+0.176292607 container init 18a08402a72f6c3f2124278e18192b2c9dd56fe338428db43df41772b43eaa46 (image=quay.io/ceph/ceph:v18, name=peaceful_hopper, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 06:26:19 compute-0 podman[75908]: 2025-12-06 06:26:19.33492278 +0000 UTC m=+0.184612152 container start 18a08402a72f6c3f2124278e18192b2c9dd56fe338428db43df41772b43eaa46 (image=quay.io/ceph/ceph:v18, name=peaceful_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 06:26:19 compute-0 podman[75908]: 2025-12-06 06:26:19.339488724 +0000 UTC m=+0.189178076 container attach 18a08402a72f6c3f2124278e18192b2c9dd56fe338428db43df41772b43eaa46 (image=quay.io/ceph/ceph:v18, name=peaceful_hopper, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Dec 06 06:26:19 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:20 compute-0 ceph-mon[74339]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:20 compute-0 sshd-session[75951]: Accepted publickey for ceph-admin from 192.168.122.100 port 52794 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:20 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 06 06:26:20 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 06 06:26:20 compute-0 systemd-logind[798]: New session 21 of user ceph-admin.
Dec 06 06:26:20 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 06 06:26:20 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 06 06:26:20 compute-0 systemd[75955]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:20 compute-0 sshd-session[75958]: Accepted publickey for ceph-admin from 192.168.122.100 port 52796 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:20 compute-0 systemd-logind[798]: New session 23 of user ceph-admin.
Dec 06 06:26:20 compute-0 systemd[75955]: Queued start job for default target Main User Target.
Dec 06 06:26:20 compute-0 systemd[75955]: Created slice User Application Slice.
Dec 06 06:26:20 compute-0 systemd[75955]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 06:26:20 compute-0 systemd[75955]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 06:26:20 compute-0 systemd[75955]: Reached target Paths.
Dec 06 06:26:20 compute-0 systemd[75955]: Reached target Timers.
Dec 06 06:26:20 compute-0 systemd[75955]: Starting D-Bus User Message Bus Socket...
Dec 06 06:26:20 compute-0 systemd[75955]: Starting Create User's Volatile Files and Directories...
Dec 06 06:26:20 compute-0 systemd[75955]: Finished Create User's Volatile Files and Directories.
Dec 06 06:26:20 compute-0 systemd[75955]: Listening on D-Bus User Message Bus Socket.
Dec 06 06:26:20 compute-0 systemd[75955]: Reached target Sockets.
Dec 06 06:26:20 compute-0 systemd[75955]: Reached target Basic System.
Dec 06 06:26:20 compute-0 systemd[75955]: Reached target Main User Target.
Dec 06 06:26:20 compute-0 systemd[75955]: Startup finished in 148ms.
Dec 06 06:26:20 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 06 06:26:20 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Dec 06 06:26:20 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Dec 06 06:26:20 compute-0 sshd-session[75951]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:20 compute-0 sshd-session[75958]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:20 compute-0 sudo[75975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:20 compute-0 sudo[75975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:20 compute-0 sudo[75975]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:20 compute-0 sudo[76000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:20 compute-0 sudo[76000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:20 compute-0 sudo[76000]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:20 compute-0 ceph-mgr[74630]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 06:26:20 compute-0 sshd-session[76025]: Accepted publickey for ceph-admin from 192.168.122.100 port 52798 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:20 compute-0 systemd-logind[798]: New session 24 of user ceph-admin.
Dec 06 06:26:20 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Dec 06 06:26:20 compute-0 sshd-session[76025]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:21 compute-0 sudo[76029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:21 compute-0 sudo[76029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:21 compute-0 sudo[76029]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:21 compute-0 sudo[76054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Dec 06 06:26:21 compute-0 sudo[76054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:21 compute-0 sudo[76054]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:21 compute-0 ceph-mon[74339]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:21 compute-0 sshd-session[76079]: Accepted publickey for ceph-admin from 192.168.122.100 port 52804 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:21 compute-0 systemd-logind[798]: New session 25 of user ceph-admin.
Dec 06 06:26:21 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec 06 06:26:21 compute-0 sshd-session[76079]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:21 compute-0 sudo[76083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:21 compute-0 sudo[76083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:21 compute-0 sudo[76083]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:21 compute-0 sudo[76108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Dec 06 06:26:21 compute-0 sudo[76108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:21 compute-0 sudo[76108]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:21 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 06 06:26:21 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 06 06:26:21 compute-0 sshd-session[76133]: Accepted publickey for ceph-admin from 192.168.122.100 port 52818 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:21 compute-0 systemd-logind[798]: New session 26 of user ceph-admin.
Dec 06 06:26:21 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec 06 06:26:21 compute-0 sshd-session[76133]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:21 compute-0 sudo[76137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:21 compute-0 sudo[76137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:21 compute-0 sudo[76137]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:21 compute-0 sudo[76162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:21 compute-0 sudo[76162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:21 compute-0 sudo[76162]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:22 compute-0 sshd-session[76187]: Accepted publickey for ceph-admin from 192.168.122.100 port 52826 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:22 compute-0 systemd-logind[798]: New session 27 of user ceph-admin.
Dec 06 06:26:22 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec 06 06:26:22 compute-0 sshd-session[76187]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:22 compute-0 sudo[76191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:22 compute-0 sudo[76191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:22 compute-0 sudo[76191]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:22 compute-0 ceph-mon[74339]: Deploying cephadm binary to compute-0
Dec 06 06:26:22 compute-0 sudo[76216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:22 compute-0 sudo[76216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:22 compute-0 sudo[76216]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053092 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:26:22 compute-0 sshd-session[76241]: Accepted publickey for ceph-admin from 192.168.122.100 port 52838 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:22 compute-0 systemd-logind[798]: New session 28 of user ceph-admin.
Dec 06 06:26:22 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec 06 06:26:22 compute-0 sshd-session[76241]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:22 compute-0 sudo[76245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:22 compute-0 sudo[76245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:22 compute-0 sudo[76245]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:22 compute-0 sudo[76270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Dec 06 06:26:22 compute-0 sudo[76270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:22 compute-0 sudo[76270]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:22 compute-0 ceph-mgr[74630]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 06:26:22 compute-0 sshd-session[76295]: Accepted publickey for ceph-admin from 192.168.122.100 port 52854 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:22 compute-0 systemd-logind[798]: New session 29 of user ceph-admin.
Dec 06 06:26:22 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec 06 06:26:22 compute-0 sshd-session[76295]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:23 compute-0 sudo[76299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:23 compute-0 sudo[76299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:23 compute-0 sudo[76299]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:23 compute-0 sudo[76324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:23 compute-0 sudo[76324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:23 compute-0 sudo[76324]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:23 compute-0 sshd-session[76349]: Accepted publickey for ceph-admin from 192.168.122.100 port 52862 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:23 compute-0 systemd-logind[798]: New session 30 of user ceph-admin.
Dec 06 06:26:23 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 06 06:26:23 compute-0 sshd-session[76349]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:23 compute-0 sudo[76353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:23 compute-0 sudo[76353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:23 compute-0 sudo[76353]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:23 compute-0 sudo[76378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Dec 06 06:26:23 compute-0 sudo[76378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:23 compute-0 sudo[76378]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:23 compute-0 sshd-session[76403]: Accepted publickey for ceph-admin from 192.168.122.100 port 52866 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:23 compute-0 systemd-logind[798]: New session 31 of user ceph-admin.
Dec 06 06:26:23 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 06 06:26:23 compute-0 sshd-session[76403]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:24 compute-0 sshd-session[76430]: Accepted publickey for ceph-admin from 192.168.122.100 port 52876 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:24 compute-0 systemd-logind[798]: New session 32 of user ceph-admin.
Dec 06 06:26:24 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 06 06:26:24 compute-0 sshd-session[76430]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:24 compute-0 sudo[76434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:24 compute-0 sudo[76434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:24 compute-0 sudo[76434]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:24 compute-0 sudo[76459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Dec 06 06:26:24 compute-0 sudo[76459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:24 compute-0 sudo[76459]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:24 compute-0 sshd-session[76484]: Accepted publickey for ceph-admin from 192.168.122.100 port 52888 ssh2: RSA SHA256:+i10JGqignoq/SCnmxW2ULoUP1E+YjXSCXzWIKpqOZc
Dec 06 06:26:24 compute-0 systemd-logind[798]: New session 33 of user ceph-admin.
Dec 06 06:26:24 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec 06 06:26:24 compute-0 sshd-session[76484]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 06 06:26:24 compute-0 sudo[76488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:24 compute-0 sudo[76488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:24 compute-0 ceph-mgr[74630]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 06:26:24 compute-0 sudo[76488]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:24 compute-0 sudo[76513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Dec 06 06:26:24 compute-0 sudo[76513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:25 compute-0 sudo[76513]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 06 06:26:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:25 compute-0 ceph-mgr[74630]: [cephadm INFO root] Added host compute-0
Dec 06 06:26:25 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 06 06:26:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 06 06:26:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:26:25 compute-0 peaceful_hopper[75925]: Added host 'compute-0' with addr '192.168.122.100'
Dec 06 06:26:25 compute-0 systemd[1]: libpod-18a08402a72f6c3f2124278e18192b2c9dd56fe338428db43df41772b43eaa46.scope: Deactivated successfully.
Dec 06 06:26:25 compute-0 podman[75908]: 2025-12-06 06:26:25.173127414 +0000 UTC m=+6.022816776 container died 18a08402a72f6c3f2124278e18192b2c9dd56fe338428db43df41772b43eaa46 (image=quay.io/ceph/ceph:v18, name=peaceful_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-120da0b5df8788eb629473ae3bc601d7deaf476dfda4c648757a4f2c1020b436-merged.mount: Deactivated successfully.
Dec 06 06:26:25 compute-0 sudo[76558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:25 compute-0 sudo[76558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:25 compute-0 sudo[76558]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:25 compute-0 podman[75908]: 2025-12-06 06:26:25.224711751 +0000 UTC m=+6.074401083 container remove 18a08402a72f6c3f2124278e18192b2c9dd56fe338428db43df41772b43eaa46 (image=quay.io/ceph/ceph:v18, name=peaceful_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 06:26:25 compute-0 systemd[1]: libpod-conmon-18a08402a72f6c3f2124278e18192b2c9dd56fe338428db43df41772b43eaa46.scope: Deactivated successfully.
Dec 06 06:26:25 compute-0 sudo[76596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:25 compute-0 sudo[76596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:25 compute-0 sudo[76596]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:25 compute-0 podman[76605]: 2025-12-06 06:26:25.287054291 +0000 UTC m=+0.040242082 container create 1ad4d917c5d1986570f5363284a58f27f54625b0118c72f02497173ff5714d00 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 06:26:25 compute-0 systemd[1]: Started libpod-conmon-1ad4d917c5d1986570f5363284a58f27f54625b0118c72f02497173ff5714d00.scope.
Dec 06 06:26:25 compute-0 sudo[76635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:25 compute-0 sudo[76635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:25 compute-0 sudo[76635]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:25 compute-0 podman[76605]: 2025-12-06 06:26:25.269237367 +0000 UTC m=+0.022425178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38425646e4778d76038f6b91eea714ab646259d99aff19d321701f528e7e9aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38425646e4778d76038f6b91eea714ab646259d99aff19d321701f528e7e9aa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38425646e4778d76038f6b91eea714ab646259d99aff19d321701f528e7e9aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:25 compute-0 podman[76605]: 2025-12-06 06:26:25.376553215 +0000 UTC m=+0.129741026 container init 1ad4d917c5d1986570f5363284a58f27f54625b0118c72f02497173ff5714d00 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:25 compute-0 podman[76605]: 2025-12-06 06:26:25.382963349 +0000 UTC m=+0.136151140 container start 1ad4d917c5d1986570f5363284a58f27f54625b0118c72f02497173ff5714d00 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 06:26:25 compute-0 podman[76605]: 2025-12-06 06:26:25.386462053 +0000 UTC m=+0.139649864 container attach 1ad4d917c5d1986570f5363284a58f27f54625b0118c72f02497173ff5714d00 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:25 compute-0 sudo[76665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Dec 06 06:26:25 compute-0 sudo[76665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:25 compute-0 podman[76719]: 2025-12-06 06:26:25.642385226 +0000 UTC m=+0.037748113 container create e8a7df695b653221926458d93b8f39de6cfb5723874a00c0596f45093580493a (image=quay.io/ceph/ceph:v18, name=infallible_johnson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 06:26:25 compute-0 systemd[1]: Started libpod-conmon-e8a7df695b653221926458d93b8f39de6cfb5723874a00c0596f45093580493a.scope.
Dec 06 06:26:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:25 compute-0 podman[76719]: 2025-12-06 06:26:25.62481475 +0000 UTC m=+0.020177667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:25 compute-0 podman[76719]: 2025-12-06 06:26:25.736537237 +0000 UTC m=+0.131900154 container init e8a7df695b653221926458d93b8f39de6cfb5723874a00c0596f45093580493a (image=quay.io/ceph/ceph:v18, name=infallible_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 06:26:25 compute-0 podman[76719]: 2025-12-06 06:26:25.741924663 +0000 UTC m=+0.137287560 container start e8a7df695b653221926458d93b8f39de6cfb5723874a00c0596f45093580493a (image=quay.io/ceph/ceph:v18, name=infallible_johnson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 06:26:25 compute-0 podman[76719]: 2025-12-06 06:26:25.745198901 +0000 UTC m=+0.140561818 container attach e8a7df695b653221926458d93b8f39de6cfb5723874a00c0596f45093580493a (image=quay.io/ceph/ceph:v18, name=infallible_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:26:25 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:25 compute-0 ceph-mgr[74630]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 06 06:26:25 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 06 06:26:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 06 06:26:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:25 compute-0 goofy_satoshi[76661]: Scheduled mon update...
Dec 06 06:26:25 compute-0 systemd[1]: libpod-1ad4d917c5d1986570f5363284a58f27f54625b0118c72f02497173ff5714d00.scope: Deactivated successfully.
Dec 06 06:26:25 compute-0 podman[76605]: 2025-12-06 06:26:25.961786379 +0000 UTC m=+0.714974190 container died 1ad4d917c5d1986570f5363284a58f27f54625b0118c72f02497173ff5714d00 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c38425646e4778d76038f6b91eea714ab646259d99aff19d321701f528e7e9aa-merged.mount: Deactivated successfully.
Dec 06 06:26:26 compute-0 podman[76605]: 2025-12-06 06:26:26.019943034 +0000 UTC m=+0.773130835 container remove 1ad4d917c5d1986570f5363284a58f27f54625b0118c72f02497173ff5714d00 (image=quay.io/ceph/ceph:v18, name=goofy_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 06:26:26 compute-0 systemd[1]: libpod-conmon-1ad4d917c5d1986570f5363284a58f27f54625b0118c72f02497173ff5714d00.scope: Deactivated successfully.
Dec 06 06:26:26 compute-0 infallible_johnson[76737]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec 06 06:26:26 compute-0 podman[76773]: 2025-12-06 06:26:26.081374418 +0000 UTC m=+0.039655855 container create b3b5c8b0f11417dee90ef6c42f5fa797cb061fedea6fe58596b050a4bb6ca1bb (image=quay.io/ceph/ceph:v18, name=sad_khayyam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 06:26:26 compute-0 systemd[1]: libpod-e8a7df695b653221926458d93b8f39de6cfb5723874a00c0596f45093580493a.scope: Deactivated successfully.
Dec 06 06:26:26 compute-0 podman[76719]: 2025-12-06 06:26:26.089137898 +0000 UTC m=+0.484500805 container died e8a7df695b653221926458d93b8f39de6cfb5723874a00c0596f45093580493a (image=quay.io/ceph/ceph:v18, name=infallible_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:26:26 compute-0 systemd[1]: Started libpod-conmon-b3b5c8b0f11417dee90ef6c42f5fa797cb061fedea6fe58596b050a4bb6ca1bb.scope.
Dec 06 06:26:26 compute-0 podman[76719]: 2025-12-06 06:26:26.141300852 +0000 UTC m=+0.536663749 container remove e8a7df695b653221926458d93b8f39de6cfb5723874a00c0596f45093580493a (image=quay.io/ceph/ceph:v18, name=infallible_johnson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec 06 06:26:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:26 compute-0 ceph-mon[74339]: Added host compute-0
Dec 06 06:26:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:26:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4488c32ca58252cca8c6090370b114603ff3c0670a97ddd6ad3e697a5fc4576/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4488c32ca58252cca8c6090370b114603ff3c0670a97ddd6ad3e697a5fc4576/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4488c32ca58252cca8c6090370b114603ff3c0670a97ddd6ad3e697a5fc4576/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:26 compute-0 podman[76773]: 2025-12-06 06:26:26.063919255 +0000 UTC m=+0.022200692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:26 compute-0 podman[76773]: 2025-12-06 06:26:26.161634963 +0000 UTC m=+0.119916410 container init b3b5c8b0f11417dee90ef6c42f5fa797cb061fedea6fe58596b050a4bb6ca1bb (image=quay.io/ceph/ceph:v18, name=sad_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:26:26 compute-0 systemd[1]: libpod-conmon-e8a7df695b653221926458d93b8f39de6cfb5723874a00c0596f45093580493a.scope: Deactivated successfully.
Dec 06 06:26:26 compute-0 podman[76773]: 2025-12-06 06:26:26.167799089 +0000 UTC m=+0.126080526 container start b3b5c8b0f11417dee90ef6c42f5fa797cb061fedea6fe58596b050a4bb6ca1bb (image=quay.io/ceph/ceph:v18, name=sad_khayyam, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:26:26 compute-0 podman[76773]: 2025-12-06 06:26:26.171255423 +0000 UTC m=+0.129536890 container attach b3b5c8b0f11417dee90ef6c42f5fa797cb061fedea6fe58596b050a4bb6ca1bb (image=quay.io/ceph/ceph:v18, name=sad_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:26 compute-0 sudo[76665]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Dec 06 06:26:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-213a85ff31f286218dbd861399fd74fa6a0cd47f93c2eabe7fff237230bf7d79-merged.mount: Deactivated successfully.
Dec 06 06:26:26 compute-0 sudo[76806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:26 compute-0 sudo[76806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:26 compute-0 sudo[76806]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:26 compute-0 sudo[76831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:26 compute-0 sudo[76831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:26 compute-0 sudo[76831]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:26 compute-0 sudo[76856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:26 compute-0 sudo[76856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:26 compute-0 sudo[76856]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:26 compute-0 sudo[76881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 06:26:26 compute-0 sudo[76881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:26 compute-0 sudo[76881]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:26:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:26 compute-0 sudo[76945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:26 compute-0 sudo[76945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:26 compute-0 sudo[76945]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:26 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:26 compute-0 ceph-mgr[74630]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 06 06:26:26 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 06 06:26:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 06 06:26:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:26 compute-0 sad_khayyam[76801]: Scheduled mgr update...
Dec 06 06:26:26 compute-0 sudo[76970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:26 compute-0 sudo[76970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:26 compute-0 sudo[76970]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:26 compute-0 systemd[1]: libpod-b3b5c8b0f11417dee90ef6c42f5fa797cb061fedea6fe58596b050a4bb6ca1bb.scope: Deactivated successfully.
Dec 06 06:26:26 compute-0 podman[76773]: 2025-12-06 06:26:26.78134269 +0000 UTC m=+0.739624137 container died b3b5c8b0f11417dee90ef6c42f5fa797cb061fedea6fe58596b050a4bb6ca1bb (image=quay.io/ceph/ceph:v18, name=sad_khayyam, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 06:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4488c32ca58252cca8c6090370b114603ff3c0670a97ddd6ad3e697a5fc4576-merged.mount: Deactivated successfully.
Dec 06 06:26:26 compute-0 ceph-mgr[74630]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 06:26:26 compute-0 podman[76773]: 2025-12-06 06:26:26.827713056 +0000 UTC m=+0.785994493 container remove b3b5c8b0f11417dee90ef6c42f5fa797cb061fedea6fe58596b050a4bb6ca1bb (image=quay.io/ceph/ceph:v18, name=sad_khayyam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 06:26:26 compute-0 sudo[76998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:26 compute-0 sudo[76998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:26 compute-0 systemd[1]: libpod-conmon-b3b5c8b0f11417dee90ef6c42f5fa797cb061fedea6fe58596b050a4bb6ca1bb.scope: Deactivated successfully.
Dec 06 06:26:26 compute-0 sudo[76998]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:26 compute-0 podman[77033]: 2025-12-06 06:26:26.889216633 +0000 UTC m=+0.040235482 container create fbf9bc7e1683fa5504160b15218d9d3e1573fb19832df3c80e22472fa6b7b348 (image=quay.io/ceph/ceph:v18, name=nervous_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Dec 06 06:26:26 compute-0 sudo[77035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:26:26 compute-0 sudo[77035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:26 compute-0 systemd[1]: Started libpod-conmon-fbf9bc7e1683fa5504160b15218d9d3e1573fb19832df3c80e22472fa6b7b348.scope.
Dec 06 06:26:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f2763460e2d9720841e4e7c50b58c0336eee5d781dcc83da21c89584b8119e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f2763460e2d9720841e4e7c50b58c0336eee5d781dcc83da21c89584b8119e4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f2763460e2d9720841e4e7c50b58c0336eee5d781dcc83da21c89584b8119e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:26 compute-0 podman[77033]: 2025-12-06 06:26:26.871546163 +0000 UTC m=+0.022565032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:26 compute-0 podman[77033]: 2025-12-06 06:26:26.975753106 +0000 UTC m=+0.126771955 container init fbf9bc7e1683fa5504160b15218d9d3e1573fb19832df3c80e22472fa6b7b348 (image=quay.io/ceph/ceph:v18, name=nervous_napier, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 06:26:26 compute-0 podman[77033]: 2025-12-06 06:26:26.983090795 +0000 UTC m=+0.134109644 container start fbf9bc7e1683fa5504160b15218d9d3e1573fb19832df3c80e22472fa6b7b348 (image=quay.io/ceph/ceph:v18, name=nervous_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:26 compute-0 podman[77033]: 2025-12-06 06:26:26.987399402 +0000 UTC m=+0.138418251 container attach fbf9bc7e1683fa5504160b15218d9d3e1573fb19832df3c80e22472fa6b7b348 (image=quay.io/ceph/ceph:v18, name=nervous_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 06:26:27 compute-0 ceph-mon[74339]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:27 compute-0 ceph-mon[74339]: Saving service mon spec with placement count:5
Dec 06 06:26:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:26:27 compute-0 podman[77167]: 2025-12-06 06:26:27.501465018 +0000 UTC m=+0.061891898 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:26:27 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:27 compute-0 ceph-mgr[74630]: [cephadm INFO root] Saving service crash spec with placement *
Dec 06 06:26:27 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 06 06:26:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 06 06:26:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:27 compute-0 nervous_napier[77074]: Scheduled crash update...
Dec 06 06:26:27 compute-0 podman[77033]: 2025-12-06 06:26:27.658849871 +0000 UTC m=+0.809868720 container died fbf9bc7e1683fa5504160b15218d9d3e1573fb19832df3c80e22472fa6b7b348 (image=quay.io/ceph/ceph:v18, name=nervous_napier, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 06:26:27 compute-0 systemd[1]: libpod-fbf9bc7e1683fa5504160b15218d9d3e1573fb19832df3c80e22472fa6b7b348.scope: Deactivated successfully.
Dec 06 06:26:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f2763460e2d9720841e4e7c50b58c0336eee5d781dcc83da21c89584b8119e4-merged.mount: Deactivated successfully.
Dec 06 06:26:27 compute-0 podman[77033]: 2025-12-06 06:26:27.705430083 +0000 UTC m=+0.856448932 container remove fbf9bc7e1683fa5504160b15218d9d3e1573fb19832df3c80e22472fa6b7b348 (image=quay.io/ceph/ceph:v18, name=nervous_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Dec 06 06:26:27 compute-0 systemd[1]: libpod-conmon-fbf9bc7e1683fa5504160b15218d9d3e1573fb19832df3c80e22472fa6b7b348.scope: Deactivated successfully.
Dec 06 06:26:27 compute-0 podman[77200]: 2025-12-06 06:26:27.864377699 +0000 UTC m=+0.115862160 container create 495f446f5a3417a22df9ad1de67cd9f0f491ddfb681f74b6e9946992406e80b0 (image=quay.io/ceph/ceph:v18, name=reverent_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:26:27 compute-0 podman[77167]: 2025-12-06 06:26:27.868668225 +0000 UTC m=+0.429095105 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:27 compute-0 systemd[1]: Started libpod-conmon-495f446f5a3417a22df9ad1de67cd9f0f491ddfb681f74b6e9946992406e80b0.scope.
Dec 06 06:26:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec61dcf28d667d3c3e381adc5e4acef9cdc7ac3921c37a93e1fd7edecdd6b661/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec61dcf28d667d3c3e381adc5e4acef9cdc7ac3921c37a93e1fd7edecdd6b661/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec61dcf28d667d3c3e381adc5e4acef9cdc7ac3921c37a93e1fd7edecdd6b661/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:27 compute-0 podman[77200]: 2025-12-06 06:26:27.842526297 +0000 UTC m=+0.094010788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:27 compute-0 podman[77200]: 2025-12-06 06:26:27.954239253 +0000 UTC m=+0.205723734 container init 495f446f5a3417a22df9ad1de67cd9f0f491ddfb681f74b6e9946992406e80b0 (image=quay.io/ceph/ceph:v18, name=reverent_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:26:27 compute-0 podman[77200]: 2025-12-06 06:26:27.961759496 +0000 UTC m=+0.213243957 container start 495f446f5a3417a22df9ad1de67cd9f0f491ddfb681f74b6e9946992406e80b0 (image=quay.io/ceph/ceph:v18, name=reverent_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 06:26:27 compute-0 podman[77200]: 2025-12-06 06:26:27.96631506 +0000 UTC m=+0.217799511 container attach 495f446f5a3417a22df9ad1de67cd9f0f491ddfb681f74b6e9946992406e80b0 (image=quay.io/ceph/ceph:v18, name=reverent_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:26:28 compute-0 sudo[77035]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:26:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:28 compute-0 sudo[77250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:28 compute-0 sudo[77250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:28 compute-0 sudo[77250]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:28 compute-0 sudo[77275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:28 compute-0 sudo[77275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:28 compute-0 sudo[77275]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:28 compute-0 ceph-mon[74339]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:28 compute-0 ceph-mon[74339]: Saving service mgr spec with placement count:2
Dec 06 06:26:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:28 compute-0 sudo[77300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:28 compute-0 sudo[77300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:28 compute-0 sudo[77300]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:28 compute-0 sudo[77325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:26:28 compute-0 sudo[77325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:28 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77382 (sysctl)
Dec 06 06:26:28 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 06 06:26:28 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 06 06:26:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Dec 06 06:26:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2928770486' entity='client.admin' 
Dec 06 06:26:28 compute-0 systemd[1]: libpod-495f446f5a3417a22df9ad1de67cd9f0f491ddfb681f74b6e9946992406e80b0.scope: Deactivated successfully.
Dec 06 06:26:28 compute-0 podman[77200]: 2025-12-06 06:26:28.581732171 +0000 UTC m=+0.833216642 container died 495f446f5a3417a22df9ad1de67cd9f0f491ddfb681f74b6e9946992406e80b0 (image=quay.io/ceph/ceph:v18, name=reverent_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 06:26:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec61dcf28d667d3c3e381adc5e4acef9cdc7ac3921c37a93e1fd7edecdd6b661-merged.mount: Deactivated successfully.
Dec 06 06:26:28 compute-0 podman[77200]: 2025-12-06 06:26:28.630426168 +0000 UTC m=+0.881910629 container remove 495f446f5a3417a22df9ad1de67cd9f0f491ddfb681f74b6e9946992406e80b0 (image=quay.io/ceph/ceph:v18, name=reverent_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:26:28 compute-0 systemd[1]: libpod-conmon-495f446f5a3417a22df9ad1de67cd9f0f491ddfb681f74b6e9946992406e80b0.scope: Deactivated successfully.
Dec 06 06:26:28 compute-0 podman[77402]: 2025-12-06 06:26:28.704193391 +0000 UTC m=+0.051868341 container create 785cc2fee209929d9d42a55ef73369657e61f684a40ec6a4c853ed6b3c4843db (image=quay.io/ceph/ceph:v18, name=elegant_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:26:28 compute-0 systemd[1]: Started libpod-conmon-785cc2fee209929d9d42a55ef73369657e61f684a40ec6a4c853ed6b3c4843db.scope.
Dec 06 06:26:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:28 compute-0 podman[77402]: 2025-12-06 06:26:28.681087112 +0000 UTC m=+0.028762062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58baca7651c1b41370fe2bfebdfdc6773b3579ace5b8486398dc27a5dda74c36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58baca7651c1b41370fe2bfebdfdc6773b3579ace5b8486398dc27a5dda74c36/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58baca7651c1b41370fe2bfebdfdc6773b3579ace5b8486398dc27a5dda74c36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:28 compute-0 podman[77402]: 2025-12-06 06:26:28.792703105 +0000 UTC m=+0.140378075 container init 785cc2fee209929d9d42a55ef73369657e61f684a40ec6a4c853ed6b3c4843db (image=quay.io/ceph/ceph:v18, name=elegant_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 06:26:28 compute-0 podman[77402]: 2025-12-06 06:26:28.801687744 +0000 UTC m=+0.149362694 container start 785cc2fee209929d9d42a55ef73369657e61f684a40ec6a4c853ed6b3c4843db (image=quay.io/ceph/ceph:v18, name=elegant_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 06:26:28 compute-0 podman[77402]: 2025-12-06 06:26:28.804730268 +0000 UTC m=+0.152405508 container attach 785cc2fee209929d9d42a55ef73369657e61f684a40ec6a4c853ed6b3c4843db (image=quay.io/ceph/ceph:v18, name=elegant_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:26:28 compute-0 sudo[77325]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:28 compute-0 ceph-mgr[74630]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 06:26:28 compute-0 sudo[77439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:28 compute-0 sudo[77439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:28 compute-0 sudo[77439]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:28 compute-0 sudo[77464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:28 compute-0 sudo[77464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:28 compute-0 sudo[77464]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:29 compute-0 sudo[77489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:29 compute-0 sudo[77489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:29 compute-0 sudo[77489]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:29 compute-0 sudo[77514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 06 06:26:29 compute-0 sudo[77514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:29 compute-0 ceph-mon[74339]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:29 compute-0 ceph-mon[74339]: Saving service crash spec with placement *
Dec 06 06:26:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2928770486' entity='client.admin' 
Dec 06 06:26:29 compute-0 sudo[77514]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:26:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:29 compute-0 sudo[77576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:29 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:29 compute-0 sudo[77576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Dec 06 06:26:29 compute-0 sudo[77576]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:29 compute-0 systemd[1]: libpod-785cc2fee209929d9d42a55ef73369657e61f684a40ec6a4c853ed6b3c4843db.scope: Deactivated successfully.
Dec 06 06:26:29 compute-0 sudo[77602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:29 compute-0 sudo[77602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:29 compute-0 sudo[77602]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:29 compute-0 podman[77621]: 2025-12-06 06:26:29.431844606 +0000 UTC m=+0.027762903 container died 785cc2fee209929d9d42a55ef73369657e61f684a40ec6a4c853ed6b3c4843db (image=quay.io/ceph/ceph:v18, name=elegant_williamson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Dec 06 06:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-58baca7651c1b41370fe2bfebdfdc6773b3579ace5b8486398dc27a5dda74c36-merged.mount: Deactivated successfully.
Dec 06 06:26:29 compute-0 sudo[77641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:29 compute-0 sudo[77641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:29 compute-0 sudo[77641]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:29 compute-0 podman[77621]: 2025-12-06 06:26:29.483569966 +0000 UTC m=+0.079488263 container remove 785cc2fee209929d9d42a55ef73369657e61f684a40ec6a4c853ed6b3c4843db (image=quay.io/ceph/ceph:v18, name=elegant_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 06:26:29 compute-0 systemd[1]: libpod-conmon-785cc2fee209929d9d42a55ef73369657e61f684a40ec6a4c853ed6b3c4843db.scope: Deactivated successfully.
Dec 06 06:26:29 compute-0 sudo[77668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty --filter-for-batch
Dec 06 06:26:29 compute-0 sudo[77668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:29 compute-0 podman[77672]: 2025-12-06 06:26:29.550316367 +0000 UTC m=+0.042491901 container create 416d776e412dd41ebd7c2fbc4862619465f04078ad7ef060bd983362f2d9b73d (image=quay.io/ceph/ceph:v18, name=jolly_montalcini, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:29 compute-0 systemd[1]: Started libpod-conmon-416d776e412dd41ebd7c2fbc4862619465f04078ad7ef060bd983362f2d9b73d.scope.
Dec 06 06:26:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635b913288914df92e1504490ae89d36a67d6a81fcede32d512bef268884f63a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635b913288914df92e1504490ae89d36a67d6a81fcede32d512bef268884f63a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:29 compute-0 podman[77672]: 2025-12-06 06:26:29.532321358 +0000 UTC m=+0.024496882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635b913288914df92e1504490ae89d36a67d6a81fcede32d512bef268884f63a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:29 compute-0 podman[77672]: 2025-12-06 06:26:29.644675149 +0000 UTC m=+0.136850683 container init 416d776e412dd41ebd7c2fbc4862619465f04078ad7ef060bd983362f2d9b73d (image=quay.io/ceph/ceph:v18, name=jolly_montalcini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:26:29 compute-0 podman[77672]: 2025-12-06 06:26:29.651790899 +0000 UTC m=+0.143966393 container start 416d776e412dd41ebd7c2fbc4862619465f04078ad7ef060bd983362f2d9b73d (image=quay.io/ceph/ceph:v18, name=jolly_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:29 compute-0 podman[77672]: 2025-12-06 06:26:29.657530159 +0000 UTC m=+0.149705723 container attach 416d776e412dd41ebd7c2fbc4862619465f04078ad7ef060bd983362f2d9b73d (image=quay.io/ceph/ceph:v18, name=jolly_montalcini, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 06:26:29 compute-0 podman[77754]: 2025-12-06 06:26:29.842496372 +0000 UTC m=+0.040552987 container create f31bec0e30762fe509f890144c80e3359924125fb53c9f28f575114c3a8c3b98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 06:26:29 compute-0 systemd[1]: Started libpod-conmon-f31bec0e30762fe509f890144c80e3359924125fb53c9f28f575114c3a8c3b98.scope.
Dec 06 06:26:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:29 compute-0 podman[77754]: 2025-12-06 06:26:29.907775636 +0000 UTC m=+0.105832271 container init f31bec0e30762fe509f890144c80e3359924125fb53c9f28f575114c3a8c3b98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:29 compute-0 podman[77754]: 2025-12-06 06:26:29.913361343 +0000 UTC m=+0.111417958 container start f31bec0e30762fe509f890144c80e3359924125fb53c9f28f575114c3a8c3b98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 06:26:29 compute-0 bold_dubinsky[77770]: 167 167
Dec 06 06:26:29 compute-0 systemd[1]: libpod-f31bec0e30762fe509f890144c80e3359924125fb53c9f28f575114c3a8c3b98.scope: Deactivated successfully.
Dec 06 06:26:29 compute-0 podman[77754]: 2025-12-06 06:26:29.918182529 +0000 UTC m=+0.116239174 container attach f31bec0e30762fe509f890144c80e3359924125fb53c9f28f575114c3a8c3b98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:29 compute-0 podman[77754]: 2025-12-06 06:26:29.91857853 +0000 UTC m=+0.116635155 container died f31bec0e30762fe509f890144c80e3359924125fb53c9f28f575114c3a8c3b98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 06:26:29 compute-0 podman[77754]: 2025-12-06 06:26:29.823686726 +0000 UTC m=+0.021743361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc23e3add8d57a6279844dcf99957e42825c18ab5f112ebc7093cb7a5b2dadb3-merged.mount: Deactivated successfully.
Dec 06 06:26:29 compute-0 podman[77754]: 2025-12-06 06:26:29.959526348 +0000 UTC m=+0.157582963 container remove f31bec0e30762fe509f890144c80e3359924125fb53c9f28f575114c3a8c3b98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_dubinsky, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 06:26:29 compute-0 systemd[1]: libpod-conmon-f31bec0e30762fe509f890144c80e3359924125fb53c9f28f575114c3a8c3b98.scope: Deactivated successfully.
Dec 06 06:26:30 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 06 06:26:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:30 compute-0 ceph-mgr[74630]: [cephadm INFO root] Added label _admin to host compute-0
Dec 06 06:26:30 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 06 06:26:30 compute-0 jolly_montalcini[77709]: Added label _admin to host compute-0
Dec 06 06:26:30 compute-0 systemd[1]: libpod-416d776e412dd41ebd7c2fbc4862619465f04078ad7ef060bd983362f2d9b73d.scope: Deactivated successfully.
Dec 06 06:26:30 compute-0 podman[77672]: 2025-12-06 06:26:30.243529747 +0000 UTC m=+0.735705241 container died 416d776e412dd41ebd7c2fbc4862619465f04078ad7ef060bd983362f2d9b73d (image=quay.io/ceph/ceph:v18, name=jolly_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:26:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-635b913288914df92e1504490ae89d36a67d6a81fcede32d512bef268884f63a-merged.mount: Deactivated successfully.
Dec 06 06:26:30 compute-0 podman[77672]: 2025-12-06 06:26:30.28555613 +0000 UTC m=+0.777731624 container remove 416d776e412dd41ebd7c2fbc4862619465f04078ad7ef060bd983362f2d9b73d (image=quay.io/ceph/ceph:v18, name=jolly_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:26:30 compute-0 systemd[1]: libpod-conmon-416d776e412dd41ebd7c2fbc4862619465f04078ad7ef060bd983362f2d9b73d.scope: Deactivated successfully.
Dec 06 06:26:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:30 compute-0 ceph-mon[74339]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:30 compute-0 podman[77817]: 2025-12-06 06:26:30.341690602 +0000 UTC m=+0.037904124 container create 8284f8eebe471178eac5cede42c82ae4a0c544291cc405b9c3ff2abc35a548d3 (image=quay.io/ceph/ceph:v18, name=inspiring_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 06:26:30 compute-0 systemd[1]: Started libpod-conmon-8284f8eebe471178eac5cede42c82ae4a0c544291cc405b9c3ff2abc35a548d3.scope.
Dec 06 06:26:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab1d597aa2e24634102baa9297a13233b18a4e6488904070075c94656e5a93a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab1d597aa2e24634102baa9297a13233b18a4e6488904070075c94656e5a93a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab1d597aa2e24634102baa9297a13233b18a4e6488904070075c94656e5a93a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:30 compute-0 podman[77817]: 2025-12-06 06:26:30.32579429 +0000 UTC m=+0.022007832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:30 compute-0 podman[77817]: 2025-12-06 06:26:30.436033913 +0000 UTC m=+0.132247475 container init 8284f8eebe471178eac5cede42c82ae4a0c544291cc405b9c3ff2abc35a548d3 (image=quay.io/ceph/ceph:v18, name=inspiring_goldberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 06:26:30 compute-0 podman[77817]: 2025-12-06 06:26:30.441053814 +0000 UTC m=+0.137267336 container start 8284f8eebe471178eac5cede42c82ae4a0c544291cc405b9c3ff2abc35a548d3 (image=quay.io/ceph/ceph:v18, name=inspiring_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:30 compute-0 podman[77817]: 2025-12-06 06:26:30.447760411 +0000 UTC m=+0.143973973 container attach 8284f8eebe471178eac5cede42c82ae4a0c544291cc405b9c3ff2abc35a548d3 (image=quay.io/ceph/ceph:v18, name=inspiring_goldberg, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:26:30 compute-0 ceph-mgr[74630]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 06 06:26:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Dec 06 06:26:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/816402918' entity='client.admin' 
Dec 06 06:26:30 compute-0 systemd[1]: libpod-8284f8eebe471178eac5cede42c82ae4a0c544291cc405b9c3ff2abc35a548d3.scope: Deactivated successfully.
Dec 06 06:26:30 compute-0 podman[77817]: 2025-12-06 06:26:30.992564262 +0000 UTC m=+0.688777824 container died 8284f8eebe471178eac5cede42c82ae4a0c544291cc405b9c3ff2abc35a548d3 (image=quay.io/ceph/ceph:v18, name=inspiring_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:26:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-bab1d597aa2e24634102baa9297a13233b18a4e6488904070075c94656e5a93a-merged.mount: Deactivated successfully.
Dec 06 06:26:31 compute-0 podman[77817]: 2025-12-06 06:26:31.03477291 +0000 UTC m=+0.730986432 container remove 8284f8eebe471178eac5cede42c82ae4a0c544291cc405b9c3ff2abc35a548d3 (image=quay.io/ceph/ceph:v18, name=inspiring_goldberg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:31 compute-0 systemd[1]: libpod-conmon-8284f8eebe471178eac5cede42c82ae4a0c544291cc405b9c3ff2abc35a548d3.scope: Deactivated successfully.
Dec 06 06:26:31 compute-0 podman[77874]: 2025-12-06 06:26:31.089381811 +0000 UTC m=+0.037040685 container create 68c76c54f3e6d1d20388a51162351ec745e5b5338ebb2efa90e62023f6d406a0 (image=quay.io/ceph/ceph:v18, name=silly_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 06:26:31 compute-0 systemd[1]: Started libpod-conmon-68c76c54f3e6d1d20388a51162351ec745e5b5338ebb2efa90e62023f6d406a0.scope.
Dec 06 06:26:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c9b016f5066dcc7987726f6d7423f5d4c0bd3eeecd98cca22f99df2c1ca6d23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c9b016f5066dcc7987726f6d7423f5d4c0bd3eeecd98cca22f99df2c1ca6d23/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c9b016f5066dcc7987726f6d7423f5d4c0bd3eeecd98cca22f99df2c1ca6d23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:31 compute-0 podman[77874]: 2025-12-06 06:26:31.15959891 +0000 UTC m=+0.107257874 container init 68c76c54f3e6d1d20388a51162351ec745e5b5338ebb2efa90e62023f6d406a0 (image=quay.io/ceph/ceph:v18, name=silly_ellis, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 06 06:26:31 compute-0 podman[77874]: 2025-12-06 06:26:31.169721241 +0000 UTC m=+0.117380155 container start 68c76c54f3e6d1d20388a51162351ec745e5b5338ebb2efa90e62023f6d406a0 (image=quay.io/ceph/ceph:v18, name=silly_ellis, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 06:26:31 compute-0 podman[77874]: 2025-12-06 06:26:31.073716447 +0000 UTC m=+0.021375341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:31 compute-0 podman[77874]: 2025-12-06 06:26:31.175210359 +0000 UTC m=+0.122869233 container attach 68c76c54f3e6d1d20388a51162351ec745e5b5338ebb2efa90e62023f6d406a0 (image=quay.io/ceph/ceph:v18, name=silly_ellis, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:26:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Dec 06 06:26:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2276158311' entity='client.admin' 
Dec 06 06:26:31 compute-0 silly_ellis[77890]: set mgr/dashboard/cluster/status
Dec 06 06:26:31 compute-0 systemd[1]: libpod-68c76c54f3e6d1d20388a51162351ec745e5b5338ebb2efa90e62023f6d406a0.scope: Deactivated successfully.
Dec 06 06:26:31 compute-0 podman[77874]: 2025-12-06 06:26:31.84183088 +0000 UTC m=+0.789489754 container died 68c76c54f3e6d1d20388a51162351ec745e5b5338ebb2efa90e62023f6d406a0 (image=quay.io/ceph/ceph:v18, name=silly_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Dec 06 06:26:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c9b016f5066dcc7987726f6d7423f5d4c0bd3eeecd98cca22f99df2c1ca6d23-merged.mount: Deactivated successfully.
Dec 06 06:26:31 compute-0 podman[77874]: 2025-12-06 06:26:31.887739074 +0000 UTC m=+0.835397938 container remove 68c76c54f3e6d1d20388a51162351ec745e5b5338ebb2efa90e62023f6d406a0 (image=quay.io/ceph/ceph:v18, name=silly_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 06:26:31 compute-0 systemd[1]: libpod-conmon-68c76c54f3e6d1d20388a51162351ec745e5b5338ebb2efa90e62023f6d406a0.scope: Deactivated successfully.
Dec 06 06:26:31 compute-0 sudo[73318]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:31 compute-0 ceph-mon[74339]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:31 compute-0 ceph-mon[74339]: Added label _admin to host compute-0
Dec 06 06:26:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/816402918' entity='client.admin' 
Dec 06 06:26:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2276158311' entity='client.admin' 
Dec 06 06:26:32 compute-0 podman[77936]: 2025-12-06 06:26:32.085158703 +0000 UTC m=+0.044497122 container create 30661c62f20b997fce44b0d4a3c80e4919a23d4d038b1c8167b266052eaa4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:26:32 compute-0 systemd[1]: Started libpod-conmon-30661c62f20b997fce44b0d4a3c80e4919a23d4d038b1c8167b266052eaa4a36.scope.
Dec 06 06:26:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0766e430c1afb0fb4c7ba6775a573995c38f161a2b831dda69a5a6f84bb490/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0766e430c1afb0fb4c7ba6775a573995c38f161a2b831dda69a5a6f84bb490/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0766e430c1afb0fb4c7ba6775a573995c38f161a2b831dda69a5a6f84bb490/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0766e430c1afb0fb4c7ba6775a573995c38f161a2b831dda69a5a6f84bb490/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:32 compute-0 podman[77936]: 2025-12-06 06:26:32.153750843 +0000 UTC m=+0.113089272 container init 30661c62f20b997fce44b0d4a3c80e4919a23d4d038b1c8167b266052eaa4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 06:26:32 compute-0 podman[77936]: 2025-12-06 06:26:32.16295705 +0000 UTC m=+0.122295479 container start 30661c62f20b997fce44b0d4a3c80e4919a23d4d038b1c8167b266052eaa4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_zhukovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 06:26:32 compute-0 podman[77936]: 2025-12-06 06:26:32.068417524 +0000 UTC m=+0.027755963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:26:32 compute-0 podman[77936]: 2025-12-06 06:26:32.166631154 +0000 UTC m=+0.125969753 container attach 30661c62f20b997fce44b0d4a3c80e4919a23d4d038b1c8167b266052eaa4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_zhukovsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 06:26:32 compute-0 sudo[77980]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkjcvyddauvymcuuzncuzctqfmtysmar ; /usr/bin/python3'
Dec 06 06:26:32 compute-0 sudo[77980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:26:32 compute-0 python3[77982]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:26:32 compute-0 podman[77983]: 2025-12-06 06:26:32.468928937 +0000 UTC m=+0.041909915 container create c3b4e0dada4028338a5aa8f83679090cac5486a7f3d3940eca316f5e4f5823d2 (image=quay.io/ceph/ceph:v18, name=nostalgic_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:26:32 compute-0 systemd[1]: Started libpod-conmon-c3b4e0dada4028338a5aa8f83679090cac5486a7f3d3940eca316f5e4f5823d2.scope.
Dec 06 06:26:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c639da26445f009d113169938fe7b748efbafdd76af04943b372b5df764dd031/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c639da26445f009d113169938fe7b748efbafdd76af04943b372b5df764dd031/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:32 compute-0 podman[77983]: 2025-12-06 06:26:32.541715442 +0000 UTC m=+0.114696430 container init c3b4e0dada4028338a5aa8f83679090cac5486a7f3d3940eca316f5e4f5823d2 (image=quay.io/ceph/ceph:v18, name=nostalgic_hugle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 06:26:32 compute-0 podman[77983]: 2025-12-06 06:26:32.449966859 +0000 UTC m=+0.022947857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:32 compute-0 podman[77983]: 2025-12-06 06:26:32.548204152 +0000 UTC m=+0.121185120 container start c3b4e0dada4028338a5aa8f83679090cac5486a7f3d3940eca316f5e4f5823d2 (image=quay.io/ceph/ceph:v18, name=nostalgic_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Dec 06 06:26:32 compute-0 podman[77983]: 2025-12-06 06:26:32.552093143 +0000 UTC m=+0.125074151 container attach c3b4e0dada4028338a5aa8f83679090cac5486a7f3d3940eca316f5e4f5823d2 (image=quay.io/ceph/ceph:v18, name=nostalgic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 06:26:32 compute-0 ceph-mgr[74630]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 06 06:26:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 06 06:26:32 compute-0 ceph-mon[74339]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 06 06:26:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Dec 06 06:26:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3886228529' entity='client.admin' 
Dec 06 06:26:33 compute-0 systemd[1]: libpod-c3b4e0dada4028338a5aa8f83679090cac5486a7f3d3940eca316f5e4f5823d2.scope: Deactivated successfully.
Dec 06 06:26:33 compute-0 podman[78430]: 2025-12-06 06:26:33.190472083 +0000 UTC m=+0.025228060 container died c3b4e0dada4028338a5aa8f83679090cac5486a7f3d3940eca316f5e4f5823d2 (image=quay.io/ceph/ceph:v18, name=nostalgic_hugle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 06:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c639da26445f009d113169938fe7b748efbafdd76af04943b372b5df764dd031-merged.mount: Deactivated successfully.
Dec 06 06:26:33 compute-0 podman[78430]: 2025-12-06 06:26:33.238497537 +0000 UTC m=+0.073253484 container remove c3b4e0dada4028338a5aa8f83679090cac5486a7f3d3940eca316f5e4f5823d2 (image=quay.io/ceph/ceph:v18, name=nostalgic_hugle, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:26:33 compute-0 systemd[1]: libpod-conmon-c3b4e0dada4028338a5aa8f83679090cac5486a7f3d3940eca316f5e4f5823d2.scope: Deactivated successfully.
Dec 06 06:26:33 compute-0 sudo[77980]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]: [
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:     {
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:         "available": false,
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:         "ceph_device": false,
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:         "lsm_data": {},
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:         "lvs": [],
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:         "path": "/dev/sr0",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:         "rejected_reasons": [
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "Has a FileSystem",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "Insufficient space (<5GB)"
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:         ],
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:         "sys_api": {
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "actuators": null,
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "device_nodes": "sr0",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "devname": "sr0",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "human_readable_size": "482.00 KB",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "id_bus": "ata",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "model": "QEMU DVD-ROM",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "nr_requests": "2",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "parent": "/dev/sr0",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "partitions": {},
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "path": "/dev/sr0",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "removable": "1",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "rev": "2.5+",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "ro": "0",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "rotational": "1",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "sas_address": "",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "sas_device_handle": "",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "scheduler_mode": "mq-deadline",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "sectors": 0,
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "sectorsize": "2048",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "size": 493568.0,
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "support_discard": "2048",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "type": "disk",
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:             "vendor": "QEMU"
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:         }
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]:     }
Dec 06 06:26:33 compute-0 exciting_zhukovsky[77952]: ]
Dec 06 06:26:33 compute-0 systemd[1]: libpod-30661c62f20b997fce44b0d4a3c80e4919a23d4d038b1c8167b266052eaa4a36.scope: Deactivated successfully.
Dec 06 06:26:33 compute-0 podman[77936]: 2025-12-06 06:26:33.32101488 +0000 UTC m=+1.280353299 container died 30661c62f20b997fce44b0d4a3c80e4919a23d4d038b1c8167b266052eaa4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 06:26:33 compute-0 systemd[1]: libpod-30661c62f20b997fce44b0d4a3c80e4919a23d4d038b1c8167b266052eaa4a36.scope: Consumed 1.145s CPU time.
Dec 06 06:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c0766e430c1afb0fb4c7ba6775a573995c38f161a2b831dda69a5a6f84bb490-merged.mount: Deactivated successfully.
Dec 06 06:26:33 compute-0 podman[77936]: 2025-12-06 06:26:33.373343398 +0000 UTC m=+1.332681817 container remove 30661c62f20b997fce44b0d4a3c80e4919a23d4d038b1c8167b266052eaa4a36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 06:26:33 compute-0 systemd[1]: libpod-conmon-30661c62f20b997fce44b0d4a3c80e4919a23d4d038b1c8167b266052eaa4a36.scope: Deactivated successfully.
Dec 06 06:26:33 compute-0 sudo[77668]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:26:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:26:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:26:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:26:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 06:26:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 06:26:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:26:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:26:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:26:33 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 06 06:26:33 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 06 06:26:33 compute-0 sudo[79067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:33 compute-0 sudo[79067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:33 compute-0 sudo[79067]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:33 compute-0 sudo[79092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 06:26:33 compute-0 sudo[79092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:33 compute-0 sudo[79092]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:33 compute-0 sudo[79129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:33 compute-0 sudo[79129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:33 compute-0 sudo[79129]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:33 compute-0 sudo[79184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph
Dec 06 06:26:33 compute-0 sudo[79184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:33 compute-0 sudo[79184]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:33 compute-0 sudo[79225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:33 compute-0 sudo[79225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:33 compute-0 sudo[79225]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:33 compute-0 sudo[79267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:26:33 compute-0 sudo[79267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:33 compute-0 sudo[79267]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:33 compute-0 sudo[79292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:33 compute-0 sudo[79292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:33 compute-0 sudo[79292]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:33 compute-0 sudo[79317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:33 compute-0 sudo[79317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:33 compute-0 sudo[79317]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 sudo[79352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:34 compute-0 sudo[79352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79352]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 sudo[79397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:26:34 compute-0 sudo[79397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79397]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 ceph-mon[74339]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3886228529' entity='client.admin' 
Dec 06 06:26:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 06:26:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:26:34 compute-0 sudo[79487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nloxxadrtqeeqtygguvhcihvumwyshhy ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765002393.6407561-37244-96328655122989/async_wrapper.py j590448695654 30 /home/zuul/.ansible/tmp/ansible-tmp-1765002393.6407561-37244-96328655122989/AnsiballZ_command.py _'
Dec 06 06:26:34 compute-0 sudo[79487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:26:34 compute-0 sudo[79490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:34 compute-0 sudo[79490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79490]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 sudo[79515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:26:34 compute-0 sudo[79515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79515]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 ansible-async_wrapper.py[79489]: Invoked with j590448695654 30 /home/zuul/.ansible/tmp/ansible-tmp-1765002393.6407561-37244-96328655122989/AnsiballZ_command.py _
Dec 06 06:26:34 compute-0 ansible-async_wrapper.py[79551]: Starting module and watcher
Dec 06 06:26:34 compute-0 ansible-async_wrapper.py[79551]: Start watching 79556 (30)
Dec 06 06:26:34 compute-0 ansible-async_wrapper.py[79556]: Start module (79556)
Dec 06 06:26:34 compute-0 ansible-async_wrapper.py[79489]: Return async_wrapper task started.
Dec 06 06:26:34 compute-0 sudo[79487]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 sudo[79540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:34 compute-0 sudo[79540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79540]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 sudo[79570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:26:34 compute-0 sudo[79570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79570]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 python3[79563]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:26:34 compute-0 sudo[79595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:34 compute-0 sudo[79595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79595]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 sudo[79621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 06:26:34 compute-0 sudo[79621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79621]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:26:34 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:26:34 compute-0 podman[79619]: 2025-12-06 06:26:34.541461574 +0000 UTC m=+0.055736442 container create 355703da4615c21d952e3c8e148f576fe9a890e899775b1653ac1a07dfa70b1f (image=quay.io/ceph/ceph:v18, name=compassionate_goodall, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 06:26:34 compute-0 systemd[1]: Started libpod-conmon-355703da4615c21d952e3c8e148f576fe9a890e899775b1653ac1a07dfa70b1f.scope.
Dec 06 06:26:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:34 compute-0 sudo[79656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:34 compute-0 sudo[79656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 podman[79619]: 2025-12-06 06:26:34.518339834 +0000 UTC m=+0.032614722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ace7582f67aace32f6043be1e5b2721167170f6164794fce72ae2472128dea6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ace7582f67aace32f6043be1e5b2721167170f6164794fce72ae2472128dea6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:34 compute-0 sudo[79656]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 podman[79619]: 2025-12-06 06:26:34.627526872 +0000 UTC m=+0.141801770 container init 355703da4615c21d952e3c8e148f576fe9a890e899775b1653ac1a07dfa70b1f (image=quay.io/ceph/ceph:v18, name=compassionate_goodall, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 06:26:34 compute-0 podman[79619]: 2025-12-06 06:26:34.634971457 +0000 UTC m=+0.149246315 container start 355703da4615c21d952e3c8e148f576fe9a890e899775b1653ac1a07dfa70b1f (image=quay.io/ceph/ceph:v18, name=compassionate_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 06:26:34 compute-0 podman[79619]: 2025-12-06 06:26:34.639772142 +0000 UTC m=+0.154047040 container attach 355703da4615c21d952e3c8e148f576fe9a890e899775b1653ac1a07dfa70b1f (image=quay.io/ceph/ceph:v18, name=compassionate_goodall, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:26:34 compute-0 sudo[79688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config
Dec 06 06:26:34 compute-0 sudo[79688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79688]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 sudo[79714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:34 compute-0 sudo[79714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79714]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 sudo[79739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config
Dec 06 06:26:34 compute-0 sudo[79739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79739]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:34 compute-0 sudo[79764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:34 compute-0 sudo[79764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79764]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 sudo[79789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:26:34 compute-0 sudo[79789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79789]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 sudo[79814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:34 compute-0 sudo[79814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79814]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:34 compute-0 sudo[79840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:34 compute-0 sudo[79840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:34 compute-0 sudo[79840]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[79883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:35 compute-0 sudo[79883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[79883]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[79908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:26:35 compute-0 sudo[79908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[79908]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 ceph-mon[74339]: Updating compute-0:/etc/ceph/ceph.conf
Dec 06 06:26:35 compute-0 sudo[79956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:35 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:26:35 compute-0 sudo[79956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 compassionate_goodall[79683]: 
Dec 06 06:26:35 compute-0 compassionate_goodall[79683]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 06 06:26:35 compute-0 sudo[79956]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 systemd[1]: libpod-355703da4615c21d952e3c8e148f576fe9a890e899775b1653ac1a07dfa70b1f.scope: Deactivated successfully.
Dec 06 06:26:35 compute-0 podman[79619]: 2025-12-06 06:26:35.213484727 +0000 UTC m=+0.727759595 container died 355703da4615c21d952e3c8e148f576fe9a890e899775b1653ac1a07dfa70b1f (image=quay.io/ceph/ceph:v18, name=compassionate_goodall, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:26:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ace7582f67aace32f6043be1e5b2721167170f6164794fce72ae2472128dea6-merged.mount: Deactivated successfully.
Dec 06 06:26:35 compute-0 sudo[79983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:26:35 compute-0 sudo[79983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[79983]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 podman[79619]: 2025-12-06 06:26:35.264596137 +0000 UTC m=+0.778871005 container remove 355703da4615c21d952e3c8e148f576fe9a890e899775b1653ac1a07dfa70b1f (image=quay.io/ceph/ceph:v18, name=compassionate_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:35 compute-0 systemd[1]: libpod-conmon-355703da4615c21d952e3c8e148f576fe9a890e899775b1653ac1a07dfa70b1f.scope: Deactivated successfully.
Dec 06 06:26:35 compute-0 ansible-async_wrapper.py[79556]: Module complete (79556)
Dec 06 06:26:35 compute-0 sudo[80019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:35 compute-0 sudo[80019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80019]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[80044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:26:35 compute-0 sudo[80044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80044]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[80091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:35 compute-0 sudo[80091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80091]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[80117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:26:35 compute-0 sudo[80117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80117]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 06:26:35 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 06:26:35 compute-0 sudo[80142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:35 compute-0 sudo[80142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80142]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[80167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 06:26:35 compute-0 sudo[80167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80167]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[80213]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oweimslhvsxuhdvafffwkasexrigjzcx ; /usr/bin/python3'
Dec 06 06:26:35 compute-0 sudo[80213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:26:35 compute-0 sudo[80218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:35 compute-0 sudo[80218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80218]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[80243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph
Dec 06 06:26:35 compute-0 sudo[80243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80243]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 python3[80217]: ansible-ansible.legacy.async_status Invoked with jid=j590448695654.79489 mode=status _async_dir=/root/.ansible_async
Dec 06 06:26:35 compute-0 sudo[80268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:35 compute-0 sudo[80268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80268]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[80213]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[80293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.client.admin.keyring.new
Dec 06 06:26:35 compute-0 sudo[80293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80293]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[80339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:35 compute-0 sudo[80339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80339]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[80395]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxuzzqbbdrqadvgnkbrfsmaqelktrrgs ; /usr/bin/python3'
Dec 06 06:26:35 compute-0 sudo[80395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:26:35 compute-0 sudo[80386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:35 compute-0 sudo[80386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80386]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:35 compute-0 sudo[80417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:35 compute-0 sudo[80417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:35 compute-0 sudo[80417]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.client.admin.keyring.new
Dec 06 06:26:36 compute-0 sudo[80442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80442]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 python3[80414]: ansible-ansible.legacy.async_status Invoked with jid=j590448695654.79489 mode=cleanup _async_dir=/root/.ansible_async
Dec 06 06:26:36 compute-0 sudo[80395]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:36 compute-0 sudo[80490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80490]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.client.admin.keyring.new
Dec 06 06:26:36 compute-0 sudo[80515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80515]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:36 compute-0 sudo[80540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80540]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.client.admin.keyring.new
Dec 06 06:26:36 compute-0 sudo[80565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80565]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nopjroohfgsforkbknizpdhrbwnncjpv ; /usr/bin/python3'
Dec 06 06:26:36 compute-0 sudo[80630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:26:36 compute-0 sudo[80597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:36 compute-0 sudo[80597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80597]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 06:26:36 compute-0 sudo[80641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80641]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec 06 06:26:36 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec 06 06:26:36 compute-0 sudo[80666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:36 compute-0 sudo[80666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80666]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 python3[80638]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 06:26:36 compute-0 sudo[80630]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config
Dec 06 06:26:36 compute-0 sudo[80692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80692]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:36 compute-0 sudo[80718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80718]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config
Dec 06 06:26:36 compute-0 sudo[80743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80743]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:36 compute-0 sudo[80768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80768]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring.new
Dec 06 06:26:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:36 compute-0 sudo[80793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80793]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:36 compute-0 sudo[80818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80818]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:36 compute-0 sudo[80864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbxfsbjglhjbyjufwvaysuenakwdtehh ; /usr/bin/python3'
Dec 06 06:26:36 compute-0 sudo[80864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:26:36 compute-0 sudo[80868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:36 compute-0 sudo[80868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:36 compute-0 sudo[80868]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:37 compute-0 sudo[80894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:37 compute-0 sudo[80894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:37 compute-0 sudo[80894]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:37 compute-0 python3[80869]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:26:37 compute-0 sudo[80919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring.new
Dec 06 06:26:37 compute-0 sudo[80919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:37 compute-0 sudo[80919]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:37 compute-0 ceph-mon[74339]: Updating compute-0:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:26:37 compute-0 ceph-mon[74339]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:37 compute-0 ceph-mon[74339]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:26:37 compute-0 podman[80941]: 2025-12-06 06:26:37.162410642 +0000 UTC m=+0.111917508 container create ccb1f9fcfceb6d074c13fd56c1d95a326d644eead1790878dd74632dbee6f064 (image=quay.io/ceph/ceph:v18, name=hopeful_herschel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:37 compute-0 systemd[1]: Started libpod-conmon-ccb1f9fcfceb6d074c13fd56c1d95a326d644eead1790878dd74632dbee6f064.scope.
Dec 06 06:26:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60494d2574d6ab4e3fa713006255fcc685407f5fcf5c3d056989d4480ba1351b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60494d2574d6ab4e3fa713006255fcc685407f5fcf5c3d056989d4480ba1351b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60494d2574d6ab4e3fa713006255fcc685407f5fcf5c3d056989d4480ba1351b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:37 compute-0 sudo[80982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:37 compute-0 sudo[80982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:37 compute-0 sudo[80982]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:37 compute-0 podman[80941]: 2025-12-06 06:26:37.140249659 +0000 UTC m=+0.089756525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:37 compute-0 podman[80941]: 2025-12-06 06:26:37.236035434 +0000 UTC m=+0.185542310 container init ccb1f9fcfceb6d074c13fd56c1d95a326d644eead1790878dd74632dbee6f064 (image=quay.io/ceph/ceph:v18, name=hopeful_herschel, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:26:37 compute-0 podman[80941]: 2025-12-06 06:26:37.241647423 +0000 UTC m=+0.191154289 container start ccb1f9fcfceb6d074c13fd56c1d95a326d644eead1790878dd74632dbee6f064 (image=quay.io/ceph/ceph:v18, name=hopeful_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:26:37 compute-0 podman[80941]: 2025-12-06 06:26:37.245120071 +0000 UTC m=+0.194626937 container attach ccb1f9fcfceb6d074c13fd56c1d95a326d644eead1790878dd74632dbee6f064 (image=quay.io/ceph/ceph:v18, name=hopeful_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 06:26:37 compute-0 sudo[81011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring.new
Dec 06 06:26:37 compute-0 sudo[81011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:37 compute-0 sudo[81011]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:37 compute-0 sudo[81036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:37 compute-0 sudo[81036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:37 compute-0 sudo[81036]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:37 compute-0 sudo[81061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring.new
Dec 06 06:26:37 compute-0 sudo[81061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:37 compute-0 sudo[81061]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:37 compute-0 sudo[81086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:37 compute-0 sudo[81086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:37 compute-0 sudo[81086]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:26:37 compute-0 sudo[81111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring.new /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec 06 06:26:37 compute-0 sudo[81111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:37 compute-0 sudo[81111]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:26:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:26:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:26:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:37 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 94e3362b-000f-49da-be85-c688583f23df (Updating crash deployment (+1 -> 1))
Dec 06 06:26:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Dec 06 06:26:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:26:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 06:26:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:26:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:37 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 06 06:26:37 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 06 06:26:37 compute-0 sudo[81153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:37 compute-0 sudo[81153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:37 compute-0 sudo[81153]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:37 compute-0 sudo[81180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:37 compute-0 sudo[81180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:37 compute-0 sudo[81180]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:37 compute-0 sudo[81205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:37 compute-0 sudo[81205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:37 compute-0 sudo[81205]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:37 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:26:37 compute-0 hopeful_herschel[80990]: 
Dec 06 06:26:37 compute-0 hopeful_herschel[80990]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 06 06:26:37 compute-0 sudo[81230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:37 compute-0 sudo[81230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:37 compute-0 systemd[1]: libpod-ccb1f9fcfceb6d074c13fd56c1d95a326d644eead1790878dd74632dbee6f064.scope: Deactivated successfully.
Dec 06 06:26:37 compute-0 podman[80941]: 2025-12-06 06:26:37.813421542 +0000 UTC m=+0.762928418 container died ccb1f9fcfceb6d074c13fd56c1d95a326d644eead1790878dd74632dbee6f064 (image=quay.io/ceph/ceph:v18, name=hopeful_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-60494d2574d6ab4e3fa713006255fcc685407f5fcf5c3d056989d4480ba1351b-merged.mount: Deactivated successfully.
Dec 06 06:26:37 compute-0 podman[80941]: 2025-12-06 06:26:37.886022893 +0000 UTC m=+0.835529749 container remove ccb1f9fcfceb6d074c13fd56c1d95a326d644eead1790878dd74632dbee6f064 (image=quay.io/ceph/ceph:v18, name=hopeful_herschel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 06:26:37 compute-0 systemd[1]: libpod-conmon-ccb1f9fcfceb6d074c13fd56c1d95a326d644eead1790878dd74632dbee6f064.scope: Deactivated successfully.
Dec 06 06:26:37 compute-0 sudo[80864]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:38 compute-0 podman[81309]: 2025-12-06 06:26:38.117821404 +0000 UTC m=+0.036802067 container create cbd803823847083e36b1575473847063edd4e031cecd3ea34edc4592ce51b9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_black, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 06:26:38 compute-0 systemd[1]: Started libpod-conmon-cbd803823847083e36b1575473847063edd4e031cecd3ea34edc4592ce51b9a5.scope.
Dec 06 06:26:38 compute-0 ceph-mon[74339]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 06 06:26:38 compute-0 ceph-mon[74339]: Updating compute-0:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec 06 06:26:38 compute-0 ceph-mon[74339]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:26:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 06:26:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:38 compute-0 podman[81309]: 2025-12-06 06:26:38.195400792 +0000 UTC m=+0.114381495 container init cbd803823847083e36b1575473847063edd4e031cecd3ea34edc4592ce51b9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_black, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:38 compute-0 podman[81309]: 2025-12-06 06:26:38.101855616 +0000 UTC m=+0.020836309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:26:38 compute-0 podman[81309]: 2025-12-06 06:26:38.203754011 +0000 UTC m=+0.122734684 container start cbd803823847083e36b1575473847063edd4e031cecd3ea34edc4592ce51b9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_black, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 06:26:38 compute-0 podman[81309]: 2025-12-06 06:26:38.206903223 +0000 UTC m=+0.125883936 container attach cbd803823847083e36b1575473847063edd4e031cecd3ea34edc4592ce51b9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_black, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:38 compute-0 inspiring_black[81326]: 167 167
Dec 06 06:26:38 compute-0 systemd[1]: libpod-cbd803823847083e36b1575473847063edd4e031cecd3ea34edc4592ce51b9a5.scope: Deactivated successfully.
Dec 06 06:26:38 compute-0 podman[81309]: 2025-12-06 06:26:38.209897622 +0000 UTC m=+0.128878295 container died cbd803823847083e36b1575473847063edd4e031cecd3ea34edc4592ce51b9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:38 compute-0 sudo[81354]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuujeueyknsbxikpmzchtwtkurkwazwj ; /usr/bin/python3'
Dec 06 06:26:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-186de42140ac82fdfa99580da36036b38e87cfe37ee44d66144cc8b0bc7adf46-merged.mount: Deactivated successfully.
Dec 06 06:26:38 compute-0 sudo[81354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:26:38 compute-0 podman[81309]: 2025-12-06 06:26:38.25259015 +0000 UTC m=+0.171570833 container remove cbd803823847083e36b1575473847063edd4e031cecd3ea34edc4592ce51b9a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 06:26:38 compute-0 systemd[1]: libpod-conmon-cbd803823847083e36b1575473847063edd4e031cecd3ea34edc4592ce51b9a5.scope: Deactivated successfully.
Dec 06 06:26:38 compute-0 systemd[1]: Reloading.
Dec 06 06:26:38 compute-0 systemd-rc-local-generator[81396]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:26:38 compute-0 systemd-sysv-generator[81399]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:26:38 compute-0 python3[81367]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:26:38 compute-0 podman[81406]: 2025-12-06 06:26:38.44452338 +0000 UTC m=+0.040510023 container create 55b1f3f3cff46b0498a6a8442113afdb0834fdb78453545a544feadeae10dd8c (image=quay.io/ceph/ceph:v18, name=confident_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:26:38 compute-0 podman[81406]: 2025-12-06 06:26:38.428227395 +0000 UTC m=+0.024214058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:38 compute-0 systemd[1]: Started libpod-conmon-55b1f3f3cff46b0498a6a8442113afdb0834fdb78453545a544feadeae10dd8c.scope.
Dec 06 06:26:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52c231358c140021b5ce1e2e655a17b3997f50c98420cea1677d6cb8945497b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52c231358c140021b5ce1e2e655a17b3997f50c98420cea1677d6cb8945497b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52c231358c140021b5ce1e2e655a17b3997f50c98420cea1677d6cb8945497b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:38 compute-0 podman[81406]: 2025-12-06 06:26:38.60558512 +0000 UTC m=+0.201571843 container init 55b1f3f3cff46b0498a6a8442113afdb0834fdb78453545a544feadeae10dd8c (image=quay.io/ceph/ceph:v18, name=confident_moore, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:26:38 compute-0 systemd[1]: Reloading.
Dec 06 06:26:38 compute-0 podman[81406]: 2025-12-06 06:26:38.617752084 +0000 UTC m=+0.213738727 container start 55b1f3f3cff46b0498a6a8442113afdb0834fdb78453545a544feadeae10dd8c (image=quay.io/ceph/ceph:v18, name=confident_moore, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:26:38 compute-0 podman[81406]: 2025-12-06 06:26:38.621304948 +0000 UTC m=+0.217291611 container attach 55b1f3f3cff46b0498a6a8442113afdb0834fdb78453545a544feadeae10dd8c (image=quay.io/ceph/ceph:v18, name=confident_moore, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:26:38 compute-0 systemd-rc-local-generator[81455]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:26:38 compute-0 systemd-sysv-generator[81460]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:26:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:38 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:26:39 compute-0 podman[81534]: 2025-12-06 06:26:39.127825735 +0000 UTC m=+0.048137503 container create 53f2ff3a1841753c1b6aa46133954b82acdcc1cffede46f99781f975bf90b4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:26:39 compute-0 ceph-mon[74339]: Deploying daemon crash.compute-0 on compute-0
Dec 06 06:26:39 compute-0 ceph-mon[74339]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:26:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Dec 06 06:26:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/744058964' entity='client.admin' 
Dec 06 06:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d478221c3732c4f17deb4ba3c10561a6e2e6de0e082f93f090670e4d19cf054/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d478221c3732c4f17deb4ba3c10561a6e2e6de0e082f93f090670e4d19cf054/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d478221c3732c4f17deb4ba3c10561a6e2e6de0e082f93f090670e4d19cf054/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d478221c3732c4f17deb4ba3c10561a6e2e6de0e082f93f090670e4d19cf054/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:39 compute-0 podman[81534]: 2025-12-06 06:26:39.106187563 +0000 UTC m=+0.026499311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:26:39 compute-0 systemd[1]: libpod-55b1f3f3cff46b0498a6a8442113afdb0834fdb78453545a544feadeae10dd8c.scope: Deactivated successfully.
Dec 06 06:26:39 compute-0 podman[81406]: 2025-12-06 06:26:39.215989362 +0000 UTC m=+0.811976045 container died 55b1f3f3cff46b0498a6a8442113afdb0834fdb78453545a544feadeae10dd8c (image=quay.io/ceph/ceph:v18, name=confident_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 06:26:39 compute-0 podman[81534]: 2025-12-06 06:26:39.221901825 +0000 UTC m=+0.142213563 container init 53f2ff3a1841753c1b6aa46133954b82acdcc1cffede46f99781f975bf90b4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 06:26:39 compute-0 podman[81534]: 2025-12-06 06:26:39.229308537 +0000 UTC m=+0.149620255 container start 53f2ff3a1841753c1b6aa46133954b82acdcc1cffede46f99781f975bf90b4d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:26:39 compute-0 bash[81534]: 53f2ff3a1841753c1b6aa46133954b82acdcc1cffede46f99781f975bf90b4d3
Dec 06 06:26:39 compute-0 systemd[1]: Started Ceph crash.compute-0 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d52c231358c140021b5ce1e2e655a17b3997f50c98420cea1677d6cb8945497b-merged.mount: Deactivated successfully.
Dec 06 06:26:39 compute-0 sudo[81230]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:39 compute-0 podman[81406]: 2025-12-06 06:26:39.277886955 +0000 UTC m=+0.873873598 container remove 55b1f3f3cff46b0498a6a8442113afdb0834fdb78453545a544feadeae10dd8c (image=quay.io/ceph/ceph:v18, name=confident_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:26:39 compute-0 systemd[1]: libpod-conmon-55b1f3f3cff46b0498a6a8442113afdb0834fdb78453545a544feadeae10dd8c.scope: Deactivated successfully.
Dec 06 06:26:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:26:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:26:39 compute-0 sudo[81354]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 06 06:26:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:39 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 94e3362b-000f-49da-be85-c688583f23df (Updating crash deployment (+1 -> 1))
Dec 06 06:26:39 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 94e3362b-000f-49da-be85-c688583f23df (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec 06 06:26:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 06 06:26:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:39 compute-0 ansible-async_wrapper.py[79551]: Done in kid B.
Dec 06 06:26:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 95469ed0-56dc-465f-b5b9-883f44418dcf does not exist
Dec 06 06:26:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 06 06:26:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 133372ac-3a09-41e9-8668-3debfd46b55b does not exist
Dec 06 06:26:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 06 06:26:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:39 compute-0 sudo[81567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:39 compute-0 sudo[81567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:39 compute-0 sudo[81567]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:39 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-0[81549]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 06 06:26:39 compute-0 sudo[81592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:26:39 compute-0 sudo[81640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgedkvwtmkqnjincciubxledbjntubdj ; /usr/bin/python3'
Dec 06 06:26:39 compute-0 sudo[81640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:26:39 compute-0 sudo[81592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:39 compute-0 sudo[81592]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:39 compute-0 sudo[81645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:39 compute-0 sudo[81645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:39 compute-0 sudo[81645]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:39 compute-0 sudo[81670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:39 compute-0 sudo[81670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:39 compute-0 sudo[81670]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:39 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-0[81549]: 2025-12-06T06:26:39.622+0000 7f98fffa0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 06 06:26:39 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-0[81549]: 2025-12-06T06:26:39.622+0000 7f98fffa0640 -1 AuthRegistry(0x7f98f8067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 06 06:26:39 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-0[81549]: 2025-12-06T06:26:39.623+0000 7f98fffa0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 06 06:26:39 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-0[81549]: 2025-12-06T06:26:39.623+0000 7f98fffa0640 -1 AuthRegistry(0x7f98fff9f000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 06 06:26:39 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-0[81549]: 2025-12-06T06:26:39.624+0000 7f98fdd15640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 06 06:26:39 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-0[81549]: 2025-12-06T06:26:39.625+0000 7f98fffa0640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 06 06:26:39 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-0[81549]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 06 06:26:39 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-crash-compute-0[81549]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 06 06:26:39 compute-0 python3[81643]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:26:39 compute-0 sudo[81696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:39 compute-0 sudo[81696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:39 compute-0 sudo[81696]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:39 compute-0 podman[81723]: 2025-12-06 06:26:39.705361026 +0000 UTC m=+0.044221929 container create cb5ffea2ced0c909b75f170e39277e921c2ca4f093c237995f2c8efa96c21b92 (image=quay.io/ceph/ceph:v18, name=recursing_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 06:26:39 compute-0 systemd[1]: Started libpod-conmon-cb5ffea2ced0c909b75f170e39277e921c2ca4f093c237995f2c8efa96c21b92.scope.
Dec 06 06:26:39 compute-0 sudo[81741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:26:39 compute-0 sudo[81741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60c2fbcbbda14854851c281af9336236f785d88845acc1aaaf2de05be4af489/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60c2fbcbbda14854851c281af9336236f785d88845acc1aaaf2de05be4af489/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60c2fbcbbda14854851c281af9336236f785d88845acc1aaaf2de05be4af489/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:39 compute-0 podman[81723]: 2025-12-06 06:26:39.686278719 +0000 UTC m=+0.025139652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:39 compute-0 podman[81723]: 2025-12-06 06:26:39.80337249 +0000 UTC m=+0.142233443 container init cb5ffea2ced0c909b75f170e39277e921c2ca4f093c237995f2c8efa96c21b92 (image=quay.io/ceph/ceph:v18, name=recursing_diffie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:39 compute-0 podman[81723]: 2025-12-06 06:26:39.812502761 +0000 UTC m=+0.151363664 container start cb5ffea2ced0c909b75f170e39277e921c2ca4f093c237995f2c8efa96c21b92 (image=quay.io/ceph/ceph:v18, name=recursing_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:39 compute-0 podman[81723]: 2025-12-06 06:26:39.830148373 +0000 UTC m=+0.169009276 container attach cb5ffea2ced0c909b75f170e39277e921c2ca4f093c237995f2c8efa96c21b92 (image=quay.io/ceph/ceph:v18, name=recursing_diffie, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 06:26:40 compute-0 ceph-mon[74339]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/744058964' entity='client.admin' 
Dec 06 06:26:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 podman[81845]: 2025-12-06 06:26:40.196469341 +0000 UTC m=+0.056115272 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 06:26:40 compute-0 podman[81845]: 2025-12-06 06:26:40.294516347 +0000 UTC m=+0.154162298 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/946752000' entity='client.admin' 
Dec 06 06:26:40 compute-0 systemd[1]: libpod-cb5ffea2ced0c909b75f170e39277e921c2ca4f093c237995f2c8efa96c21b92.scope: Deactivated successfully.
Dec 06 06:26:40 compute-0 podman[81723]: 2025-12-06 06:26:40.383159161 +0000 UTC m=+0.722020114 container died cb5ffea2ced0c909b75f170e39277e921c2ca4f093c237995f2c8efa96c21b92 (image=quay.io/ceph/ceph:v18, name=recursing_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:26:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f60c2fbcbbda14854851c281af9336236f785d88845acc1aaaf2de05be4af489-merged.mount: Deactivated successfully.
Dec 06 06:26:40 compute-0 podman[81723]: 2025-12-06 06:26:40.440203197 +0000 UTC m=+0.779064100 container remove cb5ffea2ced0c909b75f170e39277e921c2ca4f093c237995f2c8efa96c21b92 (image=quay.io/ceph/ceph:v18, name=recursing_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 06:26:40 compute-0 systemd[1]: libpod-conmon-cb5ffea2ced0c909b75f170e39277e921c2ca4f093c237995f2c8efa96c21b92.scope: Deactivated successfully.
Dec 06 06:26:40 compute-0 sudo[81640]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:40 compute-0 sudo[81741]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 74fb6696-6fb7-4c76-b740-a2077d52cb0b does not exist
Dec 06 06:26:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev acce4819-656b-46ac-9796-defc2d4860e6 does not exist
Dec 06 06:26:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f88c181b-63d3-463f-b580-acd3377389aa does not exist
Dec 06 06:26:40 compute-0 sudo[81946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:40 compute-0 sudo[81946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:40 compute-0 sudo[81946]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:40 compute-0 sudo[81994]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvtkcrfxjensvjbdkvchekhvjurjrtxm ; /usr/bin/python3'
Dec 06 06:26:40 compute-0 sudo[81994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:26:40 compute-0 sudo[81995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:26:40 compute-0 sudo[81995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:40 compute-0 sudo[81995]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:40 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 06 06:26:40 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:26:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:40 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 06 06:26:40 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 06 06:26:40 compute-0 sudo[82022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:40 compute-0 sudo[82022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:40 compute-0 sudo[82022]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:40 compute-0 python3[82004]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:26:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:40 compute-0 sudo[82047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:40 compute-0 sudo[82047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:40 compute-0 podman[82050]: 2025-12-06 06:26:40.847054347 +0000 UTC m=+0.041293696 container create 4c6c7b383289f335f6fba7517f55e8027d912df2650833b428928835d4697ff4 (image=quay.io/ceph/ceph:v18, name=romantic_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:26:40 compute-0 sudo[82047]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:40 compute-0 systemd[1]: Started libpod-conmon-4c6c7b383289f335f6fba7517f55e8027d912df2650833b428928835d4697ff4.scope.
Dec 06 06:26:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:40 compute-0 sudo[82084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39552446ae4a8213e850e115aeced654dcba628f886b8036bbdde069f9dadeb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39552446ae4a8213e850e115aeced654dcba628f886b8036bbdde069f9dadeb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39552446ae4a8213e850e115aeced654dcba628f886b8036bbdde069f9dadeb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:40 compute-0 sudo[82084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:40 compute-0 sudo[82084]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:40 compute-0 podman[82050]: 2025-12-06 06:26:40.829705039 +0000 UTC m=+0.023944418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:40 compute-0 podman[82050]: 2025-12-06 06:26:40.930898408 +0000 UTC m=+0.125137777 container init 4c6c7b383289f335f6fba7517f55e8027d912df2650833b428928835d4697ff4 (image=quay.io/ceph/ceph:v18, name=romantic_bouman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:26:40 compute-0 podman[82050]: 2025-12-06 06:26:40.938663869 +0000 UTC m=+0.132903218 container start 4c6c7b383289f335f6fba7517f55e8027d912df2650833b428928835d4697ff4 (image=quay.io/ceph/ceph:v18, name=romantic_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:40 compute-0 podman[82050]: 2025-12-06 06:26:40.943658528 +0000 UTC m=+0.137897937 container attach 4c6c7b383289f335f6fba7517f55e8027d912df2650833b428928835d4697ff4 (image=quay.io/ceph/ceph:v18, name=romantic_bouman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 06:26:40 compute-0 sudo[82115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:40 compute-0 sudo[82115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:41 compute-0 podman[82158]: 2025-12-06 06:26:41.234303269 +0000 UTC m=+0.042390303 container create f31ff8a2f185f68d25945a1c1ee6f2c35ecd3a844f0f683d43d36107d4d0309c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 06:26:41 compute-0 systemd[1]: Started libpod-conmon-f31ff8a2f185f68d25945a1c1ee6f2c35ecd3a844f0f683d43d36107d4d0309c.scope.
Dec 06 06:26:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:41 compute-0 podman[82158]: 2025-12-06 06:26:41.299064242 +0000 UTC m=+0.107151296 container init f31ff8a2f185f68d25945a1c1ee6f2c35ecd3a844f0f683d43d36107d4d0309c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:26:41 compute-0 podman[82158]: 2025-12-06 06:26:41.303900119 +0000 UTC m=+0.111987143 container start f31ff8a2f185f68d25945a1c1ee6f2c35ecd3a844f0f683d43d36107d4d0309c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:26:41 compute-0 podman[82158]: 2025-12-06 06:26:41.3074189 +0000 UTC m=+0.115505934 container attach f31ff8a2f185f68d25945a1c1ee6f2c35ecd3a844f0f683d43d36107d4d0309c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:26:41 compute-0 kind_lamarr[82193]: 167 167
Dec 06 06:26:41 compute-0 systemd[1]: libpod-f31ff8a2f185f68d25945a1c1ee6f2c35ecd3a844f0f683d43d36107d4d0309c.scope: Deactivated successfully.
Dec 06 06:26:41 compute-0 conmon[82193]: conmon f31ff8a2f185f68d2594 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f31ff8a2f185f68d25945a1c1ee6f2c35ecd3a844f0f683d43d36107d4d0309c.scope/container/memory.events
Dec 06 06:26:41 compute-0 podman[82158]: 2025-12-06 06:26:41.21343916 +0000 UTC m=+0.021526214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:26:41 compute-0 podman[82158]: 2025-12-06 06:26:41.310587694 +0000 UTC m=+0.118674728 container died f31ff8a2f185f68d25945a1c1ee6f2c35ecd3a844f0f683d43d36107d4d0309c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 06 06:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a52265265fa079e55ea97f39389bab00293d3642d940b6cfc98def466fdb7f4-merged.mount: Deactivated successfully.
Dec 06 06:26:41 compute-0 podman[82158]: 2025-12-06 06:26:41.358076795 +0000 UTC m=+0.166163829 container remove f31ff8a2f185f68d25945a1c1ee6f2c35ecd3a844f0f683d43d36107d4d0309c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/946752000' entity='client.admin' 
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:41 compute-0 ceph-mon[74339]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:26:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:41 compute-0 ceph-mon[74339]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 06 06:26:41 compute-0 ceph-mon[74339]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:41 compute-0 systemd[1]: libpod-conmon-f31ff8a2f185f68d25945a1c1ee6f2c35ecd3a844f0f683d43d36107d4d0309c.scope: Deactivated successfully.
Dec 06 06:26:41 compute-0 sudo[82115]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:26:41 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:26:41 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:41 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.sfzyix (unknown last config time)...
Dec 06 06:26:41 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.sfzyix (unknown last config time)...
Dec 06 06:26:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.sfzyix", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec 06 06:26:41 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.sfzyix", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:26:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 06:26:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:26:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:26:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:41 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.sfzyix on compute-0
Dec 06 06:26:41 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.sfzyix on compute-0
Dec 06 06:26:41 compute-0 sudo[82212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:41 compute-0 sudo[82212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:41 compute-0 sudo[82212]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Dec 06 06:26:41 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1550600185' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 06 06:26:41 compute-0 sudo[82237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:41 compute-0 sudo[82237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:41 compute-0 sudo[82237]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:41 compute-0 sudo[82263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:41 compute-0 sudo[82263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:41 compute-0 sudo[82263]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:41 compute-0 sudo[82288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:26:41 compute-0 sudo[82288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:41 compute-0 podman[82329]: 2025-12-06 06:26:41.902163888 +0000 UTC m=+0.038375411 container create 1842c0d51c2361de0a68a0c76259c2c7f5ef82c444d129957410a1c6b2a0907d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 06:26:41 compute-0 systemd[1]: Started libpod-conmon-1842c0d51c2361de0a68a0c76259c2c7f5ef82c444d129957410a1c6b2a0907d.scope.
Dec 06 06:26:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:41 compute-0 podman[82329]: 2025-12-06 06:26:41.883757596 +0000 UTC m=+0.019969089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:26:41 compute-0 podman[82329]: 2025-12-06 06:26:41.985537471 +0000 UTC m=+0.121749024 container init 1842c0d51c2361de0a68a0c76259c2c7f5ef82c444d129957410a1c6b2a0907d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 06:26:41 compute-0 podman[82329]: 2025-12-06 06:26:41.997116728 +0000 UTC m=+0.133328201 container start 1842c0d51c2361de0a68a0c76259c2c7f5ef82c444d129957410a1c6b2a0907d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:26:42 compute-0 podman[82329]: 2025-12-06 06:26:42.000943554 +0000 UTC m=+0.137155077 container attach 1842c0d51c2361de0a68a0c76259c2c7f5ef82c444d129957410a1c6b2a0907d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:42 compute-0 exciting_wright[82346]: 167 167
Dec 06 06:26:42 compute-0 systemd[1]: libpod-1842c0d51c2361de0a68a0c76259c2c7f5ef82c444d129957410a1c6b2a0907d.scope: Deactivated successfully.
Dec 06 06:26:42 compute-0 podman[82329]: 2025-12-06 06:26:42.017230778 +0000 UTC m=+0.153442281 container died 1842c0d51c2361de0a68a0c76259c2c7f5ef82c444d129957410a1c6b2a0907d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:26:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d854b5c5a93bfb87f5ce9721fcce260228d190c34843eb8468ae73bcab5b468-merged.mount: Deactivated successfully.
Dec 06 06:26:42 compute-0 podman[82329]: 2025-12-06 06:26:42.059370331 +0000 UTC m=+0.195581814 container remove 1842c0d51c2361de0a68a0c76259c2c7f5ef82c444d129957410a1c6b2a0907d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:26:42 compute-0 systemd[1]: libpod-conmon-1842c0d51c2361de0a68a0c76259c2c7f5ef82c444d129957410a1c6b2a0907d.scope: Deactivated successfully.
Dec 06 06:26:42 compute-0 sudo[82288]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:26:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:26:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:26:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:26:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:26:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:26:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:42 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4e0543f1-11cd-4732-830b-1157df1597a0 does not exist
Dec 06 06:26:42 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev cf551961-790c-4443-96db-73cf16cb897b does not exist
Dec 06 06:26:42 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3563cdf3-260b-4ad7-a2cb-b31f58c81aca does not exist
Dec 06 06:26:42 compute-0 sudo[82363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:42 compute-0 sudo[82363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:42 compute-0 sudo[82363]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:42 compute-0 sudo[82388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:26:42 compute-0 sudo[82388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:42 compute-0 sudo[82388]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:42 compute-0 ceph-mon[74339]: Reconfiguring mgr.compute-0.sfzyix (unknown last config time)...
Dec 06 06:26:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.sfzyix", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:26:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:26:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:42 compute-0 ceph-mon[74339]: Reconfiguring daemon mgr.compute-0.sfzyix on compute-0
Dec 06 06:26:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1550600185' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec 06 06:26:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:26:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 06 06:26:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 06:26:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1550600185' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 06 06:26:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 06 06:26:42 compute-0 romantic_bouman[82099]: set require_min_compat_client to mimic
Dec 06 06:26:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 06 06:26:42 compute-0 systemd[1]: libpod-4c6c7b383289f335f6fba7517f55e8027d912df2650833b428928835d4697ff4.scope: Deactivated successfully.
Dec 06 06:26:42 compute-0 podman[82050]: 2025-12-06 06:26:42.447621822 +0000 UTC m=+1.641861211 container died 4c6c7b383289f335f6fba7517f55e8027d912df2650833b428928835d4697ff4 (image=quay.io/ceph/ceph:v18, name=romantic_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 06:26:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:26:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b39552446ae4a8213e850e115aeced654dcba628f886b8036bbdde069f9dadeb-merged.mount: Deactivated successfully.
Dec 06 06:26:42 compute-0 podman[82050]: 2025-12-06 06:26:42.507189459 +0000 UTC m=+1.701428818 container remove 4c6c7b383289f335f6fba7517f55e8027d912df2650833b428928835d4697ff4 (image=quay.io/ceph/ceph:v18, name=romantic_bouman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:26:42 compute-0 systemd[1]: libpod-conmon-4c6c7b383289f335f6fba7517f55e8027d912df2650833b428928835d4697ff4.scope: Deactivated successfully.
Dec 06 06:26:42 compute-0 sudo[81994]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:42 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 1 completed events
Dec 06 06:26:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:26:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:26:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:26:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:26:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:26:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:26:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:26:42 compute-0 sudo[82448]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dblnrogpstuciwdgmnznbirtmpfqvcqw ; /usr/bin/python3'
Dec 06 06:26:42 compute-0 sudo[82448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:26:43 compute-0 python3[82450]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:26:43 compute-0 podman[82451]: 2025-12-06 06:26:43.160389146 +0000 UTC m=+0.047070278 container create 164a50edee16837cd732c2b7cf178b62c9077656d14da32d1803f868f972e97d (image=quay.io/ceph/ceph:v18, name=peaceful_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 06:26:43 compute-0 systemd[1]: Started libpod-conmon-164a50edee16837cd732c2b7cf178b62c9077656d14da32d1803f868f972e97d.scope.
Dec 06 06:26:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65467b80888de718e33b84543ffdee41a76dfbbbec0470482b295ff298c79f25/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65467b80888de718e33b84543ffdee41a76dfbbbec0470482b295ff298c79f25/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65467b80888de718e33b84543ffdee41a76dfbbbec0470482b295ff298c79f25/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:43 compute-0 podman[82451]: 2025-12-06 06:26:43.135444139 +0000 UTC m=+0.022125261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:43 compute-0 podman[82451]: 2025-12-06 06:26:43.233532709 +0000 UTC m=+0.120213842 container init 164a50edee16837cd732c2b7cf178b62c9077656d14da32d1803f868f972e97d (image=quay.io/ceph/ceph:v18, name=peaceful_keldysh, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec 06 06:26:43 compute-0 podman[82451]: 2025-12-06 06:26:43.239768098 +0000 UTC m=+0.126449200 container start 164a50edee16837cd732c2b7cf178b62c9077656d14da32d1803f868f972e97d (image=quay.io/ceph/ceph:v18, name=peaceful_keldysh, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 06:26:43 compute-0 podman[82451]: 2025-12-06 06:26:43.249340244 +0000 UTC m=+0.136021366 container attach 164a50edee16837cd732c2b7cf178b62c9077656d14da32d1803f868f972e97d (image=quay.io/ceph/ceph:v18, name=peaceful_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec 06 06:26:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1550600185' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 06 06:26:43 compute-0 ceph-mon[74339]: osdmap e3: 0 total, 0 up, 0 in
Dec 06 06:26:43 compute-0 ceph-mon[74339]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:43 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:43 compute-0 sudo[82490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:43 compute-0 sudo[82490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:43 compute-0 sudo[82490]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:43 compute-0 sudo[82515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:26:43 compute-0 sudo[82515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:43 compute-0 sudo[82515]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:44 compute-0 sudo[82540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:44 compute-0 sudo[82540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:44 compute-0 sudo[82540]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:44 compute-0 sudo[82565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Dec 06 06:26:44 compute-0 sudo[82565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:44 compute-0 sudo[82565]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 06 06:26:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 06 06:26:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 06 06:26:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 06 06:26:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:44 compute-0 ceph-mgr[74630]: [cephadm INFO root] Added host compute-0
Dec 06 06:26:44 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 06 06:26:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:26:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:26:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:26:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:26:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 885e0b98-4c46-4958-9541-d2010d20f70e does not exist
Dec 06 06:26:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6bcd35eb-4ad9-4f88-a181-5c6e584b767d does not exist
Dec 06 06:26:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 53514a95-cc8e-4871-821c-dbdcdb311dd5 does not exist
Dec 06 06:26:44 compute-0 ceph-mon[74339]: from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:26:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:26:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:26:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:44 compute-0 sudo[82610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:26:44 compute-0 sudo[82610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:44 compute-0 sudo[82610]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:44 compute-0 sudo[82635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:26:44 compute-0 sudo[82635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:26:44 compute-0 sudo[82635]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:45 compute-0 ceph-mon[74339]: Added host compute-0
Dec 06 06:26:45 compute-0 ceph-mon[74339]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:45 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Dec 06 06:26:45 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Dec 06 06:26:46 compute-0 ceph-mon[74339]: Deploying cephadm binary to compute-1
Dec 06 06:26:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:47 compute-0 ceph-mon[74339]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:26:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 06 06:26:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:49 compute-0 ceph-mgr[74630]: [cephadm INFO root] Added host compute-1
Dec 06 06:26:49 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Added host compute-1
Dec 06 06:26:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:26:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:26:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:50 compute-0 ceph-mon[74339]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:50 compute-0 ceph-mon[74339]: Added host compute-1
Dec 06 06:26:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:50 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Dec 06 06:26:50 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Dec 06 06:26:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:26:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:26:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:52 compute-0 ceph-mon[74339]: Deploying cephadm binary to compute-2
Dec 06 06:26:52 compute-0 ceph-mon[74339]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:53 compute-0 ceph-mon[74339]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec 06 06:26:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:53 compute-0 ceph-mgr[74630]: [cephadm INFO root] Added host compute-2
Dec 06 06:26:53 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Added host compute-2
Dec 06 06:26:53 compute-0 ceph-mgr[74630]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 06 06:26:53 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 06 06:26:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 06 06:26:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:53 compute-0 ceph-mgr[74630]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 06 06:26:53 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 06 06:26:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 06 06:26:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:53 compute-0 ceph-mgr[74630]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 06 06:26:53 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 06 06:26:53 compute-0 ceph-mgr[74630]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Dec 06 06:26:53 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Dec 06 06:26:53 compute-0 ceph-mgr[74630]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 06 06:26:53 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 06 06:26:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Dec 06 06:26:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:54 compute-0 peaceful_keldysh[82466]: Added host 'compute-0' with addr '192.168.122.100'
Dec 06 06:26:54 compute-0 peaceful_keldysh[82466]: Added host 'compute-1' with addr '192.168.122.101'
Dec 06 06:26:54 compute-0 peaceful_keldysh[82466]: Added host 'compute-2' with addr '192.168.122.102'
Dec 06 06:26:54 compute-0 peaceful_keldysh[82466]: Scheduled mon update...
Dec 06 06:26:54 compute-0 peaceful_keldysh[82466]: Scheduled mgr update...
Dec 06 06:26:54 compute-0 peaceful_keldysh[82466]: Scheduled osd.default_drive_group update...
Dec 06 06:26:54 compute-0 systemd[1]: libpod-164a50edee16837cd732c2b7cf178b62c9077656d14da32d1803f868f972e97d.scope: Deactivated successfully.
Dec 06 06:26:54 compute-0 podman[82451]: 2025-12-06 06:26:54.044973211 +0000 UTC m=+10.931654393 container died 164a50edee16837cd732c2b7cf178b62c9077656d14da32d1803f868f972e97d (image=quay.io/ceph/ceph:v18, name=peaceful_keldysh, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Dec 06 06:26:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-65467b80888de718e33b84543ffdee41a76dfbbbec0470482b295ff298c79f25-merged.mount: Deactivated successfully.
Dec 06 06:26:54 compute-0 podman[82451]: 2025-12-06 06:26:54.093330741 +0000 UTC m=+10.980011843 container remove 164a50edee16837cd732c2b7cf178b62c9077656d14da32d1803f868f972e97d (image=quay.io/ceph/ceph:v18, name=peaceful_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:26:54 compute-0 systemd[1]: libpod-conmon-164a50edee16837cd732c2b7cf178b62c9077656d14da32d1803f868f972e97d.scope: Deactivated successfully.
Dec 06 06:26:54 compute-0 sudo[82448]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:54 compute-0 sudo[82696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aztaemydfwndcottyrkgyfflnnlvgdxp ; /usr/bin/python3'
Dec 06 06:26:54 compute-0 sudo[82696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:26:54 compute-0 python3[82698]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:26:54 compute-0 podman[82700]: 2025-12-06 06:26:54.576091536 +0000 UTC m=+0.037546075 container create 73cc2fff6574ebd2b2fef09a643592921a45f97aaa9cd6c0c650ebeb041acf01 (image=quay.io/ceph/ceph:v18, name=jolly_raman, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Dec 06 06:26:54 compute-0 systemd[1]: Started libpod-conmon-73cc2fff6574ebd2b2fef09a643592921a45f97aaa9cd6c0c650ebeb041acf01.scope.
Dec 06 06:26:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923dc79b4e210cd7732407e0c0d2c82c690cddbe88c71a8e115cc0a2f1097f71/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923dc79b4e210cd7732407e0c0d2c82c690cddbe88c71a8e115cc0a2f1097f71/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923dc79b4e210cd7732407e0c0d2c82c690cddbe88c71a8e115cc0a2f1097f71/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:26:54 compute-0 podman[82700]: 2025-12-06 06:26:54.648811156 +0000 UTC m=+0.110265715 container init 73cc2fff6574ebd2b2fef09a643592921a45f97aaa9cd6c0c650ebeb041acf01 (image=quay.io/ceph/ceph:v18, name=jolly_raman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:26:54 compute-0 podman[82700]: 2025-12-06 06:26:54.557933774 +0000 UTC m=+0.019388333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:26:54 compute-0 podman[82700]: 2025-12-06 06:26:54.657214549 +0000 UTC m=+0.118669088 container start 73cc2fff6574ebd2b2fef09a643592921a45f97aaa9cd6c0c650ebeb041acf01 (image=quay.io/ceph/ceph:v18, name=jolly_raman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:26:54 compute-0 podman[82700]: 2025-12-06 06:26:54.660170445 +0000 UTC m=+0.121625004 container attach 73cc2fff6574ebd2b2fef09a643592921a45f97aaa9cd6c0c650ebeb041acf01 (image=quay.io/ceph/ceph:v18, name=jolly_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:26:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:54 compute-0 ceph-mon[74339]: Added host compute-2
Dec 06 06:26:54 compute-0 ceph-mon[74339]: Saving service mon spec with placement compute-0;compute-1;compute-2
Dec 06 06:26:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:54 compute-0 ceph-mon[74339]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec 06 06:26:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:54 compute-0 ceph-mon[74339]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 06 06:26:54 compute-0 ceph-mon[74339]: Marking host: compute-1 for OSDSpec preview refresh.
Dec 06 06:26:54 compute-0 ceph-mon[74339]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec 06 06:26:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:26:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 06 06:26:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2004388678' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:26:55 compute-0 jolly_raman[82716]: 
Dec 06 06:26:55 compute-0 jolly_raman[82716]: {"fsid":"40a1bae4-cf76-5610-8dab-c75116dfe0bb","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":92,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-06T06:25:19.129259+0000","services":{}},"progress_events":{}}
Dec 06 06:26:55 compute-0 systemd[1]: libpod-73cc2fff6574ebd2b2fef09a643592921a45f97aaa9cd6c0c650ebeb041acf01.scope: Deactivated successfully.
Dec 06 06:26:55 compute-0 podman[82700]: 2025-12-06 06:26:55.343018675 +0000 UTC m=+0.804473214 container died 73cc2fff6574ebd2b2fef09a643592921a45f97aaa9cd6c0c650ebeb041acf01 (image=quay.io/ceph/ceph:v18, name=jolly_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 06:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-923dc79b4e210cd7732407e0c0d2c82c690cddbe88c71a8e115cc0a2f1097f71-merged.mount: Deactivated successfully.
Dec 06 06:26:55 compute-0 podman[82700]: 2025-12-06 06:26:55.386840682 +0000 UTC m=+0.848295221 container remove 73cc2fff6574ebd2b2fef09a643592921a45f97aaa9cd6c0c650ebeb041acf01 (image=quay.io/ceph/ceph:v18, name=jolly_raman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 06:26:55 compute-0 systemd[1]: libpod-conmon-73cc2fff6574ebd2b2fef09a643592921a45f97aaa9cd6c0c650ebeb041acf01.scope: Deactivated successfully.
Dec 06 06:26:55 compute-0 sudo[82696]: pam_unix(sudo:session): session closed for user root
Dec 06 06:26:56 compute-0 ceph-mon[74339]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2004388678' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:26:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:26:58 compute-0 ceph-mon[74339]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:26:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:00 compute-0 ceph-mon[74339]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:02 compute-0 ceph-mon[74339]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:27:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:04 compute-0 ceph-mon[74339]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:06 compute-0 ceph-mon[74339]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:27:07 compute-0 ceph-mon[74339]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:10 compute-0 ceph-mon[74339]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:12 compute-0 ceph-mon[74339]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:27:12
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [balancer INFO root] No pools available
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:27:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:27:13 compute-0 ceph-mon[74339]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:15 compute-0 ceph-mon[74339]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:27:17 compute-0 ceph-mon[74339]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:19 compute-0 ceph-mon[74339]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:21 compute-0 ceph-mon[74339]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:27:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:23 compute-0 ceph-mon[74339]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:25 compute-0 sudo[82774]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvfuxwtotkpflbsleavxlpfnrshrylvb ; /usr/bin/python3'
Dec 06 06:27:25 compute-0 sudo[82774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:27:25 compute-0 python3[82776]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:27:25 compute-0 podman[82778]: 2025-12-06 06:27:25.746201546 +0000 UTC m=+0.046973414 container create 688d72209e016fbc6fee5dcff0f062dfdccce273975978272d217b5cbe2ed535 (image=quay.io/ceph/ceph:v18, name=compassionate_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:27:25 compute-0 systemd[1]: Started libpod-conmon-688d72209e016fbc6fee5dcff0f062dfdccce273975978272d217b5cbe2ed535.scope.
Dec 06 06:27:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86970ec2984b4a1a738d475319b34dcbe76e3ddad5ffe5803f10cea59e67b24b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86970ec2984b4a1a738d475319b34dcbe76e3ddad5ffe5803f10cea59e67b24b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86970ec2984b4a1a738d475319b34dcbe76e3ddad5ffe5803f10cea59e67b24b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:27:25 compute-0 podman[82778]: 2025-12-06 06:27:25.818422143 +0000 UTC m=+0.119194021 container init 688d72209e016fbc6fee5dcff0f062dfdccce273975978272d217b5cbe2ed535 (image=quay.io/ceph/ceph:v18, name=compassionate_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 06:27:25 compute-0 podman[82778]: 2025-12-06 06:27:25.726059481 +0000 UTC m=+0.026831369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:27:25 compute-0 podman[82778]: 2025-12-06 06:27:25.824758304 +0000 UTC m=+0.125530172 container start 688d72209e016fbc6fee5dcff0f062dfdccce273975978272d217b5cbe2ed535 (image=quay.io/ceph/ceph:v18, name=compassionate_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 06:27:25 compute-0 podman[82778]: 2025-12-06 06:27:25.827999342 +0000 UTC m=+0.128771220 container attach 688d72209e016fbc6fee5dcff0f062dfdccce273975978272d217b5cbe2ed535 (image=quay.io/ceph/ceph:v18, name=compassionate_pascal, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:27:25 compute-0 ceph-mon[74339]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 06 06:27:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1246561326' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:27:26 compute-0 compassionate_pascal[82794]: 
Dec 06 06:27:26 compute-0 compassionate_pascal[82794]: {"fsid":"40a1bae4-cf76-5610-8dab-c75116dfe0bb","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":123,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-06T06:27:14.826183+0000","services":{}},"progress_events":{}}
Dec 06 06:27:26 compute-0 systemd[1]: libpod-688d72209e016fbc6fee5dcff0f062dfdccce273975978272d217b5cbe2ed535.scope: Deactivated successfully.
Dec 06 06:27:26 compute-0 podman[82778]: 2025-12-06 06:27:26.437470594 +0000 UTC m=+0.738242462 container died 688d72209e016fbc6fee5dcff0f062dfdccce273975978272d217b5cbe2ed535 (image=quay.io/ceph/ceph:v18, name=compassionate_pascal, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-86970ec2984b4a1a738d475319b34dcbe76e3ddad5ffe5803f10cea59e67b24b-merged.mount: Deactivated successfully.
Dec 06 06:27:26 compute-0 podman[82778]: 2025-12-06 06:27:26.486291386 +0000 UTC m=+0.787063254 container remove 688d72209e016fbc6fee5dcff0f062dfdccce273975978272d217b5cbe2ed535 (image=quay.io/ceph/ceph:v18, name=compassionate_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:27:26 compute-0 systemd[1]: libpod-conmon-688d72209e016fbc6fee5dcff0f062dfdccce273975978272d217b5cbe2ed535.scope: Deactivated successfully.
Dec 06 06:27:26 compute-0 sudo[82774]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1246561326' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:27:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:27:27 compute-0 ceph-mon[74339]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:29 compute-0 ceph-mon[74339]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:31 compute-0 ceph-mon[74339]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:27:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:33 compute-0 ceph-mon[74339]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:35 compute-0 ceph-mon[74339]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:27:38 compute-0 ceph-mon[74339]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:39 compute-0 ceph-mon[74339]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:41 compute-0 ceph-mon[74339]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:27:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:27:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:27:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:27:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:27:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:27:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:27:43 compute-0 ceph-mon[74339]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:45 compute-0 ceph-mon[74339]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:27:47 compute-0 ceph-mon[74339]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:49 compute-0 ceph-mon[74339]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:51 compute-0 ceph-mon[74339]: pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:27:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:27:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:27:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:27:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 06:27:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 06:27:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:27:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:27:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:27:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:27:52 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 06 06:27:52 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 06 06:27:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:27:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 06:27:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:27:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:27:53 compute-0 ceph-mon[74339]: Updating compute-1:/etc/ceph/ceph.conf
Dec 06 06:27:53 compute-0 ceph-mon[74339]: pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:54 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:27:54 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:27:54 compute-0 ceph-mon[74339]: Updating compute-1:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:27:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:55 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 06:27:55 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 06:27:55 compute-0 ceph-mon[74339]: pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:55 compute-0 ceph-mon[74339]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 06 06:27:56 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec 06 06:27:56 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec 06 06:27:56 compute-0 sudo[82854]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmmwhbahbfxzfvvhfbumsxopldyqptab ; /usr/bin/python3'
Dec 06 06:27:56 compute-0 sudo[82854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:27:56 compute-0 python3[82856]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:27:56 compute-0 podman[82858]: 2025-12-06 06:27:56.828210382 +0000 UTC m=+0.067381863 container create 244ea1e81d95b5bd5e1315550049afeb93b6cf91339fbbd8a467ac5ac4215dcd (image=quay.io/ceph/ceph:v18, name=cranky_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 06:27:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:56 compute-0 systemd[1]: Started libpod-conmon-244ea1e81d95b5bd5e1315550049afeb93b6cf91339fbbd8a467ac5ac4215dcd.scope.
Dec 06 06:27:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aad9d92e8b5b046e695939114616d96b7b27395f091a8a2092bd6664fcb3769/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aad9d92e8b5b046e695939114616d96b7b27395f091a8a2092bd6664fcb3769/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:27:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aad9d92e8b5b046e695939114616d96b7b27395f091a8a2092bd6664fcb3769/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:27:56 compute-0 podman[82858]: 2025-12-06 06:27:56.802972984 +0000 UTC m=+0.042144555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:27:56 compute-0 podman[82858]: 2025-12-06 06:27:56.912242462 +0000 UTC m=+0.151413973 container init 244ea1e81d95b5bd5e1315550049afeb93b6cf91339fbbd8a467ac5ac4215dcd (image=quay.io/ceph/ceph:v18, name=cranky_turing, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:27:56 compute-0 podman[82858]: 2025-12-06 06:27:56.917935349 +0000 UTC m=+0.157106870 container start 244ea1e81d95b5bd5e1315550049afeb93b6cf91339fbbd8a467ac5ac4215dcd (image=quay.io/ceph/ceph:v18, name=cranky_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:27:56 compute-0 podman[82858]: 2025-12-06 06:27:56.921500556 +0000 UTC m=+0.160672037 container attach 244ea1e81d95b5bd5e1315550049afeb93b6cf91339fbbd8a467ac5ac4215dcd (image=quay.io/ceph/ceph:v18, name=cranky_turing, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:27:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:27:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:27:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:27:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:57 compute-0 ceph-mgr[74630]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 06 06:27:57 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 06 06:27:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:57 compute-0 ceph-mgr[74630]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 06 06:27:57 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 06 06:27:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:57 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 280eca7f-dba6-4b7b-a66a-702211abb4e7 (Updating crash deployment (+1 -> 2))
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:27:57.393+0000 7f67531ca640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: service_name: mon
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: placement:
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]:   hosts:
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]:   - compute-0
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]:   - compute-1
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]:   - compute-2
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 06 06:27:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Dec 06 06:27:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:27:57.394+0000 7f67531ca640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: service_name: mgr
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: placement:
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]:   hosts:
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]:   - compute-0
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]:   - compute-1
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]:   - compute-2
Dec 06 06:27:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 06 06:27:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 06:27:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:27:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:27:57 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Dec 06 06:27:57 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Dec 06 06:27:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 06 06:27:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/615996440' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:27:57 compute-0 cranky_turing[82874]: 
Dec 06 06:27:57 compute-0 cranky_turing[82874]: {"fsid":"40a1bae4-cf76-5610-8dab-c75116dfe0bb","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":155,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-06T06:27:14.826183+0000","services":{}},"progress_events":{}}
Dec 06 06:27:57 compute-0 systemd[1]: libpod-244ea1e81d95b5bd5e1315550049afeb93b6cf91339fbbd8a467ac5ac4215dcd.scope: Deactivated successfully.
Dec 06 06:27:57 compute-0 podman[82858]: 2025-12-06 06:27:57.57939425 +0000 UTC m=+0.818565731 container died 244ea1e81d95b5bd5e1315550049afeb93b6cf91339fbbd8a467ac5ac4215dcd (image=quay.io/ceph/ceph:v18, name=cranky_turing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:27:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aad9d92e8b5b046e695939114616d96b7b27395f091a8a2092bd6664fcb3769-merged.mount: Deactivated successfully.
Dec 06 06:27:57 compute-0 podman[82858]: 2025-12-06 06:27:57.616699296 +0000 UTC m=+0.855870777 container remove 244ea1e81d95b5bd5e1315550049afeb93b6cf91339fbbd8a467ac5ac4215dcd (image=quay.io/ceph/ceph:v18, name=cranky_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:27:57 compute-0 systemd[1]: libpod-conmon-244ea1e81d95b5bd5e1315550049afeb93b6cf91339fbbd8a467ac5ac4215dcd.scope: Deactivated successfully.
Dec 06 06:27:57 compute-0 sudo[82854]: pam_unix(sudo:session): session closed for user root
Dec 06 06:27:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:27:57 compute-0 ceph-mon[74339]: Updating compute-1:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec 06 06:27:57 compute-0 ceph-mon[74339]: pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:27:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:27:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 06:27:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:27:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/615996440' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:27:58 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec 06 06:27:58 compute-0 ceph-mon[74339]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec 06 06:27:58 compute-0 ceph-mon[74339]: pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:58 compute-0 ceph-mon[74339]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec 06 06:27:58 compute-0 ceph-mon[74339]: pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:27:58 compute-0 ceph-mon[74339]: Deploying daemon crash.compute-1 on compute-1
Dec 06 06:27:58 compute-0 ceph-mon[74339]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec 06 06:27:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:28:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:28:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 06 06:28:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:00 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 280eca7f-dba6-4b7b-a66a-702211abb4e7 (Updating crash deployment (+1 -> 2))
Dec 06 06:28:00 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 280eca7f-dba6-4b7b-a66a-702211abb4e7 (Updating crash deployment (+1 -> 2)) in 3 seconds
Dec 06 06:28:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 06 06:28:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:28:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:28:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:28:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:28:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:28:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:28:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:28:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:28:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:28:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:28:00 compute-0 sudo[82911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:00 compute-0 sudo[82911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:00 compute-0 sudo[82911]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:00 compute-0 sudo[82936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:00 compute-0 sudo[82936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:00 compute-0 sudo[82936]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:00 compute-0 sudo[82961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:00 compute-0 sudo[82961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:00 compute-0 sudo[82961]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:00 compute-0 sudo[82986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:28:00 compute-0 sudo[82986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:00 compute-0 podman[83051]: 2025-12-06 06:28:00.876014308 +0000 UTC m=+0.046948093 container create 8d78390475d9fc63625051b4e3e62a8ed903773a0b7ef8f4b35b66333cf1af1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 06:28:00 compute-0 ceph-mon[74339]: pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:28:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:28:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:28:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:28:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:28:00 compute-0 systemd[1]: Started libpod-conmon-8d78390475d9fc63625051b4e3e62a8ed903773a0b7ef8f4b35b66333cf1af1b.scope.
Dec 06 06:28:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:00 compute-0 podman[83051]: 2025-12-06 06:28:00.936338849 +0000 UTC m=+0.107272654 container init 8d78390475d9fc63625051b4e3e62a8ed903773a0b7ef8f4b35b66333cf1af1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:28:00 compute-0 podman[83051]: 2025-12-06 06:28:00.941885601 +0000 UTC m=+0.112819386 container start 8d78390475d9fc63625051b4e3e62a8ed903773a0b7ef8f4b35b66333cf1af1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chatelet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:00 compute-0 hardcore_chatelet[83068]: 167 167
Dec 06 06:28:00 compute-0 podman[83051]: 2025-12-06 06:28:00.945252372 +0000 UTC m=+0.116186157 container attach 8d78390475d9fc63625051b4e3e62a8ed903773a0b7ef8f4b35b66333cf1af1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chatelet, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:00 compute-0 systemd[1]: libpod-8d78390475d9fc63625051b4e3e62a8ed903773a0b7ef8f4b35b66333cf1af1b.scope: Deactivated successfully.
Dec 06 06:28:00 compute-0 podman[83051]: 2025-12-06 06:28:00.946161311 +0000 UTC m=+0.117095116 container died 8d78390475d9fc63625051b4e3e62a8ed903773a0b7ef8f4b35b66333cf1af1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:00 compute-0 podman[83051]: 2025-12-06 06:28:00.855704631 +0000 UTC m=+0.026638506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-adb1d6c9793d41c5a545f11c488739576d3d2a666165a03b6ff92b3325affbc6-merged.mount: Deactivated successfully.
Dec 06 06:28:00 compute-0 podman[83051]: 2025-12-06 06:28:00.988001685 +0000 UTC m=+0.158935470 container remove 8d78390475d9fc63625051b4e3e62a8ed903773a0b7ef8f4b35b66333cf1af1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_chatelet, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 06:28:01 compute-0 systemd[1]: libpod-conmon-8d78390475d9fc63625051b4e3e62a8ed903773a0b7ef8f4b35b66333cf1af1b.scope: Deactivated successfully.
Dec 06 06:28:01 compute-0 podman[83093]: 2025-12-06 06:28:01.173166516 +0000 UTC m=+0.065641826 container create 4e1a785bc87c6db38d5eb5abd763db3f519c279d7600fccb3870c646d1851e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:01 compute-0 systemd[1]: Started libpod-conmon-4e1a785bc87c6db38d5eb5abd763db3f519c279d7600fccb3870c646d1851e44.scope.
Dec 06 06:28:01 compute-0 podman[83093]: 2025-12-06 06:28:01.151896557 +0000 UTC m=+0.044371857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968597dc50e5968dbb8ed36960e04248e67b56fc7214ce2eedfa1145cae6005f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968597dc50e5968dbb8ed36960e04248e67b56fc7214ce2eedfa1145cae6005f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968597dc50e5968dbb8ed36960e04248e67b56fc7214ce2eedfa1145cae6005f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968597dc50e5968dbb8ed36960e04248e67b56fc7214ce2eedfa1145cae6005f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968597dc50e5968dbb8ed36960e04248e67b56fc7214ce2eedfa1145cae6005f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:01 compute-0 podman[83093]: 2025-12-06 06:28:01.279088654 +0000 UTC m=+0.171563984 container init 4e1a785bc87c6db38d5eb5abd763db3f519c279d7600fccb3870c646d1851e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 06:28:01 compute-0 podman[83093]: 2025-12-06 06:28:01.292094932 +0000 UTC m=+0.184570192 container start 4e1a785bc87c6db38d5eb5abd763db3f519c279d7600fccb3870c646d1851e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:01 compute-0 podman[83093]: 2025-12-06 06:28:01.296447214 +0000 UTC m=+0.188922564 container attach 4e1a785bc87c6db38d5eb5abd763db3f519c279d7600fccb3870c646d1851e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:02 compute-0 competent_franklin[83109]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:28:02 compute-0 competent_franklin[83109]: --> relative data size: 1.0
Dec 06 06:28:02 compute-0 competent_franklin[83109]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 06:28:02 compute-0 competent_franklin[83109]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6b7b52dc-0b4c-403a-a623-fd06da2b6a8e
Dec 06 06:28:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e"} v 0) v1
Dec 06 06:28:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2306969735' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e"}]: dispatch
Dec 06 06:28:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 06 06:28:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 06:28:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2306969735' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e"}]': finished
Dec 06 06:28:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 06 06:28:02 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 06 06:28:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 06 06:28:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:02 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 06:28:02 compute-0 competent_franklin[83109]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 06:28:02 compute-0 lvm[83157]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 06:28:02 compute-0 lvm[83157]: VG ceph_vg0 finished
Dec 06 06:28:02 compute-0 competent_franklin[83109]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 06 06:28:02 compute-0 competent_franklin[83109]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 06 06:28:02 compute-0 competent_franklin[83109]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 06:28:02 compute-0 competent_franklin[83109]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 06 06:28:02 compute-0 competent_franklin[83109]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 06 06:28:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:28:02 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 2 completed events
Dec 06 06:28:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:28:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:02 compute-0 ceph-mon[74339]: pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2306969735' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e"}]: dispatch
Dec 06 06:28:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2306969735' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e"}]': finished
Dec 06 06:28:02 compute-0 ceph-mon[74339]: osdmap e4: 1 total, 0 up, 1 in
Dec 06 06:28:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "e647688b-053d-4d67-9db1-a787df62bd8a"} v 0) v1
Dec 06 06:28:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/631718233' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e647688b-053d-4d67-9db1-a787df62bd8a"}]: dispatch
Dec 06 06:28:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 06 06:28:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 06:28:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/631718233' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e647688b-053d-4d67-9db1-a787df62bd8a"}]': finished
Dec 06 06:28:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 06 06:28:03 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 06 06:28:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 06 06:28:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:03 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 06:28:03 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec 06 06:28:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4078952679' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 06 06:28:03 compute-0 competent_franklin[83109]:  stderr: got monmap epoch 1
Dec 06 06:28:03 compute-0 competent_franklin[83109]: --> Creating keyring file for osd.0
Dec 06 06:28:03 compute-0 competent_franklin[83109]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 06 06:28:03 compute-0 competent_franklin[83109]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 06 06:28:03 compute-0 competent_franklin[83109]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 6b7b52dc-0b4c-403a-a623-fd06da2b6a8e --setuser ceph --setgroup ceph
Dec 06 06:28:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec 06 06:28:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/903812478' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 06 06:28:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/631718233' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e647688b-053d-4d67-9db1-a787df62bd8a"}]: dispatch
Dec 06 06:28:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/631718233' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e647688b-053d-4d67-9db1-a787df62bd8a"}]': finished
Dec 06 06:28:03 compute-0 ceph-mon[74339]: osdmap e5: 2 total, 0 up, 2 in
Dec 06 06:28:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4078952679' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 06 06:28:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/903812478' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 06 06:28:04 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 06 06:28:04 compute-0 ceph-mon[74339]: pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:04 compute-0 ceph-mon[74339]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 06 06:28:05 compute-0 competent_franklin[83109]:  stderr: 2025-12-06T06:28:03.200+0000 7f6a042f1740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 06 06:28:05 compute-0 competent_franklin[83109]:  stderr: 2025-12-06T06:28:03.200+0000 7f6a042f1740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 06 06:28:05 compute-0 competent_franklin[83109]:  stderr: 2025-12-06T06:28:03.200+0000 7f6a042f1740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 06 06:28:05 compute-0 competent_franklin[83109]:  stderr: 2025-12-06T06:28:03.200+0000 7f6a042f1740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 06 06:28:05 compute-0 competent_franklin[83109]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 06 06:28:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v54: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:05 compute-0 competent_franklin[83109]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 06 06:28:05 compute-0 competent_franklin[83109]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 06 06:28:05 compute-0 competent_franklin[83109]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 06 06:28:05 compute-0 competent_franklin[83109]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 06 06:28:05 compute-0 competent_franklin[83109]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 06:28:05 compute-0 competent_franklin[83109]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 06 06:28:05 compute-0 competent_franklin[83109]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 06 06:28:05 compute-0 competent_franklin[83109]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 06 06:28:05 compute-0 systemd[1]: libpod-4e1a785bc87c6db38d5eb5abd763db3f519c279d7600fccb3870c646d1851e44.scope: Deactivated successfully.
Dec 06 06:28:05 compute-0 systemd[1]: libpod-4e1a785bc87c6db38d5eb5abd763db3f519c279d7600fccb3870c646d1851e44.scope: Consumed 2.427s CPU time.
Dec 06 06:28:05 compute-0 podman[84078]: 2025-12-06 06:28:05.594970023 +0000 UTC m=+0.037544164 container died 4e1a785bc87c6db38d5eb5abd763db3f519c279d7600fccb3870c646d1851e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 06:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-968597dc50e5968dbb8ed36960e04248e67b56fc7214ce2eedfa1145cae6005f-merged.mount: Deactivated successfully.
Dec 06 06:28:05 compute-0 podman[84078]: 2025-12-06 06:28:05.657764835 +0000 UTC m=+0.100338956 container remove 4e1a785bc87c6db38d5eb5abd763db3f519c279d7600fccb3870c646d1851e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_franklin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 06:28:05 compute-0 systemd[1]: libpod-conmon-4e1a785bc87c6db38d5eb5abd763db3f519c279d7600fccb3870c646d1851e44.scope: Deactivated successfully.
Dec 06 06:28:05 compute-0 sudo[82986]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:05 compute-0 sudo[84093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:05 compute-0 sudo[84093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:05 compute-0 sudo[84093]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:05 compute-0 sudo[84118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:05 compute-0 sudo[84118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:05 compute-0 sudo[84118]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:05 compute-0 sudo[84143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:05 compute-0 sudo[84143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:05 compute-0 sudo[84143]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:05 compute-0 sudo[84168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:28:05 compute-0 sudo[84168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:06 compute-0 podman[84232]: 2025-12-06 06:28:06.246484868 +0000 UTC m=+0.043419847 container create 160cf6f7d5d50b0a5d3e31eca070492f3e886b65165943f53444bc35db0b301c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 06:28:06 compute-0 systemd[1]: Started libpod-conmon-160cf6f7d5d50b0a5d3e31eca070492f3e886b65165943f53444bc35db0b301c.scope.
Dec 06 06:28:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:06 compute-0 podman[84232]: 2025-12-06 06:28:06.225630583 +0000 UTC m=+0.022565612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:06 compute-0 podman[84232]: 2025-12-06 06:28:06.326303529 +0000 UTC m=+0.123238538 container init 160cf6f7d5d50b0a5d3e31eca070492f3e886b65165943f53444bc35db0b301c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:06 compute-0 podman[84232]: 2025-12-06 06:28:06.33425348 +0000 UTC m=+0.131188489 container start 160cf6f7d5d50b0a5d3e31eca070492f3e886b65165943f53444bc35db0b301c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 06:28:06 compute-0 podman[84232]: 2025-12-06 06:28:06.338170869 +0000 UTC m=+0.135105878 container attach 160cf6f7d5d50b0a5d3e31eca070492f3e886b65165943f53444bc35db0b301c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:06 compute-0 admiring_mahavira[84246]: 167 167
Dec 06 06:28:06 compute-0 systemd[1]: libpod-160cf6f7d5d50b0a5d3e31eca070492f3e886b65165943f53444bc35db0b301c.scope: Deactivated successfully.
Dec 06 06:28:06 compute-0 conmon[84246]: conmon 160cf6f7d5d50b0a5d3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-160cf6f7d5d50b0a5d3e31eca070492f3e886b65165943f53444bc35db0b301c.scope/container/memory.events
Dec 06 06:28:06 compute-0 podman[84232]: 2025-12-06 06:28:06.3409597 +0000 UTC m=+0.137894709 container died 160cf6f7d5d50b0a5d3e31eca070492f3e886b65165943f53444bc35db0b301c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Dec 06 06:28:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-64752cd7692f0ef6321bfc3c2bdd8e7a9f164b0a1fd553e17b8606ef5713637b-merged.mount: Deactivated successfully.
Dec 06 06:28:06 compute-0 podman[84232]: 2025-12-06 06:28:06.377699177 +0000 UTC m=+0.174634166 container remove 160cf6f7d5d50b0a5d3e31eca070492f3e886b65165943f53444bc35db0b301c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:06 compute-0 systemd[1]: libpod-conmon-160cf6f7d5d50b0a5d3e31eca070492f3e886b65165943f53444bc35db0b301c.scope: Deactivated successfully.
Dec 06 06:28:06 compute-0 podman[84268]: 2025-12-06 06:28:06.588550391 +0000 UTC m=+0.061966066 container create 951bfff7edf581f9c6078812e71131416bf44d1193971dcb045a05a9e2e9e534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:06 compute-0 systemd[1]: Started libpod-conmon-951bfff7edf581f9c6078812e71131416bf44d1193971dcb045a05a9e2e9e534.scope.
Dec 06 06:28:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf87917ad26abd4d981f919e50c9448950c89e06ca5a1579051ab5c3ffe97ae4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:06 compute-0 podman[84268]: 2025-12-06 06:28:06.568145151 +0000 UTC m=+0.041560846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf87917ad26abd4d981f919e50c9448950c89e06ca5a1579051ab5c3ffe97ae4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf87917ad26abd4d981f919e50c9448950c89e06ca5a1579051ab5c3ffe97ae4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf87917ad26abd4d981f919e50c9448950c89e06ca5a1579051ab5c3ffe97ae4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:06 compute-0 podman[84268]: 2025-12-06 06:28:06.673570753 +0000 UTC m=+0.146986448 container init 951bfff7edf581f9c6078812e71131416bf44d1193971dcb045a05a9e2e9e534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:06 compute-0 podman[84268]: 2025-12-06 06:28:06.679418535 +0000 UTC m=+0.152834220 container start 951bfff7edf581f9c6078812e71131416bf44d1193971dcb045a05a9e2e9e534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 06:28:06 compute-0 podman[84268]: 2025-12-06 06:28:06.684361687 +0000 UTC m=+0.157777382 container attach 951bfff7edf581f9c6078812e71131416bf44d1193971dcb045a05a9e2e9e534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 06:28:06 compute-0 ceph-mon[74339]: pgmap v54: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:07 compute-0 frosty_darwin[84285]: {
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:     "0": [
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:         {
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             "devices": [
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "/dev/loop3"
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             ],
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             "lv_name": "ceph_lv0",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             "lv_size": "7511998464",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             "name": "ceph_lv0",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             "tags": {
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "ceph.cluster_name": "ceph",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "ceph.crush_device_class": "",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "ceph.encrypted": "0",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "ceph.osd_id": "0",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "ceph.type": "block",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:                 "ceph.vdo": "0"
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             },
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             "type": "block",
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:             "vg_name": "ceph_vg0"
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:         }
Dec 06 06:28:07 compute-0 frosty_darwin[84285]:     ]
Dec 06 06:28:07 compute-0 frosty_darwin[84285]: }
Dec 06 06:28:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v55: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:07 compute-0 systemd[1]: libpod-951bfff7edf581f9c6078812e71131416bf44d1193971dcb045a05a9e2e9e534.scope: Deactivated successfully.
Dec 06 06:28:07 compute-0 podman[84268]: 2025-12-06 06:28:07.419874911 +0000 UTC m=+0.893290586 container died 951bfff7edf581f9c6078812e71131416bf44d1193971dcb045a05a9e2e9e534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 06:28:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf87917ad26abd4d981f919e50c9448950c89e06ca5a1579051ab5c3ffe97ae4-merged.mount: Deactivated successfully.
Dec 06 06:28:07 compute-0 podman[84268]: 2025-12-06 06:28:07.491827344 +0000 UTC m=+0.965243019 container remove 951bfff7edf581f9c6078812e71131416bf44d1193971dcb045a05a9e2e9e534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_darwin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 06:28:07 compute-0 systemd[1]: libpod-conmon-951bfff7edf581f9c6078812e71131416bf44d1193971dcb045a05a9e2e9e534.scope: Deactivated successfully.
Dec 06 06:28:07 compute-0 sudo[84168]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Dec 06 06:28:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 06 06:28:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:28:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:28:07 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec 06 06:28:07 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec 06 06:28:07 compute-0 sudo[84305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:07 compute-0 sudo[84305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:07 compute-0 sudo[84305]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:07 compute-0 sudo[84330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:07 compute-0 sudo[84330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:07 compute-0 sudo[84330]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:07 compute-0 sudo[84355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:07 compute-0 sudo[84355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:07 compute-0 sudo[84355]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:07 compute-0 sudo[84380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:28:07 compute-0 sudo[84380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:28:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 06 06:28:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:28:08 compute-0 podman[84446]: 2025-12-06 06:28:08.144046931 +0000 UTC m=+0.044409008 container create dfc6e54c859a0cdb52d0d1fd85f7f1f2a08cfd3bdd25fea5d9f08286b56a39de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_einstein, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 06:28:08 compute-0 systemd[1]: Started libpod-conmon-dfc6e54c859a0cdb52d0d1fd85f7f1f2a08cfd3bdd25fea5d9f08286b56a39de.scope.
Dec 06 06:28:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:08 compute-0 podman[84446]: 2025-12-06 06:28:08.123382933 +0000 UTC m=+0.023745030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Dec 06 06:28:08 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 06 06:28:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:28:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:28:08 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Dec 06 06:28:08 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Dec 06 06:28:08 compute-0 podman[84446]: 2025-12-06 06:28:08.562641408 +0000 UTC m=+0.463003505 container init dfc6e54c859a0cdb52d0d1fd85f7f1f2a08cfd3bdd25fea5d9f08286b56a39de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 06:28:08 compute-0 podman[84446]: 2025-12-06 06:28:08.574979123 +0000 UTC m=+0.475341210 container start dfc6e54c859a0cdb52d0d1fd85f7f1f2a08cfd3bdd25fea5d9f08286b56a39de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 06:28:08 compute-0 podman[84446]: 2025-12-06 06:28:08.579699868 +0000 UTC m=+0.480061955 container attach dfc6e54c859a0cdb52d0d1fd85f7f1f2a08cfd3bdd25fea5d9f08286b56a39de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_einstein, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:08 compute-0 intelligent_einstein[84462]: 167 167
Dec 06 06:28:08 compute-0 systemd[1]: libpod-dfc6e54c859a0cdb52d0d1fd85f7f1f2a08cfd3bdd25fea5d9f08286b56a39de.scope: Deactivated successfully.
Dec 06 06:28:08 compute-0 podman[84446]: 2025-12-06 06:28:08.583275255 +0000 UTC m=+0.483637332 container died dfc6e54c859a0cdb52d0d1fd85f7f1f2a08cfd3bdd25fea5d9f08286b56a39de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 06:28:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-1310a97f70e980c3ca413559f0d960e4988a7c06bdec21163cd2540609bf0aaa-merged.mount: Deactivated successfully.
Dec 06 06:28:08 compute-0 podman[84446]: 2025-12-06 06:28:08.640460704 +0000 UTC m=+0.540822791 container remove dfc6e54c859a0cdb52d0d1fd85f7f1f2a08cfd3bdd25fea5d9f08286b56a39de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_einstein, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 06:28:08 compute-0 systemd[1]: libpod-conmon-dfc6e54c859a0cdb52d0d1fd85f7f1f2a08cfd3bdd25fea5d9f08286b56a39de.scope: Deactivated successfully.
Dec 06 06:28:08 compute-0 podman[84495]: 2025-12-06 06:28:08.930736255 +0000 UTC m=+0.047083597 container create 9edf27052eb99575055cb945cc4c5e156adcf44c36a92f9576168316aa0c7b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate-test, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:28:08 compute-0 ceph-mon[74339]: pgmap v55: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:08 compute-0 ceph-mon[74339]: Deploying daemon osd.0 on compute-0
Dec 06 06:28:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 06 06:28:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:28:08 compute-0 systemd[1]: Started libpod-conmon-9edf27052eb99575055cb945cc4c5e156adcf44c36a92f9576168316aa0c7b84.scope.
Dec 06 06:28:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01edcaf8148d22793971b3d59ac4d3fbabb227dacfb6cef7942bbe943c95a9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:09 compute-0 podman[84495]: 2025-12-06 06:28:08.911162303 +0000 UTC m=+0.027509675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01edcaf8148d22793971b3d59ac4d3fbabb227dacfb6cef7942bbe943c95a9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01edcaf8148d22793971b3d59ac4d3fbabb227dacfb6cef7942bbe943c95a9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01edcaf8148d22793971b3d59ac4d3fbabb227dacfb6cef7942bbe943c95a9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01edcaf8148d22793971b3d59ac4d3fbabb227dacfb6cef7942bbe943c95a9b/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:09 compute-0 podman[84495]: 2025-12-06 06:28:09.021380723 +0000 UTC m=+0.137728155 container init 9edf27052eb99575055cb945cc4c5e156adcf44c36a92f9576168316aa0c7b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate-test, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:09 compute-0 podman[84495]: 2025-12-06 06:28:09.027725371 +0000 UTC m=+0.144072713 container start 9edf27052eb99575055cb945cc4c5e156adcf44c36a92f9576168316aa0c7b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:28:09 compute-0 podman[84495]: 2025-12-06 06:28:09.033238432 +0000 UTC m=+0.149585844 container attach 9edf27052eb99575055cb945cc4c5e156adcf44c36a92f9576168316aa0c7b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate-test, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v56: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:09 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate-test[84511]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 06 06:28:09 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate-test[84511]:                             [--no-systemd] [--no-tmpfs]
Dec 06 06:28:09 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate-test[84511]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 06 06:28:09 compute-0 systemd[1]: libpod-9edf27052eb99575055cb945cc4c5e156adcf44c36a92f9576168316aa0c7b84.scope: Deactivated successfully.
Dec 06 06:28:09 compute-0 podman[84495]: 2025-12-06 06:28:09.756522994 +0000 UTC m=+0.872870326 container died 9edf27052eb99575055cb945cc4c5e156adcf44c36a92f9576168316aa0c7b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate-test, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c01edcaf8148d22793971b3d59ac4d3fbabb227dacfb6cef7942bbe943c95a9b-merged.mount: Deactivated successfully.
Dec 06 06:28:09 compute-0 podman[84495]: 2025-12-06 06:28:09.838349461 +0000 UTC m=+0.954696803 container remove 9edf27052eb99575055cb945cc4c5e156adcf44c36a92f9576168316aa0c7b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate-test, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:09 compute-0 systemd[1]: libpod-conmon-9edf27052eb99575055cb945cc4c5e156adcf44c36a92f9576168316aa0c7b84.scope: Deactivated successfully.
Dec 06 06:28:09 compute-0 ceph-mon[74339]: Deploying daemon osd.1 on compute-1
Dec 06 06:28:10 compute-0 systemd[1]: Reloading.
Dec 06 06:28:10 compute-0 systemd-sysv-generator[84577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:28:10 compute-0 systemd-rc-local-generator[84573]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:28:10 compute-0 systemd[1]: Reloading.
Dec 06 06:28:10 compute-0 systemd-rc-local-generator[84616]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:28:10 compute-0 systemd-sysv-generator[84620]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:28:10 compute-0 systemd[1]: Starting Ceph osd.0 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:28:10 compute-0 podman[84673]: 2025-12-06 06:28:10.851348907 +0000 UTC m=+0.039532459 container create b00ee61d7754fefbf2a70ac6430ebfa13df5228503c41889bc37cc62fd3e99df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a00a926704912be045babb5146de11d143674614c621dd770eff2aeff7733d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a00a926704912be045babb5146de11d143674614c621dd770eff2aeff7733d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a00a926704912be045babb5146de11d143674614c621dd770eff2aeff7733d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a00a926704912be045babb5146de11d143674614c621dd770eff2aeff7733d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a00a926704912be045babb5146de11d143674614c621dd770eff2aeff7733d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:10 compute-0 podman[84673]: 2025-12-06 06:28:10.917461597 +0000 UTC m=+0.105645239 container init b00ee61d7754fefbf2a70ac6430ebfa13df5228503c41889bc37cc62fd3e99df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 06:28:10 compute-0 podman[84673]: 2025-12-06 06:28:10.833018334 +0000 UTC m=+0.021201906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:10 compute-0 podman[84673]: 2025-12-06 06:28:10.930720383 +0000 UTC m=+0.118903945 container start b00ee61d7754fefbf2a70ac6430ebfa13df5228503c41889bc37cc62fd3e99df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:10 compute-0 podman[84673]: 2025-12-06 06:28:10.934517018 +0000 UTC m=+0.122700590 container attach b00ee61d7754fefbf2a70ac6430ebfa13df5228503c41889bc37cc62fd3e99df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 06:28:10 compute-0 ceph-mon[74339]: pgmap v56: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v57: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:11 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate[84688]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 06 06:28:11 compute-0 bash[84673]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 06 06:28:11 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate[84688]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 06 06:28:11 compute-0 bash[84673]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 06 06:28:11 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate[84688]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 06 06:28:11 compute-0 bash[84673]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 06 06:28:11 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate[84688]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 06:28:11 compute-0 bash[84673]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 06:28:11 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate[84688]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 06 06:28:11 compute-0 bash[84673]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 06 06:28:11 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate[84688]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 06 06:28:11 compute-0 bash[84673]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 06 06:28:11 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate[84688]: --> ceph-volume raw activate successful for osd ID: 0
Dec 06 06:28:11 compute-0 bash[84673]: --> ceph-volume raw activate successful for osd ID: 0
Dec 06 06:28:11 compute-0 systemd[1]: libpod-b00ee61d7754fefbf2a70ac6430ebfa13df5228503c41889bc37cc62fd3e99df.scope: Deactivated successfully.
Dec 06 06:28:11 compute-0 podman[84673]: 2025-12-06 06:28:11.945809938 +0000 UTC m=+1.133993500 container died b00ee61d7754fefbf2a70ac6430ebfa13df5228503c41889bc37cc62fd3e99df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 06:28:11 compute-0 systemd[1]: libpod-b00ee61d7754fefbf2a70ac6430ebfa13df5228503c41889bc37cc62fd3e99df.scope: Consumed 1.027s CPU time.
Dec 06 06:28:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2a00a926704912be045babb5146de11d143674614c621dd770eff2aeff7733d-merged.mount: Deactivated successfully.
Dec 06 06:28:12 compute-0 podman[84673]: 2025-12-06 06:28:12.004522485 +0000 UTC m=+1.192706037 container remove b00ee61d7754fefbf2a70ac6430ebfa13df5228503c41889bc37cc62fd3e99df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0-activate, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 06:28:12 compute-0 podman[84864]: 2025-12-06 06:28:12.256862282 +0000 UTC m=+0.046357493 container create 7156c3d3aaf52a571781d16639105147bbf7e8d4cd068f31a25ddf0a4cf59028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 06:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7743e7396d9a0bace9685a974a63adeddc78a577ef632e33476be3ddfe42a4f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7743e7396d9a0bace9685a974a63adeddc78a577ef632e33476be3ddfe42a4f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7743e7396d9a0bace9685a974a63adeddc78a577ef632e33476be3ddfe42a4f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7743e7396d9a0bace9685a974a63adeddc78a577ef632e33476be3ddfe42a4f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7743e7396d9a0bace9685a974a63adeddc78a577ef632e33476be3ddfe42a4f3/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:12 compute-0 podman[84864]: 2025-12-06 06:28:12.319007623 +0000 UTC m=+0.108502844 container init 7156c3d3aaf52a571781d16639105147bbf7e8d4cd068f31a25ddf0a4cf59028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:12 compute-0 podman[84864]: 2025-12-06 06:28:12.328967589 +0000 UTC m=+0.118462800 container start 7156c3d3aaf52a571781d16639105147bbf7e8d4cd068f31a25ddf0a4cf59028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 06:28:12 compute-0 podman[84864]: 2025-12-06 06:28:12.23701402 +0000 UTC m=+0.026509261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:12 compute-0 bash[84864]: 7156c3d3aaf52a571781d16639105147bbf7e8d4cd068f31a25ddf0a4cf59028
Dec 06 06:28:12 compute-0 systemd[1]: Started Ceph osd.0 for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:28:12 compute-0 ceph-osd[84884]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:28:12 compute-0 ceph-osd[84884]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec 06 06:28:12 compute-0 ceph-osd[84884]: pidfile_write: ignore empty --pid-file
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d0579800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d0579800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d0579800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d0579800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d13b3800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d13b3800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d13b3800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d13b3800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d13b3800 /var/lib/ceph/osd/ceph-0/block) close
Dec 06 06:28:12 compute-0 sudo[84380]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:28:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:28:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:12 compute-0 sudo[84897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:12 compute-0 sudo[84897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:12 compute-0 sudo[84897]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:12 compute-0 sudo[84922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:12 compute-0 sudo[84922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:12 compute-0 sudo[84922]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:12 compute-0 sudo[84947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:12 compute-0 sudo[84947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:12 compute-0 sudo[84947]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:12 compute-0 sudo[84972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:28:12 compute-0 sudo[84972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d0579800 /var/lib/ceph/osd/ceph-0/block) close
Dec 06 06:28:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:28:12
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [balancer INFO root] No pools available
Dec 06 06:28:12 compute-0 ceph-osd[84884]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:28:12 compute-0 ceph-osd[84884]: load: jerasure load: lrc 
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 06 06:28:12 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:28:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:28:12 compute-0 podman[85044]: 2025-12-06 06:28:12.950741368 +0000 UTC m=+0.037407779 container create f5b52cb461f098006e98803ace6390fa786a564f0d9fb472a2f1c29967b6ae32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Dec 06 06:28:12 compute-0 ceph-mon[74339]: pgmap v57: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:12 compute-0 systemd[1]: Started libpod-conmon-f5b52cb461f098006e98803ace6390fa786a564f0d9fb472a2f1c29967b6ae32.scope.
Dec 06 06:28:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:13 compute-0 podman[85044]: 2025-12-06 06:28:12.934194444 +0000 UTC m=+0.020860875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:13 compute-0 podman[85044]: 2025-12-06 06:28:13.038817251 +0000 UTC m=+0.125483742 container init f5b52cb461f098006e98803ace6390fa786a564f0d9fb472a2f1c29967b6ae32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pasteur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:28:13 compute-0 podman[85044]: 2025-12-06 06:28:13.05190186 +0000 UTC m=+0.138568261 container start f5b52cb461f098006e98803ace6390fa786a564f0d9fb472a2f1c29967b6ae32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pasteur, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:13 compute-0 podman[85044]: 2025-12-06 06:28:13.057213744 +0000 UTC m=+0.143880185 container attach f5b52cb461f098006e98803ace6390fa786a564f0d9fb472a2f1c29967b6ae32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pasteur, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:13 compute-0 peaceful_pasteur[85061]: 167 167
Dec 06 06:28:13 compute-0 systemd[1]: libpod-f5b52cb461f098006e98803ace6390fa786a564f0d9fb472a2f1c29967b6ae32.scope: Deactivated successfully.
Dec 06 06:28:13 compute-0 conmon[85061]: conmon f5b52cb461f098006e98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5b52cb461f098006e98803ace6390fa786a564f0d9fb472a2f1c29967b6ae32.scope/container/memory.events
Dec 06 06:28:13 compute-0 podman[85044]: 2025-12-06 06:28:13.063991068 +0000 UTC m=+0.150657469 container died f5b52cb461f098006e98803ace6390fa786a564f0d9fb472a2f1c29967b6ae32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pasteur, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 06:28:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a4b7f8a7f07cfa7e0ab6400eb61536a3f55b1a1aeab8179da29a8bb45bc9e3a-merged.mount: Deactivated successfully.
Dec 06 06:28:13 compute-0 podman[85044]: 2025-12-06 06:28:13.109570704 +0000 UTC m=+0.196237105 container remove f5b52cb461f098006e98803ace6390fa786a564f0d9fb472a2f1c29967b6ae32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 06:28:13 compute-0 systemd[1]: libpod-conmon-f5b52cb461f098006e98803ace6390fa786a564f0d9fb472a2f1c29967b6ae32.scope: Deactivated successfully.
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 06 06:28:13 compute-0 podman[85089]: 2025-12-06 06:28:13.266945242 +0000 UTC m=+0.042604550 container create 657ae4ed24ca2cf61061edc04e7dddb2c9d1d406937ae6b2e53fc35e87cac728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Dec 06 06:28:13 compute-0 systemd[1]: Started libpod-conmon-657ae4ed24ca2cf61061edc04e7dddb2c9d1d406937ae6b2e53fc35e87cac728.scope.
Dec 06 06:28:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60c84cbf9ef5e77a763051be2594cd8fd4a17396ff879e03fc4035078590364/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60c84cbf9ef5e77a763051be2594cd8fd4a17396ff879e03fc4035078590364/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60c84cbf9ef5e77a763051be2594cd8fd4a17396ff879e03fc4035078590364/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f60c84cbf9ef5e77a763051be2594cd8fd4a17396ff879e03fc4035078590364/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:13 compute-0 podman[85089]: 2025-12-06 06:28:13.247120291 +0000 UTC m=+0.022779609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:13 compute-0 podman[85089]: 2025-12-06 06:28:13.353584077 +0000 UTC m=+0.129243375 container init 657ae4ed24ca2cf61061edc04e7dddb2c9d1d406937ae6b2e53fc35e87cac728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hawking, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:13 compute-0 podman[85089]: 2025-12-06 06:28:13.361138425 +0000 UTC m=+0.136797703 container start 657ae4ed24ca2cf61061edc04e7dddb2c9d1d406937ae6b2e53fc35e87cac728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hawking, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 06:28:13 compute-0 podman[85089]: 2025-12-06 06:28:13.364134534 +0000 UTC m=+0.139793852 container attach 657ae4ed24ca2cf61061edc04e7dddb2c9d1d406937ae6b2e53fc35e87cac728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 06:28:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v58: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:13 compute-0 ceph-osd[84884]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 06 06:28:13 compute-0 ceph-osd[84884]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1434c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1435400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1435400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1435400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1435400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluefs mount
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluefs mount shared_bdev_used = 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: RocksDB version: 7.9.2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Git sha 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: DB SUMMARY
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: DB Session ID:  B0YFSXZ341OFU7JL58Z5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: CURRENT file:  CURRENT
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                         Options.error_if_exists: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.create_if_missing: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                                     Options.env: 0x5636d1405c70
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                                Options.info_log: 0x5636d05f6ba0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                              Options.statistics: (nil)
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.use_fsync: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                              Options.db_log_dir: 
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                                 Options.wal_dir: db.wal
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.write_buffer_manager: 0x5636d150c460
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.unordered_write: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.row_cache: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                              Options.wal_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.two_write_queues: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.wal_compression: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.atomic_flush: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.max_background_jobs: 4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.max_background_compactions: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.max_subcompactions: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.max_open_files: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Compression algorithms supported:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kZSTD supported: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kXpressCompression supported: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kBZip2Compression supported: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kLZ4Compression supported: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kZlibCompression supported: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kSnappyCompression supported: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d05f6600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ecdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d05f6600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ecdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d05f6600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ecdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d05f6600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ecdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d05f6600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ecdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d05f6600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ecdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d05f6600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ecdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d05f65c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ec430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d05f65c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ec430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d05f65c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ec430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 927988f3-c2be-4f5c-a9f1-8b0a7f40debc
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002493470735, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002493470961, "job": 1, "event": "recovery_finished"}
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: freelist init
Dec 06 06:28:13 compute-0 ceph-osd[84884]: freelist _read_cfg
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluefs umount
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1435400 /var/lib/ceph/osd/ceph-0/block) close
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1435400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1435400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1435400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bdev(0x5636d1435400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluefs mount
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluefs mount shared_bdev_used = 4718592
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: RocksDB version: 7.9.2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Git sha 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Compile date 2025-05-06 23:30:25
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: DB SUMMARY
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: DB Session ID:  B0YFSXZ341OFU7JL58Z4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: CURRENT file:  CURRENT
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                         Options.error_if_exists: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.create_if_missing: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                                     Options.env: 0x5636d0638c40
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                                Options.info_log: 0x5636d0600260
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                              Options.statistics: (nil)
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.use_fsync: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                              Options.db_log_dir: 
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                                 Options.wal_dir: db.wal
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.write_buffer_manager: 0x5636d150c8c0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.unordered_write: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.row_cache: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                              Options.wal_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.two_write_queues: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.wal_compression: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.atomic_flush: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.max_background_jobs: 4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.max_background_compactions: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.max_subcompactions: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.max_open_files: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Compression algorithms supported:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kZSTD supported: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kXpressCompression supported: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kBZip2Compression supported: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kLZ4Compression supported: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kZlibCompression supported: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         kSnappyCompression supported: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d14013c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ec430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d14013c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ec430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d14013c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ec430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d14013c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ec430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d14013c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ec430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d14013c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ec430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d14013c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ec430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d14013e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ecdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d14013e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ecdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:           Options.merge_operator: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5636d14013e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5636d05ecdd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.compression: LZ4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.num_levels: 7
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.bloom_locality: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                               Options.ttl: 2592000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                       Options.enable_blob_files: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                           Options.min_blob_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 927988f3-c2be-4f5c-a9f1-8b0a7f40debc
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002493734996, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002493741177, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002493, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "927988f3-c2be-4f5c-a9f1-8b0a7f40debc", "db_session_id": "B0YFSXZ341OFU7JL58Z4", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002493743887, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002493, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "927988f3-c2be-4f5c-a9f1-8b0a7f40debc", "db_session_id": "B0YFSXZ341OFU7JL58Z4", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002493746749, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002493, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "927988f3-c2be-4f5c-a9f1-8b0a7f40debc", "db_session_id": "B0YFSXZ341OFU7JL58Z4", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002493748052, "job": 1, "event": "recovery_finished"}
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5636d06be700
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: DB pointer 0x5636d14f5a00
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 06 06:28:13 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 06:28:13 compute-0 ceph-osd[84884]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 06 06:28:13 compute-0 ceph-osd[84884]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 06 06:28:13 compute-0 ceph-osd[84884]: _get_class not permitted to load lua
Dec 06 06:28:13 compute-0 ceph-osd[84884]: _get_class not permitted to load sdk
Dec 06 06:28:13 compute-0 ceph-osd[84884]: _get_class not permitted to load test_remote_reads
Dec 06 06:28:13 compute-0 ceph-osd[84884]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 06 06:28:13 compute-0 ceph-osd[84884]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 06 06:28:13 compute-0 ceph-osd[84884]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 06 06:28:13 compute-0 ceph-osd[84884]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 06 06:28:13 compute-0 ceph-osd[84884]: osd.0 0 load_pgs
Dec 06 06:28:13 compute-0 ceph-osd[84884]: osd.0 0 load_pgs opened 0 pgs
Dec 06 06:28:13 compute-0 ceph-osd[84884]: osd.0 0 log_to_monitors true
Dec 06 06:28:13 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0[84880]: 2025-12-06T06:28:13.773+0000 7f321484b740 -1 osd.0 0 log_to_monitors true
Dec 06 06:28:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Dec 06 06:28:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3899596042,v1:192.168.122.100:6803/3899596042]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 06 06:28:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 06 06:28:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 06:28:13 compute-0 ceph-mon[74339]: from='osd.0 [v2:192.168.122.100:6802/3899596042,v1:192.168.122.100:6803/3899596042]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec 06 06:28:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3899596042,v1:192.168.122.100:6803/3899596042]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 06 06:28:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Dec 06 06:28:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Dec 06 06:28:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec 06 06:28:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3899596042,v1:192.168.122.100:6803/3899596042]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 06 06:28:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-0,root=default}
Dec 06 06:28:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 06 06:28:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:14 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 06:28:14 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:14 compute-0 admiring_hawking[85105]: {
Dec 06 06:28:14 compute-0 admiring_hawking[85105]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:28:14 compute-0 admiring_hawking[85105]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:28:14 compute-0 admiring_hawking[85105]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:28:14 compute-0 admiring_hawking[85105]:         "osd_id": 0,
Dec 06 06:28:14 compute-0 admiring_hawking[85105]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:28:14 compute-0 admiring_hawking[85105]:         "type": "bluestore"
Dec 06 06:28:14 compute-0 admiring_hawking[85105]:     }
Dec 06 06:28:14 compute-0 admiring_hawking[85105]: }
Dec 06 06:28:14 compute-0 systemd[1]: libpod-657ae4ed24ca2cf61061edc04e7dddb2c9d1d406937ae6b2e53fc35e87cac728.scope: Deactivated successfully.
Dec 06 06:28:14 compute-0 podman[85089]: 2025-12-06 06:28:14.348486608 +0000 UTC m=+1.124145896 container died 657ae4ed24ca2cf61061edc04e7dddb2c9d1d406937ae6b2e53fc35e87cac728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hawking, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:28:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-f60c84cbf9ef5e77a763051be2594cd8fd4a17396ff879e03fc4035078590364-merged.mount: Deactivated successfully.
Dec 06 06:28:14 compute-0 podman[85089]: 2025-12-06 06:28:14.410313809 +0000 UTC m=+1.185973117 container remove 657ae4ed24ca2cf61061edc04e7dddb2c9d1d406937ae6b2e53fc35e87cac728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hawking, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:28:14 compute-0 systemd[1]: libpod-conmon-657ae4ed24ca2cf61061edc04e7dddb2c9d1d406937ae6b2e53fc35e87cac728.scope: Deactivated successfully.
Dec 06 06:28:14 compute-0 sudo[84972]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:28:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:28:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:14 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 06 06:28:14 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 06 06:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 06 06:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 06:28:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3899596042,v1:192.168.122.100:6803/3899596042]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Dec 06 06:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Dec 06 06:28:15 compute-0 ceph-osd[84884]: osd.0 0 done with init, starting boot process
Dec 06 06:28:15 compute-0 ceph-osd[84884]: osd.0 0 start_boot
Dec 06 06:28:15 compute-0 ceph-osd[84884]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 06 06:28:15 compute-0 ceph-osd[84884]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 06 06:28:15 compute-0 ceph-osd[84884]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 06 06:28:15 compute-0 ceph-osd[84884]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 06 06:28:15 compute-0 ceph-osd[84884]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 06 06:28:15 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Dec 06 06:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 06 06:28:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:15 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 06:28:15 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:15 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3899596042; not ready for session (expect reconnect)
Dec 06 06:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 06 06:28:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:15 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 06:28:15 compute-0 ceph-mon[74339]: pgmap v58: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:15 compute-0 ceph-mon[74339]: from='osd.0 [v2:192.168.122.100:6802/3899596042,v1:192.168.122.100:6803/3899596042]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 06 06:28:15 compute-0 ceph-mon[74339]: osdmap e6: 2 total, 0 up, 2 in
Dec 06 06:28:15 compute-0 ceph-mon[74339]: from='osd.0 [v2:192.168.122.100:6802/3899596042,v1:192.168.122.100:6803/3899596042]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec 06 06:28:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v61: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:28:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:28:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:16 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3899596042; not ready for session (expect reconnect)
Dec 06 06:28:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 06 06:28:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:16 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 06:28:16 compute-0 ceph-mon[74339]: from='osd.0 [v2:192.168.122.100:6802/3899596042,v1:192.168.122.100:6803/3899596042]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Dec 06 06:28:16 compute-0 ceph-mon[74339]: osdmap e7: 2 total, 0 up, 2 in
Dec 06 06:28:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:17 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3899596042; not ready for session (expect reconnect)
Dec 06 06:28:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 06 06:28:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:17 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 06:28:17 compute-0 ceph-mon[74339]: purged_snaps scrub starts
Dec 06 06:28:17 compute-0 ceph-mon[74339]: purged_snaps scrub ok
Dec 06 06:28:17 compute-0 ceph-mon[74339]: pgmap v61: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v62: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:28:18 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3899596042; not ready for session (expect reconnect)
Dec 06 06:28:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 06 06:28:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:18 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 06:28:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:19 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3899596042; not ready for session (expect reconnect)
Dec 06 06:28:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 06 06:28:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:19 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 06:28:19 compute-0 ceph-mon[74339]: pgmap v62: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v63: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:19 compute-0 ceph-osd[84884]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.880 iops: 4833.166 elapsed_sec: 0.621
Dec 06 06:28:19 compute-0 ceph-osd[84884]: log_channel(cluster) log [WRN] : OSD bench result of 4833.165530 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 06 06:28:19 compute-0 ceph-osd[84884]: osd.0 0 waiting for initial osdmap
Dec 06 06:28:19 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0[84880]: 2025-12-06T06:28:19.923+0000 7f3210fe2640 -1 osd.0 0 waiting for initial osdmap
Dec 06 06:28:19 compute-0 ceph-osd[84884]: osd.0 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 06 06:28:19 compute-0 ceph-osd[84884]: osd.0 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 06 06:28:19 compute-0 ceph-osd[84884]: osd.0 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 06 06:28:19 compute-0 ceph-osd[84884]: osd.0 7 check_osdmap_features require_osd_release unknown -> reef
Dec 06 06:28:19 compute-0 ceph-osd[84884]: osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 06 06:28:19 compute-0 ceph-osd[84884]: osd.0 7 set_numa_affinity not setting numa affinity
Dec 06 06:28:19 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-osd-0[84880]: 2025-12-06T06:28:19.947+0000 7f320bdf3640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 06 06:28:19 compute-0 ceph-osd[84884]: osd.0 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Dec 06 06:28:20 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3899596042; not ready for session (expect reconnect)
Dec 06 06:28:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 06 06:28:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:20 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 06 06:28:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 06 06:28:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 06:28:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Dec 06 06:28:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3899596042,v1:192.168.122.100:6803/3899596042] boot
Dec 06 06:28:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Dec 06 06:28:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec 06 06:28:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:20 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:20 compute-0 ceph-osd[84884]: osd.0 8 state: booting -> active
Dec 06 06:28:20 compute-0 ceph-mgr[74630]: [devicehealth INFO root] creating mgr pool
Dec 06 06:28:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Dec 06 06:28:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 06 06:28:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 06 06:28:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 06 06:28:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 06 06:28:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Dec 06 06:28:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e9 crush map has features 3314933000852226048, adjusting msgr requires
Dec 06 06:28:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Dec 06 06:28:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Dec 06 06:28:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Dec 06 06:28:21 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Dec 06 06:28:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:21 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Dec 06 06:28:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 06 06:28:21 compute-0 ceph-osd[84884]: osd.0 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 06 06:28:21 compute-0 ceph-osd[84884]: osd.0 9 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 06 06:28:21 compute-0 ceph-osd[84884]: osd.0 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 06 06:28:21 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 9 pg[1.0( empty local-lis/les=0/0 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:28:21 compute-0 ceph-mon[74339]: pgmap v63: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 06:28:21 compute-0 ceph-mon[74339]: OSD bench result of 4833.165530 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 06 06:28:21 compute-0 ceph-mon[74339]: osd.0 [v2:192.168.122.100:6802/3899596042,v1:192.168.122.100:6803/3899596042] boot
Dec 06 06:28:21 compute-0 ceph-mon[74339]: osdmap e8: 2 total, 1 up, 2 in
Dec 06 06:28:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 06 06:28:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec 06 06:28:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v66: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Dec 06 06:28:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 06 06:28:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 06 06:28:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Dec 06 06:28:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Dec 06 06:28:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:22 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=9/10 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:28:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:28:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 06 06:28:22 compute-0 ceph-mon[74339]: osdmap e9: 2 total, 1 up, 2 in
Dec 06 06:28:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec 06 06:28:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 06 06:28:22 compute-0 ceph-mon[74339]: osdmap e10: 2 total, 1 up, 2 in
Dec 06 06:28:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] creating main.db for devicehealth
Dec 06 06:28:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 06:28:22 compute-0 ceph-mgr[74630]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.1 ()
Dec 06 06:28:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 06 06:28:22 compute-0 sudo[85561]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 06 06:28:22 compute-0 sudo[85561]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 06:28:22 compute-0 sudo[85561]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 06 06:28:22 compute-0 sudo[85561]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 06 06:28:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec 06 06:28:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 06:28:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Dec 06 06:28:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2683792420,v1:192.168.122.101:6801/2683792420]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 06 06:28:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:28:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 06 06:28:23 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2683792420,v1:192.168.122.101:6801/2683792420]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 06 06:28:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Dec 06 06:28:23 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Dec 06 06:28:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Dec 06 06:28:23 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2683792420,v1:192.168.122.101:6801/2683792420]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 06 06:28:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e11 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-1,root=default}
Dec 06 06:28:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:23 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:23 compute-0 ceph-mon[74339]: pgmap v66: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Dec 06 06:28:23 compute-0 ceph-mon[74339]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:28:23 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 06 06:28:23 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 06 06:28:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 06:28:23 compute-0 ceph-mon[74339]: from='osd.1 [v2:192.168.122.101:6800/2683792420,v1:192.168.122.101:6801/2683792420]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec 06 06:28:23 compute-0 ceph-mon[74339]: from='osd.1 [v2:192.168.122.101:6800/2683792420,v1:192.168.122.101:6801/2683792420]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 06 06:28:23 compute-0 ceph-mon[74339]: osdmap e11: 2 total, 1 up, 2 in
Dec 06 06:28:23 compute-0 ceph-mon[74339]: from='osd.1 [v2:192.168.122.101:6800/2683792420,v1:192.168.122.101:6801/2683792420]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec 06 06:28:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:23 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.sfzyix(active, since 2m)
Dec 06 06:28:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v69: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Dec 06 06:28:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 06 06:28:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2683792420,v1:192.168.122.101:6801/2683792420]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Dec 06 06:28:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e12 e12: 2 total, 1 up, 2 in
Dec 06 06:28:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 1 up, 2 in
Dec 06 06:28:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:24 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:24 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 12 pg[1.0( v 10'32 (0'0,10'32] local-lis/les=9/10 n=2 ec=9/9 lis/c=9/9 les/c/f=10/10/0 sis=12 pruub=13.988319397s) [] r=-1 lpr=12 pi=[9,12)/1 crt=10'32 lcod 10'31 mlcod 10'31 active pruub 24.423660278s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:28:24 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 12 pg[1.0( v 10'32 (0'0,10'32] local-lis/les=9/10 n=2 ec=9/9 lis/c=9/9 les/c/f=10/10/0 sis=12 pruub=13.988319397s) [] r=-1 lpr=12 pi=[9,12)/1 crt=10'32 lcod 10'31 mlcod 0'0 unknown NOTIFY pruub 24.423660278s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:28:24 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:24 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:28:24 compute-0 ceph-mon[74339]: mgrmap e9: compute-0.sfzyix(active, since 2m)
Dec 06 06:28:24 compute-0 ceph-mon[74339]: from='osd.1 [v2:192.168.122.101:6800/2683792420,v1:192.168.122.101:6801/2683792420]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Dec 06 06:28:24 compute-0 ceph-mon[74339]: osdmap e12: 2 total, 1 up, 2 in
Dec 06 06:28:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:25 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:25 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:25 compute-0 ceph-mon[74339]: pgmap v69: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Dec 06 06:28:25 compute-0 ceph-mon[74339]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:28:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v71: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Dec 06 06:28:26 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:26 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:26 compute-0 ceph-mon[74339]: purged_snaps scrub starts
Dec 06 06:28:26 compute-0 ceph-mon[74339]: purged_snaps scrub ok
Dec 06 06:28:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:27 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:27 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:27 compute-0 ceph-mon[74339]: pgmap v71: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Dec 06 06:28:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v72: 1 pgs: 1 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:27 compute-0 sudo[85587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctdzsqiuqjpmdywiovnwjahdezplnrdl ; /usr/bin/python3'
Dec 06 06:28:27 compute-0 sudo[85587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:28:28 compute-0 python3[85589]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:28 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:28 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:28 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:28 compute-0 podman[85591]: 2025-12-06 06:28:28.291680517 +0000 UTC m=+0.199761811 container create b0fe224b8603ee28ccbb55da7efd35d584c2da76a6d630bfe830f76b2b18429f (image=quay.io/ceph/ceph:v18, name=serene_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:28 compute-0 systemd[75955]: Starting Mark boot as successful...
Dec 06 06:28:28 compute-0 systemd[1]: Started libpod-conmon-b0fe224b8603ee28ccbb55da7efd35d584c2da76a6d630bfe830f76b2b18429f.scope.
Dec 06 06:28:28 compute-0 systemd[75955]: Finished Mark boot as successful.
Dec 06 06:28:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e931b80598dfc3d3e739b3eff98235e4b972744afb2b9173cef97724202ad94e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e931b80598dfc3d3e739b3eff98235e4b972744afb2b9173cef97724202ad94e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e931b80598dfc3d3e739b3eff98235e4b972744afb2b9173cef97724202ad94e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:28 compute-0 podman[85591]: 2025-12-06 06:28:28.367309121 +0000 UTC m=+0.275390425 container init b0fe224b8603ee28ccbb55da7efd35d584c2da76a6d630bfe830f76b2b18429f (image=quay.io/ceph/ceph:v18, name=serene_swirles, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 06:28:28 compute-0 podman[85591]: 2025-12-06 06:28:28.275319421 +0000 UTC m=+0.183400705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:28 compute-0 podman[85591]: 2025-12-06 06:28:28.375301814 +0000 UTC m=+0.283383088 container start b0fe224b8603ee28ccbb55da7efd35d584c2da76a6d630bfe830f76b2b18429f (image=quay.io/ceph/ceph:v18, name=serene_swirles, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 06:28:28 compute-0 podman[85591]: 2025-12-06 06:28:28.379494252 +0000 UTC m=+0.287575546 container attach b0fe224b8603ee28ccbb55da7efd35d584c2da76a6d630bfe830f76b2b18429f (image=quay.io/ceph/ceph:v18, name=serene_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 06:28:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 06 06:28:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3273127661' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:28:29 compute-0 serene_swirles[85608]: 
Dec 06 06:28:29 compute-0 serene_swirles[85608]: {"fsid":"40a1bae4-cf76-5610-8dab-c75116dfe0bb","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":186,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":1,"osd_up_since":1765002500,"num_in_osds":2,"osd_in_since":1765002483,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":28041216,"bytes_avail":7483957248,"bytes_total":7511998464,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-06T06:27:14.826183+0000","services":{}},"progress_events":{}}
Dec 06 06:28:29 compute-0 systemd[1]: libpod-b0fe224b8603ee28ccbb55da7efd35d584c2da76a6d630bfe830f76b2b18429f.scope: Deactivated successfully.
Dec 06 06:28:29 compute-0 podman[85591]: 2025-12-06 06:28:29.188677085 +0000 UTC m=+1.096758359 container died b0fe224b8603ee28ccbb55da7efd35d584c2da76a6d630bfe830f76b2b18429f (image=quay.io/ceph/ceph:v18, name=serene_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:29 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:29 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e931b80598dfc3d3e739b3eff98235e4b972744afb2b9173cef97724202ad94e-merged.mount: Deactivated successfully.
Dec 06 06:28:29 compute-0 podman[85591]: 2025-12-06 06:28:29.245865933 +0000 UTC m=+1.153947207 container remove b0fe224b8603ee28ccbb55da7efd35d584c2da76a6d630bfe830f76b2b18429f (image=quay.io/ceph/ceph:v18, name=serene_swirles, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:29 compute-0 ceph-mon[74339]: pgmap v72: 1 pgs: 1 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3273127661' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:28:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:29 compute-0 systemd[1]: libpod-conmon-b0fe224b8603ee28ccbb55da7efd35d584c2da76a6d630bfe830f76b2b18429f.scope: Deactivated successfully.
Dec 06 06:28:29 compute-0 sudo[85587]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v73: 1 pgs: 1 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:29 compute-0 sudo[85668]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqaxrrgqlluqusbzvtiuyuevgpszzwoi ; /usr/bin/python3'
Dec 06 06:28:29 compute-0 sudo[85668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:29 compute-0 python3[85670]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:29 compute-0 podman[85671]: 2025-12-06 06:28:29.884503755 +0000 UTC m=+0.060460987 container create 5823fbbbefb736236139466304715f306ae3e71ca4af6f13655bd89af60d60d4 (image=quay.io/ceph/ceph:v18, name=stoic_villani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 06:28:29 compute-0 systemd[1]: Started libpod-conmon-5823fbbbefb736236139466304715f306ae3e71ca4af6f13655bd89af60d60d4.scope.
Dec 06 06:28:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:29 compute-0 podman[85671]: 2025-12-06 06:28:29.853570959 +0000 UTC m=+0.029528251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e92f60535eeacab5194ff62b6b616257237cbe8bab60b61e5a975e8d69725673/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e92f60535eeacab5194ff62b6b616257237cbe8bab60b61e5a975e8d69725673/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:29 compute-0 podman[85671]: 2025-12-06 06:28:29.964014645 +0000 UTC m=+0.139971847 container init 5823fbbbefb736236139466304715f306ae3e71ca4af6f13655bd89af60d60d4 (image=quay.io/ceph/ceph:v18, name=stoic_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:29 compute-0 podman[85671]: 2025-12-06 06:28:29.971281844 +0000 UTC m=+0.147239036 container start 5823fbbbefb736236139466304715f306ae3e71ca4af6f13655bd89af60d60d4 (image=quay.io/ceph/ceph:v18, name=stoic_villani, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:29 compute-0 podman[85671]: 2025-12-06 06:28:29.97542908 +0000 UTC m=+0.151386282 container attach 5823fbbbefb736236139466304715f306ae3e71ca4af6f13655bd89af60d60d4 (image=quay.io/ceph/ceph:v18, name=stoic_villani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:30 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:30 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 06 06:28:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1637686421' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 06:28:31 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:31 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 06 06:28:31 compute-0 ceph-mon[74339]: pgmap v73: 1 pgs: 1 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1637686421' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 06:28:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1637686421' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 06:28:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e13 e13: 2 total, 1 up, 2 in
Dec 06 06:28:31 compute-0 stoic_villani[85687]: pool 'vms' created
Dec 06 06:28:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 1 up, 2 in
Dec 06 06:28:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:31 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:31 compute-0 systemd[1]: libpod-5823fbbbefb736236139466304715f306ae3e71ca4af6f13655bd89af60d60d4.scope: Deactivated successfully.
Dec 06 06:28:31 compute-0 podman[85671]: 2025-12-06 06:28:31.343165506 +0000 UTC m=+1.519122698 container died 5823fbbbefb736236139466304715f306ae3e71ca4af6f13655bd89af60d60d4 (image=quay.io/ceph/ceph:v18, name=stoic_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-e92f60535eeacab5194ff62b6b616257237cbe8bab60b61e5a975e8d69725673-merged.mount: Deactivated successfully.
Dec 06 06:28:31 compute-0 podman[85671]: 2025-12-06 06:28:31.389613451 +0000 UTC m=+1.565570643 container remove 5823fbbbefb736236139466304715f306ae3e71ca4af6f13655bd89af60d60d4 (image=quay.io/ceph/ceph:v18, name=stoic_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:28:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:28:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v75: 2 pgs: 2 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:28:31 compute-0 sudo[85668]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:31 compute-0 systemd[1]: libpod-conmon-5823fbbbefb736236139466304715f306ae3e71ca4af6f13655bd89af60d60d4.scope: Deactivated successfully.
Dec 06 06:28:31 compute-0 sudo[85727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:31 compute-0 sudo[85727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:31 compute-0 sudo[85727]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:31 compute-0 sudo[85798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szhllryuapkunjgowqqjelboouatgxgq ; /usr/bin/python3'
Dec 06 06:28:31 compute-0 sudo[85798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:31 compute-0 sudo[85753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:28:31 compute-0 sudo[85753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:31 compute-0 sudo[85753]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:31 compute-0 python3[85801]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:31 compute-0 podman[85803]: 2025-12-06 06:28:31.719726321 +0000 UTC m=+0.048419460 container create 5d7d8117529148c83b0f9e60a5345648717fcf4d8f519fb61cf8096813df5c6c (image=quay.io/ceph/ceph:v18, name=kind_colden, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:31 compute-0 systemd[1]: Started libpod-conmon-5d7d8117529148c83b0f9e60a5345648717fcf4d8f519fb61cf8096813df5c6c.scope.
Dec 06 06:28:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecb0907155dc7d7b496ec15c739c4e60bdbf2db5004e847cfbdff3c598f85f81/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecb0907155dc7d7b496ec15c739c4e60bdbf2db5004e847cfbdff3c598f85f81/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:31 compute-0 podman[85803]: 2025-12-06 06:28:31.788331725 +0000 UTC m=+0.117024864 container init 5d7d8117529148c83b0f9e60a5345648717fcf4d8f519fb61cf8096813df5c6c (image=quay.io/ceph/ceph:v18, name=kind_colden, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 06:28:31 compute-0 podman[85803]: 2025-12-06 06:28:31.700404597 +0000 UTC m=+0.029097756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:31 compute-0 podman[85803]: 2025-12-06 06:28:31.796522374 +0000 UTC m=+0.125215513 container start 5d7d8117529148c83b0f9e60a5345648717fcf4d8f519fb61cf8096813df5c6c (image=quay.io/ceph/ceph:v18, name=kind_colden, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 06:28:31 compute-0 podman[85803]: 2025-12-06 06:28:31.800797684 +0000 UTC m=+0.129490833 container attach 5d7d8117529148c83b0f9e60a5345648717fcf4d8f519fb61cf8096813df5c6c (image=quay.io/ceph/ceph:v18, name=kind_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 06:28:31 compute-0 sudo[85822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:31 compute-0 sudo[85822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:31 compute-0 sudo[85822]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:31 compute-0 sudo[85847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:31 compute-0 sudo[85847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:31 compute-0 sudo[85847]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:31 compute-0 sudo[85872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:31 compute-0 sudo[85872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:31 compute-0 sudo[85872]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:32 compute-0 sudo[85897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:28:32 compute-0 sudo[85897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:32 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:32 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:32 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:28:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:28:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1637686421' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 06:28:32 compute-0 ceph-mon[74339]: osdmap e13: 2 total, 1 up, 2 in
Dec 06 06:28:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:32 compute-0 ceph-mon[74339]: pgmap v75: 2 pgs: 2 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 06 06:28:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2169014169' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 06:28:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:28:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:28:32 compute-0 podman[86018]: 2025-12-06 06:28:32.959885158 +0000 UTC m=+0.084121704 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:28:33 compute-0 podman[86018]: 2025-12-06 06:28:33.082712721 +0000 UTC m=+0.206949277 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:33 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:33 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 06 06:28:33 compute-0 ceph-mon[74339]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:28:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2169014169' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 06:28:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2169014169' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 06:28:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e14 e14: 2 total, 1 up, 2 in
Dec 06 06:28:33 compute-0 kind_colden[85818]: pool 'volumes' created
Dec 06 06:28:33 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 1 up, 2 in
Dec 06 06:28:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:33 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:28:33 compute-0 systemd[1]: libpod-5d7d8117529148c83b0f9e60a5345648717fcf4d8f519fb61cf8096813df5c6c.scope: Deactivated successfully.
Dec 06 06:28:33 compute-0 podman[85803]: 2025-12-06 06:28:33.369652554 +0000 UTC m=+1.698345723 container died 5d7d8117529148c83b0f9e60a5345648717fcf4d8f519fb61cf8096813df5c6c (image=quay.io/ceph/ceph:v18, name=kind_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecb0907155dc7d7b496ec15c739c4e60bdbf2db5004e847cfbdff3c598f85f81-merged.mount: Deactivated successfully.
Dec 06 06:28:33 compute-0 sudo[85897]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v77: 3 pgs: 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:28:33 compute-0 podman[85803]: 2025-12-06 06:28:33.418510819 +0000 UTC m=+1.747203958 container remove 5d7d8117529148c83b0f9e60a5345648717fcf4d8f519fb61cf8096813df5c6c (image=quay.io/ceph/ceph:v18, name=kind_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:28:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:28:33 compute-0 systemd[1]: libpod-conmon-5d7d8117529148c83b0f9e60a5345648717fcf4d8f519fb61cf8096813df5c6c.scope: Deactivated successfully.
Dec 06 06:28:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:33 compute-0 sudo[85798]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:33 compute-0 sudo[86121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:33 compute-0 sudo[86121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:33 compute-0 sudo[86121]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:33 compute-0 sudo[86173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oouoxzqudmxrkiputgchkjheidjwzhri ; /usr/bin/python3'
Dec 06 06:28:33 compute-0 sudo[86173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:33 compute-0 sudo[86164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:33 compute-0 sudo[86164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:33 compute-0 sudo[86164]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:33 compute-0 sudo[86197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:33 compute-0 sudo[86197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:33 compute-0 sudo[86197]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:33 compute-0 sudo[86222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:28:33 compute-0 sudo[86222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:33 compute-0 python3[86189]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:33 compute-0 podman[86247]: 2025-12-06 06:28:33.787007069 +0000 UTC m=+0.041225995 container create c6e6c7297498036c9e325b83591bc50929bd5839393f33e2e893228e33577d7c (image=quay.io/ceph/ceph:v18, name=strange_kirch, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 06:28:33 compute-0 systemd[1]: Started libpod-conmon-c6e6c7297498036c9e325b83591bc50929bd5839393f33e2e893228e33577d7c.scope.
Dec 06 06:28:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0381923bef0352da1a52d80fa724bca93f8da40dc97a0f393ef080d7c00c27e5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0381923bef0352da1a52d80fa724bca93f8da40dc97a0f393ef080d7c00c27e5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:33 compute-0 podman[86247]: 2025-12-06 06:28:33.767997845 +0000 UTC m=+0.022216561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:33 compute-0 podman[86247]: 2025-12-06 06:28:33.871733381 +0000 UTC m=+0.125952097 container init c6e6c7297498036c9e325b83591bc50929bd5839393f33e2e893228e33577d7c (image=quay.io/ceph/ceph:v18, name=strange_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 06:28:33 compute-0 podman[86247]: 2025-12-06 06:28:33.881908246 +0000 UTC m=+0.136126942 container start c6e6c7297498036c9e325b83591bc50929bd5839393f33e2e893228e33577d7c (image=quay.io/ceph/ceph:v18, name=strange_kirch, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 06:28:33 compute-0 podman[86247]: 2025-12-06 06:28:33.887738137 +0000 UTC m=+0.141956863 container attach c6e6c7297498036c9e325b83591bc50929bd5839393f33e2e893228e33577d7c (image=quay.io/ceph/ceph:v18, name=strange_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 06:28:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:28:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:34 compute-0 sudo[86222]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:34 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:34 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:34 compute-0 sudo[86302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:34 compute-0 sudo[86302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:34 compute-0 sudo[86302]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:34 compute-0 sudo[86342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:28:34 compute-0 sudo[86342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:34 compute-0 sudo[86342]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 06 06:28:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2169014169' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 06:28:34 compute-0 ceph-mon[74339]: osdmap e14: 2 total, 1 up, 2 in
Dec 06 06:28:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:34 compute-0 ceph-mon[74339]: pgmap v77: 3 pgs: 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:34 compute-0 sudo[86367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:28:34 compute-0 sudo[86367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:34 compute-0 sudo[86367]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e15 e15: 2 total, 1 up, 2 in
Dec 06 06:28:34 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 1 up, 2 in
Dec 06 06:28:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:34 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:34 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:28:34 compute-0 sudo[86392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty --filter-for-batch
Dec 06 06:28:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 06 06:28:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1347204608' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 06:28:34 compute-0 sudo[86392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:28:34 compute-0 podman[86457]: 2025-12-06 06:28:34.75631572 +0000 UTC m=+0.037771891 container create 2cf2beba539f86a48b074475a8c79eebb5e5f1d16d39a93a102283ba3b9b6ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_easley, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 06:28:34 compute-0 systemd[1]: Started libpod-conmon-2cf2beba539f86a48b074475a8c79eebb5e5f1d16d39a93a102283ba3b9b6ab9.scope.
Dec 06 06:28:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:28:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:28:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:34 compute-0 podman[86457]: 2025-12-06 06:28:34.739440156 +0000 UTC m=+0.020896337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:34 compute-0 podman[86457]: 2025-12-06 06:28:34.83942008 +0000 UTC m=+0.120876261 container init 2cf2beba539f86a48b074475a8c79eebb5e5f1d16d39a93a102283ba3b9b6ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:34 compute-0 podman[86457]: 2025-12-06 06:28:34.84553261 +0000 UTC m=+0.126988781 container start 2cf2beba539f86a48b074475a8c79eebb5e5f1d16d39a93a102283ba3b9b6ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_easley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:34 compute-0 podman[86457]: 2025-12-06 06:28:34.848806867 +0000 UTC m=+0.130263038 container attach 2cf2beba539f86a48b074475a8c79eebb5e5f1d16d39a93a102283ba3b9b6ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:34 compute-0 dazzling_easley[86473]: 167 167
Dec 06 06:28:34 compute-0 systemd[1]: libpod-2cf2beba539f86a48b074475a8c79eebb5e5f1d16d39a93a102283ba3b9b6ab9.scope: Deactivated successfully.
Dec 06 06:28:34 compute-0 podman[86457]: 2025-12-06 06:28:34.851270198 +0000 UTC m=+0.132726359 container died 2cf2beba539f86a48b074475a8c79eebb5e5f1d16d39a93a102283ba3b9b6ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_easley, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 06:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c8fa5c349ca98bb358cdd9d4785c343bd81a067cdddf01428f065a74c427717-merged.mount: Deactivated successfully.
Dec 06 06:28:34 compute-0 podman[86457]: 2025-12-06 06:28:34.890125275 +0000 UTC m=+0.171581446 container remove 2cf2beba539f86a48b074475a8c79eebb5e5f1d16d39a93a102283ba3b9b6ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_easley, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:34 compute-0 systemd[1]: libpod-conmon-2cf2beba539f86a48b074475a8c79eebb5e5f1d16d39a93a102283ba3b9b6ab9.scope: Deactivated successfully.
Dec 06 06:28:35 compute-0 podman[86496]: 2025-12-06 06:28:35.044141262 +0000 UTC m=+0.043341044 container create 599953e6229d58bacb4212488e107775bb26bd484b1c195e9083efb4eda63448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 06:28:35 compute-0 systemd[1]: Started libpod-conmon-599953e6229d58bacb4212488e107775bb26bd484b1c195e9083efb4eda63448.scope.
Dec 06 06:28:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676f2ecb11bfb7e823db084f52b5aed5ea75873345cc2ab09d1973cd94f5065b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676f2ecb11bfb7e823db084f52b5aed5ea75873345cc2ab09d1973cd94f5065b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676f2ecb11bfb7e823db084f52b5aed5ea75873345cc2ab09d1973cd94f5065b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/676f2ecb11bfb7e823db084f52b5aed5ea75873345cc2ab09d1973cd94f5065b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:35 compute-0 podman[86496]: 2025-12-06 06:28:35.024149616 +0000 UTC m=+0.023349448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:28:35 compute-0 podman[86496]: 2025-12-06 06:28:35.131045146 +0000 UTC m=+0.130244948 container init 599953e6229d58bacb4212488e107775bb26bd484b1c195e9083efb4eda63448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:35 compute-0 podman[86496]: 2025-12-06 06:28:35.138068196 +0000 UTC m=+0.137267978 container start 599953e6229d58bacb4212488e107775bb26bd484b1c195e9083efb4eda63448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:28:35 compute-0 podman[86496]: 2025-12-06 06:28:35.142002316 +0000 UTC m=+0.141202098 container attach 599953e6229d58bacb4212488e107775bb26bd484b1c195e9083efb4eda63448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:35 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:35 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 06 06:28:35 compute-0 ceph-mon[74339]: osdmap e15: 2 total, 1 up, 2 in
Dec 06 06:28:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1347204608' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 06:28:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1347204608' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 06:28:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e16 e16: 2 total, 1 up, 2 in
Dec 06 06:28:35 compute-0 strange_kirch[86262]: pool 'backups' created
Dec 06 06:28:35 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 1 up, 2 in
Dec 06 06:28:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:35 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:35 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:28:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v80: 4 pgs: 4 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:35 compute-0 systemd[1]: libpod-c6e6c7297498036c9e325b83591bc50929bd5839393f33e2e893228e33577d7c.scope: Deactivated successfully.
Dec 06 06:28:35 compute-0 podman[86247]: 2025-12-06 06:28:35.409465379 +0000 UTC m=+1.663684075 container died c6e6c7297498036c9e325b83591bc50929bd5839393f33e2e893228e33577d7c (image=quay.io/ceph/ceph:v18, name=strange_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 06:28:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0381923bef0352da1a52d80fa724bca93f8da40dc97a0f393ef080d7c00c27e5-merged.mount: Deactivated successfully.
Dec 06 06:28:35 compute-0 podman[86247]: 2025-12-06 06:28:35.456978959 +0000 UTC m=+1.711197655 container remove c6e6c7297498036c9e325b83591bc50929bd5839393f33e2e893228e33577d7c (image=quay.io/ceph/ceph:v18, name=strange_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 06:28:35 compute-0 systemd[1]: libpod-conmon-c6e6c7297498036c9e325b83591bc50929bd5839393f33e2e893228e33577d7c.scope: Deactivated successfully.
Dec 06 06:28:35 compute-0 sudo[86173]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:35 compute-0 sudo[86553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hybdtpsygqgubvgtnmyjiiqwkfdbwohp ; /usr/bin/python3'
Dec 06 06:28:35 compute-0 sudo[86553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:35 compute-0 python3[86555]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:35 compute-0 podman[86556]: 2025-12-06 06:28:35.827594929 +0000 UTC m=+0.046474586 container create 7a542f16b62d8de1bd4737329aed504b3d22c3b8511b33f19c084f5b4a67c50e (image=quay.io/ceph/ceph:v18, name=nice_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:35 compute-0 systemd[1]: Started libpod-conmon-7a542f16b62d8de1bd4737329aed504b3d22c3b8511b33f19c084f5b4a67c50e.scope.
Dec 06 06:28:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2811f2497b4e7843a8d49983a1ca77176574d2278a24132ef29594908cd109/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c2811f2497b4e7843a8d49983a1ca77176574d2278a24132ef29594908cd109/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:35 compute-0 podman[86556]: 2025-12-06 06:28:35.804800101 +0000 UTC m=+0.023679788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:35 compute-0 podman[86556]: 2025-12-06 06:28:35.904380191 +0000 UTC m=+0.123259868 container init 7a542f16b62d8de1bd4737329aed504b3d22c3b8511b33f19c084f5b4a67c50e (image=quay.io/ceph/ceph:v18, name=nice_mcclintock, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:35 compute-0 podman[86556]: 2025-12-06 06:28:35.913867913 +0000 UTC m=+0.132747570 container start 7a542f16b62d8de1bd4737329aed504b3d22c3b8511b33f19c084f5b4a67c50e (image=quay.io/ceph/ceph:v18, name=nice_mcclintock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:35 compute-0 podman[86556]: 2025-12-06 06:28:35.919554089 +0000 UTC m=+0.138433756 container attach 7a542f16b62d8de1bd4737329aed504b3d22c3b8511b33f19c084f5b4a67c50e (image=quay.io/ceph/ceph:v18, name=nice_mcclintock, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:36 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:36 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]: [
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:     {
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:         "available": false,
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:         "ceph_device": false,
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:         "lsm_data": {},
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:         "lvs": [],
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:         "path": "/dev/sr0",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:         "rejected_reasons": [
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "Insufficient space (<5GB)",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "Has a FileSystem"
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:         ],
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:         "sys_api": {
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "actuators": null,
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "device_nodes": "sr0",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "devname": "sr0",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "human_readable_size": "482.00 KB",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "id_bus": "ata",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "model": "QEMU DVD-ROM",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "nr_requests": "2",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "parent": "/dev/sr0",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "partitions": {},
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "path": "/dev/sr0",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "removable": "1",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "rev": "2.5+",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "ro": "0",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "rotational": "1",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "sas_address": "",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "sas_device_handle": "",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "scheduler_mode": "mq-deadline",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "sectors": 0,
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "sectorsize": "2048",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "size": 493568.0,
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "support_discard": "2048",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "type": "disk",
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:             "vendor": "QEMU"
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:         }
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]:     }
Dec 06 06:28:36 compute-0 laughing_engelbart[86512]: ]
Dec 06 06:28:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 06 06:28:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e17 e17: 2 total, 1 up, 2 in
Dec 06 06:28:36 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 1 up, 2 in
Dec 06 06:28:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:36 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1347204608' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 06:28:36 compute-0 ceph-mon[74339]: osdmap e16: 2 total, 1 up, 2 in
Dec 06 06:28:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:36 compute-0 ceph-mon[74339]: pgmap v80: 4 pgs: 4 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:36 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:28:36 compute-0 systemd[1]: libpod-599953e6229d58bacb4212488e107775bb26bd484b1c195e9083efb4eda63448.scope: Deactivated successfully.
Dec 06 06:28:36 compute-0 podman[86496]: 2025-12-06 06:28:36.415064961 +0000 UTC m=+1.414264743 container died 599953e6229d58bacb4212488e107775bb26bd484b1c195e9083efb4eda63448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:28:36 compute-0 systemd[1]: libpod-599953e6229d58bacb4212488e107775bb26bd484b1c195e9083efb4eda63448.scope: Consumed 1.284s CPU time.
Dec 06 06:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-676f2ecb11bfb7e823db084f52b5aed5ea75873345cc2ab09d1973cd94f5065b-merged.mount: Deactivated successfully.
Dec 06 06:28:36 compute-0 podman[86496]: 2025-12-06 06:28:36.471626009 +0000 UTC m=+1.470825791 container remove 599953e6229d58bacb4212488e107775bb26bd484b1c195e9083efb4eda63448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 06:28:36 compute-0 systemd[1]: libpod-conmon-599953e6229d58bacb4212488e107775bb26bd484b1c195e9083efb4eda63448.scope: Deactivated successfully.
Dec 06 06:28:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 06 06:28:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/293865094' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 06:28:36 compute-0 sudo[86392]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:28:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:28:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:28:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:28:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Dec 06 06:28:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 06 06:28:36 compute-0 ceph-mgr[74630]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec 06 06:28:36 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec 06 06:28:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Dec 06 06:28:36 compute-0 ceph-mgr[74630]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 06 06:28:36 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 06 06:28:37 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:37 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v82: 4 pgs: 1 creating+peering, 1 active+clean, 2 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 06 06:28:37 compute-0 ceph-mon[74339]: osdmap e17: 2 total, 1 up, 2 in
Dec 06 06:28:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/293865094' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 06:28:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 06 06:28:37 compute-0 ceph-mon[74339]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec 06 06:28:37 compute-0 ceph-mon[74339]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec 06 06:28:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/293865094' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 06:28:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e18 e18: 2 total, 1 up, 2 in
Dec 06 06:28:37 compute-0 nice_mcclintock[86574]: pool 'images' created
Dec 06 06:28:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 1 up, 2 in
Dec 06 06:28:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:37 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:37 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 18 pg[5.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:28:37 compute-0 systemd[1]: libpod-7a542f16b62d8de1bd4737329aed504b3d22c3b8511b33f19c084f5b4a67c50e.scope: Deactivated successfully.
Dec 06 06:28:37 compute-0 conmon[86574]: conmon 7a542f16b62d8de1bd47 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a542f16b62d8de1bd4737329aed504b3d22c3b8511b33f19c084f5b4a67c50e.scope/container/memory.events
Dec 06 06:28:37 compute-0 podman[86556]: 2025-12-06 06:28:37.441268045 +0000 UTC m=+1.660147702 container died 7a542f16b62d8de1bd4737329aed504b3d22c3b8511b33f19c084f5b4a67c50e (image=quay.io/ceph/ceph:v18, name=nice_mcclintock, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c2811f2497b4e7843a8d49983a1ca77176574d2278a24132ef29594908cd109-merged.mount: Deactivated successfully.
Dec 06 06:28:37 compute-0 podman[86556]: 2025-12-06 06:28:37.489850459 +0000 UTC m=+1.708730106 container remove 7a542f16b62d8de1bd4737329aed504b3d22c3b8511b33f19c084f5b4a67c50e (image=quay.io/ceph/ceph:v18, name=nice_mcclintock, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:28:37 compute-0 systemd[1]: libpod-conmon-7a542f16b62d8de1bd4737329aed504b3d22c3b8511b33f19c084f5b4a67c50e.scope: Deactivated successfully.
Dec 06 06:28:37 compute-0 sudo[86553]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:37 compute-0 sudo[87721]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fchldbltaravzuoaqzapetazjhpjqaye ; /usr/bin/python3'
Dec 06 06:28:37 compute-0 sudo[87721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:37 compute-0 python3[87723]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:28:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:28:37 compute-0 podman[87724]: 2025-12-06 06:28:37.824614003 +0000 UTC m=+0.023282929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:37 compute-0 podman[87724]: 2025-12-06 06:28:37.981492709 +0000 UTC m=+0.180161635 container create ece6dd27916a678de835614956d792e538f2bf8905cc2b26cf4714ebd34f80c4 (image=quay.io/ceph/ceph:v18, name=zen_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 06:28:38 compute-0 systemd[1]: Started libpod-conmon-ece6dd27916a678de835614956d792e538f2bf8905cc2b26cf4714ebd34f80c4.scope.
Dec 06 06:28:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851d04551a8a90c450d5e372eadeb2f5d5069ce482f2a9be8b815278d05c0884/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/851d04551a8a90c450d5e372eadeb2f5d5069ce482f2a9be8b815278d05c0884/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:38 compute-0 podman[87724]: 2025-12-06 06:28:38.068608552 +0000 UTC m=+0.267277478 container init ece6dd27916a678de835614956d792e538f2bf8905cc2b26cf4714ebd34f80c4 (image=quay.io/ceph/ceph:v18, name=zen_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 06:28:38 compute-0 podman[87724]: 2025-12-06 06:28:38.075555399 +0000 UTC m=+0.274224365 container start ece6dd27916a678de835614956d792e538f2bf8905cc2b26cf4714ebd34f80c4 (image=quay.io/ceph/ceph:v18, name=zen_zhukovsky, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 06:28:38 compute-0 podman[87724]: 2025-12-06 06:28:38.080738396 +0000 UTC m=+0.279407322 container attach ece6dd27916a678de835614956d792e538f2bf8905cc2b26cf4714ebd34f80c4 (image=quay.io/ceph/ceph:v18, name=zen_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:38 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:38 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:39 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 06 06:28:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 06 06:28:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2180840478' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 06:28:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:39 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:39 compute-0 ceph-mon[74339]: pgmap v82: 4 pgs: 1 creating+peering, 1 active+clean, 2 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/293865094' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 06:28:39 compute-0 ceph-mon[74339]: osdmap e18: 2 total, 1 up, 2 in
Dec 06 06:28:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:39 compute-0 ceph-mon[74339]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:28:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v84: 5 pgs: 1 creating+peering, 1 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e19 e19: 2 total, 1 up, 2 in
Dec 06 06:28:39 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 1 up, 2 in
Dec 06 06:28:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:39 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 19 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:28:40 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:40 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:41 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v86: 5 pgs: 3 active+clean, 2 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 06 06:28:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:41 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2180840478' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 06:28:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:41 compute-0 ceph-mon[74339]: pgmap v84: 5 pgs: 1 creating+peering, 1 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:41 compute-0 ceph-mon[74339]: osdmap e19: 2 total, 1 up, 2 in
Dec 06 06:28:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:41 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2180840478' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 06:28:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e20 e20: 2 total, 1 up, 2 in
Dec 06 06:28:41 compute-0 zen_zhukovsky[87739]: pool 'cephfs.cephfs.meta' created
Dec 06 06:28:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 1 up, 2 in
Dec 06 06:28:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:41 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:41 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:28:41 compute-0 systemd[1]: libpod-ece6dd27916a678de835614956d792e538f2bf8905cc2b26cf4714ebd34f80c4.scope: Deactivated successfully.
Dec 06 06:28:41 compute-0 podman[87724]: 2025-12-06 06:28:41.611966055 +0000 UTC m=+3.810634981 container died ece6dd27916a678de835614956d792e538f2bf8905cc2b26cf4714ebd34f80c4 (image=quay.io/ceph/ceph:v18, name=zen_zhukovsky, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-851d04551a8a90c450d5e372eadeb2f5d5069ce482f2a9be8b815278d05c0884-merged.mount: Deactivated successfully.
Dec 06 06:28:41 compute-0 podman[87724]: 2025-12-06 06:28:41.65209099 +0000 UTC m=+3.850759916 container remove ece6dd27916a678de835614956d792e538f2bf8905cc2b26cf4714ebd34f80c4 (image=quay.io/ceph/ceph:v18, name=zen_zhukovsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:41 compute-0 systemd[1]: libpod-conmon-ece6dd27916a678de835614956d792e538f2bf8905cc2b26cf4714ebd34f80c4.scope: Deactivated successfully.
Dec 06 06:28:41 compute-0 sudo[87721]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:41 compute-0 sudo[87803]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiokjkxvutijvjkmqvhmcvkyxtskxkim ; /usr/bin/python3'
Dec 06 06:28:41 compute-0 sudo[87803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:41 compute-0 python3[87805]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:41 compute-0 podman[87806]: 2025-12-06 06:28:41.990112386 +0000 UTC m=+0.050735065 container create d713737754f6c38b9ea67488c2df4ef42fc22f0b9e4bb46b655c1dfa00f2587c (image=quay.io/ceph/ceph:v18, name=upbeat_elion, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:42 compute-0 systemd[1]: Started libpod-conmon-d713737754f6c38b9ea67488c2df4ef42fc22f0b9e4bb46b655c1dfa00f2587c.scope.
Dec 06 06:28:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:42 compute-0 podman[87806]: 2025-12-06 06:28:41.969438392 +0000 UTC m=+0.030061091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbd20649b70f5da5129397ef9f3428b65119e8cd211f0d61699fe563761e127/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cbd20649b70f5da5129397ef9f3428b65119e8cd211f0d61699fe563761e127/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:42 compute-0 podman[87806]: 2025-12-06 06:28:42.080018258 +0000 UTC m=+0.140640937 container init d713737754f6c38b9ea67488c2df4ef42fc22f0b9e4bb46b655c1dfa00f2587c (image=quay.io/ceph/ceph:v18, name=upbeat_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:42 compute-0 podman[87806]: 2025-12-06 06:28:42.086008038 +0000 UTC m=+0.146630717 container start d713737754f6c38b9ea67488c2df4ef42fc22f0b9e4bb46b655c1dfa00f2587c (image=quay.io/ceph/ceph:v18, name=upbeat_elion, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:28:42 compute-0 podman[87806]: 2025-12-06 06:28:42.089824126 +0000 UTC m=+0.150446805 container attach d713737754f6c38b9ea67488c2df4ef42fc22f0b9e4bb46b655c1dfa00f2587c (image=quay.io/ceph/ceph:v18, name=upbeat_elion, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 06:28:42 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:42 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:42 compute-0 ceph-mon[74339]: pgmap v86: 5 pgs: 3 active+clean, 2 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2180840478' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 06:28:42 compute-0 ceph-mon[74339]: osdmap e20: 2 total, 1 up, 2 in
Dec 06 06:28:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 06 06:28:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e21 e21: 2 total, 1 up, 2 in
Dec 06 06:28:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 1 up, 2 in
Dec 06 06:28:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:42 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:42 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:28:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec 06 06:28:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/833545241' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 06:28:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:28:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:28:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:28:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:28:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:28:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:28:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:28:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:28:43 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:43 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v89: 6 pgs: 3 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 06 06:28:43 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/833545241' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 06:28:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e22 e22: 2 total, 1 up, 2 in
Dec 06 06:28:43 compute-0 upbeat_elion[87821]: pool 'cephfs.cephfs.data' created
Dec 06 06:28:43 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 1 up, 2 in
Dec 06 06:28:43 compute-0 systemd[1]: libpod-d713737754f6c38b9ea67488c2df4ef42fc22f0b9e4bb46b655c1dfa00f2587c.scope: Deactivated successfully.
Dec 06 06:28:43 compute-0 conmon[87821]: conmon d713737754f6c38b9ea6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d713737754f6c38b9ea67488c2df4ef42fc22f0b9e4bb46b655c1dfa00f2587c.scope/container/memory.events
Dec 06 06:28:43 compute-0 podman[87806]: 2025-12-06 06:28:43.637281157 +0000 UTC m=+1.697903856 container died d713737754f6c38b9ea67488c2df4ef42fc22f0b9e4bb46b655c1dfa00f2587c (image=quay.io/ceph/ceph:v18, name=upbeat_elion, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 06:28:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:43 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:43 compute-0 ceph-mon[74339]: osdmap e21: 2 total, 1 up, 2 in
Dec 06 06:28:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/833545241' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec 06 06:28:43 compute-0 ceph-mon[74339]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:28:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cbd20649b70f5da5129397ef9f3428b65119e8cd211f0d61699fe563761e127-merged.mount: Deactivated successfully.
Dec 06 06:28:43 compute-0 podman[87806]: 2025-12-06 06:28:43.786948169 +0000 UTC m=+1.847570848 container remove d713737754f6c38b9ea67488c2df4ef42fc22f0b9e4bb46b655c1dfa00f2587c (image=quay.io/ceph/ceph:v18, name=upbeat_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 06:28:43 compute-0 systemd[1]: libpod-conmon-d713737754f6c38b9ea67488c2df4ef42fc22f0b9e4bb46b655c1dfa00f2587c.scope: Deactivated successfully.
Dec 06 06:28:43 compute-0 sudo[87803]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:44 compute-0 sudo[87884]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohaeneonmflvfawlzradlbkhpiyxegnd ; /usr/bin/python3'
Dec 06 06:28:44 compute-0 sudo[87884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:44 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:44 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:44 compute-0 python3[87886]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:44 compute-0 podman[87887]: 2025-12-06 06:28:44.281264635 +0000 UTC m=+0.052490775 container create 93b6e5dead4912e2f18e36407e183d8ed0250ff90baa29ee2c87810772a7f791 (image=quay.io/ceph/ceph:v18, name=silly_williams, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 06:28:44 compute-0 systemd[1]: Started libpod-conmon-93b6e5dead4912e2f18e36407e183d8ed0250ff90baa29ee2c87810772a7f791.scope.
Dec 06 06:28:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4df818670ce79a8692b08d018fb49fa0275bf3fff152bb7636e7feb7b40d69/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4df818670ce79a8692b08d018fb49fa0275bf3fff152bb7636e7feb7b40d69/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:44 compute-0 podman[87887]: 2025-12-06 06:28:44.258172093 +0000 UTC m=+0.029398333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:44 compute-0 podman[87887]: 2025-12-06 06:28:44.379144333 +0000 UTC m=+0.150370493 container init 93b6e5dead4912e2f18e36407e183d8ed0250ff90baa29ee2c87810772a7f791 (image=quay.io/ceph/ceph:v18, name=silly_williams, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 06:28:44 compute-0 podman[87887]: 2025-12-06 06:28:44.384835444 +0000 UTC m=+0.156061584 container start 93b6e5dead4912e2f18e36407e183d8ed0250ff90baa29ee2c87810772a7f791 (image=quay.io/ceph/ceph:v18, name=silly_williams, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 06 06:28:44 compute-0 podman[87887]: 2025-12-06 06:28:44.38892149 +0000 UTC m=+0.160147630 container attach 93b6e5dead4912e2f18e36407e183d8ed0250ff90baa29ee2c87810772a7f791 (image=quay.io/ceph/ceph:v18, name=silly_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:28:44 compute-0 ceph-mon[74339]: pgmap v89: 6 pgs: 3 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/833545241' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 06 06:28:44 compute-0 ceph-mon[74339]: osdmap e22: 2 total, 1 up, 2 in
Dec 06 06:28:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Dec 06 06:28:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/443996486' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 06 06:28:45 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:45 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v91: 7 pgs: 3 active+clean, 4 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 06 06:28:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/443996486' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec 06 06:28:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/443996486' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 06 06:28:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e23 e23: 2 total, 1 up, 2 in
Dec 06 06:28:45 compute-0 silly_williams[87902]: enabled application 'rbd' on pool 'vms'
Dec 06 06:28:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 1 up, 2 in
Dec 06 06:28:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:45 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:45 compute-0 systemd[1]: libpod-93b6e5dead4912e2f18e36407e183d8ed0250ff90baa29ee2c87810772a7f791.scope: Deactivated successfully.
Dec 06 06:28:45 compute-0 podman[87887]: 2025-12-06 06:28:45.754330794 +0000 UTC m=+1.525556944 container died 93b6e5dead4912e2f18e36407e183d8ed0250ff90baa29ee2c87810772a7f791 (image=quay.io/ceph/ceph:v18, name=silly_williams, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb4df818670ce79a8692b08d018fb49fa0275bf3fff152bb7636e7feb7b40d69-merged.mount: Deactivated successfully.
Dec 06 06:28:46 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:46 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:46 compute-0 podman[87887]: 2025-12-06 06:28:46.364380943 +0000 UTC m=+2.135607103 container remove 93b6e5dead4912e2f18e36407e183d8ed0250ff90baa29ee2c87810772a7f791 (image=quay.io/ceph/ceph:v18, name=silly_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 06:28:46 compute-0 sudo[87884]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:46 compute-0 systemd[1]: libpod-conmon-93b6e5dead4912e2f18e36407e183d8ed0250ff90baa29ee2c87810772a7f791.scope: Deactivated successfully.
Dec 06 06:28:46 compute-0 sudo[87962]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvoaylwkcgitbeapcgqbimfmhqhpmdr ; /usr/bin/python3'
Dec 06 06:28:46 compute-0 sudo[87962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:46 compute-0 python3[87964]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:46 compute-0 podman[87965]: 2025-12-06 06:28:46.714513802 +0000 UTC m=+0.070198026 container create cb2d3d58c40864ea795d7b97c8dec2dd673dd689ef51a6707de35ad17dc8acc6 (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:46 compute-0 ceph-mon[74339]: pgmap v91: 7 pgs: 3 active+clean, 4 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/443996486' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 06 06:28:46 compute-0 ceph-mon[74339]: osdmap e23: 2 total, 1 up, 2 in
Dec 06 06:28:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:46 compute-0 systemd[1]: Started libpod-conmon-cb2d3d58c40864ea795d7b97c8dec2dd673dd689ef51a6707de35ad17dc8acc6.scope.
Dec 06 06:28:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c26c079e33330fd951fc0f1a03b61e65d890054f005c86c3e84f1362ba210f4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c26c079e33330fd951fc0f1a03b61e65d890054f005c86c3e84f1362ba210f4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:46 compute-0 podman[87965]: 2025-12-06 06:28:46.683463284 +0000 UTC m=+0.039147588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:46 compute-0 podman[87965]: 2025-12-06 06:28:46.7844548 +0000 UTC m=+0.140139044 container init cb2d3d58c40864ea795d7b97c8dec2dd673dd689ef51a6707de35ad17dc8acc6 (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 06:28:46 compute-0 podman[87965]: 2025-12-06 06:28:46.789274935 +0000 UTC m=+0.144959159 container start cb2d3d58c40864ea795d7b97c8dec2dd673dd689ef51a6707de35ad17dc8acc6 (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 06:28:46 compute-0 podman[87965]: 2025-12-06 06:28:46.792643311 +0000 UTC m=+0.148327535 container attach cb2d3d58c40864ea795d7b97c8dec2dd673dd689ef51a6707de35ad17dc8acc6 (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:47 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:47 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Dec 06 06:28:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2646185750' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 06 06:28:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v93: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 06 06:28:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2646185750' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 06 06:28:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2646185750' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 06 06:28:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e24 e24: 2 total, 1 up, 2 in
Dec 06 06:28:47 compute-0 recursing_sinoussi[87981]: enabled application 'rbd' on pool 'volumes'
Dec 06 06:28:47 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 1 up, 2 in
Dec 06 06:28:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:47 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:47 compute-0 systemd[1]: libpod-cb2d3d58c40864ea795d7b97c8dec2dd673dd689ef51a6707de35ad17dc8acc6.scope: Deactivated successfully.
Dec 06 06:28:47 compute-0 podman[87965]: 2025-12-06 06:28:47.775295404 +0000 UTC m=+1.130979628 container died cb2d3d58c40864ea795d7b97c8dec2dd673dd689ef51a6707de35ad17dc8acc6 (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:47 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:28:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:28:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c26c079e33330fd951fc0f1a03b61e65d890054f005c86c3e84f1362ba210f4-merged.mount: Deactivated successfully.
Dec 06 06:28:47 compute-0 podman[87965]: 2025-12-06 06:28:47.84020141 +0000 UTC m=+1.195885634 container remove cb2d3d58c40864ea795d7b97c8dec2dd673dd689ef51a6707de35ad17dc8acc6 (image=quay.io/ceph/ceph:v18, name=recursing_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:47 compute-0 systemd[1]: libpod-conmon-cb2d3d58c40864ea795d7b97c8dec2dd673dd689ef51a6707de35ad17dc8acc6.scope: Deactivated successfully.
Dec 06 06:28:47 compute-0 sudo[87962]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:47 compute-0 sudo[88043]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvrsowynvgdjkjojzgmfmvtermzogdfm ; /usr/bin/python3'
Dec 06 06:28:47 compute-0 sudo[88043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:48 compute-0 python3[88045]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:48 compute-0 podman[88046]: 2025-12-06 06:28:48.180215843 +0000 UTC m=+0.047788122 container create a954964dc105ce504ddb1d62ef6c2272a6601dc60278554fb799d9d358954c12 (image=quay.io/ceph/ceph:v18, name=mystifying_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:28:48 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:48 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:48 compute-0 systemd[1]: Started libpod-conmon-a954964dc105ce504ddb1d62ef6c2272a6601dc60278554fb799d9d358954c12.scope.
Dec 06 06:28:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c53959454e47bd5720a6a2e77fa3037869f3a141c7dfb95cc4c5f2ff8a22c0b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c53959454e47bd5720a6a2e77fa3037869f3a141c7dfb95cc4c5f2ff8a22c0b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:48 compute-0 podman[88046]: 2025-12-06 06:28:48.158289463 +0000 UTC m=+0.025861762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:48 compute-0 podman[88046]: 2025-12-06 06:28:48.267293235 +0000 UTC m=+0.134865534 container init a954964dc105ce504ddb1d62ef6c2272a6601dc60278554fb799d9d358954c12 (image=quay.io/ceph/ceph:v18, name=mystifying_williams, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:48 compute-0 podman[88046]: 2025-12-06 06:28:48.273456339 +0000 UTC m=+0.141028618 container start a954964dc105ce504ddb1d62ef6c2272a6601dc60278554fb799d9d358954c12 (image=quay.io/ceph/ceph:v18, name=mystifying_williams, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:28:48 compute-0 podman[88046]: 2025-12-06 06:28:48.277174405 +0000 UTC m=+0.144746684 container attach a954964dc105ce504ddb1d62ef6c2272a6601dc60278554fb799d9d358954c12 (image=quay.io/ceph/ceph:v18, name=mystifying_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Dec 06 06:28:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Dec 06 06:28:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3098886519' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 06 06:28:48 compute-0 ceph-mon[74339]: pgmap v93: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2646185750' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 06 06:28:48 compute-0 ceph-mon[74339]: osdmap e24: 2 total, 1 up, 2 in
Dec 06 06:28:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:48 compute-0 ceph-mon[74339]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:28:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 06 06:28:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3098886519' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 06 06:28:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e25 e25: 2 total, 1 up, 2 in
Dec 06 06:28:49 compute-0 mystifying_williams[88062]: enabled application 'rbd' on pool 'backups'
Dec 06 06:28:49 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 1 up, 2 in
Dec 06 06:28:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:49 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:49 compute-0 systemd[1]: libpod-a954964dc105ce504ddb1d62ef6c2272a6601dc60278554fb799d9d358954c12.scope: Deactivated successfully.
Dec 06 06:28:49 compute-0 podman[88046]: 2025-12-06 06:28:49.026988004 +0000 UTC m=+0.894560283 container died a954964dc105ce504ddb1d62ef6c2272a6601dc60278554fb799d9d358954c12 (image=quay.io/ceph/ceph:v18, name=mystifying_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 06:28:49 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:49 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c53959454e47bd5720a6a2e77fa3037869f3a141c7dfb95cc4c5f2ff8a22c0b-merged.mount: Deactivated successfully.
Dec 06 06:28:49 compute-0 podman[88046]: 2025-12-06 06:28:49.351003125 +0000 UTC m=+1.218575404 container remove a954964dc105ce504ddb1d62ef6c2272a6601dc60278554fb799d9d358954c12 (image=quay.io/ceph/ceph:v18, name=mystifying_williams, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:49 compute-0 systemd[1]: libpod-conmon-a954964dc105ce504ddb1d62ef6c2272a6601dc60278554fb799d9d358954c12.scope: Deactivated successfully.
Dec 06 06:28:49 compute-0 sudo[88043]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v96: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:49 compute-0 sudo[88123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqnyraetoghukglbmqxeotqmnlmctyim ; /usr/bin/python3'
Dec 06 06:28:49 compute-0 sudo[88123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:49 compute-0 python3[88125]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:49 compute-0 podman[88126]: 2025-12-06 06:28:49.782239968 +0000 UTC m=+0.095794409 container create 3ccfe168338c2805311db75c9aae96587a38cd4003b232b6faf0c7790fb033ea (image=quay.io/ceph/ceph:v18, name=sweet_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Dec 06 06:28:49 compute-0 podman[88126]: 2025-12-06 06:28:49.715413058 +0000 UTC m=+0.028967519 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:49 compute-0 systemd[1]: Started libpod-conmon-3ccfe168338c2805311db75c9aae96587a38cd4003b232b6faf0c7790fb033ea.scope.
Dec 06 06:28:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba2d617abee78b01be5848ce584c6f43ba7b54b07f13813d8a4460e062c72f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba2d617abee78b01be5848ce584c6f43ba7b54b07f13813d8a4460e062c72f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:49 compute-0 podman[88126]: 2025-12-06 06:28:49.901738886 +0000 UTC m=+0.215293347 container init 3ccfe168338c2805311db75c9aae96587a38cd4003b232b6faf0c7790fb033ea (image=quay.io/ceph/ceph:v18, name=sweet_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 06:28:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3098886519' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 06 06:28:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3098886519' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 06 06:28:49 compute-0 ceph-mon[74339]: osdmap e25: 2 total, 1 up, 2 in
Dec 06 06:28:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:49 compute-0 podman[88126]: 2025-12-06 06:28:49.928869653 +0000 UTC m=+0.242424094 container start 3ccfe168338c2805311db75c9aae96587a38cd4003b232b6faf0c7790fb033ea (image=quay.io/ceph/ceph:v18, name=sweet_spence, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:49 compute-0 podman[88126]: 2025-12-06 06:28:49.963273916 +0000 UTC m=+0.276828387 container attach 3ccfe168338c2805311db75c9aae96587a38cd4003b232b6faf0c7790fb033ea (image=quay.io/ceph/ceph:v18, name=sweet_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:50 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:50 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Dec 06 06:28:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/47176689' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 06 06:28:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 06 06:28:51 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:51 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/47176689' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 06 06:28:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e26 e26: 2 total, 1 up, 2 in
Dec 06 06:28:51 compute-0 sweet_spence[88141]: enabled application 'rbd' on pool 'images'
Dec 06 06:28:51 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 1 up, 2 in
Dec 06 06:28:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:51 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:51 compute-0 ceph-mon[74339]: pgmap v96: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/47176689' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 06 06:28:51 compute-0 systemd[1]: libpod-3ccfe168338c2805311db75c9aae96587a38cd4003b232b6faf0c7790fb033ea.scope: Deactivated successfully.
Dec 06 06:28:51 compute-0 podman[88126]: 2025-12-06 06:28:51.260523614 +0000 UTC m=+1.574078055 container died 3ccfe168338c2805311db75c9aae96587a38cd4003b232b6faf0c7790fb033ea (image=quay.io/ceph/ceph:v18, name=sweet_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 06:28:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ba2d617abee78b01be5848ce584c6f43ba7b54b07f13813d8a4460e062c72f9-merged.mount: Deactivated successfully.
Dec 06 06:28:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v98: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:51 compute-0 podman[88126]: 2025-12-06 06:28:51.543387962 +0000 UTC m=+1.856942403 container remove 3ccfe168338c2805311db75c9aae96587a38cd4003b232b6faf0c7790fb033ea (image=quay.io/ceph/ceph:v18, name=sweet_spence, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 06:28:51 compute-0 sudo[88123]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:51 compute-0 systemd[1]: libpod-conmon-3ccfe168338c2805311db75c9aae96587a38cd4003b232b6faf0c7790fb033ea.scope: Deactivated successfully.
Dec 06 06:28:51 compute-0 sudo[88203]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwsmdpcxacsdcijaqudrbifvrmnhfzuw ; /usr/bin/python3'
Dec 06 06:28:51 compute-0 sudo[88203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:51 compute-0 python3[88205]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:51 compute-0 podman[88206]: 2025-12-06 06:28:51.912697683 +0000 UTC m=+0.055454638 container create a0ea1e2c7509894c28bc09c09e0e23c6b548180356adbe0887ab33460ee6367e (image=quay.io/ceph/ceph:v18, name=vigorous_brattain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:51 compute-0 systemd[1]: Started libpod-conmon-a0ea1e2c7509894c28bc09c09e0e23c6b548180356adbe0887ab33460ee6367e.scope.
Dec 06 06:28:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:51 compute-0 podman[88206]: 2025-12-06 06:28:51.888536301 +0000 UTC m=+0.031293286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fa6236b2131d416661be53cc54ba6d6433964b6f261367b771c8107e729a017/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fa6236b2131d416661be53cc54ba6d6433964b6f261367b771c8107e729a017/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:51 compute-0 podman[88206]: 2025-12-06 06:28:51.997200182 +0000 UTC m=+0.139957147 container init a0ea1e2c7509894c28bc09c09e0e23c6b548180356adbe0887ab33460ee6367e (image=quay.io/ceph/ceph:v18, name=vigorous_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:28:52 compute-0 podman[88206]: 2025-12-06 06:28:52.003976905 +0000 UTC m=+0.146733860 container start a0ea1e2c7509894c28bc09c09e0e23c6b548180356adbe0887ab33460ee6367e (image=quay.io/ceph/ceph:v18, name=vigorous_brattain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:28:52 compute-0 podman[88206]: 2025-12-06 06:28:52.007893525 +0000 UTC m=+0.150650490 container attach a0ea1e2c7509894c28bc09c09e0e23c6b548180356adbe0887ab33460ee6367e (image=quay.io/ceph/ceph:v18, name=vigorous_brattain, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 06:28:52 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:52 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/47176689' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 06 06:28:52 compute-0 ceph-mon[74339]: osdmap e26: 2 total, 1 up, 2 in
Dec 06 06:28:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Dec 06 06:28:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2525288051' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 06 06:28:52 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:28:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:28:53 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:53 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 06 06:28:53 compute-0 ceph-mon[74339]: pgmap v98: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2525288051' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 06 06:28:53 compute-0 ceph-mon[74339]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:28:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2525288051' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 06 06:28:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e27 e27: 2 total, 1 up, 2 in
Dec 06 06:28:53 compute-0 vigorous_brattain[88222]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 06 06:28:53 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 1 up, 2 in
Dec 06 06:28:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:53 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:53 compute-0 systemd[1]: libpod-a0ea1e2c7509894c28bc09c09e0e23c6b548180356adbe0887ab33460ee6367e.scope: Deactivated successfully.
Dec 06 06:28:53 compute-0 podman[88206]: 2025-12-06 06:28:53.296452107 +0000 UTC m=+1.439209092 container died a0ea1e2c7509894c28bc09c09e0e23c6b548180356adbe0887ab33460ee6367e (image=quay.io/ceph/ceph:v18, name=vigorous_brattain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 06:28:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fa6236b2131d416661be53cc54ba6d6433964b6f261367b771c8107e729a017-merged.mount: Deactivated successfully.
Dec 06 06:28:53 compute-0 podman[88206]: 2025-12-06 06:28:53.340976686 +0000 UTC m=+1.483733641 container remove a0ea1e2c7509894c28bc09c09e0e23c6b548180356adbe0887ab33460ee6367e (image=quay.io/ceph/ceph:v18, name=vigorous_brattain, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 06:28:53 compute-0 systemd[1]: libpod-conmon-a0ea1e2c7509894c28bc09c09e0e23c6b548180356adbe0887ab33460ee6367e.scope: Deactivated successfully.
Dec 06 06:28:53 compute-0 sudo[88203]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v100: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:53 compute-0 sudo[88284]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apvuufczxvvjddisxbnzuecorqhvdtzc ; /usr/bin/python3'
Dec 06 06:28:53 compute-0 sudo[88284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:53 compute-0 python3[88286]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:53 compute-0 podman[88287]: 2025-12-06 06:28:53.756810494 +0000 UTC m=+0.044358606 container create c0e8744ecb5b30b8eb118f85ed88a953fcb706f46f3af8967fde0fe070d076bc (image=quay.io/ceph/ceph:v18, name=sad_panini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:53 compute-0 systemd[1]: Started libpod-conmon-c0e8744ecb5b30b8eb118f85ed88a953fcb706f46f3af8967fde0fe070d076bc.scope.
Dec 06 06:28:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec99da449f4af25c07e066152007a1dafa74724372787efe758d31f030514e0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec99da449f4af25c07e066152007a1dafa74724372787efe758d31f030514e0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:53 compute-0 podman[88287]: 2025-12-06 06:28:53.825386262 +0000 UTC m=+0.112934384 container init c0e8744ecb5b30b8eb118f85ed88a953fcb706f46f3af8967fde0fe070d076bc (image=quay.io/ceph/ceph:v18, name=sad_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:28:53 compute-0 podman[88287]: 2025-12-06 06:28:53.831864715 +0000 UTC m=+0.119412827 container start c0e8744ecb5b30b8eb118f85ed88a953fcb706f46f3af8967fde0fe070d076bc (image=quay.io/ceph/ceph:v18, name=sad_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:28:53 compute-0 podman[88287]: 2025-12-06 06:28:53.737657802 +0000 UTC m=+0.025205924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:53 compute-0 podman[88287]: 2025-12-06 06:28:53.836135046 +0000 UTC m=+0.123683188 container attach c0e8744ecb5b30b8eb118f85ed88a953fcb706f46f3af8967fde0fe070d076bc (image=quay.io/ceph/ceph:v18, name=sad_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:28:54 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:54 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2525288051' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 06 06:28:54 compute-0 ceph-mon[74339]: osdmap e27: 2 total, 1 up, 2 in
Dec 06 06:28:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:54 compute-0 ceph-mon[74339]: pgmap v100: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Dec 06 06:28:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1991769853' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 06 06:28:55 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:55 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 06 06:28:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1991769853' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 06 06:28:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1991769853' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 06 06:28:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e28 e28: 2 total, 1 up, 2 in
Dec 06 06:28:55 compute-0 sad_panini[88302]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 06 06:28:55 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 1 up, 2 in
Dec 06 06:28:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:55 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:55 compute-0 systemd[1]: libpod-c0e8744ecb5b30b8eb118f85ed88a953fcb706f46f3af8967fde0fe070d076bc.scope: Deactivated successfully.
Dec 06 06:28:55 compute-0 podman[88287]: 2025-12-06 06:28:55.333005078 +0000 UTC m=+1.620553190 container died c0e8744ecb5b30b8eb118f85ed88a953fcb706f46f3af8967fde0fe070d076bc (image=quay.io/ceph/ceph:v18, name=sad_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 06:28:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fec99da449f4af25c07e066152007a1dafa74724372787efe758d31f030514e0-merged.mount: Deactivated successfully.
Dec 06 06:28:55 compute-0 podman[88287]: 2025-12-06 06:28:55.377675361 +0000 UTC m=+1.665223473 container remove c0e8744ecb5b30b8eb118f85ed88a953fcb706f46f3af8967fde0fe070d076bc (image=quay.io/ceph/ceph:v18, name=sad_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 06:28:55 compute-0 systemd[1]: libpod-conmon-c0e8744ecb5b30b8eb118f85ed88a953fcb706f46f3af8967fde0fe070d076bc.scope: Deactivated successfully.
Dec 06 06:28:55 compute-0 sudo[88284]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v102: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:28:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:28:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:28:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:28:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Dec 06 06:28:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 06 06:28:55 compute-0 ceph-mgr[74630]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5248M
Dec 06 06:28:55 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5248M
Dec 06 06:28:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Dec 06 06:28:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:56 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:56 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:56 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:28:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1991769853' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 06 06:28:56 compute-0 ceph-mon[74339]: osdmap e28: 2 total, 1 up, 2 in
Dec 06 06:28:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:56 compute-0 ceph-mon[74339]: pgmap v102: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 06 06:28:56 compute-0 ceph-mon[74339]: Adjusting osd_memory_target on compute-1 to  5248M
Dec 06 06:28:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:28:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:56 compute-0 python3[88414]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:28:56 compute-0 python3[88485]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765002536.1332467-37393-55185050966105/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:28:57 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:57 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:57 compute-0 ceph-mon[74339]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:28:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:57 compute-0 sudo[88585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmyahqjvjrbqmzkxdngvvpdqrubljxkr ; /usr/bin/python3'
Dec 06 06:28:57 compute-0 sudo[88585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v103: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:57 compute-0 python3[88587]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:28:57 compute-0 sudo[88585]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:57 compute-0 sudo[88660]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zesqmyzzhikxszmtwpwlwghtxiacvlcr ; /usr/bin/python3'
Dec 06 06:28:57 compute-0 sudo[88660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:28:57 compute-0 python3[88662]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765002537.1544967-37407-68337073136210/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=9e7f5dcd0396a589d35d04817b71a6674b7eda37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:28:57 compute-0 sudo[88660]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:58 compute-0 sudo[88710]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhucwokyzscrxczacpvkynkyujoxvgvo ; /usr/bin/python3'
Dec 06 06:28:58 compute-0 sudo[88710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:58 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:58 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:58 compute-0 python3[88712]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:58 compute-0 podman[88713]: 2025-12-06 06:28:58.284989442 +0000 UTC m=+0.044613362 container create 0d230f3536121a9fa63b1f8f3c7a992c8d9ff57667a615631d8010368a834775 (image=quay.io/ceph/ceph:v18, name=suspicious_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 06:28:58 compute-0 systemd[1]: Started libpod-conmon-0d230f3536121a9fa63b1f8f3c7a992c8d9ff57667a615631d8010368a834775.scope.
Dec 06 06:28:58 compute-0 ceph-mon[74339]: pgmap v103: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474fe0e94b445fd6b5ca1e274ec8f24d4351d490ee5df25a8b99999a4a7c590b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474fe0e94b445fd6b5ca1e274ec8f24d4351d490ee5df25a8b99999a4a7c590b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/474fe0e94b445fd6b5ca1e274ec8f24d4351d490ee5df25a8b99999a4a7c590b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:58 compute-0 podman[88713]: 2025-12-06 06:28:58.262585819 +0000 UTC m=+0.022209739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:58 compute-0 podman[88713]: 2025-12-06 06:28:58.370919372 +0000 UTC m=+0.130543282 container init 0d230f3536121a9fa63b1f8f3c7a992c8d9ff57667a615631d8010368a834775 (image=quay.io/ceph/ceph:v18, name=suspicious_tesla, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:28:58 compute-0 podman[88713]: 2025-12-06 06:28:58.378494205 +0000 UTC m=+0.138118095 container start 0d230f3536121a9fa63b1f8f3c7a992c8d9ff57667a615631d8010368a834775 (image=quay.io/ceph/ceph:v18, name=suspicious_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 06:28:58 compute-0 podman[88713]: 2025-12-06 06:28:58.382700225 +0000 UTC m=+0.142324115 container attach 0d230f3536121a9fa63b1f8f3c7a992c8d9ff57667a615631d8010368a834775 (image=quay.io/ceph/ceph:v18, name=suspicious_tesla, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 06:28:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec 06 06:28:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1942839260' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 06 06:28:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1942839260' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 06 06:28:58 compute-0 suspicious_tesla[88728]: 
Dec 06 06:28:58 compute-0 suspicious_tesla[88728]: [global]
Dec 06 06:28:58 compute-0 suspicious_tesla[88728]:         fsid = 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:28:58 compute-0 suspicious_tesla[88728]:         mon_host = 192.168.122.100
Dec 06 06:28:58 compute-0 systemd[1]: libpod-0d230f3536121a9fa63b1f8f3c7a992c8d9ff57667a615631d8010368a834775.scope: Deactivated successfully.
Dec 06 06:28:59 compute-0 conmon[88728]: conmon 0d230f3536121a9fa63b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d230f3536121a9fa63b1f8f3c7a992c8d9ff57667a615631d8010368a834775.scope/container/memory.events
Dec 06 06:28:59 compute-0 podman[88713]: 2025-12-06 06:28:59.001478581 +0000 UTC m=+0.761102481 container died 0d230f3536121a9fa63b1f8f3c7a992c8d9ff57667a615631d8010368a834775 (image=quay.io/ceph/ceph:v18, name=suspicious_tesla, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:28:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-474fe0e94b445fd6b5ca1e274ec8f24d4351d490ee5df25a8b99999a4a7c590b-merged.mount: Deactivated successfully.
Dec 06 06:28:59 compute-0 podman[88713]: 2025-12-06 06:28:59.044682702 +0000 UTC m=+0.804306592 container remove 0d230f3536121a9fa63b1f8f3c7a992c8d9ff57667a615631d8010368a834775 (image=quay.io/ceph/ceph:v18, name=suspicious_tesla, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 06:28:59 compute-0 systemd[1]: libpod-conmon-0d230f3536121a9fa63b1f8f3c7a992c8d9ff57667a615631d8010368a834775.scope: Deactivated successfully.
Dec 06 06:28:59 compute-0 sudo[88710]: pam_unix(sudo:session): session closed for user root
Dec 06 06:28:59 compute-0 sudo[88788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehlpispqoznpfvysqfovnzizuqoxfgqx ; /usr/bin/python3'
Dec 06 06:28:59 compute-0 sudo[88788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:28:59 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:28:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:28:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:59 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:28:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1942839260' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 06 06:28:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1942839260' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 06 06:28:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:28:59 compute-0 python3[88790]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:28:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v104: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:28:59 compute-0 podman[88791]: 2025-12-06 06:28:59.415957839 +0000 UTC m=+0.048501943 container create 9f425f5fefcd6c30c9a3b1a93c7d9fa4078b20f37dd68077464e63893a39d066 (image=quay.io/ceph/ceph:v18, name=stoic_panini, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:59 compute-0 systemd[1]: Started libpod-conmon-9f425f5fefcd6c30c9a3b1a93c7d9fa4078b20f37dd68077464e63893a39d066.scope.
Dec 06 06:28:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:28:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10ad8969a6b307c3ab8c48d229d260a07307632e690474386f5b3bcb7a967ca1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10ad8969a6b307c3ab8c48d229d260a07307632e690474386f5b3bcb7a967ca1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10ad8969a6b307c3ab8c48d229d260a07307632e690474386f5b3bcb7a967ca1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:28:59 compute-0 podman[88791]: 2025-12-06 06:28:59.39442476 +0000 UTC m=+0.026968864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:28:59 compute-0 podman[88791]: 2025-12-06 06:28:59.494540501 +0000 UTC m=+0.127084595 container init 9f425f5fefcd6c30c9a3b1a93c7d9fa4078b20f37dd68077464e63893a39d066 (image=quay.io/ceph/ceph:v18, name=stoic_panini, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:28:59 compute-0 podman[88791]: 2025-12-06 06:28:59.505028798 +0000 UTC m=+0.137572892 container start 9f425f5fefcd6c30c9a3b1a93c7d9fa4078b20f37dd68077464e63893a39d066 (image=quay.io/ceph/ceph:v18, name=stoic_panini, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Dec 06 06:28:59 compute-0 podman[88791]: 2025-12-06 06:28:59.508585808 +0000 UTC m=+0.141129912 container attach 9f425f5fefcd6c30c9a3b1a93c7d9fa4078b20f37dd68077464e63893a39d066 (image=quay.io/ceph/ceph:v18, name=stoic_panini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:29:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Dec 06 06:29:00 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4211786825' entity='client.admin' 
Dec 06 06:29:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:00 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:00 compute-0 stoic_panini[88806]: set ssl_option
Dec 06 06:29:00 compute-0 systemd[1]: libpod-9f425f5fefcd6c30c9a3b1a93c7d9fa4078b20f37dd68077464e63893a39d066.scope: Deactivated successfully.
Dec 06 06:29:00 compute-0 podman[88791]: 2025-12-06 06:29:00.239290298 +0000 UTC m=+0.871834382 container died 9f425f5fefcd6c30c9a3b1a93c7d9fa4078b20f37dd68077464e63893a39d066 (image=quay.io/ceph/ceph:v18, name=stoic_panini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:29:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-10ad8969a6b307c3ab8c48d229d260a07307632e690474386f5b3bcb7a967ca1-merged.mount: Deactivated successfully.
Dec 06 06:29:00 compute-0 podman[88791]: 2025-12-06 06:29:00.291267167 +0000 UTC m=+0.923811251 container remove 9f425f5fefcd6c30c9a3b1a93c7d9fa4078b20f37dd68077464e63893a39d066 (image=quay.io/ceph/ceph:v18, name=stoic_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 06:29:00 compute-0 systemd[1]: libpod-conmon-9f425f5fefcd6c30c9a3b1a93c7d9fa4078b20f37dd68077464e63893a39d066.scope: Deactivated successfully.
Dec 06 06:29:00 compute-0 sudo[88788]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:00 compute-0 sudo[88865]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmtnszdadgvjkbazzuxzzujomehnimkp ; /usr/bin/python3'
Dec 06 06:29:00 compute-0 sudo[88865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:00 compute-0 python3[88867]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:29:00 compute-0 podman[88868]: 2025-12-06 06:29:00.664679935 +0000 UTC m=+0.049348986 container create a4c39c3e7b2958027d062d027b5d4771e211ca9528af5ec40d72049e26dd2a76 (image=quay.io/ceph/ceph:v18, name=interesting_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 06:29:00 compute-0 systemd[1]: Started libpod-conmon-a4c39c3e7b2958027d062d027b5d4771e211ca9528af5ec40d72049e26dd2a76.scope.
Dec 06 06:29:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:29:00 compute-0 podman[88868]: 2025-12-06 06:29:00.642652273 +0000 UTC m=+0.027321344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:29:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c289740af95f9d93638969d87012c9651aaa16a9cc7b90595b0c10574aa69c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c289740af95f9d93638969d87012c9651aaa16a9cc7b90595b0c10574aa69c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c289740af95f9d93638969d87012c9651aaa16a9cc7b90595b0c10574aa69c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:00 compute-0 podman[88868]: 2025-12-06 06:29:00.756794879 +0000 UTC m=+0.141463940 container init a4c39c3e7b2958027d062d027b5d4771e211ca9528af5ec40d72049e26dd2a76 (image=quay.io/ceph/ceph:v18, name=interesting_galois, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 06:29:00 compute-0 podman[88868]: 2025-12-06 06:29:00.765016052 +0000 UTC m=+0.149685093 container start a4c39c3e7b2958027d062d027b5d4771e211ca9528af5ec40d72049e26dd2a76 (image=quay.io/ceph/ceph:v18, name=interesting_galois, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 06:29:00 compute-0 podman[88868]: 2025-12-06 06:29:00.770514587 +0000 UTC m=+0.155183668 container attach a4c39c3e7b2958027d062d027b5d4771e211ca9528af5ec40d72049e26dd2a76 (image=quay.io/ceph/ceph:v18, name=interesting_galois, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 06:29:01 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:01 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:01 compute-0 ceph-mon[74339]: pgmap v104: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4211786825' entity='client.admin' 
Dec 06 06:29:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14239 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:29:01 compute-0 ceph-mgr[74630]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 06:29:01 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 06:29:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 06 06:29:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:01 compute-0 ceph-mgr[74630]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Dec 06 06:29:01 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Dec 06 06:29:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Dec 06 06:29:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:01 compute-0 interesting_galois[88884]: Scheduled rgw.rgw update...
Dec 06 06:29:01 compute-0 interesting_galois[88884]: Scheduled ingress.rgw.default update...
Dec 06 06:29:01 compute-0 systemd[1]: libpod-a4c39c3e7b2958027d062d027b5d4771e211ca9528af5ec40d72049e26dd2a76.scope: Deactivated successfully.
Dec 06 06:29:01 compute-0 podman[88868]: 2025-12-06 06:29:01.390859398 +0000 UTC m=+0.775528459 container died a4c39c3e7b2958027d062d027b5d4771e211ca9528af5ec40d72049e26dd2a76 (image=quay.io/ceph/ceph:v18, name=interesting_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 06:29:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v105: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-05c289740af95f9d93638969d87012c9651aaa16a9cc7b90595b0c10574aa69c-merged.mount: Deactivated successfully.
Dec 06 06:29:01 compute-0 podman[88868]: 2025-12-06 06:29:01.443970969 +0000 UTC m=+0.828640020 container remove a4c39c3e7b2958027d062d027b5d4771e211ca9528af5ec40d72049e26dd2a76 (image=quay.io/ceph/ceph:v18, name=interesting_galois, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 06:29:01 compute-0 systemd[1]: libpod-conmon-a4c39c3e7b2958027d062d027b5d4771e211ca9528af5ec40d72049e26dd2a76.scope: Deactivated successfully.
Dec 06 06:29:01 compute-0 sudo[88865]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:02 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:02 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:02 compute-0 python3[88997]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:29:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:29:02 compute-0 python3[89068]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765002542.331245-37448-280283915725840/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:29:03 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:03 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:03 compute-0 ceph-mon[74339]: from='client.14239 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:29:03 compute-0 ceph-mon[74339]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 06:29:03 compute-0 ceph-mon[74339]: Saving service ingress.rgw.default spec with placement count:2
Dec 06 06:29:03 compute-0 ceph-mon[74339]: pgmap v105: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:03 compute-0 sudo[89116]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzbzvkjjfpzmvmxxoczhdvufmscsbpjr ; /usr/bin/python3'
Dec 06 06:29:03 compute-0 sudo[89116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v106: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:03 compute-0 python3[89118]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:29:03 compute-0 podman[89119]: 2025-12-06 06:29:03.51624056 +0000 UTC m=+0.048314747 container create d5aaaae26e7c81745be482a0258349a0ca0a797174ffa80778b63dc19105755f (image=quay.io/ceph/ceph:v18, name=upbeat_ardinghelli, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 06:29:03 compute-0 systemd[1]: Started libpod-conmon-d5aaaae26e7c81745be482a0258349a0ca0a797174ffa80778b63dc19105755f.scope.
Dec 06 06:29:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04646ce8b068ea7feeb5d0395526e631a09c90c15cbe1ee933b1d1e788acd2e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04646ce8b068ea7feeb5d0395526e631a09c90c15cbe1ee933b1d1e788acd2e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04646ce8b068ea7feeb5d0395526e631a09c90c15cbe1ee933b1d1e788acd2e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:03 compute-0 podman[89119]: 2025-12-06 06:29:03.497578563 +0000 UTC m=+0.029652770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:29:03 compute-0 podman[89119]: 2025-12-06 06:29:03.592521947 +0000 UTC m=+0.124596154 container init d5aaaae26e7c81745be482a0258349a0ca0a797174ffa80778b63dc19105755f (image=quay.io/ceph/ceph:v18, name=upbeat_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:29:03 compute-0 podman[89119]: 2025-12-06 06:29:03.598900817 +0000 UTC m=+0.130975014 container start d5aaaae26e7c81745be482a0258349a0ca0a797174ffa80778b63dc19105755f (image=quay.io/ceph/ceph:v18, name=upbeat_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:29:03 compute-0 podman[89119]: 2025-12-06 06:29:03.602304013 +0000 UTC m=+0.134378200 container attach d5aaaae26e7c81745be482a0258349a0ca0a797174ffa80778b63dc19105755f (image=quay.io/ceph/ceph:v18, name=upbeat_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:29:04 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:04 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:04 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14241 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:29:04 compute-0 ceph-mgr[74630]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 06 06:29:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Dec 06 06:29:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 06 06:29:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Dec 06 06:29:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 06 06:29:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Dec 06 06:29:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 06 06:29:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 06 06:29:04 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:04.227+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 06 06:29:04 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 06 06:29:04 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 06 06:29:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 06 06:29:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e2 new map
Dec 06 06:29:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:29:04.228395+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Dec 06 06:29:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e29 e29: 2 total, 1 up, 2 in
Dec 06 06:29:04 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 1 up, 2 in
Dec 06 06:29:04 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 06 06:29:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:04 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 06 06:29:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 06 06:29:04 compute-0 ceph-mgr[74630]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 06:29:04 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 06:29:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 06 06:29:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:04 compute-0 ceph-mgr[74630]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 06 06:29:04 compute-0 systemd[1]: libpod-d5aaaae26e7c81745be482a0258349a0ca0a797174ffa80778b63dc19105755f.scope: Deactivated successfully.
Dec 06 06:29:04 compute-0 podman[89119]: 2025-12-06 06:29:04.276856246 +0000 UTC m=+0.808930443 container died d5aaaae26e7c81745be482a0258349a0ca0a797174ffa80778b63dc19105755f (image=quay.io/ceph/ceph:v18, name=upbeat_ardinghelli, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 06:29:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a04646ce8b068ea7feeb5d0395526e631a09c90c15cbe1ee933b1d1e788acd2e-merged.mount: Deactivated successfully.
Dec 06 06:29:04 compute-0 podman[89119]: 2025-12-06 06:29:04.333500067 +0000 UTC m=+0.865574254 container remove d5aaaae26e7c81745be482a0258349a0ca0a797174ffa80778b63dc19105755f (image=quay.io/ceph/ceph:v18, name=upbeat_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 06:29:04 compute-0 systemd[1]: libpod-conmon-d5aaaae26e7c81745be482a0258349a0ca0a797174ffa80778b63dc19105755f.scope: Deactivated successfully.
Dec 06 06:29:04 compute-0 sudo[89116]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:04 compute-0 sudo[89194]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgnbkmpspynidswsrcwzxidobfhsocie ; /usr/bin/python3'
Dec 06 06:29:04 compute-0 sudo[89194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:04 compute-0 python3[89196]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:29:04 compute-0 podman[89197]: 2025-12-06 06:29:04.74138445 +0000 UTC m=+0.068348824 container create 07ba7f31fb13909ff3d17c68f3e0b345eb47d9f610a0812500cda9b98f61ca57 (image=quay.io/ceph/ceph:v18, name=suspicious_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 06:29:04 compute-0 systemd[1]: Started libpod-conmon-07ba7f31fb13909ff3d17c68f3e0b345eb47d9f610a0812500cda9b98f61ca57.scope.
Dec 06 06:29:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca556e1eac6aff813424a27a92d7f210314bafae0d661ecaf244976167fa630/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca556e1eac6aff813424a27a92d7f210314bafae0d661ecaf244976167fa630/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca556e1eac6aff813424a27a92d7f210314bafae0d661ecaf244976167fa630/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:04 compute-0 podman[89197]: 2025-12-06 06:29:04.802686113 +0000 UTC m=+0.129650487 container init 07ba7f31fb13909ff3d17c68f3e0b345eb47d9f610a0812500cda9b98f61ca57 (image=quay.io/ceph/ceph:v18, name=suspicious_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Dec 06 06:29:04 compute-0 podman[89197]: 2025-12-06 06:29:04.807523629 +0000 UTC m=+0.134488003 container start 07ba7f31fb13909ff3d17c68f3e0b345eb47d9f610a0812500cda9b98f61ca57 (image=quay.io/ceph/ceph:v18, name=suspicious_hawking, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:29:04 compute-0 podman[89197]: 2025-12-06 06:29:04.812663275 +0000 UTC m=+0.139627679 container attach 07ba7f31fb13909ff3d17c68f3e0b345eb47d9f610a0812500cda9b98f61ca57 (image=quay.io/ceph/ceph:v18, name=suspicious_hawking, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:29:04 compute-0 podman[89197]: 2025-12-06 06:29:04.719498311 +0000 UTC m=+0.046462735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:29:05 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:05 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:05 compute-0 ceph-mon[74339]: pgmap v106: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:05 compute-0 ceph-mon[74339]: from='client.14241 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:29:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 06 06:29:05 compute-0 ceph-mon[74339]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 06 06:29:05 compute-0 ceph-mon[74339]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 06 06:29:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 06 06:29:05 compute-0 ceph-mon[74339]: osdmap e29: 2 total, 1 up, 2 in
Dec 06 06:29:05 compute-0 ceph-mon[74339]: fsmap cephfs:0
Dec 06 06:29:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:05 compute-0 ceph-mon[74339]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 06:29:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:05 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14243 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:29:05 compute-0 ceph-mgr[74630]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 06:29:05 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 06:29:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 06 06:29:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:05 compute-0 suspicious_hawking[89212]: Scheduled mds.cephfs update...
Dec 06 06:29:05 compute-0 systemd[1]: libpod-07ba7f31fb13909ff3d17c68f3e0b345eb47d9f610a0812500cda9b98f61ca57.scope: Deactivated successfully.
Dec 06 06:29:05 compute-0 podman[89197]: 2025-12-06 06:29:05.41337764 +0000 UTC m=+0.740342014 container died 07ba7f31fb13909ff3d17c68f3e0b345eb47d9f610a0812500cda9b98f61ca57 (image=quay.io/ceph/ceph:v18, name=suspicious_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:29:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v108: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ca556e1eac6aff813424a27a92d7f210314bafae0d661ecaf244976167fa630-merged.mount: Deactivated successfully.
Dec 06 06:29:05 compute-0 podman[89197]: 2025-12-06 06:29:05.455746107 +0000 UTC m=+0.782710481 container remove 07ba7f31fb13909ff3d17c68f3e0b345eb47d9f610a0812500cda9b98f61ca57 (image=quay.io/ceph/ceph:v18, name=suspicious_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 06:29:05 compute-0 systemd[1]: libpod-conmon-07ba7f31fb13909ff3d17c68f3e0b345eb47d9f610a0812500cda9b98f61ca57.scope: Deactivated successfully.
Dec 06 06:29:05 compute-0 sudo[89194]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:06 compute-0 sudo[89324]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymlmjubnfwslxubrzqcewervlxjuehqf ; /usr/bin/python3'
Dec 06 06:29:06 compute-0 sudo[89324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:06 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:06 compute-0 python3[89326]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:29:06 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:06 compute-0 sudo[89324]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:06 compute-0 ceph-mon[74339]: from='client.14243 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 06:29:06 compute-0 ceph-mon[74339]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 06 06:29:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:06 compute-0 ceph-mon[74339]: pgmap v108: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:06 compute-0 sudo[89397]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izfuzysicqexyxdwzajrczigjcegsztk ; /usr/bin/python3'
Dec 06 06:29:06 compute-0 sudo[89397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:06 compute-0 python3[89399]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765002545.8875353-37478-15977713618921/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=eb306234ce11ca94053ba9deb99a6e4ceca2e349 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:29:06 compute-0 sudo[89397]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:06 compute-0 sudo[89447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujrgpofbyptnnuluuorfxlaevyfgnxeu ; /usr/bin/python3'
Dec 06 06:29:06 compute-0 sudo[89447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:07 compute-0 python3[89449]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:29:07 compute-0 podman[89450]: 2025-12-06 06:29:07.137408154 +0000 UTC m=+0.055445569 container create 5105564ea9e5a97791ad1f414071884de68fe4baca1db22fd74785dbf3855fb8 (image=quay.io/ceph/ceph:v18, name=nifty_einstein, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 06:29:07 compute-0 systemd[1]: Started libpod-conmon-5105564ea9e5a97791ad1f414071884de68fe4baca1db22fd74785dbf3855fb8.scope.
Dec 06 06:29:07 compute-0 podman[89450]: 2025-12-06 06:29:07.114186567 +0000 UTC m=+0.032224012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:29:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3b8d18c6e05e25052daafff5e98b67adec0ddfd013a732e5750e75e62a538b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b3b8d18c6e05e25052daafff5e98b67adec0ddfd013a732e5750e75e62a538b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:07 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:07 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:07 compute-0 podman[89450]: 2025-12-06 06:29:07.228665084 +0000 UTC m=+0.146702499 container init 5105564ea9e5a97791ad1f414071884de68fe4baca1db22fd74785dbf3855fb8 (image=quay.io/ceph/ceph:v18, name=nifty_einstein, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:29:07 compute-0 podman[89450]: 2025-12-06 06:29:07.235202489 +0000 UTC m=+0.153239884 container start 5105564ea9e5a97791ad1f414071884de68fe4baca1db22fd74785dbf3855fb8 (image=quay.io/ceph/ceph:v18, name=nifty_einstein, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 06:29:07 compute-0 podman[89450]: 2025-12-06 06:29:07.238461841 +0000 UTC m=+0.156499256 container attach 5105564ea9e5a97791ad1f414071884de68fe4baca1db22fd74785dbf3855fb8 (image=quay.io/ceph/ceph:v18, name=nifty_einstein, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 06:29:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v109: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:29:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Dec 06 06:29:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1496796207' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec 06 06:29:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1496796207' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 06 06:29:07 compute-0 systemd[1]: libpod-5105564ea9e5a97791ad1f414071884de68fe4baca1db22fd74785dbf3855fb8.scope: Deactivated successfully.
Dec 06 06:29:07 compute-0 conmon[89465]: conmon 5105564ea9e5a97791ad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5105564ea9e5a97791ad1f414071884de68fe4baca1db22fd74785dbf3855fb8.scope/container/memory.events
Dec 06 06:29:07 compute-0 podman[89450]: 2025-12-06 06:29:07.862250598 +0000 UTC m=+0.780288013 container died 5105564ea9e5a97791ad1f414071884de68fe4baca1db22fd74785dbf3855fb8 (image=quay.io/ceph/ceph:v18, name=nifty_einstein, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:29:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b3b8d18c6e05e25052daafff5e98b67adec0ddfd013a732e5750e75e62a538b-merged.mount: Deactivated successfully.
Dec 06 06:29:07 compute-0 podman[89450]: 2025-12-06 06:29:07.90794928 +0000 UTC m=+0.825986675 container remove 5105564ea9e5a97791ad1f414071884de68fe4baca1db22fd74785dbf3855fb8 (image=quay.io/ceph/ceph:v18, name=nifty_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:29:07 compute-0 systemd[1]: libpod-conmon-5105564ea9e5a97791ad1f414071884de68fe4baca1db22fd74785dbf3855fb8.scope: Deactivated successfully.
Dec 06 06:29:07 compute-0 sudo[89447]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:08 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:08 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:08 compute-0 ceph-mon[74339]: pgmap v109: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1496796207' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec 06 06:29:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1496796207' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 06 06:29:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:08 compute-0 sudo[89524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dngmfszgcbwrlwlkgculvfmfnumaciuv ; /usr/bin/python3'
Dec 06 06:29:08 compute-0 sudo[89524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:08 compute-0 python3[89526]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:29:08 compute-0 podman[89528]: 2025-12-06 06:29:08.756761569 +0000 UTC m=+0.049327766 container create c8d42b32367ebbe2b75bb2d6cff40df75c2d4f992801e9c765139c1430011abf (image=quay.io/ceph/ceph:v18, name=heuristic_mirzakhani, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:29:08 compute-0 systemd[1]: Started libpod-conmon-c8d42b32367ebbe2b75bb2d6cff40df75c2d4f992801e9c765139c1430011abf.scope.
Dec 06 06:29:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966aac1a24aa268882158d78cc7945217a2a3e067c670165ae69aab9e89cac7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/966aac1a24aa268882158d78cc7945217a2a3e067c670165ae69aab9e89cac7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:08 compute-0 podman[89528]: 2025-12-06 06:29:08.734429937 +0000 UTC m=+0.026996124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:29:08 compute-0 podman[89528]: 2025-12-06 06:29:08.830883975 +0000 UTC m=+0.123450182 container init c8d42b32367ebbe2b75bb2d6cff40df75c2d4f992801e9c765139c1430011abf (image=quay.io/ceph/ceph:v18, name=heuristic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:29:08 compute-0 podman[89528]: 2025-12-06 06:29:08.83849674 +0000 UTC m=+0.131062927 container start c8d42b32367ebbe2b75bb2d6cff40df75c2d4f992801e9c765139c1430011abf (image=quay.io/ceph/ceph:v18, name=heuristic_mirzakhani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:29:08 compute-0 podman[89528]: 2025-12-06 06:29:08.842912705 +0000 UTC m=+0.135478912 container attach c8d42b32367ebbe2b75bb2d6cff40df75c2d4f992801e9c765139c1430011abf (image=quay.io/ceph/ceph:v18, name=heuristic_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 06:29:09 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:09 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v110: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 06 06:29:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/325966409' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:29:09 compute-0 heuristic_mirzakhani[89545]: 
Dec 06 06:29:09 compute-0 heuristic_mirzakhani[89545]: {"fsid":"40a1bae4-cf76-5610-8dab-c75116dfe0bb","health":{"status":"HEALTH_ERR","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false},"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":226,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":29,"num_osds":2,"num_up_osds":1,"osd_up_since":1765002500,"num_in_osds":2,"osd_in_since":1765002483,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":4},{"state_name":"unknown","count":3}],"num_pgs":7,"num_pools":7,"num_objects":0,"data_bytes":0,"bytes_used":28217344,"bytes_avail":7483781120,"bytes_total":7511998464,"unknown_pgs_ratio":0.4285714328289032},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-06T06:27:14.826183+0000","services":{}},"progress_events":{}}
Dec 06 06:29:09 compute-0 systemd[1]: libpod-c8d42b32367ebbe2b75bb2d6cff40df75c2d4f992801e9c765139c1430011abf.scope: Deactivated successfully.
Dec 06 06:29:09 compute-0 podman[89528]: 2025-12-06 06:29:09.485917974 +0000 UTC m=+0.778484181 container died c8d42b32367ebbe2b75bb2d6cff40df75c2d4f992801e9c765139c1430011abf (image=quay.io/ceph/ceph:v18, name=heuristic_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-966aac1a24aa268882158d78cc7945217a2a3e067c670165ae69aab9e89cac7b-merged.mount: Deactivated successfully.
Dec 06 06:29:09 compute-0 podman[89528]: 2025-12-06 06:29:09.547483015 +0000 UTC m=+0.840049202 container remove c8d42b32367ebbe2b75bb2d6cff40df75c2d4f992801e9c765139c1430011abf (image=quay.io/ceph/ceph:v18, name=heuristic_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 06:29:09 compute-0 systemd[1]: libpod-conmon-c8d42b32367ebbe2b75bb2d6cff40df75c2d4f992801e9c765139c1430011abf.scope: Deactivated successfully.
Dec 06 06:29:09 compute-0 sudo[89524]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:10 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:10 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:10 compute-0 ceph-mon[74339]: pgmap v110: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/325966409' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:29:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:11 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:11 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v111: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:12 compute-0 ceph-mon[74339]: pgmap v111: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:29:12
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Some PGs (0.428571) are unknown; try again later
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.012322899218313792 quantized to 1 (current 1)
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 7511998464
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 06 06:29:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Dec 06 06:29:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:29:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:29:13 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:13 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v112: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 06 06:29:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e30 e30: 2 total, 1 up, 2 in
Dec 06 06:29:13 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 1 up, 2 in
Dec 06 06:29:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:13 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:13 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 244fc3db-129f-49ed-8b6c-c498b8ffd9e4 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 06 06:29:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Dec 06 06:29:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:14 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:14 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 06 06:29:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e31 e31: 2 total, 1 up, 2 in
Dec 06 06:29:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 1 up, 2 in
Dec 06 06:29:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:14 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:14 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 7fbf73e4-8f08-4c85-b6e9-e71bac7da5bf (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 06 06:29:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Dec 06 06:29:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:14 compute-0 ceph-mon[74339]: pgmap v112: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:14 compute-0 ceph-mon[74339]: osdmap e30: 2 total, 1 up, 2 in
Dec 06 06:29:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:15 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:15 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v115: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e32 e32: 2 total, 1 up, 2 in
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 1 up, 2 in
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:15 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:15 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev ba12da22-db28-4364-8c46-ecb68243885f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 06 06:29:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 32 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=32 pruub=14.907751083s) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active pruub 76.652984619s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 32 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=32 pruub=14.907751083s) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown pruub 76.652984619s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:15 compute-0 ceph-mon[74339]: osdmap e31: 2 total, 1 up, 2 in
Dec 06 06:29:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:15 compute-0 ceph-mon[74339]: osdmap e32: 2 total, 1 up, 2 in
Dec 06 06:29:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:29:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:29:15 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 06 06:29:15 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 06 06:29:16 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:16 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 06 06:29:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e33 e33: 2 total, 1 up, 2 in
Dec 06 06:29:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 1 up, 2 in
Dec 06 06:29:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:16 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:16 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 9f025c40-349c-4285-957d-3f1fb865e94c (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1f( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1c( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1e( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1b( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.19( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.8( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.4( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.3( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.5( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.6( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.7( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.2( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.a( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.b( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.c( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.d( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.f( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.10( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.12( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.13( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.16( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.14( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.18( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.17( empty local-lis/les=14/15 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Dec 06 06:29:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec 06 06:29:16 compute-0 ceph-mon[74339]: pgmap v115: 7 pgs: 4 active+clean, 3 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 06:29:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:29:16 compute-0 ceph-mon[74339]: Updating compute-2:/etc/ceph/ceph.conf
Dec 06 06:29:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:16 compute-0 ceph-mon[74339]: osdmap e33: 2 total, 1 up, 2 in
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1c( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1f( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.19( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1e( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1b( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.8( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.4( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.3( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.7( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.5( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.0( empty local-lis/les=32/33 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.6( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.2( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.1( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.b( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.a( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.c( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.d( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.f( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.10( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.13( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.12( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.16( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.18( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.14( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 33 pg[3.17( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=14/14 les/c/f=15/15/0 sis=32) [0] r=0 lpr=32 pi=[14,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:16 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:29:16 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:29:16 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 06 06:29:16 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 06 06:29:17 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:17 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v118: 38 pgs: 1 peering, 3 active+clean, 34 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 06 06:29:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec 06 06:29:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e34 e34: 2 total, 1 up, 2 in
Dec 06 06:29:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e34: 2 total, 1 up, 2 in
Dec 06 06:29:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:17 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:17 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 143f6f59-1538-4206-9581-11ed69cc2150 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec 06 06:29:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Dec 06 06:29:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:17 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 34 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=34 pruub=9.938682556s) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active pruub 73.696853638s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:17 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 34 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=14.879796982s) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active pruub 78.638336182s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:17 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 34 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=34 pruub=9.938682556s) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown pruub 73.696853638s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:17 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 34 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=14.879796982s) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown pruub 78.638336182s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec 06 06:29:17 compute-0 ceph-mon[74339]: Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:29:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec 06 06:29:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:17 compute-0 ceph-mon[74339]: osdmap e34: 2 total, 1 up, 2 in
Dec 06 06:29:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:29:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:29:17 compute-0 ceph-mgr[74630]: [progress WARNING root] Starting Global Recovery Event,97 pgs not in active + clean state
Dec 06 06:29:17 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 06:29:17 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 06 06:29:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e35 e35: 2 total, 1 up, 2 in
Dec 06 06:29:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e35: 2 total, 1 up, 2 in
Dec 06 06:29:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev ffaa72db-78c6-4bdb-b482-422fb4428fdf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 244fc3db-129f-49ed-8b6c-c498b8ffd9e4 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 244fc3db-129f-49ed-8b6c-c498b8ffd9e4 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 7fbf73e4-8f08-4c85-b6e9-e71bac7da5bf (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 7fbf73e4-8f08-4c85-b6e9-e71bac7da5bf (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev ba12da22-db28-4364-8c46-ecb68243885f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event ba12da22-db28-4364-8c46-ecb68243885f (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 9f025c40-349c-4285-957d-3f1fb865e94c (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 9f025c40-349c-4285-957d-3f1fb865e94c (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 143f6f59-1538-4206-9581-11ed69cc2150 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 143f6f59-1538-4206-9581-11ed69cc2150 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev ffaa72db-78c6-4bdb-b482-422fb4428fdf (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event ffaa72db-78c6-4bdb-b482-422fb4428fdf (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1f( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.10( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.10( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1e( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.11( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.13( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.12( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.12( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.13( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.15( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.14( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.14( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.15( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.11( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.16( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.17( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.9( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.16( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.8( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.17( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.8( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.9( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.b( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.a( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.b( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.d( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.c( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.c( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.a( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.f( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.e( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.5( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.4( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.d( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.6( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.7( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.2( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.7( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.6( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.3( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.5( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.4( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.2( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.3( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.e( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.f( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1f( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1e( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1c( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1d( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1d( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1c( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1a( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1b( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1a( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1b( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.18( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.19( empty local-lis/les=18/19 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.18( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.19( empty local-lis/les=16/17 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.10( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.11( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1f( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.13( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.12( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.12( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.13( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.15( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.14( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.10( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1e( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.11( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.17( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.17( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.16( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.9( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.15( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.8( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.9( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.8( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.a( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.d( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.14( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.b( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.c( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.a( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.b( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.e( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.c( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.f( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.4( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.7( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.5( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.d( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=34/35 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.16( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.0( empty local-lis/les=34/35 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.6( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.7( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.2( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.6( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.5( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.3( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.2( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.4( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.e( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.f( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1e( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1f( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1d( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1c( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1c( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1a( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1a( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.3( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1d( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-mon[74339]: 3.1 scrub starts
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.18( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.1b( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.1b( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[5.19( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=18/18 les/c/f=19/19/0 sis=34) [0] r=0 lpr=34 pi=[18,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.18( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 35 pg[4.19( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=16/16 les/c/f=17/17/0 sis=34) [0] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:18 compute-0 ceph-mon[74339]: 3.1 scrub ok
Dec 06 06:29:18 compute-0 ceph-mon[74339]: pgmap v118: 38 pgs: 1 peering, 3 active+clean, 34 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:18 compute-0 ceph-mon[74339]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 06 06:29:18 compute-0 ceph-mon[74339]: OSD bench result of 1657.429499 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 06 06:29:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:29:18 compute-0 ceph-mon[74339]: osdmap e35: 2 total, 1 up, 2 in
Dec 06 06:29:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:18 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 06 06:29:18 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec 06 06:29:18 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec 06 06:29:19 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2683792420; not ready for session (expect reconnect)
Dec 06 06:29:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:19 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 06 06:29:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v121: 100 pgs: 1 peering, 3 active+clean, 96 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Dec 06 06:29:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec 06 06:29:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 06 06:29:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec 06 06:29:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e36 e36: 2 total, 2 up, 2 in
Dec 06 06:29:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/2683792420,v1:192.168.122.101:6801/2683792420] boot
Dec 06 06:29:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e36: 2 total, 2 up, 2 in
Dec 06 06:29:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec 06 06:29:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:19 compute-0 ceph-mon[74339]: Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.client.admin.keyring
Dec 06 06:29:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec 06 06:29:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec 06 06:29:19 compute-0 ceph-mon[74339]: osd.1 [v2:192.168.122.101:6800/2683792420,v1:192.168.122.101:6801/2683792420] boot
Dec 06 06:29:19 compute-0 ceph-mon[74339]: osdmap e36: 2 total, 2 up, 2 in
Dec 06 06:29:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 06 06:29:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:29:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:29:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:29:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v123: 115 pgs: 1 peering, 3 active+clean, 111 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:20 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 39c9692b-0c2a-42fa-812f-98fc47ab0ea2 (Updating mon deployment (+2 -> 3))
Dec 06 06:29:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Dec 06 06:29:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:29:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Dec 06 06:29:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:29:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:29:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:20 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Dec 06 06:29:20 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Dec 06 06:29:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 06 06:29:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e37 e37: 2 total, 2 up, 2 in
Dec 06 06:29:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e37: 2 total, 2 up, 2 in
Dec 06 06:29:20 compute-0 ceph-mon[74339]: 3.2 scrub starts
Dec 06 06:29:20 compute-0 ceph-mon[74339]: 3.2 scrub ok
Dec 06 06:29:20 compute-0 ceph-mon[74339]: pgmap v121: 100 pgs: 1 peering, 3 active+clean, 96 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:20 compute-0 ceph-mon[74339]: pgmap v123: 115 pgs: 1 peering, 3 active+clean, 111 unknown; 0 B data, 27 MiB used, 7.0 GiB / 7.0 GiB avail
Dec 06 06:29:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:29:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:29:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:20 compute-0 ceph-mon[74339]: Deploying daemon mon.compute-2 on compute-2
Dec 06 06:29:20 compute-0 ceph-mon[74339]: osdmap e37: 2 total, 2 up, 2 in
Dec 06 06:29:21 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 06 06:29:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 06 06:29:21 compute-0 ceph-mon[74339]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 06 06:29:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e38 e38: 2 total, 2 up, 2 in
Dec 06 06:29:21 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e38: 2 total, 2 up, 2 in
Dec 06 06:29:21 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 06 06:29:21 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 06 06:29:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v126: 115 pgs: 2 creating+peering, 97 active+clean, 16 unknown; 0 B data, 454 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
Dec 06 06:29:22 compute-0 ceph-mon[74339]: osdmap e38: 2 total, 2 up, 2 in
Dec 06 06:29:22 compute-0 ceph-mon[74339]: pgmap v126: 115 pgs: 2 creating+peering, 97 active+clean, 16 unknown; 0 B data, 454 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:29:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:29:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 36 pg[1.0( v 10'32 (0'0,10'32] local-lis/les=9/10 n=2 ec=9/9 lis/c=9/9 les/c/f=10/10/0 sis=36) [1] r=-1 lpr=36 pi=[9,36)/1 crt=10'32 lcod 10'31 mlcod 0'0 unknown NOTIFY mbc={}] start_peering_interval up [] -> [1], acting [] -> [1], acting_primary ? -> 1, up_primary ? -> 1, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 36 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=36 pruub=15.688816071s) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active pruub 84.850433350s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[1.0( v 10'32 (0'0,10'32] local-lis/les=9/10 n=2 ec=9/9 lis/c=9/9 les/c/f=10/10/0 sis=36) [1] r=-1 lpr=36 pi=[9,36)/1 crt=10'32 lcod 10'31 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=36 pruub=15.688816071s) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown pruub 84.850433350s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:29:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:29:22 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 8 completed events
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=20/21 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:29:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:29:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:29:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:23 compute-0 ceph-mon[74339]: 3.3 scrub starts
Dec 06 06:29:23 compute-0 ceph-mon[74339]: 3.3 scrub ok
Dec 06 06:29:23 compute-0 ceph-mon[74339]: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
Dec 06 06:29:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 06 06:29:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e39 e39: 2 total, 2 up, 2 in
Dec 06 06:29:23 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e39: 2 total, 2 up, 2 in
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.a( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.5( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.3( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.7( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=36/39 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.9( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:23 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=20/20 les/c/f=21/21/0 sis=36) [0] r=0 lpr=36 pi=[20,36)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v128: 115 pgs: 2 creating+peering, 97 active+clean, 16 unknown; 0 B data, 454 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:24 compute-0 ceph-mon[74339]: osdmap e39: 2 total, 2 up, 2 in
Dec 06 06:29:24 compute-0 ceph-mon[74339]: pgmap v128: 115 pgs: 2 creating+peering, 97 active+clean, 16 unknown; 0 B data, 454 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:24 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 06 06:29:24 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 06 06:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:29:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:29:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 06 06:29:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 06 06:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 06 06:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Dec 06 06:29:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Dec 06 06:29:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:29:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:24 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Dec 06 06:29:24 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Dec 06 06:29:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec 06 06:29:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Dec 06 06:29:25 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3761752723; not ready for session (expect reconnect)
Dec 06 06:29:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Dec 06 06:29:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:25 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Dec 06 06:29:25 compute-0 ceph-mon[74339]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec 06 06:29:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 06:29:25 compute-0 ceph-mon[74339]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Dec 06 06:29:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:25 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 06:29:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:29:25 compute-0 ceph-mon[74339]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Dec 06 06:29:25 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:29:25 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 06 06:29:25 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 06 06:29:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v129: 115 pgs: 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:26 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:26 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:26 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:26 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Dec 06 06:29:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 06:29:26 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:26 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:26 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3761752723; not ready for session (expect reconnect)
Dec 06 06:29:26 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Dec 06 06:29:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:26 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 06:29:27 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3761752723; not ready for session (expect reconnect)
Dec 06 06:29:27 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Dec 06 06:29:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:27 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 06:29:27 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 8a97e94a-68e7-4a6f-bdd0-ea02185767d5 (Global Recovery Event) in 10 seconds
Dec 06 06:29:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v130: 115 pgs: 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:28 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:28 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:28 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:28 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Dec 06 06:29:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 06:29:28 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:28 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:28 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3761752723; not ready for session (expect reconnect)
Dec 06 06:29:28 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Dec 06 06:29:28 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:28 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 06:29:28 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 06 06:29:28 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 06 06:29:29 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3761752723; not ready for session (expect reconnect)
Dec 06 06:29:29 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Dec 06 06:29:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:29 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 06:29:29 compute-0 sudo[89608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fahktxmsrglimukousazuglrvrabntsb ; /usr/bin/python3'
Dec 06 06:29:29 compute-0 sudo[89608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:29 compute-0 python3[89610]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:29:29 compute-0 podman[89612]: 2025-12-06 06:29:29.883066179 +0000 UTC m=+0.047994058 container create ee5ab9bc04a45064088d59080e95fa207f4878ccffe88c77016402b5428a5f2c (image=quay.io/ceph/ceph:v18, name=quirky_hypatia, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:29:29 compute-0 systemd[1]: Started libpod-conmon-ee5ab9bc04a45064088d59080e95fa207f4878ccffe88c77016402b5428a5f2c.scope.
Dec 06 06:29:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:29:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8a44f6c7be4f9939cfda7b6802ed1d614662d54f5cd5b24e7aed077c554ec7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8a44f6c7be4f9939cfda7b6802ed1d614662d54f5cd5b24e7aed077c554ec7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:29 compute-0 podman[89612]: 2025-12-06 06:29:29.952922794 +0000 UTC m=+0.117850683 container init ee5ab9bc04a45064088d59080e95fa207f4878ccffe88c77016402b5428a5f2c (image=quay.io/ceph/ceph:v18, name=quirky_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:29:29 compute-0 podman[89612]: 2025-12-06 06:29:29.861076148 +0000 UTC m=+0.026004047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:29:29 compute-0 podman[89612]: 2025-12-06 06:29:29.959018107 +0000 UTC m=+0.123946006 container start ee5ab9bc04a45064088d59080e95fa207f4878ccffe88c77016402b5428a5f2c (image=quay.io/ceph/ceph:v18, name=quirky_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:29:29 compute-0 podman[89612]: 2025-12-06 06:29:29.962478265 +0000 UTC m=+0.127406164 container attach ee5ab9bc04a45064088d59080e95fa207f4878ccffe88c77016402b5428a5f2c (image=quay.io/ceph/ceph:v18, name=quirky_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:29:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v131: 115 pgs: 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Dec 06 06:29:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 06 06:29:30 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3761752723; not ready for session (expect reconnect)
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Dec 06 06:29:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:30 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 06:29:30 compute-0 ceph-mon[74339]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 06 06:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:29:31 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3761752723; not ready for session (expect reconnect)
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e39: 2 total, 2 up, 2 in
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.sfzyix(active, since 3m)
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds; Reduced data availability: 1 pg inactive
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] : [WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] :     pg 1.0 is stuck inactive for 62s, current state unknown, last acting []
Dec 06 06:29:31 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:31.573+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds; Reduced data availability: 1 pg inactive
Dec 06 06:29:31 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:31.573+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Dec 06 06:29:31 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:31.573+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Dec 06 06:29:31 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:31.573+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:29:31 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:31.573+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Dec 06 06:29:31 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:31.573+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] : [WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
Dec 06 06:29:31 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:31.573+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] :     pg 1.0 is stuck inactive for 62s, current state unknown, last acting []
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:31 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 39c9692b-0c2a-42fa-812f-98fc47ab0ea2 (Updating mon deployment (+2 -> 3))
Dec 06 06:29:31 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 39c9692b-0c2a-42fa-812f-98fc47ab0ea2 (Updating mon deployment (+2 -> 3)) in 12 seconds
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:31 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 58afc546-b071-463b-89a6-621c15f26679 (Updating mgr deployment (+2 -> 3))
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive)
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: 3.4 scrub starts
Dec 06 06:29:31 compute-0 ceph-mon[74339]: 3.4 scrub ok
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 06:29:31 compute-0 ceph-mon[74339]: 3.5 scrub starts
Dec 06 06:29:31 compute-0 ceph-mon[74339]: 3.5 scrub ok
Dec 06 06:29:31 compute-0 ceph-mon[74339]: pgmap v129: 115 pgs: 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 06:29:31 compute-0 ceph-mon[74339]: pgmap v130: 115 pgs: 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: 3.6 scrub starts
Dec 06 06:29:31 compute-0 ceph-mon[74339]: 3.6 scrub ok
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: pgmap v131: 115 pgs: 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-mon[74339]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:29:31 compute-0 ceph-mon[74339]: fsmap cephfs:0
Dec 06 06:29:31 compute-0 ceph-mon[74339]: osdmap e39: 2 total, 2 up, 2 in
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mgrmap e9: compute-0.sfzyix(active, since 3m)
Dec 06 06:29:31 compute-0 ceph-mon[74339]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds; Reduced data availability: 1 pg inactive
Dec 06 06:29:31 compute-0 ceph-mon[74339]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Dec 06 06:29:31 compute-0 ceph-mon[74339]:     fs cephfs is offline because no MDS is active for it.
Dec 06 06:29:31 compute-0 ceph-mon[74339]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:29:31 compute-0 ceph-mon[74339]:     fs cephfs has 0 MDS online, but wants 1
Dec 06 06:29:31 compute-0 ceph-mon[74339]: [WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
Dec 06 06:29:31 compute-0 ceph-mon[74339]:     pg 1.0 is stuck inactive for 62s, current state unknown, last acting []
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e40 e40: 2 total, 2 up, 2 in
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e40: 2 total, 2 up, 2 in
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.11( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.879986763s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.780891418s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.11( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.879914284s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.780891418s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.10( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.879607201s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.780860901s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.10( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.879589081s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.780860901s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.16( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.860898972s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.762268066s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.16( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.860877991s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.762268066s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.860768318s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.762268066s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.860747337s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.762268066s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.14( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.860671043s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.762313843s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.14( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.860640526s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.762313843s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.15( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.879034042s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.780830383s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.15( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.879012108s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.780830383s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.878166199s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.780143738s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.878003120s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.780143738s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.878353119s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.780990601s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.878327370s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.780990601s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.11( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.859452248s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.762168884s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.878006935s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.780754089s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.16( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.879113197s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.781929016s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.13( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877947807s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.780754089s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.11( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.859371185s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.762168884s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.16( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.879086494s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.781929016s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.10( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.859298706s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.762245178s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.10( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.859273911s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.762245178s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.9( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877831459s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.780929565s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.a( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.945768356s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 93.848869324s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877920151s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.781036377s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.9( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877799988s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.780929565s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877899170s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.781036377s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.a( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.945727348s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.848869324s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.e( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.858819008s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.762107849s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.e( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.858799934s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.762107849s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877658844s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.780998230s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.945510864s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 93.848854065s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.945471764s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.848854065s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877618790s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.780998230s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877526283s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.781036377s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.d( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.858671188s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.762184143s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.a( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877499580s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.781036377s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.d( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.858642578s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.762184143s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.c( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.858555794s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.762153625s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.c( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.858524323s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.762153625s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.f( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.859045029s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.762191772s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.948298454s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 93.851982117s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.948276520s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.851982117s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877875328s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.781593323s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.f( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.858492851s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.762191772s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.c( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877844810s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.781593323s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.a( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.858264923s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.762092590s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.a( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.858247757s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.762092590s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877942085s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.781814575s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.f( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877709389s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.781600952s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.d( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877910614s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.781814575s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.f( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877684593s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.781600952s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.858076096s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.762031555s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.858059883s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.762031555s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877428055s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.781509399s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.7( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.947926521s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 93.852043152s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877537727s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.781654358s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.e( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877403259s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.781509399s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877513885s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.781654358s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.7( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.947889328s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.852043152s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.5( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.947918892s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 93.852157593s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877343178s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.781600952s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.5( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.947880745s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.852157593s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.5( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877320290s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.781600952s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.1( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877607346s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.781982422s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.947811127s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 93.852203369s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.1( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877571106s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.781982422s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.947777748s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.852203369s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.3( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.947798729s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 93.852272034s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.3( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.947779655s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.852272034s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877417564s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.781990051s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877399445s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.781990051s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.5( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.857199669s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.761825562s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.7( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877500534s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.782157898s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.5( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.857163429s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.761825562s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.7( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877480507s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.782157898s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.3( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.857086182s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.761825562s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.3( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.857066154s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.761825562s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.2( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877608299s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.782455444s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.e( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877593994s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.782470703s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.947529793s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 active pruub 93.852416992s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.e( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877569199s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.782470703s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=40 pruub=15.947497368s) [1] r=-1 lpr=40 pi=[36,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.852416992s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.2( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877576828s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.782455444s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.1c( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877510071s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.782615662s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.1a( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.856146812s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.761276245s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.1f( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877508163s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.782608032s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.1c( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877478600s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.782615662s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.1a( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.856129646s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.761276245s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.1f( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877449989s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.782608032s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.1a( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877408981s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.782760620s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877604485s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.782974243s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.1a( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877381325s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.782760620s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.1b( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877584457s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.782974243s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.1c( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.855532646s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.760978699s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.1c( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.855496407s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.760978699s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.1b( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877465248s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.782974243s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877249718s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.782775879s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.1b( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877435684s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.782974243s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.1a( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877232552s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.782775879s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.18( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877246857s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.782897949s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.1d( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.851726532s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.757385254s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[5.18( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877228737s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.782897949s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.1d( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.851696014s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.757385254s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877239227s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 active pruub 88.782989502s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[4.18( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=40 pruub=10.877216339s) [1] r=-1 lpr=40 pi=[34,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 88.782989502s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.13( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.856213570s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 active pruub 86.762268066s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:29:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 40 pg[3.13( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=8.856122971s) [1] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.762268066s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:29:31 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.ytlehq on compute-2
Dec 06 06:29:31 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.ytlehq on compute-2
Dec 06 06:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 06 06:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/570367341' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:29:31 compute-0 quirky_hypatia[89628]: 
Dec 06 06:29:31 compute-0 quirky_hypatia[89628]: {"fsid":"40a1bae4-cf76-5610-8dab-c75116dfe0bb","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-2"],"quorum_age":0,"monmap":{"epoch":2,"min_mon_release_name":"reef","num_mons":2},"osdmap":{"epoch":40,"num_osds":2,"num_up_osds":2,"osd_up_since":1765002559,"num_in_osds":2,"osd_in_since":1765002483,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":115}],"num_pgs":115,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56193024,"bytes_avail":14967803904,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-12-06T06:29:22.066399+0000","services":{"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"39c9692b-0c2a-42fa-812f-98fc47ab0ea2":{"message":"Updating mon deployment (+2 -> 3) (4s)\n      [==============..............] (remaining: 4s)","progress":0.5,"add_to_ceph_s":true}}}
Dec 06 06:29:31 compute-0 systemd[1]: libpod-ee5ab9bc04a45064088d59080e95fa207f4878ccffe88c77016402b5428a5f2c.scope: Deactivated successfully.
Dec 06 06:29:31 compute-0 podman[89612]: 2025-12-06 06:29:31.976641812 +0000 UTC m=+2.141569711 container died ee5ab9bc04a45064088d59080e95fa207f4878ccffe88c77016402b5428a5f2c (image=quay.io/ceph/ceph:v18, name=quirky_hypatia, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:29:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa8a44f6c7be4f9939cfda7b6802ed1d614662d54f5cd5b24e7aed077c554ec7-merged.mount: Deactivated successfully.
Dec 06 06:29:32 compute-0 podman[89612]: 2025-12-06 06:29:32.023856627 +0000 UTC m=+2.188784506 container remove ee5ab9bc04a45064088d59080e95fa207f4878ccffe88c77016402b5428a5f2c (image=quay.io/ceph/ceph:v18, name=quirky_hypatia, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:29:32 compute-0 systemd[1]: libpod-conmon-ee5ab9bc04a45064088d59080e95fa207f4878ccffe88c77016402b5428a5f2c.scope: Deactivated successfully.
Dec 06 06:29:32 compute-0 sudo[89608]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v133: 177 pgs: 62 unknown, 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:32 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3761752723; not ready for session (expect reconnect)
Dec 06 06:29:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Dec 06 06:29:32 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:32 compute-0 ceph-mon[74339]: Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive)
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 06 06:29:32 compute-0 ceph-mon[74339]: osdmap e40: 2 total, 2 up, 2 in
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:32 compute-0 ceph-mon[74339]: Deploying daemon mgr.compute-2.ytlehq on compute-2
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/570367341' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:29:32 compute-0 ceph-mon[74339]: pgmap v133: 177 pgs: 62 unknown, 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:29:32 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 10 completed events
Dec 06 06:29:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:29:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:32 compute-0 ceph-mgr[74630]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Dec 06 06:29:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:29:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:29:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 06 06:29:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec 06 06:29:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:29:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 06 06:29:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 06:29:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:29:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:29:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:33 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.nmklwp on compute-1
Dec 06 06:29:33 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.nmklwp on compute-1
Dec 06 06:29:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 06 06:29:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e41 e41: 2 total, 2 up, 2 in
Dec 06 06:29:33 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e41: 2 total, 2 up, 2 in
Dec 06 06:29:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:29:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 06 06:29:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:29:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:29:33 compute-0 ceph-mon[74339]: osdmap e41: 2 total, 2 up, 2 in
Dec 06 06:29:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v135: 177 pgs: 62 unknown, 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 06 06:29:34 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:34 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 06:29:34 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 06 06:29:34 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 06 06:29:34 compute-0 ceph-mon[74339]: Deploying daemon mgr.compute-1.nmklwp on compute-1
Dec 06 06:29:34 compute-0 ceph-mon[74339]: pgmap v135: 177 pgs: 62 unknown, 115 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:35 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:35 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 06:29:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 57 peering, 62 unknown, 58 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:36 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:36 compute-0 ceph-mon[74339]: 3.7 scrub starts
Dec 06 06:29:36 compute-0 ceph-mon[74339]: 3.7 scrub ok
Dec 06 06:29:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:36 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 06:29:36 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 06 06:29:36 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 06 06:29:37 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:37 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec 06 06:29:37 compute-0 ceph-mon[74339]: pgmap v136: 177 pgs: 57 peering, 62 unknown, 58 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:29:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v137: 177 pgs: 57 peering, 62 unknown, 58 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec 06 06:29:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Dec 06 06:29:38 compute-0 ceph-mon[74339]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec 06 06:29:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 06:29:38 compute-0 ceph-mon[74339]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:38 compute-0 ceph-mon[74339]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Dec 06 06:29:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:38 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 06:29:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:29:38 compute-0 ceph-mon[74339]: paxos.0).electionLogic(10) init, last seen epoch 10
Dec 06 06:29:38 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:29:38 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:38 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:38 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 06:29:39 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:39 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:39 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 06:29:39 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 06 06:29:39 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 06 06:29:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 57 peering, 62 unknown, 58 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:40 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:40 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 06:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 06:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:29:41 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 06:29:41 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:41 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:41 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 06:29:41 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 06:29:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:42 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:42 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:42 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 06:29:42 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:42 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:42 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 06:29:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:29:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:29:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:29:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:29:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:29:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:29:42 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 61b0e983-8377-417b-8133-393035795fe2 (Global Recovery Event) in 10 seconds
Dec 06 06:29:43 compute-0 ceph-mon[74339]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Dec 06 06:29:43 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:29:43 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:43 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:43 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 06:29:43 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 06:29:43 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:29:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:44 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:44 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 06:29:44 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 06 06:29:44 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 06 06:29:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:29:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:29:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 06 06:29:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e41: 2 total, 2 up, 2 in
Dec 06 06:29:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.sfzyix(active, since 3m)
Dec 06 06:29:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:29:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Dec 06 06:29:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Dec 06 06:29:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:29:45 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:45.180+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] : Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:29:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Dec 06 06:29:45 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:45.180+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] : [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Dec 06 06:29:45 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:45.180+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] :     fs cephfs is offline because no MDS is active for it.
Dec 06 06:29:45 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:45.180+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] : [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:29:45 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:45.180+0000 7f1744b8f640 -1 log_channel(cluster) log [ERR] :     fs cephfs has 0 MDS online, but wants 1
Dec 06 06:29:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 06 06:29:45 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:45 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec 06 06:29:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:29:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:29:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:46 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2071198595; not ready for session (expect reconnect)
Dec 06 06:29:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Dec 06 06:29:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mgr[74630]: mgr.server handle_report got status from non-daemon mon.compute-1
Dec 06 06:29:47 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T06:29:47.462+0000 7f67611e6640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Dec 06 06:29:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e42 e42: 2 total, 2 up, 2 in
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 06:29:47 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: 3.b scrub starts
Dec 06 06:29:47 compute-0 ceph-mon[74339]: 3.b scrub ok
Dec 06 06:29:47 compute-0 ceph-mon[74339]: pgmap v138: 177 pgs: 57 peering, 62 unknown, 58 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: pgmap v139: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: 2.1 scrub starts
Dec 06 06:29:47 compute-0 ceph-mon[74339]: 2.1 scrub ok
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: 2.2 deep-scrub starts
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: 2.2 deep-scrub ok
Dec 06 06:29:47 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:29:47 compute-0 ceph-mon[74339]: pgmap v140: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: 2.3 scrub starts
Dec 06 06:29:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:47 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:29:47 compute-0 ceph-mon[74339]: fsmap cephfs:0
Dec 06 06:29:47 compute-0 ceph-mon[74339]: osdmap e41: 2 total, 2 up, 2 in
Dec 06 06:29:47 compute-0 ceph-mon[74339]: mgrmap e9: compute-0.sfzyix(active, since 3m)
Dec 06 06:29:47 compute-0 ceph-mon[74339]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:29:47 compute-0 ceph-mon[74339]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Dec 06 06:29:47 compute-0 ceph-mon[74339]:     fs cephfs is offline because no MDS is active for it.
Dec 06 06:29:47 compute-0 ceph-mon[74339]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:29:47 compute-0 ceph-mon[74339]:     fs cephfs has 0 MDS online, but wants 1
Dec 06 06:29:47 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e42: 2 total, 2 up, 2 in
Dec 06 06:29:47 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 11 completed events
Dec 06 06:29:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:29:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.19( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.15( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.10( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.13( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.10( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.14( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.e( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.b( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.c( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.d( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.a( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.1( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.6( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.9( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.1d( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.1b( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.1e( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.1f( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.1e( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 06 06:29:49 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 42 pg[2.4( empty local-lis/les=0/0 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:29:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 06 06:29:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e43 e43: 2 total, 2 up, 2 in
Dec 06 06:29:50 compute-0 ceph-mon[74339]: mon.compute-1 calling monitor election
Dec 06 06:29:50 compute-0 ceph-mon[74339]: 2.3 scrub ok
Dec 06 06:29:50 compute-0 ceph-mon[74339]: 3.12 scrub starts
Dec 06 06:29:50 compute-0 ceph-mon[74339]: 3.12 scrub ok
Dec 06 06:29:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:50 compute-0 ceph-mon[74339]: pgmap v141: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:29:50 compute-0 ceph-mon[74339]: 5.18 scrub starts
Dec 06 06:29:50 compute-0 ceph-mon[74339]: 5.18 scrub ok
Dec 06 06:29:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 06 06:29:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:50 compute-0 ceph-mon[74339]: osdmap e42: 2 total, 2 up, 2 in
Dec 06 06:29:50 compute-0 ceph-mon[74339]: pgmap v143: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:50 compute-0 ceph-mon[74339]: 2.4 scrub starts
Dec 06 06:29:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:50 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e43: 2 total, 2 up, 2 in
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.1e( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.1d( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.1e( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.2( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.1( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.8( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.c( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.b( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.e( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.14( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.10( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.10( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 43 pg[2.e( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=40/40 les/c/f=41/41/0 sis=42) [0] r=0 lpr=42 pi=[40,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:29:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:51 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 58afc546-b071-463b-89a6-621c15f26679 (Updating mgr deployment (+2 -> 3))
Dec 06 06:29:51 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 58afc546-b071-463b-89a6-621c15f26679 (Updating mgr deployment (+2 -> 3)) in 20 seconds
Dec 06 06:29:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec 06 06:29:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:52 compute-0 sudo[89690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulxrboxskvtfilfyupsgonxijgmvrysx ; /usr/bin/python3'
Dec 06 06:29:52 compute-0 sudo[89690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:52 compute-0 python3[89692]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:29:52 compute-0 podman[89694]: 2025-12-06 06:29:52.364361439 +0000 UTC m=+0.039544631 container create 6d7bf8ea701a7a6917e22aaf6a657e625f5e3f35af0114e8e3944d6a834beea9 (image=quay.io/ceph/ceph:v18, name=nice_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 06:29:52 compute-0 systemd[1]: Started libpod-conmon-6d7bf8ea701a7a6917e22aaf6a657e625f5e3f35af0114e8e3944d6a834beea9.scope.
Dec 06 06:29:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312ba6d143fe36a9ca520367c08dbcf7e72aaf5cb5cb7c5adf8ba52f3b29ab36/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/312ba6d143fe36a9ca520367c08dbcf7e72aaf5cb5cb7c5adf8ba52f3b29ab36/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:52 compute-0 podman[89694]: 2025-12-06 06:29:52.438249617 +0000 UTC m=+0.113432829 container init 6d7bf8ea701a7a6917e22aaf6a657e625f5e3f35af0114e8e3944d6a834beea9 (image=quay.io/ceph/ceph:v18, name=nice_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:29:52 compute-0 podman[89694]: 2025-12-06 06:29:52.346864211 +0000 UTC m=+0.022047433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:29:52 compute-0 podman[89694]: 2025-12-06 06:29:52.444878807 +0000 UTC m=+0.120061999 container start 6d7bf8ea701a7a6917e22aaf6a657e625f5e3f35af0114e8e3944d6a834beea9 (image=quay.io/ceph/ceph:v18, name=nice_leakey, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 06:29:52 compute-0 podman[89694]: 2025-12-06 06:29:52.448636861 +0000 UTC m=+0.123820053 container attach 6d7bf8ea701a7a6917e22aaf6a657e625f5e3f35af0114e8e3944d6a834beea9 (image=quay.io/ceph/ceph:v18, name=nice_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 06:29:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:29:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:52 compute-0 ceph-mon[74339]: osdmap e43: 2 total, 2 up, 2 in
Dec 06 06:29:52 compute-0 ceph-mon[74339]: pgmap v145: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:29:52 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.17 deep-scrub starts
Dec 06 06:29:52 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.17 deep-scrub ok
Dec 06 06:29:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 06 06:29:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1508508610' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:29:53 compute-0 nice_leakey[89711]: 
Dec 06 06:29:53 compute-0 nice_leakey[89711]: {"fsid":"40a1bae4-cf76-5610-8dab-c75116dfe0bb","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":9,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":43,"num_osds":2,"num_up_osds":2,"osd_up_since":1765002559,"num_in_osds":2,"osd_in_since":1765002483,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":177}],"num_pgs":177,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56250368,"bytes_avail":14967746560,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-12-06T06:29:22.066399+0000","services":{"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"58afc546-b071-463b-89a6-621c15f26679":{"message":"Updating mgr deployment (+2 -> 3) (18s)\n      [============================] ","progress":1,"add_to_ceph_s":true}}}
Dec 06 06:29:53 compute-0 systemd[1]: libpod-6d7bf8ea701a7a6917e22aaf6a657e625f5e3f35af0114e8e3944d6a834beea9.scope: Deactivated successfully.
Dec 06 06:29:53 compute-0 podman[89694]: 2025-12-06 06:29:53.124081011 +0000 UTC m=+0.799264213 container died 6d7bf8ea701a7a6917e22aaf6a657e625f5e3f35af0114e8e3944d6a834beea9 (image=quay.io/ceph/ceph:v18, name=nice_leakey, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 06:29:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-312ba6d143fe36a9ca520367c08dbcf7e72aaf5cb5cb7c5adf8ba52f3b29ab36-merged.mount: Deactivated successfully.
Dec 06 06:29:53 compute-0 podman[89694]: 2025-12-06 06:29:53.178310061 +0000 UTC m=+0.853493253 container remove 6d7bf8ea701a7a6917e22aaf6a657e625f5e3f35af0114e8e3944d6a834beea9 (image=quay.io/ceph/ceph:v18, name=nice_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 06:29:53 compute-0 systemd[1]: libpod-conmon-6d7bf8ea701a7a6917e22aaf6a657e625f5e3f35af0114e8e3944d6a834beea9.scope: Deactivated successfully.
Dec 06 06:29:53 compute-0 sudo[89690]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:53 compute-0 sudo[89771]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugkefxdwgdqadixgkcimflwcyovvjgxm ; /usr/bin/python3'
Dec 06 06:29:53 compute-0 sudo[89771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:53 compute-0 python3[89773]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:29:53 compute-0 podman[89774]: 2025-12-06 06:29:53.57200884 +0000 UTC m=+0.047331163 container create 50854bc2904187970c8889f70d165c5453b0022b2fd48db24baa72e7c4ec4a74 (image=quay.io/ceph/ceph:v18, name=focused_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:29:53 compute-0 systemd[1]: Started libpod-conmon-50854bc2904187970c8889f70d165c5453b0022b2fd48db24baa72e7c4ec4a74.scope.
Dec 06 06:29:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a7500cea25ee9e987f36a504ff1f4ab7c2a3bcbfa89bbe374fd069d74b3657/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42a7500cea25ee9e987f36a504ff1f4ab7c2a3bcbfa89bbe374fd069d74b3657/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:53 compute-0 podman[89774]: 2025-12-06 06:29:53.551597502 +0000 UTC m=+0.026919845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:29:53 compute-0 podman[89774]: 2025-12-06 06:29:53.652155387 +0000 UTC m=+0.127477730 container init 50854bc2904187970c8889f70d165c5453b0022b2fd48db24baa72e7c4ec4a74 (image=quay.io/ceph/ceph:v18, name=focused_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:29:53 compute-0 podman[89774]: 2025-12-06 06:29:53.657350999 +0000 UTC m=+0.132673322 container start 50854bc2904187970c8889f70d165c5453b0022b2fd48db24baa72e7c4ec4a74 (image=quay.io/ceph/ceph:v18, name=focused_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 06:29:53 compute-0 podman[89774]: 2025-12-06 06:29:53.662466579 +0000 UTC m=+0.137788912 container attach 50854bc2904187970c8889f70d165c5453b0022b2fd48db24baa72e7c4ec4a74 (image=quay.io/ceph/ceph:v18, name=focused_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:29:53 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 06 06:29:53 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 06 06:29:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:54 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 08da7a18-95bc-4506-b937-4c707cb4cb35 (Updating crash deployment (+1 -> 3))
Dec 06 06:29:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Dec 06 06:29:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:29:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:29:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2371049915' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:29:54 compute-0 focused_joliot[89790]: 
Dec 06 06:29:54 compute-0 focused_joliot[89790]: {"epoch":3,"fsid":"40a1bae4-cf76-5610-8dab-c75116dfe0bb","modified":"2025-12-06T06:29:38.372311Z","created":"2025-12-06T06:25:15.925835Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Dec 06 06:29:54 compute-0 focused_joliot[89790]: dumped monmap epoch 3
Dec 06 06:29:54 compute-0 systemd[1]: libpod-50854bc2904187970c8889f70d165c5453b0022b2fd48db24baa72e7c4ec4a74.scope: Deactivated successfully.
Dec 06 06:29:54 compute-0 podman[89774]: 2025-12-06 06:29:54.299340116 +0000 UTC m=+0.774662449 container died 50854bc2904187970c8889f70d165c5453b0022b2fd48db24baa72e7c4ec4a74 (image=quay.io/ceph/ceph:v18, name=focused_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 06:29:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-42a7500cea25ee9e987f36a504ff1f4ab7c2a3bcbfa89bbe374fd069d74b3657-merged.mount: Deactivated successfully.
Dec 06 06:29:54 compute-0 podman[89774]: 2025-12-06 06:29:54.350900585 +0000 UTC m=+0.826222908 container remove 50854bc2904187970c8889f70d165c5453b0022b2fd48db24baa72e7c4ec4a74 (image=quay.io/ceph/ceph:v18, name=focused_joliot, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 06:29:54 compute-0 systemd[1]: libpod-conmon-50854bc2904187970c8889f70d165c5453b0022b2fd48db24baa72e7c4ec4a74.scope: Deactivated successfully.
Dec 06 06:29:54 compute-0 sudo[89771]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:54 compute-0 sudo[89852]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alosrrkygnagvkufludwjsybmicuuwma ; /usr/bin/python3'
Dec 06 06:29:54 compute-0 sudo[89852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:54 compute-0 python3[89854]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:29:55 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 12 completed events
Dec 06 06:29:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:29:55 compute-0 podman[89855]: 2025-12-06 06:29:55.062659227 +0000 UTC m=+0.091126920 container create fa8b1be8c796d378099aaeb445b0eeec140545cd846cd7feb70b2de7375852ce (image=quay.io/ceph/ceph:v18, name=funny_payne, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 06:29:55 compute-0 podman[89855]: 2025-12-06 06:29:54.998668759 +0000 UTC m=+0.027136432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:29:55 compute-0 systemd[1]: Started libpod-conmon-fa8b1be8c796d378099aaeb445b0eeec140545cd846cd7feb70b2de7375852ce.scope.
Dec 06 06:29:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee8a87f75619daa937ebde6f0e1db25f979df403f961c6c3ce910f4eebcadcc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee8a87f75619daa937ebde6f0e1db25f979df403f961c6c3ce910f4eebcadcc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:55 compute-0 podman[89855]: 2025-12-06 06:29:55.150501604 +0000 UTC m=+0.178969267 container init fa8b1be8c796d378099aaeb445b0eeec140545cd846cd7feb70b2de7375852ce (image=quay.io/ceph/ceph:v18, name=funny_payne, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:29:55 compute-0 podman[89855]: 2025-12-06 06:29:55.163069738 +0000 UTC m=+0.191537391 container start fa8b1be8c796d378099aaeb445b0eeec140545cd846cd7feb70b2de7375852ce (image=quay.io/ceph/ceph:v18, name=funny_payne, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 06:29:55 compute-0 podman[89855]: 2025-12-06 06:29:55.167834568 +0000 UTC m=+0.196302221 container attach fa8b1be8c796d378099aaeb445b0eeec140545cd846cd7feb70b2de7375852ce (image=quay.io/ceph/ceph:v18, name=funny_payne, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 06:29:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:29:55 compute-0 ceph-mon[74339]: pgmap v146: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:55 compute-0 ceph-mon[74339]: 3.1d scrub starts
Dec 06 06:29:55 compute-0 ceph-mon[74339]: 3.1d scrub ok
Dec 06 06:29:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1508508610' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:29:55 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.ytlehq started
Dec 06 06:29:55 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mgr.compute-2.ytlehq 192.168.122.102:0/1487236974; not ready for session (expect reconnect)
Dec 06 06:29:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Dec 06 06:29:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/740520273' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 06 06:29:56 compute-0 funny_payne[89870]: [client.openstack]
Dec 06 06:29:56 compute-0 funny_payne[89870]:         key = AQAgzDNpAAAAABAARUe82jbSNft4GCMkj8z7BQ==
Dec 06 06:29:56 compute-0 funny_payne[89870]:         caps mgr = "allow *"
Dec 06 06:29:56 compute-0 funny_payne[89870]:         caps mon = "profile rbd"
Dec 06 06:29:56 compute-0 funny_payne[89870]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec 06 06:29:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:56 compute-0 systemd[1]: libpod-fa8b1be8c796d378099aaeb445b0eeec140545cd846cd7feb70b2de7375852ce.scope: Deactivated successfully.
Dec 06 06:29:56 compute-0 podman[89855]: 2025-12-06 06:29:56.101581001 +0000 UTC m=+1.130048654 container died fa8b1be8c796d378099aaeb445b0eeec140545cd846cd7feb70b2de7375852ce (image=quay.io/ceph/ceph:v18, name=funny_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:29:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-dee8a87f75619daa937ebde6f0e1db25f979df403f961c6c3ce910f4eebcadcc-merged.mount: Deactivated successfully.
Dec 06 06:29:56 compute-0 podman[89855]: 2025-12-06 06:29:56.177430561 +0000 UTC m=+1.205898214 container remove fa8b1be8c796d378099aaeb445b0eeec140545cd846cd7feb70b2de7375852ce (image=quay.io/ceph/ceph:v18, name=funny_payne, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 06:29:56 compute-0 systemd[1]: libpod-conmon-fa8b1be8c796d378099aaeb445b0eeec140545cd846cd7feb70b2de7375852ce.scope: Deactivated successfully.
Dec 06 06:29:56 compute-0 sudo[89852]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:56 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mgr.compute-2.ytlehq 192.168.122.102:0/1487236974; not ready for session (expect reconnect)
Dec 06 06:29:56 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.19 deep-scrub starts
Dec 06 06:29:56 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.19 deep-scrub ok
Dec 06 06:29:57 compute-0 sudo[90057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lahulgaggiuowewmtvcbzrfcfbywnrxi ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765002597.2944193-37552-279082567163335/async_wrapper.py j440871130784 30 /home/zuul/.ansible/tmp/ansible-tmp-1765002597.2944193-37552-279082567163335/AnsiballZ_command.py _'
Dec 06 06:29:57 compute-0 sudo[90057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:57 compute-0 ansible-async_wrapper.py[90059]: Invoked with j440871130784 30 /home/zuul/.ansible/tmp/ansible-tmp-1765002597.2944193-37552-279082567163335/AnsiballZ_command.py _
Dec 06 06:29:57 compute-0 ansible-async_wrapper.py[90062]: Starting module and watcher
Dec 06 06:29:57 compute-0 ansible-async_wrapper.py[90062]: Start watching 90063 (30)
Dec 06 06:29:57 compute-0 ansible-async_wrapper.py[90063]: Start module (90063)
Dec 06 06:29:57 compute-0 ansible-async_wrapper.py[90059]: Return async_wrapper task started.
Dec 06 06:29:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:29:57 compute-0 sudo[90057]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:57 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mgr.compute-2.ytlehq 192.168.122.102:0/1487236974; not ready for session (expect reconnect)
Dec 06 06:29:57 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 06 06:29:57 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 06 06:29:57 compute-0 python3[90064]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:29:58 compute-0 podman[90065]: 2025-12-06 06:29:58.033818533 +0000 UTC m=+0.061902791 container create 70374b227cd758451aeca68c2d648e34bbf6118886daf77bf80876f003338ffc (image=quay.io/ceph/ceph:v18, name=stupefied_noether, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 06:29:58 compute-0 systemd[1]: Started libpod-conmon-70374b227cd758451aeca68c2d648e34bbf6118886daf77bf80876f003338ffc.scope.
Dec 06 06:29:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:29:58 compute-0 podman[90065]: 2025-12-06 06:29:58.004059651 +0000 UTC m=+0.032143949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:29:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a716cb23e52380ce36d205f38a4d36e8d1cbd9891b77fa32f0f21aee54d7921/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a716cb23e52380ce36d205f38a4d36e8d1cbd9891b77fa32f0f21aee54d7921/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:29:58 compute-0 podman[90065]: 2025-12-06 06:29:58.129437493 +0000 UTC m=+0.157521771 container init 70374b227cd758451aeca68c2d648e34bbf6118886daf77bf80876f003338ffc (image=quay.io/ceph/ceph:v18, name=stupefied_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:29:58 compute-0 podman[90065]: 2025-12-06 06:29:58.138376428 +0000 UTC m=+0.166460696 container start 70374b227cd758451aeca68c2d648e34bbf6118886daf77bf80876f003338ffc (image=quay.io/ceph/ceph:v18, name=stupefied_noether, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:29:58 compute-0 podman[90065]: 2025-12-06 06:29:58.142597823 +0000 UTC m=+0.170682101 container attach 70374b227cd758451aeca68c2d648e34bbf6118886daf77bf80876f003338ffc (image=quay.io/ceph/ceph:v18, name=stupefied_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 06:29:58 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14283 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:29:58 compute-0 stupefied_noether[90081]: 
Dec 06 06:29:58 compute-0 stupefied_noether[90081]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 06 06:29:58 compute-0 podman[90065]: 2025-12-06 06:29:58.707499205 +0000 UTC m=+0.735583463 container died 70374b227cd758451aeca68c2d648e34bbf6118886daf77bf80876f003338ffc (image=quay.io/ceph/ceph:v18, name=stupefied_noether, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Dec 06 06:29:58 compute-0 systemd[1]: libpod-70374b227cd758451aeca68c2d648e34bbf6118886daf77bf80876f003338ffc.scope: Deactivated successfully.
Dec 06 06:29:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a716cb23e52380ce36d205f38a4d36e8d1cbd9891b77fa32f0f21aee54d7921-merged.mount: Deactivated successfully.
Dec 06 06:29:58 compute-0 podman[90065]: 2025-12-06 06:29:58.757241724 +0000 UTC m=+0.785325982 container remove 70374b227cd758451aeca68c2d648e34bbf6118886daf77bf80876f003338ffc (image=quay.io/ceph/ceph:v18, name=stupefied_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 06:29:58 compute-0 systemd[1]: libpod-conmon-70374b227cd758451aeca68c2d648e34bbf6118886daf77bf80876f003338ffc.scope: Deactivated successfully.
Dec 06 06:29:58 compute-0 ansible-async_wrapper.py[90063]: Module complete (90063)
Dec 06 06:29:58 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mgr.compute-2.ytlehq 192.168.122.102:0/1487236974; not ready for session (expect reconnect)
Dec 06 06:29:58 compute-0 sudo[90164]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puqjrrldhredpgjisgmuhtndkwunkjdp ; /usr/bin/python3'
Dec 06 06:29:58 compute-0 sudo[90164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:59 compute-0 python3[90166]: ansible-ansible.legacy.async_status Invoked with jid=j440871130784.90059 mode=status _async_dir=/root/.ansible_async
Dec 06 06:29:59 compute-0 sudo[90164]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:59 compute-0 sudo[90213]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnwjpkghxxjhkzigcsfkgmwaxyplbhlm ; /usr/bin/python3'
Dec 06 06:29:59 compute-0 sudo[90213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:29:59 compute-0 python3[90215]: ansible-ansible.legacy.async_status Invoked with jid=j440871130784.90059 mode=cleanup _async_dir=/root/.ansible_async
Dec 06 06:29:59 compute-0 sudo[90213]: pam_unix(sudo:session): session closed for user root
Dec 06 06:29:59 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mgr.compute-2.ytlehq 192.168.122.102:0/1487236974; not ready for session (expect reconnect)
Dec 06 06:29:59 compute-0 sudo[90239]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cevgxqfassbvmfixdtwxxxpddhpffgvx ; /usr/bin/python3'
Dec 06 06:29:59 compute-0 sudo[90239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:30:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [ERR] : overall HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:30:00 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0[74335]: 2025-12-06T06:29:59.999+0000 7f1747394640 -1 log_channel(cluster) log [ERR] : overall HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:30:00 compute-0 python3[90241]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:30:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:00 compute-0 podman[90242]: 2025-12-06 06:30:00.132292934 +0000 UTC m=+0.057486730 container create e781992962345bf757625d304b685c58fc393e50135535ba7a73ead64496ec30 (image=quay.io/ceph/ceph:v18, name=cranky_mccarthy, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:30:00 compute-0 podman[90242]: 2025-12-06 06:30:00.100786844 +0000 UTC m=+0.025980720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:30:00 compute-0 systemd[1]: Started libpod-conmon-e781992962345bf757625d304b685c58fc393e50135535ba7a73ead64496ec30.scope.
Dec 06 06:30:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12dd2f0703c2c970968b6c233b6badd1eeac3c8454c4b72cb1a4b0b0de0822d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12dd2f0703c2c970968b6c233b6badd1eeac3c8454c4b72cb1a4b0b0de0822d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:00 compute-0 podman[90242]: 2025-12-06 06:30:00.285355173 +0000 UTC m=+0.210548989 container init e781992962345bf757625d304b685c58fc393e50135535ba7a73ead64496ec30 (image=quay.io/ceph/ceph:v18, name=cranky_mccarthy, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:30:00 compute-0 podman[90242]: 2025-12-06 06:30:00.293464444 +0000 UTC m=+0.218658230 container start e781992962345bf757625d304b685c58fc393e50135535ba7a73ead64496ec30 (image=quay.io/ceph/ceph:v18, name=cranky_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 06:30:00 compute-0 podman[90242]: 2025-12-06 06:30:00.297257527 +0000 UTC m=+0.222451303 container attach e781992962345bf757625d304b685c58fc393e50135535ba7a73ead64496ec30 (image=quay.io/ceph/ceph:v18, name=cranky_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:30:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 06:30:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:30:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:00 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Dec 06 06:30:00 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Dec 06 06:30:00 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mgr.compute-2.ytlehq 192.168.122.102:0/1487236974; not ready for session (expect reconnect)
Dec 06 06:30:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14289 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:30:00 compute-0 cranky_mccarthy[90258]: 
Dec 06 06:30:00 compute-0 cranky_mccarthy[90258]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 06 06:30:00 compute-0 systemd[1]: libpod-e781992962345bf757625d304b685c58fc393e50135535ba7a73ead64496ec30.scope: Deactivated successfully.
Dec 06 06:30:00 compute-0 podman[90242]: 2025-12-06 06:30:00.989778944 +0000 UTC m=+0.914972740 container died e781992962345bf757625d304b685c58fc393e50135535ba7a73ead64496ec30 (image=quay.io/ceph/ceph:v18, name=cranky_mccarthy, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 06:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f12dd2f0703c2c970968b6c233b6badd1eeac3c8454c4b72cb1a4b0b0de0822d-merged.mount: Deactivated successfully.
Dec 06 06:30:01 compute-0 podman[90242]: 2025-12-06 06:30:01.03576343 +0000 UTC m=+0.960957216 container remove e781992962345bf757625d304b685c58fc393e50135535ba7a73ead64496ec30 (image=quay.io/ceph/ceph:v18, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:30:01 compute-0 systemd[1]: libpod-conmon-e781992962345bf757625d304b685c58fc393e50135535ba7a73ead64496ec30.scope: Deactivated successfully.
Dec 06 06:30:01 compute-0 sudo[90239]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:01 compute-0 sudo[90318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmqezcfhedssabkbxwjzluxucnzutukj ; /usr/bin/python3'
Dec 06 06:30:01 compute-0 sudo[90318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:30:01 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mgr.compute-2.ytlehq 192.168.122.102:0/1487236974; not ready for session (expect reconnect)
Dec 06 06:30:01 compute-0 python3[90320]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:30:01 compute-0 podman[90321]: 2025-12-06 06:30:01.971326302 +0000 UTC m=+0.061493050 container create f47dd2de7289ddb95c664c89ed971b80f45b1b65dd9ec9ca40518bae862b734b (image=quay.io/ceph/ceph:v18, name=strange_hoover, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:30:02 compute-0 systemd[1]: Started libpod-conmon-f47dd2de7289ddb95c664c89ed971b80f45b1b65dd9ec9ca40518bae862b734b.scope.
Dec 06 06:30:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10aeb453772500d3c407e5549e4d976db968916d581a143fe4e70e3f30784a8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10aeb453772500d3c407e5549e4d976db968916d581a143fe4e70e3f30784a8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:02 compute-0 podman[90321]: 2025-12-06 06:30:01.950951216 +0000 UTC m=+0.041117944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:30:02 compute-0 podman[90321]: 2025-12-06 06:30:02.05587763 +0000 UTC m=+0.146044368 container init f47dd2de7289ddb95c664c89ed971b80f45b1b65dd9ec9ca40518bae862b734b (image=quay.io/ceph/ceph:v18, name=strange_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:02 compute-0 podman[90321]: 2025-12-06 06:30:02.064053234 +0000 UTC m=+0.154219942 container start f47dd2de7289ddb95c664c89ed971b80f45b1b65dd9ec9ca40518bae862b734b (image=quay.io/ceph/ceph:v18, name=strange_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 06:30:02 compute-0 podman[90321]: 2025-12-06 06:30:02.068212937 +0000 UTC m=+0.158379705 container attach f47dd2de7289ddb95c664c89ed971b80f45b1b65dd9ec9ca40518bae862b734b (image=quay.io/ceph/ceph:v18, name=strange_hoover, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 06:30:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 175 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:02 compute-0 ceph-mon[74339]: 3.17 deep-scrub starts
Dec 06 06:30:02 compute-0 ceph-mon[74339]: 3.17 deep-scrub ok
Dec 06 06:30:02 compute-0 ceph-mon[74339]: 3.18 scrub starts
Dec 06 06:30:02 compute-0 ceph-mon[74339]: 3.18 scrub ok
Dec 06 06:30:02 compute-0 ceph-mon[74339]: pgmap v147: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:30:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2371049915' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:30:02 compute-0 ceph-mon[74339]: Standby manager daemon compute-2.ytlehq started
Dec 06 06:30:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/740520273' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec 06 06:30:02 compute-0 ceph-mon[74339]: pgmap v148: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:02 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.sfzyix(active, since 3m), standbys: compute-2.ytlehq
Dec 06 06:30:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.ytlehq", "id": "compute-2.ytlehq"} v 0) v1
Dec 06 06:30:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ytlehq", "id": "compute-2.ytlehq"}]: dispatch
Dec 06 06:30:02 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14295 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:30:02 compute-0 strange_hoover[90336]: 
Dec 06 06:30:02 compute-0 strange_hoover[90336]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Dec 06 06:30:02 compute-0 systemd[1]: libpod-f47dd2de7289ddb95c664c89ed971b80f45b1b65dd9ec9ca40518bae862b734b.scope: Deactivated successfully.
Dec 06 06:30:02 compute-0 podman[90361]: 2025-12-06 06:30:02.730446167 +0000 UTC m=+0.029961539 container died f47dd2de7289ddb95c664c89ed971b80f45b1b65dd9ec9ca40518bae862b734b (image=quay.io/ceph/ceph:v18, name=strange_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 06:30:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d10aeb453772500d3c407e5549e4d976db968916d581a143fe4e70e3f30784a8-merged.mount: Deactivated successfully.
Dec 06 06:30:02 compute-0 podman[90361]: 2025-12-06 06:30:02.799445201 +0000 UTC m=+0.098960503 container remove f47dd2de7289ddb95c664c89ed971b80f45b1b65dd9ec9ca40518bae862b734b (image=quay.io/ceph/ceph:v18, name=strange_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 06:30:02 compute-0 systemd[1]: libpod-conmon-f47dd2de7289ddb95c664c89ed971b80f45b1b65dd9ec9ca40518bae862b734b.scope: Deactivated successfully.
Dec 06 06:30:02 compute-0 ansible-async_wrapper.py[90062]: Done in kid B.
Dec 06 06:30:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:02 compute-0 sudo[90318]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:03 compute-0 sudo[90399]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiamyulbarypsxdfzemxeuoqujbgwgtk ; /usr/bin/python3'
Dec 06 06:30:03 compute-0 sudo[90399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:30:03 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.nmklwp started
Dec 06 06:30:03 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mgr.compute-1.nmklwp 192.168.122.101:0/2119883059; not ready for session (expect reconnect)
Dec 06 06:30:03 compute-0 python3[90401]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:30:03 compute-0 podman[90402]: 2025-12-06 06:30:03.898722042 +0000 UTC m=+0.062359083 container create 70081544473ea37c4fe885ada49551abaddcdc80196f8c6aa833003170fabffb (image=quay.io/ceph/ceph:v18, name=elegant_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:03 compute-0 systemd[1]: Started libpod-conmon-70081544473ea37c4fe885ada49551abaddcdc80196f8c6aa833003170fabffb.scope.
Dec 06 06:30:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00e17b3ef1ae75194914b372fa4111b67c81aeb3449fe13e5717d01ad400342/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d00e17b3ef1ae75194914b372fa4111b67c81aeb3449fe13e5717d01ad400342/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:03 compute-0 podman[90402]: 2025-12-06 06:30:03.878330356 +0000 UTC m=+0.041967447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:30:03 compute-0 podman[90402]: 2025-12-06 06:30:03.973273837 +0000 UTC m=+0.136910878 container init 70081544473ea37c4fe885ada49551abaddcdc80196f8c6aa833003170fabffb (image=quay.io/ceph/ceph:v18, name=elegant_bouman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 06:30:03 compute-0 podman[90402]: 2025-12-06 06:30:03.979625591 +0000 UTC m=+0.143262632 container start 70081544473ea37c4fe885ada49551abaddcdc80196f8c6aa833003170fabffb (image=quay.io/ceph/ceph:v18, name=elegant_bouman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:30:03 compute-0 podman[90402]: 2025-12-06 06:30:03.98581693 +0000 UTC m=+0.149453961 container attach 70081544473ea37c4fe885ada49551abaddcdc80196f8c6aa833003170fabffb (image=quay.io/ceph/ceph:v18, name=elegant_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:30:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 175 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:04 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:30:04 compute-0 elegant_bouman[90417]: 
Dec 06 06:30:04 compute-0 elegant_bouman[90417]: [{"container_id": "53f2ff3a1841", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.36%", "created": "2025-12-06T06:26:39.246823Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-06T06:26:39.302358Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-06T06:28:33.409516Z", "memory_usage": 11607736, "ports": [], "service_name": "crash", "started": "2025-12-06T06:26:39.111913Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb@crash.compute-0", "version": "18.2.7"}, {"container_id": "23be10411580", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.28%", "created": "2025-12-06T06:28:00.238897Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2025-12-06T06:28:00.285174Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-06T06:28:34.809696Z", "memory_usage": 11691622, "ports": [], "service_name": 
"crash", "started": "2025-12-06T06:28:00.160198Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb@crash.compute-1", "version": "18.2.7"}, {"container_id": "2b2d7108e778", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "30.01%", "created": "2025-12-06T06:25:24.530629Z", "daemon_id": "compute-0.sfzyix", "daemon_name": "mgr.compute-0.sfzyix", "daemon_type": "mgr", "events": ["2025-12-06T06:26:42.127558Z daemon:mgr.compute-0.sfzyix [INFO] \"Reconfigured mgr.compute-0.sfzyix on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-06T06:28:33.409451Z", "memory_usage": 551446118, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-06T06:25:24.411125Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb@mgr.compute-0.sfzyix", "version": "18.2.7"}, {"daemon_id": "compute-1.nmklwp", "daemon_name": "mgr.compute-1.nmklwp", "daemon_type": "mgr", "events": ["2025-12-06T06:29:49.841720Z daemon:mgr.compute-1.nmklwp [INFO] \"Deployed mgr.compute-1.nmklwp on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [8765], "service_name": "mgr", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2.ytlehq", "daemon_name": "mgr.compute-2.ytlehq", "daemon_type": "mgr", "events": ["2025-12-06T06:29:33.440365Z daemon:mgr.compute-2.ytlehq [INFO] \"Deployed mgr.compute-2.ytlehq on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [8765], "service_name": "mgr", "status": 2, "status_desc": "starting"}, {"container_id": "6ea38236040b", "container_image_digests": 
["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.52%", "created": "2025-12-06T06:25:18.845483Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-06T06:26:41.419811Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-06T06:28:33.409320Z", "memory_request": 2147483648, "memory_usage": 37717278, "ports": [], "service_name": "mon", "started": "2025-12-06T06:25:22.337059Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb@mon.compute-0", "version": "18.2.7"}, {"daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2025-12-06T06:29:31.599817Z daemon:mon.compute-1 [INFO] \"Deployed mon.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "memory_request": 2147483648, "ports": [], "service_name": "mon", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "events": ["2025-12-06T06:29:24.942783Z daemon:mon.compute-2 [INFO] \"Deployed mon.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "memory_request": 2147483648, "ports": [], "service_name": "mon", "status": 2, "status_desc": "starting"}, {"container_id": "7156c3d3aaf5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", 
"container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "5.06%", "created": "2025-12-06T06:28:12.341710Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-06T06:28:12.398751Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-06T06:28:33.409576Z", "memory_request": 4294967296, "memory_usage": 54924410, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-06T06:28:12.242333Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb@osd.0", "version": "18.2.7"}, {"container_id": "ebce93884e63", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.02%", "created": "2025-12-06T06:28:15.721310Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-06T06:28:15.864405Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-06T06:28:34.809833Z", "memory_request": 5502926848, "memory_usage": 31666995, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-06T06:28:14.700192Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb@osd.1", "version": "18.2.7"}]
Dec 06 06:30:04 compute-0 systemd[1]: libpod-70081544473ea37c4fe885ada49551abaddcdc80196f8c6aa833003170fabffb.scope: Deactivated successfully.
Dec 06 06:30:04 compute-0 podman[90402]: 2025-12-06 06:30:04.585681687 +0000 UTC m=+0.749318728 container died 70081544473ea37c4fe885ada49551abaddcdc80196f8c6aa833003170fabffb (image=quay.io/ceph/ceph:v18, name=elegant_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 06:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d00e17b3ef1ae75194914b372fa4111b67c81aeb3449fe13e5717d01ad400342-merged.mount: Deactivated successfully.
Dec 06 06:30:04 compute-0 podman[90402]: 2025-12-06 06:30:04.63560297 +0000 UTC m=+0.799240021 container remove 70081544473ea37c4fe885ada49551abaddcdc80196f8c6aa833003170fabffb (image=quay.io/ceph/ceph:v18, name=elegant_bouman, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 06:30:04 compute-0 systemd[1]: libpod-conmon-70081544473ea37c4fe885ada49551abaddcdc80196f8c6aa833003170fabffb.scope: Deactivated successfully.
Dec 06 06:30:04 compute-0 sudo[90399]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:04 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from mgr.compute-1.nmklwp 192.168.122.101:0/2119883059; not ready for session (expect reconnect)
Dec 06 06:30:04 compute-0 ceph-mon[74339]: 3.19 deep-scrub starts
Dec 06 06:30:04 compute-0 ceph-mon[74339]: 3.19 deep-scrub ok
Dec 06 06:30:04 compute-0 ceph-mon[74339]: 3.1b scrub starts
Dec 06 06:30:04 compute-0 ceph-mon[74339]: 3.1b scrub ok
Dec 06 06:30:04 compute-0 ceph-mon[74339]: pgmap v149: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:04 compute-0 ceph-mon[74339]: from='client.14283 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:30:04 compute-0 ceph-mon[74339]: overall HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Dec 06 06:30:04 compute-0 ceph-mon[74339]: pgmap v150: 177 pgs: 1 active+clean+scrubbing+deep, 176 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 06 06:30:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:04 compute-0 ceph-mon[74339]: Deploying daemon crash.compute-2 on compute-2
Dec 06 06:30:04 compute-0 ceph-mon[74339]: from='client.14289 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:30:04 compute-0 ceph-mon[74339]: 2.5 deep-scrub starts
Dec 06 06:30:04 compute-0 ceph-mon[74339]: 4.1a scrub starts
Dec 06 06:30:04 compute-0 ceph-mon[74339]: pgmap v151: 177 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 175 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:04 compute-0 ceph-mon[74339]: mgrmap e10: compute-0.sfzyix(active, since 3m), standbys: compute-2.ytlehq
Dec 06 06:30:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr metadata", "who": "compute-2.ytlehq", "id": "compute-2.ytlehq"}]: dispatch
Dec 06 06:30:04 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 06 06:30:04 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 06 06:30:05 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 3m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:30:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.nmklwp", "id": "compute-1.nmklwp"} v 0) v1
Dec 06 06:30:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr metadata", "who": "compute-1.nmklwp", "id": "compute-1.nmklwp"}]: dispatch
Dec 06 06:30:05 compute-0 sudo[90476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyothqhqtcnlfzrpseszsmwcduvreuys ; /usr/bin/python3'
Dec 06 06:30:05 compute-0 sudo[90476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:30:05 compute-0 python3[90478]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:30:05 compute-0 podman[90479]: 2025-12-06 06:30:05.742619733 +0000 UTC m=+0.051415054 container create caaf2d78d31acf14caa58c6a50d4b6d9cce5ffc0739e515603998f661def5de0 (image=quay.io/ceph/ceph:v18, name=strange_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 06:30:05 compute-0 systemd[1]: Started libpod-conmon-caaf2d78d31acf14caa58c6a50d4b6d9cce5ffc0739e515603998f661def5de0.scope.
Dec 06 06:30:05 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:05 compute-0 podman[90479]: 2025-12-06 06:30:05.719091691 +0000 UTC m=+0.027887062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30a74cdb0265319e064355a90d3977110d7dd9907e653f4cdf3bc7b6637a31d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30a74cdb0265319e064355a90d3977110d7dd9907e653f4cdf3bc7b6637a31d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:05 compute-0 podman[90479]: 2025-12-06 06:30:05.825476826 +0000 UTC m=+0.134272167 container init caaf2d78d31acf14caa58c6a50d4b6d9cce5ffc0739e515603998f661def5de0 (image=quay.io/ceph/ceph:v18, name=strange_antonelli, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:30:05 compute-0 podman[90479]: 2025-12-06 06:30:05.832014984 +0000 UTC m=+0.140810305 container start caaf2d78d31acf14caa58c6a50d4b6d9cce5ffc0739e515603998f661def5de0 (image=quay.io/ceph/ceph:v18, name=strange_antonelli, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Dec 06 06:30:05 compute-0 podman[90479]: 2025-12-06 06:30:05.836391864 +0000 UTC m=+0.145187205 container attach caaf2d78d31acf14caa58c6a50d4b6d9cce5ffc0739e515603998f661def5de0 (image=quay.io/ceph/ceph:v18, name=strange_antonelli, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 06 06:30:05 compute-0 ceph-mon[74339]: from='client.14295 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:30:05 compute-0 ceph-mon[74339]: 3.1c scrub starts
Dec 06 06:30:05 compute-0 ceph-mon[74339]: 2.5 deep-scrub ok
Dec 06 06:30:05 compute-0 ceph-mon[74339]: 3.1c scrub ok
Dec 06 06:30:05 compute-0 ceph-mon[74339]: 4.1a scrub ok
Dec 06 06:30:05 compute-0 ceph-mon[74339]: Standby manager daemon compute-1.nmklwp started
Dec 06 06:30:05 compute-0 ceph-mon[74339]: pgmap v152: 177 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 175 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:05 compute-0 ceph-mon[74339]: from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 06:30:05 compute-0 ceph-mon[74339]: 3.1e scrub starts
Dec 06 06:30:05 compute-0 ceph-mon[74339]: 3.1e scrub ok
Dec 06 06:30:05 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 3m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:30:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr metadata", "who": "compute-1.nmklwp", "id": "compute-1.nmklwp"}]: dispatch
Dec 06 06:30:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 175 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec 06 06:30:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/416954814' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:30:06 compute-0 strange_antonelli[90494]: 
Dec 06 06:30:06 compute-0 strange_antonelli[90494]: {"fsid":"40a1bae4-cf76-5610-8dab-c75116dfe0bb","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":22,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":43,"num_osds":2,"num_up_osds":2,"osd_up_since":1765002559,"num_in_osds":2,"osd_in_since":1765002483,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":175},{"state_name":"active+clean+scrubbing+deep","count":1},{"state_name":"active+clean+scrubbing","count":1}],"num_pgs":177,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56299520,"bytes_avail":14967697408,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-12-06T06:29:22.066399+0000","services":{"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"08da7a18-95bc-4506-b937-4c707cb4cb35":{"message":"Updating crash deployment (+1 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec 06 06:30:06 compute-0 systemd[1]: libpod-caaf2d78d31acf14caa58c6a50d4b6d9cce5ffc0739e515603998f661def5de0.scope: Deactivated successfully.
Dec 06 06:30:06 compute-0 podman[90479]: 2025-12-06 06:30:06.518562997 +0000 UTC m=+0.827358318 container died caaf2d78d31acf14caa58c6a50d4b6d9cce5ffc0739e515603998f661def5de0 (image=quay.io/ceph/ceph:v18, name=strange_antonelli, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:30:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b30a74cdb0265319e064355a90d3977110d7dd9907e653f4cdf3bc7b6637a31d-merged.mount: Deactivated successfully.
Dec 06 06:30:06 compute-0 podman[90479]: 2025-12-06 06:30:06.56004967 +0000 UTC m=+0.868844991 container remove caaf2d78d31acf14caa58c6a50d4b6d9cce5ffc0739e515603998f661def5de0 (image=quay.io/ceph/ceph:v18, name=strange_antonelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:30:06 compute-0 systemd[1]: libpod-conmon-caaf2d78d31acf14caa58c6a50d4b6d9cce5ffc0739e515603998f661def5de0.scope: Deactivated successfully.
Dec 06 06:30:06 compute-0 sudo[90476]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:07 compute-0 ceph-mon[74339]: pgmap v153: 177 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 175 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/416954814' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec 06 06:30:07 compute-0 sudo[90553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjmswkztlxtsykstorfvbguzpeqjojsj ; /usr/bin/python3'
Dec 06 06:30:07 compute-0 sudo[90553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:30:07 compute-0 python3[90555]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:30:07 compute-0 podman[90556]: 2025-12-06 06:30:07.677260892 +0000 UTC m=+0.051620531 container create 6dfadcf369fbf701c0277df74a760927d1a980ee83dfc706eb32cad47efef07f (image=quay.io/ceph/ceph:v18, name=lucid_mendeleev, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 06:30:07 compute-0 systemd[1]: Started libpod-conmon-6dfadcf369fbf701c0277df74a760927d1a980ee83dfc706eb32cad47efef07f.scope.
Dec 06 06:30:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5188bb33a5ae52da241c606a03f8b8fd17ecfe410ebe8de747fe67745f626acd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5188bb33a5ae52da241c606a03f8b8fd17ecfe410ebe8de747fe67745f626acd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:07 compute-0 podman[90556]: 2025-12-06 06:30:07.656318129 +0000 UTC m=+0.030677798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:30:07 compute-0 podman[90556]: 2025-12-06 06:30:07.756140425 +0000 UTC m=+0.130500094 container init 6dfadcf369fbf701c0277df74a760927d1a980ee83dfc706eb32cad47efef07f (image=quay.io/ceph/ceph:v18, name=lucid_mendeleev, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:30:07 compute-0 podman[90556]: 2025-12-06 06:30:07.762800176 +0000 UTC m=+0.137159815 container start 6dfadcf369fbf701c0277df74a760927d1a980ee83dfc706eb32cad47efef07f (image=quay.io/ceph/ceph:v18, name=lucid_mendeleev, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 06:30:07 compute-0 podman[90556]: 2025-12-06 06:30:07.767058243 +0000 UTC m=+0.141417882 container attach 6dfadcf369fbf701c0277df74a760927d1a980ee83dfc706eb32cad47efef07f (image=quay.io/ceph/ceph:v18, name=lucid_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:30:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:08 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:30:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec 06 06:30:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169795211' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:30:08 compute-0 lucid_mendeleev[90571]: 
Dec 06 06:30:08 compute-0 systemd[1]: libpod-6dfadcf369fbf701c0277df74a760927d1a980ee83dfc706eb32cad47efef07f.scope: Deactivated successfully.
Dec 06 06:30:08 compute-0 lucid_mendeleev[90571]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502926848","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""}]
Dec 06 06:30:08 compute-0 podman[90556]: 2025-12-06 06:30:08.349900565 +0000 UTC m=+0.724260244 container died 6dfadcf369fbf701c0277df74a760927d1a980ee83dfc706eb32cad47efef07f (image=quay.io/ceph/ceph:v18, name=lucid_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:30:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5188bb33a5ae52da241c606a03f8b8fd17ecfe410ebe8de747fe67745f626acd-merged.mount: Deactivated successfully.
Dec 06 06:30:09 compute-0 podman[90556]: 2025-12-06 06:30:09.138782632 +0000 UTC m=+1.513142271 container remove 6dfadcf369fbf701c0277df74a760927d1a980ee83dfc706eb32cad47efef07f (image=quay.io/ceph/ceph:v18, name=lucid_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 06 06:30:09 compute-0 sudo[90553]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:09 compute-0 systemd[1]: libpod-conmon-6dfadcf369fbf701c0277df74a760927d1a980ee83dfc706eb32cad47efef07f.scope: Deactivated successfully.
Dec 06 06:30:09 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec 06 06:30:09 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec 06 06:30:09 compute-0 sudo[90631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aorjtfpwfsxunyneertpowxdhxbauxki ; /usr/bin/python3'
Dec 06 06:30:09 compute-0 sudo[90631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:30:10 compute-0 ceph-mon[74339]: pgmap v154: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1169795211' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec 06 06:30:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:10 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 08da7a18-95bc-4506-b937-4c707cb4cb35 (Updating crash deployment (+1 -> 3))
Dec 06 06:30:10 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 08da7a18-95bc-4506-b937-4c707cb4cb35 (Updating crash deployment (+1 -> 3)) in 16 seconds
Dec 06 06:30:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec 06 06:30:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:10 compute-0 python3[90633]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:30:10 compute-0 podman[90634]: 2025-12-06 06:30:10.158231605 +0000 UTC m=+0.044221508 container create 83936f7cf0004aa243fc6f8d2c06223f71fbd12a00422ae0ec104d55e36caef8 (image=quay.io/ceph/ceph:v18, name=objective_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:30:10 compute-0 systemd[1]: Started libpod-conmon-83936f7cf0004aa243fc6f8d2c06223f71fbd12a00422ae0ec104d55e36caef8.scope.
Dec 06 06:30:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9288da98710b27954887b7e5b2cbe8a7912824058d6965657b445d96912e857d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9288da98710b27954887b7e5b2cbe8a7912824058d6965657b445d96912e857d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:10 compute-0 podman[90634]: 2025-12-06 06:30:10.22983837 +0000 UTC m=+0.115828303 container init 83936f7cf0004aa243fc6f8d2c06223f71fbd12a00422ae0ec104d55e36caef8 (image=quay.io/ceph/ceph:v18, name=objective_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:30:10 compute-0 podman[90634]: 2025-12-06 06:30:10.140485581 +0000 UTC m=+0.026475504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:30:10 compute-0 podman[90634]: 2025-12-06 06:30:10.236945104 +0000 UTC m=+0.122934997 container start 83936f7cf0004aa243fc6f8d2c06223f71fbd12a00422ae0ec104d55e36caef8 (image=quay.io/ceph/ceph:v18, name=objective_elbakyan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:30:10 compute-0 podman[90634]: 2025-12-06 06:30:10.240125201 +0000 UTC m=+0.126115104 container attach 83936f7cf0004aa243fc6f8d2c06223f71fbd12a00422ae0ec104d55e36caef8 (image=quay.io/ceph/ceph:v18, name=objective_elbakyan, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:30:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:30:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:30:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:30:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:30:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:30:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:30:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:30:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:30:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:10 compute-0 sudo[90653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:30:10 compute-0 sudo[90653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:10 compute-0 sudo[90653]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:10 compute-0 sudo[90695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:30:10 compute-0 sudo[90695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:10 compute-0 sudo[90695]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:10 compute-0 sudo[90722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:30:10 compute-0 sudo[90722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:10 compute-0 sudo[90722]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:10 compute-0 sudo[90747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:30:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Dec 06 06:30:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/404259740' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 06 06:30:10 compute-0 sudo[90747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:10 compute-0 objective_elbakyan[90649]: mimic
Dec 06 06:30:10 compute-0 systemd[1]: libpod-83936f7cf0004aa243fc6f8d2c06223f71fbd12a00422ae0ec104d55e36caef8.scope: Deactivated successfully.
Dec 06 06:30:10 compute-0 podman[90634]: 2025-12-06 06:30:10.819055086 +0000 UTC m=+0.705044999 container died 83936f7cf0004aa243fc6f8d2c06223f71fbd12a00422ae0ec104d55e36caef8 (image=quay.io/ceph/ceph:v18, name=objective_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 06:30:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9288da98710b27954887b7e5b2cbe8a7912824058d6965657b445d96912e857d-merged.mount: Deactivated successfully.
Dec 06 06:30:10 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 13 completed events
Dec 06 06:30:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:30:10 compute-0 podman[90634]: 2025-12-06 06:30:10.868959809 +0000 UTC m=+0.754949732 container remove 83936f7cf0004aa243fc6f8d2c06223f71fbd12a00422ae0ec104d55e36caef8 (image=quay.io/ceph/ceph:v18, name=objective_elbakyan, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:10 compute-0 systemd[1]: libpod-conmon-83936f7cf0004aa243fc6f8d2c06223f71fbd12a00422ae0ec104d55e36caef8.scope: Deactivated successfully.
Dec 06 06:30:10 compute-0 sudo[90631]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:10 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 06 06:30:10 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 06 06:30:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:11 compute-0 podman[90828]: 2025-12-06 06:30:11.160232711 +0000 UTC m=+0.042978614 container create 5f15a1431fb583e9100ea0b145872acc8e0b3e042866ee382c61d0a92d96028a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:30:11 compute-0 systemd[1]: Started libpod-conmon-5f15a1431fb583e9100ea0b145872acc8e0b3e042866ee382c61d0a92d96028a.scope.
Dec 06 06:30:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:11 compute-0 podman[90828]: 2025-12-06 06:30:11.139617269 +0000 UTC m=+0.022363192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:30:11 compute-0 podman[90828]: 2025-12-06 06:30:11.243917626 +0000 UTC m=+0.126663529 container init 5f15a1431fb583e9100ea0b145872acc8e0b3e042866ee382c61d0a92d96028a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shamir, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:30:11 compute-0 podman[90828]: 2025-12-06 06:30:11.25249999 +0000 UTC m=+0.135245903 container start 5f15a1431fb583e9100ea0b145872acc8e0b3e042866ee382c61d0a92d96028a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 06:30:11 compute-0 podman[90828]: 2025-12-06 06:30:11.256649063 +0000 UTC m=+0.139394986 container attach 5f15a1431fb583e9100ea0b145872acc8e0b3e042866ee382c61d0a92d96028a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:30:11 compute-0 exciting_shamir[90844]: 167 167
Dec 06 06:30:11 compute-0 systemd[1]: libpod-5f15a1431fb583e9100ea0b145872acc8e0b3e042866ee382c61d0a92d96028a.scope: Deactivated successfully.
Dec 06 06:30:11 compute-0 podman[90828]: 2025-12-06 06:30:11.258781951 +0000 UTC m=+0.141527864 container died 5f15a1431fb583e9100ea0b145872acc8e0b3e042866ee382c61d0a92d96028a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shamir, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:30:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:11 compute-0 ceph-mon[74339]: 3.1f scrub starts
Dec 06 06:30:11 compute-0 ceph-mon[74339]: 3.1f scrub ok
Dec 06 06:30:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:11 compute-0 ceph-mon[74339]: pgmap v155: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:30:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:30:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:30:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/404259740' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec 06 06:30:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-34aa1ae5f7ec805a8d49adcaab49831a9ec8693519bdaa1e40a094b540208e90-merged.mount: Deactivated successfully.
Dec 06 06:30:11 compute-0 podman[90828]: 2025-12-06 06:30:11.305785835 +0000 UTC m=+0.188531738 container remove 5f15a1431fb583e9100ea0b145872acc8e0b3e042866ee382c61d0a92d96028a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_shamir, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:11 compute-0 systemd[1]: libpod-conmon-5f15a1431fb583e9100ea0b145872acc8e0b3e042866ee382c61d0a92d96028a.scope: Deactivated successfully.
Dec 06 06:30:11 compute-0 podman[90868]: 2025-12-06 06:30:11.462848593 +0000 UTC m=+0.043266602 container create 23e946091eff728ff84675a4ca353d22fcab6817c94e97ff0da37402d7f33087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 06:30:11 compute-0 systemd[1]: Started libpod-conmon-23e946091eff728ff84675a4ca353d22fcab6817c94e97ff0da37402d7f33087.scope.
Dec 06 06:30:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d563fb37eb37d249df62094920d857eb6686f8b3daa19d3b608786eba5be4239/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:11 compute-0 podman[90868]: 2025-12-06 06:30:11.444415819 +0000 UTC m=+0.024833878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d563fb37eb37d249df62094920d857eb6686f8b3daa19d3b608786eba5be4239/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d563fb37eb37d249df62094920d857eb6686f8b3daa19d3b608786eba5be4239/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d563fb37eb37d249df62094920d857eb6686f8b3daa19d3b608786eba5be4239/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d563fb37eb37d249df62094920d857eb6686f8b3daa19d3b608786eba5be4239/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:11 compute-0 podman[90868]: 2025-12-06 06:30:11.565307 +0000 UTC m=+0.145725019 container init 23e946091eff728ff84675a4ca353d22fcab6817c94e97ff0da37402d7f33087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:11 compute-0 podman[90868]: 2025-12-06 06:30:11.573642938 +0000 UTC m=+0.154060937 container start 23e946091eff728ff84675a4ca353d22fcab6817c94e97ff0da37402d7f33087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dirac, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:30:11 compute-0 podman[90868]: 2025-12-06 06:30:11.582752516 +0000 UTC m=+0.163170555 container attach 23e946091eff728ff84675a4ca353d22fcab6817c94e97ff0da37402d7f33087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dirac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:30:11 compute-0 sudo[90914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtoijtqynkulntzuydklcqcpxcajmors ; /usr/bin/python3'
Dec 06 06:30:11 compute-0 sudo[90914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:30:11 compute-0 python3[90916]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:30:11 compute-0 podman[90917]: 2025-12-06 06:30:11.996124472 +0000 UTC m=+0.103405185 container create ca751d2d053e500d34f4222e7dcbc093494e28c2644df1ee0ade37740d3245c8 (image=quay.io/ceph/ceph:v18, name=vigorous_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:12 compute-0 systemd[1]: Started libpod-conmon-ca751d2d053e500d34f4222e7dcbc093494e28c2644df1ee0ade37740d3245c8.scope.
Dec 06 06:30:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e199e2d897b24c08004a4266e0cb7c12ed240ad6d73b6b85a15e342b4170fa95/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e199e2d897b24c08004a4266e0cb7c12ed240ad6d73b6b85a15e342b4170fa95/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:12 compute-0 podman[90917]: 2025-12-06 06:30:11.974544393 +0000 UTC m=+0.081825116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:30:12 compute-0 podman[90917]: 2025-12-06 06:30:12.074036899 +0000 UTC m=+0.181317622 container init ca751d2d053e500d34f4222e7dcbc093494e28c2644df1ee0ade37740d3245c8 (image=quay.io/ceph/ceph:v18, name=vigorous_lumiere, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:30:12 compute-0 podman[90917]: 2025-12-06 06:30:12.08031436 +0000 UTC m=+0.187595063 container start ca751d2d053e500d34f4222e7dcbc093494e28c2644df1ee0ade37740d3245c8 (image=quay.io/ceph/ceph:v18, name=vigorous_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 06:30:12 compute-0 podman[90917]: 2025-12-06 06:30:12.084855964 +0000 UTC m=+0.192136667 container attach ca751d2d053e500d34f4222e7dcbc093494e28c2644df1ee0ade37740d3245c8 (image=quay.io/ceph/ceph:v18, name=vigorous_lumiere, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:12 compute-0 beautiful_dirac[90886]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:30:12 compute-0 beautiful_dirac[90886]: --> relative data size: 1.0
Dec 06 06:30:12 compute-0 beautiful_dirac[90886]: --> All data devices are unavailable
Dec 06 06:30:12 compute-0 systemd[1]: libpod-23e946091eff728ff84675a4ca353d22fcab6817c94e97ff0da37402d7f33087.scope: Deactivated successfully.
Dec 06 06:30:12 compute-0 conmon[90886]: conmon 23e946091eff728ff846 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23e946091eff728ff84675a4ca353d22fcab6817c94e97ff0da37402d7f33087.scope/container/memory.events
Dec 06 06:30:12 compute-0 podman[90868]: 2025-12-06 06:30:12.560236323 +0000 UTC m=+1.140654362 container died 23e946091eff728ff84675a4ca353d22fcab6817c94e97ff0da37402d7f33087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dirac, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d563fb37eb37d249df62094920d857eb6686f8b3daa19d3b608786eba5be4239-merged.mount: Deactivated successfully.
Dec 06 06:30:12 compute-0 podman[90868]: 2025-12-06 06:30:12.621975388 +0000 UTC m=+1.202393397 container remove 23e946091eff728ff84675a4ca353d22fcab6817c94e97ff0da37402d7f33087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dirac, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:30:12 compute-0 systemd[1]: libpod-conmon-23e946091eff728ff84675a4ca353d22fcab6817c94e97ff0da37402d7f33087.scope: Deactivated successfully.
Dec 06 06:30:12 compute-0 sudo[90747]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:12 compute-0 ceph-mon[74339]: 5.3 scrub starts
Dec 06 06:30:12 compute-0 ceph-mon[74339]: 5.3 scrub ok
Dec 06 06:30:12 compute-0 ceph-mon[74339]: 7.1 scrub starts
Dec 06 06:30:12 compute-0 ceph-mon[74339]: 7.1 scrub ok
Dec 06 06:30:12 compute-0 sudo[90977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:30:12 compute-0 sudo[90977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:12 compute-0 sudo[90977]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:12 compute-0 sudo[91002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:30:12 compute-0 sudo[91002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:12 compute-0 sudo[91002]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Dec 06 06:30:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3873500984' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 06 06:30:12 compute-0 vigorous_lumiere[90932]: 
Dec 06 06:30:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:12 compute-0 vigorous_lumiere[90932]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":2},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":8}}
Dec 06 06:30:12 compute-0 systemd[1]: libpod-ca751d2d053e500d34f4222e7dcbc093494e28c2644df1ee0ade37740d3245c8.scope: Deactivated successfully.
Dec 06 06:30:12 compute-0 podman[90917]: 2025-12-06 06:30:12.832189937 +0000 UTC m=+0.939470650 container died ca751d2d053e500d34f4222e7dcbc093494e28c2644df1ee0ade37740d3245c8 (image=quay.io/ceph/ceph:v18, name=vigorous_lumiere, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 06:30:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:30:12
Dec 06 06:30:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:30:12 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:30:12 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta']
Dec 06 06:30:12 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e199e2d897b24c08004a4266e0cb7c12ed240ad6d73b6b85a15e342b4170fa95-merged.mount: Deactivated successfully.
Dec 06 06:30:12 compute-0 sudo[91028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:30:12 compute-0 sudo[91028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:12 compute-0 sudo[91028]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:12 compute-0 podman[90917]: 2025-12-06 06:30:12.887677303 +0000 UTC m=+0.994958006 container remove ca751d2d053e500d34f4222e7dcbc093494e28c2644df1ee0ade37740d3245c8 (image=quay.io/ceph/ceph:v18, name=vigorous_lumiere, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 06:30:12 compute-0 systemd[1]: libpod-conmon-ca751d2d053e500d34f4222e7dcbc093494e28c2644df1ee0ade37740d3245c8.scope: Deactivated successfully.
Dec 06 06:30:12 compute-0 sudo[90914]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:30:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:30:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:30:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:30:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:30:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:30:12 compute-0 sudo[91066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:30:12 compute-0 sudo[91066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:13 compute-0 podman[91131]: 2025-12-06 06:30:13.27329427 +0000 UTC m=+0.045917795 container create 7a88a3ff5ba4a95dc1629b0cb5e4c80e6da124c025420bf9fad92e1f33567ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:30:13 compute-0 systemd[1]: Started libpod-conmon-7a88a3ff5ba4a95dc1629b0cb5e4c80e6da124c025420bf9fad92e1f33567ee2.scope.
Dec 06 06:30:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:13 compute-0 podman[91131]: 2025-12-06 06:30:13.339356644 +0000 UTC m=+0.111980179 container init 7a88a3ff5ba4a95dc1629b0cb5e4c80e6da124c025420bf9fad92e1f33567ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_saha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:30:13 compute-0 podman[91131]: 2025-12-06 06:30:13.345792269 +0000 UTC m=+0.118415804 container start 7a88a3ff5ba4a95dc1629b0cb5e4c80e6da124c025420bf9fad92e1f33567ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 06:30:13 compute-0 podman[91131]: 2025-12-06 06:30:13.255029302 +0000 UTC m=+0.027652837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:30:13 compute-0 podman[91131]: 2025-12-06 06:30:13.349320605 +0000 UTC m=+0.121944150 container attach 7a88a3ff5ba4a95dc1629b0cb5e4c80e6da124c025420bf9fad92e1f33567ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:30:13 compute-0 cool_saha[91148]: 167 167
Dec 06 06:30:13 compute-0 systemd[1]: libpod-7a88a3ff5ba4a95dc1629b0cb5e4c80e6da124c025420bf9fad92e1f33567ee2.scope: Deactivated successfully.
Dec 06 06:30:13 compute-0 podman[91131]: 2025-12-06 06:30:13.352150212 +0000 UTC m=+0.124773737 container died 7a88a3ff5ba4a95dc1629b0cb5e4c80e6da124c025420bf9fad92e1f33567ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_saha, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3a5eaa100018fe55da9ca10d4364a403e2e509bd303d1be5cd4f78499e54e6d-merged.mount: Deactivated successfully.
Dec 06 06:30:13 compute-0 podman[91131]: 2025-12-06 06:30:13.399234108 +0000 UTC m=+0.171857643 container remove 7a88a3ff5ba4a95dc1629b0cb5e4c80e6da124c025420bf9fad92e1f33567ee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_saha, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 06:30:13 compute-0 systemd[1]: libpod-conmon-7a88a3ff5ba4a95dc1629b0cb5e4c80e6da124c025420bf9fad92e1f33567ee2.scope: Deactivated successfully.
Dec 06 06:30:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"} v 0) v1
Dec 06 06:30:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"}]: dispatch
Dec 06 06:30:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 06 06:30:13 compute-0 podman[91172]: 2025-12-06 06:30:13.582495052 +0000 UTC m=+0.047467107 container create 8f65fd9f4ef9b84b0ca139087d40aaa15dd338a215807bd341d94032e688a24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:13 compute-0 systemd[1]: Started libpod-conmon-8f65fd9f4ef9b84b0ca139087d40aaa15dd338a215807bd341d94032e688a24c.scope.
Dec 06 06:30:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b88de632e90c3d7613f2a85d2e0b812e0350774129d87360a2b494011e6955b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b88de632e90c3d7613f2a85d2e0b812e0350774129d87360a2b494011e6955b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b88de632e90c3d7613f2a85d2e0b812e0350774129d87360a2b494011e6955b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b88de632e90c3d7613f2a85d2e0b812e0350774129d87360a2b494011e6955b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:13 compute-0 podman[91172]: 2025-12-06 06:30:13.560628535 +0000 UTC m=+0.025600620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:30:13 compute-0 podman[91172]: 2025-12-06 06:30:13.666754932 +0000 UTC m=+0.131726997 container init 8f65fd9f4ef9b84b0ca139087d40aaa15dd338a215807bd341d94032e688a24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:30:13 compute-0 podman[91172]: 2025-12-06 06:30:13.67472318 +0000 UTC m=+0.139695225 container start 8f65fd9f4ef9b84b0ca139087d40aaa15dd338a215807bd341d94032e688a24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:30:13 compute-0 podman[91172]: 2025-12-06 06:30:13.677820804 +0000 UTC m=+0.142792869 container attach 8f65fd9f4ef9b84b0ca139087d40aaa15dd338a215807bd341d94032e688a24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 06:30:14 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 06 06:30:14 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 06 06:30:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"}]': finished
Dec 06 06:30:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e44 e44: 3 total, 2 up, 3 in
Dec 06 06:30:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 2 up, 3 in
Dec 06 06:30:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 06 06:30:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:14 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 06:30:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:14 compute-0 great_grothendieck[91189]: {
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:     "0": [
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:         {
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             "devices": [
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "/dev/loop3"
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             ],
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             "lv_name": "ceph_lv0",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             "lv_size": "7511998464",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             "name": "ceph_lv0",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             "tags": {
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "ceph.cluster_name": "ceph",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "ceph.crush_device_class": "",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "ceph.encrypted": "0",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "ceph.osd_id": "0",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "ceph.type": "block",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:                 "ceph.vdo": "0"
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             },
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             "type": "block",
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:             "vg_name": "ceph_vg0"
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:         }
Dec 06 06:30:14 compute-0 great_grothendieck[91189]:     ]
Dec 06 06:30:14 compute-0 great_grothendieck[91189]: }
Dec 06 06:30:14 compute-0 systemd[1]: libpod-8f65fd9f4ef9b84b0ca139087d40aaa15dd338a215807bd341d94032e688a24c.scope: Deactivated successfully.
Dec 06 06:30:14 compute-0 podman[91172]: 2025-12-06 06:30:14.564879322 +0000 UTC m=+1.029851367 container died 8f65fd9f4ef9b84b0ca139087d40aaa15dd338a215807bd341d94032e688a24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:30:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b88de632e90c3d7613f2a85d2e0b812e0350774129d87360a2b494011e6955b-merged.mount: Deactivated successfully.
Dec 06 06:30:14 compute-0 podman[91172]: 2025-12-06 06:30:14.641620787 +0000 UTC m=+1.106592832 container remove 8f65fd9f4ef9b84b0ca139087d40aaa15dd338a215807bd341d94032e688a24c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 06:30:14 compute-0 systemd[1]: libpod-conmon-8f65fd9f4ef9b84b0ca139087d40aaa15dd338a215807bd341d94032e688a24c.scope: Deactivated successfully.
Dec 06 06:30:14 compute-0 sudo[91066]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:14 compute-0 sudo[91212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:30:14 compute-0 sudo[91212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:14 compute-0 sudo[91212]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:14 compute-0 sudo[91237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:30:14 compute-0 sudo[91237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:14 compute-0 sudo[91237]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:14 compute-0 sudo[91262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:30:14 compute-0 sudo[91262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:14 compute-0 sudo[91262]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:14 compute-0 sudo[91287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:30:14 compute-0 sudo[91287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:14 compute-0 ceph-mon[74339]: 4.1b scrub starts
Dec 06 06:30:14 compute-0 ceph-mon[74339]: 4.1b scrub ok
Dec 06 06:30:14 compute-0 ceph-mon[74339]: pgmap v156: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3873500984' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec 06 06:30:14 compute-0 ceph-mon[74339]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"}]: dispatch
Dec 06 06:30:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1558451655' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"}]: dispatch
Dec 06 06:30:15 compute-0 podman[91352]: 2025-12-06 06:30:15.348339671 +0000 UTC m=+0.044615379 container create dec06e491a5f0529ecd4db46fefdc516bc23a102a52fc6a3f4c254e5a700373a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_montalcini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:15 compute-0 systemd[1]: Started libpod-conmon-dec06e491a5f0529ecd4db46fefdc516bc23a102a52fc6a3f4c254e5a700373a.scope.
Dec 06 06:30:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:15 compute-0 podman[91352]: 2025-12-06 06:30:15.332216201 +0000 UTC m=+0.028491929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:30:15 compute-0 podman[91352]: 2025-12-06 06:30:15.438172744 +0000 UTC m=+0.134448462 container init dec06e491a5f0529ecd4db46fefdc516bc23a102a52fc6a3f4c254e5a700373a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_montalcini, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 06:30:15 compute-0 podman[91352]: 2025-12-06 06:30:15.445291409 +0000 UTC m=+0.141567107 container start dec06e491a5f0529ecd4db46fefdc516bc23a102a52fc6a3f4c254e5a700373a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:30:15 compute-0 podman[91352]: 2025-12-06 06:30:15.448734112 +0000 UTC m=+0.145009840 container attach dec06e491a5f0529ecd4db46fefdc516bc23a102a52fc6a3f4c254e5a700373a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_montalcini, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:30:15 compute-0 stoic_montalcini[91369]: 167 167
Dec 06 06:30:15 compute-0 systemd[1]: libpod-dec06e491a5f0529ecd4db46fefdc516bc23a102a52fc6a3f4c254e5a700373a.scope: Deactivated successfully.
Dec 06 06:30:15 compute-0 podman[91352]: 2025-12-06 06:30:15.452582767 +0000 UTC m=+0.148858475 container died dec06e491a5f0529ecd4db46fefdc516bc23a102a52fc6a3f4c254e5a700373a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_montalcini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 06:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a095701ec8b717100a2a43c628eba03fc2f2ed5886f94eb6fc0e63cefe7a08a-merged.mount: Deactivated successfully.
Dec 06 06:30:15 compute-0 podman[91352]: 2025-12-06 06:30:15.505131042 +0000 UTC m=+0.201406750 container remove dec06e491a5f0529ecd4db46fefdc516bc23a102a52fc6a3f4c254e5a700373a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 06:30:15 compute-0 systemd[1]: libpod-conmon-dec06e491a5f0529ecd4db46fefdc516bc23a102a52fc6a3f4c254e5a700373a.scope: Deactivated successfully.
Dec 06 06:30:15 compute-0 podman[91394]: 2025-12-06 06:30:15.695598692 +0000 UTC m=+0.050682045 container create 5e1095f7e71946a65c8291b4267214df100a26ce21598441bc2d2a1a6d779b23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 06:30:15 compute-0 systemd[1]: Started libpod-conmon-5e1095f7e71946a65c8291b4267214df100a26ce21598441bc2d2a1a6d779b23.scope.
Dec 06 06:30:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30dcec3dd43f4c389fb8731af49092558b67cd4c03575932aa4e82ec2487b52c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30dcec3dd43f4c389fb8731af49092558b67cd4c03575932aa4e82ec2487b52c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30dcec3dd43f4c389fb8731af49092558b67cd4c03575932aa4e82ec2487b52c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30dcec3dd43f4c389fb8731af49092558b67cd4c03575932aa4e82ec2487b52c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:15 compute-0 podman[91394]: 2025-12-06 06:30:15.67575416 +0000 UTC m=+0.030837533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:30:15 compute-0 podman[91394]: 2025-12-06 06:30:15.794556363 +0000 UTC m=+0.149639736 container init 5e1095f7e71946a65c8291b4267214df100a26ce21598441bc2d2a1a6d779b23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:30:15 compute-0 podman[91394]: 2025-12-06 06:30:15.800930328 +0000 UTC m=+0.156013681 container start 5e1095f7e71946a65c8291b4267214df100a26ce21598441bc2d2a1a6d779b23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:30:15 compute-0 podman[91394]: 2025-12-06 06:30:15.835642655 +0000 UTC m=+0.190726038 container attach 5e1095f7e71946a65c8291b4267214df100a26ce21598441bc2d2a1a6d779b23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 06:30:15 compute-0 ceph-mon[74339]: 5.5 scrub starts
Dec 06 06:30:15 compute-0 ceph-mon[74339]: 5.5 scrub ok
Dec 06 06:30:15 compute-0 ceph-mon[74339]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e2493db6-bc13-4dc0-b3a7-5b4b07811cd5"}]': finished
Dec 06 06:30:15 compute-0 ceph-mon[74339]: osdmap e44: 3 total, 2 up, 3 in
Dec 06 06:30:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:15 compute-0 ceph-mon[74339]: pgmap v158: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1378465590' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 06 06:30:16 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.6 deep-scrub starts
Dec 06 06:30:16 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.6 deep-scrub ok
Dec 06 06:30:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:16 compute-0 stupefied_black[91410]: {
Dec 06 06:30:16 compute-0 stupefied_black[91410]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:30:16 compute-0 stupefied_black[91410]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:30:16 compute-0 stupefied_black[91410]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:30:16 compute-0 stupefied_black[91410]:         "osd_id": 0,
Dec 06 06:30:16 compute-0 stupefied_black[91410]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:30:16 compute-0 stupefied_black[91410]:         "type": "bluestore"
Dec 06 06:30:16 compute-0 stupefied_black[91410]:     }
Dec 06 06:30:16 compute-0 stupefied_black[91410]: }
Dec 06 06:30:16 compute-0 systemd[1]: libpod-5e1095f7e71946a65c8291b4267214df100a26ce21598441bc2d2a1a6d779b23.scope: Deactivated successfully.
Dec 06 06:30:16 compute-0 podman[91394]: 2025-12-06 06:30:16.686507105 +0000 UTC m=+1.041590458 container died 5e1095f7e71946a65c8291b4267214df100a26ce21598441bc2d2a1a6d779b23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-30dcec3dd43f4c389fb8731af49092558b67cd4c03575932aa4e82ec2487b52c-merged.mount: Deactivated successfully.
Dec 06 06:30:16 compute-0 podman[91394]: 2025-12-06 06:30:16.749536346 +0000 UTC m=+1.104619699 container remove 5e1095f7e71946a65c8291b4267214df100a26ce21598441bc2d2a1a6d779b23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_black, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:30:16 compute-0 systemd[1]: libpod-conmon-5e1095f7e71946a65c8291b4267214df100a26ce21598441bc2d2a1a6d779b23.scope: Deactivated successfully.
Dec 06 06:30:16 compute-0 sudo[91287]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:30:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:30:17 compute-0 ceph-mon[74339]: 5.6 deep-scrub starts
Dec 06 06:30:17 compute-0 ceph-mon[74339]: 5.6 deep-scrub ok
Dec 06 06:30:17 compute-0 ceph-mon[74339]: pgmap v159: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 6.161449609156896e-05 of space, bias 1.0, pg target 0.012322899218313792 quantized to 1 (current 1)
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 16)
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Dec 06 06:30:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:30:18 compute-0 ceph-mon[74339]: 5.1a deep-scrub starts
Dec 06 06:30:18 compute-0 ceph-mon[74339]: 5.1a deep-scrub ok
Dec 06 06:30:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:19 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 06 06:30:19 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 06 06:30:19 compute-0 ceph-mon[74339]: 3.1a scrub starts
Dec 06 06:30:19 compute-0 ceph-mon[74339]: 3.1a scrub ok
Dec 06 06:30:19 compute-0 ceph-mon[74339]: pgmap v160: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Dec 06 06:30:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 06 06:30:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:30:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:20 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Dec 06 06:30:20 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Dec 06 06:30:21 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 06 06:30:21 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 06 06:30:21 compute-0 ceph-mon[74339]: 4.2 scrub starts
Dec 06 06:30:21 compute-0 ceph-mon[74339]: 4.2 scrub ok
Dec 06 06:30:21 compute-0 ceph-mon[74339]: pgmap v161: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:21 compute-0 ceph-mon[74339]: 2.7 scrub starts
Dec 06 06:30:21 compute-0 ceph-mon[74339]: 2.7 scrub ok
Dec 06 06:30:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 06 06:30:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:22 compute-0 ceph-mon[74339]: Deploying daemon osd.2 on compute-2
Dec 06 06:30:22 compute-0 ceph-mon[74339]: 4.3 scrub starts
Dec 06 06:30:22 compute-0 ceph-mon[74339]: 4.3 scrub ok
Dec 06 06:30:22 compute-0 ceph-mon[74339]: 5.e scrub starts
Dec 06 06:30:22 compute-0 ceph-mon[74339]: 5.e scrub ok
Dec 06 06:30:22 compute-0 ceph-mon[74339]: pgmap v162: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:30:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:30:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:30:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:30:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:30:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:30:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:30:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:30:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:30:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:30:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:24 compute-0 ceph-mon[74339]: pgmap v163: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:25 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 06 06:30:25 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 06 06:30:26 compute-0 ceph-mon[74339]: 6.d deep-scrub starts
Dec 06 06:30:26 compute-0 ceph-mon[74339]: 6.d deep-scrub ok
Dec 06 06:30:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:26 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Dec 06 06:30:26 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Dec 06 06:30:27 compute-0 ceph-mon[74339]: 5.8 scrub starts
Dec 06 06:30:27 compute-0 ceph-mon[74339]: 5.8 scrub ok
Dec 06 06:30:27 compute-0 ceph-mon[74339]: pgmap v164: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:27 compute-0 ceph-mon[74339]: 4.4 deep-scrub starts
Dec 06 06:30:27 compute-0 ceph-mon[74339]: 4.4 deep-scrub ok
Dec 06 06:30:27 compute-0 ceph-mon[74339]: 2.8 deep-scrub starts
Dec 06 06:30:27 compute-0 ceph-mon[74339]: 2.8 deep-scrub ok
Dec 06 06:30:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:30:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:28 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 06 06:30:28 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 06 06:30:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:30:31 compute-0 ceph-mon[74339]: pgmap v165: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Dec 06 06:30:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 06 06:30:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 06 06:30:32 compute-0 ceph-mon[74339]: 5.a scrub starts
Dec 06 06:30:32 compute-0 ceph-mon[74339]: 5.a scrub ok
Dec 06 06:30:32 compute-0 ceph-mon[74339]: pgmap v166: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:32 compute-0 ceph-mon[74339]: 5.1c scrub starts
Dec 06 06:30:32 compute-0 ceph-mon[74339]: 5.1c scrub ok
Dec 06 06:30:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:32 compute-0 ceph-mon[74339]: from='osd.2 [v2:192.168.122.102:6800/3451812493,v1:192.168.122.102:6801/3451812493]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 06 06:30:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:32 compute-0 ceph-mon[74339]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 06 06:30:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:30:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 06 06:30:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e45 e45: 3 total, 2 up, 3 in
Dec 06 06:30:34 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 2 up, 3 in
Dec 06 06:30:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 06 06:30:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:34 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 06:30:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:35 compute-0 ceph-mon[74339]: pgmap v167: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:30:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Dec 06 06:30:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 06 06:30:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e45 create-or-move crush item name 'osd.2' initial_weight 0.0068000000000000005 at location {host=compute-2,root=default}
Dec 06 06:30:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 06 06:30:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:36 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 06 06:30:36 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 06 06:30:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Dec 06 06:30:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e46 e46: 3 total, 2 up, 3 in
Dec 06 06:30:36 compute-0 ceph-mon[74339]: purged_snaps scrub starts
Dec 06 06:30:36 compute-0 ceph-mon[74339]: purged_snaps scrub ok
Dec 06 06:30:36 compute-0 ceph-mon[74339]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 06 06:30:36 compute-0 ceph-mon[74339]: osdmap e45: 3 total, 2 up, 3 in
Dec 06 06:30:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:36 compute-0 ceph-mon[74339]: from='osd.2 [v2:192.168.122.102:6800/3451812493,v1:192.168.122.102:6801/3451812493]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 06 06:30:36 compute-0 ceph-mon[74339]: pgmap v169: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:36 compute-0 ceph-mon[74339]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 06 06:30:36 compute-0 ceph-mon[74339]: 4.1 scrub starts
Dec 06 06:30:36 compute-0 ceph-mon[74339]: 4.1 scrub ok
Dec 06 06:30:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:36 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 2 up, 3 in
Dec 06 06:30:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 06 06:30:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:36 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 06:30:36 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 9ce85532-9497-4227-bc82-a695d4277833 (Updating rgw.rgw deployment (+3 -> 3))
Dec 06 06:30:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oieczf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Dec 06 06:30:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oieczf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 06:30:36 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3451812493; not ready for session (expect reconnect)
Dec 06 06:30:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 06 06:30:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:36 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 06:30:36 compute-0 sshd-session[91446]: error: kex_exchange_identification: read: Connection reset by peer
Dec 06 06:30:36 compute-0 sshd-session[91446]: Connection reset by 45.140.17.97 port 28313
Dec 06 06:30:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:37 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3451812493; not ready for session (expect reconnect)
Dec 06 06:30:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 06 06:30:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:37 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 06:30:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oieczf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 06:30:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Dec 06 06:30:38 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3451812493; not ready for session (expect reconnect)
Dec 06 06:30:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 06 06:30:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:38 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.372704506s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.784057617s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.372220993s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.783660889s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.372704506s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.784057617s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.372220993s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783660889s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.1b( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.889240265s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 160.300781250s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=46 pruub=13.353129387s) [] r=-1 lpr=46 pi=[32,46)/1 crt=0'0 mlcod 0'0 active pruub 158.764694214s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.372019768s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.783676147s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.372019768s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783676147s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[3.1b( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=46 pruub=13.353129387s) [] r=-1 lpr=46 pi=[32,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.764694214s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[7.1d( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888999939s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 160.300933838s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[3.8( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=46 pruub=13.352715492s) [] r=-1 lpr=46 pi=[32,46)/1 crt=0'0 mlcod 0'0 active pruub 158.764678955s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[3.8( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=46 pruub=13.352715492s) [] r=-1 lpr=46 pi=[32,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.764678955s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.371353149s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.783401489s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[7.1d( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888999939s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.300933838s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.371353149s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783401489s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=46 pruub=12.441337585s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 active pruub 157.853530884s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[6.1( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=46 pruub=12.441337585s) [] r=-1 lpr=46 pi=[36,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 157.853530884s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.371168137s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.783401489s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.371168137s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783401489s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.370873451s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.783203125s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.370873451s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783203125s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[5.0( empty local-lis/les=34/35 n=0 ec=18/18 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.370573044s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.783020020s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[3.0( empty local-lis/les=32/33 n=0 ec=14/14 lis/c=32/32 les/c/f=33/33/0 sis=46 pruub=13.352206230s) [] r=-1 lpr=46 pi=[32,46)/1 crt=0'0 mlcod 0'0 active pruub 158.764709473s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[5.0( empty local-lis/les=34/35 n=0 ec=18/18 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.370573044s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783020020s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[3.0( empty local-lis/les=32/33 n=0 ec=14/14 lis/c=32/32 les/c/f=33/33/0 sis=46 pruub=13.352206230s) [] r=-1 lpr=46 pi=[32,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.764709473s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.1b( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.889240265s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.300781250s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[5.d( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.369571686s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.782348633s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.a( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888561249s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 160.301345825s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[5.d( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.369571686s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782348633s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[5.b( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.369695663s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.782546997s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.d( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888528824s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 160.301376343s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[5.b( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.369695663s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782546997s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.a( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888561249s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301345825s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.d( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888528824s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301376343s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.c( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888347626s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 160.301376343s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[5.8( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.369244576s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.782333374s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.c( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888347626s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301376343s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888246536s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 160.301376343s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[5.8( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.369244576s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782333374s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[7.14( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888495445s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 160.301727295s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[7.14( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888495445s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301727295s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.10( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888304710s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 160.301589966s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.887997627s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 160.301406860s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.13( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.887997627s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301406860s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.10( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888304710s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301589966s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.888246536s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301376343s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.368855476s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.782348633s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.368855476s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782348633s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.15( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.887948990s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 160.301574707s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[5.13( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.368290901s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.781951904s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[2.15( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.887948990s) [] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301574707s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[5.13( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.368290901s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.781951904s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[5.12( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.368638039s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 active pruub 160.782424927s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:39 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 46 pg[5.12( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=46 pruub=15.368638039s) [] r=-1 lpr=46 pi=[34,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782424927s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:39 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3451812493; not ready for session (expect reconnect)
Dec 06 06:30:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 06 06:30:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:39 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 06:30:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:40 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3451812493; not ready for session (expect reconnect)
Dec 06 06:30:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 06 06:30:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:40 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 06:30:40 compute-0 ceph-mon[74339]: pgmap v170: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:40 compute-0 ceph-mon[74339]: 4.6 scrub starts
Dec 06 06:30:40 compute-0 ceph-mon[74339]: 4.6 scrub ok
Dec 06 06:30:40 compute-0 ceph-mon[74339]: 5.1f scrub starts
Dec 06 06:30:40 compute-0 ceph-mon[74339]: 5.1f scrub ok
Dec 06 06:30:40 compute-0 ceph-mon[74339]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Dec 06 06:30:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:40 compute-0 ceph-mon[74339]: osdmap e46: 3 total, 2 up, 3 in
Dec 06 06:30:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oieczf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 06:30:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:40 compute-0 ceph-mon[74339]: 2.b scrub starts
Dec 06 06:30:41 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3451812493; not ready for session (expect reconnect)
Dec 06 06:30:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 06 06:30:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:41 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 06:30:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:42 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 06 06:30:42 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 06 06:30:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:30:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:42 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.oieczf on compute-2
Dec 06 06:30:42 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.oieczf on compute-2
Dec 06 06:30:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:42 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3451812493; not ready for session (expect reconnect)
Dec 06 06:30:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 06 06:30:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:42 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 06:30:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:30:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:30:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:30:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:30:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:30:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:30:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:43 compute-0 ceph-mon[74339]: pgmap v172: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oieczf", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 06:30:43 compute-0 ceph-mon[74339]: 6.2 deep-scrub starts
Dec 06 06:30:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:43 compute-0 ceph-mon[74339]: pgmap v173: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:43 compute-0 ceph-mon[74339]: 6.2 deep-scrub ok
Dec 06 06:30:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:43 compute-0 ceph-mon[74339]: 6.5 scrub starts
Dec 06 06:30:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:43 compute-0 ceph-mgr[74630]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/3451812493; not ready for session (expect reconnect)
Dec 06 06:30:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 06 06:30:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:43 compute-0 ceph-mgr[74630]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 06 06:30:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 06 06:30:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:44 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 06 06:30:44 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 06 06:30:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:30:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 06 06:30:44 compute-0 ceph-mon[74339]: 6.5 scrub ok
Dec 06 06:30:44 compute-0 ceph-mon[74339]: pgmap v174: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:44 compute-0 ceph-mon[74339]: 5.c scrub starts
Dec 06 06:30:44 compute-0 ceph-mon[74339]: 5.c scrub ok
Dec 06 06:30:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:44 compute-0 ceph-mon[74339]: Deploying daemon rgw.rgw.compute-2.oieczf on compute-2
Dec 06 06:30:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:44 compute-0 ceph-mon[74339]: OSD bench result of 5358.078177 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 06 06:30:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[3.1b( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=7.998089314s) [2] r=-1 lpr=47 pi=[32,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.764694214s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.1c( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.017040253s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783660889s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.1b( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.534152031s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.300781250s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.1c( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.016964912s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783660889s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[3.1b( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=7.997814178s) [2] r=-1 lpr=47 pi=[32,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.764694214s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.1d( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.016732216s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783676147s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.1d( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.016709328s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783676147s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.19( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.016978264s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.784057617s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[7.1d( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533785820s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.300933838s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.1b( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533668518s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.300781250s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.19( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.016914368s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.784057617s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[3.8( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=7.997389317s) [2] r=-1 lpr=47 pi=[32,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.764678955s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[3.8( empty local-lis/les=32/33 n=0 ec=32/14 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=7.997355461s) [2] r=-1 lpr=47 pi=[32,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.764678955s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.3( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.016028404s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783401489s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[6.1( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=47 pruub=7.086114407s) [2] r=-1 lpr=47 pi=[36,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 157.853530884s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[6.1( empty local-lis/les=36/39 n=0 ec=36/20 lis/c=36/36 les/c/f=39/39/0 sis=47 pruub=7.086087704s) [2] r=-1 lpr=47 pi=[36,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 157.853530884s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.3( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.015954971s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783401489s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.6( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.015913963s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783401489s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.6( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.015888214s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783401489s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/3451812493,v1:192.168.122.102:6801/3451812493] boot
Dec 06 06:30:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.2( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.015497208s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783203125s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[5.0( empty local-lis/les=34/35 n=0 ec=18/18 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.015289307s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783020020s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[5.0( empty local-lis/les=34/35 n=0 ec=18/18 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.015262604s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783020020s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[3.0( empty local-lis/les=32/33 n=0 ec=14/14 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=7.996904373s) [2] r=-1 lpr=47 pi=[32,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.764709473s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[3.0( empty local-lis/les=32/33 n=0 ec=14/14 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=7.996878624s) [2] r=-1 lpr=47 pi=[32,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 158.764709473s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.2( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.015467644s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.783203125s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.a( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533411026s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301345825s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[5.d( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.014372826s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782348633s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.a( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533382416s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301345825s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[5.d( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.014344215s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782348633s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.d( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533326149s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301376343s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.d( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533301353s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301376343s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[5.b( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.014443398s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782546997s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.c( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533228874s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301376343s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[5.b( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.014377594s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782546997s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[5.8( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.014145851s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782333374s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[5.8( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.014121056s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782333374s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533142090s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301376343s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.c( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533180237s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301376343s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[7.14( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533368111s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301727295s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[7.a( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533075333s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301376343s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.10( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533099174s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301589966s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[7.1d( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533759117s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.300933838s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[7.14( empty local-lis/les=42/43 n=0 ec=40/22 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533272743s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301727295s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.10( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.533057213s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301589966s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[5.12( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.013731956s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782424927s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.14( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.013636589s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782348633s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[5.12( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.013694763s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782424927s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[5.13( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.013108253s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.781951904s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[5.13( empty local-lis/les=34/35 n=0 ec=34/18 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.013090134s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.781951904s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[4.14( empty local-lis/les=34/35 n=0 ec=34/16 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.013608932s) [2] r=-1 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.782348633s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.13( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.532367706s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301406860s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.15( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.532834053s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301574707s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.13( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.532350540s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301406860s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 47 pg[2.15( empty local-lis/les=42/43 n=0 ec=40/13 lis/c=42/42 les/c/f=43/43/0 sis=47 pruub=9.532445908s) [2] r=-1 lpr=47 pi=[42,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 160.301574707s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:30:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec 06 06:30:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 27 peering, 150 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:46 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 06 06:30:46 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 06 06:30:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:30:47 compute-0 ceph-mon[74339]: pgmap v175: 177 pgs: 177 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Dec 06 06:30:47 compute-0 ceph-mon[74339]: 4.7 scrub starts
Dec 06 06:30:47 compute-0 ceph-mon[74339]: 4.7 scrub ok
Dec 06 06:30:47 compute-0 ceph-mon[74339]: osd.2 [v2:192.168.122.102:6800/3451812493,v1:192.168.122.102:6801/3451812493] boot
Dec 06 06:30:47 compute-0 ceph-mon[74339]: osdmap e47: 3 total, 3 up, 3 in
Dec 06 06:30:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 06 06:30:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 06 06:30:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 27 peering, 150 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 06 06:30:50 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 06 06:30:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 52 peering, 125 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:50 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 06 06:30:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 06 06:30:50 compute-0 ceph-mon[74339]: pgmap v177: 177 pgs: 27 peering, 150 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:50 compute-0 ceph-mon[74339]: 4.b scrub starts
Dec 06 06:30:50 compute-0 ceph-mon[74339]: 4.b scrub ok
Dec 06 06:30:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:50 compute-0 ceph-mon[74339]: 6.3 scrub starts
Dec 06 06:30:50 compute-0 ceph-mon[74339]: pgmap v178: 177 pgs: 27 peering, 150 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:50 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 06 06:30:50 compute-0 ceph-mgr[74630]: [progress WARNING root] Starting Global Recovery Event,53 pgs not in active + clean state
Dec 06 06:30:51 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 06 06:30:51 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 48 pg[8.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [0] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:30:51 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 06 06:30:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 06 06:30:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v181: 178 pgs: 1 unknown, 52 peering, 125 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dmyhav", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Dec 06 06:30:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dmyhav", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 06:30:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Dec 06 06:30:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 06 06:30:53 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 06 06:30:53 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 06 06:30:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 06 06:30:53 compute-0 ceph-mon[74339]: 6.3 scrub ok
Dec 06 06:30:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:53 compute-0 ceph-mon[74339]: 5.14 scrub starts
Dec 06 06:30:53 compute-0 ceph-mon[74339]: pgmap v179: 177 pgs: 52 peering, 125 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:53 compute-0 ceph-mon[74339]: 5.14 scrub ok
Dec 06 06:30:53 compute-0 ceph-mon[74339]: osdmap e48: 3 total, 3 up, 3 in
Dec 06 06:30:53 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 06 06:30:53 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 49 pg[8.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [0] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:30:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dmyhav", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 06:30:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Dec 06 06:30:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:30:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:53 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.dmyhav on compute-1
Dec 06 06:30:53 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.dmyhav on compute-1
Dec 06 06:30:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v183: 178 pgs: 1 unknown, 52 peering, 125 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:54 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:30:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 06 06:30:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 06 06:30:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 06 06:30:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1619933818' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 06 06:30:55 compute-0 ceph-mon[74339]: 5.17 scrub starts
Dec 06 06:30:55 compute-0 ceph-mon[74339]: 5.17 scrub ok
Dec 06 06:30:55 compute-0 ceph-mon[74339]: 3.9 scrub starts
Dec 06 06:30:55 compute-0 ceph-mon[74339]: pgmap v181: 178 pgs: 1 unknown, 52 peering, 125 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:55 compute-0 ceph-mon[74339]: 3.9 scrub ok
Dec 06 06:30:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dmyhav", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 06:30:55 compute-0 ceph-mon[74339]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 06 06:30:55 compute-0 ceph-mon[74339]: 3.10 scrub starts
Dec 06 06:30:55 compute-0 ceph-mon[74339]: 4.f scrub starts
Dec 06 06:30:55 compute-0 ceph-mon[74339]: 4.f scrub ok
Dec 06 06:30:55 compute-0 ceph-mon[74339]: osdmap e49: 3 total, 3 up, 3 in
Dec 06 06:30:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dmyhav", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 06:30:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:55 compute-0 ceph-mon[74339]: 4.1c scrub starts
Dec 06 06:30:55 compute-0 ceph-mon[74339]: 4.1c scrub ok
Dec 06 06:30:55 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 06 06:30:55 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event c87cd527-77c0-4461-bd37-d024aeeef1ab (Global Recovery Event) in 5 seconds
Dec 06 06:30:56 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 06 06:30:56 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 06 06:30:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v185: 178 pgs: 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s
Dec 06 06:30:56 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:30:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 06 06:30:56 compute-0 ceph-mon[74339]: Deploying daemon rgw.rgw.compute-1.dmyhav on compute-1
Dec 06 06:30:56 compute-0 ceph-mon[74339]: pgmap v183: 178 pgs: 1 unknown, 52 peering, 125 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:30:56 compute-0 ceph-mon[74339]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:30:56 compute-0 ceph-mon[74339]: 3.10 scrub ok
Dec 06 06:30:56 compute-0 ceph-mon[74339]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 06 06:30:56 compute-0 ceph-mon[74339]: osdmap e50: 3 total, 3 up, 3 in
Dec 06 06:30:56 compute-0 ceph-mon[74339]: pgmap v185: 178 pgs: 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s
Dec 06 06:30:57 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 06 06:30:57 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 06 06:30:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 06 06:30:57 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 06 06:30:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Dec 06 06:30:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 06 06:30:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:30:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:30:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:30:57 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 51 pg[9.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [0] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:30:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v187: 179 pgs: 1 unknown, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s
Dec 06 06:30:58 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 06 06:30:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 06 06:30:58 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 06 06:30:58 compute-0 ceph-mon[74339]: 4.10 scrub starts
Dec 06 06:30:58 compute-0 ceph-mon[74339]: 4.10 scrub ok
Dec 06 06:30:58 compute-0 ceph-mon[74339]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:30:58 compute-0 ceph-mon[74339]: 7.7 scrub starts
Dec 06 06:30:58 compute-0 ceph-mon[74339]: 7.7 scrub ok
Dec 06 06:30:58 compute-0 ceph-mon[74339]: 5.19 scrub starts
Dec 06 06:30:58 compute-0 ceph-mon[74339]: 5.19 scrub ok
Dec 06 06:30:58 compute-0 ceph-mon[74339]: osdmap e51: 3 total, 3 up, 3 in
Dec 06 06:30:58 compute-0 ceph-mon[74339]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 06 06:30:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1619933818' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 06 06:30:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wqlami", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Dec 06 06:30:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wqlami", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 06:30:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wqlami", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 06:30:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Dec 06 06:30:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:30:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:58 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.wqlami on compute-0
Dec 06 06:30:58 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.wqlami on compute-0
Dec 06 06:30:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 06 06:30:58 compute-0 sudo[91451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:30:58 compute-0 sudo[91451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:58 compute-0 sudo[91451]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 06 06:30:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 06 06:30:58 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 06 06:30:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 52 pg[9.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [0] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:30:58 compute-0 sudo[91476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:30:58 compute-0 sudo[91476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:58 compute-0 sudo[91476]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:58 compute-0 sudo[91505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:30:58 compute-0 sudo[91505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:58 compute-0 sudo[91505]: pam_unix(sudo:session): session closed for user root
Dec 06 06:30:58 compute-0 sudo[91530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:30:58 compute-0 sudo[91530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:30:58 compute-0 sudo[91578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohmxkifplvpbjqwgkyyvpanrmqpcicyn ; /usr/bin/python3'
Dec 06 06:30:58 compute-0 sudo[91578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:30:58 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:30:58 compute-0 python3[91580]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:30:59 compute-0 podman[91614]: 2025-12-06 06:30:59.020917793 +0000 UTC m=+0.039376255 container create 6f0e1953b72b5a32c6288434ff3c245ef044155162bc6d6db194150469c002c9 (image=quay.io/ceph/ceph:v18, name=funny_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 06:30:59 compute-0 podman[91627]: 2025-12-06 06:30:59.052014751 +0000 UTC m=+0.047545147 container create 061f9a073143735236c1f42ed1c18f08849343cd904eb727960e3c6d5c283e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mclaren, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 06:30:59 compute-0 systemd[1]: Started libpod-conmon-6f0e1953b72b5a32c6288434ff3c245ef044155162bc6d6db194150469c002c9.scope.
Dec 06 06:30:59 compute-0 systemd[1]: Started libpod-conmon-061f9a073143735236c1f42ed1c18f08849343cd904eb727960e3c6d5c283e66.scope.
Dec 06 06:30:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6abcab331af9eb912d2fd81f29922845ce87da48fd165ec07890529ec200af/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6abcab331af9eb912d2fd81f29922845ce87da48fd165ec07890529ec200af/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:30:59 compute-0 podman[91614]: 2025-12-06 06:30:59.003321715 +0000 UTC m=+0.021780197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:30:59 compute-0 podman[91614]: 2025-12-06 06:30:59.099462145 +0000 UTC m=+0.117920607 container init 6f0e1953b72b5a32c6288434ff3c245ef044155162bc6d6db194150469c002c9 (image=quay.io/ceph/ceph:v18, name=funny_ramanujan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:30:59 compute-0 podman[91627]: 2025-12-06 06:30:59.102909974 +0000 UTC m=+0.098440390 container init 061f9a073143735236c1f42ed1c18f08849343cd904eb727960e3c6d5c283e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mclaren, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 06:30:59 compute-0 podman[91614]: 2025-12-06 06:30:59.105943873 +0000 UTC m=+0.124402335 container start 6f0e1953b72b5a32c6288434ff3c245ef044155162bc6d6db194150469c002c9 (image=quay.io/ceph/ceph:v18, name=funny_ramanujan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:30:59 compute-0 podman[91627]: 2025-12-06 06:30:59.108695645 +0000 UTC m=+0.104226041 container start 061f9a073143735236c1f42ed1c18f08849343cd904eb727960e3c6d5c283e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mclaren, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:30:59 compute-0 podman[91614]: 2025-12-06 06:30:59.109623059 +0000 UTC m=+0.128081521 container attach 6f0e1953b72b5a32c6288434ff3c245ef044155162bc6d6db194150469c002c9 (image=quay.io/ceph/ceph:v18, name=funny_ramanujan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 06:30:59 compute-0 distracted_mclaren[91654]: 167 167
Dec 06 06:30:59 compute-0 systemd[1]: libpod-061f9a073143735236c1f42ed1c18f08849343cd904eb727960e3c6d5c283e66.scope: Deactivated successfully.
Dec 06 06:30:59 compute-0 conmon[91654]: conmon 061f9a073143735236c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-061f9a073143735236c1f42ed1c18f08849343cd904eb727960e3c6d5c283e66.scope/container/memory.events
Dec 06 06:30:59 compute-0 podman[91627]: 2025-12-06 06:30:59.11428157 +0000 UTC m=+0.109811976 container attach 061f9a073143735236c1f42ed1c18f08849343cd904eb727960e3c6d5c283e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:30:59 compute-0 podman[91627]: 2025-12-06 06:30:59.114636179 +0000 UTC m=+0.110166585 container died 061f9a073143735236c1f42ed1c18f08849343cd904eb727960e3c6d5c283e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 06:30:59 compute-0 podman[91627]: 2025-12-06 06:30:59.031994461 +0000 UTC m=+0.027524877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:30:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d93e80126ab6ac98172e3e0ff3bd849d3f4cc77e3af76a8be6be04b9cb8475fc-merged.mount: Deactivated successfully.
Dec 06 06:30:59 compute-0 podman[91627]: 2025-12-06 06:30:59.164848155 +0000 UTC m=+0.160378551 container remove 061f9a073143735236c1f42ed1c18f08849343cd904eb727960e3c6d5c283e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 06:30:59 compute-0 systemd[1]: libpod-conmon-061f9a073143735236c1f42ed1c18f08849343cd904eb727960e3c6d5c283e66.scope: Deactivated successfully.
Dec 06 06:30:59 compute-0 systemd[1]: Reloading.
Dec 06 06:30:59 compute-0 systemd-sysv-generator[91768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:30:59 compute-0 systemd-rc-local-generator[91765]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:30:59 compute-0 ceph-mon[74339]: pgmap v187: 179 pgs: 1 unknown, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 682 B/s wr, 1 op/s
Dec 06 06:30:59 compute-0 ceph-mon[74339]: 4.11 scrub starts
Dec 06 06:30:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:59 compute-0 ceph-mon[74339]: 4.11 scrub ok
Dec 06 06:30:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wqlami", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 06 06:30:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.wqlami", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 06 06:30:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:30:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:30:59 compute-0 ceph-mon[74339]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 06 06:30:59 compute-0 ceph-mon[74339]: osdmap e52: 3 total, 3 up, 3 in
Dec 06 06:30:59 compute-0 ceph-mon[74339]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:30:59 compute-0 systemd[1]: Reloading.
Dec 06 06:30:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 06 06:30:59 compute-0 systemd-rc-local-generator[91810]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:30:59 compute-0 systemd-sysv-generator[91813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:30:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 06 06:30:59 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 06 06:30:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Dec 06 06:30:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1768129791' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:30:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Dec 06 06:30:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:30:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Dec 06 06:30:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:30:59 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.wqlami for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:31:00 compute-0 podman[91869]: 2025-12-06 06:31:00.053275863 +0000 UTC m=+0.041531991 container create ffa040064f819c5b7e0d97ce69e7305c984c3e8580bb4369f7d9e61ad94c83ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-rgw-rgw-compute-0-wqlami, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 06:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0cfcab8ea7325e6ce93b6ba24f3390ff043336f49b6483bc1b6a449339c3a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0cfcab8ea7325e6ce93b6ba24f3390ff043336f49b6483bc1b6a449339c3a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0cfcab8ea7325e6ce93b6ba24f3390ff043336f49b6483bc1b6a449339c3a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c0cfcab8ea7325e6ce93b6ba24f3390ff043336f49b6483bc1b6a449339c3a5/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.wqlami supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:00 compute-0 podman[91869]: 2025-12-06 06:31:00.107607416 +0000 UTC m=+0.095863554 container init ffa040064f819c5b7e0d97ce69e7305c984c3e8580bb4369f7d9e61ad94c83ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-rgw-rgw-compute-0-wqlami, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:31:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v190: 180 pgs: 2 unknown, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 922 B/s rd, 922 B/s wr, 1 op/s
Dec 06 06:31:00 compute-0 podman[91869]: 2025-12-06 06:31:00.113632603 +0000 UTC m=+0.101888731 container start ffa040064f819c5b7e0d97ce69e7305c984c3e8580bb4369f7d9e61ad94c83ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-rgw-rgw-compute-0-wqlami, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:31:00 compute-0 bash[91869]: ffa040064f819c5b7e0d97ce69e7305c984c3e8580bb4369f7d9e61ad94c83ce
Dec 06 06:31:00 compute-0 podman[91869]: 2025-12-06 06:31:00.03276252 +0000 UTC m=+0.021018648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:31:00 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.wqlami for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:31:00 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 06 06:31:00 compute-0 sudo[91530]: pam_unix(sudo:session): session closed for user root
Dec 06 06:31:00 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 06 06:31:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:31:00 compute-0 radosgw[91889]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:31:00 compute-0 radosgw[91889]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Dec 06 06:31:00 compute-0 radosgw[91889]: framework: beast
Dec 06 06:31:00 compute-0 radosgw[91889]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 06 06:31:00 compute-0 radosgw[91889]: init_numa not setting numa affinity
Dec 06 06:31:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:31:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 06 06:31:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 06 06:31:00 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 14 completed events
Dec 06 06:31:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:31:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1768129791' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 06:31:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 06:31:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 06:31:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 06 06:31:01 compute-0 ceph-mon[74339]: Deploying daemon rgw.rgw.compute-0.wqlami on compute-0
Dec 06 06:31:01 compute-0 ceph-mon[74339]: 4.19 scrub starts
Dec 06 06:31:01 compute-0 ceph-mon[74339]: 4.19 scrub ok
Dec 06 06:31:01 compute-0 ceph-mon[74339]: osdmap e53: 3 total, 3 up, 3 in
Dec 06 06:31:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1768129791' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:31:01 compute-0 ceph-mon[74339]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:31:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/558176623' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:31:01 compute-0 ceph-mon[74339]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:31:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1619933818' entity='client.rgw.rgw.compute-2.oieczf' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 06 06:31:01 compute-0 ceph-mon[74339]: pgmap v190: 180 pgs: 2 unknown, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 922 B/s rd, 922 B/s wr, 1 op/s
Dec 06 06:31:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:01 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 06 06:31:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v192: 180 pgs: 1 creating+peering, 1 active+clean+scrubbing, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 2 op/s
Dec 06 06:31:02 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 06 06:31:02 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 06 06:31:02 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:31:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:02 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 9ce85532-9497-4227-bc82-a695d4277833 (Updating rgw.rgw deployment (+3 -> 3))
Dec 06 06:31:02 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 9ce85532-9497-4227-bc82-a695d4277833 (Updating rgw.rgw deployment (+3 -> 3)) in 26 seconds
Dec 06 06:31:02 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 06:31:02 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 06:31:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 06 06:31:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 06 06:31:04 compute-0 ceph-mon[74339]: 4.12 scrub starts
Dec 06 06:31:04 compute-0 ceph-mon[74339]: 4.12 scrub ok
Dec 06 06:31:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1768129791' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 06:31:04 compute-0 ceph-mon[74339]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 06:31:04 compute-0 ceph-mon[74339]: from='client.? ' entity='client.rgw.rgw.compute-2.oieczf' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 06 06:31:04 compute-0 ceph-mon[74339]: osdmap e54: 3 total, 3 up, 3 in
Dec 06 06:31:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v193: 180 pgs: 1 creating+peering, 1 active+clean+scrubbing, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 1 op/s
Dec 06 06:31:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 06 06:31:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:04 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 06 06:31:04 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 55 pg[11.0( empty local-lis/les=0/0 n=0 ec=55/55 lis/c=0/0 les/c/f=0/0/0 sis=55) [0] r=0 lpr=55 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Dec 06 06:31:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 06:31:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec 06 06:31:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Dec 06 06:31:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 06:31:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Dec 06 06:31:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 06:31:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:05 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 7d758e29-19ed-45f5-897c-a2549bb1cf4e (Updating mds.cephfs deployment (+3 -> 3))
Dec 06 06:31:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.tjfgow", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Dec 06 06:31:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.tjfgow", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 06:31:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 06 06:31:05 compute-0 ceph-mon[74339]: 3.13 scrub starts
Dec 06 06:31:05 compute-0 ceph-mon[74339]: pgmap v192: 180 pgs: 1 creating+peering, 1 active+clean+scrubbing, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 2 op/s
Dec 06 06:31:05 compute-0 ceph-mon[74339]: 5.1d scrub starts
Dec 06 06:31:05 compute-0 ceph-mon[74339]: 5.1d scrub ok
Dec 06 06:31:05 compute-0 ceph-mon[74339]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:31:05 compute-0 ceph-mon[74339]: 3.13 scrub ok
Dec 06 06:31:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:05 compute-0 ceph-mon[74339]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 06 06:31:05 compute-0 ceph-mon[74339]: 4.1d scrub starts
Dec 06 06:31:05 compute-0 ceph-mon[74339]: 4.1d scrub ok
Dec 06 06:31:05 compute-0 ceph-mon[74339]: 5.15 scrub starts
Dec 06 06:31:05 compute-0 ceph-mon[74339]: 5.15 scrub ok
Dec 06 06:31:05 compute-0 ceph-mon[74339]: 3.14 scrub starts
Dec 06 06:31:05 compute-0 ceph-mon[74339]: 3.14 scrub ok
Dec 06 06:31:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:05 compute-0 ceph-mon[74339]: pgmap v193: 180 pgs: 1 creating+peering, 1 active+clean+scrubbing, 178 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 1 op/s
Dec 06 06:31:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:05 compute-0 ceph-mon[74339]: osdmap e55: 3 total, 3 up, 3 in
Dec 06 06:31:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 06:31:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/758001210' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 06:31:05 compute-0 ceph-mon[74339]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 06:31:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 06 06:31:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v195: 181 pgs: 2 creating+peering, 1 active+clean+scrubbing, 178 active+clean; 453 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 5.3 KiB/s wr, 130 op/s
Dec 06 06:31:06 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:31:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 06:31:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 06:31:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 06:31:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 06 06:31:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.tjfgow", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 06:31:06 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 06 06:31:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Dec 06 06:31:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 06:31:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Dec 06 06:31:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 06:31:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:31:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:31:06 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.tjfgow on compute-2
Dec 06 06:31:06 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.tjfgow on compute-2
Dec 06 06:31:06 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 56 pg[11.0( empty local-lis/les=55/56 n=0 ec=55/55 lis/c=0/0 les/c/f=0/0/0 sis=55) [0] r=0 lpr=55 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 06 06:31:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v197: 181 pgs: 1 creating+peering, 180 active+clean; 453 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 5.0 KiB/s wr, 120 op/s
Dec 06 06:31:08 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 06 06:31:08 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 06 06:31:09 compute-0 ceph-mon[74339]: 4.18 deep-scrub starts
Dec 06 06:31:09 compute-0 ceph-mon[74339]: 4.18 deep-scrub ok
Dec 06 06:31:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.tjfgow", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 06:31:09 compute-0 ceph-mon[74339]: pgmap v195: 181 pgs: 2 creating+peering, 1 active+clean+scrubbing, 178 active+clean; 453 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 5.3 KiB/s wr, 130 op/s
Dec 06 06:31:09 compute-0 ceph-mgr[74630]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Dec 06 06:31:10 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:31:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v198: 181 pgs: 1 creating+peering, 180 active+clean; 453 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 4.2 KiB/s wr, 103 op/s
Dec 06 06:31:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 06:31:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 06:31:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 06 06:31:10 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 06 06:31:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Dec 06 06:31:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 06:31:10 compute-0 funny_ramanujan[91652]: could not fetch user info: no user info saved
Dec 06 06:31:10 compute-0 radosgw[91889]: LDAP not started since no server URIs were provided in the configuration.
Dec 06 06:31:10 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-rgw-rgw-compute-0-wqlami[91885]: 2025-12-06T06:31:10.589+0000 7f476c83a940 -1 LDAP not started since no server URIs were provided in the configuration.
Dec 06 06:31:10 compute-0 radosgw[91889]: framework: beast
Dec 06 06:31:10 compute-0 radosgw[91889]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 06 06:31:10 compute-0 radosgw[91889]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 06 06:31:10 compute-0 radosgw[91889]: starting handler: beast
Dec 06 06:31:10 compute-0 radosgw[91889]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:31:10 compute-0 radosgw[91889]: mgrc service_daemon_register rgw.14358 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.wqlami,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=22f3b99e-039b-412d-b524-6d79a2ee4dad,zone_name=default,zonegroup_id=3605fdfe-bab9-40f4-83c5-5b52927c1749,zonegroup_name=default}
Dec 06 06:31:10 compute-0 systemd[1]: libpod-6f0e1953b72b5a32c6288434ff3c245ef044155162bc6d6db194150469c002c9.scope: Deactivated successfully.
Dec 06 06:31:10 compute-0 ceph-mon[74339]: 2.1b scrub starts
Dec 06 06:31:10 compute-0 ceph-mon[74339]: 2.1b scrub ok
Dec 06 06:31:10 compute-0 ceph-mon[74339]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 06 06:31:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 06:31:10 compute-0 ceph-mon[74339]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 06:31:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 06 06:31:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.tjfgow", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 06:31:10 compute-0 ceph-mon[74339]: osdmap e56: 3 total, 3 up, 3 in
Dec 06 06:31:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 06:31:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 06:31:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:31:10 compute-0 ceph-mon[74339]: Deploying daemon mds.cephfs.compute-2.tjfgow on compute-2
Dec 06 06:31:10 compute-0 podman[91614]: 2025-12-06 06:31:10.72568249 +0000 UTC m=+11.744140952 container died 6f0e1953b72b5a32c6288434ff3c245ef044155162bc6d6db194150469c002c9 (image=quay.io/ceph/ceph:v18, name=funny_ramanujan, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 06:31:10 compute-0 ceph-mon[74339]: 4.13 scrub starts
Dec 06 06:31:10 compute-0 ceph-mon[74339]: 4.13 scrub ok
Dec 06 06:31:10 compute-0 ceph-mon[74339]: pgmap v197: 181 pgs: 1 creating+peering, 180 active+clean; 453 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 5.0 KiB/s wr, 120 op/s
Dec 06 06:31:10 compute-0 ceph-mon[74339]: 4.16 scrub starts
Dec 06 06:31:10 compute-0 ceph-mon[74339]: 4.16 scrub ok
Dec 06 06:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d6abcab331af9eb912d2fd81f29922845ce87da48fd165ec07890529ec200af-merged.mount: Deactivated successfully.
Dec 06 06:31:10 compute-0 podman[91614]: 2025-12-06 06:31:10.791899981 +0000 UTC m=+11.810358443 container remove 6f0e1953b72b5a32c6288434ff3c245ef044155162bc6d6db194150469c002c9 (image=quay.io/ceph/ceph:v18, name=funny_ramanujan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 06:31:10 compute-0 systemd[1]: libpod-conmon-6f0e1953b72b5a32c6288434ff3c245ef044155162bc6d6db194150469c002c9.scope: Deactivated successfully.
Dec 06 06:31:10 compute-0 sudo[91578]: pam_unix(sudo:session): session closed for user root
Dec 06 06:31:10 compute-0 sudo[92559]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nijegsggrcydyqjgwnbsfwdqtmgudtdf ; /usr/bin/python3'
Dec 06 06:31:10 compute-0 sudo[92559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:31:11 compute-0 python3[92561]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:31:11 compute-0 podman[92562]: 2025-12-06 06:31:11.217183739 +0000 UTC m=+0.045222577 container create 68b1624d05c88a345c9d393303e578b82e2a77d9d7a2bc7488fb440cd1127c72 (image=quay.io/ceph/ceph:v18, name=cool_sinoussi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:31:11 compute-0 systemd[1]: Started libpod-conmon-68b1624d05c88a345c9d393303e578b82e2a77d9d7a2bc7488fb440cd1127c72.scope.
Dec 06 06:31:11 compute-0 podman[92562]: 2025-12-06 06:31:11.198904404 +0000 UTC m=+0.026943262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec 06 06:31:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8f4004d994b0fdd9d7c176d86c0339aea7d05a1dc943be79b88b2849950043/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c8f4004d994b0fdd9d7c176d86c0339aea7d05a1dc943be79b88b2849950043/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:11 compute-0 podman[92562]: 2025-12-06 06:31:11.316679136 +0000 UTC m=+0.144717994 container init 68b1624d05c88a345c9d393303e578b82e2a77d9d7a2bc7488fb440cd1127c72 (image=quay.io/ceph/ceph:v18, name=cool_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:31:11 compute-0 podman[92562]: 2025-12-06 06:31:11.324866359 +0000 UTC m=+0.152905197 container start 68b1624d05c88a345c9d393303e578b82e2a77d9d7a2bc7488fb440cd1127c72 (image=quay.io/ceph/ceph:v18, name=cool_sinoussi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:31:11 compute-0 podman[92562]: 2025-12-06 06:31:11.329691194 +0000 UTC m=+0.157730032 container attach 68b1624d05c88a345c9d393303e578b82e2a77d9d7a2bc7488fb440cd1127c72 (image=quay.io/ceph/ceph:v18, name=cool_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 06:31:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 06 06:31:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v200: 181 pgs: 181 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 105 KiB/s rd, 4.4 KiB/s wr, 211 op/s
Dec 06 06:31:12 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 06 06:31:12 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]: {
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "user_id": "openstack",
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "display_name": "openstack",
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "email": "",
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "suspended": 0,
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "max_buckets": 1000,
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "subusers": [],
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "keys": [
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:         {
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:             "user": "openstack",
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:             "access_key": "W04YOG2PP790361HQIX7",
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:             "secret_key": "kZamJH0KAOt5RXiDDYqpI8CwoTzEipt3mLxGxxHB"
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:         }
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     ],
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "swift_keys": [],
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "caps": [],
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "op_mask": "read, write, delete",
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "default_placement": "",
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "default_storage_class": "",
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "placement_tags": [],
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "bucket_quota": {
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:         "enabled": false,
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:         "check_on_raw": false,
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:         "max_size": -1,
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:         "max_size_kb": 0,
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:         "max_objects": -1
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     },
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "user_quota": {
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:         "enabled": false,
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:         "check_on_raw": false,
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:         "max_size": -1,
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:         "max_size_kb": 0,
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:         "max_objects": -1
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     },
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "temp_url_keys": [],
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "type": "rgw",
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]:     "mfa_ids": []
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]: }
Dec 06 06:31:12 compute-0 cool_sinoussi[92578]: 
Dec 06 06:31:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:31:12
Dec 06 06:31:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:31:12 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:31:12 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'images', 'default.rgw.control']
Dec 06 06:31:12 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:31:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:31:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:31:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:31:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:31:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:31:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:31:13 compute-0 systemd[1]: libpod-68b1624d05c88a345c9d393303e578b82e2a77d9d7a2bc7488fb440cd1127c72.scope: Deactivated successfully.
Dec 06 06:31:13 compute-0 conmon[92578]: conmon 68b1624d05c88a345c9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68b1624d05c88a345c9d393303e578b82e2a77d9d7a2bc7488fb440cd1127c72.scope/container/memory.events
Dec 06 06:31:13 compute-0 podman[92562]: 2025-12-06 06:31:13.074066177 +0000 UTC m=+1.902105035 container died 68b1624d05c88a345c9d393303e578b82e2a77d9d7a2bc7488fb440cd1127c72 (image=quay.io/ceph/ceph:v18, name=cool_sinoussi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c8f4004d994b0fdd9d7c176d86c0339aea7d05a1dc943be79b88b2849950043-merged.mount: Deactivated successfully.
Dec 06 06:31:13 compute-0 podman[92562]: 2025-12-06 06:31:13.124229331 +0000 UTC m=+1.952268209 container remove 68b1624d05c88a345c9d393303e578b82e2a77d9d7a2bc7488fb440cd1127c72 (image=quay.io/ceph/ceph:v18, name=cool_sinoussi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 06:31:13 compute-0 systemd[1]: libpod-conmon-68b1624d05c88a345c9d393303e578b82e2a77d9d7a2bc7488fb440cd1127c72.scope: Deactivated successfully.
Dec 06 06:31:13 compute-0 sudo[92559]: pam_unix(sudo:session): session closed for user root
Dec 06 06:31:13 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.17 deep-scrub starts
Dec 06 06:31:13 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.17 deep-scrub ok
Dec 06 06:31:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 06:31:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 06 06:31:13 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 06 06:31:14 compute-0 ceph-mon[74339]: 6.1 scrub starts
Dec 06 06:31:14 compute-0 ceph-mon[74339]: 6.1 scrub ok
Dec 06 06:31:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/758001210' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 06:31:14 compute-0 ceph-mon[74339]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 06 06:31:14 compute-0 ceph-mon[74339]: 5.1 scrub starts
Dec 06 06:31:14 compute-0 ceph-mon[74339]: 5.1 scrub ok
Dec 06 06:31:14 compute-0 ceph-mon[74339]: pgmap v198: 181 pgs: 1 creating+peering, 180 active+clean; 453 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 4.2 KiB/s wr, 103 op/s
Dec 06 06:31:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2779676723' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 06:31:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2195976802' entity='client.rgw.rgw.compute-0.wqlami' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 06:31:14 compute-0 ceph-mon[74339]: osdmap e57: 3 total, 3 up, 3 in
Dec 06 06:31:14 compute-0 ceph-mon[74339]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 06 06:31:14 compute-0 ceph-mon[74339]: 2.11 deep-scrub starts
Dec 06 06:31:14 compute-0 ceph-mon[74339]: 2.11 deep-scrub ok
Dec 06 06:31:14 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 6a183abf-c86c-4a5c-aa6d-add25ded0bb2 (Global Recovery Event) in 5 seconds
Dec 06 06:31:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v202: 181 pgs: 181 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 0 B/s wr, 111 op/s
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 5.0 deep-scrub starts
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 5.0 deep-scrub ok
Dec 06 06:31:15 compute-0 ceph-mon[74339]: pgmap v200: 181 pgs: 181 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 105 KiB/s rd, 4.4 KiB/s wr, 211 op/s
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 5.1e scrub starts
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 5.1e scrub ok
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 3.0 scrub starts
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 5.1b scrub starts
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 5.1b scrub ok
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 4.17 deep-scrub starts
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 4.17 deep-scrub ok
Dec 06 06:31:15 compute-0 ceph-mon[74339]: from='client.? ' entity='client.rgw.rgw.compute-1.dmyhav' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 06 06:31:15 compute-0 ceph-mon[74339]: osdmap e58: 3 total, 3 up, 3 in
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 3.0 scrub ok
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 5.d scrub starts
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 5.d scrub ok
Dec 06 06:31:15 compute-0 ceph-mon[74339]: pgmap v202: 181 pgs: 181 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 0 B/s wr, 111 op/s
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 5.b scrub starts
Dec 06 06:31:15 compute-0 ceph-mon[74339]: 5.b scrub ok
Dec 06 06:31:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v203: 181 pgs: 181 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 171 KiB/s rd, 255 B/s wr, 280 op/s
Dec 06 06:31:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:31:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qqwnku", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qqwnku", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 06:31:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e3 new map
Dec 06 06:31:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:29:04.228395+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.tjfgow{-1:24157} state up:standby seq 1 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:31:17 compute-0 ceph-mon[74339]: 5.16 scrub starts
Dec 06 06:31:17 compute-0 ceph-mon[74339]: 5.16 scrub ok
Dec 06 06:31:17 compute-0 ceph-mon[74339]: pgmap v203: 181 pgs: 181 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 171 KiB/s rd, 255 B/s wr, 280 op/s
Dec 06 06:31:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] up:boot
Dec 06 06:31:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] as mds.0
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.tjfgow assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec 06 06:31:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.tjfgow"} v 0) v1
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.tjfgow"}]: dispatch
Dec 06 06:31:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e3 all = 0
Dec 06 06:31:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e4 new map
Dec 06 06:31:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:31:17.652602+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.tjfgow{0:24157} state up:creating seq 1 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qqwnku", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:creating}
Dec 06 06:31:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:31:17 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.qqwnku on compute-0
Dec 06 06:31:17 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.qqwnku on compute-0
Dec 06 06:31:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.tjfgow is now active in filesystem cephfs as rank 0
Dec 06 06:31:17 compute-0 sudo[92675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:31:17 compute-0 sudo[92675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:31:17 compute-0 sudo[92675]: pam_unix(sudo:session): session closed for user root
Dec 06 06:31:17 compute-0 sudo[92700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:31:17 compute-0 sudo[92700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:31:17 compute-0 sudo[92700]: pam_unix(sudo:session): session closed for user root
Dec 06 06:31:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:17 compute-0 sudo[92725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:31:17 compute-0 sudo[92725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:31:17 compute-0 sudo[92725]: pam_unix(sudo:session): session closed for user root
Dec 06 06:31:17 compute-0 sudo[92750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:31:17 compute-0 sudo[92750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v204: 181 pgs: 181 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 171 KiB/s rd, 255 B/s wr, 280 op/s
Dec 06 06:31:18 compute-0 podman[92815]: 2025-12-06 06:31:18.296308511 +0000 UTC m=+0.042829534 container create a3fcf5d8ec038002aa1ec3d07568679b2f486b805bbc877b00b08579d183c5a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kowalevski, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:31:18 compute-0 systemd[1]: Started libpod-conmon-a3fcf5d8ec038002aa1ec3d07568679b2f486b805bbc877b00b08579d183c5a3.scope.
Dec 06 06:31:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:31:18 compute-0 podman[92815]: 2025-12-06 06:31:18.277137342 +0000 UTC m=+0.023658375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:31:18 compute-0 podman[92815]: 2025-12-06 06:31:18.383697343 +0000 UTC m=+0.130218396 container init a3fcf5d8ec038002aa1ec3d07568679b2f486b805bbc877b00b08579d183c5a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 06:31:18 compute-0 podman[92815]: 2025-12-06 06:31:18.390291085 +0000 UTC m=+0.136812088 container start a3fcf5d8ec038002aa1ec3d07568679b2f486b805bbc877b00b08579d183c5a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kowalevski, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:31:18 compute-0 podman[92815]: 2025-12-06 06:31:18.393892618 +0000 UTC m=+0.140413671 container attach a3fcf5d8ec038002aa1ec3d07568679b2f486b805bbc877b00b08579d183c5a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kowalevski, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 06:31:18 compute-0 serene_kowalevski[92832]: 167 167
Dec 06 06:31:18 compute-0 systemd[1]: libpod-a3fcf5d8ec038002aa1ec3d07568679b2f486b805bbc877b00b08579d183c5a3.scope: Deactivated successfully.
Dec 06 06:31:18 compute-0 conmon[92832]: conmon a3fcf5d8ec038002aa1e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3fcf5d8ec038002aa1ec3d07568679b2f486b805bbc877b00b08579d183c5a3.scope/container/memory.events
Dec 06 06:31:18 compute-0 podman[92815]: 2025-12-06 06:31:18.398786346 +0000 UTC m=+0.145307359 container died a3fcf5d8ec038002aa1ec3d07568679b2f486b805bbc877b00b08579d183c5a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 06:31:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fb25e2289377bb76f51381c9bc21b7d87bc47dda37eceaee138fe9b893ae60b-merged.mount: Deactivated successfully.
Dec 06 06:31:18 compute-0 podman[92815]: 2025-12-06 06:31:18.435623303 +0000 UTC m=+0.182144316 container remove a3fcf5d8ec038002aa1ec3d07568679b2f486b805bbc877b00b08579d183c5a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_kowalevski, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:31:18 compute-0 systemd[1]: libpod-conmon-a3fcf5d8ec038002aa1ec3d07568679b2f486b805bbc877b00b08579d183c5a3.scope: Deactivated successfully.
Dec 06 06:31:18 compute-0 systemd[1]: Reloading.
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 16)
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 1)
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 1)
Dec 06 06:31:18 compute-0 systemd-sysv-generator[92880]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:31:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Dec 06 06:31:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:31:18 compute-0 systemd-rc-local-generator[92876]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:31:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 06 06:31:18 compute-0 ceph-mon[74339]: 2.a scrub starts
Dec 06 06:31:18 compute-0 ceph-mon[74339]: 2.a scrub ok
Dec 06 06:31:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qqwnku", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 06:31:18 compute-0 ceph-mon[74339]: mds.? [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] up:boot
Dec 06 06:31:18 compute-0 ceph-mon[74339]: daemon mds.cephfs.compute-2.tjfgow assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 06 06:31:18 compute-0 ceph-mon[74339]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 06 06:31:18 compute-0 ceph-mon[74339]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 06 06:31:18 compute-0 ceph-mon[74339]: Cluster is now healthy
Dec 06 06:31:18 compute-0 ceph-mon[74339]: fsmap cephfs:0 1 up:standby
Dec 06 06:31:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.tjfgow"}]: dispatch
Dec 06 06:31:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qqwnku", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 06:31:18 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:creating}
Dec 06 06:31:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:31:18 compute-0 ceph-mon[74339]: Deploying daemon mds.cephfs.compute-0.qqwnku on compute-0
Dec 06 06:31:18 compute-0 ceph-mon[74339]: daemon mds.cephfs.compute-2.tjfgow is now active in filesystem cephfs as rank 0
Dec 06 06:31:18 compute-0 ceph-mon[74339]: 3.16 scrub starts
Dec 06 06:31:18 compute-0 ceph-mon[74339]: 3.16 scrub ok
Dec 06 06:31:18 compute-0 ceph-mon[74339]: pgmap v204: 181 pgs: 181 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 171 KiB/s rd, 255 B/s wr, 280 op/s
Dec 06 06:31:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:31:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e5 new map
Dec 06 06:31:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:31:18.697695+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Dec 06 06:31:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:31:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 06 06:31:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 06 06:31:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] up:active
Dec 06 06:31:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:active}
Dec 06 06:31:18 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 2c98c727-6b4d-444b-bfbf-8441efa6af0e (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 06 06:31:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Dec 06 06:31:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:31:18 compute-0 systemd[1]: Reloading.
Dec 06 06:31:19 compute-0 systemd-rc-local-generator[92919]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:31:19 compute-0 systemd-sysv-generator[92922]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:31:19 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 16 completed events
Dec 06 06:31:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:31:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:19 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.qqwnku for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:31:19 compute-0 podman[92978]: 2025-12-06 06:31:19.446914076 +0000 UTC m=+0.037529966 container create b9ccca99b41d87fba0709964c81990f49c7979d19670b6a15ed5e1bd429f4b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-0-qqwnku, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d91f064462ef7bd70f18332e47c1e727b041df98e1897ab6b77afcc1f7dbe8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d91f064462ef7bd70f18332e47c1e727b041df98e1897ab6b77afcc1f7dbe8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d91f064462ef7bd70f18332e47c1e727b041df98e1897ab6b77afcc1f7dbe8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d91f064462ef7bd70f18332e47c1e727b041df98e1897ab6b77afcc1f7dbe8/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.qqwnku supports timestamps until 2038 (0x7fffffff)
Dec 06 06:31:19 compute-0 podman[92978]: 2025-12-06 06:31:19.512654665 +0000 UTC m=+0.103270565 container init b9ccca99b41d87fba0709964c81990f49c7979d19670b6a15ed5e1bd429f4b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-0-qqwnku, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:31:19 compute-0 podman[92978]: 2025-12-06 06:31:19.521592498 +0000 UTC m=+0.112208378 container start b9ccca99b41d87fba0709964c81990f49c7979d19670b6a15ed5e1bd429f4b3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-0-qqwnku, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 06:31:19 compute-0 bash[92978]: b9ccca99b41d87fba0709964c81990f49c7979d19670b6a15ed5e1bd429f4b3a
Dec 06 06:31:19 compute-0 podman[92978]: 2025-12-06 06:31:19.429970876 +0000 UTC m=+0.020586756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:31:19 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.qqwnku for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:31:19 compute-0 ceph-mds[92997]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 06:31:19 compute-0 ceph-mds[92997]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Dec 06 06:31:19 compute-0 ceph-mds[92997]: main not setting numa affinity
Dec 06 06:31:19 compute-0 ceph-mds[92997]: pidfile_write: ignore empty --pid-file
Dec 06 06:31:19 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mds-cephfs-compute-0-qqwnku[92993]: starting mds.cephfs.compute-0.qqwnku at 
Dec 06 06:31:19 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku Updating MDS map to version 5 from mon.0
Dec 06 06:31:19 compute-0 sudo[92750]: pam_unix(sudo:session): session closed for user root
Dec 06 06:31:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:31:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 06 06:31:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:31:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v206: 181 pgs: 181 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 255 B/s wr, 178 op/s
Dec 06 06:31:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:31:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:20 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 06 06:31:20 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 06 06:31:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:31:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 06 06:31:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:31:20 compute-0 ceph-mon[74339]: osdmap e59: 3 total, 3 up, 3 in
Dec 06 06:31:20 compute-0 ceph-mon[74339]: mds.? [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] up:active
Dec 06 06:31:20 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:active}
Dec 06 06:31:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:31:20 compute-0 ceph-mon[74339]: 5.10 scrub starts
Dec 06 06:31:20 compute-0 ceph-mon[74339]: 5.10 scrub ok
Dec 06 06:31:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 06 06:31:20 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev a0e9d0fc-b60e-4aeb-9c82-5173ec960fa7 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 06 06:31:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Dec 06 06:31:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:31:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e6 new map
Dec 06 06:31:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:31:18.697695+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.qqwnku{-1:14385} state up:standby seq 1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:31:21 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku Updating MDS map to version 6 from mon.0
Dec 06 06:31:21 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku Monitors have assigned me to become a standby.
Dec 06 06:31:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:21 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:boot
Dec 06 06:31:21 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:active} 1 up:standby
Dec 06 06:31:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.qqwnku"} v 0) v1
Dec 06 06:31:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.qqwnku"}]: dispatch
Dec 06 06:31:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e6 all = 0
Dec 06 06:31:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 06 06:31:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 06 06:31:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v208: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 2.0 KiB/s wr, 184 op/s
Dec 06 06:31:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:31:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:31:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:31:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:31:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:31:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:31:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:31:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:31:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:31:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:31:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:31:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:31:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e7 new map
Dec 06 06:31:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:31:18.697695+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.qqwnku{-1:14385} state up:standby seq 1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:31:23 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:active} 1 up:standby
Dec 06 06:31:23 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:23 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:31:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 06 06:31:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:23 compute-0 ceph-mon[74339]: pgmap v206: 181 pgs: 181 active+clean; 454 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 255 B/s wr, 178 op/s
Dec 06 06:31:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:23 compute-0 ceph-mon[74339]: 4.1e scrub starts
Dec 06 06:31:23 compute-0 ceph-mon[74339]: 4.1e scrub ok
Dec 06 06:31:23 compute-0 ceph-mon[74339]: 2.d scrub starts
Dec 06 06:31:23 compute-0 ceph-mon[74339]: 2.d scrub ok
Dec 06 06:31:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:31:23 compute-0 ceph-mon[74339]: osdmap e60: 3 total, 3 up, 3 in
Dec 06 06:31:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:31:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:23 compute-0 ceph-mon[74339]: mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:boot
Dec 06 06:31:23 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:active} 1 up:standby
Dec 06 06:31:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.qqwnku"}]: dispatch
Dec 06 06:31:24 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 61 pg[8.0( v 50'4 (0'0,50'4] local-lis/les=48/49 n=4 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=61 pruub=9.157764435s) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 50'3 mlcod 50'3 active pruub 199.389816284s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 06 06:31:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vsxbzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Dec 06 06:31:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vsxbzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 06:31:24 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 2d65075d-e4aa-4b07-a8d5-3ed4dd24a4dd (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 06 06:31:24 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 61 pg[8.0( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=61 pruub=9.157764435s) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 50'3 mlcod 0'0 unknown pruub 199.389816284s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec 06 06:31:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:31:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vsxbzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 06:31:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:31:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:31:24 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.vsxbzt on compute-1
Dec 06 06:31:24 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.vsxbzt on compute-1
Dec 06 06:31:24 compute-0 ceph-mgr[74630]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Dec 06 06:31:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v210: 212 pgs: 31 unknown, 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s wr, 7 op/s
Dec 06 06:31:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:31:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:31:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:24 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 06 06:31:24 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 06 06:31:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 06 06:31:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:31:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 06 06:31:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.18( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.14( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.15( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.17( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.16( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.10( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.11( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.12( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.3( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=1 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.f( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.8( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.e( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.d( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.c( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.a( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.9( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.2( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=1 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1( v 50'4 (0'0,50'4] local-lis/les=48/49 n=1 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.b( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.7( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[9.0( v 58'1159 (0'0,58'1159] local-lis/les=51/52 n=177 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=62 pruub=13.358112335s) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 58'1158 mlcod 58'1158 active pruub 204.797225952s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.4( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=1 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.5( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.6( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1b( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1a( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.19( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1f( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1e( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1d( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1c( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 624025e5-ef34-45c7-9d15-2f423c384a53 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.13( v 50'4 lc 0'0 (0'0,50'4] local-lis/les=48/49 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 2c98c727-6b4d-444b-bfbf-8441efa6af0e (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 06 06:31:25 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 2c98c727-6b4d-444b-bfbf-8441efa6af0e (PG autoscaler increasing pool 8 PGs from 1 to 32) in 6 seconds
Dec 06 06:31:25 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev a0e9d0fc-b60e-4aeb-9c82-5173ec960fa7 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 06 06:31:25 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event a0e9d0fc-b60e-4aeb-9c82-5173ec960fa7 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 4 seconds
Dec 06 06:31:25 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 2d65075d-e4aa-4b07-a8d5-3ed4dd24a4dd (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 06 06:31:25 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 2d65075d-e4aa-4b07-a8d5-3ed4dd24a4dd (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec 06 06:31:25 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 624025e5-ef34-45c7-9d15-2f423c384a53 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 06 06:31:25 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 624025e5-ef34-45c7-9d15-2f423c384a53 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[9.0( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=62 pruub=13.358112335s) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 58'1158 mlcod 0'0 unknown pruub 204.797225952s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.18( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.16( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.15( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.17( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.12( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.11( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.f( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.8( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.3( v 50'4 (0'0,50'4] local-lis/les=61/62 n=1 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.e( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.d( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.a( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.10( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.c( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.2( v 50'4 (0'0,50'4] local-lis/les=61/62 n=1 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.14( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.9( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1( v 50'4 (0'0,50'4] local-lis/les=61/62 n=1 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.0( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 50'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.7( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.b( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1a( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.19( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1b( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.6( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1f( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1e( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1d( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.1c( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.13( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.4( v 50'4 (0'0,50'4] local-lis/les=61/62 n=1 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 62 pg[8.5( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=48/48 les/c/f=49/49/0 sis=61) [0] r=0 lpr=61 pi=[48,61)/1 crt=50'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v212: 274 pgs: 1 peering, 62 unknown, 211 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s wr, 6 op/s
Dec 06 06:31:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec 06 06:31:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 06 06:31:26 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 06 06:31:26 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 06 06:31:26 compute-0 ceph-mon[74339]: 2.c scrub starts
Dec 06 06:31:26 compute-0 ceph-mon[74339]: 2.c scrub ok
Dec 06 06:31:26 compute-0 ceph-mon[74339]: pgmap v208: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 2.0 KiB/s wr, 184 op/s
Dec 06 06:31:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:26 compute-0 ceph-mon[74339]: 7.1d deep-scrub starts
Dec 06 06:31:26 compute-0 ceph-mon[74339]: 7.1d deep-scrub ok
Dec 06 06:31:26 compute-0 ceph-mon[74339]: 5.11 scrub starts
Dec 06 06:31:26 compute-0 ceph-mon[74339]: 5.11 scrub ok
Dec 06 06:31:26 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:active} 1 up:standby
Dec 06 06:31:26 compute-0 ceph-mon[74339]: 2.14 scrub starts
Dec 06 06:31:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:31:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:26 compute-0 ceph-mon[74339]: osdmap e61: 3 total, 3 up, 3 in
Dec 06 06:31:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vsxbzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 06 06:31:26 compute-0 ceph-mon[74339]: 2.14 scrub ok
Dec 06 06:31:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 06 06:31:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.vsxbzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 06 06:31:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:31:26 compute-0 ceph-mon[74339]: Deploying daemon mds.cephfs.compute-1.vsxbzt on compute-1
Dec 06 06:31:26 compute-0 ceph-mon[74339]: pgmap v210: 212 pgs: 31 unknown, 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s wr, 7 op/s
Dec 06 06:31:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:26 compute-0 ceph-mon[74339]: 6.4 scrub starts
Dec 06 06:31:26 compute-0 ceph-mon[74339]: 6.4 scrub ok
Dec 06 06:31:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 06 06:31:26 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.19( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.17( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.15( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.14( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.16( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.11( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.10( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.13( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.2( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.e( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[11.0( v 58'3 (0'0,58'3] local-lis/les=55/56 n=3 ec=55/55 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.897870064s) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 57'2 mlcod 57'2 active pruub 205.055480957s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.9( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.b( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.f( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.c( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.d( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.8( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.3( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.a( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.6( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.4( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.7( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.5( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1a( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1b( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.18( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1e( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1f( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1c( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1d( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.12( v 58'1159 lc 0'0 (0'0,58'1159] local-lis/les=51/52 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[11.0( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=55/55 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=11.897870064s) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 57'2 mlcod 0'0 unknown pruub 205.055480957s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.17( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.13( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.2( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.10( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.e( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.b( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.9( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.11( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.c( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.a( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.3( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.6( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.5( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.4( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.7( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.18( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.0( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 58'1158 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1c( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:26 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 63 pg[9.12( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [0] r=0 lpr=62 pi=[51,62)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 06 06:31:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 2 peering, 1 active+clean+scrubbing+deep, 93 unknown, 209 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:29 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 20 completed events
Dec 06 06:31:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:31:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 2 peering, 1 active+clean+scrubbing+deep, 93 unknown, 209 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 2 peering, 1 active+clean+scrubbing+deep, 62 unknown, 240 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 06 06:31:33 compute-0 ceph-mon[74339]: 7.a deep-scrub starts
Dec 06 06:31:33 compute-0 ceph-mon[74339]: 7.a deep-scrub ok
Dec 06 06:31:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 06 06:31:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:33 compute-0 ceph-mon[74339]: osdmap e62: 3 total, 3 up, 3 in
Dec 06 06:31:33 compute-0 ceph-mon[74339]: pgmap v212: 274 pgs: 1 peering, 62 unknown, 211 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s wr, 6 op/s
Dec 06 06:31:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:33 compute-0 ceph-mon[74339]: 6.6 scrub starts
Dec 06 06:31:33 compute-0 ceph-mon[74339]: 6.6 scrub ok
Dec 06 06:31:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 06 06:31:33 compute-0 ceph-mon[74339]: osdmap e63: 3 total, 3 up, 3 in
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1b( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.17( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.15( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.16( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.14( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.13( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.12( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.11( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.c( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.b( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.9( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.d( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.e( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.f( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.8( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.a( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1( v 58'3 (0'0,58'3] local-lis/les=55/56 n=1 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.2( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=1 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.3( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=1 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.4( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.5( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.6( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.7( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.18( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.19( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1a( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1d( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1c( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1e( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1f( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.10( v 58'3 lc 0'0 (0'0,58'3] local-lis/les=55/56 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.16( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.15( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1b( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.17( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:33 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.14( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.11( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.0( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=55/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 57'2 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.12( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.13( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.c( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.d( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.b( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.e( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.f( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.8( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.9( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.a( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1( v 58'3 (0'0,58'3] local-lis/les=63/64 n=1 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.2( v 58'3 (0'0,58'3] local-lis/les=63/64 n=1 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.3( v 58'3 (0'0,58'3] local-lis/les=63/64 n=1 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.5( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.4( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.6( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.18( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1a( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.19( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1c( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1d( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1e( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.7( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.1f( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:33 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 64 pg[11.10( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=58'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 33 peering, 31 unknown, 241 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:35 compute-0 sshd-session[93017]: Accepted publickey for zuul from 192.168.122.30 port 41642 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:31:35 compute-0 systemd-logind[798]: New session 34 of user zuul.
Dec 06 06:31:35 compute-0 systemd[1]: Started Session 34 of User zuul.
Dec 06 06:31:35 compute-0 sshd-session[93017]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:31:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 32 peering, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:36 compute-0 python3.9[93170]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:31:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 32 activating, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:38 compute-0 sudo[93382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kccjarcjalwtvgjjmshixkkyhfsoxiki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002697.8284712-61-216529046352856/AnsiballZ_command.py'
Dec 06 06:31:38 compute-0 sudo[93382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:31:38 compute-0 python3.9[93384]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:31:39 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 06 06:31:39 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 06 06:31:40 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 06 06:31:40 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 06 06:31:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 32 activating, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:41 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 06 06:31:41 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 06 06:31:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 32 activating, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:31:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:31:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:31:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:31:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:31:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:31:43 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 06 06:31:43 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 06 06:31:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 32 activating, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:45 compute-0 ceph-mon[74339]: 7.c deep-scrub starts
Dec 06 06:31:45 compute-0 ceph-mon[74339]: 7.c deep-scrub ok
Dec 06 06:31:45 compute-0 ceph-mon[74339]: 5.12 scrub starts
Dec 06 06:31:45 compute-0 ceph-mon[74339]: 5.12 scrub ok
Dec 06 06:31:45 compute-0 ceph-mon[74339]: pgmap v214: 305 pgs: 2 peering, 1 active+clean+scrubbing+deep, 93 unknown, 209 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:45 compute-0 ceph-mon[74339]: 2.10 scrub starts
Dec 06 06:31:45 compute-0 ceph-mon[74339]: 2.10 scrub ok
Dec 06 06:31:45 compute-0 ceph-mon[74339]: pgmap v215: 305 pgs: 2 peering, 1 active+clean+scrubbing+deep, 93 unknown, 209 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:45 compute-0 ceph-mon[74339]: 7.14 scrub starts
Dec 06 06:31:45 compute-0 ceph-mon[74339]: pgmap v216: 305 pgs: 2 peering, 1 active+clean+scrubbing+deep, 62 unknown, 240 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:45 compute-0 ceph-mon[74339]: 7.d scrub starts
Dec 06 06:31:45 compute-0 ceph-mon[74339]: 7.d scrub ok
Dec 06 06:31:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:31:45 compute-0 ceph-mon[74339]: osdmap e64: 3 total, 3 up, 3 in
Dec 06 06:31:45 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 06 06:31:45 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 06 06:31:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 27 active, 2 active+clean+scrubbing, 2 activating, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:46 compute-0 sudo[93382]: pam_unix(sudo:session): session closed for user root
Dec 06 06:31:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 27 active, 2 active+clean+scrubbing, 2 activating, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 27 active, 1 active+clean+scrubbing, 2 activating, 275 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:51 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.13 deep-scrub starts
Dec 06 06:31:51 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.13 deep-scrub ok
Dec 06 06:31:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:31:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:31:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Dec 06 06:31:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 06 06:31:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:31:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:52 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.10 deep-scrub starts
Dec 06 06:31:52 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.10 deep-scrub ok
Dec 06 06:31:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 06 06:31:53 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 150cc612-907b-4d17-8461-aaeb94508695 (Global Recovery Event) in 29 seconds
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 7.14 scrub ok
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 5.13 scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 5.13 scrub ok
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 4.5 scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: pgmap v218: 305 pgs: 33 peering, 31 unknown, 241 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 4.5 scrub ok
Dec 06 06:31:53 compute-0 ceph-mon[74339]: pgmap v219: 305 pgs: 32 peering, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 2.13 scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 2.13 scrub ok
Dec 06 06:31:53 compute-0 ceph-mon[74339]: pgmap v220: 305 pgs: 32 activating, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 6.7 scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 6.9 scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 6.9 scrub ok
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 4.14 deep-scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 4.14 deep-scrub ok
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 6.b scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 6.b scrub ok
Dec 06 06:31:53 compute-0 ceph-mon[74339]: pgmap v221: 305 pgs: 32 activating, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 2.15 scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 2.15 scrub ok
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 6.c scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 6.c scrub ok
Dec 06 06:31:53 compute-0 ceph-mon[74339]: pgmap v222: 305 pgs: 32 activating, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 2.b scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 2.b scrub ok
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 6.f scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 6.f scrub ok
Dec 06 06:31:53 compute-0 ceph-mon[74339]: pgmap v223: 305 pgs: 32 activating, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 4.9 scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 4.9 scrub ok
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 2.19 scrub starts
Dec 06 06:31:53 compute-0 ceph-mon[74339]: 2.19 scrub ok
Dec 06 06:31:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:31:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:31:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Dec 06 06:31:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 06 06:31:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:31:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:54 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 06 06:31:54 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 06 06:31:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:31:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:31:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 06 06:31:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:31:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 06 06:31:54 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.1b( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437801361s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.321929932s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.1b( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437720299s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.321929932s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.18( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.581025124s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465286255s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.18( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.580941200s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465286255s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.16( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.581033707s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465423584s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.17( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437472343s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.321914673s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.17( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437430382s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.321914673s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.16( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.580945969s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465423584s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.16( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437170982s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.321899414s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.16( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437130928s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.321899414s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.14( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.580943108s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465866089s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.15( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.580504417s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465454102s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.14( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.580919266s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465866089s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.15( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.580459595s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465454102s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.13( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.440818787s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326004028s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.13( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.440790176s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326004028s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.10( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.580438614s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465713501s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.12( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.440584183s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.325912476s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.12( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.440517426s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.325912476s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.11( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579880714s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465408325s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.11( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579857826s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465408325s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.12( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579766273s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465438843s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.10( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.580403328s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465713501s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.12( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579735756s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465438843s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.14( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.440026283s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.325881958s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.14( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.440009117s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.325881958s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.3( v 50'4 (0'0,50'4] local-lis/les=61/62 n=1 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579533577s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465438843s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.8( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579476357s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465454102s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.8( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579400063s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465454102s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.a( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579462051s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465713501s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.f( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579067230s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465454102s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.f( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579036713s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465454102s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.17( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579192162s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465423584s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.3( v 50'4 (0'0,50'4] local-lis/les=61/62 n=1 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579149246s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465438843s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.e( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.439468384s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326049805s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.e( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.439447403s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326049805s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.d( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579029083s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465698242s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.f( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.439425468s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326202393s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.f( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.439402580s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326202393s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.d( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.578923225s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465698242s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.8( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.439187050s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326217651s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.8( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.439166069s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326217651s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.c( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.578636169s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465698242s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.a( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.579368591s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465713501s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.c( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.578609467s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465698242s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.b( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.578879356s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.466079712s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.b( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.578779221s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.466079712s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.9( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.578272820s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465759277s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.a( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438700676s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326217651s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.1( v 58'3 (0'0,58'3] local-lis/les=63/64 n=1 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438616753s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326217651s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.a( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438633919s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326217651s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.1( v 58'3 (0'0,58'3] local-lis/les=63/64 n=1 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438593864s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326217651s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.9( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.578177452s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465759277s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.17( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.578451157s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465423584s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.2( v 50'4 (0'0,50'4] local-lis/les=61/62 n=1 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.578107834s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.465713501s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.3( v 58'3 (0'0,58'3] local-lis/les=63/64 n=1 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438483238s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326431274s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.2( v 50'4 (0'0,50'4] local-lis/les=61/62 n=1 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577776909s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.465713501s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.3( v 58'3 (0'0,58'3] local-lis/les=63/64 n=1 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438446045s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326431274s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.5( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438281059s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326446533s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.4( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438328743s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326446533s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.5( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438252449s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326446533s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.4( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438155174s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326446533s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.6( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577972412s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.466293335s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.6( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577931404s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.466293335s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.7( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438539505s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326919556s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.7( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438486099s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326919556s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.5( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577619553s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.466171265s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.1b( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577612877s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.466308594s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.19( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438076019s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326766968s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.5( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577562332s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.466171265s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.1b( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577581406s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.466308594s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.19( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.438046455s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326766968s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.4( v 50'4 (0'0,50'4] local-lis/les=61/62 n=1 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577313423s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.466156006s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.4( v 50'4 (0'0,50'4] local-lis/les=61/62 n=1 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577264786s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.466156006s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.19( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577327728s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.466308594s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.1a( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437655449s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326614380s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.19( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577300072s) [1] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.466308594s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.1a( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437560081s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326614380s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.1f( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577102661s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.466308594s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.1c( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437497139s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326766968s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.1f( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.577060699s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.466308594s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.1d( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437592506s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326873779s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.1e( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437580109s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 active pruub 231.326858521s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.1d( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437561035s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326873779s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.1e( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437546730s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326858521s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[11.1c( v 58'3 (0'0,58'3] local-lis/les=63/64 n=0 ec=63/55 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=10.437242508s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=58'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.326766968s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.1c( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.576751709s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 active pruub 231.466430664s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:31:54 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[8.1c( v 50'4 (0'0,50'4] local-lis/les=61/62 n=0 ec=61/48 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.576715469s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=50'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.466430664s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:31:55 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[10.1b( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[10.5( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[10.8( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[10.2( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[10.19( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[10.18( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[10.13( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[10.15( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 65 pg[10.14( empty local-lis/les=0/0 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:31:55 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 7.1f deep-scrub starts
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 7.1f deep-scrub ok
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 6.7 scrub ok
Dec 06 06:31:55 compute-0 ceph-mon[74339]: pgmap v224: 305 pgs: 27 active, 2 active+clean+scrubbing, 2 activating, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 2.16 scrub starts
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 2.16 scrub ok
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 2.f scrub starts
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 2.f scrub ok
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 2.17 scrub starts
Dec 06 06:31:55 compute-0 ceph-mon[74339]: pgmap v225: 305 pgs: 27 active, 2 active+clean+scrubbing, 2 activating, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:55 compute-0 ceph-mon[74339]: pgmap v226: 305 pgs: 27 active, 1 active+clean+scrubbing, 2 activating, 275 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 7.5 scrub starts
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 7.5 scrub ok
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 2.17 scrub ok
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 7.13 deep-scrub starts
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 7.13 deep-scrub ok
Dec 06 06:31:55 compute-0 ceph-mon[74339]: pgmap v227: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 06 06:31:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 7.10 deep-scrub starts
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 7.10 deep-scrub ok
Dec 06 06:31:55 compute-0 ceph-mon[74339]: pgmap v228: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:31:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 06 06:31:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 2.e scrub starts
Dec 06 06:31:55 compute-0 ceph-mon[74339]: 2.e scrub ok
Dec 06 06:31:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:31:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:31:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 06 06:31:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:31:55 compute-0 ceph-mon[74339]: osdmap e65: 3 total, 3 up, 3 in
Dec 06 06:31:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 06 06:31:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:31:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:31:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 06 06:31:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:31:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 06 06:31:55 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 66 pg[10.13( v 57'96 (0'0,57'96] local-lis/les=65/66 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 66 pg[10.15( v 64'99 lc 57'78 (0'0,64'99] local-lis/les=65/66 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=64'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 66 pg[10.8( v 57'96 (0'0,57'96] local-lis/les=65/66 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 66 pg[10.14( v 64'99 lc 57'86 (0'0,64'99] local-lis/les=65/66 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=64'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 66 pg[10.5( v 57'96 (0'0,57'96] local-lis/les=65/66 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 66 pg[10.19( v 57'96 (0'0,57'96] local-lis/les=65/66 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 66 pg[10.2( v 57'96 (0'0,57'96] local-lis/les=65/66 n=1 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 66 pg[10.18( v 57'96 (0'0,57'96] local-lis/les=65/66 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:55 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 66 pg[10.1b( v 57'96 (0'0,57'96] local-lis/les=65/66 n=0 ec=62/53 lis/c=62/62 les/c/f=64/64/0 sis=65) [0] r=0 lpr=65 pi=[62,65)/1 crt=57'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:31:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 23 peering, 1 active+clean+scrubbing, 281 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:31:56 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 06 06:31:56 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 06 06:31:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:31:58 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 21 completed events
Dec 06 06:31:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:31:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 44 peering, 261 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:32:00 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 06 06:32:00 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 06 06:32:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 21 peering, 284 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 11 B/s, 0 objects/s recovering
Dec 06 06:32:01 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 06 06:32:01 compute-0 ceph-mon[74339]: 5.4 scrub starts
Dec 06 06:32:01 compute-0 ceph-mon[74339]: 5.4 scrub ok
Dec 06 06:32:01 compute-0 ceph-mon[74339]: 7.b scrub starts
Dec 06 06:32:01 compute-0 ceph-mon[74339]: 7.b scrub ok
Dec 06 06:32:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:32:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:32:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 06 06:32:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:32:01 compute-0 ceph-mon[74339]: osdmap e66: 3 total, 3 up, 3 in
Dec 06 06:32:01 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 06 06:32:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:01 compute-0 ceph-mgr[74630]: [progress WARNING root] Starting Global Recovery Event,21 pgs not in active + clean state
Dec 06 06:32:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:32:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:32:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 1 active+recovering+degraded, 1 active+recovery_wait+degraded, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1/216 objects degraded (0.463%); 1/216 objects misplaced (0.463%); 11 B/s, 0 objects/s recovering
Dec 06 06:32:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:03 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 06 06:32:03 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 06 06:32:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 06 06:32:03 compute-0 systemd[75955]: Created slice User Background Tasks Slice.
Dec 06 06:32:03 compute-0 systemd[75955]: Starting Cleanup of User's Temporary Files and Directories...
Dec 06 06:32:03 compute-0 systemd[75955]: Finished Cleanup of User's Temporary Files and Directories.
Dec 06 06:32:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e8 new map
Dec 06 06:32:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:31:18.697695+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24157}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.tjfgow{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.qqwnku{-1:14385} state up:standby seq 1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:32:03 compute-0 ceph-mon[74339]: pgmap v231: 305 pgs: 23 peering, 1 active+clean+scrubbing, 281 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 7.8 scrub starts
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 7.8 scrub ok
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 7.12 scrub starts
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 3.15 scrub starts
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 3.15 scrub ok
Dec 06 06:32:03 compute-0 ceph-mon[74339]: pgmap v232: 305 pgs: 44 peering, 261 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 5.7 scrub starts
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 2.1a scrub starts
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 7.9 scrub starts
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 7.9 scrub ok
Dec 06 06:32:03 compute-0 ceph-mon[74339]: pgmap v233: 305 pgs: 21 peering, 284 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 11 B/s, 0 objects/s recovering
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 2.1a scrub ok
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 7.12 scrub ok
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 5.7 scrub ok
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 4.1f scrub starts
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 4.1f scrub ok
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 7.f scrub starts
Dec 06 06:32:03 compute-0 ceph-mon[74339]: 7.f scrub ok
Dec 06 06:32:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:03 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] up:boot
Dec 06 06:32:03 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:active} 2 up:standby
Dec 06 06:32:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.vsxbzt"} v 0) v1
Dec 06 06:32:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vsxbzt"}]: dispatch
Dec 06 06:32:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e8 all = 0
Dec 06 06:32:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:03 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 7d758e29-19ed-45f5-897c-a2549bb1cf4e (Updating mds.cephfs deployment (+3 -> 3))
Dec 06 06:32:03 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 7d758e29-19ed-45f5-897c-a2549bb1cf4e (Updating mds.cephfs deployment (+3 -> 3)) in 59 seconds
Dec 06 06:32:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Dec 06 06:32:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 1 active+recovering+degraded, 1 active+recovery_wait+degraded, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1/216 objects degraded (0.463%); 1/216 objects misplaced (0.463%); 9 B/s, 0 objects/s recovering
Dec 06 06:32:04 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 1/216 objects degraded (0.463%), 2 pgs degraded (PG_DEGRADED)
Dec 06 06:32:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec 06 06:32:06 compute-0 ceph-mon[74339]: pgmap v234: 305 pgs: 1 active+recovering+degraded, 1 active+recovery_wait+degraded, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1/216 objects degraded (0.463%); 1/216 objects misplaced (0.463%); 11 B/s, 0 objects/s recovering
Dec 06 06:32:06 compute-0 ceph-mon[74339]: 7.11 deep-scrub starts
Dec 06 06:32:06 compute-0 ceph-mon[74339]: 7.11 deep-scrub ok
Dec 06 06:32:06 compute-0 ceph-mon[74339]: 7.e scrub starts
Dec 06 06:32:06 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:32:06 compute-0 ceph-mon[74339]: 7.e scrub ok
Dec 06 06:32:06 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:32:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:06 compute-0 ceph-mon[74339]: mds.? [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] up:boot
Dec 06 06:32:06 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-2.tjfgow=up:active} 2 up:standby
Dec 06 06:32:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.vsxbzt"}]: dispatch
Dec 06 06:32:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:06 compute-0 ceph-mon[74339]: pgmap v235: 305 pgs: 1 active+recovering+degraded, 1 active+recovery_wait+degraded, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1/216 objects degraded (0.463%); 1/216 objects misplaced (0.463%); 9 B/s, 0 objects/s recovering
Dec 06 06:32:06 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 06 06:32:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:06 compute-0 ceph-mgr[74630]: [progress INFO root] update: starting ev 5bd86f45-f4f9-45a1-9eab-1f7c895eb990 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec 06 06:32:06 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 06 06:32:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Dec 06 06:32:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 1 active+recovering+degraded, 1 active+recovery_wait+degraded, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1/216 objects degraded (0.463%); 1/216 objects misplaced (0.463%); 8 B/s, 0 objects/s recovering
Dec 06 06:32:06 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 22 completed events
Dec 06 06:32:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:32:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:06 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.ybrwqj on compute-0
Dec 06 06:32:06 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.ybrwqj on compute-0
Dec 06 06:32:06 compute-0 sudo[93443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:06 compute-0 sudo[93443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:06 compute-0 sudo[93443]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:06 compute-0 sudo[93468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:06 compute-0 sudo[93468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:06 compute-0 sudo[93468]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:06 compute-0 sudo[93493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:06 compute-0 sudo[93493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:06 compute-0 sudo[93493]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:06 compute-0 sudo[93518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:06 compute-0 sudo[93518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:06 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event c14cadc7-7fa0-4924-b0f2-40348d5ba97a (Global Recovery Event) in 6 seconds
Dec 06 06:32:07 compute-0 ceph-mon[74339]: 7.16 scrub starts
Dec 06 06:32:07 compute-0 ceph-mon[74339]: 7.16 scrub ok
Dec 06 06:32:07 compute-0 ceph-mon[74339]: Health check failed: Degraded data redundancy: 1/216 objects degraded (0.463%), 2 pgs degraded (PG_DEGRADED)
Dec 06 06:32:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:07 compute-0 ceph-mon[74339]: 7.6 scrub starts
Dec 06 06:32:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:07 compute-0 ceph-mon[74339]: 7.6 scrub ok
Dec 06 06:32:07 compute-0 ceph-mon[74339]: pgmap v236: 305 pgs: 1 active+recovering+degraded, 1 active+recovery_wait+degraded, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1/216 objects degraded (0.463%); 1/216 objects misplaced (0.463%); 8 B/s, 0 objects/s recovering
Dec 06 06:32:07 compute-0 ceph-mon[74339]: 7.15 scrub starts
Dec 06 06:32:07 compute-0 ceph-mon[74339]: 7.15 scrub ok
Dec 06 06:32:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:07 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Dropping low affinity active daemon mds.cephfs.compute-2.tjfgow in favor of higher affinity standby.
Dec 06 06:32:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e8  replacing 24157 [v2:192.168.122.102:6804/1638633036,v1:192.168.122.102:6805/1638633036] mds.0.4 up:active with 14385/cephfs.compute-0.qqwnku [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155]
Dec 06 06:32:07 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Replacing daemon mds.cephfs.compute-2.tjfgow as rank 0 with standby daemon mds.cephfs.compute-0.qqwnku
Dec 06 06:32:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e8 fail_mds_gid 24157 mds.cephfs.compute-2.tjfgow role 0
Dec 06 06:32:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 06 06:32:07 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Dec 06 06:32:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e9 new map
Dec 06 06:32:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        9
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:32:07.852539+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        67
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14385}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-0.qqwnku{0:14385} state up:replay seq 13 join_fscid=1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:32:08 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku Updating MDS map to version 9 from mon.0
Dec 06 06:32:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 06 06:32:08 compute-0 ceph-mds[92997]: mds.0.9 handle_mds_map i am now mds.0.9
Dec 06 06:32:08 compute-0 ceph-mds[92997]: mds.0.9 handle_mds_map state change up:standby --> up:replay
Dec 06 06:32:08 compute-0 ceph-mds[92997]: mds.0.9 replay_start
Dec 06 06:32:08 compute-0 ceph-mds[92997]: mds.0.9  waiting for osdmap 67 (which blocklists prior instance)
Dec 06 06:32:08 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 06 06:32:08 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:standby
Dec 06 06:32:08 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.qqwnku=up:replay} 1 up:standby
Dec 06 06:32:08 compute-0 ceph-mds[92997]: mds.0.cache creating system inode with ino:0x100
Dec 06 06:32:08 compute-0 ceph-mds[92997]: mds.0.cache creating system inode with ino:0x1
Dec 06 06:32:08 compute-0 ceph-mds[92997]: mds.0.9 Finished replaying journal
Dec 06 06:32:08 compute-0 ceph-mds[92997]: mds.0.9 making mds journal writeable
Dec 06 06:32:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 138 B/s, 0 objects/s recovering
Dec 06 06:32:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Dec 06 06:32:08 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 06 06:32:08 compute-0 ceph-mon[74339]: Deploying daemon haproxy.rgw.default.compute-0.ybrwqj on compute-0
Dec 06 06:32:08 compute-0 ceph-mon[74339]: 2.12 scrub starts
Dec 06 06:32:08 compute-0 ceph-mon[74339]: 2.12 scrub ok
Dec 06 06:32:08 compute-0 ceph-mon[74339]: 3.f scrub starts
Dec 06 06:32:08 compute-0 ceph-mon[74339]: 3.f scrub ok
Dec 06 06:32:08 compute-0 ceph-mon[74339]: Dropping low affinity active daemon mds.cephfs.compute-2.tjfgow in favor of higher affinity standby.
Dec 06 06:32:08 compute-0 ceph-mon[74339]: Replacing daemon mds.cephfs.compute-2.tjfgow as rank 0 with standby daemon mds.cephfs.compute-0.qqwnku
Dec 06 06:32:08 compute-0 ceph-mon[74339]: Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Dec 06 06:32:08 compute-0 ceph-mon[74339]: osdmap e67: 3 total, 3 up, 3 in
Dec 06 06:32:08 compute-0 ceph-mon[74339]: mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:standby
Dec 06 06:32:08 compute-0 ceph-mon[74339]: fsmap cephfs:1/1 {0=cephfs.compute-0.qqwnku=up:replay} 1 up:standby
Dec 06 06:32:08 compute-0 ceph-mon[74339]: pgmap v238: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 138 B/s, 0 objects/s recovering
Dec 06 06:32:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 06 06:32:08 compute-0 sshd-session[93020]: Connection closed by 192.168.122.30 port 41642
Dec 06 06:32:08 compute-0 sshd-session[93017]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:32:08 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Dec 06 06:32:08 compute-0 systemd[1]: session-34.scope: Consumed 9.424s CPU time.
Dec 06 06:32:08 compute-0 systemd-logind[798]: Session 34 logged out. Waiting for processes to exit.
Dec 06 06:32:08 compute-0 systemd-logind[798]: Removed session 34.
Dec 06 06:32:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 06 06:32:09 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/216 objects degraded (0.463%), 2 pgs degraded)
Dec 06 06:32:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 06 06:32:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 06 06:32:09 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 06 06:32:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e10 new map
Dec 06 06:32:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e10 print_map
                                           e10
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        10
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:32:09.094595+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        67
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14385}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-0.qqwnku{0:14385} state up:reconnect seq 14 join_fscid=1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-2.tjfgow{-1:24166} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/1077972025,v1:192.168.122.102:6805/1077972025] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:32:09 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku Updating MDS map to version 10 from mon.0
Dec 06 06:32:09 compute-0 ceph-mds[92997]: mds.0.9 handle_mds_map i am now mds.0.9
Dec 06 06:32:09 compute-0 ceph-mds[92997]: mds.0.9 handle_mds_map state change up:replay --> up:reconnect
Dec 06 06:32:09 compute-0 ceph-mds[92997]: mds.0.9 reconnect_start
Dec 06 06:32:09 compute-0 ceph-mds[92997]: mds.0.9 reopen_log
Dec 06 06:32:09 compute-0 ceph-mds[92997]: mds.0.9 reconnect_done
Dec 06 06:32:09 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:reconnect
Dec 06 06:32:09 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1077972025,v1:192.168.122.102:6805/1077972025] up:boot
Dec 06 06:32:09 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.qqwnku=up:reconnect} 2 up:standby
Dec 06 06:32:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.tjfgow"} v 0) v1
Dec 06 06:32:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.tjfgow"}]: dispatch
Dec 06 06:32:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e10 all = 0
Dec 06 06:32:09 compute-0 podman[93584]: 2025-12-06 06:32:09.9142846 +0000 UTC m=+2.893533822 container create ab25a0a7e93c3d5667979b3349002507a88d4bb735da11da2f3227586ec32d11 (image=quay.io/ceph/haproxy:2.3, name=admiring_cartwright)
Dec 06 06:32:09 compute-0 systemd[1]: Started libpod-conmon-ab25a0a7e93c3d5667979b3349002507a88d4bb735da11da2f3227586ec32d11.scope.
Dec 06 06:32:09 compute-0 podman[93584]: 2025-12-06 06:32:09.900894228 +0000 UTC m=+2.880143470 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 06 06:32:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:32:10 compute-0 podman[93584]: 2025-12-06 06:32:10.016503217 +0000 UTC m=+2.995752459 container init ab25a0a7e93c3d5667979b3349002507a88d4bb735da11da2f3227586ec32d11 (image=quay.io/ceph/haproxy:2.3, name=admiring_cartwright)
Dec 06 06:32:10 compute-0 podman[93584]: 2025-12-06 06:32:10.023460735 +0000 UTC m=+3.002709957 container start ab25a0a7e93c3d5667979b3349002507a88d4bb735da11da2f3227586ec32d11 (image=quay.io/ceph/haproxy:2.3, name=admiring_cartwright)
Dec 06 06:32:10 compute-0 podman[93584]: 2025-12-06 06:32:10.028241468 +0000 UTC m=+3.007490690 container attach ab25a0a7e93c3d5667979b3349002507a88d4bb735da11da2f3227586ec32d11 (image=quay.io/ceph/haproxy:2.3, name=admiring_cartwright)
Dec 06 06:32:10 compute-0 admiring_cartwright[93710]: 0 0
Dec 06 06:32:10 compute-0 systemd[1]: libpod-ab25a0a7e93c3d5667979b3349002507a88d4bb735da11da2f3227586ec32d11.scope: Deactivated successfully.
Dec 06 06:32:10 compute-0 podman[93584]: 2025-12-06 06:32:10.030120145 +0000 UTC m=+3.009369377 container died ab25a0a7e93c3d5667979b3349002507a88d4bb735da11da2f3227586ec32d11 (image=quay.io/ceph/haproxy:2.3, name=admiring_cartwright)
Dec 06 06:32:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a47610056b453cf8c9d298c4a761ab5d253f5ba0ecf2967e2569ed84c3bbeafa-merged.mount: Deactivated successfully.
Dec 06 06:32:10 compute-0 podman[93584]: 2025-12-06 06:32:10.074568768 +0000 UTC m=+3.053817990 container remove ab25a0a7e93c3d5667979b3349002507a88d4bb735da11da2f3227586ec32d11 (image=quay.io/ceph/haproxy:2.3, name=admiring_cartwright)
Dec 06 06:32:10 compute-0 systemd[1]: libpod-conmon-ab25a0a7e93c3d5667979b3349002507a88d4bb735da11da2f3227586ec32d11.scope: Deactivated successfully.
Dec 06 06:32:10 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 06 06:32:10 compute-0 systemd[1]: Reloading.
Dec 06 06:32:10 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 06 06:32:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 161 B/s, 0 objects/s recovering
Dec 06 06:32:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Dec 06 06:32:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 06 06:32:10 compute-0 systemd-rc-local-generator[93758]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:32:10 compute-0 systemd-sysv-generator[93761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:32:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 06 06:32:10 compute-0 systemd[1]: Reloading.
Dec 06 06:32:10 compute-0 systemd-rc-local-generator[93795]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:32:10 compute-0 systemd-sysv-generator[93800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:32:10 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.ybrwqj for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:32:10 compute-0 ceph-mon[74339]: 3.11 deep-scrub starts
Dec 06 06:32:10 compute-0 ceph-mon[74339]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/216 objects degraded (0.463%), 2 pgs degraded)
Dec 06 06:32:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 06 06:32:10 compute-0 ceph-mon[74339]: osdmap e68: 3 total, 3 up, 3 in
Dec 06 06:32:10 compute-0 ceph-mon[74339]: mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:reconnect
Dec 06 06:32:10 compute-0 ceph-mon[74339]: mds.? [v2:192.168.122.102:6804/1077972025,v1:192.168.122.102:6805/1077972025] up:boot
Dec 06 06:32:10 compute-0 ceph-mon[74339]: fsmap cephfs:1/1 {0=cephfs.compute-0.qqwnku=up:reconnect} 2 up:standby
Dec 06 06:32:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.tjfgow"}]: dispatch
Dec 06 06:32:11 compute-0 podman[93855]: 2025-12-06 06:32:11.036085862 +0000 UTC m=+0.051963160 container create 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:32:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ae157c341e518c1f283f3a5704b362f85549ac838cd4068eb8f0584286e3d6/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:11 compute-0 podman[93855]: 2025-12-06 06:32:11.10171433 +0000 UTC m=+0.117591678 container init 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:32:11 compute-0 podman[93855]: 2025-12-06 06:32:11.012053921 +0000 UTC m=+0.027931239 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 06 06:32:11 compute-0 podman[93855]: 2025-12-06 06:32:11.106897516 +0000 UTC m=+0.122774814 container start 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:32:11 compute-0 bash[93855]: 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c
Dec 06 06:32:11 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.ybrwqj for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:32:11 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj[93870]: [NOTICE] 339/063211 (2) : New worker #1 (4) forked
Dec 06 06:32:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000059s ======
Dec 06 06:32:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:11.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000059s
Dec 06 06:32:11 compute-0 sudo[93518]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:32:11 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 23 completed events
Dec 06 06:32:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e11 new map
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e11 print_map
                                           e11
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        11
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:32:10.364755+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        67
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14385}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-0.qqwnku{0:14385} state up:rejoin seq 15 join_fscid=1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-2.tjfgow{-1:24166} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/1077972025,v1:192.168.122.102:6805/1077972025] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 06 06:32:12 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku Updating MDS map to version 11 from mon.0
Dec 06 06:32:12 compute-0 ceph-mds[92997]: mds.0.9 handle_mds_map i am now mds.0.9
Dec 06 06:32:12 compute-0 ceph-mds[92997]: mds.0.9 handle_mds_map state change up:reconnect --> up:rejoin
Dec 06 06:32:12 compute-0 ceph-mds[92997]: mds.0.9 rejoin_start
Dec 06 06:32:12 compute-0 ceph-mds[92997]: mds.0.9 rejoin_joint_start
Dec 06 06:32:12 compute-0 ceph-mds[92997]: mds.0.9 rejoin_done
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:rejoin
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] up:standby
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.qqwnku=up:rejoin} 2 up:standby
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.qqwnku is now active in filesystem cephfs as rank 0
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 2 op/s; 215 B/s, 0 objects/s recovering
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 06 06:32:12 compute-0 ceph-mon[74339]: 3.11 deep-scrub ok
Dec 06 06:32:12 compute-0 ceph-mon[74339]: 7.17 scrub starts
Dec 06 06:32:12 compute-0 ceph-mon[74339]: 7.17 scrub ok
Dec 06 06:32:12 compute-0 ceph-mon[74339]: 7.4 scrub starts
Dec 06 06:32:12 compute-0 ceph-mon[74339]: 7.4 scrub ok
Dec 06 06:32:12 compute-0 ceph-mon[74339]: pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 161 B/s, 0 objects/s recovering
Dec 06 06:32:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.nyemkw on compute-2
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.nyemkw on compute-2
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:32:12
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', 'vms', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'backups', 'default.rgw.meta']
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 6/10 changes
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [balancer INFO root] Executing plan auto_2025-12-06_06:32:12
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [balancer INFO root] ceph osd pg-upmap-items 9.2 mappings [{'from': 0, 'to': 1}]
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [balancer INFO root] ceph osd pg-upmap-items 9.8 mappings [{'from': 0, 'to': 1}]
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [balancer INFO root] ceph osd pg-upmap-items 9.e mappings [{'from': 0, 'to': 1}]
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [balancer INFO root] ceph osd pg-upmap-items 9.14 mappings [{'from': 0, 'to': 1}]
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [balancer INFO root] ceph osd pg-upmap-items 9.19 mappings [{'from': 0, 'to': 1}]
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.2", "id": [0, 1]} v 0) v1
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [balancer INFO root] ceph osd pg-upmap-items 9.1d mappings [{'from': 0, 'to': 1}]
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.2", "id": [0, 1]}]: dispatch
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.8", "id": [0, 1]} v 0) v1
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.8", "id": [0, 1]}]: dispatch
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [0, 1]} v 0) v1
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [0, 1]}]: dispatch
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]} v 0) v1
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]}]: dispatch
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 1]} v 0) v1
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 1]}]: dispatch
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 1]} v 0) v1
Dec 06 06:32:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 1]}]: dispatch
Dec 06 06:32:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:32:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.17( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.970419884s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.163375854s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.13( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.970334053s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.163589478s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.17( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.970048904s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.163375854s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.13( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.970244408s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.163589478s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.b( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.969751358s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.163360596s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.b( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.969676018s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.163360596s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.969675064s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.163436890s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.969593048s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.163436890s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.3( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.969608307s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.163574219s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.3( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.969572067s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.163574219s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.7( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.971646309s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.165924072s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.7( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.971591949s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.165924072s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.971647263s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.166137695s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.971612930s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.166137695s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.971426010s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.165924072s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:12 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 69 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=69 pruub=9.971275330s) [2] r=-1 lpr=69 pi=[62,69)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.165924072s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 06 06:32:13 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Dec 06 06:32:13 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 06:32:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:13.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 2 op/s
Dec 06 06:32:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Dec 06 06:32:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 06 06:32:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 06 06:32:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.2", "id": [0, 1]}]': finished
Dec 06 06:32:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.8", "id": [0, 1]}]': finished
Dec 06 06:32:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [0, 1]}]': finished
Dec 06 06:32:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]}]': finished
Dec 06 06:32:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 1]}]': finished
Dec 06 06:32:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 1]}]': finished
Dec 06 06:32:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 06 06:32:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e70 crush map has features 3314933000854323200, adjusting msgr requires
Dec 06 06:32:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e70 crush map has features 432629239337189376, adjusting msgr requires
Dec 06 06:32:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e70 crush map has features 432629239337189376, adjusting msgr requires
Dec 06 06:32:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e70 crush map has features 432629239337189376, adjusting msgr requires
Dec 06 06:32:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 70 crush map has features 432629239337189376, adjusting msgr requires for clients
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 70 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 70 crush map has features 3314933000854323200, adjusting msgr requires for osds
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.17( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.17( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=8.771865845s) [1] r=-1 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.162796021s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.13( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=8.771727562s) [1] r=-1 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.162796021s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.13( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.2( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=8.772099495s) [1] r=-1 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.163421631s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.2( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=8.772032738s) [1] r=-1 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.163421631s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.e( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=8.771730423s) [1] r=-1 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.163223267s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=8.769965172s) [1] r=-1 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.161514282s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.b( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.e( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=8.771546364s) [1] r=-1 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.163223267s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.b( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=8.769657135s) [1] r=-1 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.161514282s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=8.773643494s) [1] r=-1 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.165740967s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.3( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=8.773483276s) [1] r=-1 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.165740967s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.3( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.7( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=8.773175240s) [1] r=-1 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 249.166137695s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70 pruub=8.773142815s) [1] r=-1 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.166137695s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.7( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:14 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 70 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:14 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 06 06:32:14 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 06 06:32:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e12 new map
Dec 06 06:32:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).mds e12 print_map
                                           e12
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        12
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-06T06:29:04.228355+0000
                                           modified        2025-12-06T06:32:13.099122+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        67
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14385}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-0.qqwnku{0:14385} state up:active seq 16 join_fscid=1 addr [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.vsxbzt{-1:24143} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-2.tjfgow{-1:24166} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/1077972025,v1:192.168.122.102:6805/1077972025] compat {c=[1],r=[1],i=[7ff]}]
Dec 06 06:32:14 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku Updating MDS map to version 12 from mon.0
Dec 06 06:32:14 compute-0 ceph-mds[92997]: mds.0.9 handle_mds_map i am now mds.0.9
Dec 06 06:32:14 compute-0 ceph-mds[92997]: mds.0.9 handle_mds_map state change up:rejoin --> up:active
Dec 06 06:32:14 compute-0 ceph-mds[92997]: mds.0.9 recovery_done -- successful recovery!
Dec 06 06:32:14 compute-0 ceph-mds[92997]: mds.0.9 active_start
Dec 06 06:32:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 06 06:32:14 compute-0 ceph-mon[74339]: osdmap e69: 3 total, 3 up, 3 in
Dec 06 06:32:14 compute-0 ceph-mon[74339]: mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:rejoin
Dec 06 06:32:14 compute-0 ceph-mon[74339]: mds.? [v2:192.168.122.101:6804/1511366696,v1:192.168.122.101:6805/1511366696] up:standby
Dec 06 06:32:14 compute-0 ceph-mon[74339]: fsmap cephfs:1/1 {0=cephfs.compute-0.qqwnku=up:rejoin} 2 up:standby
Dec 06 06:32:14 compute-0 ceph-mon[74339]: daemon mds.cephfs.compute-0.qqwnku is now active in filesystem cephfs as rank 0
Dec 06 06:32:14 compute-0 ceph-mon[74339]: pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 2 op/s; 215 B/s, 0 objects/s recovering
Dec 06 06:32:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 06 06:32:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:14 compute-0 ceph-mon[74339]: 5.9 deep-scrub starts
Dec 06 06:32:14 compute-0 ceph-mon[74339]: 5.9 deep-scrub ok
Dec 06 06:32:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.2", "id": [0, 1]}]: dispatch
Dec 06 06:32:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.8", "id": [0, 1]}]: dispatch
Dec 06 06:32:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [0, 1]}]: dispatch
Dec 06 06:32:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]}]: dispatch
Dec 06 06:32:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 1]}]: dispatch
Dec 06 06:32:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 1]}]: dispatch
Dec 06 06:32:14 compute-0 ceph-mon[74339]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Dec 06 06:32:14 compute-0 ceph-mon[74339]: Cluster is now healthy
Dec 06 06:32:14 compute-0 ceph-mds[92997]: mds.0.9 cluster recovered.
Dec 06 06:32:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:active
Dec 06 06:32:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:32:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:15.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 06 06:32:15 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 06 06:32:15 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 06 06:32:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 06 06:32:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 06 06:32:15 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.2( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.e( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.2( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.e( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.7( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.b( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.3( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.17( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.13( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:15 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 71 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[62,70)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:15 compute-0 ceph-mon[74339]: Deploying daemon haproxy.rgw.default.compute-2.nyemkw on compute-2
Dec 06 06:32:15 compute-0 ceph-mon[74339]: 5.2 scrub starts
Dec 06 06:32:15 compute-0 ceph-mon[74339]: 5.2 scrub ok
Dec 06 06:32:15 compute-0 ceph-mon[74339]: 4.15 deep-scrub starts
Dec 06 06:32:15 compute-0 ceph-mon[74339]: 4.15 deep-scrub ok
Dec 06 06:32:15 compute-0 ceph-mon[74339]: pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 2 op/s
Dec 06 06:32:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 06 06:32:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 06 06:32:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.2", "id": [0, 1]}]': finished
Dec 06 06:32:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.8", "id": [0, 1]}]': finished
Dec 06 06:32:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.e", "id": [0, 1]}]': finished
Dec 06 06:32:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]}]': finished
Dec 06 06:32:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.19", "id": [0, 1]}]': finished
Dec 06 06:32:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1d", "id": [0, 1]}]': finished
Dec 06 06:32:15 compute-0 ceph-mon[74339]: osdmap e70: 3 total, 3 up, 3 in
Dec 06 06:32:15 compute-0 ceph-mon[74339]: 7.3 scrub starts
Dec 06 06:32:15 compute-0 ceph-mon[74339]: 7.3 scrub ok
Dec 06 06:32:15 compute-0 ceph-mon[74339]: mds.? [v2:192.168.122.100:6806/3519893155,v1:192.168.122.100:6807/3519893155] up:active
Dec 06 06:32:15 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:32:15 compute-0 ceph-mon[74339]: 2.6 scrub starts
Dec 06 06:32:15 compute-0 ceph-mon[74339]: 2.6 scrub ok
Dec 06 06:32:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 06 06:32:15 compute-0 ceph-mon[74339]: osdmap e71: 3 total, 3 up, 3 in
Dec 06 06:32:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 8 active+remapped, 6 remapped+peering, 291 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 8 op/s; 201 B/s, 6 objects/s recovering
Dec 06 06:32:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:16.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:32:16 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.2 deep-scrub starts
Dec 06 06:32:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:32:16 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.2 deep-scrub ok
Dec 06 06:32:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Dec 06 06:32:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Dec 06 06:32:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:16 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 06:32:16 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 06:32:16 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 06:32:16 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 06:32:16 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.cossgt on compute-2
Dec 06 06:32:16 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.cossgt on compute-2
Dec 06 06:32:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 06 06:32:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.17( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.060056686s) [2] async=[2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 257.761260986s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.17( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.059917450s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.761260986s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.13( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.059222221s) [2] async=[2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 257.761383057s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.13( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.058994293s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.761383057s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.058852196s) [2] async=[2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 257.761413574s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.f( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.058762550s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.761413574s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.3( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.058232307s) [2] async=[2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 257.761230469s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.3( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.058059692s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.761230469s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.b( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.058033943s) [2] async=[2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 257.761199951s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.7( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.052206039s) [2] async=[2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 257.755584717s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.7( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.052155495s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.755584717s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.b( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=6 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.057670593s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.761199951s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.057318687s) [2] async=[2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 257.761199951s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.057232857s) [2] async=[2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 257.761291504s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.057269096s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.761199951s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:16 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=70/71 n=5 ec=62/51 lis/c=70/62 les/c/f=71/63/0 sis=72 pruub=15.057157516s) [2] r=-1 lpr=72 pi=[62,72)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.761291504s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:17.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:17 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:17 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.2( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:17 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:17 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.e( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:17 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:17 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 72 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=71) [1]/[0] async=[1] r=0 lpr=71 pi=[62,71)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:17 compute-0 ceph-mgr[74630]: [progress WARNING root] Starting Global Recovery Event,14 pgs not in active + clean state
Dec 06 06:32:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 06 06:32:17 compute-0 ceph-mon[74339]: pgmap v246: 305 pgs: 8 active+remapped, 6 remapped+peering, 291 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 8 op/s; 201 B/s, 6 objects/s recovering
Dec 06 06:32:17 compute-0 ceph-mon[74339]: 7.2 deep-scrub starts
Dec 06 06:32:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:17 compute-0 ceph-mon[74339]: 7.2 deep-scrub ok
Dec 06 06:32:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:17 compute-0 ceph-mon[74339]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 06:32:17 compute-0 ceph-mon[74339]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 06:32:17 compute-0 ceph-mon[74339]: Deploying daemon keepalived.rgw.default.compute-2.cossgt on compute-2
Dec 06 06:32:17 compute-0 ceph-mon[74339]: osdmap e72: 3 total, 3 up, 3 in
Dec 06 06:32:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 06 06:32:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 06 06:32:17 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 73 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=73 pruub=15.559041977s) [1] async=[1] r=-1 lpr=73 pi=[62,73)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 259.408447266s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:17 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 73 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=73 pruub=15.558923721s) [1] r=-1 lpr=73 pi=[62,73)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.408447266s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 06 06:32:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 06 06:32:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 06 06:32:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 74 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74 pruub=15.099307060s) [1] async=[1] r=-1 lpr=74 pi=[62,74)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 259.413848877s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 74 pg[9.2( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74 pruub=15.099105835s) [1] async=[1] r=-1 lpr=74 pi=[62,74)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 259.413757324s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 74 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74 pruub=15.099086761s) [1] r=-1 lpr=74 pi=[62,74)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.413848877s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 74 pg[9.2( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74 pruub=15.099024773s) [1] r=-1 lpr=74 pi=[62,74)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.413757324s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 74 pg[9.e( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74 pruub=15.098817825s) [1] async=[1] r=-1 lpr=74 pi=[62,74)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 259.413940430s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 74 pg[9.e( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74 pruub=15.098756790s) [1] r=-1 lpr=74 pi=[62,74)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.413940430s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 74 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74 pruub=15.098661423s) [1] async=[1] r=-1 lpr=74 pi=[62,74)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 259.414062500s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 74 pg[9.8( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=6 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74 pruub=15.098593712s) [1] r=-1 lpr=74 pi=[62,74)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.414062500s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 74 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74 pruub=15.098112106s) [1] async=[1] r=-1 lpr=74 pi=[62,74)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 259.414093018s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:18 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 74 pg[9.1d( v 58'1159 (0'0,58'1159] local-lis/les=71/72 n=5 ec=62/51 lis/c=71/62 les/c/f=72/63/0 sis=74 pruub=15.098017693s) [1] r=-1 lpr=74 pi=[62,74)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 259.414093018s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 8 active+remapped, 6 remapped+peering, 291 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 16 op/s; 302 B/s, 9 objects/s recovering
Dec 06 06:32:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:32:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:18.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:32:18 compute-0 ceph-mon[74339]: osdmap e73: 3 total, 3 up, 3 in
Dec 06 06:32:18 compute-0 ceph-mon[74339]: osdmap e74: 3 total, 3 up, 3 in
Dec 06 06:32:18 compute-0 ceph-mon[74339]: pgmap v250: 305 pgs: 8 active+remapped, 6 remapped+peering, 291 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 16 op/s; 302 B/s, 9 objects/s recovering
Dec 06 06:32:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 06 06:32:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:19.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 06 06:32:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 06 06:32:19 compute-0 ceph-mon[74339]: 6.8 scrub starts
Dec 06 06:32:19 compute-0 ceph-mon[74339]: 6.8 scrub ok
Dec 06 06:32:19 compute-0 ceph-mon[74339]: osdmap e75: 3 total, 3 up, 3 in
Dec 06 06:32:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 6 remapped+peering, 299 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 6 op/s
Dec 06 06:32:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:32:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:20.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:32:21 compute-0 ceph-mon[74339]: 2.18 scrub starts
Dec 06 06:32:21 compute-0 ceph-mon[74339]: 2.18 scrub ok
Dec 06 06:32:21 compute-0 ceph-mon[74339]: pgmap v252: 305 pgs: 6 remapped+peering, 299 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 6 op/s
Dec 06 06:32:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:21.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:32:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 96 B/s, 5 objects/s recovering
Dec 06 06:32:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Dec 06 06:32:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 06 06:32:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:22.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Dec 06 06:32:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.fknpoc on compute-0
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.fknpoc on compute-0
Dec 06 06:32:22 compute-0 sudo[93889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:22 compute-0 sudo[93889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:22 compute-0 sudo[93889]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 33c5883e-dfd4-4dce-95ba-35488fa50c45 (Global Recovery Event) in 5 seconds
Dec 06 06:32:22 compute-0 sudo[93914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:22 compute-0 sudo[93914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:22 compute-0 sudo[93914]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:22 compute-0 sudo[93939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:22 compute-0 sudo[93939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:22 compute-0 sudo[93939]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:22 compute-0 sudo[93964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:22 compute-0 sudo[93964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:32:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:32:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 06 06:32:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:23 compute-0 ceph-mon[74339]: pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 96 B/s, 5 objects/s recovering
Dec 06 06:32:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 06 06:32:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:23 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 06 06:32:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 06 06:32:23 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 06 06:32:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:23.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:23 compute-0 sshd-session[94062]: Accepted publickey for zuul from 192.168.122.30 port 51060 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:32:23 compute-0 systemd-logind[798]: New session 35 of user zuul.
Dec 06 06:32:23 compute-0 systemd[1]: Started Session 35 of User zuul.
Dec 06 06:32:23 compute-0 sshd-session[94062]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:32:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 06 06:32:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 90 B/s, 5 objects/s recovering
Dec 06 06:32:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Dec 06 06:32:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 06 06:32:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:24.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:24 compute-0 ceph-mon[74339]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 06 06:32:24 compute-0 ceph-mon[74339]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 06 06:32:24 compute-0 ceph-mon[74339]: Deploying daemon keepalived.rgw.default.compute-0.fknpoc on compute-0
Dec 06 06:32:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 06 06:32:24 compute-0 ceph-mon[74339]: osdmap e76: 3 total, 3 up, 3 in
Dec 06 06:32:24 compute-0 ceph-mon[74339]: 2.1d deep-scrub starts
Dec 06 06:32:24 compute-0 ceph-mon[74339]: 2.1d deep-scrub ok
Dec 06 06:32:24 compute-0 python3.9[94231]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 06 06:32:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 06 06:32:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 06 06:32:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 76 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.858316422s) [2] r=-1 lpr=76 pi=[62,76)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 265.163177490s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 76 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.858219147s) [2] r=-1 lpr=76 pi=[62,76)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 265.163177490s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 76 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.858000755s) [2] r=-1 lpr=76 pi=[62,76)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 265.163726807s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 76 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.857930183s) [2] r=-1 lpr=76 pi=[62,76)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 265.163726807s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 76 pg[9.5( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.860386848s) [2] r=-1 lpr=76 pi=[62,76)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 265.166534424s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:25 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 76 pg[9.5( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=76 pruub=13.860342979s) [2] r=-1 lpr=76 pi=[62,76)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 265.166534424s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:32:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:25.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:32:25 compute-0 ceph-mon[74339]: 4.a scrub starts
Dec 06 06:32:25 compute-0 ceph-mon[74339]: 4.a scrub ok
Dec 06 06:32:25 compute-0 ceph-mon[74339]: pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 90 B/s, 5 objects/s recovering
Dec 06 06:32:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 06 06:32:25 compute-0 ceph-mon[74339]: osdmap e77: 3 total, 3 up, 3 in
Dec 06 06:32:25 compute-0 ceph-mon[74339]: 3.e scrub starts
Dec 06 06:32:25 compute-0 ceph-mon[74339]: 3.e scrub ok
Dec 06 06:32:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 06 06:32:25 compute-0 python3.9[94421]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:32:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 107 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 4 objects/s recovering
Dec 06 06:32:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:26.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:26 compute-0 podman[94030]: 2025-12-06 06:32:26.815627706 +0000 UTC m=+3.872189739 container create 30bafc3eb2a1f6348dc1e67b04e4ad0cb3c5f85022d28aaea21c166bf8bd83f1 (image=quay.io/ceph/keepalived:2.2.4, name=elegant_matsumoto, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, distribution-scope=public, release=1793, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., name=keepalived)
Dec 06 06:32:26 compute-0 systemd[1]: Started libpod-conmon-30bafc3eb2a1f6348dc1e67b04e4ad0cb3c5f85022d28aaea21c166bf8bd83f1.scope.
Dec 06 06:32:26 compute-0 podman[94030]: 2025-12-06 06:32:26.797004938 +0000 UTC m=+3.853567001 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 06 06:32:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:32:26 compute-0 podman[94030]: 2025-12-06 06:32:26.919876134 +0000 UTC m=+3.976438197 container init 30bafc3eb2a1f6348dc1e67b04e4ad0cb3c5f85022d28aaea21c166bf8bd83f1 (image=quay.io/ceph/keepalived:2.2.4, name=elegant_matsumoto, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, name=keepalived, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, version=2.2.4, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.buildah.version=1.28.2, vendor=Red Hat, Inc.)
Dec 06 06:32:26 compute-0 podman[94030]: 2025-12-06 06:32:26.929692718 +0000 UTC m=+3.986254761 container start 30bafc3eb2a1f6348dc1e67b04e4ad0cb3c5f85022d28aaea21c166bf8bd83f1 (image=quay.io/ceph/keepalived:2.2.4, name=elegant_matsumoto, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, distribution-scope=public, release=1793, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 06:32:26 compute-0 sudo[94610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfzzkgtgwxoxdxbpsnsxqtmarpjzesih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002746.5493205-98-236527812194689/AnsiballZ_command.py'
Dec 06 06:32:26 compute-0 podman[94030]: 2025-12-06 06:32:26.934341398 +0000 UTC m=+3.990903451 container attach 30bafc3eb2a1f6348dc1e67b04e4ad0cb3c5f85022d28aaea21c166bf8bd83f1 (image=quay.io/ceph/keepalived:2.2.4, name=elegant_matsumoto, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, build-date=2023-02-22T09:23:20, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.openshift.expose-services=, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc.)
Dec 06 06:32:26 compute-0 sudo[94610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:32:26 compute-0 elegant_matsumoto[94581]: 0 0
Dec 06 06:32:26 compute-0 systemd[1]: libpod-30bafc3eb2a1f6348dc1e67b04e4ad0cb3c5f85022d28aaea21c166bf8bd83f1.scope: Deactivated successfully.
Dec 06 06:32:26 compute-0 podman[94030]: 2025-12-06 06:32:26.942667967 +0000 UTC m=+3.999230010 container died 30bafc3eb2a1f6348dc1e67b04e4ad0cb3c5f85022d28aaea21c166bf8bd83f1 (image=quay.io/ceph/keepalived:2.2.4, name=elegant_matsumoto, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, version=2.2.4, io.buildah.version=1.28.2, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=keepalived)
Dec 06 06:32:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-46321d4da28ab7e2c72cfadb9241afa813169b6e736b389ea7940df3aadadc0e-merged.mount: Deactivated successfully.
Dec 06 06:32:27 compute-0 podman[94030]: 2025-12-06 06:32:27.001049399 +0000 UTC m=+4.057611462 container remove 30bafc3eb2a1f6348dc1e67b04e4ad0cb3c5f85022d28aaea21c166bf8bd83f1 (image=quay.io/ceph/keepalived:2.2.4, name=elegant_matsumoto, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, name=keepalived, release=1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-type=git, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.28.2)
Dec 06 06:32:27 compute-0 systemd[1]: libpod-conmon-30bafc3eb2a1f6348dc1e67b04e4ad0cb3c5f85022d28aaea21c166bf8bd83f1.scope: Deactivated successfully.
Dec 06 06:32:27 compute-0 systemd[1]: Reloading.
Dec 06 06:32:27 compute-0 systemd-rc-local-generator[94651]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:32:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:27.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:27 compute-0 systemd-sysv-generator[94661]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:32:27 compute-0 python3.9[94614]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:32:27 compute-0 sudo[94610]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:27 compute-0 systemd[1]: Reloading.
Dec 06 06:32:27 compute-0 systemd-rc-local-generator[94720]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:32:27 compute-0 systemd-sysv-generator[94723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:32:27 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 24 completed events
Dec 06 06:32:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:32:27 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.fknpoc for 40a1bae4-cf76-5610-8dab-c75116dfe0bb...
Dec 06 06:32:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 06 06:32:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 06 06:32:27 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 06 06:32:27 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 78 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] r=0 lpr=78 pi=[62,78)/2 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:27 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 78 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] r=0 lpr=78 pi=[62,78)/2 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:27 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 78 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78 pruub=11.003668785s) [1] r=-1 lpr=78 pi=[62,78)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 265.163146973s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:27 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 78 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78 pruub=11.003635406s) [1] r=-1 lpr=78 pi=[62,78)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 265.163146973s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:27 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 78 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] r=0 lpr=78 pi=[62,78)/2 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:27 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 78 pg[9.5( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] r=0 lpr=78 pi=[62,78)/2 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:27 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 78 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] r=0 lpr=78 pi=[62,78)/2 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:27 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 78 pg[9.6( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78 pruub=11.005917549s) [1] r=-1 lpr=78 pi=[62,78)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 265.166046143s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:27 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 78 pg[9.6( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78 pruub=11.005892754s) [1] r=-1 lpr=78 pi=[62,78)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 265.166046143s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:27 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 78 pg[9.5( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] r=0 lpr=78 pi=[62,78)/2 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:27 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 78 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78 pruub=11.005753517s) [1] r=-1 lpr=78 pi=[62,78)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 265.166625977s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:27 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 78 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78 pruub=11.005488396s) [1] r=-1 lpr=78 pi=[62,78)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 265.166625977s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:27 compute-0 sudo[94903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evndkpsiuhfdmwdofwxpirojraantaos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002747.5218995-134-184962497562195/AnsiballZ_stat.py'
Dec 06 06:32:27 compute-0 sudo[94903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:32:28 compute-0 podman[94904]: 2025-12-06 06:32:28.022555932 +0000 UTC m=+0.048926489 container create bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.28.2, com.redhat.component=keepalived-container, name=keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vcs-type=git, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 06:32:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd43ed2dcf4b4964178ba399fde8ca4f3d4c167b503c4180104ec670ac52d0cd/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:28 compute-0 podman[94904]: 2025-12-06 06:32:28.075735467 +0000 UTC m=+0.102106074 container init bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, com.redhat.component=keepalived-container, release=1793, vcs-type=git, name=keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec 06 06:32:28 compute-0 podman[94904]: 2025-12-06 06:32:28.088534571 +0000 UTC m=+0.114905168 container start bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.openshift.expose-services=, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, distribution-scope=public, io.openshift.tags=Ceph keepalived, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec 06 06:32:28 compute-0 bash[94904]: bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6
Dec 06 06:32:28 compute-0 podman[94904]: 2025-12-06 06:32:28.003060247 +0000 UTC m=+0.029430824 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 06 06:32:28 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.fknpoc for 40a1bae4-cf76-5610-8dab-c75116dfe0bb.
Dec 06 06:32:28 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc[94921]: Sat Dec  6 06:32:28 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 06 06:32:28 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc[94921]: Sat Dec  6 06:32:28 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec 06 06:32:28 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc[94921]: Sat Dec  6 06:32:28 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 06 06:32:28 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc[94921]: Sat Dec  6 06:32:28 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 06 06:32:28 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc[94921]: Sat Dec  6 06:32:28 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 06 06:32:28 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc[94921]: Sat Dec  6 06:32:28 2025: Starting VRRP child process, pid=4
Dec 06 06:32:28 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc[94921]: Sat Dec  6 06:32:28 2025: Startup complete
Dec 06 06:32:28 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc[94921]: Sat Dec  6 06:32:28 2025: (VI_0) Entering BACKUP STATE (init)
Dec 06 06:32:28 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc[94921]: Sat Dec  6 06:32:28 2025: VRRP_Script(check_backend) succeeded
Dec 06 06:32:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 107 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:28 compute-0 sudo[93964]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:32:28 compute-0 python3.9[94914]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:32:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:28.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:28 compute-0 sudo[94903]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:28 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.9 deep-scrub starts
Dec 06 06:32:28 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.9 deep-scrub ok
Dec 06 06:32:28 compute-0 ceph-mon[74339]: pgmap v257: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 107 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 4 objects/s recovering
Dec 06 06:32:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:28 compute-0 ceph-mgr[74630]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Dec 06 06:32:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:32:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Dec 06 06:32:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:28 compute-0 ceph-mgr[74630]: [progress INFO root] complete: finished ev 5bd86f45-f4f9-45a1-9eab-1f7c895eb990 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec 06 06:32:28 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 5bd86f45-f4f9-45a1-9eab-1f7c895eb990 (Updating ingress.rgw.default deployment (+4 -> 4)) in 23 seconds
Dec 06 06:32:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Dec 06 06:32:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:28 compute-0 sudo[95009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:28 compute-0 sudo[95008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:28 compute-0 sudo[95009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:28 compute-0 sudo[95008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:28 compute-0 sudo[95009]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:28 compute-0 sudo[95008]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:28 compute-0 sudo[95058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:28 compute-0 sudo[95059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:32:28 compute-0 sudo[95059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:28 compute-0 sudo[95058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:28 compute-0 sudo[95059]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:28 compute-0 sudo[95058]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 06 06:32:28 compute-0 sudo[95181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmgyknljehfggffuwnnjvznxrrgetmnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002748.5412278-167-202610207565366/AnsiballZ_file.py'
Dec 06 06:32:28 compute-0 sudo[95181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:32:29 compute-0 python3.9[95183]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:32:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 06 06:32:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 06 06:32:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 79 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] r=0 lpr=79 pi=[62,79)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 79 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] r=0 lpr=79 pi=[62,79)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 79 pg[9.6( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] r=0 lpr=79 pi=[62,79)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 79 pg[9.6( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] r=0 lpr=79 pi=[62,79)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 79 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] r=0 lpr=79 pi=[62,79)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 79 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] r=0 lpr=79 pi=[62,79)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:29.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:29 compute-0 sudo[95184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 79 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=78/79 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] async=[2] r=0 lpr=78 pi=[62,78)/2 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:29 compute-0 sudo[95184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 79 pg[9.5( v 58'1159 (0'0,58'1159] local-lis/les=78/79 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] async=[2] r=0 lpr=78 pi=[62,78)/2 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 79 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=78/79 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=78) [2]/[0] async=[2] r=0 lpr=78 pi=[62,78)/2 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:29 compute-0 sudo[95184]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:29 compute-0 sudo[95181]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:29 compute-0 sudo[95209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:29 compute-0 sudo[95209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:29 compute-0 sudo[95209]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:29 compute-0 sudo[95258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:29 compute-0 sudo[95258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:29 compute-0 sudo[95258]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:29 compute-0 sudo[95283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:32:29 compute-0 sudo[95283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:29 compute-0 ceph-mon[74339]: 2.1c scrub starts
Dec 06 06:32:29 compute-0 ceph-mon[74339]: 2.1c scrub ok
Dec 06 06:32:29 compute-0 ceph-mon[74339]: 3.d scrub starts
Dec 06 06:32:29 compute-0 ceph-mon[74339]: 3.d scrub ok
Dec 06 06:32:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 06 06:32:29 compute-0 ceph-mon[74339]: osdmap e78: 3 total, 3 up, 3 in
Dec 06 06:32:29 compute-0 ceph-mon[74339]: pgmap v259: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 107 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:29 compute-0 ceph-mon[74339]: 2.9 deep-scrub starts
Dec 06 06:32:29 compute-0 ceph-mon[74339]: 2.9 deep-scrub ok
Dec 06 06:32:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:29 compute-0 ceph-mon[74339]: osdmap e79: 3 total, 3 up, 3 in
Dec 06 06:32:29 compute-0 sudo[95475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvgnyyfsmxsfxusczizeqfycpvdxprxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002749.368368-194-88888234479472/AnsiballZ_file.py'
Dec 06 06:32:29 compute-0 sudo[95475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:32:29 compute-0 python3.9[95479]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:32:29 compute-0 sudo[95475]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:29 compute-0 podman[95506]: 2025-12-06 06:32:29.853311451 +0000 UTC m=+0.101394782 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:32:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 06 06:32:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 107 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:30 compute-0 podman[95506]: 2025-12-06 06:32:30.186508756 +0000 UTC m=+0.434592067 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 06:32:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:30.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:32:30 compute-0 python3.9[95731]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:32:30 compute-0 network[95798]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:32:30 compute-0 network[95799]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:32:30 compute-0 network[95800]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:32:30 compute-0 podman[95836]: 2025-12-06 06:32:30.960621128 +0000 UTC m=+0.058459685 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:32:31 compute-0 podman[95836]: 2025-12-06 06:32:31.000482874 +0000 UTC m=+0.098321411 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:32:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:31.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:31 compute-0 podman[95907]: 2025-12-06 06:32:31.664760151 +0000 UTC m=+0.070542567 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, vendor=Red Hat, Inc., description=keepalived for Ceph, release=1793, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9)
Dec 06 06:32:31 compute-0 podman[95907]: 2025-12-06 06:32:31.68039636 +0000 UTC m=+0.086178756 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, release=1793, vcs-type=git, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec 06 06:32:31 compute-0 sudo[95283]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:31 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc[94921]: Sat Dec  6 06:32:31 2025: (VI_0) Entering MASTER STATE
Dec 06 06:32:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:32:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:32:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 06 06:32:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 80 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=78/79 n=5 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80 pruub=13.093908310s) [2] async=[2] r=-1 lpr=80 pi=[62,80)/2 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 271.387176514s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 80 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=78/79 n=6 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80 pruub=13.096385956s) [2] async=[2] r=-1 lpr=80 pi=[62,80)/2 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 271.390441895s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 80 pg[9.d( v 58'1159 (0'0,58'1159] local-lis/les=78/79 n=6 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80 pruub=13.096217155s) [2] r=-1 lpr=80 pi=[62,80)/2 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 271.390441895s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 80 pg[9.15( v 58'1159 (0'0,58'1159] local-lis/les=78/79 n=5 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80 pruub=13.093067169s) [2] r=-1 lpr=80 pi=[62,80)/2 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 271.387176514s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 06 06:32:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 80 pg[9.5( v 58'1159 (0'0,58'1159] local-lis/les=78/79 n=6 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80 pruub=13.095782280s) [2] async=[2] r=-1 lpr=80 pi=[62,80)/2 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 271.390380859s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 80 pg[9.5( v 58'1159 (0'0,58'1159] local-lis/les=78/79 n=6 ec=62/51 lis/c=78/62 les/c/f=79/63/0 sis=80 pruub=13.095601082s) [2] r=-1 lpr=80 pi=[62,80)/2 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 271.390380859s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 3 remapped+peering, 3 active+remapped, 1 peering, 298 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 8.8 KiB/s rd, 170 B/s wr, 15 op/s; 98 B/s, 3 objects/s recovering
Dec 06 06:32:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:32.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:32 compute-0 ceph-mon[74339]: 3.c scrub starts
Dec 06 06:32:32 compute-0 ceph-mon[74339]: 3.c scrub ok
Dec 06 06:32:32 compute-0 ceph-mon[74339]: pgmap v261: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 107 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:32:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:32:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 80 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[62,79)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 80 pg[9.6( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[62,79)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 80 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=79) [1]/[0] async=[1] r=0 lpr=79 pi=[62,79)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:32:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:33.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:32:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:32:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 06 06:32:33 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 25 completed events
Dec 06 06:32:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:32:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 3 remapped+peering, 3 active+remapped, 1 peering, 298 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 163 B/s wr, 15 op/s; 94 B/s, 3 objects/s recovering
Dec 06 06:32:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:34.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 06 06:32:34 compute-0 ceph-mon[74339]: 6.a scrub starts
Dec 06 06:32:34 compute-0 ceph-mon[74339]: 6.a scrub ok
Dec 06 06:32:34 compute-0 ceph-mon[74339]: osdmap e80: 3 total, 3 up, 3 in
Dec 06 06:32:34 compute-0 ceph-mon[74339]: pgmap v263: 305 pgs: 3 remapped+peering, 3 active+remapped, 1 peering, 298 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 8.8 KiB/s rd, 170 B/s wr, 15 op/s; 98 B/s, 3 objects/s recovering
Dec 06 06:32:34 compute-0 ceph-mon[74339]: 7.19 scrub starts
Dec 06 06:32:34 compute-0 ceph-mon[74339]: 7.19 scrub ok
Dec 06 06:32:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:34 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 06 06:32:34 compute-0 sudo[96045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:34 compute-0 sudo[96045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:34 compute-0 sudo[96045]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:34 compute-0 sudo[96090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:34 compute-0 sudo[96090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:34 compute-0 sudo[96090]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:34 compute-0 sudo[96115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:34 compute-0 sudo[96115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:34 compute-0 sudo[96115]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:34 compute-0 sudo[96140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:32:34 compute-0 sudo[96140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:35 compute-0 sudo[96140]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:35.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:35 compute-0 python3.9[96321]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:32:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:35 compute-0 ceph-mon[74339]: pgmap v264: 305 pgs: 3 remapped+peering, 3 active+remapped, 1 peering, 298 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 163 B/s wr, 15 op/s; 94 B/s, 3 objects/s recovering
Dec 06 06:32:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:35 compute-0 ceph-mon[74339]: osdmap e81: 3 total, 3 up, 3 in
Dec 06 06:32:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 06 06:32:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 06 06:32:35 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 06 06:32:35 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 82 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=5 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82 pruub=13.107554436s) [1] async=[1] r=-1 lpr=82 pi=[62,82)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 275.207763672s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:35 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 82 pg[9.16( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=5 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82 pruub=13.107456207s) [1] r=-1 lpr=82 pi=[62,82)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 275.207763672s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:35 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 82 pg[9.6( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=6 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82 pruub=13.106866837s) [1] async=[1] r=-1 lpr=82 pi=[62,82)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 275.207794189s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:35 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 82 pg[9.6( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=6 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82 pruub=13.106804848s) [1] r=-1 lpr=82 pi=[62,82)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 275.207794189s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:35 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 82 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=5 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82 pruub=13.106384277s) [1] async=[1] r=-1 lpr=82 pi=[62,82)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 275.207763672s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:35 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 82 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=79/80 n=5 ec=62/51 lis/c=79/62 les/c/f=80/63/0 sis=82 pruub=13.106333733s) [1] r=-1 lpr=82 pi=[62,82)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 275.207763672s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:36 compute-0 python3.9[96471]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:32:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 3 remapped+peering, 302 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 98 B/s, 3 objects/s recovering
Dec 06 06:32:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:32:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:36.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:32:36 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 06 06:32:36 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 06 06:32:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 06 06:32:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:32:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:37.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:32:37 compute-0 python3.9[96626]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:32:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:32:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 06 06:32:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 06 06:32:37 compute-0 ceph-mon[74339]: osdmap e82: 3 total, 3 up, 3 in
Dec 06 06:32:37 compute-0 ceph-mon[74339]: pgmap v267: 305 pgs: 3 remapped+peering, 302 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 98 B/s, 3 objects/s recovering
Dec 06 06:32:37 compute-0 ceph-mon[74339]: 7.1e scrub starts
Dec 06 06:32:37 compute-0 ceph-mon[74339]: 7.1e scrub ok
Dec 06 06:32:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:32:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 06:32:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 06:32:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:32:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:32:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:32:38 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 06 06:32:38 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 06 06:32:38 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec 06 06:32:38 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec 06 06:32:38 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec 06 06:32:38 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec 06 06:32:38 compute-0 sudo[96709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-0 sudo[96709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[96709]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 3 peering, 302 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 3 objects/s recovering
Dec 06 06:32:38 compute-0 sudo[96758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 06:32:38 compute-0 sudo[96758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[96758]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:38.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:38 compute-0 sudo[96791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-0 sudo[96791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[96791]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 sudo[96875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyzibvddjvykypbbiajvocescubujhan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002758.0118964-338-153931391200645/AnsiballZ_setup.py'
Dec 06 06:32:38 compute-0 sudo[96875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:32:38 compute-0 sudo[96843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph
Dec 06 06:32:38 compute-0 sudo[96843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[96843]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 sudo[96886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-0 sudo[96886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[96886]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 sudo[96911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:32:38 compute-0 sudo[96911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[96911]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 sudo[96936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-0 sudo[96936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[96936]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 sudo[96961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:38 compute-0 sudo[96961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[96961]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 sudo[96986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-0 python3.9[96883]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:32:38 compute-0 sudo[96986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[96986]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 sudo[97012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:32:38 compute-0 sudo[97012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[97012]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 sudo[97067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-0 sudo[97067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[97067]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 sudo[97092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:32:38 compute-0 sudo[97092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[97092]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 sudo[96875]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 sudo[97117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:38 compute-0 sudo[97117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[97117]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:38 compute-0 sudo[97142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new
Dec 06 06:32:38 compute-0 sudo[97142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:38 compute-0 sudo[97142]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 ceph-mon[74339]: 4.8 scrub starts
Dec 06 06:32:39 compute-0 ceph-mon[74339]: 4.8 scrub ok
Dec 06 06:32:39 compute-0 ceph-mon[74339]: 11.17 scrub starts
Dec 06 06:32:39 compute-0 ceph-mon[74339]: 11.17 scrub ok
Dec 06 06:32:39 compute-0 ceph-mon[74339]: osdmap e83: 3 total, 3 up, 3 in
Dec 06 06:32:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 06:32:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:32:39 compute-0 ceph-mon[74339]: Updating compute-0:/etc/ceph/ceph.conf
Dec 06 06:32:39 compute-0 ceph-mon[74339]: Updating compute-1:/etc/ceph/ceph.conf
Dec 06 06:32:39 compute-0 ceph-mon[74339]: Updating compute-2:/etc/ceph/ceph.conf
Dec 06 06:32:39 compute-0 ceph-mon[74339]: pgmap v269: 305 pgs: 3 peering, 302 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 3 objects/s recovering
Dec 06 06:32:39 compute-0 sudo[97167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-0 sudo[97167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97167]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:39 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:39 compute-0 sudo[97192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 06:32:39 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:39 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:39 compute-0 sudo[97192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97192]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:39 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:39.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:39 compute-0 sudo[97240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-0 sudo[97240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97240]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 sudo[97286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config
Dec 06 06:32:39 compute-0 sudo[97286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97286]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 sudo[97341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaolyxmnambovjbeixxjwxmkervsgach ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002758.0118964-338-153931391200645/AnsiballZ_dnf.py'
Dec 06 06:32:39 compute-0 sudo[97341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:32:39 compute-0 sudo[97340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-0 sudo[97340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97340]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 sudo[97368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config
Dec 06 06:32:39 compute-0 sudo[97368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97368]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 sudo[97393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-0 sudo[97393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97393]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 sudo[97418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:32:39 compute-0 sudo[97418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97418]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 python3.9[97360]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:32:39 compute-0 sudo[97443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-0 sudo[97443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97443]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 sudo[97469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:39 compute-0 sudo[97469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97469]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 sudo[97494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-0 sudo[97494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97494]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 sudo[97519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:32:39 compute-0 sudo[97519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97519]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 sudo[97568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-0 sudo[97568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97568]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 sudo[97595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:32:39 compute-0 sudo[97595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97595]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 sudo[97623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:39 compute-0 sudo[97623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97623]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:39 compute-0 sudo[97648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new
Dec 06 06:32:39 compute-0 sudo[97648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:39 compute-0 sudo[97648]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:40 compute-0 ceph-mon[74339]: 8.2 scrub starts
Dec 06 06:32:40 compute-0 ceph-mon[74339]: 8.2 scrub ok
Dec 06 06:32:40 compute-0 ceph-mon[74339]: Updating compute-1:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:40 compute-0 ceph-mon[74339]: Updating compute-2:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:40 compute-0 ceph-mon[74339]: Updating compute-0:/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:40 compute-0 sudo[97676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:40 compute-0 sudo[97676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:40 compute-0 sudo[97676]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:40 compute-0 sudo[97704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-40a1bae4-cf76-5610-8dab-c75116dfe0bb/var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf.new /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/config/ceph.conf
Dec 06 06:32:40 compute-0 sudo[97704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:40 compute-0 sudo[97704]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:32:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:32:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 3 peering, 302 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 3 objects/s recovering
Dec 06 06:32:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:32:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:32:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:32:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:32:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:32:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:40.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 20bbbb65-9523-4215-bc7f-00f9791c1ddc does not exist
Dec 06 06:32:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 29f5ff6a-a26d-4b8b-9157-9835521631ae does not exist
Dec 06 06:32:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 71c317d7-f7db-408e-8a20-a176b1dcfc61 does not exist
Dec 06 06:32:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:32:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:32:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:32:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:32:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:32:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:40 compute-0 sudo[97734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:40 compute-0 sudo[97734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:40 compute-0 sudo[97734]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:40 compute-0 sudo[97761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:40 compute-0 sudo[97761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:40 compute-0 sudo[97761]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:40 compute-0 sudo[97786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:40 compute-0 sudo[97786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:40 compute-0 sudo[97786]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:40 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 06 06:32:40 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 06 06:32:40 compute-0 sudo[97812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:32:40 compute-0 sudo[97812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:40 compute-0 podman[97889]: 2025-12-06 06:32:40.804558028 +0000 UTC m=+0.052546557 container create 1485b12733525aafc7fcf1d9242bf03c557c25069205ff5670f89c289a3d68e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:32:40 compute-0 systemd[1]: Started libpod-conmon-1485b12733525aafc7fcf1d9242bf03c557c25069205ff5670f89c289a3d68e3.scope.
Dec 06 06:32:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:32:40 compute-0 podman[97889]: 2025-12-06 06:32:40.782786405 +0000 UTC m=+0.030774954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:40 compute-0 podman[97889]: 2025-12-06 06:32:40.896217438 +0000 UTC m=+0.144205997 container init 1485b12733525aafc7fcf1d9242bf03c557c25069205ff5670f89c289a3d68e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 06:32:40 compute-0 podman[97889]: 2025-12-06 06:32:40.90630355 +0000 UTC m=+0.154292079 container start 1485b12733525aafc7fcf1d9242bf03c557c25069205ff5670f89c289a3d68e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_clarke, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:32:40 compute-0 ecstatic_clarke[97907]: 167 167
Dec 06 06:32:40 compute-0 systemd[1]: libpod-1485b12733525aafc7fcf1d9242bf03c557c25069205ff5670f89c289a3d68e3.scope: Deactivated successfully.
Dec 06 06:32:40 compute-0 podman[97889]: 2025-12-06 06:32:40.912558158 +0000 UTC m=+0.160546707 container attach 1485b12733525aafc7fcf1d9242bf03c557c25069205ff5670f89c289a3d68e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_clarke, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 06:32:40 compute-0 podman[97889]: 2025-12-06 06:32:40.913314771 +0000 UTC m=+0.161303300 container died 1485b12733525aafc7fcf1d9242bf03c557c25069205ff5670f89c289a3d68e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 06:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f851c3a6475363a863e48531dc3bbe423d5ca3abfc4609a77c9813448f1a45d0-merged.mount: Deactivated successfully.
Dec 06 06:32:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:41.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:41 compute-0 podman[97889]: 2025-12-06 06:32:41.510367741 +0000 UTC m=+0.758356280 container remove 1485b12733525aafc7fcf1d9242bf03c557c25069205ff5670f89c289a3d68e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 06:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-0 ceph-mon[74339]: pgmap v270: 305 pgs: 3 peering, 302 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 3 objects/s recovering
Dec 06 06:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:41 compute-0 ceph-mon[74339]: 2.1f scrub starts
Dec 06 06:32:41 compute-0 ceph-mon[74339]: 2.1f scrub ok
Dec 06 06:32:41 compute-0 systemd[1]: libpod-conmon-1485b12733525aafc7fcf1d9242bf03c557c25069205ff5670f89c289a3d68e3.scope: Deactivated successfully.
Dec 06 06:32:41 compute-0 podman[97941]: 2025-12-06 06:32:41.686920238 +0000 UTC m=+0.045991612 container create fc88c8bf961c9e8cff91930c0cc8b4dc42c4819215f1b70538b1e9da1ad158b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 06 06:32:41 compute-0 systemd[1]: Started libpod-conmon-fc88c8bf961c9e8cff91930c0cc8b4dc42c4819215f1b70538b1e9da1ad158b7.scope.
Dec 06 06:32:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:32:41 compute-0 podman[97941]: 2025-12-06 06:32:41.668524385 +0000 UTC m=+0.027595789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f08be6c8344fff46007d6c06493dc5ada2ee89afcfc5c9c0f63834015bb876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f08be6c8344fff46007d6c06493dc5ada2ee89afcfc5c9c0f63834015bb876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f08be6c8344fff46007d6c06493dc5ada2ee89afcfc5c9c0f63834015bb876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f08be6c8344fff46007d6c06493dc5ada2ee89afcfc5c9c0f63834015bb876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f08be6c8344fff46007d6c06493dc5ada2ee89afcfc5c9c0f63834015bb876/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:41 compute-0 podman[97941]: 2025-12-06 06:32:41.786195865 +0000 UTC m=+0.145267249 container init fc88c8bf961c9e8cff91930c0cc8b4dc42c4819215f1b70538b1e9da1ad158b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cartwright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 06:32:41 compute-0 podman[97941]: 2025-12-06 06:32:41.793141304 +0000 UTC m=+0.152212678 container start fc88c8bf961c9e8cff91930c0cc8b4dc42c4819215f1b70538b1e9da1ad158b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 06:32:41 compute-0 podman[97941]: 2025-12-06 06:32:41.796700571 +0000 UTC m=+0.155771955 container attach fc88c8bf961c9e8cff91930c0cc8b4dc42c4819215f1b70538b1e9da1ad158b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 06:32:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 42 B/s, 2 objects/s recovering
Dec 06 06:32:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Dec 06 06:32:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 06 06:32:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:42.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 06 06:32:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 06 06:32:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 06 06:32:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 06 06:32:42 compute-0 ceph-mon[74339]: 7.1a scrub starts
Dec 06 06:32:42 compute-0 ceph-mon[74339]: 7.1a scrub ok
Dec 06 06:32:42 compute-0 ceph-mon[74339]: 8.16 scrub starts
Dec 06 06:32:42 compute-0 ceph-mon[74339]: 8.16 scrub ok
Dec 06 06:32:42 compute-0 ceph-mon[74339]: pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 42 B/s, 2 objects/s recovering
Dec 06 06:32:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 06 06:32:42 compute-0 friendly_cartwright[97961]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:32:42 compute-0 friendly_cartwright[97961]: --> relative data size: 1.0
Dec 06 06:32:42 compute-0 friendly_cartwright[97961]: --> All data devices are unavailable
Dec 06 06:32:42 compute-0 systemd[1]: libpod-fc88c8bf961c9e8cff91930c0cc8b4dc42c4819215f1b70538b1e9da1ad158b7.scope: Deactivated successfully.
Dec 06 06:32:42 compute-0 podman[97941]: 2025-12-06 06:32:42.675662028 +0000 UTC m=+1.034733422 container died fc88c8bf961c9e8cff91930c0cc8b4dc42c4819215f1b70538b1e9da1ad158b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:32:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-05f08be6c8344fff46007d6c06493dc5ada2ee89afcfc5c9c0f63834015bb876-merged.mount: Deactivated successfully.
Dec 06 06:32:42 compute-0 podman[97941]: 2025-12-06 06:32:42.736788572 +0000 UTC m=+1.095859936 container remove fc88c8bf961c9e8cff91930c0cc8b4dc42c4819215f1b70538b1e9da1ad158b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_cartwright, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:32:42 compute-0 systemd[1]: libpod-conmon-fc88c8bf961c9e8cff91930c0cc8b4dc42c4819215f1b70538b1e9da1ad158b7.scope: Deactivated successfully.
Dec 06 06:32:42 compute-0 sudo[97812]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:42 compute-0 sudo[98007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:42 compute-0 sudo[98007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:42 compute-0 sudo[98007]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:42 compute-0 sudo[98032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:42 compute-0 sudo[98032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:42 compute-0 sudo[98032]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:32:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:32:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:32:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:32:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:32:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:32:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:42 compute-0 sudo[98057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:42 compute-0 sudo[98057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:42 compute-0 sudo[98057]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:43 compute-0 sudo[98082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:32:43 compute-0 sudo[98082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:32:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:43.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:32:43 compute-0 podman[98147]: 2025-12-06 06:32:43.34242384 +0000 UTC m=+0.046983581 container create dc0971e00af1e71fdcc5723b5d1243ed63b8744e852e51c2ea416553186b33eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jepsen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:32:43 compute-0 systemd[1]: Started libpod-conmon-dc0971e00af1e71fdcc5723b5d1243ed63b8744e852e51c2ea416553186b33eb.scope.
Dec 06 06:32:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:32:43 compute-0 podman[98147]: 2025-12-06 06:32:43.318879633 +0000 UTC m=+0.023439424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:43 compute-0 podman[98147]: 2025-12-06 06:32:43.414451741 +0000 UTC m=+0.119011502 container init dc0971e00af1e71fdcc5723b5d1243ed63b8744e852e51c2ea416553186b33eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 06:32:43 compute-0 podman[98147]: 2025-12-06 06:32:43.424683857 +0000 UTC m=+0.129243598 container start dc0971e00af1e71fdcc5723b5d1243ed63b8744e852e51c2ea416553186b33eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 06:32:43 compute-0 podman[98147]: 2025-12-06 06:32:43.429368888 +0000 UTC m=+0.133928659 container attach dc0971e00af1e71fdcc5723b5d1243ed63b8744e852e51c2ea416553186b33eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jepsen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:32:43 compute-0 busy_jepsen[98163]: 167 167
Dec 06 06:32:43 compute-0 systemd[1]: libpod-dc0971e00af1e71fdcc5723b5d1243ed63b8744e852e51c2ea416553186b33eb.scope: Deactivated successfully.
Dec 06 06:32:43 compute-0 podman[98147]: 2025-12-06 06:32:43.433069529 +0000 UTC m=+0.137629280 container died dc0971e00af1e71fdcc5723b5d1243ed63b8744e852e51c2ea416553186b33eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 06:32:43 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 06 06:32:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-78e8b4f9fa23cdb1ec9736793a493b23f8c061abad05139e548089c24d950c0d-merged.mount: Deactivated successfully.
Dec 06 06:32:43 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 06 06:32:43 compute-0 podman[98147]: 2025-12-06 06:32:43.499226484 +0000 UTC m=+0.203786225 container remove dc0971e00af1e71fdcc5723b5d1243ed63b8744e852e51c2ea416553186b33eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_jepsen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:32:43 compute-0 systemd[1]: libpod-conmon-dc0971e00af1e71fdcc5723b5d1243ed63b8744e852e51c2ea416553186b33eb.scope: Deactivated successfully.
Dec 06 06:32:43 compute-0 podman[98187]: 2025-12-06 06:32:43.655328907 +0000 UTC m=+0.041315731 container create 4578314d5330830d423f41b4301601e2a605765c52eb2e2836fd230e66b3a5e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 06:32:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 06 06:32:43 compute-0 ceph-mon[74339]: osdmap e84: 3 total, 3 up, 3 in
Dec 06 06:32:43 compute-0 ceph-mon[74339]: 7.18 scrub starts
Dec 06 06:32:43 compute-0 ceph-mon[74339]: 7.18 scrub ok
Dec 06 06:32:43 compute-0 systemd[1]: Started libpod-conmon-4578314d5330830d423f41b4301601e2a605765c52eb2e2836fd230e66b3a5e8.scope.
Dec 06 06:32:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47b827da148bda23833cdf2d2dc72f18666151b744485e309a8bc6c402e620b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47b827da148bda23833cdf2d2dc72f18666151b744485e309a8bc6c402e620b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47b827da148bda23833cdf2d2dc72f18666151b744485e309a8bc6c402e620b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47b827da148bda23833cdf2d2dc72f18666151b744485e309a8bc6c402e620b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:43 compute-0 podman[98187]: 2025-12-06 06:32:43.723226843 +0000 UTC m=+0.109213677 container init 4578314d5330830d423f41b4301601e2a605765c52eb2e2836fd230e66b3a5e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 06:32:43 compute-0 podman[98187]: 2025-12-06 06:32:43.730314645 +0000 UTC m=+0.116301459 container start 4578314d5330830d423f41b4301601e2a605765c52eb2e2836fd230e66b3a5e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 06:32:43 compute-0 podman[98187]: 2025-12-06 06:32:43.635236523 +0000 UTC m=+0.021223357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:43 compute-0 podman[98187]: 2025-12-06 06:32:43.733995466 +0000 UTC m=+0.119982310 container attach 4578314d5330830d423f41b4301601e2a605765c52eb2e2836fd230e66b3a5e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:32:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 1 objects/s recovering
Dec 06 06:32:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Dec 06 06:32:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 06 06:32:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:32:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:44.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]: {
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:     "0": [
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:         {
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             "devices": [
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "/dev/loop3"
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             ],
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             "lv_name": "ceph_lv0",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             "lv_size": "7511998464",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             "name": "ceph_lv0",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             "tags": {
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "ceph.cluster_name": "ceph",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "ceph.crush_device_class": "",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "ceph.encrypted": "0",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "ceph.osd_id": "0",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "ceph.type": "block",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:                 "ceph.vdo": "0"
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             },
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             "type": "block",
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:             "vg_name": "ceph_vg0"
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:         }
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]:     ]
Dec 06 06:32:44 compute-0 hopeful_perlman[98205]: }
Dec 06 06:32:44 compute-0 systemd[1]: libpod-4578314d5330830d423f41b4301601e2a605765c52eb2e2836fd230e66b3a5e8.scope: Deactivated successfully.
Dec 06 06:32:44 compute-0 podman[98187]: 2025-12-06 06:32:44.556755287 +0000 UTC m=+0.942742101 container died 4578314d5330830d423f41b4301601e2a605765c52eb2e2836fd230e66b3a5e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:32:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d47b827da148bda23833cdf2d2dc72f18666151b744485e309a8bc6c402e620b-merged.mount: Deactivated successfully.
Dec 06 06:32:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 06 06:32:44 compute-0 podman[98187]: 2025-12-06 06:32:44.690622513 +0000 UTC m=+1.076609327 container remove 4578314d5330830d423f41b4301601e2a605765c52eb2e2836fd230e66b3a5e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_perlman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 06:32:44 compute-0 systemd[1]: libpod-conmon-4578314d5330830d423f41b4301601e2a605765c52eb2e2836fd230e66b3a5e8.scope: Deactivated successfully.
Dec 06 06:32:44 compute-0 sudo[98082]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:44 compute-0 sudo[98230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:44 compute-0 sudo[98230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:44 compute-0 sudo[98230]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:44 compute-0 ceph-mgr[74630]: [progress INFO root] Completed event 3bc6857d-fb48-4160-b5af-f5c2adadea4b (Global Recovery Event) in 16 seconds
Dec 06 06:32:44 compute-0 sudo[98255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:44 compute-0 sudo[98255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:44 compute-0 sudo[98255]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:44 compute-0 sudo[98280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:44 compute-0 sudo[98280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:44 compute-0 sudo[98280]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:44 compute-0 sudo[98305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:32:44 compute-0 sudo[98305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 06 06:32:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 06 06:32:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 06 06:32:45 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 85 pg[9.18( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=9.893611908s) [2] r=-1 lpr=85 pi=[62,85)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 281.166839600s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:45 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 85 pg[9.18( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=9.893520355s) [2] r=-1 lpr=85 pi=[62,85)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.166839600s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:32:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:45.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:32:45 compute-0 podman[98370]: 2025-12-06 06:32:45.311662083 +0000 UTC m=+0.039420293 container create f8018d2e1e16fe327d7f8badba52abddde5a7e9cfa58b11a270948fb4269da8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 06:32:45 compute-0 systemd[1]: Started libpod-conmon-f8018d2e1e16fe327d7f8badba52abddde5a7e9cfa58b11a270948fb4269da8c.scope.
Dec 06 06:32:45 compute-0 podman[98370]: 2025-12-06 06:32:45.293659183 +0000 UTC m=+0.021417423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:32:45 compute-0 podman[98370]: 2025-12-06 06:32:45.410025814 +0000 UTC m=+0.137784034 container init f8018d2e1e16fe327d7f8badba52abddde5a7e9cfa58b11a270948fb4269da8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:32:45 compute-0 podman[98370]: 2025-12-06 06:32:45.418873269 +0000 UTC m=+0.146631469 container start f8018d2e1e16fe327d7f8badba52abddde5a7e9cfa58b11a270948fb4269da8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:32:45 compute-0 ceph-mon[74339]: 7.1c deep-scrub starts
Dec 06 06:32:45 compute-0 ceph-mon[74339]: 7.1c deep-scrub ok
Dec 06 06:32:45 compute-0 ceph-mon[74339]: pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 1 objects/s recovering
Dec 06 06:32:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 06 06:32:45 compute-0 sad_moore[98386]: 167 167
Dec 06 06:32:45 compute-0 podman[98370]: 2025-12-06 06:32:45.423830738 +0000 UTC m=+0.151588958 container attach f8018d2e1e16fe327d7f8badba52abddde5a7e9cfa58b11a270948fb4269da8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 06:32:45 compute-0 systemd[1]: libpod-f8018d2e1e16fe327d7f8badba52abddde5a7e9cfa58b11a270948fb4269da8c.scope: Deactivated successfully.
Dec 06 06:32:45 compute-0 podman[98370]: 2025-12-06 06:32:45.425270841 +0000 UTC m=+0.153029041 container died f8018d2e1e16fe327d7f8badba52abddde5a7e9cfa58b11a270948fb4269da8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 06:32:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e32c586f3ed0095dbe30baadacf2ab2ccb17d5bb7915f30929f3b79f5a9c9ae0-merged.mount: Deactivated successfully.
Dec 06 06:32:45 compute-0 podman[98370]: 2025-12-06 06:32:45.462040644 +0000 UTC m=+0.189798854 container remove f8018d2e1e16fe327d7f8badba52abddde5a7e9cfa58b11a270948fb4269da8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 06:32:45 compute-0 systemd[1]: libpod-conmon-f8018d2e1e16fe327d7f8badba52abddde5a7e9cfa58b11a270948fb4269da8c.scope: Deactivated successfully.
Dec 06 06:32:45 compute-0 podman[98412]: 2025-12-06 06:32:45.622177078 +0000 UTC m=+0.044105054 container create f2e1914355de3e0b9a82152d02c4c2fd4c3d0b6c358eda5b0067170912e1cd1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcnulty, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:32:45 compute-0 systemd[1]: Started libpod-conmon-f2e1914355de3e0b9a82152d02c4c2fd4c3d0b6c358eda5b0067170912e1cd1b.scope.
Dec 06 06:32:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d251db8887598a68de3bd264bfe17a81b2163396222bee67bd9770f0b5a5ee9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d251db8887598a68de3bd264bfe17a81b2163396222bee67bd9770f0b5a5ee9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d251db8887598a68de3bd264bfe17a81b2163396222bee67bd9770f0b5a5ee9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d251db8887598a68de3bd264bfe17a81b2163396222bee67bd9770f0b5a5ee9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:32:45 compute-0 podman[98412]: 2025-12-06 06:32:45.603331493 +0000 UTC m=+0.025259489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:45 compute-0 podman[98412]: 2025-12-06 06:32:45.704849908 +0000 UTC m=+0.126777914 container init f2e1914355de3e0b9a82152d02c4c2fd4c3d0b6c358eda5b0067170912e1cd1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:32:45 compute-0 podman[98412]: 2025-12-06 06:32:45.711520889 +0000 UTC m=+0.133448865 container start f2e1914355de3e0b9a82152d02c4c2fd4c3d0b6c358eda5b0067170912e1cd1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 06:32:45 compute-0 podman[98412]: 2025-12-06 06:32:45.715453586 +0000 UTC m=+0.137381592 container attach f2e1914355de3e0b9a82152d02c4c2fd4c3d0b6c358eda5b0067170912e1cd1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 06:32:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Dec 06 06:32:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 06 06:32:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:46.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:46 compute-0 mystifying_mcnulty[98428]: {
Dec 06 06:32:46 compute-0 mystifying_mcnulty[98428]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:32:46 compute-0 mystifying_mcnulty[98428]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:32:46 compute-0 mystifying_mcnulty[98428]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:32:46 compute-0 mystifying_mcnulty[98428]:         "osd_id": 0,
Dec 06 06:32:46 compute-0 mystifying_mcnulty[98428]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:32:46 compute-0 mystifying_mcnulty[98428]:         "type": "bluestore"
Dec 06 06:32:46 compute-0 mystifying_mcnulty[98428]:     }
Dec 06 06:32:46 compute-0 mystifying_mcnulty[98428]: }
Dec 06 06:32:46 compute-0 systemd[1]: libpod-f2e1914355de3e0b9a82152d02c4c2fd4c3d0b6c358eda5b0067170912e1cd1b.scope: Deactivated successfully.
Dec 06 06:32:46 compute-0 podman[98412]: 2025-12-06 06:32:46.694366521 +0000 UTC m=+1.116294507 container died f2e1914355de3e0b9a82152d02c4c2fd4c3d0b6c358eda5b0067170912e1cd1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcnulty, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:32:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:47.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 06 06:32:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d251db8887598a68de3bd264bfe17a81b2163396222bee67bd9770f0b5a5ee9d-merged.mount: Deactivated successfully.
Dec 06 06:32:47 compute-0 podman[98412]: 2025-12-06 06:32:47.680808264 +0000 UTC m=+2.102736260 container remove f2e1914355de3e0b9a82152d02c4c2fd4c3d0b6c358eda5b0067170912e1cd1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 06:32:47 compute-0 systemd[1]: libpod-conmon-f2e1914355de3e0b9a82152d02c4c2fd4c3d0b6c358eda5b0067170912e1cd1b.scope: Deactivated successfully.
Dec 06 06:32:47 compute-0 sudo[98305]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:32:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Dec 06 06:32:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 06 06:32:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:48.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:48 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 06 06:32:48 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 06 06:32:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:32:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:49.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:32:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 06 06:32:49 compute-0 ceph-mon[74339]: osdmap e85: 3 total, 3 up, 3 in
Dec 06 06:32:49 compute-0 ceph-mon[74339]: 8.9 scrub starts
Dec 06 06:32:49 compute-0 ceph-mon[74339]: 8.9 scrub ok
Dec 06 06:32:49 compute-0 ceph-mon[74339]: pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 06 06:32:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 06 06:32:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 06 06:32:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:49 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 06 06:32:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:32:49 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 86 pg[9.18( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=86) [2]/[0] r=0 lpr=86 pi=[62,86)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:49 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 86 pg[9.9( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=86 pruub=13.499773979s) [2] r=-1 lpr=86 pi=[62,86)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 289.164611816s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:49 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 86 pg[9.9( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=86 pruub=13.499602318s) [2] r=-1 lpr=86 pi=[62,86)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 289.164611816s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:49 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 86 pg[9.18( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=86) [2]/[0] r=0 lpr=86 pi=[62,86)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4bef1f5f-acf7-44a0-bc8d-ff5886479631 does not exist
Dec 06 06:32:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 31809954-729a-4e18-9807-2faa1d726583 does not exist
Dec 06 06:32:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev dc597b06-b9d5-465f-a024-427dd556da67 does not exist
Dec 06 06:32:49 compute-0 sudo[98465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:49 compute-0 sudo[98466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:49 compute-0 sudo[98465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:49 compute-0 sudo[98466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:49 compute-0 sudo[98465]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:49 compute-0 sudo[98466]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:49 compute-0 sudo[98515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:49 compute-0 sudo[98515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:49 compute-0 sudo[98515]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:49 compute-0 sudo[98516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:32:49 compute-0 sudo[98516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:49 compute-0 sudo[98516]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:49 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Dec 06 06:32:49 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Dec 06 06:32:49 compute-0 ceph-mgr[74630]: [progress INFO root] Writing back 26 completed events
Dec 06 06:32:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Dec 06 06:32:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:32:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec 06 06:32:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Dec 06 06:32:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:32:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:32:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:49 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 06 06:32:49 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 06 06:32:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:49 compute-0 sudo[98565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:49 compute-0 sudo[98565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:49 compute-0 sudo[98565]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:49 compute-0 sudo[98590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:49 compute-0 sudo[98590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:49 compute-0 sudo[98590]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:50 compute-0 sudo[98615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:50 compute-0 sudo[98615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:50 compute-0 sudo[98615]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:50 compute-0 sudo[98640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:50 compute-0 sudo[98640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 1 active+clean+scrubbing, 2 unknown, 302 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:50.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:50 compute-0 ceph-mon[74339]: 3.a scrub starts
Dec 06 06:32:50 compute-0 ceph-mon[74339]: 3.a scrub ok
Dec 06 06:32:50 compute-0 ceph-mon[74339]: pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 06 06:32:50 compute-0 ceph-mon[74339]: 7.1b scrub starts
Dec 06 06:32:50 compute-0 ceph-mon[74339]: 7.1b scrub ok
Dec 06 06:32:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 06 06:32:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:50 compute-0 ceph-mon[74339]: osdmap e86: 3 total, 3 up, 3 in
Dec 06 06:32:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:32:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:32:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:50 compute-0 ceph-mon[74339]: 11.a scrub starts
Dec 06 06:32:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:50 compute-0 ceph-mon[74339]: 11.a scrub ok
Dec 06 06:32:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 06 06:32:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 06 06:32:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 06 06:32:50 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 06 06:32:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 87 pg[9.9( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=87) [2]/[0] r=0 lpr=87 pi=[62,87)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 87 pg[9.9( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=87) [2]/[0] r=0 lpr=87 pi=[62,87)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:50 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 87 pg[9.18( v 58'1159 (0'0,58'1159] local-lis/les=86/87 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=86) [2]/[0] async=[2] r=0 lpr=86 pi=[62,86)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:50 compute-0 podman[98683]: 2025-12-06 06:32:50.538177318 +0000 UTC m=+0.051964919 container create 2bfd044af3480716083fae6a6eb83c55facd70291aa2e6afe38baa387fd92c37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_proskuriakova, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 06:32:50 compute-0 podman[98683]: 2025-12-06 06:32:50.51191015 +0000 UTC m=+0.025697751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:50 compute-0 systemd[1]: Started libpod-conmon-2bfd044af3480716083fae6a6eb83c55facd70291aa2e6afe38baa387fd92c37.scope.
Dec 06 06:32:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:32:50 compute-0 podman[98683]: 2025-12-06 06:32:50.80659351 +0000 UTC m=+0.320381201 container init 2bfd044af3480716083fae6a6eb83c55facd70291aa2e6afe38baa387fd92c37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 06:32:50 compute-0 podman[98683]: 2025-12-06 06:32:50.836205309 +0000 UTC m=+0.349992910 container start 2bfd044af3480716083fae6a6eb83c55facd70291aa2e6afe38baa387fd92c37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:32:50 compute-0 gallant_proskuriakova[98699]: 167 167
Dec 06 06:32:50 compute-0 systemd[1]: libpod-2bfd044af3480716083fae6a6eb83c55facd70291aa2e6afe38baa387fd92c37.scope: Deactivated successfully.
Dec 06 06:32:51 compute-0 podman[98683]: 2025-12-06 06:32:51.016430695 +0000 UTC m=+0.530218396 container attach 2bfd044af3480716083fae6a6eb83c55facd70291aa2e6afe38baa387fd92c37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 06:32:51 compute-0 podman[98683]: 2025-12-06 06:32:51.017504758 +0000 UTC m=+0.531292359 container died 2bfd044af3480716083fae6a6eb83c55facd70291aa2e6afe38baa387fd92c37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:32:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-577c428ece6f7748a58b3944a9dfd256d457fd7ca4c020fcb49b898f502143eb-merged.mount: Deactivated successfully.
Dec 06 06:32:51 compute-0 podman[98683]: 2025-12-06 06:32:51.070807307 +0000 UTC m=+0.584594918 container remove 2bfd044af3480716083fae6a6eb83c55facd70291aa2e6afe38baa387fd92c37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_proskuriakova, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 06:32:51 compute-0 systemd[1]: libpod-conmon-2bfd044af3480716083fae6a6eb83c55facd70291aa2e6afe38baa387fd92c37.scope: Deactivated successfully.
Dec 06 06:32:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:51.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:51 compute-0 sudo[98640]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:32:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 06 06:32:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:32:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 06 06:32:51 compute-0 ceph-mon[74339]: Reconfiguring mon.compute-0 (monmap changed)...
Dec 06 06:32:51 compute-0 ceph-mon[74339]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 06 06:32:51 compute-0 ceph-mon[74339]: 6.e scrub starts
Dec 06 06:32:51 compute-0 ceph-mon[74339]: 6.e scrub ok
Dec 06 06:32:51 compute-0 ceph-mon[74339]: pgmap v278: 305 pgs: 1 active+clean+scrubbing, 2 unknown, 302 active+clean; 456 KiB data, 108 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:32:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 06 06:32:51 compute-0 ceph-mon[74339]: osdmap e87: 3 total, 3 up, 3 in
Dec 06 06:32:51 compute-0 ceph-mon[74339]: 8.d scrub starts
Dec 06 06:32:51 compute-0 ceph-mon[74339]: 8.d scrub ok
Dec 06 06:32:51 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 06 06:32:51 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 88 pg[9.18( v 58'1159 (0'0,58'1159] local-lis/les=86/87 n=5 ec=62/51 lis/c=86/62 les/c/f=87/63/0 sis=88 pruub=14.719811440s) [2] async=[2] r=-1 lpr=88 pi=[62,88)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 292.679077148s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:51 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 88 pg[9.18( v 58'1159 (0'0,58'1159] local-lis/les=86/87 n=5 ec=62/51 lis/c=86/62 les/c/f=87/63/0 sis=88 pruub=14.719625473s) [2] r=-1 lpr=88 pi=[62,88)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 292.679077148s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:51 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 88 pg[9.9( v 58'1159 (0'0,58'1159] local-lis/les=87/88 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=87) [2]/[0] async=[2] r=0 lpr=87 pi=[62,87)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:32:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:51 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.sfzyix (monmap changed)...
Dec 06 06:32:51 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.sfzyix (monmap changed)...
Dec 06 06:32:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.sfzyix", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec 06 06:32:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.sfzyix", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:32:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 06:32:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:32:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:32:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:51 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.sfzyix on compute-0
Dec 06 06:32:51 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.sfzyix on compute-0
Dec 06 06:32:51 compute-0 sudo[98718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:51 compute-0 sudo[98718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:51 compute-0 sudo[98718]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:51 compute-0 sudo[98743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:51 compute-0 sudo[98743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:51 compute-0 sudo[98743]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:51 compute-0 sudo[98768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:51 compute-0 sudo[98768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:51 compute-0 sudo[98768]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:51 compute-0 sudo[98793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:51 compute-0 sudo[98793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 2 active+remapped, 1 active+clean+scrubbing, 2 unknown, 300 active+clean; 456 KiB data, 125 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Dec 06 06:32:52 compute-0 podman[98835]: 2025-12-06 06:32:52.261849336 +0000 UTC m=+0.046537998 container create eb74d41e4fcb73ff12ec7510014117ae6aadbb151062e9db11931237e7550cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:32:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:52.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:52 compute-0 systemd[1]: Started libpod-conmon-eb74d41e4fcb73ff12ec7510014117ae6aadbb151062e9db11931237e7550cea.scope.
Dec 06 06:32:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:32:52 compute-0 podman[98835]: 2025-12-06 06:32:52.336845656 +0000 UTC m=+0.121534648 container init eb74d41e4fcb73ff12ec7510014117ae6aadbb151062e9db11931237e7550cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bassi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 06:32:52 compute-0 podman[98835]: 2025-12-06 06:32:52.243512215 +0000 UTC m=+0.028200887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:52 compute-0 podman[98835]: 2025-12-06 06:32:52.343795524 +0000 UTC m=+0.128484196 container start eb74d41e4fcb73ff12ec7510014117ae6aadbb151062e9db11931237e7550cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:32:52 compute-0 podman[98835]: 2025-12-06 06:32:52.346900897 +0000 UTC m=+0.131589559 container attach eb74d41e4fcb73ff12ec7510014117ae6aadbb151062e9db11931237e7550cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 06:32:52 compute-0 admiring_bassi[98853]: 167 167
Dec 06 06:32:52 compute-0 systemd[1]: libpod-eb74d41e4fcb73ff12ec7510014117ae6aadbb151062e9db11931237e7550cea.scope: Deactivated successfully.
Dec 06 06:32:52 compute-0 podman[98835]: 2025-12-06 06:32:52.349964158 +0000 UTC m=+0.134652820 container died eb74d41e4fcb73ff12ec7510014117ae6aadbb151062e9db11931237e7550cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bassi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 06:32:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-d273d5b7e3c7d93cf8d0af8375ce20814b3341266a0861ba42689fe32c6750a9-merged.mount: Deactivated successfully.
Dec 06 06:32:52 compute-0 podman[98835]: 2025-12-06 06:32:52.466944568 +0000 UTC m=+0.251633250 container remove eb74d41e4fcb73ff12ec7510014117ae6aadbb151062e9db11931237e7550cea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:32:52 compute-0 systemd[1]: libpod-conmon-eb74d41e4fcb73ff12ec7510014117ae6aadbb151062e9db11931237e7550cea.scope: Deactivated successfully.
Dec 06 06:32:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 06 06:32:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:53.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:53 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 06 06:32:53 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 06 06:32:53 compute-0 sudo[98793]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:32:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 06 06:32:53 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 89 pg[9.9( v 58'1159 (0'0,58'1159] local-lis/les=87/88 n=6 ec=62/51 lis/c=87/62 les/c/f=88/63/0 sis=89 pruub=14.243775368s) [2] async=[2] r=-1 lpr=89 pi=[62,89)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 293.998168945s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:53 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 89 pg[9.9( v 58'1159 (0'0,58'1159] local-lis/les=87/88 n=6 ec=62/51 lis/c=87/62 les/c/f=88/63/0 sis=89 pruub=14.243054390s) [2] r=-1 lpr=89 pi=[62,89)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 293.998168945s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:53 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 06 06:32:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:53 compute-0 ceph-mon[74339]: osdmap e88: 3 total, 3 up, 3 in
Dec 06 06:32:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:53 compute-0 ceph-mon[74339]: Reconfiguring mgr.compute-0.sfzyix (monmap changed)...
Dec 06 06:32:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.sfzyix", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:32:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:32:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:53 compute-0 ceph-mon[74339]: Reconfiguring daemon mgr.compute-0.sfzyix on compute-0
Dec 06 06:32:53 compute-0 ceph-mon[74339]: pgmap v281: 305 pgs: 2 active+remapped, 1 active+clean+scrubbing, 2 unknown, 300 active+clean; 456 KiB data, 125 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Dec 06 06:32:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:32:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:53 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Dec 06 06:32:53 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Dec 06 06:32:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Dec 06 06:32:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:32:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:32:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:53 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Dec 06 06:32:53 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Dec 06 06:32:53 compute-0 sudo[98876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:53 compute-0 sudo[98876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:53 compute-0 sudo[98876]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:53 compute-0 sudo[98901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:53 compute-0 sudo[98901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:53 compute-0 sudo[98901]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:53 compute-0 sudo[98926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:53 compute-0 sudo[98926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:53 compute-0 sudo[98926]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:53 compute-0 sudo[98951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:53 compute-0 sudo[98951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 2 active+remapped, 1 active+clean+scrubbing, 2 unknown, 300 active+clean; 456 KiB data, 125 MiB used, 21 GiB / 21 GiB avail; 23 B/s, 0 objects/s recovering
Dec 06 06:32:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:32:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:54.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:32:54 compute-0 podman[98994]: 2025-12-06 06:32:54.28681166 +0000 UTC m=+0.056864097 container create 1dddf769d839ac3def1ccf53ce6fc0adfe8bbb446962614c1fbb6b23e20817a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bhaskara, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 06:32:54 compute-0 systemd[1]: Started libpod-conmon-1dddf769d839ac3def1ccf53ce6fc0adfe8bbb446962614c1fbb6b23e20817a5.scope.
Dec 06 06:32:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:32:54 compute-0 podman[98994]: 2025-12-06 06:32:54.259576583 +0000 UTC m=+0.029629040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:54 compute-0 podman[98994]: 2025-12-06 06:32:54.365291005 +0000 UTC m=+0.135343462 container init 1dddf769d839ac3def1ccf53ce6fc0adfe8bbb446962614c1fbb6b23e20817a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:32:54 compute-0 podman[98994]: 2025-12-06 06:32:54.370977325 +0000 UTC m=+0.141029762 container start 1dddf769d839ac3def1ccf53ce6fc0adfe8bbb446962614c1fbb6b23e20817a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bhaskara, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:32:54 compute-0 podman[98994]: 2025-12-06 06:32:54.374162371 +0000 UTC m=+0.144214828 container attach 1dddf769d839ac3def1ccf53ce6fc0adfe8bbb446962614c1fbb6b23e20817a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bhaskara, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:32:54 compute-0 infallible_bhaskara[99010]: 167 167
Dec 06 06:32:54 compute-0 systemd[1]: libpod-1dddf769d839ac3def1ccf53ce6fc0adfe8bbb446962614c1fbb6b23e20817a5.scope: Deactivated successfully.
Dec 06 06:32:54 compute-0 podman[98994]: 2025-12-06 06:32:54.376850031 +0000 UTC m=+0.146902468 container died 1dddf769d839ac3def1ccf53ce6fc0adfe8bbb446962614c1fbb6b23e20817a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:32:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-178760d8a444c2168e6c4b624d02466863d04a846bee92e6a35446b3ffe260df-merged.mount: Deactivated successfully.
Dec 06 06:32:54 compute-0 podman[98994]: 2025-12-06 06:32:54.414884292 +0000 UTC m=+0.184936729 container remove 1dddf769d839ac3def1ccf53ce6fc0adfe8bbb446962614c1fbb6b23e20817a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bhaskara, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:32:54 compute-0 systemd[1]: libpod-conmon-1dddf769d839ac3def1ccf53ce6fc0adfe8bbb446962614c1fbb6b23e20817a5.scope: Deactivated successfully.
Dec 06 06:32:54 compute-0 sudo[98951]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:32:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 06 06:32:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:54 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 06 06:32:54 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 06 06:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Dec 06 06:32:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 06 06:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:32:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:54 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Dec 06 06:32:54 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Dec 06 06:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 06 06:32:54 compute-0 sudo[99030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:54 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 06 06:32:54 compute-0 sudo[99030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:54 compute-0 sudo[99030]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:54 compute-0 ceph-mon[74339]: 11.e scrub starts
Dec 06 06:32:54 compute-0 ceph-mon[74339]: 11.e scrub ok
Dec 06 06:32:54 compute-0 ceph-mon[74339]: 4.c scrub starts
Dec 06 06:32:54 compute-0 ceph-mon[74339]: 4.c scrub ok
Dec 06 06:32:54 compute-0 ceph-mon[74339]: 2.1e scrub starts
Dec 06 06:32:54 compute-0 ceph-mon[74339]: 2.1e scrub ok
Dec 06 06:32:54 compute-0 ceph-mon[74339]: osdmap e89: 3 total, 3 up, 3 in
Dec 06 06:32:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:54 compute-0 ceph-mon[74339]: Reconfiguring crash.compute-0 (monmap changed)...
Dec 06 06:32:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:32:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:54 compute-0 ceph-mon[74339]: Reconfiguring daemon crash.compute-0 on compute-0
Dec 06 06:32:54 compute-0 ceph-mon[74339]: 8.f scrub starts
Dec 06 06:32:54 compute-0 ceph-mon[74339]: 8.f scrub ok
Dec 06 06:32:54 compute-0 ceph-mon[74339]: pgmap v283: 305 pgs: 2 active+remapped, 1 active+clean+scrubbing, 2 unknown, 300 active+clean; 456 KiB data, 125 MiB used, 21 GiB / 21 GiB avail; 23 B/s, 0 objects/s recovering
Dec 06 06:32:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 06 06:32:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:54 compute-0 sudo[99055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:32:54 compute-0 sudo[99055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:54 compute-0 sudo[99055]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:54 compute-0 sudo[99080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:32:54 compute-0 sudo[99080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:54 compute-0 sudo[99080]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:54 compute-0 sudo[99105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb
Dec 06 06:32:54 compute-0 sudo[99105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:32:55 compute-0 podman[99144]: 2025-12-06 06:32:55.107386672 +0000 UTC m=+0.087381233 container create 2ecc030e7289771bcf2976bd1cb01684499cfa5ebef45037b50cb91661faf350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Dec 06 06:32:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:55.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:55 compute-0 podman[99144]: 2025-12-06 06:32:55.087937238 +0000 UTC m=+0.067931829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:32:55 compute-0 systemd[1]: Started libpod-conmon-2ecc030e7289771bcf2976bd1cb01684499cfa5ebef45037b50cb91661faf350.scope.
Dec 06 06:32:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:32:55 compute-0 podman[99144]: 2025-12-06 06:32:55.568781106 +0000 UTC m=+0.548775677 container init 2ecc030e7289771bcf2976bd1cb01684499cfa5ebef45037b50cb91661faf350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:32:55 compute-0 podman[99144]: 2025-12-06 06:32:55.574999686 +0000 UTC m=+0.554994257 container start 2ecc030e7289771bcf2976bd1cb01684499cfa5ebef45037b50cb91661faf350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:32:55 compute-0 systemd[1]: libpod-2ecc030e7289771bcf2976bd1cb01684499cfa5ebef45037b50cb91661faf350.scope: Deactivated successfully.
Dec 06 06:32:55 compute-0 bold_vaughan[99161]: 167 167
Dec 06 06:32:55 compute-0 conmon[99161]: conmon 2ecc030e7289771bcf29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ecc030e7289771bcf2976bd1cb01684499cfa5ebef45037b50cb91661faf350.scope/container/memory.events
Dec 06 06:32:55 compute-0 podman[99144]: 2025-12-06 06:32:55.581261398 +0000 UTC m=+0.561255989 container attach 2ecc030e7289771bcf2976bd1cb01684499cfa5ebef45037b50cb91661faf350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 06:32:55 compute-0 podman[99144]: 2025-12-06 06:32:55.581674231 +0000 UTC m=+0.561668822 container died 2ecc030e7289771bcf2976bd1cb01684499cfa5ebef45037b50cb91661faf350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Dec 06 06:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a069ba8cd565d401627436a6b40caa0ba70cda95a891a0254c3440286971d398-merged.mount: Deactivated successfully.
Dec 06 06:32:55 compute-0 podman[99144]: 2025-12-06 06:32:55.616759963 +0000 UTC m=+0.596754534 container remove 2ecc030e7289771bcf2976bd1cb01684499cfa5ebef45037b50cb91661faf350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 06:32:55 compute-0 systemd[1]: libpod-conmon-2ecc030e7289771bcf2976bd1cb01684499cfa5ebef45037b50cb91661faf350.scope: Deactivated successfully.
Dec 06 06:32:55 compute-0 sudo[99105]: pam_unix(sudo:session): session closed for user root
Dec 06 06:32:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:32:56 compute-0 ceph-mon[74339]: 4.e scrub starts
Dec 06 06:32:56 compute-0 ceph-mon[74339]: 4.e scrub ok
Dec 06 06:32:56 compute-0 ceph-mon[74339]: Reconfiguring osd.0 (monmap changed)...
Dec 06 06:32:56 compute-0 ceph-mon[74339]: Reconfiguring daemon osd.0 on compute-0
Dec 06 06:32:56 compute-0 ceph-mon[74339]: osdmap e90: 3 total, 3 up, 3 in
Dec 06 06:32:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:32:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 357 B/s wr, 31 op/s; 99 B/s, 4 objects/s recovering
Dec 06 06:32:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Dec 06 06:32:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 06 06:32:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:56 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Dec 06 06:32:56 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Dec 06 06:32:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Dec 06 06:32:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:32:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:32:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:56 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Dec 06 06:32:56 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Dec 06 06:32:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:32:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:56.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:32:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 06 06:32:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 317 B/s wr, 29 op/s; 53 B/s, 2 objects/s recovering
Dec 06 06:32:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Dec 06 06:32:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 06 06:32:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 06 06:32:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 06 06:32:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:32:58.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 91 pg[9.a( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=91 pruub=12.630760193s) [1] r=-1 lpr=91 pi=[62,91)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 297.164703369s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 91 pg[9.a( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=91 pruub=12.630648613s) [1] r=-1 lpr=91 pi=[62,91)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 297.164703369s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 91 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=91 pruub=12.632583618s) [1] r=-1 lpr=91 pi=[62,91)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 297.167022705s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 91 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=91 pruub=12.632555962s) [1] r=-1 lpr=91 pi=[62,91)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 297.167022705s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:32:58 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 06 06:32:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:32:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 06 06:32:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:58 compute-0 ceph-mon[74339]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 357 B/s wr, 31 op/s; 99 B/s, 4 objects/s recovering
Dec 06 06:32:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 06 06:32:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:58 compute-0 ceph-mon[74339]: Reconfiguring crash.compute-1 (monmap changed)...
Dec 06 06:32:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 06 06:32:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:58 compute-0 ceph-mon[74339]: Reconfiguring daemon crash.compute-1 on compute-1
Dec 06 06:32:58 compute-0 ceph-mon[74339]: 4.d scrub starts
Dec 06 06:32:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 06 06:32:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 06 06:32:58 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 06 06:32:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 92 pg[9.a( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=92) [1]/[0] r=0 lpr=92 pi=[62,92)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 92 pg[9.a( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=92) [1]/[0] r=0 lpr=92 pi=[62,92)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 92 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=92) [1]/[0] r=0 lpr=92 pi=[62,92)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:32:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 92 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=92) [1]/[0] r=0 lpr=92 pi=[62,92)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:32:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:32:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:32:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:58 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 06 06:32:58 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 06 06:32:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Dec 06 06:32:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 06 06:32:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:32:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:58 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Dec 06 06:32:58 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Dec 06 06:32:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:32:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:32:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:32:59.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:59 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Dec 06 06:32:59 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Dec 06 06:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:59 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Dec 06 06:32:59 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 06 06:32:59 compute-0 ceph-mon[74339]: 4.d scrub ok
Dec 06 06:32:59 compute-0 ceph-mon[74339]: 8.a scrub starts
Dec 06 06:32:59 compute-0 ceph-mon[74339]: 8.a scrub ok
Dec 06 06:32:59 compute-0 ceph-mon[74339]: pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 317 B/s wr, 29 op/s; 53 B/s, 2 objects/s recovering
Dec 06 06:32:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 06 06:32:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 06 06:32:59 compute-0 ceph-mon[74339]: osdmap e91: 3 total, 3 up, 3 in
Dec 06 06:32:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 06 06:32:59 compute-0 ceph-mon[74339]: osdmap e92: 3 total, 3 up, 3 in
Dec 06 06:32:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:59 compute-0 ceph-mon[74339]: Reconfiguring osd.1 (monmap changed)...
Dec 06 06:32:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 06 06:32:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:59 compute-0 ceph-mon[74339]: Reconfiguring daemon osd.1 on compute-1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:59 compute-0 ceph-mon[74339]: Reconfiguring mon.compute-1 (monmap changed)...
Dec 06 06:32:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:32:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:32:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:59 compute-0 ceph-mon[74339]: Reconfiguring daemon mon.compute-1 on compute-1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 06 06:32:59 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 06 06:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:32:59 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-1.nmklwp (monmap changed)...
Dec 06 06:32:59 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-1.nmklwp (monmap changed)...
Dec 06 06:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:32:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:32:59 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-1.nmklwp on compute-1
Dec 06 06:32:59 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-1.nmklwp on compute-1
Dec 06 06:33:00 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 93 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=92/93 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=92) [1]/[0] async=[1] r=0 lpr=92 pi=[62,92)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:00 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 93 pg[9.a( v 58'1159 (0'0,58'1159] local-lis/les=92/93 n=6 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=92) [1]/[0] async=[1] r=0 lpr=92 pi=[62,92)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 369 B/s wr, 34 op/s; 62 B/s, 3 objects/s recovering
Dec 06 06:33:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Dec 06 06:33:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 06 06:33:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:00.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:00 compute-0 ceph-mon[74339]: 5.f scrub starts
Dec 06 06:33:00 compute-0 ceph-mon[74339]: 5.f scrub ok
Dec 06 06:33:00 compute-0 ceph-mon[74339]: osdmap e93: 3 total, 3 up, 3 in
Dec 06 06:33:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:00 compute-0 ceph-mon[74339]: Reconfiguring mgr.compute-1.nmklwp (monmap changed)...
Dec 06 06:33:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.nmklwp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:33:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:33:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:00 compute-0 ceph-mon[74339]: Reconfiguring daemon mgr.compute-1.nmklwp on compute-1
Dec 06 06:33:00 compute-0 ceph-mon[74339]: pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 369 B/s wr, 34 op/s; 62 B/s, 3 objects/s recovering
Dec 06 06:33:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 06 06:33:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:33:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:33:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:00 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Dec 06 06:33:00 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Dec 06 06:33:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Dec 06 06:33:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:33:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Dec 06 06:33:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:33:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:33:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:00 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Dec 06 06:33:00 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Dec 06 06:33:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 06 06:33:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 06 06:33:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 06 06:33:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 06 06:33:00 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 94 pg[9.a( v 58'1159 (0'0,58'1159] local-lis/les=92/93 n=6 ec=62/51 lis/c=92/62 les/c/f=93/63/0 sis=94 pruub=15.031837463s) [1] async=[1] r=-1 lpr=94 pi=[62,94)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 302.246490479s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:00 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 94 pg[9.a( v 58'1159 (0'0,58'1159] local-lis/les=92/93 n=6 ec=62/51 lis/c=92/62 les/c/f=93/63/0 sis=94 pruub=15.031699181s) [1] r=-1 lpr=94 pi=[62,94)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 302.246490479s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:00 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 94 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=92/93 n=5 ec=62/51 lis/c=92/62 les/c/f=93/63/0 sis=94 pruub=15.031599045s) [1] async=[1] r=-1 lpr=94 pi=[62,94)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 302.246490479s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:00 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 94 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=92/93 n=5 ec=62/51 lis/c=92/62 les/c/f=93/63/0 sis=94 pruub=15.031540871s) [1] r=-1 lpr=94 pi=[62,94)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 302.246490479s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:01.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:33:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:33:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:01 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.ytlehq (monmap changed)...
Dec 06 06:33:01 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.ytlehq (monmap changed)...
Dec 06 06:33:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec 06 06:33:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:33:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 06:33:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:33:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:33:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:01 compute-0 ceph-mgr[74630]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.ytlehq on compute-2
Dec 06 06:33:01 compute-0 ceph-mgr[74630]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.ytlehq on compute-2
Dec 06 06:33:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 06 06:33:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:02 compute-0 ceph-mon[74339]: Reconfiguring mon.compute-2 (monmap changed)...
Dec 06 06:33:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 06 06:33:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 06 06:33:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:02 compute-0 ceph-mon[74339]: Reconfiguring daemon mon.compute-2 on compute-2
Dec 06 06:33:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 06 06:33:02 compute-0 ceph-mon[74339]: osdmap e94: 3 total, 3 up, 3 in
Dec 06 06:33:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.ytlehq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 06 06:33:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 06:33:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 3 objects/s recovering
Dec 06 06:33:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:02.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:02 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 06 06:33:02 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 06 06:33:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 06 06:33:02 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 06 06:33:03 compute-0 ceph-mon[74339]: Reconfiguring mgr.compute-2.ytlehq (monmap changed)...
Dec 06 06:33:03 compute-0 ceph-mon[74339]: Reconfiguring daemon mgr.compute-2.ytlehq on compute-2
Dec 06 06:33:03 compute-0 ceph-mon[74339]: pgmap v292: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 3 objects/s recovering
Dec 06 06:33:03 compute-0 ceph-mon[74339]: 2.4 scrub starts
Dec 06 06:33:03 compute-0 ceph-mon[74339]: 2.4 scrub ok
Dec 06 06:33:03 compute-0 ceph-mon[74339]: osdmap e95: 3 total, 3 up, 3 in
Dec 06 06:33:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:33:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:33:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:03.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:03 compute-0 sudo[99233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:03 compute-0 sudo[99233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:03 compute-0 sudo[99233]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:03 compute-0 sudo[99258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:33:03 compute-0 sudo[99258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:03 compute-0 sudo[99258]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:03 compute-0 sudo[99283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:03 compute-0 sudo[99283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:03 compute-0 sudo[99283]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:03 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 06 06:33:03 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 06 06:33:03 compute-0 sudo[99308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:33:03 compute-0 sudo[99308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:03 compute-0 podman[99404]: 2025-12-06 06:33:03.9943427 +0000 UTC m=+0.069875287 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 06:33:04 compute-0 podman[99404]: 2025-12-06 06:33:04.106492169 +0000 UTC m=+0.182024736 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:33:04 compute-0 ceph-mon[74339]: 10.6 scrub starts
Dec 06 06:33:04 compute-0 ceph-mon[74339]: 10.6 scrub ok
Dec 06 06:33:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:04 compute-0 ceph-mon[74339]: 8.1 scrub starts
Dec 06 06:33:04 compute-0 ceph-mon[74339]: 8.1 scrub ok
Dec 06 06:33:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 38 B/s, 2 objects/s recovering
Dec 06 06:33:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:04.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:33:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:33:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:04 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.7 deep-scrub starts
Dec 06 06:33:04 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.7 deep-scrub ok
Dec 06 06:33:04 compute-0 podman[99556]: 2025-12-06 06:33:04.703320393 +0000 UTC m=+0.054067563 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:33:04 compute-0 podman[99556]: 2025-12-06 06:33:04.715433133 +0000 UTC m=+0.066180303 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:33:05 compute-0 podman[99620]: 2025-12-06 06:33:05.077997967 +0000 UTC m=+0.063322097 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, name=keepalived, version=2.2.4, distribution-scope=public, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20)
Dec 06 06:33:05 compute-0 podman[99620]: 2025-12-06 06:33:05.092309395 +0000 UTC m=+0.077633495 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, release=1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-type=git, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, name=keepalived)
Dec 06 06:33:05 compute-0 sudo[99308]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:33:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:33:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:05.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-0 ceph-mon[74339]: pgmap v294: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 38 B/s, 2 objects/s recovering
Dec 06 06:33:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-0 ceph-mon[74339]: 8.7 deep-scrub starts
Dec 06 06:33:05 compute-0 ceph-mon[74339]: 8.7 deep-scrub ok
Dec 06 06:33:05 compute-0 ceph-mon[74339]: 11.8 deep-scrub starts
Dec 06 06:33:05 compute-0 ceph-mon[74339]: 11.8 deep-scrub ok
Dec 06 06:33:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:33:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:33:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:33:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:33:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:33:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:33:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:05 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5e9861e6-6ea9-4f6b-b2e4-4b303b772c1a does not exist
Dec 06 06:33:05 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ef6945c8-b866-4255-a980-901288373739 does not exist
Dec 06 06:33:05 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 64287dd0-0d31-44b4-b4be-dad18f6c83c1 does not exist
Dec 06 06:33:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:33:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:33:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:33:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:33:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:33:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:05 compute-0 sudo[99655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:05 compute-0 sudo[99655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:05 compute-0 sudo[99655]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:05 compute-0 sudo[99680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:33:05 compute-0 sudo[99680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:05 compute-0 sudo[99680]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:05 compute-0 sudo[99705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:05 compute-0 sudo[99705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:05 compute-0 sudo[99705]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:05 compute-0 sudo[99730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:33:05 compute-0 sudo[99730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:06 compute-0 podman[99795]: 2025-12-06 06:33:06.097571135 +0000 UTC m=+0.039856870 container create e0887f9ab620c8c4fc0ac15290544ec35c723d6fadc9849c65e3cc0287d4817a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Dec 06 06:33:06 compute-0 systemd[1]: Started libpod-conmon-e0887f9ab620c8c4fc0ac15290544ec35c723d6fadc9849c65e3cc0287d4817a.scope.
Dec 06 06:33:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:33:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 32 B/s, 1 objects/s recovering
Dec 06 06:33:06 compute-0 podman[99795]: 2025-12-06 06:33:06.1720086 +0000 UTC m=+0.114294365 container init e0887f9ab620c8c4fc0ac15290544ec35c723d6fadc9849c65e3cc0287d4817a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 06:33:06 compute-0 podman[99795]: 2025-12-06 06:33:06.081258416 +0000 UTC m=+0.023544181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:33:06 compute-0 podman[99795]: 2025-12-06 06:33:06.178613422 +0000 UTC m=+0.120899157 container start e0887f9ab620c8c4fc0ac15290544ec35c723d6fadc9849c65e3cc0287d4817a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 06 06:33:06 compute-0 dazzling_darwin[99813]: 167 167
Dec 06 06:33:06 compute-0 podman[99795]: 2025-12-06 06:33:06.182126209 +0000 UTC m=+0.124411964 container attach e0887f9ab620c8c4fc0ac15290544ec35c723d6fadc9849c65e3cc0287d4817a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 06:33:06 compute-0 systemd[1]: libpod-e0887f9ab620c8c4fc0ac15290544ec35c723d6fadc9849c65e3cc0287d4817a.scope: Deactivated successfully.
Dec 06 06:33:06 compute-0 podman[99795]: 2025-12-06 06:33:06.182927684 +0000 UTC m=+0.125213439 container died e0887f9ab620c8c4fc0ac15290544ec35c723d6fadc9849c65e3cc0287d4817a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 06:33:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d37582fd22c377dad8a312372d3e74d1a72c2b1da5f18448aebd495b81fd2ac-merged.mount: Deactivated successfully.
Dec 06 06:33:06 compute-0 podman[99795]: 2025-12-06 06:33:06.225552316 +0000 UTC m=+0.167838051 container remove e0887f9ab620c8c4fc0ac15290544ec35c723d6fadc9849c65e3cc0287d4817a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:33:06 compute-0 systemd[1]: libpod-conmon-e0887f9ab620c8c4fc0ac15290544ec35c723d6fadc9849c65e3cc0287d4817a.scope: Deactivated successfully.
Dec 06 06:33:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:06.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:06 compute-0 podman[99836]: 2025-12-06 06:33:06.375035417 +0000 UTC m=+0.045286966 container create 8c88cc9a83be15e45230d6694415f00cc668817227bc0483655792f901a39305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:33:06 compute-0 systemd[1]: Started libpod-conmon-8c88cc9a83be15e45230d6694415f00cc668817227bc0483655792f901a39305.scope.
Dec 06 06:33:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d8d0358a7fc3ddc0a9b054c3afaa6200fc1720b4f54031f4b361e64ae12b99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d8d0358a7fc3ddc0a9b054c3afaa6200fc1720b4f54031f4b361e64ae12b99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d8d0358a7fc3ddc0a9b054c3afaa6200fc1720b4f54031f4b361e64ae12b99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d8d0358a7fc3ddc0a9b054c3afaa6200fc1720b4f54031f4b361e64ae12b99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d8d0358a7fc3ddc0a9b054c3afaa6200fc1720b4f54031f4b361e64ae12b99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:06 compute-0 podman[99836]: 2025-12-06 06:33:06.35194898 +0000 UTC m=+0.022200559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:33:06 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec 06 06:33:06 compute-0 podman[99836]: 2025-12-06 06:33:06.451750242 +0000 UTC m=+0.122001821 container init 8c88cc9a83be15e45230d6694415f00cc668817227bc0483655792f901a39305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:33:06 compute-0 podman[99836]: 2025-12-06 06:33:06.461799859 +0000 UTC m=+0.132051408 container start 8c88cc9a83be15e45230d6694415f00cc668817227bc0483655792f901a39305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 06:33:06 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec 06 06:33:06 compute-0 podman[99836]: 2025-12-06 06:33:06.465477661 +0000 UTC m=+0.135729230 container attach 8c88cc9a83be15e45230d6694415f00cc668817227bc0483655792f901a39305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:33:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:33:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:33:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:33:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:33:06 compute-0 ceph-mon[74339]: pgmap v295: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 32 B/s, 1 objects/s recovering
Dec 06 06:33:06 compute-0 ceph-mon[74339]: 8.e scrub starts
Dec 06 06:33:06 compute-0 ceph-mon[74339]: 8.e scrub ok
Dec 06 06:33:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:07.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:07 compute-0 jolly_herschel[99854]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:33:07 compute-0 jolly_herschel[99854]: --> relative data size: 1.0
Dec 06 06:33:07 compute-0 jolly_herschel[99854]: --> All data devices are unavailable
Dec 06 06:33:07 compute-0 systemd[1]: libpod-8c88cc9a83be15e45230d6694415f00cc668817227bc0483655792f901a39305.scope: Deactivated successfully.
Dec 06 06:33:07 compute-0 podman[99877]: 2025-12-06 06:33:07.369233129 +0000 UTC m=+0.029015849 container died 8c88cc9a83be15e45230d6694415f00cc668817227bc0483655792f901a39305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:33:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5d8d0358a7fc3ddc0a9b054c3afaa6200fc1720b4f54031f4b361e64ae12b99-merged.mount: Deactivated successfully.
Dec 06 06:33:07 compute-0 podman[99877]: 2025-12-06 06:33:07.420504046 +0000 UTC m=+0.080286746 container remove 8c88cc9a83be15e45230d6694415f00cc668817227bc0483655792f901a39305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:33:07 compute-0 systemd[1]: libpod-conmon-8c88cc9a83be15e45230d6694415f00cc668817227bc0483655792f901a39305.scope: Deactivated successfully.
Dec 06 06:33:07 compute-0 sudo[99730]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:07 compute-0 sudo[99892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:07 compute-0 sudo[99892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:07 compute-0 sudo[99892]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:07 compute-0 ceph-mon[74339]: 10.7 scrub starts
Dec 06 06:33:07 compute-0 ceph-mon[74339]: 10.7 scrub ok
Dec 06 06:33:07 compute-0 sudo[99917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:33:07 compute-0 sudo[99917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:07 compute-0 sudo[99917]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:07 compute-0 sudo[99942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:07 compute-0 sudo[99942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:07 compute-0 sudo[99942]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:07 compute-0 sudo[99967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:33:07 compute-0 sudo[99967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:08 compute-0 podman[100032]: 2025-12-06 06:33:08.069357071 +0000 UTC m=+0.033484265 container create 66b561398dc0fadb583fbe429accd81c1cfa865f32639f96fbbbf66c4ce1c76a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:33:08 compute-0 systemd[1]: Started libpod-conmon-66b561398dc0fadb583fbe429accd81c1cfa865f32639f96fbbbf66c4ce1c76a.scope.
Dec 06 06:33:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:33:08 compute-0 podman[100032]: 2025-12-06 06:33:08.054387933 +0000 UTC m=+0.018515157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:33:08 compute-0 podman[100032]: 2025-12-06 06:33:08.157283509 +0000 UTC m=+0.121410733 container init 66b561398dc0fadb583fbe429accd81c1cfa865f32639f96fbbbf66c4ce1c76a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 06:33:08 compute-0 podman[100032]: 2025-12-06 06:33:08.163573641 +0000 UTC m=+0.127700845 container start 66b561398dc0fadb583fbe429accd81c1cfa865f32639f96fbbbf66c4ce1c76a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 06:33:08 compute-0 podman[100032]: 2025-12-06 06:33:08.167065597 +0000 UTC m=+0.131192831 container attach 66b561398dc0fadb583fbe429accd81c1cfa865f32639f96fbbbf66c4ce1c76a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 06:33:08 compute-0 friendly_satoshi[100049]: 167 167
Dec 06 06:33:08 compute-0 systemd[1]: libpod-66b561398dc0fadb583fbe429accd81c1cfa865f32639f96fbbbf66c4ce1c76a.scope: Deactivated successfully.
Dec 06 06:33:08 compute-0 conmon[100049]: conmon 66b561398dc0fadb583f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66b561398dc0fadb583fbe429accd81c1cfa865f32639f96fbbbf66c4ce1c76a.scope/container/memory.events
Dec 06 06:33:08 compute-0 podman[100032]: 2025-12-06 06:33:08.16944535 +0000 UTC m=+0.133572554 container died 66b561398dc0fadb583fbe429accd81c1cfa865f32639f96fbbbf66c4ce1c76a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 06:33:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Dec 06 06:33:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Dec 06 06:33:08 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 06 06:33:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d2f19d57fa87a3b46c752d9f78d80afb9331d16ea1ec38260d9f9d911e45cc7-merged.mount: Deactivated successfully.
Dec 06 06:33:08 compute-0 podman[100032]: 2025-12-06 06:33:08.21097444 +0000 UTC m=+0.175101644 container remove 66b561398dc0fadb583fbe429accd81c1cfa865f32639f96fbbbf66c4ce1c76a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:33:08 compute-0 systemd[1]: libpod-conmon-66b561398dc0fadb583fbe429accd81c1cfa865f32639f96fbbbf66c4ce1c76a.scope: Deactivated successfully.
Dec 06 06:33:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:08.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:08 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 06 06:33:08 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 06 06:33:08 compute-0 podman[100074]: 2025-12-06 06:33:08.406717493 +0000 UTC m=+0.071139875 container create ff177c7ab069a2363bb738496f5dfe60b846ebccc55528a14a4091688ccd20ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_zhukovsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 06:33:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:08 compute-0 systemd[1]: Started libpod-conmon-ff177c7ab069a2363bb738496f5dfe60b846ebccc55528a14a4091688ccd20ff.scope.
Dec 06 06:33:08 compute-0 podman[100074]: 2025-12-06 06:33:08.378832051 +0000 UTC m=+0.043254463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:33:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfd0f23fe9797d009cdbfd30bd2df35166025b53181a7ff6964d4d33577e279/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfd0f23fe9797d009cdbfd30bd2df35166025b53181a7ff6964d4d33577e279/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfd0f23fe9797d009cdbfd30bd2df35166025b53181a7ff6964d4d33577e279/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bfd0f23fe9797d009cdbfd30bd2df35166025b53181a7ff6964d4d33577e279/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:08 compute-0 podman[100074]: 2025-12-06 06:33:08.502053988 +0000 UTC m=+0.166476410 container init ff177c7ab069a2363bb738496f5dfe60b846ebccc55528a14a4091688ccd20ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_zhukovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 06 06:33:08 compute-0 podman[100074]: 2025-12-06 06:33:08.513285921 +0000 UTC m=+0.177708333 container start ff177c7ab069a2363bb738496f5dfe60b846ebccc55528a14a4091688ccd20ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:33:08 compute-0 podman[100074]: 2025-12-06 06:33:08.517230971 +0000 UTC m=+0.181653383 container attach ff177c7ab069a2363bb738496f5dfe60b846ebccc55528a14a4091688ccd20ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_zhukovsky, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 06:33:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 06 06:33:08 compute-0 ceph-mon[74339]: pgmap v296: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Dec 06 06:33:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 06 06:33:08 compute-0 ceph-mon[74339]: 8.13 scrub starts
Dec 06 06:33:08 compute-0 ceph-mon[74339]: 8.13 scrub ok
Dec 06 06:33:08 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 06 06:33:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 06 06:33:08 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 06 06:33:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:09.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]: {
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:     "0": [
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:         {
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             "devices": [
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "/dev/loop3"
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             ],
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             "lv_name": "ceph_lv0",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             "lv_size": "7511998464",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             "name": "ceph_lv0",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             "tags": {
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "ceph.cluster_name": "ceph",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "ceph.crush_device_class": "",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "ceph.encrypted": "0",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "ceph.osd_id": "0",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "ceph.type": "block",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:                 "ceph.vdo": "0"
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             },
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             "type": "block",
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:             "vg_name": "ceph_vg0"
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:         }
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]:     ]
Dec 06 06:33:09 compute-0 musing_zhukovsky[100091]: }
Dec 06 06:33:09 compute-0 systemd[1]: libpod-ff177c7ab069a2363bb738496f5dfe60b846ebccc55528a14a4091688ccd20ff.scope: Deactivated successfully.
Dec 06 06:33:09 compute-0 podman[100074]: 2025-12-06 06:33:09.391822968 +0000 UTC m=+1.056245360 container died ff177c7ab069a2363bb738496f5dfe60b846ebccc55528a14a4091688ccd20ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:33:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bfd0f23fe9797d009cdbfd30bd2df35166025b53181a7ff6964d4d33577e279-merged.mount: Deactivated successfully.
Dec 06 06:33:09 compute-0 podman[100074]: 2025-12-06 06:33:09.449866612 +0000 UTC m=+1.114288984 container remove ff177c7ab069a2363bb738496f5dfe60b846ebccc55528a14a4091688ccd20ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 06:33:09 compute-0 systemd[1]: libpod-conmon-ff177c7ab069a2363bb738496f5dfe60b846ebccc55528a14a4091688ccd20ff.scope: Deactivated successfully.
Dec 06 06:33:09 compute-0 sudo[99967]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:09 compute-0 sudo[100130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:09 compute-0 sudo[100130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:09 compute-0 sudo[100130]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:09 compute-0 sudo[100155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:33:09 compute-0 sudo[100155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:09 compute-0 sudo[100155]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 06 06:33:09 compute-0 ceph-mon[74339]: osdmap e96: 3 total, 3 up, 3 in
Dec 06 06:33:09 compute-0 sudo[100180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:09 compute-0 sudo[100183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:09 compute-0 sudo[100180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:09 compute-0 sudo[100183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:09 compute-0 sudo[100180]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:09 compute-0 sudo[100183]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:09 compute-0 sudo[100231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:09 compute-0 sudo[100231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:09 compute-0 sudo[100231]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:09 compute-0 sudo[100230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:33:09 compute-0 sudo[100230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:10 compute-0 podman[100319]: 2025-12-06 06:33:10.13775822 +0000 UTC m=+0.083787112 container create 1163636cece0078161553e59a2da41b81f09c43f8369fa6e2a983d65285160ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:33:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Dec 06 06:33:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 06 06:33:10 compute-0 podman[100319]: 2025-12-06 06:33:10.077264991 +0000 UTC m=+0.023293903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:33:10 compute-0 systemd[1]: Started libpod-conmon-1163636cece0078161553e59a2da41b81f09c43f8369fa6e2a983d65285160ed.scope.
Dec 06 06:33:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:33:10 compute-0 podman[100319]: 2025-12-06 06:33:10.223915854 +0000 UTC m=+0.169944766 container init 1163636cece0078161553e59a2da41b81f09c43f8369fa6e2a983d65285160ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:33:10 compute-0 podman[100319]: 2025-12-06 06:33:10.232808386 +0000 UTC m=+0.178837268 container start 1163636cece0078161553e59a2da41b81f09c43f8369fa6e2a983d65285160ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 06:33:10 compute-0 podman[100319]: 2025-12-06 06:33:10.23688984 +0000 UTC m=+0.182918752 container attach 1163636cece0078161553e59a2da41b81f09c43f8369fa6e2a983d65285160ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 06:33:10 compute-0 eager_mirzakhani[100337]: 167 167
Dec 06 06:33:10 compute-0 systemd[1]: libpod-1163636cece0078161553e59a2da41b81f09c43f8369fa6e2a983d65285160ed.scope: Deactivated successfully.
Dec 06 06:33:10 compute-0 podman[100319]: 2025-12-06 06:33:10.239618624 +0000 UTC m=+0.185647516 container died 1163636cece0078161553e59a2da41b81f09c43f8369fa6e2a983d65285160ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:33:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a5bd32fc9c7836eabacb29d7e5cf72687fc5cd4fe79a04450102abf6e77be78-merged.mount: Deactivated successfully.
Dec 06 06:33:10 compute-0 podman[100319]: 2025-12-06 06:33:10.28299939 +0000 UTC m=+0.229028282 container remove 1163636cece0078161553e59a2da41b81f09c43f8369fa6e2a983d65285160ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:33:10 compute-0 systemd[1]: libpod-conmon-1163636cece0078161553e59a2da41b81f09c43f8369fa6e2a983d65285160ed.scope: Deactivated successfully.
Dec 06 06:33:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:10.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:10 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 06 06:33:10 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 06 06:33:10 compute-0 podman[100361]: 2025-12-06 06:33:10.432190601 +0000 UTC m=+0.048581076 container create 42457fc649841ac54e91944293e4b21b2e20dfd537f5246b82be8287976a237f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_newton, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 06:33:10 compute-0 systemd[1]: Started libpod-conmon-42457fc649841ac54e91944293e4b21b2e20dfd537f5246b82be8287976a237f.scope.
Dec 06 06:33:10 compute-0 podman[100361]: 2025-12-06 06:33:10.408765755 +0000 UTC m=+0.025156220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:33:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0bbfa3f3563f34329df7064de958ac06b13a214ef24e8b293216ccb1ff83d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0bbfa3f3563f34329df7064de958ac06b13a214ef24e8b293216ccb1ff83d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0bbfa3f3563f34329df7064de958ac06b13a214ef24e8b293216ccb1ff83d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca0bbfa3f3563f34329df7064de958ac06b13a214ef24e8b293216ccb1ff83d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:33:10 compute-0 podman[100361]: 2025-12-06 06:33:10.527669079 +0000 UTC m=+0.144059534 container init 42457fc649841ac54e91944293e4b21b2e20dfd537f5246b82be8287976a237f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:33:10 compute-0 podman[100361]: 2025-12-06 06:33:10.534786857 +0000 UTC m=+0.151177312 container start 42457fc649841ac54e91944293e4b21b2e20dfd537f5246b82be8287976a237f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_newton, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 06:33:10 compute-0 podman[100361]: 2025-12-06 06:33:10.539089358 +0000 UTC m=+0.155479813 container attach 42457fc649841ac54e91944293e4b21b2e20dfd537f5246b82be8287976a237f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_newton, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:33:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 06 06:33:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 06 06:33:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 06 06:33:10 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 06 06:33:10 compute-0 ceph-mon[74339]: 8.15 scrub starts
Dec 06 06:33:10 compute-0 ceph-mon[74339]: 8.15 scrub ok
Dec 06 06:33:10 compute-0 ceph-mon[74339]: pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 06 06:33:10 compute-0 ceph-mon[74339]: 8.1a scrub starts
Dec 06 06:33:10 compute-0 ceph-mon[74339]: 8.1a scrub ok
Dec 06 06:33:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:33:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:11.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:33:11 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 06 06:33:11 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 06 06:33:11 compute-0 confident_newton[100377]: {
Dec 06 06:33:11 compute-0 confident_newton[100377]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:33:11 compute-0 confident_newton[100377]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:33:11 compute-0 confident_newton[100377]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:33:11 compute-0 confident_newton[100377]:         "osd_id": 0,
Dec 06 06:33:11 compute-0 confident_newton[100377]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:33:11 compute-0 confident_newton[100377]:         "type": "bluestore"
Dec 06 06:33:11 compute-0 confident_newton[100377]:     }
Dec 06 06:33:11 compute-0 confident_newton[100377]: }
Dec 06 06:33:11 compute-0 systemd[1]: libpod-42457fc649841ac54e91944293e4b21b2e20dfd537f5246b82be8287976a237f.scope: Deactivated successfully.
Dec 06 06:33:11 compute-0 podman[100361]: 2025-12-06 06:33:11.481056264 +0000 UTC m=+1.097446709 container died 42457fc649841ac54e91944293e4b21b2e20dfd537f5246b82be8287976a237f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 06:33:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca0bbfa3f3563f34329df7064de958ac06b13a214ef24e8b293216ccb1ff83d2-merged.mount: Deactivated successfully.
Dec 06 06:33:11 compute-0 podman[100361]: 2025-12-06 06:33:11.559862763 +0000 UTC m=+1.176253198 container remove 42457fc649841ac54e91944293e4b21b2e20dfd537f5246b82be8287976a237f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec 06 06:33:11 compute-0 systemd[1]: libpod-conmon-42457fc649841ac54e91944293e4b21b2e20dfd537f5246b82be8287976a237f.scope: Deactivated successfully.
Dec 06 06:33:11 compute-0 sudo[100230]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:33:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:33:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 06 06:33:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b5502c97-1d53-4181-ba8e-84664ef615e4 does not exist
Dec 06 06:33:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e535db26-314b-44d5-a0e2-29d7cc825350 does not exist
Dec 06 06:33:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c42af260-4ee5-40e5-8d09-8e4cb842931c does not exist
Dec 06 06:33:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 06 06:33:11 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 06 06:33:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 06 06:33:11 compute-0 ceph-mon[74339]: osdmap e97: 3 total, 3 up, 3 in
Dec 06 06:33:11 compute-0 ceph-mon[74339]: 8.1d scrub starts
Dec 06 06:33:11 compute-0 ceph-mon[74339]: 8.1d scrub ok
Dec 06 06:33:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:33:11 compute-0 ceph-mon[74339]: osdmap e98: 3 total, 3 up, 3 in
Dec 06 06:33:11 compute-0 sudo[100416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:11 compute-0 sudo[100416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:11 compute-0 sudo[100416]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:11 compute-0 sudo[100441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:33:11 compute-0 sudo[100441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:11 compute-0 sudo[100441]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:12.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 06 06:33:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 06 06:33:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 06 06:33:12 compute-0 ceph-mon[74339]: 10.9 scrub starts
Dec 06 06:33:12 compute-0 ceph-mon[74339]: 10.9 scrub ok
Dec 06 06:33:12 compute-0 ceph-mon[74339]: pgmap v301: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:33:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:33:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:33:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:33:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:33:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:33:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:13.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 06 06:33:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 06 06:33:13 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 06 06:33:13 compute-0 ceph-mon[74339]: osdmap e99: 3 total, 3 up, 3 in
Dec 06 06:33:13 compute-0 ceph-mon[74339]: osdmap e100: 3 total, 3 up, 3 in
Dec 06 06:33:14 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:33:14
Dec 06 06:33:14 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:33:14 compute-0 ceph-mgr[74630]: [balancer INFO root] Some PGs (0.006557) are unknown; try again later
Dec 06 06:33:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:14 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 06 06:33:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:14.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:14 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 06 06:33:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 06 06:33:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 06 06:33:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 06 06:33:14 compute-0 ceph-mon[74339]: 11.16 scrub starts
Dec 06 06:33:14 compute-0 ceph-mon[74339]: 11.16 scrub ok
Dec 06 06:33:14 compute-0 ceph-mon[74339]: pgmap v304: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:14 compute-0 ceph-mon[74339]: 8.1e scrub starts
Dec 06 06:33:14 compute-0 ceph-mon[74339]: 8.1e scrub ok
Dec 06 06:33:14 compute-0 ceph-mon[74339]: osdmap e101: 3 total, 3 up, 3 in
Dec 06 06:33:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:15.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:15 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 06 06:33:15 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 06 06:33:16 compute-0 ceph-mon[74339]: 8.11 scrub starts
Dec 06 06:33:16 compute-0 ceph-mon[74339]: 8.11 scrub ok
Dec 06 06:33:16 compute-0 ceph-mon[74339]: 9.1 scrub starts
Dec 06 06:33:16 compute-0 ceph-mon[74339]: 9.1 scrub ok
Dec 06 06:33:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:16 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 06 06:33:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:16.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:16 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 06 06:33:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:17.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:17 compute-0 ceph-mon[74339]: pgmap v306: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:17 compute-0 ceph-mon[74339]: 9.4 scrub starts
Dec 06 06:33:17 compute-0 ceph-mon[74339]: 9.4 scrub ok
Dec 06 06:33:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 15 op/s; 146 B/s, 5 objects/s recovering
Dec 06 06:33:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Dec 06 06:33:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 06 06:33:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:18.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:33:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:19.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:33:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 06 06:33:19 compute-0 ceph-mon[74339]: pgmap v307: 305 pgs: 305 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 15 op/s; 146 B/s, 5 objects/s recovering
Dec 06 06:33:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 06 06:33:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 06 06:33:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 06 06:33:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 06 06:33:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 14 op/s; 132 B/s, 4 objects/s recovering
Dec 06 06:33:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Dec 06 06:33:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 06:33:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:20.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:20 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 06 06:33:20 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 06 06:33:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 06 06:33:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 06:33:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 06 06:33:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 06 06:33:21 compute-0 ceph-mon[74339]: 8.3 deep-scrub starts
Dec 06 06:33:21 compute-0 ceph-mon[74339]: 8.3 deep-scrub ok
Dec 06 06:33:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 06 06:33:21 compute-0 ceph-mon[74339]: osdmap e102: 3 total, 3 up, 3 in
Dec 06 06:33:21 compute-0 ceph-mon[74339]: pgmap v309: 305 pgs: 305 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 14 op/s; 132 B/s, 4 objects/s recovering
Dec 06 06:33:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 06 06:33:21 compute-0 ceph-mon[74339]: 9.c scrub starts
Dec 06 06:33:21 compute-0 ceph-mon[74339]: 9.c scrub ok
Dec 06 06:33:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:21.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 06 06:33:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 06 06:33:22 compute-0 ceph-mon[74339]: osdmap e103: 3 total, 3 up, 3 in
Dec 06 06:33:22 compute-0 ceph-mon[74339]: 10.a scrub starts
Dec 06 06:33:22 compute-0 ceph-mon[74339]: 10.a scrub ok
Dec 06 06:33:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 06 06:33:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 06 06:33:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 109 B/s, 3 objects/s recovering
Dec 06 06:33:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:22.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:33:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:33:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:33:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:33:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:33:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:33:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:33:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:33:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:33:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:33:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:23.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 06 06:33:23 compute-0 ceph-mon[74339]: 8.b scrub starts
Dec 06 06:33:23 compute-0 ceph-mon[74339]: 8.b scrub ok
Dec 06 06:33:23 compute-0 ceph-mon[74339]: osdmap e104: 3 total, 3 up, 3 in
Dec 06 06:33:23 compute-0 ceph-mon[74339]: 8.c scrub starts
Dec 06 06:33:23 compute-0 ceph-mon[74339]: pgmap v312: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 109 B/s, 3 objects/s recovering
Dec 06 06:33:23 compute-0 ceph-mon[74339]: 8.c scrub ok
Dec 06 06:33:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 06 06:33:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 06 06:33:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:24.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 06 06:33:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:25.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:33:25 compute-0 ceph-mon[74339]: 10.b scrub starts
Dec 06 06:33:25 compute-0 ceph-mon[74339]: 10.b scrub ok
Dec 06 06:33:25 compute-0 ceph-mon[74339]: osdmap e105: 3 total, 3 up, 3 in
Dec 06 06:33:25 compute-0 ceph-mon[74339]: pgmap v314: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 1/218 objects misplaced (0.459%); 36 B/s, 1 objects/s recovering
Dec 06 06:33:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Dec 06 06:33:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 06 06:33:26 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 06 06:33:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:26.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:26 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 06 06:33:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 06 06:33:26 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 06 06:33:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:27.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 06 06:33:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 06 06:33:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 06 06:33:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 06 06:33:27 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 06 06:33:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 341 B/s wr, 31 op/s; 1/218 objects misplaced (0.459%); 73 B/s, 3 objects/s recovering
Dec 06 06:33:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Dec 06 06:33:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 06 06:33:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:28.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:28 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 06 06:33:28 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 06 06:33:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 06 06:33:28 compute-0 ceph-mon[74339]: 11.13 scrub starts
Dec 06 06:33:28 compute-0 ceph-mon[74339]: 11.13 scrub ok
Dec 06 06:33:28 compute-0 ceph-mon[74339]: pgmap v315: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 1/218 objects misplaced (0.459%); 36 B/s, 1 objects/s recovering
Dec 06 06:33:28 compute-0 ceph-mon[74339]: 9.10 scrub starts
Dec 06 06:33:28 compute-0 ceph-mon[74339]: 9.10 scrub ok
Dec 06 06:33:28 compute-0 ceph-mon[74339]: osdmap e106: 3 total, 3 up, 3 in
Dec 06 06:33:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 06 06:33:28 compute-0 ceph-mon[74339]: osdmap e107: 3 total, 3 up, 3 in
Dec 06 06:33:28 compute-0 ceph-mon[74339]: pgmap v318: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 341 B/s wr, 31 op/s; 1/218 objects misplaced (0.459%); 73 B/s, 3 objects/s recovering
Dec 06 06:33:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 06 06:33:28 compute-0 ceph-mon[74339]: 9.11 scrub starts
Dec 06 06:33:28 compute-0 ceph-mon[74339]: 9.11 scrub ok
Dec 06 06:33:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 06 06:33:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 06 06:33:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 06 06:33:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:29.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:29 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.12 deep-scrub starts
Dec 06 06:33:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 107 pg[9.10( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=2 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=107 pruub=13.536284447s) [1] r=-1 lpr=107 pi=[62,107)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 329.165222168s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 108 pg[9.11( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=108 pruub=13.536329269s) [1] r=-1 lpr=108 pi=[62,108)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 329.165252686s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 108 pg[9.11( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=108 pruub=13.536174774s) [1] r=-1 lpr=108 pi=[62,108)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 329.165252686s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:29 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 108 pg[9.10( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=2 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=107 pruub=13.536086082s) [1] r=-1 lpr=107 pi=[62,107)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 329.165222168s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:29 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.12 deep-scrub ok
Dec 06 06:33:29 compute-0 sudo[100475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:29 compute-0 sudo[100475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:29 compute-0 sudo[100475]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:29 compute-0 sudo[100500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:29 compute-0 sudo[100500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:29 compute-0 sudo[100500]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 341 B/s wr, 31 op/s; 1/218 objects misplaced (0.459%); 73 B/s, 3 objects/s recovering
Dec 06 06:33:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Dec 06 06:33:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 06 06:33:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 06 06:33:30 compute-0 sudo[97341]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:30 compute-0 ceph-mon[74339]: 10.c scrub starts
Dec 06 06:33:30 compute-0 ceph-mon[74339]: 10.c scrub ok
Dec 06 06:33:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 06 06:33:30 compute-0 ceph-mon[74339]: osdmap e108: 3 total, 3 up, 3 in
Dec 06 06:33:30 compute-0 ceph-mon[74339]: 9.12 deep-scrub starts
Dec 06 06:33:30 compute-0 ceph-mon[74339]: 9.12 deep-scrub ok
Dec 06 06:33:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:30.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 06 06:33:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 06 06:33:30 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 06 06:33:30 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 109 pg[9.11( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109) [1]/[0] r=0 lpr=109 pi=[62,109)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:30 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 109 pg[9.10( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=2 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109) [1]/[0] r=0 lpr=109 pi=[62,109)/2 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:30 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 109 pg[9.11( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109) [1]/[0] r=0 lpr=109 pi=[62,109)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:30 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 109 pg[9.10( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=2 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109) [1]/[0] r=0 lpr=109 pi=[62,109)/2 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:30 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 109 pg[9.12( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=4 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=12.548706055s) [1] r=-1 lpr=109 pi=[62,109)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 329.168090820s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:30 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 109 pg[9.12( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=4 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109 pruub=12.548665047s) [1] r=-1 lpr=109 pi=[62,109)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 329.168090820s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:30 compute-0 sudo[100675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltgfzshbltdynnewhgvndxusqdtjoxca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002810.3615472-374-71213344640014/AnsiballZ_command.py'
Dec 06 06:33:30 compute-0 sudo[100675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:30 compute-0 python3.9[100677]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:33:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:31.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 06 06:33:31 compute-0 ceph-mon[74339]: 11.3 scrub starts
Dec 06 06:33:31 compute-0 ceph-mon[74339]: 11.3 scrub ok
Dec 06 06:33:31 compute-0 ceph-mon[74339]: pgmap v320: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 303 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 341 B/s wr, 31 op/s; 1/218 objects misplaced (0.459%); 73 B/s, 3 objects/s recovering
Dec 06 06:33:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 06 06:33:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 06 06:33:31 compute-0 ceph-mon[74339]: osdmap e109: 3 total, 3 up, 3 in
Dec 06 06:33:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 06 06:33:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 06 06:33:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 110 pg[9.12( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=4 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=110) [1]/[0] r=0 lpr=110 pi=[62,110)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:31 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 110 pg[9.12( v 58'1159 (0'0,58'1159] local-lis/les=62/63 n=4 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=110) [1]/[0] r=0 lpr=110 pi=[62,110)/1 crt=58'1159 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 06 06:33:31 compute-0 sudo[100675]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 110 pg[9.10( v 58'1159 (0'0,58'1159] local-lis/les=109/110 n=2 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109) [1]/[0] async=[1] r=0 lpr=109 pi=[62,109)/2 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 110 pg[9.11( v 58'1159 (0'0,58'1159] local-lis/les=109/110 n=5 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=109) [1]/[0] async=[1] r=0 lpr=109 pi=[62,109)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 1 unknown, 2 remapped+peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:33:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:32.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:32 compute-0 sudo[100963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cojvxxdyggtnzxnojanhbsrgnevsidpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002811.8701851-398-19787933629821/AnsiballZ_selinux.py'
Dec 06 06:33:32 compute-0 sudo[100963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 06 06:33:32 compute-0 python3.9[100965]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 06 06:33:32 compute-0 sudo[100963]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:32 compute-0 ceph-mon[74339]: 10.d scrub starts
Dec 06 06:33:32 compute-0 ceph-mon[74339]: 10.d scrub ok
Dec 06 06:33:32 compute-0 ceph-mon[74339]: osdmap e110: 3 total, 3 up, 3 in
Dec 06 06:33:32 compute-0 ceph-mon[74339]: pgmap v323: 305 pgs: 1 unknown, 2 remapped+peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:33:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:33.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:33 compute-0 sudo[101115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qipxnxgksylapzeprroczruruiabyojb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002813.193303-431-141272059427929/AnsiballZ_command.py'
Dec 06 06:33:33 compute-0 sudo[101115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:33 compute-0 python3.9[101117]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 06 06:33:33 compute-0 sudo[101115]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 1 unknown, 2 remapped+peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:33:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:34.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:35.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 06 06:33:35 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 06 06:33:35 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 111 pg[9.10( v 58'1159 (0'0,58'1159] local-lis/les=109/110 n=2 ec=62/51 lis/c=109/62 les/c/f=110/63/0 sis=111 pruub=12.802495956s) [1] async=[1] r=-1 lpr=111 pi=[62,111)/2 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 334.277374268s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:35 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 111 pg[9.10( v 58'1159 (0'0,58'1159] local-lis/les=109/110 n=2 ec=62/51 lis/c=109/62 les/c/f=110/63/0 sis=111 pruub=12.802289009s) [1] r=-1 lpr=111 pi=[62,111)/2 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 334.277374268s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:35 compute-0 sudo[101268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liczdmngdxbsorrulqqfuksyvtmhsnre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002815.2463455-455-84135768537483/AnsiballZ_file.py'
Dec 06 06:33:35 compute-0 sudo[101268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:35 compute-0 python3.9[101270]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:33:35 compute-0 sudo[101268]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 1 active+remapped, 1 activating+remapped, 1 remapped+peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 4/212 objects misplaced (1.887%); 18 B/s, 0 objects/s recovering
Dec 06 06:33:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 06 06:33:36 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 06 06:33:36 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 06 06:33:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:36.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:36 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 111 pg[9.12( v 58'1159 (0'0,58'1159] local-lis/les=110/111 n=4 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=110) [1]/[0] async=[1] r=0 lpr=110 pi=[62,110)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:33:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:33:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:37.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:33:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 1 active+clean+scrubbing, 1 peering, 1 active+remapped, 1 activating+remapped, 301 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 4/212 objects misplaced (1.887%); 14 B/s, 1 objects/s recovering
Dec 06 06:33:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:38.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:39.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 1 active+clean+scrubbing, 1 peering, 1 active+remapped, 1 activating+remapped, 301 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 4/212 objects misplaced (1.887%); 12 B/s, 0 objects/s recovering
Dec 06 06:33:40 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 06 06:33:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:40.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:40 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 06 06:33:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:41.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:41 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.6 deep-scrub starts
Dec 06 06:33:41 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.6 deep-scrub ok
Dec 06 06:33:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 1 objects/s recovering
Dec 06 06:33:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:42.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:33:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:33:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:33:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:33:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:33:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:33:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:43.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:43 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 06 06:33:43 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 06 06:33:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 1 objects/s recovering
Dec 06 06:33:44 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 06 06:33:44 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 06 06:33:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:44.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:45.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).paxos(paxos updating c 1..751) accept timeout, calling fresh election
Dec 06 06:33:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:33:45 compute-0 ceph-mon[74339]: paxos.0).electionLogic(14) init, last seen epoch 14
Dec 06 06:33:45 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:33:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 20 B/s, 1 objects/s recovering
Dec 06 06:33:46 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.c deep-scrub starts
Dec 06 06:33:46 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.c deep-scrub ok
Dec 06 06:33:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:46.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:47.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:47 compute-0 sudo[101426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgobceghxveptvzoxsbozbqshyjenbic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002827.3255882-479-96415216982892/AnsiballZ_mount.py'
Dec 06 06:33:47 compute-0 sudo[101426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:48 compute-0 python3.9[101428]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 06 06:33:48 compute-0 sudo[101426]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:48 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:33:48 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 06:33:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 2 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 1 peering, 2 active+remapped, 299 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Dec 06 06:33:48 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 06 06:33:48 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 06 06:33:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:48.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:48 compute-0 ceph-mon[74339]: pgmap v324: 305 pgs: 1 unknown, 2 remapped+peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:33:48 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:33:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:33:48 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:33:48 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 06 06:33:48 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 7m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:33:48 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 06:33:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 06 06:33:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:49.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:49 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 06 06:33:49 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 06 06:33:49 compute-0 sudo[101579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkydbczkaytotvrjejcmnmswnwmsqxyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002829.232007-563-195529363079782/AnsiballZ_file.py'
Dec 06 06:33:49 compute-0 sudo[101579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:49 compute-0 python3.9[101581]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:33:49 compute-0 sudo[101579]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:49 compute-0 sudo[101606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:49 compute-0 sudo[101606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:49 compute-0 sudo[101606]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:49 compute-0 sudo[101631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:33:49 compute-0 sudo[101631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:33:49 compute-0 sudo[101631]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:33:50.042461) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002830042613, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7855, "num_deletes": 252, "total_data_size": 10211588, "memory_usage": 10384160, "flush_reason": "Manual Compaction"}
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002830144015, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 8530962, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 144, "largest_seqno": 7990, "table_properties": {"data_size": 8501131, "index_size": 19604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9157, "raw_key_size": 86220, "raw_average_key_size": 23, "raw_value_size": 8430566, "raw_average_value_size": 2311, "num_data_blocks": 863, "num_entries": 3647, "num_filter_entries": 3647, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002322, "oldest_key_time": 1765002322, "file_creation_time": 1765002830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 101623 microseconds, and 29155 cpu microseconds.
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:33:50.144090) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 8530962 bytes OK
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:33:50.144133) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:33:50.150176) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:33:50.150210) EVENT_LOG_v1 {"time_micros": 1765002830150202, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:33:50.150235) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10176367, prev total WAL file size 10220762, number of live WAL files 2.
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:33:50.152652) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(8331KB) 13(52KB) 8(1944B)]
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002830152833, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8586611, "oldest_snapshot_seqno": -1}
Dec 06 06:33:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 2 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 1 peering, 2 active+remapped, 299 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3457 keys, 8542564 bytes, temperature: kUnknown
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002830215402, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 8542564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8513191, "index_size": 19599, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8709, "raw_key_size": 84074, "raw_average_key_size": 24, "raw_value_size": 8444463, "raw_average_value_size": 2442, "num_data_blocks": 866, "num_entries": 3457, "num_filter_entries": 3457, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765002830, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:33:50.215721) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 8542564 bytes
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:33:50.217572) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.0 rd, 136.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(8.2, 0.0 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3750, records dropped: 293 output_compression: NoCompression
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:33:50.217593) EVENT_LOG_v1 {"time_micros": 1765002830217581, "job": 4, "event": "compaction_finished", "compaction_time_micros": 62678, "compaction_time_cpu_micros": 21887, "output_level": 6, "num_output_files": 1, "total_output_size": 8542564, "num_input_records": 3750, "num_output_records": 3457, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002830219256, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002830219406, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765002830219551, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 06 06:33:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:33:50.152494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:33:50 compute-0 sudo[101783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbxsjzxnuofdvdhkmqplemtathjgssun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002830.0218804-587-145948182903582/AnsiballZ_stat.py'
Dec 06 06:33:50 compute-0 sudo[101783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:50 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.11 deep-scrub starts
Dec 06 06:33:50 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.11 deep-scrub ok
Dec 06 06:33:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:50.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:50 compute-0 python3.9[101785]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:33:50 compute-0 sudo[101783]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:50 compute-0 sudo[101861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fklrgbjlfaofciycyjbxdjmbvbjcvdaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002830.0218804-587-145948182903582/AnsiballZ_file.py'
Dec 06 06:33:50 compute-0 sudo[101861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 8.6 scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 8.6 scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 8.5 scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 8.5 scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.19 scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.19 scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 8.1f scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: osdmap e111: 3 total, 3 up, 3 in
Dec 06 06:33:51 compute-0 ceph-mon[74339]: pgmap v326: 305 pgs: 1 active+remapped, 1 activating+remapped, 1 remapped+peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 4/212 objects misplaced (1.887%); 18 B/s, 0 objects/s recovering
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 9.1c scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 10.e scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 9.1c scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 10.e scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 10.16 scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 10.16 scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 8.1c deep-scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: pgmap v327: 305 pgs: 1 active+clean+scrubbing, 1 peering, 1 active+remapped, 1 activating+remapped, 301 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 4/212 objects misplaced (1.887%); 14 B/s, 1 objects/s recovering
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 10.12 scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 10.17 deep-scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 10.17 deep-scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: pgmap v328: 305 pgs: 1 active+clean+scrubbing, 1 peering, 1 active+remapped, 1 activating+remapped, 301 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 4/212 objects misplaced (1.887%); 12 B/s, 0 objects/s recovering
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.2 scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.2 scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.6 deep-scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.6 deep-scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: pgmap v329: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 1 objects/s recovering
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 10.1a scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 10.1a scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.9 scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.9 scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: pgmap v330: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 1 objects/s recovering
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.b scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.b scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 06:33:51 compute-0 ceph-mon[74339]: pgmap v331: 305 pgs: 2 active+remapped, 1 peering, 302 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 20 B/s, 1 objects/s recovering
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.c deep-scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.c deep-scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 10.1c scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 10.1c scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 8.1f scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 10.12 scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 8.1c deep-scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:33:51 compute-0 ceph-mon[74339]: pgmap v332: 305 pgs: 2 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 1 peering, 2 active+remapped, 299 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.d scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.d scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:33:51 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:33:51 compute-0 ceph-mon[74339]: osdmap e111: 3 total, 3 up, 3 in
Dec 06 06:33:51 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 7m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:33:51 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.10 scrub starts
Dec 06 06:33:51 compute-0 ceph-mon[74339]: 11.10 scrub ok
Dec 06 06:33:51 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 06 06:33:51 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 112 pg[9.11( v 58'1159 (0'0,58'1159] local-lis/les=109/110 n=5 ec=62/51 lis/c=109/62 les/c/f=110/63/0 sis=112 pruub=12.895370483s) [1] async=[1] r=-1 lpr=112 pi=[62,112)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 350.277862549s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:51 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 112 pg[9.11( v 58'1159 (0'0,58'1159] local-lis/les=109/110 n=5 ec=62/51 lis/c=109/62 les/c/f=110/63/0 sis=112 pruub=12.895251274s) [1] r=-1 lpr=112 pi=[62,112)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 350.277862549s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:51 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 112 pg[9.12( v 58'1159 (0'0,58'1159] local-lis/les=110/111 n=4 ec=62/51 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=9.276881218s) [1] async=[1] r=-1 lpr=112 pi=[62,112)/1 crt=58'1159 lcod 0'0 mlcod 0'0 active pruub 346.660064697s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:33:51 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 112 pg[9.12( v 58'1159 (0'0,58'1159] local-lis/les=110/111 n=4 ec=62/51 lis/c=110/62 les/c/f=111/63/0 sis=112 pruub=9.276827812s) [1] r=-1 lpr=112 pi=[62,112)/1 crt=58'1159 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 346.660064697s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 06:33:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:51.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:51 compute-0 python3.9[101863]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:33:51 compute-0 sudo[101861]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 06 06:33:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 3 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 1 peering, 2 active+remapped, 298 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:52.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:52 compute-0 sudo[102014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sslmjnfkhjnjpkvophgcgjuyhnmgdsvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002832.20854-650-105356583326595/AnsiballZ_stat.py'
Dec 06 06:33:52 compute-0 sudo[102014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 06 06:33:52 compute-0 ceph-mon[74339]: 10.1d scrub starts
Dec 06 06:33:52 compute-0 ceph-mon[74339]: 10.1d scrub ok
Dec 06 06:33:52 compute-0 ceph-mon[74339]: 10.1f deep-scrub starts
Dec 06 06:33:52 compute-0 ceph-mon[74339]: pgmap v333: 305 pgs: 2 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 1 peering, 2 active+remapped, 299 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Dec 06 06:33:52 compute-0 ceph-mon[74339]: 11.11 deep-scrub starts
Dec 06 06:33:52 compute-0 ceph-mon[74339]: 11.11 deep-scrub ok
Dec 06 06:33:52 compute-0 ceph-mon[74339]: 10.1f deep-scrub ok
Dec 06 06:33:52 compute-0 ceph-mon[74339]: 11.14 scrub starts
Dec 06 06:33:52 compute-0 ceph-mon[74339]: osdmap e112: 3 total, 3 up, 3 in
Dec 06 06:33:52 compute-0 ceph-mon[74339]: 11.14 scrub ok
Dec 06 06:33:52 compute-0 ceph-mon[74339]: 10.4 scrub starts
Dec 06 06:33:52 compute-0 ceph-mon[74339]: 10.4 scrub ok
Dec 06 06:33:52 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 06 06:33:52 compute-0 python3.9[102016]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:33:52 compute-0 sudo[102014]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:53.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:53 compute-0 ceph-mon[74339]: 8.12 scrub starts
Dec 06 06:33:53 compute-0 ceph-mon[74339]: 8.12 scrub ok
Dec 06 06:33:53 compute-0 ceph-mon[74339]: pgmap v335: 305 pgs: 3 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 1 peering, 2 active+remapped, 298 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:53 compute-0 ceph-mon[74339]: osdmap e113: 3 total, 3 up, 3 in
Dec 06 06:33:53 compute-0 sudo[102168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntrcqfkjxwsrpvzgqmdaouqexmnevruc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002833.5806315-689-273647388059711/AnsiballZ_getent.py'
Dec 06 06:33:53 compute-0 sudo[102168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:54 compute-0 python3.9[102170]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 06 06:33:54 compute-0 sudo[102168]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 1 active+clean+scrubbing, 1 peering, 2 active+remapped, 301 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:54 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 06 06:33:54 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 06 06:33:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:54.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:54 compute-0 sudo[102322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgnbcxrtncdvpirzzkrbuvlgdumajbax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002834.6923254-719-241957398608054/AnsiballZ_getent.py'
Dec 06 06:33:54 compute-0 sudo[102322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:55 compute-0 ceph-mon[74339]: 8.17 deep-scrub starts
Dec 06 06:33:55 compute-0 ceph-mon[74339]: 8.17 deep-scrub ok
Dec 06 06:33:55 compute-0 ceph-mon[74339]: pgmap v337: 305 pgs: 1 active+clean+scrubbing, 1 peering, 2 active+remapped, 301 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:55 compute-0 ceph-mon[74339]: 11.15 scrub starts
Dec 06 06:33:55 compute-0 ceph-mon[74339]: 11.15 scrub ok
Dec 06 06:33:55 compute-0 python3.9[102324]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 06 06:33:55 compute-0 sudo[102322]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:55.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:56 compute-0 sudo[102475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yphqfrofsntvlfqbcbjspnbpoajzxwrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002835.4723182-743-123991180447619/AnsiballZ_group.py'
Dec 06 06:33:56 compute-0 sudo[102475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 1 active+clean+scrubbing, 1 peering, 2 active+remapped, 301 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:56 compute-0 python3.9[102477]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 06:33:56 compute-0 sudo[102475]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:33:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:56.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:33:56 compute-0 sudo[102628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pksbdswyaovbhbsguezbrdwfgaxmmatf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002836.6036384-770-64792166529707/AnsiballZ_file.py'
Dec 06 06:33:56 compute-0 sudo[102628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:57 compute-0 python3.9[102630]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 06 06:33:57 compute-0 sudo[102628]: pam_unix(sudo:session): session closed for user root
Dec 06 06:33:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:57.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:57 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 06 06:33:57 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 06 06:33:57 compute-0 ceph-mon[74339]: 10.1e scrub starts
Dec 06 06:33:57 compute-0 ceph-mon[74339]: 10.1e scrub ok
Dec 06 06:33:57 compute-0 ceph-mon[74339]: pgmap v338: 305 pgs: 1 active+clean+scrubbing, 1 peering, 2 active+remapped, 301 active+clean; 455 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Dec 06 06:33:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 06 06:33:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:33:58.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:58 compute-0 sudo[102781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apemxmtnpxnnupgzgdkycihqbjokzsen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002838.2130828-803-121201402407113/AnsiballZ_dnf.py'
Dec 06 06:33:58 compute-0 sudo[102781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:33:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:33:58 compute-0 python3.9[102783]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:33:58 compute-0 ceph-mon[74339]: 11.18 scrub starts
Dec 06 06:33:58 compute-0 ceph-mon[74339]: 11.18 scrub ok
Dec 06 06:33:58 compute-0 ceph-mon[74339]: 8.14 scrub starts
Dec 06 06:33:58 compute-0 ceph-mon[74339]: 8.14 scrub ok
Dec 06 06:33:58 compute-0 ceph-mon[74339]: pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:33:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 06 06:33:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 06 06:33:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 06 06:33:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 06 06:33:59 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 06 06:33:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:33:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:33:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:33:59.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:33:59 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.1f deep-scrub starts
Dec 06 06:33:59 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 11.1f deep-scrub ok
Dec 06 06:34:00 compute-0 sudo[102781]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Dec 06 06:34:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 06 06:34:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 06 06:34:00 compute-0 ceph-mon[74339]: osdmap e114: 3 total, 3 up, 3 in
Dec 06 06:34:00 compute-0 ceph-mon[74339]: 11.1f deep-scrub starts
Dec 06 06:34:00 compute-0 ceph-mon[74339]: 11.1f deep-scrub ok
Dec 06 06:34:00 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec 06 06:34:00 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec 06 06:34:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:00.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 06 06:34:00 compute-0 sudo[102935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdglwqcgyhqgldfeveotsgbmmjxvtwvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002840.3704836-827-273534255598354/AnsiballZ_file.py'
Dec 06 06:34:00 compute-0 sudo[102935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:00 compute-0 python3.9[102937]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:34:00 compute-0 sudo[102935]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 06 06:34:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 06 06:34:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 06 06:34:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:01.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:01 compute-0 sudo[103087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyyoqpiipkptuvsppxffypccrxwqucqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002841.0013177-851-251660324715374/AnsiballZ_stat.py'
Dec 06 06:34:01 compute-0 sudo[103087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:01 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec 06 06:34:01 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec 06 06:34:01 compute-0 python3.9[103089]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:34:01 compute-0 sudo[103087]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:01 compute-0 ceph-mon[74339]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 06 06:34:01 compute-0 ceph-mon[74339]: 10.5 scrub starts
Dec 06 06:34:01 compute-0 ceph-mon[74339]: 10.5 scrub ok
Dec 06 06:34:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 06 06:34:01 compute-0 ceph-mon[74339]: osdmap e115: 3 total, 3 up, 3 in
Dec 06 06:34:01 compute-0 ceph-mon[74339]: 11.12 scrub starts
Dec 06 06:34:01 compute-0 ceph-mon[74339]: 11.12 scrub ok
Dec 06 06:34:01 compute-0 sudo[103165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oujcbhwbhhidhlnfsxmyiynzndeweprn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002841.0013177-851-251660324715374/AnsiballZ_file.py'
Dec 06 06:34:01 compute-0 sudo[103165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:01 compute-0 python3.9[103167]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:34:01 compute-0 sudo[103165]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec 06 06:34:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:02 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 06 06:34:02 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 06 06:34:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:02.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:02 compute-0 sudo[103318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjddefgkgcmolvroroxsuqgmyoepqmnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002842.272598-890-161190957805552/AnsiballZ_stat.py'
Dec 06 06:34:02 compute-0 sudo[103318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec 06 06:34:02 compute-0 python3.9[103320]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:34:02 compute-0 sudo[103318]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:03 compute-0 sudo[103396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yanwvbfrsahsjjyggxoakshtzzoprhmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002842.272598-890-161190957805552/AnsiballZ_file.py'
Dec 06 06:34:03 compute-0 sudo[103396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:03 compute-0 python3.9[103398]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:34:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:03.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:03 compute-0 sudo[103396]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:03 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 06 06:34:03 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 06 06:34:04 compute-0 sudo[103548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quosinyzffohbpmoypzeoevlzdjlmokx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002843.780063-935-165471195106935/AnsiballZ_dnf.py'
Dec 06 06:34:04 compute-0 sudo[103548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec 06 06:34:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:04 compute-0 python3.9[103550]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:34:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:04.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:05.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:05 compute-0 sudo[103548]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec 06 06:34:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:06.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:07.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec 06 06:34:08 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:08 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.18 deep-scrub starts
Dec 06 06:34:08 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.18 deep-scrub ok
Dec 06 06:34:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:08.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:09.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:10 compute-0 sudo[103579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:10 compute-0 sudo[103579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:10 compute-0 sudo[103579]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:10 compute-0 sudo[103604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:10 compute-0 sudo[103604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:10 compute-0 sudo[103604]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec 06 06:34:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:10 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 06 06:34:10 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 06 06:34:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:10.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:11.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec 06 06:34:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:11 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:34:11 compute-0 ceph-mon[74339]: paxos.0).electionLogic(17) init, last seen epoch 17, mid-election, bumping
Dec 06 06:34:11 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:34:12 compute-0 sudo[103630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:12 compute-0 sudo[103630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:12 compute-0 sudo[103630]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:12 compute-0 sudo[103655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:34:12 compute-0 sudo[103655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:12 compute-0 sudo[103655]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:12 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 06:34:12 compute-0 sudo[103681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:12 compute-0 sudo[103681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:12 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec 06 06:34:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:12 compute-0 sudo[103681]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:12 compute-0 sudo[103706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:34:12 compute-0 sudo[103706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:12.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:12 compute-0 podman[103803]: 2025-12-06 06:34:12.7215022 +0000 UTC m=+0.054441871 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:34:12 compute-0 podman[103803]: 2025-12-06 06:34:12.83348457 +0000 UTC m=+0.166424221 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:34:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:34:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:34:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:34:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:34:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:34:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:34:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:13.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:13 compute-0 podman[103954]: 2025-12-06 06:34:13.440179107 +0000 UTC m=+0.054457102 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:34:13 compute-0 podman[103954]: 2025-12-06 06:34:13.45459915 +0000 UTC m=+0.068877145 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:34:13 compute-0 podman[104021]: 2025-12-06 06:34:13.644514941 +0000 UTC m=+0.049682591 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, release=1793, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., vcs-type=git, version=2.2.4)
Dec 06 06:34:13 compute-0 podman[104021]: 2025-12-06 06:34:13.661717136 +0000 UTC m=+0.066884786 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, architecture=x86_64, io.openshift.expose-services=, name=keepalived, release=1793, description=keepalived for Ceph, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 06:34:13 compute-0 sudo[103706]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:13 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:34:13 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:34:14 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:34:14
Dec 06 06:34:14 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:34:14 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:34:14 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'volumes']
Dec 06 06:34:14 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 1/10 changes
Dec 06 06:34:14 compute-0 ceph-mgr[74630]: [balancer INFO root] Executing plan auto_2025-12-06_06:34:14
Dec 06 06:34:14 compute-0 ceph-mgr[74630]: [balancer INFO root] ceph osd rm-pg-upmap-items 9.14
Dec 06 06:34:14 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.14"} v 0) v1
Dec 06 06:34:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.14"}]: dispatch
Dec 06 06:34:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:14 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec 06 06:34:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:14 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 06 06:34:14 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 06 06:34:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:14.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:15.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:15 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 06 06:34:15 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 06 06:34:16 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 06:34:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:16 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:16 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 06 06:34:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:16.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:16 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:34:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:34:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 8m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:16 compute-0 sudo[104056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:16 compute-0 sudo[104056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:16 compute-0 sudo[104056]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:16 compute-0 sudo[104081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:34:16 compute-0 sudo[104081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:16 compute-0 sudo[104081]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:16 compute-0 sudo[104106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:16 compute-0 sudo[104106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:16 compute-0 sudo[104106]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:17 compute-0 sudo[104131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:34:17 compute-0 sudo[104131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:17.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:17 compute-0 sudo[104131]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec 06 06:34:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.14"}]': finished
Dec 06 06:34:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.13 scrub starts
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.13 scrub ok
Dec 06 06:34:18 compute-0 ceph-mon[74339]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.f scrub starts
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.f scrub ok
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.3 scrub starts
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.3 scrub ok
Dec 06 06:34:18 compute-0 ceph-mon[74339]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 8.8 deep-scrub starts
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.11 scrub starts
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.11 scrub ok
Dec 06 06:34:18 compute-0 ceph-mon[74339]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.18 deep-scrub starts
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.18 deep-scrub ok
Dec 06 06:34:18 compute-0 ceph-mon[74339]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.19 scrub starts
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.19 scrub ok
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 8.1b scrub starts
Dec 06 06:34:18 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:18 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 06:34:18 compute-0 ceph-mon[74339]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.14"}]: dispatch
Dec 06 06:34:18 compute-0 ceph-mon[74339]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.1b scrub starts
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.1b scrub ok
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.15 scrub starts
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.15 scrub ok
Dec 06 06:34:18 compute-0 ceph-mon[74339]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.14 scrub starts
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.14 scrub ok
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 8.1b scrub ok
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 8.8 deep-scrub ok
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.1 scrub starts
Dec 06 06:34:18 compute-0 ceph-mon[74339]: 10.1 scrub ok
Dec 06 06:34:18 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:34:18 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:34:18 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:34:18 compute-0 ceph-mon[74339]: osdmap e116: 3 total, 3 up, 3 in
Dec 06 06:34:18 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 8m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:34:18 compute-0 ceph-mon[74339]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 06:34:18 compute-0 ceph-mon[74339]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:34:18 compute-0 ceph-mon[74339]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:34:18 compute-0 ceph-mon[74339]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec 06 06:34:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Dec 06 06:34:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 06 06:34:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:18.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:34:19 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 117 pg[9.14( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=73/73 les/c/f=74/74/0 sis=117) [0] r=0 lpr=117 pi=[73,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec 06 06:34:19 compute-0 ceph-mon[74339]: 10.10 scrub starts
Dec 06 06:34:19 compute-0 ceph-mon[74339]: 10.10 scrub ok
Dec 06 06:34:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd rm-pg-upmap-items", "format": "json", "pgid": "9.14"}]': finished
Dec 06 06:34:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 06 06:34:19 compute-0 ceph-mon[74339]: osdmap e117: 3 total, 3 up, 3 in
Dec 06 06:34:19 compute-0 ceph-mon[74339]: pgmap v353: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 06 06:34:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 06 06:34:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec 06 06:34:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec 06 06:34:19 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 118 pg[9.14( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=73/73 les/c/f=74/74/0 sis=118) [0]/[1] r=-1 lpr=118 pi=[73,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:19 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 118 pg[9.14( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=73/73 les/c/f=74/74/0 sis=118) [0]/[1] r=-1 lpr=118 pi=[73,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:19.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:19 compute-0 python3.9[104313]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:34:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Dec 06 06:34:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 06 06:34:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec 06 06:34:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 06 06:34:20 compute-0 ceph-mon[74339]: osdmap e118: 3 total, 3 up, 3 in
Dec 06 06:34:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 06 06:34:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 06 06:34:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec 06 06:34:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec 06 06:34:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:20.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:20 compute-0 python3.9[104466]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 06 06:34:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:34:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:34:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:21 compute-0 python3.9[104616]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:34:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec 06 06:34:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:21.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:21 compute-0 ceph-mon[74339]: pgmap v355: 305 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 06 06:34:21 compute-0 ceph-mon[74339]: osdmap e119: 3 total, 3 up, 3 in
Dec 06 06:34:21 compute-0 ceph-mon[74339]: 11.f scrub starts
Dec 06 06:34:21 compute-0 ceph-mon[74339]: 11.f scrub ok
Dec 06 06:34:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec 06 06:34:21 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec 06 06:34:21 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 120 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=118/73 les/c/f=119/74/0 sis=120) [0] r=0 lpr=120 pi=[73,120)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:21 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 120 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=118/73 les/c/f=119/74/0 sis=120) [0] r=0 lpr=120 pi=[73,120)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:34:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:34:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:34:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:34:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:34:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 89294864-4946-4d2b-9f0c-eca4201c6618 does not exist
Dec 06 06:34:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 68d5294a-9728-4fbd-989c-a6c5ec80eb46 does not exist
Dec 06 06:34:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0cbffbfc-23e6-4d03-bcd3-a23de26ddbd2 does not exist
Dec 06 06:34:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:34:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:34:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:34:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:34:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:34:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:34:21 compute-0 sudo[104641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:21 compute-0 sudo[104641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:21 compute-0 sudo[104641]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:21 compute-0 sudo[104666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:34:21 compute-0 sudo[104666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:21 compute-0 sudo[104666]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:21 compute-0 sudo[104691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:21 compute-0 sudo[104691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:21 compute-0 sudo[104691]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:21 compute-0 sudo[104718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:34:21 compute-0 sudo[104718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:22 compute-0 podman[104833]: 2025-12-06 06:34:22.119384533 +0000 UTC m=+0.040797140 container create c49192913c7f3e88016e67d935f772f5a3a5516a38d87b4b3047a1d24aeecf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:34:22 compute-0 systemd[1]: Started libpod-conmon-c49192913c7f3e88016e67d935f772f5a3a5516a38d87b4b3047a1d24aeecf69.scope.
Dec 06 06:34:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:34:22 compute-0 podman[104833]: 2025-12-06 06:34:22.099983243 +0000 UTC m=+0.021395850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:34:22 compute-0 podman[104833]: 2025-12-06 06:34:22.203830394 +0000 UTC m=+0.125243001 container init c49192913c7f3e88016e67d935f772f5a3a5516a38d87b4b3047a1d24aeecf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:34:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:34:22 compute-0 podman[104833]: 2025-12-06 06:34:22.21114756 +0000 UTC m=+0.132560147 container start c49192913c7f3e88016e67d935f772f5a3a5516a38d87b4b3047a1d24aeecf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 06 06:34:22 compute-0 podman[104833]: 2025-12-06 06:34:22.214294252 +0000 UTC m=+0.135706859 container attach c49192913c7f3e88016e67d935f772f5a3a5516a38d87b4b3047a1d24aeecf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 06:34:22 compute-0 loving_keller[104851]: 167 167
Dec 06 06:34:22 compute-0 systemd[1]: libpod-c49192913c7f3e88016e67d935f772f5a3a5516a38d87b4b3047a1d24aeecf69.scope: Deactivated successfully.
Dec 06 06:34:22 compute-0 conmon[104851]: conmon c49192913c7f3e88016e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c49192913c7f3e88016e67d935f772f5a3a5516a38d87b4b3047a1d24aeecf69.scope/container/memory.events
Dec 06 06:34:22 compute-0 podman[104833]: 2025-12-06 06:34:22.219075622 +0000 UTC m=+0.140488209 container died c49192913c7f3e88016e67d935f772f5a3a5516a38d87b4b3047a1d24aeecf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 06:34:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb2c76138eb358299d72694c8684a76e5d19eaf5e98e1ff1f2625610fde2bfd2-merged.mount: Deactivated successfully.
Dec 06 06:34:22 compute-0 podman[104833]: 2025-12-06 06:34:22.252253447 +0000 UTC m=+0.173666034 container remove c49192913c7f3e88016e67d935f772f5a3a5516a38d87b4b3047a1d24aeecf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:34:22 compute-0 systemd[1]: libpod-conmon-c49192913c7f3e88016e67d935f772f5a3a5516a38d87b4b3047a1d24aeecf69.scope: Deactivated successfully.
Dec 06 06:34:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec 06 06:34:22 compute-0 podman[104919]: 2025-12-06 06:34:22.40789818 +0000 UTC m=+0.041662665 container create 890dffe771ff1088b806e74787e768398e8eff627b53615fefeee0964d96b45c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:34:22 compute-0 sudo[104959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzkdvqyzopdbuomzzpmzlyxawmjtgose ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002861.7727206-1058-88274027139055/AnsiballZ_systemd.py'
Dec 06 06:34:22 compute-0 sudo[104959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:22.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:22 compute-0 systemd[1]: Started libpod-conmon-890dffe771ff1088b806e74787e768398e8eff627b53615fefeee0964d96b45c.scope.
Dec 06 06:34:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1537248b755fca4fe8870713d8128c6ba61d059ca3418f56e68f0bc6f869d1d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1537248b755fca4fe8870713d8128c6ba61d059ca3418f56e68f0bc6f869d1d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1537248b755fca4fe8870713d8128c6ba61d059ca3418f56e68f0bc6f869d1d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1537248b755fca4fe8870713d8128c6ba61d059ca3418f56e68f0bc6f869d1d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1537248b755fca4fe8870713d8128c6ba61d059ca3418f56e68f0bc6f869d1d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:22 compute-0 podman[104919]: 2025-12-06 06:34:22.390065906 +0000 UTC m=+0.023830441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:34:22 compute-0 podman[104919]: 2025-12-06 06:34:22.485696767 +0000 UTC m=+0.119461272 container init 890dffe771ff1088b806e74787e768398e8eff627b53615fefeee0964d96b45c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 06:34:22 compute-0 podman[104919]: 2025-12-06 06:34:22.492437064 +0000 UTC m=+0.126201549 container start 890dffe771ff1088b806e74787e768398e8eff627b53615fefeee0964d96b45c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 06:34:22 compute-0 podman[104919]: 2025-12-06 06:34:22.495925856 +0000 UTC m=+0.129690361 container attach 890dffe771ff1088b806e74787e768398e8eff627b53615fefeee0964d96b45c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 06:34:22 compute-0 ceph-mon[74339]: osdmap e120: 3 total, 3 up, 3 in
Dec 06 06:34:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:34:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:34:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:34:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:34:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:34:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec 06 06:34:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec 06 06:34:22 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 121 pg[9.14( v 58'1159 (0'0,58'1159] local-lis/les=120/121 n=5 ec=62/51 lis/c=118/73 les/c/f=119/74/0 sis=120) [0] r=0 lpr=120 pi=[73,120)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:34:22 compute-0 python3.9[104961]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:34:22 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 06 06:34:22 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 06 06:34:22 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 06 06:34:22 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 06 06:34:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:34:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:34:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:34:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:34:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:34:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:34:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:34:23 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 06 06:34:23 compute-0 sudo[104959]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:23.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:34:23 compute-0 cranky_hodgkin[104964]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:34:23 compute-0 cranky_hodgkin[104964]: --> relative data size: 1.0
Dec 06 06:34:23 compute-0 cranky_hodgkin[104964]: --> All data devices are unavailable
Dec 06 06:34:23 compute-0 systemd[1]: libpod-890dffe771ff1088b806e74787e768398e8eff627b53615fefeee0964d96b45c.scope: Deactivated successfully.
Dec 06 06:34:23 compute-0 podman[104919]: 2025-12-06 06:34:23.343926883 +0000 UTC m=+0.977691388 container died 890dffe771ff1088b806e74787e768398e8eff627b53615fefeee0964d96b45c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:34:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1537248b755fca4fe8870713d8128c6ba61d059ca3418f56e68f0bc6f869d1d9-merged.mount: Deactivated successfully.
Dec 06 06:34:23 compute-0 podman[104919]: 2025-12-06 06:34:23.405743269 +0000 UTC m=+1.039507744 container remove 890dffe771ff1088b806e74787e768398e8eff627b53615fefeee0964d96b45c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hodgkin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:34:23 compute-0 systemd[1]: libpod-conmon-890dffe771ff1088b806e74787e768398e8eff627b53615fefeee0964d96b45c.scope: Deactivated successfully.
Dec 06 06:34:23 compute-0 sudo[104718]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:23 compute-0 sudo[105025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:23 compute-0 ceph-mon[74339]: pgmap v358: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:34:23 compute-0 ceph-mon[74339]: osdmap e121: 3 total, 3 up, 3 in
Dec 06 06:34:23 compute-0 sudo[105025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:23 compute-0 sudo[105025]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:23 compute-0 sudo[105050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:34:23 compute-0 sudo[105050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:23 compute-0 sudo[105050]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:34:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec 06 06:34:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec 06 06:34:23 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec 06 06:34:23 compute-0 sudo[105075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:23 compute-0 sudo[105075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:23 compute-0 sudo[105075]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:23 compute-0 sudo[105100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:34:23 compute-0 sudo[105100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:24 compute-0 podman[105288]: 2025-12-06 06:34:24.024534041 +0000 UTC m=+0.039210793 container create 3ebd43e20a54371fd72728e6d829f37c75cd0b8a60692940363ff966385c240e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 06:34:24 compute-0 systemd[1]: Started libpod-conmon-3ebd43e20a54371fd72728e6d829f37c75cd0b8a60692940363ff966385c240e.scope.
Dec 06 06:34:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:34:24 compute-0 podman[105288]: 2025-12-06 06:34:24.100808822 +0000 UTC m=+0.115485584 container init 3ebd43e20a54371fd72728e6d829f37c75cd0b8a60692940363ff966385c240e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Dec 06 06:34:24 compute-0 podman[105288]: 2025-12-06 06:34:24.00711414 +0000 UTC m=+0.021790912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:34:24 compute-0 podman[105288]: 2025-12-06 06:34:24.108826878 +0000 UTC m=+0.123503630 container start 3ebd43e20a54371fd72728e6d829f37c75cd0b8a60692940363ff966385c240e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:34:24 compute-0 podman[105288]: 2025-12-06 06:34:24.112404052 +0000 UTC m=+0.127080804 container attach 3ebd43e20a54371fd72728e6d829f37c75cd0b8a60692940363ff966385c240e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wing, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:34:24 compute-0 angry_wing[105307]: 167 167
Dec 06 06:34:24 compute-0 systemd[1]: libpod-3ebd43e20a54371fd72728e6d829f37c75cd0b8a60692940363ff966385c240e.scope: Deactivated successfully.
Dec 06 06:34:24 compute-0 podman[105288]: 2025-12-06 06:34:24.116435071 +0000 UTC m=+0.131111823 container died 3ebd43e20a54371fd72728e6d829f37c75cd0b8a60692940363ff966385c240e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wing, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 06:34:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-49b4ae5801cff6cd361d88c9b9f0aadfd91783df6d1af3a7e53f770023a6863f-merged.mount: Deactivated successfully.
Dec 06 06:34:24 compute-0 podman[105288]: 2025-12-06 06:34:24.158295062 +0000 UTC m=+0.172971824 container remove 3ebd43e20a54371fd72728e6d829f37c75cd0b8a60692940363ff966385c240e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wing, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 06:34:24 compute-0 systemd[1]: libpod-conmon-3ebd43e20a54371fd72728e6d829f37c75cd0b8a60692940363ff966385c240e.scope: Deactivated successfully.
Dec 06 06:34:24 compute-0 python3.9[105292]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 06 06:34:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 1 active+recovering+remapped, 1 remapped+peering, 1 active+remapped, 302 active+clean; 456 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 1/216 objects misplaced (0.463%); 27 B/s, 1 objects/s recovering
Dec 06 06:34:24 compute-0 podman[105357]: 2025-12-06 06:34:24.310916185 +0000 UTC m=+0.039945084 container create 10e32395a02b796f6428e55aee675d7121fdaf761dc3ab1db69a8d1b4f2ad0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 06:34:24 compute-0 systemd[1]: Started libpod-conmon-10e32395a02b796f6428e55aee675d7121fdaf761dc3ab1db69a8d1b4f2ad0f2.scope.
Dec 06 06:34:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a5c7c4bf9ae669990759372917273aaffbf2d13df6542db6152e0d61f4388e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a5c7c4bf9ae669990759372917273aaffbf2d13df6542db6152e0d61f4388e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a5c7c4bf9ae669990759372917273aaffbf2d13df6542db6152e0d61f4388e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a5c7c4bf9ae669990759372917273aaffbf2d13df6542db6152e0d61f4388e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:24 compute-0 podman[105357]: 2025-12-06 06:34:24.386375293 +0000 UTC m=+0.115404172 container init 10e32395a02b796f6428e55aee675d7121fdaf761dc3ab1db69a8d1b4f2ad0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:34:24 compute-0 podman[105357]: 2025-12-06 06:34:24.293145664 +0000 UTC m=+0.022174573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:34:24 compute-0 podman[105357]: 2025-12-06 06:34:24.393049418 +0000 UTC m=+0.122078277 container start 10e32395a02b796f6428e55aee675d7121fdaf761dc3ab1db69a8d1b4f2ad0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 06:34:24 compute-0 podman[105357]: 2025-12-06 06:34:24.396325246 +0000 UTC m=+0.125354125 container attach 10e32395a02b796f6428e55aee675d7121fdaf761dc3ab1db69a8d1b4f2ad0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:34:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:24.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:34:24 compute-0 ceph-mon[74339]: paxos.0).electionLogic(21) init, last seen epoch 21, mid-election, bumping
Dec 06 06:34:24 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:34:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:34:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:34:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:34:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:34:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec 06 06:34:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 8m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:34:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 06:34:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 06:34:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec 06 06:34:25 compute-0 musing_bouman[105374]: {
Dec 06 06:34:25 compute-0 musing_bouman[105374]:     "0": [
Dec 06 06:34:25 compute-0 musing_bouman[105374]:         {
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             "devices": [
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "/dev/loop3"
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             ],
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             "lv_name": "ceph_lv0",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             "lv_size": "7511998464",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             "name": "ceph_lv0",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             "tags": {
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "ceph.cluster_name": "ceph",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "ceph.crush_device_class": "",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "ceph.encrypted": "0",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "ceph.osd_id": "0",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "ceph.type": "block",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:                 "ceph.vdo": "0"
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             },
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             "type": "block",
Dec 06 06:34:25 compute-0 musing_bouman[105374]:             "vg_name": "ceph_vg0"
Dec 06 06:34:25 compute-0 musing_bouman[105374]:         }
Dec 06 06:34:25 compute-0 musing_bouman[105374]:     ]
Dec 06 06:34:25 compute-0 musing_bouman[105374]: }
Dec 06 06:34:25 compute-0 systemd[1]: libpod-10e32395a02b796f6428e55aee675d7121fdaf761dc3ab1db69a8d1b4f2ad0f2.scope: Deactivated successfully.
Dec 06 06:34:25 compute-0 podman[105357]: 2025-12-06 06:34:25.232819093 +0000 UTC m=+0.961847952 container died 10e32395a02b796f6428e55aee675d7121fdaf761dc3ab1db69a8d1b4f2ad0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.543132328308208e-06 of space, bias 1.0, pg target 0.001962939698492462 quantized to 32 (current 32)
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:34:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a5c7c4bf9ae669990759372917273aaffbf2d13df6542db6152e0d61f4388e7-merged.mount: Deactivated successfully.
Dec 06 06:34:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:25.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:25 compute-0 podman[105357]: 2025-12-06 06:34:25.291038634 +0000 UTC m=+1.020067493 container remove 10e32395a02b796f6428e55aee675d7121fdaf761dc3ab1db69a8d1b4f2ad0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:34:25 compute-0 systemd[1]: libpod-conmon-10e32395a02b796f6428e55aee675d7121fdaf761dc3ab1db69a8d1b4f2ad0f2.scope: Deactivated successfully.
Dec 06 06:34:25 compute-0 sudo[105100]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:25 compute-0 sudo[105393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:25 compute-0 sudo[105393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:25 compute-0 sudo[105393]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:25 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 06 06:34:25 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 06 06:34:25 compute-0 sudo[105418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:34:25 compute-0 sudo[105418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:25 compute-0 sudo[105418]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:25 compute-0 sudo[105443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:25 compute-0 sudo[105443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:25 compute-0 sudo[105443]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 06:34:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 06:34:25 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 06:34:25 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:34:25 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:34:25 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:34:25 compute-0 ceph-mon[74339]: osdmap e122: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 8m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:34:25 compute-0 ceph-mon[74339]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 06:34:25 compute-0 ceph-mon[74339]: Cluster is now healthy
Dec 06 06:34:25 compute-0 sudo[105468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:34:25 compute-0 sudo[105468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec 06 06:34:25 compute-0 podman[105534]: 2025-12-06 06:34:25.872147608 +0000 UTC m=+0.038352957 container create 42a8a03c1c66013a3752a4f850d166dd2aee303b6a597a7efe16fe4a753da7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_yalow, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:34:25 compute-0 systemd[1]: Started libpod-conmon-42a8a03c1c66013a3752a4f850d166dd2aee303b6a597a7efe16fe4a753da7b4.scope.
Dec 06 06:34:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:34:25 compute-0 podman[105534]: 2025-12-06 06:34:25.855585371 +0000 UTC m=+0.021790730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:34:25 compute-0 podman[105534]: 2025-12-06 06:34:25.963011948 +0000 UTC m=+0.129217297 container init 42a8a03c1c66013a3752a4f850d166dd2aee303b6a597a7efe16fe4a753da7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:34:25 compute-0 podman[105534]: 2025-12-06 06:34:25.9719029 +0000 UTC m=+0.138108249 container start 42a8a03c1c66013a3752a4f850d166dd2aee303b6a597a7efe16fe4a753da7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 06:34:25 compute-0 podman[105534]: 2025-12-06 06:34:25.975181346 +0000 UTC m=+0.141386705 container attach 42a8a03c1c66013a3752a4f850d166dd2aee303b6a597a7efe16fe4a753da7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:34:25 compute-0 xenodochial_yalow[105550]: 167 167
Dec 06 06:34:25 compute-0 systemd[1]: libpod-42a8a03c1c66013a3752a4f850d166dd2aee303b6a597a7efe16fe4a753da7b4.scope: Deactivated successfully.
Dec 06 06:34:25 compute-0 conmon[105550]: conmon 42a8a03c1c66013a3752 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42a8a03c1c66013a3752a4f850d166dd2aee303b6a597a7efe16fe4a753da7b4.scope/container/memory.events
Dec 06 06:34:25 compute-0 podman[105534]: 2025-12-06 06:34:25.978715879 +0000 UTC m=+0.144921238 container died 42a8a03c1c66013a3752a4f850d166dd2aee303b6a597a7efe16fe4a753da7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:34:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9626206b76b08a765d72f2d4454e565636e0753aab003483f07385e5effe8ca-merged.mount: Deactivated successfully.
Dec 06 06:34:26 compute-0 podman[105534]: 2025-12-06 06:34:26.013048929 +0000 UTC m=+0.179254278 container remove 42a8a03c1c66013a3752a4f850d166dd2aee303b6a597a7efe16fe4a753da7b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:34:26 compute-0 systemd[1]: libpod-conmon-42a8a03c1c66013a3752a4f850d166dd2aee303b6a597a7efe16fe4a753da7b4.scope: Deactivated successfully.
Dec 06 06:34:26 compute-0 podman[105573]: 2025-12-06 06:34:26.202042802 +0000 UTC m=+0.055332378 container create f9f25720657b24fd7e3fca631c94011eb989a32bdd6bf0dbd68877079b69ad2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:34:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 1 active+recovering+remapped, 1 remapped+peering, 303 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 1/216 objects misplaced (0.463%); 22 B/s, 0 objects/s recovering
Dec 06 06:34:26 compute-0 systemd[1]: Started libpod-conmon-f9f25720657b24fd7e3fca631c94011eb989a32bdd6bf0dbd68877079b69ad2d.scope.
Dec 06 06:34:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7b5fffbd3f31c88d8badb3a12aecf5b604a576431a4d93562f84b2784d5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:26 compute-0 podman[105573]: 2025-12-06 06:34:26.180796428 +0000 UTC m=+0.034086034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7b5fffbd3f31c88d8badb3a12aecf5b604a576431a4d93562f84b2784d5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7b5fffbd3f31c88d8badb3a12aecf5b604a576431a4d93562f84b2784d5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7b5fffbd3f31c88d8badb3a12aecf5b604a576431a4d93562f84b2784d5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:34:26 compute-0 podman[105573]: 2025-12-06 06:34:26.289046648 +0000 UTC m=+0.142336244 container init f9f25720657b24fd7e3fca631c94011eb989a32bdd6bf0dbd68877079b69ad2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:34:26 compute-0 podman[105573]: 2025-12-06 06:34:26.30783206 +0000 UTC m=+0.161121626 container start f9f25720657b24fd7e3fca631c94011eb989a32bdd6bf0dbd68877079b69ad2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:34:26 compute-0 podman[105573]: 2025-12-06 06:34:26.311881839 +0000 UTC m=+0.165171465 container attach f9f25720657b24fd7e3fca631c94011eb989a32bdd6bf0dbd68877079b69ad2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:34:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:26.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:26 compute-0 ceph-mon[74339]: 11.1d scrub starts
Dec 06 06:34:26 compute-0 ceph-mon[74339]: 11.1d scrub ok
Dec 06 06:34:26 compute-0 ceph-mon[74339]: 9.14 scrub starts
Dec 06 06:34:26 compute-0 ceph-mon[74339]: 9.14 scrub ok
Dec 06 06:34:26 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 06:34:26 compute-0 ceph-mon[74339]: osdmap e123: 3 total, 3 up, 3 in
Dec 06 06:34:26 compute-0 ceph-mon[74339]: 9.1b scrub starts
Dec 06 06:34:26 compute-0 ceph-mon[74339]: 9.1b scrub ok
Dec 06 06:34:26 compute-0 ceph-mon[74339]: pgmap v363: 305 pgs: 1 active+recovering+remapped, 1 remapped+peering, 303 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 1/216 objects misplaced (0.463%); 22 B/s, 0 objects/s recovering
Dec 06 06:34:27 compute-0 strange_fermat[105591]: {
Dec 06 06:34:27 compute-0 strange_fermat[105591]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:34:27 compute-0 strange_fermat[105591]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:34:27 compute-0 strange_fermat[105591]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:34:27 compute-0 strange_fermat[105591]:         "osd_id": 0,
Dec 06 06:34:27 compute-0 strange_fermat[105591]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:34:27 compute-0 strange_fermat[105591]:         "type": "bluestore"
Dec 06 06:34:27 compute-0 strange_fermat[105591]:     }
Dec 06 06:34:27 compute-0 strange_fermat[105591]: }
Dec 06 06:34:27 compute-0 systemd[1]: libpod-f9f25720657b24fd7e3fca631c94011eb989a32bdd6bf0dbd68877079b69ad2d.scope: Deactivated successfully.
Dec 06 06:34:27 compute-0 podman[105573]: 2025-12-06 06:34:27.217054365 +0000 UTC m=+1.070343941 container died f9f25720657b24fd7e3fca631c94011eb989a32bdd6bf0dbd68877079b69ad2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_fermat, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec 06 06:34:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c57d7b5fffbd3f31c88d8badb3a12aecf5b604a576431a4d93562f84b2784d5d-merged.mount: Deactivated successfully.
Dec 06 06:34:27 compute-0 podman[105573]: 2025-12-06 06:34:27.268673972 +0000 UTC m=+1.121963538 container remove f9f25720657b24fd7e3fca631c94011eb989a32bdd6bf0dbd68877079b69ad2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:34:27 compute-0 systemd[1]: libpod-conmon-f9f25720657b24fd7e3fca631c94011eb989a32bdd6bf0dbd68877079b69ad2d.scope: Deactivated successfully.
Dec 06 06:34:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:27.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:27 compute-0 sudo[105468]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:34:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:34:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 526c93cc-cf36-4d5b-afba-9de464ea6315 does not exist
Dec 06 06:34:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e7322790-41c9-4be5-a11c-fa0f6977a35f does not exist
Dec 06 06:34:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 769399de-3f52-4476-8ddd-756d79c8f5c4 does not exist
Dec 06 06:34:27 compute-0 sudo[105722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:27 compute-0 sudo[105722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:27 compute-0 sudo[105722]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:27 compute-0 sudo[105777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plbyfktavnnhorpskuovbahvxnsiiqsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002867.2702737-1229-47274940636465/AnsiballZ_systemd.py'
Dec 06 06:34:27 compute-0 sudo[105777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:27 compute-0 sudo[105776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:34:27 compute-0 sudo[105776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:27 compute-0 sudo[105776]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:27 compute-0 python3.9[105787]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:34:28 compute-0 sudo[105777]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Dec 06 06:34:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Dec 06 06:34:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 06 06:34:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:28.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:28 compute-0 sudo[105956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfjemoxlofiykjwjorqipqnkbuwmxrqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002868.1778412-1229-213253706808519/AnsiballZ_systemd.py'
Dec 06 06:34:28 compute-0 sudo[105956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec 06 06:34:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:34:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 06 06:34:28 compute-0 python3.9[105958]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:34:28 compute-0 sudo[105956]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 06 06:34:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec 06 06:34:28 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec 06 06:34:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:29.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:29 compute-0 sshd-session[94065]: Connection closed by 192.168.122.30 port 51060
Dec 06 06:34:29 compute-0 sshd-session[94062]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:34:29 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Dec 06 06:34:29 compute-0 systemd[1]: session-35.scope: Consumed 1min 11.813s CPU time.
Dec 06 06:34:29 compute-0 systemd-logind[798]: Session 35 logged out. Waiting for processes to exit.
Dec 06 06:34:29 compute-0 systemd-logind[798]: Removed session 35.
Dec 06 06:34:29 compute-0 ceph-mon[74339]: pgmap v364: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Dec 06 06:34:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 06 06:34:29 compute-0 ceph-mon[74339]: osdmap e124: 3 total, 3 up, 3 in
Dec 06 06:34:30 compute-0 sudo[105986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:30 compute-0 sudo[105986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:30 compute-0 sudo[105986]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Dec 06 06:34:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 06 06:34:30 compute-0 sudo[106011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:30 compute-0 sudo[106011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:30 compute-0 sudo[106011]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:30.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec 06 06:34:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 06 06:34:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec 06 06:34:30 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec 06 06:34:30 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 125 pg[9.19( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=89/89 les/c/f=90/90/0 sis=125) [0] r=0 lpr=125 pi=[89,125)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:31 compute-0 ceph-mon[74339]: pgmap v366: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 06 06:34:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:31.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec 06 06:34:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 1 unknown, 304 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:32.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec 06 06:34:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec 06 06:34:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 126 pg[9.19( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=89/89 les/c/f=90/90/0 sis=126) [0]/[2] r=-1 lpr=126 pi=[89,126)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:32 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 126 pg[9.19( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=89/89 les/c/f=90/90/0 sis=126) [0]/[2] r=-1 lpr=126 pi=[89,126)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 06 06:34:32 compute-0 ceph-mon[74339]: osdmap e125: 3 total, 3 up, 3 in
Dec 06 06:34:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:33.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:34:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec 06 06:34:34 compute-0 ceph-mon[74339]: 8.18 deep-scrub starts
Dec 06 06:34:34 compute-0 ceph-mon[74339]: 8.18 deep-scrub ok
Dec 06 06:34:34 compute-0 ceph-mon[74339]: pgmap v368: 305 pgs: 1 unknown, 304 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:34 compute-0 ceph-mon[74339]: osdmap e126: 3 total, 3 up, 3 in
Dec 06 06:34:34 compute-0 ceph-mon[74339]: 9.3 scrub starts
Dec 06 06:34:34 compute-0 ceph-mon[74339]: 9.3 scrub ok
Dec 06 06:34:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec 06 06:34:34 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec 06 06:34:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 1 unknown, 304 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:34 compute-0 sshd-session[106038]: Accepted publickey for zuul from 192.168.122.30 port 32994 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:34:34 compute-0 systemd-logind[798]: New session 36 of user zuul.
Dec 06 06:34:34 compute-0 systemd[1]: Started Session 36 of User zuul.
Dec 06 06:34:34 compute-0 sshd-session[106038]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:34:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:34.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:35 compute-0 ceph-mon[74339]: 11.1e scrub starts
Dec 06 06:34:35 compute-0 ceph-mon[74339]: 11.1e scrub ok
Dec 06 06:34:35 compute-0 ceph-mon[74339]: osdmap e127: 3 total, 3 up, 3 in
Dec 06 06:34:35 compute-0 ceph-mon[74339]: pgmap v371: 305 pgs: 1 unknown, 304 active+clean; 455 KiB data, 148 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:35 compute-0 ceph-mon[74339]: 11.1c scrub starts
Dec 06 06:34:35 compute-0 ceph-mon[74339]: 11.1c scrub ok
Dec 06 06:34:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec 06 06:34:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:35.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:35 compute-0 python3.9[106191]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:34:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec 06 06:34:35 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec 06 06:34:35 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 128 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=126/89 les/c/f=127/90/0 sis=128) [0] r=0 lpr=128 pi=[89,128)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:35 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 128 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=126/89 les/c/f=127/90/0 sis=128) [0] r=0 lpr=128 pi=[89,128)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:36.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec 06 06:34:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec 06 06:34:36 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec 06 06:34:36 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 129 pg[9.19( v 58'1159 (0'0,58'1159] local-lis/les=128/129 n=5 ec=62/51 lis/c=126/89 les/c/f=127/90/0 sis=128) [0] r=0 lpr=128 pi=[89,128)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:34:36 compute-0 ceph-mon[74339]: osdmap e128: 3 total, 3 up, 3 in
Dec 06 06:34:36 compute-0 ceph-mon[74339]: 9.13 scrub starts
Dec 06 06:34:36 compute-0 ceph-mon[74339]: 9.13 scrub ok
Dec 06 06:34:36 compute-0 ceph-mon[74339]: pgmap v373: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:36 compute-0 sudo[106346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvtsshrhrkdpnuflubyezyunmfhvdlwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002876.306022-72-206683733196245/AnsiballZ_getent.py'
Dec 06 06:34:36 compute-0 sudo[106346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:36 compute-0 python3.9[106348]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 06 06:34:36 compute-0 sudo[106346]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:37.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:37 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 06 06:34:37 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 06 06:34:37 compute-0 sudo[106499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfiuebiceafmaifqjqepvgrzmpsvicuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002877.329296-108-165116601299109/AnsiballZ_setup.py'
Dec 06 06:34:37 compute-0 sudo[106499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:37 compute-0 ceph-mon[74339]: 11.7 scrub starts
Dec 06 06:34:37 compute-0 ceph-mon[74339]: 11.7 scrub ok
Dec 06 06:34:37 compute-0 ceph-mon[74339]: osdmap e129: 3 total, 3 up, 3 in
Dec 06 06:34:37 compute-0 ceph-mon[74339]: 9.19 scrub starts
Dec 06 06:34:37 compute-0 python3.9[106501]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:34:38 compute-0 sudo[106499]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:38.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:38 compute-0 sudo[106584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmobefyiehvnbrqmnejikxieqymztamd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002877.329296-108-165116601299109/AnsiballZ_dnf.py'
Dec 06 06:34:38 compute-0 sudo[106584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:34:38 compute-0 ceph-mon[74339]: 9.19 scrub ok
Dec 06 06:34:38 compute-0 ceph-mon[74339]: 9.b deep-scrub starts
Dec 06 06:34:38 compute-0 ceph-mon[74339]: 9.b deep-scrub ok
Dec 06 06:34:38 compute-0 ceph-mon[74339]: pgmap v375: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:38 compute-0 python3.9[106586]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 06:34:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:39.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:39 compute-0 ceph-mon[74339]: 9.7 scrub starts
Dec 06 06:34:39 compute-0 ceph-mon[74339]: 9.7 scrub ok
Dec 06 06:34:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:34:40 compute-0 sudo[106584]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:40.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:40 compute-0 ceph-mon[74339]: 11.1b scrub starts
Dec 06 06:34:40 compute-0 ceph-mon[74339]: 11.1b scrub ok
Dec 06 06:34:40 compute-0 ceph-mon[74339]: pgmap v376: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Dec 06 06:34:40 compute-0 sudo[106738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrrkazrbjspnzgatangfnakejkvsasdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002880.6372921-150-53499698307775/AnsiballZ_dnf.py'
Dec 06 06:34:40 compute-0 sudo[106738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:41 compute-0 python3.9[106740]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:34:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:41.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:41 compute-0 ceph-mon[74339]: 8.4 deep-scrub starts
Dec 06 06:34:41 compute-0 ceph-mon[74339]: 8.4 deep-scrub ok
Dec 06 06:34:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Dec 06 06:34:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 06 06:34:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:42.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:42 compute-0 sudo[106738]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:34:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:34:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:34:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:34:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:34:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:34:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec 06 06:34:42 compute-0 ceph-mon[74339]: pgmap v377: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 06 06:34:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 06 06:34:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec 06 06:34:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec 06 06:34:42 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 130 pg[9.1a( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=94/94 les/c/f=95/95/0 sis=130) [0] r=0 lpr=130 pi=[94,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:43.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:43 compute-0 sudo[106892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtvidnxnbmjufxsmvnflvphpsyqrzgvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002882.7663214-174-57183394902071/AnsiballZ_systemd.py'
Dec 06 06:34:43 compute-0 sudo[106892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:34:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec 06 06:34:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec 06 06:34:43 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec 06 06:34:43 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 131 pg[9.1a( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=94/94 les/c/f=95/95/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[94,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:43 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 131 pg[9.1a( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=94/94 les/c/f=95/95/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[94,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:43 compute-0 python3.9[106894]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:34:43 compute-0 sudo[106892]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 06 06:34:43 compute-0 ceph-mon[74339]: osdmap e130: 3 total, 3 up, 3 in
Dec 06 06:34:43 compute-0 ceph-mon[74339]: osdmap e131: 3 total, 3 up, 3 in
Dec 06 06:34:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Dec 06 06:34:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 06 06:34:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:44.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec 06 06:34:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 06 06:34:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec 06 06:34:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec 06 06:34:44 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 132 pg[9.1b( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=132) [0] r=0 lpr=132 pi=[72,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:45 compute-0 ceph-mon[74339]: 9.5 scrub starts
Dec 06 06:34:45 compute-0 ceph-mon[74339]: 9.5 scrub ok
Dec 06 06:34:45 compute-0 ceph-mon[74339]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 06 06:34:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 06 06:34:45 compute-0 ceph-mon[74339]: osdmap e132: 3 total, 3 up, 3 in
Dec 06 06:34:45 compute-0 python3.9[107048]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:34:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:45.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec 06 06:34:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec 06 06:34:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec 06 06:34:45 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 133 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=131/94 les/c/f=132/95/0 sis=133) [0] r=0 lpr=133 pi=[94,133)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:45 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 133 pg[9.1b( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=133) [0]/[2] r=-1 lpr=133 pi=[72,133)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:45 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 133 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=131/94 les/c/f=132/95/0 sis=133) [0] r=0 lpr=133 pi=[94,133)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:45 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 133 pg[9.1b( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=72/72 les/c/f=73/73/0 sis=133) [0]/[2] r=-1 lpr=133 pi=[72,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:45 compute-0 sudo[107198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raixwgjrubomlaonrifyojwhaoaszmkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002885.4048026-228-104416461841267/AnsiballZ_sefcontext.py'
Dec 06 06:34:45 compute-0 sudo[107198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:46 compute-0 python3.9[107200]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 06 06:34:46 compute-0 sudo[107198]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:46.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec 06 06:34:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:47.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:47 compute-0 python3.9[107351]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:34:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec 06 06:34:47 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec 06 06:34:47 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 134 pg[9.1a( v 58'1159 (0'0,58'1159] local-lis/les=133/134 n=5 ec=62/51 lis/c=131/94 les/c/f=132/95/0 sis=133) [0] r=0 lpr=133 pi=[94,133)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:34:47 compute-0 ceph-mon[74339]: osdmap e133: 3 total, 3 up, 3 in
Dec 06 06:34:47 compute-0 ceph-mon[74339]: pgmap v383: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 1 activating+remapped, 1 peering, 303 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2/212 objects misplaced (0.943%)
Dec 06 06:34:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:48.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:48 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 06 06:34:48 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 06 06:34:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 06:34:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec 06 06:34:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec 06 06:34:48 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec 06 06:34:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 135 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=2 ec=62/51 lis/c=133/72 les/c/f=134/73/0 sis=135) [0] r=0 lpr=135 pi=[72,135)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:48 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 135 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=2 ec=62/51 lis/c=133/72 les/c/f=134/73/0 sis=135) [0] r=0 lpr=135 pi=[72,135)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:48 compute-0 ceph-mon[74339]: osdmap e134: 3 total, 3 up, 3 in
Dec 06 06:34:48 compute-0 ceph-mon[74339]: pgmap v385: 305 pgs: 1 activating+remapped, 1 peering, 303 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2/212 objects misplaced (0.943%)
Dec 06 06:34:48 compute-0 ceph-mon[74339]: osdmap e135: 3 total, 3 up, 3 in
Dec 06 06:34:49 compute-0 sudo[107508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpddfgvwijkwzuqdmqhcioqbacciflfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002888.8214743-282-70297067134381/AnsiballZ_dnf.py'
Dec 06 06:34:49 compute-0 sudo[107508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:49.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:49 compute-0 python3.9[107510]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:34:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec 06 06:34:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec 06 06:34:49 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec 06 06:34:49 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 136 pg[9.1b( v 58'1159 (0'0,58'1159] local-lis/les=135/136 n=2 ec=62/51 lis/c=133/72 les/c/f=134/73/0 sis=135) [0] r=0 lpr=135 pi=[72,135)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:34:50 compute-0 ceph-mon[74339]: 9.1a scrub starts
Dec 06 06:34:50 compute-0 ceph-mon[74339]: 9.1a scrub ok
Dec 06 06:34:50 compute-0 ceph-mon[74339]: 9.8 scrub starts
Dec 06 06:34:50 compute-0 ceph-mon[74339]: 9.8 scrub ok
Dec 06 06:34:50 compute-0 ceph-mon[74339]: osdmap e136: 3 total, 3 up, 3 in
Dec 06 06:34:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 1 activating+remapped, 1 peering, 303 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2/212 objects misplaced (0.943%)
Dec 06 06:34:50 compute-0 sudo[107513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:50 compute-0 sudo[107513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:50 compute-0 sudo[107513]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:50 compute-0 sudo[107538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:34:50 compute-0 sudo[107538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:34:50 compute-0 sudo[107538]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:50.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:50 compute-0 sudo[107508]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:51 compute-0 ceph-mon[74339]: pgmap v388: 305 pgs: 1 activating+remapped, 1 peering, 303 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2/212 objects misplaced (0.943%)
Dec 06 06:34:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:51.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:52 compute-0 sudo[107712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urfoqtktrfpmymvqltlnzqnhdccrxwlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002891.2191398-306-176247097259699/AnsiballZ_command.py'
Dec 06 06:34:52 compute-0 sudo[107712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:52 compute-0 python3.9[107714]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:34:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:52.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Dec 06 06:34:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 06 06:34:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec 06 06:34:52 compute-0 sudo[107712]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:53 compute-0 ceph-mon[74339]: 9.18 scrub starts
Dec 06 06:34:53 compute-0 ceph-mon[74339]: 9.18 scrub ok
Dec 06 06:34:53 compute-0 ceph-mon[74339]: 11.5 scrub starts
Dec 06 06:34:53 compute-0 ceph-mon[74339]: 11.5 scrub ok
Dec 06 06:34:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:53.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 06 06:34:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec 06 06:34:53 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec 06 06:34:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:34:53 compute-0 sudo[108002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjelfdaydeusqrssvmazcniciusozmup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002893.1966982-330-67599773889994/AnsiballZ_file.py'
Dec 06 06:34:53 compute-0 sudo[108002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:54 compute-0 python3.9[108004]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 06:34:54 compute-0 sudo[108002]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Dec 06 06:34:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 06 06:34:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec 06 06:34:54 compute-0 ceph-mon[74339]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 06 06:34:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 06 06:34:54 compute-0 ceph-mon[74339]: osdmap e137: 3 total, 3 up, 3 in
Dec 06 06:34:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:54.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:54 compute-0 sshd-session[107852]: Connection reset by authenticating user root 91.202.233.33 port 42820 [preauth]
Dec 06 06:34:55 compute-0 python3.9[108155]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:34:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 06 06:34:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec 06 06:34:55 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec 06 06:34:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:55.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:56 compute-0 sudo[108309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsxhivmbnaupmvpezyptyscqzhwlibmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002895.5637636-378-169064163156447/AnsiballZ_dnf.py'
Dec 06 06:34:56 compute-0 sudo[108309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:56 compute-0 ceph-mon[74339]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 06 06:34:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 06 06:34:56 compute-0 ceph-mon[74339]: osdmap e138: 3 total, 3 up, 3 in
Dec 06 06:34:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec 06 06:34:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Dec 06 06:34:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 06 06:34:56 compute-0 python3.9[108311]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:34:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:56.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec 06 06:34:56 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec 06 06:34:56 compute-0 sshd-session[108156]: Connection reset by authenticating user root 91.202.233.33 port 42822 [preauth]
Dec 06 06:34:57 compute-0 ceph-mon[74339]: 9.9 scrub starts
Dec 06 06:34:57 compute-0 ceph-mon[74339]: 9.9 scrub ok
Dec 06 06:34:57 compute-0 ceph-mon[74339]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 06 06:34:57 compute-0 ceph-mon[74339]: 11.4 scrub starts
Dec 06 06:34:57 compute-0 ceph-mon[74339]: 11.4 scrub ok
Dec 06 06:34:57 compute-0 ceph-mon[74339]: 9.16 scrub starts
Dec 06 06:34:57 compute-0 ceph-mon[74339]: 9.16 scrub ok
Dec 06 06:34:57 compute-0 ceph-mon[74339]: osdmap e139: 3 total, 3 up, 3 in
Dec 06 06:34:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:57.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:57 compute-0 sudo[108309]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec 06 06:34:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 06 06:34:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec 06 06:34:57 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec 06 06:34:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 140 pg[9.1e( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=140) [0] r=0 lpr=140 pi=[82,140)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec 06 06:34:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:34:58 compute-0 sudo[108466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miapqxdexftayqcdcfopvvefdomhgfhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002897.865613-405-226868056669157/AnsiballZ_dnf.py'
Dec 06 06:34:58 compute-0 sudo[108466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:34:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:34:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:34:58.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:34:58 compute-0 python3.9[108468]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:34:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:34:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec 06 06:34:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:34:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec 06 06:34:58 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec 06 06:34:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 141 pg[9.1e( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=141) [0]/[1] r=-1 lpr=141 pi=[82,141)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 141 pg[9.1e( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=82/82 les/c/f=83/83/0 sis=141) [0]/[1] r=-1 lpr=141 pi=[82,141)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:58 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 141 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=107/107 les/c/f=108/108/0 sis=141) [0] r=0 lpr=141 pi=[107,141)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:34:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 06 06:34:58 compute-0 ceph-mon[74339]: osdmap e140: 3 total, 3 up, 3 in
Dec 06 06:34:58 compute-0 ceph-mon[74339]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:34:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 06 06:34:59 compute-0 sshd-session[108314]: Connection reset by authenticating user root 91.202.233.33 port 42848 [preauth]
Dec 06 06:34:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:34:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:34:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:34:59.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:34:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec 06 06:34:59 compute-0 sudo[108466]: pam_unix(sudo:session): session closed for user root
Dec 06 06:34:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec 06 06:34:59 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec 06 06:34:59 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 142 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=107/107 les/c/f=108/108/0 sis=142) [0]/[1] r=-1 lpr=142 pi=[107,142)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:34:59 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 142 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/51 lis/c=107/107 les/c/f=108/108/0 sis=142) [0]/[1] r=-1 lpr=142 pi=[107,142)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 06:34:59 compute-0 ceph-mon[74339]: 11.1 scrub starts
Dec 06 06:34:59 compute-0 ceph-mon[74339]: 11.1 scrub ok
Dec 06 06:34:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 06 06:34:59 compute-0 ceph-mon[74339]: osdmap e141: 3 total, 3 up, 3 in
Dec 06 06:35:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:35:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:00.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:35:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Dec 06 06:35:01 compute-0 sudo[108622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrcmabxafjkjdgbsxrwknvrzeygivjjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002900.7135696-441-148384493121568/AnsiballZ_stat.py'
Dec 06 06:35:01 compute-0 sudo[108622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:01 compute-0 python3.9[108624]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:35:01 compute-0 sudo[108622]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:01.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Dec 06 06:35:01 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Dec 06 06:35:01 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 143 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=141/82 les/c/f=142/83/0 sis=143) [0] r=0 lpr=143 pi=[82,143)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:35:01 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 143 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=141/82 les/c/f=142/83/0 sis=143) [0] r=0 lpr=143 pi=[82,143)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:35:02 compute-0 ceph-mon[74339]: osdmap e142: 3 total, 3 up, 3 in
Dec 06 06:35:02 compute-0 ceph-mon[74339]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:02 compute-0 sshd-session[108470]: Connection reset by authenticating user root 91.202.233.33 port 42864 [preauth]
Dec 06 06:35:02 compute-0 sudo[108776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmooemnxrbukjonrtmxouzjexdwlzwbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002901.4943004-465-70214288223560/AnsiballZ_slurp.py'
Dec 06 06:35:02 compute-0 sudo[108776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:02 compute-0 python3.9[108778]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 06 06:35:02 compute-0 sudo[108776]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:35:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:02.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:35:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Dec 06 06:35:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Dec 06 06:35:02 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Dec 06 06:35:02 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 144 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=142/107 les/c/f=143/108/0 sis=144) [0] r=0 lpr=144 pi=[107,144)/1 luod=0'0 crt=58'1159 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 06:35:02 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 144 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=0/0 n=5 ec=62/51 lis/c=142/107 les/c/f=143/108/0 sis=144) [0] r=0 lpr=144 pi=[107,144)/1 crt=58'1159 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 06:35:02 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 144 pg[9.1e( v 58'1159 (0'0,58'1159] local-lis/les=143/144 n=5 ec=62/51 lis/c=141/82 les/c/f=142/83/0 sis=143) [0] r=0 lpr=143 pi=[82,143)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:35:03 compute-0 ceph-mon[74339]: 8.10 deep-scrub starts
Dec 06 06:35:03 compute-0 ceph-mon[74339]: 8.10 deep-scrub ok
Dec 06 06:35:03 compute-0 ceph-mon[74339]: 9.1d scrub starts
Dec 06 06:35:03 compute-0 ceph-mon[74339]: 9.1d scrub ok
Dec 06 06:35:03 compute-0 ceph-mon[74339]: 11.1a deep-scrub starts
Dec 06 06:35:03 compute-0 ceph-mon[74339]: 11.1a deep-scrub ok
Dec 06 06:35:03 compute-0 ceph-mon[74339]: osdmap e143: 3 total, 3 up, 3 in
Dec 06 06:35:03 compute-0 ceph-mon[74339]: pgmap v401: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:03 compute-0 ceph-mon[74339]: 8.19 scrub starts
Dec 06 06:35:03 compute-0 ceph-mon[74339]: 8.19 scrub ok
Dec 06 06:35:03 compute-0 ceph-mon[74339]: osdmap e144: 3 total, 3 up, 3 in
Dec 06 06:35:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:03.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:03 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 06 06:35:03 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 06 06:35:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Dec 06 06:35:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Dec 06 06:35:04 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:35:04 compute-0 ceph-osd[84884]: osd.0 pg_epoch: 145 pg[9.1f( v 58'1159 (0'0,58'1159] local-lis/les=144/145 n=5 ec=62/51 lis/c=142/107 les/c/f=143/108/0 sis=144) [0] r=0 lpr=144 pi=[107,144)/1 crt=58'1159 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 06:35:04 compute-0 ceph-mon[74339]: 9.2 scrub starts
Dec 06 06:35:04 compute-0 ceph-mon[74339]: 9.2 scrub ok
Dec 06 06:35:04 compute-0 ceph-mon[74339]: 9.1e scrub starts
Dec 06 06:35:04 compute-0 ceph-mon[74339]: 9.1e scrub ok
Dec 06 06:35:04 compute-0 ceph-mon[74339]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:35:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:04 compute-0 sshd-session[106041]: Connection closed by 192.168.122.30 port 32994
Dec 06 06:35:04 compute-0 sshd-session[106038]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:35:04 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Dec 06 06:35:04 compute-0 systemd[1]: session-36.scope: Consumed 18.356s CPU time.
Dec 06 06:35:04 compute-0 systemd-logind[798]: Session 36 logged out. Waiting for processes to exit.
Dec 06 06:35:04 compute-0 systemd-logind[798]: Removed session 36.
Dec 06 06:35:04 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 06 06:35:04 compute-0 ceph-osd[84884]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 06 06:35:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:35:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:04.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:35:04 compute-0 sshd-session[108780]: Connection reset by authenticating user root 91.202.233.33 port 44996 [preauth]
Dec 06 06:35:05 compute-0 ceph-mon[74339]: pgmap v404: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 457 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:05 compute-0 ceph-mon[74339]: 9.1f scrub starts
Dec 06 06:35:05 compute-0 ceph-mon[74339]: 9.1f scrub ok
Dec 06 06:35:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:05.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:35:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:06.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:35:07 compute-0 ceph-mon[74339]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:07 compute-0 ceph-mon[74339]: 9.e scrub starts
Dec 06 06:35:07 compute-0 ceph-mon[74339]: 9.e scrub ok
Dec 06 06:35:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:07.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:08.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:08 compute-0 ceph-mon[74339]: 9.6 scrub starts
Dec 06 06:35:08 compute-0 ceph-mon[74339]: 9.6 scrub ok
Dec 06 06:35:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:09.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:09 compute-0 ceph-mon[74339]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:10 compute-0 sudo[108810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:10 compute-0 sudo[108810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:10 compute-0 sudo[108810]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:10 compute-0 sudo[108835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:10 compute-0 sudo[108835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:10 compute-0 sudo[108835]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:10.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:10 compute-0 ceph-mon[74339]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:11.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:35:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:12.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:35:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:35:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:35:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:35:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:35:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:35:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:35:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:13.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:13 compute-0 ceph-mon[74339]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:13 compute-0 ceph-mon[74339]: 9.a scrub starts
Dec 06 06:35:13 compute-0 ceph-mon[74339]: 9.a scrub ok
Dec 06 06:35:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:35:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:14.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:35:14 compute-0 ceph-mon[74339]: 9.d scrub starts
Dec 06 06:35:14 compute-0 ceph-mon[74339]: 9.d scrub ok
Dec 06 06:35:14 compute-0 ceph-mon[74339]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:15.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:16 compute-0 sshd-session[108863]: Accepted publickey for zuul from 192.168.122.30 port 37644 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:35:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:16.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:16 compute-0 systemd-logind[798]: New session 37 of user zuul.
Dec 06 06:35:16 compute-0 systemd[1]: Started Session 37 of User zuul.
Dec 06 06:35:16 compute-0 sshd-session[108863]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:35:16 compute-0 ceph-mon[74339]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:35:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:17.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:35:17 compute-0 python3.9[109016]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:35:18
Dec 06 06:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', 'backups']
Dec 06 06:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:35:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:18.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:18 compute-0 python3.9[109171]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:35:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:19.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:20 compute-0 ceph-mon[74339]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:20 compute-0 python3.9[109365]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:35:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:35:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:20.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:35:20 compute-0 sshd-session[108866]: Connection closed by 192.168.122.30 port 37644
Dec 06 06:35:20 compute-0 sshd-session[108863]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:35:20 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Dec 06 06:35:20 compute-0 systemd[1]: session-37.scope: Consumed 2.580s CPU time.
Dec 06 06:35:20 compute-0 systemd-logind[798]: Session 37 logged out. Waiting for processes to exit.
Dec 06 06:35:20 compute-0 systemd-logind[798]: Removed session 37.
Dec 06 06:35:21 compute-0 ceph-mon[74339]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:21.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:35:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 1972 writes, 9179 keys, 1972 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 1972 writes, 1972 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1972 writes, 9179 keys, 1972 commit groups, 1.0 writes per commit group, ingest: 11.83 MB, 0.02 MB/s
                                           Interval WAL: 1972 writes, 1972 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     77.6      0.11              0.03         2    0.053       0      0       0.0       0.0
                                             L6      1/0    8.15 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0    130.6    130.0      0.06              0.02         1    0.063    3750    293       0.0       0.0
                                            Sum      1/0    8.15 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     48.7     97.1      0.17              0.05         3    0.056    3750    293       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0     49.8     99.1      0.16              0.05         2    0.082    3750    293       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    130.6    130.0      0.06              0.02         1    0.063    3750    293       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     80.1      0.10              0.03         1    0.102       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.01 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.03 MB/s write, 0.01 GB read, 0.01 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 389.31 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 8.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(23,330.33 KB,0.106114%) FilterBlock(4,18.48 KB,0.00593788%) IndexBlock(4,40.50 KB,0.0130101%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 06:35:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:22.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:22 compute-0 ceph-mon[74339]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:35:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:35:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:35:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:35:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:35:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:23.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:35:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:24 compute-0 ceph-mon[74339]: 9.f scrub starts
Dec 06 06:35:24 compute-0 ceph-mon[74339]: 9.f scrub ok
Dec 06 06:35:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:24.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:35:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:35:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:25.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:25 compute-0 ceph-mon[74339]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:26.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:26 compute-0 sshd-session[109394]: Accepted publickey for zuul from 192.168.122.30 port 38658 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:35:26 compute-0 systemd-logind[798]: New session 38 of user zuul.
Dec 06 06:35:26 compute-0 systemd[1]: Started Session 38 of User zuul.
Dec 06 06:35:26 compute-0 sshd-session[109394]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:35:26 compute-0 ceph-mon[74339]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:27.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:27 compute-0 python3.9[109547]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:35:28 compute-0 sudo[109558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:28 compute-0 sudo[109558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:28 compute-0 sudo[109558]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:28 compute-0 sudo[109601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:35:28 compute-0 sudo[109601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:28 compute-0 sudo[109601]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:28 compute-0 ceph-mon[74339]: 9.15 scrub starts
Dec 06 06:35:28 compute-0 ceph-mon[74339]: 9.15 scrub ok
Dec 06 06:35:28 compute-0 sudo[109629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:28 compute-0 sudo[109629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:28 compute-0 sudo[109629]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:28 compute-0 sudo[109675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:35:28 compute-0 sudo[109675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:28.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:28 compute-0 podman[109872]: 2025-12-06 06:35:28.673353001 +0000 UTC m=+0.065220361 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:35:28 compute-0 podman[109872]: 2025-12-06 06:35:28.777465693 +0000 UTC m=+0.169333023 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:35:28 compute-0 python3.9[109842]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:35:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:35:29 compute-0 ceph-mon[74339]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:29.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:29 compute-0 podman[110120]: 2025-12-06 06:35:29.391157423 +0000 UTC m=+0.058106518 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:35:29 compute-0 podman[110120]: 2025-12-06 06:35:29.428428903 +0000 UTC m=+0.095377978 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:35:29 compute-0 sudo[110233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afxlpegvlzziotfojpnxyjkxpgtbvkgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002929.1836712-85-82260905230363/AnsiballZ_setup.py'
Dec 06 06:35:29 compute-0 sudo[110233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:29 compute-0 podman[110250]: 2025-12-06 06:35:29.662365308 +0000 UTC m=+0.059700708 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., name=keepalived, build-date=2023-02-22T09:23:20, version=2.2.4, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.component=keepalived-container, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec 06 06:35:29 compute-0 podman[110250]: 2025-12-06 06:35:29.677464598 +0000 UTC m=+0.074799978 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., description=keepalived for Ceph, name=keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, version=2.2.4, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container)
Dec 06 06:35:29 compute-0 sudo[109675]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:35:29 compute-0 python3.9[110242]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:35:30 compute-0 sudo[110233]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:35:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:35:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:30 compute-0 sudo[110289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:30 compute-0 sudo[110289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:30 compute-0 sudo[110289]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:30 compute-0 sudo[110314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:35:30 compute-0 sudo[110314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:30 compute-0 sudo[110314]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:30 compute-0 sudo[110343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:30 compute-0 sudo[110343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:30 compute-0 sudo[110343]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:30 compute-0 sudo[110387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:35:30 compute-0 sudo[110387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:35:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:35:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000051s ======
Dec 06 06:35:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:30.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Dec 06 06:35:30 compute-0 sudo[110462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uewbtoemstasylsvwwmiaiyqtvpthawr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002929.1836712-85-82260905230363/AnsiballZ_dnf.py'
Dec 06 06:35:30 compute-0 sudo[110462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:30 compute-0 sudo[110477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:30 compute-0 sudo[110477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:30 compute-0 sudo[110477]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:30 compute-0 sudo[110505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:30 compute-0 sudo[110505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:30 compute-0 sudo[110505]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:30 compute-0 python3.9[110466]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:35:30 compute-0 sudo[110387]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-0 ceph-mon[74339]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:35:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:35:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:35:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:35:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:35:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:31.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5fcccfb0-7526-470b-a55f-198d66f4b527 does not exist
Dec 06 06:35:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 14f5bf45-4dad-4a80-a088-32736817f4ac does not exist
Dec 06 06:35:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9bc9c691-34c2-490e-94ef-aa954e85bfff does not exist
Dec 06 06:35:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:35:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:35:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:35:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:35:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:35:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:35:31 compute-0 sudo[110545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:31 compute-0 sudo[110545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:31 compute-0 sudo[110545]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:31 compute-0 sudo[110570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:35:31 compute-0 sudo[110570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:31 compute-0 sudo[110570]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:31 compute-0 sudo[110595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:31 compute-0 sudo[110595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:31 compute-0 sudo[110595]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:31 compute-0 sudo[110620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:35:31 compute-0 sudo[110620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:31 compute-0 podman[110685]: 2025-12-06 06:35:31.994498427 +0000 UTC m=+0.049255200 container create a1e806fcef22c91b5098173f8a229271c11d7779e2e12b56ce824873eabf9454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chebyshev, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:35:32 compute-0 systemd[1]: Started libpod-conmon-a1e806fcef22c91b5098173f8a229271c11d7779e2e12b56ce824873eabf9454.scope.
Dec 06 06:35:32 compute-0 podman[110685]: 2025-12-06 06:35:31.969672587 +0000 UTC m=+0.024429410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:35:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:35:32 compute-0 podman[110685]: 2025-12-06 06:35:32.109160321 +0000 UTC m=+0.163917084 container init a1e806fcef22c91b5098173f8a229271c11d7779e2e12b56ce824873eabf9454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 06:35:32 compute-0 podman[110685]: 2025-12-06 06:35:32.119847046 +0000 UTC m=+0.174603809 container start a1e806fcef22c91b5098173f8a229271c11d7779e2e12b56ce824873eabf9454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 06:35:32 compute-0 nice_chebyshev[110701]: 167 167
Dec 06 06:35:32 compute-0 podman[110685]: 2025-12-06 06:35:32.133084847 +0000 UTC m=+0.187841610 container attach a1e806fcef22c91b5098173f8a229271c11d7779e2e12b56ce824873eabf9454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chebyshev, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 06:35:32 compute-0 systemd[1]: libpod-a1e806fcef22c91b5098173f8a229271c11d7779e2e12b56ce824873eabf9454.scope: Deactivated successfully.
Dec 06 06:35:32 compute-0 podman[110685]: 2025-12-06 06:35:32.139483772 +0000 UTC m=+0.194240545 container died a1e806fcef22c91b5098173f8a229271c11d7779e2e12b56ce824873eabf9454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chebyshev, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 06:35:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:35:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:35:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:35:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:35:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:35:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aa5e1923384aaf9beae06adca97b2a12b1c398c263d0a3e9b48b964ab253380-merged.mount: Deactivated successfully.
Dec 06 06:35:32 compute-0 podman[110685]: 2025-12-06 06:35:32.317419295 +0000 UTC m=+0.372176058 container remove a1e806fcef22c91b5098173f8a229271c11d7779e2e12b56ce824873eabf9454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:35:32 compute-0 systemd[1]: libpod-conmon-a1e806fcef22c91b5098173f8a229271c11d7779e2e12b56ce824873eabf9454.scope: Deactivated successfully.
Dec 06 06:35:32 compute-0 podman[110726]: 2025-12-06 06:35:32.488206905 +0000 UTC m=+0.045836072 container create e5ac339d67825a5aa0f4bde548e0fa4d0353630084412ee253489c645cae8717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:35:32 compute-0 sudo[110462]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:32 compute-0 systemd[1]: Started libpod-conmon-e5ac339d67825a5aa0f4bde548e0fa4d0353630084412ee253489c645cae8717.scope.
Dec 06 06:35:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:32.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:35:32 compute-0 podman[110726]: 2025-12-06 06:35:32.466643859 +0000 UTC m=+0.024273046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb08994643d9ad5c3cb725929e463d65a7f7c6d03c1c167212dd16f6c71eb23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb08994643d9ad5c3cb725929e463d65a7f7c6d03c1c167212dd16f6c71eb23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb08994643d9ad5c3cb725929e463d65a7f7c6d03c1c167212dd16f6c71eb23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb08994643d9ad5c3cb725929e463d65a7f7c6d03c1c167212dd16f6c71eb23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbb08994643d9ad5c3cb725929e463d65a7f7c6d03c1c167212dd16f6c71eb23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:32 compute-0 podman[110726]: 2025-12-06 06:35:32.583983172 +0000 UTC m=+0.141612429 container init e5ac339d67825a5aa0f4bde548e0fa4d0353630084412ee253489c645cae8717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 06:35:32 compute-0 podman[110726]: 2025-12-06 06:35:32.591485976 +0000 UTC m=+0.149115143 container start e5ac339d67825a5aa0f4bde548e0fa4d0353630084412ee253489c645cae8717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:35:32 compute-0 podman[110726]: 2025-12-06 06:35:32.595458167 +0000 UTC m=+0.153087364 container attach e5ac339d67825a5aa0f4bde548e0fa4d0353630084412ee253489c645cae8717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 06:35:33 compute-0 sudo[110897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ociijydomyhkweekpygwytrqjrwfmxqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002932.7054636-121-223728178944944/AnsiballZ_setup.py'
Dec 06 06:35:33 compute-0 sudo[110897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:33.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:33 compute-0 python3.9[110899]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:35:33 compute-0 modest_goldberg[110744]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:35:33 compute-0 modest_goldberg[110744]: --> relative data size: 1.0
Dec 06 06:35:33 compute-0 modest_goldberg[110744]: --> All data devices are unavailable
Dec 06 06:35:33 compute-0 systemd[1]: libpod-e5ac339d67825a5aa0f4bde548e0fa4d0353630084412ee253489c645cae8717.scope: Deactivated successfully.
Dec 06 06:35:33 compute-0 podman[110726]: 2025-12-06 06:35:33.451223123 +0000 UTC m=+1.008852300 container died e5ac339d67825a5aa0f4bde548e0fa4d0353630084412ee253489c645cae8717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:35:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbb08994643d9ad5c3cb725929e463d65a7f7c6d03c1c167212dd16f6c71eb23-merged.mount: Deactivated successfully.
Dec 06 06:35:33 compute-0 podman[110726]: 2025-12-06 06:35:33.512567474 +0000 UTC m=+1.070196641 container remove e5ac339d67825a5aa0f4bde548e0fa4d0353630084412ee253489c645cae8717 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:35:33 compute-0 systemd[1]: libpod-conmon-e5ac339d67825a5aa0f4bde548e0fa4d0353630084412ee253489c645cae8717.scope: Deactivated successfully.
Dec 06 06:35:33 compute-0 sudo[110620]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:33 compute-0 sudo[110944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:33 compute-0 sudo[110944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:33 compute-0 sudo[110944]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:33 compute-0 sudo[110978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:35:33 compute-0 sudo[110978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:33 compute-0 sudo[110978]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:33 compute-0 sudo[110897]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:33 compute-0 sudo[111012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:33 compute-0 sudo[111012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:33 compute-0 sudo[111012]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:33 compute-0 sudo[111039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:35:33 compute-0 sudo[111039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:34 compute-0 podman[111180]: 2025-12-06 06:35:34.171491628 +0000 UTC m=+0.041823959 container create 9c95ef76f64ad117a480771b5d15ceb77772eb58fda605cb18a4683d2fd298af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:35:34 compute-0 systemd[1]: Started libpod-conmon-9c95ef76f64ad117a480771b5d15ceb77772eb58fda605cb18a4683d2fd298af.scope.
Dec 06 06:35:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:35:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:34 compute-0 podman[111180]: 2025-12-06 06:35:34.153277208 +0000 UTC m=+0.023609559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:35:34 compute-0 podman[111180]: 2025-12-06 06:35:34.255478002 +0000 UTC m=+0.125810363 container init 9c95ef76f64ad117a480771b5d15ceb77772eb58fda605cb18a4683d2fd298af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:35:34 compute-0 podman[111180]: 2025-12-06 06:35:34.263690093 +0000 UTC m=+0.134022424 container start 9c95ef76f64ad117a480771b5d15ceb77772eb58fda605cb18a4683d2fd298af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 06:35:34 compute-0 podman[111180]: 2025-12-06 06:35:34.267769378 +0000 UTC m=+0.138101719 container attach 9c95ef76f64ad117a480771b5d15ceb77772eb58fda605cb18a4683d2fd298af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 06:35:34 compute-0 serene_wu[111197]: 167 167
Dec 06 06:35:34 compute-0 systemd[1]: libpod-9c95ef76f64ad117a480771b5d15ceb77772eb58fda605cb18a4683d2fd298af.scope: Deactivated successfully.
Dec 06 06:35:34 compute-0 podman[111180]: 2025-12-06 06:35:34.271073503 +0000 UTC m=+0.141405844 container died 9c95ef76f64ad117a480771b5d15ceb77772eb58fda605cb18a4683d2fd298af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 06:35:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b7d099803ef4f0de457f7ff2f3aa8cb3877639c32e0f30ecc77dec070a62867-merged.mount: Deactivated successfully.
Dec 06 06:35:34 compute-0 podman[111180]: 2025-12-06 06:35:34.308469037 +0000 UTC m=+0.178801368 container remove 9c95ef76f64ad117a480771b5d15ceb77772eb58fda605cb18a4683d2fd298af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wu, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:35:34 compute-0 systemd[1]: libpod-conmon-9c95ef76f64ad117a480771b5d15ceb77772eb58fda605cb18a4683d2fd298af.scope: Deactivated successfully.
Dec 06 06:35:34 compute-0 podman[111253]: 2025-12-06 06:35:34.468859888 +0000 UTC m=+0.040615177 container create 01ff0554dcea12b010388edf1f6f7b4b7ea64f967609e0f5e21840d7cecfe19d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:35:34 compute-0 systemd[1]: Started libpod-conmon-01ff0554dcea12b010388edf1f6f7b4b7ea64f967609e0f5e21840d7cecfe19d.scope.
Dec 06 06:35:34 compute-0 sudo[111310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kihhrwzhrpljitcmqufmmivwmjzsgxzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002934.041355-154-14359384955987/AnsiballZ_file.py'
Dec 06 06:35:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:35:34 compute-0 podman[111253]: 2025-12-06 06:35:34.451375308 +0000 UTC m=+0.023130617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:35:34 compute-0 sudo[111310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b67db86d3875d5f3ea05fa10e910786d4035de6f8ffeba539509ceb5568465/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b67db86d3875d5f3ea05fa10e910786d4035de6f8ffeba539509ceb5568465/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b67db86d3875d5f3ea05fa10e910786d4035de6f8ffeba539509ceb5568465/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b67db86d3875d5f3ea05fa10e910786d4035de6f8ffeba539509ceb5568465/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:34.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:34 compute-0 podman[111253]: 2025-12-06 06:35:34.572281453 +0000 UTC m=+0.144036792 container init 01ff0554dcea12b010388edf1f6f7b4b7ea64f967609e0f5e21840d7cecfe19d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cartwright, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:35:34 compute-0 podman[111253]: 2025-12-06 06:35:34.58189868 +0000 UTC m=+0.153653969 container start 01ff0554dcea12b010388edf1f6f7b4b7ea64f967609e0f5e21840d7cecfe19d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:35:34 compute-0 podman[111253]: 2025-12-06 06:35:34.585691088 +0000 UTC m=+0.157446427 container attach 01ff0554dcea12b010388edf1f6f7b4b7ea64f967609e0f5e21840d7cecfe19d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cartwright, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 06:35:34 compute-0 ceph-mon[74339]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:34 compute-0 python3.9[111315]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:35:34 compute-0 sudo[111310]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:35.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:35 compute-0 sudo[111471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnbiuaxzgsgjtzbhpkavkzpdhxzqdwyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002934.9387028-178-234174945303500/AnsiballZ_command.py'
Dec 06 06:35:35 compute-0 sudo[111471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:35 compute-0 eager_cartwright[111311]: {
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:     "0": [
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:         {
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             "devices": [
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "/dev/loop3"
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             ],
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             "lv_name": "ceph_lv0",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             "lv_size": "7511998464",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             "name": "ceph_lv0",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             "tags": {
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "ceph.cluster_name": "ceph",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "ceph.crush_device_class": "",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "ceph.encrypted": "0",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "ceph.osd_id": "0",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "ceph.type": "block",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:                 "ceph.vdo": "0"
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             },
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             "type": "block",
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:             "vg_name": "ceph_vg0"
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:         }
Dec 06 06:35:35 compute-0 eager_cartwright[111311]:     ]
Dec 06 06:35:35 compute-0 eager_cartwright[111311]: }
Dec 06 06:35:35 compute-0 systemd[1]: libpod-01ff0554dcea12b010388edf1f6f7b4b7ea64f967609e0f5e21840d7cecfe19d.scope: Deactivated successfully.
Dec 06 06:35:35 compute-0 podman[111253]: 2025-12-06 06:35:35.4522249 +0000 UTC m=+1.023980189 container died 01ff0554dcea12b010388edf1f6f7b4b7ea64f967609e0f5e21840d7cecfe19d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 06:35:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3b67db86d3875d5f3ea05fa10e910786d4035de6f8ffeba539509ceb5568465-merged.mount: Deactivated successfully.
Dec 06 06:35:35 compute-0 podman[111253]: 2025-12-06 06:35:35.517485071 +0000 UTC m=+1.089240360 container remove 01ff0554dcea12b010388edf1f6f7b4b7ea64f967609e0f5e21840d7cecfe19d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:35:35 compute-0 systemd[1]: libpod-conmon-01ff0554dcea12b010388edf1f6f7b4b7ea64f967609e0f5e21840d7cecfe19d.scope: Deactivated successfully.
Dec 06 06:35:35 compute-0 sudo[111039]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:35 compute-0 python3.9[111473]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:35:35 compute-0 sudo[111485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:35 compute-0 sudo[111485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:35 compute-0 sudo[111485]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:35 compute-0 sudo[111471]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:35 compute-0 sudo[111516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:35:35 compute-0 sudo[111516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:35 compute-0 sudo[111516]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:35 compute-0 sudo[111547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:35 compute-0 sudo[111547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:35 compute-0 sudo[111547]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:35 compute-0 ceph-mon[74339]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:35 compute-0 sudo[111596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:35:35 compute-0 sudo[111596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:36 compute-0 podman[111714]: 2025-12-06 06:35:36.179196898 +0000 UTC m=+0.044278112 container create a44591073dfa7a710f1080e09286b5e1160fa8f1122262e461e65f6ec02d630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_knuth, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 06:35:36 compute-0 systemd[1]: Started libpod-conmon-a44591073dfa7a710f1080e09286b5e1160fa8f1122262e461e65f6ec02d630a.scope.
Dec 06 06:35:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:36 compute-0 podman[111714]: 2025-12-06 06:35:36.161366048 +0000 UTC m=+0.026447282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:35:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:35:36 compute-0 podman[111714]: 2025-12-06 06:35:36.278037634 +0000 UTC m=+0.143118868 container init a44591073dfa7a710f1080e09286b5e1160fa8f1122262e461e65f6ec02d630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 06:35:36 compute-0 podman[111714]: 2025-12-06 06:35:36.288136574 +0000 UTC m=+0.153217788 container start a44591073dfa7a710f1080e09286b5e1160fa8f1122262e461e65f6ec02d630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 06:35:36 compute-0 podman[111714]: 2025-12-06 06:35:36.291886701 +0000 UTC m=+0.156967925 container attach a44591073dfa7a710f1080e09286b5e1160fa8f1122262e461e65f6ec02d630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_knuth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:35:36 compute-0 systemd[1]: libpod-a44591073dfa7a710f1080e09286b5e1160fa8f1122262e461e65f6ec02d630a.scope: Deactivated successfully.
Dec 06 06:35:36 compute-0 trusting_knuth[111735]: 167 167
Dec 06 06:35:36 compute-0 conmon[111735]: conmon a44591073dfa7a710f10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a44591073dfa7a710f1080e09286b5e1160fa8f1122262e461e65f6ec02d630a.scope/container/memory.events
Dec 06 06:35:36 compute-0 podman[111714]: 2025-12-06 06:35:36.297263129 +0000 UTC m=+0.162344363 container died a44591073dfa7a710f1080e09286b5e1160fa8f1122262e461e65f6ec02d630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_knuth, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:35:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bda04c1e53281a44be5a768fb2ef13f0777f68bf9d75a65ae0bd9070b110836-merged.mount: Deactivated successfully.
Dec 06 06:35:36 compute-0 podman[111714]: 2025-12-06 06:35:36.33688878 +0000 UTC m=+0.201969994 container remove a44591073dfa7a710f1080e09286b5e1160fa8f1122262e461e65f6ec02d630a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_knuth, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:35:36 compute-0 systemd[1]: libpod-conmon-a44591073dfa7a710f1080e09286b5e1160fa8f1122262e461e65f6ec02d630a.scope: Deactivated successfully.
Dec 06 06:35:36 compute-0 sudo[111822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cutukatcegrajbsqexotacoemvnrxkmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002935.8891444-202-205548778012272/AnsiballZ_stat.py'
Dec 06 06:35:36 compute-0 sudo[111822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:36 compute-0 podman[111830]: 2025-12-06 06:35:36.467479964 +0000 UTC m=+0.022912812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:35:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:36 compute-0 python3.9[111825]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:35:36 compute-0 podman[111830]: 2025-12-06 06:35:36.600314836 +0000 UTC m=+0.155747664 container create 30bea4c7e3a1411c7c2602d98c2454277df68cf48e126699f6dcd15ada7aa486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 06:35:36 compute-0 sudo[111822]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:36 compute-0 systemd[1]: Started libpod-conmon-30bea4c7e3a1411c7c2602d98c2454277df68cf48e126699f6dcd15ada7aa486.scope.
Dec 06 06:35:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c589c2aecf4e4e7ab0253be85aa9fe03b15ec0866cd34a3f9d64d3a23a4fdfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c589c2aecf4e4e7ab0253be85aa9fe03b15ec0866cd34a3f9d64d3a23a4fdfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c589c2aecf4e4e7ab0253be85aa9fe03b15ec0866cd34a3f9d64d3a23a4fdfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c589c2aecf4e4e7ab0253be85aa9fe03b15ec0866cd34a3f9d64d3a23a4fdfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:35:36 compute-0 podman[111830]: 2025-12-06 06:35:36.674223879 +0000 UTC m=+0.229656717 container init 30bea4c7e3a1411c7c2602d98c2454277df68cf48e126699f6dcd15ada7aa486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:35:36 compute-0 podman[111830]: 2025-12-06 06:35:36.68239246 +0000 UTC m=+0.237825288 container start 30bea4c7e3a1411c7c2602d98c2454277df68cf48e126699f6dcd15ada7aa486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:35:36 compute-0 podman[111830]: 2025-12-06 06:35:36.686319962 +0000 UTC m=+0.241752800 container attach 30bea4c7e3a1411c7c2602d98c2454277df68cf48e126699f6dcd15ada7aa486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:35:36 compute-0 sudo[111928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjihiadjcoqigwcjvbvmlebfvsgvnsnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002935.8891444-202-205548778012272/AnsiballZ_file.py'
Dec 06 06:35:36 compute-0 sudo[111928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:36 compute-0 ceph-mon[74339]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:37 compute-0 python3.9[111930]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:35:37 compute-0 sudo[111928]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:35:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:37.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:35:37 compute-0 nice_brattain[111850]: {
Dec 06 06:35:37 compute-0 nice_brattain[111850]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:35:37 compute-0 nice_brattain[111850]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:35:37 compute-0 nice_brattain[111850]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:35:37 compute-0 nice_brattain[111850]:         "osd_id": 0,
Dec 06 06:35:37 compute-0 nice_brattain[111850]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:35:37 compute-0 nice_brattain[111850]:         "type": "bluestore"
Dec 06 06:35:37 compute-0 nice_brattain[111850]:     }
Dec 06 06:35:37 compute-0 nice_brattain[111850]: }
Dec 06 06:35:37 compute-0 systemd[1]: libpod-30bea4c7e3a1411c7c2602d98c2454277df68cf48e126699f6dcd15ada7aa486.scope: Deactivated successfully.
Dec 06 06:35:37 compute-0 podman[111830]: 2025-12-06 06:35:37.584359725 +0000 UTC m=+1.139792583 container died 30bea4c7e3a1411c7c2602d98c2454277df68cf48e126699f6dcd15ada7aa486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:35:37 compute-0 sudo[112096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irsknccjveaqdvbizgoqooicrjhykwhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002937.2772703-238-263392702009024/AnsiballZ_stat.py'
Dec 06 06:35:37 compute-0 sudo[112096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c589c2aecf4e4e7ab0253be85aa9fe03b15ec0866cd34a3f9d64d3a23a4fdfc-merged.mount: Deactivated successfully.
Dec 06 06:35:37 compute-0 podman[111830]: 2025-12-06 06:35:37.643215881 +0000 UTC m=+1.198648709 container remove 30bea4c7e3a1411c7c2602d98c2454277df68cf48e126699f6dcd15ada7aa486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_brattain, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 06:35:37 compute-0 systemd[1]: libpod-conmon-30bea4c7e3a1411c7c2602d98c2454277df68cf48e126699f6dcd15ada7aa486.scope: Deactivated successfully.
Dec 06 06:35:37 compute-0 sudo[111596]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:35:37 compute-0 python3.9[112099]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:35:37 compute-0 sudo[112096]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:38 compute-0 sudo[112187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dookuzkpcwizwzaxlksniyonzvjygrcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002937.2772703-238-263392702009024/AnsiballZ_file.py'
Dec 06 06:35:38 compute-0 sudo[112187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:38 compute-0 python3.9[112189]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:35:38 compute-0 sudo[112187]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:38.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:39 compute-0 sudo[112340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pefpviingtliobharrpbobpoefkbjeix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002938.6572113-277-53848578111773/AnsiballZ_ini_file.py'
Dec 06 06:35:39 compute-0 sudo[112340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:39 compute-0 python3.9[112342]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:35:39 compute-0 sudo[112340]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:35:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:39.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:35:39 compute-0 sudo[112492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mosehhthynslkcgdxxtqvmkjoxgmtfir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002939.4222686-277-98077811479231/AnsiballZ_ini_file.py'
Dec 06 06:35:39 compute-0 sudo[112492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:39 compute-0 python3.9[112494]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:35:39 compute-0 ceph-mon[74339]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:39 compute-0 sudo[112492]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:35:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c61935d4-6bd4-4365-87b0-5f4a167d9322 does not exist
Dec 06 06:35:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f5b5a039-ee4d-47a2-95a2-da3fe77ce9da does not exist
Dec 06 06:35:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 84ce083f-9149-434f-a5b4-5876f789074e does not exist
Dec 06 06:35:40 compute-0 sudo[112568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:40 compute-0 sudo[112568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:40 compute-0 sudo[112568]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:40 compute-0 sudo[112617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:35:40 compute-0 sudo[112617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:40 compute-0 sudo[112617]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:40 compute-0 sudo[112695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmbnsedlcxncbpxzwxejjgxlvukqnafv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002940.0767705-277-251519051726492/AnsiballZ_ini_file.py'
Dec 06 06:35:40 compute-0 sudo[112695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:40 compute-0 python3.9[112697]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:35:40 compute-0 sudo[112695]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:40.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:35:40 compute-0 ceph-mon[74339]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:41 compute-0 sudo[112847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhtqzjrqisxytmiidisspheohjeteokb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002940.7019227-277-62652796964039/AnsiballZ_ini_file.py'
Dec 06 06:35:41 compute-0 sudo[112847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:41 compute-0 python3.9[112849]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:35:41 compute-0 sudo[112847]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:41.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:41 compute-0 sudo[112999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dndvjyzkacypsutcuptnvtfiylcynwgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002941.6511536-370-58835500146796/AnsiballZ_dnf.py'
Dec 06 06:35:41 compute-0 sudo[112999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:42 compute-0 python3.9[113001]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:35:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:42.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:35:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:35:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:35:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:35:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:35:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:35:43 compute-0 ceph-mon[74339]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:43.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:43 compute-0 sudo[112999]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:44 compute-0 sudo[113154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yudncxmyjeerdahcbgyqicmjunkcbbnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002944.1828306-403-209620419791910/AnsiballZ_setup.py'
Dec 06 06:35:44 compute-0 sudo[113154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:44.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:44 compute-0 python3.9[113156]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:35:44 compute-0 sudo[113154]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:45.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:45 compute-0 ceph-mon[74339]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:45 compute-0 sudo[113308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wolvyrbuozvmwcvatkfnbukgzywmcyqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002945.1904514-427-240364523928464/AnsiballZ_stat.py'
Dec 06 06:35:45 compute-0 sudo[113308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:45 compute-0 python3.9[113310]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:35:45 compute-0 sudo[113308]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:46 compute-0 sudo[113462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbwvesmmzvblkmgceoqevnxngduqtxmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002945.9905274-454-176259542137049/AnsiballZ_stat.py'
Dec 06 06:35:46 compute-0 sudo[113462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:46 compute-0 python3.9[113464]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:35:46 compute-0 sudo[113462]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:46.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:46 compute-0 ceph-mon[74339]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:47 compute-0 sudo[113614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frrbqkojbpyczeetjkuppjqhzcfrvcqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002946.8042428-484-90371195472884/AnsiballZ_command.py'
Dec 06 06:35:47 compute-0 sudo[113614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:47.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:47 compute-0 python3.9[113616]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:35:48 compute-0 sudo[113614]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:48.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:48 compute-0 ceph-mon[74339]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:48 compute-0 sudo[113768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmykoukbwhvrlilbfvvnznivaydfscva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002948.304755-514-6591429400429/AnsiballZ_service_facts.py'
Dec 06 06:35:48 compute-0 sudo[113768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:48 compute-0 python3.9[113770]: ansible-service_facts Invoked
Dec 06 06:35:49 compute-0 network[113787]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:35:49 compute-0 network[113788]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:35:49 compute-0 network[113789]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:35:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:49.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:50.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:50 compute-0 sudo[113856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:50 compute-0 sudo[113856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:50 compute-0 sudo[113856]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:50 compute-0 sudo[113885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:35:50 compute-0 sudo[113885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:35:50 compute-0 sudo[113885]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:51 compute-0 ceph-mon[74339]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:51.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:51 compute-0 sudo[113768]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:52.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:53 compute-0 ceph-mon[74339]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:35:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:53.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:35:53 compute-0 sudo[114124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqetcaulwssximvdjcafdqvoykkpeaok ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765002953.0541935-559-22835064190148/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765002953.0541935-559-22835064190148/args'
Dec 06 06:35:53 compute-0 sudo[114124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:53 compute-0 sudo[114124]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:54 compute-0 sudo[114291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nenxajytspegwmbdpynwsvcsmvyjjscj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002953.8756719-592-198234111535109/AnsiballZ_dnf.py'
Dec 06 06:35:54 compute-0 sudo[114291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:54 compute-0 python3.9[114293]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:35:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:54.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:35:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:55.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:35:55 compute-0 ceph-mon[74339]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:55 compute-0 sudo[114291]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:35:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:56.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:35:56 compute-0 ceph-mon[74339]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:57.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:57 compute-0 sudo[114446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soqunpdmkmbflkofxzhhyguliyllzckb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002956.894441-631-164964405648330/AnsiballZ_package_facts.py'
Dec 06 06:35:57 compute-0 sudo[114446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:57 compute-0 python3.9[114448]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 06 06:35:58 compute-0 sudo[114446]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:35:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:35:58.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:35:59 compute-0 sudo[114599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-datrhmwomhbnvautdovckkgntoyfssvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002958.8798952-661-138237665702531/AnsiballZ_stat.py'
Dec 06 06:35:59 compute-0 sudo[114599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:35:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:35:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:35:59.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:35:59 compute-0 python3.9[114601]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:35:59 compute-0 sudo[114599]: pam_unix(sudo:session): session closed for user root
Dec 06 06:35:59 compute-0 sudo[114677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgdextonquubxnljmkmzucklksjbsouk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002958.8798952-661-138237665702531/AnsiballZ_file.py'
Dec 06 06:35:59 compute-0 sudo[114677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:35:59 compute-0 python3.9[114679]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:35:59 compute-0 sudo[114677]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:00 compute-0 ceph-mon[74339]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:00.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:01 compute-0 sudo[114830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqfvewijotgskqmjaffvpcsgrwjxkits ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002960.8103344-697-100569186274136/AnsiballZ_stat.py'
Dec 06 06:36:01 compute-0 sudo[114830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:01 compute-0 python3.9[114832]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:01 compute-0 sudo[114830]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:01.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:01 compute-0 sudo[114908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyakoxkkyjziyfjuoudxmddnmffoexlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002960.8103344-697-100569186274136/AnsiballZ_file.py'
Dec 06 06:36:01 compute-0 sudo[114908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:01 compute-0 ceph-mon[74339]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:01 compute-0 python3.9[114910]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:01 compute-0 sudo[114908]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:02.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:03 compute-0 sudo[115061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msqoiglcvmzcohpmatcjonvzjtkgdjwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002962.8979852-751-223268908654510/AnsiballZ_lineinfile.py'
Dec 06 06:36:03 compute-0 sudo[115061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:03.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:03 compute-0 python3.9[115063]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:03 compute-0 sudo[115061]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:03 compute-0 ceph-mon[74339]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:04.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:04 compute-0 sudo[115214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csxdjpmfjlanlzlkuhpmylvcuglclllm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002964.683541-796-114203584049634/AnsiballZ_setup.py'
Dec 06 06:36:04 compute-0 sudo[115214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:05 compute-0 python3.9[115216]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:36:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:05.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:05 compute-0 ceph-mon[74339]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:05 compute-0 sudo[115214]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:06 compute-0 sudo[115298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxkiintnbmqujqedrwwwcqhufkhivnjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002964.683541-796-114203584049634/AnsiballZ_systemd.py'
Dec 06 06:36:06 compute-0 sudo[115298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:06 compute-0 python3.9[115300]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:36:06 compute-0 sudo[115298]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:36:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:06.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:36:06 compute-0 ceph-mon[74339]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:07 compute-0 sshd-session[109397]: Connection closed by 192.168.122.30 port 38658
Dec 06 06:36:07 compute-0 sshd-session[109394]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:36:07 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Dec 06 06:36:07 compute-0 systemd[1]: session-38.scope: Consumed 23.789s CPU time.
Dec 06 06:36:07 compute-0 systemd-logind[798]: Session 38 logged out. Waiting for processes to exit.
Dec 06 06:36:07 compute-0 systemd-logind[798]: Removed session 38.
Dec 06 06:36:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:07.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:36:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:08.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:36:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:08 compute-0 ceph-mon[74339]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:09.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:10.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:10 compute-0 sudo[115330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:10 compute-0 sudo[115330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:10 compute-0 sudo[115330]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:11 compute-0 sudo[115355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:11 compute-0 sudo[115355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:11 compute-0 sudo[115355]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:11.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:11 compute-0 ceph-mon[74339]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:12.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:12 compute-0 ceph-mon[74339]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:36:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:36:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:36:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:36:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:36:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:36:13 compute-0 sshd-session[71331]: Received disconnect from 38.102.83.248 port 52316:11: disconnected by user
Dec 06 06:36:13 compute-0 sshd-session[71331]: Disconnected from user zuul 38.102.83.248 port 52316
Dec 06 06:36:13 compute-0 sshd-session[71328]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:36:13 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Dec 06 06:36:13 compute-0 systemd[1]: session-18.scope: Consumed 1min 43.053s CPU time.
Dec 06 06:36:13 compute-0 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Dec 06 06:36:13 compute-0 systemd-logind[798]: Removed session 18.
Dec 06 06:36:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:13.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:13 compute-0 sshd-session[115381]: Accepted publickey for zuul from 192.168.122.30 port 39864 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:36:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:13 compute-0 systemd-logind[798]: New session 39 of user zuul.
Dec 06 06:36:13 compute-0 systemd[1]: Started Session 39 of User zuul.
Dec 06 06:36:13 compute-0 sshd-session[115381]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:36:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:14 compute-0 sudo[115535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwflmiauuvbsmggmucgwzdnohtddgbfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002973.797286-31-179843834680956/AnsiballZ_file.py'
Dec 06 06:36:14 compute-0 sudo[115535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:14 compute-0 python3.9[115537]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:14 compute-0 sudo[115535]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000019s ======
Dec 06 06:36:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:14.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec 06 06:36:15 compute-0 sudo[115687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mijbsktmitlecutpyrvzoodeprtogmge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002974.7846885-67-29334162554345/AnsiballZ_stat.py'
Dec 06 06:36:15 compute-0 sudo[115687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:15.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:15 compute-0 python3.9[115689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:15 compute-0 ceph-mon[74339]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:15 compute-0 sudo[115687]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:15 compute-0 sudo[115765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlotbaesuwseknkjrvdqtloeaowndjwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002974.7846885-67-29334162554345/AnsiballZ_file.py'
Dec 06 06:36:15 compute-0 sudo[115765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:15 compute-0 python3.9[115767]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:15 compute-0 sudo[115765]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:16 compute-0 sshd-session[115384]: Connection closed by 192.168.122.30 port 39864
Dec 06 06:36:16 compute-0 sshd-session[115381]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:36:16 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Dec 06 06:36:16 compute-0 systemd[1]: session-39.scope: Consumed 1.649s CPU time.
Dec 06 06:36:16 compute-0 systemd-logind[798]: Session 39 logged out. Waiting for processes to exit.
Dec 06 06:36:16 compute-0 systemd-logind[798]: Removed session 39.
Dec 06 06:36:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:16.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:16 compute-0 ceph-mon[74339]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000018s ======
Dec 06 06:36:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:17.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec 06 06:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:36:18
Dec 06 06:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.control', '.mgr']
Dec 06 06:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:36:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:18.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:19 compute-0 ceph-mon[74339]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:19.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:20 compute-0 ceph-mon[74339]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:20.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:21.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:22 compute-0 sshd-session[115796]: Accepted publickey for zuul from 192.168.122.30 port 46220 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:36:22 compute-0 systemd-logind[798]: New session 40 of user zuul.
Dec 06 06:36:22 compute-0 systemd[1]: Started Session 40 of User zuul.
Dec 06 06:36:22 compute-0 sshd-session[115796]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:36:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000019s ======
Dec 06 06:36:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:22.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec 06 06:36:22 compute-0 ceph-mon[74339]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:22 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:36:23 compute-0 python3.9[115950]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:36:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:23.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:24 compute-0 sudo[116104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnetsulwzxdvziujbyqzvepstpcjjpxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002983.7106721-64-171463578076904/AnsiballZ_file.py'
Dec 06 06:36:24 compute-0 sudo[116104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:24 compute-0 python3.9[116106]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:24 compute-0 sudo[116104]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000018s ======
Dec 06 06:36:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:24.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec 06 06:36:24 compute-0 ceph-mon[74339]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:25 compute-0 sudo[116280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjpnxxwdxgiuscenllftsrshrihrppvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002984.5668097-88-229198451212271/AnsiballZ_stat.py'
Dec 06 06:36:25 compute-0 sudo[116280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:36:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:36:25 compute-0 python3.9[116282]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:25 compute-0 sudo[116280]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:25.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:25 compute-0 sudo[116358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uczlzivkqdnsiqkiuujfdacpqvhtasro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002984.5668097-88-229198451212271/AnsiballZ_file.py'
Dec 06 06:36:25 compute-0 sudo[116358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:25 compute-0 python3.9[116360]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.gx05p6_h recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:25 compute-0 sudo[116358]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:26.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:26 compute-0 sudo[116511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyytmixczfmzynnkcmrqxfmziffmtlei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002986.4504135-148-2401748593443/AnsiballZ_stat.py'
Dec 06 06:36:26 compute-0 sudo[116511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:26 compute-0 ceph-mon[74339]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:26 compute-0 python3.9[116513]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:26 compute-0 sudo[116511]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:27 compute-0 sudo[116589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulddvhkspnknbnkufprecsunxqecsbaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002986.4504135-148-2401748593443/AnsiballZ_file.py'
Dec 06 06:36:27 compute-0 sudo[116589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:27 compute-0 python3.9[116591]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.8muk9cww recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:27 compute-0 sudo[116589]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000018s ======
Dec 06 06:36:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:27.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec 06 06:36:27 compute-0 sudo[116741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgkwsxfwkjmsztdtqagaicngssdpjpsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002987.6733723-187-125982529198648/AnsiballZ_file.py'
Dec 06 06:36:27 compute-0 sudo[116741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:28 compute-0 python3.9[116743]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:36:28 compute-0 sudo[116741]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:28.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:28 compute-0 sudo[116894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwgskzhytizkeslhlujjsureazisfgkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002988.3786087-211-14801604282003/AnsiballZ_stat.py'
Dec 06 06:36:28 compute-0 sudo[116894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:29 compute-0 python3.9[116896]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:29 compute-0 sudo[116894]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:29 compute-0 sudo[116972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vutanyxfsyjqgmezztgbpwvpcevnorir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002988.3786087-211-14801604282003/AnsiballZ_file.py'
Dec 06 06:36:29 compute-0 sudo[116972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:29.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:29 compute-0 python3.9[116974]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:36:29 compute-0 sudo[116972]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:29 compute-0 sudo[117124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyqbyinsngevnjvinchlbikjgadcmctx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002989.6227577-211-184870693248282/AnsiballZ_stat.py'
Dec 06 06:36:29 compute-0 sudo[117124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:30 compute-0 python3.9[117126]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:30 compute-0 sudo[117124]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:30 compute-0 sudo[117203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyxqgowqrtcezfmnpvcgxqjyqyniqoyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002989.6227577-211-184870693248282/AnsiballZ_file.py'
Dec 06 06:36:30 compute-0 sudo[117203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:30 compute-0 ceph-mon[74339]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:30 compute-0 python3.9[117205]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:36:30 compute-0 sudo[117203]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:30.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:31 compute-0 sudo[117305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:31 compute-0 sudo[117305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:31 compute-0 sudo[117305]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:31 compute-0 sudo[117331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:31 compute-0 sudo[117331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:31 compute-0 sudo[117331]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:31 compute-0 sudo[117405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfrjtimlknbpnbfmkolxdzjalofvamuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002990.8959644-280-194693778366527/AnsiballZ_file.py'
Dec 06 06:36:31 compute-0 sudo[117405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:31.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:31 compute-0 python3.9[117407]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:31 compute-0 sudo[117405]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:31 compute-0 ceph-mon[74339]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:31 compute-0 sudo[117557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcplfeqjhcatwbrehqeinnqgpzdjexjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002991.6937635-304-128246751549576/AnsiballZ_stat.py'
Dec 06 06:36:31 compute-0 sudo[117557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:32 compute-0 python3.9[117559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:32 compute-0 sudo[117557]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:32 compute-0 sudo[117636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtgtezyudrpmvsxogbrphtumfsqvuzsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002991.6937635-304-128246751549576/AnsiballZ_file.py'
Dec 06 06:36:32 compute-0 sudo[117636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:32.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:32 compute-0 python3.9[117638]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:32 compute-0 sudo[117636]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000018s ======
Dec 06 06:36:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:33.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec 06 06:36:33 compute-0 sudo[117788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxjmztyoczutgmtwprcxvleleddrfsnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002993.1701937-340-170403804720763/AnsiballZ_stat.py'
Dec 06 06:36:33 compute-0 sudo[117788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:33 compute-0 python3.9[117790]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:33 compute-0 sudo[117788]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:34 compute-0 sudo[117866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paysyqsiuomucfqbenkkxvcfgtakbdop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002993.1701937-340-170403804720763/AnsiballZ_file.py'
Dec 06 06:36:34 compute-0 sudo[117866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:34 compute-0 python3.9[117868]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:34 compute-0 sudo[117866]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000019s ======
Dec 06 06:36:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:34.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec 06 06:36:35 compute-0 sudo[118019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzumxugasfmyobwkfvngtqvakbpkxncd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002994.441873-376-85314028477805/AnsiballZ_systemd.py'
Dec 06 06:36:35 compute-0 sudo[118019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:35 compute-0 ceph-mon[74339]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:35 compute-0 python3.9[118021]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:36:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:35.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:35 compute-0 systemd[1]: Reloading.
Dec 06 06:36:35 compute-0 systemd-rc-local-generator[118040]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:36:35 compute-0 systemd-sysv-generator[118049]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:36:35 compute-0 sudo[118019]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:36 compute-0 ceph-mon[74339]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:36 compute-0 sudo[118209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcnzasgskkgljckeuxufqpekpurnevvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002996.107126-400-82730962905149/AnsiballZ_stat.py'
Dec 06 06:36:36 compute-0 sudo[118209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:36.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:36 compute-0 python3.9[118211]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:36 compute-0 sudo[118209]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:36 compute-0 sudo[118287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdetxhkdiuxvwpitxjwdybloycbczyuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002996.107126-400-82730962905149/AnsiballZ_file.py'
Dec 06 06:36:37 compute-0 sudo[118287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:37 compute-0 python3.9[118289]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:37 compute-0 sudo[118287]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:37.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:37 compute-0 ceph-mon[74339]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:37 compute-0 sudo[118439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyhyltrnakcutcufmqzrfhhunpzkumce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002997.5123487-436-172920691265230/AnsiballZ_stat.py'
Dec 06 06:36:37 compute-0 sudo[118439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:38 compute-0 python3.9[118441]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:38 compute-0 sudo[118439]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:38 compute-0 sudo[118518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrpblswggdysehphkqnvfqsprenjxcuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002997.5123487-436-172920691265230/AnsiballZ_file.py'
Dec 06 06:36:38 compute-0 sudo[118518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:38 compute-0 python3.9[118520]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:38 compute-0 sudo[118518]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:38.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:38 compute-0 ceph-mon[74339]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:38 compute-0 sudo[118670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwbyrcbketdjcjtnsdznazhtxzxmbocz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765002998.704379-472-206472810326301/AnsiballZ_systemd.py'
Dec 06 06:36:38 compute-0 sudo[118670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:39 compute-0 python3.9[118672]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:36:39 compute-0 systemd[1]: Reloading.
Dec 06 06:36:39 compute-0 systemd-rc-local-generator[118697]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:36:39 compute-0 systemd-sysv-generator[118701]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:36:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:39.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:39 compute-0 systemd[1]: Starting Create netns directory...
Dec 06 06:36:39 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 06:36:39 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 06:36:39 compute-0 systemd[1]: Finished Create netns directory.
Dec 06 06:36:39 compute-0 sudo[118670]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:40 compute-0 python3.9[118865]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:36:40 compute-0 sudo[118866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:40 compute-0 sudo[118866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:40 compute-0 sudo[118866]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:40 compute-0 network[118912]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:36:40 compute-0 network[118914]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:36:40 compute-0 network[118916]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:36:40 compute-0 sudo[118902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:36:40 compute-0 sudo[118902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:40 compute-0 sudo[118902]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:40 compute-0 ceph-mon[74339]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:40.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:41 compute-0 sudo[118940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:41 compute-0 sudo[118940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:41 compute-0 sudo[118940]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000018s ======
Dec 06 06:36:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:41.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec 06 06:36:41 compute-0 sudo[118966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 06:36:41 compute-0 sudo[118966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:41 compute-0 sudo[118966]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:36:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:42.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:36:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:36:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:36:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:36:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:36:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:36:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:36:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:43.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:36:44 compute-0 sudo[119265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msehqickutfanvcrxxkvufmihbafjpkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003004.256452-550-160184407638416/AnsiballZ_stat.py'
Dec 06 06:36:44 compute-0 sudo[119265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:44.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:44 compute-0 python3.9[119267]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:44 compute-0 sudo[119265]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:45 compute-0 sudo[119343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stpipydhvofuysdgtelguffqxppikzpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003004.256452-550-160184407638416/AnsiballZ_file.py'
Dec 06 06:36:45 compute-0 sudo[119343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:45 compute-0 python3.9[119345]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:45 compute-0 sudo[119343]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:45.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:45 compute-0 ceph-mon[74339]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:36:45 compute-0 sudo[119495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teybmdrtaxsmqklkizfzkkjrqtbqagln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003005.540493-589-60692035374735/AnsiballZ_file.py'
Dec 06 06:36:45 compute-0 sudo[119495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:46 compute-0 python3.9[119497]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:46 compute-0 sudo[119495]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:46 compute-0 sudo[119648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhyregilvpbvnfkflvdllfcoblscjgna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003006.2845001-613-106790494606944/AnsiballZ_stat.py'
Dec 06 06:36:46 compute-0 sudo[119648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000018s ======
Dec 06 06:36:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:46.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec 06 06:36:46 compute-0 python3.9[119650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:36:46 compute-0 sudo[119648]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:47 compute-0 sudo[119726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrcuaenrlkkbxgmxnhgjurazobmfrbek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003006.2845001-613-106790494606944/AnsiballZ_file.py'
Dec 06 06:36:47 compute-0 sudo[119726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:47 compute-0 python3.9[119728]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:47 compute-0 sudo[119726]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:47.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:48 compute-0 sudo[119813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:48 compute-0 sudo[119813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:48 compute-0 sudo[119813]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:48 compute-0 sudo[119854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:36:48 compute-0 sudo[119854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:48 compute-0 sudo[119854]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:48 compute-0 sudo[119902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:48 compute-0 sudo[119902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:48 compute-0 sudo[119902]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:48 compute-0 sudo[119952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfefbyeoezhmukpnrrlmpidoialvlfle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003007.7153513-658-104579641603117/AnsiballZ_timezone.py'
Dec 06 06:36:48 compute-0 sudo[119952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:48 compute-0 sudo[119955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:36:48 compute-0 sudo[119955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:48 compute-0 ceph-mon[74339]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:48 compute-0 ceph-mon[74339]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:48 compute-0 python3.9[119963]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 06 06:36:48 compute-0 systemd[1]: Starting Time & Date Service...
Dec 06 06:36:48 compute-0 systemd[1]: Started Time & Date Service.
Dec 06 06:36:48 compute-0 sudo[119952]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:36:48 compute-0 sudo[119955]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000019s ======
Dec 06 06:36:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:48.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec 06 06:36:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:49 compute-0 sudo[120166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zckdcokmloyondxssxkgngvjbizhxyzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003008.811394-685-74083024960272/AnsiballZ_file.py'
Dec 06 06:36:49 compute-0 sudo[120166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:49 compute-0 python3.9[120168]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:49 compute-0 sudo[120166]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:49.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:36:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:36:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:36:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:36:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:36:49 compute-0 sudo[120318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuksyadmzajmkauqijlyevavqkdnfams ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003009.5058005-709-99527297782551/AnsiballZ_stat.py'
Dec 06 06:36:49 compute-0 sudo[120318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:49 compute-0 python3.9[120320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:50 compute-0 sudo[120318]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.626516) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003010626572, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2298, "num_deletes": 258, "total_data_size": 3827990, "memory_usage": 3890728, "flush_reason": "Manual Compaction"}
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 06 06:36:50 compute-0 sudo[120398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caumlnamecsjpnlqpyfimednjgghwnje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003009.5058005-709-99527297782551/AnsiballZ_file.py'
Dec 06 06:36:50 compute-0 sudo[120398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003010640719, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 2570745, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7991, "largest_seqno": 10288, "table_properties": {"data_size": 2561920, "index_size": 4999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 24139, "raw_average_key_size": 22, "raw_value_size": 2541753, "raw_average_value_size": 2321, "num_data_blocks": 222, "num_entries": 1095, "num_filter_entries": 1095, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002830, "oldest_key_time": 1765002830, "file_creation_time": 1765003010, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 14253 microseconds, and 7358 cpu microseconds.
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.640768) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 2570745 bytes OK
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.640789) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.641806) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.641820) EVENT_LOG_v1 {"time_micros": 1765003010641815, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.641841) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3817766, prev total WAL file size 3835800, number of live WAL files 2.
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.642697) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323539' seq:0, type:0; will stop at (end)
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(2510KB)], [20(8342KB)]
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003010642780, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11113309, "oldest_snapshot_seqno": -1}
Dec 06 06:36:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000018s ======
Dec 06 06:36:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:50.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 4080 keys, 9538813 bytes, temperature: kUnknown
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003010726406, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9538813, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9506156, "index_size": 21347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 99062, "raw_average_key_size": 24, "raw_value_size": 9427123, "raw_average_value_size": 2310, "num_data_blocks": 941, "num_entries": 4080, "num_filter_entries": 4080, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765003010, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.726729) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9538813 bytes
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.727895) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.7 rd, 113.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 4552, records dropped: 472 output_compression: NoCompression
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.727952) EVENT_LOG_v1 {"time_micros": 1765003010727922, "job": 6, "event": "compaction_finished", "compaction_time_micros": 83749, "compaction_time_cpu_micros": 34392, "output_level": 6, "num_output_files": 1, "total_output_size": 9538813, "num_input_records": 4552, "num_output_records": 4080, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003010728677, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003010730928, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.642637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.731146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.731156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.731158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.731159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:36:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:36:50.731161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:36:50 compute-0 python3.9[120400]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:50 compute-0 sudo[120398]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:51 compute-0 sudo[120508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:51 compute-0 sudo[120508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:51 compute-0 sudo[120508]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:51 compute-0 sudo[120582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vabnbiqigxnuewwrxvmkxbssdjzooyvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003011.006192-745-226197932122882/AnsiballZ_stat.py'
Dec 06 06:36:51 compute-0 sudo[120582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:51 compute-0 sudo[120565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:51 compute-0 sudo[120565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:51 compute-0 sudo[120565]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000019s ======
Dec 06 06:36:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:51.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec 06 06:36:51 compute-0 python3.9[120600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:51 compute-0 sudo[120582]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:51 compute-0 ceph-mon[74339]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b7f80b7a-b644-45ca-b956-20db8b68bd12 does not exist
Dec 06 06:36:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d36dc305-ee9b-456c-9e06-5dd312dbb7a5 does not exist
Dec 06 06:36:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3b5f7578-0cd4-4739-aabd-a162d46ba927 does not exist
Dec 06 06:36:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:36:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:36:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:36:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:36:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:36:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:36:51 compute-0 sudo[120651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:51 compute-0 sudo[120651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:51 compute-0 sudo[120651]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:51 compute-0 sudo[120707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sirjcwlpmhuosfkhedplatryfbzalteo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003011.006192-745-226197932122882/AnsiballZ_file.py'
Dec 06 06:36:51 compute-0 sudo[120707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:51 compute-0 sudo[120703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:36:51 compute-0 sudo[120703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:51 compute-0 sudo[120703]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:51 compute-0 sudo[120731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:51 compute-0 sudo[120731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:51 compute-0 sudo[120731]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:51 compute-0 sudo[120756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:36:51 compute-0 sudo[120756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:51 compute-0 python3.9[120723]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.865ayo7u recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:51 compute-0 sudo[120707]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:52 compute-0 podman[120845]: 2025-12-06 06:36:52.17390833 +0000 UTC m=+0.037617832 container create 12bd0f5923d03556af6d80c9918ef74dfc2677262fbd378a070517a8c37e6c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:36:52 compute-0 systemd[1]: Started libpod-conmon-12bd0f5923d03556af6d80c9918ef74dfc2677262fbd378a070517a8c37e6c9d.scope.
Dec 06 06:36:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:36:52 compute-0 podman[120845]: 2025-12-06 06:36:52.156982975 +0000 UTC m=+0.020692497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:36:52 compute-0 podman[120845]: 2025-12-06 06:36:52.254657385 +0000 UTC m=+0.118366917 container init 12bd0f5923d03556af6d80c9918ef74dfc2677262fbd378a070517a8c37e6c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 06:36:52 compute-0 podman[120845]: 2025-12-06 06:36:52.261950601 +0000 UTC m=+0.125660093 container start 12bd0f5923d03556af6d80c9918ef74dfc2677262fbd378a070517a8c37e6c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 06:36:52 compute-0 podman[120845]: 2025-12-06 06:36:52.265355505 +0000 UTC m=+0.129065037 container attach 12bd0f5923d03556af6d80c9918ef74dfc2677262fbd378a070517a8c37e6c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:36:52 compute-0 priceless_williams[120862]: 167 167
Dec 06 06:36:52 compute-0 systemd[1]: libpod-12bd0f5923d03556af6d80c9918ef74dfc2677262fbd378a070517a8c37e6c9d.scope: Deactivated successfully.
Dec 06 06:36:52 compute-0 conmon[120862]: conmon 12bd0f5923d03556af6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-12bd0f5923d03556af6d80c9918ef74dfc2677262fbd378a070517a8c37e6c9d.scope/container/memory.events
Dec 06 06:36:52 compute-0 podman[120845]: 2025-12-06 06:36:52.269562552 +0000 UTC m=+0.133272044 container died 12bd0f5923d03556af6d80c9918ef74dfc2677262fbd378a070517a8c37e6c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:36:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-75b5958edf757689e4caedefccce5673750bcb7520e06200093cf5f12ff0eeeb-merged.mount: Deactivated successfully.
Dec 06 06:36:52 compute-0 podman[120845]: 2025-12-06 06:36:52.309505358 +0000 UTC m=+0.173214860 container remove 12bd0f5923d03556af6d80c9918ef74dfc2677262fbd378a070517a8c37e6c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:36:52 compute-0 systemd[1]: libpod-conmon-12bd0f5923d03556af6d80c9918ef74dfc2677262fbd378a070517a8c37e6c9d.scope: Deactivated successfully.
Dec 06 06:36:52 compute-0 podman[120886]: 2025-12-06 06:36:52.451555644 +0000 UTC m=+0.039054209 container create e91781200fe6d180ba6376b409202254f9cd2672cfea0dfd38671d32fef301e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:36:52 compute-0 systemd[1]: Started libpod-conmon-e91781200fe6d180ba6376b409202254f9cd2672cfea0dfd38671d32fef301e0.scope.
Dec 06 06:36:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:36:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58b01964fdc5864e5810ca3d145145903da651d025d9fba9e4d62c56ddd9dcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58b01964fdc5864e5810ca3d145145903da651d025d9fba9e4d62c56ddd9dcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58b01964fdc5864e5810ca3d145145903da651d025d9fba9e4d62c56ddd9dcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58b01964fdc5864e5810ca3d145145903da651d025d9fba9e4d62c56ddd9dcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58b01964fdc5864e5810ca3d145145903da651d025d9fba9e4d62c56ddd9dcd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:52 compute-0 podman[120886]: 2025-12-06 06:36:52.524484683 +0000 UTC m=+0.111983268 container init e91781200fe6d180ba6376b409202254f9cd2672cfea0dfd38671d32fef301e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:36:52 compute-0 podman[120886]: 2025-12-06 06:36:52.435052168 +0000 UTC m=+0.022550733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:36:52 compute-0 podman[120886]: 2025-12-06 06:36:52.531543175 +0000 UTC m=+0.119041740 container start e91781200fe6d180ba6376b409202254f9cd2672cfea0dfd38671d32fef301e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:36:52 compute-0 podman[120886]: 2025-12-06 06:36:52.534788996 +0000 UTC m=+0.122287561 container attach e91781200fe6d180ba6376b409202254f9cd2672cfea0dfd38671d32fef301e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:36:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:36:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:36:52 compute-0 ceph-mon[74339]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:36:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:36:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:36:52 compute-0 ceph-mon[74339]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000018s ======
Dec 06 06:36:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:52.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec 06 06:36:52 compute-0 sudo[121031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucqjfwkupthgmusqvpuuxbopwpdlhxeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003012.4882853-781-138698701121238/AnsiballZ_stat.py'
Dec 06 06:36:52 compute-0 sudo[121031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:52 compute-0 python3.9[121033]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:52 compute-0 sudo[121031]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:53 compute-0 sudo[121109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkhnmwvpgeqmwkaxupohvejmzdqtosgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003012.4882853-781-138698701121238/AnsiballZ_file.py'
Dec 06 06:36:53 compute-0 sudo[121109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:53 compute-0 python3.9[121111]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:53 compute-0 sudo[121109]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:53 compute-0 clever_turing[120924]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:36:53 compute-0 clever_turing[120924]: --> relative data size: 1.0
Dec 06 06:36:53 compute-0 clever_turing[120924]: --> All data devices are unavailable
Dec 06 06:36:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:53.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:53 compute-0 systemd[1]: libpod-e91781200fe6d180ba6376b409202254f9cd2672cfea0dfd38671d32fef301e0.scope: Deactivated successfully.
Dec 06 06:36:53 compute-0 podman[120886]: 2025-12-06 06:36:53.458537541 +0000 UTC m=+1.046036106 container died e91781200fe6d180ba6376b409202254f9cd2672cfea0dfd38671d32fef301e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:36:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e58b01964fdc5864e5810ca3d145145903da651d025d9fba9e4d62c56ddd9dcd-merged.mount: Deactivated successfully.
Dec 06 06:36:53 compute-0 podman[120886]: 2025-12-06 06:36:53.518738383 +0000 UTC m=+1.106236938 container remove e91781200fe6d180ba6376b409202254f9cd2672cfea0dfd38671d32fef301e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 06:36:53 compute-0 systemd[1]: libpod-conmon-e91781200fe6d180ba6376b409202254f9cd2672cfea0dfd38671d32fef301e0.scope: Deactivated successfully.
Dec 06 06:36:53 compute-0 sudo[120756]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:53 compute-0 sudo[121162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:53 compute-0 sudo[121162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:53 compute-0 sudo[121162]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:53 compute-0 sudo[121187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:36:53 compute-0 sudo[121187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:53 compute-0 sudo[121187]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:53 compute-0 sudo[121212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:53 compute-0 sudo[121212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:53 compute-0 sudo[121212]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:53 compute-0 sudo[121238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:36:53 compute-0 sudo[121238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:54 compute-0 podman[121377]: 2025-12-06 06:36:54.10826435 +0000 UTC m=+0.046531118 container create afd07f6ac8d07d6a22db692040b16181b1c20d4a8a67c1909cc7a98466ce5e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:36:54 compute-0 systemd[1]: Started libpod-conmon-afd07f6ac8d07d6a22db692040b16181b1c20d4a8a67c1909cc7a98466ce5e6f.scope.
Dec 06 06:36:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:36:54 compute-0 sudo[121447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxzmvqgfepukyfkncyvqrmuzsoioyius ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003013.7769194-820-122036571091139/AnsiballZ_command.py'
Dec 06 06:36:54 compute-0 sudo[121447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:54 compute-0 podman[121377]: 2025-12-06 06:36:54.090651782 +0000 UTC m=+0.028918580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:36:54 compute-0 podman[121377]: 2025-12-06 06:36:54.194198201 +0000 UTC m=+0.132464999 container init afd07f6ac8d07d6a22db692040b16181b1c20d4a8a67c1909cc7a98466ce5e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:36:54 compute-0 podman[121377]: 2025-12-06 06:36:54.201354005 +0000 UTC m=+0.139620773 container start afd07f6ac8d07d6a22db692040b16181b1c20d4a8a67c1909cc7a98466ce5e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 06:36:54 compute-0 podman[121377]: 2025-12-06 06:36:54.204279709 +0000 UTC m=+0.142546507 container attach afd07f6ac8d07d6a22db692040b16181b1c20d4a8a67c1909cc7a98466ce5e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 06:36:54 compute-0 inspiring_ramanujan[121442]: 167 167
Dec 06 06:36:54 compute-0 systemd[1]: libpod-afd07f6ac8d07d6a22db692040b16181b1c20d4a8a67c1909cc7a98466ce5e6f.scope: Deactivated successfully.
Dec 06 06:36:54 compute-0 conmon[121442]: conmon afd07f6ac8d07d6a22db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-afd07f6ac8d07d6a22db692040b16181b1c20d4a8a67c1909cc7a98466ce5e6f.scope/container/memory.events
Dec 06 06:36:54 compute-0 podman[121377]: 2025-12-06 06:36:54.209646939 +0000 UTC m=+0.147913707 container died afd07f6ac8d07d6a22db692040b16181b1c20d4a8a67c1909cc7a98466ce5e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 06:36:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-699753664c4259f6c8b9725f2923976045dba58bcdb2c78c549b4feab94b60ee-merged.mount: Deactivated successfully.
Dec 06 06:36:54 compute-0 podman[121377]: 2025-12-06 06:36:54.24777637 +0000 UTC m=+0.186043128 container remove afd07f6ac8d07d6a22db692040b16181b1c20d4a8a67c1909cc7a98466ce5e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ramanujan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:36:54 compute-0 systemd[1]: libpod-conmon-afd07f6ac8d07d6a22db692040b16181b1c20d4a8a67c1909cc7a98466ce5e6f.scope: Deactivated successfully.
Dec 06 06:36:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:54 compute-0 python3.9[121450]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:36:54 compute-0 sudo[121447]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:54 compute-0 podman[121473]: 2025-12-06 06:36:54.439972882 +0000 UTC m=+0.048232410 container create 6c3cb2d3f59b390a73ce76ef48380580ffaa0cd193481407b692d35c6cc8ceef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 06:36:54 compute-0 systemd[1]: Started libpod-conmon-6c3cb2d3f59b390a73ce76ef48380580ffaa0cd193481407b692d35c6cc8ceef.scope.
Dec 06 06:36:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:36:54 compute-0 podman[121473]: 2025-12-06 06:36:54.42160338 +0000 UTC m=+0.029862938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c519ebc3695b0b964e6325d8d53b1b397673fe0dfba6fedbb722790687c9a685/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c519ebc3695b0b964e6325d8d53b1b397673fe0dfba6fedbb722790687c9a685/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c519ebc3695b0b964e6325d8d53b1b397673fe0dfba6fedbb722790687c9a685/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c519ebc3695b0b964e6325d8d53b1b397673fe0dfba6fedbb722790687c9a685/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:54 compute-0 podman[121473]: 2025-12-06 06:36:54.527301309 +0000 UTC m=+0.135560857 container init 6c3cb2d3f59b390a73ce76ef48380580ffaa0cd193481407b692d35c6cc8ceef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:36:54 compute-0 podman[121473]: 2025-12-06 06:36:54.538380516 +0000 UTC m=+0.146640044 container start 6c3cb2d3f59b390a73ce76ef48380580ffaa0cd193481407b692d35c6cc8ceef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:36:54 compute-0 podman[121473]: 2025-12-06 06:36:54.541564225 +0000 UTC m=+0.149823823 container attach 6c3cb2d3f59b390a73ce76ef48380580ffaa0cd193481407b692d35c6cc8ceef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:36:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000018s ======
Dec 06 06:36:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:54.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec 06 06:36:55 compute-0 sudo[121643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmycrlyqphxsbahmvfjgyslditkhbzyb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003014.6249464-844-274099727178224/AnsiballZ_edpm_nftables_from_files.py'
Dec 06 06:36:55 compute-0 sudo[121643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:55 compute-0 python3[121645]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 06:36:55 compute-0 sudo[121643]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:55 compute-0 busy_moore[121513]: {
Dec 06 06:36:55 compute-0 busy_moore[121513]:     "0": [
Dec 06 06:36:55 compute-0 busy_moore[121513]:         {
Dec 06 06:36:55 compute-0 busy_moore[121513]:             "devices": [
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "/dev/loop3"
Dec 06 06:36:55 compute-0 busy_moore[121513]:             ],
Dec 06 06:36:55 compute-0 busy_moore[121513]:             "lv_name": "ceph_lv0",
Dec 06 06:36:55 compute-0 busy_moore[121513]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:36:55 compute-0 busy_moore[121513]:             "lv_size": "7511998464",
Dec 06 06:36:55 compute-0 busy_moore[121513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:36:55 compute-0 busy_moore[121513]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:36:55 compute-0 busy_moore[121513]:             "name": "ceph_lv0",
Dec 06 06:36:55 compute-0 busy_moore[121513]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:36:55 compute-0 busy_moore[121513]:             "tags": {
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "ceph.cluster_name": "ceph",
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "ceph.crush_device_class": "",
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "ceph.encrypted": "0",
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "ceph.osd_id": "0",
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "ceph.type": "block",
Dec 06 06:36:55 compute-0 busy_moore[121513]:                 "ceph.vdo": "0"
Dec 06 06:36:55 compute-0 busy_moore[121513]:             },
Dec 06 06:36:55 compute-0 busy_moore[121513]:             "type": "block",
Dec 06 06:36:55 compute-0 busy_moore[121513]:             "vg_name": "ceph_vg0"
Dec 06 06:36:55 compute-0 busy_moore[121513]:         }
Dec 06 06:36:55 compute-0 busy_moore[121513]:     ]
Dec 06 06:36:55 compute-0 busy_moore[121513]: }
Dec 06 06:36:55 compute-0 systemd[1]: libpod-6c3cb2d3f59b390a73ce76ef48380580ffaa0cd193481407b692d35c6cc8ceef.scope: Deactivated successfully.
Dec 06 06:36:55 compute-0 podman[121473]: 2025-12-06 06:36:55.40360245 +0000 UTC m=+1.011861988 container died 6c3cb2d3f59b390a73ce76ef48380580ffaa0cd193481407b692d35c6cc8ceef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moore, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:36:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c519ebc3695b0b964e6325d8d53b1b397673fe0dfba6fedbb722790687c9a685-merged.mount: Deactivated successfully.
Dec 06 06:36:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:36:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:55.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:36:55 compute-0 podman[121473]: 2025-12-06 06:36:55.468119613 +0000 UTC m=+1.076379131 container remove 6c3cb2d3f59b390a73ce76ef48380580ffaa0cd193481407b692d35c6cc8ceef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_moore, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:36:55 compute-0 systemd[1]: libpod-conmon-6c3cb2d3f59b390a73ce76ef48380580ffaa0cd193481407b692d35c6cc8ceef.scope: Deactivated successfully.
Dec 06 06:36:55 compute-0 sudo[121238]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:55 compute-0 sudo[121714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:55 compute-0 sudo[121714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:55 compute-0 sudo[121714]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:55 compute-0 ceph-mon[74339]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:55 compute-0 sudo[121765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:36:55 compute-0 sudo[121765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:55 compute-0 sudo[121765]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:55 compute-0 sudo[121797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:55 compute-0 sudo[121797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:55 compute-0 sudo[121797]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:55 compute-0 sudo[121842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:36:55 compute-0 sudo[121842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:55 compute-0 sudo[121913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdlliqjhfubsnywergdmwknzytpzhsfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003015.4903588-868-58809312249313/AnsiballZ_stat.py'
Dec 06 06:36:55 compute-0 sudo[121913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:55 compute-0 python3.9[121915]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:56 compute-0 sudo[121913]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:56 compute-0 podman[121958]: 2025-12-06 06:36:56.062032151 +0000 UTC m=+0.038773364 container create c60d8e244516db762ba046fb2e018ee497813be846d84251ed61a2fb7f870531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 06:36:56 compute-0 systemd[1]: Started libpod-conmon-c60d8e244516db762ba046fb2e018ee497813be846d84251ed61a2fb7f870531.scope.
Dec 06 06:36:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:36:56 compute-0 podman[121958]: 2025-12-06 06:36:56.138992256 +0000 UTC m=+0.115733499 container init c60d8e244516db762ba046fb2e018ee497813be846d84251ed61a2fb7f870531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Dec 06 06:36:56 compute-0 podman[121958]: 2025-12-06 06:36:56.045650946 +0000 UTC m=+0.022392169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:36:56 compute-0 podman[121958]: 2025-12-06 06:36:56.147049946 +0000 UTC m=+0.123791169 container start c60d8e244516db762ba046fb2e018ee497813be846d84251ed61a2fb7f870531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:36:56 compute-0 podman[121958]: 2025-12-06 06:36:56.151229254 +0000 UTC m=+0.127970497 container attach c60d8e244516db762ba046fb2e018ee497813be846d84251ed61a2fb7f870531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 06:36:56 compute-0 elegant_easley[121995]: 167 167
Dec 06 06:36:56 compute-0 systemd[1]: libpod-c60d8e244516db762ba046fb2e018ee497813be846d84251ed61a2fb7f870531.scope: Deactivated successfully.
Dec 06 06:36:56 compute-0 podman[121958]: 2025-12-06 06:36:56.153088699 +0000 UTC m=+0.129829922 container died c60d8e244516db762ba046fb2e018ee497813be846d84251ed61a2fb7f870531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_easley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:36:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9681cec28a29adda6a35da2ff54776dcd36b30892e0f23a477b2ad6130ee0dd6-merged.mount: Deactivated successfully.
Dec 06 06:36:56 compute-0 podman[121958]: 2025-12-06 06:36:56.189348704 +0000 UTC m=+0.166089927 container remove c60d8e244516db762ba046fb2e018ee497813be846d84251ed61a2fb7f870531 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 06:36:56 compute-0 systemd[1]: libpod-conmon-c60d8e244516db762ba046fb2e018ee497813be846d84251ed61a2fb7f870531.scope: Deactivated successfully.
Dec 06 06:36:56 compute-0 sudo[122067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpimbahynfxjedwudlexvwvnktbxfvqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003015.4903588-868-58809312249313/AnsiballZ_file.py'
Dec 06 06:36:56 compute-0 sudo[122067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:56 compute-0 podman[122075]: 2025-12-06 06:36:56.33026719 +0000 UTC m=+0.038369456 container create 01edc77cb5a1ccb97b76ddead0dba10e1670981dc6ef6000885163629b3dda6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 06:36:56 compute-0 systemd[1]: Started libpod-conmon-01edc77cb5a1ccb97b76ddead0dba10e1670981dc6ef6000885163629b3dda6d.scope.
Dec 06 06:36:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a3b16ef787ca6a198dd49f34bd2b0092980f8939c2cdd5420c6147a4b390e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a3b16ef787ca6a198dd49f34bd2b0092980f8939c2cdd5420c6147a4b390e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a3b16ef787ca6a198dd49f34bd2b0092980f8939c2cdd5420c6147a4b390e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9a3b16ef787ca6a198dd49f34bd2b0092980f8939c2cdd5420c6147a4b390e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:36:56 compute-0 podman[122075]: 2025-12-06 06:36:56.312728514 +0000 UTC m=+0.020830790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:36:56 compute-0 podman[122075]: 2025-12-06 06:36:56.420502882 +0000 UTC m=+0.128605168 container init 01edc77cb5a1ccb97b76ddead0dba10e1670981dc6ef6000885163629b3dda6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haibt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 06:36:56 compute-0 podman[122075]: 2025-12-06 06:36:56.429753165 +0000 UTC m=+0.137855431 container start 01edc77cb5a1ccb97b76ddead0dba10e1670981dc6ef6000885163629b3dda6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haibt, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:36:56 compute-0 podman[122075]: 2025-12-06 06:36:56.433570076 +0000 UTC m=+0.141672362 container attach 01edc77cb5a1ccb97b76ddead0dba10e1670981dc6ef6000885163629b3dda6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haibt, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:36:56 compute-0 python3.9[122069]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:56 compute-0 sudo[122067]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000018s ======
Dec 06 06:36:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:56.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec 06 06:36:57 compute-0 ceph-mon[74339]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:57 compute-0 sudo[122245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfmkwqkcgjsqoqgkyxoxvapqqgpghpfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003016.7378867-904-220832114892838/AnsiballZ_stat.py'
Dec 06 06:36:57 compute-0 sudo[122245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:57 compute-0 python3.9[122247]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:57 compute-0 sudo[122245]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:57 compute-0 gallant_haibt[122091]: {
Dec 06 06:36:57 compute-0 gallant_haibt[122091]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:36:57 compute-0 gallant_haibt[122091]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:36:57 compute-0 gallant_haibt[122091]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:36:57 compute-0 gallant_haibt[122091]:         "osd_id": 0,
Dec 06 06:36:57 compute-0 gallant_haibt[122091]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:36:57 compute-0 gallant_haibt[122091]:         "type": "bluestore"
Dec 06 06:36:57 compute-0 gallant_haibt[122091]:     }
Dec 06 06:36:57 compute-0 gallant_haibt[122091]: }
Dec 06 06:36:57 compute-0 systemd[1]: libpod-01edc77cb5a1ccb97b76ddead0dba10e1670981dc6ef6000885163629b3dda6d.scope: Deactivated successfully.
Dec 06 06:36:57 compute-0 podman[122075]: 2025-12-06 06:36:57.357958613 +0000 UTC m=+1.066060899 container died 01edc77cb5a1ccb97b76ddead0dba10e1670981dc6ef6000885163629b3dda6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haibt, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 06:36:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9a3b16ef787ca6a198dd49f34bd2b0092980f8939c2cdd5420c6147a4b390e1-merged.mount: Deactivated successfully.
Dec 06 06:36:57 compute-0 podman[122075]: 2025-12-06 06:36:57.421779873 +0000 UTC m=+1.129882139 container remove 01edc77cb5a1ccb97b76ddead0dba10e1670981dc6ef6000885163629b3dda6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haibt, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 06:36:57 compute-0 systemd[1]: libpod-conmon-01edc77cb5a1ccb97b76ddead0dba10e1670981dc6ef6000885163629b3dda6d.scope: Deactivated successfully.
Dec 06 06:36:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000019s ======
Dec 06 06:36:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:57.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec 06 06:36:57 compute-0 sudo[121842]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:36:57 compute-0 sudo[122353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gktxqlxdmqhpqbwtxfsujvflsciszhei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003016.7378867-904-220832114892838/AnsiballZ_file.py'
Dec 06 06:36:57 compute-0 sudo[122353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:57 compute-0 python3.9[122355]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:57 compute-0 sudo[122353]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:36:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:58 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 01974b96-d0ec-4cec-9b47-0369640234d5 does not exist
Dec 06 06:36:58 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9b973e16-4670-4e74-ab7d-6e321cf2b228 does not exist
Dec 06 06:36:58 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 72068e95-ce50-4c74-a814-e2341780fdb3 does not exist
Dec 06 06:36:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:58 compute-0 sudo[122458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:36:58 compute-0 sudo[122458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:58 compute-0 sudo[122458]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:58 compute-0 sudo[122509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:36:58 compute-0 sudo[122509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:36:58 compute-0 sudo[122509]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:58 compute-0 sudo[122554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sczpblacnmzokpowtozmuibejhjcamla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003018.0824528-940-27798816023551/AnsiballZ_stat.py'
Dec 06 06:36:58 compute-0 sudo[122554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:58 compute-0 python3.9[122558]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:58 compute-0 sudo[122554]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:36:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000018s ======
Dec 06 06:36:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:36:58.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Dec 06 06:36:58 compute-0 sudo[122634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chazofnmnxuqervssqxhfweorqhxtpaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003018.0824528-940-27798816023551/AnsiballZ_file.py'
Dec 06 06:36:58 compute-0 sudo[122634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:59 compute-0 python3.9[122636]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:36:59 compute-0 sudo[122634]: pam_unix(sudo:session): session closed for user root
Dec 06 06:36:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:36:59 compute-0 ceph-mon[74339]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:36:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:36:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000019s ======
Dec 06 06:36:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:36:59.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Dec 06 06:36:59 compute-0 sudo[122786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btxuugfcdiebuiousheydelifynyrojq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003019.3562818-976-45428669409544/AnsiballZ_stat.py'
Dec 06 06:36:59 compute-0 sudo[122786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:36:59 compute-0 python3.9[122788]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:36:59 compute-0 sudo[122786]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:00 compute-0 sudo[122864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziexyuftxtlilnpyrtbjrssvwjgkethe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003019.3562818-976-45428669409544/AnsiballZ_file.py'
Dec 06 06:37:00 compute-0 sudo[122864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:00 compute-0 python3.9[122866]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:00 compute-0 sudo[122864]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:00.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:01 compute-0 sudo[123017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xduaxwafgdcjcgzsriphclqmgalptpdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003020.6225545-1012-53232423583443/AnsiballZ_stat.py'
Dec 06 06:37:01 compute-0 sudo[123017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:01 compute-0 python3.9[123019]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:37:01 compute-0 sudo[123017]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:01.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:01 compute-0 sudo[123095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvhanzgoblowhijkdlapfzfzngwsfenp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003020.6225545-1012-53232423583443/AnsiballZ_file.py'
Dec 06 06:37:01 compute-0 sudo[123095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:01 compute-0 python3.9[123097]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:01 compute-0 sudo[123095]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:02.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:03.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:04.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:05.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:06.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:07.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:08.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:09.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:09 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:37:09 compute-0 ceph-mon[74339]: paxos.0).electionLogic(25) init, last seen epoch 25, mid-election, bumping
Dec 06 06:37:09 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:10.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:11 compute-0 sudo[123127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:37:11 compute-0 sudo[123127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:11 compute-0 sudo[123127]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:11 compute-0 sudo[123152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:37:11 compute-0 sudo[123152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:11 compute-0 sudo[123152]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:11.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:12 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 06:37:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:12.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:37:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:37:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:37:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:37:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:37:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:37:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:13.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:14 compute-0 sudo[123304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stlzaccrmghqgoozhsplrurqzqvcxyhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003033.9734108-1051-180550995450118/AnsiballZ_command.py'
Dec 06 06:37:14 compute-0 sudo[123304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:14 compute-0 python3.9[123306]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:37:14 compute-0 sudo[123304]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:14.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:37:15 compute-0 sudo[123459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbvbxufdkvjlissjiliosgbbttkkcpvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003034.8070543-1075-186481699183551/AnsiballZ_blockinfile.py'
Dec 06 06:37:15 compute-0 sudo[123459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:15.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:15 compute-0 python3.9[123461]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:15 compute-0 sudo[123459]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:16 compute-0 sudo[123611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfmbibhjsowmafzvlenmlydsciibvdkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003035.821051-1102-29105623081103/AnsiballZ_file.py'
Dec 06 06:37:16 compute-0 sudo[123611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:16 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 06:37:16 compute-0 python3.9[123613]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:16 compute-0 sudo[123611]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:16 compute-0 sudo[123764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oicrjwlhaeywtayqtqzbuziuiaccenqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003036.4585729-1102-38499172346618/AnsiballZ_file.py'
Dec 06 06:37:16 compute-0 sudo[123764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:37:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:16.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:37:16 compute-0 python3.9[123766]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:16 compute-0 sudo[123764]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:17 compute-0 ceph-mon[74339]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:37:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:37:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:37:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 11m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:37:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 06:37:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:17.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:37:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:37:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 06:37:17 compute-0 sudo[123916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmcmswgetsfcwflpzgktfrawgidfnycp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003037.2646096-1147-156538825508131/AnsiballZ_mount.py'
Dec 06 06:37:17 compute-0 sudo[123916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:17 compute-0 python3.9[123918]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 06 06:37:17 compute-0 sudo[123916]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:37:18
Dec 06 06:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', '.rgw.root', 'vms', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data']
Dec 06 06:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:37:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:18 compute-0 ceph-mon[74339]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:18 compute-0 ceph-mon[74339]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:18 compute-0 ceph-mon[74339]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:18 compute-0 ceph-mon[74339]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:18 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 06:37:18 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 06:37:18 compute-0 ceph-mon[74339]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:18 compute-0 ceph-mon[74339]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:18 compute-0 ceph-mon[74339]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:18 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:37:18 compute-0 ceph-mon[74339]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:18 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:37:18 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:37:18 compute-0 ceph-mon[74339]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:37:18 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 11m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:37:18 compute-0 ceph-mon[74339]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 06:37:18 compute-0 ceph-mon[74339]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:37:18 compute-0 ceph-mon[74339]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:37:18 compute-0 ceph-mon[74339]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 06:37:18 compute-0 sudo[124069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orlmvprhssdumumwjtkybsagdmyokpmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003038.140527-1147-134167351946314/AnsiballZ_mount.py'
Dec 06 06:37:18 compute-0 sudo[124069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:18 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 06:37:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:18.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:18 compute-0 python3.9[124071]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 06 06:37:18 compute-0 sudo[124069]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:37:19 compute-0 ceph-mon[74339]: paxos.0).electionLogic(28) init, last seen epoch 28
Dec 06 06:37:19 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:19 compute-0 sshd-session[115799]: Connection closed by 192.168.122.30 port 46220
Dec 06 06:37:19 compute-0 sshd-session[115796]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:37:19 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Dec 06 06:37:19 compute-0 systemd[1]: session-40.scope: Consumed 29.503s CPU time.
Dec 06 06:37:19 compute-0 systemd-logind[798]: Session 40 logged out. Waiting for processes to exit.
Dec 06 06:37:19 compute-0 systemd-logind[798]: Removed session 40.
Dec 06 06:37:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:19.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:37:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:37:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:37:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:37:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 11m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:37:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 06:37:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 06:37:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 06:37:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:20.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:20 compute-0 ceph-mon[74339]: mon.compute-1 calling monitor election
Dec 06 06:37:20 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 06:37:20 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 06:37:20 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:37:20 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:37:20 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:37:20 compute-0 ceph-mon[74339]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:37:20 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 11m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:37:20 compute-0 ceph-mon[74339]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 06:37:20 compute-0 ceph-mon[74339]: Cluster is now healthy
Dec 06 06:37:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:37:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:21.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:37:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:22 compute-0 ceph-mon[74339]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:22 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 06:37:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:22.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:37:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:23.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:23 compute-0 ceph-mon[74339]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:24 compute-0 sshd-session[124101]: Accepted publickey for zuul from 192.168.122.30 port 43662 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:37:24 compute-0 systemd-logind[798]: New session 41 of user zuul.
Dec 06 06:37:24 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec 06 06:37:24 compute-0 sshd-session[124101]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:37:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:24.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:24 compute-0 ceph-mon[74339]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:25 compute-0 sudo[124254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgemgogdmqfflpssocukisqequidehls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003044.791381-23-238168983338944/AnsiballZ_tempfile.py'
Dec 06 06:37:25 compute-0 sudo[124254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:37:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:37:25 compute-0 python3.9[124256]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 06 06:37:25 compute-0 sudo[124254]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:37:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:25.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:37:26 compute-0 sudo[124406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flgcmtrmkylxtrohdgeqhqippmagiimh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003045.652653-59-178673226700123/AnsiballZ_stat.py'
Dec 06 06:37:26 compute-0 sudo[124406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:26 compute-0 python3.9[124408]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:37:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:26 compute-0 sudo[124406]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:26 compute-0 ceph-mon[74339]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:26.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:27 compute-0 sudo[124561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woeyaodjrkzydyxiciplasmfapiejrub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003046.4804819-83-115391503768164/AnsiballZ_slurp.py'
Dec 06 06:37:27 compute-0 sudo[124561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:27 compute-0 python3.9[124563]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 06 06:37:27 compute-0 sudo[124561]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:37:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:27.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:37:27 compute-0 sudo[124713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfvjmzohndrtjmhhjmwovjkxgsnpqqky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003047.4620361-107-45357268343686/AnsiballZ_stat.py'
Dec 06 06:37:27 compute-0 sudo[124713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:27 compute-0 python3.9[124715]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.8yib8_v3 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:37:27 compute-0 sudo[124713]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:28 compute-0 sudo[124839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdkvwntdobrksdcpcubxqqymlqfqowsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003047.4620361-107-45357268343686/AnsiballZ_copy.py'
Dec 06 06:37:28 compute-0 sudo[124839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:37:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:28.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:37:28 compute-0 python3.9[124841]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.8yib8_v3 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003047.4620361-107-45357268343686/.source.8yib8_v3 _original_basename=.j66ew9x0 follow=False checksum=660676d376f77098a981422bf7716e6cca0e00ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:28 compute-0 sudo[124839]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:29.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:29 compute-0 sudo[124991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yccfrojsyeuuhljhtjzrdmvcaqltylrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003049.0764267-152-230148554070641/AnsiballZ_setup.py'
Dec 06 06:37:29 compute-0 sudo[124991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:29 compute-0 python3.9[124993]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:37:30 compute-0 sudo[124991]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:30.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:31 compute-0 sudo[125144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srtcqauinxzxmahfltrtjcfwpwmtleop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003050.2335253-177-6834459628757/AnsiballZ_blockinfile.py'
Dec 06 06:37:31 compute-0 sudo[125144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:31 compute-0 python3.9[125146]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXhoI1gGk2X98AQB4B5ZyJDup3CVjjMbiB6L30cKGocfIwEEwBz5d1xDpwA7euANP32L9+ddZ3Cn0VVuebREE5y184Yi1sdS+2O6H8M7BUT+RANGW4sY7jPXbTTJt6Bp2WWZu+AKxIRGMoo0UfvvFdscomysN+yxWB/KZ/niGARJyw61l1eO1/8shGJiP1LBuA4mdwHMTBYwXiYjk6LgI/i5m6zQk5ggmw2nKJqCwwPGyf2Xf7/LbRDgnryAatph9gA4JZ+QXULUJ8U+ILis30MPOGNA7vJ07ovYFAVwsoKYRCsxrEpg8AxMeRikU+CERKL2QQPABlbuJKnDZFrW2kY/L+B2g+i8FDWpaug4GQ6ZO7REu47ARhAUnuaIuJrhJgLrDq43vTqCgagXFz7UHhLI6KXLayNe3B/4It4UaZIVv3X8K+bZiI0zMWNhyjIBAU5VFZd0QjZDjt+Wv5WMYEFiWDyil+NVEHCxdSl46yd68mUvgMxWiv2Z57ICY9i+k=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF08740jXaMkiBlZr9+3kjjW/VDtcxAKNNm3eT7v4C7q
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJUJnc2vyZ7uGIoKmOtL2ok+zmjLSq/3vZNdtT52cNcj41FV66OIff0lT2r5neBPmMGSlOfqKMRY8iTu1fJs+/c=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCTKFyfmCvsG9hwBkDHhSMH95Mc80Ub24C1l2ydnJGHzY4+vYnN+ZopDHd6HVKXgsP9msqkZAEdNXjbQn0sbWYw0v02CY6OcbMq306Rwo9N1fcSO6QVC6w79bGbRJasTA8jgAoGm1VSg3XpzU9C2Delv2ginn7LqCUou48j9w9jyaklDA2EV0anjvZ6hGLjcFaMQSlFPO8rr2pGS5nfNk2Re6GtYYWF4SPkd5xfecWi9szdT+tnG8VrwRX440/Pe3eV5UyVyHQzIEvxJK6DbTgtieOn0PVz3yHI3Uo8VatpsXahO8FsABY1GaI5QAj3qUudWz4YWsiV/qy0G5Wm27CB69LVPGWRr7y4+pVz0HxWiYGyYbRZdxHVZ3jqfGNBdMXJb2shp9BlIo+lpjEydkorHn66AKIpFCZFaGpHzkFPocaTP9yMPAxo/0YPllct7AqO/4CCBNhz4E6/0aMx2lsilFN6Oo/Mj0azpEQOHvuTEwqKJ5BK3MJCWgR11ccN7PM=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEDAV7OqwkHgR6GxlfPQDRQoPSdwQxyp1ILKzyPaTiD9
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeZVQBWQAXkYJ2DA26L5Jq1a5s2ScFbJ/Q/8jiRPf43wzW24IvQwAq99mI5t4QhVhmTRCbptw5L79elvFEyDY8=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXnm8ofk0+O7jBFjA8fasReq5BkfwaahdsMJZLiqB71W18faenPK2mMCw6/Lyxaif1wFQKBGWK21bUZXsQguqG7iZV8rSLfQmLlKel/CnEgi2ekXmEhhYL/5GAB8Hq0UxChaI6YQUu0gWku4cPruBw6/+Lz36/PvLLwKqQupEi8npPR3O7a4jF6Px433cpBkZ/hgwG2m5+61NMAcNSCjjNj1cdXLugpDN9+05k6A3QV1sDXS2Zx6zdxPhgmLDKZLBGesQaz+glwYPo/2KfwAwlU4tAuY5eSV2BPX04PqKqexy3iziex/q3pFmtD6f1cRmqFZiyNs+kOfsxwABOVKQ6GG1iKKgzHMsK/paqNWMoHBj0lrRIJoX88Fd2A5DdPs2UPHwy3iUxLYekNcgiigT3O/4x92cFRritKJ8i8j83J6wJOQ0DnpyWxu4WFCjI4mBSKeA0NQzqMPICgkmtmtYKfSlzSdaL9W56FqnfE5JHkSrspcV9xnX3D/ijnD/8PxU=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE4NkvA88mf0HvkHx7766e1aduefm45OK4uK2xW0LF1S
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIy6uH9XhIotH/UH4KICHfHUvzEiJMGjuOaC3xgcK45R/4kFK8w4At6C/G8bcf1l2+wNZCsHSuKrF09EzQCKCOU=
                                              create=True mode=0644 path=/tmp/ansible.8yib8_v3 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:31 compute-0 sudo[125144]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:31.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:31 compute-0 sudo[125171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:37:31 compute-0 sudo[125171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:31 compute-0 sudo[125171]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:31 compute-0 sudo[125201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:37:31 compute-0 sudo[125201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:31 compute-0 sudo[125201]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:32 compute-0 sudo[125347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxbeukhvkguizgoonwmzzbvbegwzmoqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003051.5366278-201-108003760568360/AnsiballZ_command.py'
Dec 06 06:37:32 compute-0 sudo[125347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:32 compute-0 python3.9[125349]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.8yib8_v3' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:37:32 compute-0 sudo[125347]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:32.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:33 compute-0 sudo[125501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-norwaeqroiomlklyrskgcpbkvxdzcvwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003052.6140726-225-2450345940720/AnsiballZ_file.py'
Dec 06 06:37:33 compute-0 sudo[125501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:33 compute-0 python3.9[125503]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.8yib8_v3 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:33 compute-0 sudo[125501]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:33.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:33 compute-0 sshd-session[124104]: Connection closed by 192.168.122.30 port 43662
Dec 06 06:37:33 compute-0 sshd-session[124101]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:37:33 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Dec 06 06:37:33 compute-0 systemd[1]: session-41.scope: Consumed 4.837s CPU time.
Dec 06 06:37:33 compute-0 systemd-logind[798]: Session 41 logged out. Waiting for processes to exit.
Dec 06 06:37:33 compute-0 systemd-logind[798]: Removed session 41.
Dec 06 06:37:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:34.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:35.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:36 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 06:37:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:36.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:37:37 compute-0 ceph-mon[74339]: paxos.0).electionLogic(31) init, last seen epoch 31, mid-election, bumping
Dec 06 06:37:37 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:37.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:38.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:38 compute-0 sshd-session[125531]: Accepted publickey for zuul from 192.168.122.30 port 42392 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:37:38 compute-0 systemd-logind[798]: New session 42 of user zuul.
Dec 06 06:37:38 compute-0 systemd[1]: Started Session 42 of User zuul.
Dec 06 06:37:38 compute-0 sshd-session[125531]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:37:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:39 compute-0 python3.9[125684]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:37:40 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 06:37:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:40.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:40 compute-0 sudo[125839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtqxkqlmatgkntsqqcpcboytsscrphfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003060.2756672-61-248422769635214/AnsiballZ_systemd.py'
Dec 06 06:37:40 compute-0 sudo[125839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:41 compute-0 python3.9[125841]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 06:37:41 compute-0 sudo[125839]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:41.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:41 compute-0 sudo[125993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onbtlyrjlxnjkgdriyuxwvvuonspggtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003061.5094488-85-93409279366600/AnsiballZ_systemd.py'
Dec 06 06:37:41 compute-0 sudo[125993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:37:42 compute-0 python3.9[125995]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:37:42 compute-0 sudo[125993]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:37:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:37:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:37:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 11m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:37:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 06:37:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:37:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:37:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 06:37:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:42.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:42 compute-0 sudo[126147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvcxuphttpcxqmxydelxrfulwnnocioj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003062.4780843-112-221491551968029/AnsiballZ_command.py'
Dec 06 06:37:42 compute-0 sudo[126147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:37:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:37:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:37:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:37:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:37:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:37:43 compute-0 python3.9[126149]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:37:43 compute-0 sudo[126147]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:43 compute-0 ceph-mon[74339]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:43 compute-0 ceph-mon[74339]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:43 compute-0 ceph-mon[74339]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:43 compute-0 ceph-mon[74339]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:43 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 06:37:43 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 06:37:43 compute-0 ceph-mon[74339]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:43 compute-0 ceph-mon[74339]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:43 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:37:43 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:37:43 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:37:43 compute-0 ceph-mon[74339]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:37:43 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 11m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:37:43 compute-0 ceph-mon[74339]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 06:37:43 compute-0 ceph-mon[74339]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:37:43 compute-0 ceph-mon[74339]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:37:43 compute-0 ceph-mon[74339]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 06:37:43 compute-0 ceph-mon[74339]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:43.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:43 compute-0 sudo[126300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgnrrawgvkawupumrzxlgxbdjgcqojxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003063.3355744-136-182354292639200/AnsiballZ_stat.py'
Dec 06 06:37:43 compute-0 sudo[126300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:43 compute-0 python3.9[126302]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:37:44 compute-0 sudo[126300]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:37:44 compute-0 ceph-mon[74339]: paxos.0).electionLogic(34) init, last seen epoch 34
Dec 06 06:37:44 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:37:44 compute-0 sudo[126453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whgsliibacxrqbupbtqtxfqjclwxxgnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003064.1865246-163-105161496999778/AnsiballZ_file.py'
Dec 06 06:37:44 compute-0 sudo[126453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:37:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:37:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:37:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:37:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 11m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:37:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 06:37:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 06:37:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:44.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:44 compute-0 python3.9[126455]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:37:44 compute-0 sudo[126453]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 06:37:45 compute-0 sshd-session[125534]: Connection closed by 192.168.122.30 port 42392
Dec 06 06:37:45 compute-0 sshd-session[125531]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:37:45 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Dec 06 06:37:45 compute-0 systemd[1]: session-42.scope: Consumed 3.925s CPU time.
Dec 06 06:37:45 compute-0 systemd-logind[798]: Session 42 logged out. Waiting for processes to exit.
Dec 06 06:37:45 compute-0 systemd-logind[798]: Removed session 42.
Dec 06 06:37:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:45 compute-0 ceph-mon[74339]: mon.compute-1 calling monitor election
Dec 06 06:37:45 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 06:37:45 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 06:37:45 compute-0 ceph-mon[74339]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:45 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:37:45 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:37:45 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:37:45 compute-0 ceph-mon[74339]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:37:45 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 11m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:37:45 compute-0 ceph-mon[74339]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 06:37:45 compute-0 ceph-mon[74339]: Cluster is now healthy
Dec 06 06:37:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:46.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:47 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 06:37:47 compute-0 ceph-mon[74339]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:37:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:47.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:37:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:48.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:49.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:49 compute-0 ceph-mon[74339]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.531350) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069531428, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 631, "num_deletes": 251, "total_data_size": 653067, "memory_usage": 665848, "flush_reason": "Manual Compaction"}
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069539937, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 615772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10289, "largest_seqno": 10919, "table_properties": {"data_size": 612499, "index_size": 1117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8491, "raw_average_key_size": 19, "raw_value_size": 605511, "raw_average_value_size": 1424, "num_data_blocks": 50, "num_entries": 425, "num_filter_entries": 425, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003010, "oldest_key_time": 1765003010, "file_creation_time": 1765003069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 8657 microseconds, and 5467 cpu microseconds.
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.540007) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 615772 bytes OK
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.540035) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.541204) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.541233) EVENT_LOG_v1 {"time_micros": 1765003069541224, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.541262) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 649575, prev total WAL file size 649575, number of live WAL files 2.
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.542173) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(601KB)], [23(9315KB)]
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069542244, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10154585, "oldest_snapshot_seqno": -1}
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3983 keys, 8538496 bytes, temperature: kUnknown
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069595262, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 8538496, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8508151, "index_size": 19291, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 98419, "raw_average_key_size": 24, "raw_value_size": 8432351, "raw_average_value_size": 2117, "num_data_blocks": 839, "num_entries": 3983, "num_filter_entries": 3983, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765003069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.595563) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 8538496 bytes
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.596830) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.2 rd, 160.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(30.4) write-amplify(13.9) OK, records in: 4505, records dropped: 522 output_compression: NoCompression
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.596854) EVENT_LOG_v1 {"time_micros": 1765003069596842, "job": 8, "event": "compaction_finished", "compaction_time_micros": 53105, "compaction_time_cpu_micros": 23231, "output_level": 6, "num_output_files": 1, "total_output_size": 8538496, "num_input_records": 4505, "num_output_records": 3983, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069597066, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003069598714, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.541987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.598814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.598820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.598822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.598824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:37:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:37:49.598826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:37:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:50 compute-0 ceph-mon[74339]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:50.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:50 compute-0 sshd-session[126483]: Accepted publickey for zuul from 192.168.122.30 port 43230 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:37:51 compute-0 systemd-logind[798]: New session 43 of user zuul.
Dec 06 06:37:51 compute-0 systemd[1]: Started Session 43 of User zuul.
Dec 06 06:37:51 compute-0 sshd-session[126483]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:37:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:51.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:51 compute-0 sudo[126571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:37:51 compute-0 sudo[126571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:51 compute-0 sudo[126571]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:51 compute-0 sudo[126611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:37:51 compute-0 sudo[126611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:51 compute-0 sudo[126611]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:52 compute-0 python3.9[126686]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:37:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:37:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:52.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:37:53 compute-0 ceph-mon[74339]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:53 compute-0 sudo[126841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omuhukhsbhkikgcxuesodqulkskmueuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003072.8261635-67-99697427585565/AnsiballZ_setup.py'
Dec 06 06:37:53 compute-0 sudo[126841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:53 compute-0 python3.9[126843]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:37:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:53.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:53 compute-0 sudo[126841]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:54 compute-0 sudo[126925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axktvfdnjxfkzkcmuemthsmomugpvaku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003072.8261635-67-99697427585565/AnsiballZ_dnf.py'
Dec 06 06:37:54 compute-0 sudo[126925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:37:54 compute-0 python3.9[126927]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 06:37:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:37:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:54.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:37:55 compute-0 ceph-mon[74339]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:55.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:55 compute-0 sudo[126925]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:56 compute-0 python3.9[127080]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:37:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:56.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:56 compute-0 ceph-mon[74339]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:57.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:58 compute-0 python3.9[127231]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 06:37:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:58 compute-0 ceph-mon[74339]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:37:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:37:58 compute-0 sudo[127383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:37:58 compute-0 sudo[127383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:58 compute-0 sudo[127383]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:37:58.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:58 compute-0 sudo[127408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:37:58 compute-0 sudo[127408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:58 compute-0 sudo[127408]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:58 compute-0 python3.9[127382]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:37:58 compute-0 sudo[127433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:37:58 compute-0 sudo[127433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:58 compute-0 sudo[127433]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:58 compute-0 sudo[127470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:37:58 compute-0 sudo[127470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:37:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:37:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:37:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:37:59 compute-0 sudo[127470]: pam_unix(sudo:session): session closed for user root
Dec 06 06:37:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:37:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:37:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:37:59.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:37:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:37:59 compute-0 python3.9[127653]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:38:00 compute-0 sshd-session[126486]: Connection closed by 192.168.122.30 port 43230
Dec 06 06:38:00 compute-0 sshd-session[126483]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:38:00 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Dec 06 06:38:00 compute-0 systemd[1]: session-43.scope: Consumed 6.017s CPU time.
Dec 06 06:38:00 compute-0 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Dec 06 06:38:00 compute-0 systemd-logind[798]: Removed session 43.
Dec 06 06:38:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:00.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:38:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:01.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:01 compute-0 ceph-mon[74339]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:38:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:38:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:38:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:38:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:38:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:38:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000056s ======
Dec 06 06:38:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:02.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Dec 06 06:38:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:02 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 095183f7-924a-4054-8bee-c4f77e6994f0 does not exist
Dec 06 06:38:02 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2b5cd57d-6a12-4646-9381-e28875432923 does not exist
Dec 06 06:38:02 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5c01e2e1-a150-4342-ad4c-3a6485348485 does not exist
Dec 06 06:38:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:38:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:38:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:38:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:38:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:38:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:38:02 compute-0 sudo[127691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:02 compute-0 sudo[127691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:02 compute-0 sudo[127691]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:03 compute-0 sudo[127716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:38:03 compute-0 sudo[127716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:03 compute-0 sudo[127716]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:03 compute-0 sudo[127741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:03 compute-0 sudo[127741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:03 compute-0 sudo[127741]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:03 compute-0 sudo[127766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:38:03 compute-0 sudo[127766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:03 compute-0 podman[127833]: 2025-12-06 06:38:03.501781093 +0000 UTC m=+0.046422904 container create adef0ac39661f4f43fafed71d25dd7ea839a5f640d8d9373973833a704309ba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heyrovsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 06:38:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:03.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:03 compute-0 systemd[1]: Started libpod-conmon-adef0ac39661f4f43fafed71d25dd7ea839a5f640d8d9373973833a704309ba9.scope.
Dec 06 06:38:03 compute-0 podman[127833]: 2025-12-06 06:38:03.481802774 +0000 UTC m=+0.026444585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:38:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:38:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:03 compute-0 ceph-mon[74339]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:38:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:38:03 compute-0 podman[127833]: 2025-12-06 06:38:03.605714213 +0000 UTC m=+0.150356044 container init adef0ac39661f4f43fafed71d25dd7ea839a5f640d8d9373973833a704309ba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heyrovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 06:38:03 compute-0 podman[127833]: 2025-12-06 06:38:03.614230076 +0000 UTC m=+0.158871887 container start adef0ac39661f4f43fafed71d25dd7ea839a5f640d8d9373973833a704309ba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 06:38:03 compute-0 podman[127833]: 2025-12-06 06:38:03.618693603 +0000 UTC m=+0.163335434 container attach adef0ac39661f4f43fafed71d25dd7ea839a5f640d8d9373973833a704309ba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heyrovsky, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:38:03 compute-0 zealous_heyrovsky[127850]: 167 167
Dec 06 06:38:03 compute-0 systemd[1]: libpod-adef0ac39661f4f43fafed71d25dd7ea839a5f640d8d9373973833a704309ba9.scope: Deactivated successfully.
Dec 06 06:38:03 compute-0 podman[127833]: 2025-12-06 06:38:03.624621192 +0000 UTC m=+0.169263003 container died adef0ac39661f4f43fafed71d25dd7ea839a5f640d8d9373973833a704309ba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heyrovsky, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:38:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2244bae2282f30d66deabdb4571525bbfd903a22edcd34c7712395c95de19f4-merged.mount: Deactivated successfully.
Dec 06 06:38:03 compute-0 podman[127833]: 2025-12-06 06:38:03.675405089 +0000 UTC m=+0.220046890 container remove adef0ac39661f4f43fafed71d25dd7ea839a5f640d8d9373973833a704309ba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:38:03 compute-0 systemd[1]: libpod-conmon-adef0ac39661f4f43fafed71d25dd7ea839a5f640d8d9373973833a704309ba9.scope: Deactivated successfully.
Dec 06 06:38:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:03 compute-0 podman[127875]: 2025-12-06 06:38:03.847046647 +0000 UTC m=+0.047101132 container create 8651656ba2423f4be2b5b4903857d5acaef34b0dfc58f6f70253f9c9f4705940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:38:03 compute-0 systemd[1]: Started libpod-conmon-8651656ba2423f4be2b5b4903857d5acaef34b0dfc58f6f70253f9c9f4705940.scope.
Dec 06 06:38:03 compute-0 podman[127875]: 2025-12-06 06:38:03.827576893 +0000 UTC m=+0.027631388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:38:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837b6f2d1d8b051b965781f4e0f0338fe3b60de9086f576c3796b70bb7af60bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837b6f2d1d8b051b965781f4e0f0338fe3b60de9086f576c3796b70bb7af60bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837b6f2d1d8b051b965781f4e0f0338fe3b60de9086f576c3796b70bb7af60bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837b6f2d1d8b051b965781f4e0f0338fe3b60de9086f576c3796b70bb7af60bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837b6f2d1d8b051b965781f4e0f0338fe3b60de9086f576c3796b70bb7af60bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:03 compute-0 podman[127875]: 2025-12-06 06:38:03.95733746 +0000 UTC m=+0.157391945 container init 8651656ba2423f4be2b5b4903857d5acaef34b0dfc58f6f70253f9c9f4705940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:38:03 compute-0 podman[127875]: 2025-12-06 06:38:03.965260114 +0000 UTC m=+0.165314579 container start 8651656ba2423f4be2b5b4903857d5acaef34b0dfc58f6f70253f9c9f4705940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 06:38:03 compute-0 podman[127875]: 2025-12-06 06:38:03.995402503 +0000 UTC m=+0.195456998 container attach 8651656ba2423f4be2b5b4903857d5acaef34b0dfc58f6f70253f9c9f4705940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:38:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:38:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:38:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:38:04 compute-0 ceph-mon[74339]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:04 compute-0 cool_torvalds[127892]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:38:04 compute-0 cool_torvalds[127892]: --> relative data size: 1.0
Dec 06 06:38:04 compute-0 cool_torvalds[127892]: --> All data devices are unavailable
Dec 06 06:38:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:38:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:04.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:38:04 compute-0 systemd[1]: libpod-8651656ba2423f4be2b5b4903857d5acaef34b0dfc58f6f70253f9c9f4705940.scope: Deactivated successfully.
Dec 06 06:38:04 compute-0 podman[127875]: 2025-12-06 06:38:04.83293251 +0000 UTC m=+1.032986985 container died 8651656ba2423f4be2b5b4903857d5acaef34b0dfc58f6f70253f9c9f4705940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_torvalds, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 06:38:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-837b6f2d1d8b051b965781f4e0f0338fe3b60de9086f576c3796b70bb7af60bf-merged.mount: Deactivated successfully.
Dec 06 06:38:04 compute-0 podman[127875]: 2025-12-06 06:38:04.907838884 +0000 UTC m=+1.107893359 container remove 8651656ba2423f4be2b5b4903857d5acaef34b0dfc58f6f70253f9c9f4705940 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:38:04 compute-0 systemd[1]: libpod-conmon-8651656ba2423f4be2b5b4903857d5acaef34b0dfc58f6f70253f9c9f4705940.scope: Deactivated successfully.
Dec 06 06:38:04 compute-0 sudo[127766]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:05 compute-0 sudo[127922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:05 compute-0 sudo[127922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:05 compute-0 sudo[127922]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:05 compute-0 sudo[127947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:38:05 compute-0 sudo[127947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:05 compute-0 sudo[127947]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:05 compute-0 sudo[127974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:05 compute-0 sshd-session[127970]: Accepted publickey for zuul from 192.168.122.30 port 60420 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:38:05 compute-0 sudo[127974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:05 compute-0 sudo[127974]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:05 compute-0 systemd-logind[798]: New session 44 of user zuul.
Dec 06 06:38:05 compute-0 systemd[1]: Started Session 44 of User zuul.
Dec 06 06:38:05 compute-0 sshd-session[127970]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:38:05 compute-0 sudo[128000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:38:05 compute-0 sudo[128000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:05.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:05 compute-0 podman[128120]: 2025-12-06 06:38:05.611899868 +0000 UTC m=+0.037565580 container create d9ef4eeed9ee94d7925bd8dadee2a12b312f639bb6aaa3a40743ca9704505471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:38:05 compute-0 systemd[1]: Started libpod-conmon-d9ef4eeed9ee94d7925bd8dadee2a12b312f639bb6aaa3a40743ca9704505471.scope.
Dec 06 06:38:05 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:38:05 compute-0 podman[128120]: 2025-12-06 06:38:05.596050267 +0000 UTC m=+0.021716009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:38:05 compute-0 podman[128120]: 2025-12-06 06:38:05.692313849 +0000 UTC m=+0.117979581 container init d9ef4eeed9ee94d7925bd8dadee2a12b312f639bb6aaa3a40743ca9704505471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 06:38:05 compute-0 podman[128120]: 2025-12-06 06:38:05.699437412 +0000 UTC m=+0.125103124 container start d9ef4eeed9ee94d7925bd8dadee2a12b312f639bb6aaa3a40743ca9704505471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 06:38:05 compute-0 podman[128120]: 2025-12-06 06:38:05.703194359 +0000 UTC m=+0.128860071 container attach d9ef4eeed9ee94d7925bd8dadee2a12b312f639bb6aaa3a40743ca9704505471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendel, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:38:05 compute-0 focused_mendel[128136]: 167 167
Dec 06 06:38:05 compute-0 systemd[1]: libpod-d9ef4eeed9ee94d7925bd8dadee2a12b312f639bb6aaa3a40743ca9704505471.scope: Deactivated successfully.
Dec 06 06:38:05 compute-0 podman[128120]: 2025-12-06 06:38:05.707503361 +0000 UTC m=+0.133169103 container died d9ef4eeed9ee94d7925bd8dadee2a12b312f639bb6aaa3a40743ca9704505471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 06:38:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6c863cc591ebb692a2db236316b547ea926baa9fbab98e6d1dcabbcd2e1b165-merged.mount: Deactivated successfully.
Dec 06 06:38:05 compute-0 podman[128120]: 2025-12-06 06:38:05.75131408 +0000 UTC m=+0.176979812 container remove d9ef4eeed9ee94d7925bd8dadee2a12b312f639bb6aaa3a40743ca9704505471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:38:05 compute-0 systemd[1]: libpod-conmon-d9ef4eeed9ee94d7925bd8dadee2a12b312f639bb6aaa3a40743ca9704505471.scope: Deactivated successfully.
Dec 06 06:38:05 compute-0 podman[128207]: 2025-12-06 06:38:05.927651103 +0000 UTC m=+0.053969578 container create 9d247094c1956d97f2a6bd6500910c34b96c3a4f59a58084467e3c20905b163d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 06:38:05 compute-0 systemd[1]: Started libpod-conmon-9d247094c1956d97f2a6bd6500910c34b96c3a4f59a58084467e3c20905b163d.scope.
Dec 06 06:38:05 compute-0 podman[128207]: 2025-12-06 06:38:05.903599727 +0000 UTC m=+0.029918252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:38:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021495f5616b32484cf4c0034876d0bc9376981d83d43774d9e630490c027e45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021495f5616b32484cf4c0034876d0bc9376981d83d43774d9e630490c027e45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021495f5616b32484cf4c0034876d0bc9376981d83d43774d9e630490c027e45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/021495f5616b32484cf4c0034876d0bc9376981d83d43774d9e630490c027e45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:06 compute-0 podman[128207]: 2025-12-06 06:38:06.026961001 +0000 UTC m=+0.153279496 container init 9d247094c1956d97f2a6bd6500910c34b96c3a4f59a58084467e3c20905b163d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banzai, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:38:06 compute-0 podman[128207]: 2025-12-06 06:38:06.035421942 +0000 UTC m=+0.161740417 container start 9d247094c1956d97f2a6bd6500910c34b96c3a4f59a58084467e3c20905b163d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banzai, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 06:38:06 compute-0 podman[128207]: 2025-12-06 06:38:06.040412354 +0000 UTC m=+0.166730829 container attach 9d247094c1956d97f2a6bd6500910c34b96c3a4f59a58084467e3c20905b163d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 06:38:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:06 compute-0 python3.9[128274]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:38:06 compute-0 ceph-mon[74339]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:38:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:06.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:38:06 compute-0 competent_banzai[128270]: {
Dec 06 06:38:06 compute-0 competent_banzai[128270]:     "0": [
Dec 06 06:38:06 compute-0 competent_banzai[128270]:         {
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             "devices": [
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "/dev/loop3"
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             ],
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             "lv_name": "ceph_lv0",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             "lv_size": "7511998464",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             "name": "ceph_lv0",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             "tags": {
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "ceph.cluster_name": "ceph",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "ceph.crush_device_class": "",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "ceph.encrypted": "0",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "ceph.osd_id": "0",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "ceph.type": "block",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:                 "ceph.vdo": "0"
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             },
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             "type": "block",
Dec 06 06:38:06 compute-0 competent_banzai[128270]:             "vg_name": "ceph_vg0"
Dec 06 06:38:06 compute-0 competent_banzai[128270]:         }
Dec 06 06:38:06 compute-0 competent_banzai[128270]:     ]
Dec 06 06:38:06 compute-0 competent_banzai[128270]: }
Dec 06 06:38:06 compute-0 systemd[1]: libpod-9d247094c1956d97f2a6bd6500910c34b96c3a4f59a58084467e3c20905b163d.scope: Deactivated successfully.
Dec 06 06:38:06 compute-0 podman[128207]: 2025-12-06 06:38:06.93231095 +0000 UTC m=+1.058629425 container died 9d247094c1956d97f2a6bd6500910c34b96c3a4f59a58084467e3c20905b163d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 06 06:38:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-021495f5616b32484cf4c0034876d0bc9376981d83d43774d9e630490c027e45-merged.mount: Deactivated successfully.
Dec 06 06:38:06 compute-0 podman[128207]: 2025-12-06 06:38:06.992437432 +0000 UTC m=+1.118755907 container remove 9d247094c1956d97f2a6bd6500910c34b96c3a4f59a58084467e3c20905b163d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 06:38:06 compute-0 systemd[1]: libpod-conmon-9d247094c1956d97f2a6bd6500910c34b96c3a4f59a58084467e3c20905b163d.scope: Deactivated successfully.
Dec 06 06:38:07 compute-0 sudo[128000]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:07 compute-0 sudo[128324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:07 compute-0 sudo[128324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:07 compute-0 sudo[128324]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:07 compute-0 sudo[128349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:38:07 compute-0 sudo[128349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:07 compute-0 sudo[128349]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:07 compute-0 sudo[128374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:07 compute-0 sudo[128374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:07 compute-0 sudo[128374]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:07 compute-0 sudo[128399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:38:07 compute-0 sudo[128399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:07.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:07 compute-0 podman[128516]: 2025-12-06 06:38:07.627840832 +0000 UTC m=+0.040513166 container create 59196737495be267f96521d9049ce6d232401bf0462ed4ea508ec8dcff3ba432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:38:07 compute-0 systemd[1]: Started libpod-conmon-59196737495be267f96521d9049ce6d232401bf0462ed4ea508ec8dcff3ba432.scope.
Dec 06 06:38:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:38:07 compute-0 podman[128516]: 2025-12-06 06:38:07.610603321 +0000 UTC m=+0.023275675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:38:07 compute-0 podman[128516]: 2025-12-06 06:38:07.70852412 +0000 UTC m=+0.121196464 container init 59196737495be267f96521d9049ce6d232401bf0462ed4ea508ec8dcff3ba432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec 06 06:38:07 compute-0 podman[128516]: 2025-12-06 06:38:07.717070374 +0000 UTC m=+0.129742698 container start 59196737495be267f96521d9049ce6d232401bf0462ed4ea508ec8dcff3ba432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 06:38:07 compute-0 lucid_ardinghelli[128562]: 167 167
Dec 06 06:38:07 compute-0 podman[128516]: 2025-12-06 06:38:07.722402145 +0000 UTC m=+0.135074479 container attach 59196737495be267f96521d9049ce6d232401bf0462ed4ea508ec8dcff3ba432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 06:38:07 compute-0 systemd[1]: libpod-59196737495be267f96521d9049ce6d232401bf0462ed4ea508ec8dcff3ba432.scope: Deactivated successfully.
Dec 06 06:38:07 compute-0 podman[128516]: 2025-12-06 06:38:07.723912728 +0000 UTC m=+0.136585082 container died 59196737495be267f96521d9049ce6d232401bf0462ed4ea508ec8dcff3ba432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 06:38:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3a321b6d1551dbfe1de3778d8ea50a22b7323d1755f209e4095cd8829c8853d-merged.mount: Deactivated successfully.
Dec 06 06:38:07 compute-0 podman[128516]: 2025-12-06 06:38:07.759554474 +0000 UTC m=+0.172226808 container remove 59196737495be267f96521d9049ce6d232401bf0462ed4ea508ec8dcff3ba432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_ardinghelli, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:38:07 compute-0 sudo[128618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inatpnaopxwjbvceqvtxczjdrujgkzex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003087.3281157-116-241813114051275/AnsiballZ_file.py'
Dec 06 06:38:07 compute-0 sudo[128618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:07 compute-0 systemd[1]: libpod-conmon-59196737495be267f96521d9049ce6d232401bf0462ed4ea508ec8dcff3ba432.scope: Deactivated successfully.
Dec 06 06:38:07 compute-0 podman[128633]: 2025-12-06 06:38:07.923425041 +0000 UTC m=+0.048058160 container create 0c92fa740f81085450cafe5d19db448e625f2bfe1d5350f0ddcbe9ae045bd95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 06:38:07 compute-0 systemd[1]: Started libpod-conmon-0c92fa740f81085450cafe5d19db448e625f2bfe1d5350f0ddcbe9ae045bd95f.scope.
Dec 06 06:38:07 compute-0 python3.9[128625]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:07 compute-0 podman[128633]: 2025-12-06 06:38:07.902010961 +0000 UTC m=+0.026644080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:38:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f739aed0cfd79d00c70e3b9d4f37aee7f59199e192759aedad8af26c4e672b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f739aed0cfd79d00c70e3b9d4f37aee7f59199e192759aedad8af26c4e672b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f739aed0cfd79d00c70e3b9d4f37aee7f59199e192759aedad8af26c4e672b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f739aed0cfd79d00c70e3b9d4f37aee7f59199e192759aedad8af26c4e672b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:38:08 compute-0 sudo[128618]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:08 compute-0 podman[128633]: 2025-12-06 06:38:08.019379145 +0000 UTC m=+0.144012264 container init 0c92fa740f81085450cafe5d19db448e625f2bfe1d5350f0ddcbe9ae045bd95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_burnell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:38:08 compute-0 podman[128633]: 2025-12-06 06:38:08.029676967 +0000 UTC m=+0.154310086 container start 0c92fa740f81085450cafe5d19db448e625f2bfe1d5350f0ddcbe9ae045bd95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:38:08 compute-0 podman[128633]: 2025-12-06 06:38:08.033236499 +0000 UTC m=+0.157869618 container attach 0c92fa740f81085450cafe5d19db448e625f2bfe1d5350f0ddcbe9ae045bd95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:38:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:08 compute-0 sudo[128802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrdxkkftdzmehdnbggdvoehflaxrnpfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003088.1663961-116-262088987708147/AnsiballZ_file.py'
Dec 06 06:38:08 compute-0 sudo[128802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:08 compute-0 python3.9[128804]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:08 compute-0 sudo[128802]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:38:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:08.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:38:08 compute-0 reverent_burnell[128647]: {
Dec 06 06:38:08 compute-0 reverent_burnell[128647]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:38:08 compute-0 reverent_burnell[128647]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:38:08 compute-0 reverent_burnell[128647]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:38:08 compute-0 reverent_burnell[128647]:         "osd_id": 0,
Dec 06 06:38:08 compute-0 reverent_burnell[128647]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:38:08 compute-0 reverent_burnell[128647]:         "type": "bluestore"
Dec 06 06:38:08 compute-0 reverent_burnell[128647]:     }
Dec 06 06:38:08 compute-0 reverent_burnell[128647]: }
Dec 06 06:38:08 compute-0 systemd[1]: libpod-0c92fa740f81085450cafe5d19db448e625f2bfe1d5350f0ddcbe9ae045bd95f.scope: Deactivated successfully.
Dec 06 06:38:08 compute-0 podman[128633]: 2025-12-06 06:38:08.930796206 +0000 UTC m=+1.055429325 container died 0c92fa740f81085450cafe5d19db448e625f2bfe1d5350f0ddcbe9ae045bd95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_burnell, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Dec 06 06:38:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-90f739aed0cfd79d00c70e3b9d4f37aee7f59199e192759aedad8af26c4e672b-merged.mount: Deactivated successfully.
Dec 06 06:38:08 compute-0 podman[128633]: 2025-12-06 06:38:08.989793776 +0000 UTC m=+1.114426895 container remove 0c92fa740f81085450cafe5d19db448e625f2bfe1d5350f0ddcbe9ae045bd95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:38:09 compute-0 systemd[1]: libpod-conmon-0c92fa740f81085450cafe5d19db448e625f2bfe1d5350f0ddcbe9ae045bd95f.scope: Deactivated successfully.
Dec 06 06:38:09 compute-0 sudo[128399]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:38:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:38:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ebdae953-d1f5-456d-b669-3c2b210d31ab does not exist
Dec 06 06:38:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5e8221f8-3421-4c10-af37-9dc1440871b5 does not exist
Dec 06 06:38:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ce1254c7-60d3-482b-af29-19a5c9c1449b does not exist
Dec 06 06:38:09 compute-0 sudo[128990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjbraapedtiffswyxtckrjwiuexdzabt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003088.8376431-162-198973578520636/AnsiballZ_stat.py'
Dec 06 06:38:09 compute-0 sudo[128990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:09 compute-0 sudo[128972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:09 compute-0 sudo[128972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:09 compute-0 sudo[128972]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:09 compute-0 sudo[129010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:38:09 compute-0 sudo[129010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:09 compute-0 sudo[129010]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:09 compute-0 python3.9[129007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:09 compute-0 sudo[128990]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:09 compute-0 ceph-mon[74339]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:38:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:09.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:09 compute-0 sudo[129155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ammpkkoxpoqyvgirvzlednwacjlpshpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003088.8376431-162-198973578520636/AnsiballZ_copy.py'
Dec 06 06:38:09 compute-0 sudo[129155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:10 compute-0 python3.9[129157]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003088.8376431-162-198973578520636/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=a1c30bf462d1140b54690ce875410a788c7900c4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:10 compute-0 sudo[129155]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:10 compute-0 sudo[129308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwpgxmsmwmyquumxmugrrvtuocgriypg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003090.1742969-162-223201068420797/AnsiballZ_stat.py'
Dec 06 06:38:10 compute-0 sudo[129308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:10 compute-0 python3.9[129310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:10 compute-0 sudo[129308]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:38:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:10.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:38:10 compute-0 sudo[129431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekooodewzdpviymixnfvirnjzribrmjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003090.1742969-162-223201068420797/AnsiballZ_copy.py'
Dec 06 06:38:10 compute-0 sudo[129431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:11 compute-0 python3.9[129433]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003090.1742969-162-223201068420797/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ffa5eebd660d3b98ce1601104e9075977b0c55a4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:11 compute-0 sudo[129431]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:11 compute-0 ceph-mon[74339]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:11.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:11 compute-0 sudo[129583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnplieddxhaeghlcreadjnbxgtbwrxaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003091.3143356-162-189807870607388/AnsiballZ_stat.py'
Dec 06 06:38:11 compute-0 sudo[129583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:11 compute-0 python3.9[129585]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:11 compute-0 sudo[129583]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:11 compute-0 sudo[129586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:11 compute-0 sudo[129586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:11 compute-0 sudo[129586]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:11 compute-0 sudo[129618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:11 compute-0 sudo[129618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:11 compute-0 sudo[129618]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:12 compute-0 sudo[129756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpefnimegyjspnhqkknhvwhvlcursswg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003091.3143356-162-189807870607388/AnsiballZ_copy.py'
Dec 06 06:38:12 compute-0 sudo[129756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:12 compute-0 python3.9[129758]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003091.3143356-162-189807870607388/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=92788f0c6ff1713f76b6ba260ce1e1788023e635 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:12 compute-0 sudo[129756]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:12 compute-0 sudo[129909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brqacgsanmtezigbqfxngyysefdorptl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003092.5259004-293-218297240276175/AnsiballZ_file.py'
Dec 06 06:38:12 compute-0 sudo[129909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:12.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:38:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:38:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:38:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:38:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:38:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:38:12 compute-0 python3.9[129911]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:12 compute-0 sudo[129909]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:13 compute-0 sudo[130061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obsqilqaqqvaupkdogskkerlwovxidkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003093.096837-293-155653388229828/AnsiballZ_file.py'
Dec 06 06:38:13 compute-0 sudo[130061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:13.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:13 compute-0 python3.9[130063]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:13 compute-0 sudo[130061]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:38:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 8740 writes, 34K keys, 8740 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8740 writes, 2108 syncs, 4.15 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8740 writes, 34K keys, 8740 commit groups, 1.0 writes per commit group, ingest: 21.41 MB, 0.04 MB/s
                                           Interval WAL: 8740 writes, 2108 syncs, 4.15 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 06:38:13 compute-0 sudo[130213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgzyhwireyutnnhzazmhmpnobrklofbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003093.7264278-325-122007843556295/AnsiballZ_stat.py'
Dec 06 06:38:13 compute-0 sudo[130213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:14 compute-0 python3.9[130215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:14 compute-0 sudo[130213]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:14 compute-0 ceph-mon[74339]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:14 compute-0 sudo[130337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlowilopobrwvpdcgejnbkrshldbtldx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003093.7264278-325-122007843556295/AnsiballZ_copy.py'
Dec 06 06:38:14 compute-0 sudo[130337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:14 compute-0 python3.9[130339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003093.7264278-325-122007843556295/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d035911de502066e636d7622e10396c4c65c8b79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:14 compute-0 sudo[130337]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:14.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:15 compute-0 sudo[130489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjauxeykbdxltpkdvtimhqtpuvvfokzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003094.9185183-325-278092655217478/AnsiballZ_stat.py'
Dec 06 06:38:15 compute-0 sudo[130489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:15 compute-0 python3.9[130491]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:15 compute-0 sudo[130489]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:15 compute-0 ceph-mon[74339]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:15.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:15 compute-0 sudo[130612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucoglqxhagicmtjcptyefvhxgagmysmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003094.9185183-325-278092655217478/AnsiballZ_copy.py'
Dec 06 06:38:15 compute-0 sudo[130612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:15 compute-0 python3.9[130614]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003094.9185183-325-278092655217478/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7302dfe8a711171d00de900e7fa863eaee0e2101 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:15 compute-0 sudo[130612]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:16 compute-0 sudo[130765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thjdehnijgcgigknslhesmxnnknwbukt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003096.0407844-325-175693761825844/AnsiballZ_stat.py'
Dec 06 06:38:16 compute-0 sudo[130765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:16 compute-0 python3.9[130767]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:16 compute-0 sudo[130765]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:16 compute-0 ceph-mon[74339]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:16 compute-0 sudo[130888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfnpvwgrbmeauluwykosikovybacghjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003096.0407844-325-175693761825844/AnsiballZ_copy.py'
Dec 06 06:38:16 compute-0 sudo[130888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:16.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:16 compute-0 python3.9[130890]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003096.0407844-325-175693761825844/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6662993f10ba40072985edbcf5695babc8f36dc1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:17 compute-0 sudo[130888]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:17 compute-0 sudo[131040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmidsoeteyxtynttqanwaioftyqtinzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003097.2398682-446-93806798937206/AnsiballZ_file.py'
Dec 06 06:38:17 compute-0 sudo[131040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:17.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:17 compute-0 python3.9[131042]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:17 compute-0 sudo[131040]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:18 compute-0 sudo[131192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtuvzlmvojpmmputkdzgucolnnfokkmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003097.8379493-446-169757552219582/AnsiballZ_file.py'
Dec 06 06:38:18 compute-0 sudo[131192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:38:18
Dec 06 06:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', 'images', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'backups']
Dec 06 06:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:38:18 compute-0 python3.9[131194]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:18 compute-0 sudo[131192]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:18 compute-0 sudo[131345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbcvkrhaivqxivkgpaxxjedcgllzymzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003098.4894657-494-256917821128617/AnsiballZ_stat.py'
Dec 06 06:38:18 compute-0 sudo[131345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:18.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:18 compute-0 python3.9[131347]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:18 compute-0 sudo[131345]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:19 compute-0 sudo[131468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xydotklzjeypgoejwacrmcojhaicrhcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003098.4894657-494-256917821128617/AnsiballZ_copy.py'
Dec 06 06:38:19 compute-0 sudo[131468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:19.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:19 compute-0 python3.9[131470]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003098.4894657-494-256917821128617/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=7cfd4d0362aa3df77d7246bbca5803d9527b2493 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:19 compute-0 sudo[131468]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:19 compute-0 sudo[131620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egronzukevdmbgyxshmdtiddstxldcrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003099.718464-494-259932569895243/AnsiballZ_stat.py'
Dec 06 06:38:19 compute-0 sudo[131620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:20 compute-0 ceph-mon[74339]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:20 compute-0 python3.9[131622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:20 compute-0 sudo[131620]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:20 compute-0 sudo[131744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acdsjqzebbdrhlmiwvlmigipipmbtazc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003099.718464-494-259932569895243/AnsiballZ_copy.py'
Dec 06 06:38:20 compute-0 sudo[131744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:20 compute-0 python3.9[131746]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003099.718464-494-259932569895243/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7302dfe8a711171d00de900e7fa863eaee0e2101 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:20 compute-0 sudo[131744]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:20.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:21 compute-0 sudo[131896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adrnucmcyiykotwebyadivfqyywfbodv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003100.8343444-494-235900624135091/AnsiballZ_stat.py'
Dec 06 06:38:21 compute-0 sudo[131896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:21 compute-0 python3.9[131898]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:21 compute-0 sudo[131896]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:21.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:21 compute-0 sudo[132019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osyzxolitbtfdmmcgktcwfrhawzuttdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003100.8343444-494-235900624135091/AnsiballZ_copy.py'
Dec 06 06:38:21 compute-0 sudo[132019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:21 compute-0 python3.9[132021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003100.8343444-494-235900624135091/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f607f46dafd075decc410904dbd266e44a2fd1ee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:21 compute-0 sudo[132019]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:22 compute-0 ceph-mon[74339]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 06:38:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:22.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:22 compute-0 sudo[132172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckgydirnyiourhpzeqcgaalnqfnefciv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003102.5867937-646-273676519600768/AnsiballZ_file.py'
Dec 06 06:38:22 compute-0 sudo[132172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:38:23 compute-0 python3.9[132174]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:23 compute-0 sudo[132172]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:38:23 compute-0 ceph-mon[74339]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:23.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:23 compute-0 sudo[132324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frsdtpwkmrdjlvjyscadflkyzrsavmas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003103.3506322-679-127037016852078/AnsiballZ_stat.py'
Dec 06 06:38:23 compute-0 sudo[132324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:23 compute-0 python3.9[132326]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:23 compute-0 sudo[132324]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:24 compute-0 sudo[132448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgvmuydoiqgqmjqoouuryrhkrjjjoxpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003103.3506322-679-127037016852078/AnsiballZ_copy.py'
Dec 06 06:38:24 compute-0 sudo[132448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:24 compute-0 python3.9[132450]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003103.3506322-679-127037016852078/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:24 compute-0 sudo[132448]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:24.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:24 compute-0 sudo[132600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqwsurwyzvyyzvmwgpfgollpsbykzymv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003104.654653-739-214436539354261/AnsiballZ_file.py'
Dec 06 06:38:24 compute-0 sudo[132600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:24 compute-0 ceph-mon[74339]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:25 compute-0 python3.9[132602]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:25 compute-0 sudo[132600]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:38:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:38:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:25.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:25 compute-0 sudo[132752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qycvajgaldotkvvomycxijyxcbjlyvti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003105.3281157-763-154880899492658/AnsiballZ_stat.py'
Dec 06 06:38:25 compute-0 sudo[132752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:25 compute-0 python3.9[132754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:25 compute-0 sudo[132752]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:26 compute-0 sudo[132875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtdslpraqcqsdqfwgpqjbessbptgmsgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003105.3281157-763-154880899492658/AnsiballZ_copy.py'
Dec 06 06:38:26 compute-0 sudo[132875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:26 compute-0 python3.9[132877]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003105.3281157-763-154880899492658/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:26 compute-0 sudo[132875]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:26 compute-0 sudo[133028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufdrtsnuaehkdvnpvuyqewyrgdiaxpvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003106.5595171-812-44142872518774/AnsiballZ_file.py'
Dec 06 06:38:26 compute-0 sudo[133028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:26 compute-0 ceph-mon[74339]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:26.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:27 compute-0 python3.9[133030]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:27 compute-0 sudo[133028]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:27 compute-0 sudo[133180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdxjpjzwbenhnjumzjisvakkwfxoduol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003107.1934671-834-218341234140455/AnsiballZ_stat.py'
Dec 06 06:38:27 compute-0 sudo[133180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:27.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:27 compute-0 python3.9[133182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:27 compute-0 sudo[133180]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:27 compute-0 sudo[133303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spdpmaqlknzgsfwrpxgoruygybnuvbyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003107.1934671-834-218341234140455/AnsiballZ_copy.py'
Dec 06 06:38:27 compute-0 sudo[133303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:28 compute-0 python3.9[133305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003107.1934671-834-218341234140455/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:28 compute-0 sudo[133303]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:28 compute-0 sudo[133456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rolqmfxqydjjrxyishisgisrcqyhwuzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003108.3666553-880-76660994236380/AnsiballZ_file.py'
Dec 06 06:38:28 compute-0 sudo[133456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:28 compute-0 ceph-mon[74339]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:28 compute-0 python3.9[133458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:28 compute-0 sudo[133456]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:28.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:29 compute-0 sudo[133608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbwldpzkusvnpmgveiyecmzpbcdmczva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003109.0269246-903-188184205345521/AnsiballZ_stat.py'
Dec 06 06:38:29 compute-0 sudo[133608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:29 compute-0 python3.9[133610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:29 compute-0 sudo[133608]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:29.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:29 compute-0 sudo[133731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxpnuyvdngwaptwjrricgvezlkechuel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003109.0269246-903-188184205345521/AnsiballZ_copy.py'
Dec 06 06:38:29 compute-0 sudo[133731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:30 compute-0 python3.9[133733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003109.0269246-903-188184205345521/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:30 compute-0 sudo[133731]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:30 compute-0 sudo[133884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqeqzhhdblxgsmhnwoytdxzkbzmpnxgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003110.2708066-951-224564302097710/AnsiballZ_file.py'
Dec 06 06:38:30 compute-0 sudo[133884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:30 compute-0 ceph-mon[74339]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:30 compute-0 python3.9[133886]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:30 compute-0 sudo[133884]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:30.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:31 compute-0 sudo[134036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtaacwfgzoyeplrsstvvxnklnjhvuhea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003110.9372308-975-70340233067515/AnsiballZ_stat.py'
Dec 06 06:38:31 compute-0 sudo[134036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:31.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:31 compute-0 python3.9[134038]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:31 compute-0 sudo[134036]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:31 compute-0 sudo[134114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:31 compute-0 sudo[134114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:31 compute-0 sudo[134114]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:31 compute-0 sudo[134203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsjxeymijbexzxipanucnkwuqvajwyla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003110.9372308-975-70340233067515/AnsiballZ_copy.py'
Dec 06 06:38:32 compute-0 sudo[134203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:32 compute-0 sudo[134169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:32 compute-0 sudo[134169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:32 compute-0 sudo[134169]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:32 compute-0 python3.9[134209]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003110.9372308-975-70340233067515/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:32 compute-0 sudo[134203]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:32 compute-0 sudo[134362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfwrnlhgaislnwsbzebxbrxobnbampfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003112.469206-1028-103737112682737/AnsiballZ_file.py'
Dec 06 06:38:32 compute-0 sudo[134362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:32.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:32 compute-0 python3.9[134364]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:32 compute-0 sudo[134362]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:33 compute-0 sudo[134514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shzzyotkstsgifhiksxlgwhfyleaktds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003113.1850932-1052-156907059360189/AnsiballZ_stat.py'
Dec 06 06:38:33 compute-0 sudo[134514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:33 compute-0 ceph-mon[74339]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:33.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:33 compute-0 python3.9[134516]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:33 compute-0 sudo[134514]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:34 compute-0 sudo[134637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znrwgnzrobrjuiptmikxshybmfpxkoqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003113.1850932-1052-156907059360189/AnsiballZ_copy.py'
Dec 06 06:38:34 compute-0 sudo[134637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:34 compute-0 python3.9[134639]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003113.1850932-1052-156907059360189/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=4fb1377fac822006b36da2922ca9605bec411794 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:34 compute-0 sudo[134637]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:34 compute-0 ceph-mon[74339]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:34.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:35.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:36 compute-0 sshd-session[128008]: Connection closed by 192.168.122.30 port 60420
Dec 06 06:38:36 compute-0 sshd-session[127970]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:38:36 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Dec 06 06:38:36 compute-0 systemd[1]: session-44.scope: Consumed 22.578s CPU time.
Dec 06 06:38:36 compute-0 systemd-logind[798]: Session 44 logged out. Waiting for processes to exit.
Dec 06 06:38:36 compute-0 systemd-logind[798]: Removed session 44.
Dec 06 06:38:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:36.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:37 compute-0 ceph-mon[74339]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:37.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:38 compute-0 ceph-mon[74339]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:38.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:39.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:40.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:41 compute-0 ceph-mon[74339]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:41.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:41 compute-0 sshd-session[134668]: Accepted publickey for zuul from 192.168.122.30 port 50842 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:38:41 compute-0 systemd-logind[798]: New session 45 of user zuul.
Dec 06 06:38:41 compute-0 systemd[1]: Started Session 45 of User zuul.
Dec 06 06:38:41 compute-0 sshd-session[134668]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:38:42 compute-0 sudo[134822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzihfllmbpvvpgoeijatntdqgovuvkui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003121.764797-31-211117178959915/AnsiballZ_file.py'
Dec 06 06:38:42 compute-0 sudo[134822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:42 compute-0 python3.9[134824]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:42 compute-0 sudo[134822]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:42 compute-0 ceph-mon[74339]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:42.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:38:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:38:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:38:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:38:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:38:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:38:43 compute-0 sudo[134974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wszwmbgkvwhkfvgesfvhgaujmhzbodft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003122.6707363-67-266916199301796/AnsiballZ_stat.py'
Dec 06 06:38:43 compute-0 sudo[134974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:43 compute-0 python3.9[134976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:43 compute-0 sudo[134974]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:43.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:43 compute-0 sudo[135097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhmhdvmyzvlgrlpbguciqqippsvbdana ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003122.6707363-67-266916199301796/AnsiballZ_copy.py'
Dec 06 06:38:43 compute-0 sudo[135097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:44 compute-0 python3.9[135099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003122.6707363-67-266916199301796/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=eb306234ce11ca94053ba9deb99a6e4ceca2e349 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:44 compute-0 sudo[135097]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:44 compute-0 sudo[135250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aubeplrepeyseegqrmpgfluklitxtkeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003124.3004606-67-132422731178909/AnsiballZ_stat.py'
Dec 06 06:38:44 compute-0 sudo[135250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:44 compute-0 python3.9[135252]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:38:44 compute-0 sudo[135250]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:44.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:45 compute-0 sudo[135373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krdvzcthpbvgdvbjzudjnilpqthsygpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003124.3004606-67-132422731178909/AnsiballZ_copy.py'
Dec 06 06:38:45 compute-0 sudo[135373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:45 compute-0 python3.9[135375]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003124.3004606-67-132422731178909/.source.conf _original_basename=ceph.conf follow=False checksum=72f9497223d5391694ed548fdd27afc9585eca3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:38:45 compute-0 sudo[135373]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:45 compute-0 ceph-mon[74339]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:45.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:45 compute-0 sshd-session[134671]: Connection closed by 192.168.122.30 port 50842
Dec 06 06:38:45 compute-0 sshd-session[134668]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:38:45 compute-0 systemd-logind[798]: Session 45 logged out. Waiting for processes to exit.
Dec 06 06:38:45 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Dec 06 06:38:45 compute-0 systemd[1]: session-45.scope: Consumed 2.579s CPU time.
Dec 06 06:38:45 compute-0 systemd-logind[798]: Removed session 45.
Dec 06 06:38:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:46 compute-0 ceph-mon[74339]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:46.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:47.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:48 compute-0 ceph-mon[74339]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:48.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:49.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:50.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:51.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:51 compute-0 sshd-session[135403]: Accepted publickey for zuul from 192.168.122.30 port 36718 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:38:51 compute-0 systemd-logind[798]: New session 46 of user zuul.
Dec 06 06:38:51 compute-0 systemd[1]: Started Session 46 of User zuul.
Dec 06 06:38:51 compute-0 sshd-session[135403]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:38:52 compute-0 sudo[135448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:52 compute-0 sudo[135448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:52 compute-0 sudo[135448]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:52 compute-0 sudo[135484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:38:52 compute-0 sudo[135484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:38:52 compute-0 sudo[135484]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:52 compute-0 ceph-mon[74339]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:52 compute-0 python3.9[135607]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:38:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:52.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:53 compute-0 ceph-mon[74339]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:53.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:53 compute-0 sudo[135761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghnazcrbxprnvyidmrularqcdrosukts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003133.2403226-68-34354656431627/AnsiballZ_file.py'
Dec 06 06:38:53 compute-0 sudo[135761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:53 compute-0 python3.9[135763]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:53 compute-0 sudo[135761]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:54 compute-0 sudo[135914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idpxrhgerabeuwjkrascnqbfvdopzthb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003134.0642068-68-42457512585136/AnsiballZ_file.py'
Dec 06 06:38:54 compute-0 sudo[135914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:54 compute-0 python3.9[135916]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:38:54 compute-0 sudo[135914]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:54.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:55 compute-0 ceph-mon[74339]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:55 compute-0 python3.9[136066]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:38:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:55.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:56 compute-0 sudo[136216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-micqaikxanpppmffbdomgdvrlwdjxyxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003135.6225576-137-86425705888149/AnsiballZ_seboolean.py'
Dec 06 06:38:56 compute-0 sudo[136216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:56 compute-0 python3.9[136218]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 06 06:38:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:56.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:56 compute-0 ceph-mon[74339]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:38:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:57.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:38:57 compute-0 sudo[136216]: pam_unix(sudo:session): session closed for user root
Dec 06 06:38:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:38:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000057s ======
Dec 06 06:38:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:38:58.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Dec 06 06:38:59 compute-0 ceph-mon[74339]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:38:59 compute-0 sudo[136374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xylbzmkaclphievncgaupfoumqorbagb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003138.8470328-167-249590005421073/AnsiballZ_setup.py'
Dec 06 06:38:59 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 06 06:38:59 compute-0 sudo[136374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:38:59 compute-0 python3.9[136376]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:38:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:38:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:38:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:38:59.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:38:59 compute-0 sudo[136374]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:00 compute-0 sudo[136458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzhzjwyhczhroojhnmsqfnmxygmjvtmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003138.8470328-167-249590005421073/AnsiballZ_dnf.py'
Dec 06 06:39:00 compute-0 sudo[136458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:00 compute-0 python3.9[136460]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:39:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:00.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:01 compute-0 ceph-mon[74339]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:01.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:01 compute-0 sudo[136458]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:02 compute-0 sudo[136613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lazuneroescawdyonihesbkcvzcaxfti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003142.030569-203-177109527349851/AnsiballZ_systemd.py'
Dec 06 06:39:02 compute-0 sudo[136613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:02.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:02 compute-0 python3.9[136615]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:39:03 compute-0 sudo[136613]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:03.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:03 compute-0 ceph-mon[74339]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:04 compute-0 sudo[136769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akznulbkkfyqqtghbmkfcldvoioudfwx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003143.9148421-227-220611078038756/AnsiballZ_edpm_nftables_snippet.py'
Dec 06 06:39:04 compute-0 sudo[136769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:04 compute-0 python3[136771]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 06 06:39:04 compute-0 sudo[136769]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:04.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:05 compute-0 ceph-mon[74339]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:05 compute-0 sudo[136921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olxhvqywhgdrnqhjawywwqncoagmabfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003144.8916986-254-200391309115515/AnsiballZ_file.py'
Dec 06 06:39:05 compute-0 sudo[136921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:05 compute-0 python3.9[136923]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:05 compute-0 sudo[136921]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:05.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:06 compute-0 sudo[137073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czhabqhpkunpixikekebnweygdouuisr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003145.5919645-278-188693205461260/AnsiballZ_stat.py'
Dec 06 06:39:06 compute-0 sudo[137073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:06 compute-0 python3.9[137075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:06 compute-0 sudo[137073]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:06 compute-0 sudo[137152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiemfwdvyfxkimyfmxnqtanhpyjtxuwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003145.5919645-278-188693205461260/AnsiballZ_file.py'
Dec 06 06:39:06 compute-0 sudo[137152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:06 compute-0 python3.9[137154]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:06 compute-0 sudo[137152]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:06.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:07 compute-0 sudo[137304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovxlideonhtzbttervjqzewngxqjhggg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003146.9960628-314-171584015727056/AnsiballZ_stat.py'
Dec 06 06:39:07 compute-0 sudo[137304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:07 compute-0 python3.9[137306]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:07 compute-0 sudo[137304]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:07.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:07 compute-0 ceph-mon[74339]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:07 compute-0 sudo[137382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmgjjcikeqrzkhycaieqwpwhzjacvfhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003146.9960628-314-171584015727056/AnsiballZ_file.py'
Dec 06 06:39:07 compute-0 sudo[137382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:07 compute-0 python3.9[137384]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.rz6wquhy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:07 compute-0 sudo[137382]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:08 compute-0 ceph-mon[74339]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:08 compute-0 sudo[137535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krfvrrtdpqusruerxhqvkpuvrpestmfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003148.3714478-350-279696515668270/AnsiballZ_stat.py'
Dec 06 06:39:08 compute-0 sudo[137535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:08.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:08 compute-0 python3.9[137537]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:09 compute-0 sudo[137535]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:09 compute-0 sudo[137613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utiyceqbebkbxfpyslrkybxqfvystfrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003148.3714478-350-279696515668270/AnsiballZ_file.py'
Dec 06 06:39:09 compute-0 sudo[137613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:09 compute-0 python3.9[137615]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:09 compute-0 sudo[137613]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:09.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:09 compute-0 sudo[137640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:09 compute-0 sudo[137640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:09 compute-0 sudo[137640]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:09 compute-0 sudo[137665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:39:09 compute-0 sudo[137665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:09 compute-0 sudo[137665]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:09 compute-0 sudo[137690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:09 compute-0 sudo[137690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:09 compute-0 sudo[137690]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:09 compute-0 sudo[137715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:39:09 compute-0 sudo[137715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:39:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:39:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:10 compute-0 sudo[137715]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 06:39:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 06:39:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 06:39:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 06:39:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 06 06:39:10 compute-0 sudo[137898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kutpzrtcartcbwljfyamzqrdrizvxggw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003149.8926444-389-278305861925944/AnsiballZ_command.py'
Dec 06 06:39:10 compute-0 sudo[137898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:10 compute-0 python3.9[137900]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:10 compute-0 sudo[137898]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:10.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 06:39:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 06:39:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 06 06:39:11 compute-0 ceph-mon[74339]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:11 compute-0 sudo[138051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aepwgqxlcunpqqnpzmmxnnizhlvjavhv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003150.9645817-413-167997735437324/AnsiballZ_edpm_nftables_from_files.py'
Dec 06 06:39:11 compute-0 sudo[138051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:11.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:11 compute-0 python3[138053]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 06:39:11 compute-0 sudo[138051]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:12 compute-0 sudo[138203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdysjjvaxbmdqtlahkzxvarhhhmtelgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003151.8064916-437-185280653897174/AnsiballZ_stat.py'
Dec 06 06:39:12 compute-0 sudo[138203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:12 compute-0 sudo[138207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:12 compute-0 sudo[138207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:12 compute-0 sudo[138207]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:12 compute-0 python3.9[138205]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:12 compute-0 sudo[138232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:12 compute-0 sudo[138232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:12 compute-0 sudo[138232]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:39:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:39:12 compute-0 sudo[138203]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:12 compute-0 sudo[138379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upecgtpmpomdwziierywlpqemwcayibb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003151.8064916-437-185280653897174/AnsiballZ_copy.py'
Dec 06 06:39:12 compute-0 sudo[138379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:39:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:39:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:39:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:39:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:12.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:39:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:39:12 compute-0 python3.9[138381]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003151.8064916-437-185280653897174/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:13 compute-0 sudo[138379]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:39:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:39:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:39:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:39:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:39:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 04bc470b-94c3-49e5-80ef-95e154d167d4 does not exist
Dec 06 06:39:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 95ae2180-43cf-4e06-9252-eaf223974eb0 does not exist
Dec 06 06:39:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ffda2af3-d7ab-4879-a3e4-b2f8672d8bdc does not exist
Dec 06 06:39:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:39:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:39:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:39:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:39:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:39:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:39:13 compute-0 sudo[138458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:13 compute-0 sudo[138458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:13 compute-0 sudo[138458]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:13 compute-0 ceph-mon[74339]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:39:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:39:13 compute-0 sudo[138500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:39:13 compute-0 sudo[138500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:13 compute-0 sudo[138500]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:13 compute-0 sudo[138543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:13 compute-0 sudo[138543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:13 compute-0 sudo[138543]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:13 compute-0 sudo[138628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwpckrujvgncrpadxptucocaosvjlnbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003153.2422922-482-231778030888071/AnsiballZ_stat.py'
Dec 06 06:39:13 compute-0 sudo[138587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:39:13 compute-0 sudo[138628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:13.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:13 compute-0 sudo[138587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:13 compute-0 python3.9[138632]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:13 compute-0 sudo[138628]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:13 compute-0 podman[138675]: 2025-12-06 06:39:13.874992248 +0000 UTC m=+0.045452882 container create b69857cd99e19174ef4874ab811ba64fee814f66f9410c2a5734a36f9e27061c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_maxwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:39:13 compute-0 systemd[1]: Started libpod-conmon-b69857cd99e19174ef4874ab811ba64fee814f66f9410c2a5734a36f9e27061c.scope.
Dec 06 06:39:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:39:13 compute-0 podman[138675]: 2025-12-06 06:39:13.852616332 +0000 UTC m=+0.023076986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:39:13 compute-0 podman[138675]: 2025-12-06 06:39:13.964021349 +0000 UTC m=+0.134481983 container init b69857cd99e19174ef4874ab811ba64fee814f66f9410c2a5734a36f9e27061c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:39:13 compute-0 podman[138675]: 2025-12-06 06:39:13.970665677 +0000 UTC m=+0.141126311 container start b69857cd99e19174ef4874ab811ba64fee814f66f9410c2a5734a36f9e27061c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_maxwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:39:13 compute-0 podman[138675]: 2025-12-06 06:39:13.974316881 +0000 UTC m=+0.144777515 container attach b69857cd99e19174ef4874ab811ba64fee814f66f9410c2a5734a36f9e27061c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_maxwell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:39:13 compute-0 cranky_maxwell[138738]: 167 167
Dec 06 06:39:13 compute-0 systemd[1]: libpod-b69857cd99e19174ef4874ab811ba64fee814f66f9410c2a5734a36f9e27061c.scope: Deactivated successfully.
Dec 06 06:39:13 compute-0 podman[138675]: 2025-12-06 06:39:13.976307178 +0000 UTC m=+0.146767812 container died b69857cd99e19174ef4874ab811ba64fee814f66f9410c2a5734a36f9e27061c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_maxwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e834fe8bfdf975749f6fc1c68c72d75dbff03b38f7874428683b4e2e85006bd6-merged.mount: Deactivated successfully.
Dec 06 06:39:14 compute-0 podman[138675]: 2025-12-06 06:39:14.011488508 +0000 UTC m=+0.181949142 container remove b69857cd99e19174ef4874ab811ba64fee814f66f9410c2a5734a36f9e27061c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_maxwell, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 06:39:14 compute-0 systemd[1]: libpod-conmon-b69857cd99e19174ef4874ab811ba64fee814f66f9410c2a5734a36f9e27061c.scope: Deactivated successfully.
Dec 06 06:39:14 compute-0 sudo[138836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyfdzvlfdgbxogshsaawjyslsehtbumk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003153.2422922-482-231778030888071/AnsiballZ_copy.py'
Dec 06 06:39:14 compute-0 sudo[138836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:14 compute-0 podman[138835]: 2025-12-06 06:39:14.172424371 +0000 UTC m=+0.045331299 container create 9d4bffc4657cd8f83f256177731753f2a8115fc2a1155ffae2a1c590c10958de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lehmann, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 06:39:14 compute-0 systemd[1]: Started libpod-conmon-9d4bffc4657cd8f83f256177731753f2a8115fc2a1155ffae2a1c590c10958de.scope.
Dec 06 06:39:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d07b170f521d909ec10b8200f463a28345893ff8fbc364d381fb7b2f1798e1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:14 compute-0 podman[138835]: 2025-12-06 06:39:14.151676502 +0000 UTC m=+0.024583450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d07b170f521d909ec10b8200f463a28345893ff8fbc364d381fb7b2f1798e1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d07b170f521d909ec10b8200f463a28345893ff8fbc364d381fb7b2f1798e1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d07b170f521d909ec10b8200f463a28345893ff8fbc364d381fb7b2f1798e1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d07b170f521d909ec10b8200f463a28345893ff8fbc364d381fb7b2f1798e1c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:14 compute-0 podman[138835]: 2025-12-06 06:39:14.267816262 +0000 UTC m=+0.140723210 container init 9d4bffc4657cd8f83f256177731753f2a8115fc2a1155ffae2a1c590c10958de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:39:14 compute-0 podman[138835]: 2025-12-06 06:39:14.27900128 +0000 UTC m=+0.151908208 container start 9d4bffc4657cd8f83f256177731753f2a8115fc2a1155ffae2a1c590c10958de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 06 06:39:14 compute-0 podman[138835]: 2025-12-06 06:39:14.282924671 +0000 UTC m=+0.155831609 container attach 9d4bffc4657cd8f83f256177731753f2a8115fc2a1155ffae2a1c590c10958de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lehmann, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:39:14 compute-0 python3.9[138846]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003153.2422922-482-231778030888071/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:14 compute-0 sudo[138836]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:39:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:39:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:39:14 compute-0 sudo[139009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quueawhlqtavthaahzthvcqpvtlathaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003154.631402-527-234622083622143/AnsiballZ_stat.py'
Dec 06 06:39:14 compute-0 sudo[139009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:14.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:15 compute-0 festive_lehmann[138855]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:39:15 compute-0 festive_lehmann[138855]: --> relative data size: 1.0
Dec 06 06:39:15 compute-0 festive_lehmann[138855]: --> All data devices are unavailable
Dec 06 06:39:15 compute-0 python3.9[139011]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:15 compute-0 systemd[1]: libpod-9d4bffc4657cd8f83f256177731753f2a8115fc2a1155ffae2a1c590c10958de.scope: Deactivated successfully.
Dec 06 06:39:15 compute-0 podman[138835]: 2025-12-06 06:39:15.113420455 +0000 UTC m=+0.986327403 container died 9d4bffc4657cd8f83f256177731753f2a8115fc2a1155ffae2a1c590c10958de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lehmann, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Dec 06 06:39:15 compute-0 sudo[139009]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d07b170f521d909ec10b8200f463a28345893ff8fbc364d381fb7b2f1798e1c-merged.mount: Deactivated successfully.
Dec 06 06:39:15 compute-0 podman[138835]: 2025-12-06 06:39:15.181282574 +0000 UTC m=+1.054189502 container remove 9d4bffc4657cd8f83f256177731753f2a8115fc2a1155ffae2a1c590c10958de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:39:15 compute-0 systemd[1]: libpod-conmon-9d4bffc4657cd8f83f256177731753f2a8115fc2a1155ffae2a1c590c10958de.scope: Deactivated successfully.
Dec 06 06:39:15 compute-0 sudo[138587]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:15 compute-0 sudo[139059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:15 compute-0 sudo[139059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:15 compute-0 sudo[139059]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:15 compute-0 sudo[139109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:39:15 compute-0 sudo[139109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:15 compute-0 sudo[139109]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:15 compute-0 sudo[139155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:15 compute-0 sudo[139155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:15 compute-0 sudo[139155]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:15 compute-0 sudo[139204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:39:15 compute-0 sudo[139204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:15 compute-0 sudo[139255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgwqczhzfbcuervvpqgkeptesvudokxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003154.631402-527-234622083622143/AnsiballZ_copy.py'
Dec 06 06:39:15 compute-0 sudo[139255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:15 compute-0 ceph-mon[74339]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:15.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:15 compute-0 python3.9[139257]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003154.631402-527-234622083622143/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:15 compute-0 sudo[139255]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:15 compute-0 podman[139304]: 2025-12-06 06:39:15.757846 +0000 UTC m=+0.039037910 container create 065c20095d34fe6373d36abdf5cbce78b5c1b23394a45c8454f344e794087c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 06:39:15 compute-0 systemd[1]: Started libpod-conmon-065c20095d34fe6373d36abdf5cbce78b5c1b23394a45c8454f344e794087c62.scope.
Dec 06 06:39:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:39:15 compute-0 podman[139304]: 2025-12-06 06:39:15.822614011 +0000 UTC m=+0.103805921 container init 065c20095d34fe6373d36abdf5cbce78b5c1b23394a45c8454f344e794087c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_maxwell, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 06:39:15 compute-0 podman[139304]: 2025-12-06 06:39:15.830159776 +0000 UTC m=+0.111351686 container start 065c20095d34fe6373d36abdf5cbce78b5c1b23394a45c8454f344e794087c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_maxwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Dec 06 06:39:15 compute-0 podman[139304]: 2025-12-06 06:39:15.833550472 +0000 UTC m=+0.114742482 container attach 065c20095d34fe6373d36abdf5cbce78b5c1b23394a45c8454f344e794087c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 06:39:15 compute-0 podman[139304]: 2025-12-06 06:39:15.739527999 +0000 UTC m=+0.020719929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:39:15 compute-0 reverent_maxwell[139339]: 167 167
Dec 06 06:39:15 compute-0 systemd[1]: libpod-065c20095d34fe6373d36abdf5cbce78b5c1b23394a45c8454f344e794087c62.scope: Deactivated successfully.
Dec 06 06:39:15 compute-0 podman[139304]: 2025-12-06 06:39:15.83738673 +0000 UTC m=+0.118578680 container died 065c20095d34fe6373d36abdf5cbce78b5c1b23394a45c8454f344e794087c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc643897a030a9ec9d63b985723575444214c8db58b85895f4650afab8dd6df4-merged.mount: Deactivated successfully.
Dec 06 06:39:15 compute-0 podman[139304]: 2025-12-06 06:39:15.872507509 +0000 UTC m=+0.153699419 container remove 065c20095d34fe6373d36abdf5cbce78b5c1b23394a45c8454f344e794087c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_maxwell, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 06:39:15 compute-0 systemd[1]: libpod-conmon-065c20095d34fe6373d36abdf5cbce78b5c1b23394a45c8454f344e794087c62.scope: Deactivated successfully.
Dec 06 06:39:16 compute-0 podman[139399]: 2025-12-06 06:39:16.015478272 +0000 UTC m=+0.040005657 container create d6f4cd064d9fe65b20161ab45ee34559decfd1461b068a28555ebbd07b397e58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bose, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 06:39:16 compute-0 systemd[1]: Started libpod-conmon-d6f4cd064d9fe65b20161ab45ee34559decfd1461b068a28555ebbd07b397e58.scope.
Dec 06 06:39:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec78cf8c37b8e1f0e84c7adec26532e6ab174ea54e1fb0b823f22a3765c1dba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec78cf8c37b8e1f0e84c7adec26532e6ab174ea54e1fb0b823f22a3765c1dba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec78cf8c37b8e1f0e84c7adec26532e6ab174ea54e1fb0b823f22a3765c1dba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec78cf8c37b8e1f0e84c7adec26532e6ab174ea54e1fb0b823f22a3765c1dba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:16 compute-0 podman[139399]: 2025-12-06 06:39:16.000815326 +0000 UTC m=+0.025342721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:39:16 compute-0 podman[139399]: 2025-12-06 06:39:16.100130388 +0000 UTC m=+0.124657793 container init d6f4cd064d9fe65b20161ab45ee34559decfd1461b068a28555ebbd07b397e58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bose, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:39:16 compute-0 podman[139399]: 2025-12-06 06:39:16.109085042 +0000 UTC m=+0.133612427 container start d6f4cd064d9fe65b20161ab45ee34559decfd1461b068a28555ebbd07b397e58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bose, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:39:16 compute-0 podman[139399]: 2025-12-06 06:39:16.112573982 +0000 UTC m=+0.137101367 container attach d6f4cd064d9fe65b20161ab45ee34559decfd1461b068a28555ebbd07b397e58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bose, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:39:16 compute-0 sudo[139510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzohfmnvcfbwsldrzcnonsgrxfysdeav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003155.9277575-572-39446680343302/AnsiballZ_stat.py'
Dec 06 06:39:16 compute-0 sudo[139510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:16 compute-0 python3.9[139512]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:16 compute-0 sudo[139510]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:16 compute-0 sudo[139635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqqekghxsvkmmuomuvjqrimxsupgxlcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003155.9277575-572-39446680343302/AnsiballZ_copy.py'
Dec 06 06:39:16 compute-0 sudo[139635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:16 compute-0 frosty_bose[139440]: {
Dec 06 06:39:16 compute-0 frosty_bose[139440]:     "0": [
Dec 06 06:39:16 compute-0 frosty_bose[139440]:         {
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             "devices": [
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "/dev/loop3"
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             ],
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             "lv_name": "ceph_lv0",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             "lv_size": "7511998464",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             "name": "ceph_lv0",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             "tags": {
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "ceph.cluster_name": "ceph",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "ceph.crush_device_class": "",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "ceph.encrypted": "0",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "ceph.osd_id": "0",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "ceph.type": "block",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:                 "ceph.vdo": "0"
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             },
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             "type": "block",
Dec 06 06:39:16 compute-0 frosty_bose[139440]:             "vg_name": "ceph_vg0"
Dec 06 06:39:16 compute-0 frosty_bose[139440]:         }
Dec 06 06:39:16 compute-0 frosty_bose[139440]:     ]
Dec 06 06:39:16 compute-0 frosty_bose[139440]: }
Dec 06 06:39:16 compute-0 systemd[1]: libpod-d6f4cd064d9fe65b20161ab45ee34559decfd1461b068a28555ebbd07b397e58.scope: Deactivated successfully.
Dec 06 06:39:16 compute-0 podman[139399]: 2025-12-06 06:39:16.929994243 +0000 UTC m=+0.954521628 container died d6f4cd064d9fe65b20161ab45ee34559decfd1461b068a28555ebbd07b397e58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bose, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 06:39:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:39:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:16.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-aec78cf8c37b8e1f0e84c7adec26532e6ab174ea54e1fb0b823f22a3765c1dba-merged.mount: Deactivated successfully.
Dec 06 06:39:16 compute-0 podman[139399]: 2025-12-06 06:39:16.988144786 +0000 UTC m=+1.012672161 container remove d6f4cd064d9fe65b20161ab45ee34559decfd1461b068a28555ebbd07b397e58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bose, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:39:16 compute-0 python3.9[139639]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003155.9277575-572-39446680343302/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:17 compute-0 systemd[1]: libpod-conmon-d6f4cd064d9fe65b20161ab45ee34559decfd1461b068a28555ebbd07b397e58.scope: Deactivated successfully.
Dec 06 06:39:17 compute-0 sudo[139635]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:17 compute-0 sudo[139204]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:17 compute-0 ceph-mon[74339]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:17 compute-0 sudo[139656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:17 compute-0 sudo[139656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:17 compute-0 sudo[139656]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:17 compute-0 sudo[139705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:39:17 compute-0 sudo[139705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:17 compute-0 sudo[139705]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:17 compute-0 sudo[139730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:17 compute-0 sudo[139730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:17 compute-0 sudo[139730]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:17 compute-0 sudo[139755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:39:17 compute-0 sudo[139755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:17.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:17 compute-0 podman[139892]: 2025-12-06 06:39:17.615054913 +0000 UTC m=+0.043727034 container create e5a11c885309753d87c41568a1a71a33d1e320ffea4dcbc00da0e39428040140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 06:39:17 compute-0 systemd[1]: Started libpod-conmon-e5a11c885309753d87c41568a1a71a33d1e320ffea4dcbc00da0e39428040140.scope.
Dec 06 06:39:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:39:17 compute-0 podman[139892]: 2025-12-06 06:39:17.59770788 +0000 UTC m=+0.026380021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:39:17 compute-0 podman[139892]: 2025-12-06 06:39:17.700994526 +0000 UTC m=+0.129666677 container init e5a11c885309753d87c41568a1a71a33d1e320ffea4dcbc00da0e39428040140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:39:17 compute-0 podman[139892]: 2025-12-06 06:39:17.709969771 +0000 UTC m=+0.138641892 container start e5a11c885309753d87c41568a1a71a33d1e320ffea4dcbc00da0e39428040140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 06:39:17 compute-0 podman[139892]: 2025-12-06 06:39:17.714050166 +0000 UTC m=+0.142722307 container attach e5a11c885309753d87c41568a1a71a33d1e320ffea4dcbc00da0e39428040140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 06:39:17 compute-0 confident_golick[139932]: 167 167
Dec 06 06:39:17 compute-0 sudo[139961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovkwuxkflhxumlstxdkcefnbehjhbxzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003157.3457005-617-49122114062482/AnsiballZ_stat.py'
Dec 06 06:39:17 compute-0 systemd[1]: libpod-e5a11c885309753d87c41568a1a71a33d1e320ffea4dcbc00da0e39428040140.scope: Deactivated successfully.
Dec 06 06:39:17 compute-0 sudo[139961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:17 compute-0 podman[139967]: 2025-12-06 06:39:17.765494139 +0000 UTC m=+0.029403907 container died e5a11c885309753d87c41568a1a71a33d1e320ffea4dcbc00da0e39428040140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-58520e087d1c9e74c5307f6a8b7da072220e0c1655fcef7ae459d612987dd056-merged.mount: Deactivated successfully.
Dec 06 06:39:17 compute-0 podman[139967]: 2025-12-06 06:39:17.814213033 +0000 UTC m=+0.078122801 container remove e5a11c885309753d87c41568a1a71a33d1e320ffea4dcbc00da0e39428040140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Dec 06 06:39:17 compute-0 systemd[1]: libpod-conmon-e5a11c885309753d87c41568a1a71a33d1e320ffea4dcbc00da0e39428040140.scope: Deactivated successfully.
Dec 06 06:39:17 compute-0 python3.9[139966]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:17 compute-0 sudo[139961]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:17 compute-0 podman[139989]: 2025-12-06 06:39:17.995049373 +0000 UTC m=+0.046620476 container create e65e3c4ee53f8d49d3c8303ef8f9dc40ed4a88a99c49d99579df856537180437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 06:39:18 compute-0 systemd[1]: Started libpod-conmon-e65e3c4ee53f8d49d3c8303ef8f9dc40ed4a88a99c49d99579df856537180437.scope.
Dec 06 06:39:18 compute-0 podman[139989]: 2025-12-06 06:39:17.977070792 +0000 UTC m=+0.028641915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:39:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739b6976de170de52f8aea9762f97b978861fefd2d41e04c9bfc5b9cca636826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739b6976de170de52f8aea9762f97b978861fefd2d41e04c9bfc5b9cca636826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739b6976de170de52f8aea9762f97b978861fefd2d41e04c9bfc5b9cca636826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739b6976de170de52f8aea9762f97b978861fefd2d41e04c9bfc5b9cca636826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:39:18 compute-0 podman[139989]: 2025-12-06 06:39:18.088607712 +0000 UTC m=+0.140178835 container init e65e3c4ee53f8d49d3c8303ef8f9dc40ed4a88a99c49d99579df856537180437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 06:39:18 compute-0 podman[139989]: 2025-12-06 06:39:18.096901638 +0000 UTC m=+0.148472741 container start e65e3c4ee53f8d49d3c8303ef8f9dc40ed4a88a99c49d99579df856537180437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 06:39:18 compute-0 podman[139989]: 2025-12-06 06:39:18.100172311 +0000 UTC m=+0.151743444 container attach e65e3c4ee53f8d49d3c8303ef8f9dc40ed4a88a99c49d99579df856537180437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 06:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:39:18
Dec 06 06:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.meta', '.mgr', '.rgw.root', 'vms', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.data']
Dec 06 06:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:39:18 compute-0 sudo[140133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hikwqakwprxoadhqnythhvezdlndngud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003157.3457005-617-49122114062482/AnsiballZ_copy.py'
Dec 06 06:39:18 compute-0 sudo[140133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:18 compute-0 python3.9[140135]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003157.3457005-617-49122114062482/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:18 compute-0 sudo[140133]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:18.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:18 compute-0 nervous_heisenberg[140030]: {
Dec 06 06:39:18 compute-0 nervous_heisenberg[140030]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:39:18 compute-0 nervous_heisenberg[140030]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:39:18 compute-0 nervous_heisenberg[140030]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:39:18 compute-0 nervous_heisenberg[140030]:         "osd_id": 0,
Dec 06 06:39:18 compute-0 nervous_heisenberg[140030]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:39:18 compute-0 nervous_heisenberg[140030]:         "type": "bluestore"
Dec 06 06:39:18 compute-0 nervous_heisenberg[140030]:     }
Dec 06 06:39:18 compute-0 nervous_heisenberg[140030]: }
Dec 06 06:39:18 compute-0 systemd[1]: libpod-e65e3c4ee53f8d49d3c8303ef8f9dc40ed4a88a99c49d99579df856537180437.scope: Deactivated successfully.
Dec 06 06:39:18 compute-0 podman[139989]: 2025-12-06 06:39:18.994255331 +0000 UTC m=+1.045826434 container died e65e3c4ee53f8d49d3c8303ef8f9dc40ed4a88a99c49d99579df856537180437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:39:19 compute-0 sudo[140312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oplfyrkjtrstndkuvshmevkpgvxldpgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003158.7959344-662-201138088651576/AnsiballZ_file.py'
Dec 06 06:39:19 compute-0 sudo[140312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:19 compute-0 python3.9[140314]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:19 compute-0 sudo[140312]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:39:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:19.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:39:19 compute-0 ceph-mon[74339]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-739b6976de170de52f8aea9762f97b978861fefd2d41e04c9bfc5b9cca636826-merged.mount: Deactivated successfully.
Dec 06 06:39:19 compute-0 podman[139989]: 2025-12-06 06:39:19.673971619 +0000 UTC m=+1.725542742 container remove e65e3c4ee53f8d49d3c8303ef8f9dc40ed4a88a99c49d99579df856537180437 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 06:39:19 compute-0 sudo[139755]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:39:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:39:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 89c927b7-108e-49b7-aab3-5123500fd160 does not exist
Dec 06 06:39:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1bfe4073-7ddc-4fe0-901f-4f2f9e3f3957 does not exist
Dec 06 06:39:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7de3ad98-79ef-41a3-a464-f3b54d9cdc32 does not exist
Dec 06 06:39:19 compute-0 systemd[1]: libpod-conmon-e65e3c4ee53f8d49d3c8303ef8f9dc40ed4a88a99c49d99579df856537180437.scope: Deactivated successfully.
Dec 06 06:39:19 compute-0 sudo[140421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:19 compute-0 sudo[140421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:19 compute-0 sudo[140421]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:19 compute-0 sudo[140510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxyvnbwipdjihrqhxfldzjvggpxmqnbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003159.5512273-686-269008732915526/AnsiballZ_command.py'
Dec 06 06:39:19 compute-0 sudo[140510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:19 compute-0 sudo[140473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:39:19 compute-0 sudo[140473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:19 compute-0 sudo[140473]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:20 compute-0 python3.9[140515]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:20 compute-0 sudo[140510]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:20 compute-0 sudo[140671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szjtsgpaklgdizpnzunqxpuqegyvrurf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003160.274337-710-205794929018027/AnsiballZ_blockinfile.py'
Dec 06 06:39:20 compute-0 sudo[140671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:39:20 compute-0 ceph-mon[74339]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:20 compute-0 python3.9[140673]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:20 compute-0 sudo[140671]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:20.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:21 compute-0 sudo[140823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynqvcgfiyayqntpcuabycxnobcizzrxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003161.2075233-737-64890246632140/AnsiballZ_command.py'
Dec 06 06:39:21 compute-0 sudo[140823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:21.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:21 compute-0 python3.9[140825]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:21 compute-0 sudo[140823]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:22 compute-0 sudo[140977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czpymalbtycgqpxsnizvoljkgrebjivk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003162.0136166-761-160909488729980/AnsiballZ_stat.py'
Dec 06 06:39:22 compute-0 sudo[140977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:22 compute-0 python3.9[140979]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:39:22 compute-0 sudo[140977]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:22 compute-0 ceph-mon[74339]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:22.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:39:23 compute-0 sudo[141131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuuocwewtctdpwoymnskmkmcvajdspyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003163.0417788-785-203297737422206/AnsiballZ_command.py'
Dec 06 06:39:23 compute-0 sudo[141131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:39:23 compute-0 python3.9[141133]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:23 compute-0 sudo[141131]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:23.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:24 compute-0 sudo[141286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npxdahikbxsgpvzdpwsmdmyetyjiizhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003163.759348-809-184060530778111/AnsiballZ_file.py'
Dec 06 06:39:24 compute-0 sudo[141286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:24 compute-0 python3.9[141288]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:24 compute-0 sudo[141286]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:24.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:25 compute-0 ceph-mon[74339]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:39:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:39:25 compute-0 python3.9[141439]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:39:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:25.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:26 compute-0 sudo[141591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkctjjoinqiooohrfrcrhfrmkhoiaajh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003166.343453-929-163399849295113/AnsiballZ_command.py'
Dec 06 06:39:26 compute-0 sudo[141591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:26 compute-0 python3.9[141593]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:26 compute-0 ovs-vsctl[141594]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 06 06:39:26 compute-0 sudo[141591]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:26.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:27 compute-0 sudo[141744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrhqjgtodkbmjmcigwkdschxmpkdtlsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003167.1698103-956-217871525125335/AnsiballZ_command.py'
Dec 06 06:39:27 compute-0 sudo[141744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:39:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:27.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:39:27 compute-0 python3.9[141746]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:27 compute-0 sudo[141744]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:28 compute-0 ceph-mon[74339]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:28 compute-0 sudo[141900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irnjwdsbnzvhyaendelldapyqazdmwsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003167.9461956-980-242360938698442/AnsiballZ_command.py'
Dec 06 06:39:28 compute-0 sudo[141900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:28 compute-0 python3.9[141902]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:39:28 compute-0 ovs-vsctl[141903]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 06 06:39:28 compute-0 sudo[141900]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:28.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:29 compute-0 python3.9[142053]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:39:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:39:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:29.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:39:29 compute-0 ceph-mon[74339]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:29 compute-0 sudo[142205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbubgbpydeiejlrmhxptwxswrizyevvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003169.5626113-1031-59645421982563/AnsiballZ_file.py'
Dec 06 06:39:29 compute-0 sudo[142205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:30 compute-0 python3.9[142207]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:39:30 compute-0 sudo[142205]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:30 compute-0 sudo[142358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klpwjohactfxrnnfkgsqmqeypozukbxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003170.2935052-1055-171327288747709/AnsiballZ_stat.py'
Dec 06 06:39:30 compute-0 sudo[142358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:30 compute-0 python3.9[142360]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:30 compute-0 sudo[142358]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:30.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:31 compute-0 sudo[142436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtagerdvahdcxfylzbfdositsxywsyck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003170.2935052-1055-171327288747709/AnsiballZ_file.py'
Dec 06 06:39:31 compute-0 sudo[142436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:31 compute-0 python3.9[142438]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:39:31 compute-0 sudo[142436]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:31.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:31 compute-0 sudo[142588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwlihjrehiikufbbsdtswidhvkldknpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003171.3663492-1055-181104256732699/AnsiballZ_stat.py'
Dec 06 06:39:31 compute-0 sudo[142588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:31 compute-0 python3.9[142590]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:31 compute-0 sudo[142588]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:32 compute-0 sudo[142666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whlvnqpssnhvgbgowadxxngtjwoxgnkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003171.3663492-1055-181104256732699/AnsiballZ_file.py'
Dec 06 06:39:32 compute-0 sudo[142666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:32 compute-0 python3.9[142668]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:39:32 compute-0 sudo[142666]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:32 compute-0 sudo[142670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:32 compute-0 sudo[142670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:32 compute-0 sudo[142670]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:32 compute-0 sudo[142718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:32 compute-0 sudo[142718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:32 compute-0 sudo[142718]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:32 compute-0 sudo[142869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlytfdnmidihdcghuxtycuwpyynhvghp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003172.6307104-1124-143141585325128/AnsiballZ_file.py'
Dec 06 06:39:32 compute-0 sudo[142869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:32.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:33 compute-0 ceph-mon[74339]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:33 compute-0 python3.9[142871]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:33 compute-0 sudo[142869]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:33.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:33 compute-0 sudo[143021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eldttysxnpvudefwikwdkwnlathdcvio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003173.31555-1148-172596897273459/AnsiballZ_stat.py'
Dec 06 06:39:33 compute-0 sudo[143021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:33 compute-0 python3.9[143023]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:33 compute-0 sudo[143021]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:34 compute-0 sudo[143099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erwgfidpxsaammhhmddsaoxxkfaqutvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003173.31555-1148-172596897273459/AnsiballZ_file.py'
Dec 06 06:39:34 compute-0 sudo[143099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:34 compute-0 python3.9[143101]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:34 compute-0 sudo[143099]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:34 compute-0 ceph-mon[74339]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:34 compute-0 sudo[143252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymhrehzoatzqxhnxrdfninrrsffgzgtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003174.5688734-1184-109505822907141/AnsiballZ_stat.py'
Dec 06 06:39:34 compute-0 sudo[143252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:34.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:34 compute-0 python3.9[143254]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:35 compute-0 sudo[143252]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:35 compute-0 sudo[143330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkuynndkkzzcvsrwjxvcyyohwilelwip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003174.5688734-1184-109505822907141/AnsiballZ_file.py'
Dec 06 06:39:35 compute-0 sudo[143330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:35 compute-0 python3.9[143332]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:35 compute-0 sudo[143330]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:39:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:35.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:39:35 compute-0 ceph-mon[74339]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:36 compute-0 sudo[143483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygkvnbqmdlkpxbqgbzuqwzrbxlwiibop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003175.8777976-1220-36506664866138/AnsiballZ_systemd.py'
Dec 06 06:39:36 compute-0 sudo[143483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:36 compute-0 python3.9[143485]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:39:36 compute-0 systemd[1]: Reloading.
Dec 06 06:39:36 compute-0 systemd-sysv-generator[143516]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:39:36 compute-0 systemd-rc-local-generator[143512]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:39:36 compute-0 ceph-mon[74339]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:36 compute-0 sudo[143483]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:36.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:37 compute-0 sudo[143672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzoiubbtzxicxkllzggnrlzisswavrjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003177.123918-1244-275598899094074/AnsiballZ_stat.py'
Dec 06 06:39:37 compute-0 sudo[143672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:37 compute-0 python3.9[143674]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:37.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:37 compute-0 sudo[143672]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:37 compute-0 sudo[143750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmdavwlwbfxbbipaqkzhtzuubndildox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003177.123918-1244-275598899094074/AnsiballZ_file.py'
Dec 06 06:39:37 compute-0 sudo[143750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:38 compute-0 python3.9[143752]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:38 compute-0 sudo[143750]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:38 compute-0 sudo[143903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oggqxhprrhcjolmzbraffubircyglpmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003178.608484-1280-250288590483086/AnsiballZ_stat.py'
Dec 06 06:39:38 compute-0 sudo[143903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:38.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:39 compute-0 python3.9[143905]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:39 compute-0 sudo[143903]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:39 compute-0 sudo[143981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqktptxnqnyepxiuwxbbgmkpddcwwdma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003178.608484-1280-250288590483086/AnsiballZ_file.py'
Dec 06 06:39:39 compute-0 sudo[143981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:39 compute-0 ceph-mon[74339]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:39 compute-0 python3.9[143983]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:39 compute-0 sudo[143981]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:39.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:40 compute-0 sudo[144133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tywnzofjsbhnonmolxcacdjxupnmhyge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003179.8184452-1316-69147987677317/AnsiballZ_systemd.py'
Dec 06 06:39:40 compute-0 sudo[144133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:40 compute-0 python3.9[144135]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:39:40 compute-0 systemd[1]: Reloading.
Dec 06 06:39:40 compute-0 systemd-rc-local-generator[144161]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:39:40 compute-0 systemd-sysv-generator[144167]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:39:40 compute-0 ceph-mon[74339]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:40 compute-0 systemd[1]: Starting Create netns directory...
Dec 06 06:39:40 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 06:39:40 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 06:39:40 compute-0 systemd[1]: Finished Create netns directory.
Dec 06 06:39:40 compute-0 sudo[144133]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:40.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:41 compute-0 sudo[144328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buksvgfufojkxjjmcolgtwymfouvjtwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003181.1667519-1346-127877702981076/AnsiballZ_file.py'
Dec 06 06:39:41 compute-0 sudo[144328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:41.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:41 compute-0 python3.9[144330]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:39:41 compute-0 sudo[144328]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:42 compute-0 sudo[144481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlvrefrnphwmoizuincvhwrotidprsti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003181.8673358-1370-79632823751529/AnsiballZ_stat.py'
Dec 06 06:39:42 compute-0 sudo[144481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:42 compute-0 python3.9[144483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:42 compute-0 sudo[144481]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:42 compute-0 sudo[144604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcqmxmieigeahmhfdympjdgkfbvxxsiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003181.8673358-1370-79632823751529/AnsiballZ_copy.py'
Dec 06 06:39:42 compute-0 sudo[144604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:39:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:39:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:39:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:39:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:39:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:39:42 compute-0 python3.9[144606]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003181.8673358-1370-79632823751529/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:39:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:42.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:42 compute-0 sudo[144604]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:43 compute-0 ceph-mon[74339]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:43.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:43 compute-0 sudo[144756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tipcwuwtrnahewufawofwofrmhqqioii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003183.5800679-1421-182161021102885/AnsiballZ_file.py'
Dec 06 06:39:43 compute-0 sudo[144756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:44 compute-0 python3.9[144758]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:39:44 compute-0 sudo[144756]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:44 compute-0 sudo[144909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sldejqfaztjeuztaimdtgmvflnlrwkkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003184.3073127-1445-197003838092550/AnsiballZ_stat.py'
Dec 06 06:39:44 compute-0 sudo[144909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:44 compute-0 python3.9[144911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:39:44 compute-0 sudo[144909]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:44.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:45 compute-0 ceph-mon[74339]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:45 compute-0 sudo[145033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xovqwtzabgkcesjwszbicqnjnaoiyyqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003184.3073127-1445-197003838092550/AnsiballZ_copy.py'
Dec 06 06:39:45 compute-0 sudo[145033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:45 compute-0 python3.9[145035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003184.3073127-1445-197003838092550/.source.json _original_basename=.eo7jmg2c follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:45 compute-0 sudo[145033]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:39:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:45.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:39:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:46 compute-0 sudo[145187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onidcsccxrqjyqinmviutqkoupdculxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003186.4686646-1490-70667179284300/AnsiballZ_file.py'
Dec 06 06:39:46 compute-0 sudo[145187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:46 compute-0 python3.9[145189]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:39:46 compute-0 sudo[145187]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:46.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:47 compute-0 sshd-session[145032]: Connection reset by authenticating user root 45.140.17.124 port 52026 [preauth]
Dec 06 06:39:47 compute-0 sudo[145339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkjpxujzbytucznrzkstrzlxauqlkept ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003187.155306-1514-225447914784449/AnsiballZ_stat.py'
Dec 06 06:39:47 compute-0 sudo[145339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:47 compute-0 ceph-mon[74339]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:47 compute-0 sudo[145339]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:39:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:47.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:39:47 compute-0 sudo[145463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pykduvhxrauowlooyagsvbfaxzouecsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003187.155306-1514-225447914784449/AnsiballZ_copy.py'
Dec 06 06:39:47 compute-0 sudo[145463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:48 compute-0 sudo[145463]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:48.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:49 compute-0 sudo[145617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leolcaflxdpeehwerzcabbtphdnyytgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003188.5128455-1565-57720775561209/AnsiballZ_container_config_data.py'
Dec 06 06:39:49 compute-0 sudo[145617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:49 compute-0 ceph-mon[74339]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:49 compute-0 python3.9[145619]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 06 06:39:49 compute-0 sudo[145617]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:39:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:49.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:39:49 compute-0 sshd-session[145342]: Invalid user user from 45.140.17.124 port 52048
Dec 06 06:39:50 compute-0 sudo[145769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovvzzlyuhncarjtbuuzscaoylimufnfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003189.617239-1592-90076297157616/AnsiballZ_container_config_hash.py'
Dec 06 06:39:50 compute-0 sudo[145769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:50 compute-0 sshd-session[145342]: Connection reset by invalid user user 45.140.17.124 port 52048 [preauth]
Dec 06 06:39:50 compute-0 python3.9[145771]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 06:39:50 compute-0 sudo[145769]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:50 compute-0 sudo[145924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cheqgahpxdphaesqpmwfpzwnesqxafxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003190.5006118-1619-220151732780145/AnsiballZ_podman_container_info.py'
Dec 06 06:39:50 compute-0 sudo[145924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:50.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:51 compute-0 python3.9[145926]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 06:39:51 compute-0 sudo[145924]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:51 compute-0 ceph-mon[74339]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:51.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:52 compute-0 sshd-session[145773]: Connection reset by authenticating user root 45.140.17.124 port 52054 [preauth]
Dec 06 06:39:52 compute-0 sudo[146031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:52 compute-0 sudo[146031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:52 compute-0 sudo[146031]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:52 compute-0 sudo[146079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:39:52 compute-0 sudo[146079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:39:52 compute-0 sudo[146079]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:52 compute-0 sudo[146155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnxqemzjbonjutkqenkdqmthmsbaroal ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003192.1431952-1658-36253463631890/AnsiballZ_edpm_container_manage.py'
Dec 06 06:39:52 compute-0 sudo[146155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:39:52 compute-0 ceph-mon[74339]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:52 compute-0 python3[146157]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 06:39:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:39:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:53.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:39:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:53.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:54 compute-0 sshd-session[146082]: Invalid user ubuntu from 45.140.17.124 port 35476
Dec 06 06:39:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:55.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:55 compute-0 sshd-session[146082]: Connection reset by invalid user ubuntu 45.140.17.124 port 35476 [preauth]
Dec 06 06:39:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:55.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:55 compute-0 ceph-mon[74339]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:57.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:57 compute-0 sshd-session[146222]: Invalid user default from 45.140.17.124 port 35494
Dec 06 06:39:57 compute-0 sshd-session[146222]: Connection reset by invalid user default 45.140.17.124 port 35494 [preauth]
Dec 06 06:39:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:57.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:57 compute-0 ceph-mon[74339]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:58 compute-0 podman[146171]: 2025-12-06 06:39:58.098411301 +0000 UTC m=+5.083059470 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 06 06:39:58 compute-0 podman[146294]: 2025-12-06 06:39:58.245324546 +0000 UTC m=+0.058061581 container create 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec 06 06:39:58 compute-0 podman[146294]: 2025-12-06 06:39:58.214220741 +0000 UTC m=+0.026957876 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 06 06:39:58 compute-0 python3[146157]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 06 06:39:58 compute-0 sudo[146155]: pam_unix(sudo:session): session closed for user root
Dec 06 06:39:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:39:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:39:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:39:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:39:59.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:39:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:39:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:39:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:39:59.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:40:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 06:40:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:40:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:01.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:40:01 compute-0 ceph-mon[74339]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:40:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:01.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:40:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:02 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 06:40:02 compute-0 ceph-mon[74339]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:03.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:40:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:03.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:40:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:05.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:05.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:07.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:07.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:09.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:09.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:10 compute-0 ceph-mon[74339]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:11.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:40:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:11.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:40:11 compute-0 ceph-mon[74339]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:11 compute-0 ceph-mon[74339]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:11 compute-0 ceph-mon[74339]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:11 compute-0 ceph-mon[74339]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:12 compute-0 sudo[146364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:12 compute-0 sudo[146364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:12 compute-0 sudo[146364]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:12 compute-0 sudo[146413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:12 compute-0 sudo[146413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:12 compute-0 sudo[146413]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:12 compute-0 ceph-mon[74339]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:40:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:40:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:40:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:40:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:40:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:40:12 compute-0 sudo[146539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbicktatbqfelrhxkqffawkyeqxacabd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003212.6606152-1682-113219844887736/AnsiballZ_stat.py'
Dec 06 06:40:12 compute-0 sudo[146539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:13.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:13 compute-0 python3.9[146541]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:40:13 compute-0 sudo[146539]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:13.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:13 compute-0 sudo[146693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkxraytawfgblrithjtzpkdszyoinwfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003213.5521092-1709-178204012605311/AnsiballZ_file.py'
Dec 06 06:40:13 compute-0 sudo[146693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:14 compute-0 python3.9[146695]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:40:14 compute-0 sudo[146693]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:14 compute-0 sudo[146770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsgubtesvdnvhayxavzmqctkqcqfwomo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003213.5521092-1709-178204012605311/AnsiballZ_stat.py'
Dec 06 06:40:14 compute-0 sudo[146770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:14 compute-0 python3.9[146772]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:40:14 compute-0 sudo[146770]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:15 compute-0 sudo[146921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmdrbvzhkcwfifmwzcvwglvdsxkaesna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003214.7390046-1709-7901413385611/AnsiballZ_copy.py'
Dec 06 06:40:15 compute-0 sudo[146921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:15 compute-0 python3.9[146923]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003214.7390046-1709-7901413385611/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:40:15 compute-0 sudo[146921]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:15.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:15 compute-0 sudo[146997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pznstwhynzpvuqgtokrutluasaeohjiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003214.7390046-1709-7901413385611/AnsiballZ_systemd.py'
Dec 06 06:40:15 compute-0 sudo[146997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:15 compute-0 ceph-mon[74339]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:15 compute-0 python3.9[146999]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:40:15 compute-0 systemd[1]: Reloading.
Dec 06 06:40:16 compute-0 systemd-sysv-generator[147029]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:40:16 compute-0 systemd-rc-local-generator[147025]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:40:16 compute-0 sudo[146997]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:16 compute-0 sudo[147109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfjpcpskzbwnbeusrdvgbcbfocsrilui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003214.7390046-1709-7901413385611/AnsiballZ_systemd.py'
Dec 06 06:40:16 compute-0 sudo[147109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:16 compute-0 python3.9[147111]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:40:17 compute-0 systemd[1]: Reloading.
Dec 06 06:40:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:17.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:17 compute-0 systemd-rc-local-generator[147142]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:40:17 compute-0 systemd-sysv-generator[147146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:40:17 compute-0 systemd[1]: Starting ovn_controller container...
Dec 06 06:40:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:40:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dfbfec8d39cbc4ffc1d628bc867800e2b103a56b7cde654fcd19ecda027ea59/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:17 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d.
Dec 06 06:40:17 compute-0 podman[147153]: 2025-12-06 06:40:17.460528955 +0000 UTC m=+0.126086546 container init 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:40:17 compute-0 ovn_controller[147168]: + sudo -E kolla_set_configs
Dec 06 06:40:17 compute-0 podman[147153]: 2025-12-06 06:40:17.486545353 +0000 UTC m=+0.152102934 container start 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:40:17 compute-0 edpm-start-podman-container[147153]: ovn_controller
Dec 06 06:40:17 compute-0 systemd[1]: Created slice User Slice of UID 0.
Dec 06 06:40:17 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 06 06:40:17 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 06 06:40:17 compute-0 systemd[1]: Starting User Manager for UID 0...
Dec 06 06:40:17 compute-0 edpm-start-podman-container[147152]: Creating additional drop-in dependency for "ovn_controller" (6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d)
Dec 06 06:40:17 compute-0 systemd[147205]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 06 06:40:17 compute-0 podman[147175]: 2025-12-06 06:40:17.580062991 +0000 UTC m=+0.079312491 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:40:17 compute-0 systemd[1]: 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d-56aed712dcf1543b.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 06:40:17 compute-0 systemd[1]: 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d-56aed712dcf1543b.service: Failed with result 'exit-code'.
Dec 06 06:40:17 compute-0 systemd[1]: Reloading.
Dec 06 06:40:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:40:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:17.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:40:17 compute-0 systemd-rc-local-generator[147249]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:40:17 compute-0 systemd-sysv-generator[147254]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:40:17 compute-0 systemd[147205]: Queued start job for default target Main User Target.
Dec 06 06:40:17 compute-0 systemd[147205]: Created slice User Application Slice.
Dec 06 06:40:17 compute-0 systemd[147205]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 06 06:40:17 compute-0 systemd[147205]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 06:40:17 compute-0 systemd[147205]: Reached target Paths.
Dec 06 06:40:17 compute-0 systemd[147205]: Reached target Timers.
Dec 06 06:40:17 compute-0 systemd[147205]: Starting D-Bus User Message Bus Socket...
Dec 06 06:40:17 compute-0 systemd[147205]: Starting Create User's Volatile Files and Directories...
Dec 06 06:40:17 compute-0 systemd[147205]: Finished Create User's Volatile Files and Directories.
Dec 06 06:40:17 compute-0 systemd[147205]: Listening on D-Bus User Message Bus Socket.
Dec 06 06:40:17 compute-0 systemd[147205]: Reached target Sockets.
Dec 06 06:40:17 compute-0 systemd[147205]: Reached target Basic System.
Dec 06 06:40:17 compute-0 systemd[147205]: Reached target Main User Target.
Dec 06 06:40:17 compute-0 systemd[147205]: Startup finished in 140ms.
Dec 06 06:40:17 compute-0 ceph-mon[74339]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:17 compute-0 systemd[1]: Started User Manager for UID 0.
Dec 06 06:40:17 compute-0 systemd[1]: Started ovn_controller container.
Dec 06 06:40:17 compute-0 systemd[1]: Started Session c1 of User root.
Dec 06 06:40:17 compute-0 sudo[147109]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:17 compute-0 ovn_controller[147168]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 06:40:17 compute-0 ovn_controller[147168]: INFO:__main__:Validating config file
Dec 06 06:40:17 compute-0 ovn_controller[147168]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 06:40:17 compute-0 ovn_controller[147168]: INFO:__main__:Writing out command to execute
Dec 06 06:40:17 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 06 06:40:17 compute-0 ovn_controller[147168]: ++ cat /run_command
Dec 06 06:40:17 compute-0 ovn_controller[147168]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 06 06:40:17 compute-0 ovn_controller[147168]: + ARGS=
Dec 06 06:40:17 compute-0 ovn_controller[147168]: + sudo kolla_copy_cacerts
Dec 06 06:40:18 compute-0 systemd[1]: Started Session c2 of User root.
Dec 06 06:40:18 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 06 06:40:18 compute-0 ovn_controller[147168]: + [[ ! -n '' ]]
Dec 06 06:40:18 compute-0 ovn_controller[147168]: + . kolla_extend_start
Dec 06 06:40:18 compute-0 ovn_controller[147168]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 06 06:40:18 compute-0 ovn_controller[147168]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 06 06:40:18 compute-0 ovn_controller[147168]: + umask 0022
Dec 06 06:40:18 compute-0 ovn_controller[147168]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 06 06:40:18 compute-0 NetworkManager[48965]: <info>  [1765003218.1042] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 06 06:40:18 compute-0 NetworkManager[48965]: <info>  [1765003218.1050] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 06:40:18 compute-0 NetworkManager[48965]: <info>  [1765003218.1062] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 06 06:40:18 compute-0 NetworkManager[48965]: <info>  [1765003218.1068] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 06 06:40:18 compute-0 NetworkManager[48965]: <info>  [1765003218.1072] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 06 06:40:18 compute-0 kernel: br-int: entered promiscuous mode
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 06:40:18 compute-0 ovn_controller[147168]: 2025-12-06T06:40:18Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 06:40:18 compute-0 NetworkManager[48965]: <info>  [1765003218.1314] manager: (ovn-150d59-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 06 06:40:18 compute-0 systemd-udevd[147299]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:40:18 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Dec 06 06:40:18 compute-0 systemd-udevd[147302]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:40:18 compute-0 NetworkManager[48965]: <info>  [1765003218.1544] device (genev_sys_6081): carrier: link connected
Dec 06 06:40:18 compute-0 NetworkManager[48965]: <info>  [1765003218.1549] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 06 06:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:40:18
Dec 06 06:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log']
Dec 06 06:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:40:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:18 compute-0 NetworkManager[48965]: <info>  [1765003218.9048] manager: (ovn-03fe05-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec 06 06:40:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:40:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:19.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:40:19 compute-0 ceph-mon[74339]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:19 compute-0 NetworkManager[48965]: <info>  [1765003219.1695] manager: (ovn-9f96b9-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Dec 06 06:40:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:19.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:20 compute-0 sudo[147431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqnhdjwsxmhbdxmskihenpuyklwxlwyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003219.7231338-1793-2306875159457/AnsiballZ_command.py'
Dec 06 06:40:20 compute-0 sudo[147431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:20 compute-0 sudo[147434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:20 compute-0 sudo[147434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:20 compute-0 sudo[147434]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:20 compute-0 sudo[147460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:40:20 compute-0 sudo[147460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:20 compute-0 sudo[147460]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:20 compute-0 python3.9[147433]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:40:20 compute-0 ovs-vsctl[147491]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 06 06:40:20 compute-0 sudo[147431]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:20 compute-0 sudo[147485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:20 compute-0 sudo[147485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:20 compute-0 sudo[147485]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:20 compute-0 sudo[147511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:40:20 compute-0 sudo[147511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:20 compute-0 sudo[147511]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:20 compute-0 sudo[147717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjreyfulmpvvnsnlrmqoupjkzbpqlqkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003220.5044892-1817-136775584359205/AnsiballZ_command.py'
Dec 06 06:40:20 compute-0 sudo[147717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:21 compute-0 python3.9[147719]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:40:21 compute-0 ovs-vsctl[147721]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 06 06:40:21 compute-0 sudo[147717]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:40:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:21.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:40:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:21.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:21 compute-0 sudo[147872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugrsztgxchrniforlucjvymcynqahgky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003221.701561-1859-203014147070306/AnsiballZ_command.py'
Dec 06 06:40:21 compute-0 sudo[147872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:22 compute-0 python3.9[147874]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:40:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:40:22 compute-0 ovs-vsctl[147875]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 06 06:40:22 compute-0 sudo[147872]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:22 compute-0 sshd-session[135406]: Connection closed by 192.168.122.30 port 36718
Dec 06 06:40:22 compute-0 sshd-session[135403]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:40:22 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Dec 06 06:40:22 compute-0 systemd[1]: session-46.scope: Consumed 56.033s CPU time.
Dec 06 06:40:22 compute-0 systemd-logind[798]: Session 46 logged out. Waiting for processes to exit.
Dec 06 06:40:22 compute-0 systemd-logind[798]: Removed session 46.
Dec 06 06:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:40:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:23.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:40:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:40:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:23.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:40:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:40:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:25.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:40:25 compute-0 ceph-mon[74339]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:40:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:40:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:40:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:25.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:26 compute-0 ceph-mon[74339]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:26 compute-0 ceph-mon[74339]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:40:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:40:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:40:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:40:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:40:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:26 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c47d4790-e6cc-43d7-acd7-e0ceb16fa520 does not exist
Dec 06 06:40:26 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 46766c7b-eb56-4948-9f6e-43af76592020 does not exist
Dec 06 06:40:26 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0cb63b98-81a8-43ec-aacc-1a26a1a4b551 does not exist
Dec 06 06:40:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:40:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:40:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:40:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:40:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:40:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:40:27 compute-0 sudo[147903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:27 compute-0 sudo[147903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:27 compute-0 sudo[147903]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:27.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:27 compute-0 sudo[147928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:40:27 compute-0 sudo[147928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:27 compute-0 sudo[147928]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:27 compute-0 sudo[147953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:27 compute-0 sudo[147953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:27 compute-0 sudo[147953]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:27 compute-0 sudo[147978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:40:27 compute-0 sudo[147978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:27 compute-0 ceph-mon[74339]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:40:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:40:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:40:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:40:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:40:27 compute-0 podman[148045]: 2025-12-06 06:40:27.483523409 +0000 UTC m=+0.040693171 container create 6078d71bb637173c0914e2d2d537eacbdc18f00b49c3c5b107a36083180dc64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 06:40:27 compute-0 systemd[1]: Started libpod-conmon-6078d71bb637173c0914e2d2d537eacbdc18f00b49c3c5b107a36083180dc64b.scope.
Dec 06 06:40:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:40:27 compute-0 podman[148045]: 2025-12-06 06:40:27.46406385 +0000 UTC m=+0.021233632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:40:27 compute-0 podman[148045]: 2025-12-06 06:40:27.572169448 +0000 UTC m=+0.129339240 container init 6078d71bb637173c0914e2d2d537eacbdc18f00b49c3c5b107a36083180dc64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:40:27 compute-0 podman[148045]: 2025-12-06 06:40:27.579597982 +0000 UTC m=+0.136767734 container start 6078d71bb637173c0914e2d2d537eacbdc18f00b49c3c5b107a36083180dc64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 06:40:27 compute-0 podman[148045]: 2025-12-06 06:40:27.58267898 +0000 UTC m=+0.139848742 container attach 6078d71bb637173c0914e2d2d537eacbdc18f00b49c3c5b107a36083180dc64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:40:27 compute-0 exciting_fermi[148061]: 167 167
Dec 06 06:40:27 compute-0 systemd[1]: libpod-6078d71bb637173c0914e2d2d537eacbdc18f00b49c3c5b107a36083180dc64b.scope: Deactivated successfully.
Dec 06 06:40:27 compute-0 podman[148045]: 2025-12-06 06:40:27.58788035 +0000 UTC m=+0.145050112 container died 6078d71bb637173c0914e2d2d537eacbdc18f00b49c3c5b107a36083180dc64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 06:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-56cf8cdec5d30e8141714ec86594995cabe02734d23ee5aefe7aacc08b99c6c7-merged.mount: Deactivated successfully.
Dec 06 06:40:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:40:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:27.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:40:27 compute-0 podman[148045]: 2025-12-06 06:40:27.692096256 +0000 UTC m=+0.249266018 container remove 6078d71bb637173c0914e2d2d537eacbdc18f00b49c3c5b107a36083180dc64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 06:40:27 compute-0 systemd[1]: libpod-conmon-6078d71bb637173c0914e2d2d537eacbdc18f00b49c3c5b107a36083180dc64b.scope: Deactivated successfully.
Dec 06 06:40:27 compute-0 podman[148083]: 2025-12-06 06:40:27.828657973 +0000 UTC m=+0.024426213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:40:27 compute-0 podman[148083]: 2025-12-06 06:40:27.927385522 +0000 UTC m=+0.123153762 container create 5548baa0ba0ded7e2d190d072a08e4aa5738f76567abcacdbd0f7c2d4876bb01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yalow, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 06:40:27 compute-0 systemd[1]: Started libpod-conmon-5548baa0ba0ded7e2d190d072a08e4aa5738f76567abcacdbd0f7c2d4876bb01.scope.
Dec 06 06:40:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcdeaee860466c821157faf96aeeb2e44a6cea900cf1092093b0fdd6c90ae278/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcdeaee860466c821157faf96aeeb2e44a6cea900cf1092093b0fdd6c90ae278/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcdeaee860466c821157faf96aeeb2e44a6cea900cf1092093b0fdd6c90ae278/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcdeaee860466c821157faf96aeeb2e44a6cea900cf1092093b0fdd6c90ae278/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcdeaee860466c821157faf96aeeb2e44a6cea900cf1092093b0fdd6c90ae278/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:28 compute-0 podman[148083]: 2025-12-06 06:40:28.0115 +0000 UTC m=+0.207268250 container init 5548baa0ba0ded7e2d190d072a08e4aa5738f76567abcacdbd0f7c2d4876bb01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:40:28 compute-0 podman[148083]: 2025-12-06 06:40:28.021845238 +0000 UTC m=+0.217613488 container start 5548baa0ba0ded7e2d190d072a08e4aa5738f76567abcacdbd0f7c2d4876bb01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 06:40:28 compute-0 podman[148083]: 2025-12-06 06:40:28.025574955 +0000 UTC m=+0.221343225 container attach 5548baa0ba0ded7e2d190d072a08e4aa5738f76567abcacdbd0f7c2d4876bb01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 06:40:28 compute-0 systemd[1]: Stopping User Manager for UID 0...
Dec 06 06:40:28 compute-0 systemd[147205]: Activating special unit Exit the Session...
Dec 06 06:40:28 compute-0 systemd[147205]: Stopped target Main User Target.
Dec 06 06:40:28 compute-0 systemd[147205]: Stopped target Basic System.
Dec 06 06:40:28 compute-0 systemd[147205]: Stopped target Paths.
Dec 06 06:40:28 compute-0 systemd[147205]: Stopped target Sockets.
Dec 06 06:40:28 compute-0 systemd[147205]: Stopped target Timers.
Dec 06 06:40:28 compute-0 systemd[147205]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 06:40:28 compute-0 systemd[147205]: Closed D-Bus User Message Bus Socket.
Dec 06 06:40:28 compute-0 systemd[147205]: Stopped Create User's Volatile Files and Directories.
Dec 06 06:40:28 compute-0 systemd[147205]: Removed slice User Application Slice.
Dec 06 06:40:28 compute-0 systemd[147205]: Reached target Shutdown.
Dec 06 06:40:28 compute-0 systemd[147205]: Finished Exit the Session.
Dec 06 06:40:28 compute-0 systemd[147205]: Reached target Exit the Session.
Dec 06 06:40:28 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Dec 06 06:40:28 compute-0 systemd[1]: Stopped User Manager for UID 0.
Dec 06 06:40:28 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 06 06:40:28 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 06 06:40:28 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 06 06:40:28 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 06 06:40:28 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Dec 06 06:40:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:28 compute-0 sshd-session[148107]: Accepted publickey for zuul from 192.168.122.30 port 36964 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:40:28 compute-0 systemd-logind[798]: New session 48 of user zuul.
Dec 06 06:40:28 compute-0 systemd[1]: Started Session 48 of User zuul.
Dec 06 06:40:28 compute-0 ceph-mon[74339]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:28 compute-0 sshd-session[148107]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:40:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:28 compute-0 thirsty_yalow[148099]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:40:28 compute-0 thirsty_yalow[148099]: --> relative data size: 1.0
Dec 06 06:40:28 compute-0 thirsty_yalow[148099]: --> All data devices are unavailable
Dec 06 06:40:28 compute-0 systemd[1]: libpod-5548baa0ba0ded7e2d190d072a08e4aa5738f76567abcacdbd0f7c2d4876bb01.scope: Deactivated successfully.
Dec 06 06:40:28 compute-0 podman[148083]: 2025-12-06 06:40:28.888425454 +0000 UTC m=+1.084193694 container died 5548baa0ba0ded7e2d190d072a08e4aa5738f76567abcacdbd0f7c2d4876bb01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yalow, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:40:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcdeaee860466c821157faf96aeeb2e44a6cea900cf1092093b0fdd6c90ae278-merged.mount: Deactivated successfully.
Dec 06 06:40:28 compute-0 podman[148083]: 2025-12-06 06:40:28.948474128 +0000 UTC m=+1.144242358 container remove 5548baa0ba0ded7e2d190d072a08e4aa5738f76567abcacdbd0f7c2d4876bb01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yalow, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 06:40:28 compute-0 systemd[1]: libpod-conmon-5548baa0ba0ded7e2d190d072a08e4aa5738f76567abcacdbd0f7c2d4876bb01.scope: Deactivated successfully.
Dec 06 06:40:28 compute-0 sudo[147978]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:29 compute-0 sudo[148187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:29 compute-0 sudo[148187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:29 compute-0 sudo[148187]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:29.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:29 compute-0 sudo[148217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:40:29 compute-0 sudo[148217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:29 compute-0 sudo[148217]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:29 compute-0 sudo[148262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:29 compute-0 sudo[148262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:29 compute-0 sudo[148262]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:29 compute-0 sudo[148309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:40:29 compute-0 sudo[148309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:29 compute-0 podman[148426]: 2025-12-06 06:40:29.530206686 +0000 UTC m=+0.048285788 container create 5206903a9d95cefdb68d3fe47c431dd4ca13a1a49f7d1085e400156d4e41137e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:40:29 compute-0 python3.9[148384]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:40:29 compute-0 systemd[1]: Started libpod-conmon-5206903a9d95cefdb68d3fe47c431dd4ca13a1a49f7d1085e400156d4e41137e.scope.
Dec 06 06:40:29 compute-0 podman[148426]: 2025-12-06 06:40:29.503854227 +0000 UTC m=+0.021933349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:40:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:40:29 compute-0 podman[148426]: 2025-12-06 06:40:29.620457897 +0000 UTC m=+0.138537029 container init 5206903a9d95cefdb68d3fe47c431dd4ca13a1a49f7d1085e400156d4e41137e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lumiere, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:40:29 compute-0 podman[148426]: 2025-12-06 06:40:29.630705041 +0000 UTC m=+0.148784143 container start 5206903a9d95cefdb68d3fe47c431dd4ca13a1a49f7d1085e400156d4e41137e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 06:40:29 compute-0 podman[148426]: 2025-12-06 06:40:29.634355122 +0000 UTC m=+0.152434224 container attach 5206903a9d95cefdb68d3fe47c431dd4ca13a1a49f7d1085e400156d4e41137e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lumiere, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 06:40:29 compute-0 laughing_lumiere[148446]: 167 167
Dec 06 06:40:29 compute-0 systemd[1]: libpod-5206903a9d95cefdb68d3fe47c431dd4ca13a1a49f7d1085e400156d4e41137e.scope: Deactivated successfully.
Dec 06 06:40:29 compute-0 conmon[148446]: conmon 5206903a9d95cefdb68d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5206903a9d95cefdb68d3fe47c431dd4ca13a1a49f7d1085e400156d4e41137e.scope/container/memory.events
Dec 06 06:40:29 compute-0 podman[148426]: 2025-12-06 06:40:29.638442256 +0000 UTC m=+0.156521358 container died 5206903a9d95cefdb68d3fe47c431dd4ca13a1a49f7d1085e400156d4e41137e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lumiere, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 06:40:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:40:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:29.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:40:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-48ef3f87550d32889227e1a0dc256b2b5bc0ef508702c9a1b21a17530e40c1b1-merged.mount: Deactivated successfully.
Dec 06 06:40:29 compute-0 podman[148426]: 2025-12-06 06:40:29.675485942 +0000 UTC m=+0.193565044 container remove 5206903a9d95cefdb68d3fe47c431dd4ca13a1a49f7d1085e400156d4e41137e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lumiere, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:40:29 compute-0 systemd[1]: libpod-conmon-5206903a9d95cefdb68d3fe47c431dd4ca13a1a49f7d1085e400156d4e41137e.scope: Deactivated successfully.
Dec 06 06:40:29 compute-0 podman[148469]: 2025-12-06 06:40:29.83672387 +0000 UTC m=+0.042226842 container create 0319b0cf9442bec29d46974c7ace92a00fc77665d86942d95dce7aba7c98807d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 06:40:29 compute-0 systemd[1]: Started libpod-conmon-0319b0cf9442bec29d46974c7ace92a00fc77665d86942d95dce7aba7c98807d.scope.
Dec 06 06:40:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b2b6649c01521682e9020d801d70984e159c48de3badb1e30a5665147baf11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b2b6649c01521682e9020d801d70984e159c48de3badb1e30a5665147baf11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b2b6649c01521682e9020d801d70984e159c48de3badb1e30a5665147baf11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b2b6649c01521682e9020d801d70984e159c48de3badb1e30a5665147baf11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:29 compute-0 podman[148469]: 2025-12-06 06:40:29.818671169 +0000 UTC m=+0.024174141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:40:29 compute-0 podman[148469]: 2025-12-06 06:40:29.914843464 +0000 UTC m=+0.120346456 container init 0319b0cf9442bec29d46974c7ace92a00fc77665d86942d95dce7aba7c98807d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:40:29 compute-0 podman[148469]: 2025-12-06 06:40:29.922480845 +0000 UTC m=+0.127983817 container start 0319b0cf9442bec29d46974c7ace92a00fc77665d86942d95dce7aba7c98807d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendeleev, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 06:40:29 compute-0 podman[148469]: 2025-12-06 06:40:29.925613872 +0000 UTC m=+0.131116864 container attach 0319b0cf9442bec29d46974c7ace92a00fc77665d86942d95dce7aba7c98807d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendeleev, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:40:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:30 compute-0 sudo[148644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luylbjnhpzglvjaocprmhrkzvoijbphf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003230.2206318-67-158054310638452/AnsiballZ_file.py'
Dec 06 06:40:30 compute-0 sudo[148644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]: {
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:     "0": [
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:         {
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             "devices": [
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "/dev/loop3"
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             ],
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             "lv_name": "ceph_lv0",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             "lv_size": "7511998464",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             "name": "ceph_lv0",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             "tags": {
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "ceph.cluster_name": "ceph",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "ceph.crush_device_class": "",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "ceph.encrypted": "0",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "ceph.osd_id": "0",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "ceph.type": "block",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:                 "ceph.vdo": "0"
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             },
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             "type": "block",
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:             "vg_name": "ceph_vg0"
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:         }
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]:     ]
Dec 06 06:40:30 compute-0 elegant_mendeleev[148510]: }
Dec 06 06:40:30 compute-0 systemd[1]: libpod-0319b0cf9442bec29d46974c7ace92a00fc77665d86942d95dce7aba7c98807d.scope: Deactivated successfully.
Dec 06 06:40:30 compute-0 podman[148469]: 2025-12-06 06:40:30.751413273 +0000 UTC m=+0.956916245 container died 0319b0cf9442bec29d46974c7ace92a00fc77665d86942d95dce7aba7c98807d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5b2b6649c01521682e9020d801d70984e159c48de3badb1e30a5665147baf11-merged.mount: Deactivated successfully.
Dec 06 06:40:30 compute-0 podman[148469]: 2025-12-06 06:40:30.811608541 +0000 UTC m=+1.017111533 container remove 0319b0cf9442bec29d46974c7ace92a00fc77665d86942d95dce7aba7c98807d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 06:40:30 compute-0 systemd[1]: libpod-conmon-0319b0cf9442bec29d46974c7ace92a00fc77665d86942d95dce7aba7c98807d.scope: Deactivated successfully.
Dec 06 06:40:30 compute-0 sudo[148309]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:30 compute-0 python3.9[148647]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:30 compute-0 ceph-mon[74339]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:30 compute-0 sudo[148644]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:30 compute-0 sudo[148662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:30 compute-0 sudo[148662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:30 compute-0 sudo[148662]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:30 compute-0 sudo[148687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:40:30 compute-0 sudo[148687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:30 compute-0 sudo[148687]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:31 compute-0 sudo[148736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:31 compute-0 sudo[148736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:31 compute-0 sudo[148736]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:31.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:31 compute-0 sudo[148784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:40:31 compute-0 sudo[148784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:31 compute-0 sudo[148931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrphzyzihlnjlbfltpufhnngzqnvurfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003231.0447924-67-228378397199457/AnsiballZ_file.py'
Dec 06 06:40:31 compute-0 sudo[148931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:31 compute-0 podman[148955]: 2025-12-06 06:40:31.442412319 +0000 UTC m=+0.046404217 container create 4fac2a2cc930ca18306eb18872176e94259ab7c65c48239588d3bf522bc96819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:40:31 compute-0 systemd[1]: Started libpod-conmon-4fac2a2cc930ca18306eb18872176e94259ab7c65c48239588d3bf522bc96819.scope.
Dec 06 06:40:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:40:31 compute-0 podman[148955]: 2025-12-06 06:40:31.421695425 +0000 UTC m=+0.025687363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:40:31 compute-0 podman[148955]: 2025-12-06 06:40:31.518579549 +0000 UTC m=+0.122571487 container init 4fac2a2cc930ca18306eb18872176e94259ab7c65c48239588d3bf522bc96819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 06:40:31 compute-0 python3.9[148940]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:31 compute-0 podman[148955]: 2025-12-06 06:40:31.527184268 +0000 UTC m=+0.131176176 container start 4fac2a2cc930ca18306eb18872176e94259ab7c65c48239588d3bf522bc96819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:40:31 compute-0 podman[148955]: 2025-12-06 06:40:31.530652664 +0000 UTC m=+0.134644592 container attach 4fac2a2cc930ca18306eb18872176e94259ab7c65c48239588d3bf522bc96819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:40:31 compute-0 optimistic_kepler[148971]: 167 167
Dec 06 06:40:31 compute-0 systemd[1]: libpod-4fac2a2cc930ca18306eb18872176e94259ab7c65c48239588d3bf522bc96819.scope: Deactivated successfully.
Dec 06 06:40:31 compute-0 podman[148955]: 2025-12-06 06:40:31.533248596 +0000 UTC m=+0.137240494 container died 4fac2a2cc930ca18306eb18872176e94259ab7c65c48239588d3bf522bc96819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:40:31 compute-0 sudo[148931]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-813c89627fca8e789287b64b493bae8ced4d6496e7e696c0b66828afcc0a5891-merged.mount: Deactivated successfully.
Dec 06 06:40:31 compute-0 podman[148955]: 2025-12-06 06:40:31.56732794 +0000 UTC m=+0.171319848 container remove 4fac2a2cc930ca18306eb18872176e94259ab7c65c48239588d3bf522bc96819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:40:31 compute-0 systemd[1]: libpod-conmon-4fac2a2cc930ca18306eb18872176e94259ab7c65c48239588d3bf522bc96819.scope: Deactivated successfully.
Dec 06 06:40:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:31.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:31 compute-0 podman[149039]: 2025-12-06 06:40:31.717122711 +0000 UTC m=+0.037781209 container create 4abfed878c112572c7cfbbf92e748bb4c5412700472c7e841b19d16aa1ed425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 06:40:31 compute-0 systemd[1]: Started libpod-conmon-4abfed878c112572c7cfbbf92e748bb4c5412700472c7e841b19d16aa1ed425b.scope.
Dec 06 06:40:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:40:31 compute-0 podman[149039]: 2025-12-06 06:40:31.699403609 +0000 UTC m=+0.020062137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c92c45bf855b0d7b4ade8939b93fc84b851a169857e0ba7b6a996bb54dc8d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c92c45bf855b0d7b4ade8939b93fc84b851a169857e0ba7b6a996bb54dc8d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c92c45bf855b0d7b4ade8939b93fc84b851a169857e0ba7b6a996bb54dc8d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c92c45bf855b0d7b4ade8939b93fc84b851a169857e0ba7b6a996bb54dc8d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:40:31 compute-0 podman[149039]: 2025-12-06 06:40:31.813817109 +0000 UTC m=+0.134475627 container init 4abfed878c112572c7cfbbf92e748bb4c5412700472c7e841b19d16aa1ed425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:40:31 compute-0 podman[149039]: 2025-12-06 06:40:31.822311485 +0000 UTC m=+0.142969983 container start 4abfed878c112572c7cfbbf92e748bb4c5412700472c7e841b19d16aa1ed425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:40:31 compute-0 podman[149039]: 2025-12-06 06:40:31.825418471 +0000 UTC m=+0.146076999 container attach 4abfed878c112572c7cfbbf92e748bb4c5412700472c7e841b19d16aa1ed425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:40:31 compute-0 sudo[149165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmupdkvtnyzgodijmcxqxghcpfdcdttr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003231.6930223-67-144789297631782/AnsiballZ_file.py'
Dec 06 06:40:31 compute-0 sudo[149165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:32 compute-0 python3.9[149167]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:32 compute-0 sudo[149165]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:32 compute-0 ceph-mon[74339]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:32 compute-0 interesting_wright[149087]: {
Dec 06 06:40:32 compute-0 interesting_wright[149087]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:40:32 compute-0 interesting_wright[149087]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:40:32 compute-0 interesting_wright[149087]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:40:32 compute-0 interesting_wright[149087]:         "osd_id": 0,
Dec 06 06:40:32 compute-0 interesting_wright[149087]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:40:32 compute-0 interesting_wright[149087]:         "type": "bluestore"
Dec 06 06:40:32 compute-0 interesting_wright[149087]:     }
Dec 06 06:40:32 compute-0 interesting_wright[149087]: }
Dec 06 06:40:32 compute-0 systemd[1]: libpod-4abfed878c112572c7cfbbf92e748bb4c5412700472c7e841b19d16aa1ed425b.scope: Deactivated successfully.
Dec 06 06:40:32 compute-0 podman[149039]: 2025-12-06 06:40:32.674525868 +0000 UTC m=+0.995184366 container died 4abfed878c112572c7cfbbf92e748bb4c5412700472c7e841b19d16aa1ed425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 06:40:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8c92c45bf855b0d7b4ade8939b93fc84b851a169857e0ba7b6a996bb54dc8d3-merged.mount: Deactivated successfully.
Dec 06 06:40:32 compute-0 podman[149039]: 2025-12-06 06:40:32.727623279 +0000 UTC m=+1.048281777 container remove 4abfed878c112572c7cfbbf92e748bb4c5412700472c7e841b19d16aa1ed425b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:40:32 compute-0 systemd[1]: libpod-conmon-4abfed878c112572c7cfbbf92e748bb4c5412700472c7e841b19d16aa1ed425b.scope: Deactivated successfully.
Dec 06 06:40:32 compute-0 sudo[149344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrsaoltrkcorwpqwsyazpammjohpjztj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003232.3096364-67-79680252161096/AnsiballZ_file.py'
Dec 06 06:40:32 compute-0 sudo[149344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:32 compute-0 sudo[148784]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:40:32 compute-0 sudo[149346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:32 compute-0 sudo[149346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:32 compute-0 sudo[149346]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:40:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:32 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9b44a409-f183-43fa-99c9-26e388927e32 does not exist
Dec 06 06:40:32 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9cb75162-35cc-4ab7-89ad-3973b43a8f6d does not exist
Dec 06 06:40:32 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 972b7cb0-0b8d-470f-a0a5-17d1d599c052 does not exist
Dec 06 06:40:32 compute-0 sudo[149372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:32 compute-0 sudo[149373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:32 compute-0 sudo[149372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:32 compute-0 sudo[149373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:32 compute-0 sudo[149373]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:32 compute-0 sudo[149372]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:32 compute-0 python3.9[149347]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:32 compute-0 sudo[149422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:40:32 compute-0 sudo[149422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:32 compute-0 sudo[149422]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:32 compute-0 sudo[149344]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:40:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:33.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:40:33 compute-0 sudo[149596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxszyeissghuvxylqulbpquyweuxlqtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003233.0782607-67-210842651128296/AnsiballZ_file.py'
Dec 06 06:40:33 compute-0 sudo[149596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:33 compute-0 python3.9[149598]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:33 compute-0 sudo[149596]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:40:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:33.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:40:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:40:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:34 compute-0 python3.9[149749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:40:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:35.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:35 compute-0 ceph-mon[74339]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:35 compute-0 sudo[149899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eilcisihuffwfshqthlxtczcdbgofood ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003234.8705125-199-146955105446907/AnsiballZ_seboolean.py'
Dec 06 06:40:35 compute-0 sudo[149899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:35 compute-0 python3.9[149901]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 06 06:40:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:40:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:35.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:40:36 compute-0 sudo[149899]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:37.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:37 compute-0 python3.9[150053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:37 compute-0 ceph-mon[74339]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:40:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:37.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:40:37 compute-0 python3.9[150174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003236.560049-223-181750261111374/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:38 compute-0 python3.9[150325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:39.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:39 compute-0 python3.9[150446]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003238.1004605-268-86005279373130/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:39 compute-0 ceph-mon[74339]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:40:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:39.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:40:39 compute-0 sudo[150596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfjpugwaatoqzskpkljjczkrejgfascp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003239.68156-319-5191202386639/AnsiballZ_setup.py'
Dec 06 06:40:40 compute-0 sudo[150596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:40 compute-0 python3.9[150598]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:40:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:40 compute-0 sudo[150596]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:40 compute-0 ceph-mon[74339]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:41 compute-0 sudo[150681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcrmddhkidfabgrkhllhrdecnvstqlsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003239.68156-319-5191202386639/AnsiballZ_dnf.py'
Dec 06 06:40:41 compute-0 sudo[150681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:40:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:41.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:40:41 compute-0 python3.9[150683]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:40:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:41.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:42 compute-0 sudo[150681]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:40:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:40:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:40:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:40:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:40:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:40:43 compute-0 ceph-mon[74339]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:43.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:43 compute-0 sudo[150835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbqyqmgoigpqssghfmimuqoohxoycxsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003242.9725053-355-278232806094596/AnsiballZ_systemd.py'
Dec 06 06:40:43 compute-0 sudo[150835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:43.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:43 compute-0 python3.9[150837]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:40:43 compute-0 sudo[150835]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:45.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:45 compute-0 ceph-mon[74339]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:45 compute-0 python3.9[150991]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:45.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:46 compute-0 python3.9[151112]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003245.1155708-379-96324332069569/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:46 compute-0 python3.9[151263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:47.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:47 compute-0 python3.9[151384]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003246.2587454-379-211689139810055/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:47 compute-0 ceph-mon[74339]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:47.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:48 compute-0 ovn_controller[147168]: 2025-12-06T06:40:48Z|00025|memory|INFO|16512 kB peak resident set size after 30.4 seconds
Dec 06 06:40:48 compute-0 ovn_controller[147168]: 2025-12-06T06:40:48Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Dec 06 06:40:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:48 compute-0 podman[151509]: 2025-12-06 06:40:48.428925773 +0000 UTC m=+0.082727453 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 06:40:48 compute-0 python3.9[151546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:49 compute-0 python3.9[151683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003248.116292-511-268694528602588/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:49.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:49 compute-0 ceph-mon[74339]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:49.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:49 compute-0 python3.9[151833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:50 compute-0 python3.9[151954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003249.3594317-511-77355442185655/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:51 compute-0 ceph-mon[74339]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:51.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:51 compute-0 python3.9[152105]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:40:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:40:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:51.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:40:51 compute-0 sudo[152257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qogpbsufjddxicfpohzchuxwcgqygtkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003251.4912817-625-161448688088178/AnsiballZ_file.py'
Dec 06 06:40:51 compute-0 sudo[152257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:51 compute-0 python3.9[152259]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:52 compute-0 sudo[152257]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:52 compute-0 ceph-mon[74339]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:52 compute-0 sudo[152410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uimquuqcbamkbarwkmnzpiqinqrasqqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003252.3266518-649-176316648664452/AnsiballZ_stat.py'
Dec 06 06:40:52 compute-0 sudo[152410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:52 compute-0 python3.9[152412]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:52 compute-0 sudo[152410]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:52 compute-0 sudo[152438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:52 compute-0 sudo[152438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:52 compute-0 sudo[152438]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:53 compute-0 sudo[152484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:40:53 compute-0 sudo[152484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:40:53 compute-0 sudo[152484]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:53 compute-0 sudo[152538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idnjgiykxzxekcsluhmwedgbhqggzzsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003252.3266518-649-176316648664452/AnsiballZ_file.py'
Dec 06 06:40:53 compute-0 sudo[152538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:53.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:53 compute-0 python3.9[152540]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:53 compute-0 sudo[152538]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:40:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:53.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:40:53 compute-0 sudo[152690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftumeyldgzgxufiajzulhxvhmrngpyht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003253.4310215-649-118729295940445/AnsiballZ_stat.py'
Dec 06 06:40:53 compute-0 sudo[152690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:54 compute-0 python3.9[152692]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:54 compute-0 sudo[152690]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:54 compute-0 sudo[152769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksndsbwdlpkxxineziegjkgffyikjrcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003253.4310215-649-118729295940445/AnsiballZ_file.py'
Dec 06 06:40:54 compute-0 sudo[152769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:54 compute-0 ceph-mon[74339]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:54 compute-0 python3.9[152771]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:40:54 compute-0 sudo[152769]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:55.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:55 compute-0 sudo[152921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfzsbmozeignktnbooinqurptetywwcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003254.9414525-718-40625314365468/AnsiballZ_file.py'
Dec 06 06:40:55 compute-0 sudo[152921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:55 compute-0 python3.9[152923]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:40:55 compute-0 sudo[152921]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:55.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:55 compute-0 sudo[153073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viwcikqfatusyubaoljvzlswlwmetfaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003255.6672034-742-265881870435827/AnsiballZ_stat.py'
Dec 06 06:40:55 compute-0 sudo[153073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:56 compute-0 python3.9[153075]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:56 compute-0 sudo[153073]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:56 compute-0 sudo[153152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqzlqrnpokaecrptyorxkklhtcsqsrhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003255.6672034-742-265881870435827/AnsiballZ_file.py'
Dec 06 06:40:56 compute-0 sudo[153152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:56 compute-0 python3.9[153154]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:40:56 compute-0 sudo[153152]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:57.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:57 compute-0 sudo[153304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urdrnrpznvkgodermflrjnhaovsintbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003256.8760579-778-209871408576365/AnsiballZ_stat.py'
Dec 06 06:40:57 compute-0 sudo[153304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:57 compute-0 python3.9[153306]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:57 compute-0 sudo[153304]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:57 compute-0 sudo[153382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wedytaiiisflvejjornhaetdeqelyydo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003256.8760579-778-209871408576365/AnsiballZ_file.py'
Dec 06 06:40:57 compute-0 sudo[153382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:40:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:57.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:40:57 compute-0 python3.9[153384]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:40:57 compute-0 ceph-mon[74339]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:57 compute-0 sudo[153382]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:58 compute-0 sudo[153535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avdysaosenqzyljbshdjotaseugktvto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003258.0978909-814-273047608747668/AnsiballZ_systemd.py'
Dec 06 06:40:58 compute-0 sudo[153535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:58 compute-0 python3.9[153537]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:40:58 compute-0 systemd[1]: Reloading.
Dec 06 06:40:58 compute-0 systemd-sysv-generator[153566]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:40:58 compute-0 systemd-rc-local-generator[153562]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:40:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:40:58 compute-0 ceph-mon[74339]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:40:58 compute-0 sudo[153535]: pam_unix(sudo:session): session closed for user root
Dec 06 06:40:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:40:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:40:59.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:40:59 compute-0 sudo[153723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pihoatsuptfpjtnkbopuzghbfxglfeof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003259.2740388-838-37853052075228/AnsiballZ_stat.py'
Dec 06 06:40:59 compute-0 sudo[153723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:40:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:40:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:40:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:40:59.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:40:59 compute-0 python3.9[153725]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:40:59 compute-0 sudo[153723]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:00 compute-0 sudo[153801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eudsnkyfbvpwhdvvkkcjdkamquxjziqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003259.2740388-838-37853052075228/AnsiballZ_file.py'
Dec 06 06:41:00 compute-0 sudo[153801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:00 compute-0 python3.9[153803]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:41:00 compute-0 sudo[153801]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:41:00 compute-0 sudo[153954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gagdhhzegtgeihgorlcxtcsxlnefeqfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003260.6279454-874-44780110477655/AnsiballZ_stat.py'
Dec 06 06:41:00 compute-0 sudo[153954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:01 compute-0 anacron[30883]: Job `cron.daily' started
Dec 06 06:41:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:01.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:01 compute-0 anacron[30883]: Job `cron.daily' terminated
Dec 06 06:41:01 compute-0 python3.9[153956]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:41:01 compute-0 sudo[153954]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:01 compute-0 sudo[154034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxemtkrrwopemlgpwebdrxzepfdofqxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003260.6279454-874-44780110477655/AnsiballZ_file.py'
Dec 06 06:41:01 compute-0 sudo[154034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:01 compute-0 python3.9[154036]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:41:01 compute-0 sudo[154034]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:41:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:01.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:41:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:41:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:03.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:03.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:41:04 compute-0 ceph-mon[74339]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:41:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:05.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:05 compute-0 sudo[154188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbqpgsdkuvnlrpnmhltvjjqviasnrtqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003265.0349035-910-170129974112567/AnsiballZ_systemd.py'
Dec 06 06:41:05 compute-0 sudo[154188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:05 compute-0 python3.9[154190]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:41:05 compute-0 systemd[1]: Reloading.
Dec 06 06:41:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:05.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:05 compute-0 systemd-rc-local-generator[154219]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:41:05 compute-0 systemd-sysv-generator[154222]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:41:05 compute-0 systemd[1]: Starting Create netns directory...
Dec 06 06:41:05 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 06:41:05 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 06:41:05 compute-0 systemd[1]: Finished Create netns directory.
Dec 06 06:41:06 compute-0 sudo[154188]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:06 compute-0 ceph-mon[74339]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:41:06 compute-0 ceph-mon[74339]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:41:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:06 compute-0 sudo[154383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ratqzrjphhnvgdsitpztgzugrnqnqhom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003266.528056-940-243782491761349/AnsiballZ_file.py'
Dec 06 06:41:06 compute-0 sudo[154383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:06 compute-0 python3.9[154385]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:41:06 compute-0 sudo[154383]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:07.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:07 compute-0 sudo[154535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcemcoxtvmheafgsieutntsvzzmeczpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003267.2705123-964-90262785908462/AnsiballZ_stat.py'
Dec 06 06:41:07 compute-0 sudo[154535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:41:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:07.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:41:07 compute-0 python3.9[154537]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:41:07 compute-0 sudo[154535]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:07 compute-0 ceph-mon[74339]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:08 compute-0 sudo[154658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tppqeimjqwcjaezmtaqcnrkhowxicdkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003267.2705123-964-90262785908462/AnsiballZ_copy.py'
Dec 06 06:41:08 compute-0 sudo[154658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:08 compute-0 python3.9[154660]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003267.2705123-964-90262785908462/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:41:08 compute-0 sudo[154658]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec 06 06:41:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:08 compute-0 ceph-mon[74339]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec 06 06:41:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:41:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:09.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:41:09 compute-0 sudo[154811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgxellakzbjgrrscvxczmaluqdtuahbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003268.8041604-1015-243523720955908/AnsiballZ_file.py'
Dec 06 06:41:09 compute-0 sudo[154811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:09 compute-0 python3.9[154813]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:41:09 compute-0 sudo[154811]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:09.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:09 compute-0 sudo[154963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bquzisuyyhbhnbyecfecfxtseazpvntp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003269.6041715-1039-184432960918833/AnsiballZ_stat.py'
Dec 06 06:41:09 compute-0 sudo[154963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:10 compute-0 python3.9[154965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:41:10 compute-0 sudo[154963]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:10 compute-0 sudo[155087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eijowibttaqszbwryiqqqrvbymtxxlfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003269.6041715-1039-184432960918833/AnsiballZ_copy.py'
Dec 06 06:41:10 compute-0 sudo[155087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec 06 06:41:10 compute-0 python3.9[155089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003269.6041715-1039-184432960918833/.source.json _original_basename=.ouzh01hz follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:41:10 compute-0 sudo[155087]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:11.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:11 compute-0 sudo[155239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djzenlcbhllotkevufbwdcdczzhzqmwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003271.2934492-1084-177079248936184/AnsiballZ_file.py'
Dec 06 06:41:11 compute-0 sudo[155239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:41:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:11.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:41:11 compute-0 python3.9[155241]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:41:11 compute-0 sudo[155239]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:12 compute-0 ceph-mon[74339]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Dec 06 06:41:12 compute-0 sudo[155392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bptorgxunmrogykfdwaaqqngtknwdyxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003272.0248032-1108-180492598128062/AnsiballZ_stat.py'
Dec 06 06:41:12 compute-0 sudo[155392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 16 op/s
Dec 06 06:41:12 compute-0 sudo[155392]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:12 compute-0 sudo[155515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpdkgpqutzqtjzffodyokswhyrtwdymn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003272.0248032-1108-180492598128062/AnsiballZ_copy.py'
Dec 06 06:41:12 compute-0 sudo[155515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:41:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:41:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:41:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:41:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:41:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:41:12 compute-0 sudo[155515]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:13 compute-0 sudo[155541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:13 compute-0 sudo[155541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:13 compute-0 sudo[155541]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:13.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:13 compute-0 sudo[155567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:13 compute-0 sudo[155567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:13 compute-0 sudo[155567]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:13 compute-0 ceph-mon[74339]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 16 op/s
Dec 06 06:41:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:41:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:13.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:41:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:13 compute-0 ceph-mgr[74630]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 06:41:13 compute-0 sudo[155717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfmioywqgvxydnfujswfyuhfgzcomlia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003273.5121462-1159-101923863081380/AnsiballZ_container_config_data.py'
Dec 06 06:41:13 compute-0 sudo[155717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:14 compute-0 python3.9[155719]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 06 06:41:14 compute-0 sudo[155717]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 06 06:41:14 compute-0 sudo[155870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssnaknamesyypfhequqscgqacknzbuzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003274.43395-1186-48223181981802/AnsiballZ_container_config_hash.py'
Dec 06 06:41:14 compute-0 sudo[155870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:15 compute-0 python3.9[155872]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 06:41:15 compute-0 sudo[155870]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:41:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:15.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:41:15 compute-0 ceph-mon[74339]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Dec 06 06:41:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:15.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:15 compute-0 sudo[156022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zynvzcijhfusevcaoihvcmukydvpkckb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003275.3754687-1213-162962895688747/AnsiballZ_podman_container_info.py'
Dec 06 06:41:15 compute-0 sudo[156022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:16 compute-0 python3.9[156024]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 06:41:16 compute-0 sudo[156022]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 06 06:41:17 compute-0 ceph-mon[74339]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 06 06:41:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:17.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:17 compute-0 sudo[156202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgskunaqzznamkdvemlqgncshhjgqcoy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003276.9622371-1252-135508567110014/AnsiballZ_edpm_container_manage.py'
Dec 06 06:41:17 compute-0 sudo[156202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:17.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:17 compute-0 python3[156204]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 06:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:41:18
Dec 06 06:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', 'volumes', '.mgr', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups']
Dec 06 06:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:41:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Dec 06 06:41:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:19.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:19 compute-0 podman[156251]: 2025-12-06 06:41:19.474401265 +0000 UTC m=+0.122963598 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 06:41:19 compute-0 ceph-mon[74339]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Dec 06 06:41:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:41:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:19.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:41:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Dec 06 06:41:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:21.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:41:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:21.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:41:22 compute-0 ceph-mon[74339]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Dec 06 06:41:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 62 KiB/s rd, 0 B/s wr, 103 op/s
Dec 06 06:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:41:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:23.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:41:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:41:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:23.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:41:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 92 op/s
Dec 06 06:41:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:25.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:41:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:41:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:25.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 0 B/s wr, 85 op/s
Dec 06 06:41:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:41:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:27.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:41:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:41:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:27.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:41:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 81 op/s
Dec 06 06:41:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:41:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:29.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:41:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:41:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:29.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:41:30 compute-0 podman[156216]: 2025-12-06 06:41:30.148989579 +0000 UTC m=+12.314930476 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 06:41:30 compute-0 podman[156366]: 2025-12-06 06:41:30.267901864 +0000 UTC m=+0.022419403 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 06:41:30 compute-0 podman[156366]: 2025-12-06 06:41:30.38435023 +0000 UTC m=+0.138867739 container create 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 06:41:30 compute-0 python3[156204]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 06:41:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Dec 06 06:41:30 compute-0 sudo[156202]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:31.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:31.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).paxos(paxos updating c 503..1222) accept timeout, calling fresh election
Dec 06 06:41:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:41:32 compute-0 ceph-mon[74339]: paxos.0).electionLogic(36) init, last seen epoch 36
Dec 06 06:41:32 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:41:32 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 06:41:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Dec 06 06:41:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:33.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:33 compute-0 sudo[156427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:33 compute-0 sudo[156427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:33 compute-0 sudo[156427]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:33 compute-0 sudo[156449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:33 compute-0 sudo[156449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:33 compute-0 sudo[156449]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:33 compute-0 sudo[156475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:33 compute-0 sudo[156475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:33 compute-0 sudo[156475]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:33 compute-0 sudo[156500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:41:33 compute-0 sudo[156500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:33 compute-0 sudo[156500]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:33 compute-0 sudo[156527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:33 compute-0 sudo[156527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:33 compute-0 sudo[156527]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:33 compute-0 sudo[156552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:41:33 compute-0 sudo[156552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:41:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:33.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:41:33 compute-0 sudo[156552]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Dec 06 06:41:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:35.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:41:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:35.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:41:36 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 06:41:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:37.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:41:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:41:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:41:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:41:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:41:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 15m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:41:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 06:41:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:37.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:41:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:41:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 06:41:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:38 compute-0 ceph-mon[74339]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 92 op/s
Dec 06 06:41:38 compute-0 ceph-mon[74339]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 0 B/s wr, 85 op/s
Dec 06 06:41:38 compute-0 ceph-mon[74339]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 81 op/s
Dec 06 06:41:38 compute-0 ceph-mon[74339]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Dec 06 06:41:38 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 06:41:38 compute-0 ceph-mon[74339]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Dec 06 06:41:38 compute-0 ceph-mon[74339]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Dec 06 06:41:38 compute-0 ceph-mon[74339]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:38 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 06:41:38 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:41:38 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:41:38 compute-0 ceph-mon[74339]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:41:38 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 15m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:41:38 compute-0 ceph-mon[74339]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 06:41:38 compute-0 ceph-mon[74339]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:41:38 compute-0 ceph-mon[74339]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 06:41:38 compute-0 ceph-mon[74339]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 06:41:38 compute-0 ceph-mon[74339]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:41:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:39.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:41:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:39.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:40 compute-0 ceph-mon[74339]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:41.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:41.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:41:42 compute-0 ceph-mon[74339]: paxos.0).electionLogic(38) init, last seen epoch 38
Dec 06 06:41:42 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:41:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:41:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:41:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:41:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:41:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:41:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:41:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:41:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:43.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:43.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:41:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:45.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:41:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:45.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 852 B/s rd, 0 B/s wr, 1 op/s
Dec 06 06:41:46 compute-0 ceph-mon[74339]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:41:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:41:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:41:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:41:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 15m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:41:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 06:41:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 06:41:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:47.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:47.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 06:41:48 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 06:41:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:41:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:41:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:41:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:41:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:41:48 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 06:41:48 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 06:41:48 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:41:48 compute-0 ceph-mon[74339]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 06 06:41:48 compute-0 ceph-mon[74339]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 852 B/s rd, 0 B/s wr, 1 op/s
Dec 06 06:41:48 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:41:48 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:41:48 compute-0 ceph-mon[74339]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:41:48 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 15m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:41:48 compute-0 ceph-mon[74339]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 06:41:48 compute-0 ceph-mon[74339]: Cluster is now healthy
Dec 06 06:41:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:49.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:41:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 418a40e4-ae5f-464b-bb46-8defdfaa607c does not exist
Dec 06 06:41:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0d68a606-98fb-47ed-b7f9-8db77d10fc4a does not exist
Dec 06 06:41:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3e8dd6bc-4762-4eb9-a36d-1cbd23b64304 does not exist
Dec 06 06:41:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:41:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:41:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:41:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:41:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:41:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:41:49 compute-0 sudo[156616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:49 compute-0 sudo[156616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:49 compute-0 sudo[156616]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:49 compute-0 sudo[156641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:41:49 compute-0 sudo[156641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:49 compute-0 sudo[156641]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:49 compute-0 sudo[156666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:49 compute-0 sudo[156666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:49 compute-0 sudo[156666]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:49 compute-0 sudo[156691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:41:49 compute-0 sudo[156691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:49 compute-0 podman[156715]: 2025-12-06 06:41:49.667209582 +0000 UTC m=+0.107570829 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 06 06:41:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:49.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:50 compute-0 podman[156783]: 2025-12-06 06:41:49.936783713 +0000 UTC m=+0.037338680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:41:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 06:41:51 compute-0 podman[156783]: 2025-12-06 06:41:51.136094729 +0000 UTC m=+1.236649596 container create 4133e1531f93497f46c132abea1236777040a80938b1c32019de4c934dd12f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_haslett, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 06:41:51 compute-0 ceph-mon[74339]: mon.compute-1 calling monitor election
Dec 06 06:41:51 compute-0 ceph-mon[74339]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 06:41:51 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 06:41:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:41:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:41:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:41:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:41:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:41:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:41:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:41:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:51.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:41:51 compute-0 systemd[1]: Started libpod-conmon-4133e1531f93497f46c132abea1236777040a80938b1c32019de4c934dd12f2a.scope.
Dec 06 06:41:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:41:51 compute-0 podman[156783]: 2025-12-06 06:41:51.306694171 +0000 UTC m=+1.407249118 container init 4133e1531f93497f46c132abea1236777040a80938b1c32019de4c934dd12f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 06:41:51 compute-0 podman[156783]: 2025-12-06 06:41:51.316457858 +0000 UTC m=+1.417012725 container start 4133e1531f93497f46c132abea1236777040a80938b1c32019de4c934dd12f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:41:51 compute-0 romantic_haslett[156801]: 167 167
Dec 06 06:41:51 compute-0 systemd[1]: libpod-4133e1531f93497f46c132abea1236777040a80938b1c32019de4c934dd12f2a.scope: Deactivated successfully.
Dec 06 06:41:51 compute-0 podman[156783]: 2025-12-06 06:41:51.356492962 +0000 UTC m=+1.457047829 container attach 4133e1531f93497f46c132abea1236777040a80938b1c32019de4c934dd12f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 06:41:51 compute-0 podman[156783]: 2025-12-06 06:41:51.357876116 +0000 UTC m=+1.458430983 container died 4133e1531f93497f46c132abea1236777040a80938b1c32019de4c934dd12f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_haslett, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd19436c5d7ebdd45e6f90cda10347d77f3d2577bb5774506e1fb7f2dd1584fb-merged.mount: Deactivated successfully.
Dec 06 06:41:51 compute-0 podman[156783]: 2025-12-06 06:41:51.530805675 +0000 UTC m=+1.631360542 container remove 4133e1531f93497f46c132abea1236777040a80938b1c32019de4c934dd12f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 06:41:51 compute-0 systemd[1]: libpod-conmon-4133e1531f93497f46c132abea1236777040a80938b1c32019de4c934dd12f2a.scope: Deactivated successfully.
Dec 06 06:41:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:41:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:51.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:41:51 compute-0 podman[156905]: 2025-12-06 06:41:51.787572523 +0000 UTC m=+0.096742395 container create b34c6410b990067078bd5a21085d529de82fdefc778f7c69d1235685e63f2cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hawking, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 06:41:51 compute-0 sudo[156963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seneqrvljowpmznmmyywvelinngnstqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003311.4750028-1276-196700665668411/AnsiballZ_stat.py'
Dec 06 06:41:51 compute-0 sudo[156963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:51 compute-0 podman[156905]: 2025-12-06 06:41:51.720854579 +0000 UTC m=+0.030024451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:41:51 compute-0 systemd[1]: Started libpod-conmon-b34c6410b990067078bd5a21085d529de82fdefc778f7c69d1235685e63f2cac.scope.
Dec 06 06:41:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de138939ced81edb567ae84c93d4fea2b2d14dbf02d3158fe9d9d120cd94e408/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de138939ced81edb567ae84c93d4fea2b2d14dbf02d3158fe9d9d120cd94e408/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de138939ced81edb567ae84c93d4fea2b2d14dbf02d3158fe9d9d120cd94e408/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de138939ced81edb567ae84c93d4fea2b2d14dbf02d3158fe9d9d120cd94e408/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de138939ced81edb567ae84c93d4fea2b2d14dbf02d3158fe9d9d120cd94e408/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:41:51 compute-0 podman[156905]: 2025-12-06 06:41:51.964005756 +0000 UTC m=+0.273175658 container init b34c6410b990067078bd5a21085d529de82fdefc778f7c69d1235685e63f2cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:41:51 compute-0 podman[156905]: 2025-12-06 06:41:51.97358831 +0000 UTC m=+0.282758182 container start b34c6410b990067078bd5a21085d529de82fdefc778f7c69d1235685e63f2cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hawking, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:41:52 compute-0 python3.9[156965]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:41:52 compute-0 sudo[156963]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Dec 06 06:41:53 compute-0 podman[156905]: 2025-12-06 06:41:53.096579019 +0000 UTC m=+1.405748911 container attach b34c6410b990067078bd5a21085d529de82fdefc778f7c69d1235685e63f2cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hawking, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 06:41:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:41:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:53.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:41:53 compute-0 sudo[157001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:53 compute-0 sudo[157001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:53 compute-0 sudo[157001]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:53 compute-0 sudo[157026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:53 compute-0 sudo[157026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:53 compute-0 sudo[157026]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:53 compute-0 zen_hawking[156968]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:41:53 compute-0 zen_hawking[156968]: --> relative data size: 1.0
Dec 06 06:41:53 compute-0 zen_hawking[156968]: --> All data devices are unavailable
Dec 06 06:41:53 compute-0 systemd[1]: libpod-b34c6410b990067078bd5a21085d529de82fdefc778f7c69d1235685e63f2cac.scope: Deactivated successfully.
Dec 06 06:41:53 compute-0 podman[156905]: 2025-12-06 06:41:53.699370088 +0000 UTC m=+2.008539980 container died b34c6410b990067078bd5a21085d529de82fdefc778f7c69d1235685e63f2cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hawking, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 06:41:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:41:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:53.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:41:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:41:54 compute-0 ceph-mon[74339]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 06:41:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 12 op/s
Dec 06 06:41:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-de138939ced81edb567ae84c93d4fea2b2d14dbf02d3158fe9d9d120cd94e408-merged.mount: Deactivated successfully.
Dec 06 06:41:54 compute-0 podman[156905]: 2025-12-06 06:41:54.497296896 +0000 UTC m=+2.806466768 container remove b34c6410b990067078bd5a21085d529de82fdefc778f7c69d1235685e63f2cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:41:54 compute-0 systemd[1]: libpod-conmon-b34c6410b990067078bd5a21085d529de82fdefc778f7c69d1235685e63f2cac.scope: Deactivated successfully.
Dec 06 06:41:54 compute-0 sudo[156691]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:54 compute-0 sudo[157093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:54 compute-0 sudo[157093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:54 compute-0 sudo[157093]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:54 compute-0 sudo[157132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:41:54 compute-0 sudo[157132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:54 compute-0 sudo[157132]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:54 compute-0 sudo[157178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:54 compute-0 sudo[157178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:54 compute-0 sudo[157178]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:54 compute-0 sudo[157226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:41:54 compute-0 sudo[157226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:54 compute-0 sudo[157300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmkulrlhwnnojgcmabzqtguyrtoeilku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003314.5653982-1303-213136256285123/AnsiballZ_file.py'
Dec 06 06:41:54 compute-0 sudo[157300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:55 compute-0 python3.9[157302]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:41:55 compute-0 sudo[157300]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:55 compute-0 podman[157348]: 2025-12-06 06:41:55.193251922 +0000 UTC m=+0.057346957 container create 6339f24432dba8d13eac7f3be8e3c25c6bb19b28185b65a88832d43a47bf2541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:41:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:55.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:55 compute-0 systemd[1]: Started libpod-conmon-6339f24432dba8d13eac7f3be8e3c25c6bb19b28185b65a88832d43a47bf2541.scope.
Dec 06 06:41:55 compute-0 podman[157348]: 2025-12-06 06:41:55.161563431 +0000 UTC m=+0.025658496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:41:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:41:55 compute-0 podman[157348]: 2025-12-06 06:41:55.279911681 +0000 UTC m=+0.144006746 container init 6339f24432dba8d13eac7f3be8e3c25c6bb19b28185b65a88832d43a47bf2541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 06:41:55 compute-0 podman[157348]: 2025-12-06 06:41:55.29096334 +0000 UTC m=+0.155058375 container start 6339f24432dba8d13eac7f3be8e3c25c6bb19b28185b65a88832d43a47bf2541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:41:55 compute-0 podman[157348]: 2025-12-06 06:41:55.294173098 +0000 UTC m=+0.158268163 container attach 6339f24432dba8d13eac7f3be8e3c25c6bb19b28185b65a88832d43a47bf2541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 06:41:55 compute-0 condescending_rhodes[157407]: 167 167
Dec 06 06:41:55 compute-0 systemd[1]: libpod-6339f24432dba8d13eac7f3be8e3c25c6bb19b28185b65a88832d43a47bf2541.scope: Deactivated successfully.
Dec 06 06:41:55 compute-0 podman[157348]: 2025-12-06 06:41:55.298853793 +0000 UTC m=+0.162948818 container died 6339f24432dba8d13eac7f3be8e3c25c6bb19b28185b65a88832d43a47bf2541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:41:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d871df4935c3cc8ed116b2ec3df6273fcf547cd77ecb590fe9ca16daf49eef00-merged.mount: Deactivated successfully.
Dec 06 06:41:55 compute-0 sudo[157446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzemxmwludbrcihitjibymhblzvboysg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003314.5653982-1303-213136256285123/AnsiballZ_stat.py'
Dec 06 06:41:55 compute-0 sudo[157446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:55 compute-0 podman[157348]: 2025-12-06 06:41:55.354660461 +0000 UTC m=+0.218755496 container remove 6339f24432dba8d13eac7f3be8e3c25c6bb19b28185b65a88832d43a47bf2541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:41:55 compute-0 systemd[1]: libpod-conmon-6339f24432dba8d13eac7f3be8e3c25c6bb19b28185b65a88832d43a47bf2541.scope: Deactivated successfully.
Dec 06 06:41:55 compute-0 podman[157461]: 2025-12-06 06:41:55.549235815 +0000 UTC m=+0.057054749 container create 8d083ae73c9f67b36a91929b5a12f16e29a16579b89c04f316646c808ad77cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:41:55 compute-0 python3.9[157453]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:41:55 compute-0 sudo[157446]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:55 compute-0 podman[157461]: 2025-12-06 06:41:55.517006111 +0000 UTC m=+0.024825065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:41:55 compute-0 systemd[1]: Started libpod-conmon-8d083ae73c9f67b36a91929b5a12f16e29a16579b89c04f316646c808ad77cba.scope.
Dec 06 06:41:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19cadad24d8a5da2314920083bde01022c7a8333ecff8ce8e0a6a6af1ad213cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19cadad24d8a5da2314920083bde01022c7a8333ecff8ce8e0a6a6af1ad213cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19cadad24d8a5da2314920083bde01022c7a8333ecff8ce8e0a6a6af1ad213cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19cadad24d8a5da2314920083bde01022c7a8333ecff8ce8e0a6a6af1ad213cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:41:55 compute-0 podman[157461]: 2025-12-06 06:41:55.674771891 +0000 UTC m=+0.182590835 container init 8d083ae73c9f67b36a91929b5a12f16e29a16579b89c04f316646c808ad77cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:41:55 compute-0 podman[157461]: 2025-12-06 06:41:55.683764149 +0000 UTC m=+0.191583083 container start 8d083ae73c9f67b36a91929b5a12f16e29a16579b89c04f316646c808ad77cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 06:41:55 compute-0 podman[157461]: 2025-12-06 06:41:55.687944801 +0000 UTC m=+0.195763735 container attach 8d083ae73c9f67b36a91929b5a12f16e29a16579b89c04f316646c808ad77cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:41:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:41:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:55.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:41:56 compute-0 sudo[157632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkkaojrzzmkpmqshnmpxtbstnspxqiqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003315.6887271-1303-168972677128675/AnsiballZ_copy.py'
Dec 06 06:41:56 compute-0 sudo[157632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:56 compute-0 ceph-mon[74339]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Dec 06 06:41:56 compute-0 ceph-mon[74339]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 7.3 KiB/s rd, 0 B/s wr, 12 op/s
Dec 06 06:41:56 compute-0 python3.9[157634]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003315.6887271-1303-168972677128675/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:41:56 compute-0 sudo[157632]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 06 06:41:56 compute-0 admiring_noether[157478]: {
Dec 06 06:41:56 compute-0 admiring_noether[157478]:     "0": [
Dec 06 06:41:56 compute-0 admiring_noether[157478]:         {
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             "devices": [
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "/dev/loop3"
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             ],
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             "lv_name": "ceph_lv0",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             "lv_size": "7511998464",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             "name": "ceph_lv0",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             "tags": {
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "ceph.cluster_name": "ceph",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "ceph.crush_device_class": "",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "ceph.encrypted": "0",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "ceph.osd_id": "0",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "ceph.type": "block",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:                 "ceph.vdo": "0"
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             },
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             "type": "block",
Dec 06 06:41:56 compute-0 admiring_noether[157478]:             "vg_name": "ceph_vg0"
Dec 06 06:41:56 compute-0 admiring_noether[157478]:         }
Dec 06 06:41:56 compute-0 admiring_noether[157478]:     ]
Dec 06 06:41:56 compute-0 admiring_noether[157478]: }
Dec 06 06:41:56 compute-0 systemd[1]: libpod-8d083ae73c9f67b36a91929b5a12f16e29a16579b89c04f316646c808ad77cba.scope: Deactivated successfully.
Dec 06 06:41:56 compute-0 podman[157461]: 2025-12-06 06:41:56.54386002 +0000 UTC m=+1.051678954 container died 8d083ae73c9f67b36a91929b5a12f16e29a16579b89c04f316646c808ad77cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:41:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-19cadad24d8a5da2314920083bde01022c7a8333ecff8ce8e0a6a6af1ad213cd-merged.mount: Deactivated successfully.
Dec 06 06:41:56 compute-0 podman[157461]: 2025-12-06 06:41:56.699324154 +0000 UTC m=+1.207143088 container remove 8d083ae73c9f67b36a91929b5a12f16e29a16579b89c04f316646c808ad77cba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 06 06:41:56 compute-0 sudo[157726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnnarwkukznhoomtzqlvyajvjnwhnhmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003315.6887271-1303-168972677128675/AnsiballZ_systemd.py'
Dec 06 06:41:56 compute-0 systemd[1]: libpod-conmon-8d083ae73c9f67b36a91929b5a12f16e29a16579b89c04f316646c808ad77cba.scope: Deactivated successfully.
Dec 06 06:41:56 compute-0 sudo[157726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:56 compute-0 sudo[157226]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:56 compute-0 sudo[157729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:56 compute-0 sudo[157729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:56 compute-0 sudo[157729]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:56 compute-0 sudo[157754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:41:56 compute-0 sudo[157754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:56 compute-0 sudo[157754]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:56 compute-0 sudo[157779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:41:56 compute-0 sudo[157779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:56 compute-0 sudo[157779]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:56 compute-0 sudo[157804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:41:56 compute-0 sudo[157804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:41:57 compute-0 python3.9[157728]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:41:57 compute-0 systemd[1]: Reloading.
Dec 06 06:41:57 compute-0 systemd-rc-local-generator[157847]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:41:57 compute-0 systemd-sysv-generator[157850]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:41:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:57.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:57 compute-0 podman[157904]: 2025-12-06 06:41:57.383987205 +0000 UTC m=+0.065233408 container create b0971239a5791493397da8cadf3f0c1ee3a789e5598dd7b0e872ea08a6fc2bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:41:57 compute-0 podman[157904]: 2025-12-06 06:41:57.348405529 +0000 UTC m=+0.029651752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:41:57 compute-0 systemd[1]: Started libpod-conmon-b0971239a5791493397da8cadf3f0c1ee3a789e5598dd7b0e872ea08a6fc2bb0.scope.
Dec 06 06:41:57 compute-0 sudo[157726]: pam_unix(sudo:session): session closed for user root
Dec 06 06:41:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:41:57 compute-0 podman[157904]: 2025-12-06 06:41:57.50994016 +0000 UTC m=+0.191186393 container init b0971239a5791493397da8cadf3f0c1ee3a789e5598dd7b0e872ea08a6fc2bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 06 06:41:57 compute-0 ceph-mon[74339]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Dec 06 06:41:57 compute-0 podman[157904]: 2025-12-06 06:41:57.517233918 +0000 UTC m=+0.198480121 container start b0971239a5791493397da8cadf3f0c1ee3a789e5598dd7b0e872ea08a6fc2bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 06:41:57 compute-0 podman[157904]: 2025-12-06 06:41:57.522474205 +0000 UTC m=+0.203720408 container attach b0971239a5791493397da8cadf3f0c1ee3a789e5598dd7b0e872ea08a6fc2bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 06:41:57 compute-0 hardcore_nash[157920]: 167 167
Dec 06 06:41:57 compute-0 podman[157904]: 2025-12-06 06:41:57.52472168 +0000 UTC m=+0.205967883 container died b0971239a5791493397da8cadf3f0c1ee3a789e5598dd7b0e872ea08a6fc2bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 06:41:57 compute-0 systemd[1]: libpod-b0971239a5791493397da8cadf3f0c1ee3a789e5598dd7b0e872ea08a6fc2bb0.scope: Deactivated successfully.
Dec 06 06:41:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b208859482d1663db4554f870dbc71ecc76946f8a7363859a8ab1fb70126b90-merged.mount: Deactivated successfully.
Dec 06 06:41:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:41:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:57.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:41:57 compute-0 sudo[158011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttzoycnmcztaqnmftlcugwvfsehzhsln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003315.6887271-1303-168972677128675/AnsiballZ_systemd.py'
Dec 06 06:41:57 compute-0 sudo[158011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:41:58 compute-0 python3.9[158013]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:41:58 compute-0 systemd[1]: Reloading.
Dec 06 06:41:58 compute-0 systemd-rc-local-generator[158042]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:41:58 compute-0 systemd-sysv-generator[158045]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:41:58 compute-0 podman[157904]: 2025-12-06 06:41:58.26806977 +0000 UTC m=+0.949315973 container remove b0971239a5791493397da8cadf3f0c1ee3a789e5598dd7b0e872ea08a6fc2bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nash, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:41:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Dec 06 06:41:58 compute-0 systemd[1]: libpod-conmon-b0971239a5791493397da8cadf3f0c1ee3a789e5598dd7b0e872ea08a6fc2bb0.scope: Deactivated successfully.
Dec 06 06:41:58 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec 06 06:41:58 compute-0 podman[158060]: 2025-12-06 06:41:58.433906445 +0000 UTC m=+0.030378330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:41:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:41:59.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:41:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:41:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:41:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:41:59.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Dec 06 06:42:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:00 compute-0 podman[158060]: 2025-12-06 06:42:00.93947133 +0000 UTC m=+2.535943195 container create 2844fb0c1cc148d6a833ec3761f02c28bc6976bb958dfb6bf07e86105297bf4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_germain, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 06:42:01 compute-0 systemd[1]: Started libpod-conmon-2844fb0c1cc148d6a833ec3761f02c28bc6976bb958dfb6bf07e86105297bf4d.scope.
Dec 06 06:42:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c377b6ef608dfdb6ae3cfe391d405bcc2f70e817b1c6a32a0b4db251880628d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c377b6ef608dfdb6ae3cfe391d405bcc2f70e817b1c6a32a0b4db251880628d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c377b6ef608dfdb6ae3cfe391d405bcc2f70e817b1c6a32a0b4db251880628d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c377b6ef608dfdb6ae3cfe391d405bcc2f70e817b1c6a32a0b4db251880628d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:42:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:01.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:01 compute-0 podman[158060]: 2025-12-06 06:42:01.281524114 +0000 UTC m=+2.877996009 container init 2844fb0c1cc148d6a833ec3761f02c28bc6976bb958dfb6bf07e86105297bf4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_germain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:42:01 compute-0 podman[158060]: 2025-12-06 06:42:01.295380222 +0000 UTC m=+2.891852087 container start 2844fb0c1cc148d6a833ec3761f02c28bc6976bb958dfb6bf07e86105297bf4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_germain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:42:01 compute-0 podman[158060]: 2025-12-06 06:42:01.299672346 +0000 UTC m=+2.896144211 container attach 2844fb0c1cc148d6a833ec3761f02c28bc6976bb958dfb6bf07e86105297bf4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_germain, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:42:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97745c1031737ec4e33edf5ad687ce7e6b77adaf14a69a108f1e9b52baf827e8/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 06 06:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97745c1031737ec4e33edf5ad687ce7e6b77adaf14a69a108f1e9b52baf827e8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 06:42:01 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7.
Dec 06 06:42:01 compute-0 podman[158089]: 2025-12-06 06:42:01.576584385 +0000 UTC m=+3.061384252 container init 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: + sudo -E kolla_set_configs
Dec 06 06:42:01 compute-0 podman[158089]: 2025-12-06 06:42:01.604284659 +0000 UTC m=+3.089084506 container start 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:42:01 compute-0 edpm-start-podman-container[158089]: ovn_metadata_agent
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Validating config file
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Copying service configuration files
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Writing out command to execute
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 06 06:42:01 compute-0 edpm-start-podman-container[158088]: Creating additional drop-in dependency for "ovn_metadata_agent" (3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7)
Dec 06 06:42:01 compute-0 podman[158120]: 2025-12-06 06:42:01.685041634 +0000 UTC m=+0.061763324 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: ++ cat /run_command
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: + CMD=neutron-ovn-metadata-agent
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: + ARGS=
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: + sudo kolla_copy_cacerts
Dec 06 06:42:01 compute-0 systemd[1]: Reloading.
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: + [[ ! -n '' ]]
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: + . kolla_extend_start
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: Running command: 'neutron-ovn-metadata-agent'
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: + umask 0022
Dec 06 06:42:01 compute-0 ovn_metadata_agent[158111]: + exec neutron-ovn-metadata-agent
Dec 06 06:42:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:01.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:01 compute-0 systemd-rc-local-generator[158186]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:42:01 compute-0 systemd-sysv-generator[158192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:42:02 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec 06 06:42:02 compute-0 sudo[158011]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:02 compute-0 admiring_germain[158100]: {
Dec 06 06:42:02 compute-0 admiring_germain[158100]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:42:02 compute-0 admiring_germain[158100]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:42:02 compute-0 admiring_germain[158100]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:42:02 compute-0 admiring_germain[158100]:         "osd_id": 0,
Dec 06 06:42:02 compute-0 admiring_germain[158100]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:42:02 compute-0 admiring_germain[158100]:         "type": "bluestore"
Dec 06 06:42:02 compute-0 admiring_germain[158100]:     }
Dec 06 06:42:02 compute-0 admiring_germain[158100]: }
Dec 06 06:42:02 compute-0 systemd[1]: libpod-2844fb0c1cc148d6a833ec3761f02c28bc6976bb958dfb6bf07e86105297bf4d.scope: Deactivated successfully.
Dec 06 06:42:02 compute-0 conmon[158100]: conmon 2844fb0c1cc148d6a833 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2844fb0c1cc148d6a833ec3761f02c28bc6976bb958dfb6bf07e86105297bf4d.scope/container/memory.events
Dec 06 06:42:02 compute-0 podman[158060]: 2025-12-06 06:42:02.250120286 +0000 UTC m=+3.846592151 container died 2844fb0c1cc148d6a833ec3761f02c28bc6976bb958dfb6bf07e86105297bf4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:42:02 compute-0 ceph-mon[74339]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Dec 06 06:42:02 compute-0 ceph-mon[74339]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Dec 06 06:42:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c377b6ef608dfdb6ae3cfe391d405bcc2f70e817b1c6a32a0b4db251880628d-merged.mount: Deactivated successfully.
Dec 06 06:42:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Dec 06 06:42:02 compute-0 sshd-session[148110]: Connection closed by 192.168.122.30 port 36964
Dec 06 06:42:02 compute-0 sshd-session[148107]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:42:02 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Dec 06 06:42:02 compute-0 systemd[1]: session-48.scope: Consumed 59.707s CPU time.
Dec 06 06:42:02 compute-0 systemd-logind[798]: Session 48 logged out. Waiting for processes to exit.
Dec 06 06:42:02 compute-0 systemd-logind[798]: Removed session 48.
Dec 06 06:42:02 compute-0 podman[158060]: 2025-12-06 06:42:02.727287248 +0000 UTC m=+4.323759133 container remove 2844fb0c1cc148d6a833ec3761f02c28bc6976bb958dfb6bf07e86105297bf4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_germain, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:42:02 compute-0 systemd[1]: libpod-conmon-2844fb0c1cc148d6a833ec3761f02c28bc6976bb958dfb6bf07e86105297bf4d.scope: Deactivated successfully.
Dec 06 06:42:02 compute-0 sudo[157804]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:42:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:03.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.733 158118 INFO neutron.common.config [-] Logging enabled!
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.734 158118 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.734 158118 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.735 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.735 158118 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.735 158118 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.735 158118 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.735 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.735 158118 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.735 158118 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.735 158118 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.736 158118 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.736 158118 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.736 158118 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.736 158118 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.736 158118 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.736 158118 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.736 158118 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.736 158118 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.736 158118 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.737 158118 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.737 158118 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.737 158118 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.737 158118 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.737 158118 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.737 158118 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.737 158118 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.737 158118 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.737 158118 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.737 158118 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.738 158118 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.738 158118 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.738 158118 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.738 158118 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.738 158118 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.738 158118 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.738 158118 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.738 158118 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.739 158118 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.739 158118 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.739 158118 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.739 158118 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.739 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.739 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.739 158118 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.739 158118 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.739 158118 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.739 158118 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.740 158118 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.740 158118 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.740 158118 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.740 158118 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.740 158118 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.740 158118 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.740 158118 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.740 158118 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.740 158118 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.741 158118 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.741 158118 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.741 158118 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.741 158118 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.741 158118 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.741 158118 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.741 158118 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.741 158118 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.742 158118 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.742 158118 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.742 158118 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.742 158118 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.742 158118 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.742 158118 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.742 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.742 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.742 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.743 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.743 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.743 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.743 158118 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.743 158118 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.743 158118 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.743 158118 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.743 158118 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.743 158118 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.743 158118 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.744 158118 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.744 158118 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.744 158118 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.744 158118 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.744 158118 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.744 158118 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.744 158118 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.744 158118 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.745 158118 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.745 158118 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.745 158118 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.745 158118 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.745 158118 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.745 158118 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.745 158118 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.745 158118 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.746 158118 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.746 158118 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:03.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.746 158118 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.746 158118 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.746 158118 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.746 158118 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.747 158118 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.747 158118 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.747 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.747 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.747 158118 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.747 158118 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.747 158118 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.747 158118 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.747 158118 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.748 158118 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.748 158118 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.748 158118 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.748 158118 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.748 158118 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.748 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.748 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.748 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.749 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.749 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.749 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.749 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.749 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.749 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.749 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.750 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.750 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.750 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.750 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.750 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.750 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.750 158118 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.750 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.751 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.751 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.751 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.751 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.751 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.751 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.751 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.751 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.751 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.752 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.752 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.752 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.752 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.752 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.752 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.753 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.753 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.753 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.753 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.754 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.754 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.754 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.754 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.754 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.754 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.754 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.755 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.755 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.755 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.755 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.755 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.755 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.755 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.755 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.756 158118 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.756 158118 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.756 158118 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.756 158118 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.756 158118 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.756 158118 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.756 158118 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.756 158118 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.757 158118 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.757 158118 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.757 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.757 158118 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.757 158118 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.757 158118 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.757 158118 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.757 158118 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.758 158118 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.758 158118 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.758 158118 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.758 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.758 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.758 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.758 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.758 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.759 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.759 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.759 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.759 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.759 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.759 158118 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.759 158118 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.760 158118 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.760 158118 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.760 158118 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.760 158118 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.760 158118 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.760 158118 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.760 158118 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.760 158118 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.761 158118 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.761 158118 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.761 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.761 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.761 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.761 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.761 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.761 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.761 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.762 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.762 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.762 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.762 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.762 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.762 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.762 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.762 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.762 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.763 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.763 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.763 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.763 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.763 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.763 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.763 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.763 158118 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.763 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.764 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.764 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.764 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.764 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.764 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.764 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.764 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.764 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.765 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.765 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.765 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.765 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.765 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.765 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.765 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.766 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.766 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.766 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.766 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.766 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.766 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.766 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.767 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.767 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.767 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.767 158118 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.767 158118 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.767 158118 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.768 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.768 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.768 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.768 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.768 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.768 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.769 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.769 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.769 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.769 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.769 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.769 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.769 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.770 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.770 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.770 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.770 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.770 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.770 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.770 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.771 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.771 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.771 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.771 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.771 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.771 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.771 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.772 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.772 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.772 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.772 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.772 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.772 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.773 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.773 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.773 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.773 158118 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.773 158118 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.783 158118 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.783 158118 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.783 158118 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.784 158118 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.784 158118 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.799 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name feab6d5f-1b29-488a-ae05-1d4fd579aca4 (UUID: feab6d5f-1b29-488a-ae05-1d4fd579aca4) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.823 158118 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.823 158118 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.824 158118 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.824 158118 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.827 158118 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.832 158118 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.838 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'feab6d5f-1b29-488a-ae05-1d4fd579aca4'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], external_ids={}, name=feab6d5f-1b29-488a-ae05-1d4fd579aca4, nb_cfg_timestamp=1765003226130, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.839 158118 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f70aabeda00>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.840 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.840 158118 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.840 158118 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.841 158118 INFO oslo_service.service [-] Starting 1 workers
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.845 158118 DEBUG oslo_service.service [-] Started child 158254 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.849 158118 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpj83xtban/privsep.sock']
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.849 158254 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-374408'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.875 158254 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.875 158254 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.875 158254 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.879 158254 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.885 158254 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 06 06:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:03.890 158254 INFO eventlet.wsgi.server [-] (158254) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 06 06:42:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Dec 06 06:42:04 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 06 06:42:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:04.627 158118 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 06 06:42:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:04.628 158118 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpj83xtban/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 06 06:42:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:04.456 158260 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 06:42:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:04.463 158260 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 06:42:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:04.466 158260 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 06 06:42:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:04.466 158260 INFO oslo.privsep.daemon [-] privsep daemon running as pid 158260
Dec 06 06:42:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:04.632 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[26a0589d-1eec-4f02-ad29-dbb1ad9ee3e1]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.203 158260 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.203 158260 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.203 158260 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:42:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:05.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:42:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:05.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.836 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[d90c0cda-8fb7-42c3-960a-928896f4ab20]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.841 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, column=external_ids, values=({'neutron:ovn-metadata-id': '7f594075-c722-563b-b36a-ad32679d037b'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.848 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.854 158118 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.854 158118 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.854 158118 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.854 158118 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.854 158118 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.854 158118 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.855 158118 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.855 158118 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.855 158118 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.855 158118 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.855 158118 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.855 158118 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.855 158118 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.856 158118 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.856 158118 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.856 158118 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.856 158118 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.856 158118 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.856 158118 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.856 158118 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.856 158118 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.857 158118 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.857 158118 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.857 158118 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.857 158118 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.857 158118 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.857 158118 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.858 158118 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.858 158118 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.858 158118 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.858 158118 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.858 158118 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.858 158118 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.858 158118 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.858 158118 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.859 158118 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.859 158118 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.859 158118 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.859 158118 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.859 158118 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.859 158118 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.859 158118 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.860 158118 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.860 158118 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.860 158118 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.860 158118 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.860 158118 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.860 158118 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.860 158118 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.860 158118 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.860 158118 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.861 158118 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.861 158118 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.861 158118 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.861 158118 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.861 158118 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.861 158118 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.861 158118 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.861 158118 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.861 158118 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.861 158118 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.862 158118 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.862 158118 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.862 158118 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.862 158118 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.862 158118 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.862 158118 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.862 158118 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.863 158118 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.863 158118 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.863 158118 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.863 158118 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.863 158118 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.863 158118 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.864 158118 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.864 158118 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.864 158118 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.864 158118 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.864 158118 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.864 158118 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.864 158118 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.865 158118 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.865 158118 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.865 158118 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.865 158118 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.865 158118 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.865 158118 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.866 158118 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.866 158118 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.866 158118 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.866 158118 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.866 158118 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.866 158118 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.866 158118 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.866 158118 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.867 158118 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.867 158118 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.867 158118 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.867 158118 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.867 158118 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.867 158118 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.868 158118 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.868 158118 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.868 158118 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.868 158118 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.868 158118 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.868 158118 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.868 158118 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.869 158118 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.869 158118 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.869 158118 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.869 158118 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.869 158118 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.870 158118 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.870 158118 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.870 158118 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.870 158118 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.870 158118 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.871 158118 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.871 158118 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.871 158118 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.871 158118 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.871 158118 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.872 158118 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.872 158118 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.872 158118 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.872 158118 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.872 158118 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.872 158118 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.873 158118 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.873 158118 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.873 158118 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.873 158118 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.873 158118 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.873 158118 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.873 158118 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.874 158118 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.874 158118 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.874 158118 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.874 158118 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.874 158118 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.874 158118 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.874 158118 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.875 158118 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.875 158118 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.875 158118 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.875 158118 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.875 158118 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.875 158118 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.875 158118 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.876 158118 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.876 158118 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.876 158118 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.876 158118 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.876 158118 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.876 158118 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.876 158118 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.877 158118 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.877 158118 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.877 158118 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.877 158118 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.877 158118 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.877 158118 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.877 158118 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.878 158118 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.878 158118 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.878 158118 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.878 158118 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.878 158118 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.878 158118 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.878 158118 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.879 158118 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.879 158118 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.879 158118 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.879 158118 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.879 158118 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.879 158118 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.879 158118 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.880 158118 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.880 158118 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.880 158118 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.880 158118 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.880 158118 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.880 158118 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.881 158118 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.881 158118 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.881 158118 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.881 158118 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.881 158118 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.881 158118 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.881 158118 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.882 158118 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.882 158118 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.882 158118 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.882 158118 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.882 158118 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.882 158118 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.882 158118 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.883 158118 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.883 158118 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.883 158118 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.883 158118 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.883 158118 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.883 158118 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.883 158118 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.883 158118 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.884 158118 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.884 158118 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.884 158118 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.884 158118 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.884 158118 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.884 158118 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.884 158118 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.885 158118 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.885 158118 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.885 158118 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.885 158118 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.885 158118 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.885 158118 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.885 158118 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.885 158118 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.886 158118 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.886 158118 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.886 158118 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.886 158118 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.886 158118 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.886 158118 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.886 158118 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.886 158118 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.887 158118 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.887 158118 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.887 158118 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.887 158118 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.887 158118 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.887 158118 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.887 158118 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.888 158118 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.888 158118 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.888 158118 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.888 158118 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.888 158118 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.888 158118 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.888 158118 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.889 158118 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.889 158118 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.889 158118 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.889 158118 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.889 158118 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.889 158118 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.889 158118 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.890 158118 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.890 158118 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.890 158118 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.890 158118 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.890 158118 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.890 158118 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.890 158118 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.890 158118 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.891 158118 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.891 158118 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.891 158118 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.891 158118 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.891 158118 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.891 158118 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.891 158118 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.892 158118 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.892 158118 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.892 158118 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.892 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.892 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.892 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.893 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.893 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.893 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.893 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.893 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.893 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.893 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.894 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.894 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.894 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.894 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.894 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.894 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.894 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.894 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.895 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.895 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.895 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.895 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.895 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.896 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.896 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.896 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.896 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.896 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.896 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.896 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.897 158118 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.897 158118 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.897 158118 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.897 158118 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.897 158118 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:42:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:42:05.897 158118 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 06:42:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Dec 06 06:42:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:07.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:07.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:08 compute-0 sshd-session[158266]: Accepted publickey for zuul from 192.168.122.30 port 35302 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:42:08 compute-0 systemd-logind[798]: New session 49 of user zuul.
Dec 06 06:42:08 compute-0 systemd[1]: Started Session 49 of User zuul.
Dec 06 06:42:08 compute-0 sshd-session[158266]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:42:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Dec 06 06:42:09 compute-0 python3.9[158420]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:42:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:09.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:09.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:10 compute-0 sudo[158575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlityfjwzabuseiyxiselftlgslqhcgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003329.9259403-67-165104903053988/AnsiballZ_command.py'
Dec 06 06:42:10 compute-0 sudo[158575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Dec 06 06:42:10 compute-0 python3.9[158577]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:10 compute-0 sudo[158575]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:42:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:11.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:42:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:11.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:11 compute-0 sudo[158739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyukihsczpvflxbanfrjrgecwmpqklej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003331.1857336-100-62140497903104/AnsiballZ_systemd_service.py'
Dec 06 06:42:11 compute-0 sudo[158739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:12 compute-0 python3.9[158741]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:42:12 compute-0 systemd[1]: Reloading.
Dec 06 06:42:12 compute-0 systemd-rc-local-generator[158769]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:42:12 compute-0 systemd-sysv-generator[158772]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:42:12 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 06:42:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).paxos(paxos updating c 503..1243) accept timeout, calling fresh election
Dec 06 06:42:12 compute-0 ceph-mon[74339]: mon.compute-0@0(probing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:42:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:42:12 compute-0 ceph-mon[74339]: paxos.0).electionLogic(40) init, last seen epoch 40
Dec 06 06:42:12 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:42:12 compute-0 sudo[158739]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Dec 06 06:42:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:42:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:42:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:42:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:42:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:42:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:42:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:13.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:13 compute-0 python3.9[158927]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:42:13 compute-0 network[158944]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:42:13 compute-0 network[158945]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:42:13 compute-0 network[158946]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:42:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:13.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:15.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:15.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:16 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 06:42:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:17.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-1 in quorum (ranks 0,2)
Dec 06 06:42:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:42:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:42:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:42:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:42:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 16m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:42:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-1 (MON_DOWN)
Dec 06 06:42:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-1
Dec 06 06:42:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-1
Dec 06 06:42:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] :     mon.compute-2 (rank 1) addr [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] is down (out of quorum)
Dec 06 06:42:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:42:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:42:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:42:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5080bc51-4ec7-4bda-864c-993f60adec09 does not exist
Dec 06 06:42:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f207bfeb-8cb7-44e3-bcf5-e9e023370e84 does not exist
Dec 06 06:42:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev cab49451-a693-4524-b567-031375cb81ae does not exist
Dec 06 06:42:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 06:42:17 compute-0 ceph-mon[74339]: paxos.0).electionLogic(43) init, last seen epoch 43, mid-election, bumping
Dec 06 06:42:17 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:42:17 compute-0 sudo[159083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:42:17 compute-0 sudo[159083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:42:17 compute-0 sudo[159086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:42:17 compute-0 sudo[159083]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:17 compute-0 sudo[159086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:42:17 compute-0 sudo[159086]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:17 compute-0 sudo[159133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:42:17 compute-0 sudo[159135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:42:17 compute-0 sudo[159133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:42:17 compute-0 sudo[159135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:42:17 compute-0 sudo[159133]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:17 compute-0 sudo[159135]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:17.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:42:18
Dec 06 06:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.log', 'backups', 'images']
Dec 06 06:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:42:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:42:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 06:42:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:42:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:42:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 16m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:42:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Dec 06 06:42:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 06:42:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 06:42:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:19.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:19 compute-0 sudo[159310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxppgqgnbtdhejsfugiwmjlwclbhybwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003339.0062895-157-155547368815525/AnsiballZ_systemd_service.py'
Dec 06 06:42:19 compute-0 sudo[159310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:19 compute-0 python3.9[159312]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:19 compute-0 sudo[159310]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:19.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:19 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 06:42:19 compute-0 ceph-mon[74339]: mon.compute-2 is new leader, mons compute-2,compute-1 in quorum (ranks 1,2)
Dec 06 06:42:19 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 06:42:19 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 06:42:19 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 06:42:19 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 06:42:19 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 06:42:19 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 06:42:19 compute-0 ceph-mon[74339]: osdmap e145: 3 total, 3 up, 3 in
Dec 06 06:42:19 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 16m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 06:42:19 compute-0 ceph-mon[74339]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-1)
Dec 06 06:42:19 compute-0 ceph-mon[74339]: Cluster is now healthy
Dec 06 06:42:19 compute-0 ceph-mon[74339]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:20 compute-0 sudo[159474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhutcvgdthaofubpmemtbtalpkuokkun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003339.7619705-157-160606052452046/AnsiballZ_systemd_service.py'
Dec 06 06:42:20 compute-0 sudo[159474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:20 compute-0 podman[159437]: 2025-12-06 06:42:20.086314834 +0000 UTC m=+0.085654616 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 06:42:20 compute-0 python3.9[159480]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:20 compute-0 sudo[159474]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:20 compute-0 sudo[159641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydogdhwecooddmsidpaywqpyjsqjwoey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003340.5396974-157-262305006186040/AnsiballZ_systemd_service.py'
Dec 06 06:42:20 compute-0 sudo[159641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:21 compute-0 python3.9[159643]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:21 compute-0 sudo[159641]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:21 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 06:42:21 compute-0 ceph-mon[74339]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:21 compute-0 sudo[159794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zufigexndlhenvdxzscoqhjzhlpzeurl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003341.23977-157-212886153516780/AnsiballZ_systemd_service.py'
Dec 06 06:42:21 compute-0 sudo[159794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:21.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:21 compute-0 python3.9[159796]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:21 compute-0 sudo[159794]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:22 compute-0 sudo[159948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wclhneuawrdtcwvjlkfgzccxirokxxpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003342.0189142-157-148491346702640/AnsiballZ_systemd_service.py'
Dec 06 06:42:22 compute-0 sudo[159948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:22 compute-0 python3.9[159950]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:22 compute-0 sudo[159948]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:42:23 compute-0 sudo[160101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhtncucunrwdotzcqgniyjmrmchlpvqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003342.781353-157-247110436577695/AnsiballZ_systemd_service.py'
Dec 06 06:42:23 compute-0 sudo[160101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:23 compute-0 ceph-mon[74339]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:23.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:23 compute-0 python3.9[160103]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:23 compute-0 sudo[160101]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:42:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:23.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:23 compute-0 sudo[160254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzlylsvsizhiypmeztkvmuavpwuxhlib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003343.5297334-157-256579383964186/AnsiballZ_systemd_service.py'
Dec 06 06:42:23 compute-0 sudo[160254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:24 compute-0 python3.9[160256]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:42:24 compute-0 sudo[160254]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:24 compute-0 ceph-mon[74339]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:24.942913) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003344942972, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2222, "num_deletes": 252, "total_data_size": 4199483, "memory_usage": 4262624, "flush_reason": "Manual Compaction"}
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003344978600, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 4042472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10920, "largest_seqno": 13141, "table_properties": {"data_size": 4032601, "index_size": 6301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20160, "raw_average_key_size": 20, "raw_value_size": 4012601, "raw_average_value_size": 4036, "num_data_blocks": 281, "num_entries": 994, "num_filter_entries": 994, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003070, "oldest_key_time": 1765003070, "file_creation_time": 1765003344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 35751 microseconds, and 9202 cpu microseconds.
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:24.978662) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 4042472 bytes OK
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:24.978684) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:24.982032) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:24.982050) EVENT_LOG_v1 {"time_micros": 1765003344982046, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:24.982069) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4190517, prev total WAL file size 4190517, number of live WAL files 2.
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:24.983248) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3947KB)], [26(8338KB)]
Dec 06 06:42:24 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003344983411, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 12580968, "oldest_snapshot_seqno": -1}
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4447 keys, 9968073 bytes, temperature: kUnknown
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003345130816, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 9968073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9934110, "index_size": 21757, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 109044, "raw_average_key_size": 24, "raw_value_size": 9849586, "raw_average_value_size": 2214, "num_data_blocks": 938, "num_entries": 4447, "num_filter_entries": 4447, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765003344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:25.131525) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 9968073 bytes
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:25.145178) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 85.1 rd, 67.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 8.1 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(5.6) write-amplify(2.5) OK, records in: 4977, records dropped: 530 output_compression: NoCompression
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:25.145285) EVENT_LOG_v1 {"time_micros": 1765003345145218, "job": 10, "event": "compaction_finished", "compaction_time_micros": 147812, "compaction_time_cpu_micros": 34190, "output_level": 6, "num_output_files": 1, "total_output_size": 9968073, "num_input_records": 4977, "num_output_records": 4447, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003345146221, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003345147559, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:24.982900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:25.147646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:25.147652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:25.147653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:25.147654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:42:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:42:25.147656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:42:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:42:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:42:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:25.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:26 compute-0 sudo[160409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgeyqyletphrtlrzhzlslwpczrkprqwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003345.9286232-313-91141396634167/AnsiballZ_file.py'
Dec 06 06:42:26 compute-0 sudo[160409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:26 compute-0 python3.9[160411]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:26 compute-0 sudo[160409]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:26 compute-0 sudo[160561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jisxwdarjweoxchhbkwcxoexjzcvaptf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003346.695082-313-102440032539386/AnsiballZ_file.py'
Dec 06 06:42:26 compute-0 sudo[160561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:27 compute-0 ceph-mon[74339]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:27 compute-0 python3.9[160563]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:27 compute-0 sudo[160561]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:27.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:27 compute-0 sudo[160713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urmdwllwaordlpvasitnlpmwafqjjjcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003347.2935927-313-142816722226799/AnsiballZ_file.py'
Dec 06 06:42:27 compute-0 sudo[160713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:27.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:27 compute-0 python3.9[160715]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:27 compute-0 sudo[160713]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:28 compute-0 sudo[160866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rabsptzylkmwicxcowlzgdxilowmjxsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003347.9191713-313-116246864234028/AnsiballZ_file.py'
Dec 06 06:42:28 compute-0 sudo[160866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:28 compute-0 python3.9[160868]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:28 compute-0 sudo[160866]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:28 compute-0 sudo[161018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmwnuqjsqoutjbyluyfmiyzxzacljaqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003348.5736196-313-236194593879740/AnsiballZ_file.py'
Dec 06 06:42:28 compute-0 sudo[161018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:29 compute-0 python3.9[161020]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:29 compute-0 sudo[161018]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:29 compute-0 ceph-mon[74339]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:29.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:29 compute-0 sudo[161170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wedkbntkrbxqjehlewxdjciynseqpbeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003349.1564023-313-138242272866417/AnsiballZ_file.py'
Dec 06 06:42:29 compute-0 sudo[161170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:29 compute-0 python3.9[161172]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:29 compute-0 sudo[161170]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:29.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:29 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:42:29 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:42:30 compute-0 sudo[161323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mepghxrvudgojdjwqssrlsszwlnnrpra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003349.842317-313-97332136544203/AnsiballZ_file.py'
Dec 06 06:42:30 compute-0 sudo[161323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:30 compute-0 python3.9[161325]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:30 compute-0 sudo[161323]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:31 compute-0 sudo[161476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbjcfwxdnblotpaabpvjoeolmgxftrun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003351.4049957-463-37333165563504/AnsiballZ_file.py'
Dec 06 06:42:31 compute-0 sudo[161476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:31.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:31 compute-0 podman[161478]: 2025-12-06 06:42:31.827851042 +0000 UTC m=+0.090422961 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 06:42:31 compute-0 python3.9[161479]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:31 compute-0 sudo[161476]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:32 compute-0 sudo[161646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqazdolcqxnsfydxcntbetrehytnycto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003352.0483418-463-172229065272042/AnsiballZ_file.py'
Dec 06 06:42:32 compute-0 sudo[161646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:32 compute-0 python3.9[161648]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:32 compute-0 sudo[161646]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:32 compute-0 ceph-mon[74339]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:32 compute-0 sudo[161798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idczjmadqvrpftmfddtbtifaujwfywpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003352.6170106-463-142514016140925/AnsiballZ_file.py'
Dec 06 06:42:32 compute-0 sudo[161798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:33 compute-0 python3.9[161800]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:33 compute-0 sudo[161798]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:33.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:33 compute-0 sudo[161950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yerupcjvlaiapepuucbhrkvphfiulffl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003353.1831322-463-230939035276792/AnsiballZ_file.py'
Dec 06 06:42:33 compute-0 sudo[161950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:33 compute-0 python3.9[161952]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:33 compute-0 sudo[161950]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:33.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:34 compute-0 sudo[162102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jldbylzxqhyyadshjvzjgznedxdfnion ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003353.7619474-463-149796865047393/AnsiballZ_file.py'
Dec 06 06:42:34 compute-0 sudo[162102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:34 compute-0 python3.9[162104]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:34 compute-0 sudo[162102]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:34 compute-0 sudo[162255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymmnlhmjyyckomsheflvladnlyrpfvrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003354.3227592-463-106251848884421/AnsiballZ_file.py'
Dec 06 06:42:34 compute-0 sudo[162255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:34 compute-0 ceph-mon[74339]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:34 compute-0 python3.9[162257]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:34 compute-0 sudo[162255]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:35.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:35 compute-0 sudo[162407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmwsxqqynjcrqhbfmpzqrknjfobsrtau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003354.907051-463-275148330432908/AnsiballZ_file.py'
Dec 06 06:42:35 compute-0 sudo[162407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:35 compute-0 python3.9[162409]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:42:35 compute-0 sudo[162407]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:35.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:35 compute-0 ceph-mon[74339]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:37.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:37 compute-0 sudo[162560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krzwaldnvxhpkkzdghqzsogkvkcjzcow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003357.0609071-616-47269876925729/AnsiballZ_command.py'
Dec 06 06:42:37 compute-0 sudo[162560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:37 compute-0 python3.9[162562]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:37 compute-0 sudo[162560]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:42:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:37.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:42:37 compute-0 sudo[162589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:42:37 compute-0 sudo[162589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:42:37 compute-0 sudo[162589]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:37 compute-0 sudo[162614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:42:37 compute-0 sudo[162614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:42:37 compute-0 sudo[162614]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:39 compute-0 ceph-mon[74339]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:39.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:42:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:39.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:42:40 compute-0 python3.9[162765]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 06:42:40 compute-0 ceph-mon[74339]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:40 compute-0 sudo[162916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwodaodmbcwkilptuifvzwinlxjcfdys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003360.5967035-670-113834497731796/AnsiballZ_systemd_service.py'
Dec 06 06:42:40 compute-0 sudo[162916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:41 compute-0 python3.9[162918]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:42:41 compute-0 systemd[1]: Reloading.
Dec 06 06:42:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:41.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:41 compute-0 systemd-rc-local-generator[162948]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:42:41 compute-0 systemd-sysv-generator[162951]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:42:41 compute-0 sudo[162916]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:42:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:41.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:42:42 compute-0 ceph-mon[74339]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:42 compute-0 sudo[163104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trurtznpgrurwofvhcypzqhcquyfpcbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003362.3092258-694-138822802542936/AnsiballZ_command.py'
Dec 06 06:42:42 compute-0 sudo[163104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:42 compute-0 python3.9[163106]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:42 compute-0 sudo[163104]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:42:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:42:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:42:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:42:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:42:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:42:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:43.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:43 compute-0 sudo[163257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zozwjxzswmeqvejaeyynddosanjiguec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003363.2094848-694-248095305090719/AnsiballZ_command.py'
Dec 06 06:42:43 compute-0 sudo[163257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:42:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:43.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:42:44 compute-0 python3.9[163259]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:44 compute-0 sudo[163257]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:44 compute-0 sudo[163411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqcdlwhxlbieciyqzmfbjgqvzveixjzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003364.255817-694-185843218528633/AnsiballZ_command.py'
Dec 06 06:42:44 compute-0 sudo[163411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:44 compute-0 python3.9[163413]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:44 compute-0 sudo[163411]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:45 compute-0 sudo[163564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfxcrklpcdhpcwexanevohrowqzaouov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003364.9025629-694-189767296841875/AnsiballZ_command.py'
Dec 06 06:42:45 compute-0 sudo[163564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:42:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:45.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:42:45 compute-0 python3.9[163566]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:45 compute-0 sudo[163564]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:45.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:45 compute-0 sudo[163717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clwruliekdaoevkahbswjirazoazgdbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003365.5797083-694-212144915445752/AnsiballZ_command.py'
Dec 06 06:42:45 compute-0 sudo[163717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:46 compute-0 python3.9[163719]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:46 compute-0 sudo[163717]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:46 compute-0 sudo[163871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krhzgzxnxrhusgimzpxilfscpoorkktt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003366.2485647-694-103696400927944/AnsiballZ_command.py'
Dec 06 06:42:46 compute-0 sudo[163871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:46 compute-0 python3.9[163873]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:46 compute-0 sudo[163871]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:47 compute-0 sudo[164024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpxcvwfhqtphzpvvyucpiakmstiynvao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003366.9238224-694-110781654014365/AnsiballZ_command.py'
Dec 06 06:42:47 compute-0 sudo[164024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:47.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:47 compute-0 ceph-mon[74339]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:47 compute-0 python3.9[164026]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:42:47 compute-0 sudo[164024]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:47.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:48 compute-0 ceph-mon[74339]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:48 compute-0 ceph-mon[74339]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:49 compute-0 sudo[164178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohrmyrifinhujltqwdrwvxsykmdmglfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003368.8177595-856-199193663240413/AnsiballZ_getent.py'
Dec 06 06:42:49 compute-0 sudo[164178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:49.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:49 compute-0 python3.9[164180]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 06 06:42:49 compute-0 sudo[164178]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000999999s ======
Dec 06 06:42:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:49.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999999s
Dec 06 06:42:50 compute-0 sudo[164331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krjutbvctpjszdrtojfebygihsrvqgjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003369.6384854-880-43395586901760/AnsiballZ_group.py'
Dec 06 06:42:50 compute-0 sudo[164331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:50 compute-0 ceph-mon[74339]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:50 compute-0 python3.9[164333]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 06:42:50 compute-0 groupadd[164336]: group added to /etc/group: name=libvirt, GID=42473
Dec 06 06:42:50 compute-0 groupadd[164336]: group added to /etc/gshadow: name=libvirt
Dec 06 06:42:50 compute-0 groupadd[164336]: new group: name=libvirt, GID=42473
Dec 06 06:42:50 compute-0 sudo[164331]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:50 compute-0 podman[164335]: 2025-12-06 06:42:50.394060233 +0000 UTC m=+0.092916325 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:42:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:42:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:51.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:42:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:51.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:53 compute-0 ceph-mon[74339]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:53.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:53.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:53 compute-0 sudo[164517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxvsguusccocydeglhtvaacwsnfgqnej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003373.33724-904-41629287954104/AnsiballZ_user.py'
Dec 06 06:42:53 compute-0 sudo[164517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:54 compute-0 python3.9[164519]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 06:42:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:54 compute-0 useradd[164521]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 06 06:42:54 compute-0 ceph-mon[74339]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:54 compute-0 sudo[164517]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:55.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:55 compute-0 sudo[164678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlkxzirdsohvbkpmvedetccrpvicvrjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003375.0265672-937-101687831026615/AnsiballZ_setup.py'
Dec 06 06:42:55 compute-0 sudo[164678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:55 compute-0 python3.9[164680]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:42:55 compute-0 ceph-mon[74339]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:42:55 compute-0 sudo[164678]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:42:56 compute-0 sudo[164763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odmnrhdphrgugphvgyyulzvjppwkgefw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003375.0265672-937-101687831026615/AnsiballZ_dnf.py'
Dec 06 06:42:56 compute-0 sudo[164763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:42:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:56 compute-0 python3.9[164765]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:42:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:42:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:57.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:42:57 compute-0 ceph-mon[74339]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:42:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:42:57 compute-0 sudo[164767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:42:57 compute-0 sudo[164767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:42:57 compute-0 sudo[164767]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:58 compute-0 sudo[164792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:42:58 compute-0 sudo[164792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:42:58 compute-0 sudo[164792]: pam_unix(sudo:session): session closed for user root
Dec 06 06:42:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:42:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:42:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:42:59.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:42:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:42:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:42:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:42:59.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:01 compute-0 ceph-mon[74339]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:01.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:01.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:02 compute-0 ceph-mon[74339]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:02 compute-0 podman[164828]: 2025-12-06 06:43:02.457303315 +0000 UTC m=+0.106769918 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 06:43:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:03.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:43:03.786 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:43:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:43:03.788 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:43:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:43:03.788 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:43:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:43:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:03.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:43:04 compute-0 ceph-mon[74339]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:05.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:05.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:06 compute-0 ceph-mon[74339]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:07.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:07 compute-0 ceph-mon[74339]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:07.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:09.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:09.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:11 compute-0 ceph-mon[74339]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:11.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:11.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:12 compute-0 ceph-mon[74339]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:43:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:43:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:43:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:43:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:43:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:43:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:43:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:13.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:43:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:43:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:13.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:43:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:14 compute-0 ceph-mon[74339]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:43:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:15.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:43:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:43:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:15.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:43:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:17.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:17 compute-0 ceph-mon[74339]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000000s ======
Dec 06 06:43:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:17.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000000s
Dec 06 06:43:18 compute-0 sudo[165030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:18 compute-0 sudo[165030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:18 compute-0 sudo[165030]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:18 compute-0 sudo[165063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:18 compute-0 sudo[165055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:43:18 compute-0 sudo[165063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:18 compute-0 sudo[165055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:18 compute-0 sudo[165063]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:18 compute-0 sudo[165055]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:43:18
Dec 06 06:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'default.rgw.log']
Dec 06 06:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:43:18 compute-0 sudo[165105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:18 compute-0 sudo[165105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:18 compute-0 sudo[165105]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:18 compute-0 sudo[165108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:18 compute-0 sudo[165108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:18 compute-0 sudo[165108]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:18 compute-0 sudo[165156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:43:18 compute-0 sudo[165156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:18 compute-0 ceph-mon[74339]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:18 compute-0 sudo[165156]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 06:43:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 06:43:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:43:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:43:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:43:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:43:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:43:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:43:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d97955e7-0eff-4f80-9741-836868f6bf8a does not exist
Dec 06 06:43:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b40b90c4-0593-4c52-9c6b-02c759df6389 does not exist
Dec 06 06:43:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9edf7fb4-ea5d-47e2-848d-bcd11246008e does not exist
Dec 06 06:43:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:43:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:43:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:43:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:43:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:43:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:43:19 compute-0 sudo[165212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:19 compute-0 sudo[165212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:19 compute-0 sudo[165212]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:19 compute-0 sudo[165237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:43:19 compute-0 sudo[165237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:19 compute-0 sudo[165237]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:19 compute-0 sudo[165263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:19 compute-0 sudo[165263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:19 compute-0 sudo[165263]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:19 compute-0 sudo[165288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:43:19 compute-0 sudo[165288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:19.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:19.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:19 compute-0 podman[165359]: 2025-12-06 06:43:19.834941446 +0000 UTC m=+0.048511426 container create 1de802bd44524c76a08817b759a660082648a2e181a58be9da76557615d6f6a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 06:43:19 compute-0 systemd[1]: Started libpod-conmon-1de802bd44524c76a08817b759a660082648a2e181a58be9da76557615d6f6a5.scope.
Dec 06 06:43:19 compute-0 podman[165359]: 2025-12-06 06:43:19.810867613 +0000 UTC m=+0.024437633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:43:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:43:19 compute-0 podman[165359]: 2025-12-06 06:43:19.935243118 +0000 UTC m=+0.148813188 container init 1de802bd44524c76a08817b759a660082648a2e181a58be9da76557615d6f6a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_colden, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Dec 06 06:43:19 compute-0 podman[165359]: 2025-12-06 06:43:19.946472293 +0000 UTC m=+0.160042273 container start 1de802bd44524c76a08817b759a660082648a2e181a58be9da76557615d6f6a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:43:19 compute-0 podman[165359]: 2025-12-06 06:43:19.950317972 +0000 UTC m=+0.163887992 container attach 1de802bd44524c76a08817b759a660082648a2e181a58be9da76557615d6f6a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_colden, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Dec 06 06:43:19 compute-0 distracted_colden[165375]: 167 167
Dec 06 06:43:19 compute-0 systemd[1]: libpod-1de802bd44524c76a08817b759a660082648a2e181a58be9da76557615d6f6a5.scope: Deactivated successfully.
Dec 06 06:43:19 compute-0 podman[165359]: 2025-12-06 06:43:19.956725975 +0000 UTC m=+0.170295955 container died 1de802bd44524c76a08817b759a660082648a2e181a58be9da76557615d6f6a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_colden, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-27caa9b15794c35b3c49b6bc7e0eb54543c01945f710604fa5ec98eed5535eb3-merged.mount: Deactivated successfully.
Dec 06 06:43:20 compute-0 podman[165359]: 2025-12-06 06:43:20.007197749 +0000 UTC m=+0.220767729 container remove 1de802bd44524c76a08817b759a660082648a2e181a58be9da76557615d6f6a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 06:43:20 compute-0 systemd[1]: libpod-conmon-1de802bd44524c76a08817b759a660082648a2e181a58be9da76557615d6f6a5.scope: Deactivated successfully.
Dec 06 06:43:20 compute-0 podman[165399]: 2025-12-06 06:43:20.190021291 +0000 UTC m=+0.060073910 container create ebe66530eba3a3874d18468f0a7fe8d139256e650b1364156e36f74c3bbb6bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 06:43:20 compute-0 systemd[1]: Started libpod-conmon-ebe66530eba3a3874d18468f0a7fe8d139256e650b1364156e36f74c3bbb6bba.scope.
Dec 06 06:43:20 compute-0 podman[165399]: 2025-12-06 06:43:20.165925458 +0000 UTC m=+0.035978107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:43:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b5709b9acc7aaa5c70c5c7f01a48f859d012d5a807812b05778948beed36ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b5709b9acc7aaa5c70c5c7f01a48f859d012d5a807812b05778948beed36ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b5709b9acc7aaa5c70c5c7f01a48f859d012d5a807812b05778948beed36ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b5709b9acc7aaa5c70c5c7f01a48f859d012d5a807812b05778948beed36ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65b5709b9acc7aaa5c70c5c7f01a48f859d012d5a807812b05778948beed36ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:20 compute-0 ceph-mon[74339]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 06:43:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:43:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:43:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:43:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:43:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:43:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:43:20 compute-0 podman[165399]: 2025-12-06 06:43:20.283581082 +0000 UTC m=+0.153633731 container init ebe66530eba3a3874d18468f0a7fe8d139256e650b1364156e36f74c3bbb6bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:43:20 compute-0 podman[165399]: 2025-12-06 06:43:20.296854541 +0000 UTC m=+0.166907170 container start ebe66530eba3a3874d18468f0a7fe8d139256e650b1364156e36f74c3bbb6bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_albattani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 06:43:20 compute-0 podman[165399]: 2025-12-06 06:43:20.301307463 +0000 UTC m=+0.171360102 container attach ebe66530eba3a3874d18468f0a7fe8d139256e650b1364156e36f74c3bbb6bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 06:43:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:21 compute-0 elastic_albattani[165416]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:43:21 compute-0 elastic_albattani[165416]: --> relative data size: 1.0
Dec 06 06:43:21 compute-0 elastic_albattani[165416]: --> All data devices are unavailable
Dec 06 06:43:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:21.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:21 compute-0 systemd[1]: libpod-ebe66530eba3a3874d18468f0a7fe8d139256e650b1364156e36f74c3bbb6bba.scope: Deactivated successfully.
Dec 06 06:43:21 compute-0 systemd[1]: libpod-ebe66530eba3a3874d18468f0a7fe8d139256e650b1364156e36f74c3bbb6bba.scope: Consumed 1.055s CPU time.
Dec 06 06:43:21 compute-0 podman[165399]: 2025-12-06 06:43:21.364288622 +0000 UTC m=+1.234341261 container died ebe66530eba3a3874d18468f0a7fe8d139256e650b1364156e36f74c3bbb6bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_albattani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:43:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-65b5709b9acc7aaa5c70c5c7f01a48f859d012d5a807812b05778948beed36ba-merged.mount: Deactivated successfully.
Dec 06 06:43:21 compute-0 ceph-mon[74339]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:21 compute-0 podman[165399]: 2025-12-06 06:43:21.432402515 +0000 UTC m=+1.302455134 container remove ebe66530eba3a3874d18468f0a7fe8d139256e650b1364156e36f74c3bbb6bba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:43:21 compute-0 systemd[1]: libpod-conmon-ebe66530eba3a3874d18468f0a7fe8d139256e650b1364156e36f74c3bbb6bba.scope: Deactivated successfully.
Dec 06 06:43:21 compute-0 podman[165432]: 2025-12-06 06:43:21.461620128 +0000 UTC m=+0.107062125 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 06:43:21 compute-0 sudo[165288]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:21 compute-0 sudo[165467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:21 compute-0 sudo[165467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:21 compute-0 sudo[165467]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:21 compute-0 sudo[165493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:43:21 compute-0 sudo[165493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:21 compute-0 sudo[165493]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:21 compute-0 sudo[165518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:21 compute-0 sudo[165518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:21 compute-0 sudo[165518]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:21 compute-0 sudo[165543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:43:21 compute-0 sudo[165543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:43:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:21.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:43:22 compute-0 podman[165610]: 2025-12-06 06:43:22.081164263 +0000 UTC m=+0.036739356 container create c9df991593df4044988bc9bf0e2f444b60e3736588655d3ca09db674c21e971e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_margulis, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 06:43:22 compute-0 systemd[1]: Started libpod-conmon-c9df991593df4044988bc9bf0e2f444b60e3736588655d3ca09db674c21e971e.scope.
Dec 06 06:43:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:43:22 compute-0 podman[165610]: 2025-12-06 06:43:22.06376108 +0000 UTC m=+0.019336193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:43:22 compute-0 podman[165610]: 2025-12-06 06:43:22.16275782 +0000 UTC m=+0.118332933 container init c9df991593df4044988bc9bf0e2f444b60e3736588655d3ca09db674c21e971e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_margulis, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:43:22 compute-0 podman[165610]: 2025-12-06 06:43:22.331796211 +0000 UTC m=+0.287371334 container start c9df991593df4044988bc9bf0e2f444b60e3736588655d3ca09db674c21e971e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 06:43:22 compute-0 podman[165610]: 2025-12-06 06:43:22.336309226 +0000 UTC m=+0.291884339 container attach c9df991593df4044988bc9bf0e2f444b60e3736588655d3ca09db674c21e971e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_margulis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 06:43:22 compute-0 funny_margulis[165626]: 167 167
Dec 06 06:43:22 compute-0 systemd[1]: libpod-c9df991593df4044988bc9bf0e2f444b60e3736588655d3ca09db674c21e971e.scope: Deactivated successfully.
Dec 06 06:43:22 compute-0 podman[165610]: 2025-12-06 06:43:22.338840609 +0000 UTC m=+0.294415702 container died c9df991593df4044988bc9bf0e2f444b60e3736588655d3ca09db674c21e971e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_margulis, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:43:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9033eafa41de64214845640a39530c5c4d5bb9ba894c6b0c4076471a02d7f868-merged.mount: Deactivated successfully.
Dec 06 06:43:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:22 compute-0 podman[165610]: 2025-12-06 06:43:22.726463604 +0000 UTC m=+0.682038697 container remove c9df991593df4044988bc9bf0e2f444b60e3736588655d3ca09db674c21e971e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_margulis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Dec 06 06:43:22 compute-0 systemd[1]: libpod-conmon-c9df991593df4044988bc9bf0e2f444b60e3736588655d3ca09db674c21e971e.scope: Deactivated successfully.
Dec 06 06:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:43:23 compute-0 podman[165650]: 2025-12-06 06:43:23.097760131 +0000 UTC m=+0.118730972 container create 193e4cd279fe5608be94fea455b3879e49a992648fd368d05c2301dfed2563bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:43:23 compute-0 podman[165650]: 2025-12-06 06:43:23.007275009 +0000 UTC m=+0.028245870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:43:23 compute-0 ceph-mon[74339]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:23 compute-0 systemd[1]: Started libpod-conmon-193e4cd279fe5608be94fea455b3879e49a992648fd368d05c2301dfed2563bf.scope.
Dec 06 06:43:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac47a7e21502c8ebd0e1c9b6d95cfa8e074a9b40d7129a5c19f1a53d6058d90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac47a7e21502c8ebd0e1c9b6d95cfa8e074a9b40d7129a5c19f1a53d6058d90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac47a7e21502c8ebd0e1c9b6d95cfa8e074a9b40d7129a5c19f1a53d6058d90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac47a7e21502c8ebd0e1c9b6d95cfa8e074a9b40d7129a5c19f1a53d6058d90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:23 compute-0 podman[165650]: 2025-12-06 06:43:23.192938553 +0000 UTC m=+0.213909414 container init 193e4cd279fe5608be94fea455b3879e49a992648fd368d05c2301dfed2563bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec 06 06:43:23 compute-0 podman[165650]: 2025-12-06 06:43:23.209613957 +0000 UTC m=+0.230584798 container start 193e4cd279fe5608be94fea455b3879e49a992648fd368d05c2301dfed2563bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:43:23 compute-0 podman[165650]: 2025-12-06 06:43:23.228140859 +0000 UTC m=+0.249111730 container attach 193e4cd279fe5608be94fea455b3879e49a992648fd368d05c2301dfed2563bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:43:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:43:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:23.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:43:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:23.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:43:24 compute-0 interesting_noyce[165666]: {
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:     "0": [
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:         {
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             "devices": [
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "/dev/loop3"
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             ],
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             "lv_name": "ceph_lv0",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             "lv_size": "7511998464",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             "name": "ceph_lv0",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             "tags": {
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "ceph.cluster_name": "ceph",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "ceph.crush_device_class": "",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "ceph.encrypted": "0",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "ceph.osd_id": "0",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "ceph.type": "block",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:                 "ceph.vdo": "0"
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             },
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             "type": "block",
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:             "vg_name": "ceph_vg0"
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:         }
Dec 06 06:43:24 compute-0 interesting_noyce[165666]:     ]
Dec 06 06:43:24 compute-0 interesting_noyce[165666]: }
Dec 06 06:43:24 compute-0 systemd[1]: libpod-193e4cd279fe5608be94fea455b3879e49a992648fd368d05c2301dfed2563bf.scope: Deactivated successfully.
Dec 06 06:43:24 compute-0 systemd[1]: libpod-193e4cd279fe5608be94fea455b3879e49a992648fd368d05c2301dfed2563bf.scope: Consumed 1.053s CPU time.
Dec 06 06:43:24 compute-0 podman[165650]: 2025-12-06 06:43:24.302569648 +0000 UTC m=+1.323540509 container died 193e4cd279fe5608be94fea455b3879e49a992648fd368d05c2301dfed2563bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 06:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ac47a7e21502c8ebd0e1c9b6d95cfa8e074a9b40d7129a5c19f1a53d6058d90-merged.mount: Deactivated successfully.
Dec 06 06:43:24 compute-0 podman[165650]: 2025-12-06 06:43:24.359081557 +0000 UTC m=+1.380052398 container remove 193e4cd279fe5608be94fea455b3879e49a992648fd368d05c2301dfed2563bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_noyce, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:43:24 compute-0 systemd[1]: libpod-conmon-193e4cd279fe5608be94fea455b3879e49a992648fd368d05c2301dfed2563bf.scope: Deactivated successfully.
Dec 06 06:43:24 compute-0 sudo[165543]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:24 compute-0 sudo[165690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:24 compute-0 sudo[165690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:24 compute-0 sudo[165690]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:24 compute-0 sudo[165715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:43:24 compute-0 sudo[165715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:24 compute-0 sudo[165715]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:24 compute-0 sudo[165740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:24 compute-0 sudo[165740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:24 compute-0 sudo[165740]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:24 compute-0 sudo[165765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:43:24 compute-0 sudo[165765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:24 compute-0 ceph-mon[74339]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:25 compute-0 podman[165830]: 2025-12-06 06:43:25.020316212 +0000 UTC m=+0.043423306 container create 971a0e0c193b4f143d5455f55815844217b32728cce1cf2433d05c07c3fb6642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_stonebraker, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 06:43:25 compute-0 systemd[1]: Started libpod-conmon-971a0e0c193b4f143d5455f55815844217b32728cce1cf2433d05c07c3fb6642.scope.
Dec 06 06:43:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:43:25 compute-0 podman[165830]: 2025-12-06 06:43:25.097932337 +0000 UTC m=+0.121039451 container init 971a0e0c193b4f143d5455f55815844217b32728cce1cf2433d05c07c3fb6642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_stonebraker, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:43:25 compute-0 podman[165830]: 2025-12-06 06:43:25.004669694 +0000 UTC m=+0.027776808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:43:25 compute-0 podman[165830]: 2025-12-06 06:43:25.104194057 +0000 UTC m=+0.127301151 container start 971a0e0c193b4f143d5455f55815844217b32728cce1cf2433d05c07c3fb6642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:43:25 compute-0 goofy_stonebraker[165847]: 167 167
Dec 06 06:43:25 compute-0 podman[165830]: 2025-12-06 06:43:25.10867116 +0000 UTC m=+0.131778284 container attach 971a0e0c193b4f143d5455f55815844217b32728cce1cf2433d05c07c3fb6642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_stonebraker, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:43:25 compute-0 systemd[1]: libpod-971a0e0c193b4f143d5455f55815844217b32728cce1cf2433d05c07c3fb6642.scope: Deactivated successfully.
Dec 06 06:43:25 compute-0 podman[165830]: 2025-12-06 06:43:25.109217864 +0000 UTC m=+0.132324958 container died 971a0e0c193b4f143d5455f55815844217b32728cce1cf2433d05c07c3fb6642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:43:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-20b53ac21fe101b3931ab31951705532302f46eb643778504bf0c9fa1b2fb858-merged.mount: Deactivated successfully.
Dec 06 06:43:25 compute-0 podman[165830]: 2025-12-06 06:43:25.143698522 +0000 UTC m=+0.166805616 container remove 971a0e0c193b4f143d5455f55815844217b32728cce1cf2433d05c07c3fb6642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:43:25 compute-0 systemd[1]: libpod-conmon-971a0e0c193b4f143d5455f55815844217b32728cce1cf2433d05c07c3fb6642.scope: Deactivated successfully.
Dec 06 06:43:25 compute-0 podman[165871]: 2025-12-06 06:43:25.303527569 +0000 UTC m=+0.047066459 container create ad49297590b103b18191cc8e6d5317293a7ad40dc82450e642e114c8c2ec16c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec 06 06:43:25 compute-0 systemd[1]: Started libpod-conmon-ad49297590b103b18191cc8e6d5317293a7ad40dc82450e642e114c8c2ec16c2.scope.
Dec 06 06:43:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3656921af3ecabf2cceea6c504e49ca9e5c8e27084e3b69fcbcc76f08c1f582b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3656921af3ecabf2cceea6c504e49ca9e5c8e27084e3b69fcbcc76f08c1f582b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3656921af3ecabf2cceea6c504e49ca9e5c8e27084e3b69fcbcc76f08c1f582b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3656921af3ecabf2cceea6c504e49ca9e5c8e27084e3b69fcbcc76f08c1f582b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:43:25 compute-0 podman[165871]: 2025-12-06 06:43:25.281021366 +0000 UTC m=+0.024560306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:43:25 compute-0 podman[165871]: 2025-12-06 06:43:25.3766533 +0000 UTC m=+0.120192210 container init ad49297590b103b18191cc8e6d5317293a7ad40dc82450e642e114c8c2ec16c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:43:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:43:25 compute-0 podman[165871]: 2025-12-06 06:43:25.384986102 +0000 UTC m=+0.128524992 container start ad49297590b103b18191cc8e6d5317293a7ad40dc82450e642e114c8c2ec16c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:43:25 compute-0 podman[165871]: 2025-12-06 06:43:25.388937242 +0000 UTC m=+0.132476152 container attach ad49297590b103b18191cc8e6d5317293a7ad40dc82450e642e114c8c2ec16c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:43:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:43:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:25.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:43:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:25.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:26 compute-0 epic_matsumoto[165888]: {
Dec 06 06:43:26 compute-0 epic_matsumoto[165888]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:43:26 compute-0 epic_matsumoto[165888]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:43:26 compute-0 epic_matsumoto[165888]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:43:26 compute-0 epic_matsumoto[165888]:         "osd_id": 0,
Dec 06 06:43:26 compute-0 epic_matsumoto[165888]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:43:26 compute-0 epic_matsumoto[165888]:         "type": "bluestore"
Dec 06 06:43:26 compute-0 epic_matsumoto[165888]:     }
Dec 06 06:43:26 compute-0 epic_matsumoto[165888]: }
Dec 06 06:43:26 compute-0 systemd[1]: libpod-ad49297590b103b18191cc8e6d5317293a7ad40dc82450e642e114c8c2ec16c2.scope: Deactivated successfully.
Dec 06 06:43:26 compute-0 podman[165871]: 2025-12-06 06:43:26.279092753 +0000 UTC m=+1.022631643 container died ad49297590b103b18191cc8e6d5317293a7ad40dc82450e642e114c8c2ec16c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 06:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3656921af3ecabf2cceea6c504e49ca9e5c8e27084e3b69fcbcc76f08c1f582b-merged.mount: Deactivated successfully.
Dec 06 06:43:26 compute-0 podman[165871]: 2025-12-06 06:43:26.455065701 +0000 UTC m=+1.198604581 container remove ad49297590b103b18191cc8e6d5317293a7ad40dc82450e642e114c8c2ec16c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 06:43:26 compute-0 systemd[1]: libpod-conmon-ad49297590b103b18191cc8e6d5317293a7ad40dc82450e642e114c8c2ec16c2.scope: Deactivated successfully.
Dec 06 06:43:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:26 compute-0 sudo[165765]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:43:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:43:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:43:27 compute-0 ceph-mon[74339]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:43:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 367d97e6-810c-45c2-ba95-dd396267ecfa does not exist
Dec 06 06:43:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 42047879-5758-4395-a1be-b2fe6ad90638 does not exist
Dec 06 06:43:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3d05c76c-939b-460e-873f-e18a0aeab3b4 does not exist
Dec 06 06:43:27 compute-0 sudo[165924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:27 compute-0 sudo[165924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:27 compute-0 sudo[165924]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:27.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:27 compute-0 sudo[165949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:43:27 compute-0 sudo[165949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:27 compute-0 sudo[165949]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:43:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:27.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:43:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:43:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:43:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:29 compute-0 ceph-mon[74339]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:43:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:29.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:43:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:43:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:29.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:43:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:31 compute-0 ceph-mon[74339]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:31.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:43:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:31.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:43:32 compute-0 kernel: SELinux:  Converting 2771 SID table entries...
Dec 06 06:43:32 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 06:43:32 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 06 06:43:32 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 06:43:32 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 06 06:43:32 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 06:43:32 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 06:43:32 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 06:43:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:32 compute-0 ceph-mon[74339]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:33 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec 06 06:43:33 compute-0 podman[165984]: 2025-12-06 06:43:33.447009846 +0000 UTC m=+0.080394447 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:43:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:43:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:33.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:43:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:33.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:34 compute-0 ceph-mon[74339]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:35.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:43:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:35.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:43:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:37 compute-0 ceph-mon[74339]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:43:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:37.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:43:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:43:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:37.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:43:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:38 compute-0 sudo[166005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:38 compute-0 sudo[166005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:38 compute-0 sudo[166005]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:38 compute-0 sudo[166030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:38 compute-0 sudo[166030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:38 compute-0 sudo[166030]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:38 compute-0 ceph-mon[74339]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:39.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:39.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:41.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:41 compute-0 ceph-mon[74339]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:41.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:43:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:43:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:43:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:43:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:43:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:43:43 compute-0 ceph-mon[74339]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:43.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:43.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:44 compute-0 kernel: SELinux:  Converting 2771 SID table entries...
Dec 06 06:43:44 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 06:43:44 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 06 06:43:44 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 06:43:44 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 06 06:43:44 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 06:43:44 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 06:43:44 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 06:43:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:44 compute-0 ceph-mon[74339]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:45.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:45.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:47 compute-0 ceph-mon[74339]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:47.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:47.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:49.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:49.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:50 compute-0 ceph-mon[74339]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:51 compute-0 ceph-mon[74339]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:43:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:51.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:43:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:51.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:52 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 06 06:43:52 compute-0 podman[166069]: 2025-12-06 06:43:52.450876126 +0000 UTC m=+0.102175245 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 06 06:43:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:53.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:53.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:53 compute-0 ceph-mon[74339]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:55 compute-0 ceph-mon[74339]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:43:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:55.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:43:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:43:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:55.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:43:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:43:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:56 compute-0 ceph-mon[74339]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:57.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:43:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:57.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:43:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:58 compute-0 sudo[166862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:58 compute-0 sudo[166862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:58 compute-0 sudo[166862]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:58 compute-0 sudo[166931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:43:58 compute-0 sudo[166931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:43:58 compute-0 sudo[166931]: pam_unix(sudo:session): session closed for user root
Dec 06 06:43:59 compute-0 ceph-mon[74339]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:43:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:43:59.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:43:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:43:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:43:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:43:59.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:01.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:01 compute-0 ceph-mon[74339]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:01.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:03 compute-0 ceph-mon[74339]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:03.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:44:03.788 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:44:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:44:03.790 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:44:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:44:03.791 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:44:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000009s ======
Dec 06 06:44:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:03.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Dec 06 06:44:04 compute-0 podman[170178]: 2025-12-06 06:44:04.412375 +0000 UTC m=+0.064248520 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 06:44:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:05.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000009s ======
Dec 06 06:44:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:05.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Dec 06 06:44:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:06 compute-0 ceph-mon[74339]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:07 compute-0 ceph-mon[74339]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:07.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:07.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:09.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:09 compute-0 ceph-mon[74339]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:09.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:11 compute-0 ceph-mon[74339]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:11.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:11.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:44:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:44:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:44:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:44:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:44:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:44:13 compute-0 ceph-mon[74339]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:13.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:13.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:15 compute-0 ceph-mon[74339]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:15.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:15.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:17.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:17 compute-0 ceph-mon[74339]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:17.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:44:18
Dec 06 06:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'volumes', '.mgr', 'images', 'cephfs.cephfs.meta', 'backups']
Dec 06 06:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:44:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:18 compute-0 sudo[179543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:18 compute-0 sudo[179543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:18 compute-0 sudo[179543]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:18 compute-0 sudo[179607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:18 compute-0 sudo[179607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:18 compute-0 sudo[179607]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:19.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:19 compute-0 ceph-mon[74339]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:19.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:21.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:21.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:44:23 compute-0 ceph-mon[74339]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:23 compute-0 podman[182548]: 2025-12-06 06:44:23.417153204 +0000 UTC m=+0.074212826 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 06:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:44:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:23.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:23.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:25 compute-0 ceph-mon[74339]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:44:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:44:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:25.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:25.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:26 compute-0 ceph-mon[74339]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:27.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:27 compute-0 sudo[183060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:27 compute-0 sudo[183060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:27 compute-0 sudo[183060]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:27.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:27 compute-0 ceph-mon[74339]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:27 compute-0 sudo[183086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:44:27 compute-0 sudo[183086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:27 compute-0 sudo[183086]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:27 compute-0 sudo[183114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:27 compute-0 sudo[183114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:27 compute-0 sudo[183114]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:28 compute-0 sudo[183139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:44:28 compute-0 sudo[183139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:28 compute-0 sudo[183139]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:44:28 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:44:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:44:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:44:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:44:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:44:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 85115f3b-725b-4ae6-a19d-2705d72223ae does not exist
Dec 06 06:44:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b5cb4154-b71e-46c0-830f-c87e618be2bd does not exist
Dec 06 06:44:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev fa0e6848-1e74-4cd5-8e19-5d65670e3b63 does not exist
Dec 06 06:44:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:44:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:44:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:44:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:44:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:44:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:44:29 compute-0 sudo[183203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:29 compute-0 sudo[183203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:29 compute-0 sudo[183203]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:29 compute-0 ceph-mon[74339]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:44:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:44:29 compute-0 sudo[183228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:44:29 compute-0 sudo[183228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:29 compute-0 sudo[183228]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:29 compute-0 sudo[183253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:29 compute-0 sudo[183253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:29 compute-0 sudo[183253]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:29 compute-0 sudo[183278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:44:29 compute-0 sudo[183278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:29.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:29 compute-0 podman[183341]: 2025-12-06 06:44:29.685925848 +0000 UTC m=+0.042254822 container create 6087fb04d5a64a46277233ed01e68710cf52128ca1fda07b0aa8c15ad7202ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_merkle, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:44:29 compute-0 systemd[1]: Started libpod-conmon-6087fb04d5a64a46277233ed01e68710cf52128ca1fda07b0aa8c15ad7202ec2.scope.
Dec 06 06:44:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:44:29 compute-0 podman[183341]: 2025-12-06 06:44:29.665211511 +0000 UTC m=+0.021540505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:44:29 compute-0 podman[183341]: 2025-12-06 06:44:29.773368557 +0000 UTC m=+0.129697551 container init 6087fb04d5a64a46277233ed01e68710cf52128ca1fda07b0aa8c15ad7202ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_merkle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:44:29 compute-0 podman[183341]: 2025-12-06 06:44:29.785303819 +0000 UTC m=+0.141632833 container start 6087fb04d5a64a46277233ed01e68710cf52128ca1fda07b0aa8c15ad7202ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:44:29 compute-0 podman[183341]: 2025-12-06 06:44:29.790317922 +0000 UTC m=+0.146646906 container attach 6087fb04d5a64a46277233ed01e68710cf52128ca1fda07b0aa8c15ad7202ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_merkle, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 06:44:29 compute-0 distracted_merkle[183357]: 167 167
Dec 06 06:44:29 compute-0 systemd[1]: libpod-6087fb04d5a64a46277233ed01e68710cf52128ca1fda07b0aa8c15ad7202ec2.scope: Deactivated successfully.
Dec 06 06:44:29 compute-0 podman[183341]: 2025-12-06 06:44:29.792479189 +0000 UTC m=+0.148808183 container died 6087fb04d5a64a46277233ed01e68710cf52128ca1fda07b0aa8c15ad7202ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_merkle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 06:44:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1d7c10c6c3dc641eba96eb839c218e026ede720a3df46393bdbe7e0ddd91e0b-merged.mount: Deactivated successfully.
Dec 06 06:44:29 compute-0 podman[183341]: 2025-12-06 06:44:29.837981599 +0000 UTC m=+0.194310573 container remove 6087fb04d5a64a46277233ed01e68710cf52128ca1fda07b0aa8c15ad7202ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 06:44:29 compute-0 systemd[1]: libpod-conmon-6087fb04d5a64a46277233ed01e68710cf52128ca1fda07b0aa8c15ad7202ec2.scope: Deactivated successfully.
Dec 06 06:44:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:29.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:30 compute-0 podman[183381]: 2025-12-06 06:44:30.008283535 +0000 UTC m=+0.042356583 container create 38f47de9f00e05420077f269a6db8953db796a9a2e75e93162fda42ad7e5ee34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:44:30 compute-0 systemd[1]: Started libpod-conmon-38f47de9f00e05420077f269a6db8953db796a9a2e75e93162fda42ad7e5ee34.scope.
Dec 06 06:44:30 compute-0 podman[183381]: 2025-12-06 06:44:29.988401595 +0000 UTC m=+0.022474663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:44:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d23b2af73cf0ab8d01cc5d74a8d0f3dc1d4f20e9fdc1540fcfb04bc788a8171/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d23b2af73cf0ab8d01cc5d74a8d0f3dc1d4f20e9fdc1540fcfb04bc788a8171/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d23b2af73cf0ab8d01cc5d74a8d0f3dc1d4f20e9fdc1540fcfb04bc788a8171/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d23b2af73cf0ab8d01cc5d74a8d0f3dc1d4f20e9fdc1540fcfb04bc788a8171/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d23b2af73cf0ab8d01cc5d74a8d0f3dc1d4f20e9fdc1540fcfb04bc788a8171/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:30 compute-0 podman[183381]: 2025-12-06 06:44:30.121954447 +0000 UTC m=+0.156027525 container init 38f47de9f00e05420077f269a6db8953db796a9a2e75e93162fda42ad7e5ee34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 06:44:30 compute-0 podman[183381]: 2025-12-06 06:44:30.130919294 +0000 UTC m=+0.164992342 container start 38f47de9f00e05420077f269a6db8953db796a9a2e75e93162fda42ad7e5ee34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 06:44:30 compute-0 podman[183381]: 2025-12-06 06:44:30.138149726 +0000 UTC m=+0.172222804 container attach 38f47de9f00e05420077f269a6db8953db796a9a2e75e93162fda42ad7e5ee34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:44:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:44:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:44:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:44:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:44:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:30 compute-0 nostalgic_aryabhata[183398]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:44:30 compute-0 nostalgic_aryabhata[183398]: --> relative data size: 1.0
Dec 06 06:44:30 compute-0 nostalgic_aryabhata[183398]: --> All data devices are unavailable
Dec 06 06:44:30 compute-0 systemd[1]: libpod-38f47de9f00e05420077f269a6db8953db796a9a2e75e93162fda42ad7e5ee34.scope: Deactivated successfully.
Dec 06 06:44:30 compute-0 podman[183381]: 2025-12-06 06:44:30.996031901 +0000 UTC m=+1.030104949 container died 38f47de9f00e05420077f269a6db8953db796a9a2e75e93162fda42ad7e5ee34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d23b2af73cf0ab8d01cc5d74a8d0f3dc1d4f20e9fdc1540fcfb04bc788a8171-merged.mount: Deactivated successfully.
Dec 06 06:44:31 compute-0 podman[183381]: 2025-12-06 06:44:31.118322147 +0000 UTC m=+1.152395195 container remove 38f47de9f00e05420077f269a6db8953db796a9a2e75e93162fda42ad7e5ee34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_aryabhata, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Dec 06 06:44:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:31 compute-0 systemd[1]: libpod-conmon-38f47de9f00e05420077f269a6db8953db796a9a2e75e93162fda42ad7e5ee34.scope: Deactivated successfully.
Dec 06 06:44:31 compute-0 sudo[183278]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:31 compute-0 sudo[183428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:31 compute-0 sudo[183428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:31 compute-0 sudo[183428]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:31 compute-0 sudo[183453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:44:31 compute-0 sudo[183453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:31 compute-0 sudo[183453]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:31 compute-0 sudo[183478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:31 compute-0 sudo[183478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:31 compute-0 sudo[183478]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:31 compute-0 sudo[183503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:44:31 compute-0 sudo[183503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:31.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:31 compute-0 podman[183567]: 2025-12-06 06:44:31.763929968 +0000 UTC m=+0.059067356 container create 4184f922f6ebc94f0773abb738736b9dd6ba46d9c2fbda082851803669799ee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elgamal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:44:31 compute-0 systemd[1]: Started libpod-conmon-4184f922f6ebc94f0773abb738736b9dd6ba46d9c2fbda082851803669799ee9.scope.
Dec 06 06:44:31 compute-0 podman[183567]: 2025-12-06 06:44:31.728969379 +0000 UTC m=+0.024106787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:44:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:44:31 compute-0 ceph-mon[74339]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:31 compute-0 podman[183567]: 2025-12-06 06:44:31.888740815 +0000 UTC m=+0.183878233 container init 4184f922f6ebc94f0773abb738736b9dd6ba46d9c2fbda082851803669799ee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elgamal, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:44:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:31.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:31 compute-0 podman[183567]: 2025-12-06 06:44:31.91155895 +0000 UTC m=+0.206696338 container start 4184f922f6ebc94f0773abb738736b9dd6ba46d9c2fbda082851803669799ee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:44:31 compute-0 podman[183567]: 2025-12-06 06:44:31.915873047 +0000 UTC m=+0.211010455 container attach 4184f922f6ebc94f0773abb738736b9dd6ba46d9c2fbda082851803669799ee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:44:31 compute-0 hardcore_elgamal[183584]: 167 167
Dec 06 06:44:31 compute-0 systemd[1]: libpod-4184f922f6ebc94f0773abb738736b9dd6ba46d9c2fbda082851803669799ee9.scope: Deactivated successfully.
Dec 06 06:44:31 compute-0 podman[183567]: 2025-12-06 06:44:31.930562973 +0000 UTC m=+0.225700361 container died 4184f922f6ebc94f0773abb738736b9dd6ba46d9c2fbda082851803669799ee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elgamal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:44:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-53f0db0f2cd0b8a4628ddd0567291262cfd0cb7616a674664a3f0d5685a613da-merged.mount: Deactivated successfully.
Dec 06 06:44:32 compute-0 podman[183567]: 2025-12-06 06:44:32.057155295 +0000 UTC m=+0.352292683 container remove 4184f922f6ebc94f0773abb738736b9dd6ba46d9c2fbda082851803669799ee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_elgamal, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:44:32 compute-0 systemd[1]: libpod-conmon-4184f922f6ebc94f0773abb738736b9dd6ba46d9c2fbda082851803669799ee9.scope: Deactivated successfully.
Dec 06 06:44:32 compute-0 podman[183608]: 2025-12-06 06:44:32.232915299 +0000 UTC m=+0.023774205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:44:32 compute-0 podman[183608]: 2025-12-06 06:44:32.39218429 +0000 UTC m=+0.183043166 container create 05cee928ef3330ef0604aa2f21399b244b3b30b623eadbb37189cd07e4aa367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_leakey, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 06:44:32 compute-0 systemd[1]: Started libpod-conmon-05cee928ef3330ef0604aa2f21399b244b3b30b623eadbb37189cd07e4aa367b.scope.
Dec 06 06:44:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4349a579ace76cf37cfb3c02e9d008609c2c2eaf5c22a6be4eb75d502dc3ad9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4349a579ace76cf37cfb3c02e9d008609c2c2eaf5c22a6be4eb75d502dc3ad9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4349a579ace76cf37cfb3c02e9d008609c2c2eaf5c22a6be4eb75d502dc3ad9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4349a579ace76cf37cfb3c02e9d008609c2c2eaf5c22a6be4eb75d502dc3ad9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:32 compute-0 podman[183608]: 2025-12-06 06:44:32.661086749 +0000 UTC m=+0.451945635 container init 05cee928ef3330ef0604aa2f21399b244b3b30b623eadbb37189cd07e4aa367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 06:44:32 compute-0 podman[183608]: 2025-12-06 06:44:32.66933026 +0000 UTC m=+0.460189146 container start 05cee928ef3330ef0604aa2f21399b244b3b30b623eadbb37189cd07e4aa367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_leakey, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:44:32 compute-0 podman[183608]: 2025-12-06 06:44:32.773653393 +0000 UTC m=+0.564512279 container attach 05cee928ef3330ef0604aa2f21399b244b3b30b623eadbb37189cd07e4aa367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_leakey, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 06 06:44:33 compute-0 ceph-mon[74339]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:33 compute-0 infallible_leakey[183625]: {
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:     "0": [
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:         {
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             "devices": [
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "/dev/loop3"
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             ],
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             "lv_name": "ceph_lv0",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             "lv_size": "7511998464",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             "name": "ceph_lv0",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             "tags": {
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "ceph.cluster_name": "ceph",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "ceph.crush_device_class": "",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "ceph.encrypted": "0",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "ceph.osd_id": "0",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "ceph.type": "block",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:                 "ceph.vdo": "0"
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             },
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             "type": "block",
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:             "vg_name": "ceph_vg0"
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:         }
Dec 06 06:44:33 compute-0 infallible_leakey[183625]:     ]
Dec 06 06:44:33 compute-0 infallible_leakey[183625]: }
Dec 06 06:44:33 compute-0 systemd[1]: libpod-05cee928ef3330ef0604aa2f21399b244b3b30b623eadbb37189cd07e4aa367b.scope: Deactivated successfully.
Dec 06 06:44:33 compute-0 podman[183634]: 2025-12-06 06:44:33.572347652 +0000 UTC m=+0.025087105 container died 05cee928ef3330ef0604aa2f21399b244b3b30b623eadbb37189cd07e4aa367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:44:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:33.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4349a579ace76cf37cfb3c02e9d008609c2c2eaf5c22a6be4eb75d502dc3ad9d-merged.mount: Deactivated successfully.
Dec 06 06:44:33 compute-0 podman[183634]: 2025-12-06 06:44:33.736243483 +0000 UTC m=+0.188982916 container remove 05cee928ef3330ef0604aa2f21399b244b3b30b623eadbb37189cd07e4aa367b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:44:33 compute-0 systemd[1]: libpod-conmon-05cee928ef3330ef0604aa2f21399b244b3b30b623eadbb37189cd07e4aa367b.scope: Deactivated successfully.
Dec 06 06:44:33 compute-0 sudo[183503]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:33 compute-0 sudo[183649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:33 compute-0 sudo[183649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:33 compute-0 sudo[183649]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:33 compute-0 sudo[183674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:44:33 compute-0 sudo[183674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:33 compute-0 sudo[183674]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:33.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:33 compute-0 sudo[183699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:33 compute-0 sudo[183699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:33 compute-0 sudo[183699]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:34 compute-0 sudo[183724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:44:34 compute-0 sudo[183724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:34 compute-0 podman[183788]: 2025-12-06 06:44:34.317653234 +0000 UTC m=+0.037913585 container create 3cc4da338cdde3b215270148b98cf708e98c398f980bfd1a68b164063d1a74f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:44:34 compute-0 systemd[1]: Started libpod-conmon-3cc4da338cdde3b215270148b98cf708e98c398f980bfd1a68b164063d1a74f9.scope.
Dec 06 06:44:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:44:34 compute-0 podman[183788]: 2025-12-06 06:44:34.301760898 +0000 UTC m=+0.022021279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:44:34 compute-0 podman[183788]: 2025-12-06 06:44:34.397832519 +0000 UTC m=+0.118092870 container init 3cc4da338cdde3b215270148b98cf708e98c398f980bfd1a68b164063d1a74f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 06:44:34 compute-0 podman[183788]: 2025-12-06 06:44:34.40488126 +0000 UTC m=+0.125141611 container start 3cc4da338cdde3b215270148b98cf708e98c398f980bfd1a68b164063d1a74f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:44:34 compute-0 podman[183788]: 2025-12-06 06:44:34.409401299 +0000 UTC m=+0.129661680 container attach 3cc4da338cdde3b215270148b98cf708e98c398f980bfd1a68b164063d1a74f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:44:34 compute-0 blissful_leavitt[183805]: 167 167
Dec 06 06:44:34 compute-0 systemd[1]: libpod-3cc4da338cdde3b215270148b98cf708e98c398f980bfd1a68b164063d1a74f9.scope: Deactivated successfully.
Dec 06 06:44:34 compute-0 podman[183788]: 2025-12-06 06:44:34.411283184 +0000 UTC m=+0.131543535 container died 3cc4da338cdde3b215270148b98cf708e98c398f980bfd1a68b164063d1a74f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_leavitt, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5012a720df33ff013fa0be198884425d99527e645e45a6a16d696fc982ea0a78-merged.mount: Deactivated successfully.
Dec 06 06:44:34 compute-0 podman[183788]: 2025-12-06 06:44:34.457228437 +0000 UTC m=+0.177488788 container remove 3cc4da338cdde3b215270148b98cf708e98c398f980bfd1a68b164063d1a74f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_leavitt, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:44:34 compute-0 systemd[1]: libpod-conmon-3cc4da338cdde3b215270148b98cf708e98c398f980bfd1a68b164063d1a74f9.scope: Deactivated successfully.
Dec 06 06:44:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:34 compute-0 podman[183817]: 2025-12-06 06:44:34.524036107 +0000 UTC m=+0.062339693 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 06:44:34 compute-0 podman[183847]: 2025-12-06 06:44:34.628056678 +0000 UTC m=+0.046682411 container create 312e638a4732cade2ad0e5368528add2c384a667d37e5e026960de47797517b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rosalind, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 06:44:34 compute-0 systemd[1]: Started libpod-conmon-312e638a4732cade2ad0e5368528add2c384a667d37e5e026960de47797517b4.scope.
Dec 06 06:44:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e164b7488bfe810993c0554ab4cb0aa3dfe3d17ef807b7c53cdffbe4db8088d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e164b7488bfe810993c0554ab4cb0aa3dfe3d17ef807b7c53cdffbe4db8088d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e164b7488bfe810993c0554ab4cb0aa3dfe3d17ef807b7c53cdffbe4db8088d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:34 compute-0 podman[183847]: 2025-12-06 06:44:34.61083385 +0000 UTC m=+0.029459603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e164b7488bfe810993c0554ab4cb0aa3dfe3d17ef807b7c53cdffbe4db8088d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:44:34 compute-0 podman[183847]: 2025-12-06 06:44:34.716492853 +0000 UTC m=+0.135118586 container init 312e638a4732cade2ad0e5368528add2c384a667d37e5e026960de47797517b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 06:44:34 compute-0 podman[183847]: 2025-12-06 06:44:34.722819517 +0000 UTC m=+0.141445250 container start 312e638a4732cade2ad0e5368528add2c384a667d37e5e026960de47797517b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rosalind, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:44:34 compute-0 podman[183847]: 2025-12-06 06:44:34.727144575 +0000 UTC m=+0.145770338 container attach 312e638a4732cade2ad0e5368528add2c384a667d37e5e026960de47797517b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:44:34 compute-0 ceph-mon[74339]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:35 compute-0 ecstatic_rosalind[183863]: {
Dec 06 06:44:35 compute-0 ecstatic_rosalind[183863]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:44:35 compute-0 ecstatic_rosalind[183863]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:44:35 compute-0 ecstatic_rosalind[183863]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:44:35 compute-0 ecstatic_rosalind[183863]:         "osd_id": 0,
Dec 06 06:44:35 compute-0 ecstatic_rosalind[183863]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:44:35 compute-0 ecstatic_rosalind[183863]:         "type": "bluestore"
Dec 06 06:44:35 compute-0 ecstatic_rosalind[183863]:     }
Dec 06 06:44:35 compute-0 ecstatic_rosalind[183863]: }
Dec 06 06:44:35 compute-0 systemd[1]: libpod-312e638a4732cade2ad0e5368528add2c384a667d37e5e026960de47797517b4.scope: Deactivated successfully.
Dec 06 06:44:35 compute-0 podman[183847]: 2025-12-06 06:44:35.555974721 +0000 UTC m=+0.974600454 container died 312e638a4732cade2ad0e5368528add2c384a667d37e5e026960de47797517b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rosalind, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 06:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e164b7488bfe810993c0554ab4cb0aa3dfe3d17ef807b7c53cdffbe4db8088d-merged.mount: Deactivated successfully.
Dec 06 06:44:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:35.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:35 compute-0 podman[183847]: 2025-12-06 06:44:35.612487064 +0000 UTC m=+1.031112797 container remove 312e638a4732cade2ad0e5368528add2c384a667d37e5e026960de47797517b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_rosalind, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:44:35 compute-0 systemd[1]: libpod-conmon-312e638a4732cade2ad0e5368528add2c384a667d37e5e026960de47797517b4.scope: Deactivated successfully.
Dec 06 06:44:35 compute-0 sudo[183724]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:44:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000008s ======
Dec 06 06:44:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:35.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Dec 06 06:44:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:44:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:44:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:37.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:37 compute-0 ceph-mon[74339]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:37.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:44:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4c5cb210-7ef4-4c44-939b-f7451de9f88c does not exist
Dec 06 06:44:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0c77f379-a81e-40ff-8551-0e928a64ce2b does not exist
Dec 06 06:44:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 93e2861c-e523-46c2-bfc4-09728bd54710 does not exist
Dec 06 06:44:38 compute-0 sudo[183900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:38 compute-0 sudo[183901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:38 compute-0 sudo[183900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:44:38 compute-0 ceph-mon[74339]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:38 compute-0 sudo[183901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:38 compute-0 sudo[183900]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:38 compute-0 sudo[183901]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:39 compute-0 sudo[183950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:39 compute-0 sudo[183950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:39 compute-0 sudo[183950]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:39 compute-0 sudo[183951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:44:39 compute-0 sudo[183951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:39 compute-0 sudo[183951]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:39.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:39.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:44:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:41 compute-0 ceph-mon[74339]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:44:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:41.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:44:41 compute-0 kernel: SELinux:  Converting 2772 SID table entries...
Dec 06 06:44:41 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 06:44:41 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 06 06:44:41 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 06:44:41 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 06 06:44:41 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 06:44:41 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 06:44:41 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 06:44:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:41.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:44:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:44:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:44:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:44:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:44:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:44:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:44:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:43.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:44:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:43.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:45 compute-0 groupadd[184015]: group added to /etc/group: name=dnsmasq, GID=991
Dec 06 06:44:45 compute-0 groupadd[184015]: group added to /etc/gshadow: name=dnsmasq
Dec 06 06:44:45 compute-0 groupadd[184015]: new group: name=dnsmasq, GID=991
Dec 06 06:44:45 compute-0 useradd[184022]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 06 06:44:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:45.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:45 compute-0 dbus-broker-launch[739]: Noticed file-system modification, trigger reload.
Dec 06 06:44:45 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 06 06:44:45 compute-0 dbus-broker-launch[739]: Noticed file-system modification, trigger reload.
Dec 06 06:44:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:44:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:45.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:44:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:46 compute-0 ceph-mon[74339]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:47 compute-0 groupadd[184036]: group added to /etc/group: name=clevis, GID=990
Dec 06 06:44:47 compute-0 groupadd[184036]: group added to /etc/gshadow: name=clevis
Dec 06 06:44:47 compute-0 groupadd[184036]: new group: name=clevis, GID=990
Dec 06 06:44:47 compute-0 useradd[184043]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 06 06:44:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:47.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:47 compute-0 usermod[184053]: add 'clevis' to group 'tss'
Dec 06 06:44:47 compute-0 usermod[184053]: add 'clevis' to shadow group 'tss'
Dec 06 06:44:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000020s ======
Dec 06 06:44:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:47.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Dec 06 06:44:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:48 compute-0 ceph-mon[74339]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:48 compute-0 ceph-mon[74339]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:49.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:49.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:50 compute-0 ceph-mon[74339]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:51.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:51.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:52 compute-0 polkitd[43458]: Reloading rules
Dec 06 06:44:52 compute-0 polkitd[43458]: Collecting garbage unconditionally...
Dec 06 06:44:52 compute-0 polkitd[43458]: Loading rules from directory /etc/polkit-1/rules.d
Dec 06 06:44:52 compute-0 polkitd[43458]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 06 06:44:52 compute-0 polkitd[43458]: Finished loading, compiling and executing 3 rules
Dec 06 06:44:52 compute-0 polkitd[43458]: Reloading rules
Dec 06 06:44:52 compute-0 polkitd[43458]: Collecting garbage unconditionally...
Dec 06 06:44:52 compute-0 polkitd[43458]: Loading rules from directory /etc/polkit-1/rules.d
Dec 06 06:44:52 compute-0 polkitd[43458]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 06 06:44:52 compute-0 polkitd[43458]: Finished loading, compiling and executing 3 rules
Dec 06 06:44:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:53.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:53.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:54 compute-0 podman[184135]: 2025-12-06 06:44:54.499991274 +0000 UTC m=+0.137175609 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 06:44:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:55.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:44:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:55.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:44:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:44:56 compute-0 groupadd[184270]: group added to /etc/group: name=ceph, GID=167
Dec 06 06:44:56 compute-0 groupadd[184270]: group added to /etc/gshadow: name=ceph
Dec 06 06:44:56 compute-0 groupadd[184270]: new group: name=ceph, GID=167
Dec 06 06:44:56 compute-0 useradd[184276]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Dec 06 06:44:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:56 compute-0 ceph-mon[74339]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:57.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:44:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:57.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:44:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:44:59 compute-0 sudo[184851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:59 compute-0 sudo[184851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:59 compute-0 sudo[184851]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:59 compute-0 sudo[184914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:44:59 compute-0 sudo[184914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:44:59 compute-0 sudo[184914]: pam_unix(sudo:session): session closed for user root
Dec 06 06:44:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:44:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:44:59.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:44:59 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Dec 06 06:44:59 compute-0 sshd[1006]: Received signal 15; terminating.
Dec 06 06:44:59 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Dec 06 06:44:59 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Dec 06 06:44:59 compute-0 systemd[1]: sshd.service: Consumed 3.725s CPU time, read 32.0K from disk, written 36.0K to disk.
Dec 06 06:44:59 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Dec 06 06:44:59 compute-0 systemd[1]: Stopping sshd-keygen.target...
Dec 06 06:44:59 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 06:44:59 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 06:44:59 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 06:44:59 compute-0 systemd[1]: Reached target sshd-keygen.target.
Dec 06 06:44:59 compute-0 systemd[1]: Starting OpenSSH server daemon...
Dec 06 06:44:59 compute-0 sshd[184953]: Server listening on 0.0.0.0 port 22.
Dec 06 06:44:59 compute-0 sshd[184953]: Server listening on :: port 22.
Dec 06 06:44:59 compute-0 systemd[1]: Started OpenSSH server daemon.
Dec 06 06:44:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:44:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:44:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:44:59.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:00 compute-0 ceph-mon[74339]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:00 compute-0 ceph-mon[74339]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:00 compute-0 ceph-mon[74339]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 06:45:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 06 06:45:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:45:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:01.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:01 compute-0 systemd[1]: Reloading.
Dec 06 06:45:01 compute-0 systemd-rc-local-generator[185214]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:45:01 compute-0 systemd-sysv-generator[185217]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:45:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:45:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:01.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:02 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 06:45:02 compute-0 auditd[702]: Audit daemon rotating log files
Dec 06 06:45:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:03.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:45:03.790 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:45:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:45:03.792 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:45:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:45:03.793 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:45:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:45:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:03.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:45:04 compute-0 ceph-mon[74339]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:04 compute-0 ceph-mon[74339]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:05 compute-0 sudo[164763]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:05 compute-0 podman[188544]: 2025-12-06 06:45:05.310929552 +0000 UTC m=+0.069745707 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 06:45:05 compute-0 ceph-mon[74339]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:05 compute-0 ceph-mon[74339]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:45:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:05.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:05.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:07 compute-0 ceph-mon[74339]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:07.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:07.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:45:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:09.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:45:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:09.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:11 compute-0 ceph-mon[74339]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:45:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:11.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:45:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:11.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:12 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 06:45:12 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 06 06:45:12 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.289s CPU time.
Dec 06 06:45:12 compute-0 systemd[1]: run-rc9681e85813d42a798d67bf17895bdea.service: Deactivated successfully.
Dec 06 06:45:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:45:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:45:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:45:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:45:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:45:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:45:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:13.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:45:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:13.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:45:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:15 compute-0 ceph-mon[74339]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:45:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:15.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:15.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:17 compute-0 ceph-mon[74339]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:17 compute-0 ceph-mon[74339]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:17.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:45:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:17.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:45:18
Dec 06 06:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms']
Dec 06 06:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:45:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:18 compute-0 ceph-mon[74339]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:19 compute-0 sudo[193653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:19 compute-0 sudo[193653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:19 compute-0 sudo[193653]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:19 compute-0 sudo[193678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:19 compute-0 sudo[193678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:19 compute-0 sudo[193678]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:45:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:19.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:45:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:19.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:20 compute-0 ceph-mon[74339]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:21.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:21.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:45:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3211 writes, 14K keys, 3209 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3211 writes, 3209 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1239 writes, 5043 keys, 1237 commit groups, 1.0 writes per commit group, ingest: 8.65 MB, 0.01 MB/s
                                           Interval WAL: 1239 writes, 1237 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     91.9      0.16              0.05         5    0.033       0      0       0.0       0.0
                                             L6      1/0    9.51 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.3    116.5    100.5      0.35              0.11         4    0.087     17K   1817       0.0       0.0
                                            Sum      1/0    9.51 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3     79.1     97.7      0.51              0.16         9    0.057     17K   1817       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     94.0     98.0      0.34              0.11         6    0.057     14K   1524       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    116.5    100.5      0.35              0.11         4    0.087     17K   1817       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     93.8      0.16              0.05         4    0.040       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.03 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 1.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(70,1.22 MB,0.402722%) FilterBlock(10,59.05 KB,0.0189681%) IndexBlock(10,121.59 KB,0.0390605%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 06:45:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:45:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:23.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:45:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:23.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:24 compute-0 ceph-mon[74339]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:45:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:45:25 compute-0 podman[193706]: 2025-12-06 06:45:25.446551906 +0000 UTC m=+0.101635165 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:45:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:45:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:25.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:25.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:27 compute-0 ceph-mon[74339]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:27 compute-0 ceph-mon[74339]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:45:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:27.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:27.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:29.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:29.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:31 compute-0 ceph-mon[74339]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:31 compute-0 ceph-mon[74339]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:31.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:45:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:31.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:33.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:33.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:35.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:35 compute-0 ceph-mon[74339]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:45:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:35.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:36 compute-0 podman[193739]: 2025-12-06 06:45:36.404122447 +0000 UTC m=+0.063034069 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent)
Dec 06 06:45:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:37 compute-0 ceph-mon[74339]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:37 compute-0 ceph-mon[74339]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:37 compute-0 ceph-mon[74339]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:37.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:45:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:37.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:39 compute-0 ceph-mon[74339]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:39 compute-0 sudo[193760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:39 compute-0 sudo[193760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:39 compute-0 sudo[193760]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:39 compute-0 sudo[193764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:39 compute-0 sudo[193764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:39 compute-0 sudo[193764]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:39 compute-0 sudo[193811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:39 compute-0 sudo[193811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:39 compute-0 sudo[193811]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:39 compute-0 sudo[193810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:45:39 compute-0 sudo[193810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:39 compute-0 sudo[193810]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:39 compute-0 sudo[193860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:39 compute-0 sudo[193860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:39 compute-0 sudo[193860]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:39 compute-0 sudo[193885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:45:39 compute-0 sudo[193885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:39.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:45:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:39.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:45:40 compute-0 podman[193982]: 2025-12-06 06:45:40.115978754 +0000 UTC m=+0.068855607 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 06:45:40 compute-0 podman[193982]: 2025-12-06 06:45:40.220287422 +0000 UTC m=+0.173164305 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 06:45:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:40 compute-0 podman[194136]: 2025-12-06 06:45:40.832811671 +0000 UTC m=+0.063323196 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:45:40 compute-0 podman[194136]: 2025-12-06 06:45:40.842595403 +0000 UTC m=+0.073106918 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:45:41 compute-0 podman[194204]: 2025-12-06 06:45:41.057940908 +0000 UTC m=+0.052550029 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.openshift.expose-services=, release=1793, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, description=keepalived for Ceph, name=keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20)
Dec 06 06:45:41 compute-0 podman[194204]: 2025-12-06 06:45:41.074611554 +0000 UTC m=+0.069220685 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, architecture=x86_64, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vcs-type=git, com.redhat.component=keepalived-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9)
Dec 06 06:45:41 compute-0 sudo[193885]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:45:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:41.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:41.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:45:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:45:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:45:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:45:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:45:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:45:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:43.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:44.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:45.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:46.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:46 compute-0 ceph-mon[74339]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:45:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:45:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:47.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:48.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:48 compute-0 ceph-mon[74339]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:48 compute-0 ceph-mon[74339]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:48 compute-0 ceph-mon[74339]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:45:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:45:48 compute-0 sudo[194241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:48 compute-0 sudo[194241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:48 compute-0 sudo[194241]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:48 compute-0 sudo[194266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:45:48 compute-0 sudo[194266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:48 compute-0 sudo[194266]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:49 compute-0 sudo[194291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:49 compute-0 sudo[194291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:49 compute-0 sudo[194291]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:49 compute-0 sudo[194316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:45:49 compute-0 sudo[194316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:49 compute-0 sudo[194316]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:45:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:45:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:45:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:45:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:45:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:45:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3cdc2655-5854-4beb-93ea-1332233f0d1e does not exist
Dec 06 06:45:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 97bd8955-1048-4e09-9410-bf30a34842f8 does not exist
Dec 06 06:45:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d404b2f1-e723-4ec4-a464-db1803a11166 does not exist
Dec 06 06:45:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:45:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:45:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:45:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:45:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:45:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:45:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:45:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:49.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:45:49 compute-0 sudo[194371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:49 compute-0 sudo[194371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:49 compute-0 sudo[194371]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:49 compute-0 sudo[194396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:45:49 compute-0 sudo[194396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:49 compute-0 sudo[194396]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:49 compute-0 ceph-mon[74339]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:45:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:45:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:45:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:45:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:45:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:45:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:45:49 compute-0 sudo[194421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:49 compute-0 sudo[194421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:49 compute-0 sudo[194421]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:49 compute-0 sudo[194446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:45:49 compute-0 sudo[194446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:45:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:50.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:45:50 compute-0 podman[194510]: 2025-12-06 06:45:50.344354616 +0000 UTC m=+0.088653117 container create f1f9300e4e784767f3d0d9312f2c14efe0d36228339dfaa335bf5caab28eae7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_archimedes, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:45:50 compute-0 podman[194510]: 2025-12-06 06:45:50.284054033 +0000 UTC m=+0.028352554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:45:50 compute-0 systemd[1]: Started libpod-conmon-f1f9300e4e784767f3d0d9312f2c14efe0d36228339dfaa335bf5caab28eae7a.scope.
Dec 06 06:45:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:45:50 compute-0 podman[194510]: 2025-12-06 06:45:50.448482779 +0000 UTC m=+0.192781290 container init f1f9300e4e784767f3d0d9312f2c14efe0d36228339dfaa335bf5caab28eae7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_archimedes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:45:50 compute-0 podman[194510]: 2025-12-06 06:45:50.457221706 +0000 UTC m=+0.201520197 container start f1f9300e4e784767f3d0d9312f2c14efe0d36228339dfaa335bf5caab28eae7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_archimedes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:45:50 compute-0 podman[194510]: 2025-12-06 06:45:50.460617177 +0000 UTC m=+0.204915678 container attach f1f9300e4e784767f3d0d9312f2c14efe0d36228339dfaa335bf5caab28eae7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:45:50 compute-0 sharp_archimedes[194527]: 167 167
Dec 06 06:45:50 compute-0 systemd[1]: libpod-f1f9300e4e784767f3d0d9312f2c14efe0d36228339dfaa335bf5caab28eae7a.scope: Deactivated successfully.
Dec 06 06:45:50 compute-0 podman[194510]: 2025-12-06 06:45:50.466982969 +0000 UTC m=+0.211281460 container died f1f9300e4e784767f3d0d9312f2c14efe0d36228339dfaa335bf5caab28eae7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_archimedes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 06:45:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-441b1919d0efe023042164f0d21f0e4f15aca5ead168c487925a1eeb71c1449c-merged.mount: Deactivated successfully.
Dec 06 06:45:50 compute-0 podman[194510]: 2025-12-06 06:45:50.513153325 +0000 UTC m=+0.257451816 container remove f1f9300e4e784767f3d0d9312f2c14efe0d36228339dfaa335bf5caab28eae7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_archimedes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 06:45:50 compute-0 systemd[1]: libpod-conmon-f1f9300e4e784767f3d0d9312f2c14efe0d36228339dfaa335bf5caab28eae7a.scope: Deactivated successfully.
Dec 06 06:45:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:50 compute-0 podman[194550]: 2025-12-06 06:45:50.699470901 +0000 UTC m=+0.057156089 container create 8ebd9f8f54d991eb20154f330b1fda3f5aca04139305ceba77de8dd8b35203fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:45:50 compute-0 systemd[1]: Started libpod-conmon-8ebd9f8f54d991eb20154f330b1fda3f5aca04139305ceba77de8dd8b35203fd.scope.
Dec 06 06:45:50 compute-0 podman[194550]: 2025-12-06 06:45:50.67965702 +0000 UTC m=+0.037342228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:45:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fa13d8e383eb2073037dbe7b608a8a1f3d38bd02907c462ff7273a03aac8168/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fa13d8e383eb2073037dbe7b608a8a1f3d38bd02907c462ff7273a03aac8168/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fa13d8e383eb2073037dbe7b608a8a1f3d38bd02907c462ff7273a03aac8168/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fa13d8e383eb2073037dbe7b608a8a1f3d38bd02907c462ff7273a03aac8168/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fa13d8e383eb2073037dbe7b608a8a1f3d38bd02907c462ff7273a03aac8168/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:50 compute-0 podman[194550]: 2025-12-06 06:45:50.815672451 +0000 UTC m=+0.173357659 container init 8ebd9f8f54d991eb20154f330b1fda3f5aca04139305ceba77de8dd8b35203fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 06:45:50 compute-0 podman[194550]: 2025-12-06 06:45:50.822667877 +0000 UTC m=+0.180353065 container start 8ebd9f8f54d991eb20154f330b1fda3f5aca04139305ceba77de8dd8b35203fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 06:45:50 compute-0 podman[194550]: 2025-12-06 06:45:50.826560169 +0000 UTC m=+0.184245347 container attach 8ebd9f8f54d991eb20154f330b1fda3f5aca04139305ceba77de8dd8b35203fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 06:45:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:51.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:51 compute-0 objective_ganguly[194566]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:45:51 compute-0 objective_ganguly[194566]: --> relative data size: 1.0
Dec 06 06:45:51 compute-0 objective_ganguly[194566]: --> All data devices are unavailable
Dec 06 06:45:51 compute-0 systemd[1]: libpod-8ebd9f8f54d991eb20154f330b1fda3f5aca04139305ceba77de8dd8b35203fd.scope: Deactivated successfully.
Dec 06 06:45:51 compute-0 podman[194550]: 2025-12-06 06:45:51.826426069 +0000 UTC m=+1.184111257 container died 8ebd9f8f54d991eb20154f330b1fda3f5aca04139305ceba77de8dd8b35203fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 06:45:51 compute-0 ceph-mon[74339]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.835516) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551835657, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1638, "num_deletes": 250, "total_data_size": 3015764, "memory_usage": 3065264, "flush_reason": "Manual Compaction"}
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551856331, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 2959024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13142, "largest_seqno": 14779, "table_properties": {"data_size": 2951579, "index_size": 4452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14096, "raw_average_key_size": 18, "raw_value_size": 2936758, "raw_average_value_size": 3833, "num_data_blocks": 201, "num_entries": 766, "num_filter_entries": 766, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003346, "oldest_key_time": 1765003346, "file_creation_time": 1765003551, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 20978 microseconds, and 9291 cpu microseconds.
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.856482) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 2959024 bytes OK
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.856517) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.858575) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.858611) EVENT_LOG_v1 {"time_micros": 1765003551858602, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.858646) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3008968, prev total WAL file size 3008968, number of live WAL files 2.
Dec 06 06:45:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fa13d8e383eb2073037dbe7b608a8a1f3d38bd02907c462ff7273a03aac8168-merged.mount: Deactivated successfully.
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.862507) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(2889KB)], [29(9734KB)]
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551862595, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 12927097, "oldest_snapshot_seqno": -1}
Dec 06 06:45:51 compute-0 podman[194550]: 2025-12-06 06:45:51.894662059 +0000 UTC m=+1.252347257 container remove 8ebd9f8f54d991eb20154f330b1fda3f5aca04139305ceba77de8dd8b35203fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:45:51 compute-0 systemd[1]: libpod-conmon-8ebd9f8f54d991eb20154f330b1fda3f5aca04139305ceba77de8dd8b35203fd.scope: Deactivated successfully.
Dec 06 06:45:51 compute-0 sudo[194446]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4700 keys, 12379566 bytes, temperature: kUnknown
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551961382, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 12379566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12341705, "index_size": 25015, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 115933, "raw_average_key_size": 24, "raw_value_size": 12250492, "raw_average_value_size": 2606, "num_data_blocks": 1062, "num_entries": 4700, "num_filter_entries": 4700, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765003551, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.961697) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12379566 bytes
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.964058) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.7 rd, 125.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.5 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(8.6) write-amplify(4.2) OK, records in: 5213, records dropped: 513 output_compression: NoCompression
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.964080) EVENT_LOG_v1 {"time_micros": 1765003551964071, "job": 12, "event": "compaction_finished", "compaction_time_micros": 98895, "compaction_time_cpu_micros": 36840, "output_level": 6, "num_output_files": 1, "total_output_size": 12379566, "num_input_records": 5213, "num_output_records": 4700, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551964680, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003551966398, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.860538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.966517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.966524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.966526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.966527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:45:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:45:51.966529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:45:52 compute-0 sudo[194592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:52 compute-0 sudo[194592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:45:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:52.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:45:52 compute-0 sudo[194592]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:52 compute-0 sudo[194617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:45:52 compute-0 sudo[194617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:52 compute-0 sudo[194617]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:52 compute-0 sudo[194642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:52 compute-0 sudo[194642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:52 compute-0 sudo[194642]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:52 compute-0 sudo[194667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:45:52 compute-0 sudo[194667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:52 compute-0 podman[194733]: 2025-12-06 06:45:52.595837081 +0000 UTC m=+0.049547700 container create 25b96de6e4c04c36fa08fe747488b493df44199ee8e13ba86cca674324798f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gates, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 06:45:52 compute-0 systemd[1]: Started libpod-conmon-25b96de6e4c04c36fa08fe747488b493df44199ee8e13ba86cca674324798f78.scope.
Dec 06 06:45:52 compute-0 podman[194733]: 2025-12-06 06:45:52.572535778 +0000 UTC m=+0.026246457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:45:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:45:52 compute-0 podman[194733]: 2025-12-06 06:45:52.703709636 +0000 UTC m=+0.157420275 container init 25b96de6e4c04c36fa08fe747488b493df44199ee8e13ba86cca674324798f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gates, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec 06 06:45:52 compute-0 podman[194733]: 2025-12-06 06:45:52.711950724 +0000 UTC m=+0.165661363 container start 25b96de6e4c04c36fa08fe747488b493df44199ee8e13ba86cca674324798f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gates, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 06:45:52 compute-0 podman[194733]: 2025-12-06 06:45:52.7163742 +0000 UTC m=+0.170084819 container attach 25b96de6e4c04c36fa08fe747488b493df44199ee8e13ba86cca674324798f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gates, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:45:52 compute-0 eager_gates[194749]: 167 167
Dec 06 06:45:52 compute-0 systemd[1]: libpod-25b96de6e4c04c36fa08fe747488b493df44199ee8e13ba86cca674324798f78.scope: Deactivated successfully.
Dec 06 06:45:52 compute-0 podman[194733]: 2025-12-06 06:45:52.717872961 +0000 UTC m=+0.171583580 container died 25b96de6e4c04c36fa08fe747488b493df44199ee8e13ba86cca674324798f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gates, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:45:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fbf3af42e6a906ef880cf780f7966962f71dcaa43a01a6203637e344b6d7a65-merged.mount: Deactivated successfully.
Dec 06 06:45:52 compute-0 podman[194733]: 2025-12-06 06:45:52.761055303 +0000 UTC m=+0.214765962 container remove 25b96de6e4c04c36fa08fe747488b493df44199ee8e13ba86cca674324798f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gates, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 06:45:52 compute-0 systemd[1]: libpod-conmon-25b96de6e4c04c36fa08fe747488b493df44199ee8e13ba86cca674324798f78.scope: Deactivated successfully.
Dec 06 06:45:52 compute-0 podman[194773]: 2025-12-06 06:45:52.944083399 +0000 UTC m=+0.045796448 container create 4b34374b026272fa547320aa0132b778eaa92bdcbb78fa4d519193c4be720c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatelet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:45:52 compute-0 systemd[1]: Started libpod-conmon-4b34374b026272fa547320aa0132b778eaa92bdcbb78fa4d519193c4be720c7a.scope.
Dec 06 06:45:53 compute-0 podman[194773]: 2025-12-06 06:45:52.924908606 +0000 UTC m=+0.026621685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:45:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1939c0deadb426bdcc5a81cced12e002792980f78d71a89cec56a658c6282c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1939c0deadb426bdcc5a81cced12e002792980f78d71a89cec56a658c6282c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1939c0deadb426bdcc5a81cced12e002792980f78d71a89cec56a658c6282c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1939c0deadb426bdcc5a81cced12e002792980f78d71a89cec56a658c6282c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:53 compute-0 podman[194773]: 2025-12-06 06:45:53.039797373 +0000 UTC m=+0.141510442 container init 4b34374b026272fa547320aa0132b778eaa92bdcbb78fa4d519193c4be720c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatelet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:45:53 compute-0 podman[194773]: 2025-12-06 06:45:53.046416805 +0000 UTC m=+0.148129854 container start 4b34374b026272fa547320aa0132b778eaa92bdcbb78fa4d519193c4be720c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 06:45:53 compute-0 podman[194773]: 2025-12-06 06:45:53.050156157 +0000 UTC m=+0.151869196 container attach 4b34374b026272fa547320aa0132b778eaa92bdcbb78fa4d519193c4be720c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatelet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 06:45:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:53.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:53 compute-0 angry_chatelet[194790]: {
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:     "0": [
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:         {
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             "devices": [
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "/dev/loop3"
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             ],
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             "lv_name": "ceph_lv0",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             "lv_size": "7511998464",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             "name": "ceph_lv0",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             "tags": {
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "ceph.cluster_name": "ceph",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "ceph.crush_device_class": "",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "ceph.encrypted": "0",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "ceph.osd_id": "0",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "ceph.type": "block",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:                 "ceph.vdo": "0"
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             },
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             "type": "block",
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:             "vg_name": "ceph_vg0"
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:         }
Dec 06 06:45:53 compute-0 angry_chatelet[194790]:     ]
Dec 06 06:45:53 compute-0 angry_chatelet[194790]: }
Dec 06 06:45:53 compute-0 systemd[1]: libpod-4b34374b026272fa547320aa0132b778eaa92bdcbb78fa4d519193c4be720c7a.scope: Deactivated successfully.
Dec 06 06:45:53 compute-0 podman[194773]: 2025-12-06 06:45:53.962013778 +0000 UTC m=+1.063726827 container died 4b34374b026272fa547320aa0132b778eaa92bdcbb78fa4d519193c4be720c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatelet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:45:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1939c0deadb426bdcc5a81cced12e002792980f78d71a89cec56a658c6282c3-merged.mount: Deactivated successfully.
Dec 06 06:45:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:54.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:54 compute-0 podman[194773]: 2025-12-06 06:45:54.023451523 +0000 UTC m=+1.125164572 container remove 4b34374b026272fa547320aa0132b778eaa92bdcbb78fa4d519193c4be720c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chatelet, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 06:45:54 compute-0 systemd[1]: libpod-conmon-4b34374b026272fa547320aa0132b778eaa92bdcbb78fa4d519193c4be720c7a.scope: Deactivated successfully.
Dec 06 06:45:54 compute-0 sudo[194667]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:54 compute-0 sudo[194810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:54 compute-0 sudo[194810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:54 compute-0 sudo[194810]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:54 compute-0 sudo[194835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:45:54 compute-0 sudo[194835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:54 compute-0 sudo[194835]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:54 compute-0 sudo[194860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:45:54 compute-0 sudo[194860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:54 compute-0 sudo[194860]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:54 compute-0 sudo[194885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:45:54 compute-0 sudo[194885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:45:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:54 compute-0 podman[194952]: 2025-12-06 06:45:54.660598712 +0000 UTC m=+0.037562901 container create 26c67aec057fc6ab642cb679c01915eb3c60fc7f21574184d571e329f2be371a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lumiere, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:45:54 compute-0 systemd[1]: Started libpod-conmon-26c67aec057fc6ab642cb679c01915eb3c60fc7f21574184d571e329f2be371a.scope.
Dec 06 06:45:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:45:54 compute-0 podman[194952]: 2025-12-06 06:45:54.736734564 +0000 UTC m=+0.113698763 container init 26c67aec057fc6ab642cb679c01915eb3c60fc7f21574184d571e329f2be371a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:45:54 compute-0 podman[194952]: 2025-12-06 06:45:54.644752 +0000 UTC m=+0.021716209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:45:54 compute-0 podman[194952]: 2025-12-06 06:45:54.745759978 +0000 UTC m=+0.122724167 container start 26c67aec057fc6ab642cb679c01915eb3c60fc7f21574184d571e329f2be371a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:45:54 compute-0 podman[194952]: 2025-12-06 06:45:54.749458147 +0000 UTC m=+0.126422336 container attach 26c67aec057fc6ab642cb679c01915eb3c60fc7f21574184d571e329f2be371a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lumiere, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:45:54 compute-0 peaceful_lumiere[194968]: 167 167
Dec 06 06:45:54 compute-0 podman[194952]: 2025-12-06 06:45:54.750363628 +0000 UTC m=+0.127327817 container died 26c67aec057fc6ab642cb679c01915eb3c60fc7f21574184d571e329f2be371a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lumiere, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:45:54 compute-0 systemd[1]: libpod-26c67aec057fc6ab642cb679c01915eb3c60fc7f21574184d571e329f2be371a.scope: Deactivated successfully.
Dec 06 06:45:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-949c689d1dcc1d58b95127d6710b35c832a0eba5fbdc74c15d012eee05fe8b51-merged.mount: Deactivated successfully.
Dec 06 06:45:54 compute-0 podman[194952]: 2025-12-06 06:45:54.790652236 +0000 UTC m=+0.167616425 container remove 26c67aec057fc6ab642cb679c01915eb3c60fc7f21574184d571e329f2be371a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_lumiere, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:45:54 compute-0 systemd[1]: libpod-conmon-26c67aec057fc6ab642cb679c01915eb3c60fc7f21574184d571e329f2be371a.scope: Deactivated successfully.
Dec 06 06:45:54 compute-0 podman[194991]: 2025-12-06 06:45:54.96985289 +0000 UTC m=+0.040553315 container create cd49dd4777b85a4312d1b6264c221b353ed40b800286614c0e42f6e4a042604a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:45:55 compute-0 systemd[1]: Started libpod-conmon-cd49dd4777b85a4312d1b6264c221b353ed40b800286614c0e42f6e4a042604a.scope.
Dec 06 06:45:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5024b1be03f9107d2a9391d0f1b4bec33c88fa2a19e96c5b20661d2f79ef391/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5024b1be03f9107d2a9391d0f1b4bec33c88fa2a19e96c5b20661d2f79ef391/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5024b1be03f9107d2a9391d0f1b4bec33c88fa2a19e96c5b20661d2f79ef391/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5024b1be03f9107d2a9391d0f1b4bec33c88fa2a19e96c5b20661d2f79ef391/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:45:55 compute-0 podman[194991]: 2025-12-06 06:45:55.043564539 +0000 UTC m=+0.114264994 container init cd49dd4777b85a4312d1b6264c221b353ed40b800286614c0e42f6e4a042604a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 06:45:55 compute-0 podman[194991]: 2025-12-06 06:45:54.951582526 +0000 UTC m=+0.022282961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:45:55 compute-0 podman[194991]: 2025-12-06 06:45:55.050229633 +0000 UTC m=+0.120930058 container start cd49dd4777b85a4312d1b6264c221b353ed40b800286614c0e42f6e4a042604a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 06:45:55 compute-0 podman[194991]: 2025-12-06 06:45:55.053395982 +0000 UTC m=+0.124096407 container attach cd49dd4777b85a4312d1b6264c221b353ed40b800286614c0e42f6e4a042604a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_aryabhata, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:45:55 compute-0 ceph-mon[74339]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:55.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:55 compute-0 hungry_aryabhata[195008]: {
Dec 06 06:45:55 compute-0 hungry_aryabhata[195008]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:45:55 compute-0 hungry_aryabhata[195008]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:45:55 compute-0 hungry_aryabhata[195008]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:45:55 compute-0 hungry_aryabhata[195008]:         "osd_id": 0,
Dec 06 06:45:55 compute-0 hungry_aryabhata[195008]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:45:55 compute-0 hungry_aryabhata[195008]:         "type": "bluestore"
Dec 06 06:45:55 compute-0 hungry_aryabhata[195008]:     }
Dec 06 06:45:55 compute-0 hungry_aryabhata[195008]: }
Dec 06 06:45:55 compute-0 systemd[1]: libpod-cd49dd4777b85a4312d1b6264c221b353ed40b800286614c0e42f6e4a042604a.scope: Deactivated successfully.
Dec 06 06:45:55 compute-0 podman[194991]: 2025-12-06 06:45:55.936091505 +0000 UTC m=+1.006791970 container died cd49dd4777b85a4312d1b6264c221b353ed40b800286614c0e42f6e4a042604a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 06:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5024b1be03f9107d2a9391d0f1b4bec33c88fa2a19e96c5b20661d2f79ef391-merged.mount: Deactivated successfully.
Dec 06 06:45:55 compute-0 podman[194991]: 2025-12-06 06:45:55.996307792 +0000 UTC m=+1.067008217 container remove cd49dd4777b85a4312d1b6264c221b353ed40b800286614c0e42f6e4a042604a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_aryabhata, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:45:56 compute-0 systemd[1]: libpod-conmon-cd49dd4777b85a4312d1b6264c221b353ed40b800286614c0e42f6e4a042604a.scope: Deactivated successfully.
Dec 06 06:45:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:56.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:56 compute-0 sudo[194885]: pam_unix(sudo:session): session closed for user root
Dec 06 06:45:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:45:56 compute-0 podman[195029]: 2025-12-06 06:45:56.102663086 +0000 UTC m=+0.138123949 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 06:45:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:45:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:57.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:45:58.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:45:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:45:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:45:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:45:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:45:59.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000022s ======
Dec 06 06:46:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:00.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec 06 06:46:00 compute-0 ceph-mon[74339]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:46:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:46:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:01.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:02.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:03.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:46:03.792 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:46:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:46:03.793 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:46:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:46:03.794 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:46:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000022s ======
Dec 06 06:46:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:04.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec 06 06:46:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:05.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:06.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:07 compute-0 podman[195072]: 2025-12-06 06:46:07.429090933 +0000 UTC m=+0.089661144 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 06:46:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:07.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:08.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:08 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:46:08 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 68f567af-2baa-430d-a3fe-01d031e93ad0 does not exist
Dec 06 06:46:08 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 62a35ece-d3a3-4763-8c1d-6e2373b8bdb7 does not exist
Dec 06 06:46:08 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0233fa68-e712-458f-ad1e-b0509171b07a does not exist
Dec 06 06:46:08 compute-0 sudo[195091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:46:08 compute-0 sudo[195092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:46:08 compute-0 sudo[195091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:46:08 compute-0 sudo[195092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:46:08 compute-0 sudo[195091]: pam_unix(sudo:session): session closed for user root
Dec 06 06:46:08 compute-0 sudo[195092]: pam_unix(sudo:session): session closed for user root
Dec 06 06:46:08 compute-0 sudo[195141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:46:08 compute-0 sudo[195142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:46:08 compute-0 sudo[195141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:46:08 compute-0 sudo[195142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:46:08 compute-0 sudo[195141]: pam_unix(sudo:session): session closed for user root
Dec 06 06:46:08 compute-0 sudo[195142]: pam_unix(sudo:session): session closed for user root
Dec 06 06:46:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:08 compute-0 ceph-mon[74339]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:08 compute-0 ceph-mon[74339]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:46:08 compute-0 ceph-mon[74339]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:09.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:10.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:10 compute-0 ceph-mon[74339]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:10 compute-0 ceph-mon[74339]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:10 compute-0 ceph-mon[74339]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:46:10 compute-0 ceph-mon[74339]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:11.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:46:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:12.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:46:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:46:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:46:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:46:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:46:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:46:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:46:13 compute-0 ceph-mon[74339]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:13.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:14.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:15 compute-0 ceph-mon[74339]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:15 compute-0 ceph-mon[74339]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:15.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:46:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:16.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:46:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:17.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:17 compute-0 ceph-mon[74339]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:46:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:18.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:46:18
Dec 06 06:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', 'volumes', 'default.rgw.log', 'vms', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Dec 06 06:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:46:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:19.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:20.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:21.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:22.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:22 compute-0 ceph-mon[74339]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:46:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:23.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:24.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:25 compute-0 ceph-mon[74339]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:25 compute-0 ceph-mon[74339]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:25 compute-0 sshd-session[158269]: Received disconnect from 192.168.122.30 port 35302:11: disconnected by user
Dec 06 06:46:25 compute-0 sshd-session[158269]: Disconnected from user zuul 192.168.122.30 port 35302
Dec 06 06:46:25 compute-0 sshd-session[158266]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:46:25 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Dec 06 06:46:25 compute-0 systemd[1]: session-49.scope: Consumed 2min 8.542s CPU time.
Dec 06 06:46:25 compute-0 systemd-logind[798]: Session 49 logged out. Waiting for processes to exit.
Dec 06 06:46:25 compute-0 systemd-logind[798]: Removed session 49.
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:46:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:46:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:25.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:26.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:26 compute-0 podman[195200]: 2025-12-06 06:46:26.457176013 +0000 UTC m=+0.098873073 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 06:46:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:46:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:27.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:46:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:28.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:28 compute-0 sudo[195229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:46:28 compute-0 sudo[195229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:46:28 compute-0 sudo[195229]: pam_unix(sudo:session): session closed for user root
Dec 06 06:46:28 compute-0 sudo[195254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:46:28 compute-0 sudo[195254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:46:28 compute-0 sudo[195254]: pam_unix(sudo:session): session closed for user root
Dec 06 06:46:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:29 compute-0 ceph-mon[74339]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:29.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:30.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:46:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:31.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:46:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:32.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:33.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:34.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:46:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:35.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:46:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:36.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:36 compute-0 ceph-mon[74339]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:36 compute-0 ceph-mon[74339]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:46:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:37.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:46:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:38.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:38 compute-0 podman[195283]: 2025-12-06 06:46:38.401204676 +0000 UTC m=+0.060854984 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Dec 06 06:46:38 compute-0 ceph-mon[74339]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:38 compute-0 ceph-mon[74339]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:38 compute-0 ceph-mon[74339]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:38 compute-0 ceph-mon[74339]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:39.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:46:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:40.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:46:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:41.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:46:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:46:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:46:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:46:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:46:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:46:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:46:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:46:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000022s ======
Dec 06 06:46:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:43.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Dec 06 06:46:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:46:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:46:44 compute-0 ceph-mon[74339]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:45.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:45 compute-0 ceph-mon[74339]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:45 compute-0 ceph-mon[74339]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:45 compute-0 ceph-mon[74339]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:47 compute-0 ceph-mon[74339]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:47.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:48.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:48 compute-0 sudo[195308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:46:48 compute-0 sudo[195308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:46:48 compute-0 sudo[195308]: pam_unix(sudo:session): session closed for user root
Dec 06 06:46:48 compute-0 sudo[195333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:46:48 compute-0 sudo[195333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:46:48 compute-0 sudo[195333]: pam_unix(sudo:session): session closed for user root
Dec 06 06:46:49 compute-0 ceph-mon[74339]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:49.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:50.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:51 compute-0 ceph-mon[74339]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:51.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:52.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:53 compute-0 ceph-mon[74339]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000021s ======
Dec 06 06:46:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:54.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000021s
Dec 06 06:46:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:55 compute-0 ceph-mon[74339]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:55.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:56.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:46:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:57 compute-0 podman[195362]: 2025-12-06 06:46:57.479215029 +0000 UTC m=+0.140711595 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:46:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:57.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:46:58.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:46:58 compute-0 ceph-mon[74339]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:59 compute-0 ceph-mon[74339]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:46:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:46:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:46:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:46:59.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:00.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:00 compute-0 ceph-mon[74339]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000182s ======
Dec 06 06:47:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:01.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000182s
Dec 06 06:47:01 compute-0 sshd-session[195390]: Accepted publickey for zuul from 192.168.122.30 port 38300 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:47:01 compute-0 systemd-logind[798]: New session 50 of user zuul.
Dec 06 06:47:01 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec 06 06:47:01 compute-0 sshd-session[195390]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:47:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:02.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:02 compute-0 sudo[195519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwctlujxcojovxizyafaclfqdsmbnakp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003621.9856493-973-217683693176745/AnsiballZ_systemd.py'
Dec 06 06:47:02 compute-0 sudo[195519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:02 compute-0 python3.9[195522]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:47:02 compute-0 systemd[1]: Reloading.
Dec 06 06:47:02 compute-0 systemd-sysv-generator[195555]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:02 compute-0 systemd-rc-local-generator[195552]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:03 compute-0 sudo[195519]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:03 compute-0 sudo[195710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btvkjrlndhvmkqmggzcibsllwcwizcth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003623.2596736-973-130099011680277/AnsiballZ_systemd.py'
Dec 06 06:47:03 compute-0 sudo[195710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:03 compute-0 ceph-mon[74339]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:47:03.793 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:47:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:47:03.794 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:47:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:47:03.795 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:47:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000182s ======
Dec 06 06:47:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:03.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000182s
Dec 06 06:47:03 compute-0 python3.9[195712]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:47:03 compute-0 systemd[1]: Reloading.
Dec 06 06:47:04 compute-0 systemd-rc-local-generator[195741]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:04 compute-0 systemd-sysv-generator[195745]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:04.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:04 compute-0 sudo[195710]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:04 compute-0 sudo[195901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqmjydycgzyapgkobdqlxrkkbhhbemrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003624.4320762-973-93456142101452/AnsiballZ_systemd.py'
Dec 06 06:47:04 compute-0 sudo[195901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:05 compute-0 python3.9[195903]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:47:05 compute-0 systemd[1]: Reloading.
Dec 06 06:47:05 compute-0 systemd-rc-local-generator[195933]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:05 compute-0 systemd-sysv-generator[195936]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:05 compute-0 sudo[195901]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:05 compute-0 ceph-mon[74339]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:05.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:05 compute-0 sudo[196091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmagdzmshgdyhiuakplamofqeealibws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003625.5653396-973-231546698924821/AnsiballZ_systemd.py'
Dec 06 06:47:05 compute-0 sudo[196091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000182s ======
Dec 06 06:47:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:06.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000182s
Dec 06 06:47:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:06 compute-0 python3.9[196093]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:47:06 compute-0 systemd[1]: Reloading.
Dec 06 06:47:06 compute-0 systemd-rc-local-generator[196125]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:06 compute-0 systemd-sysv-generator[196129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:06 compute-0 sudo[196091]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:07 compute-0 sudo[196283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naiqijbxdxdftkwcjiehhcrmfgcalkse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003627.1810527-1066-19394853557829/AnsiballZ_systemd.py'
Dec 06 06:47:07 compute-0 sudo[196283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:07 compute-0 ceph-mon[74339]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000182s ======
Dec 06 06:47:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:07.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000182s
Dec 06 06:47:07 compute-0 python3.9[196285]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:07 compute-0 systemd[1]: Reloading.
Dec 06 06:47:08 compute-0 systemd-rc-local-generator[196318]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:08 compute-0 systemd-sysv-generator[196322]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000182s ======
Dec 06 06:47:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:08.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000182s
Dec 06 06:47:08 compute-0 sudo[196283]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:08 compute-0 sudo[196443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:08 compute-0 sudo[196428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:08 compute-0 sudo[196428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:08 compute-0 sudo[196443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:08 compute-0 sudo[196428]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:08 compute-0 sudo[196443]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:08 compute-0 sudo[196514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:08 compute-0 sudo[196571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djxfdanqdtbgdaqwzsnecbetmwlbihtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003628.4531932-1066-191350857044801/AnsiballZ_systemd.py'
Dec 06 06:47:08 compute-0 sudo[196515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:47:08 compute-0 sudo[196514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:08 compute-0 sudo[196571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:08 compute-0 sudo[196515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:08 compute-0 sudo[196514]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:08 compute-0 sudo[196515]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:08 compute-0 podman[196495]: 2025-12-06 06:47:08.850211472 +0000 UTC m=+0.073470035 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 06:47:08 compute-0 ceph-mon[74339]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:08 compute-0 sudo[196595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:08 compute-0 sudo[196595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:08 compute-0 sudo[196595]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:08 compute-0 sudo[196620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 06:47:08 compute-0 sudo[196620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:09 compute-0 python3.9[196594]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:09 compute-0 sudo[196620]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:47:09 compute-0 systemd[1]: Reloading.
Dec 06 06:47:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:47:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:09 compute-0 systemd-rc-local-generator[196716]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:09 compute-0 systemd-sysv-generator[196722]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:09 compute-0 sudo[196671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:09 compute-0 sudo[196671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:09 compute-0 sudo[196671]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:09 compute-0 sudo[196571]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:09 compute-0 sudo[196730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:47:09 compute-0 sudo[196730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:09 compute-0 sudo[196730]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:09 compute-0 sudo[196779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:09 compute-0 sudo[196779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:09 compute-0 sudo[196779]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:09 compute-0 sudo[196822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:47:09 compute-0 sudo[196822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000182s ======
Dec 06 06:47:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:09.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000182s
Dec 06 06:47:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:47:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:47:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:10 compute-0 sudo[196974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljtfiwsllckulzxtskcegtblleqrnikg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003629.6526656-1066-246910426307787/AnsiballZ_systemd.py'
Dec 06 06:47:10 compute-0 sudo[196974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000182s ======
Dec 06 06:47:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:10.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000182s
Dec 06 06:47:10 compute-0 sudo[196822]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:10 compute-0 python3.9[196976]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:10 compute-0 systemd[1]: Reloading.
Dec 06 06:47:10 compute-0 systemd-rc-local-generator[197022]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:10 compute-0 systemd-sysv-generator[197026]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:10 compute-0 sudo[196974]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:47:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:47:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:47:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:47:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:47:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3e0ccb74-7795-4281-a98e-9243b7f2eb2a does not exist
Dec 06 06:47:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7be1de26-697c-4c6f-babc-67c6570ca58c does not exist
Dec 06 06:47:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2033da27-f9e0-4b4c-8012-ecec96382df9 does not exist
Dec 06 06:47:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:47:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:47:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:47:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:47:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:47:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:47:11 compute-0 sudo[197112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:11 compute-0 sudo[197112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:11 compute-0 sudo[197112]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:11 compute-0 sudo[197153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:47:11 compute-0 sudo[197153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:11 compute-0 sudo[197153]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:11 compute-0 sudo[197202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:11 compute-0 sudo[197202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:11 compute-0 sudo[197202]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:11 compute-0 sudo[197252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkeyetigtrqdqkxajssekdlrhaovsobs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003630.9643023-1066-171097819193556/AnsiballZ_systemd.py'
Dec 06 06:47:11 compute-0 ceph-mon[74339]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:47:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:47:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:47:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:47:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:47:11 compute-0 sudo[197252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:11 compute-0 sudo[197254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:47:11 compute-0 sudo[197254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:11 compute-0 python3.9[197261]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:11 compute-0 podman[197322]: 2025-12-06 06:47:11.625362053 +0000 UTC m=+0.056750619 container create 7f3c2932eb5449cf012b73244c927a70ce51bcadc9a8f887b096b7adb3cb210b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:47:11 compute-0 sudo[197252]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:11 compute-0 systemd[1]: Started libpod-conmon-7f3c2932eb5449cf012b73244c927a70ce51bcadc9a8f887b096b7adb3cb210b.scope.
Dec 06 06:47:11 compute-0 podman[197322]: 2025-12-06 06:47:11.602031943 +0000 UTC m=+0.033420529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:47:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:47:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.003000545s ======
Dec 06 06:47:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:11.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000545s
Dec 06 06:47:12 compute-0 podman[197322]: 2025-12-06 06:47:12.012414424 +0000 UTC m=+0.443803010 container init 7f3c2932eb5449cf012b73244c927a70ce51bcadc9a8f887b096b7adb3cb210b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Dec 06 06:47:12 compute-0 podman[197322]: 2025-12-06 06:47:12.022531748 +0000 UTC m=+0.453920314 container start 7f3c2932eb5449cf012b73244c927a70ce51bcadc9a8f887b096b7adb3cb210b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 06:47:12 compute-0 podman[197322]: 2025-12-06 06:47:12.027124074 +0000 UTC m=+0.458512640 container attach 7f3c2932eb5449cf012b73244c927a70ce51bcadc9a8f887b096b7adb3cb210b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 06:47:12 compute-0 determined_torvalds[197343]: 167 167
Dec 06 06:47:12 compute-0 systemd[1]: libpod-7f3c2932eb5449cf012b73244c927a70ce51bcadc9a8f887b096b7adb3cb210b.scope: Deactivated successfully.
Dec 06 06:47:12 compute-0 podman[197322]: 2025-12-06 06:47:12.029319044 +0000 UTC m=+0.460707610 container died 7f3c2932eb5449cf012b73244c927a70ce51bcadc9a8f887b096b7adb3cb210b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:47:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8121a04f75e49a89cef962ba85fec187065ed28af226611a027cafbae78f6893-merged.mount: Deactivated successfully.
Dec 06 06:47:12 compute-0 sudo[197506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtoprakkurmmpwbtzdbmdkgwrbnnunzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003631.7886105-1066-15402717678699/AnsiballZ_systemd.py'
Dec 06 06:47:12 compute-0 sudo[197506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:12 compute-0 podman[197322]: 2025-12-06 06:47:12.090258066 +0000 UTC m=+0.521646632 container remove 7f3c2932eb5449cf012b73244c927a70ce51bcadc9a8f887b096b7adb3cb210b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_torvalds, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:47:12 compute-0 systemd[1]: libpod-conmon-7f3c2932eb5449cf012b73244c927a70ce51bcadc9a8f887b096b7adb3cb210b.scope: Deactivated successfully.
Dec 06 06:47:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:12.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:12 compute-0 podman[197516]: 2025-12-06 06:47:12.26221004 +0000 UTC m=+0.049857253 container create 3023320a45d99e45c32d58b4aaa72e5c975a37e10981f6cb554a9abc97f497f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kilby, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 06:47:12 compute-0 systemd[1]: Started libpod-conmon-3023320a45d99e45c32d58b4aaa72e5c975a37e10981f6cb554a9abc97f497f2.scope.
Dec 06 06:47:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5082eee87878b2560783e7ae1e719cee51ffb25981fda45a092f8c8a274805ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5082eee87878b2560783e7ae1e719cee51ffb25981fda45a092f8c8a274805ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5082eee87878b2560783e7ae1e719cee51ffb25981fda45a092f8c8a274805ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5082eee87878b2560783e7ae1e719cee51ffb25981fda45a092f8c8a274805ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5082eee87878b2560783e7ae1e719cee51ffb25981fda45a092f8c8a274805ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:12 compute-0 podman[197516]: 2025-12-06 06:47:12.241611578 +0000 UTC m=+0.029258811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:47:12 compute-0 podman[197516]: 2025-12-06 06:47:12.34987537 +0000 UTC m=+0.137522603 container init 3023320a45d99e45c32d58b4aaa72e5c975a37e10981f6cb554a9abc97f497f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kilby, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Dec 06 06:47:12 compute-0 podman[197516]: 2025-12-06 06:47:12.356050606 +0000 UTC m=+0.143697819 container start 3023320a45d99e45c32d58b4aaa72e5c975a37e10981f6cb554a9abc97f497f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:47:12 compute-0 podman[197516]: 2025-12-06 06:47:12.3723214 +0000 UTC m=+0.159968723 container attach 3023320a45d99e45c32d58b4aaa72e5c975a37e10981f6cb554a9abc97f497f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kilby, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:47:12 compute-0 python3.9[197508]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:12 compute-0 systemd[1]: Reloading.
Dec 06 06:47:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:12 compute-0 systemd-rc-local-generator[197563]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:12 compute-0 systemd-sysv-generator[197568]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:12 compute-0 sudo[197506]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:47:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:47:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:47:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:47:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:47:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:47:13 compute-0 dreamy_kilby[197533]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:47:13 compute-0 dreamy_kilby[197533]: --> relative data size: 1.0
Dec 06 06:47:13 compute-0 dreamy_kilby[197533]: --> All data devices are unavailable
Dec 06 06:47:13 compute-0 systemd[1]: libpod-3023320a45d99e45c32d58b4aaa72e5c975a37e10981f6cb554a9abc97f497f2.scope: Deactivated successfully.
Dec 06 06:47:13 compute-0 podman[197516]: 2025-12-06 06:47:13.292685102 +0000 UTC m=+1.080332335 container died 3023320a45d99e45c32d58b4aaa72e5c975a37e10981f6cb554a9abc97f497f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kilby, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 06:47:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5082eee87878b2560783e7ae1e719cee51ffb25981fda45a092f8c8a274805ff-merged.mount: Deactivated successfully.
Dec 06 06:47:13 compute-0 podman[197516]: 2025-12-06 06:47:13.352694536 +0000 UTC m=+1.140341749 container remove 3023320a45d99e45c32d58b4aaa72e5c975a37e10981f6cb554a9abc97f497f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 06 06:47:13 compute-0 systemd[1]: libpod-conmon-3023320a45d99e45c32d58b4aaa72e5c975a37e10981f6cb554a9abc97f497f2.scope: Deactivated successfully.
Dec 06 06:47:13 compute-0 sudo[197254]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:13 compute-0 sudo[197756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvmgqobucaejyopbhmfeiirymsefgmgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003633.0864735-1174-178047621202313/AnsiballZ_systemd.py'
Dec 06 06:47:13 compute-0 sudo[197756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:13 compute-0 sudo[197746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:13 compute-0 sudo[197746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:13 compute-0 sudo[197746]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:13 compute-0 sudo[197778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:47:13 compute-0 sudo[197778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:13 compute-0 sudo[197778]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:13 compute-0 sudo[197803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:13 compute-0 sudo[197803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:13 compute-0 sudo[197803]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:13 compute-0 sudo[197828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:47:13 compute-0 sudo[197828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:13 compute-0 ceph-mon[74339]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:13 compute-0 python3.9[197775]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 06:47:13 compute-0 systemd[1]: Reloading.
Dec 06 06:47:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:13.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:13 compute-0 systemd-rc-local-generator[197928]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:47:13 compute-0 systemd-sysv-generator[197931]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:47:13 compute-0 podman[197896]: 2025-12-06 06:47:13.936216396 +0000 UTC m=+0.052714541 container create 4b583181aefac8aa5998d545220d9dc64a19ade1fc9df9e98ca70989694f3fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moore, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Dec 06 06:47:14 compute-0 podman[197896]: 2025-12-06 06:47:13.918595841 +0000 UTC m=+0.035094016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:47:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:47:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:14.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:47:14 compute-0 systemd[1]: Started libpod-conmon-4b583181aefac8aa5998d545220d9dc64a19ade1fc9df9e98ca70989694f3fc9.scope.
Dec 06 06:47:14 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 06 06:47:14 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 06 06:47:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:47:14 compute-0 sudo[197756]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:14 compute-0 podman[197896]: 2025-12-06 06:47:14.192752996 +0000 UTC m=+0.309251161 container init 4b583181aefac8aa5998d545220d9dc64a19ade1fc9df9e98ca70989694f3fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:47:14 compute-0 podman[197896]: 2025-12-06 06:47:14.202301065 +0000 UTC m=+0.318799210 container start 4b583181aefac8aa5998d545220d9dc64a19ade1fc9df9e98ca70989694f3fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:47:14 compute-0 podman[197896]: 2025-12-06 06:47:14.206452661 +0000 UTC m=+0.322950826 container attach 4b583181aefac8aa5998d545220d9dc64a19ade1fc9df9e98ca70989694f3fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:47:14 compute-0 hungry_moore[197947]: 167 167
Dec 06 06:47:14 compute-0 systemd[1]: libpod-4b583181aefac8aa5998d545220d9dc64a19ade1fc9df9e98ca70989694f3fc9.scope: Deactivated successfully.
Dec 06 06:47:14 compute-0 podman[197896]: 2025-12-06 06:47:14.209439355 +0000 UTC m=+0.325937500 container died 4b583181aefac8aa5998d545220d9dc64a19ade1fc9df9e98ca70989694f3fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moore, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdbe79442a93d60be4a7c7a21c5dfffd1c573e96bd3513619102826e49aff62f-merged.mount: Deactivated successfully.
Dec 06 06:47:14 compute-0 podman[197896]: 2025-12-06 06:47:14.249598052 +0000 UTC m=+0.366096197 container remove 4b583181aefac8aa5998d545220d9dc64a19ade1fc9df9e98ca70989694f3fc9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_moore, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:47:14 compute-0 systemd[1]: libpod-conmon-4b583181aefac8aa5998d545220d9dc64a19ade1fc9df9e98ca70989694f3fc9.scope: Deactivated successfully.
Dec 06 06:47:14 compute-0 podman[197997]: 2025-12-06 06:47:14.408246475 +0000 UTC m=+0.039770237 container create 3bf6e1cac3d2e795720e7856d061b96aab846309094d8ce9afb1f65494a21ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cartwright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:47:14 compute-0 systemd[1]: Started libpod-conmon-3bf6e1cac3d2e795720e7856d061b96aab846309094d8ce9afb1f65494a21ac1.scope.
Dec 06 06:47:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c4afad6f26582f68e8738ce2beec4346daca0145c34992dbfa258c4063dcc5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c4afad6f26582f68e8738ce2beec4346daca0145c34992dbfa258c4063dcc5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c4afad6f26582f68e8738ce2beec4346daca0145c34992dbfa258c4063dcc5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c4afad6f26582f68e8738ce2beec4346daca0145c34992dbfa258c4063dcc5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:14 compute-0 podman[197997]: 2025-12-06 06:47:14.483041835 +0000 UTC m=+0.114565597 container init 3bf6e1cac3d2e795720e7856d061b96aab846309094d8ce9afb1f65494a21ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cartwright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:47:14 compute-0 podman[197997]: 2025-12-06 06:47:14.391148036 +0000 UTC m=+0.022671828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:47:14 compute-0 podman[197997]: 2025-12-06 06:47:14.489171037 +0000 UTC m=+0.120694799 container start 3bf6e1cac3d2e795720e7856d061b96aab846309094d8ce9afb1f65494a21ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 06:47:14 compute-0 podman[197997]: 2025-12-06 06:47:14.492327835 +0000 UTC m=+0.123851587 container attach 3bf6e1cac3d2e795720e7856d061b96aab846309094d8ce9afb1f65494a21ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:47:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:14 compute-0 sudo[198144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krdkgeoevravxnbisszjeauvxwlflnik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003634.542655-1198-157920394602910/AnsiballZ_systemd.py'
Dec 06 06:47:14 compute-0 sudo[198144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:15 compute-0 python3.9[198146]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:15 compute-0 sudo[198144]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]: {
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:     "0": [
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:         {
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             "devices": [
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "/dev/loop3"
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             ],
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             "lv_name": "ceph_lv0",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             "lv_size": "7511998464",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             "name": "ceph_lv0",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             "tags": {
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "ceph.cluster_name": "ceph",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "ceph.crush_device_class": "",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "ceph.encrypted": "0",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "ceph.osd_id": "0",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "ceph.type": "block",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:                 "ceph.vdo": "0"
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             },
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             "type": "block",
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:             "vg_name": "ceph_vg0"
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:         }
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]:     ]
Dec 06 06:47:15 compute-0 compassionate_cartwright[198014]: }
Dec 06 06:47:15 compute-0 systemd[1]: libpod-3bf6e1cac3d2e795720e7856d061b96aab846309094d8ce9afb1f65494a21ac1.scope: Deactivated successfully.
Dec 06 06:47:15 compute-0 podman[197997]: 2025-12-06 06:47:15.289508192 +0000 UTC m=+0.921031954 container died 3bf6e1cac3d2e795720e7856d061b96aab846309094d8ce9afb1f65494a21ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:47:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c4afad6f26582f68e8738ce2beec4346daca0145c34992dbfa258c4063dcc5a-merged.mount: Deactivated successfully.
Dec 06 06:47:15 compute-0 podman[197997]: 2025-12-06 06:47:15.341075709 +0000 UTC m=+0.972599471 container remove 3bf6e1cac3d2e795720e7856d061b96aab846309094d8ce9afb1f65494a21ac1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:47:15 compute-0 systemd[1]: libpod-conmon-3bf6e1cac3d2e795720e7856d061b96aab846309094d8ce9afb1f65494a21ac1.scope: Deactivated successfully.
Dec 06 06:47:15 compute-0 sudo[197828]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:15 compute-0 sudo[198237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:15 compute-0 sudo[198237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:15 compute-0 sudo[198237]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:15 compute-0 sudo[198280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:47:15 compute-0 sudo[198280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:15 compute-0 sudo[198280]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:15 compute-0 sudo[198314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:15 compute-0 sudo[198314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:15 compute-0 sudo[198314]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:15 compute-0 sudo[198363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:47:15 compute-0 sudo[198363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:15 compute-0 sudo[198414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rueyrjdjrmptnkvfluonelgxgcvqyhzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003635.3367035-1198-49707266971048/AnsiballZ_systemd.py'
Dec 06 06:47:15 compute-0 sudo[198414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:15.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:15 compute-0 podman[198456]: 2025-12-06 06:47:15.943029346 +0000 UTC m=+0.038053259 container create bd6c0fff89a60e05c238b6f67d234c308e99e9034b726e38a35a8b058d6680ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:47:15 compute-0 python3.9[198416]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:15 compute-0 systemd[1]: Started libpod-conmon-bd6c0fff89a60e05c238b6f67d234c308e99e9034b726e38a35a8b058d6680ce.scope.
Dec 06 06:47:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:47:16 compute-0 podman[198456]: 2025-12-06 06:47:15.927437748 +0000 UTC m=+0.022461661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:47:16 compute-0 podman[198456]: 2025-12-06 06:47:16.033761202 +0000 UTC m=+0.128785145 container init bd6c0fff89a60e05c238b6f67d234c308e99e9034b726e38a35a8b058d6680ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 06:47:16 compute-0 podman[198456]: 2025-12-06 06:47:16.040193604 +0000 UTC m=+0.135217517 container start bd6c0fff89a60e05c238b6f67d234c308e99e9034b726e38a35a8b058d6680ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:47:16 compute-0 podman[198456]: 2025-12-06 06:47:16.043180117 +0000 UTC m=+0.138204030 container attach bd6c0fff89a60e05c238b6f67d234c308e99e9034b726e38a35a8b058d6680ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Dec 06 06:47:16 compute-0 vibrant_rubin[198473]: 167 167
Dec 06 06:47:16 compute-0 systemd[1]: libpod-bd6c0fff89a60e05c238b6f67d234c308e99e9034b726e38a35a8b058d6680ce.scope: Deactivated successfully.
Dec 06 06:47:16 compute-0 podman[198456]: 2025-12-06 06:47:16.046165511 +0000 UTC m=+0.141189444 container died bd6c0fff89a60e05c238b6f67d234c308e99e9034b726e38a35a8b058d6680ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:47:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae271ccd1979fe0e0d9dea851ed3a09b389cb5ae21543e0c7e385c8ba9a15ca1-merged.mount: Deactivated successfully.
Dec 06 06:47:16 compute-0 sudo[198414]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:16 compute-0 podman[198456]: 2025-12-06 06:47:16.082024798 +0000 UTC m=+0.177048711 container remove bd6c0fff89a60e05c238b6f67d234c308e99e9034b726e38a35a8b058d6680ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rubin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:47:16 compute-0 systemd[1]: libpod-conmon-bd6c0fff89a60e05c238b6f67d234c308e99e9034b726e38a35a8b058d6680ce.scope: Deactivated successfully.
Dec 06 06:47:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:16.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:16 compute-0 podman[198523]: 2025-12-06 06:47:16.229564219 +0000 UTC m=+0.037020980 container create 077500bbd801eddc8bdbe672e3d3ee80391d10f23cf89d0ab646122a7f28d394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:47:16 compute-0 systemd[1]: Started libpod-conmon-077500bbd801eddc8bdbe672e3d3ee80391d10f23cf89d0ab646122a7f28d394.scope.
Dec 06 06:47:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:47:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5297e6ea3c1584fb9afd2fc33a017db330c2c494ff39297e77c4b3c39e8e1007/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5297e6ea3c1584fb9afd2fc33a017db330c2c494ff39297e77c4b3c39e8e1007/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5297e6ea3c1584fb9afd2fc33a017db330c2c494ff39297e77c4b3c39e8e1007/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5297e6ea3c1584fb9afd2fc33a017db330c2c494ff39297e77c4b3c39e8e1007/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:47:16 compute-0 podman[198523]: 2025-12-06 06:47:16.310297665 +0000 UTC m=+0.117754446 container init 077500bbd801eddc8bdbe672e3d3ee80391d10f23cf89d0ab646122a7f28d394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 06:47:16 compute-0 podman[198523]: 2025-12-06 06:47:16.214394193 +0000 UTC m=+0.021850984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:47:16 compute-0 podman[198523]: 2025-12-06 06:47:16.317338413 +0000 UTC m=+0.124795184 container start 077500bbd801eddc8bdbe672e3d3ee80391d10f23cf89d0ab646122a7f28d394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_johnson, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 06:47:16 compute-0 podman[198523]: 2025-12-06 06:47:16.320118841 +0000 UTC m=+0.127575622 container attach 077500bbd801eddc8bdbe672e3d3ee80391d10f23cf89d0ab646122a7f28d394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_johnson, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 06:47:16 compute-0 ceph-mon[74339]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:16 compute-0 sudo[198670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyohuthcungqhetbydgifpfzhegeicrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003636.253057-1198-94565470803538/AnsiballZ_systemd.py'
Dec 06 06:47:16 compute-0 sudo[198670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:16 compute-0 python3.9[198672]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:16 compute-0 sudo[198670]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:17 compute-0 hopeful_johnson[198562]: {
Dec 06 06:47:17 compute-0 hopeful_johnson[198562]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:47:17 compute-0 hopeful_johnson[198562]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:47:17 compute-0 hopeful_johnson[198562]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:47:17 compute-0 hopeful_johnson[198562]:         "osd_id": 0,
Dec 06 06:47:17 compute-0 hopeful_johnson[198562]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:47:17 compute-0 hopeful_johnson[198562]:         "type": "bluestore"
Dec 06 06:47:17 compute-0 hopeful_johnson[198562]:     }
Dec 06 06:47:17 compute-0 hopeful_johnson[198562]: }
Dec 06 06:47:17 compute-0 systemd[1]: libpod-077500bbd801eddc8bdbe672e3d3ee80391d10f23cf89d0ab646122a7f28d394.scope: Deactivated successfully.
Dec 06 06:47:17 compute-0 podman[198523]: 2025-12-06 06:47:17.212737446 +0000 UTC m=+1.020194227 container died 077500bbd801eddc8bdbe672e3d3ee80391d10f23cf89d0ab646122a7f28d394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_johnson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:47:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5297e6ea3c1584fb9afd2fc33a017db330c2c494ff39297e77c4b3c39e8e1007-merged.mount: Deactivated successfully.
Dec 06 06:47:17 compute-0 podman[198523]: 2025-12-06 06:47:17.270254091 +0000 UTC m=+1.077710862 container remove 077500bbd801eddc8bdbe672e3d3ee80391d10f23cf89d0ab646122a7f28d394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_johnson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 06:47:17 compute-0 systemd[1]: libpod-conmon-077500bbd801eddc8bdbe672e3d3ee80391d10f23cf89d0ab646122a7f28d394.scope: Deactivated successfully.
Dec 06 06:47:17 compute-0 sudo[198363]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:47:17 compute-0 sudo[198852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttkhfsumdqhfbkqlohxsuntovuacleef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003637.0504572-1198-133127625821640/AnsiballZ_systemd.py'
Dec 06 06:47:17 compute-0 sudo[198852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:47:17 compute-0 python3.9[198854]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:17 compute-0 sudo[198852]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:47:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:17.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:47:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:47:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:18.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:47:18 compute-0 sudo[199007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txypopslmaicmplcdhpjxbxjaxralpdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003637.9402547-1198-16656034049048/AnsiballZ_systemd.py'
Dec 06 06:47:18 compute-0 sudo[199007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:47:18
Dec 06 06:47:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:47:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:47:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'volumes']
Dec 06 06:47:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:47:18 compute-0 python3.9[199009]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:18 compute-0 sudo[199007]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:19 compute-0 sudo[199163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pacevsoexpdrbbdtjtnqkuhrkjgwltcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003638.8086832-1198-232457817582382/AnsiballZ_systemd.py'
Dec 06 06:47:19 compute-0 sudo[199163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:19 compute-0 python3.9[199165]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:19 compute-0 sudo[199163]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:19 compute-0 sudo[199318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkthyainmzmkazovsckvfnwwzvoymnws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003639.5353348-1198-269041878048319/AnsiballZ_systemd.py'
Dec 06 06:47:19 compute-0 sudo[199318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:19.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:19 compute-0 ceph-mon[74339]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:20 compute-0 python3.9[199320]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:47:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:20.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:47:20 compute-0 sudo[199318]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:20 compute-0 sudo[199474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snhchxamkzkynuzrqmjsxlzfqpcseemw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003640.3501246-1198-85217360668995/AnsiballZ_systemd.py'
Dec 06 06:47:20 compute-0 sudo[199474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:20 compute-0 python3.9[199476]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:21 compute-0 sudo[199474]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:21 compute-0 sudo[199629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pojrlubwucftsclutkrvbqyixhtlynek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003641.1354904-1198-262942290997571/AnsiballZ_systemd.py'
Dec 06 06:47:21 compute-0 sudo[199629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:21 compute-0 python3.9[199631]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:21 compute-0 sudo[199629]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:21.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f0d1724c-8088-4d4f-b943-4ac3fd3eb82d does not exist
Dec 06 06:47:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ab460d51-b7ac-4260-8fbb-e90522620949 does not exist
Dec 06 06:47:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a69f2fa5-802f-4f7e-8a24-ddcdb6040be5 does not exist
Dec 06 06:47:22 compute-0 sudo[199682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:22 compute-0 sudo[199682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:22 compute-0 sudo[199682]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:22 compute-0 sudo[199734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:47:22 compute-0 sudo[199734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:22 compute-0 sudo[199734]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:22.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:22 compute-0 sudo[199834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qppxweuuuitkvstgmdnarabtefmpvyac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003641.9917364-1198-84002000086593/AnsiballZ_systemd.py'
Dec 06 06:47:22 compute-0 sudo[199834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:22 compute-0 python3.9[199836]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:22 compute-0 sudo[199834]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:47:23 compute-0 sudo[199991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijllhgbcbnkzhxpjdnhvznadsfwbtlrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003642.9481282-1198-191312733767046/AnsiballZ_systemd.py'
Dec 06 06:47:23 compute-0 sudo[199991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:47:23 compute-0 python3.9[199993]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:23 compute-0 sudo[199991]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:23.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:23 compute-0 sudo[200147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erjtvijoavjftgjtpzryxedibvxmofbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003643.7233589-1198-88753302240770/AnsiballZ_systemd.py'
Dec 06 06:47:23 compute-0 sudo[200147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:47:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:24.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:47:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:24 compute-0 ceph-mon[74339]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:24 compute-0 ceph-mon[74339]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:24 compute-0 python3.9[200149]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:24 compute-0 sudo[200147]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:24 compute-0 sudo[200303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muiqyoffjbxdwzouicfgivehusuxqmix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003644.5568404-1198-253762887779906/AnsiballZ_systemd.py'
Dec 06 06:47:24 compute-0 sudo[200303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:25 compute-0 python3.9[200305]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:25 compute-0 sudo[200303]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:47:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:47:25 compute-0 sudo[200458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aecvwhxdiditjvglcffrldjjalbcmwqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003645.3397834-1198-7949076284041/AnsiballZ_systemd.py'
Dec 06 06:47:25 compute-0 sudo[200458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:25 compute-0 sshd-session[199916]: Connection reset by authenticating user root 45.135.232.92 port 56654 [preauth]
Dec 06 06:47:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:25.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:25 compute-0 python3.9[200460]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 06:47:26 compute-0 sudo[200458]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:26.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:47:26 compute-0 ceph-mon[74339]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:26 compute-0 ceph-mon[74339]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:27 compute-0 sudo[200627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxfjlgyeyjwfbgvzozuybgealwxytdss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003647.483934-1504-34871506584946/AnsiballZ_file.py'
Dec 06 06:47:27 compute-0 sudo[200627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:27 compute-0 sshd-session[200461]: Connection reset by authenticating user root 45.135.232.92 port 64880 [preauth]
Dec 06 06:47:27 compute-0 podman[200590]: 2025-12-06 06:47:27.782763061 +0000 UTC m=+0.079864453 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 06:47:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:27.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:27 compute-0 python3.9[200635]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:47:27 compute-0 sudo[200627]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:28.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:28 compute-0 ceph-mon[74339]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:28 compute-0 sudo[200795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpvijslvcolnymvpjitxobfkvutrjflr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003648.1371372-1504-62546035796832/AnsiballZ_file.py'
Dec 06 06:47:28 compute-0 sudo[200795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:28 compute-0 python3.9[200797]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:47:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:28 compute-0 sudo[200795]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:28 compute-0 sudo[200897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:28 compute-0 sudo[200897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:28 compute-0 sudo[200897]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:28 compute-0 sudo[200922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:28 compute-0 sudo[200922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:28 compute-0 sudo[200922]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:29 compute-0 sudo[200997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fybxdjsttybocxwbgghvbjsdavvomlak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003648.7361603-1504-221863470553051/AnsiballZ_file.py'
Dec 06 06:47:29 compute-0 sudo[200997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:29 compute-0 python3.9[200999]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:47:29 compute-0 sudo[200997]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:29 compute-0 ceph-mon[74339]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:29 compute-0 sudo[201149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhacjtvggbbgauxlvlalbwruqwbjpnnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003649.3593187-1504-161335486523182/AnsiballZ_file.py'
Dec 06 06:47:29 compute-0 sudo[201149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:29 compute-0 python3.9[201151]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:47:29 compute-0 sudo[201149]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:29.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:30.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:30 compute-0 sudo[201301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spculyfhodamddqdyddstvhksfuouruu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003649.959847-1504-110351294991780/AnsiballZ_file.py'
Dec 06 06:47:30 compute-0 sudo[201301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:30 compute-0 python3.9[201303]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:47:30 compute-0 sudo[201301]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:30 compute-0 sudo[201454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edfidrvpbpsxqngfbuxgxbozwylcapdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003650.5655305-1504-239772000368444/AnsiballZ_file.py'
Dec 06 06:47:30 compute-0 sudo[201454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:31 compute-0 python3.9[201456]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:47:31 compute-0 sudo[201454]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:31 compute-0 sshd-session[200643]: Connection reset by authenticating user root 45.135.232.92 port 64890 [preauth]
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.230005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651230072, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1088, "num_deletes": 501, "total_data_size": 1318851, "memory_usage": 1344272, "flush_reason": "Manual Compaction"}
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651238385, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 853154, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14780, "largest_seqno": 15867, "table_properties": {"data_size": 848909, "index_size": 1385, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13557, "raw_average_key_size": 19, "raw_value_size": 838028, "raw_average_value_size": 1187, "num_data_blocks": 61, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003552, "oldest_key_time": 1765003552, "file_creation_time": 1765003651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 8471 microseconds, and 4431 cpu microseconds.
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.238472) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 853154 bytes OK
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.238498) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.239994) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.240021) EVENT_LOG_v1 {"time_micros": 1765003651240015, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.240042) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1312774, prev total WAL file size 1312774, number of live WAL files 2.
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.240889) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323538' seq:72057594037927935, type:22 .. '6D67727374617400353039' seq:0, type:0; will stop at (end)
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(833KB)], [32(11MB)]
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651240966, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 13232720, "oldest_snapshot_seqno": -1}
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4416 keys, 7584927 bytes, temperature: kUnknown
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651315442, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7584927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7554712, "index_size": 18073, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 111413, "raw_average_key_size": 25, "raw_value_size": 7474071, "raw_average_value_size": 1692, "num_data_blocks": 749, "num_entries": 4416, "num_filter_entries": 4416, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765003651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.315790) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7584927 bytes
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.317400) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.3 rd, 101.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.8 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(24.4) write-amplify(8.9) OK, records in: 5406, records dropped: 990 output_compression: NoCompression
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.317422) EVENT_LOG_v1 {"time_micros": 1765003651317412, "job": 14, "event": "compaction_finished", "compaction_time_micros": 74617, "compaction_time_cpu_micros": 38857, "output_level": 6, "num_output_files": 1, "total_output_size": 7584927, "num_input_records": 5406, "num_output_records": 4416, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651317915, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003651320247, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.240726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.320385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.320393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.320395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.320399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:47:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:47:31.320403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:47:31 compute-0 ceph-mon[74339]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:31 compute-0 sudo[201607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iysazxeurjlyuudutfvgrrucvqfxtokb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003651.2723105-1633-150292103304921/AnsiballZ_stat.py'
Dec 06 06:47:31 compute-0 sudo[201607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:31.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:31 compute-0 python3.9[201609]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:31 compute-0 sudo[201607]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:47:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:32.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:47:32 compute-0 sudo[201734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhraausurcthqmzvwpapbkaqyosvvvoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003651.2723105-1633-150292103304921/AnsiballZ_copy.py'
Dec 06 06:47:32 compute-0 sudo[201734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:32 compute-0 python3.9[201736]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003651.2723105-1633-150292103304921/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:32 compute-0 sudo[201734]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:32 compute-0 sudo[201886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnhpfikfgmrrbcmdfefrmqxoypypjimx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003652.7311397-1633-247431178790574/AnsiballZ_stat.py'
Dec 06 06:47:32 compute-0 sudo[201886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:33 compute-0 python3.9[201888]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:33 compute-0 sudo[201886]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:33 compute-0 ceph-mon[74339]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:33 compute-0 sudo[202011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wewzvyvlkhijmuoircunkojbsttcejxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003652.7311397-1633-247431178790574/AnsiballZ_copy.py'
Dec 06 06:47:33 compute-0 sudo[202011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:33 compute-0 sshd-session[201533]: Connection reset by authenticating user root 45.135.232.92 port 64912 [preauth]
Dec 06 06:47:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:47:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:33.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:47:33 compute-0 python3.9[202013]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003652.7311397-1633-247431178790574/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:33 compute-0 sudo[202011]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:47:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:34.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:47:34 compute-0 sudo[202164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhxxchuxjsrhgabeufmgivvidcijzurb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003654.0761452-1633-184540948464047/AnsiballZ_stat.py'
Dec 06 06:47:34 compute-0 sudo[202164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:34 compute-0 python3.9[202167]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:34 compute-0 sudo[202164]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:34 compute-0 ceph-mon[74339]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:34 compute-0 sudo[202291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayxlwgtnjqfdpiushqfwoxoixggvxujo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003654.0761452-1633-184540948464047/AnsiballZ_copy.py'
Dec 06 06:47:34 compute-0 sudo[202291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:35 compute-0 python3.9[202293]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003654.0761452-1633-184540948464047/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:35 compute-0 sudo[202291]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:35 compute-0 sudo[202443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsqzeqhqrfwnpxgnkrznxzfvysjpijqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003655.3167267-1633-129335753581785/AnsiballZ_stat.py'
Dec 06 06:47:35 compute-0 sudo[202443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:35 compute-0 python3.9[202445]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:35 compute-0 sudo[202443]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:35 compute-0 sshd-session[202041]: Invalid user monitor from 45.135.232.92 port 64924
Dec 06 06:47:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:35.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:36 compute-0 sudo[202568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egemoahcrejasutlewhsiyqeixtrchuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003655.3167267-1633-129335753581785/AnsiballZ_copy.py'
Dec 06 06:47:36 compute-0 sudo[202568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:47:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:36.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:47:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:36 compute-0 python3.9[202570]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003655.3167267-1633-129335753581785/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:36 compute-0 sudo[202568]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:36 compute-0 sshd-session[202041]: Connection reset by invalid user monitor 45.135.232.92 port 64924 [preauth]
Dec 06 06:47:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:36 compute-0 sudo[202721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhsbiltkekumiafttwgrgcliwbgkyzao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003656.419446-1633-40546565577878/AnsiballZ_stat.py'
Dec 06 06:47:36 compute-0 sudo[202721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:36 compute-0 python3.9[202723]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:36 compute-0 sudo[202721]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:37 compute-0 sudo[202846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtdteiesrzyygwyaxnsmgeoxjyjhainb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003656.419446-1633-40546565577878/AnsiballZ_copy.py'
Dec 06 06:47:37 compute-0 sudo[202846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:37 compute-0 ceph-mon[74339]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:37 compute-0 python3.9[202848]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003656.419446-1633-40546565577878/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:37 compute-0 sudo[202846]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:37 compute-0 sudo[202998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjgjpltgsqkcptcwwyadxwpivdujvbdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003657.6020026-1633-83913747211865/AnsiballZ_stat.py'
Dec 06 06:47:37 compute-0 sudo[202998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:37.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:38 compute-0 python3.9[203000]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:38 compute-0 sudo[202998]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:38.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:38 compute-0 sudo[203124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfgjzwsnjxlplewyvibcuoizshqnaksr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003657.6020026-1633-83913747211865/AnsiballZ_copy.py'
Dec 06 06:47:38 compute-0 sudo[203124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:38 compute-0 python3.9[203126]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003657.6020026-1633-83913747211865/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:38 compute-0 sudo[203124]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:38 compute-0 sudo[203289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piobzlyetxqxtmskzfhagqdrdysvunli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003658.7263029-1633-104153904179464/AnsiballZ_stat.py'
Dec 06 06:47:38 compute-0 sudo[203289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:39 compute-0 podman[203250]: 2025-12-06 06:47:39.011753533 +0000 UTC m=+0.059314076 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 06:47:39 compute-0 python3.9[203296]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:39 compute-0 sudo[203289]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:39 compute-0 sudo[203419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgozamzoeubnnysiqweesdtxihxzpryd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003658.7263029-1633-104153904179464/AnsiballZ_copy.py'
Dec 06 06:47:39 compute-0 sudo[203419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:39 compute-0 ceph-mon[74339]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:39 compute-0 python3.9[203421]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003658.7263029-1633-104153904179464/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:39 compute-0 sudo[203419]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:39.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:47:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:40.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:47:40 compute-0 sudo[203571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiixxzmypaapkrpqekjxtzilmngyxymw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003659.9405153-1633-111308254234578/AnsiballZ_stat.py'
Dec 06 06:47:40 compute-0 sudo[203571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:40 compute-0 python3.9[203573]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:40 compute-0 sudo[203571]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:40 compute-0 sudo[203697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-illzscixljgiyztmtjjywwuqcsanuzrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003659.9405153-1633-111308254234578/AnsiballZ_copy.py'
Dec 06 06:47:40 compute-0 sudo[203697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:40 compute-0 python3.9[203699]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765003659.9405153-1633-111308254234578/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:40 compute-0 sudo[203697]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:41 compute-0 sudo[203849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoxcnxqbdaztgbrrddwlgudjiamhktpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003661.2400799-1972-160035040274348/AnsiballZ_command.py'
Dec 06 06:47:41 compute-0 sudo[203849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:41 compute-0 python3.9[203851]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 06 06:47:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:41.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:42 compute-0 sudo[203849]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:42.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:42 compute-0 ceph-mon[74339]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:47:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:47:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:47:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:47:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:47:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:47:43 compute-0 sudo[204003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kamwajmbhotpzbfliolurocsusrqbjcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003662.7579546-1999-79427876867180/AnsiballZ_file.py'
Dec 06 06:47:43 compute-0 sudo[204003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:43 compute-0 python3.9[204005]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:43 compute-0 sudo[204003]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:43 compute-0 sudo[204155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kykmqnkeladfqioeznmhjaillpgzjcnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003663.3732927-1999-110517919504517/AnsiballZ_file.py'
Dec 06 06:47:43 compute-0 sudo[204155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:43 compute-0 python3.9[204157]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:43 compute-0 sudo[204155]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:43.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:44.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:44 compute-0 sudo[204307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hicfruvxhaadwttfilalhumxscrcifps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003663.9931695-1999-58056541704595/AnsiballZ_file.py'
Dec 06 06:47:44 compute-0 sudo[204307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:44 compute-0 python3.9[204310]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:44 compute-0 sudo[204307]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:44 compute-0 sudo[204460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buffyjqxzfisipxqlyokpegvrxhrzues ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003664.734382-1999-12545129994887/AnsiballZ_file.py'
Dec 06 06:47:44 compute-0 sudo[204460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:45 compute-0 python3.9[204462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:45 compute-0 sudo[204460]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:45 compute-0 ceph-mon[74339]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:45 compute-0 sudo[204612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsbjsipqsmcnxxgdlvkpkaitlnptmeaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003665.334028-1999-91461163872134/AnsiballZ_file.py'
Dec 06 06:47:45 compute-0 sudo[204612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:45 compute-0 python3.9[204614]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:45 compute-0 sudo[204612]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:47:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:45.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:47:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:46.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:46 compute-0 sudo[204764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqxjznpmhugendfwzklkucninfnuwelu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003665.965515-1999-210978398591154/AnsiballZ_file.py'
Dec 06 06:47:46 compute-0 sudo[204764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:46 compute-0 python3.9[204766]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:46 compute-0 sudo[204764]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:46 compute-0 ceph-mon[74339]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:46 compute-0 sudo[204917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgjpiuvxqduqrcrldapckcvdvaisvwme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003666.6407151-1999-38337259661614/AnsiballZ_file.py'
Dec 06 06:47:46 compute-0 sudo[204917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:47 compute-0 python3.9[204919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:47 compute-0 sudo[204917]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:47 compute-0 sudo[205069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yltqkdvrmkeexqvpjvqpjlkvalvdfcuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003667.318697-1999-207795557659102/AnsiballZ_file.py'
Dec 06 06:47:47 compute-0 sudo[205069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:47 compute-0 python3.9[205071]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:47 compute-0 sudo[205069]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:47:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:47.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:47:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:48.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:48 compute-0 sudo[205221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgwrpknojgoxfmjtdjdpuwdkwjwyknxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003667.9870267-1999-163314611656060/AnsiballZ_file.py'
Dec 06 06:47:48 compute-0 sudo[205221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:48 compute-0 python3.9[205223]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:48 compute-0 sudo[205221]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:48 compute-0 sudo[205374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urgoxndrdbxkrmumcghzxzxnzzomkorm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003668.6463482-1999-54296801135448/AnsiballZ_file.py'
Dec 06 06:47:48 compute-0 sudo[205374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:49 compute-0 sudo[205377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:49 compute-0 sudo[205377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:49 compute-0 sudo[205377]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:49 compute-0 sudo[205402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:47:49 compute-0 sudo[205402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:47:49 compute-0 sudo[205402]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:49 compute-0 python3.9[205376]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:49 compute-0 sudo[205374]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:49 compute-0 ceph-mon[74339]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:49 compute-0 sudo[205576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gebveomctgsbktjwnfsdwerhuhcmclrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003669.3184023-1999-40843545521307/AnsiballZ_file.py'
Dec 06 06:47:49 compute-0 sudo[205576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:49 compute-0 python3.9[205578]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:49 compute-0 sudo[205576]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:49.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:50.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:50 compute-0 sudo[205728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvjzegujbnohudtpvrfkahlzconpklza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003669.9406164-1999-107144114670820/AnsiballZ_file.py'
Dec 06 06:47:50 compute-0 sudo[205728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:50 compute-0 python3.9[205730]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:50 compute-0 sudo[205728]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:50 compute-0 sudo[205881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygmaoawcyhtceduyelwjdrwvobpeoevj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003670.535178-1999-116319585669461/AnsiballZ_file.py'
Dec 06 06:47:50 compute-0 sudo[205881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:50 compute-0 python3.9[205883]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:50 compute-0 sudo[205881]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:51 compute-0 sudo[206033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyrtrloachasesbrmmemmszrogmslwkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003671.1133904-1999-23712366252432/AnsiballZ_file.py'
Dec 06 06:47:51 compute-0 sudo[206033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:51 compute-0 python3.9[206035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:51 compute-0 sudo[206033]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:51.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:47:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:52.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:47:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:47:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:53.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:47:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:54.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:54 compute-0 ceph-mon[74339]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:54 compute-0 sudo[206187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdrgqgxybwsjyugikhgedgpmyqnhzyrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003674.379847-2296-252342906343645/AnsiballZ_stat.py'
Dec 06 06:47:54 compute-0 sudo[206187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:54 compute-0 python3.9[206189]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:54 compute-0 sudo[206187]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:55 compute-0 sudo[206310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdingkdqkgkazrymutqxlokanzisqvvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003674.379847-2296-252342906343645/AnsiballZ_copy.py'
Dec 06 06:47:55 compute-0 sudo[206310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:55 compute-0 python3.9[206312]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003674.379847-2296-252342906343645/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:55 compute-0 sudo[206310]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:55 compute-0 ceph-mon[74339]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:55 compute-0 ceph-mon[74339]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:55 compute-0 ceph-mon[74339]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:55 compute-0 sudo[206462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brczdhjsddfwqaahnuxaxfnyxebfrrnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003675.5401313-2296-228134450911600/AnsiballZ_stat.py'
Dec 06 06:47:55 compute-0 sudo[206462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:55.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:55 compute-0 python3.9[206464]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:55 compute-0 sudo[206462]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:56.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:47:56 compute-0 sudo[206585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nonldtevupsqcqdbellbzibvxtpicgbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003675.5401313-2296-228134450911600/AnsiballZ_copy.py'
Dec 06 06:47:56 compute-0 sudo[206585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:56 compute-0 python3.9[206587]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003675.5401313-2296-228134450911600/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:56 compute-0 sudo[206585]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:56 compute-0 sudo[206738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyjmoktxhuxmvkxelooolikvwpdinkpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003676.712121-2296-70218513040915/AnsiballZ_stat.py'
Dec 06 06:47:56 compute-0 sudo[206738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:57 compute-0 python3.9[206740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:57 compute-0 sudo[206738]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:57 compute-0 ceph-mon[74339]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:57 compute-0 sudo[206861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqpzozrqfmwckwnjfudkmgqbrkxezpcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003676.712121-2296-70218513040915/AnsiballZ_copy.py'
Dec 06 06:47:57 compute-0 sudo[206861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:57 compute-0 python3.9[206863]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003676.712121-2296-70218513040915/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:57 compute-0 sudo[206861]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:57.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:47:58.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:47:58 compute-0 sudo[207030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoioiserfjlnuorzzmkoxbsxnckzmwrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003677.8964748-2296-207324272075058/AnsiballZ_stat.py'
Dec 06 06:47:58 compute-0 sudo[207030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:58 compute-0 podman[206987]: 2025-12-06 06:47:58.226843339 +0000 UTC m=+0.083979078 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:47:58 compute-0 python3.9[207035]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:58 compute-0 sudo[207030]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:58 compute-0 sudo[207161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smdiigqcwoznoyaclporjpyxnguzfquc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003677.8964748-2296-207324272075058/AnsiballZ_copy.py'
Dec 06 06:47:58 compute-0 sudo[207161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:58 compute-0 python3.9[207163]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003677.8964748-2296-207324272075058/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:47:58 compute-0 sudo[207161]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:58 compute-0 ceph-mon[74339]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:47:59 compute-0 sudo[207313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnskvoxbpcpvkiecqezhsbmyriwrkesg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003679.082193-2296-114087560277463/AnsiballZ_stat.py'
Dec 06 06:47:59 compute-0 sudo[207313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:59 compute-0 python3.9[207315]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:47:59 compute-0 sudo[207313]: pam_unix(sudo:session): session closed for user root
Dec 06 06:47:59 compute-0 sudo[207436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzwugeofhizwjsfhtmlkwejgymahriss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003679.082193-2296-114087560277463/AnsiballZ_copy.py'
Dec 06 06:47:59 compute-0 sudo[207436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:47:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:47:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:47:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:47:59.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:00 compute-0 python3.9[207438]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003679.082193-2296-114087560277463/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:00 compute-0 sudo[207436]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:00.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:00 compute-0 sudo[207589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozbglypwppymmyzzgegqbffrsjitbskz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003680.3159006-2296-83735144589968/AnsiballZ_stat.py'
Dec 06 06:48:00 compute-0 sudo[207589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:00 compute-0 python3.9[207591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:00 compute-0 sudo[207589]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:01 compute-0 sudo[207712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgrlbnatzhbruwilpbugsfqvnmfmgkax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003680.3159006-2296-83735144589968/AnsiballZ_copy.py'
Dec 06 06:48:01 compute-0 sudo[207712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:01 compute-0 python3.9[207714]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003680.3159006-2296-83735144589968/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:01 compute-0 sudo[207712]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:01 compute-0 sudo[207864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pstohdbcjbbrxkgewnqmpzufsijnzsob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003681.5289187-2296-235309030839080/AnsiballZ_stat.py'
Dec 06 06:48:01 compute-0 sudo[207864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:48:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:01.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:48:02 compute-0 python3.9[207866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:02 compute-0 sudo[207864]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:02.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:02 compute-0 sudo[207988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shopblrhfgpddvgmcgfzmmfflngdgouk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003681.5289187-2296-235309030839080/AnsiballZ_copy.py'
Dec 06 06:48:02 compute-0 sudo[207988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:02 compute-0 python3.9[207990]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003681.5289187-2296-235309030839080/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:02 compute-0 sudo[207988]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:02 compute-0 ceph-mon[74339]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:03 compute-0 sudo[208140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axequywdvgcpgeaqiyskvzsjbqtlwvou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003682.9404216-2296-57187853093808/AnsiballZ_stat.py'
Dec 06 06:48:03 compute-0 sudo[208140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:03 compute-0 python3.9[208142]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:03 compute-0 sudo[208140]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:03 compute-0 sudo[208263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncdgseylmdlybqbmdpwspqotaftbrrkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003682.9404216-2296-57187853093808/AnsiballZ_copy.py'
Dec 06 06:48:03 compute-0 sudo[208263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:48:03.794 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:48:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:48:03.796 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:48:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:48:03.797 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:48:03 compute-0 python3.9[208265]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003682.9404216-2296-57187853093808/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:03 compute-0 sudo[208263]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:03.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:04.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:04 compute-0 sudo[208416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znqcfokjwapzlmiwfhmvbbvosknttqgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003684.1257145-2296-6759969049756/AnsiballZ_stat.py'
Dec 06 06:48:04 compute-0 sudo[208416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:04 compute-0 ceph-mon[74339]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:04 compute-0 python3.9[208418]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:04 compute-0 sudo[208416]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:05 compute-0 sudo[208539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjxgedazzyxptgrpdeiytuspsmcwtjhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003684.1257145-2296-6759969049756/AnsiballZ_copy.py'
Dec 06 06:48:05 compute-0 sudo[208539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:05 compute-0 python3.9[208541]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003684.1257145-2296-6759969049756/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:05 compute-0 sudo[208539]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:05 compute-0 sudo[208691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqrvaxdfwoujzgqoqpvefynpuemthvtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003685.4281042-2296-66454793870724/AnsiballZ_stat.py'
Dec 06 06:48:05 compute-0 sudo[208691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:05 compute-0 python3.9[208693]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:05 compute-0 sudo[208691]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:05.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:06.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:06 compute-0 sudo[208814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exuuwgvutyfiipypkrxkyffgbksgjoje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003685.4281042-2296-66454793870724/AnsiballZ_copy.py'
Dec 06 06:48:06 compute-0 sudo[208814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:06 compute-0 ceph-mon[74339]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:06 compute-0 python3.9[208816]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003685.4281042-2296-66454793870724/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:06 compute-0 sudo[208814]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:06 compute-0 sudo[208967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohbdcmkvtzmgaqxiequdvlernlojhdxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003686.5349894-2296-174035807775467/AnsiballZ_stat.py'
Dec 06 06:48:06 compute-0 sudo[208967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:07 compute-0 python3.9[208969]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:07 compute-0 sudo[208967]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:07 compute-0 sudo[209090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzjpisbmlbemsmzdnypdclngmlpetcbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003686.5349894-2296-174035807775467/AnsiballZ_copy.py'
Dec 06 06:48:07 compute-0 sudo[209090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:07 compute-0 ceph-mon[74339]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:07 compute-0 python3.9[209092]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003686.5349894-2296-174035807775467/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:07 compute-0 sudo[209090]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:07.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:07 compute-0 sudo[209242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kduauqrullwtbxlziqktfmdkorrkzehe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003687.7251816-2296-84707601210644/AnsiballZ_stat.py'
Dec 06 06:48:07 compute-0 sudo[209242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:08 compute-0 python3.9[209244]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:08.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:08 compute-0 sudo[209242]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:08 compute-0 sudo[209366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjjxhmqmwrqgknvmdutaudlucxuvewib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003687.7251816-2296-84707601210644/AnsiballZ_copy.py'
Dec 06 06:48:08 compute-0 sudo[209366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:08 compute-0 python3.9[209368]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003687.7251816-2296-84707601210644/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:08 compute-0 sudo[209366]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:09 compute-0 sudo[209530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dadagtjlnhfuikhgqegqjdjykeddzpxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003688.8397846-2296-249478312355901/AnsiballZ_stat.py'
Dec 06 06:48:09 compute-0 sudo[209530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:09 compute-0 podman[209492]: 2025-12-06 06:48:09.131879965 +0000 UTC m=+0.067332746 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 06:48:09 compute-0 sudo[209540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:09 compute-0 sudo[209540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:09 compute-0 sudo[209540]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:09 compute-0 sudo[209565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:09 compute-0 sudo[209565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:09 compute-0 sudo[209565]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:09 compute-0 python3.9[209536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:09 compute-0 sudo[209530]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:09 compute-0 sudo[209710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txmgdkrizlnkrdgkgwjzaeakjbkobylp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003688.8397846-2296-249478312355901/AnsiballZ_copy.py'
Dec 06 06:48:09 compute-0 sudo[209710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:09 compute-0 python3.9[209712]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003688.8397846-2296-249478312355901/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:09 compute-0 sudo[209710]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:09 compute-0 ceph-mon[74339]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:10.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:10 compute-0 sudo[209862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqdhgwbspkupdzhizjhpoeenjxokwubs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003689.9694576-2296-130805843285393/AnsiballZ_stat.py'
Dec 06 06:48:10 compute-0 sudo[209862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:10 compute-0 python3.9[209864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:10 compute-0 sudo[209862]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:10 compute-0 sudo[209986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nimsmmotrrnfdcmycfrzyqjyuiphaoiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003689.9694576-2296-130805843285393/AnsiballZ_copy.py'
Dec 06 06:48:10 compute-0 sudo[209986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:10 compute-0 python3.9[209988]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003689.9694576-2296-130805843285393/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:11 compute-0 sudo[209986]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:11 compute-0 ceph-mon[74339]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:11 compute-0 python3.9[210138]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:48:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:11.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:12.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:12 compute-0 sudo[210292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzpqexbcexvssgvbxiwgmawhkgwusoyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003692.141225-2914-26085575151770/AnsiballZ_seboolean.py'
Dec 06 06:48:12 compute-0 sudo[210292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:12 compute-0 python3.9[210294]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 06 06:48:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:48:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:48:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:48:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:48:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:48:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:48:13 compute-0 ceph-mon[74339]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:48:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9329 writes, 35K keys, 9329 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9329 writes, 2392 syncs, 3.90 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 589 writes, 906 keys, 589 commit groups, 1.0 writes per commit group, ingest: 0.29 MB, 0.00 MB/s
                                           Interval WAL: 589 writes, 284 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 8.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 06:48:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:13.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:14 compute-0 sudo[210292]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:14.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:14 compute-0 ceph-mon[74339]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:15 compute-0 sudo[210449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knwgrpugthjzvxkmeujkeskkcantcmaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003695.089747-2938-32655825024505/AnsiballZ_copy.py'
Dec 06 06:48:15 compute-0 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 06 06:48:15 compute-0 sudo[210449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:15 compute-0 python3.9[210451]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:15 compute-0 sudo[210449]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:15 compute-0 sudo[210601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouhluafvmslfxtfzocthorgurlcjqeub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003695.7028012-2938-257470035321265/AnsiballZ_copy.py'
Dec 06 06:48:15 compute-0 sudo[210601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:15.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:16 compute-0 python3.9[210603]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:16 compute-0 sudo[210601]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:16.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:16 compute-0 sudo[210754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vygkfxofsthrvjdwipinkphqvtguogkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003696.277629-2938-102740097849037/AnsiballZ_copy.py'
Dec 06 06:48:16 compute-0 sudo[210754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:16 compute-0 python3.9[210756]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:16 compute-0 sudo[210754]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:17 compute-0 sudo[210906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxqjqtjlemkafpmgombreslchahigjgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003696.889166-2938-83561570628843/AnsiballZ_copy.py'
Dec 06 06:48:17 compute-0 sudo[210906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:17 compute-0 ceph-mon[74339]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:17 compute-0 python3.9[210908]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:17 compute-0 sudo[210906]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:17 compute-0 sudo[211058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otiqxkphdgmcyovejciolfvazoltceeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003697.524004-2938-161000342395237/AnsiballZ_copy.py'
Dec 06 06:48:17 compute-0 sudo[211058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:17 compute-0 python3.9[211060]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:17.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:17 compute-0 sudo[211058]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:18.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:48:18
Dec 06 06:48:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:48:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:48:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'backups', 'images', 'vms', '.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data']
Dec 06 06:48:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:48:18 compute-0 sudo[211211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrhvbavwhqvucvasrasiuslkjtywjstq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003698.2358954-3046-213586102111195/AnsiballZ_copy.py'
Dec 06 06:48:18 compute-0 sudo[211211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:18 compute-0 python3.9[211213]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:18 compute-0 sudo[211211]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:18 compute-0 ceph-mon[74339]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:19 compute-0 sudo[211363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhnpvcfwamxzbwmhslpexfiuusafguuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003698.857101-3046-32754806022284/AnsiballZ_copy.py'
Dec 06 06:48:19 compute-0 sudo[211363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:19 compute-0 python3.9[211365]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:19 compute-0 sudo[211363]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:19 compute-0 sudo[211515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwezjkelppsvnrvnwpjldshbehzmutys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003699.4622142-3046-143981667116287/AnsiballZ_copy.py'
Dec 06 06:48:19 compute-0 sudo[211515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:19 compute-0 python3.9[211517]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:19 compute-0 sudo[211515]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:19.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:20.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:20 compute-0 sudo[211667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfobyggxzizjhebqlhrfgsswmsltpkzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003700.0885432-3046-106798748585094/AnsiballZ_copy.py'
Dec 06 06:48:20 compute-0 sudo[211667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:20 compute-0 python3.9[211670]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:20 compute-0 sudo[211667]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:20 compute-0 ceph-mon[74339]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:21 compute-0 sudo[211820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzzbbdiyjvlvvgoewkhjeeljyskndlde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003700.7122626-3046-263820504513932/AnsiballZ_copy.py'
Dec 06 06:48:21 compute-0 sudo[211820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:21 compute-0 python3.9[211822]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:21 compute-0 sudo[211820]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:21 compute-0 sudo[211972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfncmytkotvxmclsdjyeglloluwvbkhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003701.4278078-3154-271321705450400/AnsiballZ_systemd.py'
Dec 06 06:48:21 compute-0 sudo[211972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:21.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:22 compute-0 python3.9[211974]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:48:22 compute-0 systemd[1]: Reloading.
Dec 06 06:48:22 compute-0 systemd-sysv-generator[212001]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:48:22 compute-0 systemd-rc-local-generator[211998]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:48:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:48:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:22.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:48:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 06:48:22 compute-0 sudo[212011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:22 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Dec 06 06:48:22 compute-0 sudo[212011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:22 compute-0 sudo[212011]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:22 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Dec 06 06:48:22 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 06 06:48:22 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 06 06:48:22 compute-0 systemd[1]: Starting libvirt logging daemon...
Dec 06 06:48:22 compute-0 sudo[212040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:48:22 compute-0 sudo[212040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:22 compute-0 sudo[212040]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:22 compute-0 sudo[212067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:22 compute-0 sudo[212067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:22 compute-0 sudo[212067]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:22 compute-0 systemd[1]: Started libvirt logging daemon.
Dec 06 06:48:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:22 compute-0 sudo[211972]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:22 compute-0 sudo[212092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:48:22 compute-0 sudo[212092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:48:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:48:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:23 compute-0 sudo[212285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sewqazhbsqlmryousrqynnyfaiftloiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003702.7696438-3154-170828722123344/AnsiballZ_systemd.py'
Dec 06 06:48:23 compute-0 sudo[212285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:48:23 compute-0 sudo[212092]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:48:23 compute-0 python3.9[212287]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:48:23 compute-0 systemd[1]: Reloading.
Dec 06 06:48:23 compute-0 systemd-sysv-generator[212332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:48:23 compute-0 systemd-rc-local-generator[212329]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:48:23 compute-0 ceph-mon[74339]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:23 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 06 06:48:23 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 06 06:48:23 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 06 06:48:23 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 06 06:48:23 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 06 06:48:23 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 06 06:48:23 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 06 06:48:23 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 06 06:48:23 compute-0 sudo[212285]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:48:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:48:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:48:23 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:48:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:48:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:23.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:24 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev dea8fd4e-994e-4e54-b930-8136c5e57a0b does not exist
Dec 06 06:48:24 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7ab14d12-284d-49c9-b213-89246eb6fa86 does not exist
Dec 06 06:48:24 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9d56d850-846a-4096-86fd-cbd5607063d4 does not exist
Dec 06 06:48:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:48:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:48:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:48:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:48:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:48:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:48:24 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 06 06:48:24 compute-0 sudo[212437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:24 compute-0 sudo[212437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:24 compute-0 sudo[212437]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:24 compute-0 sudo[212491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:48:24 compute-0 sudo[212491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:24 compute-0 sudo[212491]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:24.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:24 compute-0 sudo[212529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:24 compute-0 sudo[212529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:24 compute-0 sudo[212529]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:24 compute-0 sudo[212604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xroacmpamaqchorzzqnkdgalitwhujqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003703.981929-3154-149026116178998/AnsiballZ_systemd.py'
Dec 06 06:48:24 compute-0 sudo[212604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:24 compute-0 sudo[212580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:48:24 compute-0 sudo[212580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:24 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 06 06:48:24 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 06 06:48:24 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 06 06:48:24 compute-0 python3.9[212616]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:48:24 compute-0 systemd[1]: Reloading.
Dec 06 06:48:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:24 compute-0 systemd-sysv-generator[212702]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:48:24 compute-0 systemd-rc-local-generator[212699]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:48:24 compute-0 podman[212668]: 2025-12-06 06:48:24.694195645 +0000 UTC m=+0.056821050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:48:25 compute-0 podman[212668]: 2025-12-06 06:48:25.018057925 +0000 UTC m=+0.380683310 container create b378beaa2da9df09afc72c9853abf6168220351379598d085b1949ffa0d852f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 06:48:25 compute-0 systemd[1]: Started libpod-conmon-b378beaa2da9df09afc72c9853abf6168220351379598d085b1949ffa0d852f3.scope.
Dec 06 06:48:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:48:25 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 06 06:48:25 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 06 06:48:25 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 06 06:48:25 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 06 06:48:25 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 06 06:48:25 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 06 06:48:25 compute-0 sudo[212604]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:25 compute-0 podman[212668]: 2025-12-06 06:48:25.436851471 +0000 UTC m=+0.799476876 container init b378beaa2da9df09afc72c9853abf6168220351379598d085b1949ffa0d852f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:48:25 compute-0 podman[212668]: 2025-12-06 06:48:25.44947449 +0000 UTC m=+0.812099905 container start b378beaa2da9df09afc72c9853abf6168220351379598d085b1949ffa0d852f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:48:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:48:25 compute-0 systemd[1]: libpod-b378beaa2da9df09afc72c9853abf6168220351379598d085b1949ffa0d852f3.scope: Deactivated successfully.
Dec 06 06:48:25 compute-0 musing_visvesvaraya[212721]: 167 167
Dec 06 06:48:25 compute-0 conmon[212721]: conmon b378beaa2da9df09afc7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b378beaa2da9df09afc72c9853abf6168220351379598d085b1949ffa0d852f3.scope/container/memory.events
Dec 06 06:48:25 compute-0 podman[212668]: 2025-12-06 06:48:25.479250456 +0000 UTC m=+0.841875851 container attach b378beaa2da9df09afc72c9853abf6168220351379598d085b1949ffa0d852f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_visvesvaraya, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 06:48:25 compute-0 podman[212668]: 2025-12-06 06:48:25.479870205 +0000 UTC m=+0.842495580 container died b378beaa2da9df09afc72c9853abf6168220351379598d085b1949ffa0d852f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_visvesvaraya, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d628d2d840e390997c4f2d57ecd9b3e2c9ec9ae48b7c41e8a9caa9df2f5b35fc-merged.mount: Deactivated successfully.
Dec 06 06:48:25 compute-0 podman[212668]: 2025-12-06 06:48:25.5295793 +0000 UTC m=+0.892204685 container remove b378beaa2da9df09afc72c9853abf6168220351379598d085b1949ffa0d852f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 06:48:25 compute-0 systemd[1]: libpod-conmon-b378beaa2da9df09afc72c9853abf6168220351379598d085b1949ffa0d852f3.scope: Deactivated successfully.
Dec 06 06:48:25 compute-0 setroubleshoot[212445]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 09567b77-06d5-40ce-af9a-3d0fdc4cd1c2
Dec 06 06:48:25 compute-0 setroubleshoot[212445]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 06 06:48:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:48:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:48:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:48:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:48:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:48:25 compute-0 podman[212850]: 2025-12-06 06:48:25.716819662 +0000 UTC m=+0.051503501 container create 05960817984ad687e433d746ce6f67d44974bf6480e33774460c6465875c7c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 06:48:25 compute-0 systemd[1]: Started libpod-conmon-05960817984ad687e433d746ce6f67d44974bf6480e33774460c6465875c7c28.scope.
Dec 06 06:48:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95d8ec9381180a89905e60822a1038f27eec000d645a3103d653f7a70d1b06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95d8ec9381180a89905e60822a1038f27eec000d645a3103d653f7a70d1b06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95d8ec9381180a89905e60822a1038f27eec000d645a3103d653f7a70d1b06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:25 compute-0 podman[212850]: 2025-12-06 06:48:25.69783714 +0000 UTC m=+0.032520979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95d8ec9381180a89905e60822a1038f27eec000d645a3103d653f7a70d1b06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95d8ec9381180a89905e60822a1038f27eec000d645a3103d653f7a70d1b06/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:25 compute-0 podman[212850]: 2025-12-06 06:48:25.801543079 +0000 UTC m=+0.136226918 container init 05960817984ad687e433d746ce6f67d44974bf6480e33774460c6465875c7c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:48:25 compute-0 podman[212850]: 2025-12-06 06:48:25.812744116 +0000 UTC m=+0.147427935 container start 05960817984ad687e433d746ce6f67d44974bf6480e33774460c6465875c7c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:48:25 compute-0 podman[212850]: 2025-12-06 06:48:25.816481308 +0000 UTC m=+0.151165137 container attach 05960817984ad687e433d746ce6f67d44974bf6480e33774460c6465875c7c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:48:25 compute-0 sudo[212938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viwtziswolfjmpxskisdrkbvpyagdifc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003705.557223-3154-152642070550298/AnsiballZ_systemd.py'
Dec 06 06:48:25 compute-0 sudo[212938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:26.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:26 compute-0 python3.9[212940]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:48:26 compute-0 systemd[1]: Reloading.
Dec 06 06:48:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:26.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:26 compute-0 systemd-rc-local-generator[212966]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:48:26 compute-0 systemd-sysv-generator[212969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:48:26 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Dec 06 06:48:26 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 06 06:48:26 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 06 06:48:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:26 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 06 06:48:26 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 06 06:48:26 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 06 06:48:26 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 06 06:48:26 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 06 06:48:26 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 06 06:48:26 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 06 06:48:26 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 06 06:48:26 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 06 06:48:26 compute-0 sudo[212938]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:26 compute-0 nice_lehmann[212907]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:48:26 compute-0 nice_lehmann[212907]: --> relative data size: 1.0
Dec 06 06:48:26 compute-0 nice_lehmann[212907]: --> All data devices are unavailable
Dec 06 06:48:26 compute-0 systemd[1]: libpod-05960817984ad687e433d746ce6f67d44974bf6480e33774460c6465875c7c28.scope: Deactivated successfully.
Dec 06 06:48:26 compute-0 podman[212850]: 2025-12-06 06:48:26.783933536 +0000 UTC m=+1.118617355 container died 05960817984ad687e433d746ce6f67d44974bf6480e33774460c6465875c7c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 06:48:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f95d8ec9381180a89905e60822a1038f27eec000d645a3103d653f7a70d1b06-merged.mount: Deactivated successfully.
Dec 06 06:48:26 compute-0 podman[212850]: 2025-12-06 06:48:26.848804397 +0000 UTC m=+1.183488216 container remove 05960817984ad687e433d746ce6f67d44974bf6480e33774460c6465875c7c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 06:48:26 compute-0 systemd[1]: libpod-conmon-05960817984ad687e433d746ce6f67d44974bf6480e33774460c6465875c7c28.scope: Deactivated successfully.
Dec 06 06:48:26 compute-0 sudo[212580]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:26 compute-0 sudo[213080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:26 compute-0 sudo[213080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:26 compute-0 sudo[213080]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:27 compute-0 sudo[213137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:48:27 compute-0 sudo[213137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:27 compute-0 sudo[213137]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:27 compute-0 sudo[213178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:27 compute-0 sudo[213178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:27 compute-0 sudo[213178]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:27 compute-0 sudo[213232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:48:27 compute-0 sudo[213275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgesjyklfofyxnevxwjazxkzkqhxywga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003706.8721724-3154-257300041825902/AnsiballZ_systemd.py'
Dec 06 06:48:27 compute-0 sudo[213232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:27 compute-0 sudo[213275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:27 compute-0 python3.9[213279]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:48:27 compute-0 systemd[1]: Reloading.
Dec 06 06:48:27 compute-0 podman[213320]: 2025-12-06 06:48:27.528446997 +0000 UTC m=+0.054457059 container create eb2f64943f2f8a5d7e9c339601d5039e9a2b0b9e7ac6d092f8775db6c01dae24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_greider, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Dec 06 06:48:27 compute-0 systemd-sysv-generator[213360]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:48:27 compute-0 systemd-rc-local-generator[213355]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:48:27 compute-0 podman[213320]: 2025-12-06 06:48:27.506286781 +0000 UTC m=+0.032296853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:48:27 compute-0 systemd[1]: Started libpod-conmon-eb2f64943f2f8a5d7e9c339601d5039e9a2b0b9e7ac6d092f8775db6c01dae24.scope.
Dec 06 06:48:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:48:27 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Dec 06 06:48:27 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Dec 06 06:48:27 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 06 06:48:27 compute-0 podman[213320]: 2025-12-06 06:48:27.880554567 +0000 UTC m=+0.406564639 container init eb2f64943f2f8a5d7e9c339601d5039e9a2b0b9e7ac6d092f8775db6c01dae24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_greider, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 06:48:27 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 06 06:48:27 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 06 06:48:27 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 06 06:48:27 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 06 06:48:27 compute-0 podman[213320]: 2025-12-06 06:48:27.891889328 +0000 UTC m=+0.417899380 container start eb2f64943f2f8a5d7e9c339601d5039e9a2b0b9e7ac6d092f8775db6c01dae24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 06:48:27 compute-0 podman[213320]: 2025-12-06 06:48:27.895367762 +0000 UTC m=+0.421377814 container attach eb2f64943f2f8a5d7e9c339601d5039e9a2b0b9e7ac6d092f8775db6c01dae24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec 06 06:48:27 compute-0 silly_greider[213371]: 167 167
Dec 06 06:48:27 compute-0 systemd[1]: libpod-eb2f64943f2f8a5d7e9c339601d5039e9a2b0b9e7ac6d092f8775db6c01dae24.scope: Deactivated successfully.
Dec 06 06:48:27 compute-0 podman[213320]: 2025-12-06 06:48:27.899583269 +0000 UTC m=+0.425593311 container died eb2f64943f2f8a5d7e9c339601d5039e9a2b0b9e7ac6d092f8775db6c01dae24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 06:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d3ae37d2f11ff412b5f68a676885349fafaaf1f32e34bd153f57a683800fbcc-merged.mount: Deactivated successfully.
Dec 06 06:48:27 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 06 06:48:27 compute-0 podman[213320]: 2025-12-06 06:48:27.937896232 +0000 UTC m=+0.463906284 container remove eb2f64943f2f8a5d7e9c339601d5039e9a2b0b9e7ac6d092f8775db6c01dae24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_greider, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 06:48:27 compute-0 systemd[1]: libpod-conmon-eb2f64943f2f8a5d7e9c339601d5039e9a2b0b9e7ac6d092f8775db6c01dae24.scope: Deactivated successfully.
Dec 06 06:48:27 compute-0 sudo[213275]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:28.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:28 compute-0 podman[213442]: 2025-12-06 06:48:28.112043729 +0000 UTC m=+0.049046016 container create 5b964af624b06a77d5dd3d13efc492403865e464537f4a34aa773721eb93aa10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:48:28 compute-0 systemd[1]: Started libpod-conmon-5b964af624b06a77d5dd3d13efc492403865e464537f4a34aa773721eb93aa10.scope.
Dec 06 06:48:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19a9244ec90f83ab70f371edde97f98d65d2cc4ffb3da9194fda7f9dad7d66e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:28 compute-0 podman[213442]: 2025-12-06 06:48:28.092696157 +0000 UTC m=+0.029698464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19a9244ec90f83ab70f371edde97f98d65d2cc4ffb3da9194fda7f9dad7d66e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19a9244ec90f83ab70f371edde97f98d65d2cc4ffb3da9194fda7f9dad7d66e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d19a9244ec90f83ab70f371edde97f98d65d2cc4ffb3da9194fda7f9dad7d66e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:28 compute-0 podman[213442]: 2025-12-06 06:48:28.202776358 +0000 UTC m=+0.139778665 container init 5b964af624b06a77d5dd3d13efc492403865e464537f4a34aa773721eb93aa10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:48:28 compute-0 podman[213442]: 2025-12-06 06:48:28.211305684 +0000 UTC m=+0.148307981 container start 5b964af624b06a77d5dd3d13efc492403865e464537f4a34aa773721eb93aa10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:48:28 compute-0 podman[213442]: 2025-12-06 06:48:28.215258024 +0000 UTC m=+0.152260381 container attach 5b964af624b06a77d5dd3d13efc492403865e464537f4a34aa773721eb93aa10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 06:48:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:48:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:28.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:48:28 compute-0 podman[213464]: 2025-12-06 06:48:28.456708815 +0000 UTC m=+0.107954208 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:48:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]: {
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:     "0": [
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:         {
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             "devices": [
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "/dev/loop3"
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             ],
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             "lv_name": "ceph_lv0",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             "lv_size": "7511998464",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             "name": "ceph_lv0",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             "tags": {
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "ceph.cluster_name": "ceph",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "ceph.crush_device_class": "",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "ceph.encrypted": "0",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "ceph.osd_id": "0",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "ceph.type": "block",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:                 "ceph.vdo": "0"
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             },
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             "type": "block",
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:             "vg_name": "ceph_vg0"
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:         }
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]:     ]
Dec 06 06:48:29 compute-0 distracted_satoshi[213459]: }
Dec 06 06:48:29 compute-0 systemd[1]: libpod-5b964af624b06a77d5dd3d13efc492403865e464537f4a34aa773721eb93aa10.scope: Deactivated successfully.
Dec 06 06:48:29 compute-0 podman[213442]: 2025-12-06 06:48:29.070474355 +0000 UTC m=+1.007476662 container died 5b964af624b06a77d5dd3d13efc492403865e464537f4a34aa773721eb93aa10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d19a9244ec90f83ab70f371edde97f98d65d2cc4ffb3da9194fda7f9dad7d66e-merged.mount: Deactivated successfully.
Dec 06 06:48:29 compute-0 podman[213442]: 2025-12-06 06:48:29.146307215 +0000 UTC m=+1.083309522 container remove 5b964af624b06a77d5dd3d13efc492403865e464537f4a34aa773721eb93aa10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_satoshi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:48:29 compute-0 systemd[1]: libpod-conmon-5b964af624b06a77d5dd3d13efc492403865e464537f4a34aa773721eb93aa10.scope: Deactivated successfully.
Dec 06 06:48:29 compute-0 sudo[213232]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:29 compute-0 sudo[213507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:29 compute-0 sudo[213507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:29 compute-0 sudo[213507]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:29 compute-0 ceph-mon[74339]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:29 compute-0 sudo[213532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:29 compute-0 sudo[213532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:29 compute-0 sudo[213532]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:29 compute-0 sudo[213534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:48:29 compute-0 sudo[213534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:29 compute-0 sudo[213534]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:29 compute-0 sudo[213582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:29 compute-0 sudo[213584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:29 compute-0 sudo[213582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:29 compute-0 sudo[213584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:29 compute-0 sudo[213582]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:29 compute-0 sudo[213584]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:29 compute-0 sudo[213632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:48:29 compute-0 sudo[213632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:29 compute-0 podman[213699]: 2025-12-06 06:48:29.790878231 +0000 UTC m=+0.051337554 container create 180fc92f4e3d42f86baa712f5ec20c0ccaec25fe3c4773344e01c7393b87d527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:48:29 compute-0 systemd[1]: Started libpod-conmon-180fc92f4e3d42f86baa712f5ec20c0ccaec25fe3c4773344e01c7393b87d527.scope.
Dec 06 06:48:29 compute-0 podman[213699]: 2025-12-06 06:48:29.76624143 +0000 UTC m=+0.026700773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:48:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:48:29 compute-0 podman[213699]: 2025-12-06 06:48:29.902494388 +0000 UTC m=+0.162953731 container init 180fc92f4e3d42f86baa712f5ec20c0ccaec25fe3c4773344e01c7393b87d527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:48:29 compute-0 podman[213699]: 2025-12-06 06:48:29.91221976 +0000 UTC m=+0.172679103 container start 180fc92f4e3d42f86baa712f5ec20c0ccaec25fe3c4773344e01c7393b87d527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:48:29 compute-0 practical_snyder[213716]: 167 167
Dec 06 06:48:29 compute-0 systemd[1]: libpod-180fc92f4e3d42f86baa712f5ec20c0ccaec25fe3c4773344e01c7393b87d527.scope: Deactivated successfully.
Dec 06 06:48:29 compute-0 podman[213699]: 2025-12-06 06:48:29.929733758 +0000 UTC m=+0.190193081 container attach 180fc92f4e3d42f86baa712f5ec20c0ccaec25fe3c4773344e01c7393b87d527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 06:48:29 compute-0 podman[213699]: 2025-12-06 06:48:29.930223863 +0000 UTC m=+0.190683186 container died 180fc92f4e3d42f86baa712f5ec20c0ccaec25fe3c4773344e01c7393b87d527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-af075bf70f002d6c4052b4185926f8d97cc134ffba4a56a413d038e36a36935a-merged.mount: Deactivated successfully.
Dec 06 06:48:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:30.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:30 compute-0 podman[213699]: 2025-12-06 06:48:30.013811617 +0000 UTC m=+0.274270960 container remove 180fc92f4e3d42f86baa712f5ec20c0ccaec25fe3c4773344e01c7393b87d527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:48:30 compute-0 systemd[1]: libpod-conmon-180fc92f4e3d42f86baa712f5ec20c0ccaec25fe3c4773344e01c7393b87d527.scope: Deactivated successfully.
Dec 06 06:48:30 compute-0 podman[213839]: 2025-12-06 06:48:30.215133882 +0000 UTC m=+0.062647496 container create 74d4785780d48bf3559248d5811aa808e5248cb13e97da9bc3a21c25f5b3d88d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_zhukovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 06:48:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:30.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:30 compute-0 sudo[213879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kajclxyxolkxewstzrlvoutmgcjqcpbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003709.9061594-3265-228058305184117/AnsiballZ_file.py'
Dec 06 06:48:30 compute-0 sudo[213879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:30 compute-0 systemd[1]: Started libpod-conmon-74d4785780d48bf3559248d5811aa808e5248cb13e97da9bc3a21c25f5b3d88d.scope.
Dec 06 06:48:30 compute-0 podman[213839]: 2025-12-06 06:48:30.179623213 +0000 UTC m=+0.027136907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:48:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc61eb3172cc6565d54c54c24e98b7467640cae327e59829d4e8d5dcc6c6ba4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc61eb3172cc6565d54c54c24e98b7467640cae327e59829d4e8d5dcc6c6ba4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc61eb3172cc6565d54c54c24e98b7467640cae327e59829d4e8d5dcc6c6ba4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc61eb3172cc6565d54c54c24e98b7467640cae327e59829d4e8d5dcc6c6ba4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:48:30 compute-0 podman[213839]: 2025-12-06 06:48:30.310711606 +0000 UTC m=+0.158225230 container init 74d4785780d48bf3559248d5811aa808e5248cb13e97da9bc3a21c25f5b3d88d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_zhukovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:48:30 compute-0 podman[213839]: 2025-12-06 06:48:30.321893812 +0000 UTC m=+0.169407436 container start 74d4785780d48bf3559248d5811aa808e5248cb13e97da9bc3a21c25f5b3d88d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:48:30 compute-0 ceph-mon[74339]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:30 compute-0 ceph-mon[74339]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:30 compute-0 podman[213839]: 2025-12-06 06:48:30.340157041 +0000 UTC m=+0.187670685 container attach 74d4785780d48bf3559248d5811aa808e5248cb13e97da9bc3a21c25f5b3d88d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 06:48:30 compute-0 python3.9[213882]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:30 compute-0 sudo[213879]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:30 compute-0 sudo[214040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcmppcmtanvabosgsomslilwcwuexgyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003710.6811147-3289-95033354258688/AnsiballZ_find.py'
Dec 06 06:48:30 compute-0 sudo[214040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:31 compute-0 python3.9[214042]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 06:48:31 compute-0 fervent_zhukovsky[213885]: {
Dec 06 06:48:31 compute-0 fervent_zhukovsky[213885]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:48:31 compute-0 fervent_zhukovsky[213885]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:48:31 compute-0 fervent_zhukovsky[213885]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:48:31 compute-0 fervent_zhukovsky[213885]:         "osd_id": 0,
Dec 06 06:48:31 compute-0 fervent_zhukovsky[213885]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:48:31 compute-0 fervent_zhukovsky[213885]:         "type": "bluestore"
Dec 06 06:48:31 compute-0 fervent_zhukovsky[213885]:     }
Dec 06 06:48:31 compute-0 fervent_zhukovsky[213885]: }
Dec 06 06:48:31 compute-0 sudo[214040]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:31 compute-0 systemd[1]: libpod-74d4785780d48bf3559248d5811aa808e5248cb13e97da9bc3a21c25f5b3d88d.scope: Deactivated successfully.
Dec 06 06:48:31 compute-0 podman[213839]: 2025-12-06 06:48:31.204086495 +0000 UTC m=+1.051600109 container died 74d4785780d48bf3559248d5811aa808e5248cb13e97da9bc3a21c25f5b3d88d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:48:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc61eb3172cc6565d54c54c24e98b7467640cae327e59829d4e8d5dcc6c6ba4f-merged.mount: Deactivated successfully.
Dec 06 06:48:31 compute-0 podman[213839]: 2025-12-06 06:48:31.262799481 +0000 UTC m=+1.110313105 container remove 74d4785780d48bf3559248d5811aa808e5248cb13e97da9bc3a21c25f5b3d88d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_zhukovsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:48:31 compute-0 systemd[1]: libpod-conmon-74d4785780d48bf3559248d5811aa808e5248cb13e97da9bc3a21c25f5b3d88d.scope: Deactivated successfully.
Dec 06 06:48:31 compute-0 sudo[213632]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:48:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:48:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ed4bc008-e5fe-4b51-80af-d40394e4f0d8 does not exist
Dec 06 06:48:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6879e5a9-b372-4dac-8e7e-a0a942cca4b3 does not exist
Dec 06 06:48:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 469e199d-356e-44dc-b50d-2d49109cbb54 does not exist
Dec 06 06:48:31 compute-0 ceph-mon[74339]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:48:31 compute-0 sudo[214106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:31 compute-0 sudo[214106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:31 compute-0 sudo[214106]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:31 compute-0 sudo[214162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:48:31 compute-0 sudo[214162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:31 compute-0 sudo[214162]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:31 compute-0 sudo[214268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnafwtrhvzfspleuqhozlikqclxirgdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003711.3619308-3313-94569412601240/AnsiballZ_command.py'
Dec 06 06:48:31 compute-0 sudo[214268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:31 compute-0 python3.9[214270]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:48:31 compute-0 sudo[214268]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:48:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:32.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:48:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:32.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:32 compute-0 python3.9[214425]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 06:48:33 compute-0 ceph-mon[74339]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:33 compute-0 python3.9[214575]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:34.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:34 compute-0 python3.9[214696]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003713.0954616-3370-244906298148646/.source.xml follow=False _original_basename=secret.xml.j2 checksum=cbc4650861b6b585bb80bed115fd7c888a642f49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.003000089s ======
Dec 06 06:48:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:34.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000089s
Dec 06 06:48:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:34 compute-0 sudo[214847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhtrezkmrbfytguvuzumptmkyhvduwpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003714.4291358-3415-79888041446888/AnsiballZ_command.py'
Dec 06 06:48:34 compute-0 sudo[214847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:34 compute-0 ceph-mon[74339]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:34 compute-0 python3.9[214849]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 40a1bae4-cf76-5610-8dab-c75116dfe0bb
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:48:34 compute-0 polkitd[43458]: Registered Authentication Agent for unix-process:214851:398757 (system bus name :1.2918 [pkttyagent --process 214851 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 06 06:48:34 compute-0 polkitd[43458]: Unregistered Authentication Agent for unix-process:214851:398757 (system bus name :1.2918, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 06 06:48:35 compute-0 polkitd[43458]: Registered Authentication Agent for unix-process:214850:398756 (system bus name :1.2919 [pkttyagent --process 214850 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 06 06:48:35 compute-0 polkitd[43458]: Unregistered Authentication Agent for unix-process:214850:398756 (system bus name :1.2919, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 06 06:48:35 compute-0 sudo[214847]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:35 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 06 06:48:35 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.035s CPU time.
Dec 06 06:48:35 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 06 06:48:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:36.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:36 compute-0 python3.9[215011]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:36.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:36 compute-0 sudo[215162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeoeqiguareujuppcfinxcnpuappejdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003716.3620956-3463-242459020595646/AnsiballZ_command.py'
Dec 06 06:48:36 compute-0 sudo[215162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:36 compute-0 sudo[215162]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:37 compute-0 sudo[215315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oglxtqsxlawrbajycanxbynxncvalokj ; FSID=40a1bae4-cf76-5610-8dab-c75116dfe0bb KEY=AQAgzDNpAAAAABAARUe82jbSNft4GCMkj8z7BQ== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003717.0470734-3487-231242270718435/AnsiballZ_command.py'
Dec 06 06:48:37 compute-0 sudo[215315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:37 compute-0 polkitd[43458]: Registered Authentication Agent for unix-process:215318:399022 (system bus name :1.2922 [pkttyagent --process 215318 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 06 06:48:37 compute-0 polkitd[43458]: Unregistered Authentication Agent for unix-process:215318:399022 (system bus name :1.2922, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 06 06:48:37 compute-0 sudo[215315]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:37 compute-0 ceph-mon[74339]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:38.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:38 compute-0 sudo[215473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvywcjfobkhekwwobvlnvxvtrpziisqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003717.860788-3511-273451511188082/AnsiballZ_copy.py'
Dec 06 06:48:38 compute-0 sudo[215473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:38.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:38 compute-0 python3.9[215475]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:38 compute-0 sudo[215473]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:38 compute-0 ceph-mon[74339]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:39 compute-0 sudo[215626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vegftphqpfoyoxhgtpomdpgrkgfubtxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003718.608443-3535-236107340129265/AnsiballZ_stat.py'
Dec 06 06:48:39 compute-0 sudo[215626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:39 compute-0 python3.9[215628]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:39 compute-0 sudo[215626]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:39 compute-0 podman[215629]: 2025-12-06 06:48:39.412667894 +0000 UTC m=+0.065308356 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec 06 06:48:39 compute-0 sudo[215767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hexhtyuumpwlqrxllcittetrpotlpjhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003718.608443-3535-236107340129265/AnsiballZ_copy.py'
Dec 06 06:48:39 compute-0 sudo[215767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:39 compute-0 python3.9[215769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003718.608443-3535-236107340129265/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:39 compute-0 sudo[215767]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:40.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:40.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:40 compute-0 sudo[215920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndaovqfnjgjjjeslbzuxerufzspjpvch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003720.1625493-3583-85941758964405/AnsiballZ_file.py'
Dec 06 06:48:40 compute-0 sudo[215920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:40 compute-0 python3.9[215922]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:40 compute-0 sudo[215920]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:41 compute-0 sudo[216072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zazsdepvrptexrhquastwyrbmyxqvxlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003720.7898972-3607-165447729522845/AnsiballZ_stat.py'
Dec 06 06:48:41 compute-0 sudo[216072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:41 compute-0 python3.9[216074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:41 compute-0 sudo[216072]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:41 compute-0 sudo[216150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsvnsztpykgnkpcpetyifazbecmcyupz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003720.7898972-3607-165447729522845/AnsiballZ_file.py'
Dec 06 06:48:41 compute-0 sudo[216150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:41 compute-0 python3.9[216152]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:41 compute-0 sudo[216150]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:42.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:42.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:42 compute-0 sudo[216302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utdcgibrifpgxcmzofikrfximgqlexit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003721.9567034-3643-100316372247740/AnsiballZ_stat.py'
Dec 06 06:48:42 compute-0 sudo[216302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:42 compute-0 python3.9[216304]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:42 compute-0 sudo[216302]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:42 compute-0 sudo[216381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbcopsfltlmrsbsxremmfsputbiltpvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003721.9567034-3643-100316372247740/AnsiballZ_file.py'
Dec 06 06:48:42 compute-0 sudo[216381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:42 compute-0 ceph-mon[74339]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:42 compute-0 python3.9[216383]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.rvml9w5t recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:48:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:48:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:48:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:48:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:48:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:48:42 compute-0 sudo[216381]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:43 compute-0 sudo[216533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttnwusttteskrwxlsgcwnflajprflapn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003723.1105545-3679-260159341187468/AnsiballZ_stat.py'
Dec 06 06:48:43 compute-0 sudo[216533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:43 compute-0 python3.9[216535]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:43 compute-0 sudo[216533]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:43 compute-0 sudo[216611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmybhfwwyiephavfzkvbhbhhpouicvzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003723.1105545-3679-260159341187468/AnsiballZ_file.py'
Dec 06 06:48:43 compute-0 sudo[216611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:48:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:44.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:48:44 compute-0 python3.9[216613]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:44 compute-0 sudo[216611]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:48:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:44.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:48:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:44 compute-0 sudo[216764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anldbagozyfmnnpyhitnuwgferfjwbfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003724.394815-3718-174455377915120/AnsiballZ_command.py'
Dec 06 06:48:44 compute-0 sudo[216764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:44 compute-0 python3.9[216766]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:48:44 compute-0 sudo[216764]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:46.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:46.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:46 compute-0 ceph-mon[74339]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:47 compute-0 sudo[216918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgapudzmbmgmbqfywehrdxttcguhfsyk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003726.6163452-3742-43462555208138/AnsiballZ_edpm_nftables_from_files.py'
Dec 06 06:48:47 compute-0 sudo[216918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:47 compute-0 python3[216920]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 06:48:47 compute-0 sudo[216918]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:47 compute-0 sudo[217070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzkfnsyxgqtxwkgibqjkbdsicnuehmgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003727.4652908-3766-200057652834150/AnsiballZ_stat.py'
Dec 06 06:48:47 compute-0 sudo[217070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:47 compute-0 python3.9[217072]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:48 compute-0 sudo[217070]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:48.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:48 compute-0 ceph-mon[74339]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:48 compute-0 ceph-mon[74339]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:48.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:48 compute-0 sudo[217149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uccwyewttklmfazpwtsqjbxxeorjvwvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003727.4652908-3766-200057652834150/AnsiballZ_file.py'
Dec 06 06:48:48 compute-0 sudo[217149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:48 compute-0 python3.9[217151]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:48 compute-0 sudo[217149]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:49 compute-0 sudo[217301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spmfgvemmwppysocyzjaiicginnxfhxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003728.8047307-3802-111638351163682/AnsiballZ_stat.py'
Dec 06 06:48:49 compute-0 sudo[217301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:49 compute-0 python3.9[217303]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:49 compute-0 sudo[217301]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:49 compute-0 sudo[217321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:49 compute-0 sudo[217321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:49 compute-0 sudo[217321]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:49 compute-0 sudo[217359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:48:49 compute-0 sudo[217359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:48:49 compute-0 sudo[217359]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:49 compute-0 sudo[217429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbpsinofytepmsusclnltvaukvjeruuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003728.8047307-3802-111638351163682/AnsiballZ_file.py'
Dec 06 06:48:49 compute-0 sudo[217429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:49 compute-0 python3.9[217431]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:49 compute-0 sudo[217429]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:50.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:50.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:50 compute-0 sudo[217582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdqgqabmwhvfkvpaazbgncxsjoubsqll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003730.2093205-3838-273231470377885/AnsiballZ_stat.py'
Dec 06 06:48:50 compute-0 sudo[217582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:50 compute-0 python3.9[217584]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:50 compute-0 sudo[217582]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:50 compute-0 sudo[217660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfspfcoydbfjozwlesfzadewccwfirbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003730.2093205-3838-273231470377885/AnsiballZ_file.py'
Dec 06 06:48:50 compute-0 sudo[217660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:51 compute-0 python3.9[217662]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:51 compute-0 sudo[217660]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:52.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:52 compute-0 ceph-mon[74339]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:52.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:53 compute-0 sudo[217814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyatgqcydbhomrnztwgdukugidoygkxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003732.9446123-3874-265393101994498/AnsiballZ_stat.py'
Dec 06 06:48:53 compute-0 sudo[217814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:53 compute-0 ceph-mon[74339]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:53 compute-0 ceph-mon[74339]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:53 compute-0 python3.9[217816]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:53 compute-0 sudo[217814]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:53 compute-0 sudo[217892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhjfgdbfxzazhlcthiqwshjderzrsyjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003732.9446123-3874-265393101994498/AnsiballZ_file.py'
Dec 06 06:48:53 compute-0 sudo[217892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:53 compute-0 python3.9[217894]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:53 compute-0 sudo[217892]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:54.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:54.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:54 compute-0 sudo[218045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btzxlnmzjhyhbkkgzvmrrzutfbulbimw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003734.074006-3910-17290634651202/AnsiballZ_stat.py'
Dec 06 06:48:54 compute-0 sudo[218045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:54 compute-0 python3.9[218047]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:48:54 compute-0 sudo[218045]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:55 compute-0 sudo[218170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttmzicijsjvkjcjqbacstmdsgilranod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003734.074006-3910-17290634651202/AnsiballZ_copy.py'
Dec 06 06:48:55 compute-0 sudo[218170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:55 compute-0 python3.9[218172]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765003734.074006-3910-17290634651202/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:55 compute-0 sudo[218170]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:55 compute-0 sudo[218322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpjnoojtlkxttigszketrikdfkfqcfzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003735.584765-3955-177902979774849/AnsiballZ_file.py'
Dec 06 06:48:55 compute-0 sudo[218322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:56.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:56 compute-0 python3.9[218324]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:56 compute-0 sudo[218322]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:48:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:48:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:56.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:48:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:57 compute-0 ceph-mon[74339]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:57 compute-0 sudo[218475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytttcssshaeofuaofzxidbxgaohbrxsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003737.2092519-3979-71598836120222/AnsiballZ_command.py'
Dec 06 06:48:57 compute-0 sudo[218475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:57 compute-0 python3.9[218477]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:48:57 compute-0 sudo[218475]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:48:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:48:58.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:48:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:48:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:48:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:48:58.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:48:58 compute-0 ceph-mon[74339]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:58 compute-0 sudo[218631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlkpqyozgwjgpzoglfpiwmnjbuykrjgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003737.9556832-4003-134355588967952/AnsiballZ_blockinfile.py'
Dec 06 06:48:58 compute-0 sudo[218631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:58 compute-0 podman[218633]: 2025-12-06 06:48:58.646159702 +0000 UTC m=+0.128703431 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 06:48:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:48:58 compute-0 python3.9[218634]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:58 compute-0 sudo[218631]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:59 compute-0 sudo[218807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnuejipdmdmgltrdlycrzfulpwkjmoyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003738.9855359-4030-16713498113813/AnsiballZ_command.py'
Dec 06 06:48:59 compute-0 sudo[218807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:48:59 compute-0 python3.9[218809]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:48:59 compute-0 sudo[218807]: pam_unix(sudo:session): session closed for user root
Dec 06 06:48:59 compute-0 sudo[218960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bleaogbcpjxslhxxxpgaroihgqvzikye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003739.7043471-4054-163307553750549/AnsiballZ_stat.py'
Dec 06 06:48:59 compute-0 sudo[218960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:00.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:00 compute-0 python3.9[218962]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:49:00 compute-0 sudo[218960]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:00.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:02 compute-0 ceph-mon[74339]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:02.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:02.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:02 compute-0 sudo[219116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doblmbzvrtnusizflxuthuxqtcxlbkoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003742.3153536-4078-81235771683571/AnsiballZ_command.py'
Dec 06 06:49:02 compute-0 sudo[219116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:02 compute-0 python3.9[219118]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:49:02 compute-0 sudo[219116]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:03 compute-0 sudo[219271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-helacfyavtafeyxislvanmkoroxlwimt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003743.04834-4102-75835760638976/AnsiballZ_file.py'
Dec 06 06:49:03 compute-0 sudo[219271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:03 compute-0 python3.9[219273]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:03 compute-0 sudo[219271]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:03 compute-0 ceph-mon[74339]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:03 compute-0 ceph-mon[74339]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:49:03.796 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:49:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:49:03.799 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:49:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:49:03.799 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:49:04 compute-0 sudo[219423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouibpykujrleacxpgkmfqkzhogycbajy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003743.780771-4126-252006614815352/AnsiballZ_stat.py'
Dec 06 06:49:04 compute-0 sudo[219423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:49:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:04.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:49:04 compute-0 python3.9[219425]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:49:04 compute-0 sudo[219423]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:49:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:04.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:49:04 compute-0 sudo[219547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxbzmgpcoakvshvxmfygimkripgoqfxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003743.780771-4126-252006614815352/AnsiballZ_copy.py'
Dec 06 06:49:04 compute-0 sudo[219547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:04 compute-0 python3.9[219549]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003743.780771-4126-252006614815352/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:04 compute-0 sudo[219547]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:05 compute-0 ceph-mon[74339]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:05 compute-0 sudo[219699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwaigkgcpcqeuftdumrijkhzmhnxjgjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003745.1180408-4171-275036853956752/AnsiballZ_stat.py'
Dec 06 06:49:05 compute-0 sudo[219699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:05 compute-0 python3.9[219701]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:49:05 compute-0 sudo[219699]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:05 compute-0 sudo[219822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctbcqwvvuzuzjsxhilayaimsruwtqill ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003745.1180408-4171-275036853956752/AnsiballZ_copy.py'
Dec 06 06:49:05 compute-0 sudo[219822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:49:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:06.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:49:06 compute-0 python3.9[219824]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003745.1180408-4171-275036853956752/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:06 compute-0 sudo[219822]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:49:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:06.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:49:06 compute-0 sudo[219975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swytvrymruyycrighmkhbrjeqcmhmofm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003746.3822472-4216-143093995116762/AnsiballZ_stat.py'
Dec 06 06:49:06 compute-0 sudo[219975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:06 compute-0 python3.9[219977]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:49:06 compute-0 sudo[219975]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:07 compute-0 sudo[220098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdctfzprysqkizbxgvwbrwcrkfnqqdmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003746.3822472-4216-143093995116762/AnsiballZ_copy.py'
Dec 06 06:49:07 compute-0 sudo[220098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:07 compute-0 python3.9[220100]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003746.3822472-4216-143093995116762/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:07 compute-0 sudo[220098]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:07 compute-0 sudo[220250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnstqdcwexxghfyvvfarmdtigzraksry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003747.6475215-4261-175960853120461/AnsiballZ_systemd.py'
Dec 06 06:49:07 compute-0 sudo[220250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:08.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:08 compute-0 ceph-mon[74339]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:08 compute-0 python3.9[220252]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:49:08 compute-0 systemd[1]: Reloading.
Dec 06 06:49:08 compute-0 systemd-sysv-generator[220282]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:49:08 compute-0 systemd-rc-local-generator[220275]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:49:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:49:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:08.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:49:08 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Dec 06 06:49:08 compute-0 sudo[220250]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:09 compute-0 ceph-mon[74339]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:09 compute-0 sudo[220318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:09 compute-0 sudo[220318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:09 compute-0 sudo[220318]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:09 compute-0 sudo[220349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:09 compute-0 sudo[220349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:09 compute-0 podman[220342]: 2025-12-06 06:49:09.653184372 +0000 UTC m=+0.056102911 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true)
Dec 06 06:49:09 compute-0 sudo[220349]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:10 compute-0 sudo[220512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtxrvzsvfxkyxcrzubjckzfwegydpuvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003749.7792988-4285-254419944732464/AnsiballZ_systemd.py'
Dec 06 06:49:10 compute-0 sudo[220512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:10.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:10.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:10 compute-0 python3.9[220514]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 06:49:10 compute-0 systemd[1]: Reloading.
Dec 06 06:49:10 compute-0 systemd-rc-local-generator[220544]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:49:10 compute-0 systemd-sysv-generator[220548]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:49:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:10 compute-0 systemd[1]: Reloading.
Dec 06 06:49:10 compute-0 systemd-sysv-generator[220584]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:49:10 compute-0 systemd-rc-local-generator[220580]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:49:10 compute-0 sudo[220512]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:11 compute-0 ceph-mon[74339]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:11 compute-0 sshd-session[195393]: Connection closed by 192.168.122.30 port 38300
Dec 06 06:49:11 compute-0 sshd-session[195390]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:49:11 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec 06 06:49:11 compute-0 systemd[1]: session-50.scope: Consumed 1min 27.401s CPU time.
Dec 06 06:49:11 compute-0 systemd-logind[798]: Session 50 logged out. Waiting for processes to exit.
Dec 06 06:49:11 compute-0 systemd-logind[798]: Removed session 50.
Dec 06 06:49:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:12.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:12.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:49:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:49:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:49:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:49:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:49:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:49:13 compute-0 ceph-mon[74339]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:14.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:49:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:14.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:49:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:15 compute-0 ceph-mon[74339]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:49:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:16.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:49:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:16.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:16 compute-0 ceph-mon[74339]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:17 compute-0 sshd-session[220616]: Accepted publickey for zuul from 192.168.122.30 port 41484 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:49:17 compute-0 systemd-logind[798]: New session 51 of user zuul.
Dec 06 06:49:17 compute-0 systemd[1]: Started Session 51 of User zuul.
Dec 06 06:49:17 compute-0 sshd-session[220616]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:49:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:18.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:18 compute-0 python3.9[220769]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:49:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:49:18
Dec 06 06:49:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:49:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:49:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.mgr', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.rgw.root']
Dec 06 06:49:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:49:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:49:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:18.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:49:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:18 compute-0 ceph-mon[74339]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:19 compute-0 python3.9[220924]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:49:19 compute-0 network[220941]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:49:19 compute-0 network[220942]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:49:19 compute-0 network[220943]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:49:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:49:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:20.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:49:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:49:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:20.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:49:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:20 compute-0 ceph-mon[74339]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:49:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:22.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:49:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:22.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:49:23 compute-0 ceph-mon[74339]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:23 compute-0 sudo[221215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igexjhoxanctzqerwxjdqnopfbgkmvvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003763.701049-107-276987105677346/AnsiballZ_setup.py'
Dec 06 06:49:23 compute-0 sudo[221215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:24.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:24 compute-0 python3.9[221217]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 06:49:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:24.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:24 compute-0 sudo[221215]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:24 compute-0 ceph-mon[74339]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:24 compute-0 sudo[221300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouqtynsdpqiedygnxmjbglecvmsdfmiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003763.701049-107-276987105677346/AnsiballZ_dnf.py'
Dec 06 06:49:24 compute-0 sudo[221300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:25 compute-0 python3.9[221302]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:49:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:49:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:26.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:26.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:27 compute-0 ceph-mon[74339]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:28.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:49:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:28.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:49:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:28 compute-0 ceph-mon[74339]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:29 compute-0 podman[221306]: 2025-12-06 06:49:29.444074407 +0000 UTC m=+0.096223546 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 06:49:29 compute-0 sudo[221333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:29 compute-0 sudo[221333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:29 compute-0 sudo[221333]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:29 compute-0 sudo[221358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:29 compute-0 sudo[221358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:29 compute-0 sudo[221358]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:30.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:49:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:30.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:49:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:31 compute-0 sudo[221300]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:31 compute-0 sudo[221533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvuupuyjydvrrbyrunuodkyeyadzwifm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003771.1841779-143-76526439184566/AnsiballZ_stat.py'
Dec 06 06:49:31 compute-0 sudo[221533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:31 compute-0 ceph-mon[74339]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:31 compute-0 sudo[221536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:31 compute-0 sudo[221536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:31 compute-0 sudo[221536]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:31 compute-0 python3.9[221535]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:49:31 compute-0 sudo[221533]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:31 compute-0 sudo[221561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:49:31 compute-0 sudo[221561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:31 compute-0 sudo[221561]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:31 compute-0 sudo[221599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:31 compute-0 sudo[221599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:31 compute-0 sudo[221599]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:31 compute-0 sudo[221635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:49:31 compute-0 sudo[221635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:32.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:32.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:49:32 compute-0 sudo[221635]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 06:49:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 06:49:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 06:49:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 06:49:32 compute-0 sudo[221817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcjadeptizjdlgbksyuvjbmrvrearwwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003772.143811-173-160155194886524/AnsiballZ_command.py'
Dec 06 06:49:32 compute-0 sudo[221817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:32 compute-0 python3.9[221819]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:49:32 compute-0 sudo[221817]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:34.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:49:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:34.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:49:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:49:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 06:49:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 06:49:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:35 compute-0 sudo[221971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gueqwgzhiivlprwylcewkvxjcygcswba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003775.4925802-203-96543374439146/AnsiballZ_stat.py'
Dec 06 06:49:35 compute-0 sudo[221971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:35 compute-0 python3.9[221973]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:49:35 compute-0 sudo[221971]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:49:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:49:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:49:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:49:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:49:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:36 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4385beed-c6e7-4d81-ae01-fe243b516c71 does not exist
Dec 06 06:49:36 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3aced4fd-7079-436c-be24-781746f8090f does not exist
Dec 06 06:49:36 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c09b6a73-469d-421b-b319-930063ace6d0 does not exist
Dec 06 06:49:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:49:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:49:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:49:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:49:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:49:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:49:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:36.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:36 compute-0 sudo[221998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:36 compute-0 sudo[221998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:36 compute-0 sudo[221998]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:36 compute-0 sudo[222053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:49:36 compute-0 sudo[222053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:36 compute-0 sudo[222053]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:36 compute-0 ceph-mon[74339]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:36 compute-0 ceph-mon[74339]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:49:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:49:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:49:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:49:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:49:36 compute-0 sudo[222100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:36 compute-0 sudo[222100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:36 compute-0 sudo[222100]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:36.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:36 compute-0 sudo[222148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:49:36 compute-0 sudo[222148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:36 compute-0 sudo[222224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nryxyoohjakbaspguvclcwmeartmsbmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003776.1354754-227-167037394469523/AnsiballZ_command.py'
Dec 06 06:49:36 compute-0 sudo[222224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:36 compute-0 python3.9[222226]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:49:36 compute-0 sudo[222224]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:36 compute-0 podman[222269]: 2025-12-06 06:49:36.705559425 +0000 UTC m=+0.090891270 container create a28963be91d521862e4c29a2c54b605924e2402d840338ba6137854b92e87c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 06 06:49:36 compute-0 podman[222269]: 2025-12-06 06:49:36.642538651 +0000 UTC m=+0.027870506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:49:36 compute-0 systemd[1]: Started libpod-conmon-a28963be91d521862e4c29a2c54b605924e2402d840338ba6137854b92e87c92.scope.
Dec 06 06:49:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:49:36 compute-0 podman[222269]: 2025-12-06 06:49:36.832878515 +0000 UTC m=+0.218210370 container init a28963be91d521862e4c29a2c54b605924e2402d840338ba6137854b92e87c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 06:49:36 compute-0 podman[222269]: 2025-12-06 06:49:36.840347376 +0000 UTC m=+0.225679211 container start a28963be91d521862e4c29a2c54b605924e2402d840338ba6137854b92e87c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_heyrovsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 06:49:36 compute-0 podman[222269]: 2025-12-06 06:49:36.844786074 +0000 UTC m=+0.230117939 container attach a28963be91d521862e4c29a2c54b605924e2402d840338ba6137854b92e87c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_heyrovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 06:49:36 compute-0 bold_heyrovsky[222309]: 167 167
Dec 06 06:49:36 compute-0 systemd[1]: libpod-a28963be91d521862e4c29a2c54b605924e2402d840338ba6137854b92e87c92.scope: Deactivated successfully.
Dec 06 06:49:36 compute-0 podman[222269]: 2025-12-06 06:49:36.847964183 +0000 UTC m=+0.233296028 container died a28963be91d521862e4c29a2c54b605924e2402d840338ba6137854b92e87c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:49:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-6def263caf19ca92eaa2707f1b5f653159c0f002e21b7ed9634c79564bfa9277-merged.mount: Deactivated successfully.
Dec 06 06:49:36 compute-0 podman[222269]: 2025-12-06 06:49:36.979511574 +0000 UTC m=+0.364843409 container remove a28963be91d521862e4c29a2c54b605924e2402d840338ba6137854b92e87c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 06:49:36 compute-0 systemd[1]: libpod-conmon-a28963be91d521862e4c29a2c54b605924e2402d840338ba6137854b92e87c92.scope: Deactivated successfully.
Dec 06 06:49:37 compute-0 sudo[222454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnsacghpvyokmydvcivfwucyjbeehuuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003776.8116314-251-281145087994958/AnsiballZ_stat.py'
Dec 06 06:49:37 compute-0 sudo[222454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:37 compute-0 podman[222462]: 2025-12-06 06:49:37.148510216 +0000 UTC m=+0.028478164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:49:37 compute-0 podman[222462]: 2025-12-06 06:49:37.259878901 +0000 UTC m=+0.139846819 container create 9bb9453ae793b7d56a0aef774365864c07ddc2bfa824c7b9fc7526bb76d69424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:49:37 compute-0 python3.9[222456]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:49:37 compute-0 sudo[222454]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:37 compute-0 ceph-mon[74339]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:37 compute-0 systemd[1]: Started libpod-conmon-9bb9453ae793b7d56a0aef774365864c07ddc2bfa824c7b9fc7526bb76d69424.scope.
Dec 06 06:49:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceebdf3fee7e057ef03a7061aaf7deab69d05186690dc998b274e7368fe55b88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceebdf3fee7e057ef03a7061aaf7deab69d05186690dc998b274e7368fe55b88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceebdf3fee7e057ef03a7061aaf7deab69d05186690dc998b274e7368fe55b88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceebdf3fee7e057ef03a7061aaf7deab69d05186690dc998b274e7368fe55b88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceebdf3fee7e057ef03a7061aaf7deab69d05186690dc998b274e7368fe55b88/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:37 compute-0 podman[222462]: 2025-12-06 06:49:37.495837651 +0000 UTC m=+0.375805589 container init 9bb9453ae793b7d56a0aef774365864c07ddc2bfa824c7b9fc7526bb76d69424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 06:49:37 compute-0 podman[222462]: 2025-12-06 06:49:37.503256501 +0000 UTC m=+0.383224419 container start 9bb9453ae793b7d56a0aef774365864c07ddc2bfa824c7b9fc7526bb76d69424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:49:37 compute-0 podman[222462]: 2025-12-06 06:49:37.546649598 +0000 UTC m=+0.426617516 container attach 9bb9453ae793b7d56a0aef774365864c07ddc2bfa824c7b9fc7526bb76d69424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 06:49:37 compute-0 sudo[222603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkzjsnzxazdxdhbcsltfhuzlawiuyzel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003776.8116314-251-281145087994958/AnsiballZ_copy.py'
Dec 06 06:49:37 compute-0 sudo[222603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:37 compute-0 python3.9[222605]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003776.8116314-251-281145087994958/.source.iscsi _original_basename=.5wyl6dpw follow=False checksum=e37a401621c42ddea9fcdbcff71b82a17c8201cb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:37 compute-0 sudo[222603]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:38.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:38.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:38 compute-0 eloquent_hoover[222496]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:49:38 compute-0 eloquent_hoover[222496]: --> relative data size: 1.0
Dec 06 06:49:38 compute-0 eloquent_hoover[222496]: --> All data devices are unavailable
Dec 06 06:49:38 compute-0 systemd[1]: libpod-9bb9453ae793b7d56a0aef774365864c07ddc2bfa824c7b9fc7526bb76d69424.scope: Deactivated successfully.
Dec 06 06:49:38 compute-0 podman[222462]: 2025-12-06 06:49:38.425590643 +0000 UTC m=+1.305558561 container died 9bb9453ae793b7d56a0aef774365864c07ddc2bfa824c7b9fc7526bb76d69424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceebdf3fee7e057ef03a7061aaf7deab69d05186690dc998b274e7368fe55b88-merged.mount: Deactivated successfully.
Dec 06 06:49:38 compute-0 sudo[222778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdkhtdyaqlrgulkgghwmuzbpcrvbpzyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003778.2075973-296-83654687402231/AnsiballZ_file.py'
Dec 06 06:49:38 compute-0 sudo[222778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:38 compute-0 podman[222462]: 2025-12-06 06:49:38.716833567 +0000 UTC m=+1.596801485 container remove 9bb9453ae793b7d56a0aef774365864c07ddc2bfa824c7b9fc7526bb76d69424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:49:38 compute-0 systemd[1]: libpod-conmon-9bb9453ae793b7d56a0aef774365864c07ddc2bfa824c7b9fc7526bb76d69424.scope: Deactivated successfully.
Dec 06 06:49:38 compute-0 sudo[222148]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:38 compute-0 sudo[222781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:38 compute-0 sudo[222781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:38 compute-0 sudo[222781]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:38 compute-0 python3.9[222780]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:38 compute-0 sudo[222778]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:38 compute-0 sudo[222806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:49:38 compute-0 sudo[222806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:38 compute-0 sudo[222806]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:38 compute-0 sudo[222831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:38 compute-0 sudo[222831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:38 compute-0 sudo[222831]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:38 compute-0 ceph-mon[74339]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:38 compute-0 sudo[222880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:49:38 compute-0 sudo[222880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:39 compute-0 podman[223004]: 2025-12-06 06:49:39.310362759 +0000 UTC m=+0.051760896 container create 2e0c0e1c9c06d9d10520df7349752cad539d5de84afa3bf8a448b9533359da64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 06:49:39 compute-0 systemd[1]: Started libpod-conmon-2e0c0e1c9c06d9d10520df7349752cad539d5de84afa3bf8a448b9533359da64.scope.
Dec 06 06:49:39 compute-0 podman[223004]: 2025-12-06 06:49:39.288611504 +0000 UTC m=+0.030009661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:49:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:49:39 compute-0 sudo[223093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuecvmlnjrutlqbvbcdfbwwkqvjohuia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003779.005456-320-18654534942569/AnsiballZ_lineinfile.py'
Dec 06 06:49:39 compute-0 sudo[223093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:39 compute-0 podman[223004]: 2025-12-06 06:49:39.44513538 +0000 UTC m=+0.186533547 container init 2e0c0e1c9c06d9d10520df7349752cad539d5de84afa3bf8a448b9533359da64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 06:49:39 compute-0 podman[223004]: 2025-12-06 06:49:39.453090677 +0000 UTC m=+0.194488814 container start 2e0c0e1c9c06d9d10520df7349752cad539d5de84afa3bf8a448b9533359da64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:49:39 compute-0 distracted_northcutt[223062]: 167 167
Dec 06 06:49:39 compute-0 podman[223004]: 2025-12-06 06:49:39.457714551 +0000 UTC m=+0.199112688 container attach 2e0c0e1c9c06d9d10520df7349752cad539d5de84afa3bf8a448b9533359da64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 06:49:39 compute-0 systemd[1]: libpod-2e0c0e1c9c06d9d10520df7349752cad539d5de84afa3bf8a448b9533359da64.scope: Deactivated successfully.
Dec 06 06:49:39 compute-0 podman[223004]: 2025-12-06 06:49:39.45899355 +0000 UTC m=+0.200391697 container died 2e0c0e1c9c06d9d10520df7349752cad539d5de84afa3bf8a448b9533359da64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:49:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-68a519f9f7ee37ce39a126d9d860a4560e99879a811b2a2289f5c19a17c29f9d-merged.mount: Deactivated successfully.
Dec 06 06:49:39 compute-0 podman[223004]: 2025-12-06 06:49:39.504285215 +0000 UTC m=+0.245683352 container remove 2e0c0e1c9c06d9d10520df7349752cad539d5de84afa3bf8a448b9533359da64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_northcutt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:49:39 compute-0 systemd[1]: libpod-conmon-2e0c0e1c9c06d9d10520df7349752cad539d5de84afa3bf8a448b9533359da64.scope: Deactivated successfully.
Dec 06 06:49:39 compute-0 python3.9[223095]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:39 compute-0 sudo[223093]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:39 compute-0 podman[223116]: 2025-12-06 06:49:39.668461088 +0000 UTC m=+0.041833069 container create 0fa1554458bb55cb7171030e199ba4f8f30869c96ce97c8dad9e79e4c3d57a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_curran, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:49:39 compute-0 systemd[1]: Started libpod-conmon-0fa1554458bb55cb7171030e199ba4f8f30869c96ce97c8dad9e79e4c3d57a53.scope.
Dec 06 06:49:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53390b9d3fc19749e33b0f70a1ba13410dc4dc87df36559ebd89e0aea28c92a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53390b9d3fc19749e33b0f70a1ba13410dc4dc87df36559ebd89e0aea28c92a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53390b9d3fc19749e33b0f70a1ba13410dc4dc87df36559ebd89e0aea28c92a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53390b9d3fc19749e33b0f70a1ba13410dc4dc87df36559ebd89e0aea28c92a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:39 compute-0 podman[223116]: 2025-12-06 06:49:39.650232422 +0000 UTC m=+0.023604433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:49:39 compute-0 podman[223116]: 2025-12-06 06:49:39.826914193 +0000 UTC m=+0.200286194 container init 0fa1554458bb55cb7171030e199ba4f8f30869c96ce97c8dad9e79e4c3d57a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_curran, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:49:39 compute-0 podman[223116]: 2025-12-06 06:49:39.834988024 +0000 UTC m=+0.208360005 container start 0fa1554458bb55cb7171030e199ba4f8f30869c96ce97c8dad9e79e4c3d57a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 06:49:39 compute-0 podman[223116]: 2025-12-06 06:49:39.847900584 +0000 UTC m=+0.221272585 container attach 0fa1554458bb55cb7171030e199ba4f8f30869c96ce97c8dad9e79e4c3d57a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 06:49:39 compute-0 podman[223145]: 2025-12-06 06:49:39.866928775 +0000 UTC m=+0.165199906 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec 06 06:49:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:49:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:40.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:49:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:40.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:40 compute-0 sudo[223307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuknncmarepjhrxcbwhlczicwwjnpeqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003779.892956-347-224495194983441/AnsiballZ_systemd_service.py'
Dec 06 06:49:40 compute-0 sudo[223307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:40 compute-0 determined_curran[223158]: {
Dec 06 06:49:40 compute-0 determined_curran[223158]:     "0": [
Dec 06 06:49:40 compute-0 determined_curran[223158]:         {
Dec 06 06:49:40 compute-0 determined_curran[223158]:             "devices": [
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "/dev/loop3"
Dec 06 06:49:40 compute-0 determined_curran[223158]:             ],
Dec 06 06:49:40 compute-0 determined_curran[223158]:             "lv_name": "ceph_lv0",
Dec 06 06:49:40 compute-0 determined_curran[223158]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:49:40 compute-0 determined_curran[223158]:             "lv_size": "7511998464",
Dec 06 06:49:40 compute-0 determined_curran[223158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:49:40 compute-0 determined_curran[223158]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:49:40 compute-0 determined_curran[223158]:             "name": "ceph_lv0",
Dec 06 06:49:40 compute-0 determined_curran[223158]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:49:40 compute-0 determined_curran[223158]:             "tags": {
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "ceph.cluster_name": "ceph",
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "ceph.crush_device_class": "",
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "ceph.encrypted": "0",
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "ceph.osd_id": "0",
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "ceph.type": "block",
Dec 06 06:49:40 compute-0 determined_curran[223158]:                 "ceph.vdo": "0"
Dec 06 06:49:40 compute-0 determined_curran[223158]:             },
Dec 06 06:49:40 compute-0 determined_curran[223158]:             "type": "block",
Dec 06 06:49:40 compute-0 determined_curran[223158]:             "vg_name": "ceph_vg0"
Dec 06 06:49:40 compute-0 determined_curran[223158]:         }
Dec 06 06:49:40 compute-0 determined_curran[223158]:     ]
Dec 06 06:49:40 compute-0 determined_curran[223158]: }
Dec 06 06:49:40 compute-0 systemd[1]: libpod-0fa1554458bb55cb7171030e199ba4f8f30869c96ce97c8dad9e79e4c3d57a53.scope: Deactivated successfully.
Dec 06 06:49:40 compute-0 podman[223116]: 2025-12-06 06:49:40.627585091 +0000 UTC m=+1.000957072 container died 0fa1554458bb55cb7171030e199ba4f8f30869c96ce97c8dad9e79e4c3d57a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 06:49:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:40 compute-0 python3.9[223309]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:49:40 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 06 06:49:40 compute-0 sudo[223307]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-53390b9d3fc19749e33b0f70a1ba13410dc4dc87df36559ebd89e0aea28c92a6-merged.mount: Deactivated successfully.
Dec 06 06:49:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:41 compute-0 podman[223116]: 2025-12-06 06:49:41.293469487 +0000 UTC m=+1.666841468 container remove 0fa1554458bb55cb7171030e199ba4f8f30869c96ce97c8dad9e79e4c3d57a53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_curran, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 06:49:41 compute-0 ceph-mon[74339]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:41 compute-0 sudo[222880]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:41 compute-0 sudo[223483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wegfmrtghbsvppvskmqulbeutpcrvfts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003781.0363677-371-186775959023505/AnsiballZ_systemd_service.py'
Dec 06 06:49:41 compute-0 sudo[223483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:41 compute-0 sudo[223478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:41 compute-0 sudo[223478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:41 compute-0 sudo[223478]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:41 compute-0 systemd[1]: libpod-conmon-0fa1554458bb55cb7171030e199ba4f8f30869c96ce97c8dad9e79e4c3d57a53.scope: Deactivated successfully.
Dec 06 06:49:41 compute-0 sudo[223507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:49:41 compute-0 sudo[223507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:41 compute-0 sudo[223507]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:41 compute-0 sudo[223532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:41 compute-0 sudo[223532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:41 compute-0 sudo[223532]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:41 compute-0 sudo[223557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:49:41 compute-0 sudo[223557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:41 compute-0 python3.9[223502]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:49:41 compute-0 systemd[1]: Reloading.
Dec 06 06:49:41 compute-0 systemd-rc-local-generator[223642]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:49:41 compute-0 systemd-sysv-generator[223651]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:49:41 compute-0 podman[223657]: 2025-12-06 06:49:41.881385275 +0000 UTC m=+0.044471960 container create 587450e51d2dac8b5a31c1e2a6c64c9f6a0dd695df26a656c03ee146f47ab91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_kilby, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:49:41 compute-0 podman[223657]: 2025-12-06 06:49:41.860214238 +0000 UTC m=+0.023300953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:49:42 compute-0 systemd[1]: Started libpod-conmon-587450e51d2dac8b5a31c1e2a6c64c9f6a0dd695df26a656c03ee146f47ab91f.scope.
Dec 06 06:49:42 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 06 06:49:42 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 06 06:49:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:49:42 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec 06 06:49:42 compute-0 systemd[1]: Started Open-iSCSI.
Dec 06 06:49:42 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 06 06:49:42 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 06 06:49:42 compute-0 sudo[223483]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:42.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:42 compute-0 podman[223657]: 2025-12-06 06:49:42.230719902 +0000 UTC m=+0.393806627 container init 587450e51d2dac8b5a31c1e2a6c64c9f6a0dd695df26a656c03ee146f47ab91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_kilby, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:49:42 compute-0 podman[223657]: 2025-12-06 06:49:42.240700511 +0000 UTC m=+0.403787206 container start 587450e51d2dac8b5a31c1e2a6c64c9f6a0dd695df26a656c03ee146f47ab91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_kilby, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 06:49:42 compute-0 gifted_kilby[223675]: 167 167
Dec 06 06:49:42 compute-0 systemd[1]: libpod-587450e51d2dac8b5a31c1e2a6c64c9f6a0dd695df26a656c03ee146f47ab91f.scope: Deactivated successfully.
Dec 06 06:49:42 compute-0 podman[223657]: 2025-12-06 06:49:42.260285919 +0000 UTC m=+0.423372614 container attach 587450e51d2dac8b5a31c1e2a6c64c9f6a0dd695df26a656c03ee146f47ab91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_kilby, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:49:42 compute-0 podman[223657]: 2025-12-06 06:49:42.260720612 +0000 UTC m=+0.423807307 container died 587450e51d2dac8b5a31c1e2a6c64c9f6a0dd695df26a656c03ee146f47ab91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_kilby, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:49:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:42.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a0d76fb9fbd5d03ee1f1129c784319f88f12f0579e5ba35d06afa1e28f6452c-merged.mount: Deactivated successfully.
Dec 06 06:49:42 compute-0 podman[223657]: 2025-12-06 06:49:42.381985255 +0000 UTC m=+0.545071950 container remove 587450e51d2dac8b5a31c1e2a6c64c9f6a0dd695df26a656c03ee146f47ab91f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_kilby, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:49:42 compute-0 systemd[1]: libpod-conmon-587450e51d2dac8b5a31c1e2a6c64c9f6a0dd695df26a656c03ee146f47ab91f.scope: Deactivated successfully.
Dec 06 06:49:42 compute-0 podman[223735]: 2025-12-06 06:49:42.568347275 +0000 UTC m=+0.069028092 container create 057c791fb7fccacc13ad33f8fd204f6e9001c4e251b81df359412ebc73e20662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 06:49:42 compute-0 podman[223735]: 2025-12-06 06:49:42.525845807 +0000 UTC m=+0.026526644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:49:42 compute-0 systemd[1]: Started libpod-conmon-057c791fb7fccacc13ad33f8fd204f6e9001c4e251b81df359412ebc73e20662.scope.
Dec 06 06:49:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e5939999638c61996228bdfaf755849071e7022b7cd6264d12d2e72a14a8ca6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e5939999638c61996228bdfaf755849071e7022b7cd6264d12d2e72a14a8ca6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e5939999638c61996228bdfaf755849071e7022b7cd6264d12d2e72a14a8ca6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e5939999638c61996228bdfaf755849071e7022b7cd6264d12d2e72a14a8ca6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:49:42 compute-0 podman[223735]: 2025-12-06 06:49:42.713769507 +0000 UTC m=+0.214450344 container init 057c791fb7fccacc13ad33f8fd204f6e9001c4e251b81df359412ebc73e20662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:49:42 compute-0 podman[223735]: 2025-12-06 06:49:42.730741303 +0000 UTC m=+0.231422120 container start 057c791fb7fccacc13ad33f8fd204f6e9001c4e251b81df359412ebc73e20662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 06:49:42 compute-0 podman[223735]: 2025-12-06 06:49:42.734790839 +0000 UTC m=+0.235471676 container attach 057c791fb7fccacc13ad33f8fd204f6e9001c4e251b81df359412ebc73e20662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shirley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:49:42 compute-0 sudo[223881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txwvjzunsrfsqqfclajiceqaqyztxbbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003782.6169913-404-246196979054003/AnsiballZ_service_facts.py'
Dec 06 06:49:42 compute-0 sudo[223881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:49:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:49:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:49:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:49:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:49:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:49:43 compute-0 python3.9[223883]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:49:43 compute-0 network[223900]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:49:43 compute-0 network[223901]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:49:43 compute-0 network[223902]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:49:43 compute-0 romantic_shirley[223802]: {
Dec 06 06:49:43 compute-0 romantic_shirley[223802]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:49:43 compute-0 romantic_shirley[223802]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:49:43 compute-0 romantic_shirley[223802]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:49:43 compute-0 romantic_shirley[223802]:         "osd_id": 0,
Dec 06 06:49:43 compute-0 romantic_shirley[223802]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:49:43 compute-0 romantic_shirley[223802]:         "type": "bluestore"
Dec 06 06:49:43 compute-0 romantic_shirley[223802]:     }
Dec 06 06:49:43 compute-0 romantic_shirley[223802]: }
Dec 06 06:49:43 compute-0 podman[223735]: 2025-12-06 06:49:43.612542428 +0000 UTC m=+1.113223245 container died 057c791fb7fccacc13ad33f8fd204f6e9001c4e251b81df359412ebc73e20662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shirley, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:49:43 compute-0 ceph-mon[74339]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:43 compute-0 systemd[1]: libpod-057c791fb7fccacc13ad33f8fd204f6e9001c4e251b81df359412ebc73e20662.scope: Deactivated successfully.
Dec 06 06:49:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e5939999638c61996228bdfaf755849071e7022b7cd6264d12d2e72a14a8ca6-merged.mount: Deactivated successfully.
Dec 06 06:49:44 compute-0 podman[223735]: 2025-12-06 06:49:44.019689328 +0000 UTC m=+1.520370155 container remove 057c791fb7fccacc13ad33f8fd204f6e9001c4e251b81df359412ebc73e20662 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_shirley, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 06:49:44 compute-0 systemd[1]: libpod-conmon-057c791fb7fccacc13ad33f8fd204f6e9001c4e251b81df359412ebc73e20662.scope: Deactivated successfully.
Dec 06 06:49:44 compute-0 sudo[223557]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:49:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:49:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:44.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:49:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:49:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:44.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 832e2dd5-4515-4d6f-8f36-3e676063178a does not exist
Dec 06 06:49:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8de5c979-85cd-4568-ab51-b46bd6805313 does not exist
Dec 06 06:49:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6cf299fc-96a9-47ae-811d-f649bb1c8924 does not exist
Dec 06 06:49:44 compute-0 sudo[223985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:44 compute-0 sudo[223985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:44 compute-0 sudo[223985]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:45 compute-0 sudo[224011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:49:45 compute-0 sudo[224011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:45 compute-0 sudo[224011]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:45 compute-0 ceph-mon[74339]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:49:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:46.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:46.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:47 compute-0 ceph-mon[74339]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:47 compute-0 sudo[223881]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:47 compute-0 sudo[224254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjgjvpmltleeebwuhfczykczohylpdnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003787.6516218-434-281189181687327/AnsiballZ_file.py'
Dec 06 06:49:47 compute-0 sudo[224254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:48 compute-0 python3.9[224256]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 06:49:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:48.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:48 compute-0 sudo[224254]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:48.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:48 compute-0 sudo[224407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfjdmmtziiulzhbhpgjlhvcqtqcprsvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003788.4200857-458-249196100637576/AnsiballZ_modprobe.py'
Dec 06 06:49:48 compute-0 sudo[224407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:49 compute-0 python3.9[224409]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 06 06:49:49 compute-0 sudo[224407]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:49 compute-0 sudo[224563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egzuhtsmwmtbwfafovcehnqnwlmyrhwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003789.2481158-482-94267281358313/AnsiballZ_stat.py'
Dec 06 06:49:49 compute-0 sudo[224563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:49 compute-0 python3.9[224565]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:49:49 compute-0 sudo[224563]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:49 compute-0 ceph-mon[74339]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:49 compute-0 sudo[224607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:49 compute-0 sudo[224607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:49 compute-0 sudo[224607]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:49 compute-0 sudo[224655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:49:49 compute-0 sudo[224655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:49:49 compute-0 sudo[224655]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:50 compute-0 sudo[224736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytzbrxcgnhtgdbwamyhjxycztikvlyii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003789.2481158-482-94267281358313/AnsiballZ_copy.py'
Dec 06 06:49:50 compute-0 sudo[224736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:50.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:50.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:50 compute-0 python3.9[224738]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003789.2481158-482-94267281358313/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:50 compute-0 sudo[224736]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:50 compute-0 ceph-mon[74339]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:50 compute-0 sudo[224889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdoshdqarynppevqzzppxvgfizaavkcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003790.7134302-530-75267654869181/AnsiballZ_lineinfile.py'
Dec 06 06:49:50 compute-0 sudo[224889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:51 compute-0 python3.9[224891]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:51 compute-0 sudo[224889]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:52 compute-0 sudo[225041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrzuavgmwsurlkmxoveffknahhvgoumo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003791.435417-554-168422491988116/AnsiballZ_systemd.py'
Dec 06 06:49:52 compute-0 sudo[225041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:49:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:52.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:49:52 compute-0 python3.9[225043]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:49:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:52.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:52 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 06 06:49:52 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 06 06:49:52 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 06 06:49:52 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 06 06:49:52 compute-0 systemd[1]: Finished Load Kernel Modules.
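The Ansible tasks logged above (`community.general.modprobe`, the `modules-load.d` drop-in copy, the `/etc/modules` lineinfile, and the `systemd-modules-load.service` restart) amount to the following manual sequence. This is a sketch for orientation only — it writes into a scratch directory instead of `/etc` so it is safe to run, and the privileged steps are shown as comments:

```shell
# Sketch of the dm-multipath persistence steps performed by the playbook above.
# Uses a scratch root instead of /etc so the example is harmless to execute.
root=$(mktemp -d)
mkdir -p "$root/modules-load.d"

# 1. Load the module immediately (modprobe task, state=present):
#      modprobe dm-multipath          # requires root; reference only

# 2. Persist across reboots via a modules-load.d drop-in (the copy task):
printf 'dm-multipath\n' > "$root/modules-load.d/dm-multipath.conf"
chmod 0644 "$root/modules-load.d/dm-multipath.conf"

# 3. Idempotently append to the legacy modules list (lineinfile, create=True):
grep -qx 'dm-multipath' "$root/modules" 2>/dev/null || \
    printf 'dm-multipath\n' >> "$root/modules"

# 4. Re-run the loader so the drop-in takes effect now (systemd task):
#      systemctl restart systemd-modules-load.service

cat "$root/modules-load.d/dm-multipath.conf"
```

Running step 3 a second time changes nothing, which is why the playbook's restart of `systemd-modules-load.service` is the only disruptive part of the sequence.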
Dec 06 06:49:52 compute-0 sudo[225041]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:53 compute-0 sudo[225198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdltqcnoadxhmgiujvgnemkxfsmdsqxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003792.7217758-578-121550762421695/AnsiballZ_file.py'
Dec 06 06:49:53 compute-0 sudo[225198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:53 compute-0 python3.9[225200]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:49:53 compute-0 sudo[225198]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:53 compute-0 ceph-mon[74339]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:53 compute-0 sudo[225350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekkfetrbzkcicnjinenfqqzdpywpkkae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003793.6563451-605-160545398194160/AnsiballZ_stat.py'
Dec 06 06:49:53 compute-0 sudo[225350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:54.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:54 compute-0 python3.9[225352]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:49:54 compute-0 sudo[225350]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:54.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:54 compute-0 sudo[225503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chwdggpalorgnmgjhwdclzrnbxmswpgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003794.4246938-632-209704865484293/AnsiballZ_stat.py'
Dec 06 06:49:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:54 compute-0 sudo[225503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:54 compute-0 python3.9[225505]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:49:54 compute-0 sudo[225503]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:55 compute-0 sudo[225655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzkmaqazuudcbpcukvspmppldsuyujkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003795.163174-656-90475871712607/AnsiballZ_stat.py'
Dec 06 06:49:55 compute-0 sudo[225655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:55 compute-0 python3.9[225657]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:49:55 compute-0 sudo[225655]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:55 compute-0 ceph-mon[74339]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:55 compute-0 sudo[225778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbcjptqecnfsjrxzapdrhaqirpejylhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003795.163174-656-90475871712607/AnsiballZ_copy.py'
Dec 06 06:49:55 compute-0 sudo[225778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:56 compute-0 python3.9[225780]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003795.163174-656-90475871712607/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:49:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:56.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:49:56 compute-0 sudo[225778]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:49:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:56.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:56 compute-0 ceph-mon[74339]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:56 compute-0 sudo[225931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iusvglhxxmiywdsffkitziphmsftbstk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003796.4232025-701-246578787014269/AnsiballZ_command.py'
Dec 06 06:49:56 compute-0 sudo[225931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:57 compute-0 python3.9[225933]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:49:57 compute-0 sudo[225931]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:57 compute-0 sudo[226084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvyvyelmbtebjgxqyavaaoknndskmuug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003797.4001746-725-170544442809527/AnsiballZ_lineinfile.py'
Dec 06 06:49:57 compute-0 sudo[226084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:57 compute-0 python3.9[226086]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:57 compute-0 sudo[226084]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:49:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:49:58.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:49:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:49:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:49:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:49:58.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:49:58 compute-0 sudo[226237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoblmwrrezqzmiqqlrvkpretktvypyuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003798.1084588-749-150779960518449/AnsiballZ_replace.py'
Dec 06 06:49:58 compute-0 sudo[226237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:58 compute-0 python3.9[226239]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:58 compute-0 sudo[226237]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:58 compute-0 ceph-mon[74339]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:49:59 compute-0 sudo[226389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wifygblllbsdbbzaqhplyicrnkrcoxoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003798.9587193-773-71630359649443/AnsiballZ_replace.py'
Dec 06 06:49:59 compute-0 sudo[226389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:49:59 compute-0 python3.9[226391]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:59 compute-0 sudo[226389]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 06:50:00 compute-0 sudo[226554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abvxefpynwpypjufconglykqnbfwbexq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003799.737165-800-272107063264526/AnsiballZ_lineinfile.py'
Dec 06 06:50:00 compute-0 sudo[226554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:00 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 06:50:00 compute-0 podman[226515]: 2025-12-06 06:50:00.076667462 +0000 UTC m=+0.105035180 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:50:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:00.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:00 compute-0 python3.9[226560]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:00 compute-0 sudo[226554]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000061s ======
Dec 06 06:50:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:00.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000061s
Dec 06 06:50:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:00 compute-0 sudo[226720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubwtnyiuizaxpvpimcrqwrwegpfadpdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003800.3775473-800-54324081149477/AnsiballZ_lineinfile.py'
Dec 06 06:50:00 compute-0 sudo[226720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:00 compute-0 python3.9[226722]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:00 compute-0 sudo[226720]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:01 compute-0 ceph-mon[74339]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:01 compute-0 sudo[226872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srjhogwhfwxygnlmvsrxoqwqzbgekxqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003801.0665998-800-278052526725489/AnsiballZ_lineinfile.py'
Dec 06 06:50:01 compute-0 sudo[226872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:01 compute-0 python3.9[226874]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:01 compute-0 sudo[226872]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:02 compute-0 sudo[227024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owbsimtyisjsunnqytofaqljmanltzat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003801.7899568-800-121431080702841/AnsiballZ_lineinfile.py'
Dec 06 06:50:02 compute-0 sudo[227024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:02.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:02 compute-0 python3.9[227026]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:02 compute-0 sudo[227024]: pam_unix(sudo:session): session closed for user root
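Taken together, the `grep`/`lineinfile`/`replace` tasks above (blacklist-block guard at 06:49:57, the two `replace` passes, then the four `insertafter=^defaults` lines) converge `/etc/multipath.conf` toward roughly the shape below. This is an illustrative reconstruction built in a scratch file, not the literal contents of the host's file; the guard check mirrors the logged `grep -q '^blacklist\s*{'` and assumes GNU grep for the `\s` escape:

```shell
# Illustrative end state of the multipath.conf edits performed above
# (scratch file; the real target is /etc/multipath.conf).
conf=$(mktemp)
cat > "$conf" <<'EOF'
defaults {
        find_multipaths yes
        recheck_wwid yes
        skip_kpartx yes
        user_friendly_names no
}
blacklist {
}
EOF

# The playbook's idempotence guard: only inject a blacklist block
# when none exists yet (exit status 0 here means "already present").
grep -q '^blacklist\s*{' "$conf" && echo "blacklist block present"
```

The empty `blacklist { }` stanza is deliberate: the second `replace` task strips any `devnode ".*"` catch-all so that devices are filtered by `find_multipaths yes` rather than blacklisted wholesale.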
Dec 06 06:50:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:02 compute-0 sudo[227177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwlwdhaekiiqvmiogbyjdkihhizeylgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003802.5314496-887-251840169595564/AnsiballZ_stat.py'
Dec 06 06:50:02 compute-0 sudo[227177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:03 compute-0 python3.9[227179]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:50:03 compute-0 sudo[227177]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:03 compute-0 sudo[227331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcaptujkjropqryznlctofleugljklws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003803.2810452-911-141979098634589/AnsiballZ_file.py'
Dec 06 06:50:03 compute-0 sudo[227331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:03 compute-0 ceph-mon[74339]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:50:03.798 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:50:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:50:03.800 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:50:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:50:03.801 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:50:03 compute-0 python3.9[227333]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:03 compute-0 sudo[227331]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:50:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:04.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:50:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:04 compute-0 sudo[227484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chcfnwxivyjpalejlcdpthwxuebvomla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003804.1350372-938-280333291264437/AnsiballZ_file.py'
Dec 06 06:50:04 compute-0 sudo[227484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:04 compute-0 python3.9[227486]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:50:04 compute-0 sudo[227484]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:04 compute-0 ceph-mon[74339]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:05 compute-0 sudo[227636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-docvwochazdofbwxxfbxexramppjxexh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003804.9732504-962-244532113472955/AnsiballZ_stat.py'
Dec 06 06:50:05 compute-0 sudo[227636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:05 compute-0 python3.9[227638]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:05 compute-0 sudo[227636]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:05 compute-0 sudo[227714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltdlysjzgfnfqlgbuuejjvjeoeqbeudm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003804.9732504-962-244532113472955/AnsiballZ_file.py'
Dec 06 06:50:05 compute-0 sudo[227714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:05 compute-0 python3.9[227716]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:50:05 compute-0 sudo[227714]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:06.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:50:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:06.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:50:06 compute-0 sudo[227867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avlfsdlbkunjlobxwzgayarxikarxtfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003806.0913296-962-184838064337208/AnsiballZ_stat.py'
Dec 06 06:50:06 compute-0 sudo[227867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:06 compute-0 python3.9[227869]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:06 compute-0 sudo[227867]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:06 compute-0 sudo[227945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbirjxvbnkigtpahadeqttdodxkivgwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003806.0913296-962-184838064337208/AnsiballZ_file.py'
Dec 06 06:50:06 compute-0 sudo[227945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:07 compute-0 python3.9[227947]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:50:07 compute-0 sudo[227945]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:07 compute-0 sudo[228097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxnjqgqcjzmyymzdssmhnfgkmbiqyjai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003807.2777216-1031-277986871173727/AnsiballZ_file.py'
Dec 06 06:50:07 compute-0 sudo[228097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:07 compute-0 python3.9[228099]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:07 compute-0 sudo[228097]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:07 compute-0 ceph-mon[74339]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:50:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:08.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:50:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:50:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:08.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:50:08 compute-0 sudo[228250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djzagarivynphousngvhieosfbqbhjdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003808.0588453-1055-259431348673365/AnsiballZ_stat.py'
Dec 06 06:50:08 compute-0 sudo[228250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:08 compute-0 python3.9[228252]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:08 compute-0 sudo[228250]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:08 compute-0 sudo[228328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsvnzfbfffshzigkhwudmjpgjigvgdke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003808.0588453-1055-259431348673365/AnsiballZ_file.py'
Dec 06 06:50:08 compute-0 sudo[228328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:09 compute-0 ceph-mon[74339]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:09 compute-0 python3.9[228330]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:09 compute-0 sudo[228328]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:09 compute-0 sudo[228480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgvnahdotnfvshcutdqikuneycazdwzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003809.3180127-1091-208641383508824/AnsiballZ_stat.py'
Dec 06 06:50:09 compute-0 sudo[228480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:09 compute-0 python3.9[228482]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:09 compute-0 sudo[228480]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:10 compute-0 sudo[228532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:10 compute-0 sudo[228532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:10 compute-0 sudo[228532]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:10 compute-0 sudo[228593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsmiblpevjqqqyrfpopqkvbvkozzfuoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003809.3180127-1091-208641383508824/AnsiballZ_file.py'
Dec 06 06:50:10 compute-0 sudo[228593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:10 compute-0 sudo[228592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:10 compute-0 podman[228535]: 2025-12-06 06:50:10.122326829 +0000 UTC m=+0.084470481 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 06:50:10 compute-0 sudo[228592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:10 compute-0 sudo[228592]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:10.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:10 compute-0 python3.9[228602]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:10 compute-0 sudo[228593]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:10.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:10 compute-0 sudo[228778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxreviyotsfcxmvpcblzxdjddoogtwjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003810.5370278-1127-58715403769451/AnsiballZ_systemd.py'
Dec 06 06:50:10 compute-0 sudo[228778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:11 compute-0 python3.9[228780]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:11 compute-0 systemd[1]: Reloading.
Dec 06 06:50:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:11 compute-0 systemd-sysv-generator[228811]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:11 compute-0 systemd-rc-local-generator[228808]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:11 compute-0 sudo[228778]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:11 compute-0 ceph-mon[74339]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:50:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:12.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:50:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:12.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:12 compute-0 sudo[228968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqjcbiymuybsjlowkzwxevoqglsyumso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003812.1291165-1151-166387011336927/AnsiballZ_stat.py'
Dec 06 06:50:12 compute-0 sudo[228968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:12 compute-0 python3.9[228970]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:12 compute-0 sudo[228968]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:12 compute-0 ceph-mon[74339]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:50:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:50:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:50:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:50:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:50:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:50:12 compute-0 sudo[229046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kinjmjkrtuaufxysybasnlkprqtbsiub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003812.1291165-1151-166387011336927/AnsiballZ_file.py'
Dec 06 06:50:12 compute-0 sudo[229046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:13 compute-0 python3.9[229048]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:13 compute-0 sudo[229046]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:13 compute-0 sudo[229198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqouyapiqkkxghdsoamnsqluswvmopmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003813.4076586-1187-70172272758013/AnsiballZ_stat.py'
Dec 06 06:50:13 compute-0 sudo[229198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:13 compute-0 python3.9[229200]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:13 compute-0 sudo[229198]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:14 compute-0 sudo[229276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khhrbpzphjtonxqkrwnxqpinthxgajkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003813.4076586-1187-70172272758013/AnsiballZ_file.py'
Dec 06 06:50:14 compute-0 sudo[229276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:14.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:14.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:14 compute-0 python3.9[229278]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:14 compute-0 sudo[229276]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:14 compute-0 sudo[229429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urrdwzelgfmfjpjzvycoktsxcemnvdjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003814.5503125-1223-165472731583707/AnsiballZ_systemd.py'
Dec 06 06:50:14 compute-0 sudo[229429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:15 compute-0 python3.9[229431]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:15 compute-0 systemd[1]: Reloading.
Dec 06 06:50:15 compute-0 systemd-rc-local-generator[229456]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:15 compute-0 systemd-sysv-generator[229459]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:15 compute-0 systemd[1]: Starting Create netns directory...
Dec 06 06:50:15 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 06:50:15 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 06:50:15 compute-0 systemd[1]: Finished Create netns directory.
Dec 06 06:50:15 compute-0 sudo[229429]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:15 compute-0 ceph-mon[74339]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:16.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:16 compute-0 sudo[229622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivhxzolrzxjkzwzmjqlzdbhytkkvsllv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003815.9214673-1253-147331949563112/AnsiballZ_file.py'
Dec 06 06:50:16 compute-0 sudo[229622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:16.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:16 compute-0 python3.9[229624]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:50:16 compute-0 sudo[229622]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:16 compute-0 sudo[229775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lthdnuitrwfsspvizxbrdxjxswjheutt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003816.5512989-1277-93993312981698/AnsiballZ_stat.py'
Dec 06 06:50:16 compute-0 sudo[229775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:17 compute-0 python3.9[229777]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:17 compute-0 sudo[229775]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:17 compute-0 ceph-mon[74339]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:17 compute-0 sudo[229898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysejguerwuczkcatrhrscllrqrwfjodj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003816.5512989-1277-93993312981698/AnsiballZ_copy.py'
Dec 06 06:50:17 compute-0 sudo[229898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:17 compute-0 python3.9[229900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003816.5512989-1277-93993312981698/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:50:17 compute-0 sudo[229898]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:50:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:18.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:50:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:50:18
Dec 06 06:50:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:50:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:50:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'images', '.rgw.root', '.mgr', 'backups', 'volumes', 'default.rgw.log']
Dec 06 06:50:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:50:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:18.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:18 compute-0 sudo[230051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kthyxvyznuuciqaqdqbljfqbaeumywwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003818.1761575-1328-10202042586313/AnsiballZ_file.py'
Dec 06 06:50:18 compute-0 sudo[230051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:18 compute-0 python3.9[230053]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:50:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:18 compute-0 sudo[230051]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:19 compute-0 sudo[230203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tungduzzokzcjrfwyzryuadylirfndcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003818.9992325-1352-216153079181761/AnsiballZ_stat.py'
Dec 06 06:50:19 compute-0 sudo[230203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:19 compute-0 python3.9[230205]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:19 compute-0 sudo[230203]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:19 compute-0 sudo[230326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxcgdtcyiamisfihpkfzaoohiftwngsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003818.9992325-1352-216153079181761/AnsiballZ_copy.py'
Dec 06 06:50:19 compute-0 sudo[230326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:19 compute-0 ceph-mon[74339]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:19 compute-0 python3.9[230328]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003818.9992325-1352-216153079181761/.source.json _original_basename=.gz74wfam follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:19 compute-0 sudo[230326]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:20.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:20.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:20 compute-0 sudo[230479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spffsvicqwhzegggpsdvedeklljsdrvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003820.2071428-1397-30503135277183/AnsiballZ_file.py'
Dec 06 06:50:20 compute-0 sudo[230479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:20 compute-0 python3.9[230481]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:20 compute-0 sudo[230479]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:20 compute-0 ceph-mon[74339]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:21 compute-0 sudo[230631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvpqemoisyenfebisizirecmbhwfzmuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003820.988237-1421-105461491441113/AnsiballZ_stat.py'
Dec 06 06:50:21 compute-0 sudo[230631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:21 compute-0 sudo[230631]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:21 compute-0 sudo[230754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xonchoeniblnpefesajfeignkeqoenpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003820.988237-1421-105461491441113/AnsiballZ_copy.py'
Dec 06 06:50:21 compute-0 sudo[230754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:21 compute-0 sudo[230754]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:22.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:22.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:22 compute-0 sudo[230907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdznjftobguvpqjcsmvhiklrouqrwfnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003822.5181508-1472-175739675735643/AnsiballZ_container_config_data.py'
Dec 06 06:50:22 compute-0 sudo[230907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:23 compute-0 python3.9[230909]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 06 06:50:23 compute-0 sudo[230907]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:50:23 compute-0 ceph-mon[74339]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:23 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 06 06:50:23 compute-0 sudo[231059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bariexikhxzdduolllnlfmzrpcgdmwfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003823.424442-1499-237303469474076/AnsiballZ_container_config_hash.py'
Dec 06 06:50:23 compute-0 sudo[231059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:24 compute-0 python3.9[231062]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 06:50:24 compute-0 sudo[231059]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:24.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:24.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:24 compute-0 ceph-mon[74339]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:24 compute-0 sudo[231213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahbhiwrtftuxzvfgwxmdnktnqkzphqlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003824.3163855-1526-29400362167295/AnsiballZ_podman_container_info.py'
Dec 06 06:50:24 compute-0 sudo[231213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:24 compute-0 python3.9[231215]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 06:50:25 compute-0 sudo[231213]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:25 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:50:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:50:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:26.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:26.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:26 compute-0 sudo[231395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvwddkkmugcmdgmhpzthkoqzssyyxlsr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003826.2252464-1565-186301562559383/AnsiballZ_edpm_container_manage.py'
Dec 06 06:50:26 compute-0 sudo[231395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:27 compute-0 python3[231397]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 06:50:27 compute-0 ceph-mon[74339]: pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:28 compute-0 podman[231410]: 2025-12-06 06:50:28.090325561 +0000 UTC m=+1.012853944 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 06 06:50:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:28.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:28 compute-0 podman[231469]: 2025-12-06 06:50:28.212075385 +0000 UTC m=+0.042450764 container create a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:50:28 compute-0 podman[231469]: 2025-12-06 06:50:28.190866869 +0000 UTC m=+0.021242268 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 06 06:50:28 compute-0 python3[231397]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 06 06:50:28 compute-0 sudo[231395]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:28.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:28 compute-0 ceph-mon[74339]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:29 compute-0 sudo[231658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvikxkycdxzdtovrivpbzczssgrzgxgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003828.736332-1589-97059046536563/AnsiballZ_stat.py'
Dec 06 06:50:29 compute-0 sudo[231658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:29 compute-0 python3.9[231660]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:50:29 compute-0 sudo[231658]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:29 compute-0 sudo[231812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqehjagofowcpxqkvxerccuthjcgipnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003829.524657-1616-45693800614929/AnsiballZ_file.py'
Dec 06 06:50:29 compute-0 sudo[231812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:30 compute-0 python3.9[231814]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:30 compute-0 sudo[231812]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:30.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:30 compute-0 sudo[231821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:30 compute-0 sudo[231821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:30 compute-0 sudo[231821]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:30 compute-0 sudo[231872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:30 compute-0 sudo[231872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:30 compute-0 sudo[231872]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:30 compute-0 podman[231862]: 2025-12-06 06:50:30.311083826 +0000 UTC m=+0.092445016 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 06:50:30 compute-0 sudo[231961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zahfoasatvogokrcbvnkujryczfqqssi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003829.524657-1616-45693800614929/AnsiballZ_stat.py'
Dec 06 06:50:30 compute-0 sudo[231961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:30.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:30 compute-0 python3.9[231963]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:50:30 compute-0 sudo[231961]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:31 compute-0 sudo[232113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywvzyxpgjfxdotmttiblzcqjuepcfsxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003830.664682-1616-54868919160069/AnsiballZ_copy.py'
Dec 06 06:50:31 compute-0 sudo[232113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:31 compute-0 python3.9[232115]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003830.664682-1616-54868919160069/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:31 compute-0 sudo[232113]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:31 compute-0 sudo[232189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcmckznnatobzblygqsclgtgazmyaion ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003830.664682-1616-54868919160069/AnsiballZ_systemd.py'
Dec 06 06:50:31 compute-0 sudo[232189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:31 compute-0 ceph-mon[74339]: pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:31 compute-0 python3.9[232191]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:50:31 compute-0 systemd[1]: Reloading.
Dec 06 06:50:32 compute-0 systemd-rc-local-generator[232217]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:32 compute-0 systemd-sysv-generator[232220]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:32.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:32 compute-0 sudo[232189]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:32.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:32 compute-0 sudo[232300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xidsvajhdldmqbmextforwafdsqccozm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003830.664682-1616-54868919160069/AnsiballZ_systemd.py'
Dec 06 06:50:32 compute-0 sudo[232300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:32 compute-0 ceph-mon[74339]: pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:32 compute-0 python3.9[232302]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:32 compute-0 systemd[1]: Reloading.
Dec 06 06:50:32 compute-0 systemd-sysv-generator[232334]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:32 compute-0 systemd-rc-local-generator[232331]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:33 compute-0 systemd[1]: Starting multipathd container...
Dec 06 06:50:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd8aa409c3d965096c2b9313b0a58f7183f60987ce7a6695698e50d2d859b3d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd8aa409c3d965096c2b9313b0a58f7183f60987ce7a6695698e50d2d859b3d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:33 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a.
Dec 06 06:50:33 compute-0 podman[232341]: 2025-12-06 06:50:33.347773881 +0000 UTC m=+0.110191287 container init a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 06:50:33 compute-0 multipathd[232356]: + sudo -E kolla_set_configs
Dec 06 06:50:33 compute-0 podman[232341]: 2025-12-06 06:50:33.371326112 +0000 UTC m=+0.133743488 container start a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 06:50:33 compute-0 podman[232341]: multipathd
Dec 06 06:50:33 compute-0 sudo[232363]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 06:50:33 compute-0 sudo[232363]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 06:50:33 compute-0 sudo[232363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 06:50:33 compute-0 systemd[1]: Started multipathd container.
Dec 06 06:50:33 compute-0 multipathd[232356]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 06:50:33 compute-0 multipathd[232356]: INFO:__main__:Validating config file
Dec 06 06:50:33 compute-0 sudo[232300]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:33 compute-0 multipathd[232356]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 06:50:33 compute-0 multipathd[232356]: INFO:__main__:Writing out command to execute
Dec 06 06:50:33 compute-0 sudo[232363]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:33 compute-0 multipathd[232356]: ++ cat /run_command
Dec 06 06:50:33 compute-0 multipathd[232356]: + CMD='/usr/sbin/multipathd -d'
Dec 06 06:50:33 compute-0 multipathd[232356]: + ARGS=
Dec 06 06:50:33 compute-0 multipathd[232356]: + sudo kolla_copy_cacerts
Dec 06 06:50:33 compute-0 podman[232364]: 2025-12-06 06:50:33.459937811 +0000 UTC m=+0.065316427 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 06:50:33 compute-0 systemd[1]: a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a-2029975a0ba30ed0.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 06:50:33 compute-0 systemd[1]: a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a-2029975a0ba30ed0.service: Failed with result 'exit-code'.
Dec 06 06:50:33 compute-0 sudo[232387]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 06 06:50:33 compute-0 sudo[232387]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 06:50:33 compute-0 multipathd[232356]: + [[ ! -n '' ]]
Dec 06 06:50:33 compute-0 sudo[232387]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 06:50:33 compute-0 multipathd[232356]: + . kolla_extend_start
Dec 06 06:50:33 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:50:33 compute-0 multipathd[232356]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 06 06:50:33 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:50:33 compute-0 multipathd[232356]: Running command: '/usr/sbin/multipathd -d'
Dec 06 06:50:33 compute-0 sudo[232387]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:33 compute-0 multipathd[232356]: + umask 0022
Dec 06 06:50:33 compute-0 multipathd[232356]: + exec /usr/sbin/multipathd -d
Dec 06 06:50:33 compute-0 multipathd[232356]: 4106.134587 | --------start up--------
Dec 06 06:50:33 compute-0 multipathd[232356]: 4106.134603 | read /etc/multipath.conf
Dec 06 06:50:33 compute-0 multipathd[232356]: 4106.141790 | path checkers start up
Dec 06 06:50:34 compute-0 python3.9[232549]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:50:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:34.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:34.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:34 compute-0 sudo[232702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akhxglqzwiqnaickqydzaowlzpwjogql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003834.300492-1724-101947034202422/AnsiballZ_command.py'
Dec 06 06:50:34 compute-0 sudo[232702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:34 compute-0 python3.9[232704]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:50:34 compute-0 sudo[232702]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:35 compute-0 sudo[232867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jutdzqnhwklcbafgnjxrbvbcdateprsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003835.1464038-1748-248173143202871/AnsiballZ_systemd.py'
Dec 06 06:50:35 compute-0 sudo[232867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:35 compute-0 python3.9[232869]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:50:35 compute-0 systemd[1]: Stopping multipathd container...
Dec 06 06:50:35 compute-0 multipathd[232356]: 4108.513673 | exit (signal)
Dec 06 06:50:35 compute-0 multipathd[232356]: 4108.513724 | --------shut down-------
Dec 06 06:50:35 compute-0 systemd[1]: libpod-a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a.scope: Deactivated successfully.
Dec 06 06:50:35 compute-0 podman[232873]: 2025-12-06 06:50:35.898917446 +0000 UTC m=+0.117245891 container died a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec 06 06:50:35 compute-0 systemd[1]: a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a-2029975a0ba30ed0.timer: Deactivated successfully.
Dec 06 06:50:35 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a.
Dec 06 06:50:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a-userdata-shm.mount: Deactivated successfully.
Dec 06 06:50:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffd8aa409c3d965096c2b9313b0a58f7183f60987ce7a6695698e50d2d859b3d-merged.mount: Deactivated successfully.
Dec 06 06:50:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:36.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:36 compute-0 ceph-mon[74339]: pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:36 compute-0 podman[232873]: 2025-12-06 06:50:36.295632371 +0000 UTC m=+0.513960836 container cleanup a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:50:36 compute-0 podman[232873]: multipathd
Dec 06 06:50:36 compute-0 podman[232902]: multipathd
Dec 06 06:50:36 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 06 06:50:36 compute-0 systemd[1]: Stopped multipathd container.
Dec 06 06:50:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:36.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:36 compute-0 systemd[1]: Starting multipathd container...
Dec 06 06:50:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:50:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd8aa409c3d965096c2b9313b0a58f7183f60987ce7a6695698e50d2d859b3d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd8aa409c3d965096c2b9313b0a58f7183f60987ce7a6695698e50d2d859b3d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:36 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a.
Dec 06 06:50:36 compute-0 podman[232916]: 2025-12-06 06:50:36.664990469 +0000 UTC m=+0.262228689 container init a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 06 06:50:36 compute-0 multipathd[232932]: + sudo -E kolla_set_configs
Dec 06 06:50:36 compute-0 sudo[232938]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 06:50:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:36 compute-0 sudo[232938]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 06:50:36 compute-0 sudo[232938]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 06:50:36 compute-0 podman[232916]: 2025-12-06 06:50:36.696846689 +0000 UTC m=+0.294084889 container start a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 06:50:36 compute-0 multipathd[232932]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 06:50:36 compute-0 multipathd[232932]: INFO:__main__:Validating config file
Dec 06 06:50:36 compute-0 multipathd[232932]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 06:50:36 compute-0 multipathd[232932]: INFO:__main__:Writing out command to execute
Dec 06 06:50:36 compute-0 sudo[232938]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:36 compute-0 multipathd[232932]: ++ cat /run_command
Dec 06 06:50:36 compute-0 multipathd[232932]: + CMD='/usr/sbin/multipathd -d'
Dec 06 06:50:36 compute-0 multipathd[232932]: + ARGS=
Dec 06 06:50:36 compute-0 multipathd[232932]: + sudo kolla_copy_cacerts
Dec 06 06:50:36 compute-0 podman[232916]: multipathd
Dec 06 06:50:36 compute-0 sudo[232953]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 06 06:50:36 compute-0 sudo[232953]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 06:50:36 compute-0 sudo[232953]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 06:50:36 compute-0 systemd[1]: Started multipathd container.
Dec 06 06:50:36 compute-0 sudo[232953]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:36 compute-0 multipathd[232932]: + [[ ! -n '' ]]
Dec 06 06:50:36 compute-0 multipathd[232932]: + . kolla_extend_start
Dec 06 06:50:36 compute-0 multipathd[232932]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 06 06:50:36 compute-0 multipathd[232932]: Running command: '/usr/sbin/multipathd -d'
Dec 06 06:50:36 compute-0 multipathd[232932]: + umask 0022
Dec 06 06:50:36 compute-0 multipathd[232932]: + exec /usr/sbin/multipathd -d
Dec 06 06:50:36 compute-0 multipathd[232932]: 4109.429666 | --------start up--------
Dec 06 06:50:36 compute-0 multipathd[232932]: 4109.429691 | read /etc/multipath.conf
Dec 06 06:50:36 compute-0 multipathd[232932]: 4109.435352 | path checkers start up
Dec 06 06:50:36 compute-0 sudo[232867]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:36 compute-0 podman[232939]: 2025-12-06 06:50:36.808209957 +0000 UTC m=+0.092606560 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 06:50:37 compute-0 sudo[233119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjidgykuptetxyotoscvbxnqxzwhoewq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003836.9518087-1772-20569745781131/AnsiballZ_file.py'
Dec 06 06:50:37 compute-0 sudo[233119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:37 compute-0 ceph-mon[74339]: pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:37 compute-0 python3.9[233121]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:37 compute-0 sudo[233119]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:37 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 06 06:50:37 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 06 06:50:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:38.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:38 compute-0 sudo[233273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfbrbsiwftuvdxosopzucxftksybqtjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003838.0041447-1808-146543044860921/AnsiballZ_file.py'
Dec 06 06:50:38 compute-0 sudo[233273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:38.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:38 compute-0 python3.9[233275]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 06:50:38 compute-0 sudo[233273]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:38 compute-0 ceph-mon[74339]: pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:38 compute-0 sudo[233426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvtylxvkfkjebiurrzgjbmlhmcotgmkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003838.7669394-1832-79809220489849/AnsiballZ_modprobe.py'
Dec 06 06:50:38 compute-0 sudo[233426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:39 compute-0 python3.9[233428]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 06 06:50:39 compute-0 kernel: Key type psk registered
Dec 06 06:50:39 compute-0 sudo[233426]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:39 compute-0 sudo[233589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtqxyoxqhlnccnstrqgjdfizpdqijhix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003839.7353275-1856-159204357087765/AnsiballZ_stat.py'
Dec 06 06:50:39 compute-0 sudo[233589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:40 compute-0 python3.9[233591]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:50:40 compute-0 sudo[233589]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:40.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:40.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:40 compute-0 podman[233639]: 2025-12-06 06:50:40.416081397 +0000 UTC m=+0.061042798 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 06:50:40 compute-0 sudo[233732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqjoojyqyqhphfwgbsyotgkzuqlbtpfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003839.7353275-1856-159204357087765/AnsiballZ_copy.py'
Dec 06 06:50:40 compute-0 sudo[233732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:40 compute-0 python3.9[233734]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003839.7353275-1856-159204357087765/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:40 compute-0 sudo[233732]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:40 compute-0 ceph-mon[74339]: pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:41 compute-0 sudo[233884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbzbmqrpiljysiuxrppkynwtvjgioxia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003841.1070306-1904-174724672920179/AnsiballZ_lineinfile.py'
Dec 06 06:50:41 compute-0 sudo[233884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:41 compute-0 python3.9[233886]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:41 compute-0 sudo[233884]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:42 compute-0 sudo[234036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usjbvpaupmcihjjbiceuiuawatzryyig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003841.8276238-1928-80441359843266/AnsiballZ_systemd.py'
Dec 06 06:50:42 compute-0 sudo[234036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:42.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:42.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:42 compute-0 python3.9[234038]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:50:42 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 06 06:50:42 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 06 06:50:42 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 06 06:50:42 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 06 06:50:42 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 06 06:50:42 compute-0 sudo[234036]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:42 compute-0 ceph-mon[74339]: pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:50:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:50:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:50:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:50:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:50:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:50:43 compute-0 sudo[234193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcuqmhwqsryvzkzzbetaantacittdqfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003842.7961593-1952-83901402875020/AnsiballZ_dnf.py'
Dec 06 06:50:43 compute-0 sudo[234193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:43 compute-0 python3.9[234195]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 06:50:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:50:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:44.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:50:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:44.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:44 compute-0 ceph-mon[74339]: pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:45 compute-0 sudo[234201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:45 compute-0 sudo[234201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:45 compute-0 sudo[234201]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:45 compute-0 sudo[234226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:50:45 compute-0 sudo[234226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:45 compute-0 sudo[234226]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:45 compute-0 sudo[234251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:45 compute-0 sudo[234251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:45 compute-0 sudo[234251]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:45 compute-0 sudo[234276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:50:45 compute-0 sudo[234276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:45 compute-0 systemd[1]: Reloading.
Dec 06 06:50:46 compute-0 sudo[234276]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:46 compute-0 systemd-rc-local-generator[234360]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:46 compute-0 systemd-sysv-generator[234363]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:46.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:46 compute-0 systemd[1]: Reloading.
Dec 06 06:50:46 compute-0 systemd-rc-local-generator[234395]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:46.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:46 compute-0 systemd-sysv-generator[234399]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:46 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 06 06:50:46 compute-0 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 06 06:50:46 compute-0 lvm[234445]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 06:50:46 compute-0 lvm[234445]: VG ceph_vg0 finished
Dec 06 06:50:46 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 06:50:46 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 06 06:50:46 compute-0 systemd[1]: Reloading.
Dec 06 06:50:47 compute-0 systemd-sysv-generator[234498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:47 compute-0 systemd-rc-local-generator[234492]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 06:50:47 compute-0 ceph-mon[74339]: pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:50:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:50:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:47 compute-0 sudo[234193]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 06:50:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 06 06:50:48 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.371s CPU time.
Dec 06 06:50:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:48 compute-0 systemd[1]: run-rca0e67a4d7b14f8a9cd9d02e73fff8ce.service: Deactivated successfully.
Dec 06 06:50:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:48.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:48.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:50:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:50:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:50:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:50:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:50:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:48 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e54496b0-91b0-40a1-8efc-d8438118bb00 does not exist
Dec 06 06:50:48 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d7cde520-e9d4-49eb-9f3d-d3af606d1689 does not exist
Dec 06 06:50:48 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d3c821a9-1f3d-4ec3-88f7-918a9661cf24 does not exist
Dec 06 06:50:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:50:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:50:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:50:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:50:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:50:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:50:48 compute-0 sudo[235730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:48 compute-0 sudo[235730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:48 compute-0 sudo[235730]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:48 compute-0 sudo[235758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:50:48 compute-0 sudo[235758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:48 compute-0 sudo[235758]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:48 compute-0 sudo[235807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:48 compute-0 sudo[235807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:48 compute-0 sudo[235807]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:48 compute-0 sudo[235858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzihycmlqbfwnruajwmzhrpohombmefr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003848.453778-1976-173211880448994/AnsiballZ_systemd_service.py'
Dec 06 06:50:48 compute-0 sudo[235858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:48 compute-0 sudo[235859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:50:48 compute-0 sudo[235859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:50:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:50:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:50:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:50:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:50:48 compute-0 ceph-mon[74339]: pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:49 compute-0 python3.9[235864]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:50:49 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec 06 06:50:49 compute-0 iscsid[223677]: iscsid shutting down.
Dec 06 06:50:49 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec 06 06:50:49 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec 06 06:50:49 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 06 06:50:49 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 06 06:50:49 compute-0 systemd[1]: Started Open-iSCSI.
Dec 06 06:50:49 compute-0 podman[235926]: 2025-12-06 06:50:49.11722008 +0000 UTC m=+0.045120578 container create 5855fce84c51645758e768b447e10d294f34dd318abb29d93546fdf44c8fc7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:50:49 compute-0 sudo[235858]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:49 compute-0 systemd[1]: Started libpod-conmon-5855fce84c51645758e768b447e10d294f34dd318abb29d93546fdf44c8fc7fa.scope.
Dec 06 06:50:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:50:49 compute-0 podman[235926]: 2025-12-06 06:50:49.093837543 +0000 UTC m=+0.021738091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:50:49 compute-0 podman[235926]: 2025-12-06 06:50:49.198686061 +0000 UTC m=+0.126586589 container init 5855fce84c51645758e768b447e10d294f34dd318abb29d93546fdf44c8fc7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 06:50:49 compute-0 podman[235926]: 2025-12-06 06:50:49.204768219 +0000 UTC m=+0.132668727 container start 5855fce84c51645758e768b447e10d294f34dd318abb29d93546fdf44c8fc7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 06:50:49 compute-0 podman[235926]: 2025-12-06 06:50:49.208574194 +0000 UTC m=+0.136474712 container attach 5855fce84c51645758e768b447e10d294f34dd318abb29d93546fdf44c8fc7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shannon, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:50:49 compute-0 jolly_shannon[235945]: 167 167
Dec 06 06:50:49 compute-0 systemd[1]: libpod-5855fce84c51645758e768b447e10d294f34dd318abb29d93546fdf44c8fc7fa.scope: Deactivated successfully.
Dec 06 06:50:49 compute-0 podman[235926]: 2025-12-06 06:50:49.213191772 +0000 UTC m=+0.141092270 container died 5855fce84c51645758e768b447e10d294f34dd318abb29d93546fdf44c8fc7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ae8c56eb17ba7ef888eaba7333cfbfe177a4cf75ec1373d5d3ed1c449e6f698-merged.mount: Deactivated successfully.
Dec 06 06:50:49 compute-0 podman[235926]: 2025-12-06 06:50:49.254603367 +0000 UTC m=+0.182503865 container remove 5855fce84c51645758e768b447e10d294f34dd318abb29d93546fdf44c8fc7fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_shannon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Dec 06 06:50:49 compute-0 systemd[1]: libpod-conmon-5855fce84c51645758e768b447e10d294f34dd318abb29d93546fdf44c8fc7fa.scope: Deactivated successfully.
Dec 06 06:50:49 compute-0 podman[236030]: 2025-12-06 06:50:49.385472104 +0000 UTC m=+0.023310126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:50:49 compute-0 podman[236030]: 2025-12-06 06:50:49.837076255 +0000 UTC m=+0.474914257 container create cda84dd41553d0615729e81c34dfe67fe32715bc66aa56c8405fede7a85e6d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 06:50:49 compute-0 systemd[1]: Started libpod-conmon-cda84dd41553d0615729e81c34dfe67fe32715bc66aa56c8405fede7a85e6d76.scope.
Dec 06 06:50:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618f71307d19a7e1764deb4ff20513da57222a1d00500c9eb743b33a830eb8d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618f71307d19a7e1764deb4ff20513da57222a1d00500c9eb743b33a830eb8d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618f71307d19a7e1764deb4ff20513da57222a1d00500c9eb743b33a830eb8d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618f71307d19a7e1764deb4ff20513da57222a1d00500c9eb743b33a830eb8d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/618f71307d19a7e1764deb4ff20513da57222a1d00500c9eb743b33a830eb8d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:49 compute-0 podman[236030]: 2025-12-06 06:50:49.915138952 +0000 UTC m=+0.552976984 container init cda84dd41553d0615729e81c34dfe67fe32715bc66aa56c8405fede7a85e6d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 06:50:49 compute-0 podman[236030]: 2025-12-06 06:50:49.921797186 +0000 UTC m=+0.559635188 container start cda84dd41553d0615729e81c34dfe67fe32715bc66aa56c8405fede7a85e6d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 06:50:49 compute-0 podman[236030]: 2025-12-06 06:50:49.924882981 +0000 UTC m=+0.562720983 container attach cda84dd41553d0615729e81c34dfe67fe32715bc66aa56c8405fede7a85e6d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:50:50 compute-0 python3.9[236133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:50:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:50.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:50 compute-0 sudo[236148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:50 compute-0 sudo[236148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:50 compute-0 sudo[236148]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:50:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:50.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:50:50 compute-0 sudo[236194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:50 compute-0 sudo[236194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:50 compute-0 sudo[236194]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:50 compute-0 sudo[236353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgfukgpwwxmagbhfoccwlqixxuilntit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003850.518305-2028-101571307238136/AnsiballZ_file.py'
Dec 06 06:50:50 compute-0 sudo[236353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:50 compute-0 confident_dijkstra[236136]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:50:50 compute-0 confident_dijkstra[236136]: --> relative data size: 1.0
Dec 06 06:50:50 compute-0 confident_dijkstra[236136]: --> All data devices are unavailable
Dec 06 06:50:50 compute-0 systemd[1]: libpod-cda84dd41553d0615729e81c34dfe67fe32715bc66aa56c8405fede7a85e6d76.scope: Deactivated successfully.
Dec 06 06:50:50 compute-0 podman[236030]: 2025-12-06 06:50:50.843725905 +0000 UTC m=+1.481563927 container died cda84dd41553d0615729e81c34dfe67fe32715bc66aa56c8405fede7a85e6d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-618f71307d19a7e1764deb4ff20513da57222a1d00500c9eb743b33a830eb8d7-merged.mount: Deactivated successfully.
Dec 06 06:50:50 compute-0 ceph-mon[74339]: pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:50 compute-0 podman[236030]: 2025-12-06 06:50:50.917083242 +0000 UTC m=+1.554921244 container remove cda84dd41553d0615729e81c34dfe67fe32715bc66aa56c8405fede7a85e6d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 06:50:50 compute-0 systemd[1]: libpod-conmon-cda84dd41553d0615729e81c34dfe67fe32715bc66aa56c8405fede7a85e6d76.scope: Deactivated successfully.
Dec 06 06:50:50 compute-0 sudo[235859]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:50 compute-0 python3.9[236357]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:51 compute-0 sudo[236369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:51 compute-0 sudo[236369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:51 compute-0 sudo[236353]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:51 compute-0 sudo[236369]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:51 compute-0 sudo[236394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:50:51 compute-0 sudo[236394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:51 compute-0 sudo[236394]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:51 compute-0 sudo[236443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:51 compute-0 sudo[236443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:51 compute-0 sudo[236443]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:51 compute-0 sudo[236468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:50:51 compute-0 sudo[236468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:51 compute-0 podman[236534]: 2025-12-06 06:50:51.491915139 +0000 UTC m=+0.035305626 container create ad2ca491b14c70604da47f5e73a48672405c1a076b385fb0760410a417636819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 06:50:51 compute-0 systemd[1]: Started libpod-conmon-ad2ca491b14c70604da47f5e73a48672405c1a076b385fb0760410a417636819.scope.
Dec 06 06:50:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:50:51 compute-0 podman[236534]: 2025-12-06 06:50:51.566175841 +0000 UTC m=+0.109566348 container init ad2ca491b14c70604da47f5e73a48672405c1a076b385fb0760410a417636819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_morse, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:50:51 compute-0 podman[236534]: 2025-12-06 06:50:51.476085522 +0000 UTC m=+0.019476039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:50:51 compute-0 podman[236534]: 2025-12-06 06:50:51.573054471 +0000 UTC m=+0.116444948 container start ad2ca491b14c70604da47f5e73a48672405c1a076b385fb0760410a417636819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_morse, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:50:51 compute-0 podman[236534]: 2025-12-06 06:50:51.576509947 +0000 UTC m=+0.119900434 container attach ad2ca491b14c70604da47f5e73a48672405c1a076b385fb0760410a417636819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 06:50:51 compute-0 nostalgic_morse[236570]: 167 167
Dec 06 06:50:51 compute-0 systemd[1]: libpod-ad2ca491b14c70604da47f5e73a48672405c1a076b385fb0760410a417636819.scope: Deactivated successfully.
Dec 06 06:50:51 compute-0 podman[236534]: 2025-12-06 06:50:51.578357518 +0000 UTC m=+0.121747995 container died ad2ca491b14c70604da47f5e73a48672405c1a076b385fb0760410a417636819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_morse, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:50:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d9ebae42c3031b58ffdb1b552d36b9917f720097a89d424c73cb192dd0265b9-merged.mount: Deactivated successfully.
Dec 06 06:50:51 compute-0 podman[236534]: 2025-12-06 06:50:51.619158436 +0000 UTC m=+0.162548923 container remove ad2ca491b14c70604da47f5e73a48672405c1a076b385fb0760410a417636819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_morse, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 06:50:51 compute-0 systemd[1]: libpod-conmon-ad2ca491b14c70604da47f5e73a48672405c1a076b385fb0760410a417636819.scope: Deactivated successfully.
Dec 06 06:50:51 compute-0 podman[236627]: 2025-12-06 06:50:51.766708964 +0000 UTC m=+0.037256831 container create ec6be867e60246bb3cc6fa233637476749162dd505d4c206a9e5b0edbfd78535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gates, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:50:51 compute-0 systemd[1]: Started libpod-conmon-ec6be867e60246bb3cc6fa233637476749162dd505d4c206a9e5b0edbfd78535.scope.
Dec 06 06:50:51 compute-0 podman[236627]: 2025-12-06 06:50:51.749964691 +0000 UTC m=+0.020512578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:50:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8add73958cbabe250e89fc85d2479abbb92eeb3312bfe5a34d4510f50eef3ca9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8add73958cbabe250e89fc85d2479abbb92eeb3312bfe5a34d4510f50eef3ca9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8add73958cbabe250e89fc85d2479abbb92eeb3312bfe5a34d4510f50eef3ca9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8add73958cbabe250e89fc85d2479abbb92eeb3312bfe5a34d4510f50eef3ca9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:51 compute-0 podman[236627]: 2025-12-06 06:50:51.865564376 +0000 UTC m=+0.136112293 container init ec6be867e60246bb3cc6fa233637476749162dd505d4c206a9e5b0edbfd78535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 06:50:51 compute-0 podman[236627]: 2025-12-06 06:50:51.8798335 +0000 UTC m=+0.150381367 container start ec6be867e60246bb3cc6fa233637476749162dd505d4c206a9e5b0edbfd78535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gates, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 06:50:51 compute-0 podman[236627]: 2025-12-06 06:50:51.883026518 +0000 UTC m=+0.153574465 container attach ec6be867e60246bb3cc6fa233637476749162dd505d4c206a9e5b0edbfd78535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gates, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:50:51 compute-0 sudo[236721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czfdcozubgfvfqaoqrqrgwtnvrwgwvgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003851.5416045-2061-163257846379495/AnsiballZ_systemd_service.py'
Dec 06 06:50:51 compute-0 sudo[236721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:52.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:52 compute-0 python3.9[236723]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:50:52 compute-0 systemd[1]: Reloading.
Dec 06 06:50:52 compute-0 systemd-sysv-generator[236754]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:50:52 compute-0 systemd-rc-local-generator[236750]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:50:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:52.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:52 compute-0 pedantic_gates[236666]: {
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:     "0": [
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:         {
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             "devices": [
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "/dev/loop3"
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             ],
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             "lv_name": "ceph_lv0",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             "lv_size": "7511998464",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             "name": "ceph_lv0",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             "tags": {
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "ceph.cluster_name": "ceph",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "ceph.crush_device_class": "",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "ceph.encrypted": "0",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "ceph.osd_id": "0",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "ceph.type": "block",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:                 "ceph.vdo": "0"
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             },
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             "type": "block",
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:             "vg_name": "ceph_vg0"
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:         }
Dec 06 06:50:52 compute-0 pedantic_gates[236666]:     ]
Dec 06 06:50:52 compute-0 pedantic_gates[236666]: }
Dec 06 06:50:52 compute-0 sudo[236721]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:52 compute-0 systemd[1]: libpod-ec6be867e60246bb3cc6fa233637476749162dd505d4c206a9e5b0edbfd78535.scope: Deactivated successfully.
Dec 06 06:50:52 compute-0 podman[236627]: 2025-12-06 06:50:52.674030919 +0000 UTC m=+0.944578806 container died ec6be867e60246bb3cc6fa233637476749162dd505d4c206a9e5b0edbfd78535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 06:50:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8add73958cbabe250e89fc85d2479abbb92eeb3312bfe5a34d4510f50eef3ca9-merged.mount: Deactivated successfully.
Dec 06 06:50:52 compute-0 ceph-mon[74339]: pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:53 compute-0 python3.9[236926]: ansible-ansible.builtin.service_facts Invoked
Dec 06 06:50:53 compute-0 network[236943]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 06:50:53 compute-0 network[236944]: 'network-scripts' will be removed from distribution in near future.
Dec 06 06:50:53 compute-0 network[236945]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 06:50:53 compute-0 podman[236627]: 2025-12-06 06:50:53.545858414 +0000 UTC m=+1.816406321 container remove ec6be867e60246bb3cc6fa233637476749162dd505d4c206a9e5b0edbfd78535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec 06 06:50:53 compute-0 sudo[236468]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:54.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:54 compute-0 systemd[1]: libpod-conmon-ec6be867e60246bb3cc6fa233637476749162dd505d4c206a9e5b0edbfd78535.scope: Deactivated successfully.
Dec 06 06:50:54 compute-0 sudo[236951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:54 compute-0 sudo[236951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:54 compute-0 sudo[236951]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:54 compute-0 sudo[236977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:50:54 compute-0 sudo[236977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:54 compute-0 sudo[236977]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:54.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:54 compute-0 sudo[237004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:54 compute-0 sudo[237004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:54 compute-0 sudo[237004]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:54 compute-0 sudo[237034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:50:54 compute-0 sudo[237034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:54 compute-0 podman[237119]: 2025-12-06 06:50:54.846684005 +0000 UTC m=+0.074998024 container create 401dadff54c6a107346bfbbca8a28d0d62560fcee9c260f2ad47d14dd590df73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 06:50:54 compute-0 systemd[1]: Started libpod-conmon-401dadff54c6a107346bfbbca8a28d0d62560fcee9c260f2ad47d14dd590df73.scope.
Dec 06 06:50:54 compute-0 podman[237119]: 2025-12-06 06:50:54.794256286 +0000 UTC m=+0.022570305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:50:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:50:55 compute-0 ceph-mon[74339]: pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:55 compute-0 podman[237119]: 2025-12-06 06:50:55.501543843 +0000 UTC m=+0.729857932 container init 401dadff54c6a107346bfbbca8a28d0d62560fcee9c260f2ad47d14dd590df73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:50:55 compute-0 podman[237119]: 2025-12-06 06:50:55.513026291 +0000 UTC m=+0.741340300 container start 401dadff54c6a107346bfbbca8a28d0d62560fcee9c260f2ad47d14dd590df73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dubinsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:50:55 compute-0 festive_dubinsky[237142]: 167 167
Dec 06 06:50:55 compute-0 systemd[1]: libpod-401dadff54c6a107346bfbbca8a28d0d62560fcee9c260f2ad47d14dd590df73.scope: Deactivated successfully.
Dec 06 06:50:55 compute-0 podman[237119]: 2025-12-06 06:50:55.522601165 +0000 UTC m=+0.750915264 container attach 401dadff54c6a107346bfbbca8a28d0d62560fcee9c260f2ad47d14dd590df73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:50:55 compute-0 podman[237119]: 2025-12-06 06:50:55.523635904 +0000 UTC m=+0.751949903 container died 401dadff54c6a107346bfbbca8a28d0d62560fcee9c260f2ad47d14dd590df73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3793177755a4ec73c5a9b29448e55975622b4ae7632e678d8c063c9c5f7faec4-merged.mount: Deactivated successfully.
Dec 06 06:50:55 compute-0 podman[237119]: 2025-12-06 06:50:55.651527928 +0000 UTC m=+0.879841927 container remove 401dadff54c6a107346bfbbca8a28d0d62560fcee9c260f2ad47d14dd590df73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_dubinsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 06:50:55 compute-0 systemd[1]: libpod-conmon-401dadff54c6a107346bfbbca8a28d0d62560fcee9c260f2ad47d14dd590df73.scope: Deactivated successfully.
Dec 06 06:50:55 compute-0 podman[237217]: 2025-12-06 06:50:55.864479623 +0000 UTC m=+0.100462147 container create 7f070ab8afa359dfadde9f0316fe72db76277a135b7c77f5d792488afce86e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kapitsa, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 06 06:50:55 compute-0 podman[237217]: 2025-12-06 06:50:55.786005585 +0000 UTC m=+0.021988129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:50:55 compute-0 systemd[1]: Started libpod-conmon-7f070ab8afa359dfadde9f0316fe72db76277a135b7c77f5d792488afce86e44.scope.
Dec 06 06:50:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e1ed0cd291dc3098fa9dc0f68b8deb2323f1408f5a55b4c31c0236f2cdfed8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e1ed0cd291dc3098fa9dc0f68b8deb2323f1408f5a55b4c31c0236f2cdfed8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e1ed0cd291dc3098fa9dc0f68b8deb2323f1408f5a55b4c31c0236f2cdfed8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60e1ed0cd291dc3098fa9dc0f68b8deb2323f1408f5a55b4c31c0236f2cdfed8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:50:56 compute-0 podman[237217]: 2025-12-06 06:50:56.045231869 +0000 UTC m=+0.281214393 container init 7f070ab8afa359dfadde9f0316fe72db76277a135b7c77f5d792488afce86e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kapitsa, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 06:50:56 compute-0 podman[237217]: 2025-12-06 06:50:56.053361774 +0000 UTC m=+0.289344288 container start 7f070ab8afa359dfadde9f0316fe72db76277a135b7c77f5d792488afce86e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kapitsa, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 06:50:56 compute-0 podman[237217]: 2025-12-06 06:50:56.056871191 +0000 UTC m=+0.292853715 container attach 7f070ab8afa359dfadde9f0316fe72db76277a135b7c77f5d792488afce86e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:50:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:56.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:50:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:56.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:56 compute-0 naughty_kapitsa[237248]: {
Dec 06 06:50:56 compute-0 naughty_kapitsa[237248]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:50:56 compute-0 naughty_kapitsa[237248]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:50:56 compute-0 naughty_kapitsa[237248]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:50:56 compute-0 naughty_kapitsa[237248]:         "osd_id": 0,
Dec 06 06:50:56 compute-0 naughty_kapitsa[237248]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:50:56 compute-0 naughty_kapitsa[237248]:         "type": "bluestore"
Dec 06 06:50:56 compute-0 naughty_kapitsa[237248]:     }
Dec 06 06:50:56 compute-0 naughty_kapitsa[237248]: }
Dec 06 06:50:56 compute-0 systemd[1]: libpod-7f070ab8afa359dfadde9f0316fe72db76277a135b7c77f5d792488afce86e44.scope: Deactivated successfully.
Dec 06 06:50:56 compute-0 podman[237217]: 2025-12-06 06:50:56.965526723 +0000 UTC m=+1.201509257 container died 7f070ab8afa359dfadde9f0316fe72db76277a135b7c77f5d792488afce86e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kapitsa, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:50:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-60e1ed0cd291dc3098fa9dc0f68b8deb2323f1408f5a55b4c31c0236f2cdfed8-merged.mount: Deactivated successfully.
Dec 06 06:50:57 compute-0 sudo[237453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pojyovayoxttwijamlgvhanluxsfxktr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003856.8645165-2118-18321020622993/AnsiballZ_systemd_service.py'
Dec 06 06:50:57 compute-0 sudo[237453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:57 compute-0 podman[237217]: 2025-12-06 06:50:57.299780691 +0000 UTC m=+1.535763215 container remove 7f070ab8afa359dfadde9f0316fe72db76277a135b7c77f5d792488afce86e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kapitsa, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:50:57 compute-0 systemd[1]: libpod-conmon-7f070ab8afa359dfadde9f0316fe72db76277a135b7c77f5d792488afce86e44.scope: Deactivated successfully.
Dec 06 06:50:57 compute-0 sudo[237034]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:50:57 compute-0 python3.9[237455]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:57 compute-0 sudo[237453]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:57 compute-0 sudo[237606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmwisfzvdevqkcdcadssebvmapddsiex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003857.6486328-2118-138322576259938/AnsiballZ_systemd_service.py'
Dec 06 06:50:57 compute-0 sudo[237606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:58 compute-0 ceph-mon[74339]: pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:50:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:58 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 452396d3-ce12-4d68-a120-b77de2dd7532 does not exist
Dec 06 06:50:58 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ae134a0f-4abf-4d21-b26f-7c750ef19d11 does not exist
Dec 06 06:50:58 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev bbf4b751-1f08-44e0-aa25-64b8f53677e6 does not exist
Dec 06 06:50:58 compute-0 sudo[237609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:50:58 compute-0 sudo[237609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:58 compute-0 sudo[237609]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:50:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:50:58.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:50:58 compute-0 sudo[237634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:50:58 compute-0 sudo[237634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:50:58 compute-0 sudo[237634]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:58 compute-0 python3.9[237608]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:58 compute-0 sudo[237606]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:50:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:50:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:50:58.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:50:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:58 compute-0 sudo[237810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ickvpsndhqhtvkytkrjewbooijyttjfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003858.492336-2118-43095904543938/AnsiballZ_systemd_service.py'
Dec 06 06:50:58 compute-0 sudo[237810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:59 compute-0 python3.9[237812]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:59 compute-0 sudo[237810]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:50:59 compute-0 ceph-mon[74339]: pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:50:59 compute-0 sudo[237963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxxxvuiuasjrijaoprhovjjpgxzhyngf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003859.2003937-2118-30442285306930/AnsiballZ_systemd_service.py'
Dec 06 06:50:59 compute-0 sudo[237963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:50:59 compute-0 python3.9[237965]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:50:59 compute-0 sudo[237963]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:00.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:00.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:00 compute-0 sudo[238126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iddkemucbrpxillobiwypcgmbcclgtir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003859.9300091-2118-7038520433092/AnsiballZ_systemd_service.py'
Dec 06 06:51:00 compute-0 sudo[238126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:00 compute-0 podman[238090]: 2025-12-06 06:51:00.507852983 +0000 UTC m=+0.152396704 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Dec 06 06:51:00 compute-0 python3.9[238135]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:51:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:00 compute-0 sudo[238126]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:00 compute-0 ceph-mon[74339]: pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:01 compute-0 sudo[238296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckpotvccyrxbtsxpdlmbagpjerqvzrhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003860.8485005-2118-108481581064991/AnsiballZ_systemd_service.py'
Dec 06 06:51:01 compute-0 sudo[238296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:01 compute-0 python3.9[238298]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:51:01 compute-0 sudo[238296]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:01 compute-0 sudo[238449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ximcaxyxmfjfsjndgutuviefdzqtbyum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003861.637943-2118-145402050404059/AnsiballZ_systemd_service.py'
Dec 06 06:51:01 compute-0 sudo[238449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:02 compute-0 python3.9[238451]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:51:02 compute-0 sudo[238449]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:51:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:02.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:51:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:02.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:02 compute-0 sudo[238603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ronatozuobvqdkvgaoyfvseclexprqew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003862.341938-2118-192160747747531/AnsiballZ_systemd_service.py'
Dec 06 06:51:02 compute-0 sudo[238603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:02 compute-0 python3.9[238605]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:51:02 compute-0 sudo[238603]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:03 compute-0 ceph-mon[74339]: pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:51:03.799 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:51:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:51:03.801 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:51:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:51:03.801 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:51:04 compute-0 sudo[238756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjephvubaojfhanruigwbozjiovchejb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003863.8102474-2295-84168919377519/AnsiballZ_file.py'
Dec 06 06:51:04 compute-0 sudo[238756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:04 compute-0 python3.9[238758]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:51:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:04.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:51:04 compute-0 sudo[238756]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:51:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:04.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:51:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:04 compute-0 sudo[238909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzvdtumvdwbsioicsmjcgqkgwumhkewj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003864.3772652-2295-184437515407991/AnsiballZ_file.py'
Dec 06 06:51:04 compute-0 sudo[238909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:04 compute-0 ceph-mon[74339]: pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:04 compute-0 python3.9[238911]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:04 compute-0 sudo[238909]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:05 compute-0 sudo[239061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmdgoghucyabghanwcjkstmryapjzkdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003865.1020792-2295-173691990828104/AnsiballZ_file.py'
Dec 06 06:51:05 compute-0 sudo[239061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:05 compute-0 python3.9[239063]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:05 compute-0 sudo[239061]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:05 compute-0 sudo[239213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlkvuhcqokchmtfrqwgzjmurkgclxhhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003865.6991675-2295-197331569376266/AnsiballZ_file.py'
Dec 06 06:51:05 compute-0 sudo[239213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:06 compute-0 python3.9[239215]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:06 compute-0 sudo[239213]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:06.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:06.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:06 compute-0 sudo[239366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzhuallvpknwjhremdgikbyertqgeive ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003866.3466158-2295-164823484224750/AnsiballZ_file.py'
Dec 06 06:51:06 compute-0 sudo[239366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 06:51:06 compute-0 ceph-mon[74339]: pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 06:51:06 compute-0 python3.9[239368]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:06 compute-0 sudo[239366]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:07 compute-0 sudo[239535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fomgtjjdicwmfklobapexxlzjafkldpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003866.9670572-2295-210915533325665/AnsiballZ_file.py'
Dec 06 06:51:07 compute-0 sudo[239535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:07 compute-0 podman[239492]: 2025-12-06 06:51:07.260148785 +0000 UTC m=+0.074491789 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 06:51:07 compute-0 python3.9[239540]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:07 compute-0 sudo[239535]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:07 compute-0 sudo[239690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfkdagtlltrcllzwdiglcuhlzlnzwjch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003867.634736-2295-280136035348698/AnsiballZ_file.py'
Dec 06 06:51:07 compute-0 sudo[239690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:08 compute-0 python3.9[239692]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:08 compute-0 sudo[239690]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:51:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:08.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:51:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:08.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:08 compute-0 sudo[239843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndzrnxaidrpurzwdmaitofitwtguaefj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003868.2871983-2295-91037088813064/AnsiballZ_file.py'
Dec 06 06:51:08 compute-0 sudo[239843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Dec 06 06:51:08 compute-0 python3.9[239845]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:08 compute-0 sudo[239843]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:08 compute-0 ceph-mon[74339]: pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Dec 06 06:51:09 compute-0 sudo[239995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evhrfclkbsrriiracodembkxpbtutzwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003869.1971462-2466-155848749991311/AnsiballZ_file.py'
Dec 06 06:51:09 compute-0 sudo[239995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:09 compute-0 python3.9[239997]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:09 compute-0 sudo[239995]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:10 compute-0 sudo[240147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpyijqrpbdedphbueiiwygomxstdukmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003869.7676497-2466-247660853314352/AnsiballZ_file.py'
Dec 06 06:51:10 compute-0 sudo[240147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:10 compute-0 python3.9[240149]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:10 compute-0 sudo[240147]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:10.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:10.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:10 compute-0 sudo[240209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:51:10 compute-0 sudo[240209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:10 compute-0 sudo[240209]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:10 compute-0 podman[240252]: 2025-12-06 06:51:10.545964465 +0000 UTC m=+0.051996058 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 06:51:10 compute-0 sudo[240281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:51:10 compute-0 sudo[240281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:10 compute-0 sudo[240281]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:10 compute-0 sudo[240370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abdwncafjieupkkechcxwllttsfejuhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003870.3810873-2466-119506122563239/AnsiballZ_file.py'
Dec 06 06:51:10 compute-0 sudo[240370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Dec 06 06:51:10 compute-0 python3.9[240372]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:10 compute-0 sudo[240370]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:11 compute-0 sudo[240522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlfruhznlsrqohjxbkaaexzvlavkguwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003870.9808104-2466-6367641205234/AnsiballZ_file.py'
Dec 06 06:51:11 compute-0 sudo[240522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:11 compute-0 ceph-mon[74339]: pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Dec 06 06:51:11 compute-0 python3.9[240524]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:11 compute-0 sudo[240522]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:11 compute-0 sudo[240674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypltluatkkpcliceqkjjgrylqzscvpzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003871.6288812-2466-69282439112836/AnsiballZ_file.py'
Dec 06 06:51:11 compute-0 sudo[240674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:12 compute-0 python3.9[240676]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:12 compute-0 sudo[240674]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:12.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:12.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:12 compute-0 sudo[240827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlmrtbxkfotmqkmfrcmytetvlbvvayes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003872.2250898-2466-216469345383027/AnsiballZ_file.py'
Dec 06 06:51:12 compute-0 sudo[240827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Dec 06 06:51:12 compute-0 python3.9[240829]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:12 compute-0 sudo[240827]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:12 compute-0 ceph-mon[74339]: pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Dec 06 06:51:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:51:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:51:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:51:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:51:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:51:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:51:13 compute-0 sudo[240979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onelmjnhikwekwpjksfhvnbmrqoewqkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003872.866338-2466-191748555919260/AnsiballZ_file.py'
Dec 06 06:51:13 compute-0 sudo[240979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:13 compute-0 python3.9[240981]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:13 compute-0 sudo[240979]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:13 compute-0 sudo[241131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljpzudwtzcyhgvrumvvyvsgshuptjodx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003873.692957-2466-157294641074671/AnsiballZ_file.py'
Dec 06 06:51:13 compute-0 sudo[241131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:14 compute-0 python3.9[241133]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:14 compute-0 sudo[241131]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:14.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:14.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 0 B/s wr, 83 op/s
Dec 06 06:51:15 compute-0 ceph-mon[74339]: pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 0 B/s wr, 83 op/s
Dec 06 06:51:15 compute-0 sudo[241284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lueczefwvocweuagotjumdllnwklxzyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003875.2995076-2640-98665418860031/AnsiballZ_command.py'
Dec 06 06:51:15 compute-0 sudo[241284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:15 compute-0 python3.9[241286]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:15 compute-0 sudo[241284]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:16.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:51:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:16.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.434465) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876434522, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1944, "num_deletes": 257, "total_data_size": 3694562, "memory_usage": 3750336, "flush_reason": "Manual Compaction"}
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876461346, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 3620560, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15868, "largest_seqno": 17811, "table_properties": {"data_size": 3611698, "index_size": 5548, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16929, "raw_average_key_size": 19, "raw_value_size": 3594216, "raw_average_value_size": 4088, "num_data_blocks": 248, "num_entries": 879, "num_filter_entries": 879, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003651, "oldest_key_time": 1765003651, "file_creation_time": 1765003876, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 27024 microseconds, and 15716 cpu microseconds.
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.461488) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 3620560 bytes OK
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.461533) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.462955) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.462987) EVENT_LOG_v1 {"time_micros": 1765003876462983, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.463003) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 3686709, prev total WAL file size 3686709, number of live WAL files 2.
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.464469) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323534' seq:0, type:0; will stop at (end)
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(3535KB)], [35(7407KB)]
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876464577, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 11205487, "oldest_snapshot_seqno": -1}
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4764 keys, 10809917 bytes, temperature: kUnknown
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876572855, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 10809917, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10774684, "index_size": 22195, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 119655, "raw_average_key_size": 25, "raw_value_size": 10685180, "raw_average_value_size": 2242, "num_data_blocks": 921, "num_entries": 4764, "num_filter_entries": 4764, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765003876, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.573092) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 10809917 bytes
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.574558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.4 rd, 99.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 7.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(6.1) write-amplify(3.0) OK, records in: 5295, records dropped: 531 output_compression: NoCompression
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.574591) EVENT_LOG_v1 {"time_micros": 1765003876574576, "job": 16, "event": "compaction_finished", "compaction_time_micros": 108344, "compaction_time_cpu_micros": 52058, "output_level": 6, "num_output_files": 1, "total_output_size": 10809917, "num_input_records": 5295, "num_output_records": 4764, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876575948, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003876578780, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.464203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.578959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.578967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.578971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.578974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:16.578978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:16 compute-0 python3.9[241439]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 06:51:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 111 op/s
Dec 06 06:51:17 compute-0 sudo[241589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sefrwdaslmwlgqyshqziuefjzpewbxvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003876.9745853-2694-114213902980005/AnsiballZ_systemd_service.py'
Dec 06 06:51:17 compute-0 sudo[241589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:17 compute-0 python3.9[241591]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:51:17 compute-0 systemd[1]: Reloading.
Dec 06 06:51:17 compute-0 systemd-sysv-generator[241618]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:51:17 compute-0 systemd-rc-local-generator[241611]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:51:17 compute-0 ceph-mon[74339]: pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 111 op/s
Dec 06 06:51:18 compute-0 sudo[241589]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:18.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:51:18
Dec 06 06:51:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:51:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:51:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'volumes', 'vms', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Dec 06 06:51:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:51:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:18.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:18 compute-0 sudo[241780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcmfvsnpggqorygvqjtfxqjatfbacfuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003878.2570956-2718-76745886995544/AnsiballZ_command.py'
Dec 06 06:51:18 compute-0 sudo[241780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 84 KiB/s rd, 0 B/s wr, 140 op/s
Dec 06 06:51:18 compute-0 python3.9[241782]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:18 compute-0 sudo[241780]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:19 compute-0 sudo[241933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzklbenwbhzfsfntiyhhdiqijtmjgmgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003878.9037306-2718-191392980400311/AnsiballZ_command.py'
Dec 06 06:51:19 compute-0 sudo[241933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:19 compute-0 ceph-mon[74339]: pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 84 KiB/s rd, 0 B/s wr, 140 op/s
Dec 06 06:51:19 compute-0 python3.9[241935]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:19 compute-0 sudo[241933]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:19 compute-0 sudo[242086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhawkksupwffapadfgvpqytgdyzrfadk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003879.5280533-2718-130421728540919/AnsiballZ_command.py'
Dec 06 06:51:19 compute-0 sudo[242086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:20 compute-0 python3.9[242088]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:20 compute-0 sudo[242086]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:20.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:20.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:20 compute-0 sudo[242240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbnqbgebnaczdwisucburygrdgvfpvyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003880.1872668-2718-207740377716929/AnsiballZ_command.py'
Dec 06 06:51:20 compute-0 sudo[242240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:20 compute-0 python3.9[242242]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:20 compute-0 sudo[242240]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 0 B/s wr, 117 op/s
Dec 06 06:51:20 compute-0 ceph-mon[74339]: pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 0 B/s wr, 117 op/s
Dec 06 06:51:21 compute-0 sudo[242393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zudesavwbehhksvymtnndqcpbwnjkjlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003880.7500513-2718-237069687299236/AnsiballZ_command.py'
Dec 06 06:51:21 compute-0 sudo[242393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:21 compute-0 python3.9[242395]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:21 compute-0 sudo[242393]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:21 compute-0 sudo[242546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suaxqujmlugfosymojzfcsevnvftfygz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003881.3232315-2718-64337083308272/AnsiballZ_command.py'
Dec 06 06:51:21 compute-0 sudo[242546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:21 compute-0 python3.9[242548]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:21 compute-0 sudo[242546]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:22.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:22 compute-0 sudo[242699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlznenzrnnquslgpyhkfdwpciicbxsjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003882.038033-2718-107477454110106/AnsiballZ_command.py'
Dec 06 06:51:22 compute-0 sudo[242699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:22.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:22 compute-0 python3.9[242701]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:22 compute-0 sudo[242699]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 0 B/s wr, 148 op/s
Dec 06 06:51:22 compute-0 ceph-mon[74339]: pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 89 KiB/s rd, 0 B/s wr, 148 op/s
Dec 06 06:51:22 compute-0 sudo[242853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rojqvrswbhmgbzrdmmgdhxdlfebvrras ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003882.61644-2718-272975500749009/AnsiballZ_command.py'
Dec 06 06:51:22 compute-0 sudo[242853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:23 compute-0 python3.9[242855]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:51:23 compute-0 sudo[242853]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:51:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:24.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:24.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:24 compute-0 sudo[243007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thjcsdeumxxtgntnerhhoeilgzpiwenj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003884.3099012-2925-181222436948054/AnsiballZ_file.py'
Dec 06 06:51:24 compute-0 sudo[243007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 112 op/s
Dec 06 06:51:24 compute-0 python3.9[243009]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:24 compute-0 sudo[243007]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:25 compute-0 sudo[243159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fotsdsamiubqynywrfiadgsbxtviurns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003884.9565306-2925-275387106202941/AnsiballZ_file.py'
Dec 06 06:51:25 compute-0 sudo[243159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:25 compute-0 python3.9[243161]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:51:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:51:25 compute-0 sudo[243159]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:25 compute-0 sudo[243311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyimfqwqsphahupyertupnzlhehzdkot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003885.7017207-2925-55811474645096/AnsiballZ_file.py'
Dec 06 06:51:25 compute-0 sudo[243311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:26 compute-0 python3.9[243313]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:26 compute-0 sudo[243311]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:26.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:26.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:26 compute-0 ceph-mon[74339]: pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 112 op/s
Dec 06 06:51:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 0 B/s wr, 95 op/s
Dec 06 06:51:26 compute-0 sudo[243464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auzudnzscexeamacljdayusczrnlmeez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003886.4058135-2991-265063269630774/AnsiballZ_file.py'
Dec 06 06:51:26 compute-0 sudo[243464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:26 compute-0 python3.9[243466]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:26 compute-0 sudo[243464]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:27 compute-0 sudo[243616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lktuvfegwqlbgqwpuzrncomquhzngswf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003887.0881577-2991-11452654134483/AnsiballZ_file.py'
Dec 06 06:51:27 compute-0 sudo[243616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:27 compute-0 python3.9[243618]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:27 compute-0 sudo[243616]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:27 compute-0 ceph-mon[74339]: pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 0 B/s wr, 95 op/s
Dec 06 06:51:28 compute-0 sudo[243768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izpwfnvyvupbfvnumvugkvbmftxnzpun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003887.7441454-2991-148111200976371/AnsiballZ_file.py'
Dec 06 06:51:28 compute-0 sudo[243768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:28 compute-0 python3.9[243770]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:28 compute-0 sudo[243768]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:51:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:28.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:51:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:51:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:51:28 compute-0 sudo[243921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkjqrqffzmqqlfmuuwvjfvxkofexfmhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003888.4280398-2991-247283455043831/AnsiballZ_file.py'
Dec 06 06:51:28 compute-0 sudo[243921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Dec 06 06:51:28 compute-0 python3.9[243923]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:28 compute-0 sudo[243921]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:29 compute-0 ceph-mon[74339]: pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Dec 06 06:51:29 compute-0 sudo[244073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfyzjzepilfvvgvgbzhfhlptbqcpnrke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003889.0361366-2991-57555194008302/AnsiballZ_file.py'
Dec 06 06:51:29 compute-0 sudo[244073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:29 compute-0 python3.9[244075]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:29 compute-0 sudo[244073]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.598155) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889598205, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 357, "num_deletes": 251, "total_data_size": 239303, "memory_usage": 246360, "flush_reason": "Manual Compaction"}
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889690508, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 237057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17813, "largest_seqno": 18168, "table_properties": {"data_size": 234849, "index_size": 372, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5454, "raw_average_key_size": 18, "raw_value_size": 230534, "raw_average_value_size": 781, "num_data_blocks": 17, "num_entries": 295, "num_filter_entries": 295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003877, "oldest_key_time": 1765003877, "file_creation_time": 1765003889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 92397 microseconds, and 1358 cpu microseconds.
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.690564) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 237057 bytes OK
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.690585) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.692762) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.692821) EVENT_LOG_v1 {"time_micros": 1765003889692815, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.692839) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 236949, prev total WAL file size 236949, number of live WAL files 2.
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.693342) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(231KB)], [38(10MB)]
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889693370, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11046974, "oldest_snapshot_seqno": -1}
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4549 keys, 8962466 bytes, temperature: kUnknown
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889779065, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8962466, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8930377, "index_size": 19627, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 115855, "raw_average_key_size": 25, "raw_value_size": 8846182, "raw_average_value_size": 1944, "num_data_blocks": 806, "num_entries": 4549, "num_filter_entries": 4549, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765003889, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.779450) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8962466 bytes
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.781275) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.7 rd, 104.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.3 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(84.4) write-amplify(37.8) OK, records in: 5059, records dropped: 510 output_compression: NoCompression
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.781334) EVENT_LOG_v1 {"time_micros": 1765003889781292, "job": 18, "event": "compaction_finished", "compaction_time_micros": 85842, "compaction_time_cpu_micros": 37166, "output_level": 6, "num_output_files": 1, "total_output_size": 8962466, "num_input_records": 5059, "num_output_records": 4549, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889781551, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765003889785240, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.693220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.785323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.785328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.785329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.785331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:51:29.785332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:51:30 compute-0 sudo[244225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nifncmsxcnycmcnzqikuvpppdkblxvlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003889.8420308-2991-157130398899653/AnsiballZ_file.py'
Dec 06 06:51:30 compute-0 sudo[244225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:30.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:30 compute-0 python3.9[244227]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:30 compute-0 sudo[244225]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:30.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:30 compute-0 sudo[244304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:51:30 compute-0 sudo[244304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:30 compute-0 sudo[244304]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:30 compute-0 sudo[244354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:51:30 compute-0 sudo[244354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:30 compute-0 sudo[244354]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Dec 06 06:51:30 compute-0 podman[244347]: 2025-12-06 06:51:30.769299287 +0000 UTC m=+0.114337506 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 06:51:30 compute-0 sudo[244455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqzbtdxsqoewmtcckmjfnrsgvjioqgqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003890.506506-2991-122368812471849/AnsiballZ_file.py'
Dec 06 06:51:30 compute-0 sudo[244455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:30 compute-0 ceph-mon[74339]: pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Dec 06 06:51:30 compute-0 python3.9[244457]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:30 compute-0 sudo[244455]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:32.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:32.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Dec 06 06:51:32 compute-0 ceph-mon[74339]: pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Dec 06 06:51:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:34.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:34.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 06:51:34 compute-0 ceph-mon[74339]: pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 06:51:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:36.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:36 compute-0 sudo[244611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldiyzyegnhifxepygxmbkbhboesegzpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003895.9515572-3316-163929537856282/AnsiballZ_getent.py'
Dec 06 06:51:36 compute-0 sudo[244611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:36.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:36 compute-0 python3.9[244613]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 06 06:51:36 compute-0 sudo[244611]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:36 compute-0 ceph-mon[74339]: pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:37 compute-0 sudo[244764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-timrapnyhxsjfhbzflotemcgvtxzwyvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003896.7690363-3340-280432386602264/AnsiballZ_group.py'
Dec 06 06:51:37 compute-0 sudo[244764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:37 compute-0 podman[244767]: 2025-12-06 06:51:37.416354945 +0000 UTC m=+0.066455439 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 06:51:37 compute-0 python3.9[244766]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 06:51:37 compute-0 groupadd[244787]: group added to /etc/group: name=nova, GID=42436
Dec 06 06:51:37 compute-0 groupadd[244787]: group added to /etc/gshadow: name=nova
Dec 06 06:51:37 compute-0 groupadd[244787]: new group: name=nova, GID=42436
Dec 06 06:51:37 compute-0 sudo[244764]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:38.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:38.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:38 compute-0 ceph-mon[74339]: pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:39 compute-0 sudo[244943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gafeusttwtncnvcrkzulwcgvtetgarbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003898.6674557-3364-101411413978953/AnsiballZ_user.py'
Dec 06 06:51:39 compute-0 sudo[244943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:39 compute-0 python3.9[244945]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 06:51:39 compute-0 useradd[244947]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 06 06:51:39 compute-0 useradd[244947]: add 'nova' to group 'libvirt'
Dec 06 06:51:39 compute-0 useradd[244947]: add 'nova' to shadow group 'libvirt'
Dec 06 06:51:39 compute-0 sudo[244943]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:40.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:40 compute-0 sshd-session[244978]: Accepted publickey for zuul from 192.168.122.30 port 36470 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 06:51:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:40.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:40 compute-0 systemd-logind[798]: New session 52 of user zuul.
Dec 06 06:51:40 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec 06 06:51:40 compute-0 sshd-session[244978]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 06:51:40 compute-0 sshd-session[244982]: Received disconnect from 192.168.122.30 port 36470:11: disconnected by user
Dec 06 06:51:40 compute-0 sshd-session[244982]: Disconnected from user zuul 192.168.122.30 port 36470
Dec 06 06:51:40 compute-0 sshd-session[244978]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:51:40 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Dec 06 06:51:40 compute-0 systemd-logind[798]: Session 52 logged out. Waiting for processes to exit.
Dec 06 06:51:40 compute-0 systemd-logind[798]: Removed session 52.
Dec 06 06:51:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:40 compute-0 podman[245007]: 2025-12-06 06:51:40.723697882 +0000 UTC m=+0.077298021 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:51:40 compute-0 ceph-mon[74339]: pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:41 compute-0 python3.9[245152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:41 compute-0 python3.9[245273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003900.7921228-3439-121566855674045/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:42.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:42.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:42 compute-0 python3.9[245423]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:42 compute-0 python3.9[245500]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:51:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:51:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:51:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:51:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:51:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:51:43 compute-0 python3.9[245650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:43 compute-0 ceph-mon[74339]: pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:44 compute-0 python3.9[245771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003903.071697-3439-242676553002994/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:44.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:44.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:44 compute-0 python3.9[245922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:45 compute-0 ceph-mon[74339]: pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:45 compute-0 python3.9[246043]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003904.2111988-3439-207542392763334/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:45 compute-0 python3.9[246193]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:46 compute-0 python3.9[246314]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003905.3177884-3439-148158408613463/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:46.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:46.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:46 compute-0 python3.9[246465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:47 compute-0 ceph-mon[74339]: pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:47 compute-0 python3.9[246586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003906.4041996-3439-130427922001447/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:51:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:48.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:51:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:48.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:48 compute-0 ceph-mon[74339]: pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:48 compute-0 sudo[246737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uirmyiqegmtqvbepvbwvrkoayncovfui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003908.5214088-3688-177307769784031/AnsiballZ_file.py'
Dec 06 06:51:48 compute-0 sudo[246737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:49 compute-0 python3.9[246739]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:49 compute-0 sudo[246737]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:49 compute-0 sudo[246889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnsbatusiegfvfagrbjsxezwgbxqtmak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003909.2793188-3712-272370429747584/AnsiballZ_copy.py'
Dec 06 06:51:49 compute-0 sudo[246889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:49 compute-0 python3.9[246891]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:51:49 compute-0 sudo[246889]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:50 compute-0 sudo[247041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pctylrlnfydmmlogrvavrdyinrwoumvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003910.0562453-3736-92781489224239/AnsiballZ_stat.py'
Dec 06 06:51:50 compute-0 sudo[247041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:50.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:50.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:50 compute-0 python3.9[247043]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:51:50 compute-0 sudo[247041]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:50 compute-0 sudo[247069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:51:50 compute-0 sudo[247069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:50 compute-0 sudo[247069]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:50 compute-0 sudo[247094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:51:50 compute-0 sudo[247094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:50 compute-0 sudo[247094]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:51 compute-0 sudo[247244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnqqnkegioiwsgtrhfwwgviyxhpmscxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003911.3136106-3760-27968894457890/AnsiballZ_stat.py'
Dec 06 06:51:51 compute-0 sudo[247244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:51 compute-0 python3.9[247246]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:51 compute-0 sudo[247244]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:52 compute-0 ceph-mon[74339]: pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:52 compute-0 sudo[247367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rapqpgpfcbtmcfegveqdqeqpanmycbms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003911.3136106-3760-27968894457890/AnsiballZ_copy.py'
Dec 06 06:51:52 compute-0 sudo[247367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:52 compute-0 python3.9[247369]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765003911.3136106-3760-27968894457890/.source _original_basename=.e02w_ozx follow=False checksum=9687232396a6d4692f31dbc3efe019f9dd975ddb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 06 06:51:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:52.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:52 compute-0 sudo[247367]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:52.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:53 compute-0 ceph-mon[74339]: pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:54 compute-0 python3.9[247522]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:51:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:51:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:54.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:51:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:51:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:54.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:51:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:54 compute-0 python3.9[247675]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:54 compute-0 ceph-mon[74339]: pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:55 compute-0 python3.9[247796]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003914.3537974-3838-262381083774377/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:56 compute-0 python3.9[247946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 06:51:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:51:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:56.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:51:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:56.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:51:56 compute-0 python3.9[248068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765003915.7520514-3883-113258499615178/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 06:51:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:56 compute-0 ceph-mon[74339]: pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:57 compute-0 sudo[248218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkqeviplyprykbluwvjevhxrvukmdume ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003917.2872996-3934-13768053800885/AnsiballZ_container_config_data.py'
Dec 06 06:51:57 compute-0 sudo[248218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:57 compute-0 python3.9[248220]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 06 06:51:57 compute-0 sudo[248218]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:51:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:51:58.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:51:58 compute-0 sudo[248371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fylixgwalerbbrbnksylpiplljdpbctf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003918.1955466-3961-215150568558885/AnsiballZ_container_config_hash.py'
Dec 06 06:51:58 compute-0 sudo[248371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:51:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:51:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:51:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:51:58.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:51:58 compute-0 sudo[248374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:51:58 compute-0 sudo[248374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:58 compute-0 sudo[248374]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:58 compute-0 sudo[248399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:51:58 compute-0 sudo[248399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:58 compute-0 sudo[248399]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:58 compute-0 python3.9[248373]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 06:51:58 compute-0 sudo[248371]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:58 compute-0 sudo[248424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:51:58 compute-0 sudo[248424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:58 compute-0 sudo[248424]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:58 compute-0 sudo[248449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:51:58 compute-0 sudo[248449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:59 compute-0 ceph-mon[74339]: pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:51:59 compute-0 sudo[248449]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:51:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:51:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:51:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:51:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:51:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:51:59 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 70ebe0f0-583a-4711-9f91-4fd94c4cf2d5 does not exist
Dec 06 06:51:59 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a32ff796-6a63-4b14-9c11-94254381642f does not exist
Dec 06 06:51:59 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 053c2313-ad9f-4df2-8082-8fa6b9dd19d8 does not exist
Dec 06 06:51:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:51:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:51:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:51:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:51:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:51:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:51:59 compute-0 sudo[248528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:51:59 compute-0 sudo[248528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:59 compute-0 sudo[248528]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:59 compute-0 sudo[248553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:51:59 compute-0 sudo[248553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:59 compute-0 sudo[248553]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:59 compute-0 sudo[248578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:51:59 compute-0 sudo[248578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:59 compute-0 sudo[248578]: pam_unix(sudo:session): session closed for user root
Dec 06 06:51:59 compute-0 sudo[248603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:51:59 compute-0 sudo[248603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:51:59 compute-0 podman[248700]: 2025-12-06 06:51:59.965452019 +0000 UTC m=+0.039310002 container create 82c30b305deb2e8b7951f864b4988663da6cde885dffc24738f5a801ec2b5869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 06:52:00 compute-0 systemd[1]: Started libpod-conmon-82c30b305deb2e8b7951f864b4988663da6cde885dffc24738f5a801ec2b5869.scope.
Dec 06 06:52:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:52:00 compute-0 podman[248700]: 2025-12-06 06:51:59.948205032 +0000 UTC m=+0.022063035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:52:00 compute-0 podman[248700]: 2025-12-06 06:52:00.060221131 +0000 UTC m=+0.134079144 container init 82c30b305deb2e8b7951f864b4988663da6cde885dffc24738f5a801ec2b5869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Dec 06 06:52:00 compute-0 podman[248700]: 2025-12-06 06:52:00.066432961 +0000 UTC m=+0.140290944 container start 82c30b305deb2e8b7951f864b4988663da6cde885dffc24738f5a801ec2b5869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:52:00 compute-0 podman[248700]: 2025-12-06 06:52:00.071156815 +0000 UTC m=+0.145014828 container attach 82c30b305deb2e8b7951f864b4988663da6cde885dffc24738f5a801ec2b5869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:52:00 compute-0 elastic_lalande[248761]: 167 167
Dec 06 06:52:00 compute-0 systemd[1]: libpod-82c30b305deb2e8b7951f864b4988663da6cde885dffc24738f5a801ec2b5869.scope: Deactivated successfully.
Dec 06 06:52:00 compute-0 podman[248700]: 2025-12-06 06:52:00.074038205 +0000 UTC m=+0.147896188 container died 82c30b305deb2e8b7951f864b4988663da6cde885dffc24738f5a801ec2b5869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 06:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b290d4385e0e505272db62139233480e2382b07f31d039c08d0a8280ffb5c935-merged.mount: Deactivated successfully.
Dec 06 06:52:00 compute-0 podman[248700]: 2025-12-06 06:52:00.118621873 +0000 UTC m=+0.192479856 container remove 82c30b305deb2e8b7951f864b4988663da6cde885dffc24738f5a801ec2b5869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:52:00 compute-0 systemd[1]: libpod-conmon-82c30b305deb2e8b7951f864b4988663da6cde885dffc24738f5a801ec2b5869.scope: Deactivated successfully.
Dec 06 06:52:00 compute-0 sudo[248829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oarmckvxgxbxnztubibfvgmaesyxrcsn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003919.88953-3991-277969743399038/AnsiballZ_edpm_container_manage.py'
Dec 06 06:52:00 compute-0 sudo[248829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:52:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:52:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:52:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:52:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:52:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:52:00 compute-0 podman[248837]: 2025-12-06 06:52:00.292262752 +0000 UTC m=+0.045593023 container create edee6215748051ea2bd09f2aadbae21c3280bbdc5547661950349981629478e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_knuth, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:52:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:52:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:00.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:52:00 compute-0 systemd[1]: Started libpod-conmon-edee6215748051ea2bd09f2aadbae21c3280bbdc5547661950349981629478e5.scope.
Dec 06 06:52:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec368e5c2dc1b4ff69a7eb2d1f8e423dc935909847d544ab6f36148091fd09f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec368e5c2dc1b4ff69a7eb2d1f8e423dc935909847d544ab6f36148091fd09f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec368e5c2dc1b4ff69a7eb2d1f8e423dc935909847d544ab6f36148091fd09f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec368e5c2dc1b4ff69a7eb2d1f8e423dc935909847d544ab6f36148091fd09f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ec368e5c2dc1b4ff69a7eb2d1f8e423dc935909847d544ab6f36148091fd09f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:00 compute-0 podman[248837]: 2025-12-06 06:52:00.276694485 +0000 UTC m=+0.030024776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:52:00 compute-0 podman[248837]: 2025-12-06 06:52:00.37488454 +0000 UTC m=+0.128214831 container init edee6215748051ea2bd09f2aadbae21c3280bbdc5547661950349981629478e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_knuth, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:52:00 compute-0 podman[248837]: 2025-12-06 06:52:00.385001795 +0000 UTC m=+0.138332086 container start edee6215748051ea2bd09f2aadbae21c3280bbdc5547661950349981629478e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:52:00 compute-0 podman[248837]: 2025-12-06 06:52:00.388484789 +0000 UTC m=+0.141815120 container attach edee6215748051ea2bd09f2aadbae21c3280bbdc5547661950349981629478e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 06:52:00 compute-0 python3[248831]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 06:52:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:00.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:01 compute-0 youthful_knuth[248854]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:52:01 compute-0 youthful_knuth[248854]: --> relative data size: 1.0
Dec 06 06:52:01 compute-0 youthful_knuth[248854]: --> All data devices are unavailable
Dec 06 06:52:01 compute-0 systemd[1]: libpod-edee6215748051ea2bd09f2aadbae21c3280bbdc5547661950349981629478e5.scope: Deactivated successfully.
Dec 06 06:52:01 compute-0 podman[248837]: 2025-12-06 06:52:01.206821237 +0000 UTC m=+0.960151528 container died edee6215748051ea2bd09f2aadbae21c3280bbdc5547661950349981629478e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:52:01 compute-0 ceph-mon[74339]: pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ec368e5c2dc1b4ff69a7eb2d1f8e423dc935909847d544ab6f36148091fd09f-merged.mount: Deactivated successfully.
Dec 06 06:52:01 compute-0 podman[248837]: 2025-12-06 06:52:01.271681065 +0000 UTC m=+1.025011336 container remove edee6215748051ea2bd09f2aadbae21c3280bbdc5547661950349981629478e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_knuth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:52:01 compute-0 systemd[1]: libpod-conmon-edee6215748051ea2bd09f2aadbae21c3280bbdc5547661950349981629478e5.scope: Deactivated successfully.
Dec 06 06:52:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:01 compute-0 sudo[248603]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:01 compute-0 podman[248902]: 2025-12-06 06:52:01.331831871 +0000 UTC m=+0.093060532 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 06 06:52:01 compute-0 sudo[248933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:52:01 compute-0 sudo[248933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:01 compute-0 sudo[248933]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:01 compute-0 sudo[248963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:52:01 compute-0 sudo[248963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:01 compute-0 sudo[248963]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:01 compute-0 sudo[248992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:52:01 compute-0 sudo[248992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:01 compute-0 sudo[248992]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:01 compute-0 sudo[249017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:52:01 compute-0 sudo[249017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:02 compute-0 podman[249078]: 2025-12-06 06:52:02.015150544 +0000 UTC m=+0.049086468 container create 5a6be0f2117d2842339432ff408cd3e99a1a7d24f33e8ae57b6644c50e740d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sinoussi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:52:02 compute-0 systemd[1]: Started libpod-conmon-5a6be0f2117d2842339432ff408cd3e99a1a7d24f33e8ae57b6644c50e740d8b.scope.
Dec 06 06:52:02 compute-0 podman[249078]: 2025-12-06 06:52:01.992185179 +0000 UTC m=+0.026121123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:52:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:52:02 compute-0 podman[249078]: 2025-12-06 06:52:02.106249157 +0000 UTC m=+0.140185101 container init 5a6be0f2117d2842339432ff408cd3e99a1a7d24f33e8ae57b6644c50e740d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:52:02 compute-0 podman[249078]: 2025-12-06 06:52:02.113287968 +0000 UTC m=+0.147223892 container start 5a6be0f2117d2842339432ff408cd3e99a1a7d24f33e8ae57b6644c50e740d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sinoussi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:52:02 compute-0 podman[249078]: 2025-12-06 06:52:02.116136636 +0000 UTC m=+0.150072580 container attach 5a6be0f2117d2842339432ff408cd3e99a1a7d24f33e8ae57b6644c50e740d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sinoussi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:52:02 compute-0 adoring_sinoussi[249095]: 167 167
Dec 06 06:52:02 compute-0 systemd[1]: libpod-5a6be0f2117d2842339432ff408cd3e99a1a7d24f33e8ae57b6644c50e740d8b.scope: Deactivated successfully.
Dec 06 06:52:02 compute-0 podman[249078]: 2025-12-06 06:52:02.119966269 +0000 UTC m=+0.153902203 container died 5a6be0f2117d2842339432ff408cd3e99a1a7d24f33e8ae57b6644c50e740d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b96de503324f7b1fcb85213e38cd6607aac32e080f6be5e6e0875f39b7e0c919-merged.mount: Deactivated successfully.
Dec 06 06:52:02 compute-0 podman[249078]: 2025-12-06 06:52:02.180147224 +0000 UTC m=+0.214083148 container remove 5a6be0f2117d2842339432ff408cd3e99a1a7d24f33e8ae57b6644c50e740d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_sinoussi, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 06:52:02 compute-0 systemd[1]: libpod-conmon-5a6be0f2117d2842339432ff408cd3e99a1a7d24f33e8ae57b6644c50e740d8b.scope: Deactivated successfully.
Dec 06 06:52:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:02.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:02 compute-0 podman[249118]: 2025-12-06 06:52:02.402191083 +0000 UTC m=+0.064636113 container create 57ba7743415a00737c8a3e5e2bfadd4c71869003e334776808e6edd91e9f7d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 06:52:02 compute-0 systemd[1]: Started libpod-conmon-57ba7743415a00737c8a3e5e2bfadd4c71869003e334776808e6edd91e9f7d17.scope.
Dec 06 06:52:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:52:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:02.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:52:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c463eb9927cc7ae69f62fbaa16e2fa57fcddc4756a0a25313c6f5b14030285/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c463eb9927cc7ae69f62fbaa16e2fa57fcddc4756a0a25313c6f5b14030285/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c463eb9927cc7ae69f62fbaa16e2fa57fcddc4756a0a25313c6f5b14030285/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c463eb9927cc7ae69f62fbaa16e2fa57fcddc4756a0a25313c6f5b14030285/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:02 compute-0 podman[249118]: 2025-12-06 06:52:02.384176568 +0000 UTC m=+0.046621618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:52:02 compute-0 podman[249118]: 2025-12-06 06:52:02.553668396 +0000 UTC m=+0.216113456 container init 57ba7743415a00737c8a3e5e2bfadd4c71869003e334776808e6edd91e9f7d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 06:52:02 compute-0 podman[249118]: 2025-12-06 06:52:02.563272549 +0000 UTC m=+0.225717579 container start 57ba7743415a00737c8a3e5e2bfadd4c71869003e334776808e6edd91e9f7d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:52:02 compute-0 podman[249118]: 2025-12-06 06:52:02.573598238 +0000 UTC m=+0.236043268 container attach 57ba7743415a00737c8a3e5e2bfadd4c71869003e334776808e6edd91e9f7d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 06:52:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:03 compute-0 ceph-mon[74339]: pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:03 compute-0 goofy_feistel[249136]: {
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:     "0": [
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:         {
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             "devices": [
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "/dev/loop3"
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             ],
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             "lv_name": "ceph_lv0",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             "lv_size": "7511998464",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             "name": "ceph_lv0",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             "tags": {
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "ceph.cluster_name": "ceph",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "ceph.crush_device_class": "",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "ceph.encrypted": "0",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "ceph.osd_id": "0",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "ceph.type": "block",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:                 "ceph.vdo": "0"
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             },
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             "type": "block",
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:             "vg_name": "ceph_vg0"
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:         }
Dec 06 06:52:03 compute-0 goofy_feistel[249136]:     ]
Dec 06 06:52:03 compute-0 goofy_feistel[249136]: }
Dec 06 06:52:03 compute-0 systemd[1]: libpod-57ba7743415a00737c8a3e5e2bfadd4c71869003e334776808e6edd91e9f7d17.scope: Deactivated successfully.
Dec 06 06:52:03 compute-0 podman[249118]: 2025-12-06 06:52:03.389477137 +0000 UTC m=+1.051922167 container died 57ba7743415a00737c8a3e5e2bfadd4c71869003e334776808e6edd91e9f7d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_feistel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:52:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:52:03.800 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:52:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:52:03.802 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:52:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:52:03.802 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:52:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:04.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:04.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:05 compute-0 ceph-mon[74339]: pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:06.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:06.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:08.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:52:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:08.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:52:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:10.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:52:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:10.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:52:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-72c463eb9927cc7ae69f62fbaa16e2fa57fcddc4756a0a25313c6f5b14030285-merged.mount: Deactivated successfully.
Dec 06 06:52:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:10 compute-0 podman[249118]: 2025-12-06 06:52:10.7835892 +0000 UTC m=+8.446034230 container remove 57ba7743415a00737c8a3e5e2bfadd4c71869003e334776808e6edd91e9f7d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_feistel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:52:10 compute-0 systemd[1]: libpod-conmon-57ba7743415a00737c8a3e5e2bfadd4c71869003e334776808e6edd91e9f7d17.scope: Deactivated successfully.
Dec 06 06:52:10 compute-0 podman[249198]: 2025-12-06 06:52:10.800892478 +0000 UTC m=+2.450972330 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:52:10 compute-0 sudo[249017]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:10 compute-0 podman[249227]: 2025-12-06 06:52:10.825188486 +0000 UTC m=+0.052857409 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 06:52:10 compute-0 podman[248873]: 2025-12-06 06:52:10.867721234 +0000 UTC m=+10.386770670 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 06 06:52:10 compute-0 sudo[249243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:52:10 compute-0 sudo[249243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:10 compute-0 sudo[249243]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:10 compute-0 sudo[249264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:52:10 compute-0 sudo[249264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:10 compute-0 sudo[249264]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:10 compute-0 sudo[249302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:52:10 compute-0 sudo[249302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:10 compute-0 sudo[249302]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:11 compute-0 sudo[249337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:52:11 compute-0 sudo[249337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:11 compute-0 sudo[249337]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:11 compute-0 podman[249340]: 2025-12-06 06:52:11.017165158 +0000 UTC m=+0.048051293 container create ef6b2891cae701a0bfc6de8c83b8eaff4dc4630dcc7cc32b0b4c8cef448a7aba (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:52:11 compute-0 podman[249340]: 2025-12-06 06:52:10.991355044 +0000 UTC m=+0.022241199 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 06 06:52:11 compute-0 python3[248831]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 06 06:52:11 compute-0 sudo[249362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:52:11 compute-0 sudo[249362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:11 compute-0 sudo[249362]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:11 compute-0 sudo[249406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:52:11 compute-0 sudo[249406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:11 compute-0 sudo[248829]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:11 compute-0 podman[249516]: 2025-12-06 06:52:11.448805596 +0000 UTC m=+0.042782896 container create d4b8c9bb0d743b3f885630dfd2a1c11c0963d6312b1042c3c76a609171e4db59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:52:11 compute-0 systemd[1]: Started libpod-conmon-d4b8c9bb0d743b3f885630dfd2a1c11c0963d6312b1042c3c76a609171e4db59.scope.
Dec 06 06:52:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:52:11 compute-0 podman[249516]: 2025-12-06 06:52:11.428090085 +0000 UTC m=+0.022067405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:52:11 compute-0 podman[249516]: 2025-12-06 06:52:11.536323663 +0000 UTC m=+0.130300983 container init d4b8c9bb0d743b3f885630dfd2a1c11c0963d6312b1042c3c76a609171e4db59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_burnell, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:52:11 compute-0 podman[249516]: 2025-12-06 06:52:11.547967054 +0000 UTC m=+0.141944394 container start d4b8c9bb0d743b3f885630dfd2a1c11c0963d6312b1042c3c76a609171e4db59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 06:52:11 compute-0 podman[249516]: 2025-12-06 06:52:11.552575255 +0000 UTC m=+0.146552555 container attach d4b8c9bb0d743b3f885630dfd2a1c11c0963d6312b1042c3c76a609171e4db59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_burnell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:52:11 compute-0 interesting_burnell[249533]: 167 167
Dec 06 06:52:11 compute-0 systemd[1]: libpod-d4b8c9bb0d743b3f885630dfd2a1c11c0963d6312b1042c3c76a609171e4db59.scope: Deactivated successfully.
Dec 06 06:52:11 compute-0 podman[249516]: 2025-12-06 06:52:11.554035051 +0000 UTC m=+0.148012361 container died d4b8c9bb0d743b3f885630dfd2a1c11c0963d6312b1042c3c76a609171e4db59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_burnell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:52:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcca81b184bac6fab3ed440d25f2b6e7d0e9d0414a66f4a0d014a4d6b36fff20-merged.mount: Deactivated successfully.
Dec 06 06:52:11 compute-0 podman[249516]: 2025-12-06 06:52:11.596811745 +0000 UTC m=+0.190789065 container remove d4b8c9bb0d743b3f885630dfd2a1c11c0963d6312b1042c3c76a609171e4db59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:52:11 compute-0 systemd[1]: libpod-conmon-d4b8c9bb0d743b3f885630dfd2a1c11c0963d6312b1042c3c76a609171e4db59.scope: Deactivated successfully.
Dec 06 06:52:11 compute-0 podman[249557]: 2025-12-06 06:52:11.811482956 +0000 UTC m=+0.058212449 container create bc04eb3d1e7a867a8427769e8928015be557bb6abb6abe6294a165b361ab54dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_antonelli, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 06:52:11 compute-0 systemd[1]: Started libpod-conmon-bc04eb3d1e7a867a8427769e8928015be557bb6abb6abe6294a165b361ab54dc.scope.
Dec 06 06:52:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f98fa7b04aec5ef42c0e46c5ace476f4d5bad195e6cfcf8a24c5642171007d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f98fa7b04aec5ef42c0e46c5ace476f4d5bad195e6cfcf8a24c5642171007d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f98fa7b04aec5ef42c0e46c5ace476f4d5bad195e6cfcf8a24c5642171007d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f98fa7b04aec5ef42c0e46c5ace476f4d5bad195e6cfcf8a24c5642171007d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:11 compute-0 podman[249557]: 2025-12-06 06:52:11.794831984 +0000 UTC m=+0.041561487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:52:11 compute-0 podman[249557]: 2025-12-06 06:52:11.890528298 +0000 UTC m=+0.137257801 container init bc04eb3d1e7a867a8427769e8928015be557bb6abb6abe6294a165b361ab54dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 06:52:11 compute-0 podman[249557]: 2025-12-06 06:52:11.898200113 +0000 UTC m=+0.144929606 container start bc04eb3d1e7a867a8427769e8928015be557bb6abb6abe6294a165b361ab54dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:52:11 compute-0 podman[249557]: 2025-12-06 06:52:11.90222699 +0000 UTC m=+0.148956483 container attach bc04eb3d1e7a867a8427769e8928015be557bb6abb6abe6294a165b361ab54dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_antonelli, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 06:52:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:12.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:12.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:12 compute-0 nifty_antonelli[249573]: {
Dec 06 06:52:12 compute-0 nifty_antonelli[249573]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:52:12 compute-0 nifty_antonelli[249573]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:52:12 compute-0 nifty_antonelli[249573]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:52:12 compute-0 nifty_antonelli[249573]:         "osd_id": 0,
Dec 06 06:52:12 compute-0 nifty_antonelli[249573]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:52:12 compute-0 nifty_antonelli[249573]:         "type": "bluestore"
Dec 06 06:52:12 compute-0 nifty_antonelli[249573]:     }
Dec 06 06:52:12 compute-0 nifty_antonelli[249573]: }
Dec 06 06:52:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:12 compute-0 systemd[1]: libpod-bc04eb3d1e7a867a8427769e8928015be557bb6abb6abe6294a165b361ab54dc.scope: Deactivated successfully.
Dec 06 06:52:12 compute-0 podman[249557]: 2025-12-06 06:52:12.743655608 +0000 UTC m=+0.990385121 container died bc04eb3d1e7a867a8427769e8928015be557bb6abb6abe6294a165b361ab54dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:52:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f98fa7b04aec5ef42c0e46c5ace476f4d5bad195e6cfcf8a24c5642171007d9-merged.mount: Deactivated successfully.
Dec 06 06:52:12 compute-0 podman[249557]: 2025-12-06 06:52:12.810511374 +0000 UTC m=+1.057240877 container remove bc04eb3d1e7a867a8427769e8928015be557bb6abb6abe6294a165b361ab54dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_antonelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:52:12 compute-0 systemd[1]: libpod-conmon-bc04eb3d1e7a867a8427769e8928015be557bb6abb6abe6294a165b361ab54dc.scope: Deactivated successfully.
Dec 06 06:52:12 compute-0 sudo[249406]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:52:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:52:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:52:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:52:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:52:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:52:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:52:13 compute-0 ceph-mon[74339]: pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:52:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:52:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:52:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 65fa3f66-7afa-46c6-9575-31a80772e335 does not exist
Dec 06 06:52:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9dc663cf-98ba-47b1-8f31-e7e4b755dbcf does not exist
Dec 06 06:52:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 49de8544-02f2-4c80-9141-1bd1106340d7 does not exist
Dec 06 06:52:13 compute-0 sudo[249607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:52:13 compute-0 sudo[249607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:13 compute-0 sudo[249607]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:13 compute-0 sudo[249632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:52:13 compute-0 sudo[249632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:13 compute-0 sudo[249632]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:14 compute-0 sudo[249782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smyydmklgxzojaknqlsqusunibjrecwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003933.7608721-4015-270469407216629/AnsiballZ_stat.py'
Dec 06 06:52:14 compute-0 sudo[249782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:14 compute-0 ceph-mon[74339]: pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:14 compute-0 ceph-mon[74339]: pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:14 compute-0 ceph-mon[74339]: pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:52:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:52:14 compute-0 python3.9[249784]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:52:14 compute-0 sudo[249782]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:14.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:52:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:14.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:52:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:15 compute-0 sudo[249937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwhioxpxjhhqtngwwmbpjosegmmmgdvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003934.9873843-4051-173460504066090/AnsiballZ_container_config_data.py'
Dec 06 06:52:15 compute-0 sudo[249937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:15 compute-0 python3.9[249939]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 06 06:52:15 compute-0 sudo[249937]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:16.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:16.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:17 compute-0 sudo[250090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gorcemmfrghparvqskwfbxlqzuntwstb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003937.5388353-4078-113363962382601/AnsiballZ_container_config_hash.py'
Dec 06 06:52:17 compute-0 sudo[250090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:52:18
Dec 06 06:52:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:52:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:52:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', 'images', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms']
Dec 06 06:52:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:52:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:52:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:18.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:52:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:18 compute-0 python3.9[250092]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 06:52:18 compute-0 sudo[250090]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:18 compute-0 ceph-mon[74339]: pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:19 compute-0 sudo[250243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsraajggmrnyxvdzrrartgclesoxeaqe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765003939.0088065-4108-6865530039713/AnsiballZ_edpm_container_manage.py'
Dec 06 06:52:19 compute-0 sudo[250243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:19 compute-0 python3[250245]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 06:52:19 compute-0 podman[250283]: 2025-12-06 06:52:19.671843284 +0000 UTC m=+0.019569805 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 06 06:52:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:20.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:20 compute-0 ceph-mon[74339]: pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:20 compute-0 ceph-mon[74339]: pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:20 compute-0 podman[250283]: 2025-12-06 06:52:20.439049326 +0000 UTC m=+0.786775867 container create 859d479a9b3c2135c5fe69277bc8a878d2992b1a27d61336884bac801d81bcd9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 06:52:20 compute-0 python3[250245]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 06 06:52:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:20.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:20 compute-0 sudo[250243]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:21 compute-0 sudo[250472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmcdxpfwlrrkojkbjgpdqxrvribmlypd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003940.7555225-4132-26084276217454/AnsiballZ_stat.py'
Dec 06 06:52:21 compute-0 sudo[250472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:21 compute-0 python3.9[250474]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:52:21 compute-0 sudo[250472]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:21 compute-0 ceph-mon[74339]: pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:21 compute-0 sudo[250626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgylvndcvlfabiccbucandomjbafokfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003941.5896432-4159-268121025374105/AnsiballZ_file.py'
Dec 06 06:52:21 compute-0 sudo[250626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:22 compute-0 python3.9[250628]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:52:22 compute-0 sudo[250626]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:22.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:52:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:22.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:52:22 compute-0 sudo[250778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwgdvmrynhczihdfdrgtzrjxristdjru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003942.1192915-4159-77100744432308/AnsiballZ_copy.py'
Dec 06 06:52:22 compute-0 sudo[250778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:22 compute-0 python3.9[250780]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003942.1192915-4159-77100744432308/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:52:22 compute-0 sudo[250778]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:22 compute-0 sudo[250854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxzsotrviptlpqbmolsxgjeuacdyfbcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003942.1192915-4159-77100744432308/AnsiballZ_systemd.py'
Dec 06 06:52:22 compute-0 sudo[250854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:23 compute-0 ceph-mon[74339]: pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:52:23 compute-0 python3.9[250856]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 06:52:23 compute-0 systemd[1]: Reloading.
Dec 06 06:52:23 compute-0 systemd-rc-local-generator[250877]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:52:23 compute-0 systemd-sysv-generator[250883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:52:23 compute-0 sudo[250854]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:23 compute-0 sudo[250965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqdsextlijegiuhbphbygvbmsmqtdhfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003942.1192915-4159-77100744432308/AnsiballZ_systemd.py'
Dec 06 06:52:23 compute-0 sudo[250965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:24 compute-0 python3.9[250967]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 06:52:24 compute-0 systemd[1]: Reloading.
Dec 06 06:52:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:24.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:24 compute-0 systemd-sysv-generator[251000]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 06:52:24 compute-0 systemd-rc-local-generator[250996]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:52:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:52:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:24.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:52:24 compute-0 systemd[1]: Starting nova_compute container...
Dec 06 06:52:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:52:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177dec6e89c61694d83e700e75ee2d7998e40d8bb92c78437d86aef45b1fe35f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177dec6e89c61694d83e700e75ee2d7998e40d8bb92c78437d86aef45b1fe35f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177dec6e89c61694d83e700e75ee2d7998e40d8bb92c78437d86aef45b1fe35f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177dec6e89c61694d83e700e75ee2d7998e40d8bb92c78437d86aef45b1fe35f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177dec6e89c61694d83e700e75ee2d7998e40d8bb92c78437d86aef45b1fe35f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:24 compute-0 podman[251008]: 2025-12-06 06:52:24.752193361 +0000 UTC m=+0.105807173 container init 859d479a9b3c2135c5fe69277bc8a878d2992b1a27d61336884bac801d81bcd9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 06:52:24 compute-0 podman[251008]: 2025-12-06 06:52:24.759685423 +0000 UTC m=+0.113299215 container start 859d479a9b3c2135c5fe69277bc8a878d2992b1a27d61336884bac801d81bcd9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 06:52:24 compute-0 podman[251008]: nova_compute
Dec 06 06:52:24 compute-0 nova_compute[251024]: + sudo -E kolla_set_configs
Dec 06 06:52:24 compute-0 systemd[1]: Started nova_compute container.
Dec 06 06:52:24 compute-0 sudo[250965]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:24 compute-0 ceph-mon[74339]: pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Validating config file
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Copying service configuration files
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Deleting /etc/ceph
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Creating directory /etc/ceph
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /etc/ceph
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Writing out command to execute
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:24 compute-0 nova_compute[251024]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 06:52:24 compute-0 nova_compute[251024]: ++ cat /run_command
Dec 06 06:52:24 compute-0 nova_compute[251024]: + CMD=nova-compute
Dec 06 06:52:24 compute-0 nova_compute[251024]: + ARGS=
Dec 06 06:52:24 compute-0 nova_compute[251024]: + sudo kolla_copy_cacerts
Dec 06 06:52:24 compute-0 nova_compute[251024]: + [[ ! -n '' ]]
Dec 06 06:52:24 compute-0 nova_compute[251024]: + . kolla_extend_start
Dec 06 06:52:24 compute-0 nova_compute[251024]: + echo 'Running command: '\''nova-compute'\'''
Dec 06 06:52:24 compute-0 nova_compute[251024]: Running command: 'nova-compute'
Dec 06 06:52:24 compute-0 nova_compute[251024]: + umask 0022
Dec 06 06:52:24 compute-0 nova_compute[251024]: + exec nova-compute
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:52:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:52:25 compute-0 python3.9[251186]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:52:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:52:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:26.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:52:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:26.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:26 compute-0 ceph-mon[74339]: pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:27 compute-0 python3.9[251337]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:52:27 compute-0 nova_compute[251024]: 2025-12-06 06:52:27.121 251028 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 06:52:27 compute-0 nova_compute[251024]: 2025-12-06 06:52:27.121 251028 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 06:52:27 compute-0 nova_compute[251024]: 2025-12-06 06:52:27.122 251028 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 06:52:27 compute-0 nova_compute[251024]: 2025-12-06 06:52:27.122 251028 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 06 06:52:27 compute-0 nova_compute[251024]: 2025-12-06 06:52:27.266 251028 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:52:27 compute-0 nova_compute[251024]: 2025-12-06 06:52:27.289 251028 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:52:27 compute-0 nova_compute[251024]: 2025-12-06 06:52:27.289 251028 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 06:52:27 compute-0 python3.9[251491]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.024 251028 INFO nova.virt.driver [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.178 251028 INFO nova.compute.provider_config [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.192 251028 DEBUG oslo_concurrency.lockutils [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.192 251028 DEBUG oslo_concurrency.lockutils [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.193 251028 DEBUG oslo_concurrency.lockutils [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.193 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.193 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.193 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.194 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.194 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.194 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.194 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.194 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.194 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.195 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.195 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.195 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.195 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.195 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.195 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.196 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.196 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.196 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.196 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.196 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.196 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.197 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.197 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.197 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.197 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.197 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.197 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.198 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.198 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.198 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.198 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.198 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.198 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.198 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.199 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.199 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.199 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.199 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.199 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.199 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.200 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.200 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.200 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.200 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.200 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.200 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.201 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.201 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.201 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.201 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.201 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.201 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.202 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.202 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.202 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.202 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.202 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.202 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.202 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.203 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.203 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.203 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.203 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.203 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.203 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.203 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.204 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.204 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.204 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.204 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.204 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.204 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.204 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.205 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.205 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.205 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.205 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.205 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.205 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.205 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.206 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.206 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.206 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.206 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.206 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.206 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.207 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.207 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.207 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.207 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.207 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.207 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.207 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.208 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.208 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.208 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.208 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.208 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.208 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.209 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.209 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.209 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.209 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.209 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.209 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.210 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.210 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.210 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.210 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.210 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.211 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.211 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.211 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.211 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.211 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.212 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.212 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.212 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.212 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.212 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.213 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.213 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.213 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.213 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.213 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.213 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.213 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.214 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.214 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.214 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.214 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.214 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.215 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.215 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.215 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.215 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.215 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.216 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.216 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.216 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.216 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.216 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.216 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.217 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.217 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.217 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.217 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.217 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.217 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.217 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.218 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.218 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.218 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.218 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.218 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.218 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.219 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.219 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.219 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.219 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.219 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.219 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.220 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.220 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.220 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.220 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.220 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.221 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.221 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.221 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.221 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.221 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.222 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.222 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.222 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.222 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.223 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.223 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.223 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.223 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.223 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.224 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.224 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.224 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.224 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.224 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.224 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.225 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.225 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.225 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.225 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.225 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.225 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.226 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.226 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.226 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.226 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.226 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.227 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.227 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.227 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.227 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.227 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.228 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.228 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.228 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.228 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.228 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.228 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.228 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.229 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.229 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.229 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.229 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.229 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.229 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.229 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.230 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.230 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.230 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.230 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.230 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.230 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.231 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.231 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.231 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.231 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.231 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.231 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.231 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.232 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.232 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.232 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.232 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.232 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.232 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.233 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.233 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.233 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.233 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.233 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.233 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.234 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.234 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.234 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.234 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.234 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.234 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.234 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.235 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.235 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.235 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.235 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.235 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.235 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.236 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.236 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.236 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.236 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.236 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.236 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.237 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.237 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.237 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.237 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.237 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.237 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.237 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.238 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.238 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.238 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.238 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.238 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.238 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.238 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.239 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.239 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.239 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.239 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.239 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.239 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.240 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.240 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.240 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.240 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.240 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.240 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.241 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.241 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.241 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.241 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.241 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.241 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.242 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.242 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.242 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.242 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.242 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.242 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.243 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.243 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.243 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.243 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.243 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.243 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.243 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.244 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.244 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.244 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.244 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.244 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.245 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.245 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.245 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.245 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.245 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.245 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.246 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.246 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.246 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.246 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.247 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.247 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.247 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.247 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.247 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.247 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.247 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.248 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.248 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.248 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.248 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.248 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.248 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.248 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.249 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.249 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.249 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.249 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.249 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.250 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.250 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.250 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.250 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.250 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.250 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.250 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.251 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.251 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.251 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.251 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.251 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.251 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.252 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.252 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.252 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.252 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.253 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.253 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.253 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.253 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.253 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.253 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.253 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.254 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.254 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.254 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.254 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.254 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.254 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.254 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.255 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.255 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.255 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.255 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.255 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.255 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.255 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.256 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.256 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.256 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.256 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.256 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.256 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.257 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.257 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.257 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.257 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.257 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.257 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.257 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.258 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.258 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.258 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.258 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.258 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.258 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.258 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.259 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.259 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.259 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.259 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.259 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.259 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.260 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.260 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.260 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.260 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.260 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.260 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.261 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.261 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.261 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.261 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.261 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.261 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.261 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.262 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.262 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.262 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.262 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.262 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.262 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.262 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.263 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.263 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.263 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.263 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.263 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.263 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.263 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.264 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.264 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.264 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.264 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.264 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.264 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.265 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.265 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.265 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.265 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.265 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.265 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.266 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.266 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.266 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.266 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.266 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.266 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.267 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.267 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.267 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.267 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.267 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.267 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.267 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.268 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.268 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.268 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.268 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.268 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.268 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.269 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.269 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.269 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.269 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.269 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.269 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.269 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.270 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.270 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.270 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.270 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.270 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.270 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.270 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.271 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.271 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.271 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.271 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.271 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.271 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.271 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.272 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.272 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.272 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.272 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.272 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.273 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.273 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.273 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.273 251028 WARNING oslo_config.cfg [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 06 06:52:28 compute-0 nova_compute[251024]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 06 06:52:28 compute-0 nova_compute[251024]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 06 06:52:28 compute-0 nova_compute[251024]: and ``live_migration_inbound_addr`` respectively.
Dec 06 06:52:28 compute-0 nova_compute[251024]: ).  Its value may be silently ignored in the future.
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.273 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.274 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.274 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.274 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.274 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.274 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.274 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.274 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.275 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.275 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.275 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.275 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.275 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.275 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.276 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.276 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.276 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.276 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.276 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.rbd_secret_uuid        = 40a1bae4-cf76-5610-8dab-c75116dfe0bb log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.276 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.277 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.277 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.277 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.277 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.277 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.277 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.277 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.278 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.278 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.278 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.278 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.278 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.278 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.279 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.279 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.279 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.279 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.279 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.279 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.279 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.280 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.280 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.280 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.280 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.280 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.280 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.280 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.281 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.281 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.281 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.281 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.281 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.281 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.282 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.282 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.282 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.282 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.282 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.282 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.282 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.282 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.283 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.283 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.283 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.283 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.283 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.283 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.283 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.284 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.284 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.284 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.284 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.284 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.284 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.285 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.285 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.285 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.285 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.285 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.285 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.285 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.286 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.286 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.286 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.286 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.286 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.286 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.287 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.287 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.287 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.287 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.287 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.287 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.287 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.288 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.288 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.288 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.288 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.288 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.288 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.288 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.289 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.289 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.289 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.289 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.289 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.289 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.290 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.290 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.290 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.290 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.290 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.290 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.290 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.291 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.291 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.291 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.291 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.291 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.291 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.292 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.292 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.292 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.292 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.292 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.292 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.293 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.293 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.293 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.293 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.293 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.294 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.294 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.294 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.294 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.294 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.294 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.294 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.295 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.295 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.295 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.295 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.295 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.296 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.296 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.296 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.296 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.296 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.296 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.296 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.297 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.297 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.297 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.297 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.297 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.298 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.298 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.298 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.298 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.298 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.298 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.298 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.299 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.299 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.299 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.299 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.299 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.300 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.300 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.300 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.300 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.300 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.301 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.301 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.301 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.301 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.301 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.302 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.302 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.302 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.302 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.303 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.303 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.303 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.303 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.303 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.304 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.304 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.304 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.304 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.305 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.305 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.305 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.305 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.305 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.305 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.306 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.306 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.306 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.306 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.306 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.306 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.307 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.307 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.307 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.307 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.307 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.307 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.307 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.308 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.308 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.308 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.308 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.308 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.308 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.308 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.309 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.309 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.309 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.309 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.309 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.309 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.310 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.310 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.310 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.310 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.310 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.310 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.310 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.311 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.311 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.311 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.311 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.311 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.311 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.312 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.312 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.312 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.312 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.312 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.312 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.312 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.313 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.313 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.313 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.313 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.313 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.313 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.314 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.314 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.314 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.314 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.314 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.314 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.315 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.315 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.315 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.315 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.315 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.315 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.315 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.316 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.316 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.316 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.316 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.316 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.316 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.317 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.317 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.317 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.317 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.317 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.317 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.317 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.318 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.318 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.318 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.318 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.318 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.318 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.318 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.319 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.319 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.319 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.319 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.319 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.319 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.319 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.320 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.320 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.320 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.320 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.320 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.320 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.321 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.321 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.321 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.321 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.321 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.321 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.321 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.322 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.322 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.322 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.322 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.322 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.322 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.322 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.323 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.323 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.323 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.323 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.323 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.323 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.324 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.324 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.324 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.324 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.324 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.324 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.324 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.325 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.325 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.325 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.325 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.325 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.325 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.326 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.326 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.326 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.326 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.326 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.326 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.327 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.327 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.327 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.327 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.327 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.327 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.328 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.328 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.328 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.328 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.328 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.328 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.329 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.329 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.329 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.329 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.329 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.329 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.329 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.330 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.330 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.330 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.330 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.330 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.330 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.330 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.331 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.331 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.331 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.331 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.331 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.331 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.331 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.332 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.332 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.332 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.332 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.332 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.332 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.332 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.333 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.333 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.333 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.333 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.333 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.333 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.333 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.334 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.334 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.334 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.334 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.334 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.334 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.334 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.335 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.335 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.335 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.335 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.335 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.335 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.336 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.336 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.336 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.336 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.336 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.336 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.336 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.337 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.337 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.337 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.337 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.337 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.337 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.338 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.338 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.338 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.338 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.338 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.339 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.339 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.339 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.339 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.339 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.340 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.340 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.340 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.340 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.340 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.341 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.341 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.341 251028 DEBUG oslo_service.service [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.342 251028 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 06 06:52:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:28.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:52:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:28.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.506 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.507 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.508 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.508 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 06 06:52:28 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 06 06:52:28 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.600 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f88f667afa0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.606 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f88f667afa0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.608 251028 INFO nova.virt.libvirt.driver [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Connection event '1' reason 'None'
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.631 251028 WARNING nova.virt.libvirt.driver [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 06 06:52:28 compute-0 nova_compute[251024]: 2025-12-06 06:52:28.632 251028 DEBUG nova.virt.libvirt.volume.mount [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 06 06:52:28 compute-0 sudo[251686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evwcxoxjramyijwhpemvuizisfyrfwzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003948.1961613-4339-245596898496629/AnsiballZ_podman_container.py'
Dec 06 06:52:28 compute-0 sudo[251686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:28 compute-0 ceph-mon[74339]: pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:28 compute-0 python3.9[251689]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 06 06:52:29 compute-0 sudo[251686]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:29 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.508 251028 INFO nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Libvirt host capabilities <capabilities>
Dec 06 06:52:29 compute-0 nova_compute[251024]: 
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <host>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <uuid>dc45738e-2bb0-4417-914c-a006d79f6275</uuid>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <cpu>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <arch>x86_64</arch>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model>EPYC-Rome-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <vendor>AMD</vendor>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <microcode version='16777317'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <signature family='23' model='49' stepping='0'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='x2apic'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='tsc-deadline'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='osxsave'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='hypervisor'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='tsc_adjust'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='spec-ctrl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='stibp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='arch-capabilities'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='cmp_legacy'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='topoext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='virt-ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='lbrv'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='tsc-scale'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='vmcb-clean'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='pause-filter'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='pfthreshold'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='svme-addr-chk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='rdctl-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='skip-l1dfl-vmentry'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='mds-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature name='pschange-mc-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <pages unit='KiB' size='4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <pages unit='KiB' size='2048'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <pages unit='KiB' size='1048576'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </cpu>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <power_management>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <suspend_mem/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </power_management>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <iommu support='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <migration_features>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <live/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <uri_transports>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <uri_transport>tcp</uri_transport>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <uri_transport>rdma</uri_transport>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </uri_transports>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </migration_features>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <topology>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <cells num='1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <cell id='0'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:           <memory unit='KiB'>7864320</memory>
Dec 06 06:52:29 compute-0 nova_compute[251024]:           <pages unit='KiB' size='4'>1966080</pages>
Dec 06 06:52:29 compute-0 nova_compute[251024]:           <pages unit='KiB' size='2048'>0</pages>
Dec 06 06:52:29 compute-0 nova_compute[251024]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 06 06:52:29 compute-0 nova_compute[251024]:           <distances>
Dec 06 06:52:29 compute-0 nova_compute[251024]:             <sibling id='0' value='10'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:           </distances>
Dec 06 06:52:29 compute-0 nova_compute[251024]:           <cpus num='8'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:           </cpus>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         </cell>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </cells>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </topology>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <cache>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </cache>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <secmodel>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model>selinux</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <doi>0</doi>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </secmodel>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <secmodel>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model>dac</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <doi>0</doi>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </secmodel>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </host>
Dec 06 06:52:29 compute-0 nova_compute[251024]: 
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <guest>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <os_type>hvm</os_type>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <arch name='i686'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <wordsize>32</wordsize>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <domain type='qemu'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <domain type='kvm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </arch>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <features>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <pae/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <nonpae/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <acpi default='on' toggle='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <apic default='on' toggle='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <cpuselection/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <deviceboot/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <disksnapshot default='on' toggle='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <externalSnapshot/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </features>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </guest>
Dec 06 06:52:29 compute-0 nova_compute[251024]: 
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <guest>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <os_type>hvm</os_type>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <arch name='x86_64'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <wordsize>64</wordsize>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <domain type='qemu'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <domain type='kvm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </arch>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <features>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <acpi default='on' toggle='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <apic default='on' toggle='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <cpuselection/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <deviceboot/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <disksnapshot default='on' toggle='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <externalSnapshot/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </features>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </guest>
Dec 06 06:52:29 compute-0 nova_compute[251024]: 
Dec 06 06:52:29 compute-0 nova_compute[251024]: </capabilities>
Dec 06 06:52:29 compute-0 nova_compute[251024]: 
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.514 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.535 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 06 06:52:29 compute-0 nova_compute[251024]: <domainCapabilities>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <domain>kvm</domain>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <arch>i686</arch>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <vcpu max='4096'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <iothreads supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <os supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <enum name='firmware'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <loader supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>rom</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pflash</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='readonly'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>yes</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>no</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='secure'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>no</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </loader>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </os>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <cpu>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>on</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>off</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='maximumMigratable'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>on</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>off</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <vendor>AMD</vendor>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='succor'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='custom' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cooperlake'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='GraniteRapids'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10-128'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10-256'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10-512'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='KnightsMill'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SierraForest'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='athlon'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='athlon-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='core2duo'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='core2duo-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='coreduo'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='coreduo-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='n270'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='n270-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='phenom'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='phenom-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </cpu>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <memoryBacking supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <enum name='sourceType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>file</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>anonymous</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>memfd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </memoryBacking>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <devices>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <disk supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='diskDevice'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>disk</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>cdrom</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>floppy</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>lun</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='bus'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>fdc</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>scsi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>usb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>sata</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </disk>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <graphics supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vnc</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>egl-headless</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>dbus</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </graphics>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <video supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='modelType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vga</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>cirrus</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>none</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>bochs</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>ramfb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </video>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <hostdev supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='mode'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>subsystem</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='startupPolicy'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>default</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>mandatory</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>requisite</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>optional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='subsysType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>usb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pci</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>scsi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='capsType'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='pciBackend'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </hostdev>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <rng supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>random</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>egd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>builtin</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </rng>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <filesystem supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='driverType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>path</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>handle</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtiofs</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </filesystem>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <tpm supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tpm-tis</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tpm-crb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>emulator</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>external</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendVersion'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>2.0</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </tpm>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <redirdev supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='bus'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>usb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </redirdev>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <channel supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pty</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>unix</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </channel>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <crypto supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>qemu</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>builtin</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </crypto>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <interface supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>default</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>passt</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </interface>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <panic supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>isa</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>hyperv</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </panic>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <console supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>null</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vc</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pty</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>dev</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>file</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pipe</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>stdio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>udp</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tcp</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>unix</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>qemu-vdagent</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>dbus</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </console>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </devices>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <features>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <gic supported='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <genid supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <backup supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <async-teardown supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <ps2 supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <sev supported='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <sgx supported='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <hyperv supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='features'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>relaxed</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vapic</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>spinlocks</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vpindex</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>runtime</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>synic</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>stimer</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>reset</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vendor_id</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>frequencies</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>reenlightenment</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tlbflush</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>ipi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>avic</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>emsr_bitmap</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>xmm_input</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <defaults>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </defaults>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </hyperv>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <launchSecurity supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='sectype'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tdx</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </launchSecurity>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </features>
Dec 06 06:52:29 compute-0 nova_compute[251024]: </domainCapabilities>
Dec 06 06:52:29 compute-0 nova_compute[251024]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.541 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 06 06:52:29 compute-0 nova_compute[251024]: <domainCapabilities>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <domain>kvm</domain>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <arch>i686</arch>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <vcpu max='240'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <iothreads supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <os supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <enum name='firmware'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <loader supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>rom</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pflash</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='readonly'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>yes</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>no</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='secure'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>no</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </loader>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </os>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <cpu>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>on</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>off</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='maximumMigratable'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>on</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>off</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <vendor>AMD</vendor>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='succor'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='custom' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cooperlake'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
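The `<model usable='...'>` and `<blockers model='...'>` elements above are libvirt domain-capabilities output: for each named CPU model, libvirt reports whether the host can run it, and if not, which CPU features block it. A minimal sketch of extracting that information with Python's stdlib ElementTree follows; the `SNIPPET` string is a reduced, hypothetical excerpt shaped like the log output above, not the full capabilities document.

```python
# Sketch: summarize libvirt domain-capabilities CPU <model>/<blockers>
# entries using only the standard library. SNIPPET is a hypothetical,
# trimmed-down fragment in the same shape as the nova_compute dump above.
import xml.etree.ElementTree as ET

SNIPPET = """
<mode name='custom' supported='yes'>
  <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
  <blockers model='EPYC-Rome-v1'>
    <feature name='xsaves'/>
  </blockers>
  <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
</mode>
"""

def usable_models(xml_text):
    """Return (set of usable model names, {blocked model: [missing features]})."""
    root = ET.fromstring(xml_text)
    usable = {m.text for m in root.findall('model') if m.get('usable') == 'yes'}
    blocked = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in root.findall('blockers')
    }
    return usable, blocked

usable, blocked = usable_models(SNIPPET)
print(usable)   # {'EPYC-Rome-v4'}
print(blocked)  # {'EPYC-Rome-v1': ['xsaves']}
```

In practice the same parsing applies to the output of `virsh domcapabilities`, which is where libvirt exposes this XML directly; a model is only safe to request in a guest definition when it appears with `usable='yes'` (or all of its blocker features are acceptable to drop).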
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='GraniteRapids'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10-128'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10-256'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10-512'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 sudo[251880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xafmasjdtyjgatinsgttopxwjhxuleyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003949.335538-4363-167060621310325/AnsiballZ_systemd.py'
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 sudo[251880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='KnightsMill'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SierraForest'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='athlon'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='athlon-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='core2duo'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='core2duo-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='coreduo'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='coreduo-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='n270'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='n270-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='phenom'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='phenom-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </cpu>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <memoryBacking supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <enum name='sourceType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>file</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>anonymous</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>memfd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </memoryBacking>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <devices>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <disk supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='diskDevice'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>disk</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>cdrom</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>floppy</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>lun</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='bus'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>ide</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>fdc</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>scsi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>usb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>sata</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </disk>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <graphics supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vnc</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>egl-headless</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>dbus</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </graphics>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <video supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='modelType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vga</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>cirrus</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>none</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>bochs</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>ramfb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </video>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <hostdev supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='mode'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>subsystem</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='startupPolicy'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>default</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>mandatory</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>requisite</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>optional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='subsysType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>usb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pci</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>scsi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='capsType'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='pciBackend'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </hostdev>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <rng supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>random</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>egd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>builtin</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </rng>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <filesystem supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='driverType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>path</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>handle</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtiofs</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </filesystem>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <tpm supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tpm-tis</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tpm-crb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>emulator</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>external</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendVersion'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>2.0</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </tpm>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <redirdev supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='bus'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>usb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </redirdev>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <channel supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pty</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>unix</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </channel>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <crypto supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>qemu</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>builtin</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </crypto>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <interface supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>default</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>passt</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </interface>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <panic supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>isa</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>hyperv</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </panic>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <console supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>null</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vc</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pty</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>dev</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>file</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pipe</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>stdio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>udp</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tcp</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>unix</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>qemu-vdagent</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>dbus</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </console>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </devices>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <features>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <gic supported='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <genid supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <backup supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <async-teardown supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <ps2 supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <sev supported='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <sgx supported='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <hyperv supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='features'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>relaxed</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vapic</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>spinlocks</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vpindex</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>runtime</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>synic</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>stimer</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>reset</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vendor_id</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>frequencies</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>reenlightenment</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tlbflush</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>ipi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>avic</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>emsr_bitmap</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>xmm_input</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <defaults>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </defaults>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </hyperv>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <launchSecurity supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='sectype'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tdx</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </launchSecurity>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </features>
Dec 06 06:52:29 compute-0 nova_compute[251024]: </domainCapabilities>
Dec 06 06:52:29 compute-0 nova_compute[251024]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.566 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.570 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 06 06:52:29 compute-0 nova_compute[251024]: <domainCapabilities>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <domain>kvm</domain>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <arch>x86_64</arch>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <vcpu max='4096'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <iothreads supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <os supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <enum name='firmware'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>efi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <loader supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>rom</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pflash</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='readonly'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>yes</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>no</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='secure'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>yes</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>no</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </loader>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </os>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <cpu>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>on</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>off</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='maximumMigratable'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>on</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>off</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <vendor>AMD</vendor>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='succor'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='custom' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cooperlake'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='GraniteRapids'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10-128'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10-256'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10-512'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='KnightsMill'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SierraForest'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='athlon'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='athlon-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='core2duo'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='core2duo-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='coreduo'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='coreduo-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='n270'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='n270-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='phenom'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='phenom-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </cpu>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <memoryBacking supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <enum name='sourceType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>file</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>anonymous</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>memfd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </memoryBacking>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <devices>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <disk supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='diskDevice'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>disk</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>cdrom</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>floppy</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>lun</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='bus'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>fdc</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>scsi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>usb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>sata</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </disk>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <graphics supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vnc</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>egl-headless</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>dbus</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </graphics>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <video supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='modelType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vga</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>cirrus</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>none</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>bochs</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>ramfb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </video>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <hostdev supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='mode'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>subsystem</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='startupPolicy'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>default</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>mandatory</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>requisite</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>optional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='subsysType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>usb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pci</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>scsi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='capsType'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='pciBackend'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </hostdev>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <rng supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>random</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>egd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>builtin</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </rng>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <filesystem supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='driverType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>path</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>handle</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtiofs</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </filesystem>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <tpm supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tpm-tis</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tpm-crb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>emulator</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>external</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendVersion'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>2.0</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </tpm>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <redirdev supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='bus'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>usb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </redirdev>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <channel supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pty</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>unix</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </channel>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <crypto supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>qemu</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>builtin</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </crypto>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <interface supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>default</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>passt</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </interface>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <panic supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>isa</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>hyperv</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </panic>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <console supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>null</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vc</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pty</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>dev</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>file</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pipe</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>stdio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>udp</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tcp</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>unix</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>qemu-vdagent</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>dbus</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </console>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </devices>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <features>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <gic supported='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <genid supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <backup supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <async-teardown supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <ps2 supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <sev supported='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <sgx supported='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <hyperv supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='features'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>relaxed</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vapic</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>spinlocks</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vpindex</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>runtime</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>synic</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>stimer</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>reset</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vendor_id</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>frequencies</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>reenlightenment</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tlbflush</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>ipi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>avic</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>emsr_bitmap</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>xmm_input</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <defaults>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </defaults>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </hyperv>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <launchSecurity supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='sectype'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tdx</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </launchSecurity>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </features>
Dec 06 06:52:29 compute-0 nova_compute[251024]: </domainCapabilities>
Dec 06 06:52:29 compute-0 nova_compute[251024]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.629 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 06 06:52:29 compute-0 nova_compute[251024]: <domainCapabilities>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <domain>kvm</domain>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <arch>x86_64</arch>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <vcpu max='240'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <iothreads supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <os supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <enum name='firmware'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <loader supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>rom</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pflash</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='readonly'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>yes</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>no</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='secure'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>no</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </loader>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </os>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <cpu>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>on</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>off</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='maximumMigratable'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>on</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>off</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <vendor>AMD</vendor>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='succor'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <mode name='custom' supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cooperlake'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Denverton-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='auto-ibrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amd-psfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='stibp-always-on'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='EPYC-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='GraniteRapids'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10-128'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10-256'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx10-512'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='prefetchiti'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Haswell-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='KnightsMill'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512er'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512pf'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fma4'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tbm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xop'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='amx-tile'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-bf16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-fp16'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bitalg'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrc'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fzrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='la57'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='taa-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xfd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SierraForest'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ifma'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cmpccxadd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fbsdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='fsrs'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ibrs-all'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mcdt-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pbrsb-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='psdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='serialize'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vaes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='hle'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='rtm'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512bw'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512cd'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512dq'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512f'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='avx512vl'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='invpcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pcid'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='pku'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='mpx'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='core-capability'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='split-lock-detect'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='cldemote'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='erms'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='gfni'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdir64b'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='movdiri'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='xsaves'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='athlon'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='athlon-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='core2duo'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='core2duo-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='coreduo'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='coreduo-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='n270'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='n270-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='ss'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='phenom'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <blockers model='phenom-v1'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnow'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <feature name='3dnowext'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </blockers>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </mode>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </cpu>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <memoryBacking supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <enum name='sourceType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>file</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>anonymous</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <value>memfd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </memoryBacking>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <devices>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <disk supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='diskDevice'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>disk</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>cdrom</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>floppy</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>lun</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='bus'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>ide</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>fdc</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>scsi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>usb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>sata</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </disk>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <graphics supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vnc</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>egl-headless</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>dbus</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </graphics>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <video supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='modelType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vga</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>cirrus</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>none</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>bochs</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>ramfb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </video>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <hostdev supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='mode'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>subsystem</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='startupPolicy'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>default</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>mandatory</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>requisite</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>optional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='subsysType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>usb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pci</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>scsi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='capsType'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='pciBackend'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </hostdev>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <rng supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtio-non-transitional</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>random</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>egd</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>builtin</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </rng>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <filesystem supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='driverType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>path</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>handle</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>virtiofs</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </filesystem>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <tpm supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tpm-tis</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tpm-crb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>emulator</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>external</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendVersion'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>2.0</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </tpm>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <redirdev supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='bus'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>usb</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </redirdev>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <channel supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pty</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>unix</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </channel>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <crypto supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>qemu</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendModel'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>builtin</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </crypto>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <interface supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='backendType'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>default</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>passt</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </interface>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <panic supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='model'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>isa</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>hyperv</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </panic>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <console supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='type'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>null</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vc</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pty</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>dev</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>file</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>pipe</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>stdio</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>udp</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tcp</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>unix</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>qemu-vdagent</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>dbus</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </console>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </devices>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <features>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <gic supported='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <genid supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <backup supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <async-teardown supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <ps2 supported='yes'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <sev supported='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <sgx supported='no'/>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <hyperv supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='features'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>relaxed</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vapic</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>spinlocks</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vpindex</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>runtime</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>synic</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>stimer</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>reset</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>vendor_id</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>frequencies</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>reenlightenment</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tlbflush</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>ipi</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>avic</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>emsr_bitmap</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>xmm_input</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <defaults>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </defaults>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </hyperv>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     <launchSecurity supported='yes'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       <enum name='sectype'>
Dec 06 06:52:29 compute-0 nova_compute[251024]:         <value>tdx</value>
Dec 06 06:52:29 compute-0 nova_compute[251024]:       </enum>
Dec 06 06:52:29 compute-0 nova_compute[251024]:     </launchSecurity>
Dec 06 06:52:29 compute-0 nova_compute[251024]:   </features>
Dec 06 06:52:29 compute-0 nova_compute[251024]: </domainCapabilities>
Dec 06 06:52:29 compute-0 nova_compute[251024]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.692 251028 DEBUG nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.693 251028 INFO nova.virt.libvirt.host [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Secure Boot support detected
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.695 251028 INFO nova.virt.libvirt.driver [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.695 251028 INFO nova.virt.libvirt.driver [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.704 251028 DEBUG nova.virt.libvirt.driver [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] cpu compare xml: <cpu match="exact">
Dec 06 06:52:29 compute-0 nova_compute[251024]:   <model>Nehalem</model>
Dec 06 06:52:29 compute-0 nova_compute[251024]: </cpu>
Dec 06 06:52:29 compute-0 nova_compute[251024]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.706 251028 DEBUG nova.virt.libvirt.driver [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.729 251028 INFO nova.virt.node [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Determined node identity e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from /var/lib/nova/compute_id
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.761 251028 WARNING nova.compute.manager [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Compute nodes ['e75da5bf-16fa-49b1-b5e1-3aa61daf0433'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.851 251028 INFO nova.compute.manager [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 06 06:52:29 compute-0 python3.9[251882]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:52:29 compute-0 systemd[1]: Stopping nova_compute container...
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.946 251028 WARNING nova.compute.manager [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.947 251028 DEBUG oslo_concurrency.lockutils [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.947 251028 DEBUG oslo_concurrency.lockutils [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.947 251028 DEBUG oslo_concurrency.lockutils [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.947 251028 DEBUG nova.compute.resource_tracker [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:52:29 compute-0 nova_compute[251024]: 2025-12-06 06:52:29.948 251028 DEBUG oslo_concurrency.processutils [None req-cc525a1b-ba85-452b-874b-bf4ecbc9366c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:52:30 compute-0 nova_compute[251024]: 2025-12-06 06:52:30.007 251028 DEBUG oslo_concurrency.lockutils [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:52:30 compute-0 nova_compute[251024]: 2025-12-06 06:52:30.008 251028 DEBUG oslo_concurrency.lockutils [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:52:30 compute-0 nova_compute[251024]: 2025-12-06 06:52:30.008 251028 DEBUG oslo_concurrency.lockutils [None req-36dec380-9e39-44ae-8f62-3d5fcd9b7491 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:52:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:30.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:30 compute-0 virtqemud[251613]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 06 06:52:30 compute-0 virtqemud[251613]: hostname: compute-0
Dec 06 06:52:30 compute-0 virtqemud[251613]: End of file while reading data: Input/output error
Dec 06 06:52:30 compute-0 systemd[1]: libpod-859d479a9b3c2135c5fe69277bc8a878d2992b1a27d61336884bac801d81bcd9.scope: Deactivated successfully.
Dec 06 06:52:30 compute-0 systemd[1]: libpod-859d479a9b3c2135c5fe69277bc8a878d2992b1a27d61336884bac801d81bcd9.scope: Consumed 3.312s CPU time.
Dec 06 06:52:30 compute-0 podman[251886]: 2025-12-06 06:52:30.425944283 +0000 UTC m=+0.469418851 container died 859d479a9b3c2135c5fe69277bc8a878d2992b1a27d61336884bac801d81bcd9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 06:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-859d479a9b3c2135c5fe69277bc8a878d2992b1a27d61336884bac801d81bcd9-userdata-shm.mount: Deactivated successfully.
Dec 06 06:52:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-177dec6e89c61694d83e700e75ee2d7998e40d8bb92c78437d86aef45b1fe35f-merged.mount: Deactivated successfully.
Dec 06 06:52:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:30.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:31 compute-0 podman[251886]: 2025-12-06 06:52:31.135916312 +0000 UTC m=+1.179390920 container cleanup 859d479a9b3c2135c5fe69277bc8a878d2992b1a27d61336884bac801d81bcd9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 06 06:52:31 compute-0 podman[251886]: nova_compute
Dec 06 06:52:31 compute-0 sudo[251916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:52:31 compute-0 sudo[251916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:31 compute-0 sudo[251916]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:31 compute-0 podman[251929]: nova_compute
Dec 06 06:52:31 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 06 06:52:31 compute-0 systemd[1]: Stopped nova_compute container.
Dec 06 06:52:31 compute-0 systemd[1]: Starting nova_compute container...
Dec 06 06:52:31 compute-0 sudo[251952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:52:31 compute-0 sudo[251952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:31 compute-0 sudo[251952]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177dec6e89c61694d83e700e75ee2d7998e40d8bb92c78437d86aef45b1fe35f/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177dec6e89c61694d83e700e75ee2d7998e40d8bb92c78437d86aef45b1fe35f/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177dec6e89c61694d83e700e75ee2d7998e40d8bb92c78437d86aef45b1fe35f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177dec6e89c61694d83e700e75ee2d7998e40d8bb92c78437d86aef45b1fe35f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177dec6e89c61694d83e700e75ee2d7998e40d8bb92c78437d86aef45b1fe35f/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:31 compute-0 podman[251960]: 2025-12-06 06:52:31.311367684 +0000 UTC m=+0.082590223 container init 859d479a9b3c2135c5fe69277bc8a878d2992b1a27d61336884bac801d81bcd9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 06:52:31 compute-0 podman[251960]: 2025-12-06 06:52:31.318218387 +0000 UTC m=+0.089440896 container start 859d479a9b3c2135c5fe69277bc8a878d2992b1a27d61336884bac801d81bcd9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec 06 06:52:31 compute-0 podman[251960]: nova_compute
Dec 06 06:52:31 compute-0 nova_compute[251992]: + sudo -E kolla_set_configs
Dec 06 06:52:31 compute-0 systemd[1]: Started nova_compute container.
Dec 06 06:52:31 compute-0 sudo[251880]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Validating config file
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Copying service configuration files
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Deleting /etc/ceph
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Creating directory /etc/ceph
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /etc/ceph
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Writing out command to execute
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:31 compute-0 nova_compute[251992]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 06:52:31 compute-0 nova_compute[251992]: ++ cat /run_command
Dec 06 06:52:31 compute-0 nova_compute[251992]: + CMD=nova-compute
Dec 06 06:52:31 compute-0 nova_compute[251992]: + ARGS=
Dec 06 06:52:31 compute-0 nova_compute[251992]: + sudo kolla_copy_cacerts
Dec 06 06:52:31 compute-0 nova_compute[251992]: + [[ ! -n '' ]]
Dec 06 06:52:31 compute-0 nova_compute[251992]: + . kolla_extend_start
Dec 06 06:52:31 compute-0 nova_compute[251992]: + echo 'Running command: '\''nova-compute'\'''
Dec 06 06:52:31 compute-0 nova_compute[251992]: Running command: 'nova-compute'
Dec 06 06:52:31 compute-0 nova_compute[251992]: + umask 0022
Dec 06 06:52:31 compute-0 nova_compute[251992]: + exec nova-compute
Dec 06 06:52:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:32.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:32 compute-0 podman[252029]: 2025-12-06 06:52:32.442230597 +0000 UTC m=+0.091650362 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 06:52:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:32.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:32 compute-0 ceph-mon[74339]: pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:33 compute-0 sudo[252182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbavtsmofncprmhtlhwkqctezcgtmhmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765003953.0444944-4390-213315126843407/AnsiballZ_podman_container.py'
Dec 06 06:52:33 compute-0 sudo[252182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 06:52:33 compute-0 nova_compute[251992]: 2025-12-06 06:52:33.480 251996 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 06:52:33 compute-0 nova_compute[251992]: 2025-12-06 06:52:33.480 251996 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 06:52:33 compute-0 nova_compute[251992]: 2025-12-06 06:52:33.481 251996 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 06:52:33 compute-0 nova_compute[251992]: 2025-12-06 06:52:33.481 251996 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 06 06:52:33 compute-0 ceph-mon[74339]: pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:33 compute-0 python3.9[252184]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 06 06:52:33 compute-0 nova_compute[251992]: 2025-12-06 06:52:33.618 251996 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:52:33 compute-0 nova_compute[251992]: 2025-12-06 06:52:33.644 251996 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:52:33 compute-0 nova_compute[251992]: 2025-12-06 06:52:33.644 251996 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 06:52:33 compute-0 systemd[1]: Started libpod-conmon-ef6b2891cae701a0bfc6de8c83b8eaff4dc4630dcc7cc32b0b4c8cef448a7aba.scope.
Dec 06 06:52:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f2c9b62a977eae156d7d1ac2ccbafedbb64f7732bb3388d521f86fffedef9b/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f2c9b62a977eae156d7d1ac2ccbafedbb64f7732bb3388d521f86fffedef9b/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f2c9b62a977eae156d7d1ac2ccbafedbb64f7732bb3388d521f86fffedef9b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 06:52:33 compute-0 podman[252211]: 2025-12-06 06:52:33.83034064 +0000 UTC m=+0.122882422 container init ef6b2891cae701a0bfc6de8c83b8eaff4dc4630dcc7cc32b0b4c8cef448a7aba (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:52:33 compute-0 podman[252211]: 2025-12-06 06:52:33.837290635 +0000 UTC m=+0.129832407 container start ef6b2891cae701a0bfc6de8c83b8eaff4dc4630dcc7cc32b0b4c8cef448a7aba (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:52:33 compute-0 python3.9[252184]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Applying nova statedir ownership
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 06 06:52:33 compute-0 nova_compute_init[252232]: INFO:nova_statedir:Nova statedir ownership complete
Dec 06 06:52:33 compute-0 systemd[1]: libpod-ef6b2891cae701a0bfc6de8c83b8eaff4dc4630dcc7cc32b0b4c8cef448a7aba.scope: Deactivated successfully.
Dec 06 06:52:33 compute-0 podman[252245]: 2025-12-06 06:52:33.922772917 +0000 UTC m=+0.025510352 container died ef6b2891cae701a0bfc6de8c83b8eaff4dc4630dcc7cc32b0b4c8cef448a7aba (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:52:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef6b2891cae701a0bfc6de8c83b8eaff4dc4630dcc7cc32b0b4c8cef448a7aba-userdata-shm.mount: Deactivated successfully.
Dec 06 06:52:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-07f2c9b62a977eae156d7d1ac2ccbafedbb64f7732bb3388d521f86fffedef9b-merged.mount: Deactivated successfully.
Dec 06 06:52:33 compute-0 sudo[252182]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:33 compute-0 podman[252245]: 2025-12-06 06:52:33.983225311 +0000 UTC m=+0.085962756 container cleanup ef6b2891cae701a0bfc6de8c83b8eaff4dc4630dcc7cc32b0b4c8cef448a7aba (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Dec 06 06:52:33 compute-0 systemd[1]: libpod-conmon-ef6b2891cae701a0bfc6de8c83b8eaff4dc4630dcc7cc32b0b4c8cef448a7aba.scope: Deactivated successfully.
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.125 251996 INFO nova.virt.driver [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.224 251996 INFO nova.compute.provider_config [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.237 251996 DEBUG oslo_concurrency.lockutils [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.238 251996 DEBUG oslo_concurrency.lockutils [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.238 251996 DEBUG oslo_concurrency.lockutils [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.238 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.238 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.239 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.239 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.239 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.239 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.239 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.240 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.240 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.240 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.240 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.240 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.241 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.241 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.241 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.241 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.241 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.242 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.242 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.242 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.242 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.242 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.243 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.243 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.243 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.243 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.243 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.244 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.244 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.244 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.244 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.245 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.245 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.245 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.245 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.245 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.246 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.246 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.246 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.246 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.246 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.247 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.247 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.247 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.247 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.248 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.248 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.248 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.248 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.248 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.249 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.249 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.249 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.249 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.249 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.249 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.250 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.250 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.250 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.250 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.250 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.251 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.251 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.251 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.251 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.251 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.251 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.252 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.252 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.252 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.252 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.252 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.253 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.253 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.253 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.253 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.253 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.254 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.254 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.254 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.254 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.254 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.255 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.255 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.255 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.255 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.255 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.256 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.256 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.256 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.256 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.256 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.257 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.257 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.257 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.257 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.257 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.257 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.258 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.258 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.258 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.258 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.259 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.259 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.259 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.259 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.259 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.260 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.260 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.260 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.260 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.260 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.261 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.261 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.261 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.261 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.261 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.261 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.262 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.262 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.262 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.262 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.262 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.263 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.263 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.263 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.263 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.263 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.264 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.264 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.264 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.264 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.264 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.265 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.265 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.265 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.265 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.265 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.265 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.266 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.266 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.266 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.266 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.266 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.267 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.267 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.267 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.267 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.267 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.268 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.268 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.268 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.268 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.268 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.269 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.269 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.269 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.269 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.269 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.270 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.270 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.270 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.270 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.270 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.271 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.271 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.271 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.271 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.271 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.272 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.272 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.272 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.272 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.272 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.273 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.273 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.273 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.273 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.273 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.274 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.274 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.274 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.274 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.274 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.275 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.275 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.275 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.275 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.275 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.276 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.276 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.276 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.276 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.276 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.276 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.277 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.277 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.277 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.277 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.277 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.277 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.277 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.278 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.278 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.278 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.278 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.278 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.278 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.278 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.278 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.279 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.279 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.279 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.279 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.279 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.279 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.280 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.280 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.280 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.280 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.280 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.280 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.280 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.281 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.281 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.281 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.281 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.281 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.281 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.281 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.282 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.282 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.282 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.282 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.282 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.282 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.282 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.283 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.283 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.283 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.283 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.283 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.283 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.284 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.284 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.284 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.284 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.284 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.284 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.285 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.285 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.285 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.285 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.285 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.285 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.285 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.285 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.286 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.286 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.286 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.286 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.286 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.287 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.287 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.287 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.287 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.287 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.287 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.288 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.288 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.288 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.288 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.288 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.288 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.288 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.289 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.289 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.289 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.289 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.289 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.289 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.290 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.290 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.290 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.290 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.290 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.290 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.291 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.291 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.291 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.291 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.291 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.291 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.291 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.292 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.292 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.292 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.292 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.292 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.293 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.293 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.293 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.293 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.293 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.293 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.293 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.294 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.294 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.294 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.294 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.294 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.294 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.294 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.295 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.295 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.295 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.295 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.295 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.295 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.295 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.296 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.296 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.296 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.296 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.296 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.296 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.296 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.297 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.297 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.297 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.297 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.297 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.297 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.298 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.298 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.298 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.298 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.298 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.298 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.298 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.299 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.299 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.299 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.299 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.299 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.299 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.299 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.300 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.300 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.300 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.300 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.300 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.300 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.300 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.301 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.301 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.301 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.301 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.301 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.301 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.302 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.302 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.302 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.302 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.302 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.302 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.302 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.303 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.303 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.303 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.303 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.303 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.303 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.303 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.304 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.304 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.304 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.304 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.304 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.304 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.304 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.305 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.305 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.305 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.305 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.305 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.305 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.306 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.306 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.306 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.306 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.306 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.306 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.306 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.307 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.307 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.307 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.307 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.307 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.307 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.307 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.308 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.308 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.308 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.308 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.308 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.308 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.308 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.309 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.309 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.309 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.309 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.309 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.309 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.309 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.310 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.310 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.310 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.310 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.310 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.310 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.310 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.311 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.311 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.311 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.311 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.311 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.311 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.311 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.312 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.312 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.312 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.312 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.312 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.312 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.312 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.313 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.313 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.313 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.313 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.313 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.313 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.313 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.314 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.314 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.314 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.314 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.314 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.314 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.314 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.315 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.315 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.315 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.315 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.315 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.315 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.316 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.316 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.316 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.316 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.316 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.316 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.316 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.317 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.317 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.317 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.317 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.317 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.317 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.317 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.318 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.318 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.318 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.318 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.318 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.318 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.319 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.319 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.319 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.319 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.319 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.319 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.319 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.320 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.320 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.320 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.320 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.320 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.320 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.320 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.321 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.321 251996 WARNING oslo_config.cfg [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 06 06:52:34 compute-0 nova_compute[251992]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 06 06:52:34 compute-0 nova_compute[251992]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 06 06:52:34 compute-0 nova_compute[251992]: and ``live_migration_inbound_addr`` respectively.
Dec 06 06:52:34 compute-0 nova_compute[251992]: ).  Its value may be silently ignored in the future.
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.321 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.321 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.321 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.321 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.322 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.322 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.322 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.322 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.322 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.322 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.323 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.323 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.323 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.323 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.323 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.323 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.323 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.324 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.324 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.rbd_secret_uuid        = 40a1bae4-cf76-5610-8dab-c75116dfe0bb log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.324 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.324 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.324 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.324 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.324 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.325 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.325 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.325 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.325 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.325 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.325 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.325 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.326 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.326 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.326 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.326 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.326 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.326 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.327 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.327 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.327 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.327 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.327 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.327 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.327 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.328 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.328 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.328 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.328 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.328 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.328 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.328 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.329 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.329 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.329 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.329 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.329 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.329 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.329 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.329 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.330 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.330 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.330 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.330 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.330 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.330 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.331 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.331 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.331 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.331 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.331 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.331 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.331 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.332 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.332 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.332 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.332 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.332 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.332 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.332 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.333 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.333 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.333 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.333 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.333 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.333 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.333 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.334 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.334 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.334 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.334 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.334 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.334 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.335 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.335 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.335 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.335 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.335 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.335 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.335 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.336 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.336 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.336 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.336 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.336 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.336 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.336 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.337 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.337 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.337 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.337 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.337 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.337 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.338 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.338 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.338 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.338 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.338 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.338 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.339 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.339 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.339 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.339 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.339 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.339 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.340 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.340 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.340 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.340 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.340 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.340 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.340 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.341 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.341 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.341 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.341 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.341 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.341 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.342 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.342 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.342 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.342 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.343 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.343 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.343 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.343 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.343 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.343 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.344 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.344 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.344 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.344 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.344 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.344 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.344 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.345 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.345 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.345 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.345 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.345 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.345 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.346 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.346 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.346 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.346 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.346 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.346 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.347 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.347 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.347 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.347 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.347 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.348 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.348 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.348 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.348 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.348 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.348 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.348 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.349 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.349 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.349 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.349 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.349 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.350 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.350 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.350 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.350 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.350 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.350 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.351 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.351 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.351 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.351 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.351 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.351 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.352 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.352 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.352 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.352 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.352 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.352 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.353 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.353 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.353 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.353 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.353 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.353 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.354 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.354 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.354 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.354 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.354 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.354 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.354 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.355 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.355 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.355 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.355 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.355 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.355 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.355 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.356 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.356 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.356 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.356 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.356 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.356 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.356 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.357 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.357 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.357 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.357 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.357 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.357 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.357 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.358 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.358 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.358 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.358 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.358 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.358 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.358 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.359 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.359 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.359 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.359 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.359 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.359 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.360 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.360 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.360 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.360 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.360 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.360 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.361 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.361 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.361 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.361 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.361 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.361 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.361 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.362 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.362 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.362 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.362 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.362 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.362 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.362 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.363 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.363 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.363 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.363 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.363 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.363 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.363 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.364 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.364 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.364 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.364 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.364 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.364 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.364 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.365 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.365 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.365 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.365 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.365 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.365 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.365 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.366 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.366 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.366 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.366 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.366 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.366 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.367 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.367 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.367 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.367 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.367 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.367 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.368 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.368 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.368 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.368 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.368 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.369 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.369 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.369 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.369 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.369 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.369 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.369 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.370 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.370 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.370 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:52:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:34.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.370 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.370 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.370 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.371 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.371 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.371 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.371 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.371 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.371 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.371 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.372 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.372 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.372 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.372 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.372 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.373 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.373 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.373 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.373 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.373 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.373 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.373 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.374 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.374 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.374 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.374 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.374 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.374 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.375 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.375 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.375 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.375 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.375 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.375 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.375 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.376 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.376 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.376 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.376 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.376 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.376 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.376 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.377 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.377 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.377 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.377 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.377 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.377 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.378 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.378 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.378 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.378 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.378 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.378 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.378 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.379 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.379 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.379 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.379 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.379 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.379 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.380 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.380 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.380 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.380 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.380 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.380 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.380 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.381 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.381 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.381 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.381 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.381 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.381 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.381 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.382 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.382 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.382 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.382 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.382 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.382 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.383 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.383 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.383 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.383 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.383 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.383 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.383 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.384 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.384 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.384 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.384 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.384 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.384 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.384 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.385 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.385 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.385 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.385 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.385 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.385 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.385 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.386 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.386 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.386 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.386 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.386 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.386 251996 DEBUG oslo_service.service [None req-3316cb90-d99e-4b4b-8360-3400bc9711e8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.387 251996 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.412 251996 INFO nova.virt.node [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Determined node identity e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from /var/lib/nova/compute_id
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.413 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.413 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.413 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.414 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.425 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd56308d520> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.427 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd56308d520> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.428 251996 INFO nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Connection event '1' reason 'None'
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.434 251996 INFO nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Libvirt host capabilities <capabilities>
Dec 06 06:52:34 compute-0 nova_compute[251992]: 
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <host>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <uuid>dc45738e-2bb0-4417-914c-a006d79f6275</uuid>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <cpu>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <arch>x86_64</arch>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model>EPYC-Rome-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <vendor>AMD</vendor>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <microcode version='16777317'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <signature family='23' model='49' stepping='0'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='x2apic'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='tsc-deadline'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='osxsave'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='hypervisor'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='tsc_adjust'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='spec-ctrl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='stibp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='arch-capabilities'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='cmp_legacy'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='topoext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='virt-ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='lbrv'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='tsc-scale'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='vmcb-clean'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='pause-filter'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='pfthreshold'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='svme-addr-chk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='rdctl-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='skip-l1dfl-vmentry'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='mds-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature name='pschange-mc-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <pages unit='KiB' size='4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <pages unit='KiB' size='2048'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <pages unit='KiB' size='1048576'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </cpu>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <power_management>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <suspend_mem/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </power_management>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <iommu support='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <migration_features>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <live/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <uri_transports>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <uri_transport>tcp</uri_transport>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <uri_transport>rdma</uri_transport>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </uri_transports>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </migration_features>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <topology>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <cells num='1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <cell id='0'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:           <memory unit='KiB'>7864320</memory>
Dec 06 06:52:34 compute-0 nova_compute[251992]:           <pages unit='KiB' size='4'>1966080</pages>
Dec 06 06:52:34 compute-0 nova_compute[251992]:           <pages unit='KiB' size='2048'>0</pages>
Dec 06 06:52:34 compute-0 nova_compute[251992]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 06 06:52:34 compute-0 nova_compute[251992]:           <distances>
Dec 06 06:52:34 compute-0 nova_compute[251992]:             <sibling id='0' value='10'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:           </distances>
Dec 06 06:52:34 compute-0 nova_compute[251992]:           <cpus num='8'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:           </cpus>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         </cell>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </cells>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </topology>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <cache>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </cache>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <secmodel>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model>selinux</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <doi>0</doi>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </secmodel>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <secmodel>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model>dac</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <doi>0</doi>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </secmodel>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </host>
Dec 06 06:52:34 compute-0 nova_compute[251992]: 
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <guest>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <os_type>hvm</os_type>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <arch name='i686'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <wordsize>32</wordsize>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <domain type='qemu'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <domain type='kvm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </arch>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <features>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <pae/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <nonpae/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <acpi default='on' toggle='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <apic default='on' toggle='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <cpuselection/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <deviceboot/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <disksnapshot default='on' toggle='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <externalSnapshot/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </features>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </guest>
Dec 06 06:52:34 compute-0 nova_compute[251992]: 
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <guest>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <os_type>hvm</os_type>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <arch name='x86_64'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <wordsize>64</wordsize>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <domain type='qemu'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <domain type='kvm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </arch>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <features>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <acpi default='on' toggle='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <apic default='on' toggle='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <cpuselection/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <deviceboot/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <disksnapshot default='on' toggle='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <externalSnapshot/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </features>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </guest>
Dec 06 06:52:34 compute-0 nova_compute[251992]: 
Dec 06 06:52:34 compute-0 nova_compute[251992]: </capabilities>
Dec 06 06:52:34 compute-0 nova_compute[251992]: 
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.439 251996 DEBUG nova.virt.libvirt.volume.mount [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.441 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.446 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 06 06:52:34 compute-0 nova_compute[251992]: <domainCapabilities>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <domain>kvm</domain>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <arch>i686</arch>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <vcpu max='4096'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <iothreads supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <os supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <enum name='firmware'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <loader supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>rom</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pflash</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='readonly'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>yes</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>no</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='secure'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>no</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </loader>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </os>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <cpu>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>on</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>off</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='maximumMigratable'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>on</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>off</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <vendor>AMD</vendor>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='succor'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='custom' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cooperlake'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amd-psfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='auto-ibrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='stibp-always-on'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amd-psfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='auto-ibrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='stibp-always-on'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amd-psfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='stibp-always-on'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='GraniteRapids'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='prefetchiti'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='prefetchiti'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10-128'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10-256'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10-512'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='prefetchiti'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='KnightsMill'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512er'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512pf'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512er'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512pf'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tbm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tbm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SierraForest'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cmpccxadd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cmpccxadd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='athlon'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='athlon-v1'>
Dec 06 06:52:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:34.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='core2duo'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='core2duo-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='coreduo'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='coreduo-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='n270'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='n270-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='phenom'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='phenom-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </cpu>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <memoryBacking supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <enum name='sourceType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>file</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>anonymous</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>memfd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </memoryBacking>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <devices>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <disk supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='diskDevice'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>disk</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>cdrom</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>floppy</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>lun</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='bus'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>fdc</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>scsi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>usb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>sata</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-non-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <graphics supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vnc</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>egl-headless</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>dbus</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </graphics>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <video supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='modelType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vga</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>cirrus</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>none</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>bochs</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>ramfb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </video>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <hostdev supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='mode'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>subsystem</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='startupPolicy'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>default</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>mandatory</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>requisite</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>optional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='subsysType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>usb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pci</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>scsi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='capsType'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='pciBackend'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </hostdev>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <rng supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-non-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendModel'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>random</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>egd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>builtin</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </rng>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <filesystem supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='driverType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>path</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>handle</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtiofs</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </filesystem>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <tpm supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tpm-tis</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tpm-crb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendModel'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>emulator</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>external</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendVersion'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>2.0</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </tpm>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <redirdev supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='bus'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>usb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </redirdev>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <channel supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pty</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>unix</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </channel>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <crypto supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>qemu</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendModel'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>builtin</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </crypto>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <interface supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>default</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>passt</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </interface>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <panic supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>isa</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>hyperv</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </panic>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <console supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>null</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vc</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pty</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>dev</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>file</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pipe</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>stdio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>udp</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tcp</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>unix</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>qemu-vdagent</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>dbus</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </console>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </devices>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <features>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <gic supported='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <genid supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <backup supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <async-teardown supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <ps2 supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <sev supported='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <sgx supported='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <hyperv supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='features'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>relaxed</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vapic</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>spinlocks</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vpindex</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>runtime</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>synic</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>stimer</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>reset</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vendor_id</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>frequencies</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>reenlightenment</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tlbflush</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>ipi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>avic</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>emsr_bitmap</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>xmm_input</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <defaults>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </defaults>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </hyperv>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <launchSecurity supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='sectype'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tdx</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </launchSecurity>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </features>
Dec 06 06:52:34 compute-0 nova_compute[251992]: </domainCapabilities>
Dec 06 06:52:34 compute-0 nova_compute[251992]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.451 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 06 06:52:34 compute-0 nova_compute[251992]: <domainCapabilities>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <domain>kvm</domain>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <arch>i686</arch>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <vcpu max='240'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <iothreads supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <os supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <enum name='firmware'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <loader supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>rom</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pflash</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='readonly'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>yes</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>no</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='secure'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>no</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </loader>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </os>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <cpu>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>on</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>off</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='maximumMigratable'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>on</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>off</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <vendor>AMD</vendor>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='succor'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='custom' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cooperlake'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amd-psfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='auto-ibrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='stibp-always-on'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amd-psfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='auto-ibrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='stibp-always-on'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amd-psfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='stibp-always-on'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='GraniteRapids'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='prefetchiti'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='prefetchiti'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10-128'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10-256'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10-512'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='prefetchiti'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='KnightsMill'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512er'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512pf'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512er'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512pf'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tbm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tbm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SierraForest'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cmpccxadd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cmpccxadd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='athlon'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='athlon-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='core2duo'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='core2duo-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='coreduo'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='coreduo-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='n270'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='n270-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='phenom'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='phenom-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </cpu>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <memoryBacking supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <enum name='sourceType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>file</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>anonymous</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>memfd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </memoryBacking>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <devices>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <disk supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='diskDevice'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>disk</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>cdrom</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>floppy</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>lun</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='bus'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>ide</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>fdc</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>scsi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>usb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>sata</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-non-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <graphics supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vnc</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>egl-headless</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>dbus</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </graphics>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <video supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='modelType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vga</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>cirrus</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>none</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>bochs</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>ramfb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </video>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <hostdev supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='mode'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>subsystem</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='startupPolicy'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>default</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>mandatory</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>requisite</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>optional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='subsysType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>usb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pci</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>scsi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='capsType'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='pciBackend'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </hostdev>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <rng supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-non-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendModel'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>random</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>egd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>builtin</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </rng>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <filesystem supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='driverType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>path</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>handle</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtiofs</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </filesystem>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <tpm supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tpm-tis</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tpm-crb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendModel'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>emulator</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>external</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendVersion'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>2.0</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </tpm>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <redirdev supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='bus'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>usb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </redirdev>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <channel supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pty</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>unix</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </channel>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <crypto supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>qemu</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendModel'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>builtin</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </crypto>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <interface supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>default</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>passt</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </interface>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <panic supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>isa</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>hyperv</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </panic>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <console supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>null</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vc</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pty</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>dev</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>file</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pipe</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>stdio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>udp</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tcp</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>unix</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>qemu-vdagent</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>dbus</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </console>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </devices>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <features>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <gic supported='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <genid supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <backup supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <async-teardown supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <ps2 supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <sev supported='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <sgx supported='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <hyperv supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='features'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>relaxed</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vapic</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>spinlocks</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vpindex</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>runtime</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>synic</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>stimer</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>reset</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vendor_id</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>frequencies</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>reenlightenment</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tlbflush</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>ipi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>avic</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>emsr_bitmap</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>xmm_input</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <defaults>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </defaults>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </hyperv>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <launchSecurity supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='sectype'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tdx</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </launchSecurity>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </features>
Dec 06 06:52:34 compute-0 nova_compute[251992]: </domainCapabilities>
Dec 06 06:52:34 compute-0 nova_compute[251992]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.481 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.485 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 06 06:52:34 compute-0 nova_compute[251992]: <domainCapabilities>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <domain>kvm</domain>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <arch>x86_64</arch>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <vcpu max='4096'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <iothreads supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <os supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <enum name='firmware'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>efi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <loader supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>rom</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pflash</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='readonly'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>yes</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>no</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='secure'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>yes</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>no</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </loader>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </os>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <cpu>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>on</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>off</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='maximumMigratable'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>on</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>off</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <vendor>AMD</vendor>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='succor'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='custom' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cooperlake'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amd-psfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='auto-ibrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='stibp-always-on'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amd-psfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='auto-ibrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='stibp-always-on'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amd-psfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='stibp-always-on'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='GraniteRapids'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='prefetchiti'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='prefetchiti'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10-128'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10-256'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10-512'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='prefetchiti'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='KnightsMill'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512er'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512pf'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512er'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512pf'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tbm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tbm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SierraForest'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cmpccxadd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cmpccxadd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='athlon'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='athlon-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='core2duo'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='core2duo-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='coreduo'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='coreduo-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='n270'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='n270-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='phenom'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='phenom-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </cpu>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <memoryBacking supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <enum name='sourceType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>file</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>anonymous</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>memfd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </memoryBacking>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <devices>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <disk supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='diskDevice'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>disk</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>cdrom</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>floppy</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>lun</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='bus'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>fdc</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>scsi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>usb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>sata</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-non-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <graphics supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vnc</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>egl-headless</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>dbus</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </graphics>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <video supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='modelType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vga</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>cirrus</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>none</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>bochs</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>ramfb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </video>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <hostdev supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='mode'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>subsystem</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='startupPolicy'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>default</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>mandatory</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>requisite</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>optional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='subsysType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>usb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pci</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>scsi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='capsType'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='pciBackend'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </hostdev>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <rng supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-non-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendModel'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>random</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>egd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>builtin</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </rng>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <filesystem supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='driverType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>path</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>handle</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtiofs</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </filesystem>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <tpm supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tpm-tis</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tpm-crb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendModel'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>emulator</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>external</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendVersion'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>2.0</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </tpm>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <redirdev supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='bus'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>usb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </redirdev>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <channel supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pty</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>unix</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </channel>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <crypto supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>qemu</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendModel'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>builtin</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </crypto>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <interface supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>default</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>passt</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </interface>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <panic supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>isa</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>hyperv</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </panic>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <console supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>null</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vc</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pty</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>dev</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>file</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pipe</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>stdio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>udp</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tcp</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>unix</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>qemu-vdagent</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>dbus</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </console>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </devices>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <features>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <gic supported='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <genid supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <backup supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <async-teardown supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <ps2 supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <sev supported='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <sgx supported='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <hyperv supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='features'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>relaxed</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vapic</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>spinlocks</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vpindex</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>runtime</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>synic</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>stimer</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>reset</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vendor_id</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>frequencies</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>reenlightenment</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tlbflush</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>ipi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>avic</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>emsr_bitmap</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>xmm_input</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <defaults>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </defaults>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </hyperv>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <launchSecurity supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='sectype'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tdx</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </launchSecurity>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </features>
Dec 06 06:52:34 compute-0 nova_compute[251992]: </domainCapabilities>
Dec 06 06:52:34 compute-0 nova_compute[251992]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.548 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 06 06:52:34 compute-0 nova_compute[251992]: <domainCapabilities>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <domain>kvm</domain>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <arch>x86_64</arch>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <vcpu max='240'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <iothreads supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <os supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <enum name='firmware'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <loader supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>rom</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pflash</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='readonly'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>yes</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>no</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='secure'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>no</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </loader>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </os>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <cpu>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='host-passthrough' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='hostPassthroughMigratable'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>on</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>off</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='maximum' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='maximumMigratable'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>on</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>off</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='host-model' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <vendor>AMD</vendor>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='x2apic'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='hypervisor'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='stibp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='overflow-recov'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='succor'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='ibrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='lbrv'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='tsc-scale'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='flushbyasid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='pause-filter'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='pfthreshold'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <feature policy='disable' name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <mode name='custom' supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Broadwell-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 sshd-session[220619]: Connection closed by 192.168.122.30 port 41484
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cooperlake'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cooperlake-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 sshd-session[220616]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Cooperlake-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Denverton-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Dhyana-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Genoa'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amd-psfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='auto-ibrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='stibp-always-on'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amd-psfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='auto-ibrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='stibp-always-on'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Milan'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Milan-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Milan-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amd-psfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='no-nested-data-bp'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='null-sel-clr-base'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='stibp-always-on'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-Rome-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='EPYC-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='GraniteRapids'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='prefetchiti'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='GraniteRapids-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='prefetchiti'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='GraniteRapids-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10-128'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10-256'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx10-512'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='prefetchiti'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Haswell-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v6'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 systemd[1]: session-51.scope: Consumed 2min 18.809s CPU time.
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Icelake-Server-v7'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='IvyBridge-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='KnightsMill'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512er'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512pf'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='KnightsMill-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4fmaps'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-4vnniw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512er'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 systemd-logind[798]: Session 51 logged out. Waiting for processes to exit.
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512pf'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G4-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tbm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Opteron_G5-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fma4'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tbm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xop'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 systemd-logind[798]: Removed session 51.
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SapphireRapids-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='amx-tile'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-bf16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-fp16'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512-vpopcntdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bitalg'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vbmi2'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrc'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fzrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='la57'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='taa-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='tsx-ldtrk'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xfd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SierraForest'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cmpccxadd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='SierraForest-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ifma'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-ne-convert'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx-vnni-int8'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='bus-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cmpccxadd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fbsdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='fsrs'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ibrs-all'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mcdt-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pbrsb-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='psdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='sbdr-ssdp-no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='serialize'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vaes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='vpclmulqdq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Client-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='hle'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='rtm'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Skylake-Server-v5'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512bw'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512cd'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512dq'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512f'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='avx512vl'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='invpcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pcid'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='pku'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='mpx'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v2'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v3'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='core-capability'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='split-lock-detect'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='Snowridge-v4'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='cldemote'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='erms'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='gfni'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdir64b'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='movdiri'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='xsaves'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='athlon'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='athlon-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='core2duo'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='core2duo-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='coreduo'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='coreduo-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='n270'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='n270-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='ss'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='phenom'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <blockers model='phenom-v1'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnow'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <feature name='3dnowext'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </blockers>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </mode>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </cpu>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <memoryBacking supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <enum name='sourceType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>file</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>anonymous</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <value>memfd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </memoryBacking>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <devices>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <disk supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='diskDevice'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>disk</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>cdrom</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>floppy</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>lun</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='bus'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>ide</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>fdc</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>scsi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>usb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>sata</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-non-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <graphics supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vnc</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>egl-headless</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>dbus</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </graphics>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <video supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='modelType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vga</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>cirrus</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>none</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>bochs</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>ramfb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </video>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <hostdev supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='mode'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>subsystem</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='startupPolicy'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>default</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>mandatory</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>requisite</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>optional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='subsysType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>usb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pci</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>scsi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='capsType'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='pciBackend'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </hostdev>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <rng supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtio-non-transitional</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendModel'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>random</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>egd</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>builtin</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </rng>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <filesystem supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='driverType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>path</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>handle</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>virtiofs</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </filesystem>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <tpm supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tpm-tis</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tpm-crb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendModel'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>emulator</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>external</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendVersion'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>2.0</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </tpm>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <redirdev supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='bus'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>usb</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </redirdev>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <channel supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pty</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>unix</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </channel>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <crypto supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>qemu</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendModel'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>builtin</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </crypto>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <interface supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='backendType'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>default</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>passt</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </interface>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <panic supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='model'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>isa</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>hyperv</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </panic>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <console supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='type'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>null</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vc</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pty</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>dev</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>file</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>pipe</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>stdio</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>udp</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tcp</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>unix</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>qemu-vdagent</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>dbus</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </console>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </devices>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <features>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <gic supported='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <vmcoreinfo supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <genid supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <backingStoreInput supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <backup supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <async-teardown supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <ps2 supported='yes'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <sev supported='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <sgx supported='no'/>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <hyperv supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='features'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>relaxed</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vapic</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>spinlocks</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vpindex</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>runtime</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>synic</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>stimer</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>reset</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>vendor_id</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>frequencies</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>reenlightenment</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tlbflush</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>ipi</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>avic</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>emsr_bitmap</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>xmm_input</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <defaults>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <spinlocks>4095</spinlocks>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <stimer_direct>on</stimer_direct>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <tlbflush_direct>on</tlbflush_direct>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <tlbflush_extended>on</tlbflush_extended>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </defaults>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </hyperv>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     <launchSecurity supported='yes'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       <enum name='sectype'>
Dec 06 06:52:34 compute-0 nova_compute[251992]:         <value>tdx</value>
Dec 06 06:52:34 compute-0 nova_compute[251992]:       </enum>
Dec 06 06:52:34 compute-0 nova_compute[251992]:     </launchSecurity>
Dec 06 06:52:34 compute-0 nova_compute[251992]:   </features>
Dec 06 06:52:34 compute-0 nova_compute[251992]: </domainCapabilities>
Dec 06 06:52:34 compute-0 nova_compute[251992]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.624 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.624 251996 INFO nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Secure Boot support detected
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.626 251996 INFO nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.626 251996 INFO nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.635 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] cpu compare xml: <cpu match="exact">
Dec 06 06:52:34 compute-0 nova_compute[251992]:   <model>Nehalem</model>
Dec 06 06:52:34 compute-0 nova_compute[251992]: </cpu>
Dec 06 06:52:34 compute-0 nova_compute[251992]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.637 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.658 251996 INFO nova.virt.node [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Determined node identity e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from /var/lib/nova/compute_id
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.690 251996 WARNING nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Compute nodes ['e75da5bf-16fa-49b1-b5e1-3aa61daf0433'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.734 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 06 06:52:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.778 251996 WARNING nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.778 251996 DEBUG oslo_concurrency.lockutils [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.779 251996 DEBUG oslo_concurrency.lockutils [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.779 251996 DEBUG oslo_concurrency.lockutils [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.779 251996 DEBUG nova.compute.resource_tracker [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:52:34 compute-0 nova_compute[251992]: 2025-12-06 06:52:34.779 251996 DEBUG oslo_concurrency.processutils [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:52:34 compute-0 ceph-mon[74339]: pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:52:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2866568348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:35 compute-0 rsyslogd[1005]: imjournal from <np0005548729:nova_compute>: begin to drop messages due to rate-limiting
Dec 06 06:52:35 compute-0 nova_compute[251992]: 2025-12-06 06:52:35.216 251996 DEBUG oslo_concurrency.processutils [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:52:35 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 06 06:52:35 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 06 06:52:35 compute-0 nova_compute[251992]: 2025-12-06 06:52:35.490 251996 WARNING nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:52:35 compute-0 nova_compute[251992]: 2025-12-06 06:52:35.491 251996 DEBUG nova.compute.resource_tracker [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5180MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:52:35 compute-0 nova_compute[251992]: 2025-12-06 06:52:35.491 251996 DEBUG oslo_concurrency.lockutils [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:52:35 compute-0 nova_compute[251992]: 2025-12-06 06:52:35.491 251996 DEBUG oslo_concurrency.lockutils [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:52:35 compute-0 nova_compute[251992]: 2025-12-06 06:52:35.507 251996 WARNING nova.compute.resource_tracker [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] No compute node record for compute-0.ctlplane.example.com:e75da5bf-16fa-49b1-b5e1-3aa61daf0433: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host e75da5bf-16fa-49b1-b5e1-3aa61daf0433 could not be found.
Dec 06 06:52:35 compute-0 nova_compute[251992]: 2025-12-06 06:52:35.541 251996 INFO nova.compute.resource_tracker [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: e75da5bf-16fa-49b1-b5e1-3aa61daf0433
Dec 06 06:52:35 compute-0 nova_compute[251992]: 2025-12-06 06:52:35.630 251996 DEBUG nova.compute.resource_tracker [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:52:35 compute-0 nova_compute[251992]: 2025-12-06 06:52:35.630 251996 DEBUG nova.compute.resource_tracker [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:52:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2866568348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:52:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:36.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:52:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:52:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:36.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:52:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:36 compute-0 nova_compute[251992]: 2025-12-06 06:52:36.742 251996 INFO nova.scheduler.client.report [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [req-93f0ccf6-abfe-4796-b140-e4900a15ade8] Created resource provider record via placement API for resource provider with UUID e75da5bf-16fa-49b1-b5e1-3aa61daf0433 and name compute-0.ctlplane.example.com.
Dec 06 06:52:36 compute-0 nova_compute[251992]: 2025-12-06 06:52:36.794 251996 DEBUG oslo_concurrency.processutils [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:52:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3199913541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3277231340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:37 compute-0 ceph-mon[74339]: pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:52:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2477464245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.238 251996 DEBUG oslo_concurrency.processutils [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.242 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 06 06:52:37 compute-0 nova_compute[251992]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.242 251996 INFO nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] kernel doesn't support AMD SEV
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.243 251996 DEBUG nova.compute.provider_tree [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.244 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.248 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Libvirt baseline CPU <cpu>
Dec 06 06:52:37 compute-0 nova_compute[251992]:   <arch>x86_64</arch>
Dec 06 06:52:37 compute-0 nova_compute[251992]:   <model>Nehalem</model>
Dec 06 06:52:37 compute-0 nova_compute[251992]:   <vendor>AMD</vendor>
Dec 06 06:52:37 compute-0 nova_compute[251992]:   <topology sockets="8" cores="1" threads="1"/>
Dec 06 06:52:37 compute-0 nova_compute[251992]: </cpu>
Dec 06 06:52:37 compute-0 nova_compute[251992]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.384 251996 DEBUG nova.scheduler.client.report [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Updated inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.385 251996 DEBUG nova.compute.provider_tree [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Updating resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.385 251996 DEBUG nova.compute.provider_tree [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.618 251996 DEBUG nova.compute.provider_tree [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Updating resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.655 251996 DEBUG nova.compute.resource_tracker [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.655 251996 DEBUG oslo_concurrency.lockutils [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.656 251996 DEBUG nova.service [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.783 251996 DEBUG nova.service [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 06 06:52:37 compute-0 nova_compute[251992]: 2025-12-06 06:52:37.784 251996 DEBUG nova.servicegroup.drivers.db [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 06 06:52:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2477464245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2782321474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/822919840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:52:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:38.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:38.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:39 compute-0 ceph-mon[74339]: pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:40.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:40.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:40 compute-0 ceph-mon[74339]: pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:41 compute-0 podman[252389]: 2025-12-06 06:52:41.390902153 +0000 UTC m=+0.050465266 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 06:52:41 compute-0 podman[252390]: 2025-12-06 06:52:41.39597698 +0000 UTC m=+0.056982599 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 06:52:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:42.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:42.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:42 compute-0 ceph-mon[74339]: pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:52:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:52:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:52:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:52:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:52:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:52:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:52:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:44.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:52:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:52:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:44.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:52:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:44 compute-0 ceph-mon[74339]: pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:46.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:46.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:46 compute-0 ceph-mon[74339]: pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:48.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:48.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:48 compute-0 ceph-mon[74339]: pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:50.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:52:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:50.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:52:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:50 compute-0 ceph-mon[74339]: pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:51 compute-0 sudo[252433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:52:51 compute-0 sudo[252433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:51 compute-0 sudo[252433]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:51 compute-0 sudo[252458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:52:51 compute-0 sudo[252458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:52:51 compute-0 sudo[252458]: pam_unix(sudo:session): session closed for user root
Dec 06 06:52:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:52:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:52.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:52:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:52:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:52.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:52:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:53 compute-0 ceph-mon[74339]: pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:54.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:54.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:54 compute-0 ceph-mon[74339]: pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:52:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:56.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:52:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:56.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:52:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:56 compute-0 ceph-mon[74339]: pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:52:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:52:58.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:52:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:52:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:52:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:52:58.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:52:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:52:58 compute-0 ceph-mon[74339]: pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:53:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:00.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:53:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:00.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:00 compute-0 ceph-mon[74339]: pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:53:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:02.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:53:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:02.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:02 compute-0 ceph-mon[74339]: pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:03 compute-0 podman[252489]: 2025-12-06 06:53:03.445877897 +0000 UTC m=+0.103182031 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 06:53:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:53:03.801 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:53:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:53:03.802 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:53:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:53:03.803 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:53:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:04.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:04.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:04 compute-0 ceph-mon[74339]: pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:53:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:06.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:53:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:06.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:06 compute-0 ceph-mon[74339]: pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:53:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:08.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:53:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:08.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:08 compute-0 ceph-mon[74339]: pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:10.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:10.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:10 compute-0 ceph-mon[74339]: pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:11 compute-0 sudo[252521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:11 compute-0 sudo[252521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:11 compute-0 sudo[252521]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:11 compute-0 podman[252545]: 2025-12-06 06:53:11.54394424 +0000 UTC m=+0.052718877 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 06:53:11 compute-0 sudo[252556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:11 compute-0 sudo[252556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:11 compute-0 sudo[252556]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:11 compute-0 podman[252546]: 2025-12-06 06:53:11.55617546 +0000 UTC m=+0.054635786 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:53:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:12.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:12.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 06:53:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2725809804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:53:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 06:53:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2725809804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:53:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2725809804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:53:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2725809804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:53:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:53:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:53:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:53:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:53:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:53:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:53:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 06:53:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3390278078' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:53:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 06:53:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3390278078' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:53:13 compute-0 sudo[252608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:13 compute-0 sudo[252608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:13 compute-0 sudo[252608]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:13 compute-0 ceph-mon[74339]: pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3390278078' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:53:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3390278078' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:53:13 compute-0 sudo[252633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:53:13 compute-0 sudo[252633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:13 compute-0 sudo[252633]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:13 compute-0 sudo[252658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:13 compute-0 sudo[252658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:13 compute-0 sudo[252658]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:13 compute-0 sudo[252683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:53:13 compute-0 sudo[252683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:14 compute-0 sudo[252683]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:53:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:53:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 06:53:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2717889535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:53:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:14.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:53:14 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7447e86d-f310-4065-9c10-cce2fb72df24 does not exist
Dec 06 06:53:14 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 33e5ac15-7a31-499a-bda3-c726ef9e5cf1 does not exist
Dec 06 06:53:14 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e02b316d-38e7-4738-953a-ddc5919ac56c does not exist
Dec 06 06:53:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:53:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:53:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 06:53:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2717889535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:53:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:53:14 compute-0 sudo[252740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:14 compute-0 sudo[252740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:14 compute-0 sudo[252740]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:14.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:14 compute-0 sudo[252765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:53:14 compute-0 sudo[252765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:14 compute-0 sudo[252765]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:14 compute-0 sudo[252790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:14 compute-0 sudo[252790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:14 compute-0 sudo[252790]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:14 compute-0 sudo[252815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:53:14 compute-0 sudo[252815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2717889535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:53:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2717889535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:53:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:14 compute-0 podman[252882]: 2025-12-06 06:53:14.986394868 +0000 UTC m=+0.036594136 container create 2b3d04aaea932d4c9b8b591debd3b494c8dd9e4be0c4fd678a93ceaf3dd677f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 06:53:15 compute-0 systemd[1]: Started libpod-conmon-2b3d04aaea932d4c9b8b591debd3b494c8dd9e4be0c4fd678a93ceaf3dd677f4.scope.
Dec 06 06:53:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:53:15 compute-0 podman[252882]: 2025-12-06 06:53:15.062812587 +0000 UTC m=+0.113011875 container init 2b3d04aaea932d4c9b8b591debd3b494c8dd9e4be0c4fd678a93ceaf3dd677f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:53:15 compute-0 podman[252882]: 2025-12-06 06:53:14.970646509 +0000 UTC m=+0.020845797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:53:15 compute-0 podman[252882]: 2025-12-06 06:53:15.070144915 +0000 UTC m=+0.120344183 container start 2b3d04aaea932d4c9b8b591debd3b494c8dd9e4be0c4fd678a93ceaf3dd677f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 06:53:15 compute-0 podman[252882]: 2025-12-06 06:53:15.074073897 +0000 UTC m=+0.124273195 container attach 2b3d04aaea932d4c9b8b591debd3b494c8dd9e4be0c4fd678a93ceaf3dd677f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:53:15 compute-0 optimistic_swirles[252898]: 167 167
Dec 06 06:53:15 compute-0 systemd[1]: libpod-2b3d04aaea932d4c9b8b591debd3b494c8dd9e4be0c4fd678a93ceaf3dd677f4.scope: Deactivated successfully.
Dec 06 06:53:15 compute-0 podman[252882]: 2025-12-06 06:53:15.079704141 +0000 UTC m=+0.129903419 container died 2b3d04aaea932d4c9b8b591debd3b494c8dd9e4be0c4fd678a93ceaf3dd677f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Dec 06 06:53:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-2efe75451f4ad40483958f39f8f56b5b0164d6e094d2a17edd560fcf069f7210-merged.mount: Deactivated successfully.
Dec 06 06:53:15 compute-0 podman[252882]: 2025-12-06 06:53:15.116028788 +0000 UTC m=+0.166228056 container remove 2b3d04aaea932d4c9b8b591debd3b494c8dd9e4be0c4fd678a93ceaf3dd677f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 06:53:15 compute-0 systemd[1]: libpod-conmon-2b3d04aaea932d4c9b8b591debd3b494c8dd9e4be0c4fd678a93ceaf3dd677f4.scope: Deactivated successfully.
Dec 06 06:53:15 compute-0 podman[252921]: 2025-12-06 06:53:15.265826274 +0000 UTC m=+0.040468626 container create 5ffbe04dc88deb4538127aa6b5e96438a143338e21f2e4f3ae4f6d263a3ae133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:53:15 compute-0 systemd[1]: Started libpod-conmon-5ffbe04dc88deb4538127aa6b5e96438a143338e21f2e4f3ae4f6d263a3ae133.scope.
Dec 06 06:53:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ef9311bdce7d932b2101d6f5c3d661904362f2645ec7647b881d585b370c76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ef9311bdce7d932b2101d6f5c3d661904362f2645ec7647b881d585b370c76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ef9311bdce7d932b2101d6f5c3d661904362f2645ec7647b881d585b370c76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ef9311bdce7d932b2101d6f5c3d661904362f2645ec7647b881d585b370c76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ef9311bdce7d932b2101d6f5c3d661904362f2645ec7647b881d585b370c76/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:15 compute-0 podman[252921]: 2025-12-06 06:53:15.249592301 +0000 UTC m=+0.024234663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:53:15 compute-0 podman[252921]: 2025-12-06 06:53:15.345191566 +0000 UTC m=+0.119833948 container init 5ffbe04dc88deb4538127aa6b5e96438a143338e21f2e4f3ae4f6d263a3ae133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 06:53:15 compute-0 podman[252921]: 2025-12-06 06:53:15.354842736 +0000 UTC m=+0.129485078 container start 5ffbe04dc88deb4538127aa6b5e96438a143338e21f2e4f3ae4f6d263a3ae133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 06 06:53:15 compute-0 podman[252921]: 2025-12-06 06:53:15.3582281 +0000 UTC m=+0.132870512 container attach 5ffbe04dc88deb4538127aa6b5e96438a143338e21f2e4f3ae4f6d263a3ae133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 06:53:15 compute-0 ceph-mon[74339]: pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:16 compute-0 silly_germain[252937]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:53:16 compute-0 silly_germain[252937]: --> relative data size: 1.0
Dec 06 06:53:16 compute-0 silly_germain[252937]: --> All data devices are unavailable
Dec 06 06:53:16 compute-0 systemd[1]: libpod-5ffbe04dc88deb4538127aa6b5e96438a143338e21f2e4f3ae4f6d263a3ae133.scope: Deactivated successfully.
Dec 06 06:53:16 compute-0 podman[252921]: 2025-12-06 06:53:16.197434968 +0000 UTC m=+0.972077310 container died 5ffbe04dc88deb4538127aa6b5e96438a143338e21f2e4f3ae4f6d263a3ae133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 06:53:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-68ef9311bdce7d932b2101d6f5c3d661904362f2645ec7647b881d585b370c76-merged.mount: Deactivated successfully.
Dec 06 06:53:16 compute-0 podman[252921]: 2025-12-06 06:53:16.2545673 +0000 UTC m=+1.029209642 container remove 5ffbe04dc88deb4538127aa6b5e96438a143338e21f2e4f3ae4f6d263a3ae133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_germain, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:53:16 compute-0 systemd[1]: libpod-conmon-5ffbe04dc88deb4538127aa6b5e96438a143338e21f2e4f3ae4f6d263a3ae133.scope: Deactivated successfully.
Dec 06 06:53:16 compute-0 sudo[252815]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:16 compute-0 sudo[252966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:16 compute-0 sudo[252966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:16 compute-0 sudo[252966]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:16 compute-0 sudo[252991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:53:16 compute-0 sudo[252991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:16 compute-0 sudo[252991]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:16.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:16 compute-0 sudo[253017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:16 compute-0 sudo[253017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:16 compute-0 sudo[253017]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:16 compute-0 sudo[253042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:53:16 compute-0 sudo[253042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:53:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:16.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:53:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:16 compute-0 podman[253106]: 2025-12-06 06:53:16.803773564 +0000 UTC m=+0.037651419 container create 7fce542ded40d4e99123ac857f533d0e8e7b34356aa7aabd2c984761aadebb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 06:53:16 compute-0 systemd[1]: Started libpod-conmon-7fce542ded40d4e99123ac857f533d0e8e7b34356aa7aabd2c984761aadebb4f.scope.
Dec 06 06:53:16 compute-0 ceph-mon[74339]: pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:53:16 compute-0 podman[253106]: 2025-12-06 06:53:16.78688018 +0000 UTC m=+0.020758065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:53:16 compute-0 podman[253106]: 2025-12-06 06:53:16.883174056 +0000 UTC m=+0.117051921 container init 7fce542ded40d4e99123ac857f533d0e8e7b34356aa7aabd2c984761aadebb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:53:16 compute-0 podman[253106]: 2025-12-06 06:53:16.891478354 +0000 UTC m=+0.125356199 container start 7fce542ded40d4e99123ac857f533d0e8e7b34356aa7aabd2c984761aadebb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 06:53:16 compute-0 podman[253106]: 2025-12-06 06:53:16.895827319 +0000 UTC m=+0.129705204 container attach 7fce542ded40d4e99123ac857f533d0e8e7b34356aa7aabd2c984761aadebb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_golick, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 06:53:16 compute-0 wonderful_golick[253123]: 167 167
Dec 06 06:53:16 compute-0 systemd[1]: libpod-7fce542ded40d4e99123ac857f533d0e8e7b34356aa7aabd2c984761aadebb4f.scope: Deactivated successfully.
Dec 06 06:53:16 compute-0 podman[253106]: 2025-12-06 06:53:16.897799229 +0000 UTC m=+0.131677084 container died 7fce542ded40d4e99123ac857f533d0e8e7b34356aa7aabd2c984761aadebb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_golick, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:53:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d44ff2a3b26b174711a56f4ceda0d348570e7ff88754faf580adb945672bd03d-merged.mount: Deactivated successfully.
Dec 06 06:53:16 compute-0 podman[253106]: 2025-12-06 06:53:16.937961986 +0000 UTC m=+0.171839841 container remove 7fce542ded40d4e99123ac857f533d0e8e7b34356aa7aabd2c984761aadebb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 06:53:16 compute-0 systemd[1]: libpod-conmon-7fce542ded40d4e99123ac857f533d0e8e7b34356aa7aabd2c984761aadebb4f.scope: Deactivated successfully.
Dec 06 06:53:17 compute-0 podman[253145]: 2025-12-06 06:53:17.09704414 +0000 UTC m=+0.039021341 container create 0b2aa98ababd223a7e2da742af1168668ceb4287541d3001fa59549f13763b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:53:17 compute-0 systemd[1]: Started libpod-conmon-0b2aa98ababd223a7e2da742af1168668ceb4287541d3001fa59549f13763b55.scope.
Dec 06 06:53:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1312e128b8f423b94b895fe50b6df32c4db2ba7c1b4d39eefb434f1adad9b3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1312e128b8f423b94b895fe50b6df32c4db2ba7c1b4d39eefb434f1adad9b3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1312e128b8f423b94b895fe50b6df32c4db2ba7c1b4d39eefb434f1adad9b3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1312e128b8f423b94b895fe50b6df32c4db2ba7c1b4d39eefb434f1adad9b3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:17 compute-0 podman[253145]: 2025-12-06 06:53:17.082317283 +0000 UTC m=+0.024294504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:53:17 compute-0 podman[253145]: 2025-12-06 06:53:17.182739037 +0000 UTC m=+0.124716258 container init 0b2aa98ababd223a7e2da742af1168668ceb4287541d3001fa59549f13763b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wescoff, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:53:17 compute-0 podman[253145]: 2025-12-06 06:53:17.190198799 +0000 UTC m=+0.132176000 container start 0b2aa98ababd223a7e2da742af1168668ceb4287541d3001fa59549f13763b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 06:53:17 compute-0 podman[253145]: 2025-12-06 06:53:17.193971186 +0000 UTC m=+0.135948407 container attach 0b2aa98ababd223a7e2da742af1168668ceb4287541d3001fa59549f13763b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]: {
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:     "0": [
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:         {
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             "devices": [
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "/dev/loop3"
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             ],
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             "lv_name": "ceph_lv0",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             "lv_size": "7511998464",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             "name": "ceph_lv0",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             "tags": {
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "ceph.cluster_name": "ceph",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "ceph.crush_device_class": "",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "ceph.encrypted": "0",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "ceph.osd_id": "0",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "ceph.type": "block",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:                 "ceph.vdo": "0"
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             },
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             "type": "block",
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:             "vg_name": "ceph_vg0"
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:         }
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]:     ]
Dec 06 06:53:17 compute-0 compassionate_wescoff[253162]: }
Dec 06 06:53:17 compute-0 systemd[1]: libpod-0b2aa98ababd223a7e2da742af1168668ceb4287541d3001fa59549f13763b55.scope: Deactivated successfully.
Dec 06 06:53:17 compute-0 podman[253145]: 2025-12-06 06:53:17.963001588 +0000 UTC m=+0.904978789 container died 0b2aa98ababd223a7e2da742af1168668ceb4287541d3001fa59549f13763b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:53:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1312e128b8f423b94b895fe50b6df32c4db2ba7c1b4d39eefb434f1adad9b3d-merged.mount: Deactivated successfully.
Dec 06 06:53:18 compute-0 podman[253145]: 2025-12-06 06:53:18.017668172 +0000 UTC m=+0.959645373 container remove 0b2aa98ababd223a7e2da742af1168668ceb4287541d3001fa59549f13763b55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 06:53:18 compute-0 systemd[1]: libpod-conmon-0b2aa98ababd223a7e2da742af1168668ceb4287541d3001fa59549f13763b55.scope: Deactivated successfully.
Dec 06 06:53:18 compute-0 sudo[253042]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:18 compute-0 sudo[253183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:18 compute-0 sudo[253183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:18 compute-0 sudo[253183]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:18 compute-0 sudo[253208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:53:18 compute-0 sudo[253208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:18 compute-0 sudo[253208]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:18 compute-0 sudo[253233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:18 compute-0 sudo[253233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:18 compute-0 sudo[253233]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:53:18
Dec 06 06:53:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:53:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:53:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', '.mgr', 'images', 'default.rgw.meta', 'backups', 'volumes']
Dec 06 06:53:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:53:18 compute-0 sudo[253258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:53:18 compute-0 sudo[253258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:18.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:53:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:18.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:53:18 compute-0 podman[253325]: 2025-12-06 06:53:18.58891097 +0000 UTC m=+0.035846683 container create cd037404833313e065947365d9b059e268f1a2b904a35dbfa2ef48bf7685ee6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_davinci, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 06:53:18 compute-0 systemd[1]: Started libpod-conmon-cd037404833313e065947365d9b059e268f1a2b904a35dbfa2ef48bf7685ee6a.scope.
Dec 06 06:53:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:53:18 compute-0 podman[253325]: 2025-12-06 06:53:18.664465423 +0000 UTC m=+0.111401156 container init cd037404833313e065947365d9b059e268f1a2b904a35dbfa2ef48bf7685ee6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Dec 06 06:53:18 compute-0 podman[253325]: 2025-12-06 06:53:18.573520583 +0000 UTC m=+0.020456326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:53:18 compute-0 podman[253325]: 2025-12-06 06:53:18.672167212 +0000 UTC m=+0.119102925 container start cd037404833313e065947365d9b059e268f1a2b904a35dbfa2ef48bf7685ee6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_davinci, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:53:18 compute-0 podman[253325]: 2025-12-06 06:53:18.674924178 +0000 UTC m=+0.121859911 container attach cd037404833313e065947365d9b059e268f1a2b904a35dbfa2ef48bf7685ee6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_davinci, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 06:53:18 compute-0 upbeat_davinci[253342]: 167 167
Dec 06 06:53:18 compute-0 systemd[1]: libpod-cd037404833313e065947365d9b059e268f1a2b904a35dbfa2ef48bf7685ee6a.scope: Deactivated successfully.
Dec 06 06:53:18 compute-0 podman[253325]: 2025-12-06 06:53:18.676341311 +0000 UTC m=+0.123277024 container died cd037404833313e065947365d9b059e268f1a2b904a35dbfa2ef48bf7685ee6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_davinci, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 06:53:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f391d2102214dccf7552e87c68f3bc9debe5b9fe2a1eb39b3e2e429b40e8ecb-merged.mount: Deactivated successfully.
Dec 06 06:53:18 compute-0 podman[253325]: 2025-12-06 06:53:18.717615571 +0000 UTC m=+0.164551294 container remove cd037404833313e065947365d9b059e268f1a2b904a35dbfa2ef48bf7685ee6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_davinci, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 06:53:18 compute-0 systemd[1]: libpod-conmon-cd037404833313e065947365d9b059e268f1a2b904a35dbfa2ef48bf7685ee6a.scope: Deactivated successfully.
Dec 06 06:53:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:18 compute-0 ceph-mon[74339]: pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:18 compute-0 podman[253367]: 2025-12-06 06:53:18.860157883 +0000 UTC m=+0.036121772 container create 6b9816f50683ed511f8506a23b119f6f17016f56e7d504dd49e5f27c08c16cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cannon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:53:18 compute-0 systemd[1]: Started libpod-conmon-6b9816f50683ed511f8506a23b119f6f17016f56e7d504dd49e5f27c08c16cf4.scope.
Dec 06 06:53:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc95d942b56c69d86be20d4b3515b7e5d7c436b859466d4c46981c9fe82d5599/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc95d942b56c69d86be20d4b3515b7e5d7c436b859466d4c46981c9fe82d5599/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc95d942b56c69d86be20d4b3515b7e5d7c436b859466d4c46981c9fe82d5599/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc95d942b56c69d86be20d4b3515b7e5d7c436b859466d4c46981c9fe82d5599/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:53:18 compute-0 podman[253367]: 2025-12-06 06:53:18.919973367 +0000 UTC m=+0.095937276 container init 6b9816f50683ed511f8506a23b119f6f17016f56e7d504dd49e5f27c08c16cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 06:53:18 compute-0 podman[253367]: 2025-12-06 06:53:18.931211016 +0000 UTC m=+0.107174905 container start 6b9816f50683ed511f8506a23b119f6f17016f56e7d504dd49e5f27c08c16cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cannon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:53:18 compute-0 podman[253367]: 2025-12-06 06:53:18.9345857 +0000 UTC m=+0.110549609 container attach 6b9816f50683ed511f8506a23b119f6f17016f56e7d504dd49e5f27c08c16cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:53:18 compute-0 podman[253367]: 2025-12-06 06:53:18.845168418 +0000 UTC m=+0.021132327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:53:19 compute-0 awesome_cannon[253383]: {
Dec 06 06:53:19 compute-0 awesome_cannon[253383]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:53:19 compute-0 awesome_cannon[253383]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:53:19 compute-0 awesome_cannon[253383]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:53:19 compute-0 awesome_cannon[253383]:         "osd_id": 0,
Dec 06 06:53:19 compute-0 awesome_cannon[253383]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:53:19 compute-0 awesome_cannon[253383]:         "type": "bluestore"
Dec 06 06:53:19 compute-0 awesome_cannon[253383]:     }
Dec 06 06:53:19 compute-0 awesome_cannon[253383]: }
Dec 06 06:53:19 compute-0 systemd[1]: libpod-6b9816f50683ed511f8506a23b119f6f17016f56e7d504dd49e5f27c08c16cf4.scope: Deactivated successfully.
Dec 06 06:53:19 compute-0 podman[253367]: 2025-12-06 06:53:19.770146416 +0000 UTC m=+0.946110325 container died 6b9816f50683ed511f8506a23b119f6f17016f56e7d504dd49e5f27c08c16cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 06:53:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc95d942b56c69d86be20d4b3515b7e5d7c436b859466d4c46981c9fe82d5599-merged.mount: Deactivated successfully.
Dec 06 06:53:19 compute-0 podman[253367]: 2025-12-06 06:53:19.820028063 +0000 UTC m=+0.995991952 container remove 6b9816f50683ed511f8506a23b119f6f17016f56e7d504dd49e5f27c08c16cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_cannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:53:19 compute-0 systemd[1]: libpod-conmon-6b9816f50683ed511f8506a23b119f6f17016f56e7d504dd49e5f27c08c16cf4.scope: Deactivated successfully.
Dec 06 06:53:19 compute-0 sudo[253258]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:53:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:53:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:53:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:53:20 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2bc51715-a3ae-440e-96eb-f1c07c16b265 does not exist
Dec 06 06:53:20 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f2fa49fa-3566-40af-a206-182ae767a445 does not exist
Dec 06 06:53:20 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5e3b50f7-8830-42be-8f45-039a6c1a5578 does not exist
Dec 06 06:53:20 compute-0 sudo[253415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:20 compute-0 sudo[253415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:20 compute-0 sudo[253415]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:20 compute-0 sudo[253440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:53:20 compute-0 sudo[253440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:20 compute-0 sudo[253440]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:20.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:53:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:20.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:53:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:53:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:53:20 compute-0 ceph-mon[74339]: pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000031s ======
Dec 06 06:53:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:22.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec 06 06:53:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:22.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:22 compute-0 ceph-mon[74339]: pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:53:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:24.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:24.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:24 compute-0 nova_compute[251992]: 2025-12-06 06:53:24.786 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:24 compute-0 nova_compute[251992]: 2025-12-06 06:53:24.823 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:24 compute-0 ceph-mon[74339]: pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:53:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:53:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:26.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:26.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:26 compute-0 ceph-mon[74339]: pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:28.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:28.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:28 compute-0 ceph-mon[74339]: pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:30.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:30.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:30 compute-0 ceph-mon[74339]: pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:31 compute-0 sudo[253471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:31 compute-0 sudo[253471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:31 compute-0 sudo[253471]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:31 compute-0 sudo[253496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:31 compute-0 sudo[253496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:31 compute-0 sudo[253496]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:32.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:32.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:32 compute-0 ceph-mon[74339]: pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.659 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.684 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.685 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.685 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.686 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.686 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.686 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.686 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.686 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.687 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.717 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.717 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.717 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.717 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:53:33 compute-0 nova_compute[251992]: 2025-12-06 06:53:33.718 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:53:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:53:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472132947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:34 compute-0 nova_compute[251992]: 2025-12-06 06:53:34.189 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:53:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/472132947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:34 compute-0 nova_compute[251992]: 2025-12-06 06:53:34.341 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:53:34 compute-0 nova_compute[251992]: 2025-12-06 06:53:34.343 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5196MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:53:34 compute-0 nova_compute[251992]: 2025-12-06 06:53:34.343 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:53:34 compute-0 nova_compute[251992]: 2025-12-06 06:53:34.343 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:53:34 compute-0 podman[253544]: 2025-12-06 06:53:34.452293827 +0000 UTC m=+0.108649485 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:53:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:34.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:34.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:34 compute-0 nova_compute[251992]: 2025-12-06 06:53:34.649 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:53:34 compute-0 nova_compute[251992]: 2025-12-06 06:53:34.649 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:53:34 compute-0 nova_compute[251992]: 2025-12-06 06:53:34.692 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:53:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:53:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1127660712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:35 compute-0 nova_compute[251992]: 2025-12-06 06:53:35.135 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:53:35 compute-0 nova_compute[251992]: 2025-12-06 06:53:35.141 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:53:35 compute-0 nova_compute[251992]: 2025-12-06 06:53:35.168 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:53:35 compute-0 nova_compute[251992]: 2025-12-06 06:53:35.170 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:53:35 compute-0 nova_compute[251992]: 2025-12-06 06:53:35.170 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:53:35 compute-0 ceph-mon[74339]: pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1311066653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1127660712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3584048377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3498460210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:36.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:36.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1477125112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:53:37 compute-0 ceph-mon[74339]: pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:38.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:38.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:38 compute-0 ceph-mon[74339]: pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:40.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:40.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:40 compute-0 ceph-mon[74339]: pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:42 compute-0 podman[253597]: 2025-12-06 06:53:42.402899646 +0000 UTC m=+0.063246991 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Dec 06 06:53:42 compute-0 podman[253596]: 2025-12-06 06:53:42.421246334 +0000 UTC m=+0.082816445 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 06:53:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:42.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:42.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:42 compute-0 ceph-mon[74339]: pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:53:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:53:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:53:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:53:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:53:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:53:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:44.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:44.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:44 compute-0 ceph-mon[74339]: pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:46.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:46.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:46 compute-0 ceph-mon[74339]: pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:48.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:48.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:48 compute-0 ceph-mon[74339]: pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:50.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:50.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:51 compute-0 ceph-mon[74339]: pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:51 compute-0 sudo[253637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:51 compute-0 sudo[253637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:51 compute-0 sudo[253637]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:51 compute-0 sudo[253662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:53:51 compute-0 sudo[253662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:53:51 compute-0 sudo[253662]: pam_unix(sudo:session): session closed for user root
Dec 06 06:53:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:52.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:52.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:52 compute-0 ceph-mon[74339]: pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:54.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:54.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:54 compute-0 ceph-mon[74339]: pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:53:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:53:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:56.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:53:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:53:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:56.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:53:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:56 compute-0 ceph-mon[74339]: pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:53:58.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:53:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:53:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:53:58.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:53:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:53:58 compute-0 ceph-mon[74339]: pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:00.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:00.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:00 compute-0 ceph-mon[74339]: pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:02.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:02.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:02 compute-0 ceph-mon[74339]: pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:54:03.802 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:54:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:54:03.802 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:54:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:54:03.802 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:54:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:04.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:54:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:04.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:54:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:04 compute-0 ceph-mon[74339]: pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:05 compute-0 podman[253694]: 2025-12-06 06:54:05.413905505 +0000 UTC m=+0.075715777 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 06:54:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:06.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:06.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:06 compute-0 ceph-mon[74339]: pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:08.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:08.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:08 compute-0 ceph-mon[74339]: pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000030s ======
Dec 06 06:54:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:10.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec 06 06:54:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:10.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:10 compute-0 ceph-mon[74339]: pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:11 compute-0 sudo[253723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:11 compute-0 sudo[253723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:11 compute-0 sudo[253723]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:11 compute-0 sudo[253748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:11 compute-0 sudo[253748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:11 compute-0 sudo[253748]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:12.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:12.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:12 compute-0 ceph-mon[74339]: pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:54:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:54:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:54:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:54:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:54:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:54:13 compute-0 podman[253774]: 2025-12-06 06:54:13.037376751 +0000 UTC m=+0.057076408 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec 06 06:54:13 compute-0 podman[253775]: 2025-12-06 06:54:13.04313397 +0000 UTC m=+0.058654375 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 06:54:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:14.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:14.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:14 compute-0 ceph-mon[74339]: pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:16.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:16.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:16 compute-0 ceph-mon[74339]: pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:54:18
Dec 06 06:54:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:54:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:54:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'images', '.rgw.root', 'default.rgw.log', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data']
Dec 06 06:54:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:54:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:18.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:18.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:18 compute-0 ceph-mon[74339]: pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:20 compute-0 sudo[253816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:20 compute-0 sudo[253816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:20 compute-0 sudo[253816]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:20.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:20 compute-0 sudo[253841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:54:20 compute-0 sudo[253841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:20 compute-0 sudo[253841]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:20.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:20 compute-0 sudo[253866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:20 compute-0 sudo[253866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:20 compute-0 sudo[253866]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:20 compute-0 sudo[253891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:54:20 compute-0 sudo[253891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:20 compute-0 ceph-mon[74339]: pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:21 compute-0 sudo[253891]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 06:54:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 06:54:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:54:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:54:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:54:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:54:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:54:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:54:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 248d66c8-b1a5-4059-868c-943e54c7786c does not exist
Dec 06 06:54:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 629ec7d9-6bac-4a19-8e0e-73beecae3847 does not exist
Dec 06 06:54:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 58de943c-94b5-4b60-abbf-05e623194105 does not exist
Dec 06 06:54:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:54:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:54:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:54:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:54:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:54:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:54:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:21 compute-0 sudo[253948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:21 compute-0 sudo[253948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:21 compute-0 sudo[253948]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:21 compute-0 sudo[253973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:54:21 compute-0 sudo[253973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:21 compute-0 sudo[253973]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:21 compute-0 sudo[253998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:21 compute-0 sudo[253998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:21 compute-0 sudo[253998]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:21 compute-0 sudo[254023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:54:21 compute-0 sudo[254023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:21 compute-0 podman[254087]: 2025-12-06 06:54:21.795457203 +0000 UTC m=+0.037891506 container create 4760501012fcc3b09d79225ab36635fbad1d973e56a313ce5eedeffdc8e1b731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ride, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:54:21 compute-0 systemd[1]: Started libpod-conmon-4760501012fcc3b09d79225ab36635fbad1d973e56a313ce5eedeffdc8e1b731.scope.
Dec 06 06:54:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:54:21 compute-0 podman[254087]: 2025-12-06 06:54:21.87057056 +0000 UTC m=+0.113004923 container init 4760501012fcc3b09d79225ab36635fbad1d973e56a313ce5eedeffdc8e1b731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:54:21 compute-0 podman[254087]: 2025-12-06 06:54:21.777874235 +0000 UTC m=+0.020308558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:54:21 compute-0 podman[254087]: 2025-12-06 06:54:21.878873255 +0000 UTC m=+0.121307568 container start 4760501012fcc3b09d79225ab36635fbad1d973e56a313ce5eedeffdc8e1b731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ride, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 06:54:21 compute-0 podman[254087]: 2025-12-06 06:54:21.881810341 +0000 UTC m=+0.124244644 container attach 4760501012fcc3b09d79225ab36635fbad1d973e56a313ce5eedeffdc8e1b731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ride, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:54:21 compute-0 nifty_ride[254103]: 167 167
Dec 06 06:54:21 compute-0 systemd[1]: libpod-4760501012fcc3b09d79225ab36635fbad1d973e56a313ce5eedeffdc8e1b731.scope: Deactivated successfully.
Dec 06 06:54:21 compute-0 podman[254087]: 2025-12-06 06:54:21.886004285 +0000 UTC m=+0.128438588 container died 4760501012fcc3b09d79225ab36635fbad1d973e56a313ce5eedeffdc8e1b731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ride, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1d731967710b54c161f51c8c865f2611f99aa1f68fe35aaa8d2bf3bc5515ce2-merged.mount: Deactivated successfully.
Dec 06 06:54:21 compute-0 podman[254087]: 2025-12-06 06:54:21.925680751 +0000 UTC m=+0.168115054 container remove 4760501012fcc3b09d79225ab36635fbad1d973e56a313ce5eedeffdc8e1b731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ride, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 06:54:21 compute-0 systemd[1]: libpod-conmon-4760501012fcc3b09d79225ab36635fbad1d973e56a313ce5eedeffdc8e1b731.scope: Deactivated successfully.
Dec 06 06:54:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 06:54:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:54:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:54:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:54:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:54:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:54:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:54:22 compute-0 podman[254127]: 2025-12-06 06:54:22.076959099 +0000 UTC m=+0.037256686 container create 1a933babbf23db3bc8af3401ad9a71dea52020e6739aa0cafe322dc70159a8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:54:22 compute-0 systemd[1]: Started libpod-conmon-1a933babbf23db3bc8af3401ad9a71dea52020e6739aa0cafe322dc70159a8f1.scope.
Dec 06 06:54:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b15fe0cbfaa0a1acc2906f122746a0d58f0d654ea4ef51b47c6ddcfe18da0e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b15fe0cbfaa0a1acc2906f122746a0d58f0d654ea4ef51b47c6ddcfe18da0e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b15fe0cbfaa0a1acc2906f122746a0d58f0d654ea4ef51b47c6ddcfe18da0e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b15fe0cbfaa0a1acc2906f122746a0d58f0d654ea4ef51b47c6ddcfe18da0e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b15fe0cbfaa0a1acc2906f122746a0d58f0d654ea4ef51b47c6ddcfe18da0e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:22 compute-0 podman[254127]: 2025-12-06 06:54:22.060585187 +0000 UTC m=+0.020882794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:54:22 compute-0 podman[254127]: 2025-12-06 06:54:22.165853833 +0000 UTC m=+0.126151450 container init 1a933babbf23db3bc8af3401ad9a71dea52020e6739aa0cafe322dc70159a8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:54:22 compute-0 podman[254127]: 2025-12-06 06:54:22.172661332 +0000 UTC m=+0.132958919 container start 1a933babbf23db3bc8af3401ad9a71dea52020e6739aa0cafe322dc70159a8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 06:54:22 compute-0 podman[254127]: 2025-12-06 06:54:22.177499785 +0000 UTC m=+0.137797372 container attach 1a933babbf23db3bc8af3401ad9a71dea52020e6739aa0cafe322dc70159a8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec 06 06:54:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:54:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:22.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:54:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:22.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:22 compute-0 ceph-mon[74339]: pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:22 compute-0 goofy_ishizaka[254144]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:54:22 compute-0 goofy_ishizaka[254144]: --> relative data size: 1.0
Dec 06 06:54:22 compute-0 goofy_ishizaka[254144]: --> All data devices are unavailable
Dec 06 06:54:23 compute-0 systemd[1]: libpod-1a933babbf23db3bc8af3401ad9a71dea52020e6739aa0cafe322dc70159a8f1.scope: Deactivated successfully.
Dec 06 06:54:23 compute-0 podman[254160]: 2025-12-06 06:54:23.062481964 +0000 UTC m=+0.028876420 container died 1a933babbf23db3bc8af3401ad9a71dea52020e6739aa0cafe322dc70159a8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b15fe0cbfaa0a1acc2906f122746a0d58f0d654ea4ef51b47c6ddcfe18da0e7-merged.mount: Deactivated successfully.
Dec 06 06:54:23 compute-0 podman[254160]: 2025-12-06 06:54:23.113627147 +0000 UTC m=+0.080021583 container remove 1a933babbf23db3bc8af3401ad9a71dea52020e6739aa0cafe322dc70159a8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 06:54:23 compute-0 systemd[1]: libpod-conmon-1a933babbf23db3bc8af3401ad9a71dea52020e6739aa0cafe322dc70159a8f1.scope: Deactivated successfully.
Dec 06 06:54:23 compute-0 sudo[254023]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:23 compute-0 sudo[254175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:23 compute-0 sudo[254175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:23 compute-0 sudo[254175]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:54:23 compute-0 sudo[254200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:54:23 compute-0 sudo[254200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:23 compute-0 sudo[254200]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:23 compute-0 sudo[254225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:23 compute-0 sudo[254225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:23 compute-0 sudo[254225]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:23 compute-0 sudo[254250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:54:23 compute-0 sudo[254250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:54:23 compute-0 podman[254317]: 2025-12-06 06:54:23.685359576 +0000 UTC m=+0.035932367 container create 8b02795fbf3c65951369806c443e959f9a053f07bdb5659c603372c683e168ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lamport, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:54:23 compute-0 systemd[1]: Started libpod-conmon-8b02795fbf3c65951369806c443e959f9a053f07bdb5659c603372c683e168ee.scope.
Dec 06 06:54:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:54:23 compute-0 podman[254317]: 2025-12-06 06:54:23.757524788 +0000 UTC m=+0.108097619 container init 8b02795fbf3c65951369806c443e959f9a053f07bdb5659c603372c683e168ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 06:54:23 compute-0 podman[254317]: 2025-12-06 06:54:23.765136532 +0000 UTC m=+0.115709323 container start 8b02795fbf3c65951369806c443e959f9a053f07bdb5659c603372c683e168ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lamport, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 06:54:23 compute-0 podman[254317]: 2025-12-06 06:54:23.670254873 +0000 UTC m=+0.020827684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:54:23 compute-0 clever_lamport[254333]: 167 167
Dec 06 06:54:23 compute-0 systemd[1]: libpod-8b02795fbf3c65951369806c443e959f9a053f07bdb5659c603372c683e168ee.scope: Deactivated successfully.
Dec 06 06:54:23 compute-0 podman[254317]: 2025-12-06 06:54:23.796804953 +0000 UTC m=+0.147377764 container attach 8b02795fbf3c65951369806c443e959f9a053f07bdb5659c603372c683e168ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lamport, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:54:23 compute-0 podman[254317]: 2025-12-06 06:54:23.797076141 +0000 UTC m=+0.147648932 container died 8b02795fbf3c65951369806c443e959f9a053f07bdb5659c603372c683e168ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 06:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e5e94b2026c69ec725f0292153045393b9034c5908c9a077fa73b06cc7ab7a0-merged.mount: Deactivated successfully.
Dec 06 06:54:23 compute-0 podman[254317]: 2025-12-06 06:54:23.846042461 +0000 UTC m=+0.196615262 container remove 8b02795fbf3c65951369806c443e959f9a053f07bdb5659c603372c683e168ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:54:23 compute-0 systemd[1]: libpod-conmon-8b02795fbf3c65951369806c443e959f9a053f07bdb5659c603372c683e168ee.scope: Deactivated successfully.
Dec 06 06:54:23 compute-0 podman[254357]: 2025-12-06 06:54:23.991421185 +0000 UTC m=+0.038304657 container create bc7865dd3a17b8bb2328e4b6b56b8eb146b4c7fcbc8b34bc69c9802b645860bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_heisenberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:54:24 compute-0 systemd[1]: Started libpod-conmon-bc7865dd3a17b8bb2328e4b6b56b8eb146b4c7fcbc8b34bc69c9802b645860bf.scope.
Dec 06 06:54:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e03304763bfe6560e86db8313780730000ae300b599860d7bd6ab47c7553fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e03304763bfe6560e86db8313780730000ae300b599860d7bd6ab47c7553fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e03304763bfe6560e86db8313780730000ae300b599860d7bd6ab47c7553fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e03304763bfe6560e86db8313780730000ae300b599860d7bd6ab47c7553fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:24 compute-0 podman[254357]: 2025-12-06 06:54:24.070612623 +0000 UTC m=+0.117496115 container init bc7865dd3a17b8bb2328e4b6b56b8eb146b4c7fcbc8b34bc69c9802b645860bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:54:24 compute-0 podman[254357]: 2025-12-06 06:54:23.976174777 +0000 UTC m=+0.023058269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:54:24 compute-0 podman[254357]: 2025-12-06 06:54:24.077905087 +0000 UTC m=+0.124788559 container start bc7865dd3a17b8bb2328e4b6b56b8eb146b4c7fcbc8b34bc69c9802b645860bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Dec 06 06:54:24 compute-0 podman[254357]: 2025-12-06 06:54:24.081183024 +0000 UTC m=+0.128066506 container attach bc7865dd3a17b8bb2328e4b6b56b8eb146b4c7fcbc8b34bc69c9802b645860bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_heisenberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:54:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:24.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:24.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]: {
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:     "0": [
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:         {
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             "devices": [
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "/dev/loop3"
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             ],
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             "lv_name": "ceph_lv0",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             "lv_size": "7511998464",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             "name": "ceph_lv0",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             "tags": {
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "ceph.cluster_name": "ceph",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "ceph.crush_device_class": "",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "ceph.encrypted": "0",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "ceph.osd_id": "0",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "ceph.type": "block",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:                 "ceph.vdo": "0"
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             },
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             "type": "block",
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:             "vg_name": "ceph_vg0"
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:         }
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]:     ]
Dec 06 06:54:24 compute-0 trusting_heisenberg[254373]: }
Dec 06 06:54:24 compute-0 systemd[1]: libpod-bc7865dd3a17b8bb2328e4b6b56b8eb146b4c7fcbc8b34bc69c9802b645860bf.scope: Deactivated successfully.
Dec 06 06:54:24 compute-0 podman[254357]: 2025-12-06 06:54:24.839130948 +0000 UTC m=+0.886014430 container died bc7865dd3a17b8bb2328e4b6b56b8eb146b4c7fcbc8b34bc69c9802b645860bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 06:54:24 compute-0 ceph-mon[74339]: pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1e03304763bfe6560e86db8313780730000ae300b599860d7bd6ab47c7553fb-merged.mount: Deactivated successfully.
Dec 06 06:54:24 compute-0 podman[254357]: 2025-12-06 06:54:24.895247038 +0000 UTC m=+0.942130510 container remove bc7865dd3a17b8bb2328e4b6b56b8eb146b4c7fcbc8b34bc69c9802b645860bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_heisenberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 06:54:24 compute-0 systemd[1]: libpod-conmon-bc7865dd3a17b8bb2328e4b6b56b8eb146b4c7fcbc8b34bc69c9802b645860bf.scope: Deactivated successfully.
Dec 06 06:54:24 compute-0 sudo[254250]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:24 compute-0 sudo[254396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:24 compute-0 sudo[254396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:24 compute-0 sudo[254396]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:25 compute-0 sudo[254421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:54:25 compute-0 sudo[254421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:25 compute-0 sudo[254421]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:25 compute-0 sudo[254446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:25 compute-0 sudo[254446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:25 compute-0 sudo[254446]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:25 compute-0 sudo[254471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:54:25 compute-0 sudo[254471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:25 compute-0 podman[254536]: 2025-12-06 06:54:25.45857454 +0000 UTC m=+0.040436840 container create 0b9aab46aa2d8007cfe6d9d893b08eb3fdeeaac2457f2e28bac7831a1dd2a703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 06:54:25 compute-0 systemd[1]: Started libpod-conmon-0b9aab46aa2d8007cfe6d9d893b08eb3fdeeaac2457f2e28bac7831a1dd2a703.scope.
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:54:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:54:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:54:25 compute-0 podman[254536]: 2025-12-06 06:54:25.53170929 +0000 UTC m=+0.113571610 container init 0b9aab46aa2d8007cfe6d9d893b08eb3fdeeaac2457f2e28bac7831a1dd2a703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gould, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:54:25 compute-0 podman[254536]: 2025-12-06 06:54:25.537712316 +0000 UTC m=+0.119574606 container start 0b9aab46aa2d8007cfe6d9d893b08eb3fdeeaac2457f2e28bac7831a1dd2a703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:54:25 compute-0 podman[254536]: 2025-12-06 06:54:25.444024592 +0000 UTC m=+0.025886912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:54:25 compute-0 podman[254536]: 2025-12-06 06:54:25.540828199 +0000 UTC m=+0.122690549 container attach 0b9aab46aa2d8007cfe6d9d893b08eb3fdeeaac2457f2e28bac7831a1dd2a703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 06:54:25 compute-0 clever_gould[254552]: 167 167
Dec 06 06:54:25 compute-0 systemd[1]: libpod-0b9aab46aa2d8007cfe6d9d893b08eb3fdeeaac2457f2e28bac7831a1dd2a703.scope: Deactivated successfully.
Dec 06 06:54:25 compute-0 podman[254536]: 2025-12-06 06:54:25.543328331 +0000 UTC m=+0.125190651 container died 0b9aab46aa2d8007cfe6d9d893b08eb3fdeeaac2457f2e28bac7831a1dd2a703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gould, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 06:54:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1174c430a9591387d3521e4050eaef0385f2e201ab217b9228ba426635903ff-merged.mount: Deactivated successfully.
Dec 06 06:54:25 compute-0 podman[254536]: 2025-12-06 06:54:25.580405432 +0000 UTC m=+0.162267732 container remove 0b9aab46aa2d8007cfe6d9d893b08eb3fdeeaac2457f2e28bac7831a1dd2a703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 06:54:25 compute-0 systemd[1]: libpod-conmon-0b9aab46aa2d8007cfe6d9d893b08eb3fdeeaac2457f2e28bac7831a1dd2a703.scope: Deactivated successfully.
Dec 06 06:54:25 compute-0 podman[254575]: 2025-12-06 06:54:25.728245179 +0000 UTC m=+0.035419973 container create cb93b60a392020b87289ca7c142fdfc82db65faad094a85af7a0c225547d87fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 06:54:25 compute-0 systemd[1]: Started libpod-conmon-cb93b60a392020b87289ca7c142fdfc82db65faad094a85af7a0c225547d87fa.scope.
Dec 06 06:54:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16142682225b82019e8bfc007afdd3c9fc27fc266b3d242460c638d2f0c76259/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16142682225b82019e8bfc007afdd3c9fc27fc266b3d242460c638d2f0c76259/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16142682225b82019e8bfc007afdd3c9fc27fc266b3d242460c638d2f0c76259/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16142682225b82019e8bfc007afdd3c9fc27fc266b3d242460c638d2f0c76259/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:54:25 compute-0 podman[254575]: 2025-12-06 06:54:25.714068531 +0000 UTC m=+0.021243345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:54:25 compute-0 podman[254575]: 2025-12-06 06:54:25.826860788 +0000 UTC m=+0.134035602 container init cb93b60a392020b87289ca7c142fdfc82db65faad094a85af7a0c225547d87fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 06:54:25 compute-0 podman[254575]: 2025-12-06 06:54:25.834479161 +0000 UTC m=+0.141653955 container start cb93b60a392020b87289ca7c142fdfc82db65faad094a85af7a0c225547d87fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:54:25 compute-0 podman[254575]: 2025-12-06 06:54:25.837417078 +0000 UTC m=+0.144591892 container attach cb93b60a392020b87289ca7c142fdfc82db65faad094a85af7a0c225547d87fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:54:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:26.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:26.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:26 compute-0 objective_germain[254591]: {
Dec 06 06:54:26 compute-0 objective_germain[254591]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:54:26 compute-0 objective_germain[254591]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:54:26 compute-0 objective_germain[254591]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:54:26 compute-0 objective_germain[254591]:         "osd_id": 0,
Dec 06 06:54:26 compute-0 objective_germain[254591]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:54:26 compute-0 objective_germain[254591]:         "type": "bluestore"
Dec 06 06:54:26 compute-0 objective_germain[254591]:     }
Dec 06 06:54:26 compute-0 objective_germain[254591]: }
Dec 06 06:54:26 compute-0 systemd[1]: libpod-cb93b60a392020b87289ca7c142fdfc82db65faad094a85af7a0c225547d87fa.scope: Deactivated successfully.
Dec 06 06:54:26 compute-0 podman[254575]: 2025-12-06 06:54:26.675689984 +0000 UTC m=+0.982864778 container died cb93b60a392020b87289ca7c142fdfc82db65faad094a85af7a0c225547d87fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 06:54:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-16142682225b82019e8bfc007afdd3c9fc27fc266b3d242460c638d2f0c76259-merged.mount: Deactivated successfully.
Dec 06 06:54:26 compute-0 podman[254575]: 2025-12-06 06:54:26.745481976 +0000 UTC m=+1.052656770 container remove cb93b60a392020b87289ca7c142fdfc82db65faad094a85af7a0c225547d87fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 06:54:26 compute-0 systemd[1]: libpod-conmon-cb93b60a392020b87289ca7c142fdfc82db65faad094a85af7a0c225547d87fa.scope: Deactivated successfully.
Dec 06 06:54:26 compute-0 sudo[254471]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:54:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:54:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:54:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:54:26 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 076a0d84-e3b7-4c80-ba71-77ec150a4613 does not exist
Dec 06 06:54:26 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev fff26e32-48e7-44af-8b02-71608fea9512 does not exist
Dec 06 06:54:26 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c4ee52ff-c70e-421d-83d0-c8fc87ec1b99 does not exist
Dec 06 06:54:26 compute-0 sudo[254624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:26 compute-0 sudo[254624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:26 compute-0 sudo[254624]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:26 compute-0 sudo[254649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:54:26 compute-0 sudo[254649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:26 compute-0 sudo[254649]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:27 compute-0 ceph-mon[74339]: pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:54:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:54:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:28.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000029s ======
Dec 06 06:54:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:28.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Dec 06 06:54:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:28 compute-0 ceph-mon[74339]: pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:30.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:30.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:30 compute-0 ceph-mon[74339]: pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:32 compute-0 sudo[254676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:32 compute-0 sudo[254676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:32 compute-0 sudo[254676]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:32 compute-0 sudo[254701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:32 compute-0 sudo[254701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:32 compute-0 sudo[254701]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:54:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:32.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:54:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:32.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:32 compute-0 ceph-mon[74339]: pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:54:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:34.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:54:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:54:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:34.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:54:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:34 compute-0 ceph-mon[74339]: pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.162 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.181 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.182 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.182 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.195 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.195 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.195 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.195 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.196 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.196 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.701 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.702 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.702 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.702 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:54:35 compute-0 nova_compute[251992]: 2025-12-06 06:54:35.703 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:54:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:54:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026439830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:36 compute-0 nova_compute[251992]: 2025-12-06 06:54:36.218 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:54:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3026439830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:36 compute-0 nova_compute[251992]: 2025-12-06 06:54:36.429 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:54:36 compute-0 nova_compute[251992]: 2025-12-06 06:54:36.430 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5172MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:54:36 compute-0 nova_compute[251992]: 2025-12-06 06:54:36.431 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:54:36 compute-0 nova_compute[251992]: 2025-12-06 06:54:36.431 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:54:36 compute-0 podman[254750]: 2025-12-06 06:54:36.442822211 +0000 UTC m=+0.098606658 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 06:54:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:36.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:36.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:36 compute-0 nova_compute[251992]: 2025-12-06 06:54:36.727 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:54:36 compute-0 nova_compute[251992]: 2025-12-06 06:54:36.727 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:54:36 compute-0 nova_compute[251992]: 2025-12-06 06:54:36.748 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:54:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:54:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556591730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:37 compute-0 nova_compute[251992]: 2025-12-06 06:54:37.185 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:54:37 compute-0 nova_compute[251992]: 2025-12-06 06:54:37.192 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:54:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/353004254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:37 compute-0 ceph-mon[74339]: pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2556591730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1567979913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:37 compute-0 nova_compute[251992]: 2025-12-06 06:54:37.340 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:54:37 compute-0 nova_compute[251992]: 2025-12-06 06:54:37.342 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:54:37 compute-0 nova_compute[251992]: 2025-12-06 06:54:37.343 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:54:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2042275790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:38.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:38.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1746649706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:54:39 compute-0 ceph-mon[74339]: pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:40.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:40.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:40 compute-0 ceph-mon[74339]: pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:54:42.098 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:54:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:54:42.100 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 06:54:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:54:42.101 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:54:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:42.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:42.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:42 compute-0 ceph-mon[74339]: pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:54:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:54:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:54:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:54:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:54:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:54:43 compute-0 podman[254802]: 2025-12-06 06:54:43.381637905 +0000 UTC m=+0.045929895 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 06:54:43 compute-0 podman[254803]: 2025-12-06 06:54:43.427912649 +0000 UTC m=+0.085493120 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 06 06:54:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:44.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:44.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:45 compute-0 ceph-mon[74339]: pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:46.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:54:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:46.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:54:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:46 compute-0 ceph-mon[74339]: pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:48.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:48.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:49 compute-0 ceph-mon[74339]: pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:50.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:50.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:50 compute-0 ceph-mon[74339]: pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:52 compute-0 sudo[254845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:52 compute-0 sudo[254845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:52 compute-0 sudo[254845]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:52 compute-0 sudo[254870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:54:52 compute-0 sudo[254870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:54:52 compute-0 sudo[254870]: pam_unix(sudo:session): session closed for user root
Dec 06 06:54:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:52.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:52.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:52 compute-0 ceph-mon[74339]: pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:54:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:54.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:54:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:54.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:54 compute-0 ceph-mon[74339]: pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:54:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:54:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:56.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:54:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:54:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:56.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:54:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:57 compute-0 ceph-mon[74339]: pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:54:58.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:54:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:54:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:54:58.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:54:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:54:58 compute-0 ceph-mon[74339]: pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:55:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:00.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:55:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:00.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:01 compute-0 ceph-mon[74339]: pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:02.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:55:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:02.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:55:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:03 compute-0 ceph-mon[74339]: pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:55:03.804 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:55:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:55:03.805 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:55:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:55:03.805 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:55:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:04.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:04.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:05 compute-0 ceph-mon[74339]: pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:06.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:06.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:07 compute-0 podman[254903]: 2025-12-06 06:55:07.42013228 +0000 UTC m=+0.075318123 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 06 06:55:07 compute-0 ceph-mon[74339]: pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:08.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:55:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:08.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:55:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 06:55:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344939197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:55:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 06:55:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344939197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:55:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1344939197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:55:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1344939197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:55:10 compute-0 ceph-mon[74339]: pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:10.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:10.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:11 compute-0 ceph-mon[74339]: pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:12 compute-0 sudo[254932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:12 compute-0 sudo[254932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:12 compute-0 sudo[254932]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:12 compute-0 sudo[254957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:12 compute-0 sudo[254957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:12 compute-0 sudo[254957]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:12.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:12.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:55:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:55:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:55:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:55:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:55:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:55:13 compute-0 ceph-mon[74339]: pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:14 compute-0 podman[254983]: 2025-12-06 06:55:14.401483301 +0000 UTC m=+0.063160610 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 06:55:14 compute-0 podman[254984]: 2025-12-06 06:55:14.430046805 +0000 UTC m=+0.074271512 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 06:55:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:14.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:14.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:15 compute-0 ceph-mon[74339]: pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:55:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:16.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:55:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:55:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:16.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:55:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:17 compute-0 ceph-mon[74339]: pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:55:18
Dec 06 06:55:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:55:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:55:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', '.mgr', 'backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.rgw.root']
Dec 06 06:55:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:55:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:18.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:18.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:19 compute-0 ceph-mon[74339]: pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:20.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:20.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:21 compute-0 ceph-mon[74339]: pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:55:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4494 writes, 19K keys, 4489 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4494 writes, 4489 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1283 writes, 5622 keys, 1280 commit groups, 1.0 writes per commit group, ingest: 9.02 MB, 0.02 MB/s
                                           Interval WAL: 1283 writes, 1280 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     71.6      0.31              0.08         9    0.035       0      0       0.0       0.0
                                             L6      1/0    8.55 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    121.2    101.8      0.72              0.28         8    0.089     38K   4361       0.0       0.0
                                            Sum      1/0    8.55 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     84.3     92.6      1.03              0.36        17    0.060     38K   4361       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   6.2     89.4     87.5      0.52              0.20         8    0.065     20K   2544       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    121.2    101.8      0.72              0.28         8    0.089     38K   4361       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     72.3      0.31              0.08         8    0.039       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.022, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 1.0 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 4.99 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 7.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(269,4.66 MB,1.53397%) FilterBlock(18,114.42 KB,0.0367566%) IndexBlock(18,224.47 KB,0.0721078%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 06:55:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:22.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:22.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:55:23 compute-0 ceph-mon[74339]: pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:24.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:24.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:55:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:55:25 compute-0 ceph-mon[74339]: pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:25 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec 06 06:55:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:25.995820) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:55:25 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec 06 06:55:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004125995874, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2096, "num_deletes": 251, "total_data_size": 3935340, "memory_usage": 3984688, "flush_reason": "Manual Compaction"}
Dec 06 06:55:25 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126014212, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 3870542, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18169, "largest_seqno": 20264, "table_properties": {"data_size": 3861180, "index_size": 5920, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18670, "raw_average_key_size": 19, "raw_value_size": 3842507, "raw_average_value_size": 4096, "num_data_blocks": 266, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765003890, "oldest_key_time": 1765003890, "file_creation_time": 1765004125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 18423 microseconds, and 8460 cpu microseconds.
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.014246) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 3870542 bytes OK
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.014263) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.015644) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.015661) EVENT_LOG_v1 {"time_micros": 1765004126015656, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.015678) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3926897, prev total WAL file size 3926897, number of live WAL files 2.
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.016690) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(3779KB)], [41(8752KB)]
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126016729, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 12833008, "oldest_snapshot_seqno": -1}
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4972 keys, 10725499 bytes, temperature: kUnknown
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126089250, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 10725499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10689203, "index_size": 22746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 125063, "raw_average_key_size": 25, "raw_value_size": 10596116, "raw_average_value_size": 2131, "num_data_blocks": 941, "num_entries": 4972, "num_filter_entries": 4972, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765004126, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.089567) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 10725499 bytes
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.090580) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.7 rd, 147.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.5 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5487, records dropped: 515 output_compression: NoCompression
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.090595) EVENT_LOG_v1 {"time_micros": 1765004126090588, "job": 20, "event": "compaction_finished", "compaction_time_micros": 72606, "compaction_time_cpu_micros": 24184, "output_level": 6, "num_output_files": 1, "total_output_size": 10725499, "num_input_records": 5487, "num_output_records": 4972, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126091368, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004126092970, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.016612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.093092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.093139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.093142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.093143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:55:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:55:26.093145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:55:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:26.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:26.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:27 compute-0 sudo[255029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:27 compute-0 sudo[255029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:27 compute-0 sudo[255029]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:27 compute-0 sudo[255054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:55:27 compute-0 sudo[255054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:27 compute-0 sudo[255054]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:27 compute-0 sudo[255079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:27 compute-0 sudo[255079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:27 compute-0 sudo[255079]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:27 compute-0 sudo[255104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:55:27 compute-0 sudo[255104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:27 compute-0 sudo[255104]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:55:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:55:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:55:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:55:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:55:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:55:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c11f7427-9a2a-4e08-8db3-dbfc0c21db10 does not exist
Dec 06 06:55:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5e4760b3-b8e7-42ef-868e-67dabe903480 does not exist
Dec 06 06:55:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d940967c-ad0f-45d6-9d25-0565b230d4c0 does not exist
Dec 06 06:55:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:55:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:55:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:55:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:55:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:55:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:55:28 compute-0 sudo[255161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:28 compute-0 sudo[255161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:28 compute-0 sudo[255161]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:28 compute-0 sudo[255186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:55:28 compute-0 sudo[255186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:28 compute-0 sudo[255186]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:28 compute-0 sudo[255211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:28 compute-0 sudo[255211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:28 compute-0 sudo[255211]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:28 compute-0 sudo[255236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:55:28 compute-0 sudo[255236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:28 compute-0 ceph-mon[74339]: pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:55:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:55:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:55:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:55:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:55:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:55:28 compute-0 podman[255301]: 2025-12-06 06:55:28.482149301 +0000 UTC m=+0.049920347 container create 696be819b612d1a21bb99e1669882b3eb8ee081a4e72fe4a1c4a2fd5d54f7d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:55:28 compute-0 systemd[1]: Started libpod-conmon-696be819b612d1a21bb99e1669882b3eb8ee081a4e72fe4a1c4a2fd5d54f7d59.scope.
Dec 06 06:55:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:55:28 compute-0 podman[255301]: 2025-12-06 06:55:28.459611475 +0000 UTC m=+0.027382541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:55:28 compute-0 podman[255301]: 2025-12-06 06:55:28.561261619 +0000 UTC m=+0.129032695 container init 696be819b612d1a21bb99e1669882b3eb8ee081a4e72fe4a1c4a2fd5d54f7d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:55:28 compute-0 podman[255301]: 2025-12-06 06:55:28.568902444 +0000 UTC m=+0.136673500 container start 696be819b612d1a21bb99e1669882b3eb8ee081a4e72fe4a1c4a2fd5d54f7d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 06:55:28 compute-0 podman[255301]: 2025-12-06 06:55:28.573279578 +0000 UTC m=+0.141050624 container attach 696be819b612d1a21bb99e1669882b3eb8ee081a4e72fe4a1c4a2fd5d54f7d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:55:28 compute-0 affectionate_buck[255318]: 167 167
Dec 06 06:55:28 compute-0 systemd[1]: libpod-696be819b612d1a21bb99e1669882b3eb8ee081a4e72fe4a1c4a2fd5d54f7d59.scope: Deactivated successfully.
Dec 06 06:55:28 compute-0 podman[255301]: 2025-12-06 06:55:28.576321203 +0000 UTC m=+0.144092259 container died 696be819b612d1a21bb99e1669882b3eb8ee081a4e72fe4a1c4a2fd5d54f7d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:55:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-98304d2e37c061ba33bc5753ecc886aa8d1287139ed3292383341ddb022af002-merged.mount: Deactivated successfully.
Dec 06 06:55:28 compute-0 podman[255301]: 2025-12-06 06:55:28.610596229 +0000 UTC m=+0.178367275 container remove 696be819b612d1a21bb99e1669882b3eb8ee081a4e72fe4a1c4a2fd5d54f7d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:55:28 compute-0 systemd[1]: libpod-conmon-696be819b612d1a21bb99e1669882b3eb8ee081a4e72fe4a1c4a2fd5d54f7d59.scope: Deactivated successfully.
Dec 06 06:55:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:28.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:55:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:28.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:55:28 compute-0 podman[255342]: 2025-12-06 06:55:28.759905355 +0000 UTC m=+0.035352577 container create 266d05dc80b4087426e45ec506ef4d4fb787275c81c08c4073ae1c2d0356e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 06:55:28 compute-0 systemd[1]: Started libpod-conmon-266d05dc80b4087426e45ec506ef4d4fb787275c81c08c4073ae1c2d0356e036.scope.
Dec 06 06:55:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/180e0dc957a56fca79035ceefc1477d114b5569d8f7e2ac2061ecaec7edbb16e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/180e0dc957a56fca79035ceefc1477d114b5569d8f7e2ac2061ecaec7edbb16e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/180e0dc957a56fca79035ceefc1477d114b5569d8f7e2ac2061ecaec7edbb16e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/180e0dc957a56fca79035ceefc1477d114b5569d8f7e2ac2061ecaec7edbb16e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/180e0dc957a56fca79035ceefc1477d114b5569d8f7e2ac2061ecaec7edbb16e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:28 compute-0 podman[255342]: 2025-12-06 06:55:28.830777061 +0000 UTC m=+0.106224303 container init 266d05dc80b4087426e45ec506ef4d4fb787275c81c08c4073ae1c2d0356e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:55:28 compute-0 podman[255342]: 2025-12-06 06:55:28.837005226 +0000 UTC m=+0.112452448 container start 266d05dc80b4087426e45ec506ef4d4fb787275c81c08c4073ae1c2d0356e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 06:55:28 compute-0 podman[255342]: 2025-12-06 06:55:28.840182196 +0000 UTC m=+0.115629458 container attach 266d05dc80b4087426e45ec506ef4d4fb787275c81c08c4073ae1c2d0356e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:55:28 compute-0 podman[255342]: 2025-12-06 06:55:28.744942043 +0000 UTC m=+0.020389285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:55:29 compute-0 priceless_turing[255358]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:55:29 compute-0 priceless_turing[255358]: --> relative data size: 1.0
Dec 06 06:55:29 compute-0 priceless_turing[255358]: --> All data devices are unavailable
Dec 06 06:55:29 compute-0 systemd[1]: libpod-266d05dc80b4087426e45ec506ef4d4fb787275c81c08c4073ae1c2d0356e036.scope: Deactivated successfully.
Dec 06 06:55:29 compute-0 podman[255342]: 2025-12-06 06:55:29.636728564 +0000 UTC m=+0.912175796 container died 266d05dc80b4087426e45ec506ef4d4fb787275c81c08c4073ae1c2d0356e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Dec 06 06:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-180e0dc957a56fca79035ceefc1477d114b5569d8f7e2ac2061ecaec7edbb16e-merged.mount: Deactivated successfully.
Dec 06 06:55:29 compute-0 podman[255342]: 2025-12-06 06:55:29.693966695 +0000 UTC m=+0.969413917 container remove 266d05dc80b4087426e45ec506ef4d4fb787275c81c08c4073ae1c2d0356e036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 06 06:55:29 compute-0 systemd[1]: libpod-conmon-266d05dc80b4087426e45ec506ef4d4fb787275c81c08c4073ae1c2d0356e036.scope: Deactivated successfully.
Dec 06 06:55:29 compute-0 sudo[255236]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:29 compute-0 sudo[255385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:29 compute-0 sudo[255385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:29 compute-0 sudo[255385]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:29 compute-0 sudo[255410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:55:29 compute-0 sudo[255410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:29 compute-0 sudo[255410]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:29 compute-0 sudo[255435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:29 compute-0 sudo[255435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:29 compute-0 sudo[255435]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:29 compute-0 sudo[255460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:55:29 compute-0 sudo[255460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:30 compute-0 podman[255525]: 2025-12-06 06:55:30.242855697 +0000 UTC m=+0.046252014 container create 7032b43e324e90be2a81a0ea753f4c59b29e39af52e8ffea3104c6565f82991d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shirley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 06:55:30 compute-0 systemd[1]: Started libpod-conmon-7032b43e324e90be2a81a0ea753f4c59b29e39af52e8ffea3104c6565f82991d.scope.
Dec 06 06:55:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:55:30 compute-0 podman[255525]: 2025-12-06 06:55:30.223209193 +0000 UTC m=+0.026605500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:55:30 compute-0 podman[255525]: 2025-12-06 06:55:30.334342404 +0000 UTC m=+0.137738711 container init 7032b43e324e90be2a81a0ea753f4c59b29e39af52e8ffea3104c6565f82991d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shirley, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:55:30 compute-0 podman[255525]: 2025-12-06 06:55:30.342646128 +0000 UTC m=+0.146042405 container start 7032b43e324e90be2a81a0ea753f4c59b29e39af52e8ffea3104c6565f82991d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shirley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:55:30 compute-0 recursing_shirley[255541]: 167 167
Dec 06 06:55:30 compute-0 systemd[1]: libpod-7032b43e324e90be2a81a0ea753f4c59b29e39af52e8ffea3104c6565f82991d.scope: Deactivated successfully.
Dec 06 06:55:30 compute-0 conmon[255541]: conmon 7032b43e324e90be2a81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7032b43e324e90be2a81a0ea753f4c59b29e39af52e8ffea3104c6565f82991d.scope/container/memory.events
Dec 06 06:55:30 compute-0 podman[255525]: 2025-12-06 06:55:30.357242569 +0000 UTC m=+0.160638866 container attach 7032b43e324e90be2a81a0ea753f4c59b29e39af52e8ffea3104c6565f82991d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shirley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Dec 06 06:55:30 compute-0 podman[255525]: 2025-12-06 06:55:30.35762869 +0000 UTC m=+0.161024967 container died 7032b43e324e90be2a81a0ea753f4c59b29e39af52e8ffea3104c6565f82991d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shirley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:55:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2df32ced562efe9653bf197c08b1b3fbcfb8c6723610abb2cc1efa8f5807fb3-merged.mount: Deactivated successfully.
Dec 06 06:55:30 compute-0 podman[255525]: 2025-12-06 06:55:30.394460967 +0000 UTC m=+0.197857244 container remove 7032b43e324e90be2a81a0ea753f4c59b29e39af52e8ffea3104c6565f82991d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_shirley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 06:55:30 compute-0 systemd[1]: libpod-conmon-7032b43e324e90be2a81a0ea753f4c59b29e39af52e8ffea3104c6565f82991d.scope: Deactivated successfully.
Dec 06 06:55:30 compute-0 podman[255566]: 2025-12-06 06:55:30.558872229 +0000 UTC m=+0.042778976 container create ace86b808f87eb4238d28f8fdc1bdb92a0a3ee0577655e48bbda8d62184d4de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_aryabhata, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:55:30 compute-0 systemd[1]: Started libpod-conmon-ace86b808f87eb4238d28f8fdc1bdb92a0a3ee0577655e48bbda8d62184d4de9.scope.
Dec 06 06:55:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09c59c096370c0dfa9f34eb7401df2f82f9685c0601013b3bf70ca834724fff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09c59c096370c0dfa9f34eb7401df2f82f9685c0601013b3bf70ca834724fff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09c59c096370c0dfa9f34eb7401df2f82f9685c0601013b3bf70ca834724fff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09c59c096370c0dfa9f34eb7401df2f82f9685c0601013b3bf70ca834724fff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:30 compute-0 podman[255566]: 2025-12-06 06:55:30.542265261 +0000 UTC m=+0.026172028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:55:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:55:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:30.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:55:30 compute-0 podman[255566]: 2025-12-06 06:55:30.642063822 +0000 UTC m=+0.125970589 container init ace86b808f87eb4238d28f8fdc1bdb92a0a3ee0577655e48bbda8d62184d4de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 06:55:30 compute-0 podman[255566]: 2025-12-06 06:55:30.647932167 +0000 UTC m=+0.131838924 container start ace86b808f87eb4238d28f8fdc1bdb92a0a3ee0577655e48bbda8d62184d4de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:55:30 compute-0 podman[255566]: 2025-12-06 06:55:30.651374324 +0000 UTC m=+0.135281071 container attach ace86b808f87eb4238d28f8fdc1bdb92a0a3ee0577655e48bbda8d62184d4de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 06:55:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:30.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:30 compute-0 ceph-mon[74339]: pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]: {
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:     "0": [
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:         {
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             "devices": [
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "/dev/loop3"
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             ],
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             "lv_name": "ceph_lv0",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             "lv_size": "7511998464",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             "name": "ceph_lv0",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             "tags": {
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "ceph.cluster_name": "ceph",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "ceph.crush_device_class": "",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "ceph.encrypted": "0",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "ceph.osd_id": "0",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "ceph.type": "block",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:                 "ceph.vdo": "0"
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             },
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             "type": "block",
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:             "vg_name": "ceph_vg0"
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:         }
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]:     ]
Dec 06 06:55:31 compute-0 laughing_aryabhata[255582]: }
Dec 06 06:55:31 compute-0 systemd[1]: libpod-ace86b808f87eb4238d28f8fdc1bdb92a0a3ee0577655e48bbda8d62184d4de9.scope: Deactivated successfully.
Dec 06 06:55:31 compute-0 podman[255566]: 2025-12-06 06:55:31.425843689 +0000 UTC m=+0.909750456 container died ace86b808f87eb4238d28f8fdc1bdb92a0a3ee0577655e48bbda8d62184d4de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_aryabhata, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b09c59c096370c0dfa9f34eb7401df2f82f9685c0601013b3bf70ca834724fff-merged.mount: Deactivated successfully.
Dec 06 06:55:31 compute-0 podman[255566]: 2025-12-06 06:55:31.479369567 +0000 UTC m=+0.963276314 container remove ace86b808f87eb4238d28f8fdc1bdb92a0a3ee0577655e48bbda8d62184d4de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:55:31 compute-0 systemd[1]: libpod-conmon-ace86b808f87eb4238d28f8fdc1bdb92a0a3ee0577655e48bbda8d62184d4de9.scope: Deactivated successfully.
Dec 06 06:55:31 compute-0 sudo[255460]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:31 compute-0 sudo[255604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:31 compute-0 sudo[255604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:31 compute-0 sudo[255604]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:31 compute-0 sudo[255629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:55:31 compute-0 sudo[255629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:31 compute-0 sudo[255629]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:31 compute-0 sudo[255654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:31 compute-0 sudo[255654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:31 compute-0 sudo[255654]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:31 compute-0 sudo[255679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:55:31 compute-0 sudo[255679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:32 compute-0 podman[255745]: 2025-12-06 06:55:32.088059373 +0000 UTC m=+0.045804802 container create 4977f6665b57573ba5311ec95a4c18a2c878f804d1d8c12a92106bf7469d4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:55:32 compute-0 systemd[1]: Started libpod-conmon-4977f6665b57573ba5311ec95a4c18a2c878f804d1d8c12a92106bf7469d4f06.scope.
Dec 06 06:55:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:55:32 compute-0 podman[255745]: 2025-12-06 06:55:32.068219604 +0000 UTC m=+0.025965083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:55:32 compute-0 podman[255745]: 2025-12-06 06:55:32.178654445 +0000 UTC m=+0.136399904 container init 4977f6665b57573ba5311ec95a4c18a2c878f804d1d8c12a92106bf7469d4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:55:32 compute-0 podman[255745]: 2025-12-06 06:55:32.187154584 +0000 UTC m=+0.144900033 container start 4977f6665b57573ba5311ec95a4c18a2c878f804d1d8c12a92106bf7469d4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_agnesi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:55:32 compute-0 podman[255745]: 2025-12-06 06:55:32.19092074 +0000 UTC m=+0.148666179 container attach 4977f6665b57573ba5311ec95a4c18a2c878f804d1d8c12a92106bf7469d4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 06:55:32 compute-0 nostalgic_agnesi[255762]: 167 167
Dec 06 06:55:32 compute-0 systemd[1]: libpod-4977f6665b57573ba5311ec95a4c18a2c878f804d1d8c12a92106bf7469d4f06.scope: Deactivated successfully.
Dec 06 06:55:32 compute-0 conmon[255762]: conmon 4977f6665b57573ba531 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4977f6665b57573ba5311ec95a4c18a2c878f804d1d8c12a92106bf7469d4f06.scope/container/memory.events
Dec 06 06:55:32 compute-0 podman[255745]: 2025-12-06 06:55:32.19625308 +0000 UTC m=+0.153998539 container died 4977f6665b57573ba5311ec95a4c18a2c878f804d1d8c12a92106bf7469d4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_agnesi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:55:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-74d035fca95a719bfcbfd967f4e169a4cbb2885dd52472f44f741dec9686942d-merged.mount: Deactivated successfully.
Dec 06 06:55:32 compute-0 podman[255745]: 2025-12-06 06:55:32.234720674 +0000 UTC m=+0.192466113 container remove 4977f6665b57573ba5311ec95a4c18a2c878f804d1d8c12a92106bf7469d4f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Dec 06 06:55:32 compute-0 systemd[1]: libpod-conmon-4977f6665b57573ba5311ec95a4c18a2c878f804d1d8c12a92106bf7469d4f06.scope: Deactivated successfully.
Dec 06 06:55:32 compute-0 ceph-mon[74339]: pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:32 compute-0 podman[255784]: 2025-12-06 06:55:32.386260292 +0000 UTC m=+0.040672036 container create b93c5c6c54393ee421fe6dbc811f028a75b35af27a321e1706d38c60b7b1592a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 06:55:32 compute-0 systemd[1]: Started libpod-conmon-b93c5c6c54393ee421fe6dbc811f028a75b35af27a321e1706d38c60b7b1592a.scope.
Dec 06 06:55:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:55:32 compute-0 podman[255784]: 2025-12-06 06:55:32.37022669 +0000 UTC m=+0.024638454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25bf4c49d30f98efdef3d26196c1e58eb125a66515a333786cd07ed86292d5be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25bf4c49d30f98efdef3d26196c1e58eb125a66515a333786cd07ed86292d5be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25bf4c49d30f98efdef3d26196c1e58eb125a66515a333786cd07ed86292d5be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25bf4c49d30f98efdef3d26196c1e58eb125a66515a333786cd07ed86292d5be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:55:32 compute-0 sudo[255801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:32 compute-0 sudo[255801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:32 compute-0 podman[255784]: 2025-12-06 06:55:32.480884758 +0000 UTC m=+0.135296512 container init b93c5c6c54393ee421fe6dbc811f028a75b35af27a321e1706d38c60b7b1592a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 06:55:32 compute-0 sudo[255801]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:32 compute-0 podman[255784]: 2025-12-06 06:55:32.486822925 +0000 UTC m=+0.141234649 container start b93c5c6c54393ee421fe6dbc811f028a75b35af27a321e1706d38c60b7b1592a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 06:55:32 compute-0 podman[255784]: 2025-12-06 06:55:32.490018455 +0000 UTC m=+0.144430189 container attach b93c5c6c54393ee421fe6dbc811f028a75b35af27a321e1706d38c60b7b1592a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 06:55:32 compute-0 sudo[255831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:32 compute-0 sudo[255831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:32 compute-0 sudo[255831]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:55:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:32.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:55:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:55:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:32.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:55:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:33 compute-0 lucid_galois[255807]: {
Dec 06 06:55:33 compute-0 lucid_galois[255807]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:55:33 compute-0 lucid_galois[255807]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:55:33 compute-0 lucid_galois[255807]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:55:33 compute-0 lucid_galois[255807]:         "osd_id": 0,
Dec 06 06:55:33 compute-0 lucid_galois[255807]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:55:33 compute-0 lucid_galois[255807]:         "type": "bluestore"
Dec 06 06:55:33 compute-0 lucid_galois[255807]:     }
Dec 06 06:55:33 compute-0 lucid_galois[255807]: }
Dec 06 06:55:33 compute-0 systemd[1]: libpod-b93c5c6c54393ee421fe6dbc811f028a75b35af27a321e1706d38c60b7b1592a.scope: Deactivated successfully.
Dec 06 06:55:33 compute-0 podman[255784]: 2025-12-06 06:55:33.361153653 +0000 UTC m=+1.015565387 container died b93c5c6c54393ee421fe6dbc811f028a75b35af27a321e1706d38c60b7b1592a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:55:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-25bf4c49d30f98efdef3d26196c1e58eb125a66515a333786cd07ed86292d5be-merged.mount: Deactivated successfully.
Dec 06 06:55:33 compute-0 podman[255784]: 2025-12-06 06:55:33.443757841 +0000 UTC m=+1.098169605 container remove b93c5c6c54393ee421fe6dbc811f028a75b35af27a321e1706d38c60b7b1592a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:55:33 compute-0 systemd[1]: libpod-conmon-b93c5c6c54393ee421fe6dbc811f028a75b35af27a321e1706d38c60b7b1592a.scope: Deactivated successfully.
Dec 06 06:55:33 compute-0 sudo[255679]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:55:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:55:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:55:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:55:33 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2b44fc96-c58f-4637-8c24-dafa2badd991 does not exist
Dec 06 06:55:33 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev af3a8622-6a58-4709-9b11-bdeff7ccb9ca does not exist
Dec 06 06:55:33 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9112fe70-e030-443f-a4e4-6cb910828e46 does not exist
Dec 06 06:55:33 compute-0 sudo[255886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:33 compute-0 sudo[255886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:33 compute-0 sudo[255886]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:33 compute-0 sudo[255911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:55:33 compute-0 sudo[255911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:33 compute-0 sudo[255911]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:34 compute-0 ceph-mon[74339]: pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:55:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:55:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:34.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 06:55:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:34.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 06:55:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:36 compute-0 nova_compute[251992]: 2025-12-06 06:55:36.344 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:36 compute-0 nova_compute[251992]: 2025-12-06 06:55:36.345 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:36 compute-0 nova_compute[251992]: 2025-12-06 06:55:36.345 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:36 compute-0 nova_compute[251992]: 2025-12-06 06:55:36.345 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:36 compute-0 nova_compute[251992]: 2025-12-06 06:55:36.345 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:36 compute-0 nova_compute[251992]: 2025-12-06 06:55:36.345 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:55:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:36.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:36 compute-0 nova_compute[251992]: 2025-12-06 06:55:36.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:36 compute-0 nova_compute[251992]: 2025-12-06 06:55:36.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:55:36 compute-0 nova_compute[251992]: 2025-12-06 06:55:36.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:55:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:36.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:36 compute-0 nova_compute[251992]: 2025-12-06 06:55:36.683 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 06:55:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:36 compute-0 ceph-mon[74339]: pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/922551956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4111632330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:37 compute-0 rsyslogd[1005]: imjournal: 2640 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 06 06:55:37 compute-0 nova_compute[251992]: 2025-12-06 06:55:37.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:37 compute-0 nova_compute[251992]: 2025-12-06 06:55:37.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:37 compute-0 nova_compute[251992]: 2025-12-06 06:55:37.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:55:37 compute-0 nova_compute[251992]: 2025-12-06 06:55:37.692 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:55:37 compute-0 nova_compute[251992]: 2025-12-06 06:55:37.693 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:55:37 compute-0 nova_compute[251992]: 2025-12-06 06:55:37.693 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:55:37 compute-0 nova_compute[251992]: 2025-12-06 06:55:37.693 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:55:37 compute-0 nova_compute[251992]: 2025-12-06 06:55:37.693 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:55:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:55:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524605542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:38 compute-0 nova_compute[251992]: 2025-12-06 06:55:38.133 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:55:38 compute-0 ceph-mon[74339]: pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2524605542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:38 compute-0 nova_compute[251992]: 2025-12-06 06:55:38.291 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:55:38 compute-0 nova_compute[251992]: 2025-12-06 06:55:38.293 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5168MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:55:38 compute-0 nova_compute[251992]: 2025-12-06 06:55:38.293 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:55:38 compute-0 nova_compute[251992]: 2025-12-06 06:55:38.294 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:55:38 compute-0 podman[255960]: 2025-12-06 06:55:38.425056021 +0000 UTC m=+0.083788811 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 06:55:38 compute-0 nova_compute[251992]: 2025-12-06 06:55:38.588 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:55:38 compute-0 nova_compute[251992]: 2025-12-06 06:55:38.588 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:55:38 compute-0 nova_compute[251992]: 2025-12-06 06:55:38.611 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:55:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:38.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:55:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:38.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:55:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:55:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2983734056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:39 compute-0 nova_compute[251992]: 2025-12-06 06:55:39.033 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:55:39 compute-0 nova_compute[251992]: 2025-12-06 06:55:39.038 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:55:39 compute-0 nova_compute[251992]: 2025-12-06 06:55:39.052 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:55:39 compute-0 nova_compute[251992]: 2025-12-06 06:55:39.053 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:55:39 compute-0 nova_compute[251992]: 2025-12-06 06:55:39.054 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:55:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4068762730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2983734056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:40 compute-0 ceph-mon[74339]: pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/713881256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:55:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:55:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:40.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:55:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:40.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:42.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:42.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:42 compute-0 ceph-mon[74339]: pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:55:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:55:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:55:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:55:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:55:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:55:44 compute-0 ceph-mon[74339]: pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:44.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:44.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:45 compute-0 podman[256012]: 2025-12-06 06:55:45.394818029 +0000 UTC m=+0.048473045 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 06:55:45 compute-0 podman[256013]: 2025-12-06 06:55:45.397951456 +0000 UTC m=+0.050024053 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 06:55:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:46 compute-0 ceph-mon[74339]: pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:46.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:46.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:48 compute-0 ceph-mon[74339]: pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:48.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:48.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:50 compute-0 ceph-mon[74339]: pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:50.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:50.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:52 compute-0 ceph-mon[74339]: pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:52 compute-0 sudo[256053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:52 compute-0 sudo[256053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:52 compute-0 sudo[256053]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:52 compute-0 sudo[256078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:55:52 compute-0 sudo[256078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:55:52 compute-0 sudo[256078]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:52.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:52.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:54 compute-0 ceph-mon[74339]: pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f4d66f0 =====
Dec 06 06:55:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:54.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f4d66f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:54 compute-0 radosgw[91889]: beast: 0x7f463f4d66f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:54.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:55:56 compute-0 ceph-mon[74339]: pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:55:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:56.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:55:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:56.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:58 compute-0 ceph-mon[74339]: pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:55:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:55:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:55:58.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:55:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:55:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:55:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:55:58.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:55:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:00 compute-0 ceph-mon[74339]: pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:00.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:00.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:02 compute-0 ceph-mon[74339]: pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:02.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:56:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:02.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:56:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:56:03.805 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:56:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:56:03.806 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:56:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:56:03.806 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:56:04 compute-0 ceph-mon[74339]: pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:04.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:04.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:06 compute-0 ceph-mon[74339]: pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:06.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:06.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:08.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:08.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:08 compute-0 ceph-mon[74339]: pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:09 compute-0 podman[256111]: 2025-12-06 06:56:09.406145329 +0000 UTC m=+0.067454825 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 06:56:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/351251706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:56:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/351251706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:56:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:10.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:56:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:10.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:56:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.754722) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004171754819, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 611, "num_deletes": 250, "total_data_size": 780187, "memory_usage": 792384, "flush_reason": "Manual Compaction"}
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004171760683, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 528173, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20265, "largest_seqno": 20875, "table_properties": {"data_size": 525272, "index_size": 873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7601, "raw_average_key_size": 20, "raw_value_size": 519199, "raw_average_value_size": 1366, "num_data_blocks": 39, "num_entries": 380, "num_filter_entries": 380, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004126, "oldest_key_time": 1765004126, "file_creation_time": 1765004171, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 5980 microseconds, and 2396 cpu microseconds.
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.760719) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 528173 bytes OK
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.760735) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.761758) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.761775) EVENT_LOG_v1 {"time_micros": 1765004171761770, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.761792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 776908, prev total WAL file size 792883, number of live WAL files 2.
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.762383) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353038' seq:72057594037927935, type:22 .. '6D67727374617400373539' seq:0, type:0; will stop at (end)
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(515KB)], [44(10MB)]
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004171762467, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 11253672, "oldest_snapshot_seqno": -1}
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4856 keys, 7659959 bytes, temperature: kUnknown
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004171808952, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7659959, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7628521, "index_size": 18208, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 123064, "raw_average_key_size": 25, "raw_value_size": 7541494, "raw_average_value_size": 1553, "num_data_blocks": 743, "num_entries": 4856, "num_filter_entries": 4856, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765004171, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.809220) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7659959 bytes
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.810483) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.7 rd, 164.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.2 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(35.8) write-amplify(14.5) OK, records in: 5352, records dropped: 496 output_compression: NoCompression
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.810518) EVENT_LOG_v1 {"time_micros": 1765004171810504, "job": 22, "event": "compaction_finished", "compaction_time_micros": 46564, "compaction_time_cpu_micros": 17886, "output_level": 6, "num_output_files": 1, "total_output_size": 7659959, "num_input_records": 5352, "num_output_records": 4856, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004171810757, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004171812524, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.762262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.812630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.812636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.812638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.812639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:56:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:56:11.812641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:56:12 compute-0 ceph-mon[74339]: pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:12.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:56:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:12.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:56:12 compute-0 sudo[256141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:12 compute-0 sudo[256141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:12 compute-0 sudo[256141]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:12 compute-0 sudo[256166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:12 compute-0 sudo[256166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:12 compute-0 sudo[256166]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:56:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:56:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:56:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:56:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:56:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:56:13 compute-0 ceph-mon[74339]: pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:13 compute-0 ceph-mgr[74630]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 06:56:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:14.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:56:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:14.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:56:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:14 compute-0 ceph-mon[74339]: pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:16 compute-0 ceph-mon[74339]: pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:16 compute-0 podman[256192]: 2025-12-06 06:56:16.387213037 +0000 UTC m=+0.049201942 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec 06 06:56:16 compute-0 podman[256193]: 2025-12-06 06:56:16.393254337 +0000 UTC m=+0.054543554 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 06:56:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:56:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:16.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:56:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:56:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:16.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:56:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:56:18
Dec 06 06:56:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:56:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:56:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root']
Dec 06 06:56:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:56:18 compute-0 ceph-mon[74339]: pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:18.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:18.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=404 latency=0.002000049s ======
Dec 06 06:56:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:20.168 +0000] "GET /info HTTP/1.1" 404 150 - "python-urllib3/1.26.5" - latency=0.002000049s
Dec 06 06:56:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:56:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - - [06/Dec/2025:06:56:20.383 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000025s
Dec 06 06:56:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:20.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:20.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:20 compute-0 ceph-mon[74339]: pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:22 compute-0 ceph-mon[74339]: pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:22.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:56:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:22.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:56:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:56:24 compute-0 ceph-mon[74339]: pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:56:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:24.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:56:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:24.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:56:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:56:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:26 compute-0 ceph-mon[74339]: pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:26.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:26.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:56:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:28.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:56:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:28.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:28 compute-0 ceph-mon[74339]: pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:30 compute-0 ceph-mon[74339]: pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:56:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:30.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:56:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:56:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:30.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:56:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Dec 06 06:56:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Dec 06 06:56:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Dec 06 06:56:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:32.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:56:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:32.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:56:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:32 compute-0 sudo[256239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:32 compute-0 sudo[256239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:32 compute-0 sudo[256239]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:32 compute-0 sudo[256264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:32 compute-0 sudo[256264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:32 compute-0 sudo[256264]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:33 compute-0 ceph-mon[74339]: pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:33 compute-0 ceph-mon[74339]: osdmap e146: 3 total, 3 up, 3 in
Dec 06 06:56:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Dec 06 06:56:34 compute-0 sudo[256289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:34 compute-0 sudo[256289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:34 compute-0 sudo[256289]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:34 compute-0 sudo[256314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:56:34 compute-0 sudo[256314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:34 compute-0 sudo[256314]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:34 compute-0 sudo[256339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:34 compute-0 sudo[256339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:34 compute-0 sudo[256339]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Dec 06 06:56:34 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Dec 06 06:56:34 compute-0 sudo[256364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 06:56:34 compute-0 sudo[256364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:34 compute-0 ceph-mon[74339]: pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:34 compute-0 ceph-mon[74339]: osdmap e147: 3 total, 3 up, 3 in
Dec 06 06:56:34 compute-0 podman[256462]: 2025-12-06 06:56:34.711246867 +0000 UTC m=+0.054266748 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:56:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:34.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:56:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:34.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:56:34 compute-0 podman[256462]: 2025-12-06 06:56:34.800562184 +0000 UTC m=+0.143582105 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:56:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Dec 06 06:56:35 compute-0 podman[256619]: 2025-12-06 06:56:35.361905589 +0000 UTC m=+0.058774980 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:56:35 compute-0 podman[256619]: 2025-12-06 06:56:35.381390073 +0000 UTC m=+0.078259444 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 06:56:35 compute-0 podman[256686]: 2025-12-06 06:56:35.576579989 +0000 UTC m=+0.053150700 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, release=1793, vcs-type=git, version=2.2.4, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, architecture=x86_64)
Dec 06 06:56:35 compute-0 podman[256686]: 2025-12-06 06:56:35.589413308 +0000 UTC m=+0.065983999 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.openshift.expose-services=, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec 06 06:56:35 compute-0 sudo[256364]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:56:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:56:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:35 compute-0 sudo[256721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:35 compute-0 sudo[256721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:35 compute-0 sudo[256721]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:35 compute-0 sudo[256746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:56:35 compute-0 sudo[256746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:35 compute-0 sudo[256746]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:35 compute-0 sudo[256771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:35 compute-0 sudo[256771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:35 compute-0 sudo[256771]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:35 compute-0 sudo[256796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:56:35 compute-0 sudo[256796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:36 compute-0 nova_compute[251992]: 2025-12-06 06:56:36.047 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:36 compute-0 sudo[256796]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:36 compute-0 sudo[256852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:36 compute-0 sudo[256852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:36 compute-0 sudo[256852]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:36 compute-0 sudo[256878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:56:36 compute-0 sudo[256878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:36 compute-0 sudo[256878]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:36 compute-0 sudo[256903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:36 compute-0 sudo[256903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:36 compute-0 sudo[256903]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:36 compute-0 sudo[256928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 06 06:56:36 compute-0 sudo[256928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:56:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:36.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:56:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:56:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:36.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:56:36 compute-0 sudo[256928]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:56:36 compute-0 ceph-mon[74339]: pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 127 B/s wr, 0 op/s
Dec 06 06:56:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 161 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 1.1 KiB/s wr, 9 op/s
Dec 06 06:56:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:56:37 compute-0 nova_compute[251992]: 2025-12-06 06:56:37.807 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:37 compute-0 nova_compute[251992]: 2025-12-06 06:56:37.807 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:56:37 compute-0 nova_compute[251992]: 2025-12-06 06:56:37.807 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:56:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Dec 06 06:56:37 compute-0 nova_compute[251992]: 2025-12-06 06:56:37.829 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 06:56:37 compute-0 nova_compute[251992]: 2025-12-06 06:56:37.830 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:37 compute-0 nova_compute[251992]: 2025-12-06 06:56:37.830 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:37 compute-0 nova_compute[251992]: 2025-12-06 06:56:37.830 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:37 compute-0 nova_compute[251992]: 2025-12-06 06:56:37.830 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:37 compute-0 nova_compute[251992]: 2025-12-06 06:56:37.831 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:37 compute-0 nova_compute[251992]: 2025-12-06 06:56:37.831 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:37 compute-0 nova_compute[251992]: 2025-12-06 06:56:37.831 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:56:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:56:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:56:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:56:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:56:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:56:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Dec 06 06:56:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Dec 06 06:56:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1f460237-df86-4457-a5ea-3b04d00824ef does not exist
Dec 06 06:56:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev de5c762d-2a88-436e-a3da-096d3f552efe does not exist
Dec 06 06:56:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 636e9fb8-1765-472c-958e-352df5a8898e does not exist
Dec 06 06:56:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:56:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:56:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:56:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:56:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:56:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:56:38 compute-0 sudo[256974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:38 compute-0 sudo[256974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:38 compute-0 sudo[256974]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:38 compute-0 sudo[256999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:56:38 compute-0 sudo[256999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:38 compute-0 sudo[256999]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:38 compute-0 sudo[257024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:38 compute-0 sudo[257024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:38 compute-0 sudo[257024]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:38 compute-0 ceph-mon[74339]: pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 161 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 1.1 KiB/s wr, 9 op/s
Dec 06 06:56:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:56:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:56:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/839735319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:38 compute-0 ceph-mon[74339]: osdmap e148: 3 total, 3 up, 3 in
Dec 06 06:56:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:56:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:56:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:56:38 compute-0 sudo[257049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:56:38 compute-0 sudo[257049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 06:56:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:38.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 06:56:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:38.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 456 KiB data, 161 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 1.6 KiB/s wr, 12 op/s
Dec 06 06:56:39 compute-0 podman[257116]: 2025-12-06 06:56:38.968167847 +0000 UTC m=+0.027136495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:56:39 compute-0 podman[257116]: 2025-12-06 06:56:39.206035232 +0000 UTC m=+0.265003830 container create 1936ed8d32eab85db2bdaa29af6fe8f03581b9a945ab2a8ee7c9ba81c53d4885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keller, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:56:39 compute-0 systemd[1]: Started libpod-conmon-1936ed8d32eab85db2bdaa29af6fe8f03581b9a945ab2a8ee7c9ba81c53d4885.scope.
Dec 06 06:56:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:56:39 compute-0 podman[257116]: 2025-12-06 06:56:39.297542633 +0000 UTC m=+0.356511251 container init 1936ed8d32eab85db2bdaa29af6fe8f03581b9a945ab2a8ee7c9ba81c53d4885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 06:56:39 compute-0 podman[257116]: 2025-12-06 06:56:39.304958057 +0000 UTC m=+0.363926655 container start 1936ed8d32eab85db2bdaa29af6fe8f03581b9a945ab2a8ee7c9ba81c53d4885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keller, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 06:56:39 compute-0 podman[257116]: 2025-12-06 06:56:39.308654534 +0000 UTC m=+0.367623142 container attach 1936ed8d32eab85db2bdaa29af6fe8f03581b9a945ab2a8ee7c9ba81c53d4885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:56:39 compute-0 fervent_keller[257133]: 167 167
Dec 06 06:56:39 compute-0 systemd[1]: libpod-1936ed8d32eab85db2bdaa29af6fe8f03581b9a945ab2a8ee7c9ba81c53d4885.scope: Deactivated successfully.
Dec 06 06:56:39 compute-0 podman[257116]: 2025-12-06 06:56:39.311032711 +0000 UTC m=+0.370001309 container died 1936ed8d32eab85db2bdaa29af6fe8f03581b9a945ab2a8ee7c9ba81c53d4885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keller, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:56:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-03467f12b27ee987bc4539c67cee402be0eb012679b4bb84b39090b3914da560-merged.mount: Deactivated successfully.
Dec 06 06:56:39 compute-0 podman[257116]: 2025-12-06 06:56:39.345652532 +0000 UTC m=+0.404621130 container remove 1936ed8d32eab85db2bdaa29af6fe8f03581b9a945ab2a8ee7c9ba81c53d4885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 06:56:39 compute-0 systemd[1]: libpod-conmon-1936ed8d32eab85db2bdaa29af6fe8f03581b9a945ab2a8ee7c9ba81c53d4885.scope: Deactivated successfully.
Dec 06 06:56:39 compute-0 nova_compute[251992]: 2025-12-06 06:56:39.434 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:39 compute-0 podman[257158]: 2025-12-06 06:56:39.505183298 +0000 UTC m=+0.041092545 container create daabcd6ea4572bc2c83f0a3e30ef6e23af0fe6370f2e7687a31a77a7a6bf3217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:56:39 compute-0 systemd[1]: Started libpod-conmon-daabcd6ea4572bc2c83f0a3e30ef6e23af0fe6370f2e7687a31a77a7a6bf3217.scope.
Dec 06 06:56:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:56:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581f5e93e47e57624ab22e50a093d68646438dc07e4bc2f7108bb0605f001e52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581f5e93e47e57624ab22e50a093d68646438dc07e4bc2f7108bb0605f001e52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581f5e93e47e57624ab22e50a093d68646438dc07e4bc2f7108bb0605f001e52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581f5e93e47e57624ab22e50a093d68646438dc07e4bc2f7108bb0605f001e52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/581f5e93e47e57624ab22e50a093d68646438dc07e4bc2f7108bb0605f001e52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:39 compute-0 podman[257158]: 2025-12-06 06:56:39.576981843 +0000 UTC m=+0.112891120 container init daabcd6ea4572bc2c83f0a3e30ef6e23af0fe6370f2e7687a31a77a7a6bf3217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:56:39 compute-0 podman[257158]: 2025-12-06 06:56:39.487940959 +0000 UTC m=+0.023850236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:56:39 compute-0 podman[257158]: 2025-12-06 06:56:39.586884888 +0000 UTC m=+0.122794135 container start daabcd6ea4572bc2c83f0a3e30ef6e23af0fe6370f2e7687a31a77a7a6bf3217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:56:39 compute-0 podman[257158]: 2025-12-06 06:56:39.590627407 +0000 UTC m=+0.126536664 container attach daabcd6ea4572bc2c83f0a3e30ef6e23af0fe6370f2e7687a31a77a7a6bf3217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 06:56:39 compute-0 podman[257173]: 2025-12-06 06:56:39.637733165 +0000 UTC m=+0.093371658 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 06:56:39 compute-0 nova_compute[251992]: 2025-12-06 06:56:39.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:56:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1159887546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1841860323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:40 compute-0 nova_compute[251992]: 2025-12-06 06:56:40.277 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:56:40 compute-0 nova_compute[251992]: 2025-12-06 06:56:40.278 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:56:40 compute-0 nova_compute[251992]: 2025-12-06 06:56:40.278 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:56:40 compute-0 nova_compute[251992]: 2025-12-06 06:56:40.278 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:56:40 compute-0 nova_compute[251992]: 2025-12-06 06:56:40.278 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:56:40 compute-0 nervous_heisenberg[257176]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:56:40 compute-0 nervous_heisenberg[257176]: --> relative data size: 1.0
Dec 06 06:56:40 compute-0 nervous_heisenberg[257176]: --> All data devices are unavailable
Dec 06 06:56:40 compute-0 systemd[1]: libpod-daabcd6ea4572bc2c83f0a3e30ef6e23af0fe6370f2e7687a31a77a7a6bf3217.scope: Deactivated successfully.
Dec 06 06:56:40 compute-0 podman[257158]: 2025-12-06 06:56:40.396805711 +0000 UTC m=+0.932714958 container died daabcd6ea4572bc2c83f0a3e30ef6e23af0fe6370f2e7687a31a77a7a6bf3217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:56:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-581f5e93e47e57624ab22e50a093d68646438dc07e4bc2f7108bb0605f001e52-merged.mount: Deactivated successfully.
Dec 06 06:56:40 compute-0 podman[257158]: 2025-12-06 06:56:40.448905517 +0000 UTC m=+0.984814764 container remove daabcd6ea4572bc2c83f0a3e30ef6e23af0fe6370f2e7687a31a77a7a6bf3217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 06:56:40 compute-0 systemd[1]: libpod-conmon-daabcd6ea4572bc2c83f0a3e30ef6e23af0fe6370f2e7687a31a77a7a6bf3217.scope: Deactivated successfully.
Dec 06 06:56:40 compute-0 sudo[257049]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:40 compute-0 sudo[257248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:40 compute-0 sudo[257248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:40 compute-0 sudo[257248]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:40 compute-0 sudo[257273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:56:40 compute-0 sudo[257273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:40 compute-0 sudo[257273]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:40 compute-0 sudo[257298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:40 compute-0 sudo[257298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:40 compute-0 sudo[257298]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:40 compute-0 sudo[257323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:56:40 compute-0 sudo[257323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:40.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:56:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622520585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:56:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:40.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:56:40 compute-0 nova_compute[251992]: 2025-12-06 06:56:40.763 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:56:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 456 KiB data, 161 MiB used, 21 GiB / 21 GiB avail; 6.6 KiB/s rd, 1.2 KiB/s wr, 10 op/s
Dec 06 06:56:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Dec 06 06:56:40 compute-0 ceph-mon[74339]: pgmap v1054: 305 pgs: 305 active+clean; 456 KiB data, 161 MiB used, 21 GiB / 21 GiB avail; 8.3 KiB/s rd, 1.6 KiB/s wr, 12 op/s
Dec 06 06:56:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1622520585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1136916909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:40 compute-0 nova_compute[251992]: 2025-12-06 06:56:40.928 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:56:40 compute-0 nova_compute[251992]: 2025-12-06 06:56:40.929 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:56:40 compute-0 nova_compute[251992]: 2025-12-06 06:56:40.930 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:56:40 compute-0 nova_compute[251992]: 2025-12-06 06:56:40.930 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:56:41 compute-0 nova_compute[251992]: 2025-12-06 06:56:41.020 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:56:41 compute-0 nova_compute[251992]: 2025-12-06 06:56:41.020 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:56:41 compute-0 nova_compute[251992]: 2025-12-06 06:56:41.041 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:56:41 compute-0 podman[257390]: 2025-12-06 06:56:41.046000879 +0000 UTC m=+0.038404632 container create 4d148a23ea46e15b71aeaa92baff047ecc8bc8eec7bf564e45d22297d3b5bf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:56:41 compute-0 systemd[1]: Started libpod-conmon-4d148a23ea46e15b71aeaa92baff047ecc8bc8eec7bf564e45d22297d3b5bf67.scope.
Dec 06 06:56:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:56:41 compute-0 podman[257390]: 2025-12-06 06:56:41.116934533 +0000 UTC m=+0.109338306 container init 4d148a23ea46e15b71aeaa92baff047ecc8bc8eec7bf564e45d22297d3b5bf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 06:56:41 compute-0 podman[257390]: 2025-12-06 06:56:41.124762839 +0000 UTC m=+0.117166592 container start 4d148a23ea46e15b71aeaa92baff047ecc8bc8eec7bf564e45d22297d3b5bf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 06:56:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Dec 06 06:56:41 compute-0 podman[257390]: 2025-12-06 06:56:41.030612014 +0000 UTC m=+0.023015797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:56:41 compute-0 podman[257390]: 2025-12-06 06:56:41.127928854 +0000 UTC m=+0.120332607 container attach 4d148a23ea46e15b71aeaa92baff047ecc8bc8eec7bf564e45d22297d3b5bf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Dec 06 06:56:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Dec 06 06:56:41 compute-0 sweet_herschel[257407]: 167 167
Dec 06 06:56:41 compute-0 systemd[1]: libpod-4d148a23ea46e15b71aeaa92baff047ecc8bc8eec7bf564e45d22297d3b5bf67.scope: Deactivated successfully.
Dec 06 06:56:41 compute-0 podman[257390]: 2025-12-06 06:56:41.131940279 +0000 UTC m=+0.124344032 container died 4d148a23ea46e15b71aeaa92baff047ecc8bc8eec7bf564e45d22297d3b5bf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:56:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9aa848c9bf445344d8553a56f3230d4c68da904a7da818bbd52d5df3ba8a8efa-merged.mount: Deactivated successfully.
Dec 06 06:56:41 compute-0 podman[257390]: 2025-12-06 06:56:41.253169776 +0000 UTC m=+0.245573529 container remove 4d148a23ea46e15b71aeaa92baff047ecc8bc8eec7bf564e45d22297d3b5bf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_herschel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:56:41 compute-0 systemd[1]: libpod-conmon-4d148a23ea46e15b71aeaa92baff047ecc8bc8eec7bf564e45d22297d3b5bf67.scope: Deactivated successfully.
Dec 06 06:56:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Dec 06 06:56:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Dec 06 06:56:41 compute-0 podman[257450]: 2025-12-06 06:56:41.4096262 +0000 UTC m=+0.042634923 container create 0d37a1ec4119c19812b802c92a033c4a227269ade41914e3e00c21630bd7daf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:56:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Dec 06 06:56:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:56:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/938473162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:41 compute-0 systemd[1]: Started libpod-conmon-0d37a1ec4119c19812b802c92a033c4a227269ade41914e3e00c21630bd7daf0.scope.
Dec 06 06:56:41 compute-0 podman[257450]: 2025-12-06 06:56:41.390882106 +0000 UTC m=+0.023890849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:56:41 compute-0 nova_compute[251992]: 2025-12-06 06:56:41.485 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:56:41 compute-0 nova_compute[251992]: 2025-12-06 06:56:41.491 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:56:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f77bcf4e04d3301ff38b1cb3cfeea6cb472e8effb0b94e18eee2eefbf53e0cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f77bcf4e04d3301ff38b1cb3cfeea6cb472e8effb0b94e18eee2eefbf53e0cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f77bcf4e04d3301ff38b1cb3cfeea6cb472e8effb0b94e18eee2eefbf53e0cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f77bcf4e04d3301ff38b1cb3cfeea6cb472e8effb0b94e18eee2eefbf53e0cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:41 compute-0 podman[257450]: 2025-12-06 06:56:41.508435815 +0000 UTC m=+0.141444548 container init 0d37a1ec4119c19812b802c92a033c4a227269ade41914e3e00c21630bd7daf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:56:41 compute-0 podman[257450]: 2025-12-06 06:56:41.515965634 +0000 UTC m=+0.148974357 container start 0d37a1ec4119c19812b802c92a033c4a227269ade41914e3e00c21630bd7daf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:56:41 compute-0 podman[257450]: 2025-12-06 06:56:41.519014676 +0000 UTC m=+0.152023399 container attach 0d37a1ec4119c19812b802c92a033c4a227269ade41914e3e00c21630bd7daf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swanson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:56:41 compute-0 nova_compute[251992]: 2025-12-06 06:56:41.548 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:56:41 compute-0 nova_compute[251992]: 2025-12-06 06:56:41.550 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:56:41 compute-0 nova_compute[251992]: 2025-12-06 06:56:41.550 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:56:42 compute-0 gifted_swanson[257469]: {
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:     "0": [
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:         {
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             "devices": [
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "/dev/loop3"
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             ],
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             "lv_name": "ceph_lv0",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             "lv_size": "7511998464",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             "name": "ceph_lv0",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             "tags": {
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "ceph.cluster_name": "ceph",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "ceph.crush_device_class": "",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "ceph.encrypted": "0",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "ceph.osd_id": "0",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "ceph.type": "block",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:                 "ceph.vdo": "0"
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             },
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             "type": "block",
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:             "vg_name": "ceph_vg0"
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:         }
Dec 06 06:56:42 compute-0 gifted_swanson[257469]:     ]
Dec 06 06:56:42 compute-0 gifted_swanson[257469]: }
Dec 06 06:56:42 compute-0 ceph-mon[74339]: pgmap v1055: 305 pgs: 305 active+clean; 456 KiB data, 161 MiB used, 21 GiB / 21 GiB avail; 6.6 KiB/s rd, 1.2 KiB/s wr, 10 op/s
Dec 06 06:56:42 compute-0 ceph-mon[74339]: osdmap e149: 3 total, 3 up, 3 in
Dec 06 06:56:42 compute-0 ceph-mon[74339]: osdmap e150: 3 total, 3 up, 3 in
Dec 06 06:56:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/938473162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:56:42 compute-0 systemd[1]: libpod-0d37a1ec4119c19812b802c92a033c4a227269ade41914e3e00c21630bd7daf0.scope: Deactivated successfully.
Dec 06 06:56:42 compute-0 podman[257450]: 2025-12-06 06:56:42.245246193 +0000 UTC m=+0.878254926 container died 0d37a1ec4119c19812b802c92a033c4a227269ade41914e3e00c21630bd7daf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swanson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 06:56:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f77bcf4e04d3301ff38b1cb3cfeea6cb472e8effb0b94e18eee2eefbf53e0cf-merged.mount: Deactivated successfully.
Dec 06 06:56:42 compute-0 podman[257450]: 2025-12-06 06:56:42.292912396 +0000 UTC m=+0.925921119 container remove 0d37a1ec4119c19812b802c92a033c4a227269ade41914e3e00c21630bd7daf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 06:56:42 compute-0 systemd[1]: libpod-conmon-0d37a1ec4119c19812b802c92a033c4a227269ade41914e3e00c21630bd7daf0.scope: Deactivated successfully.
Dec 06 06:56:42 compute-0 sudo[257323]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:42 compute-0 sudo[257492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:42 compute-0 sudo[257492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:42 compute-0 sudo[257492]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:42 compute-0 sudo[257518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:56:42 compute-0 sudo[257518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:42 compute-0 sudo[257518]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:42 compute-0 sudo[257543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:42 compute-0 sudo[257543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:42 compute-0 sudo[257543]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:42 compute-0 sudo[257568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:56:42 compute-0 sudo[257568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:42.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:42.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 29 MiB data, 182 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 4.7 MiB/s wr, 49 op/s
Dec 06 06:56:42 compute-0 podman[257632]: 2025-12-06 06:56:42.897807432 +0000 UTC m=+0.039771765 container create c05205f15c4ac06940e14459afc55e299e49726e296d01c75698aee4da275f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 06:56:42 compute-0 systemd[1]: Started libpod-conmon-c05205f15c4ac06940e14459afc55e299e49726e296d01c75698aee4da275f21.scope.
Dec 06 06:56:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:56:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:56:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:56:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:56:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:56:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:56:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:56:42 compute-0 podman[257632]: 2025-12-06 06:56:42.880343917 +0000 UTC m=+0.022308270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:56:42 compute-0 podman[257632]: 2025-12-06 06:56:42.977233777 +0000 UTC m=+0.119198130 container init c05205f15c4ac06940e14459afc55e299e49726e296d01c75698aee4da275f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec 06 06:56:42 compute-0 podman[257632]: 2025-12-06 06:56:42.984414267 +0000 UTC m=+0.126378590 container start c05205f15c4ac06940e14459afc55e299e49726e296d01c75698aee4da275f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dewdney, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:56:42 compute-0 podman[257632]: 2025-12-06 06:56:42.988423453 +0000 UTC m=+0.130387806 container attach c05205f15c4ac06940e14459afc55e299e49726e296d01c75698aee4da275f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dewdney, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 06:56:42 compute-0 wizardly_dewdney[257648]: 167 167
Dec 06 06:56:42 compute-0 systemd[1]: libpod-c05205f15c4ac06940e14459afc55e299e49726e296d01c75698aee4da275f21.scope: Deactivated successfully.
Dec 06 06:56:42 compute-0 conmon[257648]: conmon c05205f15c4ac06940e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c05205f15c4ac06940e14459afc55e299e49726e296d01c75698aee4da275f21.scope/container/memory.events
Dec 06 06:56:42 compute-0 podman[257632]: 2025-12-06 06:56:42.991997597 +0000 UTC m=+0.133961930 container died c05205f15c4ac06940e14459afc55e299e49726e296d01c75698aee4da275f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dewdney, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 06:56:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca9e7dfdafcfe9a3f763caa74c2d384357071302c52d6ce0b93e05fe3f63a602-merged.mount: Deactivated successfully.
Dec 06 06:56:43 compute-0 podman[257632]: 2025-12-06 06:56:43.019715066 +0000 UTC m=+0.161679399 container remove c05205f15c4ac06940e14459afc55e299e49726e296d01c75698aee4da275f21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:56:43 compute-0 systemd[1]: libpod-conmon-c05205f15c4ac06940e14459afc55e299e49726e296d01c75698aee4da275f21.scope: Deactivated successfully.
Dec 06 06:56:43 compute-0 podman[257671]: 2025-12-06 06:56:43.168062117 +0000 UTC m=+0.039326765 container create 32a6181b3faba6eb448c8cc2d9149be40e2ed3147f92869095e9813b3ab5b8f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_payne, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:56:43 compute-0 systemd[1]: Started libpod-conmon-32a6181b3faba6eb448c8cc2d9149be40e2ed3147f92869095e9813b3ab5b8f7.scope.
Dec 06 06:56:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68971954a20eac699589f4c560a11e1c0a8e153c39a913458e1a93b437246f99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68971954a20eac699589f4c560a11e1c0a8e153c39a913458e1a93b437246f99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68971954a20eac699589f4c560a11e1c0a8e153c39a913458e1a93b437246f99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68971954a20eac699589f4c560a11e1c0a8e153c39a913458e1a93b437246f99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:56:43 compute-0 podman[257671]: 2025-12-06 06:56:43.238430937 +0000 UTC m=+0.109695595 container init 32a6181b3faba6eb448c8cc2d9149be40e2ed3147f92869095e9813b3ab5b8f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:56:43 compute-0 podman[257671]: 2025-12-06 06:56:43.149949056 +0000 UTC m=+0.021213734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:56:43 compute-0 podman[257671]: 2025-12-06 06:56:43.250124744 +0000 UTC m=+0.121389392 container start 32a6181b3faba6eb448c8cc2d9149be40e2ed3147f92869095e9813b3ab5b8f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_payne, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 06:56:43 compute-0 podman[257671]: 2025-12-06 06:56:43.253227487 +0000 UTC m=+0.124492255 container attach 32a6181b3faba6eb448c8cc2d9149be40e2ed3147f92869095e9813b3ab5b8f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:56:44 compute-0 nice_payne[257687]: {
Dec 06 06:56:44 compute-0 nice_payne[257687]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:56:44 compute-0 nice_payne[257687]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:56:44 compute-0 nice_payne[257687]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:56:44 compute-0 nice_payne[257687]:         "osd_id": 0,
Dec 06 06:56:44 compute-0 nice_payne[257687]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:56:44 compute-0 nice_payne[257687]:         "type": "bluestore"
Dec 06 06:56:44 compute-0 nice_payne[257687]:     }
Dec 06 06:56:44 compute-0 nice_payne[257687]: }
Dec 06 06:56:44 compute-0 systemd[1]: libpod-32a6181b3faba6eb448c8cc2d9149be40e2ed3147f92869095e9813b3ab5b8f7.scope: Deactivated successfully.
Dec 06 06:56:44 compute-0 podman[257671]: 2025-12-06 06:56:44.042692726 +0000 UTC m=+0.913957364 container died 32a6181b3faba6eb448c8cc2d9149be40e2ed3147f92869095e9813b3ab5b8f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:56:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-68971954a20eac699589f4c560a11e1c0a8e153c39a913458e1a93b437246f99-merged.mount: Deactivated successfully.
Dec 06 06:56:44 compute-0 podman[257671]: 2025-12-06 06:56:44.143773565 +0000 UTC m=+1.015038213 container remove 32a6181b3faba6eb448c8cc2d9149be40e2ed3147f92869095e9813b3ab5b8f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 06:56:44 compute-0 systemd[1]: libpod-conmon-32a6181b3faba6eb448c8cc2d9149be40e2ed3147f92869095e9813b3ab5b8f7.scope: Deactivated successfully.
Dec 06 06:56:44 compute-0 sudo[257568]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:56:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:56:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6c3c089f-43ac-4d6e-897a-f7c11808b03b does not exist
Dec 06 06:56:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 485abaaf-6804-429b-a0e8-42f50185d500 does not exist
Dec 06 06:56:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e153dc7b-b90c-429d-a484-0471b94ca2fe does not exist
Dec 06 06:56:44 compute-0 ceph-mon[74339]: pgmap v1058: 305 pgs: 305 active+clean; 29 MiB data, 182 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 4.7 MiB/s wr, 49 op/s
Dec 06 06:56:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:56:44 compute-0 sudo[257720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:44 compute-0 sudo[257720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:44 compute-0 sudo[257720]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:44 compute-0 sudo[257745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:56:44 compute-0 sudo[257745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:44 compute-0 sudo[257745]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:44.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:56:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:44.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:56:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 6.3 MiB/s wr, 47 op/s
Dec 06 06:56:46 compute-0 ceph-mon[74339]: pgmap v1059: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 6.3 MiB/s wr, 47 op/s
Dec 06 06:56:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:46.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:56:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:46.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:56:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 37 op/s
Dec 06 06:56:47 compute-0 podman[257772]: 2025-12-06 06:56:47.39789929 +0000 UTC m=+0.053246984 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 06:56:47 compute-0 podman[257773]: 2025-12-06 06:56:47.397839929 +0000 UTC m=+0.053204004 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 06:56:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:48.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:48 compute-0 ceph-mon[74339]: pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 37 op/s
Dec 06 06:56:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:48.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 37 op/s
Dec 06 06:56:50 compute-0 ceph-mon[74339]: pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 5.1 MiB/s wr, 37 op/s
Dec 06 06:56:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:50.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:56:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:50.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:56:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 4.2 MiB/s wr, 30 op/s
Dec 06 06:56:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:52 compute-0 ceph-mon[74339]: pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 4.2 MiB/s wr, 30 op/s
Dec 06 06:56:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:52.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 3.6 MiB/s wr, 26 op/s
Dec 06 06:56:52 compute-0 sudo[257814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:52 compute-0 sudo[257814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:52 compute-0 sudo[257814]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:53 compute-0 sudo[257839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:56:53 compute-0 sudo[257839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:56:53 compute-0 sudo[257839]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:54 compute-0 ceph-mon[74339]: pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 3.6 MiB/s wr, 26 op/s
Dec 06 06:56:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:56:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:54.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:56:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:54.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 596 B/s rd, 1.0 MiB/s wr, 0 op/s
Dec 06 06:56:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:56:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:56:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:56.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:56:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:56:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:56.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:56:56 compute-0 ceph-mon[74339]: pgmap v1064: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 596 B/s rd, 1.0 MiB/s wr, 0 op/s
Dec 06 06:56:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:57 compute-0 ceph-mon[74339]: pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:56:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:56:58.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:56:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:56:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:56:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:56:58.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:56:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:56:59 compute-0 ceph-mon[74339]: pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:00.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:00.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:01 compute-0 ceph-mon[74339]: pgmap v1067: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:02.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:57:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:02.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:57:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:57:03.807 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:57:03.807 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:57:03.807 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:03 compute-0 ceph-mon[74339]: pgmap v1068: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:04.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:04.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:06 compute-0 ceph-mon[74339]: pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:06.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:06.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:08 compute-0 ceph-mon[74339]: pgmap v1070: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 06:57:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1112622656' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:57:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 06:57:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1112622656' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:57:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000047s ======
Dec 06 06:57:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:08.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000047s
Dec 06 06:57:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:57:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:08.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:57:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:57:09.088 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:57:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:57:09.090 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 06:57:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1112622656' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:57:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1112622656' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:57:10 compute-0 podman[257872]: 2025-12-06 06:57:10.455971998 +0000 UTC m=+0.109168812 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 06 06:57:10 compute-0 ceph-mon[74339]: pgmap v1071: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:10.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:57:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:10.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:57:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:11 compute-0 ceph-mon[74339]: pgmap v1072: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:12.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:12.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:57:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:57:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:57:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:57:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:57:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:57:13 compute-0 sudo[257900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:13 compute-0 sudo[257900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:13 compute-0 sudo[257900]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:13 compute-0 sudo[257925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:13 compute-0 sudo[257925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:13 compute-0 sudo[257925]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:14 compute-0 ceph-mon[74339]: pgmap v1073: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:14.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:14.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2208619595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:16.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:16.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:57:17.092 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:57:17 compute-0 ceph-mon[74339]: pgmap v1074: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:57:18
Dec 06 06:57:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:57:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:57:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['backups', '.rgw.root', 'vms', 'default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data']
Dec 06 06:57:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:57:18 compute-0 podman[257953]: 2025-12-06 06:57:18.388816192 +0000 UTC m=+0.046014794 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 06:57:18 compute-0 podman[257952]: 2025-12-06 06:57:18.389732873 +0000 UTC m=+0.050313735 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 06:57:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:18.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:18 compute-0 ceph-mon[74339]: pgmap v1075: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:57:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:18.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:57:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:20 compute-0 ceph-mon[74339]: pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:20.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:20.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Dec 06 06:57:22 compute-0 ceph-mon[74339]: pgmap v1077: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Dec 06 06:57:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:57:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:22.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:57:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:22.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
Dec 06 06:57:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Dec 06 06:57:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Dec 06 06:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:57:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Dec 06 06:57:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Dec 06 06:57:23 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Dec 06 06:57:23 compute-0 ceph-mon[74339]: pgmap v1078: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
Dec 06 06:57:23 compute-0 ceph-mon[74339]: osdmap e151: 3 total, 3 up, 3 in
Dec 06 06:57:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:24.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:24.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 10 op/s
Dec 06 06:57:24 compute-0 ceph-mon[74339]: osdmap e152: 3 total, 3 up, 3 in
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:57:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:57:26 compute-0 ceph-mon[74339]: pgmap v1081: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 10 op/s
Dec 06 06:57:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/77946902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1331597734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:26.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:26.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 64 MiB data, 205 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.3 MiB/s wr, 12 op/s
Dec 06 06:57:28 compute-0 ceph-mon[74339]: pgmap v1082: 305 pgs: 305 active+clean; 64 MiB data, 205 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.3 MiB/s wr, 12 op/s
Dec 06 06:57:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:28.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 50 op/s
Dec 06 06:57:30 compute-0 ceph-mon[74339]: pgmap v1083: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 50 op/s
Dec 06 06:57:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:57:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:30.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:57:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:30.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Dec 06 06:57:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Dec 06 06:57:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Dec 06 06:57:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Dec 06 06:57:32 compute-0 ceph-mon[74339]: pgmap v1084: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Dec 06 06:57:32 compute-0 ceph-mon[74339]: osdmap e153: 3 total, 3 up, 3 in
Dec 06 06:57:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:32.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:32.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 75 op/s
Dec 06 06:57:33 compute-0 sudo[257999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:33 compute-0 sudo[257999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:33 compute-0 sudo[257999]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:33 compute-0 sudo[258024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:33 compute-0 sudo[258024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:33 compute-0 sudo[258024]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:33 compute-0 nova_compute[251992]: 2025-12-06 06:57:33.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:33 compute-0 nova_compute[251992]: 2025-12-06 06:57:33.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 06:57:33 compute-0 nova_compute[251992]: 2025-12-06 06:57:33.812 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 06:57:33 compute-0 nova_compute[251992]: 2025-12-06 06:57:33.813 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:33 compute-0 nova_compute[251992]: 2025-12-06 06:57:33.814 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 06:57:33 compute-0 nova_compute[251992]: 2025-12-06 06:57:33.889 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:34 compute-0 nova_compute[251992]: 2025-12-06 06:57:34.689 251996 DEBUG oslo_concurrency.lockutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquiring lock "301f84c3-00d9-4b53-b014-f72ec664448c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:34 compute-0 nova_compute[251992]: 2025-12-06 06:57:34.690 251996 DEBUG oslo_concurrency.lockutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "301f84c3-00d9-4b53-b014-f72ec664448c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:34 compute-0 nova_compute[251992]: 2025-12-06 06:57:34.712 251996 DEBUG nova.compute.manager [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 06:57:34 compute-0 ceph-mon[74339]: pgmap v1086: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 75 op/s
Dec 06 06:57:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:34.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:34 compute-0 nova_compute[251992]: 2025-12-06 06:57:34.797 251996 DEBUG oslo_concurrency.lockutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:34 compute-0 nova_compute[251992]: 2025-12-06 06:57:34.798 251996 DEBUG oslo_concurrency.lockutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:34 compute-0 nova_compute[251992]: 2025-12-06 06:57:34.807 251996 DEBUG nova.virt.hardware [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 06:57:34 compute-0 nova_compute[251992]: 2025-12-06 06:57:34.807 251996 INFO nova.compute.claims [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Claim successful on node compute-0.ctlplane.example.com
Dec 06 06:57:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:34.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 06 06:57:34 compute-0 nova_compute[251992]: 2025-12-06 06:57:34.949 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:57:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4043231983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.404 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.410 251996 DEBUG nova.compute.provider_tree [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.585 251996 DEBUG nova.scheduler.client.report [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.638 251996 DEBUG oslo_concurrency.lockutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.639 251996 DEBUG nova.compute.manager [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.706 251996 DEBUG nova.compute.manager [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.706 251996 DEBUG nova.network.neutron [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.733 251996 INFO nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.755 251996 DEBUG nova.compute.manager [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 06:57:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4043231983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.873 251996 DEBUG nova.compute.manager [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.875 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.875 251996 INFO nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Creating image(s)
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.907 251996 DEBUG nova.storage.rbd_utils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image 301f84c3-00d9-4b53-b014-f72ec664448c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.935 251996 DEBUG nova.storage.rbd_utils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image 301f84c3-00d9-4b53-b014-f72ec664448c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.962 251996 DEBUG nova.storage.rbd_utils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image 301f84c3-00d9-4b53-b014-f72ec664448c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.964 251996 DEBUG oslo_concurrency.lockutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:35 compute-0 nova_compute[251992]: 2025-12-06 06:57:35.965 251996 DEBUG oslo_concurrency.lockutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:36 compute-0 nova_compute[251992]: 2025-12-06 06:57:36.542 251996 DEBUG nova.virt.libvirt.imagebackend [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6efab05d-c7cf-4770-a5c3-c806a2739063/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6efab05d-c7cf-4770-a5c3-c806a2739063/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 06:57:36 compute-0 nova_compute[251992]: 2025-12-06 06:57:36.747 251996 DEBUG nova.network.neutron [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 06:57:36 compute-0 nova_compute[251992]: 2025-12-06 06:57:36.748 251996 DEBUG nova.compute.manager [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 06:57:36 compute-0 ceph-mon[74339]: pgmap v1087: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Dec 06 06:57:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:36.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:36.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 88 MiB data, 220 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 118 op/s
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.063 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.064 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.064 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.064 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.113 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.171 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.part --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.172 251996 DEBUG nova.virt.images [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] 6efab05d-c7cf-4770-a5c3-c806a2739063 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.172 251996 DEBUG nova.privsep.utils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.173 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.part /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.303 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.part /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.converted" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.315 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.376 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef.converted --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.377 251996 DEBUG oslo_concurrency.lockutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.407 251996 DEBUG nova.storage.rbd_utils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image 301f84c3-00d9-4b53-b014-f72ec664448c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.410 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 301f84c3-00d9-4b53-b014-f72ec664448c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:57:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:38.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:38.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:38 compute-0 ceph-mon[74339]: pgmap v1088: 305 pgs: 305 active+clean; 88 MiB data, 220 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 118 op/s
Dec 06 06:57:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/13983779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 88 MiB data, 220 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.917 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 301f84c3-00d9-4b53-b014-f72ec664448c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:38 compute-0 nova_compute[251992]: 2025-12-06 06:57:38.991 251996 DEBUG nova.storage.rbd_utils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] resizing rbd image 301f84c3-00d9-4b53-b014-f72ec664448c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 06:57:39 compute-0 nova_compute[251992]: 2025-12-06 06:57:39.105 251996 DEBUG nova.objects.instance [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lazy-loading 'migration_context' on Instance uuid 301f84c3-00d9-4b53-b014-f72ec664448c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:57:39 compute-0 nova_compute[251992]: 2025-12-06 06:57:39.625 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 06:57:39 compute-0 nova_compute[251992]: 2025-12-06 06:57:39.625 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 06:57:39 compute-0 nova_compute[251992]: 2025-12-06 06:57:39.626 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:39 compute-0 nova_compute[251992]: 2025-12-06 06:57:39.626 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:39 compute-0 nova_compute[251992]: 2025-12-06 06:57:39.626 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:39 compute-0 nova_compute[251992]: 2025-12-06 06:57:39.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:39 compute-0 nova_compute[251992]: 2025-12-06 06:57:39.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:57:40 compute-0 ceph-mon[74339]: pgmap v1089: 305 pgs: 305 active+clean; 88 MiB data, 220 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.567 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.568 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.569 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.569 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.570 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.657 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.658 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Ensure instance console log exists: /var/lib/nova/instances/301f84c3-00d9-4b53-b014-f72ec664448c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.659 251996 DEBUG oslo_concurrency.lockutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.659 251996 DEBUG oslo_concurrency.lockutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.660 251996 DEBUG oslo_concurrency.lockutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.663 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.668 251996 WARNING nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.672 251996 DEBUG nova.virt.libvirt.host [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.672 251996 DEBUG nova.virt.libvirt.host [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.675 251996 DEBUG nova.virt.libvirt.host [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.676 251996 DEBUG nova.virt.libvirt.host [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.678 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.678 251996 DEBUG nova.virt.hardware [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.678 251996 DEBUG nova.virt.hardware [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.679 251996 DEBUG nova.virt.hardware [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.679 251996 DEBUG nova.virt.hardware [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.679 251996 DEBUG nova.virt.hardware [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.680 251996 DEBUG nova.virt.hardware [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.680 251996 DEBUG nova.virt.hardware [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.680 251996 DEBUG nova.virt.hardware [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.681 251996 DEBUG nova.virt.hardware [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.681 251996 DEBUG nova.virt.hardware [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.681 251996 DEBUG nova.virt.hardware [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.686 251996 DEBUG nova.privsep.utils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 06 06:57:40 compute-0 nova_compute[251992]: 2025-12-06 06:57:40.687 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:40.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:40.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 88 MiB data, 220 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 06:57:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:57:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990925264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:41 compute-0 nova_compute[251992]: 2025-12-06 06:57:41.000 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:57:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1985528722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:41 compute-0 nova_compute[251992]: 2025-12-06 06:57:41.153 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:57:41 compute-0 nova_compute[251992]: 2025-12-06 06:57:41.155 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5126MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:57:41 compute-0 nova_compute[251992]: 2025-12-06 06:57:41.155 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:41 compute-0 nova_compute[251992]: 2025-12-06 06:57:41.155 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:41 compute-0 nova_compute[251992]: 2025-12-06 06:57:41.168 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:41 compute-0 nova_compute[251992]: 2025-12-06 06:57:41.193 251996 DEBUG nova.storage.rbd_utils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image 301f84c3-00d9-4b53-b014-f72ec664448c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:41 compute-0 nova_compute[251992]: 2025-12-06 06:57:41.197 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3990925264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:41 compute-0 podman[258335]: 2025-12-06 06:57:41.490194704 +0000 UTC m=+0.141143281 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 06 06:57:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:57:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2467075325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:41 compute-0 nova_compute[251992]: 2025-12-06 06:57:41.629 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:41 compute-0 nova_compute[251992]: 2025-12-06 06:57:41.635 251996 DEBUG nova.objects.instance [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lazy-loading 'pci_devices' on Instance uuid 301f84c3-00d9-4b53-b014-f72ec664448c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:57:42 compute-0 ceph-mon[74339]: pgmap v1090: 305 pgs: 305 active+clean; 88 MiB data, 220 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 06:57:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1985528722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2467075325' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:57:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:42.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:57:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:57:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:42.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:57:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 110 MiB data, 231 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 962 KiB/s wr, 98 op/s
Dec 06 06:57:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:57:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:57:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:57:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:57:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:57:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:57:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1122354135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:44 compute-0 nova_compute[251992]: 2025-12-06 06:57:44.291 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] End _get_guest_xml xml=<domain type="kvm">
Dec 06 06:57:44 compute-0 nova_compute[251992]:   <uuid>301f84c3-00d9-4b53-b014-f72ec664448c</uuid>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   <name>instance-00000002</name>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   <metadata>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-547972134</nova:name>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 06:57:40</nova:creationTime>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <nova:user uuid="28be0883246146559e0210481394b3d0">tempest-DeleteServersAdminTestJSON-1434350404-project-member</nova:user>
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <nova:project uuid="f0fc1e7b36eb4fbd942a923898feb462">tempest-DeleteServersAdminTestJSON-1434350404</nova:project>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   </metadata>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <system>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <entry name="serial">301f84c3-00d9-4b53-b014-f72ec664448c</entry>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <entry name="uuid">301f84c3-00d9-4b53-b014-f72ec664448c</entry>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     </system>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   <os>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   </os>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   <features>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <apic/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   </features>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   </clock>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   </cpu>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   <devices>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/301f84c3-00d9-4b53-b014-f72ec664448c_disk">
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       </source>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       </auth>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/301f84c3-00d9-4b53-b014-f72ec664448c_disk.config">
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       </source>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 06:57:44 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       </auth>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/301f84c3-00d9-4b53-b014-f72ec664448c/console.log" append="off"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     </serial>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <video>
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     </video>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     </rng>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 06:57:44 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 06:57:44 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 06:57:44 compute-0 nova_compute[251992]:   </devices>
Dec 06 06:57:44 compute-0 nova_compute[251992]: </domain>
Dec 06 06:57:44 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 06:57:44 compute-0 sudo[258365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:44 compute-0 sudo[258365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:44 compute-0 sudo[258365]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:44 compute-0 sudo[258390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:57:44 compute-0 sudo[258390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:44 compute-0 sudo[258390]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:44.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:44 compute-0 sudo[258415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:44 compute-0 sudo[258415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:44 compute-0 sudo[258415]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:44.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 134 MiB data, 241 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Dec 06 06:57:44 compute-0 sudo[258440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 06:57:44 compute-0 sudo[258440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:44 compute-0 ceph-mon[74339]: pgmap v1091: 305 pgs: 305 active+clean; 110 MiB data, 231 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 962 KiB/s wr, 98 op/s
Dec 06 06:57:45 compute-0 sudo[258440]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.155 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.155 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.156 251996 INFO nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Using config drive
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.180 251996 DEBUG nova.storage.rbd_utils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image 301f84c3-00d9-4b53-b014-f72ec664448c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.542 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 301f84c3-00d9-4b53-b014-f72ec664448c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.543 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.543 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.564 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.623 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.624 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.734 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 06:57:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.761 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 06:57:45 compute-0 sudo[258502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:45 compute-0 nova_compute[251992]: 2025-12-06 06:57:45.800 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:45 compute-0 sudo[258502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:45 compute-0 sudo[258502]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:45 compute-0 sudo[258528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:57:45 compute-0 sudo[258528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:45 compute-0 sudo[258528]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:45 compute-0 sudo[258553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:45 compute-0 sudo[258553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:45 compute-0 sudo[258553]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:45 compute-0 sudo[258597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:57:45 compute-0 sudo[258597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.017 251996 INFO nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Creating config drive at /var/lib/nova/instances/301f84c3-00d9-4b53-b014-f72ec664448c/disk.config
Dec 06 06:57:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:57:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3000776520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.022 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/301f84c3-00d9-4b53-b014-f72ec664448c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ol17fdc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.147 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/301f84c3-00d9-4b53-b014-f72ec664448c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6ol17fdc" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.176 251996 DEBUG nova.storage.rbd_utils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image 301f84c3-00d9-4b53-b014-f72ec664448c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.178 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/301f84c3-00d9-4b53-b014-f72ec664448c/disk.config 301f84c3-00d9-4b53-b014-f72ec664448c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:57:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/413436200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.259 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.264 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.307 251996 ERROR nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [req-2aca5384-d933-447b-82a9-13b44d0805bf] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID e75da5bf-16fa-49b1-b5e1-3aa61daf0433.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-2aca5384-d933-447b-82a9-13b44d0805bf"}]}
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.325 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.345 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.345 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 06:57:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.386 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 06:57:46 compute-0 sudo[258597]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.429 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.493 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:46 compute-0 ceph-mon[74339]: pgmap v1092: 305 pgs: 305 active+clean; 134 MiB data, 241 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Dec 06 06:57:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3000776520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/413436200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.728 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.728 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.759 251996 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 06:57:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:46.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.831 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:46.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 146 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 48 op/s
Dec 06 06:57:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:57:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/777465336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.944 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.949 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.991 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updated inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.992 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 06 06:57:46 compute-0 nova_compute[251992]: 2025-12-06 06:57:46.992 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.031 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.031 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.032 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.037 251996 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.038 251996 INFO nova.compute.claims [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Claim successful on node compute-0.ctlplane.example.com
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.300 251996 DEBUG oslo_concurrency.processutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/301f84c3-00d9-4b53-b014-f72ec664448c/disk.config 301f84c3-00d9-4b53-b014-f72ec664448c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.300 251996 INFO nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Deleting local config drive /var/lib/nova/instances/301f84c3-00d9-4b53-b014-f72ec664448c/disk.config because it was imported into RBD.
Dec 06 06:57:47 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.352 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:47 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 06 06:57:47 compute-0 systemd-machined[212986]: New machine qemu-1-instance-00000002.
Dec 06 06:57:47 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Dec 06 06:57:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/777465336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3418748782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2115016062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:57:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3930106838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.821 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.830 251996 DEBUG nova.compute.provider_tree [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.853 251996 DEBUG nova.scheduler.client.report [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.884 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.885 251996 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.924 251996 DEBUG nova.compute.manager [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.925 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.928 251996 INFO nova.virt.libvirt.driver [-] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Instance spawned successfully.
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.929 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.932 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004267.931711, 301f84c3-00d9-4b53-b014-f72ec664448c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.932 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] VM Resumed (Lifecycle Event)
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.950 251996 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.950 251996 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.957 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.961 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.961 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.962 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.962 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.963 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.963 251996 DEBUG nova.virt.libvirt.driver [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.967 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:57:47 compute-0 nova_compute[251992]: 2025-12-06 06:57:47.973 251996 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.012 251996 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.014 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.014 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004267.932583, 301f84c3-00d9-4b53-b014-f72ec664448c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.015 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] VM Started (Lifecycle Event)
Dec 06 06:57:48 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.068 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.071 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:57:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.149 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:57:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:57:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:57:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:57:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:57:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:57:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:57:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:48 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 266f9440-8320-45d0-b1bd-35434aa4bd12 does not exist
Dec 06 06:57:48 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b229a158-d7d7-476c-8ddb-ce215d1bd64c does not exist
Dec 06 06:57:48 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8677e177-d58b-4172-8b3d-b880ed124043 does not exist
Dec 06 06:57:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:57:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:57:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:57:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:57:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:57:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.272 251996 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.274 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.275 251996 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Creating image(s)
Dec 06 06:57:48 compute-0 sudo[258816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:48 compute-0 sudo[258816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.302 251996 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:48 compute-0 sudo[258816]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.333 251996 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:48 compute-0 sudo[258859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:57:48 compute-0 sudo[258859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:48 compute-0 sudo[258859]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.363 251996 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.370 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.389 251996 INFO nova.compute.manager [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Took 12.52 seconds to spawn the instance on the hypervisor.
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.392 251996 DEBUG nova.compute.manager [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:57:48 compute-0 sudo[258920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:48 compute-0 sudo[258920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:48 compute-0 sudo[258920]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.441 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.441 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.442 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.442 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.476 251996 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.496 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:48 compute-0 sudo[258960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:57:48 compute-0 sudo[258960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:48 compute-0 podman[258947]: 2025-12-06 06:57:48.508427959 +0000 UTC m=+0.068172975 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible)
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.524 251996 INFO nova.compute.manager [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Took 13.76 seconds to build instance.
Dec 06 06:57:48 compute-0 podman[258946]: 2025-12-06 06:57:48.531072005 +0000 UTC m=+0.090629937 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.546 251996 DEBUG oslo_concurrency.lockutils [None req-16b47c2b-ad82-4624-9684-60461bd17049 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "301f84c3-00d9-4b53-b014-f72ec664448c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:48 compute-0 ceph-mon[74339]: pgmap v1093: 305 pgs: 305 active+clean; 146 MiB data, 254 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 48 op/s
Dec 06 06:57:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3930106838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2428242993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/512715773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:57:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:57:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:57:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:57:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:57:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:57:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:48.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:57:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:48.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:48 compute-0 podman[259088]: 2025-12-06 06:57:48.848987073 +0000 UTC m=+0.040899788 container create a5695409b42a0a4bfd86405f4e67fa25cb2cd5fa22f6673ccea6fc35040b3ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:57:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 155 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 81 op/s
Dec 06 06:57:48 compute-0 systemd[1]: Started libpod-conmon-a5695409b42a0a4bfd86405f4e67fa25cb2cd5fa22f6673ccea6fc35040b3ed6.scope.
Dec 06 06:57:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:57:48 compute-0 podman[259088]: 2025-12-06 06:57:48.831818939 +0000 UTC m=+0.023731684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:57:48 compute-0 podman[259088]: 2025-12-06 06:57:48.935583701 +0000 UTC m=+0.127496446 container init a5695409b42a0a4bfd86405f4e67fa25cb2cd5fa22f6673ccea6fc35040b3ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bohr, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:57:48 compute-0 podman[259088]: 2025-12-06 06:57:48.942377235 +0000 UTC m=+0.134289960 container start a5695409b42a0a4bfd86405f4e67fa25cb2cd5fa22f6673ccea6fc35040b3ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 06:57:48 compute-0 podman[259088]: 2025-12-06 06:57:48.945557041 +0000 UTC m=+0.137469786 container attach a5695409b42a0a4bfd86405f4e67fa25cb2cd5fa22f6673ccea6fc35040b3ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bohr, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 06:57:48 compute-0 trusting_bohr[259105]: 167 167
Dec 06 06:57:48 compute-0 systemd[1]: libpod-a5695409b42a0a4bfd86405f4e67fa25cb2cd5fa22f6673ccea6fc35040b3ed6.scope: Deactivated successfully.
Dec 06 06:57:48 compute-0 podman[259088]: 2025-12-06 06:57:48.947724584 +0000 UTC m=+0.139637309 container died a5695409b42a0a4bfd86405f4e67fa25cb2cd5fa22f6673ccea6fc35040b3ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bohr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 06:57:48 compute-0 nova_compute[251992]: 2025-12-06 06:57:48.948 251996 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Automatically allocating a network for project 066c314d67e347f6a49e8e3e27998441. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Dec 06 06:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e61a2b8c29111ed58dff8beba001688efdab3d2375498886a3d6d67feedc54e4-merged.mount: Deactivated successfully.
Dec 06 06:57:48 compute-0 podman[259088]: 2025-12-06 06:57:48.983482606 +0000 UTC m=+0.175395321 container remove a5695409b42a0a4bfd86405f4e67fa25cb2cd5fa22f6673ccea6fc35040b3ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:57:48 compute-0 systemd[1]: libpod-conmon-a5695409b42a0a4bfd86405f4e67fa25cb2cd5fa22f6673ccea6fc35040b3ed6.scope: Deactivated successfully.
Dec 06 06:57:49 compute-0 podman[259130]: 2025-12-06 06:57:49.182702171 +0000 UTC m=+0.054939906 container create 193c258da4d03b7652e032660d76d726191996107024070d801ce40372024faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:57:49 compute-0 systemd[1]: Started libpod-conmon-193c258da4d03b7652e032660d76d726191996107024070d801ce40372024faa.scope.
Dec 06 06:57:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64de91722e313e7187d573edd995f4218065de7509ed340de66709feef0cecb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64de91722e313e7187d573edd995f4218065de7509ed340de66709feef0cecb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64de91722e313e7187d573edd995f4218065de7509ed340de66709feef0cecb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64de91722e313e7187d573edd995f4218065de7509ed340de66709feef0cecb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64de91722e313e7187d573edd995f4218065de7509ed340de66709feef0cecb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:49 compute-0 podman[259130]: 2025-12-06 06:57:49.162469143 +0000 UTC m=+0.034706918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:57:49 compute-0 podman[259130]: 2025-12-06 06:57:49.273903231 +0000 UTC m=+0.146140986 container init 193c258da4d03b7652e032660d76d726191996107024070d801ce40372024faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 06:57:49 compute-0 podman[259130]: 2025-12-06 06:57:49.281722909 +0000 UTC m=+0.153960664 container start 193c258da4d03b7652e032660d76d726191996107024070d801ce40372024faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_euler, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:57:49 compute-0 podman[259130]: 2025-12-06 06:57:49.285350467 +0000 UTC m=+0.157588302 container attach 193c258da4d03b7652e032660d76d726191996107024070d801ce40372024faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_euler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.489 251996 DEBUG oslo_concurrency.lockutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Acquiring lock "301f84c3-00d9-4b53-b014-f72ec664448c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.491 251996 DEBUG oslo_concurrency.lockutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Lock "301f84c3-00d9-4b53-b014-f72ec664448c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.491 251996 DEBUG oslo_concurrency.lockutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Acquiring lock "301f84c3-00d9-4b53-b014-f72ec664448c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.492 251996 DEBUG oslo_concurrency.lockutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Lock "301f84c3-00d9-4b53-b014-f72ec664448c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.492 251996 DEBUG oslo_concurrency.lockutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Lock "301f84c3-00d9-4b53-b014-f72ec664448c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.494 251996 INFO nova.compute.manager [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Terminating instance
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.495 251996 DEBUG oslo_concurrency.lockutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Acquiring lock "refresh_cache-301f84c3-00d9-4b53-b014-f72ec664448c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.495 251996 DEBUG oslo_concurrency.lockutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Acquired lock "refresh_cache-301f84c3-00d9-4b53-b014-f72ec664448c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.495 251996 DEBUG nova.network.neutron [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.625 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.707 251996 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] resizing rbd image 6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.824 251996 DEBUG nova.objects.instance [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lazy-loading 'migration_context' on Instance uuid 6064a21e-5bc7-495b-9059-a6dd7d3abee3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.833 251996 DEBUG nova.network.neutron [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.849 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.850 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Ensure instance console log exists: /var/lib/nova/instances/6064a21e-5bc7-495b-9059-a6dd7d3abee3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.850 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.851 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:49 compute-0 nova_compute[251992]: 2025-12-06 06:57:49.852 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:50 compute-0 flamboyant_euler[259146]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:57:50 compute-0 flamboyant_euler[259146]: --> relative data size: 1.0
Dec 06 06:57:50 compute-0 flamboyant_euler[259146]: --> All data devices are unavailable
Dec 06 06:57:50 compute-0 systemd[1]: libpod-193c258da4d03b7652e032660d76d726191996107024070d801ce40372024faa.scope: Deactivated successfully.
Dec 06 06:57:50 compute-0 podman[259130]: 2025-12-06 06:57:50.113025888 +0000 UTC m=+0.985263643 container died 193c258da4d03b7652e032660d76d726191996107024070d801ce40372024faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_euler, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 06 06:57:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a64de91722e313e7187d573edd995f4218065de7509ed340de66709feef0cecb-merged.mount: Deactivated successfully.
Dec 06 06:57:50 compute-0 podman[259130]: 2025-12-06 06:57:50.161091877 +0000 UTC m=+1.033329612 container remove 193c258da4d03b7652e032660d76d726191996107024070d801ce40372024faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:57:50 compute-0 systemd[1]: libpod-conmon-193c258da4d03b7652e032660d76d726191996107024070d801ce40372024faa.scope: Deactivated successfully.
Dec 06 06:57:50 compute-0 sudo[258960]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:50 compute-0 sudo[259245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:50 compute-0 sudo[259245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:50 compute-0 sudo[259245]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:50 compute-0 sudo[259270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:57:50 compute-0 sudo[259270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:50 compute-0 sudo[259270]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:50 compute-0 sudo[259295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:50 compute-0 sudo[259295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:50 compute-0 sudo[259295]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:50 compute-0 nova_compute[251992]: 2025-12-06 06:57:50.363 251996 DEBUG nova.network.neutron [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:57:50 compute-0 nova_compute[251992]: 2025-12-06 06:57:50.381 251996 DEBUG oslo_concurrency.lockutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Releasing lock "refresh_cache-301f84c3-00d9-4b53-b014-f72ec664448c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:57:50 compute-0 nova_compute[251992]: 2025-12-06 06:57:50.381 251996 DEBUG nova.compute.manager [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 06:57:50 compute-0 sudo[259320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:57:50 compute-0 sudo[259320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:50 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec 06 06:57:50 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 3.056s CPU time.
Dec 06 06:57:50 compute-0 systemd-machined[212986]: Machine qemu-1-instance-00000002 terminated.
Dec 06 06:57:50 compute-0 nova_compute[251992]: 2025-12-06 06:57:50.610 251996 INFO nova.virt.libvirt.driver [-] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Instance destroyed successfully.
Dec 06 06:57:50 compute-0 nova_compute[251992]: 2025-12-06 06:57:50.611 251996 DEBUG nova.objects.instance [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Lazy-loading 'resources' on Instance uuid 301f84c3-00d9-4b53-b014-f72ec664448c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:57:50 compute-0 podman[259406]: 2025-12-06 06:57:50.750354879 +0000 UTC m=+0.040584950 container create 0772baf014bc0f0a23ed54f4676b2d027e25186b3ec69f6ead1da4631d32b2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 06:57:50 compute-0 systemd[1]: Started libpod-conmon-0772baf014bc0f0a23ed54f4676b2d027e25186b3ec69f6ead1da4631d32b2ef.scope.
Dec 06 06:57:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:50.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:57:50 compute-0 podman[259406]: 2025-12-06 06:57:50.73173518 +0000 UTC m=+0.021965221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:57:50 compute-0 podman[259406]: 2025-12-06 06:57:50.837307776 +0000 UTC m=+0.127537807 container init 0772baf014bc0f0a23ed54f4676b2d027e25186b3ec69f6ead1da4631d32b2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shamir, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 06:57:50 compute-0 podman[259406]: 2025-12-06 06:57:50.844256933 +0000 UTC m=+0.134487004 container start 0772baf014bc0f0a23ed54f4676b2d027e25186b3ec69f6ead1da4631d32b2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Dec 06 06:57:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:50.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:50 compute-0 focused_shamir[259422]: 167 167
Dec 06 06:57:50 compute-0 podman[259406]: 2025-12-06 06:57:50.84865333 +0000 UTC m=+0.138883351 container attach 0772baf014bc0f0a23ed54f4676b2d027e25186b3ec69f6ead1da4631d32b2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 06:57:50 compute-0 systemd[1]: libpod-0772baf014bc0f0a23ed54f4676b2d027e25186b3ec69f6ead1da4631d32b2ef.scope: Deactivated successfully.
Dec 06 06:57:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 155 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 81 op/s
Dec 06 06:57:50 compute-0 podman[259427]: 2025-12-06 06:57:50.887963298 +0000 UTC m=+0.025520017 container died 0772baf014bc0f0a23ed54f4676b2d027e25186b3ec69f6ead1da4631d32b2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shamir, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:57:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c815043d2789106bf3f17264e0e7c5212408648da301b6bdbe96ee59394f7f1-merged.mount: Deactivated successfully.
Dec 06 06:57:50 compute-0 podman[259427]: 2025-12-06 06:57:50.932936922 +0000 UTC m=+0.070493651 container remove 0772baf014bc0f0a23ed54f4676b2d027e25186b3ec69f6ead1da4631d32b2ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_shamir, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Dec 06 06:57:50 compute-0 systemd[1]: libpod-conmon-0772baf014bc0f0a23ed54f4676b2d027e25186b3ec69f6ead1da4631d32b2ef.scope: Deactivated successfully.
Dec 06 06:57:50 compute-0 ceph-mon[74339]: pgmap v1094: 305 pgs: 305 active+clean; 155 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 81 op/s
Dec 06 06:57:51 compute-0 podman[259447]: 2025-12-06 06:57:51.113196289 +0000 UTC m=+0.040041726 container create ea571fdd634c51276b6546bc5956e496925561958fda9b7bcfe92ac14bc16f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mclean, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 06:57:51 compute-0 systemd[1]: Started libpod-conmon-ea571fdd634c51276b6546bc5956e496925561958fda9b7bcfe92ac14bc16f0d.scope.
Dec 06 06:57:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331a69d7a66792d3d5c0221b8bcd5664ecd3a289f993583968511df79c805675/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331a69d7a66792d3d5c0221b8bcd5664ecd3a289f993583968511df79c805675/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331a69d7a66792d3d5c0221b8bcd5664ecd3a289f993583968511df79c805675/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/331a69d7a66792d3d5c0221b8bcd5664ecd3a289f993583968511df79c805675/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:51 compute-0 podman[259447]: 2025-12-06 06:57:51.173627467 +0000 UTC m=+0.100472914 container init ea571fdd634c51276b6546bc5956e496925561958fda9b7bcfe92ac14bc16f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mclean, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:57:51 compute-0 podman[259447]: 2025-12-06 06:57:51.181950388 +0000 UTC m=+0.108795835 container start ea571fdd634c51276b6546bc5956e496925561958fda9b7bcfe92ac14bc16f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mclean, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:57:51 compute-0 podman[259447]: 2025-12-06 06:57:51.18705578 +0000 UTC m=+0.113901227 container attach ea571fdd634c51276b6546bc5956e496925561958fda9b7bcfe92ac14bc16f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mclean, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:57:51 compute-0 podman[259447]: 2025-12-06 06:57:51.096488656 +0000 UTC m=+0.023334113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:57:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]: {
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:     "0": [
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:         {
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             "devices": [
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "/dev/loop3"
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             ],
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             "lv_name": "ceph_lv0",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             "lv_size": "7511998464",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             "name": "ceph_lv0",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             "tags": {
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "ceph.cluster_name": "ceph",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "ceph.crush_device_class": "",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "ceph.encrypted": "0",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "ceph.osd_id": "0",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "ceph.type": "block",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:                 "ceph.vdo": "0"
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             },
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             "type": "block",
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:             "vg_name": "ceph_vg0"
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:         }
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]:     ]
Dec 06 06:57:51 compute-0 dreamy_mclean[259463]: }
Dec 06 06:57:51 compute-0 systemd[1]: libpod-ea571fdd634c51276b6546bc5956e496925561958fda9b7bcfe92ac14bc16f0d.scope: Deactivated successfully.
Dec 06 06:57:51 compute-0 podman[259447]: 2025-12-06 06:57:51.936586367 +0000 UTC m=+0.863431804 container died ea571fdd634c51276b6546bc5956e496925561958fda9b7bcfe92ac14bc16f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mclean, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 06:57:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-331a69d7a66792d3d5c0221b8bcd5664ecd3a289f993583968511df79c805675-merged.mount: Deactivated successfully.
Dec 06 06:57:51 compute-0 podman[259447]: 2025-12-06 06:57:51.993811897 +0000 UTC m=+0.920657344 container remove ea571fdd634c51276b6546bc5956e496925561958fda9b7bcfe92ac14bc16f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 06:57:52 compute-0 systemd[1]: libpod-conmon-ea571fdd634c51276b6546bc5956e496925561958fda9b7bcfe92ac14bc16f0d.scope: Deactivated successfully.
Dec 06 06:57:52 compute-0 sudo[259320]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:52 compute-0 sudo[259486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:52 compute-0 sudo[259486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:52 compute-0 sudo[259486]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:52 compute-0 sudo[259511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:57:52 compute-0 sudo[259511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:52 compute-0 sudo[259511]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:52 compute-0 sudo[259536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:52 compute-0 sudo[259536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:52 compute-0 sudo[259536]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:52 compute-0 sudo[259561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:57:52 compute-0 sudo[259561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:52 compute-0 podman[259628]: 2025-12-06 06:57:52.636321284 +0000 UTC m=+0.077825939 container create f8e13cd16537813f03c2fd5b46aa216fba912ab4509d4831183319db95dc2ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mahavira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:57:52 compute-0 podman[259628]: 2025-12-06 06:57:52.582017113 +0000 UTC m=+0.023521788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:57:52 compute-0 systemd[1]: Started libpod-conmon-f8e13cd16537813f03c2fd5b46aa216fba912ab4509d4831183319db95dc2ec5.scope.
Dec 06 06:57:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:57:52 compute-0 ceph-mon[74339]: pgmap v1095: 305 pgs: 305 active+clean; 155 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 81 op/s
Dec 06 06:57:52 compute-0 podman[259628]: 2025-12-06 06:57:52.797028449 +0000 UTC m=+0.238533194 container init f8e13cd16537813f03c2fd5b46aa216fba912ab4509d4831183319db95dc2ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mahavira, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:57:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:52.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:52 compute-0 podman[259628]: 2025-12-06 06:57:52.804354186 +0000 UTC m=+0.245858841 container start f8e13cd16537813f03c2fd5b46aa216fba912ab4509d4831183319db95dc2ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mahavira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 06:57:52 compute-0 frosty_mahavira[259644]: 167 167
Dec 06 06:57:52 compute-0 systemd[1]: libpod-f8e13cd16537813f03c2fd5b46aa216fba912ab4509d4831183319db95dc2ec5.scope: Deactivated successfully.
Dec 06 06:57:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:57:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:52.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:57:52 compute-0 podman[259628]: 2025-12-06 06:57:52.852846765 +0000 UTC m=+0.294351430 container attach f8e13cd16537813f03c2fd5b46aa216fba912ab4509d4831183319db95dc2ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:57:52 compute-0 podman[259628]: 2025-12-06 06:57:52.853490751 +0000 UTC m=+0.294995456 container died f8e13cd16537813f03c2fd5b46aa216fba912ab4509d4831183319db95dc2ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mahavira, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:57:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 211 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.6 MiB/s wr, 179 op/s
Dec 06 06:57:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd758ba118ff1df53e3cd357d588828efa7a31a2e98e293bad4ea88c758b2c25-merged.mount: Deactivated successfully.
Dec 06 06:57:53 compute-0 podman[259628]: 2025-12-06 06:57:53.010272562 +0000 UTC m=+0.451777247 container remove f8e13cd16537813f03c2fd5b46aa216fba912ab4509d4831183319db95dc2ec5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 06:57:53 compute-0 systemd[1]: libpod-conmon-f8e13cd16537813f03c2fd5b46aa216fba912ab4509d4831183319db95dc2ec5.scope: Deactivated successfully.
Dec 06 06:57:53 compute-0 podman[259670]: 2025-12-06 06:57:53.16608265 +0000 UTC m=+0.024997664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:57:53 compute-0 podman[259670]: 2025-12-06 06:57:53.306709011 +0000 UTC m=+0.165623995 container create daae6d04d513e2080f9c8c63233478237de500061cf0319083091e8d21d81bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:57:53 compute-0 systemd[1]: Started libpod-conmon-daae6d04d513e2080f9c8c63233478237de500061cf0319083091e8d21d81bb4.scope.
Dec 06 06:57:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08cc07042f4ff273cc6e685e3ffe76d14bb7a4493cde40aa0f7400e69684ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08cc07042f4ff273cc6e685e3ffe76d14bb7a4493cde40aa0f7400e69684ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08cc07042f4ff273cc6e685e3ffe76d14bb7a4493cde40aa0f7400e69684ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08cc07042f4ff273cc6e685e3ffe76d14bb7a4493cde40aa0f7400e69684ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:57:53 compute-0 sudo[259686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:53 compute-0 sudo[259686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:53 compute-0 sudo[259686]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:53 compute-0 sudo[259714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:53 compute-0 sudo[259714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:53 compute-0 sudo[259714]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:53 compute-0 podman[259670]: 2025-12-06 06:57:53.471876605 +0000 UTC m=+0.330791649 container init daae6d04d513e2080f9c8c63233478237de500061cf0319083091e8d21d81bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_elion, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:57:53 compute-0 podman[259670]: 2025-12-06 06:57:53.47871058 +0000 UTC m=+0.337625574 container start daae6d04d513e2080f9c8c63233478237de500061cf0319083091e8d21d81bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_elion, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 06:57:53 compute-0 podman[259670]: 2025-12-06 06:57:53.528098221 +0000 UTC m=+0.387013245 container attach daae6d04d513e2080f9c8c63233478237de500061cf0319083091e8d21d81bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_elion, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 06:57:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4214025489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/594518666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:54 compute-0 relaxed_elion[259690]: {
Dec 06 06:57:54 compute-0 relaxed_elion[259690]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:57:54 compute-0 relaxed_elion[259690]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:57:54 compute-0 relaxed_elion[259690]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:57:54 compute-0 relaxed_elion[259690]:         "osd_id": 0,
Dec 06 06:57:54 compute-0 relaxed_elion[259690]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:57:54 compute-0 relaxed_elion[259690]:         "type": "bluestore"
Dec 06 06:57:54 compute-0 relaxed_elion[259690]:     }
Dec 06 06:57:54 compute-0 relaxed_elion[259690]: }
Dec 06 06:57:54 compute-0 systemd[1]: libpod-daae6d04d513e2080f9c8c63233478237de500061cf0319083091e8d21d81bb4.scope: Deactivated successfully.
Dec 06 06:57:54 compute-0 conmon[259690]: conmon daae6d04d513e2080f9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-daae6d04d513e2080f9c8c63233478237de500061cf0319083091e8d21d81bb4.scope/container/memory.events
Dec 06 06:57:54 compute-0 podman[259670]: 2025-12-06 06:57:54.345276399 +0000 UTC m=+1.204191393 container died daae6d04d513e2080f9c8c63233478237de500061cf0319083091e8d21d81bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_elion, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:57:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-df08cc07042f4ff273cc6e685e3ffe76d14bb7a4493cde40aa0f7400e69684ac-merged.mount: Deactivated successfully.
Dec 06 06:57:54 compute-0 podman[259670]: 2025-12-06 06:57:54.410258456 +0000 UTC m=+1.269173450 container remove daae6d04d513e2080f9c8c63233478237de500061cf0319083091e8d21d81bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_elion, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 06:57:54 compute-0 systemd[1]: libpod-conmon-daae6d04d513e2080f9c8c63233478237de500061cf0319083091e8d21d81bb4.scope: Deactivated successfully.
Dec 06 06:57:54 compute-0 sudo[259561]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:57:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:57:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev fd747890-c1a0-4763-bf93-ebba0e7e4c6f does not exist
Dec 06 06:57:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3216e4db-e2d0-4fc8-a307-22b38ce2571d does not exist
Dec 06 06:57:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4fffd67a-5624-4556-b4e6-b80e80a8e749 does not exist
Dec 06 06:57:54 compute-0 sudo[259772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:57:54 compute-0 sudo[259772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:54 compute-0 sudo[259772]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:54 compute-0 sudo[259798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:57:54 compute-0 sudo[259798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:57:54 compute-0 sudo[259798]: pam_unix(sudo:session): session closed for user root
Dec 06 06:57:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:54.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:54.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 260 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.6 MiB/s wr, 213 op/s
Dec 06 06:57:54 compute-0 nova_compute[251992]: 2025-12-06 06:57:54.950 251996 INFO nova.virt.libvirt.driver [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Deleting instance files /var/lib/nova/instances/301f84c3-00d9-4b53-b014-f72ec664448c_del
Dec 06 06:57:54 compute-0 nova_compute[251992]: 2025-12-06 06:57:54.952 251996 INFO nova.virt.libvirt.driver [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Deletion of /var/lib/nova/instances/301f84c3-00d9-4b53-b014-f72ec664448c_del complete
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.043 251996 DEBUG nova.virt.libvirt.host [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.043 251996 INFO nova.virt.libvirt.host [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] UEFI support detected
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.046 251996 INFO nova.compute.manager [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Took 4.66 seconds to destroy the instance on the hypervisor.
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.046 251996 DEBUG oslo.service.loopingcall [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.046 251996 DEBUG nova.compute.manager [-] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.046 251996 DEBUG nova.network.neutron [-] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.189 251996 DEBUG nova.network.neutron [-] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.205 251996 DEBUG nova.network.neutron [-] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:57:55 compute-0 ceph-mon[74339]: pgmap v1096: 305 pgs: 305 active+clean; 211 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.6 MiB/s wr, 179 op/s
Dec 06 06:57:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.221 251996 INFO nova.compute.manager [-] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Took 0.17 seconds to deallocate network for instance.
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.287 251996 DEBUG oslo_concurrency.lockutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.288 251996 DEBUG oslo_concurrency.lockutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.454 251996 DEBUG oslo_concurrency.processutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:57:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2996594464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.909 251996 DEBUG oslo_concurrency.processutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.916 251996 DEBUG nova.compute.provider_tree [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.932 251996 DEBUG nova.scheduler.client.report [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.954 251996 DEBUG oslo_concurrency.lockutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:55 compute-0 nova_compute[251992]: 2025-12-06 06:57:55.984 251996 INFO nova.scheduler.client.report [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Deleted allocations for instance 301f84c3-00d9-4b53-b014-f72ec664448c
Dec 06 06:57:56 compute-0 nova_compute[251992]: 2025-12-06 06:57:56.059 251996 DEBUG oslo_concurrency.lockutils [None req-f9a75c8e-d79f-4c7e-969d-c105fffb6bc1 1bad3aa50a314fb5b20d450eafccbaaf 852e365d07c34106812539a81d3788b6 - - default default] Lock "301f84c3-00d9-4b53-b014-f72ec664448c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:56 compute-0 ceph-mon[74339]: pgmap v1097: 305 pgs: 305 active+clean; 260 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.6 MiB/s wr, 213 op/s
Dec 06 06:57:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3377158659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2996594464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2594476977' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:57:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:57:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Dec 06 06:57:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Dec 06 06:57:56 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Dec 06 06:57:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:56.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:56.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 292 MiB data, 359 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 8.1 MiB/s wr, 272 op/s
Dec 06 06:57:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Dec 06 06:57:57 compute-0 ceph-mon[74339]: osdmap e154: 3 total, 3 up, 3 in
Dec 06 06:57:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Dec 06 06:57:57 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Dec 06 06:57:58 compute-0 nova_compute[251992]: 2025-12-06 06:57:58.193 251996 DEBUG oslo_concurrency.lockutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquiring lock "c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:58 compute-0 nova_compute[251992]: 2025-12-06 06:57:58.193 251996 DEBUG oslo_concurrency.lockutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:58 compute-0 nova_compute[251992]: 2025-12-06 06:57:58.215 251996 DEBUG nova.compute.manager [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 06:57:58 compute-0 nova_compute[251992]: 2025-12-06 06:57:58.299 251996 DEBUG oslo_concurrency.lockutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:58 compute-0 nova_compute[251992]: 2025-12-06 06:57:58.300 251996 DEBUG oslo_concurrency.lockutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:58 compute-0 nova_compute[251992]: 2025-12-06 06:57:58.314 251996 DEBUG nova.virt.hardware [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 06:57:58 compute-0 nova_compute[251992]: 2025-12-06 06:57:58.315 251996 INFO nova.compute.claims [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Claim successful on node compute-0.ctlplane.example.com
Dec 06 06:57:58 compute-0 nova_compute[251992]: 2025-12-06 06:57:58.523 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:58 compute-0 ceph-mon[74339]: pgmap v1099: 305 pgs: 305 active+clean; 292 MiB data, 359 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 8.1 MiB/s wr, 272 op/s
Dec 06 06:57:58 compute-0 ceph-mon[74339]: osdmap e155: 3 total, 3 up, 3 in
Dec 06 06:57:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:57:58.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:57:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:57:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:57:58.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:57:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 306 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 11 MiB/s wr, 350 op/s
Dec 06 06:57:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:57:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1243932209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.001 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.007 251996 DEBUG nova.compute.provider_tree [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.042 251996 DEBUG nova.scheduler.client.report [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.061 251996 DEBUG oslo_concurrency.lockutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.062 251996 DEBUG nova.compute.manager [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.115 251996 DEBUG nova.compute.manager [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.115 251996 DEBUG nova.network.neutron [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.139 251996 INFO nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.159 251996 DEBUG nova.compute.manager [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.301 251996 DEBUG nova.compute.manager [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.303 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.303 251996 INFO nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Creating image(s)
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.338 251996 DEBUG nova.storage.rbd_utils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.367 251996 DEBUG nova.storage.rbd_utils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.395 251996 DEBUG nova.storage.rbd_utils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.399 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.414 251996 DEBUG nova.network.neutron [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.415 251996 DEBUG nova.compute.manager [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.454 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.455 251996 DEBUG oslo_concurrency.lockutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.456 251996 DEBUG oslo_concurrency.lockutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.456 251996 DEBUG oslo_concurrency.lockutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.485 251996 DEBUG nova.storage.rbd_utils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.488 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:57:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1243932209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.826 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.886 251996 DEBUG nova.storage.rbd_utils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] resizing rbd image c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.979 251996 DEBUG nova.objects.instance [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lazy-loading 'migration_context' on Instance uuid c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.994 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.995 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Ensure instance console log exists: /var/lib/nova/instances/c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.995 251996 DEBUG oslo_concurrency.lockutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.995 251996 DEBUG oslo_concurrency.lockutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.996 251996 DEBUG oslo_concurrency.lockutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:57:59 compute-0 nova_compute[251992]: 2025-12-06 06:57:59.997 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.000 251996 WARNING nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.005 251996 DEBUG nova.virt.libvirt.host [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.006 251996 DEBUG nova.virt.libvirt.host [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.008 251996 DEBUG nova.virt.libvirt.host [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.009 251996 DEBUG nova.virt.libvirt.host [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.010 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.010 251996 DEBUG nova.virt.hardware [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.010 251996 DEBUG nova.virt.hardware [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.010 251996 DEBUG nova.virt.hardware [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.011 251996 DEBUG nova.virt.hardware [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.011 251996 DEBUG nova.virt.hardware [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.011 251996 DEBUG nova.virt.hardware [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.011 251996 DEBUG nova.virt.hardware [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.011 251996 DEBUG nova.virt.hardware [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.012 251996 DEBUG nova.virt.hardware [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.012 251996 DEBUG nova.virt.hardware [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.012 251996 DEBUG nova.virt.hardware [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.015 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:58:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/708831470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.437 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.467 251996 DEBUG nova.storage.rbd_utils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.472 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:00 compute-0 ceph-mon[74339]: pgmap v1101: 305 pgs: 305 active+clean; 306 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 11 MiB/s wr, 350 op/s
Dec 06 06:58:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/708831470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:00.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:00.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 306 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 8.1 MiB/s wr, 204 op/s
Dec 06 06:58:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:58:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3242452057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.907 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.909 251996 DEBUG nova.objects.instance [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lazy-loading 'pci_devices' on Instance uuid c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:00 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.924 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] End _get_guest_xml xml=<domain type="kvm">
Dec 06 06:58:00 compute-0 nova_compute[251992]:   <uuid>c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b</uuid>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   <name>instance-00000007</name>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   <metadata>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-1894074862</nova:name>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 06:58:00</nova:creationTime>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <nova:user uuid="28be0883246146559e0210481394b3d0">tempest-DeleteServersAdminTestJSON-1434350404-project-member</nova:user>
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <nova:project uuid="f0fc1e7b36eb4fbd942a923898feb462">tempest-DeleteServersAdminTestJSON-1434350404</nova:project>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   </metadata>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <system>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <entry name="serial">c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b</entry>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <entry name="uuid">c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b</entry>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     </system>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   <os>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   </os>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   <features>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <apic/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   </features>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   </clock>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   </cpu>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   <devices>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk">
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       </source>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       </auth>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk.config">
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       </source>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 06:58:00 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       </auth>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b/console.log" append="off"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     </serial>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <video>
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     </video>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     </rng>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 06:58:00 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 06:58:00 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 06:58:00 compute-0 nova_compute[251992]:   </devices>
Dec 06 06:58:00 compute-0 nova_compute[251992]: </domain>
Dec 06 06:58:00 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 06:58:01 compute-0 nova_compute[251992]: 2025-12-06 06:58:00.999 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:58:01 compute-0 nova_compute[251992]: 2025-12-06 06:58:01.000 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:58:01 compute-0 nova_compute[251992]: 2025-12-06 06:58:01.001 251996 INFO nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Using config drive
Dec 06 06:58:01 compute-0 nova_compute[251992]: 2025-12-06 06:58:01.029 251996 DEBUG nova.storage.rbd_utils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:01 compute-0 nova_compute[251992]: 2025-12-06 06:58:01.291 251996 INFO nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Creating config drive at /var/lib/nova/instances/c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b/disk.config
Dec 06 06:58:01 compute-0 nova_compute[251992]: 2025-12-06 06:58:01.295 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbeih2lba execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:01 compute-0 nova_compute[251992]: 2025-12-06 06:58:01.418 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbeih2lba" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:01 compute-0 nova_compute[251992]: 2025-12-06 06:58:01.447 251996 DEBUG nova.storage.rbd_utils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] rbd image c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:01 compute-0 nova_compute[251992]: 2025-12-06 06:58:01.450 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b/disk.config c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:01 compute-0 nova_compute[251992]: 2025-12-06 06:58:01.632 251996 DEBUG oslo_concurrency.processutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b/disk.config c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:01 compute-0 nova_compute[251992]: 2025-12-06 06:58:01.633 251996 INFO nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Deleting local config drive /var/lib/nova/instances/c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b/disk.config because it was imported into RBD.
Dec 06 06:58:01 compute-0 systemd-machined[212986]: New machine qemu-2-instance-00000007.
Dec 06 06:58:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3242452057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:01 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000007.
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.358 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004282.3577666, c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.359 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] VM Resumed (Lifecycle Event)
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.362 251996 DEBUG nova.compute.manager [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.362 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.366 251996 INFO nova.virt.libvirt.driver [-] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Instance spawned successfully.
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.367 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.390 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.393 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.401 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.402 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.402 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.403 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.403 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.404 251996 DEBUG nova.virt.libvirt.driver [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.438 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.438 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004282.3586743, c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.439 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] VM Started (Lifecycle Event)
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.468 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.471 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.479 251996 INFO nova.compute.manager [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Took 3.18 seconds to spawn the instance on the hypervisor.
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.479 251996 DEBUG nova.compute.manager [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.492 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.541 251996 INFO nova.compute.manager [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Took 4.28 seconds to build instance.
Dec 06 06:58:02 compute-0 nova_compute[251992]: 2025-12-06 06:58:02.561 251996 DEBUG oslo_concurrency.lockutils [None req-3af75813-8864-4ffb-980e-1ff6f35ef925 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:02 compute-0 ceph-mon[74339]: pgmap v1102: 305 pgs: 305 active+clean; 306 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 8.1 MiB/s wr, 204 op/s
Dec 06 06:58:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:02.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:02.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 333 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.7 MiB/s wr, 240 op/s
Dec 06 06:58:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3709163343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:03.809 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:03.810 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:03.810 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.149 251996 DEBUG oslo_concurrency.lockutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquiring lock "c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.150 251996 DEBUG oslo_concurrency.lockutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.150 251996 DEBUG oslo_concurrency.lockutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquiring lock "c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.150 251996 DEBUG oslo_concurrency.lockutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.151 251996 DEBUG oslo_concurrency.lockutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.151 251996 INFO nova.compute.manager [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Terminating instance
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.152 251996 DEBUG oslo_concurrency.lockutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquiring lock "refresh_cache-c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.152 251996 DEBUG oslo_concurrency.lockutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquired lock "refresh_cache-c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.153 251996 DEBUG nova.network.neutron [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.314 251996 DEBUG nova.network.neutron [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.724 251996 DEBUG nova.network.neutron [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.743 251996 DEBUG oslo_concurrency.lockutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Releasing lock "refresh_cache-c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.744 251996 DEBUG nova.compute.manager [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 06:58:04 compute-0 ceph-mon[74339]: pgmap v1103: 305 pgs: 305 active+clean; 333 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.7 MiB/s wr, 240 op/s
Dec 06 06:58:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2344159975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3348113514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:04 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec 06 06:58:04 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000007.scope: Consumed 3.162s CPU time.
Dec 06 06:58:04 compute-0 systemd-machined[212986]: Machine qemu-2-instance-00000007 terminated.
Dec 06 06:58:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:04.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:04.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 364 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 7.9 MiB/s wr, 286 op/s
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.960 251996 INFO nova.virt.libvirt.driver [-] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Instance destroyed successfully.
Dec 06 06:58:04 compute-0 nova_compute[251992]: 2025-12-06 06:58:04.960 251996 DEBUG nova.objects.instance [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lazy-loading 'resources' on Instance uuid c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.345 251996 INFO nova.virt.libvirt.driver [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Deleting instance files /var/lib/nova/instances/c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_del
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.345 251996 INFO nova.virt.libvirt.driver [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Deletion of /var/lib/nova/instances/c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b_del complete
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.496 251996 INFO nova.compute.manager [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Took 0.75 seconds to destroy the instance on the hypervisor.
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.498 251996 DEBUG oslo.service.loopingcall [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.499 251996 DEBUG nova.compute.manager [-] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.499 251996 DEBUG nova.network.neutron [-] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.609 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004270.6080225, 301f84c3-00d9-4b53-b014-f72ec664448c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.609 251996 INFO nova.compute.manager [-] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] VM Stopped (Lifecycle Event)
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.648 251996 DEBUG nova.compute.manager [None req-f728571f-19a5-4a2f-8807-191d25293ae9 - - - - - -] [instance: 301f84c3-00d9-4b53-b014-f72ec664448c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.842 251996 DEBUG nova.network.neutron [-] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.861 251996 DEBUG nova.network.neutron [-] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.902 251996 INFO nova.compute.manager [-] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Took 0.40 seconds to deallocate network for instance.
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.996 251996 DEBUG oslo_concurrency.lockutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:05 compute-0 nova_compute[251992]: 2025-12-06 06:58:05.997 251996 DEBUG oslo_concurrency.lockutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:06 compute-0 nova_compute[251992]: 2025-12-06 06:58:06.104 251996 DEBUG oslo_concurrency.processutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Dec 06 06:58:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Dec 06 06:58:06 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Dec 06 06:58:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:58:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3106078905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:06 compute-0 nova_compute[251992]: 2025-12-06 06:58:06.574 251996 DEBUG oslo_concurrency.processutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:06 compute-0 nova_compute[251992]: 2025-12-06 06:58:06.581 251996 DEBUG nova.compute.provider_tree [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:58:06 compute-0 nova_compute[251992]: 2025-12-06 06:58:06.602 251996 DEBUG nova.scheduler.client.report [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:58:06 compute-0 nova_compute[251992]: 2025-12-06 06:58:06.623 251996 DEBUG oslo_concurrency.lockutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:06 compute-0 nova_compute[251992]: 2025-12-06 06:58:06.675 251996 INFO nova.scheduler.client.report [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Deleted allocations for instance c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b
Dec 06 06:58:06 compute-0 nova_compute[251992]: 2025-12-06 06:58:06.741 251996 DEBUG oslo_concurrency.lockutils [None req-ba293330-0490-4ca7-b2cb-71f3ca276306 28be0883246146559e0210481394b3d0 f0fc1e7b36eb4fbd942a923898feb462 - - default default] Lock "c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:06.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:06.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 359 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.8 MiB/s wr, 317 op/s
Dec 06 06:58:06 compute-0 ceph-mon[74339]: pgmap v1104: 305 pgs: 305 active+clean; 364 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 7.9 MiB/s wr, 286 op/s
Dec 06 06:58:06 compute-0 ceph-mon[74339]: osdmap e156: 3 total, 3 up, 3 in
Dec 06 06:58:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3106078905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/364414299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:08 compute-0 ceph-mon[74339]: pgmap v1106: 305 pgs: 305 active+clean; 359 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.8 MiB/s wr, 317 op/s
Dec 06 06:58:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 06:58:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2833273441' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:58:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 06:58:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2833273441' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:58:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:08.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:08.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 363 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.6 MiB/s wr, 338 op/s
Dec 06 06:58:08 compute-0 nova_compute[251992]: 2025-12-06 06:58:08.946 251996 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Automatically allocated network: {'id': 'fa805a2c-a79c-458b-b658-8e0534714a02', 'name': 'auto_allocated_network', 'tenant_id': '066c314d67e347f6a49e8e3e27998441', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['04b3c0e5-fc90-42d4-bc2f-73e704cb01a0', 'd4a716c0-b329-403f-9680-e6cc8e535b63'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-12-06T06:57:49Z', 'updated_at': '2025-12-06T06:58:07Z', 'revision_number': 4, 'project_id': '066c314d67e347f6a49e8e3e27998441'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Dec 06 06:58:08 compute-0 nova_compute[251992]: 2025-12-06 06:58:08.957 251996 WARNING oslo_policy.policy [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 06 06:58:08 compute-0 nova_compute[251992]: 2025-12-06 06:58:08.957 251996 WARNING oslo_policy.policy [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 06 06:58:08 compute-0 nova_compute[251992]: 2025-12-06 06:58:08.959 251996 DEBUG nova.policy [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5122185c6194067bdb22d6ba8205dca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '066c314d67e347f6a49e8e3e27998441', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 06:58:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2833273441' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:58:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2833273441' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:58:10 compute-0 ceph-mon[74339]: pgmap v1107: 305 pgs: 305 active+clean; 363 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.6 MiB/s wr, 338 op/s
Dec 06 06:58:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:10.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:10.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 363 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.6 MiB/s wr, 338 op/s
Dec 06 06:58:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:12 compute-0 ceph-mon[74339]: pgmap v1108: 305 pgs: 305 active+clean; 363 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.6 MiB/s wr, 338 op/s
Dec 06 06:58:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:12.215 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:58:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:12.217 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 06:58:12 compute-0 podman[260262]: 2025-12-06 06:58:12.434579338 +0000 UTC m=+0.096182561 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 06:58:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:12.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:12.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 414 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 7.0 MiB/s wr, 324 op/s
Dec 06 06:58:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:58:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:58:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:58:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:58:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:58:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:58:13 compute-0 sudo[260290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:13 compute-0 sudo[260290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:13 compute-0 sudo[260290]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:13 compute-0 sudo[260315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:13 compute-0 sudo[260315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:13 compute-0 sudo[260315]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 06:58:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 10K writes, 3089 syncs, 3.53 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1560 writes, 4077 keys, 1560 commit groups, 1.0 writes per commit group, ingest: 3.31 MB, 0.01 MB/s
                                           Interval WAL: 1560 writes, 697 syncs, 2.24 writes per sync, written: 0.00 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 06:58:13 compute-0 nova_compute[251992]: 2025-12-06 06:58:13.904 251996 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Successfully created port: 5a0e5db4-24c8-463d-b14f-0d5210748804 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 06:58:14 compute-0 ceph-mon[74339]: pgmap v1109: 305 pgs: 305 active+clean; 414 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 7.0 MiB/s wr, 324 op/s
Dec 06 06:58:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2324399139' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:14.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:14.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 423 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 6.6 MiB/s wr, 304 op/s
Dec 06 06:58:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/440116575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1471238150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:16 compute-0 ceph-mon[74339]: pgmap v1110: 305 pgs: 305 active+clean; 423 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 6.6 MiB/s wr, 304 op/s
Dec 06 06:58:16 compute-0 nova_compute[251992]: 2025-12-06 06:58:16.669 251996 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Successfully updated port: 5a0e5db4-24c8-463d-b14f-0d5210748804 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 06:58:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:16.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:16.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 410 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.4 MiB/s wr, 315 op/s
Dec 06 06:58:16 compute-0 nova_compute[251992]: 2025-12-06 06:58:16.956 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "refresh_cache-6064a21e-5bc7-495b-9059-a6dd7d3abee3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:58:16 compute-0 nova_compute[251992]: 2025-12-06 06:58:16.956 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquired lock "refresh_cache-6064a21e-5bc7-495b-9059-a6dd7d3abee3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:58:16 compute-0 nova_compute[251992]: 2025-12-06 06:58:16.957 251996 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 06:58:17 compute-0 nova_compute[251992]: 2025-12-06 06:58:17.218 251996 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:58:17 compute-0 nova_compute[251992]: 2025-12-06 06:58:17.232 251996 DEBUG nova.compute.manager [req-fcbbde21-4c6f-4e00-9c16-2b001cc5388c req-8c1ef49a-7d9f-4a1c-94b8-c672a640bea6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Received event network-changed-5a0e5db4-24c8-463d-b14f-0d5210748804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 06:58:17 compute-0 nova_compute[251992]: 2025-12-06 06:58:17.233 251996 DEBUG nova.compute.manager [req-fcbbde21-4c6f-4e00-9c16-2b001cc5388c req-8c1ef49a-7d9f-4a1c-94b8-c672a640bea6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Refreshing instance network info cache due to event network-changed-5a0e5db4-24c8-463d-b14f-0d5210748804. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 06:58:17 compute-0 nova_compute[251992]: 2025-12-06 06:58:17.234 251996 DEBUG oslo_concurrency.lockutils [req-fcbbde21-4c6f-4e00-9c16-2b001cc5388c req-8c1ef49a-7d9f-4a1c-94b8-c672a640bea6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6064a21e-5bc7-495b-9059-a6dd7d3abee3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:58:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:58:18
Dec 06 06:58:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:58:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:58:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.control']
Dec 06 06:58:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:58:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:18.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 360 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.4 MiB/s wr, 225 op/s
Dec 06 06:58:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:18.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:18 compute-0 ceph-mon[74339]: pgmap v1111: 305 pgs: 305 active+clean; 410 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.4 MiB/s wr, 315 op/s
Dec 06 06:58:19 compute-0 sshd-session[260342]: Invalid user ubuntu from 91.202.233.33 port 25980
Dec 06 06:58:19 compute-0 podman[260345]: 2025-12-06 06:58:19.146873422 +0000 UTC m=+0.052487227 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 06:58:19 compute-0 podman[260346]: 2025-12-06 06:58:19.146958784 +0000 UTC m=+0.052063766 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:58:19 compute-0 sshd-session[260342]: Connection reset by invalid user ubuntu 91.202.233.33 port 25980 [preauth]
Dec 06 06:58:19 compute-0 nova_compute[251992]: 2025-12-06 06:58:19.959 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004284.9587162, c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:19 compute-0 nova_compute[251992]: 2025-12-06 06:58:19.960 251996 INFO nova.compute.manager [-] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] VM Stopped (Lifecycle Event)
Dec 06 06:58:19 compute-0 nova_compute[251992]: 2025-12-06 06:58:19.993 251996 DEBUG nova.compute.manager [None req-7deb09ad-6586-4db2-b60e-c54a766e3d96 - - - - - -] [instance: c3c72dbd-e3e3-4698-8ff5-6f41a3dc5f6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:20.219 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.419 251996 DEBUG nova.network.neutron [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Updating instance_info_cache with network_info: [{"id": "5a0e5db4-24c8-463d-b14f-0d5210748804", "address": "fa:16:3e:45:9a:94", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::296", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a0e5db4-24", "ovs_interfaceid": "5a0e5db4-24c8-463d-b14f-0d5210748804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.443 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Releasing lock "refresh_cache-6064a21e-5bc7-495b-9059-a6dd7d3abee3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.444 251996 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Instance network_info: |[{"id": "5a0e5db4-24c8-463d-b14f-0d5210748804", "address": "fa:16:3e:45:9a:94", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::296", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a0e5db4-24", "ovs_interfaceid": "5a0e5db4-24c8-463d-b14f-0d5210748804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.444 251996 DEBUG oslo_concurrency.lockutils [req-fcbbde21-4c6f-4e00-9c16-2b001cc5388c req-8c1ef49a-7d9f-4a1c-94b8-c672a640bea6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6064a21e-5bc7-495b-9059-a6dd7d3abee3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.444 251996 DEBUG nova.network.neutron [req-fcbbde21-4c6f-4e00-9c16-2b001cc5388c req-8c1ef49a-7d9f-4a1c-94b8-c672a640bea6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Refreshing network info cache for port 5a0e5db4-24c8-463d-b14f-0d5210748804 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.447 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Start _get_guest_xml network_info=[{"id": "5a0e5db4-24c8-463d-b14f-0d5210748804", "address": "fa:16:3e:45:9a:94", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::296", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a0e5db4-24", "ovs_interfaceid": "5a0e5db4-24c8-463d-b14f-0d5210748804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.451 251996 WARNING nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.458 251996 DEBUG nova.virt.libvirt.host [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.459 251996 DEBUG nova.virt.libvirt.host [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.462 251996 DEBUG nova.virt.libvirt.host [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.462 251996 DEBUG nova.virt.libvirt.host [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.464 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.464 251996 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.464 251996 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 06:58:20 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.465 251996 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 06:58:20 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.465 251996 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.465 251996 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.465 251996 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.466 251996 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.466 251996 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.466 251996 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.467 251996 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.467 251996 DEBUG nova.virt.hardware [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.470 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:20 compute-0 ceph-mon[74339]: pgmap v1112: 305 pgs: 305 active+clean; 360 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.4 MiB/s wr, 225 op/s
Dec 06 06:58:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/723393079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:20.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 360 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 148 op/s
Dec 06 06:58:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:20.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:58:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967313379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.916 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.944 251996 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:20 compute-0 nova_compute[251992]: 2025-12-06 06:58:20.948 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:58:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/539297012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.374 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.377 251996 DEBUG nova.virt.libvirt.vif [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T06:57:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2024863607-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2024863607-3',id=5,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='066c314d67e347f6a49e8e3e27998441',ramdisk_id='',reservation_id='r-bb9k3jjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1572395875',owner_user_name='tempest-AutoAllocateNetworkTest-1572395875-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T06:57:48Z,user_data=None,user_id='e5122185c6194067bdb22d6ba8205dca',uuid=6064a21e-5bc7-495b-9059-a6dd7d3abee3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a0e5db4-24c8-463d-b14f-0d5210748804", "address": "fa:16:3e:45:9a:94", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::296", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a0e5db4-24", "ovs_interfaceid": "5a0e5db4-24c8-463d-b14f-0d5210748804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.378 251996 DEBUG nova.network.os_vif_util [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converting VIF {"id": "5a0e5db4-24c8-463d-b14f-0d5210748804", "address": "fa:16:3e:45:9a:94", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::296", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a0e5db4-24", "ovs_interfaceid": "5a0e5db4-24c8-463d-b14f-0d5210748804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.380 251996 DEBUG nova.network.os_vif_util [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:9a:94,bridge_name='br-int',has_traffic_filtering=True,id=5a0e5db4-24c8-463d-b14f-0d5210748804,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a0e5db4-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.384 251996 DEBUG nova.objects.instance [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6064a21e-5bc7-495b-9059-a6dd7d3abee3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.412 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] End _get_guest_xml xml=<domain type="kvm">
Dec 06 06:58:21 compute-0 nova_compute[251992]:   <uuid>6064a21e-5bc7-495b-9059-a6dd7d3abee3</uuid>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   <name>instance-00000005</name>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   <metadata>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <nova:name>tempest-tempest.common.compute-instance-2024863607-3</nova:name>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 06:58:20</nova:creationTime>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <nova:user uuid="e5122185c6194067bdb22d6ba8205dca">tempest-AutoAllocateNetworkTest-1572395875-project-member</nova:user>
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <nova:project uuid="066c314d67e347f6a49e8e3e27998441">tempest-AutoAllocateNetworkTest-1572395875</nova:project>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <nova:port uuid="5a0e5db4-24c8-463d-b14f-0d5210748804">
Dec 06 06:58:21 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="fdfe:381f:8400::296" ipVersion="6"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.1.0.19" ipVersion="4"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   </metadata>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <system>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <entry name="serial">6064a21e-5bc7-495b-9059-a6dd7d3abee3</entry>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <entry name="uuid">6064a21e-5bc7-495b-9059-a6dd7d3abee3</entry>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     </system>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   <os>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   </os>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   <features>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <apic/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   </features>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   </clock>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   </cpu>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   <devices>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk">
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       </source>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       </auth>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk.config">
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       </source>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 06:58:21 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       </auth>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:45:9a:94"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <target dev="tap5a0e5db4-24"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     </interface>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/6064a21e-5bc7-495b-9059-a6dd7d3abee3/console.log" append="off"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     </serial>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <video>
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     </video>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     </rng>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 06:58:21 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 06:58:21 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 06:58:21 compute-0 nova_compute[251992]:   </devices>
Dec 06 06:58:21 compute-0 nova_compute[251992]: </domain>
Dec 06 06:58:21 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.414 251996 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Preparing to wait for external event network-vif-plugged-5a0e5db4-24c8-463d-b14f-0d5210748804 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.414 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.414 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.415 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.416 251996 DEBUG nova.virt.libvirt.vif [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T06:57:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2024863607-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2024863607-3',id=5,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='066c314d67e347f6a49e8e3e27998441',ramdisk_id='',reservation_id='r-bb9k3jjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1572395875',owner_user_name='tempest-AutoAllocateNetworkTest-1572395875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T06:57:48Z,user_data=None,user_id='e5122185c6194067bdb22d6ba8205dca',uuid=6064a21e-5bc7-495b-9059-a6dd7d3abee3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a0e5db4-24c8-463d-b14f-0d5210748804", "address": "fa:16:3e:45:9a:94", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::296", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a0e5db4-24", "ovs_interfaceid": "5a0e5db4-24c8-463d-b14f-0d5210748804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.416 251996 DEBUG nova.network.os_vif_util [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converting VIF {"id": "5a0e5db4-24c8-463d-b14f-0d5210748804", "address": "fa:16:3e:45:9a:94", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::296", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a0e5db4-24", "ovs_interfaceid": "5a0e5db4-24c8-463d-b14f-0d5210748804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.417 251996 DEBUG nova.network.os_vif_util [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:9a:94,bridge_name='br-int',has_traffic_filtering=True,id=5a0e5db4-24c8-463d-b14f-0d5210748804,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a0e5db4-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.417 251996 DEBUG os_vif [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:9a:94,bridge_name='br-int',has_traffic_filtering=True,id=5a0e5db4-24c8-463d-b14f-0d5210748804,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a0e5db4-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.454 251996 DEBUG ovsdbapp.backend.ovs_idl [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.455 251996 DEBUG ovsdbapp.backend.ovs_idl [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.455 251996 DEBUG ovsdbapp.backend.ovs_idl [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.456 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.457 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.457 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.458 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.459 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.461 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.471 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.471 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.472 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 06:58:21 compute-0 nova_compute[251992]: 2025-12-06 06:58:21.473 251996 INFO oslo.privsep.daemon [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpxmql_fyx/privsep.sock']
Dec 06 06:58:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1967313379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/539297012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:21 compute-0 sshd-session[260384]: Connection reset by authenticating user root 91.202.233.33 port 25994 [preauth]
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.154 251996 INFO oslo.privsep.daemon [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Spawned new privsep daemon via rootwrap
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.044 260454 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.050 260454 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.052 260454 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.053 260454 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260454
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.218 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.501 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.502 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a0e5db4-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.502 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a0e5db4-24, col_values=(('external_ids', {'iface-id': '5a0e5db4-24c8-463d-b14f-0d5210748804', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:9a:94', 'vm-uuid': '6064a21e-5bc7-495b-9059-a6dd7d3abee3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.504 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:22 compute-0 NetworkManager[48965]: <info>  [1765004302.5052] manager: (tap5a0e5db4-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.507 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.511 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.511 251996 INFO os_vif [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:9a:94,bridge_name='br-int',has_traffic_filtering=True,id=5a0e5db4-24c8-463d-b14f-0d5210748804,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a0e5db4-24')
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.599 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.600 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.600 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] No VIF found with MAC fa:16:3e:45:9a:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.600 251996 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Using config drive
Dec 06 06:58:22 compute-0 nova_compute[251992]: 2025-12-06 06:58:22.627 251996 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:22.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:22 compute-0 ceph-mon[74339]: pgmap v1113: 305 pgs: 305 active+clean; 360 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 148 op/s
Dec 06 06:58:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 306 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 213 op/s
Dec 06 06:58:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:22.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:58:23 compute-0 ceph-mon[74339]: pgmap v1114: 305 pgs: 305 active+clean; 306 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 213 op/s
Dec 06 06:58:24 compute-0 nova_compute[251992]: 2025-12-06 06:58:24.152 251996 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Creating config drive at /var/lib/nova/instances/6064a21e-5bc7-495b-9059-a6dd7d3abee3/disk.config
Dec 06 06:58:24 compute-0 nova_compute[251992]: 2025-12-06 06:58:24.156 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6064a21e-5bc7-495b-9059-a6dd7d3abee3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jiytk2s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:24 compute-0 nova_compute[251992]: 2025-12-06 06:58:24.283 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6064a21e-5bc7-495b-9059-a6dd7d3abee3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jiytk2s" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:24 compute-0 nova_compute[251992]: 2025-12-06 06:58:24.312 251996 DEBUG nova.storage.rbd_utils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] rbd image 6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:24 compute-0 nova_compute[251992]: 2025-12-06 06:58:24.316 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6064a21e-5bc7-495b-9059-a6dd7d3abee3/disk.config 6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:24 compute-0 nova_compute[251992]: 2025-12-06 06:58:24.339 251996 DEBUG nova.network.neutron [req-fcbbde21-4c6f-4e00-9c16-2b001cc5388c req-8c1ef49a-7d9f-4a1c-94b8-c672a640bea6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Updated VIF entry in instance network info cache for port 5a0e5db4-24c8-463d-b14f-0d5210748804. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 06:58:24 compute-0 nova_compute[251992]: 2025-12-06 06:58:24.340 251996 DEBUG nova.network.neutron [req-fcbbde21-4c6f-4e00-9c16-2b001cc5388c req-8c1ef49a-7d9f-4a1c-94b8-c672a640bea6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Updating instance_info_cache with network_info: [{"id": "5a0e5db4-24c8-463d-b14f-0d5210748804", "address": "fa:16:3e:45:9a:94", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::296", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a0e5db4-24", "ovs_interfaceid": "5a0e5db4-24c8-463d-b14f-0d5210748804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:58:24 compute-0 nova_compute[251992]: 2025-12-06 06:58:24.484 251996 DEBUG oslo_concurrency.lockutils [req-fcbbde21-4c6f-4e00-9c16-2b001cc5388c req-8c1ef49a-7d9f-4a1c-94b8-c672a640bea6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6064a21e-5bc7-495b-9059-a6dd7d3abee3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:58:24 compute-0 sshd-session[260455]: Connection reset by authenticating user root 91.202.233.33 port 60562 [preauth]
Dec 06 06:58:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:24.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 306 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 778 KiB/s wr, 163 op/s
Dec 06 06:58:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:24.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.272 251996 DEBUG oslo_concurrency.processutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6064a21e-5bc7-495b-9059-a6dd7d3abee3/disk.config 6064a21e-5bc7-495b-9059-a6dd7d3abee3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.956s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.273 251996 INFO nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Deleting local config drive /var/lib/nova/instances/6064a21e-5bc7-495b-9059-a6dd7d3abee3/disk.config because it was imported into RBD.
Dec 06 06:58:25 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 06 06:58:25 compute-0 kernel: tap5a0e5db4-24: entered promiscuous mode
Dec 06 06:58:25 compute-0 NetworkManager[48965]: <info>  [1765004305.3335] manager: (tap5a0e5db4-24): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.334 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:25 compute-0 ovn_controller[147168]: 2025-12-06T06:58:25Z|00027|binding|INFO|Claiming lport 5a0e5db4-24c8-463d-b14f-0d5210748804 for this chassis.
Dec 06 06:58:25 compute-0 ovn_controller[147168]: 2025-12-06T06:58:25Z|00028|binding|INFO|5a0e5db4-24c8-463d-b14f-0d5210748804: Claiming fa:16:3e:45:9a:94 10.1.0.19 fdfe:381f:8400::296
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.337 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:25.355 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:9a:94 10.1.0.19 fdfe:381f:8400::296'], port_security=['fa:16:3e:45:9a:94 10.1.0.19 fdfe:381f:8400::296'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.19/26 fdfe:381f:8400::296/64', 'neutron:device_id': '6064a21e-5bc7-495b-9059-a6dd7d3abee3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa805a2c-a79c-458b-b658-8e0534714a02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '066c314d67e347f6a49e8e3e27998441', 'neutron:revision_number': '2', 'neutron:security_group_ids': '460e28b2-b45f-4429-a2b9-8f57e45c4e5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a47e992-7383-43fa-bf34-745cbd8b74f1, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=5a0e5db4-24c8-463d-b14f-0d5210748804) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:58:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:25.358 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 5a0e5db4-24c8-463d-b14f-0d5210748804 in datapath fa805a2c-a79c-458b-b658-8e0534714a02 bound to our chassis
Dec 06 06:58:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:25.361 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa805a2c-a79c-458b-b658-8e0534714a02
Dec 06 06:58:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:25.363 158118 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpqwa9q7t4/privsep.sock']
Dec 06 06:58:25 compute-0 systemd-udevd[260542]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:58:25 compute-0 NetworkManager[48965]: <info>  [1765004305.3887] device (tap5a0e5db4-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 06:58:25 compute-0 NetworkManager[48965]: <info>  [1765004305.3917] device (tap5a0e5db4-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 06:58:25 compute-0 systemd-machined[212986]: New machine qemu-3-instance-00000005.
Dec 06 06:58:25 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000005.
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.427 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:25 compute-0 ovn_controller[147168]: 2025-12-06T06:58:25Z|00029|binding|INFO|Setting lport 5a0e5db4-24c8-463d-b14f-0d5210748804 ovn-installed in OVS
Dec 06 06:58:25 compute-0 ovn_controller[147168]: 2025-12-06T06:58:25Z|00030|binding|INFO|Setting lport 5a0e5db4-24c8-463d-b14f-0d5210748804 up in Southbound
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.434 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006129824469570073 of space, bias 1.0, pg target 1.8389473408710217 quantized to 32 (current 32)
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:58:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.797 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004305.796411, 6064a21e-5bc7-495b-9059-a6dd7d3abee3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.797 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] VM Started (Lifecycle Event)
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.824 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.827 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004305.7973979, 6064a21e-5bc7-495b-9059-a6dd7d3abee3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.828 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] VM Paused (Lifecycle Event)
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.873 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.877 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:58:25 compute-0 nova_compute[251992]: 2025-12-06 06:58:25.903 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:58:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:26.127 158118 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 06 06:58:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:26.127 158118 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpqwa9q7t4/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 06 06:58:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:25.988 260599 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 06:58:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:25.993 260599 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 06:58:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:25.996 260599 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 06 06:58:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:25.996 260599 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260599
Dec 06 06:58:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:26.130 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[77c88106-f826-4171-a5f4-2968ee9c9b37]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.212 251996 DEBUG nova.compute.manager [req-59097cbc-eeca-4062-bc0f-a97e5a8d42c0 req-e0c9ca1f-63ce-4deb-8418-191dddd98dee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Received event network-vif-plugged-5a0e5db4-24c8-463d-b14f-0d5210748804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.212 251996 DEBUG oslo_concurrency.lockutils [req-59097cbc-eeca-4062-bc0f-a97e5a8d42c0 req-e0c9ca1f-63ce-4deb-8418-191dddd98dee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.213 251996 DEBUG oslo_concurrency.lockutils [req-59097cbc-eeca-4062-bc0f-a97e5a8d42c0 req-e0c9ca1f-63ce-4deb-8418-191dddd98dee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.213 251996 DEBUG oslo_concurrency.lockutils [req-59097cbc-eeca-4062-bc0f-a97e5a8d42c0 req-e0c9ca1f-63ce-4deb-8418-191dddd98dee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.213 251996 DEBUG nova.compute.manager [req-59097cbc-eeca-4062-bc0f-a97e5a8d42c0 req-e0c9ca1f-63ce-4deb-8418-191dddd98dee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Processing event network-vif-plugged-5a0e5db4-24c8-463d-b14f-0d5210748804 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.214 251996 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.224 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004306.2168798, 6064a21e-5bc7-495b-9059-a6dd7d3abee3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.231 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] VM Resumed (Lifecycle Event)
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.234 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.237 251996 INFO nova.virt.libvirt.driver [-] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Instance spawned successfully.
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.237 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.261 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.267 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.271 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.271 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.272 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.273 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.274 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.274 251996 DEBUG nova.virt.libvirt.driver [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.314 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.387 251996 INFO nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Took 38.11 seconds to spawn the instance on the hypervisor.
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.391 251996 DEBUG nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:58:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:26 compute-0 ceph-mon[74339]: pgmap v1115: 305 pgs: 305 active+clean; 306 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 778 KiB/s wr, 163 op/s
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.469 251996 INFO nova.compute.manager [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Took 39.66 seconds to build instance.
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.491 251996 DEBUG oslo_concurrency.lockutils [None req-7559733d-95e0-4b50-a7e0-b705b61a33ed e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 39.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:26.792 260599 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:26.793 260599 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:26.793 260599 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.829 251996 DEBUG oslo_concurrency.processutils [None req-9e9b7dab-19b9-414d-8baa-d9ef1dc06e0f a1560b2fdffc4a29a3d59e4df376ae8f dd6a67f3cdf54a75950c1701b7cd1432 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:26.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:26 compute-0 nova_compute[251992]: 2025-12-06 06:58:26.865 251996 DEBUG oslo_concurrency.processutils [None req-9e9b7dab-19b9-414d-8baa-d9ef1dc06e0f a1560b2fdffc4a29a3d59e4df376ae8f dd6a67f3cdf54a75950c1701b7cd1432 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 306 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 88 KiB/s wr, 153 op/s
Dec 06 06:58:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:26.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:27 compute-0 sshd-session[260523]: Connection reset by authenticating user root 91.202.233.33 port 60574 [preauth]
Dec 06 06:58:27 compute-0 nova_compute[251992]: 2025-12-06 06:58:27.220 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:27.470 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5791aa9b-ab32-41bc-a99d-4367860d7279]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:27.472 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfa805a2c-a1 in ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 06:58:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:27.474 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfa805a2c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 06:58:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:27.474 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d96f8e0e-b868-42e3-80a7-3ff10912b705]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:27.478 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[929be558-0d3b-422a-9c4e-cf161e8f3006]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:27.503 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[b0689c76-6277-447c-8516-6e5585805491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:27 compute-0 nova_compute[251992]: 2025-12-06 06:58:27.504 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:27.521 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0708c7-32d3-4a73-9d5c-9d858d3cd4fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:27.524 158118 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpwd939e8o/privsep.sock']
Dec 06 06:58:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:28.344 158118 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 06 06:58:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:28.346 158118 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwd939e8o/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 06 06:58:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:28.196 260617 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 06:58:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:28.201 260617 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 06:58:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:28.203 260617 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 06 06:58:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:28.203 260617 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260617
Dec 06 06:58:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:28.348 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[01bc8a46-005f-4bdd-bf88-08de06ba1802]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:28 compute-0 nova_compute[251992]: 2025-12-06 06:58:28.360 251996 DEBUG nova.compute.manager [req-1345552d-2049-4e3b-b5f1-c64c17b60ac1 req-6a20d7bc-ea73-4313-9130-740eaa638978 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Received event network-vif-plugged-5a0e5db4-24c8-463d-b14f-0d5210748804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 06:58:28 compute-0 nova_compute[251992]: 2025-12-06 06:58:28.360 251996 DEBUG oslo_concurrency.lockutils [req-1345552d-2049-4e3b-b5f1-c64c17b60ac1 req-6a20d7bc-ea73-4313-9130-740eaa638978 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:28 compute-0 nova_compute[251992]: 2025-12-06 06:58:28.361 251996 DEBUG oslo_concurrency.lockutils [req-1345552d-2049-4e3b-b5f1-c64c17b60ac1 req-6a20d7bc-ea73-4313-9130-740eaa638978 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:28 compute-0 nova_compute[251992]: 2025-12-06 06:58:28.361 251996 DEBUG oslo_concurrency.lockutils [req-1345552d-2049-4e3b-b5f1-c64c17b60ac1 req-6a20d7bc-ea73-4313-9130-740eaa638978 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:28 compute-0 nova_compute[251992]: 2025-12-06 06:58:28.361 251996 DEBUG nova.compute.manager [req-1345552d-2049-4e3b-b5f1-c64c17b60ac1 req-6a20d7bc-ea73-4313-9130-740eaa638978 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] No waiting events found dispatching network-vif-plugged-5a0e5db4-24c8-463d-b14f-0d5210748804 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 06:58:28 compute-0 nova_compute[251992]: 2025-12-06 06:58:28.361 251996 WARNING nova.compute.manager [req-1345552d-2049-4e3b-b5f1-c64c17b60ac1 req-6a20d7bc-ea73-4313-9130-740eaa638978 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Received unexpected event network-vif-plugged-5a0e5db4-24c8-463d-b14f-0d5210748804 for instance with vm_state active and task_state None.
Dec 06 06:58:28 compute-0 ceph-mon[74339]: pgmap v1116: 305 pgs: 305 active+clean; 306 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 88 KiB/s wr, 153 op/s
Dec 06 06:58:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:28.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:28.868 260617 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:28.868 260617 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:28.868 260617 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 306 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 38 KiB/s wr, 161 op/s
Dec 06 06:58:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:28.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:29 compute-0 sshd-session[260606]: Connection reset by authenticating user root 91.202.233.33 port 60578 [preauth]
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.540 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[45f2fefd-d03c-4e16-9aa7-3ed01f3ad4e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:29 compute-0 NetworkManager[48965]: <info>  [1765004309.5584] manager: (tapfa805a2c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.557 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[280bd7b0-6c42-4279-ba24-8bb5022d5d99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:29 compute-0 systemd-udevd[260630]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.587 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3d2a30-3e98-49a8-9b76-73e5d949115e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.592 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b231e5-71f1-48f7-b03d-47f246869aad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:29 compute-0 NetworkManager[48965]: <info>  [1765004309.6191] device (tapfa805a2c-a0): carrier: link connected
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.629 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3d03cd0d-484a-4e41-8f42-f14d94b32f55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.647 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb8fc17-8b3e-4fb7-9539-967a53bfbb7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa805a2c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:f0:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458220, 'reachable_time': 35897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260648, 'error': None, 'target': 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.664 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5f744b10-32aa-49d0-b92e-ce21455ab0cf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:f092'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 458220, 'tstamp': 458220}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260649, 'error': None, 'target': 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.678 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[98b72bfc-7ba1-41d5-8148-79f2a324643b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa805a2c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:f0:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458220, 'reachable_time': 35897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260650, 'error': None, 'target': 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.706 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[30fee33d-87e9-4096-9ce8-e13ced8659c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/643756439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1018108383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1229977498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/788469425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.760 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[34af92d6-15dd-498a-a1c4-ff1d8fab1ed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.763 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa805a2c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.763 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.764 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa805a2c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:29 compute-0 NetworkManager[48965]: <info>  [1765004309.7675] manager: (tapfa805a2c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec 06 06:58:29 compute-0 nova_compute[251992]: 2025-12-06 06:58:29.767 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:29 compute-0 kernel: tapfa805a2c-a0: entered promiscuous mode
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.773 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa805a2c-a0, col_values=(('external_ids', {'iface-id': '34b96bd0-4cf5-4098-a5e5-d0a4d39a953b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:29 compute-0 ovn_controller[147168]: 2025-12-06T06:58:29Z|00031|binding|INFO|Releasing lport 34b96bd0-4cf5-4098-a5e5-d0a4d39a953b from this chassis (sb_readonly=0)
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.777 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fa805a2c-a79c-458b-b658-8e0534714a02.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fa805a2c-a79c-458b-b658-8e0534714a02.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 06:58:29 compute-0 nova_compute[251992]: 2025-12-06 06:58:29.777 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.777 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[355da70e-1ba7-4380-8895-6f7c81b986b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.779 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: global
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-fa805a2c-a79c-458b-b658-8e0534714a02
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/fa805a2c-a79c-458b-b658-8e0534714a02.pid.haproxy
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID fa805a2c-a79c-458b-b658-8e0534714a02
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 06:58:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:29.780 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'env', 'PROCESS_TAG=haproxy-fa805a2c-a79c-458b-b658-8e0534714a02', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fa805a2c-a79c-458b-b658-8e0534714a02.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 06:58:29 compute-0 nova_compute[251992]: 2025-12-06 06:58:29.789 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:30 compute-0 podman[260683]: 2025-12-06 06:58:30.178991278 +0000 UTC m=+0.076454944 container create 4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 06:58:30 compute-0 podman[260683]: 2025-12-06 06:58:30.123688915 +0000 UTC m=+0.021152601 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 06:58:30 compute-0 systemd[1]: Started libpod-conmon-4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5.scope.
Dec 06 06:58:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:58:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0240cea06dbe33061c66dc4bb955112a5e31634be0cd8aabfbd21a345ff48e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 06:58:30 compute-0 podman[260683]: 2025-12-06 06:58:30.273213961 +0000 UTC m=+0.170677657 container init 4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 06:58:30 compute-0 podman[260683]: 2025-12-06 06:58:30.279195795 +0000 UTC m=+0.176659461 container start 4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 06:58:30 compute-0 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[260699]: [NOTICE]   (260703) : New worker (260705) forked
Dec 06 06:58:30 compute-0 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[260699]: [NOTICE]   (260703) : Loading success.
Dec 06 06:58:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:30.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 306 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 38 KiB/s wr, 123 op/s
Dec 06 06:58:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:30.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:31 compute-0 ceph-mon[74339]: pgmap v1117: 305 pgs: 305 active+clean; 306 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 38 KiB/s wr, 161 op/s
Dec 06 06:58:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:32 compute-0 nova_compute[251992]: 2025-12-06 06:58:32.222 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:32 compute-0 ceph-mon[74339]: pgmap v1118: 305 pgs: 305 active+clean; 306 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 38 KiB/s wr, 123 op/s
Dec 06 06:58:32 compute-0 nova_compute[251992]: 2025-12-06 06:58:32.506 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:32.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 306 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 50 KiB/s wr, 162 op/s
Dec 06 06:58:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:32.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:33 compute-0 sudo[260716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:33 compute-0 sudo[260716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:33 compute-0 sudo[260716]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:33 compute-0 sudo[260741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:33 compute-0 sudo[260741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:33 compute-0 sudo[260741]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:34.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 318 MiB data, 393 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 147 op/s
Dec 06 06:58:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:34.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:35 compute-0 ceph-mon[74339]: pgmap v1119: 305 pgs: 305 active+clean; 306 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 50 KiB/s wr, 162 op/s
Dec 06 06:58:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:36.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 326 MiB data, 403 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.9 MiB/s wr, 206 op/s
Dec 06 06:58:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:36.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:36 compute-0 ceph-mon[74339]: pgmap v1120: 305 pgs: 305 active+clean; 318 MiB data, 393 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 147 op/s
Dec 06 06:58:37 compute-0 nova_compute[251992]: 2025-12-06 06:58:37.224 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:37 compute-0 nova_compute[251992]: 2025-12-06 06:58:37.545 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:38.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 328 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.1 MiB/s wr, 242 op/s
Dec 06 06:58:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:38.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:39 compute-0 ceph-mon[74339]: pgmap v1121: 305 pgs: 305 active+clean; 326 MiB data, 403 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.9 MiB/s wr, 206 op/s
Dec 06 06:58:40 compute-0 ceph-mon[74339]: pgmap v1122: 305 pgs: 305 active+clean; 328 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.1 MiB/s wr, 242 op/s
Dec 06 06:58:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:40.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 328 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.1 MiB/s wr, 211 op/s
Dec 06 06:58:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:40.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2084576725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/467992479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:42 compute-0 nova_compute[251992]: 2025-12-06 06:58:42.229 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:42 compute-0 nova_compute[251992]: 2025-12-06 06:58:42.546 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:42 compute-0 ceph-mon[74339]: pgmap v1123: 305 pgs: 305 active+clean; 328 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.1 MiB/s wr, 211 op/s
Dec 06 06:58:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000023s ======
Dec 06 06:58:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:42.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Dec 06 06:58:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 316 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.1 MiB/s wr, 261 op/s
Dec 06 06:58:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:42.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:58:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:58:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:58:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:58:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:58:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:58:43 compute-0 podman[260773]: 2025-12-06 06:58:43.089212279 +0000 UTC m=+0.162061090 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 06:58:43 compute-0 ovn_controller[147168]: 2025-12-06T06:58:43Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:9a:94 10.1.0.19
Dec 06 06:58:43 compute-0 ovn_controller[147168]: 2025-12-06T06:58:43Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:9a:94 10.1.0.19
Dec 06 06:58:44 compute-0 nova_compute[251992]: 2025-12-06 06:58:44.026 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:44 compute-0 nova_compute[251992]: 2025-12-06 06:58:44.027 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:44 compute-0 nova_compute[251992]: 2025-12-06 06:58:44.047 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:44 compute-0 nova_compute[251992]: 2025-12-06 06:58:44.047 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:58:44 compute-0 nova_compute[251992]: 2025-12-06 06:58:44.047 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:58:44 compute-0 nova_compute[251992]: 2025-12-06 06:58:44.629 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-6064a21e-5bc7-495b-9059-a6dd7d3abee3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:58:44 compute-0 nova_compute[251992]: 2025-12-06 06:58:44.630 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-6064a21e-5bc7-495b-9059-a6dd7d3abee3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:58:44 compute-0 nova_compute[251992]: 2025-12-06 06:58:44.630 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 06:58:44 compute-0 nova_compute[251992]: 2025-12-06 06:58:44.630 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6064a21e-5bc7-495b-9059-a6dd7d3abee3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1246655193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:44.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 310 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.2 MiB/s wr, 257 op/s
Dec 06 06:58:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:44.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:58:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1169068870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:46 compute-0 ceph-mon[74339]: pgmap v1124: 305 pgs: 305 active+clean; 316 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.1 MiB/s wr, 261 op/s
Dec 06 06:58:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1330517347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1169068870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:46.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 322 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.2 MiB/s wr, 242 op/s
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.904 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Updating instance_info_cache with network_info: [{"id": "5a0e5db4-24c8-463d-b14f-0d5210748804", "address": "fa:16:3e:45:9a:94", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::296", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a0e5db4-24", "ovs_interfaceid": "5a0e5db4-24c8-463d-b14f-0d5210748804", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.921 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-6064a21e-5bc7-495b-9059-a6dd7d3abee3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.921 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.921 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.922 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.922 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.922 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.923 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.923 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.923 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.923 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:58:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000024s ======
Dec 06 06:58:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:46.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.955 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.955 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.956 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.956 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:58:46 compute-0 nova_compute[251992]: 2025-12-06 06:58:46.957 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.232 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:58:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913165207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.444 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.518 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.519 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.548 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.677 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.678 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4697MB free_disk=20.832809448242188GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.678 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.678 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.892 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 6064a21e-5bc7-495b-9059-a6dd7d3abee3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.892 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.892 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:58:47 compute-0 nova_compute[251992]: 2025-12-06 06:58:47.940 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:58:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3140234720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:48 compute-0 nova_compute[251992]: 2025-12-06 06:58:48.411 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:48 compute-0 nova_compute[251992]: 2025-12-06 06:58:48.415 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:58:48 compute-0 nova_compute[251992]: 2025-12-06 06:58:48.438 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:58:48 compute-0 nova_compute[251992]: 2025-12-06 06:58:48.462 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:58:48 compute-0 nova_compute[251992]: 2025-12-06 06:58:48.463 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:48 compute-0 ceph-mon[74339]: pgmap v1125: 305 pgs: 305 active+clean; 310 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.2 MiB/s wr, 257 op/s
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.555093) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328555145, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1803, "num_deletes": 258, "total_data_size": 3099119, "memory_usage": 3134296, "flush_reason": "Manual Compaction"}
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328574266, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 3012158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20876, "largest_seqno": 22678, "table_properties": {"data_size": 3003971, "index_size": 4937, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17186, "raw_average_key_size": 19, "raw_value_size": 2987205, "raw_average_value_size": 3441, "num_data_blocks": 220, "num_entries": 868, "num_filter_entries": 868, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004171, "oldest_key_time": 1765004171, "file_creation_time": 1765004328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 19262 microseconds, and 7526 cpu microseconds.
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.574375) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 3012158 bytes OK
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.574473) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.576373) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.576388) EVENT_LOG_v1 {"time_micros": 1765004328576384, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.576404) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3091412, prev total WAL file size 3091412, number of live WAL files 2.
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.577504) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323533' seq:72057594037927935, type:22 .. '6C6F676D00353037' seq:0, type:0; will stop at (end)
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2941KB)], [47(7480KB)]
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328577573, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 10672117, "oldest_snapshot_seqno": -1}
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5189 keys, 10456947 bytes, temperature: kUnknown
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328649307, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 10456947, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10420423, "index_size": 22462, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 131303, "raw_average_key_size": 25, "raw_value_size": 10324643, "raw_average_value_size": 1989, "num_data_blocks": 924, "num_entries": 5189, "num_filter_entries": 5189, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765004328, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.649566) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 10456947 bytes
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.652077) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.6 rd, 145.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 7.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(7.0) write-amplify(3.5) OK, records in: 5724, records dropped: 535 output_compression: NoCompression
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.652127) EVENT_LOG_v1 {"time_micros": 1765004328652119, "job": 24, "event": "compaction_finished", "compaction_time_micros": 71808, "compaction_time_cpu_micros": 30848, "output_level": 6, "num_output_files": 1, "total_output_size": 10456947, "num_input_records": 5724, "num_output_records": 5189, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328652656, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004328653918, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.577397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.654006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.654012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.654014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.654015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:48.654017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:48.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 326 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 173 op/s
Dec 06 06:58:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:48.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:49 compute-0 podman[260847]: 2025-12-06 06:58:49.4978444 +0000 UTC m=+0.054902577 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 06:58:49 compute-0 podman[260848]: 2025-12-06 06:58:49.504661903 +0000 UTC m=+0.060884008 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 06:58:49 compute-0 ceph-mon[74339]: pgmap v1126: 305 pgs: 305 active+clean; 322 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.2 MiB/s wr, 242 op/s
Dec 06 06:58:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/913165207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3140234720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:50 compute-0 ceph-mon[74339]: pgmap v1127: 305 pgs: 305 active+clean; 326 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 173 op/s
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.819886) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330819933, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 279, "num_deletes": 251, "total_data_size": 49151, "memory_usage": 55840, "flush_reason": "Manual Compaction"}
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330822089, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 48844, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22679, "largest_seqno": 22957, "table_properties": {"data_size": 46979, "index_size": 94, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4938, "raw_average_key_size": 18, "raw_value_size": 43312, "raw_average_value_size": 161, "num_data_blocks": 4, "num_entries": 269, "num_filter_entries": 269, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004329, "oldest_key_time": 1765004329, "file_creation_time": 1765004330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 2243 microseconds, and 711 cpu microseconds.
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.822149) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 48844 bytes OK
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.822163) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.823362) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.823379) EVENT_LOG_v1 {"time_micros": 1765004330823374, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.823392) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 47043, prev total WAL file size 47043, number of live WAL files 2.
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.823726) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(47KB)], [50(10211KB)]
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330823809, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10505791, "oldest_snapshot_seqno": -1}
Dec 06 06:58:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:50.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4949 keys, 8462018 bytes, temperature: kUnknown
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330884699, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8462018, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8428829, "index_size": 19703, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 127022, "raw_average_key_size": 25, "raw_value_size": 8338955, "raw_average_value_size": 1684, "num_data_blocks": 799, "num_entries": 4949, "num_filter_entries": 4949, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765004330, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.884897) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8462018 bytes
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.886305) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.4 rd, 138.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.0 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(388.3) write-amplify(173.2) OK, records in: 5458, records dropped: 509 output_compression: NoCompression
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.886321) EVENT_LOG_v1 {"time_micros": 1765004330886313, "job": 26, "event": "compaction_finished", "compaction_time_micros": 60954, "compaction_time_cpu_micros": 18549, "output_level": 6, "num_output_files": 1, "total_output_size": 8462018, "num_input_records": 5458, "num_output_records": 4949, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330886440, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004330887859, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.823676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.887910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.887914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.887916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.887917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-06:58:50.887918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 06:58:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 326 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 744 KiB/s rd, 2.2 MiB/s wr, 127 op/s
Dec 06 06:58:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:50.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:51 compute-0 ceph-mon[74339]: pgmap v1128: 305 pgs: 305 active+clean; 326 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 744 KiB/s rd, 2.2 MiB/s wr, 127 op/s
Dec 06 06:58:52 compute-0 nova_compute[251992]: 2025-12-06 06:58:52.234 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:52 compute-0 nova_compute[251992]: 2025-12-06 06:58:52.550 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:52.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 302 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 754 KiB/s rd, 2.2 MiB/s wr, 141 op/s
Dec 06 06:58:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:52.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/500948300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.299 251996 DEBUG oslo_concurrency.lockutils [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.300 251996 DEBUG oslo_concurrency.lockutils [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.300 251996 DEBUG oslo_concurrency.lockutils [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.300 251996 DEBUG oslo_concurrency.lockutils [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.300 251996 DEBUG oslo_concurrency.lockutils [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.302 251996 INFO nova.compute.manager [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Terminating instance
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.303 251996 DEBUG nova.compute.manager [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 06:58:53 compute-0 kernel: tap5a0e5db4-24 (unregistering): left promiscuous mode
Dec 06 06:58:53 compute-0 NetworkManager[48965]: <info>  [1765004333.3462] device (tap5a0e5db4-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 06:58:53 compute-0 ovn_controller[147168]: 2025-12-06T06:58:53Z|00032|binding|INFO|Releasing lport 5a0e5db4-24c8-463d-b14f-0d5210748804 from this chassis (sb_readonly=0)
Dec 06 06:58:53 compute-0 ovn_controller[147168]: 2025-12-06T06:58:53Z|00033|binding|INFO|Setting lport 5a0e5db4-24c8-463d-b14f-0d5210748804 down in Southbound
Dec 06 06:58:53 compute-0 ovn_controller[147168]: 2025-12-06T06:58:53Z|00034|binding|INFO|Removing iface tap5a0e5db4-24 ovn-installed in OVS
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.352 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:53 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec 06 06:58:53 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000005.scope: Consumed 14.901s CPU time.
Dec 06 06:58:53 compute-0 systemd-machined[212986]: Machine qemu-3-instance-00000005 terminated.
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.437 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:9a:94 10.1.0.19 fdfe:381f:8400::296'], port_security=['fa:16:3e:45:9a:94 10.1.0.19 fdfe:381f:8400::296'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.19/26 fdfe:381f:8400::296/64', 'neutron:device_id': '6064a21e-5bc7-495b-9059-a6dd7d3abee3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa805a2c-a79c-458b-b658-8e0534714a02', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '066c314d67e347f6a49e8e3e27998441', 'neutron:revision_number': '4', 'neutron:security_group_ids': '460e28b2-b45f-4429-a2b9-8f57e45c4e5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a47e992-7383-43fa-bf34-745cbd8b74f1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=5a0e5db4-24c8-463d-b14f-0d5210748804) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.438 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 5a0e5db4-24c8-463d-b14f-0d5210748804 in datapath fa805a2c-a79c-458b-b658-8e0534714a02 unbound from our chassis
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.439 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa805a2c-a79c-458b-b658-8e0534714a02, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.441 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fe893101-a33c-4796-85c5-966033fb7995]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.441 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 namespace which is not needed anymore
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.517 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.522 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.529 251996 INFO nova.virt.libvirt.driver [-] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Instance destroyed successfully.
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.529 251996 DEBUG nova.objects.instance [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lazy-loading 'resources' on Instance uuid 6064a21e-5bc7-495b-9059-a6dd7d3abee3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.554 251996 DEBUG nova.virt.libvirt.vif [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T06:57:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-2024863607-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2024863607-3',id=5,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2025-12-06T06:58:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='066c314d67e347f6a49e8e3e27998441',ramdisk_id='',reservation_id='r-bb9k3jjn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-1572395875',owner_user_name='tempest-AutoAllocateNetworkTest-1572395875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T06:58:26Z,user_data=None,user_id='e5122185c6194067bdb22d6ba8205dca',uuid=6064a21e-5bc7-495b-9059-a6dd7d3abee3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a0e5db4-24c8-463d-b14f-0d5210748804", "address": "fa:16:3e:45:9a:94", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::296", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a0e5db4-24", "ovs_interfaceid": "5a0e5db4-24c8-463d-b14f-0d5210748804", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.555 251996 DEBUG nova.network.os_vif_util [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converting VIF {"id": "5a0e5db4-24c8-463d-b14f-0d5210748804", "address": "fa:16:3e:45:9a:94", "network": {"id": "fa805a2c-a79c-458b-b658-8e0534714a02", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::296", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "066c314d67e347f6a49e8e3e27998441", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a0e5db4-24", "ovs_interfaceid": "5a0e5db4-24c8-463d-b14f-0d5210748804", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 06:58:53 compute-0 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[260699]: [NOTICE]   (260703) : haproxy version is 2.8.14-c23fe91
Dec 06 06:58:53 compute-0 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[260699]: [NOTICE]   (260703) : path to executable is /usr/sbin/haproxy
Dec 06 06:58:53 compute-0 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[260699]: [WARNING]  (260703) : Exiting Master process...
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.555 251996 DEBUG nova.network.os_vif_util [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:9a:94,bridge_name='br-int',has_traffic_filtering=True,id=5a0e5db4-24c8-463d-b14f-0d5210748804,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a0e5db4-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.556 251996 DEBUG os_vif [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:9a:94,bridge_name='br-int',has_traffic_filtering=True,id=5a0e5db4-24c8-463d-b14f-0d5210748804,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a0e5db4-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 06:58:53 compute-0 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[260699]: [ALERT]    (260703) : Current worker (260705) exited with code 143 (Terminated)
Dec 06 06:58:53 compute-0 neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02[260699]: [WARNING]  (260703) : All workers exited. Exiting... (0)
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.557 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.558 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a0e5db4-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:53 compute-0 systemd[1]: libpod-4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5.scope: Deactivated successfully.
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.559 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.560 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.563 251996 INFO os_vif [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:9a:94,bridge_name='br-int',has_traffic_filtering=True,id=5a0e5db4-24c8-463d-b14f-0d5210748804,network=Network(fa805a2c-a79c-458b-b658-8e0534714a02),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a0e5db4-24')
Dec 06 06:58:53 compute-0 podman[260908]: 2025-12-06 06:58:53.566294366 +0000 UTC m=+0.043623165 container died 4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 06:58:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c0240cea06dbe33061c66dc4bb955112a5e31634be0cd8aabfbd21a345ff48e-merged.mount: Deactivated successfully.
Dec 06 06:58:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5-userdata-shm.mount: Deactivated successfully.
Dec 06 06:58:53 compute-0 podman[260908]: 2025-12-06 06:58:53.638194838 +0000 UTC m=+0.115523637 container cleanup 4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 06:58:53 compute-0 systemd[1]: libpod-conmon-4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5.scope: Deactivated successfully.
Dec 06 06:58:53 compute-0 podman[260968]: 2025-12-06 06:58:53.699530698 +0000 UTC m=+0.040038367 container remove 4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.704 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d32ab345-1153-4ab8-b3b7-6bf7ee3cfe71]: (4, ('Sat Dec  6 06:58:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 (4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5)\n4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5\nSat Dec  6 06:58:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 (4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5)\n4179e6b44303de5460ad6b974f74590af0e69122a170cbeed0bb65c19a2edbb5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.706 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[07515891-670e-4c1b-8dc4-264186003868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.708 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa805a2c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.709 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:53 compute-0 kernel: tapfa805a2c-a0: left promiscuous mode
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.723 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.726 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[34f89288-8f77-4cae-89e8-508862e088bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.745 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[40eb372d-ce47-4e82-9bd4-78ee7b097920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.746 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2f41bb3f-dfbb-49eb-bf58-36fe05eb13b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.758 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d21259-6764-4ba0-bedf-17596698c7ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 458212, 'reachable_time': 29627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260983, 'error': None, 'target': 'ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:53 compute-0 systemd[1]: run-netns-ovnmeta\x2dfa805a2c\x2da79c\x2d458b\x2db658\x2d8e0534714a02.mount: Deactivated successfully.
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.767 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fa805a2c-a79c-458b-b658-8e0534714a02 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 06:58:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:58:53.768 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[824b2b8e-70e9-4ccd-92b5-10831e7266c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 06:58:53 compute-0 sudo[260985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:53 compute-0 sudo[260985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:53 compute-0 sudo[260985]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:53 compute-0 sudo[261010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:53 compute-0 sudo[261010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:53 compute-0 sudo[261010]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.992 251996 DEBUG nova.compute.manager [req-939ea7f6-1363-4199-aef9-4e00e0150db8 req-71ccd31f-1129-47c0-ba8f-214b6f41545c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Received event network-vif-unplugged-5a0e5db4-24c8-463d-b14f-0d5210748804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.993 251996 DEBUG oslo_concurrency.lockutils [req-939ea7f6-1363-4199-aef9-4e00e0150db8 req-71ccd31f-1129-47c0-ba8f-214b6f41545c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.994 251996 DEBUG oslo_concurrency.lockutils [req-939ea7f6-1363-4199-aef9-4e00e0150db8 req-71ccd31f-1129-47c0-ba8f-214b6f41545c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.994 251996 DEBUG oslo_concurrency.lockutils [req-939ea7f6-1363-4199-aef9-4e00e0150db8 req-71ccd31f-1129-47c0-ba8f-214b6f41545c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.994 251996 DEBUG nova.compute.manager [req-939ea7f6-1363-4199-aef9-4e00e0150db8 req-71ccd31f-1129-47c0-ba8f-214b6f41545c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] No waiting events found dispatching network-vif-unplugged-5a0e5db4-24c8-463d-b14f-0d5210748804 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 06:58:53 compute-0 nova_compute[251992]: 2025-12-06 06:58:53.994 251996 DEBUG nova.compute.manager [req-939ea7f6-1363-4199-aef9-4e00e0150db8 req-71ccd31f-1129-47c0-ba8f-214b6f41545c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Received event network-vif-unplugged-5a0e5db4-24c8-463d-b14f-0d5210748804 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 06:58:54 compute-0 ceph-mon[74339]: pgmap v1129: 305 pgs: 305 active+clean; 302 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 754 KiB/s rd, 2.2 MiB/s wr, 141 op/s
Dec 06 06:58:54 compute-0 nova_compute[251992]: 2025-12-06 06:58:54.446 251996 INFO nova.virt.libvirt.driver [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Deleting instance files /var/lib/nova/instances/6064a21e-5bc7-495b-9059-a6dd7d3abee3_del
Dec 06 06:58:54 compute-0 nova_compute[251992]: 2025-12-06 06:58:54.447 251996 INFO nova.virt.libvirt.driver [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Deletion of /var/lib/nova/instances/6064a21e-5bc7-495b-9059-a6dd7d3abee3_del complete
Dec 06 06:58:54 compute-0 nova_compute[251992]: 2025-12-06 06:58:54.511 251996 INFO nova.compute.manager [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Took 1.21 seconds to destroy the instance on the hypervisor.
Dec 06 06:58:54 compute-0 nova_compute[251992]: 2025-12-06 06:58:54.512 251996 DEBUG oslo.service.loopingcall [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 06:58:54 compute-0 nova_compute[251992]: 2025-12-06 06:58:54.512 251996 DEBUG nova.compute.manager [-] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 06:58:54 compute-0 nova_compute[251992]: 2025-12-06 06:58:54.513 251996 DEBUG nova.network.neutron [-] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 06:58:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:54.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 279 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 404 KiB/s rd, 1.2 MiB/s wr, 102 op/s
Dec 06 06:58:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:54.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:55 compute-0 sudo[261037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:55 compute-0 sudo[261037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-0 sudo[261037]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-0 sudo[261062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:58:55 compute-0 sudo[261062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-0 sudo[261062]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-0 sudo[261087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:55 compute-0 sudo[261087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-0 sudo[261087]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-0 sudo[261112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 06:58:55 compute-0 sudo[261112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:58:55 compute-0 sudo[261112]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:58:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:55 compute-0 nova_compute[251992]: 2025-12-06 06:58:55.670 251996 DEBUG nova.network.neutron [-] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:58:55 compute-0 nova_compute[251992]: 2025-12-06 06:58:55.690 251996 INFO nova.compute.manager [-] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Took 1.18 seconds to deallocate network for instance.
Dec 06 06:58:55 compute-0 nova_compute[251992]: 2025-12-06 06:58:55.737 251996 DEBUG oslo_concurrency.lockutils [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:55 compute-0 nova_compute[251992]: 2025-12-06 06:58:55.738 251996 DEBUG oslo_concurrency.lockutils [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:55 compute-0 nova_compute[251992]: 2025-12-06 06:58:55.793 251996 DEBUG oslo_concurrency.processutils [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:55 compute-0 sudo[261168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:55 compute-0 sudo[261168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-0 sudo[261168]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-0 sudo[261194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:58:55 compute-0 sudo[261194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-0 sudo[261194]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-0 nova_compute[251992]: 2025-12-06 06:58:55.914 251996 DEBUG nova.compute.manager [req-ac247d26-657d-4e0f-8b45-f3b0b1c591eb req-b055f647-50a4-4055-a963-3703299c103f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Received event network-vif-deleted-5a0e5db4-24c8-463d-b14f-0d5210748804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 06:58:55 compute-0 sudo[261219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:55 compute-0 sudo[261219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:55 compute-0 sudo[261219]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:55 compute-0 nova_compute[251992]: 2025-12-06 06:58:55.992 251996 DEBUG oslo_concurrency.lockutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Acquiring lock "2b0cc34e-c661-4ac2-8f7c-81e282ad2524" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:55 compute-0 nova_compute[251992]: 2025-12-06 06:58:55.993 251996 DEBUG oslo_concurrency.lockutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "2b0cc34e-c661-4ac2-8f7c-81e282ad2524" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.014 251996 DEBUG nova.compute.manager [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 06:58:56 compute-0 sudo[261263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty --filter-for-batch
Dec 06 06:58:56 compute-0 sudo[261263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.119 251996 DEBUG nova.compute.manager [req-beedc9bf-c0b3-4531-91b4-c01d04eae78e req-cd694d09-0b4b-412c-bd89-49d26f5c8330 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Received event network-vif-plugged-5a0e5db4-24c8-463d-b14f-0d5210748804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.120 251996 DEBUG oslo_concurrency.lockutils [req-beedc9bf-c0b3-4531-91b4-c01d04eae78e req-cd694d09-0b4b-412c-bd89-49d26f5c8330 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.120 251996 DEBUG oslo_concurrency.lockutils [req-beedc9bf-c0b3-4531-91b4-c01d04eae78e req-cd694d09-0b4b-412c-bd89-49d26f5c8330 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.120 251996 DEBUG oslo_concurrency.lockutils [req-beedc9bf-c0b3-4531-91b4-c01d04eae78e req-cd694d09-0b4b-412c-bd89-49d26f5c8330 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.121 251996 DEBUG nova.compute.manager [req-beedc9bf-c0b3-4531-91b4-c01d04eae78e req-cd694d09-0b4b-412c-bd89-49d26f5c8330 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] No waiting events found dispatching network-vif-plugged-5a0e5db4-24c8-463d-b14f-0d5210748804 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.121 251996 WARNING nova.compute.manager [req-beedc9bf-c0b3-4531-91b4-c01d04eae78e req-cd694d09-0b4b-412c-bd89-49d26f5c8330 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Received unexpected event network-vif-plugged-5a0e5db4-24c8-463d-b14f-0d5210748804 for instance with vm_state deleted and task_state None.
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.122 251996 DEBUG oslo_concurrency.lockutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 06:58:56 compute-0 podman[261329]: 2025-12-06 06:58:56.307595496 +0000 UTC m=+0.020908683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:58:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:58:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2044821323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.504 251996 DEBUG oslo_concurrency.processutils [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.712s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.510 251996 DEBUG nova.compute.provider_tree [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.525 251996 DEBUG nova.scheduler.client.report [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.568 251996 DEBUG oslo_concurrency.lockutils [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.570 251996 DEBUG oslo_concurrency.lockutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.576 251996 DEBUG nova.virt.hardware [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.576 251996 INFO nova.compute.claims [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Claim successful on node compute-0.ctlplane.example.com
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.607 251996 INFO nova.scheduler.client.report [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Deleted allocations for instance 6064a21e-5bc7-495b-9059-a6dd7d3abee3
Dec 06 06:58:56 compute-0 podman[261329]: 2025-12-06 06:58:56.623917052 +0000 UTC m=+0.337230219 container create 700086db5c258aaf7abb49900c8b865f63142839ddd0711623f9060607e2c24b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.688 251996 DEBUG oslo_concurrency.lockutils [None req-dff30ad3-b369-4a28-b06d-46888795aa3d e5122185c6194067bdb22d6ba8205dca 066c314d67e347f6a49e8e3e27998441 - - default default] Lock "6064a21e-5bc7-495b-9059-a6dd7d3abee3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:56 compute-0 nova_compute[251992]: 2025-12-06 06:58:56.701 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:56 compute-0 systemd[1]: Started libpod-conmon-700086db5c258aaf7abb49900c8b865f63142839ddd0711623f9060607e2c24b.scope.
Dec 06 06:58:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:58:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:56.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 237 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 277 KiB/s rd, 123 KiB/s wr, 92 op/s
Dec 06 06:58:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:58:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:56.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:58:56 compute-0 podman[261329]: 2025-12-06 06:58:56.972644819 +0000 UTC m=+0.685958016 container init 700086db5c258aaf7abb49900c8b865f63142839ddd0711623f9060607e2c24b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bose, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 06:58:56 compute-0 podman[261329]: 2025-12-06 06:58:56.979211785 +0000 UTC m=+0.692524952 container start 700086db5c258aaf7abb49900c8b865f63142839ddd0711623f9060607e2c24b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bose, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:58:56 compute-0 modest_bose[261349]: 167 167
Dec 06 06:58:56 compute-0 systemd[1]: libpod-700086db5c258aaf7abb49900c8b865f63142839ddd0711623f9060607e2c24b.scope: Deactivated successfully.
Dec 06 06:58:57 compute-0 podman[261329]: 2025-12-06 06:58:57.005210764 +0000 UTC m=+0.718523921 container attach 700086db5c258aaf7abb49900c8b865f63142839ddd0711623f9060607e2c24b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:58:57 compute-0 podman[261329]: 2025-12-06 06:58:57.006192821 +0000 UTC m=+0.719506008 container died 700086db5c258aaf7abb49900c8b865f63142839ddd0711623f9060607e2c24b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 06:58:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:58:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155014578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.191 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.197 251996 DEBUG nova.compute.provider_tree [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.211 251996 DEBUG nova.scheduler.client.report [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.233 251996 DEBUG oslo_concurrency.lockutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.234 251996 DEBUG nova.compute.manager [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.237 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-288dc17cb4a88be395675ca8d6df2e66f9419638c99126879c42b48ef5098285-merged.mount: Deactivated successfully.
Dec 06 06:58:57 compute-0 podman[261329]: 2025-12-06 06:58:57.30483112 +0000 UTC m=+1.018144287 container remove 700086db5c258aaf7abb49900c8b865f63142839ddd0711623f9060607e2c24b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_bose, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.324 251996 DEBUG nova.compute.manager [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 06 06:58:57 compute-0 systemd[1]: libpod-conmon-700086db5c258aaf7abb49900c8b865f63142839ddd0711623f9060607e2c24b.scope: Deactivated successfully.
Dec 06 06:58:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 06:58:57 compute-0 ceph-mon[74339]: pgmap v1130: 305 pgs: 305 active+clean; 279 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 404 KiB/s rd, 1.2 MiB/s wr, 102 op/s
Dec 06 06:58:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2044821323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:57 compute-0 podman[261395]: 2025-12-06 06:58:57.474411771 +0000 UTC m=+0.044675893 container create 5abadda938eff1d49e4ee717e45777e966d60b4d8748001eee1bc4698b0805f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mayer, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:58:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:57 compute-0 systemd[1]: Started libpod-conmon-5abadda938eff1d49e4ee717e45777e966d60b4d8748001eee1bc4698b0805f6.scope.
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.539 251996 INFO nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 06:58:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:58:57 compute-0 podman[261395]: 2025-12-06 06:58:57.456122898 +0000 UTC m=+0.026387050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5037af94208cdd4e569daffc39fed7187f2916d12714f516571bd4c2ca7a30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5037af94208cdd4e569daffc39fed7187f2916d12714f516571bd4c2ca7a30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5037af94208cdd4e569daffc39fed7187f2916d12714f516571bd4c2ca7a30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e5037af94208cdd4e569daffc39fed7187f2916d12714f516571bd4c2ca7a30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:58:57 compute-0 podman[261395]: 2025-12-06 06:58:57.561896243 +0000 UTC m=+0.132160385 container init 5abadda938eff1d49e4ee717e45777e966d60b4d8748001eee1bc4698b0805f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mayer, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 06:58:57 compute-0 podman[261395]: 2025-12-06 06:58:57.569565609 +0000 UTC m=+0.139829731 container start 5abadda938eff1d49e4ee717e45777e966d60b4d8748001eee1bc4698b0805f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:58:57 compute-0 podman[261395]: 2025-12-06 06:58:57.572838417 +0000 UTC m=+0.143102559 container attach 5abadda938eff1d49e4ee717e45777e966d60b4d8748001eee1bc4698b0805f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mayer, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.622 251996 DEBUG nova.compute.manager [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.906 251996 DEBUG nova.compute.manager [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.907 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.908 251996 INFO nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Creating image(s)
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.933 251996 DEBUG nova.storage.rbd_utils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] rbd image 2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.957 251996 DEBUG nova.storage.rbd_utils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] rbd image 2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.981 251996 DEBUG nova.storage.rbd_utils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] rbd image 2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:57 compute-0 nova_compute[251992]: 2025-12-06 06:58:57.985 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:58 compute-0 nova_compute[251992]: 2025-12-06 06:58:58.044 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:58 compute-0 nova_compute[251992]: 2025-12-06 06:58:58.046 251996 DEBUG oslo_concurrency.lockutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:58 compute-0 nova_compute[251992]: 2025-12-06 06:58:58.048 251996 DEBUG oslo_concurrency.lockutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:58 compute-0 nova_compute[251992]: 2025-12-06 06:58:58.049 251996 DEBUG oslo_concurrency.lockutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:58 compute-0 nova_compute[251992]: 2025-12-06 06:58:58.086 251996 DEBUG nova.storage.rbd_utils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] rbd image 2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:58:58 compute-0 nova_compute[251992]: 2025-12-06 06:58:58.089 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:58:58 compute-0 nova_compute[251992]: 2025-12-06 06:58:58.604 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:58:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 06:58:58 compute-0 sweet_mayer[261411]: [
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:     {
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:         "available": false,
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:         "ceph_device": false,
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:         "lsm_data": {},
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:         "lvs": [],
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:         "path": "/dev/sr0",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:         "rejected_reasons": [
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "Insufficient space (<5GB)",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "Has a FileSystem"
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:         ],
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:         "sys_api": {
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "actuators": null,
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "device_nodes": "sr0",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "devname": "sr0",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "human_readable_size": "482.00 KB",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "id_bus": "ata",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "model": "QEMU DVD-ROM",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "nr_requests": "2",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "parent": "/dev/sr0",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "partitions": {},
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "path": "/dev/sr0",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "removable": "1",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "rev": "2.5+",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "ro": "0",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "rotational": "1",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "sas_address": "",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "sas_device_handle": "",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "scheduler_mode": "mq-deadline",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "sectors": 0,
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "sectorsize": "2048",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "size": 493568.0,
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "support_discard": "2048",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "type": "disk",
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:             "vendor": "QEMU"
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:         }
Dec 06 06:58:58 compute-0 sweet_mayer[261411]:     }
Dec 06 06:58:58 compute-0 sweet_mayer[261411]: ]
Dec 06 06:58:58 compute-0 systemd[1]: libpod-5abadda938eff1d49e4ee717e45777e966d60b4d8748001eee1bc4698b0805f6.scope: Deactivated successfully.
Dec 06 06:58:58 compute-0 systemd[1]: libpod-5abadda938eff1d49e4ee717e45777e966d60b4d8748001eee1bc4698b0805f6.scope: Consumed 1.184s CPU time.
Dec 06 06:58:58 compute-0 podman[261395]: 2025-12-06 06:58:58.73811999 +0000 UTC m=+1.308384122 container died 5abadda938eff1d49e4ee717e45777e966d60b4d8748001eee1bc4698b0805f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 06:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e5037af94208cdd4e569daffc39fed7187f2916d12714f516571bd4c2ca7a30-merged.mount: Deactivated successfully.
Dec 06 06:58:58 compute-0 podman[261395]: 2025-12-06 06:58:58.789255325 +0000 UTC m=+1.359519447 container remove 5abadda938eff1d49e4ee717e45777e966d60b4d8748001eee1bc4698b0805f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mayer, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 06:58:58 compute-0 systemd[1]: libpod-conmon-5abadda938eff1d49e4ee717e45777e966d60b4d8748001eee1bc4698b0805f6.scope: Deactivated successfully.
Dec 06 06:58:58 compute-0 sudo[261263]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:58:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:58:58.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:58 compute-0 ceph-mon[74339]: pgmap v1131: 305 pgs: 305 active+clean; 237 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 277 KiB/s rd, 123 KiB/s wr, 92 op/s
Dec 06 06:58:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1155014578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:58:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 97 KiB/s wr, 63 op/s
Dec 06 06:58:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:58:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:58:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:58:58.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:58:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 06:58:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:58:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:58:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:58:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 06:58:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:58:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 06:58:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 54c85328-16c8-4e10-a1a1-16b7f95bda30 does not exist
Dec 06 06:58:59 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f425e860-a122-476c-8516-06df8d19043a does not exist
Dec 06 06:58:59 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b9d0d7bb-f6a0-490c-b833-d57a810ad569 does not exist
Dec 06 06:58:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 06:58:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:58:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 06:58:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:58:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 06:58:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.641 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:58:59 compute-0 sudo[262826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:59 compute-0 sudo[262826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:59 compute-0 sudo[262826]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:59 compute-0 sudo[262865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:58:59 compute-0 sudo[262865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:59 compute-0 sudo[262865]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.748 251996 DEBUG nova.storage.rbd_utils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] resizing rbd image 2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 06:58:59 compute-0 sudo[262909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:58:59 compute-0 sudo[262909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:59 compute-0 sudo[262909]: pam_unix(sudo:session): session closed for user root
Dec 06 06:58:59 compute-0 sudo[262955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 06:58:59 compute-0 sudo[262955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.874 251996 DEBUG nova.objects.instance [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lazy-loading 'migration_context' on Instance uuid 2b0cc34e-c661-4ac2-8f7c-81e282ad2524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.895 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.896 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Ensure instance console log exists: /var/lib/nova/instances/2b0cc34e-c661-4ac2-8f7c-81e282ad2524/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.897 251996 DEBUG oslo_concurrency.lockutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.897 251996 DEBUG oslo_concurrency.lockutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.897 251996 DEBUG oslo_concurrency.lockutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.899 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 06:58:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/400368129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:58:59 compute-0 ceph-mon[74339]: pgmap v1132: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 97 KiB/s wr, 63 op/s
Dec 06 06:58:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:58:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 06:58:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:58:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 06:58:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 06:58:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.905 251996 WARNING nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.911 251996 DEBUG nova.virt.libvirt.host [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.912 251996 DEBUG nova.virt.libvirt.host [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.916 251996 DEBUG nova.virt.libvirt.host [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.917 251996 DEBUG nova.virt.libvirt.host [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.918 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.919 251996 DEBUG nova.virt.hardware [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.919 251996 DEBUG nova.virt.hardware [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.919 251996 DEBUG nova.virt.hardware [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.920 251996 DEBUG nova.virt.hardware [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.920 251996 DEBUG nova.virt.hardware [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.920 251996 DEBUG nova.virt.hardware [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.920 251996 DEBUG nova.virt.hardware [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.921 251996 DEBUG nova.virt.hardware [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.921 251996 DEBUG nova.virt.hardware [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.921 251996 DEBUG nova.virt.hardware [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.922 251996 DEBUG nova.virt.hardware [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 06:58:59 compute-0 nova_compute[251992]: 2025-12-06 06:58:59.925 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:00 compute-0 podman[263057]: 2025-12-06 06:59:00.12490907 +0000 UTC m=+0.039137404 container create 47107c9aee203c29f5bd5cb28f2d1dfbe4c2ecda912c436580af11a80dd997e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 06:59:00 compute-0 systemd[1]: Started libpod-conmon-47107c9aee203c29f5bd5cb28f2d1dfbe4c2ecda912c436580af11a80dd997e9.scope.
Dec 06 06:59:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:59:00 compute-0 podman[263057]: 2025-12-06 06:59:00.109191657 +0000 UTC m=+0.023420011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:59:00 compute-0 podman[263057]: 2025-12-06 06:59:00.210884332 +0000 UTC m=+0.125112686 container init 47107c9aee203c29f5bd5cb28f2d1dfbe4c2ecda912c436580af11a80dd997e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:59:00 compute-0 podman[263057]: 2025-12-06 06:59:00.217883729 +0000 UTC m=+0.132112063 container start 47107c9aee203c29f5bd5cb28f2d1dfbe4c2ecda912c436580af11a80dd997e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 06:59:00 compute-0 friendly_mendel[263073]: 167 167
Dec 06 06:59:00 compute-0 systemd[1]: libpod-47107c9aee203c29f5bd5cb28f2d1dfbe4c2ecda912c436580af11a80dd997e9.scope: Deactivated successfully.
Dec 06 06:59:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:59:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344733042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:00 compute-0 podman[263057]: 2025-12-06 06:59:00.36887496 +0000 UTC m=+0.283103324 container attach 47107c9aee203c29f5bd5cb28f2d1dfbe4c2ecda912c436580af11a80dd997e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 06:59:00 compute-0 podman[263057]: 2025-12-06 06:59:00.369859476 +0000 UTC m=+0.284087840 container died 47107c9aee203c29f5bd5cb28f2d1dfbe4c2ecda912c436580af11a80dd997e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 06:59:00 compute-0 nova_compute[251992]: 2025-12-06 06:59:00.377 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd33821f222f32d432782e96f11a2494e4794cd9ca7b48c6aaba594c92313946-merged.mount: Deactivated successfully.
Dec 06 06:59:00 compute-0 nova_compute[251992]: 2025-12-06 06:59:00.409 251996 DEBUG nova.storage.rbd_utils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] rbd image 2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:00 compute-0 nova_compute[251992]: 2025-12-06 06:59:00.413 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:00 compute-0 podman[263057]: 2025-12-06 06:59:00.493094679 +0000 UTC m=+0.407323013 container remove 47107c9aee203c29f5bd5cb28f2d1dfbe4c2ecda912c436580af11a80dd997e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:59:00 compute-0 systemd[1]: libpod-conmon-47107c9aee203c29f5bd5cb28f2d1dfbe4c2ecda912c436580af11a80dd997e9.scope: Deactivated successfully.
Dec 06 06:59:00 compute-0 podman[263138]: 2025-12-06 06:59:00.648709734 +0000 UTC m=+0.038172158 container create 69dc7b4b050ee4665a14b6e74fde5075eaf3b5777aa13374fb75d240fbfd4098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 06:59:00 compute-0 systemd[1]: Started libpod-conmon-69dc7b4b050ee4665a14b6e74fde5075eaf3b5777aa13374fb75d240fbfd4098.scope.
Dec 06 06:59:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:59:00 compute-0 podman[263138]: 2025-12-06 06:59:00.63181032 +0000 UTC m=+0.021272744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd81552af4347b66bf06a831347285aec7bd8dad4cada47282e06cc08b2d632/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd81552af4347b66bf06a831347285aec7bd8dad4cada47282e06cc08b2d632/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd81552af4347b66bf06a831347285aec7bd8dad4cada47282e06cc08b2d632/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd81552af4347b66bf06a831347285aec7bd8dad4cada47282e06cc08b2d632/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd81552af4347b66bf06a831347285aec7bd8dad4cada47282e06cc08b2d632/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:00 compute-0 podman[263138]: 2025-12-06 06:59:00.748132487 +0000 UTC m=+0.137594941 container init 69dc7b4b050ee4665a14b6e74fde5075eaf3b5777aa13374fb75d240fbfd4098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 06:59:00 compute-0 podman[263138]: 2025-12-06 06:59:00.757167731 +0000 UTC m=+0.146630155 container start 69dc7b4b050ee4665a14b6e74fde5075eaf3b5777aa13374fb75d240fbfd4098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:59:00 compute-0 podman[263138]: 2025-12-06 06:59:00.760138831 +0000 UTC m=+0.149601255 container attach 69dc7b4b050ee4665a14b6e74fde5075eaf3b5777aa13374fb75d240fbfd4098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 06:59:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:59:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959108537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:00 compute-0 nova_compute[251992]: 2025-12-06 06:59:00.862 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:00 compute-0 nova_compute[251992]: 2025-12-06 06:59:00.866 251996 DEBUG nova.objects.instance [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b0cc34e-c661-4ac2-8f7c-81e282ad2524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:00.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:00 compute-0 nova_compute[251992]: 2025-12-06 06:59:00.880 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] End _get_guest_xml xml=<domain type="kvm">
Dec 06 06:59:00 compute-0 nova_compute[251992]:   <uuid>2b0cc34e-c661-4ac2-8f7c-81e282ad2524</uuid>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   <name>instance-0000000a</name>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   <metadata>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-456905304</nova:name>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 06:58:59</nova:creationTime>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <nova:user uuid="21371e07b1764fe883604309e8ae27d5">tempest-ServerDiagnosticsV248Test-574229049-project-member</nova:user>
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <nova:project uuid="164f3793f37e4768b0da70527fb86c4c">tempest-ServerDiagnosticsV248Test-574229049</nova:project>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   </metadata>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <system>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <entry name="serial">2b0cc34e-c661-4ac2-8f7c-81e282ad2524</entry>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <entry name="uuid">2b0cc34e-c661-4ac2-8f7c-81e282ad2524</entry>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     </system>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   <os>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   </os>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   <features>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <apic/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   </features>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   </clock>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   </cpu>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   <devices>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk">
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       </source>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       </auth>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk.config">
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       </source>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 06:59:00 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       </auth>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/2b0cc34e-c661-4ac2-8f7c-81e282ad2524/console.log" append="off"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     </serial>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <video>
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     </video>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     </rng>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 06:59:00 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 06:59:00 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 06:59:00 compute-0 nova_compute[251992]:   </devices>
Dec 06 06:59:00 compute-0 nova_compute[251992]: </domain>
Dec 06 06:59:00 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 06:59:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 17 KiB/s wr, 56 op/s
Dec 06 06:59:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2344733042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/959108537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:00 compute-0 nova_compute[251992]: 2025-12-06 06:59:00.940 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:59:00 compute-0 nova_compute[251992]: 2025-12-06 06:59:00.941 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:59:00 compute-0 nova_compute[251992]: 2025-12-06 06:59:00.942 251996 INFO nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Using config drive
Dec 06 06:59:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:00.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:00 compute-0 nova_compute[251992]: 2025-12-06 06:59:00.967 251996 DEBUG nova.storage.rbd_utils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] rbd image 2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:01 compute-0 nova_compute[251992]: 2025-12-06 06:59:01.123 251996 INFO nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Creating config drive at /var/lib/nova/instances/2b0cc34e-c661-4ac2-8f7c-81e282ad2524/disk.config
Dec 06 06:59:01 compute-0 nova_compute[251992]: 2025-12-06 06:59:01.129 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b0cc34e-c661-4ac2-8f7c-81e282ad2524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnp3teua3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:01 compute-0 nova_compute[251992]: 2025-12-06 06:59:01.257 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b0cc34e-c661-4ac2-8f7c-81e282ad2524/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnp3teua3" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:01 compute-0 nova_compute[251992]: 2025-12-06 06:59:01.296 251996 DEBUG nova.storage.rbd_utils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] rbd image 2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:01 compute-0 nova_compute[251992]: 2025-12-06 06:59:01.300 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b0cc34e-c661-4ac2-8f7c-81e282ad2524/disk.config 2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:01 compute-0 nova_compute[251992]: 2025-12-06 06:59:01.455 251996 DEBUG oslo_concurrency.processutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b0cc34e-c661-4ac2-8f7c-81e282ad2524/disk.config 2b0cc34e-c661-4ac2-8f7c-81e282ad2524_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:01 compute-0 nova_compute[251992]: 2025-12-06 06:59:01.456 251996 INFO nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Deleting local config drive /var/lib/nova/instances/2b0cc34e-c661-4ac2-8f7c-81e282ad2524/disk.config because it was imported into RBD.
Dec 06 06:59:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:01 compute-0 systemd-machined[212986]: New machine qemu-4-instance-0000000a.
Dec 06 06:59:01 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-0000000a.
Dec 06 06:59:01 compute-0 magical_boyd[263155]: --> passed data devices: 0 physical, 1 LVM
Dec 06 06:59:01 compute-0 magical_boyd[263155]: --> relative data size: 1.0
Dec 06 06:59:01 compute-0 magical_boyd[263155]: --> All data devices are unavailable
Dec 06 06:59:01 compute-0 systemd[1]: libpod-69dc7b4b050ee4665a14b6e74fde5075eaf3b5777aa13374fb75d240fbfd4098.scope: Deactivated successfully.
Dec 06 06:59:01 compute-0 podman[263247]: 2025-12-06 06:59:01.657868449 +0000 UTC m=+0.024127829 container died 69dc7b4b050ee4665a14b6e74fde5075eaf3b5777aa13374fb75d240fbfd4098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 06:59:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-abd81552af4347b66bf06a831347285aec7bd8dad4cada47282e06cc08b2d632-merged.mount: Deactivated successfully.
Dec 06 06:59:01 compute-0 podman[263247]: 2025-12-06 06:59:01.710800102 +0000 UTC m=+0.077059482 container remove 69dc7b4b050ee4665a14b6e74fde5075eaf3b5777aa13374fb75d240fbfd4098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_boyd, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 06:59:01 compute-0 systemd[1]: libpod-conmon-69dc7b4b050ee4665a14b6e74fde5075eaf3b5777aa13374fb75d240fbfd4098.scope: Deactivated successfully.
Dec 06 06:59:01 compute-0 sudo[262955]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:01 compute-0 sudo[263263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:59:01 compute-0 sudo[263263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:01 compute-0 sudo[263263]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:01 compute-0 sudo[263288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:59:01 compute-0 sudo[263288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:01 compute-0 sudo[263288]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:01 compute-0 sudo[263313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:59:01 compute-0 sudo[263313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:01 compute-0 sudo[263313]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:01 compute-0 sudo[263338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 06:59:01 compute-0 sudo[263338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:01 compute-0 ceph-mon[74339]: pgmap v1133: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 17 KiB/s wr, 56 op/s
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.084 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.237 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:02 compute-0 podman[263422]: 2025-12-06 06:59:02.30424531 +0000 UTC m=+0.035647700 container create 99e7ec537184cfe2c0884eac71c23d6999a89e1f0ac532f623fa67f18107fd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_keller, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:59:02 compute-0 systemd[1]: Started libpod-conmon-99e7ec537184cfe2c0884eac71c23d6999a89e1f0ac532f623fa67f18107fd4a.scope.
Dec 06 06:59:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:59:02 compute-0 podman[263422]: 2025-12-06 06:59:02.363704578 +0000 UTC m=+0.095106978 container init 99e7ec537184cfe2c0884eac71c23d6999a89e1f0ac532f623fa67f18107fd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 06:59:02 compute-0 podman[263422]: 2025-12-06 06:59:02.369458973 +0000 UTC m=+0.100861373 container start 99e7ec537184cfe2c0884eac71c23d6999a89e1f0ac532f623fa67f18107fd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_keller, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:59:02 compute-0 podman[263422]: 2025-12-06 06:59:02.372771272 +0000 UTC m=+0.104173682 container attach 99e7ec537184cfe2c0884eac71c23d6999a89e1f0ac532f623fa67f18107fd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_keller, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:59:02 compute-0 condescending_keller[263461]: 167 167
Dec 06 06:59:02 compute-0 systemd[1]: libpod-99e7ec537184cfe2c0884eac71c23d6999a89e1f0ac532f623fa67f18107fd4a.scope: Deactivated successfully.
Dec 06 06:59:02 compute-0 podman[263422]: 2025-12-06 06:59:02.375600298 +0000 UTC m=+0.107002698 container died 99e7ec537184cfe2c0884eac71c23d6999a89e1f0ac532f623fa67f18107fd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_keller, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:59:02 compute-0 podman[263422]: 2025-12-06 06:59:02.290303405 +0000 UTC m=+0.021705905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:59:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4826841e2707ff3c20ed7c910b584b3fae1873e86bd58d1c9c6fd28bc346ef79-merged.mount: Deactivated successfully.
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.403 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004342.4027855, 2b0cc34e-c661-4ac2-8f7c-81e282ad2524 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.404 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] VM Resumed (Lifecycle Event)
Dec 06 06:59:02 compute-0 podman[263422]: 2025-12-06 06:59:02.411273597 +0000 UTC m=+0.142675997 container remove 99e7ec537184cfe2c0884eac71c23d6999a89e1f0ac532f623fa67f18107fd4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_keller, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.412 251996 DEBUG nova.compute.manager [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.412 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.419 251996 INFO nova.virt.libvirt.driver [-] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Instance spawned successfully.
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.420 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.425 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.431 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:59:02 compute-0 systemd[1]: libpod-conmon-99e7ec537184cfe2c0884eac71c23d6999a89e1f0ac532f623fa67f18107fd4a.scope: Deactivated successfully.
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.440 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.441 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.442 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.442 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.443 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.443 251996 DEBUG nova.virt.libvirt.driver [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.484 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.484 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004342.4113603, 2b0cc34e-c661-4ac2-8f7c-81e282ad2524 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.484 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] VM Started (Lifecycle Event)
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.522 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.525 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.546 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.553 251996 INFO nova.compute.manager [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Took 4.65 seconds to spawn the instance on the hypervisor.
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.553 251996 DEBUG nova.compute.manager [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:02 compute-0 podman[263488]: 2025-12-06 06:59:02.574955069 +0000 UTC m=+0.051694151 container create a2283e7bf231dc1bda84977702612bc94636eb0878a0d401cd5be0d361311d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:59:02 compute-0 systemd[1]: Started libpod-conmon-a2283e7bf231dc1bda84977702612bc94636eb0878a0d401cd5be0d361311d10.scope.
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.628 251996 INFO nova.compute.manager [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Took 6.53 seconds to build instance.
Dec 06 06:59:02 compute-0 podman[263488]: 2025-12-06 06:59:02.552761072 +0000 UTC m=+0.029500184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:59:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:59:02 compute-0 nova_compute[251992]: 2025-12-06 06:59:02.649 251996 DEBUG oslo_concurrency.lockutils [None req-851afb17-5dcc-4461-b462-0e8a150e0c29 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "2b0cc34e-c661-4ac2-8f7c-81e282ad2524" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e42d36cb9169c0f5aa4ae7c495d07e5b4aa177012006521ca7cdcd97dca413/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e42d36cb9169c0f5aa4ae7c495d07e5b4aa177012006521ca7cdcd97dca413/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e42d36cb9169c0f5aa4ae7c495d07e5b4aa177012006521ca7cdcd97dca413/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e42d36cb9169c0f5aa4ae7c495d07e5b4aa177012006521ca7cdcd97dca413/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:02 compute-0 podman[263488]: 2025-12-06 06:59:02.673152309 +0000 UTC m=+0.149891411 container init a2283e7bf231dc1bda84977702612bc94636eb0878a0d401cd5be0d361311d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kare, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 06:59:02 compute-0 podman[263488]: 2025-12-06 06:59:02.680979569 +0000 UTC m=+0.157718651 container start a2283e7bf231dc1bda84977702612bc94636eb0878a0d401cd5be0d361311d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:59:02 compute-0 podman[263488]: 2025-12-06 06:59:02.683876447 +0000 UTC m=+0.160615519 container attach a2283e7bf231dc1bda84977702612bc94636eb0878a0d401cd5be0d361311d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kare, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 06:59:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:02.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 227 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 1.2 MiB/s wr, 73 op/s
Dec 06 06:59:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:02.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 06:59:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/542679742' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:59:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 06:59:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/542679742' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:59:03 compute-0 nova_compute[251992]: 2025-12-06 06:59:03.509 251996 DEBUG nova.compute.manager [None req-52832146-930b-422d-ad17-bce0a8debfad 9690c3ae82544cab91af1f68211dc8d1 640161fc88f944cd94a82d5f36e7fa99 - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:03 compute-0 nova_compute[251992]: 2025-12-06 06:59:03.512 251996 INFO nova.compute.manager [None req-52832146-930b-422d-ad17-bce0a8debfad 9690c3ae82544cab91af1f68211dc8d1 640161fc88f944cd94a82d5f36e7fa99 - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Retrieving diagnostics
Dec 06 06:59:03 compute-0 beautiful_kare[263505]: {
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:     "0": [
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:         {
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             "devices": [
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "/dev/loop3"
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             ],
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             "lv_name": "ceph_lv0",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             "lv_size": "7511998464",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             "name": "ceph_lv0",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             "tags": {
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "ceph.cluster_name": "ceph",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "ceph.crush_device_class": "",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "ceph.encrypted": "0",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "ceph.osd_id": "0",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "ceph.type": "block",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:                 "ceph.vdo": "0"
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             },
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             "type": "block",
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:             "vg_name": "ceph_vg0"
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:         }
Dec 06 06:59:03 compute-0 beautiful_kare[263505]:     ]
Dec 06 06:59:03 compute-0 beautiful_kare[263505]: }
Dec 06 06:59:03 compute-0 systemd[1]: libpod-a2283e7bf231dc1bda84977702612bc94636eb0878a0d401cd5be0d361311d10.scope: Deactivated successfully.
Dec 06 06:59:03 compute-0 podman[263514]: 2025-12-06 06:59:03.585885281 +0000 UTC m=+0.023635846 container died a2283e7bf231dc1bda84977702612bc94636eb0878a0d401cd5be0d361311d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 06:59:03 compute-0 nova_compute[251992]: 2025-12-06 06:59:03.608 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0e42d36cb9169c0f5aa4ae7c495d07e5b4aa177012006521ca7cdcd97dca413-merged.mount: Deactivated successfully.
Dec 06 06:59:03 compute-0 podman[263514]: 2025-12-06 06:59:03.638763903 +0000 UTC m=+0.076514468 container remove a2283e7bf231dc1bda84977702612bc94636eb0878a0d401cd5be0d361311d10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kare, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 06:59:03 compute-0 systemd[1]: libpod-conmon-a2283e7bf231dc1bda84977702612bc94636eb0878a0d401cd5be0d361311d10.scope: Deactivated successfully.
Dec 06 06:59:03 compute-0 sudo[263338]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:03 compute-0 sudo[263529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:59:03 compute-0 sudo[263529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:03 compute-0 sudo[263529]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:03 compute-0 sudo[263554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 06:59:03 compute-0 sudo[263554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:03 compute-0 sudo[263554]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:59:03.809 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:59:03.810 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:59:03.810 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:03 compute-0 sudo[263579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:59:03 compute-0 sudo[263579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:03 compute-0 sudo[263579]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:03 compute-0 sudo[263604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 06:59:03 compute-0 sudo[263604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:04 compute-0 ceph-mon[74339]: pgmap v1134: 305 pgs: 305 active+clean; 227 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 1.2 MiB/s wr, 73 op/s
Dec 06 06:59:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3611950374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/542679742' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:59:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/542679742' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:59:04 compute-0 podman[263671]: 2025-12-06 06:59:04.233986298 +0000 UTC m=+0.044058346 container create e9d3432889b1ddfa0e50c516b6ddbaaa3fc90f205a426c65e88898703fba7442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 06:59:04 compute-0 systemd[1]: Started libpod-conmon-e9d3432889b1ddfa0e50c516b6ddbaaa3fc90f205a426c65e88898703fba7442.scope.
Dec 06 06:59:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:59:04 compute-0 podman[263671]: 2025-12-06 06:59:04.303373624 +0000 UTC m=+0.113445682 container init e9d3432889b1ddfa0e50c516b6ddbaaa3fc90f205a426c65e88898703fba7442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lederberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 06:59:04 compute-0 podman[263671]: 2025-12-06 06:59:04.216336894 +0000 UTC m=+0.026408962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:59:04 compute-0 podman[263671]: 2025-12-06 06:59:04.310606648 +0000 UTC m=+0.120678696 container start e9d3432889b1ddfa0e50c516b6ddbaaa3fc90f205a426c65e88898703fba7442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 06:59:04 compute-0 podman[263671]: 2025-12-06 06:59:04.313807934 +0000 UTC m=+0.123879982 container attach e9d3432889b1ddfa0e50c516b6ddbaaa3fc90f205a426c65e88898703fba7442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lederberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 06:59:04 compute-0 youthful_lederberg[263687]: 167 167
Dec 06 06:59:04 compute-0 systemd[1]: libpod-e9d3432889b1ddfa0e50c516b6ddbaaa3fc90f205a426c65e88898703fba7442.scope: Deactivated successfully.
Dec 06 06:59:04 compute-0 podman[263671]: 2025-12-06 06:59:04.318131811 +0000 UTC m=+0.128203869 container died e9d3432889b1ddfa0e50c516b6ddbaaa3fc90f205a426c65e88898703fba7442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 06:59:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0def1032d8196bb0d24cbd808141faa83fc1a490d22d174c223820ba7f70a27-merged.mount: Deactivated successfully.
Dec 06 06:59:04 compute-0 podman[263671]: 2025-12-06 06:59:04.352854634 +0000 UTC m=+0.162926692 container remove e9d3432889b1ddfa0e50c516b6ddbaaa3fc90f205a426c65e88898703fba7442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 06:59:04 compute-0 systemd[1]: libpod-conmon-e9d3432889b1ddfa0e50c516b6ddbaaa3fc90f205a426c65e88898703fba7442.scope: Deactivated successfully.
Dec 06 06:59:04 compute-0 podman[263713]: 2025-12-06 06:59:04.500620908 +0000 UTC m=+0.035149117 container create cefcdf5c99273da5c5545089e2bef1b812fee9d1f62462ff952fb6e42d353913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mcnulty, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 06:59:04 compute-0 systemd[1]: Started libpod-conmon-cefcdf5c99273da5c5545089e2bef1b812fee9d1f62462ff952fb6e42d353913.scope.
Dec 06 06:59:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 06:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad094c2ee845d5bd6025a46f02f857522b9e28152beeff476abdb5e3b22a8d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad094c2ee845d5bd6025a46f02f857522b9e28152beeff476abdb5e3b22a8d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad094c2ee845d5bd6025a46f02f857522b9e28152beeff476abdb5e3b22a8d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad094c2ee845d5bd6025a46f02f857522b9e28152beeff476abdb5e3b22a8d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 06:59:04 compute-0 podman[263713]: 2025-12-06 06:59:04.566570911 +0000 UTC m=+0.101099140 container init cefcdf5c99273da5c5545089e2bef1b812fee9d1f62462ff952fb6e42d353913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mcnulty, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 06:59:04 compute-0 podman[263713]: 2025-12-06 06:59:04.573757954 +0000 UTC m=+0.108286163 container start cefcdf5c99273da5c5545089e2bef1b812fee9d1f62462ff952fb6e42d353913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:59:04 compute-0 podman[263713]: 2025-12-06 06:59:04.577585627 +0000 UTC m=+0.112113856 container attach cefcdf5c99273da5c5545089e2bef1b812fee9d1f62462ff952fb6e42d353913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 06:59:04 compute-0 podman[263713]: 2025-12-06 06:59:04.485899322 +0000 UTC m=+0.020427541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 06:59:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:04.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 264 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 724 KiB/s rd, 2.4 MiB/s wr, 101 op/s
Dec 06 06:59:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:04.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2887831352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:05 compute-0 nice_mcnulty[263729]: {
Dec 06 06:59:05 compute-0 nice_mcnulty[263729]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 06:59:05 compute-0 nice_mcnulty[263729]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 06:59:05 compute-0 nice_mcnulty[263729]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 06:59:05 compute-0 nice_mcnulty[263729]:         "osd_id": 0,
Dec 06 06:59:05 compute-0 nice_mcnulty[263729]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 06:59:05 compute-0 nice_mcnulty[263729]:         "type": "bluestore"
Dec 06 06:59:05 compute-0 nice_mcnulty[263729]:     }
Dec 06 06:59:05 compute-0 nice_mcnulty[263729]: }
Dec 06 06:59:05 compute-0 systemd[1]: libpod-cefcdf5c99273da5c5545089e2bef1b812fee9d1f62462ff952fb6e42d353913.scope: Deactivated successfully.
Dec 06 06:59:05 compute-0 podman[263713]: 2025-12-06 06:59:05.405427887 +0000 UTC m=+0.939956096 container died cefcdf5c99273da5c5545089e2bef1b812fee9d1f62462ff952fb6e42d353913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 06:59:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ad094c2ee845d5bd6025a46f02f857522b9e28152beeff476abdb5e3b22a8d1-merged.mount: Deactivated successfully.
Dec 06 06:59:05 compute-0 podman[263713]: 2025-12-06 06:59:05.457008283 +0000 UTC m=+0.991536492 container remove cefcdf5c99273da5c5545089e2bef1b812fee9d1f62462ff952fb6e42d353913 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mcnulty, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 06:59:05 compute-0 systemd[1]: libpod-conmon-cefcdf5c99273da5c5545089e2bef1b812fee9d1f62462ff952fb6e42d353913.scope: Deactivated successfully.
Dec 06 06:59:05 compute-0 sudo[263604]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 06:59:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:59:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 06:59:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:59:05 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 368a2683-1dc1-4ce9-83b8-f566af5b978c does not exist
Dec 06 06:59:05 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d5af6604-3f1f-4067-a480-2c0b5c689089 does not exist
Dec 06 06:59:05 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 525b9703-3daa-4bc9-8b8e-659fbd149eb6 does not exist
Dec 06 06:59:05 compute-0 sudo[263762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:59:05 compute-0 sudo[263762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:05 compute-0 sudo[263762]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:05 compute-0 sudo[263787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 06:59:05 compute-0 sudo[263787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:05 compute-0 sudo[263787]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:06 compute-0 ceph-mon[74339]: pgmap v1135: 305 pgs: 305 active+clean; 264 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 724 KiB/s rd, 2.4 MiB/s wr, 101 op/s
Dec 06 06:59:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2849377092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:59:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 06:59:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:06.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 191 MiB data, 333 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 194 op/s
Dec 06 06:59:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:06.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3731078009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1650501530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:07 compute-0 nova_compute[251992]: 2025-12-06 06:59:07.239 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:08 compute-0 ceph-mon[74339]: pgmap v1136: 305 pgs: 305 active+clean; 191 MiB data, 333 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 194 op/s
Dec 06 06:59:08 compute-0 nova_compute[251992]: 2025-12-06 06:59:08.527 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004333.526628, 6064a21e-5bc7-495b-9059-a6dd7d3abee3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:08 compute-0 nova_compute[251992]: 2025-12-06 06:59:08.528 251996 INFO nova.compute.manager [-] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] VM Stopped (Lifecycle Event)
Dec 06 06:59:08 compute-0 nova_compute[251992]: 2025-12-06 06:59:08.584 251996 DEBUG nova.compute.manager [None req-a9daac78-8cd9-4eca-9a3c-9ceddd7afb2b - - - - - -] [instance: 6064a21e-5bc7-495b-9059-a6dd7d3abee3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 06:59:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3524332152' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:59:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 06:59:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3524332152' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:59:08 compute-0 nova_compute[251992]: 2025-12-06 06:59:08.675 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:08.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 251 op/s
Dec 06 06:59:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:08.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3524332152' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 06:59:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3524332152' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 06:59:10 compute-0 ceph-mon[74339]: pgmap v1137: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 251 op/s
Dec 06 06:59:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:10.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Dec 06 06:59:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:10.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:12 compute-0 ceph-mon[74339]: pgmap v1138: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Dec 06 06:59:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1905717733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:12 compute-0 nova_compute[251992]: 2025-12-06 06:59:12.242 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:12.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 269 op/s
Dec 06 06:59:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:59:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:59:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:59:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:59:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:59:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:59:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:12.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:13 compute-0 podman[263816]: 2025-12-06 06:59:13.444320014 +0000 UTC m=+0.104011188 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 06:59:13 compute-0 nova_compute[251992]: 2025-12-06 06:59:13.677 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:13 compute-0 sudo[263843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:59:13 compute-0 sudo[263843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:13 compute-0 sudo[263843]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:14 compute-0 sudo[263868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:59:14 compute-0 sudo[263868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:14 compute-0 sudo[263868]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:14 compute-0 ceph-mon[74339]: pgmap v1139: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 269 op/s
Dec 06 06:59:14 compute-0 nova_compute[251992]: 2025-12-06 06:59:14.786 251996 DEBUG nova.compute.manager [None req-b27870f1-a713-458d-9f5f-41da685e95d1 9690c3ae82544cab91af1f68211dc8d1 640161fc88f944cd94a82d5f36e7fa99 - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:14 compute-0 nova_compute[251992]: 2025-12-06 06:59:14.791 251996 INFO nova.compute.manager [None req-b27870f1-a713-458d-9f5f-41da685e95d1 9690c3ae82544cab91af1f68211dc8d1 640161fc88f944cd94a82d5f36e7fa99 - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Retrieving diagnostics
Dec 06 06:59:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:14.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 255 op/s
Dec 06 06:59:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:14.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:15 compute-0 nova_compute[251992]: 2025-12-06 06:59:15.218 251996 DEBUG oslo_concurrency.lockutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Acquiring lock "2b0cc34e-c661-4ac2-8f7c-81e282ad2524" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:15 compute-0 nova_compute[251992]: 2025-12-06 06:59:15.218 251996 DEBUG oslo_concurrency.lockutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "2b0cc34e-c661-4ac2-8f7c-81e282ad2524" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:15 compute-0 nova_compute[251992]: 2025-12-06 06:59:15.219 251996 DEBUG oslo_concurrency.lockutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Acquiring lock "2b0cc34e-c661-4ac2-8f7c-81e282ad2524-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:15 compute-0 nova_compute[251992]: 2025-12-06 06:59:15.219 251996 DEBUG oslo_concurrency.lockutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "2b0cc34e-c661-4ac2-8f7c-81e282ad2524-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:15 compute-0 nova_compute[251992]: 2025-12-06 06:59:15.219 251996 DEBUG oslo_concurrency.lockutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "2b0cc34e-c661-4ac2-8f7c-81e282ad2524-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:15 compute-0 nova_compute[251992]: 2025-12-06 06:59:15.220 251996 INFO nova.compute.manager [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Terminating instance
Dec 06 06:59:15 compute-0 nova_compute[251992]: 2025-12-06 06:59:15.221 251996 DEBUG oslo_concurrency.lockutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Acquiring lock "refresh_cache-2b0cc34e-c661-4ac2-8f7c-81e282ad2524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:59:15 compute-0 nova_compute[251992]: 2025-12-06 06:59:15.221 251996 DEBUG oslo_concurrency.lockutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Acquired lock "refresh_cache-2b0cc34e-c661-4ac2-8f7c-81e282ad2524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:59:15 compute-0 nova_compute[251992]: 2025-12-06 06:59:15.221 251996 DEBUG nova.network.neutron [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 06:59:15 compute-0 nova_compute[251992]: 2025-12-06 06:59:15.558 251996 DEBUG nova.network.neutron [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:59:16 compute-0 nova_compute[251992]: 2025-12-06 06:59:16.000 251996 DEBUG nova.network.neutron [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:59:16 compute-0 nova_compute[251992]: 2025-12-06 06:59:16.015 251996 DEBUG oslo_concurrency.lockutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Releasing lock "refresh_cache-2b0cc34e-c661-4ac2-8f7c-81e282ad2524" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:59:16 compute-0 nova_compute[251992]: 2025-12-06 06:59:16.016 251996 DEBUG nova.compute.manager [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 06:59:16 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec 06 06:59:16 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Consumed 13.248s CPU time.
Dec 06 06:59:16 compute-0 systemd-machined[212986]: Machine qemu-4-instance-0000000a terminated.
Dec 06 06:59:16 compute-0 nova_compute[251992]: 2025-12-06 06:59:16.233 251996 INFO nova.virt.libvirt.driver [-] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Instance destroyed successfully.
Dec 06 06:59:16 compute-0 nova_compute[251992]: 2025-12-06 06:59:16.233 251996 DEBUG nova.objects.instance [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lazy-loading 'resources' on Instance uuid 2b0cc34e-c661-4ac2-8f7c-81e282ad2524 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:16 compute-0 ceph-mon[74339]: pgmap v1140: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 255 op/s
Dec 06 06:59:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:16 compute-0 nova_compute[251992]: 2025-12-06 06:59:16.621 251996 INFO nova.virt.libvirt.driver [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Deleting instance files /var/lib/nova/instances/2b0cc34e-c661-4ac2-8f7c-81e282ad2524_del
Dec 06 06:59:16 compute-0 nova_compute[251992]: 2025-12-06 06:59:16.622 251996 INFO nova.virt.libvirt.driver [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Deletion of /var/lib/nova/instances/2b0cc34e-c661-4ac2-8f7c-81e282ad2524_del complete
Dec 06 06:59:16 compute-0 nova_compute[251992]: 2025-12-06 06:59:16.682 251996 INFO nova.compute.manager [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Took 0.67 seconds to destroy the instance on the hypervisor.
Dec 06 06:59:16 compute-0 nova_compute[251992]: 2025-12-06 06:59:16.683 251996 DEBUG oslo.service.loopingcall [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 06:59:16 compute-0 nova_compute[251992]: 2025-12-06 06:59:16.683 251996 DEBUG nova.compute.manager [-] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 06:59:16 compute-0 nova_compute[251992]: 2025-12-06 06:59:16.684 251996 DEBUG nova.network.neutron [-] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 06:59:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:16.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 107 MiB data, 326 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 259 op/s
Dec 06 06:59:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:16.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.033 251996 DEBUG nova.network.neutron [-] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.053 251996 DEBUG nova.network.neutron [-] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.067 251996 INFO nova.compute.manager [-] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Took 0.38 seconds to deallocate network for instance.
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.116 251996 DEBUG oslo_concurrency.lockutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.116 251996 DEBUG oslo_concurrency.lockutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.159 251996 DEBUG oslo_concurrency.processutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.243 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:59:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128981368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.627 251996 DEBUG oslo_concurrency.processutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.634 251996 DEBUG nova.compute.provider_tree [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.652 251996 DEBUG nova.scheduler.client.report [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:59:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2128981368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.672 251996 DEBUG oslo_concurrency.lockutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.707 251996 INFO nova.scheduler.client.report [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Deleted allocations for instance 2b0cc34e-c661-4ac2-8f7c-81e282ad2524
Dec 06 06:59:17 compute-0 nova_compute[251992]: 2025-12-06 06:59:17.783 251996 DEBUG oslo_concurrency.lockutils [None req-677dff78-ab67-4a8c-a527-d3924f20a0f7 21371e07b1764fe883604309e8ae27d5 164f3793f37e4768b0da70527fb86c4c - - default default] Lock "2b0cc34e-c661-4ac2-8f7c-81e282ad2524" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_06:59:18
Dec 06 06:59:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 06:59:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 06:59:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['backups', '.mgr', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'images']
Dec 06 06:59:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 06:59:18 compute-0 nova_compute[251992]: 2025-12-06 06:59:18.680 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:18 compute-0 ceph-mon[74339]: pgmap v1141: 305 pgs: 305 active+clean; 107 MiB data, 326 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 259 op/s
Dec 06 06:59:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:18.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 105 MiB data, 329 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 208 op/s
Dec 06 06:59:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:18.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:59:19.334 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 06:59:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:59:19.335 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 06:59:19 compute-0 nova_compute[251992]: 2025-12-06 06:59:19.366 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:20 compute-0 podman[263941]: 2025-12-06 06:59:20.405010061 +0000 UTC m=+0.060123957 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 06:59:20 compute-0 podman[263940]: 2025-12-06 06:59:20.420227551 +0000 UTC m=+0.077155246 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 06 06:59:20 compute-0 ceph-mon[74339]: pgmap v1142: 305 pgs: 305 active+clean; 105 MiB data, 329 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 208 op/s
Dec 06 06:59:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:20.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 114 MiB data, 312 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.0 MiB/s wr, 138 op/s
Dec 06 06:59:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:20.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:22 compute-0 nova_compute[251992]: 2025-12-06 06:59:22.245 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:22 compute-0 ceph-mon[74339]: pgmap v1143: 305 pgs: 305 active+clean; 114 MiB data, 312 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.0 MiB/s wr, 138 op/s
Dec 06 06:59:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:22.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 119 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.2 MiB/s wr, 165 op/s
Dec 06 06:59:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:22.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 06:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 06:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 06:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 06:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 06:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 06:59:23 compute-0 nova_compute[251992]: 2025-12-06 06:59:23.682 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:24 compute-0 ceph-mon[74339]: pgmap v1144: 305 pgs: 305 active+clean; 119 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.2 MiB/s wr, 165 op/s
Dec 06 06:59:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:24.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 562 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Dec 06 06:59:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:24.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021606876977479992 of space, bias 1.0, pg target 0.6482063093243998 quantized to 32 (current 32)
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 06:59:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 06:59:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:26 compute-0 nova_compute[251992]: 2025-12-06 06:59:26.652 251996 DEBUG oslo_concurrency.lockutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Acquiring lock "c0e44b8d-95d4-4389-a1c4-39ec403311c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:26 compute-0 nova_compute[251992]: 2025-12-06 06:59:26.652 251996 DEBUG oslo_concurrency.lockutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "c0e44b8d-95d4-4389-a1c4-39ec403311c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:26 compute-0 nova_compute[251992]: 2025-12-06 06:59:26.673 251996 DEBUG nova.compute.manager [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 06:59:26 compute-0 nova_compute[251992]: 2025-12-06 06:59:26.752 251996 DEBUG oslo_concurrency.lockutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:26 compute-0 nova_compute[251992]: 2025-12-06 06:59:26.753 251996 DEBUG oslo_concurrency.lockutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:26 compute-0 nova_compute[251992]: 2025-12-06 06:59:26.760 251996 DEBUG nova.virt.hardware [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 06:59:26 compute-0 nova_compute[251992]: 2025-12-06 06:59:26.760 251996 INFO nova.compute.claims [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Claim successful on node compute-0.ctlplane.example.com
Dec 06 06:59:26 compute-0 ceph-mon[74339]: pgmap v1145: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 562 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Dec 06 06:59:26 compute-0 nova_compute[251992]: 2025-12-06 06:59:26.866 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:26.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 553 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Dec 06 06:59:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:26.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.246 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:59:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2692941592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.309 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.315 251996 DEBUG nova.compute.provider_tree [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.333 251996 DEBUG nova.scheduler.client.report [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.382 251996 DEBUG oslo_concurrency.lockutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.383 251996 DEBUG nova.compute.manager [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.472 251996 DEBUG nova.compute.manager [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.488 251996 INFO nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.508 251996 DEBUG nova.compute.manager [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.620 251996 DEBUG nova.compute.manager [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.622 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.622 251996 INFO nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Creating image(s)
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.650 251996 DEBUG nova.storage.rbd_utils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.676 251996 DEBUG nova.storage.rbd_utils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.700 251996 DEBUG nova.storage.rbd_utils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.704 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.763 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.764 251996 DEBUG oslo_concurrency.lockutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.765 251996 DEBUG oslo_concurrency.lockutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.765 251996 DEBUG oslo_concurrency.lockutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.792 251996 DEBUG nova.storage.rbd_utils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:27 compute-0 nova_compute[251992]: 2025-12-06 06:59:27.796 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2692941592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.104 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.181 251996 DEBUG nova.storage.rbd_utils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] resizing rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.294 251996 DEBUG nova.objects.instance [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lazy-loading 'migration_context' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.311 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.312 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Ensure instance console log exists: /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.312 251996 DEBUG oslo_concurrency.lockutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.313 251996 DEBUG oslo_concurrency.lockutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.313 251996 DEBUG oslo_concurrency.lockutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.315 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.320 251996 WARNING nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.333 251996 DEBUG nova.virt.libvirt.host [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.334 251996 DEBUG nova.virt.libvirt.host [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 06:59:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 06:59:28.337 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.338 251996 DEBUG nova.virt.libvirt.host [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.339 251996 DEBUG nova.virt.libvirt.host [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.340 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.341 251996 DEBUG nova.virt.hardware [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.341 251996 DEBUG nova.virt.hardware [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.341 251996 DEBUG nova.virt.hardware [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.341 251996 DEBUG nova.virt.hardware [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.342 251996 DEBUG nova.virt.hardware [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.342 251996 DEBUG nova.virt.hardware [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.342 251996 DEBUG nova.virt.hardware [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.342 251996 DEBUG nova.virt.hardware [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.342 251996 DEBUG nova.virt.hardware [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.343 251996 DEBUG nova.virt.hardware [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.343 251996 DEBUG nova.virt.hardware [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.346 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.686 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:59:28 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1654061355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.789 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.815 251996 DEBUG nova.storage.rbd_utils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:28 compute-0 nova_compute[251992]: 2025-12-06 06:59:28.819 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:28 compute-0 ceph-mon[74339]: pgmap v1146: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 553 KiB/s rd, 4.3 MiB/s wr, 145 op/s
Dec 06 06:59:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1654061355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:28.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 144 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 415 KiB/s rd, 3.3 MiB/s wr, 111 op/s
Dec 06 06:59:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 06:59:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:28.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 06:59:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 06:59:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423695454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.264 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.266 251996 DEBUG nova.objects.instance [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lazy-loading 'pci_devices' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.288 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] End _get_guest_xml xml=<domain type="kvm">
Dec 06 06:59:29 compute-0 nova_compute[251992]:   <uuid>c0e44b8d-95d4-4389-a1c4-39ec403311c2</uuid>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   <name>instance-0000000c</name>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   <metadata>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <nova:name>tempest-ServersAdmin275Test-server-818914075</nova:name>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 06:59:28</nova:creationTime>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <nova:user uuid="c471515305d641a4ad0f6c96b0a3a99c">tempest-ServersAdmin275Test-1943715333-project-member</nova:user>
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <nova:project uuid="93ed74fbaa5248349a016be1f013fb09">tempest-ServersAdmin275Test-1943715333</nova:project>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   </metadata>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <system>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <entry name="serial">c0e44b8d-95d4-4389-a1c4-39ec403311c2</entry>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <entry name="uuid">c0e44b8d-95d4-4389-a1c4-39ec403311c2</entry>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     </system>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   <os>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   </os>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   <features>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <apic/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   </features>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   </clock>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   </cpu>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   <devices>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk">
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       </source>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       </auth>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config">
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       </source>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 06:59:29 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       </auth>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     </disk>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/console.log" append="off"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     </serial>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <video>
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     </video>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     </rng>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 06:59:29 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 06:59:29 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 06:59:29 compute-0 nova_compute[251992]:   </devices>
Dec 06 06:59:29 compute-0 nova_compute[251992]: </domain>
Dec 06 06:59:29 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.343 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.343 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.344 251996 INFO nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Using config drive
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.368 251996 DEBUG nova.storage.rbd_utils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.614 251996 INFO nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Creating config drive at /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.619 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptl7olx39 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.744 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptl7olx39" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.775 251996 DEBUG nova.storage.rbd_utils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.779 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:29 compute-0 ceph-mon[74339]: pgmap v1147: 305 pgs: 305 active+clean; 144 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 415 KiB/s rd, 3.3 MiB/s wr, 111 op/s
Dec 06 06:59:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/423695454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.966 251996 DEBUG oslo_concurrency.processutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:29 compute-0 nova_compute[251992]: 2025-12-06 06:59:29.967 251996 INFO nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Deleting local config drive /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config because it was imported into RBD.
Dec 06 06:59:30 compute-0 systemd-machined[212986]: New machine qemu-5-instance-0000000c.
Dec 06 06:59:30 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-0000000c.
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.553 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004370.5526288, c0e44b8d-95d4-4389-a1c4-39ec403311c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.554 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] VM Resumed (Lifecycle Event)
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.557 251996 DEBUG nova.compute.manager [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.557 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.561 251996 INFO nova.virt.libvirt.driver [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance spawned successfully.
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.562 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.603 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.603 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.604 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.605 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.606 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.606 251996 DEBUG nova.virt.libvirt.driver [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.611 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.614 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.636 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.637 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004370.5540519, c0e44b8d-95d4-4389-a1c4-39ec403311c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.637 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] VM Started (Lifecycle Event)
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.667 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.670 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.675 251996 INFO nova.compute.manager [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Took 3.05 seconds to spawn the instance on the hypervisor.
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.675 251996 DEBUG nova.compute.manager [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.706 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.731 251996 INFO nova.compute.manager [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Took 4.01 seconds to build instance.
Dec 06 06:59:30 compute-0 nova_compute[251992]: 2025-12-06 06:59:30.749 251996 DEBUG oslo_concurrency.lockutils [None req-a4b78f51-1411-4db1-a8c4-83537d59dc36 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "c0e44b8d-95d4-4389-a1c4-39ec403311c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Dec 06 06:59:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:30.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Dec 06 06:59:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 161 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 184 KiB/s rd, 2.0 MiB/s wr, 75 op/s
Dec 06 06:59:30 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Dec 06 06:59:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:30.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:31 compute-0 nova_compute[251992]: 2025-12-06 06:59:31.232 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004356.2318594, 2b0cc34e-c661-4ac2-8f7c-81e282ad2524 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 06:59:31 compute-0 nova_compute[251992]: 2025-12-06 06:59:31.234 251996 INFO nova.compute.manager [-] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] VM Stopped (Lifecycle Event)
Dec 06 06:59:31 compute-0 nova_compute[251992]: 2025-12-06 06:59:31.258 251996 DEBUG nova.compute.manager [None req-b924886a-6415-4aa9-906e-1b9d5ed32097 - - - - - -] [instance: 2b0cc34e-c661-4ac2-8f7c-81e282ad2524] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:32 compute-0 ceph-mon[74339]: pgmap v1148: 305 pgs: 305 active+clean; 161 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 184 KiB/s rd, 2.0 MiB/s wr, 75 op/s
Dec 06 06:59:32 compute-0 ceph-mon[74339]: osdmap e157: 3 total, 3 up, 3 in
Dec 06 06:59:32 compute-0 nova_compute[251992]: 2025-12-06 06:59:32.247 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:32.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 167 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 140 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Dec 06 06:59:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:32.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:33 compute-0 nova_compute[251992]: 2025-12-06 06:59:33.206 251996 INFO nova.compute.manager [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Rebuilding instance
Dec 06 06:59:33 compute-0 nova_compute[251992]: 2025-12-06 06:59:33.641 251996 DEBUG nova.objects.instance [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:33 compute-0 nova_compute[251992]: 2025-12-06 06:59:33.656 251996 DEBUG nova.compute.manager [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 06:59:33 compute-0 nova_compute[251992]: 2025-12-06 06:59:33.690 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:33 compute-0 nova_compute[251992]: 2025-12-06 06:59:33.725 251996 DEBUG nova.objects.instance [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lazy-loading 'pci_requests' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:33 compute-0 nova_compute[251992]: 2025-12-06 06:59:33.738 251996 DEBUG nova.objects.instance [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lazy-loading 'pci_devices' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:33 compute-0 nova_compute[251992]: 2025-12-06 06:59:33.752 251996 DEBUG nova.objects.instance [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lazy-loading 'resources' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:33 compute-0 nova_compute[251992]: 2025-12-06 06:59:33.766 251996 DEBUG nova.objects.instance [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lazy-loading 'migration_context' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:33 compute-0 nova_compute[251992]: 2025-12-06 06:59:33.784 251996 DEBUG nova.objects.instance [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 06:59:33 compute-0 nova_compute[251992]: 2025-12-06 06:59:33.788 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 06:59:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1428329051' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:34 compute-0 sudo[264352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:59:34 compute-0 sudo[264352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:34 compute-0 sudo[264352]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:34 compute-0 sudo[264377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:59:34 compute-0 sudo[264377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:34 compute-0 sudo[264377]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:34.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 831 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Dec 06 06:59:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:34.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:35 compute-0 ceph-mon[74339]: pgmap v1150: 305 pgs: 305 active+clean; 167 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 140 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Dec 06 06:59:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3323031020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:36.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Dec 06 06:59:36 compute-0 ceph-mon[74339]: pgmap v1151: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 831 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Dec 06 06:59:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:37.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:37 compute-0 nova_compute[251992]: 2025-12-06 06:59:37.248 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:38 compute-0 ceph-mon[74339]: pgmap v1152: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Dec 06 06:59:38 compute-0 nova_compute[251992]: 2025-12-06 06:59:38.693 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:38.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 181 op/s
Dec 06 06:59:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:39.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Dec 06 06:59:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Dec 06 06:59:40 compute-0 ceph-mon[74339]: pgmap v1153: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 181 op/s
Dec 06 06:59:40 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Dec 06 06:59:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:40.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 582 KiB/s wr, 190 op/s
Dec 06 06:59:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:41.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:42 compute-0 ceph-mon[74339]: osdmap e158: 3 total, 3 up, 3 in
Dec 06 06:59:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/237919553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:42 compute-0 nova_compute[251992]: 2025-12-06 06:59:42.250 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 15 KiB/s wr, 177 op/s
Dec 06 06:59:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:42.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:59:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:59:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:59:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:59:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 06:59:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 06:59:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:43.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:43 compute-0 nova_compute[251992]: 2025-12-06 06:59:43.698 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:43 compute-0 nova_compute[251992]: 2025-12-06 06:59:43.832 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 06:59:44 compute-0 ceph-mon[74339]: pgmap v1155: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 582 KiB/s wr, 190 op/s
Dec 06 06:59:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4098330128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1155863205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:44 compute-0 podman[264408]: 2025-12-06 06:59:44.416954747 +0000 UTC m=+0.077540497 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Dec 06 06:59:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 170 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 670 KiB/s wr, 160 op/s
Dec 06 06:59:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:44.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:45.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:45 compute-0 ceph-mon[74339]: pgmap v1156: 305 pgs: 305 active+clean; 167 MiB data, 328 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 15 KiB/s wr, 177 op/s
Dec 06 06:59:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/323353182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:46 compute-0 ceph-mon[74339]: pgmap v1157: 305 pgs: 305 active+clean; 170 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 670 KiB/s wr, 160 op/s
Dec 06 06:59:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2920678114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 182 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 MiB/s wr, 111 op/s
Dec 06 06:59:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:46.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:47.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:47 compute-0 nova_compute[251992]: 2025-12-06 06:59:47.251 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2461963196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:48 compute-0 nova_compute[251992]: 2025-12-06 06:59:48.465 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:48 compute-0 nova_compute[251992]: 2025-12-06 06:59:48.465 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:48 compute-0 nova_compute[251992]: 2025-12-06 06:59:48.466 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 06:59:48 compute-0 nova_compute[251992]: 2025-12-06 06:59:48.466 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 06:59:48 compute-0 nova_compute[251992]: 2025-12-06 06:59:48.484 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-c0e44b8d-95d4-4389-a1c4-39ec403311c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 06:59:48 compute-0 nova_compute[251992]: 2025-12-06 06:59:48.484 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-c0e44b8d-95d4-4389-a1c4-39ec403311c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 06:59:48 compute-0 nova_compute[251992]: 2025-12-06 06:59:48.484 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 06:59:48 compute-0 nova_compute[251992]: 2025-12-06 06:59:48.484 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 06:59:48 compute-0 ceph-mon[74339]: pgmap v1158: 305 pgs: 305 active+clean; 182 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 MiB/s wr, 111 op/s
Dec 06 06:59:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1495032765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:48 compute-0 nova_compute[251992]: 2025-12-06 06:59:48.701 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:48 compute-0 nova_compute[251992]: 2025-12-06 06:59:48.739 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 06:59:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 214 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.3 MiB/s wr, 153 op/s
Dec 06 06:59:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:48.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:49.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.197 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.212 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-c0e44b8d-95d4-4389-a1c4-39ec403311c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.212 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.213 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.213 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.214 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.214 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.214 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.214 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.215 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.215 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.237 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.237 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.238 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.238 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.238 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:59:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2009234372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3413033794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.776 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.860 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 06:59:49 compute-0 nova_compute[251992]: 2025-12-06 06:59:49.860 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.048 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.050 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4705MB free_disk=20.904388427734375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.050 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.050 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.152 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c0e44b8d-95d4-4389-a1c4-39ec403311c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.152 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.153 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.199 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 06:59:50 compute-0 ovn_controller[147168]: 2025-12-06T06:59:50Z|00035|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 06:59:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 06:59:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2345918381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.632 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.638 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.659 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.693 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 06:59:50 compute-0 nova_compute[251992]: 2025-12-06 06:59:50.693 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 06:59:50 compute-0 ceph-mon[74339]: pgmap v1159: 305 pgs: 305 active+clean; 214 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.3 MiB/s wr, 153 op/s
Dec 06 06:59:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2009234372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2345918381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 234 MiB data, 366 MiB used, 21 GiB / 21 GiB avail; 946 KiB/s rd, 3.8 MiB/s wr, 154 op/s
Dec 06 06:59:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:50.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:51.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:51 compute-0 podman[264485]: 2025-12-06 06:59:51.403944818 +0000 UTC m=+0.051576717 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 06:59:51 compute-0 podman[264486]: 2025-12-06 06:59:51.404899723 +0000 UTC m=+0.051718771 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 06 06:59:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Dec 06 06:59:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Dec 06 06:59:51 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Dec 06 06:59:52 compute-0 nova_compute[251992]: 2025-12-06 06:59:52.252 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:52 compute-0 ceph-mon[74339]: pgmap v1160: 305 pgs: 305 active+clean; 234 MiB data, 366 MiB used, 21 GiB / 21 GiB avail; 946 KiB/s rd, 3.8 MiB/s wr, 154 op/s
Dec 06 06:59:52 compute-0 ceph-mon[74339]: osdmap e159: 3 total, 3 up, 3 in
Dec 06 06:59:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.7 MiB/s wr, 196 op/s
Dec 06 06:59:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 06:59:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:52.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 06:59:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:53.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:53 compute-0 nova_compute[251992]: 2025-12-06 06:59:53.703 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:54 compute-0 sudo[264523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:59:54 compute-0 sudo[264523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:54 compute-0 sudo[264523]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:54 compute-0 sudo[264548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 06:59:54 compute-0 sudo[264548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 06:59:54 compute-0 sudo[264548]: pam_unix(sudo:session): session closed for user root
Dec 06 06:59:54 compute-0 nova_compute[251992]: 2025-12-06 06:59:54.879 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 06:59:54 compute-0 ceph-mon[74339]: pgmap v1162: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.7 MiB/s wr, 196 op/s
Dec 06 06:59:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4292194976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 06:59:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 222 op/s
Dec 06 06:59:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:54.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:55.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:56 compute-0 ceph-mon[74339]: pgmap v1163: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 222 op/s
Dec 06 06:59:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 06:59:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.0 MiB/s wr, 226 op/s
Dec 06 06:59:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:56.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:57.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:57 compute-0 nova_compute[251992]: 2025-12-06 06:59:57.254 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:57 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec 06 06:59:57 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Consumed 15.033s CPU time.
Dec 06 06:59:57 compute-0 systemd-machined[212986]: Machine qemu-5-instance-0000000c terminated.
Dec 06 06:59:57 compute-0 nova_compute[251992]: 2025-12-06 06:59:57.897 251996 INFO nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance shutdown successfully after 24 seconds.
Dec 06 06:59:57 compute-0 nova_compute[251992]: 2025-12-06 06:59:57.903 251996 INFO nova.virt.libvirt.driver [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance destroyed successfully.
Dec 06 06:59:57 compute-0 nova_compute[251992]: 2025-12-06 06:59:57.908 251996 INFO nova.virt.libvirt.driver [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance destroyed successfully.
Dec 06 06:59:58 compute-0 nova_compute[251992]: 2025-12-06 06:59:58.707 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 06:59:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 142 op/s
Dec 06 06:59:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 06:59:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:06:59:58.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 06:59:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 06:59:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 06:59:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:06:59:59.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 06:59:59 compute-0 ceph-mon[74339]: pgmap v1164: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.0 MiB/s wr, 226 op/s
Dec 06 06:59:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/843334347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 07:00:00 compute-0 ceph-mon[74339]: pgmap v1165: 305 pgs: 305 active+clean; 248 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 142 op/s
Dec 06 07:00:00 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 07:00:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 266 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 131 op/s
Dec 06 07:00:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:00.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:01.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:02 compute-0 nova_compute[251992]: 2025-12-06 07:00:02.255 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:02 compute-0 ceph-mon[74339]: pgmap v1166: 305 pgs: 305 active+clean; 266 MiB data, 380 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 131 op/s
Dec 06 07:00:02 compute-0 nova_compute[251992]: 2025-12-06 07:00:02.498 251996 INFO nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Deleting instance files /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2_del
Dec 06 07:00:02 compute-0 nova_compute[251992]: 2025-12-06 07:00:02.499 251996 INFO nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Deletion of /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2_del complete
Dec 06 07:00:02 compute-0 nova_compute[251992]: 2025-12-06 07:00:02.800 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:00:02 compute-0 nova_compute[251992]: 2025-12-06 07:00:02.801 251996 INFO nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Creating image(s)
Dec 06 07:00:02 compute-0 nova_compute[251992]: 2025-12-06 07:00:02.838 251996 DEBUG nova.storage.rbd_utils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:02 compute-0 nova_compute[251992]: 2025-12-06 07:00:02.870 251996 DEBUG nova.storage.rbd_utils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:02 compute-0 nova_compute[251992]: 2025-12-06 07:00:02.906 251996 DEBUG nova.storage.rbd_utils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:02 compute-0 nova_compute[251992]: 2025-12-06 07:00:02.911 251996 DEBUG oslo_concurrency.lockutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Acquiring lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:02 compute-0 nova_compute[251992]: 2025-12-06 07:00:02.912 251996 DEBUG oslo_concurrency.lockutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 260 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 123 op/s
Dec 06 07:00:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:00:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:02.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:00:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:03.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:03 compute-0 nova_compute[251992]: 2025-12-06 07:00:03.275 251996 DEBUG nova.virt.libvirt.imagebackend [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/412dd61d-1b1e-439f-b7f9-7e7c4e42924c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/412dd61d-1b1e-439f-b7f9-7e7c4e42924c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:00:03 compute-0 nova_compute[251992]: 2025-12-06 07:00:03.710 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:00:03.810 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:00:03.811 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:00:03.811 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:04 compute-0 ceph-mon[74339]: pgmap v1167: 305 pgs: 305 active+clean; 260 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 123 op/s
Dec 06 07:00:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 250 MiB data, 378 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 118 op/s
Dec 06 07:00:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:04.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:05.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:05 compute-0 nova_compute[251992]: 2025-12-06 07:00:05.051 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:05 compute-0 nova_compute[251992]: 2025-12-06 07:00:05.122 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.part --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:05 compute-0 nova_compute[251992]: 2025-12-06 07:00:05.123 251996 DEBUG nova.virt.images [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] 412dd61d-1b1e-439f-b7f9-7e7c4e42924c was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 06 07:00:05 compute-0 nova_compute[251992]: 2025-12-06 07:00:05.124 251996 DEBUG nova.privsep.utils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 06 07:00:05 compute-0 nova_compute[251992]: 2025-12-06 07:00:05.125 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.part /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:06 compute-0 sudo[264666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:06 compute-0 sudo[264666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:06 compute-0 sudo[264666]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:06 compute-0 sudo[264691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:00:06 compute-0 sudo[264691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:06 compute-0 sudo[264691]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:06 compute-0 sudo[264716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:06 compute-0 sudo[264716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:06 compute-0 sudo[264716]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:06 compute-0 nova_compute[251992]: 2025-12-06 07:00:06.159 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.part /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.converted" returned: 0 in 1.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:06 compute-0 nova_compute[251992]: 2025-12-06 07:00:06.165 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:06 compute-0 sudo[264741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:00:06 compute-0 sudo[264741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:06 compute-0 nova_compute[251992]: 2025-12-06 07:00:06.223 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737.converted --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:06 compute-0 nova_compute[251992]: 2025-12-06 07:00:06.225 251996 DEBUG oslo_concurrency.lockutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:06 compute-0 nova_compute[251992]: 2025-12-06 07:00:06.370 251996 DEBUG nova.storage.rbd_utils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:06 compute-0 nova_compute[251992]: 2025-12-06 07:00:06.375 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:00:06 compute-0 sudo[264741]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3864384142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/807715224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 07:00:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:00:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:00:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 07:00:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:00:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 239 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 116 op/s
Dec 06 07:00:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:06.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:07.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.256 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.728 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.795 251996 DEBUG nova.storage.rbd_utils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] resizing rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.905 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.906 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Ensure instance console log exists: /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.907 251996 DEBUG oslo_concurrency.lockutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.908 251996 DEBUG oslo_concurrency.lockutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.909 251996 DEBUG oslo_concurrency.lockutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.912 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.918 251996 WARNING nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.923 251996 DEBUG nova.virt.libvirt.host [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.924 251996 DEBUG nova.virt.libvirt.host [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:00:07 compute-0 ceph-mon[74339]: pgmap v1168: 305 pgs: 305 active+clean; 250 MiB data, 378 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 118 op/s
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.928 251996 DEBUG nova.virt.libvirt.host [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:00:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:00:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:00:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.929 251996 DEBUG nova.virt.libvirt.host [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.931 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.932 251996 DEBUG nova.virt.hardware [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.932 251996 DEBUG nova.virt.hardware [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.932 251996 DEBUG nova.virt.hardware [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.933 251996 DEBUG nova.virt.hardware [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.933 251996 DEBUG nova.virt.hardware [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.933 251996 DEBUG nova.virt.hardware [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.933 251996 DEBUG nova.virt.hardware [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.934 251996 DEBUG nova.virt.hardware [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.934 251996 DEBUG nova.virt.hardware [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.934 251996 DEBUG nova.virt.hardware [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.935 251996 DEBUG nova.virt.hardware [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.935 251996 DEBUG nova.objects.instance [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:07 compute-0 nova_compute[251992]: 2025-12-06 07:00:07.957 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:00:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:00:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:00:08 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:00:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:00:08 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:08 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 03eb5302-31bf-41c9-8c67-b20644aadebb does not exist
Dec 06 07:00:08 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a4b0c670-9f33-4e1b-ab4f-bb65aeb6f449 does not exist
Dec 06 07:00:08 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev fd3b847b-cb97-4784-a810-ac8a3bdf759e does not exist
Dec 06 07:00:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:00:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:00:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:00:08 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:00:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:00:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:00:08 compute-0 sudo[264928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:08 compute-0 sudo[264928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:08 compute-0 sudo[264928]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:00:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/197566583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:08 compute-0 nova_compute[251992]: 2025-12-06 07:00:08.428 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:08 compute-0 sudo[264953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:00:08 compute-0 sudo[264953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:08 compute-0 sudo[264953]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:08 compute-0 nova_compute[251992]: 2025-12-06 07:00:08.457 251996 DEBUG nova.storage.rbd_utils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:08 compute-0 nova_compute[251992]: 2025-12-06 07:00:08.463 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:08 compute-0 sudo[264996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:08 compute-0 sudo[264996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:08 compute-0 sudo[264996]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:08 compute-0 sudo[265025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:00:08 compute-0 sudo[265025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:08 compute-0 nova_compute[251992]: 2025-12-06 07:00:08.712 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:00:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2009438146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:08 compute-0 nova_compute[251992]: 2025-12-06 07:00:08.900 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:08 compute-0 nova_compute[251992]: 2025-12-06 07:00:08.904 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:00:08 compute-0 nova_compute[251992]:   <uuid>c0e44b8d-95d4-4389-a1c4-39ec403311c2</uuid>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   <name>instance-0000000c</name>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <nova:name>tempest-ServersAdmin275Test-server-818914075</nova:name>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:00:07</nova:creationTime>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <nova:user uuid="c471515305d641a4ad0f6c96b0a3a99c">tempest-ServersAdmin275Test-1943715333-project-member</nova:user>
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <nova:project uuid="93ed74fbaa5248349a016be1f013fb09">tempest-ServersAdmin275Test-1943715333</nova:project>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="412dd61d-1b1e-439f-b7f9-7e7c4e42924c"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <system>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <entry name="serial">c0e44b8d-95d4-4389-a1c4-39ec403311c2</entry>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <entry name="uuid">c0e44b8d-95d4-4389-a1c4-39ec403311c2</entry>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     </system>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   <os>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   </os>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   <features>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   </features>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk">
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       </source>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config">
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       </source>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:00:08 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/console.log" append="off"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <video>
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     </video>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:00:08 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:00:08 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:00:08 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:00:08 compute-0 nova_compute[251992]: </domain>
Dec 06 07:00:08 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:00:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 278 MiB data, 392 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 150 op/s
Dec 06 07:00:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:08 compute-0 podman[265110]: 2025-12-06 07:00:08.856186859 +0000 UTC m=+0.022059272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:00:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:08.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:09.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:09 compute-0 nova_compute[251992]: 2025-12-06 07:00:09.270 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:00:09 compute-0 nova_compute[251992]: 2025-12-06 07:00:09.271 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:00:09 compute-0 nova_compute[251992]: 2025-12-06 07:00:09.272 251996 INFO nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Using config drive
Dec 06 07:00:09 compute-0 nova_compute[251992]: 2025-12-06 07:00:09.300 251996 DEBUG nova.storage.rbd_utils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:09 compute-0 nova_compute[251992]: 2025-12-06 07:00:09.340 251996 DEBUG nova.objects.instance [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:09 compute-0 podman[265110]: 2025-12-06 07:00:09.433188228 +0000 UTC m=+0.599060621 container create 42e7d4bd4a0328d655dbb5d8bd32bf1b9ab5090ece24a801db3bc9ef8231aa90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Dec 06 07:00:09 compute-0 nova_compute[251992]: 2025-12-06 07:00:09.439 251996 DEBUG nova.objects.instance [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lazy-loading 'keypairs' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:09 compute-0 ceph-mon[74339]: pgmap v1169: 305 pgs: 305 active+clean; 239 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 116 op/s
Dec 06 07:00:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:00:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:00:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:00:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:00:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:00:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/197566583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/371361718' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:00:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/371361718' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:00:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2009438146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:09 compute-0 systemd[1]: Started libpod-conmon-42e7d4bd4a0328d655dbb5d8bd32bf1b9ab5090ece24a801db3bc9ef8231aa90.scope.
Dec 06 07:00:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:00:10 compute-0 podman[265110]: 2025-12-06 07:00:10.025431155 +0000 UTC m=+1.191303578 container init 42e7d4bd4a0328d655dbb5d8bd32bf1b9ab5090ece24a801db3bc9ef8231aa90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cori, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:00:10 compute-0 podman[265110]: 2025-12-06 07:00:10.033986854 +0000 UTC m=+1.199859257 container start 42e7d4bd4a0328d655dbb5d8bd32bf1b9ab5090ece24a801db3bc9ef8231aa90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cori, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:00:10 compute-0 brave_cori[265147]: 167 167
Dec 06 07:00:10 compute-0 podman[265110]: 2025-12-06 07:00:10.039967013 +0000 UTC m=+1.205839416 container attach 42e7d4bd4a0328d655dbb5d8bd32bf1b9ab5090ece24a801db3bc9ef8231aa90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cori, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:00:10 compute-0 systemd[1]: libpod-42e7d4bd4a0328d655dbb5d8bd32bf1b9ab5090ece24a801db3bc9ef8231aa90.scope: Deactivated successfully.
Dec 06 07:00:10 compute-0 conmon[265147]: conmon 42e7d4bd4a0328d655db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42e7d4bd4a0328d655dbb5d8bd32bf1b9ab5090ece24a801db3bc9ef8231aa90.scope/container/memory.events
Dec 06 07:00:10 compute-0 podman[265110]: 2025-12-06 07:00:10.041721901 +0000 UTC m=+1.207594294 container died 42e7d4bd4a0328d655dbb5d8bd32bf1b9ab5090ece24a801db3bc9ef8231aa90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 07:00:10 compute-0 nova_compute[251992]: 2025-12-06 07:00:10.066 251996 INFO nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Creating config drive at /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config
Dec 06 07:00:10 compute-0 nova_compute[251992]: 2025-12-06 07:00:10.073 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp125de7jd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eab6a85ba034903b972eeb004e1aa76b83c8e2a1be3ec8636aaef1c519050ee-merged.mount: Deactivated successfully.
Dec 06 07:00:10 compute-0 podman[265110]: 2025-12-06 07:00:10.16427779 +0000 UTC m=+1.330150183 container remove 42e7d4bd4a0328d655dbb5d8bd32bf1b9ab5090ece24a801db3bc9ef8231aa90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:00:10 compute-0 systemd[1]: libpod-conmon-42e7d4bd4a0328d655dbb5d8bd32bf1b9ab5090ece24a801db3bc9ef8231aa90.scope: Deactivated successfully.
Dec 06 07:00:10 compute-0 nova_compute[251992]: 2025-12-06 07:00:10.196 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp125de7jd" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:10 compute-0 nova_compute[251992]: 2025-12-06 07:00:10.228 251996 DEBUG nova.storage.rbd_utils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:10 compute-0 nova_compute[251992]: 2025-12-06 07:00:10.232 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:10 compute-0 podman[265207]: 2025-12-06 07:00:10.335491561 +0000 UTC m=+0.023751787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:00:10 compute-0 podman[265207]: 2025-12-06 07:00:10.874165755 +0000 UTC m=+0.562425961 container create 23ff737e63affe8e42b2de7d8e8bdeeba70d6d33ff88df7ebd81c505fef5540c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:00:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 287 MiB data, 392 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 165 op/s
Dec 06 07:00:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:10.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:11.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:11 compute-0 systemd[1]: Started libpod-conmon-23ff737e63affe8e42b2de7d8e8bdeeba70d6d33ff88df7ebd81c505fef5540c.scope.
Dec 06 07:00:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a4f7b48a93a43dab125f79192b9aa7336bb1243c57a7e131db00803502e3f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a4f7b48a93a43dab125f79192b9aa7336bb1243c57a7e131db00803502e3f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a4f7b48a93a43dab125f79192b9aa7336bb1243c57a7e131db00803502e3f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a4f7b48a93a43dab125f79192b9aa7336bb1243c57a7e131db00803502e3f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27a4f7b48a93a43dab125f79192b9aa7336bb1243c57a7e131db00803502e3f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:11 compute-0 podman[265207]: 2025-12-06 07:00:11.222678011 +0000 UTC m=+0.910938237 container init 23ff737e63affe8e42b2de7d8e8bdeeba70d6d33ff88df7ebd81c505fef5540c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec 06 07:00:11 compute-0 podman[265207]: 2025-12-06 07:00:11.23238827 +0000 UTC m=+0.920648496 container start 23ff737e63affe8e42b2de7d8e8bdeeba70d6d33ff88df7ebd81c505fef5540c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 07:00:11 compute-0 ceph-mon[74339]: pgmap v1170: 305 pgs: 305 active+clean; 278 MiB data, 392 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 150 op/s
Dec 06 07:00:11 compute-0 podman[265207]: 2025-12-06 07:00:11.271196969 +0000 UTC m=+0.959457185 container attach 23ff737e63affe8e42b2de7d8e8bdeeba70d6d33ff88df7ebd81c505fef5540c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 07:00:11 compute-0 nova_compute[251992]: 2025-12-06 07:00:11.407 251996 DEBUG oslo_concurrency.processutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:11 compute-0 nova_compute[251992]: 2025-12-06 07:00:11.409 251996 INFO nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Deleting local config drive /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config because it was imported into RBD.
Dec 06 07:00:11 compute-0 systemd-machined[212986]: New machine qemu-6-instance-0000000c.
Dec 06 07:00:11 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-0000000c.
Dec 06 07:00:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:12 compute-0 eager_bardeen[265227]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:00:12 compute-0 eager_bardeen[265227]: --> relative data size: 1.0
Dec 06 07:00:12 compute-0 eager_bardeen[265227]: --> All data devices are unavailable
Dec 06 07:00:12 compute-0 systemd[1]: libpod-23ff737e63affe8e42b2de7d8e8bdeeba70d6d33ff88df7ebd81c505fef5540c.scope: Deactivated successfully.
Dec 06 07:00:12 compute-0 podman[265207]: 2025-12-06 07:00:12.061330421 +0000 UTC m=+1.749590627 container died 23ff737e63affe8e42b2de7d8e8bdeeba70d6d33ff88df7ebd81c505fef5540c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.258 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-27a4f7b48a93a43dab125f79192b9aa7336bb1243c57a7e131db00803502e3f9-merged.mount: Deactivated successfully.
Dec 06 07:00:12 compute-0 podman[265207]: 2025-12-06 07:00:12.418580709 +0000 UTC m=+2.106840925 container remove 23ff737e63affe8e42b2de7d8e8bdeeba70d6d33ff88df7ebd81c505fef5540c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bardeen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Dec 06 07:00:12 compute-0 systemd[1]: libpod-conmon-23ff737e63affe8e42b2de7d8e8bdeeba70d6d33ff88df7ebd81c505fef5540c.scope: Deactivated successfully.
Dec 06 07:00:12 compute-0 sudo[265025]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Dec 06 07:00:12 compute-0 ceph-mon[74339]: pgmap v1171: 305 pgs: 305 active+clean; 287 MiB data, 392 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 165 op/s
Dec 06 07:00:12 compute-0 sudo[265305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:12 compute-0 sudo[265305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:12 compute-0 sudo[265305]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Dec 06 07:00:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Dec 06 07:00:12 compute-0 sudo[265334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:00:12 compute-0 sudo[265334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.626 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for c0e44b8d-95d4-4389-a1c4-39ec403311c2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.627 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004412.6252103, c0e44b8d-95d4-4389-a1c4-39ec403311c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.628 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] VM Resumed (Lifecycle Event)
Dec 06 07:00:12 compute-0 sudo[265334]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.632 251996 DEBUG nova.compute.manager [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.632 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.636 251996 INFO nova.virt.libvirt.driver [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance spawned successfully.
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.637 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.651 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.654 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.674 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.674 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.675 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.675 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.675 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.676 251996 DEBUG nova.virt.libvirt.driver [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.687 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.687 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004412.6270664, c0e44b8d-95d4-4389-a1c4-39ec403311c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.687 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] VM Started (Lifecycle Event)
Dec 06 07:00:12 compute-0 sudo[265361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:12 compute-0 sudo[265361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:12 compute-0 sudo[265361]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.722 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.725 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.753 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:00:12 compute-0 sudo[265386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:00:12 compute-0 sudo[265386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.786 251996 DEBUG nova.compute.manager [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.877 251996 DEBUG oslo_concurrency.lockutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.878 251996 DEBUG oslo_concurrency.lockutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:12 compute-0 nova_compute[251992]: 2025-12-06 07:00:12.878 251996 DEBUG nova.objects.instance [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:00:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.1 MiB/s wr, 172 op/s
Dec 06 07:00:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:12.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:00:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:00:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:00:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:00:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:00:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:00:13 compute-0 nova_compute[251992]: 2025-12-06 07:00:13.000 251996 DEBUG oslo_concurrency.lockutils [None req-b9cf4658-f0ef-4527-a0dc-0a21b8e47e69 c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:13.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:13 compute-0 podman[265451]: 2025-12-06 07:00:13.075038295 +0000 UTC m=+0.024519207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:00:13 compute-0 podman[265451]: 2025-12-06 07:00:13.611981901 +0000 UTC m=+0.561462763 container create 49111c7fd33d5b147e7779e23f7befd9e504b02ff81a0bae29dfe52dcbf34e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermat, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:00:13 compute-0 nova_compute[251992]: 2025-12-06 07:00:13.715 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:14 compute-0 ceph-mon[74339]: osdmap e160: 3 total, 3 up, 3 in
Dec 06 07:00:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1111738318' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:14 compute-0 systemd[1]: Started libpod-conmon-49111c7fd33d5b147e7779e23f7befd9e504b02ff81a0bae29dfe52dcbf34e73.scope.
Dec 06 07:00:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:00:14 compute-0 podman[265451]: 2025-12-06 07:00:14.162460111 +0000 UTC m=+1.111941003 container init 49111c7fd33d5b147e7779e23f7befd9e504b02ff81a0bae29dfe52dcbf34e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermat, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:00:14 compute-0 podman[265451]: 2025-12-06 07:00:14.172036837 +0000 UTC m=+1.121517699 container start 49111c7fd33d5b147e7779e23f7befd9e504b02ff81a0bae29dfe52dcbf34e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermat, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 07:00:14 compute-0 podman[265451]: 2025-12-06 07:00:14.176142357 +0000 UTC m=+1.125623219 container attach 49111c7fd33d5b147e7779e23f7befd9e504b02ff81a0bae29dfe52dcbf34e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 07:00:14 compute-0 admiring_fermat[265467]: 167 167
Dec 06 07:00:14 compute-0 systemd[1]: libpod-49111c7fd33d5b147e7779e23f7befd9e504b02ff81a0bae29dfe52dcbf34e73.scope: Deactivated successfully.
Dec 06 07:00:14 compute-0 podman[265451]: 2025-12-06 07:00:14.177299628 +0000 UTC m=+1.126780490 container died 49111c7fd33d5b147e7779e23f7befd9e504b02ff81a0bae29dfe52dcbf34e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 07:00:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b93b7f68c3f2387eef531257656819a6f9eb2146da26a0b7219bcba5bb96ce1-merged.mount: Deactivated successfully.
Dec 06 07:00:14 compute-0 podman[265451]: 2025-12-06 07:00:14.216290681 +0000 UTC m=+1.165771543 container remove 49111c7fd33d5b147e7779e23f7befd9e504b02ff81a0bae29dfe52dcbf34e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_fermat, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:00:14 compute-0 systemd[1]: libpod-conmon-49111c7fd33d5b147e7779e23f7befd9e504b02ff81a0bae29dfe52dcbf34e73.scope: Deactivated successfully.
Dec 06 07:00:14 compute-0 podman[265491]: 2025-12-06 07:00:14.389770083 +0000 UTC m=+0.047966244 container create d3d2b4870490467a177d5df247606e80077b4638150123a2749f77eadf573174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:00:14 compute-0 systemd[1]: Started libpod-conmon-d3d2b4870490467a177d5df247606e80077b4638150123a2749f77eadf573174.scope.
Dec 06 07:00:14 compute-0 sudo[265502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:14 compute-0 sudo[265502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:14 compute-0 sudo[265502]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62df9c214088ac101e96ef1939c6b8fb2f4ad2b03bafcf5993d6555915dd701/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:14 compute-0 podman[265491]: 2025-12-06 07:00:14.369256285 +0000 UTC m=+0.027452476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62df9c214088ac101e96ef1939c6b8fb2f4ad2b03bafcf5993d6555915dd701/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62df9c214088ac101e96ef1939c6b8fb2f4ad2b03bafcf5993d6555915dd701/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62df9c214088ac101e96ef1939c6b8fb2f4ad2b03bafcf5993d6555915dd701/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:14 compute-0 podman[265491]: 2025-12-06 07:00:14.475727664 +0000 UTC m=+0.133923845 container init d3d2b4870490467a177d5df247606e80077b4638150123a2749f77eadf573174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:00:14 compute-0 podman[265491]: 2025-12-06 07:00:14.483782049 +0000 UTC m=+0.141978210 container start d3d2b4870490467a177d5df247606e80077b4638150123a2749f77eadf573174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 07:00:14 compute-0 podman[265491]: 2025-12-06 07:00:14.490767766 +0000 UTC m=+0.148963957 container attach d3d2b4870490467a177d5df247606e80077b4638150123a2749f77eadf573174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 07:00:14 compute-0 sudo[265543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:14 compute-0 sudo[265543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:14 compute-0 sudo[265543]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:14 compute-0 podman[265534]: 2025-12-06 07:00:14.544623527 +0000 UTC m=+0.092652750 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:00:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.0 MiB/s wr, 229 op/s
Dec 06 07:00:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:14.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:15.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]: {
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:     "0": [
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:         {
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             "devices": [
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "/dev/loop3"
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             ],
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             "lv_name": "ceph_lv0",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             "lv_size": "7511998464",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             "name": "ceph_lv0",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             "tags": {
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "ceph.cluster_name": "ceph",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "ceph.crush_device_class": "",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "ceph.encrypted": "0",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "ceph.osd_id": "0",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "ceph.type": "block",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:                 "ceph.vdo": "0"
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             },
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             "type": "block",
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:             "vg_name": "ceph_vg0"
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:         }
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]:     ]
Dec 06 07:00:15 compute-0 happy_chaplygin[265532]: }
Dec 06 07:00:15 compute-0 systemd[1]: libpod-d3d2b4870490467a177d5df247606e80077b4638150123a2749f77eadf573174.scope: Deactivated successfully.
Dec 06 07:00:15 compute-0 podman[265491]: 2025-12-06 07:00:15.314780254 +0000 UTC m=+0.972976415 container died d3d2b4870490467a177d5df247606e80077b4638150123a2749f77eadf573174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:00:15 compute-0 ceph-mon[74339]: pgmap v1173: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.1 MiB/s wr, 172 op/s
Dec 06 07:00:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3130597809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f62df9c214088ac101e96ef1939c6b8fb2f4ad2b03bafcf5993d6555915dd701-merged.mount: Deactivated successfully.
Dec 06 07:00:15 compute-0 podman[265491]: 2025-12-06 07:00:15.977647081 +0000 UTC m=+1.635843242 container remove d3d2b4870490467a177d5df247606e80077b4638150123a2749f77eadf573174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:00:16 compute-0 systemd[1]: libpod-conmon-d3d2b4870490467a177d5df247606e80077b4638150123a2749f77eadf573174.scope: Deactivated successfully.
Dec 06 07:00:16 compute-0 sudo[265386]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:16 compute-0 sudo[265602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:16 compute-0 sudo[265602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:16 compute-0 sudo[265602]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:16 compute-0 sudo[265627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:00:16 compute-0 sudo[265627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:16 compute-0 sudo[265627]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:16 compute-0 sudo[265652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:16 compute-0 sudo[265652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:16 compute-0 sudo[265652]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:16 compute-0 sudo[265677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:00:16 compute-0 sudo[265677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:16 compute-0 podman[265739]: 2025-12-06 07:00:16.54755366 +0000 UTC m=+0.036285762 container create 92e24680b29763958efa4c209a5e438d99067d7207be45dc9de63570e89e1fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 07:00:16 compute-0 systemd[1]: Started libpod-conmon-92e24680b29763958efa4c209a5e438d99067d7207be45dc9de63570e89e1fd5.scope.
Dec 06 07:00:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:00:16 compute-0 podman[265739]: 2025-12-06 07:00:16.626632517 +0000 UTC m=+0.115364639 container init 92e24680b29763958efa4c209a5e438d99067d7207be45dc9de63570e89e1fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 07:00:16 compute-0 podman[265739]: 2025-12-06 07:00:16.533386861 +0000 UTC m=+0.022118983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:00:16 compute-0 podman[265739]: 2025-12-06 07:00:16.634371823 +0000 UTC m=+0.123103925 container start 92e24680b29763958efa4c209a5e438d99067d7207be45dc9de63570e89e1fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 07:00:16 compute-0 podman[265739]: 2025-12-06 07:00:16.637472366 +0000 UTC m=+0.126204488 container attach 92e24680b29763958efa4c209a5e438d99067d7207be45dc9de63570e89e1fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:00:16 compute-0 cranky_diffie[265755]: 167 167
Dec 06 07:00:16 compute-0 systemd[1]: libpod-92e24680b29763958efa4c209a5e438d99067d7207be45dc9de63570e89e1fd5.scope: Deactivated successfully.
Dec 06 07:00:16 compute-0 podman[265739]: 2025-12-06 07:00:16.641878754 +0000 UTC m=+0.130610856 container died 92e24680b29763958efa4c209a5e438d99067d7207be45dc9de63570e89e1fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 07:00:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b52d9f5b1260884eb0dee8caf8478de90fa096abf3ef513ba58d3215b5796f7-merged.mount: Deactivated successfully.
Dec 06 07:00:16 compute-0 podman[265739]: 2025-12-06 07:00:16.681220997 +0000 UTC m=+0.169953099 container remove 92e24680b29763958efa4c209a5e438d99067d7207be45dc9de63570e89e1fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_diffie, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:00:16 compute-0 systemd[1]: libpod-conmon-92e24680b29763958efa4c209a5e438d99067d7207be45dc9de63570e89e1fd5.scope: Deactivated successfully.
Dec 06 07:00:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:16 compute-0 podman[265779]: 2025-12-06 07:00:16.842328668 +0000 UTC m=+0.044631235 container create e0abe157a329e4fda2d689162065e74a423dd8df8d9ea7254f8bca82f9fe2957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_murdock, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 07:00:16 compute-0 systemd[1]: Started libpod-conmon-e0abe157a329e4fda2d689162065e74a423dd8df8d9ea7254f8bca82f9fe2957.scope.
Dec 06 07:00:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:00:16 compute-0 nova_compute[251992]: 2025-12-06 07:00:16.896 251996 INFO nova.compute.manager [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Rebuilding instance
Dec 06 07:00:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfce99b17541d52ad06f204761bdd0d57a3fb98404f59ccf9e375b5e5aa87a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfce99b17541d52ad06f204761bdd0d57a3fb98404f59ccf9e375b5e5aa87a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfce99b17541d52ad06f204761bdd0d57a3fb98404f59ccf9e375b5e5aa87a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfce99b17541d52ad06f204761bdd0d57a3fb98404f59ccf9e375b5e5aa87a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:00:16 compute-0 podman[265779]: 2025-12-06 07:00:16.909084333 +0000 UTC m=+0.111386920 container init e0abe157a329e4fda2d689162065e74a423dd8df8d9ea7254f8bca82f9fe2957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_murdock, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 07:00:16 compute-0 podman[265779]: 2025-12-06 07:00:16.918076955 +0000 UTC m=+0.120379522 container start e0abe157a329e4fda2d689162065e74a423dd8df8d9ea7254f8bca82f9fe2957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 07:00:16 compute-0 podman[265779]: 2025-12-06 07:00:16.823600607 +0000 UTC m=+0.025903194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:00:16 compute-0 podman[265779]: 2025-12-06 07:00:16.921003013 +0000 UTC m=+0.123305620 container attach e0abe157a329e4fda2d689162065e74a423dd8df8d9ea7254f8bca82f9fe2957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_murdock, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 07:00:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.3 MiB/s wr, 267 op/s
Dec 06 07:00:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:16.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:16 compute-0 ceph-mon[74339]: pgmap v1174: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.0 MiB/s wr, 229 op/s
Dec 06 07:00:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:17.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:17 compute-0 nova_compute[251992]: 2025-12-06 07:00:17.137 251996 DEBUG nova.objects.instance [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lazy-loading 'trusted_certs' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:17 compute-0 nova_compute[251992]: 2025-12-06 07:00:17.156 251996 DEBUG nova.compute.manager [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:00:17 compute-0 nova_compute[251992]: 2025-12-06 07:00:17.228 251996 DEBUG nova.objects.instance [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lazy-loading 'pci_requests' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:17 compute-0 nova_compute[251992]: 2025-12-06 07:00:17.259 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:17 compute-0 nova_compute[251992]: 2025-12-06 07:00:17.269 251996 DEBUG nova.objects.instance [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lazy-loading 'pci_devices' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:17 compute-0 nova_compute[251992]: 2025-12-06 07:00:17.294 251996 DEBUG nova.objects.instance [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lazy-loading 'resources' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:17 compute-0 nova_compute[251992]: 2025-12-06 07:00:17.308 251996 DEBUG nova.objects.instance [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lazy-loading 'migration_context' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:17 compute-0 nova_compute[251992]: 2025-12-06 07:00:17.322 251996 DEBUG nova.objects.instance [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:00:17 compute-0 nova_compute[251992]: 2025-12-06 07:00:17.326 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:00:17 compute-0 interesting_murdock[265796]: {
Dec 06 07:00:17 compute-0 interesting_murdock[265796]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:00:17 compute-0 interesting_murdock[265796]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:00:17 compute-0 interesting_murdock[265796]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:00:17 compute-0 interesting_murdock[265796]:         "osd_id": 0,
Dec 06 07:00:17 compute-0 interesting_murdock[265796]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:00:17 compute-0 interesting_murdock[265796]:         "type": "bluestore"
Dec 06 07:00:17 compute-0 interesting_murdock[265796]:     }
Dec 06 07:00:17 compute-0 interesting_murdock[265796]: }
Dec 06 07:00:17 compute-0 systemd[1]: libpod-e0abe157a329e4fda2d689162065e74a423dd8df8d9ea7254f8bca82f9fe2957.scope: Deactivated successfully.
Dec 06 07:00:17 compute-0 podman[265779]: 2025-12-06 07:00:17.799623522 +0000 UTC m=+1.001926089 container died e0abe157a329e4fda2d689162065e74a423dd8df8d9ea7254f8bca82f9fe2957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:00:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cfce99b17541d52ad06f204761bdd0d57a3fb98404f59ccf9e375b5e5aa87a3-merged.mount: Deactivated successfully.
Dec 06 07:00:17 compute-0 podman[265779]: 2025-12-06 07:00:17.855501497 +0000 UTC m=+1.057804064 container remove e0abe157a329e4fda2d689162065e74a423dd8df8d9ea7254f8bca82f9fe2957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:00:17 compute-0 systemd[1]: libpod-conmon-e0abe157a329e4fda2d689162065e74a423dd8df8d9ea7254f8bca82f9fe2957.scope: Deactivated successfully.
Dec 06 07:00:17 compute-0 sudo[265677]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:00:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:00:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:18 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1427829d-b39f-4535-b0f4-fa63a92a517f does not exist
Dec 06 07:00:18 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 61117248-d4e5-434a-8883-c9b409dd49f5 does not exist
Dec 06 07:00:18 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 938e66e9-b088-4172-a5f6-8682bfc9c24a does not exist
Dec 06 07:00:18 compute-0 sudo[265828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:18 compute-0 sudo[265828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:18 compute-0 sudo[265828]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:18 compute-0 sudo[265853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:00:18 compute-0 sudo[265853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:18 compute-0 sudo[265853]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Dec 06 07:00:18 compute-0 ceph-mon[74339]: pgmap v1175: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.3 MiB/s wr, 267 op/s
Dec 06 07:00:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:00:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:00:18
Dec 06 07:00:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:00:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:00:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'volumes', 'backups', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'vms']
Dec 06 07:00:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:00:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Dec 06 07:00:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Dec 06 07:00:18 compute-0 nova_compute[251992]: 2025-12-06 07:00:18.717 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 7.8 MiB/s rd, 418 KiB/s wr, 319 op/s
Dec 06 07:00:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:18.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:19.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:19 compute-0 ceph-mon[74339]: osdmap e161: 3 total, 3 up, 3 in
Dec 06 07:00:20 compute-0 ceph-mon[74339]: pgmap v1177: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 7.8 MiB/s rd, 418 KiB/s wr, 319 op/s
Dec 06 07:00:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/776762329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 8.0 MiB/s rd, 3.7 KiB/s wr, 318 op/s
Dec 06 07:00:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:20.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:21.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:22 compute-0 ceph-mon[74339]: pgmap v1178: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 8.0 MiB/s rd, 3.7 KiB/s wr, 318 op/s
Dec 06 07:00:22 compute-0 nova_compute[251992]: 2025-12-06 07:00:22.260 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:22 compute-0 podman[265881]: 2025-12-06 07:00:22.407720983 +0000 UTC m=+0.062994696 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 07:00:22 compute-0 podman[265882]: 2025-12-06 07:00:22.416863757 +0000 UTC m=+0.072007007 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:00:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 6.7 MiB/s rd, 3.6 KiB/s wr, 267 op/s
Dec 06 07:00:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:22.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:23.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:00:23 compute-0 nova_compute[251992]: 2025-12-06 07:00:23.569 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:00:23.570 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:00:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:00:23.573 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:00:23 compute-0 nova_compute[251992]: 2025-12-06 07:00:23.719 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:24 compute-0 ceph-mon[74339]: pgmap v1179: 305 pgs: 305 active+clean; 295 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 6.7 MiB/s rd, 3.6 KiB/s wr, 267 op/s
Dec 06 07:00:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 298 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 628 KiB/s wr, 174 op/s
Dec 06 07:00:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:24.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:25.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006614561522892239 of space, bias 1.0, pg target 1.9843684568676716 quantized to 32 (current 32)
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:00:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:00:26 compute-0 ceph-mon[74339]: pgmap v1180: 305 pgs: 305 active+clean; 298 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 628 KiB/s wr, 174 op/s
Dec 06 07:00:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Dec 06 07:00:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 325 MiB data, 430 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 130 op/s
Dec 06 07:00:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:26.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:27.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:27 compute-0 nova_compute[251992]: 2025-12-06 07:00:27.261 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:27 compute-0 nova_compute[251992]: 2025-12-06 07:00:27.365 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:00:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Dec 06 07:00:27 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Dec 06 07:00:28 compute-0 nova_compute[251992]: 2025-12-06 07:00:28.723 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 336 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.4 MiB/s wr, 112 op/s
Dec 06 07:00:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:28.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:29 compute-0 ceph-mon[74339]: pgmap v1181: 305 pgs: 305 active+clean; 325 MiB data, 430 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 130 op/s
Dec 06 07:00:29 compute-0 ceph-mon[74339]: osdmap e162: 3 total, 3 up, 3 in
Dec 06 07:00:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:29.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:30 compute-0 ceph-mon[74339]: pgmap v1183: 305 pgs: 305 active+clean; 336 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.4 MiB/s wr, 112 op/s
Dec 06 07:00:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 341 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 622 KiB/s rd, 4.9 MiB/s wr, 117 op/s
Dec 06 07:00:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:31.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:32 compute-0 nova_compute[251992]: 2025-12-06 07:00:32.261 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:32 compute-0 ceph-mon[74339]: pgmap v1184: 305 pgs: 305 active+clean; 341 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 622 KiB/s rd, 4.9 MiB/s wr, 117 op/s
Dec 06 07:00:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:00:32.576 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:00:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 346 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 809 KiB/s rd, 5.0 MiB/s wr, 137 op/s
Dec 06 07:00:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:32.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:33.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:33 compute-0 nova_compute[251992]: 2025-12-06 07:00:33.725 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:34 compute-0 sudo[265926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:34 compute-0 sudo[265926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:34 compute-0 sudo[265926]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:34 compute-0 sudo[265951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:34 compute-0 sudo[265951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:34 compute-0 sudo[265951]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 351 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.4 MiB/s wr, 158 op/s
Dec 06 07:00:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:34.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:35.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 353 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Dec 06 07:00:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:00:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:36.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:00:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:37.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:37 compute-0 nova_compute[251992]: 2025-12-06 07:00:37.263 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:37 compute-0 ceph-mon[74339]: pgmap v1185: 305 pgs: 305 active+clean; 346 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 809 KiB/s rd, 5.0 MiB/s wr, 137 op/s
Dec 06 07:00:38 compute-0 nova_compute[251992]: 2025-12-06 07:00:38.407 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:00:38 compute-0 nova_compute[251992]: 2025-12-06 07:00:38.728 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:38 compute-0 ceph-mon[74339]: pgmap v1186: 305 pgs: 305 active+clean; 351 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.4 MiB/s wr, 158 op/s
Dec 06 07:00:38 compute-0 ceph-mon[74339]: pgmap v1187: 305 pgs: 305 active+clean; 353 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Dec 06 07:00:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/60258488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 360 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 139 op/s
Dec 06 07:00:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:38.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:39.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:39 compute-0 nova_compute[251992]: 2025-12-06 07:00:39.878 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:39 compute-0 nova_compute[251992]: 2025-12-06 07:00:39.927 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:39 compute-0 ceph-mon[74339]: pgmap v1188: 305 pgs: 305 active+clean; 360 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 139 op/s
Dec 06 07:00:40 compute-0 nova_compute[251992]: 2025-12-06 07:00:40.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:40 compute-0 nova_compute[251992]: 2025-12-06 07:00:40.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:40 compute-0 nova_compute[251992]: 2025-12-06 07:00:40.746 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:40 compute-0 nova_compute[251992]: 2025-12-06 07:00:40.746 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:40 compute-0 nova_compute[251992]: 2025-12-06 07:00:40.747 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:40 compute-0 nova_compute[251992]: 2025-12-06 07:00:40.747 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:00:40 compute-0 nova_compute[251992]: 2025-12-06 07:00:40.747 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:40 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec 06 07:00:40 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000c.scope: Consumed 14.922s CPU time.
Dec 06 07:00:40 compute-0 systemd-machined[212986]: Machine qemu-6-instance-0000000c terminated.
Dec 06 07:00:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 368 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 694 KiB/s rd, 693 KiB/s wr, 97 op/s
Dec 06 07:00:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:40.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:41.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:00:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1361533499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:41 compute-0 nova_compute[251992]: 2025-12-06 07:00:41.215 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:41 compute-0 nova_compute[251992]: 2025-12-06 07:00:41.419 251996 INFO nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance shutdown successfully after 24 seconds.
Dec 06 07:00:41 compute-0 nova_compute[251992]: 2025-12-06 07:00:41.425 251996 INFO nova.virt.libvirt.driver [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance destroyed successfully.
Dec 06 07:00:41 compute-0 nova_compute[251992]: 2025-12-06 07:00:41.429 251996 INFO nova.virt.libvirt.driver [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance destroyed successfully.
Dec 06 07:00:41 compute-0 nova_compute[251992]: 2025-12-06 07:00:41.582 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:00:41 compute-0 nova_compute[251992]: 2025-12-06 07:00:41.583 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:00:41 compute-0 nova_compute[251992]: 2025-12-06 07:00:41.734 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:00:41 compute-0 nova_compute[251992]: 2025-12-06 07:00:41.735 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4915MB free_disk=20.80645751953125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:00:41 compute-0 nova_compute[251992]: 2025-12-06 07:00:41.736 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:41 compute-0 nova_compute[251992]: 2025-12-06 07:00:41.736 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1361533499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:42 compute-0 nova_compute[251992]: 2025-12-06 07:00:42.184 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c0e44b8d-95d4-4389-a1c4-39ec403311c2 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:00:42 compute-0 nova_compute[251992]: 2025-12-06 07:00:42.184 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:00:42 compute-0 nova_compute[251992]: 2025-12-06 07:00:42.184 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:00:42 compute-0 nova_compute[251992]: 2025-12-06 07:00:42.225 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:42 compute-0 nova_compute[251992]: 2025-12-06 07:00:42.306 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:00:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3072397604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:42 compute-0 nova_compute[251992]: 2025-12-06 07:00:42.677 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:42 compute-0 nova_compute[251992]: 2025-12-06 07:00:42.682 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:00:42 compute-0 ceph-mon[74339]: pgmap v1189: 305 pgs: 305 active+clean; 368 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 694 KiB/s rd, 693 KiB/s wr, 97 op/s
Dec 06 07:00:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3072397604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:42 compute-0 nova_compute[251992]: 2025-12-06 07:00:42.852 251996 INFO nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Deleting instance files /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2_del
Dec 06 07:00:42 compute-0 nova_compute[251992]: 2025-12-06 07:00:42.852 251996 INFO nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Deletion of /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2_del complete
Dec 06 07:00:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 377 MiB data, 486 MiB used, 21 GiB / 21 GiB avail; 636 KiB/s rd, 560 KiB/s wr, 87 op/s
Dec 06 07:00:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:00:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:00:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:00:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:00:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:00:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:00:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:42.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:43.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:43 compute-0 nova_compute[251992]: 2025-12-06 07:00:43.150 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:00:43 compute-0 nova_compute[251992]: 2025-12-06 07:00:43.152 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:00:43 compute-0 nova_compute[251992]: 2025-12-06 07:00:43.153 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:43 compute-0 nova_compute[251992]: 2025-12-06 07:00:43.731 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:44 compute-0 nova_compute[251992]: 2025-12-06 07:00:44.321 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:00:44 compute-0 nova_compute[251992]: 2025-12-06 07:00:44.322 251996 INFO nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Creating image(s)
Dec 06 07:00:44 compute-0 nova_compute[251992]: 2025-12-06 07:00:44.349 251996 DEBUG nova.storage.rbd_utils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:44 compute-0 nova_compute[251992]: 2025-12-06 07:00:44.378 251996 DEBUG nova.storage.rbd_utils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:44 compute-0 nova_compute[251992]: 2025-12-06 07:00:44.408 251996 DEBUG nova.storage.rbd_utils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:44 compute-0 nova_compute[251992]: 2025-12-06 07:00:44.412 251996 DEBUG oslo_concurrency.processutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:44 compute-0 nova_compute[251992]: 2025-12-06 07:00:44.475 251996 DEBUG oslo_concurrency.processutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:44 compute-0 nova_compute[251992]: 2025-12-06 07:00:44.476 251996 DEBUG oslo_concurrency.lockutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:44 compute-0 nova_compute[251992]: 2025-12-06 07:00:44.477 251996 DEBUG oslo_concurrency.lockutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:44 compute-0 nova_compute[251992]: 2025-12-06 07:00:44.477 251996 DEBUG oslo_concurrency.lockutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:44 compute-0 nova_compute[251992]: 2025-12-06 07:00:44.509 251996 DEBUG nova.storage.rbd_utils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:44 compute-0 nova_compute[251992]: 2025-12-06 07:00:44.513 251996 DEBUG oslo_concurrency.processutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 350 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 490 KiB/s rd, 1.4 MiB/s wr, 85 op/s
Dec 06 07:00:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:45.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.153 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.154 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.154 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.155 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:00:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3478449055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2206640894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.180 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-c0e44b8d-95d4-4389-a1c4-39ec403311c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.182 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-c0e44b8d-95d4-4389-a1c4-39ec403311c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.182 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.183 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.434 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:00:45 compute-0 podman[266142]: 2025-12-06 07:00:45.439989153 +0000 UTC m=+0.091078478 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.768 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.830 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-c0e44b8d-95d4-4389-a1c4-39ec403311c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.831 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.832 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.832 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.832 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.833 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.833 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.857 251996 DEBUG oslo_concurrency.processutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:45 compute-0 nova_compute[251992]: 2025-12-06 07:00:45.932 251996 DEBUG nova.storage.rbd_utils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] resizing rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.030 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.030 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Ensure instance console log exists: /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.031 251996 DEBUG oslo_concurrency.lockutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.031 251996 DEBUG oslo_concurrency.lockutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.031 251996 DEBUG oslo_concurrency.lockutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.033 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.037 251996 WARNING nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.041 251996 DEBUG nova.virt.libvirt.host [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.042 251996 DEBUG nova.virt.libvirt.host [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.045 251996 DEBUG nova.virt.libvirt.host [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.045 251996 DEBUG nova.virt.libvirt.host [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.047 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.047 251996 DEBUG nova.virt.hardware [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.047 251996 DEBUG nova.virt.hardware [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.047 251996 DEBUG nova.virt.hardware [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.048 251996 DEBUG nova.virt.hardware [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.048 251996 DEBUG nova.virt.hardware [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.048 251996 DEBUG nova.virt.hardware [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.049 251996 DEBUG nova.virt.hardware [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.049 251996 DEBUG nova.virt.hardware [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.049 251996 DEBUG nova.virt.hardware [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.050 251996 DEBUG nova.virt.hardware [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.050 251996 DEBUG nova.virt.hardware [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.050 251996 DEBUG nova.objects.instance [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lazy-loading 'vcpu_model' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.069 251996 DEBUG oslo_concurrency.processutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:00:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4110431903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.656 251996 DEBUG oslo_concurrency.processutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.685 251996 DEBUG nova.storage.rbd_utils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:46 compute-0 nova_compute[251992]: 2025-12-06 07:00:46.689 251996 DEBUG oslo_concurrency.processutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:46 compute-0 ceph-mon[74339]: pgmap v1190: 305 pgs: 305 active+clean; 377 MiB data, 486 MiB used, 21 GiB / 21 GiB avail; 636 KiB/s rd, 560 KiB/s wr, 87 op/s
Dec 06 07:00:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/710089016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2444872434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:46 compute-0 ceph-mon[74339]: pgmap v1191: 305 pgs: 305 active+clean; 350 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 490 KiB/s rd, 1.4 MiB/s wr, 85 op/s
Dec 06 07:00:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/817716717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3634300850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 360 MiB data, 476 MiB used, 21 GiB / 21 GiB avail; 276 KiB/s rd, 3.1 MiB/s wr, 114 op/s
Dec 06 07:00:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:46.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:00:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:47.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:00:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:00:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2989411430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:47 compute-0 nova_compute[251992]: 2025-12-06 07:00:47.140 251996 DEBUG oslo_concurrency.processutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:47 compute-0 nova_compute[251992]: 2025-12-06 07:00:47.143 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:00:47 compute-0 nova_compute[251992]:   <uuid>c0e44b8d-95d4-4389-a1c4-39ec403311c2</uuid>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   <name>instance-0000000c</name>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <nova:name>tempest-ServersAdmin275Test-server-818914075</nova:name>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:00:46</nova:creationTime>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <nova:user uuid="c471515305d641a4ad0f6c96b0a3a99c">tempest-ServersAdmin275Test-1943715333-project-member</nova:user>
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <nova:project uuid="93ed74fbaa5248349a016be1f013fb09">tempest-ServersAdmin275Test-1943715333</nova:project>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <system>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <entry name="serial">c0e44b8d-95d4-4389-a1c4-39ec403311c2</entry>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <entry name="uuid">c0e44b8d-95d4-4389-a1c4-39ec403311c2</entry>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     </system>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   <os>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   </os>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   <features>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   </features>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk">
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       </source>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config">
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       </source>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:00:47 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/console.log" append="off"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <video>
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     </video>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:00:47 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:00:47 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:00:47 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:00:47 compute-0 nova_compute[251992]: </domain>
Dec 06 07:00:47 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:00:47 compute-0 nova_compute[251992]: 2025-12-06 07:00:47.307 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4110431903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:47 compute-0 ceph-mon[74339]: pgmap v1192: 305 pgs: 305 active+clean; 360 MiB data, 476 MiB used, 21 GiB / 21 GiB avail; 276 KiB/s rd, 3.1 MiB/s wr, 114 op/s
Dec 06 07:00:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2989411430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:00:48 compute-0 nova_compute[251992]: 2025-12-06 07:00:48.386 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:00:48 compute-0 nova_compute[251992]: 2025-12-06 07:00:48.387 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:00:48 compute-0 nova_compute[251992]: 2025-12-06 07:00:48.387 251996 INFO nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Using config drive
Dec 06 07:00:48 compute-0 nova_compute[251992]: 2025-12-06 07:00:48.414 251996 DEBUG nova.storage.rbd_utils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:48 compute-0 nova_compute[251992]: 2025-12-06 07:00:48.734 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 360 MiB data, 476 MiB used, 21 GiB / 21 GiB avail; 149 KiB/s rd, 3.0 MiB/s wr, 104 op/s
Dec 06 07:00:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:48.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:00:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:49.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:00:49 compute-0 nova_compute[251992]: 2025-12-06 07:00:49.666 251996 DEBUG nova.objects.instance [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lazy-loading 'ec2_ids' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:49 compute-0 nova_compute[251992]: 2025-12-06 07:00:49.757 251996 DEBUG nova.objects.instance [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lazy-loading 'keypairs' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 376 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Dec 06 07:00:50 compute-0 ceph-mon[74339]: pgmap v1193: 305 pgs: 305 active+clean; 360 MiB data, 476 MiB used, 21 GiB / 21 GiB avail; 149 KiB/s rd, 3.0 MiB/s wr, 104 op/s
Dec 06 07:00:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:50.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:51.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:51 compute-0 nova_compute[251992]: 2025-12-06 07:00:51.491 251996 INFO nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Creating config drive at /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config
Dec 06 07:00:51 compute-0 nova_compute[251992]: 2025-12-06 07:00:51.496 251996 DEBUG oslo_concurrency.processutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0xc53tu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:51 compute-0 nova_compute[251992]: 2025-12-06 07:00:51.625 251996 DEBUG oslo_concurrency.processutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0xc53tu" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:51 compute-0 nova_compute[251992]: 2025-12-06 07:00:51.655 251996 DEBUG nova.storage.rbd_utils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] rbd image c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:00:51 compute-0 nova_compute[251992]: 2025-12-06 07:00:51.658 251996 DEBUG oslo_concurrency.processutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:00:51 compute-0 nova_compute[251992]: 2025-12-06 07:00:51.837 251996 DEBUG oslo_concurrency.processutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config c0e44b8d-95d4-4389-a1c4-39ec403311c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:00:51 compute-0 nova_compute[251992]: 2025-12-06 07:00:51.838 251996 INFO nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Deleting local config drive /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2/disk.config because it was imported into RBD.
Dec 06 07:00:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:51 compute-0 systemd-machined[212986]: New machine qemu-7-instance-0000000c.
Dec 06 07:00:51 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-0000000c.
Dec 06 07:00:51 compute-0 ceph-mon[74339]: pgmap v1194: 305 pgs: 305 active+clean; 376 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.348 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.439 251996 DEBUG nova.compute.manager [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.440 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.440 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for c0e44b8d-95d4-4389-a1c4-39ec403311c2 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.440 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004452.43895, c0e44b8d-95d4-4389-a1c4-39ec403311c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.441 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] VM Resumed (Lifecycle Event)
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.446 251996 INFO nova.virt.libvirt.driver [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance spawned successfully.
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.447 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.733 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.738 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.827 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.827 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.828 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.828 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.829 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.829 251996 DEBUG nova.virt.libvirt.driver [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.842 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.842 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004452.43907, c0e44b8d-95d4-4389-a1c4-39ec403311c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.843 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] VM Started (Lifecycle Event)
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.881 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:00:52 compute-0 nova_compute[251992]: 2025-12-06 07:00:52.884 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:00:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 376 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 155 op/s
Dec 06 07:00:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:53.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:53 compute-0 nova_compute[251992]: 2025-12-06 07:00:53.004 251996 DEBUG nova.compute.manager [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:00:53 compute-0 nova_compute[251992]: 2025-12-06 07:00:53.069 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:00:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:53.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:53 compute-0 nova_compute[251992]: 2025-12-06 07:00:53.308 251996 DEBUG oslo_concurrency.lockutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:53 compute-0 nova_compute[251992]: 2025-12-06 07:00:53.309 251996 DEBUG oslo_concurrency.lockutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:53 compute-0 nova_compute[251992]: 2025-12-06 07:00:53.309 251996 DEBUG nova.objects.instance [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:00:53 compute-0 podman[266423]: 2025-12-06 07:00:53.401267777 +0000 UTC m=+0.058506557 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 07:00:53 compute-0 podman[266424]: 2025-12-06 07:00:53.418023795 +0000 UTC m=+0.069746797 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 06 07:00:53 compute-0 nova_compute[251992]: 2025-12-06 07:00:53.736 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:53 compute-0 nova_compute[251992]: 2025-12-06 07:00:53.977 251996 DEBUG oslo_concurrency.lockutils [None req-1829129f-b1e5-4fc6-bedf-474aba2b1249 24a0d7bc709e496a899c9cdc3d4626d3 2afd88dcb7eb49519ff580bf538ba30c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:54 compute-0 ceph-mon[74339]: pgmap v1195: 305 pgs: 305 active+clean; 376 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 155 op/s
Dec 06 07:00:54 compute-0 sudo[266466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:54 compute-0 sudo[266466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:54 compute-0 sudo[266466]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:54 compute-0 sudo[266491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:00:54 compute-0 sudo[266491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:00:54 compute-0 sudo[266491]: pam_unix(sudo:session): session closed for user root
Dec 06 07:00:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 341 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 167 op/s
Dec 06 07:00:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:55.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:00:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:55.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:00:55 compute-0 nova_compute[251992]: 2025-12-06 07:00:55.984 251996 DEBUG oslo_concurrency.lockutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Acquiring lock "c0e44b8d-95d4-4389-a1c4-39ec403311c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:55 compute-0 nova_compute[251992]: 2025-12-06 07:00:55.985 251996 DEBUG oslo_concurrency.lockutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "c0e44b8d-95d4-4389-a1c4-39ec403311c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:55 compute-0 nova_compute[251992]: 2025-12-06 07:00:55.985 251996 DEBUG oslo_concurrency.lockutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Acquiring lock "c0e44b8d-95d4-4389-a1c4-39ec403311c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:00:55 compute-0 nova_compute[251992]: 2025-12-06 07:00:55.985 251996 DEBUG oslo_concurrency.lockutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "c0e44b8d-95d4-4389-a1c4-39ec403311c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:00:55 compute-0 nova_compute[251992]: 2025-12-06 07:00:55.985 251996 DEBUG oslo_concurrency.lockutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "c0e44b8d-95d4-4389-a1c4-39ec403311c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:00:55 compute-0 nova_compute[251992]: 2025-12-06 07:00:55.986 251996 INFO nova.compute.manager [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Terminating instance
Dec 06 07:00:55 compute-0 nova_compute[251992]: 2025-12-06 07:00:55.987 251996 DEBUG oslo_concurrency.lockutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Acquiring lock "refresh_cache-c0e44b8d-95d4-4389-a1c4-39ec403311c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:00:55 compute-0 nova_compute[251992]: 2025-12-06 07:00:55.987 251996 DEBUG oslo_concurrency.lockutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Acquired lock "refresh_cache-c0e44b8d-95d4-4389-a1c4-39ec403311c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:00:55 compute-0 nova_compute[251992]: 2025-12-06 07:00:55.988 251996 DEBUG nova.network.neutron [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:00:56 compute-0 nova_compute[251992]: 2025-12-06 07:00:56.186 251996 DEBUG nova.network.neutron [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:00:56 compute-0 nova_compute[251992]: 2025-12-06 07:00:56.588 251996 DEBUG nova.network.neutron [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:00:56 compute-0 nova_compute[251992]: 2025-12-06 07:00:56.604 251996 DEBUG oslo_concurrency.lockutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Releasing lock "refresh_cache-c0e44b8d-95d4-4389-a1c4-39ec403311c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:00:56 compute-0 nova_compute[251992]: 2025-12-06 07:00:56.605 251996 DEBUG nova.compute.manager [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:00:56 compute-0 ceph-mon[74339]: pgmap v1196: 305 pgs: 305 active+clean; 341 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 167 op/s
Dec 06 07:00:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/487672312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:56 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec 06 07:00:56 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000c.scope: Consumed 4.771s CPU time.
Dec 06 07:00:56 compute-0 systemd-machined[212986]: Machine qemu-7-instance-0000000c terminated.
Dec 06 07:00:56 compute-0 nova_compute[251992]: 2025-12-06 07:00:56.824 251996 INFO nova.virt.libvirt.driver [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance destroyed successfully.
Dec 06 07:00:56 compute-0 nova_compute[251992]: 2025-12-06 07:00:56.825 251996 DEBUG nova.objects.instance [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lazy-loading 'resources' on Instance uuid c0e44b8d-95d4-4389-a1c4-39ec403311c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:00:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:00:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 296 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 233 op/s
Dec 06 07:00:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:57.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:57.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:57 compute-0 nova_compute[251992]: 2025-12-06 07:00:57.350 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:58 compute-0 nova_compute[251992]: 2025-12-06 07:00:58.742 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:00:58 compute-0 ceph-mon[74339]: pgmap v1197: 305 pgs: 305 active+clean; 296 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 233 op/s
Dec 06 07:00:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1289067618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:00:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 296 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 617 KiB/s wr, 173 op/s
Dec 06 07:00:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:00:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:00:59.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:00:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:00:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:00:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:00:59.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:01:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4229951688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 302 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 208 op/s
Dec 06 07:01:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:01.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:01 compute-0 anacron[30883]: Job `cron.weekly' started
Dec 06 07:01:01 compute-0 anacron[30883]: Job `cron.weekly' terminated
Dec 06 07:01:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:01.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:01 compute-0 CROND[266544]: (root) CMD (run-parts /etc/cron.hourly)
Dec 06 07:01:01 compute-0 run-parts[266547]: (/etc/cron.hourly) starting 0anacron
Dec 06 07:01:01 compute-0 run-parts[266553]: (/etc/cron.hourly) finished 0anacron
Dec 06 07:01:01 compute-0 CROND[266543]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 06 07:01:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:02 compute-0 ceph-mon[74339]: pgmap v1198: 305 pgs: 305 active+clean; 296 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 617 KiB/s wr, 173 op/s
Dec 06 07:01:02 compute-0 nova_compute[251992]: 2025-12-06 07:01:02.235 251996 INFO nova.virt.libvirt.driver [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Deleting instance files /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2_del
Dec 06 07:01:02 compute-0 nova_compute[251992]: 2025-12-06 07:01:02.237 251996 INFO nova.virt.libvirt.driver [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Deletion of /var/lib/nova/instances/c0e44b8d-95d4-4389-a1c4-39ec403311c2_del complete
Dec 06 07:01:02 compute-0 nova_compute[251992]: 2025-12-06 07:01:02.352 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 308 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 184 op/s
Dec 06 07:01:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:03.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:03 compute-0 ceph-mon[74339]: pgmap v1199: 305 pgs: 305 active+clean; 302 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 208 op/s
Dec 06 07:01:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:03.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:03 compute-0 nova_compute[251992]: 2025-12-06 07:01:03.744 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:03.811 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:03.812 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:03.812 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:04 compute-0 ceph-mon[74339]: pgmap v1200: 305 pgs: 305 active+clean; 308 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 184 op/s
Dec 06 07:01:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 323 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 200 op/s
Dec 06 07:01:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:05.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:05.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:05 compute-0 nova_compute[251992]: 2025-12-06 07:01:05.256 251996 INFO nova.compute.manager [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Took 8.65 seconds to destroy the instance on the hypervisor.
Dec 06 07:01:05 compute-0 nova_compute[251992]: 2025-12-06 07:01:05.257 251996 DEBUG oslo.service.loopingcall [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:01:05 compute-0 nova_compute[251992]: 2025-12-06 07:01:05.258 251996 DEBUG nova.compute.manager [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:01:05 compute-0 nova_compute[251992]: 2025-12-06 07:01:05.258 251996 DEBUG nova.network.neutron [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:01:06 compute-0 nova_compute[251992]: 2025-12-06 07:01:06.036 251996 DEBUG nova.network.neutron [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:01:06 compute-0 nova_compute[251992]: 2025-12-06 07:01:06.108 251996 DEBUG nova.network.neutron [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:01:06 compute-0 nova_compute[251992]: 2025-12-06 07:01:06.139 251996 INFO nova.compute.manager [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Took 0.88 seconds to deallocate network for instance.
Dec 06 07:01:06 compute-0 nova_compute[251992]: 2025-12-06 07:01:06.235 251996 DEBUG oslo_concurrency.lockutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:06 compute-0 nova_compute[251992]: 2025-12-06 07:01:06.236 251996 DEBUG oslo_concurrency.lockutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:06 compute-0 nova_compute[251992]: 2025-12-06 07:01:06.313 251996 DEBUG oslo_concurrency.processutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:06 compute-0 ceph-mon[74339]: pgmap v1201: 305 pgs: 305 active+clean; 323 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 200 op/s
Dec 06 07:01:06 compute-0 nova_compute[251992]: 2025-12-06 07:01:06.753 251996 DEBUG oslo_concurrency.processutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:06 compute-0 nova_compute[251992]: 2025-12-06 07:01:06.759 251996 DEBUG nova.compute.provider_tree [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:01:06 compute-0 nova_compute[251992]: 2025-12-06 07:01:06.797 251996 DEBUG nova.scheduler.client.report [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:01:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:06 compute-0 nova_compute[251992]: 2025-12-06 07:01:06.874 251996 DEBUG oslo_concurrency.lockutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 331 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Dec 06 07:01:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:07.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:07 compute-0 nova_compute[251992]: 2025-12-06 07:01:07.125 251996 INFO nova.scheduler.client.report [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Deleted allocations for instance c0e44b8d-95d4-4389-a1c4-39ec403311c2
Dec 06 07:01:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:07.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:07 compute-0 nova_compute[251992]: 2025-12-06 07:01:07.231 251996 DEBUG oslo_concurrency.lockutils [None req-baf6bc72-1b74-4cae-8d9b-3ce61f4fd8ad c471515305d641a4ad0f6c96b0a3a99c 93ed74fbaa5248349a016be1f013fb09 - - default default] Lock "c0e44b8d-95d4-4389-a1c4-39ec403311c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:07 compute-0 nova_compute[251992]: 2025-12-06 07:01:07.354 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/347649379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:08 compute-0 nova_compute[251992]: 2025-12-06 07:01:08.746 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 331 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 357 KiB/s rd, 3.9 MiB/s wr, 129 op/s
Dec 06 07:01:08 compute-0 ceph-mon[74339]: pgmap v1202: 305 pgs: 305 active+clean; 331 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Dec 06 07:01:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2183247011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:01:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2183247011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:01:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:09.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:09.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:10 compute-0 ceph-mon[74339]: pgmap v1203: 305 pgs: 305 active+clean; 331 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 357 KiB/s rd, 3.9 MiB/s wr, 129 op/s
Dec 06 07:01:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 331 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 370 KiB/s rd, 3.9 MiB/s wr, 153 op/s
Dec 06 07:01:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:11.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:11.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:11 compute-0 nova_compute[251992]: 2025-12-06 07:01:11.823 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004456.8218653, c0e44b8d-95d4-4389-a1c4-39ec403311c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:01:11 compute-0 nova_compute[251992]: 2025-12-06 07:01:11.824 251996 INFO nova.compute.manager [-] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] VM Stopped (Lifecycle Event)
Dec 06 07:01:11 compute-0 nova_compute[251992]: 2025-12-06 07:01:11.870 251996 DEBUG nova.compute.manager [None req-f1df7ff4-f3d6-4ba2-ace3-77584fdcc2fe - - - - - -] [instance: c0e44b8d-95d4-4389-a1c4-39ec403311c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:01:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:12 compute-0 nova_compute[251992]: 2025-12-06 07:01:12.355 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:12 compute-0 ceph-mon[74339]: pgmap v1204: 305 pgs: 305 active+clean; 331 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 370 KiB/s rd, 3.9 MiB/s wr, 153 op/s
Dec 06 07:01:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 331 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 340 KiB/s rd, 1.4 MiB/s wr, 137 op/s
Dec 06 07:01:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:01:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:01:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:01:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:01:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:01:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:01:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:13.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:13.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Dec 06 07:01:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/698066887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2348888828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3837711917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:13 compute-0 nova_compute[251992]: 2025-12-06 07:01:13.748 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Dec 06 07:01:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Dec 06 07:01:14 compute-0 sudo[266584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:14 compute-0 sudo[266584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:14 compute-0 sudo[266584]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 331 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 222 KiB/s rd, 590 KiB/s wr, 94 op/s
Dec 06 07:01:14 compute-0 sudo[266609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:14 compute-0 sudo[266609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:14 compute-0 sudo[266609]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:15.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:15.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:15 compute-0 ceph-mon[74339]: pgmap v1205: 305 pgs: 305 active+clean; 331 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 340 KiB/s rd, 1.4 MiB/s wr, 137 op/s
Dec 06 07:01:15 compute-0 ceph-mon[74339]: osdmap e163: 3 total, 3 up, 3 in
Dec 06 07:01:16 compute-0 podman[266634]: 2025-12-06 07:01:16.419186159 +0000 UTC m=+0.081321977 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:01:16 compute-0 ceph-mon[74339]: pgmap v1207: 305 pgs: 305 active+clean; 331 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 222 KiB/s rd, 590 KiB/s wr, 94 op/s
Dec 06 07:01:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1114402516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 331 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 45 KiB/s wr, 103 op/s
Dec 06 07:01:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:17.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:17.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:17 compute-0 nova_compute[251992]: 2025-12-06 07:01:17.356 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4001666700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:01:18
Dec 06 07:01:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:01:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:01:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'backups', 'images', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.meta']
Dec 06 07:01:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:01:18 compute-0 sudo[266662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:18 compute-0 sudo[266662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:18 compute-0 sudo[266662]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:18 compute-0 sudo[266687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:01:18 compute-0 sudo[266687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:18 compute-0 sudo[266687]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:18 compute-0 sudo[266712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:18 compute-0 sudo[266712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:18 compute-0 sudo[266712]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:18 compute-0 sudo[266737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:01:18 compute-0 sudo[266737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:18 compute-0 nova_compute[251992]: 2025-12-06 07:01:18.751 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 331 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 45 KiB/s wr, 103 op/s
Dec 06 07:01:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:19 compute-0 sudo[266737]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:19.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:19 compute-0 ceph-mon[74339]: pgmap v1208: 305 pgs: 305 active+clean; 331 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 45 KiB/s wr, 103 op/s
Dec 06 07:01:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:01:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 19 KiB/s wr, 244 op/s
Dec 06 07:01:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:21.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:21.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:01:21 compute-0 ceph-mon[74339]: pgmap v1209: 305 pgs: 305 active+clean; 331 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 64 KiB/s rd, 45 KiB/s wr, 103 op/s
Dec 06 07:01:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:01:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:01:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:01:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:01:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:01:22 compute-0 nova_compute[251992]: 2025-12-06 07:01:22.356 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:22 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0e54e836-0409-46c2-bfb3-3cb615a5d531 does not exist
Dec 06 07:01:22 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d0bb4223-17d2-41fc-a60b-8fdcacc285cf does not exist
Dec 06 07:01:22 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f49953c7-6710-47f9-be5c-f18c27069e26 does not exist
Dec 06 07:01:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:01:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:01:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:01:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:01:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:01:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:01:22 compute-0 sudo[266795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:22 compute-0 sudo[266795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:22 compute-0 sudo[266795]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:22 compute-0 sudo[266820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:01:22 compute-0 sudo[266820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:22 compute-0 sudo[266820]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:22 compute-0 sudo[266845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:22 compute-0 sudo[266845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:22 compute-0 sudo[266845]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:22 compute-0 sudo[266870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:01:22 compute-0 sudo[266870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:22 compute-0 ceph-mon[74339]: pgmap v1210: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 19 KiB/s wr, 244 op/s
Dec 06 07:01:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:01:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:01:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:22 compute-0 podman[266934]: 2025-12-06 07:01:22.960641621 +0000 UTC m=+0.038177076 container create 1a82c30ed9b0d60a269edb8b6c16879b06ac768ff6717e6479d8c1f453f9068e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_edison, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 07:01:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 18 KiB/s wr, 262 op/s
Dec 06 07:01:23 compute-0 systemd[1]: Started libpod-conmon-1a82c30ed9b0d60a269edb8b6c16879b06ac768ff6717e6479d8c1f453f9068e.scope.
Dec 06 07:01:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:01:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:23 compute-0 podman[266934]: 2025-12-06 07:01:22.943794926 +0000 UTC m=+0.021330361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:01:23 compute-0 podman[266934]: 2025-12-06 07:01:23.049519436 +0000 UTC m=+0.127054871 container init 1a82c30ed9b0d60a269edb8b6c16879b06ac768ff6717e6479d8c1f453f9068e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_edison, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:01:23 compute-0 podman[266934]: 2025-12-06 07:01:23.056025156 +0000 UTC m=+0.133560571 container start 1a82c30ed9b0d60a269edb8b6c16879b06ac768ff6717e6479d8c1f453f9068e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 07:01:23 compute-0 podman[266934]: 2025-12-06 07:01:23.059560573 +0000 UTC m=+0.137095988 container attach 1a82c30ed9b0d60a269edb8b6c16879b06ac768ff6717e6479d8c1f453f9068e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 07:01:23 compute-0 angry_edison[266951]: 167 167
Dec 06 07:01:23 compute-0 systemd[1]: libpod-1a82c30ed9b0d60a269edb8b6c16879b06ac768ff6717e6479d8c1f453f9068e.scope: Deactivated successfully.
Dec 06 07:01:23 compute-0 podman[266934]: 2025-12-06 07:01:23.061538548 +0000 UTC m=+0.139073963 container died 1a82c30ed9b0d60a269edb8b6c16879b06ac768ff6717e6479d8c1f453f9068e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_edison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:01:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7deea0951724ea8142a19e0e6decab5d85ccbf44fb76fb47ba4ce9e78dd7b81e-merged.mount: Deactivated successfully.
Dec 06 07:01:23 compute-0 podman[266934]: 2025-12-06 07:01:23.102493029 +0000 UTC m=+0.180028444 container remove 1a82c30ed9b0d60a269edb8b6c16879b06ac768ff6717e6479d8c1f453f9068e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:01:23 compute-0 systemd[1]: libpod-conmon-1a82c30ed9b0d60a269edb8b6c16879b06ac768ff6717e6479d8c1f453f9068e.scope: Deactivated successfully.
Dec 06 07:01:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:23.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:23 compute-0 podman[266975]: 2025-12-06 07:01:23.255954518 +0000 UTC m=+0.039473221 container create b2dfa2c2d131e99de087a245aa56dc425389fd6e69c4f6c3b60052c5a463dd53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 07:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:01:23 compute-0 systemd[1]: Started libpod-conmon-b2dfa2c2d131e99de087a245aa56dc425389fd6e69c4f6c3b60052c5a463dd53.scope.
Dec 06 07:01:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:01:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7610b7811423403b5d119c17327681b37005b7d1752e36e998ea8e74b60d6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7610b7811423403b5d119c17327681b37005b7d1752e36e998ea8e74b60d6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7610b7811423403b5d119c17327681b37005b7d1752e36e998ea8e74b60d6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7610b7811423403b5d119c17327681b37005b7d1752e36e998ea8e74b60d6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7610b7811423403b5d119c17327681b37005b7d1752e36e998ea8e74b60d6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:23 compute-0 podman[266975]: 2025-12-06 07:01:23.333581063 +0000 UTC m=+0.117099786 container init b2dfa2c2d131e99de087a245aa56dc425389fd6e69c4f6c3b60052c5a463dd53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 07:01:23 compute-0 podman[266975]: 2025-12-06 07:01:23.238804725 +0000 UTC m=+0.022323448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:01:23 compute-0 podman[266975]: 2025-12-06 07:01:23.342776146 +0000 UTC m=+0.126294839 container start b2dfa2c2d131e99de087a245aa56dc425389fd6e69c4f6c3b60052c5a463dd53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:01:23 compute-0 podman[266975]: 2025-12-06 07:01:23.362656756 +0000 UTC m=+0.146175479 container attach b2dfa2c2d131e99de087a245aa56dc425389fd6e69c4f6c3b60052c5a463dd53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:01:23 compute-0 nova_compute[251992]: 2025-12-06 07:01:23.754 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:23 compute-0 nova_compute[251992]: 2025-12-06 07:01:23.870 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:23.873 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:01:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:23.874 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:01:24 compute-0 gallant_sanderson[266992]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:01:24 compute-0 gallant_sanderson[266992]: --> relative data size: 1.0
Dec 06 07:01:24 compute-0 gallant_sanderson[266992]: --> All data devices are unavailable
Dec 06 07:01:24 compute-0 systemd[1]: libpod-b2dfa2c2d131e99de087a245aa56dc425389fd6e69c4f6c3b60052c5a463dd53.scope: Deactivated successfully.
Dec 06 07:01:24 compute-0 podman[266975]: 2025-12-06 07:01:24.183016526 +0000 UTC m=+0.966535249 container died b2dfa2c2d131e99de087a245aa56dc425389fd6e69c4f6c3b60052c5a463dd53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 07:01:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab7610b7811423403b5d119c17327681b37005b7d1752e36e998ea8e74b60d6a-merged.mount: Deactivated successfully.
Dec 06 07:01:24 compute-0 podman[266975]: 2025-12-06 07:01:24.257509884 +0000 UTC m=+1.041028587 container remove b2dfa2c2d131e99de087a245aa56dc425389fd6e69c4f6c3b60052c5a463dd53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_sanderson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:01:24 compute-0 systemd[1]: libpod-conmon-b2dfa2c2d131e99de087a245aa56dc425389fd6e69c4f6c3b60052c5a463dd53.scope: Deactivated successfully.
Dec 06 07:01:24 compute-0 podman[267007]: 2025-12-06 07:01:24.279157902 +0000 UTC m=+0.069127810 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:01:24 compute-0 sudo[266870]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:24 compute-0 podman[267011]: 2025-12-06 07:01:24.298256699 +0000 UTC m=+0.082199791 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec 06 07:01:24 compute-0 sudo[267056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:24 compute-0 sudo[267056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:24 compute-0 sudo[267056]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:24 compute-0 sudo[267082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:01:24 compute-0 sudo[267082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:24 compute-0 sudo[267082]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:24 compute-0 sudo[267107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:24 compute-0 sudo[267107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:24 compute-0 sudo[267107]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:24 compute-0 sudo[267133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:01:24 compute-0 sudo[267133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:24 compute-0 podman[267197]: 2025-12-06 07:01:24.835465599 +0000 UTC m=+0.042320000 container create cc1f7933c4c2d118111bcc4b9f7d1fc623afc47430033ac526df8a1ce45cc862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 07:01:24 compute-0 systemd[1]: Started libpod-conmon-cc1f7933c4c2d118111bcc4b9f7d1fc623afc47430033ac526df8a1ce45cc862.scope.
Dec 06 07:01:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:01:24 compute-0 podman[267197]: 2025-12-06 07:01:24.90178063 +0000 UTC m=+0.108635031 container init cc1f7933c4c2d118111bcc4b9f7d1fc623afc47430033ac526df8a1ce45cc862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:01:24 compute-0 podman[267197]: 2025-12-06 07:01:24.908317521 +0000 UTC m=+0.115171922 container start cc1f7933c4c2d118111bcc4b9f7d1fc623afc47430033ac526df8a1ce45cc862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:01:24 compute-0 podman[267197]: 2025-12-06 07:01:24.911863429 +0000 UTC m=+0.118717830 container attach cc1f7933c4c2d118111bcc4b9f7d1fc623afc47430033ac526df8a1ce45cc862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:01:24 compute-0 sleepy_blackburn[267214]: 167 167
Dec 06 07:01:24 compute-0 systemd[1]: libpod-cc1f7933c4c2d118111bcc4b9f7d1fc623afc47430033ac526df8a1ce45cc862.scope: Deactivated successfully.
Dec 06 07:01:24 compute-0 podman[267197]: 2025-12-06 07:01:24.817750819 +0000 UTC m=+0.024605240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:01:24 compute-0 podman[267197]: 2025-12-06 07:01:24.914090251 +0000 UTC m=+0.120944652 container died cc1f7933c4c2d118111bcc4b9f7d1fc623afc47430033ac526df8a1ce45cc862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:01:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1db3633fbbe6cf3645b85fdef621595e2e035fcbaf82f6fe87e4a2660f969de5-merged.mount: Deactivated successfully.
Dec 06 07:01:24 compute-0 podman[267197]: 2025-12-06 07:01:24.951986907 +0000 UTC m=+0.158841308 container remove cc1f7933c4c2d118111bcc4b9f7d1fc623afc47430033ac526df8a1ce45cc862 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:01:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 17 KiB/s wr, 251 op/s
Dec 06 07:01:24 compute-0 systemd[1]: libpod-conmon-cc1f7933c4c2d118111bcc4b9f7d1fc623afc47430033ac526df8a1ce45cc862.scope: Deactivated successfully.
Dec 06 07:01:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:25.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:25 compute-0 podman[267237]: 2025-12-06 07:01:25.109968371 +0000 UTC m=+0.042156975 container create bf8c64287bfa82ab2a354489ce41838e8e09b027fcca44f24b5e6310dcdb6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_payne, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:01:25 compute-0 systemd[1]: Started libpod-conmon-bf8c64287bfa82ab2a354489ce41838e8e09b027fcca44f24b5e6310dcdb6ab1.scope.
Dec 06 07:01:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:25.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f9fe4b171967d4f4c04de783c761490b3075e84346782abc297a3cc64c713d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f9fe4b171967d4f4c04de783c761490b3075e84346782abc297a3cc64c713d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f9fe4b171967d4f4c04de783c761490b3075e84346782abc297a3cc64c713d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f9fe4b171967d4f4c04de783c761490b3075e84346782abc297a3cc64c713d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:25 compute-0 podman[267237]: 2025-12-06 07:01:25.091576243 +0000 UTC m=+0.023764907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:01:25 compute-0 podman[267237]: 2025-12-06 07:01:25.187268886 +0000 UTC m=+0.119457520 container init bf8c64287bfa82ab2a354489ce41838e8e09b027fcca44f24b5e6310dcdb6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_payne, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 07:01:25 compute-0 podman[267237]: 2025-12-06 07:01:25.194340752 +0000 UTC m=+0.126529366 container start bf8c64287bfa82ab2a354489ce41838e8e09b027fcca44f24b5e6310dcdb6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:01:25 compute-0 podman[267237]: 2025-12-06 07:01:25.19862893 +0000 UTC m=+0.130817544 container attach bf8c64287bfa82ab2a354489ce41838e8e09b027fcca44f24b5e6310dcdb6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_payne, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0075058815489484456 of space, bias 1.0, pg target 2.2517644646845336 quantized to 32 (current 32)
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:01:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Dec 06 07:01:25 compute-0 hopeful_payne[267254]: {
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:     "0": [
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:         {
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             "devices": [
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "/dev/loop3"
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             ],
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             "lv_name": "ceph_lv0",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             "lv_size": "7511998464",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             "name": "ceph_lv0",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             "tags": {
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "ceph.cluster_name": "ceph",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "ceph.crush_device_class": "",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "ceph.encrypted": "0",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "ceph.osd_id": "0",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "ceph.type": "block",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:                 "ceph.vdo": "0"
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             },
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             "type": "block",
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:             "vg_name": "ceph_vg0"
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:         }
Dec 06 07:01:25 compute-0 hopeful_payne[267254]:     ]
Dec 06 07:01:25 compute-0 hopeful_payne[267254]: }
Dec 06 07:01:25 compute-0 systemd[1]: libpod-bf8c64287bfa82ab2a354489ce41838e8e09b027fcca44f24b5e6310dcdb6ab1.scope: Deactivated successfully.
Dec 06 07:01:25 compute-0 podman[267237]: 2025-12-06 07:01:25.952469114 +0000 UTC m=+0.884657768 container died bf8c64287bfa82ab2a354489ce41838e8e09b027fcca44f24b5e6310dcdb6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:01:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f9fe4b171967d4f4c04de783c761490b3075e84346782abc297a3cc64c713d2-merged.mount: Deactivated successfully.
Dec 06 07:01:26 compute-0 podman[267237]: 2025-12-06 07:01:26.007332689 +0000 UTC m=+0.939521303 container remove bf8c64287bfa82ab2a354489ce41838e8e09b027fcca44f24b5e6310dcdb6ab1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 07:01:26 compute-0 systemd[1]: libpod-conmon-bf8c64287bfa82ab2a354489ce41838e8e09b027fcca44f24b5e6310dcdb6ab1.scope: Deactivated successfully.
Dec 06 07:01:26 compute-0 sudo[267133]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:26 compute-0 sudo[267277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:26 compute-0 sudo[267277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:26 compute-0 sudo[267277]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:26 compute-0 sudo[267302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:01:26 compute-0 sudo[267302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:26 compute-0 sudo[267302]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:26 compute-0 sudo[267327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:26 compute-0 sudo[267327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:26 compute-0 sudo[267327]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:26 compute-0 sudo[267352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:01:26 compute-0 sudo[267352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:26 compute-0 podman[267418]: 2025-12-06 07:01:26.632558909 +0000 UTC m=+0.041708653 container create d7b86d587395db7c478eeb71bdc85d4fb9c2201b9fc9e95afad6a926dfb49e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 07:01:26 compute-0 systemd[1]: Started libpod-conmon-d7b86d587395db7c478eeb71bdc85d4fb9c2201b9fc9e95afad6a926dfb49e3d.scope.
Dec 06 07:01:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:01:26 compute-0 podman[267418]: 2025-12-06 07:01:26.614618664 +0000 UTC m=+0.023768428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:01:26 compute-0 podman[267418]: 2025-12-06 07:01:26.713174176 +0000 UTC m=+0.122323940 container init d7b86d587395db7c478eeb71bdc85d4fb9c2201b9fc9e95afad6a926dfb49e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 07:01:26 compute-0 podman[267418]: 2025-12-06 07:01:26.72238736 +0000 UTC m=+0.131537094 container start d7b86d587395db7c478eeb71bdc85d4fb9c2201b9fc9e95afad6a926dfb49e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 07:01:26 compute-0 reverent_kare[267434]: 167 167
Dec 06 07:01:26 compute-0 podman[267418]: 2025-12-06 07:01:26.726050382 +0000 UTC m=+0.135200156 container attach d7b86d587395db7c478eeb71bdc85d4fb9c2201b9fc9e95afad6a926dfb49e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:01:26 compute-0 podman[267418]: 2025-12-06 07:01:26.728269893 +0000 UTC m=+0.137419637 container died d7b86d587395db7c478eeb71bdc85d4fb9c2201b9fc9e95afad6a926dfb49e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 07:01:26 compute-0 systemd[1]: libpod-d7b86d587395db7c478eeb71bdc85d4fb9c2201b9fc9e95afad6a926dfb49e3d.scope: Deactivated successfully.
Dec 06 07:01:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-42e62aa7122b52007326f103c400d095a6ac7f739a4a49f23115c8a7d10e8a86-merged.mount: Deactivated successfully.
Dec 06 07:01:26 compute-0 podman[267418]: 2025-12-06 07:01:26.76505648 +0000 UTC m=+0.174206224 container remove d7b86d587395db7c478eeb71bdc85d4fb9c2201b9fc9e95afad6a926dfb49e3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kare, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:01:26 compute-0 systemd[1]: libpod-conmon-d7b86d587395db7c478eeb71bdc85d4fb9c2201b9fc9e95afad6a926dfb49e3d.scope: Deactivated successfully.
Dec 06 07:01:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:26.877 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:26 compute-0 podman[267458]: 2025-12-06 07:01:26.925591204 +0000 UTC m=+0.045299152 container create 74185641295dd0f7092c4addd18551242928a96c0b01a83b90ae645312391876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_euler, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:01:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:01:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:01:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:01:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/938090136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 15 KiB/s wr, 229 op/s
Dec 06 07:01:26 compute-0 systemd[1]: Started libpod-conmon-74185641295dd0f7092c4addd18551242928a96c0b01a83b90ae645312391876.scope.
Dec 06 07:01:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d270ed4f10beef581380cea20c775bd10cff69d3f40ae7c3b5c16da013a8e222/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d270ed4f10beef581380cea20c775bd10cff69d3f40ae7c3b5c16da013a8e222/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d270ed4f10beef581380cea20c775bd10cff69d3f40ae7c3b5c16da013a8e222/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d270ed4f10beef581380cea20c775bd10cff69d3f40ae7c3b5c16da013a8e222/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:27 compute-0 podman[267458]: 2025-12-06 07:01:26.906899467 +0000 UTC m=+0.026607445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:01:27 compute-0 podman[267458]: 2025-12-06 07:01:27.01159957 +0000 UTC m=+0.131307538 container init 74185641295dd0f7092c4addd18551242928a96c0b01a83b90ae645312391876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_euler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 07:01:27 compute-0 podman[267458]: 2025-12-06 07:01:27.018369797 +0000 UTC m=+0.138077745 container start 74185641295dd0f7092c4addd18551242928a96c0b01a83b90ae645312391876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec 06 07:01:27 compute-0 podman[267458]: 2025-12-06 07:01:27.021358799 +0000 UTC m=+0.141066747 container attach 74185641295dd0f7092c4addd18551242928a96c0b01a83b90ae645312391876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_euler, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:01:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:27.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:27.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:27 compute-0 nova_compute[251992]: 2025-12-06 07:01:27.358 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:27 compute-0 competent_euler[267474]: {
Dec 06 07:01:27 compute-0 competent_euler[267474]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:01:27 compute-0 competent_euler[267474]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:01:27 compute-0 competent_euler[267474]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:01:27 compute-0 competent_euler[267474]:         "osd_id": 0,
Dec 06 07:01:27 compute-0 competent_euler[267474]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:01:27 compute-0 competent_euler[267474]:         "type": "bluestore"
Dec 06 07:01:27 compute-0 competent_euler[267474]:     }
Dec 06 07:01:27 compute-0 competent_euler[267474]: }
Dec 06 07:01:27 compute-0 systemd[1]: libpod-74185641295dd0f7092c4addd18551242928a96c0b01a83b90ae645312391876.scope: Deactivated successfully.
Dec 06 07:01:27 compute-0 podman[267458]: 2025-12-06 07:01:27.82108273 +0000 UTC m=+0.940790678 container died 74185641295dd0f7092c4addd18551242928a96c0b01a83b90ae645312391876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d270ed4f10beef581380cea20c775bd10cff69d3f40ae7c3b5c16da013a8e222-merged.mount: Deactivated successfully.
Dec 06 07:01:27 compute-0 podman[267458]: 2025-12-06 07:01:27.880122361 +0000 UTC m=+0.999830309 container remove 74185641295dd0f7092c4addd18551242928a96c0b01a83b90ae645312391876 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_euler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 07:01:27 compute-0 systemd[1]: libpod-conmon-74185641295dd0f7092c4addd18551242928a96c0b01a83b90ae645312391876.scope: Deactivated successfully.
Dec 06 07:01:27 compute-0 sudo[267352]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:01:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:01:28 compute-0 ceph-mon[74339]: pgmap v1211: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 18 KiB/s wr, 262 op/s
Dec 06 07:01:28 compute-0 ceph-mon[74339]: pgmap v1212: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 17 KiB/s wr, 251 op/s
Dec 06 07:01:28 compute-0 ceph-mon[74339]: pgmap v1213: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 15 KiB/s wr, 229 op/s
Dec 06 07:01:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:28 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8926475c-7622-45cc-8277-b6ea754f8dfd does not exist
Dec 06 07:01:28 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e843dec3-80c5-4c45-ac58-855798505de5 does not exist
Dec 06 07:01:28 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7a716c10-3d6d-4c82-8601-8ba204779165 does not exist
Dec 06 07:01:28 compute-0 sudo[267508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:28 compute-0 sudo[267508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:28 compute-0 sudo[267508]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:28 compute-0 sudo[267533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:01:28 compute-0 sudo[267533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:28 compute-0 sudo[267533]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:28 compute-0 nova_compute[251992]: 2025-12-06 07:01:28.796 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 341 B/s wr, 191 op/s
Dec 06 07:01:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:29.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:01:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:01:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:29.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:01:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Dec 06 07:01:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Dec 06 07:01:30 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Dec 06 07:01:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 615 KiB/s rd, 102 B/s wr, 78 op/s
Dec 06 07:01:31 compute-0 ceph-mon[74339]: pgmap v1214: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 341 B/s wr, 191 op/s
Dec 06 07:01:31 compute-0 ceph-mon[74339]: osdmap e164: 3 total, 3 up, 3 in
Dec 06 07:01:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:31.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:31.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:32 compute-0 nova_compute[251992]: 2025-12-06 07:01:32.359 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:32 compute-0 ceph-mon[74339]: pgmap v1216: 305 pgs: 305 active+clean; 331 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 615 KiB/s rd, 102 B/s wr, 78 op/s
Dec 06 07:01:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1034660562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/573904402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 335 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 394 KiB/s wr, 75 op/s
Dec 06 07:01:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:33.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:33 compute-0 nova_compute[251992]: 2025-12-06 07:01:33.798 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:33 compute-0 nova_compute[251992]: 2025-12-06 07:01:33.827 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:33 compute-0 nova_compute[251992]: 2025-12-06 07:01:33.828 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:33 compute-0 nova_compute[251992]: 2025-12-06 07:01:33.946 251996 DEBUG nova.compute.manager [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:01:34 compute-0 nova_compute[251992]: 2025-12-06 07:01:34.043 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:34 compute-0 nova_compute[251992]: 2025-12-06 07:01:34.044 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:34 compute-0 nova_compute[251992]: 2025-12-06 07:01:34.051 251996 DEBUG nova.virt.hardware [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:01:34 compute-0 nova_compute[251992]: 2025-12-06 07:01:34.051 251996 INFO nova.compute.claims [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:01:34 compute-0 nova_compute[251992]: 2025-12-06 07:01:34.246 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:01:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559190735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:34 compute-0 nova_compute[251992]: 2025-12-06 07:01:34.682 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:34 compute-0 nova_compute[251992]: 2025-12-06 07:01:34.688 251996 DEBUG nova.compute.provider_tree [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:01:34 compute-0 nova_compute[251992]: 2025-12-06 07:01:34.704 251996 DEBUG nova.scheduler.client.report [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:01:34 compute-0 nova_compute[251992]: 2025-12-06 07:01:34.804 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:34 compute-0 nova_compute[251992]: 2025-12-06 07:01:34.805 251996 DEBUG nova.compute.manager [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:01:34 compute-0 ceph-mon[74339]: pgmap v1217: 305 pgs: 305 active+clean; 335 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 394 KiB/s wr, 75 op/s
Dec 06 07:01:34 compute-0 nova_compute[251992]: 2025-12-06 07:01:34.938 251996 DEBUG nova.compute.manager [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:01:34 compute-0 nova_compute[251992]: 2025-12-06 07:01:34.939 251996 DEBUG nova.network.neutron [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:01:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 340 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 137 op/s
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.003 251996 INFO nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.027 251996 DEBUG nova.compute.manager [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:01:35 compute-0 sudo[267584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:35 compute-0 sudo[267584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:35 compute-0 sudo[267584]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:35.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:35 compute-0 sudo[267609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:35 compute-0 sudo[267609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:35 compute-0 sudo[267609]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:35.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.248 251996 DEBUG nova.compute.manager [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.249 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.250 251996 INFO nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Creating image(s)
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.275 251996 DEBUG nova.storage.rbd_utils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] rbd image 714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.302 251996 DEBUG nova.storage.rbd_utils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] rbd image 714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.330 251996 DEBUG nova.storage.rbd_utils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] rbd image 714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.333 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.397 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.398 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.399 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.399 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.421 251996 DEBUG nova.storage.rbd_utils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] rbd image 714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.424 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:35 compute-0 nova_compute[251992]: 2025-12-06 07:01:35.604 251996 DEBUG nova.policy [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6805353f6bf048f9b406a1e565a13f11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:01:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1559190735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:36 compute-0 nova_compute[251992]: 2025-12-06 07:01:36.529 251996 DEBUG oslo_concurrency.lockutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "345d5d4a-3a34-4809-9ae4-60a579c5e49a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:36 compute-0 nova_compute[251992]: 2025-12-06 07:01:36.529 251996 DEBUG oslo_concurrency.lockutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "345d5d4a-3a34-4809-9ae4-60a579c5e49a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:36 compute-0 nova_compute[251992]: 2025-12-06 07:01:36.574 251996 DEBUG nova.compute.manager [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:01:36 compute-0 nova_compute[251992]: 2025-12-06 07:01:36.851 251996 DEBUG oslo_concurrency.lockutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:36 compute-0 nova_compute[251992]: 2025-12-06 07:01:36.852 251996 DEBUG oslo_concurrency.lockutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:36 compute-0 nova_compute[251992]: 2025-12-06 07:01:36.860 251996 DEBUG nova.virt.hardware [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:01:36 compute-0 nova_compute[251992]: 2025-12-06 07:01:36.860 251996 INFO nova.compute.claims [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:01:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 358 MiB data, 477 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 223 op/s
Dec 06 07:01:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:37.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:37.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:37 compute-0 ceph-mon[74339]: pgmap v1218: 305 pgs: 305 active+clean; 340 MiB data, 457 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 137 op/s
Dec 06 07:01:37 compute-0 nova_compute[251992]: 2025-12-06 07:01:37.361 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:37 compute-0 nova_compute[251992]: 2025-12-06 07:01:37.733 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:01:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3772612913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:38 compute-0 nova_compute[251992]: 2025-12-06 07:01:38.212 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:38 compute-0 nova_compute[251992]: 2025-12-06 07:01:38.218 251996 DEBUG nova.compute.provider_tree [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:01:38 compute-0 nova_compute[251992]: 2025-12-06 07:01:38.800 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:38 compute-0 nova_compute[251992]: 2025-12-06 07:01:38.812 251996 DEBUG nova.scheduler.client.report [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:01:38 compute-0 nova_compute[251992]: 2025-12-06 07:01:38.837 251996 DEBUG oslo_concurrency.lockutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:38 compute-0 nova_compute[251992]: 2025-12-06 07:01:38.838 251996 DEBUG nova.compute.manager [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:01:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 358 MiB data, 477 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 223 op/s
Dec 06 07:01:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:39.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:39.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:39 compute-0 nova_compute[251992]: 2025-12-06 07:01:39.204 251996 DEBUG nova.compute.manager [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:01:39 compute-0 nova_compute[251992]: 2025-12-06 07:01:39.205 251996 DEBUG nova.network.neutron [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:01:39 compute-0 nova_compute[251992]: 2025-12-06 07:01:39.232 251996 INFO nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:01:39 compute-0 ceph-mon[74339]: pgmap v1219: 305 pgs: 305 active+clean; 358 MiB data, 477 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 223 op/s
Dec 06 07:01:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3772612913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:39 compute-0 nova_compute[251992]: 2025-12-06 07:01:39.252 251996 DEBUG nova.compute.manager [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:01:39 compute-0 nova_compute[251992]: 2025-12-06 07:01:39.867 251996 DEBUG nova.compute.manager [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:01:39 compute-0 nova_compute[251992]: 2025-12-06 07:01:39.869 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:01:39 compute-0 nova_compute[251992]: 2025-12-06 07:01:39.870 251996 INFO nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Creating image(s)
Dec 06 07:01:39 compute-0 nova_compute[251992]: 2025-12-06 07:01:39.898 251996 DEBUG nova.storage.rbd_utils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:39 compute-0 nova_compute[251992]: 2025-12-06 07:01:39.923 251996 DEBUG nova.storage.rbd_utils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:39 compute-0 nova_compute[251992]: 2025-12-06 07:01:39.947 251996 DEBUG nova.storage.rbd_utils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:39 compute-0 nova_compute[251992]: 2025-12-06 07:01:39.950 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.011 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.012 251996 DEBUG oslo_concurrency.lockutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.013 251996 DEBUG oslo_concurrency.lockutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.013 251996 DEBUG oslo_concurrency.lockutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.037 251996 DEBUG nova.storage.rbd_utils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.040 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.110 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.204 251996 DEBUG nova.storage.rbd_utils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] resizing rbd image 714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.508 251996 DEBUG nova.network.neutron [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Successfully updated port: 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.510 251996 DEBUG nova.network.neutron [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.510 251996 DEBUG nova.compute.manager [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.514 251996 DEBUG nova.compute.manager [req-96d480a1-cc01-46ec-b35b-675ffbaf045f req-208aebd5-18a1-4c3c-889b-47b0073d0ccf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-changed-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.514 251996 DEBUG nova.compute.manager [req-96d480a1-cc01-46ec-b35b-675ffbaf045f req-208aebd5-18a1-4c3c-889b-47b0073d0ccf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Refreshing instance network info cache due to event network-changed-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.515 251996 DEBUG oslo_concurrency.lockutils [req-96d480a1-cc01-46ec-b35b-675ffbaf045f req-208aebd5-18a1-4c3c-889b-47b0073d0ccf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.515 251996 DEBUG oslo_concurrency.lockutils [req-96d480a1-cc01-46ec-b35b-675ffbaf045f req-208aebd5-18a1-4c3c-889b-47b0073d0ccf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.515 251996 DEBUG nova.network.neutron [req-96d480a1-cc01-46ec-b35b-675ffbaf045f req-208aebd5-18a1-4c3c-889b-47b0073d0ccf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Refreshing network info cache for port 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.532 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.569 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.621 251996 DEBUG nova.storage.rbd_utils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] resizing rbd image 345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.727 251996 DEBUG nova.network.neutron [req-96d480a1-cc01-46ec-b35b-675ffbaf045f req-208aebd5-18a1-4c3c-889b-47b0073d0ccf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.733 251996 DEBUG nova.objects.instance [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'migration_context' on Instance uuid 345d5d4a-3a34-4809-9ae4-60a579c5e49a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.762 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.763 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Ensure instance console log exists: /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.763 251996 DEBUG oslo_concurrency.lockutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.764 251996 DEBUG oslo_concurrency.lockutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.764 251996 DEBUG oslo_concurrency.lockutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.766 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.769 251996 WARNING nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.774 251996 DEBUG nova.virt.libvirt.host [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.774 251996 DEBUG nova.virt.libvirt.host [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.777 251996 DEBUG nova.virt.libvirt.host [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.778 251996 DEBUG nova.virt.libvirt.host [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.779 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.780 251996 DEBUG nova.virt.hardware [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.780 251996 DEBUG nova.virt.hardware [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.781 251996 DEBUG nova.virt.hardware [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.781 251996 DEBUG nova.virt.hardware [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.781 251996 DEBUG nova.virt.hardware [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.781 251996 DEBUG nova.virt.hardware [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.782 251996 DEBUG nova.virt.hardware [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.782 251996 DEBUG nova.virt.hardware [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.782 251996 DEBUG nova.virt.hardware [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.783 251996 DEBUG nova.virt.hardware [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.783 251996 DEBUG nova.virt.hardware [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:01:40 compute-0 nova_compute[251992]: 2025-12-06 07:01:40.787 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 401 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.1 MiB/s wr, 235 op/s
Dec 06 07:01:40 compute-0 ceph-mon[74339]: pgmap v1220: 305 pgs: 305 active+clean; 358 MiB data, 477 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 223 op/s
Dec 06 07:01:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:01:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:41.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.177 251996 DEBUG nova.network.neutron [req-96d480a1-cc01-46ec-b35b-675ffbaf045f req-208aebd5-18a1-4c3c-889b-47b0073d0ccf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:01:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:41.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.194 251996 DEBUG oslo_concurrency.lockutils [req-96d480a1-cc01-46ec-b35b-675ffbaf045f req-208aebd5-18a1-4c3c-889b-47b0073d0ccf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.194 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquired lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.195 251996 DEBUG nova.network.neutron [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:01:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:01:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749293316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.249 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.273 251996 DEBUG nova.storage.rbd_utils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.276 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.412 251996 DEBUG nova.network.neutron [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:01:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2910208125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.717 251996 DEBUG nova.objects.instance [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lazy-loading 'migration_context' on Instance uuid 714f2e5b-135b-4f7e-9c62-3e1849c5e151 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.725 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.727 251996 DEBUG nova.objects.instance [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'pci_devices' on Instance uuid 345d5d4a-3a34-4809-9ae4-60a579c5e49a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.742 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:01:41 compute-0 nova_compute[251992]:   <uuid>345d5d4a-3a34-4809-9ae4-60a579c5e49a</uuid>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   <name>instance-00000012</name>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <nova:name>tempest-MigrationsAdminTest-server-941592718</nova:name>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:01:40</nova:creationTime>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <nova:user uuid="538aa592cfb04958ab11223ed2d98106">tempest-MigrationsAdminTest-541331030-project-member</nova:user>
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <nova:project uuid="fc6c493097a84d069d178020ca398a25">tempest-MigrationsAdminTest-541331030</nova:project>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <system>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <entry name="serial">345d5d4a-3a34-4809-9ae4-60a579c5e49a</entry>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <entry name="uuid">345d5d4a-3a34-4809-9ae4-60a579c5e49a</entry>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     </system>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   <os>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   </os>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   <features>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   </features>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk">
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       </source>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk.config">
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       </source>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:01:41 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/console.log" append="off"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <video>
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     </video>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:01:41 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:01:41 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:01:41 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:01:41 compute-0 nova_compute[251992]: </domain>
Dec 06 07:01:41 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.743 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.744 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Ensure instance console log exists: /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.744 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.744 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.745 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.794 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.794 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.795 251996 INFO nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Using config drive
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.817 251996 DEBUG nova.storage.rbd_utils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.985 251996 INFO nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Creating config drive at /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/disk.config
Dec 06 07:01:41 compute-0 nova_compute[251992]: 2025-12-06 07:01:41.991 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1z4nrk7o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Dec 06 07:01:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.117 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1z4nrk7o" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.162 251996 DEBUG nova.storage.rbd_utils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rbd image 345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.167 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/disk.config 345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.364 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.399 251996 DEBUG nova.network.neutron [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Updating instance_info_cache with network_info: [{"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.418 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Releasing lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.418 251996 DEBUG nova.compute.manager [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Instance network_info: |[{"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.420 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Start _get_guest_xml network_info=[{"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.423 251996 WARNING nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.437 251996 DEBUG nova.virt.libvirt.host [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.437 251996 DEBUG nova.virt.libvirt.host [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.440 251996 DEBUG nova.virt.libvirt.host [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.440 251996 DEBUG nova.virt.libvirt.host [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.441 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.442 251996 DEBUG nova.virt.hardware [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.442 251996 DEBUG nova.virt.hardware [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.442 251996 DEBUG nova.virt.hardware [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.442 251996 DEBUG nova.virt.hardware [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.442 251996 DEBUG nova.virt.hardware [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.443 251996 DEBUG nova.virt.hardware [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.443 251996 DEBUG nova.virt.hardware [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.443 251996 DEBUG nova.virt.hardware [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.443 251996 DEBUG nova.virt.hardware [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.443 251996 DEBUG nova.virt.hardware [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.444 251996 DEBUG nova.virt.hardware [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.446 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:42 compute-0 ceph-mon[74339]: pgmap v1221: 305 pgs: 305 active+clean; 401 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.1 MiB/s wr, 235 op/s
Dec 06 07:01:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1749293316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2910208125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:42 compute-0 ceph-mon[74339]: osdmap e165: 3 total, 3 up, 3 in
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.678 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.678 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.678 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.678 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.679 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:01:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1607848207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.887 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.911 251996 DEBUG nova.storage.rbd_utils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] rbd image 714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:42 compute-0 nova_compute[251992]: 2025-12-06 07:01:42.915 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:01:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:01:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:01:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:01:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:01:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:01:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 426 MiB data, 500 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.8 MiB/s wr, 207 op/s
Dec 06 07:01:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:43.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:01:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/982138029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.134 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.186 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.186 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:01:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:43.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.325 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.326 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4809MB free_disk=20.7884521484375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.326 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.326 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:01:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1956691532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.390 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.391 251996 DEBUG nova.virt.libvirt.vif [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:01:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-720825537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-720825537',id=17,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-od28yehe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:01:35Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=714f2e5b-135b-4f7e-9c62-3e1849c5e151,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.392 251996 DEBUG nova.network.os_vif_util [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Converting VIF {"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.392 251996 DEBUG nova.network.os_vif_util [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.393 251996 DEBUG nova.objects.instance [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lazy-loading 'pci_devices' on Instance uuid 714f2e5b-135b-4f7e-9c62-3e1849c5e151 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.417 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:01:43 compute-0 nova_compute[251992]:   <uuid>714f2e5b-135b-4f7e-9c62-3e1849c5e151</uuid>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   <name>instance-00000011</name>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-720825537</nova:name>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:01:42</nova:creationTime>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <nova:user uuid="6805353f6bf048f9b406a1e565a13f11">tempest-LiveAutoBlockMigrationV225Test-252281632-project-member</nova:user>
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <nova:project uuid="dc1bc9517198484ab30d93ebd5d88c35">tempest-LiveAutoBlockMigrationV225Test-252281632</nova:project>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <nova:port uuid="8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b">
Dec 06 07:01:43 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <system>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <entry name="serial">714f2e5b-135b-4f7e-9c62-3e1849c5e151</entry>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <entry name="uuid">714f2e5b-135b-4f7e-9c62-3e1849c5e151</entry>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     </system>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   <os>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   </os>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   <features>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   </features>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk">
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       </source>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk.config">
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       </source>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:01:43 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:3d:c8:b4"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <target dev="tap8ba0fb02-de"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151/console.log" append="off"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <video>
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     </video>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:01:43 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:01:43 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:01:43 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:01:43 compute-0 nova_compute[251992]: </domain>
Dec 06 07:01:43 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.418 251996 DEBUG nova.compute.manager [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Preparing to wait for external event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.418 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.418 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.418 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.419 251996 DEBUG nova.virt.libvirt.vif [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:01:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-720825537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-720825537',id=17,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-od28yehe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:01:35Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=714f2e5b-135b-4f7e-9c62-3e1849c5e151,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.419 251996 DEBUG nova.network.os_vif_util [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Converting VIF {"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.420 251996 DEBUG nova.network.os_vif_util [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.420 251996 DEBUG os_vif [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.421 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.421 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.421 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.422 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 714f2e5b-135b-4f7e-9c62-3e1849c5e151 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.423 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 345d5d4a-3a34-4809-9ae4-60a579c5e49a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.423 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.423 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.431 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.431 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ba0fb02-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.432 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ba0fb02-de, col_values=(('external_ids', {'iface-id': '8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:c8:b4', 'vm-uuid': '714f2e5b-135b-4f7e-9c62-3e1849c5e151'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.433 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:43 compute-0 NetworkManager[48965]: <info>  [1765004503.4344] manager: (tap8ba0fb02-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.435 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.440 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.441 251996 INFO os_vif [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de')
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.490 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.490 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.491 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] No VIF found with MAC fa:16:3e:3d:c8:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.491 251996 INFO nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Using config drive
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.515 251996 DEBUG nova.storage.rbd_utils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] rbd image 714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.520 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.893 251996 INFO nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Creating config drive at /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151/disk.config
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.898 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuj3mqdu8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:01:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2971980504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.955 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.960 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:01:43 compute-0 nova_compute[251992]: 2025-12-06 07:01:43.985 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:01:44 compute-0 nova_compute[251992]: 2025-12-06 07:01:44.008 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:01:44 compute-0 nova_compute[251992]: 2025-12-06 07:01:44.008 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:44 compute-0 nova_compute[251992]: 2025-12-06 07:01:44.024 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuj3mqdu8" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:44 compute-0 nova_compute[251992]: 2025-12-06 07:01:44.050 251996 DEBUG nova.storage.rbd_utils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] rbd image 714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:01:44 compute-0 nova_compute[251992]: 2025-12-06 07:01:44.054 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151/disk.config 714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1607848207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/982138029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1956691532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 457 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 MiB/s wr, 182 op/s
Dec 06 07:01:45 compute-0 nova_compute[251992]: 2025-12-06 07:01:45.009 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:45 compute-0 nova_compute[251992]: 2025-12-06 07:01:45.009 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:45 compute-0 nova_compute[251992]: 2025-12-06 07:01:45.009 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:45 compute-0 nova_compute[251992]: 2025-12-06 07:01:45.010 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:45 compute-0 nova_compute[251992]: 2025-12-06 07:01:45.010 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:01:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:01:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:45.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:01:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:45.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:45 compute-0 nova_compute[251992]: 2025-12-06 07:01:45.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:45 compute-0 nova_compute[251992]: 2025-12-06 07:01:45.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:01:45 compute-0 nova_compute[251992]: 2025-12-06 07:01:45.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:01:45 compute-0 nova_compute[251992]: 2025-12-06 07:01:45.677 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:01:45 compute-0 nova_compute[251992]: 2025-12-06 07:01:45.678 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:01:45 compute-0 nova_compute[251992]: 2025-12-06 07:01:45.678 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:01:45 compute-0 nova_compute[251992]: 2025-12-06 07:01:45.678 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:01:46 compute-0 ceph-mon[74339]: pgmap v1223: 305 pgs: 305 active+clean; 426 MiB data, 500 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.8 MiB/s wr, 207 op/s
Dec 06 07:01:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1879664780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2971980504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1355878612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2755476702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1242976262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 466 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 178 KiB/s rd, 4.8 MiB/s wr, 98 op/s
Dec 06 07:01:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:47.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:47 compute-0 nova_compute[251992]: 2025-12-06 07:01:47.189 251996 DEBUG oslo_concurrency.processutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151/disk.config 714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:47 compute-0 nova_compute[251992]: 2025-12-06 07:01:47.190 251996 INFO nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Deleting local config drive /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151/disk.config because it was imported into RBD.
Dec 06 07:01:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:47 compute-0 kernel: tap8ba0fb02-de: entered promiscuous mode
Dec 06 07:01:47 compute-0 NetworkManager[48965]: <info>  [1765004507.2380] manager: (tap8ba0fb02-de): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Dec 06 07:01:47 compute-0 ovn_controller[147168]: 2025-12-06T07:01:47Z|00036|binding|INFO|Claiming lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for this chassis.
Dec 06 07:01:47 compute-0 ovn_controller[147168]: 2025-12-06T07:01:47Z|00037|binding|INFO|8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b: Claiming fa:16:3e:3d:c8:b4 10.100.0.14
Dec 06 07:01:47 compute-0 ovn_controller[147168]: 2025-12-06T07:01:47Z|00038|binding|INFO|Claiming lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 for this chassis.
Dec 06 07:01:47 compute-0 ovn_controller[147168]: 2025-12-06T07:01:47Z|00039|binding|INFO|5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4: Claiming fa:16:3e:cb:d9:12 19.80.0.179
Dec 06 07:01:47 compute-0 nova_compute[251992]: 2025-12-06 07:01:47.238 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:47 compute-0 nova_compute[251992]: 2025-12-06 07:01:47.242 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:47 compute-0 nova_compute[251992]: 2025-12-06 07:01:47.247 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:47 compute-0 systemd-machined[212986]: New machine qemu-8-instance-00000011.
Dec 06 07:01:47 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000011.
Dec 06 07:01:47 compute-0 systemd-udevd[268300]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:01:47 compute-0 NetworkManager[48965]: <info>  [1765004507.3036] device (tap8ba0fb02-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:01:47 compute-0 NetworkManager[48965]: <info>  [1765004507.3046] device (tap8ba0fb02-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:01:47 compute-0 nova_compute[251992]: 2025-12-06 07:01:47.351 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.352 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:d9:12 19.80.0.179'], port_security=['fa:16:3e:cb:d9:12 19.80.0.179'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1920716484', 'neutron:cidrs': '19.80.0.179/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1920716484', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c735218e-8fbf-4d82-b453-bc3944800b8e, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.353 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:c8:b4 10.100.0.14'], port_security=['fa:16:3e:3d:c8:b4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1426450350', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '714f2e5b-135b-4f7e-9c62-3e1849c5e151', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfa287bf-10c3-40fc-8071-37bb7f801357', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1426450350', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7640d9cc-b332-470c-9f54-e0b0e119f55f, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.354 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 in datapath 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed bound to our chassis
Dec 06 07:01:47 compute-0 ovn_controller[147168]: 2025-12-06T07:01:47Z|00040|binding|INFO|Setting lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b ovn-installed in OVS
Dec 06 07:01:47 compute-0 ovn_controller[147168]: 2025-12-06T07:01:47Z|00041|binding|INFO|Setting lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b up in Southbound
Dec 06 07:01:47 compute-0 ovn_controller[147168]: 2025-12-06T07:01:47Z|00042|binding|INFO|Setting lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 up in Southbound
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.356 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed
Dec 06 07:01:47 compute-0 nova_compute[251992]: 2025-12-06 07:01:47.357 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:47 compute-0 nova_compute[251992]: 2025-12-06 07:01:47.365 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.370 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b2defa-3a94-4c61-a2e5-1d38785c66ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.371 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a9cf9f3-d1 in ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:01:47 compute-0 podman[268291]: 2025-12-06 07:01:47.372068981 +0000 UTC m=+0.096202669 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.374 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a9cf9f3-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.374 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cd70f396-6a2c-4d87-9e43-18cb6f627ae6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.375 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a1de39ef-3c69-49a8-b3c5-96d0905b8d42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.395 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3640c5-93f4-4fb7-96f1-ea702f6297bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.408 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bab8e755-bf50-46c0-84fd-fc4e4276c23c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ceph-mon[74339]: pgmap v1224: 305 pgs: 305 active+clean; 457 MiB data, 516 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 MiB/s wr, 182 op/s
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.438 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca45e7d-ace7-435f-b529-d2c664b99bb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 systemd-udevd[268309]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:01:47 compute-0 NetworkManager[48965]: <info>  [1765004507.4501] manager: (tap4a9cf9f3-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.449 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c1d292-e41c-4ef0-a10c-1f031ba167ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.484 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e43a9cf8-a7bf-4b62-b333-101b8c402d1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.490 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[93fd8e85-1f06-4235-ad78-3affa1f8181f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 NetworkManager[48965]: <info>  [1765004507.5119] device (tap4a9cf9f3-d0): carrier: link connected
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.515 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[78984575-2086-4efd-b3d8-8b55c7782f4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.533 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dff2ab03-b35a-4d0a-b68f-686ddb4f02e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a9cf9f3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:5f:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478010, 'reachable_time': 41627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268351, 'error': None, 'target': 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.548 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[35f4eff5-337a-458b-be92-2f4bc9a91a9f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:5f2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478010, 'tstamp': 478010}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268352, 'error': None, 'target': 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.560 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0fcf82f5-8148-4ab1-aa00-85d3b159bfe1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a9cf9f3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:5f:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478010, 'reachable_time': 41627, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268353, 'error': None, 'target': 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.590 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[14e1cf5f-fe09-47f5-bed6-715be023fbb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.639 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f798616c-92ba-4dbc-836b-5ac5d35ebc19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.642 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a9cf9f3-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.642 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.642 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a9cf9f3-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:47 compute-0 kernel: tap4a9cf9f3-d0: entered promiscuous mode
Dec 06 07:01:47 compute-0 NetworkManager[48965]: <info>  [1765004507.6451] manager: (tap4a9cf9f3-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Dec 06 07:01:47 compute-0 nova_compute[251992]: 2025-12-06 07:01:47.644 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.647 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a9cf9f3-d0, col_values=(('external_ids', {'iface-id': '46c2af8b-f787-403f-aef5-72ec2f87b6fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:47 compute-0 ovn_controller[147168]: 2025-12-06T07:01:47Z|00043|binding|INFO|Releasing lport 46c2af8b-f787-403f-aef5-72ec2f87b6fc from this chassis (sb_readonly=0)
Dec 06 07:01:47 compute-0 nova_compute[251992]: 2025-12-06 07:01:47.648 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:47 compute-0 nova_compute[251992]: 2025-12-06 07:01:47.663 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.664 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.665 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3359fec8-4fd4-4e64-aef2-a52550f883d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.666 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed.pid.haproxy
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:01:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:47.666 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'env', 'PROCESS_TAG=haproxy-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:01:48 compute-0 podman[268403]: 2025-12-06 07:01:48.000738286 +0000 UTC m=+0.022154053 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:01:48 compute-0 podman[268403]: 2025-12-06 07:01:48.153865096 +0000 UTC m=+0.175280843 container create 3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 07:01:48 compute-0 systemd[1]: Started libpod-conmon-3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73.scope.
Dec 06 07:01:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d6d729f547e8bfb76bf64c38a0fa5bfbf6c25978b446fa4a77eb2a84f4e7063/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.322 251996 DEBUG oslo_concurrency.processutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/disk.config 345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.323 251996 INFO nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Deleting local config drive /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/disk.config because it was imported into RBD.
Dec 06 07:01:48 compute-0 podman[268403]: 2025-12-06 07:01:48.352457492 +0000 UTC m=+0.373873269 container init 3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 07:01:48 compute-0 podman[268403]: 2025-12-06 07:01:48.357806948 +0000 UTC m=+0.379222705 container start 3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:01:48 compute-0 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[268419]: [NOTICE]   (268429) : New worker (268434) forked
Dec 06 07:01:48 compute-0 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[268419]: [NOTICE]   (268429) : Loading success.
Dec 06 07:01:48 compute-0 systemd-machined[212986]: New machine qemu-9-instance-00000012.
Dec 06 07:01:48 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000012.
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.435 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.460 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b in datapath dfa287bf-10c3-40fc-8071-37bb7f801357 unbound from our chassis
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.462 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dfa287bf-10c3-40fc-8071-37bb7f801357
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.473 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9e629e-4947-46ba-b68d-7a11a0206a8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.475 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdfa287bf-11 in ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.477 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdfa287bf-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.477 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[09a0a40b-3c97-4d94-a0ea-1488c5c492d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.478 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[91a9f8aa-578e-4eab-8f63-162787dbd686]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.489 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[96f6433f-49ea-417a-bdbd-2ec0bcb5c00a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.503 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea3b57d-4b08-4ad3-8ea9-b89b868a9fee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.537 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3d67c2-f5aa-439a-806b-2f8e6d9c5546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.545 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[929f297b-a9a3-4535-b2e5-eb11f3e57d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 NetworkManager[48965]: <info>  [1765004508.5462] manager: (tapdfa287bf-10): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Dec 06 07:01:48 compute-0 systemd-udevd[268338]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.582 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba37a23-affb-429f-91ad-e3fb456f71e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.586 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[52eebe38-8084-4899-872a-ae73dad3c2d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 NetworkManager[48965]: <info>  [1765004508.6160] device (tapdfa287bf-10): carrier: link connected
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.624 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[00df050c-a57b-47da-bfb9-e87ebee9764f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.644 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b1d7f0-7f6d-4112-9739-55e5541625a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfa287bf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478120, 'reachable_time': 42378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268496, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.659 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[12acfda8-628e-4755-81e8-f0779a304112]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:a341'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 478120, 'tstamp': 478120}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268502, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.677 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8fed6d-e0d6-4bb6-b609-aa8f9e53060f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdfa287bf-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478120, 'reachable_time': 42378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268507, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.708 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8e59d310-90c9-4db9-95cf-96efb38da016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.730 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004508.7304852, 714f2e5b-135b-4f7e-9c62-3e1849c5e151 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.731 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] VM Started (Lifecycle Event)
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.778 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.780 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[19488159-c008-4904-81d6-351f191b6aac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.782 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfa287bf-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.782 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.782 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004508.730673, 714f2e5b-135b-4f7e-9c62-3e1849c5e151 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.783 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] VM Paused (Lifecycle Event)
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.783 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfa287bf-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:48 compute-0 kernel: tapdfa287bf-10: entered promiscuous mode
Dec 06 07:01:48 compute-0 NetworkManager[48965]: <info>  [1765004508.7871] manager: (tapdfa287bf-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.786 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.791 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdfa287bf-10, col_values=(('external_ids', {'iface-id': 'a8b489de-cf80-4c12-869a-5e807cdbba8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.792 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:48 compute-0 ovn_controller[147168]: 2025-12-06T07:01:48Z|00044|binding|INFO|Releasing lport a8b489de-cf80-4c12-869a-5e807cdbba8c from this chassis (sb_readonly=0)
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.793 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.795 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dfa287bf-10c3-40fc-8071-37bb7f801357.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dfa287bf-10c3-40fc-8071-37bb7f801357.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.796 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[65361d73-51f1-493a-a1c8-182195d4905a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.797 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-dfa287bf-10c3-40fc-8071-37bb7f801357
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/dfa287bf-10c3-40fc-8071-37bb7f801357.pid.haproxy
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID dfa287bf-10c3-40fc-8071-37bb7f801357
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:01:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:01:48.798 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'env', 'PROCESS_TAG=haproxy-dfa287bf-10c3-40fc-8071-37bb7f801357', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dfa287bf-10c3-40fc-8071-37bb7f801357.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.807 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.855 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.858 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:01:48 compute-0 nova_compute[251992]: 2025-12-06 07:01:48.917 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:01:48 compute-0 ceph-mon[74339]: pgmap v1225: 305 pgs: 305 active+clean; 466 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 178 KiB/s rd, 4.8 MiB/s wr, 98 op/s
Dec 06 07:01:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 466 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 178 KiB/s rd, 4.8 MiB/s wr, 98 op/s
Dec 06 07:01:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3194696246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:49.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.132 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004509.1325853, 345d5d4a-3a34-4809-9ae4-60a579c5e49a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.133 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] VM Resumed (Lifecycle Event)
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.135 251996 DEBUG nova.compute.manager [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.136 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.138 251996 INFO nova.virt.libvirt.driver [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance spawned successfully.
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.138 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.171 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.175 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:01:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:49.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.221 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.222 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.222 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.223 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.223 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.224 251996 DEBUG nova.virt.libvirt.driver [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.230 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.231 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004509.1351273, 345d5d4a-3a34-4809-9ae4-60a579c5e49a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.231 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] VM Started (Lifecycle Event)
Dec 06 07:01:49 compute-0 podman[268560]: 2025-12-06 07:01:49.162573979 +0000 UTC m=+0.024754715 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:01:49 compute-0 podman[268560]: 2025-12-06 07:01:49.294858123 +0000 UTC m=+0.157038829 container create 6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:01:49 compute-0 systemd[1]: Started libpod-conmon-6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb.scope.
Dec 06 07:01:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.390 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a74f0742370c55bad72e4e0e4b174f846287690973d7fa21ac971b751a0ac36/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.398 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:01:49 compute-0 podman[268560]: 2025-12-06 07:01:49.421648145 +0000 UTC m=+0.283828871 container init 6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 07:01:49 compute-0 podman[268560]: 2025-12-06 07:01:49.429704358 +0000 UTC m=+0.291885054 container start 6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.447 251996 INFO nova.compute.manager [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Took 9.58 seconds to spawn the instance on the hypervisor.
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.447 251996 DEBUG nova.compute.manager [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:01:49 compute-0 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[268576]: [NOTICE]   (268580) : New worker (268582) forked
Dec 06 07:01:49 compute-0 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[268576]: [NOTICE]   (268580) : Loading success.
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.459 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.584 251996 INFO nova.compute.manager [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Took 12.75 seconds to build instance.
Dec 06 07:01:49 compute-0 nova_compute[251992]: 2025-12-06 07:01:49.610 251996 DEBUG oslo_concurrency.lockutils [None req-80770453-3799-44b1-bf59-274ac7a311ae 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "345d5d4a-3a34-4809-9ae4-60a579c5e49a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:50 compute-0 ceph-mon[74339]: pgmap v1226: 305 pgs: 305 active+clean; 466 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 178 KiB/s rd, 4.8 MiB/s wr, 98 op/s
Dec 06 07:01:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/135893021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.379 251996 DEBUG nova.compute.manager [req-9f28332d-1106-4002-b44d-236a70c4ed9e req-e16411b7-e3d7-4bb4-9d3a-e5a6f43cd359 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.380 251996 DEBUG oslo_concurrency.lockutils [req-9f28332d-1106-4002-b44d-236a70c4ed9e req-e16411b7-e3d7-4bb4-9d3a-e5a6f43cd359 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.380 251996 DEBUG oslo_concurrency.lockutils [req-9f28332d-1106-4002-b44d-236a70c4ed9e req-e16411b7-e3d7-4bb4-9d3a-e5a6f43cd359 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.380 251996 DEBUG oslo_concurrency.lockutils [req-9f28332d-1106-4002-b44d-236a70c4ed9e req-e16411b7-e3d7-4bb4-9d3a-e5a6f43cd359 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.381 251996 DEBUG nova.compute.manager [req-9f28332d-1106-4002-b44d-236a70c4ed9e req-e16411b7-e3d7-4bb4-9d3a-e5a6f43cd359 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Processing event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.381 251996 DEBUG nova.compute.manager [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.392 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004510.3875537, 714f2e5b-135b-4f7e-9c62-3e1849c5e151 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.392 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] VM Resumed (Lifecycle Event)
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.394 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.398 251996 INFO nova.virt.libvirt.driver [-] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Instance spawned successfully.
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.399 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.416 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.421 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.423 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.423 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.424 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.424 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.425 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.426 251996 DEBUG nova.virt.libvirt.driver [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.531 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.602 251996 INFO nova.compute.manager [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Took 15.35 seconds to spawn the instance on the hypervisor.
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.602 251996 DEBUG nova.compute.manager [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:01:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 503 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.6 MiB/s wr, 170 op/s
Dec 06 07:01:50 compute-0 nova_compute[251992]: 2025-12-06 07:01:50.989 251996 INFO nova.compute.manager [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Took 16.98 seconds to build instance.
Dec 06 07:01:51 compute-0 nova_compute[251992]: 2025-12-06 07:01:51.017 251996 DEBUG oslo_concurrency.lockutils [None req-c75755b7-3846-45ff-ab6b-6176465563df 6805353f6bf048f9b406a1e565a13f11 dc1bc9517198484ab30d93ebd5d88c35 - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:51.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:51.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:52 compute-0 nova_compute[251992]: 2025-12-06 07:01:52.367 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:52 compute-0 ceph-mon[74339]: pgmap v1227: 305 pgs: 305 active+clean; 503 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.6 MiB/s wr, 170 op/s
Dec 06 07:01:52 compute-0 nova_compute[251992]: 2025-12-06 07:01:52.709 251996 DEBUG nova.compute.manager [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:01:52 compute-0 nova_compute[251992]: 2025-12-06 07:01:52.710 251996 DEBUG oslo_concurrency.lockutils [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:01:52 compute-0 nova_compute[251992]: 2025-12-06 07:01:52.710 251996 DEBUG oslo_concurrency.lockutils [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:01:52 compute-0 nova_compute[251992]: 2025-12-06 07:01:52.712 251996 DEBUG oslo_concurrency.lockutils [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:01:52 compute-0 nova_compute[251992]: 2025-12-06 07:01:52.713 251996 DEBUG nova.compute.manager [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:01:52 compute-0 nova_compute[251992]: 2025-12-06 07:01:52.713 251996 WARNING nova.compute.manager [req-c788cf66-7cea-4312-a057-946a296b988f req-aa6444dc-83a1-43d5-81de-e6578d62cecf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received unexpected event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with vm_state active and task_state None.
Dec 06 07:01:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 511 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 176 op/s
Dec 06 07:01:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:53.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:53.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:53 compute-0 nova_compute[251992]: 2025-12-06 07:01:53.438 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2352722957' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:54 compute-0 podman[268593]: 2025-12-06 07:01:54.396839223 +0000 UTC m=+0.050050463 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:01:54 compute-0 podman[268594]: 2025-12-06 07:01:54.407538499 +0000 UTC m=+0.058405594 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:01:54 compute-0 ceph-mon[74339]: pgmap v1228: 305 pgs: 305 active+clean; 511 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 176 op/s
Dec 06 07:01:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2918283785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1737184725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 537 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.3 MiB/s wr, 243 op/s
Dec 06 07:01:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:55.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:55 compute-0 sudo[268630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:55 compute-0 sudo[268630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:55 compute-0 sudo[268630]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:55.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:55 compute-0 sudo[268655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:01:55 compute-0 sudo[268655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:01:55 compute-0 sudo[268655]: pam_unix(sudo:session): session closed for user root
Dec 06 07:01:55 compute-0 nova_compute[251992]: 2025-12-06 07:01:55.692 251996 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquiring lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:01:55 compute-0 nova_compute[251992]: 2025-12-06 07:01:55.693 251996 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquired lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:01:55 compute-0 nova_compute[251992]: 2025-12-06 07:01:55.693 251996 DEBUG nova.network.neutron [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:01:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/769131874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:01:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/763879712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:01:56 compute-0 nova_compute[251992]: 2025-12-06 07:01:56.194 251996 DEBUG nova.network.neutron [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:01:56 compute-0 nova_compute[251992]: 2025-12-06 07:01:56.506 251996 DEBUG nova.network.neutron [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:01:56 compute-0 nova_compute[251992]: 2025-12-06 07:01:56.619 251996 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Check if temp file /var/lib/nova/instances/tmp5bh77yv5 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Dec 06 07:01:56 compute-0 nova_compute[251992]: 2025-12-06 07:01:56.619 251996 DEBUG nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5bh77yv5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='714f2e5b-135b-4f7e-9c62-3e1849c5e151',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Dec 06 07:01:56 compute-0 nova_compute[251992]: 2025-12-06 07:01:56.621 251996 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Releasing lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:01:56 compute-0 ceph-mon[74339]: pgmap v1229: 305 pgs: 305 active+clean; 537 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.3 MiB/s wr, 243 op/s
Dec 06 07:01:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 527 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.6 MiB/s wr, 264 op/s
Dec 06 07:01:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:01:56 compute-0 nova_compute[251992]: 2025-12-06 07:01:56.986 251996 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 07:01:56 compute-0 nova_compute[251992]: 2025-12-06 07:01:56.987 251996 DEBUG nova.virt.libvirt.volume.remotefs [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Creating file /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/a264c58829ad42b4be935328ef0fb7b5.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Dec 06 07:01:56 compute-0 nova_compute[251992]: 2025-12-06 07:01:56.987 251996 DEBUG oslo_concurrency.processutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/a264c58829ad42b4be935328ef0fb7b5.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:57.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:57.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:01:57 compute-0 nova_compute[251992]: 2025-12-06 07:01:57.370 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:57 compute-0 nova_compute[251992]: 2025-12-06 07:01:57.991 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:01:57 compute-0 nova_compute[251992]: 2025-12-06 07:01:57.991 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:01:58 compute-0 nova_compute[251992]: 2025-12-06 07:01:58.061 251996 INFO nova.compute.rpcapi [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Dec 06 07:01:58 compute-0 nova_compute[251992]: 2025-12-06 07:01:58.062 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:01:58 compute-0 nova_compute[251992]: 2025-12-06 07:01:58.135 251996 DEBUG oslo_concurrency.processutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/a264c58829ad42b4be935328ef0fb7b5.tmp" returned: 1 in 1.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:58 compute-0 nova_compute[251992]: 2025-12-06 07:01:58.136 251996 DEBUG oslo_concurrency.processutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/a264c58829ad42b4be935328ef0fb7b5.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 07:01:58 compute-0 nova_compute[251992]: 2025-12-06 07:01:58.136 251996 DEBUG nova.virt.libvirt.volume.remotefs [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Creating directory /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Dec 06 07:01:58 compute-0 nova_compute[251992]: 2025-12-06 07:01:58.136 251996 DEBUG oslo_concurrency.processutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:01:58 compute-0 nova_compute[251992]: 2025-12-06 07:01:58.352 251996 DEBUG oslo_concurrency.processutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:01:58 compute-0 nova_compute[251992]: 2025-12-06 07:01:58.356 251996 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:01:58 compute-0 nova_compute[251992]: 2025-12-06 07:01:58.441 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:01:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 527 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.2 MiB/s wr, 250 op/s
Dec 06 07:01:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:01:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:01:59.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:01:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:01:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:01:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:01:59.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 473 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.2 MiB/s wr, 331 op/s
Dec 06 07:02:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:02:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:01.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:02:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:02:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:01.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:02:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:02 compute-0 ceph-mon[74339]: pgmap v1230: 305 pgs: 305 active+clean; 527 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.6 MiB/s wr, 264 op/s
Dec 06 07:02:02 compute-0 nova_compute[251992]: 2025-12-06 07:02:02.371 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 475 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.1 MiB/s wr, 256 op/s
Dec 06 07:02:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:02:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:03.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:02:03 compute-0 ceph-mon[74339]: pgmap v1231: 305 pgs: 305 active+clean; 527 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.2 MiB/s wr, 250 op/s
Dec 06 07:02:03 compute-0 ceph-mon[74339]: pgmap v1232: 305 pgs: 305 active+clean; 473 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.2 MiB/s wr, 331 op/s
Dec 06 07:02:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:03 compute-0 nova_compute[251992]: 2025-12-06 07:02:03.521 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:03.811 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:03.812 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:03.813 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 492 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.5 MiB/s wr, 300 op/s
Dec 06 07:02:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:05.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:05 compute-0 ceph-mon[74339]: pgmap v1233: 305 pgs: 305 active+clean; 475 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.1 MiB/s wr, 256 op/s
Dec 06 07:02:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:05.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 513 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.7 MiB/s wr, 273 op/s
Dec 06 07:02:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:02:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:07.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:02:07 compute-0 ceph-mon[74339]: pgmap v1234: 305 pgs: 305 active+clean; 492 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.5 MiB/s wr, 300 op/s
Dec 06 07:02:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:07.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:07 compute-0 nova_compute[251992]: 2025-12-06 07:02:07.373 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:07 compute-0 ovn_controller[147168]: 2025-12-06T07:02:07Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:c8:b4 10.100.0.14
Dec 06 07:02:07 compute-0 ovn_controller[147168]: 2025-12-06T07:02:07Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:c8:b4 10.100.0.14
Dec 06 07:02:08 compute-0 ceph-mon[74339]: pgmap v1235: 305 pgs: 305 active+clean; 513 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.7 MiB/s wr, 273 op/s
Dec 06 07:02:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1796991372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:08 compute-0 nova_compute[251992]: 2025-12-06 07:02:08.396 251996 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:02:08 compute-0 nova_compute[251992]: 2025-12-06 07:02:08.502 251996 DEBUG nova.compute.manager [req-0c25cae9-ba78-49e5-9809-028389033357 req-bd5e6e6c-a4de-4d9a-81ea-47f2a5981f55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:08 compute-0 nova_compute[251992]: 2025-12-06 07:02:08.503 251996 DEBUG oslo_concurrency.lockutils [req-0c25cae9-ba78-49e5-9809-028389033357 req-bd5e6e6c-a4de-4d9a-81ea-47f2a5981f55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:08 compute-0 nova_compute[251992]: 2025-12-06 07:02:08.503 251996 DEBUG oslo_concurrency.lockutils [req-0c25cae9-ba78-49e5-9809-028389033357 req-bd5e6e6c-a4de-4d9a-81ea-47f2a5981f55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:08 compute-0 nova_compute[251992]: 2025-12-06 07:02:08.503 251996 DEBUG oslo_concurrency.lockutils [req-0c25cae9-ba78-49e5-9809-028389033357 req-bd5e6e6c-a4de-4d9a-81ea-47f2a5981f55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:08 compute-0 nova_compute[251992]: 2025-12-06 07:02:08.503 251996 DEBUG nova.compute.manager [req-0c25cae9-ba78-49e5-9809-028389033357 req-bd5e6e6c-a4de-4d9a-81ea-47f2a5981f55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:08 compute-0 nova_compute[251992]: 2025-12-06 07:02:08.503 251996 DEBUG nova.compute.manager [req-0c25cae9-ba78-49e5-9809-028389033357 req-bd5e6e6c-a4de-4d9a-81ea-47f2a5981f55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:02:08 compute-0 nova_compute[251992]: 2025-12-06 07:02:08.524 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:02:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2399199750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:02:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:02:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2399199750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:02:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 513 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.0 MiB/s wr, 217 op/s
Dec 06 07:02:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:09.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:09.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2399199750' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:02:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2399199750' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.430 251996 INFO nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Took 12.44 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.430 251996 DEBUG nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:02:10 compute-0 ceph-mon[74339]: pgmap v1236: 305 pgs: 305 active+clean; 513 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.0 MiB/s wr, 217 op/s
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.452 251996 DEBUG nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5bh77yv5',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='714f2e5b-135b-4f7e-9c62-3e1849c5e151',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(4752161d-1265-4670-832f-4effff14ef8a),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.456 251996 DEBUG nova.objects.instance [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lazy-loading 'migration_context' on Instance uuid 714f2e5b-135b-4f7e-9c62-3e1849c5e151 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.457 251996 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.459 251996 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.459 251996 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.483 251996 DEBUG nova.virt.libvirt.vif [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:01:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-720825537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-720825537',id=17,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:01:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-od28yehe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:01:50Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=714f2e5b-135b-4f7e-9c62-3e1849c5e151,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.484 251996 DEBUG nova.network.os_vif_util [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converting VIF {"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.485 251996 DEBUG nova.network.os_vif_util [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.485 251996 DEBUG nova.virt.libvirt.migration [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Updating guest XML with vif config: <interface type="ethernet">
Dec 06 07:02:10 compute-0 nova_compute[251992]:   <mac address="fa:16:3e:3d:c8:b4"/>
Dec 06 07:02:10 compute-0 nova_compute[251992]:   <model type="virtio"/>
Dec 06 07:02:10 compute-0 nova_compute[251992]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:02:10 compute-0 nova_compute[251992]:   <mtu size="1442"/>
Dec 06 07:02:10 compute-0 nova_compute[251992]:   <target dev="tap8ba0fb02-de"/>
Dec 06 07:02:10 compute-0 nova_compute[251992]: </interface>
Dec 06 07:02:10 compute-0 nova_compute[251992]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.486 251996 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Dec 06 07:02:10 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000012.scope: Deactivated successfully.
Dec 06 07:02:10 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000012.scope: Consumed 14.299s CPU time.
Dec 06 07:02:10 compute-0 systemd-machined[212986]: Machine qemu-9-instance-00000012 terminated.
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.717 251996 DEBUG nova.compute.manager [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.718 251996 DEBUG oslo_concurrency.lockutils [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.719 251996 DEBUG oslo_concurrency.lockutils [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.719 251996 DEBUG oslo_concurrency.lockutils [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.719 251996 DEBUG nova.compute.manager [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.719 251996 WARNING nova.compute.manager [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received unexpected event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with vm_state active and task_state migrating.
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.719 251996 DEBUG nova.compute.manager [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-changed-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.719 251996 DEBUG nova.compute.manager [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Refreshing instance network info cache due to event network-changed-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.720 251996 DEBUG oslo_concurrency.lockutils [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.720 251996 DEBUG oslo_concurrency.lockutils [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.720 251996 DEBUG nova.network.neutron [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Refreshing network info cache for port 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.961 251996 DEBUG nova.virt.libvirt.migration [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 07:02:10 compute-0 nova_compute[251992]: 2025-12-06 07:02:10.961 251996 INFO nova.virt.libvirt.migration [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Increasing downtime to 50 ms after 0 sec elapsed time
Dec 06 07:02:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 551 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.9 MiB/s wr, 318 op/s
Dec 06 07:02:11 compute-0 nova_compute[251992]: 2025-12-06 07:02:11.045 251996 INFO nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Dec 06 07:02:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:11.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:11.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:11 compute-0 nova_compute[251992]: 2025-12-06 07:02:11.411 251996 INFO nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance shutdown successfully after 13 seconds.
Dec 06 07:02:11 compute-0 nova_compute[251992]: 2025-12-06 07:02:11.417 251996 INFO nova.virt.libvirt.driver [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance destroyed successfully.
Dec 06 07:02:11 compute-0 nova_compute[251992]: 2025-12-06 07:02:11.421 251996 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:02:11 compute-0 nova_compute[251992]: 2025-12-06 07:02:11.421 251996 DEBUG nova.virt.libvirt.driver [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:02:11 compute-0 nova_compute[251992]: 2025-12-06 07:02:11.547 251996 DEBUG nova.virt.libvirt.migration [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 07:02:11 compute-0 nova_compute[251992]: 2025-12-06 07:02:11.548 251996 DEBUG nova.virt.libvirt.migration [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 07:02:11 compute-0 nova_compute[251992]: 2025-12-06 07:02:11.828 251996 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Acquiring lock "345d5d4a-3a34-4809-9ae4-60a579c5e49a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:11 compute-0 nova_compute[251992]: 2025-12-06 07:02:11.828 251996 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lock "345d5d4a-3a34-4809-9ae4-60a579c5e49a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:11 compute-0 nova_compute[251992]: 2025-12-06 07:02:11.829 251996 DEBUG oslo_concurrency.lockutils [None req-8198078e-b3cb-4679-bc60-253f64652108 41a5abb93e70428cb0414a3d26c3ee84 35f18a3faa574f23a2b399946979fa9d - - default default] Lock "345d5d4a-3a34-4809-9ae4-60a579c5e49a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:12 compute-0 nova_compute[251992]: 2025-12-06 07:02:12.050 251996 DEBUG nova.virt.libvirt.migration [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 07:02:12 compute-0 nova_compute[251992]: 2025-12-06 07:02:12.051 251996 DEBUG nova.virt.libvirt.migration [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 07:02:12 compute-0 nova_compute[251992]: 2025-12-06 07:02:12.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:12 compute-0 nova_compute[251992]: 2025-12-06 07:02:12.553 251996 DEBUG nova.virt.libvirt.migration [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 07:02:12 compute-0 nova_compute[251992]: 2025-12-06 07:02:12.555 251996 DEBUG nova.virt.libvirt.migration [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 07:02:12 compute-0 ceph-mon[74339]: pgmap v1237: 305 pgs: 305 active+clean; 551 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.9 MiB/s wr, 318 op/s
Dec 06 07:02:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:02:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:02:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:02:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:02:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:02:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:02:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 557 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 6.3 MiB/s wr, 258 op/s
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.058 251996 DEBUG nova.virt.libvirt.migration [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.058 251996 DEBUG nova.virt.libvirt.migration [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 07:02:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.003000082s ======
Dec 06 07:02:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:13.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.179 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004533.1789608, 714f2e5b-135b-4f7e-9c62-3e1849c5e151 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.180 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] VM Paused (Lifecycle Event)
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.205 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.209 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:02:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:02:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:13.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.241 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] During sync_power_state the instance has a pending task (migrating). Skip.
Dec 06 07:02:13 compute-0 kernel: tap8ba0fb02-de (unregistering): left promiscuous mode
Dec 06 07:02:13 compute-0 NetworkManager[48965]: <info>  [1765004533.4753] device (tap8ba0fb02-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:02:13 compute-0 ovn_controller[147168]: 2025-12-06T07:02:13Z|00045|binding|INFO|Releasing lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b from this chassis (sb_readonly=0)
Dec 06 07:02:13 compute-0 ovn_controller[147168]: 2025-12-06T07:02:13Z|00046|binding|INFO|Setting lport 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b down in Southbound
Dec 06 07:02:13 compute-0 ovn_controller[147168]: 2025-12-06T07:02:13Z|00047|binding|INFO|Releasing lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 from this chassis (sb_readonly=0)
Dec 06 07:02:13 compute-0 ovn_controller[147168]: 2025-12-06T07:02:13Z|00048|binding|INFO|Setting lport 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 down in Southbound
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.483 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:13 compute-0 ovn_controller[147168]: 2025-12-06T07:02:13Z|00049|binding|INFO|Removing iface tap8ba0fb02-de ovn-installed in OVS
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.484 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:13 compute-0 ovn_controller[147168]: 2025-12-06T07:02:13Z|00050|binding|INFO|Releasing lport a8b489de-cf80-4c12-869a-5e807cdbba8c from this chassis (sb_readonly=0)
Dec 06 07:02:13 compute-0 ovn_controller[147168]: 2025-12-06T07:02:13Z|00051|binding|INFO|Releasing lport 46c2af8b-f787-403f-aef5-72ec2f87b6fc from this chassis (sb_readonly=0)
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.504 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:d9:12 19.80.0.179'], port_security=['fa:16:3e:cb:d9:12 19.80.0.179'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1920716484', 'neutron:cidrs': '19.80.0.179/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1920716484', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '3', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=c735218e-8fbf-4d82-b453-bc3944800b8e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.505 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:c8:b4 10.100.0.14'], port_security=['fa:16:3e:3d:c8:b4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '03fe054d-d727-4af3-9c5e-92e57505f242'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1426450350', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '714f2e5b-135b-4f7e-9c62-3e1849c5e151', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfa287bf-10c3-40fc-8071-37bb7f801357', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1426450350', 'neutron:project_id': 'dc1bc9517198484ab30d93ebd5d88c35', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0f21756f-764b-4e57-8875-8daafaed0f4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7640d9cc-b332-470c-9f54-e0b0e119f55f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.506 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 5acdfba5-a8c4-4e7e-b4d2-44b9608e42e4 in datapath 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed unbound from our chassis
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.508 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.509 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b063967f-d291-4734-a28c-1a9db48546bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.509 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed namespace which is not needed anymore
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.517 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.525 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:13 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000011.scope: Deactivated successfully.
Dec 06 07:02:13 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000011.scope: Consumed 15.654s CPU time.
Dec 06 07:02:13 compute-0 systemd-machined[212986]: Machine qemu-8-instance-00000011 terminated.
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.598 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:13 compute-0 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[268419]: [NOTICE]   (268429) : haproxy version is 2.8.14-c23fe91
Dec 06 07:02:13 compute-0 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[268419]: [NOTICE]   (268429) : path to executable is /usr/sbin/haproxy
Dec 06 07:02:13 compute-0 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[268419]: [WARNING]  (268429) : Exiting Master process...
Dec 06 07:02:13 compute-0 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[268419]: [WARNING]  (268429) : Exiting Master process...
Dec 06 07:02:13 compute-0 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[268419]: [ALERT]    (268429) : Current worker (268434) exited with code 143 (Terminated)
Dec 06 07:02:13 compute-0 neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed[268419]: [WARNING]  (268429) : All workers exited. Exiting... (0)
Dec 06 07:02:13 compute-0 systemd[1]: libpod-3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73.scope: Deactivated successfully.
Dec 06 07:02:13 compute-0 podman[268721]: 2025-12-06 07:02:13.637718398 +0000 UTC m=+0.045955501 container died 3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:02:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73-userdata-shm.mount: Deactivated successfully.
Dec 06 07:02:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d6d729f547e8bfb76bf64c38a0fa5bfbf6c25978b446fa4a77eb2a84f4e7063-merged.mount: Deactivated successfully.
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.671 251996 DEBUG nova.network.neutron [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Updated VIF entry in instance network info cache for port 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.672 251996 DEBUG nova.network.neutron [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Updating instance_info_cache with network_info: [{"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:13 compute-0 podman[268721]: 2025-12-06 07:02:13.679417559 +0000 UTC m=+0.087654662 container cleanup 3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:02:13 compute-0 systemd[1]: libpod-conmon-3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73.scope: Deactivated successfully.
Dec 06 07:02:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Dec 06 07:02:13 compute-0 podman[268751]: 2025-12-06 07:02:13.737565416 +0000 UTC m=+0.038188155 container remove 3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.742 251996 DEBUG oslo_concurrency.lockutils [req-c0d665d7-deef-4c21-927a-f4883fcc0055 req-cddf37e1-91de-4836-91c3-26690a89d743 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-714f2e5b-135b-4f7e-9c62-3e1849c5e151" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.742 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a994c89f-0e63-45ee-bc2e-3b0c7da9a4f2]: (4, ('Sat Dec  6 07:02:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed (3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73)\n3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73\nSat Dec  6 07:02:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed (3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73)\n3206ddcf93f05c0b78fb6293c031e9159db0edbb28e9ad6873bfdc2e8b4e3e73\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.744 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[76800654-85fd-4345-ab00-a129049db4e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.745 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a9cf9f3-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.746 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:13 compute-0 kernel: tap4a9cf9f3-d0: left promiscuous mode
Dec 06 07:02:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Dec 06 07:02:13 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.763 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.765 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[072a013e-97e1-44cb-9469-fc5de2247c11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:13 compute-0 virtqemud[251613]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk: No such file or directory
Dec 06 07:02:13 compute-0 virtqemud[251613]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/714f2e5b-135b-4f7e-9c62-3e1849c5e151_disk: No such file or directory
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.781 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5dff02eb-3149-43ef-9ae4-f4872348049a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.783 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6d8827-625f-4016-bb4b-3dc7240d4742]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:13 compute-0 NetworkManager[48965]: <info>  [1765004533.7835] manager: (tap8ba0fb02-de): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.796 251996 DEBUG nova.virt.libvirt.guest [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.797 251996 INFO nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Migration operation has completed
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.797 251996 INFO nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] _post_live_migration() is started..
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.798 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[73e18864-6333-462e-9134-1c52e110e6e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478002, 'reachable_time': 32826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268771, 'error': None, 'target': 'ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:13 compute-0 systemd[1]: run-netns-ovnmeta\x2d4a9cf9f3\x2dd63a\x2d4198\x2da2a7\x2db24331e0d8ed.mount: Deactivated successfully.
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.802 251996 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.800 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a9cf9f3-d63a-4198-a2a7-b24331e0d8ed deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.802 251996 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.801 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[24cb1357-838a-4e8c-a0c0-7fe65f577693]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:13 compute-0 nova_compute[251992]: 2025-12-06 07:02:13.803 251996 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.802 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b in datapath dfa287bf-10c3-40fc-8071-37bb7f801357 unbound from our chassis
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.804 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dfa287bf-10c3-40fc-8071-37bb7f801357, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.805 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b3956a33-e415-416b-8adb-04ed38c9cbf1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:13.805 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 namespace which is not needed anymore
Dec 06 07:02:13 compute-0 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[268576]: [NOTICE]   (268580) : haproxy version is 2.8.14-c23fe91
Dec 06 07:02:13 compute-0 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[268576]: [NOTICE]   (268580) : path to executable is /usr/sbin/haproxy
Dec 06 07:02:13 compute-0 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[268576]: [WARNING]  (268580) : Exiting Master process...
Dec 06 07:02:13 compute-0 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[268576]: [ALERT]    (268580) : Current worker (268582) exited with code 143 (Terminated)
Dec 06 07:02:13 compute-0 neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357[268576]: [WARNING]  (268580) : All workers exited. Exiting... (0)
Dec 06 07:02:13 compute-0 systemd[1]: libpod-6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb.scope: Deactivated successfully.
Dec 06 07:02:13 compute-0 podman[268799]: 2025-12-06 07:02:13.932180792 +0000 UTC m=+0.042197526 container died 6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:02:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb-userdata-shm.mount: Deactivated successfully.
Dec 06 07:02:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a74f0742370c55bad72e4e0e4b174f846287690973d7fa21ac971b751a0ac36-merged.mount: Deactivated successfully.
Dec 06 07:02:13 compute-0 podman[268799]: 2025-12-06 07:02:13.96575374 +0000 UTC m=+0.075770454 container cleanup 6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 07:02:13 compute-0 systemd[1]: libpod-conmon-6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb.scope: Deactivated successfully.
Dec 06 07:02:14 compute-0 podman[268830]: 2025-12-06 07:02:14.027589988 +0000 UTC m=+0.039427450 container remove 6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:02:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:14.033 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[86e4cc38-5ea2-49ef-8ff1-ecc19563271f]: (4, ('Sat Dec  6 07:02:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 (6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb)\n6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb\nSat Dec  6 07:02:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 (6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb)\n6e010d47450a35773b693689fe50ccb643219911f1de50f1a0d9fb7faed13ddb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:14.034 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[08be5782-2bab-4e8b-8567-5e19be13d170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:14.035 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfa287bf-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.037 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:14 compute-0 kernel: tapdfa287bf-10: left promiscuous mode
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.053 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:14.057 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2390e8b6-e4ff-470d-8215-8ee42b6195ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:14.071 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[72446ccb-faec-4198-bb26-8648e4f98311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:14.072 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3ab0a0-bd1f-442e-9cff-e072b569381f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:14.088 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3999e549-69f9-4228-8fcb-2b735b62cfcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 478111, 'reachable_time': 28468, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268849, 'error': None, 'target': 'ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:14.090 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dfa287bf-10c3-40fc-8071-37bb7f801357 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:02:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:14.090 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc25061-1675-450c-9caa-064dd9c13816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:14 compute-0 systemd[1]: run-netns-ovnmeta\x2ddfa287bf\x2d10c3\x2d40fc\x2d8071\x2d37bb7f801357.mount: Deactivated successfully.
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.802 251996 DEBUG nova.compute.manager [req-7b0de27e-b1f1-4e91-a44b-cf1bb12cbee1 req-37c0c205-3186-464c-a44d-f0f2da3e3e73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.803 251996 DEBUG oslo_concurrency.lockutils [req-7b0de27e-b1f1-4e91-a44b-cf1bb12cbee1 req-37c0c205-3186-464c-a44d-f0f2da3e3e73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.803 251996 DEBUG oslo_concurrency.lockutils [req-7b0de27e-b1f1-4e91-a44b-cf1bb12cbee1 req-37c0c205-3186-464c-a44d-f0f2da3e3e73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.803 251996 DEBUG oslo_concurrency.lockutils [req-7b0de27e-b1f1-4e91-a44b-cf1bb12cbee1 req-37c0c205-3186-464c-a44d-f0f2da3e3e73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.804 251996 DEBUG nova.compute.manager [req-7b0de27e-b1f1-4e91-a44b-cf1bb12cbee1 req-37c0c205-3186-464c-a44d-f0f2da3e3e73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.804 251996 DEBUG nova.compute.manager [req-7b0de27e-b1f1-4e91-a44b-cf1bb12cbee1 req-37c0c205-3186-464c-a44d-f0f2da3e3e73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:02:14 compute-0 ceph-mon[74339]: pgmap v1238: 305 pgs: 305 active+clean; 557 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 6.3 MiB/s wr, 258 op/s
Dec 06 07:02:14 compute-0 ceph-mon[74339]: osdmap e166: 3 total, 3 up, 3 in
Dec 06 07:02:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1114765484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.843 251996 DEBUG nova.compute.manager [req-d06895fc-e254-437d-b7c4-433b48aeca1b req-d58e82f5-6e9b-48dc-9621-98a817690590 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.843 251996 DEBUG oslo_concurrency.lockutils [req-d06895fc-e254-437d-b7c4-433b48aeca1b req-d58e82f5-6e9b-48dc-9621-98a817690590 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.844 251996 DEBUG oslo_concurrency.lockutils [req-d06895fc-e254-437d-b7c4-433b48aeca1b req-d58e82f5-6e9b-48dc-9621-98a817690590 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.844 251996 DEBUG oslo_concurrency.lockutils [req-d06895fc-e254-437d-b7c4-433b48aeca1b req-d58e82f5-6e9b-48dc-9621-98a817690590 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.844 251996 DEBUG nova.compute.manager [req-d06895fc-e254-437d-b7c4-433b48aeca1b req-d58e82f5-6e9b-48dc-9621-98a817690590 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:14 compute-0 nova_compute[251992]: 2025-12-06 07:02:14.844 251996 DEBUG nova.compute.manager [req-d06895fc-e254-437d-b7c4-433b48aeca1b req-d58e82f5-6e9b-48dc-9621-98a817690590 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-unplugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:02:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 559 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.2 MiB/s wr, 247 op/s
Dec 06 07:02:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:15.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:15.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:15 compute-0 sudo[268851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:15 compute-0 sudo[268851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:15 compute-0 sudo[268851]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:15 compute-0 sudo[268876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:15 compute-0 sudo[268876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:15 compute-0 sudo[268876]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.612 251996 DEBUG nova.network.neutron [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Activated binding for port 8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.613 251996 DEBUG nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.614 251996 DEBUG nova.virt.libvirt.vif [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:01:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-720825537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-720825537',id=17,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:01:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc1bc9517198484ab30d93ebd5d88c35',ramdisk_id='',reservation_id='r-od28yehe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-252281632',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-252281632-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:01:54Z,user_data=None,user_id='6805353f6bf048f9b406a1e565a13f11',uuid=714f2e5b-135b-4f7e-9c62-3e1849c5e151,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.614 251996 DEBUG nova.network.os_vif_util [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converting VIF {"id": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "address": "fa:16:3e:3d:c8:b4", "network": {"id": "dfa287bf-10c3-40fc-8071-37bb7f801357", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1283365408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc1bc9517198484ab30d93ebd5d88c35", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ba0fb02-de", "ovs_interfaceid": "8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.614 251996 DEBUG nova.network.os_vif_util [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.615 251996 DEBUG os_vif [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.617 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.617 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ba0fb02-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.618 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.619 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.622 251996 INFO os_vif [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:c8:b4,bridge_name='br-int',has_traffic_filtering=True,id=8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b,network=Network(dfa287bf-10c3-40fc-8071-37bb7f801357),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8ba0fb02-de')
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.623 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.623 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.623 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.624 251996 DEBUG nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.625 251996 INFO nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Deleting instance files /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151_del
Dec 06 07:02:15 compute-0 nova_compute[251992]: 2025-12-06 07:02:15.625 251996 INFO nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Deletion of /var/lib/nova/instances/714f2e5b-135b-4f7e-9c62-3e1849c5e151_del complete
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.232953) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536233021, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2221, "num_deletes": 254, "total_data_size": 3977797, "memory_usage": 4035024, "flush_reason": "Manual Compaction"}
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec 06 07:02:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1398766661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536275482, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3869021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22958, "largest_seqno": 25178, "table_properties": {"data_size": 3858904, "index_size": 6419, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21670, "raw_average_key_size": 20, "raw_value_size": 3838448, "raw_average_value_size": 3683, "num_data_blocks": 282, "num_entries": 1042, "num_filter_entries": 1042, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004331, "oldest_key_time": 1765004331, "file_creation_time": 1765004536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 42585 microseconds, and 9481 cpu microseconds.
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.275539) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3869021 bytes OK
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.275560) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.280886) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.280917) EVENT_LOG_v1 {"time_micros": 1765004536280913, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.280932) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3968488, prev total WAL file size 3968488, number of live WAL files 2.
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.282064) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3778KB)], [53(8263KB)]
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536282153, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 12331039, "oldest_snapshot_seqno": -1}
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5464 keys, 10328678 bytes, temperature: kUnknown
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536379964, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10328678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10290554, "index_size": 23358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 138718, "raw_average_key_size": 25, "raw_value_size": 10190272, "raw_average_value_size": 1864, "num_data_blocks": 954, "num_entries": 5464, "num_filter_entries": 5464, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765004536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.380211) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10328678 bytes
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.381521) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.2 rd, 105.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.1 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5991, records dropped: 527 output_compression: NoCompression
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.381537) EVENT_LOG_v1 {"time_micros": 1765004536381529, "job": 28, "event": "compaction_finished", "compaction_time_micros": 97737, "compaction_time_cpu_micros": 28639, "output_level": 6, "num_output_files": 1, "total_output_size": 10328678, "num_input_records": 5991, "num_output_records": 5464, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536382245, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004536383608, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.281893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.383643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.383647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.383649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.383651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:02:16 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:02:16.383653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:02:16 compute-0 nova_compute[251992]: 2025-12-06 07:02:16.938 251996 DEBUG nova.compute.manager [req-dbe8e63d-c467-43b3-90c3-8fee1766584a req-db87111e-d346-4a9e-a588-0bc1b31bcd95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:16 compute-0 nova_compute[251992]: 2025-12-06 07:02:16.939 251996 DEBUG oslo_concurrency.lockutils [req-dbe8e63d-c467-43b3-90c3-8fee1766584a req-db87111e-d346-4a9e-a588-0bc1b31bcd95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:16 compute-0 nova_compute[251992]: 2025-12-06 07:02:16.939 251996 DEBUG oslo_concurrency.lockutils [req-dbe8e63d-c467-43b3-90c3-8fee1766584a req-db87111e-d346-4a9e-a588-0bc1b31bcd95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:16 compute-0 nova_compute[251992]: 2025-12-06 07:02:16.939 251996 DEBUG oslo_concurrency.lockutils [req-dbe8e63d-c467-43b3-90c3-8fee1766584a req-db87111e-d346-4a9e-a588-0bc1b31bcd95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:16 compute-0 nova_compute[251992]: 2025-12-06 07:02:16.939 251996 DEBUG nova.compute.manager [req-dbe8e63d-c467-43b3-90c3-8fee1766584a req-db87111e-d346-4a9e-a588-0bc1b31bcd95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:16 compute-0 nova_compute[251992]: 2025-12-06 07:02:16.940 251996 WARNING nova.compute.manager [req-dbe8e63d-c467-43b3-90c3-8fee1766584a req-db87111e-d346-4a9e-a588-0bc1b31bcd95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received unexpected event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with vm_state active and task_state migrating.
Dec 06 07:02:16 compute-0 nova_compute[251992]: 2025-12-06 07:02:16.940 251996 DEBUG nova.compute.manager [req-dbe8e63d-c467-43b3-90c3-8fee1766584a req-db87111e-d346-4a9e-a588-0bc1b31bcd95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:16 compute-0 nova_compute[251992]: 2025-12-06 07:02:16.940 251996 DEBUG oslo_concurrency.lockutils [req-dbe8e63d-c467-43b3-90c3-8fee1766584a req-db87111e-d346-4a9e-a588-0bc1b31bcd95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:16 compute-0 nova_compute[251992]: 2025-12-06 07:02:16.942 251996 DEBUG oslo_concurrency.lockutils [req-dbe8e63d-c467-43b3-90c3-8fee1766584a req-db87111e-d346-4a9e-a588-0bc1b31bcd95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:16 compute-0 nova_compute[251992]: 2025-12-06 07:02:16.942 251996 DEBUG oslo_concurrency.lockutils [req-dbe8e63d-c467-43b3-90c3-8fee1766584a req-db87111e-d346-4a9e-a588-0bc1b31bcd95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:16 compute-0 nova_compute[251992]: 2025-12-06 07:02:16.942 251996 DEBUG nova.compute.manager [req-dbe8e63d-c467-43b3-90c3-8fee1766584a req-db87111e-d346-4a9e-a588-0bc1b31bcd95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:16 compute-0 nova_compute[251992]: 2025-12-06 07:02:16.942 251996 WARNING nova.compute.manager [req-dbe8e63d-c467-43b3-90c3-8fee1766584a req-db87111e-d346-4a9e-a588-0bc1b31bcd95 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received unexpected event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with vm_state active and task_state migrating.
Dec 06 07:02:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 578 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Dec 06 07:02:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:17.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:17.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:17 compute-0 ceph-mon[74339]: pgmap v1240: 305 pgs: 305 active+clean; 559 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.2 MiB/s wr, 247 op/s
Dec 06 07:02:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1673630881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:17 compute-0 nova_compute[251992]: 2025-12-06 07:02:17.377 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:02:18
Dec 06 07:02:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:02:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:02:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'volumes', 'images', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Dec 06 07:02:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:02:18 compute-0 podman[268902]: 2025-12-06 07:02:18.414534378 +0000 UTC m=+0.075736163 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:02:18 compute-0 ceph-mon[74339]: pgmap v1241: 305 pgs: 305 active+clean; 578 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Dec 06 07:02:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 578 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Dec 06 07:02:19 compute-0 nova_compute[251992]: 2025-12-06 07:02:19.095 251996 DEBUG nova.compute.manager [req-666547f9-b60d-447c-9e56-d8087702b18f req-4c11c50d-5321-4254-8358-b4a04d214ac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:19 compute-0 nova_compute[251992]: 2025-12-06 07:02:19.096 251996 DEBUG oslo_concurrency.lockutils [req-666547f9-b60d-447c-9e56-d8087702b18f req-4c11c50d-5321-4254-8358-b4a04d214ac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:19 compute-0 nova_compute[251992]: 2025-12-06 07:02:19.096 251996 DEBUG oslo_concurrency.lockutils [req-666547f9-b60d-447c-9e56-d8087702b18f req-4c11c50d-5321-4254-8358-b4a04d214ac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:19 compute-0 nova_compute[251992]: 2025-12-06 07:02:19.096 251996 DEBUG oslo_concurrency.lockutils [req-666547f9-b60d-447c-9e56-d8087702b18f req-4c11c50d-5321-4254-8358-b4a04d214ac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:19 compute-0 nova_compute[251992]: 2025-12-06 07:02:19.096 251996 DEBUG nova.compute.manager [req-666547f9-b60d-447c-9e56-d8087702b18f req-4c11c50d-5321-4254-8358-b4a04d214ac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] No waiting events found dispatching network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:19 compute-0 nova_compute[251992]: 2025-12-06 07:02:19.097 251996 WARNING nova.compute.manager [req-666547f9-b60d-447c-9e56-d8087702b18f req-4c11c50d-5321-4254-8358-b4a04d214ac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Received unexpected event network-vif-plugged-8ba0fb02-dee2-4ab1-88df-fcd81ea7ee2b for instance with vm_state active and task_state migrating.
Dec 06 07:02:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:19.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:19.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:20 compute-0 nova_compute[251992]: 2025-12-06 07:02:20.563 251996 INFO nova.compute.manager [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Swapping old allocation on dict_keys(['e75da5bf-16fa-49b1-b5e1-3aa61daf0433']) held by migration bd54e0e3-85e3-4797-b6c2-599f4f8510ad for instance
Dec 06 07:02:20 compute-0 nova_compute[251992]: 2025-12-06 07:02:20.620 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:20 compute-0 nova_compute[251992]: 2025-12-06 07:02:20.622 251996 DEBUG nova.scheduler.client.report [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Overwriting current allocation {'allocations': {'466e0fbd-7a6f-4c53-b8b9-e67b70c9ec83': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}, 'generation': 23}}, 'project_id': 'fc6c493097a84d069d178020ca398a25', 'user_id': '538aa592cfb04958ab11223ed2d98106', 'consumer_generation': 1} on consumer 345d5d4a-3a34-4809-9ae4-60a579c5e49a move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Dec 06 07:02:20 compute-0 ceph-mon[74339]: pgmap v1242: 305 pgs: 305 active+clean; 578 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Dec 06 07:02:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/239897114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 635 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.9 MiB/s wr, 259 op/s
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.003 251996 DEBUG oslo_concurrency.lockutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.004 251996 DEBUG oslo_concurrency.lockutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.004 251996 DEBUG nova.network.neutron [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:02:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:21.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.225 251996 DEBUG nova.network.neutron [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:02:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:21.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.593 251996 DEBUG nova.network.neutron [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.613 251996 DEBUG oslo_concurrency.lockutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.615 251996 DEBUG nova.virt.libvirt.driver [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.695 251996 DEBUG nova.storage.rbd_utils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] rolling back rbd image(345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.761 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.762 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.762 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "714f2e5b-135b-4f7e-9c62-3e1849c5e151-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.804 251996 DEBUG nova.storage.rbd_utils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] removing snapshot(nova-resize) on rbd image(345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.824 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.824 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.824 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.825 251996 DEBUG nova.compute.resource_tracker [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:02:21 compute-0 nova_compute[251992]: 2025-12-06 07:02:21.825 251996 DEBUG oslo_concurrency.processutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:02:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622845979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.317 251996 DEBUG oslo_concurrency.processutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.378 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.405 251996 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.405 251996 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.545 251996 WARNING nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.547 251996 DEBUG nova.compute.resource_tracker [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4831MB free_disk=20.653366088867188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.547 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.548 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.628 251996 DEBUG nova.compute.resource_tracker [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Migration for instance 714f2e5b-135b-4f7e-9c62-3e1849c5e151 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.681 251996 DEBUG nova.compute.resource_tracker [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.702 251996 DEBUG nova.compute.resource_tracker [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Migration 4752161d-1265-4670-832f-4effff14ef8a is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.703 251996 DEBUG nova.compute.resource_tracker [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Instance 345d5d4a-3a34-4809-9ae4-60a579c5e49a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.703 251996 DEBUG nova.compute.resource_tracker [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.703 251996 DEBUG nova.compute.resource_tracker [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:02:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Dec 06 07:02:22 compute-0 ceph-mon[74339]: pgmap v1243: 305 pgs: 305 active+clean; 635 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.9 MiB/s wr, 259 op/s
Dec 06 07:02:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2358499230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1622845979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/13529011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Dec 06 07:02:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.799 251996 DEBUG oslo_concurrency.processutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.854 251996 DEBUG nova.virt.libvirt.driver [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.857 251996 WARNING nova.virt.libvirt.driver [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.865 251996 DEBUG nova.virt.libvirt.host [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.866 251996 DEBUG nova.virt.libvirt.host [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.869 251996 DEBUG nova.virt.libvirt.host [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.869 251996 DEBUG nova.virt.libvirt.host [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.870 251996 DEBUG nova.virt.libvirt.driver [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.871 251996 DEBUG nova.virt.hardware [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.871 251996 DEBUG nova.virt.hardware [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.871 251996 DEBUG nova.virt.hardware [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.871 251996 DEBUG nova.virt.hardware [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.872 251996 DEBUG nova.virt.hardware [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.872 251996 DEBUG nova.virt.hardware [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.872 251996 DEBUG nova.virt.hardware [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.872 251996 DEBUG nova.virt.hardware [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.873 251996 DEBUG nova.virt.hardware [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.873 251996 DEBUG nova.virt.hardware [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.873 251996 DEBUG nova.virt.hardware [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.873 251996 DEBUG nova.objects.instance [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 345d5d4a-3a34-4809-9ae4-60a579c5e49a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:22 compute-0 nova_compute[251992]: 2025-12-06 07:02:22.940 251996 DEBUG oslo_concurrency.processutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 648 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.2 MiB/s wr, 260 op/s
Dec 06 07:02:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:23.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:02:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168415423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:23.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:02:23 compute-0 nova_compute[251992]: 2025-12-06 07:02:23.270 251996 DEBUG oslo_concurrency.processutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:02:23 compute-0 nova_compute[251992]: 2025-12-06 07:02:23.277 251996 DEBUG nova.compute.provider_tree [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:02:23 compute-0 nova_compute[251992]: 2025-12-06 07:02:23.307 251996 DEBUG nova.scheduler.client.report [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:02:23 compute-0 nova_compute[251992]: 2025-12-06 07:02:23.345 251996 DEBUG nova.compute.resource_tracker [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:02:23 compute-0 nova_compute[251992]: 2025-12-06 07:02:23.346 251996 DEBUG oslo_concurrency.lockutils [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:23 compute-0 nova_compute[251992]: 2025-12-06 07:02:23.352 251996 INFO nova.compute.manager [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Dec 06 07:02:23 compute-0 nova_compute[251992]: 2025-12-06 07:02:23.496 251996 INFO nova.scheduler.client.report [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] Deleted allocation for migration 4752161d-1265-4670-832f-4effff14ef8a
Dec 06 07:02:23 compute-0 nova_compute[251992]: 2025-12-06 07:02:23.497 251996 DEBUG nova.virt.libvirt.driver [None req-d294c613-8652-4c67-b4a2-864c1d34df9f 549336d6442b4deeb6b3016b3ba916fe 096c573a8cb34680b1bcc6f529b2a707 - - default default] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Dec 06 07:02:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:02:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1485497034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:23 compute-0 nova_compute[251992]: 2025-12-06 07:02:23.518 251996 DEBUG oslo_concurrency.processutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:23 compute-0 nova_compute[251992]: 2025-12-06 07:02:23.550 251996 DEBUG oslo_concurrency.processutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:02:23 compute-0 ceph-mon[74339]: osdmap e167: 3 total, 3 up, 3 in
Dec 06 07:02:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3168415423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1485497034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:02:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/54737310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:23 compute-0 nova_compute[251992]: 2025-12-06 07:02:23.991 251996 DEBUG oslo_concurrency.processutils [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:23 compute-0 nova_compute[251992]: 2025-12-06 07:02:23.996 251996 DEBUG nova.virt.libvirt.driver [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:02:23 compute-0 nova_compute[251992]:   <uuid>345d5d4a-3a34-4809-9ae4-60a579c5e49a</uuid>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   <name>instance-00000012</name>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <nova:name>tempest-MigrationsAdminTest-server-941592718</nova:name>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:02:22</nova:creationTime>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <nova:user uuid="538aa592cfb04958ab11223ed2d98106">tempest-MigrationsAdminTest-541331030-project-member</nova:user>
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <nova:project uuid="fc6c493097a84d069d178020ca398a25">tempest-MigrationsAdminTest-541331030</nova:project>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <system>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <entry name="serial">345d5d4a-3a34-4809-9ae4-60a579c5e49a</entry>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <entry name="uuid">345d5d4a-3a34-4809-9ae4-60a579c5e49a</entry>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     </system>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   <os>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   </os>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   <features>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   </features>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk">
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       </source>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/345d5d4a-3a34-4809-9ae4-60a579c5e49a_disk.config">
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       </source>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:02:23 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a/console.log" append="off"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <video>
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     </video>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <input type="keyboard" bus="usb"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:02:23 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:02:23 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:02:23 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:02:23 compute-0 nova_compute[251992]: </domain>
Dec 06 07:02:23 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:02:24 compute-0 systemd-machined[212986]: New machine qemu-10-instance-00000012.
Dec 06 07:02:24 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000012.
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.113 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:24.113 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:02:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:24.114 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.620 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for 345d5d4a-3a34-4809-9ae4-60a579c5e49a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.620 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004544.619649, 345d5d4a-3a34-4809-9ae4-60a579c5e49a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.620 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] VM Resumed (Lifecycle Event)
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.626 251996 DEBUG nova.compute.manager [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:02:24 compute-0 podman[269147]: 2025-12-06 07:02:24.627546058 +0000 UTC m=+0.070099057 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.632 251996 INFO nova.virt.libvirt.driver [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance running successfully.
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.632 251996 DEBUG nova.virt.libvirt.driver [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Dec 06 07:02:24 compute-0 podman[269149]: 2025-12-06 07:02:24.65620687 +0000 UTC m=+0.098308307 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.656 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.660 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.697 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.698 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004544.6238031, 345d5d4a-3a34-4809-9ae4-60a579c5e49a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.699 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] VM Started (Lifecycle Event)
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.729 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.732 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:02:24 compute-0 ceph-mon[74339]: pgmap v1245: 305 pgs: 305 active+clean; 648 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.2 MiB/s wr, 260 op/s
Dec 06 07:02:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/54737310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.841 251996 INFO nova.compute.manager [None req-ceb3a13e-6f4d-4999-b0a6-d419ac037288 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Updating instance to original state: 'active'
Dec 06 07:02:24 compute-0 nova_compute[251992]: 2025-12-06 07:02:24.871 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Dec 06 07:02:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 648 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.8 MiB/s wr, 229 op/s
Dec 06 07:02:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:02:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:25.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:02:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:25.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01616662595384329 of space, bias 1.0, pg target 4.849987786152987 quantized to 32 (current 32)
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.3799088032756375e-05 quantized to 32 (current 32)
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:02:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Dec 06 07:02:25 compute-0 nova_compute[251992]: 2025-12-06 07:02:25.622 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:26 compute-0 ceph-mon[74339]: pgmap v1246: 305 pgs: 305 active+clean; 648 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.8 MiB/s wr, 229 op/s
Dec 06 07:02:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 648 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 230 op/s
Dec 06 07:02:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:27.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:27.116 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.151 251996 DEBUG oslo_concurrency.lockutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "345d5d4a-3a34-4809-9ae4-60a579c5e49a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.151 251996 DEBUG oslo_concurrency.lockutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "345d5d4a-3a34-4809-9ae4-60a579c5e49a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.152 251996 DEBUG oslo_concurrency.lockutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "345d5d4a-3a34-4809-9ae4-60a579c5e49a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.152 251996 DEBUG oslo_concurrency.lockutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "345d5d4a-3a34-4809-9ae4-60a579c5e49a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.152 251996 DEBUG oslo_concurrency.lockutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "345d5d4a-3a34-4809-9ae4-60a579c5e49a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.154 251996 INFO nova.compute.manager [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Terminating instance
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.154 251996 DEBUG oslo_concurrency.lockutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.155 251996 DEBUG oslo_concurrency.lockutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquired lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.155 251996 DEBUG nova.network.neutron [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:02:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:27.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.380 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.555 251996 DEBUG nova.network.neutron [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.885 251996 DEBUG nova.network.neutron [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.911 251996 DEBUG oslo_concurrency.lockutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Releasing lock "refresh_cache-345d5d4a-3a34-4809-9ae4-60a579c5e49a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:27 compute-0 nova_compute[251992]: 2025-12-06 07:02:27.913 251996 DEBUG nova.compute.manager [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:02:28 compute-0 ceph-mon[74339]: pgmap v1247: 305 pgs: 305 active+clean; 648 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 230 op/s
Dec 06 07:02:28 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000012.scope: Deactivated successfully.
Dec 06 07:02:28 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000012.scope: Consumed 3.994s CPU time.
Dec 06 07:02:28 compute-0 systemd-machined[212986]: Machine qemu-10-instance-00000012 terminated.
Dec 06 07:02:28 compute-0 nova_compute[251992]: 2025-12-06 07:02:28.334 251996 INFO nova.virt.libvirt.driver [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance destroyed successfully.
Dec 06 07:02:28 compute-0 nova_compute[251992]: 2025-12-06 07:02:28.335 251996 DEBUG nova.objects.instance [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lazy-loading 'resources' on Instance uuid 345d5d4a-3a34-4809-9ae4-60a579c5e49a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:28 compute-0 sudo[269213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:28 compute-0 sudo[269213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:28 compute-0 sudo[269213]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:28 compute-0 nova_compute[251992]: 2025-12-06 07:02:28.796 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004533.7948549, 714f2e5b-135b-4f7e-9c62-3e1849c5e151 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:28 compute-0 nova_compute[251992]: 2025-12-06 07:02:28.797 251996 INFO nova.compute.manager [-] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] VM Stopped (Lifecycle Event)
Dec 06 07:02:28 compute-0 nova_compute[251992]: 2025-12-06 07:02:28.807 251996 INFO nova.virt.libvirt.driver [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Deleting instance files /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a_del
Dec 06 07:02:28 compute-0 nova_compute[251992]: 2025-12-06 07:02:28.808 251996 INFO nova.virt.libvirt.driver [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Deletion of /var/lib/nova/instances/345d5d4a-3a34-4809-9ae4-60a579c5e49a_del complete
Dec 06 07:02:28 compute-0 sudo[269238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:02:28 compute-0 sudo[269238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:28 compute-0 sudo[269238]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:28 compute-0 nova_compute[251992]: 2025-12-06 07:02:28.836 251996 DEBUG nova.compute.manager [None req-60fc8c5c-46ce-4a0d-9879-450b6ab0fdb5 - - - - - -] [instance: 714f2e5b-135b-4f7e-9c62-3e1849c5e151] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:28 compute-0 sudo[269263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:28 compute-0 sudo[269263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:28 compute-0 sudo[269263]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:28 compute-0 nova_compute[251992]: 2025-12-06 07:02:28.893 251996 INFO nova.compute.manager [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Took 0.98 seconds to destroy the instance on the hypervisor.
Dec 06 07:02:28 compute-0 nova_compute[251992]: 2025-12-06 07:02:28.894 251996 DEBUG oslo.service.loopingcall [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:02:28 compute-0 nova_compute[251992]: 2025-12-06 07:02:28.894 251996 DEBUG nova.compute.manager [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:02:28 compute-0 nova_compute[251992]: 2025-12-06 07:02:28.894 251996 DEBUG nova.network.neutron [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:02:28 compute-0 sudo[269288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:02:28 compute-0 sudo[269288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 648 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 230 op/s
Dec 06 07:02:29 compute-0 nova_compute[251992]: 2025-12-06 07:02:29.059 251996 DEBUG nova.network.neutron [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:02:29 compute-0 nova_compute[251992]: 2025-12-06 07:02:29.075 251996 DEBUG nova.network.neutron [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:29 compute-0 nova_compute[251992]: 2025-12-06 07:02:29.097 251996 INFO nova.compute.manager [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Took 0.20 seconds to deallocate network for instance.
Dec 06 07:02:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:29.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:29 compute-0 nova_compute[251992]: 2025-12-06 07:02:29.145 251996 DEBUG oslo_concurrency.lockutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:29 compute-0 nova_compute[251992]: 2025-12-06 07:02:29.145 251996 DEBUG oslo_concurrency.lockutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:29 compute-0 nova_compute[251992]: 2025-12-06 07:02:29.239 251996 DEBUG oslo_concurrency.processutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:29.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:29 compute-0 sudo[269288]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:02:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:02:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:02:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:02:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:02:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:02:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev dace5c64-46fc-4445-9239-f91805157c32 does not exist
Dec 06 07:02:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f7cc3f2e-a85e-4c35-80d0-0a8360a68a10 does not exist
Dec 06 07:02:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0a2b42d9-4b18-4990-990a-e997be367b0b does not exist
Dec 06 07:02:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:02:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:02:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:02:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:02:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:02:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:02:29 compute-0 sudo[269362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:29 compute-0 sudo[269362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:29 compute-0 sudo[269362]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:29 compute-0 sudo[269387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:02:29 compute-0 sudo[269387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:29 compute-0 sudo[269387]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:02:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1688128300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:29 compute-0 sudo[269412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:29 compute-0 sudo[269412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:29 compute-0 sudo[269412]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:29 compute-0 nova_compute[251992]: 2025-12-06 07:02:29.704 251996 DEBUG oslo_concurrency.processutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:29 compute-0 nova_compute[251992]: 2025-12-06 07:02:29.710 251996 DEBUG nova.compute.provider_tree [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:02:29 compute-0 nova_compute[251992]: 2025-12-06 07:02:29.726 251996 DEBUG nova.scheduler.client.report [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:02:29 compute-0 sudo[269439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:02:29 compute-0 sudo[269439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:29 compute-0 nova_compute[251992]: 2025-12-06 07:02:29.752 251996 DEBUG oslo_concurrency.lockutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:29 compute-0 nova_compute[251992]: 2025-12-06 07:02:29.796 251996 INFO nova.scheduler.client.report [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Deleted allocations for instance 345d5d4a-3a34-4809-9ae4-60a579c5e49a
Dec 06 07:02:29 compute-0 nova_compute[251992]: 2025-12-06 07:02:29.930 251996 DEBUG oslo_concurrency.lockutils [None req-e14a9f71-12e1-4543-84b1-d97819011e0e 538aa592cfb04958ab11223ed2d98106 fc6c493097a84d069d178020ca398a25 - - default default] Lock "345d5d4a-3a34-4809-9ae4-60a579c5e49a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:30 compute-0 podman[269503]: 2025-12-06 07:02:30.044046327 +0000 UTC m=+0.038182866 container create d0188fe4210659010b079d4deb507873f0bf46431c8e21b8bc065c160feec3c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 07:02:30 compute-0 systemd[1]: Started libpod-conmon-d0188fe4210659010b079d4deb507873f0bf46431c8e21b8bc065c160feec3c4.scope.
Dec 06 07:02:30 compute-0 ceph-mon[74339]: pgmap v1248: 305 pgs: 305 active+clean; 648 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 230 op/s
Dec 06 07:02:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:02:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:02:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:02:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:02:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:02:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:02:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1688128300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:02:30 compute-0 podman[269503]: 2025-12-06 07:02:30.026729009 +0000 UTC m=+0.020865578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:02:30 compute-0 podman[269503]: 2025-12-06 07:02:30.12815431 +0000 UTC m=+0.122290879 container init d0188fe4210659010b079d4deb507873f0bf46431c8e21b8bc065c160feec3c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:02:30 compute-0 podman[269503]: 2025-12-06 07:02:30.133973401 +0000 UTC m=+0.128109940 container start d0188fe4210659010b079d4deb507873f0bf46431c8e21b8bc065c160feec3c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:02:30 compute-0 podman[269503]: 2025-12-06 07:02:30.137051556 +0000 UTC m=+0.131188095 container attach d0188fe4210659010b079d4deb507873f0bf46431c8e21b8bc065c160feec3c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 07:02:30 compute-0 clever_darwin[269519]: 167 167
Dec 06 07:02:30 compute-0 systemd[1]: libpod-d0188fe4210659010b079d4deb507873f0bf46431c8e21b8bc065c160feec3c4.scope: Deactivated successfully.
Dec 06 07:02:30 compute-0 conmon[269519]: conmon d0188fe4210659010b07 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0188fe4210659010b079d4deb507873f0bf46431c8e21b8bc065c160feec3c4.scope/container/memory.events
Dec 06 07:02:30 compute-0 podman[269503]: 2025-12-06 07:02:30.140407798 +0000 UTC m=+0.134544337 container died d0188fe4210659010b079d4deb507873f0bf46431c8e21b8bc065c160feec3c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:02:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-42dca542e9952ebd37791ceae7b77a4dbf97a233ad154f4fc8b53e335ba80e27-merged.mount: Deactivated successfully.
Dec 06 07:02:30 compute-0 podman[269503]: 2025-12-06 07:02:30.177704529 +0000 UTC m=+0.171841068 container remove d0188fe4210659010b079d4deb507873f0bf46431c8e21b8bc065c160feec3c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:02:30 compute-0 systemd[1]: libpod-conmon-d0188fe4210659010b079d4deb507873f0bf46431c8e21b8bc065c160feec3c4.scope: Deactivated successfully.
Dec 06 07:02:30 compute-0 podman[269541]: 2025-12-06 07:02:30.323600069 +0000 UTC m=+0.040514031 container create 8eac8d6f34d330bf0164809ddc69be0d67ccd341d9eedb4397052c0ec7f4fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kapitsa, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:02:30 compute-0 systemd[1]: Started libpod-conmon-8eac8d6f34d330bf0164809ddc69be0d67ccd341d9eedb4397052c0ec7f4fdd8.scope.
Dec 06 07:02:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58b227d0daa77fa87afa987754cd3af19aa0f9ebbdcd5a9feffac518a993f0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58b227d0daa77fa87afa987754cd3af19aa0f9ebbdcd5a9feffac518a993f0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58b227d0daa77fa87afa987754cd3af19aa0f9ebbdcd5a9feffac518a993f0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58b227d0daa77fa87afa987754cd3af19aa0f9ebbdcd5a9feffac518a993f0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c58b227d0daa77fa87afa987754cd3af19aa0f9ebbdcd5a9feffac518a993f0f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:30 compute-0 podman[269541]: 2025-12-06 07:02:30.30623577 +0000 UTC m=+0.023149742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:02:30 compute-0 podman[269541]: 2025-12-06 07:02:30.408842783 +0000 UTC m=+0.125756755 container init 8eac8d6f34d330bf0164809ddc69be0d67ccd341d9eedb4397052c0ec7f4fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 07:02:30 compute-0 podman[269541]: 2025-12-06 07:02:30.416578617 +0000 UTC m=+0.133492559 container start 8eac8d6f34d330bf0164809ddc69be0d67ccd341d9eedb4397052c0ec7f4fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kapitsa, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:02:30 compute-0 podman[269541]: 2025-12-06 07:02:30.424658061 +0000 UTC m=+0.141572083 container attach 8eac8d6f34d330bf0164809ddc69be0d67ccd341d9eedb4397052c0ec7f4fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kapitsa, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:02:30 compute-0 nova_compute[251992]: 2025-12-06 07:02:30.624 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 583 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 480 KiB/s wr, 214 op/s
Dec 06 07:02:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:31.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:31 compute-0 funny_kapitsa[269558]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:02:31 compute-0 funny_kapitsa[269558]: --> relative data size: 1.0
Dec 06 07:02:31 compute-0 funny_kapitsa[269558]: --> All data devices are unavailable
Dec 06 07:02:31 compute-0 systemd[1]: libpod-8eac8d6f34d330bf0164809ddc69be0d67ccd341d9eedb4397052c0ec7f4fdd8.scope: Deactivated successfully.
Dec 06 07:02:31 compute-0 podman[269541]: 2025-12-06 07:02:31.255297465 +0000 UTC m=+0.972211437 container died 8eac8d6f34d330bf0164809ddc69be0d67ccd341d9eedb4397052c0ec7f4fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:02:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:31.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c58b227d0daa77fa87afa987754cd3af19aa0f9ebbdcd5a9feffac518a993f0f-merged.mount: Deactivated successfully.
Dec 06 07:02:31 compute-0 podman[269541]: 2025-12-06 07:02:31.308951708 +0000 UTC m=+1.025865650 container remove 8eac8d6f34d330bf0164809ddc69be0d67ccd341d9eedb4397052c0ec7f4fdd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:02:31 compute-0 systemd[1]: libpod-conmon-8eac8d6f34d330bf0164809ddc69be0d67ccd341d9eedb4397052c0ec7f4fdd8.scope: Deactivated successfully.
Dec 06 07:02:31 compute-0 sudo[269439]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:31 compute-0 sudo[269588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:31 compute-0 sudo[269588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:31 compute-0 sudo[269588]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:31 compute-0 sudo[269613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:02:31 compute-0 sudo[269613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:31 compute-0 sudo[269613]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:31 compute-0 sudo[269638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:31 compute-0 sudo[269638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:31 compute-0 sudo[269638]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:31 compute-0 sudo[269663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:02:31 compute-0 sudo[269663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:31 compute-0 podman[269728]: 2025-12-06 07:02:31.844620414 +0000 UTC m=+0.038893685 container create 653eb29f5f7d0794ae06ad554459232a5792686c231d1af363e48f7815bf4c7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 07:02:31 compute-0 systemd[1]: Started libpod-conmon-653eb29f5f7d0794ae06ad554459232a5792686c231d1af363e48f7815bf4c7f.scope.
Dec 06 07:02:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:02:31 compute-0 podman[269728]: 2025-12-06 07:02:31.91432462 +0000 UTC m=+0.108597911 container init 653eb29f5f7d0794ae06ad554459232a5792686c231d1af363e48f7815bf4c7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:02:31 compute-0 podman[269728]: 2025-12-06 07:02:31.921313362 +0000 UTC m=+0.115586633 container start 653eb29f5f7d0794ae06ad554459232a5792686c231d1af363e48f7815bf4c7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 07:02:31 compute-0 podman[269728]: 2025-12-06 07:02:31.925216101 +0000 UTC m=+0.119489372 container attach 653eb29f5f7d0794ae06ad554459232a5792686c231d1af363e48f7815bf4c7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:02:31 compute-0 podman[269728]: 2025-12-06 07:02:31.829782275 +0000 UTC m=+0.024055576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:02:31 compute-0 exciting_banach[269745]: 167 167
Dec 06 07:02:31 compute-0 systemd[1]: libpod-653eb29f5f7d0794ae06ad554459232a5792686c231d1af363e48f7815bf4c7f.scope: Deactivated successfully.
Dec 06 07:02:31 compute-0 podman[269728]: 2025-12-06 07:02:31.926729532 +0000 UTC m=+0.121002803 container died 653eb29f5f7d0794ae06ad554459232a5792686c231d1af363e48f7815bf4c7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 07:02:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-51ae4d02d7cbc3bbe2611d96238b50057e08344d4c18841c62703ed39ad4e172-merged.mount: Deactivated successfully.
Dec 06 07:02:31 compute-0 podman[269728]: 2025-12-06 07:02:31.961645927 +0000 UTC m=+0.155919188 container remove 653eb29f5f7d0794ae06ad554459232a5792686c231d1af363e48f7815bf4c7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:02:31 compute-0 systemd[1]: libpod-conmon-653eb29f5f7d0794ae06ad554459232a5792686c231d1af363e48f7815bf4c7f.scope: Deactivated successfully.
Dec 06 07:02:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Dec 06 07:02:32 compute-0 podman[269767]: 2025-12-06 07:02:32.109513871 +0000 UTC m=+0.037353583 container create 1450ff6c799baf9716c37c372f6c485af9d76f1c7ea1567e4ffe5756ca015986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:02:32 compute-0 systemd[1]: Started libpod-conmon-1450ff6c799baf9716c37c372f6c485af9d76f1c7ea1567e4ffe5756ca015986.scope.
Dec 06 07:02:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e35053705d6bcf3cd889ee7c8faebce633cd9b3ee388faae9b205b8fc752080/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e35053705d6bcf3cd889ee7c8faebce633cd9b3ee388faae9b205b8fc752080/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e35053705d6bcf3cd889ee7c8faebce633cd9b3ee388faae9b205b8fc752080/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e35053705d6bcf3cd889ee7c8faebce633cd9b3ee388faae9b205b8fc752080/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Dec 06 07:02:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Dec 06 07:02:32 compute-0 podman[269767]: 2025-12-06 07:02:32.170712111 +0000 UTC m=+0.098551853 container init 1450ff6c799baf9716c37c372f6c485af9d76f1c7ea1567e4ffe5756ca015986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shaw, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:02:32 compute-0 podman[269767]: 2025-12-06 07:02:32.177088797 +0000 UTC m=+0.104928509 container start 1450ff6c799baf9716c37c372f6c485af9d76f1c7ea1567e4ffe5756ca015986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shaw, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:02:32 compute-0 podman[269767]: 2025-12-06 07:02:32.180570314 +0000 UTC m=+0.108410126 container attach 1450ff6c799baf9716c37c372f6c485af9d76f1c7ea1567e4ffe5756ca015986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 07:02:32 compute-0 podman[269767]: 2025-12-06 07:02:32.093566111 +0000 UTC m=+0.021405843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:02:32 compute-0 nova_compute[251992]: 2025-12-06 07:02:32.380 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:32 compute-0 ceph-mon[74339]: pgmap v1249: 305 pgs: 305 active+clean; 583 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 480 KiB/s wr, 214 op/s
Dec 06 07:02:32 compute-0 trusting_shaw[269784]: {
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:     "0": [
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:         {
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             "devices": [
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "/dev/loop3"
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             ],
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             "lv_name": "ceph_lv0",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             "lv_size": "7511998464",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             "name": "ceph_lv0",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             "tags": {
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "ceph.cluster_name": "ceph",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "ceph.crush_device_class": "",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "ceph.encrypted": "0",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "ceph.osd_id": "0",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "ceph.type": "block",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:                 "ceph.vdo": "0"
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             },
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             "type": "block",
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:             "vg_name": "ceph_vg0"
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:         }
Dec 06 07:02:32 compute-0 trusting_shaw[269784]:     ]
Dec 06 07:02:32 compute-0 trusting_shaw[269784]: }
Dec 06 07:02:32 compute-0 systemd[1]: libpod-1450ff6c799baf9716c37c372f6c485af9d76f1c7ea1567e4ffe5756ca015986.scope: Deactivated successfully.
Dec 06 07:02:32 compute-0 podman[269767]: 2025-12-06 07:02:32.967635594 +0000 UTC m=+0.895475306 container died 1450ff6c799baf9716c37c372f6c485af9d76f1c7ea1567e4ffe5756ca015986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:02:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e35053705d6bcf3cd889ee7c8faebce633cd9b3ee388faae9b205b8fc752080-merged.mount: Deactivated successfully.
Dec 06 07:02:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 569 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 34 KiB/s wr, 224 op/s
Dec 06 07:02:33 compute-0 podman[269767]: 2025-12-06 07:02:33.013888511 +0000 UTC m=+0.941728223 container remove 1450ff6c799baf9716c37c372f6c485af9d76f1c7ea1567e4ffe5756ca015986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 07:02:33 compute-0 systemd[1]: libpod-conmon-1450ff6c799baf9716c37c372f6c485af9d76f1c7ea1567e4ffe5756ca015986.scope: Deactivated successfully.
Dec 06 07:02:33 compute-0 sudo[269663]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:33 compute-0 sudo[269806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:33 compute-0 sudo[269806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:33 compute-0 sudo[269806]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:33.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:33 compute-0 sudo[269831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:02:33 compute-0 sudo[269831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:33 compute-0 sudo[269831]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:33 compute-0 sudo[269856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:33 compute-0 sudo[269856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:33 compute-0 sudo[269856]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:33 compute-0 sudo[269881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:02:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:02:33 compute-0 sudo[269881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:33.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:02:33 compute-0 nova_compute[251992]: 2025-12-06 07:02:33.289 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "b2d26234-5d5c-402f-85cf-d826d2dbae79" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:33 compute-0 nova_compute[251992]: 2025-12-06 07:02:33.290 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:33 compute-0 nova_compute[251992]: 2025-12-06 07:02:33.314 251996 DEBUG nova.compute.manager [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:02:33 compute-0 nova_compute[251992]: 2025-12-06 07:02:33.416 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:33 compute-0 nova_compute[251992]: 2025-12-06 07:02:33.417 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:33 compute-0 nova_compute[251992]: 2025-12-06 07:02:33.425 251996 DEBUG nova.virt.hardware [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:02:33 compute-0 nova_compute[251992]: 2025-12-06 07:02:33.425 251996 INFO nova.compute.claims [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:02:33 compute-0 ceph-mon[74339]: osdmap e168: 3 total, 3 up, 3 in
Dec 06 07:02:33 compute-0 nova_compute[251992]: 2025-12-06 07:02:33.582 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:33 compute-0 podman[269947]: 2025-12-06 07:02:33.594993163 +0000 UTC m=+0.042115434 container create 62f9277b99512ae2c7de5e115f0f1ec913fbd1a90109538fa2a26cfff1eb94b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:02:33 compute-0 systemd[1]: Started libpod-conmon-62f9277b99512ae2c7de5e115f0f1ec913fbd1a90109538fa2a26cfff1eb94b6.scope.
Dec 06 07:02:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:02:33 compute-0 podman[269947]: 2025-12-06 07:02:33.665714586 +0000 UTC m=+0.112836877 container init 62f9277b99512ae2c7de5e115f0f1ec913fbd1a90109538fa2a26cfff1eb94b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carver, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:02:33 compute-0 podman[269947]: 2025-12-06 07:02:33.671970979 +0000 UTC m=+0.119093250 container start 62f9277b99512ae2c7de5e115f0f1ec913fbd1a90109538fa2a26cfff1eb94b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 07:02:33 compute-0 podman[269947]: 2025-12-06 07:02:33.575929896 +0000 UTC m=+0.023052187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:02:33 compute-0 podman[269947]: 2025-12-06 07:02:33.675576868 +0000 UTC m=+0.122699159 container attach 62f9277b99512ae2c7de5e115f0f1ec913fbd1a90109538fa2a26cfff1eb94b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carver, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:02:33 compute-0 systemd[1]: libpod-62f9277b99512ae2c7de5e115f0f1ec913fbd1a90109538fa2a26cfff1eb94b6.scope: Deactivated successfully.
Dec 06 07:02:33 compute-0 angry_carver[269964]: 167 167
Dec 06 07:02:33 compute-0 conmon[269964]: conmon 62f9277b99512ae2c7de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62f9277b99512ae2c7de5e115f0f1ec913fbd1a90109538fa2a26cfff1eb94b6.scope/container/memory.events
Dec 06 07:02:33 compute-0 podman[269947]: 2025-12-06 07:02:33.677138572 +0000 UTC m=+0.124260853 container died 62f9277b99512ae2c7de5e115f0f1ec913fbd1a90109538fa2a26cfff1eb94b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carver, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:02:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-3542cc77724690d932bdd7991a04d591d5bd17ba5b06be5237d4e3b28c533978-merged.mount: Deactivated successfully.
Dec 06 07:02:33 compute-0 podman[269947]: 2025-12-06 07:02:33.714888284 +0000 UTC m=+0.162010555 container remove 62f9277b99512ae2c7de5e115f0f1ec913fbd1a90109538fa2a26cfff1eb94b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_carver, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:02:33 compute-0 systemd[1]: libpod-conmon-62f9277b99512ae2c7de5e115f0f1ec913fbd1a90109538fa2a26cfff1eb94b6.scope: Deactivated successfully.
Dec 06 07:02:33 compute-0 podman[270004]: 2025-12-06 07:02:33.869775943 +0000 UTC m=+0.043066621 container create 08ea282e234f54e4fc41fbd3208d8b19b61b6a453bf65b877a5f4a5eed123694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:02:33 compute-0 systemd[1]: Started libpod-conmon-08ea282e234f54e4fc41fbd3208d8b19b61b6a453bf65b877a5f4a5eed123694.scope.
Dec 06 07:02:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf55ae0e5a1356bd5b3160dce655e536ec22a0c51c4e3b7bb2ad98a067a2555/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf55ae0e5a1356bd5b3160dce655e536ec22a0c51c4e3b7bb2ad98a067a2555/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf55ae0e5a1356bd5b3160dce655e536ec22a0c51c4e3b7bb2ad98a067a2555/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf55ae0e5a1356bd5b3160dce655e536ec22a0c51c4e3b7bb2ad98a067a2555/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:33 compute-0 podman[270004]: 2025-12-06 07:02:33.945386842 +0000 UTC m=+0.118677550 container init 08ea282e234f54e4fc41fbd3208d8b19b61b6a453bf65b877a5f4a5eed123694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 07:02:33 compute-0 podman[270004]: 2025-12-06 07:02:33.850890981 +0000 UTC m=+0.024181679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:02:33 compute-0 podman[270004]: 2025-12-06 07:02:33.953154005 +0000 UTC m=+0.126444683 container start 08ea282e234f54e4fc41fbd3208d8b19b61b6a453bf65b877a5f4a5eed123694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 07:02:33 compute-0 podman[270004]: 2025-12-06 07:02:33.955984194 +0000 UTC m=+0.129274902 container attach 08ea282e234f54e4fc41fbd3208d8b19b61b6a453bf65b877a5f4a5eed123694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 07:02:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:02:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3285884426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.038 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.047 251996 DEBUG nova.compute.provider_tree [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.070 251996 DEBUG nova.scheduler.client.report [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.097 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.098 251996 DEBUG nova.compute.manager [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.192 251996 DEBUG nova.compute.manager [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.194 251996 DEBUG nova.network.neutron [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.215 251996 INFO nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.235 251996 DEBUG nova.compute.manager [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.412 251996 DEBUG nova.compute.manager [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.413 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.414 251996 INFO nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Creating image(s)
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.445 251996 DEBUG nova.storage.rbd_utils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image b2d26234-5d5c-402f-85cf-d826d2dbae79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.471 251996 DEBUG nova.storage.rbd_utils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image b2d26234-5d5c-402f-85cf-d826d2dbae79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.492 251996 DEBUG nova.storage.rbd_utils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image b2d26234-5d5c-402f-85cf-d826d2dbae79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.495 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.554 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.555 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.556 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.556 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.576 251996 DEBUG nova.storage.rbd_utils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image b2d26234-5d5c-402f-85cf-d826d2dbae79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.579 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b2d26234-5d5c-402f-85cf-d826d2dbae79_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.633 251996 DEBUG nova.policy [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3cae056210a400fa5e3495fe827d29a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:02:34 compute-0 funny_matsumoto[270021]: {
Dec 06 07:02:34 compute-0 funny_matsumoto[270021]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:02:34 compute-0 funny_matsumoto[270021]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:02:34 compute-0 funny_matsumoto[270021]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:02:34 compute-0 funny_matsumoto[270021]:         "osd_id": 0,
Dec 06 07:02:34 compute-0 funny_matsumoto[270021]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:02:34 compute-0 funny_matsumoto[270021]:         "type": "bluestore"
Dec 06 07:02:34 compute-0 funny_matsumoto[270021]:     }
Dec 06 07:02:34 compute-0 funny_matsumoto[270021]: }
Dec 06 07:02:34 compute-0 systemd[1]: libpod-08ea282e234f54e4fc41fbd3208d8b19b61b6a453bf65b877a5f4a5eed123694.scope: Deactivated successfully.
Dec 06 07:02:34 compute-0 podman[270004]: 2025-12-06 07:02:34.820299148 +0000 UTC m=+0.993589836 container died 08ea282e234f54e4fc41fbd3208d8b19b61b6a453bf65b877a5f4a5eed123694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 07:02:34 compute-0 ceph-mon[74339]: pgmap v1251: 305 pgs: 305 active+clean; 569 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 34 KiB/s wr, 224 op/s
Dec 06 07:02:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3285884426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-adf55ae0e5a1356bd5b3160dce655e536ec22a0c51c4e3b7bb2ad98a067a2555-merged.mount: Deactivated successfully.
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.866 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b2d26234-5d5c-402f-85cf-d826d2dbae79_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:34 compute-0 podman[270004]: 2025-12-06 07:02:34.870252139 +0000 UTC m=+1.043542817 container remove 08ea282e234f54e4fc41fbd3208d8b19b61b6a453bf65b877a5f4a5eed123694 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Dec 06 07:02:34 compute-0 systemd[1]: libpod-conmon-08ea282e234f54e4fc41fbd3208d8b19b61b6a453bf65b877a5f4a5eed123694.scope: Deactivated successfully.
Dec 06 07:02:34 compute-0 sudo[269881]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:02:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:02:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:02:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:02:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2f73ca24-3489-49b6-b0af-15360e867028 does not exist
Dec 06 07:02:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 917f0a22-d910-4f1a-8e62-e16e578eb7e2 does not exist
Dec 06 07:02:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 13b4b6fb-2363-42f0-a858-d7df0f735a65 does not exist
Dec 06 07:02:34 compute-0 nova_compute[251992]: 2025-12-06 07:02:34.939 251996 DEBUG nova.storage.rbd_utils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] resizing rbd image b2d26234-5d5c-402f-85cf-d826d2dbae79_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:02:34 compute-0 sudo[270188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:34 compute-0 sudo[270188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:34 compute-0 sudo[270188]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 551 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 703 KiB/s wr, 220 op/s
Dec 06 07:02:35 compute-0 nova_compute[251992]: 2025-12-06 07:02:35.039 251996 DEBUG nova.objects.instance [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'migration_context' on Instance uuid b2d26234-5d5c-402f-85cf-d826d2dbae79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:35 compute-0 sudo[270231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:02:35 compute-0 sudo[270231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:35 compute-0 sudo[270231]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:35 compute-0 nova_compute[251992]: 2025-12-06 07:02:35.061 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:02:35 compute-0 nova_compute[251992]: 2025-12-06 07:02:35.062 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Ensure instance console log exists: /var/lib/nova/instances/b2d26234-5d5c-402f-85cf-d826d2dbae79/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:02:35 compute-0 nova_compute[251992]: 2025-12-06 07:02:35.062 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:35 compute-0 nova_compute[251992]: 2025-12-06 07:02:35.063 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:35 compute-0 nova_compute[251992]: 2025-12-06 07:02:35.063 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:35.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:35.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:35 compute-0 sudo[270275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:35 compute-0 sudo[270275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:35 compute-0 sudo[270275]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:35 compute-0 sudo[270300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:35 compute-0 sudo[270300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:35 compute-0 sudo[270300]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:35 compute-0 nova_compute[251992]: 2025-12-06 07:02:35.629 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:02:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:02:35 compute-0 ceph-mon[74339]: pgmap v1252: 305 pgs: 305 active+clean; 551 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 703 KiB/s wr, 220 op/s
Dec 06 07:02:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/401815971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 534 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.1 MiB/s wr, 253 op/s
Dec 06 07:02:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:37.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:37 compute-0 nova_compute[251992]: 2025-12-06 07:02:37.146 251996 DEBUG nova.network.neutron [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Successfully created port: 2adad6cf-78fd-4f03-b073-1df1a5fbf944 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:02:37 compute-0 sshd-session[270124]: Connection reset by authenticating user root 45.140.17.124 port 26112 [preauth]
Dec 06 07:02:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:37.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:37 compute-0 nova_compute[251992]: 2025-12-06 07:02:37.382 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:38 compute-0 ceph-mon[74339]: pgmap v1253: 305 pgs: 305 active+clean; 534 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.1 MiB/s wr, 253 op/s
Dec 06 07:02:38 compute-0 nova_compute[251992]: 2025-12-06 07:02:38.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:38 compute-0 nova_compute[251992]: 2025-12-06 07:02:38.685 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 534 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.1 MiB/s wr, 253 op/s
Dec 06 07:02:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:39.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:39.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:39 compute-0 nova_compute[251992]: 2025-12-06 07:02:39.441 251996 DEBUG nova.network.neutron [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Successfully updated port: 2adad6cf-78fd-4f03-b073-1df1a5fbf944 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:02:39 compute-0 nova_compute[251992]: 2025-12-06 07:02:39.467 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "refresh_cache-b2d26234-5d5c-402f-85cf-d826d2dbae79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:39 compute-0 nova_compute[251992]: 2025-12-06 07:02:39.467 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquired lock "refresh_cache-b2d26234-5d5c-402f-85cf-d826d2dbae79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:39 compute-0 nova_compute[251992]: 2025-12-06 07:02:39.467 251996 DEBUG nova.network.neutron [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:02:39 compute-0 sshd-session[270326]: Connection reset by authenticating user root 45.140.17.124 port 26116 [preauth]
Dec 06 07:02:39 compute-0 nova_compute[251992]: 2025-12-06 07:02:39.874 251996 DEBUG nova.compute.manager [req-b9dfd408-ef12-4d41-8393-930d91413487 req-95b2204a-ed02-44cd-8240-c205bad37ac9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Received event network-changed-2adad6cf-78fd-4f03-b073-1df1a5fbf944 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:39 compute-0 nova_compute[251992]: 2025-12-06 07:02:39.874 251996 DEBUG nova.compute.manager [req-b9dfd408-ef12-4d41-8393-930d91413487 req-95b2204a-ed02-44cd-8240-c205bad37ac9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Refreshing instance network info cache due to event network-changed-2adad6cf-78fd-4f03-b073-1df1a5fbf944. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:02:39 compute-0 nova_compute[251992]: 2025-12-06 07:02:39.875 251996 DEBUG oslo_concurrency.lockutils [req-b9dfd408-ef12-4d41-8393-930d91413487 req-95b2204a-ed02-44cd-8240-c205bad37ac9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b2d26234-5d5c-402f-85cf-d826d2dbae79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:40 compute-0 nova_compute[251992]: 2025-12-06 07:02:40.282 251996 DEBUG nova.network.neutron [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:02:40 compute-0 ceph-mon[74339]: pgmap v1254: 305 pgs: 305 active+clean; 534 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.1 MiB/s wr, 253 op/s
Dec 06 07:02:40 compute-0 nova_compute[251992]: 2025-12-06 07:02:40.631 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 548 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.8 MiB/s wr, 127 op/s
Dec 06 07:02:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:41.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:41.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:41 compute-0 nova_compute[251992]: 2025-12-06 07:02:41.669 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.384 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.429 251996 DEBUG nova.network.neutron [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Updating instance_info_cache with network_info: [{"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:42 compute-0 ceph-mon[74339]: pgmap v1255: 305 pgs: 305 active+clean; 548 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.8 MiB/s wr, 127 op/s
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.472 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Releasing lock "refresh_cache-b2d26234-5d5c-402f-85cf-d826d2dbae79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.473 251996 DEBUG nova.compute.manager [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Instance network_info: |[{"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.473 251996 DEBUG oslo_concurrency.lockutils [req-b9dfd408-ef12-4d41-8393-930d91413487 req-95b2204a-ed02-44cd-8240-c205bad37ac9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b2d26234-5d5c-402f-85cf-d826d2dbae79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.473 251996 DEBUG nova.network.neutron [req-b9dfd408-ef12-4d41-8393-930d91413487 req-95b2204a-ed02-44cd-8240-c205bad37ac9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Refreshing network info cache for port 2adad6cf-78fd-4f03-b073-1df1a5fbf944 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.476 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Start _get_guest_xml network_info=[{"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.481 251996 WARNING nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.486 251996 DEBUG nova.virt.libvirt.host [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.487 251996 DEBUG nova.virt.libvirt.host [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.493 251996 DEBUG nova.virt.libvirt.host [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.494 251996 DEBUG nova.virt.libvirt.host [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.495 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.495 251996 DEBUG nova.virt.hardware [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.495 251996 DEBUG nova.virt.hardware [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.496 251996 DEBUG nova.virt.hardware [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.496 251996 DEBUG nova.virt.hardware [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.496 251996 DEBUG nova.virt.hardware [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.496 251996 DEBUG nova.virt.hardware [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.497 251996 DEBUG nova.virt.hardware [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.497 251996 DEBUG nova.virt.hardware [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.497 251996 DEBUG nova.virt.hardware [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.497 251996 DEBUG nova.virt.hardware [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.498 251996 DEBUG nova.virt.hardware [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.501 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:42 compute-0 sshd-session[270329]: Connection reset by authenticating user root 45.140.17.124 port 26150 [preauth]
Dec 06 07:02:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:02:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1762971457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:02:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:02:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:02:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:02:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:02:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.964 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.991 251996 DEBUG nova.storage.rbd_utils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image b2d26234-5d5c-402f-85cf-d826d2dbae79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 557 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.2 MiB/s wr, 114 op/s
Dec 06 07:02:42 compute-0 nova_compute[251992]: 2025-12-06 07:02:42.995 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:43.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:43.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.333 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004548.3323712, 345d5d4a-3a34-4809-9ae4-60a579c5e49a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.334 251996 INFO nova.compute.manager [-] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] VM Stopped (Lifecycle Event)
Dec 06 07:02:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:02:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1640493818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.456 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.457 251996 DEBUG nova.virt.libvirt.vif [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:02:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-762315562',display_name='tempest-ServersAdminTestJSON-server-762315562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-762315562',id=22,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ayig0b0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:02:34Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=b2d26234-5d5c-402f-85cf-d826d2dbae79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.458 251996 DEBUG nova.network.os_vif_util [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.458 251996 DEBUG nova.network.os_vif_util [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:d4:cf,bridge_name='br-int',has_traffic_filtering=True,id=2adad6cf-78fd-4f03-b073-1df1a5fbf944,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2adad6cf-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.460 251996 DEBUG nova.objects.instance [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'pci_devices' on Instance uuid b2d26234-5d5c-402f-85cf-d826d2dbae79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.550 251996 DEBUG nova.compute.manager [None req-7966f048-6a2e-424c-a463-b5386d3d3382 - - - - - -] [instance: 345d5d4a-3a34-4809-9ae4-60a579c5e49a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.577 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:02:43 compute-0 nova_compute[251992]:   <uuid>b2d26234-5d5c-402f-85cf-d826d2dbae79</uuid>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   <name>instance-00000016</name>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <nova:name>tempest-ServersAdminTestJSON-server-762315562</nova:name>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:02:42</nova:creationTime>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <nova:user uuid="a3cae056210a400fa5e3495fe827d29a">tempest-ServersAdminTestJSON-1902776367-project-member</nova:user>
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <nova:project uuid="b6179a8b65c2484eb7ca1e068d93a58c">tempest-ServersAdminTestJSON-1902776367</nova:project>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <nova:port uuid="2adad6cf-78fd-4f03-b073-1df1a5fbf944">
Dec 06 07:02:43 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <system>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <entry name="serial">b2d26234-5d5c-402f-85cf-d826d2dbae79</entry>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <entry name="uuid">b2d26234-5d5c-402f-85cf-d826d2dbae79</entry>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     </system>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   <os>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   </os>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   <features>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   </features>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/b2d26234-5d5c-402f-85cf-d826d2dbae79_disk">
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       </source>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/b2d26234-5d5c-402f-85cf-d826d2dbae79_disk.config">
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       </source>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:02:43 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:83:d4:cf"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <target dev="tap2adad6cf-78"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/b2d26234-5d5c-402f-85cf-d826d2dbae79/console.log" append="off"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <video>
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     </video>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:02:43 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:02:43 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:02:43 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:02:43 compute-0 nova_compute[251992]: </domain>
Dec 06 07:02:43 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.579 251996 DEBUG nova.compute.manager [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Preparing to wait for external event network-vif-plugged-2adad6cf-78fd-4f03-b073-1df1a5fbf944 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.580 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.580 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.580 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.581 251996 DEBUG nova.virt.libvirt.vif [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:02:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-762315562',display_name='tempest-ServersAdminTestJSON-server-762315562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-762315562',id=22,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ayig0b0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:02:34Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=b2d26234-5d5c-402f-85cf-d826d2dbae79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.581 251996 DEBUG nova.network.os_vif_util [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.582 251996 DEBUG nova.network.os_vif_util [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:d4:cf,bridge_name='br-int',has_traffic_filtering=True,id=2adad6cf-78fd-4f03-b073-1df1a5fbf944,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2adad6cf-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.582 251996 DEBUG os_vif [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:d4:cf,bridge_name='br-int',has_traffic_filtering=True,id=2adad6cf-78fd-4f03-b073-1df1a5fbf944,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2adad6cf-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.583 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.583 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.584 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.587 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.587 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2adad6cf-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.588 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2adad6cf-78, col_values=(('external_ids', {'iface-id': '2adad6cf-78fd-4f03-b073-1df1a5fbf944', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:d4:cf', 'vm-uuid': 'b2d26234-5d5c-402f-85cf-d826d2dbae79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:43 compute-0 NetworkManager[48965]: <info>  [1765004563.5905] manager: (tap2adad6cf-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.592 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.598 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.599 251996 INFO os_vif [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:d4:cf,bridge_name='br-int',has_traffic_filtering=True,id=2adad6cf-78fd-4f03-b073-1df1a5fbf944,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2adad6cf-78')
Dec 06 07:02:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1762971457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1640493818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.683 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.683 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.683 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] No VIF found with MAC fa:16:3e:83:d4:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.684 251996 INFO nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Using config drive
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.708 251996 DEBUG nova.storage.rbd_utils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image b2d26234-5d5c-402f-85cf-d826d2dbae79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.719 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.719 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.720 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.720 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:02:43 compute-0 nova_compute[251992]: 2025-12-06 07:02:43.720 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:02:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3876989622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.143 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.209 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.209 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.365 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.367 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4776MB free_disk=20.695194244384766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.367 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.367 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.554 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance b2d26234-5d5c-402f-85cf-d826d2dbae79 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.554 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.554 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:02:44 compute-0 ceph-mon[74339]: pgmap v1256: 305 pgs: 305 active+clean; 557 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.2 MiB/s wr, 114 op/s
Dec 06 07:02:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3876989622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1988349296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.689 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.743 251996 INFO nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Creating config drive at /var/lib/nova/instances/b2d26234-5d5c-402f-85cf-d826d2dbae79/disk.config
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.748 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b2d26234-5d5c-402f-85cf-d826d2dbae79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5f6se56z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.872 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b2d26234-5d5c-402f-85cf-d826d2dbae79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5f6se56z" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.898 251996 DEBUG nova.storage.rbd_utils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] rbd image b2d26234-5d5c-402f-85cf-d826d2dbae79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:02:44 compute-0 nova_compute[251992]: 2025-12-06 07:02:44.902 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b2d26234-5d5c-402f-85cf-d826d2dbae79/disk.config b2d26234-5d5c-402f-85cf-d826d2dbae79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:02:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 538 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 137 op/s
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.075 251996 DEBUG oslo_concurrency.processutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b2d26234-5d5c-402f-85cf-d826d2dbae79/disk.config b2d26234-5d5c-402f-85cf-d826d2dbae79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.076 251996 INFO nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Deleting local config drive /var/lib/nova/instances/b2d26234-5d5c-402f-85cf-d826d2dbae79/disk.config because it was imported into RBD.
Dec 06 07:02:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:02:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968125397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:45 compute-0 kernel: tap2adad6cf-78: entered promiscuous mode
Dec 06 07:02:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:45.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:45 compute-0 NetworkManager[48965]: <info>  [1765004565.1306] manager: (tap2adad6cf-78): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.133 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:02:45 compute-0 ovn_controller[147168]: 2025-12-06T07:02:45Z|00052|binding|INFO|Claiming lport 2adad6cf-78fd-4f03-b073-1df1a5fbf944 for this chassis.
Dec 06 07:02:45 compute-0 ovn_controller[147168]: 2025-12-06T07:02:45Z|00053|binding|INFO|2adad6cf-78fd-4f03-b073-1df1a5fbf944: Claiming fa:16:3e:83:d4:cf 10.100.0.13
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.143 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.147 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.157 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:d4:cf 10.100.0.13'], port_security=['fa:16:3e:83:d4:cf 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b2d26234-5d5c-402f-85cf-d826d2dbae79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9451d867-0aba-464d-b4d9-f947b887e903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b3eef2d-12bb-4dc8-aa99-2a680fa42d41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294a4822-9a42-4d06-8976-2cf65d54c6f2, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=2adad6cf-78fd-4f03-b073-1df1a5fbf944) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.159 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 2adad6cf-78fd-4f03-b073-1df1a5fbf944 in datapath 9451d867-0aba-464d-b4d9-f947b887e903 bound to our chassis
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.161 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9451d867-0aba-464d-b4d9-f947b887e903
Dec 06 07:02:45 compute-0 systemd-udevd[270517]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.170 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.174 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[edff4a20-5f36-4fbd-924e-6197ce88aeaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.176 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9451d867-01 in ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:02:45 compute-0 systemd-machined[212986]: New machine qemu-11-instance-00000016.
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.178 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9451d867-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.179 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6f3c6dc5-bf72-4e03-92dc-620fbd732f2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.181 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[58bf1dbb-a5a9-4773-8049-5ecf6fda7d5a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 NetworkManager[48965]: <info>  [1765004565.1838] device (tap2adad6cf-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:02:45 compute-0 NetworkManager[48965]: <info>  [1765004565.1846] device (tap2adad6cf-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.194 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d247e8-f79f-4750-a7a5-8197a8f180dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.206 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.207 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:45 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000016.
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.215 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.219 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[998edf2b-10d5-4a6c-9f59-074e53284c1b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.221 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:45 compute-0 ovn_controller[147168]: 2025-12-06T07:02:45Z|00054|binding|INFO|Setting lport 2adad6cf-78fd-4f03-b073-1df1a5fbf944 ovn-installed in OVS
Dec 06 07:02:45 compute-0 ovn_controller[147168]: 2025-12-06T07:02:45Z|00055|binding|INFO|Setting lport 2adad6cf-78fd-4f03-b073-1df1a5fbf944 up in Southbound
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.226 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.249 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ae73eda7-6be0-416e-9442-cc809719e683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.255 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7f8023e0-3fda-44cf-a575-b080ed22fc81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 NetworkManager[48965]: <info>  [1765004565.2561] manager: (tap9451d867-00): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.284 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[082b9f27-af22-4401-8f18-aa43ba1d1647]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:02:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:45.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.287 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[8c66ee63-6f3b-454c-892e-fdf33a7bbe13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 NetworkManager[48965]: <info>  [1765004565.3057] device (tap9451d867-00): carrier: link connected
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.310 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e85e3ca8-99bf-4586-b5f0-12ff2e163362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.326 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[35b1bd16-1e27-4b03-acb5-4f458b88620c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9451d867-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:04:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483789, 'reachable_time': 22997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270550, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.344 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[100f9882-e604-4672-9e37-a8767d230637]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:45e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 483789, 'tstamp': 483789}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270551, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.358 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[15ff0fd7-c26b-44b1-9bbc-d86df2a87cf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9451d867-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:04:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483789, 'reachable_time': 22997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270552, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.392 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f86b6c95-16b8-4740-95bf-729c29533347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 sshd-session[270355]: Connection reset by authenticating user root 45.140.17.124 port 28684 [preauth]
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.443 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[46faab08-566c-49d4-9242-22e549125acb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.444 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9451d867-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.444 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.445 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9451d867-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.446 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:45 compute-0 NetworkManager[48965]: <info>  [1765004565.4472] manager: (tap9451d867-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Dec 06 07:02:45 compute-0 kernel: tap9451d867-00: entered promiscuous mode
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.448 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.450 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9451d867-00, col_values=(('external_ids', {'iface-id': 'fed07814-3a76-4798-8d3b-90759d15a8cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.451 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:45 compute-0 ovn_controller[147168]: 2025-12-06T07:02:45Z|00056|binding|INFO|Releasing lport fed07814-3a76-4798-8d3b-90759d15a8cf from this chassis (sb_readonly=0)
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.465 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.466 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9451d867-0aba-464d-b4d9-f947b887e903.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9451d867-0aba-464d-b4d9-f947b887e903.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.470 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4b762865-7c42-4de3-b628-5eca82be80a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.471 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-9451d867-0aba-464d-b4d9-f947b887e903
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/9451d867-0aba-464d-b4d9-f947b887e903.pid.haproxy
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 9451d867-0aba-464d-b4d9-f947b887e903
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:02:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:02:45.471 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'env', 'PROCESS_TAG=haproxy-9451d867-0aba-464d-b4d9-f947b887e903', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9451d867-0aba-464d-b4d9-f947b887e903.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:02:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/968125397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.699 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.699 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.699 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.731 251996 DEBUG nova.compute.manager [req-f59b65a1-e993-4a44-8967-c157ef115d3b req-bdc2adce-1061-4993-9eb7-37f41d005565 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Received event network-vif-plugged-2adad6cf-78fd-4f03-b073-1df1a5fbf944 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.731 251996 DEBUG oslo_concurrency.lockutils [req-f59b65a1-e993-4a44-8967-c157ef115d3b req-bdc2adce-1061-4993-9eb7-37f41d005565 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.731 251996 DEBUG oslo_concurrency.lockutils [req-f59b65a1-e993-4a44-8967-c157ef115d3b req-bdc2adce-1061-4993-9eb7-37f41d005565 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.732 251996 DEBUG oslo_concurrency.lockutils [req-f59b65a1-e993-4a44-8967-c157ef115d3b req-bdc2adce-1061-4993-9eb7-37f41d005565 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.732 251996 DEBUG nova.compute.manager [req-f59b65a1-e993-4a44-8967-c157ef115d3b req-bdc2adce-1061-4993-9eb7-37f41d005565 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Processing event network-vif-plugged-2adad6cf-78fd-4f03-b073-1df1a5fbf944 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.739 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004565.7376225, b2d26234-5d5c-402f-85cf-d826d2dbae79 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.739 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] VM Started (Lifecycle Event)
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.743 251996 DEBUG nova.compute.manager [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.746 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.750 251996 INFO nova.virt.libvirt.driver [-] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Instance spawned successfully.
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.751 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.765 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.770 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.777 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.777 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.778 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.778 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.778 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.779 251996 DEBUG nova.virt.libvirt.driver [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.822 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.822 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004565.7378721, b2d26234-5d5c-402f-85cf-d826d2dbae79 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.822 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] VM Paused (Lifecycle Event)
Dec 06 07:02:45 compute-0 podman[270627]: 2025-12-06 07:02:45.869152765 +0000 UTC m=+0.042385831 container create 6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.872 251996 DEBUG nova.network.neutron [req-b9dfd408-ef12-4d41-8393-930d91413487 req-95b2204a-ed02-44cd-8240-c205bad37ac9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Updated VIF entry in instance network info cache for port 2adad6cf-78fd-4f03-b073-1df1a5fbf944. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.872 251996 DEBUG nova.network.neutron [req-b9dfd408-ef12-4d41-8393-930d91413487 req-95b2204a-ed02-44cd-8240-c205bad37ac9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Updating instance_info_cache with network_info: [{"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.874 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.880 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004565.7459214, b2d26234-5d5c-402f-85cf-d826d2dbae79 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.881 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] VM Resumed (Lifecycle Event)
Dec 06 07:02:45 compute-0 systemd[1]: Started libpod-conmon-6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708.scope.
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.906 251996 INFO nova.compute.manager [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Took 11.49 seconds to spawn the instance on the hypervisor.
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.907 251996 DEBUG nova.compute.manager [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.908 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.909 251996 DEBUG oslo_concurrency.lockutils [req-b9dfd408-ef12-4d41-8393-930d91413487 req-95b2204a-ed02-44cd-8240-c205bad37ac9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b2d26234-5d5c-402f-85cf-d826d2dbae79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.914 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:02:45 compute-0 podman[270627]: 2025-12-06 07:02:45.845801421 +0000 UTC m=+0.019034507 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:02:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:02:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f963743b3615247302df87ed22952c6085f3c98941f713becbd6c2919e773839/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:02:45 compute-0 nova_compute[251992]: 2025-12-06 07:02:45.958 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:02:45 compute-0 podman[270627]: 2025-12-06 07:02:45.96236983 +0000 UTC m=+0.135602896 container init 6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:02:45 compute-0 podman[270627]: 2025-12-06 07:02:45.967179513 +0000 UTC m=+0.140412579 container start 6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 07:02:45 compute-0 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[270643]: [NOTICE]   (270647) : New worker (270649) forked
Dec 06 07:02:45 compute-0 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[270643]: [NOTICE]   (270647) : Loading success.
Dec 06 07:02:46 compute-0 nova_compute[251992]: 2025-12-06 07:02:46.014 251996 INFO nova.compute.manager [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Took 12.63 seconds to build instance.
Dec 06 07:02:46 compute-0 nova_compute[251992]: 2025-12-06 07:02:46.038 251996 DEBUG oslo_concurrency.lockutils [None req-2f8be1ab-3438-48f0-9394-e6f1e8d34311 a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:46 compute-0 ceph-mon[74339]: pgmap v1257: 305 pgs: 305 active+clean; 538 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 137 op/s
Dec 06 07:02:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1248195527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:46 compute-0 nova_compute[251992]: 2025-12-06 07:02:46.741 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:46 compute-0 nova_compute[251992]: 2025-12-06 07:02:46.742 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:02:46 compute-0 nova_compute[251992]: 2025-12-06 07:02:46.743 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:02:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 532 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.1 MiB/s wr, 193 op/s
Dec 06 07:02:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:47.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:47.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:47 compute-0 nova_compute[251992]: 2025-12-06 07:02:47.431 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:47 compute-0 sshd-session[270562]: Connection reset by authenticating user root 45.140.17.124 port 28728 [preauth]
Dec 06 07:02:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2579202930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/801178798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:47 compute-0 nova_compute[251992]: 2025-12-06 07:02:47.984 251996 DEBUG nova.compute.manager [req-e948cf31-a962-4e7a-91d5-fdb895f7ca2a req-cab7ad23-0ade-44ce-83ec-35bc99937314 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Received event network-vif-plugged-2adad6cf-78fd-4f03-b073-1df1a5fbf944 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:02:47 compute-0 nova_compute[251992]: 2025-12-06 07:02:47.984 251996 DEBUG oslo_concurrency.lockutils [req-e948cf31-a962-4e7a-91d5-fdb895f7ca2a req-cab7ad23-0ade-44ce-83ec-35bc99937314 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:02:47 compute-0 nova_compute[251992]: 2025-12-06 07:02:47.984 251996 DEBUG oslo_concurrency.lockutils [req-e948cf31-a962-4e7a-91d5-fdb895f7ca2a req-cab7ad23-0ade-44ce-83ec-35bc99937314 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:02:47 compute-0 nova_compute[251992]: 2025-12-06 07:02:47.985 251996 DEBUG oslo_concurrency.lockutils [req-e948cf31-a962-4e7a-91d5-fdb895f7ca2a req-cab7ad23-0ade-44ce-83ec-35bc99937314 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:02:47 compute-0 nova_compute[251992]: 2025-12-06 07:02:47.985 251996 DEBUG nova.compute.manager [req-e948cf31-a962-4e7a-91d5-fdb895f7ca2a req-cab7ad23-0ade-44ce-83ec-35bc99937314 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] No waiting events found dispatching network-vif-plugged-2adad6cf-78fd-4f03-b073-1df1a5fbf944 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:02:47 compute-0 nova_compute[251992]: 2025-12-06 07:02:47.985 251996 WARNING nova.compute.manager [req-e948cf31-a962-4e7a-91d5-fdb895f7ca2a req-cab7ad23-0ade-44ce-83ec-35bc99937314 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Received unexpected event network-vif-plugged-2adad6cf-78fd-4f03-b073-1df1a5fbf944 for instance with vm_state active and task_state None.
Dec 06 07:02:48 compute-0 nova_compute[251992]: 2025-12-06 07:02:48.032 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-b2d26234-5d5c-402f-85cf-d826d2dbae79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:48 compute-0 nova_compute[251992]: 2025-12-06 07:02:48.032 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-b2d26234-5d5c-402f-85cf-d826d2dbae79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:48 compute-0 nova_compute[251992]: 2025-12-06 07:02:48.032 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:02:48 compute-0 nova_compute[251992]: 2025-12-06 07:02:48.033 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b2d26234-5d5c-402f-85cf-d826d2dbae79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:02:48 compute-0 nova_compute[251992]: 2025-12-06 07:02:48.621 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:48 compute-0 nova_compute[251992]: 2025-12-06 07:02:48.678 251996 DEBUG oslo_concurrency.lockutils [None req-731e8b96-6332-4d54-a964-a3440774279d 772d06a215da4b2f88e6a2a37c9e7194 bcfff898efaa4054886db7320c2a0364 - - default default] Acquiring lock "refresh_cache-b2d26234-5d5c-402f-85cf-d826d2dbae79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:02:48 compute-0 ceph-mon[74339]: pgmap v1258: 305 pgs: 305 active+clean; 532 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.1 MiB/s wr, 193 op/s
Dec 06 07:02:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 532 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 3.9 MiB/s wr, 146 op/s
Dec 06 07:02:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:02:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:49.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:02:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:49.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:49 compute-0 podman[270660]: 2025-12-06 07:02:49.415864505 +0000 UTC m=+0.073669285 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:02:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2702856638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:50 compute-0 ceph-mon[74339]: pgmap v1259: 305 pgs: 305 active+clean; 532 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 3.9 MiB/s wr, 146 op/s
Dec 06 07:02:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/46249981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 457 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 217 op/s
Dec 06 07:02:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:02:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:51.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:02:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:51.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4028376923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:02:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3290223524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:52 compute-0 nova_compute[251992]: 2025-12-06 07:02:52.042 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Updating instance_info_cache with network_info: [{"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:52 compute-0 nova_compute[251992]: 2025-12-06 07:02:52.065 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-b2d26234-5d5c-402f-85cf-d826d2dbae79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:52 compute-0 nova_compute[251992]: 2025-12-06 07:02:52.065 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:02:52 compute-0 nova_compute[251992]: 2025-12-06 07:02:52.065 251996 DEBUG oslo_concurrency.lockutils [None req-731e8b96-6332-4d54-a964-a3440774279d 772d06a215da4b2f88e6a2a37c9e7194 bcfff898efaa4054886db7320c2a0364 - - default default] Acquired lock "refresh_cache-b2d26234-5d5c-402f-85cf-d826d2dbae79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:02:52 compute-0 nova_compute[251992]: 2025-12-06 07:02:52.066 251996 DEBUG nova.network.neutron [None req-731e8b96-6332-4d54-a964-a3440774279d 772d06a215da4b2f88e6a2a37c9e7194 bcfff898efaa4054886db7320c2a0364 - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:02:52 compute-0 nova_compute[251992]: 2025-12-06 07:02:52.067 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:02:52 compute-0 nova_compute[251992]: 2025-12-06 07:02:52.433 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 451 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 202 op/s
Dec 06 07:02:53 compute-0 ceph-mon[74339]: pgmap v1260: 305 pgs: 305 active+clean; 457 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 217 op/s
Dec 06 07:02:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:53.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:53.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:53 compute-0 nova_compute[251992]: 2025-12-06 07:02:53.624 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:54 compute-0 ceph-mon[74339]: pgmap v1261: 305 pgs: 305 active+clean; 451 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 202 op/s
Dec 06 07:02:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 451 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 192 op/s
Dec 06 07:02:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:55.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:55.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:55 compute-0 podman[270691]: 2025-12-06 07:02:55.389161024 +0000 UTC m=+0.049947440 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 07:02:55 compute-0 podman[270692]: 2025-12-06 07:02:55.399004567 +0000 UTC m=+0.056921423 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:02:55 compute-0 sudo[270725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:55 compute-0 sudo[270725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:55 compute-0 sudo[270725]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:55 compute-0 sudo[270750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:02:55 compute-0 sudo[270750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:02:55 compute-0 sudo[270750]: pam_unix(sudo:session): session closed for user root
Dec 06 07:02:55 compute-0 nova_compute[251992]: 2025-12-06 07:02:55.956 251996 DEBUG nova.network.neutron [None req-731e8b96-6332-4d54-a964-a3440774279d 772d06a215da4b2f88e6a2a37c9e7194 bcfff898efaa4054886db7320c2a0364 - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Updating instance_info_cache with network_info: [{"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:02:56 compute-0 nova_compute[251992]: 2025-12-06 07:02:56.028 251996 DEBUG oslo_concurrency.lockutils [None req-731e8b96-6332-4d54-a964-a3440774279d 772d06a215da4b2f88e6a2a37c9e7194 bcfff898efaa4054886db7320c2a0364 - - default default] Releasing lock "refresh_cache-b2d26234-5d5c-402f-85cf-d826d2dbae79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:02:56 compute-0 nova_compute[251992]: 2025-12-06 07:02:56.029 251996 DEBUG nova.compute.manager [None req-731e8b96-6332-4d54-a964-a3440774279d 772d06a215da4b2f88e6a2a37c9e7194 bcfff898efaa4054886db7320c2a0364 - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Dec 06 07:02:56 compute-0 nova_compute[251992]: 2025-12-06 07:02:56.030 251996 DEBUG nova.compute.manager [None req-731e8b96-6332-4d54-a964-a3440774279d 772d06a215da4b2f88e6a2a37c9e7194 bcfff898efaa4054886db7320c2a0364 - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] network_info to inject: |[{"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Dec 06 07:02:56 compute-0 ceph-mon[74339]: pgmap v1262: 305 pgs: 305 active+clean; 451 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 192 op/s
Dec 06 07:02:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 451 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 158 op/s
Dec 06 07:02:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:02:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:57.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:57.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:57 compute-0 nova_compute[251992]: 2025-12-06 07:02:57.435 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2544000975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:02:58 compute-0 nova_compute[251992]: 2025-12-06 07:02:58.627 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:02:58 compute-0 ceph-mon[74339]: pgmap v1263: 305 pgs: 305 active+clean; 451 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 158 op/s
Dec 06 07:02:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 451 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 24 KiB/s wr, 88 op/s
Dec 06 07:02:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:02:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:02:59.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:02:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:02:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:02:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:02:59.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:02:59 compute-0 ovn_controller[147168]: 2025-12-06T07:02:59Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:d4:cf 10.100.0.13
Dec 06 07:02:59 compute-0 ovn_controller[147168]: 2025-12-06T07:02:59Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:d4:cf 10.100.0.13
Dec 06 07:03:00 compute-0 ceph-mon[74339]: pgmap v1264: 305 pgs: 305 active+clean; 451 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 24 KiB/s wr, 88 op/s
Dec 06 07:03:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 472 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 132 op/s
Dec 06 07:03:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:01.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:01.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:02 compute-0 nova_compute[251992]: 2025-12-06 07:03:02.124 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:02.124 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:02.125 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:03:02 compute-0 nova_compute[251992]: 2025-12-06 07:03:02.437 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:02 compute-0 ceph-mon[74339]: pgmap v1265: 305 pgs: 305 active+clean; 472 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 132 op/s
Dec 06 07:03:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 480 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 809 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 07:03:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:03.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:03.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:03 compute-0 nova_compute[251992]: 2025-12-06 07:03:03.629 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:03.812 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:03.812 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:03.813 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:04 compute-0 ceph-mon[74339]: pgmap v1266: 305 pgs: 305 active+clean; 480 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 809 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 07:03:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 448 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec 06 07:03:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:05.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:05.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:06 compute-0 ceph-mon[74339]: pgmap v1267: 305 pgs: 305 active+clean; 448 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 106 op/s
Dec 06 07:03:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 405 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Dec 06 07:03:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:07.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:07.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:07 compute-0 nova_compute[251992]: 2025-12-06 07:03:07.439 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:08 compute-0 ceph-mon[74339]: pgmap v1268: 305 pgs: 305 active+clean; 405 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Dec 06 07:03:08 compute-0 nova_compute[251992]: 2025-12-06 07:03:08.632 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 405 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Dec 06 07:03:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:03:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:09.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:03:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4229558082' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:03:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4229558082' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:03:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:09.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:10 compute-0 ceph-mon[74339]: pgmap v1269: 305 pgs: 305 active+clean; 405 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Dec 06 07:03:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3468614630' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 444 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 202 op/s
Dec 06 07:03:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:11.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2512659534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:03:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:11.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:03:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:12.129 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:12 compute-0 ceph-mon[74339]: pgmap v1270: 305 pgs: 305 active+clean; 444 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 202 op/s
Dec 06 07:03:12 compute-0 nova_compute[251992]: 2025-12-06 07:03:12.441 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:03:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/566616693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:03:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:03:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:03:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:03:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:03:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:03:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 451 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 160 op/s
Dec 06 07:03:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:13.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:13.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/566616693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:13 compute-0 nova_compute[251992]: 2025-12-06 07:03:13.633 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:14 compute-0 ceph-mon[74339]: pgmap v1271: 305 pgs: 305 active+clean; 451 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 160 op/s
Dec 06 07:03:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 465 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.0 MiB/s wr, 199 op/s
Dec 06 07:03:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:03:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:15.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:03:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:15.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:15 compute-0 sudo[270787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:15 compute-0 sudo[270787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:15 compute-0 sudo[270787]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:15 compute-0 sudo[270812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:15 compute-0 sudo[270812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:15 compute-0 sudo[270812]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:16 compute-0 ceph-mon[74339]: pgmap v1272: 305 pgs: 305 active+clean; 465 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.0 MiB/s wr, 199 op/s
Dec 06 07:03:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 484 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.0 MiB/s wr, 236 op/s
Dec 06 07:03:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:17.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:17.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:17 compute-0 nova_compute[251992]: 2025-12-06 07:03:17.444 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:03:18
Dec 06 07:03:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:03:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:03:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'images', 'backups', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta']
Dec 06 07:03:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:03:18 compute-0 nova_compute[251992]: 2025-12-06 07:03:18.687 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:18 compute-0 ceph-mon[74339]: pgmap v1273: 305 pgs: 305 active+clean; 484 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.0 MiB/s wr, 236 op/s
Dec 06 07:03:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 484 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Dec 06 07:03:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:19.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:19.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:20 compute-0 podman[270839]: 2025-12-06 07:03:20.423923274 +0000 UTC m=+0.082680395 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:03:20 compute-0 ceph-mon[74339]: pgmap v1274: 305 pgs: 305 active+clean; 484 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Dec 06 07:03:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 484 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 182 op/s
Dec 06 07:03:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:21.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:21.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:22 compute-0 ceph-mon[74339]: pgmap v1275: 305 pgs: 305 active+clean; 484 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 182 op/s
Dec 06 07:03:22 compute-0 nova_compute[251992]: 2025-12-06 07:03:22.445 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 484 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 144 op/s
Dec 06 07:03:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:23.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:03:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:23.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:03:23 compute-0 nova_compute[251992]: 2025-12-06 07:03:23.690 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:24 compute-0 ceph-mon[74339]: pgmap v1276: 305 pgs: 305 active+clean; 484 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 144 op/s
Dec 06 07:03:24 compute-0 nova_compute[251992]: 2025-12-06 07:03:24.787 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:24 compute-0 nova_compute[251992]: 2025-12-06 07:03:24.807 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Triggering sync for uuid b2d26234-5d5c-402f-85cf-d826d2dbae79 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:03:24 compute-0 nova_compute[251992]: 2025-12-06 07:03:24.807 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "b2d26234-5d5c-402f-85cf-d826d2dbae79" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:24 compute-0 nova_compute[251992]: 2025-12-06 07:03:24.807 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:24 compute-0 nova_compute[251992]: 2025-12-06 07:03:24.829 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 487 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 147 op/s
Dec 06 07:03:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:25.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:25.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009784890889633352 of space, bias 1.0, pg target 2.9354672668900057 quantized to 32 (current 32)
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002160505944072213 of space, bias 1.0, pg target 0.6438307713335194 quantized to 32 (current 32)
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:03:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Dec 06 07:03:26 compute-0 podman[270869]: 2025-12-06 07:03:26.400734126 +0000 UTC m=+0.055660224 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:03:26 compute-0 podman[270868]: 2025-12-06 07:03:26.408141632 +0000 UTC m=+0.065176959 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 07:03:26 compute-0 ceph-mon[74339]: pgmap v1277: 305 pgs: 305 active+clean; 487 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 147 op/s
Dec 06 07:03:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 502 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 100 op/s
Dec 06 07:03:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:27.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:03:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:27.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:03:27 compute-0 nova_compute[251992]: 2025-12-06 07:03:27.448 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:28 compute-0 ceph-mon[74339]: pgmap v1278: 305 pgs: 305 active+clean; 502 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 100 op/s
Dec 06 07:03:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/877678940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:28 compute-0 nova_compute[251992]: 2025-12-06 07:03:28.694 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 502 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 1.2 MiB/s wr, 33 op/s
Dec 06 07:03:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:03:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:29.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:03:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:03:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:29.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:03:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2969829256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:30 compute-0 ceph-mon[74339]: pgmap v1279: 305 pgs: 305 active+clean; 502 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 1.2 MiB/s wr, 33 op/s
Dec 06 07:03:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 489 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Dec 06 07:03:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:31.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:31.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:32 compute-0 ceph-mon[74339]: pgmap v1280: 305 pgs: 305 active+clean; 489 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Dec 06 07:03:32 compute-0 nova_compute[251992]: 2025-12-06 07:03:32.510 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 477 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Dec 06 07:03:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:03:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:33.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:03:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:33.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:33 compute-0 nova_compute[251992]: 2025-12-06 07:03:33.697 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:34 compute-0 ceph-mon[74339]: pgmap v1281: 305 pgs: 305 active+clean; 477 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Dec 06 07:03:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1048102116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 459 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 348 KiB/s rd, 2.7 MiB/s wr, 103 op/s
Dec 06 07:03:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/492606586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/299817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:35.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:03:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:35.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:03:35 compute-0 sudo[270913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:35 compute-0 sudo[270913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:35 compute-0 sudo[270913]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:35 compute-0 sudo[270938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:03:35 compute-0 sudo[270938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:35 compute-0 sudo[270938]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:35 compute-0 sudo[270963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:35 compute-0 sudo[270963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:35 compute-0 sudo[270963]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:35 compute-0 sudo[270988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:03:35 compute-0 sudo[270988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:35 compute-0 sudo[271033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:35 compute-0 sudo[271033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:35 compute-0 sudo[271033]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:35 compute-0 sudo[271058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:35 compute-0 sudo[271058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:35 compute-0 sudo[271058]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:36 compute-0 sudo[270988]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:36 compute-0 ceph-mon[74339]: pgmap v1282: 305 pgs: 305 active+clean; 459 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 348 KiB/s rd, 2.7 MiB/s wr, 103 op/s
Dec 06 07:03:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 484 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.8 MiB/s wr, 126 op/s
Dec 06 07:03:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:03:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:37.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:03:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:37.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:37 compute-0 nova_compute[251992]: 2025-12-06 07:03:37.512 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:38 compute-0 ceph-mon[74339]: pgmap v1283: 305 pgs: 305 active+clean; 484 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.8 MiB/s wr, 126 op/s
Dec 06 07:03:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:03:38 compute-0 nova_compute[251992]: 2025-12-06 07:03:38.700 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 484 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 294 KiB/s rd, 2.7 MiB/s wr, 104 op/s
Dec 06 07:03:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:39.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:03:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:03:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:03:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:03:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:03:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:03:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 240f7aef-8943-49d0-bddd-df015f09dd17 does not exist
Dec 06 07:03:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d2a03e83-3948-4c7f-996e-5dab8a2a8a3a does not exist
Dec 06 07:03:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 29194d9c-1ad1-4bb2-8933-f32ef22e8c10 does not exist
Dec 06 07:03:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:03:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:03:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:03:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:03:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:03:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:03:39 compute-0 sudo[271099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:39 compute-0 sudo[271099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.371 251996 DEBUG oslo_concurrency.lockutils [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "b2d26234-5d5c-402f-85cf-d826d2dbae79" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.371 251996 DEBUG oslo_concurrency.lockutils [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.371 251996 DEBUG oslo_concurrency.lockutils [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.372 251996 DEBUG oslo_concurrency.lockutils [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.372 251996 DEBUG oslo_concurrency.lockutils [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.373 251996 INFO nova.compute.manager [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Terminating instance
Dec 06 07:03:39 compute-0 sudo[271099]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.374 251996 DEBUG nova.compute.manager [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:03:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:39.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:39 compute-0 kernel: tap2adad6cf-78 (unregistering): left promiscuous mode
Dec 06 07:03:39 compute-0 NetworkManager[48965]: <info>  [1765004619.4304] device (tap2adad6cf-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:03:39 compute-0 sudo[271124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:03:39 compute-0 sudo[271124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:39 compute-0 sudo[271124]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:39 compute-0 ovn_controller[147168]: 2025-12-06T07:03:39Z|00057|binding|INFO|Releasing lport 2adad6cf-78fd-4f03-b073-1df1a5fbf944 from this chassis (sb_readonly=0)
Dec 06 07:03:39 compute-0 ovn_controller[147168]: 2025-12-06T07:03:39Z|00058|binding|INFO|Setting lport 2adad6cf-78fd-4f03-b073-1df1a5fbf944 down in Southbound
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.438 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:39 compute-0 ovn_controller[147168]: 2025-12-06T07:03:39Z|00059|binding|INFO|Removing iface tap2adad6cf-78 ovn-installed in OVS
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.440 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.445 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:d4:cf 10.100.0.13'], port_security=['fa:16:3e:83:d4:cf 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'b2d26234-5d5c-402f-85cf-d826d2dbae79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9451d867-0aba-464d-b4d9-f947b887e903', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b6179a8b65c2484eb7ca1e068d93a58c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b3eef2d-12bb-4dc8-aa99-2a680fa42d41', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=294a4822-9a42-4d06-8976-2cf65d54c6f2, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=2adad6cf-78fd-4f03-b073-1df1a5fbf944) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.447 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 2adad6cf-78fd-4f03-b073-1df1a5fbf944 in datapath 9451d867-0aba-464d-b4d9-f947b887e903 unbound from our chassis
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.448 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9451d867-0aba-464d-b4d9-f947b887e903, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.451 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d01f1d44-a2d4-4326-8dac-07932955c1e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.451 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 namespace which is not needed anymore
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.461 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:39 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000016.scope: Deactivated successfully.
Dec 06 07:03:39 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000016.scope: Consumed 16.034s CPU time.
Dec 06 07:03:39 compute-0 systemd-machined[212986]: Machine qemu-11-instance-00000016 terminated.
Dec 06 07:03:39 compute-0 sudo[271151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:39 compute-0 sudo[271151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:39 compute-0 sudo[271151]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:39 compute-0 sudo[271193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:03:39 compute-0 sudo[271193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:39 compute-0 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[270643]: [NOTICE]   (270647) : haproxy version is 2.8.14-c23fe91
Dec 06 07:03:39 compute-0 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[270643]: [NOTICE]   (270647) : path to executable is /usr/sbin/haproxy
Dec 06 07:03:39 compute-0 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[270643]: [WARNING]  (270647) : Exiting Master process...
Dec 06 07:03:39 compute-0 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[270643]: [ALERT]    (270647) : Current worker (270649) exited with code 143 (Terminated)
Dec 06 07:03:39 compute-0 neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903[270643]: [WARNING]  (270647) : All workers exited. Exiting... (0)
Dec 06 07:03:39 compute-0 systemd[1]: libpod-6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708.scope: Deactivated successfully.
Dec 06 07:03:39 compute-0 podman[271221]: 2025-12-06 07:03:39.613496484 +0000 UTC m=+0.050733829 container died 6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.613 251996 INFO nova.virt.libvirt.driver [-] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Instance destroyed successfully.
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.614 251996 DEBUG nova.objects.instance [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lazy-loading 'resources' on Instance uuid b2d26234-5d5c-402f-85cf-d826d2dbae79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.635 251996 DEBUG nova.virt.libvirt.vif [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:02:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-762315562',display_name='tempest-ServersAdminTestJSON-server-762315562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-762315562',id=22,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:02:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b6179a8b65c2484eb7ca1e068d93a58c',ramdisk_id='',reservation_id='r-ayig0b0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1902776367',owner_user_name='tempest-ServersAdminTestJSON-1902776367-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:02:45Z,user_data=None,user_id='a3cae056210a400fa5e3495fe827d29a',uuid=b2d26234-5d5c-402f-85cf-d826d2dbae79,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.636 251996 DEBUG nova.network.os_vif_util [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converting VIF {"id": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "address": "fa:16:3e:83:d4:cf", "network": {"id": "9451d867-0aba-464d-b4d9-f947b887e903", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-291936370-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b6179a8b65c2484eb7ca1e068d93a58c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2adad6cf-78", "ovs_interfaceid": "2adad6cf-78fd-4f03-b073-1df1a5fbf944", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.637 251996 DEBUG nova.network.os_vif_util [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:d4:cf,bridge_name='br-int',has_traffic_filtering=True,id=2adad6cf-78fd-4f03-b073-1df1a5fbf944,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2adad6cf-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.638 251996 DEBUG os_vif [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:d4:cf,bridge_name='br-int',has_traffic_filtering=True,id=2adad6cf-78fd-4f03-b073-1df1a5fbf944,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2adad6cf-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.640 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.641 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2adad6cf-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.643 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708-userdata-shm.mount: Deactivated successfully.
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.644 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.648 251996 INFO os_vif [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:d4:cf,bridge_name='br-int',has_traffic_filtering=True,id=2adad6cf-78fd-4f03-b073-1df1a5fbf944,network=Network(9451d867-0aba-464d-b4d9-f947b887e903),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2adad6cf-78')
Dec 06 07:03:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f963743b3615247302df87ed22952c6085f3c98941f713becbd6c2919e773839-merged.mount: Deactivated successfully.
Dec 06 07:03:39 compute-0 podman[271221]: 2025-12-06 07:03:39.661846384 +0000 UTC m=+0.099083719 container cleanup 6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:03:39 compute-0 systemd[1]: libpod-conmon-6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708.scope: Deactivated successfully.
Dec 06 07:03:39 compute-0 podman[271281]: 2025-12-06 07:03:39.730717915 +0000 UTC m=+0.044283889 container remove 6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.747 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9487c0c2-08e7-4a7d-9988-4377f5f38ceb]: (4, ('Sat Dec  6 07:03:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 (6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708)\n6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708\nSat Dec  6 07:03:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 (6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708)\n6d5c40a3da194db9d5c2018518548ae969e94ed88b493bd1cb6965800c64b708\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.749 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa5ac61-8b3a-49bc-b959-47c3092152e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.750 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9451d867-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.753 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:39 compute-0 kernel: tap9451d867-00: left promiscuous mode
Dec 06 07:03:39 compute-0 nova_compute[251992]: 2025-12-06 07:03:39.769 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.773 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[653644f9-25cd-486f-8b91-9f2dc179c446]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.789 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d0114e93-c658-4e86-adea-bcbd6ec27bc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.790 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c4eaa979-4613-4ed4-ab11-a9ce9afe91a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.809 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f405b253-8c67-471a-8db7-92d3f0dbfc34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 483783, 'reachable_time': 32751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271320, 'error': None, 'target': 'ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.813 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9451d867-0aba-464d-b4d9-f947b887e903 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:03:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:39.813 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[e461b551-b948-4bc2-a59c-936c9e2f5e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d9451d867\x2d0aba\x2d464d\x2db4d9\x2df947b887e903.mount: Deactivated successfully.
Dec 06 07:03:39 compute-0 podman[271340]: 2025-12-06 07:03:39.940979747 +0000 UTC m=+0.045175334 container create 41411115f2299a09f46a1af1f937ef2921ba6c9341dea4aafd9fee9f1e816b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:03:39 compute-0 systemd[1]: Started libpod-conmon-41411115f2299a09f46a1af1f937ef2921ba6c9341dea4aafd9fee9f1e816b32.scope.
Dec 06 07:03:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:03:40 compute-0 podman[271340]: 2025-12-06 07:03:40.014013403 +0000 UTC m=+0.118208900 container init 41411115f2299a09f46a1af1f937ef2921ba6c9341dea4aafd9fee9f1e816b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:03:40 compute-0 podman[271340]: 2025-12-06 07:03:39.920495399 +0000 UTC m=+0.024690906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:03:40 compute-0 podman[271340]: 2025-12-06 07:03:40.02220534 +0000 UTC m=+0.126400827 container start 41411115f2299a09f46a1af1f937ef2921ba6c9341dea4aafd9fee9f1e816b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Dec 06 07:03:40 compute-0 podman[271340]: 2025-12-06 07:03:40.026091138 +0000 UTC m=+0.130286625 container attach 41411115f2299a09f46a1af1f937ef2921ba6c9341dea4aafd9fee9f1e816b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:03:40 compute-0 hardcore_heisenberg[271357]: 167 167
Dec 06 07:03:40 compute-0 systemd[1]: libpod-41411115f2299a09f46a1af1f937ef2921ba6c9341dea4aafd9fee9f1e816b32.scope: Deactivated successfully.
Dec 06 07:03:40 compute-0 podman[271340]: 2025-12-06 07:03:40.030417188 +0000 UTC m=+0.134612675 container died 41411115f2299a09f46a1af1f937ef2921ba6c9341dea4aafd9fee9f1e816b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:03:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfa2d18c027616134d182afcfb81190a458dce51fd5aef58b60faf32f457d418-merged.mount: Deactivated successfully.
Dec 06 07:03:40 compute-0 podman[271340]: 2025-12-06 07:03:40.069081521 +0000 UTC m=+0.173277008 container remove 41411115f2299a09f46a1af1f937ef2921ba6c9341dea4aafd9fee9f1e816b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 07:03:40 compute-0 systemd[1]: libpod-conmon-41411115f2299a09f46a1af1f937ef2921ba6c9341dea4aafd9fee9f1e816b32.scope: Deactivated successfully.
Dec 06 07:03:40 compute-0 nova_compute[251992]: 2025-12-06 07:03:40.078 251996 INFO nova.virt.libvirt.driver [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Deleting instance files /var/lib/nova/instances/b2d26234-5d5c-402f-85cf-d826d2dbae79_del
Dec 06 07:03:40 compute-0 nova_compute[251992]: 2025-12-06 07:03:40.079 251996 INFO nova.virt.libvirt.driver [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Deletion of /var/lib/nova/instances/b2d26234-5d5c-402f-85cf-d826d2dbae79_del complete
Dec 06 07:03:40 compute-0 nova_compute[251992]: 2025-12-06 07:03:40.130 251996 INFO nova.compute.manager [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Took 0.76 seconds to destroy the instance on the hypervisor.
Dec 06 07:03:40 compute-0 nova_compute[251992]: 2025-12-06 07:03:40.131 251996 DEBUG oslo.service.loopingcall [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:03:40 compute-0 nova_compute[251992]: 2025-12-06 07:03:40.131 251996 DEBUG nova.compute.manager [-] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:03:40 compute-0 nova_compute[251992]: 2025-12-06 07:03:40.131 251996 DEBUG nova.network.neutron [-] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:03:40 compute-0 podman[271380]: 2025-12-06 07:03:40.228538584 +0000 UTC m=+0.044066974 container create 936b8c65569d4e42a3b982500ed3a2db7f1d8f16b60c1101c62832693a249a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:03:40 compute-0 systemd[1]: Started libpod-conmon-936b8c65569d4e42a3b982500ed3a2db7f1d8f16b60c1101c62832693a249a7b.scope.
Dec 06 07:03:40 compute-0 ceph-mon[74339]: pgmap v1284: 305 pgs: 305 active+clean; 484 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 294 KiB/s rd, 2.7 MiB/s wr, 104 op/s
Dec 06 07:03:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:03:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:03:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:03:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:03:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:03:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214f34120ccd36507ef4a44012a28b1ece08e022f0c884b5fca442d2bada34e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214f34120ccd36507ef4a44012a28b1ece08e022f0c884b5fca442d2bada34e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214f34120ccd36507ef4a44012a28b1ece08e022f0c884b5fca442d2bada34e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214f34120ccd36507ef4a44012a28b1ece08e022f0c884b5fca442d2bada34e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/214f34120ccd36507ef4a44012a28b1ece08e022f0c884b5fca442d2bada34e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:40 compute-0 podman[271380]: 2025-12-06 07:03:40.212837418 +0000 UTC m=+0.028365828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:03:40 compute-0 podman[271380]: 2025-12-06 07:03:40.316497423 +0000 UTC m=+0.132025823 container init 936b8c65569d4e42a3b982500ed3a2db7f1d8f16b60c1101c62832693a249a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:03:40 compute-0 podman[271380]: 2025-12-06 07:03:40.326063248 +0000 UTC m=+0.141591638 container start 936b8c65569d4e42a3b982500ed3a2db7f1d8f16b60c1101c62832693a249a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:03:40 compute-0 podman[271380]: 2025-12-06 07:03:40.329215015 +0000 UTC m=+0.144743405 container attach 936b8c65569d4e42a3b982500ed3a2db7f1d8f16b60c1101c62832693a249a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:03:40 compute-0 nova_compute[251992]: 2025-12-06 07:03:40.558 251996 DEBUG nova.compute.manager [req-f33a2ba8-af1c-4c3f-84a3-f89c17d1ab01 req-ea807ce1-5913-4690-be5d-c13ee386afd3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Received event network-vif-unplugged-2adad6cf-78fd-4f03-b073-1df1a5fbf944 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:40 compute-0 nova_compute[251992]: 2025-12-06 07:03:40.559 251996 DEBUG oslo_concurrency.lockutils [req-f33a2ba8-af1c-4c3f-84a3-f89c17d1ab01 req-ea807ce1-5913-4690-be5d-c13ee386afd3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:40 compute-0 nova_compute[251992]: 2025-12-06 07:03:40.560 251996 DEBUG oslo_concurrency.lockutils [req-f33a2ba8-af1c-4c3f-84a3-f89c17d1ab01 req-ea807ce1-5913-4690-be5d-c13ee386afd3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:40 compute-0 nova_compute[251992]: 2025-12-06 07:03:40.560 251996 DEBUG oslo_concurrency.lockutils [req-f33a2ba8-af1c-4c3f-84a3-f89c17d1ab01 req-ea807ce1-5913-4690-be5d-c13ee386afd3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:40 compute-0 nova_compute[251992]: 2025-12-06 07:03:40.560 251996 DEBUG nova.compute.manager [req-f33a2ba8-af1c-4c3f-84a3-f89c17d1ab01 req-ea807ce1-5913-4690-be5d-c13ee386afd3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] No waiting events found dispatching network-vif-unplugged-2adad6cf-78fd-4f03-b073-1df1a5fbf944 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:40 compute-0 nova_compute[251992]: 2025-12-06 07:03:40.561 251996 DEBUG nova.compute.manager [req-f33a2ba8-af1c-4c3f-84a3-f89c17d1ab01 req-ea807ce1-5913-4690-be5d-c13ee386afd3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Received event network-vif-unplugged-2adad6cf-78fd-4f03-b073-1df1a5fbf944 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:03:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 423 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.7 MiB/s wr, 145 op/s
Dec 06 07:03:41 compute-0 eloquent_leakey[271396]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:03:41 compute-0 eloquent_leakey[271396]: --> relative data size: 1.0
Dec 06 07:03:41 compute-0 eloquent_leakey[271396]: --> All data devices are unavailable
Dec 06 07:03:41 compute-0 systemd[1]: libpod-936b8c65569d4e42a3b982500ed3a2db7f1d8f16b60c1101c62832693a249a7b.scope: Deactivated successfully.
Dec 06 07:03:41 compute-0 podman[271380]: 2025-12-06 07:03:41.155613767 +0000 UTC m=+0.971142157 container died 936b8c65569d4e42a3b982500ed3a2db7f1d8f16b60c1101c62832693a249a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-214f34120ccd36507ef4a44012a28b1ece08e022f0c884b5fca442d2bada34e8-merged.mount: Deactivated successfully.
Dec 06 07:03:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:41.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:41 compute-0 podman[271380]: 2025-12-06 07:03:41.213771571 +0000 UTC m=+1.029299961 container remove 936b8c65569d4e42a3b982500ed3a2db7f1d8f16b60c1101c62832693a249a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_leakey, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:03:41 compute-0 systemd[1]: libpod-conmon-936b8c65569d4e42a3b982500ed3a2db7f1d8f16b60c1101c62832693a249a7b.scope: Deactivated successfully.
Dec 06 07:03:41 compute-0 sudo[271193]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:41 compute-0 sudo[271424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:41 compute-0 sudo[271424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:41 compute-0 sudo[271424]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:41 compute-0 sudo[271449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:03:41 compute-0 sudo[271449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:41 compute-0 sudo[271449]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:03:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:41.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:03:41 compute-0 sudo[271474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:41 compute-0 sudo[271474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:41 compute-0 sudo[271474]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:41 compute-0 sudo[271499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:03:41 compute-0 sudo[271499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:41 compute-0 nova_compute[251992]: 2025-12-06 07:03:41.676 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:41 compute-0 podman[271565]: 2025-12-06 07:03:41.757071491 +0000 UTC m=+0.036507694 container create e12eaef9d8f6bbde3bcc549f40c103856e94e84b00e14b07c07ddaf1aa2ac885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 07:03:41 compute-0 systemd[1]: Started libpod-conmon-e12eaef9d8f6bbde3bcc549f40c103856e94e84b00e14b07c07ddaf1aa2ac885.scope.
Dec 06 07:03:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:03:41 compute-0 podman[271565]: 2025-12-06 07:03:41.828483462 +0000 UTC m=+0.107919685 container init e12eaef9d8f6bbde3bcc549f40c103856e94e84b00e14b07c07ddaf1aa2ac885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chebyshev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:03:41 compute-0 podman[271565]: 2025-12-06 07:03:41.835772564 +0000 UTC m=+0.115208767 container start e12eaef9d8f6bbde3bcc549f40c103856e94e84b00e14b07c07ddaf1aa2ac885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:03:41 compute-0 podman[271565]: 2025-12-06 07:03:41.741761527 +0000 UTC m=+0.021197750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:03:41 compute-0 podman[271565]: 2025-12-06 07:03:41.83920536 +0000 UTC m=+0.118641563 container attach e12eaef9d8f6bbde3bcc549f40c103856e94e84b00e14b07c07ddaf1aa2ac885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:03:41 compute-0 silly_chebyshev[271581]: 167 167
Dec 06 07:03:41 compute-0 podman[271565]: 2025-12-06 07:03:41.841203444 +0000 UTC m=+0.120639647 container died e12eaef9d8f6bbde3bcc549f40c103856e94e84b00e14b07c07ddaf1aa2ac885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 07:03:41 compute-0 systemd[1]: libpod-e12eaef9d8f6bbde3bcc549f40c103856e94e84b00e14b07c07ddaf1aa2ac885.scope: Deactivated successfully.
Dec 06 07:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-27dfe559384e206ed7c79443b8cd2d25112216cf389baa30508e8ca84ec1dd0e-merged.mount: Deactivated successfully.
Dec 06 07:03:41 compute-0 podman[271565]: 2025-12-06 07:03:41.876468573 +0000 UTC m=+0.155904776 container remove e12eaef9d8f6bbde3bcc549f40c103856e94e84b00e14b07c07ddaf1aa2ac885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chebyshev, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Dec 06 07:03:41 compute-0 systemd[1]: libpod-conmon-e12eaef9d8f6bbde3bcc549f40c103856e94e84b00e14b07c07ddaf1aa2ac885.scope: Deactivated successfully.
Dec 06 07:03:42 compute-0 podman[271606]: 2025-12-06 07:03:42.026956967 +0000 UTC m=+0.039390944 container create 8d1f7915d19f0637dc26fa4b8c23e71bb962b75a76e2e4cd3518e21f143f496e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:03:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:42 compute-0 systemd[1]: Started libpod-conmon-8d1f7915d19f0637dc26fa4b8c23e71bb962b75a76e2e4cd3518e21f143f496e.scope.
Dec 06 07:03:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7feaf342a8adac0c7962718e933330a1bb2760a812b1271d20f68520fabd4e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7feaf342a8adac0c7962718e933330a1bb2760a812b1271d20f68520fabd4e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7feaf342a8adac0c7962718e933330a1bb2760a812b1271d20f68520fabd4e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7feaf342a8adac0c7962718e933330a1bb2760a812b1271d20f68520fabd4e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:42 compute-0 podman[271606]: 2025-12-06 07:03:42.011084937 +0000 UTC m=+0.023518944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:03:42 compute-0 podman[271606]: 2025-12-06 07:03:42.115379069 +0000 UTC m=+0.127813056 container init 8d1f7915d19f0637dc26fa4b8c23e71bb962b75a76e2e4cd3518e21f143f496e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ishizaka, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:03:42 compute-0 podman[271606]: 2025-12-06 07:03:42.122766195 +0000 UTC m=+0.135200192 container start 8d1f7915d19f0637dc26fa4b8c23e71bb962b75a76e2e4cd3518e21f143f496e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 07:03:42 compute-0 podman[271606]: 2025-12-06 07:03:42.127257779 +0000 UTC m=+0.139691776 container attach 8d1f7915d19f0637dc26fa4b8c23e71bb962b75a76e2e4cd3518e21f143f496e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 07:03:42 compute-0 ceph-mon[74339]: pgmap v1285: 305 pgs: 305 active+clean; 423 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.7 MiB/s wr, 145 op/s
Dec 06 07:03:42 compute-0 nova_compute[251992]: 2025-12-06 07:03:42.514 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]: {
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:     "0": [
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:         {
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             "devices": [
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "/dev/loop3"
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             ],
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             "lv_name": "ceph_lv0",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             "lv_size": "7511998464",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             "name": "ceph_lv0",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             "tags": {
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "ceph.cluster_name": "ceph",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "ceph.crush_device_class": "",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "ceph.encrypted": "0",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "ceph.osd_id": "0",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "ceph.type": "block",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:                 "ceph.vdo": "0"
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             },
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             "type": "block",
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:             "vg_name": "ceph_vg0"
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:         }
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]:     ]
Dec 06 07:03:42 compute-0 vigorous_ishizaka[271623]: }
Dec 06 07:03:42 compute-0 systemd[1]: libpod-8d1f7915d19f0637dc26fa4b8c23e71bb962b75a76e2e4cd3518e21f143f496e.scope: Deactivated successfully.
Dec 06 07:03:42 compute-0 podman[271606]: 2025-12-06 07:03:42.913669292 +0000 UTC m=+0.926103269 container died 8d1f7915d19f0637dc26fa4b8c23e71bb962b75a76e2e4cd3518e21f143f496e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 07:03:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7feaf342a8adac0c7962718e933330a1bb2760a812b1271d20f68520fabd4e7-merged.mount: Deactivated successfully.
Dec 06 07:03:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:03:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:03:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:03:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:03:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:03:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:03:42 compute-0 podman[271606]: 2025-12-06 07:03:42.966123547 +0000 UTC m=+0.978557524 container remove 8d1f7915d19f0637dc26fa4b8c23e71bb962b75a76e2e4cd3518e21f143f496e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 07:03:42 compute-0 systemd[1]: libpod-conmon-8d1f7915d19f0637dc26fa4b8c23e71bb962b75a76e2e4cd3518e21f143f496e.scope: Deactivated successfully.
Dec 06 07:03:42 compute-0 sudo[271499]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 405 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Dec 06 07:03:43 compute-0 sudo[271647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:43 compute-0 sudo[271647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:43 compute-0 sudo[271647]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:43 compute-0 sudo[271672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:03:43 compute-0 sudo[271672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:43 compute-0 sudo[271672]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:43 compute-0 sudo[271697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:43 compute-0 sudo[271697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:43 compute-0 sudo[271697]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:03:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:43.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:03:43 compute-0 sudo[271722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:03:43 compute-0 sudo[271722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:03:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:43.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:03:43 compute-0 podman[271787]: 2025-12-06 07:03:43.499753808 +0000 UTC m=+0.036272217 container create df7f86f0d43841d6dc2cd15f98fd2c5c8366fbee17dbe44edc687a2d48bc4a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 07:03:43 compute-0 systemd[1]: Started libpod-conmon-df7f86f0d43841d6dc2cd15f98fd2c5c8366fbee17dbe44edc687a2d48bc4a34.scope.
Dec 06 07:03:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:03:43 compute-0 podman[271787]: 2025-12-06 07:03:43.484786703 +0000 UTC m=+0.021305132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:03:43 compute-0 nova_compute[251992]: 2025-12-06 07:03:43.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:43 compute-0 nova_compute[251992]: 2025-12-06 07:03:43.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:43 compute-0 nova_compute[251992]: 2025-12-06 07:03:43.688 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:43 compute-0 nova_compute[251992]: 2025-12-06 07:03:43.688 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:43 compute-0 nova_compute[251992]: 2025-12-06 07:03:43.689 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:43 compute-0 nova_compute[251992]: 2025-12-06 07:03:43.689 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:03:43 compute-0 nova_compute[251992]: 2025-12-06 07:03:43.689 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:43 compute-0 nova_compute[251992]: 2025-12-06 07:03:43.802 251996 DEBUG nova.network.neutron [-] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:03:43 compute-0 nova_compute[251992]: 2025-12-06 07:03:43.832 251996 INFO nova.compute.manager [-] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Took 3.70 seconds to deallocate network for instance.
Dec 06 07:03:43 compute-0 podman[271787]: 2025-12-06 07:03:43.884078828 +0000 UTC m=+0.420597257 container init df7f86f0d43841d6dc2cd15f98fd2c5c8366fbee17dbe44edc687a2d48bc4a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 07:03:43 compute-0 podman[271787]: 2025-12-06 07:03:43.8916938 +0000 UTC m=+0.428212209 container start df7f86f0d43841d6dc2cd15f98fd2c5c8366fbee17dbe44edc687a2d48bc4a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:03:43 compute-0 podman[271787]: 2025-12-06 07:03:43.89565637 +0000 UTC m=+0.432174809 container attach df7f86f0d43841d6dc2cd15f98fd2c5c8366fbee17dbe44edc687a2d48bc4a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:03:43 compute-0 systemd[1]: libpod-df7f86f0d43841d6dc2cd15f98fd2c5c8366fbee17dbe44edc687a2d48bc4a34.scope: Deactivated successfully.
Dec 06 07:03:43 compute-0 unruffled_knuth[271803]: 167 167
Dec 06 07:03:43 compute-0 conmon[271803]: conmon df7f86f0d43841d6dc2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df7f86f0d43841d6dc2cd15f98fd2c5c8366fbee17dbe44edc687a2d48bc4a34.scope/container/memory.events
Dec 06 07:03:43 compute-0 podman[271787]: 2025-12-06 07:03:43.8992499 +0000 UTC m=+0.435768309 container died df7f86f0d43841d6dc2cd15f98fd2c5c8366fbee17dbe44edc687a2d48bc4a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:03:43 compute-0 nova_compute[251992]: 2025-12-06 07:03:43.903 251996 DEBUG oslo_concurrency.lockutils [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:43 compute-0 nova_compute[251992]: 2025-12-06 07:03:43.904 251996 DEBUG oslo_concurrency.lockutils [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0c2b3dfeb02d11ecb709e29103305bfa07243b37422621e16c6b5bf2ca1bf8b-merged.mount: Deactivated successfully.
Dec 06 07:03:43 compute-0 podman[271787]: 2025-12-06 07:03:43.936638636 +0000 UTC m=+0.473157045 container remove df7f86f0d43841d6dc2cd15f98fd2c5c8366fbee17dbe44edc687a2d48bc4a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_knuth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 07:03:43 compute-0 systemd[1]: libpod-conmon-df7f86f0d43841d6dc2cd15f98fd2c5c8366fbee17dbe44edc687a2d48bc4a34.scope: Deactivated successfully.
Dec 06 07:03:44 compute-0 podman[271847]: 2025-12-06 07:03:44.084627721 +0000 UTC m=+0.038061837 container create 9121a5f30f048cab325b94ffb4674e3d2376f04c298c4b70fb868aa25487fd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_johnson, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 07:03:44 compute-0 systemd[1]: Started libpod-conmon-9121a5f30f048cab325b94ffb4674e3d2376f04c298c4b70fb868aa25487fd1a.scope.
Dec 06 07:03:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:03:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847334170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.146 251996 DEBUG nova.scheduler.client.report [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.154 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:44 compute-0 podman[271847]: 2025-12-06 07:03:44.069635716 +0000 UTC m=+0.023069862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:03:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8873187619cf980fceec6bc91d8709aa0d561aa7ca56c3c679cac04937fa8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8873187619cf980fceec6bc91d8709aa0d561aa7ca56c3c679cac04937fa8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8873187619cf980fceec6bc91d8709aa0d561aa7ca56c3c679cac04937fa8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8873187619cf980fceec6bc91d8709aa0d561aa7ca56c3c679cac04937fa8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:44 compute-0 podman[271847]: 2025-12-06 07:03:44.189983364 +0000 UTC m=+0.143417510 container init 9121a5f30f048cab325b94ffb4674e3d2376f04c298c4b70fb868aa25487fd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_johnson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 07:03:44 compute-0 podman[271847]: 2025-12-06 07:03:44.195928629 +0000 UTC m=+0.149362755 container start 9121a5f30f048cab325b94ffb4674e3d2376f04c298c4b70fb868aa25487fd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 07:03:44 compute-0 podman[271847]: 2025-12-06 07:03:44.200230938 +0000 UTC m=+0.153665054 container attach 9121a5f30f048cab325b94ffb4674e3d2376f04c298c4b70fb868aa25487fd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_johnson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.229 251996 DEBUG nova.scheduler.client.report [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.229 251996 DEBUG nova.compute.provider_tree [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.305 251996 DEBUG nova.scheduler.client.report [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.345 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.347 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4748MB free_disk=20.830738067626953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.347 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.350 251996 DEBUG nova.scheduler.client.report [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.414 251996 DEBUG oslo_concurrency.processutils [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.643 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.733 251996 DEBUG nova.compute.manager [req-58b3138d-caaf-42eb-bc85-f6ae140303fb req-6512b04d-8c70-4e38-b2e7-4084ac1c7c46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Received event network-vif-plugged-2adad6cf-78fd-4f03-b073-1df1a5fbf944 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.734 251996 DEBUG oslo_concurrency.lockutils [req-58b3138d-caaf-42eb-bc85-f6ae140303fb req-6512b04d-8c70-4e38-b2e7-4084ac1c7c46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.734 251996 DEBUG oslo_concurrency.lockutils [req-58b3138d-caaf-42eb-bc85-f6ae140303fb req-6512b04d-8c70-4e38-b2e7-4084ac1c7c46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.735 251996 DEBUG oslo_concurrency.lockutils [req-58b3138d-caaf-42eb-bc85-f6ae140303fb req-6512b04d-8c70-4e38-b2e7-4084ac1c7c46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.735 251996 DEBUG nova.compute.manager [req-58b3138d-caaf-42eb-bc85-f6ae140303fb req-6512b04d-8c70-4e38-b2e7-4084ac1c7c46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] No waiting events found dispatching network-vif-plugged-2adad6cf-78fd-4f03-b073-1df1a5fbf944 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.735 251996 WARNING nova.compute.manager [req-58b3138d-caaf-42eb-bc85-f6ae140303fb req-6512b04d-8c70-4e38-b2e7-4084ac1c7c46 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Received unexpected event network-vif-plugged-2adad6cf-78fd-4f03-b073-1df1a5fbf944 for instance with vm_state deleted and task_state None.
Dec 06 07:03:44 compute-0 ceph-mon[74339]: pgmap v1286: 305 pgs: 305 active+clean; 405 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Dec 06 07:03:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2765538678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/847334170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:03:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2531916716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.851 251996 DEBUG oslo_concurrency.processutils [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.859 251996 DEBUG nova.compute.provider_tree [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.878 251996 DEBUG nova.scheduler.client.report [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.904 251996 DEBUG oslo_concurrency.lockutils [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.907 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.942 251996 INFO nova.scheduler.client.report [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Deleted allocations for instance b2d26234-5d5c-402f-85cf-d826d2dbae79
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.991 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:03:44 compute-0 nova_compute[251992]: 2025-12-06 07:03:44.992 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:03:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 157 op/s
Dec 06 07:03:45 compute-0 nova_compute[251992]: 2025-12-06 07:03:45.024 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:45 compute-0 cool_johnson[271865]: {
Dec 06 07:03:45 compute-0 cool_johnson[271865]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:03:45 compute-0 cool_johnson[271865]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:03:45 compute-0 cool_johnson[271865]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:03:45 compute-0 cool_johnson[271865]:         "osd_id": 0,
Dec 06 07:03:45 compute-0 cool_johnson[271865]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:03:45 compute-0 cool_johnson[271865]:         "type": "bluestore"
Dec 06 07:03:45 compute-0 cool_johnson[271865]:     }
Dec 06 07:03:45 compute-0 cool_johnson[271865]: }
Dec 06 07:03:45 compute-0 nova_compute[251992]: 2025-12-06 07:03:45.052 251996 DEBUG oslo_concurrency.lockutils [None req-51db1c18-e385-40e9-892e-0a3a372c581b a3cae056210a400fa5e3495fe827d29a b6179a8b65c2484eb7ca1e068d93a58c - - default default] Lock "b2d26234-5d5c-402f-85cf-d826d2dbae79" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:45 compute-0 systemd[1]: libpod-9121a5f30f048cab325b94ffb4674e3d2376f04c298c4b70fb868aa25487fd1a.scope: Deactivated successfully.
Dec 06 07:03:45 compute-0 conmon[271865]: conmon 9121a5f30f048cab325b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9121a5f30f048cab325b94ffb4674e3d2376f04c298c4b70fb868aa25487fd1a.scope/container/memory.events
Dec 06 07:03:45 compute-0 podman[271847]: 2025-12-06 07:03:45.076518874 +0000 UTC m=+1.029953010 container died 9121a5f30f048cab325b94ffb4674e3d2376f04c298c4b70fb868aa25487fd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_johnson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 07:03:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c8873187619cf980fceec6bc91d8709aa0d561aa7ca56c3c679cac04937fa8e-merged.mount: Deactivated successfully.
Dec 06 07:03:45 compute-0 podman[271847]: 2025-12-06 07:03:45.14379134 +0000 UTC m=+1.097225456 container remove 9121a5f30f048cab325b94ffb4674e3d2376f04c298c4b70fb868aa25487fd1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_johnson, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:03:45 compute-0 systemd[1]: libpod-conmon-9121a5f30f048cab325b94ffb4674e3d2376f04c298c4b70fb868aa25487fd1a.scope: Deactivated successfully.
Dec 06 07:03:45 compute-0 sudo[271722]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:03:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:45.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:03:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:45.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:03:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796179177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:45 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 327bbb98-ce00-44f4-95d4-7659ebae9f11 does not exist
Dec 06 07:03:45 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8129afda-252e-4820-b52d-f0abc0caa30a does not exist
Dec 06 07:03:45 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0af75da2-aecc-4c9b-b040-5e069dc2ec54 does not exist
Dec 06 07:03:45 compute-0 nova_compute[251992]: 2025-12-06 07:03:45.464 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:45 compute-0 nova_compute[251992]: 2025-12-06 07:03:45.469 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:03:45 compute-0 nova_compute[251992]: 2025-12-06 07:03:45.482 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:03:45 compute-0 sudo[271947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:45 compute-0 sudo[271947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:45 compute-0 sudo[271947]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:45 compute-0 nova_compute[251992]: 2025-12-06 07:03:45.507 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:03:45 compute-0 nova_compute[251992]: 2025-12-06 07:03:45.508 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:45 compute-0 sudo[271972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:03:45 compute-0 sudo[271972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:45 compute-0 sudo[271972]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2531916716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3834391616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:03:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1796179177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:46 compute-0 nova_compute[251992]: 2025-12-06 07:03:46.503 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:46 compute-0 nova_compute[251992]: 2025-12-06 07:03:46.503 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:46 compute-0 nova_compute[251992]: 2025-12-06 07:03:46.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:46 compute-0 nova_compute[251992]: 2025-12-06 07:03:46.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:46 compute-0 nova_compute[251992]: 2025-12-06 07:03:46.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:03:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 127 op/s
Dec 06 07:03:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:47 compute-0 ceph-mon[74339]: pgmap v1287: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 157 op/s
Dec 06 07:03:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:03:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:47.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:03:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:47.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:47 compute-0 nova_compute[251992]: 2025-12-06 07:03:47.517 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:47 compute-0 nova_compute[251992]: 2025-12-06 07:03:47.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:47 compute-0 nova_compute[251992]: 2025-12-06 07:03:47.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:03:47 compute-0 nova_compute[251992]: 2025-12-06 07:03:47.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:03:47 compute-0 nova_compute[251992]: 2025-12-06 07:03:47.676 251996 DEBUG nova.compute.manager [req-95d0513a-3f3d-4a29-81a3-97a56ced95b6 req-18c916c6-d020-493c-b0ce-c51b858e1861 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Received event network-vif-deleted-2adad6cf-78fd-4f03-b073-1df1a5fbf944 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:47 compute-0 nova_compute[251992]: 2025-12-06 07:03:47.691 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:03:47 compute-0 nova_compute[251992]: 2025-12-06 07:03:47.691 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:03:48 compute-0 nova_compute[251992]: 2025-12-06 07:03:48.099 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:48 compute-0 nova_compute[251992]: 2025-12-06 07:03:48.099 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:48 compute-0 nova_compute[251992]: 2025-12-06 07:03:48.152 251996 DEBUG nova.compute.manager [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:03:48 compute-0 ceph-mon[74339]: pgmap v1288: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 127 op/s
Dec 06 07:03:48 compute-0 nova_compute[251992]: 2025-12-06 07:03:48.241 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:48 compute-0 nova_compute[251992]: 2025-12-06 07:03:48.241 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:48 compute-0 nova_compute[251992]: 2025-12-06 07:03:48.247 251996 DEBUG nova.virt.hardware [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:03:48 compute-0 nova_compute[251992]: 2025-12-06 07:03:48.248 251996 INFO nova.compute.claims [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:03:48 compute-0 nova_compute[251992]: 2025-12-06 07:03:48.629 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 99 op/s
Dec 06 07:03:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:03:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948676523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.146 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.151 251996 DEBUG nova.compute.provider_tree [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.165 251996 DEBUG nova.scheduler.client.report [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.184 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.185 251996 DEBUG nova.compute.manager [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:03:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:03:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:49.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.225 251996 DEBUG nova.compute.manager [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.226 251996 DEBUG nova.network.neutron [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:03:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3055950552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2535004301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/948676523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.248 251996 INFO nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.263 251996 DEBUG nova.compute.manager [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.379 251996 DEBUG nova.compute.manager [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.381 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.381 251996 INFO nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Creating image(s)
Dec 06 07:03:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:49.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.407 251996 DEBUG nova.storage.rbd_utils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] rbd image 76601abc-9380-4d0e-8360-39afb25adf0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.434 251996 DEBUG nova.storage.rbd_utils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] rbd image 76601abc-9380-4d0e-8360-39afb25adf0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.459 251996 DEBUG nova.storage.rbd_utils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] rbd image 76601abc-9380-4d0e-8360-39afb25adf0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.463 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.524 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.525 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.526 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.526 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.559 251996 DEBUG nova.storage.rbd_utils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] rbd image 76601abc-9380-4d0e-8360-39afb25adf0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.563 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 76601abc-9380-4d0e-8360-39afb25adf0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.646 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.835 251996 DEBUG nova.policy [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '756e3e1fa7e44042bdf37a6cdd877fac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9e86c61372e24db392d4a12ca71f7e00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:03:49 compute-0 nova_compute[251992]: 2025-12-06 07:03:49.969 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 76601abc-9380-4d0e-8360-39afb25adf0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:50 compute-0 nova_compute[251992]: 2025-12-06 07:03:50.048 251996 DEBUG nova.storage.rbd_utils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] resizing rbd image 76601abc-9380-4d0e-8360-39afb25adf0c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:03:50 compute-0 nova_compute[251992]: 2025-12-06 07:03:50.156 251996 DEBUG nova.objects.instance [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lazy-loading 'migration_context' on Instance uuid 76601abc-9380-4d0e-8360-39afb25adf0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:50 compute-0 nova_compute[251992]: 2025-12-06 07:03:50.170 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:03:50 compute-0 nova_compute[251992]: 2025-12-06 07:03:50.170 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Ensure instance console log exists: /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:03:50 compute-0 nova_compute[251992]: 2025-12-06 07:03:50.171 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:50 compute-0 nova_compute[251992]: 2025-12-06 07:03:50.171 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:50 compute-0 nova_compute[251992]: 2025-12-06 07:03:50.171 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:50 compute-0 ceph-mon[74339]: pgmap v1289: 305 pgs: 305 active+clean; 405 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 99 op/s
Dec 06 07:03:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/729169714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2473430784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 397 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 171 op/s
Dec 06 07:03:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:51.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:51.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:51 compute-0 podman[272188]: 2025-12-06 07:03:51.424714464 +0000 UTC m=+0.084263889 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 07:03:51 compute-0 nova_compute[251992]: 2025-12-06 07:03:51.788 251996 DEBUG nova.network.neutron [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Successfully updated port: 60375867-89f6-4607-b20c-d94ca837383e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:03:51 compute-0 nova_compute[251992]: 2025-12-06 07:03:51.819 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:03:51 compute-0 nova_compute[251992]: 2025-12-06 07:03:51.819 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquired lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:03:51 compute-0 nova_compute[251992]: 2025-12-06 07:03:51.820 251996 DEBUG nova.network.neutron [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:03:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/828642717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:52 compute-0 nova_compute[251992]: 2025-12-06 07:03:52.518 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:52 compute-0 nova_compute[251992]: 2025-12-06 07:03:52.536 251996 DEBUG nova.network.neutron [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:03:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 397 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.8 MiB/s wr, 147 op/s
Dec 06 07:03:53 compute-0 ceph-mon[74339]: pgmap v1290: 305 pgs: 305 active+clean; 397 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 171 op/s
Dec 06 07:03:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:53.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:53.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:53 compute-0 nova_compute[251992]: 2025-12-06 07:03:53.718 251996 DEBUG nova.compute.manager [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-changed-60375867-89f6-4607-b20c-d94ca837383e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:53 compute-0 nova_compute[251992]: 2025-12-06 07:03:53.719 251996 DEBUG nova.compute.manager [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Refreshing instance network info cache due to event network-changed-60375867-89f6-4607-b20c-d94ca837383e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:03:53 compute-0 nova_compute[251992]: 2025-12-06 07:03:53.719 251996 DEBUG oslo_concurrency.lockutils [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:03:54 compute-0 ceph-mon[74339]: pgmap v1291: 305 pgs: 305 active+clean; 397 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.8 MiB/s wr, 147 op/s
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.508 251996 DEBUG nova.network.neutron [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Updating instance_info_cache with network_info: [{"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.533 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Releasing lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.533 251996 DEBUG nova.compute.manager [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Instance network_info: |[{"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.534 251996 DEBUG oslo_concurrency.lockutils [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.534 251996 DEBUG nova.network.neutron [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Refreshing network info cache for port 60375867-89f6-4607-b20c-d94ca837383e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.537 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Start _get_guest_xml network_info=[{"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.541 251996 WARNING nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.548 251996 DEBUG nova.virt.libvirt.host [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.548 251996 DEBUG nova.virt.libvirt.host [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.552 251996 DEBUG nova.virt.libvirt.host [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.553 251996 DEBUG nova.virt.libvirt.host [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.555 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.555 251996 DEBUG nova.virt.hardware [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.556 251996 DEBUG nova.virt.hardware [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.556 251996 DEBUG nova.virt.hardware [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.556 251996 DEBUG nova.virt.hardware [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.556 251996 DEBUG nova.virt.hardware [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.556 251996 DEBUG nova.virt.hardware [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.557 251996 DEBUG nova.virt.hardware [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.557 251996 DEBUG nova.virt.hardware [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.557 251996 DEBUG nova.virt.hardware [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.557 251996 DEBUG nova.virt.hardware [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.558 251996 DEBUG nova.virt.hardware [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.561 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.612 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004619.6104403, b2d26234-5d5c-402f-85cf-d826d2dbae79 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.612 251996 INFO nova.compute.manager [-] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] VM Stopped (Lifecycle Event)
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.634 251996 DEBUG nova.compute.manager [None req-0d249948-c4da-4955-aabf-0d773bb8c909 - - - - - -] [instance: b2d26234-5d5c-402f-85cf-d826d2dbae79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:54 compute-0 nova_compute[251992]: 2025-12-06 07:03:54.648 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:03:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015818047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.011 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 346 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 386 KiB/s rd, 3.8 MiB/s wr, 165 op/s
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.049 251996 DEBUG nova.storage.rbd_utils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] rbd image 76601abc-9380-4d0e-8360-39afb25adf0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.053 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:55.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:55.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:03:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3527159582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.477 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.479 251996 DEBUG nova.virt.libvirt.vif [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:03:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1401760543',display_name='tempest-LiveMigrationTest-server-1401760543',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1401760543',id=24,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-5qbwc0k8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:03:49Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=76601abc-9380-4d0e-8360-39afb25adf0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.480 251996 DEBUG nova.network.os_vif_util [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Converting VIF {"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.481 251996 DEBUG nova.network.os_vif_util [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.482 251996 DEBUG nova.objects.instance [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 76601abc-9380-4d0e-8360-39afb25adf0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:03:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/761473893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3015818047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2320804594' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:03:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2320804594' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:03:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3510907800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.512 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:03:55 compute-0 nova_compute[251992]:   <uuid>76601abc-9380-4d0e-8360-39afb25adf0c</uuid>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   <name>instance-00000018</name>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <nova:name>tempest-LiveMigrationTest-server-1401760543</nova:name>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:03:54</nova:creationTime>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <nova:user uuid="756e3e1fa7e44042bdf37a6cdd877fac">tempest-LiveMigrationTest-854827502-project-member</nova:user>
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <nova:project uuid="9e86c61372e24db392d4a12ca71f7e00">tempest-LiveMigrationTest-854827502</nova:project>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <nova:port uuid="60375867-89f6-4607-b20c-d94ca837383e">
Dec 06 07:03:55 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <system>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <entry name="serial">76601abc-9380-4d0e-8360-39afb25adf0c</entry>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <entry name="uuid">76601abc-9380-4d0e-8360-39afb25adf0c</entry>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     </system>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   <os>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   </os>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   <features>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   </features>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/76601abc-9380-4d0e-8360-39afb25adf0c_disk">
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       </source>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/76601abc-9380-4d0e-8360-39afb25adf0c_disk.config">
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       </source>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:03:55 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:2e:4d:99"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <target dev="tap60375867-89"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c/console.log" append="off"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <video>
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     </video>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:03:55 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:03:55 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:03:55 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:03:55 compute-0 nova_compute[251992]: </domain>
Dec 06 07:03:55 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.513 251996 DEBUG nova.compute.manager [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Preparing to wait for external event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.514 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.514 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.514 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.515 251996 DEBUG nova.virt.libvirt.vif [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:03:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1401760543',display_name='tempest-LiveMigrationTest-server-1401760543',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1401760543',id=24,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-5qbwc0k8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:03:49Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=76601abc-9380-4d0e-8360-39afb25adf0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.515 251996 DEBUG nova.network.os_vif_util [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Converting VIF {"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.516 251996 DEBUG nova.network.os_vif_util [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.516 251996 DEBUG os_vif [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.517 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.517 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.518 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.521 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.522 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60375867-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.522 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60375867-89, col_values=(('external_ids', {'iface-id': '60375867-89f6-4607-b20c-d94ca837383e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:4d:99', 'vm-uuid': '76601abc-9380-4d0e-8360-39afb25adf0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.523 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:55 compute-0 NetworkManager[48965]: <info>  [1765004635.5250] manager: (tap60375867-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.526 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.530 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.532 251996 INFO os_vif [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89')
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.593 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.595 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.595 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] No VIF found with MAC fa:16:3e:2e:4d:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.596 251996 INFO nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Using config drive
Dec 06 07:03:55 compute-0 nova_compute[251992]: 2025-12-06 07:03:55.619 251996 DEBUG nova.storage.rbd_utils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] rbd image 76601abc-9380-4d0e-8360-39afb25adf0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:56 compute-0 nova_compute[251992]: 2025-12-06 07:03:56.028 251996 INFO nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Creating config drive at /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c/disk.config
Dec 06 07:03:56 compute-0 nova_compute[251992]: 2025-12-06 07:03:56.037 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu1ag9dbj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:56 compute-0 sudo[272299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:56 compute-0 sudo[272299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:56 compute-0 sudo[272299]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:56 compute-0 sudo[272327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:03:56 compute-0 sudo[272327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:03:56 compute-0 sudo[272327]: pam_unix(sudo:session): session closed for user root
Dec 06 07:03:56 compute-0 nova_compute[251992]: 2025-12-06 07:03:56.169 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu1ag9dbj" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:56 compute-0 nova_compute[251992]: 2025-12-06 07:03:56.198 251996 DEBUG nova.storage.rbd_utils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] rbd image 76601abc-9380-4d0e-8360-39afb25adf0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:03:56 compute-0 nova_compute[251992]: 2025-12-06 07:03:56.201 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c/disk.config 76601abc-9380-4d0e-8360-39afb25adf0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:03:56 compute-0 nova_compute[251992]: 2025-12-06 07:03:56.496 251996 DEBUG nova.network.neutron [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Updated VIF entry in instance network info cache for port 60375867-89f6-4607-b20c-d94ca837383e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:03:56 compute-0 nova_compute[251992]: 2025-12-06 07:03:56.497 251996 DEBUG nova.network.neutron [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Updating instance_info_cache with network_info: [{"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:03:56 compute-0 nova_compute[251992]: 2025-12-06 07:03:56.520 251996 DEBUG oslo_concurrency.lockutils [req-b2f108a3-d3d2-415a-af19-ed641f1ed219 req-32e84315-66b4-482d-add7-0a5510f55f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:03:56 compute-0 ceph-mon[74339]: pgmap v1292: 305 pgs: 305 active+clean; 346 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 386 KiB/s rd, 3.8 MiB/s wr, 165 op/s
Dec 06 07:03:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3527159582' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:03:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 246 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 401 KiB/s rd, 3.9 MiB/s wr, 176 op/s
Dec 06 07:03:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:03:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:57.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:57 compute-0 podman[272390]: 2025-12-06 07:03:57.395387515 +0000 UTC m=+0.051758427 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:03:57 compute-0 podman[272391]: 2025-12-06 07:03:57.405964207 +0000 UTC m=+0.056459226 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:03:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:03:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:57.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:03:57 compute-0 nova_compute[251992]: 2025-12-06 07:03:57.520 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:57 compute-0 nova_compute[251992]: 2025-12-06 07:03:57.897 251996 DEBUG oslo_concurrency.processutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c/disk.config 76601abc-9380-4d0e-8360-39afb25adf0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.696s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:03:57 compute-0 nova_compute[251992]: 2025-12-06 07:03:57.897 251996 INFO nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Deleting local config drive /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c/disk.config because it was imported into RBD.
Dec 06 07:03:57 compute-0 kernel: tap60375867-89: entered promiscuous mode
Dec 06 07:03:57 compute-0 ovn_controller[147168]: 2025-12-06T07:03:57Z|00060|binding|INFO|Claiming lport 60375867-89f6-4607-b20c-d94ca837383e for this chassis.
Dec 06 07:03:57 compute-0 ovn_controller[147168]: 2025-12-06T07:03:57Z|00061|binding|INFO|60375867-89f6-4607-b20c-d94ca837383e: Claiming fa:16:3e:2e:4d:99 10.100.0.6
Dec 06 07:03:57 compute-0 ovn_controller[147168]: 2025-12-06T07:03:57Z|00062|binding|INFO|Claiming lport 40dca971-8880-4c3a-a5fd-8d055d31de88 for this chassis.
Dec 06 07:03:57 compute-0 ovn_controller[147168]: 2025-12-06T07:03:57Z|00063|binding|INFO|40dca971-8880-4c3a-a5fd-8d055d31de88: Claiming fa:16:3e:7f:91:51 19.80.0.39
Dec 06 07:03:57 compute-0 NetworkManager[48965]: <info>  [1765004637.9607] manager: (tap60375867-89): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Dec 06 07:03:57 compute-0 nova_compute[251992]: 2025-12-06 07:03:57.959 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:57 compute-0 nova_compute[251992]: 2025-12-06 07:03:57.965 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:57.975 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:91:51 19.80.0.39'], port_security=['fa:16:3e:7f:91:51 19.80.0.39'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['60375867-89f6-4607-b20c-d94ca837383e'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-308547339', 'neutron:cidrs': '19.80.0.39/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-870f583a-cbfc-4c59-b592-cf3095306ec5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-308547339', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=8aebe1f8-cfb2-4cb1-a24a-2cc56e9a461b, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=40dca971-8880-4c3a-a5fd-8d055d31de88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:57.979 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:4d:99 10.100.0.6'], port_security=['fa:16:3e:2e:4d:99 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-2114195499', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '76601abc-9380-4d0e-8360-39afb25adf0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9238b9b5-08f5-4634-bd05-370e3192b201', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-2114195499', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db7e9816-53a6-4d9a-be4a-e5a8dcf8a64b, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=60375867-89f6-4607-b20c-d94ca837383e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:03:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:57.981 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 40dca971-8880-4c3a-a5fd-8d055d31de88 in datapath 870f583a-cbfc-4c59-b592-cf3095306ec5 bound to our chassis
Dec 06 07:03:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:57.984 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 870f583a-cbfc-4c59-b592-cf3095306ec5
Dec 06 07:03:57 compute-0 systemd-machined[212986]: New machine qemu-12-instance-00000018.
Dec 06 07:03:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:57.998 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[26d29dce-9955-4ed1-bb44-c4559aef34d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:57.999 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap870f583a-c1 in ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.002 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap870f583a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.002 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7bea235d-74a5-4cd8-9ee1-0bc7f1fa73f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.003 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ceb9cb-78fd-4487-8226-551af36dca4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.017 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[982a72da-ee41-4acf-888f-d8ab10a74a51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.042 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[31d9b151-64d7-491a-8654-91e1f7e86be3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000018.
Dec 06 07:03:58 compute-0 systemd-udevd[272447]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.066 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:58 compute-0 ovn_controller[147168]: 2025-12-06T07:03:58Z|00064|binding|INFO|Setting lport 60375867-89f6-4607-b20c-d94ca837383e ovn-installed in OVS
Dec 06 07:03:58 compute-0 ovn_controller[147168]: 2025-12-06T07:03:58Z|00065|binding|INFO|Setting lport 60375867-89f6-4607-b20c-d94ca837383e up in Southbound
Dec 06 07:03:58 compute-0 ovn_controller[147168]: 2025-12-06T07:03:58Z|00066|binding|INFO|Setting lport 40dca971-8880-4c3a-a5fd-8d055d31de88 up in Southbound
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.074 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:58 compute-0 NetworkManager[48965]: <info>  [1765004638.0786] device (tap60375867-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:03:58 compute-0 NetworkManager[48965]: <info>  [1765004638.0804] device (tap60375867-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.081 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec7fc6c-f7fe-459b-a3fb-520f1c1aa505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 NetworkManager[48965]: <info>  [1765004638.0878] manager: (tap870f583a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.086 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[299ed478-5c8c-4f17-a406-2f632ab06dbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.122 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[de64e281-d023-44c6-b5ab-02ec6065bee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.125 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c77a3554-49ee-422e-a83a-717328de85c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 NetworkManager[48965]: <info>  [1765004638.1440] device (tap870f583a-c0): carrier: link connected
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.149 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[624677d7-64e6-49b3-a888-12483e0f5960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.166 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[85ddc5bb-f241-4fd8-a0cf-1ff225715809]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap870f583a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:d9:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491073, 'reachable_time': 21622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272476, 'error': None, 'target': 'ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.181 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[411a11ce-a20c-40cb-bcb5-bbca69341fae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:d987'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491073, 'tstamp': 491073}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272477, 'error': None, 'target': 'ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.195 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[98c19bcd-7c6d-4380-a0b8-46f31d0a86f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap870f583a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:d9:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491073, 'reachable_time': 21622, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272478, 'error': None, 'target': 'ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.220 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[45a8973d-3a5a-479a-882f-505bc487edb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.274 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2cda0df7-a7dc-4882-b3a0-1678e51e53de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.276 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap870f583a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.276 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.277 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap870f583a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.278 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:58 compute-0 NetworkManager[48965]: <info>  [1765004638.2794] manager: (tap870f583a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Dec 06 07:03:58 compute-0 kernel: tap870f583a-c0: entered promiscuous mode
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.281 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.281 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap870f583a-c0, col_values=(('external_ids', {'iface-id': 'a37358dd-1cb9-4fbc-9a91-e10a5094a9ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.282 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:58 compute-0 ovn_controller[147168]: 2025-12-06T07:03:58Z|00067|binding|INFO|Releasing lport a37358dd-1cb9-4fbc-9a91-e10a5094a9ac from this chassis (sb_readonly=0)
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.296 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.297 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/870f583a-cbfc-4c59-b592-cf3095306ec5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/870f583a-cbfc-4c59-b592-cf3095306ec5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.298 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a9356f43-6e15-4d67-a815-adc203e8b269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.298 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-870f583a-cbfc-4c59-b592-cf3095306ec5
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/870f583a-cbfc-4c59-b592-cf3095306ec5.pid.haproxy
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 870f583a-cbfc-4c59-b592-cf3095306ec5
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.299 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5', 'env', 'PROCESS_TAG=haproxy-870f583a-cbfc-4c59-b592-cf3095306ec5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/870f583a-cbfc-4c59-b592-cf3095306ec5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.531 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004638.5312011, 76601abc-9380-4d0e-8360-39afb25adf0c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.532 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] VM Started (Lifecycle Event)
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.554 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.557 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004638.5322847, 76601abc-9380-4d0e-8360-39afb25adf0c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.558 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] VM Paused (Lifecycle Event)
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.645 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.648 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:03:58 compute-0 ceph-mon[74339]: pgmap v1293: 305 pgs: 305 active+clean; 246 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 401 KiB/s rd, 3.9 MiB/s wr, 176 op/s
Dec 06 07:03:58 compute-0 podman[272553]: 2025-12-06 07:03:58.662674826 +0000 UTC m=+0.047048067 container create 543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.673 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:03:58 compute-0 systemd[1]: Started libpod-conmon-543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13.scope.
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.714 251996 DEBUG nova.compute.manager [req-96b647b4-843e-4acd-9a48-5ee59385e2c4 req-be15aa82-707b-48d7-bd10-6561d54f39c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.715 251996 DEBUG oslo_concurrency.lockutils [req-96b647b4-843e-4acd-9a48-5ee59385e2c4 req-be15aa82-707b-48d7-bd10-6561d54f39c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.715 251996 DEBUG oslo_concurrency.lockutils [req-96b647b4-843e-4acd-9a48-5ee59385e2c4 req-be15aa82-707b-48d7-bd10-6561d54f39c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.715 251996 DEBUG oslo_concurrency.lockutils [req-96b647b4-843e-4acd-9a48-5ee59385e2c4 req-be15aa82-707b-48d7-bd10-6561d54f39c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.715 251996 DEBUG nova.compute.manager [req-96b647b4-843e-4acd-9a48-5ee59385e2c4 req-be15aa82-707b-48d7-bd10-6561d54f39c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Processing event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.716 251996 DEBUG nova.compute.manager [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.719 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004638.7193546, 76601abc-9380-4d0e-8360-39afb25adf0c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.720 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] VM Resumed (Lifecycle Event)
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.721 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.723 251996 INFO nova.virt.libvirt.driver [-] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Instance spawned successfully.
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.724 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:03:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:03:58 compute-0 podman[272553]: 2025-12-06 07:03:58.636701766 +0000 UTC m=+0.021075037 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:03:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f9d3d50c573a7271d624e791cd4ff7728fe76aba2e5ef747d5303352e3ec09/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.746 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:58 compute-0 podman[272553]: 2025-12-06 07:03:58.749488444 +0000 UTC m=+0.133861685 container init 543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.753 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.754 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:58 compute-0 podman[272553]: 2025-12-06 07:03:58.755020057 +0000 UTC m=+0.139393298 container start 543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.755 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.756 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.757 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.758 251996 DEBUG nova.virt.libvirt.driver [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.765 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:03:58 compute-0 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[272569]: [NOTICE]   (272573) : New worker (272575) forked
Dec 06 07:03:58 compute-0 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[272569]: [NOTICE]   (272573) : Loading success.
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.808 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 60375867-89f6-4607-b20c-d94ca837383e in datapath 9238b9b5-08f5-4634-bd05-370e3192b201 unbound from our chassis
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.811 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9238b9b5-08f5-4634-bd05-370e3192b201
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.821 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[54e8060d-b51c-4bed-91f5-a2139e4134a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.822 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9238b9b5-01 in ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.824 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9238b9b5-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.824 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[702ee7b2-4516-4879-88d9-a289f7d40a90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.825 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1936cc4f-7243-423c-9cb5-91ac6fa1ce67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.835 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[c22f5175-6764-450a-a5f6-515db0e3ac33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.848 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5256ccfe-7731-47a9-9695-4ca4ebe30b66]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.861 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.872 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e17b51-9a28-427f-a45d-6deec059430b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.877 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[59e786b4-fba1-4d4c-bb99-d5c612b67861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 NetworkManager[48965]: <info>  [1765004638.8780] manager: (tap9238b9b5-00): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Dec 06 07:03:58 compute-0 systemd-udevd[272458]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.904 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[76015d2e-4ba0-4461-b81f-57a046fc59a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.907 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[68c2a52d-66d8-4c6f-b294-b40aa9ee7379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 NetworkManager[48965]: <info>  [1765004638.9279] device (tap9238b9b5-00): carrier: link connected
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.932 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1391a414-76c7-481d-983b-c8e3b33899a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.949 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[117bce3d-d9ef-4f1c-bbaf-afffcb923f9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9238b9b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:4f:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491151, 'reachable_time': 30027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272594, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.974 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[73b8e122-ddb8-4afb-b9e3-228ff87f4e0a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:4ff3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491151, 'tstamp': 491151}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272595, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.979 251996 INFO nova.compute.manager [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Took 9.60 seconds to spawn the instance on the hypervisor.
Dec 06 07:03:58 compute-0 nova_compute[251992]: 2025-12-06 07:03:58.979 251996 DEBUG nova.compute.manager [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:03:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:58.996 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e834fa29-ff13-48de-9f64-a10c2335f10b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9238b9b5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:4f:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491151, 'reachable_time': 30027, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272596, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 246 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 401 KiB/s rd, 3.9 MiB/s wr, 176 op/s
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:59.027 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7e923936-91e5-443e-a4d8-9b2439223a51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:59 compute-0 nova_compute[251992]: 2025-12-06 07:03:59.078 251996 INFO nova.compute.manager [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Took 10.86 seconds to build instance.
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:59.085 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3b78d2c7-6e01-4c00-9e0c-6e850340cc36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:59.086 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9238b9b5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:59.087 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:59.087 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9238b9b5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:59 compute-0 nova_compute[251992]: 2025-12-06 07:03:59.089 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:59 compute-0 NetworkManager[48965]: <info>  [1765004639.0899] manager: (tap9238b9b5-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec 06 07:03:59 compute-0 kernel: tap9238b9b5-00: entered promiscuous mode
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:59.097 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9238b9b5-00, col_values=(('external_ids', {'iface-id': '5c223717-35ae-4662-bf3f-55f7a73b7a9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:03:59 compute-0 nova_compute[251992]: 2025-12-06 07:03:59.098 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:59 compute-0 ovn_controller[147168]: 2025-12-06T07:03:59Z|00068|binding|INFO|Releasing lport 5c223717-35ae-4662-bf3f-55f7a73b7a9a from this chassis (sb_readonly=0)
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:59.101 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9238b9b5-08f5-4634-bd05-370e3192b201.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9238b9b5-08f5-4634-bd05-370e3192b201.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:59.102 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef62170-69e7-48dd-93d3-b01a167d38b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:59.103 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-9238b9b5-08f5-4634-bd05-370e3192b201
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/9238b9b5-08f5-4634-bd05-370e3192b201.pid.haproxy
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 9238b9b5-08f5-4634-bd05-370e3192b201
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:03:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:03:59.104 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'env', 'PROCESS_TAG=haproxy-9238b9b5-08f5-4634-bd05-370e3192b201', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9238b9b5-08f5-4634-bd05-370e3192b201.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:03:59 compute-0 nova_compute[251992]: 2025-12-06 07:03:59.114 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:03:59 compute-0 nova_compute[251992]: 2025-12-06 07:03:59.124 251996 DEBUG oslo_concurrency.lockutils [None req-9cfcfcbc-6d21-4123-af58-ea437683fd35 756e3e1fa7e44042bdf37a6cdd877fac 9e86c61372e24db392d4a12ca71f7e00 - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:03:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:03:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:03:59.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:03:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:03:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:03:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:03:59.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:03:59 compute-0 podman[272628]: 2025-12-06 07:03:59.487407741 +0000 UTC m=+0.051003725 container create 1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:03:59 compute-0 systemd[1]: Started libpod-conmon-1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c.scope.
Dec 06 07:03:59 compute-0 podman[272628]: 2025-12-06 07:03:59.459197189 +0000 UTC m=+0.022793193 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:03:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:03:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d829a5457c38b829506a1a06139323d63bbe729e7c39733b9916d5af838f7929/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:03:59 compute-0 podman[272628]: 2025-12-06 07:03:59.574215729 +0000 UTC m=+0.137811733 container init 1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:03:59 compute-0 podman[272628]: 2025-12-06 07:03:59.580123303 +0000 UTC m=+0.143719287 container start 1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:03:59 compute-0 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[272643]: [NOTICE]   (272647) : New worker (272649) forked
Dec 06 07:03:59 compute-0 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[272643]: [NOTICE]   (272647) : Loading success.
Dec 06 07:04:00 compute-0 nova_compute[251992]: 2025-12-06 07:04:00.525 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:00 compute-0 ceph-mon[74339]: pgmap v1294: 305 pgs: 305 active+clean; 246 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 401 KiB/s rd, 3.9 MiB/s wr, 176 op/s
Dec 06 07:04:00 compute-0 nova_compute[251992]: 2025-12-06 07:04:00.844 251996 DEBUG nova.compute.manager [req-21a94776-62a9-4036-9469-4db2e0cc0886 req-414f4a7a-b86e-44b3-a34f-1e4918c84136 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:04:00 compute-0 nova_compute[251992]: 2025-12-06 07:04:00.845 251996 DEBUG oslo_concurrency.lockutils [req-21a94776-62a9-4036-9469-4db2e0cc0886 req-414f4a7a-b86e-44b3-a34f-1e4918c84136 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:00 compute-0 nova_compute[251992]: 2025-12-06 07:04:00.845 251996 DEBUG oslo_concurrency.lockutils [req-21a94776-62a9-4036-9469-4db2e0cc0886 req-414f4a7a-b86e-44b3-a34f-1e4918c84136 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:00 compute-0 nova_compute[251992]: 2025-12-06 07:04:00.845 251996 DEBUG oslo_concurrency.lockutils [req-21a94776-62a9-4036-9469-4db2e0cc0886 req-414f4a7a-b86e-44b3-a34f-1e4918c84136 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:00 compute-0 nova_compute[251992]: 2025-12-06 07:04:00.846 251996 DEBUG nova.compute.manager [req-21a94776-62a9-4036-9469-4db2e0cc0886 req-414f4a7a-b86e-44b3-a34f-1e4918c84136 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] No waiting events found dispatching network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:04:00 compute-0 nova_compute[251992]: 2025-12-06 07:04:00.846 251996 WARNING nova.compute.manager [req-21a94776-62a9-4036-9469-4db2e0cc0886 req-414f4a7a-b86e-44b3-a34f-1e4918c84136 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received unexpected event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e for instance with vm_state active and task_state None.
Dec 06 07:04:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 134 MiB data, 422 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 247 op/s
Dec 06 07:04:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:01.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:01.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1303928982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:02 compute-0 nova_compute[251992]: 2025-12-06 07:04:02.542 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:02 compute-0 ceph-mon[74339]: pgmap v1295: 305 pgs: 305 active+clean; 134 MiB data, 422 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 247 op/s
Dec 06 07:04:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 791 KiB/s wr, 203 op/s
Dec 06 07:04:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:03.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:03.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:03.813 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:03.814 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:03.815 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1730755814' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/199556350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3035910838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:04 compute-0 nova_compute[251992]: 2025-12-06 07:04:04.412 251996 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Check if temp file /var/lib/nova/instances/tmpfww7d9mr exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Dec 06 07:04:04 compute-0 nova_compute[251992]: 2025-12-06 07:04:04.412 251996 DEBUG nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfww7d9mr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='76601abc-9380-4d0e-8360-39afb25adf0c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Dec 06 07:04:04 compute-0 ceph-mon[74339]: pgmap v1296: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 791 KiB/s wr, 203 op/s
Dec 06 07:04:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 101 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 679 KiB/s wr, 226 op/s
Dec 06 07:04:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:05.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:05.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:05 compute-0 nova_compute[251992]: 2025-12-06 07:04:05.526 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3958060182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:05 compute-0 ceph-mon[74339]: pgmap v1297: 305 pgs: 305 active+clean; 101 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 679 KiB/s wr, 226 op/s
Dec 06 07:04:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:06.104 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:04:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:06.105 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:04:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:06.106 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:06 compute-0 nova_compute[251992]: 2025-12-06 07:04:06.163 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 134 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 192 op/s
Dec 06 07:04:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:07.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:07.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:07 compute-0 nova_compute[251992]: 2025-12-06 07:04:07.543 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:08 compute-0 ceph-mon[74339]: pgmap v1298: 305 pgs: 305 active+clean; 134 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 192 op/s
Dec 06 07:04:08 compute-0 nova_compute[251992]: 2025-12-06 07:04:08.789 251996 DEBUG nova.compute.manager [req-aa92566e-44e6-409a-bbdc-98cac6ea0a79 req-3312d69a-77a8-494b-833f-7c8eb45cba66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-unplugged-60375867-89f6-4607-b20c-d94ca837383e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:04:08 compute-0 nova_compute[251992]: 2025-12-06 07:04:08.789 251996 DEBUG oslo_concurrency.lockutils [req-aa92566e-44e6-409a-bbdc-98cac6ea0a79 req-3312d69a-77a8-494b-833f-7c8eb45cba66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:08 compute-0 nova_compute[251992]: 2025-12-06 07:04:08.790 251996 DEBUG oslo_concurrency.lockutils [req-aa92566e-44e6-409a-bbdc-98cac6ea0a79 req-3312d69a-77a8-494b-833f-7c8eb45cba66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:08 compute-0 nova_compute[251992]: 2025-12-06 07:04:08.790 251996 DEBUG oslo_concurrency.lockutils [req-aa92566e-44e6-409a-bbdc-98cac6ea0a79 req-3312d69a-77a8-494b-833f-7c8eb45cba66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:08 compute-0 nova_compute[251992]: 2025-12-06 07:04:08.790 251996 DEBUG nova.compute.manager [req-aa92566e-44e6-409a-bbdc-98cac6ea0a79 req-3312d69a-77a8-494b-833f-7c8eb45cba66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] No waiting events found dispatching network-vif-unplugged-60375867-89f6-4607-b20c-d94ca837383e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:04:08 compute-0 nova_compute[251992]: 2025-12-06 07:04:08.790 251996 DEBUG nova.compute.manager [req-aa92566e-44e6-409a-bbdc-98cac6ea0a79 req-3312d69a-77a8-494b-833f-7c8eb45cba66 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-unplugged-60375867-89f6-4607-b20c-d94ca837383e for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:04:08 compute-0 ovn_controller[147168]: 2025-12-06T07:04:08Z|00069|binding|INFO|Releasing lport a37358dd-1cb9-4fbc-9a91-e10a5094a9ac from this chassis (sb_readonly=0)
Dec 06 07:04:08 compute-0 ovn_controller[147168]: 2025-12-06T07:04:08Z|00070|binding|INFO|Releasing lport 5c223717-35ae-4662-bf3f-55f7a73b7a9a from this chassis (sb_readonly=0)
Dec 06 07:04:08 compute-0 nova_compute[251992]: 2025-12-06 07:04:08.911 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 134 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Dec 06 07:04:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:04:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:09.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:04:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1565463242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:04:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1565463242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:04:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3303593592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:04:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:09.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.245 251996 INFO nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Took 4.25 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.246 251996 DEBUG nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:04:10 compute-0 ceph-mon[74339]: pgmap v1299: 305 pgs: 305 active+clean; 134 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.278 251996 DEBUG nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpfww7d9mr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='76601abc-9380-4d0e-8360-39afb25adf0c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(bf6fe033-4cae-4353-8c4c-f75650a858c2),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.281 251996 DEBUG nova.objects.instance [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lazy-loading 'migration_context' on Instance uuid 76601abc-9380-4d0e-8360-39afb25adf0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.283 251996 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.284 251996 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.285 251996 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.322 251996 DEBUG nova.virt.libvirt.vif [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:03:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1401760543',display_name='tempest-LiveMigrationTest-server-1401760543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1401760543',id=24,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-5qbwc0k8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:03:59Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=76601abc-9380-4d0e-8360-39afb25adf0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.322 251996 DEBUG nova.network.os_vif_util [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converting VIF {"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.323 251996 DEBUG nova.network.os_vif_util [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.324 251996 DEBUG nova.virt.libvirt.migration [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Updating guest XML with vif config: <interface type="ethernet">
Dec 06 07:04:10 compute-0 nova_compute[251992]:   <mac address="fa:16:3e:2e:4d:99"/>
Dec 06 07:04:10 compute-0 nova_compute[251992]:   <model type="virtio"/>
Dec 06 07:04:10 compute-0 nova_compute[251992]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:04:10 compute-0 nova_compute[251992]:   <mtu size="1442"/>
Dec 06 07:04:10 compute-0 nova_compute[251992]:   <target dev="tap60375867-89"/>
Dec 06 07:04:10 compute-0 nova_compute[251992]: </interface>
Dec 06 07:04:10 compute-0 nova_compute[251992]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.325 251996 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.529 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.787 251996 DEBUG nova.virt.libvirt.migration [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.788 251996 INFO nova.virt.libvirt.migration [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Increasing downtime to 50 ms after 0 sec elapsed time
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.921 251996 INFO nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.987 251996 DEBUG nova.compute.manager [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.987 251996 DEBUG oslo_concurrency.lockutils [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.988 251996 DEBUG oslo_concurrency.lockutils [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.988 251996 DEBUG oslo_concurrency.lockutils [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.988 251996 DEBUG nova.compute.manager [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] No waiting events found dispatching network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.988 251996 WARNING nova.compute.manager [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received unexpected event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e for instance with vm_state active and task_state migrating.
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.989 251996 DEBUG nova.compute.manager [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-changed-60375867-89f6-4607-b20c-d94ca837383e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.989 251996 DEBUG nova.compute.manager [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Refreshing instance network info cache due to event network-changed-60375867-89f6-4607-b20c-d94ca837383e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.989 251996 DEBUG oslo_concurrency.lockutils [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.989 251996 DEBUG oslo_concurrency.lockutils [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:04:10 compute-0 nova_compute[251992]: 2025-12-06 07:04:10.990 251996 DEBUG nova.network.neutron [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Refreshing network info cache for port 60375867-89f6-4607-b20c-d94ca837383e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:04:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 99 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Dec 06 07:04:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:04:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:11.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.425 251996 DEBUG nova.virt.libvirt.migration [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.427 251996 DEBUG nova.virt.libvirt.migration [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 07:04:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:11.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.519 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004651.519018, 76601abc-9380-4d0e-8360-39afb25adf0c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.520 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] VM Paused (Lifecycle Event)
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.542 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.545 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.572 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] During sync_power_state the instance has a pending task (migrating). Skip.
Dec 06 07:04:11 compute-0 kernel: tap60375867-89 (unregistering): left promiscuous mode
Dec 06 07:04:11 compute-0 NetworkManager[48965]: <info>  [1765004651.7110] device (tap60375867-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:04:11 compute-0 ovn_controller[147168]: 2025-12-06T07:04:11Z|00071|binding|INFO|Releasing lport 60375867-89f6-4607-b20c-d94ca837383e from this chassis (sb_readonly=0)
Dec 06 07:04:11 compute-0 ovn_controller[147168]: 2025-12-06T07:04:11Z|00072|binding|INFO|Setting lport 60375867-89f6-4607-b20c-d94ca837383e down in Southbound
Dec 06 07:04:11 compute-0 ovn_controller[147168]: 2025-12-06T07:04:11Z|00073|binding|INFO|Releasing lport 40dca971-8880-4c3a-a5fd-8d055d31de88 from this chassis (sb_readonly=0)
Dec 06 07:04:11 compute-0 ovn_controller[147168]: 2025-12-06T07:04:11Z|00074|binding|INFO|Setting lport 40dca971-8880-4c3a-a5fd-8d055d31de88 down in Southbound
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.719 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:11 compute-0 ovn_controller[147168]: 2025-12-06T07:04:11Z|00075|binding|INFO|Removing iface tap60375867-89 ovn-installed in OVS
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.721 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:11 compute-0 ovn_controller[147168]: 2025-12-06T07:04:11Z|00076|binding|INFO|Releasing lport a37358dd-1cb9-4fbc-9a91-e10a5094a9ac from this chassis (sb_readonly=0)
Dec 06 07:04:11 compute-0 ovn_controller[147168]: 2025-12-06T07:04:11Z|00077|binding|INFO|Releasing lport 5c223717-35ae-4662-bf3f-55f7a73b7a9a from this chassis (sb_readonly=0)
Dec 06 07:04:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:11.731 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:91:51 19.80.0.39'], port_security=['fa:16:3e:7f:91:51 19.80.0.39'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['60375867-89f6-4607-b20c-d94ca837383e'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-308547339', 'neutron:cidrs': '19.80.0.39/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-870f583a-cbfc-4c59-b592-cf3095306ec5', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-308547339', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '3', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=8aebe1f8-cfb2-4cb1-a24a-2cc56e9a461b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=40dca971-8880-4c3a-a5fd-8d055d31de88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:04:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:11.733 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:4d:99 10.100.0.6'], port_security=['fa:16:3e:2e:4d:99 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '03fe054d-d727-4af3-9c5e-92e57505f242'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-2114195499', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '76601abc-9380-4d0e-8360-39afb25adf0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9238b9b5-08f5-4634-bd05-370e3192b201', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-2114195499', 'neutron:project_id': '9e86c61372e24db392d4a12ca71f7e00', 'neutron:revision_number': '8', 'neutron:security_group_ids': '36a83e30-1797-4590-94d1-4f6fcbdcefb2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db7e9816-53a6-4d9a-be4a-e5a8dcf8a64b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=60375867-89f6-4607-b20c-d94ca837383e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:04:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:11.734 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 40dca971-8880-4c3a-a5fd-8d055d31de88 in datapath 870f583a-cbfc-4c59-b592-cf3095306ec5 unbound from our chassis
Dec 06 07:04:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:11.738 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 870f583a-cbfc-4c59-b592-cf3095306ec5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:04:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:11.740 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[00e5f486-94eb-4351-9cec-96ce1d5d8354]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:11.741 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5 namespace which is not needed anymore
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.764 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:11 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000018.scope: Deactivated successfully.
Dec 06 07:04:11 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000018.scope: Consumed 13.096s CPU time.
Dec 06 07:04:11 compute-0 systemd-machined[212986]: Machine qemu-12-instance-00000018 terminated.
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.850 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:11 compute-0 virtqemud[251613]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/76601abc-9380-4d0e-8360-39afb25adf0c_disk: No such file or directory
Dec 06 07:04:11 compute-0 virtqemud[251613]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/76601abc-9380-4d0e-8360-39afb25adf0c_disk: No such file or directory
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.879 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:11 compute-0 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[272569]: [NOTICE]   (272573) : haproxy version is 2.8.14-c23fe91
Dec 06 07:04:11 compute-0 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[272569]: [NOTICE]   (272573) : path to executable is /usr/sbin/haproxy
Dec 06 07:04:11 compute-0 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[272569]: [WARNING]  (272573) : Exiting Master process...
Dec 06 07:04:11 compute-0 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[272569]: [WARNING]  (272573) : Exiting Master process...
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.884 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:11 compute-0 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[272569]: [ALERT]    (272573) : Current worker (272575) exited with code 143 (Terminated)
Dec 06 07:04:11 compute-0 neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5[272569]: [WARNING]  (272573) : All workers exited. Exiting... (0)
Dec 06 07:04:11 compute-0 systemd[1]: libpod-543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13.scope: Deactivated successfully.
Dec 06 07:04:11 compute-0 podman[272690]: 2025-12-06 07:04:11.896451766 +0000 UTC m=+0.053291579 container died 543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.897 251996 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.898 251996 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.898 251996 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.933 251996 DEBUG nova.virt.libvirt.guest [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '76601abc-9380-4d0e-8360-39afb25adf0c' (instance-00000018) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.933 251996 INFO nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Migration operation has completed
Dec 06 07:04:11 compute-0 nova_compute[251992]: 2025-12-06 07:04:11.934 251996 INFO nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] _post_live_migration() is started..
Dec 06 07:04:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6f9d3d50c573a7271d624e791cd4ff7728fe76aba2e5ef747d5303352e3ec09-merged.mount: Deactivated successfully.
Dec 06 07:04:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13-userdata-shm.mount: Deactivated successfully.
Dec 06 07:04:11 compute-0 podman[272690]: 2025-12-06 07:04:11.940822767 +0000 UTC m=+0.097662580 container cleanup 543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:04:11 compute-0 systemd[1]: libpod-conmon-543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13.scope: Deactivated successfully.
Dec 06 07:04:12 compute-0 podman[272726]: 2025-12-06 07:04:12.007457876 +0000 UTC m=+0.046411609 container remove 543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.012 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3839e049-8e4e-4b2d-9182-d7d3c8b7a81e]: (4, ('Sat Dec  6 07:04:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5 (543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13)\n543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13\nSat Dec  6 07:04:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5 (543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13)\n543806b2de305f93e6eb90266eaca8c83f6c41770c62a3f1daf840752c636a13\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.015 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4874d3-e89d-4917-9310-f2b1ab2907c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.017 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap870f583a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:12 compute-0 nova_compute[251992]: 2025-12-06 07:04:12.072 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:12 compute-0 kernel: tap870f583a-c0: left promiscuous mode
Dec 06 07:04:12 compute-0 nova_compute[251992]: 2025-12-06 07:04:12.095 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.098 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[84e2244b-5800-4cf6-8f17-c148623e6532]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.112 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[024bcc8f-bec6-48a6-9b5e-6201f3fcd501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.113 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[843f74aa-5380-425d-9cf5-ae4dd3089135]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.130 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f30fda2d-1ff9-42bc-a82c-831105c968e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491066, 'reachable_time': 25418, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272745, 'error': None, 'target': 'ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d870f583a\x2dcbfc\x2d4c59\x2db592\x2dcf3095306ec5.mount: Deactivated successfully.
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.132 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-870f583a-cbfc-4c59-b592-cf3095306ec5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.132 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[78cfdd9e-880b-4df0-8494-3560e040e6b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.133 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 60375867-89f6-4607-b20c-d94ca837383e in datapath 9238b9b5-08f5-4634-bd05-370e3192b201 unbound from our chassis
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.134 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9238b9b5-08f5-4634-bd05-370e3192b201, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.135 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5adb4a9c-36bd-4672-bd29-045cdb778b5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.136 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 namespace which is not needed anymore
Dec 06 07:04:12 compute-0 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[272643]: [NOTICE]   (272647) : haproxy version is 2.8.14-c23fe91
Dec 06 07:04:12 compute-0 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[272643]: [NOTICE]   (272647) : path to executable is /usr/sbin/haproxy
Dec 06 07:04:12 compute-0 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[272643]: [WARNING]  (272647) : Exiting Master process...
Dec 06 07:04:12 compute-0 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[272643]: [ALERT]    (272647) : Current worker (272649) exited with code 143 (Terminated)
Dec 06 07:04:12 compute-0 neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201[272643]: [WARNING]  (272647) : All workers exited. Exiting... (0)
Dec 06 07:04:12 compute-0 systemd[1]: libpod-1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c.scope: Deactivated successfully.
Dec 06 07:04:12 compute-0 podman[272763]: 2025-12-06 07:04:12.280647373 +0000 UTC m=+0.050036389 container died 1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 07:04:12 compute-0 ceph-mon[74339]: pgmap v1300: 305 pgs: 305 active+clean; 99 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Dec 06 07:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c-userdata-shm.mount: Deactivated successfully.
Dec 06 07:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d829a5457c38b829506a1a06139323d63bbe729e7c39733b9916d5af838f7929-merged.mount: Deactivated successfully.
Dec 06 07:04:12 compute-0 podman[272763]: 2025-12-06 07:04:12.324291324 +0000 UTC m=+0.093680310 container cleanup 1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:04:12 compute-0 systemd[1]: libpod-conmon-1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c.scope: Deactivated successfully.
Dec 06 07:04:12 compute-0 podman[272788]: 2025-12-06 07:04:12.378430945 +0000 UTC m=+0.035559377 container remove 1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.383 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cbabf53d-3b03-4bf1-abdb-d67b33327a83]: (4, ('Sat Dec  6 07:04:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 (1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c)\n1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c\nSat Dec  6 07:04:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 (1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c)\n1cacccc0d828b71679f7ee7a422579e335db2bec3740fbe5af3428755948e90c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.384 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[338232d6-88de-43f4-8e28-aeebc0941723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.385 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9238b9b5-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:12 compute-0 nova_compute[251992]: 2025-12-06 07:04:12.386 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:12 compute-0 kernel: tap9238b9b5-00: left promiscuous mode
Dec 06 07:04:12 compute-0 nova_compute[251992]: 2025-12-06 07:04:12.402 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:12 compute-0 nova_compute[251992]: 2025-12-06 07:04:12.403 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.405 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[212d09d4-47b0-46e1-bbc9-12225009b50a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.418 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ffaf6731-4a0f-428b-9041-93af8b7c9094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.419 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ecdcc1f2-3fc5-4ba6-ba7d-4b090deae571]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.431 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[754ba56e-638f-42a8-9cab-ba266b9ead4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491145, 'reachable_time': 40237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272807, 'error': None, 'target': 'ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.433 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9238b9b5-08f5-4634-bd05-370e3192b201 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:04:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:12.433 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[4253a9b0-5ac6-47cd-9156-72996a3a8e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:04:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d9238b9b5\x2d08f5\x2d4634\x2dbd05\x2d370e3192b201.mount: Deactivated successfully.
Dec 06 07:04:12 compute-0 nova_compute[251992]: 2025-12-06 07:04:12.499 251996 DEBUG nova.compute.manager [req-f3199aa2-0713-41d8-8a22-5a8b2ea9345e req-2a08fd88-b138-4a15-908a-47580edc4f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-unplugged-60375867-89f6-4607-b20c-d94ca837383e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:04:12 compute-0 nova_compute[251992]: 2025-12-06 07:04:12.500 251996 DEBUG oslo_concurrency.lockutils [req-f3199aa2-0713-41d8-8a22-5a8b2ea9345e req-2a08fd88-b138-4a15-908a-47580edc4f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:12 compute-0 nova_compute[251992]: 2025-12-06 07:04:12.500 251996 DEBUG oslo_concurrency.lockutils [req-f3199aa2-0713-41d8-8a22-5a8b2ea9345e req-2a08fd88-b138-4a15-908a-47580edc4f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:12 compute-0 nova_compute[251992]: 2025-12-06 07:04:12.500 251996 DEBUG oslo_concurrency.lockutils [req-f3199aa2-0713-41d8-8a22-5a8b2ea9345e req-2a08fd88-b138-4a15-908a-47580edc4f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:12 compute-0 nova_compute[251992]: 2025-12-06 07:04:12.500 251996 DEBUG nova.compute.manager [req-f3199aa2-0713-41d8-8a22-5a8b2ea9345e req-2a08fd88-b138-4a15-908a-47580edc4f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] No waiting events found dispatching network-vif-unplugged-60375867-89f6-4607-b20c-d94ca837383e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:04:12 compute-0 nova_compute[251992]: 2025-12-06 07:04:12.500 251996 DEBUG nova.compute.manager [req-f3199aa2-0713-41d8-8a22-5a8b2ea9345e req-2a08fd88-b138-4a15-908a-47580edc4f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-unplugged-60375867-89f6-4607-b20c-d94ca837383e for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:04:12 compute-0 nova_compute[251992]: 2025-12-06 07:04:12.543 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:04:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:04:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:04:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:04:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:04:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:04:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 97 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 197 op/s
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.169 251996 DEBUG nova.network.neutron [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Activated binding for port 60375867-89f6-4607-b20c-d94ca837383e and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.170 251996 DEBUG nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.171 251996 DEBUG nova.virt.libvirt.vif [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:03:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1401760543',display_name='tempest-LiveMigrationTest-server-1401760543',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1401760543',id=24,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:03:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9e86c61372e24db392d4a12ca71f7e00',ramdisk_id='',reservation_id='r-5qbwc0k8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',imag
e_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-854827502',owner_user_name='tempest-LiveMigrationTest-854827502-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:04:03Z,user_data=None,user_id='756e3e1fa7e44042bdf37a6cdd877fac',uuid=76601abc-9380-4d0e-8360-39afb25adf0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.172 251996 DEBUG nova.network.os_vif_util [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converting VIF {"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.173 251996 DEBUG nova.network.os_vif_util [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.173 251996 DEBUG os_vif [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.176 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.176 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60375867-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.178 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.179 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.182 251996 INFO os_vif [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:4d:99,bridge_name='br-int',has_traffic_filtering=True,id=60375867-89f6-4607-b20c-d94ca837383e,network=Network(9238b9b5-08f5-4634-bd05-370e3192b201),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap60375867-89')
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.183 251996 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.183 251996 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.184 251996 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.184 251996 DEBUG nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.184 251996 INFO nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Deleting instance files /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c_del
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.185 251996 INFO nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Deletion of /var/lib/nova/instances/76601abc-9380-4d0e-8360-39afb25adf0c_del complete
Dec 06 07:04:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:04:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:13.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:04:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:13.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.684 251996 DEBUG nova.network.neutron [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Updated VIF entry in instance network info cache for port 60375867-89f6-4607-b20c-d94ca837383e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.684 251996 DEBUG nova.network.neutron [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Updating instance_info_cache with network_info: [{"id": "60375867-89f6-4607-b20c-d94ca837383e", "address": "fa:16:3e:2e:4d:99", "network": {"id": "9238b9b5-08f5-4634-bd05-370e3192b201", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1420048747-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e86c61372e24db392d4a12ca71f7e00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60375867-89", "ovs_interfaceid": "60375867-89f6-4607-b20c-d94ca837383e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:04:13 compute-0 nova_compute[251992]: 2025-12-06 07:04:13.729 251996 DEBUG oslo_concurrency.lockutils [req-a7795ae0-e607-498a-966d-dcaa4b8a166e req-d6853094-72d5-493e-82e3-ecfe1a73b6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-76601abc-9380-4d0e-8360-39afb25adf0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:04:14 compute-0 ceph-mon[74339]: pgmap v1301: 305 pgs: 305 active+clean; 97 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 197 op/s
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.930 251996 DEBUG nova.compute.manager [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.931 251996 DEBUG oslo_concurrency.lockutils [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.931 251996 DEBUG oslo_concurrency.lockutils [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.931 251996 DEBUG oslo_concurrency.lockutils [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.931 251996 DEBUG nova.compute.manager [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] No waiting events found dispatching network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.932 251996 WARNING nova.compute.manager [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received unexpected event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e for instance with vm_state active and task_state migrating.
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.932 251996 DEBUG nova.compute.manager [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.932 251996 DEBUG oslo_concurrency.lockutils [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.933 251996 DEBUG oslo_concurrency.lockutils [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.933 251996 DEBUG oslo_concurrency.lockutils [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.933 251996 DEBUG nova.compute.manager [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] No waiting events found dispatching network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.933 251996 WARNING nova.compute.manager [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received unexpected event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e for instance with vm_state active and task_state migrating.
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.934 251996 DEBUG nova.compute.manager [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.934 251996 DEBUG oslo_concurrency.lockutils [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.934 251996 DEBUG oslo_concurrency.lockutils [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.934 251996 DEBUG oslo_concurrency.lockutils [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.935 251996 DEBUG nova.compute.manager [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] No waiting events found dispatching network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:04:14 compute-0 nova_compute[251992]: 2025-12-06 07:04:14.935 251996 WARNING nova.compute.manager [req-1306e72e-fd1d-4d78-a234-330299fc7680 req-81e156a0-a878-4fd9-a458-f20d80010e5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Received unexpected event network-vif-plugged-60375867-89f6-4607-b20c-d94ca837383e for instance with vm_state active and task_state migrating.
Dec 06 07:04:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 110 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.2 MiB/s wr, 203 op/s
Dec 06 07:04:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:15.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:15.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:16 compute-0 sudo[272810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:16 compute-0 sudo[272810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:16 compute-0 sudo[272810]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:16 compute-0 sudo[272835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:16 compute-0 sudo[272835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:16 compute-0 sudo[272835]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:16 compute-0 ceph-mon[74339]: pgmap v1302: 305 pgs: 305 active+clean; 110 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.2 MiB/s wr, 203 op/s
Dec 06 07:04:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 121 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 186 op/s
Dec 06 07:04:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:17.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:17.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:17 compute-0 nova_compute[251992]: 2025-12-06 07:04:17.545 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:18 compute-0 nova_compute[251992]: 2025-12-06 07:04:18.178 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:04:18
Dec 06 07:04:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:04:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:04:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', '.mgr', 'default.rgw.log', 'backups', 'volumes']
Dec 06 07:04:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:04:18 compute-0 ceph-mon[74339]: pgmap v1303: 305 pgs: 305 active+clean; 121 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 186 op/s
Dec 06 07:04:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 121 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Dec 06 07:04:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:19.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:19.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:19 compute-0 nova_compute[251992]: 2025-12-06 07:04:19.526 251996 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:19 compute-0 nova_compute[251992]: 2025-12-06 07:04:19.526 251996 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:19 compute-0 nova_compute[251992]: 2025-12-06 07:04:19.526 251996 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "76601abc-9380-4d0e-8360-39afb25adf0c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:19 compute-0 nova_compute[251992]: 2025-12-06 07:04:19.547 251996 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:19 compute-0 nova_compute[251992]: 2025-12-06 07:04:19.548 251996 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:19 compute-0 nova_compute[251992]: 2025-12-06 07:04:19.548 251996 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:19 compute-0 nova_compute[251992]: 2025-12-06 07:04:19.548 251996 DEBUG nova.compute.resource_tracker [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:04:19 compute-0 nova_compute[251992]: 2025-12-06 07:04:19.549 251996 DEBUG oslo_concurrency.processutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:04:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/136576715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.040 251996 DEBUG oslo_concurrency.processutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.207 251996 WARNING nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.209 251996 DEBUG nova.compute.resource_tracker [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4803MB free_disk=20.94293212890625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.209 251996 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.209 251996 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.248 251996 DEBUG nova.compute.resource_tracker [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Migration for instance 76601abc-9380-4d0e-8360-39afb25adf0c refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.263 251996 DEBUG nova.compute.resource_tracker [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.308 251996 DEBUG nova.compute.resource_tracker [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Migration bf6fe033-4cae-4353-8c4c-f75650a858c2 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.308 251996 DEBUG nova.compute.resource_tracker [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.309 251996 DEBUG nova.compute.resource_tracker [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.349 251996 DEBUG oslo_concurrency.processutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:20 compute-0 ceph-mon[74339]: pgmap v1304: 305 pgs: 305 active+clean; 121 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Dec 06 07:04:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/886785863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/136576715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:04:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/413554628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.835 251996 DEBUG oslo_concurrency.processutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.842 251996 DEBUG nova.compute.provider_tree [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:04:20 compute-0 nova_compute[251992]: 2025-12-06 07:04:20.950 251996 DEBUG nova.scheduler.client.report [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:04:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 126 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 155 op/s
Dec 06 07:04:21 compute-0 nova_compute[251992]: 2025-12-06 07:04:21.220 251996 DEBUG nova.compute.resource_tracker [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:04:21 compute-0 nova_compute[251992]: 2025-12-06 07:04:21.221 251996 DEBUG oslo_concurrency.lockutils [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:04:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:21.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:04:21 compute-0 nova_compute[251992]: 2025-12-06 07:04:21.229 251996 INFO nova.compute.manager [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Migrating instance to compute-1.ctlplane.example.com finished successfully.
Dec 06 07:04:21 compute-0 nova_compute[251992]: 2025-12-06 07:04:21.352 251996 INFO nova.scheduler.client.report [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] Deleted allocation for migration bf6fe033-4cae-4353-8c4c-f75650a858c2
Dec 06 07:04:21 compute-0 nova_compute[251992]: 2025-12-06 07:04:21.353 251996 DEBUG nova.virt.libvirt.driver [None req-e9abfedf-fd6b-4bd0-9248-069b69731012 daab2cfaa69a4e9f819a57290bfd54d9 646c511edd3f4a5c93117e8dcfea183b - - default default] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Dec 06 07:04:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:21.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/413554628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:22 compute-0 podman[272908]: 2025-12-06 07:04:22.417002769 +0000 UTC m=+0.077117330 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 07:04:22 compute-0 nova_compute[251992]: 2025-12-06 07:04:22.547 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:22 compute-0 ceph-mon[74339]: pgmap v1305: 305 pgs: 305 active+clean; 126 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 155 op/s
Dec 06 07:04:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3895071870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1359816576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 133 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 344 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Dec 06 07:04:23 compute-0 nova_compute[251992]: 2025-12-06 07:04:23.212 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:04:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:23.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:04:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:23.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 153 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 310 KiB/s rd, 2.6 MiB/s wr, 85 op/s
Dec 06 07:04:25 compute-0 ceph-mon[74339]: pgmap v1306: 305 pgs: 305 active+clean; 133 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 344 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Dec 06 07:04:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:25.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:04:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:25.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031561525800297787 of space, bias 1.0, pg target 0.9468457740089337 quantized to 32 (current 32)
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:04:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:04:26 compute-0 ceph-mon[74339]: pgmap v1307: 305 pgs: 305 active+clean; 153 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 310 KiB/s rd, 2.6 MiB/s wr, 85 op/s
Dec 06 07:04:26 compute-0 nova_compute[251992]: 2025-12-06 07:04:26.897 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004651.8958516, 76601abc-9380-4d0e-8360-39afb25adf0c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:26 compute-0 nova_compute[251992]: 2025-12-06 07:04:26.898 251996 INFO nova.compute.manager [-] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] VM Stopped (Lifecycle Event)
Dec 06 07:04:26 compute-0 nova_compute[251992]: 2025-12-06 07:04:26.918 251996 DEBUG nova.compute.manager [None req-feb85c0e-758c-4263-8d45-e81cb9db7508 - - - - - -] [instance: 76601abc-9380-4d0e-8360-39afb25adf0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 99 op/s
Dec 06 07:04:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:27.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:27.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:27 compute-0 nova_compute[251992]: 2025-12-06 07:04:27.556 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:28 compute-0 nova_compute[251992]: 2025-12-06 07:04:28.214 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:28 compute-0 podman[272939]: 2025-12-06 07:04:28.40974456 +0000 UTC m=+0.058037371 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:04:28 compute-0 podman[272938]: 2025-12-06 07:04:28.410152062 +0000 UTC m=+0.054626877 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 06 07:04:28 compute-0 ceph-mon[74339]: pgmap v1308: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 99 op/s
Dec 06 07:04:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Dec 06 07:04:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:29.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:29.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:30 compute-0 ceph-mon[74339]: pgmap v1309: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 75 op/s
Dec 06 07:04:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 128 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 135 op/s
Dec 06 07:04:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:04:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:31.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:04:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:31.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1089812738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:32 compute-0 nova_compute[251992]: 2025-12-06 07:04:32.556 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.6 MiB/s wr, 133 op/s
Dec 06 07:04:33 compute-0 ceph-mon[74339]: pgmap v1310: 305 pgs: 305 active+clean; 128 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 135 op/s
Dec 06 07:04:33 compute-0 nova_compute[251992]: 2025-12-06 07:04:33.230 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:33.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:33.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:34 compute-0 ceph-mon[74339]: pgmap v1311: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.6 MiB/s wr, 133 op/s
Dec 06 07:04:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 145 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 147 op/s
Dec 06 07:04:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:35.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:04:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:35.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:04:36 compute-0 ceph-mon[74339]: pgmap v1312: 305 pgs: 305 active+clean; 145 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 147 op/s
Dec 06 07:04:36 compute-0 sudo[272981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:36 compute-0 sudo[272981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:36 compute-0 sudo[272981]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:36 compute-0 sudo[273006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:36 compute-0 sudo[273006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:36 compute-0 sudo[273006]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 140 op/s
Dec 06 07:04:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:37.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:04:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:37.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:04:37 compute-0 nova_compute[251992]: 2025-12-06 07:04:37.596 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:38 compute-0 nova_compute[251992]: 2025-12-06 07:04:38.232 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:38 compute-0 ceph-mon[74339]: pgmap v1313: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 140 op/s
Dec 06 07:04:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 06 07:04:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:04:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:04:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:04:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:39.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:04:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/554912748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:39 compute-0 nova_compute[251992]: 2025-12-06 07:04:39.935 251996 DEBUG oslo_concurrency.lockutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "dd7ff314-b789-4550-ab9a-44dc02948350" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:39 compute-0 nova_compute[251992]: 2025-12-06 07:04:39.935 251996 DEBUG oslo_concurrency.lockutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:04:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1753776930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.096 251996 DEBUG nova.compute.manager [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.173 251996 DEBUG oslo_concurrency.lockutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.174 251996 DEBUG oslo_concurrency.lockutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.180 251996 DEBUG nova.virt.hardware [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.181 251996 INFO nova.compute.claims [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.293 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.683 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:40 compute-0 ceph-mon[74339]: pgmap v1314: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 06 07:04:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1753776930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:04:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2461733571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.722 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.732 251996 DEBUG nova.compute.provider_tree [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.810 251996 DEBUG nova.scheduler.client.report [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.889 251996 DEBUG oslo_concurrency.lockutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.890 251996 DEBUG nova.compute.manager [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.947 251996 DEBUG nova.compute.manager [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.964 251996 INFO nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:04:40 compute-0 nova_compute[251992]: 2025-12-06 07:04:40.979 251996 DEBUG nova.compute.manager [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:04:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.078 251996 DEBUG nova.compute.manager [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.079 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.080 251996 INFO nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Creating image(s)
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.108 251996 DEBUG nova.storage.rbd_utils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.134 251996 DEBUG nova.storage.rbd_utils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.161 251996 DEBUG nova.storage.rbd_utils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.165 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.219 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.220 251996 DEBUG oslo_concurrency.lockutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.221 251996 DEBUG oslo_concurrency.lockutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.221 251996 DEBUG oslo_concurrency.lockutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.244 251996 DEBUG nova.storage.rbd_utils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.247 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef dd7ff314-b789-4550-ab9a-44dc02948350_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:41.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000055s ======
Dec 06 07:04:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:41.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.605 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef dd7ff314-b789-4550-ab9a-44dc02948350_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.679 251996 DEBUG nova.storage.rbd_utils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] resizing rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.773 251996 DEBUG nova.objects.instance [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lazy-loading 'migration_context' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.810 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.811 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Ensure instance console log exists: /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.811 251996 DEBUG oslo_concurrency.lockutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.811 251996 DEBUG oslo_concurrency.lockutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.812 251996 DEBUG oslo_concurrency.lockutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.813 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.817 251996 WARNING nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.823 251996 DEBUG nova.virt.libvirt.host [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.824 251996 DEBUG nova.virt.libvirt.host [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.826 251996 DEBUG nova.virt.libvirt.host [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.827 251996 DEBUG nova.virt.libvirt.host [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.828 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.828 251996 DEBUG nova.virt.hardware [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.828 251996 DEBUG nova.virt.hardware [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.828 251996 DEBUG nova.virt.hardware [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.829 251996 DEBUG nova.virt.hardware [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.829 251996 DEBUG nova.virt.hardware [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.829 251996 DEBUG nova.virt.hardware [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.829 251996 DEBUG nova.virt.hardware [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.829 251996 DEBUG nova.virt.hardware [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.830 251996 DEBUG nova.virt.hardware [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.830 251996 DEBUG nova.virt.hardware [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.830 251996 DEBUG nova.virt.hardware [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:04:41 compute-0 nova_compute[251992]: 2025-12-06 07:04:41.833 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2461733571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3843001736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:04:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/468274952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:42 compute-0 nova_compute[251992]: 2025-12-06 07:04:42.285 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:42 compute-0 nova_compute[251992]: 2025-12-06 07:04:42.317 251996 DEBUG nova.storage.rbd_utils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:42 compute-0 nova_compute[251992]: 2025-12-06 07:04:42.321 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:42 compute-0 nova_compute[251992]: 2025-12-06 07:04:42.599 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:04:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4113499709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:42 compute-0 nova_compute[251992]: 2025-12-06 07:04:42.763 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:42 compute-0 nova_compute[251992]: 2025-12-06 07:04:42.765 251996 DEBUG nova.objects.instance [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:04:42 compute-0 nova_compute[251992]: 2025-12-06 07:04:42.783 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:04:42 compute-0 nova_compute[251992]:   <uuid>dd7ff314-b789-4550-ab9a-44dc02948350</uuid>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   <name>instance-0000001c</name>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-316061423</nova:name>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:04:41</nova:creationTime>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <nova:user uuid="3c3859554f86419a941a5924e80b88de">tempest-UnshelveToHostMultiNodesTest-311268056-project-member</nova:user>
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <nova:project uuid="e2384cf38a13417c9220db3aafff6b24">tempest-UnshelveToHostMultiNodesTest-311268056</nova:project>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <system>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <entry name="serial">dd7ff314-b789-4550-ab9a-44dc02948350</entry>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <entry name="uuid">dd7ff314-b789-4550-ab9a-44dc02948350</entry>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     </system>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   <os>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   </os>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   <features>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   </features>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk">
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       </source>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk.config">
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       </source>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:04:42 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/console.log" append="off"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <video>
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     </video>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:04:42 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:04:42 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:04:42 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:04:42 compute-0 nova_compute[251992]: </domain>
Dec 06 07:04:42 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
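[annotation] The guest XML that `_get_guest_xml` dumped above can be read programmatically. A minimal sketch using Python's stdlib `xml.etree`, over a hand-reconstructed fragment of the `<devices>` section (trimmed to the two disks, extra `<host>` monitor entries omitted):

```python
import xml.etree.ElementTree as ET

# <devices> fragment reconstructed from the guest XML logged above
# (illustrative subset, not the full domain definition).
xml_fragment = """
<devices>
  <disk type="network" device="disk">
    <source protocol="rbd" name="vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk">
      <host name="192.168.122.100" port="6789"/>
    </source>
    <target dev="vda" bus="virtio"/>
  </disk>
  <disk type="network" device="cdrom">
    <source protocol="rbd" name="vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk.config">
      <host name="192.168.122.100" port="6789"/>
    </source>
    <target dev="sda" bus="sata"/>
  </disk>
</devices>
"""

devices = ET.fromstring(xml_fragment)
# Map each guest device name (vda, sda) to its backing RBD image.
disks = {d.find("target").get("dev"): d.find("source").get("name")
         for d in devices.findall("disk")}
```

This matches the two "No BDM found with device name vda/sda" lines that follow: those are the only two disk targets in the domain.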
Dec 06 07:04:42 compute-0 nova_compute[251992]: 2025-12-06 07:04:42.830 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:04:42 compute-0 nova_compute[251992]: 2025-12-06 07:04:42.832 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:04:42 compute-0 nova_compute[251992]: 2025-12-06 07:04:42.833 251996 INFO nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Using config drive
Dec 06 07:04:42 compute-0 nova_compute[251992]: 2025-12-06 07:04:42.862 251996 DEBUG nova.storage.rbd_utils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:04:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:04:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:04:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:04:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:04:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:04:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 172 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Dec 06 07:04:43 compute-0 nova_compute[251992]: 2025-12-06 07:04:43.235 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:43.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:43.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:43 compute-0 nova_compute[251992]: 2025-12-06 07:04:43.575 251996 INFO nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Creating config drive at /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config
Dec 06 07:04:43 compute-0 nova_compute[251992]: 2025-12-06 07:04:43.579 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7mdvc22w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:43 compute-0 nova_compute[251992]: 2025-12-06 07:04:43.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:43 compute-0 nova_compute[251992]: 2025-12-06 07:04:43.704 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7mdvc22w" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:43 compute-0 nova_compute[251992]: 2025-12-06 07:04:43.732 251996 DEBUG nova.storage.rbd_utils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:43 compute-0 nova_compute[251992]: 2025-12-06 07:04:43.735 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config dd7ff314-b789-4550-ab9a-44dc02948350_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:44 compute-0 ceph-mon[74339]: pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Dec 06 07:04:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/468274952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4113499709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:44 compute-0 nova_compute[251992]: 2025-12-06 07:04:44.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:44 compute-0 nova_compute[251992]: 2025-12-06 07:04:44.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:44 compute-0 nova_compute[251992]: 2025-12-06 07:04:44.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:44.815 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:04:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:44.816 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:04:44 compute-0 nova_compute[251992]: 2025-12-06 07:04:44.884 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 225 MiB data, 461 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 4.6 MiB/s wr, 49 op/s
Dec 06 07:04:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:45.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:04:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:45.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:04:45 compute-0 nova_compute[251992]: 2025-12-06 07:04:45.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:45 compute-0 nova_compute[251992]: 2025-12-06 07:04:45.676 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:45 compute-0 nova_compute[251992]: 2025-12-06 07:04:45.676 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:45 compute-0 nova_compute[251992]: 2025-12-06 07:04:45.676 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:45 compute-0 nova_compute[251992]: 2025-12-06 07:04:45.676 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:04:45 compute-0 nova_compute[251992]: 2025-12-06 07:04:45.677 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:45 compute-0 sudo[273364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:45 compute-0 sudo[273364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:45 compute-0 sudo[273364]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:45 compute-0 sudo[273389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:04:45 compute-0 sudo[273389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:45 compute-0 sudo[273389]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:46 compute-0 ceph-mon[74339]: pgmap v1316: 305 pgs: 305 active+clean; 172 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Dec 06 07:04:46 compute-0 sudo[273414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:46 compute-0 sudo[273414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:46 compute-0 sudo[273414]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:46 compute-0 ovn_controller[147168]: 2025-12-06T07:04:46Z|00078|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Dec 06 07:04:46 compute-0 sudo[273439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:04:46 compute-0 sudo[273439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.073 251996 DEBUG oslo_concurrency.processutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config dd7ff314-b789-4550-ab9a-44dc02948350_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.074 251996 INFO nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Deleting local config drive /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config because it was imported into RBD.
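[annotation] The config-drive sequence logged above is: `mkisofs` builds a local ISO labeled `config-2`, `rbd import` copies it into the Ceph `vms` pool, then the local file is deleted. A sketch that restates the two commands as argument lists (taken from the log lines; built but not executed here):

```python
# Instance UUID and local ISO path, as they appear in the log above.
instance = "dd7ff314-b789-4550-ab9a-44dc02948350"
local_iso = f"/var/lib/nova/instances/{instance}/disk.config"

# Step 1: build the config-drive ISO (volume label "config-2" is what
# cloud-init looks for when detecting a config drive).
mkisofs_cmd = [
    "/usr/bin/mkisofs", "-o", local_iso,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmp7mdvc22w",
]

# Step 2: import the ISO into the Ceph "vms" pool as <uuid>_disk.config,
# after which nova removes the local copy.
rbd_import_cmd = [
    "rbd", "import", "--pool", "vms", local_iso,
    f"{instance}_disk.config", "--image-format=2",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
]
```

The resulting RBD image is exactly the `sda` cdrom source in the guest XML dumped earlier, which is why the "disk.config does not exist" debug lines appear before the import and not after.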
Dec 06 07:04:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:04:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687729709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.132 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:46 compute-0 systemd-machined[212986]: New machine qemu-13-instance-0000001c.
Dec 06 07:04:46 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000001c.
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.273 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.274 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.440 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.441 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4760MB free_disk=20.93905258178711GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.441 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.442 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.502 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance dd7ff314-b789-4550-ab9a-44dc02948350 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.502 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.502 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:04:46 compute-0 sudo[273439]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:46 compute-0 nova_compute[251992]: 2025-12-06 07:04:46.536 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 07:04:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:04:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:04:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:04:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:04:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:04:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:04:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:04:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3726549628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.000 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.007 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.028 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
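[annotation] The inventory data in the line above determines what this node offers to placement. A sketch of the capacity arithmetic, assuming placement's usual rule of `(total - reserved) * allocation_ratio` for each resource class:

```python
# Inventory as reported for provider e75da5bf-... (from the log line above).
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

def placement_capacity(inv):
    # Effective schedulable capacity per resource class:
    # (total - reserved) scaled by the overcommit/undercommit ratio.
    return {rc: int((v["total"] - v["reserved"]) * v["allocation_ratio"])
            for rc, v in inv.items()}

caps = placement_capacity(inventory)
```

With these numbers the node can schedule up to 32 VCPUs (8 physical, 4x overcommit), 7168 MB of RAM, and 17 GB of disk (19 GB usable, undercommitted at 0.9) — consistent with the "Total usable vcpus: 8, total allocated vcpus: 1" audit lines above.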
Dec 06 07:04:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:04:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 816a5ac4-0b3c-4f22-8f59-a304dc4fd7af does not exist
Dec 06 07:04:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 38a78dbb-3088-4ba1-8d8b-e528c1fef71c does not exist
Dec 06 07:04:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e40228a7-d2bb-4bdd-a0b1-25330aa8bd8e does not exist
Dec 06 07:04:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:04:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:04:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:04:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:04:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:04:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:04:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 260 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 4.6 MiB/s wr, 67 op/s
Dec 06 07:04:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.075 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.075 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:47 compute-0 sudo[273551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:47 compute-0 sudo[273551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:47 compute-0 sudo[273551]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:47 compute-0 sudo[273594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:04:47 compute-0 sudo[273594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:47 compute-0 sudo[273594]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:47 compute-0 sudo[273619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:47 compute-0 sudo[273619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:47 compute-0 sudo[273619]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:47 compute-0 ceph-mon[74339]: pgmap v1317: 305 pgs: 305 active+clean; 225 MiB data, 461 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 4.6 MiB/s wr, 49 op/s
Dec 06 07:04:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1474950124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2687729709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3020370546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:04:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:04:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:04:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3726549628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:47.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:47 compute-0 sudo[273644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:04:47 compute-0 sudo[273644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:47.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.537 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004687.5365894, dd7ff314-b789-4550-ab9a-44dc02948350 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.537 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] VM Resumed (Lifecycle Event)
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.539 251996 DEBUG nova.compute.manager [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.540 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.543 251996 INFO nova.virt.libvirt.driver [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance spawned successfully.
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.543 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.557 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.561 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.570 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.571 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.571 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.571 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.572 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.572 251996 DEBUG nova.virt.libvirt.driver [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.578 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.579 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004687.5368457, dd7ff314-b789-4550-ab9a-44dc02948350 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.579 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] VM Started (Lifecycle Event)
Dec 06 07:04:47 compute-0 podman[273715]: 2025-12-06 07:04:47.58500816 +0000 UTC m=+0.042735347 container create 3120aca3286b540eca2bb30a220a297696204303ba5a43733d5d8e2f9b3e6dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swartz, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.599 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.610 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.613 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.634 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.642 251996 INFO nova.compute.manager [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Took 6.56 seconds to spawn the instance on the hypervisor.
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.643 251996 DEBUG nova.compute.manager [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:47 compute-0 podman[273715]: 2025-12-06 07:04:47.565731925 +0000 UTC m=+0.023459142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.693 251996 INFO nova.compute.manager [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Took 7.55 seconds to build instance.
Dec 06 07:04:47 compute-0 nova_compute[251992]: 2025-12-06 07:04:47.708 251996 DEBUG oslo_concurrency.lockutils [None req-41fd42af-3dd5-42c5-9670-2482e7ad5efb 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:04:47.818 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:04:48 compute-0 nova_compute[251992]: 2025-12-06 07:04:48.236 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:48 compute-0 systemd[1]: Started libpod-conmon-3120aca3286b540eca2bb30a220a297696204303ba5a43733d5d8e2f9b3e6dac.scope.
Dec 06 07:04:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:04:48 compute-0 podman[273715]: 2025-12-06 07:04:48.667194956 +0000 UTC m=+1.124922193 container init 3120aca3286b540eca2bb30a220a297696204303ba5a43733d5d8e2f9b3e6dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swartz, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 07:04:48 compute-0 podman[273715]: 2025-12-06 07:04:48.67814321 +0000 UTC m=+1.135870407 container start 3120aca3286b540eca2bb30a220a297696204303ba5a43733d5d8e2f9b3e6dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swartz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:04:48 compute-0 podman[273715]: 2025-12-06 07:04:48.686014229 +0000 UTC m=+1.143741426 container attach 3120aca3286b540eca2bb30a220a297696204303ba5a43733d5d8e2f9b3e6dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swartz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:04:48 compute-0 eloquent_swartz[273732]: 167 167
Dec 06 07:04:48 compute-0 systemd[1]: libpod-3120aca3286b540eca2bb30a220a297696204303ba5a43733d5d8e2f9b3e6dac.scope: Deactivated successfully.
Dec 06 07:04:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2509531797' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:04:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:04:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:04:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:04:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/599834630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:48 compute-0 podman[273737]: 2025-12-06 07:04:48.73184834 +0000 UTC m=+0.027039652 container died 3120aca3286b540eca2bb30a220a297696204303ba5a43733d5d8e2f9b3e6dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 07:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c32b121fd61ed505bf644883ba6ed0c17e2f004715f3d7aab1227df0966713c-merged.mount: Deactivated successfully.
Dec 06 07:04:48 compute-0 podman[273737]: 2025-12-06 07:04:48.76686154 +0000 UTC m=+0.062052822 container remove 3120aca3286b540eca2bb30a220a297696204303ba5a43733d5d8e2f9b3e6dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:04:48 compute-0 systemd[1]: libpod-conmon-3120aca3286b540eca2bb30a220a297696204303ba5a43733d5d8e2f9b3e6dac.scope: Deactivated successfully.
Dec 06 07:04:48 compute-0 podman[273758]: 2025-12-06 07:04:48.981445713 +0000 UTC m=+0.054274496 container create ca70fd49757ae97df0f037f541d53e761e8778628d1932575677e2588fe3e911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 07:04:49 compute-0 systemd[1]: Started libpod-conmon-ca70fd49757ae97df0f037f541d53e761e8778628d1932575677e2588fe3e911.scope.
Dec 06 07:04:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 260 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 3.6 MiB/s wr, 47 op/s
Dec 06 07:04:49 compute-0 podman[273758]: 2025-12-06 07:04:48.960020538 +0000 UTC m=+0.032849341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:04:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec49c69a82ddb61914fe9fae99a78b599e60ee570d62a772ced987e9493cca2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec49c69a82ddb61914fe9fae99a78b599e60ee570d62a772ced987e9493cca2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec49c69a82ddb61914fe9fae99a78b599e60ee570d62a772ced987e9493cca2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec49c69a82ddb61914fe9fae99a78b599e60ee570d62a772ced987e9493cca2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec49c69a82ddb61914fe9fae99a78b599e60ee570d62a772ced987e9493cca2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.076 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.078 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.079 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:04:49 compute-0 podman[273758]: 2025-12-06 07:04:49.097261005 +0000 UTC m=+0.170089848 container init ca70fd49757ae97df0f037f541d53e761e8778628d1932575677e2588fe3e911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tharp, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.102 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.103 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.103 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.103 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:04:49 compute-0 podman[273758]: 2025-12-06 07:04:49.108904179 +0000 UTC m=+0.181732962 container start ca70fd49757ae97df0f037f541d53e761e8778628d1932575677e2588fe3e911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:04:49 compute-0 podman[273758]: 2025-12-06 07:04:49.113382992 +0000 UTC m=+0.186211775 container attach ca70fd49757ae97df0f037f541d53e761e8778628d1932575677e2588fe3e911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tharp, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.191 251996 DEBUG oslo_concurrency.lockutils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "dd7ff314-b789-4550-ab9a-44dc02948350" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.192 251996 DEBUG oslo_concurrency.lockutils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.192 251996 INFO nova.compute.manager [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Shelving
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.215 251996 DEBUG nova.virt.libvirt.driver [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:04:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:49.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.262 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:04:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:49.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.725 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.742 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.742 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.743 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.743 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.743 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:04:49 compute-0 nova_compute[251992]: 2025-12-06 07:04:49.743 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:04:49 compute-0 ceph-mon[74339]: pgmap v1318: 305 pgs: 305 active+clean; 260 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 4.6 MiB/s wr, 67 op/s
Dec 06 07:04:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3821259062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2894877082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/119544535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:49 compute-0 determined_tharp[273776]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:04:49 compute-0 determined_tharp[273776]: --> relative data size: 1.0
Dec 06 07:04:49 compute-0 determined_tharp[273776]: --> All data devices are unavailable
Dec 06 07:04:49 compute-0 systemd[1]: libpod-ca70fd49757ae97df0f037f541d53e761e8778628d1932575677e2588fe3e911.scope: Deactivated successfully.
Dec 06 07:04:49 compute-0 podman[273758]: 2025-12-06 07:04:49.982275463 +0000 UTC m=+1.055104286 container died ca70fd49757ae97df0f037f541d53e761e8778628d1932575677e2588fe3e911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tharp, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ec49c69a82ddb61914fe9fae99a78b599e60ee570d62a772ced987e9493cca2-merged.mount: Deactivated successfully.
Dec 06 07:04:50 compute-0 podman[273758]: 2025-12-06 07:04:50.042266908 +0000 UTC m=+1.115095691 container remove ca70fd49757ae97df0f037f541d53e761e8778628d1932575677e2588fe3e911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:04:50 compute-0 systemd[1]: libpod-conmon-ca70fd49757ae97df0f037f541d53e761e8778628d1932575677e2588fe3e911.scope: Deactivated successfully.
Dec 06 07:04:50 compute-0 sudo[273644]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:50 compute-0 sudo[273800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:50 compute-0 sudo[273800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:50 compute-0 sudo[273800]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:50 compute-0 sudo[273825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:04:50 compute-0 sudo[273825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:50 compute-0 sudo[273825]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:50 compute-0 sudo[273850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:50 compute-0 sudo[273850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:50 compute-0 sudo[273850]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:50 compute-0 sudo[273875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:04:50 compute-0 sudo[273875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:50 compute-0 podman[273943]: 2025-12-06 07:04:50.608146994 +0000 UTC m=+0.039170598 container create 02ead02808367f51c48a3c6fbea4b7a0f3e9ce6133ec2bb4f19515d1376f667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ramanujan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:04:50 compute-0 systemd[1]: Started libpod-conmon-02ead02808367f51c48a3c6fbea4b7a0f3e9ce6133ec2bb4f19515d1376f667d.scope.
Dec 06 07:04:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:04:50 compute-0 podman[273943]: 2025-12-06 07:04:50.688886743 +0000 UTC m=+0.119910397 container init 02ead02808367f51c48a3c6fbea4b7a0f3e9ce6133ec2bb4f19515d1376f667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 07:04:50 compute-0 podman[273943]: 2025-12-06 07:04:50.593541238 +0000 UTC m=+0.024564862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:04:50 compute-0 podman[273943]: 2025-12-06 07:04:50.696618587 +0000 UTC m=+0.127642191 container start 02ead02808367f51c48a3c6fbea4b7a0f3e9ce6133ec2bb4f19515d1376f667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:04:50 compute-0 podman[273943]: 2025-12-06 07:04:50.699373483 +0000 UTC m=+0.130397167 container attach 02ead02808367f51c48a3c6fbea4b7a0f3e9ce6133ec2bb4f19515d1376f667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ramanujan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 07:04:50 compute-0 charming_ramanujan[273960]: 167 167
Dec 06 07:04:50 compute-0 systemd[1]: libpod-02ead02808367f51c48a3c6fbea4b7a0f3e9ce6133ec2bb4f19515d1376f667d.scope: Deactivated successfully.
Dec 06 07:04:50 compute-0 podman[273943]: 2025-12-06 07:04:50.703245421 +0000 UTC m=+0.134269025 container died 02ead02808367f51c48a3c6fbea4b7a0f3e9ce6133ec2bb4f19515d1376f667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:04:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-45f00380b053ec1b4b32150e7eaac92bb978a2811c8c722f0f86e6124d1432ea-merged.mount: Deactivated successfully.
Dec 06 07:04:50 compute-0 podman[273943]: 2025-12-06 07:04:50.741354788 +0000 UTC m=+0.172378392 container remove 02ead02808367f51c48a3c6fbea4b7a0f3e9ce6133ec2bb4f19515d1376f667d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 07:04:50 compute-0 systemd[1]: libpod-conmon-02ead02808367f51c48a3c6fbea4b7a0f3e9ce6133ec2bb4f19515d1376f667d.scope: Deactivated successfully.
Dec 06 07:04:50 compute-0 ceph-mon[74339]: pgmap v1319: 305 pgs: 305 active+clean; 260 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 3.6 MiB/s wr, 47 op/s
Dec 06 07:04:50 compute-0 podman[273982]: 2025-12-06 07:04:50.904619237 +0000 UTC m=+0.040977808 container create f78457b785221fab51205e460fe4061e660fdd59dcf95cb8cd978fddc9243d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:04:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Dec 06 07:04:50 compute-0 systemd[1]: Started libpod-conmon-f78457b785221fab51205e460fe4061e660fdd59dcf95cb8cd978fddc9243d13.scope.
Dec 06 07:04:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Dec 06 07:04:50 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Dec 06 07:04:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e70da68a67992056ddf14d8f92fcf823ae10738393c7030051aebceaf7c2ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e70da68a67992056ddf14d8f92fcf823ae10738393c7030051aebceaf7c2ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e70da68a67992056ddf14d8f92fcf823ae10738393c7030051aebceaf7c2ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e70da68a67992056ddf14d8f92fcf823ae10738393c7030051aebceaf7c2ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:50 compute-0 podman[273982]: 2025-12-06 07:04:50.980002057 +0000 UTC m=+0.116360648 container init f78457b785221fab51205e460fe4061e660fdd59dcf95cb8cd978fddc9243d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banach, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 07:04:50 compute-0 podman[273982]: 2025-12-06 07:04:50.887368698 +0000 UTC m=+0.023727289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:04:50 compute-0 podman[273982]: 2025-12-06 07:04:50.989867871 +0000 UTC m=+0.126226442 container start f78457b785221fab51205e460fe4061e660fdd59dcf95cb8cd978fddc9243d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banach, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:04:50 compute-0 podman[273982]: 2025-12-06 07:04:50.994114559 +0000 UTC m=+0.130473150 container attach f78457b785221fab51205e460fe4061e660fdd59dcf95cb8cd978fddc9243d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:04:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 272 MiB data, 498 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.5 MiB/s wr, 175 op/s
Dec 06 07:04:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:51.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:51.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]: {
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:     "0": [
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:         {
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             "devices": [
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "/dev/loop3"
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             ],
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             "lv_name": "ceph_lv0",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             "lv_size": "7511998464",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             "name": "ceph_lv0",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             "tags": {
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "ceph.cluster_name": "ceph",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "ceph.crush_device_class": "",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "ceph.encrypted": "0",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "ceph.osd_id": "0",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "ceph.type": "block",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:                 "ceph.vdo": "0"
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             },
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             "type": "block",
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:             "vg_name": "ceph_vg0"
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:         }
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]:     ]
Dec 06 07:04:51 compute-0 ecstatic_banach[273998]: }
Dec 06 07:04:51 compute-0 systemd[1]: libpod-f78457b785221fab51205e460fe4061e660fdd59dcf95cb8cd978fddc9243d13.scope: Deactivated successfully.
Dec 06 07:04:51 compute-0 podman[273982]: 2025-12-06 07:04:51.736165321 +0000 UTC m=+0.872523902 container died f78457b785221fab51205e460fe4061e660fdd59dcf95cb8cd978fddc9243d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banach, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 07:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-55e70da68a67992056ddf14d8f92fcf823ae10738393c7030051aebceaf7c2ef-merged.mount: Deactivated successfully.
Dec 06 07:04:51 compute-0 podman[273982]: 2025-12-06 07:04:51.795683803 +0000 UTC m=+0.932042364 container remove f78457b785221fab51205e460fe4061e660fdd59dcf95cb8cd978fddc9243d13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_banach, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 07:04:51 compute-0 systemd[1]: libpod-conmon-f78457b785221fab51205e460fe4061e660fdd59dcf95cb8cd978fddc9243d13.scope: Deactivated successfully.
Dec 06 07:04:51 compute-0 sudo[273875]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:51 compute-0 sudo[274018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:51 compute-0 sudo[274018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:51 compute-0 sudo[274018]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:51 compute-0 sudo[274043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:04:51 compute-0 sudo[274043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:51 compute-0 sudo[274043]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:51 compute-0 ceph-mon[74339]: osdmap e169: 3 total, 3 up, 3 in
Dec 06 07:04:51 compute-0 ceph-mon[74339]: pgmap v1321: 305 pgs: 305 active+clean; 272 MiB data, 498 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.5 MiB/s wr, 175 op/s
Dec 06 07:04:52 compute-0 sudo[274068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:52 compute-0 sudo[274068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:52 compute-0 sudo[274068]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:52 compute-0 sudo[274093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:04:52 compute-0 sudo[274093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:52 compute-0 podman[274157]: 2025-12-06 07:04:52.360295233 +0000 UTC m=+0.039887637 container create 9b173576d5dba984a86caa627186a5448b9c3ada98ab6a539f8505240d76b30c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bartik, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 07:04:52 compute-0 systemd[1]: Started libpod-conmon-9b173576d5dba984a86caa627186a5448b9c3ada98ab6a539f8505240d76b30c.scope.
Dec 06 07:04:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:04:52 compute-0 podman[274157]: 2025-12-06 07:04:52.415783352 +0000 UTC m=+0.095375777 container init 9b173576d5dba984a86caa627186a5448b9c3ada98ab6a539f8505240d76b30c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:04:52 compute-0 podman[274157]: 2025-12-06 07:04:52.423187508 +0000 UTC m=+0.102779912 container start 9b173576d5dba984a86caa627186a5448b9c3ada98ab6a539f8505240d76b30c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:04:52 compute-0 laughing_bartik[274174]: 167 167
Dec 06 07:04:52 compute-0 podman[274157]: 2025-12-06 07:04:52.425943915 +0000 UTC m=+0.105536319 container attach 9b173576d5dba984a86caa627186a5448b9c3ada98ab6a539f8505240d76b30c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:04:52 compute-0 systemd[1]: libpod-9b173576d5dba984a86caa627186a5448b9c3ada98ab6a539f8505240d76b30c.scope: Deactivated successfully.
Dec 06 07:04:52 compute-0 podman[274157]: 2025-12-06 07:04:52.434369028 +0000 UTC m=+0.113961432 container died 9b173576d5dba984a86caa627186a5448b9c3ada98ab6a539f8505240d76b30c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 07:04:52 compute-0 podman[274157]: 2025-12-06 07:04:52.341462741 +0000 UTC m=+0.021055175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:04:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9143bcb337c8df698b33139eba78d9e83b29b2f1d5635894594dd4010112639-merged.mount: Deactivated successfully.
Dec 06 07:04:52 compute-0 podman[274157]: 2025-12-06 07:04:52.468446513 +0000 UTC m=+0.148038927 container remove 9b173576d5dba984a86caa627186a5448b9c3ada98ab6a539f8505240d76b30c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 07:04:52 compute-0 systemd[1]: libpod-conmon-9b173576d5dba984a86caa627186a5448b9c3ada98ab6a539f8505240d76b30c.scope: Deactivated successfully.
Dec 06 07:04:52 compute-0 podman[274179]: 2025-12-06 07:04:52.553544753 +0000 UTC m=+0.089931345 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:04:52 compute-0 nova_compute[251992]: 2025-12-06 07:04:52.657 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:52 compute-0 podman[274224]: 2025-12-06 07:04:52.680519085 +0000 UTC m=+0.095173181 container create 36faf6068e96dce3ef81e31ad10abc37d977692e381197aa66e388fc3ab50aa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shirley, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 07:04:52 compute-0 systemd[1]: Started libpod-conmon-36faf6068e96dce3ef81e31ad10abc37d977692e381197aa66e388fc3ab50aa5.scope.
Dec 06 07:04:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0f7a6f9cf558a0268360308c7bb1ab39e840ab42dbbf34a03a603ef5edc6dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0f7a6f9cf558a0268360308c7bb1ab39e840ab42dbbf34a03a603ef5edc6dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0f7a6f9cf558a0268360308c7bb1ab39e840ab42dbbf34a03a603ef5edc6dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0f7a6f9cf558a0268360308c7bb1ab39e840ab42dbbf34a03a603ef5edc6dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:04:52 compute-0 podman[274224]: 2025-12-06 07:04:52.664417469 +0000 UTC m=+0.079071545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:04:52 compute-0 podman[274224]: 2025-12-06 07:04:52.764950508 +0000 UTC m=+0.179604574 container init 36faf6068e96dce3ef81e31ad10abc37d977692e381197aa66e388fc3ab50aa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:04:52 compute-0 podman[274224]: 2025-12-06 07:04:52.772229719 +0000 UTC m=+0.186883785 container start 36faf6068e96dce3ef81e31ad10abc37d977692e381197aa66e388fc3ab50aa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shirley, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:04:52 compute-0 podman[274224]: 2025-12-06 07:04:52.775678105 +0000 UTC m=+0.190332161 container attach 36faf6068e96dce3ef81e31ad10abc37d977692e381197aa66e388fc3ab50aa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shirley, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 07:04:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 272 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 6.4 MiB/s rd, 5.2 MiB/s wr, 247 op/s
Dec 06 07:04:53 compute-0 nova_compute[251992]: 2025-12-06 07:04:53.239 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:53.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:53.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:53 compute-0 eloquent_shirley[274240]: {
Dec 06 07:04:53 compute-0 eloquent_shirley[274240]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:04:53 compute-0 eloquent_shirley[274240]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:04:53 compute-0 eloquent_shirley[274240]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:04:53 compute-0 eloquent_shirley[274240]:         "osd_id": 0,
Dec 06 07:04:53 compute-0 eloquent_shirley[274240]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:04:53 compute-0 eloquent_shirley[274240]:         "type": "bluestore"
Dec 06 07:04:53 compute-0 eloquent_shirley[274240]:     }
Dec 06 07:04:53 compute-0 eloquent_shirley[274240]: }
Dec 06 07:04:53 compute-0 systemd[1]: libpod-36faf6068e96dce3ef81e31ad10abc37d977692e381197aa66e388fc3ab50aa5.scope: Deactivated successfully.
Dec 06 07:04:53 compute-0 podman[274224]: 2025-12-06 07:04:53.621584818 +0000 UTC m=+1.036238874 container died 36faf6068e96dce3ef81e31ad10abc37d977692e381197aa66e388fc3ab50aa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shirley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 07:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e0f7a6f9cf558a0268360308c7bb1ab39e840ab42dbbf34a03a603ef5edc6dd-merged.mount: Deactivated successfully.
Dec 06 07:04:53 compute-0 podman[274224]: 2025-12-06 07:04:53.673119898 +0000 UTC m=+1.087773954 container remove 36faf6068e96dce3ef81e31ad10abc37d977692e381197aa66e388fc3ab50aa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_shirley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:04:53 compute-0 sudo[274093]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:53 compute-0 systemd[1]: libpod-conmon-36faf6068e96dce3ef81e31ad10abc37d977692e381197aa66e388fc3ab50aa5.scope: Deactivated successfully.
Dec 06 07:04:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:04:53 compute-0 nova_compute[251992]: 2025-12-06 07:04:53.715 251996 DEBUG oslo_concurrency.lockutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "0c35b522-9a1e-4166-97b7-99cf43734fac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:53 compute-0 nova_compute[251992]: 2025-12-06 07:04:53.716 251996 DEBUG oslo_concurrency.lockutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "0c35b522-9a1e-4166-97b7-99cf43734fac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:04:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:04:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:04:53 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 67cc6903-3062-4376-b9af-d439b7f3d672 does not exist
Dec 06 07:04:53 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a290ff42-7ebb-4df3-a70f-2bd2a86f1085 does not exist
Dec 06 07:04:53 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 93457c17-feb2-44ca-9c97-e9bb2a6af970 does not exist
Dec 06 07:04:53 compute-0 nova_compute[251992]: 2025-12-06 07:04:53.748 251996 DEBUG nova.compute.manager [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:04:53 compute-0 sudo[274274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:53 compute-0 sudo[274274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:53 compute-0 sudo[274274]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:53 compute-0 nova_compute[251992]: 2025-12-06 07:04:53.823 251996 DEBUG oslo_concurrency.lockutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:53 compute-0 nova_compute[251992]: 2025-12-06 07:04:53.823 251996 DEBUG oslo_concurrency.lockutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:53 compute-0 nova_compute[251992]: 2025-12-06 07:04:53.830 251996 DEBUG nova.virt.hardware [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:04:53 compute-0 nova_compute[251992]: 2025-12-06 07:04:53.831 251996 INFO nova.compute.claims [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:04:53 compute-0 sudo[274299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:04:53 compute-0 sudo[274299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:53 compute-0 sudo[274299]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:53 compute-0 nova_compute[251992]: 2025-12-06 07:04:53.942 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:04:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2603884885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.386 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.391 251996 DEBUG nova.compute.provider_tree [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.424 251996 DEBUG nova.scheduler.client.report [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.446 251996 DEBUG oslo_concurrency.lockutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.446 251996 DEBUG nova.compute.manager [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.489 251996 DEBUG nova.compute.manager [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.489 251996 DEBUG nova.network.neutron [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.507 251996 INFO nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.546 251996 DEBUG nova.compute.manager [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.678 251996 DEBUG nova.compute.manager [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.680 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.680 251996 INFO nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Creating image(s)
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.711 251996 DEBUG nova.storage.rbd_utils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 0c35b522-9a1e-4166-97b7-99cf43734fac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.750 251996 DEBUG nova.storage.rbd_utils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 0c35b522-9a1e-4166-97b7-99cf43734fac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.775 251996 DEBUG nova.storage.rbd_utils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 0c35b522-9a1e-4166-97b7-99cf43734fac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.779 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.812 251996 DEBUG nova.network.neutron [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.813 251996 DEBUG nova.compute.manager [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.836 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.837 251996 DEBUG oslo_concurrency.lockutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.837 251996 DEBUG oslo_concurrency.lockutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.838 251996 DEBUG oslo_concurrency.lockutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.859 251996 DEBUG nova.storage.rbd_utils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 0c35b522-9a1e-4166-97b7-99cf43734fac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:54 compute-0 nova_compute[251992]: 2025-12-06 07:04:54.863 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0c35b522-9a1e-4166-97b7-99cf43734fac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:55 compute-0 ceph-mon[74339]: pgmap v1322: 305 pgs: 305 active+clean; 272 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 6.4 MiB/s rd, 5.2 MiB/s wr, 247 op/s
Dec 06 07:04:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:04:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:04:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2603884885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 280 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 3.0 MiB/s wr, 326 op/s
Dec 06 07:04:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:04:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:55.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.337 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 0c35b522-9a1e-4166-97b7-99cf43734fac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.417 251996 DEBUG nova.storage.rbd_utils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] resizing rbd image 0c35b522-9a1e-4166-97b7-99cf43734fac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:04:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:04:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:55.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.521 251996 DEBUG nova.objects.instance [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lazy-loading 'migration_context' on Instance uuid 0c35b522-9a1e-4166-97b7-99cf43734fac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.534 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.534 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Ensure instance console log exists: /var/lib/nova/instances/0c35b522-9a1e-4166-97b7-99cf43734fac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.535 251996 DEBUG oslo_concurrency.lockutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.535 251996 DEBUG oslo_concurrency.lockutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.535 251996 DEBUG oslo_concurrency.lockutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.537 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.540 251996 WARNING nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.544 251996 DEBUG nova.virt.libvirt.host [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.545 251996 DEBUG nova.virt.libvirt.host [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.548 251996 DEBUG nova.virt.libvirt.host [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.548 251996 DEBUG nova.virt.libvirt.host [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.549 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.549 251996 DEBUG nova.virt.hardware [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.550 251996 DEBUG nova.virt.hardware [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.550 251996 DEBUG nova.virt.hardware [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.550 251996 DEBUG nova.virt.hardware [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.551 251996 DEBUG nova.virt.hardware [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.551 251996 DEBUG nova.virt.hardware [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.551 251996 DEBUG nova.virt.hardware [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.552 251996 DEBUG nova.virt.hardware [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.552 251996 DEBUG nova.virt.hardware [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.552 251996 DEBUG nova.virt.hardware [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.552 251996 DEBUG nova.virt.hardware [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.555 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:04:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/152312170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:55 compute-0 nova_compute[251992]: 2025-12-06 07:04:55.978 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.009 251996 DEBUG nova.storage.rbd_utils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 0c35b522-9a1e-4166-97b7-99cf43734fac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.014 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:56 compute-0 ceph-mon[74339]: pgmap v1323: 305 pgs: 305 active+clean; 280 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 3.0 MiB/s wr, 326 op/s
Dec 06 07:04:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/152312170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:04:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2591720672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.465 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.467 251996 DEBUG nova.objects.instance [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0c35b522-9a1e-4166-97b7-99cf43734fac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:04:56 compute-0 sudo[274573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:56 compute-0 sudo[274573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.484 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:04:56 compute-0 nova_compute[251992]:   <uuid>0c35b522-9a1e-4166-97b7-99cf43734fac</uuid>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   <name>instance-0000001e</name>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:04:56 compute-0 sudo[274573]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-824579159</nova:name>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:04:55</nova:creationTime>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <nova:user uuid="1a0ca5a46a9442b1845863069ff295f4">tempest-ServersAdminNegativeTestJSON-1345610286-project-member</nova:user>
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <nova:project uuid="a055b1b8e2e54e4a81cfca74765ddcb1">tempest-ServersAdminNegativeTestJSON-1345610286</nova:project>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <system>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <entry name="serial">0c35b522-9a1e-4166-97b7-99cf43734fac</entry>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <entry name="uuid">0c35b522-9a1e-4166-97b7-99cf43734fac</entry>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     </system>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   <os>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   </os>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   <features>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   </features>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/0c35b522-9a1e-4166-97b7-99cf43734fac_disk">
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       </source>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/0c35b522-9a1e-4166-97b7-99cf43734fac_disk.config">
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       </source>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:04:56 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/0c35b522-9a1e-4166-97b7-99cf43734fac/console.log" append="off"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <video>
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     </video>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:04:56 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:04:56 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:04:56 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:04:56 compute-0 nova_compute[251992]: </domain>
Dec 06 07:04:56 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:04:56 compute-0 sudo[274601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:04:56 compute-0 sudo[274601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:04:56 compute-0 sudo[274601]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.561 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.562 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.563 251996 INFO nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Using config drive
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.588 251996 DEBUG nova.storage.rbd_utils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 0c35b522-9a1e-4166-97b7-99cf43734fac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.723 251996 INFO nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Creating config drive at /var/lib/nova/instances/0c35b522-9a1e-4166-97b7-99cf43734fac/disk.config
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.729 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0c35b522-9a1e-4166-97b7-99cf43734fac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqvv_nbos execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.860 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0c35b522-9a1e-4166-97b7-99cf43734fac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqvv_nbos" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.889 251996 DEBUG nova.storage.rbd_utils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] rbd image 0c35b522-9a1e-4166-97b7-99cf43734fac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:04:56 compute-0 nova_compute[251992]: 2025-12-06 07:04:56.893 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0c35b522-9a1e-4166-97b7-99cf43734fac/disk.config 0c35b522-9a1e-4166-97b7-99cf43734fac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:04:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:04:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4064367949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.032 251996 DEBUG oslo_concurrency.processutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0c35b522-9a1e-4166-97b7-99cf43734fac/disk.config 0c35b522-9a1e-4166-97b7-99cf43734fac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.033 251996 INFO nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Deleting local config drive /var/lib/nova/instances/0c35b522-9a1e-4166-97b7-99cf43734fac/disk.config because it was imported into RBD.
Dec 06 07:04:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2591720672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/32107808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:04:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4064367949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:04:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 305 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 3.1 MiB/s wr, 306 op/s
Dec 06 07:04:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:04:57 compute-0 systemd-machined[212986]: New machine qemu-14-instance-0000001e.
Dec 06 07:04:57 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000001e.
Dec 06 07:04:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:57.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:04:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:04:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:57.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.659 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.851 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004697.8508108, 0c35b522-9a1e-4166-97b7-99cf43734fac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.852 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] VM Resumed (Lifecycle Event)
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.879 251996 DEBUG nova.compute.manager [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.880 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.885 251996 INFO nova.virt.libvirt.driver [-] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Instance spawned successfully.
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.885 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.932 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.936 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.945 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.946 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.947 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.947 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.948 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.949 251996 DEBUG nova.virt.libvirt.driver [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.991 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.992 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004697.8787339, 0c35b522-9a1e-4166-97b7-99cf43734fac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:57 compute-0 nova_compute[251992]: 2025-12-06 07:04:57.992 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] VM Started (Lifecycle Event)
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.023 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.030 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.035 251996 INFO nova.compute.manager [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Took 3.36 seconds to spawn the instance on the hypervisor.
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.036 251996 DEBUG nova.compute.manager [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.086 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:04:58 compute-0 ceph-mon[74339]: pgmap v1324: 305 pgs: 305 active+clean; 305 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 3.1 MiB/s wr, 306 op/s
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.159 251996 INFO nova.compute.manager [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Took 4.36 seconds to build instance.
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.179 251996 DEBUG oslo_concurrency.lockutils [None req-c3ad2095-0bd1-483d-9f08-de6c9f83785e 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "0c35b522-9a1e-4166-97b7-99cf43734fac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.248 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.748 251996 DEBUG nova.objects.instance [None req-7fb38f4f-e278-4a35-9dc0-4ce5a89a0948 3023ba2dbe884b7bae3939f663e07aea 236cf6b5e92b4f19a8664920ae6791af - - default default] Lazy-loading 'pci_devices' on Instance uuid 0c35b522-9a1e-4166-97b7-99cf43734fac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.780 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004698.7786043, 0c35b522-9a1e-4166-97b7-99cf43734fac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.781 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] VM Paused (Lifecycle Event)
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.813 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.817 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:04:58 compute-0 nova_compute[251992]: 2025-12-06 07:04:58.841 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] During sync_power_state the instance has a pending task (suspending). Skip.
Dec 06 07:04:58 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Dec 06 07:04:58 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001e.scope: Consumed 1.697s CPU time.
Dec 06 07:04:58 compute-0 systemd-machined[212986]: Machine qemu-14-instance-0000001e terminated.
Dec 06 07:04:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 305 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 3.1 MiB/s wr, 306 op/s
Dec 06 07:04:59 compute-0 podman[274748]: 2025-12-06 07:04:59.051774548 +0000 UTC m=+0.058863764 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 06 07:04:59 compute-0 podman[274747]: 2025-12-06 07:04:59.074888539 +0000 UTC m=+0.083346634 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:04:59 compute-0 nova_compute[251992]: 2025-12-06 07:04:59.078 251996 DEBUG nova.compute.manager [None req-7fb38f4f-e278-4a35-9dc0-4ce5a89a0948 3023ba2dbe884b7bae3939f663e07aea 236cf6b5e92b4f19a8664920ae6791af - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:04:59 compute-0 nova_compute[251992]: 2025-12-06 07:04:59.261 251996 DEBUG nova.virt.libvirt.driver [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:04:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:04:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:04:59.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:04:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:04:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:04:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:04:59.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:00 compute-0 ceph-mon[74339]: pgmap v1325: 305 pgs: 305 active+clean; 305 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 3.1 MiB/s wr, 306 op/s
Dec 06 07:05:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 369 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.8 MiB/s wr, 346 op/s
Dec 06 07:05:01 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 06 07:05:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:01.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:05:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:01.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:05:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:02 compute-0 ceph-mon[74339]: pgmap v1326: 305 pgs: 305 active+clean; 369 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.8 MiB/s wr, 346 op/s
Dec 06 07:05:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2461479803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:02 compute-0 nova_compute[251992]: 2025-12-06 07:05:02.661 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 392 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.5 MiB/s wr, 295 op/s
Dec 06 07:05:03 compute-0 nova_compute[251992]: 2025-12-06 07:05:03.250 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:03.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3823637738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:03.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:05:03.814 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:05:03.815 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:05:03.815 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:04 compute-0 nova_compute[251992]: 2025-12-06 07:05:04.236 251996 DEBUG oslo_concurrency.lockutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "0c35b522-9a1e-4166-97b7-99cf43734fac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:04 compute-0 nova_compute[251992]: 2025-12-06 07:05:04.236 251996 DEBUG oslo_concurrency.lockutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "0c35b522-9a1e-4166-97b7-99cf43734fac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:04 compute-0 nova_compute[251992]: 2025-12-06 07:05:04.236 251996 DEBUG oslo_concurrency.lockutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "0c35b522-9a1e-4166-97b7-99cf43734fac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:04 compute-0 nova_compute[251992]: 2025-12-06 07:05:04.236 251996 DEBUG oslo_concurrency.lockutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "0c35b522-9a1e-4166-97b7-99cf43734fac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:04 compute-0 nova_compute[251992]: 2025-12-06 07:05:04.237 251996 DEBUG oslo_concurrency.lockutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "0c35b522-9a1e-4166-97b7-99cf43734fac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:04 compute-0 nova_compute[251992]: 2025-12-06 07:05:04.238 251996 INFO nova.compute.manager [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Terminating instance
Dec 06 07:05:04 compute-0 nova_compute[251992]: 2025-12-06 07:05:04.238 251996 DEBUG oslo_concurrency.lockutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "refresh_cache-0c35b522-9a1e-4166-97b7-99cf43734fac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:05:04 compute-0 nova_compute[251992]: 2025-12-06 07:05:04.238 251996 DEBUG oslo_concurrency.lockutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquired lock "refresh_cache-0c35b522-9a1e-4166-97b7-99cf43734fac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:05:04 compute-0 nova_compute[251992]: 2025-12-06 07:05:04.239 251996 DEBUG nova.network.neutron [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:05:04 compute-0 nova_compute[251992]: 2025-12-06 07:05:04.442 251996 DEBUG nova.network.neutron [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:05:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 429 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 8.1 MiB/s wr, 286 op/s
Dec 06 07:05:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:05.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:05 compute-0 ceph-mon[74339]: pgmap v1327: 305 pgs: 305 active+clean; 392 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.5 MiB/s wr, 295 op/s
Dec 06 07:05:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:05.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:05 compute-0 nova_compute[251992]: 2025-12-06 07:05:05.558 251996 DEBUG nova.network.neutron [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:05:05 compute-0 nova_compute[251992]: 2025-12-06 07:05:05.575 251996 DEBUG oslo_concurrency.lockutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Releasing lock "refresh_cache-0c35b522-9a1e-4166-97b7-99cf43734fac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:05:05 compute-0 nova_compute[251992]: 2025-12-06 07:05:05.575 251996 DEBUG nova.compute.manager [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:05:05 compute-0 nova_compute[251992]: 2025-12-06 07:05:05.580 251996 INFO nova.virt.libvirt.driver [-] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Instance destroyed successfully.
Dec 06 07:05:05 compute-0 nova_compute[251992]: 2025-12-06 07:05:05.581 251996 DEBUG nova.objects.instance [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lazy-loading 'resources' on Instance uuid 0c35b522-9a1e-4166-97b7-99cf43734fac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:06 compute-0 nova_compute[251992]: 2025-12-06 07:05:06.080 251996 INFO nova.virt.libvirt.driver [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Deleting instance files /var/lib/nova/instances/0c35b522-9a1e-4166-97b7-99cf43734fac_del
Dec 06 07:05:06 compute-0 nova_compute[251992]: 2025-12-06 07:05:06.081 251996 INFO nova.virt.libvirt.driver [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Deletion of /var/lib/nova/instances/0c35b522-9a1e-4166-97b7-99cf43734fac_del complete
Dec 06 07:05:06 compute-0 nova_compute[251992]: 2025-12-06 07:05:06.126 251996 INFO nova.compute.manager [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Took 0.55 seconds to destroy the instance on the hypervisor.
Dec 06 07:05:06 compute-0 nova_compute[251992]: 2025-12-06 07:05:06.127 251996 DEBUG oslo.service.loopingcall [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:05:06 compute-0 nova_compute[251992]: 2025-12-06 07:05:06.127 251996 DEBUG nova.compute.manager [-] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:05:06 compute-0 nova_compute[251992]: 2025-12-06 07:05:06.127 251996 DEBUG nova.network.neutron [-] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:05:06 compute-0 ceph-mon[74339]: pgmap v1328: 305 pgs: 305 active+clean; 429 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 8.1 MiB/s wr, 286 op/s
Dec 06 07:05:06 compute-0 nova_compute[251992]: 2025-12-06 07:05:06.584 251996 DEBUG nova.network.neutron [-] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:05:06 compute-0 nova_compute[251992]: 2025-12-06 07:05:06.599 251996 DEBUG nova.network.neutron [-] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:05:06 compute-0 nova_compute[251992]: 2025-12-06 07:05:06.615 251996 INFO nova.compute.manager [-] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Took 0.49 seconds to deallocate network for instance.
Dec 06 07:05:06 compute-0 nova_compute[251992]: 2025-12-06 07:05:06.660 251996 DEBUG oslo_concurrency.lockutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:06 compute-0 nova_compute[251992]: 2025-12-06 07:05:06.661 251996 DEBUG oslo_concurrency.lockutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:06 compute-0 nova_compute[251992]: 2025-12-06 07:05:06.723 251996 DEBUG oslo_concurrency.processutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 435 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 9.8 MiB/s wr, 317 op/s
Dec 06 07:05:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:05:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/338766413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:07 compute-0 nova_compute[251992]: 2025-12-06 07:05:07.228 251996 DEBUG oslo_concurrency.processutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:07 compute-0 nova_compute[251992]: 2025-12-06 07:05:07.234 251996 DEBUG nova.compute.provider_tree [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:05:07 compute-0 nova_compute[251992]: 2025-12-06 07:05:07.267 251996 DEBUG nova.scheduler.client.report [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:05:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:07.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:07 compute-0 nova_compute[251992]: 2025-12-06 07:05:07.291 251996 DEBUG oslo_concurrency.lockutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:07 compute-0 nova_compute[251992]: 2025-12-06 07:05:07.321 251996 INFO nova.scheduler.client.report [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Deleted allocations for instance 0c35b522-9a1e-4166-97b7-99cf43734fac
Dec 06 07:05:07 compute-0 nova_compute[251992]: 2025-12-06 07:05:07.389 251996 DEBUG oslo_concurrency.lockutils [None req-22f9391a-73ba-475c-965f-ad3d5a9a766d 1a0ca5a46a9442b1845863069ff295f4 a055b1b8e2e54e4a81cfca74765ddcb1 - - default default] Lock "0c35b522-9a1e-4166-97b7-99cf43734fac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/338766413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:07.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:07 compute-0 nova_compute[251992]: 2025-12-06 07:05:07.662 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:08 compute-0 nova_compute[251992]: 2025-12-06 07:05:08.252 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:08 compute-0 ceph-mon[74339]: pgmap v1329: 305 pgs: 305 active+clean; 435 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 9.8 MiB/s wr, 317 op/s
Dec 06 07:05:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 435 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 9.0 MiB/s wr, 302 op/s
Dec 06 07:05:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:09.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:09.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:10 compute-0 nova_compute[251992]: 2025-12-06 07:05:10.301 251996 DEBUG nova.virt.libvirt.driver [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:05:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2765720048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1546777161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:05:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1546777161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:05:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 422 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 9.1 MiB/s wr, 384 op/s
Dec 06 07:05:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:05:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:11.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:05:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:11.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:11 compute-0 ceph-mon[74339]: pgmap v1330: 305 pgs: 305 active+clean; 435 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 9.0 MiB/s wr, 302 op/s
Dec 06 07:05:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4125302750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:12 compute-0 nova_compute[251992]: 2025-12-06 07:05:12.665 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:05:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:05:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:05:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:05:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:05:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:05:13 compute-0 ceph-mon[74339]: pgmap v1331: 305 pgs: 305 active+clean; 422 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 9.1 MiB/s wr, 384 op/s
Dec 06 07:05:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 425 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.0 MiB/s wr, 299 op/s
Dec 06 07:05:13 compute-0 nova_compute[251992]: 2025-12-06 07:05:13.255 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:13.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:13.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:14 compute-0 nova_compute[251992]: 2025-12-06 07:05:14.079 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004699.07828, 0c35b522-9a1e-4166-97b7-99cf43734fac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:05:14 compute-0 nova_compute[251992]: 2025-12-06 07:05:14.080 251996 INFO nova.compute.manager [-] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] VM Stopped (Lifecycle Event)
Dec 06 07:05:14 compute-0 nova_compute[251992]: 2025-12-06 07:05:14.103 251996 DEBUG nova.compute.manager [None req-9c3cfd7e-935f-4437-89dc-096934124e66 - - - - - -] [instance: 0c35b522-9a1e-4166-97b7-99cf43734fac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:14 compute-0 ceph-mon[74339]: pgmap v1332: 305 pgs: 305 active+clean; 425 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.0 MiB/s wr, 299 op/s
Dec 06 07:05:14 compute-0 nova_compute[251992]: 2025-12-06 07:05:14.318 251996 INFO nova.virt.libvirt.driver [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance shutdown successfully after 25 seconds.
Dec 06 07:05:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 386 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.2 MiB/s wr, 266 op/s
Dec 06 07:05:15 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Dec 06 07:05:15 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001c.scope: Consumed 14.128s CPU time.
Dec 06 07:05:15 compute-0 systemd-machined[212986]: Machine qemu-13-instance-0000001c terminated.
Dec 06 07:05:15 compute-0 nova_compute[251992]: 2025-12-06 07:05:15.138 251996 INFO nova.virt.libvirt.driver [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance destroyed successfully.
Dec 06 07:05:15 compute-0 nova_compute[251992]: 2025-12-06 07:05:15.138 251996 DEBUG nova.objects.instance [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lazy-loading 'numa_topology' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:05:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/759470460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:05:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:15.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:05:15 compute-0 nova_compute[251992]: 2025-12-06 07:05:15.405 251996 INFO nova.virt.libvirt.driver [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Beginning cold snapshot process
Dec 06 07:05:15 compute-0 nova_compute[251992]: 2025-12-06 07:05:15.536 251996 DEBUG nova.virt.libvirt.imagebackend [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:05:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:15.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:15 compute-0 nova_compute[251992]: 2025-12-06 07:05:15.751 251996 DEBUG nova.storage.rbd_utils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] creating snapshot(7ec3873281534b87ac042f5785a1ab89) on rbd image(dd7ff314-b789-4550-ab9a-44dc02948350_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:05:16 compute-0 ceph-mon[74339]: pgmap v1333: 305 pgs: 305 active+clean; 386 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.2 MiB/s wr, 266 op/s
Dec 06 07:05:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/759470460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2455542892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:16 compute-0 sudo[274892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:16 compute-0 sudo[274892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:16 compute-0 sudo[274892]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:16 compute-0 sudo[274917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:16 compute-0 sudo[274917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:16 compute-0 sudo[274917]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 346 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 224 op/s
Dec 06 07:05:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Dec 06 07:05:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:17.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Dec 06 07:05:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Dec 06 07:05:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:17.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:17 compute-0 nova_compute[251992]: 2025-12-06 07:05:17.667 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:17 compute-0 nova_compute[251992]: 2025-12-06 07:05:17.710 251996 DEBUG nova.storage.rbd_utils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] cloning vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk@7ec3873281534b87ac042f5785a1ab89 to images/6e62d22c-45f1-4591-b9e8-303a0bc90e48 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:05:17 compute-0 nova_compute[251992]: 2025-12-06 07:05:17.878 251996 DEBUG nova.storage.rbd_utils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] flattening images/6e62d22c-45f1-4591-b9e8-303a0bc90e48 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:05:18 compute-0 nova_compute[251992]: 2025-12-06 07:05:18.257 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:05:18
Dec 06 07:05:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:05:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:05:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'volumes']
Dec 06 07:05:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:05:18 compute-0 nova_compute[251992]: 2025-12-06 07:05:18.501 251996 DEBUG nova.storage.rbd_utils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] removing snapshot(7ec3873281534b87ac042f5785a1ab89) on rbd image(dd7ff314-b789-4550-ab9a-44dc02948350_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:05:18 compute-0 ceph-mon[74339]: pgmap v1334: 305 pgs: 305 active+clean; 346 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 224 op/s
Dec 06 07:05:18 compute-0 ceph-mon[74339]: osdmap e170: 3 total, 3 up, 3 in
Dec 06 07:05:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 346 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 208 KiB/s wr, 147 op/s
Dec 06 07:05:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:19.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:05:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:19.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:05:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Dec 06 07:05:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Dec 06 07:05:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Dec 06 07:05:19 compute-0 nova_compute[251992]: 2025-12-06 07:05:19.773 251996 DEBUG nova.storage.rbd_utils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] creating snapshot(snap) on rbd image(6e62d22c-45f1-4591-b9e8-303a0bc90e48) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:05:20 compute-0 ceph-mon[74339]: pgmap v1336: 305 pgs: 305 active+clean; 346 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 208 KiB/s wr, 147 op/s
Dec 06 07:05:20 compute-0 ceph-mon[74339]: osdmap e171: 3 total, 3 up, 3 in
Dec 06 07:05:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Dec 06 07:05:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Dec 06 07:05:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Dec 06 07:05:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 417 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 8.3 MiB/s wr, 266 op/s
Dec 06 07:05:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:21.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:21.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:22 compute-0 ceph-mon[74339]: osdmap e172: 3 total, 3 up, 3 in
Dec 06 07:05:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:05:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6090 writes, 27K keys, 6083 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6090 writes, 6083 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1596 writes, 7231 keys, 1594 commit groups, 1.0 writes per commit group, ingest: 10.85 MB, 0.02 MB/s
                                           Interval WAL: 1596 writes, 1594 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     82.7      0.40              0.11        14    0.029       0      0       0.0       0.0
                                             L6      1/0    9.85 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.6    133.0    111.0      1.06              0.40        13    0.082     66K   6943       0.0       0.0
                                            Sum      1/0    9.85 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.6     96.6    103.3      1.47              0.51        27    0.054     66K   6943       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.2    125.4    128.3      0.44              0.15        10    0.044     28K   2582       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    133.0    111.0      1.06              0.40        13    0.082     66K   6943       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     83.4      0.40              0.11        13    0.031       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.032, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.15 GB write, 0.06 MB/s write, 0.14 GB read, 0.06 MB/s read, 1.5 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 12.93 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.00013 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(731,12.40 MB,4.07781%) FilterBlock(28,190.98 KB,0.0613514%) IndexBlock(28,355.72 KB,0.11427%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 07:05:22 compute-0 nova_compute[251992]: 2025-12-06 07:05:22.669 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 452 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 12 MiB/s wr, 264 op/s
Dec 06 07:05:23 compute-0 nova_compute[251992]: 2025-12-06 07:05:23.259 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:05:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:23.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:05:23 compute-0 podman[275035]: 2025-12-06 07:05:23.432924762 +0000 UTC m=+0.089539564 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:05:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:23.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec 06 07:05:24 compute-0 ceph-mon[74339]: pgmap v1339: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 417 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 8.3 MiB/s wr, 266 op/s
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.460817) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724460853, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2300, "num_deletes": 509, "total_data_size": 3455323, "memory_usage": 3516552, "flush_reason": "Manual Compaction"}
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724474813, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2368532, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25179, "largest_seqno": 27478, "table_properties": {"data_size": 2360433, "index_size": 4081, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 23222, "raw_average_key_size": 19, "raw_value_size": 2340778, "raw_average_value_size": 2014, "num_data_blocks": 180, "num_entries": 1162, "num_filter_entries": 1162, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004537, "oldest_key_time": 1765004537, "file_creation_time": 1765004724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 14033 microseconds, and 5820 cpu microseconds.
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.474849) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2368532 bytes OK
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.474864) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.476040) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.476051) EVENT_LOG_v1 {"time_micros": 1765004724476048, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.476065) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3444738, prev total WAL file size 3444738, number of live WAL files 2.
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.476803) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353036' seq:72057594037927935, type:22 .. '6C6F676D00373538' seq:0, type:0; will stop at (end)
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2313KB)], [56(10086KB)]
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724476842, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12697210, "oldest_snapshot_seqno": -1}
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5655 keys, 9729665 bytes, temperature: kUnknown
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724564301, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 9729665, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9691728, "index_size": 22723, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14149, "raw_key_size": 144618, "raw_average_key_size": 25, "raw_value_size": 9589683, "raw_average_value_size": 1695, "num_data_blocks": 921, "num_entries": 5655, "num_filter_entries": 5655, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765004724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.564673) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 9729665 bytes
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.565959) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.8 rd, 111.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.9 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(9.5) write-amplify(4.1) OK, records in: 6626, records dropped: 971 output_compression: NoCompression
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.565974) EVENT_LOG_v1 {"time_micros": 1765004724565966, "job": 30, "event": "compaction_finished", "compaction_time_micros": 87692, "compaction_time_cpu_micros": 24863, "output_level": 6, "num_output_files": 1, "total_output_size": 9729665, "num_input_records": 6626, "num_output_records": 5655, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724566764, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004724568712, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.476744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.568876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.568882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.568883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.568885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:24 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:24.568886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 458 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 9.6 MiB/s wr, 223 op/s
Dec 06 07:05:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:25.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:25 compute-0 ceph-mon[74339]: pgmap v1340: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 452 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 12 MiB/s wr, 264 op/s
Dec 06 07:05:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:25.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006508053868881444 of space, bias 1.0, pg target 1.9524161606644332 quantized to 32 (current 32)
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021617782198027173 of space, bias 1.0, pg target 0.6463716877210125 quantized to 32 (current 32)
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005017128466406105 of space, bias 1.0, pg target 1.5001214114554253 quantized to 32 (current 32)
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:05:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Dec 06 07:05:25 compute-0 nova_compute[251992]: 2025-12-06 07:05:25.968 251996 INFO nova.virt.libvirt.driver [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Snapshot image upload complete
Dec 06 07:05:25 compute-0 nova_compute[251992]: 2025-12-06 07:05:25.969 251996 DEBUG nova.compute.manager [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:26 compute-0 nova_compute[251992]: 2025-12-06 07:05:26.022 251996 INFO nova.compute.manager [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Shelve offloading
Dec 06 07:05:26 compute-0 nova_compute[251992]: 2025-12-06 07:05:26.028 251996 INFO nova.virt.libvirt.driver [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance destroyed successfully.
Dec 06 07:05:26 compute-0 nova_compute[251992]: 2025-12-06 07:05:26.028 251996 DEBUG nova.compute.manager [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:26 compute-0 nova_compute[251992]: 2025-12-06 07:05:26.030 251996 DEBUG oslo_concurrency.lockutils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:05:26 compute-0 nova_compute[251992]: 2025-12-06 07:05:26.030 251996 DEBUG oslo_concurrency.lockutils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquired lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:05:26 compute-0 nova_compute[251992]: 2025-12-06 07:05:26.031 251996 DEBUG nova.network.neutron [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:05:26 compute-0 nova_compute[251992]: 2025-12-06 07:05:26.586 251996 DEBUG nova.network.neutron [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:05:26 compute-0 nova_compute[251992]: 2025-12-06 07:05:26.821 251996 DEBUG nova.network.neutron [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:05:26 compute-0 nova_compute[251992]: 2025-12-06 07:05:26.836 251996 DEBUG oslo_concurrency.lockutils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Releasing lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:05:26 compute-0 nova_compute[251992]: 2025-12-06 07:05:26.842 251996 INFO nova.virt.libvirt.driver [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance destroyed successfully.
Dec 06 07:05:26 compute-0 nova_compute[251992]: 2025-12-06 07:05:26.843 251996 DEBUG nova.objects.instance [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lazy-loading 'resources' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:26 compute-0 ceph-mon[74339]: pgmap v1341: 305 pgs: 305 active+clean; 458 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 9.6 MiB/s wr, 223 op/s
Dec 06 07:05:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1417600619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 458 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 9.0 MiB/s wr, 226 op/s
Dec 06 07:05:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Dec 06 07:05:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Dec 06 07:05:27 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Dec 06 07:05:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:27.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:27 compute-0 nova_compute[251992]: 2025-12-06 07:05:27.428 251996 INFO nova.virt.libvirt.driver [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Deleting instance files /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350_del
Dec 06 07:05:27 compute-0 nova_compute[251992]: 2025-12-06 07:05:27.430 251996 INFO nova.virt.libvirt.driver [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Deletion of /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350_del complete
Dec 06 07:05:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:05:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:27.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:05:27 compute-0 nova_compute[251992]: 2025-12-06 07:05:27.578 251996 INFO nova.scheduler.client.report [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Deleted allocations for instance dd7ff314-b789-4550-ab9a-44dc02948350
Dec 06 07:05:27 compute-0 nova_compute[251992]: 2025-12-06 07:05:27.627 251996 DEBUG oslo_concurrency.lockutils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:27 compute-0 nova_compute[251992]: 2025-12-06 07:05:27.627 251996 DEBUG oslo_concurrency.lockutils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:27 compute-0 nova_compute[251992]: 2025-12-06 07:05:27.653 251996 DEBUG oslo_concurrency.processutils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:27 compute-0 nova_compute[251992]: 2025-12-06 07:05:27.677 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:05:28 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401685630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:28 compute-0 nova_compute[251992]: 2025-12-06 07:05:28.076 251996 DEBUG oslo_concurrency.processutils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:28 compute-0 nova_compute[251992]: 2025-12-06 07:05:28.085 251996 DEBUG nova.compute.provider_tree [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:05:28 compute-0 nova_compute[251992]: 2025-12-06 07:05:28.109 251996 DEBUG nova.scheduler.client.report [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:05:28 compute-0 nova_compute[251992]: 2025-12-06 07:05:28.142 251996 DEBUG oslo_concurrency.lockutils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:28 compute-0 nova_compute[251992]: 2025-12-06 07:05:28.195 251996 DEBUG oslo_concurrency.lockutils [None req-02a41c27-5760-4dba-8e59-b7be837e28c3 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 39.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:28 compute-0 ovn_controller[147168]: 2025-12-06T07:05:28Z|00079|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Dec 06 07:05:28 compute-0 ceph-mon[74339]: pgmap v1342: 305 pgs: 305 active+clean; 458 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 9.0 MiB/s wr, 226 op/s
Dec 06 07:05:28 compute-0 ceph-mon[74339]: osdmap e173: 3 total, 3 up, 3 in
Dec 06 07:05:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2483436212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3401685630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:28 compute-0 nova_compute[251992]: 2025-12-06 07:05:28.301 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 458 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.5 MiB/s wr, 183 op/s
Dec 06 07:05:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4211555932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:29.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:29 compute-0 podman[275107]: 2025-12-06 07:05:29.387154549 +0000 UTC m=+0.047276899 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 07:05:29 compute-0 podman[275108]: 2025-12-06 07:05:29.392731221 +0000 UTC m=+0.049724127 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:05:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:29.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.138 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004715.1362345, dd7ff314-b789-4550-ab9a-44dc02948350 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.138 251996 INFO nova.compute.manager [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] VM Stopped (Lifecycle Event)
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.165 251996 DEBUG nova.compute.manager [None req-365d641c-cc6f-426f-917d-3653d6896996 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:30 compute-0 ceph-mon[74339]: pgmap v1344: 305 pgs: 305 active+clean; 458 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.5 MiB/s wr, 183 op/s
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.646 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Acquiring lock "dd7ff314-b789-4550-ab9a-44dc02948350" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.646 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.647 251996 INFO nova.compute.manager [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Unshelving
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.745 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.745 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.751 251996 DEBUG nova.objects.instance [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'pci_requests' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.766 251996 DEBUG nova.objects.instance [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'numa_topology' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.784 251996 DEBUG nova.virt.hardware [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.785 251996 INFO nova.compute.claims [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:05:30 compute-0 nova_compute[251992]: 2025-12-06 07:05:30.907 251996 DEBUG oslo_concurrency.processutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 375 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 806 KiB/s rd, 2.3 MiB/s wr, 86 op/s
Dec 06 07:05:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:05:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:31.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:05:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:05:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/837476368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:31 compute-0 nova_compute[251992]: 2025-12-06 07:05:31.342 251996 DEBUG oslo_concurrency.processutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:31 compute-0 nova_compute[251992]: 2025-12-06 07:05:31.347 251996 DEBUG nova.compute.provider_tree [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:05:31 compute-0 nova_compute[251992]: 2025-12-06 07:05:31.416 251996 DEBUG nova.scheduler.client.report [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:05:31 compute-0 nova_compute[251992]: 2025-12-06 07:05:31.452 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3560226663' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:05:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3560226663' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:05:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:31.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:31 compute-0 nova_compute[251992]: 2025-12-06 07:05:31.637 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Acquiring lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:05:31 compute-0 nova_compute[251992]: 2025-12-06 07:05:31.638 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Acquired lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:05:31 compute-0 nova_compute[251992]: 2025-12-06 07:05:31.638 251996 DEBUG nova.network.neutron [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:05:31 compute-0 nova_compute[251992]: 2025-12-06 07:05:31.804 251996 DEBUG nova.network.neutron [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.090 251996 DEBUG nova.network.neutron [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:05:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.175 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Releasing lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.177 251996 DEBUG nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.177 251996 INFO nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Creating image(s)
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.208 251996 DEBUG nova.storage.rbd_utils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.212 251996 DEBUG nova.objects.instance [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'trusted_certs' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.256 251996 DEBUG nova.storage.rbd_utils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.283 251996 DEBUG nova.storage.rbd_utils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.286 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Acquiring lock "0913dce2fe799e815c2ba12ddd49111cfad816a8" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.287 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "0913dce2fe799e815c2ba12ddd49111cfad816a8" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:32 compute-0 ceph-mon[74339]: pgmap v1345: 305 pgs: 305 active+clean; 375 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 806 KiB/s rd, 2.3 MiB/s wr, 86 op/s
Dec 06 07:05:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/837476368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.573 251996 DEBUG nova.virt.libvirt.imagebackend [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6e62d22c-45f1-4591-b9e8-303a0bc90e48/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6e62d22c-45f1-4591-b9e8-303a0bc90e48/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.635 251996 DEBUG nova.virt.libvirt.imagebackend [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6e62d22c-45f1-4591-b9e8-303a0bc90e48/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.636 251996 DEBUG nova.storage.rbd_utils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] cloning images/6e62d22c-45f1-4591-b9e8-303a0bc90e48@snap to None/dd7ff314-b789-4550-ab9a-44dc02948350_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.672 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:32 compute-0 nova_compute[251992]: 2025-12-06 07:05:32.925 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "0913dce2fe799e815c2ba12ddd49111cfad816a8" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.055 251996 DEBUG nova.objects.instance [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'migration_context' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 331 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 30 KiB/s wr, 90 op/s
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.124 251996 DEBUG nova.storage.rbd_utils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] flattening vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:05:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:33.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.303 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3394481438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.503 251996 DEBUG nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Image rbd:vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.504 251996 DEBUG nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.505 251996 DEBUG nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Ensure instance console log exists: /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.505 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.505 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.505 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.507 251996 DEBUG nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:04:49Z,direct_url=<?>,disk_format='raw',id=6e62d22c-45f1-4591-b9e8-303a0bc90e48,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-316061423-shelved',owner='e2384cf38a13417c9220db3aafff6b24',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:05:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.509 251996 WARNING nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.514 251996 DEBUG nova.virt.libvirt.host [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.514 251996 DEBUG nova.virt.libvirt.host [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.516 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:05:33.515 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.517 251996 DEBUG nova.virt.libvirt.host [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.517 251996 DEBUG nova.virt.libvirt.host [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:05:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:05:33.518 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.519 251996 DEBUG nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.519 251996 DEBUG nova.virt.hardware [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:04:49Z,direct_url=<?>,disk_format='raw',id=6e62d22c-45f1-4591-b9e8-303a0bc90e48,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-316061423-shelved',owner='e2384cf38a13417c9220db3aafff6b24',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:05:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.520 251996 DEBUG nova.virt.hardware [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.520 251996 DEBUG nova.virt.hardware [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.520 251996 DEBUG nova.virt.hardware [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.520 251996 DEBUG nova.virt.hardware [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.521 251996 DEBUG nova.virt.hardware [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.521 251996 DEBUG nova.virt.hardware [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.521 251996 DEBUG nova.virt.hardware [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.521 251996 DEBUG nova.virt.hardware [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.522 251996 DEBUG nova.virt.hardware [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.522 251996 DEBUG nova.virt.hardware [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.522 251996 DEBUG nova.objects.instance [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'vcpu_model' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.548 251996 DEBUG oslo_concurrency.processutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:33.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:05:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1654068104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:33 compute-0 nova_compute[251992]: 2025-12-06 07:05:33.993 251996 DEBUG oslo_concurrency.processutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:34 compute-0 nova_compute[251992]: 2025-12-06 07:05:34.018 251996 DEBUG nova.storage.rbd_utils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:34 compute-0 nova_compute[251992]: 2025-12-06 07:05:34.022 251996 DEBUG oslo_concurrency.processutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:05:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3395349547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:34 compute-0 nova_compute[251992]: 2025-12-06 07:05:34.500 251996 DEBUG oslo_concurrency.processutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:34 compute-0 nova_compute[251992]: 2025-12-06 07:05:34.502 251996 DEBUG nova.objects.instance [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'pci_devices' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:34 compute-0 nova_compute[251992]: 2025-12-06 07:05:34.518 251996 DEBUG nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:05:34 compute-0 nova_compute[251992]:   <uuid>dd7ff314-b789-4550-ab9a-44dc02948350</uuid>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   <name>instance-0000001c</name>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-316061423</nova:name>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:05:33</nova:creationTime>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <nova:user uuid="3c3859554f86419a941a5924e80b88de">tempest-UnshelveToHostMultiNodesTest-311268056-project-member</nova:user>
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <nova:project uuid="e2384cf38a13417c9220db3aafff6b24">tempest-UnshelveToHostMultiNodesTest-311268056</nova:project>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6e62d22c-45f1-4591-b9e8-303a0bc90e48"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <system>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <entry name="serial">dd7ff314-b789-4550-ab9a-44dc02948350</entry>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <entry name="uuid">dd7ff314-b789-4550-ab9a-44dc02948350</entry>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     </system>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   <os>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   </os>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   <features>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   </features>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk">
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       </source>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk.config">
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       </source>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:05:34 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/console.log" append="off"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <video>
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     </video>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <input type="keyboard" bus="usb"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:05:34 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:05:34 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:05:34 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:05:34 compute-0 nova_compute[251992]: </domain>
Dec 06 07:05:34 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:05:34 compute-0 ceph-mon[74339]: pgmap v1346: 305 pgs: 305 active+clean; 331 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 30 KiB/s wr, 90 op/s
Dec 06 07:05:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1654068104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3395349547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:34 compute-0 nova_compute[251992]: 2025-12-06 07:05:34.808 251996 DEBUG nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:05:34 compute-0 nova_compute[251992]: 2025-12-06 07:05:34.808 251996 DEBUG nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:05:34 compute-0 nova_compute[251992]: 2025-12-06 07:05:34.809 251996 INFO nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Using config drive
Dec 06 07:05:34 compute-0 nova_compute[251992]: 2025-12-06 07:05:34.873 251996 DEBUG nova.storage.rbd_utils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:34 compute-0 nova_compute[251992]: 2025-12-06 07:05:34.893 251996 DEBUG nova.objects.instance [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'ec2_ids' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:34.936931) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734936967, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 384, "num_deletes": 252, "total_data_size": 235378, "memory_usage": 244136, "flush_reason": "Manual Compaction"}
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec 06 07:05:34 compute-0 nova_compute[251992]: 2025-12-06 07:05:34.949 251996 DEBUG nova.objects.instance [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lazy-loading 'keypairs' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734970664, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 233059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27479, "largest_seqno": 27862, "table_properties": {"data_size": 230750, "index_size": 409, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6069, "raw_average_key_size": 19, "raw_value_size": 226001, "raw_average_value_size": 712, "num_data_blocks": 18, "num_entries": 317, "num_filter_entries": 317, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004724, "oldest_key_time": 1765004724, "file_creation_time": 1765004734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 33775 microseconds, and 1099 cpu microseconds.
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:34.970707) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 233059 bytes OK
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:34.970727) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:34.978024) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:34.978041) EVENT_LOG_v1 {"time_micros": 1765004734978036, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:34.978059) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 232882, prev total WAL file size 232882, number of live WAL files 2.
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:34.978558) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(227KB)], [59(9501KB)]
Dec 06 07:05:34 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004734978587, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 9962724, "oldest_snapshot_seqno": -1}
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5455 keys, 7990343 bytes, temperature: kUnknown
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004735050537, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 7990343, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7955219, "index_size": 20358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 141184, "raw_average_key_size": 25, "raw_value_size": 7858137, "raw_average_value_size": 1440, "num_data_blocks": 813, "num_entries": 5455, "num_filter_entries": 5455, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765004734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:35.050936) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 7990343 bytes
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:35.052094) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.0 rd, 110.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.3 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(77.0) write-amplify(34.3) OK, records in: 5972, records dropped: 517 output_compression: NoCompression
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:35.052134) EVENT_LOG_v1 {"time_micros": 1765004735052124, "job": 32, "event": "compaction_finished", "compaction_time_micros": 72181, "compaction_time_cpu_micros": 18673, "output_level": 6, "num_output_files": 1, "total_output_size": 7990343, "num_input_records": 5972, "num_output_records": 5455, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004735052610, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004735054190, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:34.978464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:35.054351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:35.054358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:35.054361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:35.054363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:05:35.054365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:05:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 287 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 154 op/s
Dec 06 07:05:35 compute-0 nova_compute[251992]: 2025-12-06 07:05:35.217 251996 INFO nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Creating config drive at /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config
Dec 06 07:05:35 compute-0 nova_compute[251992]: 2025-12-06 07:05:35.223 251996 DEBUG oslo_concurrency.processutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa33fgev_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:05:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:35.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:05:35 compute-0 nova_compute[251992]: 2025-12-06 07:05:35.352 251996 DEBUG oslo_concurrency.processutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa33fgev_" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:35 compute-0 nova_compute[251992]: 2025-12-06 07:05:35.381 251996 DEBUG nova.storage.rbd_utils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] rbd image dd7ff314-b789-4550-ab9a-44dc02948350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:05:35 compute-0 nova_compute[251992]: 2025-12-06 07:05:35.384 251996 DEBUG oslo_concurrency.processutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config dd7ff314-b789-4550-ab9a-44dc02948350_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:35 compute-0 nova_compute[251992]: 2025-12-06 07:05:35.548 251996 DEBUG oslo_concurrency.processutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config dd7ff314-b789-4550-ab9a-44dc02948350_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:35 compute-0 nova_compute[251992]: 2025-12-06 07:05:35.549 251996 INFO nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Deleting local config drive /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350/disk.config because it was imported into RBD.
Dec 06 07:05:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:35.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:35 compute-0 systemd-machined[212986]: New machine qemu-15-instance-0000001c.
Dec 06 07:05:35 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000001c.
Dec 06 07:05:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2942269010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1388583674' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.613 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004736.612613, dd7ff314-b789-4550-ab9a-44dc02948350 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.614 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] VM Resumed (Lifecycle Event)
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.617 251996 DEBUG nova.compute.manager [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.617 251996 DEBUG nova.virt.libvirt.driver [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.621 251996 INFO nova.virt.libvirt.driver [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance spawned successfully.
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.644 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.646 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.678 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.678 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004736.6136081, dd7ff314-b789-4550-ab9a-44dc02948350 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.679 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] VM Started (Lifecycle Event)
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.719 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.722 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:05:36 compute-0 nova_compute[251992]: 2025-12-06 07:05:36.760 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:05:36 compute-0 sudo[275560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:36 compute-0 sudo[275560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:36 compute-0 sudo[275560]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:36 compute-0 sudo[275585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:36 compute-0 sudo[275585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:36 compute-0 sudo[275585]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 266 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.8 MiB/s wr, 275 op/s
Dec 06 07:05:37 compute-0 ceph-mon[74339]: pgmap v1347: 305 pgs: 305 active+clean; 287 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 154 op/s
Dec 06 07:05:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1868520597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3011554113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:05:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:37.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:05:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:05:37.519 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:05:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:37.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:37 compute-0 nova_compute[251992]: 2025-12-06 07:05:37.699 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:38 compute-0 nova_compute[251992]: 2025-12-06 07:05:38.306 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:38 compute-0 ceph-mon[74339]: pgmap v1348: 305 pgs: 305 active+clean; 266 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.8 MiB/s wr, 275 op/s
Dec 06 07:05:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 266 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 230 op/s
Dec 06 07:05:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:39.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:39.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Dec 06 07:05:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Dec 06 07:05:39 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Dec 06 07:05:40 compute-0 nova_compute[251992]: 2025-12-06 07:05:40.056 251996 DEBUG nova.compute.manager [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:05:40 compute-0 nova_compute[251992]: 2025-12-06 07:05:40.132 251996 DEBUG oslo_concurrency.lockutils [None req-9b6207d0-e036-45fe-aae1-b8baa1d9488e fe5ab093e92243bbbb3fe0185089dd0b db5c847a1cb84e2ba4e1e360b69c614f - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 9.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:40 compute-0 ceph-mon[74339]: pgmap v1349: 305 pgs: 305 active+clean; 266 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 230 op/s
Dec 06 07:05:40 compute-0 ceph-mon[74339]: osdmap e174: 3 total, 3 up, 3 in
Dec 06 07:05:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3372856054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 162 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 8.2 MiB/s rd, 6.8 MiB/s wr, 435 op/s
Dec 06 07:05:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:41.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:41.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/526554932' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:05:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/526554932' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:05:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:42 compute-0 nova_compute[251992]: 2025-12-06 07:05:42.701 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:42 compute-0 ceph-mon[74339]: pgmap v1351: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 162 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 8.2 MiB/s rd, 6.8 MiB/s wr, 435 op/s
Dec 06 07:05:42 compute-0 nova_compute[251992]: 2025-12-06 07:05:42.912 251996 DEBUG oslo_concurrency.lockutils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "dd7ff314-b789-4550-ab9a-44dc02948350" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:42 compute-0 nova_compute[251992]: 2025-12-06 07:05:42.913 251996 DEBUG oslo_concurrency.lockutils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:42 compute-0 nova_compute[251992]: 2025-12-06 07:05:42.913 251996 INFO nova.compute.manager [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Shelving
Dec 06 07:05:42 compute-0 nova_compute[251992]: 2025-12-06 07:05:42.945 251996 DEBUG nova.virt.libvirt.driver [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:05:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:05:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:05:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:05:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:05:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:05:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:05:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 121 MiB data, 450 MiB used, 21 GiB / 21 GiB avail; 9.4 MiB/s rd, 6.8 MiB/s wr, 471 op/s
Dec 06 07:05:43 compute-0 nova_compute[251992]: 2025-12-06 07:05:43.308 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:43.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:43.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:44 compute-0 ceph-mon[74339]: pgmap v1352: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 121 MiB data, 450 MiB used, 21 GiB / 21 GiB avail; 9.4 MiB/s rd, 6.8 MiB/s wr, 471 op/s
Dec 06 07:05:44 compute-0 nova_compute[251992]: 2025-12-06 07:05:44.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:44 compute-0 nova_compute[251992]: 2025-12-06 07:05:44.659 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.6 MiB/s wr, 400 op/s
Dec 06 07:05:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:45.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:45.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:45 compute-0 nova_compute[251992]: 2025-12-06 07:05:45.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:45 compute-0 nova_compute[251992]: 2025-12-06 07:05:45.678 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:45 compute-0 nova_compute[251992]: 2025-12-06 07:05:45.679 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:45 compute-0 nova_compute[251992]: 2025-12-06 07:05:45.679 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:45 compute-0 nova_compute[251992]: 2025-12-06 07:05:45.680 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:05:45 compute-0 nova_compute[251992]: 2025-12-06 07:05:45.680 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:05:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187097515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.128 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:46 compute-0 ceph-mon[74339]: pgmap v1353: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.6 MiB/s wr, 400 op/s
Dec 06 07:05:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/249549195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2187097515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.222 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.222 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.386 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.388 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4553MB free_disk=20.9427490234375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.388 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.389 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.481 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance dd7ff314-b789-4550-ab9a-44dc02948350 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.482 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.482 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.524 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:05:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:05:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3455870158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.971 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.978 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:05:46 compute-0 nova_compute[251992]: 2025-12-06 07:05:46.998 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:05:47 compute-0 nova_compute[251992]: 2025-12-06 07:05:47.045 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:05:47 compute-0 nova_compute[251992]: 2025-12-06 07:05:47.046 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:05:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 34 KiB/s wr, 266 op/s
Dec 06 07:05:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/173234693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3455870158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Dec 06 07:05:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Dec 06 07:05:47 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Dec 06 07:05:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:47.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:47.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:47 compute-0 nova_compute[251992]: 2025-12-06 07:05:47.702 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:48 compute-0 nova_compute[251992]: 2025-12-06 07:05:48.039 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:48 compute-0 nova_compute[251992]: 2025-12-06 07:05:48.040 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:48 compute-0 nova_compute[251992]: 2025-12-06 07:05:48.040 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:48 compute-0 nova_compute[251992]: 2025-12-06 07:05:48.310 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:48 compute-0 ceph-mon[74339]: pgmap v1354: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 34 KiB/s wr, 266 op/s
Dec 06 07:05:48 compute-0 ceph-mon[74339]: osdmap e175: 3 total, 3 up, 3 in
Dec 06 07:05:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/109364209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3546733337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:48 compute-0 nova_compute[251992]: 2025-12-06 07:05:48.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:48 compute-0 nova_compute[251992]: 2025-12-06 07:05:48.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:05:48 compute-0 nova_compute[251992]: 2025-12-06 07:05:48.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:05:48 compute-0 nova_compute[251992]: 2025-12-06 07:05:48.827 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:05:48 compute-0 nova_compute[251992]: 2025-12-06 07:05:48.828 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:05:48 compute-0 nova_compute[251992]: 2025-12-06 07:05:48.828 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:05:48 compute-0 nova_compute[251992]: 2025-12-06 07:05:48.829 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 36 KiB/s wr, 282 op/s
Dec 06 07:05:49 compute-0 nova_compute[251992]: 2025-12-06 07:05:49.188 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:05:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:49.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:49.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:49 compute-0 nova_compute[251992]: 2025-12-06 07:05:49.969 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:05:49 compute-0 nova_compute[251992]: 2025-12-06 07:05:49.990 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:05:49 compute-0 nova_compute[251992]: 2025-12-06 07:05:49.991 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:05:49 compute-0 nova_compute[251992]: 2025-12-06 07:05:49.992 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:49 compute-0 nova_compute[251992]: 2025-12-06 07:05:49.992 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:05:50 compute-0 ceph-mon[74339]: pgmap v1356: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 36 KiB/s wr, 282 op/s
Dec 06 07:05:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 110 op/s
Dec 06 07:05:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:51.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.017000454s ======
Dec 06 07:05:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:51.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.017000454s
Dec 06 07:05:51 compute-0 nova_compute[251992]: 2025-12-06 07:05:51.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:05:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:52 compute-0 ceph-mon[74339]: pgmap v1357: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 110 op/s
Dec 06 07:05:52 compute-0 nova_compute[251992]: 2025-12-06 07:05:52.704 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:52 compute-0 nova_compute[251992]: 2025-12-06 07:05:52.986 251996 DEBUG nova.virt.libvirt.driver [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:05:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 635 KiB/s rd, 13 KiB/s wr, 56 op/s
Dec 06 07:05:53 compute-0 nova_compute[251992]: 2025-12-06 07:05:53.313 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:53.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:53.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:54 compute-0 sudo[275663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:54 compute-0 sudo[275663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:54 compute-0 sudo[275663]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:54 compute-0 sudo[275694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:05:54 compute-0 sudo[275694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:54 compute-0 sudo[275694]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:54 compute-0 podman[275687]: 2025-12-06 07:05:54.389680323 +0000 UTC m=+0.122288251 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 06 07:05:54 compute-0 sudo[275736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:54 compute-0 sudo[275736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:54 compute-0 sudo[275736]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:54 compute-0 ceph-mon[74339]: pgmap v1358: 305 pgs: 305 active+clean; 121 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 635 KiB/s rd, 13 KiB/s wr, 56 op/s
Dec 06 07:05:54 compute-0 sudo[275766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:05:54 compute-0 sudo[275766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:54 compute-0 sudo[275766]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:05:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:05:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:05:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:05:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:05:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:05:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 66311de1-2214-45b4-9e3a-691485cc0a1f does not exist
Dec 06 07:05:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 757e1a9b-f38b-4054-8542-2fc287e4b441 does not exist
Dec 06 07:05:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3514e64e-bf32-4c7c-84a1-600f01c3b244 does not exist
Dec 06 07:05:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:05:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:05:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:05:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:05:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:05:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:05:55 compute-0 sudo[275825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:55 compute-0 sudo[275825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:55 compute-0 sudo[275825]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 122 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 636 KiB/s rd, 31 KiB/s wr, 56 op/s
Dec 06 07:05:55 compute-0 sudo[275850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:05:55 compute-0 sudo[275850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:55 compute-0 sudo[275850]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:55 compute-0 sudo[275875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:55 compute-0 sudo[275875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:55 compute-0 sudo[275875]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:55 compute-0 sudo[275900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:05:55 compute-0 sudo[275900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:55.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:05:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:05:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:05:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:05:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:05:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:05:55 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Dec 06 07:05:55 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001c.scope: Consumed 14.417s CPU time.
Dec 06 07:05:55 compute-0 podman[275967]: 2025-12-06 07:05:55.536024072 +0000 UTC m=+0.043873219 container create f0f08209955c3d5d374f9edcb29321b887275d680d45a56f3facec1be65de516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 07:05:55 compute-0 systemd-machined[212986]: Machine qemu-15-instance-0000001c terminated.
Dec 06 07:05:55 compute-0 systemd[1]: Started libpod-conmon-f0f08209955c3d5d374f9edcb29321b887275d680d45a56f3facec1be65de516.scope.
Dec 06 07:05:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:05:55 compute-0 podman[275967]: 2025-12-06 07:05:55.51455238 +0000 UTC m=+0.022401577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:05:55 compute-0 podman[275967]: 2025-12-06 07:05:55.620031676 +0000 UTC m=+0.127880853 container init f0f08209955c3d5d374f9edcb29321b887275d680d45a56f3facec1be65de516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:05:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:55.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:55 compute-0 podman[275967]: 2025-12-06 07:05:55.626732897 +0000 UTC m=+0.134582054 container start f0f08209955c3d5d374f9edcb29321b887275d680d45a56f3facec1be65de516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 07:05:55 compute-0 podman[275967]: 2025-12-06 07:05:55.6305506 +0000 UTC m=+0.138399747 container attach f0f08209955c3d5d374f9edcb29321b887275d680d45a56f3facec1be65de516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:05:55 compute-0 quirky_austin[275984]: 167 167
Dec 06 07:05:55 compute-0 systemd[1]: libpod-f0f08209955c3d5d374f9edcb29321b887275d680d45a56f3facec1be65de516.scope: Deactivated successfully.
Dec 06 07:05:55 compute-0 podman[275967]: 2025-12-06 07:05:55.63277172 +0000 UTC m=+0.140620877 container died f0f08209955c3d5d374f9edcb29321b887275d680d45a56f3facec1be65de516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 07:05:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0d817731859f339b1a5267bb6990928ea0a215d79c4edec78d42a7a820ffe56-merged.mount: Deactivated successfully.
Dec 06 07:05:55 compute-0 podman[275967]: 2025-12-06 07:05:55.676586336 +0000 UTC m=+0.184435483 container remove f0f08209955c3d5d374f9edcb29321b887275d680d45a56f3facec1be65de516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 07:05:55 compute-0 systemd[1]: libpod-conmon-f0f08209955c3d5d374f9edcb29321b887275d680d45a56f3facec1be65de516.scope: Deactivated successfully.
Dec 06 07:05:55 compute-0 podman[276010]: 2025-12-06 07:05:55.835889579 +0000 UTC m=+0.038365350 container create de4735dcc9f6b324c0abac498e9e960d3ed661612121a396047a44e6f33644e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:05:55 compute-0 systemd[1]: Started libpod-conmon-de4735dcc9f6b324c0abac498e9e960d3ed661612121a396047a44e6f33644e9.scope.
Dec 06 07:05:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf0ea16cb985bcfc923a4c6ede61f1ac764a292cb92cc77eb89f132b9f2b8d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf0ea16cb985bcfc923a4c6ede61f1ac764a292cb92cc77eb89f132b9f2b8d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf0ea16cb985bcfc923a4c6ede61f1ac764a292cb92cc77eb89f132b9f2b8d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf0ea16cb985bcfc923a4c6ede61f1ac764a292cb92cc77eb89f132b9f2b8d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf0ea16cb985bcfc923a4c6ede61f1ac764a292cb92cc77eb89f132b9f2b8d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:55 compute-0 podman[276010]: 2025-12-06 07:05:55.819577387 +0000 UTC m=+0.022053168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:05:55 compute-0 podman[276010]: 2025-12-06 07:05:55.917260451 +0000 UTC m=+0.119736212 container init de4735dcc9f6b324c0abac498e9e960d3ed661612121a396047a44e6f33644e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:05:55 compute-0 podman[276010]: 2025-12-06 07:05:55.925445443 +0000 UTC m=+0.127921204 container start de4735dcc9f6b324c0abac498e9e960d3ed661612121a396047a44e6f33644e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sinoussi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 07:05:55 compute-0 podman[276010]: 2025-12-06 07:05:55.92868595 +0000 UTC m=+0.131161721 container attach de4735dcc9f6b324c0abac498e9e960d3ed661612121a396047a44e6f33644e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:05:56 compute-0 nova_compute[251992]: 2025-12-06 07:05:56.001 251996 INFO nova.virt.libvirt.driver [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance shutdown successfully after 13 seconds.
Dec 06 07:05:56 compute-0 nova_compute[251992]: 2025-12-06 07:05:56.015 251996 INFO nova.virt.libvirt.driver [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance destroyed successfully.
Dec 06 07:05:56 compute-0 nova_compute[251992]: 2025-12-06 07:05:56.016 251996 DEBUG nova.objects.instance [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lazy-loading 'numa_topology' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:05:56 compute-0 nova_compute[251992]: 2025-12-06 07:05:56.488 251996 INFO nova.virt.libvirt.driver [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Beginning cold snapshot process
Dec 06 07:05:56 compute-0 ceph-mon[74339]: pgmap v1359: 305 pgs: 305 active+clean; 122 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 636 KiB/s rd, 31 KiB/s wr, 56 op/s
Dec 06 07:05:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/30733717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:05:56 compute-0 nova_compute[251992]: 2025-12-06 07:05:56.644 251996 DEBUG nova.virt.libvirt.imagebackend [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:05:56 compute-0 hopeful_sinoussi[276027]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:05:56 compute-0 hopeful_sinoussi[276027]: --> relative data size: 1.0
Dec 06 07:05:56 compute-0 hopeful_sinoussi[276027]: --> All data devices are unavailable
Dec 06 07:05:56 compute-0 systemd[1]: libpod-de4735dcc9f6b324c0abac498e9e960d3ed661612121a396047a44e6f33644e9.scope: Deactivated successfully.
Dec 06 07:05:56 compute-0 podman[276010]: 2025-12-06 07:05:56.802685008 +0000 UTC m=+1.005160769 container died de4735dcc9f6b324c0abac498e9e960d3ed661612121a396047a44e6f33644e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sinoussi, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:05:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bf0ea16cb985bcfc923a4c6ede61f1ac764a292cb92cc77eb89f132b9f2b8d6-merged.mount: Deactivated successfully.
Dec 06 07:05:56 compute-0 podman[276010]: 2025-12-06 07:05:56.879439385 +0000 UTC m=+1.081915146 container remove de4735dcc9f6b324c0abac498e9e960d3ed661612121a396047a44e6f33644e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:05:56 compute-0 systemd[1]: libpod-conmon-de4735dcc9f6b324c0abac498e9e960d3ed661612121a396047a44e6f33644e9.scope: Deactivated successfully.
Dec 06 07:05:56 compute-0 sudo[275900]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:56 compute-0 sudo[276088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:56 compute-0 sudo[276088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:56 compute-0 sudo[276088]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:56 compute-0 sudo[276112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:56 compute-0 nova_compute[251992]: 2025-12-06 07:05:56.951 251996 DEBUG nova.storage.rbd_utils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] creating snapshot(61affc467a2744d6a7805020a41fc818) on rbd image(dd7ff314-b789-4550-ab9a-44dc02948350_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:05:56 compute-0 sudo[276112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:56 compute-0 sudo[276112]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:56 compute-0 sudo[276120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:56 compute-0 sudo[276120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:56 compute-0 sudo[276120]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:57 compute-0 sudo[276176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:05:57 compute-0 sudo[276176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:57 compute-0 sudo[276176]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 122 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 637 KiB/s rd, 35 KiB/s wr, 57 op/s
Dec 06 07:05:57 compute-0 sudo[276206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:57 compute-0 sudo[276206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:57 compute-0 sudo[276206]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:57 compute-0 sudo[276231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:05:57 compute-0 sudo[276231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:05:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:05:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:57.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:05:57 compute-0 podman[276297]: 2025-12-06 07:05:57.460429741 +0000 UTC m=+0.035770469 container create 138f0d0771fd673f089d541d354e69c39af7d53ef10faa7da3fab0b4ef65b705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:05:57 compute-0 systemd[1]: Started libpod-conmon-138f0d0771fd673f089d541d354e69c39af7d53ef10faa7da3fab0b4ef65b705.scope.
Dec 06 07:05:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:05:57 compute-0 podman[276297]: 2025-12-06 07:05:57.53095705 +0000 UTC m=+0.106297808 container init 138f0d0771fd673f089d541d354e69c39af7d53ef10faa7da3fab0b4ef65b705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 07:05:57 compute-0 podman[276297]: 2025-12-06 07:05:57.539019629 +0000 UTC m=+0.114360357 container start 138f0d0771fd673f089d541d354e69c39af7d53ef10faa7da3fab0b4ef65b705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:05:57 compute-0 podman[276297]: 2025-12-06 07:05:57.443889674 +0000 UTC m=+0.019230422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:05:57 compute-0 podman[276297]: 2025-12-06 07:05:57.542491422 +0000 UTC m=+0.117832150 container attach 138f0d0771fd673f089d541d354e69c39af7d53ef10faa7da3fab0b4ef65b705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:05:57 compute-0 upbeat_aryabhata[276314]: 167 167
Dec 06 07:05:57 compute-0 systemd[1]: libpod-138f0d0771fd673f089d541d354e69c39af7d53ef10faa7da3fab0b4ef65b705.scope: Deactivated successfully.
Dec 06 07:05:57 compute-0 podman[276297]: 2025-12-06 07:05:57.545913545 +0000 UTC m=+0.121254293 container died 138f0d0771fd673f089d541d354e69c39af7d53ef10faa7da3fab0b4ef65b705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 07:05:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Dec 06 07:05:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Dec 06 07:05:57 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Dec 06 07:05:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bf39989594b78e41f5ecef35639f26d8f880015c0ed0d04b2a0ad5be8749db2-merged.mount: Deactivated successfully.
Dec 06 07:05:57 compute-0 podman[276297]: 2025-12-06 07:05:57.587433569 +0000 UTC m=+0.162774297 container remove 138f0d0771fd673f089d541d354e69c39af7d53ef10faa7da3fab0b4ef65b705 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_aryabhata, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:05:57 compute-0 systemd[1]: libpod-conmon-138f0d0771fd673f089d541d354e69c39af7d53ef10faa7da3fab0b4ef65b705.scope: Deactivated successfully.
Dec 06 07:05:57 compute-0 nova_compute[251992]: 2025-12-06 07:05:57.620 251996 DEBUG nova.storage.rbd_utils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] cloning vms/dd7ff314-b789-4550-ab9a-44dc02948350_disk@61affc467a2744d6a7805020a41fc818 to images/11dbd9ca-4725-400f-ade9-f5493c82f6e2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:05:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:57.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:57 compute-0 nova_compute[251992]: 2025-12-06 07:05:57.706 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:57 compute-0 podman[276371]: 2025-12-06 07:05:57.767834551 +0000 UTC m=+0.066250713 container create 542d1e8dbb172fe431cd8871816dd9b8449a38089f69314d9cbbc5e03664e0f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 06 07:05:57 compute-0 nova_compute[251992]: 2025-12-06 07:05:57.809 251996 DEBUG nova.storage.rbd_utils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] flattening images/11dbd9ca-4725-400f-ade9-f5493c82f6e2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:05:57 compute-0 systemd[1]: Started libpod-conmon-542d1e8dbb172fe431cd8871816dd9b8449a38089f69314d9cbbc5e03664e0f6.scope.
Dec 06 07:05:57 compute-0 podman[276371]: 2025-12-06 07:05:57.724988022 +0000 UTC m=+0.023404204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:05:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76311ad0ba82e695c3a3d90edae353358046a815fb1d56c1d46402a526de1265/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76311ad0ba82e695c3a3d90edae353358046a815fb1d56c1d46402a526de1265/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76311ad0ba82e695c3a3d90edae353358046a815fb1d56c1d46402a526de1265/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76311ad0ba82e695c3a3d90edae353358046a815fb1d56c1d46402a526de1265/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:57 compute-0 podman[276371]: 2025-12-06 07:05:57.861447606 +0000 UTC m=+0.159863768 container init 542d1e8dbb172fe431cd8871816dd9b8449a38089f69314d9cbbc5e03664e0f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 07:05:57 compute-0 podman[276371]: 2025-12-06 07:05:57.870026328 +0000 UTC m=+0.168442490 container start 542d1e8dbb172fe431cd8871816dd9b8449a38089f69314d9cbbc5e03664e0f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:05:57 compute-0 podman[276371]: 2025-12-06 07:05:57.873172583 +0000 UTC m=+0.171588765 container attach 542d1e8dbb172fe431cd8871816dd9b8449a38089f69314d9cbbc5e03664e0f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:05:58 compute-0 nova_compute[251992]: 2025-12-06 07:05:58.202 251996 DEBUG nova.storage.rbd_utils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] removing snapshot(61affc467a2744d6a7805020a41fc818) on rbd image(dd7ff314-b789-4550-ab9a-44dc02948350_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:05:58 compute-0 nova_compute[251992]: 2025-12-06 07:05:58.315 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:05:58 compute-0 ceph-mon[74339]: pgmap v1360: 305 pgs: 305 active+clean; 122 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 637 KiB/s rd, 35 KiB/s wr, 57 op/s
Dec 06 07:05:58 compute-0 ceph-mon[74339]: osdmap e176: 3 total, 3 up, 3 in
Dec 06 07:05:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1169664722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/273254576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:05:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Dec 06 07:05:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Dec 06 07:05:58 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Dec 06 07:05:58 compute-0 keen_hamilton[276390]: {
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:     "0": [
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:         {
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             "devices": [
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "/dev/loop3"
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             ],
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             "lv_name": "ceph_lv0",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             "lv_size": "7511998464",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             "name": "ceph_lv0",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             "tags": {
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "ceph.cluster_name": "ceph",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "ceph.crush_device_class": "",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "ceph.encrypted": "0",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "ceph.osd_id": "0",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "ceph.type": "block",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:                 "ceph.vdo": "0"
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             },
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             "type": "block",
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:             "vg_name": "ceph_vg0"
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:         }
Dec 06 07:05:58 compute-0 keen_hamilton[276390]:     ]
Dec 06 07:05:58 compute-0 keen_hamilton[276390]: }
Dec 06 07:05:58 compute-0 systemd[1]: libpod-542d1e8dbb172fe431cd8871816dd9b8449a38089f69314d9cbbc5e03664e0f6.scope: Deactivated successfully.
Dec 06 07:05:58 compute-0 podman[276371]: 2025-12-06 07:05:58.632448685 +0000 UTC m=+0.930864847 container died 542d1e8dbb172fe431cd8871816dd9b8449a38089f69314d9cbbc5e03664e0f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:05:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-76311ad0ba82e695c3a3d90edae353358046a815fb1d56c1d46402a526de1265-merged.mount: Deactivated successfully.
Dec 06 07:05:58 compute-0 podman[276371]: 2025-12-06 07:05:58.681613806 +0000 UTC m=+0.980029968 container remove 542d1e8dbb172fe431cd8871816dd9b8449a38089f69314d9cbbc5e03664e0f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:05:58 compute-0 systemd[1]: libpod-conmon-542d1e8dbb172fe431cd8871816dd9b8449a38089f69314d9cbbc5e03664e0f6.scope: Deactivated successfully.
Dec 06 07:05:58 compute-0 sudo[276231]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:58 compute-0 sudo[276449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:58 compute-0 sudo[276449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:58 compute-0 sudo[276449]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:58 compute-0 nova_compute[251992]: 2025-12-06 07:05:58.797 251996 DEBUG nova.storage.rbd_utils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] creating snapshot(snap) on rbd image(11dbd9ca-4725-400f-ade9-f5493c82f6e2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:05:58 compute-0 sudo[276474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:05:58 compute-0 sudo[276474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:58 compute-0 sudo[276474]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:58 compute-0 sudo[276517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:05:58 compute-0 sudo[276517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:58 compute-0 sudo[276517]: pam_unix(sudo:session): session closed for user root
Dec 06 07:05:58 compute-0 sudo[276542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:05:58 compute-0 sudo[276542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:05:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 122 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 126 KiB/s rd, 27 KiB/s wr, 15 op/s
Dec 06 07:05:59 compute-0 podman[276608]: 2025-12-06 07:05:59.279982452 +0000 UTC m=+0.038682768 container create 2fda2be5fe9349e31464d16e2e84473fcb7da3729c7e4994b282e53697e9c7be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_raman, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:05:59 compute-0 systemd[1]: Started libpod-conmon-2fda2be5fe9349e31464d16e2e84473fcb7da3729c7e4994b282e53697e9c7be.scope.
Dec 06 07:05:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:05:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:05:59.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:59 compute-0 podman[276608]: 2025-12-06 07:05:59.332543425 +0000 UTC m=+0.091243741 container init 2fda2be5fe9349e31464d16e2e84473fcb7da3729c7e4994b282e53697e9c7be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_raman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:05:59 compute-0 podman[276608]: 2025-12-06 07:05:59.339261637 +0000 UTC m=+0.097961953 container start 2fda2be5fe9349e31464d16e2e84473fcb7da3729c7e4994b282e53697e9c7be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_raman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 07:05:59 compute-0 podman[276608]: 2025-12-06 07:05:59.342807223 +0000 UTC m=+0.101507529 container attach 2fda2be5fe9349e31464d16e2e84473fcb7da3729c7e4994b282e53697e9c7be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_raman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:05:59 compute-0 strange_raman[276624]: 167 167
Dec 06 07:05:59 compute-0 systemd[1]: libpod-2fda2be5fe9349e31464d16e2e84473fcb7da3729c7e4994b282e53697e9c7be.scope: Deactivated successfully.
Dec 06 07:05:59 compute-0 podman[276608]: 2025-12-06 07:05:59.34639995 +0000 UTC m=+0.105100276 container died 2fda2be5fe9349e31464d16e2e84473fcb7da3729c7e4994b282e53697e9c7be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 07:05:59 compute-0 podman[276608]: 2025-12-06 07:05:59.262811688 +0000 UTC m=+0.021512034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:05:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc1cc90d854e75f95709f460040b29a3b5b6d29778593c2b0be23883a2568ecd-merged.mount: Deactivated successfully.
Dec 06 07:05:59 compute-0 podman[276608]: 2025-12-06 07:05:59.381401127 +0000 UTC m=+0.140101443 container remove 2fda2be5fe9349e31464d16e2e84473fcb7da3729c7e4994b282e53697e9c7be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_raman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:05:59 compute-0 systemd[1]: libpod-conmon-2fda2be5fe9349e31464d16e2e84473fcb7da3729c7e4994b282e53697e9c7be.scope: Deactivated successfully.
Dec 06 07:05:59 compute-0 podman[276648]: 2025-12-06 07:05:59.532064085 +0000 UTC m=+0.045564654 container create 5f9ec46b70af646873dc8d51a3150acbc6bae74c6bacdc302bc488d3c182fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lumiere, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:05:59 compute-0 systemd[1]: Started libpod-conmon-5f9ec46b70af646873dc8d51a3150acbc6bae74c6bacdc302bc488d3c182fe53.scope.
Dec 06 07:05:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:05:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Dec 06 07:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8bc0e7a252ff36b2624381273d7356e3fe71ef18fdc4a84828c06b15c86bd43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8bc0e7a252ff36b2624381273d7356e3fe71ef18fdc4a84828c06b15c86bd43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8bc0e7a252ff36b2624381273d7356e3fe71ef18fdc4a84828c06b15c86bd43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8bc0e7a252ff36b2624381273d7356e3fe71ef18fdc4a84828c06b15c86bd43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:05:59 compute-0 podman[276648]: 2025-12-06 07:05:59.599519071 +0000 UTC m=+0.113019640 container init 5f9ec46b70af646873dc8d51a3150acbc6bae74c6bacdc302bc488d3c182fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lumiere, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 07:05:59 compute-0 podman[276648]: 2025-12-06 07:05:59.606298545 +0000 UTC m=+0.119799114 container start 5f9ec46b70af646873dc8d51a3150acbc6bae74c6bacdc302bc488d3c182fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 07:05:59 compute-0 podman[276648]: 2025-12-06 07:05:59.514368517 +0000 UTC m=+0.027869116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:05:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Dec 06 07:05:59 compute-0 podman[276648]: 2025-12-06 07:05:59.6112916 +0000 UTC m=+0.124792169 container attach 5f9ec46b70af646873dc8d51a3150acbc6bae74c6bacdc302bc488d3c182fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:05:59 compute-0 ceph-mon[74339]: osdmap e177: 3 total, 3 up, 3 in
Dec 06 07:05:59 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Dec 06 07:05:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:05:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:05:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:05:59.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:05:59 compute-0 podman[276662]: 2025-12-06 07:05:59.633827521 +0000 UTC m=+0.059879133 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:05:59 compute-0 podman[276665]: 2025-12-06 07:05:59.637859239 +0000 UTC m=+0.063795538 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:06:00 compute-0 nice_lumiere[276666]: {
Dec 06 07:06:00 compute-0 nice_lumiere[276666]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:06:00 compute-0 nice_lumiere[276666]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:06:00 compute-0 nice_lumiere[276666]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:06:00 compute-0 nice_lumiere[276666]:         "osd_id": 0,
Dec 06 07:06:00 compute-0 nice_lumiere[276666]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:06:00 compute-0 nice_lumiere[276666]:         "type": "bluestore"
Dec 06 07:06:00 compute-0 nice_lumiere[276666]:     }
Dec 06 07:06:00 compute-0 nice_lumiere[276666]: }
Dec 06 07:06:00 compute-0 systemd[1]: libpod-5f9ec46b70af646873dc8d51a3150acbc6bae74c6bacdc302bc488d3c182fe53.scope: Deactivated successfully.
Dec 06 07:06:00 compute-0 podman[276648]: 2025-12-06 07:06:00.41371267 +0000 UTC m=+0.927213239 container died 5f9ec46b70af646873dc8d51a3150acbc6bae74c6bacdc302bc488d3c182fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lumiere, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:06:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8bc0e7a252ff36b2624381273d7356e3fe71ef18fdc4a84828c06b15c86bd43-merged.mount: Deactivated successfully.
Dec 06 07:06:00 compute-0 podman[276648]: 2025-12-06 07:06:00.466444048 +0000 UTC m=+0.979944617 container remove 5f9ec46b70af646873dc8d51a3150acbc6bae74c6bacdc302bc488d3c182fe53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lumiere, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 07:06:00 compute-0 systemd[1]: libpod-conmon-5f9ec46b70af646873dc8d51a3150acbc6bae74c6bacdc302bc488d3c182fe53.scope: Deactivated successfully.
Dec 06 07:06:00 compute-0 sudo[276542]: pam_unix(sudo:session): session closed for user root
Dec 06 07:06:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:06:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:06:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:06:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:06:00 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1f0e3f92-fb94-4494-8c84-2aa603f135c5 does not exist
Dec 06 07:06:00 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 35e7c463-d491-436e-9730-ec0eb4c48327 does not exist
Dec 06 07:06:00 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ce0c01fc-c109-49bd-9cad-4d3ba08d57b9 does not exist
Dec 06 07:06:00 compute-0 sudo[276734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:06:00 compute-0 sudo[276734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:06:00 compute-0 sudo[276734]: pam_unix(sudo:session): session closed for user root
Dec 06 07:06:00 compute-0 ceph-mon[74339]: pgmap v1363: 305 pgs: 305 active+clean; 122 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 126 KiB/s rd, 27 KiB/s wr, 15 op/s
Dec 06 07:06:00 compute-0 ceph-mon[74339]: osdmap e178: 3 total, 3 up, 3 in
Dec 06 07:06:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:06:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:06:00 compute-0 sudo[276759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:06:00 compute-0 sudo[276759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:06:00 compute-0 sudo[276759]: pam_unix(sudo:session): session closed for user root
Dec 06 07:06:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 209 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.3 MiB/s wr, 184 op/s
Dec 06 07:06:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:01.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:01 compute-0 nova_compute[251992]: 2025-12-06 07:06:01.338 251996 INFO nova.virt.libvirt.driver [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Snapshot image upload complete
Dec 06 07:06:01 compute-0 nova_compute[251992]: 2025-12-06 07:06:01.339 251996 DEBUG nova.compute.manager [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:01 compute-0 nova_compute[251992]: 2025-12-06 07:06:01.466 251996 INFO nova.compute.manager [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Shelve offloading
Dec 06 07:06:01 compute-0 nova_compute[251992]: 2025-12-06 07:06:01.472 251996 INFO nova.virt.libvirt.driver [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance destroyed successfully.
Dec 06 07:06:01 compute-0 nova_compute[251992]: 2025-12-06 07:06:01.473 251996 DEBUG nova.compute.manager [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:01 compute-0 nova_compute[251992]: 2025-12-06 07:06:01.475 251996 DEBUG oslo_concurrency.lockutils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:01 compute-0 nova_compute[251992]: 2025-12-06 07:06:01.475 251996 DEBUG oslo_concurrency.lockutils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquired lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:01 compute-0 nova_compute[251992]: 2025-12-06 07:06:01.475 251996 DEBUG nova.network.neutron [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:06:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:01.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:01 compute-0 nova_compute[251992]: 2025-12-06 07:06:01.639 251996 DEBUG nova.network.neutron [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:01 compute-0 nova_compute[251992]: 2025-12-06 07:06:01.992 251996 DEBUG nova.network.neutron [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:02 compute-0 nova_compute[251992]: 2025-12-06 07:06:02.009 251996 DEBUG oslo_concurrency.lockutils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Releasing lock "refresh_cache-dd7ff314-b789-4550-ab9a-44dc02948350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:02 compute-0 nova_compute[251992]: 2025-12-06 07:06:02.015 251996 INFO nova.virt.libvirt.driver [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Instance destroyed successfully.
Dec 06 07:06:02 compute-0 nova_compute[251992]: 2025-12-06 07:06:02.015 251996 DEBUG nova.objects.instance [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lazy-loading 'resources' on Instance uuid dd7ff314-b789-4550-ab9a-44dc02948350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:02 compute-0 ceph-mon[74339]: pgmap v1365: 305 pgs: 305 active+clean; 209 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.3 MiB/s wr, 184 op/s
Dec 06 07:06:02 compute-0 nova_compute[251992]: 2025-12-06 07:06:02.628 251996 INFO nova.virt.libvirt.driver [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Deleting instance files /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350_del
Dec 06 07:06:02 compute-0 nova_compute[251992]: 2025-12-06 07:06:02.628 251996 INFO nova.virt.libvirt.driver [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Deletion of /var/lib/nova/instances/dd7ff314-b789-4550-ab9a-44dc02948350_del complete
Dec 06 07:06:02 compute-0 nova_compute[251992]: 2025-12-06 07:06:02.707 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:02 compute-0 nova_compute[251992]: 2025-12-06 07:06:02.758 251996 INFO nova.scheduler.client.report [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Deleted allocations for instance dd7ff314-b789-4550-ab9a-44dc02948350
Dec 06 07:06:02 compute-0 nova_compute[251992]: 2025-12-06 07:06:02.844 251996 DEBUG oslo_concurrency.lockutils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:02 compute-0 nova_compute[251992]: 2025-12-06 07:06:02.845 251996 DEBUG oslo_concurrency.lockutils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:02 compute-0 nova_compute[251992]: 2025-12-06 07:06:02.881 251996 DEBUG oslo_concurrency.processutils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 250 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 341 op/s
Dec 06 07:06:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019730247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:03 compute-0 nova_compute[251992]: 2025-12-06 07:06:03.316 251996 DEBUG oslo_concurrency.processutils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:03 compute-0 nova_compute[251992]: 2025-12-06 07:06:03.317 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:03 compute-0 nova_compute[251992]: 2025-12-06 07:06:03.323 251996 DEBUG nova.compute.provider_tree [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:06:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:03.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:03 compute-0 nova_compute[251992]: 2025-12-06 07:06:03.349 251996 DEBUG nova.scheduler.client.report [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:06:03 compute-0 nova_compute[251992]: 2025-12-06 07:06:03.425 251996 DEBUG oslo_concurrency.lockutils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:03 compute-0 nova_compute[251992]: 2025-12-06 07:06:03.535 251996 DEBUG oslo_concurrency.lockutils [None req-f7e97849-b88a-46dd-90e1-e76c53ff4646 3c3859554f86419a941a5924e80b88de e2384cf38a13417c9220db3aafff6b24 - - default default] Lock "dd7ff314-b789-4550-ab9a-44dc02948350" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 20.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:03.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4019730247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/931065436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:03.814 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:03.815 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:03.815 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:05 compute-0 ceph-mon[74339]: pgmap v1366: 305 pgs: 305 active+clean; 250 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 341 op/s
Dec 06 07:06:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 197 MiB data, 476 MiB used, 21 GiB / 21 GiB avail; 9.4 MiB/s rd, 9.1 MiB/s wr, 352 op/s
Dec 06 07:06:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:05.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:06:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:05.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:06:06 compute-0 ceph-mon[74339]: pgmap v1367: 305 pgs: 305 active+clean; 197 MiB data, 476 MiB used, 21 GiB / 21 GiB avail; 9.4 MiB/s rd, 9.1 MiB/s wr, 352 op/s
Dec 06 07:06:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 122 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 8.3 MiB/s rd, 8.0 MiB/s wr, 340 op/s
Dec 06 07:06:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Dec 06 07:06:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Dec 06 07:06:07 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Dec 06 07:06:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:07.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:07.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:07 compute-0 nova_compute[251992]: 2025-12-06 07:06:07.710 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:07 compute-0 nova_compute[251992]: 2025-12-06 07:06:07.832 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "f10e044d-9118-49ce-b890-7ec41fc40cb0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:07 compute-0 nova_compute[251992]: 2025-12-06 07:06:07.832 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "f10e044d-9118-49ce-b890-7ec41fc40cb0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:07 compute-0 nova_compute[251992]: 2025-12-06 07:06:07.869 251996 DEBUG nova.compute.manager [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:06:07 compute-0 nova_compute[251992]: 2025-12-06 07:06:07.994 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:07 compute-0 nova_compute[251992]: 2025-12-06 07:06:07.995 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.002 251996 DEBUG nova.virt.hardware [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.002 251996 INFO nova.compute.claims [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.181 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:08 compute-0 ceph-mon[74339]: pgmap v1368: 305 pgs: 305 active+clean; 122 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 8.3 MiB/s rd, 8.0 MiB/s wr, 340 op/s
Dec 06 07:06:08 compute-0 ceph-mon[74339]: osdmap e179: 3 total, 3 up, 3 in
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.319 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2825645883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.610 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.616 251996 DEBUG nova.compute.provider_tree [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.634 251996 DEBUG nova.scheduler.client.report [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.672 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.673 251996 DEBUG nova.compute.manager [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.727 251996 DEBUG nova.compute.manager [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.728 251996 DEBUG nova.network.neutron [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.761 251996 INFO nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:06:08 compute-0 nova_compute[251992]: 2025-12-06 07:06:08.785 251996 DEBUG nova.compute.manager [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:06:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 122 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 7.5 MiB/s rd, 7.2 MiB/s wr, 305 op/s
Dec 06 07:06:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2488208435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2825645883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4213712616' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:06:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4213712616' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:06:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2450534043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:09.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:09.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:09 compute-0 nova_compute[251992]: 2025-12-06 07:06:09.673 251996 DEBUG nova.policy [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c9025bd3f0854dff80d9408800d6b76b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503b2dfdce9d47598a8b9de4b15e1d45', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.128 251996 DEBUG nova.compute.manager [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.129 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.130 251996 INFO nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Creating image(s)
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.155 251996 DEBUG nova.storage.rbd_utils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image f10e044d-9118-49ce-b890-7ec41fc40cb0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.182 251996 DEBUG nova.storage.rbd_utils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image f10e044d-9118-49ce-b890-7ec41fc40cb0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.208 251996 DEBUG nova.storage.rbd_utils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image f10e044d-9118-49ce-b890-7ec41fc40cb0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.211 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.272 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.273 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.273 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.274 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.298 251996 DEBUG nova.storage.rbd_utils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image f10e044d-9118-49ce-b890-7ec41fc40cb0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.303 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f10e044d-9118-49ce-b890-7ec41fc40cb0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:10 compute-0 ceph-mon[74339]: pgmap v1370: 305 pgs: 305 active+clean; 122 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 7.5 MiB/s rd, 7.2 MiB/s wr, 305 op/s
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.634 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f10e044d-9118-49ce-b890-7ec41fc40cb0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.701 251996 DEBUG nova.storage.rbd_utils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] resizing rbd image f10e044d-9118-49ce-b890-7ec41fc40cb0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.732 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004755.7034702, dd7ff314-b789-4550-ab9a-44dc02948350 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.732 251996 INFO nova.compute.manager [-] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] VM Stopped (Lifecycle Event)
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.796 251996 DEBUG nova.compute.manager [None req-df1732fc-7c5f-4658-8230-bcc6c9a8657b - - - - - -] [instance: dd7ff314-b789-4550-ab9a-44dc02948350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.801 251996 DEBUG nova.objects.instance [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lazy-loading 'migration_context' on Instance uuid f10e044d-9118-49ce-b890-7ec41fc40cb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.826 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.827 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Ensure instance console log exists: /var/lib/nova/instances/f10e044d-9118-49ce-b890-7ec41fc40cb0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.828 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.828 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:10 compute-0 nova_compute[251992]: 2025-12-06 07:06:10.828 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 155 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 198 op/s
Dec 06 07:06:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:11.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:11 compute-0 nova_compute[251992]: 2025-12-06 07:06:11.657 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:11.658 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:06:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:11.659 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:06:11 compute-0 ovn_controller[147168]: 2025-12-06T07:06:11Z|00080|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Dec 06 07:06:11 compute-0 nova_compute[251992]: 2025-12-06 07:06:11.904 251996 DEBUG nova.network.neutron [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Successfully created port: 532b1d43-25cc-449a-8e3d-fd25a56a26b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:06:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:12 compute-0 ceph-mon[74339]: pgmap v1371: 305 pgs: 305 active+clean; 155 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 198 op/s
Dec 06 07:06:12 compute-0 nova_compute[251992]: 2025-12-06 07:06:12.712 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:06:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:06:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:06:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:06:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:06:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:06:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 184 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 655 KiB/s rd, 3.0 MiB/s wr, 118 op/s
Dec 06 07:06:13 compute-0 nova_compute[251992]: 2025-12-06 07:06:13.321 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:13.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:13 compute-0 nova_compute[251992]: 2025-12-06 07:06:13.489 251996 DEBUG nova.network.neutron [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Successfully updated port: 532b1d43-25cc-449a-8e3d-fd25a56a26b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:06:13 compute-0 nova_compute[251992]: 2025-12-06 07:06:13.509 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:13 compute-0 nova_compute[251992]: 2025-12-06 07:06:13.509 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquired lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:13 compute-0 nova_compute[251992]: 2025-12-06 07:06:13.509 251996 DEBUG nova.network.neutron [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:06:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:13.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:13 compute-0 nova_compute[251992]: 2025-12-06 07:06:13.810 251996 DEBUG nova.network.neutron [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:13 compute-0 nova_compute[251992]: 2025-12-06 07:06:13.843 251996 DEBUG nova.compute.manager [req-6378b4a2-e3f7-4c0f-88b5-1e37a9f0dff8 req-e0aedab4-268c-4459-85d0-185dfdacb31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Received event network-changed-532b1d43-25cc-449a-8e3d-fd25a56a26b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:06:13 compute-0 nova_compute[251992]: 2025-12-06 07:06:13.844 251996 DEBUG nova.compute.manager [req-6378b4a2-e3f7-4c0f-88b5-1e37a9f0dff8 req-e0aedab4-268c-4459-85d0-185dfdacb31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Refreshing instance network info cache due to event network-changed-532b1d43-25cc-449a-8e3d-fd25a56a26b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:06:13 compute-0 nova_compute[251992]: 2025-12-06 07:06:13.844 251996 DEBUG oslo_concurrency.lockutils [req-6378b4a2-e3f7-4c0f-88b5-1e37a9f0dff8 req-e0aedab4-268c-4459-85d0-185dfdacb31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:14 compute-0 ceph-mon[74339]: pgmap v1372: 305 pgs: 305 active+clean; 184 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 655 KiB/s rd, 3.0 MiB/s wr, 118 op/s
Dec 06 07:06:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2867888828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2772567101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 247 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.9 MiB/s wr, 119 op/s
Dec 06 07:06:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:15.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:15.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:15.661 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.777 251996 DEBUG nova.network.neutron [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updating instance_info_cache with network_info: [{"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.806 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Releasing lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.807 251996 DEBUG nova.compute.manager [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Instance network_info: |[{"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.807 251996 DEBUG oslo_concurrency.lockutils [req-6378b4a2-e3f7-4c0f-88b5-1e37a9f0dff8 req-e0aedab4-268c-4459-85d0-185dfdacb31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.807 251996 DEBUG nova.network.neutron [req-6378b4a2-e3f7-4c0f-88b5-1e37a9f0dff8 req-e0aedab4-268c-4459-85d0-185dfdacb31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Refreshing network info cache for port 532b1d43-25cc-449a-8e3d-fd25a56a26b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.810 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Start _get_guest_xml network_info=[{"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.814 251996 WARNING nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.818 251996 DEBUG nova.virt.libvirt.host [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.819 251996 DEBUG nova.virt.libvirt.host [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.821 251996 DEBUG nova.virt.libvirt.host [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.821 251996 DEBUG nova.virt.libvirt.host [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.822 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.823 251996 DEBUG nova.virt.hardware [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.823 251996 DEBUG nova.virt.hardware [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.823 251996 DEBUG nova.virt.hardware [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.823 251996 DEBUG nova.virt.hardware [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.824 251996 DEBUG nova.virt.hardware [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.824 251996 DEBUG nova.virt.hardware [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.824 251996 DEBUG nova.virt.hardware [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.825 251996 DEBUG nova.virt.hardware [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.825 251996 DEBUG nova.virt.hardware [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.825 251996 DEBUG nova.virt.hardware [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.825 251996 DEBUG nova.virt.hardware [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:06:15 compute-0 nova_compute[251992]: 2025-12-06 07:06:15.828 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1077149545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.255 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.281 251996 DEBUG nova.storage.rbd_utils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image f10e044d-9118-49ce-b890-7ec41fc40cb0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.285 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Dec 06 07:06:16 compute-0 ceph-mon[74339]: pgmap v1373: 305 pgs: 305 active+clean; 247 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.9 MiB/s wr, 119 op/s
Dec 06 07:06:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1077149545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1384302522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Dec 06 07:06:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Dec 06 07:06:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/117782657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.750 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.752 251996 DEBUG nova.virt.libvirt.vif [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:06:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-737606194',display_name='tempest-FloatingIPsAssociationTestJSON-server-737606194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-737606194',id=34,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503b2dfdce9d47598a8b9de4b15e1d45',ramdisk_id='',reservation_id='r-2gf6bx6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-263213844',owner_user_name
='tempest-FloatingIPsAssociationTestJSON-263213844-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:06:08Z,user_data=None,user_id='c9025bd3f0854dff80d9408800d6b76b',uuid=f10e044d-9118-49ce-b890-7ec41fc40cb0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.753 251996 DEBUG nova.network.os_vif_util [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Converting VIF {"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.753 251996 DEBUG nova.network.os_vif_util [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:52:5f,bridge_name='br-int',has_traffic_filtering=True,id=532b1d43-25cc-449a-8e3d-fd25a56a26b4,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap532b1d43-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.755 251996 DEBUG nova.objects.instance [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lazy-loading 'pci_devices' on Instance uuid f10e044d-9118-49ce-b890-7ec41fc40cb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.777 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:06:16 compute-0 nova_compute[251992]:   <uuid>f10e044d-9118-49ce-b890-7ec41fc40cb0</uuid>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   <name>instance-00000022</name>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-737606194</nova:name>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:06:15</nova:creationTime>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <nova:user uuid="c9025bd3f0854dff80d9408800d6b76b">tempest-FloatingIPsAssociationTestJSON-263213844-project-member</nova:user>
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <nova:project uuid="503b2dfdce9d47598a8b9de4b15e1d45">tempest-FloatingIPsAssociationTestJSON-263213844</nova:project>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <nova:port uuid="532b1d43-25cc-449a-8e3d-fd25a56a26b4">
Dec 06 07:06:16 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <system>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <entry name="serial">f10e044d-9118-49ce-b890-7ec41fc40cb0</entry>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <entry name="uuid">f10e044d-9118-49ce-b890-7ec41fc40cb0</entry>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     </system>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   <os>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   </os>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   <features>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   </features>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f10e044d-9118-49ce-b890-7ec41fc40cb0_disk">
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       </source>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f10e044d-9118-49ce-b890-7ec41fc40cb0_disk.config">
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       </source>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:06:16 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:e9:52:5f"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <target dev="tap532b1d43-25"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/f10e044d-9118-49ce-b890-7ec41fc40cb0/console.log" append="off"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <video>
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     </video>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:06:16 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:06:16 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:06:16 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:06:16 compute-0 nova_compute[251992]: </domain>
Dec 06 07:06:16 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.778 251996 DEBUG nova.compute.manager [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Preparing to wait for external event network-vif-plugged-532b1d43-25cc-449a-8e3d-fd25a56a26b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.778 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "f10e044d-9118-49ce-b890-7ec41fc40cb0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.779 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "f10e044d-9118-49ce-b890-7ec41fc40cb0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.779 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "f10e044d-9118-49ce-b890-7ec41fc40cb0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.780 251996 DEBUG nova.virt.libvirt.vif [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:06:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-737606194',display_name='tempest-FloatingIPsAssociationTestJSON-server-737606194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-737606194',id=34,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503b2dfdce9d47598a8b9de4b15e1d45',ramdisk_id='',reservation_id='r-2gf6bx6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-263213844',owner
_user_name='tempest-FloatingIPsAssociationTestJSON-263213844-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:06:08Z,user_data=None,user_id='c9025bd3f0854dff80d9408800d6b76b',uuid=f10e044d-9118-49ce-b890-7ec41fc40cb0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.780 251996 DEBUG nova.network.os_vif_util [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Converting VIF {"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.780 251996 DEBUG nova.network.os_vif_util [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:52:5f,bridge_name='br-int',has_traffic_filtering=True,id=532b1d43-25cc-449a-8e3d-fd25a56a26b4,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap532b1d43-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.781 251996 DEBUG os_vif [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:52:5f,bridge_name='br-int',has_traffic_filtering=True,id=532b1d43-25cc-449a-8e3d-fd25a56a26b4,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap532b1d43-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.781 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.782 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.782 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.786 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.787 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap532b1d43-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.788 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap532b1d43-25, col_values=(('external_ids', {'iface-id': '532b1d43-25cc-449a-8e3d-fd25a56a26b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:52:5f', 'vm-uuid': 'f10e044d-9118-49ce-b890-7ec41fc40cb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.789 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.791 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:06:16 compute-0 NetworkManager[48965]: <info>  [1765004776.7908] manager: (tap532b1d43-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.796 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.797 251996 INFO os_vif [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:52:5f,bridge_name='br-int',has_traffic_filtering=True,id=532b1d43-25cc-449a-8e3d-fd25a56a26b4,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap532b1d43-25')
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.864 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.865 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.865 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] No VIF found with MAC fa:16:3e:e9:52:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.865 251996 INFO nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Using config drive
Dec 06 07:06:16 compute-0 nova_compute[251992]: 2025-12-06 07:06:16.889 251996 DEBUG nova.storage.rbd_utils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image f10e044d-9118-49ce-b890-7ec41fc40cb0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:17 compute-0 sudo[277104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:06:17 compute-0 sudo[277104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:06:17 compute-0 sudo[277104]: pam_unix(sudo:session): session closed for user root
Dec 06 07:06:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 272 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 9.1 MiB/s wr, 198 op/s
Dec 06 07:06:17 compute-0 sudo[277129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:06:17 compute-0 sudo[277129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:06:17 compute-0 sudo[277129]: pam_unix(sudo:session): session closed for user root
Dec 06 07:06:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:17.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:17 compute-0 ceph-mon[74339]: osdmap e180: 3 total, 3 up, 3 in
Dec 06 07:06:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/117782657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1542511494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:17.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:17 compute-0 nova_compute[251992]: 2025-12-06 07:06:17.715 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:17 compute-0 nova_compute[251992]: 2025-12-06 07:06:17.822 251996 INFO nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Creating config drive at /var/lib/nova/instances/f10e044d-9118-49ce-b890-7ec41fc40cb0/disk.config
Dec 06 07:06:17 compute-0 nova_compute[251992]: 2025-12-06 07:06:17.831 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f10e044d-9118-49ce-b890-7ec41fc40cb0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp81cf8338 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:17 compute-0 nova_compute[251992]: 2025-12-06 07:06:17.960 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f10e044d-9118-49ce-b890-7ec41fc40cb0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp81cf8338" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:17 compute-0 nova_compute[251992]: 2025-12-06 07:06:17.988 251996 DEBUG nova.storage.rbd_utils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] rbd image f10e044d-9118-49ce-b890-7ec41fc40cb0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:17 compute-0 nova_compute[251992]: 2025-12-06 07:06:17.992 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f10e044d-9118-49ce-b890-7ec41fc40cb0/disk.config f10e044d-9118-49ce-b890-7ec41fc40cb0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.056 251996 DEBUG nova.network.neutron [req-6378b4a2-e3f7-4c0f-88b5-1e37a9f0dff8 req-e0aedab4-268c-4459-85d0-185dfdacb31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updated VIF entry in instance network info cache for port 532b1d43-25cc-449a-8e3d-fd25a56a26b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.057 251996 DEBUG nova.network.neutron [req-6378b4a2-e3f7-4c0f-88b5-1e37a9f0dff8 req-e0aedab4-268c-4459-85d0-185dfdacb31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updating instance_info_cache with network_info: [{"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.090 251996 DEBUG oslo_concurrency.lockutils [req-6378b4a2-e3f7-4c0f-88b5-1e37a9f0dff8 req-e0aedab4-268c-4459-85d0-185dfdacb31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.149 251996 DEBUG oslo_concurrency.processutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f10e044d-9118-49ce-b890-7ec41fc40cb0/disk.config f10e044d-9118-49ce-b890-7ec41fc40cb0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.150 251996 INFO nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Deleting local config drive /var/lib/nova/instances/f10e044d-9118-49ce-b890-7ec41fc40cb0/disk.config because it was imported into RBD.
Dec 06 07:06:18 compute-0 kernel: tap532b1d43-25: entered promiscuous mode
Dec 06 07:06:18 compute-0 NetworkManager[48965]: <info>  [1765004778.2441] manager: (tap532b1d43-25): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.243 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:18 compute-0 ovn_controller[147168]: 2025-12-06T07:06:18Z|00081|binding|INFO|Claiming lport 532b1d43-25cc-449a-8e3d-fd25a56a26b4 for this chassis.
Dec 06 07:06:18 compute-0 ovn_controller[147168]: 2025-12-06T07:06:18Z|00082|binding|INFO|532b1d43-25cc-449a-8e3d-fd25a56a26b4: Claiming fa:16:3e:e9:52:5f 10.100.0.14
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.246 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.250 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.259 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:52:5f 10.100.0.14'], port_security=['fa:16:3e:e9:52:5f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f10e044d-9118-49ce-b890-7ec41fc40cb0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-344a2c5d-4516-4e02-9384-4797cfc76497', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503b2dfdce9d47598a8b9de4b15e1d45', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd9b8be6-903c-4fbc-b82c-3bc27105f7c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9327b272-f30f-48a7-bcf4-62320d61cbdc, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=532b1d43-25cc-449a-8e3d-fd25a56a26b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.261 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 532b1d43-25cc-449a-8e3d-fd25a56a26b4 in datapath 344a2c5d-4516-4e02-9384-4797cfc76497 bound to our chassis
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.262 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 344a2c5d-4516-4e02-9384-4797cfc76497
Dec 06 07:06:18 compute-0 systemd-udevd[277207]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:06:18 compute-0 systemd-machined[212986]: New machine qemu-16-instance-00000022.
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.276 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe09c62-e622-4548-99fc-54431bf16a74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.278 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap344a2c5d-41 in ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.280 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap344a2c5d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.280 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e15bc018-bc2a-4b76-a7db-cb538f6e44db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.281 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0df19a06-8379-4456-9254-7271dfbb647e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 NetworkManager[48965]: <info>  [1765004778.2856] device (tap532b1d43-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:06:18 compute-0 NetworkManager[48965]: <info>  [1765004778.2864] device (tap532b1d43-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.297 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[184b503d-a9af-45c1-9df8-381cd2da5618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.322 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[38d33742-cd1b-491d-b445-915817941841]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:06:18
Dec 06 07:06:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:06:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:06:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Dec 06 07:06:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:06:18 compute-0 ovn_controller[147168]: 2025-12-06T07:06:18Z|00083|binding|INFO|Setting lport 532b1d43-25cc-449a-8e3d-fd25a56a26b4 ovn-installed in OVS
Dec 06 07:06:18 compute-0 ovn_controller[147168]: 2025-12-06T07:06:18Z|00084|binding|INFO|Setting lport 532b1d43-25cc-449a-8e3d-fd25a56a26b4 up in Southbound
Dec 06 07:06:18 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000022.
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.325 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.356 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[29587709-7385-491f-aa5c-313f6c3d6d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.361 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a4220c6f-6da4-4f77-98ff-78f4832667b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 NetworkManager[48965]: <info>  [1765004778.3620] manager: (tap344a2c5d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.387 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[53f2f8cc-c05d-4983-8ac4-3aa58990d897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.389 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[9ded29d1-c036-454e-8ca6-0bd889ea1222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 NetworkManager[48965]: <info>  [1765004778.4083] device (tap344a2c5d-40): carrier: link connected
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.416 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf6cec6-d2ce-4ed4-b242-25156d6a9119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.436 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa7319a-5210-4015-b6ba-b0e09095e226]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap344a2c5d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:2c:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505099, 'reachable_time': 24997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277240, 'error': None, 'target': 'ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.455 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3517d262-bf5a-4864-994d-bf0371722707]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefd:2c87'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505099, 'tstamp': 505099}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277241, 'error': None, 'target': 'ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.470 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b74255b3-fac7-4786-9682-2ec045c23a49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap344a2c5d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:2c:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505099, 'reachable_time': 24997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277243, 'error': None, 'target': 'ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.496 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[274617fb-991a-4ee8-9fc2-c5abf24cbfcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.550 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4dce42-da64-439a-8480-a73d24572bdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.551 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap344a2c5d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.552 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.552 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap344a2c5d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.554 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:18 compute-0 NetworkManager[48965]: <info>  [1765004778.5546] manager: (tap344a2c5d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec 06 07:06:18 compute-0 kernel: tap344a2c5d-40: entered promiscuous mode
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.558 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap344a2c5d-40, col_values=(('external_ids', {'iface-id': '0a001355-8429-4726-b883-743f03fd3e79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:06:18 compute-0 ovn_controller[147168]: 2025-12-06T07:06:18Z|00085|binding|INFO|Releasing lport 0a001355-8429-4726-b883-743f03fd3e79 from this chassis (sb_readonly=0)
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.565 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.580 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.581 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/344a2c5d-4516-4e02-9384-4797cfc76497.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/344a2c5d-4516-4e02-9384-4797cfc76497.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.582 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2810ba38-edd8-48a9-b5df-b7dcc46fe695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.583 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-344a2c5d-4516-4e02-9384-4797cfc76497
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/344a2c5d-4516-4e02-9384-4797cfc76497.pid.haproxy
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 344a2c5d-4516-4e02-9384-4797cfc76497
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:06:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:18.583 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497', 'env', 'PROCESS_TAG=haproxy-344a2c5d-4516-4e02-9384-4797cfc76497', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/344a2c5d-4516-4e02-9384-4797cfc76497.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:06:18 compute-0 ceph-mon[74339]: pgmap v1375: 305 pgs: 305 active+clean; 272 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 9.1 MiB/s wr, 198 op/s
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.617 251996 DEBUG nova.compute.manager [req-94934d15-fa21-4499-9aea-94ae6130b3ba req-c98061ce-124f-48d5-9097-e50f4bb4e436 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Received event network-vif-plugged-532b1d43-25cc-449a-8e3d-fd25a56a26b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.617 251996 DEBUG oslo_concurrency.lockutils [req-94934d15-fa21-4499-9aea-94ae6130b3ba req-c98061ce-124f-48d5-9097-e50f4bb4e436 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f10e044d-9118-49ce-b890-7ec41fc40cb0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.618 251996 DEBUG oslo_concurrency.lockutils [req-94934d15-fa21-4499-9aea-94ae6130b3ba req-c98061ce-124f-48d5-9097-e50f4bb4e436 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f10e044d-9118-49ce-b890-7ec41fc40cb0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.618 251996 DEBUG oslo_concurrency.lockutils [req-94934d15-fa21-4499-9aea-94ae6130b3ba req-c98061ce-124f-48d5-9097-e50f4bb4e436 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f10e044d-9118-49ce-b890-7ec41fc40cb0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:18 compute-0 nova_compute[251992]: 2025-12-06 07:06:18.618 251996 DEBUG nova.compute.manager [req-94934d15-fa21-4499-9aea-94ae6130b3ba req-c98061ce-124f-48d5-9097-e50f4bb4e436 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Processing event network-vif-plugged-532b1d43-25cc-449a-8e3d-fd25a56a26b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:06:18 compute-0 podman[277275]: 2025-12-06 07:06:18.940707793 +0000 UTC m=+0.049886401 container create 312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 07:06:18 compute-0 systemd[1]: Started libpod-conmon-312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db.scope.
Dec 06 07:06:19 compute-0 podman[277275]: 2025-12-06 07:06:18.918616526 +0000 UTC m=+0.027795154 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:06:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c29c13c9dcfd4132108087b2566c86bad0d4e8b7d1a3db00ad9d864c6653860/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:06:19 compute-0 podman[277275]: 2025-12-06 07:06:19.031413068 +0000 UTC m=+0.140591746 container init 312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:06:19 compute-0 podman[277275]: 2025-12-06 07:06:19.03627372 +0000 UTC m=+0.145452368 container start 312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 07:06:19 compute-0 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[277291]: [NOTICE]   (277295) : New worker (277297) forked
Dec 06 07:06:19 compute-0 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[277291]: [NOTICE]   (277295) : Loading success.
Dec 06 07:06:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 272 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 8.9 MiB/s wr, 193 op/s
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.341 251996 DEBUG nova.compute.manager [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.343 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004779.340912, f10e044d-9118-49ce-b890-7ec41fc40cb0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.343 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] VM Started (Lifecycle Event)
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.346 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:06:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:19.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.350 251996 INFO nova.virt.libvirt.driver [-] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Instance spawned successfully.
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.351 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.377 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.382 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.385 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.386 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.386 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.387 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.387 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.388 251996 DEBUG nova.virt.libvirt.driver [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.470 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.471 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004779.3419209, f10e044d-9118-49ce-b890-7ec41fc40cb0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.471 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] VM Paused (Lifecycle Event)
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.504 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.508 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004779.3455775, f10e044d-9118-49ce-b890-7ec41fc40cb0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.509 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] VM Resumed (Lifecycle Event)
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.516 251996 INFO nova.compute.manager [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Took 9.39 seconds to spawn the instance on the hypervisor.
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.517 251996 DEBUG nova.compute.manager [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.529 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.531 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.579 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.628 251996 INFO nova.compute.manager [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Took 11.66 seconds to build instance.
Dec 06 07:06:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:19.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:19 compute-0 nova_compute[251992]: 2025-12-06 07:06:19.685 251996 DEBUG oslo_concurrency.lockutils [None req-dba3c703-a608-4a2e-baca-fe4a7dde1f3b c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "f10e044d-9118-49ce-b890-7ec41fc40cb0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:20 compute-0 ceph-mon[74339]: pgmap v1376: 305 pgs: 305 active+clean; 272 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 8.9 MiB/s wr, 193 op/s
Dec 06 07:06:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1535216654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1582107465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:20 compute-0 nova_compute[251992]: 2025-12-06 07:06:20.709 251996 DEBUG nova.compute.manager [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Received event network-vif-plugged-532b1d43-25cc-449a-8e3d-fd25a56a26b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:06:20 compute-0 nova_compute[251992]: 2025-12-06 07:06:20.711 251996 DEBUG oslo_concurrency.lockutils [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f10e044d-9118-49ce-b890-7ec41fc40cb0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:20 compute-0 nova_compute[251992]: 2025-12-06 07:06:20.712 251996 DEBUG oslo_concurrency.lockutils [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f10e044d-9118-49ce-b890-7ec41fc40cb0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:20 compute-0 nova_compute[251992]: 2025-12-06 07:06:20.713 251996 DEBUG oslo_concurrency.lockutils [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f10e044d-9118-49ce-b890-7ec41fc40cb0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:20 compute-0 nova_compute[251992]: 2025-12-06 07:06:20.714 251996 DEBUG nova.compute.manager [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] No waiting events found dispatching network-vif-plugged-532b1d43-25cc-449a-8e3d-fd25a56a26b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:06:20 compute-0 nova_compute[251992]: 2025-12-06 07:06:20.715 251996 WARNING nova.compute.manager [req-4f256411-765b-4798-ab83-1272683addb6 req-173f4c7c-12b1-4ce8-a47e-251c9002c959 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Received unexpected event network-vif-plugged-532b1d43-25cc-449a-8e3d-fd25a56a26b4 for instance with vm_state active and task_state None.
Dec 06 07:06:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 169 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 7.5 MiB/s rd, 7.4 MiB/s wr, 335 op/s
Dec 06 07:06:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:21.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/182648734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:21.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:21 compute-0 nova_compute[251992]: 2025-12-06 07:06:21.792 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Dec 06 07:06:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Dec 06 07:06:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Dec 06 07:06:22 compute-0 ceph-mon[74339]: pgmap v1377: 305 pgs: 305 active+clean; 169 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 7.5 MiB/s rd, 7.4 MiB/s wr, 335 op/s
Dec 06 07:06:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2621079158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:22 compute-0 ceph-mon[74339]: osdmap e181: 3 total, 3 up, 3 in
Dec 06 07:06:22 compute-0 nova_compute[251992]: 2025-12-06 07:06:22.716 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 162 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 9.0 MiB/s rd, 5.4 MiB/s wr, 385 op/s
Dec 06 07:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:06:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:23.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:06:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:23.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:24 compute-0 ceph-mon[74339]: pgmap v1379: 305 pgs: 305 active+clean; 162 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 9.0 MiB/s rd, 5.4 MiB/s wr, 385 op/s
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 181 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.4 MiB/s wr, 422 op/s
Dec 06 07:06:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:25.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:25 compute-0 podman[277351]: 2025-12-06 07:06:25.448032572 +0000 UTC m=+0.104095659 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029802150218686024 of space, bias 1.0, pg target 0.8940645065605808 quantized to 32 (current 32)
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:06:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:06:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:25.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:26 compute-0 ceph-mon[74339]: pgmap v1380: 305 pgs: 305 active+clean; 181 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.4 MiB/s wr, 422 op/s
Dec 06 07:06:26 compute-0 nova_compute[251992]: 2025-12-06 07:06:26.795 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 181 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 9.2 MiB/s rd, 2.2 MiB/s wr, 410 op/s
Dec 06 07:06:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:27.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:27.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:27 compute-0 nova_compute[251992]: 2025-12-06 07:06:27.718 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:27 compute-0 nova_compute[251992]: 2025-12-06 07:06:27.834 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "f0140185-4a12-4407-b799-4c4a2b8ebb6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:27 compute-0 nova_compute[251992]: 2025-12-06 07:06:27.834 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "f0140185-4a12-4407-b799-4c4a2b8ebb6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:27 compute-0 nova_compute[251992]: 2025-12-06 07:06:27.852 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:06:27 compute-0 nova_compute[251992]: 2025-12-06 07:06:27.875 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "e87113ac-c01b-4dee-8ade-e02cb3ff1be9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:27 compute-0 nova_compute[251992]: 2025-12-06 07:06:27.875 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "e87113ac-c01b-4dee-8ade-e02cb3ff1be9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:27 compute-0 nova_compute[251992]: 2025-12-06 07:06:27.904 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:06:27 compute-0 nova_compute[251992]: 2025-12-06 07:06:27.960 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:27 compute-0 nova_compute[251992]: 2025-12-06 07:06:27.962 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:27 compute-0 nova_compute[251992]: 2025-12-06 07:06:27.979 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:06:27 compute-0 nova_compute[251992]: 2025-12-06 07:06:27.980 251996 INFO nova.compute.claims [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:06:27 compute-0 nova_compute[251992]: 2025-12-06 07:06:27.986 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.163 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:28 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4219645129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.674 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.680 251996 DEBUG nova.compute.provider_tree [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.701 251996 DEBUG nova.scheduler.client.report [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:06:28 compute-0 ceph-mon[74339]: pgmap v1381: 305 pgs: 305 active+clean; 181 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 9.2 MiB/s rd, 2.2 MiB/s wr, 410 op/s
Dec 06 07:06:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3325075958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4219645129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.729 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.730 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.738 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.739 251996 INFO nova.compute.claims [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.764 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "989bc177-ed4b-4973-9da7-60557a01038f" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.765 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "989bc177-ed4b-4973-9da7-60557a01038f" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.795 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "989bc177-ed4b-4973-9da7-60557a01038f" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.797 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.858 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.859 251996 DEBUG nova.network.neutron [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.885 251996 INFO nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.910 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:06:28 compute-0 nova_compute[251992]: 2025-12-06 07:06:28.943 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.023 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.025 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.025 251996 INFO nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Creating image(s)
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.058 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 181 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 9.2 MiB/s rd, 2.2 MiB/s wr, 410 op/s
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.112 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.138 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.143 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.209 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.210 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.211 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.211 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.249 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.254 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.289 251996 DEBUG nova.network.neutron [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.289 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:06:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:29.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2676091815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.416 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.422 251996 DEBUG nova.compute.provider_tree [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.439 251996 DEBUG nova.scheduler.client.report [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.472 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.489 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "989bc177-ed4b-4973-9da7-60557a01038f" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.489 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "989bc177-ed4b-4973-9da7-60557a01038f" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.547 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "989bc177-ed4b-4973-9da7-60557a01038f" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.548 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.617 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.618 251996 DEBUG nova.network.neutron [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.650 251996 INFO nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:06:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:29.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.681 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.780 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.782 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.782 251996 INFO nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Creating image(s)
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.824 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.869 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.899 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.904 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.939 251996 DEBUG nova.network.neutron [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 07:06:29 compute-0 nova_compute[251992]: 2025-12-06 07:06:29.940 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.000 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.000 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.001 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.002 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.032 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.038 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2676091815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.095 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.840s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.216 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] resizing rbd image f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:06:30 compute-0 podman[277667]: 2025-12-06 07:06:30.413222129 +0000 UTC m=+0.061940247 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:06:30 compute-0 podman[277668]: 2025-12-06 07:06:30.430025774 +0000 UTC m=+0.078116646 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.593 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.631 251996 DEBUG nova.objects.instance [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'migration_context' on Instance uuid f0140185-4a12-4407-b799-4c4a2b8ebb6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.671 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] resizing rbd image e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.738 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.739 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Ensure instance console log exists: /var/lib/nova/instances/f0140185-4a12-4407-b799-4c4a2b8ebb6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.740 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.740 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.741 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.743 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.784 251996 DEBUG nova.objects.instance [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'migration_context' on Instance uuid e87113ac-c01b-4dee-8ade-e02cb3ff1be9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.789 251996 WARNING nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.794 251996 DEBUG nova.virt.libvirt.host [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.795 251996 DEBUG nova.virt.libvirt.host [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.799 251996 DEBUG nova.virt.libvirt.host [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.799 251996 DEBUG nova.virt.libvirt.host [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.801 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.801 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.802 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.802 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.803 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.803 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.804 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.804 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.804 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.805 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.805 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.806 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.810 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.930 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.932 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Ensure instance console log exists: /var/lib/nova/instances/e87113ac-c01b-4dee-8ade-e02cb3ff1be9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.933 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.933 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.934 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.936 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.942 251996 WARNING nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.946 251996 DEBUG nova.virt.libvirt.host [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.947 251996 DEBUG nova.virt.libvirt.host [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.950 251996 DEBUG nova.virt.libvirt.host [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.950 251996 DEBUG nova.virt.libvirt.host [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.952 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.952 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.953 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.953 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.953 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.953 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.953 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.954 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.954 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.954 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.954 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.954 251996 DEBUG nova.virt.hardware [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:06:30 compute-0 nova_compute[251992]: 2025-12-06 07:06:30.957 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 283 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.2 MiB/s wr, 303 op/s
Dec 06 07:06:31 compute-0 ceph-mon[74339]: pgmap v1382: 305 pgs: 305 active+clean; 181 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 9.2 MiB/s rd, 2.2 MiB/s wr, 410 op/s
Dec 06 07:06:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3699045936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.286 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.317 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.323 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:31.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700628066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.450 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.484 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.488 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:31.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.798 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1686269888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.860 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.861 251996 DEBUG nova.objects.instance [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'pci_devices' on Instance uuid f0140185-4a12-4407-b799-4c4a2b8ebb6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.875 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <uuid>f0140185-4a12-4407-b799-4c4a2b8ebb6c</uuid>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <name>instance-00000026</name>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:name>tempest-ServersOnMultiNodesTest-server-1819293355-1</nova:name>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:06:30</nova:creationTime>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:user uuid="21af856fbd8843c2969956a9587ca48a">tempest-ServersOnMultiNodesTest-646911698-project-member</nova:user>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:project uuid="2c1f48b58dd04b828c83d6350cc4e13d">tempest-ServersOnMultiNodesTest-646911698</nova:project>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <system>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <entry name="serial">f0140185-4a12-4407-b799-4c4a2b8ebb6c</entry>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <entry name="uuid">f0140185-4a12-4407-b799-4c4a2b8ebb6c</entry>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </system>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <os>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </os>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <features>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </features>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk">
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       </source>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk.config">
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       </source>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/f0140185-4a12-4407-b799-4c4a2b8ebb6c/console.log" append="off"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <video>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </video>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:06:31 compute-0 nova_compute[251992]: </domain>
Dec 06 07:06:31 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.932 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.933 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.933 251996 INFO nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Using config drive
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.955 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3337467087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.977 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.978 251996 DEBUG nova.objects.instance [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'pci_devices' on Instance uuid e87113ac-c01b-4dee-8ade-e02cb3ff1be9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:31 compute-0 nova_compute[251992]: 2025-12-06 07:06:31.993 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <uuid>e87113ac-c01b-4dee-8ade-e02cb3ff1be9</uuid>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <name>instance-00000027</name>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:name>tempest-ServersOnMultiNodesTest-server-1819293355-2</nova:name>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:06:30</nova:creationTime>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:user uuid="21af856fbd8843c2969956a9587ca48a">tempest-ServersOnMultiNodesTest-646911698-project-member</nova:user>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <nova:project uuid="2c1f48b58dd04b828c83d6350cc4e13d">tempest-ServersOnMultiNodesTest-646911698</nova:project>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <system>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <entry name="serial">e87113ac-c01b-4dee-8ade-e02cb3ff1be9</entry>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <entry name="uuid">e87113ac-c01b-4dee-8ade-e02cb3ff1be9</entry>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </system>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <os>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </os>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <features>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </features>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk">
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       </source>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk.config">
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       </source>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:06:31 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/e87113ac-c01b-4dee-8ade-e02cb3ff1be9/console.log" append="off"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <video>
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </video>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:06:31 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:06:31 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:06:31 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:06:31 compute-0 nova_compute[251992]: </domain>
Dec 06 07:06:31 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.038 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.038 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.039 251996 INFO nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Using config drive
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.062 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.158 251996 INFO nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Creating config drive at /var/lib/nova/instances/f0140185-4a12-4407-b799-4c4a2b8ebb6c/disk.config
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.164 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0140185-4a12-4407-b799-4c4a2b8ebb6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_mvmvdbm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.253 251996 INFO nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Creating config drive at /var/lib/nova/instances/e87113ac-c01b-4dee-8ade-e02cb3ff1be9/disk.config
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.259 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e87113ac-c01b-4dee-8ade-e02cb3ff1be9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4i0f7n3x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.291 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0140185-4a12-4407-b799-4c4a2b8ebb6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_mvmvdbm" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.318 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.322 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f0140185-4a12-4407-b799-4c4a2b8ebb6c/disk.config f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:32 compute-0 ovn_controller[147168]: 2025-12-06T07:06:32Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e9:52:5f 10.100.0.14
Dec 06 07:06:32 compute-0 ovn_controller[147168]: 2025-12-06T07:06:32Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e9:52:5f 10.100.0.14
Dec 06 07:06:32 compute-0 ceph-mon[74339]: pgmap v1383: 305 pgs: 305 active+clean; 283 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.2 MiB/s wr, 303 op/s
Dec 06 07:06:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3699045936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2700628066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1686269888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3337467087' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.610 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f0140185-4a12-4407-b799-4c4a2b8ebb6c/disk.config f0140185-4a12-4407-b799-4c4a2b8ebb6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.611 251996 INFO nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Deleting local config drive /var/lib/nova/instances/f0140185-4a12-4407-b799-4c4a2b8ebb6c/disk.config because it was imported into RBD.
Dec 06 07:06:32 compute-0 systemd-machined[212986]: New machine qemu-17-instance-00000026.
Dec 06 07:06:32 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000026.
Dec 06 07:06:32 compute-0 nova_compute[251992]: 2025-12-06 07:06:32.721 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 312 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 7.0 MiB/s wr, 262 op/s
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.344 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004793.343526, f0140185-4a12-4407-b799-4c4a2b8ebb6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.344 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] VM Resumed (Lifecycle Event)
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.350 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.351 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.354 251996 INFO nova.virt.libvirt.driver [-] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Instance spawned successfully.
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.355 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:06:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:33.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.370 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.376 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.379 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.380 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.381 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.381 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.382 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.382 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.395 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e87113ac-c01b-4dee-8ade-e02cb3ff1be9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4i0f7n3x" returned: 0 in 1.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.427 251996 DEBUG nova.storage.rbd_utils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] rbd image e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.430 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e87113ac-c01b-4dee-8ade-e02cb3ff1be9/disk.config e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.455 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.456 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004793.3498683, f0140185-4a12-4407-b799-4c4a2b8ebb6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.456 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] VM Started (Lifecycle Event)
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.460 251996 INFO nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Took 4.44 seconds to spawn the instance on the hypervisor.
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.461 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.487 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.492 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.526 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:06:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2455388943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1436394406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.542 251996 INFO nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Took 5.63 seconds to build instance.
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.568 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "f0140185-4a12-4407-b799-4c4a2b8ebb6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.606 251996 DEBUG oslo_concurrency.processutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e87113ac-c01b-4dee-8ade-e02cb3ff1be9/disk.config e87113ac-c01b-4dee-8ade-e02cb3ff1be9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:33 compute-0 nova_compute[251992]: 2025-12-06 07:06:33.607 251996 INFO nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Deleting local config drive /var/lib/nova/instances/e87113ac-c01b-4dee-8ade-e02cb3ff1be9/disk.config because it was imported into RBD.
Dec 06 07:06:33 compute-0 systemd-machined[212986]: New machine qemu-18-instance-00000027.
Dec 06 07:06:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:33.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:33 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000027.
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.211 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004794.210932, e87113ac-c01b-4dee-8ade-e02cb3ff1be9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.212 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] VM Resumed (Lifecycle Event)
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.214 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.214 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.217 251996 INFO nova.virt.libvirt.driver [-] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Instance spawned successfully.
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.217 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.237 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.242 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.246 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.246 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.247 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.247 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.248 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.248 251996 DEBUG nova.virt.libvirt.driver [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.279 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.280 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004794.211796, e87113ac-c01b-4dee-8ade-e02cb3ff1be9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.280 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] VM Started (Lifecycle Event)
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.308 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.311 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.326 251996 INFO nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Took 4.54 seconds to spawn the instance on the hypervisor.
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.327 251996 DEBUG nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.341 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.405 251996 INFO nova.compute.manager [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Took 6.45 seconds to build instance.
Dec 06 07:06:34 compute-0 nova_compute[251992]: 2025-12-06 07:06:34.424 251996 DEBUG oslo_concurrency.lockutils [None req-3a973853-8fef-40d9-8203-ad15f6b78ee2 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "e87113ac-c01b-4dee-8ade-e02cb3ff1be9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:34 compute-0 ceph-mon[74339]: pgmap v1384: 305 pgs: 305 active+clean; 312 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 7.0 MiB/s wr, 262 op/s
Dec 06 07:06:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 362 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 9.0 MiB/s wr, 346 op/s
Dec 06 07:06:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:35.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:35.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:36 compute-0 ceph-mon[74339]: pgmap v1385: 305 pgs: 305 active+clean; 362 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 9.0 MiB/s wr, 346 op/s
Dec 06 07:06:36 compute-0 nova_compute[251992]: 2025-12-06 07:06:36.802 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 406 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 12 MiB/s wr, 498 op/s
Dec 06 07:06:37 compute-0 sudo[278153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:06:37 compute-0 sudo[278153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:06:37 compute-0 sudo[278153]: pam_unix(sudo:session): session closed for user root
Dec 06 07:06:37 compute-0 sudo[278178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:06:37 compute-0 sudo[278178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:06:37 compute-0 sudo[278178]: pam_unix(sudo:session): session closed for user root
Dec 06 07:06:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:37.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:37 compute-0 NetworkManager[48965]: <info>  [1765004797.6531] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/48)
Dec 06 07:06:37 compute-0 NetworkManager[48965]: <info>  [1765004797.6539] device (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 07:06:37 compute-0 NetworkManager[48965]: <info>  [1765004797.6554] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/49)
Dec 06 07:06:37 compute-0 NetworkManager[48965]: <info>  [1765004797.6560] device (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 06 07:06:37 compute-0 NetworkManager[48965]: <info>  [1765004797.6573] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Dec 06 07:06:37 compute-0 NetworkManager[48965]: <info>  [1765004797.6582] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Dec 06 07:06:37 compute-0 NetworkManager[48965]: <info>  [1765004797.6588] device (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 06 07:06:37 compute-0 NetworkManager[48965]: <info>  [1765004797.6593] device (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 06 07:06:37 compute-0 nova_compute[251992]: 2025-12-06 07:06:37.659 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:37.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:37 compute-0 nova_compute[251992]: 2025-12-06 07:06:37.820 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:37 compute-0 ovn_controller[147168]: 2025-12-06T07:06:37Z|00086|binding|INFO|Releasing lport 0a001355-8429-4726-b883-743f03fd3e79 from this chassis (sb_readonly=0)
Dec 06 07:06:37 compute-0 nova_compute[251992]: 2025-12-06 07:06:37.834 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:37 compute-0 nova_compute[251992]: 2025-12-06 07:06:37.946 251996 DEBUG nova.compute.manager [req-853001ef-6be5-4e7e-8da2-36631c1d3113 req-09a7b46c-12c9-493e-92ea-67a4bdde2998 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Received event network-changed-532b1d43-25cc-449a-8e3d-fd25a56a26b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:06:37 compute-0 nova_compute[251992]: 2025-12-06 07:06:37.946 251996 DEBUG nova.compute.manager [req-853001ef-6be5-4e7e-8da2-36631c1d3113 req-09a7b46c-12c9-493e-92ea-67a4bdde2998 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Refreshing instance network info cache due to event network-changed-532b1d43-25cc-449a-8e3d-fd25a56a26b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:06:37 compute-0 nova_compute[251992]: 2025-12-06 07:06:37.946 251996 DEBUG oslo_concurrency.lockutils [req-853001ef-6be5-4e7e-8da2-36631c1d3113 req-09a7b46c-12c9-493e-92ea-67a4bdde2998 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:37 compute-0 nova_compute[251992]: 2025-12-06 07:06:37.947 251996 DEBUG oslo_concurrency.lockutils [req-853001ef-6be5-4e7e-8da2-36631c1d3113 req-09a7b46c-12c9-493e-92ea-67a4bdde2998 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:37 compute-0 nova_compute[251992]: 2025-12-06 07:06:37.947 251996 DEBUG nova.network.neutron [req-853001ef-6be5-4e7e-8da2-36631c1d3113 req-09a7b46c-12c9-493e-92ea-67a4bdde2998 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Refreshing network info cache for port 532b1d43-25cc-449a-8e3d-fd25a56a26b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:06:38 compute-0 nova_compute[251992]: 2025-12-06 07:06:38.524 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:38 compute-0 ceph-mon[74339]: pgmap v1386: 305 pgs: 305 active+clean; 406 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 12 MiB/s wr, 498 op/s
Dec 06 07:06:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 406 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 12 MiB/s wr, 409 op/s
Dec 06 07:06:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:39.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:39.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:39 compute-0 nova_compute[251992]: 2025-12-06 07:06:39.701 251996 DEBUG nova.network.neutron [req-853001ef-6be5-4e7e-8da2-36631c1d3113 req-09a7b46c-12c9-493e-92ea-67a4bdde2998 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updated VIF entry in instance network info cache for port 532b1d43-25cc-449a-8e3d-fd25a56a26b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:06:39 compute-0 nova_compute[251992]: 2025-12-06 07:06:39.702 251996 DEBUG nova.network.neutron [req-853001ef-6be5-4e7e-8da2-36631c1d3113 req-09a7b46c-12c9-493e-92ea-67a4bdde2998 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updating instance_info_cache with network_info: [{"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:39 compute-0 nova_compute[251992]: 2025-12-06 07:06:39.760 251996 DEBUG oslo_concurrency.lockutils [req-853001ef-6be5-4e7e-8da2-36631c1d3113 req-09a7b46c-12c9-493e-92ea-67a4bdde2998 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:40 compute-0 ceph-mon[74339]: pgmap v1387: 305 pgs: 305 active+clean; 406 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 12 MiB/s wr, 409 op/s
Dec 06 07:06:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/790145495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3249629797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:40 compute-0 nova_compute[251992]: 2025-12-06 07:06:40.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 400 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 12 MiB/s wr, 505 op/s
Dec 06 07:06:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:06:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:41.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:06:41 compute-0 nova_compute[251992]: 2025-12-06 07:06:41.535 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:41.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:41 compute-0 nova_compute[251992]: 2025-12-06 07:06:41.804 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:42 compute-0 nova_compute[251992]: 2025-12-06 07:06:42.283 251996 DEBUG nova.compute.manager [req-e6726ab3-d997-498b-abd6-192e6f13467a req-b043f54b-7553-46c4-a4d1-f559d24691f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Received event network-changed-532b1d43-25cc-449a-8e3d-fd25a56a26b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:06:42 compute-0 nova_compute[251992]: 2025-12-06 07:06:42.283 251996 DEBUG nova.compute.manager [req-e6726ab3-d997-498b-abd6-192e6f13467a req-b043f54b-7553-46c4-a4d1-f559d24691f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Refreshing instance network info cache due to event network-changed-532b1d43-25cc-449a-8e3d-fd25a56a26b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:06:42 compute-0 nova_compute[251992]: 2025-12-06 07:06:42.284 251996 DEBUG oslo_concurrency.lockutils [req-e6726ab3-d997-498b-abd6-192e6f13467a req-b043f54b-7553-46c4-a4d1-f559d24691f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:42 compute-0 nova_compute[251992]: 2025-12-06 07:06:42.284 251996 DEBUG oslo_concurrency.lockutils [req-e6726ab3-d997-498b-abd6-192e6f13467a req-b043f54b-7553-46c4-a4d1-f559d24691f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:42 compute-0 nova_compute[251992]: 2025-12-06 07:06:42.284 251996 DEBUG nova.network.neutron [req-e6726ab3-d997-498b-abd6-192e6f13467a req-b043f54b-7553-46c4-a4d1-f559d24691f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Refreshing network info cache for port 532b1d43-25cc-449a-8e3d-fd25a56a26b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:06:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:06:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2056994777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:42 compute-0 ceph-mon[74339]: pgmap v1388: 305 pgs: 305 active+clean; 400 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 12 MiB/s wr, 505 op/s
Dec 06 07:06:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/770132395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2056994777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1413870301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:42 compute-0 nova_compute[251992]: 2025-12-06 07:06:42.878 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:06:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:06:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:06:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:06:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:06:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:06:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 388 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 8.9 MiB/s wr, 491 op/s
Dec 06 07:06:43 compute-0 ceph-osd[84884]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Dec 06 07:06:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:06:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:43.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:06:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3914379697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:06:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4056199244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:43.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:44 compute-0 nova_compute[251992]: 2025-12-06 07:06:44.644 251996 DEBUG nova.network.neutron [req-e6726ab3-d997-498b-abd6-192e6f13467a req-b043f54b-7553-46c4-a4d1-f559d24691f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updated VIF entry in instance network info cache for port 532b1d43-25cc-449a-8e3d-fd25a56a26b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:06:44 compute-0 nova_compute[251992]: 2025-12-06 07:06:44.645 251996 DEBUG nova.network.neutron [req-e6726ab3-d997-498b-abd6-192e6f13467a req-b043f54b-7553-46c4-a4d1-f559d24691f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updating instance_info_cache with network_info: [{"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:44 compute-0 ceph-mon[74339]: pgmap v1389: 305 pgs: 305 active+clean; 388 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 8.9 MiB/s wr, 491 op/s
Dec 06 07:06:44 compute-0 nova_compute[251992]: 2025-12-06 07:06:44.684 251996 DEBUG oslo_concurrency.lockutils [req-e6726ab3-d997-498b-abd6-192e6f13467a req-b043f54b-7553-46c4-a4d1-f559d24691f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:44 compute-0 ceph-osd[84884]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Dec 06 07:06:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 423 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 9.6 MiB/s wr, 515 op/s
Dec 06 07:06:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:45.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:45 compute-0 nova_compute[251992]: 2025-12-06 07:06:45.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:45 compute-0 nova_compute[251992]: 2025-12-06 07:06:45.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:45.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:46 compute-0 nova_compute[251992]: 2025-12-06 07:06:46.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:46 compute-0 ceph-mon[74339]: pgmap v1390: 305 pgs: 305 active+clean; 423 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 9.6 MiB/s wr, 515 op/s
Dec 06 07:06:46 compute-0 nova_compute[251992]: 2025-12-06 07:06:46.806 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 444 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 7.9 MiB/s wr, 488 op/s
Dec 06 07:06:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:47.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:47.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:47 compute-0 nova_compute[251992]: 2025-12-06 07:06:47.880 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:48 compute-0 ceph-mon[74339]: pgmap v1391: 305 pgs: 305 active+clean; 444 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 7.9 MiB/s wr, 488 op/s
Dec 06 07:06:48 compute-0 nova_compute[251992]: 2025-12-06 07:06:48.853 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:48 compute-0 nova_compute[251992]: 2025-12-06 07:06:48.854 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:48 compute-0 nova_compute[251992]: 2025-12-06 07:06:48.854 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:48 compute-0 nova_compute[251992]: 2025-12-06 07:06:48.854 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:06:48 compute-0 nova_compute[251992]: 2025-12-06 07:06:48.855 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 444 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.4 MiB/s wr, 257 op/s
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.267 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:49.270 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:06:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:49.272 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:06:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1309178670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.370 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:49.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.621 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.621 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.624 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.624 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.626 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.627 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:06:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:06:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:49.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.757 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.758 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4188MB free_disk=20.783279418945312GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.758 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.758 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.951 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance f10e044d-9118-49ce-b890-7ec41fc40cb0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.951 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance f0140185-4a12-4407-b799-4c4a2b8ebb6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.952 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance e87113ac-c01b-4dee-8ade-e02cb3ff1be9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.953 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:06:49 compute-0 nova_compute[251992]: 2025-12-06 07:06:49.953 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:06:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1309178670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:50 compute-0 nova_compute[251992]: 2025-12-06 07:06:50.218 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:06:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:06:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1891790592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:50 compute-0 nova_compute[251992]: 2025-12-06 07:06:50.666 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:06:50 compute-0 nova_compute[251992]: 2025-12-06 07:06:50.671 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:06:50 compute-0 nova_compute[251992]: 2025-12-06 07:06:50.691 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:06:50 compute-0 nova_compute[251992]: 2025-12-06 07:06:50.716 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:06:50 compute-0 nova_compute[251992]: 2025-12-06 07:06:50.717 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 501 MiB data, 727 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 8.9 MiB/s wr, 481 op/s
Dec 06 07:06:51 compute-0 ceph-mon[74339]: pgmap v1392: 305 pgs: 305 active+clean; 444 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.4 MiB/s wr, 257 op/s
Dec 06 07:06:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1891790592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:51.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:06:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:51.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:06:51 compute-0 nova_compute[251992]: 2025-12-06 07:06:51.810 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:52 compute-0 ceph-mon[74339]: pgmap v1393: 305 pgs: 305 active+clean; 501 MiB data, 727 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 8.9 MiB/s wr, 481 op/s
Dec 06 07:06:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3186190025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:52 compute-0 nova_compute[251992]: 2025-12-06 07:06:52.711 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:52 compute-0 nova_compute[251992]: 2025-12-06 07:06:52.711 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:52 compute-0 nova_compute[251992]: 2025-12-06 07:06:52.711 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:06:52 compute-0 nova_compute[251992]: 2025-12-06 07:06:52.711 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:06:52 compute-0 nova_compute[251992]: 2025-12-06 07:06:52.881 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 528 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 9.4 MiB/s wr, 426 op/s
Dec 06 07:06:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3908023371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:53.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:53.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:54 compute-0 ceph-mon[74339]: pgmap v1394: 305 pgs: 305 active+clean; 528 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 9.4 MiB/s wr, 426 op/s
Dec 06 07:06:54 compute-0 nova_compute[251992]: 2025-12-06 07:06:54.693 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:54 compute-0 nova_compute[251992]: 2025-12-06 07:06:54.694 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:54 compute-0 nova_compute[251992]: 2025-12-06 07:06:54.695 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:06:54 compute-0 nova_compute[251992]: 2025-12-06 07:06:54.695 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f10e044d-9118-49ce-b890-7ec41fc40cb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:54 compute-0 ovn_controller[147168]: 2025-12-06T07:06:54Z|00087|binding|INFO|Releasing lport 0a001355-8429-4726-b883-743f03fd3e79 from this chassis (sb_readonly=0)
Dec 06 07:06:55 compute-0 nova_compute[251992]: 2025-12-06 07:06:55.005 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 478 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 8.9 MiB/s wr, 411 op/s
Dec 06 07:06:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:06:55.275 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:06:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4114717483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4094548463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:55.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:55.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.226 251996 DEBUG oslo_concurrency.lockutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "f0140185-4a12-4407-b799-4c4a2b8ebb6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.227 251996 DEBUG oslo_concurrency.lockutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "f0140185-4a12-4407-b799-4c4a2b8ebb6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.227 251996 DEBUG oslo_concurrency.lockutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "f0140185-4a12-4407-b799-4c4a2b8ebb6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.228 251996 DEBUG oslo_concurrency.lockutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "f0140185-4a12-4407-b799-4c4a2b8ebb6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.228 251996 DEBUG oslo_concurrency.lockutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "f0140185-4a12-4407-b799-4c4a2b8ebb6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.229 251996 INFO nova.compute.manager [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Terminating instance
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.231 251996 DEBUG oslo_concurrency.lockutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "refresh_cache-f0140185-4a12-4407-b799-4c4a2b8ebb6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.231 251996 DEBUG oslo_concurrency.lockutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquired lock "refresh_cache-f0140185-4a12-4407-b799-4c4a2b8ebb6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.231 251996 DEBUG nova.network.neutron [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:06:56 compute-0 ceph-mon[74339]: pgmap v1395: 305 pgs: 305 active+clean; 478 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 8.9 MiB/s wr, 411 op/s
Dec 06 07:06:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/199118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1662777040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:06:56 compute-0 podman[278258]: 2025-12-06 07:06:56.443021638 +0000 UTC m=+0.090267545 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.500 251996 DEBUG oslo_concurrency.lockutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "e87113ac-c01b-4dee-8ade-e02cb3ff1be9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.500 251996 DEBUG oslo_concurrency.lockutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "e87113ac-c01b-4dee-8ade-e02cb3ff1be9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.501 251996 DEBUG oslo_concurrency.lockutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "e87113ac-c01b-4dee-8ade-e02cb3ff1be9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.501 251996 DEBUG oslo_concurrency.lockutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "e87113ac-c01b-4dee-8ade-e02cb3ff1be9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.501 251996 DEBUG oslo_concurrency.lockutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "e87113ac-c01b-4dee-8ade-e02cb3ff1be9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.502 251996 INFO nova.compute.manager [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Terminating instance
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.503 251996 DEBUG oslo_concurrency.lockutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "refresh_cache-e87113ac-c01b-4dee-8ade-e02cb3ff1be9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.503 251996 DEBUG oslo_concurrency.lockutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquired lock "refresh_cache-e87113ac-c01b-4dee-8ade-e02cb3ff1be9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.503 251996 DEBUG nova.network.neutron [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.520 251996 DEBUG nova.network.neutron [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.779 251996 DEBUG nova.network.neutron [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:06:56 compute-0 nova_compute[251992]: 2025-12-06 07:06:56.812 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 438 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.6 MiB/s wr, 387 op/s
Dec 06 07:06:57 compute-0 nova_compute[251992]: 2025-12-06 07:06:57.240 251996 DEBUG nova.network.neutron [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:06:57 compute-0 sudo[278288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:06:57 compute-0 sudo[278288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:06:57 compute-0 sudo[278288]: pam_unix(sudo:session): session closed for user root
Dec 06 07:06:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:57.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:57 compute-0 sudo[278313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:06:57 compute-0 sudo[278313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:06:57 compute-0 sudo[278313]: pam_unix(sudo:session): session closed for user root
Dec 06 07:06:57 compute-0 nova_compute[251992]: 2025-12-06 07:06:57.657 251996 DEBUG nova.network.neutron [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:57.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:57 compute-0 nova_compute[251992]: 2025-12-06 07:06:57.883 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:06:58 compute-0 ceph-mon[74339]: pgmap v1396: 305 pgs: 305 active+clean; 438 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.6 MiB/s wr, 387 op/s
Dec 06 07:06:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 438 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.6 MiB/s wr, 304 op/s
Dec 06 07:06:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:06:59.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:06:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:06:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:06:59.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.724 251996 DEBUG oslo_concurrency.lockutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Releasing lock "refresh_cache-f0140185-4a12-4407-b799-4c4a2b8ebb6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.725 251996 DEBUG nova.compute.manager [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.738 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updating instance_info_cache with network_info: [{"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:06:59 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000026.scope: Deactivated successfully.
Dec 06 07:06:59 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000026.scope: Consumed 14.552s CPU time.
Dec 06 07:06:59 compute-0 systemd-machined[212986]: Machine qemu-17-instance-00000026 terminated.
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.808 251996 DEBUG oslo_concurrency.lockutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Releasing lock "refresh_cache-e87113ac-c01b-4dee-8ade-e02cb3ff1be9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.809 251996 DEBUG nova.compute.manager [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.933 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.934 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.935 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.935 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.935 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.935 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.935 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.946 251996 INFO nova.virt.libvirt.driver [-] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Instance destroyed successfully.
Dec 06 07:06:59 compute-0 nova_compute[251992]: 2025-12-06 07:06:59.947 251996 DEBUG nova.objects.instance [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'resources' on Instance uuid f0140185-4a12-4407-b799-4c4a2b8ebb6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:06:59 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000027.scope: Deactivated successfully.
Dec 06 07:06:59 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000027.scope: Consumed 15.323s CPU time.
Dec 06 07:06:59 compute-0 systemd-machined[212986]: Machine qemu-18-instance-00000027 terminated.
Dec 06 07:07:00 compute-0 nova_compute[251992]: 2025-12-06 07:07:00.029 251996 INFO nova.virt.libvirt.driver [-] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Instance destroyed successfully.
Dec 06 07:07:00 compute-0 nova_compute[251992]: 2025-12-06 07:07:00.030 251996 DEBUG nova.objects.instance [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lazy-loading 'resources' on Instance uuid e87113ac-c01b-4dee-8ade-e02cb3ff1be9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:07:00 compute-0 ceph-mon[74339]: pgmap v1397: 305 pgs: 305 active+clean; 438 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.6 MiB/s wr, 304 op/s
Dec 06 07:07:00 compute-0 nova_compute[251992]: 2025-12-06 07:07:00.938 251996 INFO nova.virt.libvirt.driver [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Deleting instance files /var/lib/nova/instances/f0140185-4a12-4407-b799-4c4a2b8ebb6c_del
Dec 06 07:07:00 compute-0 nova_compute[251992]: 2025-12-06 07:07:00.939 251996 INFO nova.virt.libvirt.driver [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Deletion of /var/lib/nova/instances/f0140185-4a12-4407-b799-4c4a2b8ebb6c_del complete
Dec 06 07:07:00 compute-0 nova_compute[251992]: 2025-12-06 07:07:00.946 251996 INFO nova.virt.libvirt.driver [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Deleting instance files /var/lib/nova/instances/e87113ac-c01b-4dee-8ade-e02cb3ff1be9_del
Dec 06 07:07:00 compute-0 nova_compute[251992]: 2025-12-06 07:07:00.946 251996 INFO nova.virt.libvirt.driver [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Deletion of /var/lib/nova/instances/e87113ac-c01b-4dee-8ade-e02cb3ff1be9_del complete
Dec 06 07:07:01 compute-0 sudo[278383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:01 compute-0 sudo[278383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:01 compute-0 sudo[278383]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.008 251996 INFO nova.compute.manager [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Took 1.28 seconds to destroy the instance on the hypervisor.
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.009 251996 DEBUG oslo.service.loopingcall [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.009 251996 DEBUG nova.compute.manager [-] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.009 251996 DEBUG nova.network.neutron [-] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.011 251996 INFO nova.compute.manager [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Took 1.20 seconds to destroy the instance on the hypervisor.
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.011 251996 DEBUG oslo.service.loopingcall [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.012 251996 DEBUG nova.compute.manager [-] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.012 251996 DEBUG nova.network.neutron [-] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:07:01 compute-0 sudo[278420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:07:01 compute-0 sudo[278420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:01 compute-0 podman[278407]: 2025-12-06 07:07:01.065021175 +0000 UTC m=+0.057699662 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:07:01 compute-0 sudo[278420]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:01 compute-0 podman[278408]: 2025-12-06 07:07:01.093895628 +0000 UTC m=+0.084681034 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:07:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 368 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.6 MiB/s wr, 326 op/s
Dec 06 07:07:01 compute-0 sudo[278470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:01 compute-0 sudo[278470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:01 compute-0 sudo[278470]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:01 compute-0 sudo[278495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 07:07:01 compute-0 sudo[278495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2620495894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:01.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:01 compute-0 podman[278591]: 2025-12-06 07:07:01.667550365 +0000 UTC m=+0.063220952 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.697 251996 DEBUG nova.network.neutron [-] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.700 251996 DEBUG nova.network.neutron [-] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:07:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:01.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.725 251996 DEBUG nova.network.neutron [-] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.728 251996 DEBUG nova.network.neutron [-] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.748 251996 INFO nova.compute.manager [-] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Took 0.74 seconds to deallocate network for instance.
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.749 251996 INFO nova.compute.manager [-] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Took 0.74 seconds to deallocate network for instance.
Dec 06 07:07:01 compute-0 podman[278591]: 2025-12-06 07:07:01.8029347 +0000 UTC m=+0.198605257 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.810 251996 DEBUG oslo_concurrency.lockutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.811 251996 DEBUG oslo_concurrency.lockutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.812 251996 DEBUG oslo_concurrency.lockutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.816 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:01 compute-0 nova_compute[251992]: 2025-12-06 07:07:01.885 251996 DEBUG oslo_concurrency.processutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:02 compute-0 ceph-mon[74339]: pgmap v1398: 305 pgs: 305 active+clean; 368 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.6 MiB/s wr, 326 op/s
Dec 06 07:07:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:07:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/686088067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:02 compute-0 podman[278761]: 2025-12-06 07:07:02.382513818 +0000 UTC m=+0.063823689 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 07:07:02 compute-0 nova_compute[251992]: 2025-12-06 07:07:02.386 251996 DEBUG oslo_concurrency.processutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:02 compute-0 nova_compute[251992]: 2025-12-06 07:07:02.393 251996 DEBUG nova.compute.provider_tree [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:07:02 compute-0 podman[278761]: 2025-12-06 07:07:02.422584582 +0000 UTC m=+0.103894483 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 07:07:02 compute-0 podman[278831]: 2025-12-06 07:07:02.635478545 +0000 UTC m=+0.060612472 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.28.2, version=2.2.4)
Dec 06 07:07:02 compute-0 podman[278831]: 2025-12-06 07:07:02.676079773 +0000 UTC m=+0.101213700 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, version=2.2.4, io.openshift.expose-services=, release=1793, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vcs-type=git, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 07:07:02 compute-0 sudo[278495]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:07:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:07:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:02 compute-0 sudo[278864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:02 compute-0 sudo[278864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:02 compute-0 sudo[278864]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:02 compute-0 sudo[278889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:07:02 compute-0 sudo[278889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:02 compute-0 sudo[278889]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:02 compute-0 nova_compute[251992]: 2025-12-06 07:07:02.924 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:02 compute-0 nova_compute[251992]: 2025-12-06 07:07:02.943 251996 DEBUG nova.scheduler.client.report [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:07:02 compute-0 sudo[278914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:02 compute-0 sudo[278914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:02 compute-0 sudo[278914]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:02 compute-0 nova_compute[251992]: 2025-12-06 07:07:02.975 251996 DEBUG oslo_concurrency.lockutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:02 compute-0 nova_compute[251992]: 2025-12-06 07:07:02.978 251996 DEBUG oslo_concurrency.lockutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:03 compute-0 sudo[278939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:07:03 compute-0 sudo[278939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:03 compute-0 nova_compute[251992]: 2025-12-06 07:07:03.065 251996 INFO nova.scheduler.client.report [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Deleted allocations for instance e87113ac-c01b-4dee-8ade-e02cb3ff1be9
Dec 06 07:07:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 314 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 1.1 MiB/s wr, 125 op/s
Dec 06 07:07:03 compute-0 nova_compute[251992]: 2025-12-06 07:07:03.112 251996 DEBUG oslo_concurrency.processutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:03 compute-0 nova_compute[251992]: 2025-12-06 07:07:03.200 251996 DEBUG oslo_concurrency.lockutils [None req-cceb423a-a66e-4792-a282-22bdca0d31b7 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "e87113ac-c01b-4dee-8ade-e02cb3ff1be9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/686088067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:03.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:03 compute-0 sudo[278939]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:07:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:07:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:07:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:07:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:07:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:03 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 220484c0-4377-464c-8aa5-1dc234f572e0 does not exist
Dec 06 07:07:03 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev dd2d9049-2b19-451c-8902-497494355943 does not exist
Dec 06 07:07:03 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2128fd8f-6aa3-40b3-b27e-1f39a1b63559 does not exist
Dec 06 07:07:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:07:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:07:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:07:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:07:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:07:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:07:03 compute-0 sudo[279013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:03 compute-0 sudo[279013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:03 compute-0 sudo[279013]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:07:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2557369398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:03 compute-0 nova_compute[251992]: 2025-12-06 07:07:03.605 251996 DEBUG oslo_concurrency.processutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:03 compute-0 nova_compute[251992]: 2025-12-06 07:07:03.610 251996 DEBUG nova.compute.provider_tree [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:07:03 compute-0 nova_compute[251992]: 2025-12-06 07:07:03.630 251996 DEBUG nova.scheduler.client.report [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:07:03 compute-0 sudo[279039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:07:03 compute-0 sudo[279039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:03 compute-0 sudo[279039]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:03 compute-0 nova_compute[251992]: 2025-12-06 07:07:03.675 251996 DEBUG oslo_concurrency.lockutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:03 compute-0 sudo[279065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:03 compute-0 sudo[279065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:03 compute-0 sudo[279065]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:03 compute-0 nova_compute[251992]: 2025-12-06 07:07:03.696 251996 INFO nova.scheduler.client.report [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Deleted allocations for instance f0140185-4a12-4407-b799-4c4a2b8ebb6c
Dec 06 07:07:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:03.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:03 compute-0 sudo[279090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:07:03 compute-0 sudo[279090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:03 compute-0 nova_compute[251992]: 2025-12-06 07:07:03.781 251996 DEBUG oslo_concurrency.lockutils [None req-b71b7310-e622-43e6-9cc3-4210c8597326 21af856fbd8843c2969956a9587ca48a 2c1f48b58dd04b828c83d6350cc4e13d - - default default] Lock "f0140185-4a12-4407-b799-4c4a2b8ebb6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:03.815 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:03.816 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:03.817 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:04 compute-0 podman[279156]: 2025-12-06 07:07:04.022924529 +0000 UTC m=+0.041396221 container create 22f189cceda13f4a08a7bcaec308c8aa9ef4891c3453508ed209d1dc55dfa15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:07:04 compute-0 systemd[1]: Started libpod-conmon-22f189cceda13f4a08a7bcaec308c8aa9ef4891c3453508ed209d1dc55dfa15f.scope.
Dec 06 07:07:04 compute-0 podman[279156]: 2025-12-06 07:07:04.003136204 +0000 UTC m=+0.021607916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:07:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:07:04 compute-0 podman[279156]: 2025-12-06 07:07:04.130322277 +0000 UTC m=+0.148793989 container init 22f189cceda13f4a08a7bcaec308c8aa9ef4891c3453508ed209d1dc55dfa15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 07:07:04 compute-0 podman[279156]: 2025-12-06 07:07:04.139276239 +0000 UTC m=+0.157747941 container start 22f189cceda13f4a08a7bcaec308c8aa9ef4891c3453508ed209d1dc55dfa15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:07:04 compute-0 podman[279156]: 2025-12-06 07:07:04.143624526 +0000 UTC m=+0.162096238 container attach 22f189cceda13f4a08a7bcaec308c8aa9ef4891c3453508ed209d1dc55dfa15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:07:04 compute-0 suspicious_hoover[279173]: 167 167
Dec 06 07:07:04 compute-0 systemd[1]: libpod-22f189cceda13f4a08a7bcaec308c8aa9ef4891c3453508ed209d1dc55dfa15f.scope: Deactivated successfully.
Dec 06 07:07:04 compute-0 podman[279156]: 2025-12-06 07:07:04.146841883 +0000 UTC m=+0.165313575 container died 22f189cceda13f4a08a7bcaec308c8aa9ef4891c3453508ed209d1dc55dfa15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:07:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c035ab97979df996d64527c5447779daef1821952804732c1acefc7d8d26f7d-merged.mount: Deactivated successfully.
Dec 06 07:07:04 compute-0 podman[279156]: 2025-12-06 07:07:04.199068918 +0000 UTC m=+0.217540610 container remove 22f189cceda13f4a08a7bcaec308c8aa9ef4891c3453508ed209d1dc55dfa15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 07:07:04 compute-0 systemd[1]: libpod-conmon-22f189cceda13f4a08a7bcaec308c8aa9ef4891c3453508ed209d1dc55dfa15f.scope: Deactivated successfully.
Dec 06 07:07:04 compute-0 ceph-mon[74339]: pgmap v1399: 305 pgs: 305 active+clean; 314 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 1.1 MiB/s wr, 125 op/s
Dec 06 07:07:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:07:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:07:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:07:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:07:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:07:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2557369398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1887594548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:04 compute-0 podman[279197]: 2025-12-06 07:07:04.388859255 +0000 UTC m=+0.036979382 container create 531cc84bc3130d0c9a9fb7bbd5e9f0368c8594a974d31bdc8513226fd56e24b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 07:07:04 compute-0 systemd[1]: Started libpod-conmon-531cc84bc3130d0c9a9fb7bbd5e9f0368c8594a974d31bdc8513226fd56e24b5.scope.
Dec 06 07:07:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b38c0b625f147a691379aa47c591d92bd7c3e2d376d9d847ce006737eb775a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b38c0b625f147a691379aa47c591d92bd7c3e2d376d9d847ce006737eb775a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b38c0b625f147a691379aa47c591d92bd7c3e2d376d9d847ce006737eb775a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b38c0b625f147a691379aa47c591d92bd7c3e2d376d9d847ce006737eb775a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b38c0b625f147a691379aa47c591d92bd7c3e2d376d9d847ce006737eb775a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:04 compute-0 podman[279197]: 2025-12-06 07:07:04.468571972 +0000 UTC m=+0.116692119 container init 531cc84bc3130d0c9a9fb7bbd5e9f0368c8594a974d31bdc8513226fd56e24b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mccarthy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 07:07:04 compute-0 podman[279197]: 2025-12-06 07:07:04.373447077 +0000 UTC m=+0.021567224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:07:04 compute-0 podman[279197]: 2025-12-06 07:07:04.475224123 +0000 UTC m=+0.123344250 container start 531cc84bc3130d0c9a9fb7bbd5e9f0368c8594a974d31bdc8513226fd56e24b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mccarthy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 07:07:04 compute-0 podman[279197]: 2025-12-06 07:07:04.478494381 +0000 UTC m=+0.126614508 container attach 531cc84bc3130d0c9a9fb7bbd5e9f0368c8594a974d31bdc8513226fd56e24b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:07:04 compute-0 nova_compute[251992]: 2025-12-06 07:07:04.980 251996 DEBUG nova.compute.manager [req-8f77c7b0-9cba-49e8-9a3a-23adef46bb74 req-45a8298c-d997-4f12-bf3f-591920f60368 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Received event network-changed-532b1d43-25cc-449a-8e3d-fd25a56a26b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:04 compute-0 nova_compute[251992]: 2025-12-06 07:07:04.982 251996 DEBUG nova.compute.manager [req-8f77c7b0-9cba-49e8-9a3a-23adef46bb74 req-45a8298c-d997-4f12-bf3f-591920f60368 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Refreshing instance network info cache due to event network-changed-532b1d43-25cc-449a-8e3d-fd25a56a26b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:07:04 compute-0 nova_compute[251992]: 2025-12-06 07:07:04.982 251996 DEBUG oslo_concurrency.lockutils [req-8f77c7b0-9cba-49e8-9a3a-23adef46bb74 req-45a8298c-d997-4f12-bf3f-591920f60368 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:07:04 compute-0 nova_compute[251992]: 2025-12-06 07:07:04.982 251996 DEBUG oslo_concurrency.lockutils [req-8f77c7b0-9cba-49e8-9a3a-23adef46bb74 req-45a8298c-d997-4f12-bf3f-591920f60368 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:07:04 compute-0 nova_compute[251992]: 2025-12-06 07:07:04.983 251996 DEBUG nova.network.neutron [req-8f77c7b0-9cba-49e8-9a3a-23adef46bb74 req-45a8298c-d997-4f12-bf3f-591920f60368 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Refreshing network info cache for port 532b1d43-25cc-449a-8e3d-fd25a56a26b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:07:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 222 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 560 KiB/s wr, 130 op/s
Dec 06 07:07:05 compute-0 happy_mccarthy[279213]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:07:05 compute-0 happy_mccarthy[279213]: --> relative data size: 1.0
Dec 06 07:07:05 compute-0 happy_mccarthy[279213]: --> All data devices are unavailable
Dec 06 07:07:05 compute-0 systemd[1]: libpod-531cc84bc3130d0c9a9fb7bbd5e9f0368c8594a974d31bdc8513226fd56e24b5.scope: Deactivated successfully.
Dec 06 07:07:05 compute-0 podman[279197]: 2025-12-06 07:07:05.341200633 +0000 UTC m=+0.989320770 container died 531cc84bc3130d0c9a9fb7bbd5e9f0368c8594a974d31bdc8513226fd56e24b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:07:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b38c0b625f147a691379aa47c591d92bd7c3e2d376d9d847ce006737eb775a1-merged.mount: Deactivated successfully.
Dec 06 07:07:05 compute-0 podman[279197]: 2025-12-06 07:07:05.397486836 +0000 UTC m=+1.045606963 container remove 531cc84bc3130d0c9a9fb7bbd5e9f0368c8594a974d31bdc8513226fd56e24b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mccarthy, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:07:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:05.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:05 compute-0 systemd[1]: libpod-conmon-531cc84bc3130d0c9a9fb7bbd5e9f0368c8594a974d31bdc8513226fd56e24b5.scope: Deactivated successfully.
Dec 06 07:07:05 compute-0 sudo[279090]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:05 compute-0 sudo[279243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:05 compute-0 sudo[279243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:05 compute-0 sudo[279243]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:05 compute-0 sudo[279268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:07:05 compute-0 sudo[279268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:05 compute-0 sudo[279268]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:05 compute-0 sudo[279293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:05 compute-0 sudo[279293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:05 compute-0 sudo[279293]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:05 compute-0 sudo[279318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:07:05 compute-0 sudo[279318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:05.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:05 compute-0 podman[279383]: 2025-12-06 07:07:05.986951621 +0000 UTC m=+0.041991577 container create 9e7269c298e838cf3821e8f90c3ef1ff745e69a6a9ccb48223f2a3460a8b1cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lederberg, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 07:07:06 compute-0 systemd[1]: Started libpod-conmon-9e7269c298e838cf3821e8f90c3ef1ff745e69a6a9ccb48223f2a3460a8b1cc0.scope.
Dec 06 07:07:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:07:06 compute-0 podman[279383]: 2025-12-06 07:07:05.970068244 +0000 UTC m=+0.025108230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:07:06 compute-0 podman[279383]: 2025-12-06 07:07:06.069787964 +0000 UTC m=+0.124827920 container init 9e7269c298e838cf3821e8f90c3ef1ff745e69a6a9ccb48223f2a3460a8b1cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 07:07:06 compute-0 podman[279383]: 2025-12-06 07:07:06.076280749 +0000 UTC m=+0.131320705 container start 9e7269c298e838cf3821e8f90c3ef1ff745e69a6a9ccb48223f2a3460a8b1cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:07:06 compute-0 podman[279383]: 2025-12-06 07:07:06.079521437 +0000 UTC m=+0.134561393 container attach 9e7269c298e838cf3821e8f90c3ef1ff745e69a6a9ccb48223f2a3460a8b1cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lederberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 07:07:06 compute-0 systemd[1]: libpod-9e7269c298e838cf3821e8f90c3ef1ff745e69a6a9ccb48223f2a3460a8b1cc0.scope: Deactivated successfully.
Dec 06 07:07:06 compute-0 jolly_lederberg[279399]: 167 167
Dec 06 07:07:06 compute-0 conmon[279399]: conmon 9e7269c298e838cf3821 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e7269c298e838cf3821e8f90c3ef1ff745e69a6a9ccb48223f2a3460a8b1cc0.scope/container/memory.events
Dec 06 07:07:06 compute-0 podman[279383]: 2025-12-06 07:07:06.083405193 +0000 UTC m=+0.138445169 container died 9e7269c298e838cf3821e8f90c3ef1ff745e69a6a9ccb48223f2a3460a8b1cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:07:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fea211d30ab6bb3ff5b2a2326ab46ff43d6066e67fa2ed9d999e5134e367e8c-merged.mount: Deactivated successfully.
Dec 06 07:07:06 compute-0 podman[279383]: 2025-12-06 07:07:06.124030322 +0000 UTC m=+0.179070278 container remove 9e7269c298e838cf3821e8f90c3ef1ff745e69a6a9ccb48223f2a3460a8b1cc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lederberg, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 07:07:06 compute-0 systemd[1]: libpod-conmon-9e7269c298e838cf3821e8f90c3ef1ff745e69a6a9ccb48223f2a3460a8b1cc0.scope: Deactivated successfully.
Dec 06 07:07:06 compute-0 podman[279423]: 2025-12-06 07:07:06.284269429 +0000 UTC m=+0.040234880 container create 17226639e0a831843160775cf661f7896eb8757d6b40ed3fa968a3623c9d6f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilbur, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:07:06 compute-0 systemd[1]: Started libpod-conmon-17226639e0a831843160775cf661f7896eb8757d6b40ed3fa968a3623c9d6f4b.scope.
Dec 06 07:07:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ae1da721044d7ddf9c3727d9e6c7d4e4958543e7512446115b8087250491d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ae1da721044d7ddf9c3727d9e6c7d4e4958543e7512446115b8087250491d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ae1da721044d7ddf9c3727d9e6c7d4e4958543e7512446115b8087250491d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ae1da721044d7ddf9c3727d9e6c7d4e4958543e7512446115b8087250491d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:06 compute-0 podman[279423]: 2025-12-06 07:07:06.266150519 +0000 UTC m=+0.022116000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:07:06 compute-0 podman[279423]: 2025-12-06 07:07:06.361746716 +0000 UTC m=+0.117712187 container init 17226639e0a831843160775cf661f7896eb8757d6b40ed3fa968a3623c9d6f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilbur, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec 06 07:07:06 compute-0 podman[279423]: 2025-12-06 07:07:06.36889697 +0000 UTC m=+0.124862421 container start 17226639e0a831843160775cf661f7896eb8757d6b40ed3fa968a3623c9d6f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:07:06 compute-0 podman[279423]: 2025-12-06 07:07:06.371661655 +0000 UTC m=+0.127627136 container attach 17226639e0a831843160775cf661f7896eb8757d6b40ed3fa968a3623c9d6f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 07:07:06 compute-0 ceph-mon[74339]: pgmap v1400: 305 pgs: 305 active+clean; 222 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 560 KiB/s wr, 130 op/s
Dec 06 07:07:06 compute-0 nova_compute[251992]: 2025-12-06 07:07:06.820 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 246 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 134 op/s
Dec 06 07:07:07 compute-0 strange_wilbur[279440]: {
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:     "0": [
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:         {
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             "devices": [
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "/dev/loop3"
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             ],
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             "lv_name": "ceph_lv0",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             "lv_size": "7511998464",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             "name": "ceph_lv0",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             "tags": {
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "ceph.cluster_name": "ceph",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "ceph.crush_device_class": "",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "ceph.encrypted": "0",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "ceph.osd_id": "0",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "ceph.type": "block",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:                 "ceph.vdo": "0"
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             },
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             "type": "block",
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:             "vg_name": "ceph_vg0"
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:         }
Dec 06 07:07:07 compute-0 strange_wilbur[279440]:     ]
Dec 06 07:07:07 compute-0 strange_wilbur[279440]: }
Dec 06 07:07:07 compute-0 systemd[1]: libpod-17226639e0a831843160775cf661f7896eb8757d6b40ed3fa968a3623c9d6f4b.scope: Deactivated successfully.
Dec 06 07:07:07 compute-0 podman[279423]: 2025-12-06 07:07:07.147187707 +0000 UTC m=+0.903153178 container died 17226639e0a831843160775cf661f7896eb8757d6b40ed3fa968a3623c9d6f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 07:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-16ae1da721044d7ddf9c3727d9e6c7d4e4958543e7512446115b8087250491d3-merged.mount: Deactivated successfully.
Dec 06 07:07:07 compute-0 podman[279423]: 2025-12-06 07:07:07.210687135 +0000 UTC m=+0.966652586 container remove 17226639e0a831843160775cf661f7896eb8757d6b40ed3fa968a3623c9d6f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_wilbur, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 07:07:07 compute-0 systemd[1]: libpod-conmon-17226639e0a831843160775cf661f7896eb8757d6b40ed3fa968a3623c9d6f4b.scope: Deactivated successfully.
Dec 06 07:07:07 compute-0 sudo[279318]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:07 compute-0 nova_compute[251992]: 2025-12-06 07:07:07.261 251996 DEBUG nova.network.neutron [req-8f77c7b0-9cba-49e8-9a3a-23adef46bb74 req-45a8298c-d997-4f12-bf3f-591920f60368 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updated VIF entry in instance network info cache for port 532b1d43-25cc-449a-8e3d-fd25a56a26b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:07:07 compute-0 nova_compute[251992]: 2025-12-06 07:07:07.263 251996 DEBUG nova.network.neutron [req-8f77c7b0-9cba-49e8-9a3a-23adef46bb74 req-45a8298c-d997-4f12-bf3f-591920f60368 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updating instance_info_cache with network_info: [{"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:07 compute-0 nova_compute[251992]: 2025-12-06 07:07:07.283 251996 DEBUG oslo_concurrency.lockutils [req-8f77c7b0-9cba-49e8-9a3a-23adef46bb74 req-45a8298c-d997-4f12-bf3f-591920f60368 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:07:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:07 compute-0 sudo[279460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:07 compute-0 sudo[279460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:07 compute-0 sudo[279460]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:07 compute-0 sudo[279485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:07:07 compute-0 sudo[279485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:07 compute-0 sudo[279485]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:07.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:07 compute-0 sudo[279510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:07 compute-0 sudo[279510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:07 compute-0 sudo[279510]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:07 compute-0 sudo[279535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:07:07 compute-0 sudo[279535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:07.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:07 compute-0 podman[279600]: 2025-12-06 07:07:07.802162835 +0000 UTC m=+0.040943439 container create ce7a69edabefb5d0408875af680d5f2b0352a3f38d05897e619931e39f23958b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:07:07 compute-0 systemd[1]: Started libpod-conmon-ce7a69edabefb5d0408875af680d5f2b0352a3f38d05897e619931e39f23958b.scope.
Dec 06 07:07:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:07:07 compute-0 podman[279600]: 2025-12-06 07:07:07.875351236 +0000 UTC m=+0.114131820 container init ce7a69edabefb5d0408875af680d5f2b0352a3f38d05897e619931e39f23958b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:07:07 compute-0 podman[279600]: 2025-12-06 07:07:07.785872504 +0000 UTC m=+0.024653088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:07:07 compute-0 podman[279600]: 2025-12-06 07:07:07.883297561 +0000 UTC m=+0.122078135 container start ce7a69edabefb5d0408875af680d5f2b0352a3f38d05897e619931e39f23958b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jackson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:07:07 compute-0 podman[279600]: 2025-12-06 07:07:07.887130286 +0000 UTC m=+0.125910870 container attach ce7a69edabefb5d0408875af680d5f2b0352a3f38d05897e619931e39f23958b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 07:07:07 compute-0 elated_jackson[279616]: 167 167
Dec 06 07:07:07 compute-0 systemd[1]: libpod-ce7a69edabefb5d0408875af680d5f2b0352a3f38d05897e619931e39f23958b.scope: Deactivated successfully.
Dec 06 07:07:07 compute-0 podman[279600]: 2025-12-06 07:07:07.889266053 +0000 UTC m=+0.128046627 container died ce7a69edabefb5d0408875af680d5f2b0352a3f38d05897e619931e39f23958b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jackson, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 07:07:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c646323a7ac6f307e7387d1cc12b1d1b4328879a2284f96e948e7cbba6b04a4-merged.mount: Deactivated successfully.
Dec 06 07:07:07 compute-0 podman[279600]: 2025-12-06 07:07:07.922723298 +0000 UTC m=+0.161503862 container remove ce7a69edabefb5d0408875af680d5f2b0352a3f38d05897e619931e39f23958b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 07:07:07 compute-0 nova_compute[251992]: 2025-12-06 07:07:07.926 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:07 compute-0 systemd[1]: libpod-conmon-ce7a69edabefb5d0408875af680d5f2b0352a3f38d05897e619931e39f23958b.scope: Deactivated successfully.
Dec 06 07:07:08 compute-0 podman[279641]: 2025-12-06 07:07:08.081403364 +0000 UTC m=+0.043077067 container create 21a3962b0ec95a08f03c33b23d1a6b83dacce091ab37983d62df950a7bc69f58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 07:07:08 compute-0 systemd[1]: Started libpod-conmon-21a3962b0ec95a08f03c33b23d1a6b83dacce091ab37983d62df950a7bc69f58.scope.
Dec 06 07:07:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b32760a7ba47a505a41ec796c3d07e5eb8d69688d3d6bb7cdfeb7225606757f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b32760a7ba47a505a41ec796c3d07e5eb8d69688d3d6bb7cdfeb7225606757f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b32760a7ba47a505a41ec796c3d07e5eb8d69688d3d6bb7cdfeb7225606757f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b32760a7ba47a505a41ec796c3d07e5eb8d69688d3d6bb7cdfeb7225606757f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:08 compute-0 podman[279641]: 2025-12-06 07:07:08.151886381 +0000 UTC m=+0.113560104 container init 21a3962b0ec95a08f03c33b23d1a6b83dacce091ab37983d62df950a7bc69f58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:07:08 compute-0 podman[279641]: 2025-12-06 07:07:08.062167903 +0000 UTC m=+0.023841636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:07:08 compute-0 podman[279641]: 2025-12-06 07:07:08.160072963 +0000 UTC m=+0.121746666 container start 21a3962b0ec95a08f03c33b23d1a6b83dacce091ab37983d62df950a7bc69f58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 07:07:08 compute-0 podman[279641]: 2025-12-06 07:07:08.163599098 +0000 UTC m=+0.125272801 container attach 21a3962b0ec95a08f03c33b23d1a6b83dacce091ab37983d62df950a7bc69f58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 07:07:08 compute-0 ceph-mon[74339]: pgmap v1401: 305 pgs: 305 active+clean; 246 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 134 op/s
Dec 06 07:07:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2724217004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1639568485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:08 compute-0 magical_tharp[279658]: {
Dec 06 07:07:08 compute-0 magical_tharp[279658]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:07:08 compute-0 magical_tharp[279658]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:07:08 compute-0 magical_tharp[279658]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:07:08 compute-0 magical_tharp[279658]:         "osd_id": 0,
Dec 06 07:07:08 compute-0 magical_tharp[279658]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:07:08 compute-0 magical_tharp[279658]:         "type": "bluestore"
Dec 06 07:07:08 compute-0 magical_tharp[279658]:     }
Dec 06 07:07:08 compute-0 magical_tharp[279658]: }
Dec 06 07:07:09 compute-0 systemd[1]: libpod-21a3962b0ec95a08f03c33b23d1a6b83dacce091ab37983d62df950a7bc69f58.scope: Deactivated successfully.
Dec 06 07:07:09 compute-0 podman[279641]: 2025-12-06 07:07:09.006282808 +0000 UTC m=+0.967956521 container died 21a3962b0ec95a08f03c33b23d1a6b83dacce091ab37983d62df950a7bc69f58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:07:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b32760a7ba47a505a41ec796c3d07e5eb8d69688d3d6bb7cdfeb7225606757f4-merged.mount: Deactivated successfully.
Dec 06 07:07:09 compute-0 podman[279641]: 2025-12-06 07:07:09.057083343 +0000 UTC m=+1.018757046 container remove 21a3962b0ec95a08f03c33b23d1a6b83dacce091ab37983d62df950a7bc69f58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:07:09 compute-0 systemd[1]: libpod-conmon-21a3962b0ec95a08f03c33b23d1a6b83dacce091ab37983d62df950a7bc69f58.scope: Deactivated successfully.
Dec 06 07:07:09 compute-0 sudo[279535]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:07:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:07:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 246 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 74 KiB/s rd, 1.8 MiB/s wr, 111 op/s
Dec 06 07:07:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e6da2feb-ad55-44e3-9831-a857b19e0c96 does not exist
Dec 06 07:07:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c8236af5-699f-4fec-b049-eb9b38303a9f does not exist
Dec 06 07:07:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 03738659-d308-44f6-ae2e-070c7c3ca29f does not exist
Dec 06 07:07:09 compute-0 sudo[279694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:09 compute-0 sudo[279694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:09 compute-0 sudo[279694]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:09 compute-0 sudo[279719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:07:09 compute-0 sudo[279719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:09 compute-0 sudo[279719]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:09.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1832287170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:07:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1832287170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:07:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/255844533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:07:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:09.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:10 compute-0 ceph-mon[74339]: pgmap v1402: 305 pgs: 305 active+clean; 246 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 74 KiB/s rd, 1.8 MiB/s wr, 111 op/s
Dec 06 07:07:10 compute-0 nova_compute[251992]: 2025-12-06 07:07:10.588 251996 DEBUG nova.compute.manager [req-6f46179a-9d50-4214-9196-898005e65b01 req-c17a8ad6-d5f3-4a8f-aef3-c1c0187e6db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Received event network-changed-532b1d43-25cc-449a-8e3d-fd25a56a26b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:10 compute-0 nova_compute[251992]: 2025-12-06 07:07:10.588 251996 DEBUG nova.compute.manager [req-6f46179a-9d50-4214-9196-898005e65b01 req-c17a8ad6-d5f3-4a8f-aef3-c1c0187e6db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Refreshing instance network info cache due to event network-changed-532b1d43-25cc-449a-8e3d-fd25a56a26b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:07:10 compute-0 nova_compute[251992]: 2025-12-06 07:07:10.589 251996 DEBUG oslo_concurrency.lockutils [req-6f46179a-9d50-4214-9196-898005e65b01 req-c17a8ad6-d5f3-4a8f-aef3-c1c0187e6db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:07:10 compute-0 nova_compute[251992]: 2025-12-06 07:07:10.589 251996 DEBUG oslo_concurrency.lockutils [req-6f46179a-9d50-4214-9196-898005e65b01 req-c17a8ad6-d5f3-4a8f-aef3-c1c0187e6db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:07:10 compute-0 nova_compute[251992]: 2025-12-06 07:07:10.589 251996 DEBUG nova.network.neutron [req-6f46179a-9d50-4214-9196-898005e65b01 req-c17a8ad6-d5f3-4a8f-aef3-c1c0187e6db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Refreshing network info cache for port 532b1d43-25cc-449a-8e3d-fd25a56a26b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:07:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 191 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 1.8 MiB/s wr, 137 op/s
Dec 06 07:07:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:11.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:11.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:11 compute-0 nova_compute[251992]: 2025-12-06 07:07:11.826 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:12 compute-0 nova_compute[251992]: 2025-12-06 07:07:12.181 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:12 compute-0 ceph-mon[74339]: pgmap v1403: 305 pgs: 305 active+clean; 191 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 1.8 MiB/s wr, 137 op/s
Dec 06 07:07:12 compute-0 nova_compute[251992]: 2025-12-06 07:07:12.927 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:07:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:07:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:07:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:07:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:07:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:07:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 167 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 06 07:07:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:13.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:13.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:14 compute-0 ceph-mon[74339]: pgmap v1404: 305 pgs: 305 active+clean; 167 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 06 07:07:14 compute-0 nova_compute[251992]: 2025-12-06 07:07:14.945 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004819.9446433, f0140185-4a12-4407-b799-4c4a2b8ebb6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:07:14 compute-0 nova_compute[251992]: 2025-12-06 07:07:14.945 251996 INFO nova.compute.manager [-] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] VM Stopped (Lifecycle Event)
Dec 06 07:07:15 compute-0 nova_compute[251992]: 2025-12-06 07:07:15.029 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004820.0278933, e87113ac-c01b-4dee-8ade-e02cb3ff1be9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:07:15 compute-0 nova_compute[251992]: 2025-12-06 07:07:15.029 251996 INFO nova.compute.manager [-] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] VM Stopped (Lifecycle Event)
Dec 06 07:07:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 623 KiB/s rd, 1.8 MiB/s wr, 123 op/s
Dec 06 07:07:15 compute-0 nova_compute[251992]: 2025-12-06 07:07:15.175 251996 DEBUG nova.network.neutron [req-6f46179a-9d50-4214-9196-898005e65b01 req-c17a8ad6-d5f3-4a8f-aef3-c1c0187e6db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updated VIF entry in instance network info cache for port 532b1d43-25cc-449a-8e3d-fd25a56a26b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:07:15 compute-0 nova_compute[251992]: 2025-12-06 07:07:15.175 251996 DEBUG nova.network.neutron [req-6f46179a-9d50-4214-9196-898005e65b01 req-c17a8ad6-d5f3-4a8f-aef3-c1c0187e6db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updating instance_info_cache with network_info: [{"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:15 compute-0 nova_compute[251992]: 2025-12-06 07:07:15.238 251996 DEBUG nova.compute.manager [None req-792f5270-5884-44d9-82ba-483471c0b11d - - - - - -] [instance: f0140185-4a12-4407-b799-4c4a2b8ebb6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:15 compute-0 nova_compute[251992]: 2025-12-06 07:07:15.240 251996 DEBUG nova.compute.manager [None req-2a3a16ae-4036-430b-8003-a52514a40081 - - - - - -] [instance: e87113ac-c01b-4dee-8ade-e02cb3ff1be9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:15 compute-0 nova_compute[251992]: 2025-12-06 07:07:15.247 251996 DEBUG oslo_concurrency.lockutils [req-6f46179a-9d50-4214-9196-898005e65b01 req-c17a8ad6-d5f3-4a8f-aef3-c1c0187e6db9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f10e044d-9118-49ce-b890-7ec41fc40cb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:07:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:15.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:15.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:16 compute-0 nova_compute[251992]: 2025-12-06 07:07:16.830 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:16 compute-0 ceph-mon[74339]: pgmap v1405: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 623 KiB/s rd, 1.8 MiB/s wr, 123 op/s
Dec 06 07:07:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 122 op/s
Dec 06 07:07:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:17.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:17 compute-0 sudo[279748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:17 compute-0 sudo[279748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:17 compute-0 sudo[279748]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:17 compute-0 sudo[279773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:17 compute-0 sudo[279773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:17 compute-0 sudo[279773]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:17.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:17 compute-0 nova_compute[251992]: 2025-12-06 07:07:17.929 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:18 compute-0 ceph-mon[74339]: pgmap v1406: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 122 op/s
Dec 06 07:07:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:07:18
Dec 06 07:07:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:07:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:07:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'images']
Dec 06 07:07:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:07:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 102 op/s
Dec 06 07:07:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:19.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:19.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:20 compute-0 ceph-mon[74339]: pgmap v1407: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 102 op/s
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.444 251996 DEBUG oslo_concurrency.lockutils [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "f10e044d-9118-49ce-b890-7ec41fc40cb0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.444 251996 DEBUG oslo_concurrency.lockutils [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "f10e044d-9118-49ce-b890-7ec41fc40cb0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.444 251996 DEBUG oslo_concurrency.lockutils [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "f10e044d-9118-49ce-b890-7ec41fc40cb0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.444 251996 DEBUG oslo_concurrency.lockutils [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "f10e044d-9118-49ce-b890-7ec41fc40cb0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.445 251996 DEBUG oslo_concurrency.lockutils [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "f10e044d-9118-49ce-b890-7ec41fc40cb0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.446 251996 INFO nova.compute.manager [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Terminating instance
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.447 251996 DEBUG nova.compute.manager [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:07:20 compute-0 kernel: tap532b1d43-25 (unregistering): left promiscuous mode
Dec 06 07:07:20 compute-0 NetworkManager[48965]: <info>  [1765004840.5371] device (tap532b1d43-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.543 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:20 compute-0 ovn_controller[147168]: 2025-12-06T07:07:20Z|00088|binding|INFO|Releasing lport 532b1d43-25cc-449a-8e3d-fd25a56a26b4 from this chassis (sb_readonly=0)
Dec 06 07:07:20 compute-0 ovn_controller[147168]: 2025-12-06T07:07:20Z|00089|binding|INFO|Setting lport 532b1d43-25cc-449a-8e3d-fd25a56a26b4 down in Southbound
Dec 06 07:07:20 compute-0 ovn_controller[147168]: 2025-12-06T07:07:20Z|00090|binding|INFO|Removing iface tap532b1d43-25 ovn-installed in OVS
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.546 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.564 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.565 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:52:5f 10.100.0.14'], port_security=['fa:16:3e:e9:52:5f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f10e044d-9118-49ce-b890-7ec41fc40cb0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-344a2c5d-4516-4e02-9384-4797cfc76497', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503b2dfdce9d47598a8b9de4b15e1d45', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd9b8be6-903c-4fbc-b82c-3bc27105f7c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9327b272-f30f-48a7-bcf4-62320d61cbdc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=532b1d43-25cc-449a-8e3d-fd25a56a26b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.570 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 532b1d43-25cc-449a-8e3d-fd25a56a26b4 in datapath 344a2c5d-4516-4e02-9384-4797cfc76497 unbound from our chassis
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.574 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 344a2c5d-4516-4e02-9384-4797cfc76497, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.578 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[685f5390-b2b1-4e2f-b6fe-9e276a44f02a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.581 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497 namespace which is not needed anymore
Dec 06 07:07:20 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000022.scope: Deactivated successfully.
Dec 06 07:07:20 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000022.scope: Consumed 15.973s CPU time.
Dec 06 07:07:20 compute-0 systemd-machined[212986]: Machine qemu-16-instance-00000022 terminated.
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.679 251996 INFO nova.virt.libvirt.driver [-] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Instance destroyed successfully.
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.680 251996 DEBUG nova.objects.instance [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lazy-loading 'resources' on Instance uuid f10e044d-9118-49ce-b890-7ec41fc40cb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:07:20 compute-0 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[277291]: [NOTICE]   (277295) : haproxy version is 2.8.14-c23fe91
Dec 06 07:07:20 compute-0 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[277291]: [NOTICE]   (277295) : path to executable is /usr/sbin/haproxy
Dec 06 07:07:20 compute-0 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[277291]: [WARNING]  (277295) : Exiting Master process...
Dec 06 07:07:20 compute-0 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[277291]: [WARNING]  (277295) : Exiting Master process...
Dec 06 07:07:20 compute-0 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[277291]: [ALERT]    (277295) : Current worker (277297) exited with code 143 (Terminated)
Dec 06 07:07:20 compute-0 neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497[277291]: [WARNING]  (277295) : All workers exited. Exiting... (0)
Dec 06 07:07:20 compute-0 systemd[1]: libpod-312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db.scope: Deactivated successfully.
Dec 06 07:07:20 compute-0 podman[279829]: 2025-12-06 07:07:20.721458922 +0000 UTC m=+0.045470312 container died 312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:07:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db-userdata-shm.mount: Deactivated successfully.
Dec 06 07:07:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c29c13c9dcfd4132108087b2566c86bad0d4e8b7d1a3db00ad9d864c6653860-merged.mount: Deactivated successfully.
Dec 06 07:07:20 compute-0 podman[279829]: 2025-12-06 07:07:20.754736993 +0000 UTC m=+0.078748383 container cleanup 312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.757 251996 DEBUG nova.virt.libvirt.vif [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:06:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-737606194',display_name='tempest-FloatingIPsAssociationTestJSON-server-737606194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-737606194',id=34,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:06:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503b2dfdce9d47598a8b9de4b15e1d45',ramdisk_id='',reservation_id='r-2gf6bx6d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-263213844',owner_user_name='tempest-FloatingIPsAssociationTestJSON-263213844-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:06:19Z,user_data=None,user_id='c9025bd3f0854dff80d9408800d6b76b',uuid=f10e044d-9118-49ce-b890-7ec41fc40cb0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.757 251996 DEBUG nova.network.os_vif_util [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Converting VIF {"id": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "address": "fa:16:3e:e9:52:5f", "network": {"id": "344a2c5d-4516-4e02-9384-4797cfc76497", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1967587837-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503b2dfdce9d47598a8b9de4b15e1d45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap532b1d43-25", "ovs_interfaceid": "532b1d43-25cc-449a-8e3d-fd25a56a26b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.758 251996 DEBUG nova.network.os_vif_util [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e9:52:5f,bridge_name='br-int',has_traffic_filtering=True,id=532b1d43-25cc-449a-8e3d-fd25a56a26b4,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap532b1d43-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.758 251996 DEBUG os_vif [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:52:5f,bridge_name='br-int',has_traffic_filtering=True,id=532b1d43-25cc-449a-8e3d-fd25a56a26b4,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap532b1d43-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.761 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.761 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap532b1d43-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.763 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.765 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.769 251996 INFO os_vif [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e9:52:5f,bridge_name='br-int',has_traffic_filtering=True,id=532b1d43-25cc-449a-8e3d-fd25a56a26b4,network=Network(344a2c5d-4516-4e02-9384-4797cfc76497),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap532b1d43-25')
Dec 06 07:07:20 compute-0 systemd[1]: libpod-conmon-312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db.scope: Deactivated successfully.
Dec 06 07:07:20 compute-0 podman[279866]: 2025-12-06 07:07:20.81929829 +0000 UTC m=+0.046330955 container remove 312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.824 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[97b287d6-be01-4b10-85aa-c45c4ff4b7eb]: (4, ('Sat Dec  6 07:07:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497 (312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db)\n312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db\nSat Dec  6 07:07:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497 (312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db)\n312e99f366a27feafca5031a706eca2e8092da0a0a17f0437cd23b289679c4db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.825 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0c48643d-efc4-49c0-835a-3be732810d9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.826 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap344a2c5d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.828 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:20 compute-0 kernel: tap344a2c5d-40: left promiscuous mode
Dec 06 07:07:20 compute-0 nova_compute[251992]: 2025-12-06 07:07:20.843 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.847 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8f63c30a-9a5f-430a-a1e4-2417491d2863]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.868 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[796a087e-1da4-4e18-afe7-e2ef529898a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.869 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[299c41d9-6828-4425-8c31-b0dd63954209]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.884 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8dd8cd-16aa-454f-a7e0-adcab34da792]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505093, 'reachable_time': 39795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279897, 'error': None, 'target': 'ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d344a2c5d\x2d4516\x2d4e02\x2d9384\x2d4797cfc76497.mount: Deactivated successfully.
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.889 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-344a2c5d-4516-4e02-9384-4797cfc76497 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:07:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:20.889 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[5c9a5963-f868-40c2-9890-6499d1afc7cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 149 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 114 op/s
Dec 06 07:07:21 compute-0 nova_compute[251992]: 2025-12-06 07:07:21.147 251996 INFO nova.virt.libvirt.driver [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Deleting instance files /var/lib/nova/instances/f10e044d-9118-49ce-b890-7ec41fc40cb0_del
Dec 06 07:07:21 compute-0 nova_compute[251992]: 2025-12-06 07:07:21.148 251996 INFO nova.virt.libvirt.driver [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Deletion of /var/lib/nova/instances/f10e044d-9118-49ce-b890-7ec41fc40cb0_del complete
Dec 06 07:07:21 compute-0 nova_compute[251992]: 2025-12-06 07:07:21.236 251996 INFO nova.compute.manager [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Took 0.79 seconds to destroy the instance on the hypervisor.
Dec 06 07:07:21 compute-0 nova_compute[251992]: 2025-12-06 07:07:21.236 251996 DEBUG oslo.service.loopingcall [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:07:21 compute-0 nova_compute[251992]: 2025-12-06 07:07:21.237 251996 DEBUG nova.compute.manager [-] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:07:21 compute-0 nova_compute[251992]: 2025-12-06 07:07:21.237 251996 DEBUG nova.network.neutron [-] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:07:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:21.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:21.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:22 compute-0 ceph-mon[74339]: pgmap v1408: 305 pgs: 305 active+clean; 149 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 114 op/s
Dec 06 07:07:22 compute-0 nova_compute[251992]: 2025-12-06 07:07:22.971 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 121 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.5 KiB/s wr, 97 op/s
Dec 06 07:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:07:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:23.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:07:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:07:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:23.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:07:24 compute-0 ceph-mon[74339]: pgmap v1409: 305 pgs: 305 active+clean; 121 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.5 KiB/s wr, 97 op/s
Dec 06 07:07:24 compute-0 nova_compute[251992]: 2025-12-06 07:07:24.508 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:24 compute-0 nova_compute[251992]: 2025-12-06 07:07:24.933 251996 DEBUG nova.network.neutron [-] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:25 compute-0 nova_compute[251992]: 2025-12-06 07:07:25.020 251996 INFO nova.compute.manager [-] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Took 3.78 seconds to deallocate network for instance.
Dec 06 07:07:25 compute-0 nova_compute[251992]: 2025-12-06 07:07:25.083 251996 DEBUG oslo_concurrency.lockutils [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:25 compute-0 nova_compute[251992]: 2025-12-06 07:07:25.084 251996 DEBUG oslo_concurrency.lockutils [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 96 MiB data, 500 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 115 op/s
Dec 06 07:07:25 compute-0 nova_compute[251992]: 2025-12-06 07:07:25.146 251996 DEBUG oslo_concurrency.processutils [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:25.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:25 compute-0 nova_compute[251992]: 2025-12-06 07:07:25.495 251996 DEBUG nova.compute.manager [req-f43912c6-5dfc-46d5-bb4e-63d00f72e808 req-73cb5c51-ac77-44c2-b52f-d766d73f89c1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Received event network-vif-deleted-532b1d43-25cc-449a-8e3d-fd25a56a26b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:07:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/781877424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:25 compute-0 nova_compute[251992]: 2025-12-06 07:07:25.583 251996 DEBUG oslo_concurrency.processutils [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:25 compute-0 nova_compute[251992]: 2025-12-06 07:07:25.589 251996 DEBUG nova.compute.provider_tree [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:07:25 compute-0 nova_compute[251992]: 2025-12-06 07:07:25.621 251996 DEBUG nova.scheduler.client.report [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015443609831565233 of space, bias 1.0, pg target 0.46330829494695697 quantized to 32 (current 32)
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:07:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:07:25 compute-0 nova_compute[251992]: 2025-12-06 07:07:25.720 251996 DEBUG oslo_concurrency.lockutils [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:25.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:25 compute-0 nova_compute[251992]: 2025-12-06 07:07:25.756 251996 INFO nova.scheduler.client.report [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Deleted allocations for instance f10e044d-9118-49ce-b890-7ec41fc40cb0
Dec 06 07:07:25 compute-0 nova_compute[251992]: 2025-12-06 07:07:25.763 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:25 compute-0 nova_compute[251992]: 2025-12-06 07:07:25.902 251996 DEBUG oslo_concurrency.lockutils [None req-1b4b1812-7b7b-424a-aa95-2183e76dc87c c9025bd3f0854dff80d9408800d6b76b 503b2dfdce9d47598a8b9de4b15e1d45 - - default default] Lock "f10e044d-9118-49ce-b890-7ec41fc40cb0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:26 compute-0 ceph-mon[74339]: pgmap v1410: 305 pgs: 305 active+clean; 96 MiB data, 500 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 115 op/s
Dec 06 07:07:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/781877424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 115 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Dec 06 07:07:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:27.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:27 compute-0 podman[279924]: 2025-12-06 07:07:27.478849926 +0000 UTC m=+0.127464800 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:07:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:27.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:27 compute-0 nova_compute[251992]: 2025-12-06 07:07:27.973 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:28 compute-0 ceph-mon[74339]: pgmap v1411: 305 pgs: 305 active+clean; 115 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Dec 06 07:07:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 115 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Dec 06 07:07:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:29.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:29.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:30 compute-0 ceph-mon[74339]: pgmap v1412: 305 pgs: 305 active+clean; 115 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Dec 06 07:07:30 compute-0 nova_compute[251992]: 2025-12-06 07:07:30.766 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Dec 06 07:07:31 compute-0 podman[279955]: 2025-12-06 07:07:31.38930774 +0000 UTC m=+0.052472298 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:07:31 compute-0 podman[279956]: 2025-12-06 07:07:31.389348481 +0000 UTC m=+0.051159123 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:07:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:31.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:31.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:32 compute-0 ceph-mon[74339]: pgmap v1413: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Dec 06 07:07:32 compute-0 nova_compute[251992]: 2025-12-06 07:07:32.974 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 07:07:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:33.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:33.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:34 compute-0 ceph-mon[74339]: pgmap v1414: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 07:07:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 07:07:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:35.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:35 compute-0 nova_compute[251992]: 2025-12-06 07:07:35.678 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004840.6764607, f10e044d-9118-49ce-b890-7ec41fc40cb0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:07:35 compute-0 nova_compute[251992]: 2025-12-06 07:07:35.678 251996 INFO nova.compute.manager [-] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] VM Stopped (Lifecycle Event)
Dec 06 07:07:35 compute-0 nova_compute[251992]: 2025-12-06 07:07:35.768 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:35.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:35 compute-0 nova_compute[251992]: 2025-12-06 07:07:35.827 251996 DEBUG nova.compute.manager [None req-cd0f8766-e5b9-43a7-a698-4b61fcd85f8d - - - - - -] [instance: f10e044d-9118-49ce-b890-7ec41fc40cb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:36 compute-0 nova_compute[251992]: 2025-12-06 07:07:36.375 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Acquiring lock "d46e42cf-1110-412b-84e1-780a7f05e1c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:36 compute-0 nova_compute[251992]: 2025-12-06 07:07:36.376 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:36 compute-0 nova_compute[251992]: 2025-12-06 07:07:36.398 251996 DEBUG nova.compute.manager [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:07:36 compute-0 nova_compute[251992]: 2025-12-06 07:07:36.484 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:36 compute-0 nova_compute[251992]: 2025-12-06 07:07:36.485 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:36 compute-0 nova_compute[251992]: 2025-12-06 07:07:36.492 251996 DEBUG nova.virt.hardware [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:07:36 compute-0 nova_compute[251992]: 2025-12-06 07:07:36.492 251996 INFO nova.compute.claims [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:07:36 compute-0 nova_compute[251992]: 2025-12-06 07:07:36.651 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:36 compute-0 nova_compute[251992]: 2025-12-06 07:07:36.673 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:36 compute-0 nova_compute[251992]: 2025-12-06 07:07:36.909 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:37 compute-0 ceph-mon[74339]: pgmap v1415: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 07:07:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:07:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3861690964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:37 compute-0 nova_compute[251992]: 2025-12-06 07:07:37.103 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:37 compute-0 nova_compute[251992]: 2025-12-06 07:07:37.110 251996 DEBUG nova.compute.provider_tree [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:07:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 1.2 MiB/s wr, 45 op/s
Dec 06 07:07:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:37.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:37 compute-0 sudo[280018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:37 compute-0 sudo[280018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:37 compute-0 sudo[280018]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:37 compute-0 sudo[280043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:37 compute-0 sudo[280043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:37 compute-0 sudo[280043]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:37.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:37 compute-0 nova_compute[251992]: 2025-12-06 07:07:37.977 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3861690964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:38 compute-0 ceph-mon[74339]: pgmap v1416: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 1.2 MiB/s wr, 45 op/s
Dec 06 07:07:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 60 KiB/s wr, 5 op/s
Dec 06 07:07:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:39.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:39.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:39 compute-0 nova_compute[251992]: 2025-12-06 07:07:39.779 251996 DEBUG nova.scheduler.client.report [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:07:39 compute-0 nova_compute[251992]: 2025-12-06 07:07:39.949 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:39 compute-0 nova_compute[251992]: 2025-12-06 07:07:39.950 251996 DEBUG nova.compute.manager [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.096 251996 DEBUG nova.compute.manager [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.097 251996 DEBUG nova.network.neutron [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.152 251996 INFO nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:07:40 compute-0 ceph-mon[74339]: pgmap v1417: 305 pgs: 305 active+clean; 121 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 60 KiB/s wr, 5 op/s
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.230 251996 DEBUG nova.compute.manager [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.635 251996 DEBUG nova.compute.manager [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.636 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.637 251996 INFO nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Creating image(s)
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.666 251996 DEBUG nova.storage.rbd_utils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] rbd image d46e42cf-1110-412b-84e1-780a7f05e1c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.693 251996 DEBUG nova.storage.rbd_utils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] rbd image d46e42cf-1110-412b-84e1-780a7f05e1c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.719 251996 DEBUG nova.storage.rbd_utils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] rbd image d46e42cf-1110-412b-84e1-780a7f05e1c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.722 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.771 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.783 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.783 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.784 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.784 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.810 251996 DEBUG nova.storage.rbd_utils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] rbd image d46e42cf-1110-412b-84e1-780a7f05e1c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.813 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d46e42cf-1110-412b-84e1-780a7f05e1c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:40 compute-0 nova_compute[251992]: 2025-12-06 07:07:40.996 251996 DEBUG nova.policy [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b9343f7eea174bc8ad0a14b1247d7d0f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09660a2b244f472083042e6223025786', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:07:41 compute-0 nova_compute[251992]: 2025-12-06 07:07:41.098 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d46e42cf-1110-412b-84e1-780a7f05e1c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 129 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 399 KiB/s wr, 9 op/s
Dec 06 07:07:41 compute-0 nova_compute[251992]: 2025-12-06 07:07:41.162 251996 DEBUG nova.storage.rbd_utils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] resizing rbd image d46e42cf-1110-412b-84e1-780a7f05e1c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:07:41 compute-0 nova_compute[251992]: 2025-12-06 07:07:41.265 251996 DEBUG nova.objects.instance [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lazy-loading 'migration_context' on Instance uuid d46e42cf-1110-412b-84e1-780a7f05e1c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:07:41 compute-0 nova_compute[251992]: 2025-12-06 07:07:41.292 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:07:41 compute-0 nova_compute[251992]: 2025-12-06 07:07:41.293 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Ensure instance console log exists: /var/lib/nova/instances/d46e42cf-1110-412b-84e1-780a7f05e1c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:07:41 compute-0 nova_compute[251992]: 2025-12-06 07:07:41.293 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:41 compute-0 nova_compute[251992]: 2025-12-06 07:07:41.293 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:41 compute-0 nova_compute[251992]: 2025-12-06 07:07:41.294 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:41.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:41.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:42 compute-0 ceph-mon[74339]: pgmap v1418: 305 pgs: 305 active+clean; 129 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 399 KiB/s wr, 9 op/s
Dec 06 07:07:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:42 compute-0 nova_compute[251992]: 2025-12-06 07:07:42.698 251996 DEBUG nova.network.neutron [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Successfully created port: 2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:07:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:07:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:07:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:07:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:07:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:07:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:07:42 compute-0 nova_compute[251992]: 2025-12-06 07:07:42.979 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 134 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.8 KiB/s rd, 610 KiB/s wr, 5 op/s
Dec 06 07:07:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:43.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:43.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:44 compute-0 ceph-mon[74339]: pgmap v1419: 305 pgs: 305 active+clean; 134 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 2.8 KiB/s rd, 610 KiB/s wr, 5 op/s
Dec 06 07:07:44 compute-0 nova_compute[251992]: 2025-12-06 07:07:44.734 251996 DEBUG nova.network.neutron [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Successfully updated port: 2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:07:44 compute-0 nova_compute[251992]: 2025-12-06 07:07:44.767 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Acquiring lock "refresh_cache-d46e42cf-1110-412b-84e1-780a7f05e1c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:07:44 compute-0 nova_compute[251992]: 2025-12-06 07:07:44.768 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Acquired lock "refresh_cache-d46e42cf-1110-412b-84e1-780a7f05e1c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:07:44 compute-0 nova_compute[251992]: 2025-12-06 07:07:44.768 251996 DEBUG nova.network.neutron [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:07:45 compute-0 nova_compute[251992]: 2025-12-06 07:07:45.022 251996 DEBUG nova.compute.manager [req-3c8abd61-cfb2-497a-a22f-e42bcf99a7b5 req-e3c177c7-7901-4186-aa8c-1eadb1ba566b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Received event network-changed-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:45 compute-0 nova_compute[251992]: 2025-12-06 07:07:45.023 251996 DEBUG nova.compute.manager [req-3c8abd61-cfb2-497a-a22f-e42bcf99a7b5 req-e3c177c7-7901-4186-aa8c-1eadb1ba566b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Refreshing instance network info cache due to event network-changed-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:07:45 compute-0 nova_compute[251992]: 2025-12-06 07:07:45.023 251996 DEBUG oslo_concurrency.lockutils [req-3c8abd61-cfb2-497a-a22f-e42bcf99a7b5 req-e3c177c7-7901-4186-aa8c-1eadb1ba566b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d46e42cf-1110-412b-84e1-780a7f05e1c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:07:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 163 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Dec 06 07:07:45 compute-0 nova_compute[251992]: 2025-12-06 07:07:45.142 251996 DEBUG nova.network.neutron [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:07:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:45.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:45 compute-0 nova_compute[251992]: 2025-12-06 07:07:45.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:45 compute-0 nova_compute[251992]: 2025-12-06 07:07:45.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:45 compute-0 nova_compute[251992]: 2025-12-06 07:07:45.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:45 compute-0 nova_compute[251992]: 2025-12-06 07:07:45.774 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:45.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:46 compute-0 nova_compute[251992]: 2025-12-06 07:07:46.766 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:46 compute-0 nova_compute[251992]: 2025-12-06 07:07:46.766 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:07:46 compute-0 nova_compute[251992]: 2025-12-06 07:07:46.805 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:07:46 compute-0 ceph-mon[74339]: pgmap v1420: 305 pgs: 305 active+clean; 163 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 9.4 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Dec 06 07:07:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:07:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.346 251996 DEBUG nova.network.neutron [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Updating instance_info_cache with network_info: [{"id": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "address": "fa:16:3e:09:e1:01", "network": {"id": "251e5963-d880-4c08-88e8-a0038135133a", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-107644914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09660a2b244f472083042e6223025786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ecb7238-7c", "ovs_interfaceid": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:47.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.675 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Releasing lock "refresh_cache-d46e42cf-1110-412b-84e1-780a7f05e1c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.675 251996 DEBUG nova.compute.manager [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Instance network_info: |[{"id": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "address": "fa:16:3e:09:e1:01", "network": {"id": "251e5963-d880-4c08-88e8-a0038135133a", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-107644914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09660a2b244f472083042e6223025786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ecb7238-7c", "ovs_interfaceid": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.675 251996 DEBUG oslo_concurrency.lockutils [req-3c8abd61-cfb2-497a-a22f-e42bcf99a7b5 req-e3c177c7-7901-4186-aa8c-1eadb1ba566b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d46e42cf-1110-412b-84e1-780a7f05e1c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.675 251996 DEBUG nova.network.neutron [req-3c8abd61-cfb2-497a-a22f-e42bcf99a7b5 req-e3c177c7-7901-4186-aa8c-1eadb1ba566b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Refreshing network info cache for port 2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.678 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Start _get_guest_xml network_info=[{"id": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "address": "fa:16:3e:09:e1:01", "network": {"id": "251e5963-d880-4c08-88e8-a0038135133a", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-107644914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09660a2b244f472083042e6223025786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ecb7238-7c", "ovs_interfaceid": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.682 251996 WARNING nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.688 251996 DEBUG nova.virt.libvirt.host [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.688 251996 DEBUG nova.virt.libvirt.host [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.691 251996 DEBUG nova.virt.libvirt.host [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.691 251996 DEBUG nova.virt.libvirt.host [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.692 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.692 251996 DEBUG nova.virt.hardware [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.693 251996 DEBUG nova.virt.hardware [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.693 251996 DEBUG nova.virt.hardware [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.693 251996 DEBUG nova.virt.hardware [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.693 251996 DEBUG nova.virt.hardware [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.693 251996 DEBUG nova.virt.hardware [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.694 251996 DEBUG nova.virt.hardware [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.694 251996 DEBUG nova.virt.hardware [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.694 251996 DEBUG nova.virt.hardware [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.694 251996 DEBUG nova.virt.hardware [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.695 251996 DEBUG nova.virt.hardware [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.699 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.722 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.723 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.723 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.723 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:07:47 compute-0 nova_compute[251992]: 2025-12-06 07:07:47.724 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:47.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4060825194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.033 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:07:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3405180293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:07:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3892501038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.148 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.177 251996 DEBUG nova.storage.rbd_utils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] rbd image d46e42cf-1110-412b-84e1-780a7f05e1c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.183 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.205 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.359 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.360 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4729MB free_disk=20.921947479248047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.361 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.361 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.619 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance d46e42cf-1110-412b-84e1-780a7f05e1c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.619 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.620 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:07:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:07:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2210710338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.644 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.645 251996 DEBUG nova.virt.libvirt.vif [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:07:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-2075763697',display_name='tempest-ImagesNegativeTestJSON-server-2075763697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-2075763697',id=43,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09660a2b244f472083042e6223025786',ramdisk_id='',reservation_id='r-hn8yuzhu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-2015493065',owner_user_name='tempest-ImagesNegativeTest
JSON-2015493065-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:07:40Z,user_data=None,user_id='b9343f7eea174bc8ad0a14b1247d7d0f',uuid=d46e42cf-1110-412b-84e1-780a7f05e1c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "address": "fa:16:3e:09:e1:01", "network": {"id": "251e5963-d880-4c08-88e8-a0038135133a", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-107644914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09660a2b244f472083042e6223025786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ecb7238-7c", "ovs_interfaceid": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.646 251996 DEBUG nova.network.os_vif_util [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Converting VIF {"id": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "address": "fa:16:3e:09:e1:01", "network": {"id": "251e5963-d880-4c08-88e8-a0038135133a", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-107644914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09660a2b244f472083042e6223025786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ecb7238-7c", "ovs_interfaceid": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.647 251996 DEBUG nova.network.os_vif_util [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:e1:01,bridge_name='br-int',has_traffic_filtering=True,id=2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd,network=Network(251e5963-d880-4c08-88e8-a0038135133a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ecb7238-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.648 251996 DEBUG nova.objects.instance [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lazy-loading 'pci_devices' on Instance uuid d46e42cf-1110-412b-84e1-780a7f05e1c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.693 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:48 compute-0 ceph-mon[74339]: pgmap v1421: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:07:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3405180293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3892501038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2210710338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.932 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:07:48 compute-0 nova_compute[251992]:   <uuid>d46e42cf-1110-412b-84e1-780a7f05e1c2</uuid>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   <name>instance-0000002b</name>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <nova:name>tempest-ImagesNegativeTestJSON-server-2075763697</nova:name>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:07:47</nova:creationTime>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <nova:user uuid="b9343f7eea174bc8ad0a14b1247d7d0f">tempest-ImagesNegativeTestJSON-2015493065-project-member</nova:user>
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <nova:project uuid="09660a2b244f472083042e6223025786">tempest-ImagesNegativeTestJSON-2015493065</nova:project>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <nova:port uuid="2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd">
Dec 06 07:07:48 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <system>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <entry name="serial">d46e42cf-1110-412b-84e1-780a7f05e1c2</entry>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <entry name="uuid">d46e42cf-1110-412b-84e1-780a7f05e1c2</entry>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     </system>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   <os>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   </os>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   <features>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   </features>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/d46e42cf-1110-412b-84e1-780a7f05e1c2_disk">
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       </source>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/d46e42cf-1110-412b-84e1-780a7f05e1c2_disk.config">
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       </source>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:07:48 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:09:e1:01"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <target dev="tap2ecb7238-7c"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/d46e42cf-1110-412b-84e1-780a7f05e1c2/console.log" append="off"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <video>
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     </video>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:07:48 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:07:48 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:07:48 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:07:48 compute-0 nova_compute[251992]: </domain>
Dec 06 07:07:48 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.982 251996 DEBUG nova.compute.manager [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Preparing to wait for external event network-vif-plugged-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.984 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Acquiring lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.984 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.985 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.986 251996 DEBUG nova.virt.libvirt.vif [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:07:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-2075763697',display_name='tempest-ImagesNegativeTestJSON-server-2075763697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-2075763697',id=43,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09660a2b244f472083042e6223025786',ramdisk_id='',reservation_id='r-hn8yuzhu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-2015493065',owner_user_name='tempest-ImagesNe
gativeTestJSON-2015493065-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:07:40Z,user_data=None,user_id='b9343f7eea174bc8ad0a14b1247d7d0f',uuid=d46e42cf-1110-412b-84e1-780a7f05e1c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "address": "fa:16:3e:09:e1:01", "network": {"id": "251e5963-d880-4c08-88e8-a0038135133a", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-107644914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09660a2b244f472083042e6223025786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ecb7238-7c", "ovs_interfaceid": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.987 251996 DEBUG nova.network.os_vif_util [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Converting VIF {"id": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "address": "fa:16:3e:09:e1:01", "network": {"id": "251e5963-d880-4c08-88e8-a0038135133a", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-107644914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09660a2b244f472083042e6223025786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ecb7238-7c", "ovs_interfaceid": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.988 251996 DEBUG nova.network.os_vif_util [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:e1:01,bridge_name='br-int',has_traffic_filtering=True,id=2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd,network=Network(251e5963-d880-4c08-88e8-a0038135133a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ecb7238-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.988 251996 DEBUG os_vif [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:e1:01,bridge_name='br-int',has_traffic_filtering=True,id=2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd,network=Network(251e5963-d880-4c08-88e8-a0038135133a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ecb7238-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.990 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.991 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.993 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.997 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.998 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ecb7238-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:48 compute-0 nova_compute[251992]: 2025-12-06 07:07:48.998 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ecb7238-7c, col_values=(('external_ids', {'iface-id': '2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:e1:01', 'vm-uuid': 'd46e42cf-1110-412b-84e1-780a7f05e1c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.000 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:49 compute-0 NetworkManager[48965]: <info>  [1765004869.0015] manager: (tap2ecb7238-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.003 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.015 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.015 251996 INFO os_vif [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:e1:01,bridge_name='br-int',has_traffic_filtering=True,id=2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd,network=Network(251e5963-d880-4c08-88e8-a0038135133a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ecb7238-7c')
Dec 06 07:07:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:07:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:07:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1525017526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.215 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.225 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.252 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.253 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.253 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] No VIF found with MAC fa:16:3e:09:e1:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.254 251996 INFO nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Using config drive
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.285 251996 DEBUG nova.storage.rbd_utils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] rbd image d46e42cf-1110-412b-84e1-780a7f05e1c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.302 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:07:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:49.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.515 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.516 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.517 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.517 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.776 251996 DEBUG nova.network.neutron [req-3c8abd61-cfb2-497a-a22f-e42bcf99a7b5 req-e3c177c7-7901-4186-aa8c-1eadb1ba566b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Updated VIF entry in instance network info cache for port 2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.777 251996 DEBUG nova.network.neutron [req-3c8abd61-cfb2-497a-a22f-e42bcf99a7b5 req-e3c177c7-7901-4186-aa8c-1eadb1ba566b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Updating instance_info_cache with network_info: [{"id": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "address": "fa:16:3e:09:e1:01", "network": {"id": "251e5963-d880-4c08-88e8-a0038135133a", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-107644914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09660a2b244f472083042e6223025786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ecb7238-7c", "ovs_interfaceid": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:49.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:49 compute-0 nova_compute[251992]: 2025-12-06 07:07:49.948 251996 DEBUG oslo_concurrency.lockutils [req-3c8abd61-cfb2-497a-a22f-e42bcf99a7b5 req-e3c177c7-7901-4186-aa8c-1eadb1ba566b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d46e42cf-1110-412b-84e1-780a7f05e1c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:07:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1525017526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2129314298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.039 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.038 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.041 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.072 251996 INFO nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Creating config drive at /var/lib/nova/instances/d46e42cf-1110-412b-84e1-780a7f05e1c2/disk.config
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.077 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d46e42cf-1110-412b-84e1-780a7f05e1c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppnnbfzpu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.206 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d46e42cf-1110-412b-84e1-780a7f05e1c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppnnbfzpu" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.237 251996 DEBUG nova.storage.rbd_utils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] rbd image d46e42cf-1110-412b-84e1-780a7f05e1c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.241 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d46e42cf-1110-412b-84e1-780a7f05e1c2/disk.config d46e42cf-1110-412b-84e1-780a7f05e1c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.407 251996 DEBUG oslo_concurrency.processutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d46e42cf-1110-412b-84e1-780a7f05e1c2/disk.config d46e42cf-1110-412b-84e1-780a7f05e1c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.408 251996 INFO nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Deleting local config drive /var/lib/nova/instances/d46e42cf-1110-412b-84e1-780a7f05e1c2/disk.config because it was imported into RBD.
Dec 06 07:07:50 compute-0 kernel: tap2ecb7238-7c: entered promiscuous mode
Dec 06 07:07:50 compute-0 NetworkManager[48965]: <info>  [1765004870.4554] manager: (tap2ecb7238-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.456 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:50 compute-0 ovn_controller[147168]: 2025-12-06T07:07:50Z|00091|binding|INFO|Claiming lport 2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd for this chassis.
Dec 06 07:07:50 compute-0 ovn_controller[147168]: 2025-12-06T07:07:50Z|00092|binding|INFO|2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd: Claiming fa:16:3e:09:e1:01 10.100.0.7
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.459 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:50 compute-0 systemd-udevd[280418]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:07:50 compute-0 NetworkManager[48965]: <info>  [1765004870.4976] device (tap2ecb7238-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:07:50 compute-0 NetworkManager[48965]: <info>  [1765004870.4987] device (tap2ecb7238-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.529 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.533 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:50 compute-0 ovn_controller[147168]: 2025-12-06T07:07:50Z|00093|binding|INFO|Setting lport 2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd ovn-installed in OVS
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.538 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:50 compute-0 systemd-machined[212986]: New machine qemu-19-instance-0000002b.
Dec 06 07:07:50 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-0000002b.
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.602 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.602 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:50 compute-0 ovn_controller[147168]: 2025-12-06T07:07:50Z|00094|binding|INFO|Setting lport 2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd up in Southbound
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.663 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:e1:01 10.100.0.7'], port_security=['fa:16:3e:09:e1:01 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd46e42cf-1110-412b-84e1-780a7f05e1c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-251e5963-d880-4c08-88e8-a0038135133a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09660a2b244f472083042e6223025786', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6b6578cf-dcf2-43aa-9358-c2f6127fa13c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efc8c20-7ca3-40ec-9030-5d164de7fe97, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.664 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd in datapath 251e5963-d880-4c08-88e8-a0038135133a bound to our chassis
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.665 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 251e5963-d880-4c08-88e8-a0038135133a
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.678 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[88c63a9b-7033-4015-857a-407847b8919e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.679 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap251e5963-d1 in ovnmeta-251e5963-d880-4c08-88e8-a0038135133a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.682 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap251e5963-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.682 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8e294db2-1151-43db-8d09-9474edf6b956]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.683 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[694fe865-9949-4f1c-bc99-1e506df1a8b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.698 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[676f6cde-2366-4236-90a1-e4b92a7ffc9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.710 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f53bd56a-c624-4a22-83c1-56fb564612c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.740 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b18ddb-9a26-4ac8-8e77-ad8f5415b185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 NetworkManager[48965]: <info>  [1765004870.7462] manager: (tap251e5963-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.745 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1079ece2-ad58-4e76-b50d-fd292e2338ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.779 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[78e90008-44aa-43f8-9d74-d403de37df0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.786 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[269eac42-aa11-4366-be45-9c4316a96d2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 NetworkManager[48965]: <info>  [1765004870.8102] device (tap251e5963-d0): carrier: link connected
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.819 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc3c8bf-270f-4056-a684-a566697b11be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.837 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1de90f-c9f6-49c4-888a-992692fb83b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap251e5963-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:c3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514339, 'reachable_time': 27836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280454, 'error': None, 'target': 'ovnmeta-251e5963-d880-4c08-88e8-a0038135133a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.850 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4e75ddb5-2ea6-4549-ba7f-6caee02d538d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:c377'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514339, 'tstamp': 514339}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280455, 'error': None, 'target': 'ovnmeta-251e5963-d880-4c08-88e8-a0038135133a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.863 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bc69cbce-1306-4e67-b50e-c50c7ab2a14c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap251e5963-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:c3:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514339, 'reachable_time': 27836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280456, 'error': None, 'target': 'ovnmeta-251e5963-d880-4c08-88e8-a0038135133a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.887 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9e74c1df-027e-49ab-9cdd-ccd7402e0a3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.931 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1857e3-84e7-403f-8884-65718dff634b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.932 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap251e5963-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.933 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.933 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap251e5963-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.935 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:50 compute-0 NetworkManager[48965]: <info>  [1765004870.9358] manager: (tap251e5963-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Dec 06 07:07:50 compute-0 kernel: tap251e5963-d0: entered promiscuous mode
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.938 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap251e5963-d0, col_values=(('external_ids', {'iface-id': 'a666aff7-a8c1-4efb-8cb7-f88bd2a34589'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.939 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:50 compute-0 ovn_controller[147168]: 2025-12-06T07:07:50Z|00095|binding|INFO|Releasing lport a666aff7-a8c1-4efb-8cb7-f88bd2a34589 from this chassis (sb_readonly=0)
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.940 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.940 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/251e5963-d880-4c08-88e8-a0038135133a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/251e5963-d880-4c08-88e8-a0038135133a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.941 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b430d8-bef8-46dc-a07a-94f09d4ae692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.942 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-251e5963-d880-4c08-88e8-a0038135133a
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/251e5963-d880-4c08-88e8-a0038135133a.pid.haproxy
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 251e5963-d880-4c08-88e8-a0038135133a
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:07:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:50.943 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-251e5963-d880-4c08-88e8-a0038135133a', 'env', 'PROCESS_TAG=haproxy-251e5963-d880-4c08-88e8-a0038135133a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/251e5963-d880-4c08-88e8-a0038135133a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:07:50 compute-0 nova_compute[251992]: 2025-12-06 07:07:50.954 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:51 compute-0 ceph-mon[74339]: pgmap v1422: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:07:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2763915287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 203 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 3.2 MiB/s wr, 33 op/s
Dec 06 07:07:51 compute-0 podman[280503]: 2025-12-06 07:07:51.322547 +0000 UTC m=+0.049325686 container create 28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:07:51 compute-0 systemd[1]: Started libpod-conmon-28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695.scope.
Dec 06 07:07:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:07:51 compute-0 podman[280503]: 2025-12-06 07:07:51.296371888 +0000 UTC m=+0.023150614 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea7cc9764c4994e6e6016b066861a1a63f713d9ccfdc5d5707b3a978f806d972/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:07:51 compute-0 podman[280503]: 2025-12-06 07:07:51.405189112 +0000 UTC m=+0.131967828 container init 28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 07:07:51 compute-0 podman[280503]: 2025-12-06 07:07:51.411181028 +0000 UTC m=+0.137959714 container start 28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:07:51 compute-0 neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a[280518]: [NOTICE]   (280537) : New worker (280543) forked
Dec 06 07:07:51 compute-0 neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a[280518]: [NOTICE]   (280537) : Loading success.
Dec 06 07:07:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:51.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.526 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004871.5263317, d46e42cf-1110-412b-84e1-780a7f05e1c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.527 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] VM Started (Lifecycle Event)
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.536 251996 DEBUG nova.compute.manager [req-430a1bb7-81c3-401d-93a1-01e432ee4071 req-bf6f829a-a6da-4cb1-a320-22563bba1ec1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Received event network-vif-plugged-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.537 251996 DEBUG oslo_concurrency.lockutils [req-430a1bb7-81c3-401d-93a1-01e432ee4071 req-bf6f829a-a6da-4cb1-a320-22563bba1ec1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.537 251996 DEBUG oslo_concurrency.lockutils [req-430a1bb7-81c3-401d-93a1-01e432ee4071 req-bf6f829a-a6da-4cb1-a320-22563bba1ec1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.537 251996 DEBUG oslo_concurrency.lockutils [req-430a1bb7-81c3-401d-93a1-01e432ee4071 req-bf6f829a-a6da-4cb1-a320-22563bba1ec1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.537 251996 DEBUG nova.compute.manager [req-430a1bb7-81c3-401d-93a1-01e432ee4071 req-bf6f829a-a6da-4cb1-a320-22563bba1ec1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Processing event network-vif-plugged-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.538 251996 DEBUG nova.compute.manager [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.541 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.544 251996 INFO nova.virt.libvirt.driver [-] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Instance spawned successfully.
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.544 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.564 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.568 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.610 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.611 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.612 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.613 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.613 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.614 251996 DEBUG nova.virt.libvirt.driver [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.624 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.625 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004871.526522, d46e42cf-1110-412b-84e1-780a7f05e1c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.625 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] VM Paused (Lifecycle Event)
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.669 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.673 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004871.5401497, d46e42cf-1110-412b-84e1-780a7f05e1c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.674 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] VM Resumed (Lifecycle Event)
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.703 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.707 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.729 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.729 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.740 251996 INFO nova.compute.manager [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Took 11.11 seconds to spawn the instance on the hypervisor.
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.741 251996 DEBUG nova.compute.manager [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.742 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:07:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:51.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.825 251996 INFO nova.compute.manager [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Took 15.38 seconds to build instance.
Dec 06 07:07:51 compute-0 nova_compute[251992]: 2025-12-06 07:07:51.876 251996 DEBUG oslo_concurrency.lockutils [None req-22c13121-d189-4fab-ae74-9627eaf40c0d b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:52 compute-0 ceph-mon[74339]: pgmap v1423: 305 pgs: 305 active+clean; 203 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 3.2 MiB/s wr, 33 op/s
Dec 06 07:07:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/237580491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2213247620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:52.043 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:52 compute-0 nova_compute[251992]: 2025-12-06 07:07:52.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:07:52 compute-0 nova_compute[251992]: 2025-12-06 07:07:52.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.035 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 213 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.2 MiB/s wr, 57 op/s
Dec 06 07:07:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:53.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.676 251996 DEBUG nova.compute.manager [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Received event network-vif-plugged-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.677 251996 DEBUG oslo_concurrency.lockutils [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.677 251996 DEBUG oslo_concurrency.lockutils [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.677 251996 DEBUG oslo_concurrency.lockutils [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.677 251996 DEBUG nova.compute.manager [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] No waiting events found dispatching network-vif-plugged-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.677 251996 WARNING nova.compute.manager [req-fdd26c73-3c0c-4bdc-9f09-ee4477da5912 req-3dec1196-46fe-4441-8166-b659378ec39a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Received unexpected event network-vif-plugged-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd for instance with vm_state active and task_state None.
Dec 06 07:07:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:07:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:53.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.948 251996 DEBUG oslo_concurrency.lockutils [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Acquiring lock "d46e42cf-1110-412b-84e1-780a7f05e1c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.949 251996 DEBUG oslo_concurrency.lockutils [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.949 251996 DEBUG oslo_concurrency.lockutils [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Acquiring lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.950 251996 DEBUG oslo_concurrency.lockutils [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.950 251996 DEBUG oslo_concurrency.lockutils [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.952 251996 INFO nova.compute.manager [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Terminating instance
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.955 251996 DEBUG nova.compute.manager [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:07:53 compute-0 kernel: tap2ecb7238-7c (unregistering): left promiscuous mode
Dec 06 07:07:53 compute-0 NetworkManager[48965]: <info>  [1765004873.9908] device (tap2ecb7238-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:07:53 compute-0 ovn_controller[147168]: 2025-12-06T07:07:53Z|00096|binding|INFO|Releasing lport 2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd from this chassis (sb_readonly=0)
Dec 06 07:07:53 compute-0 nova_compute[251992]: 2025-12-06 07:07:53.997 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:53 compute-0 ovn_controller[147168]: 2025-12-06T07:07:53Z|00097|binding|INFO|Setting lport 2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd down in Southbound
Dec 06 07:07:53 compute-0 ovn_controller[147168]: 2025-12-06T07:07:53Z|00098|binding|INFO|Removing iface tap2ecb7238-7c ovn-installed in OVS
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.000 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.019 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.037 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:e1:01 10.100.0.7'], port_security=['fa:16:3e:09:e1:01 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd46e42cf-1110-412b-84e1-780a7f05e1c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-251e5963-d880-4c08-88e8-a0038135133a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09660a2b244f472083042e6223025786', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6b6578cf-dcf2-43aa-9358-c2f6127fa13c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efc8c20-7ca3-40ec-9030-5d164de7fe97, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.038 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd in datapath 251e5963-d880-4c08-88e8-a0038135133a unbound from our chassis
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.039 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 251e5963-d880-4c08-88e8-a0038135133a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.040 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[78229518-eea5-40aa-9d9a-907792ccd3db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.041 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-251e5963-d880-4c08-88e8-a0038135133a namespace which is not needed anymore
Dec 06 07:07:54 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Dec 06 07:07:54 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000002b.scope: Consumed 3.240s CPU time.
Dec 06 07:07:54 compute-0 systemd-machined[212986]: Machine qemu-19-instance-0000002b terminated.
Dec 06 07:07:54 compute-0 neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a[280518]: [NOTICE]   (280537) : haproxy version is 2.8.14-c23fe91
Dec 06 07:07:54 compute-0 neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a[280518]: [NOTICE]   (280537) : path to executable is /usr/sbin/haproxy
Dec 06 07:07:54 compute-0 neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a[280518]: [WARNING]  (280537) : Exiting Master process...
Dec 06 07:07:54 compute-0 neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a[280518]: [WARNING]  (280537) : Exiting Master process...
Dec 06 07:07:54 compute-0 neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a[280518]: [ALERT]    (280537) : Current worker (280543) exited with code 143 (Terminated)
Dec 06 07:07:54 compute-0 neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a[280518]: [WARNING]  (280537) : All workers exited. Exiting... (0)
Dec 06 07:07:54 compute-0 systemd[1]: libpod-28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695.scope: Deactivated successfully.
Dec 06 07:07:54 compute-0 podman[280583]: 2025-12-06 07:07:54.166546847 +0000 UTC m=+0.042524668 container died 28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.175 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.179 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:54 compute-0 ceph-mon[74339]: pgmap v1424: 305 pgs: 305 active+clean; 213 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.2 MiB/s wr, 57 op/s
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.187 251996 INFO nova.virt.libvirt.driver [-] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Instance destroyed successfully.
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.188 251996 DEBUG nova.objects.instance [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lazy-loading 'resources' on Instance uuid d46e42cf-1110-412b-84e1-780a7f05e1c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695-userdata-shm.mount: Deactivated successfully.
Dec 06 07:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea7cc9764c4994e6e6016b066861a1a63f713d9ccfdc5d5707b3a978f806d972-merged.mount: Deactivated successfully.
Dec 06 07:07:54 compute-0 podman[280583]: 2025-12-06 07:07:54.210329267 +0000 UTC m=+0.086307088 container cleanup 28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.210 251996 DEBUG nova.virt.libvirt.vif [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:07:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-2075763697',display_name='tempest-ImagesNegativeTestJSON-server-2075763697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-2075763697',id=43,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:07:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09660a2b244f472083042e6223025786',ramdisk_id='',reservation_id='r-hn8yuzhu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-2015493065',owner_user_name='tempest-ImagesNegativeTestJSON-2015493065-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:07:51Z,user_data=None,user_id='b9343f7eea174bc8ad0a14b1247d7d0f',uuid=d46e42cf-1110-412b-84e1-780a7f05e1c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "address": "fa:16:3e:09:e1:01", "network": {"id": "251e5963-d880-4c08-88e8-a0038135133a", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-107644914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09660a2b244f472083042e6223025786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ecb7238-7c", "ovs_interfaceid": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.210 251996 DEBUG nova.network.os_vif_util [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Converting VIF {"id": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "address": "fa:16:3e:09:e1:01", "network": {"id": "251e5963-d880-4c08-88e8-a0038135133a", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-107644914-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09660a2b244f472083042e6223025786", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ecb7238-7c", "ovs_interfaceid": "2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.211 251996 DEBUG nova.network.os_vif_util [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:e1:01,bridge_name='br-int',has_traffic_filtering=True,id=2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd,network=Network(251e5963-d880-4c08-88e8-a0038135133a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ecb7238-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.211 251996 DEBUG os_vif [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:e1:01,bridge_name='br-int',has_traffic_filtering=True,id=2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd,network=Network(251e5963-d880-4c08-88e8-a0038135133a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ecb7238-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.213 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.214 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ecb7238-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.215 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:54 compute-0 systemd[1]: libpod-conmon-28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695.scope: Deactivated successfully.
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.217 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.220 251996 INFO os_vif [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:e1:01,bridge_name='br-int',has_traffic_filtering=True,id=2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd,network=Network(251e5963-d880-4c08-88e8-a0038135133a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ecb7238-7c')
Dec 06 07:07:54 compute-0 podman[280626]: 2025-12-06 07:07:54.27570162 +0000 UTC m=+0.042064876 container remove 28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.283 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ef492776-a3a3-470f-b99d-5f65e40a8d77]: (4, ('Sat Dec  6 07:07:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a (28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695)\n28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695\nSat Dec  6 07:07:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-251e5963-d880-4c08-88e8-a0038135133a (28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695)\n28b771eb3b659c354bfa8ade1f5393024067df1fe2b240b14420b8e4ea4cc695\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.285 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[95a8cd79-0828-4e09-96de-c0d62716b641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.286 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap251e5963-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:07:54 compute-0 kernel: tap251e5963-d0: left promiscuous mode
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.288 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.303 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.307 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[45278a74-4ce7-4676-9c31-aaf240b43665]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.325 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ba40be25-c254-4d13-ac3f-14f11d1b624f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.327 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[360abf56-c1e7-49e4-8398-414195e2c451]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.341 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[82f818b2-4dfc-493c-b155-7c23625037ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514332, 'reachable_time': 19871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280659, 'error': None, 'target': 'ovnmeta-251e5963-d880-4c08-88e8-a0038135133a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d251e5963\x2dd880\x2d4c08\x2d88e8\x2da0038135133a.mount: Deactivated successfully.
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.345 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-251e5963-d880-4c08-88e8-a0038135133a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:07:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:07:54.345 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[62f3867d-9aba-46d2-aa3f-91b93cf5c484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.583 251996 INFO nova.virt.libvirt.driver [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Deleting instance files /var/lib/nova/instances/d46e42cf-1110-412b-84e1-780a7f05e1c2_del
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.584 251996 INFO nova.virt.libvirt.driver [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Deletion of /var/lib/nova/instances/d46e42cf-1110-412b-84e1-780a7f05e1c2_del complete
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.671 251996 INFO nova.compute.manager [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Took 0.72 seconds to destroy the instance on the hypervisor.
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.672 251996 DEBUG oslo.service.loopingcall [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.672 251996 DEBUG nova.compute.manager [-] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:07:54 compute-0 nova_compute[251992]: 2025-12-06 07:07:54.672 251996 DEBUG nova.network.neutron [-] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:07:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 185 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.0 MiB/s wr, 103 op/s
Dec 06 07:07:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:07:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:55.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:07:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:55.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:55 compute-0 nova_compute[251992]: 2025-12-06 07:07:55.864 251996 DEBUG nova.compute.manager [req-6a1ef347-1d21-4285-8223-e62a4709cba8 req-9dbd1eac-9101-46f5-b2c4-b4420482076a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Received event network-vif-unplugged-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:55 compute-0 nova_compute[251992]: 2025-12-06 07:07:55.864 251996 DEBUG oslo_concurrency.lockutils [req-6a1ef347-1d21-4285-8223-e62a4709cba8 req-9dbd1eac-9101-46f5-b2c4-b4420482076a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:55 compute-0 nova_compute[251992]: 2025-12-06 07:07:55.864 251996 DEBUG oslo_concurrency.lockutils [req-6a1ef347-1d21-4285-8223-e62a4709cba8 req-9dbd1eac-9101-46f5-b2c4-b4420482076a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:55 compute-0 nova_compute[251992]: 2025-12-06 07:07:55.865 251996 DEBUG oslo_concurrency.lockutils [req-6a1ef347-1d21-4285-8223-e62a4709cba8 req-9dbd1eac-9101-46f5-b2c4-b4420482076a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:55 compute-0 nova_compute[251992]: 2025-12-06 07:07:55.865 251996 DEBUG nova.compute.manager [req-6a1ef347-1d21-4285-8223-e62a4709cba8 req-9dbd1eac-9101-46f5-b2c4-b4420482076a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] No waiting events found dispatching network-vif-unplugged-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:07:55 compute-0 nova_compute[251992]: 2025-12-06 07:07:55.865 251996 DEBUG nova.compute.manager [req-6a1ef347-1d21-4285-8223-e62a4709cba8 req-9dbd1eac-9101-46f5-b2c4-b4420482076a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Received event network-vif-unplugged-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:07:55 compute-0 nova_compute[251992]: 2025-12-06 07:07:55.865 251996 DEBUG nova.compute.manager [req-6a1ef347-1d21-4285-8223-e62a4709cba8 req-9dbd1eac-9101-46f5-b2c4-b4420482076a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Received event network-vif-plugged-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:55 compute-0 nova_compute[251992]: 2025-12-06 07:07:55.866 251996 DEBUG oslo_concurrency.lockutils [req-6a1ef347-1d21-4285-8223-e62a4709cba8 req-9dbd1eac-9101-46f5-b2c4-b4420482076a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:55 compute-0 nova_compute[251992]: 2025-12-06 07:07:55.866 251996 DEBUG oslo_concurrency.lockutils [req-6a1ef347-1d21-4285-8223-e62a4709cba8 req-9dbd1eac-9101-46f5-b2c4-b4420482076a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:55 compute-0 nova_compute[251992]: 2025-12-06 07:07:55.866 251996 DEBUG oslo_concurrency.lockutils [req-6a1ef347-1d21-4285-8223-e62a4709cba8 req-9dbd1eac-9101-46f5-b2c4-b4420482076a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:55 compute-0 nova_compute[251992]: 2025-12-06 07:07:55.866 251996 DEBUG nova.compute.manager [req-6a1ef347-1d21-4285-8223-e62a4709cba8 req-9dbd1eac-9101-46f5-b2c4-b4420482076a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] No waiting events found dispatching network-vif-plugged-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:07:55 compute-0 nova_compute[251992]: 2025-12-06 07:07:55.866 251996 WARNING nova.compute.manager [req-6a1ef347-1d21-4285-8223-e62a4709cba8 req-9dbd1eac-9101-46f5-b2c4-b4420482076a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Received unexpected event network-vif-plugged-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd for instance with vm_state active and task_state deleting.
Dec 06 07:07:56 compute-0 nova_compute[251992]: 2025-12-06 07:07:56.214 251996 DEBUG nova.network.neutron [-] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:07:56 compute-0 ceph-mon[74339]: pgmap v1425: 305 pgs: 305 active+clean; 185 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.0 MiB/s wr, 103 op/s
Dec 06 07:07:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/797452098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3883785481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:07:56 compute-0 nova_compute[251992]: 2025-12-06 07:07:56.259 251996 INFO nova.compute.manager [-] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Took 1.59 seconds to deallocate network for instance.
Dec 06 07:07:56 compute-0 nova_compute[251992]: 2025-12-06 07:07:56.368 251996 DEBUG oslo_concurrency.lockutils [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:07:56 compute-0 nova_compute[251992]: 2025-12-06 07:07:56.368 251996 DEBUG oslo_concurrency.lockutils [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:07:56 compute-0 nova_compute[251992]: 2025-12-06 07:07:56.475 251996 DEBUG oslo_concurrency.processutils [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:07:56 compute-0 nova_compute[251992]: 2025-12-06 07:07:56.515 251996 DEBUG nova.compute.manager [req-80488862-a4a4-4d6c-a237-5f16a3ad4975 req-0ed53eac-372a-443e-b249-32e9d2addf59 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Received event network-vif-deleted-2ecb7238-7cf4-43cf-a7c8-61a3c2c193bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:07:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:07:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143644919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:56 compute-0 nova_compute[251992]: 2025-12-06 07:07:56.943 251996 DEBUG oslo_concurrency.processutils [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:07:56 compute-0 nova_compute[251992]: 2025-12-06 07:07:56.949 251996 DEBUG nova.compute.provider_tree [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:07:56 compute-0 nova_compute[251992]: 2025-12-06 07:07:56.972 251996 DEBUG nova.scheduler.client.report [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:07:57 compute-0 nova_compute[251992]: 2025-12-06 07:07:57.000 251996 DEBUG oslo_concurrency.lockutils [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:57 compute-0 nova_compute[251992]: 2025-12-06 07:07:57.048 251996 INFO nova.scheduler.client.report [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Deleted allocations for instance d46e42cf-1110-412b-84e1-780a7f05e1c2
Dec 06 07:07:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 167 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Dec 06 07:07:57 compute-0 nova_compute[251992]: 2025-12-06 07:07:57.121 251996 DEBUG oslo_concurrency.lockutils [None req-1f799659-09df-4eb3-ab49-68f8a8176807 b9343f7eea174bc8ad0a14b1247d7d0f 09660a2b244f472083042e6223025786 - - default default] Lock "d46e42cf-1110-412b-84e1-780a7f05e1c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:07:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4143644919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:07:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:07:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:57.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:57 compute-0 sudo[280685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:57 compute-0 sudo[280685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:57 compute-0 sudo[280685]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:57.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:57 compute-0 sudo[280715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:07:57 compute-0 sudo[280715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:07:57 compute-0 sudo[280715]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:57 compute-0 podman[280709]: 2025-12-06 07:07:57.873241252 +0000 UTC m=+0.085198979 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:07:58 compute-0 nova_compute[251992]: 2025-12-06 07:07:58.036 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:58 compute-0 ceph-mon[74339]: pgmap v1426: 305 pgs: 305 active+clean; 167 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Dec 06 07:07:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 167 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Dec 06 07:07:59 compute-0 nova_compute[251992]: 2025-12-06 07:07:59.217 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:07:59 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:07:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:07:59.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:07:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:07:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:07:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:07:59.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:00 compute-0 ceph-mon[74339]: pgmap v1427: 305 pgs: 305 active+clean; 167 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Dec 06 07:08:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Dec 06 07:08:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:01.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:01.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:02 compute-0 podman[280765]: 2025-12-06 07:08:02.400978281 +0000 UTC m=+0.060473116 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:08:02 compute-0 podman[280766]: 2025-12-06 07:08:02.412918352 +0000 UTC m=+0.069785588 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:08:03 compute-0 nova_compute[251992]: 2025-12-06 07:08:03.038 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 424 KiB/s wr, 131 op/s
Dec 06 07:08:03 compute-0 ceph-mon[74339]: pgmap v1428: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Dec 06 07:08:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:03.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:03.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:03.816 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:03.816 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:03.816 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:04 compute-0 nova_compute[251992]: 2025-12-06 07:08:04.220 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:04 compute-0 ceph-mon[74339]: pgmap v1429: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 424 KiB/s wr, 131 op/s
Dec 06 07:08:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 30 KiB/s wr, 144 op/s
Dec 06 07:08:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:05.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:05.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:05 compute-0 nova_compute[251992]: 2025-12-06 07:08:05.980 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:06 compute-0 ceph-mon[74339]: pgmap v1430: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 30 KiB/s wr, 144 op/s
Dec 06 07:08:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 18 KiB/s wr, 120 op/s
Dec 06 07:08:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:07.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:08:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:07.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:08:08 compute-0 nova_compute[251992]: 2025-12-06 07:08:08.095 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:08 compute-0 ceph-mon[74339]: pgmap v1431: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 18 KiB/s wr, 120 op/s
Dec 06 07:08:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Dec 06 07:08:09 compute-0 nova_compute[251992]: 2025-12-06 07:08:09.187 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765004874.185031, d46e42cf-1110-412b-84e1-780a7f05e1c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:08:09 compute-0 nova_compute[251992]: 2025-12-06 07:08:09.188 251996 INFO nova.compute.manager [-] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] VM Stopped (Lifecycle Event)
Dec 06 07:08:09 compute-0 nova_compute[251992]: 2025-12-06 07:08:09.211 251996 DEBUG nova.compute.manager [None req-8509db13-41d8-41fe-b143-774b8d4acac5 - - - - - -] [instance: d46e42cf-1110-412b-84e1-780a7f05e1c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:08:09 compute-0 nova_compute[251992]: 2025-12-06 07:08:09.222 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:09.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2977333581' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:08:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2977333581' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:08:09 compute-0 sudo[280807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:09 compute-0 sudo[280807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:09 compute-0 sudo[280807]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:09 compute-0 sudo[280832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:08:09 compute-0 sudo[280832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:09 compute-0 sudo[280832]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:09 compute-0 sudo[280857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:09 compute-0 sudo[280857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:09 compute-0 sudo[280857]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:09 compute-0 sudo[280882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 07:08:09 compute-0 sudo[280882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:09.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:09 compute-0 sudo[280882]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:08:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:08:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:10 compute-0 sudo[280927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:10 compute-0 sudo[280927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:10 compute-0 sudo[280927]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:10 compute-0 sudo[280952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:08:10 compute-0 sudo[280952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:10 compute-0 sudo[280952]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:10 compute-0 sudo[280977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:10 compute-0 sudo[280977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:10 compute-0 sudo[280977]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:10 compute-0 sudo[281002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:08:10 compute-0 sudo[281002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:10 compute-0 ceph-mon[74339]: pgmap v1432: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Dec 06 07:08:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:10 compute-0 sudo[281002]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:08:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:08:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Dec 06 07:08:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:08:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:08:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:08:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:08:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:08:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ee5c88f5-814e-45c7-acd4-8b6f3b9d5340 does not exist
Dec 06 07:08:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 46808edc-c8e8-40c8-903c-ee58c5bd3e13 does not exist
Dec 06 07:08:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev df16bf71-9015-4aae-851a-52cab37f8ae0 does not exist
Dec 06 07:08:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:08:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:08:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:08:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:08:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:08:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:08:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:11.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:11 compute-0 sudo[281058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:11 compute-0 sudo[281058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:11 compute-0 sudo[281058]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:11 compute-0 sudo[281083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:08:11 compute-0 sudo[281083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:11 compute-0 sudo[281083]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:11 compute-0 sudo[281108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:11 compute-0 sudo[281108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:11 compute-0 sudo[281108]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:11 compute-0 sudo[281133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:08:11 compute-0 sudo[281133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:08:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:08:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:08:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:08:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:08:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:11.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:12 compute-0 podman[281197]: 2025-12-06 07:08:12.116954309 +0000 UTC m=+0.118344834 container create a8181ff2b8e8b4ed46ac90ace115a7e3a851fbd9a0d28d5ba30f8d77b4586ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 07:08:12 compute-0 podman[281197]: 2025-12-06 07:08:12.028796923 +0000 UTC m=+0.030187468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:08:12 compute-0 systemd[1]: Started libpod-conmon-a8181ff2b8e8b4ed46ac90ace115a7e3a851fbd9a0d28d5ba30f8d77b4586ffe.scope.
Dec 06 07:08:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:08:12 compute-0 podman[281197]: 2025-12-06 07:08:12.285638482 +0000 UTC m=+0.287029027 container init a8181ff2b8e8b4ed46ac90ace115a7e3a851fbd9a0d28d5ba30f8d77b4586ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bohr, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:08:12 compute-0 podman[281197]: 2025-12-06 07:08:12.293282211 +0000 UTC m=+0.294672726 container start a8181ff2b8e8b4ed46ac90ace115a7e3a851fbd9a0d28d5ba30f8d77b4586ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:08:12 compute-0 systemd[1]: libpod-a8181ff2b8e8b4ed46ac90ace115a7e3a851fbd9a0d28d5ba30f8d77b4586ffe.scope: Deactivated successfully.
Dec 06 07:08:12 compute-0 great_bohr[281214]: 167 167
Dec 06 07:08:12 compute-0 conmon[281214]: conmon a8181ff2b8e8b4ed46ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8181ff2b8e8b4ed46ac90ace115a7e3a851fbd9a0d28d5ba30f8d77b4586ffe.scope/container/memory.events
Dec 06 07:08:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:12 compute-0 podman[281197]: 2025-12-06 07:08:12.334519325 +0000 UTC m=+0.335909850 container attach a8181ff2b8e8b4ed46ac90ace115a7e3a851fbd9a0d28d5ba30f8d77b4586ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bohr, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:08:12 compute-0 podman[281197]: 2025-12-06 07:08:12.335493071 +0000 UTC m=+0.336883606 container died a8181ff2b8e8b4ed46ac90ace115a7e3a851fbd9a0d28d5ba30f8d77b4586ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-38dc10542cc866e67fef9d839712d154301d76c50d9d3aa133a771116029bd8f-merged.mount: Deactivated successfully.
Dec 06 07:08:12 compute-0 podman[281197]: 2025-12-06 07:08:12.506041171 +0000 UTC m=+0.507431696 container remove a8181ff2b8e8b4ed46ac90ace115a7e3a851fbd9a0d28d5ba30f8d77b4586ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bohr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:08:12 compute-0 systemd[1]: libpod-conmon-a8181ff2b8e8b4ed46ac90ace115a7e3a851fbd9a0d28d5ba30f8d77b4586ffe.scope: Deactivated successfully.
Dec 06 07:08:12 compute-0 podman[281241]: 2025-12-06 07:08:12.68260647 +0000 UTC m=+0.046500862 container create 4dd37f82a1a33e2666f519b23f12daa898b150c77b98ecdfd67d88ca8a55180a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:08:12 compute-0 systemd[1]: Started libpod-conmon-4dd37f82a1a33e2666f519b23f12daa898b150c77b98ecdfd67d88ca8a55180a.scope.
Dec 06 07:08:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2259cb5f7a1cc07f25549bcd158965deb69d16f3da528dd79a0d0ad70e744224/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2259cb5f7a1cc07f25549bcd158965deb69d16f3da528dd79a0d0ad70e744224/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2259cb5f7a1cc07f25549bcd158965deb69d16f3da528dd79a0d0ad70e744224/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2259cb5f7a1cc07f25549bcd158965deb69d16f3da528dd79a0d0ad70e744224/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2259cb5f7a1cc07f25549bcd158965deb69d16f3da528dd79a0d0ad70e744224/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:12 compute-0 podman[281241]: 2025-12-06 07:08:12.754609585 +0000 UTC m=+0.118503987 container init 4dd37f82a1a33e2666f519b23f12daa898b150c77b98ecdfd67d88ca8a55180a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:08:12 compute-0 podman[281241]: 2025-12-06 07:08:12.666603634 +0000 UTC m=+0.030498036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:08:12 compute-0 podman[281241]: 2025-12-06 07:08:12.762328616 +0000 UTC m=+0.126223008 container start 4dd37f82a1a33e2666f519b23f12daa898b150c77b98ecdfd67d88ca8a55180a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:08:12 compute-0 ceph-mon[74339]: pgmap v1433: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Dec 06 07:08:12 compute-0 podman[281241]: 2025-12-06 07:08:12.76705355 +0000 UTC m=+0.130947972 container attach 4dd37f82a1a33e2666f519b23f12daa898b150c77b98ecdfd67d88ca8a55180a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:08:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:08:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:08:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:08:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:08:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:08:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:08:13 compute-0 nova_compute[251992]: 2025-12-06 07:08:13.097 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 66 op/s
Dec 06 07:08:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:13.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:13 compute-0 pensive_poincare[281258]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:08:13 compute-0 pensive_poincare[281258]: --> relative data size: 1.0
Dec 06 07:08:13 compute-0 pensive_poincare[281258]: --> All data devices are unavailable
Dec 06 07:08:13 compute-0 systemd[1]: libpod-4dd37f82a1a33e2666f519b23f12daa898b150c77b98ecdfd67d88ca8a55180a.scope: Deactivated successfully.
Dec 06 07:08:13 compute-0 podman[281241]: 2025-12-06 07:08:13.616353988 +0000 UTC m=+0.980248400 container died 4dd37f82a1a33e2666f519b23f12daa898b150c77b98ecdfd67d88ca8a55180a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Dec 06 07:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2259cb5f7a1cc07f25549bcd158965deb69d16f3da528dd79a0d0ad70e744224-merged.mount: Deactivated successfully.
Dec 06 07:08:13 compute-0 podman[281241]: 2025-12-06 07:08:13.718351154 +0000 UTC m=+1.082245556 container remove 4dd37f82a1a33e2666f519b23f12daa898b150c77b98ecdfd67d88ca8a55180a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_poincare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 07:08:13 compute-0 systemd[1]: libpod-conmon-4dd37f82a1a33e2666f519b23f12daa898b150c77b98ecdfd67d88ca8a55180a.scope: Deactivated successfully.
Dec 06 07:08:13 compute-0 sudo[281133]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:08:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 18K writes, 68K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 6329 syncs, 2.94 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7746 writes, 29K keys, 7746 commit groups, 1.0 writes per commit group, ingest: 29.70 MB, 0.05 MB/s
                                           Interval WAL: 7746 writes, 3240 syncs, 2.39 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 07:08:13 compute-0 sudo[281287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:13 compute-0 sudo[281287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:13.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:13 compute-0 sudo[281287]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:13 compute-0 sudo[281312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:08:13 compute-0 sudo[281312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:13 compute-0 sudo[281312]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:13 compute-0 sudo[281337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:13 compute-0 sudo[281337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:13 compute-0 sudo[281337]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:13 compute-0 sudo[281362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:08:14 compute-0 sudo[281362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:14 compute-0 nova_compute[251992]: 2025-12-06 07:08:14.225 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:14 compute-0 podman[281425]: 2025-12-06 07:08:14.306486742 +0000 UTC m=+0.021948413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:08:14 compute-0 podman[281425]: 2025-12-06 07:08:14.439320492 +0000 UTC m=+0.154782143 container create d50f0d78fca06f2909db52180fe9ff0e1941c20130da57c6c09a5fe2caa14efe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 07:08:14 compute-0 systemd[1]: Started libpod-conmon-d50f0d78fca06f2909db52180fe9ff0e1941c20130da57c6c09a5fe2caa14efe.scope.
Dec 06 07:08:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:08:14 compute-0 podman[281425]: 2025-12-06 07:08:14.531212554 +0000 UTC m=+0.246674205 container init d50f0d78fca06f2909db52180fe9ff0e1941c20130da57c6c09a5fe2caa14efe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 07:08:14 compute-0 podman[281425]: 2025-12-06 07:08:14.537751745 +0000 UTC m=+0.253213396 container start d50f0d78fca06f2909db52180fe9ff0e1941c20130da57c6c09a5fe2caa14efe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:08:14 compute-0 kind_goodall[281442]: 167 167
Dec 06 07:08:14 compute-0 systemd[1]: libpod-d50f0d78fca06f2909db52180fe9ff0e1941c20130da57c6c09a5fe2caa14efe.scope: Deactivated successfully.
Dec 06 07:08:14 compute-0 podman[281425]: 2025-12-06 07:08:14.605335585 +0000 UTC m=+0.320797266 container attach d50f0d78fca06f2909db52180fe9ff0e1941c20130da57c6c09a5fe2caa14efe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 07:08:14 compute-0 podman[281425]: 2025-12-06 07:08:14.605799797 +0000 UTC m=+0.321261448 container died d50f0d78fca06f2909db52180fe9ff0e1941c20130da57c6c09a5fe2caa14efe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ea387f51e349323e9ecc1f9945379b44511a6bf4149da4ae0bdffc2e318acc8-merged.mount: Deactivated successfully.
Dec 06 07:08:14 compute-0 podman[281425]: 2025-12-06 07:08:14.688035368 +0000 UTC m=+0.403497019 container remove d50f0d78fca06f2909db52180fe9ff0e1941c20130da57c6c09a5fe2caa14efe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:08:14 compute-0 systemd[1]: libpod-conmon-d50f0d78fca06f2909db52180fe9ff0e1941c20130da57c6c09a5fe2caa14efe.scope: Deactivated successfully.
Dec 06 07:08:14 compute-0 ceph-mon[74339]: pgmap v1434: 305 pgs: 305 active+clean; 167 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 66 op/s
Dec 06 07:08:14 compute-0 podman[281465]: 2025-12-06 07:08:14.88123891 +0000 UTC m=+0.061747569 container create 84f417f778d2d449d95c100da20cc691a7d01f961d00d8f6d64e344c3d374c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:08:14 compute-0 systemd[1]: Started libpod-conmon-84f417f778d2d449d95c100da20cc691a7d01f961d00d8f6d64e344c3d374c15.scope.
Dec 06 07:08:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b421bff680cb6be5b7e21e910546c0fa0e5529b431e30b5af29d945271703650/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:14 compute-0 podman[281465]: 2025-12-06 07:08:14.843170379 +0000 UTC m=+0.023679068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b421bff680cb6be5b7e21e910546c0fa0e5529b431e30b5af29d945271703650/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b421bff680cb6be5b7e21e910546c0fa0e5529b431e30b5af29d945271703650/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b421bff680cb6be5b7e21e910546c0fa0e5529b431e30b5af29d945271703650/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:15 compute-0 podman[281465]: 2025-12-06 07:08:15.11430338 +0000 UTC m=+0.294812089 container init 84f417f778d2d449d95c100da20cc691a7d01f961d00d8f6d64e344c3d374c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ellis, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 07:08:15 compute-0 podman[281465]: 2025-12-06 07:08:15.121771435 +0000 UTC m=+0.302280094 container start 84f417f778d2d449d95c100da20cc691a7d01f961d00d8f6d64e344c3d374c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ellis, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:08:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 177 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 957 KiB/s wr, 83 op/s
Dec 06 07:08:15 compute-0 podman[281465]: 2025-12-06 07:08:15.173012169 +0000 UTC m=+0.353520858 container attach 84f417f778d2d449d95c100da20cc691a7d01f961d00d8f6d64e344c3d374c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ellis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 07:08:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:08:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:15.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:08:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:15.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:15 compute-0 interesting_ellis[281481]: {
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:     "0": [
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:         {
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             "devices": [
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "/dev/loop3"
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             ],
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             "lv_name": "ceph_lv0",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             "lv_size": "7511998464",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             "name": "ceph_lv0",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             "tags": {
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "ceph.cluster_name": "ceph",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "ceph.crush_device_class": "",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "ceph.encrypted": "0",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "ceph.osd_id": "0",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "ceph.type": "block",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:                 "ceph.vdo": "0"
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             },
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             "type": "block",
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:             "vg_name": "ceph_vg0"
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:         }
Dec 06 07:08:15 compute-0 interesting_ellis[281481]:     ]
Dec 06 07:08:15 compute-0 interesting_ellis[281481]: }
Dec 06 07:08:15 compute-0 systemd[1]: libpod-84f417f778d2d449d95c100da20cc691a7d01f961d00d8f6d64e344c3d374c15.scope: Deactivated successfully.
Dec 06 07:08:15 compute-0 podman[281490]: 2025-12-06 07:08:15.962033858 +0000 UTC m=+0.025545666 container died 84f417f778d2d449d95c100da20cc691a7d01f961d00d8f6d64e344c3d374c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ellis, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b421bff680cb6be5b7e21e910546c0fa0e5529b431e30b5af29d945271703650-merged.mount: Deactivated successfully.
Dec 06 07:08:16 compute-0 podman[281490]: 2025-12-06 07:08:16.131674047 +0000 UTC m=+0.195185835 container remove 84f417f778d2d449d95c100da20cc691a7d01f961d00d8f6d64e344c3d374c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:08:16 compute-0 systemd[1]: libpod-conmon-84f417f778d2d449d95c100da20cc691a7d01f961d00d8f6d64e344c3d374c15.scope: Deactivated successfully.
Dec 06 07:08:16 compute-0 sudo[281362]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:16 compute-0 sudo[281505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:16 compute-0 sudo[281505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:16 compute-0 sudo[281505]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:16 compute-0 sudo[281530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:08:16 compute-0 sudo[281530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:16 compute-0 sudo[281530]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:16 compute-0 sudo[281555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:16 compute-0 sudo[281555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:16 compute-0 sudo[281555]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:16 compute-0 sudo[281580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:08:16 compute-0 sudo[281580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Dec 06 07:08:16 compute-0 podman[281647]: 2025-12-06 07:08:16.748956143 +0000 UTC m=+0.021556553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:08:16 compute-0 podman[281647]: 2025-12-06 07:08:16.979935577 +0000 UTC m=+0.252535997 container create 5c8ddbd3ca0d2b6d7589612969d6b553e493092d18c8e10011bc2baea6a5ab4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:08:17 compute-0 systemd[1]: Started libpod-conmon-5c8ddbd3ca0d2b6d7589612969d6b553e493092d18c8e10011bc2baea6a5ab4d.scope.
Dec 06 07:08:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:08:17 compute-0 podman[281647]: 2025-12-06 07:08:17.12130421 +0000 UTC m=+0.393904610 container init 5c8ddbd3ca0d2b6d7589612969d6b553e493092d18c8e10011bc2baea6a5ab4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dubinsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:08:17 compute-0 podman[281647]: 2025-12-06 07:08:17.132906592 +0000 UTC m=+0.405506972 container start 5c8ddbd3ca0d2b6d7589612969d6b553e493092d18c8e10011bc2baea6a5ab4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Dec 06 07:08:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 192 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Dec 06 07:08:17 compute-0 epic_dubinsky[281664]: 167 167
Dec 06 07:08:17 compute-0 systemd[1]: libpod-5c8ddbd3ca0d2b6d7589612969d6b553e493092d18c8e10011bc2baea6a5ab4d.scope: Deactivated successfully.
Dec 06 07:08:17 compute-0 conmon[281664]: conmon 5c8ddbd3ca0d2b6d7589 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c8ddbd3ca0d2b6d7589612969d6b553e493092d18c8e10011bc2baea6a5ab4d.scope/container/memory.events
Dec 06 07:08:17 compute-0 podman[281647]: 2025-12-06 07:08:17.159158265 +0000 UTC m=+0.431758655 container attach 5c8ddbd3ca0d2b6d7589612969d6b553e493092d18c8e10011bc2baea6a5ab4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:08:17 compute-0 podman[281647]: 2025-12-06 07:08:17.159586297 +0000 UTC m=+0.432186677 container died 5c8ddbd3ca0d2b6d7589612969d6b553e493092d18c8e10011bc2baea6a5ab4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 07:08:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec776f7d60fad4bbcc4816dad3d6f365048468b992da6950f8ed81e33c0934b3-merged.mount: Deactivated successfully.
Dec 06 07:08:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:17.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:17 compute-0 podman[281647]: 2025-12-06 07:08:17.522680772 +0000 UTC m=+0.795281152 container remove 5c8ddbd3ca0d2b6d7589612969d6b553e493092d18c8e10011bc2baea6a5ab4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_dubinsky, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:08:17 compute-0 systemd[1]: libpod-conmon-5c8ddbd3ca0d2b6d7589612969d6b553e493092d18c8e10011bc2baea6a5ab4d.scope: Deactivated successfully.
Dec 06 07:08:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Dec 06 07:08:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Dec 06 07:08:17 compute-0 podman[281690]: 2025-12-06 07:08:17.696482599 +0000 UTC m=+0.064450990 container create 0d9c049b30d0cda4778b901113ca5864a8cc371470a8261b2051a30c297e1abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 07:08:17 compute-0 systemd[1]: Started libpod-conmon-0d9c049b30d0cda4778b901113ca5864a8cc371470a8261b2051a30c297e1abb.scope.
Dec 06 07:08:17 compute-0 podman[281690]: 2025-12-06 07:08:17.653454818 +0000 UTC m=+0.021423229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:08:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efc87f39e55df47ef60104e00465f152c5233b8cfb60acd6f8b5f8aea657c62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efc87f39e55df47ef60104e00465f152c5233b8cfb60acd6f8b5f8aea657c62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efc87f39e55df47ef60104e00465f152c5233b8cfb60acd6f8b5f8aea657c62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efc87f39e55df47ef60104e00465f152c5233b8cfb60acd6f8b5f8aea657c62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:17.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:17 compute-0 podman[281690]: 2025-12-06 07:08:17.867923963 +0000 UTC m=+0.235892364 container init 0d9c049b30d0cda4778b901113ca5864a8cc371470a8261b2051a30c297e1abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:08:17 compute-0 podman[281690]: 2025-12-06 07:08:17.882586236 +0000 UTC m=+0.250554657 container start 0d9c049b30d0cda4778b901113ca5864a8cc371470a8261b2051a30c297e1abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 07:08:17 compute-0 podman[281690]: 2025-12-06 07:08:17.887856533 +0000 UTC m=+0.255824944 container attach 0d9c049b30d0cda4778b901113ca5864a8cc371470a8261b2051a30c297e1abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Dec 06 07:08:17 compute-0 sudo[281710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:17 compute-0 sudo[281710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:17 compute-0 sudo[281710]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:17 compute-0 sudo[281737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:18 compute-0 sudo[281737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:18 compute-0 sudo[281737]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:18 compute-0 nova_compute[251992]: 2025-12-06 07:08:18.098 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:08:18
Dec 06 07:08:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:08:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:08:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.log', '.mgr', 'backups', 'vms', 'default.rgw.meta', '.rgw.root']
Dec 06 07:08:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:08:18 compute-0 ceph-mon[74339]: pgmap v1435: 305 pgs: 305 active+clean; 177 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 957 KiB/s wr, 83 op/s
Dec 06 07:08:18 compute-0 affectionate_dirac[281707]: {
Dec 06 07:08:18 compute-0 affectionate_dirac[281707]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:08:18 compute-0 affectionate_dirac[281707]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:08:18 compute-0 affectionate_dirac[281707]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:08:18 compute-0 affectionate_dirac[281707]:         "osd_id": 0,
Dec 06 07:08:18 compute-0 affectionate_dirac[281707]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:08:18 compute-0 affectionate_dirac[281707]:         "type": "bluestore"
Dec 06 07:08:18 compute-0 affectionate_dirac[281707]:     }
Dec 06 07:08:18 compute-0 affectionate_dirac[281707]: }
Dec 06 07:08:18 compute-0 podman[281690]: 2025-12-06 07:08:18.78857043 +0000 UTC m=+1.156538821 container died 0d9c049b30d0cda4778b901113ca5864a8cc371470a8261b2051a30c297e1abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:08:18 compute-0 systemd[1]: libpod-0d9c049b30d0cda4778b901113ca5864a8cc371470a8261b2051a30c297e1abb.scope: Deactivated successfully.
Dec 06 07:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-2efc87f39e55df47ef60104e00465f152c5233b8cfb60acd6f8b5f8aea657c62-merged.mount: Deactivated successfully.
Dec 06 07:08:18 compute-0 podman[281690]: 2025-12-06 07:08:18.842924166 +0000 UTC m=+1.210892547 container remove 0d9c049b30d0cda4778b901113ca5864a8cc371470a8261b2051a30c297e1abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:08:18 compute-0 systemd[1]: libpod-conmon-0d9c049b30d0cda4778b901113ca5864a8cc371470a8261b2051a30c297e1abb.scope: Deactivated successfully.
Dec 06 07:08:18 compute-0 sudo[281580]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:08:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:08:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:18 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 66041a9c-f310-40a2-b711-738a020cfa0b does not exist
Dec 06 07:08:18 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d640a14e-3397-4d4d-8167-44ccb7fca175 does not exist
Dec 06 07:08:18 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4870e6d5-ab0d-48e7-8220-ca98b853dfd7 does not exist
Dec 06 07:08:18 compute-0 sudo[281790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:18 compute-0 sudo[281790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:18 compute-0 sudo[281790]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:19 compute-0 sudo[281815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:08:19 compute-0 sudo[281815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:19 compute-0 sudo[281815]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 192 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Dec 06 07:08:19 compute-0 nova_compute[251992]: 2025-12-06 07:08:19.262 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:19.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:19 compute-0 ceph-mon[74339]: pgmap v1436: 305 pgs: 305 active+clean; 192 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Dec 06 07:08:19 compute-0 ceph-mon[74339]: osdmap e182: 3 total, 3 up, 3 in
Dec 06 07:08:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:08:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:19.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:21 compute-0 ceph-mon[74339]: pgmap v1438: 305 pgs: 305 active+clean; 192 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Dec 06 07:08:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 459 KiB/s rd, 2.6 MiB/s wr, 95 op/s
Dec 06 07:08:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:21.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:08:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:21.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:08:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 07:08:23 compute-0 ceph-mon[74339]: pgmap v1439: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 459 KiB/s rd, 2.6 MiB/s wr, 95 op/s
Dec 06 07:08:23 compute-0 nova_compute[251992]: 2025-12-06 07:08:23.100 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 459 KiB/s rd, 2.6 MiB/s wr, 95 op/s
Dec 06 07:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:08:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:23.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:08:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:23.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:24 compute-0 ceph-mon[74339]: pgmap v1440: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 459 KiB/s rd, 2.6 MiB/s wr, 95 op/s
Dec 06 07:08:24 compute-0 nova_compute[251992]: 2025-12-06 07:08:24.264 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.107 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.107 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.128 251996 DEBUG nova.compute.manager [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 1.4 MiB/s wr, 71 op/s
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.223 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.223 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.231 251996 DEBUG nova.virt.hardware [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.231 251996 INFO nova.compute.claims [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.361 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:25.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0043362791969104785 of space, bias 1.0, pg target 1.3008837590731435 quantized to 32 (current 32)
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001903869753861902 of space, bias 1.0, pg target 0.5692570564047087 quantized to 32 (current 32)
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:08:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:08:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:08:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191893823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.845 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:25.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.852 251996 DEBUG nova.compute.provider_tree [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.881 251996 DEBUG nova.scheduler.client.report [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.909 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.910 251996 DEBUG nova.compute.manager [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.970 251996 DEBUG nova.compute.manager [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:08:25 compute-0 nova_compute[251992]: 2025-12-06 07:08:25.970 251996 DEBUG nova.network.neutron [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.003 251996 INFO nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.020 251996 DEBUG nova.compute.manager [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.123 251996 DEBUG nova.compute.manager [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.124 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.125 251996 INFO nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Creating image(s)
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.147 251996 DEBUG nova.storage.rbd_utils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.174 251996 DEBUG nova.storage.rbd_utils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.199 251996 DEBUG nova.storage.rbd_utils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.202 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.224 251996 DEBUG nova.policy [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '33518fed43cc4fdfbdce993ccb4cc360', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4c41abd44bbf46f39df642d2a2cd19eb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.264 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.264 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.265 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.265 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.287 251996 DEBUG nova.storage.rbd_utils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:26 compute-0 nova_compute[251992]: 2025-12-06 07:08:26.291 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:26 compute-0 ceph-mon[74339]: pgmap v1441: 305 pgs: 305 active+clean; 200 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 1.4 MiB/s wr, 71 op/s
Dec 06 07:08:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/191893823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 215 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 606 KiB/s wr, 33 op/s
Dec 06 07:08:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:27.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:27 compute-0 nova_compute[251992]: 2025-12-06 07:08:27.551 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:27 compute-0 nova_compute[251992]: 2025-12-06 07:08:27.620 251996 DEBUG nova.storage.rbd_utils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] resizing rbd image 76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:08:27 compute-0 nova_compute[251992]: 2025-12-06 07:08:27.821 251996 DEBUG nova.objects.instance [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lazy-loading 'migration_context' on Instance uuid 76a87dcb-b252-427a-8f49-7a8ab838bb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:08:27 compute-0 nova_compute[251992]: 2025-12-06 07:08:27.836 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:08:27 compute-0 nova_compute[251992]: 2025-12-06 07:08:27.836 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Ensure instance console log exists: /var/lib/nova/instances/76a87dcb-b252-427a-8f49-7a8ab838bb3f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:08:27 compute-0 nova_compute[251992]: 2025-12-06 07:08:27.836 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:27 compute-0 nova_compute[251992]: 2025-12-06 07:08:27.837 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:27 compute-0 nova_compute[251992]: 2025-12-06 07:08:27.837 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:27 compute-0 nova_compute[251992]: 2025-12-06 07:08:27.838 251996 DEBUG nova.network.neutron [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Successfully created port: c3d7b61d-0558-4443-ae89-f36f1815c38d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:08:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:08:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:27.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:08:28 compute-0 nova_compute[251992]: 2025-12-06 07:08:28.101 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:28 compute-0 podman[282032]: 2025-12-06 07:08:28.423870419 +0000 UTC m=+0.079149043 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 07:08:28 compute-0 ceph-mon[74339]: pgmap v1442: 305 pgs: 305 active+clean; 215 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 606 KiB/s wr, 33 op/s
Dec 06 07:08:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 215 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 125 KiB/s rd, 527 KiB/s wr, 29 op/s
Dec 06 07:08:29 compute-0 nova_compute[251992]: 2025-12-06 07:08:29.210 251996 DEBUG nova.network.neutron [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Successfully updated port: c3d7b61d-0558-4443-ae89-f36f1815c38d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:08:29 compute-0 nova_compute[251992]: 2025-12-06 07:08:29.223 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:08:29 compute-0 nova_compute[251992]: 2025-12-06 07:08:29.223 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquired lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:08:29 compute-0 nova_compute[251992]: 2025-12-06 07:08:29.223 251996 DEBUG nova.network.neutron [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:08:29 compute-0 nova_compute[251992]: 2025-12-06 07:08:29.266 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:29 compute-0 nova_compute[251992]: 2025-12-06 07:08:29.334 251996 DEBUG nova.compute.manager [req-a6900e03-1b30-4e76-b497-6806fbde13fd req-894fc376-0450-4c1d-a7ad-c557c459d76f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Received event network-changed-c3d7b61d-0558-4443-ae89-f36f1815c38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:08:29 compute-0 nova_compute[251992]: 2025-12-06 07:08:29.334 251996 DEBUG nova.compute.manager [req-a6900e03-1b30-4e76-b497-6806fbde13fd req-894fc376-0450-4c1d-a7ad-c557c459d76f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Refreshing instance network info cache due to event network-changed-c3d7b61d-0558-4443-ae89-f36f1815c38d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:08:29 compute-0 nova_compute[251992]: 2025-12-06 07:08:29.334 251996 DEBUG oslo_concurrency.lockutils [req-a6900e03-1b30-4e76-b497-6806fbde13fd req-894fc376-0450-4c1d-a7ad-c557c459d76f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:08:29 compute-0 nova_compute[251992]: 2025-12-06 07:08:29.428 251996 DEBUG nova.network.neutron [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:08:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000052s ======
Dec 06 07:08:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:29.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Dec 06 07:08:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:08:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:29.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.351 251996 DEBUG nova.network.neutron [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Updating instance_info_cache with network_info: [{"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.373 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Releasing lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.374 251996 DEBUG nova.compute.manager [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Instance network_info: |[{"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.374 251996 DEBUG oslo_concurrency.lockutils [req-a6900e03-1b30-4e76-b497-6806fbde13fd req-894fc376-0450-4c1d-a7ad-c557c459d76f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.374 251996 DEBUG nova.network.neutron [req-a6900e03-1b30-4e76-b497-6806fbde13fd req-894fc376-0450-4c1d-a7ad-c557c459d76f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Refreshing network info cache for port c3d7b61d-0558-4443-ae89-f36f1815c38d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.377 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Start _get_guest_xml network_info=[{"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.382 251996 WARNING nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.388 251996 DEBUG nova.virt.libvirt.host [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.388 251996 DEBUG nova.virt.libvirt.host [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.392 251996 DEBUG nova.virt.libvirt.host [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.393 251996 DEBUG nova.virt.libvirt.host [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.394 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.395 251996 DEBUG nova.virt.hardware [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.395 251996 DEBUG nova.virt.hardware [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.395 251996 DEBUG nova.virt.hardware [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.396 251996 DEBUG nova.virt.hardware [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.396 251996 DEBUG nova.virt.hardware [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.396 251996 DEBUG nova.virt.hardware [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.397 251996 DEBUG nova.virt.hardware [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.397 251996 DEBUG nova.virt.hardware [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.397 251996 DEBUG nova.virt.hardware [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.398 251996 DEBUG nova.virt.hardware [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.398 251996 DEBUG nova.virt.hardware [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.402 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:08:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1502816875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.864 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:30 compute-0 ceph-mon[74339]: pgmap v1443: 305 pgs: 305 active+clean; 215 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 125 KiB/s rd, 527 KiB/s wr, 29 op/s
Dec 06 07:08:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2037535136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.894 251996 DEBUG nova.storage.rbd_utils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:30 compute-0 nova_compute[251992]: 2025-12-06 07:08:30.898 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 254 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 1.9 MiB/s wr, 54 op/s
Dec 06 07:08:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:08:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3750598982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.354 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.356 251996 DEBUG nova.virt.libvirt.vif [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:08:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-2093212749',display_name='tempest-VolumesAdminNegativeTest-server-2093212749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-2093212749',id=45,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHXERM0ASuk1NXvDv/w8837PhM546n3eLOfvHBLzodpQkSdglUi6jeAl375cWLUlKwvyVZolXWZkry65OPgT6v4kiGuE+BNk0US7dkQtbDM5ULJiYxRwF9bDh8uP72MaMw==',key_name='tempest-keypair-95067880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c41abd44bbf46f39df642d2a2cd19eb',ramdisk_id='',reservation_id='r-7lxgqffq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-201901816',owner_user_name='tempest-VolumesAdminNegativeTest-201901816-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:08:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='33518fed43cc4fdfbdce993ccb4cc360',uuid=76a87dcb-b252-427a-8f49-7a8ab838bb3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.356 251996 DEBUG nova.network.os_vif_util [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Converting VIF {"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.358 251996 DEBUG nova.network.os_vif_util [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:57:77,bridge_name='br-int',has_traffic_filtering=True,id=c3d7b61d-0558-4443-ae89-f36f1815c38d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3d7b61d-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.359 251996 DEBUG nova.objects.instance [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lazy-loading 'pci_devices' on Instance uuid 76a87dcb-b252-427a-8f49-7a8ab838bb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.382 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:08:31 compute-0 nova_compute[251992]:   <uuid>76a87dcb-b252-427a-8f49-7a8ab838bb3f</uuid>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   <name>instance-0000002d</name>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <nova:name>tempest-VolumesAdminNegativeTest-server-2093212749</nova:name>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:08:30</nova:creationTime>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <nova:user uuid="33518fed43cc4fdfbdce993ccb4cc360">tempest-VolumesAdminNegativeTest-201901816-project-member</nova:user>
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <nova:project uuid="4c41abd44bbf46f39df642d2a2cd19eb">tempest-VolumesAdminNegativeTest-201901816</nova:project>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <nova:port uuid="c3d7b61d-0558-4443-ae89-f36f1815c38d">
Dec 06 07:08:31 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <system>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <entry name="serial">76a87dcb-b252-427a-8f49-7a8ab838bb3f</entry>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <entry name="uuid">76a87dcb-b252-427a-8f49-7a8ab838bb3f</entry>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     </system>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   <os>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   </os>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   <features>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   </features>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk">
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       </source>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk.config">
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       </source>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:08:31 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:3d:57:77"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <target dev="tapc3d7b61d-05"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/76a87dcb-b252-427a-8f49-7a8ab838bb3f/console.log" append="off"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <video>
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     </video>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:08:31 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:08:31 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:08:31 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:08:31 compute-0 nova_compute[251992]: </domain>
Dec 06 07:08:31 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.383 251996 DEBUG nova.compute.manager [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Preparing to wait for external event network-vif-plugged-c3d7b61d-0558-4443-ae89-f36f1815c38d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.383 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.384 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.384 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.385 251996 DEBUG nova.virt.libvirt.vif [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:08:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-2093212749',display_name='tempest-VolumesAdminNegativeTest-server-2093212749',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-2093212749',id=45,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHXERM0ASuk1NXvDv/w8837PhM546n3eLOfvHBLzodpQkSdglUi6jeAl375cWLUlKwvyVZolXWZkry65OPgT6v4kiGuE+BNk0US7dkQtbDM5ULJiYxRwF9bDh8uP72MaMw==',key_name='tempest-keypair-95067880',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4c41abd44bbf46f39df642d2a2cd19eb',ramdisk_id='',reservation_id='r-7lxgqffq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-201901816',owner_user_name='tempest-VolumesAdminNegativeTest-201901816-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:08:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='33518fed43cc4fdfbdce993ccb4cc360',uuid=76a87dcb-b252-427a-8f49-7a8ab838bb3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.385 251996 DEBUG nova.network.os_vif_util [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Converting VIF {"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.386 251996 DEBUG nova.network.os_vif_util [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:57:77,bridge_name='br-int',has_traffic_filtering=True,id=c3d7b61d-0558-4443-ae89-f36f1815c38d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3d7b61d-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.386 251996 DEBUG os_vif [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:57:77,bridge_name='br-int',has_traffic_filtering=True,id=c3d7b61d-0558-4443-ae89-f36f1815c38d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3d7b61d-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.386 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.387 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.387 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.391 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.391 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3d7b61d-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.391 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc3d7b61d-05, col_values=(('external_ids', {'iface-id': 'c3d7b61d-0558-4443-ae89-f36f1815c38d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:57:77', 'vm-uuid': '76a87dcb-b252-427a-8f49-7a8ab838bb3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.393 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:31 compute-0 NetworkManager[48965]: <info>  [1765004911.3942] manager: (tapc3d7b61d-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.396 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.399 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.400 251996 INFO os_vif [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:57:77,bridge_name='br-int',has_traffic_filtering=True,id=c3d7b61d-0558-4443-ae89-f36f1815c38d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3d7b61d-05')
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.465 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.466 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.466 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] No VIF found with MAC fa:16:3e:3d:57:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.466 251996 INFO nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Using config drive
Dec 06 07:08:31 compute-0 nova_compute[251992]: 2025-12-06 07:08:31.489 251996 DEBUG nova.storage.rbd_utils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:31.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:31.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1502816875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3750598982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:32 compute-0 nova_compute[251992]: 2025-12-06 07:08:32.361 251996 DEBUG nova.network.neutron [req-a6900e03-1b30-4e76-b497-6806fbde13fd req-894fc376-0450-4c1d-a7ad-c557c459d76f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Updated VIF entry in instance network info cache for port c3d7b61d-0558-4443-ae89-f36f1815c38d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:08:32 compute-0 nova_compute[251992]: 2025-12-06 07:08:32.362 251996 DEBUG nova.network.neutron [req-a6900e03-1b30-4e76-b497-6806fbde13fd req-894fc376-0450-4c1d-a7ad-c557c459d76f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Updating instance_info_cache with network_info: [{"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:08:32 compute-0 nova_compute[251992]: 2025-12-06 07:08:32.390 251996 DEBUG oslo_concurrency.lockutils [req-a6900e03-1b30-4e76-b497-6806fbde13fd req-894fc376-0450-4c1d-a7ad-c557c459d76f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:08:32 compute-0 nova_compute[251992]: 2025-12-06 07:08:32.577 251996 INFO nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Creating config drive at /var/lib/nova/instances/76a87dcb-b252-427a-8f49-7a8ab838bb3f/disk.config
Dec 06 07:08:32 compute-0 nova_compute[251992]: 2025-12-06 07:08:32.583 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/76a87dcb-b252-427a-8f49-7a8ab838bb3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7o5phz71 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:32 compute-0 nova_compute[251992]: 2025-12-06 07:08:32.709 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/76a87dcb-b252-427a-8f49-7a8ab838bb3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7o5phz71" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:32 compute-0 nova_compute[251992]: 2025-12-06 07:08:32.737 251996 DEBUG nova.storage.rbd_utils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] rbd image 76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:32 compute-0 nova_compute[251992]: 2025-12-06 07:08:32.741 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/76a87dcb-b252-427a-8f49-7a8ab838bb3f/disk.config 76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:32 compute-0 nova_compute[251992]: 2025-12-06 07:08:32.893 251996 DEBUG oslo_concurrency.processutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/76a87dcb-b252-427a-8f49-7a8ab838bb3f/disk.config 76a87dcb-b252-427a-8f49-7a8ab838bb3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:32 compute-0 nova_compute[251992]: 2025-12-06 07:08:32.893 251996 INFO nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Deleting local config drive /var/lib/nova/instances/76a87dcb-b252-427a-8f49-7a8ab838bb3f/disk.config because it was imported into RBD.
Dec 06 07:08:32 compute-0 kernel: tapc3d7b61d-05: entered promiscuous mode
Dec 06 07:08:32 compute-0 NetworkManager[48965]: <info>  [1765004912.9374] manager: (tapc3d7b61d-05): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Dec 06 07:08:32 compute-0 ovn_controller[147168]: 2025-12-06T07:08:32Z|00099|binding|INFO|Claiming lport c3d7b61d-0558-4443-ae89-f36f1815c38d for this chassis.
Dec 06 07:08:32 compute-0 ovn_controller[147168]: 2025-12-06T07:08:32Z|00100|binding|INFO|c3d7b61d-0558-4443-ae89-f36f1815c38d: Claiming fa:16:3e:3d:57:77 10.100.0.7
Dec 06 07:08:32 compute-0 nova_compute[251992]: 2025-12-06 07:08:32.938 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:32 compute-0 nova_compute[251992]: 2025-12-06 07:08:32.941 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:32.956 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:57:77 10.100.0.7'], port_security=['fa:16:3e:3d:57:77 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '76a87dcb-b252-427a-8f49-7a8ab838bb3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c41abd44bbf46f39df642d2a2cd19eb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4c2aa6d-eed5-4630-9aab-9c1afaa4bab0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=934cdf1d-ea09-45f0-b1f2-a1da72094e2e, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=c3d7b61d-0558-4443-ae89-f36f1815c38d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:08:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:32.957 158118 INFO neutron.agent.ovn.metadata.agent [-] Port c3d7b61d-0558-4443-ae89-f36f1815c38d in datapath d62d33de-d7cc-4103-8a83-88ba86c97b8f bound to our chassis
Dec 06 07:08:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:32.959 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d62d33de-d7cc-4103-8a83-88ba86c97b8f
Dec 06 07:08:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:32.972 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4c95fda8-be61-43ba-bbf3-822c1d67bc6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:32.973 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd62d33de-d1 in ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:08:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:32.975 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd62d33de-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:08:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:32.975 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[95e903e9-6277-4f94-af21-3b4a81b604e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:32.976 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4b01c55d-fb38-49ef-8d13-94950ba22672]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:32 compute-0 systemd-machined[212986]: New machine qemu-20-instance-0000002d.
Dec 06 07:08:32 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-0000002d.
Dec 06 07:08:32 compute-0 systemd-udevd[282218]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:08:33 compute-0 ceph-mon[74339]: pgmap v1444: 305 pgs: 305 active+clean; 254 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 1.9 MiB/s wr, 54 op/s
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.037 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[57c1f3c9-a762-409b-b399-54d142b87385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 NetworkManager[48965]: <info>  [1765004913.0559] device (tapc3d7b61d-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:08:33 compute-0 NetworkManager[48965]: <info>  [1765004913.0569] device (tapc3d7b61d-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.057 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[064e4646-d6ea-4981-956c-609c6b70226d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.065 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.072 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:33 compute-0 ovn_controller[147168]: 2025-12-06T07:08:33Z|00101|binding|INFO|Setting lport c3d7b61d-0558-4443-ae89-f36f1815c38d ovn-installed in OVS
Dec 06 07:08:33 compute-0 ovn_controller[147168]: 2025-12-06T07:08:33Z|00102|binding|INFO|Setting lport c3d7b61d-0558-4443-ae89-f36f1815c38d up in Southbound
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.078 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.088 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b6523e-f731-4c66-ad7f-862fa8afcb40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 podman[282195]: 2025-12-06 07:08:33.094762365 +0000 UTC m=+0.128533109 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:08:33 compute-0 NetworkManager[48965]: <info>  [1765004913.0980] manager: (tapd62d33de-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.096 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf69b41-3dc1-4989-b6c0-7741b0fd6875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 systemd-udevd[282221]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:08:33 compute-0 podman[282196]: 2025-12-06 07:08:33.100013692 +0000 UTC m=+0.133827317 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.102 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.127 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5de740-96ec-401a-b4d0-85d324957b54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.132 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3a872193-7f82-47c0-bc92-c3463fdb5e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 262 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 2.5 MiB/s wr, 29 op/s
Dec 06 07:08:33 compute-0 NetworkManager[48965]: <info>  [1765004913.1536] device (tapd62d33de-d0): carrier: link connected
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.161 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[03229d38-02ca-418b-aa61-8c7ba8b5be51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.181 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[61b2b8df-185d-45ef-a7b3-4b27901189a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62d33de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:c1:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518574, 'reachable_time': 31982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282271, 'error': None, 'target': 'ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.199 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[70b08489-554b-4db5-a25e-97a7fd67dc60]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:c1e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 518574, 'tstamp': 518574}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282272, 'error': None, 'target': 'ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.217 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9befd8-2790-49e6-b58e-f4b9f7aeddbc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd62d33de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:c1:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518574, 'reachable_time': 31982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282273, 'error': None, 'target': 'ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.256 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0507c8c6-f5e5-4061-a730-28b7ce924133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.309 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2a094dbd-adc3-42c8-9050-772dd9aacb7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.311 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62d33de-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.312 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.313 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd62d33de-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.315 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:33 compute-0 NetworkManager[48965]: <info>  [1765004913.3156] manager: (tapd62d33de-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec 06 07:08:33 compute-0 kernel: tapd62d33de-d0: entered promiscuous mode
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.317 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.318 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd62d33de-d0, col_values=(('external_ids', {'iface-id': '0d779b99-8ce3-4716-88ae-2ec54698edd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:33 compute-0 ovn_controller[147168]: 2025-12-06T07:08:33Z|00103|binding|INFO|Releasing lport 0d779b99-8ce3-4716-88ae-2ec54698edd7 from this chassis (sb_readonly=0)
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.320 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.334 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.336 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.337 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d62d33de-d7cc-4103-8a83-88ba86c97b8f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d62d33de-d7cc-4103-8a83-88ba86c97b8f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.337 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[72871e20-1c6a-4001-8f1b-c40f35de15bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.338 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-d62d33de-d7cc-4103-8a83-88ba86c97b8f
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/d62d33de-d7cc-4103-8a83-88ba86c97b8f.pid.haproxy
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID d62d33de-d7cc-4103-8a83-88ba86c97b8f
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:08:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:33.338 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'env', 'PROCESS_TAG=haproxy-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d62d33de-d7cc-4103-8a83-88ba86c97b8f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.484 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004913.483574, 76a87dcb-b252-427a-8f49-7a8ab838bb3f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.484 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] VM Started (Lifecycle Event)
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.507 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.511 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004913.487157, 76a87dcb-b252-427a-8f49-7a8ab838bb3f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.511 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] VM Paused (Lifecycle Event)
Dec 06 07:08:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:33.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.531 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.534 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:08:33 compute-0 nova_compute[251992]: 2025-12-06 07:08:33.562 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:08:33 compute-0 podman[282347]: 2025-12-06 07:08:33.69512592 +0000 UTC m=+0.045535236 container create ca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:08:33 compute-0 systemd[1]: Started libpod-conmon-ca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630.scope.
Dec 06 07:08:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91a0f428531e5024c91797bae9c8a1443db792f16475fab01e1839b5df23cc0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:08:33 compute-0 podman[282347]: 2025-12-06 07:08:33.757362991 +0000 UTC m=+0.107772327 container init ca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:08:33 compute-0 podman[282347]: 2025-12-06 07:08:33.761955271 +0000 UTC m=+0.112364587 container start ca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 07:08:33 compute-0 podman[282347]: 2025-12-06 07:08:33.671316011 +0000 UTC m=+0.021725347 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:08:33 compute-0 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[282362]: [NOTICE]   (282366) : New worker (282368) forked
Dec 06 07:08:33 compute-0 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[282362]: [NOTICE]   (282366) : Loading success.
Dec 06 07:08:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:33.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:34 compute-0 ceph-mon[74339]: pgmap v1445: 305 pgs: 305 active+clean; 262 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 2.5 MiB/s wr, 29 op/s
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.893 251996 DEBUG nova.compute.manager [req-b2cd688e-29a5-437d-9bae-ecad1246157b req-18c40fe9-47ce-421c-ac60-0fb6c5ce256b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Received event network-vif-plugged-c3d7b61d-0558-4443-ae89-f36f1815c38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.894 251996 DEBUG oslo_concurrency.lockutils [req-b2cd688e-29a5-437d-9bae-ecad1246157b req-18c40fe9-47ce-421c-ac60-0fb6c5ce256b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.894 251996 DEBUG oslo_concurrency.lockutils [req-b2cd688e-29a5-437d-9bae-ecad1246157b req-18c40fe9-47ce-421c-ac60-0fb6c5ce256b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.894 251996 DEBUG oslo_concurrency.lockutils [req-b2cd688e-29a5-437d-9bae-ecad1246157b req-18c40fe9-47ce-421c-ac60-0fb6c5ce256b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.894 251996 DEBUG nova.compute.manager [req-b2cd688e-29a5-437d-9bae-ecad1246157b req-18c40fe9-47ce-421c-ac60-0fb6c5ce256b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Processing event network-vif-plugged-c3d7b61d-0558-4443-ae89-f36f1815c38d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.895 251996 DEBUG nova.compute.manager [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.899 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004914.8989944, 76a87dcb-b252-427a-8f49-7a8ab838bb3f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.899 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] VM Resumed (Lifecycle Event)
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.901 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.903 251996 INFO nova.virt.libvirt.driver [-] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Instance spawned successfully.
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.904 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.923 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.931 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.934 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.935 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.936 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.936 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.937 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.937 251996 DEBUG nova.virt.libvirt.driver [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.963 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.994 251996 INFO nova.compute.manager [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Took 8.87 seconds to spawn the instance on the hypervisor.
Dec 06 07:08:34 compute-0 nova_compute[251992]: 2025-12-06 07:08:34.994 251996 DEBUG nova.compute.manager [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:08:35 compute-0 nova_compute[251992]: 2025-12-06 07:08:35.087 251996 INFO nova.compute.manager [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Took 9.90 seconds to build instance.
Dec 06 07:08:35 compute-0 nova_compute[251992]: 2025-12-06 07:08:35.114 251996 DEBUG oslo_concurrency.lockutils [None req-eddacaab-ee1c-4356-8388-1dbf27410eeb 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 293 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 62 op/s
Dec 06 07:08:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:35.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:35.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Dec 06 07:08:36 compute-0 ceph-mon[74339]: pgmap v1446: 305 pgs: 305 active+clean; 293 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 62 op/s
Dec 06 07:08:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2803480774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/442084149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Dec 06 07:08:36 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Dec 06 07:08:36 compute-0 nova_compute[251992]: 2025-12-06 07:08:36.394 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:37 compute-0 nova_compute[251992]: 2025-12-06 07:08:37.083 251996 DEBUG nova.compute.manager [req-28cda2d1-422d-4bd9-a20b-97d4dfb1eca2 req-d90214a7-b18c-4645-a475-aa0191331aaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Received event network-vif-plugged-c3d7b61d-0558-4443-ae89-f36f1815c38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:08:37 compute-0 nova_compute[251992]: 2025-12-06 07:08:37.085 251996 DEBUG oslo_concurrency.lockutils [req-28cda2d1-422d-4bd9-a20b-97d4dfb1eca2 req-d90214a7-b18c-4645-a475-aa0191331aaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:37 compute-0 nova_compute[251992]: 2025-12-06 07:08:37.085 251996 DEBUG oslo_concurrency.lockutils [req-28cda2d1-422d-4bd9-a20b-97d4dfb1eca2 req-d90214a7-b18c-4645-a475-aa0191331aaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:37 compute-0 nova_compute[251992]: 2025-12-06 07:08:37.086 251996 DEBUG oslo_concurrency.lockutils [req-28cda2d1-422d-4bd9-a20b-97d4dfb1eca2 req-d90214a7-b18c-4645-a475-aa0191331aaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:37 compute-0 nova_compute[251992]: 2025-12-06 07:08:37.086 251996 DEBUG nova.compute.manager [req-28cda2d1-422d-4bd9-a20b-97d4dfb1eca2 req-d90214a7-b18c-4645-a475-aa0191331aaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] No waiting events found dispatching network-vif-plugged-c3d7b61d-0558-4443-ae89-f36f1815c38d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:08:37 compute-0 nova_compute[251992]: 2025-12-06 07:08:37.086 251996 WARNING nova.compute.manager [req-28cda2d1-422d-4bd9-a20b-97d4dfb1eca2 req-d90214a7-b18c-4645-a475-aa0191331aaa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Received unexpected event network-vif-plugged-c3d7b61d-0558-4443-ae89-f36f1815c38d for instance with vm_state active and task_state None.
Dec 06 07:08:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 128 op/s
Dec 06 07:08:37 compute-0 ceph-mon[74339]: osdmap e183: 3 total, 3 up, 3 in
Dec 06 07:08:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:37.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:37.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:38 compute-0 sudo[282379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:38 compute-0 sudo[282379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:38 compute-0 sudo[282379]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:38 compute-0 nova_compute[251992]: 2025-12-06 07:08:38.105 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:38 compute-0 sudo[282404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:38 compute-0 sudo[282404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:38 compute-0 sudo[282404]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:38 compute-0 ceph-mon[74339]: pgmap v1448: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 128 op/s
Dec 06 07:08:38 compute-0 NetworkManager[48965]: <info>  [1765004918.7203] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Dec 06 07:08:38 compute-0 NetworkManager[48965]: <info>  [1765004918.7213] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Dec 06 07:08:38 compute-0 nova_compute[251992]: 2025-12-06 07:08:38.719 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:38 compute-0 nova_compute[251992]: 2025-12-06 07:08:38.940 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:38 compute-0 ovn_controller[147168]: 2025-12-06T07:08:38Z|00104|binding|INFO|Releasing lport 0d779b99-8ce3-4716-88ae-2ec54698edd7 from this chassis (sb_readonly=0)
Dec 06 07:08:38 compute-0 nova_compute[251992]: 2025-12-06 07:08:38.966 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 128 op/s
Dec 06 07:08:39 compute-0 nova_compute[251992]: 2025-12-06 07:08:39.389 251996 DEBUG nova.compute.manager [req-6213c91b-dfd7-4bde-a4c8-5c09e4922803 req-1810dfe2-f2a3-4b19-b10a-24325eaba36d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Received event network-changed-c3d7b61d-0558-4443-ae89-f36f1815c38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:08:39 compute-0 nova_compute[251992]: 2025-12-06 07:08:39.390 251996 DEBUG nova.compute.manager [req-6213c91b-dfd7-4bde-a4c8-5c09e4922803 req-1810dfe2-f2a3-4b19-b10a-24325eaba36d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Refreshing instance network info cache due to event network-changed-c3d7b61d-0558-4443-ae89-f36f1815c38d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:08:39 compute-0 nova_compute[251992]: 2025-12-06 07:08:39.390 251996 DEBUG oslo_concurrency.lockutils [req-6213c91b-dfd7-4bde-a4c8-5c09e4922803 req-1810dfe2-f2a3-4b19-b10a-24325eaba36d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:08:39 compute-0 nova_compute[251992]: 2025-12-06 07:08:39.390 251996 DEBUG oslo_concurrency.lockutils [req-6213c91b-dfd7-4bde-a4c8-5c09e4922803 req-1810dfe2-f2a3-4b19-b10a-24325eaba36d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:08:39 compute-0 nova_compute[251992]: 2025-12-06 07:08:39.390 251996 DEBUG nova.network.neutron [req-6213c91b-dfd7-4bde-a4c8-5c09e4922803 req-1810dfe2-f2a3-4b19-b10a-24325eaba36d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Refreshing network info cache for port c3d7b61d-0558-4443-ae89-f36f1815c38d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:08:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4017702487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:39.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:39.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:40 compute-0 ceph-mon[74339]: pgmap v1449: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.8 MiB/s wr, 128 op/s
Dec 06 07:08:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.0 MiB/s wr, 210 op/s
Dec 06 07:08:41 compute-0 nova_compute[251992]: 2025-12-06 07:08:41.212 251996 DEBUG nova.network.neutron [req-6213c91b-dfd7-4bde-a4c8-5c09e4922803 req-1810dfe2-f2a3-4b19-b10a-24325eaba36d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Updated VIF entry in instance network info cache for port c3d7b61d-0558-4443-ae89-f36f1815c38d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:08:41 compute-0 nova_compute[251992]: 2025-12-06 07:08:41.212 251996 DEBUG nova.network.neutron [req-6213c91b-dfd7-4bde-a4c8-5c09e4922803 req-1810dfe2-f2a3-4b19-b10a-24325eaba36d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Updating instance_info_cache with network_info: [{"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:08:41 compute-0 nova_compute[251992]: 2025-12-06 07:08:41.235 251996 DEBUG oslo_concurrency.lockutils [req-6213c91b-dfd7-4bde-a4c8-5c09e4922803 req-1810dfe2-f2a3-4b19-b10a-24325eaba36d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:08:41 compute-0 nova_compute[251992]: 2025-12-06 07:08:41.397 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:41.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:41.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:42 compute-0 ceph-mon[74339]: pgmap v1450: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.0 MiB/s wr, 210 op/s
Dec 06 07:08:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Dec 06 07:08:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Dec 06 07:08:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Dec 06 07:08:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:08:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:08:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:08:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:08:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:08:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:08:43 compute-0 nova_compute[251992]: 2025-12-06 07:08:43.136 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 22 KiB/s wr, 245 op/s
Dec 06 07:08:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Dec 06 07:08:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:08:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:43.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:08:43 compute-0 nova_compute[251992]: 2025-12-06 07:08:43.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:43.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Dec 06 07:08:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Dec 06 07:08:44 compute-0 ceph-mon[74339]: osdmap e184: 3 total, 3 up, 3 in
Dec 06 07:08:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2836448693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 312 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.0 MiB/s wr, 233 op/s
Dec 06 07:08:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Dec 06 07:08:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Dec 06 07:08:45 compute-0 ceph-mon[74339]: pgmap v1452: 305 pgs: 305 active+clean; 293 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 22 KiB/s wr, 245 op/s
Dec 06 07:08:45 compute-0 ceph-mon[74339]: osdmap e185: 3 total, 3 up, 3 in
Dec 06 07:08:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Dec 06 07:08:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:45.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:45 compute-0 nova_compute[251992]: 2025-12-06 07:08:45.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:45.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Dec 06 07:08:46 compute-0 nova_compute[251992]: 2025-12-06 07:08:46.399 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Dec 06 07:08:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Dec 06 07:08:46 compute-0 ceph-mon[74339]: pgmap v1454: 305 pgs: 305 active+clean; 312 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.0 MiB/s wr, 233 op/s
Dec 06 07:08:46 compute-0 ceph-mon[74339]: osdmap e186: 3 total, 3 up, 3 in
Dec 06 07:08:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 339 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 158 op/s
Dec 06 07:08:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:08:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Dec 06 07:08:47 compute-0 ceph-mon[74339]: osdmap e187: 3 total, 3 up, 3 in
Dec 06 07:08:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:47.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Dec 06 07:08:47 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Dec 06 07:08:47 compute-0 nova_compute[251992]: 2025-12-06 07:08:47.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:47.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:48 compute-0 nova_compute[251992]: 2025-12-06 07:08:48.139 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:49 compute-0 ceph-mon[74339]: pgmap v1457: 305 pgs: 305 active+clean; 339 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 158 op/s
Dec 06 07:08:49 compute-0 ceph-mon[74339]: osdmap e188: 3 total, 3 up, 3 in
Dec 06 07:08:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 339 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 153 op/s
Dec 06 07:08:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:08:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:49.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:08:49 compute-0 nova_compute[251992]: 2025-12-06 07:08:49.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:49 compute-0 nova_compute[251992]: 2025-12-06 07:08:49.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:49 compute-0 nova_compute[251992]: 2025-12-06 07:08:49.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:49 compute-0 nova_compute[251992]: 2025-12-06 07:08:49.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:49 compute-0 nova_compute[251992]: 2025-12-06 07:08:49.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:49 compute-0 nova_compute[251992]: 2025-12-06 07:08:49.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:49 compute-0 nova_compute[251992]: 2025-12-06 07:08:49.683 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:08:49 compute-0 nova_compute[251992]: 2025-12-06 07:08:49.683 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:49.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:08:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/212881929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.167 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2194169652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:50 compute-0 ceph-mon[74339]: pgmap v1459: 305 pgs: 305 active+clean; 339 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 153 op/s
Dec 06 07:08:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1698194847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.238 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.239 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.408 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.410 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4508MB free_disk=20.85525894165039GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.411 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.411 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.646 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 76a87dcb-b252-427a-8f49-7a8ab838bb3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.647 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.647 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.750 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.822 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.822 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.838 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.866 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:08:50 compute-0 ovn_controller[147168]: 2025-12-06T07:08:50Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:57:77 10.100.0.7
Dec 06 07:08:50 compute-0 ovn_controller[147168]: 2025-12-06T07:08:50Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:57:77 10.100.0.7
Dec 06 07:08:50 compute-0 nova_compute[251992]: 2025-12-06 07:08:50.907 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 360 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.2 MiB/s wr, 234 op/s
Dec 06 07:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:51.191 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:51.192 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:08:51 compute-0 nova_compute[251992]: 2025-12-06 07:08:51.239 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/212881929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.260031) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931260124, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2227, "num_deletes": 254, "total_data_size": 3832817, "memory_usage": 3891520, "flush_reason": "Manual Compaction"}
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931282206, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3744233, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27863, "largest_seqno": 30089, "table_properties": {"data_size": 3734041, "index_size": 6494, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21767, "raw_average_key_size": 20, "raw_value_size": 3713428, "raw_average_value_size": 3567, "num_data_blocks": 282, "num_entries": 1041, "num_filter_entries": 1041, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004735, "oldest_key_time": 1765004735, "file_creation_time": 1765004931, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 22215 microseconds, and 8849 cpu microseconds.
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:08:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.282257) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3744233 bytes OK
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.282277) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.284281) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.284291) EVENT_LOG_v1 {"time_micros": 1765004931284288, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.284307) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3823514, prev total WAL file size 3839053, number of live WAL files 2.
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.285547) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3656KB)], [62(7803KB)]
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931285637, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11734576, "oldest_snapshot_seqno": -1}
Dec 06 07:08:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Dec 06 07:08:51 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Dec 06 07:08:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:08:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3602284135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:51 compute-0 nova_compute[251992]: 2025-12-06 07:08:51.354 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:51 compute-0 nova_compute[251992]: 2025-12-06 07:08:51.359 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5971 keys, 9720131 bytes, temperature: kUnknown
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931363557, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 9720131, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9680062, "index_size": 24023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 153085, "raw_average_key_size": 25, "raw_value_size": 9572536, "raw_average_value_size": 1603, "num_data_blocks": 965, "num_entries": 5971, "num_filter_entries": 5971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765004931, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.363802) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9720131 bytes
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.365288) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.4 rd, 124.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6496, records dropped: 525 output_compression: NoCompression
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.365304) EVENT_LOG_v1 {"time_micros": 1765004931365296, "job": 34, "event": "compaction_finished", "compaction_time_micros": 78016, "compaction_time_cpu_micros": 44011, "output_level": 6, "num_output_files": 1, "total_output_size": 9720131, "num_input_records": 6496, "num_output_records": 5971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931366071, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765004931367408, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.285301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.367531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.367538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.367540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.367542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:08:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:08:51.367544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:08:51 compute-0 nova_compute[251992]: 2025-12-06 07:08:51.377 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:08:51 compute-0 nova_compute[251992]: 2025-12-06 07:08:51.400 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:51 compute-0 nova_compute[251992]: 2025-12-06 07:08:51.410 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:08:51 compute-0 nova_compute[251992]: 2025-12-06 07:08:51.411 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:08:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:51.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:08:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:51.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Dec 06 07:08:52 compute-0 nova_compute[251992]: 2025-12-06 07:08:52.411 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Dec 06 07:08:52 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Dec 06 07:08:52 compute-0 nova_compute[251992]: 2025-12-06 07:08:52.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:52 compute-0 nova_compute[251992]: 2025-12-06 07:08:52.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:52 compute-0 nova_compute[251992]: 2025-12-06 07:08:52.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:08:52 compute-0 ceph-mon[74339]: pgmap v1460: 305 pgs: 305 active+clean; 360 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.2 MiB/s wr, 234 op/s
Dec 06 07:08:52 compute-0 ceph-mon[74339]: osdmap e189: 3 total, 3 up, 3 in
Dec 06 07:08:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3602284135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2587063817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:53 compute-0 nova_compute[251992]: 2025-12-06 07:08:53.141 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 357 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 895 KiB/s rd, 5.4 MiB/s wr, 242 op/s
Dec 06 07:08:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:53.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:53 compute-0 nova_compute[251992]: 2025-12-06 07:08:53.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:08:53 compute-0 nova_compute[251992]: 2025-12-06 07:08:53.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:08:53 compute-0 nova_compute[251992]: 2025-12-06 07:08:53.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:08:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:53.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:53 compute-0 nova_compute[251992]: 2025-12-06 07:08:53.933 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:08:53 compute-0 nova_compute[251992]: 2025-12-06 07:08:53.933 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:08:53 compute-0 nova_compute[251992]: 2025-12-06 07:08:53.934 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:08:53 compute-0 nova_compute[251992]: 2025-12-06 07:08:53.934 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 76a87dcb-b252-427a-8f49-7a8ab838bb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:08:53 compute-0 nova_compute[251992]: 2025-12-06 07:08:53.936 251996 DEBUG oslo_concurrency.lockutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "c67cab07-55d9-4f41-aa3c-c367f840ba27" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:53 compute-0 nova_compute[251992]: 2025-12-06 07:08:53.937 251996 DEBUG oslo_concurrency.lockutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "c67cab07-55d9-4f41-aa3c-c367f840ba27" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:53 compute-0 nova_compute[251992]: 2025-12-06 07:08:53.964 251996 DEBUG nova.compute.manager [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:08:53 compute-0 ceph-mon[74339]: osdmap e190: 3 total, 3 up, 3 in
Dec 06 07:08:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1084889315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4138188771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.035 251996 DEBUG oslo_concurrency.lockutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.036 251996 DEBUG oslo_concurrency.lockutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.040 251996 DEBUG nova.virt.hardware [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.040 251996 INFO nova.compute.claims [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.145 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:08:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2408473293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.624 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.630 251996 DEBUG nova.compute.provider_tree [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.646 251996 DEBUG nova.scheduler.client.report [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.670 251996 DEBUG oslo_concurrency.lockutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.671 251996 DEBUG nova.compute.manager [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.720 251996 DEBUG nova.compute.manager [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.720 251996 DEBUG nova.network.neutron [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.741 251996 INFO nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.763 251996 DEBUG nova.compute.manager [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.835 251996 DEBUG nova.compute.manager [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.836 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.837 251996 INFO nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Creating image(s)
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.867 251996 DEBUG nova.storage.rbd_utils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image c67cab07-55d9-4f41-aa3c-c367f840ba27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.902 251996 DEBUG nova.storage.rbd_utils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image c67cab07-55d9-4f41-aa3c-c367f840ba27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.931 251996 DEBUG nova.storage.rbd_utils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image c67cab07-55d9-4f41-aa3c-c367f840ba27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:54 compute-0 nova_compute[251992]: 2025-12-06 07:08:54.934 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:54 compute-0 ceph-mon[74339]: pgmap v1463: 305 pgs: 305 active+clean; 357 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 895 KiB/s rd, 5.4 MiB/s wr, 242 op/s
Dec 06 07:08:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2408473293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.004 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.004 251996 DEBUG oslo_concurrency.lockutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.005 251996 DEBUG oslo_concurrency.lockutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.005 251996 DEBUG oslo_concurrency.lockutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.031 251996 DEBUG nova.storage.rbd_utils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image c67cab07-55d9-4f41-aa3c-c367f840ba27_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.034 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c67cab07-55d9-4f41-aa3c-c367f840ba27_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.107 251996 DEBUG nova.network.neutron [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.108 251996 DEBUG nova.compute.manager [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:08:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 311 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.7 MiB/s wr, 322 op/s
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.164 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Updating instance_info_cache with network_info: [{"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.186 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.186 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.332 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c67cab07-55d9-4f41-aa3c-c367f840ba27_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.402 251996 DEBUG nova.storage.rbd_utils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] resizing rbd image c67cab07-55d9-4f41-aa3c-c367f840ba27_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:08:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:55.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.552 251996 DEBUG nova.objects.instance [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lazy-loading 'migration_context' on Instance uuid c67cab07-55d9-4f41-aa3c-c367f840ba27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.567 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.568 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Ensure instance console log exists: /var/lib/nova/instances/c67cab07-55d9-4f41-aa3c-c367f840ba27/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.568 251996 DEBUG oslo_concurrency.lockutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.569 251996 DEBUG oslo_concurrency.lockutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.569 251996 DEBUG oslo_concurrency.lockutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.570 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.575 251996 WARNING nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.580 251996 DEBUG nova.virt.libvirt.host [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.580 251996 DEBUG nova.virt.libvirt.host [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.583 251996 DEBUG nova.virt.libvirt.host [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.583 251996 DEBUG nova.virt.libvirt.host [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.584 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.584 251996 DEBUG nova.virt.hardware [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.584 251996 DEBUG nova.virt.hardware [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.585 251996 DEBUG nova.virt.hardware [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.585 251996 DEBUG nova.virt.hardware [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.585 251996 DEBUG nova.virt.hardware [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.585 251996 DEBUG nova.virt.hardware [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.585 251996 DEBUG nova.virt.hardware [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.585 251996 DEBUG nova.virt.hardware [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.586 251996 DEBUG nova.virt.hardware [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.586 251996 DEBUG nova.virt.hardware [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.586 251996 DEBUG nova.virt.hardware [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:08:55 compute-0 nova_compute[251992]: 2025-12-06 07:08:55.588 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:08:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:55.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:08:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:08:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2567383709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:56 compute-0 nova_compute[251992]: 2025-12-06 07:08:56.268 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Dec 06 07:08:56 compute-0 nova_compute[251992]: 2025-12-06 07:08:56.304 251996 DEBUG nova.storage.rbd_utils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image c67cab07-55d9-4f41-aa3c-c367f840ba27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:56 compute-0 nova_compute[251992]: 2025-12-06 07:08:56.308 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:56 compute-0 nova_compute[251992]: 2025-12-06 07:08:56.403 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:08:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4167133663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:08:56 compute-0 nova_compute[251992]: 2025-12-06 07:08:56.759 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:56 compute-0 nova_compute[251992]: 2025-12-06 07:08:56.762 251996 DEBUG nova.objects.instance [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lazy-loading 'pci_devices' on Instance uuid c67cab07-55d9-4f41-aa3c-c367f840ba27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:08:56 compute-0 nova_compute[251992]: 2025-12-06 07:08:56.784 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:08:56 compute-0 nova_compute[251992]:   <uuid>c67cab07-55d9-4f41-aa3c-c367f840ba27</uuid>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   <name>instance-0000002f</name>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <nova:name>tempest-ListImageFiltersTestJSON-server-1962166951</nova:name>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:08:55</nova:creationTime>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <nova:user uuid="3786fc2472ec43adb27b29bfa497a6a2">tempest-ListImageFiltersTestJSON-1891540223-project-member</nova:user>
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <nova:project uuid="6f86ab5a5bf14cb6b789f065cc8ca04a">tempest-ListImageFiltersTestJSON-1891540223</nova:project>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <system>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <entry name="serial">c67cab07-55d9-4f41-aa3c-c367f840ba27</entry>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <entry name="uuid">c67cab07-55d9-4f41-aa3c-c367f840ba27</entry>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     </system>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   <os>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   </os>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   <features>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   </features>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c67cab07-55d9-4f41-aa3c-c367f840ba27_disk">
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       </source>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c67cab07-55d9-4f41-aa3c-c367f840ba27_disk.config">
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       </source>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:08:56 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/c67cab07-55d9-4f41-aa3c-c367f840ba27/console.log" append="off"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <video>
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     </video>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:08:56 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:08:56 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:08:56 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:08:56 compute-0 nova_compute[251992]: </domain>
Dec 06 07:08:56 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:08:56 compute-0 nova_compute[251992]: 2025-12-06 07:08:56.828 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:08:56 compute-0 nova_compute[251992]: 2025-12-06 07:08:56.829 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:08:56 compute-0 nova_compute[251992]: 2025-12-06 07:08:56.829 251996 INFO nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Using config drive
Dec 06 07:08:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 313 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 8.0 MiB/s wr, 359 op/s
Dec 06 07:08:57 compute-0 nova_compute[251992]: 2025-12-06 07:08:57.524 251996 DEBUG nova.storage.rbd_utils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image c67cab07-55d9-4f41-aa3c-c367f840ba27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:57.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:08:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:57.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:08:57 compute-0 nova_compute[251992]: 2025-12-06 07:08:57.910 251996 INFO nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Creating config drive at /var/lib/nova/instances/c67cab07-55d9-4f41-aa3c-c367f840ba27/disk.config
Dec 06 07:08:57 compute-0 nova_compute[251992]: 2025-12-06 07:08:57.914 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c67cab07-55d9-4f41-aa3c-c367f840ba27/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcruolt0u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:58 compute-0 nova_compute[251992]: 2025-12-06 07:08:58.041 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c67cab07-55d9-4f41-aa3c-c367f840ba27/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcruolt0u" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:08:58 compute-0 nova_compute[251992]: 2025-12-06 07:08:58.067 251996 DEBUG nova.storage.rbd_utils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] rbd image c67cab07-55d9-4f41-aa3c-c367f840ba27_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:08:58 compute-0 nova_compute[251992]: 2025-12-06 07:08:58.070 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c67cab07-55d9-4f41-aa3c-c367f840ba27/disk.config c67cab07-55d9-4f41-aa3c-c367f840ba27_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:08:58 compute-0 nova_compute[251992]: 2025-12-06 07:08:58.176 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:08:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:08:58.194 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:08:58 compute-0 sudo[282791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:58 compute-0 sudo[282791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:58 compute-0 sudo[282791]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:58 compute-0 sudo[282816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:08:58 compute-0 sudo[282816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:08:58 compute-0 sudo[282816]: pam_unix(sudo:session): session closed for user root
Dec 06 07:08:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Dec 06 07:08:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1692578340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:08:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 313 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 616 KiB/s rd, 5.9 MiB/s wr, 226 op/s
Dec 06 07:08:59 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Dec 06 07:08:59 compute-0 podman[282842]: 2025-12-06 07:08:59.453885687 +0000 UTC m=+0.115209122 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:08:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:08:59.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:08:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:08:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:08:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:08:59.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:00 compute-0 ceph-mon[74339]: pgmap v1464: 305 pgs: 305 active+clean; 311 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.7 MiB/s wr, 322 op/s
Dec 06 07:09:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2567383709' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4167133663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:00 compute-0 ceph-mon[74339]: pgmap v1465: 305 pgs: 305 active+clean; 313 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 8.0 MiB/s wr, 359 op/s
Dec 06 07:09:00 compute-0 ceph-mon[74339]: pgmap v1467: 305 pgs: 305 active+clean; 313 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 616 KiB/s rd, 5.9 MiB/s wr, 226 op/s
Dec 06 07:09:00 compute-0 ceph-mon[74339]: osdmap e191: 3 total, 3 up, 3 in
Dec 06 07:09:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2855757156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:00 compute-0 nova_compute[251992]: 2025-12-06 07:09:00.795 251996 DEBUG oslo_concurrency.processutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c67cab07-55d9-4f41-aa3c-c367f840ba27/disk.config c67cab07-55d9-4f41-aa3c-c367f840ba27_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.725s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:00 compute-0 nova_compute[251992]: 2025-12-06 07:09:00.796 251996 INFO nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Deleting local config drive /var/lib/nova/instances/c67cab07-55d9-4f41-aa3c-c367f840ba27/disk.config because it was imported into RBD.
Dec 06 07:09:00 compute-0 systemd-machined[212986]: New machine qemu-21-instance-0000002f.
Dec 06 07:09:00 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-0000002f.
Dec 06 07:09:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 374 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 9.7 MiB/s wr, 289 op/s
Dec 06 07:09:01 compute-0 nova_compute[251992]: 2025-12-06 07:09:01.407 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:01.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:01.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2898849422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.432 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004942.4320834, c67cab07-55d9-4f41-aa3c-c367f840ba27 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.433 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] VM Resumed (Lifecycle Event)
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.435 251996 DEBUG nova.compute.manager [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.435 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.439 251996 INFO nova.virt.libvirt.driver [-] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Instance spawned successfully.
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.439 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.470 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.474 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.474 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.475 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.475 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.475 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.476 251996 DEBUG nova.virt.libvirt.driver [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.479 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:09:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Dec 06 07:09:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Dec 06 07:09:02 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.511 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.511 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765004942.433449, c67cab07-55d9-4f41-aa3c-c367f840ba27 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.511 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] VM Started (Lifecycle Event)
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.587 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.591 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.600 251996 INFO nova.compute.manager [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Took 7.76 seconds to spawn the instance on the hypervisor.
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.601 251996 DEBUG nova.compute.manager [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.625 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.663 251996 INFO nova.compute.manager [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Took 8.65 seconds to build instance.
Dec 06 07:09:02 compute-0 nova_compute[251992]: 2025-12-06 07:09:02.678 251996 DEBUG oslo_concurrency.lockutils [None req-429d0b5e-e783-4cf0-a47d-b6a0ae014ed2 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "c67cab07-55d9-4f41-aa3c-c367f840ba27" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 375 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 9.0 MiB/s wr, 220 op/s
Dec 06 07:09:03 compute-0 ceph-mon[74339]: pgmap v1468: 305 pgs: 305 active+clean; 374 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 9.7 MiB/s wr, 289 op/s
Dec 06 07:09:03 compute-0 ceph-mon[74339]: osdmap e192: 3 total, 3 up, 3 in
Dec 06 07:09:03 compute-0 nova_compute[251992]: 2025-12-06 07:09:03.180 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:03 compute-0 podman[282933]: 2025-12-06 07:09:03.416936329 +0000 UTC m=+0.068009642 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:09:03 compute-0 podman[282932]: 2025-12-06 07:09:03.420146063 +0000 UTC m=+0.059561272 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:09:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Dec 06 07:09:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Dec 06 07:09:03 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Dec 06 07:09:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:03.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:09:03.816 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:09:03.817 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:09:03.818 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:09:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:03.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:09:04 compute-0 ceph-mon[74339]: pgmap v1470: 305 pgs: 305 active+clean; 375 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 9.0 MiB/s wr, 220 op/s
Dec 06 07:09:04 compute-0 ceph-mon[74339]: osdmap e193: 3 total, 3 up, 3 in
Dec 06 07:09:05 compute-0 nova_compute[251992]: 2025-12-06 07:09:05.045 251996 DEBUG nova.compute.manager [None req-676aca74-ab59-4e6e-b714-8b37f9b6b51a 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:09:05 compute-0 nova_compute[251992]: 2025-12-06 07:09:05.102 251996 INFO nova.compute.manager [None req-676aca74-ab59-4e6e-b714-8b37f9b6b51a 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] instance snapshotting
Dec 06 07:09:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 367 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 410 op/s
Dec 06 07:09:05 compute-0 nova_compute[251992]: 2025-12-06 07:09:05.338 251996 INFO nova.virt.libvirt.driver [None req-676aca74-ab59-4e6e-b714-8b37f9b6b51a 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Beginning live snapshot process
Dec 06 07:09:05 compute-0 nova_compute[251992]: 2025-12-06 07:09:05.487 251996 DEBUG nova.virt.libvirt.imagebackend [None req-676aca74-ab59-4e6e-b714-8b37f9b6b51a 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:09:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:09:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:05.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:09:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1998551748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:05 compute-0 nova_compute[251992]: 2025-12-06 07:09:05.692 251996 DEBUG nova.storage.rbd_utils [None req-676aca74-ab59-4e6e-b714-8b37f9b6b51a 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] creating snapshot(04a6d8ec8b9d45d1a41475ff0b754e7c) on rbd image(c67cab07-55d9-4f41-aa3c-c367f840ba27_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:09:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:05.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:06 compute-0 nova_compute[251992]: 2025-12-06 07:09:06.411 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Dec 06 07:09:06 compute-0 ceph-mon[74339]: pgmap v1472: 305 pgs: 305 active+clean; 367 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 410 op/s
Dec 06 07:09:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Dec 06 07:09:06 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Dec 06 07:09:06 compute-0 nova_compute[251992]: 2025-12-06 07:09:06.800 251996 DEBUG nova.storage.rbd_utils [None req-676aca74-ab59-4e6e-b714-8b37f9b6b51a 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] cloning vms/c67cab07-55d9-4f41-aa3c-c367f840ba27_disk@04a6d8ec8b9d45d1a41475ff0b754e7c to images/84696b79-26a5-4d20-90c5-9f50a860fde7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:09:06 compute-0 nova_compute[251992]: 2025-12-06 07:09:06.939 251996 DEBUG nova.storage.rbd_utils [None req-676aca74-ab59-4e6e-b714-8b37f9b6b51a 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] flattening images/84696b79-26a5-4d20-90c5-9f50a860fde7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:09:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 372 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 4.3 MiB/s wr, 436 op/s
Dec 06 07:09:07 compute-0 nova_compute[251992]: 2025-12-06 07:09:07.391 251996 DEBUG nova.storage.rbd_utils [None req-676aca74-ab59-4e6e-b714-8b37f9b6b51a 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] removing snapshot(04a6d8ec8b9d45d1a41475ff0b754e7c) on rbd image(c67cab07-55d9-4f41-aa3c-c367f840ba27_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:09:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:07.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Dec 06 07:09:07 compute-0 ceph-mon[74339]: osdmap e194: 3 total, 3 up, 3 in
Dec 06 07:09:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:09:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3000496672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:09:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:09:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3000496672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:09:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Dec 06 07:09:07 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Dec 06 07:09:07 compute-0 nova_compute[251992]: 2025-12-06 07:09:07.828 251996 DEBUG nova.storage.rbd_utils [None req-676aca74-ab59-4e6e-b714-8b37f9b6b51a 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] creating snapshot(snap) on rbd image(84696b79-26a5-4d20-90c5-9f50a860fde7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:09:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:07.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:08 compute-0 nova_compute[251992]: 2025-12-06 07:09:08.182 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Dec 06 07:09:08 compute-0 ceph-mon[74339]: pgmap v1474: 305 pgs: 305 active+clean; 372 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 4.3 MiB/s wr, 436 op/s
Dec 06 07:09:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3000496672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:09:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3000496672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:09:08 compute-0 ceph-mon[74339]: osdmap e195: 3 total, 3 up, 3 in
Dec 06 07:09:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2023570458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:09:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2023570458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:09:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Dec 06 07:09:09 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Dec 06 07:09:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 372 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 3.1 MiB/s wr, 411 op/s
Dec 06 07:09:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:09.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:09.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:10 compute-0 ceph-mon[74339]: osdmap e196: 3 total, 3 up, 3 in
Dec 06 07:09:10 compute-0 ceph-mon[74339]: pgmap v1477: 305 pgs: 305 active+clean; 372 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 3.1 MiB/s wr, 411 op/s
Dec 06 07:09:10 compute-0 nova_compute[251992]: 2025-12-06 07:09:10.820 251996 INFO nova.virt.libvirt.driver [None req-676aca74-ab59-4e6e-b714-8b37f9b6b51a 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Snapshot image upload complete
Dec 06 07:09:10 compute-0 nova_compute[251992]: 2025-12-06 07:09:10.820 251996 INFO nova.compute.manager [None req-676aca74-ab59-4e6e-b714-8b37f9b6b51a 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Took 5.72 seconds to snapshot the instance on the hypervisor.
Dec 06 07:09:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 327 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.4 MiB/s wr, 319 op/s
Dec 06 07:09:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1388176424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:09:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1388176424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:09:11 compute-0 nova_compute[251992]: 2025-12-06 07:09:11.415 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:11.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:11.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Dec 06 07:09:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Dec 06 07:09:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Dec 06 07:09:12 compute-0 ceph-mon[74339]: pgmap v1478: 305 pgs: 305 active+clean; 327 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.4 MiB/s wr, 319 op/s
Dec 06 07:09:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2443906861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:12 compute-0 ceph-mon[74339]: osdmap e197: 3 total, 3 up, 3 in
Dec 06 07:09:12 compute-0 nova_compute[251992]: 2025-12-06 07:09:12.803 251996 DEBUG oslo_concurrency.lockutils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:12 compute-0 nova_compute[251992]: 2025-12-06 07:09:12.803 251996 DEBUG oslo_concurrency.lockutils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:12 compute-0 nova_compute[251992]: 2025-12-06 07:09:12.819 251996 DEBUG nova.objects.instance [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lazy-loading 'flavor' on Instance uuid 76a87dcb-b252-427a-8f49-7a8ab838bb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:09:12 compute-0 nova_compute[251992]: 2025-12-06 07:09:12.866 251996 DEBUG oslo_concurrency.lockutils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:09:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:09:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:09:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:09:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:09:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:09:13 compute-0 nova_compute[251992]: 2025-12-06 07:09:13.059 251996 DEBUG oslo_concurrency.lockutils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:13 compute-0 nova_compute[251992]: 2025-12-06 07:09:13.059 251996 DEBUG oslo_concurrency.lockutils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:13 compute-0 nova_compute[251992]: 2025-12-06 07:09:13.060 251996 INFO nova.compute.manager [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Attaching volume 5076e091-edac-4ad3-a1b0-4b489857bfaf to /dev/vdb
Dec 06 07:09:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 311 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 241 op/s
Dec 06 07:09:13 compute-0 nova_compute[251992]: 2025-12-06 07:09:13.184 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:13 compute-0 nova_compute[251992]: 2025-12-06 07:09:13.217 251996 DEBUG os_brick.utils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:09:13 compute-0 nova_compute[251992]: 2025-12-06 07:09:13.218 251996 INFO oslo.privsep.daemon [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpv6gdz0ca/privsep.sock']
Dec 06 07:09:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:13.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Dec 06 07:09:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Dec 06 07:09:13 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Dec 06 07:09:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:13.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:13 compute-0 nova_compute[251992]: 2025-12-06 07:09:13.960 251996 INFO oslo.privsep.daemon [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Spawned new privsep daemon via rootwrap
Dec 06 07:09:13 compute-0 nova_compute[251992]: 2025-12-06 07:09:13.822 283120 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 07:09:13 compute-0 nova_compute[251992]: 2025-12-06 07:09:13.826 283120 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 07:09:13 compute-0 nova_compute[251992]: 2025-12-06 07:09:13.828 283120 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 06 07:09:13 compute-0 nova_compute[251992]: 2025-12-06 07:09:13.828 283120 INFO oslo.privsep.daemon [-] privsep daemon running as pid 283120
Dec 06 07:09:13 compute-0 nova_compute[251992]: 2025-12-06 07:09:13.965 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[352d5ae4-bdfd-44b0-a238-42eb1fe3b392]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.059 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.071 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.071 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[81e5be4a-7013-471f-a227-b100300489df]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.073 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.081 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.082 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[252b6288-fe4d-4245-a16d-ec880a59c168]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.084 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.094 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.094 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[465174db-5e16-4add-be1b-d3a744973cde]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.096 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[36adf763-400d-496e-99de-aad83f39a7c2]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.096 251996 DEBUG oslo_concurrency.processutils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.123 251996 DEBUG oslo_concurrency.processutils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.126 251996 DEBUG os_brick.initiator.connectors.lightos [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.127 251996 DEBUG os_brick.initiator.connectors.lightos [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.127 251996 DEBUG os_brick.initiator.connectors.lightos [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.128 251996 DEBUG os_brick.utils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] <== get_connector_properties: return (910ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.128 251996 DEBUG nova.virt.block_device [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Updating existing volume attachment record: 9b7992bd-aa08-47ce-afa0-12dacb2ac5d0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:09:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Dec 06 07:09:14 compute-0 ceph-mon[74339]: pgmap v1480: 305 pgs: 305 active+clean; 311 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 241 op/s
Dec 06 07:09:14 compute-0 ceph-mon[74339]: osdmap e198: 3 total, 3 up, 3 in
Dec 06 07:09:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Dec 06 07:09:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Dec 06 07:09:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:09:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2476696267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.881 251996 DEBUG oslo_concurrency.lockutils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.881 251996 DEBUG oslo_concurrency.lockutils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.883 251996 DEBUG oslo_concurrency.lockutils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.892 251996 DEBUG nova.objects.instance [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lazy-loading 'flavor' on Instance uuid 76a87dcb-b252-427a-8f49-7a8ab838bb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.917 251996 DEBUG nova.virt.libvirt.driver [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Attempting to attach volume 5076e091-edac-4ad3-a1b0-4b489857bfaf with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:09:14 compute-0 nova_compute[251992]: 2025-12-06 07:09:14.920 251996 DEBUG nova.virt.libvirt.guest [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:09:14 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:09:14 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-5076e091-edac-4ad3-a1b0-4b489857bfaf">
Dec 06 07:09:14 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:09:14 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:09:14 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:09:14 compute-0 nova_compute[251992]:   </source>
Dec 06 07:09:14 compute-0 nova_compute[251992]:   <auth username="openstack">
Dec 06 07:09:14 compute-0 nova_compute[251992]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:09:14 compute-0 nova_compute[251992]:   </auth>
Dec 06 07:09:14 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:09:14 compute-0 nova_compute[251992]:   <serial>5076e091-edac-4ad3-a1b0-4b489857bfaf</serial>
Dec 06 07:09:14 compute-0 nova_compute[251992]: </disk>
Dec 06 07:09:14 compute-0 nova_compute[251992]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:09:15 compute-0 nova_compute[251992]: 2025-12-06 07:09:15.045 251996 DEBUG nova.virt.libvirt.driver [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:09:15 compute-0 nova_compute[251992]: 2025-12-06 07:09:15.046 251996 DEBUG nova.virt.libvirt.driver [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:09:15 compute-0 nova_compute[251992]: 2025-12-06 07:09:15.046 251996 DEBUG nova.virt.libvirt.driver [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:09:15 compute-0 nova_compute[251992]: 2025-12-06 07:09:15.047 251996 DEBUG nova.virt.libvirt.driver [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] No VIF found with MAC fa:16:3e:3d:57:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:09:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 276 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 5.4 MiB/s wr, 297 op/s
Dec 06 07:09:15 compute-0 ovn_controller[147168]: 2025-12-06T07:09:15Z|00105|binding|INFO|Releasing lport 0d779b99-8ce3-4716-88ae-2ec54698edd7 from this chassis (sb_readonly=0)
Dec 06 07:09:15 compute-0 nova_compute[251992]: 2025-12-06 07:09:15.287 251996 DEBUG oslo_concurrency.lockutils [None req-f9915445-c537-4faa-bf6a-6ecec42e2276 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:15 compute-0 nova_compute[251992]: 2025-12-06 07:09:15.302 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:15.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Dec 06 07:09:15 compute-0 ceph-mon[74339]: osdmap e199: 3 total, 3 up, 3 in
Dec 06 07:09:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2476696267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:15.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Dec 06 07:09:15 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Dec 06 07:09:16 compute-0 ovn_controller[147168]: 2025-12-06T07:09:16Z|00106|binding|INFO|Releasing lport 0d779b99-8ce3-4716-88ae-2ec54698edd7 from this chassis (sb_readonly=0)
Dec 06 07:09:16 compute-0 nova_compute[251992]: 2025-12-06 07:09:16.404 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:16 compute-0 nova_compute[251992]: 2025-12-06 07:09:16.417 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:16 compute-0 nova_compute[251992]: 2025-12-06 07:09:16.894 251996 DEBUG oslo_concurrency.lockutils [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:16 compute-0 nova_compute[251992]: 2025-12-06 07:09:16.895 251996 DEBUG oslo_concurrency.lockutils [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:16 compute-0 nova_compute[251992]: 2025-12-06 07:09:16.909 251996 INFO nova.compute.manager [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Detaching volume 5076e091-edac-4ad3-a1b0-4b489857bfaf
Dec 06 07:09:16 compute-0 ceph-mon[74339]: pgmap v1483: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 276 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 5.4 MiB/s wr, 297 op/s
Dec 06 07:09:16 compute-0 ceph-mon[74339]: osdmap e200: 3 total, 3 up, 3 in
Dec 06 07:09:17 compute-0 nova_compute[251992]: 2025-12-06 07:09:17.068 251996 INFO nova.virt.block_device [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Attempting to driver detach volume 5076e091-edac-4ad3-a1b0-4b489857bfaf from mountpoint /dev/vdb
Dec 06 07:09:17 compute-0 nova_compute[251992]: 2025-12-06 07:09:17.077 251996 DEBUG nova.virt.libvirt.driver [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Attempting to detach device vdb from instance 76a87dcb-b252-427a-8f49-7a8ab838bb3f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:09:17 compute-0 nova_compute[251992]: 2025-12-06 07:09:17.078 251996 DEBUG nova.virt.libvirt.guest [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:09:17 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:09:17 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-5076e091-edac-4ad3-a1b0-4b489857bfaf">
Dec 06 07:09:17 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:09:17 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:09:17 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:09:17 compute-0 nova_compute[251992]:   </source>
Dec 06 07:09:17 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:09:17 compute-0 nova_compute[251992]:   <serial>5076e091-edac-4ad3-a1b0-4b489857bfaf</serial>
Dec 06 07:09:17 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:09:17 compute-0 nova_compute[251992]: </disk>
Dec 06 07:09:17 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:09:17 compute-0 nova_compute[251992]: 2025-12-06 07:09:17.086 251996 INFO nova.virt.libvirt.driver [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Successfully detached device vdb from instance 76a87dcb-b252-427a-8f49-7a8ab838bb3f from the persistent domain config.
Dec 06 07:09:17 compute-0 nova_compute[251992]: 2025-12-06 07:09:17.087 251996 DEBUG nova.virt.libvirt.driver [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 76a87dcb-b252-427a-8f49-7a8ab838bb3f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:09:17 compute-0 nova_compute[251992]: 2025-12-06 07:09:17.088 251996 DEBUG nova.virt.libvirt.guest [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:09:17 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:09:17 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-5076e091-edac-4ad3-a1b0-4b489857bfaf">
Dec 06 07:09:17 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:09:17 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:09:17 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:09:17 compute-0 nova_compute[251992]:   </source>
Dec 06 07:09:17 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:09:17 compute-0 nova_compute[251992]:   <serial>5076e091-edac-4ad3-a1b0-4b489857bfaf</serial>
Dec 06 07:09:17 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:09:17 compute-0 nova_compute[251992]: </disk>
Dec 06 07:09:17 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:09:17 compute-0 nova_compute[251992]: 2025-12-06 07:09:17.138 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Received event <DeviceRemovedEvent: 1765004957.137951, 76a87dcb-b252-427a-8f49-7a8ab838bb3f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:09:17 compute-0 nova_compute[251992]: 2025-12-06 07:09:17.139 251996 DEBUG nova.virt.libvirt.driver [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 76a87dcb-b252-427a-8f49-7a8ab838bb3f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:09:17 compute-0 nova_compute[251992]: 2025-12-06 07:09:17.141 251996 INFO nova.virt.libvirt.driver [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Successfully detached device vdb from instance 76a87dcb-b252-427a-8f49-7a8ab838bb3f from the live domain config.
Dec 06 07:09:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 329 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 10 MiB/s wr, 315 op/s
Dec 06 07:09:17 compute-0 nova_compute[251992]: 2025-12-06 07:09:17.396 251996 DEBUG nova.objects.instance [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lazy-loading 'flavor' on Instance uuid 76a87dcb-b252-427a-8f49-7a8ab838bb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:09:17 compute-0 nova_compute[251992]: 2025-12-06 07:09:17.467 251996 DEBUG oslo_concurrency.lockutils [None req-9fc8802a-0bfb-4702-b04a-c2013e12bc84 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Dec 06 07:09:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:17.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Dec 06 07:09:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Dec 06 07:09:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:17.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:18 compute-0 nova_compute[251992]: 2025-12-06 07:09:18.184 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:09:18
Dec 06 07:09:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:09:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:09:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'images', 'backups', '.rgw.root', 'vms', 'cephfs.cephfs.data']
Dec 06 07:09:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:09:18 compute-0 sudo[283155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:18 compute-0 sudo[283155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:18 compute-0 sudo[283155]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:18 compute-0 sudo[283180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:18 compute-0 sudo[283180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:18 compute-0 sudo[283180]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:18 compute-0 ceph-mon[74339]: pgmap v1485: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 329 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 10 MiB/s wr, 315 op/s
Dec 06 07:09:18 compute-0 ceph-mon[74339]: osdmap e201: 3 total, 3 up, 3 in
Dec 06 07:09:19 compute-0 ovn_controller[147168]: 2025-12-06T07:09:19Z|00107|binding|INFO|Releasing lport 0d779b99-8ce3-4716-88ae-2ec54698edd7 from this chassis (sb_readonly=0)
Dec 06 07:09:19 compute-0 nova_compute[251992]: 2025-12-06 07:09:19.105 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 329 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 8.3 MiB/s wr, 263 op/s
Dec 06 07:09:19 compute-0 sudo[283206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:19 compute-0 sudo[283206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:19 compute-0 sudo[283206]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:19 compute-0 sudo[283231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:09:19 compute-0 sudo[283231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:19 compute-0 sudo[283231]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:19 compute-0 sudo[283256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:19 compute-0 sudo[283256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:19 compute-0 sudo[283256]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:19.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:19 compute-0 sudo[283281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:09:19 compute-0 sudo[283281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:09:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:09:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:19 compute-0 nova_compute[251992]: 2025-12-06 07:09:19.928 251996 DEBUG nova.compute.manager [None req-7cc01c3c-7590-4ed8-b4a8-b9ca6721b846 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:09:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:19.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:19 compute-0 nova_compute[251992]: 2025-12-06 07:09:19.977 251996 INFO nova.compute.manager [None req-7cc01c3c-7590-4ed8-b4a8-b9ca6721b846 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] instance snapshotting
Dec 06 07:09:20 compute-0 sudo[283281]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:20 compute-0 nova_compute[251992]: 2025-12-06 07:09:20.462 251996 INFO nova.virt.libvirt.driver [None req-7cc01c3c-7590-4ed8-b4a8-b9ca6721b846 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Beginning live snapshot process
Dec 06 07:09:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:09:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:09:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:09:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:09:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:09:20 compute-0 nova_compute[251992]: 2025-12-06 07:09:20.630 251996 DEBUG nova.virt.libvirt.imagebackend [None req-7cc01c3c-7590-4ed8-b4a8-b9ca6721b846 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:09:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:20 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0cf5dcdb-849e-492a-bf43-7e6fc76ae0cb does not exist
Dec 06 07:09:20 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d0c80dd4-76e1-48f1-bd18-61b8f8e3fd44 does not exist
Dec 06 07:09:20 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev dcf5c88c-05aa-4a47-b7bc-4a3d3405c003 does not exist
Dec 06 07:09:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:09:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:09:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:09:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:09:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:09:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:09:20 compute-0 sudo[283371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:20 compute-0 sudo[283371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:20 compute-0 sudo[283371]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:20 compute-0 sudo[283396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:09:20 compute-0 sudo[283396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:20 compute-0 sudo[283396]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:20 compute-0 sudo[283421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:20 compute-0 sudo[283421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:20 compute-0 sudo[283421]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:20 compute-0 sudo[283446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:09:20 compute-0 sudo[283446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:21 compute-0 nova_compute[251992]: 2025-12-06 07:09:21.141 251996 DEBUG nova.storage.rbd_utils [None req-7cc01c3c-7590-4ed8-b4a8-b9ca6721b846 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] creating snapshot(71840979a7f0480daa33c55736fa42bf) on rbd image(c67cab07-55d9-4f41-aa3c-c367f840ba27_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:09:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 367 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 11 MiB/s wr, 371 op/s
Dec 06 07:09:21 compute-0 podman[283528]: 2025-12-06 07:09:21.212900869 +0000 UTC m=+0.037196930 container create cc47e85df8bf1a257b8ed77b845a483fcd981c5af2820c36e9ddfabfc0c6d46b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:09:21 compute-0 systemd[1]: Started libpod-conmon-cc47e85df8bf1a257b8ed77b845a483fcd981c5af2820c36e9ddfabfc0c6d46b.scope.
Dec 06 07:09:21 compute-0 podman[283528]: 2025-12-06 07:09:21.197053897 +0000 UTC m=+0.021349978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:09:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:09:21 compute-0 podman[283528]: 2025-12-06 07:09:21.326551449 +0000 UTC m=+0.150847530 container init cc47e85df8bf1a257b8ed77b845a483fcd981c5af2820c36e9ddfabfc0c6d46b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:09:21 compute-0 podman[283528]: 2025-12-06 07:09:21.334872676 +0000 UTC m=+0.159168737 container start cc47e85df8bf1a257b8ed77b845a483fcd981c5af2820c36e9ddfabfc0c6d46b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hermann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:09:21 compute-0 podman[283528]: 2025-12-06 07:09:21.338929031 +0000 UTC m=+0.163225112 container attach cc47e85df8bf1a257b8ed77b845a483fcd981c5af2820c36e9ddfabfc0c6d46b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hermann, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 07:09:21 compute-0 happy_hermann[283547]: 167 167
Dec 06 07:09:21 compute-0 systemd[1]: libpod-cc47e85df8bf1a257b8ed77b845a483fcd981c5af2820c36e9ddfabfc0c6d46b.scope: Deactivated successfully.
Dec 06 07:09:21 compute-0 podman[283528]: 2025-12-06 07:09:21.341805537 +0000 UTC m=+0.166101598 container died cc47e85df8bf1a257b8ed77b845a483fcd981c5af2820c36e9ddfabfc0c6d46b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hermann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-45c633699b6ab16cf9fe31c39a7662266829a53bb1632c3fd08660a3c59280bc-merged.mount: Deactivated successfully.
Dec 06 07:09:21 compute-0 podman[283528]: 2025-12-06 07:09:21.557602587 +0000 UTC m=+0.381898648 container remove cc47e85df8bf1a257b8ed77b845a483fcd981c5af2820c36e9ddfabfc0c6d46b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:09:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:21.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:21 compute-0 systemd[1]: libpod-conmon-cc47e85df8bf1a257b8ed77b845a483fcd981c5af2820c36e9ddfabfc0c6d46b.scope: Deactivated successfully.
Dec 06 07:09:21 compute-0 nova_compute[251992]: 2025-12-06 07:09:21.765 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:21 compute-0 podman[283570]: 2025-12-06 07:09:21.698168178 +0000 UTC m=+0.022269831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:09:21 compute-0 ceph-mon[74339]: pgmap v1487: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 329 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 8.3 MiB/s wr, 263 op/s
Dec 06 07:09:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:09:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:09:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:09:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:09:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:09:21 compute-0 podman[283570]: 2025-12-06 07:09:21.919995505 +0000 UTC m=+0.244097138 container create 6a7eabb262d61437f2facfc059a72f6119bcc33b90d9d0fa021542a6e0510247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_franklin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec 06 07:09:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:21.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:21 compute-0 systemd[1]: Started libpod-conmon-6a7eabb262d61437f2facfc059a72f6119bcc33b90d9d0fa021542a6e0510247.scope.
Dec 06 07:09:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e63a39147c2da38f8216fa7a0e61baff0ba6761764b4abfa13cd002f51d122/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e63a39147c2da38f8216fa7a0e61baff0ba6761764b4abfa13cd002f51d122/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e63a39147c2da38f8216fa7a0e61baff0ba6761764b4abfa13cd002f51d122/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e63a39147c2da38f8216fa7a0e61baff0ba6761764b4abfa13cd002f51d122/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e63a39147c2da38f8216fa7a0e61baff0ba6761764b4abfa13cd002f51d122/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:22 compute-0 podman[283570]: 2025-12-06 07:09:22.053911283 +0000 UTC m=+0.378012946 container init 6a7eabb262d61437f2facfc059a72f6119bcc33b90d9d0fa021542a6e0510247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_franklin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:09:22 compute-0 podman[283570]: 2025-12-06 07:09:22.062211579 +0000 UTC m=+0.386313212 container start 6a7eabb262d61437f2facfc059a72f6119bcc33b90d9d0fa021542a6e0510247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:09:22 compute-0 podman[283570]: 2025-12-06 07:09:22.065626608 +0000 UTC m=+0.389728241 container attach 6a7eabb262d61437f2facfc059a72f6119bcc33b90d9d0fa021542a6e0510247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_franklin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 07:09:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Dec 06 07:09:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Dec 06 07:09:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Dec 06 07:09:22 compute-0 nova_compute[251992]: 2025-12-06 07:09:22.736 251996 DEBUG nova.storage.rbd_utils [None req-7cc01c3c-7590-4ed8-b4a8-b9ca6721b846 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] cloning vms/c67cab07-55d9-4f41-aa3c-c367f840ba27_disk@71840979a7f0480daa33c55736fa42bf to images/0a1ac31b-510b-45cf-af8d-b5ab53f7ddcc clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:09:22 compute-0 goofy_franklin[283587]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:09:22 compute-0 goofy_franklin[283587]: --> relative data size: 1.0
Dec 06 07:09:22 compute-0 goofy_franklin[283587]: --> All data devices are unavailable
Dec 06 07:09:22 compute-0 systemd[1]: libpod-6a7eabb262d61437f2facfc059a72f6119bcc33b90d9d0fa021542a6e0510247.scope: Deactivated successfully.
Dec 06 07:09:22 compute-0 podman[283570]: 2025-12-06 07:09:22.918317555 +0000 UTC m=+1.242419218 container died 6a7eabb262d61437f2facfc059a72f6119bcc33b90d9d0fa021542a6e0510247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_franklin, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:09:23 compute-0 ceph-mon[74339]: pgmap v1488: 305 pgs: 305 active+clean; 367 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 11 MiB/s wr, 371 op/s
Dec 06 07:09:23 compute-0 ceph-mon[74339]: osdmap e202: 3 total, 3 up, 3 in
Dec 06 07:09:23 compute-0 nova_compute[251992]: 2025-12-06 07:09:23.026 251996 DEBUG nova.storage.rbd_utils [None req-7cc01c3c-7590-4ed8-b4a8-b9ca6721b846 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] flattening images/0a1ac31b-510b-45cf-af8d-b5ab53f7ddcc flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-84e63a39147c2da38f8216fa7a0e61baff0ba6761764b4abfa13cd002f51d122-merged.mount: Deactivated successfully.
Dec 06 07:09:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 372 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 8.5 MiB/s wr, 312 op/s
Dec 06 07:09:23 compute-0 nova_compute[251992]: 2025-12-06 07:09:23.186 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:23 compute-0 podman[283570]: 2025-12-06 07:09:23.194267672 +0000 UTC m=+1.518369305 container remove 6a7eabb262d61437f2facfc059a72f6119bcc33b90d9d0fa021542a6e0510247 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_franklin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:09:23 compute-0 systemd[1]: libpod-conmon-6a7eabb262d61437f2facfc059a72f6119bcc33b90d9d0fa021542a6e0510247.scope: Deactivated successfully.
Dec 06 07:09:23 compute-0 sudo[283446]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:23 compute-0 sudo[283671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:23 compute-0 sudo[283671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:23 compute-0 sudo[283671]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:09:23 compute-0 sudo[283696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:09:23 compute-0 sudo[283696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:23 compute-0 sudo[283696]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:23 compute-0 sudo[283721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:23 compute-0 sudo[283721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:23 compute-0 sudo[283721]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:23 compute-0 sudo[283746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:09:23 compute-0 sudo[283746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:23.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:09:23 compute-0 podman[283813]: 2025-12-06 07:09:23.767400587 +0000 UTC m=+0.025608447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:09:23 compute-0 podman[283813]: 2025-12-06 07:09:23.873652554 +0000 UTC m=+0.131860374 container create 7ff8b4f0e7e808ebd180d4d33f99800ae57b8b638092001ae22cc5f449daf70f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 07:09:23 compute-0 systemd[1]: Started libpod-conmon-7ff8b4f0e7e808ebd180d4d33f99800ae57b8b638092001ae22cc5f449daf70f.scope.
Dec 06 07:09:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:23.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:09:24 compute-0 podman[283813]: 2025-12-06 07:09:24.056818376 +0000 UTC m=+0.315026226 container init 7ff8b4f0e7e808ebd180d4d33f99800ae57b8b638092001ae22cc5f449daf70f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:09:24 compute-0 podman[283813]: 2025-12-06 07:09:24.064741752 +0000 UTC m=+0.322949582 container start 7ff8b4f0e7e808ebd180d4d33f99800ae57b8b638092001ae22cc5f449daf70f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hawking, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 07:09:24 compute-0 strange_hawking[283829]: 167 167
Dec 06 07:09:24 compute-0 systemd[1]: libpod-7ff8b4f0e7e808ebd180d4d33f99800ae57b8b638092001ae22cc5f449daf70f.scope: Deactivated successfully.
Dec 06 07:09:24 compute-0 podman[283813]: 2025-12-06 07:09:24.082938925 +0000 UTC m=+0.341146775 container attach 7ff8b4f0e7e808ebd180d4d33f99800ae57b8b638092001ae22cc5f449daf70f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hawking, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Dec 06 07:09:24 compute-0 podman[283813]: 2025-12-06 07:09:24.084587789 +0000 UTC m=+0.342795619 container died 7ff8b4f0e7e808ebd180d4d33f99800ae57b8b638092001ae22cc5f449daf70f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 07:09:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3664114671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:24 compute-0 ceph-mon[74339]: pgmap v1490: 305 pgs: 305 active+clean; 372 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 8.5 MiB/s wr, 312 op/s
Dec 06 07:09:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-047d1b13f414f8da15b2197489ab26684d1550e05f14eab7a8eea9240cb81fba-merged.mount: Deactivated successfully.
Dec 06 07:09:24 compute-0 podman[283813]: 2025-12-06 07:09:24.138207975 +0000 UTC m=+0.396415805 container remove 7ff8b4f0e7e808ebd180d4d33f99800ae57b8b638092001ae22cc5f449daf70f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 07:09:24 compute-0 systemd[1]: libpod-conmon-7ff8b4f0e7e808ebd180d4d33f99800ae57b8b638092001ae22cc5f449daf70f.scope: Deactivated successfully.
Dec 06 07:09:24 compute-0 nova_compute[251992]: 2025-12-06 07:09:24.197 251996 DEBUG nova.storage.rbd_utils [None req-7cc01c3c-7590-4ed8-b4a8-b9ca6721b846 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] removing snapshot(71840979a7f0480daa33c55736fa42bf) on rbd image(c67cab07-55d9-4f41-aa3c-c367f840ba27_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:09:24 compute-0 podman[283870]: 2025-12-06 07:09:24.316149809 +0000 UTC m=+0.036722897 container create 474b6134018bf9efa760ea6952b89e24f92a043dcce33b65bde350e017d194cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_torvalds, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 07:09:24 compute-0 podman[283870]: 2025-12-06 07:09:24.302034021 +0000 UTC m=+0.022607109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:09:24 compute-0 systemd[1]: Started libpod-conmon-474b6134018bf9efa760ea6952b89e24f92a043dcce33b65bde350e017d194cb.scope.
Dec 06 07:09:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:09:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40fb58f2d54ac121ccae384972f925d0caedbb28eb6eecbc26473be7d836d12a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40fb58f2d54ac121ccae384972f925d0caedbb28eb6eecbc26473be7d836d12a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40fb58f2d54ac121ccae384972f925d0caedbb28eb6eecbc26473be7d836d12a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40fb58f2d54ac121ccae384972f925d0caedbb28eb6eecbc26473be7d836d12a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:24 compute-0 podman[283870]: 2025-12-06 07:09:24.590915455 +0000 UTC m=+0.311488563 container init 474b6134018bf9efa760ea6952b89e24f92a043dcce33b65bde350e017d194cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_torvalds, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:09:24 compute-0 podman[283870]: 2025-12-06 07:09:24.597470416 +0000 UTC m=+0.318043504 container start 474b6134018bf9efa760ea6952b89e24f92a043dcce33b65bde350e017d194cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_torvalds, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 07:09:24 compute-0 podman[283870]: 2025-12-06 07:09:24.601060459 +0000 UTC m=+0.321633557 container attach 474b6134018bf9efa760ea6952b89e24f92a043dcce33b65bde350e017d194cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 07:09:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Dec 06 07:09:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Dec 06 07:09:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Dec 06 07:09:25 compute-0 nova_compute[251992]: 2025-12-06 07:09:25.138 251996 DEBUG nova.storage.rbd_utils [None req-7cc01c3c-7590-4ed8-b4a8-b9ca6721b846 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] creating snapshot(snap) on rbd image(0a1ac31b-510b-45cf-af8d-b5ab53f7ddcc) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 414 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.9 MiB/s wr, 233 op/s
Dec 06 07:09:25 compute-0 eager_torvalds[283887]: {
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:     "0": [
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:         {
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             "devices": [
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "/dev/loop3"
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             ],
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             "lv_name": "ceph_lv0",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             "lv_size": "7511998464",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             "name": "ceph_lv0",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             "tags": {
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "ceph.cluster_name": "ceph",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "ceph.crush_device_class": "",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "ceph.encrypted": "0",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "ceph.osd_id": "0",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "ceph.type": "block",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:                 "ceph.vdo": "0"
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             },
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             "type": "block",
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:             "vg_name": "ceph_vg0"
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:         }
Dec 06 07:09:25 compute-0 eager_torvalds[283887]:     ]
Dec 06 07:09:25 compute-0 eager_torvalds[283887]: }
Dec 06 07:09:25 compute-0 systemd[1]: libpod-474b6134018bf9efa760ea6952b89e24f92a043dcce33b65bde350e017d194cb.scope: Deactivated successfully.
Dec 06 07:09:25 compute-0 podman[283870]: 2025-12-06 07:09:25.454647691 +0000 UTC m=+1.175220789 container died 474b6134018bf9efa760ea6952b89e24f92a043dcce33b65bde350e017d194cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_torvalds, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:09:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-40fb58f2d54ac121ccae384972f925d0caedbb28eb6eecbc26473be7d836d12a-merged.mount: Deactivated successfully.
Dec 06 07:09:25 compute-0 podman[283870]: 2025-12-06 07:09:25.515264299 +0000 UTC m=+1.235837387 container remove 474b6134018bf9efa760ea6952b89e24f92a043dcce33b65bde350e017d194cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 07:09:25 compute-0 systemd[1]: libpod-conmon-474b6134018bf9efa760ea6952b89e24f92a043dcce33b65bde350e017d194cb.scope: Deactivated successfully.
Dec 06 07:09:25 compute-0 sudo[283746]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:25.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:25 compute-0 sudo[283928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:25 compute-0 sudo[283928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:25 compute-0 sudo[283928]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:25 compute-0 sudo[283953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:09:25 compute-0 sudo[283953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:25 compute-0 sudo[283953]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006829030860320119 of space, bias 1.0, pg target 2.0487092580960358 quantized to 32 (current 32)
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004733410978503629 of space, bias 1.0, pg target 1.4105564715940815 quantized to 32 (current 32)
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:09:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Dec 06 07:09:25 compute-0 sudo[283978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:25 compute-0 sudo[283978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:25 compute-0 sudo[283978]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:25 compute-0 sudo[284003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:09:25 compute-0 sudo[284003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:25.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Dec 06 07:09:26 compute-0 podman[284070]: 2025-12-06 07:09:26.146631172 +0000 UTC m=+0.039586652 container create a1263f8611f40a9a1ebcfe1c6cdff14d06c48f80d1ebde3323293d7ddec2c76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:09:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Dec 06 07:09:26 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Dec 06 07:09:26 compute-0 systemd[1]: Started libpod-conmon-a1263f8611f40a9a1ebcfe1c6cdff14d06c48f80d1ebde3323293d7ddec2c76d.scope.
Dec 06 07:09:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:09:26 compute-0 ceph-mon[74339]: osdmap e203: 3 total, 3 up, 3 in
Dec 06 07:09:26 compute-0 ceph-mon[74339]: pgmap v1492: 305 pgs: 305 active+clean; 414 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.9 MiB/s wr, 233 op/s
Dec 06 07:09:26 compute-0 podman[284070]: 2025-12-06 07:09:26.13200898 +0000 UTC m=+0.024964490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:09:26 compute-0 podman[284070]: 2025-12-06 07:09:26.233602667 +0000 UTC m=+0.126558197 container init a1263f8611f40a9a1ebcfe1c6cdff14d06c48f80d1ebde3323293d7ddec2c76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:09:26 compute-0 podman[284070]: 2025-12-06 07:09:26.240748523 +0000 UTC m=+0.133704003 container start a1263f8611f40a9a1ebcfe1c6cdff14d06c48f80d1ebde3323293d7ddec2c76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:09:26 compute-0 friendly_brown[284086]: 167 167
Dec 06 07:09:26 compute-0 systemd[1]: libpod-a1263f8611f40a9a1ebcfe1c6cdff14d06c48f80d1ebde3323293d7ddec2c76d.scope: Deactivated successfully.
Dec 06 07:09:26 compute-0 podman[284070]: 2025-12-06 07:09:26.247020987 +0000 UTC m=+0.139976487 container attach a1263f8611f40a9a1ebcfe1c6cdff14d06c48f80d1ebde3323293d7ddec2c76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 07:09:26 compute-0 podman[284070]: 2025-12-06 07:09:26.247826128 +0000 UTC m=+0.140781608 container died a1263f8611f40a9a1ebcfe1c6cdff14d06c48f80d1ebde3323293d7ddec2c76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 07:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fbf264c4cce626c034ab6c4bbc77c62151c3a39d3413aebfbe6701d0594c0e6-merged.mount: Deactivated successfully.
Dec 06 07:09:26 compute-0 podman[284070]: 2025-12-06 07:09:26.283182698 +0000 UTC m=+0.176138178 container remove a1263f8611f40a9a1ebcfe1c6cdff14d06c48f80d1ebde3323293d7ddec2c76d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:09:26 compute-0 systemd[1]: libpod-conmon-a1263f8611f40a9a1ebcfe1c6cdff14d06c48f80d1ebde3323293d7ddec2c76d.scope: Deactivated successfully.
Dec 06 07:09:26 compute-0 podman[284110]: 2025-12-06 07:09:26.456956624 +0000 UTC m=+0.042490838 container create 6e25dc6ead9ed58a3edfbdb5472ab222386228c83a8c75baad544776b8fdabcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:09:26 compute-0 systemd[1]: Started libpod-conmon-6e25dc6ead9ed58a3edfbdb5472ab222386228c83a8c75baad544776b8fdabcf.scope.
Dec 06 07:09:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02940d2e09d08b984bf89056a6b8eff3e26c6f907877f6f45cb02dd8864e31ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02940d2e09d08b984bf89056a6b8eff3e26c6f907877f6f45cb02dd8864e31ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02940d2e09d08b984bf89056a6b8eff3e26c6f907877f6f45cb02dd8864e31ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02940d2e09d08b984bf89056a6b8eff3e26c6f907877f6f45cb02dd8864e31ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:09:26 compute-0 podman[284110]: 2025-12-06 07:09:26.438670998 +0000 UTC m=+0.024205232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:09:26 compute-0 podman[284110]: 2025-12-06 07:09:26.545073769 +0000 UTC m=+0.130608013 container init 6e25dc6ead9ed58a3edfbdb5472ab222386228c83a8c75baad544776b8fdabcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:09:26 compute-0 podman[284110]: 2025-12-06 07:09:26.553183981 +0000 UTC m=+0.138718195 container start 6e25dc6ead9ed58a3edfbdb5472ab222386228c83a8c75baad544776b8fdabcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_fermi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 07:09:26 compute-0 podman[284110]: 2025-12-06 07:09:26.556272331 +0000 UTC m=+0.141806545 container attach 6e25dc6ead9ed58a3edfbdb5472ab222386228c83a8c75baad544776b8fdabcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_fermi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:09:26 compute-0 nova_compute[251992]: 2025-12-06 07:09:26.769 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 268 op/s
Dec 06 07:09:27 compute-0 ceph-mon[74339]: osdmap e204: 3 total, 3 up, 3 in
Dec 06 07:09:27 compute-0 jovial_fermi[284128]: {
Dec 06 07:09:27 compute-0 jovial_fermi[284128]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:09:27 compute-0 jovial_fermi[284128]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:09:27 compute-0 jovial_fermi[284128]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:09:27 compute-0 jovial_fermi[284128]:         "osd_id": 0,
Dec 06 07:09:27 compute-0 jovial_fermi[284128]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:09:27 compute-0 jovial_fermi[284128]:         "type": "bluestore"
Dec 06 07:09:27 compute-0 jovial_fermi[284128]:     }
Dec 06 07:09:27 compute-0 jovial_fermi[284128]: }
Dec 06 07:09:27 compute-0 systemd[1]: libpod-6e25dc6ead9ed58a3edfbdb5472ab222386228c83a8c75baad544776b8fdabcf.scope: Deactivated successfully.
Dec 06 07:09:27 compute-0 podman[284110]: 2025-12-06 07:09:27.392883659 +0000 UTC m=+0.978417873 container died 6e25dc6ead9ed58a3edfbdb5472ab222386228c83a8c75baad544776b8fdabcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_fermi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-02940d2e09d08b984bf89056a6b8eff3e26c6f907877f6f45cb02dd8864e31ed-merged.mount: Deactivated successfully.
Dec 06 07:09:27 compute-0 podman[284110]: 2025-12-06 07:09:27.444465982 +0000 UTC m=+1.030000196 container remove 6e25dc6ead9ed58a3edfbdb5472ab222386228c83a8c75baad544776b8fdabcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_fermi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:09:27 compute-0 systemd[1]: libpod-conmon-6e25dc6ead9ed58a3edfbdb5472ab222386228c83a8c75baad544776b8fdabcf.scope: Deactivated successfully.
Dec 06 07:09:27 compute-0 sudo[284003]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:09:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:09:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:09:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:27.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:09:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d737b0a8-1bc0-4bbf-9cf1-d9270f68e273 does not exist
Dec 06 07:09:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev fb84ed76-05b0-4657-8534-eaa8627b1e59 does not exist
Dec 06 07:09:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ae59193b-c084-4762-a95d-6d7b61b4c506 does not exist
Dec 06 07:09:27 compute-0 sudo[284161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:27 compute-0 sudo[284161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:27 compute-0 sudo[284161]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:27 compute-0 sudo[284186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:09:27 compute-0 sudo[284186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:27 compute-0 sudo[284186]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:27.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:28 compute-0 nova_compute[251992]: 2025-12-06 07:09:28.222 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:28 compute-0 nova_compute[251992]: 2025-12-06 07:09:28.431 251996 INFO nova.virt.libvirt.driver [None req-7cc01c3c-7590-4ed8-b4a8-b9ca6721b846 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Snapshot image upload complete
Dec 06 07:09:28 compute-0 nova_compute[251992]: 2025-12-06 07:09:28.432 251996 INFO nova.compute.manager [None req-7cc01c3c-7590-4ed8-b4a8-b9ca6721b846 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Took 8.45 seconds to snapshot the instance on the hypervisor.
Dec 06 07:09:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Dec 06 07:09:28 compute-0 ceph-mon[74339]: pgmap v1494: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 268 op/s
Dec 06 07:09:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:09:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1455735636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3910499721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:09:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Dec 06 07:09:28 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Dec 06 07:09:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 237 op/s
Dec 06 07:09:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000025s ======
Dec 06 07:09:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:29.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec 06 07:09:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:29.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:29 compute-0 nova_compute[251992]: 2025-12-06 07:09:29.987 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:09:29.987 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:09:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:09:29.988 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:09:30 compute-0 ceph-mon[74339]: osdmap e205: 3 total, 3 up, 3 in
Dec 06 07:09:30 compute-0 podman[284213]: 2025-12-06 07:09:30.444812192 +0000 UTC m=+0.089980884 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:09:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 7.0 MiB/s wr, 189 op/s
Dec 06 07:09:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:31.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:31 compute-0 ceph-mon[74339]: pgmap v1496: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 237 op/s
Dec 06 07:09:31 compute-0 nova_compute[251992]: 2025-12-06 07:09:31.774 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:31.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Dec 06 07:09:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Dec 06 07:09:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Dec 06 07:09:32 compute-0 ceph-mon[74339]: pgmap v1497: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 7.0 MiB/s wr, 189 op/s
Dec 06 07:09:32 compute-0 ceph-mon[74339]: osdmap e206: 3 total, 3 up, 3 in
Dec 06 07:09:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.4 MiB/s wr, 201 op/s
Dec 06 07:09:33 compute-0 nova_compute[251992]: 2025-12-06 07:09:33.224 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:33.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:33.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:34 compute-0 podman[284241]: 2025-12-06 07:09:34.419057355 +0000 UTC m=+0.071800271 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:09:34 compute-0 podman[284242]: 2025-12-06 07:09:34.419987429 +0000 UTC m=+0.071650557 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:09:35 compute-0 ceph-mon[74339]: pgmap v1499: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.4 MiB/s wr, 201 op/s
Dec 06 07:09:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 21 KiB/s wr, 137 op/s
Dec 06 07:09:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:35.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:09:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:35.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:09:36 compute-0 nova_compute[251992]: 2025-12-06 07:09:36.778 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:36 compute-0 ceph-mon[74339]: pgmap v1500: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 21 KiB/s wr, 137 op/s
Dec 06 07:09:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 21 KiB/s wr, 158 op/s
Dec 06 07:09:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Dec 06 07:09:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:37.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:37.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Dec 06 07:09:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Dec 06 07:09:38 compute-0 nova_compute[251992]: 2025-12-06 07:09:38.226 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:38 compute-0 ceph-mon[74339]: pgmap v1501: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 21 KiB/s wr, 158 op/s
Dec 06 07:09:38 compute-0 ceph-mon[74339]: osdmap e207: 3 total, 3 up, 3 in
Dec 06 07:09:38 compute-0 sudo[284281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:38 compute-0 sudo[284281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:38 compute-0 sudo[284281]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:38 compute-0 sudo[284306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:38 compute-0 sudo[284306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:38 compute-0 sudo[284306]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 KiB/s wr, 121 op/s
Dec 06 07:09:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:39.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:39.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:09:39.990 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:09:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 81 op/s
Dec 06 07:09:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:41.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:41 compute-0 nova_compute[251992]: 2025-12-06 07:09:41.781 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000056s ======
Dec 06 07:09:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:41.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Dec 06 07:09:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:09:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:09:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:09:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:09:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:09:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:09:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 11 KiB/s wr, 70 op/s
Dec 06 07:09:43 compute-0 nova_compute[251992]: 2025-12-06 07:09:43.228 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:43 compute-0 ceph-mon[74339]: pgmap v1503: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 KiB/s wr, 121 op/s
Dec 06 07:09:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:09:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:43.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:09:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:43.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:44 compute-0 nova_compute[251992]: 2025-12-06 07:09:44.047 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:44 compute-0 ceph-mon[74339]: pgmap v1504: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 81 op/s
Dec 06 07:09:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2877826368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:09:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2877826368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:09:44 compute-0 ceph-mon[74339]: pgmap v1505: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 11 KiB/s wr, 70 op/s
Dec 06 07:09:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 644 KiB/s rd, 11 KiB/s wr, 43 op/s
Dec 06 07:09:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:09:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:45.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:09:45 compute-0 nova_compute[251992]: 2025-12-06 07:09:45.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:45.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:46 compute-0 ceph-mon[74339]: pgmap v1506: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 644 KiB/s rd, 11 KiB/s wr, 43 op/s
Dec 06 07:09:46 compute-0 nova_compute[251992]: 2025-12-06 07:09:46.784 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 12 KiB/s wr, 32 op/s
Dec 06 07:09:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Dec 06 07:09:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:47.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:47 compute-0 nova_compute[251992]: 2025-12-06 07:09:47.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:47.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:48 compute-0 nova_compute[251992]: 2025-12-06 07:09:48.231 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:48 compute-0 nova_compute[251992]: 2025-12-06 07:09:48.465 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:49 compute-0 nova_compute[251992]: 2025-12-06 07:09:49.033 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 10 KiB/s wr, 28 op/s
Dec 06 07:09:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Dec 06 07:09:49 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Dec 06 07:09:49 compute-0 nova_compute[251992]: 2025-12-06 07:09:49.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:49 compute-0 nova_compute[251992]: 2025-12-06 07:09:49.690 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:49 compute-0 nova_compute[251992]: 2025-12-06 07:09:49.691 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:49 compute-0 nova_compute[251992]: 2025-12-06 07:09:49.691 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:49 compute-0 nova_compute[251992]: 2025-12-06 07:09:49.691 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:09:49 compute-0 nova_compute[251992]: 2025-12-06 07:09:49.692 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:49.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:09:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:49.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:09:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:09:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/114967645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.129 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:50 compute-0 ceph-mon[74339]: pgmap v1507: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 12 KiB/s wr, 32 op/s
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.271 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.272 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.276 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.276 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.432 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.433 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4240MB free_disk=20.83084487915039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.433 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.434 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.762 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 76a87dcb-b252-427a-8f49-7a8ab838bb3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.763 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c67cab07-55d9-4f41-aa3c-c367f840ba27 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.763 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.763 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:09:50 compute-0 nova_compute[251992]: 2025-12-06 07:09:50.936 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:09:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 438 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 23 KiB/s wr, 29 op/s
Dec 06 07:09:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/913651263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:51 compute-0 ceph-mon[74339]: pgmap v1508: 305 pgs: 305 active+clean; 497 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 10 KiB/s wr, 28 op/s
Dec 06 07:09:51 compute-0 ceph-mon[74339]: osdmap e208: 3 total, 3 up, 3 in
Dec 06 07:09:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1104098664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/114967645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:09:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2420057684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:51 compute-0 nova_compute[251992]: 2025-12-06 07:09:51.367 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:09:51 compute-0 nova_compute[251992]: 2025-12-06 07:09:51.373 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:09:51 compute-0 nova_compute[251992]: 2025-12-06 07:09:51.409 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:09:51 compute-0 nova_compute[251992]: 2025-12-06 07:09:51.455 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:09:51 compute-0 nova_compute[251992]: 2025-12-06 07:09:51.456 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:09:51 compute-0 nova_compute[251992]: 2025-12-06 07:09:51.787 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:51.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:51.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:52 compute-0 ceph-mon[74339]: pgmap v1510: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 438 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 23 KiB/s wr, 29 op/s
Dec 06 07:09:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2420057684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:52 compute-0 nova_compute[251992]: 2025-12-06 07:09:52.449 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:52 compute-0 nova_compute[251992]: 2025-12-06 07:09:52.450 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:52 compute-0 nova_compute[251992]: 2025-12-06 07:09:52.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Dec 06 07:09:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 414 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 25 KiB/s wr, 53 op/s
Dec 06 07:09:53 compute-0 nova_compute[251992]: 2025-12-06 07:09:53.232 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Dec 06 07:09:53 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Dec 06 07:09:53 compute-0 nova_compute[251992]: 2025-12-06 07:09:53.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/599168457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:53 compute-0 ceph-mon[74339]: osdmap e209: 3 total, 3 up, 3 in
Dec 06 07:09:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:53.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:53.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:54 compute-0 nova_compute[251992]: 2025-12-06 07:09:54.069 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:54 compute-0 nova_compute[251992]: 2025-12-06 07:09:54.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:54 compute-0 nova_compute[251992]: 2025-12-06 07:09:54.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:09:54 compute-0 nova_compute[251992]: 2025-12-06 07:09:54.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:09:54 compute-0 nova_compute[251992]: 2025-12-06 07:09:54.955 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:09:54 compute-0 nova_compute[251992]: 2025-12-06 07:09:54.956 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:09:54 compute-0 nova_compute[251992]: 2025-12-06 07:09:54.956 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:09:54 compute-0 nova_compute[251992]: 2025-12-06 07:09:54.956 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 76a87dcb-b252-427a-8f49-7a8ab838bb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:09:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 372 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 31 KiB/s wr, 54 op/s
Dec 06 07:09:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:55.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:55 compute-0 ceph-mon[74339]: pgmap v1511: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 414 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 25 KiB/s wr, 53 op/s
Dec 06 07:09:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2932571782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2339433088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:09:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:55.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Dec 06 07:09:56 compute-0 nova_compute[251992]: 2025-12-06 07:09:56.789 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Dec 06 07:09:57 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Dec 06 07:09:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 341 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 34 KiB/s wr, 114 op/s
Dec 06 07:09:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:09:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:57.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:09:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:09:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:57.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:09:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:09:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Dec 06 07:09:58 compute-0 nova_compute[251992]: 2025-12-06 07:09:58.235 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:09:58 compute-0 sudo[284386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:58 compute-0 sudo[284386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:58 compute-0 sudo[284386]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:58 compute-0 sudo[284411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:09:58 compute-0 sudo[284411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:09:58 compute-0 sudo[284411]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:58 compute-0 ceph-mon[74339]: pgmap v1513: 305 pgs: 305 active+clean; 372 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 31 KiB/s wr, 54 op/s
Dec 06 07:09:59 compute-0 nova_compute[251992]: 2025-12-06 07:09:59.013 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Updating instance_info_cache with network_info: [{"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:09:59 compute-0 nova_compute[251992]: 2025-12-06 07:09:59.039 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-76a87dcb-b252-427a-8f49-7a8ab838bb3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:09:59 compute-0 nova_compute[251992]: 2025-12-06 07:09:59.040 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:09:59 compute-0 nova_compute[251992]: 2025-12-06 07:09:59.040 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:09:59 compute-0 nova_compute[251992]: 2025-12-06 07:09:59.040 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:09:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Dec 06 07:09:59 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Dec 06 07:09:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 341 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 85 op/s
Dec 06 07:09:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:09:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:09:59.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:09:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:09:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:09:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:09:59.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:10:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 07:10:00 compute-0 ceph-mon[74339]: osdmap e210: 3 total, 3 up, 3 in
Dec 06 07:10:00 compute-0 ceph-mon[74339]: pgmap v1515: 305 pgs: 305 active+clean; 341 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 34 KiB/s wr, 114 op/s
Dec 06 07:10:00 compute-0 ceph-mon[74339]: osdmap e211: 3 total, 3 up, 3 in
Dec 06 07:10:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 295 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 3.8 KiB/s wr, 81 op/s
Dec 06 07:10:01 compute-0 ceph-mon[74339]: pgmap v1517: 305 pgs: 305 active+clean; 341 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 85 op/s
Dec 06 07:10:01 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 07:10:01 compute-0 podman[284437]: 2025-12-06 07:10:01.427860582 +0000 UTC m=+0.081151288 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:10:01 compute-0 nova_compute[251992]: 2025-12-06 07:10:01.620 251996 DEBUG oslo_concurrency.lockutils [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:01 compute-0 nova_compute[251992]: 2025-12-06 07:10:01.620 251996 DEBUG oslo_concurrency.lockutils [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:01 compute-0 nova_compute[251992]: 2025-12-06 07:10:01.621 251996 DEBUG oslo_concurrency.lockutils [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:01 compute-0 nova_compute[251992]: 2025-12-06 07:10:01.621 251996 DEBUG oslo_concurrency.lockutils [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:01 compute-0 nova_compute[251992]: 2025-12-06 07:10:01.621 251996 DEBUG oslo_concurrency.lockutils [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:01 compute-0 nova_compute[251992]: 2025-12-06 07:10:01.623 251996 INFO nova.compute.manager [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Terminating instance
Dec 06 07:10:01 compute-0 nova_compute[251992]: 2025-12-06 07:10:01.624 251996 DEBUG nova.compute.manager [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:10:01 compute-0 nova_compute[251992]: 2025-12-06 07:10:01.791 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:01.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:01.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:02 compute-0 ceph-mon[74339]: pgmap v1518: 305 pgs: 305 active+clean; 295 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 3.8 KiB/s wr, 81 op/s
Dec 06 07:10:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1088743501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:10:02 compute-0 kernel: tapc3d7b61d-05 (unregistering): left promiscuous mode
Dec 06 07:10:02 compute-0 NetworkManager[48965]: <info>  [1765005002.6129] device (tapc3d7b61d-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:10:02 compute-0 ovn_controller[147168]: 2025-12-06T07:10:02Z|00108|binding|INFO|Releasing lport c3d7b61d-0558-4443-ae89-f36f1815c38d from this chassis (sb_readonly=0)
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.620 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:02 compute-0 ovn_controller[147168]: 2025-12-06T07:10:02Z|00109|binding|INFO|Setting lport c3d7b61d-0558-4443-ae89-f36f1815c38d down in Southbound
Dec 06 07:10:02 compute-0 ovn_controller[147168]: 2025-12-06T07:10:02Z|00110|binding|INFO|Removing iface tapc3d7b61d-05 ovn-installed in OVS
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.623 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.632 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:57:77 10.100.0.7'], port_security=['fa:16:3e:3d:57:77 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '76a87dcb-b252-427a-8f49-7a8ab838bb3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4c41abd44bbf46f39df642d2a2cd19eb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4c2aa6d-eed5-4630-9aab-9c1afaa4bab0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=934cdf1d-ea09-45f0-b1f2-a1da72094e2e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=c3d7b61d-0558-4443-ae89-f36f1815c38d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.634 158118 INFO neutron.agent.ovn.metadata.agent [-] Port c3d7b61d-0558-4443-ae89-f36f1815c38d in datapath d62d33de-d7cc-4103-8a83-88ba86c97b8f unbound from our chassis
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.635 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d62d33de-d7cc-4103-8a83-88ba86c97b8f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.638 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3a20c4c0-46e6-48aa-a658-e29bff65737d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.639 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f namespace which is not needed anymore
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.639 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:02 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Dec 06 07:10:02 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000002d.scope: Consumed 17.193s CPU time.
Dec 06 07:10:02 compute-0 systemd-machined[212986]: Machine qemu-20-instance-0000002d terminated.
Dec 06 07:10:02 compute-0 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[282362]: [NOTICE]   (282366) : haproxy version is 2.8.14-c23fe91
Dec 06 07:10:02 compute-0 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[282362]: [NOTICE]   (282366) : path to executable is /usr/sbin/haproxy
Dec 06 07:10:02 compute-0 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[282362]: [WARNING]  (282366) : Exiting Master process...
Dec 06 07:10:02 compute-0 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[282362]: [WARNING]  (282366) : Exiting Master process...
Dec 06 07:10:02 compute-0 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[282362]: [ALERT]    (282366) : Current worker (282368) exited with code 143 (Terminated)
Dec 06 07:10:02 compute-0 neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f[282362]: [WARNING]  (282366) : All workers exited. Exiting... (0)
Dec 06 07:10:02 compute-0 systemd[1]: libpod-ca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630.scope: Deactivated successfully.
Dec 06 07:10:02 compute-0 podman[284491]: 2025-12-06 07:10:02.765971596 +0000 UTC m=+0.047907302 container died ca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630-userdata-shm.mount: Deactivated successfully.
Dec 06 07:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c91a0f428531e5024c91797bae9c8a1443db792f16475fab01e1839b5df23cc0-merged.mount: Deactivated successfully.
Dec 06 07:10:02 compute-0 podman[284491]: 2025-12-06 07:10:02.803425471 +0000 UTC m=+0.085361167 container cleanup ca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:10:02 compute-0 systemd[1]: libpod-conmon-ca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630.scope: Deactivated successfully.
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.866 251996 INFO nova.virt.libvirt.driver [-] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Instance destroyed successfully.
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.867 251996 DEBUG nova.objects.instance [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lazy-loading 'resources' on Instance uuid 76a87dcb-b252-427a-8f49-7a8ab838bb3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:10:02 compute-0 podman[284520]: 2025-12-06 07:10:02.869789981 +0000 UTC m=+0.046086290 container remove ca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.876 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7774387e-a827-4d7b-8c5a-bcb8bf436669]: (4, ('Sat Dec  6 07:10:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f (ca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630)\nca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630\nSat Dec  6 07:10:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f (ca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630)\nca790bdffeb6d6250c4e59cf6dd6f97f4457401d0764512b0fe17107ecb94630\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.878 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2b86a8ee-a06c-4d0b-a44d-3e613330ca12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.879 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd62d33de-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.881 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:02 compute-0 kernel: tapd62d33de-d0: left promiscuous mode
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.899 251996 DEBUG nova.virt.libvirt.vif [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:08:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-2093212749',display_name='tempest-VolumesAdminNegativeTest-server-2093212749',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-2093212749',id=45,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHXERM0ASuk1NXvDv/w8837PhM546n3eLOfvHBLzodpQkSdglUi6jeAl375cWLUlKwvyVZolXWZkry65OPgT6v4kiGuE+BNk0US7dkQtbDM5ULJiYxRwF9bDh8uP72MaMw==',key_name='tempest-keypair-95067880',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:08:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4c41abd44bbf46f39df642d2a2cd19eb',ramdisk_id='',reservation_id='r-7lxgqffq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-201901816',owner_user_name='tempest-VolumesAdminNegativeTest-201901816-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:08:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='33518fed43cc4fdfbdce993ccb4cc360',uuid=76a87dcb-b252-427a-8f49-7a8ab838bb3f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.900 251996 DEBUG nova.network.os_vif_util [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Converting VIF {"id": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "address": "fa:16:3e:3d:57:77", "network": {"id": "d62d33de-d7cc-4103-8a83-88ba86c97b8f", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-265855790-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4c41abd44bbf46f39df642d2a2cd19eb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3d7b61d-05", "ovs_interfaceid": "c3d7b61d-0558-4443-ae89-f36f1815c38d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.901 251996 DEBUG nova.network.os_vif_util [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:57:77,bridge_name='br-int',has_traffic_filtering=True,id=c3d7b61d-0558-4443-ae89-f36f1815c38d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3d7b61d-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.901 251996 DEBUG os_vif [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:57:77,bridge_name='br-int',has_traffic_filtering=True,id=c3d7b61d-0558-4443-ae89-f36f1815c38d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3d7b61d-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.902 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2ccb2f59-215f-4a62-94b3-57f73268243d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.904 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.905 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3d7b61d-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.905 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.906 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.906 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.912 251996 INFO os_vif [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:57:77,bridge_name='br-int',has_traffic_filtering=True,id=c3d7b61d-0558-4443-ae89-f36f1815c38d,network=Network(d62d33de-d7cc-4103-8a83-88ba86c97b8f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3d7b61d-05')
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.921 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d7df6b76-8cc2-4ac9-8364-d39edeb787f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.923 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[02b8961b-64a7-4dd1-9e6f-f417d5ace06a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.937 251996 DEBUG nova.compute.manager [req-783d00d3-91ab-4429-b7ea-68aad51a2659 req-b460a15f-9987-4c5b-acdd-615aeb600c9a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Received event network-vif-unplugged-c3d7b61d-0558-4443-ae89-f36f1815c38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.937 251996 DEBUG oslo_concurrency.lockutils [req-783d00d3-91ab-4429-b7ea-68aad51a2659 req-b460a15f-9987-4c5b-acdd-615aeb600c9a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.937 251996 DEBUG oslo_concurrency.lockutils [req-783d00d3-91ab-4429-b7ea-68aad51a2659 req-b460a15f-9987-4c5b-acdd-615aeb600c9a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.938 251996 DEBUG oslo_concurrency.lockutils [req-783d00d3-91ab-4429-b7ea-68aad51a2659 req-b460a15f-9987-4c5b-acdd-615aeb600c9a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.938 251996 DEBUG nova.compute.manager [req-783d00d3-91ab-4429-b7ea-68aad51a2659 req-b460a15f-9987-4c5b-acdd-615aeb600c9a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] No waiting events found dispatching network-vif-unplugged-c3d7b61d-0558-4443-ae89-f36f1815c38d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:10:02 compute-0 nova_compute[251992]: 2025-12-06 07:10:02.938 251996 DEBUG nova.compute.manager [req-783d00d3-91ab-4429-b7ea-68aad51a2659 req-b460a15f-9987-4c5b-acdd-615aeb600c9a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Received event network-vif-unplugged-c3d7b61d-0558-4443-ae89-f36f1815c38d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.938 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[65732c35-ee10-4f48-b65b-71ba480ce2d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518567, 'reachable_time': 29033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284562, 'error': None, 'target': 'ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.942 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d62d33de-d7cc-4103-8a83-88ba86c97b8f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:10:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:02.943 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[88e586fe-2baf-4260-9700-641e67343ed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:10:02 compute-0 systemd[1]: run-netns-ovnmeta\x2dd62d33de\x2dd7cc\x2d4103\x2d8a83\x2d88ba86c97b8f.mount: Deactivated successfully.
Dec 06 07:10:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Dec 06 07:10:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 279 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 4.7 KiB/s wr, 94 op/s
Dec 06 07:10:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Dec 06 07:10:03 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Dec 06 07:10:03 compute-0 nova_compute[251992]: 2025-12-06 07:10:03.237 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1088743501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:10:03 compute-0 ceph-mon[74339]: osdmap e212: 3 total, 3 up, 3 in
Dec 06 07:10:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:03.817 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:03.818 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:03.818 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:03.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:10:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:04.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:10:04 compute-0 ceph-mon[74339]: pgmap v1519: 305 pgs: 305 active+clean; 279 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 4.7 KiB/s wr, 94 op/s
Dec 06 07:10:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 236 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.1 KiB/s wr, 58 op/s
Dec 06 07:10:05 compute-0 nova_compute[251992]: 2025-12-06 07:10:05.212 251996 DEBUG nova.compute.manager [req-3875449b-42a8-4654-b4c3-b39c3c08c435 req-93cda02a-181f-42ff-8df6-1b418f025d8d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Received event network-vif-plugged-c3d7b61d-0558-4443-ae89-f36f1815c38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:10:05 compute-0 nova_compute[251992]: 2025-12-06 07:10:05.212 251996 DEBUG oslo_concurrency.lockutils [req-3875449b-42a8-4654-b4c3-b39c3c08c435 req-93cda02a-181f-42ff-8df6-1b418f025d8d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:05 compute-0 nova_compute[251992]: 2025-12-06 07:10:05.212 251996 DEBUG oslo_concurrency.lockutils [req-3875449b-42a8-4654-b4c3-b39c3c08c435 req-93cda02a-181f-42ff-8df6-1b418f025d8d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:05 compute-0 nova_compute[251992]: 2025-12-06 07:10:05.212 251996 DEBUG oslo_concurrency.lockutils [req-3875449b-42a8-4654-b4c3-b39c3c08c435 req-93cda02a-181f-42ff-8df6-1b418f025d8d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:05 compute-0 nova_compute[251992]: 2025-12-06 07:10:05.213 251996 DEBUG nova.compute.manager [req-3875449b-42a8-4654-b4c3-b39c3c08c435 req-93cda02a-181f-42ff-8df6-1b418f025d8d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] No waiting events found dispatching network-vif-plugged-c3d7b61d-0558-4443-ae89-f36f1815c38d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:10:05 compute-0 nova_compute[251992]: 2025-12-06 07:10:05.213 251996 WARNING nova.compute.manager [req-3875449b-42a8-4654-b4c3-b39c3c08c435 req-93cda02a-181f-42ff-8df6-1b418f025d8d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Received unexpected event network-vif-plugged-c3d7b61d-0558-4443-ae89-f36f1815c38d for instance with vm_state active and task_state deleting.
Dec 06 07:10:05 compute-0 podman[284568]: 2025-12-06 07:10:05.404301962 +0000 UTC m=+0.064294093 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec 06 07:10:05 compute-0 podman[284569]: 2025-12-06 07:10:05.414062567 +0000 UTC m=+0.073997406 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:10:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:05.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:06.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 144 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 3.2 KiB/s wr, 93 op/s
Dec 06 07:10:07 compute-0 ceph-mon[74339]: pgmap v1521: 305 pgs: 305 active+clean; 236 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.1 KiB/s wr, 58 op/s
Dec 06 07:10:07 compute-0 nova_compute[251992]: 2025-12-06 07:10:07.926 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:10:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:07.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:10:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:08.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Dec 06 07:10:08 compute-0 nova_compute[251992]: 2025-12-06 07:10:08.238 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:08 compute-0 nova_compute[251992]: 2025-12-06 07:10:08.469 251996 DEBUG oslo_concurrency.lockutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "c67cab07-55d9-4f41-aa3c-c367f840ba27" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:08 compute-0 nova_compute[251992]: 2025-12-06 07:10:08.469 251996 DEBUG oslo_concurrency.lockutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "c67cab07-55d9-4f41-aa3c-c367f840ba27" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:08 compute-0 nova_compute[251992]: 2025-12-06 07:10:08.470 251996 DEBUG oslo_concurrency.lockutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "c67cab07-55d9-4f41-aa3c-c367f840ba27-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:08 compute-0 nova_compute[251992]: 2025-12-06 07:10:08.470 251996 DEBUG oslo_concurrency.lockutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "c67cab07-55d9-4f41-aa3c-c367f840ba27-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:08 compute-0 nova_compute[251992]: 2025-12-06 07:10:08.470 251996 DEBUG oslo_concurrency.lockutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "c67cab07-55d9-4f41-aa3c-c367f840ba27-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:08 compute-0 nova_compute[251992]: 2025-12-06 07:10:08.471 251996 INFO nova.compute.manager [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Terminating instance
Dec 06 07:10:08 compute-0 nova_compute[251992]: 2025-12-06 07:10:08.472 251996 DEBUG oslo_concurrency.lockutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "refresh_cache-c67cab07-55d9-4f41-aa3c-c367f840ba27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:10:08 compute-0 nova_compute[251992]: 2025-12-06 07:10:08.472 251996 DEBUG oslo_concurrency.lockutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquired lock "refresh_cache-c67cab07-55d9-4f41-aa3c-c367f840ba27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:10:08 compute-0 nova_compute[251992]: 2025-12-06 07:10:08.473 251996 DEBUG nova.network.neutron [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:10:08 compute-0 nova_compute[251992]: 2025-12-06 07:10:08.694 251996 DEBUG nova.network.neutron [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:10:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Dec 06 07:10:08 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Dec 06 07:10:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:10:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3597249496' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:10:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:10:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3597249496' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:10:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 144 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.9 KiB/s wr, 77 op/s
Dec 06 07:10:09 compute-0 nova_compute[251992]: 2025-12-06 07:10:09.188 251996 DEBUG nova.network.neutron [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:10:09 compute-0 nova_compute[251992]: 2025-12-06 07:10:09.214 251996 DEBUG oslo_concurrency.lockutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Releasing lock "refresh_cache-c67cab07-55d9-4f41-aa3c-c367f840ba27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:10:09 compute-0 nova_compute[251992]: 2025-12-06 07:10:09.215 251996 DEBUG nova.compute.manager [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:10:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:09.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:10.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:10 compute-0 ceph-mon[74339]: pgmap v1522: 305 pgs: 305 active+clean; 144 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 3.2 KiB/s wr, 93 op/s
Dec 06 07:10:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3796797556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:10 compute-0 nova_compute[251992]: 2025-12-06 07:10:10.431 251996 INFO nova.virt.libvirt.driver [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Deleting instance files /var/lib/nova/instances/76a87dcb-b252-427a-8f49-7a8ab838bb3f_del
Dec 06 07:10:10 compute-0 nova_compute[251992]: 2025-12-06 07:10:10.433 251996 INFO nova.virt.libvirt.driver [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Deletion of /var/lib/nova/instances/76a87dcb-b252-427a-8f49-7a8ab838bb3f_del complete
Dec 06 07:10:10 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Dec 06 07:10:10 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d0000002f.scope: Consumed 15.933s CPU time.
Dec 06 07:10:10 compute-0 systemd-machined[212986]: Machine qemu-21-instance-0000002f terminated.
Dec 06 07:10:10 compute-0 nova_compute[251992]: 2025-12-06 07:10:10.506 251996 INFO nova.compute.manager [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Took 8.88 seconds to destroy the instance on the hypervisor.
Dec 06 07:10:10 compute-0 nova_compute[251992]: 2025-12-06 07:10:10.506 251996 DEBUG oslo.service.loopingcall [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:10:10 compute-0 nova_compute[251992]: 2025-12-06 07:10:10.506 251996 DEBUG nova.compute.manager [-] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:10:10 compute-0 nova_compute[251992]: 2025-12-06 07:10:10.507 251996 DEBUG nova.network.neutron [-] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:10:10 compute-0 nova_compute[251992]: 2025-12-06 07:10:10.643 251996 INFO nova.virt.libvirt.driver [-] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Instance destroyed successfully.
Dec 06 07:10:10 compute-0 nova_compute[251992]: 2025-12-06 07:10:10.643 251996 DEBUG nova.objects.instance [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lazy-loading 'resources' on Instance uuid c67cab07-55d9-4f41-aa3c-c367f840ba27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:10:11 compute-0 nova_compute[251992]: 2025-12-06 07:10:11.067 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 121 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 3.1 KiB/s wr, 81 op/s
Dec 06 07:10:11 compute-0 nova_compute[251992]: 2025-12-06 07:10:11.308 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:11 compute-0 ceph-mon[74339]: osdmap e213: 3 total, 3 up, 3 in
Dec 06 07:10:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3597249496' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:10:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3597249496' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:10:11 compute-0 ceph-mon[74339]: pgmap v1524: 305 pgs: 305 active+clean; 144 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 2.9 KiB/s wr, 77 op/s
Dec 06 07:10:11 compute-0 nova_compute[251992]: 2025-12-06 07:10:11.766 251996 DEBUG nova.network.neutron [-] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:10:11 compute-0 nova_compute[251992]: 2025-12-06 07:10:11.790 251996 INFO nova.compute.manager [-] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Took 1.28 seconds to deallocate network for instance.
Dec 06 07:10:11 compute-0 nova_compute[251992]: 2025-12-06 07:10:11.854 251996 DEBUG oslo_concurrency.lockutils [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:11 compute-0 nova_compute[251992]: 2025-12-06 07:10:11.855 251996 DEBUG oslo_concurrency.lockutils [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:12.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:12 compute-0 nova_compute[251992]: 2025-12-06 07:10:12.030 251996 DEBUG oslo_concurrency.processutils [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:10:12 compute-0 nova_compute[251992]: 2025-12-06 07:10:12.283 251996 DEBUG nova.compute.manager [req-36e18e9d-b887-4c39-bb3b-5cd56fb4511f req-1b0fabb9-d2f4-48b8-9ee9-041c26801009 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Received event network-vif-deleted-c3d7b61d-0558-4443-ae89-f36f1815c38d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:10:12 compute-0 nova_compute[251992]: 2025-12-06 07:10:12.352 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:12.352 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:10:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:12.354 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:10:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:10:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476187685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:12 compute-0 nova_compute[251992]: 2025-12-06 07:10:12.483 251996 DEBUG oslo_concurrency.processutils [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:10:12 compute-0 nova_compute[251992]: 2025-12-06 07:10:12.490 251996 DEBUG nova.compute.provider_tree [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:10:12 compute-0 nova_compute[251992]: 2025-12-06 07:10:12.511 251996 DEBUG nova.scheduler.client.report [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:10:12 compute-0 nova_compute[251992]: 2025-12-06 07:10:12.554 251996 DEBUG oslo_concurrency.lockutils [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:12 compute-0 nova_compute[251992]: 2025-12-06 07:10:12.587 251996 INFO nova.scheduler.client.report [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Deleted allocations for instance 76a87dcb-b252-427a-8f49-7a8ab838bb3f
Dec 06 07:10:12 compute-0 nova_compute[251992]: 2025-12-06 07:10:12.707 251996 DEBUG oslo_concurrency.lockutils [None req-93b43d5a-5cb2-4562-be52-9dc8757ce021 33518fed43cc4fdfbdce993ccb4cc360 4c41abd44bbf46f39df642d2a2cd19eb - - default default] Lock "76a87dcb-b252-427a-8f49-7a8ab838bb3f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:12 compute-0 ceph-mon[74339]: pgmap v1525: 305 pgs: 305 active+clean; 121 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 3.1 KiB/s wr, 81 op/s
Dec 06 07:10:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2476187685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:12 compute-0 nova_compute[251992]: 2025-12-06 07:10:12.927 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:10:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:10:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:10:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:10:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:10:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:10:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 121 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 2.9 KiB/s wr, 69 op/s
Dec 06 07:10:13 compute-0 nova_compute[251992]: 2025-12-06 07:10:13.241 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:13.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:10:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:14.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:10:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:14.356 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:10:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 91 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.3 KiB/s wr, 61 op/s
Dec 06 07:10:15 compute-0 ceph-mon[74339]: pgmap v1526: 305 pgs: 305 active+clean; 121 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 2.9 KiB/s wr, 69 op/s
Dec 06 07:10:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:15.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:15 compute-0 nova_compute[251992]: 2025-12-06 07:10:15.978 251996 INFO nova.virt.libvirt.driver [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Deleting instance files /var/lib/nova/instances/c67cab07-55d9-4f41-aa3c-c367f840ba27_del
Dec 06 07:10:15 compute-0 nova_compute[251992]: 2025-12-06 07:10:15.980 251996 INFO nova.virt.libvirt.driver [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Deletion of /var/lib/nova/instances/c67cab07-55d9-4f41-aa3c-c367f840ba27_del complete
Dec 06 07:10:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:16.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:16 compute-0 nova_compute[251992]: 2025-12-06 07:10:16.061 251996 INFO nova.compute.manager [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Took 6.85 seconds to destroy the instance on the hypervisor.
Dec 06 07:10:16 compute-0 nova_compute[251992]: 2025-12-06 07:10:16.062 251996 DEBUG oslo.service.loopingcall [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:10:16 compute-0 nova_compute[251992]: 2025-12-06 07:10:16.063 251996 DEBUG nova.compute.manager [-] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:10:16 compute-0 nova_compute[251992]: 2025-12-06 07:10:16.063 251996 DEBUG nova.network.neutron [-] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:10:16 compute-0 nova_compute[251992]: 2025-12-06 07:10:16.521 251996 DEBUG nova.network.neutron [-] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:10:16 compute-0 nova_compute[251992]: 2025-12-06 07:10:16.548 251996 DEBUG nova.network.neutron [-] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:10:16 compute-0 nova_compute[251992]: 2025-12-06 07:10:16.569 251996 INFO nova.compute.manager [-] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Took 0.51 seconds to deallocate network for instance.
Dec 06 07:10:16 compute-0 nova_compute[251992]: 2025-12-06 07:10:16.618 251996 DEBUG oslo_concurrency.lockutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:16 compute-0 nova_compute[251992]: 2025-12-06 07:10:16.618 251996 DEBUG oslo_concurrency.lockutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:16 compute-0 nova_compute[251992]: 2025-12-06 07:10:16.685 251996 DEBUG oslo_concurrency.processutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:10:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:10:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2285027204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:17 compute-0 ceph-mon[74339]: pgmap v1527: 305 pgs: 305 active+clean; 91 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.3 KiB/s wr, 61 op/s
Dec 06 07:10:17 compute-0 nova_compute[251992]: 2025-12-06 07:10:17.149 251996 DEBUG oslo_concurrency.processutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:10:17 compute-0 nova_compute[251992]: 2025-12-06 07:10:17.157 251996 DEBUG nova.compute.provider_tree [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:10:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.7 KiB/s wr, 53 op/s
Dec 06 07:10:17 compute-0 nova_compute[251992]: 2025-12-06 07:10:17.183 251996 DEBUG nova.scheduler.client.report [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:10:17 compute-0 nova_compute[251992]: 2025-12-06 07:10:17.213 251996 DEBUG oslo_concurrency.lockutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:17 compute-0 nova_compute[251992]: 2025-12-06 07:10:17.297 251996 INFO nova.scheduler.client.report [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Deleted allocations for instance c67cab07-55d9-4f41-aa3c-c367f840ba27
Dec 06 07:10:17 compute-0 nova_compute[251992]: 2025-12-06 07:10:17.410 251996 DEBUG oslo_concurrency.lockutils [None req-a44cc08b-f131-4164-9ab8-341b4cc8cd5d 3786fc2472ec43adb27b29bfa497a6a2 6f86ab5a5bf14cb6b789f065cc8ca04a - - default default] Lock "c67cab07-55d9-4f41-aa3c-c367f840ba27" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:17 compute-0 nova_compute[251992]: 2025-12-06 07:10:17.862 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005002.8609781, 76a87dcb-b252-427a-8f49-7a8ab838bb3f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:10:17 compute-0 nova_compute[251992]: 2025-12-06 07:10:17.863 251996 INFO nova.compute.manager [-] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] VM Stopped (Lifecycle Event)
Dec 06 07:10:17 compute-0 nova_compute[251992]: 2025-12-06 07:10:17.883 251996 DEBUG nova.compute.manager [None req-f55d86ed-1c9b-4485-bc46-50c849804fe9 - - - - - -] [instance: 76a87dcb-b252-427a-8f49-7a8ab838bb3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:10:17 compute-0 nova_compute[251992]: 2025-12-06 07:10:17.929 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:17.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:18.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:18 compute-0 nova_compute[251992]: 2025-12-06 07:10:18.242 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2285027204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:18 compute-0 ceph-mon[74339]: pgmap v1528: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.7 KiB/s wr, 53 op/s
Dec 06 07:10:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:10:18
Dec 06 07:10:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:10:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:10:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.rgw.root']
Dec 06 07:10:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:10:18 compute-0 sudo[284685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:18 compute-0 sudo[284685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:18 compute-0 sudo[284685]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:18 compute-0 sudo[284710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:18 compute-0 sudo[284710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:18 compute-0 sudo[284710]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.6 KiB/s wr, 51 op/s
Dec 06 07:10:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:19.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:20.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:20 compute-0 ceph-mon[74339]: pgmap v1529: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.6 KiB/s wr, 51 op/s
Dec 06 07:10:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.7 KiB/s wr, 50 op/s
Dec 06 07:10:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Dec 06 07:10:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Dec 06 07:10:21 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Dec 06 07:10:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:21.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:22.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:22 compute-0 ceph-mon[74339]: pgmap v1530: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.7 KiB/s wr, 50 op/s
Dec 06 07:10:22 compute-0 ceph-mon[74339]: osdmap e214: 3 total, 3 up, 3 in
Dec 06 07:10:22 compute-0 nova_compute[251992]: 2025-12-06 07:10:22.931 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 KiB/s wr, 38 op/s
Dec 06 07:10:23 compute-0 nova_compute[251992]: 2025-12-06 07:10:23.243 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:10:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:23.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:24.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Dec 06 07:10:24 compute-0 ceph-mon[74339]: pgmap v1532: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 KiB/s wr, 38 op/s
Dec 06 07:10:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Dec 06 07:10:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Dec 06 07:10:25 compute-0 nova_compute[251992]: 2025-12-06 07:10:25.641 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005010.6409574, c67cab07-55d9-4f41-aa3c-c367f840ba27 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:10:25 compute-0 nova_compute[251992]: 2025-12-06 07:10:25.642 251996 INFO nova.compute.manager [-] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] VM Stopped (Lifecycle Event)
Dec 06 07:10:25 compute-0 nova_compute[251992]: 2025-12-06 07:10:25.664 251996 DEBUG nova.compute.manager [None req-bd28b7e7-810a-401f-aeaa-9adab9538201 - - - - - -] [instance: c67cab07-55d9-4f41-aa3c-c367f840ba27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001904778522240834 of space, bias 1.0, pg target 0.5714335566722502 quantized to 32 (current 32)
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:10:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:10:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:25.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:26.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:26 compute-0 ceph-mon[74339]: osdmap e215: 3 total, 3 up, 3 in
Dec 06 07:10:26 compute-0 ceph-mon[74339]: pgmap v1534: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Dec 06 07:10:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Dec 06 07:10:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Dec 06 07:10:27 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Dec 06 07:10:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 3.8 KiB/s wr, 70 op/s
Dec 06 07:10:27 compute-0 nova_compute[251992]: 2025-12-06 07:10:27.933 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:27.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:28 compute-0 sudo[284739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:28 compute-0 sudo[284739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:28.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:28 compute-0 sudo[284739]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:28 compute-0 sudo[284764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:10:28 compute-0 sudo[284764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:28 compute-0 sudo[284764]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:28 compute-0 ceph-mon[74339]: osdmap e216: 3 total, 3 up, 3 in
Dec 06 07:10:28 compute-0 ceph-mon[74339]: pgmap v1536: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 3.8 KiB/s wr, 70 op/s
Dec 06 07:10:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:28 compute-0 sudo[284789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:28 compute-0 sudo[284789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:28 compute-0 sudo[284789]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:28 compute-0 sudo[284814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:10:28 compute-0 sudo[284814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:28 compute-0 nova_compute[251992]: 2025-12-06 07:10:28.246 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:10:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:10:28 compute-0 sudo[284814]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 07:10:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:10:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 07:10:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:10:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 3.1 KiB/s wr, 56 op/s
Dec 06 07:10:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:10:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:10:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:10:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:10:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:10:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 96f233d7-2ccc-4f19-895e-f3f794e6da39 does not exist
Dec 06 07:10:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev cf868d32-be63-48d1-bedd-d914675e8aa2 does not exist
Dec 06 07:10:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 768d3100-8d5d-4559-a36e-16fea3dc4f1b does not exist
Dec 06 07:10:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:10:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:10:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:10:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:10:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:10:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:10:29 compute-0 sudo[284874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:29 compute-0 sudo[284874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:29 compute-0 sudo[284874]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:29 compute-0 sudo[284899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:10:29 compute-0 sudo[284899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:29 compute-0 sudo[284899]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:10:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:10:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:10:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:10:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:10:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:10:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:10:29 compute-0 sudo[284924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:29 compute-0 sudo[284924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:29 compute-0 sudo[284924]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:29 compute-0 sudo[284949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:10:29 compute-0 sudo[284949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:29.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:29 compute-0 podman[285014]: 2025-12-06 07:10:29.988030217 +0000 UTC m=+0.037292871 container create 9e56e02630fe82a2db384142110599fd2bd43c64f19fddd2367ab08864bdc4c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:10:30 compute-0 systemd[1]: Started libpod-conmon-9e56e02630fe82a2db384142110599fd2bd43c64f19fddd2367ab08864bdc4c6.scope.
Dec 06 07:10:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:30.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:10:30 compute-0 podman[285014]: 2025-12-06 07:10:29.971245094 +0000 UTC m=+0.020507778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:10:30 compute-0 podman[285014]: 2025-12-06 07:10:30.07474677 +0000 UTC m=+0.124009424 container init 9e56e02630fe82a2db384142110599fd2bd43c64f19fddd2367ab08864bdc4c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:10:30 compute-0 podman[285014]: 2025-12-06 07:10:30.080970375 +0000 UTC m=+0.130233029 container start 9e56e02630fe82a2db384142110599fd2bd43c64f19fddd2367ab08864bdc4c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:10:30 compute-0 podman[285014]: 2025-12-06 07:10:30.083410214 +0000 UTC m=+0.132672888 container attach 9e56e02630fe82a2db384142110599fd2bd43c64f19fddd2367ab08864bdc4c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 07:10:30 compute-0 interesting_mirzakhani[285030]: 167 167
Dec 06 07:10:30 compute-0 systemd[1]: libpod-9e56e02630fe82a2db384142110599fd2bd43c64f19fddd2367ab08864bdc4c6.scope: Deactivated successfully.
Dec 06 07:10:30 compute-0 conmon[285030]: conmon 9e56e02630fe82a2db38 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e56e02630fe82a2db384142110599fd2bd43c64f19fddd2367ab08864bdc4c6.scope/container/memory.events
Dec 06 07:10:30 compute-0 podman[285014]: 2025-12-06 07:10:30.086835731 +0000 UTC m=+0.136098395 container died 9e56e02630fe82a2db384142110599fd2bd43c64f19fddd2367ab08864bdc4c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 07:10:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb089cea8e515c6dde82ce2a899b25e96c0a3e8ac8c8fb5db853052cb90f4251-merged.mount: Deactivated successfully.
Dec 06 07:10:30 compute-0 podman[285014]: 2025-12-06 07:10:30.123418781 +0000 UTC m=+0.172681435 container remove 9e56e02630fe82a2db384142110599fd2bd43c64f19fddd2367ab08864bdc4c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:10:30 compute-0 systemd[1]: libpod-conmon-9e56e02630fe82a2db384142110599fd2bd43c64f19fddd2367ab08864bdc4c6.scope: Deactivated successfully.
Dec 06 07:10:30 compute-0 podman[285054]: 2025-12-06 07:10:30.279251533 +0000 UTC m=+0.040600646 container create 817fe1824012045a32e0cf0efcefd4dd891edc11c8eb3dbd3326e74384302323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:10:30 compute-0 systemd[1]: Started libpod-conmon-817fe1824012045a32e0cf0efcefd4dd891edc11c8eb3dbd3326e74384302323.scope.
Dec 06 07:10:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af17a9b383fae468f5848f7e5dd7ca2d919d13dcd416e48ba8a72d062141bb7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af17a9b383fae468f5848f7e5dd7ca2d919d13dcd416e48ba8a72d062141bb7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af17a9b383fae468f5848f7e5dd7ca2d919d13dcd416e48ba8a72d062141bb7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af17a9b383fae468f5848f7e5dd7ca2d919d13dcd416e48ba8a72d062141bb7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af17a9b383fae468f5848f7e5dd7ca2d919d13dcd416e48ba8a72d062141bb7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:30 compute-0 podman[285054]: 2025-12-06 07:10:30.26285281 +0000 UTC m=+0.024201943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:10:30 compute-0 podman[285054]: 2025-12-06 07:10:30.360635726 +0000 UTC m=+0.121984869 container init 817fe1824012045a32e0cf0efcefd4dd891edc11c8eb3dbd3326e74384302323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:10:30 compute-0 podman[285054]: 2025-12-06 07:10:30.368341543 +0000 UTC m=+0.129690656 container start 817fe1824012045a32e0cf0efcefd4dd891edc11c8eb3dbd3326e74384302323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:10:30 compute-0 podman[285054]: 2025-12-06 07:10:30.371241735 +0000 UTC m=+0.132590848 container attach 817fe1824012045a32e0cf0efcefd4dd891edc11c8eb3dbd3326e74384302323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 07:10:30 compute-0 ceph-mon[74339]: pgmap v1537: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 3.1 KiB/s wr, 56 op/s
Dec 06 07:10:31 compute-0 kind_curie[285071]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:10:31 compute-0 kind_curie[285071]: --> relative data size: 1.0
Dec 06 07:10:31 compute-0 kind_curie[285071]: --> All data devices are unavailable
Dec 06 07:10:31 compute-0 systemd[1]: libpod-817fe1824012045a32e0cf0efcefd4dd891edc11c8eb3dbd3326e74384302323.scope: Deactivated successfully.
Dec 06 07:10:31 compute-0 podman[285054]: 2025-12-06 07:10:31.166535293 +0000 UTC m=+0.927884406 container died 817fe1824012045a32e0cf0efcefd4dd891edc11c8eb3dbd3326e74384302323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curie, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 07:10:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 4.2 KiB/s wr, 82 op/s
Dec 06 07:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-af17a9b383fae468f5848f7e5dd7ca2d919d13dcd416e48ba8a72d062141bb7c-merged.mount: Deactivated successfully.
Dec 06 07:10:31 compute-0 podman[285054]: 2025-12-06 07:10:31.224432264 +0000 UTC m=+0.985781377 container remove 817fe1824012045a32e0cf0efcefd4dd891edc11c8eb3dbd3326e74384302323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:10:31 compute-0 systemd[1]: libpod-conmon-817fe1824012045a32e0cf0efcefd4dd891edc11c8eb3dbd3326e74384302323.scope: Deactivated successfully.
Dec 06 07:10:31 compute-0 sudo[284949]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:31 compute-0 sudo[285102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:31 compute-0 sudo[285102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:31 compute-0 sudo[285102]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:31 compute-0 sudo[285127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:10:31 compute-0 sudo[285127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:31 compute-0 sudo[285127]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:31 compute-0 sudo[285152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:31 compute-0 sudo[285152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:31 compute-0 sudo[285152]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:31 compute-0 sudo[285177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:10:31 compute-0 sudo[285177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:31 compute-0 podman[285201]: 2025-12-06 07:10:31.575480155 +0000 UTC m=+0.083637637 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 06 07:10:31 compute-0 podman[285267]: 2025-12-06 07:10:31.773256577 +0000 UTC m=+0.036219271 container create dbfaa773977cf266142cdb94b382d04ffbecf5c8026f9cb632c60a0225b5b0cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:10:31 compute-0 systemd[1]: Started libpod-conmon-dbfaa773977cf266142cdb94b382d04ffbecf5c8026f9cb632c60a0225b5b0cd.scope.
Dec 06 07:10:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:10:31 compute-0 podman[285267]: 2025-12-06 07:10:31.757953037 +0000 UTC m=+0.020915761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:10:31 compute-0 podman[285267]: 2025-12-06 07:10:31.855275368 +0000 UTC m=+0.118238082 container init dbfaa773977cf266142cdb94b382d04ffbecf5c8026f9cb632c60a0225b5b0cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ganguly, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Dec 06 07:10:31 compute-0 podman[285267]: 2025-12-06 07:10:31.861595237 +0000 UTC m=+0.124557931 container start dbfaa773977cf266142cdb94b382d04ffbecf5c8026f9cb632c60a0225b5b0cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ganguly, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Dec 06 07:10:31 compute-0 podman[285267]: 2025-12-06 07:10:31.865177318 +0000 UTC m=+0.128140042 container attach dbfaa773977cf266142cdb94b382d04ffbecf5c8026f9cb632c60a0225b5b0cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ganguly, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 07:10:31 compute-0 sharp_ganguly[285283]: 167 167
Dec 06 07:10:31 compute-0 systemd[1]: libpod-dbfaa773977cf266142cdb94b382d04ffbecf5c8026f9cb632c60a0225b5b0cd.scope: Deactivated successfully.
Dec 06 07:10:31 compute-0 podman[285267]: 2025-12-06 07:10:31.866997939 +0000 UTC m=+0.129960633 container died dbfaa773977cf266142cdb94b382d04ffbecf5c8026f9cb632c60a0225b5b0cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 07:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-523cbcc75da3e0641d481141be0093780a16cc556b2a0439691ab3c35316cb40-merged.mount: Deactivated successfully.
Dec 06 07:10:31 compute-0 podman[285267]: 2025-12-06 07:10:31.897282472 +0000 UTC m=+0.160245166 container remove dbfaa773977cf266142cdb94b382d04ffbecf5c8026f9cb632c60a0225b5b0cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 07:10:31 compute-0 systemd[1]: libpod-conmon-dbfaa773977cf266142cdb94b382d04ffbecf5c8026f9cb632c60a0225b5b0cd.scope: Deactivated successfully.
Dec 06 07:10:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:31.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:32.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:32 compute-0 podman[285307]: 2025-12-06 07:10:32.053842173 +0000 UTC m=+0.042863358 container create ba69cc4e15b193579234935f2aa02da43d8b6058aa3a05f4878ca9763b519925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mayer, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 07:10:32 compute-0 systemd[1]: Started libpod-conmon-ba69cc4e15b193579234935f2aa02da43d8b6058aa3a05f4878ca9763b519925.scope.
Dec 06 07:10:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bff230ffa6b785d313d39fb6cdc84ba887c08810ec9c70cafcc7762799c6bc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bff230ffa6b785d313d39fb6cdc84ba887c08810ec9c70cafcc7762799c6bc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bff230ffa6b785d313d39fb6cdc84ba887c08810ec9c70cafcc7762799c6bc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bff230ffa6b785d313d39fb6cdc84ba887c08810ec9c70cafcc7762799c6bc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:32 compute-0 podman[285307]: 2025-12-06 07:10:32.11617123 +0000 UTC m=+0.105192425 container init ba69cc4e15b193579234935f2aa02da43d8b6058aa3a05f4878ca9763b519925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 07:10:32 compute-0 podman[285307]: 2025-12-06 07:10:32.122675613 +0000 UTC m=+0.111696788 container start ba69cc4e15b193579234935f2aa02da43d8b6058aa3a05f4878ca9763b519925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:10:32 compute-0 podman[285307]: 2025-12-06 07:10:32.125629986 +0000 UTC m=+0.114651161 container attach ba69cc4e15b193579234935f2aa02da43d8b6058aa3a05f4878ca9763b519925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:10:32 compute-0 podman[285307]: 2025-12-06 07:10:32.035707823 +0000 UTC m=+0.024729028 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:10:32 compute-0 ceph-mon[74339]: pgmap v1538: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 4.2 KiB/s wr, 82 op/s
Dec 06 07:10:32 compute-0 objective_mayer[285323]: {
Dec 06 07:10:32 compute-0 objective_mayer[285323]:     "0": [
Dec 06 07:10:32 compute-0 objective_mayer[285323]:         {
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             "devices": [
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "/dev/loop3"
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             ],
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             "lv_name": "ceph_lv0",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             "lv_size": "7511998464",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             "name": "ceph_lv0",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             "tags": {
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "ceph.cluster_name": "ceph",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "ceph.crush_device_class": "",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "ceph.encrypted": "0",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "ceph.osd_id": "0",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "ceph.type": "block",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:                 "ceph.vdo": "0"
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             },
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             "type": "block",
Dec 06 07:10:32 compute-0 objective_mayer[285323]:             "vg_name": "ceph_vg0"
Dec 06 07:10:32 compute-0 objective_mayer[285323]:         }
Dec 06 07:10:32 compute-0 objective_mayer[285323]:     ]
Dec 06 07:10:32 compute-0 objective_mayer[285323]: }
Dec 06 07:10:32 compute-0 systemd[1]: libpod-ba69cc4e15b193579234935f2aa02da43d8b6058aa3a05f4878ca9763b519925.scope: Deactivated successfully.
Dec 06 07:10:32 compute-0 conmon[285323]: conmon ba69cc4e15b193579234 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba69cc4e15b193579234935f2aa02da43d8b6058aa3a05f4878ca9763b519925.scope/container/memory.events
Dec 06 07:10:32 compute-0 podman[285307]: 2025-12-06 07:10:32.871375378 +0000 UTC m=+0.860396563 container died ba69cc4e15b193579234935f2aa02da43d8b6058aa3a05f4878ca9763b519925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 07:10:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bff230ffa6b785d313d39fb6cdc84ba887c08810ec9c70cafcc7762799c6bc6-merged.mount: Deactivated successfully.
Dec 06 07:10:32 compute-0 podman[285307]: 2025-12-06 07:10:32.927591132 +0000 UTC m=+0.916612307 container remove ba69cc4e15b193579234935f2aa02da43d8b6058aa3a05f4878ca9763b519925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 07:10:32 compute-0 systemd[1]: libpod-conmon-ba69cc4e15b193579234935f2aa02da43d8b6058aa3a05f4878ca9763b519925.scope: Deactivated successfully.
Dec 06 07:10:32 compute-0 nova_compute[251992]: 2025-12-06 07:10:32.934 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:32 compute-0 sudo[285177]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:33 compute-0 sudo[285347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:33 compute-0 sudo[285347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:33 compute-0 sudo[285347]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:33 compute-0 sudo[285372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:10:33 compute-0 sudo[285372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:33 compute-0 sudo[285372]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:33 compute-0 sudo[285397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:33 compute-0 sudo[285397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:33 compute-0 sudo[285397]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Dec 06 07:10:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Dec 06 07:10:33 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Dec 06 07:10:33 compute-0 sudo[285422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:10:33 compute-0 sudo[285422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 4.0 KiB/s wr, 77 op/s
Dec 06 07:10:33 compute-0 nova_compute[251992]: 2025-12-06 07:10:33.248 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:33 compute-0 podman[285487]: 2025-12-06 07:10:33.487838668 +0000 UTC m=+0.037723644 container create 53275e34b8c4581428b39cf347325a34d2a3abc69bd3f72f971fe625f490d367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_moser, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 07:10:33 compute-0 systemd[1]: Started libpod-conmon-53275e34b8c4581428b39cf347325a34d2a3abc69bd3f72f971fe625f490d367.scope.
Dec 06 07:10:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:10:33 compute-0 podman[285487]: 2025-12-06 07:10:33.559541968 +0000 UTC m=+0.109426944 container init 53275e34b8c4581428b39cf347325a34d2a3abc69bd3f72f971fe625f490d367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:10:33 compute-0 podman[285487]: 2025-12-06 07:10:33.566073232 +0000 UTC m=+0.115958218 container start 53275e34b8c4581428b39cf347325a34d2a3abc69bd3f72f971fe625f490d367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_moser, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:10:33 compute-0 podman[285487]: 2025-12-06 07:10:33.470970492 +0000 UTC m=+0.020855488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:10:33 compute-0 podman[285487]: 2025-12-06 07:10:33.570318612 +0000 UTC m=+0.120203618 container attach 53275e34b8c4581428b39cf347325a34d2a3abc69bd3f72f971fe625f490d367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_moser, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:10:33 compute-0 intelligent_moser[285503]: 167 167
Dec 06 07:10:33 compute-0 systemd[1]: libpod-53275e34b8c4581428b39cf347325a34d2a3abc69bd3f72f971fe625f490d367.scope: Deactivated successfully.
Dec 06 07:10:33 compute-0 podman[285487]: 2025-12-06 07:10:33.573517512 +0000 UTC m=+0.123402498 container died 53275e34b8c4581428b39cf347325a34d2a3abc69bd3f72f971fe625f490d367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 07:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0b309b5fff576b7dcd8f9ba66619e4fbd19da56809335ea0c48c10d36b37e71-merged.mount: Deactivated successfully.
Dec 06 07:10:33 compute-0 podman[285487]: 2025-12-06 07:10:33.613395796 +0000 UTC m=+0.163280772 container remove 53275e34b8c4581428b39cf347325a34d2a3abc69bd3f72f971fe625f490d367 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 07:10:33 compute-0 systemd[1]: libpod-conmon-53275e34b8c4581428b39cf347325a34d2a3abc69bd3f72f971fe625f490d367.scope: Deactivated successfully.
Dec 06 07:10:33 compute-0 podman[285527]: 2025-12-06 07:10:33.77181649 +0000 UTC m=+0.040337579 container create d943efb8b2cc446e74a6039db433844d955ea32ec557f761771a3f9f38a57d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:10:33 compute-0 systemd[1]: Started libpod-conmon-d943efb8b2cc446e74a6039db433844d955ea32ec557f761771a3f9f38a57d48.scope.
Dec 06 07:10:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7737bdbe0cd96f8411d60e35da6fa1ddff0c6462df6eba0c6149208ccedbaf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7737bdbe0cd96f8411d60e35da6fa1ddff0c6462df6eba0c6149208ccedbaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7737bdbe0cd96f8411d60e35da6fa1ddff0c6462df6eba0c6149208ccedbaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7737bdbe0cd96f8411d60e35da6fa1ddff0c6462df6eba0c6149208ccedbaf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:10:33 compute-0 podman[285527]: 2025-12-06 07:10:33.846642497 +0000 UTC m=+0.115163626 container init d943efb8b2cc446e74a6039db433844d955ea32ec557f761771a3f9f38a57d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jepsen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 07:10:33 compute-0 podman[285527]: 2025-12-06 07:10:33.751543248 +0000 UTC m=+0.020064367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:10:33 compute-0 podman[285527]: 2025-12-06 07:10:33.852906004 +0000 UTC m=+0.121427093 container start d943efb8b2cc446e74a6039db433844d955ea32ec557f761771a3f9f38a57d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 07:10:33 compute-0 podman[285527]: 2025-12-06 07:10:33.856720262 +0000 UTC m=+0.125241341 container attach d943efb8b2cc446e74a6039db433844d955ea32ec557f761771a3f9f38a57d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 07:10:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:33.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:10:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:34.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:10:34 compute-0 ceph-mon[74339]: osdmap e217: 3 total, 3 up, 3 in
Dec 06 07:10:34 compute-0 ceph-mon[74339]: pgmap v1540: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 4.0 KiB/s wr, 77 op/s
Dec 06 07:10:34 compute-0 clever_jepsen[285544]: {
Dec 06 07:10:34 compute-0 clever_jepsen[285544]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:10:34 compute-0 clever_jepsen[285544]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:10:34 compute-0 clever_jepsen[285544]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:10:34 compute-0 clever_jepsen[285544]:         "osd_id": 0,
Dec 06 07:10:34 compute-0 clever_jepsen[285544]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:10:34 compute-0 clever_jepsen[285544]:         "type": "bluestore"
Dec 06 07:10:34 compute-0 clever_jepsen[285544]:     }
Dec 06 07:10:34 compute-0 clever_jepsen[285544]: }
Dec 06 07:10:34 compute-0 systemd[1]: libpod-d943efb8b2cc446e74a6039db433844d955ea32ec557f761771a3f9f38a57d48.scope: Deactivated successfully.
Dec 06 07:10:34 compute-0 podman[285527]: 2025-12-06 07:10:34.676620323 +0000 UTC m=+0.945141422 container died d943efb8b2cc446e74a6039db433844d955ea32ec557f761771a3f9f38a57d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:10:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f7737bdbe0cd96f8411d60e35da6fa1ddff0c6462df6eba0c6149208ccedbaf-merged.mount: Deactivated successfully.
Dec 06 07:10:34 compute-0 podman[285527]: 2025-12-06 07:10:34.737492648 +0000 UTC m=+1.006013737 container remove d943efb8b2cc446e74a6039db433844d955ea32ec557f761771a3f9f38a57d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 07:10:34 compute-0 systemd[1]: libpod-conmon-d943efb8b2cc446e74a6039db433844d955ea32ec557f761771a3f9f38a57d48.scope: Deactivated successfully.
Dec 06 07:10:34 compute-0 sudo[285422]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:10:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:10:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev aea95f43-6623-4861-8e47-464773ebf2b4 does not exist
Dec 06 07:10:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b873cc9b-4fb3-4e48-a74e-f6e7efacc3c9 does not exist
Dec 06 07:10:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e7591d66-d05e-4c68-b148-00d3bb11d14f does not exist
Dec 06 07:10:34 compute-0 sudo[285578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:34 compute-0 sudo[285578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:34 compute-0 sudo[285578]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:34 compute-0 sudo[285603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:10:34 compute-0 sudo[285603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:34 compute-0 sudo[285603]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 06 07:10:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:10:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:35.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:36.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:36 compute-0 podman[285628]: 2025-12-06 07:10:36.421503027 +0000 UTC m=+0.078924565 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:10:36 compute-0 podman[285629]: 2025-12-06 07:10:36.43191344 +0000 UTC m=+0.090066298 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 06 07:10:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 06 07:10:37 compute-0 ceph-mon[74339]: pgmap v1541: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Dec 06 07:10:37 compute-0 nova_compute[251992]: 2025-12-06 07:10:37.937 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:37.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:38.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Dec 06 07:10:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Dec 06 07:10:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Dec 06 07:10:38 compute-0 nova_compute[251992]: 2025-12-06 07:10:38.251 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:38 compute-0 sudo[285671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:38 compute-0 sudo[285671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:38 compute-0 sudo[285671]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:39 compute-0 sudo[285696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:39 compute-0 sudo[285696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:39 compute-0 sudo[285696]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 383 B/s wr, 1 op/s
Dec 06 07:10:39 compute-0 ceph-mon[74339]: pgmap v1542: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec 06 07:10:39 compute-0 ceph-mon[74339]: osdmap e218: 3 total, 3 up, 3 in
Dec 06 07:10:39 compute-0 sshd-session[285668]: Invalid user admin from 45.135.232.92 port 43610
Dec 06 07:10:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:39.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:40 compute-0 sshd-session[285668]: Connection reset by invalid user admin 45.135.232.92 port 43610 [preauth]
Dec 06 07:10:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:40.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:40 compute-0 ceph-mon[74339]: pgmap v1544: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 383 B/s wr, 1 op/s
Dec 06 07:10:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:41.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:42.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:42 compute-0 ceph-mon[74339]: pgmap v1545: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:42 compute-0 nova_compute[251992]: 2025-12-06 07:10:42.938 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:10:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:10:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:10:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:10:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:10:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:10:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:43 compute-0 sshd-session[285721]: Connection reset by authenticating user root 45.135.232.92 port 43616 [preauth]
Dec 06 07:10:43 compute-0 nova_compute[251992]: 2025-12-06 07:10:43.251 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:43.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:44.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:45 compute-0 ceph-mon[74339]: pgmap v1546: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:45 compute-0 sshd-session[285725]: Connection reset by authenticating user root 45.135.232.92 port 43622 [preauth]
Dec 06 07:10:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:10:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:45.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:10:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:46.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:46 compute-0 nova_compute[251992]: 2025-12-06 07:10:46.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:46 compute-0 nova_compute[251992]: 2025-12-06 07:10:46.683 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:46 compute-0 ceph-mon[74339]: pgmap v1547: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:47 compute-0 sshd-session[285728]: Invalid user support from 45.135.232.92 port 53078
Dec 06 07:10:47 compute-0 nova_compute[251992]: 2025-12-06 07:10:47.941 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:47.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:10:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:48.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:10:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:48 compute-0 nova_compute[251992]: 2025-12-06 07:10:48.254 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:48 compute-0 sshd-session[285728]: Connection reset by invalid user support 45.135.232.92 port 53078 [preauth]
Dec 06 07:10:48 compute-0 ceph-mon[74339]: pgmap v1548: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:49 compute-0 nova_compute[251992]: 2025-12-06 07:10:49.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:49.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:50.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:50 compute-0 sshd-session[285732]: Invalid user admin from 45.135.232.92 port 53094
Dec 06 07:10:50 compute-0 nova_compute[251992]: 2025-12-06 07:10:50.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:51 compute-0 sshd-session[285732]: Connection reset by invalid user admin 45.135.232.92 port 53094 [preauth]
Dec 06 07:10:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Dec 06 07:10:51 compute-0 nova_compute[251992]: 2025-12-06 07:10:51.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:51 compute-0 nova_compute[251992]: 2025-12-06 07:10:51.699 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:51 compute-0 nova_compute[251992]: 2025-12-06 07:10:51.700 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:51 compute-0 nova_compute[251992]: 2025-12-06 07:10:51.700 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:51 compute-0 nova_compute[251992]: 2025-12-06 07:10:51.700 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:10:51 compute-0 nova_compute[251992]: 2025-12-06 07:10:51.701 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:10:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:51.734 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:10:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:51.736 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:10:51 compute-0 nova_compute[251992]: 2025-12-06 07:10:51.735 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:51.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:52.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:10:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1779705932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:52 compute-0 nova_compute[251992]: 2025-12-06 07:10:52.192 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:10:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1779807841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/964958267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:52 compute-0 nova_compute[251992]: 2025-12-06 07:10:52.356 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:10:52 compute-0 nova_compute[251992]: 2025-12-06 07:10:52.358 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4616MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:10:52 compute-0 nova_compute[251992]: 2025-12-06 07:10:52.358 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:10:52 compute-0 nova_compute[251992]: 2025-12-06 07:10:52.358 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:10:52 compute-0 nova_compute[251992]: 2025-12-06 07:10:52.499 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:10:52 compute-0 nova_compute[251992]: 2025-12-06 07:10:52.499 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:10:52 compute-0 nova_compute[251992]: 2025-12-06 07:10:52.528 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:10:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:10:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596050169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:52 compute-0 nova_compute[251992]: 2025-12-06 07:10:52.974 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:52 compute-0 nova_compute[251992]: 2025-12-06 07:10:52.995 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:10:53 compute-0 nova_compute[251992]: 2025-12-06 07:10:53.000 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:10:53 compute-0 nova_compute[251992]: 2025-12-06 07:10:53.012 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:10:53 compute-0 nova_compute[251992]: 2025-12-06 07:10:53.033 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:10:53 compute-0 nova_compute[251992]: 2025-12-06 07:10:53.034 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:10:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Dec 06 07:10:53 compute-0 nova_compute[251992]: 2025-12-06 07:10:53.256 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:53 compute-0 ceph-mon[74339]: pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail
Dec 06 07:10:53 compute-0 ceph-mon[74339]: pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Dec 06 07:10:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1779705932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1803060845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2596050169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:10:53.738 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:10:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:53.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:54 compute-0 nova_compute[251992]: 2025-12-06 07:10:54.027 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:54 compute-0 nova_compute[251992]: 2025-12-06 07:10:54.028 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:54.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:54 compute-0 nova_compute[251992]: 2025-12-06 07:10:54.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 56 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 841 KiB/s wr, 13 op/s
Dec 06 07:10:55 compute-0 ceph-mon[74339]: pgmap v1551: 305 pgs: 305 active+clean; 41 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Dec 06 07:10:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4067379754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:55 compute-0 nova_compute[251992]: 2025-12-06 07:10:55.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:55 compute-0 nova_compute[251992]: 2025-12-06 07:10:55.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:10:55 compute-0 nova_compute[251992]: 2025-12-06 07:10:55.686 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:10:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:55.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:56.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:56 compute-0 nova_compute[251992]: 2025-12-06 07:10:56.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:10:56 compute-0 nova_compute[251992]: 2025-12-06 07:10:56.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:10:56 compute-0 ceph-mon[74339]: pgmap v1552: 305 pgs: 305 active+clean; 56 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 841 KiB/s wr, 13 op/s
Dec 06 07:10:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2114790814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:10:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:10:57 compute-0 ovn_controller[147168]: 2025-12-06T07:10:57Z|00111|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 07:10:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:57.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:58 compute-0 nova_compute[251992]: 2025-12-06 07:10:58.015 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:10:58.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:10:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1615937358' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:10:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1468335041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:10:58 compute-0 nova_compute[251992]: 2025-12-06 07:10:58.258 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:10:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:10:59 compute-0 sudo[285784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:59 compute-0 sudo[285784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:59 compute-0 sudo[285784]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:59 compute-0 sudo[285809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:10:59 compute-0 sudo[285809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:10:59 compute-0 sudo[285809]: pam_unix(sudo:session): session closed for user root
Dec 06 07:10:59 compute-0 ceph-mon[74339]: pgmap v1553: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:10:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:10:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:10:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:10:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:10:59.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:00.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:00 compute-0 ceph-mon[74339]: pgmap v1554: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:11:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 127 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 07:11:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:01.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:11:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:11:02 compute-0 podman[285835]: 2025-12-06 07:11:02.442756833 +0000 UTC m=+0.100872643 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:11:02 compute-0 ceph-mon[74339]: pgmap v1555: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 127 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 07:11:03 compute-0 nova_compute[251992]: 2025-12-06 07:11:03.017 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 452 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Dec 06 07:11:03 compute-0 nova_compute[251992]: 2025-12-06 07:11:03.260 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:03.819 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:03.819 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:03.819 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:03.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:04.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:05 compute-0 ceph-mon[74339]: pgmap v1556: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 452 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Dec 06 07:11:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Dec 06 07:11:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:05.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:06.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:06 compute-0 ceph-mon[74339]: pgmap v1557: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Dec 06 07:11:06 compute-0 nova_compute[251992]: 2025-12-06 07:11:06.997 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:06 compute-0 nova_compute[251992]: 2025-12-06 07:11:06.998 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.012 251996 DEBUG nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.092 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.092 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.100 251996 DEBUG nova.virt.hardware [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.100 251996 INFO nova.compute.claims [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:11:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 988 KiB/s wr, 88 op/s
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.263 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:07 compute-0 podman[285867]: 2025-12-06 07:11:07.391050287 +0000 UTC m=+0.051630966 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 06 07:11:07 compute-0 podman[285866]: 2025-12-06 07:11:07.391619273 +0000 UTC m=+0.049084104 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 07:11:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:11:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3367998681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.835 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.842 251996 DEBUG nova.compute.provider_tree [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.872 251996 DEBUG nova.scheduler.client.report [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.906 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.906 251996 DEBUG nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:11:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3367998681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.959 251996 DEBUG nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.959 251996 DEBUG nova.network.neutron [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.975 251996 INFO nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:11:07 compute-0 nova_compute[251992]: 2025-12-06 07:11:07.991 251996 DEBUG nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:11:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:07.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.019 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:08.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.157 251996 DEBUG nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.158 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.159 251996 INFO nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Creating image(s)
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.186 251996 DEBUG nova.storage.rbd_utils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.210 251996 DEBUG nova.storage.rbd_utils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.237 251996 DEBUG nova.storage.rbd_utils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.240 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.262 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.321 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.322 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.322 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.323 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.353 251996 DEBUG nova.storage.rbd_utils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.357 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.421 251996 DEBUG nova.policy [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '337447c5cffc48bf8256c9166a6ff0e2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da24f0e2d59745828feaaecfeb9fed45', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:11:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.798 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:08 compute-0 nova_compute[251992]: 2025-12-06 07:11:08.874 251996 DEBUG nova.storage.rbd_utils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] resizing rbd image db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:11:08 compute-0 ceph-mon[74339]: pgmap v1558: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 988 KiB/s wr, 88 op/s
Dec 06 07:11:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3960716737' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:11:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3960716737' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:11:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4167333267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:09 compute-0 nova_compute[251992]: 2025-12-06 07:11:09.142 251996 DEBUG nova.network.neutron [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Successfully created port: 516c0976-46a2-4bcc-b9b8-278383257c28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:11:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Dec 06 07:11:09 compute-0 nova_compute[251992]: 2025-12-06 07:11:09.791 251996 DEBUG nova.objects.instance [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lazy-loading 'migration_context' on Instance uuid db2f61ae-900d-44ed-a7a6-e02df74fcd02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:09 compute-0 nova_compute[251992]: 2025-12-06 07:11:09.805 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:11:09 compute-0 nova_compute[251992]: 2025-12-06 07:11:09.806 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Ensure instance console log exists: /var/lib/nova/instances/db2f61ae-900d-44ed-a7a6-e02df74fcd02/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:11:09 compute-0 nova_compute[251992]: 2025-12-06 07:11:09.806 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:09 compute-0 nova_compute[251992]: 2025-12-06 07:11:09.807 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:09 compute-0 nova_compute[251992]: 2025-12-06 07:11:09.807 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:10.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:11:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:10.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:11:10 compute-0 nova_compute[251992]: 2025-12-06 07:11:10.378 251996 DEBUG nova.network.neutron [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Successfully created port: 2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:11:10 compute-0 ceph-mon[74339]: pgmap v1559: 305 pgs: 305 active+clean; 88 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Dec 06 07:11:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 137 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 135 op/s
Dec 06 07:11:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:11:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:12.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:11:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:12.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:12 compute-0 nova_compute[251992]: 2025-12-06 07:11:12.157 251996 DEBUG nova.network.neutron [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Successfully created port: 27d4e523-f411-4c37-9c85-1408d599e7fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:11:12 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 06 07:11:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:11:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:11:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:11:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:11:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:11:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:11:12 compute-0 ceph-mon[74339]: pgmap v1560: 305 pgs: 305 active+clean; 137 MiB data, 541 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 135 op/s
Dec 06 07:11:13 compute-0 nova_compute[251992]: 2025-12-06 07:11:13.020 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 159 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 147 op/s
Dec 06 07:11:13 compute-0 nova_compute[251992]: 2025-12-06 07:11:13.264 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:13 compute-0 nova_compute[251992]: 2025-12-06 07:11:13.781 251996 DEBUG nova.network.neutron [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Successfully updated port: 516c0976-46a2-4bcc-b9b8-278383257c28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:11:13 compute-0 ceph-mgr[74630]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 07:11:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:14.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:14.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:14 compute-0 ceph-mon[74339]: pgmap v1561: 305 pgs: 305 active+clean; 159 MiB data, 552 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 147 op/s
Dec 06 07:11:14 compute-0 nova_compute[251992]: 2025-12-06 07:11:14.915 251996 DEBUG nova.network.neutron [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Successfully updated port: 2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.138 251996 DEBUG nova.compute.manager [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-changed-516c0976-46a2-4bcc-b9b8-278383257c28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.139 251996 DEBUG nova.compute.manager [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Refreshing instance network info cache due to event network-changed-516c0976-46a2-4bcc-b9b8-278383257c28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.139 251996 DEBUG oslo_concurrency.lockutils [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-db2f61ae-900d-44ed-a7a6-e02df74fcd02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.139 251996 DEBUG oslo_concurrency.lockutils [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-db2f61ae-900d-44ed-a7a6-e02df74fcd02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.139 251996 DEBUG nova.network.neutron [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Refreshing network info cache for port 516c0976-46a2-4bcc-b9b8-278383257c28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:11:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 192 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.1 MiB/s wr, 196 op/s
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.353 251996 DEBUG nova.network.neutron [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.824 251996 DEBUG nova.network.neutron [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.848 251996 DEBUG oslo_concurrency.lockutils [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-db2f61ae-900d-44ed-a7a6-e02df74fcd02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.848 251996 DEBUG nova.compute.manager [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-changed-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.849 251996 DEBUG nova.compute.manager [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Refreshing instance network info cache due to event network-changed-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.849 251996 DEBUG oslo_concurrency.lockutils [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-db2f61ae-900d-44ed-a7a6-e02df74fcd02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.850 251996 DEBUG oslo_concurrency.lockutils [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-db2f61ae-900d-44ed-a7a6-e02df74fcd02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.850 251996 DEBUG nova.network.neutron [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Refreshing network info cache for port 2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.979 251996 DEBUG nova.network.neutron [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Successfully updated port: 27d4e523-f411-4c37-9c85-1408d599e7fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:11:15 compute-0 nova_compute[251992]: 2025-12-06 07:11:15.994 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "refresh_cache-db2f61ae-900d-44ed-a7a6-e02df74fcd02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:11:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:16.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:16 compute-0 nova_compute[251992]: 2025-12-06 07:11:16.095 251996 DEBUG nova.network.neutron [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:11:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:16.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4056302847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:17 compute-0 nova_compute[251992]: 2025-12-06 07:11:17.022 251996 DEBUG nova.network.neutron [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:11:17 compute-0 nova_compute[251992]: 2025-12-06 07:11:17.052 251996 DEBUG oslo_concurrency.lockutils [req-c589edc7-539c-49b4-9dde-d1b9a9a995a4 req-fa6a9539-dc85-48ae-85d3-e8c62ad27433 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-db2f61ae-900d-44ed-a7a6-e02df74fcd02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:11:17 compute-0 nova_compute[251992]: 2025-12-06 07:11:17.053 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquired lock "refresh_cache-db2f61ae-900d-44ed-a7a6-e02df74fcd02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:11:17 compute-0 nova_compute[251992]: 2025-12-06 07:11:17.053 251996 DEBUG nova.network.neutron [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:11:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 208 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 971 KiB/s rd, 5.6 MiB/s wr, 251 op/s
Dec 06 07:11:17 compute-0 nova_compute[251992]: 2025-12-06 07:11:17.264 251996 DEBUG nova.network.neutron [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:11:17 compute-0 nova_compute[251992]: 2025-12-06 07:11:17.305 251996 DEBUG nova.compute.manager [req-95ad353e-e09b-4bc2-9408-505dce13bb42 req-d7b3becd-f63e-4a48-935f-59836ee742a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-changed-27d4e523-f411-4c37-9c85-1408d599e7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:17 compute-0 nova_compute[251992]: 2025-12-06 07:11:17.306 251996 DEBUG nova.compute.manager [req-95ad353e-e09b-4bc2-9408-505dce13bb42 req-d7b3becd-f63e-4a48-935f-59836ee742a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Refreshing instance network info cache due to event network-changed-27d4e523-f411-4c37-9c85-1408d599e7fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:11:17 compute-0 nova_compute[251992]: 2025-12-06 07:11:17.306 251996 DEBUG oslo_concurrency.lockutils [req-95ad353e-e09b-4bc2-9408-505dce13bb42 req-d7b3becd-f63e-4a48-935f-59836ee742a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-db2f61ae-900d-44ed-a7a6-e02df74fcd02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:11:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:18.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:18 compute-0 nova_compute[251992]: 2025-12-06 07:11:18.021 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:18.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:18 compute-0 nova_compute[251992]: 2025-12-06 07:11:18.265 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:11:18
Dec 06 07:11:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:11:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:11:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'default.rgw.meta', 'images', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'vms']
Dec 06 07:11:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:11:18 compute-0 ceph-mon[74339]: pgmap v1562: 305 pgs: 305 active+clean; 192 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.1 MiB/s wr, 196 op/s
Dec 06 07:11:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1187363185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 208 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 411 KiB/s rd, 5.6 MiB/s wr, 232 op/s
Dec 06 07:11:19 compute-0 sudo[286095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:19 compute-0 sudo[286095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:19 compute-0 sudo[286095]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:19 compute-0 sudo[286120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:19 compute-0 sudo[286120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:19 compute-0 sudo[286120]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:19 compute-0 ceph-mon[74339]: pgmap v1563: 305 pgs: 305 active+clean; 208 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 971 KiB/s rd, 5.6 MiB/s wr, 251 op/s
Dec 06 07:11:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:20.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:20.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:21 compute-0 ceph-mon[74339]: pgmap v1564: 305 pgs: 305 active+clean; 208 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 411 KiB/s rd, 5.6 MiB/s wr, 232 op/s
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.146 251996 DEBUG nova.network.neutron [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Updating instance_info_cache with network_info: [{"id": "516c0976-46a2-4bcc-b9b8-278383257c28", "address": "fa:16:3e:ac:89:33", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.197", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap516c0976-46", "ovs_interfaceid": "516c0976-46a2-4bcc-b9b8-278383257c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "address": "fa:16:3e:b4:8c:91", "network": {"id": "cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1345723289", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cf6d1-e3", "ovs_interfaceid": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27d4e523-f411-4c37-9c85-1408d599e7fd", "address": "fa:16:3e:7a:09:1b", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d4e523-f4", "ovs_interfaceid": "27d4e523-f411-4c37-9c85-1408d599e7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.166 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Releasing lock "refresh_cache-db2f61ae-900d-44ed-a7a6-e02df74fcd02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.167 251996 DEBUG nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Instance network_info: |[{"id": "516c0976-46a2-4bcc-b9b8-278383257c28", "address": "fa:16:3e:ac:89:33", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.197", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap516c0976-46", "ovs_interfaceid": "516c0976-46a2-4bcc-b9b8-278383257c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "address": "fa:16:3e:b4:8c:91", "network": {"id": "cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1345723289", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cf6d1-e3", "ovs_interfaceid": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27d4e523-f411-4c37-9c85-1408d599e7fd", "address": "fa:16:3e:7a:09:1b", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d4e523-f4", "ovs_interfaceid": "27d4e523-f411-4c37-9c85-1408d599e7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.167 251996 DEBUG oslo_concurrency.lockutils [req-95ad353e-e09b-4bc2-9408-505dce13bb42 req-d7b3becd-f63e-4a48-935f-59836ee742a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-db2f61ae-900d-44ed-a7a6-e02df74fcd02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.168 251996 DEBUG nova.network.neutron [req-95ad353e-e09b-4bc2-9408-505dce13bb42 req-d7b3becd-f63e-4a48-935f-59836ee742a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Refreshing network info cache for port 27d4e523-f411-4c37-9c85-1408d599e7fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.172 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Start _get_guest_xml network_info=[{"id": "516c0976-46a2-4bcc-b9b8-278383257c28", "address": "fa:16:3e:ac:89:33", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.197", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap516c0976-46", "ovs_interfaceid": "516c0976-46a2-4bcc-b9b8-278383257c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "address": "fa:16:3e:b4:8c:91", "network": {"id": "cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1345723289", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cf6d1-e3", "ovs_interfaceid": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27d4e523-f411-4c37-9c85-1408d599e7fd", "address": "fa:16:3e:7a:09:1b", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d4e523-f4", "ovs_interfaceid": "27d4e523-f411-4c37-9c85-1408d599e7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.176 251996 WARNING nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.180 251996 DEBUG nova.virt.libvirt.host [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.181 251996 DEBUG nova.virt.libvirt.host [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.187 251996 DEBUG nova.virt.libvirt.host [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.188 251996 DEBUG nova.virt.libvirt.host [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.189 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.190 251996 DEBUG nova.virt.hardware [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.190 251996 DEBUG nova.virt.hardware [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.191 251996 DEBUG nova.virt.hardware [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.191 251996 DEBUG nova.virt.hardware [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.191 251996 DEBUG nova.virt.hardware [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.192 251996 DEBUG nova.virt.hardware [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.192 251996 DEBUG nova.virt.hardware [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.192 251996 DEBUG nova.virt.hardware [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.193 251996 DEBUG nova.virt.hardware [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.193 251996 DEBUG nova.virt.hardware [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.193 251996 DEBUG nova.virt.hardware [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.197 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 213 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 460 KiB/s rd, 5.7 MiB/s wr, 280 op/s
Dec 06 07:11:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:11:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2418961624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.661 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.693 251996 DEBUG nova.storage.rbd_utils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:21 compute-0 nova_compute[251992]: 2025-12-06 07:11:21.697 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:22.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:22.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:11:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2830245454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.172 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.173 251996 DEBUG nova.virt.libvirt.vif [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1667420854',display_name='tempest-ServersTestMultiNic-server-1667420854',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1667420854',id=51,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-x40zn6bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-889484419-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:08Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=db2f61ae-900d-44ed-a7a6-e02df74fcd02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "516c0976-46a2-4bcc-b9b8-278383257c28", "address": "fa:16:3e:ac:89:33", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.197", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap516c0976-46", "ovs_interfaceid": "516c0976-46a2-4bcc-b9b8-278383257c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.174 251996 DEBUG nova.network.os_vif_util [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "516c0976-46a2-4bcc-b9b8-278383257c28", "address": "fa:16:3e:ac:89:33", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.197", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap516c0976-46", "ovs_interfaceid": "516c0976-46a2-4bcc-b9b8-278383257c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.174 251996 DEBUG nova.network.os_vif_util [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:89:33,bridge_name='br-int',has_traffic_filtering=True,id=516c0976-46a2-4bcc-b9b8-278383257c28,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap516c0976-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.175 251996 DEBUG nova.virt.libvirt.vif [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1667420854',display_name='tempest-ServersTestMultiNic-server-1667420854',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1667420854',id=51,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-x40zn6bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-889484419-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:08Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=db2f61ae-900d-44ed-a7a6-e02df74fcd02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "address": "fa:16:3e:b4:8c:91", "network": {"id": "cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1345723289", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cf6d1-e3", "ovs_interfaceid": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.176 251996 DEBUG nova.network.os_vif_util [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "address": "fa:16:3e:b4:8c:91", "network": {"id": "cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1345723289", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cf6d1-e3", "ovs_interfaceid": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.176 251996 DEBUG nova.network.os_vif_util [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8c:91,bridge_name='br-int',has_traffic_filtering=True,id=2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f,network=Network(cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b3cf6d1-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.177 251996 DEBUG nova.virt.libvirt.vif [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1667420854',display_name='tempest-ServersTestMultiNic-server-1667420854',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1667420854',id=51,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-x40zn6bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-889484419-p
roject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:08Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=db2f61ae-900d-44ed-a7a6-e02df74fcd02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27d4e523-f411-4c37-9c85-1408d599e7fd", "address": "fa:16:3e:7a:09:1b", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d4e523-f4", "ovs_interfaceid": "27d4e523-f411-4c37-9c85-1408d599e7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.177 251996 DEBUG nova.network.os_vif_util [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "27d4e523-f411-4c37-9c85-1408d599e7fd", "address": "fa:16:3e:7a:09:1b", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d4e523-f4", "ovs_interfaceid": "27d4e523-f411-4c37-9c85-1408d599e7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.177 251996 DEBUG nova.network.os_vif_util [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:09:1b,bridge_name='br-int',has_traffic_filtering=True,id=27d4e523-f411-4c37-9c85-1408d599e7fd,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27d4e523-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.178 251996 DEBUG nova.objects.instance [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lazy-loading 'pci_devices' on Instance uuid db2f61ae-900d-44ed-a7a6-e02df74fcd02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.198 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:11:22 compute-0 nova_compute[251992]:   <uuid>db2f61ae-900d-44ed-a7a6-e02df74fcd02</uuid>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   <name>instance-00000033</name>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <nova:name>tempest-ServersTestMultiNic-server-1667420854</nova:name>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:11:21</nova:creationTime>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <nova:user uuid="337447c5cffc48bf8256c9166a6ff0e2">tempest-ServersTestMultiNic-889484419-project-member</nova:user>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <nova:project uuid="da24f0e2d59745828feaaecfeb9fed45">tempest-ServersTestMultiNic-889484419</nova:project>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <nova:port uuid="516c0976-46a2-4bcc-b9b8-278383257c28">
Dec 06 07:11:22 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.197" ipVersion="4"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <nova:port uuid="2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f">
Dec 06 07:11:22 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.1.130" ipVersion="4"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <nova:port uuid="27d4e523-f411-4c37-9c85-1408d599e7fd">
Dec 06 07:11:22 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <system>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <entry name="serial">db2f61ae-900d-44ed-a7a6-e02df74fcd02</entry>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <entry name="uuid">db2f61ae-900d-44ed-a7a6-e02df74fcd02</entry>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     </system>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   <os>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   </os>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   <features>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   </features>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk">
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       </source>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk.config">
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       </source>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:11:22 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:ac:89:33"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <target dev="tap516c0976-46"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:b4:8c:91"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <target dev="tap2b3cf6d1-e3"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:7a:09:1b"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <target dev="tap27d4e523-f4"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/db2f61ae-900d-44ed-a7a6-e02df74fcd02/console.log" append="off"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <video>
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     </video>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:11:22 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:11:22 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:11:22 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:11:22 compute-0 nova_compute[251992]: </domain>
Dec 06 07:11:22 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.200 251996 DEBUG nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Preparing to wait for external event network-vif-plugged-516c0976-46a2-4bcc-b9b8-278383257c28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.200 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.200 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.201 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.201 251996 DEBUG nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Preparing to wait for external event network-vif-plugged-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.201 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.201 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.201 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.202 251996 DEBUG nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Preparing to wait for external event network-vif-plugged-27d4e523-f411-4c37-9c85-1408d599e7fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.202 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.202 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.202 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.203 251996 DEBUG nova.virt.libvirt.vif [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1667420854',display_name='tempest-ServersTestMultiNic-server-1667420854',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1667420854',id=51,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-x40zn6bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-889484419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:08Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=db2f61ae-900d-44ed-a7a6-e02df74fcd02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "516c0976-46a2-4bcc-b9b8-278383257c28", "address": "fa:16:3e:ac:89:33", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.197", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap516c0976-46", "ovs_interfaceid": "516c0976-46a2-4bcc-b9b8-278383257c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.203 251996 DEBUG nova.network.os_vif_util [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "516c0976-46a2-4bcc-b9b8-278383257c28", "address": "fa:16:3e:ac:89:33", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.197", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap516c0976-46", "ovs_interfaceid": "516c0976-46a2-4bcc-b9b8-278383257c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.204 251996 DEBUG nova.network.os_vif_util [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:89:33,bridge_name='br-int',has_traffic_filtering=True,id=516c0976-46a2-4bcc-b9b8-278383257c28,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap516c0976-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.204 251996 DEBUG os_vif [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:89:33,bridge_name='br-int',has_traffic_filtering=True,id=516c0976-46a2-4bcc-b9b8-278383257c28,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap516c0976-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.205 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.205 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.205 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.210 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.211 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap516c0976-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.211 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap516c0976-46, col_values=(('external_ids', {'iface-id': '516c0976-46a2-4bcc-b9b8-278383257c28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:89:33', 'vm-uuid': 'db2f61ae-900d-44ed-a7a6-e02df74fcd02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.213 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:22 compute-0 NetworkManager[48965]: <info>  [1765005082.2150] manager: (tap516c0976-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.217 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.221 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.222 251996 INFO os_vif [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:89:33,bridge_name='br-int',has_traffic_filtering=True,id=516c0976-46a2-4bcc-b9b8-278383257c28,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap516c0976-46')
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.223 251996 DEBUG nova.virt.libvirt.vif [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1667420854',display_name='tempest-ServersTestMultiNic-server-1667420854',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1667420854',id=51,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-x40zn6bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-889484419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:08Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=db2f61ae-900d-44ed-a7a6-e02df74fcd02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "address": "fa:16:3e:b4:8c:91", "network": {"id": "cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1345723289", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cf6d1-e3", "ovs_interfaceid": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.224 251996 DEBUG nova.network.os_vif_util [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "address": "fa:16:3e:b4:8c:91", "network": {"id": "cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1345723289", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cf6d1-e3", "ovs_interfaceid": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.225 251996 DEBUG nova.network.os_vif_util [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8c:91,bridge_name='br-int',has_traffic_filtering=True,id=2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f,network=Network(cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b3cf6d1-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.225 251996 DEBUG os_vif [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8c:91,bridge_name='br-int',has_traffic_filtering=True,id=2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f,network=Network(cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b3cf6d1-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.226 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.226 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.227 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.229 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.229 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b3cf6d1-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.230 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b3cf6d1-e3, col_values=(('external_ids', {'iface-id': '2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:8c:91', 'vm-uuid': 'db2f61ae-900d-44ed-a7a6-e02df74fcd02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.232 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:22 compute-0 NetworkManager[48965]: <info>  [1765005082.2331] manager: (tap2b3cf6d1-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.235 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.238 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.240 251996 INFO os_vif [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8c:91,bridge_name='br-int',has_traffic_filtering=True,id=2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f,network=Network(cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b3cf6d1-e3')
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.241 251996 DEBUG nova.virt.libvirt.vif [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1667420854',display_name='tempest-ServersTestMultiNic-server-1667420854',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1667420854',id=51,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-x40zn6bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-8
89484419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:08Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=db2f61ae-900d-44ed-a7a6-e02df74fcd02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27d4e523-f411-4c37-9c85-1408d599e7fd", "address": "fa:16:3e:7a:09:1b", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d4e523-f4", "ovs_interfaceid": "27d4e523-f411-4c37-9c85-1408d599e7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.241 251996 DEBUG nova.network.os_vif_util [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "27d4e523-f411-4c37-9c85-1408d599e7fd", "address": "fa:16:3e:7a:09:1b", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d4e523-f4", "ovs_interfaceid": "27d4e523-f411-4c37-9c85-1408d599e7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.242 251996 DEBUG nova.network.os_vif_util [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:09:1b,bridge_name='br-int',has_traffic_filtering=True,id=27d4e523-f411-4c37-9c85-1408d599e7fd,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27d4e523-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.242 251996 DEBUG os_vif [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:09:1b,bridge_name='br-int',has_traffic_filtering=True,id=27d4e523-f411-4c37-9c85-1408d599e7fd,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27d4e523-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.243 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.243 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.243 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.246 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.246 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27d4e523-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.247 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27d4e523-f4, col_values=(('external_ids', {'iface-id': '27d4e523-f411-4c37-9c85-1408d599e7fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:09:1b', 'vm-uuid': 'db2f61ae-900d-44ed-a7a6-e02df74fcd02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:22 compute-0 NetworkManager[48965]: <info>  [1765005082.2500] manager: (tap27d4e523-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.252 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.256 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.257 251996 INFO os_vif [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:09:1b,bridge_name='br-int',has_traffic_filtering=True,id=27d4e523-f411-4c37-9c85-1408d599e7fd,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27d4e523-f4')
Dec 06 07:11:22 compute-0 ceph-mon[74339]: pgmap v1565: 305 pgs: 305 active+clean; 213 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 460 KiB/s rd, 5.7 MiB/s wr, 280 op/s
Dec 06 07:11:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2418961624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.996 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.997 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.997 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] No VIF found with MAC fa:16:3e:ac:89:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:11:22 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.997 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] No VIF found with MAC fa:16:3e:b4:8c:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:11:23 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.997 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] No VIF found with MAC fa:16:3e:7a:09:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:11:23 compute-0 nova_compute[251992]: 2025-12-06 07:11:22.998 251996 INFO nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Using config drive
Dec 06 07:11:23 compute-0 nova_compute[251992]: 2025-12-06 07:11:23.024 251996 DEBUG nova.storage.rbd_utils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 213 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 737 KiB/s rd, 3.7 MiB/s wr, 243 op/s
Dec 06 07:11:23 compute-0 nova_compute[251992]: 2025-12-06 07:11:23.267 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:11:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2830245454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:11:23 compute-0 nova_compute[251992]: 2025-12-06 07:11:23.800 251996 INFO nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Creating config drive at /var/lib/nova/instances/db2f61ae-900d-44ed-a7a6-e02df74fcd02/disk.config
Dec 06 07:11:23 compute-0 nova_compute[251992]: 2025-12-06 07:11:23.804 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db2f61ae-900d-44ed-a7a6-e02df74fcd02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5teg18zd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:23 compute-0 nova_compute[251992]: 2025-12-06 07:11:23.933 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db2f61ae-900d-44ed-a7a6-e02df74fcd02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5teg18zd" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:23 compute-0 nova_compute[251992]: 2025-12-06 07:11:23.966 251996 DEBUG nova.storage.rbd_utils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] rbd image db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:23 compute-0 nova_compute[251992]: 2025-12-06 07:11:23.969 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db2f61ae-900d-44ed-a7a6-e02df74fcd02/disk.config db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:24.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:24.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:24 compute-0 nova_compute[251992]: 2025-12-06 07:11:24.135 251996 DEBUG nova.network.neutron [req-95ad353e-e09b-4bc2-9408-505dce13bb42 req-d7b3becd-f63e-4a48-935f-59836ee742a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Updated VIF entry in instance network info cache for port 27d4e523-f411-4c37-9c85-1408d599e7fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:11:24 compute-0 nova_compute[251992]: 2025-12-06 07:11:24.137 251996 DEBUG nova.network.neutron [req-95ad353e-e09b-4bc2-9408-505dce13bb42 req-d7b3becd-f63e-4a48-935f-59836ee742a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Updating instance_info_cache with network_info: [{"id": "516c0976-46a2-4bcc-b9b8-278383257c28", "address": "fa:16:3e:ac:89:33", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.197", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap516c0976-46", "ovs_interfaceid": "516c0976-46a2-4bcc-b9b8-278383257c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "address": "fa:16:3e:b4:8c:91", "network": {"id": "cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1345723289", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cf6d1-e3", "ovs_interfaceid": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27d4e523-f411-4c37-9c85-1408d599e7fd", "address": "fa:16:3e:7a:09:1b", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d4e523-f4", "ovs_interfaceid": "27d4e523-f411-4c37-9c85-1408d599e7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:11:24 compute-0 nova_compute[251992]: 2025-12-06 07:11:24.172 251996 DEBUG oslo_concurrency.lockutils [req-95ad353e-e09b-4bc2-9408-505dce13bb42 req-d7b3becd-f63e-4a48-935f-59836ee742a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-db2f61ae-900d-44ed-a7a6-e02df74fcd02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:11:24 compute-0 nova_compute[251992]: 2025-12-06 07:11:24.249 251996 DEBUG oslo_concurrency.processutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db2f61ae-900d-44ed-a7a6-e02df74fcd02/disk.config db2f61ae-900d-44ed-a7a6-e02df74fcd02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:24 compute-0 nova_compute[251992]: 2025-12-06 07:11:24.250 251996 INFO nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Deleting local config drive /var/lib/nova/instances/db2f61ae-900d-44ed-a7a6-e02df74fcd02/disk.config because it was imported into RBD.
Dec 06 07:11:24 compute-0 NetworkManager[48965]: <info>  [1765005084.3070] manager: (tap516c0976-46): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Dec 06 07:11:24 compute-0 kernel: tap516c0976-46: entered promiscuous mode
Dec 06 07:11:24 compute-0 nova_compute[251992]: 2025-12-06 07:11:24.314 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00112|binding|INFO|Claiming lport 516c0976-46a2-4bcc-b9b8-278383257c28 for this chassis.
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00113|binding|INFO|516c0976-46a2-4bcc-b9b8-278383257c28: Claiming fa:16:3e:ac:89:33 10.100.0.197
Dec 06 07:11:24 compute-0 NetworkManager[48965]: <info>  [1765005084.3271] manager: (tap2b3cf6d1-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.335 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:89:33 10.100.0.197'], port_security=['fa:16:3e:ac:89:33 10.100.0.197'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.197/24', 'neutron:device_id': 'db2f61ae-900d-44ed-a7a6-e02df74fcd02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da24f0e2d59745828feaaecfeb9fed45', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f4b40e0f-999e-445b-b382-f95dc10d9fcc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e233789e-bb2d-4beb-a35c-8d27498a648c, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=516c0976-46a2-4bcc-b9b8-278383257c28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.336 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 516c0976-46a2-4bcc-b9b8-278383257c28 in datapath 84d05a1d-1f7d-4572-b589-e66f411ec5b2 bound to our chassis
Dec 06 07:11:24 compute-0 systemd-udevd[286291]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:11:24 compute-0 systemd-udevd[286292]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.338 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84d05a1d-1f7d-4572-b589-e66f411ec5b2
Dec 06 07:11:24 compute-0 NetworkManager[48965]: <info>  [1765005084.3446] manager: (tap27d4e523-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Dec 06 07:11:24 compute-0 systemd-udevd[286299]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:11:24 compute-0 NetworkManager[48965]: <info>  [1765005084.3548] device (tap516c0976-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.353 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4d09342a-1a77-461a-b6dd-05af4b83cdea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.354 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84d05a1d-11 in ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.359 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84d05a1d-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.359 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cea7fb92-4cb0-44e8-811c-f8f2780413d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.360 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0080f2-3aca-420c-bf87-a0b6bd52ad96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 NetworkManager[48965]: <info>  [1765005084.3616] device (tap516c0976-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:11:24 compute-0 kernel: tap27d4e523-f4: entered promiscuous mode
Dec 06 07:11:24 compute-0 kernel: tap2b3cf6d1-e3: entered promiscuous mode
Dec 06 07:11:24 compute-0 NetworkManager[48965]: <info>  [1765005084.3680] device (tap27d4e523-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:11:24 compute-0 NetworkManager[48965]: <info>  [1765005084.3688] device (tap2b3cf6d1-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00114|binding|INFO|Claiming lport 27d4e523-f411-4c37-9c85-1408d599e7fd for this chassis.
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00115|binding|INFO|27d4e523-f411-4c37-9c85-1408d599e7fd: Claiming fa:16:3e:7a:09:1b 10.100.0.30
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00116|binding|INFO|Claiming lport 2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f for this chassis.
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00117|binding|INFO|2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f: Claiming fa:16:3e:b4:8c:91 10.100.1.130
Dec 06 07:11:24 compute-0 NetworkManager[48965]: <info>  [1765005084.3701] device (tap27d4e523-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:11:24 compute-0 nova_compute[251992]: 2025-12-06 07:11:24.369 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:24 compute-0 NetworkManager[48965]: <info>  [1765005084.3706] device (tap2b3cf6d1-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00118|binding|INFO|Setting lport 516c0976-46a2-4bcc-b9b8-278383257c28 ovn-installed in OVS
Dec 06 07:11:24 compute-0 nova_compute[251992]: 2025-12-06 07:11:24.376 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.376 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[df648584-eda1-4ffd-933a-3c3254ed518c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 systemd-machined[212986]: New machine qemu-22-instance-00000033.
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00119|binding|INFO|Setting lport 516c0976-46a2-4bcc-b9b8-278383257c28 up in Southbound
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.398 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:09:1b 10.100.0.30'], port_security=['fa:16:3e:7a:09:1b 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/24', 'neutron:device_id': 'db2f61ae-900d-44ed-a7a6-e02df74fcd02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da24f0e2d59745828feaaecfeb9fed45', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f4b40e0f-999e-445b-b382-f95dc10d9fcc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e233789e-bb2d-4beb-a35c-8d27498a648c, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=27d4e523-f411-4c37-9c85-1408d599e7fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.400 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:8c:91 10.100.1.130'], port_security=['fa:16:3e:b4:8c:91 10.100.1.130'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.130/24', 'neutron:device_id': 'db2f61ae-900d-44ed-a7a6-e02df74fcd02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da24f0e2d59745828feaaecfeb9fed45', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f4b40e0f-999e-445b-b382-f95dc10d9fcc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26236771-682d-4b17-9e5c-46b882fdf23d, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.401 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e80e6bce-bc86-47c2-90ca-c82b78ddf49e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000033.
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00120|binding|INFO|Setting lport 2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f ovn-installed in OVS
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00121|binding|INFO|Setting lport 27d4e523-f411-4c37-9c85-1408d599e7fd ovn-installed in OVS
Dec 06 07:11:24 compute-0 nova_compute[251992]: 2025-12-06 07:11:24.425 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.434 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c0100b53-610c-4f6b-a599-0d4abcf89b89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.442 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[32fd5832-9926-4bab-b4ee-54adb365805c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 NetworkManager[48965]: <info>  [1765005084.4434] manager: (tap84d05a1d-10): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.468 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[f21ce58d-d099-4d6a-9570-01836b9c3076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.470 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[552ae3b1-3728-413e-a921-3348a8409737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 NetworkManager[48965]: <info>  [1765005084.4873] device (tap84d05a1d-10): carrier: link connected
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.490 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c648a86e-9e82-4eaf-8367-a884ec5f78bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.504 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9efc26-3de8-45ec-8cd2-4333e738db73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84d05a1d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:55:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535707, 'reachable_time': 34184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286333, 'error': None, 'target': 'ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.517 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c7351d0d-405a-463d-af10-59d76e38edf9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:5569'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535707, 'tstamp': 535707}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286334, 'error': None, 'target': 'ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.530 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[95d08629-133e-4f65-a65d-a4932cf93487]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84d05a1d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:55:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535707, 'reachable_time': 34184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286335, 'error': None, 'target': 'ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00122|binding|INFO|Setting lport 2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f up in Southbound
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00123|binding|INFO|Setting lport 27d4e523-f411-4c37-9c85-1408d599e7fd up in Southbound
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.556 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1a35d969-252c-453b-979a-5192aaff8a51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.597 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbf8ab0-4e4a-458a-bc51-9c9eb9363920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.598 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84d05a1d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.598 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.599 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84d05a1d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:24 compute-0 NetworkManager[48965]: <info>  [1765005084.6013] manager: (tap84d05a1d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Dec 06 07:11:24 compute-0 nova_compute[251992]: 2025-12-06 07:11:24.600 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:24 compute-0 kernel: tap84d05a1d-10: entered promiscuous mode
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.608 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84d05a1d-10, col_values=(('external_ids', {'iface-id': 'faed6d56-bc08-4e6e-b59e-721c2cca5019'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:24 compute-0 nova_compute[251992]: 2025-12-06 07:11:24.610 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:24 compute-0 ovn_controller[147168]: 2025-12-06T07:11:24Z|00124|binding|INFO|Releasing lport faed6d56-bc08-4e6e-b59e-721c2cca5019 from this chassis (sb_readonly=1)
Dec 06 07:11:24 compute-0 nova_compute[251992]: 2025-12-06 07:11:24.623 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.624 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84d05a1d-1f7d-4572-b589-e66f411ec5b2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84d05a1d-1f7d-4572-b589-e66f411ec5b2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.625 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b79cf4-ef47-44a0-a169-0cbbe2601d87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.625 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-84d05a1d-1f7d-4572-b589-e66f411ec5b2
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/84d05a1d-1f7d-4572-b589-e66f411ec5b2.pid.haproxy
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 84d05a1d-1f7d-4572-b589-e66f411ec5b2
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:11:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:24.626 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'env', 'PROCESS_TAG=haproxy-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84d05a1d-1f7d-4572-b589-e66f411ec5b2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:11:25 compute-0 ceph-mon[74339]: pgmap v1566: 305 pgs: 305 active+clean; 213 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 737 KiB/s rd, 3.7 MiB/s wr, 243 op/s
Dec 06 07:11:25 compute-0 podman[286410]: 2025-12-06 07:11:25.010147226 +0000 UTC m=+0.056567175 container create e6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:11:25 compute-0 systemd[1]: Started libpod-conmon-e6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c.scope.
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.042 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005085.0413918, db2f61ae-900d-44ed-a7a6-e02df74fcd02 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.043 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] VM Started (Lifecycle Event)
Dec 06 07:11:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db2d0990d6e45dce14db606dedc9a6dbca012e0a7fd4a15bfc55431ef79f2121/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:25 compute-0 podman[286410]: 2025-12-06 07:11:25.069843817 +0000 UTC m=+0.116263786 container init e6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:11:25 compute-0 podman[286410]: 2025-12-06 07:11:24.981413896 +0000 UTC m=+0.027833875 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:11:25 compute-0 podman[286410]: 2025-12-06 07:11:25.074682224 +0000 UTC m=+0.121102173 container start e6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:11:25 compute-0 neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2[286427]: [NOTICE]   (286431) : New worker (286433) forked
Dec 06 07:11:25 compute-0 neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2[286427]: [NOTICE]   (286431) : Loading success.
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.131 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 27d4e523-f411-4c37-9c85-1408d599e7fd in datapath 84d05a1d-1f7d-4572-b589-e66f411ec5b2 unbound from our chassis
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.133 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84d05a1d-1f7d-4572-b589-e66f411ec5b2
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.146 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c6e436-ba0c-496f-80a4-5c9bdb098dea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.173 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[f49b3c4c-a0d8-47bc-982e-072cc3922cb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.175 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c75fa8ee-4207-4d6f-9032-285e2e2f53bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.202 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ba16f0e4-83e5-4ddd-9d31-7a4296751e8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 250 op/s
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.218 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[51fe14c7-a3cf-4ee0-99cf-ed301ac476ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84d05a1d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:55:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 6, 'rx_bytes': 90, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 6, 'rx_bytes': 90, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535707, 'reachable_time': 34184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286447, 'error': None, 'target': 'ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.231 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9ec74e-c04f-40fd-bc3b-0f113a3d7218]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap84d05a1d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535716, 'tstamp': 535716}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286448, 'error': None, 'target': 'ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84d05a1d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535718, 'tstamp': 535718}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286448, 'error': None, 'target': 'ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.233 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84d05a1d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.235 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.237 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84d05a1d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.237 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.238 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84d05a1d-10, col_values=(('external_ids', {'iface-id': 'faed6d56-bc08-4e6e-b59e-721c2cca5019'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.238 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.239 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f in datapath cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb unbound from our chassis
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.240 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.251 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cedcfd48-cbb3-444a-83cc-a9696157c67c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.251 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcce6bacf-f1 in ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.253 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcce6bacf-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.253 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a4100e-2f05-4d53-8147-ee9db7e5012a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.253 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[15603b92-1e7c-4cbc-8ed8-95a8e2ecad8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.264 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[53f3bae5-9dcc-42ae-bc09-513b9fae8366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.291 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[401906c8-a6ff-4325-977f-389fd49dfaa6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.314 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.318 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005085.043153, db2f61ae-900d-44ed-a7a6-e02df74fcd02 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.318 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] VM Paused (Lifecycle Event)
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.318 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[cb2b4369-fe68-4bc2-945d-bb3d0bf95376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.323 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb4475c-1980-44fb-a2e7-b40210dc38c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 NetworkManager[48965]: <info>  [1765005085.3237] manager: (tapcce6bacf-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/70)
Dec 06 07:11:25 compute-0 systemd-udevd[286324]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.352 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[928effcc-5d31-429a-a6a1-9c41e36095dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.355 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[8c2380d5-88a4-443f-8457-70b5d666d9c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 NetworkManager[48965]: <info>  [1765005085.3737] device (tapcce6bacf-f0): carrier: link connected
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.377 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ee6158-3446-4e60-8515-b31b97fa16d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.392 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[81e03343-6003-4f29-b489-241aa9812c53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce6bacf-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b7:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535796, 'reachable_time': 15853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286459, 'error': None, 'target': 'ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.405 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b9be0438-230c-4c02-8ca8-f751c87892a9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:b746'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535796, 'tstamp': 535796}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286460, 'error': None, 'target': 'ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.420 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[176e5bf0-79da-4051-a02e-9c8678dcf170]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcce6bacf-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:b7:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535796, 'reachable_time': 15853, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286461, 'error': None, 'target': 'ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.445 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbb5e3b-281b-4918-be33-d4468b1de42a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.493 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4c77a7-c360-4ac1-ab5b-6661dea85216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.494 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce6bacf-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.494 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.495 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcce6bacf-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.496 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:25 compute-0 kernel: tapcce6bacf-f0: entered promiscuous mode
Dec 06 07:11:25 compute-0 NetworkManager[48965]: <info>  [1765005085.4970] manager: (tapcce6bacf-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.498 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.499 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcce6bacf-f0, col_values=(('external_ids', {'iface-id': '2cc8ac24-6dbe-463d-8a32-14ef09962b80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.500 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:25 compute-0 ovn_controller[147168]: 2025-12-06T07:11:25Z|00125|binding|INFO|Releasing lport 2cc8ac24-6dbe-463d-8a32-14ef09962b80 from this chassis (sb_readonly=0)
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.513 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.515 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.516 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a7afb1f2-87d6-43c6-bec1-8fa023766448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.516 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb.pid.haproxy
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:11:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:25.517 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb', 'env', 'PROCESS_TAG=haproxy-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.660 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.664 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004155616043178857 of space, bias 1.0, pg target 1.2466848129536572 quantized to 32 (current 32)
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:11:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:11:25 compute-0 nova_compute[251992]: 2025-12-06 07:11:25.730 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:11:25 compute-0 podman[286493]: 2025-12-06 07:11:25.867245475 +0000 UTC m=+0.044551446 container create b73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 07:11:25 compute-0 systemd[1]: Started libpod-conmon-b73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a.scope.
Dec 06 07:11:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:11:25 compute-0 podman[286493]: 2025-12-06 07:11:25.844537856 +0000 UTC m=+0.021843847 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/433088bb5cd1227f4d1f18802a90fc0a0dc7ab2e6ea7d835110ab7d9789d7605/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:25 compute-0 podman[286493]: 2025-12-06 07:11:25.961154951 +0000 UTC m=+0.138461012 container init b73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:11:25 compute-0 podman[286493]: 2025-12-06 07:11:25.968312033 +0000 UTC m=+0.145618044 container start b73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:11:25 compute-0 neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb[286508]: [NOTICE]   (286512) : New worker (286514) forked
Dec 06 07:11:25 compute-0 neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb[286508]: [NOTICE]   (286512) : Loading success.
Dec 06 07:11:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:26.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:11:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:26.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:11:26 compute-0 ceph-mon[74339]: pgmap v1567: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 250 op/s
Dec 06 07:11:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 234 op/s
Dec 06 07:11:27 compute-0 nova_compute[251992]: 2025-12-06 07:11:27.251 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:27 compute-0 nova_compute[251992]: 2025-12-06 07:11:27.411 251996 DEBUG nova.compute.manager [req-c0119f2d-2d56-47e9-9b8a-a038cbf5cc67 req-e95c37ea-a2ed-4178-beb8-2551eb477837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-plugged-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:27 compute-0 nova_compute[251992]: 2025-12-06 07:11:27.411 251996 DEBUG oslo_concurrency.lockutils [req-c0119f2d-2d56-47e9-9b8a-a038cbf5cc67 req-e95c37ea-a2ed-4178-beb8-2551eb477837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:27 compute-0 nova_compute[251992]: 2025-12-06 07:11:27.412 251996 DEBUG oslo_concurrency.lockutils [req-c0119f2d-2d56-47e9-9b8a-a038cbf5cc67 req-e95c37ea-a2ed-4178-beb8-2551eb477837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:27 compute-0 nova_compute[251992]: 2025-12-06 07:11:27.412 251996 DEBUG oslo_concurrency.lockutils [req-c0119f2d-2d56-47e9-9b8a-a038cbf5cc67 req-e95c37ea-a2ed-4178-beb8-2551eb477837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:27 compute-0 nova_compute[251992]: 2025-12-06 07:11:27.412 251996 DEBUG nova.compute.manager [req-c0119f2d-2d56-47e9-9b8a-a038cbf5cc67 req-e95c37ea-a2ed-4178-beb8-2551eb477837 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Processing event network-vif-plugged-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:11:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:28.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:28.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:28 compute-0 nova_compute[251992]: 2025-12-06 07:11:28.315 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 122 KiB/s wr, 148 op/s
Dec 06 07:11:29 compute-0 ceph-mon[74339]: pgmap v1568: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 234 op/s
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.562 251996 DEBUG nova.compute.manager [req-827b6b73-06fe-48ed-8700-eb83397a8ba1 req-84cf88bd-c3e2-4700-9955-2264471eb32b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-plugged-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.563 251996 DEBUG oslo_concurrency.lockutils [req-827b6b73-06fe-48ed-8700-eb83397a8ba1 req-84cf88bd-c3e2-4700-9955-2264471eb32b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.563 251996 DEBUG oslo_concurrency.lockutils [req-827b6b73-06fe-48ed-8700-eb83397a8ba1 req-84cf88bd-c3e2-4700-9955-2264471eb32b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.563 251996 DEBUG oslo_concurrency.lockutils [req-827b6b73-06fe-48ed-8700-eb83397a8ba1 req-84cf88bd-c3e2-4700-9955-2264471eb32b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.563 251996 DEBUG nova.compute.manager [req-827b6b73-06fe-48ed-8700-eb83397a8ba1 req-84cf88bd-c3e2-4700-9955-2264471eb32b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] No event matching network-vif-plugged-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f in dict_keys([('network-vif-plugged', '516c0976-46a2-4bcc-b9b8-278383257c28'), ('network-vif-plugged', '27d4e523-f411-4c37-9c85-1408d599e7fd')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.563 251996 WARNING nova.compute.manager [req-827b6b73-06fe-48ed-8700-eb83397a8ba1 req-84cf88bd-c3e2-4700-9955-2264471eb32b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received unexpected event network-vif-plugged-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f for instance with vm_state building and task_state spawning.
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.673 251996 DEBUG nova.compute.manager [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-plugged-516c0976-46a2-4bcc-b9b8-278383257c28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.674 251996 DEBUG oslo_concurrency.lockutils [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.674 251996 DEBUG oslo_concurrency.lockutils [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.674 251996 DEBUG oslo_concurrency.lockutils [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.674 251996 DEBUG nova.compute.manager [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Processing event network-vif-plugged-516c0976-46a2-4bcc-b9b8-278383257c28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.674 251996 DEBUG nova.compute.manager [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-plugged-516c0976-46a2-4bcc-b9b8-278383257c28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.675 251996 DEBUG oslo_concurrency.lockutils [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.675 251996 DEBUG oslo_concurrency.lockutils [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.675 251996 DEBUG oslo_concurrency.lockutils [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.675 251996 DEBUG nova.compute.manager [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] No event matching network-vif-plugged-516c0976-46a2-4bcc-b9b8-278383257c28 in dict_keys([('network-vif-plugged', '27d4e523-f411-4c37-9c85-1408d599e7fd')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.675 251996 WARNING nova.compute.manager [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received unexpected event network-vif-plugged-516c0976-46a2-4bcc-b9b8-278383257c28 for instance with vm_state building and task_state spawning.
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.675 251996 DEBUG nova.compute.manager [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-plugged-27d4e523-f411-4c37-9c85-1408d599e7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.675 251996 DEBUG oslo_concurrency.lockutils [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.676 251996 DEBUG oslo_concurrency.lockutils [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.676 251996 DEBUG oslo_concurrency.lockutils [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.676 251996 DEBUG nova.compute.manager [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Processing event network-vif-plugged-27d4e523-f411-4c37-9c85-1408d599e7fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.676 251996 DEBUG nova.compute.manager [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-plugged-27d4e523-f411-4c37-9c85-1408d599e7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.676 251996 DEBUG oslo_concurrency.lockutils [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.676 251996 DEBUG oslo_concurrency.lockutils [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.676 251996 DEBUG oslo_concurrency.lockutils [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.677 251996 DEBUG nova.compute.manager [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] No waiting events found dispatching network-vif-plugged-27d4e523-f411-4c37-9c85-1408d599e7fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.677 251996 WARNING nova.compute.manager [req-ae595cb6-99d7-470c-a498-afbc751700ba req-57aa82dd-bd75-4ced-9c6f-ea4e16f2c3e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received unexpected event network-vif-plugged-27d4e523-f411-4c37-9c85-1408d599e7fd for instance with vm_state building and task_state spawning.
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.677 251996 DEBUG nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Instance event wait completed in 4 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.683 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005089.6832745, db2f61ae-900d-44ed-a7a6-e02df74fcd02 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.684 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] VM Resumed (Lifecycle Event)
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.687 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.692 251996 INFO nova.virt.libvirt.driver [-] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Instance spawned successfully.
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.693 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.722 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.731 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.737 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.737 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.738 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.738 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.739 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.739 251996 DEBUG nova.virt.libvirt.driver [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.767 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.802 251996 INFO nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Took 21.64 seconds to spawn the instance on the hypervisor.
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.803 251996 DEBUG nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.912 251996 INFO nova.compute.manager [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Took 22.86 seconds to build instance.
Dec 06 07:11:29 compute-0 nova_compute[251992]: 2025-12-06 07:11:29.932 251996 DEBUG oslo_concurrency.lockutils [None req-505d55ba-a1e4-411f-b133-5e3fdd3b6152 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:30.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:30.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:30 compute-0 ceph-mon[74339]: pgmap v1569: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 122 KiB/s wr, 148 op/s
Dec 06 07:11:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Dec 06 07:11:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 123 KiB/s wr, 169 op/s
Dec 06 07:11:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Dec 06 07:11:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.566 251996 DEBUG oslo_concurrency.lockutils [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.566 251996 DEBUG oslo_concurrency.lockutils [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.567 251996 DEBUG oslo_concurrency.lockutils [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.567 251996 DEBUG oslo_concurrency.lockutils [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.568 251996 DEBUG oslo_concurrency.lockutils [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.569 251996 INFO nova.compute.manager [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Terminating instance
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.570 251996 DEBUG nova.compute.manager [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:11:31 compute-0 kernel: tap516c0976-46 (unregistering): left promiscuous mode
Dec 06 07:11:31 compute-0 NetworkManager[48965]: <info>  [1765005091.6618] device (tap516c0976-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.669 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 ovn_controller[147168]: 2025-12-06T07:11:31Z|00126|binding|INFO|Releasing lport 516c0976-46a2-4bcc-b9b8-278383257c28 from this chassis (sb_readonly=0)
Dec 06 07:11:31 compute-0 ovn_controller[147168]: 2025-12-06T07:11:31Z|00127|binding|INFO|Setting lport 516c0976-46a2-4bcc-b9b8-278383257c28 down in Southbound
Dec 06 07:11:31 compute-0 ovn_controller[147168]: 2025-12-06T07:11:31Z|00128|binding|INFO|Removing iface tap516c0976-46 ovn-installed in OVS
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.672 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.684 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 kernel: tap2b3cf6d1-e3 (unregistering): left promiscuous mode
Dec 06 07:11:31 compute-0 NetworkManager[48965]: <info>  [1765005091.6885] device (tap2b3cf6d1-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:11:31 compute-0 ovn_controller[147168]: 2025-12-06T07:11:31Z|00129|binding|INFO|Releasing lport 2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f from this chassis (sb_readonly=1)
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.697 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 ovn_controller[147168]: 2025-12-06T07:11:31Z|00130|binding|INFO|Removing iface tap2b3cf6d1-e3 ovn-installed in OVS
Dec 06 07:11:31 compute-0 ovn_controller[147168]: 2025-12-06T07:11:31Z|00131|if_status|INFO|Not setting lport 2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f down as sb is readonly
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.700 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 kernel: tap27d4e523-f4 (unregistering): left promiscuous mode
Dec 06 07:11:31 compute-0 NetworkManager[48965]: <info>  [1765005091.7139] device (tap27d4e523-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.716 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 ovn_controller[147168]: 2025-12-06T07:11:31Z|00132|binding|INFO|Releasing lport 27d4e523-f411-4c37-9c85-1408d599e7fd from this chassis (sb_readonly=1)
Dec 06 07:11:31 compute-0 ovn_controller[147168]: 2025-12-06T07:11:31Z|00133|binding|INFO|Removing iface tap27d4e523-f4 ovn-installed in OVS
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.725 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.746 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000033.scope: Deactivated successfully.
Dec 06 07:11:31 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000033.scope: Consumed 2.603s CPU time.
Dec 06 07:11:31 compute-0 systemd-machined[212986]: Machine qemu-22-instance-00000033 terminated.
Dec 06 07:11:31 compute-0 NetworkManager[48965]: <info>  [1765005091.7901] manager: (tap516c0976-46): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Dec 06 07:11:31 compute-0 NetworkManager[48965]: <info>  [1765005091.8067] manager: (tap2b3cf6d1-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Dec 06 07:11:31 compute-0 NetworkManager[48965]: <info>  [1765005091.8168] manager: (tap27d4e523-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.829 251996 INFO nova.virt.libvirt.driver [-] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Instance destroyed successfully.
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.829 251996 DEBUG nova.objects.instance [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lazy-loading 'resources' on Instance uuid db2f61ae-900d-44ed-a7a6-e02df74fcd02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:31 compute-0 ovn_controller[147168]: 2025-12-06T07:11:31Z|00134|binding|INFO|Setting lport 2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f down in Southbound
Dec 06 07:11:31 compute-0 ovn_controller[147168]: 2025-12-06T07:11:31Z|00135|binding|INFO|Setting lport 27d4e523-f411-4c37-9c85-1408d599e7fd down in Southbound
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.845 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:89:33 10.100.0.197'], port_security=['fa:16:3e:ac:89:33 10.100.0.197'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.197/24', 'neutron:device_id': 'db2f61ae-900d-44ed-a7a6-e02df74fcd02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da24f0e2d59745828feaaecfeb9fed45', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f4b40e0f-999e-445b-b382-f95dc10d9fcc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e233789e-bb2d-4beb-a35c-8d27498a648c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=516c0976-46a2-4bcc-b9b8-278383257c28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.846 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 516c0976-46a2-4bcc-b9b8-278383257c28 in datapath 84d05a1d-1f7d-4572-b589-e66f411ec5b2 unbound from our chassis
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.848 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84d05a1d-1f7d-4572-b589-e66f411ec5b2
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.849 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:09:1b 10.100.0.30'], port_security=['fa:16:3e:7a:09:1b 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/24', 'neutron:device_id': 'db2f61ae-900d-44ed-a7a6-e02df74fcd02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da24f0e2d59745828feaaecfeb9fed45', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f4b40e0f-999e-445b-b382-f95dc10d9fcc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e233789e-bb2d-4beb-a35c-8d27498a648c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=27d4e523-f411-4c37-9c85-1408d599e7fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.851 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:8c:91 10.100.1.130'], port_security=['fa:16:3e:b4:8c:91 10.100.1.130'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.130/24', 'neutron:device_id': 'db2f61ae-900d-44ed-a7a6-e02df74fcd02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da24f0e2d59745828feaaecfeb9fed45', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f4b40e0f-999e-445b-b382-f95dc10d9fcc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26236771-682d-4b17-9e5c-46b882fdf23d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.855 251996 DEBUG nova.virt.libvirt.vif [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1667420854',display_name='tempest-ServersTestMultiNic-server-1667420854',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1667420854',id=51,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:11:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-x40zn6bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-889484419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:11:29Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=db2f61ae-900d-44ed-a7a6-e02df74fcd02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "516c0976-46a2-4bcc-b9b8-278383257c28", "address": "fa:16:3e:ac:89:33", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.197", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap516c0976-46", "ovs_interfaceid": "516c0976-46a2-4bcc-b9b8-278383257c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.856 251996 DEBUG nova.network.os_vif_util [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "516c0976-46a2-4bcc-b9b8-278383257c28", "address": "fa:16:3e:ac:89:33", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.197", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap516c0976-46", "ovs_interfaceid": "516c0976-46a2-4bcc-b9b8-278383257c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.856 251996 DEBUG nova.network.os_vif_util [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:89:33,bridge_name='br-int',has_traffic_filtering=True,id=516c0976-46a2-4bcc-b9b8-278383257c28,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap516c0976-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.857 251996 DEBUG os_vif [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:89:33,bridge_name='br-int',has_traffic_filtering=True,id=516c0976-46a2-4bcc-b9b8-278383257c28,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap516c0976-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.858 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.859 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap516c0976-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.860 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.861 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a92d7dfd-b758-466f-9c0b-c9f44cc47657]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.862 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.865 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.868 251996 INFO os_vif [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:89:33,bridge_name='br-int',has_traffic_filtering=True,id=516c0976-46a2-4bcc-b9b8-278383257c28,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap516c0976-46')
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.868 251996 DEBUG nova.virt.libvirt.vif [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1667420854',display_name='tempest-ServersTestMultiNic-server-1667420854',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1667420854',id=51,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:11:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-x40zn6bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-889484419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:11:29Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=db2f61ae-900d-44ed-a7a6-e02df74fcd02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "address": "fa:16:3e:b4:8c:91", "network": {"id": "cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1345723289", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cf6d1-e3", "ovs_interfaceid": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.869 251996 DEBUG nova.network.os_vif_util [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "address": "fa:16:3e:b4:8c:91", "network": {"id": "cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1345723289", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cf6d1-e3", "ovs_interfaceid": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.869 251996 DEBUG nova.network.os_vif_util [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8c:91,bridge_name='br-int',has_traffic_filtering=True,id=2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f,network=Network(cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b3cf6d1-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.869 251996 DEBUG os_vif [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8c:91,bridge_name='br-int',has_traffic_filtering=True,id=2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f,network=Network(cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b3cf6d1-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.871 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.871 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b3cf6d1-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.874 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.876 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.877 251996 INFO os_vif [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:8c:91,bridge_name='br-int',has_traffic_filtering=True,id=2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f,network=Network(cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b3cf6d1-e3')
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.878 251996 DEBUG nova.virt.libvirt.vif [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:11:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1667420854',display_name='tempest-ServersTestMultiNic-server-1667420854',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1667420854',id=51,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:11:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da24f0e2d59745828feaaecfeb9fed45',ramdisk_id='',reservation_id='r-x40zn6bs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-889484419',owner_user_name='tempest-ServersTestMultiNic-889484419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:11:29Z,user_data=None,user_id='337447c5cffc48bf8256c9166a6ff0e2',uuid=db2f61ae-900d-44ed-a7a6-e02df74fcd02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27d4e523-f411-4c37-9c85-1408d599e7fd", "address": "fa:16:3e:7a:09:1b", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d4e523-f4", "ovs_interfaceid": "27d4e523-f411-4c37-9c85-1408d599e7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.879 251996 DEBUG nova.network.os_vif_util [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converting VIF {"id": "27d4e523-f411-4c37-9c85-1408d599e7fd", "address": "fa:16:3e:7a:09:1b", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d4e523-f4", "ovs_interfaceid": "27d4e523-f411-4c37-9c85-1408d599e7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.879 251996 DEBUG nova.network.os_vif_util [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:09:1b,bridge_name='br-int',has_traffic_filtering=True,id=27d4e523-f411-4c37-9c85-1408d599e7fd,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27d4e523-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.880 251996 DEBUG os_vif [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:09:1b,bridge_name='br-int',has_traffic_filtering=True,id=27d4e523-f411-4c37-9c85-1408d599e7fd,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27d4e523-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.881 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.881 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27d4e523-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.885 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.886 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[83109e6a-eebe-4168-b7a4-dccb64b76f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.887 251996 INFO os_vif [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:09:1b,bridge_name='br-int',has_traffic_filtering=True,id=27d4e523-f411-4c37-9c85-1408d599e7fd,network=Network(84d05a1d-1f7d-4572-b589-e66f411ec5b2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27d4e523-f4')
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.889 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[be1c6314-2fbf-4816-b66f-ce299711e4ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.914 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ba51f9c6-b62c-432f-adc4-0cc9ba3bc9c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.929 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[37a86822-5321-431b-90dc-33b90f36afe8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84d05a1d-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ca:55:69'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535707, 'reachable_time': 34184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286605, 'error': None, 'target': 'ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.944 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1283fa2e-31cb-427e-aa4a-8c037b7aace3]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap84d05a1d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535716, 'tstamp': 535716}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286609, 'error': None, 'target': 'ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84d05a1d-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535718, 'tstamp': 535718}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286609, 'error': None, 'target': 'ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.946 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84d05a1d-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:31 compute-0 nova_compute[251992]: 2025-12-06 07:11:31.947 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.948 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84d05a1d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.948 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.949 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84d05a1d-10, col_values=(('external_ids', {'iface-id': 'faed6d56-bc08-4e6e-b59e-721c2cca5019'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.949 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.950 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 27d4e523-f411-4c37-9c85-1408d599e7fd in datapath 84d05a1d-1f7d-4572-b589-e66f411ec5b2 unbound from our chassis
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.952 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84d05a1d-1f7d-4572-b589-e66f411ec5b2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.952 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8478e8-8a7e-488a-8173-059e59d7f97d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:31.953 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2 namespace which is not needed anymore
Dec 06 07:11:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:11:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:32.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:11:32 compute-0 neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2[286427]: [NOTICE]   (286431) : haproxy version is 2.8.14-c23fe91
Dec 06 07:11:32 compute-0 neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2[286427]: [NOTICE]   (286431) : path to executable is /usr/sbin/haproxy
Dec 06 07:11:32 compute-0 neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2[286427]: [WARNING]  (286431) : Exiting Master process...
Dec 06 07:11:32 compute-0 neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2[286427]: [ALERT]    (286431) : Current worker (286433) exited with code 143 (Terminated)
Dec 06 07:11:32 compute-0 neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2[286427]: [WARNING]  (286431) : All workers exited. Exiting... (0)
Dec 06 07:11:32 compute-0 systemd[1]: libpod-e6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c.scope: Deactivated successfully.
Dec 06 07:11:32 compute-0 podman[286627]: 2025-12-06 07:11:32.067278898 +0000 UTC m=+0.039931036 container died e6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:11:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c-userdata-shm.mount: Deactivated successfully.
Dec 06 07:11:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-db2d0990d6e45dce14db606dedc9a6dbca012e0a7fd4a15bfc55431ef79f2121-merged.mount: Deactivated successfully.
Dec 06 07:11:32 compute-0 podman[286627]: 2025-12-06 07:11:32.115683032 +0000 UTC m=+0.088335170 container cleanup e6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:11:32 compute-0 systemd[1]: libpod-conmon-e6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c.scope: Deactivated successfully.
Dec 06 07:11:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:32.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:32 compute-0 podman[286657]: 2025-12-06 07:11:32.184636355 +0000 UTC m=+0.045333518 container remove e6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.190 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1f45177f-1b3d-477d-a04d-87cb09e4732d]: (4, ('Sat Dec  6 07:11:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2 (e6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c)\ne6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c\nSat Dec  6 07:11:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2 (e6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c)\ne6de313907f1bcd2559b5f5ef0cbc9c645619b9bfbebd73f3d7a94563956c64c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.193 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8a732975-368c-42ff-a980-5f922989d831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.194 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84d05a1d-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:32 compute-0 kernel: tap84d05a1d-10: left promiscuous mode
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.197 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.209 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.213 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3ee0e8-1bf3-4aa7-b846-e2ebe3395207]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.225 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[974e7378-9c73-4229-b204-496aa84984cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.227 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[422db469-6de8-41f6-9054-28071c524ffb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.240 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[95606538-4e48-47a0-9199-6a3246e775a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535701, 'reachable_time': 23568, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286672, 'error': None, 'target': 'ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.242 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84d05a1d-1f7d-4572-b589-e66f411ec5b2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.242 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd988cb-ff41-4528-b9da-3f15b855ecca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d84d05a1d\x2d1f7d\x2d4572\x2db589\x2de66f411ec5b2.mount: Deactivated successfully.
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.244 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f in datapath cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb unbound from our chassis
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.245 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.246 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f451399e-fe03-4ac1-b747-b628c9d5563e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.247 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb namespace which is not needed anymore
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.307 251996 INFO nova.virt.libvirt.driver [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Deleting instance files /var/lib/nova/instances/db2f61ae-900d-44ed-a7a6-e02df74fcd02_del
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.308 251996 INFO nova.virt.libvirt.driver [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Deletion of /var/lib/nova/instances/db2f61ae-900d-44ed-a7a6-e02df74fcd02_del complete
Dec 06 07:11:32 compute-0 neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb[286508]: [NOTICE]   (286512) : haproxy version is 2.8.14-c23fe91
Dec 06 07:11:32 compute-0 neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb[286508]: [NOTICE]   (286512) : path to executable is /usr/sbin/haproxy
Dec 06 07:11:32 compute-0 neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb[286508]: [WARNING]  (286512) : Exiting Master process...
Dec 06 07:11:32 compute-0 neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb[286508]: [ALERT]    (286512) : Current worker (286514) exited with code 143 (Terminated)
Dec 06 07:11:32 compute-0 neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb[286508]: [WARNING]  (286512) : All workers exited. Exiting... (0)
Dec 06 07:11:32 compute-0 systemd[1]: libpod-b73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a.scope: Deactivated successfully.
Dec 06 07:11:32 compute-0 podman[286688]: 2025-12-06 07:11:32.387897572 +0000 UTC m=+0.041369607 container died b73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:11:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a-userdata-shm.mount: Deactivated successfully.
Dec 06 07:11:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-433088bb5cd1227f4d1f18802a90fc0a0dc7ab2e6ea7d835110ab7d9789d7605-merged.mount: Deactivated successfully.
Dec 06 07:11:32 compute-0 podman[286688]: 2025-12-06 07:11:32.417373182 +0000 UTC m=+0.070845227 container cleanup b73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:11:32 compute-0 ceph-mon[74339]: pgmap v1570: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 123 KiB/s wr, 169 op/s
Dec 06 07:11:32 compute-0 ceph-mon[74339]: osdmap e219: 3 total, 3 up, 3 in
Dec 06 07:11:32 compute-0 systemd[1]: libpod-conmon-b73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a.scope: Deactivated successfully.
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.437 251996 DEBUG nova.compute.manager [req-dbca7284-9e15-4805-9ce7-eb309d3ce07b req-8e21c4da-a29f-4d9f-be4b-42aed95028c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-unplugged-516c0976-46a2-4bcc-b9b8-278383257c28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.437 251996 DEBUG oslo_concurrency.lockutils [req-dbca7284-9e15-4805-9ce7-eb309d3ce07b req-8e21c4da-a29f-4d9f-be4b-42aed95028c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.438 251996 DEBUG oslo_concurrency.lockutils [req-dbca7284-9e15-4805-9ce7-eb309d3ce07b req-8e21c4da-a29f-4d9f-be4b-42aed95028c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.438 251996 DEBUG oslo_concurrency.lockutils [req-dbca7284-9e15-4805-9ce7-eb309d3ce07b req-8e21c4da-a29f-4d9f-be4b-42aed95028c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.438 251996 DEBUG nova.compute.manager [req-dbca7284-9e15-4805-9ce7-eb309d3ce07b req-8e21c4da-a29f-4d9f-be4b-42aed95028c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] No waiting events found dispatching network-vif-unplugged-516c0976-46a2-4bcc-b9b8-278383257c28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.439 251996 DEBUG nova.compute.manager [req-dbca7284-9e15-4805-9ce7-eb309d3ce07b req-8e21c4da-a29f-4d9f-be4b-42aed95028c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-unplugged-516c0976-46a2-4bcc-b9b8-278383257c28 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:11:32 compute-0 podman[286717]: 2025-12-06 07:11:32.473530645 +0000 UTC m=+0.037039925 container remove b73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.478 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[78d72875-82e1-41a3-8165-a2d6a0724f27]: (4, ('Sat Dec  6 07:11:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb (b73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a)\nb73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a\nSat Dec  6 07:11:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb (b73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a)\nb73490745130c2f8c9ea49461e2f118afbe154fe8814ebf03c5b23431678587a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.480 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[281d5847-1a4a-4bca-bcf4-2804bf840a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.481 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcce6bacf-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.482 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:32 compute-0 kernel: tapcce6bacf-f0: left promiscuous mode
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.485 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.487 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9c366f3d-29ae-4d10-8ec9-2ed01864625d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.496 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.502 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[32f2f270-7ee7-4cc9-88ab-cbf049c4b9af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.503 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b7772c07-0295-436d-b517-13950229cf44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.518 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4aa5a6-3597-4f86-a04f-52aa7b1b9f7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535790, 'reachable_time': 18799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286734, 'error': None, 'target': 'ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.520 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:11:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:32.520 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7d55ae-416b-497c-97c5-849b4d9b1326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.555 251996 INFO nova.compute.manager [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Took 0.98 seconds to destroy the instance on the hypervisor.
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.556 251996 DEBUG oslo.service.loopingcall [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.556 251996 DEBUG nova.compute.manager [-] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.557 251996 DEBUG nova.network.neutron [-] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:11:32 compute-0 podman[286732]: 2025-12-06 07:11:32.604506545 +0000 UTC m=+0.090339006 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:11:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.695 251996 DEBUG nova.compute.manager [req-36b88791-ee87-4c77-a583-47df6e8aa195 req-d413113a-2ffc-4fc9-82a3-8a42d9245b6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-unplugged-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.695 251996 DEBUG oslo_concurrency.lockutils [req-36b88791-ee87-4c77-a583-47df6e8aa195 req-d413113a-2ffc-4fc9-82a3-8a42d9245b6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.695 251996 DEBUG oslo_concurrency.lockutils [req-36b88791-ee87-4c77-a583-47df6e8aa195 req-d413113a-2ffc-4fc9-82a3-8a42d9245b6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.695 251996 DEBUG oslo_concurrency.lockutils [req-36b88791-ee87-4c77-a583-47df6e8aa195 req-d413113a-2ffc-4fc9-82a3-8a42d9245b6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.696 251996 DEBUG nova.compute.manager [req-36b88791-ee87-4c77-a583-47df6e8aa195 req-d413113a-2ffc-4fc9-82a3-8a42d9245b6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] No waiting events found dispatching network-vif-unplugged-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:11:32 compute-0 nova_compute[251992]: 2025-12-06 07:11:32.696 251996 DEBUG nova.compute.manager [req-36b88791-ee87-4c77-a583-47df6e8aa195 req-d413113a-2ffc-4fc9-82a3-8a42d9245b6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-unplugged-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:11:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Dec 06 07:11:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Dec 06 07:11:33 compute-0 systemd[1]: run-netns-ovnmeta\x2dcce6bacf\x2dfd8d\x2d4b01\x2d8fee\x2dcda47e0e8ffb.mount: Deactivated successfully.
Dec 06 07:11:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 21 KiB/s wr, 130 op/s
Dec 06 07:11:33 compute-0 nova_compute[251992]: 2025-12-06 07:11:33.317 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:33 compute-0 nova_compute[251992]: 2025-12-06 07:11:33.460 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:33.462 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:11:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:33.463 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:11:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Dec 06 07:11:33 compute-0 ceph-mon[74339]: osdmap e220: 3 total, 3 up, 3 in
Dec 06 07:11:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Dec 06 07:11:33 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Dec 06 07:11:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:34.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:34.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:34.463 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.649 251996 DEBUG nova.compute.manager [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-plugged-516c0976-46a2-4bcc-b9b8-278383257c28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.650 251996 DEBUG oslo_concurrency.lockutils [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.650 251996 DEBUG oslo_concurrency.lockutils [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.650 251996 DEBUG oslo_concurrency.lockutils [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.650 251996 DEBUG nova.compute.manager [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] No waiting events found dispatching network-vif-plugged-516c0976-46a2-4bcc-b9b8-278383257c28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.650 251996 WARNING nova.compute.manager [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received unexpected event network-vif-plugged-516c0976-46a2-4bcc-b9b8-278383257c28 for instance with vm_state active and task_state deleting.
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.651 251996 DEBUG nova.compute.manager [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-unplugged-27d4e523-f411-4c37-9c85-1408d599e7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.651 251996 DEBUG oslo_concurrency.lockutils [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.651 251996 DEBUG oslo_concurrency.lockutils [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.651 251996 DEBUG oslo_concurrency.lockutils [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.651 251996 DEBUG nova.compute.manager [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] No waiting events found dispatching network-vif-unplugged-27d4e523-f411-4c37-9c85-1408d599e7fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.651 251996 DEBUG nova.compute.manager [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-unplugged-27d4e523-f411-4c37-9c85-1408d599e7fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.652 251996 DEBUG nova.compute.manager [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-deleted-516c0976-46a2-4bcc-b9b8-278383257c28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.652 251996 INFO nova.compute.manager [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Neutron deleted interface 516c0976-46a2-4bcc-b9b8-278383257c28; detaching it from the instance and deleting it from the info cache
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.652 251996 DEBUG nova.network.neutron [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Updating instance_info_cache with network_info: [{"id": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "address": "fa:16:3e:b4:8c:91", "network": {"id": "cce6bacf-fd8d-4b01-8fee-cda47e0e8ffb", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1345723289", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.130", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b3cf6d1-e3", "ovs_interfaceid": "2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "27d4e523-f411-4c37-9c85-1408d599e7fd", "address": "fa:16:3e:7a:09:1b", "network": {"id": "84d05a1d-1f7d-4572-b589-e66f411ec5b2", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1480893278", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da24f0e2d59745828feaaecfeb9fed45", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27d4e523-f4", "ovs_interfaceid": "27d4e523-f411-4c37-9c85-1408d599e7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.677 251996 DEBUG nova.compute.manager [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Detach interface failed, port_id=516c0976-46a2-4bcc-b9b8-278383257c28, reason: Instance db2f61ae-900d-44ed-a7a6-e02df74fcd02 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.677 251996 DEBUG nova.compute.manager [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-plugged-27d4e523-f411-4c37-9c85-1408d599e7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.678 251996 DEBUG oslo_concurrency.lockutils [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.678 251996 DEBUG oslo_concurrency.lockutils [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.678 251996 DEBUG oslo_concurrency.lockutils [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.678 251996 DEBUG nova.compute.manager [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] No waiting events found dispatching network-vif-plugged-27d4e523-f411-4c37-9c85-1408d599e7fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.678 251996 WARNING nova.compute.manager [req-59f305e6-9109-4085-b8d8-759bccfdf979 req-b6707daf-e110-487b-bed0-e425b0ac28e2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received unexpected event network-vif-plugged-27d4e523-f411-4c37-9c85-1408d599e7fd for instance with vm_state active and task_state deleting.
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.790 251996 DEBUG nova.compute.manager [req-cae522f0-f0b7-4f47-b5f0-6e969aeec5d4 req-5018fb16-0230-4756-a2ad-ad229cad9b4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-plugged-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.791 251996 DEBUG oslo_concurrency.lockutils [req-cae522f0-f0b7-4f47-b5f0-6e969aeec5d4 req-5018fb16-0230-4756-a2ad-ad229cad9b4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.791 251996 DEBUG oslo_concurrency.lockutils [req-cae522f0-f0b7-4f47-b5f0-6e969aeec5d4 req-5018fb16-0230-4756-a2ad-ad229cad9b4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.791 251996 DEBUG oslo_concurrency.lockutils [req-cae522f0-f0b7-4f47-b5f0-6e969aeec5d4 req-5018fb16-0230-4756-a2ad-ad229cad9b4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.792 251996 DEBUG nova.compute.manager [req-cae522f0-f0b7-4f47-b5f0-6e969aeec5d4 req-5018fb16-0230-4756-a2ad-ad229cad9b4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] No waiting events found dispatching network-vif-plugged-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:11:34 compute-0 nova_compute[251992]: 2025-12-06 07:11:34.792 251996 WARNING nova.compute.manager [req-cae522f0-f0b7-4f47-b5f0-6e969aeec5d4 req-5018fb16-0230-4756-a2ad-ad229cad9b4d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received unexpected event network-vif-plugged-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f for instance with vm_state active and task_state deleting.
Dec 06 07:11:34 compute-0 ceph-mon[74339]: pgmap v1573: 305 pgs: 305 active+clean; 214 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 21 KiB/s wr, 130 op/s
Dec 06 07:11:34 compute-0 ceph-mon[74339]: osdmap e221: 3 total, 3 up, 3 in
Dec 06 07:11:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 200 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 666 KiB/s wr, 238 op/s
Dec 06 07:11:35 compute-0 sudo[286762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:35 compute-0 sudo[286762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:35 compute-0 sudo[286762]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:35 compute-0 sudo[286787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:11:35 compute-0 sudo[286787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:35 compute-0 sudo[286787]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:35 compute-0 sudo[286812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:35 compute-0 sudo[286812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:35 compute-0 sudo[286812]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:35 compute-0 sudo[286837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:11:35 compute-0 sudo[286837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:35 compute-0 sudo[286837]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:36.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:36 compute-0 ceph-mon[74339]: pgmap v1575: 305 pgs: 305 active+clean; 200 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 666 KiB/s wr, 238 op/s
Dec 06 07:11:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:11:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:36.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:11:36 compute-0 nova_compute[251992]: 2025-12-06 07:11:36.491 251996 DEBUG nova.network.neutron [-] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:11:36 compute-0 nova_compute[251992]: 2025-12-06 07:11:36.522 251996 INFO nova.compute.manager [-] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Took 3.97 seconds to deallocate network for instance.
Dec 06 07:11:36 compute-0 nova_compute[251992]: 2025-12-06 07:11:36.580 251996 DEBUG oslo_concurrency.lockutils [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:36 compute-0 nova_compute[251992]: 2025-12-06 07:11:36.581 251996 DEBUG oslo_concurrency.lockutils [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:36 compute-0 nova_compute[251992]: 2025-12-06 07:11:36.652 251996 DEBUG oslo_concurrency.processutils [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:36 compute-0 nova_compute[251992]: 2025-12-06 07:11:36.880 251996 DEBUG nova.compute.manager [req-5aadf657-ab15-41c2-a3e3-82f4629d4913 req-8cfd697b-cb31-4a45-83e6-9b040c4b5c52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-deleted-27d4e523-f411-4c37-9c85-1408d599e7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:36 compute-0 nova_compute[251992]: 2025-12-06 07:11:36.880 251996 DEBUG nova.compute.manager [req-5aadf657-ab15-41c2-a3e3-82f4629d4913 req-8cfd697b-cb31-4a45-83e6-9b040c4b5c52 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Received event network-vif-deleted-2b3cf6d1-e3b0-4b92-a58c-8c0f991a471f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:36 compute-0 nova_compute[251992]: 2025-12-06 07:11:36.885 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:11:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:11:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627254312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:11:37 compute-0 nova_compute[251992]: 2025-12-06 07:11:37.083 251996 DEBUG oslo_concurrency.processutils [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:37 compute-0 nova_compute[251992]: 2025-12-06 07:11:37.089 251996 DEBUG nova.compute.provider_tree [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:11:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:37 compute-0 nova_compute[251992]: 2025-12-06 07:11:37.196 251996 DEBUG nova.scheduler.client.report [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:11:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 213 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 3.6 MiB/s wr, 253 op/s
Dec 06 07:11:37 compute-0 nova_compute[251992]: 2025-12-06 07:11:37.265 251996 DEBUG oslo_concurrency.lockutils [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:37 compute-0 nova_compute[251992]: 2025-12-06 07:11:37.310 251996 INFO nova.scheduler.client.report [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Deleted allocations for instance db2f61ae-900d-44ed-a7a6-e02df74fcd02
Dec 06 07:11:37 compute-0 nova_compute[251992]: 2025-12-06 07:11:37.502 251996 DEBUG oslo_concurrency.lockutils [None req-e7805789-b3f4-40df-8da4-a837476efb64 337447c5cffc48bf8256c9166a6ff0e2 da24f0e2d59745828feaaecfeb9fed45 - - default default] Lock "db2f61ae-900d-44ed-a7a6-e02df74fcd02" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:11:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:11:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:11:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:11:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:11:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:37 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b4574151-1bae-4ad4-af7d-f3c9209aeb62 does not exist
Dec 06 07:11:37 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 682ba136-3e17-4d72-bdee-1ff78b9ea9d7 does not exist
Dec 06 07:11:37 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 896ca1ad-19f5-47bd-b66a-dba97fd6c802 does not exist
Dec 06 07:11:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:11:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:11:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:11:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:11:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:11:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:11:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000055s ======
Dec 06 07:11:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:38.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Dec 06 07:11:38 compute-0 sudo[286917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:38 compute-0 sudo[286917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:38 compute-0 sudo[286917]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3627254312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:38 compute-0 ceph-mon[74339]: pgmap v1576: 305 pgs: 305 active+clean; 213 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 3.6 MiB/s wr, 253 op/s
Dec 06 07:11:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:11:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:11:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:11:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:11:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:11:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Dec 06 07:11:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Dec 06 07:11:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Dec 06 07:11:38 compute-0 podman[286941]: 2025-12-06 07:11:38.118805486 +0000 UTC m=+0.053201940 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:11:38 compute-0 sudo[286954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:11:38 compute-0 podman[286942]: 2025-12-06 07:11:38.12675184 +0000 UTC m=+0.057944434 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:11:38 compute-0 sudo[286954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:38 compute-0 sudo[286954]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:38.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:38 compute-0 sudo[287000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:38 compute-0 sudo[287000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:38 compute-0 sudo[287000]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:38 compute-0 sudo[287025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:11:38 compute-0 sudo[287025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:38 compute-0 nova_compute[251992]: 2025-12-06 07:11:38.371 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:38 compute-0 podman[287091]: 2025-12-06 07:11:38.541086755 +0000 UTC m=+0.037742984 container create d90093bcaa03481a8ffb5f4eb942c778bfdc6754258e5d71753f687ff16a5cf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 07:11:38 compute-0 systemd[1]: Started libpod-conmon-d90093bcaa03481a8ffb5f4eb942c778bfdc6754258e5d71753f687ff16a5cf9.scope.
Dec 06 07:11:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:11:38 compute-0 podman[287091]: 2025-12-06 07:11:38.524668092 +0000 UTC m=+0.021324341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:11:38 compute-0 podman[287091]: 2025-12-06 07:11:38.623213558 +0000 UTC m=+0.119869787 container init d90093bcaa03481a8ffb5f4eb942c778bfdc6754258e5d71753f687ff16a5cf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_leakey, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:11:38 compute-0 podman[287091]: 2025-12-06 07:11:38.631179763 +0000 UTC m=+0.127835992 container start d90093bcaa03481a8ffb5f4eb942c778bfdc6754258e5d71753f687ff16a5cf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_leakey, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 07:11:38 compute-0 podman[287091]: 2025-12-06 07:11:38.634579799 +0000 UTC m=+0.131236058 container attach d90093bcaa03481a8ffb5f4eb942c778bfdc6754258e5d71753f687ff16a5cf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:11:38 compute-0 keen_leakey[287108]: 167 167
Dec 06 07:11:38 compute-0 systemd[1]: libpod-d90093bcaa03481a8ffb5f4eb942c778bfdc6754258e5d71753f687ff16a5cf9.scope: Deactivated successfully.
Dec 06 07:11:38 compute-0 conmon[287108]: conmon d90093bcaa03481a8ffb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d90093bcaa03481a8ffb5f4eb942c778bfdc6754258e5d71753f687ff16a5cf9.scope/container/memory.events
Dec 06 07:11:38 compute-0 podman[287091]: 2025-12-06 07:11:38.639417775 +0000 UTC m=+0.136074004 container died d90093bcaa03481a8ffb5f4eb942c778bfdc6754258e5d71753f687ff16a5cf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_leakey, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:11:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Dec 06 07:11:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed887e037847c08694002469be1b0fd8a64ac76f5df0ead4ed53c4fb9c8d6faf-merged.mount: Deactivated successfully.
Dec 06 07:11:38 compute-0 podman[287091]: 2025-12-06 07:11:38.800589937 +0000 UTC m=+0.297246166 container remove d90093bcaa03481a8ffb5f4eb942c778bfdc6754258e5d71753f687ff16a5cf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_leakey, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:11:38 compute-0 systemd[1]: libpod-conmon-d90093bcaa03481a8ffb5f4eb942c778bfdc6754258e5d71753f687ff16a5cf9.scope: Deactivated successfully.
Dec 06 07:11:38 compute-0 podman[287133]: 2025-12-06 07:11:38.96147919 +0000 UTC m=+0.037615471 container create e97dbb65812b42d287c2d6ef6d47064a2e1b3429cb4144129467cc9997486efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_newton, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:11:38 compute-0 systemd[1]: Started libpod-conmon-e97dbb65812b42d287c2d6ef6d47064a2e1b3429cb4144129467cc9997486efc.scope.
Dec 06 07:11:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5038f8e9e88c6aa1d7b15f63032a057bf3cb035c865b5331280608575d91d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5038f8e9e88c6aa1d7b15f63032a057bf3cb035c865b5331280608575d91d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5038f8e9e88c6aa1d7b15f63032a057bf3cb035c865b5331280608575d91d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5038f8e9e88c6aa1d7b15f63032a057bf3cb035c865b5331280608575d91d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5038f8e9e88c6aa1d7b15f63032a057bf3cb035c865b5331280608575d91d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:39 compute-0 podman[287133]: 2025-12-06 07:11:39.024594308 +0000 UTC m=+0.100730619 container init e97dbb65812b42d287c2d6ef6d47064a2e1b3429cb4144129467cc9997486efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 07:11:39 compute-0 podman[287133]: 2025-12-06 07:11:39.032542821 +0000 UTC m=+0.108679102 container start e97dbb65812b42d287c2d6ef6d47064a2e1b3429cb4144129467cc9997486efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_newton, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:11:39 compute-0 podman[287133]: 2025-12-06 07:11:39.038023226 +0000 UTC m=+0.114159497 container attach e97dbb65812b42d287c2d6ef6d47064a2e1b3429cb4144129467cc9997486efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 07:11:39 compute-0 podman[287133]: 2025-12-06 07:11:38.946177369 +0000 UTC m=+0.022313670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:11:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 213 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.4 MiB/s wr, 194 op/s
Dec 06 07:11:39 compute-0 sudo[287155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:39 compute-0 sudo[287155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:39 compute-0 sudo[287155]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:39 compute-0 sudo[287180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:39 compute-0 sudo[287180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:39 compute-0 sudo[287180]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:39 compute-0 distracted_newton[287150]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:11:39 compute-0 distracted_newton[287150]: --> relative data size: 1.0
Dec 06 07:11:39 compute-0 distracted_newton[287150]: --> All data devices are unavailable
Dec 06 07:11:39 compute-0 systemd[1]: libpod-e97dbb65812b42d287c2d6ef6d47064a2e1b3429cb4144129467cc9997486efc.scope: Deactivated successfully.
Dec 06 07:11:39 compute-0 podman[287133]: 2025-12-06 07:11:39.849732147 +0000 UTC m=+0.925868428 container died e97dbb65812b42d287c2d6ef6d47064a2e1b3429cb4144129467cc9997486efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_newton, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:11:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b5038f8e9e88c6aa1d7b15f63032a057bf3cb035c865b5331280608575d91d0-merged.mount: Deactivated successfully.
Dec 06 07:11:39 compute-0 podman[287133]: 2025-12-06 07:11:39.907069323 +0000 UTC m=+0.983205604 container remove e97dbb65812b42d287c2d6ef6d47064a2e1b3429cb4144129467cc9997486efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_newton, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:11:39 compute-0 systemd[1]: libpod-conmon-e97dbb65812b42d287c2d6ef6d47064a2e1b3429cb4144129467cc9997486efc.scope: Deactivated successfully.
Dec 06 07:11:39 compute-0 sudo[287025]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:40 compute-0 sudo[287227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:40 compute-0 sudo[287227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:40 compute-0 sudo[287227]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:11:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:40.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:11:40 compute-0 sudo[287252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:11:40 compute-0 sudo[287252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:40 compute-0 sudo[287252]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:40 compute-0 sudo[287277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:40 compute-0 sudo[287277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:40 compute-0 sudo[287277]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:40.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:40 compute-0 sudo[287302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:11:40 compute-0 sudo[287302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:40 compute-0 podman[287370]: 2025-12-06 07:11:40.54403875 +0000 UTC m=+0.048000723 container create c836aa8d666f868742df03f472d73b4088ea7094bf856dcb6da622f63b5c7d4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:11:40 compute-0 systemd[1]: Started libpod-conmon-c836aa8d666f868742df03f472d73b4088ea7094bf856dcb6da622f63b5c7d4f.scope.
Dec 06 07:11:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:11:40 compute-0 podman[287370]: 2025-12-06 07:11:40.524926701 +0000 UTC m=+0.028888664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:11:40 compute-0 podman[287370]: 2025-12-06 07:11:40.622243184 +0000 UTC m=+0.126205147 container init c836aa8d666f868742df03f472d73b4088ea7094bf856dcb6da622f63b5c7d4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:11:40 compute-0 podman[287370]: 2025-12-06 07:11:40.63274254 +0000 UTC m=+0.136704483 container start c836aa8d666f868742df03f472d73b4088ea7094bf856dcb6da622f63b5c7d4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 07:11:40 compute-0 podman[287370]: 2025-12-06 07:11:40.636141345 +0000 UTC m=+0.140103308 container attach c836aa8d666f868742df03f472d73b4088ea7094bf856dcb6da622f63b5c7d4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_liskov, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:11:40 compute-0 vigilant_liskov[287386]: 167 167
Dec 06 07:11:40 compute-0 systemd[1]: libpod-c836aa8d666f868742df03f472d73b4088ea7094bf856dcb6da622f63b5c7d4f.scope: Deactivated successfully.
Dec 06 07:11:40 compute-0 podman[287370]: 2025-12-06 07:11:40.638205404 +0000 UTC m=+0.142167367 container died c836aa8d666f868742df03f472d73b4088ea7094bf856dcb6da622f63b5c7d4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:11:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4451e5bf4d708e334311dfdb286928807ee6352e07640d671c0706e2af11eb2-merged.mount: Deactivated successfully.
Dec 06 07:11:40 compute-0 nova_compute[251992]: 2025-12-06 07:11:40.676 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:40 compute-0 podman[287370]: 2025-12-06 07:11:40.685187627 +0000 UTC m=+0.189149570 container remove c836aa8d666f868742df03f472d73b4088ea7094bf856dcb6da622f63b5c7d4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_liskov, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 07:11:40 compute-0 systemd[1]: libpod-conmon-c836aa8d666f868742df03f472d73b4088ea7094bf856dcb6da622f63b5c7d4f.scope: Deactivated successfully.
Dec 06 07:11:40 compute-0 podman[287410]: 2025-12-06 07:11:40.857957265 +0000 UTC m=+0.051164752 container create 0f64fbd097609a05a83a96976f639f1352e484caddca30d32e73d995e7352fa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:11:40 compute-0 systemd[1]: Started libpod-conmon-0f64fbd097609a05a83a96976f639f1352e484caddca30d32e73d995e7352fa3.scope.
Dec 06 07:11:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5713fe2b4f512e8edbff845fb527852661e73f36105df39357da807de9e1fe61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5713fe2b4f512e8edbff845fb527852661e73f36105df39357da807de9e1fe61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5713fe2b4f512e8edbff845fb527852661e73f36105df39357da807de9e1fe61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5713fe2b4f512e8edbff845fb527852661e73f36105df39357da807de9e1fe61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:40 compute-0 podman[287410]: 2025-12-06 07:11:40.831977073 +0000 UTC m=+0.025184610 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:11:40 compute-0 podman[287410]: 2025-12-06 07:11:40.948480035 +0000 UTC m=+0.141687592 container init 0f64fbd097609a05a83a96976f639f1352e484caddca30d32e73d995e7352fa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:11:40 compute-0 podman[287410]: 2025-12-06 07:11:40.959553097 +0000 UTC m=+0.152760624 container start 0f64fbd097609a05a83a96976f639f1352e484caddca30d32e73d995e7352fa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:11:40 compute-0 podman[287410]: 2025-12-06 07:11:40.96390814 +0000 UTC m=+0.157115627 container attach 0f64fbd097609a05a83a96976f639f1352e484caddca30d32e73d995e7352fa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 07:11:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 194 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 201 op/s
Dec 06 07:11:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Dec 06 07:11:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Dec 06 07:11:41 compute-0 ceph-mon[74339]: osdmap e222: 3 total, 3 up, 3 in
Dec 06 07:11:41 compute-0 practical_nash[287426]: {
Dec 06 07:11:41 compute-0 practical_nash[287426]:     "0": [
Dec 06 07:11:41 compute-0 practical_nash[287426]:         {
Dec 06 07:11:41 compute-0 practical_nash[287426]:             "devices": [
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "/dev/loop3"
Dec 06 07:11:41 compute-0 practical_nash[287426]:             ],
Dec 06 07:11:41 compute-0 practical_nash[287426]:             "lv_name": "ceph_lv0",
Dec 06 07:11:41 compute-0 practical_nash[287426]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:11:41 compute-0 practical_nash[287426]:             "lv_size": "7511998464",
Dec 06 07:11:41 compute-0 practical_nash[287426]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:11:41 compute-0 practical_nash[287426]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:11:41 compute-0 practical_nash[287426]:             "name": "ceph_lv0",
Dec 06 07:11:41 compute-0 practical_nash[287426]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:11:41 compute-0 practical_nash[287426]:             "tags": {
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "ceph.cluster_name": "ceph",
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "ceph.crush_device_class": "",
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "ceph.encrypted": "0",
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "ceph.osd_id": "0",
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "ceph.type": "block",
Dec 06 07:11:41 compute-0 practical_nash[287426]:                 "ceph.vdo": "0"
Dec 06 07:11:41 compute-0 practical_nash[287426]:             },
Dec 06 07:11:41 compute-0 practical_nash[287426]:             "type": "block",
Dec 06 07:11:41 compute-0 practical_nash[287426]:             "vg_name": "ceph_vg0"
Dec 06 07:11:41 compute-0 practical_nash[287426]:         }
Dec 06 07:11:41 compute-0 practical_nash[287426]:     ]
Dec 06 07:11:41 compute-0 practical_nash[287426]: }
Dec 06 07:11:41 compute-0 systemd[1]: libpod-0f64fbd097609a05a83a96976f639f1352e484caddca30d32e73d995e7352fa3.scope: Deactivated successfully.
Dec 06 07:11:41 compute-0 podman[287410]: 2025-12-06 07:11:41.747950432 +0000 UTC m=+0.941157929 container died 0f64fbd097609a05a83a96976f639f1352e484caddca30d32e73d995e7352fa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 07:11:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5713fe2b4f512e8edbff845fb527852661e73f36105df39357da807de9e1fe61-merged.mount: Deactivated successfully.
Dec 06 07:11:41 compute-0 podman[287410]: 2025-12-06 07:11:41.800999177 +0000 UTC m=+0.994206664 container remove 0f64fbd097609a05a83a96976f639f1352e484caddca30d32e73d995e7352fa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_nash, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 07:11:41 compute-0 systemd[1]: libpod-conmon-0f64fbd097609a05a83a96976f639f1352e484caddca30d32e73d995e7352fa3.scope: Deactivated successfully.
Dec 06 07:11:41 compute-0 sudo[287302]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:41 compute-0 nova_compute[251992]: 2025-12-06 07:11:41.886 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:41 compute-0 sudo[287449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:41 compute-0 sudo[287449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:41 compute-0 sudo[287449]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:41 compute-0 sudo[287474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:11:41 compute-0 sudo[287474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:41 compute-0 sudo[287474]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:42.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:42 compute-0 sudo[287499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:42 compute-0 sudo[287499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:42 compute-0 sudo[287499]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:42 compute-0 sudo[287524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:11:42 compute-0 sudo[287524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:42.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:42 compute-0 podman[287590]: 2025-12-06 07:11:42.37715333 +0000 UTC m=+0.036610753 container create 1b857a5de91878c39e37b283c8bb5b9bc8c0e57f3f7a0f37226485a76b64ef39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poincare, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 07:11:42 compute-0 systemd[1]: Started libpod-conmon-1b857a5de91878c39e37b283c8bb5b9bc8c0e57f3f7a0f37226485a76b64ef39.scope.
Dec 06 07:11:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:11:42 compute-0 podman[287590]: 2025-12-06 07:11:42.359827962 +0000 UTC m=+0.019285415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:11:42 compute-0 podman[287590]: 2025-12-06 07:11:42.467047263 +0000 UTC m=+0.126504716 container init 1b857a5de91878c39e37b283c8bb5b9bc8c0e57f3f7a0f37226485a76b64ef39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poincare, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:11:42 compute-0 podman[287590]: 2025-12-06 07:11:42.475546132 +0000 UTC m=+0.135003595 container start 1b857a5de91878c39e37b283c8bb5b9bc8c0e57f3f7a0f37226485a76b64ef39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:11:42 compute-0 podman[287590]: 2025-12-06 07:11:42.479694049 +0000 UTC m=+0.139151502 container attach 1b857a5de91878c39e37b283c8bb5b9bc8c0e57f3f7a0f37226485a76b64ef39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:11:42 compute-0 agitated_poincare[287606]: 167 167
Dec 06 07:11:42 compute-0 systemd[1]: libpod-1b857a5de91878c39e37b283c8bb5b9bc8c0e57f3f7a0f37226485a76b64ef39.scope: Deactivated successfully.
Dec 06 07:11:42 compute-0 podman[287590]: 2025-12-06 07:11:42.482814017 +0000 UTC m=+0.142271480 container died 1b857a5de91878c39e37b283c8bb5b9bc8c0e57f3f7a0f37226485a76b64ef39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poincare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 07:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8437424d11fda7b191c110db673c60227341934b0223f67c27245485073da8a6-merged.mount: Deactivated successfully.
Dec 06 07:11:42 compute-0 podman[287590]: 2025-12-06 07:11:42.53259457 +0000 UTC m=+0.192052043 container remove 1b857a5de91878c39e37b283c8bb5b9bc8c0e57f3f7a0f37226485a76b64ef39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poincare, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 07:11:42 compute-0 systemd[1]: libpod-conmon-1b857a5de91878c39e37b283c8bb5b9bc8c0e57f3f7a0f37226485a76b64ef39.scope: Deactivated successfully.
Dec 06 07:11:42 compute-0 ceph-mon[74339]: pgmap v1578: 305 pgs: 305 active+clean; 213 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.4 MiB/s wr, 194 op/s
Dec 06 07:11:42 compute-0 ceph-mon[74339]: pgmap v1579: 305 pgs: 305 active+clean; 194 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 201 op/s
Dec 06 07:11:42 compute-0 ceph-mon[74339]: osdmap e223: 3 total, 3 up, 3 in
Dec 06 07:11:42 compute-0 podman[287631]: 2025-12-06 07:11:42.740295122 +0000 UTC m=+0.061342209 container create ceb70bf63dde9acc08e8ba2a4cddd4a906b3cc3e4ae055b715cc2bdf722d431f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:11:42 compute-0 systemd[1]: Started libpod-conmon-ceb70bf63dde9acc08e8ba2a4cddd4a906b3cc3e4ae055b715cc2bdf722d431f.scope.
Dec 06 07:11:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660b8277ece45c747f9927023b76d4d4f5f3909f4c5391393f02357f734946d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660b8277ece45c747f9927023b76d4d4f5f3909f4c5391393f02357f734946d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660b8277ece45c747f9927023b76d4d4f5f3909f4c5391393f02357f734946d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/660b8277ece45c747f9927023b76d4d4f5f3909f4c5391393f02357f734946d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:42 compute-0 podman[287631]: 2025-12-06 07:11:42.713297601 +0000 UTC m=+0.034344788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:11:42 compute-0 podman[287631]: 2025-12-06 07:11:42.811512278 +0000 UTC m=+0.132559415 container init ceb70bf63dde9acc08e8ba2a4cddd4a906b3cc3e4ae055b715cc2bdf722d431f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nash, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:11:42 compute-0 podman[287631]: 2025-12-06 07:11:42.818475125 +0000 UTC m=+0.139522212 container start ceb70bf63dde9acc08e8ba2a4cddd4a906b3cc3e4ae055b715cc2bdf722d431f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nash, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 07:11:42 compute-0 podman[287631]: 2025-12-06 07:11:42.822864608 +0000 UTC m=+0.143911875 container attach ceb70bf63dde9acc08e8ba2a4cddd4a906b3cc3e4ae055b715cc2bdf722d431f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 07:11:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:11:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:11:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:11:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:11:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:11:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:11:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 145 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 92 op/s
Dec 06 07:11:43 compute-0 nova_compute[251992]: 2025-12-06 07:11:43.373 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:43 compute-0 stoic_nash[287647]: {
Dec 06 07:11:43 compute-0 stoic_nash[287647]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:11:43 compute-0 stoic_nash[287647]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:11:43 compute-0 stoic_nash[287647]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:11:43 compute-0 stoic_nash[287647]:         "osd_id": 0,
Dec 06 07:11:43 compute-0 stoic_nash[287647]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:11:43 compute-0 stoic_nash[287647]:         "type": "bluestore"
Dec 06 07:11:43 compute-0 stoic_nash[287647]:     }
Dec 06 07:11:43 compute-0 stoic_nash[287647]: }
Dec 06 07:11:43 compute-0 systemd[1]: libpod-ceb70bf63dde9acc08e8ba2a4cddd4a906b3cc3e4ae055b715cc2bdf722d431f.scope: Deactivated successfully.
Dec 06 07:11:43 compute-0 podman[287668]: 2025-12-06 07:11:43.68958849 +0000 UTC m=+0.022629919 container died ceb70bf63dde9acc08e8ba2a4cddd4a906b3cc3e4ae055b715cc2bdf722d431f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:11:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-660b8277ece45c747f9927023b76d4d4f5f3909f4c5391393f02357f734946d4-merged.mount: Deactivated successfully.
Dec 06 07:11:43 compute-0 podman[287668]: 2025-12-06 07:11:43.734569937 +0000 UTC m=+0.067611366 container remove ceb70bf63dde9acc08e8ba2a4cddd4a906b3cc3e4ae055b715cc2bdf722d431f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 07:11:43 compute-0 systemd[1]: libpod-conmon-ceb70bf63dde9acc08e8ba2a4cddd4a906b3cc3e4ae055b715cc2bdf722d431f.scope: Deactivated successfully.
Dec 06 07:11:43 compute-0 sudo[287524]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:11:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Dec 06 07:11:43 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:11:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Dec 06 07:11:43 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Dec 06 07:11:43 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:43 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7c8b82fc-8d02-4e14-8308-10a699fa14e2 does not exist
Dec 06 07:11:43 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4502e5bb-a731-4176-909f-99ef5261c21e does not exist
Dec 06 07:11:43 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a878eb4e-7727-49c3-a897-5644cc8bf409 does not exist
Dec 06 07:11:43 compute-0 nova_compute[251992]: 2025-12-06 07:11:43.991 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "d59682c6-381c-46bb-9d18-5f76d43dc560" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:43 compute-0 nova_compute[251992]: 2025-12-06 07:11:43.992 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.009 251996 DEBUG nova.compute.manager [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:11:44 compute-0 sudo[287683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:44 compute-0 sudo[287683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:44 compute-0 sudo[287683]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:44.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.101 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.102 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:44 compute-0 sudo[287708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:11:44 compute-0 sudo[287708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.112 251996 DEBUG nova.virt.hardware [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.112 251996 INFO nova.compute.claims [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:11:44 compute-0 sudo[287708]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:44.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.283 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:11:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14396442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.751 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.758 251996 DEBUG nova.compute.provider_tree [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.776 251996 DEBUG nova.scheduler.client.report [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.800 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.801 251996 DEBUG nova.compute.manager [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:11:44 compute-0 ceph-mon[74339]: pgmap v1581: 305 pgs: 305 active+clean; 145 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 92 op/s
Dec 06 07:11:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:44 compute-0 ceph-mon[74339]: osdmap e224: 3 total, 3 up, 3 in
Dec 06 07:11:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:11:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2485373490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3058757587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/14396442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.856 251996 DEBUG nova.compute.manager [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.857 251996 DEBUG nova.network.neutron [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.894 251996 INFO nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:11:44 compute-0 nova_compute[251992]: 2025-12-06 07:11:44.940 251996 DEBUG nova.compute.manager [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.066 251996 DEBUG nova.compute.manager [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.068 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.069 251996 INFO nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Creating image(s)
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.105 251996 DEBUG nova.storage.rbd_utils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image d59682c6-381c-46bb-9d18-5f76d43dc560_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.143 251996 DEBUG nova.storage.rbd_utils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image d59682c6-381c-46bb-9d18-5f76d43dc560_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.194 251996 DEBUG nova.storage.rbd_utils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image d59682c6-381c-46bb-9d18-5f76d43dc560_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.200 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 92 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 3.7 KiB/s wr, 113 op/s
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.235 251996 DEBUG nova.policy [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bdd7994b0ebb4035a373b6560aa7dbcf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.274 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.275 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.276 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.276 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.304 251996 DEBUG nova.storage.rbd_utils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image d59682c6-381c-46bb-9d18-5f76d43dc560_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.308 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d59682c6-381c-46bb-9d18-5f76d43dc560_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:45 compute-0 nova_compute[251992]: 2025-12-06 07:11:45.817 251996 DEBUG nova.network.neutron [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Successfully created port: ecdd9822-20e7-48b8-8a41-d2e490c2be1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:11:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:11:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:46.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.132 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d59682c6-381c-46bb-9d18-5f76d43dc560_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.824s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:46.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.208 251996 DEBUG nova.storage.rbd_utils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] resizing rbd image d59682c6-381c-46bb-9d18-5f76d43dc560_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.323 251996 DEBUG nova.objects.instance [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'migration_context' on Instance uuid d59682c6-381c-46bb-9d18-5f76d43dc560 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.339 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.339 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Ensure instance console log exists: /var/lib/nova/instances/d59682c6-381c-46bb-9d18-5f76d43dc560/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.339 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.340 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.340 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.828 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005091.8273187, db2f61ae-900d-44ed-a7a6-e02df74fcd02 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.828 251996 INFO nova.compute.manager [-] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] VM Stopped (Lifecycle Event)
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.855 251996 DEBUG nova.compute.manager [None req-32b0b0e6-98e5-4247-9f50-788d485f7fa5 - - - - - -] [instance: db2f61ae-900d-44ed-a7a6-e02df74fcd02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.930 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.955 251996 DEBUG nova.network.neutron [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Successfully updated port: ecdd9822-20e7-48b8-8a41-d2e490c2be1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:11:46 compute-0 ceph-mon[74339]: pgmap v1583: 305 pgs: 305 active+clean; 92 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 3.7 KiB/s wr, 113 op/s
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.971 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "refresh_cache-d59682c6-381c-46bb-9d18-5f76d43dc560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.972 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquired lock "refresh_cache-d59682c6-381c-46bb-9d18-5f76d43dc560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:11:46 compute-0 nova_compute[251992]: 2025-12-06 07:11:46.972 251996 DEBUG nova.network.neutron [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:11:47 compute-0 nova_compute[251992]: 2025-12-06 07:11:47.063 251996 DEBUG nova.compute.manager [req-4ef7880a-018d-483f-b3d3-320cb04e5133 req-7675ca70-4495-4e11-aa10-6c7a04bc02f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Received event network-changed-ecdd9822-20e7-48b8-8a41-d2e490c2be1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:47 compute-0 nova_compute[251992]: 2025-12-06 07:11:47.064 251996 DEBUG nova.compute.manager [req-4ef7880a-018d-483f-b3d3-320cb04e5133 req-7675ca70-4495-4e11-aa10-6c7a04bc02f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Refreshing instance network info cache due to event network-changed-ecdd9822-20e7-48b8-8a41-d2e490c2be1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:11:47 compute-0 nova_compute[251992]: 2025-12-06 07:11:47.065 251996 DEBUG oslo_concurrency.lockutils [req-4ef7880a-018d-483f-b3d3-320cb04e5133 req-7675ca70-4495-4e11-aa10-6c7a04bc02f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d59682c6-381c-46bb-9d18-5f76d43dc560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:11:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 66 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 1.5 MiB/s wr, 147 op/s
Dec 06 07:11:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:11:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:48.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:11:48 compute-0 nova_compute[251992]: 2025-12-06 07:11:48.059 251996 DEBUG nova.network.neutron [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:11:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:48.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:48 compute-0 nova_compute[251992]: 2025-12-06 07:11:48.375 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:49 compute-0 ceph-mon[74339]: pgmap v1584: 305 pgs: 305 active+clean; 66 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 1.5 MiB/s wr, 147 op/s
Dec 06 07:11:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 66 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 1.5 MiB/s wr, 100 op/s
Dec 06 07:11:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:50.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.086 251996 DEBUG nova.network.neutron [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Updating instance_info_cache with network_info: [{"id": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "address": "fa:16:3e:b0:d8:25", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdd9822-20", "ovs_interfaceid": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.104 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Releasing lock "refresh_cache-d59682c6-381c-46bb-9d18-5f76d43dc560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.104 251996 DEBUG nova.compute.manager [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Instance network_info: |[{"id": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "address": "fa:16:3e:b0:d8:25", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdd9822-20", "ovs_interfaceid": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.104 251996 DEBUG oslo_concurrency.lockutils [req-4ef7880a-018d-483f-b3d3-320cb04e5133 req-7675ca70-4495-4e11-aa10-6c7a04bc02f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d59682c6-381c-46bb-9d18-5f76d43dc560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.105 251996 DEBUG nova.network.neutron [req-4ef7880a-018d-483f-b3d3-320cb04e5133 req-7675ca70-4495-4e11-aa10-6c7a04bc02f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Refreshing network info cache for port ecdd9822-20e7-48b8-8a41-d2e490c2be1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.108 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Start _get_guest_xml network_info=[{"id": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "address": "fa:16:3e:b0:d8:25", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdd9822-20", "ovs_interfaceid": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.113 251996 WARNING nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.117 251996 DEBUG nova.virt.libvirt.host [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.118 251996 DEBUG nova.virt.libvirt.host [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.127 251996 DEBUG nova.virt.libvirt.host [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.128 251996 DEBUG nova.virt.libvirt.host [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.129 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.129 251996 DEBUG nova.virt.hardware [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.130 251996 DEBUG nova.virt.hardware [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.130 251996 DEBUG nova.virt.hardware [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.131 251996 DEBUG nova.virt.hardware [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.131 251996 DEBUG nova.virt.hardware [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.131 251996 DEBUG nova.virt.hardware [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.132 251996 DEBUG nova.virt.hardware [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.132 251996 DEBUG nova.virt.hardware [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.132 251996 DEBUG nova.virt.hardware [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.133 251996 DEBUG nova.virt.hardware [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.133 251996 DEBUG nova.virt.hardware [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.137 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:50.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3351923020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:50 compute-0 ceph-mon[74339]: pgmap v1585: 305 pgs: 305 active+clean; 66 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 1.5 MiB/s wr, 100 op/s
Dec 06 07:11:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3783602994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/815777043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:11:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/906275142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.583 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.617 251996 DEBUG nova.storage.rbd_utils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image d59682c6-381c-46bb-9d18-5f76d43dc560_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.623 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:50 compute-0 nova_compute[251992]: 2025-12-06 07:11:50.659 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:11:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/413864327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.062 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.063 251996 DEBUG nova.virt.libvirt.vif [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-714013780',display_name='tempest-ImagesTestJSON-server-714013780',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-714013780',id=53,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-61fehmed',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:44Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=d59682c6-381c-46bb-9d18-5f76d43dc560,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "address": "fa:16:3e:b0:d8:25", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdd9822-20", "ovs_interfaceid": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.064 251996 DEBUG nova.network.os_vif_util [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "address": "fa:16:3e:b0:d8:25", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdd9822-20", "ovs_interfaceid": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.065 251996 DEBUG nova.network.os_vif_util [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:d8:25,bridge_name='br-int',has_traffic_filtering=True,id=ecdd9822-20e7-48b8-8a41-d2e490c2be1f,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdd9822-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.066 251996 DEBUG nova.objects.instance [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'pci_devices' on Instance uuid d59682c6-381c-46bb-9d18-5f76d43dc560 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.089 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:11:51 compute-0 nova_compute[251992]:   <uuid>d59682c6-381c-46bb-9d18-5f76d43dc560</uuid>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   <name>instance-00000035</name>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <nova:name>tempest-ImagesTestJSON-server-714013780</nova:name>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:11:50</nova:creationTime>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <nova:user uuid="bdd7994b0ebb4035a373b6560aa7dbcf">tempest-ImagesTestJSON-134159412-project-member</nova:user>
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <nova:project uuid="af7365adc05f4624a08a71cd5a77ada6">tempest-ImagesTestJSON-134159412</nova:project>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <nova:port uuid="ecdd9822-20e7-48b8-8a41-d2e490c2be1f">
Dec 06 07:11:51 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <system>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <entry name="serial">d59682c6-381c-46bb-9d18-5f76d43dc560</entry>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <entry name="uuid">d59682c6-381c-46bb-9d18-5f76d43dc560</entry>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     </system>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   <os>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   </os>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   <features>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   </features>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/d59682c6-381c-46bb-9d18-5f76d43dc560_disk">
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       </source>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/d59682c6-381c-46bb-9d18-5f76d43dc560_disk.config">
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       </source>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:11:51 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:b0:d8:25"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <target dev="tapecdd9822-20"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/d59682c6-381c-46bb-9d18-5f76d43dc560/console.log" append="off"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <video>
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     </video>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:11:51 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:11:51 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:11:51 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:11:51 compute-0 nova_compute[251992]: </domain>
Dec 06 07:11:51 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.090 251996 DEBUG nova.compute.manager [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Preparing to wait for external event network-vif-plugged-ecdd9822-20e7-48b8-8a41-d2e490c2be1f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.091 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.091 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.091 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.092 251996 DEBUG nova.virt.libvirt.vif [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:11:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-714013780',display_name='tempest-ImagesTestJSON-server-714013780',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-714013780',id=53,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-61fehmed',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:11:44Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=d59682c6-381c-46bb-9d18-5f76d43dc560,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "address": "fa:16:3e:b0:d8:25", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdd9822-20", "ovs_interfaceid": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.092 251996 DEBUG nova.network.os_vif_util [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "address": "fa:16:3e:b0:d8:25", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdd9822-20", "ovs_interfaceid": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.093 251996 DEBUG nova.network.os_vif_util [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:d8:25,bridge_name='br-int',has_traffic_filtering=True,id=ecdd9822-20e7-48b8-8a41-d2e490c2be1f,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdd9822-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.093 251996 DEBUG os_vif [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:d8:25,bridge_name='br-int',has_traffic_filtering=True,id=ecdd9822-20e7-48b8-8a41-d2e490c2be1f,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdd9822-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.093 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.094 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.094 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.097 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.097 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapecdd9822-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.097 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapecdd9822-20, col_values=(('external_ids', {'iface-id': 'ecdd9822-20e7-48b8-8a41-d2e490c2be1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:d8:25', 'vm-uuid': 'd59682c6-381c-46bb-9d18-5f76d43dc560'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.099 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:51 compute-0 NetworkManager[48965]: <info>  [1765005111.1010] manager: (tapecdd9822-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.102 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.110 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.112 251996 INFO os_vif [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:d8:25,bridge_name='br-int',has_traffic_filtering=True,id=ecdd9822-20e7-48b8-8a41-d2e490c2be1f,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdd9822-20')
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.170 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.172 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.172 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No VIF found with MAC fa:16:3e:b0:d8:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.173 251996 INFO nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Using config drive
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.208 251996 DEBUG nova.storage.rbd_utils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image d59682c6-381c-46bb-9d18-5f76d43dc560_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 99 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.675 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.676 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.676 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.676 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:11:51 compute-0 nova_compute[251992]: 2025-12-06 07:11:51.677 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/906275142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/413864327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:52.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:11:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3573529648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.129 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:52.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.183 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.183 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.291 251996 INFO nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Creating config drive at /var/lib/nova/instances/d59682c6-381c-46bb-9d18-5f76d43dc560/disk.config
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.298 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d59682c6-381c-46bb-9d18-5f76d43dc560/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk42vay38 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.366 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.367 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4531MB free_disk=20.96365737915039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.367 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.368 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.428 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d59682c6-381c-46bb-9d18-5f76d43dc560/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk42vay38" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.456 251996 DEBUG nova.storage.rbd_utils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image d59682c6-381c-46bb-9d18-5f76d43dc560_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.459 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d59682c6-381c-46bb-9d18-5f76d43dc560/disk.config d59682c6-381c-46bb-9d18-5f76d43dc560_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.518 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance d59682c6-381c-46bb-9d18-5f76d43dc560 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.519 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.519 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.558 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.966 251996 DEBUG oslo_concurrency.processutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d59682c6-381c-46bb-9d18-5f76d43dc560/disk.config d59682c6-381c-46bb-9d18-5f76d43dc560_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.967 251996 INFO nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Deleting local config drive /var/lib/nova/instances/d59682c6-381c-46bb-9d18-5f76d43dc560/disk.config because it was imported into RBD.
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.988 251996 DEBUG nova.network.neutron [req-4ef7880a-018d-483f-b3d3-320cb04e5133 req-7675ca70-4495-4e11-aa10-6c7a04bc02f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Updated VIF entry in instance network info cache for port ecdd9822-20e7-48b8-8a41-d2e490c2be1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:11:52 compute-0 nova_compute[251992]: 2025-12-06 07:11:52.990 251996 DEBUG nova.network.neutron [req-4ef7880a-018d-483f-b3d3-320cb04e5133 req-7675ca70-4495-4e11-aa10-6c7a04bc02f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Updating instance_info_cache with network_info: [{"id": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "address": "fa:16:3e:b0:d8:25", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdd9822-20", "ovs_interfaceid": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:11:53 compute-0 kernel: tapecdd9822-20: entered promiscuous mode
Dec 06 07:11:53 compute-0 NetworkManager[48965]: <info>  [1765005113.0131] manager: (tapecdd9822-20): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.013 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:53 compute-0 ovn_controller[147168]: 2025-12-06T07:11:53Z|00136|binding|INFO|Claiming lport ecdd9822-20e7-48b8-8a41-d2e490c2be1f for this chassis.
Dec 06 07:11:53 compute-0 ovn_controller[147168]: 2025-12-06T07:11:53Z|00137|binding|INFO|ecdd9822-20e7-48b8-8a41-d2e490c2be1f: Claiming fa:16:3e:b0:d8:25 10.100.0.13
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.018 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.019 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.034 251996 DEBUG oslo_concurrency.lockutils [req-4ef7880a-018d-483f-b3d3-320cb04e5133 req-7675ca70-4495-4e11-aa10-6c7a04bc02f1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d59682c6-381c-46bb-9d18-5f76d43dc560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:11:53 compute-0 systemd-machined[212986]: New machine qemu-23-instance-00000035.
Dec 06 07:11:53 compute-0 systemd-udevd[288105]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:11:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:11:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2494911632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:53 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000035.
Dec 06 07:11:53 compute-0 NetworkManager[48965]: <info>  [1765005113.0662] device (tapecdd9822-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:11:53 compute-0 NetworkManager[48965]: <info>  [1765005113.0669] device (tapecdd9822-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.081 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.082 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:53 compute-0 ovn_controller[147168]: 2025-12-06T07:11:53Z|00138|binding|INFO|Setting lport ecdd9822-20e7-48b8-8a41-d2e490c2be1f ovn-installed in OVS
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.087 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.091 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:11:53 compute-0 ceph-mon[74339]: pgmap v1586: 305 pgs: 305 active+clean; 99 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Dec 06 07:11:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3573529648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:53 compute-0 ovn_controller[147168]: 2025-12-06T07:11:53Z|00139|binding|INFO|Setting lport ecdd9822-20e7-48b8-8a41-d2e490c2be1f up in Southbound
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.181 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:d8:25 10.100.0.13'], port_security=['fa:16:3e:b0:d8:25 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd59682c6-381c-46bb-9d18-5f76d43dc560', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=ecdd9822-20e7-48b8-8a41-d2e490c2be1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.182 158118 INFO neutron.agent.ovn.metadata.agent [-] Port ecdd9822-20e7-48b8-8a41-d2e490c2be1f in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad bound to our chassis
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.184 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.195 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[599ab76c-8826-4b17-b963-45626e79c28f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.196 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b0835d7-81 in ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.198 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b0835d7-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.199 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ae0b3d-e677-4c6b-9ddd-d29634cdcf02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.199 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f2bb5dd3-9414-40cd-b022-214141eaf2f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.214 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.215 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[cf312a8f-9195-477b-b20f-eb8a7aeccc33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 113 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 3.2 MiB/s wr, 122 op/s
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.228 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[aa08fa11-3237-402b-b727-5691f1b80d13]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.261 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[82e139ef-f8cb-4e48-9c3c-038e3669ce53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.266 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c497ad00-a0a9-4949-92ee-e05094b43959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 NetworkManager[48965]: <info>  [1765005113.2697] manager: (tap2b0835d7-80): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.299 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[860e6e07-60c8-459b-8f28-0f33bd785c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.303 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[523cb8b4-d1ee-487d-9954-1080384f0114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.305 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.305 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:53 compute-0 NetworkManager[48965]: <info>  [1765005113.3236] device (tap2b0835d7-80): carrier: link connected
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.327 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e3430c5f-a483-47c8-a083-d9726849762a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.342 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[854f13ab-2235-40e1-903b-1be04ac500ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b0835d7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:4e:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538591, 'reachable_time': 42959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288176, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.355 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[61d1b932-520f-4c5a-a393-8da9fd11179b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:4e19'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538591, 'tstamp': 538591}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288177, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.368 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd31242-0049-4a34-b225-7e5c630673e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b0835d7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:4e:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538591, 'reachable_time': 42959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288178, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.376 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.396 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ac7bb3-c428-424f-8d83-55ec3082eb71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.454 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4c6174-0c36-4d05-b2c9-c348329b1ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.455 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b0835d7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.456 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.456 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b0835d7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:53 compute-0 NetworkManager[48965]: <info>  [1765005113.4587] manager: (tap2b0835d7-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Dec 06 07:11:53 compute-0 kernel: tap2b0835d7-80: entered promiscuous mode
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.459 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.461 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b0835d7-80, col_values=(('external_ids', {'iface-id': '87f2c5b0-3684-4269-9fbf-5a4dfd5a8759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:11:53 compute-0 ovn_controller[147168]: 2025-12-06T07:11:53Z|00140|binding|INFO|Releasing lport 87f2c5b0-3684-4269-9fbf-5a4dfd5a8759 from this chassis (sb_readonly=0)
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.463 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.477 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.480 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.481 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[aecdfe27-cc0f-4809-98ae-714f5a704352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.481 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:11:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:11:53.482 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'env', 'PROCESS_TAG=haproxy-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.572 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005113.5717735, d59682c6-381c-46bb-9d18-5f76d43dc560 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.573 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] VM Started (Lifecycle Event)
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.602 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.607 251996 DEBUG nova.compute.manager [req-5111c2f0-0bbf-4991-b8ec-1b0e449cb529 req-869368bf-65e8-4370-9238-e8b5c6d9bfd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Received event network-vif-plugged-ecdd9822-20e7-48b8-8a41-d2e490c2be1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.607 251996 DEBUG oslo_concurrency.lockutils [req-5111c2f0-0bbf-4991-b8ec-1b0e449cb529 req-869368bf-65e8-4370-9238-e8b5c6d9bfd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.607 251996 DEBUG oslo_concurrency.lockutils [req-5111c2f0-0bbf-4991-b8ec-1b0e449cb529 req-869368bf-65e8-4370-9238-e8b5c6d9bfd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.607 251996 DEBUG oslo_concurrency.lockutils [req-5111c2f0-0bbf-4991-b8ec-1b0e449cb529 req-869368bf-65e8-4370-9238-e8b5c6d9bfd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.608 251996 DEBUG nova.compute.manager [req-5111c2f0-0bbf-4991-b8ec-1b0e449cb529 req-869368bf-65e8-4370-9238-e8b5c6d9bfd5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Processing event network-vif-plugged-ecdd9822-20e7-48b8-8a41-d2e490c2be1f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.608 251996 DEBUG nova.compute.manager [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.609 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005113.5724862, d59682c6-381c-46bb-9d18-5f76d43dc560 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.609 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] VM Paused (Lifecycle Event)
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.611 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.614 251996 INFO nova.virt.libvirt.driver [-] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Instance spawned successfully.
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.614 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.637 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.640 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005113.6115005, d59682c6-381c-46bb-9d18-5f76d43dc560 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.641 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] VM Resumed (Lifecycle Event)
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.649 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.649 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.649 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.650 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.650 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.650 251996 DEBUG nova.virt.libvirt.driver [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.678 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.680 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.704 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.714 251996 INFO nova.compute.manager [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Took 8.65 seconds to spawn the instance on the hypervisor.
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.714 251996 DEBUG nova.compute.manager [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.817 251996 INFO nova.compute.manager [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Took 9.75 seconds to build instance.
Dec 06 07:11:53 compute-0 nova_compute[251992]: 2025-12-06 07:11:53.842 251996 DEBUG oslo_concurrency.lockutils [None req-1b07bb37-d251-405e-b54d-c72035fd855d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:53 compute-0 podman[288216]: 2025-12-06 07:11:53.85895341 +0000 UTC m=+0.053322095 container create 426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:11:53 compute-0 systemd[1]: Started libpod-conmon-426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2.scope.
Dec 06 07:11:53 compute-0 podman[288216]: 2025-12-06 07:11:53.831777874 +0000 UTC m=+0.026146568 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:11:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:11:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27ea709349cd77a4137fb06627780a16ddd3b199b3797586b222bf1da9461fcb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:11:53 compute-0 podman[288216]: 2025-12-06 07:11:53.971565333 +0000 UTC m=+0.165934007 container init 426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 07:11:53 compute-0 podman[288216]: 2025-12-06 07:11:53.978017102 +0000 UTC m=+0.172385766 container start 426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 07:11:54 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[288231]: [NOTICE]   (288235) : New worker (288237) forked
Dec 06 07:11:54 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[288231]: [NOTICE]   (288235) : Loading success.
Dec 06 07:11:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:54.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:54.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2494911632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:54 compute-0 ceph-mon[74339]: pgmap v1587: 305 pgs: 305 active+clean; 113 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 3.2 MiB/s wr, 122 op/s
Dec 06 07:11:54 compute-0 nova_compute[251992]: 2025-12-06 07:11:54.305 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:54 compute-0 nova_compute[251992]: 2025-12-06 07:11:54.306 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:54 compute-0 nova_compute[251992]: 2025-12-06 07:11:54.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 134 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 717 KiB/s rd, 3.7 MiB/s wr, 133 op/s
Dec 06 07:11:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/159950985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:56 compute-0 nova_compute[251992]: 2025-12-06 07:11:56.051 251996 DEBUG nova.compute.manager [req-74bdef82-40b8-4269-95dd-24a031cd0c8e req-60028b9a-0f52-48e9-b01d-bec3039fdb01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Received event network-vif-plugged-ecdd9822-20e7-48b8-8a41-d2e490c2be1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:11:56 compute-0 nova_compute[251992]: 2025-12-06 07:11:56.052 251996 DEBUG oslo_concurrency.lockutils [req-74bdef82-40b8-4269-95dd-24a031cd0c8e req-60028b9a-0f52-48e9-b01d-bec3039fdb01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:56 compute-0 nova_compute[251992]: 2025-12-06 07:11:56.052 251996 DEBUG oslo_concurrency.lockutils [req-74bdef82-40b8-4269-95dd-24a031cd0c8e req-60028b9a-0f52-48e9-b01d-bec3039fdb01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:56 compute-0 nova_compute[251992]: 2025-12-06 07:11:56.052 251996 DEBUG oslo_concurrency.lockutils [req-74bdef82-40b8-4269-95dd-24a031cd0c8e req-60028b9a-0f52-48e9-b01d-bec3039fdb01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:11:56 compute-0 nova_compute[251992]: 2025-12-06 07:11:56.052 251996 DEBUG nova.compute.manager [req-74bdef82-40b8-4269-95dd-24a031cd0c8e req-60028b9a-0f52-48e9-b01d-bec3039fdb01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] No waiting events found dispatching network-vif-plugged-ecdd9822-20e7-48b8-8a41-d2e490c2be1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:11:56 compute-0 nova_compute[251992]: 2025-12-06 07:11:56.052 251996 WARNING nova.compute.manager [req-74bdef82-40b8-4269-95dd-24a031cd0c8e req-60028b9a-0f52-48e9-b01d-bec3039fdb01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Received unexpected event network-vif-plugged-ecdd9822-20e7-48b8-8a41-d2e490c2be1f for instance with vm_state active and task_state None.
Dec 06 07:11:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:56.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:56 compute-0 nova_compute[251992]: 2025-12-06 07:11:56.099 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:11:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:56.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:11:56 compute-0 ceph-mon[74339]: pgmap v1588: 305 pgs: 305 active+clean; 134 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 717 KiB/s rd, 3.7 MiB/s wr, 133 op/s
Dec 06 07:11:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2847361391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2776394950' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1606640908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:11:56 compute-0 nova_compute[251992]: 2025-12-06 07:11:56.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:11:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2715161050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 155 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 144 op/s
Dec 06 07:11:57 compute-0 nova_compute[251992]: 2025-12-06 07:11:57.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:11:57 compute-0 nova_compute[251992]: 2025-12-06 07:11:57.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:11:57 compute-0 nova_compute[251992]: 2025-12-06 07:11:57.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:11:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2715161050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:11:58 compute-0 nova_compute[251992]: 2025-12-06 07:11:58.048 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-d59682c6-381c-46bb-9d18-5f76d43dc560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:11:58 compute-0 nova_compute[251992]: 2025-12-06 07:11:58.048 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-d59682c6-381c-46bb-9d18-5f76d43dc560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:11:58 compute-0 nova_compute[251992]: 2025-12-06 07:11:58.049 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:11:58 compute-0 nova_compute[251992]: 2025-12-06 07:11:58.050 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d59682c6-381c-46bb-9d18-5f76d43dc560 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:11:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:11:58.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:11:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:11:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:11:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:11:58.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:11:58 compute-0 nova_compute[251992]: 2025-12-06 07:11:58.378 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:11:58 compute-0 nova_compute[251992]: 2025-12-06 07:11:58.415 251996 DEBUG oslo_concurrency.lockutils [None req-53880662-71fb-4518-9708-73696aeaa00d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "d59682c6-381c-46bb-9d18-5f76d43dc560" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:11:58 compute-0 nova_compute[251992]: 2025-12-06 07:11:58.415 251996 DEBUG oslo_concurrency.lockutils [None req-53880662-71fb-4518-9708-73696aeaa00d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:11:58 compute-0 nova_compute[251992]: 2025-12-06 07:11:58.416 251996 DEBUG nova.compute.manager [None req-53880662-71fb-4518-9708-73696aeaa00d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:11:58 compute-0 nova_compute[251992]: 2025-12-06 07:11:58.423 251996 DEBUG nova.compute.manager [None req-53880662-71fb-4518-9708-73696aeaa00d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 06 07:11:58 compute-0 nova_compute[251992]: 2025-12-06 07:11:58.424 251996 DEBUG nova.objects.instance [None req-53880662-71fb-4518-9708-73696aeaa00d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'flavor' on Instance uuid d59682c6-381c-46bb-9d18-5f76d43dc560 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:11:58 compute-0 nova_compute[251992]: 2025-12-06 07:11:58.452 251996 DEBUG nova.virt.libvirt.driver [None req-53880662-71fb-4518-9708-73696aeaa00d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:11:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:11:59 compute-0 ceph-mon[74339]: pgmap v1589: 305 pgs: 305 active+clean; 155 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 144 op/s
Dec 06 07:11:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 155 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.3 MiB/s wr, 113 op/s
Dec 06 07:11:59 compute-0 sudo[288249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:59 compute-0 sudo[288249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:59 compute-0 sudo[288249]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:59 compute-0 sudo[288274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:11:59 compute-0 sudo[288274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:11:59 compute-0 sudo[288274]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:00.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:00 compute-0 ceph-mon[74339]: pgmap v1590: 305 pgs: 305 active+clean; 155 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.3 MiB/s wr, 113 op/s
Dec 06 07:12:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:00.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:00 compute-0 nova_compute[251992]: 2025-12-06 07:12:00.590 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Updating instance_info_cache with network_info: [{"id": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "address": "fa:16:3e:b0:d8:25", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdd9822-20", "ovs_interfaceid": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:00 compute-0 nova_compute[251992]: 2025-12-06 07:12:00.609 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-d59682c6-381c-46bb-9d18-5f76d43dc560" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:12:00 compute-0 nova_compute[251992]: 2025-12-06 07:12:00.610 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:12:00 compute-0 nova_compute[251992]: 2025-12-06 07:12:00.611 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:00 compute-0 nova_compute[251992]: 2025-12-06 07:12:00.611 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:12:01 compute-0 nova_compute[251992]: 2025-12-06 07:12:01.101 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 171 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 139 op/s
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.431395) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121431487, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2355, "num_deletes": 269, "total_data_size": 4004227, "memory_usage": 4068576, "flush_reason": "Manual Compaction"}
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121461409, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3921227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30090, "largest_seqno": 32444, "table_properties": {"data_size": 3910173, "index_size": 7228, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23428, "raw_average_key_size": 21, "raw_value_size": 3888025, "raw_average_value_size": 3566, "num_data_blocks": 310, "num_entries": 1090, "num_filter_entries": 1090, "num_deletions": 269, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765004931, "oldest_key_time": 1765004931, "file_creation_time": 1765005121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 30161 microseconds, and 9654 cpu microseconds.
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.461565) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3921227 bytes OK
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.461617) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.482294) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.482331) EVENT_LOG_v1 {"time_micros": 1765005121482322, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.482352) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3994277, prev total WAL file size 3994277, number of live WAL files 2.
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.483645) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3829KB)], [65(9492KB)]
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121483710, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 13641358, "oldest_snapshot_seqno": -1}
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6514 keys, 11756128 bytes, temperature: kUnknown
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121898643, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 11756128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11710164, "index_size": 28574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 165667, "raw_average_key_size": 25, "raw_value_size": 11590725, "raw_average_value_size": 1779, "num_data_blocks": 1153, "num_entries": 6514, "num_filter_entries": 6514, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765005121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.899238) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 11756128 bytes
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.946672) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.9 rd, 28.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 9.3 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 7061, records dropped: 547 output_compression: NoCompression
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.946717) EVENT_LOG_v1 {"time_micros": 1765005121946691, "job": 36, "event": "compaction_finished", "compaction_time_micros": 415108, "compaction_time_cpu_micros": 25976, "output_level": 6, "num_output_files": 1, "total_output_size": 11756128, "num_input_records": 7061, "num_output_records": 6514, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121947614, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005121949374, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.483553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.949412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.949416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.949418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.949420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:12:01 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:12:01.949422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:12:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:02.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:02.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:02 compute-0 ceph-mon[74339]: pgmap v1591: 305 pgs: 305 active+clean; 171 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 139 op/s
Dec 06 07:12:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 181 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 135 op/s
Dec 06 07:12:03 compute-0 nova_compute[251992]: 2025-12-06 07:12:03.381 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:03 compute-0 podman[288301]: 2025-12-06 07:12:03.459243765 +0000 UTC m=+0.109571189 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:12:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:03.820 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:03.821 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:03.822 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:12:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:04.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:12:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:04.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:05 compute-0 ceph-mon[74339]: pgmap v1592: 305 pgs: 305 active+clean; 181 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 135 op/s
Dec 06 07:12:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 181 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 148 op/s
Dec 06 07:12:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:06.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:06 compute-0 nova_compute[251992]: 2025-12-06 07:12:06.104 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:06.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:06 compute-0 ceph-mon[74339]: pgmap v1593: 305 pgs: 305 active+clean; 181 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 148 op/s
Dec 06 07:12:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 182 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 160 op/s
Dec 06 07:12:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/885063946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/506353290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:08.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:08.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:08 compute-0 ovn_controller[147168]: 2025-12-06T07:12:08Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:d8:25 10.100.0.13
Dec 06 07:12:08 compute-0 ovn_controller[147168]: 2025-12-06T07:12:08Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:d8:25 10.100.0.13
Dec 06 07:12:08 compute-0 nova_compute[251992]: 2025-12-06 07:12:08.386 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:08 compute-0 podman[288330]: 2025-12-06 07:12:08.411908371 +0000 UTC m=+0.066863381 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:12:08 compute-0 podman[288331]: 2025-12-06 07:12:08.430834598 +0000 UTC m=+0.077066106 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec 06 07:12:08 compute-0 nova_compute[251992]: 2025-12-06 07:12:08.498 251996 DEBUG nova.virt.libvirt.driver [None req-53880662-71fb-4518-9708-73696aeaa00d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:12:08 compute-0 ceph-mon[74339]: pgmap v1594: 305 pgs: 305 active+clean; 182 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 160 op/s
Dec 06 07:12:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 182 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 108 op/s
Dec 06 07:12:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3786266720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:12:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3786266720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:12:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:10.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:10.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Dec 06 07:12:11 compute-0 nova_compute[251992]: 2025-12-06 07:12:11.105 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 202 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 158 op/s
Dec 06 07:12:11 compute-0 nova_compute[251992]: 2025-12-06 07:12:11.513 251996 INFO nova.virt.libvirt.driver [None req-53880662-71fb-4518-9708-73696aeaa00d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Instance shutdown successfully after 13 seconds.
Dec 06 07:12:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:12.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:12.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:12:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:12:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:12:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:12:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:12:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:12:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Dec 06 07:12:13 compute-0 nova_compute[251992]: 2025-12-06 07:12:13.387 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:12:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:14.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:12:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:14.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:14 compute-0 nova_compute[251992]: 2025-12-06 07:12:14.713 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:14.713 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:12:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:14.716 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:12:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 217 op/s
Dec 06 07:12:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:12:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:16.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:12:16 compute-0 nova_compute[251992]: 2025-12-06 07:12:16.107 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:16.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:16.719 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 188 op/s
Dec 06 07:12:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:18.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:18.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:12:18
Dec 06 07:12:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:12:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:12:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'default.rgw.log', '.mgr', 'images', 'volumes', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta']
Dec 06 07:12:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:12:18 compute-0 nova_compute[251992]: 2025-12-06 07:12:18.389 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 151 op/s
Dec 06 07:12:19 compute-0 sudo[288373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:19 compute-0 sudo[288373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:19 compute-0 sudo[288373]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:19 compute-0 sudo[288398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:19 compute-0 sudo[288398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:19 compute-0 sudo[288398]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 07:12:19 compute-0 ceph-mon[74339]: paxos.0).electionLogic(47) init, last seen epoch 47, mid-election, bumping
Dec 06 07:12:19 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:12:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:20.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:20.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:20 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 07:12:21 compute-0 nova_compute[251992]: 2025-12-06 07:12:21.110 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 151 op/s
Dec 06 07:12:21 compute-0 kernel: tapecdd9822-20 (unregistering): left promiscuous mode
Dec 06 07:12:21 compute-0 NetworkManager[48965]: <info>  [1765005141.5510] device (tapecdd9822-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:12:21 compute-0 ovn_controller[147168]: 2025-12-06T07:12:21Z|00141|binding|INFO|Releasing lport ecdd9822-20e7-48b8-8a41-d2e490c2be1f from this chassis (sb_readonly=0)
Dec 06 07:12:21 compute-0 ovn_controller[147168]: 2025-12-06T07:12:21Z|00142|binding|INFO|Setting lport ecdd9822-20e7-48b8-8a41-d2e490c2be1f down in Southbound
Dec 06 07:12:21 compute-0 ovn_controller[147168]: 2025-12-06T07:12:21Z|00143|binding|INFO|Removing iface tapecdd9822-20 ovn-installed in OVS
Dec 06 07:12:21 compute-0 nova_compute[251992]: 2025-12-06 07:12:21.555 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.561 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:d8:25 10.100.0.13'], port_security=['fa:16:3e:b0:d8:25 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd59682c6-381c-46bb-9d18-5f76d43dc560', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=ecdd9822-20e7-48b8-8a41-d2e490c2be1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.562 158118 INFO neutron.agent.ovn.metadata.agent [-] Port ecdd9822-20e7-48b8-8a41-d2e490c2be1f in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad unbound from our chassis
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.564 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.566 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa10c1d-4429-4de5-a1f5-35fbecb815b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.567 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad namespace which is not needed anymore
Dec 06 07:12:21 compute-0 nova_compute[251992]: 2025-12-06 07:12:21.575 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:21 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000035.scope: Deactivated successfully.
Dec 06 07:12:21 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000035.scope: Consumed 13.948s CPU time.
Dec 06 07:12:21 compute-0 systemd-machined[212986]: Machine qemu-23-instance-00000035 terminated.
Dec 06 07:12:21 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[288231]: [NOTICE]   (288235) : haproxy version is 2.8.14-c23fe91
Dec 06 07:12:21 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[288231]: [NOTICE]   (288235) : path to executable is /usr/sbin/haproxy
Dec 06 07:12:21 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[288231]: [WARNING]  (288235) : Exiting Master process...
Dec 06 07:12:21 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[288231]: [ALERT]    (288235) : Current worker (288237) exited with code 143 (Terminated)
Dec 06 07:12:21 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[288231]: [WARNING]  (288235) : All workers exited. Exiting... (0)
Dec 06 07:12:21 compute-0 systemd[1]: libpod-426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2.scope: Deactivated successfully.
Dec 06 07:12:21 compute-0 conmon[288231]: conmon 426a618b764661c0a476 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2.scope/container/memory.events
Dec 06 07:12:21 compute-0 podman[288448]: 2025-12-06 07:12:21.691050306 +0000 UTC m=+0.041100854 container died 426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:12:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2-userdata-shm.mount: Deactivated successfully.
Dec 06 07:12:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-27ea709349cd77a4137fb06627780a16ddd3b199b3797586b222bf1da9461fcb-merged.mount: Deactivated successfully.
Dec 06 07:12:21 compute-0 podman[288448]: 2025-12-06 07:12:21.734093963 +0000 UTC m=+0.084144491 container cleanup 426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:12:21 compute-0 systemd[1]: libpod-conmon-426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2.scope: Deactivated successfully.
Dec 06 07:12:21 compute-0 nova_compute[251992]: 2025-12-06 07:12:21.757 251996 INFO nova.virt.libvirt.driver [-] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Instance destroyed successfully.
Dec 06 07:12:21 compute-0 nova_compute[251992]: 2025-12-06 07:12:21.758 251996 DEBUG nova.objects.instance [None req-53880662-71fb-4518-9708-73696aeaa00d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'numa_topology' on Instance uuid d59682c6-381c-46bb-9d18-5f76d43dc560 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:21 compute-0 nova_compute[251992]: 2025-12-06 07:12:21.782 251996 DEBUG nova.compute.manager [None req-53880662-71fb-4518-9708-73696aeaa00d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:21 compute-0 podman[288481]: 2025-12-06 07:12:21.799447782 +0000 UTC m=+0.043325677 container remove 426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.807 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6952a0b6-9543-4cad-98bb-e79a0b0fa6ed]: (4, ('Sat Dec  6 07:12:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad (426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2)\n426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2\nSat Dec  6 07:12:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad (426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2)\n426a618b764661c0a47644be1d594a55d13c983b7b8028dad72aa4994cd960c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.811 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c533fa-c819-4827-9cc2-8fe61dabd794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.812 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b0835d7-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:21 compute-0 nova_compute[251992]: 2025-12-06 07:12:21.814 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:21 compute-0 kernel: tap2b0835d7-80: left promiscuous mode
Dec 06 07:12:21 compute-0 nova_compute[251992]: 2025-12-06 07:12:21.826 251996 DEBUG oslo_concurrency.lockutils [None req-53880662-71fb-4518-9708-73696aeaa00d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 23.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:21 compute-0 nova_compute[251992]: 2025-12-06 07:12:21.830 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.833 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e0b4ec-cc0f-4a77-8181-98e5106fec6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.853 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[67f1388b-ddc2-40a1-9e54-0b53789add81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.857 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6337c074-a5f7-4f00-9911-23b89e62dada]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.879 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1cada50f-2b15-4356-8d86-3fdd20fb9a46]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538584, 'reachable_time': 30213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288507, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d2b0835d7\x2d87e4\x2d46cc\x2d8a94\x2de4e042bd4bad.mount: Deactivated successfully.
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.887 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:12:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:21.888 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[35af8e18-fb88-41b5-93f9-1770328105df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:22.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:22.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:22 compute-0 nova_compute[251992]: 2025-12-06 07:12:22.537 251996 DEBUG nova.compute.manager [req-698db66e-56ec-4280-b3a7-8aa3b9622919 req-3ab6013a-4121-49ee-b365-89f36c63d7c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Received event network-vif-unplugged-ecdd9822-20e7-48b8-8a41-d2e490c2be1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:22 compute-0 nova_compute[251992]: 2025-12-06 07:12:22.538 251996 DEBUG oslo_concurrency.lockutils [req-698db66e-56ec-4280-b3a7-8aa3b9622919 req-3ab6013a-4121-49ee-b365-89f36c63d7c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:22 compute-0 nova_compute[251992]: 2025-12-06 07:12:22.538 251996 DEBUG oslo_concurrency.lockutils [req-698db66e-56ec-4280-b3a7-8aa3b9622919 req-3ab6013a-4121-49ee-b365-89f36c63d7c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:22 compute-0 nova_compute[251992]: 2025-12-06 07:12:22.538 251996 DEBUG oslo_concurrency.lockutils [req-698db66e-56ec-4280-b3a7-8aa3b9622919 req-3ab6013a-4121-49ee-b365-89f36c63d7c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:22 compute-0 nova_compute[251992]: 2025-12-06 07:12:22.538 251996 DEBUG nova.compute.manager [req-698db66e-56ec-4280-b3a7-8aa3b9622919 req-3ab6013a-4121-49ee-b365-89f36c63d7c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] No waiting events found dispatching network-vif-unplugged-ecdd9822-20e7-48b8-8a41-d2e490c2be1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:12:22 compute-0 nova_compute[251992]: 2025-12-06 07:12:22.538 251996 WARNING nova.compute.manager [req-698db66e-56ec-4280-b3a7-8aa3b9622919 req-3ab6013a-4121-49ee-b365-89f36c63d7c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Received unexpected event network-vif-unplugged-ecdd9822-20e7-48b8-8a41-d2e490c2be1f for instance with vm_state stopped and task_state None.
Dec 06 07:12:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 246 KiB/s wr, 101 op/s
Dec 06 07:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:12:23 compute-0 nova_compute[251992]: 2025-12-06 07:12:23.391 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:12:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:24.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:24 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_timecheck drop unexpected msg
Dec 06 07:12:24 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:24 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 07:12:24 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:24 compute-0 nova_compute[251992]: 2025-12-06 07:12:24.645 251996 DEBUG nova.compute.manager [req-686aa9b1-9794-47d9-b882-f666c88c8288 req-6bbb8431-70a6-463f-9046-26b64577c0bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Received event network-vif-plugged-ecdd9822-20e7-48b8-8a41-d2e490c2be1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:24 compute-0 nova_compute[251992]: 2025-12-06 07:12:24.645 251996 DEBUG oslo_concurrency.lockutils [req-686aa9b1-9794-47d9-b882-f666c88c8288 req-6bbb8431-70a6-463f-9046-26b64577c0bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:24 compute-0 nova_compute[251992]: 2025-12-06 07:12:24.646 251996 DEBUG oslo_concurrency.lockutils [req-686aa9b1-9794-47d9-b882-f666c88c8288 req-6bbb8431-70a6-463f-9046-26b64577c0bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:24 compute-0 nova_compute[251992]: 2025-12-06 07:12:24.646 251996 DEBUG oslo_concurrency.lockutils [req-686aa9b1-9794-47d9-b882-f666c88c8288 req-6bbb8431-70a6-463f-9046-26b64577c0bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:24 compute-0 nova_compute[251992]: 2025-12-06 07:12:24.646 251996 DEBUG nova.compute.manager [req-686aa9b1-9794-47d9-b882-f666c88c8288 req-6bbb8431-70a6-463f-9046-26b64577c0bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] No waiting events found dispatching network-vif-plugged-ecdd9822-20e7-48b8-8a41-d2e490c2be1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:12:24 compute-0 nova_compute[251992]: 2025-12-06 07:12:24.646 251996 WARNING nova.compute.manager [req-686aa9b1-9794-47d9-b882-f666c88c8288 req-6bbb8431-70a6-463f-9046-26b64577c0bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Received unexpected event network-vif-plugged-ecdd9822-20e7-48b8-8a41-d2e490c2be1f for instance with vm_state stopped and task_state None.
Dec 06 07:12:24 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:12:24 compute-0 nova_compute[251992]: 2025-12-06 07:12:24.904 251996 DEBUG nova.compute.manager [None req-2e802767-4ff1-42f3-ad72-e9e5c6bfabbf bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 07:12:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Dec 06 07:12:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:12:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:12:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:12:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Dec 06 07:12:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 46m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:12:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 07:12:24 compute-0 nova_compute[251992]: 2025-12-06 07:12:24.975 251996 INFO nova.compute.manager [None req-2e802767-4ff1-42f3-ad72-e9e5c6bfabbf bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] instance snapshotting
Dec 06 07:12:24 compute-0 nova_compute[251992]: 2025-12-06 07:12:24.975 251996 WARNING nova.compute.manager [None req-2e802767-4ff1-42f3-ad72-e9e5c6bfabbf bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] trying to snapshot a non-running instance: (state: 4 expected: 1)
Dec 06 07:12:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 07:12:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 07:12:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 198 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 17 op/s
Dec 06 07:12:25 compute-0 nova_compute[251992]: 2025-12-06 07:12:25.243 251996 INFO nova.virt.libvirt.driver [None req-2e802767-4ff1-42f3-ad72-e9e5c6bfabbf bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Beginning cold snapshot process
Dec 06 07:12:25 compute-0 nova_compute[251992]: 2025-12-06 07:12:25.392 251996 DEBUG nova.virt.libvirt.imagebackend [None req-2e802767-4ff1-42f3-ad72-e9e5c6bfabbf bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:12:25 compute-0 nova_compute[251992]: 2025-12-06 07:12:25.602 251996 DEBUG nova.storage.rbd_utils [None req-2e802767-4ff1-42f3-ad72-e9e5c6bfabbf bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] creating snapshot(bc3b924168554f9aa28f7cfece3e70d1) on rbd image(d59682c6-381c-46bb-9d18-5f76d43dc560_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0038262783826540107 of space, bias 1.0, pg target 1.1478835147962032 quantized to 32 (current 32)
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:12:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:12:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Dec 06 07:12:25 compute-0 ceph-mon[74339]: pgmap v1596: 305 pgs: 305 active+clean; 202 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 158 op/s
Dec 06 07:12:25 compute-0 ceph-mon[74339]: pgmap v1597: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Dec 06 07:12:25 compute-0 ceph-mon[74339]: pgmap v1598: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 217 op/s
Dec 06 07:12:25 compute-0 ceph-mon[74339]: pgmap v1599: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 188 op/s
Dec 06 07:12:25 compute-0 ceph-mon[74339]: pgmap v1600: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 151 op/s
Dec 06 07:12:25 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 07:12:25 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 07:12:25 compute-0 ceph-mon[74339]: pgmap v1601: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 151 op/s
Dec 06 07:12:25 compute-0 ceph-mon[74339]: pgmap v1602: 305 pgs: 305 active+clean; 214 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 246 KiB/s wr, 101 op/s
Dec 06 07:12:25 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 07:12:25 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:12:25 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:12:25 compute-0 ceph-mon[74339]: osdmap e225: 3 total, 3 up, 3 in
Dec 06 07:12:25 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 46m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:12:25 compute-0 ceph-mon[74339]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 07:12:25 compute-0 ceph-mon[74339]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 07:12:25 compute-0 ceph-mon[74339]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 07:12:25 compute-0 ceph-mon[74339]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 07:12:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3405406043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Dec 06 07:12:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Dec 06 07:12:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:26.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:26 compute-0 nova_compute[251992]: 2025-12-06 07:12:26.114 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:26.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:26 compute-0 nova_compute[251992]: 2025-12-06 07:12:26.293 251996 DEBUG nova.storage.rbd_utils [None req-2e802767-4ff1-42f3-ad72-e9e5c6bfabbf bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] cloning vms/d59682c6-381c-46bb-9d18-5f76d43dc560_disk@bc3b924168554f9aa28f7cfece3e70d1 to images/eba516b2-6064-409c-87d5-453285b93544 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:12:26 compute-0 nova_compute[251992]: 2025-12-06 07:12:26.481 251996 DEBUG nova.storage.rbd_utils [None req-2e802767-4ff1-42f3-ad72-e9e5c6bfabbf bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] flattening images/eba516b2-6064-409c-87d5-453285b93544 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:12:26 compute-0 ceph-mon[74339]: pgmap v1604: 305 pgs: 305 active+clean; 198 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 17 op/s
Dec 06 07:12:26 compute-0 ceph-mon[74339]: osdmap e226: 3 total, 3 up, 3 in
Dec 06 07:12:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 194 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 807 KiB/s rd, 740 KiB/s wr, 64 op/s
Dec 06 07:12:27 compute-0 nova_compute[251992]: 2025-12-06 07:12:27.849 251996 DEBUG nova.storage.rbd_utils [None req-2e802767-4ff1-42f3-ad72-e9e5c6bfabbf bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] removing snapshot(bc3b924168554f9aa28f7cfece3e70d1) on rbd image(d59682c6-381c-46bb-9d18-5f76d43dc560_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:12:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/821615078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:28.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:28.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:28 compute-0 nova_compute[251992]: 2025-12-06 07:12:28.392 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Dec 06 07:12:29 compute-0 ceph-mon[74339]: pgmap v1606: 305 pgs: 305 active+clean; 194 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 807 KiB/s rd, 740 KiB/s wr, 64 op/s
Dec 06 07:12:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Dec 06 07:12:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Dec 06 07:12:29 compute-0 nova_compute[251992]: 2025-12-06 07:12:29.066 251996 DEBUG nova.storage.rbd_utils [None req-2e802767-4ff1-42f3-ad72-e9e5c6bfabbf bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] creating snapshot(snap) on rbd image(eba516b2-6064-409c-87d5-453285b93544) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:12:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 203 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.7 MiB/s wr, 114 op/s
Dec 06 07:12:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 07:12:29 compute-0 ceph-mon[74339]: paxos.0).electionLogic(50) init, last seen epoch 50
Dec 06 07:12:29 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:12:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 07:12:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:12:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:12:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:12:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Dec 06 07:12:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 46m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:12:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 07:12:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 07:12:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e227 got unmanaged-snap pool op from entity with insufficient privileges. message: pool_op(create unmanaged snap pool 2 tid 21 name  v0) v4
                                           caps: allow profile rbd
Dec 06 07:12:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Dec 06 07:12:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:12:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:30.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:12:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:30.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:30 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 07:12:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Dec 06 07:12:30 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 07:12:30 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 07:12:30 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 07:12:30 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:12:30 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:12:30 compute-0 ceph-mon[74339]: osdmap e227: 3 total, 3 up, 3 in
Dec 06 07:12:30 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 46m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:12:30 compute-0 ceph-mon[74339]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 07:12:30 compute-0 ceph-mon[74339]: Cluster is now healthy
Dec 06 07:12:30 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Dec 06 07:12:31 compute-0 nova_compute[251992]: 2025-12-06 07:12:31.117 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 257 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 8.2 MiB/s wr, 151 op/s
Dec 06 07:12:31 compute-0 ceph-mon[74339]: mon.compute-1 calling monitor election
Dec 06 07:12:31 compute-0 ceph-mon[74339]: pgmap v1608: 305 pgs: 305 active+clean; 203 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.7 MiB/s wr, 114 op/s
Dec 06 07:12:31 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 07:12:31 compute-0 ceph-mon[74339]: osdmap e228: 3 total, 3 up, 3 in
Dec 06 07:12:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3363960885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1472456111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2845239760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:32.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:32 compute-0 nova_compute[251992]: 2025-12-06 07:12:32.125 251996 INFO nova.virt.libvirt.driver [None req-2e802767-4ff1-42f3-ad72-e9e5c6bfabbf bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Snapshot image upload complete
Dec 06 07:12:32 compute-0 nova_compute[251992]: 2025-12-06 07:12:32.125 251996 INFO nova.compute.manager [None req-2e802767-4ff1-42f3-ad72-e9e5c6bfabbf bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Took 7.15 seconds to snapshot the instance on the hypervisor.
Dec 06 07:12:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:32.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:32 compute-0 ceph-mon[74339]: pgmap v1610: 305 pgs: 305 active+clean; 257 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 8.2 MiB/s wr, 151 op/s
Dec 06 07:12:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 303 MiB data, 675 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 11 MiB/s wr, 249 op/s
Dec 06 07:12:33 compute-0 nova_compute[251992]: 2025-12-06 07:12:33.395 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Dec 06 07:12:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Dec 06 07:12:33 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Dec 06 07:12:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:34.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:34.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:34 compute-0 podman[288655]: 2025-12-06 07:12:34.422086555 +0000 UTC m=+0.075637205 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.834 251996 DEBUG oslo_concurrency.lockutils [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "d59682c6-381c-46bb-9d18-5f76d43dc560" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.834 251996 DEBUG oslo_concurrency.lockutils [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.834 251996 DEBUG oslo_concurrency.lockutils [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.835 251996 DEBUG oslo_concurrency.lockutils [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.835 251996 DEBUG oslo_concurrency.lockutils [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.836 251996 INFO nova.compute.manager [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Terminating instance
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.837 251996 DEBUG nova.compute.manager [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.843 251996 INFO nova.virt.libvirt.driver [-] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Instance destroyed successfully.
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.844 251996 DEBUG nova.objects.instance [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'resources' on Instance uuid d59682c6-381c-46bb-9d18-5f76d43dc560 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:34 compute-0 ceph-mon[74339]: pgmap v1611: 305 pgs: 305 active+clean; 303 MiB data, 675 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 11 MiB/s wr, 249 op/s
Dec 06 07:12:34 compute-0 ceph-mon[74339]: osdmap e229: 3 total, 3 up, 3 in
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.881 251996 DEBUG nova.virt.libvirt.vif [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:11:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-714013780',display_name='tempest-ImagesTestJSON-server-714013780',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-714013780',id=53,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:11:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-61fehmed',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:12:32Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=d59682c6-381c-46bb-9d18-5f76d43dc560,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "address": "fa:16:3e:b0:d8:25", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdd9822-20", "ovs_interfaceid": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.882 251996 DEBUG nova.network.os_vif_util [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "address": "fa:16:3e:b0:d8:25", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecdd9822-20", "ovs_interfaceid": "ecdd9822-20e7-48b8-8a41-d2e490c2be1f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.883 251996 DEBUG nova.network.os_vif_util [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:d8:25,bridge_name='br-int',has_traffic_filtering=True,id=ecdd9822-20e7-48b8-8a41-d2e490c2be1f,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdd9822-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.883 251996 DEBUG os_vif [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:d8:25,bridge_name='br-int',has_traffic_filtering=True,id=ecdd9822-20e7-48b8-8a41-d2e490c2be1f,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdd9822-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.885 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.886 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapecdd9822-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.887 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.889 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:34 compute-0 nova_compute[251992]: 2025-12-06 07:12:34.893 251996 INFO os_vif [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:d8:25,bridge_name='br-int',has_traffic_filtering=True,id=ecdd9822-20e7-48b8-8a41-d2e490c2be1f,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapecdd9822-20')
Dec 06 07:12:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 250 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 11 MiB/s wr, 242 op/s
Dec 06 07:12:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:12:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:36.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:12:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:36.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:36 compute-0 nova_compute[251992]: 2025-12-06 07:12:36.755 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005141.7546482, d59682c6-381c-46bb-9d18-5f76d43dc560 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:12:36 compute-0 nova_compute[251992]: 2025-12-06 07:12:36.756 251996 INFO nova.compute.manager [-] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] VM Stopped (Lifecycle Event)
Dec 06 07:12:36 compute-0 nova_compute[251992]: 2025-12-06 07:12:36.793 251996 DEBUG nova.compute.manager [None req-0477e7bc-d208-4ad8-b83f-c49759ef72b7 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:36 compute-0 nova_compute[251992]: 2025-12-06 07:12:36.797 251996 DEBUG nova.compute.manager [None req-0477e7bc-d208-4ad8-b83f-c49759ef72b7 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: deleting, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:12:36 compute-0 nova_compute[251992]: 2025-12-06 07:12:36.814 251996 INFO nova.compute.manager [None req-0477e7bc-d208-4ad8-b83f-c49759ef72b7 - - - - - -] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] During sync_power_state the instance has a pending task (deleting). Skip.
Dec 06 07:12:36 compute-0 nova_compute[251992]: 2025-12-06 07:12:36.964 251996 INFO nova.virt.libvirt.driver [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Deleting instance files /var/lib/nova/instances/d59682c6-381c-46bb-9d18-5f76d43dc560_del
Dec 06 07:12:36 compute-0 nova_compute[251992]: 2025-12-06 07:12:36.965 251996 INFO nova.virt.libvirt.driver [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Deletion of /var/lib/nova/instances/d59682c6-381c-46bb-9d18-5f76d43dc560_del complete
Dec 06 07:12:37 compute-0 nova_compute[251992]: 2025-12-06 07:12:37.013 251996 INFO nova.compute.manager [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Took 2.18 seconds to destroy the instance on the hypervisor.
Dec 06 07:12:37 compute-0 nova_compute[251992]: 2025-12-06 07:12:37.013 251996 DEBUG oslo.service.loopingcall [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:12:37 compute-0 nova_compute[251992]: 2025-12-06 07:12:37.014 251996 DEBUG nova.compute.manager [-] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:12:37 compute-0 nova_compute[251992]: 2025-12-06 07:12:37.014 251996 DEBUG nova.network.neutron [-] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:12:37 compute-0 ceph-mon[74339]: pgmap v1613: 305 pgs: 305 active+clean; 250 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 11 MiB/s wr, 242 op/s
Dec 06 07:12:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1748162983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 205 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 8.8 MiB/s wr, 257 op/s
Dec 06 07:12:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:38.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:38.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:38 compute-0 nova_compute[251992]: 2025-12-06 07:12:38.397 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:38 compute-0 nova_compute[251992]: 2025-12-06 07:12:38.462 251996 DEBUG nova.network.neutron [-] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:38 compute-0 nova_compute[251992]: 2025-12-06 07:12:38.482 251996 INFO nova.compute.manager [-] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Took 1.47 seconds to deallocate network for instance.
Dec 06 07:12:38 compute-0 ceph-mon[74339]: pgmap v1614: 305 pgs: 305 active+clean; 205 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 8.8 MiB/s wr, 257 op/s
Dec 06 07:12:38 compute-0 nova_compute[251992]: 2025-12-06 07:12:38.543 251996 DEBUG oslo_concurrency.lockutils [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:38 compute-0 nova_compute[251992]: 2025-12-06 07:12:38.543 251996 DEBUG oslo_concurrency.lockutils [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:38 compute-0 nova_compute[251992]: 2025-12-06 07:12:38.616 251996 DEBUG oslo_concurrency.processutils [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Dec 06 07:12:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Dec 06 07:12:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Dec 06 07:12:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:12:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194120696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:39 compute-0 nova_compute[251992]: 2025-12-06 07:12:39.083 251996 DEBUG oslo_concurrency.processutils [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:39 compute-0 nova_compute[251992]: 2025-12-06 07:12:39.090 251996 DEBUG nova.compute.provider_tree [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:12:39 compute-0 nova_compute[251992]: 2025-12-06 07:12:39.110 251996 DEBUG nova.scheduler.client.report [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:12:39 compute-0 nova_compute[251992]: 2025-12-06 07:12:39.138 251996 DEBUG oslo_concurrency.lockutils [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:39 compute-0 nova_compute[251992]: 2025-12-06 07:12:39.163 251996 INFO nova.scheduler.client.report [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Deleted allocations for instance d59682c6-381c-46bb-9d18-5f76d43dc560
Dec 06 07:12:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 129 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 314 op/s
Dec 06 07:12:39 compute-0 nova_compute[251992]: 2025-12-06 07:12:39.251 251996 DEBUG oslo_concurrency.lockutils [None req-aeb5bcac-b125-4157-b3e8-fe0adbc48c13 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "d59682c6-381c-46bb-9d18-5f76d43dc560" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:39 compute-0 podman[288728]: 2025-12-06 07:12:39.392869826 +0000 UTC m=+0.055150426 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:12:39 compute-0 podman[288727]: 2025-12-06 07:12:39.409294983 +0000 UTC m=+0.072394685 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 07:12:39 compute-0 sudo[288765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:39 compute-0 sudo[288765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:39 compute-0 sudo[288765]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:39 compute-0 sudo[288790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:39 compute-0 sudo[288790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:39 compute-0 sudo[288790]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:39 compute-0 nova_compute[251992]: 2025-12-06 07:12:39.889 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:40 compute-0 ceph-mon[74339]: osdmap e230: 3 total, 3 up, 3 in
Dec 06 07:12:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2194120696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1662024483' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:12:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:40.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:12:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:40.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:40 compute-0 nova_compute[251992]: 2025-12-06 07:12:40.584 251996 DEBUG nova.compute.manager [req-0eac07ed-8fd0-48f1-93d4-afd8cc08ff44 req-262b6e9e-108c-4ddd-bef0-516c2bdfc59d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d59682c6-381c-46bb-9d18-5f76d43dc560] Received event network-vif-deleted-ecdd9822-20e7-48b8-8a41-d2e490c2be1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:41 compute-0 ceph-mon[74339]: pgmap v1616: 305 pgs: 305 active+clean; 129 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 314 op/s
Dec 06 07:12:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 88 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.7 MiB/s wr, 247 op/s
Dec 06 07:12:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:42.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:42.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:42 compute-0 ceph-mon[74339]: pgmap v1617: 305 pgs: 305 active+clean; 88 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.7 MiB/s wr, 247 op/s
Dec 06 07:12:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:12:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:12:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:12:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:12:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:12:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:12:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 104 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 240 op/s
Dec 06 07:12:43 compute-0 nova_compute[251992]: 2025-12-06 07:12:43.398 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Dec 06 07:12:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Dec 06 07:12:43 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Dec 06 07:12:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:12:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:44.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:12:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:44.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:44 compute-0 nova_compute[251992]: 2025-12-06 07:12:44.414 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "4e5a488b-67a2-44eb-a8b5-e963515206c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:44 compute-0 nova_compute[251992]: 2025-12-06 07:12:44.414 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:44 compute-0 nova_compute[251992]: 2025-12-06 07:12:44.442 251996 DEBUG nova.compute.manager [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:12:44 compute-0 sudo[288817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:44 compute-0 sudo[288817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:44 compute-0 sudo[288817]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:44 compute-0 sudo[288843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:12:44 compute-0 sudo[288843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:44 compute-0 nova_compute[251992]: 2025-12-06 07:12:44.527 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:44 compute-0 nova_compute[251992]: 2025-12-06 07:12:44.528 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:44 compute-0 sudo[288843]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:44 compute-0 nova_compute[251992]: 2025-12-06 07:12:44.533 251996 DEBUG nova.virt.hardware [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:12:44 compute-0 nova_compute[251992]: 2025-12-06 07:12:44.533 251996 INFO nova.compute.claims [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:12:44 compute-0 sudo[288868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:44 compute-0 sudo[288868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:44 compute-0 sudo[288868]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:44 compute-0 nova_compute[251992]: 2025-12-06 07:12:44.637 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:44 compute-0 sudo[288893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:12:44 compute-0 sudo[288893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:44 compute-0 ceph-mon[74339]: pgmap v1618: 305 pgs: 305 active+clean; 104 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 240 op/s
Dec 06 07:12:44 compute-0 ceph-mon[74339]: osdmap e231: 3 total, 3 up, 3 in
Dec 06 07:12:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/478934728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1382731662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:44 compute-0 nova_compute[251992]: 2025-12-06 07:12:44.892 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:12:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1816668834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:45 compute-0 sudo[288893]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.084 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.090 251996 DEBUG nova.compute.provider_tree [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.115 251996 DEBUG nova.scheduler.client.report [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.145 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.146 251996 DEBUG nova.compute.manager [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.199 251996 DEBUG nova.compute.manager [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.199 251996 DEBUG nova.network.neutron [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.224 251996 INFO nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:12:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:12:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:12:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:12:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:12:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:12:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 134 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Dec 06 07:12:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:12:45 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9abb200e-41ea-4a86-ada4-620a95228ca9 does not exist
Dec 06 07:12:45 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6d84c2f3-ef9c-4f6c-af6c-a637905fc9f8 does not exist
Dec 06 07:12:45 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 71432bff-1dbd-4446-a553-404000d59628 does not exist
Dec 06 07:12:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:12:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:12:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:12:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:12:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:12:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.277 251996 DEBUG nova.compute.manager [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:12:45 compute-0 sudo[288971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:45 compute-0 sudo[288971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:45 compute-0 sudo[288971]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:45 compute-0 sudo[288996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:12:45 compute-0 sudo[288996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.377 251996 DEBUG nova.compute.manager [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:12:45 compute-0 sudo[288996]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.378 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.379 251996 INFO nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Creating image(s)
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.405 251996 DEBUG nova.storage.rbd_utils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 4e5a488b-67a2-44eb-a8b5-e963515206c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:45 compute-0 sudo[289025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.433 251996 DEBUG nova.storage.rbd_utils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 4e5a488b-67a2-44eb-a8b5-e963515206c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:45 compute-0 sudo[289025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:45 compute-0 sudo[289025]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.465 251996 DEBUG nova.storage.rbd_utils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 4e5a488b-67a2-44eb-a8b5-e963515206c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.470 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:45 compute-0 sudo[289089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:12:45 compute-0 sudo[289089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.497 251996 DEBUG nova.policy [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a1ed181a1103481fa4d0b29ce1009dca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c297e84c3a9f48a9a82aebc9e5ade875', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.533 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.534 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.534 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.535 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.560 251996 DEBUG nova.storage.rbd_utils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 4e5a488b-67a2-44eb-a8b5-e963515206c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.563 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 4e5a488b-67a2-44eb-a8b5-e963515206c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:45 compute-0 podman[289198]: 2025-12-06 07:12:45.806579775 +0000 UTC m=+0.041392631 container create db9ac15d82906b86f8a5359fa40388dbf4ecff9aac2c9c6838afead0dd6341ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatterjee, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.832 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 4e5a488b-67a2-44eb-a8b5-e963515206c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:45 compute-0 systemd[1]: Started libpod-conmon-db9ac15d82906b86f8a5359fa40388dbf4ecff9aac2c9c6838afead0dd6341ae.scope.
Dec 06 07:12:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1816668834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:12:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:12:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:12:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:12:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:12:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:12:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:12:45 compute-0 podman[289198]: 2025-12-06 07:12:45.789786978 +0000 UTC m=+0.024599844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:12:45 compute-0 podman[289198]: 2025-12-06 07:12:45.89875706 +0000 UTC m=+0.133569936 container init db9ac15d82906b86f8a5359fa40388dbf4ecff9aac2c9c6838afead0dd6341ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:12:45 compute-0 podman[289198]: 2025-12-06 07:12:45.90559166 +0000 UTC m=+0.140404516 container start db9ac15d82906b86f8a5359fa40388dbf4ecff9aac2c9c6838afead0dd6341ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatterjee, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:12:45 compute-0 podman[289198]: 2025-12-06 07:12:45.910633051 +0000 UTC m=+0.145445927 container attach db9ac15d82906b86f8a5359fa40388dbf4ecff9aac2c9c6838afead0dd6341ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 07:12:45 compute-0 laughing_chatterjee[289221]: 167 167
Dec 06 07:12:45 compute-0 systemd[1]: libpod-db9ac15d82906b86f8a5359fa40388dbf4ecff9aac2c9c6838afead0dd6341ae.scope: Deactivated successfully.
Dec 06 07:12:45 compute-0 podman[289198]: 2025-12-06 07:12:45.912594765 +0000 UTC m=+0.147407631 container died db9ac15d82906b86f8a5359fa40388dbf4ecff9aac2c9c6838afead0dd6341ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:12:45 compute-0 nova_compute[251992]: 2025-12-06 07:12:45.920 251996 DEBUG nova.storage.rbd_utils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] resizing rbd image 4e5a488b-67a2-44eb-a8b5-e963515206c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-841caa2f2e3dc84218e07830491f51f1b0d62b5516ba093221caf307ff44e7d3-merged.mount: Deactivated successfully.
Dec 06 07:12:45 compute-0 podman[289198]: 2025-12-06 07:12:45.951361893 +0000 UTC m=+0.186174749 container remove db9ac15d82906b86f8a5359fa40388dbf4ecff9aac2c9c6838afead0dd6341ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_chatterjee, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:12:45 compute-0 systemd[1]: libpod-conmon-db9ac15d82906b86f8a5359fa40388dbf4ecff9aac2c9c6838afead0dd6341ae.scope: Deactivated successfully.
Dec 06 07:12:46 compute-0 nova_compute[251992]: 2025-12-06 07:12:46.032 251996 DEBUG nova.objects.instance [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e5a488b-67a2-44eb-a8b5-e963515206c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:46 compute-0 nova_compute[251992]: 2025-12-06 07:12:46.050 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:12:46 compute-0 nova_compute[251992]: 2025-12-06 07:12:46.051 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Ensure instance console log exists: /var/lib/nova/instances/4e5a488b-67a2-44eb-a8b5-e963515206c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:12:46 compute-0 nova_compute[251992]: 2025-12-06 07:12:46.051 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:46 compute-0 nova_compute[251992]: 2025-12-06 07:12:46.052 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:46 compute-0 nova_compute[251992]: 2025-12-06 07:12:46.052 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:46 compute-0 podman[289310]: 2025-12-06 07:12:46.10362983 +0000 UTC m=+0.037026132 container create ef5676a7b6775aac331fa560830bab77e643cd241b2d28532b0cac792ae98e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Dec 06 07:12:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:46.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:46 compute-0 systemd[1]: Started libpod-conmon-ef5676a7b6775aac331fa560830bab77e643cd241b2d28532b0cac792ae98e82.scope.
Dec 06 07:12:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3293412122f6721c0d3eb630ab2851a60a33fdc0bddcde82d14799a4bf3073df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3293412122f6721c0d3eb630ab2851a60a33fdc0bddcde82d14799a4bf3073df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3293412122f6721c0d3eb630ab2851a60a33fdc0bddcde82d14799a4bf3073df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3293412122f6721c0d3eb630ab2851a60a33fdc0bddcde82d14799a4bf3073df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3293412122f6721c0d3eb630ab2851a60a33fdc0bddcde82d14799a4bf3073df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:46 compute-0 podman[289310]: 2025-12-06 07:12:46.182209976 +0000 UTC m=+0.115606288 container init ef5676a7b6775aac331fa560830bab77e643cd241b2d28532b0cac792ae98e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_gagarin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 07:12:46 compute-0 podman[289310]: 2025-12-06 07:12:46.088349954 +0000 UTC m=+0.021746286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:12:46 compute-0 podman[289310]: 2025-12-06 07:12:46.191058622 +0000 UTC m=+0.124454924 container start ef5676a7b6775aac331fa560830bab77e643cd241b2d28532b0cac792ae98e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 07:12:46 compute-0 podman[289310]: 2025-12-06 07:12:46.194279481 +0000 UTC m=+0.127675783 container attach ef5676a7b6775aac331fa560830bab77e643cd241b2d28532b0cac792ae98e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_gagarin, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:12:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:12:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:46.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:12:46 compute-0 nova_compute[251992]: 2025-12-06 07:12:46.436 251996 DEBUG nova.network.neutron [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Successfully created port: b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:12:46 compute-0 nova_compute[251992]: 2025-12-06 07:12:46.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:46 compute-0 ceph-mon[74339]: pgmap v1620: 305 pgs: 305 active+clean; 134 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Dec 06 07:12:46 compute-0 objective_gagarin[289326]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:12:46 compute-0 objective_gagarin[289326]: --> relative data size: 1.0
Dec 06 07:12:46 compute-0 objective_gagarin[289326]: --> All data devices are unavailable
Dec 06 07:12:47 compute-0 systemd[1]: libpod-ef5676a7b6775aac331fa560830bab77e643cd241b2d28532b0cac792ae98e82.scope: Deactivated successfully.
Dec 06 07:12:47 compute-0 conmon[289326]: conmon ef5676a7b6775aac331f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef5676a7b6775aac331fa560830bab77e643cd241b2d28532b0cac792ae98e82.scope/container/memory.events
Dec 06 07:12:47 compute-0 podman[289310]: 2025-12-06 07:12:47.002750181 +0000 UTC m=+0.936146483 container died ef5676a7b6775aac331fa560830bab77e643cd241b2d28532b0cac792ae98e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_gagarin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 07:12:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3293412122f6721c0d3eb630ab2851a60a33fdc0bddcde82d14799a4bf3073df-merged.mount: Deactivated successfully.
Dec 06 07:12:47 compute-0 podman[289310]: 2025-12-06 07:12:47.054617864 +0000 UTC m=+0.988014166 container remove ef5676a7b6775aac331fa560830bab77e643cd241b2d28532b0cac792ae98e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_gagarin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:12:47 compute-0 systemd[1]: libpod-conmon-ef5676a7b6775aac331fa560830bab77e643cd241b2d28532b0cac792ae98e82.scope: Deactivated successfully.
Dec 06 07:12:47 compute-0 sudo[289089]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:47 compute-0 sudo[289353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:47 compute-0 sudo[289353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:47 compute-0 sudo[289353]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:47 compute-0 sudo[289378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:12:47 compute-0 sudo[289378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:47 compute-0 sudo[289378]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 143 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 84 op/s
Dec 06 07:12:47 compute-0 sudo[289403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:47 compute-0 sudo[289403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:47 compute-0 sudo[289403]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:47 compute-0 sudo[289428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:12:47 compute-0 sudo[289428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:47 compute-0 podman[289493]: 2025-12-06 07:12:47.667438312 +0000 UTC m=+0.024928164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:12:48 compute-0 podman[289493]: 2025-12-06 07:12:48.007513952 +0000 UTC m=+0.365003784 container create ecdded70d3170b894fdb0df0becd1c3b6c539f5ea1b659a2095ab7593e1cd550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:12:48 compute-0 systemd[1]: Started libpod-conmon-ecdded70d3170b894fdb0df0becd1c3b6c539f5ea1b659a2095ab7593e1cd550.scope.
Dec 06 07:12:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:12:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:48.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:12:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:48.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:12:48 compute-0 nova_compute[251992]: 2025-12-06 07:12:48.363 251996 DEBUG nova.network.neutron [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Successfully updated port: b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:12:48 compute-0 nova_compute[251992]: 2025-12-06 07:12:48.387 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "refresh_cache-4e5a488b-67a2-44eb-a8b5-e963515206c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:12:48 compute-0 nova_compute[251992]: 2025-12-06 07:12:48.387 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquired lock "refresh_cache-4e5a488b-67a2-44eb-a8b5-e963515206c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:12:48 compute-0 nova_compute[251992]: 2025-12-06 07:12:48.387 251996 DEBUG nova.network.neutron [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:12:48 compute-0 nova_compute[251992]: 2025-12-06 07:12:48.400 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:48 compute-0 ceph-mon[74339]: pgmap v1621: 305 pgs: 305 active+clean; 143 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 84 op/s
Dec 06 07:12:48 compute-0 nova_compute[251992]: 2025-12-06 07:12:48.462 251996 DEBUG nova.compute.manager [req-2b20f161-2e3c-414f-9dbc-1c56bd585d4d req-58de410c-fb6b-4089-aa96-41a4ef5dd61e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Received event network-changed-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:48 compute-0 nova_compute[251992]: 2025-12-06 07:12:48.462 251996 DEBUG nova.compute.manager [req-2b20f161-2e3c-414f-9dbc-1c56bd585d4d req-58de410c-fb6b-4089-aa96-41a4ef5dd61e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Refreshing instance network info cache due to event network-changed-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:12:48 compute-0 nova_compute[251992]: 2025-12-06 07:12:48.462 251996 DEBUG oslo_concurrency.lockutils [req-2b20f161-2e3c-414f-9dbc-1c56bd585d4d req-58de410c-fb6b-4089-aa96-41a4ef5dd61e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4e5a488b-67a2-44eb-a8b5-e963515206c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:12:48 compute-0 podman[289493]: 2025-12-06 07:12:48.46934927 +0000 UTC m=+0.826839122 container init ecdded70d3170b894fdb0df0becd1c3b6c539f5ea1b659a2095ab7593e1cd550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:12:48 compute-0 podman[289493]: 2025-12-06 07:12:48.477140037 +0000 UTC m=+0.834629879 container start ecdded70d3170b894fdb0df0becd1c3b6c539f5ea1b659a2095ab7593e1cd550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dewdney, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:12:48 compute-0 focused_dewdney[289510]: 167 167
Dec 06 07:12:48 compute-0 systemd[1]: libpod-ecdded70d3170b894fdb0df0becd1c3b6c539f5ea1b659a2095ab7593e1cd550.scope: Deactivated successfully.
Dec 06 07:12:48 compute-0 conmon[289510]: conmon ecdded70d3170b894fdb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ecdded70d3170b894fdb0df0becd1c3b6c539f5ea1b659a2095ab7593e1cd550.scope/container/memory.events
Dec 06 07:12:48 compute-0 nova_compute[251992]: 2025-12-06 07:12:48.528 251996 DEBUG nova.network.neutron [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:12:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 173 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 1020 KiB/s rd, 4.8 MiB/s wr, 115 op/s
Dec 06 07:12:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:49 compute-0 nova_compute[251992]: 2025-12-06 07:12:49.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:49 compute-0 podman[289493]: 2025-12-06 07:12:49.715461255 +0000 UTC m=+2.072951097 container attach ecdded70d3170b894fdb0df0becd1c3b6c539f5ea1b659a2095ab7593e1cd550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 07:12:49 compute-0 podman[289493]: 2025-12-06 07:12:49.716501673 +0000 UTC m=+2.073991525 container died ecdded70d3170b894fdb0df0becd1c3b6c539f5ea1b659a2095ab7593e1cd550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dewdney, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:12:49 compute-0 nova_compute[251992]: 2025-12-06 07:12:49.894 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fb52c2d62b7aa7cce39ba43569052ae8bd9c4e71ddba94c9434b6395a0425a1-merged.mount: Deactivated successfully.
Dec 06 07:12:49 compute-0 podman[289493]: 2025-12-06 07:12:49.921514707 +0000 UTC m=+2.279004539 container remove ecdded70d3170b894fdb0df0becd1c3b6c539f5ea1b659a2095ab7593e1cd550 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dewdney, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 07:12:49 compute-0 systemd[1]: libpod-conmon-ecdded70d3170b894fdb0df0becd1c3b6c539f5ea1b659a2095ab7593e1cd550.scope: Deactivated successfully.
Dec 06 07:12:50 compute-0 podman[289535]: 2025-12-06 07:12:50.106749549 +0000 UTC m=+0.071564491 container create 6a9c2b4d437428365e1e23184f357ab35b93670d307cd0e49dd2785700f717d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rosalind, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:12:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:50.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:50 compute-0 podman[289535]: 2025-12-06 07:12:50.059691651 +0000 UTC m=+0.024506613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:12:50 compute-0 systemd[1]: Started libpod-conmon-6a9c2b4d437428365e1e23184f357ab35b93670d307cd0e49dd2785700f717d8.scope.
Dec 06 07:12:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f5a46de5cbb9057babd9337c29a919c7ccc0b1ac0e11cfa00327928fe02263/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f5a46de5cbb9057babd9337c29a919c7ccc0b1ac0e11cfa00327928fe02263/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f5a46de5cbb9057babd9337c29a919c7ccc0b1ac0e11cfa00327928fe02263/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f5a46de5cbb9057babd9337c29a919c7ccc0b1ac0e11cfa00327928fe02263/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:50.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:50 compute-0 podman[289535]: 2025-12-06 07:12:50.261855695 +0000 UTC m=+0.226670657 container init 6a9c2b4d437428365e1e23184f357ab35b93670d307cd0e49dd2785700f717d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 07:12:50 compute-0 podman[289535]: 2025-12-06 07:12:50.267936304 +0000 UTC m=+0.232751246 container start 6a9c2b4d437428365e1e23184f357ab35b93670d307cd0e49dd2785700f717d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:12:50 compute-0 podman[289535]: 2025-12-06 07:12:50.2746501 +0000 UTC m=+0.239465052 container attach 6a9c2b4d437428365e1e23184f357ab35b93670d307cd0e49dd2785700f717d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:12:50 compute-0 nova_compute[251992]: 2025-12-06 07:12:50.648 251996 DEBUG nova.network.neutron [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Updating instance_info_cache with network_info: [{"id": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "address": "fa:16:3e:54:98:a6", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4ce6cd9-a8", "ovs_interfaceid": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:50.674 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Releasing lock "refresh_cache-4e5a488b-67a2-44eb-a8b5-e963515206c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:50.675 251996 DEBUG nova.compute.manager [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Instance network_info: |[{"id": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "address": "fa:16:3e:54:98:a6", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4ce6cd9-a8", "ovs_interfaceid": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:50.675 251996 DEBUG oslo_concurrency.lockutils [req-2b20f161-2e3c-414f-9dbc-1c56bd585d4d req-58de410c-fb6b-4089-aa96-41a4ef5dd61e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4e5a488b-67a2-44eb-a8b5-e963515206c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:50.675 251996 DEBUG nova.network.neutron [req-2b20f161-2e3c-414f-9dbc-1c56bd585d4d req-58de410c-fb6b-4089-aa96-41a4ef5dd61e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Refreshing network info cache for port b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:50.679 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Start _get_guest_xml network_info=[{"id": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "address": "fa:16:3e:54:98:a6", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4ce6cd9-a8", "ovs_interfaceid": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:50.714 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.013 251996 WARNING nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.019 251996 DEBUG nova.virt.libvirt.host [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.021 251996 DEBUG nova.virt.libvirt.host [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:12:51 compute-0 ceph-mon[74339]: pgmap v1622: 305 pgs: 305 active+clean; 173 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 1020 KiB/s rd, 4.8 MiB/s wr, 115 op/s
Dec 06 07:12:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3794954356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.025 251996 DEBUG nova.virt.libvirt.host [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.026 251996 DEBUG nova.virt.libvirt.host [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.027 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.028 251996 DEBUG nova.virt.hardware [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.028 251996 DEBUG nova.virt.hardware [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.029 251996 DEBUG nova.virt.hardware [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.029 251996 DEBUG nova.virt.hardware [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.029 251996 DEBUG nova.virt.hardware [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.029 251996 DEBUG nova.virt.hardware [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.030 251996 DEBUG nova.virt.hardware [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.030 251996 DEBUG nova.virt.hardware [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.030 251996 DEBUG nova.virt.hardware [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.030 251996 DEBUG nova.virt.hardware [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.031 251996 DEBUG nova.virt.hardware [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.034 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:51 compute-0 epic_rosalind[289551]: {
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:     "0": [
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:         {
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             "devices": [
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "/dev/loop3"
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             ],
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             "lv_name": "ceph_lv0",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             "lv_size": "7511998464",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             "name": "ceph_lv0",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             "tags": {
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "ceph.cluster_name": "ceph",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "ceph.crush_device_class": "",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "ceph.encrypted": "0",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "ceph.osd_id": "0",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "ceph.type": "block",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:                 "ceph.vdo": "0"
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             },
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             "type": "block",
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:             "vg_name": "ceph_vg0"
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:         }
Dec 06 07:12:51 compute-0 epic_rosalind[289551]:     ]
Dec 06 07:12:51 compute-0 epic_rosalind[289551]: }
Dec 06 07:12:51 compute-0 systemd[1]: libpod-6a9c2b4d437428365e1e23184f357ab35b93670d307cd0e49dd2785700f717d8.scope: Deactivated successfully.
Dec 06 07:12:51 compute-0 podman[289563]: 2025-12-06 07:12:51.177414094 +0000 UTC m=+0.028849724 container died 6a9c2b4d437428365e1e23184f357ab35b93670d307cd0e49dd2785700f717d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rosalind, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 07:12:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 200 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.5 MiB/s wr, 158 op/s
Dec 06 07:12:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:12:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3104088393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.480 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.510 251996 DEBUG nova.storage.rbd_utils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 4e5a488b-67a2-44eb-a8b5-e963515206c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.515 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.682 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.682 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:12:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1153011379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.961 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.963 251996 DEBUG nova.virt.libvirt.vif [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:12:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1564803638',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1564803638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1564803638',id=58,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c297e84c3a9f48a9a82aebc9e5ade875',ramdisk_id='',reservation_id='r-7kt6z9ov',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-324135674',owner_us
er_name='tempest-ImagesOneServerNegativeTestJSON-324135674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:12:45Z,user_data=None,user_id='a1ed181a1103481fa4d0b29ce1009dca',uuid=4e5a488b-67a2-44eb-a8b5-e963515206c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "address": "fa:16:3e:54:98:a6", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4ce6cd9-a8", "ovs_interfaceid": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.963 251996 DEBUG nova.network.os_vif_util [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converting VIF {"id": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "address": "fa:16:3e:54:98:a6", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4ce6cd9-a8", "ovs_interfaceid": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.964 251996 DEBUG nova.network.os_vif_util [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:98:a6,bridge_name='br-int',has_traffic_filtering=True,id=b4ce6cd9-a8d3-4b54-b552-679a64be6ca3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4ce6cd9-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:12:51 compute-0 nova_compute[251992]: 2025-12-06 07:12:51.966 251996 DEBUG nova.objects.instance [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e5a488b-67a2-44eb-a8b5-e963515206c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-94f5a46de5cbb9057babd9337c29a919c7ccc0b1ac0e11cfa00327928fe02263-merged.mount: Deactivated successfully.
Dec 06 07:12:52 compute-0 podman[289563]: 2025-12-06 07:12:52.02242362 +0000 UTC m=+0.873859220 container remove 6a9c2b4d437428365e1e23184f357ab35b93670d307cd0e49dd2785700f717d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 07:12:52 compute-0 systemd[1]: libpod-conmon-6a9c2b4d437428365e1e23184f357ab35b93670d307cd0e49dd2785700f717d8.scope: Deactivated successfully.
Dec 06 07:12:52 compute-0 sudo[289428]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:52 compute-0 sudo[289658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:52.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:52 compute-0 sudo[289658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:52 compute-0 sudo[289658]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.132 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:12:52 compute-0 nova_compute[251992]:   <uuid>4e5a488b-67a2-44eb-a8b5-e963515206c9</uuid>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   <name>instance-0000003a</name>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1564803638</nova:name>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:12:51</nova:creationTime>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <nova:user uuid="a1ed181a1103481fa4d0b29ce1009dca">tempest-ImagesOneServerNegativeTestJSON-324135674-project-member</nova:user>
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <nova:project uuid="c297e84c3a9f48a9a82aebc9e5ade875">tempest-ImagesOneServerNegativeTestJSON-324135674</nova:project>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <nova:port uuid="b4ce6cd9-a8d3-4b54-b552-679a64be6ca3">
Dec 06 07:12:52 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <system>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <entry name="serial">4e5a488b-67a2-44eb-a8b5-e963515206c9</entry>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <entry name="uuid">4e5a488b-67a2-44eb-a8b5-e963515206c9</entry>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     </system>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   <os>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   </os>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   <features>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   </features>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/4e5a488b-67a2-44eb-a8b5-e963515206c9_disk">
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       </source>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/4e5a488b-67a2-44eb-a8b5-e963515206c9_disk.config">
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       </source>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:12:52 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:54:98:a6"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <target dev="tapb4ce6cd9-a8"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/4e5a488b-67a2-44eb-a8b5-e963515206c9/console.log" append="off"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <video>
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     </video>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:12:52 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:12:52 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:12:52 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:12:52 compute-0 nova_compute[251992]: </domain>
Dec 06 07:12:52 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.139 251996 DEBUG nova.compute.manager [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Preparing to wait for external event network-vif-plugged-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.140 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.140 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.140 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.141 251996 DEBUG nova.virt.libvirt.vif [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:12:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1564803638',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1564803638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1564803638',id=58,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c297e84c3a9f48a9a82aebc9e5ade875',ramdisk_id='',reservation_id='r-7kt6z9ov',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-324135674
',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-324135674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:12:45Z,user_data=None,user_id='a1ed181a1103481fa4d0b29ce1009dca',uuid=4e5a488b-67a2-44eb-a8b5-e963515206c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "address": "fa:16:3e:54:98:a6", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4ce6cd9-a8", "ovs_interfaceid": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.141 251996 DEBUG nova.network.os_vif_util [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converting VIF {"id": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "address": "fa:16:3e:54:98:a6", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4ce6cd9-a8", "ovs_interfaceid": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.142 251996 DEBUG nova.network.os_vif_util [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:98:a6,bridge_name='br-int',has_traffic_filtering=True,id=b4ce6cd9-a8d3-4b54-b552-679a64be6ca3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4ce6cd9-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.143 251996 DEBUG os_vif [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:98:a6,bridge_name='br-int',has_traffic_filtering=True,id=b4ce6cd9-a8d3-4b54-b552-679a64be6ca3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4ce6cd9-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.143 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.144 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.144 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:12:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:12:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174284925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.149 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.149 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4ce6cd9-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.150 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4ce6cd9-a8, col_values=(('external_ids', {'iface-id': 'b4ce6cd9-a8d3-4b54-b552-679a64be6ca3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:98:a6', 'vm-uuid': '4e5a488b-67a2-44eb-a8b5-e963515206c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.152 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:52 compute-0 NetworkManager[48965]: <info>  [1765005172.1530] manager: (tapb4ce6cd9-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.154 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.159 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.159 251996 INFO os_vif [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:98:a6,bridge_name='br-int',has_traffic_filtering=True,id=b4ce6cd9-a8d3-4b54-b552-679a64be6ca3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4ce6cd9-a8')
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.169 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:52 compute-0 sudo[289683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:12:52 compute-0 sudo[289683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:52 compute-0 sudo[289683]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:52 compute-0 sudo[289713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:52 compute-0 sudo[289713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:52 compute-0 sudo[289713]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:52.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:52 compute-0 sudo[289738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:12:52 compute-0 sudo[289738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3431653592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3104088393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1153011379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.394 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:52.396 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:12:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:52.397 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.414 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.415 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.416 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.416 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.417 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] No VIF found with MAC fa:16:3e:54:98:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.417 251996 INFO nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Using config drive
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.457 251996 DEBUG nova.storage.rbd_utils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 4e5a488b-67a2-44eb-a8b5-e963515206c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.619 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.621 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4589MB free_disk=20.920528411865234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.621 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.621 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:52 compute-0 podman[289822]: 2025-12-06 07:12:52.627882923 +0000 UTC m=+0.059838365 container create b349260e3d148be5db9bafe45721c1a3d6f938684d005e1e0800f2541e27e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:12:52 compute-0 systemd[1]: Started libpod-conmon-b349260e3d148be5db9bafe45721c1a3d6f938684d005e1e0800f2541e27e62b.scope.
Dec 06 07:12:52 compute-0 podman[289822]: 2025-12-06 07:12:52.592995903 +0000 UTC m=+0.024951375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:12:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.702 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 4e5a488b-67a2-44eb-a8b5-e963515206c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.720 251996 INFO nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance fa49fe6a-09de-40f5-9afe-8b1c9f15f489 has allocations against this compute host but is not found in the database.
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.721 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.721 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.780 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:52 compute-0 podman[289822]: 2025-12-06 07:12:52.857527852 +0000 UTC m=+0.289483324 container init b349260e3d148be5db9bafe45721c1a3d6f938684d005e1e0800f2541e27e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lewin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:12:52 compute-0 podman[289822]: 2025-12-06 07:12:52.866329997 +0000 UTC m=+0.298285439 container start b349260e3d148be5db9bafe45721c1a3d6f938684d005e1e0800f2541e27e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 07:12:52 compute-0 boring_lewin[289838]: 167 167
Dec 06 07:12:52 compute-0 systemd[1]: libpod-b349260e3d148be5db9bafe45721c1a3d6f938684d005e1e0800f2541e27e62b.scope: Deactivated successfully.
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.915 251996 INFO nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Creating config drive at /var/lib/nova/instances/4e5a488b-67a2-44eb-a8b5-e963515206c9/disk.config
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.920 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e5a488b-67a2-44eb-a8b5-e963515206c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj8qiwei3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:52 compute-0 podman[289822]: 2025-12-06 07:12:52.945569301 +0000 UTC m=+0.377524743 container attach b349260e3d148be5db9bafe45721c1a3d6f938684d005e1e0800f2541e27e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 07:12:52 compute-0 podman[289822]: 2025-12-06 07:12:52.946046694 +0000 UTC m=+0.378002136 container died b349260e3d148be5db9bafe45721c1a3d6f938684d005e1e0800f2541e27e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.950 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.951 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bac25bb2c3e156b27307a66050180546cf00fe250b5ac5307ece8dfa27d92ab-merged.mount: Deactivated successfully.
Dec 06 07:12:52 compute-0 nova_compute[251992]: 2025-12-06 07:12:52.975 251996 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:12:52 compute-0 podman[289822]: 2025-12-06 07:12:52.987034245 +0000 UTC m=+0.418989687 container remove b349260e3d148be5db9bafe45721c1a3d6f938684d005e1e0800f2541e27e62b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lewin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:12:52 compute-0 systemd[1]: libpod-conmon-b349260e3d148be5db9bafe45721c1a3d6f938684d005e1e0800f2541e27e62b.scope: Deactivated successfully.
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.058 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e5a488b-67a2-44eb-a8b5-e963515206c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj8qiwei3" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.088 251996 DEBUG nova.storage.rbd_utils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] rbd image 4e5a488b-67a2-44eb-a8b5-e963515206c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.096 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e5a488b-67a2-44eb-a8b5-e963515206c9/disk.config 4e5a488b-67a2-44eb-a8b5-e963515206c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.141 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:53 compute-0 podman[289904]: 2025-12-06 07:12:53.126728231 +0000 UTC m=+0.020640276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:12:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 209 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.7 MiB/s wr, 191 op/s
Dec 06 07:12:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:12:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535144048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.266 251996 DEBUG nova.network.neutron [req-2b20f161-2e3c-414f-9dbc-1c56bd585d4d req-58de410c-fb6b-4089-aa96-41a4ef5dd61e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Updated VIF entry in instance network info cache for port b4ce6cd9-a8d3-4b54-b552-679a64be6ca3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.266 251996 DEBUG nova.network.neutron [req-2b20f161-2e3c-414f-9dbc-1c56bd585d4d req-58de410c-fb6b-4089-aa96-41a4ef5dd61e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Updating instance_info_cache with network_info: [{"id": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "address": "fa:16:3e:54:98:a6", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4ce6cd9-a8", "ovs_interfaceid": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.273 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.279 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.282 251996 DEBUG oslo_concurrency.lockutils [req-2b20f161-2e3c-414f-9dbc-1c56bd585d4d req-58de410c-fb6b-4089-aa96-41a4ef5dd61e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4e5a488b-67a2-44eb-a8b5-e963515206c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.295 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.318 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.319 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.319 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.320 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.320 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.328 251996 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.328 251996 INFO nova.compute.claims [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.332 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.402 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:53 compute-0 podman[289904]: 2025-12-06 07:12:53.475049821 +0000 UTC m=+0.368961836 container create 02a44dac79e4705cdcc4ecbbbc8fb1a607fd8e320862bc3063a02e581d3b4caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.479 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:53 compute-0 ceph-mon[74339]: pgmap v1623: 305 pgs: 305 active+clean; 200 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.5 MiB/s wr, 158 op/s
Dec 06 07:12:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2174284925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1535144048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:53 compute-0 systemd[1]: Started libpod-conmon-02a44dac79e4705cdcc4ecbbbc8fb1a607fd8e320862bc3063a02e581d3b4caf.scope.
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.557 251996 DEBUG oslo_concurrency.processutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e5a488b-67a2-44eb-a8b5-e963515206c9/disk.config 4e5a488b-67a2-44eb-a8b5-e963515206c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.558 251996 INFO nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Deleting local config drive /var/lib/nova/instances/4e5a488b-67a2-44eb-a8b5-e963515206c9/disk.config because it was imported into RBD.
Dec 06 07:12:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63f1ffe8b31910b94507aa3b559d11bc2fbf6db764d111629c1116d65cb23d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63f1ffe8b31910b94507aa3b559d11bc2fbf6db764d111629c1116d65cb23d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63f1ffe8b31910b94507aa3b559d11bc2fbf6db764d111629c1116d65cb23d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63f1ffe8b31910b94507aa3b559d11bc2fbf6db764d111629c1116d65cb23d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:53 compute-0 podman[289904]: 2025-12-06 07:12:53.586851121 +0000 UTC m=+0.480763156 container init 02a44dac79e4705cdcc4ecbbbc8fb1a607fd8e320862bc3063a02e581d3b4caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gauss, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 07:12:53 compute-0 podman[289904]: 2025-12-06 07:12:53.594485172 +0000 UTC m=+0.488397187 container start 02a44dac79e4705cdcc4ecbbbc8fb1a607fd8e320862bc3063a02e581d3b4caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 07:12:53 compute-0 podman[289904]: 2025-12-06 07:12:53.598376471 +0000 UTC m=+0.492288506 container attach 02a44dac79e4705cdcc4ecbbbc8fb1a607fd8e320862bc3063a02e581d3b4caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gauss, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:12:53 compute-0 NetworkManager[48965]: <info>  [1765005173.6192] manager: (tapb4ce6cd9-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Dec 06 07:12:53 compute-0 kernel: tapb4ce6cd9-a8: entered promiscuous mode
Dec 06 07:12:53 compute-0 ovn_controller[147168]: 2025-12-06T07:12:53Z|00144|binding|INFO|Claiming lport b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 for this chassis.
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.628 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:53 compute-0 ovn_controller[147168]: 2025-12-06T07:12:53Z|00145|binding|INFO|b4ce6cd9-a8d3-4b54-b552-679a64be6ca3: Claiming fa:16:3e:54:98:a6 10.100.0.11
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.632 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:53 compute-0 systemd-machined[212986]: New machine qemu-24-instance-0000003a.
Dec 06 07:12:53 compute-0 systemd-udevd[289979]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:12:53 compute-0 NetworkManager[48965]: <info>  [1765005173.6695] device (tapb4ce6cd9-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:12:53 compute-0 NetworkManager[48965]: <info>  [1765005173.6707] device (tapb4ce6cd9-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:12:53 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-0000003a.
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.692 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:98:a6 10.100.0.11'], port_security=['fa:16:3e:54:98:a6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4e5a488b-67a2-44eb-a8b5-e963515206c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49680c77-2db5-4d0f-bd5b-08899440c38e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c297e84c3a9f48a9a82aebc9e5ade875', 'neutron:revision_number': '2', 'neutron:security_group_ids': '44ffcb05-d145-4d07-b800-cc9e3941da49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2602f87-ec0b-4d1c-8f8b-eee8bbcfddb2, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=b4ce6cd9-a8d3-4b54-b552-679a64be6ca3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.694 158118 INFO neutron.agent.ovn.metadata.agent [-] Port b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 in datapath 49680c77-2db5-4d0f-bd5b-08899440c38e bound to our chassis
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.695 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49680c77-2db5-4d0f-bd5b-08899440c38e
Dec 06 07:12:53 compute-0 ovn_controller[147168]: 2025-12-06T07:12:53Z|00146|binding|INFO|Setting lport b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 ovn-installed in OVS
Dec 06 07:12:53 compute-0 ovn_controller[147168]: 2025-12-06T07:12:53Z|00147|binding|INFO|Setting lport b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 up in Southbound
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.707 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.706 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b9171d7a-2ccc-4fdd-a6e3-1481dda7f6f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.707 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap49680c77-21 in ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.712 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap49680c77-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.712 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[de44d28b-c725-470a-8928-1e6a7f422f1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.714 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[56679294-62a7-4d85-bdec-bea0af9ea369]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.726 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[6929f76d-a9d1-4385-af8e-d189589a9373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.739 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[09d9aac5-03e1-441e-bd21-858a87868f75]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.769 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3368e7d9-65da-490b-bc1f-9b9a27be3a1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.775 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[65ac9ccf-d77e-4ae0-9461-3ed7a387a955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 NetworkManager[48965]: <info>  [1765005173.7760] manager: (tap49680c77-20): new Veth device (/org/freedesktop/NetworkManager/Devices/81)
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.804 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[75cbe41a-78ee-4ccd-8cfd-8bca073bea89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.807 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ed034585-ec85-414d-8834-7a8f4e21fd04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 NetworkManager[48965]: <info>  [1765005173.8259] device (tap49680c77-20): carrier: link connected
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.831 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[16710195-ca20-4207-bd3e-63ebb51b83ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.848 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[19f5040a-2db0-46ae-8b24-331953fc945d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49680c77-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:86:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544641, 'reachable_time': 30348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290012, 'error': None, 'target': 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.865 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[afcd3a7d-2467-42fc-8581-7981a1827ffc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:86b2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 544641, 'tstamp': 544641}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290013, 'error': None, 'target': 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.881 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a54669f7-2d5d-46a1-95ba-ce05ff97c1b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49680c77-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:86:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544641, 'reachable_time': 30348, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290014, 'error': None, 'target': 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.909 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[23d6c609-024b-4822-acf0-143efe3388a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:12:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3750448777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.966 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d446aa74-e556-406b-8d83-17eb345bfe7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.968 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49680c77-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.968 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.969 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49680c77-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:53 compute-0 kernel: tap49680c77-20: entered promiscuous mode
Dec 06 07:12:53 compute-0 NetworkManager[48965]: <info>  [1765005173.9723] manager: (tap49680c77-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.972 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.977 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49680c77-20, col_values=(('external_ids', {'iface-id': '585005cf-d18f-4bcc-9942-2eb7dec20acd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:53 compute-0 ovn_controller[147168]: 2025-12-06T07:12:53Z|00148|binding|INFO|Releasing lport 585005cf-d18f-4bcc-9942-2eb7dec20acd from this chassis (sb_readonly=0)
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.979 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.981 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/49680c77-2db5-4d0f-bd5b-08899440c38e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/49680c77-2db5-4d0f-bd5b-08899440c38e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.982 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0964a5c7-e20a-4405-8a14-575ba737c460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.983 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-49680c77-2db5-4d0f-bd5b-08899440c38e
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/49680c77-2db5-4d0f-bd5b-08899440c38e.pid.haproxy
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 49680c77-2db5-4d0f-bd5b-08899440c38e
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:12:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:53.985 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'env', 'PROCESS_TAG=haproxy-49680c77-2db5-4d0f-bd5b-08899440c38e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/49680c77-2db5-4d0f-bd5b-08899440c38e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.986 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.994 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:53 compute-0 nova_compute[251992]: 2025-12-06 07:12:53.998 251996 DEBUG nova.compute.provider_tree [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.020 251996 DEBUG nova.scheduler.client.report [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.047 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.048 251996 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.087 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005174.0866802, 4e5a488b-67a2-44eb-a8b5-e963515206c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.087 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] VM Started (Lifecycle Event)
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.099 251996 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.100 251996 DEBUG nova.network.neutron [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.117 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.122 251996 INFO nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.127 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005174.0876725, 4e5a488b-67a2-44eb-a8b5-e963515206c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.127 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] VM Paused (Lifecycle Event)
Dec 06 07:12:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:54.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.144 251996 DEBUG nova.compute.manager [req-61bab5ef-dfe6-4cce-99b6-68911f478864 req-5f23ff5d-91e7-4c9a-9ea2-5a93b7bfed20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Received event network-vif-plugged-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.145 251996 DEBUG oslo_concurrency.lockutils [req-61bab5ef-dfe6-4cce-99b6-68911f478864 req-5f23ff5d-91e7-4c9a-9ea2-5a93b7bfed20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.145 251996 DEBUG oslo_concurrency.lockutils [req-61bab5ef-dfe6-4cce-99b6-68911f478864 req-5f23ff5d-91e7-4c9a-9ea2-5a93b7bfed20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.146 251996 DEBUG oslo_concurrency.lockutils [req-61bab5ef-dfe6-4cce-99b6-68911f478864 req-5f23ff5d-91e7-4c9a-9ea2-5a93b7bfed20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.146 251996 DEBUG nova.compute.manager [req-61bab5ef-dfe6-4cce-99b6-68911f478864 req-5f23ff5d-91e7-4c9a-9ea2-5a93b7bfed20 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Processing event network-vif-plugged-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.147 251996 DEBUG nova.compute.manager [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.152 251996 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.155 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.158 251996 INFO nova.virt.libvirt.driver [-] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Instance spawned successfully.
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.159 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.162 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.166 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005174.1505084, 4e5a488b-67a2-44eb-a8b5-e963515206c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.167 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] VM Resumed (Lifecycle Event)
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.203 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.208 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.208 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.209 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.210 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.210 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.211 251996 DEBUG nova.virt.libvirt.driver [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.220 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:12:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:54.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.272 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.314 251996 INFO nova.compute.manager [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Took 8.94 seconds to spawn the instance on the hypervisor.
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.315 251996 DEBUG nova.compute.manager [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.316 251996 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.317 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.318 251996 INFO nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Creating image(s)
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.350 251996 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.395 251996 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.427 251996 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.432 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:54 compute-0 podman[290114]: 2025-12-06 07:12:54.443229853 +0000 UTC m=+0.056144963 container create 0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.459 251996 DEBUG nova.policy [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03fb2817729e4b71932023a7637c6244', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de09de98b3b1445f88b6094b6aac4a30', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.463 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.464 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:54 compute-0 systemd[1]: Started libpod-conmon-0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c.scope.
Dec 06 07:12:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Dec 06 07:12:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.509 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:54 compute-0 podman[290114]: 2025-12-06 07:12:54.41397763 +0000 UTC m=+0.026892760 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.511 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.512 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.512 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:54 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Dec 06 07:12:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:12:54 compute-0 ceph-mon[74339]: pgmap v1624: 305 pgs: 305 active+clean; 209 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.7 MiB/s wr, 191 op/s
Dec 06 07:12:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1198184614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3750448777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee82db74e67ca950b27987fffe23995a3235dfede14013ad7ebc30f27bec0db9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:12:54 compute-0 podman[290114]: 2025-12-06 07:12:54.541858048 +0000 UTC m=+0.154773168 container init 0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 07:12:54 compute-0 podman[290114]: 2025-12-06 07:12:54.547891795 +0000 UTC m=+0.160806905 container start 0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.553 251996 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:54 compute-0 xenodochial_gauss[289942]: {
Dec 06 07:12:54 compute-0 xenodochial_gauss[289942]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:12:54 compute-0 xenodochial_gauss[289942]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:12:54 compute-0 xenodochial_gauss[289942]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:12:54 compute-0 xenodochial_gauss[289942]:         "osd_id": 0,
Dec 06 07:12:54 compute-0 xenodochial_gauss[289942]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:12:54 compute-0 xenodochial_gauss[289942]:         "type": "bluestore"
Dec 06 07:12:54 compute-0 xenodochial_gauss[289942]:     }
Dec 06 07:12:54 compute-0 xenodochial_gauss[289942]: }
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.565 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:54 compute-0 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[290166]: [NOTICE]   (290191) : New worker (290197) forked
Dec 06 07:12:54 compute-0 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[290166]: [NOTICE]   (290191) : Loading success.
Dec 06 07:12:54 compute-0 systemd[1]: libpod-02a44dac79e4705cdcc4ecbbbc8fb1a607fd8e320862bc3063a02e581d3b4caf.scope: Deactivated successfully.
Dec 06 07:12:54 compute-0 podman[289904]: 2025-12-06 07:12:54.591331383 +0000 UTC m=+1.485243398 container died 02a44dac79e4705cdcc4ecbbbc8fb1a607fd8e320862bc3063a02e581d3b4caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.600 251996 INFO nova.compute.manager [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Took 10.10 seconds to build instance.
Dec 06 07:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a63f1ffe8b31910b94507aa3b559d11bc2fbf6db764d111629c1116d65cb23d5-merged.mount: Deactivated successfully.
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.622 251996 DEBUG oslo_concurrency.lockutils [None req-65e0731d-b589-449c-8acd-91b773266080 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:54 compute-0 podman[289904]: 2025-12-06 07:12:54.650986293 +0000 UTC m=+1.544898308 container remove 02a44dac79e4705cdcc4ecbbbc8fb1a607fd8e320862bc3063a02e581d3b4caf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:54 compute-0 systemd[1]: libpod-conmon-02a44dac79e4705cdcc4ecbbbc8fb1a607fd8e320862bc3063a02e581d3b4caf.scope: Deactivated successfully.
Dec 06 07:12:54 compute-0 sudo[289738]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:12:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:12:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:12:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:12:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e5da89f0-b568-4d40-a014-fcf6b9e7919a does not exist
Dec 06 07:12:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1a30b36b-50e7-4618-a431-a58ffbbcc291 does not exist
Dec 06 07:12:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 91a909ec-68b4-41f6-aea3-7b05b8f74e55 does not exist
Dec 06 07:12:54 compute-0 sudo[290236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:54 compute-0 sudo[290236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:54 compute-0 sudo[290236]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:54 compute-0 sudo[290261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:12:54 compute-0 sudo[290261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:54 compute-0 sudo[290261]: pam_unix(sudo:session): session closed for user root
Dec 06 07:12:54 compute-0 nova_compute[251992]: 2025-12-06 07:12:54.981 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:55 compute-0 nova_compute[251992]: 2025-12-06 07:12:55.077 251996 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] resizing rbd image fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:12:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 264 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.2 MiB/s wr, 219 op/s
Dec 06 07:12:55 compute-0 nova_compute[251992]: 2025-12-06 07:12:55.256 251996 DEBUG nova.objects.instance [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lazy-loading 'migration_context' on Instance uuid fa49fe6a-09de-40f5-9afe-8b1c9f15f489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:55 compute-0 nova_compute[251992]: 2025-12-06 07:12:55.274 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:12:55 compute-0 nova_compute[251992]: 2025-12-06 07:12:55.274 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Ensure instance console log exists: /var/lib/nova/instances/fa49fe6a-09de-40f5-9afe-8b1c9f15f489/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:12:55 compute-0 nova_compute[251992]: 2025-12-06 07:12:55.275 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:55 compute-0 nova_compute[251992]: 2025-12-06 07:12:55.275 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:55 compute-0 nova_compute[251992]: 2025-12-06 07:12:55.275 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:55 compute-0 nova_compute[251992]: 2025-12-06 07:12:55.439 251996 DEBUG nova.network.neutron [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Successfully created port: 7c232208-466f-4670-b4dd-df7f111c3185 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:12:55 compute-0 ceph-mon[74339]: osdmap e232: 3 total, 3 up, 3 in
Dec 06 07:12:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:12:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:12:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Dec 06 07:12:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Dec 06 07:12:55 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Dec 06 07:12:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:12:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:56.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.151 251996 DEBUG nova.network.neutron [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Successfully updated port: 7c232208-466f-4670-b4dd-df7f111c3185 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.171 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "refresh_cache-fa49fe6a-09de-40f5-9afe-8b1c9f15f489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.171 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquired lock "refresh_cache-fa49fe6a-09de-40f5-9afe-8b1c9f15f489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.171 251996 DEBUG nova.network.neutron [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:12:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:12:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:56.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.299 251996 DEBUG nova.compute.manager [req-a0604fc9-afb1-4fbd-a4b6-3c9b0b0d1a59 req-41073de1-c6ed-4ed4-988f-c9526968a7b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Received event network-vif-plugged-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.299 251996 DEBUG oslo_concurrency.lockutils [req-a0604fc9-afb1-4fbd-a4b6-3c9b0b0d1a59 req-41073de1-c6ed-4ed4-988f-c9526968a7b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.299 251996 DEBUG oslo_concurrency.lockutils [req-a0604fc9-afb1-4fbd-a4b6-3c9b0b0d1a59 req-41073de1-c6ed-4ed4-988f-c9526968a7b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.299 251996 DEBUG oslo_concurrency.lockutils [req-a0604fc9-afb1-4fbd-a4b6-3c9b0b0d1a59 req-41073de1-c6ed-4ed4-988f-c9526968a7b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.299 251996 DEBUG nova.compute.manager [req-a0604fc9-afb1-4fbd-a4b6-3c9b0b0d1a59 req-41073de1-c6ed-4ed4-988f-c9526968a7b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] No waiting events found dispatching network-vif-plugged-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.300 251996 WARNING nova.compute.manager [req-a0604fc9-afb1-4fbd-a4b6-3c9b0b0d1a59 req-41073de1-c6ed-4ed4-988f-c9526968a7b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Received unexpected event network-vif-plugged-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 for instance with vm_state active and task_state None.
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.301 251996 DEBUG nova.compute.manager [req-15601659-768c-45d3-b0ee-46a86766dbec req-59901042-262e-4663-9c96-209dc699199d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Received event network-changed-7c232208-466f-4670-b4dd-df7f111c3185 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.301 251996 DEBUG nova.compute.manager [req-15601659-768c-45d3-b0ee-46a86766dbec req-59901042-262e-4663-9c96-209dc699199d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Refreshing instance network info cache due to event network-changed-7c232208-466f-4670-b4dd-df7f111c3185. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.302 251996 DEBUG oslo_concurrency.lockutils [req-15601659-768c-45d3-b0ee-46a86766dbec req-59901042-262e-4663-9c96-209dc699199d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-fa49fe6a-09de-40f5-9afe-8b1c9f15f489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.438 251996 DEBUG nova.network.neutron [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:12:56 compute-0 ceph-mon[74339]: pgmap v1626: 305 pgs: 305 active+clean; 264 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.2 MiB/s wr, 219 op/s
Dec 06 07:12:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2861895599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:56 compute-0 ceph-mon[74339]: osdmap e233: 3 total, 3 up, 3 in
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.649 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:56 compute-0 nova_compute[251992]: 2025-12-06 07:12:56.655 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Dec 06 07:12:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Dec 06 07:12:56 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.197 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 292 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.4 MiB/s wr, 271 op/s
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.299 251996 DEBUG nova.compute.manager [None req-9b62c58f-4671-4e7f-9722-0cbba836a129 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.355 251996 INFO nova.compute.manager [None req-9b62c58f-4671-4e7f-9722-0cbba836a129 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] instance snapshotting
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.642 251996 DEBUG nova.network.neutron [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Updating instance_info_cache with network_info: [{"id": "7c232208-466f-4670-b4dd-df7f111c3185", "address": "fa:16:3e:ba:d8:09", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c232208-46", "ovs_interfaceid": "7c232208-466f-4670-b4dd-df7f111c3185", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.667 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Releasing lock "refresh_cache-fa49fe6a-09de-40f5-9afe-8b1c9f15f489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.668 251996 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Instance network_info: |[{"id": "7c232208-466f-4670-b4dd-df7f111c3185", "address": "fa:16:3e:ba:d8:09", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c232208-46", "ovs_interfaceid": "7c232208-466f-4670-b4dd-df7f111c3185", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.668 251996 DEBUG oslo_concurrency.lockutils [req-15601659-768c-45d3-b0ee-46a86766dbec req-59901042-262e-4663-9c96-209dc699199d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-fa49fe6a-09de-40f5-9afe-8b1c9f15f489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.669 251996 DEBUG nova.network.neutron [req-15601659-768c-45d3-b0ee-46a86766dbec req-59901042-262e-4663-9c96-209dc699199d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Refreshing network info cache for port 7c232208-466f-4670-b4dd-df7f111c3185 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.672 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Start _get_guest_xml network_info=[{"id": "7c232208-466f-4670-b4dd-df7f111c3185", "address": "fa:16:3e:ba:d8:09", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c232208-46", "ovs_interfaceid": "7c232208-466f-4670-b4dd-df7f111c3185", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.679 251996 INFO nova.virt.libvirt.driver [None req-9b62c58f-4671-4e7f-9722-0cbba836a129 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Beginning live snapshot process
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.682 251996 WARNING nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.686 251996 DEBUG nova.virt.libvirt.host [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.687 251996 DEBUG nova.virt.libvirt.host [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.690 251996 DEBUG nova.virt.libvirt.host [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.690 251996 DEBUG nova.virt.libvirt.host [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.691 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.692 251996 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.692 251996 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.692 251996 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.693 251996 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.693 251996 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.693 251996 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.694 251996 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.694 251996 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.694 251996 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.694 251996 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.695 251996 DEBUG nova.virt.hardware [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.697 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2384847691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:12:57 compute-0 ceph-mon[74339]: osdmap e234: 3 total, 3 up, 3 in
Dec 06 07:12:57 compute-0 nova_compute[251992]: 2025-12-06 07:12:57.855 251996 DEBUG nova.virt.libvirt.imagebackend [None req-9b62c58f-4671-4e7f-9722-0cbba836a129 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.043 251996 DEBUG nova.storage.rbd_utils [None req-9b62c58f-4671-4e7f-9722-0cbba836a129 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] creating snapshot(dda3012ebe344133a912776fc3107abc) on rbd image(4e5a488b-67a2-44eb-a8b5-e963515206c9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:12:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:12:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3324865310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:12:58.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.144 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.172 251996 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.176 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:12:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:12:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:12:58.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:12:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:12:58.399 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.405 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:12:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2405761541' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.622 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.624 251996 DEBUG nova.virt.libvirt.vif [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:12:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-481293432',display_name='tempest-tempest.common.compute-instance-481293432-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-481293432-2',id=60,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de09de98b3b1445f88b6094b6aac4a30',ramdisk_id='',reservation_id='r-tgs77gw2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1199242675',owner_user_name='tempest-MultipleCre
ateTestJSON-1199242675-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:12:54Z,user_data=None,user_id='03fb2817729e4b71932023a7637c6244',uuid=fa49fe6a-09de-40f5-9afe-8b1c9f15f489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c232208-466f-4670-b4dd-df7f111c3185", "address": "fa:16:3e:ba:d8:09", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c232208-46", "ovs_interfaceid": "7c232208-466f-4670-b4dd-df7f111c3185", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.625 251996 DEBUG nova.network.os_vif_util [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converting VIF {"id": "7c232208-466f-4670-b4dd-df7f111c3185", "address": "fa:16:3e:ba:d8:09", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c232208-46", "ovs_interfaceid": "7c232208-466f-4670-b4dd-df7f111c3185", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.626 251996 DEBUG nova.network.os_vif_util [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:d8:09,bridge_name='br-int',has_traffic_filtering=True,id=7c232208-466f-4670-b4dd-df7f111c3185,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c232208-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.627 251996 DEBUG nova.objects.instance [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lazy-loading 'pci_devices' on Instance uuid fa49fe6a-09de-40f5-9afe-8b1c9f15f489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.650 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:12:58 compute-0 nova_compute[251992]:   <uuid>fa49fe6a-09de-40f5-9afe-8b1c9f15f489</uuid>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   <name>instance-0000003c</name>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <nova:name>tempest-tempest.common.compute-instance-481293432-2</nova:name>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:12:57</nova:creationTime>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <nova:user uuid="03fb2817729e4b71932023a7637c6244">tempest-MultipleCreateTestJSON-1199242675-project-member</nova:user>
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <nova:project uuid="de09de98b3b1445f88b6094b6aac4a30">tempest-MultipleCreateTestJSON-1199242675</nova:project>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <nova:port uuid="7c232208-466f-4670-b4dd-df7f111c3185">
Dec 06 07:12:58 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <system>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <entry name="serial">fa49fe6a-09de-40f5-9afe-8b1c9f15f489</entry>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <entry name="uuid">fa49fe6a-09de-40f5-9afe-8b1c9f15f489</entry>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     </system>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   <os>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   </os>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   <features>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   </features>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk">
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       </source>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk.config">
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       </source>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:12:58 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:ba:d8:09"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <target dev="tap7c232208-46"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/fa49fe6a-09de-40f5-9afe-8b1c9f15f489/console.log" append="off"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <video>
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     </video>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:12:58 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:12:58 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:12:58 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:12:58 compute-0 nova_compute[251992]: </domain>
Dec 06 07:12:58 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.657 251996 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Preparing to wait for external event network-vif-plugged-7c232208-466f-4670-b4dd-df7f111c3185 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.658 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.658 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.658 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.659 251996 DEBUG nova.virt.libvirt.vif [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:12:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-481293432',display_name='tempest-tempest.common.compute-instance-481293432-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-481293432-2',id=60,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de09de98b3b1445f88b6094b6aac4a30',ramdisk_id='',reservation_id='r-tgs77gw2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1199242675',owner_user_name='tempest-MultipleCreateTestJSON-1199242675-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:12:54Z,user_data=None,user_id='03fb2817729e4b71932023a7637c6244',uuid=fa49fe6a-09de-40f5-9afe-8b1c9f15f489,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c232208-466f-4670-b4dd-df7f111c3185", "address": "fa:16:3e:ba:d8:09", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c232208-46", "ovs_interfaceid": "7c232208-466f-4670-b4dd-df7f111c3185", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.659 251996 DEBUG nova.network.os_vif_util [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converting VIF {"id": "7c232208-466f-4670-b4dd-df7f111c3185", "address": "fa:16:3e:ba:d8:09", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c232208-46", "ovs_interfaceid": "7c232208-466f-4670-b4dd-df7f111c3185", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.660 251996 DEBUG nova.network.os_vif_util [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:d8:09,bridge_name='br-int',has_traffic_filtering=True,id=7c232208-466f-4670-b4dd-df7f111c3185,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c232208-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.660 251996 DEBUG os_vif [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:d8:09,bridge_name='br-int',has_traffic_filtering=True,id=7c232208-466f-4670-b4dd-df7f111c3185,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c232208-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.661 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.662 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.662 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.663 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.663 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.665 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.665 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.668 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.669 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c232208-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.669 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7c232208-46, col_values=(('external_ids', {'iface-id': '7c232208-466f-4670-b4dd-df7f111c3185', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:d8:09', 'vm-uuid': 'fa49fe6a-09de-40f5-9afe-8b1c9f15f489'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.671 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.672 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:12:58 compute-0 NetworkManager[48965]: <info>  [1765005178.6742] manager: (tap7c232208-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.682 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:12:58 compute-0 nova_compute[251992]: 2025-12-06 07:12:58.683 251996 INFO os_vif [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:d8:09,bridge_name='br-int',has_traffic_filtering=True,id=7c232208-466f-4670-b4dd-df7f111c3185,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c232208-46')
Dec 06 07:12:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Dec 06 07:12:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 352 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 11 MiB/s wr, 329 op/s
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.292 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.293 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.294 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] No VIF found with MAC fa:16:3e:ba:d8:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.294 251996 INFO nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Using config drive
Dec 06 07:12:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Dec 06 07:12:59 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.334 251996 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.351 251996 DEBUG nova.network.neutron [req-15601659-768c-45d3-b0ee-46a86766dbec req-59901042-262e-4663-9c96-209dc699199d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Updated VIF entry in instance network info cache for port 7c232208-466f-4670-b4dd-df7f111c3185. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.352 251996 DEBUG nova.network.neutron [req-15601659-768c-45d3-b0ee-46a86766dbec req-59901042-262e-4663-9c96-209dc699199d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Updating instance_info_cache with network_info: [{"id": "7c232208-466f-4670-b4dd-df7f111c3185", "address": "fa:16:3e:ba:d8:09", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c232208-46", "ovs_interfaceid": "7c232208-466f-4670-b4dd-df7f111c3185", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.366 251996 DEBUG oslo_concurrency.lockutils [req-15601659-768c-45d3-b0ee-46a86766dbec req-59901042-262e-4663-9c96-209dc699199d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-fa49fe6a-09de-40f5-9afe-8b1c9f15f489" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.520 251996 DEBUG nova.storage.rbd_utils [None req-9b62c58f-4671-4e7f-9722-0cbba836a129 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] cloning vms/4e5a488b-67a2-44eb-a8b5-e963515206c9_disk@dda3012ebe344133a912776fc3107abc to images/78b95c69-8ea4-40d9-832c-1e5db1eba4f8 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:12:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.697 251996 INFO nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Creating config drive at /var/lib/nova/instances/fa49fe6a-09de-40f5-9afe-8b1c9f15f489/disk.config
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.704 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fa49fe6a-09de-40f5-9afe-8b1c9f15f489/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgiciqdy6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.741 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.742 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.743 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.762 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.762 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-4e5a488b-67a2-44eb-a8b5-e963515206c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.762 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-4e5a488b-67a2-44eb-a8b5-e963515206c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.763 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.763 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4e5a488b-67a2-44eb-a8b5-e963515206c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:12:59 compute-0 nova_compute[251992]: 2025-12-06 07:12:59.836 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fa49fe6a-09de-40f5-9afe-8b1c9f15f489/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgiciqdy6" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:12:59 compute-0 sudo[290537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:12:59 compute-0 sudo[290537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:12:59 compute-0 sudo[290537]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:00 compute-0 sudo[290562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:13:00 compute-0 sudo[290562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:00 compute-0 sudo[290562]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000055s ======
Dec 06 07:13:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:00.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Dec 06 07:13:00 compute-0 ceph-mon[74339]: pgmap v1629: 305 pgs: 305 active+clean; 292 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.4 MiB/s wr, 271 op/s
Dec 06 07:13:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3324865310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1024249341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2405761541' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:00.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:00 compute-0 nova_compute[251992]: 2025-12-06 07:13:00.277 251996 DEBUG nova.storage.rbd_utils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:00 compute-0 nova_compute[251992]: 2025-12-06 07:13:00.282 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fa49fe6a-09de-40f5-9afe-8b1c9f15f489/disk.config fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:00 compute-0 nova_compute[251992]: 2025-12-06 07:13:00.350 251996 DEBUG nova.storage.rbd_utils [None req-9b62c58f-4671-4e7f-9722-0cbba836a129 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] flattening images/78b95c69-8ea4-40d9-832c-1e5db1eba4f8 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:13:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 352 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.5 MiB/s wr, 324 op/s
Dec 06 07:13:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2506934581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:01 compute-0 ceph-mon[74339]: pgmap v1630: 305 pgs: 305 active+clean; 352 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 11 MiB/s wr, 329 op/s
Dec 06 07:13:01 compute-0 ceph-mon[74339]: osdmap e235: 3 total, 3 up, 3 in
Dec 06 07:13:01 compute-0 nova_compute[251992]: 2025-12-06 07:13:01.957 251996 DEBUG oslo_concurrency.processutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fa49fe6a-09de-40f5-9afe-8b1c9f15f489/disk.config fa49fe6a-09de-40f5-9afe-8b1c9f15f489_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:01 compute-0 nova_compute[251992]: 2025-12-06 07:13:01.958 251996 INFO nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Deleting local config drive /var/lib/nova/instances/fa49fe6a-09de-40f5-9afe-8b1c9f15f489/disk.config because it was imported into RBD.
Dec 06 07:13:02 compute-0 kernel: tap7c232208-46: entered promiscuous mode
Dec 06 07:13:02 compute-0 NetworkManager[48965]: <info>  [1765005182.0132] manager: (tap7c232208-46): new Tun device (/org/freedesktop/NetworkManager/Devices/84)
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.016 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.020 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:02 compute-0 ovn_controller[147168]: 2025-12-06T07:13:02Z|00149|binding|INFO|Claiming lport 7c232208-466f-4670-b4dd-df7f111c3185 for this chassis.
Dec 06 07:13:02 compute-0 ovn_controller[147168]: 2025-12-06T07:13:02Z|00150|binding|INFO|7c232208-466f-4670-b4dd-df7f111c3185: Claiming fa:16:3e:ba:d8:09 10.100.0.14
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.030 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:d8:09 10.100.0.14'], port_security=['fa:16:3e:ba:d8:09 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fa49fe6a-09de-40f5-9afe-8b1c9f15f489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de09de98b3b1445f88b6094b6aac4a30', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c3e308f8-65fc-402f-96ed-2039201b95ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5d88f71-5dee-4f84-b30d-05c6ecea010a, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=7c232208-466f-4670-b4dd-df7f111c3185) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.031 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 7c232208-466f-4670-b4dd-df7f111c3185 in datapath c0aeacae-e53d-425f-88e7-942ba0ab660c bound to our chassis
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.033 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c0aeacae-e53d-425f-88e7-942ba0ab660c
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.042 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3943e5-792a-41c9-a7a7-443294ac0e59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.043 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc0aeacae-e1 in ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.045 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc0aeacae-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.045 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cb44469a-b11e-4cf6-883a-325ef6b3dbde]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.046 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[886731df-100e-4aa0-8f80-3f5f66a5580a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 systemd-udevd[290653]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:13:02 compute-0 systemd-machined[212986]: New machine qemu-25-instance-0000003c.
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.060 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd47d56-77c5-40ad-8072-106799f27945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-0000003c.
Dec 06 07:13:02 compute-0 NetworkManager[48965]: <info>  [1765005182.0807] device (tap7c232208-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:13:02 compute-0 NetworkManager[48965]: <info>  [1765005182.0819] device (tap7c232208-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.085 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0f54c8-f3ee-474e-9568-1de1699a703c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.094 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:02 compute-0 ovn_controller[147168]: 2025-12-06T07:13:02Z|00151|binding|INFO|Setting lport 7c232208-466f-4670-b4dd-df7f111c3185 ovn-installed in OVS
Dec 06 07:13:02 compute-0 ovn_controller[147168]: 2025-12-06T07:13:02Z|00152|binding|INFO|Setting lport 7c232208-466f-4670-b4dd-df7f111c3185 up in Southbound
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.102 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.117 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d5dae7-7266-41c0-95d2-d2695ff15dfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 systemd-udevd[290657]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:13:02 compute-0 NetworkManager[48965]: <info>  [1765005182.1227] manager: (tapc0aeacae-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/85)
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.121 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d577a11c-9e7b-4022-a27f-1bc6bfe9f79f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.126 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Updating instance_info_cache with network_info: [{"id": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "address": "fa:16:3e:54:98:a6", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4ce6cd9-a8", "ovs_interfaceid": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:02.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.152 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-4e5a488b-67a2-44eb-a8b5-e963515206c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.152 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.162 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[281363a0-9924-4f20-837a-18ed03061c33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.165 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c4fc59f6-e6b2-474a-8406-ac5e2dbea8f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 NetworkManager[48965]: <info>  [1765005182.1866] device (tapc0aeacae-e0): carrier: link connected
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.191 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[579de17f-8c38-4fd9-89ae-0b2e3248fcfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.206 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[90544cdc-e2bf-4625-a2ce-65bf7f0cd13c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0aeacae-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:b3:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545477, 'reachable_time': 28203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290685, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.222 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fd75030f-d836-4e74-bbcb-c93941dfeeef]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:b3f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545477, 'tstamp': 545477}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290686, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.237 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e07c50b2-87ac-4f1a-9868-5073753c0da0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0aeacae-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:b3:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545477, 'reachable_time': 28203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290687, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.264 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[35300025-0111-4c3f-9e6d-cee126ec19f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:02.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.317 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bceeeafb-f686-4da4-8602-9300973c47a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.318 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0aeacae-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.319 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.319 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0aeacae-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:02 compute-0 NetworkManager[48965]: <info>  [1765005182.3217] manager: (tapc0aeacae-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Dec 06 07:13:02 compute-0 kernel: tapc0aeacae-e0: entered promiscuous mode
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.323 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc0aeacae-e0, col_values=(('external_ids', {'iface-id': 'f88eb5d2-a701-4b34-8226-834c175af63c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:02 compute-0 ovn_controller[147168]: 2025-12-06T07:13:02Z|00153|binding|INFO|Releasing lport f88eb5d2-a701-4b34-8226-834c175af63c from this chassis (sb_readonly=0)
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.334 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.342 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.343 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c0aeacae-e53d-425f-88e7-942ba0ab660c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c0aeacae-e53d-425f-88e7-942ba0ab660c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.344 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bff5995b-03d8-4faa-ba58-af0bccf5eb36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.345 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.345 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-c0aeacae-e53d-425f-88e7-942ba0ab660c
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/c0aeacae-e53d-425f-88e7-942ba0ab660c.pid.haproxy
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID c0aeacae-e53d-425f-88e7-942ba0ab660c
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:13:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:02.346 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'env', 'PROCESS_TAG=haproxy-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c0aeacae-e53d-425f-88e7-942ba0ab660c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.456 251996 DEBUG nova.compute.manager [req-387a2a74-87c6-4589-9511-e942cc89a08c req-271e3528-ced5-4d54-9b9e-c5316a827569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Received event network-vif-plugged-7c232208-466f-4670-b4dd-df7f111c3185 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.457 251996 DEBUG oslo_concurrency.lockutils [req-387a2a74-87c6-4589-9511-e942cc89a08c req-271e3528-ced5-4d54-9b9e-c5316a827569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.458 251996 DEBUG oslo_concurrency.lockutils [req-387a2a74-87c6-4589-9511-e942cc89a08c req-271e3528-ced5-4d54-9b9e-c5316a827569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.458 251996 DEBUG oslo_concurrency.lockutils [req-387a2a74-87c6-4589-9511-e942cc89a08c req-271e3528-ced5-4d54-9b9e-c5316a827569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.458 251996 DEBUG nova.compute.manager [req-387a2a74-87c6-4589-9511-e942cc89a08c req-271e3528-ced5-4d54-9b9e-c5316a827569 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Processing event network-vif-plugged-7c232208-466f-4670-b4dd-df7f111c3185 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:13:02 compute-0 podman[290720]: 2025-12-06 07:13:02.706504034 +0000 UTC m=+0.027214228 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:13:02 compute-0 podman[290720]: 2025-12-06 07:13:02.864809619 +0000 UTC m=+0.185519803 container create 9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.919 251996 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.921 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005182.92047, fa49fe6a-09de-40f5-9afe-8b1c9f15f489 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.921 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] VM Started (Lifecycle Event)
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.925 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.928 251996 INFO nova.virt.libvirt.driver [-] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Instance spawned successfully.
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.928 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.949 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.956 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:13:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.961 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.962 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.962 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.963 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.964 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:02 compute-0 nova_compute[251992]: 2025-12-06 07:13:02.964 251996 DEBUG nova.virt.libvirt.driver [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:02 compute-0 systemd[1]: Started libpod-conmon-9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c.scope.
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.021 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.021 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005182.9205358, fa49fe6a-09de-40f5-9afe-8b1c9f15f489 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.022 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] VM Paused (Lifecycle Event)
Dec 06 07:13:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.025 251996 INFO nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Took 8.71 seconds to spawn the instance on the hypervisor.
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.025 251996 DEBUG nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fa4ec4dc51d63e54586e38c971e30c17b0246c66c160927f5b710f3506ccdc4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.040 251996 DEBUG nova.storage.rbd_utils [None req-9b62c58f-4671-4e7f-9722-0cbba836a129 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] removing snapshot(dda3012ebe344133a912776fc3107abc) on rbd image(4e5a488b-67a2-44eb-a8b5-e963515206c9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.048 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.051 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005182.9243598, fa49fe6a-09de-40f5-9afe-8b1c9f15f489 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.051 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] VM Resumed (Lifecycle Event)
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.082 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.085 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.102 251996 INFO nova.compute.manager [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Took 10.05 seconds to build instance.
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.123 251996 DEBUG oslo_concurrency.lockutils [None req-792f3fb6-a9f6-4a1d-b034-b4d853def3c2 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:03 compute-0 ceph-mon[74339]: pgmap v1632: 305 pgs: 305 active+clean; 352 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.5 MiB/s wr, 324 op/s
Dec 06 07:13:03 compute-0 podman[290720]: 2025-12-06 07:13:03.147097721 +0000 UTC m=+0.467807915 container init 9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:13:03 compute-0 podman[290720]: 2025-12-06 07:13:03.152197573 +0000 UTC m=+0.472907747 container start 9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 07:13:03 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[290791]: [NOTICE]   (290800) : New worker (290802) forked
Dec 06 07:13:03 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[290791]: [NOTICE]   (290800) : Loading success.
Dec 06 07:13:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 387 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 7.9 MiB/s wr, 385 op/s
Dec 06 07:13:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Dec 06 07:13:03 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.409 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:03 compute-0 nova_compute[251992]: 2025-12-06 07:13:03.671 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:03.820 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:03.821 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:03.822 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:04.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:04.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Dec 06 07:13:04 compute-0 ceph-mon[74339]: pgmap v1633: 305 pgs: 305 active+clean; 387 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 7.9 MiB/s wr, 385 op/s
Dec 06 07:13:04 compute-0 ceph-mon[74339]: osdmap e236: 3 total, 3 up, 3 in
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.572 251996 DEBUG nova.compute.manager [req-203b58e5-7a6b-4d42-ac3c-acf9c9424696 req-1143e001-292e-42e3-b514-303e4ab45803 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Received event network-vif-plugged-7c232208-466f-4670-b4dd-df7f111c3185 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.572 251996 DEBUG oslo_concurrency.lockutils [req-203b58e5-7a6b-4d42-ac3c-acf9c9424696 req-1143e001-292e-42e3-b514-303e4ab45803 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.572 251996 DEBUG oslo_concurrency.lockutils [req-203b58e5-7a6b-4d42-ac3c-acf9c9424696 req-1143e001-292e-42e3-b514-303e4ab45803 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.573 251996 DEBUG oslo_concurrency.lockutils [req-203b58e5-7a6b-4d42-ac3c-acf9c9424696 req-1143e001-292e-42e3-b514-303e4ab45803 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.573 251996 DEBUG nova.compute.manager [req-203b58e5-7a6b-4d42-ac3c-acf9c9424696 req-1143e001-292e-42e3-b514-303e4ab45803 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] No waiting events found dispatching network-vif-plugged-7c232208-466f-4670-b4dd-df7f111c3185 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.573 251996 WARNING nova.compute.manager [req-203b58e5-7a6b-4d42-ac3c-acf9c9424696 req-1143e001-292e-42e3-b514-303e4ab45803 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Received unexpected event network-vif-plugged-7c232208-466f-4670-b4dd-df7f111c3185 for instance with vm_state active and task_state None.
Dec 06 07:13:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Dec 06 07:13:04 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.822 251996 DEBUG nova.storage.rbd_utils [None req-9b62c58f-4671-4e7f-9722-0cbba836a129 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] creating snapshot(snap) on rbd image(78b95c69-8ea4-40d9-832c-1e5db1eba4f8) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.916 251996 DEBUG oslo_concurrency.lockutils [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.917 251996 DEBUG oslo_concurrency.lockutils [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.917 251996 DEBUG oslo_concurrency.lockutils [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.917 251996 DEBUG oslo_concurrency.lockutils [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.918 251996 DEBUG oslo_concurrency.lockutils [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.919 251996 INFO nova.compute.manager [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Terminating instance
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.920 251996 DEBUG nova.compute.manager [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:13:04 compute-0 kernel: tap7c232208-46 (unregistering): left promiscuous mode
Dec 06 07:13:04 compute-0 NetworkManager[48965]: <info>  [1765005184.9559] device (tap7c232208-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.964 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:04 compute-0 ovn_controller[147168]: 2025-12-06T07:13:04Z|00154|binding|INFO|Releasing lport 7c232208-466f-4670-b4dd-df7f111c3185 from this chassis (sb_readonly=0)
Dec 06 07:13:04 compute-0 ovn_controller[147168]: 2025-12-06T07:13:04Z|00155|binding|INFO|Setting lport 7c232208-466f-4670-b4dd-df7f111c3185 down in Southbound
Dec 06 07:13:04 compute-0 ovn_controller[147168]: 2025-12-06T07:13:04Z|00156|binding|INFO|Removing iface tap7c232208-46 ovn-installed in OVS
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.969 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:04.975 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:d8:09 10.100.0.14'], port_security=['fa:16:3e:ba:d8:09 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fa49fe6a-09de-40f5-9afe-8b1c9f15f489', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de09de98b3b1445f88b6094b6aac4a30', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c3e308f8-65fc-402f-96ed-2039201b95ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5d88f71-5dee-4f84-b30d-05c6ecea010a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=7c232208-466f-4670-b4dd-df7f111c3185) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:04.976 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 7c232208-466f-4670-b4dd-df7f111c3185 in datapath c0aeacae-e53d-425f-88e7-942ba0ab660c unbound from our chassis
Dec 06 07:13:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:04.977 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c0aeacae-e53d-425f-88e7-942ba0ab660c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:13:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:04.978 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c20b2d16-3abb-4a60-8693-be66b1031408]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:04.979 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c namespace which is not needed anymore
Dec 06 07:13:04 compute-0 nova_compute[251992]: 2025-12-06 07:13:04.986 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:04 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Dec 06 07:13:04 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d0000003c.scope: Consumed 2.865s CPU time.
Dec 06 07:13:04 compute-0 systemd-machined[212986]: Machine qemu-25-instance-0000003c terminated.
Dec 06 07:13:05 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[290791]: [NOTICE]   (290800) : haproxy version is 2.8.14-c23fe91
Dec 06 07:13:05 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[290791]: [NOTICE]   (290800) : path to executable is /usr/sbin/haproxy
Dec 06 07:13:05 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[290791]: [WARNING]  (290800) : Exiting Master process...
Dec 06 07:13:05 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[290791]: [ALERT]    (290800) : Current worker (290802) exited with code 143 (Terminated)
Dec 06 07:13:05 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[290791]: [WARNING]  (290800) : All workers exited. Exiting... (0)
Dec 06 07:13:05 compute-0 podman[290830]: 2025-12-06 07:13:05.091775629 +0000 UTC m=+0.110089234 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:13:05 compute-0 systemd[1]: libpod-9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c.scope: Deactivated successfully.
Dec 06 07:13:05 compute-0 conmon[290791]: conmon 9aca448af5ecbaab2e4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c.scope/container/memory.events
Dec 06 07:13:05 compute-0 podman[290872]: 2025-12-06 07:13:05.102194758 +0000 UTC m=+0.044180679 container died 9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 07:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c-userdata-shm.mount: Deactivated successfully.
Dec 06 07:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fa4ec4dc51d63e54586e38c971e30c17b0246c66c160927f5b710f3506ccdc4-merged.mount: Deactivated successfully.
Dec 06 07:13:05 compute-0 podman[290872]: 2025-12-06 07:13:05.146408449 +0000 UTC m=+0.088394370 container cleanup 9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.156 251996 INFO nova.virt.libvirt.driver [-] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Instance destroyed successfully.
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.156 251996 DEBUG nova.objects.instance [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lazy-loading 'resources' on Instance uuid fa49fe6a-09de-40f5-9afe-8b1c9f15f489 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:05 compute-0 systemd[1]: libpod-conmon-9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c.scope: Deactivated successfully.
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.168 251996 DEBUG nova.virt.libvirt.vif [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:12:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-481293432',display_name='tempest-tempest.common.compute-instance-481293432-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-481293432-2',id=60,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-12-06T07:13:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de09de98b3b1445f88b6094b6aac4a30',ramdisk_id='',reservation_id='r-tgs77gw2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1199242675',owner_user_name='tempest-MultipleCreateTestJSON-1199242675-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:13:03Z,user_data=None,user_id='03fb2817729e4b71932023a7637c6244',uuid=fa49fe6a-09de-40f5-9afe-8b1c9f15f489,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7c232208-466f-4670-b4dd-df7f111c3185", "address": "fa:16:3e:ba:d8:09", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c232208-46", "ovs_interfaceid": "7c232208-466f-4670-b4dd-df7f111c3185", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.169 251996 DEBUG nova.network.os_vif_util [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converting VIF {"id": "7c232208-466f-4670-b4dd-df7f111c3185", "address": "fa:16:3e:ba:d8:09", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c232208-46", "ovs_interfaceid": "7c232208-466f-4670-b4dd-df7f111c3185", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.170 251996 DEBUG nova.network.os_vif_util [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:d8:09,bridge_name='br-int',has_traffic_filtering=True,id=7c232208-466f-4670-b4dd-df7f111c3185,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c232208-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.170 251996 DEBUG os_vif [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:d8:09,bridge_name='br-int',has_traffic_filtering=True,id=7c232208-466f-4670-b4dd-df7f111c3185,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c232208-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.174 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.174 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c232208-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.176 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.177 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.179 251996 INFO os_vif [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:d8:09,bridge_name='br-int',has_traffic_filtering=True,id=7c232208-466f-4670-b4dd-df7f111c3185,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c232208-46')
Dec 06 07:13:05 compute-0 podman[290907]: 2025-12-06 07:13:05.215220093 +0000 UTC m=+0.043305786 container remove 9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:13:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:05.221 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[80851ff1-377f-4791-b519-f1a307025e0a]: (4, ('Sat Dec  6 07:13:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c (9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c)\n9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c\nSat Dec  6 07:13:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c (9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c)\n9aca448af5ecbaab2e4a7b2db12a84ac35f6e9787c59d3c19f47247014658f3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:05.224 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e39ac2ca-6a12-4aff-96b2-fa9bd29d414a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:05.225 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0aeacae-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.226 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:05 compute-0 kernel: tapc0aeacae-e0: left promiscuous mode
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.228 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:05.230 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1e78d42f-7d50-422a-b490-b8efd06d2a10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-0 nova_compute[251992]: 2025-12-06 07:13:05.245 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 393 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 3.6 MiB/s wr, 364 op/s
Dec 06 07:13:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:05.254 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1f97e3-a86d-48b0-9183-14fe75689b59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:05.258 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f7cf61-403f-4ca9-b5c3-63cd86fa7767]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:05.273 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ad89adb9-fc50-4c05-9167-ca528e58b1c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545470, 'reachable_time': 21203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290945, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:05.276 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:13:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:05.276 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9ad01d-7b29-4279-b639-7d2452044283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:05 compute-0 systemd[1]: run-netns-ovnmeta\x2dc0aeacae\x2de53d\x2d425f\x2d88e7\x2d942ba0ab660c.mount: Deactivated successfully.
Dec 06 07:13:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Dec 06 07:13:05 compute-0 ceph-mon[74339]: osdmap e237: 3 total, 3 up, 3 in
Dec 06 07:13:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:06.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:06.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:06 compute-0 nova_compute[251992]: 2025-12-06 07:13:06.772 251996 DEBUG nova.compute.manager [req-32f8ac7d-e4fc-48cb-9133-38b8167b39bb req-ead95652-f7a8-431a-a710-f7625e80a610 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Received event network-vif-unplugged-7c232208-466f-4670-b4dd-df7f111c3185 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:06 compute-0 nova_compute[251992]: 2025-12-06 07:13:06.772 251996 DEBUG oslo_concurrency.lockutils [req-32f8ac7d-e4fc-48cb-9133-38b8167b39bb req-ead95652-f7a8-431a-a710-f7625e80a610 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:06 compute-0 nova_compute[251992]: 2025-12-06 07:13:06.772 251996 DEBUG oslo_concurrency.lockutils [req-32f8ac7d-e4fc-48cb-9133-38b8167b39bb req-ead95652-f7a8-431a-a710-f7625e80a610 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:06 compute-0 nova_compute[251992]: 2025-12-06 07:13:06.773 251996 DEBUG oslo_concurrency.lockutils [req-32f8ac7d-e4fc-48cb-9133-38b8167b39bb req-ead95652-f7a8-431a-a710-f7625e80a610 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:06 compute-0 nova_compute[251992]: 2025-12-06 07:13:06.773 251996 DEBUG nova.compute.manager [req-32f8ac7d-e4fc-48cb-9133-38b8167b39bb req-ead95652-f7a8-431a-a710-f7625e80a610 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] No waiting events found dispatching network-vif-unplugged-7c232208-466f-4670-b4dd-df7f111c3185 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:06 compute-0 nova_compute[251992]: 2025-12-06 07:13:06.773 251996 DEBUG nova.compute.manager [req-32f8ac7d-e4fc-48cb-9133-38b8167b39bb req-ead95652-f7a8-431a-a710-f7625e80a610 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Received event network-vif-unplugged-7c232208-466f-4670-b4dd-df7f111c3185 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:13:06 compute-0 nova_compute[251992]: 2025-12-06 07:13:06.773 251996 DEBUG nova.compute.manager [req-32f8ac7d-e4fc-48cb-9133-38b8167b39bb req-ead95652-f7a8-431a-a710-f7625e80a610 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Received event network-vif-plugged-7c232208-466f-4670-b4dd-df7f111c3185 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:06 compute-0 nova_compute[251992]: 2025-12-06 07:13:06.773 251996 DEBUG oslo_concurrency.lockutils [req-32f8ac7d-e4fc-48cb-9133-38b8167b39bb req-ead95652-f7a8-431a-a710-f7625e80a610 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:06 compute-0 nova_compute[251992]: 2025-12-06 07:13:06.774 251996 DEBUG oslo_concurrency.lockutils [req-32f8ac7d-e4fc-48cb-9133-38b8167b39bb req-ead95652-f7a8-431a-a710-f7625e80a610 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:06 compute-0 nova_compute[251992]: 2025-12-06 07:13:06.774 251996 DEBUG oslo_concurrency.lockutils [req-32f8ac7d-e4fc-48cb-9133-38b8167b39bb req-ead95652-f7a8-431a-a710-f7625e80a610 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:06 compute-0 nova_compute[251992]: 2025-12-06 07:13:06.774 251996 DEBUG nova.compute.manager [req-32f8ac7d-e4fc-48cb-9133-38b8167b39bb req-ead95652-f7a8-431a-a710-f7625e80a610 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] No waiting events found dispatching network-vif-plugged-7c232208-466f-4670-b4dd-df7f111c3185 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:06 compute-0 nova_compute[251992]: 2025-12-06 07:13:06.774 251996 WARNING nova.compute.manager [req-32f8ac7d-e4fc-48cb-9133-38b8167b39bb req-ead95652-f7a8-431a-a710-f7625e80a610 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Received unexpected event network-vif-plugged-7c232208-466f-4670-b4dd-df7f111c3185 for instance with vm_state active and task_state deleting.
Dec 06 07:13:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 359 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 2.7 MiB/s wr, 350 op/s
Dec 06 07:13:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:08.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:08.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:08 compute-0 nova_compute[251992]: 2025-12-06 07:13:08.408 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 305 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 2.9 MiB/s wr, 400 op/s
Dec 06 07:13:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:10.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:10 compute-0 nova_compute[251992]: 2025-12-06 07:13:10.176 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:10.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:10 compute-0 podman[290951]: 2025-12-06 07:13:10.40090085 +0000 UTC m=+0.058832348 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 06 07:13:10 compute-0 podman[290950]: 2025-12-06 07:13:10.447894628 +0000 UTC m=+0.098740169 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 07:13:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Dec 06 07:13:10 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Dec 06 07:13:11 compute-0 ceph-mon[74339]: pgmap v1636: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 393 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 3.6 MiB/s wr, 364 op/s
Dec 06 07:13:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 266 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.7 MiB/s wr, 306 op/s
Dec 06 07:13:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:12.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver [None req-9b62c58f-4671-4e7f-9722-0cbba836a129 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 78b95c69-8ea4-40d9-832c-1e5db1eba4f8 could not be found.
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     image = self._client.call(
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 78b95c69-8ea4-40d9-832c-1e5db1eba4f8
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver 
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver 
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     image = self._client.call(
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 78b95c69-8ea4-40d9-832c-1e5db1eba4f8 could not be found.
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.219 251996 ERROR nova.virt.libvirt.driver 
Dec 06 07:13:12 compute-0 nova_compute[251992]: 2025-12-06 07:13:12.272 251996 DEBUG nova.storage.rbd_utils [None req-9b62c58f-4671-4e7f-9722-0cbba836a129 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] removing snapshot(snap) on rbd image(78b95c69-8ea4-40d9-832c-1e5db1eba4f8) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:13:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:12.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:12 compute-0 ceph-mon[74339]: pgmap v1637: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 359 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 2.7 MiB/s wr, 350 op/s
Dec 06 07:13:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/247600855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2172624807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:13:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2172624807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:13:12 compute-0 ceph-mon[74339]: pgmap v1638: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 305 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 2.9 MiB/s wr, 400 op/s
Dec 06 07:13:12 compute-0 ceph-mon[74339]: osdmap e238: 3 total, 3 up, 3 in
Dec 06 07:13:12 compute-0 ceph-mon[74339]: pgmap v1640: 305 pgs: 305 active+clean; 266 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.7 MiB/s wr, 306 op/s
Dec 06 07:13:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:13:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:13:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:13:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:13:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:13:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:13:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 271 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.0 MiB/s wr, 329 op/s
Dec 06 07:13:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Dec 06 07:13:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Dec 06 07:13:13 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Dec 06 07:13:13 compute-0 nova_compute[251992]: 2025-12-06 07:13:13.412 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:13 compute-0 nova_compute[251992]: 2025-12-06 07:13:13.716 251996 INFO nova.virt.libvirt.driver [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Deleting instance files /var/lib/nova/instances/fa49fe6a-09de-40f5-9afe-8b1c9f15f489_del
Dec 06 07:13:13 compute-0 nova_compute[251992]: 2025-12-06 07:13:13.717 251996 INFO nova.virt.libvirt.driver [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Deletion of /var/lib/nova/instances/fa49fe6a-09de-40f5-9afe-8b1c9f15f489_del complete
Dec 06 07:13:13 compute-0 nova_compute[251992]: 2025-12-06 07:13:13.812 251996 INFO nova.compute.manager [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Took 8.89 seconds to destroy the instance on the hypervisor.
Dec 06 07:13:13 compute-0 nova_compute[251992]: 2025-12-06 07:13:13.813 251996 DEBUG oslo.service.loopingcall [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:13:13 compute-0 nova_compute[251992]: 2025-12-06 07:13:13.813 251996 DEBUG nova.compute.manager [-] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:13:13 compute-0 nova_compute[251992]: 2025-12-06 07:13:13.814 251996 DEBUG nova.network.neutron [-] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:13:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:14.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:14 compute-0 ovn_controller[147168]: 2025-12-06T07:13:14Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:54:98:a6 10.100.0.11
Dec 06 07:13:14 compute-0 ovn_controller[147168]: 2025-12-06T07:13:14Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:54:98:a6 10.100.0.11
Dec 06 07:13:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:14.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:14 compute-0 nova_compute[251992]: 2025-12-06 07:13:14.315 251996 WARNING nova.compute.manager [None req-9b62c58f-4671-4e7f-9722-0cbba836a129 a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Image not found during snapshot: nova.exception.ImageNotFound: Image 78b95c69-8ea4-40d9-832c-1e5db1eba4f8 could not be found.
Dec 06 07:13:14 compute-0 ceph-mon[74339]: pgmap v1641: 305 pgs: 305 active+clean; 271 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.0 MiB/s wr, 329 op/s
Dec 06 07:13:14 compute-0 ceph-mon[74339]: osdmap e239: 3 total, 3 up, 3 in
Dec 06 07:13:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Dec 06 07:13:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Dec 06 07:13:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Dec 06 07:13:15 compute-0 nova_compute[251992]: 2025-12-06 07:13:15.178 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 236 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 6.5 MiB/s wr, 208 op/s
Dec 06 07:13:15 compute-0 ceph-mon[74339]: osdmap e240: 3 total, 3 up, 3 in
Dec 06 07:13:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1671056831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1549486840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2367303753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.051 251996 DEBUG nova.network.neutron [-] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.146 251996 INFO nova.compute.manager [-] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Took 2.33 seconds to deallocate network for instance.
Dec 06 07:13:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:16.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.232 251996 DEBUG oslo_concurrency.lockutils [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.233 251996 DEBUG oslo_concurrency.lockutils [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:16.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.312 251996 DEBUG oslo_concurrency.processutils [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.523 251996 DEBUG oslo_concurrency.lockutils [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "4e5a488b-67a2-44eb-a8b5-e963515206c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.530 251996 DEBUG oslo_concurrency.lockutils [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.531 251996 DEBUG oslo_concurrency.lockutils [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.531 251996 DEBUG oslo_concurrency.lockutils [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.531 251996 DEBUG oslo_concurrency.lockutils [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.533 251996 INFO nova.compute.manager [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Terminating instance
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.534 251996 DEBUG nova.compute.manager [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:13:16 compute-0 kernel: tapb4ce6cd9-a8 (unregistering): left promiscuous mode
Dec 06 07:13:16 compute-0 NetworkManager[48965]: <info>  [1765005196.5972] device (tapb4ce6cd9-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.604 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:16 compute-0 ovn_controller[147168]: 2025-12-06T07:13:16Z|00157|binding|INFO|Releasing lport b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 from this chassis (sb_readonly=0)
Dec 06 07:13:16 compute-0 ovn_controller[147168]: 2025-12-06T07:13:16Z|00158|binding|INFO|Setting lport b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 down in Southbound
Dec 06 07:13:16 compute-0 ovn_controller[147168]: 2025-12-06T07:13:16Z|00159|binding|INFO|Removing iface tapb4ce6cd9-a8 ovn-installed in OVS
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.608 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.629 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:16 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Dec 06 07:13:16 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000003a.scope: Consumed 14.583s CPU time.
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.645 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:98:a6 10.100.0.11'], port_security=['fa:16:3e:54:98:a6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '4e5a488b-67a2-44eb-a8b5-e963515206c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49680c77-2db5-4d0f-bd5b-08899440c38e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c297e84c3a9f48a9a82aebc9e5ade875', 'neutron:revision_number': '4', 'neutron:security_group_ids': '44ffcb05-d145-4d07-b800-cc9e3941da49', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2602f87-ec0b-4d1c-8f8b-eee8bbcfddb2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=b4ce6cd9-a8d3-4b54-b552-679a64be6ca3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.646 158118 INFO neutron.agent.ovn.metadata.agent [-] Port b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 in datapath 49680c77-2db5-4d0f-bd5b-08899440c38e unbound from our chassis
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.650 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 49680c77-2db5-4d0f-bd5b-08899440c38e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:13:16 compute-0 systemd-machined[212986]: Machine qemu-24-instance-0000003a terminated.
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.652 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[15ba0183-d6bf-452c-97bd-56c308d1f69c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.653 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e namespace which is not needed anymore
Dec 06 07:13:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239620160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.754 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.767 251996 INFO nova.virt.libvirt.driver [-] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Instance destroyed successfully.
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.768 251996 DEBUG nova.objects.instance [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lazy-loading 'resources' on Instance uuid 4e5a488b-67a2-44eb-a8b5-e963515206c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.772 251996 DEBUG oslo_concurrency.processutils [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:16 compute-0 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[290166]: [NOTICE]   (290191) : haproxy version is 2.8.14-c23fe91
Dec 06 07:13:16 compute-0 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[290166]: [NOTICE]   (290191) : path to executable is /usr/sbin/haproxy
Dec 06 07:13:16 compute-0 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[290166]: [WARNING]  (290191) : Exiting Master process...
Dec 06 07:13:16 compute-0 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[290166]: [ALERT]    (290191) : Current worker (290197) exited with code 143 (Terminated)
Dec 06 07:13:16 compute-0 neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e[290166]: [WARNING]  (290191) : All workers exited. Exiting... (0)
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.782 251996 DEBUG nova.compute.provider_tree [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:16 compute-0 systemd[1]: libpod-0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c.scope: Deactivated successfully.
Dec 06 07:13:16 compute-0 podman[291073]: 2025-12-06 07:13:16.78983053 +0000 UTC m=+0.053782197 container died 0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.803 251996 DEBUG nova.virt.libvirt.vif [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:12:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1564803638',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1564803638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1564803638',id=58,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:12:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c297e84c3a9f48a9a82aebc9e5ade875',ramdisk_id='',reservation_id='r-7kt6z9ov',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-324135674',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-324135674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:13:14Z,user_data=None,user_id='a1ed181a1103481fa4d0b29ce1009dca',uuid=4e5a488b-67a2-44eb-a8b5-e963515206c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "address": "fa:16:3e:54:98:a6", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4ce6cd9-a8", "ovs_interfaceid": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.803 251996 DEBUG nova.network.os_vif_util [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converting VIF {"id": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "address": "fa:16:3e:54:98:a6", "network": {"id": "49680c77-2db5-4d0f-bd5b-08899440c38e", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-432299576-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c297e84c3a9f48a9a82aebc9e5ade875", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4ce6cd9-a8", "ovs_interfaceid": "b4ce6cd9-a8d3-4b54-b552-679a64be6ca3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.804 251996 DEBUG nova.network.os_vif_util [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:98:a6,bridge_name='br-int',has_traffic_filtering=True,id=b4ce6cd9-a8d3-4b54-b552-679a64be6ca3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4ce6cd9-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.805 251996 DEBUG os_vif [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:98:a6,bridge_name='br-int',has_traffic_filtering=True,id=b4ce6cd9-a8d3-4b54-b552-679a64be6ca3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4ce6cd9-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.806 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.807 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ce6cd9-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.808 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.810 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c-userdata-shm.mount: Deactivated successfully.
Dec 06 07:13:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee82db74e67ca950b27987fffe23995a3235dfede14013ad7ebc30f27bec0db9-merged.mount: Deactivated successfully.
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.818 251996 INFO os_vif [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:98:a6,bridge_name='br-int',has_traffic_filtering=True,id=b4ce6cd9-a8d3-4b54-b552-679a64be6ca3,network=Network(49680c77-2db5-4d0f-bd5b-08899440c38e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4ce6cd9-a8')
Dec 06 07:13:16 compute-0 podman[291073]: 2025-12-06 07:13:16.826335216 +0000 UTC m=+0.090286873 container cleanup 0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.841 251996 DEBUG nova.scheduler.client.report [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:13:16 compute-0 systemd[1]: libpod-conmon-0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c.scope: Deactivated successfully.
Dec 06 07:13:16 compute-0 podman[291128]: 2025-12-06 07:13:16.88797033 +0000 UTC m=+0.040101736 container remove 0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.893 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[eee74b23-7014-4719-a998-70a62d18a705]: (4, ('Sat Dec  6 07:13:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e (0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c)\n0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c\nSat Dec  6 07:13:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e (0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c)\n0d6983f60ef974cfbfaa53721d77d6f883f81d332aac9d2b3767adfe0bb9188c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.895 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[80e5166d-4537-4b3d-b473-15ea1768c075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.896 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49680c77-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:16 compute-0 kernel: tap49680c77-20: left promiscuous mode
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.898 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:16 compute-0 ceph-mon[74339]: pgmap v1644: 305 pgs: 305 active+clean; 236 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 6.5 MiB/s wr, 208 op/s
Dec 06 07:13:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2437772098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3239620160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.914 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.916 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[67adc2ea-6ea2-42ef-bf2a-4132c33d8ddc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.925 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[44d7bb59-a6c2-4305-a022-803886767058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.926 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6c768569-d803-4ecb-8d84-219d9789cb7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.933 251996 DEBUG nova.compute.manager [req-717a0d57-961a-4a54-9997-3a27c71aea88 req-9168a085-7848-41f9-86be-e4d3f921a4c7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Received event network-vif-deleted-7c232208-466f-4670-b4dd-df7f111c3185 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.943 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[443ded49-037f-4a2e-8238-390712becf7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 544635, 'reachable_time': 33095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291148, 'error': None, 'target': 'ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d49680c77\x2d2db5\x2d4d0f\x2dbd5b\x2d08899440c38e.mount: Deactivated successfully.
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.948 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-49680c77-2db5-4d0f-bd5b-08899440c38e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:13:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:16.949 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[4e685b3c-d466-4133-8cd3-b4af4fd63db3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:16 compute-0 nova_compute[251992]: 2025-12-06 07:13:16.962 251996 DEBUG oslo_concurrency.lockutils [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:17 compute-0 nova_compute[251992]: 2025-12-06 07:13:17.034 251996 INFO nova.scheduler.client.report [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Deleted allocations for instance fa49fe6a-09de-40f5-9afe-8b1c9f15f489
Dec 06 07:13:17 compute-0 nova_compute[251992]: 2025-12-06 07:13:17.234 251996 DEBUG oslo_concurrency.lockutils [None req-33525dbd-226b-42f0-8c09-24128425f651 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "fa49fe6a-09de-40f5-9afe-8b1c9f15f489" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 237 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 4.9 MiB/s wr, 212 op/s
Dec 06 07:13:17 compute-0 nova_compute[251992]: 2025-12-06 07:13:17.707 251996 INFO nova.virt.libvirt.driver [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Deleting instance files /var/lib/nova/instances/4e5a488b-67a2-44eb-a8b5-e963515206c9_del
Dec 06 07:13:17 compute-0 nova_compute[251992]: 2025-12-06 07:13:17.708 251996 INFO nova.virt.libvirt.driver [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Deletion of /var/lib/nova/instances/4e5a488b-67a2-44eb-a8b5-e963515206c9_del complete
Dec 06 07:13:17 compute-0 nova_compute[251992]: 2025-12-06 07:13:17.811 251996 INFO nova.compute.manager [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Took 1.28 seconds to destroy the instance on the hypervisor.
Dec 06 07:13:17 compute-0 nova_compute[251992]: 2025-12-06 07:13:17.812 251996 DEBUG oslo.service.loopingcall [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:13:17 compute-0 nova_compute[251992]: 2025-12-06 07:13:17.812 251996 DEBUG nova.compute.manager [-] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:13:17 compute-0 nova_compute[251992]: 2025-12-06 07:13:17.812 251996 DEBUG nova.network.neutron [-] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:13:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1931217294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:18.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:18.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:13:18
Dec 06 07:13:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:13:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:13:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.control', '.mgr', 'images']
Dec 06 07:13:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:13:18 compute-0 nova_compute[251992]: 2025-12-06 07:13:18.412 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.156 251996 DEBUG nova.compute.manager [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Received event network-vif-unplugged-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.156 251996 DEBUG oslo_concurrency.lockutils [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.156 251996 DEBUG oslo_concurrency.lockutils [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.156 251996 DEBUG oslo_concurrency.lockutils [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.157 251996 DEBUG nova.compute.manager [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] No waiting events found dispatching network-vif-unplugged-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.157 251996 DEBUG nova.compute.manager [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Received event network-vif-unplugged-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.157 251996 DEBUG nova.compute.manager [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Received event network-vif-plugged-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.157 251996 DEBUG oslo_concurrency.lockutils [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.157 251996 DEBUG oslo_concurrency.lockutils [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.158 251996 DEBUG oslo_concurrency.lockutils [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.158 251996 DEBUG nova.compute.manager [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] No waiting events found dispatching network-vif-plugged-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.158 251996 WARNING nova.compute.manager [req-6746c8bd-b03e-4bd3-96a8-ab67b79d4484 req-7bbeefef-6a9b-4f4f-bec8-a6cc47048e93 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Received unexpected event network-vif-plugged-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 for instance with vm_state active and task_state deleting.
Dec 06 07:13:19 compute-0 ceph-mon[74339]: pgmap v1645: 305 pgs: 305 active+clean; 237 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 4.9 MiB/s wr, 212 op/s
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.212 251996 DEBUG nova.network.neutron [-] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 151 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 4.4 MiB/s wr, 276 op/s
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.280 251996 INFO nova.compute.manager [-] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Took 1.47 seconds to deallocate network for instance.
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.415 251996 DEBUG oslo_concurrency.lockutils [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.417 251996 DEBUG oslo_concurrency.lockutils [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.468 251996 DEBUG oslo_concurrency.processutils [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737763718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.937 251996 DEBUG oslo_concurrency.processutils [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.942 251996 DEBUG nova.compute.provider_tree [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:19 compute-0 nova_compute[251992]: 2025-12-06 07:13:19.982 251996 DEBUG nova.scheduler.client.report [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:13:20 compute-0 nova_compute[251992]: 2025-12-06 07:13:20.048 251996 DEBUG oslo_concurrency.lockutils [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:20 compute-0 nova_compute[251992]: 2025-12-06 07:13:20.111 251996 INFO nova.scheduler.client.report [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Deleted allocations for instance 4e5a488b-67a2-44eb-a8b5-e963515206c9
Dec 06 07:13:20 compute-0 sudo[291173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:13:20 compute-0 sudo[291173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:20 compute-0 sudo[291173]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:20 compute-0 nova_compute[251992]: 2025-12-06 07:13:20.155 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005185.1549044, fa49fe6a-09de-40f5-9afe-8b1c9f15f489 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:20 compute-0 nova_compute[251992]: 2025-12-06 07:13:20.155 251996 INFO nova.compute.manager [-] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] VM Stopped (Lifecycle Event)
Dec 06 07:13:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:20.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:20 compute-0 sudo[291198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:13:20 compute-0 sudo[291198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:20 compute-0 sudo[291198]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:20 compute-0 nova_compute[251992]: 2025-12-06 07:13:20.207 251996 DEBUG nova.compute.manager [None req-6cc33019-475e-40a0-9805-9d9cb8824f58 - - - - - -] [instance: fa49fe6a-09de-40f5-9afe-8b1c9f15f489] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:20 compute-0 nova_compute[251992]: 2025-12-06 07:13:20.259 251996 DEBUG oslo_concurrency.lockutils [None req-60be151b-01c9-47f4-88cc-0e5bfddaa4ec a1ed181a1103481fa4d0b29ce1009dca c297e84c3a9f48a9a82aebc9e5ade875 - - default default] Lock "4e5a488b-67a2-44eb-a8b5-e963515206c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:20 compute-0 ceph-mon[74339]: pgmap v1646: 305 pgs: 305 active+clean; 151 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 4.4 MiB/s wr, 276 op/s
Dec 06 07:13:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1737763718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:20.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 282 op/s
Dec 06 07:13:21 compute-0 nova_compute[251992]: 2025-12-06 07:13:21.275 251996 DEBUG nova.compute.manager [req-eaa4e298-4d43-4c9f-bf24-1e9481c3c6ca req-9606e687-def4-440a-8ba2-8cd34cc67e0b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Received event network-vif-deleted-b4ce6cd9-a8d3-4b54-b552-679a64be6ca3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:21 compute-0 nova_compute[251992]: 2025-12-06 07:13:21.811 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:22.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:22.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:22 compute-0 nova_compute[251992]: 2025-12-06 07:13:22.740 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "6f99df47-6b1f-403c-995d-e8f72597bf58" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:22 compute-0 nova_compute[251992]: 2025-12-06 07:13:22.741 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:22 compute-0 nova_compute[251992]: 2025-12-06 07:13:22.761 251996 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:13:22 compute-0 ceph-mon[74339]: pgmap v1647: 305 pgs: 305 active+clean; 88 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 282 op/s
Dec 06 07:13:22 compute-0 nova_compute[251992]: 2025-12-06 07:13:22.893 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:22 compute-0 nova_compute[251992]: 2025-12-06 07:13:22.894 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:22 compute-0 nova_compute[251992]: 2025-12-06 07:13:22.906 251996 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:13:22 compute-0 nova_compute[251992]: 2025-12-06 07:13:22.907 251996 INFO nova.compute.claims [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.048 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 88 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 MiB/s wr, 241 op/s
Dec 06 07:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.414 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4004435533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.509 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.515 251996 DEBUG nova.compute.provider_tree [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.537 251996 DEBUG nova.scheduler.client.report [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.575 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.576 251996 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.656 251996 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.656 251996 DEBUG nova.network.neutron [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.674 251996 INFO nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.692 251996 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:13:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.798 251996 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.799 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.799 251996 INFO nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Creating image(s)
Dec 06 07:13:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Dec 06 07:13:23 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Dec 06 07:13:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3627339594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4004435533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.846 251996 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 6f99df47-6b1f-403c-995d-e8f72597bf58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.874 251996 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 6f99df47-6b1f-403c-995d-e8f72597bf58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.903 251996 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 6f99df47-6b1f-403c-995d-e8f72597bf58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.908 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.969 251996 DEBUG nova.policy [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '03fb2817729e4b71932023a7637c6244', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de09de98b3b1445f88b6094b6aac4a30', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.996 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:23 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.998 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:24 compute-0 nova_compute[251992]: 2025-12-06 07:13:23.999 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:24 compute-0 nova_compute[251992]: 2025-12-06 07:13:24.000 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:24 compute-0 nova_compute[251992]: 2025-12-06 07:13:24.049 251996 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 6f99df47-6b1f-403c-995d-e8f72597bf58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:24 compute-0 nova_compute[251992]: 2025-12-06 07:13:24.054 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 6f99df47-6b1f-403c-995d-e8f72597bf58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:13:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:24.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:13:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:24.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:24 compute-0 nova_compute[251992]: 2025-12-06 07:13:24.411 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 6f99df47-6b1f-403c-995d-e8f72597bf58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:24 compute-0 nova_compute[251992]: 2025-12-06 07:13:24.523 251996 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] resizing rbd image 6f99df47-6b1f-403c-995d-e8f72597bf58_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:13:24 compute-0 nova_compute[251992]: 2025-12-06 07:13:24.652 251996 DEBUG nova.objects.instance [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f99df47-6b1f-403c-995d-e8f72597bf58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Dec 06 07:13:24 compute-0 nova_compute[251992]: 2025-12-06 07:13:24.673 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:13:24 compute-0 nova_compute[251992]: 2025-12-06 07:13:24.673 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Ensure instance console log exists: /var/lib/nova/instances/6f99df47-6b1f-403c-995d-e8f72597bf58/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:13:24 compute-0 nova_compute[251992]: 2025-12-06 07:13:24.674 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:24 compute-0 nova_compute[251992]: 2025-12-06 07:13:24.674 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:24 compute-0 nova_compute[251992]: 2025-12-06 07:13:24.675 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Dec 06 07:13:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Dec 06 07:13:24 compute-0 ceph-mon[74339]: pgmap v1648: 305 pgs: 305 active+clean; 88 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 MiB/s wr, 241 op/s
Dec 06 07:13:24 compute-0 ceph-mon[74339]: osdmap e241: 3 total, 3 up, 3 in
Dec 06 07:13:24 compute-0 ceph-mon[74339]: osdmap e242: 3 total, 3 up, 3 in
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 136 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.0 MiB/s wr, 302 op/s
Dec 06 07:13:25 compute-0 nova_compute[251992]: 2025-12-06 07:13:25.295 251996 DEBUG nova.network.neutron [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Successfully created port: bcf291c2-36ca-46c4-9059-50514f8c171d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:13:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001964757235250326 of space, bias 1.0, pg target 0.5894271705750977 quantized to 32 (current 32)
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019269524706867671 of space, bias 1.0, pg target 0.5780857412060302 quantized to 32 (current 32)
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:13:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:13:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Dec 06 07:13:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Dec 06 07:13:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/895580667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:25 compute-0 ceph-mon[74339]: osdmap e243: 3 total, 3 up, 3 in
Dec 06 07:13:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:13:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:26.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:13:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:26.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:26 compute-0 nova_compute[251992]: 2025-12-06 07:13:26.815 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:26 compute-0 nova_compute[251992]: 2025-12-06 07:13:26.817 251996 DEBUG nova.network.neutron [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Successfully updated port: bcf291c2-36ca-46c4-9059-50514f8c171d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:13:26 compute-0 nova_compute[251992]: 2025-12-06 07:13:26.835 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "refresh_cache-6f99df47-6b1f-403c-995d-e8f72597bf58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:13:26 compute-0 nova_compute[251992]: 2025-12-06 07:13:26.835 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquired lock "refresh_cache-6f99df47-6b1f-403c-995d-e8f72597bf58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:13:26 compute-0 nova_compute[251992]: 2025-12-06 07:13:26.835 251996 DEBUG nova.network.neutron [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:13:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Dec 06 07:13:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Dec 06 07:13:26 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Dec 06 07:13:26 compute-0 ceph-mon[74339]: pgmap v1651: 305 pgs: 305 active+clean; 136 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.0 MiB/s wr, 302 op/s
Dec 06 07:13:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 192 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 12 MiB/s wr, 346 op/s
Dec 06 07:13:27 compute-0 nova_compute[251992]: 2025-12-06 07:13:27.343 251996 DEBUG nova.network.neutron [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:13:27 compute-0 nova_compute[251992]: 2025-12-06 07:13:27.577 251996 DEBUG nova.compute.manager [req-c719d3a9-f25d-4fd5-a2a4-fd3654641749 req-ee103ce2-ed3c-4f8e-8656-1f991354eb7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Received event network-changed-bcf291c2-36ca-46c4-9059-50514f8c171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:27 compute-0 nova_compute[251992]: 2025-12-06 07:13:27.578 251996 DEBUG nova.compute.manager [req-c719d3a9-f25d-4fd5-a2a4-fd3654641749 req-ee103ce2-ed3c-4f8e-8656-1f991354eb7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Refreshing instance network info cache due to event network-changed-bcf291c2-36ca-46c4-9059-50514f8c171d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:13:27 compute-0 nova_compute[251992]: 2025-12-06 07:13:27.578 251996 DEBUG oslo_concurrency.lockutils [req-c719d3a9-f25d-4fd5-a2a4-fd3654641749 req-ee103ce2-ed3c-4f8e-8656-1f991354eb7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6f99df47-6b1f-403c-995d-e8f72597bf58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:13:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:28.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:28.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.414 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.558 251996 DEBUG nova.network.neutron [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Updating instance_info_cache with network_info: [{"id": "bcf291c2-36ca-46c4-9059-50514f8c171d", "address": "fa:16:3e:a7:47:3c", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcf291c2-36", "ovs_interfaceid": "bcf291c2-36ca-46c4-9059-50514f8c171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.610 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Releasing lock "refresh_cache-6f99df47-6b1f-403c-995d-e8f72597bf58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.610 251996 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Instance network_info: |[{"id": "bcf291c2-36ca-46c4-9059-50514f8c171d", "address": "fa:16:3e:a7:47:3c", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcf291c2-36", "ovs_interfaceid": "bcf291c2-36ca-46c4-9059-50514f8c171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.611 251996 DEBUG oslo_concurrency.lockutils [req-c719d3a9-f25d-4fd5-a2a4-fd3654641749 req-ee103ce2-ed3c-4f8e-8656-1f991354eb7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6f99df47-6b1f-403c-995d-e8f72597bf58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.611 251996 DEBUG nova.network.neutron [req-c719d3a9-f25d-4fd5-a2a4-fd3654641749 req-ee103ce2-ed3c-4f8e-8656-1f991354eb7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Refreshing network info cache for port bcf291c2-36ca-46c4-9059-50514f8c171d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.615 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Start _get_guest_xml network_info=[{"id": "bcf291c2-36ca-46c4-9059-50514f8c171d", "address": "fa:16:3e:a7:47:3c", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcf291c2-36", "ovs_interfaceid": "bcf291c2-36ca-46c4-9059-50514f8c171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.621 251996 WARNING nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.624 251996 DEBUG nova.virt.libvirt.host [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.625 251996 DEBUG nova.virt.libvirt.host [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.627 251996 DEBUG nova.virt.libvirt.host [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.627 251996 DEBUG nova.virt.libvirt.host [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.629 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.629 251996 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.629 251996 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.630 251996 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.630 251996 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.630 251996 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.630 251996 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.631 251996 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.631 251996 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.631 251996 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.631 251996 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.632 251996 DEBUG nova.virt.hardware [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:13:28 compute-0 nova_compute[251992]: 2025-12-06 07:13:28.635 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:13:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768478065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.096 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.120 251996 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 6f99df47-6b1f-403c-995d-e8f72597bf58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.124 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 251 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 14 MiB/s wr, 354 op/s
Dec 06 07:13:29 compute-0 ceph-mon[74339]: osdmap e244: 3 total, 3 up, 3 in
Dec 06 07:13:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:13:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/996662023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.575 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.577 251996 DEBUG nova.virt.libvirt.vif [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:13:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1838229485',display_name='tempest-MultipleCreateTestJSON-server-1838229485-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1838229485-2',id=63,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de09de98b3b1445f88b6094b6aac4a30',ramdisk_id='',reservation_id='r-kbd1i45d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1199242675',owner_user_name='tempest-MultipleCreate
TestJSON-1199242675-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:13:23Z,user_data=None,user_id='03fb2817729e4b71932023a7637c6244',uuid=6f99df47-6b1f-403c-995d-e8f72597bf58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcf291c2-36ca-46c4-9059-50514f8c171d", "address": "fa:16:3e:a7:47:3c", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcf291c2-36", "ovs_interfaceid": "bcf291c2-36ca-46c4-9059-50514f8c171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.577 251996 DEBUG nova.network.os_vif_util [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converting VIF {"id": "bcf291c2-36ca-46c4-9059-50514f8c171d", "address": "fa:16:3e:a7:47:3c", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcf291c2-36", "ovs_interfaceid": "bcf291c2-36ca-46c4-9059-50514f8c171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.578 251996 DEBUG nova.network.os_vif_util [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=bcf291c2-36ca-46c4-9059-50514f8c171d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcf291c2-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.579 251996 DEBUG nova.objects.instance [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f99df47-6b1f-403c-995d-e8f72597bf58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.610 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:13:29 compute-0 nova_compute[251992]:   <uuid>6f99df47-6b1f-403c-995d-e8f72597bf58</uuid>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   <name>instance-0000003f</name>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <nova:name>tempest-MultipleCreateTestJSON-server-1838229485-2</nova:name>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:13:28</nova:creationTime>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <nova:user uuid="03fb2817729e4b71932023a7637c6244">tempest-MultipleCreateTestJSON-1199242675-project-member</nova:user>
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <nova:project uuid="de09de98b3b1445f88b6094b6aac4a30">tempest-MultipleCreateTestJSON-1199242675</nova:project>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <nova:port uuid="bcf291c2-36ca-46c4-9059-50514f8c171d">
Dec 06 07:13:29 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <system>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <entry name="serial">6f99df47-6b1f-403c-995d-e8f72597bf58</entry>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <entry name="uuid">6f99df47-6b1f-403c-995d-e8f72597bf58</entry>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     </system>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   <os>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   </os>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   <features>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   </features>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/6f99df47-6b1f-403c-995d-e8f72597bf58_disk">
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       </source>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/6f99df47-6b1f-403c-995d-e8f72597bf58_disk.config">
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       </source>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:13:29 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:a7:47:3c"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <target dev="tapbcf291c2-36"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/6f99df47-6b1f-403c-995d-e8f72597bf58/console.log" append="off"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <video>
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     </video>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:13:29 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:13:29 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:13:29 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:13:29 compute-0 nova_compute[251992]: </domain>
Dec 06 07:13:29 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.611 251996 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Preparing to wait for external event network-vif-plugged-bcf291c2-36ca-46c4-9059-50514f8c171d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.612 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.612 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.612 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.613 251996 DEBUG nova.virt.libvirt.vif [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:13:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1838229485',display_name='tempest-MultipleCreateTestJSON-server-1838229485-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1838229485-2',id=63,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de09de98b3b1445f88b6094b6aac4a30',ramdisk_id='',reservation_id='r-kbd1i45d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1199242675',owner_user_name='tempest-MultipleCreateTestJSON-1199242675-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:13:23Z,user_data=None,user_id='03fb2817729e4b71932023a7637c6244',uuid=6f99df47-6b1f-403c-995d-e8f72597bf58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcf291c2-36ca-46c4-9059-50514f8c171d", "address": "fa:16:3e:a7:47:3c", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcf291c2-36", "ovs_interfaceid": "bcf291c2-36ca-46c4-9059-50514f8c171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.613 251996 DEBUG nova.network.os_vif_util [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converting VIF {"id": "bcf291c2-36ca-46c4-9059-50514f8c171d", "address": "fa:16:3e:a7:47:3c", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcf291c2-36", "ovs_interfaceid": "bcf291c2-36ca-46c4-9059-50514f8c171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.614 251996 DEBUG nova.network.os_vif_util [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=bcf291c2-36ca-46c4-9059-50514f8c171d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcf291c2-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.614 251996 DEBUG os_vif [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=bcf291c2-36ca-46c4-9059-50514f8c171d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcf291c2-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.614 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.615 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.615 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.618 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.618 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbcf291c2-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.619 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbcf291c2-36, col_values=(('external_ids', {'iface-id': 'bcf291c2-36ca-46c4-9059-50514f8c171d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:47:3c', 'vm-uuid': '6f99df47-6b1f-403c-995d-e8f72597bf58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.620 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:29 compute-0 NetworkManager[48965]: <info>  [1765005209.6214] manager: (tapbcf291c2-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.623 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.627 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.629 251996 INFO os_vif [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=bcf291c2-36ca-46c4-9059-50514f8c171d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcf291c2-36')
Dec 06 07:13:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.724 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.724 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.725 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] No VIF found with MAC fa:16:3e:a7:47:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.725 251996 INFO nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Using config drive
Dec 06 07:13:29 compute-0 nova_compute[251992]: 2025-12-06 07:13:29.761 251996 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 6f99df47-6b1f-403c-995d-e8f72597bf58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:30.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:30.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:30 compute-0 nova_compute[251992]: 2025-12-06 07:13:30.331 251996 INFO nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Creating config drive at /var/lib/nova/instances/6f99df47-6b1f-403c-995d-e8f72597bf58/disk.config
Dec 06 07:13:30 compute-0 nova_compute[251992]: 2025-12-06 07:13:30.336 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f99df47-6b1f-403c-995d-e8f72597bf58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdzz1u7zw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:30 compute-0 nova_compute[251992]: 2025-12-06 07:13:30.466 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f99df47-6b1f-403c-995d-e8f72597bf58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdzz1u7zw" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:30 compute-0 nova_compute[251992]: 2025-12-06 07:13:30.594 251996 DEBUG nova.storage.rbd_utils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] rbd image 6f99df47-6b1f-403c-995d-e8f72597bf58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:30 compute-0 nova_compute[251992]: 2025-12-06 07:13:30.598 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f99df47-6b1f-403c-995d-e8f72597bf58/disk.config 6f99df47-6b1f-403c-995d-e8f72597bf58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:30 compute-0 nova_compute[251992]: 2025-12-06 07:13:30.674 251996 DEBUG nova.network.neutron [req-c719d3a9-f25d-4fd5-a2a4-fd3654641749 req-ee103ce2-ed3c-4f8e-8656-1f991354eb7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Updated VIF entry in instance network info cache for port bcf291c2-36ca-46c4-9059-50514f8c171d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:13:30 compute-0 nova_compute[251992]: 2025-12-06 07:13:30.676 251996 DEBUG nova.network.neutron [req-c719d3a9-f25d-4fd5-a2a4-fd3654641749 req-ee103ce2-ed3c-4f8e-8656-1f991354eb7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Updating instance_info_cache with network_info: [{"id": "bcf291c2-36ca-46c4-9059-50514f8c171d", "address": "fa:16:3e:a7:47:3c", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcf291c2-36", "ovs_interfaceid": "bcf291c2-36ca-46c4-9059-50514f8c171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:30 compute-0 nova_compute[251992]: 2025-12-06 07:13:30.715 251996 DEBUG oslo_concurrency.lockutils [req-c719d3a9-f25d-4fd5-a2a4-fd3654641749 req-ee103ce2-ed3c-4f8e-8656-1f991354eb7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6f99df47-6b1f-403c-995d-e8f72597bf58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:13:30 compute-0 ceph-mon[74339]: pgmap v1654: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 192 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 12 MiB/s wr, 346 op/s
Dec 06 07:13:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2051350914' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2768478065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:30 compute-0 ceph-mon[74339]: pgmap v1655: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 251 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 14 MiB/s wr, 354 op/s
Dec 06 07:13:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/996662023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2163268125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 273 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 13 MiB/s wr, 339 op/s
Dec 06 07:13:31 compute-0 nova_compute[251992]: 2025-12-06 07:13:31.765 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005196.7635624, 4e5a488b-67a2-44eb-a8b5-e963515206c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:31 compute-0 nova_compute[251992]: 2025-12-06 07:13:31.765 251996 INFO nova.compute.manager [-] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] VM Stopped (Lifecycle Event)
Dec 06 07:13:31 compute-0 nova_compute[251992]: 2025-12-06 07:13:31.788 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:31 compute-0 nova_compute[251992]: 2025-12-06 07:13:31.800 251996 DEBUG nova.compute.manager [None req-22f88256-99c3-4760-b8e1-4a986e80cc14 - - - - - -] [instance: 4e5a488b-67a2-44eb-a8b5-e963515206c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:31 compute-0 nova_compute[251992]: 2025-12-06 07:13:31.819 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Triggering sync for uuid 6f99df47-6b1f-403c-995d-e8f72597bf58 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:13:31 compute-0 nova_compute[251992]: 2025-12-06 07:13:31.820 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "6f99df47-6b1f-403c-995d-e8f72597bf58" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1271285825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.172 251996 DEBUG oslo_concurrency.processutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f99df47-6b1f-403c-995d-e8f72597bf58/disk.config 6f99df47-6b1f-403c-995d-e8f72597bf58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:32.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.172 251996 INFO nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Deleting local config drive /var/lib/nova/instances/6f99df47-6b1f-403c-995d-e8f72597bf58/disk.config because it was imported into RBD.
Dec 06 07:13:32 compute-0 kernel: tapbcf291c2-36: entered promiscuous mode
Dec 06 07:13:32 compute-0 NetworkManager[48965]: <info>  [1765005212.2312] manager: (tapbcf291c2-36): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Dec 06 07:13:32 compute-0 ovn_controller[147168]: 2025-12-06T07:13:32Z|00160|binding|INFO|Claiming lport bcf291c2-36ca-46c4-9059-50514f8c171d for this chassis.
Dec 06 07:13:32 compute-0 ovn_controller[147168]: 2025-12-06T07:13:32Z|00161|binding|INFO|bcf291c2-36ca-46c4-9059-50514f8c171d: Claiming fa:16:3e:a7:47:3c 10.100.0.5
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.233 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.241 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:47:3c 10.100.0.5'], port_security=['fa:16:3e:a7:47:3c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6f99df47-6b1f-403c-995d-e8f72597bf58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de09de98b3b1445f88b6094b6aac4a30', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c3e308f8-65fc-402f-96ed-2039201b95ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5d88f71-5dee-4f84-b30d-05c6ecea010a, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=bcf291c2-36ca-46c4-9059-50514f8c171d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.242 158118 INFO neutron.agent.ovn.metadata.agent [-] Port bcf291c2-36ca-46c4-9059-50514f8c171d in datapath c0aeacae-e53d-425f-88e7-942ba0ab660c bound to our chassis
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.244 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c0aeacae-e53d-425f-88e7-942ba0ab660c
Dec 06 07:13:32 compute-0 ovn_controller[147168]: 2025-12-06T07:13:32Z|00162|binding|INFO|Setting lport bcf291c2-36ca-46c4-9059-50514f8c171d ovn-installed in OVS
Dec 06 07:13:32 compute-0 ovn_controller[147168]: 2025-12-06T07:13:32Z|00163|binding|INFO|Setting lport bcf291c2-36ca-46c4-9059-50514f8c171d up in Southbound
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.256 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f2451342-b23b-4ec5-8922-a6da45b2af53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.258 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc0aeacae-e1 in ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.259 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc0aeacae-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.259 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d39086cd-b76f-4482-b46e-916e58978b84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.259 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.261 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9e58e9ed-b55a-44af-bd34-96191fcccdb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 systemd-udevd[291553]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.262 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-0 systemd-machined[212986]: New machine qemu-26-instance-0000003f.
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.273 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[238bf726-8597-4850-b295-ccd3f08424c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 NetworkManager[48965]: <info>  [1765005212.2772] device (tapbcf291c2-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:13:32 compute-0 NetworkManager[48965]: <info>  [1765005212.2778] device (tapbcf291c2-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:13:32 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-0000003f.
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.297 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[88a61b55-e889-463b-a03a-49cf2c5d3256]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:32.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.331 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[b23f147f-466d-4775-8f21-6a0fcdcdf8fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.336 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d643ff76-b33e-443b-9235-d88bdb9263d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 NetworkManager[48965]: <info>  [1765005212.3383] manager: (tapc0aeacae-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/89)
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.368 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[5cdb5ba4-62f8-452d-a295-e4c5a826b345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.372 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[48f46fa9-46cd-4b1e-800c-642ff2350553]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 NetworkManager[48965]: <info>  [1765005212.3946] device (tapc0aeacae-e0): carrier: link connected
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.397 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[13a95a93-574d-46ea-98a8-9d85e618f8d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.415 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5f175e2a-a62d-4dfb-ac10-03d1df17a62b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0aeacae-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:b3:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548498, 'reachable_time': 22345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291586, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.428 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a5b4cb-db4e-4d57-8b34-b9e7e472acb7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:b3f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548498, 'tstamp': 548498}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291587, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.443 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[eb593fc4-2e90-4c92-b99c-96ba5413bed7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc0aeacae-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:b3:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548498, 'reachable_time': 22345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291588, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.467 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9d321201-1080-4fbf-9e84-b51b835c0f94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.522 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fddf06-71e4-4630-97c8-3e6e1992d13b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.523 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0aeacae-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.524 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.524 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0aeacae-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:32 compute-0 kernel: tapc0aeacae-e0: entered promiscuous mode
Dec 06 07:13:32 compute-0 NetworkManager[48965]: <info>  [1765005212.5272] manager: (tapc0aeacae-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.526 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.528 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc0aeacae-e0, col_values=(('external_ids', {'iface-id': 'f88eb5d2-a701-4b34-8226-834c175af63c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.529 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.531 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-0 ovn_controller[147168]: 2025-12-06T07:13:32Z|00164|binding|INFO|Releasing lport f88eb5d2-a701-4b34-8226-834c175af63c from this chassis (sb_readonly=0)
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.533 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c0aeacae-e53d-425f-88e7-942ba0ab660c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c0aeacae-e53d-425f-88e7-942ba0ab660c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.544 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[44941306-810b-453a-a738-b9a16a400332]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.545 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-c0aeacae-e53d-425f-88e7-942ba0ab660c
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/c0aeacae-e53d-425f-88e7-942ba0ab660c.pid.haproxy
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID c0aeacae-e53d-425f-88e7-942ba0ab660c
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:32.547 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'env', 'PROCESS_TAG=haproxy-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c0aeacae-e53d-425f-88e7-942ba0ab660c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.549 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.571 251996 DEBUG nova.compute.manager [req-bcb4ac8c-c540-4b1d-805d-bd96d338d5e5 req-e8a02515-95d9-4a81-9cf8-1dabee66831e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Received event network-vif-plugged-bcf291c2-36ca-46c4-9059-50514f8c171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.571 251996 DEBUG oslo_concurrency.lockutils [req-bcb4ac8c-c540-4b1d-805d-bd96d338d5e5 req-e8a02515-95d9-4a81-9cf8-1dabee66831e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.572 251996 DEBUG oslo_concurrency.lockutils [req-bcb4ac8c-c540-4b1d-805d-bd96d338d5e5 req-e8a02515-95d9-4a81-9cf8-1dabee66831e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.572 251996 DEBUG oslo_concurrency.lockutils [req-bcb4ac8c-c540-4b1d-805d-bd96d338d5e5 req-e8a02515-95d9-4a81-9cf8-1dabee66831e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:32 compute-0 nova_compute[251992]: 2025-12-06 07:13:32.572 251996 DEBUG nova.compute.manager [req-bcb4ac8c-c540-4b1d-805d-bd96d338d5e5 req-e8a02515-95d9-4a81-9cf8-1dabee66831e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Processing event network-vif-plugged-bcf291c2-36ca-46c4-9059-50514f8c171d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:13:32 compute-0 podman[291620]: 2025-12-06 07:13:32.898475218 +0000 UTC m=+0.050899307 container create a167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:13:32 compute-0 systemd[1]: Started libpod-conmon-a167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf.scope.
Dec 06 07:13:32 compute-0 podman[291620]: 2025-12-06 07:13:32.874318355 +0000 UTC m=+0.026742464 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:13:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc7700a25ab3959b52ca21ad8ee9e62261ecd12b92ffb2c54fcdaebee8a767d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:13:32 compute-0 podman[291620]: 2025-12-06 07:13:32.992990286 +0000 UTC m=+0.145414405 container init a167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:13:32 compute-0 podman[291620]: 2025-12-06 07:13:32.998206892 +0000 UTC m=+0.150630981 container start a167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:13:33 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[291636]: [NOTICE]   (291640) : New worker (291642) forked
Dec 06 07:13:33 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[291636]: [NOTICE]   (291640) : Loading success.
Dec 06 07:13:33 compute-0 ceph-mon[74339]: pgmap v1656: 305 pgs: 305 active+clean; 273 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 13 MiB/s wr, 339 op/s
Dec 06 07:13:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3851881568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 280 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 8.9 MiB/s wr, 178 op/s
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.416 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.754 251996 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.754 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005213.7535195, 6f99df47-6b1f-403c-995d-e8f72597bf58 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.755 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] VM Started (Lifecycle Event)
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.757 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.761 251996 INFO nova.virt.libvirt.driver [-] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Instance spawned successfully.
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.761 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.786 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.792 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.796 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.796 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.797 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.797 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.798 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.798 251996 DEBUG nova.virt.libvirt.driver [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.830 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.830 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005213.7552264, 6f99df47-6b1f-403c-995d-e8f72597bf58 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.830 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] VM Paused (Lifecycle Event)
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.858 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.862 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005213.7570214, 6f99df47-6b1f-403c-995d-e8f72597bf58 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.862 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] VM Resumed (Lifecycle Event)
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.899 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.902 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.905 251996 INFO nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Took 10.11 seconds to spawn the instance on the hypervisor.
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.905 251996 DEBUG nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.931 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.964 251996 INFO nova.compute.manager [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Took 11.14 seconds to build instance.
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.980 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "73478c05-0d06-42b1-b7f8-3c0500924287" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.980 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.982 251996 DEBUG oslo_concurrency.lockutils [None req-58f48666-3b78-420e-af4d-de3a56228a17 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.982 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.983 251996 INFO nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.983 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:33 compute-0 nova_compute[251992]: 2025-12-06 07:13:33.998 251996 DEBUG nova.compute.manager [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.087 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.088 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.094 251996 DEBUG nova.virt.hardware [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.095 251996 INFO nova.compute.claims [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:13:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:34.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.254 251996 DEBUG oslo_concurrency.processutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:34.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.622 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Dec 06 07:13:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2415543248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.723 251996 DEBUG nova.compute.manager [req-07832e30-eb69-4e47-893f-4f5d95ff0214 req-8b19de37-0d37-4fb9-b0b2-b7374886c6a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Received event network-vif-plugged-bcf291c2-36ca-46c4-9059-50514f8c171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.724 251996 DEBUG oslo_concurrency.lockutils [req-07832e30-eb69-4e47-893f-4f5d95ff0214 req-8b19de37-0d37-4fb9-b0b2-b7374886c6a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.724 251996 DEBUG oslo_concurrency.lockutils [req-07832e30-eb69-4e47-893f-4f5d95ff0214 req-8b19de37-0d37-4fb9-b0b2-b7374886c6a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.724 251996 DEBUG oslo_concurrency.lockutils [req-07832e30-eb69-4e47-893f-4f5d95ff0214 req-8b19de37-0d37-4fb9-b0b2-b7374886c6a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.725 251996 DEBUG nova.compute.manager [req-07832e30-eb69-4e47-893f-4f5d95ff0214 req-8b19de37-0d37-4fb9-b0b2-b7374886c6a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] No waiting events found dispatching network-vif-plugged-bcf291c2-36ca-46c4-9059-50514f8c171d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.725 251996 WARNING nova.compute.manager [req-07832e30-eb69-4e47-893f-4f5d95ff0214 req-8b19de37-0d37-4fb9-b0b2-b7374886c6a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Received unexpected event network-vif-plugged-bcf291c2-36ca-46c4-9059-50514f8c171d for instance with vm_state active and task_state None.
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.725 251996 DEBUG oslo_concurrency.processutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.731 251996 DEBUG nova.compute.provider_tree [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.912 251996 DEBUG nova.scheduler.client.report [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.962 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:34 compute-0 nova_compute[251992]: 2025-12-06 07:13:34.963 251996 DEBUG nova.compute.manager [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.047 251996 DEBUG nova.compute.manager [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.048 251996 DEBUG nova.network.neutron [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.082 251996 INFO nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.106 251996 DEBUG nova.compute.manager [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.247 251996 DEBUG nova.compute.manager [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.249 251996 DEBUG nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.249 251996 INFO nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Creating image(s)
Dec 06 07:13:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 295 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 9.0 MiB/s wr, 259 op/s
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.276 251996 DEBUG nova.storage.rbd_utils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 73478c05-0d06-42b1-b7f8-3c0500924287_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.312 251996 DEBUG nova.storage.rbd_utils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 73478c05-0d06-42b1-b7f8-3c0500924287_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Dec 06 07:13:35 compute-0 ceph-mon[74339]: pgmap v1657: 305 pgs: 305 active+clean; 280 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 8.9 MiB/s wr, 178 op/s
Dec 06 07:13:35 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.387 251996 DEBUG nova.storage.rbd_utils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 73478c05-0d06-42b1-b7f8-3c0500924287_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.391 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "f997b43695a3483e375ff9bfd74cbb171b527230" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.393 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "f997b43695a3483e375ff9bfd74cbb171b527230" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.397 251996 DEBUG nova.policy [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bdd7994b0ebb4035a373b6560aa7dbcf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:13:35 compute-0 podman[291753]: 2025-12-06 07:13:35.439940847 +0000 UTC m=+0.098846530 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.693 251996 DEBUG oslo_concurrency.lockutils [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "6f99df47-6b1f-403c-995d-e8f72597bf58" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.694 251996 DEBUG oslo_concurrency.lockutils [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.694 251996 DEBUG oslo_concurrency.lockutils [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.694 251996 DEBUG oslo_concurrency.lockutils [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.695 251996 DEBUG oslo_concurrency.lockutils [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.696 251996 INFO nova.compute.manager [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Terminating instance
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.698 251996 DEBUG nova.compute.manager [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.733 251996 DEBUG nova.virt.libvirt.imagebackend [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6b59300d-e8aa-4c06-be5d-280ba7e39063/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6b59300d-e8aa-4c06-be5d-280ba7e39063/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.794 251996 DEBUG nova.virt.libvirt.imagebackend [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/6b59300d-e8aa-4c06-be5d-280ba7e39063/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.795 251996 DEBUG nova.storage.rbd_utils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] cloning images/6b59300d-e8aa-4c06-be5d-280ba7e39063@snap to None/73478c05-0d06-42b1-b7f8-3c0500924287_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:13:35 compute-0 kernel: tapbcf291c2-36 (unregistering): left promiscuous mode
Dec 06 07:13:35 compute-0 NetworkManager[48965]: <info>  [1765005215.8156] device (tapbcf291c2-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:13:35 compute-0 ovn_controller[147168]: 2025-12-06T07:13:35Z|00165|binding|INFO|Releasing lport bcf291c2-36ca-46c4-9059-50514f8c171d from this chassis (sb_readonly=0)
Dec 06 07:13:35 compute-0 ovn_controller[147168]: 2025-12-06T07:13:35Z|00166|binding|INFO|Setting lport bcf291c2-36ca-46c4-9059-50514f8c171d down in Southbound
Dec 06 07:13:35 compute-0 ovn_controller[147168]: 2025-12-06T07:13:35Z|00167|binding|INFO|Removing iface tapbcf291c2-36 ovn-installed in OVS
Dec 06 07:13:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:35.836 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:47:3c 10.100.0.5'], port_security=['fa:16:3e:a7:47:3c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6f99df47-6b1f-403c-995d-e8f72597bf58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de09de98b3b1445f88b6094b6aac4a30', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c3e308f8-65fc-402f-96ed-2039201b95ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5d88f71-5dee-4f84-b30d-05c6ecea010a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=bcf291c2-36ca-46c4-9059-50514f8c171d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:35.838 158118 INFO neutron.agent.ovn.metadata.agent [-] Port bcf291c2-36ca-46c4-9059-50514f8c171d in datapath c0aeacae-e53d-425f-88e7-942ba0ab660c unbound from our chassis
Dec 06 07:13:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:35.839 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c0aeacae-e53d-425f-88e7-942ba0ab660c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:13:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:35.840 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc5f9ff-9bc3-45f4-8cae-1e04ed14019e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:35.841 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c namespace which is not needed anymore
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.854 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:35 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Dec 06 07:13:35 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003f.scope: Consumed 3.548s CPU time.
Dec 06 07:13:35 compute-0 systemd-machined[212986]: Machine qemu-26-instance-0000003f terminated.
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.918 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.924 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.937 251996 INFO nova.virt.libvirt.driver [-] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Instance destroyed successfully.
Dec 06 07:13:35 compute-0 nova_compute[251992]: 2025-12-06 07:13:35.938 251996 DEBUG nova.objects.instance [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lazy-loading 'resources' on Instance uuid 6f99df47-6b1f-403c-995d-e8f72597bf58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:35 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[291636]: [NOTICE]   (291640) : haproxy version is 2.8.14-c23fe91
Dec 06 07:13:35 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[291636]: [NOTICE]   (291640) : path to executable is /usr/sbin/haproxy
Dec 06 07:13:35 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[291636]: [WARNING]  (291640) : Exiting Master process...
Dec 06 07:13:35 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[291636]: [WARNING]  (291640) : Exiting Master process...
Dec 06 07:13:35 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[291636]: [ALERT]    (291640) : Current worker (291642) exited with code 143 (Terminated)
Dec 06 07:13:35 compute-0 neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c[291636]: [WARNING]  (291640) : All workers exited. Exiting... (0)
Dec 06 07:13:35 compute-0 systemd[1]: libpod-a167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf.scope: Deactivated successfully.
Dec 06 07:13:35 compute-0 podman[291890]: 2025-12-06 07:13:35.991922222 +0000 UTC m=+0.060535855 container died a167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.006 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "f997b43695a3483e375ff9bfd74cbb171b527230" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf-userdata-shm.mount: Deactivated successfully.
Dec 06 07:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bc7700a25ab3959b52ca21ad8ee9e62261ecd12b92ffb2c54fcdaebee8a767d-merged.mount: Deactivated successfully.
Dec 06 07:13:36 compute-0 podman[291890]: 2025-12-06 07:13:36.113756531 +0000 UTC m=+0.182370164 container cleanup a167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:13:36 compute-0 systemd[1]: libpod-conmon-a167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf.scope: Deactivated successfully.
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.120 251996 DEBUG nova.virt.libvirt.vif [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:13:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-1838229485',display_name='tempest-MultipleCreateTestJSON-server-1838229485-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-1838229485-2',id=63,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-12-06T07:13:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de09de98b3b1445f88b6094b6aac4a30',ramdisk_id='',reservation_id='r-kbd1i45d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1199242675',owner_user_name='tempest-MultipleCreateTestJSON-1199242675-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:13:33Z,user_data=None,user_id='03fb2817729e4b71932023a7637c6244',uuid=6f99df47-6b1f-403c-995d-e8f72597bf58,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bcf291c2-36ca-46c4-9059-50514f8c171d", "address": "fa:16:3e:a7:47:3c", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcf291c2-36", "ovs_interfaceid": "bcf291c2-36ca-46c4-9059-50514f8c171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.121 251996 DEBUG nova.network.os_vif_util [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converting VIF {"id": "bcf291c2-36ca-46c4-9059-50514f8c171d", "address": "fa:16:3e:a7:47:3c", "network": {"id": "c0aeacae-e53d-425f-88e7-942ba0ab660c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-368607188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de09de98b3b1445f88b6094b6aac4a30", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcf291c2-36", "ovs_interfaceid": "bcf291c2-36ca-46c4-9059-50514f8c171d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.122 251996 DEBUG nova.network.os_vif_util [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=bcf291c2-36ca-46c4-9059-50514f8c171d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcf291c2-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.122 251996 DEBUG os_vif [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=bcf291c2-36ca-46c4-9059-50514f8c171d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcf291c2-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.124 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.124 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbcf291c2-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.126 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.128 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.131 251996 INFO os_vif [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:47:3c,bridge_name='br-int',has_traffic_filtering=True,id=bcf291c2-36ca-46c4-9059-50514f8c171d,network=Network(c0aeacae-e53d-425f-88e7-942ba0ab660c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcf291c2-36')
Dec 06 07:13:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:36.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:36 compute-0 podman[291966]: 2025-12-06 07:13:36.200072882 +0000 UTC m=+0.065018049 container remove a167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:13:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:36.205 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0caf1f32-a443-4aff-b29c-b27ee768bb62]: (4, ('Sat Dec  6 07:13:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c (a167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf)\na167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf\nSat Dec  6 07:13:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c (a167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf)\na167f622ff86b0628f634942b6d49c8a34e98850a09e3cc5f0bebcfb1b3ddfaf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:36.207 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[03bc986d-ab78-4063-852e-92e7dd47feee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:36.208 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0aeacae-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:36 compute-0 kernel: tapc0aeacae-e0: left promiscuous mode
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.220 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.228 251996 DEBUG nova.objects.instance [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'migration_context' on Instance uuid 73478c05-0d06-42b1-b7f8-3c0500924287 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.229 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:36.229 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a4126ab4-960d-4956-8cd4-d5404d3e8a2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:36.243 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[834389a0-ada1-41dd-a686-9804c7574aea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:36.245 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee718b8-d148-4865-81a9-931a93ef4f60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:36.258 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[257a04d7-de92-4b09-b943-dd02ccda1798]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548491, 'reachable_time': 23034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292017, 'error': None, 'target': 'ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:36 compute-0 systemd[1]: run-netns-ovnmeta\x2dc0aeacae\x2de53d\x2d425f\x2d88e7\x2d942ba0ab660c.mount: Deactivated successfully.
Dec 06 07:13:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:36.260 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c0aeacae-e53d-425f-88e7-942ba0ab660c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:13:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:36.260 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[ebea6544-d2d8-407b-a078-30444dec7582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.262 251996 DEBUG nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.262 251996 DEBUG nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Ensure instance console log exists: /var/lib/nova/instances/73478c05-0d06-42b1-b7f8-3c0500924287/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.263 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.263 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.263 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:13:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:36.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:13:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2415543248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:36 compute-0 ceph-mon[74339]: pgmap v1658: 305 pgs: 305 active+clean; 295 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 9.0 MiB/s wr, 259 op/s
Dec 06 07:13:36 compute-0 ceph-mon[74339]: osdmap e245: 3 total, 3 up, 3 in
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.880 251996 DEBUG nova.compute.manager [req-51ab2158-6b90-4430-8b8e-1bf4b1f86124 req-d6373348-7c1c-4f01-a905-09510c36f614 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Received event network-vif-unplugged-bcf291c2-36ca-46c4-9059-50514f8c171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.880 251996 DEBUG oslo_concurrency.lockutils [req-51ab2158-6b90-4430-8b8e-1bf4b1f86124 req-d6373348-7c1c-4f01-a905-09510c36f614 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.880 251996 DEBUG oslo_concurrency.lockutils [req-51ab2158-6b90-4430-8b8e-1bf4b1f86124 req-d6373348-7c1c-4f01-a905-09510c36f614 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.881 251996 DEBUG oslo_concurrency.lockutils [req-51ab2158-6b90-4430-8b8e-1bf4b1f86124 req-d6373348-7c1c-4f01-a905-09510c36f614 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.881 251996 DEBUG nova.compute.manager [req-51ab2158-6b90-4430-8b8e-1bf4b1f86124 req-d6373348-7c1c-4f01-a905-09510c36f614 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] No waiting events found dispatching network-vif-unplugged-bcf291c2-36ca-46c4-9059-50514f8c171d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.881 251996 DEBUG nova.compute.manager [req-51ab2158-6b90-4430-8b8e-1bf4b1f86124 req-d6373348-7c1c-4f01-a905-09510c36f614 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Received event network-vif-unplugged-bcf291c2-36ca-46c4-9059-50514f8c171d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:13:36 compute-0 nova_compute[251992]: 2025-12-06 07:13:36.906 251996 DEBUG nova.network.neutron [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Successfully created port: 69c3d15c-5184-4131-bf3a-495b7c0cb7fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:13:37 compute-0 nova_compute[251992]: 2025-12-06 07:13:37.110 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:37.111 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:37.112 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:13:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 304 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.4 MiB/s wr, 285 op/s
Dec 06 07:13:38 compute-0 nova_compute[251992]: 2025-12-06 07:13:38.173 251996 DEBUG nova.network.neutron [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Successfully updated port: 69c3d15c-5184-4131-bf3a-495b7c0cb7fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:13:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:38.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:13:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:38.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:13:38 compute-0 nova_compute[251992]: 2025-12-06 07:13:38.402 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "refresh_cache-73478c05-0d06-42b1-b7f8-3c0500924287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:13:38 compute-0 nova_compute[251992]: 2025-12-06 07:13:38.402 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquired lock "refresh_cache-73478c05-0d06-42b1-b7f8-3c0500924287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:13:38 compute-0 nova_compute[251992]: 2025-12-06 07:13:38.403 251996 DEBUG nova.network.neutron [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:13:38 compute-0 nova_compute[251992]: 2025-12-06 07:13:38.418 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:38 compute-0 nova_compute[251992]: 2025-12-06 07:13:38.553 251996 DEBUG nova.network.neutron [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:13:39 compute-0 nova_compute[251992]: 2025-12-06 07:13:39.044 251996 DEBUG nova.compute.manager [req-4632b2b7-1ad3-44fa-aeb6-84aea619a58a req-d80b93a1-e675-4881-a36d-ff5dd52d6aed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Received event network-vif-plugged-bcf291c2-36ca-46c4-9059-50514f8c171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:39 compute-0 nova_compute[251992]: 2025-12-06 07:13:39.044 251996 DEBUG oslo_concurrency.lockutils [req-4632b2b7-1ad3-44fa-aeb6-84aea619a58a req-d80b93a1-e675-4881-a36d-ff5dd52d6aed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:39 compute-0 nova_compute[251992]: 2025-12-06 07:13:39.045 251996 DEBUG oslo_concurrency.lockutils [req-4632b2b7-1ad3-44fa-aeb6-84aea619a58a req-d80b93a1-e675-4881-a36d-ff5dd52d6aed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:39 compute-0 nova_compute[251992]: 2025-12-06 07:13:39.045 251996 DEBUG oslo_concurrency.lockutils [req-4632b2b7-1ad3-44fa-aeb6-84aea619a58a req-d80b93a1-e675-4881-a36d-ff5dd52d6aed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:39 compute-0 nova_compute[251992]: 2025-12-06 07:13:39.045 251996 DEBUG nova.compute.manager [req-4632b2b7-1ad3-44fa-aeb6-84aea619a58a req-d80b93a1-e675-4881-a36d-ff5dd52d6aed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] No waiting events found dispatching network-vif-plugged-bcf291c2-36ca-46c4-9059-50514f8c171d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:39 compute-0 nova_compute[251992]: 2025-12-06 07:13:39.045 251996 WARNING nova.compute.manager [req-4632b2b7-1ad3-44fa-aeb6-84aea619a58a req-d80b93a1-e675-4881-a36d-ff5dd52d6aed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Received unexpected event network-vif-plugged-bcf291c2-36ca-46c4-9059-50514f8c171d for instance with vm_state active and task_state deleting.
Dec 06 07:13:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 279 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 315 op/s
Dec 06 07:13:39 compute-0 nova_compute[251992]: 2025-12-06 07:13:39.312 251996 DEBUG nova.compute.manager [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Received event network-changed-69c3d15c-5184-4131-bf3a-495b7c0cb7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:39 compute-0 nova_compute[251992]: 2025-12-06 07:13:39.312 251996 DEBUG nova.compute.manager [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Refreshing instance network info cache due to event network-changed-69c3d15c-5184-4131-bf3a-495b7c0cb7fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:13:39 compute-0 nova_compute[251992]: 2025-12-06 07:13:39.313 251996 DEBUG oslo_concurrency.lockutils [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-73478c05-0d06-42b1-b7f8-3c0500924287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:13:39 compute-0 ceph-mon[74339]: pgmap v1660: 305 pgs: 305 active+clean; 304 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.4 MiB/s wr, 285 op/s
Dec 06 07:13:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:13:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:40.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:13:40 compute-0 nova_compute[251992]: 2025-12-06 07:13:40.265 251996 INFO nova.virt.libvirt.driver [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Deleting instance files /var/lib/nova/instances/6f99df47-6b1f-403c-995d-e8f72597bf58_del
Dec 06 07:13:40 compute-0 nova_compute[251992]: 2025-12-06 07:13:40.265 251996 INFO nova.virt.libvirt.driver [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Deletion of /var/lib/nova/instances/6f99df47-6b1f-403c-995d-e8f72597bf58_del complete
Dec 06 07:13:40 compute-0 sudo[292021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:13:40 compute-0 sudo[292021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:40 compute-0 sudo[292021]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:13:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:40.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:13:40 compute-0 nova_compute[251992]: 2025-12-06 07:13:40.321 251996 INFO nova.compute.manager [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Took 4.62 seconds to destroy the instance on the hypervisor.
Dec 06 07:13:40 compute-0 nova_compute[251992]: 2025-12-06 07:13:40.321 251996 DEBUG oslo.service.loopingcall [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:13:40 compute-0 nova_compute[251992]: 2025-12-06 07:13:40.321 251996 DEBUG nova.compute.manager [-] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:13:40 compute-0 nova_compute[251992]: 2025-12-06 07:13:40.322 251996 DEBUG nova.network.neutron [-] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:13:40 compute-0 sudo[292046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:13:40 compute-0 sudo[292046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:40 compute-0 sudo[292046]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.127 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 214 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 2.6 MiB/s wr, 440 op/s
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.390 251996 DEBUG nova.network.neutron [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Updating instance_info_cache with network_info: [{"id": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "address": "fa:16:3e:33:eb:6f", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c3d15c-51", "ovs_interfaceid": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.413 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Releasing lock "refresh_cache-73478c05-0d06-42b1-b7f8-3c0500924287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:13:41 compute-0 podman[292073]: 2025-12-06 07:13:41.413018538 +0000 UTC m=+0.058866879 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.413 251996 DEBUG nova.compute.manager [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Instance network_info: |[{"id": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "address": "fa:16:3e:33:eb:6f", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c3d15c-51", "ovs_interfaceid": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.416 251996 DEBUG oslo_concurrency.lockutils [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-73478c05-0d06-42b1-b7f8-3c0500924287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.416 251996 DEBUG nova.network.neutron [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Refreshing network info cache for port 69c3d15c-5184-4131-bf3a-495b7c0cb7fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.419 251996 DEBUG nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Start _get_guest_xml network_info=[{"id": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "address": "fa:16:3e:33:eb:6f", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c3d15c-51", "ovs_interfaceid": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:13:22Z,direct_url=<?>,disk_format='raw',id=6b59300d-e8aa-4c06-be5d-280ba7e39063,min_disk=1,min_ram=0,name='tempest-test-snap-1571596018',owner='af7365adc05f4624a08a71cd5a77ada6',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:13:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6b59300d-e8aa-4c06-be5d-280ba7e39063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.423 251996 WARNING nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.430 251996 DEBUG nova.virt.libvirt.host [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.430 251996 DEBUG nova.virt.libvirt.host [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:13:41 compute-0 podman[292072]: 2025-12-06 07:13:41.431088161 +0000 UTC m=+0.072410606 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.437 251996 DEBUG nova.virt.libvirt.host [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.437 251996 DEBUG nova.virt.libvirt.host [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.438 251996 DEBUG nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.439 251996 DEBUG nova.virt.hardware [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:13:22Z,direct_url=<?>,disk_format='raw',id=6b59300d-e8aa-4c06-be5d-280ba7e39063,min_disk=1,min_ram=0,name='tempest-test-snap-1571596018',owner='af7365adc05f4624a08a71cd5a77ada6',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:13:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.439 251996 DEBUG nova.virt.hardware [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.439 251996 DEBUG nova.virt.hardware [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.440 251996 DEBUG nova.virt.hardware [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.440 251996 DEBUG nova.virt.hardware [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.440 251996 DEBUG nova.virt.hardware [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.440 251996 DEBUG nova.virt.hardware [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.441 251996 DEBUG nova.virt.hardware [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.441 251996 DEBUG nova.virt.hardware [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.441 251996 DEBUG nova.virt.hardware [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.441 251996 DEBUG nova.virt.hardware [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.444 251996 DEBUG oslo_concurrency.processutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:13:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231508313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.906 251996 DEBUG oslo_concurrency.processutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.933 251996 DEBUG nova.storage.rbd_utils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 73478c05-0d06-42b1-b7f8-3c0500924287_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:41 compute-0 nova_compute[251992]: 2025-12-06 07:13:41.937 251996 DEBUG oslo_concurrency.processutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:41 compute-0 ceph-mon[74339]: pgmap v1661: 305 pgs: 305 active+clean; 279 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 315 op/s
Dec 06 07:13:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:42.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:42.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:13:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3896354293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.803 251996 DEBUG oslo_concurrency.processutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.866s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.804 251996 DEBUG nova.virt.libvirt.vif [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-142077524',display_name='tempest-ImagesTestJSON-server-142077524',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-142077524',id=65,image_ref='6b59300d-e8aa-4c06-be5d-280ba7e39063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-ezl0258c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='db2b4e57-20af-415b-ad7b-ac5b7197acb6',image_min_disk='1',image_min_ram='0',image_owner_id='af7365adc05f4624a08a71cd5a77ada6',image_owner_project_name='tempest-ImagesTestJSON-134159412',image_owner_user_name='tempest-ImagesTestJSON-134159412-project-member',image_user_id='bdd7994b0ebb4035a373b6560aa7dbcf',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:13:35Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=73478c05-0d06-42b1-b7f8-3c0500924287,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "address": "fa:16:3e:33:eb:6f", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c3d15c-51", "ovs_interfaceid": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.805 251996 DEBUG nova.network.os_vif_util [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "address": "fa:16:3e:33:eb:6f", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c3d15c-51", "ovs_interfaceid": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.806 251996 DEBUG nova.network.os_vif_util [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:eb:6f,bridge_name='br-int',has_traffic_filtering=True,id=69c3d15c-5184-4131-bf3a-495b7c0cb7fd,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c3d15c-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.807 251996 DEBUG nova.objects.instance [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73478c05-0d06-42b1-b7f8-3c0500924287 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:13:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:13:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:13:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:13:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:13:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.980 251996 DEBUG nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:13:42 compute-0 nova_compute[251992]:   <uuid>73478c05-0d06-42b1-b7f8-3c0500924287</uuid>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   <name>instance-00000041</name>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <nova:name>tempest-ImagesTestJSON-server-142077524</nova:name>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:13:41</nova:creationTime>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <nova:user uuid="bdd7994b0ebb4035a373b6560aa7dbcf">tempest-ImagesTestJSON-134159412-project-member</nova:user>
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <nova:project uuid="af7365adc05f4624a08a71cd5a77ada6">tempest-ImagesTestJSON-134159412</nova:project>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6b59300d-e8aa-4c06-be5d-280ba7e39063"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <nova:port uuid="69c3d15c-5184-4131-bf3a-495b7c0cb7fd">
Dec 06 07:13:42 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <system>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <entry name="serial">73478c05-0d06-42b1-b7f8-3c0500924287</entry>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <entry name="uuid">73478c05-0d06-42b1-b7f8-3c0500924287</entry>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     </system>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   <os>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   </os>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   <features>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   </features>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/73478c05-0d06-42b1-b7f8-3c0500924287_disk">
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       </source>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/73478c05-0d06-42b1-b7f8-3c0500924287_disk.config">
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       </source>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:13:42 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:33:eb:6f"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <target dev="tap69c3d15c-51"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/73478c05-0d06-42b1-b7f8-3c0500924287/console.log" append="off"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <video>
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     </video>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <input type="keyboard" bus="usb"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:13:42 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:13:42 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:13:42 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:13:42 compute-0 nova_compute[251992]: </domain>
Dec 06 07:13:42 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.980 251996 DEBUG nova.compute.manager [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Preparing to wait for external event network-vif-plugged-69c3d15c-5184-4131-bf3a-495b7c0cb7fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.982 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.982 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.982 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.983 251996 DEBUG nova.virt.libvirt.vif [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-142077524',display_name='tempest-ImagesTestJSON-server-142077524',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-142077524',id=65,image_ref='6b59300d-e8aa-4c06-be5d-280ba7e39063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-ezl0258c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='db2b4e57-20af-415b-ad7b-ac5b7197acb6',image_min_disk='1',image_min_ram='0',image_owner_id='af7365adc05f4624a08a71cd5a77ada6',image_owner_project_name='tempest-ImagesTestJSON-134159412',image_owner_user_name='tempest-ImagesTestJSON-134159412-project-member',image_user_id='bdd7994b0ebb4035a373b6560aa7dbcf',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:13:35Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=73478c05-0d06-42b1-b7f8-3c0500924287,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "address": "fa:16:3e:33:eb:6f", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c3d15c-51", "ovs_interfaceid": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.983 251996 DEBUG nova.network.os_vif_util [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "address": "fa:16:3e:33:eb:6f", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c3d15c-51", "ovs_interfaceid": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.984 251996 DEBUG nova.network.os_vif_util [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:eb:6f,bridge_name='br-int',has_traffic_filtering=True,id=69c3d15c-5184-4131-bf3a-495b7c0cb7fd,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c3d15c-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.984 251996 DEBUG os_vif [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:eb:6f,bridge_name='br-int',has_traffic_filtering=True,id=69c3d15c-5184-4131-bf3a-495b7c0cb7fd,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c3d15c-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.985 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.985 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.986 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.989 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.989 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69c3d15c-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.990 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap69c3d15c-51, col_values=(('external_ids', {'iface-id': '69c3d15c-5184-4131-bf3a-495b7c0cb7fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:eb:6f', 'vm-uuid': '73478c05-0d06-42b1-b7f8-3c0500924287'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.991 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:42 compute-0 NetworkManager[48965]: <info>  [1765005222.9925] manager: (tap69c3d15c-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.994 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.997 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:42 compute-0 nova_compute[251992]: 2025-12-06 07:13:42.999 251996 INFO os_vif [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:eb:6f,bridge_name='br-int',has_traffic_filtering=True,id=69c3d15c-5184-4131-bf3a-495b7c0cb7fd,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c3d15c-51')
Dec 06 07:13:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 213 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 1.9 MiB/s wr, 417 op/s
Dec 06 07:13:43 compute-0 ceph-mon[74339]: pgmap v1662: 305 pgs: 305 active+clean; 214 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 2.6 MiB/s wr, 440 op/s
Dec 06 07:13:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1231508313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3896354293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:43 compute-0 nova_compute[251992]: 2025-12-06 07:13:43.421 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:43 compute-0 nova_compute[251992]: 2025-12-06 07:13:43.464 251996 DEBUG nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:13:43 compute-0 nova_compute[251992]: 2025-12-06 07:13:43.464 251996 DEBUG nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:13:43 compute-0 nova_compute[251992]: 2025-12-06 07:13:43.464 251996 DEBUG nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No VIF found with MAC fa:16:3e:33:eb:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:13:43 compute-0 nova_compute[251992]: 2025-12-06 07:13:43.465 251996 INFO nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Using config drive
Dec 06 07:13:43 compute-0 nova_compute[251992]: 2025-12-06 07:13:43.540 251996 DEBUG nova.storage.rbd_utils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 73478c05-0d06-42b1-b7f8-3c0500924287_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:43 compute-0 nova_compute[251992]: 2025-12-06 07:13:43.646 251996 DEBUG nova.network.neutron [-] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:43 compute-0 nova_compute[251992]: 2025-12-06 07:13:43.667 251996 INFO nova.compute.manager [-] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Took 3.35 seconds to deallocate network for instance.
Dec 06 07:13:43 compute-0 nova_compute[251992]: 2025-12-06 07:13:43.719 251996 DEBUG oslo_concurrency.lockutils [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:43 compute-0 nova_compute[251992]: 2025-12-06 07:13:43.719 251996 DEBUG oslo_concurrency.lockutils [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:43 compute-0 nova_compute[251992]: 2025-12-06 07:13:43.741 251996 DEBUG nova.compute.manager [req-e69a87f2-5818-47d8-9b1e-7a3ffa95ca0e req-fc2e325b-71a0-41ea-b13d-ad2c8a15d6cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Received event network-vif-deleted-bcf291c2-36ca-46c4-9059-50514f8c171d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:43 compute-0 nova_compute[251992]: 2025-12-06 07:13:43.813 251996 DEBUG oslo_concurrency.processutils [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.087 251996 INFO nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Creating config drive at /var/lib/nova/instances/73478c05-0d06-42b1-b7f8-3c0500924287/disk.config
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.092 251996 DEBUG oslo_concurrency.processutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73478c05-0d06-42b1-b7f8-3c0500924287/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqnux61o3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:44.114 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:44.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.221 251996 DEBUG oslo_concurrency.processutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73478c05-0d06-42b1-b7f8-3c0500924287/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqnux61o3" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.248 251996 DEBUG nova.storage.rbd_utils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 73478c05-0d06-42b1-b7f8-3c0500924287_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.251 251996 DEBUG oslo_concurrency.processutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73478c05-0d06-42b1-b7f8-3c0500924287/disk.config 73478c05-0d06-42b1-b7f8-3c0500924287_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3150186985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:13:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:44.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.332 251996 DEBUG oslo_concurrency.processutils [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.338 251996 DEBUG nova.compute.provider_tree [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.355 251996 DEBUG nova.scheduler.client.report [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.393 251996 DEBUG oslo_concurrency.lockutils [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.430 251996 DEBUG nova.network.neutron [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Updated VIF entry in instance network info cache for port 69c3d15c-5184-4131-bf3a-495b7c0cb7fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.431 251996 DEBUG nova.network.neutron [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Updating instance_info_cache with network_info: [{"id": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "address": "fa:16:3e:33:eb:6f", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c3d15c-51", "ovs_interfaceid": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.437 251996 INFO nova.scheduler.client.report [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Deleted allocations for instance 6f99df47-6b1f-403c-995d-e8f72597bf58
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.450 251996 DEBUG oslo_concurrency.lockutils [req-10da706c-ae9a-4da6-935d-013c680f29fd req-e1267900-13ff-4f98-8368-b737fbcd8744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-73478c05-0d06-42b1-b7f8-3c0500924287" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:13:44 compute-0 nova_compute[251992]: 2025-12-06 07:13:44.499 251996 DEBUG oslo_concurrency.lockutils [None req-36e4e934-d7ea-4061-91df-010d5b7138a6 03fb2817729e4b71932023a7637c6244 de09de98b3b1445f88b6094b6aac4a30 - - default default] Lock "6f99df47-6b1f-403c-995d-e8f72597bf58" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:45 compute-0 ceph-mon[74339]: pgmap v1663: 305 pgs: 305 active+clean; 213 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 1.9 MiB/s wr, 417 op/s
Dec 06 07:13:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3150186985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 192 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 497 KiB/s wr, 315 op/s
Dec 06 07:13:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:46.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:46.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1368209956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:46 compute-0 ceph-mon[74339]: pgmap v1664: 305 pgs: 305 active+clean; 192 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 497 KiB/s wr, 315 op/s
Dec 06 07:13:46 compute-0 nova_compute[251992]: 2025-12-06 07:13:46.689 251996 DEBUG oslo_concurrency.processutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73478c05-0d06-42b1-b7f8-3c0500924287/disk.config 73478c05-0d06-42b1-b7f8-3c0500924287_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:46 compute-0 nova_compute[251992]: 2025-12-06 07:13:46.690 251996 INFO nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Deleting local config drive /var/lib/nova/instances/73478c05-0d06-42b1-b7f8-3c0500924287/disk.config because it was imported into RBD.
Dec 06 07:13:46 compute-0 kernel: tap69c3d15c-51: entered promiscuous mode
Dec 06 07:13:46 compute-0 NetworkManager[48965]: <info>  [1765005226.7413] manager: (tap69c3d15c-51): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Dec 06 07:13:46 compute-0 ovn_controller[147168]: 2025-12-06T07:13:46Z|00168|binding|INFO|Claiming lport 69c3d15c-5184-4131-bf3a-495b7c0cb7fd for this chassis.
Dec 06 07:13:46 compute-0 ovn_controller[147168]: 2025-12-06T07:13:46Z|00169|binding|INFO|69c3d15c-5184-4131-bf3a-495b7c0cb7fd: Claiming fa:16:3e:33:eb:6f 10.100.0.3
Dec 06 07:13:46 compute-0 nova_compute[251992]: 2025-12-06 07:13:46.743 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.749 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:eb:6f 10.100.0.3'], port_security=['fa:16:3e:33:eb:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '73478c05-0d06-42b1-b7f8-3c0500924287', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=69c3d15c-5184-4131-bf3a-495b7c0cb7fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.751 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 69c3d15c-5184-4131-bf3a-495b7c0cb7fd in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad bound to our chassis
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.752 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec 06 07:13:46 compute-0 ovn_controller[147168]: 2025-12-06T07:13:46Z|00170|binding|INFO|Setting lport 69c3d15c-5184-4131-bf3a-495b7c0cb7fd ovn-installed in OVS
Dec 06 07:13:46 compute-0 ovn_controller[147168]: 2025-12-06T07:13:46Z|00171|binding|INFO|Setting lport 69c3d15c-5184-4131-bf3a-495b7c0cb7fd up in Southbound
Dec 06 07:13:46 compute-0 nova_compute[251992]: 2025-12-06 07:13:46.764 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.765 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5d78f85e-bddb-4a86-a76b-e2b546082c77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.766 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b0835d7-81 in ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:13:46 compute-0 nova_compute[251992]: 2025-12-06 07:13:46.767 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.767 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b0835d7-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.768 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc9a229-8c4d-435c-aa6b-b6851c47704e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.769 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f236972f-4b20-497f-97ac-07778ea7904f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 systemd-udevd[292271]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.779 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[2a334eb8-da5a-4b7a-a99d-b88e68b267c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 systemd-machined[212986]: New machine qemu-27-instance-00000041.
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.793 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[00a0f87d-24ab-4e21-a63e-a89abedd0a19]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 NetworkManager[48965]: <info>  [1765005226.8002] device (tap69c3d15c-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:13:46 compute-0 NetworkManager[48965]: <info>  [1765005226.8009] device (tap69c3d15c-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:13:46 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000041.
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.822 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8ce6ae-e8b2-45e7-9f93-6a0dbd339c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.828 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[db41a39f-3aae-438c-b015-543a3fb75cf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 NetworkManager[48965]: <info>  [1765005226.8289] manager: (tap2b0835d7-80): new Veth device (/org/freedesktop/NetworkManager/Devices/93)
Dec 06 07:13:46 compute-0 systemd-udevd[292279]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.860 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb05e2b-f24f-41c6-a6a4-0ad832c5dd41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.865 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[282bd012-a1e2-4ce0-b47a-f52ecb35ee7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 NetworkManager[48965]: <info>  [1765005226.8891] device (tap2b0835d7-80): carrier: link connected
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.893 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[52de064b-9eec-4d66-8ed0-60d76a215f90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.912 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8b10b83f-978f-4d26-ad70-c8eb6d214530]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b0835d7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:4e:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549947, 'reachable_time': 21966, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292306, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.930 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d01a813e-778e-42aa-8f34-37789c995d32]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:4e19'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 549947, 'tstamp': 549947}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292307, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.951 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6a373f-f2d7-470f-b95d-0ff2ae2273db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b0835d7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:4e:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549947, 'reachable_time': 21966, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292308, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:46.979 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ec74f7-cd2d-4800-a97d-c38f208de87d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:47.029 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0f377918-64d3-428b-a6ea-49f522347b19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:47.031 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b0835d7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:47.031 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:47.031 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b0835d7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:47 compute-0 nova_compute[251992]: 2025-12-06 07:13:47.033 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:47 compute-0 NetworkManager[48965]: <info>  [1765005227.0337] manager: (tap2b0835d7-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Dec 06 07:13:47 compute-0 kernel: tap2b0835d7-80: entered promiscuous mode
Dec 06 07:13:47 compute-0 nova_compute[251992]: 2025-12-06 07:13:47.035 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:47.036 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b0835d7-80, col_values=(('external_ids', {'iface-id': '87f2c5b0-3684-4269-9fbf-5a4dfd5a8759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:47 compute-0 nova_compute[251992]: 2025-12-06 07:13:47.037 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:47 compute-0 ovn_controller[147168]: 2025-12-06T07:13:47Z|00172|binding|INFO|Releasing lport 87f2c5b0-3684-4269-9fbf-5a4dfd5a8759 from this chassis (sb_readonly=0)
Dec 06 07:13:47 compute-0 nova_compute[251992]: 2025-12-06 07:13:47.053 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:47.054 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:47.055 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[61bf8dce-4a1f-42a6-a06e-fd9d7a8eed4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:47.056 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:13:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:47.057 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'env', 'PROCESS_TAG=haproxy-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:13:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 175 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 416 KiB/s wr, 279 op/s
Dec 06 07:13:47 compute-0 podman[292358]: 2025-12-06 07:13:47.370253408 +0000 UTC m=+0.024013300 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:13:47 compute-0 nova_compute[251992]: 2025-12-06 07:13:47.847 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005227.8472936, 73478c05-0d06-42b1-b7f8-3c0500924287 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:47 compute-0 nova_compute[251992]: 2025-12-06 07:13:47.848 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] VM Started (Lifecycle Event)
Dec 06 07:13:47 compute-0 nova_compute[251992]: 2025-12-06 07:13:47.992 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.036 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.041 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005227.8499324, 73478c05-0d06-42b1-b7f8-3c0500924287 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.041 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] VM Paused (Lifecycle Event)
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.058 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.061 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.082 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:13:48 compute-0 podman[292358]: 2025-12-06 07:13:48.15650011 +0000 UTC m=+0.810259982 container create 2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:13:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:48.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:48 compute-0 systemd[1]: Started libpod-conmon-2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab.scope.
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.213 251996 DEBUG nova.compute.manager [req-a2d529e8-9db9-46d6-ac12-da10a4891f76 req-d2341c63-1e5c-4e3b-a1b3-34b31c8b0c7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Received event network-vif-plugged-69c3d15c-5184-4131-bf3a-495b7c0cb7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.214 251996 DEBUG oslo_concurrency.lockutils [req-a2d529e8-9db9-46d6-ac12-da10a4891f76 req-d2341c63-1e5c-4e3b-a1b3-34b31c8b0c7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.215 251996 DEBUG oslo_concurrency.lockutils [req-a2d529e8-9db9-46d6-ac12-da10a4891f76 req-d2341c63-1e5c-4e3b-a1b3-34b31c8b0c7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.215 251996 DEBUG oslo_concurrency.lockutils [req-a2d529e8-9db9-46d6-ac12-da10a4891f76 req-d2341c63-1e5c-4e3b-a1b3-34b31c8b0c7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.215 251996 DEBUG nova.compute.manager [req-a2d529e8-9db9-46d6-ac12-da10a4891f76 req-d2341c63-1e5c-4e3b-a1b3-34b31c8b0c7b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Processing event network-vif-plugged-69c3d15c-5184-4131-bf3a-495b7c0cb7fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.216 251996 DEBUG nova.compute.manager [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.220 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005228.220249, 73478c05-0d06-42b1-b7f8-3c0500924287 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.220 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] VM Resumed (Lifecycle Event)
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.223 251996 DEBUG nova.virt.libvirt.driver [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.229 251996 INFO nova.virt.libvirt.driver [-] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Instance spawned successfully.
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.230 251996 INFO nova.compute.manager [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Took 12.98 seconds to spawn the instance on the hypervisor.
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.230 251996 DEBUG nova.compute.manager [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c98dfbdb09182ee14ca0a833a9c49eaf1ddd68cff63aa0a02349d2bf7e55c25/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.290 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.293 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:13:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:48.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.333 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.348 251996 INFO nova.compute.manager [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Took 14.29 seconds to build instance.
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.378 251996 DEBUG oslo_concurrency.lockutils [None req-f8547191-cead-4a04-abd3-c08086d48fa5 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:48 compute-0 podman[292358]: 2025-12-06 07:13:48.421999436 +0000 UTC m=+1.075759328 container init 2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.425 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:48 compute-0 podman[292358]: 2025-12-06 07:13:48.428416644 +0000 UTC m=+1.082176516 container start 2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:13:48 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[292397]: [NOTICE]   (292401) : New worker (292403) forked
Dec 06 07:13:48 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[292397]: [NOTICE]   (292401) : Loading success.
Dec 06 07:13:48 compute-0 nova_compute[251992]: 2025-12-06 07:13:48.689 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:49 compute-0 ovn_controller[147168]: 2025-12-06T07:13:49Z|00173|binding|INFO|Releasing lport 87f2c5b0-3684-4269-9fbf-5a4dfd5a8759 from this chassis (sb_readonly=0)
Dec 06 07:13:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 167 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 44 KiB/s wr, 217 op/s
Dec 06 07:13:49 compute-0 nova_compute[251992]: 2025-12-06 07:13:49.737 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:49 compute-0 ceph-mon[74339]: pgmap v1665: 305 pgs: 305 active+clean; 175 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 416 KiB/s wr, 279 op/s
Dec 06 07:13:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:50.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.310 251996 DEBUG nova.compute.manager [req-cb515cbc-f8ca-48f4-95c7-f6459fc8d275 req-f2c12a11-475b-462b-9ff3-bc3e0b9cd1ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Received event network-vif-plugged-69c3d15c-5184-4131-bf3a-495b7c0cb7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.310 251996 DEBUG oslo_concurrency.lockutils [req-cb515cbc-f8ca-48f4-95c7-f6459fc8d275 req-f2c12a11-475b-462b-9ff3-bc3e0b9cd1ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.310 251996 DEBUG oslo_concurrency.lockutils [req-cb515cbc-f8ca-48f4-95c7-f6459fc8d275 req-f2c12a11-475b-462b-9ff3-bc3e0b9cd1ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.311 251996 DEBUG oslo_concurrency.lockutils [req-cb515cbc-f8ca-48f4-95c7-f6459fc8d275 req-f2c12a11-475b-462b-9ff3-bc3e0b9cd1ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.311 251996 DEBUG nova.compute.manager [req-cb515cbc-f8ca-48f4-95c7-f6459fc8d275 req-f2c12a11-475b-462b-9ff3-bc3e0b9cd1ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] No waiting events found dispatching network-vif-plugged-69c3d15c-5184-4131-bf3a-495b7c0cb7fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.311 251996 WARNING nova.compute.manager [req-cb515cbc-f8ca-48f4-95c7-f6459fc8d275 req-f2c12a11-475b-462b-9ff3-bc3e0b9cd1ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Received unexpected event network-vif-plugged-69c3d15c-5184-4131-bf3a-495b7c0cb7fd for instance with vm_state active and task_state None.
Dec 06 07:13:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:50.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.573 251996 DEBUG oslo_concurrency.lockutils [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "73478c05-0d06-42b1-b7f8-3c0500924287" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.574 251996 DEBUG oslo_concurrency.lockutils [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.574 251996 DEBUG oslo_concurrency.lockutils [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.575 251996 DEBUG oslo_concurrency.lockutils [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.575 251996 DEBUG oslo_concurrency.lockutils [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.576 251996 INFO nova.compute.manager [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Terminating instance
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.578 251996 DEBUG nova.compute.manager [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.932 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005215.931415, 6f99df47-6b1f-403c-995d-e8f72597bf58 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.933 251996 INFO nova.compute.manager [-] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] VM Stopped (Lifecycle Event)
Dec 06 07:13:50 compute-0 nova_compute[251992]: 2025-12-06 07:13:50.958 251996 DEBUG nova.compute.manager [None req-ace679e3-c8c1-42a8-9450-a047af207842 - - - - - -] [instance: 6f99df47-6b1f-403c-995d-e8f72597bf58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:13:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1022856760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/951077284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:51 compute-0 ceph-mon[74339]: pgmap v1666: 305 pgs: 305 active+clean; 167 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 44 KiB/s wr, 217 op/s
Dec 06 07:13:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:51 compute-0 kernel: tap69c3d15c-51 (unregistering): left promiscuous mode
Dec 06 07:13:51 compute-0 NetworkManager[48965]: <info>  [1765005231.4864] device (tap69c3d15c-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:13:51 compute-0 ovn_controller[147168]: 2025-12-06T07:13:51Z|00174|binding|INFO|Releasing lport 69c3d15c-5184-4131-bf3a-495b7c0cb7fd from this chassis (sb_readonly=0)
Dec 06 07:13:51 compute-0 ovn_controller[147168]: 2025-12-06T07:13:51Z|00175|binding|INFO|Setting lport 69c3d15c-5184-4131-bf3a-495b7c0cb7fd down in Southbound
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.495 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:51 compute-0 ovn_controller[147168]: 2025-12-06T07:13:51Z|00176|binding|INFO|Removing iface tap69c3d15c-51 ovn-installed in OVS
Dec 06 07:13:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:51.500 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:eb:6f 10.100.0.3'], port_security=['fa:16:3e:33:eb:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '73478c05-0d06-42b1-b7f8-3c0500924287', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=69c3d15c-5184-4131-bf3a-495b7c0cb7fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:51.502 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 69c3d15c-5184-4131-bf3a-495b7c0cb7fd in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad unbound from our chassis
Dec 06 07:13:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:51.504 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.505378) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231505529, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1654, "num_deletes": 523, "total_data_size": 1974185, "memory_usage": 2005896, "flush_reason": "Manual Compaction"}
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec 06 07:13:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:51.505 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4419d438-5ca7-4e6d-9374-d4176e61845a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:51.506 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad namespace which is not needed anymore
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.516 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231545622, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1350599, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32445, "largest_seqno": 34098, "table_properties": {"data_size": 1344213, "index_size": 2885, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19520, "raw_average_key_size": 20, "raw_value_size": 1328323, "raw_average_value_size": 1390, "num_data_blocks": 126, "num_entries": 955, "num_filter_entries": 955, "num_deletions": 523, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005122, "oldest_key_time": 1765005122, "file_creation_time": 1765005231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 40249 microseconds, and 5890 cpu microseconds.
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.545674) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1350599 bytes OK
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.545700) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.550232) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.550258) EVENT_LOG_v1 {"time_micros": 1765005231550251, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.550276) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1965756, prev total WAL file size 1965756, number of live WAL files 2.
Dec 06 07:13:51 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000041.scope: Deactivated successfully.
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.551113) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303130' seq:72057594037927935, type:22 .. '6D6772737461740031323732' seq:0, type:0; will stop at (end)
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1318KB)], [68(11MB)]
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231551199, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 13106727, "oldest_snapshot_seqno": -1}
Dec 06 07:13:51 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000041.scope: Consumed 3.238s CPU time.
Dec 06 07:13:51 compute-0 systemd-machined[212986]: Machine qemu-27-instance-00000041 terminated.
Dec 06 07:13:51 compute-0 kernel: tap69c3d15c-51: entered promiscuous mode
Dec 06 07:13:51 compute-0 NetworkManager[48965]: <info>  [1765005231.6267] manager: (tap69c3d15c-51): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Dec 06 07:13:51 compute-0 kernel: tap69c3d15c-51 (unregistering): left promiscuous mode
Dec 06 07:13:51 compute-0 ovn_controller[147168]: 2025-12-06T07:13:51Z|00177|binding|INFO|Claiming lport 69c3d15c-5184-4131-bf3a-495b7c0cb7fd for this chassis.
Dec 06 07:13:51 compute-0 ovn_controller[147168]: 2025-12-06T07:13:51Z|00178|binding|INFO|69c3d15c-5184-4131-bf3a-495b7c0cb7fd: Claiming fa:16:3e:33:eb:6f 10.100.0.3
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.630 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:51.641 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:eb:6f 10.100.0.3'], port_security=['fa:16:3e:33:eb:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '73478c05-0d06-42b1-b7f8-3c0500924287', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=69c3d15c-5184-4131-bf3a-495b7c0cb7fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.651 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:51 compute-0 ovn_controller[147168]: 2025-12-06T07:13:51Z|00179|binding|INFO|Releasing lport 69c3d15c-5184-4131-bf3a-495b7c0cb7fd from this chassis (sb_readonly=0)
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.654 251996 INFO nova.virt.libvirt.driver [-] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Instance destroyed successfully.
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.654 251996 DEBUG nova.objects.instance [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'resources' on Instance uuid 73478c05-0d06-42b1-b7f8-3c0500924287 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:13:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:51.658 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:eb:6f 10.100.0.3'], port_security=['fa:16:3e:33:eb:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '73478c05-0d06-42b1-b7f8-3c0500924287', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=69c3d15c-5184-4131-bf3a-495b7c0cb7fd) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.678 251996 DEBUG nova.virt.libvirt.vif [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:13:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-142077524',display_name='tempest-ImagesTestJSON-server-142077524',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-142077524',id=65,image_ref='6b59300d-e8aa-4c06-be5d-280ba7e39063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:13:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-ezl0258c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='db2b4e57-20af-415b-ad7b-ac5b7197acb6',image_min_disk='1',image_min_ram='0',image_owner_id='af7365adc05f4624a08a71cd5a77ada6',image_owner_project_name='tempest-ImagesTestJSON-134159412',image_owner_user_name='tempest-ImagesTestJSON-134159412-project-member',image_user_id='bdd7994b0ebb4035a373b6560aa7dbcf',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:13:48Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=73478c05-0d06-42b1-b7f8-3c0500924287,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "address": "fa:16:3e:33:eb:6f", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c3d15c-51", "ovs_interfaceid": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.679 251996 DEBUG nova.network.os_vif_util [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "address": "fa:16:3e:33:eb:6f", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69c3d15c-51", "ovs_interfaceid": "69c3d15c-5184-4131-bf3a-495b7c0cb7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.680 251996 DEBUG nova.network.os_vif_util [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:eb:6f,bridge_name='br-int',has_traffic_filtering=True,id=69c3d15c-5184-4131-bf3a-495b7c0cb7fd,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c3d15c-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.680 251996 DEBUG os_vif [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:eb:6f,bridge_name='br-int',has_traffic_filtering=True,id=69c3d15c-5184-4131-bf3a-495b7c0cb7fd,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c3d15c-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.682 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.682 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69c3d15c-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.683 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.686 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:13:51 compute-0 nova_compute[251992]: 2025-12-06 07:13:51.688 251996 INFO os_vif [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:eb:6f,bridge_name='br-int',has_traffic_filtering=True,id=69c3d15c-5184-4131-bf3a-495b7c0cb7fd,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69c3d15c-51')
Dec 06 07:13:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 167 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 27 KiB/s wr, 193 op/s
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6440 keys, 9571024 bytes, temperature: kUnknown
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231699095, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9571024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9528080, "index_size": 25746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 166156, "raw_average_key_size": 25, "raw_value_size": 9412547, "raw_average_value_size": 1461, "num_data_blocks": 1029, "num_entries": 6440, "num_filter_entries": 6440, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765005231, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.699674) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9571024 bytes
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.709081) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.4 rd, 64.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 11.2 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(16.8) write-amplify(7.1) OK, records in: 7469, records dropped: 1029 output_compression: NoCompression
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.709132) EVENT_LOG_v1 {"time_micros": 1765005231709117, "job": 38, "event": "compaction_finished", "compaction_time_micros": 148216, "compaction_time_cpu_micros": 30024, "output_level": 6, "num_output_files": 1, "total_output_size": 9571024, "num_input_records": 7469, "num_output_records": 6440, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231709499, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005231711328, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.550984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.711407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.711413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.711415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.711416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:13:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:13:51.711418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:13:51 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[292397]: [NOTICE]   (292401) : haproxy version is 2.8.14-c23fe91
Dec 06 07:13:51 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[292397]: [NOTICE]   (292401) : path to executable is /usr/sbin/haproxy
Dec 06 07:13:51 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[292397]: [ALERT]    (292401) : Current worker (292403) exited with code 143 (Terminated)
Dec 06 07:13:51 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[292397]: [WARNING]  (292401) : All workers exited. Exiting... (0)
Dec 06 07:13:51 compute-0 systemd[1]: libpod-2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab.scope: Deactivated successfully.
Dec 06 07:13:51 compute-0 podman[292441]: 2025-12-06 07:13:51.758293807 +0000 UTC m=+0.109212749 container died 2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab-userdata-shm.mount: Deactivated successfully.
Dec 06 07:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c98dfbdb09182ee14ca0a833a9c49eaf1ddd68cff63aa0a02349d2bf7e55c25-merged.mount: Deactivated successfully.
Dec 06 07:13:51 compute-0 podman[292441]: 2025-12-06 07:13:51.925339934 +0000 UTC m=+0.276258876 container cleanup 2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:13:51 compute-0 systemd[1]: libpod-conmon-2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab.scope: Deactivated successfully.
Dec 06 07:13:52 compute-0 podman[292488]: 2025-12-06 07:13:52.173753754 +0000 UTC m=+0.226229783 container remove 2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.179 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8aabe489-1e90-42f8-bed6-0bc23b8b26ef]: (4, ('Sat Dec  6 07:13:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad (2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab)\n2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab\nSat Dec  6 07:13:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad (2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab)\n2316478bfdb266e39df9ff433ca52b3fd64d5376707b20b9b5faa8dab28d49ab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.181 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[88800dee-2a2e-4c04-b21a-ea142a6a9662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.182 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b0835d7-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.184 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:52 compute-0 kernel: tap2b0835d7-80: left promiscuous mode
Dec 06 07:13:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:52.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.201 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.204 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac8464a-6458-4865-adfb-901b48113685]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.229 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[85ab9e76-186d-4312-af09-7f67ae9e8ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.230 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e6efa72c-9ed8-4a3b-b477-cbc5a2e7e22b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.245 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c5194fa9-a0a6-4a21-bd3f-8703e48ae249]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 549940, 'reachable_time': 32953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292501, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d2b0835d7\x2d87e4\x2d46cc\x2d8a94\x2de4e042bd4bad.mount: Deactivated successfully.
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.248 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.248 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ad7d38-9a3e-4b94-86dd-da259a8cb661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.250 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 69c3d15c-5184-4131-bf3a-495b7c0cb7fd in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad unbound from our chassis
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.251 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.251 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5a45fc-0746-400b-b91d-f53c00de25b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.252 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 69c3d15c-5184-4131-bf3a-495b7c0cb7fd in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad unbound from our chassis
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.253 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:13:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:13:52.253 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2d0de7-e40d-4c90-a7d9-db8f5b3ed157]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:13:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:52.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.493 251996 DEBUG nova.compute.manager [req-4cbbf839-cc55-4904-8fbd-ab17de3240d0 req-3790a4e6-e5eb-40b8-bcf4-2ef6a5237ca5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Received event network-vif-unplugged-69c3d15c-5184-4131-bf3a-495b7c0cb7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.494 251996 DEBUG oslo_concurrency.lockutils [req-4cbbf839-cc55-4904-8fbd-ab17de3240d0 req-3790a4e6-e5eb-40b8-bcf4-2ef6a5237ca5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.494 251996 DEBUG oslo_concurrency.lockutils [req-4cbbf839-cc55-4904-8fbd-ab17de3240d0 req-3790a4e6-e5eb-40b8-bcf4-2ef6a5237ca5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.494 251996 DEBUG oslo_concurrency.lockutils [req-4cbbf839-cc55-4904-8fbd-ab17de3240d0 req-3790a4e6-e5eb-40b8-bcf4-2ef6a5237ca5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.495 251996 DEBUG nova.compute.manager [req-4cbbf839-cc55-4904-8fbd-ab17de3240d0 req-3790a4e6-e5eb-40b8-bcf4-2ef6a5237ca5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] No waiting events found dispatching network-vif-unplugged-69c3d15c-5184-4131-bf3a-495b7c0cb7fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.495 251996 DEBUG nova.compute.manager [req-4cbbf839-cc55-4904-8fbd-ab17de3240d0 req-3790a4e6-e5eb-40b8-bcf4-2ef6a5237ca5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Received event network-vif-unplugged-69c3d15c-5184-4131-bf3a-495b7c0cb7fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.495 251996 DEBUG nova.compute.manager [req-4cbbf839-cc55-4904-8fbd-ab17de3240d0 req-3790a4e6-e5eb-40b8-bcf4-2ef6a5237ca5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Received event network-vif-plugged-69c3d15c-5184-4131-bf3a-495b7c0cb7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.495 251996 DEBUG oslo_concurrency.lockutils [req-4cbbf839-cc55-4904-8fbd-ab17de3240d0 req-3790a4e6-e5eb-40b8-bcf4-2ef6a5237ca5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.496 251996 DEBUG oslo_concurrency.lockutils [req-4cbbf839-cc55-4904-8fbd-ab17de3240d0 req-3790a4e6-e5eb-40b8-bcf4-2ef6a5237ca5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.496 251996 DEBUG oslo_concurrency.lockutils [req-4cbbf839-cc55-4904-8fbd-ab17de3240d0 req-3790a4e6-e5eb-40b8-bcf4-2ef6a5237ca5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.496 251996 DEBUG nova.compute.manager [req-4cbbf839-cc55-4904-8fbd-ab17de3240d0 req-3790a4e6-e5eb-40b8-bcf4-2ef6a5237ca5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] No waiting events found dispatching network-vif-plugged-69c3d15c-5184-4131-bf3a-495b7c0cb7fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.497 251996 WARNING nova.compute.manager [req-4cbbf839-cc55-4904-8fbd-ab17de3240d0 req-3790a4e6-e5eb-40b8-bcf4-2ef6a5237ca5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Received unexpected event network-vif-plugged-69c3d15c-5184-4131-bf3a-495b7c0cb7fd for instance with vm_state active and task_state deleting.
Dec 06 07:13:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3576366386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/603883367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.684 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.684 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.685 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.685 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.685 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.832 251996 INFO nova.virt.libvirt.driver [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Deleting instance files /var/lib/nova/instances/73478c05-0d06-42b1-b7f8-3c0500924287_del
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.834 251996 INFO nova.virt.libvirt.driver [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Deletion of /var/lib/nova/instances/73478c05-0d06-42b1-b7f8-3c0500924287_del complete
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.909 251996 INFO nova.compute.manager [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Took 2.33 seconds to destroy the instance on the hypervisor.
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.910 251996 DEBUG oslo.service.loopingcall [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.910 251996 DEBUG nova.compute.manager [-] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:13:52 compute-0 nova_compute[251992]: 2025-12-06 07:13:52.911 251996 DEBUG nova.network.neutron [-] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:13:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/205864655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.188 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.371 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.372 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4550MB free_disk=20.942642211914062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.372 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.372 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.426 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:53 compute-0 ceph-mon[74339]: pgmap v1667: 305 pgs: 305 active+clean; 167 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 27 KiB/s wr, 193 op/s
Dec 06 07:13:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/205864655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 176 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 314 KiB/s wr, 115 op/s
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.758 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 73478c05-0d06-42b1-b7f8-3c0500924287 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.759 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.759 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.872 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.982 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.983 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:13:53 compute-0 nova_compute[251992]: 2025-12-06 07:13:53.997 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.032 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.060 251996 DEBUG nova.network.neutron [-] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.070 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.096 251996 INFO nova.compute.manager [-] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Took 1.19 seconds to deallocate network for instance.
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.144 251996 DEBUG oslo_concurrency.lockutils [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:13:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:54.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:54.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262571811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.550 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.555 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.580 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.607 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.608 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.608 251996 DEBUG oslo_concurrency.lockutils [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.626 251996 DEBUG nova.compute.manager [req-257ede36-0769-4ec6-8042-3123c7173070 req-4ca3c731-0f3b-4eda-b388-878bc5f92318 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Received event network-vif-deleted-69c3d15c-5184-4131-bf3a-495b7c0cb7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:13:54 compute-0 nova_compute[251992]: 2025-12-06 07:13:54.662 251996 DEBUG oslo_concurrency.processutils [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:13:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1262571811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:13:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1297350945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:55 compute-0 nova_compute[251992]: 2025-12-06 07:13:55.102 251996 DEBUG oslo_concurrency.processutils [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:13:55 compute-0 nova_compute[251992]: 2025-12-06 07:13:55.112 251996 DEBUG nova.compute.provider_tree [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:13:55 compute-0 nova_compute[251992]: 2025-12-06 07:13:55.131 251996 DEBUG nova.scheduler.client.report [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:13:55 compute-0 nova_compute[251992]: 2025-12-06 07:13:55.156 251996 DEBUG oslo_concurrency.lockutils [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:55 compute-0 nova_compute[251992]: 2025-12-06 07:13:55.179 251996 INFO nova.scheduler.client.report [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Deleted allocations for instance 73478c05-0d06-42b1-b7f8-3c0500924287
Dec 06 07:13:55 compute-0 sudo[292572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:13:55 compute-0 sudo[292572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:55 compute-0 sudo[292572]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:55 compute-0 nova_compute[251992]: 2025-12-06 07:13:55.255 251996 DEBUG oslo_concurrency.lockutils [None req-69684947-5576-4b75-8a48-b5f6b1186f9b bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "73478c05-0d06-42b1-b7f8-3c0500924287" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:13:55 compute-0 sudo[292597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:13:55 compute-0 sudo[292597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:55 compute-0 sudo[292597]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:55 compute-0 sudo[292622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:13:55 compute-0 sudo[292622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:55 compute-0 sudo[292622]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:55 compute-0 sudo[292647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:13:55 compute-0 sudo[292647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:55 compute-0 nova_compute[251992]: 2025-12-06 07:13:55.609 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:55 compute-0 nova_compute[251992]: 2025-12-06 07:13:55.610 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:55 compute-0 nova_compute[251992]: 2025-12-06 07:13:55.610 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 199 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 145 op/s
Dec 06 07:13:55 compute-0 sudo[292647]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:13:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:13:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:13:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:13:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:13:56 compute-0 ceph-mon[74339]: pgmap v1668: 305 pgs: 305 active+clean; 176 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 314 KiB/s wr, 115 op/s
Dec 06 07:13:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3357547717' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1297350945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2222722586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:13:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:13:56 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e1543fe6-a0a7-46ed-b6eb-48b6ac907e97 does not exist
Dec 06 07:13:56 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b8a36c05-27a7-4ec9-aea6-c66896db1e9f does not exist
Dec 06 07:13:56 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev db982ee8-5231-42ec-ace5-cb4f2e94fa93 does not exist
Dec 06 07:13:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:13:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:13:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:13:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:13:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:13:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:13:56 compute-0 sudo[292703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:13:56 compute-0 sudo[292703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:56 compute-0 sudo[292703]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:56 compute-0 sudo[292728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:13:56 compute-0 sudo[292728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:56 compute-0 sudo[292728]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:13:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:56.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:13:56 compute-0 sudo[292753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:13:56 compute-0 sudo[292753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:56 compute-0 sudo[292753]: pam_unix(sudo:session): session closed for user root
Dec 06 07:13:56 compute-0 sudo[292778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:13:56 compute-0 sudo[292778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:13:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:56.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:13:56 compute-0 nova_compute[251992]: 2025-12-06 07:13:56.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:56 compute-0 nova_compute[251992]: 2025-12-06 07:13:56.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:13:56 compute-0 podman[292846]: 2025-12-06 07:13:56.572091839 +0000 UTC m=+0.022411685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:13:56 compute-0 nova_compute[251992]: 2025-12-06 07:13:56.685 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:56 compute-0 podman[292846]: 2025-12-06 07:13:56.82481365 +0000 UTC m=+0.275133476 container create b321c02f26c4951d963b4479c29c073f95147e64b22d6d4c759cee3b0a93ce26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:13:56 compute-0 systemd[1]: Started libpod-conmon-b321c02f26c4951d963b4479c29c073f95147e64b22d6d4c759cee3b0a93ce26.scope.
Dec 06 07:13:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:13:57 compute-0 ceph-mon[74339]: pgmap v1669: 305 pgs: 305 active+clean; 199 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 145 op/s
Dec 06 07:13:57 compute-0 podman[292846]: 2025-12-06 07:13:57.087524827 +0000 UTC m=+0.537844673 container init b321c02f26c4951d963b4479c29c073f95147e64b22d6d4c759cee3b0a93ce26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 07:13:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:13:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:13:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:13:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:13:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:13:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:13:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2607273512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:57 compute-0 podman[292846]: 2025-12-06 07:13:57.097322 +0000 UTC m=+0.547641826 container start b321c02f26c4951d963b4479c29c073f95147e64b22d6d4c759cee3b0a93ce26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 07:13:57 compute-0 silly_faraday[292862]: 167 167
Dec 06 07:13:57 compute-0 systemd[1]: libpod-b321c02f26c4951d963b4479c29c073f95147e64b22d6d4c759cee3b0a93ce26.scope: Deactivated successfully.
Dec 06 07:13:57 compute-0 podman[292846]: 2025-12-06 07:13:57.113312665 +0000 UTC m=+0.563632521 container attach b321c02f26c4951d963b4479c29c073f95147e64b22d6d4c759cee3b0a93ce26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 07:13:57 compute-0 podman[292846]: 2025-12-06 07:13:57.113633784 +0000 UTC m=+0.563953610 container died b321c02f26c4951d963b4479c29c073f95147e64b22d6d4c759cee3b0a93ce26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 07:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-188015369dc50d4c0508e2c4f1390d230cba2f24a64a6c4f923550dfcab3a30c-merged.mount: Deactivated successfully.
Dec 06 07:13:57 compute-0 podman[292846]: 2025-12-06 07:13:57.649441949 +0000 UTC m=+1.099761775 container remove b321c02f26c4951d963b4479c29c073f95147e64b22d6d4c759cee3b0a93ce26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:13:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 213 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 144 op/s
Dec 06 07:13:57 compute-0 systemd[1]: libpod-conmon-b321c02f26c4951d963b4479c29c073f95147e64b22d6d4c759cee3b0a93ce26.scope: Deactivated successfully.
Dec 06 07:13:57 compute-0 podman[292889]: 2025-12-06 07:13:57.85184782 +0000 UTC m=+0.077363343 container create 80f766dd313dc5e6e84575fe8e1337dfbccd679aea7d3cb7902aadee8b198b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:13:57 compute-0 systemd[1]: Started libpod-conmon-80f766dd313dc5e6e84575fe8e1337dfbccd679aea7d3cb7902aadee8b198b65.scope.
Dec 06 07:13:57 compute-0 podman[292889]: 2025-12-06 07:13:57.798166187 +0000 UTC m=+0.023681730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:13:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8aacba6267a50f73194e0fb015345ced2e043af433e5bae3d04d99d823f7bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8aacba6267a50f73194e0fb015345ced2e043af433e5bae3d04d99d823f7bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8aacba6267a50f73194e0fb015345ced2e043af433e5bae3d04d99d823f7bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8aacba6267a50f73194e0fb015345ced2e043af433e5bae3d04d99d823f7bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:13:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c8aacba6267a50f73194e0fb015345ced2e043af433e5bae3d04d99d823f7bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:13:57 compute-0 podman[292889]: 2025-12-06 07:13:57.974871492 +0000 UTC m=+0.200387045 container init 80f766dd313dc5e6e84575fe8e1337dfbccd679aea7d3cb7902aadee8b198b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 07:13:57 compute-0 podman[292889]: 2025-12-06 07:13:57.984459419 +0000 UTC m=+0.209974932 container start 80f766dd313dc5e6e84575fe8e1337dfbccd679aea7d3cb7902aadee8b198b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:13:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Dec 06 07:13:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:13:58.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:58 compute-0 podman[292889]: 2025-12-06 07:13:58.212070031 +0000 UTC m=+0.437585554 container attach 80f766dd313dc5e6e84575fe8e1337dfbccd679aea7d3cb7902aadee8b198b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:13:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Dec 06 07:13:58 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Dec 06 07:13:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/567437665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:13:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:13:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:13:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:13:58.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:13:58 compute-0 nova_compute[251992]: 2025-12-06 07:13:58.427 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:13:58 compute-0 nice_heyrovsky[292905]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:13:58 compute-0 nice_heyrovsky[292905]: --> relative data size: 1.0
Dec 06 07:13:58 compute-0 nice_heyrovsky[292905]: --> All data devices are unavailable
Dec 06 07:13:58 compute-0 systemd[1]: libpod-80f766dd313dc5e6e84575fe8e1337dfbccd679aea7d3cb7902aadee8b198b65.scope: Deactivated successfully.
Dec 06 07:13:58 compute-0 podman[292922]: 2025-12-06 07:13:58.92630522 +0000 UTC m=+0.023923947 container died 80f766dd313dc5e6e84575fe8e1337dfbccd679aea7d3cb7902aadee8b198b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:13:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 213 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 159 op/s
Dec 06 07:13:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c8aacba6267a50f73194e0fb015345ced2e043af433e5bae3d04d99d823f7bc-merged.mount: Deactivated successfully.
Dec 06 07:13:59 compute-0 ceph-mon[74339]: pgmap v1670: 305 pgs: 305 active+clean; 213 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 144 op/s
Dec 06 07:13:59 compute-0 ceph-mon[74339]: osdmap e246: 3 total, 3 up, 3 in
Dec 06 07:14:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:14:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:00.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:14:00 compute-0 podman[292922]: 2025-12-06 07:14:00.227714692 +0000 UTC m=+1.325333419 container remove 80f766dd313dc5e6e84575fe8e1337dfbccd679aea7d3cb7902aadee8b198b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_heyrovsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 07:14:00 compute-0 systemd[1]: libpod-conmon-80f766dd313dc5e6e84575fe8e1337dfbccd679aea7d3cb7902aadee8b198b65.scope: Deactivated successfully.
Dec 06 07:14:00 compute-0 sudo[292778]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:00 compute-0 sudo[292937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:14:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:14:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:00.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:14:00 compute-0 sudo[292937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:00 compute-0 sudo[292937]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:00 compute-0 sudo[292962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:14:00 compute-0 sudo[292962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:00 compute-0 sudo[292962]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:00 compute-0 sudo[292985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:14:00 compute-0 sudo[292985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:00 compute-0 sudo[292985]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:00 compute-0 sudo[293008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:14:00 compute-0 sudo[293008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:00 compute-0 sudo[293008]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:00 compute-0 sudo[293036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:14:00 compute-0 sudo[293036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:00 compute-0 sudo[293036]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:00 compute-0 sudo[293050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:14:00 compute-0 sudo[293050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:00 compute-0 nova_compute[251992]: 2025-12-06 07:14:00.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:00 compute-0 nova_compute[251992]: 2025-12-06 07:14:00.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:14:00 compute-0 podman[293126]: 2025-12-06 07:14:00.926293816 +0000 UTC m=+0.111642987 container create 94c972f735ea7be41c28df1e6ebf1383159a9a6f1f66d7c6b0c68dcf6bac834e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:14:00 compute-0 podman[293126]: 2025-12-06 07:14:00.838052281 +0000 UTC m=+0.023401492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:14:01 compute-0 systemd[1]: Started libpod-conmon-94c972f735ea7be41c28df1e6ebf1383159a9a6f1f66d7c6b0c68dcf6bac834e.scope.
Dec 06 07:14:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:14:01 compute-0 podman[293126]: 2025-12-06 07:14:01.48829891 +0000 UTC m=+0.673648091 container init 94c972f735ea7be41c28df1e6ebf1383159a9a6f1f66d7c6b0c68dcf6bac834e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 07:14:01 compute-0 podman[293126]: 2025-12-06 07:14:01.496254931 +0000 UTC m=+0.681604102 container start 94c972f735ea7be41c28df1e6ebf1383159a9a6f1f66d7c6b0c68dcf6bac834e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 07:14:01 compute-0 podman[293126]: 2025-12-06 07:14:01.500586162 +0000 UTC m=+0.685935363 container attach 94c972f735ea7be41c28df1e6ebf1383159a9a6f1f66d7c6b0c68dcf6bac834e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:14:01 compute-0 upbeat_chandrasekhar[293142]: 167 167
Dec 06 07:14:01 compute-0 systemd[1]: libpod-94c972f735ea7be41c28df1e6ebf1383159a9a6f1f66d7c6b0c68dcf6bac834e.scope: Deactivated successfully.
Dec 06 07:14:01 compute-0 podman[293126]: 2025-12-06 07:14:01.502766282 +0000 UTC m=+0.688115463 container died 94c972f735ea7be41c28df1e6ebf1383159a9a6f1f66d7c6b0c68dcf6bac834e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 07:14:01 compute-0 ceph-mon[74339]: pgmap v1672: 305 pgs: 305 active+clean; 213 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 159 op/s
Dec 06 07:14:01 compute-0 nova_compute[251992]: 2025-12-06 07:14:01.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:01 compute-0 nova_compute[251992]: 2025-12-06 07:14:01.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:14:01 compute-0 nova_compute[251992]: 2025-12-06 07:14:01.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:14:01 compute-0 nova_compute[251992]: 2025-12-06 07:14:01.680 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:14:01 compute-0 nova_compute[251992]: 2025-12-06 07:14:01.687 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 195 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 200 op/s
Dec 06 07:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-cef3cff256ea73124b0c1316bfac1fdd4306e87613c9e4cf6cd18a632b87c476-merged.mount: Deactivated successfully.
Dec 06 07:14:01 compute-0 podman[293126]: 2025-12-06 07:14:01.743563761 +0000 UTC m=+0.928912952 container remove 94c972f735ea7be41c28df1e6ebf1383159a9a6f1f66d7c6b0c68dcf6bac834e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 07:14:01 compute-0 systemd[1]: libpod-conmon-94c972f735ea7be41c28df1e6ebf1383159a9a6f1f66d7c6b0c68dcf6bac834e.scope: Deactivated successfully.
Dec 06 07:14:01 compute-0 podman[293166]: 2025-12-06 07:14:01.888157203 +0000 UTC m=+0.028559865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:14:02 compute-0 podman[293166]: 2025-12-06 07:14:02.100684616 +0000 UTC m=+0.241087178 container create 549a0d42b7829fa38fdf5f6dc4fa0662a44a6166d02beffe8f17d72f6a8907f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 07:14:02 compute-0 systemd[1]: Started libpod-conmon-549a0d42b7829fa38fdf5f6dc4fa0662a44a6166d02beffe8f17d72f6a8907f4.scope.
Dec 06 07:14:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0abee1a549100fa554883425bba43d1cd41410bc8583a8635292f9993814d376/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0abee1a549100fa554883425bba43d1cd41410bc8583a8635292f9993814d376/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0abee1a549100fa554883425bba43d1cd41410bc8583a8635292f9993814d376/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:14:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0abee1a549100fa554883425bba43d1cd41410bc8583a8635292f9993814d376/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:14:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:02.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:02 compute-0 podman[293166]: 2025-12-06 07:14:02.279026557 +0000 UTC m=+0.419429109 container init 549a0d42b7829fa38fdf5f6dc4fa0662a44a6166d02beffe8f17d72f6a8907f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 07:14:02 compute-0 podman[293166]: 2025-12-06 07:14:02.286611997 +0000 UTC m=+0.427014549 container start 549a0d42b7829fa38fdf5f6dc4fa0662a44a6166d02beffe8f17d72f6a8907f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:14:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:02.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:02 compute-0 podman[293166]: 2025-12-06 07:14:02.444900791 +0000 UTC m=+0.585303363 container attach 549a0d42b7829fa38fdf5f6dc4fa0662a44a6166d02beffe8f17d72f6a8907f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 07:14:02 compute-0 ceph-mon[74339]: pgmap v1673: 305 pgs: 305 active+clean; 195 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 200 op/s
Dec 06 07:14:03 compute-0 laughing_jemison[293183]: {
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:     "0": [
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:         {
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             "devices": [
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "/dev/loop3"
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             ],
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             "lv_name": "ceph_lv0",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             "lv_size": "7511998464",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             "name": "ceph_lv0",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             "tags": {
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "ceph.cluster_name": "ceph",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "ceph.crush_device_class": "",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "ceph.encrypted": "0",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "ceph.osd_id": "0",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "ceph.type": "block",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:                 "ceph.vdo": "0"
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             },
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             "type": "block",
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:             "vg_name": "ceph_vg0"
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:         }
Dec 06 07:14:03 compute-0 laughing_jemison[293183]:     ]
Dec 06 07:14:03 compute-0 laughing_jemison[293183]: }
Dec 06 07:14:03 compute-0 systemd[1]: libpod-549a0d42b7829fa38fdf5f6dc4fa0662a44a6166d02beffe8f17d72f6a8907f4.scope: Deactivated successfully.
Dec 06 07:14:03 compute-0 podman[293166]: 2025-12-06 07:14:03.075302878 +0000 UTC m=+1.215705420 container died 549a0d42b7829fa38fdf5f6dc4fa0662a44a6166d02beffe8f17d72f6a8907f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:14:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0abee1a549100fa554883425bba43d1cd41410bc8583a8635292f9993814d376-merged.mount: Deactivated successfully.
Dec 06 07:14:03 compute-0 podman[293166]: 2025-12-06 07:14:03.317144325 +0000 UTC m=+1.457546877 container remove 549a0d42b7829fa38fdf5f6dc4fa0662a44a6166d02beffe8f17d72f6a8907f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:14:03 compute-0 systemd[1]: libpod-conmon-549a0d42b7829fa38fdf5f6dc4fa0662a44a6166d02beffe8f17d72f6a8907f4.scope: Deactivated successfully.
Dec 06 07:14:03 compute-0 sudo[293050]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:03 compute-0 sudo[293208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:14:03 compute-0 sudo[293208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:03 compute-0 sudo[293208]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:03 compute-0 nova_compute[251992]: 2025-12-06 07:14:03.428 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:03 compute-0 sudo[293233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:14:03 compute-0 sudo[293233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:03 compute-0 sudo[293233]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:03 compute-0 sudo[293258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:14:03 compute-0 sudo[293258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:03 compute-0 sudo[293258]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:03 compute-0 sudo[293283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:14:03 compute-0 sudo[293283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 153 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Dec 06 07:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:03.821 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:03.822 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:03.822 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:03 compute-0 podman[293347]: 2025-12-06 07:14:03.918213667 +0000 UTC m=+0.064285810 container create ddaff9d6f2cd1cf7c375fbc8294d8cf00ee14c61ea8753438aa9c459d62b73b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 07:14:03 compute-0 systemd[1]: Started libpod-conmon-ddaff9d6f2cd1cf7c375fbc8294d8cf00ee14c61ea8753438aa9c459d62b73b5.scope.
Dec 06 07:14:03 compute-0 podman[293347]: 2025-12-06 07:14:03.878166002 +0000 UTC m=+0.024238165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:14:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:14:04 compute-0 podman[293347]: 2025-12-06 07:14:04.064576618 +0000 UTC m=+0.210648781 container init ddaff9d6f2cd1cf7c375fbc8294d8cf00ee14c61ea8753438aa9c459d62b73b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:14:04 compute-0 podman[293347]: 2025-12-06 07:14:04.072013895 +0000 UTC m=+0.218086038 container start ddaff9d6f2cd1cf7c375fbc8294d8cf00ee14c61ea8753438aa9c459d62b73b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shirley, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 07:14:04 compute-0 pedantic_shirley[293363]: 167 167
Dec 06 07:14:04 compute-0 systemd[1]: libpod-ddaff9d6f2cd1cf7c375fbc8294d8cf00ee14c61ea8753438aa9c459d62b73b5.scope: Deactivated successfully.
Dec 06 07:14:04 compute-0 podman[293347]: 2025-12-06 07:14:04.101185446 +0000 UTC m=+0.247257599 container attach ddaff9d6f2cd1cf7c375fbc8294d8cf00ee14c61ea8753438aa9c459d62b73b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shirley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:14:04 compute-0 podman[293347]: 2025-12-06 07:14:04.102964156 +0000 UTC m=+0.249036299 container died ddaff9d6f2cd1cf7c375fbc8294d8cf00ee14c61ea8753438aa9c459d62b73b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shirley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 07:14:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4683b509a6389b7cd71981c5c3cdd9670cf226309925968292a20d40944d3719-merged.mount: Deactivated successfully.
Dec 06 07:14:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:04.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:04.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:04 compute-0 podman[293347]: 2025-12-06 07:14:04.37343281 +0000 UTC m=+0.519504953 container remove ddaff9d6f2cd1cf7c375fbc8294d8cf00ee14c61ea8753438aa9c459d62b73b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shirley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 07:14:04 compute-0 systemd[1]: libpod-conmon-ddaff9d6f2cd1cf7c375fbc8294d8cf00ee14c61ea8753438aa9c459d62b73b5.scope: Deactivated successfully.
Dec 06 07:14:04 compute-0 podman[293388]: 2025-12-06 07:14:04.565621346 +0000 UTC m=+0.066286065 container create 5c26b5bb1bbd49e93da7a999cd4f5b6633ebc2a7963e7569c46910afdd328271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_payne, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:14:04 compute-0 systemd[1]: Started libpod-conmon-5c26b5bb1bbd49e93da7a999cd4f5b6633ebc2a7963e7569c46910afdd328271.scope.
Dec 06 07:14:04 compute-0 podman[293388]: 2025-12-06 07:14:04.524659906 +0000 UTC m=+0.025324655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:14:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05026cbded56b989cc25cc8c9e8b7ee6691368093197449dabe4df55f37ef65f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05026cbded56b989cc25cc8c9e8b7ee6691368093197449dabe4df55f37ef65f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05026cbded56b989cc25cc8c9e8b7ee6691368093197449dabe4df55f37ef65f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:14:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05026cbded56b989cc25cc8c9e8b7ee6691368093197449dabe4df55f37ef65f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:14:04 compute-0 podman[293388]: 2025-12-06 07:14:04.653998844 +0000 UTC m=+0.154663573 container init 5c26b5bb1bbd49e93da7a999cd4f5b6633ebc2a7963e7569c46910afdd328271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_payne, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:14:04 compute-0 podman[293388]: 2025-12-06 07:14:04.659654432 +0000 UTC m=+0.160319151 container start 5c26b5bb1bbd49e93da7a999cd4f5b6633ebc2a7963e7569c46910afdd328271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:14:04 compute-0 podman[293388]: 2025-12-06 07:14:04.679284768 +0000 UTC m=+0.179949487 container attach 5c26b5bb1bbd49e93da7a999cd4f5b6633ebc2a7963e7569c46910afdd328271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_payne, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 07:14:04 compute-0 nova_compute[251992]: 2025-12-06 07:14:04.849 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "47e12df1-0113-4c0a-9272-6078816e5844" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:04 compute-0 nova_compute[251992]: 2025-12-06 07:14:04.850 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:04 compute-0 nova_compute[251992]: 2025-12-06 07:14:04.887 251996 DEBUG nova.compute.manager [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:14:04 compute-0 nova_compute[251992]: 2025-12-06 07:14:04.980 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:04 compute-0 nova_compute[251992]: 2025-12-06 07:14:04.981 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:04 compute-0 nova_compute[251992]: 2025-12-06 07:14:04.988 251996 DEBUG nova.virt.hardware [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:14:04 compute-0 nova_compute[251992]: 2025-12-06 07:14:04.989 251996 INFO nova.compute.claims [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:14:05 compute-0 nova_compute[251992]: 2025-12-06 07:14:05.118 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:05 compute-0 ceph-mon[74339]: pgmap v1674: 305 pgs: 305 active+clean; 153 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Dec 06 07:14:05 compute-0 nice_payne[293404]: {
Dec 06 07:14:05 compute-0 nice_payne[293404]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:14:05 compute-0 nice_payne[293404]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:14:05 compute-0 nice_payne[293404]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:14:05 compute-0 nice_payne[293404]:         "osd_id": 0,
Dec 06 07:14:05 compute-0 nice_payne[293404]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:14:05 compute-0 nice_payne[293404]:         "type": "bluestore"
Dec 06 07:14:05 compute-0 nice_payne[293404]:     }
Dec 06 07:14:05 compute-0 nice_payne[293404]: }
Dec 06 07:14:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:14:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1148274591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:05 compute-0 nova_compute[251992]: 2025-12-06 07:14:05.578 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:05 compute-0 nova_compute[251992]: 2025-12-06 07:14:05.584 251996 DEBUG nova.compute.provider_tree [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:14:05 compute-0 systemd[1]: libpod-5c26b5bb1bbd49e93da7a999cd4f5b6633ebc2a7963e7569c46910afdd328271.scope: Deactivated successfully.
Dec 06 07:14:05 compute-0 podman[293388]: 2025-12-06 07:14:05.586203867 +0000 UTC m=+1.086868586 container died 5c26b5bb1bbd49e93da7a999cd4f5b6633ebc2a7963e7569c46910afdd328271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:14:05 compute-0 nova_compute[251992]: 2025-12-06 07:14:05.615 251996 DEBUG nova.scheduler.client.report [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:14:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-05026cbded56b989cc25cc8c9e8b7ee6691368093197449dabe4df55f37ef65f-merged.mount: Deactivated successfully.
Dec 06 07:14:05 compute-0 nova_compute[251992]: 2025-12-06 07:14:05.635 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:05 compute-0 nova_compute[251992]: 2025-12-06 07:14:05.636 251996 DEBUG nova.compute.manager [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:14:05 compute-0 podman[293388]: 2025-12-06 07:14:05.643217703 +0000 UTC m=+1.143882422 container remove 5c26b5bb1bbd49e93da7a999cd4f5b6633ebc2a7963e7569c46910afdd328271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_payne, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 07:14:05 compute-0 systemd[1]: libpod-conmon-5c26b5bb1bbd49e93da7a999cd4f5b6633ebc2a7963e7569c46910afdd328271.scope: Deactivated successfully.
Dec 06 07:14:05 compute-0 sudo[293283]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:14:05 compute-0 nova_compute[251992]: 2025-12-06 07:14:05.688 251996 DEBUG nova.compute.manager [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:14:05 compute-0 nova_compute[251992]: 2025-12-06 07:14:05.689 251996 DEBUG nova.network.neutron [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:14:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 107 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 887 KiB/s wr, 139 op/s
Dec 06 07:14:05 compute-0 nova_compute[251992]: 2025-12-06 07:14:05.711 251996 INFO nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:14:05 compute-0 podman[293447]: 2025-12-06 07:14:05.722076227 +0000 UTC m=+0.105322421 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:14:05 compute-0 nova_compute[251992]: 2025-12-06 07:14:05.729 251996 DEBUG nova.compute.manager [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:14:05 compute-0 nova_compute[251992]: 2025-12-06 07:14:05.952 251996 DEBUG nova.policy [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bdd7994b0ebb4035a373b6560aa7dbcf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.030 251996 DEBUG nova.compute.manager [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.031 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.032 251996 INFO nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Creating image(s)
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.062 251996 DEBUG nova.storage.rbd_utils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 47e12df1-0113-4c0a-9272-6078816e5844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.092 251996 DEBUG nova.storage.rbd_utils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 47e12df1-0113-4c0a-9272-6078816e5844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.120 251996 DEBUG nova.storage.rbd_utils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 47e12df1-0113-4c0a-9272-6078816e5844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:14:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.124 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.204 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.206 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.206 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.207 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:06.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.235 251996 DEBUG nova.storage.rbd_utils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 47e12df1-0113-4c0a-9272-6078816e5844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.239 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 47e12df1-0113-4c0a-9272-6078816e5844_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:14:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3cb76f8c-b246-4e5a-ad42-7856bfba6b30 does not exist
Dec 06 07:14:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 541090b3-9a4a-4f39-aa93-c932cde5947e does not exist
Dec 06 07:14:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a7fdef93-588a-4753-a30a-40decedcfa07 does not exist
Dec 06 07:14:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:06.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:06 compute-0 sudo[293576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:14:06 compute-0 sudo[293576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:06 compute-0 sudo[293576]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:06 compute-0 sudo[293601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:14:06 compute-0 sudo[293601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:06 compute-0 sudo[293601]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Dec 06 07:14:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2158123452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1148274591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.652 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005231.65074, 73478c05-0d06-42b1-b7f8-3c0500924287 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.652 251996 INFO nova.compute.manager [-] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] VM Stopped (Lifecycle Event)
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.675 251996 DEBUG nova.compute.manager [None req-97518d01-7c87-40be-9795-4c0b6d4c34fd - - - - - -] [instance: 73478c05-0d06-42b1-b7f8-3c0500924287] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.690 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:06 compute-0 nova_compute[251992]: 2025-12-06 07:14:06.886 251996 DEBUG nova.network.neutron [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Successfully created port: 661ef154-f252-42d0-99a1-7ff83bf9ba3e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:14:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Dec 06 07:14:07 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Dec 06 07:14:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 151 op/s
Dec 06 07:14:07 compute-0 nova_compute[251992]: 2025-12-06 07:14:07.981 251996 DEBUG nova.network.neutron [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Successfully updated port: 661ef154-f252-42d0-99a1-7ff83bf9ba3e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:14:07 compute-0 nova_compute[251992]: 2025-12-06 07:14:07.996 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "refresh_cache-47e12df1-0113-4c0a-9272-6078816e5844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:14:07 compute-0 nova_compute[251992]: 2025-12-06 07:14:07.997 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquired lock "refresh_cache-47e12df1-0113-4c0a-9272-6078816e5844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:14:07 compute-0 nova_compute[251992]: 2025-12-06 07:14:07.997 251996 DEBUG nova.network.neutron [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:14:08 compute-0 nova_compute[251992]: 2025-12-06 07:14:08.190 251996 DEBUG nova.compute.manager [req-fae21298-3867-4eb9-a689-af321a5488bb req-6935c910-2211-4f1e-a88c-e922fa8bb1d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Received event network-changed-661ef154-f252-42d0-99a1-7ff83bf9ba3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:14:08 compute-0 nova_compute[251992]: 2025-12-06 07:14:08.191 251996 DEBUG nova.compute.manager [req-fae21298-3867-4eb9-a689-af321a5488bb req-6935c910-2211-4f1e-a88c-e922fa8bb1d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Refreshing instance network info cache due to event network-changed-661ef154-f252-42d0-99a1-7ff83bf9ba3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:14:08 compute-0 nova_compute[251992]: 2025-12-06 07:14:08.191 251996 DEBUG oslo_concurrency.lockutils [req-fae21298-3867-4eb9-a689-af321a5488bb req-6935c910-2211-4f1e-a88c-e922fa8bb1d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-47e12df1-0113-4c0a-9272-6078816e5844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:14:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:14:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:08.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:14:08 compute-0 nova_compute[251992]: 2025-12-06 07:14:08.352 251996 DEBUG nova.network.neutron [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:14:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:08.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:08 compute-0 nova_compute[251992]: 2025-12-06 07:14:08.430 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:08 compute-0 ceph-mon[74339]: pgmap v1675: 305 pgs: 305 active+clean; 107 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 887 KiB/s wr, 139 op/s
Dec 06 07:14:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:14:08 compute-0 ceph-mon[74339]: osdmap e247: 3 total, 3 up, 3 in
Dec 06 07:14:09 compute-0 nova_compute[251992]: 2025-12-06 07:14:09.277 251996 DEBUG nova.network.neutron [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Updating instance_info_cache with network_info: [{"id": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "address": "fa:16:3e:d2:9f:11", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap661ef154-f2", "ovs_interfaceid": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:14:09 compute-0 nova_compute[251992]: 2025-12-06 07:14:09.299 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Releasing lock "refresh_cache-47e12df1-0113-4c0a-9272-6078816e5844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:14:09 compute-0 nova_compute[251992]: 2025-12-06 07:14:09.299 251996 DEBUG nova.compute.manager [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Instance network_info: |[{"id": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "address": "fa:16:3e:d2:9f:11", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap661ef154-f2", "ovs_interfaceid": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:14:09 compute-0 nova_compute[251992]: 2025-12-06 07:14:09.300 251996 DEBUG oslo_concurrency.lockutils [req-fae21298-3867-4eb9-a689-af321a5488bb req-6935c910-2211-4f1e-a88c-e922fa8bb1d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-47e12df1-0113-4c0a-9272-6078816e5844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:14:09 compute-0 nova_compute[251992]: 2025-12-06 07:14:09.300 251996 DEBUG nova.network.neutron [req-fae21298-3867-4eb9-a689-af321a5488bb req-6935c910-2211-4f1e-a88c-e922fa8bb1d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Refreshing network info cache for port 661ef154-f252-42d0-99a1-7ff83bf9ba3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:14:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 144 op/s
Dec 06 07:14:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:14:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:10.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:14:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:10.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:11 compute-0 nova_compute[251992]: 2025-12-06 07:14:11.508 251996 DEBUG nova.network.neutron [req-fae21298-3867-4eb9-a689-af321a5488bb req-6935c910-2211-4f1e-a88c-e922fa8bb1d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Updated VIF entry in instance network info cache for port 661ef154-f252-42d0-99a1-7ff83bf9ba3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:14:11 compute-0 nova_compute[251992]: 2025-12-06 07:14:11.509 251996 DEBUG nova.network.neutron [req-fae21298-3867-4eb9-a689-af321a5488bb req-6935c910-2211-4f1e-a88c-e922fa8bb1d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Updating instance_info_cache with network_info: [{"id": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "address": "fa:16:3e:d2:9f:11", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap661ef154-f2", "ovs_interfaceid": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:14:11 compute-0 nova_compute[251992]: 2025-12-06 07:14:11.526 251996 DEBUG oslo_concurrency.lockutils [req-fae21298-3867-4eb9-a689-af321a5488bb req-6935c910-2211-4f1e-a88c-e922fa8bb1d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-47e12df1-0113-4c0a-9272-6078816e5844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:14:11 compute-0 nova_compute[251992]: 2025-12-06 07:14:11.693 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 115 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 1.1 MiB/s wr, 68 op/s
Dec 06 07:14:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:12.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:12.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:12 compute-0 podman[293632]: 2025-12-06 07:14:12.415807835 +0000 UTC m=+0.064308770 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:14:12 compute-0 podman[293633]: 2025-12-06 07:14:12.421356209 +0000 UTC m=+0.070149912 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:14:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:14:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:14:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:14:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:14:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:14:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:14:13 compute-0 nova_compute[251992]: 2025-12-06 07:14:13.431 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 115 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.1 MiB/s wr, 47 op/s
Dec 06 07:14:14 compute-0 nova_compute[251992]: 2025-12-06 07:14:14.128 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:14.127 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:14:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:14.129 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:14:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:14:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:14.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:14:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:14:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:14.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:14:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 115 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.1 MiB/s wr, 31 op/s
Dec 06 07:14:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:16.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:16.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:16 compute-0 nova_compute[251992]: 2025-12-06 07:14:16.697 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:17 compute-0 nova_compute[251992]: 2025-12-06 07:14:17.031 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 47e12df1-0113-4c0a-9272-6078816e5844_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 10.792s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:17 compute-0 ceph-mon[74339]: pgmap v1677: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 151 op/s
Dec 06 07:14:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1032650525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:14:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1032650525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:14:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 134 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 2.0 MiB/s wr, 27 op/s
Dec 06 07:14:17 compute-0 nova_compute[251992]: 2025-12-06 07:14:17.722 251996 DEBUG nova.storage.rbd_utils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] resizing rbd image 47e12df1-0113-4c0a-9272-6078816e5844_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:14:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:18.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:14:18
Dec 06 07:14:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:14:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:14:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'images', 'default.rgw.log', '.mgr', 'vms', 'backups']
Dec 06 07:14:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:14:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:18.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 134 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Dec 06 07:14:20 compute-0 nova_compute[251992]: 2025-12-06 07:14:20.149 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:20 compute-0 ceph-mon[74339]: pgmap v1678: 305 pgs: 305 active+clean; 88 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 144 op/s
Dec 06 07:14:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1544661985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:20 compute-0 ceph-mon[74339]: pgmap v1679: 305 pgs: 305 active+clean; 115 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 1.1 MiB/s wr, 68 op/s
Dec 06 07:14:20 compute-0 ceph-mon[74339]: pgmap v1680: 305 pgs: 305 active+clean; 115 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.1 MiB/s wr, 47 op/s
Dec 06 07:14:20 compute-0 ceph-mon[74339]: pgmap v1681: 305 pgs: 305 active+clean; 115 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.1 MiB/s wr, 31 op/s
Dec 06 07:14:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:14:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:20.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:14:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:20.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:20 compute-0 sudo[293731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:14:20 compute-0 sudo[293731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:20 compute-0 sudo[293731]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:20 compute-0 sudo[293756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:14:20 compute-0 sudo[293756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:20 compute-0 sudo[293756]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:21 compute-0 ceph-mon[74339]: pgmap v1682: 305 pgs: 305 active+clean; 134 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 2.0 MiB/s wr, 27 op/s
Dec 06 07:14:21 compute-0 ceph-mon[74339]: pgmap v1683: 305 pgs: 305 active+clean; 134 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Dec 06 07:14:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:21 compute-0 nova_compute[251992]: 2025-12-06 07:14:21.699 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 161 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 4.1 MiB/s wr, 61 op/s
Dec 06 07:14:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:22.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:14:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:22.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.871 251996 DEBUG nova.objects.instance [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'migration_context' on Instance uuid 47e12df1-0113-4c0a-9272-6078816e5844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.893 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.893 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Ensure instance console log exists: /var/lib/nova/instances/47e12df1-0113-4c0a-9272-6078816e5844/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.894 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.894 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.895 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.897 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Start _get_guest_xml network_info=[{"id": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "address": "fa:16:3e:d2:9f:11", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap661ef154-f2", "ovs_interfaceid": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.902 251996 WARNING nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.912 251996 DEBUG nova.virt.libvirt.host [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.912 251996 DEBUG nova.virt.libvirt.host [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.916 251996 DEBUG nova.virt.libvirt.host [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.917 251996 DEBUG nova.virt.libvirt.host [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.918 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.918 251996 DEBUG nova.virt.hardware [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.919 251996 DEBUG nova.virt.hardware [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.919 251996 DEBUG nova.virt.hardware [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.920 251996 DEBUG nova.virt.hardware [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.920 251996 DEBUG nova.virt.hardware [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.920 251996 DEBUG nova.virt.hardware [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.921 251996 DEBUG nova.virt.hardware [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.921 251996 DEBUG nova.virt.hardware [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.921 251996 DEBUG nova.virt.hardware [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.922 251996 DEBUG nova.virt.hardware [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.922 251996 DEBUG nova.virt.hardware [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:14:22 compute-0 nova_compute[251992]: 2025-12-06 07:14:22.925 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:14:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:14:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1136458103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.388 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.417 251996 DEBUG nova.storage.rbd_utils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 47e12df1-0113-4c0a-9272-6078816e5844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.420 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.444 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:14:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 161 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.2 MiB/s wr, 44 op/s
Dec 06 07:14:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:14:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2713055160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.855 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.857 251996 DEBUG nova.virt.libvirt.vif [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-248169569',display_name='tempest-ImagesTestJSON-server-248169569',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-248169569',id=67,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-ho7609e1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:14:05Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=47e12df1-0113-4c0a-9272-6078816e5844,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "address": "fa:16:3e:d2:9f:11", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap661ef154-f2", "ovs_interfaceid": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.858 251996 DEBUG nova.network.os_vif_util [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "address": "fa:16:3e:d2:9f:11", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap661ef154-f2", "ovs_interfaceid": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.859 251996 DEBUG nova.network.os_vif_util [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:9f:11,bridge_name='br-int',has_traffic_filtering=True,id=661ef154-f252-42d0-99a1-7ff83bf9ba3e,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap661ef154-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.860 251996 DEBUG nova.objects.instance [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 47e12df1-0113-4c0a-9272-6078816e5844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.892 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:14:23 compute-0 nova_compute[251992]:   <uuid>47e12df1-0113-4c0a-9272-6078816e5844</uuid>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   <name>instance-00000043</name>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <nova:name>tempest-ImagesTestJSON-server-248169569</nova:name>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:14:22</nova:creationTime>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <nova:user uuid="bdd7994b0ebb4035a373b6560aa7dbcf">tempest-ImagesTestJSON-134159412-project-member</nova:user>
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <nova:project uuid="af7365adc05f4624a08a71cd5a77ada6">tempest-ImagesTestJSON-134159412</nova:project>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <nova:port uuid="661ef154-f252-42d0-99a1-7ff83bf9ba3e">
Dec 06 07:14:23 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <system>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <entry name="serial">47e12df1-0113-4c0a-9272-6078816e5844</entry>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <entry name="uuid">47e12df1-0113-4c0a-9272-6078816e5844</entry>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     </system>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   <os>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   </os>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   <features>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   </features>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/47e12df1-0113-4c0a-9272-6078816e5844_disk">
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       </source>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/47e12df1-0113-4c0a-9272-6078816e5844_disk.config">
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       </source>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:14:23 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:d2:9f:11"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <target dev="tap661ef154-f2"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/47e12df1-0113-4c0a-9272-6078816e5844/console.log" append="off"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <video>
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     </video>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:14:23 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:14:23 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:14:23 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:14:23 compute-0 nova_compute[251992]: </domain>
Dec 06 07:14:23 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.894 251996 DEBUG nova.compute.manager [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Preparing to wait for external event network-vif-plugged-661ef154-f252-42d0-99a1-7ff83bf9ba3e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.894 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "47e12df1-0113-4c0a-9272-6078816e5844-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.894 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.894 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.895 251996 DEBUG nova.virt.libvirt.vif [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-248169569',display_name='tempest-ImagesTestJSON-server-248169569',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-248169569',id=67,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-ho7609e1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:14:05Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=47e12df1-0113-4c0a-9272-6078816e5844,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "address": "fa:16:3e:d2:9f:11", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap661ef154-f2", "ovs_interfaceid": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.895 251996 DEBUG nova.network.os_vif_util [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "address": "fa:16:3e:d2:9f:11", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap661ef154-f2", "ovs_interfaceid": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.896 251996 DEBUG nova.network.os_vif_util [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:9f:11,bridge_name='br-int',has_traffic_filtering=True,id=661ef154-f252-42d0-99a1-7ff83bf9ba3e,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap661ef154-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.896 251996 DEBUG os_vif [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:9f:11,bridge_name='br-int',has_traffic_filtering=True,id=661ef154-f252-42d0-99a1-7ff83bf9ba3e,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap661ef154-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.897 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.897 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.898 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.901 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.902 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap661ef154-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.902 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap661ef154-f2, col_values=(('external_ids', {'iface-id': '661ef154-f252-42d0-99a1-7ff83bf9ba3e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:9f:11', 'vm-uuid': '47e12df1-0113-4c0a-9272-6078816e5844'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.903 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:23 compute-0 NetworkManager[48965]: <info>  [1765005263.9048] manager: (tap661ef154-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.906 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.911 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:23 compute-0 nova_compute[251992]: 2025-12-06 07:14:23.913 251996 INFO os_vif [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:9f:11,bridge_name='br-int',has_traffic_filtering=True,id=661ef154-f252-42d0-99a1-7ff83bf9ba3e,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap661ef154-f2')
Dec 06 07:14:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:24.131 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:24.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:24.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:24 compute-0 ceph-mon[74339]: pgmap v1684: 305 pgs: 305 active+clean; 161 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 4.1 MiB/s wr, 61 op/s
Dec 06 07:14:24 compute-0 nova_compute[251992]: 2025-12-06 07:14:24.459 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:14:24 compute-0 nova_compute[251992]: 2025-12-06 07:14:24.465 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:14:24 compute-0 nova_compute[251992]: 2025-12-06 07:14:24.466 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No VIF found with MAC fa:16:3e:d2:9f:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:14:24 compute-0 nova_compute[251992]: 2025-12-06 07:14:24.467 251996 INFO nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Using config drive
Dec 06 07:14:24 compute-0 nova_compute[251992]: 2025-12-06 07:14:24.510 251996 DEBUG nova.storage.rbd_utils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 47e12df1-0113-4c0a-9272-6078816e5844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:14:24 compute-0 nova_compute[251992]: 2025-12-06 07:14:24.948 251996 INFO nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Creating config drive at /var/lib/nova/instances/47e12df1-0113-4c0a-9272-6078816e5844/disk.config
Dec 06 07:14:24 compute-0 nova_compute[251992]: 2025-12-06 07:14:24.953 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/47e12df1-0113-4c0a-9272-6078816e5844/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xnnwjvk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:25 compute-0 nova_compute[251992]: 2025-12-06 07:14:25.086 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/47e12df1-0113-4c0a-9272-6078816e5844/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_xnnwjvk" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:25 compute-0 nova_compute[251992]: 2025-12-06 07:14:25.120 251996 DEBUG nova.storage.rbd_utils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] rbd image 47e12df1-0113-4c0a-9272-6078816e5844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:14:25 compute-0 nova_compute[251992]: 2025-12-06 07:14:25.125 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/47e12df1-0113-4c0a-9272-6078816e5844/disk.config 47e12df1-0113-4c0a-9272-6078816e5844_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1136458103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:25 compute-0 ceph-mon[74339]: pgmap v1685: 305 pgs: 305 active+clean; 161 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.2 MiB/s wr, 44 op/s
Dec 06 07:14:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2713055160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 190 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 4.3 MiB/s wr, 64 op/s
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00403783966126931 of space, bias 1.0, pg target 1.211351898380793 quantized to 32 (current 32)
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:14:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:14:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:26.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:26.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.545 251996 DEBUG oslo_concurrency.processutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/47e12df1-0113-4c0a-9272-6078816e5844/disk.config 47e12df1-0113-4c0a-9272-6078816e5844_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.546 251996 INFO nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Deleting local config drive /var/lib/nova/instances/47e12df1-0113-4c0a-9272-6078816e5844/disk.config because it was imported into RBD.
Dec 06 07:14:26 compute-0 kernel: tap661ef154-f2: entered promiscuous mode
Dec 06 07:14:26 compute-0 NetworkManager[48965]: <info>  [1765005266.6006] manager: (tap661ef154-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Dec 06 07:14:26 compute-0 ovn_controller[147168]: 2025-12-06T07:14:26Z|00180|binding|INFO|Claiming lport 661ef154-f252-42d0-99a1-7ff83bf9ba3e for this chassis.
Dec 06 07:14:26 compute-0 ovn_controller[147168]: 2025-12-06T07:14:26Z|00181|binding|INFO|661ef154-f252-42d0-99a1-7ff83bf9ba3e: Claiming fa:16:3e:d2:9f:11 10.100.0.10
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.599 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.606 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:9f:11 10.100.0.10'], port_security=['fa:16:3e:d2:9f:11 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '47e12df1-0113-4c0a-9272-6078816e5844', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=661ef154-f252-42d0-99a1-7ff83bf9ba3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.607 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 661ef154-f252-42d0-99a1-7ff83bf9ba3e in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad bound to our chassis
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.609 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.617 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:26 compute-0 ovn_controller[147168]: 2025-12-06T07:14:26Z|00182|binding|INFO|Setting lport 661ef154-f252-42d0-99a1-7ff83bf9ba3e ovn-installed in OVS
Dec 06 07:14:26 compute-0 ovn_controller[147168]: 2025-12-06T07:14:26Z|00183|binding|INFO|Setting lport 661ef154-f252-42d0-99a1-7ff83bf9ba3e up in Southbound
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.619 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.621 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9962d4a7-009b-4464-9414-d3df802f4ec9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.623 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b0835d7-81 in ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.626 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b0835d7-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.626 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c58b521d-ec54-4976-983c-b7414ffd75ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.629 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6618a667-002e-482e-88db-0715a6dc991b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 systemd-machined[212986]: New machine qemu-28-instance-00000043.
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.644 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[8d12d080-4f0c-4aac-af3a-cd32f01b9b5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-00000043.
Dec 06 07:14:26 compute-0 systemd-udevd[293940]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.668 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[62f72111-d96a-4994-96ee-f3b177963135]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 NetworkManager[48965]: <info>  [1765005266.6729] device (tap661ef154-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:14:26 compute-0 NetworkManager[48965]: <info>  [1765005266.6743] device (tap661ef154-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.700 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[47718d66-f3ea-4055-bdc6-970604ad5c29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 systemd-udevd[293943]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.713 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[60785294-a6f8-4e92-bf57-7a2465726bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 NetworkManager[48965]: <info>  [1765005266.7156] manager: (tap2b0835d7-80): new Veth device (/org/freedesktop/NetworkManager/Devices/98)
Dec 06 07:14:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/838833154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.751 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[508a69e6-5629-4564-a456-8aff78346a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.755 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[57ad12e7-0402-4652-abb2-68a404b4ea65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 NetworkManager[48965]: <info>  [1765005266.7816] device (tap2b0835d7-80): carrier: link connected
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.794 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[80791afa-899b-465c-abe8-b736d9601cfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.814 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6114bf85-a7cb-47ad-8adb-95f3b8eec8c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b0835d7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:4e:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553936, 'reachable_time': 26736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293970, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.824 251996 DEBUG nova.compute.manager [req-84e65bcc-867c-42ee-89a3-6d82e777be07 req-a7398330-8547-47ad-9324-96fc5b063e34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Received event network-vif-plugged-661ef154-f252-42d0-99a1-7ff83bf9ba3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.825 251996 DEBUG oslo_concurrency.lockutils [req-84e65bcc-867c-42ee-89a3-6d82e777be07 req-a7398330-8547-47ad-9324-96fc5b063e34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "47e12df1-0113-4c0a-9272-6078816e5844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.825 251996 DEBUG oslo_concurrency.lockutils [req-84e65bcc-867c-42ee-89a3-6d82e777be07 req-a7398330-8547-47ad-9324-96fc5b063e34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.825 251996 DEBUG oslo_concurrency.lockutils [req-84e65bcc-867c-42ee-89a3-6d82e777be07 req-a7398330-8547-47ad-9324-96fc5b063e34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.825 251996 DEBUG nova.compute.manager [req-84e65bcc-867c-42ee-89a3-6d82e777be07 req-a7398330-8547-47ad-9324-96fc5b063e34 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Processing event network-vif-plugged-661ef154-f252-42d0-99a1-7ff83bf9ba3e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.830 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9b40d2f2-8b07-4ff3-80f0-9c780bdf6904]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:4e19'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 553936, 'tstamp': 553936}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293971, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.850 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c6ae49-045f-4fd6-9057-5d9b7991c80f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b0835d7-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:4e:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553936, 'reachable_time': 26736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293972, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.879 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dc65faf6-50bf-4b61-a9b7-7a7fde2b46d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.925 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4d7888-694b-4a12-b0a5-11e01dd1c8fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.927 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b0835d7-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.927 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.927 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b0835d7-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:26 compute-0 NetworkManager[48965]: <info>  [1765005266.9306] manager: (tap2b0835d7-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.930 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:26 compute-0 kernel: tap2b0835d7-80: entered promiscuous mode
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.933 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b0835d7-80, col_values=(('external_ids', {'iface-id': '87f2c5b0-3684-4269-9fbf-5a4dfd5a8759'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:26 compute-0 ovn_controller[147168]: 2025-12-06T07:14:26Z|00184|binding|INFO|Releasing lport 87f2c5b0-3684-4269-9fbf-5a4dfd5a8759 from this chassis (sb_readonly=0)
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.934 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.938 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.939 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[11dbc4d1-ac3a-4381-b455-b47fe255ca2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.939 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.pid.haproxy
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 2b0835d7-87e4-46cc-8a94-e4e042bd4bad
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:14:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:26.940 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'env', 'PROCESS_TAG=haproxy-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b0835d7-87e4-46cc-8a94-e4e042bd4bad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:14:26 compute-0 nova_compute[251992]: 2025-12-06 07:14:26.952 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:27 compute-0 podman[294004]: 2025-12-06 07:14:27.324178129 +0000 UTC m=+0.058170179 container create 684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:14:27 compute-0 systemd[1]: Started libpod-conmon-684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62.scope.
Dec 06 07:14:27 compute-0 podman[294004]: 2025-12-06 07:14:27.291364996 +0000 UTC m=+0.025357076 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:14:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/642497500be327368769b5334625822ec31c50b78aeb8be2c4678e0aae13f5b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:14:27 compute-0 podman[294004]: 2025-12-06 07:14:27.412451754 +0000 UTC m=+0.146443824 container init 684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:14:27 compute-0 podman[294004]: 2025-12-06 07:14:27.418160773 +0000 UTC m=+0.152152823 container start 684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:14:27 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[294020]: [NOTICE]   (294040) : New worker (294051) forked
Dec 06 07:14:27 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[294020]: [NOTICE]   (294040) : Loading success.
Dec 06 07:14:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 206 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 4.7 MiB/s wr, 93 op/s
Dec 06 07:14:27 compute-0 ceph-mon[74339]: pgmap v1686: 305 pgs: 305 active+clean; 190 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 4.3 MiB/s wr, 64 op/s
Dec 06 07:14:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2970115831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.104 251996 DEBUG nova.compute.manager [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.105 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005268.1036599, 47e12df1-0113-4c0a-9272-6078816e5844 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.105 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] VM Started (Lifecycle Event)
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.110 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.115 251996 INFO nova.virt.libvirt.driver [-] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Instance spawned successfully.
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.116 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.131 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.140 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.144 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.144 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.145 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.145 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.146 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.147 251996 DEBUG nova.virt.libvirt.driver [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.167 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.167 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005268.1039517, 47e12df1-0113-4c0a-9272-6078816e5844 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.168 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] VM Paused (Lifecycle Event)
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.195 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.198 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005268.1089463, 47e12df1-0113-4c0a-9272-6078816e5844 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.198 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] VM Resumed (Lifecycle Event)
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.217 251996 INFO nova.compute.manager [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Took 22.19 seconds to spawn the instance on the hypervisor.
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.218 251996 DEBUG nova.compute.manager [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.228 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.230 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:14:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:28.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.260 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.288 251996 INFO nova.compute.manager [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Took 23.34 seconds to build instance.
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.307 251996 DEBUG oslo_concurrency.lockutils [None req-31e7e63a-d104-40b0-89fe-ebabc0d28f7c bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:28.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.435 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.904 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.920 251996 DEBUG nova.compute.manager [req-b68782dd-f3cc-40eb-a4b4-9c06052e4895 req-4543cf23-92fc-4ddc-8ea7-b303476b1327 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Received event network-vif-plugged-661ef154-f252-42d0-99a1-7ff83bf9ba3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.920 251996 DEBUG oslo_concurrency.lockutils [req-b68782dd-f3cc-40eb-a4b4-9c06052e4895 req-4543cf23-92fc-4ddc-8ea7-b303476b1327 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "47e12df1-0113-4c0a-9272-6078816e5844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.921 251996 DEBUG oslo_concurrency.lockutils [req-b68782dd-f3cc-40eb-a4b4-9c06052e4895 req-4543cf23-92fc-4ddc-8ea7-b303476b1327 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.921 251996 DEBUG oslo_concurrency.lockutils [req-b68782dd-f3cc-40eb-a4b4-9c06052e4895 req-4543cf23-92fc-4ddc-8ea7-b303476b1327 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.921 251996 DEBUG nova.compute.manager [req-b68782dd-f3cc-40eb-a4b4-9c06052e4895 req-4543cf23-92fc-4ddc-8ea7-b303476b1327 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] No waiting events found dispatching network-vif-plugged-661ef154-f252-42d0-99a1-7ff83bf9ba3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:14:28 compute-0 nova_compute[251992]: 2025-12-06 07:14:28.921 251996 WARNING nova.compute.manager [req-b68782dd-f3cc-40eb-a4b4-9c06052e4895 req-4543cf23-92fc-4ddc-8ea7-b303476b1327 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Received unexpected event network-vif-plugged-661ef154-f252-42d0-99a1-7ff83bf9ba3e for instance with vm_state active and task_state None.
Dec 06 07:14:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 206 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Dec 06 07:14:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:30.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:30.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:30 compute-0 ceph-mon[74339]: pgmap v1687: 305 pgs: 305 active+clean; 206 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 4.7 MiB/s wr, 93 op/s
Dec 06 07:14:30 compute-0 nova_compute[251992]: 2025-12-06 07:14:30.935 251996 DEBUG nova.compute.manager [None req-4e884c80-ed53-478c-abda-e90d215fe197 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:14:30 compute-0 nova_compute[251992]: 2025-12-06 07:14:30.991 251996 INFO nova.compute.manager [None req-4e884c80-ed53-478c-abda-e90d215fe197 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] instance snapshotting
Dec 06 07:14:31 compute-0 nova_compute[251992]: 2025-12-06 07:14:31.324 251996 INFO nova.virt.libvirt.driver [None req-4e884c80-ed53-478c-abda-e90d215fe197 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Beginning live snapshot process
Dec 06 07:14:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:31 compute-0 nova_compute[251992]: 2025-12-06 07:14:31.579 251996 DEBUG nova.virt.libvirt.imagebackend [None req-4e884c80-ed53-478c-abda-e90d215fe197 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:14:31 compute-0 ceph-mon[74339]: pgmap v1688: 305 pgs: 305 active+clean; 206 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Dec 06 07:14:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 213 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 184 op/s
Dec 06 07:14:31 compute-0 nova_compute[251992]: 2025-12-06 07:14:31.815 251996 DEBUG nova.storage.rbd_utils [None req-4e884c80-ed53-478c-abda-e90d215fe197 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] creating snapshot(c4c867415d2948509112175fd6bc8ecd) on rbd image(47e12df1-0113-4c0a-9272-6078816e5844_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:14:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:32.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:32.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Dec 06 07:14:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Dec 06 07:14:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Dec 06 07:14:32 compute-0 nova_compute[251992]: 2025-12-06 07:14:32.790 251996 DEBUG nova.storage.rbd_utils [None req-4e884c80-ed53-478c-abda-e90d215fe197 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] cloning vms/47e12df1-0113-4c0a-9272-6078816e5844_disk@c4c867415d2948509112175fd6bc8ecd to images/0439f43d-0ed1-4c0e-8789-8b023308de16 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:14:33 compute-0 nova_compute[251992]: 2025-12-06 07:14:33.095 251996 DEBUG nova.storage.rbd_utils [None req-4e884c80-ed53-478c-abda-e90d215fe197 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] flattening images/0439f43d-0ed1-4c0e-8789-8b023308de16 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:14:33 compute-0 nova_compute[251992]: 2025-12-06 07:14:33.414 251996 DEBUG nova.storage.rbd_utils [None req-4e884c80-ed53-478c-abda-e90d215fe197 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] removing snapshot(c4c867415d2948509112175fd6bc8ecd) on rbd image(47e12df1-0113-4c0a-9272-6078816e5844_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:14:33 compute-0 nova_compute[251992]: 2025-12-06 07:14:33.437 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 213 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.9 MiB/s wr, 175 op/s
Dec 06 07:14:33 compute-0 nova_compute[251992]: 2025-12-06 07:14:33.906 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:34 compute-0 ceph-mon[74339]: pgmap v1689: 305 pgs: 305 active+clean; 213 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 184 op/s
Dec 06 07:14:34 compute-0 ceph-mon[74339]: osdmap e248: 3 total, 3 up, 3 in
Dec 06 07:14:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:34.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:14:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:34.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:14:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Dec 06 07:14:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Dec 06 07:14:35 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Dec 06 07:14:35 compute-0 ceph-mon[74339]: pgmap v1691: 305 pgs: 305 active+clean; 213 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.9 MiB/s wr, 175 op/s
Dec 06 07:14:35 compute-0 nova_compute[251992]: 2025-12-06 07:14:35.340 251996 DEBUG nova.storage.rbd_utils [None req-4e884c80-ed53-478c-abda-e90d215fe197 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] creating snapshot(snap) on rbd image(0439f43d-0ed1-4c0e-8789-8b023308de16) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:14:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 231 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 973 KiB/s wr, 209 op/s
Dec 06 07:14:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:36.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Dec 06 07:14:36 compute-0 ceph-mon[74339]: osdmap e249: 3 total, 3 up, 3 in
Dec 06 07:14:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Dec 06 07:14:36 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Dec 06 07:14:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:36.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:36 compute-0 podman[294223]: 2025-12-06 07:14:36.433886006 +0000 UTC m=+0.085985643 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:14:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver [None req-4e884c80-ed53-478c-abda-e90d215fe197 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 0439f43d-0ed1-4c0e-8789-8b023308de16 could not be found.
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     image = self._client.call(
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 0439f43d-0ed1-4c0e-8789-8b023308de16
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver 
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver 
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     image = self._client.call(
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 0439f43d-0ed1-4c0e-8789-8b023308de16 could not be found.
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.586 251996 ERROR nova.virt.libvirt.driver 
Dec 06 07:14:36 compute-0 nova_compute[251992]: 2025-12-06 07:14:36.900 251996 DEBUG nova.storage.rbd_utils [None req-4e884c80-ed53-478c-abda-e90d215fe197 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] removing snapshot(snap) on rbd image(0439f43d-0ed1-4c0e-8789-8b023308de16) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:14:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Dec 06 07:14:37 compute-0 ceph-mon[74339]: pgmap v1693: 305 pgs: 305 active+clean; 231 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 973 KiB/s wr, 209 op/s
Dec 06 07:14:37 compute-0 ceph-mon[74339]: osdmap e250: 3 total, 3 up, 3 in
Dec 06 07:14:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Dec 06 07:14:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Dec 06 07:14:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 216 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 4.3 MiB/s wr, 308 op/s
Dec 06 07:14:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:38.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:38.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:38 compute-0 nova_compute[251992]: 2025-12-06 07:14:38.440 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:38 compute-0 ceph-mon[74339]: osdmap e251: 3 total, 3 up, 3 in
Dec 06 07:14:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3488248931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:38 compute-0 nova_compute[251992]: 2025-12-06 07:14:38.909 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:38 compute-0 nova_compute[251992]: 2025-12-06 07:14:38.922 251996 WARNING nova.compute.manager [None req-4e884c80-ed53-478c-abda-e90d215fe197 bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Image not found during snapshot: nova.exception.ImageNotFound: Image 0439f43d-0ed1-4c0e-8789-8b023308de16 could not be found.
Dec 06 07:14:39 compute-0 ceph-mon[74339]: pgmap v1696: 305 pgs: 305 active+clean; 216 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 4.3 MiB/s wr, 308 op/s
Dec 06 07:14:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 216 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 257 op/s
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.068 251996 DEBUG oslo_concurrency.lockutils [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "47e12df1-0113-4c0a-9272-6078816e5844" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.068 251996 DEBUG oslo_concurrency.lockutils [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.069 251996 DEBUG oslo_concurrency.lockutils [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "47e12df1-0113-4c0a-9272-6078816e5844-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.069 251996 DEBUG oslo_concurrency.lockutils [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.069 251996 DEBUG oslo_concurrency.lockutils [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.070 251996 INFO nova.compute.manager [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Terminating instance
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.077 251996 DEBUG nova.compute.manager [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:14:40 compute-0 kernel: tap661ef154-f2 (unregistering): left promiscuous mode
Dec 06 07:14:40 compute-0 NetworkManager[48965]: <info>  [1765005280.1536] device (tap661ef154-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.163 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:40 compute-0 ovn_controller[147168]: 2025-12-06T07:14:40Z|00185|binding|INFO|Releasing lport 661ef154-f252-42d0-99a1-7ff83bf9ba3e from this chassis (sb_readonly=0)
Dec 06 07:14:40 compute-0 ovn_controller[147168]: 2025-12-06T07:14:40Z|00186|binding|INFO|Setting lport 661ef154-f252-42d0-99a1-7ff83bf9ba3e down in Southbound
Dec 06 07:14:40 compute-0 ovn_controller[147168]: 2025-12-06T07:14:40Z|00187|binding|INFO|Removing iface tap661ef154-f2 ovn-installed in OVS
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.165 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:40.170 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:9f:11 10.100.0.10'], port_security=['fa:16:3e:d2:9f:11 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '47e12df1-0113-4c0a-9272-6078816e5844', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'af7365adc05f4624a08a71cd5a77ada6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b536f2c5-b22f-47bf-a47f-57e098f673a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e40662-9f9d-450b-8c39-94d50ba422c6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=661ef154-f252-42d0-99a1-7ff83bf9ba3e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:14:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:40.173 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 661ef154-f252-42d0-99a1-7ff83bf9ba3e in datapath 2b0835d7-87e4-46cc-8a94-e4e042bd4bad unbound from our chassis
Dec 06 07:14:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:40.174 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b0835d7-87e4-46cc-8a94-e4e042bd4bad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:14:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:40.176 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d21426ce-6e67-4e12-b874-440236f3da01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:40.177 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad namespace which is not needed anymore
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.187 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:40 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000043.scope: Deactivated successfully.
Dec 06 07:14:40 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000043.scope: Consumed 13.164s CPU time.
Dec 06 07:14:40 compute-0 systemd-machined[212986]: Machine qemu-28-instance-00000043 terminated.
Dec 06 07:14:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:40.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.335 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.350 251996 INFO nova.virt.libvirt.driver [-] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Instance destroyed successfully.
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.351 251996 DEBUG nova.objects.instance [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lazy-loading 'resources' on Instance uuid 47e12df1-0113-4c0a-9272-6078816e5844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.369 251996 DEBUG nova.virt.libvirt.vif [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-248169569',display_name='tempest-ImagesTestJSON-server-248169569',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-248169569',id=67,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='af7365adc05f4624a08a71cd5a77ada6',ramdisk_id='',reservation_id='r-ho7609e1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-134159412',owner_user_name='tempest-ImagesTestJSON-134159412-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:14:38Z,user_data=None,user_id='bdd7994b0ebb4035a373b6560aa7dbcf',uuid=47e12df1-0113-4c0a-9272-6078816e5844,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "address": "fa:16:3e:d2:9f:11", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap661ef154-f2", "ovs_interfaceid": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.370 251996 DEBUG nova.network.os_vif_util [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converting VIF {"id": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "address": "fa:16:3e:d2:9f:11", "network": {"id": "2b0835d7-87e4-46cc-8a94-e4e042bd4bad", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1132836552-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "af7365adc05f4624a08a71cd5a77ada6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap661ef154-f2", "ovs_interfaceid": "661ef154-f252-42d0-99a1-7ff83bf9ba3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.370 251996 DEBUG nova.network.os_vif_util [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:9f:11,bridge_name='br-int',has_traffic_filtering=True,id=661ef154-f252-42d0-99a1-7ff83bf9ba3e,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap661ef154-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.371 251996 DEBUG os_vif [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:9f:11,bridge_name='br-int',has_traffic_filtering=True,id=661ef154-f252-42d0-99a1-7ff83bf9ba3e,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap661ef154-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.372 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.373 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap661ef154-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.375 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.377 251996 INFO os_vif [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:9f:11,bridge_name='br-int',has_traffic_filtering=True,id=661ef154-f252-42d0-99a1-7ff83bf9ba3e,network=Network(2b0835d7-87e4-46cc-8a94-e4e042bd4bad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap661ef154-f2')
Dec 06 07:14:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:40.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:40 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[294020]: [NOTICE]   (294040) : haproxy version is 2.8.14-c23fe91
Dec 06 07:14:40 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[294020]: [NOTICE]   (294040) : path to executable is /usr/sbin/haproxy
Dec 06 07:14:40 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[294020]: [WARNING]  (294040) : Exiting Master process...
Dec 06 07:14:40 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[294020]: [ALERT]    (294040) : Current worker (294051) exited with code 143 (Terminated)
Dec 06 07:14:40 compute-0 neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad[294020]: [WARNING]  (294040) : All workers exited. Exiting... (0)
Dec 06 07:14:40 compute-0 systemd[1]: libpod-684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62.scope: Deactivated successfully.
Dec 06 07:14:40 compute-0 podman[294312]: 2025-12-06 07:14:40.44505323 +0000 UTC m=+0.182104966 container died 684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.460 251996 DEBUG nova.compute.manager [req-8bb0878b-0262-4c7e-b5a6-39ed38ff885d req-3e582a86-42d3-4a29-be7c-5c9fc0b13972 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Received event network-vif-unplugged-661ef154-f252-42d0-99a1-7ff83bf9ba3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.461 251996 DEBUG oslo_concurrency.lockutils [req-8bb0878b-0262-4c7e-b5a6-39ed38ff885d req-3e582a86-42d3-4a29-be7c-5c9fc0b13972 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "47e12df1-0113-4c0a-9272-6078816e5844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.461 251996 DEBUG oslo_concurrency.lockutils [req-8bb0878b-0262-4c7e-b5a6-39ed38ff885d req-3e582a86-42d3-4a29-be7c-5c9fc0b13972 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.461 251996 DEBUG oslo_concurrency.lockutils [req-8bb0878b-0262-4c7e-b5a6-39ed38ff885d req-3e582a86-42d3-4a29-be7c-5c9fc0b13972 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.462 251996 DEBUG nova.compute.manager [req-8bb0878b-0262-4c7e-b5a6-39ed38ff885d req-3e582a86-42d3-4a29-be7c-5c9fc0b13972 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] No waiting events found dispatching network-vif-unplugged-661ef154-f252-42d0-99a1-7ff83bf9ba3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:14:40 compute-0 nova_compute[251992]: 2025-12-06 07:14:40.462 251996 DEBUG nova.compute.manager [req-8bb0878b-0262-4c7e-b5a6-39ed38ff885d req-3e582a86-42d3-4a29-be7c-5c9fc0b13972 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Received event network-vif-unplugged-661ef154-f252-42d0-99a1-7ff83bf9ba3e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:14:40 compute-0 sudo[294368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:14:40 compute-0 sudo[294368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:40 compute-0 sudo[294368]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:40 compute-0 sudo[294393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:14:40 compute-0 sudo[294393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:14:40 compute-0 sudo[294393]: pam_unix(sudo:session): session closed for user root
Dec 06 07:14:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 115 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.3 MiB/s wr, 250 op/s
Dec 06 07:14:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:42.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:42.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62-userdata-shm.mount: Deactivated successfully.
Dec 06 07:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-642497500be327368769b5334625822ec31c50b78aeb8be2c4678e0aae13f5b6-merged.mount: Deactivated successfully.
Dec 06 07:14:42 compute-0 ceph-mon[74339]: pgmap v1697: 305 pgs: 305 active+clean; 216 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 3.6 MiB/s wr, 257 op/s
Dec 06 07:14:42 compute-0 podman[294312]: 2025-12-06 07:14:42.557556247 +0000 UTC m=+2.294607973 container cleanup 684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:14:42 compute-0 systemd[1]: libpod-conmon-684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62.scope: Deactivated successfully.
Dec 06 07:14:42 compute-0 nova_compute[251992]: 2025-12-06 07:14:42.578 251996 DEBUG nova.compute.manager [req-86820b2a-a6e2-4376-890c-e734f6c10702 req-88effac3-afcb-477a-8090-b0bda9ccb965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Received event network-vif-plugged-661ef154-f252-42d0-99a1-7ff83bf9ba3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:14:42 compute-0 nova_compute[251992]: 2025-12-06 07:14:42.579 251996 DEBUG oslo_concurrency.lockutils [req-86820b2a-a6e2-4376-890c-e734f6c10702 req-88effac3-afcb-477a-8090-b0bda9ccb965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "47e12df1-0113-4c0a-9272-6078816e5844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:42 compute-0 nova_compute[251992]: 2025-12-06 07:14:42.579 251996 DEBUG oslo_concurrency.lockutils [req-86820b2a-a6e2-4376-890c-e734f6c10702 req-88effac3-afcb-477a-8090-b0bda9ccb965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:42 compute-0 nova_compute[251992]: 2025-12-06 07:14:42.582 251996 DEBUG oslo_concurrency.lockutils [req-86820b2a-a6e2-4376-890c-e734f6c10702 req-88effac3-afcb-477a-8090-b0bda9ccb965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:42 compute-0 nova_compute[251992]: 2025-12-06 07:14:42.582 251996 DEBUG nova.compute.manager [req-86820b2a-a6e2-4376-890c-e734f6c10702 req-88effac3-afcb-477a-8090-b0bda9ccb965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] No waiting events found dispatching network-vif-plugged-661ef154-f252-42d0-99a1-7ff83bf9ba3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:14:42 compute-0 nova_compute[251992]: 2025-12-06 07:14:42.582 251996 WARNING nova.compute.manager [req-86820b2a-a6e2-4376-890c-e734f6c10702 req-88effac3-afcb-477a-8090-b0bda9ccb965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Received unexpected event network-vif-plugged-661ef154-f252-42d0-99a1-7ff83bf9ba3e for instance with vm_state active and task_state deleting.
Dec 06 07:14:42 compute-0 podman[294422]: 2025-12-06 07:14:42.617551986 +0000 UTC m=+0.120171245 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 07:14:42 compute-0 podman[294420]: 2025-12-06 07:14:42.637514721 +0000 UTC m=+0.144438429 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:14:42 compute-0 podman[294443]: 2025-12-06 07:14:42.792265905 +0000 UTC m=+0.206934416 container remove 684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 07:14:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:42.798 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff343b2-256d-44b7-a376-d7809b16b97b]: (4, ('Sat Dec  6 07:14:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad (684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62)\n684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62\nSat Dec  6 07:14:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad (684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62)\n684749d8542f2f1a3ebff42bdbb61320b362505e2b982f55803e49a311f25d62\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:42.799 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[78f9cbaf-3a3e-43fc-a817-3a5a13cbac0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:42.800 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b0835d7-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:14:42 compute-0 nova_compute[251992]: 2025-12-06 07:14:42.803 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:42 compute-0 kernel: tap2b0835d7-80: left promiscuous mode
Dec 06 07:14:42 compute-0 nova_compute[251992]: 2025-12-06 07:14:42.817 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:42.820 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9efff956-e38e-40af-b0ab-0b75aa720645]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:42.837 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[15f29c4d-72c0-46a3-a8f4-1e3fb4be897e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:42.838 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2af3d0-dda2-46f7-b820-97c95212894c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:42.853 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[877ca45c-a847-4216-9951-1039a5c1a173]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 553928, 'reachable_time': 38356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294477, 'error': None, 'target': 'ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:42.857 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b0835d7-87e4-46cc-8a94-e4e042bd4bad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:14:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d2b0835d7\x2d87e4\x2d46cc\x2d8a94\x2de4e042bd4bad.mount: Deactivated successfully.
Dec 06 07:14:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:42.858 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[0c38e638-3849-4179-af72-ec6b4b8b1bd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:14:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:14:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:14:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:14:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:14:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:14:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:14:43 compute-0 nova_compute[251992]: 2025-12-06 07:14:43.442 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 115 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Dec 06 07:14:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:44.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:44.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:44 compute-0 ceph-mon[74339]: pgmap v1698: 305 pgs: 305 active+clean; 115 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.3 MiB/s wr, 250 op/s
Dec 06 07:14:44 compute-0 nova_compute[251992]: 2025-12-06 07:14:44.695 251996 INFO nova.virt.libvirt.driver [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Deleting instance files /var/lib/nova/instances/47e12df1-0113-4c0a-9272-6078816e5844_del
Dec 06 07:14:44 compute-0 nova_compute[251992]: 2025-12-06 07:14:44.696 251996 INFO nova.virt.libvirt.driver [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Deletion of /var/lib/nova/instances/47e12df1-0113-4c0a-9272-6078816e5844_del complete
Dec 06 07:14:44 compute-0 nova_compute[251992]: 2025-12-06 07:14:44.755 251996 INFO nova.compute.manager [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Took 4.68 seconds to destroy the instance on the hypervisor.
Dec 06 07:14:44 compute-0 nova_compute[251992]: 2025-12-06 07:14:44.756 251996 DEBUG oslo.service.loopingcall [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:14:44 compute-0 nova_compute[251992]: 2025-12-06 07:14:44.756 251996 DEBUG nova.compute.manager [-] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:14:44 compute-0 nova_compute[251992]: 2025-12-06 07:14:44.756 251996 DEBUG nova.network.neutron [-] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:14:45 compute-0 nova_compute[251992]: 2025-12-06 07:14:45.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:45 compute-0 nova_compute[251992]: 2025-12-06 07:14:45.508 251996 DEBUG nova.network.neutron [-] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:14:45 compute-0 nova_compute[251992]: 2025-12-06 07:14:45.534 251996 INFO nova.compute.manager [-] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Took 0.78 seconds to deallocate network for instance.
Dec 06 07:14:45 compute-0 nova_compute[251992]: 2025-12-06 07:14:45.614 251996 DEBUG oslo_concurrency.lockutils [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:45 compute-0 nova_compute[251992]: 2025-12-06 07:14:45.615 251996 DEBUG oslo_concurrency.lockutils [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:45 compute-0 nova_compute[251992]: 2025-12-06 07:14:45.685 251996 DEBUG oslo_concurrency.processutils [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:45 compute-0 ceph-mon[74339]: pgmap v1699: 305 pgs: 305 active+clean; 115 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Dec 06 07:14:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 120 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 193 op/s
Dec 06 07:14:45 compute-0 nova_compute[251992]: 2025-12-06 07:14:45.756 251996 DEBUG nova.compute.manager [req-8429a551-3d0b-4eec-84ea-a2d8cc64f377 req-66ac7969-5a84-45da-ac35-efa3eae72b58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Received event network-vif-deleted-661ef154-f252-42d0-99a1-7ff83bf9ba3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:14:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:14:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417296555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:46 compute-0 nova_compute[251992]: 2025-12-06 07:14:46.166 251996 DEBUG oslo_concurrency.processutils [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:46 compute-0 nova_compute[251992]: 2025-12-06 07:14:46.174 251996 DEBUG nova.compute.provider_tree [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:14:46 compute-0 nova_compute[251992]: 2025-12-06 07:14:46.190 251996 DEBUG nova.scheduler.client.report [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:14:46 compute-0 nova_compute[251992]: 2025-12-06 07:14:46.212 251996 DEBUG oslo_concurrency.lockutils [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:46 compute-0 nova_compute[251992]: 2025-12-06 07:14:46.241 251996 INFO nova.scheduler.client.report [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Deleted allocations for instance 47e12df1-0113-4c0a-9272-6078816e5844
Dec 06 07:14:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:46.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:46 compute-0 nova_compute[251992]: 2025-12-06 07:14:46.309 251996 DEBUG oslo_concurrency.lockutils [None req-d703cf48-df4f-431b-9cf6-17a6e8f4494d bdd7994b0ebb4035a373b6560aa7dbcf af7365adc05f4624a08a71cd5a77ada6 - - default default] Lock "47e12df1-0113-4c0a-9272-6078816e5844" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:46.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Dec 06 07:14:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Dec 06 07:14:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Dec 06 07:14:46 compute-0 ceph-mon[74339]: pgmap v1700: 305 pgs: 305 active+clean; 120 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 193 op/s
Dec 06 07:14:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2417296555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:46 compute-0 ceph-mon[74339]: osdmap e252: 3 total, 3 up, 3 in
Dec 06 07:14:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 118 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 471 KiB/s rd, 2.5 MiB/s wr, 152 op/s
Dec 06 07:14:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:48.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:48.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:48 compute-0 nova_compute[251992]: 2025-12-06 07:14:48.445 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:49 compute-0 ceph-mon[74339]: pgmap v1702: 305 pgs: 305 active+clean; 118 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 471 KiB/s rd, 2.5 MiB/s wr, 152 op/s
Dec 06 07:14:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 118 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 471 KiB/s rd, 2.5 MiB/s wr, 152 op/s
Dec 06 07:14:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000056s ======
Dec 06 07:14:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:50.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Dec 06 07:14:50 compute-0 nova_compute[251992]: 2025-12-06 07:14:50.375 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:50.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:50 compute-0 nova_compute[251992]: 2025-12-06 07:14:50.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:51 compute-0 ceph-mon[74339]: pgmap v1703: 305 pgs: 305 active+clean; 118 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 471 KiB/s rd, 2.5 MiB/s wr, 152 op/s
Dec 06 07:14:51 compute-0 nova_compute[251992]: 2025-12-06 07:14:51.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 477 KiB/s rd, 2.6 MiB/s wr, 100 op/s
Dec 06 07:14:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:52.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:52.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:52 compute-0 nova_compute[251992]: 2025-12-06 07:14:52.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:52 compute-0 nova_compute[251992]: 2025-12-06 07:14:52.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:52 compute-0 nova_compute[251992]: 2025-12-06 07:14:52.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:52 compute-0 nova_compute[251992]: 2025-12-06 07:14:52.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:52 compute-0 nova_compute[251992]: 2025-12-06 07:14:52.683 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:14:52 compute-0 nova_compute[251992]: 2025-12-06 07:14:52.684 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:14:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1137790135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.149 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.299 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.300 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4548MB free_disk=20.942859649658203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.300 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.301 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.405 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.405 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.432 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.456 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 477 KiB/s rd, 2.6 MiB/s wr, 100 op/s
Dec 06 07:14:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:14:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3605191494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.880 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.888 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.909 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:14:53 compute-0 ceph-mon[74339]: pgmap v1704: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 477 KiB/s rd, 2.6 MiB/s wr, 100 op/s
Dec 06 07:14:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3053445577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1137790135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.943 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:14:53 compute-0 nova_compute[251992]: 2025-12-06 07:14:53.943 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:14:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:54.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:14:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:54.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:14:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:54.473 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:14:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:14:54.473 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:14:54 compute-0 nova_compute[251992]: 2025-12-06 07:14:54.474 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:54 compute-0 nova_compute[251992]: 2025-12-06 07:14:54.785 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:54 compute-0 nova_compute[251992]: 2025-12-06 07:14:54.944 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:55 compute-0 ceph-mon[74339]: pgmap v1705: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 477 KiB/s rd, 2.6 MiB/s wr, 100 op/s
Dec 06 07:14:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3605191494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2002282999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:55 compute-0 nova_compute[251992]: 2025-12-06 07:14:55.349 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005280.3489053, 47e12df1-0113-4c0a-9272-6078816e5844 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:14:55 compute-0 nova_compute[251992]: 2025-12-06 07:14:55.350 251996 INFO nova.compute.manager [-] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] VM Stopped (Lifecycle Event)
Dec 06 07:14:55 compute-0 nova_compute[251992]: 2025-12-06 07:14:55.377 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:55 compute-0 nova_compute[251992]: 2025-12-06 07:14:55.379 251996 DEBUG nova.compute.manager [None req-27e9acb0-f6d5-492a-985d-5e9b1000a2f5 - - - - - -] [instance: 47e12df1-0113-4c0a-9272-6078816e5844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:14:55 compute-0 nova_compute[251992]: 2025-12-06 07:14:55.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:55 compute-0 nova_compute[251992]: 2025-12-06 07:14:55.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 209 KiB/s rd, 1.0 MiB/s wr, 50 op/s
Dec 06 07:14:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:56.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:56.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:56 compute-0 nova_compute[251992]: 2025-12-06 07:14:56.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:14:56 compute-0 ceph-mon[74339]: pgmap v1706: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 209 KiB/s rd, 1.0 MiB/s wr, 50 op/s
Dec 06 07:14:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 199 KiB/s wr, 26 op/s
Dec 06 07:14:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:14:58.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2536976784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2670706817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:14:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:14:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:14:58.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:14:58 compute-0 nova_compute[251992]: 2025-12-06 07:14:58.449 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:14:58 compute-0 nova_compute[251992]: 2025-12-06 07:14:58.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:14:59 compute-0 ceph-mon[74339]: pgmap v1707: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 199 KiB/s wr, 26 op/s
Dec 06 07:14:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2433529301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:14:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2565456730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:14:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 54 KiB/s wr, 5 op/s
Dec 06 07:15:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:00.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:00 compute-0 nova_compute[251992]: 2025-12-06 07:15:00.282 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:00 compute-0 nova_compute[251992]: 2025-12-06 07:15:00.283 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:00 compute-0 nova_compute[251992]: 2025-12-06 07:15:00.345 251996 DEBUG nova.compute.manager [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:15:00 compute-0 nova_compute[251992]: 2025-12-06 07:15:00.378 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:00.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:00 compute-0 nova_compute[251992]: 2025-12-06 07:15:00.437 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:00 compute-0 nova_compute[251992]: 2025-12-06 07:15:00.438 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:00 compute-0 nova_compute[251992]: 2025-12-06 07:15:00.442 251996 DEBUG nova.virt.hardware [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:15:00 compute-0 nova_compute[251992]: 2025-12-06 07:15:00.443 251996 INFO nova.compute.claims [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:15:00 compute-0 nova_compute[251992]: 2025-12-06 07:15:00.686 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:00 compute-0 sudo[294556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:00 compute-0 sudo[294556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:00 compute-0 sudo[294556]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:00 compute-0 nova_compute[251992]: 2025-12-06 07:15:00.896 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Acquiring lock "97016241-c559-4e76-9b89-a68445510fad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:00 compute-0 nova_compute[251992]: 2025-12-06 07:15:00.897 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:00 compute-0 sudo[294599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:00 compute-0 sudo[294599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:00 compute-0 sudo[294599]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:00 compute-0 nova_compute[251992]: 2025-12-06 07:15:00.955 251996 DEBUG nova.compute.manager [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.032 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:15:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/430836661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.196 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.201 251996 DEBUG nova.compute.provider_tree [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.238 251996 DEBUG nova.scheduler.client.report [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.284 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.285 251996 DEBUG nova.compute.manager [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.287 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.294 251996 DEBUG nova.virt.hardware [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.294 251996 INFO nova.compute.claims [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.373 251996 DEBUG nova.compute.manager [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.374 251996 DEBUG nova.network.neutron [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.405 251996 INFO nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:15:01 compute-0 ceph-mon[74339]: pgmap v1708: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 54 KiB/s wr, 5 op/s
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.448 251996 DEBUG nova.compute.manager [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:15:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:01.476 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.490 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.624 251996 DEBUG nova.compute.manager [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.625 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.626 251996 INFO nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Creating image(s)
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.651 251996 DEBUG nova.storage.rbd_utils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.680 251996 DEBUG nova.storage.rbd_utils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.708 251996 DEBUG nova.storage.rbd_utils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.712 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 991 KiB/s rd, 54 KiB/s wr, 44 op/s
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.738 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.739 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.778 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.779 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.780 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.780 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.809 251996 DEBUG nova.storage.rbd_utils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.813 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:15:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305077607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.944 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:01 compute-0 nova_compute[251992]: 2025-12-06 07:15:01.951 251996 DEBUG nova.compute.provider_tree [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:15:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:02.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.374 251996 DEBUG nova.scheduler.client.report [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.378 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:02.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.464 251996 DEBUG nova.storage.rbd_utils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] resizing rbd image b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.630 251996 DEBUG nova.policy [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '67604a2c995248f8931119287d416e1c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ba80f0b33d04d6d9508bc18e9b1914b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.633 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.634 251996 DEBUG nova.compute.manager [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.706 251996 DEBUG nova.compute.manager [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.706 251996 DEBUG nova.network.neutron [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.726 251996 INFO nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.742 251996 DEBUG nova.compute.manager [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.823 251996 DEBUG nova.compute.manager [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.824 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:15:02 compute-0 nova_compute[251992]: 2025-12-06 07:15:02.824 251996 INFO nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Creating image(s)
Dec 06 07:15:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/430836661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/305077607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.158 251996 DEBUG nova.storage.rbd_utils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] rbd image 97016241-c559-4e76-9b89-a68445510fad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.187 251996 DEBUG nova.storage.rbd_utils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] rbd image 97016241-c559-4e76-9b89-a68445510fad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.220 251996 DEBUG nova.storage.rbd_utils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] rbd image 97016241-c559-4e76-9b89-a68445510fad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.223 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.291 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.292 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.293 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.293 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.321 251996 DEBUG nova.storage.rbd_utils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] rbd image 97016241-c559-4e76-9b89-a68445510fad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.326 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 97016241-c559-4e76-9b89-a68445510fad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.449 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.521 251996 DEBUG nova.policy [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '60c8cae0bd8d40059b7dc1f903f672b0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '72a7e6711aab4e1eadba6423fa038649', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.530 251996 DEBUG nova.objects.instance [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lazy-loading 'migration_context' on Instance uuid b926cd32-34cf-4b7f-9908-8a7691a5d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.563 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.564 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Ensure instance console log exists: /var/lib/nova/instances/b926cd32-34cf-4b7f-9908-8a7691a5d46a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.564 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.567 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.567 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.682 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.682 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 97016241-c559-4e76-9b89-a68445510fad] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:15:03 compute-0 nova_compute[251992]: 2025-12-06 07:15:03.683 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:15:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 952 KiB/s rd, 13 KiB/s wr, 39 op/s
Dec 06 07:15:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:03.822 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:03.823 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:03.823 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:04 compute-0 ceph-mon[74339]: pgmap v1709: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 991 KiB/s rd, 54 KiB/s wr, 44 op/s
Dec 06 07:15:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:04.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:04.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:05 compute-0 nova_compute[251992]: 2025-12-06 07:15:05.098 251996 DEBUG nova.network.neutron [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Successfully created port: 3a177776-2c63-4dc7-8f3b-d4b4576299bb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:15:05 compute-0 nova_compute[251992]: 2025-12-06 07:15:05.288 251996 DEBUG nova.network.neutron [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Successfully created port: b00851c3-68e1-49d4-b268-c5d4bc92aaa0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:15:05 compute-0 nova_compute[251992]: 2025-12-06 07:15:05.381 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 171 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 86 op/s
Dec 06 07:15:05 compute-0 nova_compute[251992]: 2025-12-06 07:15:05.917 251996 DEBUG nova.network.neutron [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Successfully updated port: 3a177776-2c63-4dc7-8f3b-d4b4576299bb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:15:05 compute-0 nova_compute[251992]: 2025-12-06 07:15:05.930 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:05 compute-0 nova_compute[251992]: 2025-12-06 07:15:05.930 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquired lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:05 compute-0 nova_compute[251992]: 2025-12-06 07:15:05.930 251996 DEBUG nova.network.neutron [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.023 251996 DEBUG nova.compute.manager [req-f0a34a55-f2c1-4ed1-a62d-f1c3199fb3fd req-d27816a2-f68a-4220-9f81-b920e6520fc6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Received event network-changed-3a177776-2c63-4dc7-8f3b-d4b4576299bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.023 251996 DEBUG nova.compute.manager [req-f0a34a55-f2c1-4ed1-a62d-f1c3199fb3fd req-d27816a2-f68a-4220-9f81-b920e6520fc6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Refreshing instance network info cache due to event network-changed-3a177776-2c63-4dc7-8f3b-d4b4576299bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.023 251996 DEBUG oslo_concurrency.lockutils [req-f0a34a55-f2c1-4ed1-a62d-f1c3199fb3fd req-d27816a2-f68a-4220-9f81-b920e6520fc6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.081 251996 DEBUG nova.network.neutron [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:15:06 compute-0 ceph-mon[74339]: pgmap v1710: 305 pgs: 305 active+clean; 121 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 952 KiB/s rd, 13 KiB/s wr, 39 op/s
Dec 06 07:15:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:15:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:06.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:15:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:06.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.471 251996 DEBUG nova.network.neutron [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Successfully updated port: b00851c3-68e1-49d4-b268-c5d4bc92aaa0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.490 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Acquiring lock "refresh_cache-97016241-c559-4e76-9b89-a68445510fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.491 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Acquired lock "refresh_cache-97016241-c559-4e76-9b89-a68445510fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.491 251996 DEBUG nova.network.neutron [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.628 251996 DEBUG nova.compute.manager [req-8db07251-b99f-4e97-8d8f-c617e2a395e2 req-937e4038-5a42-4880-9d5c-f65a49c87bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Received event network-changed-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.629 251996 DEBUG nova.compute.manager [req-8db07251-b99f-4e97-8d8f-c617e2a395e2 req-937e4038-5a42-4880-9d5c-f65a49c87bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Refreshing instance network info cache due to event network-changed-b00851c3-68e1-49d4-b268-c5d4bc92aaa0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.629 251996 DEBUG oslo_concurrency.lockutils [req-8db07251-b99f-4e97-8d8f-c617e2a395e2 req-937e4038-5a42-4880-9d5c-f65a49c87bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-97016241-c559-4e76-9b89-a68445510fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:06 compute-0 sudo[294911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:06 compute-0 sudo[294911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:06 compute-0 sudo[294911]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.790 251996 DEBUG nova.network.neutron [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:15:06 compute-0 sudo[294939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:15:06 compute-0 sudo[294939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:06 compute-0 sudo[294939]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:06 compute-0 podman[294935]: 2025-12-06 07:15:06.840313823 +0000 UTC m=+0.094969353 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:15:06 compute-0 sudo[294983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:06 compute-0 sudo[294983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:06 compute-0 sudo[294983]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:06 compute-0 sudo[295012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:15:06 compute-0 sudo[295012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:06 compute-0 nova_compute[251992]: 2025-12-06 07:15:06.984 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 97016241-c559-4e76-9b89-a68445510fad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.059 251996 DEBUG nova.storage.rbd_utils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] resizing rbd image 97016241-c559-4e76-9b89-a68445510fad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:15:07 compute-0 ceph-mon[74339]: pgmap v1711: 305 pgs: 305 active+clean; 171 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 86 op/s
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.298 251996 DEBUG nova.network.neutron [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Updating instance_info_cache with network_info: [{"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:07 compute-0 sudo[295012]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 07:15:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:15:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:15:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:15:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:15:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:15:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.470 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Releasing lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.470 251996 DEBUG nova.compute.manager [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Instance network_info: |[{"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.471 251996 DEBUG oslo_concurrency.lockutils [req-f0a34a55-f2c1-4ed1-a62d-f1c3199fb3fd req-d27816a2-f68a-4220-9f81-b920e6520fc6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.471 251996 DEBUG nova.network.neutron [req-f0a34a55-f2c1-4ed1-a62d-f1c3199fb3fd req-d27816a2-f68a-4220-9f81-b920e6520fc6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Refreshing network info cache for port 3a177776-2c63-4dc7-8f3b-d4b4576299bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.475 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Start _get_guest_xml network_info=[{"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:15:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:15:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev daca13f5-7769-4976-86f5-7923553c0f8c does not exist
Dec 06 07:15:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 12074cc8-9ca3-429f-b4a8-ec1d88c1a8d3 does not exist
Dec 06 07:15:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7b6a0337-e961-4da6-ae97-31d16aeef11a does not exist
Dec 06 07:15:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:15:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:15:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:15:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:15:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:15:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.521 251996 DEBUG nova.objects.instance [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lazy-loading 'migration_context' on Instance uuid 97016241-c559-4e76-9b89-a68445510fad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.527 251996 WARNING nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.531 251996 DEBUG nova.virt.libvirt.host [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.532 251996 DEBUG nova.virt.libvirt.host [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.535 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.536 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Ensure instance console log exists: /var/lib/nova/instances/97016241-c559-4e76-9b89-a68445510fad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.536 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.536 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.536 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.537 251996 DEBUG nova.virt.libvirt.host [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.537 251996 DEBUG nova.virt.libvirt.host [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.538 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.539 251996 DEBUG nova.virt.hardware [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.539 251996 DEBUG nova.virt.hardware [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.539 251996 DEBUG nova.virt.hardware [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.539 251996 DEBUG nova.virt.hardware [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.539 251996 DEBUG nova.virt.hardware [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.540 251996 DEBUG nova.virt.hardware [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.540 251996 DEBUG nova.virt.hardware [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.540 251996 DEBUG nova.virt.hardware [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.540 251996 DEBUG nova.virt.hardware [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.540 251996 DEBUG nova.virt.hardware [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.540 251996 DEBUG nova.virt.hardware [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.543 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:07 compute-0 sudo[295137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:07 compute-0 sudo[295137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:07 compute-0 sudo[295137]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:07 compute-0 sudo[295166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:15:07 compute-0 sudo[295166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:07 compute-0 sudo[295166]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:07 compute-0 sudo[295191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:07 compute-0 sudo[295191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:07 compute-0 sudo[295191]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 213 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Dec 06 07:15:07 compute-0 sudo[295233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:15:07 compute-0 sudo[295233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:07 compute-0 nova_compute[251992]: 2025-12-06 07:15:07.848 251996 DEBUG nova.network.neutron [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Updating instance_info_cache with network_info: [{"id": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "address": "fa:16:3e:00:31:fd", "network": {"id": "d4c0b3dc-922d-4a19-8152-8770b1021325", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-56521996-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72a7e6711aab4e1eadba6423fa038649", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb00851c3-68", "ovs_interfaceid": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:15:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1892722272' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:08 compute-0 podman[295301]: 2025-12-06 07:15:08.010047053 +0000 UTC m=+0.037360780 container create fdd0f34229fc17ec8211edd3f0e523f1aff87d55880751fd5e977ba3d237b7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.023 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:08 compute-0 systemd[1]: Started libpod-conmon-fdd0f34229fc17ec8211edd3f0e523f1aff87d55880751fd5e977ba3d237b7cf.scope.
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.053 251996 DEBUG nova.storage.rbd_utils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.059 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:15:08 compute-0 podman[295301]: 2025-12-06 07:15:07.99379026 +0000 UTC m=+0.021104007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:15:08 compute-0 podman[295301]: 2025-12-06 07:15:08.093679189 +0000 UTC m=+0.120992946 container init fdd0f34229fc17ec8211edd3f0e523f1aff87d55880751fd5e977ba3d237b7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 07:15:08 compute-0 podman[295301]: 2025-12-06 07:15:08.100490279 +0000 UTC m=+0.127804006 container start fdd0f34229fc17ec8211edd3f0e523f1aff87d55880751fd5e977ba3d237b7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 07:15:08 compute-0 podman[295301]: 2025-12-06 07:15:08.103904724 +0000 UTC m=+0.131218471 container attach fdd0f34229fc17ec8211edd3f0e523f1aff87d55880751fd5e977ba3d237b7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:15:08 compute-0 pensive_mirzakhani[295334]: 167 167
Dec 06 07:15:08 compute-0 systemd[1]: libpod-fdd0f34229fc17ec8211edd3f0e523f1aff87d55880751fd5e977ba3d237b7cf.scope: Deactivated successfully.
Dec 06 07:15:08 compute-0 conmon[295334]: conmon fdd0f34229fc17ec8211 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdd0f34229fc17ec8211edd3f0e523f1aff87d55880751fd5e977ba3d237b7cf.scope/container/memory.events
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.106 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Releasing lock "refresh_cache-97016241-c559-4e76-9b89-a68445510fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.106 251996 DEBUG nova.compute.manager [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Instance network_info: |[{"id": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "address": "fa:16:3e:00:31:fd", "network": {"id": "d4c0b3dc-922d-4a19-8152-8770b1021325", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-56521996-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72a7e6711aab4e1eadba6423fa038649", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb00851c3-68", "ovs_interfaceid": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.107 251996 DEBUG oslo_concurrency.lockutils [req-8db07251-b99f-4e97-8d8f-c617e2a395e2 req-937e4038-5a42-4880-9d5c-f65a49c87bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-97016241-c559-4e76-9b89-a68445510fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.108 251996 DEBUG nova.network.neutron [req-8db07251-b99f-4e97-8d8f-c617e2a395e2 req-937e4038-5a42-4880-9d5c-f65a49c87bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Refreshing network info cache for port b00851c3-68e1-49d4-b268-c5d4bc92aaa0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.110 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Start _get_guest_xml network_info=[{"id": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "address": "fa:16:3e:00:31:fd", "network": {"id": "d4c0b3dc-922d-4a19-8152-8770b1021325", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-56521996-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72a7e6711aab4e1eadba6423fa038649", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb00851c3-68", "ovs_interfaceid": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.115 251996 WARNING nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.122 251996 DEBUG nova.virt.libvirt.host [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.123 251996 DEBUG nova.virt.libvirt.host [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.130 251996 DEBUG nova.virt.libvirt.host [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.130 251996 DEBUG nova.virt.libvirt.host [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.132 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.132 251996 DEBUG nova.virt.hardware [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.133 251996 DEBUG nova.virt.hardware [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.133 251996 DEBUG nova.virt.hardware [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.133 251996 DEBUG nova.virt.hardware [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.133 251996 DEBUG nova.virt.hardware [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.133 251996 DEBUG nova.virt.hardware [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.134 251996 DEBUG nova.virt.hardware [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.134 251996 DEBUG nova.virt.hardware [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.134 251996 DEBUG nova.virt.hardware [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.134 251996 DEBUG nova.virt.hardware [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.134 251996 DEBUG nova.virt.hardware [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.137 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:08 compute-0 podman[295343]: 2025-12-06 07:15:08.146489449 +0000 UTC m=+0.024206605 container died fdd0f34229fc17ec8211edd3f0e523f1aff87d55880751fd5e977ba3d237b7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6a33c13fe2ec6745e5fb1ac247cbd484bd2da29aacc38405be45083ff2f6d85-merged.mount: Deactivated successfully.
Dec 06 07:15:08 compute-0 podman[295343]: 2025-12-06 07:15:08.188983131 +0000 UTC m=+0.066700297 container remove fdd0f34229fc17ec8211edd3f0e523f1aff87d55880751fd5e977ba3d237b7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Dec 06 07:15:08 compute-0 systemd[1]: libpod-conmon-fdd0f34229fc17ec8211edd3f0e523f1aff87d55880751fd5e977ba3d237b7cf.scope: Deactivated successfully.
Dec 06 07:15:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:08.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:15:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:15:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:15:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:15:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:15:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:15:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:15:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1892722272' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:08 compute-0 podman[295404]: 2025-12-06 07:15:08.365374258 +0000 UTC m=+0.049771246 container create 90f9141373644dde2bd58614741170aaaf7eb3c4fb41d920ed04a944822d0916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 07:15:08 compute-0 systemd[1]: Started libpod-conmon-90f9141373644dde2bd58614741170aaaf7eb3c4fb41d920ed04a944822d0916.scope.
Dec 06 07:15:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247b296e14ffbe799378419cd288a91e13b5d0bff75c9380a289c9d4c4b5150/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247b296e14ffbe799378419cd288a91e13b5d0bff75c9380a289c9d4c4b5150/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247b296e14ffbe799378419cd288a91e13b5d0bff75c9380a289c9d4c4b5150/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247b296e14ffbe799378419cd288a91e13b5d0bff75c9380a289c9d4c4b5150/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5247b296e14ffbe799378419cd288a91e13b5d0bff75c9380a289c9d4c4b5150/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:08 compute-0 podman[295404]: 2025-12-06 07:15:08.341816303 +0000 UTC m=+0.026213321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:15:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:08.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:08 compute-0 podman[295404]: 2025-12-06 07:15:08.451993467 +0000 UTC m=+0.136390465 container init 90f9141373644dde2bd58614741170aaaf7eb3c4fb41d920ed04a944822d0916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.453 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:08 compute-0 podman[295404]: 2025-12-06 07:15:08.461320206 +0000 UTC m=+0.145717194 container start 90f9141373644dde2bd58614741170aaaf7eb3c4fb41d920ed04a944822d0916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:15:08 compute-0 podman[295404]: 2025-12-06 07:15:08.46464419 +0000 UTC m=+0.149041178 container attach 90f9141373644dde2bd58614741170aaaf7eb3c4fb41d920ed04a944822d0916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 07:15:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:15:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028038919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.518 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.521 251996 DEBUG nova.virt.libvirt.vif [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:14:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1250940394',display_name='tempest-SecurityGroupsTestJSON-server-1250940394',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1250940394',id=69,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ba80f0b33d04d6d9508bc18e9b1914b',ramdisk_id='',reservation_id='r-l078b0ff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-409098844',owner_user_name='tempest-SecurityGroupsTestJSON-409098844-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:15:01Z,user_data=None,user_id='67604a2c995248f8931119287d416e1c',uuid=b926cd32-34cf-4b7f-9908-8a7691a5d46a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.521 251996 DEBUG nova.network.os_vif_util [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converting VIF {"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.523 251996 DEBUG nova.network.os_vif_util [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:d9:aa,bridge_name='br-int',has_traffic_filtering=True,id=3a177776-2c63-4dc7-8f3b-d4b4576299bb,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a177776-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.524 251996 DEBUG nova.objects.instance [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lazy-loading 'pci_devices' on Instance uuid b926cd32-34cf-4b7f-9908-8a7691a5d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.560 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:15:08 compute-0 nova_compute[251992]:   <uuid>b926cd32-34cf-4b7f-9908-8a7691a5d46a</uuid>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   <name>instance-00000045</name>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1250940394</nova:name>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:15:07</nova:creationTime>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <nova:user uuid="67604a2c995248f8931119287d416e1c">tempest-SecurityGroupsTestJSON-409098844-project-member</nova:user>
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <nova:project uuid="4ba80f0b33d04d6d9508bc18e9b1914b">tempest-SecurityGroupsTestJSON-409098844</nova:project>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <nova:port uuid="3a177776-2c63-4dc7-8f3b-d4b4576299bb">
Dec 06 07:15:08 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <system>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <entry name="serial">b926cd32-34cf-4b7f-9908-8a7691a5d46a</entry>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <entry name="uuid">b926cd32-34cf-4b7f-9908-8a7691a5d46a</entry>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     </system>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   <os>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   </os>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   <features>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   </features>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk">
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       </source>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk.config">
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       </source>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:15:08 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:0d:d9:aa"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <target dev="tap3a177776-2c"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/b926cd32-34cf-4b7f-9908-8a7691a5d46a/console.log" append="off"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <video>
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     </video>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:15:08 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:15:08 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:15:08 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:15:08 compute-0 nova_compute[251992]: </domain>
Dec 06 07:15:08 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.561 251996 DEBUG nova.compute.manager [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Preparing to wait for external event network-vif-plugged-3a177776-2c63-4dc7-8f3b-d4b4576299bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.561 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.562 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.562 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:08 compute-0 NetworkManager[48965]: <info>  [1765005308.5717] manager: (tap3a177776-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.563 251996 DEBUG nova.virt.libvirt.vif [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:14:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1250940394',display_name='tempest-SecurityGroupsTestJSON-server-1250940394',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1250940394',id=69,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ba80f0b33d04d6d9508bc18e9b1914b',ramdisk_id='',reservation_id='r-l078b0ff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-409098844',owner_user_name='tempest-SecurityGroupsTestJSON-409098844-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:15:01Z,user_data=None,user_id='67604a2c995248f8931119287d416e1c',uuid=b926cd32-34cf-4b7f-9908-8a7691a5d46a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.563 251996 DEBUG nova.network.os_vif_util [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converting VIF {"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.564 251996 DEBUG nova.network.os_vif_util [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:d9:aa,bridge_name='br-int',has_traffic_filtering=True,id=3a177776-2c63-4dc7-8f3b-d4b4576299bb,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a177776-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.564 251996 DEBUG os_vif [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:d9:aa,bridge_name='br-int',has_traffic_filtering=True,id=3a177776-2c63-4dc7-8f3b-d4b4576299bb,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a177776-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.564 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.565 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.565 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.568 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.568 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a177776-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.569 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3a177776-2c, col_values=(('external_ids', {'iface-id': '3a177776-2c63-4dc7-8f3b-d4b4576299bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:d9:aa', 'vm-uuid': 'b926cd32-34cf-4b7f-9908-8a7691a5d46a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.570 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.573 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.577 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.578 251996 INFO os_vif [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:d9:aa,bridge_name='br-int',has_traffic_filtering=True,id=3a177776-2c63-4dc7-8f3b-d4b4576299bb,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a177776-2c')
Dec 06 07:15:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:15:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1935116714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.634 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.658 251996 DEBUG nova.storage.rbd_utils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] rbd image 97016241-c559-4e76-9b89-a68445510fad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.663 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:15:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2184190138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:15:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:15:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2184190138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.804 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.804 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.805 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] No VIF found with MAC fa:16:3e:0d:d9:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.805 251996 INFO nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Using config drive
Dec 06 07:15:08 compute-0 nova_compute[251992]: 2025-12-06 07:15:08.833 251996 DEBUG nova.storage.rbd_utils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.125 251996 DEBUG nova.network.neutron [req-f0a34a55-f2c1-4ed1-a62d-f1c3199fb3fd req-d27816a2-f68a-4220-9f81-b920e6520fc6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Updated VIF entry in instance network info cache for port 3a177776-2c63-4dc7-8f3b-d4b4576299bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.125 251996 DEBUG nova.network.neutron [req-f0a34a55-f2c1-4ed1-a62d-f1c3199fb3fd req-d27816a2-f68a-4220-9f81-b920e6520fc6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Updating instance_info_cache with network_info: [{"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.168 251996 DEBUG oslo_concurrency.lockutils [req-f0a34a55-f2c1-4ed1-a62d-f1c3199fb3fd req-d27816a2-f68a-4220-9f81-b920e6520fc6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:15:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:15:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1753630511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.219 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.221 251996 DEBUG nova.virt.libvirt.vif [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:14:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=70,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGjogaROFJKlxC/BXz8lDwLGYVPK0JABNSfhgG5jRjad5l56y9xx8mv1qZYVJiqaXnb+CuMqKXxUYYMl0I0l1HKhxwP+FpVMu4We105MiPvDRob6sqetN6HmyUdWuC/Rxg==',key_name='tempest-keypair-500043249',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72a7e6711aab4e1eadba6423fa038649',ramdisk_id='',reservation_id='r-j8yutpo0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1525169241',owner_user_name='tempest-ServersV294TestFqdnHostnames-1525169241-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:15:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='60c8cae0bd8d40059b7dc1f903f672b0',uuid=97016241-c559-4e76-9b89-a68445510fad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "address": "fa:16:3e:00:31:fd", "network": {"id": "d4c0b3dc-922d-4a19-8152-8770b1021325", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-56521996-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72a7e6711aab4e1eadba6423fa038649", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb00851c3-68", "ovs_interfaceid": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.221 251996 DEBUG nova.network.os_vif_util [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Converting VIF {"id": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "address": "fa:16:3e:00:31:fd", "network": {"id": "d4c0b3dc-922d-4a19-8152-8770b1021325", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-56521996-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72a7e6711aab4e1eadba6423fa038649", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb00851c3-68", "ovs_interfaceid": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.222 251996 DEBUG nova.network.os_vif_util [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:31:fd,bridge_name='br-int',has_traffic_filtering=True,id=b00851c3-68e1-49d4-b268-c5d4bc92aaa0,network=Network(d4c0b3dc-922d-4a19-8152-8770b1021325),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb00851c3-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.223 251996 DEBUG nova.objects.instance [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lazy-loading 'pci_devices' on Instance uuid 97016241-c559-4e76-9b89-a68445510fad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.265 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:15:09 compute-0 nova_compute[251992]:   <uuid>97016241-c559-4e76-9b89-a68445510fad</uuid>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   <name>instance-00000046</name>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <nova:name>guest-instance-1</nova:name>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:15:08</nova:creationTime>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <nova:user uuid="60c8cae0bd8d40059b7dc1f903f672b0">tempest-ServersV294TestFqdnHostnames-1525169241-project-member</nova:user>
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <nova:project uuid="72a7e6711aab4e1eadba6423fa038649">tempest-ServersV294TestFqdnHostnames-1525169241</nova:project>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <nova:port uuid="b00851c3-68e1-49d4-b268-c5d4bc92aaa0">
Dec 06 07:15:09 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <system>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <entry name="serial">97016241-c559-4e76-9b89-a68445510fad</entry>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <entry name="uuid">97016241-c559-4e76-9b89-a68445510fad</entry>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     </system>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   <os>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   </os>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   <features>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   </features>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/97016241-c559-4e76-9b89-a68445510fad_disk">
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       </source>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/97016241-c559-4e76-9b89-a68445510fad_disk.config">
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       </source>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:15:09 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:00:31:fd"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <target dev="tapb00851c3-68"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/97016241-c559-4e76-9b89-a68445510fad/console.log" append="off"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <video>
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     </video>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:15:09 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:15:09 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:15:09 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:15:09 compute-0 nova_compute[251992]: </domain>
Dec 06 07:15:09 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.266 251996 DEBUG nova.compute.manager [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Preparing to wait for external event network-vif-plugged-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.266 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Acquiring lock "97016241-c559-4e76-9b89-a68445510fad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.267 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.267 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.268 251996 DEBUG nova.virt.libvirt.vif [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:14:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=70,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGjogaROFJKlxC/BXz8lDwLGYVPK0JABNSfhgG5jRjad5l56y9xx8mv1qZYVJiqaXnb+CuMqKXxUYYMl0I0l1HKhxwP+FpVMu4We105MiPvDRob6sqetN6HmyUdWuC/Rxg==',key_name='tempest-keypair-500043249',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72a7e6711aab4e1eadba6423fa038649',ramdisk_id='',reservation_id='r-j8yutpo0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-1525169241',owner_user_name='tempest-ServersV294TestFqdnHostnames-1525169241-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:15:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='60c8cae0bd8d40059b7dc1f903f672b0',uuid=97016241-c559-4e76-9b89-a68445510fad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "address": "fa:16:3e:00:31:fd", "network": {"id": "d4c0b3dc-922d-4a19-8152-8770b1021325", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-56521996-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72a7e6711aab4e1eadba6423fa038649", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb00851c3-68", "ovs_interfaceid": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.268 251996 DEBUG nova.network.os_vif_util [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Converting VIF {"id": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "address": "fa:16:3e:00:31:fd", "network": {"id": "d4c0b3dc-922d-4a19-8152-8770b1021325", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-56521996-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72a7e6711aab4e1eadba6423fa038649", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb00851c3-68", "ovs_interfaceid": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.269 251996 DEBUG nova.network.os_vif_util [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:31:fd,bridge_name='br-int',has_traffic_filtering=True,id=b00851c3-68e1-49d4-b268-c5d4bc92aaa0,network=Network(d4c0b3dc-922d-4a19-8152-8770b1021325),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb00851c3-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.269 251996 DEBUG os_vif [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:31:fd,bridge_name='br-int',has_traffic_filtering=True,id=b00851c3-68e1-49d4-b268-c5d4bc92aaa0,network=Network(d4c0b3dc-922d-4a19-8152-8770b1021325),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb00851c3-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.270 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.270 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.271 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.274 251996 INFO nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Creating config drive at /var/lib/nova/instances/b926cd32-34cf-4b7f-9908-8a7691a5d46a/disk.config
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.279 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b926cd32-34cf-4b7f-9908-8a7691a5d46a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxx0h4fei execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.306 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.307 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb00851c3-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.308 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb00851c3-68, col_values=(('external_ids', {'iface-id': 'b00851c3-68e1-49d4-b268-c5d4bc92aaa0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:31:fd', 'vm-uuid': '97016241-c559-4e76-9b89-a68445510fad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.309 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:09 compute-0 unruffled_keldysh[295420]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:15:09 compute-0 unruffled_keldysh[295420]: --> relative data size: 1.0
Dec 06 07:15:09 compute-0 unruffled_keldysh[295420]: --> All data devices are unavailable
Dec 06 07:15:09 compute-0 NetworkManager[48965]: <info>  [1765005309.3111] manager: (tapb00851c3-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.314 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.318 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.319 251996 INFO os_vif [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:31:fd,bridge_name='br-int',has_traffic_filtering=True,id=b00851c3-68e1-49d4-b268-c5d4bc92aaa0,network=Network(d4c0b3dc-922d-4a19-8152-8770b1021325),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb00851c3-68')
Dec 06 07:15:09 compute-0 systemd[1]: libpod-90f9141373644dde2bd58614741170aaaf7eb3c4fb41d920ed04a944822d0916.scope: Deactivated successfully.
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.385 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.385 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.386 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] No VIF found with MAC fa:16:3e:00:31:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:15:09 compute-0 podman[295505]: 2025-12-06 07:15:09.386666519 +0000 UTC m=+0.026933621 container died 90f9141373644dde2bd58614741170aaaf7eb3c4fb41d920ed04a944822d0916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.386 251996 INFO nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Using config drive
Dec 06 07:15:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5247b296e14ffbe799378419cd288a91e13b5d0bff75c9380a289c9d4c4b5150-merged.mount: Deactivated successfully.
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.415 251996 DEBUG nova.storage.rbd_utils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] rbd image 97016241-c559-4e76-9b89-a68445510fad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.424 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b926cd32-34cf-4b7f-9908-8a7691a5d46a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxx0h4fei" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:09 compute-0 podman[295505]: 2025-12-06 07:15:09.44854728 +0000 UTC m=+0.088814362 container remove 90f9141373644dde2bd58614741170aaaf7eb3c4fb41d920ed04a944822d0916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:15:09 compute-0 systemd[1]: libpod-conmon-90f9141373644dde2bd58614741170aaaf7eb3c4fb41d920ed04a944822d0916.scope: Deactivated successfully.
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.458 251996 DEBUG nova.storage.rbd_utils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] rbd image b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:09 compute-0 nova_compute[251992]: 2025-12-06 07:15:09.464 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b926cd32-34cf-4b7f-9908-8a7691a5d46a/disk.config b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:09 compute-0 sudo[295233]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:09 compute-0 sudo[295555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:09 compute-0 sudo[295555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:09 compute-0 sudo[295555]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:09 compute-0 sudo[295595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:15:09 compute-0 sudo[295595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:09 compute-0 sudo[295595]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:09 compute-0 sudo[295620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:09 compute-0 sudo[295620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:09 compute-0 sudo[295620]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 213 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 114 op/s
Dec 06 07:15:09 compute-0 sudo[295645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:15:09 compute-0 sudo[295645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:09 compute-0 ceph-mon[74339]: pgmap v1712: 305 pgs: 305 active+clean; 213 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Dec 06 07:15:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4028038919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1935116714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2184190138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:15:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2184190138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:15:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1753630511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:10 compute-0 podman[295712]: 2025-12-06 07:15:10.038912013 +0000 UTC m=+0.040418506 container create 393377764e8c1e7540fceca7ff5a25a8ff341ae9c00c586d2c36b3aff649c4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:15:10 compute-0 systemd[1]: Started libpod-conmon-393377764e8c1e7540fceca7ff5a25a8ff341ae9c00c586d2c36b3aff649c4dd.scope.
Dec 06 07:15:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:15:10 compute-0 podman[295712]: 2025-12-06 07:15:10.113987551 +0000 UTC m=+0.115494084 container init 393377764e8c1e7540fceca7ff5a25a8ff341ae9c00c586d2c36b3aff649c4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 07:15:10 compute-0 podman[295712]: 2025-12-06 07:15:10.023334099 +0000 UTC m=+0.024840612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:15:10 compute-0 podman[295712]: 2025-12-06 07:15:10.122616421 +0000 UTC m=+0.124122904 container start 393377764e8c1e7540fceca7ff5a25a8ff341ae9c00c586d2c36b3aff649c4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hugle, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:15:10 compute-0 podman[295712]: 2025-12-06 07:15:10.127449346 +0000 UTC m=+0.128955839 container attach 393377764e8c1e7540fceca7ff5a25a8ff341ae9c00c586d2c36b3aff649c4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:15:10 compute-0 lucid_hugle[295728]: 167 167
Dec 06 07:15:10 compute-0 podman[295712]: 2025-12-06 07:15:10.1286688 +0000 UTC m=+0.130175313 container died 393377764e8c1e7540fceca7ff5a25a8ff341ae9c00c586d2c36b3aff649c4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Dec 06 07:15:10 compute-0 systemd[1]: libpod-393377764e8c1e7540fceca7ff5a25a8ff341ae9c00c586d2c36b3aff649c4dd.scope: Deactivated successfully.
Dec 06 07:15:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-56bb1e5983488d0ef1c08b9bd3a6a88c662ff513afc53f6fe5241a8d0ff383ab-merged.mount: Deactivated successfully.
Dec 06 07:15:10 compute-0 podman[295712]: 2025-12-06 07:15:10.176308115 +0000 UTC m=+0.177814608 container remove 393377764e8c1e7540fceca7ff5a25a8ff341ae9c00c586d2c36b3aff649c4dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hugle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 07:15:10 compute-0 systemd[1]: libpod-conmon-393377764e8c1e7540fceca7ff5a25a8ff341ae9c00c586d2c36b3aff649c4dd.scope: Deactivated successfully.
Dec 06 07:15:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:10.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:10 compute-0 podman[295752]: 2025-12-06 07:15:10.340342608 +0000 UTC m=+0.039880680 container create f219bac9a14692c529fc0dc681079959606c0ae2ade78be2b95af35361941cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_colden, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 07:15:10 compute-0 systemd[1]: Started libpod-conmon-f219bac9a14692c529fc0dc681079959606c0ae2ade78be2b95af35361941cf4.scope.
Dec 06 07:15:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77512850fce075dd6b12d08957f71f765f0ec2764e6eae46719a049a8e423e0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77512850fce075dd6b12d08957f71f765f0ec2764e6eae46719a049a8e423e0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77512850fce075dd6b12d08957f71f765f0ec2764e6eae46719a049a8e423e0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77512850fce075dd6b12d08957f71f765f0ec2764e6eae46719a049a8e423e0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:10 compute-0 podman[295752]: 2025-12-06 07:15:10.323315234 +0000 UTC m=+0.022853326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:15:10 compute-0 podman[295752]: 2025-12-06 07:15:10.420775596 +0000 UTC m=+0.120313668 container init f219bac9a14692c529fc0dc681079959606c0ae2ade78be2b95af35361941cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:15:10 compute-0 podman[295752]: 2025-12-06 07:15:10.428712577 +0000 UTC m=+0.128250659 container start f219bac9a14692c529fc0dc681079959606c0ae2ade78be2b95af35361941cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_colden, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:15:10 compute-0 podman[295752]: 2025-12-06 07:15:10.43316542 +0000 UTC m=+0.132703512 container attach f219bac9a14692c529fc0dc681079959606c0ae2ade78be2b95af35361941cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_colden, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:15:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:15:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:10.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:15:10 compute-0 nova_compute[251992]: 2025-12-06 07:15:10.558 251996 DEBUG nova.network.neutron [req-8db07251-b99f-4e97-8d8f-c617e2a395e2 req-937e4038-5a42-4880-9d5c-f65a49c87bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Updated VIF entry in instance network info cache for port b00851c3-68e1-49d4-b268-c5d4bc92aaa0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:15:10 compute-0 nova_compute[251992]: 2025-12-06 07:15:10.559 251996 DEBUG nova.network.neutron [req-8db07251-b99f-4e97-8d8f-c617e2a395e2 req-937e4038-5a42-4880-9d5c-f65a49c87bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Updating instance_info_cache with network_info: [{"id": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "address": "fa:16:3e:00:31:fd", "network": {"id": "d4c0b3dc-922d-4a19-8152-8770b1021325", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-56521996-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72a7e6711aab4e1eadba6423fa038649", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb00851c3-68", "ovs_interfaceid": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:10 compute-0 nova_compute[251992]: 2025-12-06 07:15:10.765 251996 DEBUG oslo_concurrency.lockutils [req-8db07251-b99f-4e97-8d8f-c617e2a395e2 req-937e4038-5a42-4880-9d5c-f65a49c87bac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-97016241-c559-4e76-9b89-a68445510fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:15:10 compute-0 nova_compute[251992]: 2025-12-06 07:15:10.818 251996 INFO nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Creating config drive at /var/lib/nova/instances/97016241-c559-4e76-9b89-a68445510fad/disk.config
Dec 06 07:15:10 compute-0 nova_compute[251992]: 2025-12-06 07:15:10.825 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/97016241-c559-4e76-9b89-a68445510fad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqp245idm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:10 compute-0 nova_compute[251992]: 2025-12-06 07:15:10.958 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/97016241-c559-4e76-9b89-a68445510fad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqp245idm" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:10 compute-0 nova_compute[251992]: 2025-12-06 07:15:10.989 251996 DEBUG nova.storage.rbd_utils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] rbd image 97016241-c559-4e76-9b89-a68445510fad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:15:10 compute-0 nova_compute[251992]: 2025-12-06 07:15:10.994 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/97016241-c559-4e76-9b89-a68445510fad/disk.config 97016241-c559-4e76-9b89-a68445510fad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:11 compute-0 vibrant_colden[295769]: {
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:     "0": [
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:         {
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             "devices": [
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "/dev/loop3"
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             ],
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             "lv_name": "ceph_lv0",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             "lv_size": "7511998464",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             "name": "ceph_lv0",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             "tags": {
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "ceph.cluster_name": "ceph",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "ceph.crush_device_class": "",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "ceph.encrypted": "0",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "ceph.osd_id": "0",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "ceph.type": "block",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:                 "ceph.vdo": "0"
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             },
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             "type": "block",
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:             "vg_name": "ceph_vg0"
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:         }
Dec 06 07:15:11 compute-0 vibrant_colden[295769]:     ]
Dec 06 07:15:11 compute-0 vibrant_colden[295769]: }
Dec 06 07:15:11 compute-0 systemd[1]: libpod-f219bac9a14692c529fc0dc681079959606c0ae2ade78be2b95af35361941cf4.scope: Deactivated successfully.
Dec 06 07:15:11 compute-0 podman[295752]: 2025-12-06 07:15:11.235547211 +0000 UTC m=+0.935085283 container died f219bac9a14692c529fc0dc681079959606c0ae2ade78be2b95af35361941cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.245 251996 DEBUG oslo_concurrency.processutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/97016241-c559-4e76-9b89-a68445510fad/disk.config 97016241-c559-4e76-9b89-a68445510fad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.246 251996 INFO nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Deleting local config drive /var/lib/nova/instances/97016241-c559-4e76-9b89-a68445510fad/disk.config because it was imported into RBD.
Dec 06 07:15:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-77512850fce075dd6b12d08957f71f765f0ec2764e6eae46719a049a8e423e0e-merged.mount: Deactivated successfully.
Dec 06 07:15:11 compute-0 podman[295752]: 2025-12-06 07:15:11.289354598 +0000 UTC m=+0.988892660 container remove f219bac9a14692c529fc0dc681079959606c0ae2ade78be2b95af35361941cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_colden, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:15:11 compute-0 systemd[1]: libpod-conmon-f219bac9a14692c529fc0dc681079959606c0ae2ade78be2b95af35361941cf4.scope: Deactivated successfully.
Dec 06 07:15:11 compute-0 kernel: tapb00851c3-68: entered promiscuous mode
Dec 06 07:15:11 compute-0 NetworkManager[48965]: <info>  [1765005311.3061] manager: (tapb00851c3-68): new Tun device (/org/freedesktop/NetworkManager/Devices/102)
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.309 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:11 compute-0 ovn_controller[147168]: 2025-12-06T07:15:11Z|00188|binding|INFO|Claiming lport b00851c3-68e1-49d4-b268-c5d4bc92aaa0 for this chassis.
Dec 06 07:15:11 compute-0 ovn_controller[147168]: 2025-12-06T07:15:11Z|00189|binding|INFO|b00851c3-68e1-49d4-b268-c5d4bc92aaa0: Claiming fa:16:3e:00:31:fd 10.100.0.11
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.324 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:31:fd 10.100.0.11'], port_security=['fa:16:3e:00:31:fd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '97016241-c559-4e76-9b89-a68445510fad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d4c0b3dc-922d-4a19-8152-8770b1021325', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72a7e6711aab4e1eadba6423fa038649', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'af222fcd-317a-4ca2-b4ba-51e225f84b9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb435c77-7f58-41b2-bf4f-30c3076de776, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=b00851c3-68e1-49d4-b268-c5d4bc92aaa0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.326 158118 INFO neutron.agent.ovn.metadata.agent [-] Port b00851c3-68e1-49d4-b268-c5d4bc92aaa0 in datapath d4c0b3dc-922d-4a19-8152-8770b1021325 bound to our chassis
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.327 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d4c0b3dc-922d-4a19-8152-8770b1021325
Dec 06 07:15:11 compute-0 sudo[295645]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:11 compute-0 systemd-udevd[295849]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.338 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0059d42d-028a-4d11-9ea3-27030e2cd353]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.339 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd4c0b3dc-91 in ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:15:11 compute-0 systemd-machined[212986]: New machine qemu-29-instance-00000046.
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.343 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd4c0b3dc-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.344 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8d505d-41bf-472c-bdca-d97fa46a99dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.345 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[47016a02-dc4f-42d9-84ae-b59753c85f0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 NetworkManager[48965]: <info>  [1765005311.3544] device (tapb00851c3-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:15:11 compute-0 NetworkManager[48965]: <info>  [1765005311.3556] device (tapb00851c3-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.357 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa92bdb-e472-49ed-b66c-0e5931b65c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-00000046.
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.381 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a5b270-1196-42ae-86bb-b43ed7b4ecf6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 sudo[295851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:11 compute-0 sudo[295851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:11 compute-0 sudo[295851]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.411 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.413 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2d052cec-4905-44c2-ab4d-57cc86ee89be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.419 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1c72e4db-8cd4-4f7c-ba06-affab5062bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 NetworkManager[48965]: <info>  [1765005311.4209] manager: (tapd4c0b3dc-90): new Veth device (/org/freedesktop/NetworkManager/Devices/103)
Dec 06 07:15:11 compute-0 ovn_controller[147168]: 2025-12-06T07:15:11Z|00190|binding|INFO|Setting lport b00851c3-68e1-49d4-b268-c5d4bc92aaa0 ovn-installed in OVS
Dec 06 07:15:11 compute-0 ovn_controller[147168]: 2025-12-06T07:15:11Z|00191|binding|INFO|Setting lport b00851c3-68e1-49d4-b268-c5d4bc92aaa0 up in Southbound
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.430 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.452 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4501e162-bbc6-40fd-8335-618fa1f44498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.457 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[68f23aec-3142-4514-8687-36ae7919089f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 sudo[295882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:15:11 compute-0 sudo[295882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:11 compute-0 sudo[295882]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:11 compute-0 NetworkManager[48965]: <info>  [1765005311.4831] device (tapd4c0b3dc-90): carrier: link connected
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.488 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3f80247f-1832-4833-906c-61427f1cbae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.504 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d6e04d-69d2-443f-af8e-b4656364213a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd4c0b3dc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:00:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558407, 'reachable_time': 36152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295945, 'error': None, 'target': 'ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 sudo[295935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.521 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[60b8e3aa-8b21-49a5-85c4-2db4db104364]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2a:31'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558407, 'tstamp': 558407}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295960, 'error': None, 'target': 'ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 sudo[295935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:11 compute-0 sudo[295935]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.539 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[10de6aa4-69ea-4db4-9e91-3ac6f3fa1dc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd4c0b3dc-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2a:00:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558407, 'reachable_time': 36152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295963, 'error': None, 'target': 'ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.570 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[334f37d2-b27d-4816-8916-b48b5ca61ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 sudo[295964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:15:11 compute-0 sudo[295964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.626 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a2dbd58c-f802-4b57-9689-acbbd2fffd36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.628 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4c0b3dc-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.628 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.628 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4c0b3dc-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.630 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:11 compute-0 kernel: tapd4c0b3dc-90: entered promiscuous mode
Dec 06 07:15:11 compute-0 NetworkManager[48965]: <info>  [1765005311.6307] manager: (tapd4c0b3dc-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.635 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.635 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd4c0b3dc-90, col_values=(('external_ids', {'iface-id': '07908d93-03dd-40d1-a26d-49c93e348dfd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:11 compute-0 ovn_controller[147168]: 2025-12-06T07:15:11Z|00192|binding|INFO|Releasing lport 07908d93-03dd-40d1-a26d-49c93e348dfd from this chassis (sb_readonly=0)
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.653 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.654 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d4c0b3dc-922d-4a19-8152-8770b1021325.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d4c0b3dc-922d-4a19-8152-8770b1021325.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.656 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f47599-517d-4972-b9ce-500a871a6b79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.658 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-d4c0b3dc-922d-4a19-8152-8770b1021325
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/d4c0b3dc-922d-4a19-8152-8770b1021325.pid.haproxy
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID d4c0b3dc-922d-4a19-8152-8770b1021325
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:15:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:11.661 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325', 'env', 'PROCESS_TAG=haproxy-d4c0b3dc-922d-4a19-8152-8770b1021325', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d4c0b3dc-922d-4a19-8152-8770b1021325.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:15:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.701 251996 DEBUG nova.compute.manager [req-982baf00-6193-423f-8ca5-36cbbb1f2da5 req-98000b40-0ba1-4581-8f8c-8f0fab60d12a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Received event network-vif-plugged-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.701 251996 DEBUG oslo_concurrency.lockutils [req-982baf00-6193-423f-8ca5-36cbbb1f2da5 req-98000b40-0ba1-4581-8f8c-8f0fab60d12a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "97016241-c559-4e76-9b89-a68445510fad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.701 251996 DEBUG oslo_concurrency.lockutils [req-982baf00-6193-423f-8ca5-36cbbb1f2da5 req-98000b40-0ba1-4581-8f8c-8f0fab60d12a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.701 251996 DEBUG oslo_concurrency.lockutils [req-982baf00-6193-423f-8ca5-36cbbb1f2da5 req-98000b40-0ba1-4581-8f8c-8f0fab60d12a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:11 compute-0 nova_compute[251992]: 2025-12-06 07:15:11.702 251996 DEBUG nova.compute.manager [req-982baf00-6193-423f-8ca5-36cbbb1f2da5 req-98000b40-0ba1-4581-8f8c-8f0fab60d12a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Processing event network-vif-plugged-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:15:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 213 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 132 op/s
Dec 06 07:15:11 compute-0 podman[296057]: 2025-12-06 07:15:11.928074966 +0000 UTC m=+0.044775216 container create 1f7adc24f0e262a38e7845ed75a3a9901d4428a136c6d0fb17a2d398dcd016dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_torvalds, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:15:11 compute-0 systemd[1]: Started libpod-conmon-1f7adc24f0e262a38e7845ed75a3a9901d4428a136c6d0fb17a2d398dcd016dc.scope.
Dec 06 07:15:11 compute-0 ceph-mon[74339]: pgmap v1713: 305 pgs: 305 active+clean; 213 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 114 op/s
Dec 06 07:15:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:15:12 compute-0 podman[296057]: 2025-12-06 07:15:11.910647942 +0000 UTC m=+0.027348212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:15:12 compute-0 podman[296057]: 2025-12-06 07:15:12.01484433 +0000 UTC m=+0.131544570 container init 1f7adc24f0e262a38e7845ed75a3a9901d4428a136c6d0fb17a2d398dcd016dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_torvalds, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 07:15:12 compute-0 podman[296057]: 2025-12-06 07:15:12.027047449 +0000 UTC m=+0.143747689 container start 1f7adc24f0e262a38e7845ed75a3a9901d4428a136c6d0fb17a2d398dcd016dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_torvalds, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:15:12 compute-0 podman[296057]: 2025-12-06 07:15:12.031273487 +0000 UTC m=+0.147973727 container attach 1f7adc24f0e262a38e7845ed75a3a9901d4428a136c6d0fb17a2d398dcd016dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_torvalds, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 07:15:12 compute-0 systemd[1]: libpod-1f7adc24f0e262a38e7845ed75a3a9901d4428a136c6d0fb17a2d398dcd016dc.scope: Deactivated successfully.
Dec 06 07:15:12 compute-0 stupefied_torvalds[296108]: 167 167
Dec 06 07:15:12 compute-0 podman[296057]: 2025-12-06 07:15:12.036997946 +0000 UTC m=+0.153698186 container died 1f7adc24f0e262a38e7845ed75a3a9901d4428a136c6d0fb17a2d398dcd016dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_torvalds, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 07:15:12 compute-0 conmon[296108]: conmon 1f7adc24f0e262a38e78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f7adc24f0e262a38e7845ed75a3a9901d4428a136c6d0fb17a2d398dcd016dc.scope/container/memory.events
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.088 251996 DEBUG nova.compute.manager [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.090 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005312.0883029, 97016241-c559-4e76-9b89-a68445510fad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.090 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 97016241-c559-4e76-9b89-a68445510fad] VM Started (Lifecycle Event)
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.100 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.104 251996 INFO nova.virt.libvirt.driver [-] [instance: 97016241-c559-4e76-9b89-a68445510fad] Instance spawned successfully.
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.104 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.120 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 97016241-c559-4e76-9b89-a68445510fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.124 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 97016241-c559-4e76-9b89-a68445510fad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.131 251996 DEBUG oslo_concurrency.processutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b926cd32-34cf-4b7f-9908-8a7691a5d46a/disk.config b926cd32-34cf-4b7f-9908-8a7691a5d46a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.132 251996 INFO nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Deleting local config drive /var/lib/nova/instances/b926cd32-34cf-4b7f-9908-8a7691a5d46a/disk.config because it was imported into RBD.
Dec 06 07:15:12 compute-0 podman[296123]: 2025-12-06 07:15:12.045261176 +0000 UTC m=+0.034800899 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.144 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.145 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.145 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.146 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.146 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.147 251996 DEBUG nova.virt.libvirt.driver [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.177 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 97016241-c559-4e76-9b89-a68445510fad] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.178 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005312.0884783, 97016241-c559-4e76-9b89-a68445510fad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0567636a2e2142b3ff70151ef99c9e41b143915ca902421774878b8352cc270e-merged.mount: Deactivated successfully.
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.179 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 97016241-c559-4e76-9b89-a68445510fad] VM Paused (Lifecycle Event)
Dec 06 07:15:12 compute-0 kernel: tap3a177776-2c: entered promiscuous mode
Dec 06 07:15:12 compute-0 systemd-udevd[295920]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:15:12 compute-0 ovn_controller[147168]: 2025-12-06T07:15:12Z|00193|binding|INFO|Claiming lport 3a177776-2c63-4dc7-8f3b-d4b4576299bb for this chassis.
Dec 06 07:15:12 compute-0 ovn_controller[147168]: 2025-12-06T07:15:12Z|00194|binding|INFO|3a177776-2c63-4dc7-8f3b-d4b4576299bb: Claiming fa:16:3e:0d:d9:aa 10.100.0.5
Dec 06 07:15:12 compute-0 NetworkManager[48965]: <info>  [1765005312.2007] manager: (tap3a177776-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.201 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.204 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:12 compute-0 NetworkManager[48965]: <info>  [1765005312.2123] device (tap3a177776-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:15:12 compute-0 NetworkManager[48965]: <info>  [1765005312.2148] device (tap3a177776-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:15:12 compute-0 podman[296057]: 2025-12-06 07:15:12.221829318 +0000 UTC m=+0.338529558 container remove 1f7adc24f0e262a38e7845ed75a3a9901d4428a136c6d0fb17a2d398dcd016dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_torvalds, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.230 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:d9:aa 10.100.0.5'], port_security=['fa:16:3e:0d:d9:aa 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b926cd32-34cf-4b7f-9908-8a7691a5d46a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-facf815c-af05-4eae-8215-596b89b048ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ba80f0b33d04d6d9508bc18e9b1914b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '06e30c9c-0168-4e04-b100-fe33575ec890', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e725b401-7789-49e6-93b9-c5c8c58adad1, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=3a177776-2c63-4dc7-8f3b-d4b4576299bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:15:12 compute-0 systemd-machined[212986]: New machine qemu-30-instance-00000045.
Dec 06 07:15:12 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-00000045.
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.260 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 97016241-c559-4e76-9b89-a68445510fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:12 compute-0 podman[296123]: 2025-12-06 07:15:12.264999869 +0000 UTC m=+0.254539562 container create 71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.278 251996 INFO nova.compute.manager [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Took 9.45 seconds to spawn the instance on the hypervisor.
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.279 251996 DEBUG nova.compute.manager [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.282 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005312.0928154, 97016241-c559-4e76-9b89-a68445510fad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.283 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 97016241-c559-4e76-9b89-a68445510fad] VM Resumed (Lifecycle Event)
Dec 06 07:15:12 compute-0 systemd[1]: libpod-conmon-1f7adc24f0e262a38e7845ed75a3a9901d4428a136c6d0fb17a2d398dcd016dc.scope: Deactivated successfully.
Dec 06 07:15:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:12.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.297 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:12 compute-0 ovn_controller[147168]: 2025-12-06T07:15:12Z|00195|binding|INFO|Setting lport 3a177776-2c63-4dc7-8f3b-d4b4576299bb ovn-installed in OVS
Dec 06 07:15:12 compute-0 ovn_controller[147168]: 2025-12-06T07:15:12Z|00196|binding|INFO|Setting lport 3a177776-2c63-4dc7-8f3b-d4b4576299bb up in Southbound
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.301 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.331 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 97016241-c559-4e76-9b89-a68445510fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:12 compute-0 systemd[1]: Started libpod-conmon-71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5.scope.
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.344 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 97016241-c559-4e76-9b89-a68445510fad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.364 251996 INFO nova.compute.manager [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Took 11.35 seconds to build instance.
Dec 06 07:15:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3e2a4fd743d21f5ec611a3168aa8ed9d7a3d2a413b7f9a206eb257a17a608c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.402 251996 DEBUG oslo_concurrency.lockutils [None req-62e838dc-1e64-48dd-842d-183012ee47bf 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:12 compute-0 podman[296123]: 2025-12-06 07:15:12.421745009 +0000 UTC m=+0.411284712 container init 71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 07:15:12 compute-0 podman[296123]: 2025-12-06 07:15:12.428857118 +0000 UTC m=+0.418396811 container start 71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:15:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:15:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:12.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:15:12 compute-0 neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325[296176]: [NOTICE]   (296195) : New worker (296212) forked
Dec 06 07:15:12 compute-0 neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325[296176]: [NOTICE]   (296195) : Loading success.
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.491 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 3a177776-2c63-4dc7-8f3b-d4b4576299bb in datapath facf815c-af05-4eae-8215-596b89b048ab unbound from our chassis
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.493 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network facf815c-af05-4eae-8215-596b89b048ab
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.506 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2d863ce6-6a62-485e-ae71-1125408e1767]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.510 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfacf815c-a1 in ovnmeta-facf815c-af05-4eae-8215-596b89b048ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.513 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfacf815c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.513 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[70caba73-4155-4524-8626-913d60769188]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.515 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4c305d-031e-484b-96a5-0e454b662e29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 podman[296184]: 2025-12-06 07:15:12.517179834 +0000 UTC m=+0.129818102 container create 45131f00b228c95d2ae3c2ba0027576d00bab35af14564ef8947e16a89ce42fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shtern, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 07:15:12 compute-0 podman[296184]: 2025-12-06 07:15:12.43256468 +0000 UTC m=+0.045202968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.530 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[0097bc54-a8f7-475e-966a-c07350212655]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.542 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a2784610-34f8-467c-863a-819a20fcbaa4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.574 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa99af9-a97c-4b60-aa8d-b8df05fc3f84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.579 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[26685907-25d6-4859-84e0-263fa82d13df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 NetworkManager[48965]: <info>  [1765005312.5810] manager: (tapfacf815c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.613 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[28bf88f0-2052-475e-98a1-ffb942586a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.617 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7d03168f-0ef3-43f3-ac85-f0de7d2dcd80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 NetworkManager[48965]: <info>  [1765005312.6419] device (tapfacf815c-a0): carrier: link connected
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.647 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[18ebb1e0-f614-4b8f-9492-6e19254397b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.666 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc5aaa6-b702-4e14-bc24-7cba811bfd30]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfacf815c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:6f:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558523, 'reachable_time': 26983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296257, 'error': None, 'target': 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.682 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[70139c2f-d8ac-4cbd-b9e3-354cdaa942b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:6f88'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 558523, 'tstamp': 558523}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296259, 'error': None, 'target': 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.701 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3665d6f7-867a-4b81-8922-7dfad89a5654]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfacf815c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:6f:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558523, 'reachable_time': 26983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296260, 'error': None, 'target': 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.735 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1a708626-e6ac-4bf0-9c4e-a0c545e9543f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.744 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005312.744609, b926cd32-34cf-4b7f-9908-8a7691a5d46a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.745 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] VM Started (Lifecycle Event)
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.780 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.783 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005312.7447636, b926cd32-34cf-4b7f-9908-8a7691a5d46a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.783 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] VM Paused (Lifecycle Event)
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.799 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[04e99a60-9c36-489c-adfd-00350a87a15e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.801 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfacf815c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.801 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.802 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfacf815c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.804 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:12 compute-0 NetworkManager[48965]: <info>  [1765005312.8055] manager: (tapfacf815c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Dec 06 07:15:12 compute-0 kernel: tapfacf815c-a0: entered promiscuous mode
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.806 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.808 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfacf815c-a0, col_values=(('external_ids', {'iface-id': 'ad9e5490-4d4e-46f0-9eb3-53b36e933dda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.808 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:12 compute-0 ovn_controller[147168]: 2025-12-06T07:15:12Z|00197|binding|INFO|Releasing lport ad9e5490-4d4e-46f0-9eb3-53b36e933dda from this chassis (sb_readonly=0)
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.809 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.812 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/facf815c-af05-4eae-8215-596b89b048ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/facf815c-af05-4eae-8215-596b89b048ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.814 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.817 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[09dcfece-b3c4-4ebe-9d24-4d0d16b11e0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.818 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-facf815c-af05-4eae-8215-596b89b048ab
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/facf815c-af05-4eae-8215-596b89b048ab.pid.haproxy
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID facf815c-af05-4eae-8215-596b89b048ab
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:15:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:12.821 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'env', 'PROCESS_TAG=haproxy-facf815c-af05-4eae-8215-596b89b048ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/facf815c-af05-4eae-8215-596b89b048ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.828 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.846 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:15:12 compute-0 systemd[1]: Started libpod-conmon-45131f00b228c95d2ae3c2ba0027576d00bab35af14564ef8947e16a89ce42fc.scope.
Dec 06 07:15:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a589e34f4f0f688d5d3e32052615d4acbe51b3a24f4a00872e795b2fc5004d6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a589e34f4f0f688d5d3e32052615d4acbe51b3a24f4a00872e795b2fc5004d6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a589e34f4f0f688d5d3e32052615d4acbe51b3a24f4a00872e795b2fc5004d6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a589e34f4f0f688d5d3e32052615d4acbe51b3a24f4a00872e795b2fc5004d6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:15:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:15:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:15:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:15:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:15:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.981 251996 DEBUG nova.compute.manager [req-ffcb2f58-0432-4d11-b7a9-7fe56a8cebeb req-ebcd97e0-f939-4c6e-b9d4-014a3cb707a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Received event network-vif-plugged-3a177776-2c63-4dc7-8f3b-d4b4576299bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.982 251996 DEBUG oslo_concurrency.lockutils [req-ffcb2f58-0432-4d11-b7a9-7fe56a8cebeb req-ebcd97e0-f939-4c6e-b9d4-014a3cb707a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.982 251996 DEBUG oslo_concurrency.lockutils [req-ffcb2f58-0432-4d11-b7a9-7fe56a8cebeb req-ebcd97e0-f939-4c6e-b9d4-014a3cb707a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.983 251996 DEBUG oslo_concurrency.lockutils [req-ffcb2f58-0432-4d11-b7a9-7fe56a8cebeb req-ebcd97e0-f939-4c6e-b9d4-014a3cb707a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.983 251996 DEBUG nova.compute.manager [req-ffcb2f58-0432-4d11-b7a9-7fe56a8cebeb req-ebcd97e0-f939-4c6e-b9d4-014a3cb707a2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Processing event network-vif-plugged-3a177776-2c63-4dc7-8f3b-d4b4576299bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.984 251996 DEBUG nova.compute.manager [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.989 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005312.9894805, b926cd32-34cf-4b7f-9908-8a7691a5d46a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.990 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] VM Resumed (Lifecycle Event)
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.994 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.998 251996 INFO nova.virt.libvirt.driver [-] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Instance spawned successfully.
Dec 06 07:15:12 compute-0 nova_compute[251992]: 2025-12-06 07:15:12.999 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.018 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.024 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.028 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.029 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.029 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.030 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.030 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.031 251996 DEBUG nova.virt.libvirt.driver [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.063 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.127 251996 INFO nova.compute.manager [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Took 11.50 seconds to spawn the instance on the hypervisor.
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.128 251996 DEBUG nova.compute.manager [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:15:13 compute-0 podman[296184]: 2025-12-06 07:15:13.160946413 +0000 UTC m=+0.773584711 container init 45131f00b228c95d2ae3c2ba0027576d00bab35af14564ef8947e16a89ce42fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:15:13 compute-0 podman[296278]: 2025-12-06 07:15:13.164692027 +0000 UTC m=+0.220164195 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 07:15:13 compute-0 podman[296184]: 2025-12-06 07:15:13.173237484 +0000 UTC m=+0.785875752 container start 45131f00b228c95d2ae3c2ba0027576d00bab35af14564ef8947e16a89ce42fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shtern, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.275 251996 INFO nova.compute.manager [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Took 12.86 seconds to build instance.
Dec 06 07:15:13 compute-0 podman[296184]: 2025-12-06 07:15:13.29313967 +0000 UTC m=+0.905777968 container attach 45131f00b228c95d2ae3c2ba0027576d00bab35af14564ef8947e16a89ce42fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shtern, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:15:13 compute-0 podman[296279]: 2025-12-06 07:15:13.352752648 +0000 UTC m=+0.408073442 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.411 251996 DEBUG oslo_concurrency.lockutils [None req-3b4cd7a8-4cbe-40d8-a2dc-89a67e9cbf02 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:13 compute-0 podman[296336]: 2025-12-06 07:15:13.326422296 +0000 UTC m=+0.112060708 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.453 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 213 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.5 MiB/s wr, 92 op/s
Dec 06 07:15:13 compute-0 ceph-mon[74339]: pgmap v1714: 305 pgs: 305 active+clean; 213 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 132 op/s
Dec 06 07:15:13 compute-0 podman[296336]: 2025-12-06 07:15:13.90101343 +0000 UTC m=+0.686651852 container create 274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.953 251996 DEBUG nova.compute.manager [req-4971ad60-efa7-4678-9a2a-c8656ac0ab43 req-5ff8dcc6-491d-4a7b-ac0b-2d5bf18e1223 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Received event network-vif-plugged-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.953 251996 DEBUG oslo_concurrency.lockutils [req-4971ad60-efa7-4678-9a2a-c8656ac0ab43 req-5ff8dcc6-491d-4a7b-ac0b-2d5bf18e1223 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "97016241-c559-4e76-9b89-a68445510fad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.953 251996 DEBUG oslo_concurrency.lockutils [req-4971ad60-efa7-4678-9a2a-c8656ac0ab43 req-5ff8dcc6-491d-4a7b-ac0b-2d5bf18e1223 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.953 251996 DEBUG oslo_concurrency.lockutils [req-4971ad60-efa7-4678-9a2a-c8656ac0ab43 req-5ff8dcc6-491d-4a7b-ac0b-2d5bf18e1223 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.954 251996 DEBUG nova.compute.manager [req-4971ad60-efa7-4678-9a2a-c8656ac0ab43 req-5ff8dcc6-491d-4a7b-ac0b-2d5bf18e1223 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] No waiting events found dispatching network-vif-plugged-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:15:13 compute-0 nova_compute[251992]: 2025-12-06 07:15:13.954 251996 WARNING nova.compute.manager [req-4971ad60-efa7-4678-9a2a-c8656ac0ab43 req-5ff8dcc6-491d-4a7b-ac0b-2d5bf18e1223 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Received unexpected event network-vif-plugged-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 for instance with vm_state active and task_state None.
Dec 06 07:15:13 compute-0 systemd[1]: Started libpod-conmon-274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e.scope.
Dec 06 07:15:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c53668e93bca4e9a14612113ebcc51e96b66c0a915618c72380795b44fcdb40d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:15:14 compute-0 podman[296336]: 2025-12-06 07:15:14.060850106 +0000 UTC m=+0.846488548 container init 274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 07:15:14 compute-0 podman[296336]: 2025-12-06 07:15:14.067461641 +0000 UTC m=+0.853100053 container start 274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:15:14 compute-0 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[296356]: [NOTICE]   (296369) : New worker (296373) forked
Dec 06 07:15:14 compute-0 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[296356]: [NOTICE]   (296369) : Loading success.
Dec 06 07:15:14 compute-0 interesting_shtern[296280]: {
Dec 06 07:15:14 compute-0 interesting_shtern[296280]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:15:14 compute-0 interesting_shtern[296280]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:15:14 compute-0 interesting_shtern[296280]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:15:14 compute-0 interesting_shtern[296280]:         "osd_id": 0,
Dec 06 07:15:14 compute-0 interesting_shtern[296280]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:15:14 compute-0 interesting_shtern[296280]:         "type": "bluestore"
Dec 06 07:15:14 compute-0 interesting_shtern[296280]:     }
Dec 06 07:15:14 compute-0 interesting_shtern[296280]: }
Dec 06 07:15:14 compute-0 systemd[1]: libpod-45131f00b228c95d2ae3c2ba0027576d00bab35af14564ef8947e16a89ce42fc.scope: Deactivated successfully.
Dec 06 07:15:14 compute-0 podman[296184]: 2025-12-06 07:15:14.124165708 +0000 UTC m=+1.736804006 container died 45131f00b228c95d2ae3c2ba0027576d00bab35af14564ef8947e16a89ce42fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shtern, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:15:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a589e34f4f0f688d5d3e32052615d4acbe51b3a24f4a00872e795b2fc5004d6e-merged.mount: Deactivated successfully.
Dec 06 07:15:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:14.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:14 compute-0 nova_compute[251992]: 2025-12-06 07:15:14.342 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:14 compute-0 podman[296184]: 2025-12-06 07:15:14.371678763 +0000 UTC m=+1.984317031 container remove 45131f00b228c95d2ae3c2ba0027576d00bab35af14564ef8947e16a89ce42fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shtern, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:15:14 compute-0 sudo[295964]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:14 compute-0 systemd[1]: libpod-conmon-45131f00b228c95d2ae3c2ba0027576d00bab35af14564ef8947e16a89ce42fc.scope: Deactivated successfully.
Dec 06 07:15:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:15:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:14.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:15:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:15:15 compute-0 nova_compute[251992]: 2025-12-06 07:15:15.129 251996 DEBUG nova.compute.manager [req-1f0fd160-ab53-4aa1-9161-da1a3d7afc49 req-ec973cb1-fc15-4b81-b16a-7190322ca40f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Received event network-vif-plugged-3a177776-2c63-4dc7-8f3b-d4b4576299bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:15 compute-0 nova_compute[251992]: 2025-12-06 07:15:15.129 251996 DEBUG oslo_concurrency.lockutils [req-1f0fd160-ab53-4aa1-9161-da1a3d7afc49 req-ec973cb1-fc15-4b81-b16a-7190322ca40f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:15 compute-0 nova_compute[251992]: 2025-12-06 07:15:15.129 251996 DEBUG oslo_concurrency.lockutils [req-1f0fd160-ab53-4aa1-9161-da1a3d7afc49 req-ec973cb1-fc15-4b81-b16a-7190322ca40f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:15 compute-0 nova_compute[251992]: 2025-12-06 07:15:15.130 251996 DEBUG oslo_concurrency.lockutils [req-1f0fd160-ab53-4aa1-9161-da1a3d7afc49 req-ec973cb1-fc15-4b81-b16a-7190322ca40f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:15 compute-0 nova_compute[251992]: 2025-12-06 07:15:15.130 251996 DEBUG nova.compute.manager [req-1f0fd160-ab53-4aa1-9161-da1a3d7afc49 req-ec973cb1-fc15-4b81-b16a-7190322ca40f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] No waiting events found dispatching network-vif-plugged-3a177776-2c63-4dc7-8f3b-d4b4576299bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:15:15 compute-0 nova_compute[251992]: 2025-12-06 07:15:15.130 251996 WARNING nova.compute.manager [req-1f0fd160-ab53-4aa1-9161-da1a3d7afc49 req-ec973cb1-fc15-4b81-b16a-7190322ca40f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Received unexpected event network-vif-plugged-3a177776-2c63-4dc7-8f3b-d4b4576299bb for instance with vm_state active and task_state None.
Dec 06 07:15:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 213 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Dec 06 07:15:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:15:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ba649940-dc66-42be-95c5-633e33bbe842 does not exist
Dec 06 07:15:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7d0c7e70-311a-4503-8fa6-46e30c5b52cc does not exist
Dec 06 07:15:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f0996210-f481-4038-aa2d-11db893cfd5b does not exist
Dec 06 07:15:16 compute-0 sudo[296398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:16 compute-0 sudo[296398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:16 compute-0 sudo[296398]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:16 compute-0 sudo[296423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:15:16 compute-0 sudo[296423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:16 compute-0 sudo[296423]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:16.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:16.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:16 compute-0 ceph-mon[74339]: pgmap v1715: 305 pgs: 305 active+clean; 213 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.5 MiB/s wr, 92 op/s
Dec 06 07:15:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:15:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:16 compute-0 NetworkManager[48965]: <info>  [1765005316.7778] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Dec 06 07:15:16 compute-0 NetworkManager[48965]: <info>  [1765005316.7791] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Dec 06 07:15:16 compute-0 nova_compute[251992]: 2025-12-06 07:15:16.777 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:16 compute-0 ovn_controller[147168]: 2025-12-06T07:15:16Z|00198|binding|INFO|Releasing lport 07908d93-03dd-40d1-a26d-49c93e348dfd from this chassis (sb_readonly=0)
Dec 06 07:15:16 compute-0 ovn_controller[147168]: 2025-12-06T07:15:16Z|00199|binding|INFO|Releasing lport ad9e5490-4d4e-46f0-9eb3-53b36e933dda from this chassis (sb_readonly=0)
Dec 06 07:15:16 compute-0 nova_compute[251992]: 2025-12-06 07:15:16.875 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:16 compute-0 nova_compute[251992]: 2025-12-06 07:15:16.892 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:17 compute-0 nova_compute[251992]: 2025-12-06 07:15:17.253 251996 DEBUG nova.compute.manager [req-2e4bbf35-fe74-41a0-82e9-1c50113b562d req-d12b8b93-d8df-4581-b817-dee98df7b0df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Received event network-changed-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:17 compute-0 nova_compute[251992]: 2025-12-06 07:15:17.254 251996 DEBUG nova.compute.manager [req-2e4bbf35-fe74-41a0-82e9-1c50113b562d req-d12b8b93-d8df-4581-b817-dee98df7b0df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Refreshing instance network info cache due to event network-changed-b00851c3-68e1-49d4-b268-c5d4bc92aaa0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:15:17 compute-0 nova_compute[251992]: 2025-12-06 07:15:17.254 251996 DEBUG oslo_concurrency.lockutils [req-2e4bbf35-fe74-41a0-82e9-1c50113b562d req-d12b8b93-d8df-4581-b817-dee98df7b0df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-97016241-c559-4e76-9b89-a68445510fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:17 compute-0 nova_compute[251992]: 2025-12-06 07:15:17.255 251996 DEBUG oslo_concurrency.lockutils [req-2e4bbf35-fe74-41a0-82e9-1c50113b562d req-d12b8b93-d8df-4581-b817-dee98df7b0df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-97016241-c559-4e76-9b89-a68445510fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:17 compute-0 nova_compute[251992]: 2025-12-06 07:15:17.255 251996 DEBUG nova.network.neutron [req-2e4bbf35-fe74-41a0-82e9-1c50113b562d req-d12b8b93-d8df-4581-b817-dee98df7b0df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Refreshing network info cache for port b00851c3-68e1-49d4-b268-c5d4bc92aaa0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:15:17 compute-0 nova_compute[251992]: 2025-12-06 07:15:17.365 251996 DEBUG nova.compute.manager [req-9adf1cbb-700b-49fe-aa96-ea90e4b28120 req-464563fb-b2bf-4ac8-8278-1637f4d8678f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Received event network-changed-3a177776-2c63-4dc7-8f3b-d4b4576299bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:17 compute-0 nova_compute[251992]: 2025-12-06 07:15:17.366 251996 DEBUG nova.compute.manager [req-9adf1cbb-700b-49fe-aa96-ea90e4b28120 req-464563fb-b2bf-4ac8-8278-1637f4d8678f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Refreshing instance network info cache due to event network-changed-3a177776-2c63-4dc7-8f3b-d4b4576299bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:15:17 compute-0 nova_compute[251992]: 2025-12-06 07:15:17.367 251996 DEBUG oslo_concurrency.lockutils [req-9adf1cbb-700b-49fe-aa96-ea90e4b28120 req-464563fb-b2bf-4ac8-8278-1637f4d8678f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:17 compute-0 nova_compute[251992]: 2025-12-06 07:15:17.367 251996 DEBUG oslo_concurrency.lockutils [req-9adf1cbb-700b-49fe-aa96-ea90e4b28120 req-464563fb-b2bf-4ac8-8278-1637f4d8678f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:17 compute-0 nova_compute[251992]: 2025-12-06 07:15:17.367 251996 DEBUG nova.network.neutron [req-9adf1cbb-700b-49fe-aa96-ea90e4b28120 req-464563fb-b2bf-4ac8-8278-1637f4d8678f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Refreshing network info cache for port 3a177776-2c63-4dc7-8f3b-d4b4576299bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:15:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.3 MiB/s wr, 231 op/s
Dec 06 07:15:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:18.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:15:18
Dec 06 07:15:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:15:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:15:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'images']
Dec 06 07:15:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:15:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:18 compute-0 nova_compute[251992]: 2025-12-06 07:15:18.455 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:18 compute-0 nova_compute[251992]: 2025-12-06 07:15:18.598 251996 DEBUG nova.network.neutron [req-2e4bbf35-fe74-41a0-82e9-1c50113b562d req-d12b8b93-d8df-4581-b817-dee98df7b0df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Updated VIF entry in instance network info cache for port b00851c3-68e1-49d4-b268-c5d4bc92aaa0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:15:18 compute-0 nova_compute[251992]: 2025-12-06 07:15:18.599 251996 DEBUG nova.network.neutron [req-2e4bbf35-fe74-41a0-82e9-1c50113b562d req-d12b8b93-d8df-4581-b817-dee98df7b0df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Updating instance_info_cache with network_info: [{"id": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "address": "fa:16:3e:00:31:fd", "network": {"id": "d4c0b3dc-922d-4a19-8152-8770b1021325", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-56521996-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72a7e6711aab4e1eadba6423fa038649", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb00851c3-68", "ovs_interfaceid": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:18 compute-0 nova_compute[251992]: 2025-12-06 07:15:18.624 251996 DEBUG oslo_concurrency.lockutils [req-2e4bbf35-fe74-41a0-82e9-1c50113b562d req-d12b8b93-d8df-4581-b817-dee98df7b0df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-97016241-c559-4e76-9b89-a68445510fad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:15:18 compute-0 nova_compute[251992]: 2025-12-06 07:15:18.984 251996 DEBUG nova.network.neutron [req-9adf1cbb-700b-49fe-aa96-ea90e4b28120 req-464563fb-b2bf-4ac8-8278-1637f4d8678f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Updated VIF entry in instance network info cache for port 3a177776-2c63-4dc7-8f3b-d4b4576299bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:15:18 compute-0 nova_compute[251992]: 2025-12-06 07:15:18.985 251996 DEBUG nova.network.neutron [req-9adf1cbb-700b-49fe-aa96-ea90e4b28120 req-464563fb-b2bf-4ac8-8278-1637f4d8678f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Updating instance_info_cache with network_info: [{"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:19 compute-0 nova_compute[251992]: 2025-12-06 07:15:19.014 251996 DEBUG oslo_concurrency.lockutils [req-9adf1cbb-700b-49fe-aa96-ea90e4b28120 req-464563fb-b2bf-4ac8-8278-1637f4d8678f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:15:19 compute-0 nova_compute[251992]: 2025-12-06 07:15:19.345 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:19 compute-0 nova_compute[251992]: 2025-12-06 07:15:19.354 251996 DEBUG nova.compute.manager [req-a3b9ba24-6dc6-4054-b8a6-d57475eb76ff req-9cbfce5a-e22d-45ce-9215-2c969b1ab948 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Received event network-changed-3a177776-2c63-4dc7-8f3b-d4b4576299bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:19 compute-0 nova_compute[251992]: 2025-12-06 07:15:19.354 251996 DEBUG nova.compute.manager [req-a3b9ba24-6dc6-4054-b8a6-d57475eb76ff req-9cbfce5a-e22d-45ce-9215-2c969b1ab948 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Refreshing instance network info cache due to event network-changed-3a177776-2c63-4dc7-8f3b-d4b4576299bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:15:19 compute-0 nova_compute[251992]: 2025-12-06 07:15:19.355 251996 DEBUG oslo_concurrency.lockutils [req-a3b9ba24-6dc6-4054-b8a6-d57475eb76ff req-9cbfce5a-e22d-45ce-9215-2c969b1ab948 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:15:19 compute-0 nova_compute[251992]: 2025-12-06 07:15:19.355 251996 DEBUG oslo_concurrency.lockutils [req-a3b9ba24-6dc6-4054-b8a6-d57475eb76ff req-9cbfce5a-e22d-45ce-9215-2c969b1ab948 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:15:19 compute-0 nova_compute[251992]: 2025-12-06 07:15:19.356 251996 DEBUG nova.network.neutron [req-a3b9ba24-6dc6-4054-b8a6-d57475eb76ff req-9cbfce5a-e22d-45ce-9215-2c969b1ab948 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Refreshing network info cache for port 3a177776-2c63-4dc7-8f3b-d4b4576299bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:15:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 39 KiB/s wr, 203 op/s
Dec 06 07:15:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:20.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:20.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:20 compute-0 nova_compute[251992]: 2025-12-06 07:15:20.552 251996 DEBUG nova.network.neutron [req-a3b9ba24-6dc6-4054-b8a6-d57475eb76ff req-9cbfce5a-e22d-45ce-9215-2c969b1ab948 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Updated VIF entry in instance network info cache for port 3a177776-2c63-4dc7-8f3b-d4b4576299bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:15:20 compute-0 nova_compute[251992]: 2025-12-06 07:15:20.553 251996 DEBUG nova.network.neutron [req-a3b9ba24-6dc6-4054-b8a6-d57475eb76ff req-9cbfce5a-e22d-45ce-9215-2c969b1ab948 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Updating instance_info_cache with network_info: [{"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:20 compute-0 nova_compute[251992]: 2025-12-06 07:15:20.590 251996 DEBUG oslo_concurrency.lockutils [req-a3b9ba24-6dc6-4054-b8a6-d57475eb76ff req-9cbfce5a-e22d-45ce-9215-2c969b1ab948 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:15:20 compute-0 ceph-mon[74339]: pgmap v1716: 305 pgs: 305 active+clean; 213 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Dec 06 07:15:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:15:20 compute-0 sudo[296452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:21 compute-0 sudo[296452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:21 compute-0 sudo[296452]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:21 compute-0 sudo[296477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:21 compute-0 sudo[296477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:21 compute-0 sudo[296477]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:21 compute-0 nova_compute[251992]: 2025-12-06 07:15:21.120 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 216 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 65 KiB/s wr, 206 op/s
Dec 06 07:15:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:22 compute-0 ceph-mon[74339]: pgmap v1717: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.3 MiB/s wr, 231 op/s
Dec 06 07:15:22 compute-0 ceph-mon[74339]: pgmap v1718: 305 pgs: 305 active+clean; 214 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 39 KiB/s wr, 203 op/s
Dec 06 07:15:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:15:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:22.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:15:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:22.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:15:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 7773 writes, 34K keys, 7765 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 7773 writes, 7765 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1683 writes, 7706 keys, 1682 commit groups, 1.0 writes per commit group, ingest: 11.10 MB, 0.02 MB/s
                                           Interval WAL: 1683 writes, 1682 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     81.7      0.54              0.14        19    0.029       0      0       0.0       0.0
                                             L6      1/0    9.13 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.7    107.1     88.3      1.87              0.54        18    0.104    100K    10K       0.0       0.0
                                            Sum      1/0    9.13 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.7     83.0     86.8      2.41              0.68        37    0.065    100K    10K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.2     61.9     61.2      0.94              0.17        10    0.094     33K   3589       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    107.1     88.3      1.87              0.54        18    0.104    100K    10K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     82.2      0.54              0.14        18    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.043, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.07 MB/s write, 0.20 GB read, 0.07 MB/s read, 2.4 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 20.79 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000302 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1191,20.02 MB,6.58631%) FilterBlock(38,283.17 KB,0.0909655%) IndexBlock(38,504.41 KB,0.162034%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 07:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:15:23 compute-0 nova_compute[251992]: 2025-12-06 07:15:23.456 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:15:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 216 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 65 KiB/s wr, 189 op/s
Dec 06 07:15:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:24.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:24 compute-0 ceph-mon[74339]: pgmap v1719: 305 pgs: 305 active+clean; 216 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 65 KiB/s wr, 206 op/s
Dec 06 07:15:24 compute-0 nova_compute[251992]: 2025-12-06 07:15:24.347 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:24.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 216 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 65 KiB/s wr, 193 op/s
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004165794249022892 of space, bias 1.0, pg target 1.2497382747068677 quantized to 32 (current 32)
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:15:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:15:26 compute-0 nova_compute[251992]: 2025-12-06 07:15:26.050 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:26.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:15:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:26.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:15:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:27 compute-0 ceph-mon[74339]: pgmap v1720: 305 pgs: 305 active+clean; 216 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 65 KiB/s wr, 189 op/s
Dec 06 07:15:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 218 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 193 KiB/s wr, 84 op/s
Dec 06 07:15:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:28.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:28 compute-0 nova_compute[251992]: 2025-12-06 07:15:28.458 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:28.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:29 compute-0 nova_compute[251992]: 2025-12-06 07:15:29.350 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 218 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 178 KiB/s wr, 16 op/s
Dec 06 07:15:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:30.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:30.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:30 compute-0 ceph-mon[74339]: pgmap v1721: 305 pgs: 305 active+clean; 216 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 65 KiB/s wr, 193 op/s
Dec 06 07:15:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2013365150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:30 compute-0 ceph-mon[74339]: pgmap v1722: 305 pgs: 305 active+clean; 218 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 193 KiB/s wr, 84 op/s
Dec 06 07:15:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4084160538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 242 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 155 KiB/s rd, 2.9 MiB/s wr, 54 op/s
Dec 06 07:15:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:32.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:32.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:33 compute-0 nova_compute[251992]: 2025-12-06 07:15:33.461 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 242 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 152 KiB/s rd, 2.9 MiB/s wr, 51 op/s
Dec 06 07:15:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:34.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:34 compute-0 nova_compute[251992]: 2025-12-06 07:15:34.352 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:34.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:34 compute-0 ceph-mon[74339]: pgmap v1723: 305 pgs: 305 active+clean; 218 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 178 KiB/s wr, 16 op/s
Dec 06 07:15:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 252 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 90 op/s
Dec 06 07:15:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:15:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:36.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:15:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:36.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:36 compute-0 ceph-mon[74339]: pgmap v1724: 305 pgs: 305 active+clean; 242 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 155 KiB/s rd, 2.9 MiB/s wr, 54 op/s
Dec 06 07:15:36 compute-0 ceph-mon[74339]: pgmap v1725: 305 pgs: 305 active+clean; 242 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 152 KiB/s rd, 2.9 MiB/s wr, 51 op/s
Dec 06 07:15:37 compute-0 podman[296512]: 2025-12-06 07:15:37.456922642 +0000 UTC m=+0.112590362 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 07:15:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 258 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 116 op/s
Dec 06 07:15:37 compute-0 ovn_controller[147168]: 2025-12-06T07:15:37Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:31:fd 10.100.0.11
Dec 06 07:15:37 compute-0 ovn_controller[147168]: 2025-12-06T07:15:37Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:31:fd 10.100.0.11
Dec 06 07:15:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:15:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:38.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:15:38 compute-0 nova_compute[251992]: 2025-12-06 07:15:38.464 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:38.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:38 compute-0 ceph-mon[74339]: pgmap v1726: 305 pgs: 305 active+clean; 252 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 90 op/s
Dec 06 07:15:39 compute-0 ovn_controller[147168]: 2025-12-06T07:15:39Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:d9:aa 10.100.0.5
Dec 06 07:15:39 compute-0 ovn_controller[147168]: 2025-12-06T07:15:39Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:d9:aa 10.100.0.5
Dec 06 07:15:39 compute-0 nova_compute[251992]: 2025-12-06 07:15:39.379 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 258 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 107 op/s
Dec 06 07:15:39 compute-0 ceph-mon[74339]: pgmap v1727: 305 pgs: 305 active+clean; 258 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 116 op/s
Dec 06 07:15:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:40.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:40.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:41 compute-0 sudo[296540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1564804685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:41 compute-0 sudo[296540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:41 compute-0 ceph-mon[74339]: pgmap v1728: 305 pgs: 305 active+clean; 258 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 107 op/s
Dec 06 07:15:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3250951825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:41 compute-0 sudo[296540]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:41 compute-0 sudo[296565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:15:41 compute-0 sudo[296565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:15:41 compute-0 sudo[296565]: pam_unix(sudo:session): session closed for user root
Dec 06 07:15:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 322 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 177 op/s
Dec 06 07:15:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:42.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:42.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:15:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:15:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:15:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:15:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:15:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:15:43 compute-0 podman[296591]: 2025-12-06 07:15:43.394099516 +0000 UTC m=+0.056373809 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 07:15:43 compute-0 nova_compute[251992]: 2025-12-06 07:15:43.466 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 322 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 139 op/s
Dec 06 07:15:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:44.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:44 compute-0 nova_compute[251992]: 2025-12-06 07:15:44.381 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:44 compute-0 podman[296611]: 2025-12-06 07:15:44.392218701 +0000 UTC m=+0.054455295 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 07:15:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:15:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:44.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:15:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 367 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.7 MiB/s wr, 181 op/s
Dec 06 07:15:45 compute-0 ceph-mon[74339]: pgmap v1729: 305 pgs: 305 active+clean; 322 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 177 op/s
Dec 06 07:15:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:46.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:46.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 374 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.4 MiB/s wr, 170 op/s
Dec 06 07:15:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:15:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:48.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:15:48 compute-0 nova_compute[251992]: 2025-12-06 07:15:48.469 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:48.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:49 compute-0 nova_compute[251992]: 2025-12-06 07:15:49.384 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:49 compute-0 ceph-mon[74339]: pgmap v1730: 305 pgs: 305 active+clean; 322 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 139 op/s
Dec 06 07:15:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 374 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 650 KiB/s rd, 3.8 MiB/s wr, 140 op/s
Dec 06 07:15:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:49.763 158254 DEBUG eventlet.wsgi.server [-] (158254) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Dec 06 07:15:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:49.768 158254 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Dec 06 07:15:49 compute-0 ovn_metadata_agent[158111]: Accept: */*
Dec 06 07:15:49 compute-0 ovn_metadata_agent[158111]: Connection: close
Dec 06 07:15:49 compute-0 ovn_metadata_agent[158111]: Content-Type: text/plain
Dec 06 07:15:49 compute-0 ovn_metadata_agent[158111]: Host: 169.254.169.254
Dec 06 07:15:49 compute-0 ovn_metadata_agent[158111]: User-Agent: curl/7.84.0
Dec 06 07:15:49 compute-0 ovn_metadata_agent[158111]: X-Forwarded-For: 10.100.0.11
Dec 06 07:15:49 compute-0 ovn_metadata_agent[158111]: X-Ovn-Network-Id: d4c0b3dc-922d-4a19-8152-8770b1021325 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Dec 06 07:15:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:50.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:50.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2853557948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:51 compute-0 ceph-mon[74339]: pgmap v1731: 305 pgs: 305 active+clean; 367 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.7 MiB/s wr, 181 op/s
Dec 06 07:15:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/840482458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:51 compute-0 ceph-mon[74339]: pgmap v1732: 305 pgs: 305 active+clean; 374 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.4 MiB/s wr, 170 op/s
Dec 06 07:15:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3152543257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4167606791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:15:51 compute-0 nova_compute[251992]: 2025-12-06 07:15:51.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 374 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 851 KiB/s rd, 3.8 MiB/s wr, 159 op/s
Dec 06 07:15:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:51.774 158254 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Dec 06 07:15:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:51.774 158254 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1671 time: 2.0073819
Dec 06 07:15:51 compute-0 haproxy-metadata-proxy-d4c0b3dc-922d-4a19-8152-8770b1021325[296212]: 10.100.0.11:40784 [06/Dec/2025:07:15:49.761] listener listener/metadata 0/0/0/2013/2013 200 1655 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Dec 06 07:15:51 compute-0 nova_compute[251992]: 2025-12-06 07:15:51.960 251996 DEBUG oslo_concurrency.lockutils [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Acquiring lock "97016241-c559-4e76-9b89-a68445510fad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:51 compute-0 nova_compute[251992]: 2025-12-06 07:15:51.961 251996 DEBUG oslo_concurrency.lockutils [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:51 compute-0 nova_compute[251992]: 2025-12-06 07:15:51.961 251996 DEBUG oslo_concurrency.lockutils [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Acquiring lock "97016241-c559-4e76-9b89-a68445510fad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:51 compute-0 nova_compute[251992]: 2025-12-06 07:15:51.961 251996 DEBUG oslo_concurrency.lockutils [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:51 compute-0 nova_compute[251992]: 2025-12-06 07:15:51.962 251996 DEBUG oslo_concurrency.lockutils [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:51 compute-0 nova_compute[251992]: 2025-12-06 07:15:51.963 251996 INFO nova.compute.manager [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Terminating instance
Dec 06 07:15:51 compute-0 nova_compute[251992]: 2025-12-06 07:15:51.965 251996 DEBUG nova.compute.manager [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:15:52 compute-0 kernel: tapb00851c3-68 (unregistering): left promiscuous mode
Dec 06 07:15:52 compute-0 NetworkManager[48965]: <info>  [1765005352.0213] device (tapb00851c3-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:15:52 compute-0 ovn_controller[147168]: 2025-12-06T07:15:52Z|00200|binding|INFO|Releasing lport b00851c3-68e1-49d4-b268-c5d4bc92aaa0 from this chassis (sb_readonly=0)
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.030 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-0 ovn_controller[147168]: 2025-12-06T07:15:52Z|00201|binding|INFO|Setting lport b00851c3-68e1-49d4-b268-c5d4bc92aaa0 down in Southbound
Dec 06 07:15:52 compute-0 ovn_controller[147168]: 2025-12-06T07:15:52Z|00202|binding|INFO|Removing iface tapb00851c3-68 ovn-installed in OVS
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.034 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.043 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:31:fd 10.100.0.11'], port_security=['fa:16:3e:00:31:fd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '97016241-c559-4e76-9b89-a68445510fad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d4c0b3dc-922d-4a19-8152-8770b1021325', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72a7e6711aab4e1eadba6423fa038649', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'af222fcd-317a-4ca2-b4ba-51e225f84b9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb435c77-7f58-41b2-bf4f-30c3076de776, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=b00851c3-68e1-49d4-b268-c5d4bc92aaa0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.046 158118 INFO neutron.agent.ovn.metadata.agent [-] Port b00851c3-68e1-49d4-b268-c5d4bc92aaa0 in datapath d4c0b3dc-922d-4a19-8152-8770b1021325 unbound from our chassis
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.047 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d4c0b3dc-922d-4a19-8152-8770b1021325, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.049 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.050 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e8f0de-b62e-4fe7-a789-29f499d1676e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.051 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325 namespace which is not needed anymore
Dec 06 07:15:52 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000046.scope: Deactivated successfully.
Dec 06 07:15:52 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000046.scope: Consumed 15.421s CPU time.
Dec 06 07:15:52 compute-0 systemd-machined[212986]: Machine qemu-29-instance-00000046 terminated.
Dec 06 07:15:52 compute-0 neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325[296176]: [NOTICE]   (296195) : haproxy version is 2.8.14-c23fe91
Dec 06 07:15:52 compute-0 neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325[296176]: [NOTICE]   (296195) : path to executable is /usr/sbin/haproxy
Dec 06 07:15:52 compute-0 neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325[296176]: [WARNING]  (296195) : Exiting Master process...
Dec 06 07:15:52 compute-0 neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325[296176]: [WARNING]  (296195) : Exiting Master process...
Dec 06 07:15:52 compute-0 neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325[296176]: [ALERT]    (296195) : Current worker (296212) exited with code 143 (Terminated)
Dec 06 07:15:52 compute-0 neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325[296176]: [WARNING]  (296195) : All workers exited. Exiting... (0)
Dec 06 07:15:52 compute-0 systemd[1]: libpod-71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5.scope: Deactivated successfully.
Dec 06 07:15:52 compute-0 podman[296658]: 2025-12-06 07:15:52.186286088 +0000 UTC m=+0.046319500 container died 71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.186 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.191 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.206 251996 INFO nova.virt.libvirt.driver [-] [instance: 97016241-c559-4e76-9b89-a68445510fad] Instance destroyed successfully.
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.207 251996 DEBUG nova.objects.instance [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lazy-loading 'resources' on Instance uuid 97016241-c559-4e76-9b89-a68445510fad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5-userdata-shm.mount: Deactivated successfully.
Dec 06 07:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab3e2a4fd743d21f5ec611a3168aa8ed9d7a3d2a413b7f9a206eb257a17a608c-merged.mount: Deactivated successfully.
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.222 251996 DEBUG nova.virt.libvirt.vif [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=70,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGjogaROFJKlxC/BXz8lDwLGYVPK0JABNSfhgG5jRjad5l56y9xx8mv1qZYVJiqaXnb+CuMqKXxUYYMl0I0l1HKhxwP+FpVMu4We105MiPvDRob6sqetN6HmyUdWuC/Rxg==',key_name='tempest-keypair-500043249',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='72a7e6711aab4e1eadba6423fa038649',ramdisk_id='',reservation_id='r-j8yutpo0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-1525169241',owner_user_name='tempest-ServersV294TestFqdnHostnames-1525169241-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:15:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='60c8cae0bd8d40059b7dc1f903f672b0',uuid=97016241-c559-4e76-9b89-a68445510fad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "address": "fa:16:3e:00:31:fd", "network": {"id": "d4c0b3dc-922d-4a19-8152-8770b1021325", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-56521996-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72a7e6711aab4e1eadba6423fa038649", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb00851c3-68", "ovs_interfaceid": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.223 251996 DEBUG nova.network.os_vif_util [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Converting VIF {"id": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "address": "fa:16:3e:00:31:fd", "network": {"id": "d4c0b3dc-922d-4a19-8152-8770b1021325", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-56521996-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72a7e6711aab4e1eadba6423fa038649", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb00851c3-68", "ovs_interfaceid": "b00851c3-68e1-49d4-b268-c5d4bc92aaa0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.226 251996 DEBUG nova.network.os_vif_util [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:31:fd,bridge_name='br-int',has_traffic_filtering=True,id=b00851c3-68e1-49d4-b268-c5d4bc92aaa0,network=Network(d4c0b3dc-922d-4a19-8152-8770b1021325),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb00851c3-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.227 251996 DEBUG os_vif [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:31:fd,bridge_name='br-int',has_traffic_filtering=True,id=b00851c3-68e1-49d4-b268-c5d4bc92aaa0,network=Network(d4c0b3dc-922d-4a19-8152-8770b1021325),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb00851c3-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.231 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.231 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb00851c3-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:52 compute-0 podman[296658]: 2025-12-06 07:15:52.234869579 +0000 UTC m=+0.094902991 container cleanup 71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.235 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.239 251996 INFO os_vif [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:31:fd,bridge_name='br-int',has_traffic_filtering=True,id=b00851c3-68e1-49d4-b268-c5d4bc92aaa0,network=Network(d4c0b3dc-922d-4a19-8152-8770b1021325),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb00851c3-68')
Dec 06 07:15:52 compute-0 systemd[1]: libpod-conmon-71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5.scope: Deactivated successfully.
Dec 06 07:15:52 compute-0 podman[296696]: 2025-12-06 07:15:52.318347891 +0000 UTC m=+0.058624722 container remove 71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.324 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c7695f43-d9e6-46f5-9517-b855d156c988]: (4, ('Sat Dec  6 07:15:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325 (71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5)\n71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5\nSat Dec  6 07:15:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325 (71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5)\n71e08dab9e46cf420fc915635c1e18720b3540bdc380aceb27596094b98749d5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.326 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[691e438d-c742-455e-b1f8-6ae8838a9244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.327 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4c0b3dc-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.329 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-0 kernel: tapd4c0b3dc-90: left promiscuous mode
Dec 06 07:15:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:52.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.344 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.348 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[283081bc-bf9e-450c-b2a5-3e0605c1e423]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.369 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e967ce-7200-47af-bf87-4c30cba6070f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.371 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e48c3f6c-ad91-4482-9e17-dbb7eb0e81b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.387 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff43a90-df4e-4a1c-a7a1-e3f436649bf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558399, 'reachable_time': 18681, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296726, 'error': None, 'target': 'ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.392 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d4c0b3dc-922d-4a19-8152-8770b1021325 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:15:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:52.392 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[1a8578e6-4674-43a7-895f-552f48628ad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:15:52 compute-0 systemd[1]: run-netns-ovnmeta\x2dd4c0b3dc\x2d922d\x2d4a19\x2d8152\x2d8770b1021325.mount: Deactivated successfully.
Dec 06 07:15:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:52.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:52 compute-0 ceph-mon[74339]: pgmap v1733: 305 pgs: 305 active+clean; 374 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 650 KiB/s rd, 3.8 MiB/s wr, 140 op/s
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.526298) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352526351, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1248, "num_deletes": 254, "total_data_size": 1986527, "memory_usage": 2008704, "flush_reason": "Manual Compaction"}
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352541289, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1951469, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34099, "largest_seqno": 35346, "table_properties": {"data_size": 1945462, "index_size": 3274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13654, "raw_average_key_size": 20, "raw_value_size": 1933142, "raw_average_value_size": 2924, "num_data_blocks": 144, "num_entries": 661, "num_filter_entries": 661, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005232, "oldest_key_time": 1765005232, "file_creation_time": 1765005352, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 15045 microseconds, and 6003 cpu microseconds.
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.541337) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1951469 bytes OK
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.541359) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.542521) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.542533) EVENT_LOG_v1 {"time_micros": 1765005352542529, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.542549) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1980849, prev total WAL file size 1980849, number of live WAL files 2.
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.543232) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1905KB)], [71(9346KB)]
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352543307, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11522493, "oldest_snapshot_seqno": -1}
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6577 keys, 9589526 bytes, temperature: kUnknown
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352643594, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9589526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9545752, "index_size": 26254, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 170005, "raw_average_key_size": 25, "raw_value_size": 9427812, "raw_average_value_size": 1433, "num_data_blocks": 1044, "num_entries": 6577, "num_filter_entries": 6577, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765005352, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.643858) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9589526 bytes
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.645226) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.8 rd, 95.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(10.8) write-amplify(4.9) OK, records in: 7101, records dropped: 524 output_compression: NoCompression
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.645253) EVENT_LOG_v1 {"time_micros": 1765005352645235, "job": 40, "event": "compaction_finished", "compaction_time_micros": 100364, "compaction_time_cpu_micros": 27184, "output_level": 6, "num_output_files": 1, "total_output_size": 9589526, "num_input_records": 7101, "num_output_records": 6577, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352645735, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005352647345, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.543116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.647378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.647382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.647384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.647386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:15:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:15:52.647388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.739 251996 INFO nova.virt.libvirt.driver [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Deleting instance files /var/lib/nova/instances/97016241-c559-4e76-9b89-a68445510fad_del
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.740 251996 INFO nova.virt.libvirt.driver [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Deletion of /var/lib/nova/instances/97016241-c559-4e76-9b89-a68445510fad_del complete
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.768 251996 DEBUG nova.compute.manager [req-4daa4f67-ef8e-474e-953a-f8fb623ecff0 req-c6704a40-e16f-4093-a624-0fb18528a496 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Received event network-vif-unplugged-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.769 251996 DEBUG oslo_concurrency.lockutils [req-4daa4f67-ef8e-474e-953a-f8fb623ecff0 req-c6704a40-e16f-4093-a624-0fb18528a496 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "97016241-c559-4e76-9b89-a68445510fad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.769 251996 DEBUG oslo_concurrency.lockutils [req-4daa4f67-ef8e-474e-953a-f8fb623ecff0 req-c6704a40-e16f-4093-a624-0fb18528a496 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.769 251996 DEBUG oslo_concurrency.lockutils [req-4daa4f67-ef8e-474e-953a-f8fb623ecff0 req-c6704a40-e16f-4093-a624-0fb18528a496 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.770 251996 DEBUG nova.compute.manager [req-4daa4f67-ef8e-474e-953a-f8fb623ecff0 req-c6704a40-e16f-4093-a624-0fb18528a496 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] No waiting events found dispatching network-vif-unplugged-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.770 251996 DEBUG nova.compute.manager [req-4daa4f67-ef8e-474e-953a-f8fb623ecff0 req-c6704a40-e16f-4093-a624-0fb18528a496 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Received event network-vif-unplugged-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.789 251996 INFO nova.compute.manager [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Took 0.82 seconds to destroy the instance on the hypervisor.
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.790 251996 DEBUG oslo.service.loopingcall [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.790 251996 DEBUG nova.compute.manager [-] [instance: 97016241-c559-4e76-9b89-a68445510fad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:15:52 compute-0 nova_compute[251992]: 2025-12-06 07:15:52.791 251996 DEBUG nova.network.neutron [-] [instance: 97016241-c559-4e76-9b89-a68445510fad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:15:53 compute-0 nova_compute[251992]: 2025-12-06 07:15:53.470 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:53 compute-0 ceph-mon[74339]: pgmap v1734: 305 pgs: 305 active+clean; 374 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 851 KiB/s rd, 3.8 MiB/s wr, 159 op/s
Dec 06 07:15:53 compute-0 nova_compute[251992]: 2025-12-06 07:15:53.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:53 compute-0 nova_compute[251992]: 2025-12-06 07:15:53.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:53 compute-0 nova_compute[251992]: 2025-12-06 07:15:53.695 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:53 compute-0 nova_compute[251992]: 2025-12-06 07:15:53.696 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:53 compute-0 nova_compute[251992]: 2025-12-06 07:15:53.696 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:53 compute-0 nova_compute[251992]: 2025-12-06 07:15:53.696 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:15:53 compute-0 nova_compute[251992]: 2025-12-06 07:15:53.697 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 374 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 1.6 MiB/s wr, 89 op/s
Dec 06 07:15:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:15:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2420440032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:54 compute-0 nova_compute[251992]: 2025-12-06 07:15:54.211 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:54 compute-0 nova_compute[251992]: 2025-12-06 07:15:54.310 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:15:54 compute-0 nova_compute[251992]: 2025-12-06 07:15:54.311 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:15:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:54.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:54 compute-0 nova_compute[251992]: 2025-12-06 07:15:54.422 251996 DEBUG nova.network.neutron [-] [instance: 97016241-c559-4e76-9b89-a68445510fad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:15:54 compute-0 nova_compute[251992]: 2025-12-06 07:15:54.437 251996 INFO nova.compute.manager [-] [instance: 97016241-c559-4e76-9b89-a68445510fad] Took 1.65 seconds to deallocate network for instance.
Dec 06 07:15:54 compute-0 nova_compute[251992]: 2025-12-06 07:15:54.492 251996 DEBUG oslo_concurrency.lockutils [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:54 compute-0 nova_compute[251992]: 2025-12-06 07:15:54.493 251996 DEBUG oslo_concurrency.lockutils [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:54.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:54 compute-0 nova_compute[251992]: 2025-12-06 07:15:54.505 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:15:54 compute-0 nova_compute[251992]: 2025-12-06 07:15:54.506 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4343MB free_disk=20.81011962890625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:15:54 compute-0 nova_compute[251992]: 2025-12-06 07:15:54.506 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:54 compute-0 nova_compute[251992]: 2025-12-06 07:15:54.565 251996 DEBUG oslo_concurrency.processutils [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:54.578 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:15:54.579 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:15:54 compute-0 nova_compute[251992]: 2025-12-06 07:15:54.591 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2156316598' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2420440032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:15:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643282998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.036 251996 DEBUG oslo_concurrency.processutils [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.042 251996 DEBUG nova.compute.provider_tree [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.064 251996 DEBUG nova.scheduler.client.report [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.102 251996 DEBUG oslo_concurrency.lockutils [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.104 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.121 251996 DEBUG nova.compute.manager [req-72e2a3a9-cfc6-4c14-9bc9-19ce23cc1f58 req-e7fa0464-c373-4f1d-8516-90848094b31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Received event network-vif-plugged-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.121 251996 DEBUG oslo_concurrency.lockutils [req-72e2a3a9-cfc6-4c14-9bc9-19ce23cc1f58 req-e7fa0464-c373-4f1d-8516-90848094b31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "97016241-c559-4e76-9b89-a68445510fad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.122 251996 DEBUG oslo_concurrency.lockutils [req-72e2a3a9-cfc6-4c14-9bc9-19ce23cc1f58 req-e7fa0464-c373-4f1d-8516-90848094b31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.122 251996 DEBUG oslo_concurrency.lockutils [req-72e2a3a9-cfc6-4c14-9bc9-19ce23cc1f58 req-e7fa0464-c373-4f1d-8516-90848094b31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.122 251996 DEBUG nova.compute.manager [req-72e2a3a9-cfc6-4c14-9bc9-19ce23cc1f58 req-e7fa0464-c373-4f1d-8516-90848094b31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] No waiting events found dispatching network-vif-plugged-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.122 251996 WARNING nova.compute.manager [req-72e2a3a9-cfc6-4c14-9bc9-19ce23cc1f58 req-e7fa0464-c373-4f1d-8516-90848094b31c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Received unexpected event network-vif-plugged-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 for instance with vm_state deleted and task_state None.
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.136 251996 INFO nova.scheduler.client.report [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Deleted allocations for instance 97016241-c559-4e76-9b89-a68445510fad
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.148 251996 DEBUG nova.compute.manager [req-a2ddbc47-32da-450e-970c-ea6453c3bdd7 req-719e58a4-e873-4338-a1ba-149cd98969e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 97016241-c559-4e76-9b89-a68445510fad] Received event network-vif-deleted-b00851c3-68e1-49d4-b268-c5d4bc92aaa0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.189 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance b926cd32-34cf-4b7f-9908-8a7691a5d46a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.189 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.190 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.235 251996 DEBUG oslo_concurrency.lockutils [None req-b792c3db-b5a5-4e90-94e6-5ce5afd5e489 60c8cae0bd8d40059b7dc1f903f672b0 72a7e6711aab4e1eadba6423fa038649 - - default default] Lock "97016241-c559-4e76-9b89-a68445510fad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.274s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.237 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:15:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:15:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1702482958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.673 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.678 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.695 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.723 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:15:55 compute-0 nova_compute[251992]: 2025-12-06 07:15:55.723 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:15:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 346 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.6 MiB/s wr, 162 op/s
Dec 06 07:15:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:56.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:15:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:56.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:15:56 compute-0 ceph-mon[74339]: pgmap v1735: 305 pgs: 305 active+clean; 374 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 1.6 MiB/s wr, 89 op/s
Dec 06 07:15:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1643282998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3849921206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1702482958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:57 compute-0 nova_compute[251992]: 2025-12-06 07:15:57.293 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 295 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 248 KiB/s wr, 213 op/s
Dec 06 07:15:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:15:58.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:58 compute-0 nova_compute[251992]: 2025-12-06 07:15:58.473 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:15:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:15:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:15:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:15:58.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:15:58 compute-0 ceph-mon[74339]: pgmap v1736: 305 pgs: 305 active+clean; 346 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.6 MiB/s wr, 162 op/s
Dec 06 07:15:58 compute-0 nova_compute[251992]: 2025-12-06 07:15:58.724 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:58 compute-0 nova_compute[251992]: 2025-12-06 07:15:58.725 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:58 compute-0 nova_compute[251992]: 2025-12-06 07:15:58.725 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:58 compute-0 nova_compute[251992]: 2025-12-06 07:15:58.725 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:15:59 compute-0 ceph-mon[74339]: pgmap v1737: 305 pgs: 305 active+clean; 295 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 248 KiB/s wr, 213 op/s
Dec 06 07:15:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/404538913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:15:59 compute-0 ovn_controller[147168]: 2025-12-06T07:15:59Z|00203|binding|INFO|Releasing lport ad9e5490-4d4e-46f0-9eb3-53b36e933dda from this chassis (sb_readonly=0)
Dec 06 07:15:59 compute-0 nova_compute[251992]: 2025-12-06 07:15:59.704 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:15:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 295 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 64 KiB/s wr, 185 op/s
Dec 06 07:15:59 compute-0 ovn_controller[147168]: 2025-12-06T07:15:59Z|00204|binding|INFO|Releasing lport ad9e5490-4d4e-46f0-9eb3-53b36e933dda from this chassis (sb_readonly=0)
Dec 06 07:15:59 compute-0 nova_compute[251992]: 2025-12-06 07:15:59.921 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:16:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:00.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:16:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:00.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2563791832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:01 compute-0 sudo[296801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:01 compute-0 sudo[296801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:01 compute-0 sudo[296801]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:01 compute-0 sudo[296826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:01 compute-0 sudo[296826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:01 compute-0 sudo[296826]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:01.581 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 295 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 75 KiB/s wr, 218 op/s
Dec 06 07:16:02 compute-0 ceph-mon[74339]: pgmap v1738: 305 pgs: 305 active+clean; 295 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 64 KiB/s wr, 185 op/s
Dec 06 07:16:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2145519798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/638429195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:02 compute-0 nova_compute[251992]: 2025-12-06 07:16:02.298 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:16:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:02.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:16:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:02.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:03 compute-0 ceph-mon[74339]: pgmap v1739: 305 pgs: 305 active+clean; 295 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 75 KiB/s wr, 218 op/s
Dec 06 07:16:03 compute-0 nova_compute[251992]: 2025-12-06 07:16:03.474 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:03 compute-0 nova_compute[251992]: 2025-12-06 07:16:03.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:03 compute-0 nova_compute[251992]: 2025-12-06 07:16:03.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:16:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 295 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 41 KiB/s wr, 199 op/s
Dec 06 07:16:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:03.823 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:03.824 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:03.825 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:04.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:04.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:04 compute-0 nova_compute[251992]: 2025-12-06 07:16:04.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:04 compute-0 nova_compute[251992]: 2025-12-06 07:16:04.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:16:04 compute-0 nova_compute[251992]: 2025-12-06 07:16:04.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:16:04 compute-0 nova_compute[251992]: 2025-12-06 07:16:04.915 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:16:04 compute-0 nova_compute[251992]: 2025-12-06 07:16:04.916 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:16:04 compute-0 nova_compute[251992]: 2025-12-06 07:16:04.916 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:16:04 compute-0 nova_compute[251992]: 2025-12-06 07:16:04.916 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b926cd32-34cf-4b7f-9908-8a7691a5d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:05 compute-0 ceph-mon[74339]: pgmap v1740: 305 pgs: 305 active+clean; 295 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 41 KiB/s wr, 199 op/s
Dec 06 07:16:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 295 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 49 KiB/s wr, 221 op/s
Dec 06 07:16:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:06.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:06.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3858770152' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2759368417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:07 compute-0 nova_compute[251992]: 2025-12-06 07:16:07.203 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005352.2016442, 97016241-c559-4e76-9b89-a68445510fad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:16:07 compute-0 nova_compute[251992]: 2025-12-06 07:16:07.203 251996 INFO nova.compute.manager [-] [instance: 97016241-c559-4e76-9b89-a68445510fad] VM Stopped (Lifecycle Event)
Dec 06 07:16:07 compute-0 nova_compute[251992]: 2025-12-06 07:16:07.227 251996 DEBUG nova.compute.manager [None req-fd5f1a6c-664f-4edc-90fb-0c1450b11b7e - - - - - -] [instance: 97016241-c559-4e76-9b89-a68445510fad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:16:07 compute-0 nova_compute[251992]: 2025-12-06 07:16:07.273 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Updating instance_info_cache with network_info: [{"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:07 compute-0 nova_compute[251992]: 2025-12-06 07:16:07.300 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-b926cd32-34cf-4b7f-9908-8a7691a5d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:16:07 compute-0 nova_compute[251992]: 2025-12-06 07:16:07.300 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:16:07 compute-0 nova_compute[251992]: 2025-12-06 07:16:07.365 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 304 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.1 MiB/s wr, 228 op/s
Dec 06 07:16:08 compute-0 ceph-mon[74339]: pgmap v1741: 305 pgs: 305 active+clean; 295 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 49 KiB/s wr, 221 op/s
Dec 06 07:16:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:16:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:08.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:16:08 compute-0 nova_compute[251992]: 2025-12-06 07:16:08.476 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:08 compute-0 podman[296854]: 2025-12-06 07:16:08.48595469 +0000 UTC m=+0.143764919 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:16:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:08.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:16:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2814864671' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:16:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:16:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2814864671' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:16:09 compute-0 ceph-mon[74339]: pgmap v1742: 305 pgs: 305 active+clean; 304 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.1 MiB/s wr, 228 op/s
Dec 06 07:16:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2814864671' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:16:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 304 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.1 MiB/s wr, 135 op/s
Dec 06 07:16:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:16:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:10.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:16:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:16:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:10.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:16:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2814864671' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:16:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 281 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.2 MiB/s wr, 259 op/s
Dec 06 07:16:12 compute-0 ceph-mon[74339]: pgmap v1743: 305 pgs: 305 active+clean; 304 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.1 MiB/s wr, 135 op/s
Dec 06 07:16:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:16:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:12.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:16:12 compute-0 nova_compute[251992]: 2025-12-06 07:16:12.367 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:12.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:16:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:16:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:16:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:16:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:16:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:16:13 compute-0 nova_compute[251992]: 2025-12-06 07:16:13.478 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 281 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Dec 06 07:16:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:14 compute-0 ceph-mon[74339]: pgmap v1744: 305 pgs: 305 active+clean; 281 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.2 MiB/s wr, 259 op/s
Dec 06 07:16:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:14.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:14 compute-0 podman[296883]: 2025-12-06 07:16:14.397395883 +0000 UTC m=+0.051094516 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:16:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:14.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:14 compute-0 podman[296902]: 2025-12-06 07:16:14.522064279 +0000 UTC m=+0.083409267 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:16:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3465677993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:15 compute-0 ceph-mon[74339]: pgmap v1745: 305 pgs: 305 active+clean; 281 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Dec 06 07:16:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 281 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 232 op/s
Dec 06 07:16:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:16.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:16 compute-0 sudo[296923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:16 compute-0 sudo[296923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:16 compute-0 sudo[296923]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:16.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:16 compute-0 sudo[296949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:16:16 compute-0 sudo[296949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:16 compute-0 sudo[296949]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:16 compute-0 sudo[296974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:16 compute-0 sudo[296974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:16 compute-0 sudo[296974]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:16 compute-0 sudo[296999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:16:16 compute-0 sudo[296999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:17 compute-0 sudo[296999]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:16:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:16:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:16:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:16:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:16:17 compute-0 nova_compute[251992]: 2025-12-06 07:16:17.371 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 281 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 211 op/s
Dec 06 07:16:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:16:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7d3d6c8a-42da-49ab-b761-b73d5c05ea56 does not exist
Dec 06 07:16:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 308595c6-36d2-4f9e-8a54-d581d033be14 does not exist
Dec 06 07:16:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 316aa737-e8e1-425e-bd73-c1bc3a695836 does not exist
Dec 06 07:16:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:16:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:16:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:16:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:16:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:16:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:16:17 compute-0 ceph-mon[74339]: pgmap v1746: 305 pgs: 305 active+clean; 281 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 232 op/s
Dec 06 07:16:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:16:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:16:17 compute-0 sudo[297056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:17 compute-0 sudo[297056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:17 compute-0 sudo[297056]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:18 compute-0 sudo[297081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:16:18 compute-0 sudo[297081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:18 compute-0 sudo[297081]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:18 compute-0 sudo[297106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:18 compute-0 sudo[297106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:18 compute-0 sudo[297106]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:18 compute-0 sudo[297131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:16:18 compute-0 sudo[297131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:18.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:16:18
Dec 06 07:16:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:16:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:16:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'images', 'backups', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms']
Dec 06 07:16:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:16:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:18.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:18 compute-0 nova_compute[251992]: 2025-12-06 07:16:18.541 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:18 compute-0 podman[297196]: 2025-12-06 07:16:18.582020971 +0000 UTC m=+0.025570224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:16:18 compute-0 podman[297196]: 2025-12-06 07:16:18.846390752 +0000 UTC m=+0.289939955 container create ef3e4078f414835950f9d66067c5f0b59cd9dcac62beada6b8fecf8cd2aebdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:16:18 compute-0 systemd[1]: Started libpod-conmon-ef3e4078f414835950f9d66067c5f0b59cd9dcac62beada6b8fecf8cd2aebdf4.scope.
Dec 06 07:16:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.386 251996 DEBUG oslo_concurrency.lockutils [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.387 251996 DEBUG oslo_concurrency.lockutils [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.388 251996 DEBUG oslo_concurrency.lockutils [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.388 251996 DEBUG oslo_concurrency.lockutils [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.388 251996 DEBUG oslo_concurrency.lockutils [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.389 251996 INFO nova.compute.manager [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Terminating instance
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.391 251996 DEBUG nova.compute.manager [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:16:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:19 compute-0 ceph-mon[74339]: pgmap v1747: 305 pgs: 305 active+clean; 281 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 211 op/s
Dec 06 07:16:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:16:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:16:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:16:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:16:19 compute-0 podman[297196]: 2025-12-06 07:16:19.421151916 +0000 UTC m=+0.864701169 container init ef3e4078f414835950f9d66067c5f0b59cd9dcac62beada6b8fecf8cd2aebdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_morse, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:16:19 compute-0 podman[297196]: 2025-12-06 07:16:19.430051294 +0000 UTC m=+0.873600497 container start ef3e4078f414835950f9d66067c5f0b59cd9dcac62beada6b8fecf8cd2aebdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_morse, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:16:19 compute-0 podman[297196]: 2025-12-06 07:16:19.434351904 +0000 UTC m=+0.877901147 container attach ef3e4078f414835950f9d66067c5f0b59cd9dcac62beada6b8fecf8cd2aebdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_morse, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:16:19 compute-0 exciting_morse[297212]: 167 167
Dec 06 07:16:19 compute-0 systemd[1]: libpod-ef3e4078f414835950f9d66067c5f0b59cd9dcac62beada6b8fecf8cd2aebdf4.scope: Deactivated successfully.
Dec 06 07:16:19 compute-0 kernel: tap3a177776-2c (unregistering): left promiscuous mode
Dec 06 07:16:19 compute-0 NetworkManager[48965]: <info>  [1765005379.4735] device (tap3a177776-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.482 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:19 compute-0 ovn_controller[147168]: 2025-12-06T07:16:19Z|00205|binding|INFO|Releasing lport 3a177776-2c63-4dc7-8f3b-d4b4576299bb from this chassis (sb_readonly=0)
Dec 06 07:16:19 compute-0 ovn_controller[147168]: 2025-12-06T07:16:19Z|00206|binding|INFO|Setting lport 3a177776-2c63-4dc7-8f3b-d4b4576299bb down in Southbound
Dec 06 07:16:19 compute-0 ovn_controller[147168]: 2025-12-06T07:16:19Z|00207|binding|INFO|Removing iface tap3a177776-2c ovn-installed in OVS
Dec 06 07:16:19 compute-0 podman[297217]: 2025-12-06 07:16:19.487487415 +0000 UTC m=+0.029532794 container died ef3e4078f414835950f9d66067c5f0b59cd9dcac62beada6b8fecf8cd2aebdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_morse, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:16:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:19.493 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:d9:aa 10.100.0.5'], port_security=['fa:16:3e:0d:d9:aa 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b926cd32-34cf-4b7f-9908-8a7691a5d46a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-facf815c-af05-4eae-8215-596b89b048ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ba80f0b33d04d6d9508bc18e9b1914b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '06e30c9c-0168-4e04-b100-fe33575ec890 50152102-b2a4-4cf5-8c08-711fd2cb8e9b d16e42c8-d674-4f11-a30e-6439bf177e6b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e725b401-7789-49e6-93b9-c5c8c58adad1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=3a177776-2c63-4dc7-8f3b-d4b4576299bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:16:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:19.495 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 3a177776-2c63-4dc7-8f3b-d4b4576299bb in datapath facf815c-af05-4eae-8215-596b89b048ab unbound from our chassis
Dec 06 07:16:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:19.496 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network facf815c-af05-4eae-8215-596b89b048ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:16:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:19.498 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fe504fe2-fdb8-4516-8bc6-000e8c14047d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:19.499 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-facf815c-af05-4eae-8215-596b89b048ab namespace which is not needed anymore
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.503 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:19 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000045.scope: Deactivated successfully.
Dec 06 07:16:19 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000045.scope: Consumed 16.584s CPU time.
Dec 06 07:16:19 compute-0 systemd-machined[212986]: Machine qemu-30-instance-00000045 terminated.
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.665 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.673 251996 INFO nova.virt.libvirt.driver [-] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Instance destroyed successfully.
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.674 251996 DEBUG nova.objects.instance [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lazy-loading 'resources' on Instance uuid b926cd32-34cf-4b7f-9908-8a7691a5d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.699 251996 DEBUG nova.virt.libvirt.vif [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1250940394',display_name='tempest-SecurityGroupsTestJSON-server-1250940394',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1250940394',id=69,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:15:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ba80f0b33d04d6d9508bc18e9b1914b',ramdisk_id='',reservation_id='r-l078b0ff',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-409098844',owner_user_name='tempest-SecurityGroupsTestJSON-409098844-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:15:13Z,user_data=None,user_id='67604a2c995248f8931119287d416e1c',uuid=b926cd32-34cf-4b7f-9908-8a7691a5d46a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.700 251996 DEBUG nova.network.os_vif_util [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converting VIF {"id": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "address": "fa:16:3e:0d:d9:aa", "network": {"id": "facf815c-af05-4eae-8215-596b89b048ab", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-869864316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ba80f0b33d04d6d9508bc18e9b1914b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a177776-2c", "ovs_interfaceid": "3a177776-2c63-4dc7-8f3b-d4b4576299bb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.701 251996 DEBUG nova.network.os_vif_util [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:d9:aa,bridge_name='br-int',has_traffic_filtering=True,id=3a177776-2c63-4dc7-8f3b-d4b4576299bb,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a177776-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.702 251996 DEBUG os_vif [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:d9:aa,bridge_name='br-int',has_traffic_filtering=True,id=3a177776-2c63-4dc7-8f3b-d4b4576299bb,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a177776-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.704 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.705 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a177776-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.706 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.709 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.711 251996 INFO os_vif [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:d9:aa,bridge_name='br-int',has_traffic_filtering=True,id=3a177776-2c63-4dc7-8f3b-d4b4576299bb,network=Network(facf815c-af05-4eae-8215-596b89b048ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a177776-2c')
Dec 06 07:16:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 281 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.0 MiB/s wr, 131 op/s
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.784 251996 DEBUG nova.compute.manager [req-c7e5adfd-e864-463b-aa40-21eebbee19c4 req-e523ebaf-f8a6-4ebf-9c61-ade3ca081969 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Received event network-vif-unplugged-3a177776-2c63-4dc7-8f3b-d4b4576299bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.785 251996 DEBUG oslo_concurrency.lockutils [req-c7e5adfd-e864-463b-aa40-21eebbee19c4 req-e523ebaf-f8a6-4ebf-9c61-ade3ca081969 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce00dee81c7cef786cccc89210d9c2ba342aeda63a72a89b44eb60842d4dac59-merged.mount: Deactivated successfully.
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.785 251996 DEBUG oslo_concurrency.lockutils [req-c7e5adfd-e864-463b-aa40-21eebbee19c4 req-e523ebaf-f8a6-4ebf-9c61-ade3ca081969 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.785 251996 DEBUG oslo_concurrency.lockutils [req-c7e5adfd-e864-463b-aa40-21eebbee19c4 req-e523ebaf-f8a6-4ebf-9c61-ade3ca081969 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.785 251996 DEBUG nova.compute.manager [req-c7e5adfd-e864-463b-aa40-21eebbee19c4 req-e523ebaf-f8a6-4ebf-9c61-ade3ca081969 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] No waiting events found dispatching network-vif-unplugged-3a177776-2c63-4dc7-8f3b-d4b4576299bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:19 compute-0 nova_compute[251992]: 2025-12-06 07:16:19.786 251996 DEBUG nova.compute.manager [req-c7e5adfd-e864-463b-aa40-21eebbee19c4 req-e523ebaf-f8a6-4ebf-9c61-ade3ca081969 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Received event network-vif-unplugged-3a177776-2c63-4dc7-8f3b-d4b4576299bb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:16:19 compute-0 podman[297217]: 2025-12-06 07:16:19.808130935 +0000 UTC m=+0.350176304 container remove ef3e4078f414835950f9d66067c5f0b59cd9dcac62beada6b8fecf8cd2aebdf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:16:19 compute-0 systemd[1]: libpod-conmon-ef3e4078f414835950f9d66067c5f0b59cd9dcac62beada6b8fecf8cd2aebdf4.scope: Deactivated successfully.
Dec 06 07:16:20 compute-0 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[296356]: [NOTICE]   (296369) : haproxy version is 2.8.14-c23fe91
Dec 06 07:16:20 compute-0 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[296356]: [NOTICE]   (296369) : path to executable is /usr/sbin/haproxy
Dec 06 07:16:20 compute-0 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[296356]: [WARNING]  (296369) : Exiting Master process...
Dec 06 07:16:20 compute-0 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[296356]: [WARNING]  (296369) : Exiting Master process...
Dec 06 07:16:20 compute-0 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[296356]: [ALERT]    (296369) : Current worker (296373) exited with code 143 (Terminated)
Dec 06 07:16:20 compute-0 neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab[296356]: [WARNING]  (296369) : All workers exited. Exiting... (0)
Dec 06 07:16:20 compute-0 systemd[1]: libpod-274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e.scope: Deactivated successfully.
Dec 06 07:16:20 compute-0 podman[297285]: 2025-12-06 07:16:20.178646404 +0000 UTC m=+0.298410740 container died 274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:16:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:20.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e-userdata-shm.mount: Deactivated successfully.
Dec 06 07:16:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c53668e93bca4e9a14612113ebcc51e96b66c0a915618c72380795b44fcdb40d-merged.mount: Deactivated successfully.
Dec 06 07:16:20 compute-0 podman[297285]: 2025-12-06 07:16:20.388471705 +0000 UTC m=+0.508236031 container cleanup 274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:16:20 compute-0 systemd[1]: libpod-conmon-274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e.scope: Deactivated successfully.
Dec 06 07:16:20 compute-0 podman[297303]: 2025-12-06 07:16:20.392639491 +0000 UTC m=+0.459145881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:16:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:20.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:20 compute-0 podman[297303]: 2025-12-06 07:16:20.949811125 +0000 UTC m=+1.016317455 container create 2b03b63a31969d2982f7ddcfaf2627c2aa7df91a388dc4283d30cfd31042410c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:16:21 compute-0 ceph-mon[74339]: pgmap v1748: 305 pgs: 305 active+clean; 281 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.0 MiB/s wr, 131 op/s
Dec 06 07:16:21 compute-0 systemd[1]: Started libpod-conmon-2b03b63a31969d2982f7ddcfaf2627c2aa7df91a388dc4283d30cfd31042410c.scope.
Dec 06 07:16:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf16e3f1aab690a51455bd244b2cadf15cc912b174fd844c4881bf418523187/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf16e3f1aab690a51455bd244b2cadf15cc912b174fd844c4881bf418523187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf16e3f1aab690a51455bd244b2cadf15cc912b174fd844c4881bf418523187/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf16e3f1aab690a51455bd244b2cadf15cc912b174fd844c4881bf418523187/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccf16e3f1aab690a51455bd244b2cadf15cc912b174fd844c4881bf418523187/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:21 compute-0 sudo[297350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:21 compute-0 sudo[297350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:21 compute-0 sudo[297350]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:21 compute-0 sudo[297376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:21 compute-0 sudo[297376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:21 compute-0 sudo[297376]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:21 compute-0 podman[297303]: 2025-12-06 07:16:21.643639949 +0000 UTC m=+1.710146329 container init 2b03b63a31969d2982f7ddcfaf2627c2aa7df91a388dc4283d30cfd31042410c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 07:16:21 compute-0 podman[297303]: 2025-12-06 07:16:21.65192016 +0000 UTC m=+1.718426490 container start 2b03b63a31969d2982f7ddcfaf2627c2aa7df91a388dc4283d30cfd31042410c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:16:21 compute-0 podman[297333]: 2025-12-06 07:16:21.655065857 +0000 UTC m=+1.244112596 container remove 274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:16:21 compute-0 podman[297303]: 2025-12-06 07:16:21.658571555 +0000 UTC m=+1.725077885 container attach 2b03b63a31969d2982f7ddcfaf2627c2aa7df91a388dc4283d30cfd31042410c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ishizaka, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:16:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:21.663 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2a148aa2-1299-4e96-849d-d2bde259ffa0]: (4, ('Sat Dec  6 07:16:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab (274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e)\n274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e\nSat Dec  6 07:16:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-facf815c-af05-4eae-8215-596b89b048ab (274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e)\n274eeeb01dca466a9e2cb7509bc9da98fdee423e87d77dbb3339cced38a5025e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:21.666 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1dcf815c-fd35-409b-9a80-fee7996fb203]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:21.667 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfacf815c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:21 compute-0 nova_compute[251992]: 2025-12-06 07:16:21.669 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:21 compute-0 kernel: tapfacf815c-a0: left promiscuous mode
Dec 06 07:16:21 compute-0 nova_compute[251992]: 2025-12-06 07:16:21.683 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:21 compute-0 nova_compute[251992]: 2025-12-06 07:16:21.684 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:21.686 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[18b721dc-bf7b-424e-8518-97bede0fa6a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:21.701 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1ad7c841-4115-49ee-b867-2417d843f3f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:21.703 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3e85ef8b-a94b-447d-baf7-b279bae64936]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:21.716 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[29366cd3-4a73-4f26-9b2f-9530bfcddd54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558515, 'reachable_time': 24738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297406, 'error': None, 'target': 'ovnmeta-facf815c-af05-4eae-8215-596b89b048ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:21 compute-0 systemd[1]: run-netns-ovnmeta\x2dfacf815c\x2daf05\x2d4eae\x2d8215\x2d596b89b048ab.mount: Deactivated successfully.
Dec 06 07:16:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:21.719 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-facf815c-af05-4eae-8215-596b89b048ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:16:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:21.719 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[988ddce1-faaf-4ed3-8ce2-8be78b0f8b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 258 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.1 MiB/s wr, 182 op/s
Dec 06 07:16:21 compute-0 nova_compute[251992]: 2025-12-06 07:16:21.874 251996 DEBUG nova.compute.manager [req-d029623c-bea6-40ca-95a4-8a1d23728ad8 req-d13a35f8-c853-490e-ac90-d71f131d33e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Received event network-vif-plugged-3a177776-2c63-4dc7-8f3b-d4b4576299bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:21 compute-0 nova_compute[251992]: 2025-12-06 07:16:21.874 251996 DEBUG oslo_concurrency.lockutils [req-d029623c-bea6-40ca-95a4-8a1d23728ad8 req-d13a35f8-c853-490e-ac90-d71f131d33e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:21 compute-0 nova_compute[251992]: 2025-12-06 07:16:21.874 251996 DEBUG oslo_concurrency.lockutils [req-d029623c-bea6-40ca-95a4-8a1d23728ad8 req-d13a35f8-c853-490e-ac90-d71f131d33e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:21 compute-0 nova_compute[251992]: 2025-12-06 07:16:21.874 251996 DEBUG oslo_concurrency.lockutils [req-d029623c-bea6-40ca-95a4-8a1d23728ad8 req-d13a35f8-c853-490e-ac90-d71f131d33e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:21 compute-0 nova_compute[251992]: 2025-12-06 07:16:21.874 251996 DEBUG nova.compute.manager [req-d029623c-bea6-40ca-95a4-8a1d23728ad8 req-d13a35f8-c853-490e-ac90-d71f131d33e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] No waiting events found dispatching network-vif-plugged-3a177776-2c63-4dc7-8f3b-d4b4576299bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:16:21 compute-0 nova_compute[251992]: 2025-12-06 07:16:21.875 251996 WARNING nova.compute.manager [req-d029623c-bea6-40ca-95a4-8a1d23728ad8 req-d13a35f8-c853-490e-ac90-d71f131d33e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Received unexpected event network-vif-plugged-3a177776-2c63-4dc7-8f3b-d4b4576299bb for instance with vm_state active and task_state deleting.
Dec 06 07:16:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:22.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:22 compute-0 inspiring_ishizaka[297347]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:16:22 compute-0 inspiring_ishizaka[297347]: --> relative data size: 1.0
Dec 06 07:16:22 compute-0 inspiring_ishizaka[297347]: --> All data devices are unavailable
Dec 06 07:16:22 compute-0 nova_compute[251992]: 2025-12-06 07:16:22.471 251996 INFO nova.virt.libvirt.driver [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Deleting instance files /var/lib/nova/instances/b926cd32-34cf-4b7f-9908-8a7691a5d46a_del
Dec 06 07:16:22 compute-0 nova_compute[251992]: 2025-12-06 07:16:22.472 251996 INFO nova.virt.libvirt.driver [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Deletion of /var/lib/nova/instances/b926cd32-34cf-4b7f-9908-8a7691a5d46a_del complete
Dec 06 07:16:22 compute-0 systemd[1]: libpod-2b03b63a31969d2982f7ddcfaf2627c2aa7df91a388dc4283d30cfd31042410c.scope: Deactivated successfully.
Dec 06 07:16:22 compute-0 conmon[297347]: conmon 2b03b63a31969d2982f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b03b63a31969d2982f7ddcfaf2627c2aa7df91a388dc4283d30cfd31042410c.scope/container/memory.events
Dec 06 07:16:22 compute-0 podman[297303]: 2025-12-06 07:16:22.506793255 +0000 UTC m=+2.573299615 container died 2b03b63a31969d2982f7ddcfaf2627c2aa7df91a388dc4283d30cfd31042410c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 07:16:22 compute-0 nova_compute[251992]: 2025-12-06 07:16:22.521 251996 INFO nova.compute.manager [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Took 3.13 seconds to destroy the instance on the hypervisor.
Dec 06 07:16:22 compute-0 nova_compute[251992]: 2025-12-06 07:16:22.521 251996 DEBUG oslo.service.loopingcall [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:16:22 compute-0 nova_compute[251992]: 2025-12-06 07:16:22.522 251996 DEBUG nova.compute.manager [-] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:16:22 compute-0 nova_compute[251992]: 2025-12-06 07:16:22.522 251996 DEBUG nova.network.neutron [-] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:16:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:22.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccf16e3f1aab690a51455bd244b2cadf15cc912b174fd844c4881bf418523187-merged.mount: Deactivated successfully.
Dec 06 07:16:23 compute-0 podman[297303]: 2025-12-06 07:16:23.015001663 +0000 UTC m=+3.081507993 container remove 2b03b63a31969d2982f7ddcfaf2627c2aa7df91a388dc4283d30cfd31042410c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 07:16:23 compute-0 systemd[1]: libpod-conmon-2b03b63a31969d2982f7ddcfaf2627c2aa7df91a388dc4283d30cfd31042410c.scope: Deactivated successfully.
Dec 06 07:16:23 compute-0 sudo[297131]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:23 compute-0 sudo[297430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:23 compute-0 sudo[297430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:23 compute-0 sudo[297430]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:23 compute-0 sudo[297455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:16:23 compute-0 sudo[297455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:23 compute-0 sudo[297455]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:23 compute-0 sudo[297480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:23 compute-0 sudo[297480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:23 compute-0 sudo[297480]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:23 compute-0 sudo[297505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:16:23 compute-0 sudo[297505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:16:23 compute-0 nova_compute[251992]: 2025-12-06 07:16:23.543 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:23 compute-0 podman[297568]: 2025-12-06 07:16:23.60025461 +0000 UTC m=+0.022432896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:16:23 compute-0 ceph-mon[74339]: pgmap v1749: 305 pgs: 305 active+clean; 258 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.1 MiB/s wr, 182 op/s
Dec 06 07:16:23 compute-0 podman[297568]: 2025-12-06 07:16:23.746227791 +0000 UTC m=+0.168406057 container create 52fd90d9da817d444eb24b852a430566b7cb5b0bb2a67099d1ff4efa4992ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sutherland, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:16:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 258 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 438 KiB/s rd, 26 KiB/s wr, 58 op/s
Dec 06 07:16:23 compute-0 systemd[1]: Started libpod-conmon-52fd90d9da817d444eb24b852a430566b7cb5b0bb2a67099d1ff4efa4992ea84.scope.
Dec 06 07:16:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.121 251996 DEBUG nova.network.neutron [-] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.151 251996 INFO nova.compute.manager [-] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Took 1.63 seconds to deallocate network for instance.
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.211 251996 DEBUG oslo_concurrency.lockutils [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.212 251996 DEBUG oslo_concurrency.lockutils [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.231 251996 DEBUG nova.compute.manager [req-a785a52c-56ba-46e4-9e85-b954127593f9 req-f4b57c98-38bf-4d7e-ab52-390bf08e195d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Received event network-vif-deleted-3a177776-2c63-4dc7-8f3b-d4b4576299bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.268 251996 DEBUG oslo_concurrency.processutils [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:24.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:24.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.708 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:16:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/789104311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:24 compute-0 podman[297568]: 2025-12-06 07:16:24.71716162 +0000 UTC m=+1.139339896 container init 52fd90d9da817d444eb24b852a430566b7cb5b0bb2a67099d1ff4efa4992ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:16:24 compute-0 podman[297568]: 2025-12-06 07:16:24.726752518 +0000 UTC m=+1.148930794 container start 52fd90d9da817d444eb24b852a430566b7cb5b0bb2a67099d1ff4efa4992ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sutherland, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:16:24 compute-0 mystifying_sutherland[297583]: 167 167
Dec 06 07:16:24 compute-0 systemd[1]: libpod-52fd90d9da817d444eb24b852a430566b7cb5b0bb2a67099d1ff4efa4992ea84.scope: Deactivated successfully.
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.735 251996 DEBUG oslo_concurrency.processutils [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.740 251996 DEBUG nova.compute.provider_tree [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.764 251996 DEBUG nova.scheduler.client.report [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.804 251996 DEBUG oslo_concurrency.lockutils [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:24 compute-0 podman[297568]: 2025-12-06 07:16:24.811328606 +0000 UTC m=+1.233506892 container attach 52fd90d9da817d444eb24b852a430566b7cb5b0bb2a67099d1ff4efa4992ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:16:24 compute-0 podman[297568]: 2025-12-06 07:16:24.812757165 +0000 UTC m=+1.234935421 container died 52fd90d9da817d444eb24b852a430566b7cb5b0bb2a67099d1ff4efa4992ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sutherland, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.829 251996 INFO nova.scheduler.client.report [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Deleted allocations for instance b926cd32-34cf-4b7f-9908-8a7691a5d46a
Dec 06 07:16:24 compute-0 ceph-mon[74339]: pgmap v1750: 305 pgs: 305 active+clean; 258 MiB data, 706 MiB used, 20 GiB / 21 GiB avail; 438 KiB/s rd, 26 KiB/s wr, 58 op/s
Dec 06 07:16:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/789104311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:24 compute-0 nova_compute[251992]: 2025-12-06 07:16:24.946 251996 DEBUG oslo_concurrency.lockutils [None req-cb71a52d-0353-4609-9c94-11c91ab6a1bf 67604a2c995248f8931119287d416e1c 4ba80f0b33d04d6d9508bc18e9b1914b - - default default] Lock "b926cd32-34cf-4b7f-9908-8a7691a5d46a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bd70ffed478a9ba5d74d7064987c295cf7e0d578d0456201fd8a5ef5dfce373-merged.mount: Deactivated successfully.
Dec 06 07:16:25 compute-0 podman[297568]: 2025-12-06 07:16:25.511191199 +0000 UTC m=+1.933369455 container remove 52fd90d9da817d444eb24b852a430566b7cb5b0bb2a67099d1ff4efa4992ea84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sutherland, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:16:25 compute-0 systemd[1]: libpod-conmon-52fd90d9da817d444eb24b852a430566b7cb5b0bb2a67099d1ff4efa4992ea84.scope: Deactivated successfully.
Dec 06 07:16:25 compute-0 podman[297631]: 2025-12-06 07:16:25.640268157 +0000 UTC m=+0.020862393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 209 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 574 KiB/s rd, 27 KiB/s wr, 72 op/s
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004345366880699795 of space, bias 1.0, pg target 1.3036100642099384 quantized to 32 (current 32)
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:16:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:16:25 compute-0 podman[297631]: 2025-12-06 07:16:25.783998384 +0000 UTC m=+0.164592650 container create e7aadb94f95c9578d6ac3aebe7ee7b7f8b4387a9e17996ce0659423062ab97bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wu, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:16:25 compute-0 systemd[1]: Started libpod-conmon-e7aadb94f95c9578d6ac3aebe7ee7b7f8b4387a9e17996ce0659423062ab97bf.scope.
Dec 06 07:16:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8fb597b2f5b8c9283ef7ddcae66ee9d77fc0bac85fdc9a87f70cd59ea988b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8fb597b2f5b8c9283ef7ddcae66ee9d77fc0bac85fdc9a87f70cd59ea988b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8fb597b2f5b8c9283ef7ddcae66ee9d77fc0bac85fdc9a87f70cd59ea988b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8fb597b2f5b8c9283ef7ddcae66ee9d77fc0bac85fdc9a87f70cd59ea988b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:25 compute-0 podman[297631]: 2025-12-06 07:16:25.886549804 +0000 UTC m=+0.267144050 container init e7aadb94f95c9578d6ac3aebe7ee7b7f8b4387a9e17996ce0659423062ab97bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:16:25 compute-0 podman[297631]: 2025-12-06 07:16:25.893368503 +0000 UTC m=+0.273962729 container start e7aadb94f95c9578d6ac3aebe7ee7b7f8b4387a9e17996ce0659423062ab97bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wu, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:16:25 compute-0 podman[297631]: 2025-12-06 07:16:25.897245341 +0000 UTC m=+0.277839567 container attach e7aadb94f95c9578d6ac3aebe7ee7b7f8b4387a9e17996ce0659423062ab97bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wu, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 07:16:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:26.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:26.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:26 compute-0 hardcore_wu[297647]: {
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:     "0": [
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:         {
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             "devices": [
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "/dev/loop3"
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             ],
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             "lv_name": "ceph_lv0",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             "lv_size": "7511998464",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             "name": "ceph_lv0",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             "tags": {
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "ceph.cluster_name": "ceph",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "ceph.crush_device_class": "",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "ceph.encrypted": "0",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "ceph.osd_id": "0",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "ceph.type": "block",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:                 "ceph.vdo": "0"
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             },
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             "type": "block",
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:             "vg_name": "ceph_vg0"
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:         }
Dec 06 07:16:26 compute-0 hardcore_wu[297647]:     ]
Dec 06 07:16:26 compute-0 hardcore_wu[297647]: }
Dec 06 07:16:26 compute-0 systemd[1]: libpod-e7aadb94f95c9578d6ac3aebe7ee7b7f8b4387a9e17996ce0659423062ab97bf.scope: Deactivated successfully.
Dec 06 07:16:26 compute-0 podman[297631]: 2025-12-06 07:16:26.679087199 +0000 UTC m=+1.059681445 container died e7aadb94f95c9578d6ac3aebe7ee7b7f8b4387a9e17996ce0659423062ab97bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wu, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 07:16:27 compute-0 ceph-mon[74339]: pgmap v1751: 305 pgs: 305 active+clean; 209 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 574 KiB/s rd, 27 KiB/s wr, 72 op/s
Dec 06 07:16:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e8fb597b2f5b8c9283ef7ddcae66ee9d77fc0bac85fdc9a87f70cd59ea988b0-merged.mount: Deactivated successfully.
Dec 06 07:16:27 compute-0 podman[297631]: 2025-12-06 07:16:27.209062616 +0000 UTC m=+1.589656842 container remove e7aadb94f95c9578d6ac3aebe7ee7b7f8b4387a9e17996ce0659423062ab97bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:16:27 compute-0 sudo[297505]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:27 compute-0 systemd[1]: libpod-conmon-e7aadb94f95c9578d6ac3aebe7ee7b7f8b4387a9e17996ce0659423062ab97bf.scope: Deactivated successfully.
Dec 06 07:16:27 compute-0 sudo[297669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:27 compute-0 sudo[297669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:27 compute-0 sudo[297669]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:27 compute-0 sudo[297694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:16:27 compute-0 sudo[297694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:27 compute-0 sudo[297694]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:27 compute-0 sudo[297719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:27 compute-0 sudo[297719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:27 compute-0 sudo[297719]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:27 compute-0 sudo[297744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:16:27 compute-0 sudo[297744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 202 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 35 KiB/s wr, 81 op/s
Dec 06 07:16:27 compute-0 podman[297809]: 2025-12-06 07:16:27.783235284 +0000 UTC m=+0.045098259 container create ae5f562193b2002d539bc111c65d49f343942c0474d728013160f1502ad1e981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curie, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:16:27 compute-0 systemd[1]: Started libpod-conmon-ae5f562193b2002d539bc111c65d49f343942c0474d728013160f1502ad1e981.scope.
Dec 06 07:16:27 compute-0 podman[297809]: 2025-12-06 07:16:27.760810169 +0000 UTC m=+0.022673174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:16:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:16:27 compute-0 podman[297809]: 2025-12-06 07:16:27.876947486 +0000 UTC m=+0.138810481 container init ae5f562193b2002d539bc111c65d49f343942c0474d728013160f1502ad1e981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:16:27 compute-0 podman[297809]: 2025-12-06 07:16:27.88320563 +0000 UTC m=+0.145068605 container start ae5f562193b2002d539bc111c65d49f343942c0474d728013160f1502ad1e981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curie, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 07:16:27 compute-0 podman[297809]: 2025-12-06 07:16:27.886336658 +0000 UTC m=+0.148199653 container attach ae5f562193b2002d539bc111c65d49f343942c0474d728013160f1502ad1e981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:16:27 compute-0 musing_curie[297825]: 167 167
Dec 06 07:16:27 compute-0 systemd[1]: libpod-ae5f562193b2002d539bc111c65d49f343942c0474d728013160f1502ad1e981.scope: Deactivated successfully.
Dec 06 07:16:27 compute-0 podman[297809]: 2025-12-06 07:16:27.890127504 +0000 UTC m=+0.151990489 container died ae5f562193b2002d539bc111c65d49f343942c0474d728013160f1502ad1e981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 07:16:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cd7c4c5a6dbd058504f5d79032364637d0419624fb426e90053f0a71d453459-merged.mount: Deactivated successfully.
Dec 06 07:16:27 compute-0 podman[297809]: 2025-12-06 07:16:27.980949256 +0000 UTC m=+0.242812231 container remove ae5f562193b2002d539bc111c65d49f343942c0474d728013160f1502ad1e981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_curie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 07:16:27 compute-0 systemd[1]: libpod-conmon-ae5f562193b2002d539bc111c65d49f343942c0474d728013160f1502ad1e981.scope: Deactivated successfully.
Dec 06 07:16:28 compute-0 podman[297849]: 2025-12-06 07:16:28.133659564 +0000 UTC m=+0.039432701 container create c8e0ff64e8f53bbe7a115f6062f305660c1acdfcd69e7c6d2deb247ed6a135a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_grothendieck, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:16:28 compute-0 systemd[1]: Started libpod-conmon-c8e0ff64e8f53bbe7a115f6062f305660c1acdfcd69e7c6d2deb247ed6a135a2.scope.
Dec 06 07:16:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99271058e6b2a466a374dc304e68cb48808f76e4e802bf31b9bb2904af778734/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99271058e6b2a466a374dc304e68cb48808f76e4e802bf31b9bb2904af778734/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99271058e6b2a466a374dc304e68cb48808f76e4e802bf31b9bb2904af778734/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99271058e6b2a466a374dc304e68cb48808f76e4e802bf31b9bb2904af778734/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:28 compute-0 podman[297849]: 2025-12-06 07:16:28.199982092 +0000 UTC m=+0.105755249 container init c8e0ff64e8f53bbe7a115f6062f305660c1acdfcd69e7c6d2deb247ed6a135a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 07:16:28 compute-0 podman[297849]: 2025-12-06 07:16:28.20882531 +0000 UTC m=+0.114598447 container start c8e0ff64e8f53bbe7a115f6062f305660c1acdfcd69e7c6d2deb247ed6a135a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_grothendieck, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 07:16:28 compute-0 podman[297849]: 2025-12-06 07:16:28.116309609 +0000 UTC m=+0.022082776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:16:28 compute-0 podman[297849]: 2025-12-06 07:16:28.213046637 +0000 UTC m=+0.118819784 container attach c8e0ff64e8f53bbe7a115f6062f305660c1acdfcd69e7c6d2deb247ed6a135a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:16:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:28.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:28 compute-0 nova_compute[251992]: 2025-12-06 07:16:28.544 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:28.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:29 compute-0 trusting_grothendieck[297865]: {
Dec 06 07:16:29 compute-0 trusting_grothendieck[297865]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:16:29 compute-0 trusting_grothendieck[297865]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:16:29 compute-0 trusting_grothendieck[297865]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:16:29 compute-0 trusting_grothendieck[297865]:         "osd_id": 0,
Dec 06 07:16:29 compute-0 trusting_grothendieck[297865]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:16:29 compute-0 trusting_grothendieck[297865]:         "type": "bluestore"
Dec 06 07:16:29 compute-0 trusting_grothendieck[297865]:     }
Dec 06 07:16:29 compute-0 trusting_grothendieck[297865]: }
Dec 06 07:16:29 compute-0 systemd[1]: libpod-c8e0ff64e8f53bbe7a115f6062f305660c1acdfcd69e7c6d2deb247ed6a135a2.scope: Deactivated successfully.
Dec 06 07:16:29 compute-0 conmon[297865]: conmon c8e0ff64e8f53bbe7a11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8e0ff64e8f53bbe7a115f6062f305660c1acdfcd69e7c6d2deb247ed6a135a2.scope/container/memory.events
Dec 06 07:16:29 compute-0 podman[297849]: 2025-12-06 07:16:29.091491299 +0000 UTC m=+0.997264436 container died c8e0ff64e8f53bbe7a115f6062f305660c1acdfcd69e7c6d2deb247ed6a135a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_grothendieck, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 07:16:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-99271058e6b2a466a374dc304e68cb48808f76e4e802bf31b9bb2904af778734-merged.mount: Deactivated successfully.
Dec 06 07:16:29 compute-0 podman[297849]: 2025-12-06 07:16:29.145868144 +0000 UTC m=+1.051641281 container remove c8e0ff64e8f53bbe7a115f6062f305660c1acdfcd69e7c6d2deb247ed6a135a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_grothendieck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 07:16:29 compute-0 systemd[1]: libpod-conmon-c8e0ff64e8f53bbe7a115f6062f305660c1acdfcd69e7c6d2deb247ed6a135a2.scope: Deactivated successfully.
Dec 06 07:16:29 compute-0 ceph-mon[74339]: pgmap v1752: 305 pgs: 305 active+clean; 202 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 35 KiB/s wr, 81 op/s
Dec 06 07:16:29 compute-0 sudo[297744]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:16:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:16:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:16:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:16:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8ccdc6c4-ceed-4ba8-9423-deea9f29a3c7 does not exist
Dec 06 07:16:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3e4fcb9b-32fb-49c9-aa20-bdc742b9d971 does not exist
Dec 06 07:16:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 489a5afb-6724-427f-842a-b25769d0b29c does not exist
Dec 06 07:16:29 compute-0 sudo[297901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:29 compute-0 sudo[297901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:29 compute-0 sudo[297901]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:29 compute-0 sudo[297926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:16:29 compute-0 sudo[297926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:29 compute-0 sudo[297926]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:29 compute-0 nova_compute[251992]: 2025-12-06 07:16:29.712 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 202 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 34 KiB/s wr, 79 op/s
Dec 06 07:16:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:16:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:16:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:30.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:30.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:31 compute-0 ceph-mon[74339]: pgmap v1753: 305 pgs: 305 active+clean; 202 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 34 KiB/s wr, 79 op/s
Dec 06 07:16:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 239 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Dec 06 07:16:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:16:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:32.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:16:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:32.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:33 compute-0 ceph-mon[74339]: pgmap v1754: 305 pgs: 305 active+clean; 239 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 114 op/s
Dec 06 07:16:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:33.487 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:16:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:33.488 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:16:33 compute-0 nova_compute[251992]: 2025-12-06 07:16:33.488 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:33 compute-0 nova_compute[251992]: 2025-12-06 07:16:33.546 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:33 compute-0 nova_compute[251992]: 2025-12-06 07:16:33.693 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 239 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 64 op/s
Dec 06 07:16:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:34.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:34.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:34 compute-0 nova_compute[251992]: 2025-12-06 07:16:34.672 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005379.671416, b926cd32-34cf-4b7f-9908-8a7691a5d46a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:16:34 compute-0 nova_compute[251992]: 2025-12-06 07:16:34.673 251996 INFO nova.compute.manager [-] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] VM Stopped (Lifecycle Event)
Dec 06 07:16:34 compute-0 nova_compute[251992]: 2025-12-06 07:16:34.699 251996 DEBUG nova.compute.manager [None req-e91fdc53-2e6c-425a-90c8-b38131988f7f - - - - - -] [instance: b926cd32-34cf-4b7f-9908-8a7691a5d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:16:34 compute-0 nova_compute[251992]: 2025-12-06 07:16:34.714 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:35 compute-0 ceph-mon[74339]: pgmap v1755: 305 pgs: 305 active+clean; 239 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 64 op/s
Dec 06 07:16:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 65 op/s
Dec 06 07:16:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:16:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:36.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:16:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:36.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:36 compute-0 ceph-mon[74339]: pgmap v1756: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 65 op/s
Dec 06 07:16:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 52 op/s
Dec 06 07:16:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:16:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:38.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:16:38 compute-0 nova_compute[251992]: 2025-12-06 07:16:38.548 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:38.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:39 compute-0 ceph-mon[74339]: pgmap v1757: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 52 op/s
Dec 06 07:16:39 compute-0 podman[297957]: 2025-12-06 07:16:39.439903367 +0000 UTC m=+0.090671180 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:16:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:39 compute-0 nova_compute[251992]: 2025-12-06 07:16:39.716 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 06 07:16:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:40.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:40.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:41 compute-0 ceph-mon[74339]: pgmap v1758: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 06 07:16:41 compute-0 sudo[297986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:41 compute-0 sudo[297986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:41 compute-0 sudo[297986]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:41 compute-0 sudo[298011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:16:41 compute-0 sudo[298011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:16:41 compute-0 sudo[298011]: pam_unix(sudo:session): session closed for user root
Dec 06 07:16:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 07:16:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:42.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:42.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:16:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:16:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:16:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:16:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:16:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:16:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:43.490 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:43 compute-0 nova_compute[251992]: 2025-12-06 07:16:43.549 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:43 compute-0 ceph-mon[74339]: pgmap v1759: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 07:16:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 3.1 KiB/s rd, 548 KiB/s wr, 3 op/s
Dec 06 07:16:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:44.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:44.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:44 compute-0 nova_compute[251992]: 2025-12-06 07:16:44.718 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:45 compute-0 podman[298038]: 2025-12-06 07:16:45.390370078 +0000 UTC m=+0.050877810 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 07:16:45 compute-0 podman[298039]: 2025-12-06 07:16:45.394014319 +0000 UTC m=+0.054043337 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:16:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 209 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 549 KiB/s wr, 27 op/s
Dec 06 07:16:45 compute-0 nova_compute[251992]: 2025-12-06 07:16:45.831 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:45 compute-0 nova_compute[251992]: 2025-12-06 07:16:45.831 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:45 compute-0 nova_compute[251992]: 2025-12-06 07:16:45.852 251996 DEBUG nova.compute.manager [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:16:45 compute-0 nova_compute[251992]: 2025-12-06 07:16:45.930 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:45 compute-0 nova_compute[251992]: 2025-12-06 07:16:45.931 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:45 compute-0 nova_compute[251992]: 2025-12-06 07:16:45.937 251996 DEBUG nova.virt.hardware [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:16:45 compute-0 nova_compute[251992]: 2025-12-06 07:16:45.938 251996 INFO nova.compute.claims [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:16:46 compute-0 nova_compute[251992]: 2025-12-06 07:16:46.034 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:46 compute-0 ceph-mon[74339]: pgmap v1760: 305 pgs: 305 active+clean; 248 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 3.1 KiB/s rd, 548 KiB/s wr, 3 op/s
Dec 06 07:16:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:16:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:46.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:16:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:46.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:16:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2134509086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.202 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.210 251996 DEBUG nova.compute.provider_tree [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.254 251996 DEBUG nova.scheduler.client.report [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.288 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.289 251996 DEBUG nova.compute.manager [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.345 251996 DEBUG nova.compute.manager [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.346 251996 DEBUG nova.network.neutron [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.368 251996 INFO nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.393 251996 DEBUG nova.compute.manager [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.518 251996 DEBUG nova.compute.manager [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.520 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.521 251996 INFO nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Creating image(s)
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.560 251996 DEBUG nova.storage.rbd_utils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.592 251996 DEBUG nova.storage.rbd_utils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.626 251996 DEBUG nova.storage.rbd_utils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.630 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.693 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.694 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.695 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.696 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.726 251996 DEBUG nova.storage.rbd_utils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:16:47 compute-0 nova_compute[251992]: 2025-12-06 07:16:47.730 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef dd21a47b-0073-4789-b313-f2484ea4c357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 169 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 6.1 KiB/s wr, 29 op/s
Dec 06 07:16:48 compute-0 nova_compute[251992]: 2025-12-06 07:16:48.208 251996 DEBUG nova.policy [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '627c36bb63534e52a4b1d5adf47e6ffd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '929e2be1488d4b80b7ad8946093a6abe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:16:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:16:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:16:48 compute-0 nova_compute[251992]: 2025-12-06 07:16:48.552 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:48.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:48 compute-0 nova_compute[251992]: 2025-12-06 07:16:48.937 251996 DEBUG nova.network.neutron [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Successfully created port: ad0242d9-4af1-43ec-974d-c21d786abe3f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:16:49 compute-0 nova_compute[251992]: 2025-12-06 07:16:49.720 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 169 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Dec 06 07:16:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:50 compute-0 ceph-mon[74339]: pgmap v1761: 305 pgs: 305 active+clean; 209 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 549 KiB/s wr, 27 op/s
Dec 06 07:16:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3643432773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2134509086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:16:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:50.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:16:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:50.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:50 compute-0 nova_compute[251992]: 2025-12-06 07:16:50.828 251996 DEBUG nova.network.neutron [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Successfully updated port: ad0242d9-4af1-43ec-974d-c21d786abe3f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:16:50 compute-0 nova_compute[251992]: 2025-12-06 07:16:50.852 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-dd21a47b-0073-4789-b313-f2484ea4c357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:16:50 compute-0 nova_compute[251992]: 2025-12-06 07:16:50.852 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-dd21a47b-0073-4789-b313-f2484ea4c357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:16:50 compute-0 nova_compute[251992]: 2025-12-06 07:16:50.852 251996 DEBUG nova.network.neutron [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:16:50 compute-0 nova_compute[251992]: 2025-12-06 07:16:50.927 251996 DEBUG nova.compute.manager [req-729f0650-af06-45ad-a801-ef54cf8f240b req-00247edc-feb6-4da5-9bd6-5aec60685896 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received event network-changed-ad0242d9-4af1-43ec-974d-c21d786abe3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:16:50 compute-0 nova_compute[251992]: 2025-12-06 07:16:50.928 251996 DEBUG nova.compute.manager [req-729f0650-af06-45ad-a801-ef54cf8f240b req-00247edc-feb6-4da5-9bd6-5aec60685896 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Refreshing instance network info cache due to event network-changed-ad0242d9-4af1-43ec-974d-c21d786abe3f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:16:50 compute-0 nova_compute[251992]: 2025-12-06 07:16:50.928 251996 DEBUG oslo_concurrency.lockutils [req-729f0650-af06-45ad-a801-ef54cf8f240b req-00247edc-feb6-4da5-9bd6-5aec60685896 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-dd21a47b-0073-4789-b313-f2484ea4c357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:16:51 compute-0 nova_compute[251992]: 2025-12-06 07:16:51.027 251996 DEBUG nova.network.neutron [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:16:51 compute-0 nova_compute[251992]: 2025-12-06 07:16:51.443 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef dd21a47b-0073-4789-b313-f2484ea4c357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.714s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:51 compute-0 nova_compute[251992]: 2025-12-06 07:16:51.518 251996 DEBUG nova.storage.rbd_utils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] resizing rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:16:51 compute-0 ceph-mon[74339]: pgmap v1762: 305 pgs: 305 active+clean; 169 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 6.1 KiB/s wr, 29 op/s
Dec 06 07:16:51 compute-0 ceph-mon[74339]: pgmap v1763: 305 pgs: 305 active+clean; 169 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Dec 06 07:16:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.121 251996 DEBUG nova.objects.instance [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'migration_context' on Instance uuid dd21a47b-0073-4789-b313-f2484ea4c357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.139 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.139 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Ensure instance console log exists: /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.140 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.140 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.140 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.352 251996 DEBUG nova.network.neutron [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Updating instance_info_cache with network_info: [{"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.378 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-dd21a47b-0073-4789-b313-f2484ea4c357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.378 251996 DEBUG nova.compute.manager [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Instance network_info: |[{"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.379 251996 DEBUG oslo_concurrency.lockutils [req-729f0650-af06-45ad-a801-ef54cf8f240b req-00247edc-feb6-4da5-9bd6-5aec60685896 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-dd21a47b-0073-4789-b313-f2484ea4c357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.379 251996 DEBUG nova.network.neutron [req-729f0650-af06-45ad-a801-ef54cf8f240b req-00247edc-feb6-4da5-9bd6-5aec60685896 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Refreshing network info cache for port ad0242d9-4af1-43ec-974d-c21d786abe3f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.382 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Start _get_guest_xml network_info=[{"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.387 251996 WARNING nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:16:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:16:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:52.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.422 251996 DEBUG nova.virt.libvirt.host [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.423 251996 DEBUG nova.virt.libvirt.host [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.431 251996 DEBUG nova.virt.libvirt.host [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.432 251996 DEBUG nova.virt.libvirt.host [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.434 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.435 251996 DEBUG nova.virt.hardware [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.436 251996 DEBUG nova.virt.hardware [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.436 251996 DEBUG nova.virt.hardware [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.437 251996 DEBUG nova.virt.hardware [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.437 251996 DEBUG nova.virt.hardware [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.438 251996 DEBUG nova.virt.hardware [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.438 251996 DEBUG nova.virt.hardware [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.439 251996 DEBUG nova.virt.hardware [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.439 251996 DEBUG nova.virt.hardware [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.440 251996 DEBUG nova.virt.hardware [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.440 251996 DEBUG nova.virt.hardware [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.446 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:52.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:52 compute-0 nova_compute[251992]: 2025-12-06 07:16:52.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:53 compute-0 nova_compute[251992]: 2025-12-06 07:16:53.554 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:53 compute-0 nova_compute[251992]: 2025-12-06 07:16:53.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:53 compute-0 nova_compute[251992]: 2025-12-06 07:16:53.678 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:53 compute-0 nova_compute[251992]: 2025-12-06 07:16:53.679 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:53 compute-0 nova_compute[251992]: 2025-12-06 07:16:53.679 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:53 compute-0 nova_compute[251992]: 2025-12-06 07:16:53.679 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:16:53 compute-0 nova_compute[251992]: 2025-12-06 07:16:53.680 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:16:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054429614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:53 compute-0 nova_compute[251992]: 2025-12-06 07:16:53.753 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Dec 06 07:16:53 compute-0 nova_compute[251992]: 2025-12-06 07:16:53.779 251996 DEBUG nova.storage.rbd_utils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:16:53 compute-0 nova_compute[251992]: 2025-12-06 07:16:53.783 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:54 compute-0 ceph-mon[74339]: pgmap v1764: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Dec 06 07:16:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:16:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3089742917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:16:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693536985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.381 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.392 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.393 251996 DEBUG nova.virt.libvirt.vif [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:16:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-183617772',display_name='tempest-tempest.common.compute-instance-183617772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-183617772',id=73,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-5g3opbrs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:16:47Z,user_data=None,user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=dd21a47b-0073-4789-b313-f2484ea4c357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.394 251996 DEBUG nova.network.os_vif_util [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.395 251996 DEBUG nova.network.os_vif_util [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.396 251996 DEBUG nova.objects.instance [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid dd21a47b-0073-4789-b313-f2484ea4c357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:16:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:54.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.430 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:16:54 compute-0 nova_compute[251992]:   <uuid>dd21a47b-0073-4789-b313-f2484ea4c357</uuid>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   <name>instance-00000049</name>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <nova:name>tempest-tempest.common.compute-instance-183617772</nova:name>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:16:52</nova:creationTime>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <nova:user uuid="627c36bb63534e52a4b1d5adf47e6ffd">tempest-ServerActionsTestJSON-1877526843-project-member</nova:user>
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <nova:project uuid="929e2be1488d4b80b7ad8946093a6abe">tempest-ServerActionsTestJSON-1877526843</nova:project>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <nova:port uuid="ad0242d9-4af1-43ec-974d-c21d786abe3f">
Dec 06 07:16:54 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <system>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <entry name="serial">dd21a47b-0073-4789-b313-f2484ea4c357</entry>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <entry name="uuid">dd21a47b-0073-4789-b313-f2484ea4c357</entry>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     </system>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   <os>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   </os>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   <features>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   </features>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/dd21a47b-0073-4789-b313-f2484ea4c357_disk">
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       </source>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/dd21a47b-0073-4789-b313-f2484ea4c357_disk.config">
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       </source>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:16:54 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:05:ed:60"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <target dev="tapad0242d9-4a"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/console.log" append="off"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <video>
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     </video>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:16:54 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:16:54 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:16:54 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:16:54 compute-0 nova_compute[251992]: </domain>
Dec 06 07:16:54 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.431 251996 DEBUG nova.compute.manager [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Preparing to wait for external event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.432 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.432 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.432 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.433 251996 DEBUG nova.virt.libvirt.vif [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:16:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-183617772',display_name='tempest-tempest.common.compute-instance-183617772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-183617772',id=73,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-5g3opbrs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:16:47Z,user_data=None,user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=dd21a47b-0073-4789-b313-f2484ea4c357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.433 251996 DEBUG nova.network.os_vif_util [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.434 251996 DEBUG nova.network.os_vif_util [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.434 251996 DEBUG os_vif [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.435 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.435 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.435 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.440 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.440 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad0242d9-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.441 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapad0242d9-4a, col_values=(('external_ids', {'iface-id': 'ad0242d9-4af1-43ec-974d-c21d786abe3f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:ed:60', 'vm-uuid': 'dd21a47b-0073-4789-b313-f2484ea4c357'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.577 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:54 compute-0 NetworkManager[48965]: <info>  [1765005414.5789] manager: (tapad0242d9-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.580 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.585 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.586 251996 INFO os_vif [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a')
Dec 06 07:16:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:54.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.695 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.697 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4512MB free_disk=20.921871185302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.697 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.697 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.863 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance dd21a47b-0073-4789-b313-f2484ea4c357 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.864 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.865 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:16:54 compute-0 nova_compute[251992]: 2025-12-06 07:16:54.919 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:55 compute-0 nova_compute[251992]: 2025-12-06 07:16:55.649 251996 DEBUG nova.network.neutron [req-729f0650-af06-45ad-a801-ef54cf8f240b req-00247edc-feb6-4da5-9bd6-5aec60685896 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Updated VIF entry in instance network info cache for port ad0242d9-4af1-43ec-974d-c21d786abe3f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:16:55 compute-0 nova_compute[251992]: 2025-12-06 07:16:55.650 251996 DEBUG nova.network.neutron [req-729f0650-af06-45ad-a801-ef54cf8f240b req-00247edc-feb6-4da5-9bd6-5aec60685896 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Updating instance_info_cache with network_info: [{"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:16:55 compute-0 nova_compute[251992]: 2025-12-06 07:16:55.678 251996 DEBUG oslo_concurrency.lockutils [req-729f0650-af06-45ad-a801-ef54cf8f240b req-00247edc-feb6-4da5-9bd6-5aec60685896 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-dd21a47b-0073-4789-b313-f2484ea4c357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:16:55 compute-0 nova_compute[251992]: 2025-12-06 07:16:55.721 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:16:55 compute-0 nova_compute[251992]: 2025-12-06 07:16:55.721 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:16:55 compute-0 nova_compute[251992]: 2025-12-06 07:16:55.722 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No VIF found with MAC fa:16:3e:05:ed:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:16:55 compute-0 nova_compute[251992]: 2025-12-06 07:16:55.722 251996 INFO nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Using config drive
Dec 06 07:16:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec 06 07:16:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:16:56 compute-0 nova_compute[251992]: 2025-12-06 07:16:56.279 251996 DEBUG nova.storage.rbd_utils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:16:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2054429614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:56 compute-0 ceph-mon[74339]: pgmap v1765: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Dec 06 07:16:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3089742917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3693536985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:16:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:16:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:56.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:16:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:16:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3299157153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:56.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:56 compute-0 nova_compute[251992]: 2025-12-06 07:16:56.603 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:56 compute-0 nova_compute[251992]: 2025-12-06 07:16:56.608 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:16:57 compute-0 nova_compute[251992]: 2025-12-06 07:16:57.189 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:16:57 compute-0 nova_compute[251992]: 2025-12-06 07:16:57.237 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:16:57 compute-0 nova_compute[251992]: 2025-12-06 07:16:57.238 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:16:57 compute-0 ceph-mon[74339]: pgmap v1766: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec 06 07:16:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2263186031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3299157153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:57 compute-0 nova_compute[251992]: 2025-12-06 07:16:57.619 251996 INFO nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Creating config drive at /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/disk.config
Dec 06 07:16:57 compute-0 nova_compute[251992]: 2025-12-06 07:16:57.623 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8wfpke5r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:57 compute-0 nova_compute[251992]: 2025-12-06 07:16:57.754 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8wfpke5r" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 07:16:57 compute-0 nova_compute[251992]: 2025-12-06 07:16:57.783 251996 DEBUG nova.storage.rbd_utils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:16:57 compute-0 nova_compute[251992]: 2025-12-06 07:16:57.785 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/disk.config dd21a47b-0073-4789-b313-f2484ea4c357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.240 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.287 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.288 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.313 251996 DEBUG oslo_concurrency.processutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/disk.config dd21a47b-0073-4789-b313-f2484ea4c357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.314 251996 INFO nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Deleting local config drive /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/disk.config because it was imported into RBD.
Dec 06 07:16:58 compute-0 kernel: tapad0242d9-4a: entered promiscuous mode
Dec 06 07:16:58 compute-0 NetworkManager[48965]: <info>  [1765005418.3839] manager: (tapad0242d9-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.384 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:58 compute-0 ovn_controller[147168]: 2025-12-06T07:16:58Z|00208|binding|INFO|Claiming lport ad0242d9-4af1-43ec-974d-c21d786abe3f for this chassis.
Dec 06 07:16:58 compute-0 ovn_controller[147168]: 2025-12-06T07:16:58Z|00209|binding|INFO|ad0242d9-4af1-43ec-974d-c21d786abe3f: Claiming fa:16:3e:05:ed:60 10.100.0.11
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.388 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.391 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.407 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:58 compute-0 NetworkManager[48965]: <info>  [1765005418.4099] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Dec 06 07:16:58 compute-0 NetworkManager[48965]: <info>  [1765005418.4109] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Dec 06 07:16:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:16:58.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:58 compute-0 systemd-machined[212986]: New machine qemu-31-instance-00000049.
Dec 06 07:16:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2033545108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1288267147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:58 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-00000049.
Dec 06 07:16:58 compute-0 systemd-udevd[298452]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:16:58 compute-0 NetworkManager[48965]: <info>  [1765005418.5077] device (tapad0242d9-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:16:58 compute-0 NetworkManager[48965]: <info>  [1765005418.5086] device (tapad0242d9-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:16:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:16:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:16:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:16:58.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.642 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.648 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.830 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:ed:60 10.100.0.11'], port_security=['fa:16:3e:05:ed:60 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dd21a47b-0073-4789-b313-f2484ea4c357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e1b3c5b-2965-422b-9e23-f20ff1aa60b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=ad0242d9-4af1-43ec-974d-c21d786abe3f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.831 158118 INFO neutron.agent.ovn.metadata.agent [-] Port ad0242d9-4af1-43ec-974d-c21d786abe3f in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.832 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:16:58 compute-0 ovn_controller[147168]: 2025-12-06T07:16:58Z|00210|binding|INFO|Setting lport ad0242d9-4af1-43ec-974d-c21d786abe3f ovn-installed in OVS
Dec 06 07:16:58 compute-0 ovn_controller[147168]: 2025-12-06T07:16:58Z|00211|binding|INFO|Setting lport ad0242d9-4af1-43ec-974d-c21d786abe3f up in Southbound
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.840 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.846 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e57f1534-6cef-4501-bea3-ef26f658caba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.847 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.848 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.848 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1f7703fd-d818-4a4b-a6a1-d7afb82cb3cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.849 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d628c74a-8948-4e38-9c37-edf42afb426f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:58 compute-0 nova_compute[251992]: 2025-12-06 07:16:58.855 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.862 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[ade5e38c-90f0-4714-8705-88ce62a0c86c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.883 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1f522831-d585-425c-a71b-6262b9b0e7cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.917 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[706cfa70-d095-41f2-9728-9e0574c6b131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:58 compute-0 systemd-udevd[298454]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:16:58 compute-0 NetworkManager[48965]: <info>  [1765005418.9240] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.925 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4400a464-482d-40d0-ab04-68a76d0250ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.960 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[02157b9d-b830-4f4c-b8ea-8589b3a4c8ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.963 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[51a5ea7c-c727-44dc-b0d7-93bd5eff2cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:58 compute-0 NetworkManager[48965]: <info>  [1765005418.9844] device (tap4d599401-30): carrier: link connected
Dec 06 07:16:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:58.991 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[07ecf5d6-ba70-4ec3-ad95-8a689fc0c294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.008 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a5b1a1-0943-4e84-a46b-73c7ef21a06b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569157, 'reachable_time': 21434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298485, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.024 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[475c7a90-2f98-4337-b6fa-b2fc48e8ca93]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 569157, 'tstamp': 569157}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298486, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.041 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[727b2284-c44b-498a-a0b1-095934444110]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569157, 'reachable_time': 21434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298487, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.072 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ff64d6b2-af25-4ee2-8a31-649922511fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.130 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[75f50590-ed70-4462-9fb9-678cd1731b0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.135 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.135 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.136 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:59 compute-0 NetworkManager[48965]: <info>  [1765005419.1384] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.137 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:59 compute-0 kernel: tap4d599401-30: entered promiscuous mode
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.140 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.142 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.143 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:59 compute-0 ovn_controller[147168]: 2025-12-06T07:16:59Z|00212|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=1)
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.158 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.159 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.160 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f3cb1563-17d4-4bd1-bd5c-fd9f63668444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.161 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:16:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:16:59.161 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.424 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005419.4235, dd21a47b-0073-4789-b313-f2484ea4c357 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.424 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] VM Started (Lifecycle Event)
Dec 06 07:16:59 compute-0 ceph-mon[74339]: pgmap v1767: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 07:16:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/857765210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.501 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.505 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005419.4260428, dd21a47b-0073-4789-b313-f2484ea4c357 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.505 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] VM Paused (Lifecycle Event)
Dec 06 07:16:59 compute-0 podman[298561]: 2025-12-06 07:16:59.543251446 +0000 UTC m=+0.046723264 container create 2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 07:16:59 compute-0 systemd[1]: Started libpod-conmon-2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb.scope.
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.577 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:16:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:16:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54550cf63dc3ba51abbf3a5142c4c584d195fa213b078313bce2878c0ffecc89/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.604 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:16:59 compute-0 podman[298561]: 2025-12-06 07:16:59.60759896 +0000 UTC m=+0.111070808 container init 2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.610 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:16:59 compute-0 podman[298561]: 2025-12-06 07:16:59.612754704 +0000 UTC m=+0.116226522 container start 2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:16:59 compute-0 podman[298561]: 2025-12-06 07:16:59.518175646 +0000 UTC m=+0.021647494 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:16:59 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[298576]: [NOTICE]   (298580) : New worker (298582) forked
Dec 06 07:16:59 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[298576]: [NOTICE]   (298580) : Loading success.
Dec 06 07:16:59 compute-0 nova_compute[251992]: 2025-12-06 07:16:59.642 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:16:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:17:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:00.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1613157721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:00.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.281 251996 DEBUG nova.compute.manager [req-06fc90c4-b03a-4696-8824-2ee30a068c5e req-46898ec3-2a79-44f7-9df3-b8a5448a1987 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.282 251996 DEBUG oslo_concurrency.lockutils [req-06fc90c4-b03a-4696-8824-2ee30a068c5e req-46898ec3-2a79-44f7-9df3-b8a5448a1987 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.282 251996 DEBUG oslo_concurrency.lockutils [req-06fc90c4-b03a-4696-8824-2ee30a068c5e req-46898ec3-2a79-44f7-9df3-b8a5448a1987 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.283 251996 DEBUG oslo_concurrency.lockutils [req-06fc90c4-b03a-4696-8824-2ee30a068c5e req-46898ec3-2a79-44f7-9df3-b8a5448a1987 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.283 251996 DEBUG nova.compute.manager [req-06fc90c4-b03a-4696-8824-2ee30a068c5e req-46898ec3-2a79-44f7-9df3-b8a5448a1987 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Processing event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.285 251996 DEBUG nova.compute.manager [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.288 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005421.288582, dd21a47b-0073-4789-b313-f2484ea4c357 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.289 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] VM Resumed (Lifecycle Event)
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.293 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.296 251996 INFO nova.virt.libvirt.driver [-] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Instance spawned successfully.
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.298 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.537 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.543 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.547 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.548 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.548 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.549 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.549 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.550 251996 DEBUG nova.virt.libvirt.driver [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:01 compute-0 ceph-mon[74339]: pgmap v1768: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.603 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.663 251996 INFO nova.compute.manager [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Took 14.14 seconds to spawn the instance on the hypervisor.
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.663 251996 DEBUG nova.compute.manager [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:01 compute-0 sudo[298592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:01 compute-0 sudo[298592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:01 compute-0 sudo[298592]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 06 07:17:01 compute-0 sudo[298617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:01 compute-0 sudo[298617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:01 compute-0 sudo[298617]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.815 251996 INFO nova.compute.manager [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Took 15.91 seconds to build instance.
Dec 06 07:17:01 compute-0 nova_compute[251992]: 2025-12-06 07:17:01.869 251996 DEBUG oslo_concurrency.lockutils [None req-0c8e4229-f571-4071-a7f8-f59eecf957a6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:02.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:02.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:03 compute-0 nova_compute[251992]: 2025-12-06 07:17:03.449 251996 DEBUG nova.compute.manager [req-59be7665-6dc2-4db4-a468-ec233940837f req-5696e7c6-6c93-4233-b76e-1fb5ef9287bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:03 compute-0 nova_compute[251992]: 2025-12-06 07:17:03.450 251996 DEBUG oslo_concurrency.lockutils [req-59be7665-6dc2-4db4-a468-ec233940837f req-5696e7c6-6c93-4233-b76e-1fb5ef9287bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:03 compute-0 nova_compute[251992]: 2025-12-06 07:17:03.450 251996 DEBUG oslo_concurrency.lockutils [req-59be7665-6dc2-4db4-a468-ec233940837f req-5696e7c6-6c93-4233-b76e-1fb5ef9287bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:03 compute-0 nova_compute[251992]: 2025-12-06 07:17:03.450 251996 DEBUG oslo_concurrency.lockutils [req-59be7665-6dc2-4db4-a468-ec233940837f req-5696e7c6-6c93-4233-b76e-1fb5ef9287bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:03 compute-0 nova_compute[251992]: 2025-12-06 07:17:03.450 251996 DEBUG nova.compute.manager [req-59be7665-6dc2-4db4-a468-ec233940837f req-5696e7c6-6c93-4233-b76e-1fb5ef9287bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] No waiting events found dispatching network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:03 compute-0 nova_compute[251992]: 2025-12-06 07:17:03.451 251996 WARNING nova.compute.manager [req-59be7665-6dc2-4db4-a468-ec233940837f req-5696e7c6-6c93-4233-b76e-1fb5ef9287bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received unexpected event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f for instance with vm_state active and task_state None.
Dec 06 07:17:03 compute-0 nova_compute[251992]: 2025-12-06 07:17:03.637 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 13 KiB/s wr, 20 op/s
Dec 06 07:17:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:03.824 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:03.826 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:03.827 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:04 compute-0 ceph-mon[74339]: pgmap v1769: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 06 07:17:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3941074034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:04.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:04 compute-0 nova_compute[251992]: 2025-12-06 07:17:04.579 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:04.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:05 compute-0 ceph-mon[74339]: pgmap v1770: 305 pgs: 305 active+clean; 215 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 13 KiB/s wr, 20 op/s
Dec 06 07:17:05 compute-0 nova_compute[251992]: 2025-12-06 07:17:05.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:17:05 compute-0 nova_compute[251992]: 2025-12-06 07:17:05.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:17:05 compute-0 nova_compute[251992]: 2025-12-06 07:17:05.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:17:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 231 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 466 KiB/s wr, 77 op/s
Dec 06 07:17:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:17:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3670335113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:06 compute-0 nova_compute[251992]: 2025-12-06 07:17:06.089 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-dd21a47b-0073-4789-b313-f2484ea4c357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:17:06 compute-0 nova_compute[251992]: 2025-12-06 07:17:06.090 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-dd21a47b-0073-4789-b313-f2484ea4c357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:17:06 compute-0 nova_compute[251992]: 2025-12-06 07:17:06.090 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:17:06 compute-0 nova_compute[251992]: 2025-12-06 07:17:06.091 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dd21a47b-0073-4789-b313-f2484ea4c357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3670335113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:17:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:06.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:17:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:06.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:06 compute-0 nova_compute[251992]: 2025-12-06 07:17:06.803 251996 INFO nova.compute.manager [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Rebuilding instance
Dec 06 07:17:07 compute-0 ceph-mon[74339]: pgmap v1771: 305 pgs: 305 active+clean; 231 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 466 KiB/s wr, 77 op/s
Dec 06 07:17:07 compute-0 nova_compute[251992]: 2025-12-06 07:17:07.387 251996 DEBUG nova.objects.instance [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'trusted_certs' on Instance uuid dd21a47b-0073-4789-b313-f2484ea4c357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:17:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2059182180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:07 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:17:07 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:17:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 262 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 07:17:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2059182180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:08.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:08.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:08 compute-0 nova_compute[251992]: 2025-12-06 07:17:08.638 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:17:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/134053312' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:17:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:17:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/134053312' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:17:08 compute-0 nova_compute[251992]: 2025-12-06 07:17:08.895 251996 DEBUG nova.compute.manager [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:09 compute-0 ceph-mon[74339]: pgmap v1772: 305 pgs: 305 active+clean; 262 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 07:17:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/134053312' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:17:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/134053312' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:17:09 compute-0 nova_compute[251992]: 2025-12-06 07:17:09.489 251996 DEBUG nova.objects.instance [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_requests' on Instance uuid dd21a47b-0073-4789-b313-f2484ea4c357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:09 compute-0 nova_compute[251992]: 2025-12-06 07:17:09.501 251996 DEBUG nova.objects.instance [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid dd21a47b-0073-4789-b313-f2484ea4c357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:09 compute-0 nova_compute[251992]: 2025-12-06 07:17:09.519 251996 DEBUG nova.objects.instance [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'resources' on Instance uuid dd21a47b-0073-4789-b313-f2484ea4c357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:09 compute-0 nova_compute[251992]: 2025-12-06 07:17:09.548 251996 DEBUG nova.objects.instance [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'migration_context' on Instance uuid dd21a47b-0073-4789-b313-f2484ea4c357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:09 compute-0 nova_compute[251992]: 2025-12-06 07:17:09.623 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:09 compute-0 nova_compute[251992]: 2025-12-06 07:17:09.642 251996 DEBUG nova.objects.instance [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:17:09 compute-0 nova_compute[251992]: 2025-12-06 07:17:09.645 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:17:09 compute-0 nova_compute[251992]: 2025-12-06 07:17:09.680 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Updating instance_info_cache with network_info: [{"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:17:09 compute-0 nova_compute[251992]: 2025-12-06 07:17:09.749 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-dd21a47b-0073-4789-b313-f2484ea4c357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:17:09 compute-0 nova_compute[251992]: 2025-12-06 07:17:09.750 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:17:09 compute-0 nova_compute[251992]: 2025-12-06 07:17:09.750 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:17:09 compute-0 nova_compute[251992]: 2025-12-06 07:17:09.751 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:17:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 262 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 07:17:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/412767804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3995752661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:10.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:10 compute-0 podman[298647]: 2025-12-06 07:17:10.442042598 +0000 UTC m=+0.102561210 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:17:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:10.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:11 compute-0 ceph-mon[74339]: pgmap v1773: 305 pgs: 305 active+clean; 262 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 07:17:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2290600617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 262 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 07:17:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:12.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:17:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:12.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:17:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:17:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:17:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:17:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:17:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:17:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:17:13 compute-0 ceph-mon[74339]: pgmap v1774: 305 pgs: 305 active+clean; 262 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 07:17:13 compute-0 nova_compute[251992]: 2025-12-06 07:17:13.640 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 262 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Dec 06 07:17:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.003000082s ======
Dec 06 07:17:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:14.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Dec 06 07:17:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:17:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:14.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:17:14 compute-0 nova_compute[251992]: 2025-12-06 07:17:14.625 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:14 compute-0 nova_compute[251992]: 2025-12-06 07:17:14.912 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:15 compute-0 ceph-mon[74339]: pgmap v1775: 305 pgs: 305 active+clean; 262 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Dec 06 07:17:15 compute-0 nova_compute[251992]: 2025-12-06 07:17:15.447 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:15.447 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:15.449 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:17:15 compute-0 ovn_controller[147168]: 2025-12-06T07:17:15Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:ed:60 10.100.0.11
Dec 06 07:17:15 compute-0 ovn_controller[147168]: 2025-12-06T07:17:15Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:ed:60 10.100.0.11
Dec 06 07:17:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 277 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.1 MiB/s wr, 148 op/s
Dec 06 07:17:16 compute-0 podman[298676]: 2025-12-06 07:17:16.407998511 +0000 UTC m=+0.068742258 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec 06 07:17:16 compute-0 podman[298677]: 2025-12-06 07:17:16.413651929 +0000 UTC m=+0.070716763 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:17:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:16.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:16.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:17 compute-0 ceph-mon[74339]: pgmap v1776: 305 pgs: 305 active+clean; 277 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.1 MiB/s wr, 148 op/s
Dec 06 07:17:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 295 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 170 op/s
Dec 06 07:17:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:17:18
Dec 06 07:17:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:17:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:17:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'vms', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root']
Dec 06 07:17:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:17:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:18.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:18.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:18 compute-0 nova_compute[251992]: 2025-12-06 07:17:18.642 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:18 compute-0 nova_compute[251992]: 2025-12-06 07:17:18.904 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:19 compute-0 ceph-mon[74339]: pgmap v1777: 305 pgs: 305 active+clean; 295 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 170 op/s
Dec 06 07:17:19 compute-0 nova_compute[251992]: 2025-12-06 07:17:19.669 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:19 compute-0 nova_compute[251992]: 2025-12-06 07:17:19.688 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:17:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 295 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 06 07:17:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:17:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:20.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:17:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:20.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:21 compute-0 ceph-mon[74339]: pgmap v1778: 305 pgs: 305 active+clean; 295 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 06 07:17:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 295 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 144 op/s
Dec 06 07:17:21 compute-0 sudo[298712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:21 compute-0 sudo[298712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:21 compute-0 sudo[298712]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:21 compute-0 sudo[298737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:21 compute-0 sudo[298737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:21 compute-0 sudo[298737]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:22.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:22 compute-0 nova_compute[251992]: 2025-12-06 07:17:22.473 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquiring lock "3997e85b-0d13-4e0a-9316-863294c82484" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:22 compute-0 nova_compute[251992]: 2025-12-06 07:17:22.474 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:22 compute-0 nova_compute[251992]: 2025-12-06 07:17:22.491 251996 DEBUG nova.compute.manager [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:17:22 compute-0 nova_compute[251992]: 2025-12-06 07:17:22.578 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:22 compute-0 nova_compute[251992]: 2025-12-06 07:17:22.579 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:22 compute-0 nova_compute[251992]: 2025-12-06 07:17:22.589 251996 DEBUG nova.virt.hardware [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:17:22 compute-0 nova_compute[251992]: 2025-12-06 07:17:22.589 251996 INFO nova.compute.claims [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:17:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:22.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:22 compute-0 nova_compute[251992]: 2025-12-06 07:17:22.703 251996 INFO nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Instance shutdown successfully after 13 seconds.
Dec 06 07:17:22 compute-0 nova_compute[251992]: 2025-12-06 07:17:22.717 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:17:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3281632848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.208 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.218 251996 DEBUG nova.compute.provider_tree [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.317 251996 DEBUG nova.scheduler.client.report [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.398 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.398 251996 DEBUG nova.compute.manager [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.588 251996 DEBUG nova.compute.manager [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.589 251996 DEBUG nova.network.neutron [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.644 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.655 251996 INFO nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.682 251996 DEBUG nova.compute.manager [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:17:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 295 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.874 251996 DEBUG nova.compute.manager [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.875 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.875 251996 INFO nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Creating image(s)
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.902 251996 DEBUG nova.storage.rbd_utils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] rbd image 3997e85b-0d13-4e0a-9316-863294c82484_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.932 251996 DEBUG nova.storage.rbd_utils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] rbd image 3997e85b-0d13-4e0a-9316-863294c82484_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.956 251996 DEBUG nova.storage.rbd_utils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] rbd image 3997e85b-0d13-4e0a-9316-863294c82484_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:23 compute-0 nova_compute[251992]: 2025-12-06 07:17:23.960 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.028 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.029 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.030 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.030 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.053 251996 DEBUG nova.storage.rbd_utils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] rbd image 3997e85b-0d13-4e0a-9316-863294c82484_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.056 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 3997e85b-0d13-4e0a-9316-863294c82484_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.103 251996 DEBUG nova.policy [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bc90c28aab6c4b1d8e2d984f532d7894', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a369472476f14c5db73734ea0b24ecf0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:17:24 compute-0 kernel: tapad0242d9-4a (unregistering): left promiscuous mode
Dec 06 07:17:24 compute-0 NetworkManager[48965]: <info>  [1765005444.2062] device (tapad0242d9-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.213 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:24 compute-0 ovn_controller[147168]: 2025-12-06T07:17:24Z|00213|binding|INFO|Releasing lport ad0242d9-4af1-43ec-974d-c21d786abe3f from this chassis (sb_readonly=0)
Dec 06 07:17:24 compute-0 ovn_controller[147168]: 2025-12-06T07:17:24Z|00214|binding|INFO|Setting lport ad0242d9-4af1-43ec-974d-c21d786abe3f down in Southbound
Dec 06 07:17:24 compute-0 ovn_controller[147168]: 2025-12-06T07:17:24Z|00215|binding|INFO|Removing iface tapad0242d9-4a ovn-installed in OVS
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.218 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.232 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:24.242 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:ed:60 10.100.0.11'], port_security=['fa:16:3e:05:ed:60 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dd21a47b-0073-4789-b313-f2484ea4c357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e1b3c5b-2965-422b-9e23-f20ff1aa60b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=ad0242d9-4af1-43ec-974d-c21d786abe3f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:24.243 158118 INFO neutron.agent.ovn.metadata.agent [-] Port ad0242d9-4af1-43ec-974d-c21d786abe3f in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis
Dec 06 07:17:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:24.245 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:17:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:24.247 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2b3a7c-c11b-4752-b3e9-fca4db20370c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:24.248 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore
Dec 06 07:17:24 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000049.scope: Deactivated successfully.
Dec 06 07:17:24 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000049.scope: Consumed 14.349s CPU time.
Dec 06 07:17:24 compute-0 systemd-machined[212986]: Machine qemu-31-instance-00000049 terminated.
Dec 06 07:17:24 compute-0 ceph-mon[74339]: pgmap v1779: 305 pgs: 305 active+clean; 295 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 144 op/s
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.333 251996 INFO nova.virt.libvirt.driver [-] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Instance destroyed successfully.
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.339 251996 INFO nova.virt.libvirt.driver [-] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Instance destroyed successfully.
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.340 251996 DEBUG nova.virt.libvirt.vif [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:16:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-183617772',display_name='tempest-ServerActionsTestJSON-server-229441047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-183617772',id=73,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-5g3opbrs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:06Z,user_data=None,user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=dd21a47b-0073-4789-b313-f2484ea4c357,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.340 251996 DEBUG nova.network.os_vif_util [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.341 251996 DEBUG nova.network.os_vif_util [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.341 251996 DEBUG os_vif [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.344 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.345 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad0242d9-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.346 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.350 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.354 251996 INFO os_vif [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a')
Dec 06 07:17:24 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[298576]: [NOTICE]   (298580) : haproxy version is 2.8.14-c23fe91
Dec 06 07:17:24 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[298576]: [NOTICE]   (298580) : path to executable is /usr/sbin/haproxy
Dec 06 07:17:24 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[298576]: [WARNING]  (298580) : Exiting Master process...
Dec 06 07:17:24 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[298576]: [ALERT]    (298580) : Current worker (298582) exited with code 143 (Terminated)
Dec 06 07:17:24 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[298576]: [WARNING]  (298580) : All workers exited. Exiting... (0)
Dec 06 07:17:24 compute-0 systemd[1]: libpod-2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb.scope: Deactivated successfully.
Dec 06 07:17:24 compute-0 podman[298902]: 2025-12-06 07:17:24.424837464 +0000 UTC m=+0.094458795 container died 2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:17:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:24.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:17:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:24.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb-userdata-shm.mount: Deactivated successfully.
Dec 06 07:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-54550cf63dc3ba51abbf3a5142c4c584d195fa213b078313bce2878c0ffecc89-merged.mount: Deactivated successfully.
Dec 06 07:17:24 compute-0 podman[298902]: 2025-12-06 07:17:24.788641477 +0000 UTC m=+0.458262828 container cleanup 2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:17:24 compute-0 systemd[1]: libpod-conmon-2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb.scope: Deactivated successfully.
Dec 06 07:17:24 compute-0 podman[298960]: 2025-12-06 07:17:24.965526188 +0000 UTC m=+0.155683702 container remove 2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:17:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:24.973 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1e446f51-f46b-4225-b8df-dcc4b9bb8034]: (4, ('Sat Dec  6 07:17:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb)\n2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb\nSat Dec  6 07:17:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb)\n2da0161b2b79115d24cd55d41aae2656da38122b5e0d5e35989ab03adb85f6eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:24.975 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5168ffc8-cc5b-4c62-b192-b725245e78a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:24.976 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.978 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:24 compute-0 kernel: tap4d599401-30: left promiscuous mode
Dec 06 07:17:24 compute-0 nova_compute[251992]: 2025-12-06 07:17:24.994 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:24.997 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e8568c36-84f0-429d-a351-1db3d623a1e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:25.012 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4288b931-aaa2-464e-a524-1dd52526ebb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:25.013 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9576399c-d45f-4b78-9949-f6199b5215ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:25.027 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[815214e0-3ca4-42ba-add7-d9b0c2436caa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 569150, 'reachable_time': 35897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298975, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:25.031 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:17:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:25.032 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[a3814ec6-2c15-4daa-bcfd-7be993d4e76b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec 06 07:17:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:25.452 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:25 compute-0 nova_compute[251992]: 2025-12-06 07:17:25.650 251996 DEBUG nova.network.neutron [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Successfully created port: 50c0e93a-7306-495b-9a6f-121997fa4acb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 301 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 157 op/s
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005641634096407965 of space, bias 1.0, pg target 1.6924902289223895 quantized to 32 (current 32)
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2959049806323283 quantized to 32 (current 32)
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:17:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:17:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:17:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:17:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3281632848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:26 compute-0 ceph-mon[74339]: pgmap v1780: 305 pgs: 305 active+clean; 295 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Dec 06 07:17:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:26.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:26 compute-0 nova_compute[251992]: 2025-12-06 07:17:26.919 251996 DEBUG nova.compute.manager [req-197886fe-716c-4c47-a89f-b09d992eac22 req-a8f4574b-712b-4d65-b57d-3b57d1fc560a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received event network-vif-unplugged-ad0242d9-4af1-43ec-974d-c21d786abe3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:26 compute-0 nova_compute[251992]: 2025-12-06 07:17:26.919 251996 DEBUG oslo_concurrency.lockutils [req-197886fe-716c-4c47-a89f-b09d992eac22 req-a8f4574b-712b-4d65-b57d-3b57d1fc560a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:26 compute-0 nova_compute[251992]: 2025-12-06 07:17:26.920 251996 DEBUG oslo_concurrency.lockutils [req-197886fe-716c-4c47-a89f-b09d992eac22 req-a8f4574b-712b-4d65-b57d-3b57d1fc560a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:26 compute-0 nova_compute[251992]: 2025-12-06 07:17:26.920 251996 DEBUG oslo_concurrency.lockutils [req-197886fe-716c-4c47-a89f-b09d992eac22 req-a8f4574b-712b-4d65-b57d-3b57d1fc560a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:26 compute-0 nova_compute[251992]: 2025-12-06 07:17:26.920 251996 DEBUG nova.compute.manager [req-197886fe-716c-4c47-a89f-b09d992eac22 req-a8f4574b-712b-4d65-b57d-3b57d1fc560a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] No waiting events found dispatching network-vif-unplugged-ad0242d9-4af1-43ec-974d-c21d786abe3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:26 compute-0 nova_compute[251992]: 2025-12-06 07:17:26.920 251996 WARNING nova.compute.manager [req-197886fe-716c-4c47-a89f-b09d992eac22 req-a8f4574b-712b-4d65-b57d-3b57d1fc560a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received unexpected event network-vif-unplugged-ad0242d9-4af1-43ec-974d-c21d786abe3f for instance with vm_state active and task_state rebuilding.
Dec 06 07:17:26 compute-0 nova_compute[251992]: 2025-12-06 07:17:26.920 251996 DEBUG nova.compute.manager [req-197886fe-716c-4c47-a89f-b09d992eac22 req-a8f4574b-712b-4d65-b57d-3b57d1fc560a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:26 compute-0 nova_compute[251992]: 2025-12-06 07:17:26.920 251996 DEBUG oslo_concurrency.lockutils [req-197886fe-716c-4c47-a89f-b09d992eac22 req-a8f4574b-712b-4d65-b57d-3b57d1fc560a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:26 compute-0 nova_compute[251992]: 2025-12-06 07:17:26.921 251996 DEBUG oslo_concurrency.lockutils [req-197886fe-716c-4c47-a89f-b09d992eac22 req-a8f4574b-712b-4d65-b57d-3b57d1fc560a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:26 compute-0 nova_compute[251992]: 2025-12-06 07:17:26.921 251996 DEBUG oslo_concurrency.lockutils [req-197886fe-716c-4c47-a89f-b09d992eac22 req-a8f4574b-712b-4d65-b57d-3b57d1fc560a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:26 compute-0 nova_compute[251992]: 2025-12-06 07:17:26.921 251996 DEBUG nova.compute.manager [req-197886fe-716c-4c47-a89f-b09d992eac22 req-a8f4574b-712b-4d65-b57d-3b57d1fc560a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] No waiting events found dispatching network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:26 compute-0 nova_compute[251992]: 2025-12-06 07:17:26.921 251996 WARNING nova.compute.manager [req-197886fe-716c-4c47-a89f-b09d992eac22 req-a8f4574b-712b-4d65-b57d-3b57d1fc560a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received unexpected event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f for instance with vm_state active and task_state rebuilding.
Dec 06 07:17:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 311 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 110 op/s
Dec 06 07:17:27 compute-0 nova_compute[251992]: 2025-12-06 07:17:27.956 251996 DEBUG nova.network.neutron [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Successfully updated port: 50c0e93a-7306-495b-9a6f-121997fa4acb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:17:28 compute-0 nova_compute[251992]: 2025-12-06 07:17:28.001 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquiring lock "refresh_cache-3997e85b-0d13-4e0a-9316-863294c82484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:17:28 compute-0 nova_compute[251992]: 2025-12-06 07:17:28.001 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquired lock "refresh_cache-3997e85b-0d13-4e0a-9316-863294c82484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:17:28 compute-0 nova_compute[251992]: 2025-12-06 07:17:28.002 251996 DEBUG nova.network.neutron [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:17:28 compute-0 nova_compute[251992]: 2025-12-06 07:17:28.157 251996 DEBUG nova.compute.manager [req-f41f37aa-ef76-4f7d-805b-2fe599c4851d req-222f5a7e-4965-47cf-8009-9c0d151cc260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-changed-50c0e93a-7306-495b-9a6f-121997fa4acb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:28 compute-0 nova_compute[251992]: 2025-12-06 07:17:28.158 251996 DEBUG nova.compute.manager [req-f41f37aa-ef76-4f7d-805b-2fe599c4851d req-222f5a7e-4965-47cf-8009-9c0d151cc260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Refreshing instance network info cache due to event network-changed-50c0e93a-7306-495b-9a6f-121997fa4acb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:17:28 compute-0 nova_compute[251992]: 2025-12-06 07:17:28.158 251996 DEBUG oslo_concurrency.lockutils [req-f41f37aa-ef76-4f7d-805b-2fe599c4851d req-222f5a7e-4965-47cf-8009-9c0d151cc260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-3997e85b-0d13-4e0a-9316-863294c82484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:17:28 compute-0 nova_compute[251992]: 2025-12-06 07:17:28.350 251996 DEBUG nova.network.neutron [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:17:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:17:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:28.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:17:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:17:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:28.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:17:28 compute-0 nova_compute[251992]: 2025-12-06 07:17:28.647 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:28 compute-0 ceph-mon[74339]: pgmap v1781: 305 pgs: 305 active+clean; 301 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 157 op/s
Dec 06 07:17:29 compute-0 nova_compute[251992]: 2025-12-06 07:17:29.347 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:29 compute-0 sudo[298981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:29 compute-0 sudo[298981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:29 compute-0 sudo[298981]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:29 compute-0 sudo[299006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:17:29 compute-0 sudo[299006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:29 compute-0 sudo[299006]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 311 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 1.5 MiB/s wr, 30 op/s
Dec 06 07:17:29 compute-0 sudo[299031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:29 compute-0 sudo[299031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:29 compute-0 sudo[299031]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:29 compute-0 sudo[299056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 07:17:29 compute-0 sudo[299056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:30 compute-0 podman[299153]: 2025-12-06 07:17:30.40562165 +0000 UTC m=+0.126199469 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:17:30 compute-0 ceph-mon[74339]: pgmap v1782: 305 pgs: 305 active+clean; 311 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 110 op/s
Dec 06 07:17:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:30.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:30 compute-0 nova_compute[251992]: 2025-12-06 07:17:30.472 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 3997e85b-0d13-4e0a-9316-863294c82484_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:30 compute-0 podman[299153]: 2025-12-06 07:17:30.512632953 +0000 UTC m=+0.233210762 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:17:30 compute-0 nova_compute[251992]: 2025-12-06 07:17:30.546 251996 DEBUG nova.storage.rbd_utils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] resizing rbd image 3997e85b-0d13-4e0a-9316-863294c82484_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:17:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:30.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:30 compute-0 nova_compute[251992]: 2025-12-06 07:17:30.657 251996 DEBUG nova.objects.instance [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lazy-loading 'migration_context' on Instance uuid 3997e85b-0d13-4e0a-9316-863294c82484 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:30 compute-0 nova_compute[251992]: 2025-12-06 07:17:30.734 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:17:30 compute-0 nova_compute[251992]: 2025-12-06 07:17:30.735 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Ensure instance console log exists: /var/lib/nova/instances/3997e85b-0d13-4e0a-9316-863294c82484/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:17:30 compute-0 nova_compute[251992]: 2025-12-06 07:17:30.735 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:30 compute-0 nova_compute[251992]: 2025-12-06 07:17:30.736 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:30 compute-0 nova_compute[251992]: 2025-12-06 07:17:30.736 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:31 compute-0 podman[299379]: 2025-12-06 07:17:31.088624702 +0000 UTC m=+0.049787059 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 07:17:31 compute-0 podman[299379]: 2025-12-06 07:17:31.10038289 +0000 UTC m=+0.061545227 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 07:17:31 compute-0 podman[299445]: 2025-12-06 07:17:31.291773606 +0000 UTC m=+0.046608371 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, name=keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, build-date=2023-02-22T09:23:20, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec 06 07:17:31 compute-0 podman[299445]: 2025-12-06 07:17:31.30518881 +0000 UTC m=+0.060023555 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, version=2.2.4, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vcs-type=git, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, description=keepalived for Ceph, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived)
Dec 06 07:17:31 compute-0 sudo[299056]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:17:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:17:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:31 compute-0 ceph-mon[74339]: pgmap v1783: 305 pgs: 305 active+clean; 311 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 1.5 MiB/s wr, 30 op/s
Dec 06 07:17:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:31 compute-0 sudo[299476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:31 compute-0 sudo[299476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:31 compute-0 sudo[299476]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:31 compute-0 sudo[299502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:17:31 compute-0 sudo[299502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:31 compute-0 sudo[299502]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:31 compute-0 sudo[299527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:31 compute-0 sudo[299527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:31 compute-0 sudo[299527]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:31 compute-0 sudo[299552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:17:31 compute-0 sudo[299552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.655 251996 DEBUG nova.network.neutron [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Updating instance_info_cache with network_info: [{"id": "50c0e93a-7306-495b-9a6f-121997fa4acb", "address": "fa:16:3e:91:a5:8a", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c0e93a-73", "ovs_interfaceid": "50c0e93a-7306-495b-9a6f-121997fa4acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.681 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Releasing lock "refresh_cache-3997e85b-0d13-4e0a-9316-863294c82484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.681 251996 DEBUG nova.compute.manager [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Instance network_info: |[{"id": "50c0e93a-7306-495b-9a6f-121997fa4acb", "address": "fa:16:3e:91:a5:8a", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c0e93a-73", "ovs_interfaceid": "50c0e93a-7306-495b-9a6f-121997fa4acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.682 251996 DEBUG oslo_concurrency.lockutils [req-f41f37aa-ef76-4f7d-805b-2fe599c4851d req-222f5a7e-4965-47cf-8009-9c0d151cc260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-3997e85b-0d13-4e0a-9316-863294c82484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.682 251996 DEBUG nova.network.neutron [req-f41f37aa-ef76-4f7d-805b-2fe599c4851d req-222f5a7e-4965-47cf-8009-9c0d151cc260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Refreshing network info cache for port 50c0e93a-7306-495b-9a6f-121997fa4acb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.685 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Start _get_guest_xml network_info=[{"id": "50c0e93a-7306-495b-9a6f-121997fa4acb", "address": "fa:16:3e:91:a5:8a", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c0e93a-73", "ovs_interfaceid": "50c0e93a-7306-495b-9a6f-121997fa4acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.690 251996 WARNING nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.715 251996 DEBUG nova.virt.libvirt.host [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.717 251996 DEBUG nova.virt.libvirt.host [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.724 251996 INFO nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Deleting instance files /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357_del
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.725 251996 INFO nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Deletion of /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357_del complete
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.735 251996 DEBUG nova.virt.libvirt.host [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.736 251996 DEBUG nova.virt.libvirt.host [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.738 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.738 251996 DEBUG nova.virt.hardware [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.739 251996 DEBUG nova.virt.hardware [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.739 251996 DEBUG nova.virt.hardware [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.740 251996 DEBUG nova.virt.hardware [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.740 251996 DEBUG nova.virt.hardware [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.741 251996 DEBUG nova.virt.hardware [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.741 251996 DEBUG nova.virt.hardware [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.742 251996 DEBUG nova.virt.hardware [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.742 251996 DEBUG nova.virt.hardware [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.742 251996 DEBUG nova.virt.hardware [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.743 251996 DEBUG nova.virt.hardware [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.750 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 334 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 132 KiB/s rd, 3.5 MiB/s wr, 82 op/s
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.952 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.956 251996 INFO nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Creating image(s)
Dec 06 07:17:31 compute-0 nova_compute[251992]: 2025-12-06 07:17:31.985 251996 DEBUG nova.storage.rbd_utils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.019 251996 DEBUG nova.storage.rbd_utils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.046 251996 DEBUG nova.storage.rbd_utils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.050 251996 DEBUG oslo_concurrency.processutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:32 compute-0 sudo[299552]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.115 251996 DEBUG oslo_concurrency.processutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.116 251996 DEBUG oslo_concurrency.lockutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.117 251996 DEBUG oslo_concurrency.lockutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.117 251996 DEBUG oslo_concurrency.lockutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.145 251996 DEBUG nova.storage.rbd_utils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.149 251996 DEBUG oslo_concurrency.processutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 dd21a47b-0073-4789-b313-f2484ea4c357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:17:32 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:17:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:17:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:17:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:17:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:32 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9f9a4ddc-2e06-48e9-a8f1-008aa6cb609a does not exist
Dec 06 07:17:32 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4cae33fc-f7ee-4729-b9be-be82918027b4 does not exist
Dec 06 07:17:32 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e0230848-d0c6-45b0-8a6a-49db16c29215 does not exist
Dec 06 07:17:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:17:32 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:17:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:17:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:17:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:17:32 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:17:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:17:32 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/234714428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.230 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:32 compute-0 sudo[299702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:32 compute-0 sudo[299702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:32 compute-0 sudo[299702]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.259 251996 DEBUG nova.storage.rbd_utils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] rbd image 3997e85b-0d13-4e0a-9316-863294c82484_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.266 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:32 compute-0 sudo[299756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:17:32 compute-0 sudo[299756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:32 compute-0 sudo[299756]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:32 compute-0 sudo[299791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:32 compute-0 sudo[299791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:32 compute-0 sudo[299791]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:32 compute-0 sudo[299816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:17:32 compute-0 sudo[299816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:32.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:17:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:17:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:17:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:17:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:17:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/234714428' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.530 251996 DEBUG oslo_concurrency.processutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 dd21a47b-0073-4789-b313-f2484ea4c357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.622 251996 DEBUG nova.storage.rbd_utils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] resizing rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:17:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:17:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:32.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:17:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:17:32 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3848123894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.731 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.732 251996 DEBUG nova.virt.libvirt.vif [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:17:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-169244694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-169244694',id=76,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a369472476f14c5db73734ea0b24ecf0',ramdisk_id='',reservation_id='r-0q07guu2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-876880010',owner_user_name='tempest-AttachInterfacesV270Test-876880010-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:23Z,user_data=None,user_id='bc90c28aab6c4b1d8e2d984f532d7894',uuid=3997e85b-0d13-4e0a-9316-863294c82484,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50c0e93a-7306-495b-9a6f-121997fa4acb", "address": "fa:16:3e:91:a5:8a", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c0e93a-73", "ovs_interfaceid": "50c0e93a-7306-495b-9a6f-121997fa4acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.733 251996 DEBUG nova.network.os_vif_util [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Converting VIF {"id": "50c0e93a-7306-495b-9a6f-121997fa4acb", "address": "fa:16:3e:91:a5:8a", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c0e93a-73", "ovs_interfaceid": "50c0e93a-7306-495b-9a6f-121997fa4acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.734 251996 DEBUG nova.network.os_vif_util [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:a5:8a,bridge_name='br-int',has_traffic_filtering=True,id=50c0e93a-7306-495b-9a6f-121997fa4acb,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c0e93a-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.735 251996 DEBUG nova.objects.instance [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3997e85b-0d13-4e0a-9316-863294c82484 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.740 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.741 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Ensure instance console log exists: /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.742 251996 DEBUG oslo_concurrency.lockutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.742 251996 DEBUG oslo_concurrency.lockutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.742 251996 DEBUG oslo_concurrency.lockutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.744 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Start _get_guest_xml network_info=[{"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.747 251996 WARNING nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.751 251996 DEBUG nova.virt.libvirt.host [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.752 251996 DEBUG nova.virt.libvirt.host [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.754 251996 DEBUG nova.virt.libvirt.host [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.755 251996 DEBUG nova.virt.libvirt.host [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.756 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.756 251996 DEBUG nova.virt.hardware [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.756 251996 DEBUG nova.virt.hardware [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.757 251996 DEBUG nova.virt.hardware [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.757 251996 DEBUG nova.virt.hardware [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.757 251996 DEBUG nova.virt.hardware [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.757 251996 DEBUG nova.virt.hardware [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.758 251996 DEBUG nova.virt.hardware [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.758 251996 DEBUG nova.virt.hardware [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.758 251996 DEBUG nova.virt.hardware [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.758 251996 DEBUG nova.virt.hardware [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.759 251996 DEBUG nova.virt.hardware [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.759 251996 DEBUG nova.objects.instance [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'vcpu_model' on Instance uuid dd21a47b-0073-4789-b313-f2484ea4c357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.762 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:17:32 compute-0 nova_compute[251992]:   <uuid>3997e85b-0d13-4e0a-9316-863294c82484</uuid>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   <name>instance-0000004c</name>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <nova:name>tempest-AttachInterfacesV270Test-server-169244694</nova:name>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:17:31</nova:creationTime>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <nova:user uuid="bc90c28aab6c4b1d8e2d984f532d7894">tempest-AttachInterfacesV270Test-876880010-project-member</nova:user>
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <nova:project uuid="a369472476f14c5db73734ea0b24ecf0">tempest-AttachInterfacesV270Test-876880010</nova:project>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <nova:port uuid="50c0e93a-7306-495b-9a6f-121997fa4acb">
Dec 06 07:17:32 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <system>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <entry name="serial">3997e85b-0d13-4e0a-9316-863294c82484</entry>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <entry name="uuid">3997e85b-0d13-4e0a-9316-863294c82484</entry>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     </system>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   <os>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   </os>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   <features>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   </features>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/3997e85b-0d13-4e0a-9316-863294c82484_disk">
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       </source>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/3997e85b-0d13-4e0a-9316-863294c82484_disk.config">
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       </source>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:17:32 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:91:a5:8a"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <target dev="tap50c0e93a-73"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/3997e85b-0d13-4e0a-9316-863294c82484/console.log" append="off"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <video>
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     </video>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:17:32 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:17:32 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:17:32 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:17:32 compute-0 nova_compute[251992]: </domain>
Dec 06 07:17:32 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.763 251996 DEBUG nova.compute.manager [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Preparing to wait for external event network-vif-plugged-50c0e93a-7306-495b-9a6f-121997fa4acb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.763 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquiring lock "3997e85b-0d13-4e0a-9316-863294c82484-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.763 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.763 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.764 251996 DEBUG nova.virt.libvirt.vif [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:17:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-169244694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-169244694',id=76,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a369472476f14c5db73734ea0b24ecf0',ramdisk_id='',reservation_id='r-0q07guu2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-876880010',owner_user_name='tempest-AttachInterfacesV270Test-876880010-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:23Z,user_data=None,user_id='bc90c28aab6c4b1d8e2d984f532d7894',uuid=3997e85b-0d13-4e0a-9316-863294c82484,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50c0e93a-7306-495b-9a6f-121997fa4acb", "address": "fa:16:3e:91:a5:8a", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c0e93a-73", "ovs_interfaceid": "50c0e93a-7306-495b-9a6f-121997fa4acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.764 251996 DEBUG nova.network.os_vif_util [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Converting VIF {"id": "50c0e93a-7306-495b-9a6f-121997fa4acb", "address": "fa:16:3e:91:a5:8a", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c0e93a-73", "ovs_interfaceid": "50c0e93a-7306-495b-9a6f-121997fa4acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.765 251996 DEBUG nova.network.os_vif_util [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:a5:8a,bridge_name='br-int',has_traffic_filtering=True,id=50c0e93a-7306-495b-9a6f-121997fa4acb,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c0e93a-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.765 251996 DEBUG os_vif [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:a5:8a,bridge_name='br-int',has_traffic_filtering=True,id=50c0e93a-7306-495b-9a6f-121997fa4acb,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c0e93a-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.766 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.766 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.767 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.770 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.770 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50c0e93a-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.771 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50c0e93a-73, col_values=(('external_ids', {'iface-id': '50c0e93a-7306-495b-9a6f-121997fa4acb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:a5:8a', 'vm-uuid': '3997e85b-0d13-4e0a-9316-863294c82484'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.773 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:32 compute-0 NetworkManager[48965]: <info>  [1765005452.7739] manager: (tap50c0e93a-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.779 251996 DEBUG oslo_concurrency.processutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.802 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.805 251996 INFO os_vif [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:a5:8a,bridge_name='br-int',has_traffic_filtering=True,id=50c0e93a-7306-495b-9a6f-121997fa4acb,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c0e93a-73')
Dec 06 07:17:32 compute-0 podman[299951]: 2025-12-06 07:17:32.719058019 +0000 UTC m=+0.025502842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:17:32 compute-0 podman[299951]: 2025-12-06 07:17:32.91633355 +0000 UTC m=+0.222778353 container create 276817e5d72745f742104280185fc25111fa18fb245fb81b399eac0d0fec46de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:17:32 compute-0 systemd[1]: Started libpod-conmon-276817e5d72745f742104280185fc25111fa18fb245fb81b399eac0d0fec46de.scope.
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.957 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.958 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.958 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] No VIF found with MAC fa:16:3e:91:a5:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.959 251996 INFO nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Using config drive
Dec 06 07:17:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:17:32 compute-0 nova_compute[251992]: 2025-12-06 07:17:32.988 251996 DEBUG nova.storage.rbd_utils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] rbd image 3997e85b-0d13-4e0a-9316-863294c82484_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:32 compute-0 podman[299951]: 2025-12-06 07:17:32.996995469 +0000 UTC m=+0.303440272 container init 276817e5d72745f742104280185fc25111fa18fb245fb81b399eac0d0fec46de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hofstadter, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 07:17:33 compute-0 podman[299951]: 2025-12-06 07:17:33.006951906 +0000 UTC m=+0.313396709 container start 276817e5d72745f742104280185fc25111fa18fb245fb81b399eac0d0fec46de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hofstadter, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:17:33 compute-0 friendly_hofstadter[300011]: 167 167
Dec 06 07:17:33 compute-0 systemd[1]: libpod-276817e5d72745f742104280185fc25111fa18fb245fb81b399eac0d0fec46de.scope: Deactivated successfully.
Dec 06 07:17:33 compute-0 podman[299951]: 2025-12-06 07:17:33.012046848 +0000 UTC m=+0.318491671 container attach 276817e5d72745f742104280185fc25111fa18fb245fb81b399eac0d0fec46de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hofstadter, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:17:33 compute-0 conmon[300011]: conmon 276817e5d72745f74210 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-276817e5d72745f742104280185fc25111fa18fb245fb81b399eac0d0fec46de.scope/container/memory.events
Dec 06 07:17:33 compute-0 podman[299951]: 2025-12-06 07:17:33.013600532 +0000 UTC m=+0.320045335 container died 276817e5d72745f742104280185fc25111fa18fb245fb81b399eac0d0fec46de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hofstadter, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 07:17:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe349668493403cdd2aa8a979f122782282f10ce4971bba2c6d27fb032750f5b-merged.mount: Deactivated successfully.
Dec 06 07:17:33 compute-0 podman[299951]: 2025-12-06 07:17:33.045759778 +0000 UTC m=+0.352204581 container remove 276817e5d72745f742104280185fc25111fa18fb245fb81b399eac0d0fec46de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hofstadter, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 07:17:33 compute-0 systemd[1]: libpod-conmon-276817e5d72745f742104280185fc25111fa18fb245fb81b399eac0d0fec46de.scope: Deactivated successfully.
Dec 06 07:17:33 compute-0 podman[300052]: 2025-12-06 07:17:33.199797303 +0000 UTC m=+0.041013895 container create f2fab702964e9d3f1e7d3e33142d48c81e464a2734e144d4eac21a9cbdfdaa07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:17:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:17:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/297393884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:33 compute-0 systemd[1]: Started libpod-conmon-f2fab702964e9d3f1e7d3e33142d48c81e464a2734e144d4eac21a9cbdfdaa07.scope.
Dec 06 07:17:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.250 251996 DEBUG oslo_concurrency.processutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52951c2ca16a82d9a97986b90520c99cd2fffdcae8aa66ebb1d0163903acd51d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52951c2ca16a82d9a97986b90520c99cd2fffdcae8aa66ebb1d0163903acd51d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52951c2ca16a82d9a97986b90520c99cd2fffdcae8aa66ebb1d0163903acd51d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52951c2ca16a82d9a97986b90520c99cd2fffdcae8aa66ebb1d0163903acd51d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52951c2ca16a82d9a97986b90520c99cd2fffdcae8aa66ebb1d0163903acd51d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.276 251996 DEBUG nova.storage.rbd_utils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:33 compute-0 podman[300052]: 2025-12-06 07:17:33.181861483 +0000 UTC m=+0.023078095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:17:33 compute-0 podman[300052]: 2025-12-06 07:17:33.278178858 +0000 UTC m=+0.119395460 container init f2fab702964e9d3f1e7d3e33142d48c81e464a2734e144d4eac21a9cbdfdaa07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wescoff, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.281 251996 DEBUG oslo_concurrency.processutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:33 compute-0 podman[300052]: 2025-12-06 07:17:33.285551314 +0000 UTC m=+0.126767896 container start f2fab702964e9d3f1e7d3e33142d48c81e464a2734e144d4eac21a9cbdfdaa07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 07:17:33 compute-0 podman[300052]: 2025-12-06 07:17:33.289234946 +0000 UTC m=+0.130451548 container attach f2fab702964e9d3f1e7d3e33142d48c81e464a2734e144d4eac21a9cbdfdaa07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wescoff, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 07:17:33 compute-0 ceph-mon[74339]: pgmap v1784: 305 pgs: 305 active+clean; 334 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 132 KiB/s rd, 3.5 MiB/s wr, 82 op/s
Dec 06 07:17:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3848123894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/297393884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.650 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:17:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2369414319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.730 251996 DEBUG oslo_concurrency.processutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.732 251996 DEBUG nova.virt.libvirt.vif [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:16:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-183617772',display_name='tempest-ServerActionsTestJSON-server-229441047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-183617772',id=73,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-5g3opbrs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:31Z,user_data=None,user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=dd21a47b-0073-4789-b313-f2484ea4c357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.733 251996 DEBUG nova.network.os_vif_util [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.734 251996 DEBUG nova.network.os_vif_util [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.737 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:17:33 compute-0 nova_compute[251992]:   <uuid>dd21a47b-0073-4789-b313-f2484ea4c357</uuid>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   <name>instance-00000049</name>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerActionsTestJSON-server-229441047</nova:name>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:17:32</nova:creationTime>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <nova:user uuid="627c36bb63534e52a4b1d5adf47e6ffd">tempest-ServerActionsTestJSON-1877526843-project-member</nova:user>
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <nova:project uuid="929e2be1488d4b80b7ad8946093a6abe">tempest-ServerActionsTestJSON-1877526843</nova:project>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="412dd61d-1b1e-439f-b7f9-7e7c4e42924c"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <nova:port uuid="ad0242d9-4af1-43ec-974d-c21d786abe3f">
Dec 06 07:17:33 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <system>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <entry name="serial">dd21a47b-0073-4789-b313-f2484ea4c357</entry>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <entry name="uuid">dd21a47b-0073-4789-b313-f2484ea4c357</entry>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     </system>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   <os>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   </os>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   <features>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   </features>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/dd21a47b-0073-4789-b313-f2484ea4c357_disk">
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       </source>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/dd21a47b-0073-4789-b313-f2484ea4c357_disk.config">
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       </source>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:17:33 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:05:ed:60"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <target dev="tapad0242d9-4a"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/console.log" append="off"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <video>
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     </video>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:17:33 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:17:33 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:17:33 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:17:33 compute-0 nova_compute[251992]: </domain>
Dec 06 07:17:33 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.743 251996 DEBUG nova.compute.manager [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Preparing to wait for external event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.743 251996 DEBUG oslo_concurrency.lockutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.744 251996 DEBUG oslo_concurrency.lockutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.744 251996 DEBUG oslo_concurrency.lockutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.745 251996 DEBUG nova.virt.libvirt.vif [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:16:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-183617772',display_name='tempest-ServerActionsTestJSON-server-229441047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-183617772',id=73,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-5g3opbrs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='
tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:17:31Z,user_data=None,user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=dd21a47b-0073-4789-b313-f2484ea4c357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.745 251996 DEBUG nova.network.os_vif_util [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.746 251996 DEBUG nova.network.os_vif_util [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.746 251996 DEBUG os_vif [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.747 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.748 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.748 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.750 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.751 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad0242d9-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.751 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapad0242d9-4a, col_values=(('external_ids', {'iface-id': 'ad0242d9-4af1-43ec-974d-c21d786abe3f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:ed:60', 'vm-uuid': 'dd21a47b-0073-4789-b313-f2484ea4c357'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.752 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:33 compute-0 NetworkManager[48965]: <info>  [1765005453.7538] manager: (tapad0242d9-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.755 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.761 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.762 251996 INFO os_vif [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a')
Dec 06 07:17:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 334 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.840 251996 INFO nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Creating config drive at /var/lib/nova/instances/3997e85b-0d13-4e0a-9316-863294c82484/disk.config
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.846 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3997e85b-0d13-4e0a-9316-863294c82484/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn8hy20yr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.875 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.876 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.876 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No VIF found with MAC fa:16:3e:05:ed:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.877 251996 INFO nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Using config drive
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.901 251996 DEBUG nova.storage.rbd_utils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.930 251996 DEBUG nova.objects.instance [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'ec2_ids' on Instance uuid dd21a47b-0073-4789-b313-f2484ea4c357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.968 251996 DEBUG nova.objects.instance [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'keypairs' on Instance uuid dd21a47b-0073-4789-b313-f2484ea4c357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:33 compute-0 nova_compute[251992]: 2025-12-06 07:17:33.978 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3997e85b-0d13-4e0a-9316-863294c82484/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn8hy20yr" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:34 compute-0 nova_compute[251992]: 2025-12-06 07:17:34.006 251996 DEBUG nova.storage.rbd_utils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] rbd image 3997e85b-0d13-4e0a-9316-863294c82484_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:34 compute-0 nova_compute[251992]: 2025-12-06 07:17:34.010 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3997e85b-0d13-4e0a-9316-863294c82484/disk.config 3997e85b-0d13-4e0a-9316-863294c82484_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:34 compute-0 fervent_wescoff[300071]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:17:34 compute-0 fervent_wescoff[300071]: --> relative data size: 1.0
Dec 06 07:17:34 compute-0 fervent_wescoff[300071]: --> All data devices are unavailable
Dec 06 07:17:34 compute-0 systemd[1]: libpod-f2fab702964e9d3f1e7d3e33142d48c81e464a2734e144d4eac21a9cbdfdaa07.scope: Deactivated successfully.
Dec 06 07:17:34 compute-0 podman[300052]: 2025-12-06 07:17:34.125366667 +0000 UTC m=+0.966583250 container died f2fab702964e9d3f1e7d3e33142d48c81e464a2734e144d4eac21a9cbdfdaa07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:17:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-52951c2ca16a82d9a97986b90520c99cd2fffdcae8aa66ebb1d0163903acd51d-merged.mount: Deactivated successfully.
Dec 06 07:17:34 compute-0 podman[300052]: 2025-12-06 07:17:34.171835963 +0000 UTC m=+1.013052545 container remove f2fab702964e9d3f1e7d3e33142d48c81e464a2734e144d4eac21a9cbdfdaa07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wescoff, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:17:34 compute-0 systemd[1]: libpod-conmon-f2fab702964e9d3f1e7d3e33142d48c81e464a2734e144d4eac21a9cbdfdaa07.scope: Deactivated successfully.
Dec 06 07:17:34 compute-0 sudo[299816]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:34 compute-0 sudo[300199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:34 compute-0 sudo[300199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:34 compute-0 sudo[300199]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:34 compute-0 sudo[300224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:17:34 compute-0 sudo[300224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:34 compute-0 sudo[300224]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:34 compute-0 sudo[300249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:34 compute-0 sudo[300249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:34 compute-0 sudo[300249]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:34 compute-0 sudo[300274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:17:34 compute-0 sudo[300274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:34.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:34.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:34 compute-0 podman[300337]: 2025-12-06 07:17:34.699859335 +0000 UTC m=+0.033003961 container create 44ab0668a672e33de16f69ec2e0569730f99d22b6aa608247269cb2432c3aeb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 07:17:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2369414319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:34 compute-0 nova_compute[251992]: 2025-12-06 07:17:34.727 251996 DEBUG oslo_concurrency.processutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3997e85b-0d13-4e0a-9316-863294c82484/disk.config 3997e85b-0d13-4e0a-9316-863294c82484_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.717s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:34 compute-0 nova_compute[251992]: 2025-12-06 07:17:34.728 251996 INFO nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Deleting local config drive /var/lib/nova/instances/3997e85b-0d13-4e0a-9316-863294c82484/disk.config because it was imported into RBD.
Dec 06 07:17:34 compute-0 systemd[1]: Started libpod-conmon-44ab0668a672e33de16f69ec2e0569730f99d22b6aa608247269cb2432c3aeb3.scope.
Dec 06 07:17:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:17:34 compute-0 NetworkManager[48965]: <info>  [1765005454.7764] manager: (tap50c0e93a-73): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Dec 06 07:17:34 compute-0 kernel: tap50c0e93a-73: entered promiscuous mode
Dec 06 07:17:34 compute-0 podman[300337]: 2025-12-06 07:17:34.685876486 +0000 UTC m=+0.019021022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:17:34 compute-0 ovn_controller[147168]: 2025-12-06T07:17:34Z|00216|binding|INFO|Claiming lport 50c0e93a-7306-495b-9a6f-121997fa4acb for this chassis.
Dec 06 07:17:34 compute-0 ovn_controller[147168]: 2025-12-06T07:17:34Z|00217|binding|INFO|50c0e93a-7306-495b-9a6f-121997fa4acb: Claiming fa:16:3e:91:a5:8a 10.100.0.7
Dec 06 07:17:34 compute-0 nova_compute[251992]: 2025-12-06 07:17:34.813 251996 INFO nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Creating config drive at /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/disk.config
Dec 06 07:17:34 compute-0 nova_compute[251992]: 2025-12-06 07:17:34.819 251996 DEBUG oslo_concurrency.processutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8_fhlfrb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.823 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:a5:8a 10.100.0.7'], port_security=['fa:16:3e:91:a5:8a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3997e85b-0d13-4e0a-9316-863294c82484', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a369472476f14c5db73734ea0b24ecf0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd44518a3-9dbe-4e6e-884b-e1f99389c6ed', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c71088b-c43c-4f75-9f02-d22903770157, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=50c0e93a-7306-495b-9a6f-121997fa4acb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.824 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 50c0e93a-7306-495b-9a6f-121997fa4acb in datapath f5f7d890-fc89-4729-9976-7a81ce11ddb5 bound to our chassis
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.826 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5f7d890-fc89-4729-9976-7a81ce11ddb5
Dec 06 07:17:34 compute-0 ovn_controller[147168]: 2025-12-06T07:17:34Z|00218|binding|INFO|Setting lport 50c0e93a-7306-495b-9a6f-121997fa4acb ovn-installed in OVS
Dec 06 07:17:34 compute-0 ovn_controller[147168]: 2025-12-06T07:17:34Z|00219|binding|INFO|Setting lport 50c0e93a-7306-495b-9a6f-121997fa4acb up in Southbound
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.837 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[793525e0-45e5-42c7-9cd8-8af02f69ed05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.837 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5f7d890-f1 in ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.840 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5f7d890-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.840 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[12e95e10-8574-4e0f-9577-df27835a233b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.841 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec485f0-b43b-45b2-b50a-575ca9628abe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:34 compute-0 systemd-machined[212986]: New machine qemu-32-instance-0000004c.
Dec 06 07:17:34 compute-0 systemd-udevd[300374]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:17:34 compute-0 nova_compute[251992]: 2025-12-06 07:17:34.847 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.852 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e2bfe8-8bdd-4beb-b513-586224a3e8de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:34 compute-0 NetworkManager[48965]: <info>  [1765005454.8544] device (tap50c0e93a-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:17:34 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000004c.
Dec 06 07:17:34 compute-0 NetworkManager[48965]: <info>  [1765005454.8562] device (tap50c0e93a-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:17:34 compute-0 nova_compute[251992]: 2025-12-06 07:17:34.865 251996 DEBUG nova.network.neutron [req-f41f37aa-ef76-4f7d-805b-2fe599c4851d req-222f5a7e-4965-47cf-8009-9c0d151cc260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Updated VIF entry in instance network info cache for port 50c0e93a-7306-495b-9a6f-121997fa4acb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:17:34 compute-0 nova_compute[251992]: 2025-12-06 07:17:34.866 251996 DEBUG nova.network.neutron [req-f41f37aa-ef76-4f7d-805b-2fe599c4851d req-222f5a7e-4965-47cf-8009-9c0d151cc260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Updating instance_info_cache with network_info: [{"id": "50c0e93a-7306-495b-9a6f-121997fa4acb", "address": "fa:16:3e:91:a5:8a", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c0e93a-73", "ovs_interfaceid": "50c0e93a-7306-495b-9a6f-121997fa4acb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.874 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[59f9035b-f222-4ff4-bfda-2c624d0cdad4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:34 compute-0 nova_compute[251992]: 2025-12-06 07:17:34.886 251996 DEBUG oslo_concurrency.lockutils [req-f41f37aa-ef76-4f7d-805b-2fe599c4851d req-222f5a7e-4965-47cf-8009-9c0d151cc260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-3997e85b-0d13-4e0a-9316-863294c82484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.902 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[f299c3f7-b257-4bd0-a23a-b6331ff83712]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:34 compute-0 NetworkManager[48965]: <info>  [1765005454.9103] manager: (tapf5f7d890-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/119)
Dec 06 07:17:34 compute-0 systemd-udevd[300379]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.910 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3733cb3d-0ba1-4671-8209-5026a91717c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.938 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6da287-3b90-49a2-845c-18a20acdc394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.941 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[f9647ceb-da71-4b14-af41-6f2482975a3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:34 compute-0 nova_compute[251992]: 2025-12-06 07:17:34.955 251996 DEBUG oslo_concurrency.processutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8_fhlfrb" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:34 compute-0 NetworkManager[48965]: <info>  [1765005454.9612] device (tapf5f7d890-f0): carrier: link connected
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.965 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[897cb7ad-7495-4fba-8773-8a7ad00af5f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.984 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1a01f4c1-292a-485d-b4ad-f9bde322f406]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5f7d890-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c0:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572754, 'reachable_time': 15604, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300418, 'error': None, 'target': 'ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:34.998 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d20d5a0b-acb4-4379-862a-93fe73dd8b6d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:c04c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572754, 'tstamp': 572754}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300419, 'error': None, 'target': 'ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:35.012 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c384cf98-a666-4fd2-a685-5460dcc58369]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5f7d890-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c0:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572754, 'reachable_time': 15604, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300420, 'error': None, 'target': 'ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:35.045 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[68e85c3a-6c41-4cc2-ae1e-4bcf24fa0667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:35.102 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4493f124-9c05-4d8d-a841-ddee9487f1bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:35.104 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5f7d890-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:35.104 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:35.104 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5f7d890-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:35 compute-0 NetworkManager[48965]: <info>  [1765005455.1068] manager: (tapf5f7d890-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Dec 06 07:17:35 compute-0 kernel: tapf5f7d890-f0: entered promiscuous mode
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:35.112 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5f7d890-f0, col_values=(('external_ids', {'iface-id': 'f275ad1d-107c-461e-9302-6a01ff4f0872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:35 compute-0 ovn_controller[147168]: 2025-12-06T07:17:35Z|00220|binding|INFO|Releasing lport f275ad1d-107c-461e-9302-6a01ff4f0872 from this chassis (sb_readonly=0)
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:35.133 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5f7d890-fc89-4729-9976-7a81ce11ddb5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5f7d890-fc89-4729-9976-7a81ce11ddb5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:35.134 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba88df5-e7c4-461e-9b6c-4aa75e9095eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:35.134 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-f5f7d890-fc89-4729-9976-7a81ce11ddb5
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/f5f7d890-fc89-4729-9976-7a81ce11ddb5.pid.haproxy
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID f5f7d890-fc89-4729-9976-7a81ce11ddb5
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:17:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:35.135 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'env', 'PROCESS_TAG=haproxy-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5f7d890-fc89-4729-9976-7a81ce11ddb5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:17:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 312 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 4.2 MiB/s wr, 137 op/s
Dec 06 07:17:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:36.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:36.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:37 compute-0 podman[300337]: 2025-12-06 07:17:37.254211321 +0000 UTC m=+2.587355877 container init 44ab0668a672e33de16f69ec2e0569730f99d22b6aa608247269cb2432c3aeb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 07:17:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:37 compute-0 podman[300337]: 2025-12-06 07:17:37.266846104 +0000 UTC m=+2.599990630 container start 44ab0668a672e33de16f69ec2e0569730f99d22b6aa608247269cb2432c3aeb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 07:17:37 compute-0 podman[300337]: 2025-12-06 07:17:37.271637538 +0000 UTC m=+2.604782094 container attach 44ab0668a672e33de16f69ec2e0569730f99d22b6aa608247269cb2432c3aeb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:17:37 compute-0 systemd[1]: libpod-44ab0668a672e33de16f69ec2e0569730f99d22b6aa608247269cb2432c3aeb3.scope: Deactivated successfully.
Dec 06 07:17:37 compute-0 hopeful_kare[300353]: 167 167
Dec 06 07:17:37 compute-0 conmon[300353]: conmon 44ab0668a672e33de16f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44ab0668a672e33de16f69ec2e0569730f99d22b6aa608247269cb2432c3aeb3.scope/container/memory.events
Dec 06 07:17:37 compute-0 podman[300337]: 2025-12-06 07:17:37.277261604 +0000 UTC m=+2.610406130 container died 44ab0668a672e33de16f69ec2e0569730f99d22b6aa608247269cb2432c3aeb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.283 251996 DEBUG nova.storage.rbd_utils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image dd21a47b-0073-4789-b313-f2484ea4c357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.290 251996 DEBUG oslo_concurrency.processutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/disk.config dd21a47b-0073-4789-b313-f2484ea4c357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-19bd83ce50f765c518b1b31944dfa061d86aea1939b30d7cbd843299d3d47684-merged.mount: Deactivated successfully.
Dec 06 07:17:37 compute-0 ceph-mon[74339]: pgmap v1785: 305 pgs: 305 active+clean; 334 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Dec 06 07:17:37 compute-0 ceph-mon[74339]: pgmap v1786: 305 pgs: 305 active+clean; 312 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 4.2 MiB/s wr, 137 op/s
Dec 06 07:17:37 compute-0 podman[300337]: 2025-12-06 07:17:37.323504583 +0000 UTC m=+2.656649109 container remove 44ab0668a672e33de16f69ec2e0569730f99d22b6aa608247269cb2432c3aeb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.323 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.339 251996 DEBUG nova.compute.manager [req-546717e1-f370-4020-b9df-ef557caea775 req-de5acf2f-8d5f-438e-bf9b-f5414947ce73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-vif-plugged-50c0e93a-7306-495b-9a6f-121997fa4acb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.339 251996 DEBUG oslo_concurrency.lockutils [req-546717e1-f370-4020-b9df-ef557caea775 req-de5acf2f-8d5f-438e-bf9b-f5414947ce73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3997e85b-0d13-4e0a-9316-863294c82484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.339 251996 DEBUG oslo_concurrency.lockutils [req-546717e1-f370-4020-b9df-ef557caea775 req-de5acf2f-8d5f-438e-bf9b-f5414947ce73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.340 251996 DEBUG oslo_concurrency.lockutils [req-546717e1-f370-4020-b9df-ef557caea775 req-de5acf2f-8d5f-438e-bf9b-f5414947ce73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.340 251996 DEBUG nova.compute.manager [req-546717e1-f370-4020-b9df-ef557caea775 req-de5acf2f-8d5f-438e-bf9b-f5414947ce73 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Processing event network-vif-plugged-50c0e93a-7306-495b-9a6f-121997fa4acb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.341 251996 DEBUG nova.compute.manager [req-0216e077-5afd-4266-8203-0df121dd4d2e req-7a273540-9394-4fda-b34d-d355e259ec6e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-vif-plugged-50c0e93a-7306-495b-9a6f-121997fa4acb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.341 251996 DEBUG oslo_concurrency.lockutils [req-0216e077-5afd-4266-8203-0df121dd4d2e req-7a273540-9394-4fda-b34d-d355e259ec6e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3997e85b-0d13-4e0a-9316-863294c82484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.341 251996 DEBUG oslo_concurrency.lockutils [req-0216e077-5afd-4266-8203-0df121dd4d2e req-7a273540-9394-4fda-b34d-d355e259ec6e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.342 251996 DEBUG oslo_concurrency.lockutils [req-0216e077-5afd-4266-8203-0df121dd4d2e req-7a273540-9394-4fda-b34d-d355e259ec6e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:37 compute-0 systemd[1]: libpod-conmon-44ab0668a672e33de16f69ec2e0569730f99d22b6aa608247269cb2432c3aeb3.scope: Deactivated successfully.
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.342 251996 DEBUG nova.compute.manager [req-0216e077-5afd-4266-8203-0df121dd4d2e req-7a273540-9394-4fda-b34d-d355e259ec6e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] No waiting events found dispatching network-vif-plugged-50c0e93a-7306-495b-9a6f-121997fa4acb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.342 251996 WARNING nova.compute.manager [req-0216e077-5afd-4266-8203-0df121dd4d2e req-7a273540-9394-4fda-b34d-d355e259ec6e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received unexpected event network-vif-plugged-50c0e93a-7306-495b-9a6f-121997fa4acb for instance with vm_state building and task_state spawning.
Dec 06 07:17:37 compute-0 podman[300516]: 2025-12-06 07:17:37.417915936 +0000 UTC m=+0.049245015 container create 53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:17:37 compute-0 systemd[1]: Started libpod-conmon-53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039.scope.
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.471 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005457.4707878, 3997e85b-0d13-4e0a-9316-863294c82484 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.472 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] VM Started (Lifecycle Event)
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.474 251996 DEBUG nova.compute.manager [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.483 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:17:37 compute-0 podman[300516]: 2025-12-06 07:17:37.393717881 +0000 UTC m=+0.025046990 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:17:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fc55925d30c270225dedfadc2468c70703c7625c1eca66d99c8757c75775a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.495 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.500 251996 INFO nova.virt.libvirt.driver [-] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Instance spawned successfully.
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.500 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.506 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.512 251996 DEBUG oslo_concurrency.processutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/disk.config dd21a47b-0073-4789-b313-f2484ea4c357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:37 compute-0 podman[300516]: 2025-12-06 07:17:37.509441307 +0000 UTC m=+0.140770406 container init 53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.513 251996 INFO nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Deleting local config drive /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357/disk.config because it was imported into RBD.
Dec 06 07:17:37 compute-0 podman[300562]: 2025-12-06 07:17:37.517497792 +0000 UTC m=+0.056522167 container create e6160e27defa22a6a510d65caf19f36afd4dc294d77d9de932b442afb8eba109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ganguly, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 07:17:37 compute-0 podman[300516]: 2025-12-06 07:17:37.520192457 +0000 UTC m=+0.151521536 container start 53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.519 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.520 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.520 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.520 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.521 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.521 251996 DEBUG nova.virt.libvirt.driver [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.524 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.524 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005457.4709003, 3997e85b-0d13-4e0a-9316-863294c82484 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.524 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] VM Paused (Lifecycle Event)
Dec 06 07:17:37 compute-0 neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5[300565]: [NOTICE]   (300579) : New worker (300587) forked
Dec 06 07:17:37 compute-0 neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5[300565]: [NOTICE]   (300579) : Loading success.
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.552 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.557 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005457.4763772, 3997e85b-0d13-4e0a-9316-863294c82484 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.558 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] VM Resumed (Lifecycle Event)
Dec 06 07:17:37 compute-0 systemd[1]: Started libpod-conmon-e6160e27defa22a6a510d65caf19f36afd4dc294d77d9de932b442afb8eba109.scope.
Dec 06 07:17:37 compute-0 kernel: tapad0242d9-4a: entered promiscuous mode
Dec 06 07:17:37 compute-0 NetworkManager[48965]: <info>  [1765005457.5811] manager: (tapad0242d9-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Dec 06 07:17:37 compute-0 ovn_controller[147168]: 2025-12-06T07:17:37Z|00221|binding|INFO|Claiming lport ad0242d9-4af1-43ec-974d-c21d786abe3f for this chassis.
Dec 06 07:17:37 compute-0 ovn_controller[147168]: 2025-12-06T07:17:37Z|00222|binding|INFO|ad0242d9-4af1-43ec-974d-c21d786abe3f: Claiming fa:16:3e:05:ed:60 10.100.0.11
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.583 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:37 compute-0 systemd-udevd[300398]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:17:37 compute-0 podman[300562]: 2025-12-06 07:17:37.491317972 +0000 UTC m=+0.030342367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.588 251996 INFO nova.compute.manager [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Took 13.71 seconds to spawn the instance on the hypervisor.
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.588 251996 DEBUG nova.compute.manager [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.589 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.592 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:37 compute-0 NetworkManager[48965]: <info>  [1765005457.6001] device (tapad0242d9-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.600 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:ed:60 10.100.0.11'], port_security=['fa:16:3e:05:ed:60 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dd21a47b-0073-4789-b313-f2484ea4c357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7e1b3c5b-2965-422b-9e23-f20ff1aa60b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=ad0242d9-4af1-43ec-974d-c21d786abe3f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.601 158118 INFO neutron.agent.ovn.metadata.agent [-] Port ad0242d9-4af1-43ec-974d-c21d786abe3f in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis
Dec 06 07:17:37 compute-0 NetworkManager[48965]: <info>  [1765005457.6030] device (tapad0242d9-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.603 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.608 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:17:37 compute-0 ovn_controller[147168]: 2025-12-06T07:17:37Z|00223|binding|INFO|Setting lport ad0242d9-4af1-43ec-974d-c21d786abe3f ovn-installed in OVS
Dec 06 07:17:37 compute-0 ovn_controller[147168]: 2025-12-06T07:17:37Z|00224|binding|INFO|Setting lport ad0242d9-4af1-43ec-974d-c21d786abe3f up in Southbound
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.612 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.614 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0583bb-f8ab-4f63-ace0-f500a2d5822b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.615 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.617 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.617 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[383ea133-541e-4a10-8e4a-11e354f608c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.619 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[368842ea-6944-423c-9858-8d6e0f09b7b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbdd4d75f781d275f686b3dca4dd0851c7b57d7f89d3f9123bcdcf7bbf93f30e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbdd4d75f781d275f686b3dca4dd0851c7b57d7f89d3f9123bcdcf7bbf93f30e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbdd4d75f781d275f686b3dca4dd0851c7b57d7f89d3f9123bcdcf7bbf93f30e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbdd4d75f781d275f686b3dca4dd0851c7b57d7f89d3f9123bcdcf7bbf93f30e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.632 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[fa934e3d-684e-4a86-99b5-c4540b0debeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 systemd-machined[212986]: New machine qemu-33-instance-00000049.
Dec 06 07:17:37 compute-0 podman[300562]: 2025-12-06 07:17:37.645286175 +0000 UTC m=+0.184310570 container init e6160e27defa22a6a510d65caf19f36afd4dc294d77d9de932b442afb8eba109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ganguly, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:17:37 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-00000049.
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.654 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b20d79a8-6bc6-4b06-8b3f-606d8656059f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 podman[300562]: 2025-12-06 07:17:37.655755977 +0000 UTC m=+0.194780352 container start e6160e27defa22a6a510d65caf19f36afd4dc294d77d9de932b442afb8eba109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ganguly, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.659 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:17:37 compute-0 podman[300562]: 2025-12-06 07:17:37.660507059 +0000 UTC m=+0.199531454 container attach e6160e27defa22a6a510d65caf19f36afd4dc294d77d9de932b442afb8eba109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ganguly, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.685 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[da45b50f-3d12-411c-b27b-d178fc594bf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.693 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdbb6c3-376c-470d-a160-08534ebb8f00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 NetworkManager[48965]: <info>  [1765005457.6947] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/122)
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.702 251996 INFO nova.compute.manager [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Took 15.16 seconds to build instance.
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.727 251996 DEBUG oslo_concurrency.lockutils [None req-aaa70371-65a8-47c0-8099-31ea470c1097 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.738 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b730ba-c182-460a-8066-fcbfcc6b2a15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.744 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[30cc2026-79ce-4ebc-888f-8e5846a44533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 NetworkManager[48965]: <info>  [1765005457.7636] device (tap4d599401-30): carrier: link connected
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.767 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[dd10be71-bb01-40bc-bd86-917229c154d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 5.1 MiB/s wr, 133 op/s
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.784 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a57091b6-55b5-46e6-8c60-f6f9c25c5413]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573035, 'reachable_time': 38007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300628, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.805 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f84b53-71e4-44ac-a355-470e1ea4ed93]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573035, 'tstamp': 573035}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300629, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.821 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dad5aa04-99f4-4b8a-af48-b08f3a8b22f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573035, 'reachable_time': 38007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300630, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.850 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cd611b27-da7a-4204-b0ec-fc1bc86608fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.901 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e5cbdc9e-485f-4c61-8158-97dd5cb4243c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.903 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.903 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.903 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:37 compute-0 NetworkManager[48965]: <info>  [1765005457.9058] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Dec 06 07:17:37 compute-0 kernel: tap4d599401-30: entered promiscuous mode
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.906 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.908 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:37 compute-0 ovn_controller[147168]: 2025-12-06T07:17:37Z|00225|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.908 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.925 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:37 compute-0 nova_compute[251992]: 2025-12-06 07:17:37.926 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.927 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.928 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[afc5cd76-2c72-44da-809f-e2c5587e6c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.928 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:17:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:37.929 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:17:38 compute-0 podman[300698]: 2025-12-06 07:17:38.262269366 +0000 UTC m=+0.020225454 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:17:38 compute-0 podman[300698]: 2025-12-06 07:17:38.405242072 +0000 UTC m=+0.163198180 container create 2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]: {
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:     "0": [
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:         {
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             "devices": [
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "/dev/loop3"
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             ],
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             "lv_name": "ceph_lv0",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             "lv_size": "7511998464",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             "name": "ceph_lv0",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             "tags": {
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "ceph.cluster_name": "ceph",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "ceph.crush_device_class": "",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "ceph.encrypted": "0",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "ceph.osd_id": "0",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "ceph.type": "block",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:                 "ceph.vdo": "0"
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             },
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             "type": "block",
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:             "vg_name": "ceph_vg0"
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:         }
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]:     ]
Dec 06 07:17:38 compute-0 infallible_ganguly[300603]: }
Dec 06 07:17:38 compute-0 podman[300562]: 2025-12-06 07:17:38.454042523 +0000 UTC m=+0.993066908 container died e6160e27defa22a6a510d65caf19f36afd4dc294d77d9de932b442afb8eba109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 07:17:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:38.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:38 compute-0 systemd[1]: Started libpod-conmon-2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3.scope.
Dec 06 07:17:38 compute-0 systemd[1]: libpod-e6160e27defa22a6a510d65caf19f36afd4dc294d77d9de932b442afb8eba109.scope: Deactivated successfully.
Dec 06 07:17:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:17:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbdd4d75f781d275f686b3dca4dd0851c7b57d7f89d3f9123bcdcf7bbf93f30e-merged.mount: Deactivated successfully.
Dec 06 07:17:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89bf1ca2c94925a7f512cc9cc1fbf85a2710bb1ba5d0bfc403424f7dcb1a7115/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:38 compute-0 podman[300698]: 2025-12-06 07:17:38.5105899 +0000 UTC m=+0.268546078 container init 2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:17:38 compute-0 podman[300562]: 2025-12-06 07:17:38.514295613 +0000 UTC m=+1.053319988 container remove e6160e27defa22a6a510d65caf19f36afd4dc294d77d9de932b442afb8eba109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ganguly, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:17:38 compute-0 podman[300698]: 2025-12-06 07:17:38.520218158 +0000 UTC m=+0.278174216 container start 2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:17:38 compute-0 systemd[1]: libpod-conmon-e6160e27defa22a6a510d65caf19f36afd4dc294d77d9de932b442afb8eba109.scope: Deactivated successfully.
Dec 06 07:17:38 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[300718]: [NOTICE]   (300734) : New worker (300736) forked
Dec 06 07:17:38 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[300718]: [NOTICE]   (300734) : Loading success.
Dec 06 07:17:38 compute-0 sudo[300274]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:38 compute-0 sudo[300745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:38 compute-0 sudo[300745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:38 compute-0 sudo[300745]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.644 251996 DEBUG oslo_concurrency.lockutils [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquiring lock "interface-3997e85b-0d13-4e0a-9316-863294c82484-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.644 251996 DEBUG oslo_concurrency.lockutils [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "interface-3997e85b-0d13-4e0a-9316-863294c82484-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.645 251996 DEBUG nova.objects.instance [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lazy-loading 'flavor' on Instance uuid 3997e85b-0d13-4e0a-9316-863294c82484 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.653 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:17:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:38.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.666 251996 DEBUG nova.objects.instance [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3997e85b-0d13-4e0a-9316-863294c82484 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.681 251996 DEBUG nova.network.neutron [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:17:38 compute-0 sudo[300770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:17:38 compute-0 sudo[300770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:38 compute-0 sudo[300770]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.752 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:38 compute-0 sudo[300795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:38 compute-0 sudo[300795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:38 compute-0 sudo[300795]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:38 compute-0 ceph-mon[74339]: pgmap v1787: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 5.1 MiB/s wr, 133 op/s
Dec 06 07:17:38 compute-0 sudo[300820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:17:38 compute-0 sudo[300820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.915 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for dd21a47b-0073-4789-b313-f2484ea4c357 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.915 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005458.9144316, dd21a47b-0073-4789-b313-f2484ea4c357 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.915 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] VM Started (Lifecycle Event)
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.934 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.939 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005458.914684, dd21a47b-0073-4789-b313-f2484ea4c357 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.939 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] VM Paused (Lifecycle Event)
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.971 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:38 compute-0 nova_compute[251992]: 2025-12-06 07:17:38.973 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.009 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.123 251996 DEBUG nova.policy [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bc90c28aab6c4b1d8e2d984f532d7894', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a369472476f14c5db73734ea0b24ecf0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:17:39 compute-0 podman[300894]: 2025-12-06 07:17:39.184974112 +0000 UTC m=+0.042110005 container create 86f4829eda5f218ea11b7d02b2ac59f561bcd20b62c778e7cd5892d6c327a8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:17:39 compute-0 systemd[1]: Started libpod-conmon-86f4829eda5f218ea11b7d02b2ac59f561bcd20b62c778e7cd5892d6c327a8f1.scope.
Dec 06 07:17:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:17:39 compute-0 podman[300894]: 2025-12-06 07:17:39.166887737 +0000 UTC m=+0.024023640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.337 251996 DEBUG nova.compute.manager [req-69113fdf-1dce-44f7-a406-b3e8ce07e62d req-7c34e70d-7bae-49f2-a1e2-0162acde7199 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.337 251996 DEBUG oslo_concurrency.lockutils [req-69113fdf-1dce-44f7-a406-b3e8ce07e62d req-7c34e70d-7bae-49f2-a1e2-0162acde7199 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.338 251996 DEBUG oslo_concurrency.lockutils [req-69113fdf-1dce-44f7-a406-b3e8ce07e62d req-7c34e70d-7bae-49f2-a1e2-0162acde7199 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.338 251996 DEBUG oslo_concurrency.lockutils [req-69113fdf-1dce-44f7-a406-b3e8ce07e62d req-7c34e70d-7bae-49f2-a1e2-0162acde7199 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.338 251996 DEBUG nova.compute.manager [req-69113fdf-1dce-44f7-a406-b3e8ce07e62d req-7c34e70d-7bae-49f2-a1e2-0162acde7199 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Processing event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.338 251996 DEBUG nova.compute.manager [req-69113fdf-1dce-44f7-a406-b3e8ce07e62d req-7c34e70d-7bae-49f2-a1e2-0162acde7199 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.338 251996 DEBUG oslo_concurrency.lockutils [req-69113fdf-1dce-44f7-a406-b3e8ce07e62d req-7c34e70d-7bae-49f2-a1e2-0162acde7199 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.338 251996 DEBUG oslo_concurrency.lockutils [req-69113fdf-1dce-44f7-a406-b3e8ce07e62d req-7c34e70d-7bae-49f2-a1e2-0162acde7199 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.339 251996 DEBUG oslo_concurrency.lockutils [req-69113fdf-1dce-44f7-a406-b3e8ce07e62d req-7c34e70d-7bae-49f2-a1e2-0162acde7199 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.339 251996 DEBUG nova.compute.manager [req-69113fdf-1dce-44f7-a406-b3e8ce07e62d req-7c34e70d-7bae-49f2-a1e2-0162acde7199 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] No waiting events found dispatching network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.339 251996 WARNING nova.compute.manager [req-69113fdf-1dce-44f7-a406-b3e8ce07e62d req-7c34e70d-7bae-49f2-a1e2-0162acde7199 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received unexpected event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f for instance with vm_state active and task_state rebuild_spawning.
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.339 251996 DEBUG nova.compute.manager [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.343 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005459.3433495, dd21a47b-0073-4789-b313-f2484ea4c357 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.343 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] VM Resumed (Lifecycle Event)
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.345 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.348 251996 INFO nova.virt.libvirt.driver [-] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Instance spawned successfully.
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.349 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.380 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.386 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.386 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.386 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.387 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.387 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.387 251996 DEBUG nova.virt.libvirt.driver [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.390 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.442 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.487 251996 DEBUG nova.compute.manager [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.552 251996 DEBUG oslo_concurrency.lockutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.553 251996 DEBUG oslo_concurrency.lockutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.553 251996 DEBUG nova.objects.instance [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.635 251996 DEBUG oslo_concurrency.lockutils [None req-3bf3987c-6093-4557-bc60-521301b103b0 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:39 compute-0 podman[300894]: 2025-12-06 07:17:39.759602223 +0000 UTC m=+0.616738126 container init 86f4829eda5f218ea11b7d02b2ac59f561bcd20b62c778e7cd5892d6c327a8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 07:17:39 compute-0 podman[300894]: 2025-12-06 07:17:39.766957627 +0000 UTC m=+0.624093510 container start 86f4829eda5f218ea11b7d02b2ac59f561bcd20b62c778e7cd5892d6c327a8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_turing, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:17:39 compute-0 magical_turing[300911]: 167 167
Dec 06 07:17:39 compute-0 systemd[1]: libpod-86f4829eda5f218ea11b7d02b2ac59f561bcd20b62c778e7cd5892d6c327a8f1.scope: Deactivated successfully.
Dec 06 07:17:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 321 KiB/s rd, 4.3 MiB/s wr, 124 op/s
Dec 06 07:17:39 compute-0 nova_compute[251992]: 2025-12-06 07:17:39.884 251996 DEBUG nova.network.neutron [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Successfully created port: 2336a4b6-e7bc-46ae-a318-1af2ecc257c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:17:40 compute-0 podman[300894]: 2025-12-06 07:17:40.021784892 +0000 UTC m=+0.878920855 container attach 86f4829eda5f218ea11b7d02b2ac59f561bcd20b62c778e7cd5892d6c327a8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:17:40 compute-0 podman[300894]: 2025-12-06 07:17:40.02241309 +0000 UTC m=+0.879549003 container died 86f4829eda5f218ea11b7d02b2ac59f561bcd20b62c778e7cd5892d6c327a8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_turing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:17:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:17:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:40.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:17:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:17:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:40.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad05f8013a101f776f8d9d7272033f874cb54fc1302a402a8bfc4501a00a82f7-merged.mount: Deactivated successfully.
Dec 06 07:17:40 compute-0 podman[300894]: 2025-12-06 07:17:40.735295525 +0000 UTC m=+1.592431428 container remove 86f4829eda5f218ea11b7d02b2ac59f561bcd20b62c778e7cd5892d6c327a8f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:17:40 compute-0 systemd[1]: libpod-conmon-86f4829eda5f218ea11b7d02b2ac59f561bcd20b62c778e7cd5892d6c327a8f1.scope: Deactivated successfully.
Dec 06 07:17:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2337843887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:17:40 compute-0 podman[300929]: 2025-12-06 07:17:40.879077774 +0000 UTC m=+0.134571883 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:17:40 compute-0 podman[300961]: 2025-12-06 07:17:40.962644894 +0000 UTC m=+0.049280235 container create 48efe4de7edc73971602f5f93cd1caf70af47dfaf8f8f77b0d204824d7bd461c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:17:41 compute-0 systemd[1]: Started libpod-conmon-48efe4de7edc73971602f5f93cd1caf70af47dfaf8f8f77b0d204824d7bd461c.scope.
Dec 06 07:17:41 compute-0 podman[300961]: 2025-12-06 07:17:40.938534432 +0000 UTC m=+0.025169803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:17:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fefd2dbadf7a3c89128c3de9e132d3b62248cd6a5d73ec5be77c043d6d806a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fefd2dbadf7a3c89128c3de9e132d3b62248cd6a5d73ec5be77c043d6d806a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fefd2dbadf7a3c89128c3de9e132d3b62248cd6a5d73ec5be77c043d6d806a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fefd2dbadf7a3c89128c3de9e132d3b62248cd6a5d73ec5be77c043d6d806a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:17:41 compute-0 podman[300961]: 2025-12-06 07:17:41.060584034 +0000 UTC m=+0.147219395 container init 48efe4de7edc73971602f5f93cd1caf70af47dfaf8f8f77b0d204824d7bd461c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:17:41 compute-0 podman[300961]: 2025-12-06 07:17:41.068392782 +0000 UTC m=+0.155028133 container start 48efe4de7edc73971602f5f93cd1caf70af47dfaf8f8f77b0d204824d7bd461c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bassi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:17:41 compute-0 podman[300961]: 2025-12-06 07:17:41.072228449 +0000 UTC m=+0.158863810 container attach 48efe4de7edc73971602f5f93cd1caf70af47dfaf8f8f77b0d204824d7bd461c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bassi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 07:17:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.3 MiB/s wr, 267 op/s
Dec 06 07:17:41 compute-0 ceph-mon[74339]: pgmap v1788: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 321 KiB/s rd, 4.3 MiB/s wr, 124 op/s
Dec 06 07:17:41 compute-0 pedantic_bassi[300977]: {
Dec 06 07:17:41 compute-0 pedantic_bassi[300977]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:17:41 compute-0 pedantic_bassi[300977]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:17:41 compute-0 pedantic_bassi[300977]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:17:41 compute-0 pedantic_bassi[300977]:         "osd_id": 0,
Dec 06 07:17:41 compute-0 pedantic_bassi[300977]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:17:41 compute-0 pedantic_bassi[300977]:         "type": "bluestore"
Dec 06 07:17:41 compute-0 pedantic_bassi[300977]:     }
Dec 06 07:17:41 compute-0 pedantic_bassi[300977]: }
Dec 06 07:17:41 compute-0 systemd[1]: libpod-48efe4de7edc73971602f5f93cd1caf70af47dfaf8f8f77b0d204824d7bd461c.scope: Deactivated successfully.
Dec 06 07:17:41 compute-0 podman[300961]: 2025-12-06 07:17:41.909221555 +0000 UTC m=+0.995856896 container died 48efe4de7edc73971602f5f93cd1caf70af47dfaf8f8f77b0d204824d7bd461c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bassi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 07:17:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-31fefd2dbadf7a3c89128c3de9e132d3b62248cd6a5d73ec5be77c043d6d806a-merged.mount: Deactivated successfully.
Dec 06 07:17:41 compute-0 podman[300961]: 2025-12-06 07:17:41.958094647 +0000 UTC m=+1.044729978 container remove 48efe4de7edc73971602f5f93cd1caf70af47dfaf8f8f77b0d204824d7bd461c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:17:41 compute-0 systemd[1]: libpod-conmon-48efe4de7edc73971602f5f93cd1caf70af47dfaf8f8f77b0d204824d7bd461c.scope: Deactivated successfully.
Dec 06 07:17:41 compute-0 sudo[300820]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:17:42 compute-0 sudo[301011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:42 compute-0 sudo[301011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:42 compute-0 sudo[301011]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:42.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:42 compute-0 nova_compute[251992]: 2025-12-06 07:17:42.615 251996 DEBUG nova.network.neutron [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Successfully updated port: 2336a4b6-e7bc-46ae-a318-1af2ecc257c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:17:42 compute-0 nova_compute[251992]: 2025-12-06 07:17:42.631 251996 DEBUG oslo_concurrency.lockutils [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquiring lock "refresh_cache-3997e85b-0d13-4e0a-9316-863294c82484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:17:42 compute-0 nova_compute[251992]: 2025-12-06 07:17:42.631 251996 DEBUG oslo_concurrency.lockutils [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquired lock "refresh_cache-3997e85b-0d13-4e0a-9316-863294c82484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:17:42 compute-0 nova_compute[251992]: 2025-12-06 07:17:42.632 251996 DEBUG nova.network.neutron [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:17:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:42.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:17:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:42 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 23ed111a-75e5-42f1-bec4-5d3efbaa18ec does not exist
Dec 06 07:17:42 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 793976df-9afc-4548-a55e-2dae968d8701 does not exist
Dec 06 07:17:42 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4639eae2-8824-430e-8031-4b6a91f70350 does not exist
Dec 06 07:17:42 compute-0 sudo[301037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:42 compute-0 sudo[301038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:17:42 compute-0 sudo[301037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:42 compute-0 sudo[301038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:42 compute-0 sudo[301037]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:42 compute-0 sudo[301038]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:42 compute-0 nova_compute[251992]: 2025-12-06 07:17:42.939 251996 WARNING nova.network.neutron [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] f5f7d890-fc89-4729-9976-7a81ce11ddb5 already exists in list: networks containing: ['f5f7d890-fc89-4729-9976-7a81ce11ddb5']. ignoring it
Dec 06 07:17:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:17:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:17:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:17:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:17:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:17:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:17:42 compute-0 sudo[301087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:17:42 compute-0 sudo[301087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:17:42 compute-0 ceph-mon[74339]: pgmap v1789: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.3 MiB/s wr, 267 op/s
Dec 06 07:17:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:43 compute-0 sudo[301087]: pam_unix(sudo:session): session closed for user root
Dec 06 07:17:43 compute-0 nova_compute[251992]: 2025-12-06 07:17:43.643 251996 DEBUG nova.compute.manager [req-c7b5735a-2668-43e9-bbc3-1d2d5ff3b783 req-98157d49-e33b-4246-b922-136397694ae5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-changed-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:43 compute-0 nova_compute[251992]: 2025-12-06 07:17:43.644 251996 DEBUG nova.compute.manager [req-c7b5735a-2668-43e9-bbc3-1d2d5ff3b783 req-98157d49-e33b-4246-b922-136397694ae5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Refreshing instance network info cache due to event network-changed-2336a4b6-e7bc-46ae-a318-1af2ecc257c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:17:43 compute-0 nova_compute[251992]: 2025-12-06 07:17:43.645 251996 DEBUG oslo_concurrency.lockutils [req-c7b5735a-2668-43e9-bbc3-1d2d5ff3b783 req-98157d49-e33b-4246-b922-136397694ae5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-3997e85b-0d13-4e0a-9316-863294c82484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:17:43 compute-0 nova_compute[251992]: 2025-12-06 07:17:43.654 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:43 compute-0 nova_compute[251992]: 2025-12-06 07:17:43.754 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 215 op/s
Dec 06 07:17:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:17:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:44.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:44.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.684 251996 DEBUG oslo_concurrency.lockutils [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.685 251996 DEBUG oslo_concurrency.lockutils [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.685 251996 DEBUG oslo_concurrency.lockutils [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.686 251996 DEBUG oslo_concurrency.lockutils [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.686 251996 DEBUG oslo_concurrency.lockutils [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.687 251996 INFO nova.compute.manager [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Terminating instance
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.688 251996 DEBUG nova.compute.manager [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:17:44 compute-0 kernel: tapad0242d9-4a (unregistering): left promiscuous mode
Dec 06 07:17:44 compute-0 NetworkManager[48965]: <info>  [1765005464.7326] device (tapad0242d9-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:17:44 compute-0 ovn_controller[147168]: 2025-12-06T07:17:44Z|00226|binding|INFO|Releasing lport ad0242d9-4af1-43ec-974d-c21d786abe3f from this chassis (sb_readonly=0)
Dec 06 07:17:44 compute-0 ovn_controller[147168]: 2025-12-06T07:17:44Z|00227|binding|INFO|Setting lport ad0242d9-4af1-43ec-974d-c21d786abe3f down in Southbound
Dec 06 07:17:44 compute-0 ovn_controller[147168]: 2025-12-06T07:17:44Z|00228|binding|INFO|Removing iface tapad0242d9-4a ovn-installed in OVS
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.755 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:44.763 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:ed:60 10.100.0.11'], port_security=['fa:16:3e:05:ed:60 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dd21a47b-0073-4789-b313-f2484ea4c357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7e1b3c5b-2965-422b-9e23-f20ff1aa60b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=ad0242d9-4af1-43ec-974d-c21d786abe3f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:44.766 158118 INFO neutron.agent.ovn.metadata.agent [-] Port ad0242d9-4af1-43ec-974d-c21d786abe3f in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis
Dec 06 07:17:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:44.777 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.777 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:44.779 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6877bfff-792b-49fa-860d-041a993724fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:44.781 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore
Dec 06 07:17:44 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000049.scope: Deactivated successfully.
Dec 06 07:17:44 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000049.scope: Consumed 5.732s CPU time.
Dec 06 07:17:44 compute-0 systemd-machined[212986]: Machine qemu-33-instance-00000049 terminated.
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.907 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.912 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.920 251996 INFO nova.virt.libvirt.driver [-] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Instance destroyed successfully.
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.921 251996 DEBUG nova.objects.instance [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'resources' on Instance uuid dd21a47b-0073-4789-b313-f2484ea4c357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.938 251996 DEBUG nova.virt.libvirt.vif [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:16:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-183617772',display_name='tempest-ServerActionsTestJSON-server-229441047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-183617772',id=73,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-5g3opbrs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:39Z,user_data=None,user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=dd21a47b-0073-4789-b313-f2484ea4c357,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.939 251996 DEBUG nova.network.os_vif_util [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "address": "fa:16:3e:05:ed:60", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad0242d9-4a", "ovs_interfaceid": "ad0242d9-4af1-43ec-974d-c21d786abe3f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.939 251996 DEBUG nova.network.os_vif_util [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.940 251996 DEBUG os_vif [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.941 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.942 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad0242d9-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.943 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.945 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.947 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:44 compute-0 nova_compute[251992]: 2025-12-06 07:17:44.951 251996 INFO os_vif [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:ed:60,bridge_name='br-int',has_traffic_filtering=True,id=ad0242d9-4af1-43ec-974d-c21d786abe3f,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad0242d9-4a')
Dec 06 07:17:45 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[300718]: [NOTICE]   (300734) : haproxy version is 2.8.14-c23fe91
Dec 06 07:17:45 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[300718]: [NOTICE]   (300734) : path to executable is /usr/sbin/haproxy
Dec 06 07:17:45 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[300718]: [WARNING]  (300734) : Exiting Master process...
Dec 06 07:17:45 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[300718]: [WARNING]  (300734) : Exiting Master process...
Dec 06 07:17:45 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[300718]: [ALERT]    (300734) : Current worker (300736) exited with code 143 (Terminated)
Dec 06 07:17:45 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[300718]: [WARNING]  (300734) : All workers exited. Exiting... (0)
Dec 06 07:17:45 compute-0 systemd[1]: libpod-2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3.scope: Deactivated successfully.
Dec 06 07:17:45 compute-0 podman[301137]: 2025-12-06 07:17:45.276390773 +0000 UTC m=+0.396336071 container died 2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.548 251996 DEBUG nova.network.neutron [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Updating instance_info_cache with network_info: [{"id": "50c0e93a-7306-495b-9a6f-121997fa4acb", "address": "fa:16:3e:91:a5:8a", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c0e93a-73", "ovs_interfaceid": "50c0e93a-7306-495b-9a6f-121997fa4acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "address": "fa:16:3e:a6:4f:73", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2336a4b6-e7", "ovs_interfaceid": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.572 251996 DEBUG oslo_concurrency.lockutils [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Releasing lock "refresh_cache-3997e85b-0d13-4e0a-9316-863294c82484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.573 251996 DEBUG oslo_concurrency.lockutils [req-c7b5735a-2668-43e9-bbc3-1d2d5ff3b783 req-98157d49-e33b-4246-b922-136397694ae5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-3997e85b-0d13-4e0a-9316-863294c82484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.573 251996 DEBUG nova.network.neutron [req-c7b5735a-2668-43e9-bbc3-1d2d5ff3b783 req-98157d49-e33b-4246-b922-136397694ae5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Refreshing network info cache for port 2336a4b6-e7bc-46ae-a318-1af2ecc257c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.576 251996 DEBUG nova.virt.libvirt.vif [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:17:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-169244694',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-169244694',id=76,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a369472476f14c5db73734ea0b24ecf0',ramdisk_id='',reservation_id='r-0q07guu2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-876880010',owner_user_name='tempest-AttachInterfacesV270Test-876880010-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:37Z,user_data=None,user_id='bc90c28aab6c4b1d8e2d984f532d7894',uuid=3997e85b-0d13-4e0a-9316-863294c82484,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "address": "fa:16:3e:a6:4f:73", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2336a4b6-e7", "ovs_interfaceid": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.576 251996 DEBUG nova.network.os_vif_util [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Converting VIF {"id": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "address": "fa:16:3e:a6:4f:73", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2336a4b6-e7", "ovs_interfaceid": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.577 251996 DEBUG nova.network.os_vif_util [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:4f:73,bridge_name='br-int',has_traffic_filtering=True,id=2336a4b6-e7bc-46ae-a318-1af2ecc257c7,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2336a4b6-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.577 251996 DEBUG os_vif [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:4f:73,bridge_name='br-int',has_traffic_filtering=True,id=2336a4b6-e7bc-46ae-a318-1af2ecc257c7,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2336a4b6-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.577 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.578 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.578 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.580 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.580 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2336a4b6-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.581 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2336a4b6-e7, col_values=(('external_ids', {'iface-id': '2336a4b6-e7bc-46ae-a318-1af2ecc257c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:4f:73', 'vm-uuid': '3997e85b-0d13-4e0a-9316-863294c82484'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
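The two-command transaction above is the os-vif OVS plug: add the tap port to `br-int`, then set the Interface `external_ids` that ovn-controller matches against its logical port (`iface-id` is the Neutron port UUID, which is why OVN claims the lport moments later). A hedged sketch rebuilding those `col_values` as the roughly equivalent `ovs-vsctl` invocation; illustrative only, since os-vif actually commits this through the ovsdbapp IDL shown in the log:

```python
# Sketch: reassemble the AddPortCommand/DbSetCommand pair from the log as an
# ovs-vsctl command line. Values are copied from the log entry; the CLI
# assembly itself is an illustration, not os-vif's real code path.
import shlex

external_ids = {
    "iface-id": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7",  # Neutron port UUID
    "iface-status": "active",
    "attached-mac": "fa:16:3e:a6:4f:73",
    "vm-uuid": "3997e85b-0d13-4e0a-9316-863294c82484",   # Nova instance UUID
}

args = ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap2336a4b6-e7",
        "--", "set", "Interface", "tap2336a4b6-e7"]
args += [f"external_ids:{k}={shlex.quote(v)}" for k, v in external_ids.items()]
cmd = " ".join(args)
print(cmd)
```

`--may-exist` mirrors the `may_exist=True` flag in the logged AddPortCommand, making the operation idempotent.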
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.582 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:45 compute-0 NetworkManager[48965]: <info>  [1765005465.5834] manager: (tap2336a4b6-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.583 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.588 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.589 251996 INFO os_vif [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:4f:73,bridge_name='br-int',has_traffic_filtering=True,id=2336a4b6-e7bc-46ae-a318-1af2ecc257c7,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2336a4b6-e7')
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.590 251996 DEBUG nova.virt.libvirt.vif [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:17:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-169244694',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-169244694',id=76,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a369472476f14c5db73734ea0b24ecf0',ramdisk_id='',reservation_id='r-0q07guu2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-876880010',owner_user_name='tempest-AttachInterfacesV270Test-876880010-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:37Z,user_data=None,user_id='bc90c28aab6c4b1d8e2d984f532d7894',uuid=3997e85b-0d13-4e0a-9316-863294c82484,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "address": "fa:16:3e:a6:4f:73", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2336a4b6-e7", "ovs_interfaceid": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.590 251996 DEBUG nova.network.os_vif_util [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Converting VIF {"id": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "address": "fa:16:3e:a6:4f:73", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2336a4b6-e7", "ovs_interfaceid": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.591 251996 DEBUG nova.network.os_vif_util [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:4f:73,bridge_name='br-int',has_traffic_filtering=True,id=2336a4b6-e7bc-46ae-a318-1af2ecc257c7,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2336a4b6-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.594 251996 DEBUG nova.virt.libvirt.guest [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] attach device xml: <interface type="ethernet">
Dec 06 07:17:45 compute-0 nova_compute[251992]:   <mac address="fa:16:3e:a6:4f:73"/>
Dec 06 07:17:45 compute-0 nova_compute[251992]:   <model type="virtio"/>
Dec 06 07:17:45 compute-0 nova_compute[251992]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:17:45 compute-0 nova_compute[251992]:   <mtu size="1442"/>
Dec 06 07:17:45 compute-0 nova_compute[251992]:   <target dev="tap2336a4b6-e7"/>
Dec 06 07:17:45 compute-0 nova_compute[251992]: </interface>
Dec 06 07:17:45 compute-0 nova_compute[251992]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
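The interface XML that Nova hands to libvirt's attach_device above is small enough to rebuild directly. A sketch reconstructing the same document with the standard library, with every field value copied from the logged XML; this only builds the document, the actual hot-plug requires a live libvirt connection, which is not shown:

```python
# Sketch: rebuild the attach-device XML from the log with xml.etree.
# Values are taken verbatim from the log entry above.
import xml.etree.ElementTree as ET

iface = ET.Element("interface", type="ethernet")
ET.SubElement(iface, "mac", address="fa:16:3e:a6:4f:73")      # Neutron port MAC
ET.SubElement(iface, "model", type="virtio")                  # image_hw_vif_model
ET.SubElement(iface, "driver", name="vhost", rx_queue_size="512")
ET.SubElement(iface, "mtu", size="1442")                      # tunneled-network MTU
ET.SubElement(iface, "target", dev="tap2336a4b6-e7")          # OVS tap device

xml_str = ET.tostring(iface, encoding="unicode")
print(xml_str)
```

The 1442 MTU matches the `"mtu": 1442` in the VIF dict: 1500 minus the Geneve tunnel overhead on this tunneled OVN network.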
Dec 06 07:17:45 compute-0 kernel: tap2336a4b6-e7: entered promiscuous mode
Dec 06 07:17:45 compute-0 NetworkManager[48965]: <info>  [1765005465.6050] manager: (tap2336a4b6-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Dec 06 07:17:45 compute-0 ovn_controller[147168]: 2025-12-06T07:17:45Z|00229|binding|INFO|Claiming lport 2336a4b6-e7bc-46ae-a318-1af2ecc257c7 for this chassis.
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.605 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:45 compute-0 ovn_controller[147168]: 2025-12-06T07:17:45Z|00230|binding|INFO|2336a4b6-e7bc-46ae-a318-1af2ecc257c7: Claiming fa:16:3e:a6:4f:73 10.100.0.13
Dec 06 07:17:45 compute-0 systemd-udevd[301116]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:17:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:45.614 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:4f:73 10.100.0.13'], port_security=['fa:16:3e:a6:4f:73 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3997e85b-0d13-4e0a-9316-863294c82484', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a369472476f14c5db73734ea0b24ecf0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd44518a3-9dbe-4e6e-884b-e1f99389c6ed', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c71088b-c43c-4f75-9f02-d22903770157, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=2336a4b6-e7bc-46ae-a318-1af2ecc257c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:45 compute-0 NetworkManager[48965]: <info>  [1765005465.6202] device (tap2336a4b6-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:17:45 compute-0 NetworkManager[48965]: <info>  [1765005465.6209] device (tap2336a4b6-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:17:45 compute-0 ovn_controller[147168]: 2025-12-06T07:17:45Z|00231|binding|INFO|Setting lport 2336a4b6-e7bc-46ae-a318-1af2ecc257c7 ovn-installed in OVS
Dec 06 07:17:45 compute-0 ovn_controller[147168]: 2025-12-06T07:17:45Z|00232|binding|INFO|Setting lport 2336a4b6-e7bc-46ae-a318-1af2ecc257c7 up in Southbound
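The address OVN claims here (10.100.0.13) comes from the /28 tempest subnet that appears throughout these entries. A /28 leaves little headroom, which matters when tests attach extra interfaces; a quick check of the arithmetic with the standard library:

```python
# Sketch: size of the 10.100.0.0/28 tempest subnet seen in the log.
# A /28 has 16 addresses; minus network, broadcast, and the .1 gateway,
# roughly 13 remain for fixed IPs such as the 10.100.0.13 claimed above.
import ipaddress

net = ipaddress.ip_network("10.100.0.0/28")
print(net.num_addresses)                           # 16
print(ipaddress.ip_address("10.100.0.13") in net)  # True
```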
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.638 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:45 compute-0 nova_compute[251992]: 2025-12-06 07:17:45.641 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.3 MiB/s wr, 231 op/s
Dec 06 07:17:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:17:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:46.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:17:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:17:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:46.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.925 251996 DEBUG nova.compute.manager [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received event network-vif-unplugged-ad0242d9-4af1-43ec-974d-c21d786abe3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.928 251996 DEBUG oslo_concurrency.lockutils [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.929 251996 DEBUG oslo_concurrency.lockutils [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.930 251996 DEBUG oslo_concurrency.lockutils [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.930 251996 DEBUG nova.compute.manager [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] No waiting events found dispatching network-vif-unplugged-ad0242d9-4af1-43ec-974d-c21d786abe3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.931 251996 DEBUG nova.compute.manager [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received event network-vif-unplugged-ad0242d9-4af1-43ec-974d-c21d786abe3f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.932 251996 DEBUG nova.compute.manager [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.932 251996 DEBUG oslo_concurrency.lockutils [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.933 251996 DEBUG oslo_concurrency.lockutils [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.934 251996 DEBUG oslo_concurrency.lockutils [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.934 251996 DEBUG nova.compute.manager [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] No waiting events found dispatching network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.935 251996 WARNING nova.compute.manager [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received unexpected event network-vif-plugged-ad0242d9-4af1-43ec-974d-c21d786abe3f for instance with vm_state active and task_state deleting.
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.936 251996 DEBUG nova.compute.manager [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-vif-plugged-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.936 251996 DEBUG oslo_concurrency.lockutils [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3997e85b-0d13-4e0a-9316-863294c82484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.937 251996 DEBUG oslo_concurrency.lockutils [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.938 251996 DEBUG oslo_concurrency.lockutils [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.938 251996 DEBUG nova.compute.manager [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] No waiting events found dispatching network-vif-plugged-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:46 compute-0 nova_compute[251992]: 2025-12-06 07:17:46.939 251996 WARNING nova.compute.manager [req-cf3af82a-d53e-4034-8112-96f019677f82 req-5015fbd9-00d1-41d5-8df4-b4c63dfaafe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received unexpected event network-vif-plugged-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 for instance with vm_state active and task_state None.
Dec 06 07:17:47 compute-0 nova_compute[251992]: 2025-12-06 07:17:47.212 251996 DEBUG nova.virt.libvirt.driver [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:17:47 compute-0 nova_compute[251992]: 2025-12-06 07:17:47.213 251996 DEBUG nova.virt.libvirt.driver [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:17:47 compute-0 nova_compute[251992]: 2025-12-06 07:17:47.213 251996 DEBUG nova.virt.libvirt.driver [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] No VIF found with MAC fa:16:3e:91:a5:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:17:47 compute-0 nova_compute[251992]: 2025-12-06 07:17:47.213 251996 DEBUG nova.virt.libvirt.driver [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] No VIF found with MAC fa:16:3e:a6:4f:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:17:47 compute-0 nova_compute[251992]: 2025-12-06 07:17:47.247 251996 DEBUG nova.virt.libvirt.guest [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:17:47 compute-0 nova_compute[251992]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:17:47 compute-0 nova_compute[251992]:   <nova:name>tempest-AttachInterfacesV270Test-server-169244694</nova:name>
Dec 06 07:17:47 compute-0 nova_compute[251992]:   <nova:creationTime>2025-12-06 07:17:47</nova:creationTime>
Dec 06 07:17:47 compute-0 nova_compute[251992]:   <nova:flavor name="m1.nano">
Dec 06 07:17:47 compute-0 nova_compute[251992]:     <nova:memory>128</nova:memory>
Dec 06 07:17:47 compute-0 nova_compute[251992]:     <nova:disk>1</nova:disk>
Dec 06 07:17:47 compute-0 nova_compute[251992]:     <nova:swap>0</nova:swap>
Dec 06 07:17:47 compute-0 nova_compute[251992]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:17:47 compute-0 nova_compute[251992]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:17:47 compute-0 nova_compute[251992]:   </nova:flavor>
Dec 06 07:17:47 compute-0 nova_compute[251992]:   <nova:owner>
Dec 06 07:17:47 compute-0 nova_compute[251992]:     <nova:user uuid="bc90c28aab6c4b1d8e2d984f532d7894">tempest-AttachInterfacesV270Test-876880010-project-member</nova:user>
Dec 06 07:17:47 compute-0 nova_compute[251992]:     <nova:project uuid="a369472476f14c5db73734ea0b24ecf0">tempest-AttachInterfacesV270Test-876880010</nova:project>
Dec 06 07:17:47 compute-0 nova_compute[251992]:   </nova:owner>
Dec 06 07:17:47 compute-0 nova_compute[251992]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:17:47 compute-0 nova_compute[251992]:   <nova:ports>
Dec 06 07:17:47 compute-0 nova_compute[251992]:     <nova:port uuid="50c0e93a-7306-495b-9a6f-121997fa4acb">
Dec 06 07:17:47 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 07:17:47 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 07:17:47 compute-0 nova_compute[251992]:     <nova:port uuid="2336a4b6-e7bc-46ae-a318-1af2ecc257c7">
Dec 06 07:17:47 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:17:47 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 07:17:47 compute-0 nova_compute[251992]:   </nova:ports>
Dec 06 07:17:47 compute-0 nova_compute[251992]: </nova:instance>
Dec 06 07:17:47 compute-0 nova_compute[251992]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 07:17:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:47 compute-0 nova_compute[251992]: 2025-12-06 07:17:47.290 251996 DEBUG oslo_concurrency.lockutils [None req-9191c095-9a9a-45b3-a1fe-243acb4f96a5 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "interface-3997e85b-0d13-4e0a-9316-863294c82484-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:47 compute-0 ceph-mon[74339]: pgmap v1790: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 215 op/s
Dec 06 07:17:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 309 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 174 op/s
Dec 06 07:17:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3-userdata-shm.mount: Deactivated successfully.
Dec 06 07:17:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-89bf1ca2c94925a7f512cc9cc1fbf85a2710bb1ba5d0bfc403424f7dcb1a7115-merged.mount: Deactivated successfully.
Dec 06 07:17:48 compute-0 podman[301198]: 2025-12-06 07:17:48.106211599 +0000 UTC m=+0.754450565 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:17:48 compute-0 podman[301199]: 2025-12-06 07:17:48.133905141 +0000 UTC m=+0.782162117 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:17:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:17:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:48.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:17:48 compute-0 podman[301137]: 2025-12-06 07:17:48.500529243 +0000 UTC m=+3.620474531 container cleanup 2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:17:48 compute-0 systemd[1]: libpod-conmon-2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3.scope: Deactivated successfully.
Dec 06 07:17:48 compute-0 ceph-mon[74339]: pgmap v1791: 305 pgs: 305 active+clean; 341 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.3 MiB/s wr, 231 op/s
Dec 06 07:17:48 compute-0 podman[301241]: 2025-12-06 07:17:48.635159727 +0000 UTC m=+0.080351052 container remove 2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.642 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e2134658-8bff-4b34-898a-1efff6a311b9]: (4, ('Sat Dec  6 07:17:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3)\n2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3\nSat Dec  6 07:17:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3)\n2497d7a4601943c6d7fc6d601434bb730db028367c12ffa805781ccc5f70f4f3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.645 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5aa730d2-f330-4968-b42e-ed3b1d6dd71b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.646 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:48 compute-0 nova_compute[251992]: 2025-12-06 07:17:48.648 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:48 compute-0 kernel: tap4d599401-30: left promiscuous mode
Dec 06 07:17:48 compute-0 nova_compute[251992]: 2025-12-06 07:17:48.654 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.662 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9c211cff-fe41-43b8-bcb1-fe1dba7f7512]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:48.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:48 compute-0 nova_compute[251992]: 2025-12-06 07:17:48.675 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.681 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4d682c-1633-4cf2-bafa-d4d237d0c6b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.683 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[46f2f362-29a1-458b-be1e-7c784a577bdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.700 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[601c1d51-9a81-4303-b3c5-f1db891d3743]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573026, 'reachable_time': 19808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301255, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.703 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.703 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d8feca-801d-4b77-9eb9-82f616df7364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.706 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 2336a4b6-e7bc-46ae-a318-1af2ecc257c7 in datapath f5f7d890-fc89-4729-9976-7a81ce11ddb5 unbound from our chassis
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.708 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5f7d890-fc89-4729-9976-7a81ce11ddb5
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.722 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3aff02ab-0a19-4526-8bc2-2d286ec45bf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.757 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[deb9806c-4f1e-4097-83ac-df51b4eb71a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.761 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[340e5778-6284-4aa0-9927-29691b476cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.795 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[39aed910-7c21-4e98-bb53-2f657759df4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.814 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d8b2bb-b2f6-4a61-ac00-25d6fe154775]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5f7d890-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c0:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572754, 'reachable_time': 15604, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301266, 'error': None, 'target': 'ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.839 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f94c7c04-c0dc-4ffa-aab1-f5e3766cc674]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf5f7d890-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572765, 'tstamp': 572765}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301267, 'error': None, 'target': 'ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf5f7d890-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572768, 'tstamp': 572768}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301267, 'error': None, 'target': 'ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.841 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5f7d890-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:48 compute-0 nova_compute[251992]: 2025-12-06 07:17:48.842 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.843 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5f7d890-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.843 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.844 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5f7d890-f0, col_values=(('external_ids', {'iface-id': 'f275ad1d-107c-461e-9302-6a01ff4f0872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:48.844 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.024 251996 DEBUG nova.compute.manager [req-b7357a14-7f2d-47c5-be81-d26ff090353b req-f4b162fb-6204-486d-88d9-24e32894a72a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-vif-plugged-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.025 251996 DEBUG oslo_concurrency.lockutils [req-b7357a14-7f2d-47c5-be81-d26ff090353b req-f4b162fb-6204-486d-88d9-24e32894a72a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3997e85b-0d13-4e0a-9316-863294c82484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.026 251996 DEBUG oslo_concurrency.lockutils [req-b7357a14-7f2d-47c5-be81-d26ff090353b req-f4b162fb-6204-486d-88d9-24e32894a72a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.026 251996 DEBUG oslo_concurrency.lockutils [req-b7357a14-7f2d-47c5-be81-d26ff090353b req-f4b162fb-6204-486d-88d9-24e32894a72a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.026 251996 DEBUG nova.compute.manager [req-b7357a14-7f2d-47c5-be81-d26ff090353b req-f4b162fb-6204-486d-88d9-24e32894a72a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] No waiting events found dispatching network-vif-plugged-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.027 251996 WARNING nova.compute.manager [req-b7357a14-7f2d-47c5-be81-d26ff090353b req-f4b162fb-6204-486d-88d9-24e32894a72a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received unexpected event network-vif-plugged-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 for instance with vm_state active and task_state None.
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.648 251996 DEBUG nova.network.neutron [req-c7b5735a-2668-43e9-bbc3-1d2d5ff3b783 req-98157d49-e33b-4246-b922-136397694ae5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Updated VIF entry in instance network info cache for port 2336a4b6-e7bc-46ae-a318-1af2ecc257c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.649 251996 DEBUG nova.network.neutron [req-c7b5735a-2668-43e9-bbc3-1d2d5ff3b783 req-98157d49-e33b-4246-b922-136397694ae5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Updating instance_info_cache with network_info: [{"id": "50c0e93a-7306-495b-9a6f-121997fa4acb", "address": "fa:16:3e:91:a5:8a", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c0e93a-73", "ovs_interfaceid": "50c0e93a-7306-495b-9a6f-121997fa4acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "address": "fa:16:3e:a6:4f:73", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2336a4b6-e7", "ovs_interfaceid": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.684 251996 DEBUG oslo_concurrency.lockutils [req-c7b5735a-2668-43e9-bbc3-1d2d5ff3b783 req-98157d49-e33b-4246-b922-136397694ae5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-3997e85b-0d13-4e0a-9316-863294c82484" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:17:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 309 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 44 KiB/s wr, 163 op/s
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.814 251996 DEBUG oslo_concurrency.lockutils [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquiring lock "3997e85b-0d13-4e0a-9316-863294c82484" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.814 251996 DEBUG oslo_concurrency.lockutils [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.815 251996 DEBUG oslo_concurrency.lockutils [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquiring lock "3997e85b-0d13-4e0a-9316-863294c82484-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.815 251996 DEBUG oslo_concurrency.lockutils [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.815 251996 DEBUG oslo_concurrency.lockutils [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.816 251996 INFO nova.compute.manager [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Terminating instance
Dec 06 07:17:49 compute-0 nova_compute[251992]: 2025-12-06 07:17:49.817 251996 DEBUG nova.compute.manager [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:17:49 compute-0 ceph-mon[74339]: pgmap v1792: 305 pgs: 305 active+clean; 309 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 174 op/s
Dec 06 07:17:50 compute-0 kernel: tap50c0e93a-73 (unregistering): left promiscuous mode
Dec 06 07:17:50 compute-0 NetworkManager[48965]: <info>  [1765005470.3966] device (tap50c0e93a-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:17:50 compute-0 ovn_controller[147168]: 2025-12-06T07:17:50Z|00233|binding|INFO|Releasing lport 50c0e93a-7306-495b-9a6f-121997fa4acb from this chassis (sb_readonly=0)
Dec 06 07:17:50 compute-0 ovn_controller[147168]: 2025-12-06T07:17:50Z|00234|binding|INFO|Setting lport 50c0e93a-7306-495b-9a6f-121997fa4acb down in Southbound
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.404 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 ovn_controller[147168]: 2025-12-06T07:17:50Z|00235|binding|INFO|Removing iface tap50c0e93a-73 ovn-installed in OVS
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.406 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.415 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:a5:8a 10.100.0.7'], port_security=['fa:16:3e:91:a5:8a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3997e85b-0d13-4e0a-9316-863294c82484', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a369472476f14c5db73734ea0b24ecf0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd44518a3-9dbe-4e6e-884b-e1f99389c6ed', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c71088b-c43c-4f75-9f02-d22903770157, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=50c0e93a-7306-495b-9a6f-121997fa4acb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.416 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 50c0e93a-7306-495b-9a6f-121997fa4acb in datapath f5f7d890-fc89-4729-9976-7a81ce11ddb5 unbound from our chassis
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.418 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5f7d890-fc89-4729-9976-7a81ce11ddb5
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.419 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 kernel: tap2336a4b6-e7 (unregistering): left promiscuous mode
Dec 06 07:17:50 compute-0 NetworkManager[48965]: <info>  [1765005470.4276] device (tap2336a4b6-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.428 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.433 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[303b9415-d619-4c40-85ca-daa01934b6ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.434 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 ovn_controller[147168]: 2025-12-06T07:17:50Z|00236|binding|INFO|Releasing lport 2336a4b6-e7bc-46ae-a318-1af2ecc257c7 from this chassis (sb_readonly=0)
Dec 06 07:17:50 compute-0 ovn_controller[147168]: 2025-12-06T07:17:50Z|00237|binding|INFO|Setting lport 2336a4b6-e7bc-46ae-a318-1af2ecc257c7 down in Southbound
Dec 06 07:17:50 compute-0 ovn_controller[147168]: 2025-12-06T07:17:50Z|00238|binding|INFO|Removing iface tap2336a4b6-e7 ovn-installed in OVS
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.437 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.442 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:4f:73 10.100.0.13'], port_security=['fa:16:3e:a6:4f:73 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3997e85b-0d13-4e0a-9316-863294c82484', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a369472476f14c5db73734ea0b24ecf0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd44518a3-9dbe-4e6e-884b-e1f99389c6ed', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c71088b-c43c-4f75-9f02-d22903770157, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=2336a4b6-e7bc-46ae-a318-1af2ecc257c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.451 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.465 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec6745a-9372-4165-a57a-8eb203a5f967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.468 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c18cdc-5ba9-457a-95d6-ace0c7dbb88d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:50.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:50 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Dec 06 07:17:50 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004c.scope: Consumed 12.752s CPU time.
Dec 06 07:17:50 compute-0 systemd-machined[212986]: Machine qemu-32-instance-0000004c terminated.
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.497 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c212fd54-ee0e-4f75-adcd-4176a1419e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.514 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8e748f81-1a11-439c-97ee-eddea9b33f99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5f7d890-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:c0:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572754, 'reachable_time': 15604, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301283, 'error': None, 'target': 'ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.528 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bb82bcbb-c2d3-461e-9802-923209f1e2c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf5f7d890-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572765, 'tstamp': 572765}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301284, 'error': None, 'target': 'ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf5f7d890-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572768, 'tstamp': 572768}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301284, 'error': None, 'target': 'ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.530 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5f7d890-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.531 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.537 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5f7d890-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.538 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.537 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.538 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5f7d890-f0, col_values=(('external_ids', {'iface-id': 'f275ad1d-107c-461e-9302-6a01ff4f0872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.539 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.539 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 2336a4b6-e7bc-46ae-a318-1af2ecc257c7 in datapath f5f7d890-fc89-4729-9976-7a81ce11ddb5 unbound from our chassis
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.541 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5f7d890-fc89-4729-9976-7a81ce11ddb5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.542 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f6604db2-c3a5-454a-af7f-77642144a8a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.542 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5 namespace which is not needed anymore
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.582 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 NetworkManager[48965]: <info>  [1765005470.6365] manager: (tap50c0e93a-73): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Dec 06 07:17:50 compute-0 NetworkManager[48965]: <info>  [1765005470.6467] manager: (tap2336a4b6-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/127)
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.664 251996 INFO nova.virt.libvirt.driver [-] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Instance destroyed successfully.
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.665 251996 DEBUG nova.objects.instance [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lazy-loading 'resources' on Instance uuid 3997e85b-0d13-4e0a-9316-863294c82484 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:17:50 compute-0 neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5[300565]: [NOTICE]   (300579) : haproxy version is 2.8.14-c23fe91
Dec 06 07:17:50 compute-0 neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5[300565]: [NOTICE]   (300579) : path to executable is /usr/sbin/haproxy
Dec 06 07:17:50 compute-0 neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5[300565]: [WARNING]  (300579) : Exiting Master process...
Dec 06 07:17:50 compute-0 neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5[300565]: [ALERT]    (300579) : Current worker (300587) exited with code 143 (Terminated)
Dec 06 07:17:50 compute-0 neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5[300565]: [WARNING]  (300579) : All workers exited. Exiting... (0)
Dec 06 07:17:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:50 compute-0 systemd[1]: libpod-53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039.scope: Deactivated successfully.
Dec 06 07:17:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:50.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:50 compute-0 podman[301304]: 2025-12-06 07:17:50.681427797 +0000 UTC m=+0.056769523 container died 53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.694 251996 DEBUG nova.virt.libvirt.vif [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:17:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-169244694',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-169244694',id=76,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a369472476f14c5db73734ea0b24ecf0',ramdisk_id='',reservation_id='r-0q07guu2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project
_name='tempest-AttachInterfacesV270Test-876880010',owner_user_name='tempest-AttachInterfacesV270Test-876880010-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:37Z,user_data=None,user_id='bc90c28aab6c4b1d8e2d984f532d7894',uuid=3997e85b-0d13-4e0a-9316-863294c82484,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50c0e93a-7306-495b-9a6f-121997fa4acb", "address": "fa:16:3e:91:a5:8a", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c0e93a-73", "ovs_interfaceid": "50c0e93a-7306-495b-9a6f-121997fa4acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.695 251996 DEBUG nova.network.os_vif_util [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Converting VIF {"id": "50c0e93a-7306-495b-9a6f-121997fa4acb", "address": "fa:16:3e:91:a5:8a", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50c0e93a-73", "ovs_interfaceid": "50c0e93a-7306-495b-9a6f-121997fa4acb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.695 251996 DEBUG nova.network.os_vif_util [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:91:a5:8a,bridge_name='br-int',has_traffic_filtering=True,id=50c0e93a-7306-495b-9a6f-121997fa4acb,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c0e93a-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.696 251996 DEBUG os_vif [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:a5:8a,bridge_name='br-int',has_traffic_filtering=True,id=50c0e93a-7306-495b-9a6f-121997fa4acb,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c0e93a-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.697 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.698 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50c0e93a-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.701 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.705 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.708 251996 INFO os_vif [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:a5:8a,bridge_name='br-int',has_traffic_filtering=True,id=50c0e93a-7306-495b-9a6f-121997fa4acb,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50c0e93a-73')
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.709 251996 DEBUG nova.virt.libvirt.vif [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:17:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-169244694',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-169244694',id=76,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:17:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a369472476f14c5db73734ea0b24ecf0',ramdisk_id='',reservation_id='r-0q07guu2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project
_name='tempest-AttachInterfacesV270Test-876880010',owner_user_name='tempest-AttachInterfacesV270Test-876880010-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:17:37Z,user_data=None,user_id='bc90c28aab6c4b1d8e2d984f532d7894',uuid=3997e85b-0d13-4e0a-9316-863294c82484,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "address": "fa:16:3e:a6:4f:73", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2336a4b6-e7", "ovs_interfaceid": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.709 251996 DEBUG nova.network.os_vif_util [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Converting VIF {"id": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "address": "fa:16:3e:a6:4f:73", "network": {"id": "f5f7d890-fc89-4729-9976-7a81ce11ddb5", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-2103017265-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a369472476f14c5db73734ea0b24ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2336a4b6-e7", "ovs_interfaceid": "2336a4b6-e7bc-46ae-a318-1af2ecc257c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.710 251996 DEBUG nova.network.os_vif_util [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:4f:73,bridge_name='br-int',has_traffic_filtering=True,id=2336a4b6-e7bc-46ae-a318-1af2ecc257c7,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2336a4b6-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.710 251996 DEBUG os_vif [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:4f:73,bridge_name='br-int',has_traffic_filtering=True,id=2336a4b6-e7bc-46ae-a318-1af2ecc257c7,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2336a4b6-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:17:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-54fc55925d30c270225dedfadc2468c70703c7625c1eca66d99c8757c75775a4-merged.mount: Deactivated successfully.
Dec 06 07:17:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039-userdata-shm.mount: Deactivated successfully.
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.712 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.712 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2336a4b6-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.715 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.717 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.719 251996 INFO os_vif [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:4f:73,bridge_name='br-int',has_traffic_filtering=True,id=2336a4b6-e7bc-46ae-a318-1af2ecc257c7,network=Network(f5f7d890-fc89-4729-9976-7a81ce11ddb5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2336a4b6-e7')
Dec 06 07:17:50 compute-0 podman[301304]: 2025-12-06 07:17:50.722335618 +0000 UTC m=+0.097677344 container cleanup 53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:17:50 compute-0 systemd[1]: libpod-conmon-53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039.scope: Deactivated successfully.
Dec 06 07:17:50 compute-0 podman[301371]: 2025-12-06 07:17:50.794757097 +0000 UTC m=+0.043717740 container remove 53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.800 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4092cea9-13fb-40e2-90a0-c65f6faac074]: (4, ('Sat Dec  6 07:17:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5 (53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039)\n53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039\nSat Dec  6 07:17:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5 (53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039)\n53f96cce29263432d98d2ebe2c74a6557e3972a92dcac0aad595a9cde8763039\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.801 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb3dfa9-88c4-4d95-807a-752b59ff1f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.802 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5f7d890-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.804 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 kernel: tapf5f7d890-f0: left promiscuous mode
Dec 06 07:17:50 compute-0 nova_compute[251992]: 2025-12-06 07:17:50.818 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.821 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[df527a3a-e171-4bc3-ac28-d4818a40b04e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.842 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cd680567-9313-4438-abb0-72074f0453e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.844 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[80f6fc05-7760-43ef-895d-e80b3e21e6ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.857 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7863090a-ef3c-44e5-ae43-e7666f51daa3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572748, 'reachable_time': 29279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301389, 'error': None, 'target': 'ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.859 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5f7d890-fc89-4729-9976-7a81ce11ddb5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:50.859 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bf1f4c-17c5-4d9e-9e4a-0dc6370344a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:17:50 compute-0 systemd[1]: run-netns-ovnmeta\x2df5f7d890\x2dfc89\x2d4729\x2d9976\x2d7a81ce11ddb5.mount: Deactivated successfully.
Dec 06 07:17:51 compute-0 ceph-mon[74339]: pgmap v1793: 305 pgs: 305 active+clean; 309 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 44 KiB/s wr, 163 op/s
Dec 06 07:17:51 compute-0 nova_compute[251992]: 2025-12-06 07:17:51.710 251996 DEBUG nova.compute.manager [req-4d66ba0a-179f-4a06-acf2-bbe44c976ada req-eaa8e74a-4e1b-4594-aba0-b79f1844f4b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-vif-unplugged-50c0e93a-7306-495b-9a6f-121997fa4acb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:51 compute-0 nova_compute[251992]: 2025-12-06 07:17:51.710 251996 DEBUG oslo_concurrency.lockutils [req-4d66ba0a-179f-4a06-acf2-bbe44c976ada req-eaa8e74a-4e1b-4594-aba0-b79f1844f4b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3997e85b-0d13-4e0a-9316-863294c82484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:51 compute-0 nova_compute[251992]: 2025-12-06 07:17:51.710 251996 DEBUG oslo_concurrency.lockutils [req-4d66ba0a-179f-4a06-acf2-bbe44c976ada req-eaa8e74a-4e1b-4594-aba0-b79f1844f4b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:51 compute-0 nova_compute[251992]: 2025-12-06 07:17:51.710 251996 DEBUG oslo_concurrency.lockutils [req-4d66ba0a-179f-4a06-acf2-bbe44c976ada req-eaa8e74a-4e1b-4594-aba0-b79f1844f4b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:51 compute-0 nova_compute[251992]: 2025-12-06 07:17:51.711 251996 DEBUG nova.compute.manager [req-4d66ba0a-179f-4a06-acf2-bbe44c976ada req-eaa8e74a-4e1b-4594-aba0-b79f1844f4b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] No waiting events found dispatching network-vif-unplugged-50c0e93a-7306-495b-9a6f-121997fa4acb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:51 compute-0 nova_compute[251992]: 2025-12-06 07:17:51.711 251996 DEBUG nova.compute.manager [req-4d66ba0a-179f-4a06-acf2-bbe44c976ada req-eaa8e74a-4e1b-4594-aba0-b79f1844f4b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-vif-unplugged-50c0e93a-7306-495b-9a6f-121997fa4acb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:17:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 295 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 47 KiB/s wr, 249 op/s
Dec 06 07:17:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:52.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:52 compute-0 nova_compute[251992]: 2025-12-06 07:17:52.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:17:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:52.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.115 251996 INFO nova.virt.libvirt.driver [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Deleting instance files /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357_del
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.116 251996 INFO nova.virt.libvirt.driver [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Deletion of /var/lib/nova/instances/dd21a47b-0073-4789-b313-f2484ea4c357_del complete
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.175 251996 INFO nova.compute.manager [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Took 8.49 seconds to destroy the instance on the hypervisor.
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.176 251996 DEBUG oslo.service.loopingcall [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.176 251996 DEBUG nova.compute.manager [-] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.176 251996 DEBUG nova.network.neutron [-] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:17:53 compute-0 ceph-mon[74339]: pgmap v1794: 305 pgs: 305 active+clean; 295 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 47 KiB/s wr, 249 op/s
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.656 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 295 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 22 KiB/s wr, 106 op/s
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.787 251996 INFO nova.virt.libvirt.driver [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Deleting instance files /var/lib/nova/instances/3997e85b-0d13-4e0a-9316-863294c82484_del
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.788 251996 INFO nova.virt.libvirt.driver [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Deletion of /var/lib/nova/instances/3997e85b-0d13-4e0a-9316-863294c82484_del complete
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.853 251996 INFO nova.compute.manager [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Took 4.04 seconds to destroy the instance on the hypervisor.
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.853 251996 DEBUG oslo.service.loopingcall [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.854 251996 DEBUG nova.compute.manager [-] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.854 251996 DEBUG nova.network.neutron [-] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.882 251996 DEBUG nova.compute.manager [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-vif-plugged-50c0e93a-7306-495b-9a6f-121997fa4acb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.883 251996 DEBUG oslo_concurrency.lockutils [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3997e85b-0d13-4e0a-9316-863294c82484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.883 251996 DEBUG oslo_concurrency.lockutils [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.883 251996 DEBUG oslo_concurrency.lockutils [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.884 251996 DEBUG nova.compute.manager [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] No waiting events found dispatching network-vif-plugged-50c0e93a-7306-495b-9a6f-121997fa4acb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.884 251996 WARNING nova.compute.manager [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received unexpected event network-vif-plugged-50c0e93a-7306-495b-9a6f-121997fa4acb for instance with vm_state active and task_state deleting.
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.884 251996 DEBUG nova.compute.manager [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-vif-unplugged-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.884 251996 DEBUG oslo_concurrency.lockutils [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3997e85b-0d13-4e0a-9316-863294c82484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.884 251996 DEBUG oslo_concurrency.lockutils [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.885 251996 DEBUG oslo_concurrency.lockutils [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.885 251996 DEBUG nova.compute.manager [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] No waiting events found dispatching network-vif-unplugged-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.885 251996 DEBUG nova.compute.manager [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-vif-unplugged-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.886 251996 DEBUG nova.compute.manager [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-vif-plugged-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.886 251996 DEBUG oslo_concurrency.lockutils [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3997e85b-0d13-4e0a-9316-863294c82484-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.886 251996 DEBUG oslo_concurrency.lockutils [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.886 251996 DEBUG oslo_concurrency.lockutils [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.887 251996 DEBUG nova.compute.manager [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] No waiting events found dispatching network-vif-plugged-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:17:53 compute-0 nova_compute[251992]: 2025-12-06 07:17:53.887 251996 WARNING nova.compute.manager [req-af27558e-d671-49f3-b655-e4ea2452b83d req-c15914c9-3925-4c1d-ba5a-5b5ecaee5500 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received unexpected event network-vif-plugged-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 for instance with vm_state active and task_state deleting.
Dec 06 07:17:54 compute-0 nova_compute[251992]: 2025-12-06 07:17:54.237 251996 DEBUG nova.network.neutron [-] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:17:54 compute-0 nova_compute[251992]: 2025-12-06 07:17:54.308 251996 INFO nova.compute.manager [-] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Took 1.13 seconds to deallocate network for instance.
Dec 06 07:17:54 compute-0 nova_compute[251992]: 2025-12-06 07:17:54.461 251996 DEBUG oslo_concurrency.lockutils [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:54 compute-0 nova_compute[251992]: 2025-12-06 07:17:54.462 251996 DEBUG oslo_concurrency.lockutils [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:17:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:54.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:17:54 compute-0 nova_compute[251992]: 2025-12-06 07:17:54.611 251996 DEBUG oslo_concurrency.processutils [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:54.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:17:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2968549645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.058 251996 DEBUG oslo_concurrency.processutils [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.064 251996 DEBUG nova.compute.provider_tree [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.096 251996 DEBUG nova.scheduler.client.report [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.163 251996 DEBUG oslo_concurrency.lockutils [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.309 251996 INFO nova.scheduler.client.report [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Deleted allocations for instance dd21a47b-0073-4789-b313-f2484ea4c357
Dec 06 07:17:55 compute-0 ceph-mon[74339]: pgmap v1795: 305 pgs: 305 active+clean; 295 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 22 KiB/s wr, 106 op/s
Dec 06 07:17:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2968549645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.549 251996 DEBUG oslo_concurrency.lockutils [None req-7f70a339-234e-47fd-bc34-6b19c18f88ab 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "dd21a47b-0073-4789-b313-f2484ea4c357" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.588 251996 DEBUG nova.network.neutron [-] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.650 251996 INFO nova.compute.manager [-] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Took 1.80 seconds to deallocate network for instance.
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.684 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.684 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.684 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.685 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.713 251996 DEBUG oslo_concurrency.lockutils [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.714 251996 DEBUG oslo_concurrency.lockutils [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.715 251996 DEBUG nova.compute.manager [req-76a9acb3-3847-4806-a47c-66a050823941 req-e44d3704-6850-458b-abb8-3c1dfb7c9ef5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Received event network-vif-deleted-ad0242d9-4af1-43ec-974d-c21d786abe3f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.716 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.768 251996 DEBUG oslo_concurrency.processutils [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 273 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 22 KiB/s wr, 113 op/s
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.982 251996 DEBUG nova.compute.manager [req-b79044ea-9f8b-468f-a051-3bdce9fbf11a req-600bfcf8-cb16-40ce-9735-5b6de4c986e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-vif-deleted-50c0e93a-7306-495b-9a6f-121997fa4acb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:55 compute-0 nova_compute[251992]: 2025-12-06 07:17:55.983 251996 DEBUG nova.compute.manager [req-b79044ea-9f8b-468f-a051-3bdce9fbf11a req-600bfcf8-cb16-40ce-9735-5b6de4c986e0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Received event network-vif-deleted-2336a4b6-e7bc-46ae-a318-1af2ecc257c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:17:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:17:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3831876106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.115 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:17:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1251802086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.207 251996 DEBUG oslo_concurrency.processutils [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.212 251996 DEBUG nova.compute.provider_tree [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.233 251996 DEBUG nova.scheduler.client.report [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.282 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.284 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4454MB free_disk=20.87594223022461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.284 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.285 251996 DEBUG oslo_concurrency.lockutils [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.287 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.323 251996 INFO nova.scheduler.client.report [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Deleted allocations for instance 3997e85b-0d13-4e0a-9316-863294c82484
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.417 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.417 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.440 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.467 251996 DEBUG oslo_concurrency.lockutils [None req-fe893529-9bad-4aab-a0a9-6b88e1ed0382 bc90c28aab6c4b1d8e2d984f532d7894 a369472476f14c5db73734ea0b24ecf0 - - default default] Lock "3997e85b-0d13-4e0a-9316-863294c82484" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:56.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:56.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3831876106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1251802086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:17:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3589256854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.874 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.881 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.914 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.950 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:17:56 compute-0 nova_compute[251992]: 2025-12-06 07:17:56.950 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:17:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:17:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 249 MiB data, 714 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.7 KiB/s wr, 116 op/s
Dec 06 07:17:58 compute-0 ceph-mon[74339]: pgmap v1796: 305 pgs: 305 active+clean; 273 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 22 KiB/s wr, 113 op/s
Dec 06 07:17:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3589256854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:17:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:17:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:17:58.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:17:58 compute-0 nova_compute[251992]: 2025-12-06 07:17:58.657 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:17:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:17:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:17:58.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:17:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:59.194 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:17:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:17:59.194 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:17:59 compute-0 nova_compute[251992]: 2025-12-06 07:17:59.203 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:17:59 compute-0 ceph-mon[74339]: pgmap v1797: 305 pgs: 305 active+clean; 249 MiB data, 714 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.7 KiB/s wr, 116 op/s
Dec 06 07:17:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 249 MiB data, 714 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.7 KiB/s wr, 112 op/s
Dec 06 07:17:59 compute-0 nova_compute[251992]: 2025-12-06 07:17:59.920 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005464.918858, dd21a47b-0073-4789-b313-f2484ea4c357 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:17:59 compute-0 nova_compute[251992]: 2025-12-06 07:17:59.920 251996 INFO nova.compute.manager [-] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] VM Stopped (Lifecycle Event)
Dec 06 07:17:59 compute-0 nova_compute[251992]: 2025-12-06 07:17:59.945 251996 DEBUG nova.compute.manager [None req-74c0c7b4-63c2-47b8-9f84-3eb13ce32458 - - - - - -] [instance: dd21a47b-0073-4789-b313-f2484ea4c357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:17:59 compute-0 nova_compute[251992]: 2025-12-06 07:17:59.951 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:17:59 compute-0 nova_compute[251992]: 2025-12-06 07:17:59.951 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:17:59 compute-0 nova_compute[251992]: 2025-12-06 07:17:59.951 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:17:59 compute-0 nova_compute[251992]: 2025-12-06 07:17:59.951 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:00 compute-0 nova_compute[251992]: 2025-12-06 07:18:00.379 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:00.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:00 compute-0 nova_compute[251992]: 2025-12-06 07:18:00.646 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:00.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:00 compute-0 nova_compute[251992]: 2025-12-06 07:18:00.716 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:01.196 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:01 compute-0 ceph-mon[74339]: pgmap v1798: 305 pgs: 305 active+clean; 249 MiB data, 714 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.7 KiB/s wr, 112 op/s
Dec 06 07:18:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3472577622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 266 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Dec 06 07:18:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:02.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:02.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/13716958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:02 compute-0 sudo[301487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:03 compute-0 sudo[301487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:03 compute-0 sudo[301487]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:03 compute-0 sudo[301512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:03 compute-0 sudo[301512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:03 compute-0 sudo[301512]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.159 251996 DEBUG nova.compute.manager [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.262 251996 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.262 251996 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.290 251996 DEBUG nova.objects.instance [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_requests' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.312 251996 DEBUG nova.virt.hardware [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.313 251996 INFO nova.compute.claims [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.313 251996 DEBUG nova.objects.instance [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'resources' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.332 251996 DEBUG nova.objects.instance [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.386 251996 INFO nova.compute.resource_tracker [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating resource usage from migration e712029d-e908-4f4e-9c2c-a2742ca0daa7
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.386 251996 DEBUG nova.compute.resource_tracker [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Starting to track incoming migration e712029d-e908-4f4e-9c2c-a2742ca0daa7 with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.533 251996 DEBUG oslo_concurrency.processutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:03 compute-0 nova_compute[251992]: 2025-12-06 07:18:03.658 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 266 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Dec 06 07:18:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:03.825 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:03.826 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:03.826 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:18:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1036209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:04 compute-0 ceph-mon[74339]: pgmap v1799: 305 pgs: 305 active+clean; 266 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Dec 06 07:18:04 compute-0 nova_compute[251992]: 2025-12-06 07:18:04.128 251996 DEBUG oslo_concurrency.processutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:04 compute-0 nova_compute[251992]: 2025-12-06 07:18:04.135 251996 DEBUG nova.compute.provider_tree [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:18:04 compute-0 nova_compute[251992]: 2025-12-06 07:18:04.154 251996 DEBUG nova.scheduler.client.report [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:18:04 compute-0 nova_compute[251992]: 2025-12-06 07:18:04.179 251996 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:04 compute-0 nova_compute[251992]: 2025-12-06 07:18:04.180 251996 INFO nova.compute.manager [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Migrating
Dec 06 07:18:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:18:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:04.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:18:04 compute-0 nova_compute[251992]: 2025-12-06 07:18:04.670 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:04 compute-0 nova_compute[251992]: 2025-12-06 07:18:04.671 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:18:04 compute-0 nova_compute[251992]: 2025-12-06 07:18:04.694 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:18:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:18:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:04.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:18:05 compute-0 ceph-mon[74339]: pgmap v1800: 305 pgs: 305 active+clean; 266 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Dec 06 07:18:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1036209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:05 compute-0 nova_compute[251992]: 2025-12-06 07:18:05.661 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005470.6603997, 3997e85b-0d13-4e0a-9316-863294c82484 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:18:05 compute-0 nova_compute[251992]: 2025-12-06 07:18:05.662 251996 INFO nova.compute.manager [-] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] VM Stopped (Lifecycle Event)
Dec 06 07:18:05 compute-0 nova_compute[251992]: 2025-12-06 07:18:05.680 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:05 compute-0 nova_compute[251992]: 2025-12-06 07:18:05.681 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:18:05 compute-0 nova_compute[251992]: 2025-12-06 07:18:05.681 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:18:05 compute-0 nova_compute[251992]: 2025-12-06 07:18:05.718 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:05 compute-0 nova_compute[251992]: 2025-12-06 07:18:05.721 251996 DEBUG nova.compute.manager [None req-efc5fa27-bc3b-453e-865e-cf6d6c2044b8 - - - - - -] [instance: 3997e85b-0d13-4e0a-9316-863294c82484] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:18:05 compute-0 nova_compute[251992]: 2025-12-06 07:18:05.736 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:18:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 271 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Dec 06 07:18:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:06.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:06 compute-0 nova_compute[251992]: 2025-12-06 07:18:06.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:06 compute-0 nova_compute[251992]: 2025-12-06 07:18:06.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:18:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:06.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2412116112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 458 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Dec 06 07:18:08 compute-0 sshd-session[301561]: Accepted publickey for nova from 192.168.122.102 port 46970 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 07:18:08 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Dec 06 07:18:08 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec 06 07:18:08 compute-0 systemd-logind[798]: New session 53 of user nova.
Dec 06 07:18:08 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec 06 07:18:08 compute-0 systemd[1]: Starting User Manager for UID 42436...
Dec 06 07:18:08 compute-0 systemd[301565]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:18:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:08.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:08 compute-0 systemd[301565]: Queued start job for default target Main User Target.
Dec 06 07:18:08 compute-0 systemd[301565]: Created slice User Application Slice.
Dec 06 07:18:08 compute-0 systemd[301565]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:18:08 compute-0 systemd[301565]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 07:18:08 compute-0 systemd[301565]: Reached target Paths.
Dec 06 07:18:08 compute-0 systemd[301565]: Reached target Timers.
Dec 06 07:18:08 compute-0 systemd[301565]: Starting D-Bus User Message Bus Socket...
Dec 06 07:18:08 compute-0 systemd[301565]: Starting Create User's Volatile Files and Directories...
Dec 06 07:18:08 compute-0 systemd[301565]: Finished Create User's Volatile Files and Directories.
Dec 06 07:18:08 compute-0 systemd[301565]: Listening on D-Bus User Message Bus Socket.
Dec 06 07:18:08 compute-0 systemd[301565]: Reached target Sockets.
Dec 06 07:18:08 compute-0 systemd[301565]: Reached target Basic System.
Dec 06 07:18:08 compute-0 systemd[301565]: Reached target Main User Target.
Dec 06 07:18:08 compute-0 systemd[301565]: Startup finished in 143ms.
Dec 06 07:18:08 compute-0 systemd[1]: Started User Manager for UID 42436.
Dec 06 07:18:08 compute-0 systemd[1]: Started Session 53 of User nova.
Dec 06 07:18:08 compute-0 sshd-session[301561]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:18:08 compute-0 ceph-mon[74339]: pgmap v1801: 305 pgs: 305 active+clean; 271 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Dec 06 07:18:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3335321634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:08 compute-0 sshd-session[301581]: Received disconnect from 192.168.122.102 port 46970:11: disconnected by user
Dec 06 07:18:08 compute-0 sshd-session[301581]: Disconnected from user nova 192.168.122.102 port 46970
Dec 06 07:18:08 compute-0 sshd-session[301561]: pam_unix(sshd:session): session closed for user nova
Dec 06 07:18:08 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Dec 06 07:18:08 compute-0 systemd-logind[798]: Session 53 logged out. Waiting for processes to exit.
Dec 06 07:18:08 compute-0 nova_compute[251992]: 2025-12-06 07:18:08.659 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:08 compute-0 systemd-logind[798]: Removed session 53.
Dec 06 07:18:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:08.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:08 compute-0 sshd-session[301583]: Accepted publickey for nova from 192.168.122.102 port 46984 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 07:18:08 compute-0 systemd-logind[798]: New session 55 of user nova.
Dec 06 07:18:08 compute-0 systemd[1]: Started Session 55 of User nova.
Dec 06 07:18:08 compute-0 sshd-session[301583]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:18:08 compute-0 sshd-session[301586]: Received disconnect from 192.168.122.102 port 46984:11: disconnected by user
Dec 06 07:18:08 compute-0 sshd-session[301586]: Disconnected from user nova 192.168.122.102 port 46984
Dec 06 07:18:08 compute-0 sshd-session[301583]: pam_unix(sshd:session): session closed for user nova
Dec 06 07:18:08 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Dec 06 07:18:08 compute-0 systemd-logind[798]: Session 55 logged out. Waiting for processes to exit.
Dec 06 07:18:08 compute-0 systemd-logind[798]: Removed session 55.
Dec 06 07:18:09 compute-0 nova_compute[251992]: 2025-12-06 07:18:09.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:09 compute-0 nova_compute[251992]: 2025-12-06 07:18:09.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:18:09 compute-0 ceph-mon[74339]: pgmap v1802: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 458 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Dec 06 07:18:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3749883810' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:18:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3749883810' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:18:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 444 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Dec 06 07:18:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:10.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:10.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:10 compute-0 nova_compute[251992]: 2025-12-06 07:18:10.719 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:11 compute-0 podman[301589]: 2025-12-06 07:18:11.451616098 +0000 UTC m=+0.114107932 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Dec 06 07:18:11 compute-0 ceph-mon[74339]: pgmap v1803: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 444 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Dec 06 07:18:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 447 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Dec 06 07:18:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:12.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:12.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:18:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:18:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:18:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:18:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:18:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:18:13 compute-0 nova_compute[251992]: 2025-12-06 07:18:13.062 251996 DEBUG nova.compute.manager [req-9e2c0094-98fb-4b01-b693-20fdbdd70c99 req-f24b0031-9152-4171-b571-4f369575afd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:13 compute-0 nova_compute[251992]: 2025-12-06 07:18:13.063 251996 DEBUG oslo_concurrency.lockutils [req-9e2c0094-98fb-4b01-b693-20fdbdd70c99 req-f24b0031-9152-4171-b571-4f369575afd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:13 compute-0 nova_compute[251992]: 2025-12-06 07:18:13.063 251996 DEBUG oslo_concurrency.lockutils [req-9e2c0094-98fb-4b01-b693-20fdbdd70c99 req-f24b0031-9152-4171-b571-4f369575afd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:13 compute-0 nova_compute[251992]: 2025-12-06 07:18:13.063 251996 DEBUG oslo_concurrency.lockutils [req-9e2c0094-98fb-4b01-b693-20fdbdd70c99 req-f24b0031-9152-4171-b571-4f369575afd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:13 compute-0 nova_compute[251992]: 2025-12-06 07:18:13.063 251996 DEBUG nova.compute.manager [req-9e2c0094-98fb-4b01-b693-20fdbdd70c99 req-f24b0031-9152-4171-b571-4f369575afd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:13 compute-0 nova_compute[251992]: 2025-12-06 07:18:13.064 251996 WARNING nova.compute.manager [req-9e2c0094-98fb-4b01-b693-20fdbdd70c99 req-f24b0031-9152-4171-b571-4f369575afd2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state resize_migrated.
Dec 06 07:18:13 compute-0 nova_compute[251992]: 2025-12-06 07:18:13.676 251996 INFO nova.network.neutron [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating port a599f1a0-5413-4dc9-9ae4-d7ba512d761c with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 06 07:18:13 compute-0 nova_compute[251992]: 2025-12-06 07:18:13.708 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:18:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 26K writes, 94K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 26K writes, 9425 syncs, 2.78 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7581 writes, 25K keys, 7581 commit groups, 1.0 writes per commit group, ingest: 25.22 MB, 0.04 MB/s
                                           Interval WAL: 7581 writes, 3096 syncs, 2.45 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 07:18:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 363 KiB/s wr, 42 op/s
Dec 06 07:18:13 compute-0 ceph-mon[74339]: pgmap v1804: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 447 KiB/s rd, 2.2 MiB/s wr, 73 op/s
Dec 06 07:18:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:14.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:18:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:14.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.517 251996 DEBUG nova.compute.manager [req-ee1604d1-3696-40d8-a2b4-711f31ed87f0 req-57e9855c-cdff-4e64-9e5e-4aab2ef9af6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.517 251996 DEBUG oslo_concurrency.lockutils [req-ee1604d1-3696-40d8-a2b4-711f31ed87f0 req-57e9855c-cdff-4e64-9e5e-4aab2ef9af6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.517 251996 DEBUG oslo_concurrency.lockutils [req-ee1604d1-3696-40d8-a2b4-711f31ed87f0 req-57e9855c-cdff-4e64-9e5e-4aab2ef9af6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.517 251996 DEBUG oslo_concurrency.lockutils [req-ee1604d1-3696-40d8-a2b4-711f31ed87f0 req-57e9855c-cdff-4e64-9e5e-4aab2ef9af6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.518 251996 DEBUG nova.compute.manager [req-ee1604d1-3696-40d8-a2b4-711f31ed87f0 req-57e9855c-cdff-4e64-9e5e-4aab2ef9af6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.518 251996 WARNING nova.compute.manager [req-ee1604d1-3696-40d8-a2b4-711f31ed87f0 req-57e9855c-cdff-4e64-9e5e-4aab2ef9af6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state resize_migrated.
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.537 251996 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.537 251996 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.538 251996 DEBUG nova.network.neutron [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.693 251996 DEBUG nova.compute.manager [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-changed-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.694 251996 DEBUG nova.compute.manager [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Refreshing instance network info cache due to event network-changed-a599f1a0-5413-4dc9-9ae4-d7ba512d761c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.694 251996 DEBUG oslo_concurrency.lockutils [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:18:15 compute-0 nova_compute[251992]: 2025-12-06 07:18:15.721 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 363 KiB/s wr, 42 op/s
Dec 06 07:18:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:18:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:16.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:18:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:18:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:16.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:18:17 compute-0 ceph-mon[74339]: pgmap v1805: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 363 KiB/s wr, 42 op/s
Dec 06 07:18:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 111 KiB/s wr, 21 op/s
Dec 06 07:18:18 compute-0 ceph-mon[74339]: pgmap v1806: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 363 KiB/s wr, 42 op/s
Dec 06 07:18:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:18:18
Dec 06 07:18:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:18:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:18:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'images']
Dec 06 07:18:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:18:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:18.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:18 compute-0 nova_compute[251992]: 2025-12-06 07:18:18.549 251996 DEBUG nova.network.neutron [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:18 compute-0 nova_compute[251992]: 2025-12-06 07:18:18.588 251996 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:18:18 compute-0 nova_compute[251992]: 2025-12-06 07:18:18.591 251996 DEBUG oslo_concurrency.lockutils [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:18:18 compute-0 nova_compute[251992]: 2025-12-06 07:18:18.591 251996 DEBUG nova.network.neutron [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Refreshing network info cache for port a599f1a0-5413-4dc9-9ae4-d7ba512d761c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:18:18 compute-0 nova_compute[251992]: 2025-12-06 07:18:18.688 251996 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Dec 06 07:18:18 compute-0 nova_compute[251992]: 2025-12-06 07:18:18.689 251996 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:18:18 compute-0 nova_compute[251992]: 2025-12-06 07:18:18.689 251996 INFO nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Creating image(s)
Dec 06 07:18:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:18.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:18 compute-0 nova_compute[251992]: 2025-12-06 07:18:18.719 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:18 compute-0 nova_compute[251992]: 2025-12-06 07:18:18.724 251996 DEBUG nova.storage.rbd_utils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] creating snapshot(nova-resize) on rbd image(c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:18:19 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Dec 06 07:18:19 compute-0 systemd[301565]: Activating special unit Exit the Session...
Dec 06 07:18:19 compute-0 systemd[301565]: Stopped target Main User Target.
Dec 06 07:18:19 compute-0 systemd[301565]: Stopped target Basic System.
Dec 06 07:18:19 compute-0 systemd[301565]: Stopped target Paths.
Dec 06 07:18:19 compute-0 systemd[301565]: Stopped target Sockets.
Dec 06 07:18:19 compute-0 systemd[301565]: Stopped target Timers.
Dec 06 07:18:19 compute-0 systemd[301565]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:18:19 compute-0 systemd[301565]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 07:18:19 compute-0 systemd[301565]: Closed D-Bus User Message Bus Socket.
Dec 06 07:18:19 compute-0 systemd[301565]: Stopped Create User's Volatile Files and Directories.
Dec 06 07:18:19 compute-0 systemd[301565]: Removed slice User Application Slice.
Dec 06 07:18:19 compute-0 systemd[301565]: Reached target Shutdown.
Dec 06 07:18:19 compute-0 systemd[301565]: Finished Exit the Session.
Dec 06 07:18:19 compute-0 systemd[301565]: Reached target Exit the Session.
Dec 06 07:18:19 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Dec 06 07:18:19 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Dec 06 07:18:19 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec 06 07:18:19 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec 06 07:18:19 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec 06 07:18:19 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec 06 07:18:19 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Dec 06 07:18:19 compute-0 podman[301656]: 2025-12-06 07:18:19.137631897 +0000 UTC m=+0.047722861 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:18:19 compute-0 podman[301657]: 2025-12-06 07:18:19.145838756 +0000 UTC m=+0.051661111 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:18:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Dec 06 07:18:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 25 KiB/s wr, 3 op/s
Dec 06 07:18:20 compute-0 ceph-mon[74339]: pgmap v1807: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 111 KiB/s wr, 21 op/s
Dec 06 07:18:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:20.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:20.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:20 compute-0 nova_compute[251992]: 2025-12-06 07:18:20.723 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.040 251996 DEBUG nova.network.neutron [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updated VIF entry in instance network info cache for port a599f1a0-5413-4dc9-9ae4-d7ba512d761c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.041 251996 DEBUG nova.network.neutron [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Dec 06 07:18:21 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.105 251996 DEBUG oslo_concurrency.lockutils [req-43acaf54-ac81-467e-b039-bb35cf106912 req-ed797b66-e9d3-4278-bd3c-9d0e7452c168 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-c8403a0c-2fe6-48fe-91af-ec5aca71e12d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.341 251996 DEBUG nova.objects.instance [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'trusted_certs' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:21 compute-0 ceph-mon[74339]: pgmap v1808: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 25 KiB/s wr, 3 op/s
Dec 06 07:18:21 compute-0 ceph-mon[74339]: osdmap e253: 3 total, 3 up, 3 in
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.452 251996 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.452 251996 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Ensure instance console log exists: /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.452 251996 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.453 251996 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.453 251996 DEBUG oslo_concurrency.lockutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.456 251996 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Start _get_guest_xml network_info=[{"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:9b:0b:0a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.462 251996 WARNING nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.470 251996 DEBUG nova.virt.libvirt.host [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.471 251996 DEBUG nova.virt.libvirt.host [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.477 251996 DEBUG nova.virt.libvirt.host [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.478 251996 DEBUG nova.virt.libvirt.host [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.479 251996 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.480 251996 DEBUG nova.virt.hardware [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.481 251996 DEBUG nova.virt.hardware [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.481 251996 DEBUG nova.virt.hardware [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.482 251996 DEBUG nova.virt.hardware [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.482 251996 DEBUG nova.virt.hardware [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.483 251996 DEBUG nova.virt.hardware [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.483 251996 DEBUG nova.virt.hardware [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.483 251996 DEBUG nova.virt.hardware [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.484 251996 DEBUG nova.virt.hardware [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.484 251996 DEBUG nova.virt.hardware [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.485 251996 DEBUG nova.virt.hardware [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.485 251996 DEBUG nova.objects.instance [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'vcpu_model' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.508 251996 DEBUG oslo_concurrency.processutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 17 KiB/s wr, 29 op/s
Dec 06 07:18:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:18:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1013867470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:21 compute-0 nova_compute[251992]: 2025-12-06 07:18:21.950 251996 DEBUG oslo_concurrency.processutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.003 251996 DEBUG oslo_concurrency.processutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:18:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1826628988' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.433 251996 DEBUG oslo_concurrency.processutils [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.435 251996 DEBUG nova.virt.libvirt.vif [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:18:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:9b:0b:0a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.435 251996 DEBUG nova.network.os_vif_util [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:9b:0b:0a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.436 251996 DEBUG nova.network.os_vif_util [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.438 251996 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:18:22 compute-0 nova_compute[251992]:   <uuid>c8403a0c-2fe6-48fe-91af-ec5aca71e12d</uuid>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   <name>instance-00000044</name>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   <memory>196608</memory>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerActionsTestJSON-server-893709654</nova:name>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:18:21</nova:creationTime>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <nova:flavor name="m1.micro">
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <nova:memory>192</nova:memory>
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <nova:user uuid="627c36bb63534e52a4b1d5adf47e6ffd">tempest-ServerActionsTestJSON-1877526843-project-member</nova:user>
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <nova:project uuid="929e2be1488d4b80b7ad8946093a6abe">tempest-ServerActionsTestJSON-1877526843</nova:project>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <nova:port uuid="a599f1a0-5413-4dc9-9ae4-d7ba512d761c">
Dec 06 07:18:22 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <system>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <entry name="serial">c8403a0c-2fe6-48fe-91af-ec5aca71e12d</entry>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <entry name="uuid">c8403a0c-2fe6-48fe-91af-ec5aca71e12d</entry>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     </system>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   <os>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   </os>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   <features>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   </features>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk">
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       </source>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c8403a0c-2fe6-48fe-91af-ec5aca71e12d_disk.config">
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       </source>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:18:22 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:9b:0b:0a"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <target dev="tapa599f1a0-54"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d/console.log" append="off"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <video>
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     </video>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:18:22 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:18:22 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:18:22 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:18:22 compute-0 nova_compute[251992]: </domain>
Dec 06 07:18:22 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.439 251996 DEBUG nova.virt.libvirt.vif [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:14:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:18:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:9b:0b:0a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.439 251996 DEBUG nova.network.os_vif_util [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-809610913-network", "vif_mac": "fa:16:3e:9b:0b:0a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.440 251996 DEBUG nova.network.os_vif_util [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.440 251996 DEBUG os_vif [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.441 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.441 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.442 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.444 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.444 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa599f1a0-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.445 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa599f1a0-54, col_values=(('external_ids', {'iface-id': 'a599f1a0-5413-4dc9-9ae4-d7ba512d761c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:0b:0a', 'vm-uuid': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.446 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 NetworkManager[48965]: <info>  [1765005502.4473] manager: (tapa599f1a0-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.450 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.453 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.455 251996 INFO os_vif [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54')
Dec 06 07:18:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1013867470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1826628988' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:18:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:22.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.521 251996 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.522 251996 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.522 251996 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No VIF found with MAC fa:16:3e:9b:0b:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.523 251996 INFO nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Using config drive
Dec 06 07:18:22 compute-0 kernel: tapa599f1a0-54: entered promiscuous mode
Dec 06 07:18:22 compute-0 NetworkManager[48965]: <info>  [1765005502.6251] manager: (tapa599f1a0-54): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Dec 06 07:18:22 compute-0 ovn_controller[147168]: 2025-12-06T07:18:22Z|00239|binding|INFO|Claiming lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c for this chassis.
Dec 06 07:18:22 compute-0 ovn_controller[147168]: 2025-12-06T07:18:22Z|00240|binding|INFO|a599f1a0-5413-4dc9-9ae4-d7ba512d761c: Claiming fa:16:3e:9b:0b:0a 10.100.0.6
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.625 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.629 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.634 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.638 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.646 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 NetworkManager[48965]: <info>  [1765005502.6469] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Dec 06 07:18:22 compute-0 NetworkManager[48965]: <info>  [1765005502.6474] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Dec 06 07:18:22 compute-0 systemd-machined[212986]: New machine qemu-34-instance-00000044.
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.656 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:0b:0a 10.100.0.6'], port_security=['fa:16:3e:9b:0b:0a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '12', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=a599f1a0-5413-4dc9-9ae4-d7ba512d761c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.658 158118 INFO neutron.agent.ovn.metadata.agent [-] Port a599f1a0-5413-4dc9-9ae4-d7ba512d761c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.660 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.671 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1177af6d-e581-4109-ab61-3b8d96bd827d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.672 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.674 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.674 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5701b3e1-e20f-4155-9059-fdeac1ae4439]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.675 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[70edba55-0173-4c07-9d4b-da8701c73277]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.685 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6b2f6a-5930-4305-a7e2-ded9dc37b0ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-00000044.
Dec 06 07:18:22 compute-0 systemd-udevd[301831]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:18:22 compute-0 NetworkManager[48965]: <info>  [1765005502.7087] device (tapa599f1a0-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:18:22 compute-0 NetworkManager[48965]: <info>  [1765005502.7094] device (tapa599f1a0-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.710 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1256c54c-a1bb-40d1-a3bf-97b090cf3d16]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:18:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:22.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.739 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[61af0771-331e-4971-9d22-b3e763b361ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 NetworkManager[48965]: <info>  [1765005502.7482] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/132)
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.747 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d9bb46-b1c2-49b3-8cda-a0aebc3276ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.778 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b8a2be-d747-4130-a1ea-04622c81eedf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.781 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[22c2be91-ab31-435b-9cb3-06a8206807cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 NetworkManager[48965]: <info>  [1765005502.8002] device (tap4d599401-30): carrier: link connected
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.805 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[94c8a27d-2477-49ff-9b5a-226a9717c4ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.820 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[35c83b5e-5e1c-4103-b14b-45a21572b4ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 577538, 'reachable_time': 27262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301861, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.832 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7dfb5a4f-dc8a-4fad-b356-c6d956697f92]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 577538, 'tstamp': 577538}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301862, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.848 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[addc8748-af45-4d3a-ac1c-7bca5f29646c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 577538, 'reachable_time': 27262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301863, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.877 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ef59f2a8-ae77-4c02-888c-6ae683d9e723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.906 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.934 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.937 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5524e802-82cf-44f8-83d3-849b9abe9568]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.938 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.939 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.939 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:22 compute-0 NetworkManager[48965]: <info>  [1765005502.9417] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.941 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 ovn_controller[147168]: 2025-12-06T07:18:22Z|00241|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c ovn-installed in OVS
Dec 06 07:18:22 compute-0 ovn_controller[147168]: 2025-12-06T07:18:22Z|00242|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c up in Southbound
Dec 06 07:18:22 compute-0 kernel: tap4d599401-30: entered promiscuous mode
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.947 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.948 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:22 compute-0 ovn_controller[147168]: 2025-12-06T07:18:22Z|00243|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=1)
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.949 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 nova_compute[251992]: 2025-12-06 07:18:22.965 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.966 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.967 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[184a7bae-fc0f-4ceb-9357-c902a15d6282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.968 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:18:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:22.969 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.114 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005503.114058, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.115 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Resumed (Lifecycle Event)
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.117 251996 DEBUG nova.compute.manager [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.121 251996 INFO nova.virt.libvirt.driver [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance running successfully.
Dec 06 07:18:23 compute-0 virtqemud[251613]: argument unsupported: QEMU guest agent is not configured
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.124 251996 DEBUG nova.virt.libvirt.guest [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.124 251996 DEBUG nova.virt.libvirt.driver [None req-9d2f27c2-102e-403c-8ac8-9209b27bd84d 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Dec 06 07:18:23 compute-0 systemd[1]: Starting dnf makecache...
Dec 06 07:18:23 compute-0 sudo[301915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:23 compute-0 sudo[301915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.156 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:18:23 compute-0 sudo[301915]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.183 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.217 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.218 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005503.115259, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.218 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Started (Lifecycle Event)
Dec 06 07:18:23 compute-0 sudo[301941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:23 compute-0 sudo[301941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:23 compute-0 sudo[301941]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.250 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.255 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:18:23 compute-0 dnf[301939]: Metadata cache refreshed recently.
Dec 06 07:18:23 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 06 07:18:23 compute-0 systemd[1]: Finished dnf makecache.
Dec 06 07:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:18:23 compute-0 podman[301988]: 2025-12-06 07:18:23.402512604 +0000 UTC m=+0.051328953 container create 25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 07:18:23 compute-0 systemd[1]: Started libpod-conmon-25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc.scope.
Dec 06 07:18:23 compute-0 podman[301988]: 2025-12-06 07:18:23.376919401 +0000 UTC m=+0.025735770 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:18:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:18:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fbff888517543e3692dce032c2b478de803afea24052d7eaf244549cc63589f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:23 compute-0 ceph-mon[74339]: pgmap v1810: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 17 KiB/s wr, 29 op/s
Dec 06 07:18:23 compute-0 podman[301988]: 2025-12-06 07:18:23.513657582 +0000 UTC m=+0.162473951 container init 25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:18:23 compute-0 podman[301988]: 2025-12-06 07:18:23.518651302 +0000 UTC m=+0.167467651 container start 25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:18:23 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[302004]: [NOTICE]   (302008) : New worker (302010) forked
Dec 06 07:18:23 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[302004]: [NOTICE]   (302008) : Loading success.
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.600 251996 DEBUG nova.compute.manager [req-2897585d-2448-4d9b-b977-1f72dbb55905 req-ed94691b-b9f7-4b4e-a463-54003da5041a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.600 251996 DEBUG oslo_concurrency.lockutils [req-2897585d-2448-4d9b-b977-1f72dbb55905 req-ed94691b-b9f7-4b4e-a463-54003da5041a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.601 251996 DEBUG oslo_concurrency.lockutils [req-2897585d-2448-4d9b-b977-1f72dbb55905 req-ed94691b-b9f7-4b4e-a463-54003da5041a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.601 251996 DEBUG oslo_concurrency.lockutils [req-2897585d-2448-4d9b-b977-1f72dbb55905 req-ed94691b-b9f7-4b4e-a463-54003da5041a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.601 251996 DEBUG nova.compute.manager [req-2897585d-2448-4d9b-b977-1f72dbb55905 req-ed94691b-b9f7-4b4e-a463-54003da5041a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.601 251996 WARNING nova.compute.manager [req-2897585d-2448-4d9b-b977-1f72dbb55905 req-ed94691b-b9f7-4b4e-a463-54003da5041a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state resized and task_state None.
Dec 06 07:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.732 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:23 compute-0 nova_compute[251992]: 2025-12-06 07:18:23.734 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 17 KiB/s wr, 29 op/s
Dec 06 07:18:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:24.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:24.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:25 compute-0 ceph-mon[74339]: pgmap v1811: 305 pgs: 305 active+clean; 282 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 17 KiB/s wr, 29 op/s
Dec 06 07:18:25 compute-0 nova_compute[251992]: 2025-12-06 07:18:25.763 251996 DEBUG nova.compute.manager [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:25 compute-0 nova_compute[251992]: 2025-12-06 07:18:25.764 251996 DEBUG oslo_concurrency.lockutils [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:25 compute-0 nova_compute[251992]: 2025-12-06 07:18:25.764 251996 DEBUG oslo_concurrency.lockutils [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:25 compute-0 nova_compute[251992]: 2025-12-06 07:18:25.765 251996 DEBUG oslo_concurrency.lockutils [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:25 compute-0 nova_compute[251992]: 2025-12-06 07:18:25.765 251996 DEBUG nova.compute.manager [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:25 compute-0 nova_compute[251992]: 2025-12-06 07:18:25.765 251996 WARNING nova.compute.manager [req-bb401adf-f9ba-49f2-843d-7a00ef39d874 req-2ab77e90-7983-4802-8474-ea6f91544cec 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state resized and task_state None.
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 17 KiB/s wr, 73 op/s
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004353000535082822 of space, bias 1.0, pg target 1.3059001605248466 quantized to 32 (current 32)
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216759433742788 of space, bias 1.0, pg target 0.6481107068909361 quantized to 32 (current 32)
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:18:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:18:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:26.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:26.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:27 compute-0 ceph-mon[74339]: pgmap v1812: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 17 KiB/s wr, 73 op/s
Dec 06 07:18:27 compute-0 nova_compute[251992]: 2025-12-06 07:18:27.449 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 125 op/s
Dec 06 07:18:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Dec 06 07:18:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Dec 06 07:18:28 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Dec 06 07:18:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:28.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000055s ======
Dec 06 07:18:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:28.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Dec 06 07:18:28 compute-0 nova_compute[251992]: 2025-12-06 07:18:28.736 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:29 compute-0 ceph-mon[74339]: pgmap v1813: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 125 op/s
Dec 06 07:18:29 compute-0 ceph-mon[74339]: osdmap e254: 3 total, 3 up, 3 in
Dec 06 07:18:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1035860075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 9.2 KiB/s wr, 139 op/s
Dec 06 07:18:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2140050300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:18:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:30.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:18:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:30.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:31 compute-0 ceph-mon[74339]: pgmap v1815: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 9.2 KiB/s wr, 139 op/s
Dec 06 07:18:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.0 KiB/s wr, 115 op/s
Dec 06 07:18:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:32 compute-0 nova_compute[251992]: 2025-12-06 07:18:32.452 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:32.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:32.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:32 compute-0 nova_compute[251992]: 2025-12-06 07:18:32.859 251996 DEBUG oslo_concurrency.lockutils [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:32 compute-0 nova_compute[251992]: 2025-12-06 07:18:32.859 251996 DEBUG oslo_concurrency.lockutils [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:32 compute-0 nova_compute[251992]: 2025-12-06 07:18:32.860 251996 DEBUG oslo_concurrency.lockutils [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:32 compute-0 nova_compute[251992]: 2025-12-06 07:18:32.860 251996 DEBUG oslo_concurrency.lockutils [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:32 compute-0 nova_compute[251992]: 2025-12-06 07:18:32.861 251996 DEBUG oslo_concurrency.lockutils [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:32 compute-0 nova_compute[251992]: 2025-12-06 07:18:32.862 251996 INFO nova.compute.manager [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Terminating instance
Dec 06 07:18:32 compute-0 nova_compute[251992]: 2025-12-06 07:18:32.864 251996 DEBUG nova.compute.manager [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:18:33 compute-0 kernel: tapa599f1a0-54 (unregistering): left promiscuous mode
Dec 06 07:18:33 compute-0 NetworkManager[48965]: <info>  [1765005513.2773] device (tapa599f1a0-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:18:33 compute-0 ovn_controller[147168]: 2025-12-06T07:18:33Z|00244|binding|INFO|Releasing lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c from this chassis (sb_readonly=0)
Dec 06 07:18:33 compute-0 ovn_controller[147168]: 2025-12-06T07:18:33Z|00245|binding|INFO|Setting lport a599f1a0-5413-4dc9-9ae4-d7ba512d761c down in Southbound
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.289 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-0 ovn_controller[147168]: 2025-12-06T07:18:33Z|00246|binding|INFO|Removing iface tapa599f1a0-54 ovn-installed in OVS
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.292 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.300 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:0b:0a 10.100.0.6'], port_security=['fa:16:3e:9b:0b:0a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c8403a0c-2fe6-48fe-91af-ec5aca71e12d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '14', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=a599f1a0-5413-4dc9-9ae4-d7ba512d761c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.304 158118 INFO neutron.agent.ovn.metadata.agent [-] Port a599f1a0-5413-4dc9-9ae4-d7ba512d761c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.308 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.309 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.311 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[92424ba6-1297-4e7a-9e77-0474c27e69ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.312 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore
Dec 06 07:18:33 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000044.scope: Deactivated successfully.
Dec 06 07:18:33 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000044.scope: Consumed 10.395s CPU time.
Dec 06 07:18:33 compute-0 systemd-machined[212986]: Machine qemu-34-instance-00000044 terminated.
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.498 251996 INFO nova.virt.libvirt.driver [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Instance destroyed successfully.
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.499 251996 DEBUG nova.objects.instance [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'resources' on Instance uuid c8403a0c-2fe6-48fe-91af-ec5aca71e12d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:33 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[302004]: [NOTICE]   (302008) : haproxy version is 2.8.14-c23fe91
Dec 06 07:18:33 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[302004]: [NOTICE]   (302008) : path to executable is /usr/sbin/haproxy
Dec 06 07:18:33 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[302004]: [WARNING]  (302008) : Exiting Master process...
Dec 06 07:18:33 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[302004]: [WARNING]  (302008) : Exiting Master process...
Dec 06 07:18:33 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[302004]: [ALERT]    (302008) : Current worker (302010) exited with code 143 (Terminated)
Dec 06 07:18:33 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[302004]: [WARNING]  (302008) : All workers exited. Exiting... (0)
Dec 06 07:18:33 compute-0 systemd[1]: libpod-25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc.scope: Deactivated successfully.
Dec 06 07:18:33 compute-0 podman[302048]: 2025-12-06 07:18:33.520484497 +0000 UTC m=+0.060924630 container died 25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.527 251996 DEBUG nova.virt.libvirt.vif [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-893709654',display_name='tempest-ServerActionsTestJSON-server-893709654',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-893709654',id=68,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:18:23Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-klri94j0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:18:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=c8403a0c-2fe6-48fe-91af-ec5aca71e12d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.528 251996 DEBUG nova.network.os_vif_util [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "address": "fa:16:3e:9b:0b:0a", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa599f1a0-54", "ovs_interfaceid": "a599f1a0-5413-4dc9-9ae4-d7ba512d761c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.528 251996 DEBUG nova.network.os_vif_util [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.528 251996 DEBUG os_vif [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.531 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.531 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa599f1a0-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.532 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.534 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.537 251996 INFO os_vif [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:0b:0a,bridge_name='br-int',has_traffic_filtering=True,id=a599f1a0-5413-4dc9-9ae4-d7ba512d761c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa599f1a0-54')
Dec 06 07:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc-userdata-shm.mount: Deactivated successfully.
Dec 06 07:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fbff888517543e3692dce032c2b478de803afea24052d7eaf244549cc63589f-merged.mount: Deactivated successfully.
Dec 06 07:18:33 compute-0 podman[302048]: 2025-12-06 07:18:33.583134963 +0000 UTC m=+0.123575076 container cleanup 25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:18:33 compute-0 systemd[1]: libpod-conmon-25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc.scope: Deactivated successfully.
Dec 06 07:18:33 compute-0 podman[302106]: 2025-12-06 07:18:33.644569876 +0000 UTC m=+0.039761499 container remove 25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.649 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0c73f026-4a10-47c2-87bc-49570eae444e]: (4, ('Sat Dec  6 07:18:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc)\n25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc\nSat Dec  6 07:18:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc)\n25a37716338b4a978b14927144e7f750f866c2a7a3af274b0208031f1bda29dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.651 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cde026b1-a31b-4d62-86c5-265dae76bf8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.652 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.654 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-0 kernel: tap4d599401-30: left promiscuous mode
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.667 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.671 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[14d677b2-b96d-4bf0-8db5-2b321d7d01ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.693 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6148ae7a-ae66-4318-90a2-959f24603fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.694 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fd353def-3adc-4d18-bd30-48034d913f7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.709 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[feebd880-8afb-4490-9cab-96c1c87ba202]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 577532, 'reachable_time': 35220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302121, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.712 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:18:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:33.713 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[b97344c0-e391-40e6-a7f0-fc80e4abf59f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:18:33 compute-0 ceph-mon[74339]: pgmap v1816: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.0 KiB/s wr, 115 op/s
Dec 06 07:18:33 compute-0 nova_compute[251992]: 2025-12-06 07:18:33.738 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.0 KiB/s wr, 115 op/s
Dec 06 07:18:34 compute-0 nova_compute[251992]: 2025-12-06 07:18:34.255 251996 INFO nova.virt.libvirt.driver [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Deleting instance files /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d_del
Dec 06 07:18:34 compute-0 nova_compute[251992]: 2025-12-06 07:18:34.256 251996 INFO nova.virt.libvirt.driver [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Deletion of /var/lib/nova/instances/c8403a0c-2fe6-48fe-91af-ec5aca71e12d_del complete
Dec 06 07:18:34 compute-0 nova_compute[251992]: 2025-12-06 07:18:34.310 251996 INFO nova.compute.manager [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Took 1.45 seconds to destroy the instance on the hypervisor.
Dec 06 07:18:34 compute-0 nova_compute[251992]: 2025-12-06 07:18:34.311 251996 DEBUG oslo.service.loopingcall [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:18:34 compute-0 nova_compute[251992]: 2025-12-06 07:18:34.311 251996 DEBUG nova.compute.manager [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:18:34 compute-0 nova_compute[251992]: 2025-12-06 07:18:34.311 251996 DEBUG nova.network.neutron [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:18:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:34.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:34.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:35 compute-0 ceph-mon[74339]: pgmap v1817: 305 pgs: 305 active+clean; 281 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.0 KiB/s wr, 115 op/s
Dec 06 07:18:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 209 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.9 KiB/s wr, 103 op/s
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.163 251996 DEBUG nova.compute.manager [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.164 251996 DEBUG oslo_concurrency.lockutils [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.165 251996 DEBUG oslo_concurrency.lockutils [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.165 251996 DEBUG oslo_concurrency.lockutils [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.166 251996 DEBUG nova.compute.manager [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.166 251996 DEBUG nova.compute.manager [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-unplugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.166 251996 DEBUG nova.compute.manager [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.167 251996 DEBUG oslo_concurrency.lockutils [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.167 251996 DEBUG oslo_concurrency.lockutils [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.168 251996 DEBUG oslo_concurrency.lockutils [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.168 251996 DEBUG nova.compute.manager [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] No waiting events found dispatching network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.169 251996 WARNING nova.compute.manager [req-f5ad83be-d21f-4ee9-831e-8d187d6451bd req-f75d4b20-b4da-491b-aba2-150529483860 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received unexpected event network-vif-plugged-a599f1a0-5413-4dc9-9ae4-d7ba512d761c for instance with vm_state active and task_state deleting.
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.426 251996 DEBUG nova.network.neutron [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.450 251996 INFO nova.compute.manager [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Took 2.14 seconds to deallocate network for instance.
Dec 06 07:18:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:36.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.523 251996 DEBUG oslo_concurrency.lockutils [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.524 251996 DEBUG oslo_concurrency.lockutils [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.528 251996 DEBUG nova.compute.manager [req-4cfc3457-3a2b-4a52-acca-deb2eac3aa32 req-4df51497-7114-409f-82cb-af26211e9315 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Received event network-vif-deleted-a599f1a0-5413-4dc9-9ae4-d7ba512d761c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.535 251996 DEBUG oslo_concurrency.lockutils [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.587 251996 INFO nova.scheduler.client.report [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Deleted allocations for instance c8403a0c-2fe6-48fe-91af-ec5aca71e12d
Dec 06 07:18:36 compute-0 nova_compute[251992]: 2025-12-06 07:18:36.683 251996 DEBUG oslo_concurrency.lockutils [None req-0f978144-c526-45dd-b9dd-6d34f2e58425 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "c8403a0c-2fe6-48fe-91af-ec5aca71e12d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:18:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:36.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:18:36 compute-0 ceph-mon[74339]: pgmap v1818: 305 pgs: 305 active+clean; 209 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.9 KiB/s wr, 103 op/s
Dec 06 07:18:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Dec 06 07:18:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Dec 06 07:18:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Dec 06 07:18:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 121 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 KiB/s wr, 108 op/s
Dec 06 07:18:38 compute-0 ceph-mon[74339]: osdmap e255: 3 total, 3 up, 3 in
Dec 06 07:18:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:38.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:38 compute-0 nova_compute[251992]: 2025-12-06 07:18:38.533 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:38 compute-0 nova_compute[251992]: 2025-12-06 07:18:38.741 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:38.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:39 compute-0 ceph-mon[74339]: pgmap v1820: 305 pgs: 305 active+clean; 121 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 KiB/s wr, 108 op/s
Dec 06 07:18:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4221516505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 121 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.7 KiB/s wr, 102 op/s
Dec 06 07:18:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:40.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:40.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2792564145' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:18:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2792564145' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:18:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 167 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 129 op/s
Dec 06 07:18:42 compute-0 podman[302130]: 2025-12-06 07:18:42.477333406 +0000 UTC m=+0.136523877 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Dec 06 07:18:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:42.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:42 compute-0 ceph-mon[74339]: pgmap v1821: 305 pgs: 305 active+clean; 121 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.7 KiB/s wr, 102 op/s
Dec 06 07:18:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:18:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:18:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:18:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:18:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:18:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:18:43 compute-0 sudo[302159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:43 compute-0 sudo[302160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:43 compute-0 sudo[302159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-0 sudo[302160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-0 sudo[302159]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:43 compute-0 sudo[302160]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:43 compute-0 sudo[302209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:18:43 compute-0 sudo[302210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:43 compute-0 sudo[302210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-0 sudo[302209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-0 sudo[302210]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:43 compute-0 sudo[302209]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:43 compute-0 sudo[302259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:43 compute-0 sudo[302259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-0 sudo[302259]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:43 compute-0 sudo[302284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 07:18:43 compute-0 sudo[302284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-0 nova_compute[251992]: 2025-12-06 07:18:43.535 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:43 compute-0 nova_compute[251992]: 2025-12-06 07:18:43.743 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:43 compute-0 sudo[302284]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:18:43 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:18:43 compute-0 ceph-mon[74339]: pgmap v1822: 305 pgs: 305 active+clean; 167 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 129 op/s
Dec 06 07:18:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2349873373' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:18:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2349873373' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:18:43 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 167 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 129 op/s
Dec 06 07:18:43 compute-0 sudo[302328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:43 compute-0 sudo[302328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-0 sudo[302328]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:43 compute-0 sudo[302353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:18:43 compute-0 sudo[302353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-0 sudo[302353]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:43 compute-0 sudo[302378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:43 compute-0 sudo[302378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:43 compute-0 sudo[302378]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:44 compute-0 sudo[302403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:18:44 compute-0 sudo[302403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:44 compute-0 nova_compute[251992]: 2025-12-06 07:18:44.381 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:44.381 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:18:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:44.382 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:18:44 compute-0 sudo[302403]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:18:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:18:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:18:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:44.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:18:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:44 compute-0 ceph-mon[74339]: pgmap v1823: 305 pgs: 305 active+clean; 167 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 129 op/s
Dec 06 07:18:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:18:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:18:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:18:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:18:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:18:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 146 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Dec 06 07:18:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e7723237-b613-4236-ae6a-38cf0bd0d611 does not exist
Dec 06 07:18:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev bf88bdea-38ff-494d-aef4-d3120e150e4a does not exist
Dec 06 07:18:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 44786e80-5673-4467-a495-d8e915f45104 does not exist
Dec 06 07:18:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:18:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:18:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:18:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:18:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:18:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:18:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:18:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:18:46 compute-0 sudo[302458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:46 compute-0 sudo[302458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:46 compute-0 sudo[302458]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:46 compute-0 sudo[302483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:18:46 compute-0 sudo[302483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:46 compute-0 sudo[302483]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:46 compute-0 sudo[302508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:46 compute-0 sudo[302508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:46 compute-0 sudo[302508]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:46 compute-0 sudo[302533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:18:46 compute-0 sudo[302533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:18:46.384 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:18:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:46.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:46 compute-0 podman[302601]: 2025-12-06 07:18:46.644535088 +0000 UTC m=+0.040270873 container create 3b7519ef1de89a26568ded75a23ebad857dcecc3733200d3439fd9ecc0d3d835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:18:46 compute-0 systemd[1]: Started libpod-conmon-3b7519ef1de89a26568ded75a23ebad857dcecc3733200d3439fd9ecc0d3d835.scope.
Dec 06 07:18:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:18:46 compute-0 podman[302601]: 2025-12-06 07:18:46.625781756 +0000 UTC m=+0.021517561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:18:46 compute-0 podman[302601]: 2025-12-06 07:18:46.728191271 +0000 UTC m=+0.123927076 container init 3b7519ef1de89a26568ded75a23ebad857dcecc3733200d3439fd9ecc0d3d835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:18:46 compute-0 podman[302601]: 2025-12-06 07:18:46.7356809 +0000 UTC m=+0.131416685 container start 3b7519ef1de89a26568ded75a23ebad857dcecc3733200d3439fd9ecc0d3d835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:18:46 compute-0 podman[302601]: 2025-12-06 07:18:46.74001265 +0000 UTC m=+0.135748455 container attach 3b7519ef1de89a26568ded75a23ebad857dcecc3733200d3439fd9ecc0d3d835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:18:46 compute-0 charming_joliot[302617]: 167 167
Dec 06 07:18:46 compute-0 systemd[1]: libpod-3b7519ef1de89a26568ded75a23ebad857dcecc3733200d3439fd9ecc0d3d835.scope: Deactivated successfully.
Dec 06 07:18:46 compute-0 podman[302601]: 2025-12-06 07:18:46.742904631 +0000 UTC m=+0.138640416 container died 3b7519ef1de89a26568ded75a23ebad857dcecc3733200d3439fd9ecc0d3d835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 07:18:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:46.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-56890d674b317f85920c26d083a3147358249e76d467b8ac29442e0c31167604-merged.mount: Deactivated successfully.
Dec 06 07:18:46 compute-0 podman[302601]: 2025-12-06 07:18:46.792913385 +0000 UTC m=+0.188649170 container remove 3b7519ef1de89a26568ded75a23ebad857dcecc3733200d3439fd9ecc0d3d835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:18:46 compute-0 systemd[1]: libpod-conmon-3b7519ef1de89a26568ded75a23ebad857dcecc3733200d3439fd9ecc0d3d835.scope: Deactivated successfully.
Dec 06 07:18:46 compute-0 podman[302641]: 2025-12-06 07:18:46.957037491 +0000 UTC m=+0.047880746 container create 497760f58882886222e28d9a367eacdf3afe1cd1cbeaaae7557995422ab9a45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dirac, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 07:18:46 compute-0 systemd[1]: Started libpod-conmon-497760f58882886222e28d9a367eacdf3afe1cd1cbeaaae7557995422ab9a45e.scope.
Dec 06 07:18:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:18:47 compute-0 podman[302641]: 2025-12-06 07:18:46.936145758 +0000 UTC m=+0.026989033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098f22ff2122a0749abfd7a96bf4ae2c788ce7a81e1611aa3a96336345faa8ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098f22ff2122a0749abfd7a96bf4ae2c788ce7a81e1611aa3a96336345faa8ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098f22ff2122a0749abfd7a96bf4ae2c788ce7a81e1611aa3a96336345faa8ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098f22ff2122a0749abfd7a96bf4ae2c788ce7a81e1611aa3a96336345faa8ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098f22ff2122a0749abfd7a96bf4ae2c788ce7a81e1611aa3a96336345faa8ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:47 compute-0 podman[302641]: 2025-12-06 07:18:47.041257959 +0000 UTC m=+0.132101214 container init 497760f58882886222e28d9a367eacdf3afe1cd1cbeaaae7557995422ab9a45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dirac, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 07:18:47 compute-0 podman[302641]: 2025-12-06 07:18:47.049597002 +0000 UTC m=+0.140440257 container start 497760f58882886222e28d9a367eacdf3afe1cd1cbeaaae7557995422ab9a45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:18:47 compute-0 podman[302641]: 2025-12-06 07:18:47.053557062 +0000 UTC m=+0.144400317 container attach 497760f58882886222e28d9a367eacdf3afe1cd1cbeaaae7557995422ab9a45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dirac, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 07:18:47 compute-0 ceph-mon[74339]: pgmap v1824: 305 pgs: 305 active+clean; 146 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Dec 06 07:18:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3859210869' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:18:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3859210869' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:18:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:18:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:18:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:18:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1916357225' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 88 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 2.0 MiB/s wr, 72 op/s
Dec 06 07:18:48 compute-0 peaceful_dirac[302658]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:18:48 compute-0 peaceful_dirac[302658]: --> relative data size: 1.0
Dec 06 07:18:48 compute-0 peaceful_dirac[302658]: --> All data devices are unavailable
Dec 06 07:18:48 compute-0 systemd[1]: libpod-497760f58882886222e28d9a367eacdf3afe1cd1cbeaaae7557995422ab9a45e.scope: Deactivated successfully.
Dec 06 07:18:48 compute-0 podman[302641]: 2025-12-06 07:18:48.270590094 +0000 UTC m=+1.361433349 container died 497760f58882886222e28d9a367eacdf3afe1cd1cbeaaae7557995422ab9a45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dirac, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 07:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-098f22ff2122a0749abfd7a96bf4ae2c788ce7a81e1611aa3a96336345faa8ad-merged.mount: Deactivated successfully.
Dec 06 07:18:48 compute-0 podman[302641]: 2025-12-06 07:18:48.336445479 +0000 UTC m=+1.427288734 container remove 497760f58882886222e28d9a367eacdf3afe1cd1cbeaaae7557995422ab9a45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:18:48 compute-0 systemd[1]: libpod-conmon-497760f58882886222e28d9a367eacdf3afe1cd1cbeaaae7557995422ab9a45e.scope: Deactivated successfully.
Dec 06 07:18:48 compute-0 sudo[302533]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:48 compute-0 sudo[302683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:48 compute-0 sudo[302683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:48 compute-0 sudo[302683]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:48 compute-0 sudo[302708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:18:48 compute-0 sudo[302708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:48 compute-0 sudo[302708]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:48 compute-0 nova_compute[251992]: 2025-12-06 07:18:48.496 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005513.4953523, c8403a0c-2fe6-48fe-91af-ec5aca71e12d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:18:48 compute-0 nova_compute[251992]: 2025-12-06 07:18:48.497 251996 INFO nova.compute.manager [-] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] VM Stopped (Lifecycle Event)
Dec 06 07:18:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:48.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:48 compute-0 nova_compute[251992]: 2025-12-06 07:18:48.537 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:48 compute-0 sudo[302734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:48 compute-0 sudo[302734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:48 compute-0 nova_compute[251992]: 2025-12-06 07:18:48.540 251996 DEBUG nova.compute.manager [None req-2cfc2dc5-863e-4314-ae5e-1d9c3e4ea925 - - - - - -] [instance: c8403a0c-2fe6-48fe-91af-ec5aca71e12d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:18:48 compute-0 sudo[302734]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:48 compute-0 sudo[302759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:18:48 compute-0 sudo[302759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:48 compute-0 nova_compute[251992]: 2025-12-06 07:18:48.745 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:48.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:48 compute-0 podman[302823]: 2025-12-06 07:18:48.985404793 +0000 UTC m=+0.082100030 container create b4216d1081509cbdd8dbb23b9b795963530b56d21cb6fbdec561b68716cfed66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cori, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 07:18:49 compute-0 podman[302823]: 2025-12-06 07:18:48.926730167 +0000 UTC m=+0.023425474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:18:49 compute-0 systemd[1]: Started libpod-conmon-b4216d1081509cbdd8dbb23b9b795963530b56d21cb6fbdec561b68716cfed66.scope.
Dec 06 07:18:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:18:49 compute-0 podman[302823]: 2025-12-06 07:18:49.2589396 +0000 UTC m=+0.355634907 container init b4216d1081509cbdd8dbb23b9b795963530b56d21cb6fbdec561b68716cfed66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 07:18:49 compute-0 podman[302823]: 2025-12-06 07:18:49.267053676 +0000 UTC m=+0.363748893 container start b4216d1081509cbdd8dbb23b9b795963530b56d21cb6fbdec561b68716cfed66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cori, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 07:18:49 compute-0 clever_cori[302839]: 167 167
Dec 06 07:18:49 compute-0 systemd[1]: libpod-b4216d1081509cbdd8dbb23b9b795963530b56d21cb6fbdec561b68716cfed66.scope: Deactivated successfully.
Dec 06 07:18:49 compute-0 podman[302823]: 2025-12-06 07:18:49.287443904 +0000 UTC m=+0.384139121 container attach b4216d1081509cbdd8dbb23b9b795963530b56d21cb6fbdec561b68716cfed66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:18:49 compute-0 podman[302823]: 2025-12-06 07:18:49.287959858 +0000 UTC m=+0.384655075 container died b4216d1081509cbdd8dbb23b9b795963530b56d21cb6fbdec561b68716cfed66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cori, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:18:49 compute-0 ceph-mon[74339]: pgmap v1825: 305 pgs: 305 active+clean; 88 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 2.0 MiB/s wr, 72 op/s
Dec 06 07:18:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/368597711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dbfe960232de3c1f038a8e166129d0b6b223bc5468d90b5b0708b01a1c70e0b-merged.mount: Deactivated successfully.
Dec 06 07:18:49 compute-0 podman[302823]: 2025-12-06 07:18:49.350024409 +0000 UTC m=+0.446719626 container remove b4216d1081509cbdd8dbb23b9b795963530b56d21cb6fbdec561b68716cfed66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cori, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:18:49 compute-0 systemd[1]: libpod-conmon-b4216d1081509cbdd8dbb23b9b795963530b56d21cb6fbdec561b68716cfed66.scope: Deactivated successfully.
Dec 06 07:18:49 compute-0 podman[302845]: 2025-12-06 07:18:49.416295897 +0000 UTC m=+0.115123631 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:18:49 compute-0 podman[302851]: 2025-12-06 07:18:49.421985445 +0000 UTC m=+0.120603583 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:18:49 compute-0 podman[302901]: 2025-12-06 07:18:49.507408976 +0000 UTC m=+0.037390853 container create 8d322f9f7d68bff9223f5c81a43bd3b9650b2b569e10407e8923787fb0148b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:18:49 compute-0 systemd[1]: Started libpod-conmon-8d322f9f7d68bff9223f5c81a43bd3b9650b2b569e10407e8923787fb0148b83.scope.
Dec 06 07:18:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:18:49 compute-0 podman[302901]: 2025-12-06 07:18:49.492189852 +0000 UTC m=+0.022171749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6443b43c5a081123b9a106157f5b9a52ece7f3b4703d9b5e8d9ddffedd8cd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6443b43c5a081123b9a106157f5b9a52ece7f3b4703d9b5e8d9ddffedd8cd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6443b43c5a081123b9a106157f5b9a52ece7f3b4703d9b5e8d9ddffedd8cd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6443b43c5a081123b9a106157f5b9a52ece7f3b4703d9b5e8d9ddffedd8cd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:49 compute-0 podman[302901]: 2025-12-06 07:18:49.602392445 +0000 UTC m=+0.132374352 container init 8d322f9f7d68bff9223f5c81a43bd3b9650b2b569e10407e8923787fb0148b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 07:18:49 compute-0 podman[302901]: 2025-12-06 07:18:49.609301488 +0000 UTC m=+0.139283365 container start 8d322f9f7d68bff9223f5c81a43bd3b9650b2b569e10407e8923787fb0148b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jones, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:18:49 compute-0 podman[302901]: 2025-12-06 07:18:49.613173215 +0000 UTC m=+0.143155092 container attach 8d322f9f7d68bff9223f5c81a43bd3b9650b2b569e10407e8923787fb0148b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jones, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:18:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 88 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Dec 06 07:18:50 compute-0 charming_jones[302917]: {
Dec 06 07:18:50 compute-0 charming_jones[302917]:     "0": [
Dec 06 07:18:50 compute-0 charming_jones[302917]:         {
Dec 06 07:18:50 compute-0 charming_jones[302917]:             "devices": [
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "/dev/loop3"
Dec 06 07:18:50 compute-0 charming_jones[302917]:             ],
Dec 06 07:18:50 compute-0 charming_jones[302917]:             "lv_name": "ceph_lv0",
Dec 06 07:18:50 compute-0 charming_jones[302917]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:18:50 compute-0 charming_jones[302917]:             "lv_size": "7511998464",
Dec 06 07:18:50 compute-0 charming_jones[302917]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:18:50 compute-0 charming_jones[302917]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:18:50 compute-0 charming_jones[302917]:             "name": "ceph_lv0",
Dec 06 07:18:50 compute-0 charming_jones[302917]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:18:50 compute-0 charming_jones[302917]:             "tags": {
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "ceph.cluster_name": "ceph",
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "ceph.crush_device_class": "",
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "ceph.encrypted": "0",
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "ceph.osd_id": "0",
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "ceph.type": "block",
Dec 06 07:18:50 compute-0 charming_jones[302917]:                 "ceph.vdo": "0"
Dec 06 07:18:50 compute-0 charming_jones[302917]:             },
Dec 06 07:18:50 compute-0 charming_jones[302917]:             "type": "block",
Dec 06 07:18:50 compute-0 charming_jones[302917]:             "vg_name": "ceph_vg0"
Dec 06 07:18:50 compute-0 charming_jones[302917]:         }
Dec 06 07:18:50 compute-0 charming_jones[302917]:     ]
Dec 06 07:18:50 compute-0 charming_jones[302917]: }
Dec 06 07:18:50 compute-0 systemd[1]: libpod-8d322f9f7d68bff9223f5c81a43bd3b9650b2b569e10407e8923787fb0148b83.scope: Deactivated successfully.
Dec 06 07:18:50 compute-0 podman[302901]: 2025-12-06 07:18:50.376333262 +0000 UTC m=+0.906315169 container died 8d322f9f7d68bff9223f5c81a43bd3b9650b2b569e10407e8923787fb0148b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jones, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 07:18:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f6443b43c5a081123b9a106157f5b9a52ece7f3b4703d9b5e8d9ddffedd8cd1-merged.mount: Deactivated successfully.
Dec 06 07:18:50 compute-0 podman[302901]: 2025-12-06 07:18:50.441748537 +0000 UTC m=+0.971730414 container remove 8d322f9f7d68bff9223f5c81a43bd3b9650b2b569e10407e8923787fb0148b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jones, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:18:50 compute-0 systemd[1]: libpod-conmon-8d322f9f7d68bff9223f5c81a43bd3b9650b2b569e10407e8923787fb0148b83.scope: Deactivated successfully.
Dec 06 07:18:50 compute-0 sudo[302759]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:50.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:50 compute-0 sudo[302940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:50 compute-0 sudo[302940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:50 compute-0 sudo[302940]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:50 compute-0 sudo[302965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:18:50 compute-0 sudo[302965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:50 compute-0 sudo[302965]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:50 compute-0 sudo[302990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:50 compute-0 sudo[302990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:50 compute-0 sudo[302990]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:50 compute-0 sudo[303015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:18:50 compute-0 sudo[303015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:50.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:51 compute-0 podman[303080]: 2025-12-06 07:18:51.079066915 +0000 UTC m=+0.044349617 container create 53cee063d81aeca20130d4a273806d05137adf220fe0349be4f35f81c6699f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_davinci, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 07:18:51 compute-0 systemd[1]: Started libpod-conmon-53cee063d81aeca20130d4a273806d05137adf220fe0349be4f35f81c6699f27.scope.
Dec 06 07:18:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:18:51 compute-0 podman[303080]: 2025-12-06 07:18:51.151820743 +0000 UTC m=+0.117103445 container init 53cee063d81aeca20130d4a273806d05137adf220fe0349be4f35f81c6699f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_davinci, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:18:51 compute-0 podman[303080]: 2025-12-06 07:18:51.057227177 +0000 UTC m=+0.022509919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:18:51 compute-0 podman[303080]: 2025-12-06 07:18:51.157378318 +0000 UTC m=+0.122661020 container start 53cee063d81aeca20130d4a273806d05137adf220fe0349be4f35f81c6699f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_davinci, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:18:51 compute-0 amazing_davinci[303097]: 167 167
Dec 06 07:18:51 compute-0 podman[303080]: 2025-12-06 07:18:51.161840223 +0000 UTC m=+0.127122935 container attach 53cee063d81aeca20130d4a273806d05137adf220fe0349be4f35f81c6699f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_davinci, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 07:18:51 compute-0 systemd[1]: libpod-53cee063d81aeca20130d4a273806d05137adf220fe0349be4f35f81c6699f27.scope: Deactivated successfully.
Dec 06 07:18:51 compute-0 conmon[303097]: conmon 53cee063d81aeca20130 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53cee063d81aeca20130d4a273806d05137adf220fe0349be4f35f81c6699f27.scope/container/memory.events
Dec 06 07:18:51 compute-0 podman[303080]: 2025-12-06 07:18:51.164449536 +0000 UTC m=+0.129732238 container died 53cee063d81aeca20130d4a273806d05137adf220fe0349be4f35f81c6699f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_davinci, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:18:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2c4e783666305ec2d43b770246a4a4b9d2ef9a4d1c468b8598ebfa964496587-merged.mount: Deactivated successfully.
Dec 06 07:18:51 compute-0 podman[303080]: 2025-12-06 07:18:51.205717067 +0000 UTC m=+0.170999769 container remove 53cee063d81aeca20130d4a273806d05137adf220fe0349be4f35f81c6699f27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 07:18:51 compute-0 systemd[1]: libpod-conmon-53cee063d81aeca20130d4a273806d05137adf220fe0349be4f35f81c6699f27.scope: Deactivated successfully.
Dec 06 07:18:51 compute-0 podman[303121]: 2025-12-06 07:18:51.383621326 +0000 UTC m=+0.048207035 container create 6a332b18efc4462c1370df3d950e78a548ccf6eae280faea125f98226f81e6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sanderson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:18:51 compute-0 systemd[1]: Started libpod-conmon-6a332b18efc4462c1370df3d950e78a548ccf6eae280faea125f98226f81e6c2.scope.
Dec 06 07:18:51 compute-0 podman[303121]: 2025-12-06 07:18:51.359147233 +0000 UTC m=+0.023732972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:18:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1557711ee44cbda66e09fe84acfaf74c2280ccccb4e1569405919fc070555530/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1557711ee44cbda66e09fe84acfaf74c2280ccccb4e1569405919fc070555530/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1557711ee44cbda66e09fe84acfaf74c2280ccccb4e1569405919fc070555530/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1557711ee44cbda66e09fe84acfaf74c2280ccccb4e1569405919fc070555530/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:18:51 compute-0 podman[303121]: 2025-12-06 07:18:51.482333248 +0000 UTC m=+0.146918977 container init 6a332b18efc4462c1370df3d950e78a548ccf6eae280faea125f98226f81e6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:18:51 compute-0 podman[303121]: 2025-12-06 07:18:51.488941873 +0000 UTC m=+0.153527592 container start 6a332b18efc4462c1370df3d950e78a548ccf6eae280faea125f98226f81e6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 07:18:51 compute-0 podman[303121]: 2025-12-06 07:18:51.491827693 +0000 UTC m=+0.156413412 container attach 6a332b18efc4462c1370df3d950e78a548ccf6eae280faea125f98226f81e6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sanderson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:18:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 3.5 MiB/s wr, 90 op/s
Dec 06 07:18:51 compute-0 ceph-mon[74339]: pgmap v1826: 305 pgs: 305 active+clean; 88 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Dec 06 07:18:52 compute-0 priceless_sanderson[303138]: {
Dec 06 07:18:52 compute-0 priceless_sanderson[303138]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:18:52 compute-0 priceless_sanderson[303138]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:18:52 compute-0 priceless_sanderson[303138]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:18:52 compute-0 priceless_sanderson[303138]:         "osd_id": 0,
Dec 06 07:18:52 compute-0 priceless_sanderson[303138]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:18:52 compute-0 priceless_sanderson[303138]:         "type": "bluestore"
Dec 06 07:18:52 compute-0 priceless_sanderson[303138]:     }
Dec 06 07:18:52 compute-0 priceless_sanderson[303138]: }
Dec 06 07:18:52 compute-0 systemd[1]: libpod-6a332b18efc4462c1370df3d950e78a548ccf6eae280faea125f98226f81e6c2.scope: Deactivated successfully.
Dec 06 07:18:52 compute-0 podman[303121]: 2025-12-06 07:18:52.32958695 +0000 UTC m=+0.994172669 container died 6a332b18efc4462c1370df3d950e78a548ccf6eae280faea125f98226f81e6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sanderson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:18:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1557711ee44cbda66e09fe84acfaf74c2280ccccb4e1569405919fc070555530-merged.mount: Deactivated successfully.
Dec 06 07:18:52 compute-0 podman[303121]: 2025-12-06 07:18:52.386575179 +0000 UTC m=+1.051160908 container remove 6a332b18efc4462c1370df3d950e78a548ccf6eae280faea125f98226f81e6c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_sanderson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:18:52 compute-0 systemd[1]: libpod-conmon-6a332b18efc4462c1370df3d950e78a548ccf6eae280faea125f98226f81e6c2.scope: Deactivated successfully.
Dec 06 07:18:52 compute-0 sudo[303015]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:18:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:52.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:18:52 compute-0 nova_compute[251992]: 2025-12-06 07:18:52.686 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:52.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:52 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0070f097-f0d9-41b9-8711-969ec92ff979 does not exist
Dec 06 07:18:52 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c3c5e023-b9a6-4268-a4f4-d041b2fc1b0d does not exist
Dec 06 07:18:52 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 255ab3c4-9bfd-4550-a3ea-1750d9f6d0a6 does not exist
Dec 06 07:18:52 compute-0 sudo[303173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:18:52 compute-0 sudo[303173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:52 compute-0 sudo[303173]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:52 compute-0 sudo[303198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:18:52 compute-0 sudo[303198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:18:52 compute-0 sudo[303198]: pam_unix(sudo:session): session closed for user root
Dec 06 07:18:53 compute-0 nova_compute[251992]: 2025-12-06 07:18:53.539 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:53 compute-0 nova_compute[251992]: 2025-12-06 07:18:53.747 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.288 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "288aae5a-11e0-4906-903d-acea3cebcf63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.289 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.334 251996 DEBUG nova.compute.manager [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.468 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.470 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.481 251996 DEBUG nova.virt.hardware [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.482 251996 INFO nova.compute.claims [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:18:54 compute-0 ceph-mon[74339]: pgmap v1827: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 3.5 MiB/s wr, 90 op/s
Dec 06 07:18:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:18:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:18:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:54.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.733 251996 DEBUG nova.scheduler.client.report [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:18:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:54.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.868 251996 DEBUG nova.scheduler.client.report [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.869 251996 DEBUG nova.compute.provider_tree [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.883 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.891 251996 DEBUG nova.scheduler.client.report [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.929 251996 DEBUG nova.scheduler.client.report [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:18:54 compute-0 nova_compute[251992]: 2025-12-06 07:18:54.970 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:55 compute-0 nova_compute[251992]: 2025-12-06 07:18:55.164 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:18:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674897031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:55 compute-0 nova_compute[251992]: 2025-12-06 07:18:55.465 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:55 compute-0 nova_compute[251992]: 2025-12-06 07:18:55.474 251996 DEBUG nova.compute.provider_tree [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:18:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Dec 06 07:18:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1106685511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:56 compute-0 ceph-mon[74339]: pgmap v1828: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Dec 06 07:18:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1674897031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:56.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.631 251996 DEBUG nova.scheduler.client.report [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.681 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.683 251996 DEBUG nova.compute.manager [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.694 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.694 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.695 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.695 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.696 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.767 251996 DEBUG nova.compute.manager [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.768 251996 DEBUG nova.network.neutron [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:18:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:56.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.826 251996 INFO nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.860 251996 DEBUG nova.compute.manager [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.972 251996 DEBUG nova.compute.manager [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.974 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:18:56 compute-0 nova_compute[251992]: 2025-12-06 07:18:56.974 251996 INFO nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Creating image(s)
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.013 251996 DEBUG nova.storage.rbd_utils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 288aae5a-11e0-4906-903d-acea3cebcf63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.049 251996 DEBUG nova.storage.rbd_utils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 288aae5a-11e0-4906-903d-acea3cebcf63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.078 251996 DEBUG nova.storage.rbd_utils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 288aae5a-11e0-4906-903d-acea3cebcf63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.083 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.141 251996 DEBUG nova.policy [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '06f5b46553b24b39a1493d96ec4e503e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35df5125c2cf4d29a6b975951af14910', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:18:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:18:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4153241235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.165 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.166 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.167 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.167 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.193 251996 DEBUG nova.storage.rbd_utils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 288aae5a-11e0-4906-903d-acea3cebcf63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.196 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 288aae5a-11e0-4906-903d-acea3cebcf63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.219 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.376 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.377 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4496MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.378 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.378 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.494 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 288aae5a-11e0-4906-903d-acea3cebcf63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.494 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.494 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:18:57 compute-0 nova_compute[251992]: 2025-12-06 07:18:57.556 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:18:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:18:57 compute-0 ceph-mon[74339]: pgmap v1829: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Dec 06 07:18:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Dec 06 07:18:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:18:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3302555264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:58 compute-0 nova_compute[251992]: 2025-12-06 07:18:58.000 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:58 compute-0 nova_compute[251992]: 2025-12-06 07:18:58.005 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:18:58 compute-0 nova_compute[251992]: 2025-12-06 07:18:58.025 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:18:58 compute-0 nova_compute[251992]: 2025-12-06 07:18:58.063 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:18:58 compute-0 nova_compute[251992]: 2025-12-06 07:18:58.064 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:18:58.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:58 compute-0 nova_compute[251992]: 2025-12-06 07:18:58.542 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:58 compute-0 nova_compute[251992]: 2025-12-06 07:18:58.749 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:18:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:18:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:18:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:18:58.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:18:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4153241235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/740716915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/658088653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:18:58 compute-0 ceph-mon[74339]: pgmap v1830: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Dec 06 07:18:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3302555264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:18:58 compute-0 nova_compute[251992]: 2025-12-06 07:18:58.947 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 288aae5a-11e0-4906-903d-acea3cebcf63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:18:59 compute-0 nova_compute[251992]: 2025-12-06 07:18:59.033 251996 DEBUG nova.network.neutron [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Successfully created port: c51cc596-c273-4444-b624-c7f87bb78323 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:18:59 compute-0 nova_compute[251992]: 2025-12-06 07:18:59.041 251996 DEBUG nova.storage.rbd_utils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] resizing rbd image 288aae5a-11e0-4906-903d-acea3cebcf63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:18:59 compute-0 nova_compute[251992]: 2025-12-06 07:18:59.078 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:59 compute-0 nova_compute[251992]: 2025-12-06 07:18:59.101 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:59 compute-0 nova_compute[251992]: 2025-12-06 07:18:59.101 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:59 compute-0 nova_compute[251992]: 2025-12-06 07:18:59.150 251996 DEBUG nova.objects.instance [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'migration_context' on Instance uuid 288aae5a-11e0-4906-903d-acea3cebcf63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:18:59 compute-0 nova_compute[251992]: 2025-12-06 07:18:59.169 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:18:59 compute-0 nova_compute[251992]: 2025-12-06 07:18:59.169 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Ensure instance console log exists: /var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:18:59 compute-0 nova_compute[251992]: 2025-12-06 07:18:59.170 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:18:59 compute-0 nova_compute[251992]: 2025-12-06 07:18:59.170 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:18:59 compute-0 nova_compute[251992]: 2025-12-06 07:18:59.171 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:18:59 compute-0 nova_compute[251992]: 2025-12-06 07:18:59.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:18:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:19:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2324004116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2158792105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2096741357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:00.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:00 compute-0 nova_compute[251992]: 2025-12-06 07:19:00.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:00 compute-0 nova_compute[251992]: 2025-12-06 07:19:00.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:00.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:01 compute-0 nova_compute[251992]: 2025-12-06 07:19:01.251 251996 DEBUG nova.network.neutron [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Successfully updated port: c51cc596-c273-4444-b624-c7f87bb78323 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:19:01 compute-0 nova_compute[251992]: 2025-12-06 07:19:01.272 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:01 compute-0 nova_compute[251992]: 2025-12-06 07:19:01.272 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:01 compute-0 nova_compute[251992]: 2025-12-06 07:19:01.273 251996 DEBUG nova.network.neutron [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:19:01 compute-0 nova_compute[251992]: 2025-12-06 07:19:01.408 251996 DEBUG nova.compute.manager [req-6ba6b70d-7372-4624-a1a7-a01c35331035 req-202cf31a-2464-49d2-aa97-0c6cbadadf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-changed-c51cc596-c273-4444-b624-c7f87bb78323 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:01 compute-0 nova_compute[251992]: 2025-12-06 07:19:01.409 251996 DEBUG nova.compute.manager [req-6ba6b70d-7372-4624-a1a7-a01c35331035 req-202cf31a-2464-49d2-aa97-0c6cbadadf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Refreshing instance network info cache due to event network-changed-c51cc596-c273-4444-b624-c7f87bb78323. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:19:01 compute-0 nova_compute[251992]: 2025-12-06 07:19:01.409 251996 DEBUG oslo_concurrency.lockutils [req-6ba6b70d-7372-4624-a1a7-a01c35331035 req-202cf31a-2464-49d2-aa97-0c6cbadadf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:01 compute-0 ceph-mon[74339]: pgmap v1831: 305 pgs: 305 active+clean; 134 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:19:01 compute-0 nova_compute[251992]: 2025-12-06 07:19:01.691 251996 DEBUG nova.network.neutron [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:19:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Dec 06 07:19:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:02.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:02.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.106 251996 DEBUG nova.network.neutron [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updating instance_info_cache with network_info: [{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.141 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.142 251996 DEBUG nova.compute.manager [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Instance network_info: |[{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.142 251996 DEBUG oslo_concurrency.lockutils [req-6ba6b70d-7372-4624-a1a7-a01c35331035 req-202cf31a-2464-49d2-aa97-0c6cbadadf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.143 251996 DEBUG nova.network.neutron [req-6ba6b70d-7372-4624-a1a7-a01c35331035 req-202cf31a-2464-49d2-aa97-0c6cbadadf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Refreshing network info cache for port c51cc596-c273-4444-b624-c7f87bb78323 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.145 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Start _get_guest_xml network_info=[{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.150 251996 WARNING nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.164 251996 DEBUG nova.virt.libvirt.host [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.164 251996 DEBUG nova.virt.libvirt.host [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.170 251996 DEBUG nova.virt.libvirt.host [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.170 251996 DEBUG nova.virt.libvirt.host [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.172 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.172 251996 DEBUG nova.virt.hardware [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.172 251996 DEBUG nova.virt.hardware [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.173 251996 DEBUG nova.virt.hardware [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.173 251996 DEBUG nova.virt.hardware [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.173 251996 DEBUG nova.virt.hardware [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.173 251996 DEBUG nova.virt.hardware [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.174 251996 DEBUG nova.virt.hardware [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.174 251996 DEBUG nova.virt.hardware [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.174 251996 DEBUG nova.virt.hardware [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.174 251996 DEBUG nova.virt.hardware [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.175 251996 DEBUG nova.virt.hardware [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.178 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:03 compute-0 sudo[303482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:03 compute-0 sudo[303482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:03 compute-0 sudo[303482]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:03 compute-0 sudo[303507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:03 compute-0 sudo[303507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:03 compute-0 sudo[303507]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.544 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:03 compute-0 ceph-mon[74339]: pgmap v1832: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Dec 06 07:19:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2383397060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:19:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596240762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.618 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.650 251996 DEBUG nova.storage.rbd_utils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 288aae5a-11e0-4906-903d-acea3cebcf63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.656 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:03 compute-0 nova_compute[251992]: 2025-12-06 07:19:03.750 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 07:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:03.826 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:03.827 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:03.827 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:19:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69132736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.133 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.135 251996 DEBUG nova.virt.libvirt.vif [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-319358649',display_name='tempest-tempest.common.compute-instance-319358649',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-319358649',id=79,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g9xh7f4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:18:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=288aae5a-11e0-4906-903d-acea3cebcf63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.135 251996 DEBUG nova.network.os_vif_util [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.136 251996 DEBUG nova.network.os_vif_util [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:a9:46,bridge_name='br-int',has_traffic_filtering=True,id=c51cc596-c273-4444-b624-c7f87bb78323,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc51cc596-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.137 251996 DEBUG nova.objects.instance [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'pci_devices' on Instance uuid 288aae5a-11e0-4906-903d-acea3cebcf63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.154 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:19:04 compute-0 nova_compute[251992]:   <uuid>288aae5a-11e0-4906-903d-acea3cebcf63</uuid>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   <name>instance-0000004f</name>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <nova:name>tempest-tempest.common.compute-instance-319358649</nova:name>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:19:03</nova:creationTime>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <nova:port uuid="c51cc596-c273-4444-b624-c7f87bb78323">
Dec 06 07:19:04 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <system>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <entry name="serial">288aae5a-11e0-4906-903d-acea3cebcf63</entry>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <entry name="uuid">288aae5a-11e0-4906-903d-acea3cebcf63</entry>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     </system>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   <os>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   </os>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   <features>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   </features>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/288aae5a-11e0-4906-903d-acea3cebcf63_disk">
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       </source>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/288aae5a-11e0-4906-903d-acea3cebcf63_disk.config">
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       </source>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:19:04 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:a3:a9:46"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <target dev="tapc51cc596-c2"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63/console.log" append="off"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <video>
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     </video>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:19:04 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:19:04 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:19:04 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:19:04 compute-0 nova_compute[251992]: </domain>
Dec 06 07:19:04 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.154 251996 DEBUG nova.compute.manager [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Preparing to wait for external event network-vif-plugged-c51cc596-c273-4444-b624-c7f87bb78323 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.154 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.155 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.155 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.155 251996 DEBUG nova.virt.libvirt.vif [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-319358649',display_name='tempest-tempest.common.compute-instance-319358649',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-319358649',id=79,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g9xh7f4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:18:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=288aae5a-11e0-4906-903d-acea3cebcf63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.156 251996 DEBUG nova.network.os_vif_util [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.156 251996 DEBUG nova.network.os_vif_util [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:a9:46,bridge_name='br-int',has_traffic_filtering=True,id=c51cc596-c273-4444-b624-c7f87bb78323,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc51cc596-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.157 251996 DEBUG os_vif [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:a9:46,bridge_name='br-int',has_traffic_filtering=True,id=c51cc596-c273-4444-b624-c7f87bb78323,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc51cc596-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.157 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.158 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.158 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.162 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.163 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc51cc596-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.163 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc51cc596-c2, col_values=(('external_ids', {'iface-id': 'c51cc596-c273-4444-b624-c7f87bb78323', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:a9:46', 'vm-uuid': '288aae5a-11e0-4906-903d-acea3cebcf63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.165 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:04 compute-0 NetworkManager[48965]: <info>  [1765005544.1663] manager: (tapc51cc596-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.167 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.171 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.173 251996 INFO os_vif [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:a9:46,bridge_name='br-int',has_traffic_filtering=True,id=c51cc596-c273-4444-b624-c7f87bb78323,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc51cc596-c2')
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.438 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.439 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.439 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:a3:a9:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.440 251996 INFO nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Using config drive
Dec 06 07:19:04 compute-0 nova_compute[251992]: 2025-12-06 07:19:04.474 251996 DEBUG nova.storage.rbd_utils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 288aae5a-11e0-4906-903d-acea3cebcf63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:19:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:04.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:19:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:04.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:05 compute-0 nova_compute[251992]: 2025-12-06 07:19:05.206 251996 INFO nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Creating config drive at /var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63/disk.config
Dec 06 07:19:05 compute-0 nova_compute[251992]: 2025-12-06 07:19:05.212 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1jiblfdf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:05 compute-0 nova_compute[251992]: 2025-12-06 07:19:05.341 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1jiblfdf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:05 compute-0 nova_compute[251992]: 2025-12-06 07:19:05.373 251996 DEBUG nova.storage.rbd_utils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] rbd image 288aae5a-11e0-4906-903d-acea3cebcf63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:05 compute-0 nova_compute[251992]: 2025-12-06 07:19:05.376 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63/disk.config 288aae5a-11e0-4906-903d-acea3cebcf63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/134940863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2596240762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/69132736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:05 compute-0 nova_compute[251992]: 2025-12-06 07:19:05.753 251996 DEBUG oslo_concurrency.processutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63/disk.config 288aae5a-11e0-4906-903d-acea3cebcf63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.377s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:05 compute-0 nova_compute[251992]: 2025-12-06 07:19:05.754 251996 INFO nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Deleting local config drive /var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63/disk.config because it was imported into RBD.
Dec 06 07:19:05 compute-0 NetworkManager[48965]: <info>  [1765005545.8149] manager: (tapc51cc596-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Dec 06 07:19:05 compute-0 kernel: tapc51cc596-c2: entered promiscuous mode
Dec 06 07:19:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 875 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 06 07:19:05 compute-0 ovn_controller[147168]: 2025-12-06T07:19:05Z|00247|binding|INFO|Claiming lport c51cc596-c273-4444-b624-c7f87bb78323 for this chassis.
Dec 06 07:19:05 compute-0 ovn_controller[147168]: 2025-12-06T07:19:05Z|00248|binding|INFO|c51cc596-c273-4444-b624-c7f87bb78323: Claiming fa:16:3e:a3:a9:46 10.100.0.4
Dec 06 07:19:05 compute-0 nova_compute[251992]: 2025-12-06 07:19:05.818 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.831 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:a9:46 10.100.0.4'], port_security=['fa:16:3e:a3:a9:46 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '288aae5a-11e0-4906-903d-acea3cebcf63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6207e763-a213-4f4e-8aa9-04781b6722bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=c51cc596-c273-4444-b624-c7f87bb78323) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.832 158118 INFO neutron.agent.ovn.metadata.agent [-] Port c51cc596-c273-4444-b624-c7f87bb78323 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 bound to our chassis
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.833 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.845 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8d50d6d4-ec89-487a-9ed2-1ac90301948b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.845 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap61a21643-71 in ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.848 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap61a21643-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.848 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f495dd89-2023-4c33-a262-24e0c8b45140]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.849 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1da58bf9-ab86-4eba-9512-3faccec82e4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:05 compute-0 systemd-machined[212986]: New machine qemu-35-instance-0000004f.
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.860 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[58845e82-65ce-40cb-974e-f874c266162b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.875 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[938c7c18-7439-4e92-8777-1574e5f6c66e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:05 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000004f.
Dec 06 07:19:05 compute-0 nova_compute[251992]: 2025-12-06 07:19:05.888 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:05 compute-0 ovn_controller[147168]: 2025-12-06T07:19:05Z|00249|binding|INFO|Setting lport c51cc596-c273-4444-b624-c7f87bb78323 ovn-installed in OVS
Dec 06 07:19:05 compute-0 ovn_controller[147168]: 2025-12-06T07:19:05Z|00250|binding|INFO|Setting lport c51cc596-c273-4444-b624-c7f87bb78323 up in Southbound
Dec 06 07:19:05 compute-0 nova_compute[251992]: 2025-12-06 07:19:05.896 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:05 compute-0 systemd-udevd[303653]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.911 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1e437a76-83c5-45bd-a2ac-4fc26bf12579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:05 compute-0 NetworkManager[48965]: <info>  [1765005545.9163] device (tapc51cc596-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:19:05 compute-0 NetworkManager[48965]: <info>  [1765005545.9170] device (tapc51cc596-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:19:05 compute-0 NetworkManager[48965]: <info>  [1765005545.9195] manager: (tap61a21643-70): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Dec 06 07:19:05 compute-0 systemd-udevd[303656]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.918 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd1bfc8-9537-45b4-938c-20f86758fba8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.952 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[760a7de1-d4a1-4aba-b9a4-361ac6e13e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.955 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4acb1e43-3f40-4f4b-9ac1-9fa5f7d34e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:05 compute-0 NetworkManager[48965]: <info>  [1765005545.9773] device (tap61a21643-70): carrier: link connected
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.983 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[95a367c6-4fb0-4aa7-8a93-b77a7dcd2403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:05.998 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac0db68-301e-416a-9fa8-a3762c932bb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581856, 'reachable_time': 29313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303681, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:06.013 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd32f84-dddc-4b7c-81f3-4698b04d2687]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:67b1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581856, 'tstamp': 581856}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303682, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:06.027 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[16d76c12-f1da-4071-ad72-ca048bc126ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581856, 'reachable_time': 29313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303683, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:06.054 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[89933af3-92a2-46a5-8b82-0e78db6ff304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:06.104 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e6578544-7322-447a-ac19-ab5dfe9c97da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:06.105 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:06.105 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:06.105 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.107 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:06 compute-0 NetworkManager[48965]: <info>  [1765005546.1079] manager: (tap61a21643-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Dec 06 07:19:06 compute-0 kernel: tap61a21643-70: entered promiscuous mode
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.110 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:06.111 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.112 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:06 compute-0 ovn_controller[147168]: 2025-12-06T07:19:06Z|00251|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.126 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:06.127 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:06.128 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d362d5fe-c31e-47f5-8724-4e221eb72fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:06.129 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/61a21643-77ba-4a09-8184-10dc4bd52b26.pid.haproxy
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:19:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:06.129 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'env', 'PROCESS_TAG=haproxy-61a21643-77ba-4a09-8184-10dc4bd52b26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/61a21643-77ba-4a09-8184-10dc4bd52b26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.382 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005546.3822484, 288aae5a-11e0-4906-903d-acea3cebcf63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.383 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] VM Started (Lifecycle Event)
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.415 251996 DEBUG nova.compute.manager [req-b9ceae9a-9022-4845-98e3-27f66bba7695 req-dee449c1-d2ee-4a17-a45e-683b43a1b446 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-vif-plugged-c51cc596-c273-4444-b624-c7f87bb78323 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.415 251996 DEBUG oslo_concurrency.lockutils [req-b9ceae9a-9022-4845-98e3-27f66bba7695 req-dee449c1-d2ee-4a17-a45e-683b43a1b446 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.415 251996 DEBUG oslo_concurrency.lockutils [req-b9ceae9a-9022-4845-98e3-27f66bba7695 req-dee449c1-d2ee-4a17-a45e-683b43a1b446 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.415 251996 DEBUG oslo_concurrency.lockutils [req-b9ceae9a-9022-4845-98e3-27f66bba7695 req-dee449c1-d2ee-4a17-a45e-683b43a1b446 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.416 251996 DEBUG nova.compute.manager [req-b9ceae9a-9022-4845-98e3-27f66bba7695 req-dee449c1-d2ee-4a17-a45e-683b43a1b446 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Processing event network-vif-plugged-c51cc596-c273-4444-b624-c7f87bb78323 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.416 251996 DEBUG nova.compute.manager [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.421 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.422 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.426 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.428 251996 INFO nova.virt.libvirt.driver [-] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Instance spawned successfully.
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.429 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.458 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.458 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005546.3825076, 288aae5a-11e0-4906-903d-acea3cebcf63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.459 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] VM Paused (Lifecycle Event)
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.463 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.463 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.463 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.464 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.464 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.465 251996 DEBUG nova.virt.libvirt.driver [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.503 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.507 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005546.4196596, 288aae5a-11e0-4906-903d-acea3cebcf63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.507 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] VM Resumed (Lifecycle Event)
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.519 251996 DEBUG nova.network.neutron [req-6ba6b70d-7372-4624-a1a7-a01c35331035 req-202cf31a-2464-49d2-aa97-0c6cbadadf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updated VIF entry in instance network info cache for port c51cc596-c273-4444-b624-c7f87bb78323. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.520 251996 DEBUG nova.network.neutron [req-6ba6b70d-7372-4624-a1a7-a01c35331035 req-202cf31a-2464-49d2-aa97-0c6cbadadf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updating instance_info_cache with network_info: [{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.529 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.532 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.534 251996 DEBUG oslo_concurrency.lockutils [req-6ba6b70d-7372-4624-a1a7-a01c35331035 req-202cf31a-2464-49d2-aa97-0c6cbadadf42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:06.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.555 251996 INFO nova.compute.manager [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Took 9.58 seconds to spawn the instance on the hypervisor.
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.556 251996 DEBUG nova.compute.manager [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:06 compute-0 podman[303757]: 2025-12-06 07:19:06.465086753 +0000 UTC m=+0.023514747 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.566 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:19:06 compute-0 podman[303757]: 2025-12-06 07:19:06.594070379 +0000 UTC m=+0.152498353 container create ad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:19:06 compute-0 systemd[1]: Started libpod-conmon-ad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f.scope.
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.641 251996 INFO nova.compute.manager [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Took 12.25 seconds to build instance.
Dec 06 07:19:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73aa627205d3a2d6351aa7b1b4d9296dc2494416aac81a8600fec6457a44bf0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:19:06 compute-0 nova_compute[251992]: 2025-12-06 07:19:06.664 251996 DEBUG oslo_concurrency.lockutils [None req-7a8ca359-5b5e-48d7-bed9-b70eae1e291e 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:06 compute-0 podman[303757]: 2025-12-06 07:19:06.666710454 +0000 UTC m=+0.225138448 container init ad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:19:06 compute-0 podman[303757]: 2025-12-06 07:19:06.672764612 +0000 UTC m=+0.231192596 container start ad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:19:06 compute-0 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[303774]: [NOTICE]   (303778) : New worker (303780) forked
Dec 06 07:19:06 compute-0 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[303774]: [NOTICE]   (303778) : Loading success.
Dec 06 07:19:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:06.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:07 compute-0 ceph-mon[74339]: pgmap v1833: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 07:19:07 compute-0 nova_compute[251992]: 2025-12-06 07:19:07.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:07 compute-0 nova_compute[251992]: 2025-12-06 07:19:07.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:19:07 compute-0 nova_compute[251992]: 2025-12-06 07:19:07.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:19:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 180 op/s
Dec 06 07:19:08 compute-0 nova_compute[251992]: 2025-12-06 07:19:08.351 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:08 compute-0 nova_compute[251992]: 2025-12-06 07:19:08.352 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:08 compute-0 nova_compute[251992]: 2025-12-06 07:19:08.352 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:19:08 compute-0 nova_compute[251992]: 2025-12-06 07:19:08.352 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 288aae5a-11e0-4906-903d-acea3cebcf63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:08 compute-0 ceph-mon[74339]: pgmap v1834: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 875 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 06 07:19:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:08.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:08 compute-0 nova_compute[251992]: 2025-12-06 07:19:08.632 251996 DEBUG nova.compute.manager [req-57abfb19-64bf-46ff-a2ab-30da9feaccc3 req-710beb21-16fe-4c00-8d0b-89de6213bcc5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-vif-plugged-c51cc596-c273-4444-b624-c7f87bb78323 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:08 compute-0 nova_compute[251992]: 2025-12-06 07:19:08.632 251996 DEBUG oslo_concurrency.lockutils [req-57abfb19-64bf-46ff-a2ab-30da9feaccc3 req-710beb21-16fe-4c00-8d0b-89de6213bcc5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:08 compute-0 nova_compute[251992]: 2025-12-06 07:19:08.632 251996 DEBUG oslo_concurrency.lockutils [req-57abfb19-64bf-46ff-a2ab-30da9feaccc3 req-710beb21-16fe-4c00-8d0b-89de6213bcc5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:08 compute-0 nova_compute[251992]: 2025-12-06 07:19:08.633 251996 DEBUG oslo_concurrency.lockutils [req-57abfb19-64bf-46ff-a2ab-30da9feaccc3 req-710beb21-16fe-4c00-8d0b-89de6213bcc5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:08 compute-0 nova_compute[251992]: 2025-12-06 07:19:08.633 251996 DEBUG nova.compute.manager [req-57abfb19-64bf-46ff-a2ab-30da9feaccc3 req-710beb21-16fe-4c00-8d0b-89de6213bcc5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] No waiting events found dispatching network-vif-plugged-c51cc596-c273-4444-b624-c7f87bb78323 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:08 compute-0 nova_compute[251992]: 2025-12-06 07:19:08.633 251996 WARNING nova.compute.manager [req-57abfb19-64bf-46ff-a2ab-30da9feaccc3 req-710beb21-16fe-4c00-8d0b-89de6213bcc5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received unexpected event network-vif-plugged-c51cc596-c273-4444-b624-c7f87bb78323 for instance with vm_state active and task_state None.
Dec 06 07:19:08 compute-0 nova_compute[251992]: 2025-12-06 07:19:08.752 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:08.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:09 compute-0 nova_compute[251992]: 2025-12-06 07:19:09.166 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 180 op/s
Dec 06 07:19:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:19:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:10.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:19:10 compute-0 ceph-mon[74339]: pgmap v1835: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 180 op/s
Dec 06 07:19:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2115880549' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:19:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2115880549' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:19:10 compute-0 nova_compute[251992]: 2025-12-06 07:19:10.619 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updating instance_info_cache with network_info: [{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:10 compute-0 nova_compute[251992]: 2025-12-06 07:19:10.652 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:10 compute-0 nova_compute[251992]: 2025-12-06 07:19:10.652 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:19:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:10.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:10 compute-0 nova_compute[251992]: 2025-12-06 07:19:10.846 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:10 compute-0 NetworkManager[48965]: <info>  [1765005550.8468] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Dec 06 07:19:10 compute-0 NetworkManager[48965]: <info>  [1765005550.8480] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Dec 06 07:19:10 compute-0 nova_compute[251992]: 2025-12-06 07:19:10.981 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:10 compute-0 ovn_controller[147168]: 2025-12-06T07:19:10Z|00252|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec 06 07:19:10 compute-0 nova_compute[251992]: 2025-12-06 07:19:10.999 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:11 compute-0 nova_compute[251992]: 2025-12-06 07:19:11.280 251996 DEBUG nova.compute.manager [req-aa142a00-68ac-40e6-ad73-00ba1e4823b6 req-9b250be6-9e1f-49f9-9c35-63d2adcca260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-changed-c51cc596-c273-4444-b624-c7f87bb78323 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:11 compute-0 nova_compute[251992]: 2025-12-06 07:19:11.281 251996 DEBUG nova.compute.manager [req-aa142a00-68ac-40e6-ad73-00ba1e4823b6 req-9b250be6-9e1f-49f9-9c35-63d2adcca260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Refreshing instance network info cache due to event network-changed-c51cc596-c273-4444-b624-c7f87bb78323. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:19:11 compute-0 nova_compute[251992]: 2025-12-06 07:19:11.281 251996 DEBUG oslo_concurrency.lockutils [req-aa142a00-68ac-40e6-ad73-00ba1e4823b6 req-9b250be6-9e1f-49f9-9c35-63d2adcca260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:11 compute-0 nova_compute[251992]: 2025-12-06 07:19:11.281 251996 DEBUG oslo_concurrency.lockutils [req-aa142a00-68ac-40e6-ad73-00ba1e4823b6 req-9b250be6-9e1f-49f9-9c35-63d2adcca260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:11 compute-0 nova_compute[251992]: 2025-12-06 07:19:11.282 251996 DEBUG nova.network.neutron [req-aa142a00-68ac-40e6-ad73-00ba1e4823b6 req-9b250be6-9e1f-49f9-9c35-63d2adcca260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Refreshing network info cache for port c51cc596-c273-4444-b624-c7f87bb78323 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:19:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 248 op/s
Dec 06 07:19:11 compute-0 ceph-mon[74339]: pgmap v1836: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 180 op/s
Dec 06 07:19:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:12.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:12.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:19:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:19:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:19:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:19:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:19:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:19:13 compute-0 podman[303793]: 2025-12-06 07:19:13.415945564 +0000 UTC m=+0.077122521 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 07:19:13 compute-0 nova_compute[251992]: 2025-12-06 07:19:13.755 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:13 compute-0 ceph-mon[74339]: pgmap v1837: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 248 op/s
Dec 06 07:19:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 15 KiB/s wr, 216 op/s
Dec 06 07:19:14 compute-0 nova_compute[251992]: 2025-12-06 07:19:14.169 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:19:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:14.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:19:14 compute-0 ceph-mon[74339]: pgmap v1838: 305 pgs: 305 active+clean; 181 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 15 KiB/s wr, 216 op/s
Dec 06 07:19:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:14.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:14 compute-0 nova_compute[251992]: 2025-12-06 07:19:14.859 251996 DEBUG nova.network.neutron [req-aa142a00-68ac-40e6-ad73-00ba1e4823b6 req-9b250be6-9e1f-49f9-9c35-63d2adcca260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updated VIF entry in instance network info cache for port c51cc596-c273-4444-b624-c7f87bb78323. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:19:14 compute-0 nova_compute[251992]: 2025-12-06 07:19:14.860 251996 DEBUG nova.network.neutron [req-aa142a00-68ac-40e6-ad73-00ba1e4823b6 req-9b250be6-9e1f-49f9-9c35-63d2adcca260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updating instance_info_cache with network_info: [{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:14 compute-0 nova_compute[251992]: 2025-12-06 07:19:14.896 251996 DEBUG oslo_concurrency.lockutils [req-aa142a00-68ac-40e6-ad73-00ba1e4823b6 req-9b250be6-9e1f-49f9-9c35-63d2adcca260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 182 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 333 KiB/s wr, 220 op/s
Dec 06 07:19:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:16.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:16.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:17 compute-0 nova_compute[251992]: 2025-12-06 07:19:17.033 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:17 compute-0 ceph-mon[74339]: pgmap v1839: 305 pgs: 305 active+clean; 182 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 333 KiB/s wr, 220 op/s
Dec 06 07:19:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3659152555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 201 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Dec 06 07:19:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:19:18
Dec 06 07:19:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:19:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:19:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'images', '.mgr', '.rgw.root', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups']
Dec 06 07:19:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:19:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:19:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:18.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:19:18 compute-0 nova_compute[251992]: 2025-12-06 07:19:18.757 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:18.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:19 compute-0 nova_compute[251992]: 2025-12-06 07:19:19.172 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:19 compute-0 ceph-mon[74339]: pgmap v1840: 305 pgs: 305 active+clean; 201 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Dec 06 07:19:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 201 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Dec 06 07:19:20 compute-0 podman[303825]: 2025-12-06 07:19:20.399970072 +0000 UTC m=+0.050823198 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:19:20 compute-0 podman[303824]: 2025-12-06 07:19:20.406657388 +0000 UTC m=+0.058632156 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:19:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:20 compute-0 ovn_controller[147168]: 2025-12-06T07:19:20Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a3:a9:46 10.100.0.4
Dec 06 07:19:20 compute-0 ovn_controller[147168]: 2025-12-06T07:19:20Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:a9:46 10.100.0.4
Dec 06 07:19:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:20.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:21 compute-0 nova_compute[251992]: 2025-12-06 07:19:21.082 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 314 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 8.1 MiB/s wr, 274 op/s
Dec 06 07:19:21 compute-0 ceph-mon[74339]: pgmap v1841: 305 pgs: 305 active+clean; 201 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Dec 06 07:19:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:19:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:22.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:19:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:22.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:23 compute-0 ceph-mon[74339]: pgmap v1842: 305 pgs: 305 active+clean; 314 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 8.1 MiB/s wr, 274 op/s
Dec 06 07:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:19:23 compute-0 sudo[303864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:23 compute-0 sudo[303864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:23 compute-0 sudo[303864]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:23 compute-0 sudo[303889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:23 compute-0 sudo[303889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:23 compute-0 sudo[303889]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:19:23 compute-0 nova_compute[251992]: 2025-12-06 07:19:23.758 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 314 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 1004 KiB/s rd, 8.1 MiB/s wr, 207 op/s
Dec 06 07:19:24 compute-0 nova_compute[251992]: 2025-12-06 07:19:24.174 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:19:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:19:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:24.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:25.079 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:19:25 compute-0 nova_compute[251992]: 2025-12-06 07:19:25.079 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:25.081 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:19:25 compute-0 nova_compute[251992]: 2025-12-06 07:19:25.210 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Acquiring lock "686daa50-6941-4d67-8ec0-3d6fc9187a48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:25 compute-0 nova_compute[251992]: 2025-12-06 07:19:25.210 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3193354649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3612372411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:25 compute-0 nova_compute[251992]: 2025-12-06 07:19:25.230 251996 DEBUG nova.compute.manager [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:19:25 compute-0 nova_compute[251992]: 2025-12-06 07:19:25.326 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:25 compute-0 nova_compute[251992]: 2025-12-06 07:19:25.327 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:25 compute-0 nova_compute[251992]: 2025-12-06 07:19:25.335 251996 DEBUG nova.virt.hardware [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:19:25 compute-0 nova_compute[251992]: 2025-12-06 07:19:25.336 251996 INFO nova.compute.claims [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:19:25 compute-0 nova_compute[251992]: 2025-12-06 07:19:25.502 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005301936476363298 of space, bias 1.0, pg target 1.5905809429089894 quantized to 32 (current 32)
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021561438558533406 of space, bias 1.0, pg target 0.6446870129001488 quantized to 32 (current 32)
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:19:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 322 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 8.1 MiB/s wr, 226 op/s
Dec 06 07:19:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:26.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:19:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783859983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.385 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.883s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.393 251996 DEBUG nova.compute.provider_tree [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.416 251996 DEBUG nova.scheduler.client.report [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.437 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.438 251996 DEBUG nova.compute.manager [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.503 251996 DEBUG nova.compute.manager [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.504 251996 DEBUG nova.network.neutron [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.545 251996 INFO nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.570 251996 DEBUG nova.compute.manager [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.668 251996 DEBUG nova.compute.manager [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.670 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.670 251996 INFO nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Creating image(s)
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.696 251996 DEBUG nova.storage.rbd_utils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] rbd image 686daa50-6941-4d67-8ec0-3d6fc9187a48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.724 251996 DEBUG nova.storage.rbd_utils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] rbd image 686daa50-6941-4d67-8ec0-3d6fc9187a48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.752 251996 DEBUG nova.storage.rbd_utils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] rbd image 686daa50-6941-4d67-8ec0-3d6fc9187a48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.756 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.792 251996 DEBUG nova.policy [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3af050664af642888620680c329441c5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '08ea3b6ed6c24b0f87e477586001ac99', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:19:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 326 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 7.9 MiB/s wr, 247 op/s
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.833 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.834 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.835 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.835 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.866 251996 DEBUG nova.storage.rbd_utils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] rbd image 686daa50-6941-4d67-8ec0-3d6fc9187a48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:27 compute-0 nova_compute[251992]: 2025-12-06 07:19:27.871 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 686daa50-6941-4d67-8ec0-3d6fc9187a48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:28 compute-0 ceph-mon[74339]: pgmap v1843: 305 pgs: 305 active+clean; 314 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 1004 KiB/s rd, 8.1 MiB/s wr, 207 op/s
Dec 06 07:19:28 compute-0 ceph-mon[74339]: pgmap v1844: 305 pgs: 305 active+clean; 322 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 8.1 MiB/s wr, 226 op/s
Dec 06 07:19:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1783859983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:28.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:28 compute-0 nova_compute[251992]: 2025-12-06 07:19:28.658 251996 DEBUG nova.network.neutron [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Successfully created port: 812ade77-4fa8-41e9-986b-5db4d1d03dd4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:19:28 compute-0 nova_compute[251992]: 2025-12-06 07:19:28.760 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:28.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:29.083 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:29 compute-0 nova_compute[251992]: 2025-12-06 07:19:29.176 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 326 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.1 MiB/s wr, 217 op/s
Dec 06 07:19:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:30.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:30 compute-0 nova_compute[251992]: 2025-12-06 07:19:30.661 251996 DEBUG nova.network.neutron [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Successfully updated port: 812ade77-4fa8-41e9-986b-5db4d1d03dd4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:19:30 compute-0 nova_compute[251992]: 2025-12-06 07:19:30.683 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Acquiring lock "refresh_cache-686daa50-6941-4d67-8ec0-3d6fc9187a48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:30 compute-0 nova_compute[251992]: 2025-12-06 07:19:30.683 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Acquired lock "refresh_cache-686daa50-6941-4d67-8ec0-3d6fc9187a48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:30 compute-0 nova_compute[251992]: 2025-12-06 07:19:30.683 251996 DEBUG nova.network.neutron [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:19:30 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec 06 07:19:30 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:30.705055) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:19:30 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec 06 07:19:30 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005570705148, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2174, "num_deletes": 252, "total_data_size": 3874035, "memory_usage": 3931776, "flush_reason": "Manual Compaction"}
Dec 06 07:19:30 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec 06 07:19:30 compute-0 ceph-mon[74339]: pgmap v1845: 305 pgs: 305 active+clean; 326 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 7.9 MiB/s wr, 247 op/s
Dec 06 07:19:30 compute-0 nova_compute[251992]: 2025-12-06 07:19:30.792 251996 DEBUG nova.compute.manager [req-9df47358-0b8c-4675-966e-c98b63eb5ca1 req-bc4bdf66-9af4-4906-8c93-1e1bcb4d1023 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Received event network-changed-812ade77-4fa8-41e9-986b-5db4d1d03dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:30 compute-0 nova_compute[251992]: 2025-12-06 07:19:30.793 251996 DEBUG nova.compute.manager [req-9df47358-0b8c-4675-966e-c98b63eb5ca1 req-bc4bdf66-9af4-4906-8c93-1e1bcb4d1023 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Refreshing instance network info cache due to event network-changed-812ade77-4fa8-41e9-986b-5db4d1d03dd4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:19:30 compute-0 nova_compute[251992]: 2025-12-06 07:19:30.793 251996 DEBUG oslo_concurrency.lockutils [req-9df47358-0b8c-4675-966e-c98b63eb5ca1 req-bc4bdf66-9af4-4906-8c93-1e1bcb4d1023 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-686daa50-6941-4d67-8ec0-3d6fc9187a48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:30.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:30 compute-0 nova_compute[251992]: 2025-12-06 07:19:30.936 251996 DEBUG nova.network.neutron [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005571123861, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3783982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35348, "largest_seqno": 37520, "table_properties": {"data_size": 3774226, "index_size": 6122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20835, "raw_average_key_size": 20, "raw_value_size": 3754504, "raw_average_value_size": 3721, "num_data_blocks": 266, "num_entries": 1009, "num_filter_entries": 1009, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005353, "oldest_key_time": 1765005353, "file_creation_time": 1765005570, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 418855 microseconds, and 10986 cpu microseconds.
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.123905) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3783982 bytes OK
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.123925) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.128507) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.128531) EVENT_LOG_v1 {"time_micros": 1765005571128525, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.128549) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3865141, prev total WAL file size 3865141, number of live WAL files 2.
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.129564) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3695KB)], [74(9364KB)]
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005571129682, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 13373508, "oldest_snapshot_seqno": -1}
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 7063 keys, 11345115 bytes, temperature: kUnknown
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005571257040, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11345115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11297023, "index_size": 29356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 181068, "raw_average_key_size": 25, "raw_value_size": 11169609, "raw_average_value_size": 1581, "num_data_blocks": 1169, "num_entries": 7063, "num_filter_entries": 7063, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765005571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:19:31 compute-0 nova_compute[251992]: 2025-12-06 07:19:31.272 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 686daa50-6941-4d67-8ec0-3d6fc9187a48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.257708) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11345115 bytes
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.282480) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.7 rd, 88.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.1 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 7586, records dropped: 523 output_compression: NoCompression
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.282557) EVENT_LOG_v1 {"time_micros": 1765005571282509, "job": 42, "event": "compaction_finished", "compaction_time_micros": 127744, "compaction_time_cpu_micros": 34869, "output_level": 6, "num_output_files": 1, "total_output_size": 11345115, "num_input_records": 7586, "num_output_records": 7063, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005571283526, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005571285546, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.129393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.285614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.285619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.285620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.285622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:31 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:31.285623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:31 compute-0 nova_compute[251992]: 2025-12-06 07:19:31.380 251996 DEBUG nova.storage.rbd_utils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] resizing rbd image 686daa50-6941-4d67-8ec0-3d6fc9187a48_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:19:31 compute-0 nova_compute[251992]: 2025-12-06 07:19:31.496 251996 DEBUG nova.objects.instance [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lazy-loading 'migration_context' on Instance uuid 686daa50-6941-4d67-8ec0-3d6fc9187a48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:31 compute-0 nova_compute[251992]: 2025-12-06 07:19:31.509 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:19:31 compute-0 nova_compute[251992]: 2025-12-06 07:19:31.509 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Ensure instance console log exists: /var/lib/nova/instances/686daa50-6941-4d67-8ec0-3d6fc9187a48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:19:31 compute-0 nova_compute[251992]: 2025-12-06 07:19:31.510 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:31 compute-0 nova_compute[251992]: 2025-12-06 07:19:31.510 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:31 compute-0 nova_compute[251992]: 2025-12-06 07:19:31.510 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.9 MiB/s wr, 281 op/s
Dec 06 07:19:32 compute-0 ceph-mon[74339]: pgmap v1846: 305 pgs: 305 active+clean; 326 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.1 MiB/s wr, 217 op/s
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.124 251996 DEBUG nova.network.neutron [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Updating instance_info_cache with network_info: [{"id": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "address": "fa:16:3e:65:70:48", "network": {"id": "3eab116e-2d42-4979-828f-0c5410c830e8", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-802678261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08ea3b6ed6c24b0f87e477586001ac99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap812ade77-4f", "ovs_interfaceid": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.150 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Releasing lock "refresh_cache-686daa50-6941-4d67-8ec0-3d6fc9187a48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.150 251996 DEBUG nova.compute.manager [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Instance network_info: |[{"id": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "address": "fa:16:3e:65:70:48", "network": {"id": "3eab116e-2d42-4979-828f-0c5410c830e8", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-802678261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08ea3b6ed6c24b0f87e477586001ac99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap812ade77-4f", "ovs_interfaceid": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.151 251996 DEBUG oslo_concurrency.lockutils [req-9df47358-0b8c-4675-966e-c98b63eb5ca1 req-bc4bdf66-9af4-4906-8c93-1e1bcb4d1023 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-686daa50-6941-4d67-8ec0-3d6fc9187a48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.151 251996 DEBUG nova.network.neutron [req-9df47358-0b8c-4675-966e-c98b63eb5ca1 req-bc4bdf66-9af4-4906-8c93-1e1bcb4d1023 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Refreshing network info cache for port 812ade77-4fa8-41e9-986b-5db4d1d03dd4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.154 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Start _get_guest_xml network_info=[{"id": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "address": "fa:16:3e:65:70:48", "network": {"id": "3eab116e-2d42-4979-828f-0c5410c830e8", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-802678261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08ea3b6ed6c24b0f87e477586001ac99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap812ade77-4f", "ovs_interfaceid": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.158 251996 WARNING nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.166 251996 DEBUG nova.virt.libvirt.host [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.166 251996 DEBUG nova.virt.libvirt.host [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.172 251996 DEBUG nova.virt.libvirt.host [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.173 251996 DEBUG nova.virt.libvirt.host [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.174 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.174 251996 DEBUG nova.virt.hardware [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.175 251996 DEBUG nova.virt.hardware [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.175 251996 DEBUG nova.virt.hardware [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.175 251996 DEBUG nova.virt.hardware [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.175 251996 DEBUG nova.virt.hardware [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.175 251996 DEBUG nova.virt.hardware [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.175 251996 DEBUG nova.virt.hardware [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.175 251996 DEBUG nova.virt.hardware [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.176 251996 DEBUG nova.virt.hardware [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.176 251996 DEBUG nova.virt.hardware [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.176 251996 DEBUG nova.virt.hardware [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.179 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:19:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:32.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:19:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:19:32 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868323730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.663 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.691 251996 DEBUG nova.storage.rbd_utils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] rbd image 686daa50-6941-4d67-8ec0-3d6fc9187a48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:32 compute-0 nova_compute[251992]: 2025-12-06 07:19:32.695 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:32.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:19:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1552608381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.158 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.160 251996 DEBUG nova.virt.libvirt.vif [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:19:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1661001278',display_name='tempest-InstanceActionsNegativeTestJSON-server-1661001278',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1661001278',id=81,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08ea3b6ed6c24b0f87e477586001ac99',ramdisk_id='',reservation_id='r-27ga0qwx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-571235371',owner_us
er_name='tempest-InstanceActionsNegativeTestJSON-571235371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:19:27Z,user_data=None,user_id='3af050664af642888620680c329441c5',uuid=686daa50-6941-4d67-8ec0-3d6fc9187a48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "address": "fa:16:3e:65:70:48", "network": {"id": "3eab116e-2d42-4979-828f-0c5410c830e8", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-802678261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08ea3b6ed6c24b0f87e477586001ac99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap812ade77-4f", "ovs_interfaceid": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.160 251996 DEBUG nova.network.os_vif_util [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Converting VIF {"id": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "address": "fa:16:3e:65:70:48", "network": {"id": "3eab116e-2d42-4979-828f-0c5410c830e8", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-802678261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08ea3b6ed6c24b0f87e477586001ac99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap812ade77-4f", "ovs_interfaceid": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.161 251996 DEBUG nova.network.os_vif_util [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:70:48,bridge_name='br-int',has_traffic_filtering=True,id=812ade77-4fa8-41e9-986b-5db4d1d03dd4,network=Network(3eab116e-2d42-4979-828f-0c5410c830e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap812ade77-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.162 251996 DEBUG nova.objects.instance [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lazy-loading 'pci_devices' on Instance uuid 686daa50-6941-4d67-8ec0-3d6fc9187a48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.188 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:19:33 compute-0 nova_compute[251992]:   <uuid>686daa50-6941-4d67-8ec0-3d6fc9187a48</uuid>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   <name>instance-00000051</name>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1661001278</nova:name>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:19:32</nova:creationTime>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <nova:user uuid="3af050664af642888620680c329441c5">tempest-InstanceActionsNegativeTestJSON-571235371-project-member</nova:user>
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <nova:project uuid="08ea3b6ed6c24b0f87e477586001ac99">tempest-InstanceActionsNegativeTestJSON-571235371</nova:project>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <nova:port uuid="812ade77-4fa8-41e9-986b-5db4d1d03dd4">
Dec 06 07:19:33 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <system>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <entry name="serial">686daa50-6941-4d67-8ec0-3d6fc9187a48</entry>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <entry name="uuid">686daa50-6941-4d67-8ec0-3d6fc9187a48</entry>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     </system>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   <os>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   </os>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   <features>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   </features>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/686daa50-6941-4d67-8ec0-3d6fc9187a48_disk">
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       </source>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/686daa50-6941-4d67-8ec0-3d6fc9187a48_disk.config">
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       </source>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:19:33 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:65:70:48"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <target dev="tap812ade77-4f"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/686daa50-6941-4d67-8ec0-3d6fc9187a48/console.log" append="off"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <video>
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     </video>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:19:33 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:19:33 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:19:33 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:19:33 compute-0 nova_compute[251992]: </domain>
Dec 06 07:19:33 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.190 251996 DEBUG nova.compute.manager [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Preparing to wait for external event network-vif-plugged-812ade77-4fa8-41e9-986b-5db4d1d03dd4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.190 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Acquiring lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.190 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.190 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.191 251996 DEBUG nova.virt.libvirt.vif [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:19:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1661001278',display_name='tempest-InstanceActionsNegativeTestJSON-server-1661001278',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1661001278',id=81,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='08ea3b6ed6c24b0f87e477586001ac99',ramdisk_id='',reservation_id='r-27ga0qwx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-571235371
',owner_user_name='tempest-InstanceActionsNegativeTestJSON-571235371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:19:27Z,user_data=None,user_id='3af050664af642888620680c329441c5',uuid=686daa50-6941-4d67-8ec0-3d6fc9187a48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "address": "fa:16:3e:65:70:48", "network": {"id": "3eab116e-2d42-4979-828f-0c5410c830e8", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-802678261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08ea3b6ed6c24b0f87e477586001ac99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap812ade77-4f", "ovs_interfaceid": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.191 251996 DEBUG nova.network.os_vif_util [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Converting VIF {"id": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "address": "fa:16:3e:65:70:48", "network": {"id": "3eab116e-2d42-4979-828f-0c5410c830e8", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-802678261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08ea3b6ed6c24b0f87e477586001ac99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap812ade77-4f", "ovs_interfaceid": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.192 251996 DEBUG nova.network.os_vif_util [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:70:48,bridge_name='br-int',has_traffic_filtering=True,id=812ade77-4fa8-41e9-986b-5db4d1d03dd4,network=Network(3eab116e-2d42-4979-828f-0c5410c830e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap812ade77-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.192 251996 DEBUG os_vif [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:70:48,bridge_name='br-int',has_traffic_filtering=True,id=812ade77-4fa8-41e9-986b-5db4d1d03dd4,network=Network(3eab116e-2d42-4979-828f-0c5410c830e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap812ade77-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.193 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.194 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.194 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.198 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.198 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap812ade77-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.199 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap812ade77-4f, col_values=(('external_ids', {'iface-id': '812ade77-4fa8-41e9-986b-5db4d1d03dd4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:70:48', 'vm-uuid': '686daa50-6941-4d67-8ec0-3d6fc9187a48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.201 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:33 compute-0 NetworkManager[48965]: <info>  [1765005573.2018] manager: (tap812ade77-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.203 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.207 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.208 251996 INFO os_vif [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:70:48,bridge_name='br-int',has_traffic_filtering=True,id=812ade77-4fa8-41e9-986b-5db4d1d03dd4,network=Network(3eab116e-2d42-4979-828f-0c5410c830e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap812ade77-4f')
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.289 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.291 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.291 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] No VIF found with MAC fa:16:3e:65:70:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.292 251996 INFO nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Using config drive
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.326 251996 DEBUG nova.storage.rbd_utils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] rbd image 686daa50-6941-4d67-8ec0-3d6fc9187a48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.762 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 108 op/s
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.909 251996 INFO nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Creating config drive at /var/lib/nova/instances/686daa50-6941-4d67-8ec0-3d6fc9187a48/disk.config
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.915 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/686daa50-6941-4d67-8ec0-3d6fc9187a48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1o8eci7x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:33 compute-0 ceph-mon[74339]: pgmap v1847: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.9 MiB/s wr, 281 op/s
Dec 06 07:19:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2868323730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:33.927691) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005573927768, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 290, "num_deletes": 259, "total_data_size": 79205, "memory_usage": 86360, "flush_reason": "Manual Compaction"}
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005573930065, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 79132, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37521, "largest_seqno": 37810, "table_properties": {"data_size": 77209, "index_size": 151, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4876, "raw_average_key_size": 17, "raw_value_size": 73335, "raw_average_value_size": 263, "num_data_blocks": 7, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005571, "oldest_key_time": 1765005571, "file_creation_time": 1765005573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 2415 microseconds, and 897 cpu microseconds.
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:33.930118) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 79132 bytes OK
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:33.930132) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:33.931489) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:33.931504) EVENT_LOG_v1 {"time_micros": 1765005573931499, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:33.931517) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 77024, prev total WAL file size 77024, number of live WAL files 2.
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:33.932142) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303038' seq:72057594037927935, type:22 .. '6C6F676D0031323633' seq:0, type:0; will stop at (end)
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(77KB)], [77(10MB)]
Dec 06 07:19:33 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005573932470, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11424247, "oldest_snapshot_seqno": -1}
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.968 251996 DEBUG nova.compute.manager [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-changed-c51cc596-c273-4444-b624-c7f87bb78323 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.969 251996 DEBUG nova.compute.manager [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Refreshing instance network info cache due to event network-changed-c51cc596-c273-4444-b624-c7f87bb78323. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.969 251996 DEBUG oslo_concurrency.lockutils [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.969 251996 DEBUG oslo_concurrency.lockutils [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:33 compute-0 nova_compute[251992]: 2025-12-06 07:19:33.969 251996 DEBUG nova.network.neutron [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Refreshing network info cache for port c51cc596-c273-4444-b624-c7f87bb78323 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6815 keys, 11286670 bytes, temperature: kUnknown
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005574030353, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 11286670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11239715, "index_size": 28795, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 176871, "raw_average_key_size": 25, "raw_value_size": 11116071, "raw_average_value_size": 1631, "num_data_blocks": 1143, "num_entries": 6815, "num_filter_entries": 6815, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765005573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:34.030709) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 11286670 bytes
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:34.032672) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.6 rd, 115.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.8 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(287.0) write-amplify(142.6) OK, records in: 7341, records dropped: 526 output_compression: NoCompression
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:34.032720) EVENT_LOG_v1 {"time_micros": 1765005574032701, "job": 44, "event": "compaction_finished", "compaction_time_micros": 97987, "compaction_time_cpu_micros": 35996, "output_level": 6, "num_output_files": 1, "total_output_size": 11286670, "num_input_records": 7341, "num_output_records": 6815, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005574033179, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005574035656, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:33.931901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:34.035709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:34.035713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:34.035715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:34.035716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:19:34.035718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.049 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/686daa50-6941-4d67-8ec0-3d6fc9187a48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1o8eci7x" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.080 251996 DEBUG nova.storage.rbd_utils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] rbd image 686daa50-6941-4d67-8ec0-3d6fc9187a48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.085 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/686daa50-6941-4d67-8ec0-3d6fc9187a48/disk.config 686daa50-6941-4d67-8ec0-3d6fc9187a48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.445 251996 DEBUG oslo_concurrency.processutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/686daa50-6941-4d67-8ec0-3d6fc9187a48/disk.config 686daa50-6941-4d67-8ec0-3d6fc9187a48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.446 251996 INFO nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Deleting local config drive /var/lib/nova/instances/686daa50-6941-4d67-8ec0-3d6fc9187a48/disk.config because it was imported into RBD.
Dec 06 07:19:34 compute-0 kernel: tap812ade77-4f: entered promiscuous mode
Dec 06 07:19:34 compute-0 NetworkManager[48965]: <info>  [1765005574.4993] manager: (tap812ade77-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Dec 06 07:19:34 compute-0 ovn_controller[147168]: 2025-12-06T07:19:34Z|00253|binding|INFO|Claiming lport 812ade77-4fa8-41e9-986b-5db4d1d03dd4 for this chassis.
Dec 06 07:19:34 compute-0 ovn_controller[147168]: 2025-12-06T07:19:34Z|00254|binding|INFO|812ade77-4fa8-41e9-986b-5db4d1d03dd4: Claiming fa:16:3e:65:70:48 10.100.0.9
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.501 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.508 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:70:48 10.100.0.9'], port_security=['fa:16:3e:65:70:48 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '686daa50-6941-4d67-8ec0-3d6fc9187a48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3eab116e-2d42-4979-828f-0c5410c830e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08ea3b6ed6c24b0f87e477586001ac99', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f1f2c21-48c5-4f4a-915a-7839ec54825f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93751d6e-5361-47c2-91c6-daf692694fce, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=812ade77-4fa8-41e9-986b-5db4d1d03dd4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.509 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 812ade77-4fa8-41e9-986b-5db4d1d03dd4 in datapath 3eab116e-2d42-4979-828f-0c5410c830e8 bound to our chassis
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.511 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3eab116e-2d42-4979-828f-0c5410c830e8
Dec 06 07:19:34 compute-0 ovn_controller[147168]: 2025-12-06T07:19:34Z|00255|binding|INFO|Setting lport 812ade77-4fa8-41e9-986b-5db4d1d03dd4 ovn-installed in OVS
Dec 06 07:19:34 compute-0 ovn_controller[147168]: 2025-12-06T07:19:34Z|00256|binding|INFO|Setting lport 812ade77-4fa8-41e9-986b-5db4d1d03dd4 up in Southbound
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.519 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.522 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8f79f6e5-eb02-4d3c-af10-7f3227bbdf6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.523 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3eab116e-21 in ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.525 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3eab116e-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.525 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[66cf8e1f-5eee-44f8-aa71-eaf3b36dad03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.526 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[318dc08a-3de4-43f2-b08d-f1b1a5c920a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 systemd-udevd[304244]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.538 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[850e0a8e-f841-422e-b29a-fcca4b00da47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 systemd-machined[212986]: New machine qemu-36-instance-00000051.
Dec 06 07:19:34 compute-0 NetworkManager[48965]: <info>  [1765005574.5454] device (tap812ade77-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:19:34 compute-0 NetworkManager[48965]: <info>  [1765005574.5464] device (tap812ade77-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:19:34 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000051.
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.554 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[27a9eb12-2d64-487c-9e46-37f7d40229ec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:34.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.585 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c5801b-7363-431c-96b2-2a6a9982f678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 NetworkManager[48965]: <info>  [1765005574.5918] manager: (tap3eab116e-20): new Veth device (/org/freedesktop/NetworkManager/Devices/142)
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.591 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b33a03-3cbd-4cee-add7-f8c5cc025e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.622 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d14679d4-3442-4c63-88db-227313c1bd39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.625 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[981cfa69-3c1c-48ae-a289-0d71df754a3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 NetworkManager[48965]: <info>  [1765005574.6459] device (tap3eab116e-20): carrier: link connected
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.651 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[910f0fef-df96-49a9-ae46-58cedb3b2c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.667 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[55ef18e4-979e-47f9-9a52-3f959a05a5d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3eab116e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:6f:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584723, 'reachable_time': 22822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304277, 'error': None, 'target': 'ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.682 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[201b04c8-53c3-4826-a654-d6562e0796b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe10:6f97'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584723, 'tstamp': 584723}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304278, 'error': None, 'target': 'ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.699 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6f70c4d4-67b1-4aea-8398-8072314d66e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3eab116e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:6f:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584723, 'reachable_time': 22822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304279, 'error': None, 'target': 'ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.728 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[073c6aa4-f83e-4a29-9593-c869c2109afa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.786 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f048c057-8ad6-4353-b14a-536885157bbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.788 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3eab116e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.788 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.789 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3eab116e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.791 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:34 compute-0 NetworkManager[48965]: <info>  [1765005574.7919] manager: (tap3eab116e-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Dec 06 07:19:34 compute-0 kernel: tap3eab116e-20: entered promiscuous mode
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.793 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.797 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3eab116e-20, col_values=(('external_ids', {'iface-id': '6bcf5fa1-98f0-46dc-87be-601c035f7d7e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.798 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:34 compute-0 ovn_controller[147168]: 2025-12-06T07:19:34Z|00257|binding|INFO|Releasing lport 6bcf5fa1-98f0-46dc-87be-601c035f7d7e from this chassis (sb_readonly=0)
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.799 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.801 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3eab116e-2d42-4979-828f-0c5410c830e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3eab116e-2d42-4979-828f-0c5410c830e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.802 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[50ac1cde-b508-4e2d-946b-2e962b5ef8b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.803 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-3eab116e-2d42-4979-828f-0c5410c830e8
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/3eab116e-2d42-4979-828f-0c5410c830e8.pid.haproxy
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 3eab116e-2d42-4979-828f-0c5410c830e8
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:19:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:34.804 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8', 'env', 'PROCESS_TAG=haproxy-3eab116e-2d42-4979-828f-0c5410c830e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3eab116e-2d42-4979-828f-0c5410c830e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.813 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:34.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.912 251996 DEBUG nova.network.neutron [req-9df47358-0b8c-4675-966e-c98b63eb5ca1 req-bc4bdf66-9af4-4906-8c93-1e1bcb4d1023 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Updated VIF entry in instance network info cache for port 812ade77-4fa8-41e9-986b-5db4d1d03dd4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.913 251996 DEBUG nova.network.neutron [req-9df47358-0b8c-4675-966e-c98b63eb5ca1 req-bc4bdf66-9af4-4906-8c93-1e1bcb4d1023 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Updating instance_info_cache with network_info: [{"id": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "address": "fa:16:3e:65:70:48", "network": {"id": "3eab116e-2d42-4979-828f-0c5410c830e8", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-802678261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08ea3b6ed6c24b0f87e477586001ac99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap812ade77-4f", "ovs_interfaceid": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:34 compute-0 nova_compute[251992]: 2025-12-06 07:19:34.930 251996 DEBUG oslo_concurrency.lockutils [req-9df47358-0b8c-4675-966e-c98b63eb5ca1 req-bc4bdf66-9af4-4906-8c93-1e1bcb4d1023 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-686daa50-6941-4d67-8ec0-3d6fc9187a48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1552608381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:35 compute-0 ceph-mon[74339]: pgmap v1848: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 108 op/s
Dec 06 07:19:35 compute-0 podman[304311]: 2025-12-06 07:19:35.174627085 +0000 UTC m=+0.047602078 container create 595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 07:19:35 compute-0 systemd[1]: Started libpod-conmon-595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639.scope.
Dec 06 07:19:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28646da4f4316f3a4c30e6640f649b2374e467b8018ab54b469f3b21f7267954/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:35 compute-0 podman[304311]: 2025-12-06 07:19:35.14934385 +0000 UTC m=+0.022318873 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:19:35 compute-0 podman[304311]: 2025-12-06 07:19:35.256987191 +0000 UTC m=+0.129962214 container init 595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:19:35 compute-0 podman[304311]: 2025-12-06 07:19:35.262602288 +0000 UTC m=+0.135577281 container start 595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:19:35 compute-0 neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8[304326]: [NOTICE]   (304330) : New worker (304339) forked
Dec 06 07:19:35 compute-0 neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8[304326]: [NOTICE]   (304330) : Loading success.
Dec 06 07:19:35 compute-0 nova_compute[251992]: 2025-12-06 07:19:35.469 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005575.4685626, 686daa50-6941-4d67-8ec0-3d6fc9187a48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:35 compute-0 nova_compute[251992]: 2025-12-06 07:19:35.469 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] VM Started (Lifecycle Event)
Dec 06 07:19:35 compute-0 nova_compute[251992]: 2025-12-06 07:19:35.497 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:35 compute-0 nova_compute[251992]: 2025-12-06 07:19:35.502 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005575.468823, 686daa50-6941-4d67-8ec0-3d6fc9187a48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:35 compute-0 nova_compute[251992]: 2025-12-06 07:19:35.502 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] VM Paused (Lifecycle Event)
Dec 06 07:19:35 compute-0 nova_compute[251992]: 2025-12-06 07:19:35.529 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:35 compute-0 nova_compute[251992]: 2025-12-06 07:19:35.533 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:19:35 compute-0 nova_compute[251992]: 2025-12-06 07:19:35.564 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:19:35 compute-0 nova_compute[251992]: 2025-12-06 07:19:35.774 251996 DEBUG nova.network.neutron [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updated VIF entry in instance network info cache for port c51cc596-c273-4444-b624-c7f87bb78323. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:19:35 compute-0 nova_compute[251992]: 2025-12-06 07:19:35.775 251996 DEBUG nova.network.neutron [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updating instance_info_cache with network_info: [{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:35 compute-0 nova_compute[251992]: 2025-12-06 07:19:35.819 251996 DEBUG oslo_concurrency.lockutils [req-30976b16-5ed0-41a8-a10d-392c7810eb35 req-a294eb37-bb5a-425b-94f8-7dc1230eceba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 113 op/s
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.099 251996 DEBUG nova.compute.manager [req-5c3602ca-bc17-485e-b8b5-dba7729f16f3 req-c8ea93c9-35d0-4dda-8c52-1e66f35e3661 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Received event network-vif-plugged-812ade77-4fa8-41e9-986b-5db4d1d03dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.099 251996 DEBUG oslo_concurrency.lockutils [req-5c3602ca-bc17-485e-b8b5-dba7729f16f3 req-c8ea93c9-35d0-4dda-8c52-1e66f35e3661 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.100 251996 DEBUG oslo_concurrency.lockutils [req-5c3602ca-bc17-485e-b8b5-dba7729f16f3 req-c8ea93c9-35d0-4dda-8c52-1e66f35e3661 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.100 251996 DEBUG oslo_concurrency.lockutils [req-5c3602ca-bc17-485e-b8b5-dba7729f16f3 req-c8ea93c9-35d0-4dda-8c52-1e66f35e3661 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.100 251996 DEBUG nova.compute.manager [req-5c3602ca-bc17-485e-b8b5-dba7729f16f3 req-c8ea93c9-35d0-4dda-8c52-1e66f35e3661 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Processing event network-vif-plugged-812ade77-4fa8-41e9-986b-5db4d1d03dd4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.100 251996 DEBUG nova.compute.manager [req-5c3602ca-bc17-485e-b8b5-dba7729f16f3 req-c8ea93c9-35d0-4dda-8c52-1e66f35e3661 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Received event network-vif-plugged-812ade77-4fa8-41e9-986b-5db4d1d03dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.100 251996 DEBUG oslo_concurrency.lockutils [req-5c3602ca-bc17-485e-b8b5-dba7729f16f3 req-c8ea93c9-35d0-4dda-8c52-1e66f35e3661 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.100 251996 DEBUG oslo_concurrency.lockutils [req-5c3602ca-bc17-485e-b8b5-dba7729f16f3 req-c8ea93c9-35d0-4dda-8c52-1e66f35e3661 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.101 251996 DEBUG oslo_concurrency.lockutils [req-5c3602ca-bc17-485e-b8b5-dba7729f16f3 req-c8ea93c9-35d0-4dda-8c52-1e66f35e3661 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.101 251996 DEBUG nova.compute.manager [req-5c3602ca-bc17-485e-b8b5-dba7729f16f3 req-c8ea93c9-35d0-4dda-8c52-1e66f35e3661 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] No waiting events found dispatching network-vif-plugged-812ade77-4fa8-41e9-986b-5db4d1d03dd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.102 251996 WARNING nova.compute.manager [req-5c3602ca-bc17-485e-b8b5-dba7729f16f3 req-c8ea93c9-35d0-4dda-8c52-1e66f35e3661 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Received unexpected event network-vif-plugged-812ade77-4fa8-41e9-986b-5db4d1d03dd4 for instance with vm_state building and task_state spawning.
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.102 251996 DEBUG nova.compute.manager [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.106 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.107 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005576.1065762, 686daa50-6941-4d67-8ec0-3d6fc9187a48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.107 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] VM Resumed (Lifecycle Event)
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.111 251996 INFO nova.virt.libvirt.driver [-] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Instance spawned successfully.
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.112 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.124 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.126 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.136 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.137 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.137 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.137 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.138 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.138 251996 DEBUG nova.virt.libvirt.driver [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.144 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.187 251996 INFO nova.compute.manager [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Took 8.52 seconds to spawn the instance on the hypervisor.
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.188 251996 DEBUG nova.compute.manager [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.258 251996 INFO nova.compute.manager [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Took 10.96 seconds to build instance.
Dec 06 07:19:36 compute-0 nova_compute[251992]: 2025-12-06 07:19:36.282 251996 DEBUG oslo_concurrency.lockutils [None req-7272d803-a443-4c56-acc8-d487a013e015 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:36.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:36.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:37 compute-0 nova_compute[251992]: 2025-12-06 07:19:37.335 251996 DEBUG oslo_concurrency.lockutils [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Acquiring lock "686daa50-6941-4d67-8ec0-3d6fc9187a48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:37 compute-0 nova_compute[251992]: 2025-12-06 07:19:37.336 251996 DEBUG oslo_concurrency.lockutils [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:37 compute-0 nova_compute[251992]: 2025-12-06 07:19:37.336 251996 DEBUG oslo_concurrency.lockutils [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Acquiring lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:37 compute-0 nova_compute[251992]: 2025-12-06 07:19:37.336 251996 DEBUG oslo_concurrency.lockutils [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:37 compute-0 nova_compute[251992]: 2025-12-06 07:19:37.337 251996 DEBUG oslo_concurrency.lockutils [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:37 compute-0 nova_compute[251992]: 2025-12-06 07:19:37.338 251996 INFO nova.compute.manager [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Terminating instance
Dec 06 07:19:37 compute-0 nova_compute[251992]: 2025-12-06 07:19:37.339 251996 DEBUG nova.compute.manager [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:19:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Dec 06 07:19:38 compute-0 nova_compute[251992]: 2025-12-06 07:19:38.027 251996 DEBUG oslo_concurrency.lockutils [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "interface-288aae5a-11e0-4906-903d-acea3cebcf63-3036e2e9-ad2c-4f44-96e5-c7dcdea69629" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:38 compute-0 nova_compute[251992]: 2025-12-06 07:19:38.028 251996 DEBUG oslo_concurrency.lockutils [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-288aae5a-11e0-4906-903d-acea3cebcf63-3036e2e9-ad2c-4f44-96e5-c7dcdea69629" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:38 compute-0 nova_compute[251992]: 2025-12-06 07:19:38.028 251996 DEBUG nova.objects.instance [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'flavor' on Instance uuid 288aae5a-11e0-4906-903d-acea3cebcf63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:38 compute-0 nova_compute[251992]: 2025-12-06 07:19:38.203 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:38 compute-0 nova_compute[251992]: 2025-12-06 07:19:38.401 251996 DEBUG nova.compute.manager [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-changed-c51cc596-c273-4444-b624-c7f87bb78323 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:38 compute-0 nova_compute[251992]: 2025-12-06 07:19:38.401 251996 DEBUG nova.compute.manager [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Refreshing instance network info cache due to event network-changed-c51cc596-c273-4444-b624-c7f87bb78323. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:19:38 compute-0 nova_compute[251992]: 2025-12-06 07:19:38.401 251996 DEBUG oslo_concurrency.lockutils [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:38 compute-0 nova_compute[251992]: 2025-12-06 07:19:38.401 251996 DEBUG oslo_concurrency.lockutils [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:38 compute-0 nova_compute[251992]: 2025-12-06 07:19:38.402 251996 DEBUG nova.network.neutron [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Refreshing network info cache for port c51cc596-c273-4444-b624-c7f87bb78323 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:19:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:38.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:38 compute-0 nova_compute[251992]: 2025-12-06 07:19:38.609 251996 DEBUG nova.objects.instance [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'pci_requests' on Instance uuid 288aae5a-11e0-4906-903d-acea3cebcf63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:38 compute-0 nova_compute[251992]: 2025-12-06 07:19:38.630 251996 DEBUG nova.network.neutron [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:19:38 compute-0 nova_compute[251992]: 2025-12-06 07:19:38.766 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:38.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:39 compute-0 nova_compute[251992]: 2025-12-06 07:19:39.414 251996 DEBUG nova.policy [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '06f5b46553b24b39a1493d96ec4e503e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '35df5125c2cf4d29a6b975951af14910', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:19:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Dec 06 07:19:39 compute-0 ceph-mon[74339]: pgmap v1849: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 113 op/s
Dec 06 07:19:40 compute-0 kernel: tap812ade77-4f (unregistering): left promiscuous mode
Dec 06 07:19:40 compute-0 NetworkManager[48965]: <info>  [1765005580.3186] device (tap812ade77-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:19:40 compute-0 ovn_controller[147168]: 2025-12-06T07:19:40Z|00258|binding|INFO|Releasing lport 812ade77-4fa8-41e9-986b-5db4d1d03dd4 from this chassis (sb_readonly=0)
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.323 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:40 compute-0 ovn_controller[147168]: 2025-12-06T07:19:40Z|00259|binding|INFO|Setting lport 812ade77-4fa8-41e9-986b-5db4d1d03dd4 down in Southbound
Dec 06 07:19:40 compute-0 ovn_controller[147168]: 2025-12-06T07:19:40Z|00260|binding|INFO|Removing iface tap812ade77-4f ovn-installed in OVS
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.325 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:40.332 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:70:48 10.100.0.9'], port_security=['fa:16:3e:65:70:48 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '686daa50-6941-4d67-8ec0-3d6fc9187a48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3eab116e-2d42-4979-828f-0c5410c830e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '08ea3b6ed6c24b0f87e477586001ac99', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f1f2c21-48c5-4f4a-915a-7839ec54825f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93751d6e-5361-47c2-91c6-daf692694fce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=812ade77-4fa8-41e9-986b-5db4d1d03dd4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:19:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:40.334 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 812ade77-4fa8-41e9-986b-5db4d1d03dd4 in datapath 3eab116e-2d42-4979-828f-0c5410c830e8 unbound from our chassis
Dec 06 07:19:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:40.335 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3eab116e-2d42-4979-828f-0c5410c830e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:19:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:40.337 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2ed9a0-b30a-4c67-895b-bab221dd5555]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:40.337 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8 namespace which is not needed anymore
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.342 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:40 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000051.scope: Deactivated successfully.
Dec 06 07:19:40 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000051.scope: Consumed 2.223s CPU time.
Dec 06 07:19:40 compute-0 systemd-machined[212986]: Machine qemu-36-instance-00000051 terminated.
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.412 251996 DEBUG nova.network.neutron [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updated VIF entry in instance network info cache for port c51cc596-c273-4444-b624-c7f87bb78323. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.412 251996 DEBUG nova.network.neutron [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updating instance_info_cache with network_info: [{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.435 251996 DEBUG oslo_concurrency.lockutils [req-d1a90b65-696c-4251-b1a8-47e50326bfd3 req-59c62610-cbe3-4256-bc89-223234dfc6b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.463 251996 DEBUG nova.network.neutron [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Successfully updated port: 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.506 251996 DEBUG oslo_concurrency.lockutils [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.507 251996 DEBUG oslo_concurrency.lockutils [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.507 251996 DEBUG nova.network.neutron [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.564 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.568 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.577 251996 INFO nova.virt.libvirt.driver [-] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Instance destroyed successfully.
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.578 251996 DEBUG nova.objects.instance [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lazy-loading 'resources' on Instance uuid 686daa50-6941-4d67-8ec0-3d6fc9187a48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:40.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.586 251996 DEBUG nova.compute.manager [req-dc53be25-ef1d-436f-a56e-2791330c9808 req-ee6a267e-6b5b-48f5-a06a-51dca37de673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-changed-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.587 251996 DEBUG nova.compute.manager [req-dc53be25-ef1d-436f-a56e-2791330c9808 req-ee6a267e-6b5b-48f5-a06a-51dca37de673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Refreshing instance network info cache due to event network-changed-3036e2e9-ad2c-4f44-96e5-c7dcdea69629. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.587 251996 DEBUG oslo_concurrency.lockutils [req-dc53be25-ef1d-436f-a56e-2791330c9808 req-ee6a267e-6b5b-48f5-a06a-51dca37de673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.611 251996 DEBUG nova.virt.libvirt.vif [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:19:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1661001278',display_name='tempest-InstanceActionsNegativeTestJSON-server-1661001278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1661001278',id=81,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='08ea3b6ed6c24b0f87e477586001ac99',ramdisk_id='',reservation_id='r-27ga0qwx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-571235371',owner_user_name='tempest-InstanceActionsNegativeTestJSON-571235371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:36Z,user_data=None,user_id='3af050664af642888620680c329441c5',uuid=686daa50-6941-4d67-8ec0-3d6fc9187a48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "address": "fa:16:3e:65:70:48", "network": {"id": "3eab116e-2d42-4979-828f-0c5410c830e8", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-802678261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08ea3b6ed6c24b0f87e477586001ac99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap812ade77-4f", "ovs_interfaceid": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.612 251996 DEBUG nova.network.os_vif_util [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Converting VIF {"id": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "address": "fa:16:3e:65:70:48", "network": {"id": "3eab116e-2d42-4979-828f-0c5410c830e8", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-802678261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "08ea3b6ed6c24b0f87e477586001ac99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap812ade77-4f", "ovs_interfaceid": "812ade77-4fa8-41e9-986b-5db4d1d03dd4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.612 251996 DEBUG nova.network.os_vif_util [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:70:48,bridge_name='br-int',has_traffic_filtering=True,id=812ade77-4fa8-41e9-986b-5db4d1d03dd4,network=Network(3eab116e-2d42-4979-828f-0c5410c830e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap812ade77-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.613 251996 DEBUG os_vif [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:70:48,bridge_name='br-int',has_traffic_filtering=True,id=812ade77-4fa8-41e9-986b-5db4d1d03dd4,network=Network(3eab116e-2d42-4979-828f-0c5410c830e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap812ade77-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.615 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.615 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap812ade77-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.616 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.619 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.622 251996 INFO os_vif [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:70:48,bridge_name='br-int',has_traffic_filtering=True,id=812ade77-4fa8-41e9-986b-5db4d1d03dd4,network=Network(3eab116e-2d42-4979-828f-0c5410c830e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap812ade77-4f')
Dec 06 07:19:40 compute-0 nova_compute[251992]: 2025-12-06 07:19:40.806 251996 WARNING nova.network.neutron [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] 61a21643-77ba-4a09-8184-10dc4bd52b26 already exists in list: networks containing: ['61a21643-77ba-4a09-8184-10dc4bd52b26']. ignoring it
Dec 06 07:19:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:40.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:41 compute-0 neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8[304326]: [NOTICE]   (304330) : haproxy version is 2.8.14-c23fe91
Dec 06 07:19:41 compute-0 neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8[304326]: [NOTICE]   (304330) : path to executable is /usr/sbin/haproxy
Dec 06 07:19:41 compute-0 neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8[304326]: [WARNING]  (304330) : Exiting Master process...
Dec 06 07:19:41 compute-0 neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8[304326]: [ALERT]    (304330) : Current worker (304339) exited with code 143 (Terminated)
Dec 06 07:19:41 compute-0 neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8[304326]: [WARNING]  (304330) : All workers exited. Exiting... (0)
Dec 06 07:19:41 compute-0 systemd[1]: libpod-595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639.scope: Deactivated successfully.
Dec 06 07:19:41 compute-0 podman[304410]: 2025-12-06 07:19:41.381855115 +0000 UTC m=+0.961094077 container died 595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:19:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4046454772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:41 compute-0 ceph-mon[74339]: pgmap v1850: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Dec 06 07:19:41 compute-0 ceph-mon[74339]: pgmap v1851: 305 pgs: 305 active+clean; 372 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Dec 06 07:19:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 394 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 177 op/s
Dec 06 07:19:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:19:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:42.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:19:42 compute-0 nova_compute[251992]: 2025-12-06 07:19:42.730 251996 DEBUG nova.compute.manager [req-24ed5485-aecb-462e-8dc3-8d15b8b95d53 req-37815afe-cd35-4820-970b-2a8f159ca3de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Received event network-vif-unplugged-812ade77-4fa8-41e9-986b-5db4d1d03dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:42 compute-0 nova_compute[251992]: 2025-12-06 07:19:42.731 251996 DEBUG oslo_concurrency.lockutils [req-24ed5485-aecb-462e-8dc3-8d15b8b95d53 req-37815afe-cd35-4820-970b-2a8f159ca3de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:42 compute-0 nova_compute[251992]: 2025-12-06 07:19:42.731 251996 DEBUG oslo_concurrency.lockutils [req-24ed5485-aecb-462e-8dc3-8d15b8b95d53 req-37815afe-cd35-4820-970b-2a8f159ca3de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:42 compute-0 nova_compute[251992]: 2025-12-06 07:19:42.732 251996 DEBUG oslo_concurrency.lockutils [req-24ed5485-aecb-462e-8dc3-8d15b8b95d53 req-37815afe-cd35-4820-970b-2a8f159ca3de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:42 compute-0 nova_compute[251992]: 2025-12-06 07:19:42.732 251996 DEBUG nova.compute.manager [req-24ed5485-aecb-462e-8dc3-8d15b8b95d53 req-37815afe-cd35-4820-970b-2a8f159ca3de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] No waiting events found dispatching network-vif-unplugged-812ade77-4fa8-41e9-986b-5db4d1d03dd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:42 compute-0 nova_compute[251992]: 2025-12-06 07:19:42.733 251996 DEBUG nova.compute.manager [req-24ed5485-aecb-462e-8dc3-8d15b8b95d53 req-37815afe-cd35-4820-970b-2a8f159ca3de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Received event network-vif-unplugged-812ade77-4fa8-41e9-986b-5db4d1d03dd4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:19:42 compute-0 nova_compute[251992]: 2025-12-06 07:19:42.733 251996 DEBUG nova.compute.manager [req-24ed5485-aecb-462e-8dc3-8d15b8b95d53 req-37815afe-cd35-4820-970b-2a8f159ca3de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Received event network-vif-plugged-812ade77-4fa8-41e9-986b-5db4d1d03dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:42 compute-0 nova_compute[251992]: 2025-12-06 07:19:42.734 251996 DEBUG oslo_concurrency.lockutils [req-24ed5485-aecb-462e-8dc3-8d15b8b95d53 req-37815afe-cd35-4820-970b-2a8f159ca3de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:42 compute-0 nova_compute[251992]: 2025-12-06 07:19:42.735 251996 DEBUG oslo_concurrency.lockutils [req-24ed5485-aecb-462e-8dc3-8d15b8b95d53 req-37815afe-cd35-4820-970b-2a8f159ca3de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:42 compute-0 nova_compute[251992]: 2025-12-06 07:19:42.735 251996 DEBUG oslo_concurrency.lockutils [req-24ed5485-aecb-462e-8dc3-8d15b8b95d53 req-37815afe-cd35-4820-970b-2a8f159ca3de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:42 compute-0 nova_compute[251992]: 2025-12-06 07:19:42.736 251996 DEBUG nova.compute.manager [req-24ed5485-aecb-462e-8dc3-8d15b8b95d53 req-37815afe-cd35-4820-970b-2a8f159ca3de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] No waiting events found dispatching network-vif-plugged-812ade77-4fa8-41e9-986b-5db4d1d03dd4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:42 compute-0 nova_compute[251992]: 2025-12-06 07:19:42.736 251996 WARNING nova.compute.manager [req-24ed5485-aecb-462e-8dc3-8d15b8b95d53 req-37815afe-cd35-4820-970b-2a8f159ca3de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Received unexpected event network-vif-plugged-812ade77-4fa8-41e9-986b-5db4d1d03dd4 for instance with vm_state active and task_state deleting.
Dec 06 07:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639-userdata-shm.mount: Deactivated successfully.
Dec 06 07:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-28646da4f4316f3a4c30e6640f649b2374e467b8018ab54b469f3b21f7267954-merged.mount: Deactivated successfully.
Dec 06 07:19:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:42.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:19:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:19:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:19:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:19:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:19:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.014 251996 DEBUG nova.network.neutron [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updating instance_info_cache with network_info: [{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.029 251996 DEBUG oslo_concurrency.lockutils [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.031 251996 DEBUG oslo_concurrency.lockutils [req-dc53be25-ef1d-436f-a56e-2791330c9808 req-ee6a267e-6b5b-48f5-a06a-51dca37de673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.031 251996 DEBUG nova.network.neutron [req-dc53be25-ef1d-436f-a56e-2791330c9808 req-ee6a267e-6b5b-48f5-a06a-51dca37de673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Refreshing network info cache for port 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.035 251996 DEBUG nova.virt.libvirt.vif [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-319358649',display_name='tempest-tempest.common.compute-instance-319358649',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-319358649',id=79,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g9xh7f4l',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=288aae5a-11e0-4906-903d-acea3cebcf63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.036 251996 DEBUG nova.network.os_vif_util [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.037 251996 DEBUG nova.network.os_vif_util [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.037 251996 DEBUG os_vif [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.038 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.039 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.045 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.049 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.049 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3036e2e9-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.050 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3036e2e9-ad, col_values=(('external_ids', {'iface-id': '3036e2e9-ad2c-4f44-96e5-c7dcdea69629', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:2a:84', 'vm-uuid': '288aae5a-11e0-4906-903d-acea3cebcf63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.052 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:43 compute-0 NetworkManager[48965]: <info>  [1765005583.0531] manager: (tap3036e2e9-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.060 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.061 251996 INFO os_vif [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad')
Dec 06 07:19:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.062 251996 DEBUG nova.virt.libvirt.vif [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-319358649',display_name='tempest-tempest.common.compute-instance-319358649',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-319358649',id=79,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g9xh7f4l',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=288aae5a-11e0-4906-903d-acea3cebcf63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.062 251996 DEBUG nova.network.os_vif_util [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.063 251996 DEBUG nova.network.os_vif_util [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.067 251996 DEBUG nova.virt.libvirt.guest [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] attach device xml: <interface type="ethernet">
Dec 06 07:19:43 compute-0 nova_compute[251992]:   <mac address="fa:16:3e:46:2a:84"/>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   <model type="virtio"/>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   <mtu size="1442"/>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   <target dev="tap3036e2e9-ad"/>
Dec 06 07:19:43 compute-0 nova_compute[251992]: </interface>
Dec 06 07:19:43 compute-0 nova_compute[251992]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:19:43 compute-0 kernel: tap3036e2e9-ad: entered promiscuous mode
Dec 06 07:19:43 compute-0 systemd-udevd[304390]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:19:43 compute-0 NetworkManager[48965]: <info>  [1765005583.0818] manager: (tap3036e2e9-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.082 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:43 compute-0 ovn_controller[147168]: 2025-12-06T07:19:43Z|00261|binding|INFO|Claiming lport 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 for this chassis.
Dec 06 07:19:43 compute-0 ovn_controller[147168]: 2025-12-06T07:19:43Z|00262|binding|INFO|3036e2e9-ad2c-4f44-96e5-c7dcdea69629: Claiming fa:16:3e:46:2a:84 10.100.0.14
Dec 06 07:19:43 compute-0 NetworkManager[48965]: <info>  [1765005583.0944] device (tap3036e2e9-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:19:43 compute-0 NetworkManager[48965]: <info>  [1765005583.0952] device (tap3036e2e9-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:19:43 compute-0 ovn_controller[147168]: 2025-12-06T07:19:43Z|00263|binding|INFO|Setting lport 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 ovn-installed in OVS
Dec 06 07:19:43 compute-0 ovn_controller[147168]: 2025-12-06T07:19:43Z|00264|binding|INFO|Setting lport 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 up in Southbound
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.100 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:43.098 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:2a:84 10.100.0.14'], port_security=['fa:16:3e:46:2a:84 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1354203550', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '288aae5a-11e0-4906-903d-acea3cebcf63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1354203550', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e3084bf1-bc38-47e5-9deb-316970f08514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=3036e2e9-ad2c-4f44-96e5-c7dcdea69629) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.101 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:43 compute-0 ovn_controller[147168]: 2025-12-06T07:19:43Z|00265|binding|INFO|Releasing lport 6bcf5fa1-98f0-46dc-87be-601c035f7d7e from this chassis (sb_readonly=0)
Dec 06 07:19:43 compute-0 ovn_controller[147168]: 2025-12-06T07:19:43Z|00266|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.240 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.656 251996 DEBUG nova.virt.libvirt.driver [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.656 251996 DEBUG nova.virt.libvirt.driver [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.657 251996 DEBUG nova.virt.libvirt.driver [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:a3:a9:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.657 251996 DEBUG nova.virt.libvirt.driver [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] No VIF found with MAC fa:16:3e:46:2a:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.677 251996 DEBUG nova.virt.libvirt.guest [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:19:43 compute-0 nova_compute[251992]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   <nova:name>tempest-tempest.common.compute-instance-319358649</nova:name>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   <nova:creationTime>2025-12-06 07:19:43</nova:creationTime>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   <nova:flavor name="m1.nano">
Dec 06 07:19:43 compute-0 nova_compute[251992]:     <nova:memory>128</nova:memory>
Dec 06 07:19:43 compute-0 nova_compute[251992]:     <nova:disk>1</nova:disk>
Dec 06 07:19:43 compute-0 nova_compute[251992]:     <nova:swap>0</nova:swap>
Dec 06 07:19:43 compute-0 nova_compute[251992]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:19:43 compute-0 nova_compute[251992]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   </nova:flavor>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   <nova:owner>
Dec 06 07:19:43 compute-0 nova_compute[251992]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:19:43 compute-0 nova_compute[251992]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   </nova:owner>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   <nova:ports>
Dec 06 07:19:43 compute-0 nova_compute[251992]:     <nova:port uuid="c51cc596-c273-4444-b624-c7f87bb78323">
Dec 06 07:19:43 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:19:43 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 07:19:43 compute-0 nova_compute[251992]:     <nova:port uuid="3036e2e9-ad2c-4f44-96e5-c7dcdea69629">
Dec 06 07:19:43 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:19:43 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 07:19:43 compute-0 nova_compute[251992]:   </nova:ports>
Dec 06 07:19:43 compute-0 nova_compute[251992]: </nova:instance>
Dec 06 07:19:43 compute-0 nova_compute[251992]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.704 251996 DEBUG oslo_concurrency.lockutils [None req-6f70861d-adcd-45a0-9281-e417c70d0048 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-288aae5a-11e0-4906-903d-acea3cebcf63-3036e2e9-ad2c-4f44-96e5-c7dcdea69629" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:43 compute-0 nova_compute[251992]: 2025-12-06 07:19:43.769 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:43 compute-0 sudo[304474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:43 compute-0 sudo[304474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:43 compute-0 sudo[304474]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 394 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Dec 06 07:19:43 compute-0 sudo[304505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:43 compute-0 sudo[304505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:43 compute-0 sudo[304505]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:44 compute-0 ceph-mon[74339]: pgmap v1852: 305 pgs: 305 active+clean; 394 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 177 op/s
Dec 06 07:19:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2456623870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:44.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.740 251996 DEBUG nova.network.neutron [req-dc53be25-ef1d-436f-a56e-2791330c9808 req-ee6a267e-6b5b-48f5-a06a-51dca37de673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updated VIF entry in instance network info cache for port 3036e2e9-ad2c-4f44-96e5-c7dcdea69629. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.740 251996 DEBUG nova.network.neutron [req-dc53be25-ef1d-436f-a56e-2791330c9808 req-ee6a267e-6b5b-48f5-a06a-51dca37de673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updating instance_info_cache with network_info: [{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.745 251996 DEBUG oslo_concurrency.lockutils [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "interface-288aae5a-11e0-4906-903d-acea3cebcf63-3036e2e9-ad2c-4f44-96e5-c7dcdea69629" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.745 251996 DEBUG oslo_concurrency.lockutils [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-288aae5a-11e0-4906-903d-acea3cebcf63-3036e2e9-ad2c-4f44-96e5-c7dcdea69629" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.759 251996 DEBUG oslo_concurrency.lockutils [req-dc53be25-ef1d-436f-a56e-2791330c9808 req-ee6a267e-6b5b-48f5-a06a-51dca37de673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.768 251996 DEBUG nova.objects.instance [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'flavor' on Instance uuid 288aae5a-11e0-4906-903d-acea3cebcf63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.792 251996 DEBUG nova.virt.libvirt.vif [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-319358649',display_name='tempest-tempest.common.compute-instance-319358649',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-319358649',id=79,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g9xh7f4l',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=288aae5a-11e0-4906-903d-acea3cebcf63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.792 251996 DEBUG nova.network.os_vif_util [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.793 251996 DEBUG nova.network.os_vif_util [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.798 251996 DEBUG nova.virt.libvirt.guest [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:2a:84"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3036e2e9-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.801 251996 DEBUG nova.virt.libvirt.guest [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:2a:84"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3036e2e9-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.804 251996 DEBUG nova.virt.libvirt.driver [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Attempting to detach device tap3036e2e9-ad from instance 288aae5a-11e0-4906-903d-acea3cebcf63 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.804 251996 DEBUG nova.virt.libvirt.guest [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] detach device xml: <interface type="ethernet">
Dec 06 07:19:44 compute-0 nova_compute[251992]:   <mac address="fa:16:3e:46:2a:84"/>
Dec 06 07:19:44 compute-0 nova_compute[251992]:   <model type="virtio"/>
Dec 06 07:19:44 compute-0 nova_compute[251992]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:19:44 compute-0 nova_compute[251992]:   <mtu size="1442"/>
Dec 06 07:19:44 compute-0 nova_compute[251992]:   <target dev="tap3036e2e9-ad"/>
Dec 06 07:19:44 compute-0 nova_compute[251992]: </interface>
Dec 06 07:19:44 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:19:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:44.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.860 251996 DEBUG nova.compute.manager [req-56d3e2d4-a6ac-469b-b0b6-be4d0b36d884 req-1f3c0584-1feb-4596-9e40-e8dffdd591c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.861 251996 DEBUG oslo_concurrency.lockutils [req-56d3e2d4-a6ac-469b-b0b6-be4d0b36d884 req-1f3c0584-1feb-4596-9e40-e8dffdd591c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.861 251996 DEBUG oslo_concurrency.lockutils [req-56d3e2d4-a6ac-469b-b0b6-be4d0b36d884 req-1f3c0584-1feb-4596-9e40-e8dffdd591c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.862 251996 DEBUG oslo_concurrency.lockutils [req-56d3e2d4-a6ac-469b-b0b6-be4d0b36d884 req-1f3c0584-1feb-4596-9e40-e8dffdd591c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.862 251996 DEBUG nova.compute.manager [req-56d3e2d4-a6ac-469b-b0b6-be4d0b36d884 req-1f3c0584-1feb-4596-9e40-e8dffdd591c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] No waiting events found dispatching network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.862 251996 WARNING nova.compute.manager [req-56d3e2d4-a6ac-469b-b0b6-be4d0b36d884 req-1f3c0584-1feb-4596-9e40-e8dffdd591c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received unexpected event network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 for instance with vm_state active and task_state None.
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.863 251996 DEBUG nova.compute.manager [req-56d3e2d4-a6ac-469b-b0b6-be4d0b36d884 req-1f3c0584-1feb-4596-9e40-e8dffdd591c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.863 251996 DEBUG oslo_concurrency.lockutils [req-56d3e2d4-a6ac-469b-b0b6-be4d0b36d884 req-1f3c0584-1feb-4596-9e40-e8dffdd591c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.863 251996 DEBUG oslo_concurrency.lockutils [req-56d3e2d4-a6ac-469b-b0b6-be4d0b36d884 req-1f3c0584-1feb-4596-9e40-e8dffdd591c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.864 251996 DEBUG oslo_concurrency.lockutils [req-56d3e2d4-a6ac-469b-b0b6-be4d0b36d884 req-1f3c0584-1feb-4596-9e40-e8dffdd591c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.864 251996 DEBUG nova.compute.manager [req-56d3e2d4-a6ac-469b-b0b6-be4d0b36d884 req-1f3c0584-1feb-4596-9e40-e8dffdd591c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] No waiting events found dispatching network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:44 compute-0 nova_compute[251992]: 2025-12-06 07:19:44.864 251996 WARNING nova.compute.manager [req-56d3e2d4-a6ac-469b-b0b6-be4d0b36d884 req-1f3c0584-1feb-4596-9e40-e8dffdd591c5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received unexpected event network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 for instance with vm_state active and task_state None.
Dec 06 07:19:44 compute-0 ovn_controller[147168]: 2025-12-06T07:19:44Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:2a:84 10.100.0.14
Dec 06 07:19:44 compute-0 ovn_controller[147168]: 2025-12-06T07:19:44Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:2a:84 10.100.0.14
Dec 06 07:19:45 compute-0 podman[304410]: 2025-12-06 07:19:45.102629741 +0000 UTC m=+4.681868713 container cleanup 595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:19:45 compute-0 systemd[1]: libpod-conmon-595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639.scope: Deactivated successfully.
Dec 06 07:19:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 394 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.262 251996 DEBUG nova.virt.libvirt.guest [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:2a:84"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3036e2e9-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.267 251996 DEBUG nova.virt.libvirt.guest [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:46:2a:84"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3036e2e9-ad"/></interface>not found in domain: <domain type='kvm' id='35'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <name>instance-0000004f</name>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <uuid>288aae5a-11e0-4906-903d-acea3cebcf63</uuid>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:name>tempest-tempest.common.compute-instance-319358649</nova:name>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:creationTime>2025-12-06 07:19:43</nova:creationTime>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:flavor name="m1.nano">
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:memory>128</nova:memory>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:disk>1</nova:disk>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:swap>0</nova:swap>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </nova:flavor>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:owner>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </nova:owner>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:ports>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:port uuid="c51cc596-c273-4444-b624-c7f87bb78323">
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:port uuid="3036e2e9-ad2c-4f44-96e5-c7dcdea69629">
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </nova:ports>
Dec 06 07:19:46 compute-0 nova_compute[251992]: </nova:instance>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <memory unit='KiB'>131072</memory>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <vcpu placement='static'>1</vcpu>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <resource>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <partition>/machine</partition>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </resource>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <sysinfo type='smbios'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <system>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <entry name='manufacturer'>RDO</entry>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <entry name='serial'>288aae5a-11e0-4906-903d-acea3cebcf63</entry>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <entry name='uuid'>288aae5a-11e0-4906-903d-acea3cebcf63</entry>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <entry name='family'>Virtual Machine</entry>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </system>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <os>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <boot dev='hd'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <smbios mode='sysinfo'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </os>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <features>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <vmcoreinfo state='on'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </features>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <model fallback='forbid'>Nehalem</model>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <feature policy='require' name='x2apic'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <feature policy='require' name='hypervisor'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <feature policy='require' name='vme'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <clock offset='utc'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <timer name='hpet' present='no'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <on_poweroff>destroy</on_poweroff>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <on_reboot>restart</on_reboot>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <on_crash>destroy</on_crash>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <disk type='network' device='disk'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <auth username='openstack'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <source protocol='rbd' name='vms/288aae5a-11e0-4906-903d-acea3cebcf63_disk' index='2'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       </source>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target dev='vda' bus='virtio'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='virtio-disk0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <disk type='network' device='cdrom'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <auth username='openstack'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <source protocol='rbd' name='vms/288aae5a-11e0-4906-903d-acea3cebcf63_disk.config' index='1'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       </source>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target dev='sda' bus='sata'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <readonly/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='sata0-0-0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pcie.0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='1' port='0x10'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='2' port='0x11'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='3' port='0x12'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.3'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='4' port='0x13'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.4'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='5' port='0x14'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.5'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='6' port='0x15'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.6'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='7' port='0x16'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.7'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='8' port='0x17'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.8'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='9' port='0x18'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.9'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='10' port='0x19'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.10'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='11' port='0x1a'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.11'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='12' port='0x1b'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.12'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='13' port='0x1c'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.13'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='14' port='0x1d'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.14'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='15' port='0x1e'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.15'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='16' port='0x1f'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.16'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='17' port='0x20'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.17'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='18' port='0x21'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.18'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='19' port='0x22'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.19'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='20' port='0x23'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.20'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='21' port='0x24'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.21'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='22' port='0x25'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.22'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='23' port='0x26'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.23'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='24' port='0x27'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.24'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='25' port='0x28'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.25'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-pci-bridge'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.26'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='usb'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='sata' index='0'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='ide'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <interface type='ethernet'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <mac address='fa:16:3e:a3:a9:46'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target dev='tapc51cc596-c2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model type='virtio'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <mtu size='1442'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='net0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <interface type='ethernet'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <mac address='fa:16:3e:46:2a:84'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target dev='tap3036e2e9-ad'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model type='virtio'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <mtu size='1442'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='net1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <serial type='pty'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <source path='/dev/pts/0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <log file='/var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63/console.log' append='off'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target type='isa-serial' port='0'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <model name='isa-serial'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       </target>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='serial0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <source path='/dev/pts/0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <log file='/var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63/console.log' append='off'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target type='serial' port='0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='serial0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </console>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <input type='tablet' bus='usb'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='input0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='usb' bus='0' port='1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </input>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <input type='mouse' bus='ps2'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='input1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </input>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <input type='keyboard' bus='ps2'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='input2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </input>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <listen type='address' address='::0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </graphics>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <audio id='1' type='none'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <video>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='video0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </video>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <watchdog model='itco' action='reset'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='watchdog0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </watchdog>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <memballoon model='virtio'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <stats period='10'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='balloon0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <rng model='virtio'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <backend model='random'>/dev/urandom</backend>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='rng0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <label>system_u:system_r:svirt_t:s0:c291,c472</label>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c291,c472</imagelabel>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </seclabel>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <label>+107:+107</label>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <imagelabel>+107:+107</imagelabel>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </seclabel>
Dec 06 07:19:46 compute-0 nova_compute[251992]: </domain>
Dec 06 07:19:46 compute-0 nova_compute[251992]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.268 251996 INFO nova.virt.libvirt.driver [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully detached device tap3036e2e9-ad from instance 288aae5a-11e0-4906-903d-acea3cebcf63 from the persistent domain config.
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.269 251996 DEBUG nova.virt.libvirt.driver [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] (1/8): Attempting to detach device tap3036e2e9-ad with device alias net1 from instance 288aae5a-11e0-4906-903d-acea3cebcf63 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.269 251996 DEBUG nova.virt.libvirt.guest [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] detach device xml: <interface type="ethernet">
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <mac address="fa:16:3e:46:2a:84"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <model type="virtio"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <mtu size="1442"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <target dev="tap3036e2e9-ad"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]: </interface>
Dec 06 07:19:46 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:19:46 compute-0 kernel: tap3036e2e9-ad (unregistering): left promiscuous mode
Dec 06 07:19:46 compute-0 NetworkManager[48965]: <info>  [1765005586.3746] device (tap3036e2e9-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.381 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:46 compute-0 ovn_controller[147168]: 2025-12-06T07:19:46Z|00267|binding|INFO|Releasing lport 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 from this chassis (sb_readonly=0)
Dec 06 07:19:46 compute-0 ovn_controller[147168]: 2025-12-06T07:19:46Z|00268|binding|INFO|Setting lport 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 down in Southbound
Dec 06 07:19:46 compute-0 ovn_controller[147168]: 2025-12-06T07:19:46Z|00269|binding|INFO|Removing iface tap3036e2e9-ad ovn-installed in OVS
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.383 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.384 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Received event <DeviceRemovedEvent: 1765005586.3833413, 288aae5a-11e0-4906-903d-acea3cebcf63 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.385 251996 DEBUG nova.virt.libvirt.driver [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Start waiting for the detach event from libvirt for device tap3036e2e9-ad with device alias net1 for instance 288aae5a-11e0-4906-903d-acea3cebcf63 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.386 251996 DEBUG nova.virt.libvirt.guest [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:46:2a:84"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3036e2e9-ad"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.389 251996 DEBUG nova.virt.libvirt.guest [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:46:2a:84"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap3036e2e9-ad"/></interface>not found in domain: <domain type='kvm' id='35'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <name>instance-0000004f</name>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <uuid>288aae5a-11e0-4906-903d-acea3cebcf63</uuid>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:name>tempest-tempest.common.compute-instance-319358649</nova:name>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:creationTime>2025-12-06 07:19:43</nova:creationTime>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:flavor name="m1.nano">
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:memory>128</nova:memory>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:disk>1</nova:disk>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:swap>0</nova:swap>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </nova:flavor>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:owner>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </nova:owner>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:ports>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:port uuid="c51cc596-c273-4444-b624-c7f87bb78323">
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:port uuid="3036e2e9-ad2c-4f44-96e5-c7dcdea69629">
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </nova:ports>
Dec 06 07:19:46 compute-0 nova_compute[251992]: </nova:instance>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <memory unit='KiB'>131072</memory>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <vcpu placement='static'>1</vcpu>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <resource>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <partition>/machine</partition>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </resource>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <sysinfo type='smbios'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <system>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <entry name='manufacturer'>RDO</entry>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <entry name='serial'>288aae5a-11e0-4906-903d-acea3cebcf63</entry>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <entry name='uuid'>288aae5a-11e0-4906-903d-acea3cebcf63</entry>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <entry name='family'>Virtual Machine</entry>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </system>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <os>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <boot dev='hd'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <smbios mode='sysinfo'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </os>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <features>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <vmcoreinfo state='on'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </features>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <model fallback='forbid'>Nehalem</model>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <feature policy='require' name='x2apic'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <feature policy='require' name='hypervisor'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <feature policy='require' name='vme'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <clock offset='utc'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <timer name='hpet' present='no'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <on_poweroff>destroy</on_poweroff>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <on_reboot>restart</on_reboot>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <on_crash>destroy</on_crash>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <disk type='network' device='disk'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <auth username='openstack'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <source protocol='rbd' name='vms/288aae5a-11e0-4906-903d-acea3cebcf63_disk' index='2'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       </source>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target dev='vda' bus='virtio'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='virtio-disk0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <disk type='network' device='cdrom'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <auth username='openstack'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <source protocol='rbd' name='vms/288aae5a-11e0-4906-903d-acea3cebcf63_disk.config' index='1'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <host name='192.168.122.100' port='6789'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <host name='192.168.122.102' port='6789'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <host name='192.168.122.101' port='6789'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       </source>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target dev='sda' bus='sata'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <readonly/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='sata0-0-0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pcie.0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='1' port='0x10'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='2' port='0x11'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='3' port='0x12'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.3'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='4' port='0x13'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.4'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='5' port='0x14'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.5'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='6' port='0x15'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.6'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='7' port='0x16'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.7'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='8' port='0x17'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.8'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='9' port='0x18'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.9'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='10' port='0x19'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.10'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='11' port='0x1a'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.11'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='12' port='0x1b'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.12'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='13' port='0x1c'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.13'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='14' port='0x1d'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.14'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='15' port='0x1e'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.15'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='16' port='0x1f'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.16'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='17' port='0x20'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.17'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='18' port='0x21'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.18'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='19' port='0x22'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.19'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='20' port='0x23'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.20'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='21' port='0x24'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.21'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='22' port='0x25'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.22'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='23' port='0x26'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.23'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='24' port='0x27'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.24'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target chassis='25' port='0x28'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.25'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model name='pcie-pci-bridge'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='pci.26'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='usb'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <controller type='sata' index='0'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='ide'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <interface type='ethernet'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <mac address='fa:16:3e:a3:a9:46'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target dev='tapc51cc596-c2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model type='virtio'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <mtu size='1442'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='net0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <serial type='pty'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <source path='/dev/pts/0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <log file='/var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63/console.log' append='off'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target type='isa-serial' port='0'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:         <model name='isa-serial'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       </target>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='serial0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <source path='/dev/pts/0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <log file='/var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63/console.log' append='off'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <target type='serial' port='0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='serial0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </console>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <input type='tablet' bus='usb'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='input0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='usb' bus='0' port='1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </input>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <input type='mouse' bus='ps2'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='input1'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </input>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <input type='keyboard' bus='ps2'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='input2'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </input>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <listen type='address' address='::0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </graphics>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <audio id='1' type='none'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <video>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='video0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </video>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <watchdog model='itco' action='reset'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='watchdog0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </watchdog>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <memballoon model='virtio'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <stats period='10'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='balloon0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <rng model='virtio'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <backend model='random'>/dev/urandom</backend>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <alias name='rng0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <label>system_u:system_r:svirt_t:s0:c291,c472</label>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c291,c472</imagelabel>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </seclabel>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <label>+107:+107</label>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <imagelabel>+107:+107</imagelabel>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </seclabel>
Dec 06 07:19:46 compute-0 nova_compute[251992]: </domain>
Dec 06 07:19:46 compute-0 nova_compute[251992]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.389 251996 INFO nova.virt.libvirt.driver [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully detached device tap3036e2e9-ad from instance 288aae5a-11e0-4906-903d-acea3cebcf63 from the live domain config.
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.390 251996 DEBUG nova.virt.libvirt.vif [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-319358649',display_name='tempest-tempest.common.compute-instance-319358649',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-319358649',id=79,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g9xh7f4l',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=288aae5a-11e0-4906-903d-acea3cebcf63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.391 251996 DEBUG nova.network.os_vif_util [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "address": "fa:16:3e:46:2a:84", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3036e2e9-ad", "ovs_interfaceid": "3036e2e9-ad2c-4f44-96e5-c7dcdea69629", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.392 251996 DEBUG nova.network.os_vif_util [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.392 251996 DEBUG os_vif [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.395 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.395 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3036e2e9-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.397 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.401 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.403 251996 INFO os_vif [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:2a:84,bridge_name='br-int',has_traffic_filtering=True,id=3036e2e9-ad2c-4f44-96e5-c7dcdea69629,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3036e2e9-ad')
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.404 251996 DEBUG nova.virt.libvirt.guest [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:name>tempest-tempest.common.compute-instance-319358649</nova:name>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:creationTime>2025-12-06 07:19:46</nova:creationTime>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:flavor name="m1.nano">
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:memory>128</nova:memory>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:disk>1</nova:disk>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:swap>0</nova:swap>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:vcpus>1</nova:vcpus>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </nova:flavor>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:owner>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:user uuid="06f5b46553b24b39a1493d96ec4e503e">tempest-AttachInterfacesTestJSON-2041841766-project-member</nova:user>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:project uuid="35df5125c2cf4d29a6b975951af14910">tempest-AttachInterfacesTestJSON-2041841766</nova:project>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </nova:owner>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   <nova:ports>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     <nova:port uuid="c51cc596-c273-4444-b624-c7f87bb78323">
Dec 06 07:19:46 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:19:46 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 07:19:46 compute-0 nova_compute[251992]:   </nova:ports>
Dec 06 07:19:46 compute-0 nova_compute[251992]: </nova:instance>
Dec 06 07:19:46 compute-0 nova_compute[251992]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 07:19:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:46.443 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:2a:84 10.100.0.14'], port_security=['fa:16:3e:46:2a:84 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1354203550', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '288aae5a-11e0-4906-903d-acea3cebcf63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1354203550', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e3084bf1-bc38-47e5-9deb-316970f08514', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=3036e2e9-ad2c-4f44-96e5-c7dcdea69629) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:19:46 compute-0 podman[304498]: 2025-12-06 07:19:46.464778937 +0000 UTC m=+2.674751274 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:19:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:46.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:19:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:46.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.968 251996 DEBUG nova.compute.manager [req-c48b345f-2c72-4c67-9528-7343476814d5 req-aa9722ea-48fe-451e-ae44-1628575a1b25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-vif-unplugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.969 251996 DEBUG oslo_concurrency.lockutils [req-c48b345f-2c72-4c67-9528-7343476814d5 req-aa9722ea-48fe-451e-ae44-1628575a1b25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.970 251996 DEBUG oslo_concurrency.lockutils [req-c48b345f-2c72-4c67-9528-7343476814d5 req-aa9722ea-48fe-451e-ae44-1628575a1b25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.970 251996 DEBUG oslo_concurrency.lockutils [req-c48b345f-2c72-4c67-9528-7343476814d5 req-aa9722ea-48fe-451e-ae44-1628575a1b25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.971 251996 DEBUG nova.compute.manager [req-c48b345f-2c72-4c67-9528-7343476814d5 req-aa9722ea-48fe-451e-ae44-1628575a1b25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] No waiting events found dispatching network-vif-unplugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:46 compute-0 nova_compute[251992]: 2025-12-06 07:19:46.972 251996 WARNING nova.compute.manager [req-c48b345f-2c72-4c67-9528-7343476814d5 req-aa9722ea-48fe-451e-ae44-1628575a1b25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received unexpected event network-vif-unplugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 for instance with vm_state active and task_state None.
Dec 06 07:19:47 compute-0 ceph-mon[74339]: pgmap v1853: 305 pgs: 305 active+clean; 394 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Dec 06 07:19:47 compute-0 podman[304537]: 2025-12-06 07:19:47.386762122 +0000 UTC m=+2.250927957 container remove 595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:19:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:47.399 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b55b2fef-410c-480c-853f-f2e1c2bb1d99]: (4, ('Sat Dec  6 07:19:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8 (595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639)\n595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639\nSat Dec  6 07:19:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8 (595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639)\n595ff8eb04c6473d6aa6d986ae8409fe3c19678004fc31fb51e9e7a4ed544639\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:47.402 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6572a8cb-85e1-425d-9684-55a6992a77c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:47.404 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3eab116e-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:47 compute-0 kernel: tap3eab116e-20: left promiscuous mode
Dec 06 07:19:47 compute-0 nova_compute[251992]: 2025-12-06 07:19:47.406 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:47 compute-0 nova_compute[251992]: 2025-12-06 07:19:47.419 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:47.422 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[43f58058-7f87-4fcf-af8e-832eccbbc1fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 402 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.018 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f3063485-7634-4f8c-bad6-5a6cb12cc1f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.019 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8e73279f-0412-4464-b1e0-21425b5cfb9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.035 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[acfaf220-bc65-4a82-b50d-ee692ebbddd4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584716, 'reachable_time': 31536, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304572, 'error': None, 'target': 'ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d3eab116e\x2d2d42\x2d4979\x2d828f\x2d0c5410c830e8.mount: Deactivated successfully.
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.039 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3eab116e-2d42-4979-828f-0c5410c830e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.040 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0ee019-54f9-488d-b58a-8c19d48ba7e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.041 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 unbound from our chassis
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.043 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.058 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f39c78-a0ec-49d4-9ac8-b4296b39c96c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.086 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ac915bd1-a04f-4b8b-9e1b-62c06f6eb1d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.089 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[47dcd7b3-45f3-48d2-90a8-96b0af1fda52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.115 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[740b486a-e0b4-49f3-9bf6-5bd79960dfc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.130 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ccffc84b-5426-4695-aa6c-e84f421bf2ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581856, 'reachable_time': 29313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304578, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.148 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa09d3a-46fd-44b1-98e4-299fb77b59d2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581866, 'tstamp': 581866}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304579, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581868, 'tstamp': 581868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304579, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.150 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:48 compute-0 nova_compute[251992]: 2025-12-06 07:19:48.151 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.152 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.153 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.153 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.153 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.154 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 unbound from our chassis
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.156 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 61a21643-77ba-4a09-8184-10dc4bd52b26
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.169 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[23a26470-7851-40b9-bbcb-3d0aa0808fef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.200 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a057e458-9ce7-4ad2-82df-849229e893a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.204 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d9174213-d494-4e96-bfab-89b9614167ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.236 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[eb79f39e-6317-483d-b691-b138873ba62f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.252 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2f222bf4-9751-4d83-84ab-b492c262019f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap61a21643-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:67:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581856, 'reachable_time': 29313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304585, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.268 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8b27371f-2eaa-46cd-86b5-00a210dd5279]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581866, 'tstamp': 581866}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304586, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap61a21643-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 581868, 'tstamp': 581868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304586, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.270 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:48 compute-0 nova_compute[251992]: 2025-12-06 07:19:48.271 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.272 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61a21643-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.273 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.273 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap61a21643-70, col_values=(('external_ids', {'iface-id': '8e8469cb-4434-4b4c-9dcf-a6a8244c2597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:19:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:19:48.274 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:19:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:48 compute-0 ceph-mon[74339]: pgmap v1854: 305 pgs: 305 active+clean; 394 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Dec 06 07:19:48 compute-0 ceph-mon[74339]: pgmap v1855: 305 pgs: 305 active+clean; 402 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Dec 06 07:19:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:48.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:48 compute-0 nova_compute[251992]: 2025-12-06 07:19:48.771 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:48.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:49 compute-0 nova_compute[251992]: 2025-12-06 07:19:49.062 251996 DEBUG nova.compute.manager [req-9d857609-1df9-4d85-bf84-23fe5e08903d req-9ce9378b-ebee-4f50-965e-d1e8d5c7f5d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:49 compute-0 nova_compute[251992]: 2025-12-06 07:19:49.062 251996 DEBUG oslo_concurrency.lockutils [req-9d857609-1df9-4d85-bf84-23fe5e08903d req-9ce9378b-ebee-4f50-965e-d1e8d5c7f5d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:49 compute-0 nova_compute[251992]: 2025-12-06 07:19:49.063 251996 DEBUG oslo_concurrency.lockutils [req-9d857609-1df9-4d85-bf84-23fe5e08903d req-9ce9378b-ebee-4f50-965e-d1e8d5c7f5d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:49 compute-0 nova_compute[251992]: 2025-12-06 07:19:49.063 251996 DEBUG oslo_concurrency.lockutils [req-9d857609-1df9-4d85-bf84-23fe5e08903d req-9ce9378b-ebee-4f50-965e-d1e8d5c7f5d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:49 compute-0 nova_compute[251992]: 2025-12-06 07:19:49.063 251996 DEBUG nova.compute.manager [req-9d857609-1df9-4d85-bf84-23fe5e08903d req-9ce9378b-ebee-4f50-965e-d1e8d5c7f5d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] No waiting events found dispatching network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:19:49 compute-0 nova_compute[251992]: 2025-12-06 07:19:49.063 251996 WARNING nova.compute.manager [req-9d857609-1df9-4d85-bf84-23fe5e08903d req-9ce9378b-ebee-4f50-965e-d1e8d5c7f5d5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received unexpected event network-vif-plugged-3036e2e9-ad2c-4f44-96e5-c7dcdea69629 for instance with vm_state active and task_state None.
Dec 06 07:19:49 compute-0 nova_compute[251992]: 2025-12-06 07:19:49.510 251996 DEBUG oslo_concurrency.lockutils [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:49 compute-0 nova_compute[251992]: 2025-12-06 07:19:49.511 251996 DEBUG oslo_concurrency.lockutils [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquired lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:49 compute-0 nova_compute[251992]: 2025-12-06 07:19:49.511 251996 DEBUG nova.network.neutron [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:19:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:19:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1104714191' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:19:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:19:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1104714191' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:19:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1104714191' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:19:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1104714191' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:19:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 402 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Dec 06 07:19:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:19:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:50.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:19:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:50.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Dec 06 07:19:51 compute-0 nova_compute[251992]: 2025-12-06 07:19:51.187 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:51 compute-0 nova_compute[251992]: 2025-12-06 07:19:51.399 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:51 compute-0 podman[304590]: 2025-12-06 07:19:51.410253259 +0000 UTC m=+0.061756863 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 06 07:19:51 compute-0 podman[304591]: 2025-12-06 07:19:51.439939486 +0000 UTC m=+0.088186939 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:19:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Dec 06 07:19:51 compute-0 ceph-mon[74339]: pgmap v1856: 305 pgs: 305 active+clean; 402 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Dec 06 07:19:51 compute-0 nova_compute[251992]: 2025-12-06 07:19:51.574 251996 INFO nova.network.neutron [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Port 3036e2e9-ad2c-4f44-96e5-c7dcdea69629 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 06 07:19:51 compute-0 nova_compute[251992]: 2025-12-06 07:19:51.574 251996 DEBUG nova.network.neutron [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updating instance_info_cache with network_info: [{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:51 compute-0 nova_compute[251992]: 2025-12-06 07:19:51.608 251996 DEBUG oslo_concurrency.lockutils [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Releasing lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:51 compute-0 nova_compute[251992]: 2025-12-06 07:19:51.633 251996 DEBUG oslo_concurrency.lockutils [None req-2837a502-5916-4123-bbfb-a5ca1e30f84f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "interface-288aae5a-11e0-4906-903d-acea3cebcf63-3036e2e9-ad2c-4f44-96e5-c7dcdea69629" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 6.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:51 compute-0 nova_compute[251992]: 2025-12-06 07:19:51.726 251996 DEBUG nova.compute.manager [req-7af0c528-671b-4920-be43-196dbf8bb92e req-bb95c5be-2546-4af7-bbba-209da7fce1e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-changed-c51cc596-c273-4444-b624-c7f87bb78323 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:51 compute-0 nova_compute[251992]: 2025-12-06 07:19:51.726 251996 DEBUG nova.compute.manager [req-7af0c528-671b-4920-be43-196dbf8bb92e req-bb95c5be-2546-4af7-bbba-209da7fce1e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Refreshing instance network info cache due to event network-changed-c51cc596-c273-4444-b624-c7f87bb78323. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:19:51 compute-0 nova_compute[251992]: 2025-12-06 07:19:51.726 251996 DEBUG oslo_concurrency.lockutils [req-7af0c528-671b-4920-be43-196dbf8bb92e req-bb95c5be-2546-4af7-bbba-209da7fce1e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:19:51 compute-0 nova_compute[251992]: 2025-12-06 07:19:51.727 251996 DEBUG oslo_concurrency.lockutils [req-7af0c528-671b-4920-be43-196dbf8bb92e req-bb95c5be-2546-4af7-bbba-209da7fce1e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:19:51 compute-0 nova_compute[251992]: 2025-12-06 07:19:51.727 251996 DEBUG nova.network.neutron [req-7af0c528-671b-4920-be43-196dbf8bb92e req-bb95c5be-2546-4af7-bbba-209da7fce1e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Refreshing network info cache for port c51cc596-c273-4444-b624-c7f87bb78323 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:19:51 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Dec 06 07:19:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 279 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 410 KiB/s rd, 250 KiB/s wr, 94 op/s
Dec 06 07:19:52 compute-0 nova_compute[251992]: 2025-12-06 07:19:52.433 251996 INFO nova.virt.libvirt.driver [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Deleting instance files /var/lib/nova/instances/686daa50-6941-4d67-8ec0-3d6fc9187a48_del
Dec 06 07:19:52 compute-0 nova_compute[251992]: 2025-12-06 07:19:52.434 251996 INFO nova.virt.libvirt.driver [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Deletion of /var/lib/nova/instances/686daa50-6941-4d67-8ec0-3d6fc9187a48_del complete
Dec 06 07:19:52 compute-0 nova_compute[251992]: 2025-12-06 07:19:52.479 251996 INFO nova.compute.manager [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Took 15.14 seconds to destroy the instance on the hypervisor.
Dec 06 07:19:52 compute-0 nova_compute[251992]: 2025-12-06 07:19:52.480 251996 DEBUG oslo.service.loopingcall [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:19:52 compute-0 nova_compute[251992]: 2025-12-06 07:19:52.481 251996 DEBUG nova.compute.manager [-] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:19:52 compute-0 nova_compute[251992]: 2025-12-06 07:19:52.481 251996 DEBUG nova.network.neutron [-] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:19:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:52.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:19:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:52.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:19:53 compute-0 ceph-mon[74339]: osdmap e256: 3 total, 3 up, 3 in
Dec 06 07:19:53 compute-0 ceph-mon[74339]: pgmap v1858: 305 pgs: 305 active+clean; 279 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 410 KiB/s rd, 250 KiB/s wr, 94 op/s
Dec 06 07:19:53 compute-0 sudo[304631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:53 compute-0 sudo[304631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:53 compute-0 sudo[304631]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:53 compute-0 sudo[304656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:19:53 compute-0 sudo[304656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:53 compute-0 sudo[304656]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:53 compute-0 sudo[304681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:53 compute-0 sudo[304681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:53 compute-0 sudo[304681]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:53 compute-0 nova_compute[251992]: 2025-12-06 07:19:53.473 251996 DEBUG nova.network.neutron [req-7af0c528-671b-4920-be43-196dbf8bb92e req-bb95c5be-2546-4af7-bbba-209da7fce1e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updated VIF entry in instance network info cache for port c51cc596-c273-4444-b624-c7f87bb78323. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:19:53 compute-0 nova_compute[251992]: 2025-12-06 07:19:53.473 251996 DEBUG nova.network.neutron [req-7af0c528-671b-4920-be43-196dbf8bb92e req-bb95c5be-2546-4af7-bbba-209da7fce1e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updating instance_info_cache with network_info: [{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:53 compute-0 sudo[304706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:19:53 compute-0 sudo[304706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:53 compute-0 nova_compute[251992]: 2025-12-06 07:19:53.506 251996 DEBUG oslo_concurrency.lockutils [req-7af0c528-671b-4920-be43-196dbf8bb92e req-bb95c5be-2546-4af7-bbba-209da7fce1e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:19:53 compute-0 nova_compute[251992]: 2025-12-06 07:19:53.507 251996 DEBUG nova.network.neutron [-] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:19:53 compute-0 nova_compute[251992]: 2025-12-06 07:19:53.530 251996 INFO nova.compute.manager [-] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Took 1.05 seconds to deallocate network for instance.
Dec 06 07:19:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:53 compute-0 nova_compute[251992]: 2025-12-06 07:19:53.590 251996 DEBUG oslo_concurrency.lockutils [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:53 compute-0 nova_compute[251992]: 2025-12-06 07:19:53.591 251996 DEBUG oslo_concurrency.lockutils [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:53 compute-0 nova_compute[251992]: 2025-12-06 07:19:53.669 251996 DEBUG oslo_concurrency.processutils [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:19:53 compute-0 nova_compute[251992]: 2025-12-06 07:19:53.773 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 279 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 410 KiB/s rd, 250 KiB/s wr, 94 op/s
Dec 06 07:19:53 compute-0 nova_compute[251992]: 2025-12-06 07:19:53.833 251996 DEBUG nova.compute.manager [req-6ca0215b-efef-49a0-ae6a-b1f2bcf46694 req-f43728b1-fabd-4253-96cd-d2675957a66b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Received event network-vif-deleted-812ade77-4fa8-41e9-986b-5db4d1d03dd4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:19:53 compute-0 sudo[304706]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:19:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290631668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:54 compute-0 nova_compute[251992]: 2025-12-06 07:19:54.237 251996 DEBUG oslo_concurrency.processutils [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:54 compute-0 nova_compute[251992]: 2025-12-06 07:19:54.245 251996 DEBUG nova.compute.provider_tree [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:19:54 compute-0 nova_compute[251992]: 2025-12-06 07:19:54.266 251996 DEBUG nova.scheduler.client.report [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:19:54 compute-0 nova_compute[251992]: 2025-12-06 07:19:54.289 251996 DEBUG oslo_concurrency.lockutils [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:54 compute-0 nova_compute[251992]: 2025-12-06 07:19:54.316 251996 INFO nova.scheduler.client.report [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Deleted allocations for instance 686daa50-6941-4d67-8ec0-3d6fc9187a48
Dec 06 07:19:54 compute-0 nova_compute[251992]: 2025-12-06 07:19:54.397 251996 DEBUG oslo_concurrency.lockutils [None req-1defe885-7e9e-4b5d-bae8-2dfe3956dae5 3af050664af642888620680c329441c5 08ea3b6ed6c24b0f87e477586001ac99 - - default default] Lock "686daa50-6941-4d67-8ec0-3d6fc9187a48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 17.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:54 compute-0 nova_compute[251992]: 2025-12-06 07:19:54.483 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:19:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:54.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:19:54 compute-0 nova_compute[251992]: 2025-12-06 07:19:54.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:19:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:19:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:19:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:54.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/46416263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:54 compute-0 ceph-mon[74339]: pgmap v1859: 305 pgs: 305 active+clean; 279 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 410 KiB/s rd, 250 KiB/s wr, 94 op/s
Dec 06 07:19:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4290631668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:19:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:19:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:19:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:19:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:19:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:19:55 compute-0 nova_compute[251992]: 2025-12-06 07:19:55.575 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005580.5741155, 686daa50-6941-4d67-8ec0-3d6fc9187a48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:19:55 compute-0 nova_compute[251992]: 2025-12-06 07:19:55.576 251996 INFO nova.compute.manager [-] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] VM Stopped (Lifecycle Event)
Dec 06 07:19:55 compute-0 nova_compute[251992]: 2025-12-06 07:19:55.606 251996 DEBUG nova.compute.manager [None req-c36037b8-1163-4cad-8037-abe59a6f2e22 - - - - - -] [instance: 686daa50-6941-4d67-8ec0-3d6fc9187a48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:19:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:19:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ce2e59df-b773-448f-b17b-7eaad894293d does not exist
Dec 06 07:19:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 78185d86-0a6c-4cd0-b89f-35e449acb28a does not exist
Dec 06 07:19:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ecef389b-67fb-40b0-aa1e-4227faa00367 does not exist
Dec 06 07:19:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:19:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:19:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:19:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:19:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:19:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:19:55 compute-0 sudo[304784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:55 compute-0 sudo[304784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:55 compute-0 sudo[304784]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:55 compute-0 sudo[304809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:19:55 compute-0 sudo[304809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 279 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 116 KiB/s rd, 95 KiB/s wr, 78 op/s
Dec 06 07:19:55 compute-0 sudo[304809]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:55 compute-0 sudo[304834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:55 compute-0 sudo[304834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:55 compute-0 sudo[304834]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:55 compute-0 sudo[304859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:19:55 compute-0 sudo[304859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:19:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2403216222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:19:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:19:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:19:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:19:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:19:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:19:56 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:19:56 compute-0 podman[304928]: 2025-12-06 07:19:56.275310008 +0000 UTC m=+0.065766204 container create cd61080f354c3c78551cee767cc9d31503bceefcf9640eb1fba00c0e9711b5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 07:19:56 compute-0 systemd[1]: Started libpod-conmon-cd61080f354c3c78551cee767cc9d31503bceefcf9640eb1fba00c0e9711b5d0.scope.
Dec 06 07:19:56 compute-0 podman[304928]: 2025-12-06 07:19:56.228908855 +0000 UTC m=+0.019365081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:19:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:19:56 compute-0 podman[304928]: 2025-12-06 07:19:56.36217008 +0000 UTC m=+0.152626306 container init cd61080f354c3c78551cee767cc9d31503bceefcf9640eb1fba00c0e9711b5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bartik, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:19:56 compute-0 podman[304928]: 2025-12-06 07:19:56.371375146 +0000 UTC m=+0.161831352 container start cd61080f354c3c78551cee767cc9d31503bceefcf9640eb1fba00c0e9711b5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 07:19:56 compute-0 podman[304928]: 2025-12-06 07:19:56.374923786 +0000 UTC m=+0.165380012 container attach cd61080f354c3c78551cee767cc9d31503bceefcf9640eb1fba00c0e9711b5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:19:56 compute-0 ecstatic_bartik[304945]: 167 167
Dec 06 07:19:56 compute-0 systemd[1]: libpod-cd61080f354c3c78551cee767cc9d31503bceefcf9640eb1fba00c0e9711b5d0.scope: Deactivated successfully.
Dec 06 07:19:56 compute-0 podman[304928]: 2025-12-06 07:19:56.378898566 +0000 UTC m=+0.169354782 container died cd61080f354c3c78551cee767cc9d31503bceefcf9640eb1fba00c0e9711b5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:19:56 compute-0 nova_compute[251992]: 2025-12-06 07:19:56.401 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b5cbc4f6cc201c821e223f99fece9b6709a7d4568e266285634c9a7cdeff732-merged.mount: Deactivated successfully.
Dec 06 07:19:56 compute-0 podman[304928]: 2025-12-06 07:19:56.435422632 +0000 UTC m=+0.225878838 container remove cd61080f354c3c78551cee767cc9d31503bceefcf9640eb1fba00c0e9711b5d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 07:19:56 compute-0 systemd[1]: libpod-conmon-cd61080f354c3c78551cee767cc9d31503bceefcf9640eb1fba00c0e9711b5d0.scope: Deactivated successfully.
Dec 06 07:19:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:56.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:56 compute-0 podman[304970]: 2025-12-06 07:19:56.604874147 +0000 UTC m=+0.043637848 container create 5fb58b43af36ca72d9470e1f91a40d36f8edd93d6b838f567611f6c61f6e4bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:19:56 compute-0 systemd[1]: Started libpod-conmon-5fb58b43af36ca72d9470e1f91a40d36f8edd93d6b838f567611f6c61f6e4bf7.scope.
Dec 06 07:19:56 compute-0 nova_compute[251992]: 2025-12-06 07:19:56.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:19:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6978e82604168f99664aa5a9937f7efe6aed545a1e135f014d81fe3aee2863a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6978e82604168f99664aa5a9937f7efe6aed545a1e135f014d81fe3aee2863a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6978e82604168f99664aa5a9937f7efe6aed545a1e135f014d81fe3aee2863a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6978e82604168f99664aa5a9937f7efe6aed545a1e135f014d81fe3aee2863a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6978e82604168f99664aa5a9937f7efe6aed545a1e135f014d81fe3aee2863a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:56 compute-0 podman[304970]: 2025-12-06 07:19:56.585086475 +0000 UTC m=+0.023850196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:19:56 compute-0 podman[304970]: 2025-12-06 07:19:56.684352012 +0000 UTC m=+0.123115713 container init 5fb58b43af36ca72d9470e1f91a40d36f8edd93d6b838f567611f6c61f6e4bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Dec 06 07:19:56 compute-0 nova_compute[251992]: 2025-12-06 07:19:56.693 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:56 compute-0 nova_compute[251992]: 2025-12-06 07:19:56.694 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:56 compute-0 nova_compute[251992]: 2025-12-06 07:19:56.694 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:56 compute-0 nova_compute[251992]: 2025-12-06 07:19:56.695 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:19:56 compute-0 nova_compute[251992]: 2025-12-06 07:19:56.695 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:56 compute-0 podman[304970]: 2025-12-06 07:19:56.696522962 +0000 UTC m=+0.135286663 container start 5fb58b43af36ca72d9470e1f91a40d36f8edd93d6b838f567611f6c61f6e4bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:19:56 compute-0 podman[304970]: 2025-12-06 07:19:56.700624846 +0000 UTC m=+0.139388547 container attach 5fb58b43af36ca72d9470e1f91a40d36f8edd93d6b838f567611f6c61f6e4bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:19:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:19:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:56.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:19:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:19:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2245945823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:57 compute-0 nova_compute[251992]: 2025-12-06 07:19:57.171 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:57 compute-0 nova_compute[251992]: 2025-12-06 07:19:57.262 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:19:57 compute-0 nova_compute[251992]: 2025-12-06 07:19:57.263 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:19:57 compute-0 ceph-mon[74339]: pgmap v1860: 305 pgs: 305 active+clean; 279 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 116 KiB/s rd, 95 KiB/s wr, 78 op/s
Dec 06 07:19:57 compute-0 nova_compute[251992]: 2025-12-06 07:19:57.444 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:19:57 compute-0 nova_compute[251992]: 2025-12-06 07:19:57.445 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4298MB free_disk=20.851486206054688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:19:57 compute-0 nova_compute[251992]: 2025-12-06 07:19:57.445 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:19:57 compute-0 nova_compute[251992]: 2025-12-06 07:19:57.445 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:19:57 compute-0 keen_panini[304987]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:19:57 compute-0 keen_panini[304987]: --> relative data size: 1.0
Dec 06 07:19:57 compute-0 keen_panini[304987]: --> All data devices are unavailable
Dec 06 07:19:57 compute-0 nova_compute[251992]: 2025-12-06 07:19:57.586 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 288aae5a-11e0-4906-903d-acea3cebcf63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:19:57 compute-0 nova_compute[251992]: 2025-12-06 07:19:57.587 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:19:57 compute-0 nova_compute[251992]: 2025-12-06 07:19:57.587 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:19:57 compute-0 systemd[1]: libpod-5fb58b43af36ca72d9470e1f91a40d36f8edd93d6b838f567611f6c61f6e4bf7.scope: Deactivated successfully.
Dec 06 07:19:57 compute-0 podman[304970]: 2025-12-06 07:19:57.616703997 +0000 UTC m=+1.055467718 container died 5fb58b43af36ca72d9470e1f91a40d36f8edd93d6b838f567611f6c61f6e4bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 06 07:19:57 compute-0 nova_compute[251992]: 2025-12-06 07:19:57.627 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:19:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6978e82604168f99664aa5a9937f7efe6aed545a1e135f014d81fe3aee2863a4-merged.mount: Deactivated successfully.
Dec 06 07:19:57 compute-0 podman[304970]: 2025-12-06 07:19:57.67528094 +0000 UTC m=+1.114044641 container remove 5fb58b43af36ca72d9470e1f91a40d36f8edd93d6b838f567611f6c61f6e4bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_panini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 07:19:57 compute-0 systemd[1]: libpod-conmon-5fb58b43af36ca72d9470e1f91a40d36f8edd93d6b838f567611f6c61f6e4bf7.scope: Deactivated successfully.
Dec 06 07:19:57 compute-0 sudo[304859]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:57 compute-0 sudo[305037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:57 compute-0 sudo[305037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:57 compute-0 sudo[305037]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:57 compute-0 sudo[305081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:19:57 compute-0 sudo[305081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:57 compute-0 sudo[305081]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 280 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 194 KiB/s rd, 23 KiB/s wr, 70 op/s
Dec 06 07:19:57 compute-0 sudo[305106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:57 compute-0 sudo[305106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:57 compute-0 sudo[305106]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:57 compute-0 sudo[305131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:19:57 compute-0 sudo[305131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:19:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875531211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:58 compute-0 nova_compute[251992]: 2025-12-06 07:19:58.095 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:19:58 compute-0 nova_compute[251992]: 2025-12-06 07:19:58.102 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:19:58 compute-0 nova_compute[251992]: 2025-12-06 07:19:58.138 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:19:58 compute-0 nova_compute[251992]: 2025-12-06 07:19:58.174 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:19:58 compute-0 nova_compute[251992]: 2025-12-06 07:19:58.174 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:19:58 compute-0 ovn_controller[147168]: 2025-12-06T07:19:58Z|00270|binding|INFO|Releasing lport 8e8469cb-4434-4b4c-9dcf-a6a8244c2597 from this chassis (sb_readonly=0)
Dec 06 07:19:58 compute-0 podman[305200]: 2025-12-06 07:19:58.260785984 +0000 UTC m=+0.037772905 container create 140d48495b45c521fda42d2e8d80849285861c630e42678c207f4c2f0805aa8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 07:19:58 compute-0 nova_compute[251992]: 2025-12-06 07:19:58.270 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:58 compute-0 systemd[1]: Started libpod-conmon-140d48495b45c521fda42d2e8d80849285861c630e42678c207f4c2f0805aa8b.scope.
Dec 06 07:19:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:19:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2245945823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:58 compute-0 ceph-mon[74339]: pgmap v1861: 305 pgs: 305 active+clean; 280 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 194 KiB/s rd, 23 KiB/s wr, 70 op/s
Dec 06 07:19:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1875531211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:19:58 compute-0 podman[305200]: 2025-12-06 07:19:58.33491227 +0000 UTC m=+0.111899201 container init 140d48495b45c521fda42d2e8d80849285861c630e42678c207f4c2f0805aa8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 07:19:58 compute-0 podman[305200]: 2025-12-06 07:19:58.243775 +0000 UTC m=+0.020761931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:19:58 compute-0 podman[305200]: 2025-12-06 07:19:58.343230982 +0000 UTC m=+0.120217893 container start 140d48495b45c521fda42d2e8d80849285861c630e42678c207f4c2f0805aa8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hertz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 07:19:58 compute-0 podman[305200]: 2025-12-06 07:19:58.346423171 +0000 UTC m=+0.123410112 container attach 140d48495b45c521fda42d2e8d80849285861c630e42678c207f4c2f0805aa8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hertz, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:19:58 compute-0 peaceful_hertz[305217]: 167 167
Dec 06 07:19:58 compute-0 systemd[1]: libpod-140d48495b45c521fda42d2e8d80849285861c630e42678c207f4c2f0805aa8b.scope: Deactivated successfully.
Dec 06 07:19:58 compute-0 podman[305200]: 2025-12-06 07:19:58.351348489 +0000 UTC m=+0.128335400 container died 140d48495b45c521fda42d2e8d80849285861c630e42678c207f4c2f0805aa8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:19:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff4d52199b25319c010f284aab227614993e6875fe5052d67505715bedef6a21-merged.mount: Deactivated successfully.
Dec 06 07:19:58 compute-0 podman[305200]: 2025-12-06 07:19:58.38835689 +0000 UTC m=+0.165343801 container remove 140d48495b45c521fda42d2e8d80849285861c630e42678c207f4c2f0805aa8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hertz, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:19:58 compute-0 systemd[1]: libpod-conmon-140d48495b45c521fda42d2e8d80849285861c630e42678c207f4c2f0805aa8b.scope: Deactivated successfully.
Dec 06 07:19:58 compute-0 podman[305244]: 2025-12-06 07:19:58.552864517 +0000 UTC m=+0.039877073 container create 6965f52c4a767daa2ec78660a7af6477d629450de3e720f0775d1cd697e6e8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:19:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:19:58 compute-0 systemd[1]: Started libpod-conmon-6965f52c4a767daa2ec78660a7af6477d629450de3e720f0775d1cd697e6e8ae.scope.
Dec 06 07:19:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:19:58.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e289438301ed916fd69886661887a15880cc14c75d8bbbdbba9edd4053930f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e289438301ed916fd69886661887a15880cc14c75d8bbbdbba9edd4053930f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e289438301ed916fd69886661887a15880cc14c75d8bbbdbba9edd4053930f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e289438301ed916fd69886661887a15880cc14c75d8bbbdbba9edd4053930f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:19:58 compute-0 podman[305244]: 2025-12-06 07:19:58.535356519 +0000 UTC m=+0.022369095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:19:58 compute-0 podman[305244]: 2025-12-06 07:19:58.637168247 +0000 UTC m=+0.124180793 container init 6965f52c4a767daa2ec78660a7af6477d629450de3e720f0775d1cd697e6e8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 07:19:58 compute-0 podman[305244]: 2025-12-06 07:19:58.644858392 +0000 UTC m=+0.131870968 container start 6965f52c4a767daa2ec78660a7af6477d629450de3e720f0775d1cd697e6e8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 07:19:58 compute-0 podman[305244]: 2025-12-06 07:19:58.64837344 +0000 UTC m=+0.135386016 container attach 6965f52c4a767daa2ec78660a7af6477d629450de3e720f0775d1cd697e6e8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 07:19:58 compute-0 nova_compute[251992]: 2025-12-06 07:19:58.775 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:19:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:19:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:19:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:19:58.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:19:59 compute-0 angry_shamir[305260]: {
Dec 06 07:19:59 compute-0 angry_shamir[305260]:     "0": [
Dec 06 07:19:59 compute-0 angry_shamir[305260]:         {
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             "devices": [
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "/dev/loop3"
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             ],
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             "lv_name": "ceph_lv0",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             "lv_size": "7511998464",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             "name": "ceph_lv0",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             "tags": {
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "ceph.cluster_name": "ceph",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "ceph.crush_device_class": "",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "ceph.encrypted": "0",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "ceph.osd_id": "0",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "ceph.type": "block",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:                 "ceph.vdo": "0"
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             },
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             "type": "block",
Dec 06 07:19:59 compute-0 angry_shamir[305260]:             "vg_name": "ceph_vg0"
Dec 06 07:19:59 compute-0 angry_shamir[305260]:         }
Dec 06 07:19:59 compute-0 angry_shamir[305260]:     ]
Dec 06 07:19:59 compute-0 angry_shamir[305260]: }
Dec 06 07:19:59 compute-0 systemd[1]: libpod-6965f52c4a767daa2ec78660a7af6477d629450de3e720f0775d1cd697e6e8ae.scope: Deactivated successfully.
Dec 06 07:19:59 compute-0 conmon[305260]: conmon 6965f52c4a767daa2ec7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6965f52c4a767daa2ec78660a7af6477d629450de3e720f0775d1cd697e6e8ae.scope/container/memory.events
Dec 06 07:19:59 compute-0 podman[305269]: 2025-12-06 07:19:59.483687848 +0000 UTC m=+0.025690897 container died 6965f52c4a767daa2ec78660a7af6477d629450de3e720f0775d1cd697e6e8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:19:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-32e289438301ed916fd69886661887a15880cc14c75d8bbbdbba9edd4053930f-merged.mount: Deactivated successfully.
Dec 06 07:19:59 compute-0 podman[305269]: 2025-12-06 07:19:59.551092819 +0000 UTC m=+0.093095868 container remove 6965f52c4a767daa2ec78660a7af6477d629450de3e720f0775d1cd697e6e8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:19:59 compute-0 systemd[1]: libpod-conmon-6965f52c4a767daa2ec78660a7af6477d629450de3e720f0775d1cd697e6e8ae.scope: Deactivated successfully.
Dec 06 07:19:59 compute-0 sudo[305131]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:59 compute-0 sudo[305284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:59 compute-0 sudo[305284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:59 compute-0 sudo[305284]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:59 compute-0 sudo[305309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:19:59 compute-0 sudo[305309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:59 compute-0 sudo[305309]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:59 compute-0 sudo[305334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:19:59 compute-0 sudo[305334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:59 compute-0 sudo[305334]: pam_unix(sudo:session): session closed for user root
Dec 06 07:19:59 compute-0 sudo[305359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:19:59 compute-0 sudo[305359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:19:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 280 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 194 KiB/s rd, 23 KiB/s wr, 70 op/s
Dec 06 07:20:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 07:20:00 compute-0 podman[305425]: 2025-12-06 07:20:00.12756769 +0000 UTC m=+0.043080372 container create 46eb7870d6c540cfe27f37d10943c2d7fc96aa5cc88e19333f7988b337f225b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 07:20:00 compute-0 systemd[1]: Started libpod-conmon-46eb7870d6c540cfe27f37d10943c2d7fc96aa5cc88e19333f7988b337f225b8.scope.
Dec 06 07:20:00 compute-0 nova_compute[251992]: 2025-12-06 07:20:00.174 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:00 compute-0 nova_compute[251992]: 2025-12-06 07:20:00.175 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:00 compute-0 nova_compute[251992]: 2025-12-06 07:20:00.175 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:20:00 compute-0 podman[305425]: 2025-12-06 07:20:00.199256579 +0000 UTC m=+0.114769281 container init 46eb7870d6c540cfe27f37d10943c2d7fc96aa5cc88e19333f7988b337f225b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:20:00 compute-0 podman[305425]: 2025-12-06 07:20:00.108408466 +0000 UTC m=+0.023921178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:20:00 compute-0 podman[305425]: 2025-12-06 07:20:00.207113949 +0000 UTC m=+0.122626631 container start 46eb7870d6c540cfe27f37d10943c2d7fc96aa5cc88e19333f7988b337f225b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:20:00 compute-0 relaxed_sinoussi[305442]: 167 167
Dec 06 07:20:00 compute-0 systemd[1]: libpod-46eb7870d6c540cfe27f37d10943c2d7fc96aa5cc88e19333f7988b337f225b8.scope: Deactivated successfully.
Dec 06 07:20:00 compute-0 podman[305425]: 2025-12-06 07:20:00.213394703 +0000 UTC m=+0.128907385 container attach 46eb7870d6c540cfe27f37d10943c2d7fc96aa5cc88e19333f7988b337f225b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 07:20:00 compute-0 podman[305425]: 2025-12-06 07:20:00.214402891 +0000 UTC m=+0.129915573 container died 46eb7870d6c540cfe27f37d10943c2d7fc96aa5cc88e19333f7988b337f225b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:20:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-89f57c72f6b6a2fc094ba0794dd7e55c875805a9d5f91b30bd0f79614df38cc5-merged.mount: Deactivated successfully.
Dec 06 07:20:00 compute-0 podman[305425]: 2025-12-06 07:20:00.261408112 +0000 UTC m=+0.176920784 container remove 46eb7870d6c540cfe27f37d10943c2d7fc96aa5cc88e19333f7988b337f225b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_sinoussi, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:20:00 compute-0 systemd[1]: libpod-conmon-46eb7870d6c540cfe27f37d10943c2d7fc96aa5cc88e19333f7988b337f225b8.scope: Deactivated successfully.
Dec 06 07:20:00 compute-0 podman[305466]: 2025-12-06 07:20:00.421835414 +0000 UTC m=+0.039657886 container create 44193b0393f58962e9e56c844e5a23a4d48b1b23165bae926a00b4df2b8a305c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_neumann, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Dec 06 07:20:00 compute-0 systemd[1]: Started libpod-conmon-44193b0393f58962e9e56c844e5a23a4d48b1b23165bae926a00b4df2b8a305c.scope.
Dec 06 07:20:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f2c10f6ce0802df0f55173a4459a1184ced574d886b192a655362f44f94d94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f2c10f6ce0802df0f55173a4459a1184ced574d886b192a655362f44f94d94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f2c10f6ce0802df0f55173a4459a1184ced574d886b192a655362f44f94d94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:20:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f2c10f6ce0802df0f55173a4459a1184ced574d886b192a655362f44f94d94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:20:00 compute-0 podman[305466]: 2025-12-06 07:20:00.403250147 +0000 UTC m=+0.021072629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:20:00 compute-0 podman[305466]: 2025-12-06 07:20:00.502142194 +0000 UTC m=+0.119964736 container init 44193b0393f58962e9e56c844e5a23a4d48b1b23165bae926a00b4df2b8a305c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:20:00 compute-0 podman[305466]: 2025-12-06 07:20:00.507797251 +0000 UTC m=+0.125619713 container start 44193b0393f58962e9e56c844e5a23a4d48b1b23165bae926a00b4df2b8a305c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:20:00 compute-0 podman[305466]: 2025-12-06 07:20:00.511992779 +0000 UTC m=+0.129815241 container attach 44193b0393f58962e9e56c844e5a23a4d48b1b23165bae926a00b4df2b8a305c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 07:20:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:00.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:00 compute-0 nova_compute[251992]: 2025-12-06 07:20:00.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:00.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:01 compute-0 ceph-mon[74339]: pgmap v1862: 305 pgs: 305 active+clean; 280 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 194 KiB/s rd, 23 KiB/s wr, 70 op/s
Dec 06 07:20:01 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 07:20:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2215598199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:01 compute-0 stoic_neumann[305482]: {
Dec 06 07:20:01 compute-0 stoic_neumann[305482]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:20:01 compute-0 stoic_neumann[305482]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:20:01 compute-0 stoic_neumann[305482]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:20:01 compute-0 stoic_neumann[305482]:         "osd_id": 0,
Dec 06 07:20:01 compute-0 stoic_neumann[305482]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:20:01 compute-0 stoic_neumann[305482]:         "type": "bluestore"
Dec 06 07:20:01 compute-0 stoic_neumann[305482]:     }
Dec 06 07:20:01 compute-0 stoic_neumann[305482]: }
Dec 06 07:20:01 compute-0 systemd[1]: libpod-44193b0393f58962e9e56c844e5a23a4d48b1b23165bae926a00b4df2b8a305c.scope: Deactivated successfully.
Dec 06 07:20:01 compute-0 podman[305466]: 2025-12-06 07:20:01.344622313 +0000 UTC m=+0.962444785 container died 44193b0393f58962e9e56c844e5a23a4d48b1b23165bae926a00b4df2b8a305c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_neumann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 07:20:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-84f2c10f6ce0802df0f55173a4459a1184ced574d886b192a655362f44f94d94-merged.mount: Deactivated successfully.
Dec 06 07:20:01 compute-0 nova_compute[251992]: 2025-12-06 07:20:01.403 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:01 compute-0 podman[305466]: 2025-12-06 07:20:01.424734286 +0000 UTC m=+1.042556748 container remove 44193b0393f58962e9e56c844e5a23a4d48b1b23165bae926a00b4df2b8a305c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:20:01 compute-0 systemd[1]: libpod-conmon-44193b0393f58962e9e56c844e5a23a4d48b1b23165bae926a00b4df2b8a305c.scope: Deactivated successfully.
Dec 06 07:20:01 compute-0 sudo[305359]: pam_unix(sudo:session): session closed for user root
Dec 06 07:20:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:20:01 compute-0 nova_compute[251992]: 2025-12-06 07:20:01.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 280 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 22 KiB/s wr, 113 op/s
Dec 06 07:20:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:20:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:20:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:02.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:20:02 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 968eb76b-aa1f-4f55-b0cd-828d8d3d144a does not exist
Dec 06 07:20:02 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ba2e00be-93bc-45fd-a19d-3aeedd254ee7 does not exist
Dec 06 07:20:02 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f4e8cac8-dfff-40d8-a70e-b3664a441a3b does not exist
Dec 06 07:20:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2588148620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:02 compute-0 ceph-mon[74339]: pgmap v1863: 305 pgs: 305 active+clean; 280 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 22 KiB/s wr, 113 op/s
Dec 06 07:20:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:20:02 compute-0 sudo[305516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:20:02 compute-0 sudo[305516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:20:02 compute-0 sudo[305516]: pam_unix(sudo:session): session closed for user root
Dec 06 07:20:02 compute-0 sudo[305541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:20:02 compute-0 nova_compute[251992]: 2025-12-06 07:20:02.842 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Acquiring lock "18df4458-2006-4721-82e1-760c93301d0c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:02 compute-0 nova_compute[251992]: 2025-12-06 07:20:02.843 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:02 compute-0 sudo[305541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:20:02 compute-0 sudo[305541]: pam_unix(sudo:session): session closed for user root
Dec 06 07:20:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:02.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:02 compute-0 nova_compute[251992]: 2025-12-06 07:20:02.871 251996 DEBUG nova.compute.manager [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:20:02 compute-0 nova_compute[251992]: 2025-12-06 07:20:02.956 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:02 compute-0 nova_compute[251992]: 2025-12-06 07:20:02.957 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:02 compute-0 nova_compute[251992]: 2025-12-06 07:20:02.965 251996 DEBUG nova.virt.hardware [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:20:02 compute-0 nova_compute[251992]: 2025-12-06 07:20:02.965 251996 INFO nova.compute.claims [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:20:03 compute-0 nova_compute[251992]: 2025-12-06 07:20:03.142 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:20:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2530500757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:03 compute-0 nova_compute[251992]: 2025-12-06 07:20:03.594 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:03 compute-0 nova_compute[251992]: 2025-12-06 07:20:03.600 251996 DEBUG nova.compute.provider_tree [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:20:03 compute-0 nova_compute[251992]: 2025-12-06 07:20:03.756 251996 DEBUG nova.scheduler.client.report [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:20:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:20:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1169435881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2530500757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:03 compute-0 nova_compute[251992]: 2025-12-06 07:20:03.775 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:03 compute-0 nova_compute[251992]: 2025-12-06 07:20:03.805 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:03 compute-0 nova_compute[251992]: 2025-12-06 07:20:03.805 251996 DEBUG nova.compute.manager [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:20:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:03.827 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:03.827 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:03.828 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 280 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 89 op/s
Dec 06 07:20:03 compute-0 nova_compute[251992]: 2025-12-06 07:20:03.883 251996 DEBUG nova.compute.manager [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:20:03 compute-0 nova_compute[251992]: 2025-12-06 07:20:03.884 251996 DEBUG nova.network.neutron [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:20:03 compute-0 sudo[305588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:20:03 compute-0 sudo[305588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:20:03 compute-0 nova_compute[251992]: 2025-12-06 07:20:03.910 251996 INFO nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:20:03 compute-0 sudo[305588]: pam_unix(sudo:session): session closed for user root
Dec 06 07:20:03 compute-0 nova_compute[251992]: 2025-12-06 07:20:03.950 251996 DEBUG nova.compute.manager [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:20:03 compute-0 sudo[305613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:20:03 compute-0 sudo[305613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:20:03 compute-0 sudo[305613]: pam_unix(sudo:session): session closed for user root
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.106 251996 DEBUG nova.compute.manager [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.107 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.107 251996 INFO nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Creating image(s)
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.131 251996 DEBUG nova.storage.rbd_utils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] rbd image 18df4458-2006-4721-82e1-760c93301d0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.153 251996 DEBUG nova.storage.rbd_utils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] rbd image 18df4458-2006-4721-82e1-760c93301d0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.179 251996 DEBUG nova.storage.rbd_utils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] rbd image 18df4458-2006-4721-82e1-760c93301d0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.183 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.224 251996 DEBUG nova.policy [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '585886c5d2044f729963a6485c93acd5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a16333d9a99c4d4ba7c9a1c235b6219b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.246 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.247 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.248 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.248 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.270 251996 DEBUG nova.storage.rbd_utils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] rbd image 18df4458-2006-4721-82e1-760c93301d0c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.274 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 18df4458-2006-4721-82e1-760c93301d0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:04.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:04 compute-0 ceph-mon[74339]: pgmap v1864: 305 pgs: 305 active+clean; 280 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 89 op/s
Dec 06 07:20:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/529352192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:04.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:04 compute-0 nova_compute[251992]: 2025-12-06 07:20:04.969 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 18df4458-2006-4721-82e1-760c93301d0c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.696s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:05 compute-0 nova_compute[251992]: 2025-12-06 07:20:05.053 251996 DEBUG nova.storage.rbd_utils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] resizing rbd image 18df4458-2006-4721-82e1-760c93301d0c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:20:05 compute-0 nova_compute[251992]: 2025-12-06 07:20:05.171 251996 DEBUG nova.objects.instance [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lazy-loading 'migration_context' on Instance uuid 18df4458-2006-4721-82e1-760c93301d0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:05 compute-0 nova_compute[251992]: 2025-12-06 07:20:05.187 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:20:05 compute-0 nova_compute[251992]: 2025-12-06 07:20:05.187 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Ensure instance console log exists: /var/lib/nova/instances/18df4458-2006-4721-82e1-760c93301d0c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:20:05 compute-0 nova_compute[251992]: 2025-12-06 07:20:05.187 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:05 compute-0 nova_compute[251992]: 2025-12-06 07:20:05.188 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:05 compute-0 nova_compute[251992]: 2025-12-06 07:20:05.188 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:05.418 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:20:05 compute-0 nova_compute[251992]: 2025-12-06 07:20:05.418 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:05.419 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:20:05 compute-0 nova_compute[251992]: 2025-12-06 07:20:05.614 251996 DEBUG nova.network.neutron [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Successfully created port: 459cc986-3132-488a-9684-df0ff049e0b0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:20:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 305 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 600 KiB/s wr, 107 op/s
Dec 06 07:20:06 compute-0 nova_compute[251992]: 2025-12-06 07:20:06.405 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/378322614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:06 compute-0 ceph-mon[74339]: pgmap v1865: 305 pgs: 305 active+clean; 305 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 600 KiB/s wr, 107 op/s
Dec 06 07:20:06 compute-0 nova_compute[251992]: 2025-12-06 07:20:06.555 251996 DEBUG nova.network.neutron [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Successfully updated port: 459cc986-3132-488a-9684-df0ff049e0b0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:20:06 compute-0 nova_compute[251992]: 2025-12-06 07:20:06.572 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Acquiring lock "refresh_cache-18df4458-2006-4721-82e1-760c93301d0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:20:06 compute-0 nova_compute[251992]: 2025-12-06 07:20:06.572 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Acquired lock "refresh_cache-18df4458-2006-4721-82e1-760c93301d0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:20:06 compute-0 nova_compute[251992]: 2025-12-06 07:20:06.572 251996 DEBUG nova.network.neutron [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:20:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:06.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:06 compute-0 nova_compute[251992]: 2025-12-06 07:20:06.658 251996 DEBUG nova.compute.manager [req-879c38b8-084f-4397-bc6b-9d2a9ca4ad20 req-f2eb82bd-cd46-40ae-ae4a-5650c4c8b676 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Received event network-changed-459cc986-3132-488a-9684-df0ff049e0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:06 compute-0 nova_compute[251992]: 2025-12-06 07:20:06.659 251996 DEBUG nova.compute.manager [req-879c38b8-084f-4397-bc6b-9d2a9ca4ad20 req-f2eb82bd-cd46-40ae-ae4a-5650c4c8b676 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Refreshing instance network info cache due to event network-changed-459cc986-3132-488a-9684-df0ff049e0b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:20:06 compute-0 nova_compute[251992]: 2025-12-06 07:20:06.659 251996 DEBUG oslo_concurrency.lockutils [req-879c38b8-084f-4397-bc6b-9d2a9ca4ad20 req-f2eb82bd-cd46-40ae-ae4a-5650c4c8b676 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-18df4458-2006-4721-82e1-760c93301d0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:20:06 compute-0 nova_compute[251992]: 2025-12-06 07:20:06.715 251996 DEBUG nova.network.neutron [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:20:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:06.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:07 compute-0 nova_compute[251992]: 2025-12-06 07:20:07.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:07 compute-0 nova_compute[251992]: 2025-12-06 07:20:07.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:20:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 326 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Dec 06 07:20:08 compute-0 ceph-mon[74339]: pgmap v1866: 305 pgs: 305 active+clean; 326 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Dec 06 07:20:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:08.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:08 compute-0 nova_compute[251992]: 2025-12-06 07:20:08.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:08 compute-0 nova_compute[251992]: 2025-12-06 07:20:08.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:20:08 compute-0 nova_compute[251992]: 2025-12-06 07:20:08.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:20:08 compute-0 nova_compute[251992]: 2025-12-06 07:20:08.677 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:20:08 compute-0 nova_compute[251992]: 2025-12-06 07:20:08.778 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:08.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:08 compute-0 nova_compute[251992]: 2025-12-06 07:20:08.960 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:20:08 compute-0 nova_compute[251992]: 2025-12-06 07:20:08.960 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:20:08 compute-0 nova_compute[251992]: 2025-12-06 07:20:08.960 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:20:08 compute-0 nova_compute[251992]: 2025-12-06 07:20:08.961 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 288aae5a-11e0-4906-903d-acea3cebcf63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.424 251996 DEBUG nova.network.neutron [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Updating instance_info_cache with network_info: [{"id": "459cc986-3132-488a-9684-df0ff049e0b0", "address": "fa:16:3e:d2:de:30", "network": {"id": "fc440c61-543f-4429-abe0-62748a6f425c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1869055720-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a16333d9a99c4d4ba7c9a1c235b6219b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap459cc986-31", "ovs_interfaceid": "459cc986-3132-488a-9684-df0ff049e0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.448 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Releasing lock "refresh_cache-18df4458-2006-4721-82e1-760c93301d0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.448 251996 DEBUG nova.compute.manager [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Instance network_info: |[{"id": "459cc986-3132-488a-9684-df0ff049e0b0", "address": "fa:16:3e:d2:de:30", "network": {"id": "fc440c61-543f-4429-abe0-62748a6f425c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1869055720-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a16333d9a99c4d4ba7c9a1c235b6219b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap459cc986-31", "ovs_interfaceid": "459cc986-3132-488a-9684-df0ff049e0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.450 251996 DEBUG oslo_concurrency.lockutils [req-879c38b8-084f-4397-bc6b-9d2a9ca4ad20 req-f2eb82bd-cd46-40ae-ae4a-5650c4c8b676 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-18df4458-2006-4721-82e1-760c93301d0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.450 251996 DEBUG nova.network.neutron [req-879c38b8-084f-4397-bc6b-9d2a9ca4ad20 req-f2eb82bd-cd46-40ae-ae4a-5650c4c8b676 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Refreshing network info cache for port 459cc986-3132-488a-9684-df0ff049e0b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.452 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Start _get_guest_xml network_info=[{"id": "459cc986-3132-488a-9684-df0ff049e0b0", "address": "fa:16:3e:d2:de:30", "network": {"id": "fc440c61-543f-4429-abe0-62748a6f425c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1869055720-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a16333d9a99c4d4ba7c9a1c235b6219b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap459cc986-31", "ovs_interfaceid": "459cc986-3132-488a-9684-df0ff049e0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.457 251996 WARNING nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.462 251996 DEBUG nova.virt.libvirt.host [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.463 251996 DEBUG nova.virt.libvirt.host [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.468 251996 DEBUG nova.virt.libvirt.host [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.468 251996 DEBUG nova.virt.libvirt.host [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.469 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.469 251996 DEBUG nova.virt.hardware [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.470 251996 DEBUG nova.virt.hardware [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.470 251996 DEBUG nova.virt.hardware [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.470 251996 DEBUG nova.virt.hardware [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.470 251996 DEBUG nova.virt.hardware [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.471 251996 DEBUG nova.virt.hardware [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.471 251996 DEBUG nova.virt.hardware [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.471 251996 DEBUG nova.virt.hardware [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.471 251996 DEBUG nova.virt.hardware [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.471 251996 DEBUG nova.virt.hardware [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.472 251996 DEBUG nova.virt.hardware [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.475 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2441678181' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:20:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2441678181' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:20:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Dec 06 07:20:09 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Dec 06 07:20:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 326 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 108 op/s
Dec 06 07:20:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:20:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797948778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.931 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.955 251996 DEBUG nova.storage.rbd_utils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] rbd image 18df4458-2006-4721-82e1-760c93301d0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:09 compute-0 nova_compute[251992]: 2025-12-06 07:20:09.959 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:20:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3582177779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.449 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.450 251996 DEBUG nova.virt.libvirt.vif [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:20:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-2134322638',display_name='tempest-ServersTestManualDisk-server-2134322638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-2134322638',id=82,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMHFsKMYn8W+O2ePOW2j46QuJJaUBmk6IaRT8G15KaA08D2znZoVmjBDp59M3ev7p26P9ukp128lSx41VYL7gsNB5GxX79YvZtMyYICrK66jnTzGoCNoZstRPQlw/UQig==',key_name='tempest-keypair-977953110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a16333d9a99c4d4ba7c9a1c235b6219b',ramdisk_id='',reservation_id='r-b81q8jpb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1770307957',owner_user_name='tempest-ServersTestManualDisk-1770307957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:20:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='585886c5d2044f729963a6485c93acd5',uuid=18df4458-2006-4721-82e1-760c93301d0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "459cc986-3132-488a-9684-df0ff049e0b0", "address": "fa:16:3e:d2:de:30", "network": {"id": "fc440c61-543f-4429-abe0-62748a6f425c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1869055720-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a16333d9a99c4d4ba7c9a1c235b6219b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap459cc986-31", "ovs_interfaceid": "459cc986-3132-488a-9684-df0ff049e0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.450 251996 DEBUG nova.network.os_vif_util [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Converting VIF {"id": "459cc986-3132-488a-9684-df0ff049e0b0", "address": "fa:16:3e:d2:de:30", "network": {"id": "fc440c61-543f-4429-abe0-62748a6f425c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1869055720-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a16333d9a99c4d4ba7c9a1c235b6219b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap459cc986-31", "ovs_interfaceid": "459cc986-3132-488a-9684-df0ff049e0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.451 251996 DEBUG nova.network.os_vif_util [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:de:30,bridge_name='br-int',has_traffic_filtering=True,id=459cc986-3132-488a-9684-df0ff049e0b0,network=Network(fc440c61-543f-4429-abe0-62748a6f425c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap459cc986-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.452 251996 DEBUG nova.objects.instance [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lazy-loading 'pci_devices' on Instance uuid 18df4458-2006-4721-82e1-760c93301d0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.468 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:20:10 compute-0 nova_compute[251992]:   <uuid>18df4458-2006-4721-82e1-760c93301d0c</uuid>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   <name>instance-00000052</name>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <nova:name>tempest-ServersTestManualDisk-server-2134322638</nova:name>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:20:09</nova:creationTime>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <nova:user uuid="585886c5d2044f729963a6485c93acd5">tempest-ServersTestManualDisk-1770307957-project-member</nova:user>
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <nova:project uuid="a16333d9a99c4d4ba7c9a1c235b6219b">tempest-ServersTestManualDisk-1770307957</nova:project>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <nova:port uuid="459cc986-3132-488a-9684-df0ff049e0b0">
Dec 06 07:20:10 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <system>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <entry name="serial">18df4458-2006-4721-82e1-760c93301d0c</entry>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <entry name="uuid">18df4458-2006-4721-82e1-760c93301d0c</entry>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     </system>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   <os>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   </os>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   <features>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   </features>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/18df4458-2006-4721-82e1-760c93301d0c_disk">
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       </source>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/18df4458-2006-4721-82e1-760c93301d0c_disk.config">
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       </source>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:20:10 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:d2:de:30"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <target dev="tap459cc986-31"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/18df4458-2006-4721-82e1-760c93301d0c/console.log" append="off"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <video>
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     </video>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:20:10 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:20:10 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:20:10 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:20:10 compute-0 nova_compute[251992]: </domain>
Dec 06 07:20:10 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.468 251996 DEBUG nova.compute.manager [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Preparing to wait for external event network-vif-plugged-459cc986-3132-488a-9684-df0ff049e0b0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.468 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Acquiring lock "18df4458-2006-4721-82e1-760c93301d0c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.469 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.469 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.469 251996 DEBUG nova.virt.libvirt.vif [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:20:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-2134322638',display_name='tempest-ServersTestManualDisk-server-2134322638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-2134322638',id=82,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMHFsKMYn8W+O2ePOW2j46QuJJaUBmk6IaRT8G15KaA08D2znZoVmjBDp59M3ev7p26P9ukp128lSx41VYL7gsNB5GxX79YvZtMyYICrK66jnTzGoCNoZstRPQlw/UQig==',key_name='tempest-keypair-977953110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a16333d9a99c4d4ba7c9a1c235b6219b',ramdisk_id='',reservation_id='r-b81q8jpb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1770307957',owner_user_name='tempest-ServersTestManualDisk-1770307957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:20:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='585886c5d2044f729963a6485c93acd5',uuid=18df4458-2006-4721-82e1-760c93301d0c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "459cc986-3132-488a-9684-df0ff049e0b0", "address": "fa:16:3e:d2:de:30", "network": {"id": "fc440c61-543f-4429-abe0-62748a6f425c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1869055720-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a16333d9a99c4d4ba7c9a1c235b6219b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap459cc986-31", "ovs_interfaceid": "459cc986-3132-488a-9684-df0ff049e0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.469 251996 DEBUG nova.network.os_vif_util [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Converting VIF {"id": "459cc986-3132-488a-9684-df0ff049e0b0", "address": "fa:16:3e:d2:de:30", "network": {"id": "fc440c61-543f-4429-abe0-62748a6f425c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1869055720-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a16333d9a99c4d4ba7c9a1c235b6219b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap459cc986-31", "ovs_interfaceid": "459cc986-3132-488a-9684-df0ff049e0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.470 251996 DEBUG nova.network.os_vif_util [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:de:30,bridge_name='br-int',has_traffic_filtering=True,id=459cc986-3132-488a-9684-df0ff049e0b0,network=Network(fc440c61-543f-4429-abe0-62748a6f425c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap459cc986-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.470 251996 DEBUG os_vif [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:de:30,bridge_name='br-int',has_traffic_filtering=True,id=459cc986-3132-488a-9684-df0ff049e0b0,network=Network(fc440c61-543f-4429-abe0-62748a6f425c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap459cc986-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.471 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.471 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.472 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.476 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.476 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap459cc986-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.477 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap459cc986-31, col_values=(('external_ids', {'iface-id': '459cc986-3132-488a-9684-df0ff049e0b0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:de:30', 'vm-uuid': '18df4458-2006-4721-82e1-760c93301d0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.478 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.480 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:20:10 compute-0 NetworkManager[48965]: <info>  [1765005610.4803] manager: (tap459cc986-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.486 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.487 251996 INFO os_vif [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:de:30,bridge_name='br-int',has_traffic_filtering=True,id=459cc986-3132-488a-9684-df0ff049e0b0,network=Network(fc440c61-543f-4429-abe0-62748a6f425c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap459cc986-31')
Dec 06 07:20:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:10.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.819 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.820 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.820 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] No VIF found with MAC fa:16:3e:d2:de:30, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:20:10 compute-0 nova_compute[251992]: 2025-12-06 07:20:10.820 251996 INFO nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Using config drive
Dec 06 07:20:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:10.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:11 compute-0 ceph-mon[74339]: osdmap e257: 3 total, 3 up, 3 in
Dec 06 07:20:11 compute-0 ceph-mon[74339]: pgmap v1868: 305 pgs: 305 active+clean; 326 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 108 op/s
Dec 06 07:20:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1797948778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1711979528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3582177779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:11 compute-0 nova_compute[251992]: 2025-12-06 07:20:11.075 251996 DEBUG nova.storage.rbd_utils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] rbd image 18df4458-2006-4721-82e1-760c93301d0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:11.421 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 299 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:20:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3107736314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:12 compute-0 ceph-mon[74339]: pgmap v1869: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 299 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:20:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/100782781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:12.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:12 compute-0 nova_compute[251992]: 2025-12-06 07:20:12.788 251996 DEBUG nova.network.neutron [req-879c38b8-084f-4397-bc6b-9d2a9ca4ad20 req-f2eb82bd-cd46-40ae-ae4a-5650c4c8b676 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Updated VIF entry in instance network info cache for port 459cc986-3132-488a-9684-df0ff049e0b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:20:12 compute-0 nova_compute[251992]: 2025-12-06 07:20:12.789 251996 DEBUG nova.network.neutron [req-879c38b8-084f-4397-bc6b-9d2a9ca4ad20 req-f2eb82bd-cd46-40ae-ae4a-5650c4c8b676 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Updating instance_info_cache with network_info: [{"id": "459cc986-3132-488a-9684-df0ff049e0b0", "address": "fa:16:3e:d2:de:30", "network": {"id": "fc440c61-543f-4429-abe0-62748a6f425c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1869055720-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a16333d9a99c4d4ba7c9a1c235b6219b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap459cc986-31", "ovs_interfaceid": "459cc986-3132-488a-9684-df0ff049e0b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:12 compute-0 nova_compute[251992]: 2025-12-06 07:20:12.832 251996 DEBUG oslo_concurrency.lockutils [req-879c38b8-084f-4397-bc6b-9d2a9ca4ad20 req-f2eb82bd-cd46-40ae-ae4a-5650c4c8b676 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-18df4458-2006-4721-82e1-760c93301d0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:20:12 compute-0 nova_compute[251992]: 2025-12-06 07:20:12.850 251996 INFO nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Creating config drive at /var/lib/nova/instances/18df4458-2006-4721-82e1-760c93301d0c/disk.config
Dec 06 07:20:12 compute-0 nova_compute[251992]: 2025-12-06 07:20:12.857 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18df4458-2006-4721-82e1-760c93301d0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0h3kb7tc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:12.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:12 compute-0 nova_compute[251992]: 2025-12-06 07:20:12.887 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updating instance_info_cache with network_info: [{"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:12 compute-0 nova_compute[251992]: 2025-12-06 07:20:12.913 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-288aae5a-11e0-4906-903d-acea3cebcf63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:20:12 compute-0 nova_compute[251992]: 2025-12-06 07:20:12.913 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:20:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:20:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:20:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:20:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:20:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:20:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:20:12 compute-0 nova_compute[251992]: 2025-12-06 07:20:12.994 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18df4458-2006-4721-82e1-760c93301d0c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0h3kb7tc" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:13 compute-0 nova_compute[251992]: 2025-12-06 07:20:13.022 251996 DEBUG nova.storage.rbd_utils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] rbd image 18df4458-2006-4721-82e1-760c93301d0c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:13 compute-0 nova_compute[251992]: 2025-12-06 07:20:13.026 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18df4458-2006-4721-82e1-760c93301d0c/disk.config 18df4458-2006-4721-82e1-760c93301d0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:13 compute-0 nova_compute[251992]: 2025-12-06 07:20:13.780 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 299 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:20:13 compute-0 nova_compute[251992]: 2025-12-06 07:20:13.930 251996 DEBUG oslo_concurrency.processutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18df4458-2006-4721-82e1-760c93301d0c/disk.config 18df4458-2006-4721-82e1-760c93301d0c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.904s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:13 compute-0 nova_compute[251992]: 2025-12-06 07:20:13.931 251996 INFO nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Deleting local config drive /var/lib/nova/instances/18df4458-2006-4721-82e1-760c93301d0c/disk.config because it was imported into RBD.
Dec 06 07:20:13 compute-0 kernel: tap459cc986-31: entered promiscuous mode
Dec 06 07:20:13 compute-0 NetworkManager[48965]: <info>  [1765005613.9770] manager: (tap459cc986-31): new Tun device (/org/freedesktop/NetworkManager/Devices/147)
Dec 06 07:20:13 compute-0 ovn_controller[147168]: 2025-12-06T07:20:13Z|00271|binding|INFO|Claiming lport 459cc986-3132-488a-9684-df0ff049e0b0 for this chassis.
Dec 06 07:20:13 compute-0 ovn_controller[147168]: 2025-12-06T07:20:13Z|00272|binding|INFO|459cc986-3132-488a-9684-df0ff049e0b0: Claiming fa:16:3e:d2:de:30 10.100.0.11
Dec 06 07:20:13 compute-0 nova_compute[251992]: 2025-12-06 07:20:13.977 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:13.990 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:de:30 10.100.0.11'], port_security=['fa:16:3e:d2:de:30 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '18df4458-2006-4721-82e1-760c93301d0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc440c61-543f-4429-abe0-62748a6f425c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a16333d9a99c4d4ba7c9a1c235b6219b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '36d25105-7c4e-4d13-845c-7d81cfe928b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fe8ed7f-5b0d-4b21-9e02-b33082df59c2, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=459cc986-3132-488a-9684-df0ff049e0b0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:20:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:13.992 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 459cc986-3132-488a-9684-df0ff049e0b0 in datapath fc440c61-543f-4429-abe0-62748a6f425c bound to our chassis
Dec 06 07:20:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:13.993 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc440c61-543f-4429-abe0-62748a6f425c
Dec 06 07:20:13 compute-0 ovn_controller[147168]: 2025-12-06T07:20:13Z|00273|binding|INFO|Setting lport 459cc986-3132-488a-9684-df0ff049e0b0 ovn-installed in OVS
Dec 06 07:20:13 compute-0 ovn_controller[147168]: 2025-12-06T07:20:13Z|00274|binding|INFO|Setting lport 459cc986-3132-488a-9684-df0ff049e0b0 up in Southbound
Dec 06 07:20:13 compute-0 nova_compute[251992]: 2025-12-06 07:20:13.996 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:13 compute-0 nova_compute[251992]: 2025-12-06 07:20:13.997 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.005 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[75f1867c-4807-4cc4-bb22-fdb1829129f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.006 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc440c61-51 in ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.008 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc440c61-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.008 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f6108553-14d6-480f-a830-ce59f4dbd9b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 systemd-machined[212986]: New machine qemu-37-instance-00000052.
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.010 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a1dd01f0-af30-462f-b17e-600910aabe5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 systemd-udevd[305945]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:20:14 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000052.
Dec 06 07:20:14 compute-0 NetworkManager[48965]: <info>  [1765005614.0232] device (tap459cc986-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:20:14 compute-0 NetworkManager[48965]: <info>  [1765005614.0245] device (tap459cc986-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.024 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[68d04041-e60a-40e9-b239-af33f022b37e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.046 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac8b55f-06bb-426c-b381-4defc3358a5d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.071 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[bad72a70-5040-46fe-87e3-a7a846870107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 systemd-udevd[305948]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:20:14 compute-0 NetworkManager[48965]: <info>  [1765005614.0783] manager: (tapfc440c61-50): new Veth device (/org/freedesktop/NetworkManager/Devices/148)
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.078 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd69b8b-a801-4730-b818-6e0d7e0e2d07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.108 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d2445b-0f75-4ecf-8be1-2036503948db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.111 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ea657c0b-e8a2-4618-9300-b2cc7c8f9ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 NetworkManager[48965]: <info>  [1765005614.1328] device (tapfc440c61-50): carrier: link connected
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.139 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[fc46593d-65c2-4d6c-a497-0416ecebd3dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.156 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd8e112-7ba4-48a3-ac55-655dcf21bd12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc440c61-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:00:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588672, 'reachable_time': 39168, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305977, 'error': None, 'target': 'ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.172 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4d651c65-5d84-4edc-8226-192c9df9c4a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:ac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588672, 'tstamp': 588672}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305978, 'error': None, 'target': 'ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.189 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[47d11ca4-6373-42b0-8f39-4b49f8ae646b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc440c61-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:00:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588672, 'reachable_time': 39168, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305979, 'error': None, 'target': 'ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.220 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c27ab8f8-7206-4892-9053-8661f8c9bd84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.273 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4829e7-e027-4c2e-a353-4493fa1ae729]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.274 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc440c61-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.275 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.275 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc440c61-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.276 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:14 compute-0 NetworkManager[48965]: <info>  [1765005614.2777] manager: (tapfc440c61-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Dec 06 07:20:14 compute-0 kernel: tapfc440c61-50: entered promiscuous mode
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.279 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.282 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc440c61-50, col_values=(('external_ids', {'iface-id': 'e468a8b5-f72b-484b-afa0-58f88617a344'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.283 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:14 compute-0 ovn_controller[147168]: 2025-12-06T07:20:14Z|00275|binding|INFO|Releasing lport e468a8b5-f72b-484b-afa0-58f88617a344 from this chassis (sb_readonly=0)
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.284 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.284 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc440c61-543f-4429-abe0-62748a6f425c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc440c61-543f-4429-abe0-62748a6f425c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.285 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[95176579-f506-42f9-b481-dcbb55bf64c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.286 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-fc440c61-543f-4429-abe0-62748a6f425c
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/fc440c61-543f-4429-abe0-62748a6f425c.pid.haproxy
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID fc440c61-543f-4429-abe0-62748a6f425c
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:20:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:14.287 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c', 'env', 'PROCESS_TAG=haproxy-fc440c61-543f-4429-abe0-62748a6f425c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc440c61-543f-4429-abe0-62748a6f425c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.299 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.580 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005614.5803337, 18df4458-2006-4721-82e1-760c93301d0c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.581 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 18df4458-2006-4721-82e1-760c93301d0c] VM Started (Lifecycle Event)
Dec 06 07:20:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:14.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.637 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.641 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005614.5805125, 18df4458-2006-4721-82e1-760c93301d0c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.641 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 18df4458-2006-4721-82e1-760c93301d0c] VM Paused (Lifecycle Event)
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.665 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.668 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:20:14 compute-0 podman[306053]: 2025-12-06 07:20:14.676034928 +0000 UTC m=+0.052992458 container create b38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:20:14 compute-0 nova_compute[251992]: 2025-12-06 07:20:14.705 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 18df4458-2006-4721-82e1-760c93301d0c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:20:14 compute-0 systemd[1]: Started libpod-conmon-b38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7.scope.
Dec 06 07:20:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ea7e50fdac66e134fb044fb9e3a2947523a4715986d4a6982733c2962ff945/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:20:14 compute-0 podman[306053]: 2025-12-06 07:20:14.649019475 +0000 UTC m=+0.025977015 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:20:14 compute-0 podman[306053]: 2025-12-06 07:20:14.749367093 +0000 UTC m=+0.126324643 container init b38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 07:20:14 compute-0 podman[306053]: 2025-12-06 07:20:14.754676051 +0000 UTC m=+0.131633581 container start b38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:20:14 compute-0 neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c[306068]: [NOTICE]   (306072) : New worker (306074) forked
Dec 06 07:20:14 compute-0 neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c[306068]: [NOTICE]   (306072) : Loading success.
Dec 06 07:20:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:14.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:15 compute-0 ceph-mon[74339]: pgmap v1870: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 299 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:20:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4200603272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.404 251996 DEBUG nova.compute.manager [req-36ac85fd-91b4-4c2e-b984-467025d128b7 req-b96a0567-06fc-44d6-a73f-bf2cdb489b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Received event network-vif-plugged-459cc986-3132-488a-9684-df0ff049e0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.405 251996 DEBUG oslo_concurrency.lockutils [req-36ac85fd-91b4-4c2e-b984-467025d128b7 req-b96a0567-06fc-44d6-a73f-bf2cdb489b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "18df4458-2006-4721-82e1-760c93301d0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.405 251996 DEBUG oslo_concurrency.lockutils [req-36ac85fd-91b4-4c2e-b984-467025d128b7 req-b96a0567-06fc-44d6-a73f-bf2cdb489b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.405 251996 DEBUG oslo_concurrency.lockutils [req-36ac85fd-91b4-4c2e-b984-467025d128b7 req-b96a0567-06fc-44d6-a73f-bf2cdb489b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.406 251996 DEBUG nova.compute.manager [req-36ac85fd-91b4-4c2e-b984-467025d128b7 req-b96a0567-06fc-44d6-a73f-bf2cdb489b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Processing event network-vif-plugged-459cc986-3132-488a-9684-df0ff049e0b0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.406 251996 DEBUG nova.compute.manager [req-36ac85fd-91b4-4c2e-b984-467025d128b7 req-b96a0567-06fc-44d6-a73f-bf2cdb489b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Received event network-vif-plugged-459cc986-3132-488a-9684-df0ff049e0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.406 251996 DEBUG oslo_concurrency.lockutils [req-36ac85fd-91b4-4c2e-b984-467025d128b7 req-b96a0567-06fc-44d6-a73f-bf2cdb489b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "18df4458-2006-4721-82e1-760c93301d0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.407 251996 DEBUG oslo_concurrency.lockutils [req-36ac85fd-91b4-4c2e-b984-467025d128b7 req-b96a0567-06fc-44d6-a73f-bf2cdb489b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.407 251996 DEBUG oslo_concurrency.lockutils [req-36ac85fd-91b4-4c2e-b984-467025d128b7 req-b96a0567-06fc-44d6-a73f-bf2cdb489b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.407 251996 DEBUG nova.compute.manager [req-36ac85fd-91b4-4c2e-b984-467025d128b7 req-b96a0567-06fc-44d6-a73f-bf2cdb489b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] No waiting events found dispatching network-vif-plugged-459cc986-3132-488a-9684-df0ff049e0b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.408 251996 WARNING nova.compute.manager [req-36ac85fd-91b4-4c2e-b984-467025d128b7 req-b96a0567-06fc-44d6-a73f-bf2cdb489b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Received unexpected event network-vif-plugged-459cc986-3132-488a-9684-df0ff049e0b0 for instance with vm_state building and task_state spawning.
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.409 251996 DEBUG nova.compute.manager [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.413 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005615.412863, 18df4458-2006-4721-82e1-760c93301d0c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.413 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 18df4458-2006-4721-82e1-760c93301d0c] VM Resumed (Lifecycle Event)
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.415 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.419 251996 INFO nova.virt.libvirt.driver [-] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Instance spawned successfully.
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.419 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.447 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.453 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.455 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.456 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.456 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.457 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.457 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.457 251996 DEBUG nova.virt.libvirt.driver [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.480 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.504 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 18df4458-2006-4721-82e1-760c93301d0c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.573 251996 INFO nova.compute.manager [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Took 11.47 seconds to spawn the instance on the hypervisor.
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.574 251996 DEBUG nova.compute.manager [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.635 251996 INFO nova.compute.manager [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Took 12.71 seconds to build instance.
Dec 06 07:20:15 compute-0 nova_compute[251992]: 2025-12-06 07:20:15.673 251996 DEBUG oslo_concurrency.lockutils [None req-7487eb1e-60f7-4ada-94d7-f572ff89d092 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 278 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 931 KiB/s rd, 2.3 MiB/s wr, 126 op/s
Dec 06 07:20:16 compute-0 nova_compute[251992]: 2025-12-06 07:20:16.185 251996 DEBUG oslo_concurrency.lockutils [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "288aae5a-11e0-4906-903d-acea3cebcf63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:16 compute-0 nova_compute[251992]: 2025-12-06 07:20:16.186 251996 DEBUG oslo_concurrency.lockutils [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:16 compute-0 nova_compute[251992]: 2025-12-06 07:20:16.186 251996 DEBUG oslo_concurrency.lockutils [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:16 compute-0 nova_compute[251992]: 2025-12-06 07:20:16.186 251996 DEBUG oslo_concurrency.lockutils [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:16 compute-0 nova_compute[251992]: 2025-12-06 07:20:16.186 251996 DEBUG oslo_concurrency.lockutils [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:16 compute-0 nova_compute[251992]: 2025-12-06 07:20:16.187 251996 INFO nova.compute.manager [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Terminating instance
Dec 06 07:20:16 compute-0 nova_compute[251992]: 2025-12-06 07:20:16.188 251996 DEBUG nova.compute.manager [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:20:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:16.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2102635153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:16 compute-0 ceph-mon[74339]: pgmap v1871: 305 pgs: 305 active+clean; 278 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 931 KiB/s rd, 2.3 MiB/s wr, 126 op/s
Dec 06 07:20:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:20:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:16.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:20:17 compute-0 kernel: tapc51cc596-c2 (unregistering): left promiscuous mode
Dec 06 07:20:17 compute-0 NetworkManager[48965]: <info>  [1765005617.2983] device (tapc51cc596-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.316 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-0 ovn_controller[147168]: 2025-12-06T07:20:17Z|00276|binding|INFO|Releasing lport c51cc596-c273-4444-b624-c7f87bb78323 from this chassis (sb_readonly=0)
Dec 06 07:20:17 compute-0 ovn_controller[147168]: 2025-12-06T07:20:17Z|00277|binding|INFO|Setting lport c51cc596-c273-4444-b624-c7f87bb78323 down in Southbound
Dec 06 07:20:17 compute-0 ovn_controller[147168]: 2025-12-06T07:20:17Z|00278|binding|INFO|Removing iface tapc51cc596-c2 ovn-installed in OVS
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.321 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.324 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:a9:46 10.100.0.4'], port_security=['fa:16:3e:a3:a9:46 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '288aae5a-11e0-4906-903d-acea3cebcf63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-61a21643-77ba-4a09-8184-10dc4bd52b26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '35df5125c2cf4d29a6b975951af14910', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6207e763-a213-4f4e-8aa9-04781b6722bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=85f9937f-1b1f-4430-9972-982ebc33633b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=c51cc596-c273-4444-b624-c7f87bb78323) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.325 158118 INFO neutron.agent.ovn.metadata.agent [-] Port c51cc596-c273-4444-b624-c7f87bb78323 in datapath 61a21643-77ba-4a09-8184-10dc4bd52b26 unbound from our chassis
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.327 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 61a21643-77ba-4a09-8184-10dc4bd52b26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.328 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cded3b52-90d0-4438-a275-c32dbe7a56a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.329 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 namespace which is not needed anymore
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.337 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Dec 06 07:20:17 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000004f.scope: Consumed 16.343s CPU time.
Dec 06 07:20:17 compute-0 systemd-machined[212986]: Machine qemu-35-instance-0000004f terminated.
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.409 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.416 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.424 251996 INFO nova.virt.libvirt.driver [-] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Instance destroyed successfully.
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.425 251996 DEBUG nova.objects.instance [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lazy-loading 'resources' on Instance uuid 288aae5a-11e0-4906-903d-acea3cebcf63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:17 compute-0 podman[306084]: 2025-12-06 07:20:17.439033581 +0000 UTC m=+0.114747370 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.454 251996 DEBUG nova.virt.libvirt.vif [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:18:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-319358649',display_name='tempest-tempest.common.compute-instance-319358649',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-319358649',id=79,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCr7yYrMfc/vYIBdNKoOdmUaOBP7ItkOZSnl6KnIUpDDyT0eG/8qC7eAR3XEk9oTu2KpOhlwPPAoNOMJMN2jqpIUNlWMRBhDhCC2NIrxJ1iqIveG6g7oihNF2Fx4CQJCwg==',key_name='tempest-keypair-56698529',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:19:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='35df5125c2cf4d29a6b975951af14910',ramdisk_id='',reservation_id='r-g9xh7f4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-2041841766',owner_user_name='tempest-AttachInterfacesTestJSON-2041841766-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:19:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='06f5b46553b24b39a1493d96ec4e503e',uuid=288aae5a-11e0-4906-903d-acea3cebcf63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.455 251996 DEBUG nova.network.os_vif_util [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converting VIF {"id": "c51cc596-c273-4444-b624-c7f87bb78323", "address": "fa:16:3e:a3:a9:46", "network": {"id": "61a21643-77ba-4a09-8184-10dc4bd52b26", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-327155623-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "35df5125c2cf4d29a6b975951af14910", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc51cc596-c2", "ovs_interfaceid": "c51cc596-c273-4444-b624-c7f87bb78323", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.455 251996 DEBUG nova.network.os_vif_util [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:a9:46,bridge_name='br-int',has_traffic_filtering=True,id=c51cc596-c273-4444-b624-c7f87bb78323,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc51cc596-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.456 251996 DEBUG os_vif [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:a9:46,bridge_name='br-int',has_traffic_filtering=True,id=c51cc596-c273-4444-b624-c7f87bb78323,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc51cc596-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.458 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.458 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc51cc596-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.459 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.461 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.463 251996 INFO os_vif [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:a9:46,bridge_name='br-int',has_traffic_filtering=True,id=c51cc596-c273-4444-b624-c7f87bb78323,network=Network(61a21643-77ba-4a09-8184-10dc4bd52b26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc51cc596-c2')
Dec 06 07:20:17 compute-0 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[303774]: [NOTICE]   (303778) : haproxy version is 2.8.14-c23fe91
Dec 06 07:20:17 compute-0 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[303774]: [NOTICE]   (303778) : path to executable is /usr/sbin/haproxy
Dec 06 07:20:17 compute-0 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[303774]: [WARNING]  (303778) : Exiting Master process...
Dec 06 07:20:17 compute-0 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[303774]: [WARNING]  (303778) : Exiting Master process...
Dec 06 07:20:17 compute-0 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[303774]: [ALERT]    (303778) : Current worker (303780) exited with code 143 (Terminated)
Dec 06 07:20:17 compute-0 neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26[303774]: [WARNING]  (303778) : All workers exited. Exiting... (0)
Dec 06 07:20:17 compute-0 systemd[1]: libpod-ad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f.scope: Deactivated successfully.
Dec 06 07:20:17 compute-0 podman[306125]: 2025-12-06 07:20:17.47879264 +0000 UTC m=+0.064894691 container died ad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f-userdata-shm.mount: Deactivated successfully.
Dec 06 07:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f73aa627205d3a2d6351aa7b1b4d9296dc2494416aac81a8600fec6457a44bf0-merged.mount: Deactivated successfully.
Dec 06 07:20:17 compute-0 podman[306125]: 2025-12-06 07:20:17.523009883 +0000 UTC m=+0.109111934 container cleanup ad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:20:17 compute-0 systemd[1]: libpod-conmon-ad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f.scope: Deactivated successfully.
Dec 06 07:20:17 compute-0 podman[306184]: 2025-12-06 07:20:17.596056759 +0000 UTC m=+0.050278943 container remove ad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.602 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[eca267a5-e25a-4cf4-88ee-9c80f2370d1d]: (4, ('Sat Dec  6 07:20:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 (ad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f)\nad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f\nSat Dec  6 07:20:17 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 (ad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f)\nad5ebf8f49d36efd08a46de44142be394980d71af8d504356384d73b32c2ed7f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.606 251996 DEBUG nova.compute.manager [req-bc85c6fb-ee89-4590-8633-f2b3dff1478f req-b36e1310-aadd-4ee2-8577-8d18f247e0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-vif-unplugged-c51cc596-c273-4444-b624-c7f87bb78323 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.607 251996 DEBUG oslo_concurrency.lockutils [req-bc85c6fb-ee89-4590-8633-f2b3dff1478f req-b36e1310-aadd-4ee2-8577-8d18f247e0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.607 251996 DEBUG oslo_concurrency.lockutils [req-bc85c6fb-ee89-4590-8633-f2b3dff1478f req-b36e1310-aadd-4ee2-8577-8d18f247e0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.608 251996 DEBUG oslo_concurrency.lockutils [req-bc85c6fb-ee89-4590-8633-f2b3dff1478f req-b36e1310-aadd-4ee2-8577-8d18f247e0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.608 251996 DEBUG nova.compute.manager [req-bc85c6fb-ee89-4590-8633-f2b3dff1478f req-b36e1310-aadd-4ee2-8577-8d18f247e0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] No waiting events found dispatching network-vif-unplugged-c51cc596-c273-4444-b624-c7f87bb78323 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.608 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a645d52d-9beb-49fd-bffb-41cd09657dc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.608 251996 DEBUG nova.compute.manager [req-bc85c6fb-ee89-4590-8633-f2b3dff1478f req-b36e1310-aadd-4ee2-8577-8d18f247e0a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-vif-unplugged-c51cc596-c273-4444-b624-c7f87bb78323 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.609 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61a21643-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.611 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-0 kernel: tap61a21643-70: left promiscuous mode
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.617 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cf280637-ffe2-4f65-adcb-ba8528d68089]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.632 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ef432b30-8029-48dc-af3c-97d2105e6ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.633 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4dfb5294-aa10-4743-8e47-468083580f35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:17 compute-0 nova_compute[251992]: 2025-12-06 07:20:17.644 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.650 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9d904c90-4f8b-4eae-9ce7-2df8bf824c4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 581849, 'reachable_time': 15199, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306200, 'error': None, 'target': 'ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d61a21643\x2d77ba\x2d4a09\x2d8184\x2d10dc4bd52b26.mount: Deactivated successfully.
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.654 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-61a21643-77ba-4a09-8184-10dc4bd52b26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:20:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:17.655 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc0a936-99ef-47dd-85f9-4f0d2af6b8db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 293 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Dec 06 07:20:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2191333907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4124821027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:18 compute-0 ceph-mon[74339]: pgmap v1872: 305 pgs: 305 active+clean; 293 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Dec 06 07:20:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1593184799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:20:18
Dec 06 07:20:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:20:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:20:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'vms', 'images', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control']
Dec 06 07:20:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:20:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Dec 06 07:20:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:18.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:18 compute-0 nova_compute[251992]: 2025-12-06 07:20:18.782 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:18.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Dec 06 07:20:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Dec 06 07:20:19 compute-0 nova_compute[251992]: 2025-12-06 07:20:19.705 251996 DEBUG nova.compute.manager [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-vif-plugged-c51cc596-c273-4444-b624-c7f87bb78323 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:19 compute-0 nova_compute[251992]: 2025-12-06 07:20:19.705 251996 DEBUG oslo_concurrency.lockutils [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:19 compute-0 nova_compute[251992]: 2025-12-06 07:20:19.705 251996 DEBUG oslo_concurrency.lockutils [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:19 compute-0 nova_compute[251992]: 2025-12-06 07:20:19.706 251996 DEBUG oslo_concurrency.lockutils [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:19 compute-0 nova_compute[251992]: 2025-12-06 07:20:19.706 251996 DEBUG nova.compute.manager [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] No waiting events found dispatching network-vif-plugged-c51cc596-c273-4444-b624-c7f87bb78323 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:20:19 compute-0 nova_compute[251992]: 2025-12-06 07:20:19.706 251996 WARNING nova.compute.manager [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received unexpected event network-vif-plugged-c51cc596-c273-4444-b624-c7f87bb78323 for instance with vm_state active and task_state deleting.
Dec 06 07:20:19 compute-0 nova_compute[251992]: 2025-12-06 07:20:19.706 251996 DEBUG nova.compute.manager [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Received event network-changed-459cc986-3132-488a-9684-df0ff049e0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:19 compute-0 nova_compute[251992]: 2025-12-06 07:20:19.707 251996 DEBUG nova.compute.manager [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Refreshing instance network info cache due to event network-changed-459cc986-3132-488a-9684-df0ff049e0b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:20:19 compute-0 nova_compute[251992]: 2025-12-06 07:20:19.707 251996 DEBUG oslo_concurrency.lockutils [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-18df4458-2006-4721-82e1-760c93301d0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:20:19 compute-0 nova_compute[251992]: 2025-12-06 07:20:19.707 251996 DEBUG oslo_concurrency.lockutils [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-18df4458-2006-4721-82e1-760c93301d0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:20:19 compute-0 nova_compute[251992]: 2025-12-06 07:20:19.707 251996 DEBUG nova.network.neutron [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Refreshing network info cache for port 459cc986-3132-488a-9684-df0ff049e0b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:20:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 293 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Dec 06 07:20:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:20.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:20:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:20.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:20:21 compute-0 ceph-mon[74339]: osdmap e258: 3 total, 3 up, 3 in
Dec 06 07:20:21 compute-0 ceph-mon[74339]: pgmap v1874: 305 pgs: 305 active+clean; 293 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Dec 06 07:20:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1452383183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 227 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.4 MiB/s wr, 360 op/s
Dec 06 07:20:21 compute-0 nova_compute[251992]: 2025-12-06 07:20:21.934 251996 DEBUG nova.network.neutron [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Updated VIF entry in instance network info cache for port 459cc986-3132-488a-9684-df0ff049e0b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:20:21 compute-0 nova_compute[251992]: 2025-12-06 07:20:21.934 251996 DEBUG nova.network.neutron [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Updating instance_info_cache with network_info: [{"id": "459cc986-3132-488a-9684-df0ff049e0b0", "address": "fa:16:3e:d2:de:30", "network": {"id": "fc440c61-543f-4429-abe0-62748a6f425c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1869055720-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a16333d9a99c4d4ba7c9a1c235b6219b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap459cc986-31", "ovs_interfaceid": "459cc986-3132-488a-9684-df0ff049e0b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:21 compute-0 nova_compute[251992]: 2025-12-06 07:20:21.962 251996 DEBUG oslo_concurrency.lockutils [req-88103a33-a9f9-4347-a42e-29e4a32f593a req-12a2317b-82a8-47eb-9d52-691e14abd137 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-18df4458-2006-4721-82e1-760c93301d0c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:20:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/400921334' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:22 compute-0 ceph-mon[74339]: pgmap v1875: 305 pgs: 305 active+clean; 227 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.4 MiB/s wr, 360 op/s
Dec 06 07:20:22 compute-0 podman[306204]: 2025-12-06 07:20:22.414612612 +0000 UTC m=+0.064997773 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, 
container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:20:22 compute-0 podman[306205]: 2025-12-06 07:20:22.415736913 +0000 UTC m=+0.062995117 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Dec 06 07:20:22 compute-0 nova_compute[251992]: 2025-12-06 07:20:22.462 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:22.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:22 compute-0 nova_compute[251992]: 2025-12-06 07:20:22.783 251996 INFO nova.virt.libvirt.driver [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Deleting instance files /var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63_del
Dec 06 07:20:22 compute-0 nova_compute[251992]: 2025-12-06 07:20:22.784 251996 INFO nova.virt.libvirt.driver [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Deletion of /var/lib/nova/instances/288aae5a-11e0-4906-903d-acea3cebcf63_del complete
Dec 06 07:20:22 compute-0 nova_compute[251992]: 2025-12-06 07:20:22.842 251996 INFO nova.compute.manager [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Took 6.65 seconds to destroy the instance on the hypervisor.
Dec 06 07:20:22 compute-0 nova_compute[251992]: 2025-12-06 07:20:22.842 251996 DEBUG oslo.service.loopingcall [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:20:22 compute-0 nova_compute[251992]: 2025-12-06 07:20:22.843 251996 DEBUG nova.compute.manager [-] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:20:22 compute-0 nova_compute[251992]: 2025-12-06 07:20:22.843 251996 DEBUG nova.network.neutron [-] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:20:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:22.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:20:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3992363602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:20:23 compute-0 nova_compute[251992]: 2025-12-06 07:20:23.782 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 227 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.4 MiB/s wr, 360 op/s
Dec 06 07:20:24 compute-0 sudo[306241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:20:24 compute-0 sudo[306241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:20:24 compute-0 sudo[306241]: pam_unix(sudo:session): session closed for user root
Dec 06 07:20:24 compute-0 sudo[306266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:20:24 compute-0 sudo[306266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:20:24 compute-0 sudo[306266]: pam_unix(sudo:session): session closed for user root
Dec 06 07:20:24 compute-0 nova_compute[251992]: 2025-12-06 07:20:24.486 251996 DEBUG nova.network.neutron [-] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:24 compute-0 nova_compute[251992]: 2025-12-06 07:20:24.522 251996 INFO nova.compute.manager [-] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Took 1.68 seconds to deallocate network for instance.
Dec 06 07:20:24 compute-0 nova_compute[251992]: 2025-12-06 07:20:24.569 251996 DEBUG oslo_concurrency.lockutils [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:24 compute-0 nova_compute[251992]: 2025-12-06 07:20:24.570 251996 DEBUG oslo_concurrency.lockutils [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:24 compute-0 nova_compute[251992]: 2025-12-06 07:20:24.624 251996 DEBUG oslo_concurrency.processutils [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:24.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:24.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:25 compute-0 nova_compute[251992]: 2025-12-06 07:20:25.817 251996 DEBUG nova.compute.manager [req-aea84267-9034-44cc-beac-fe98292c94fd req-4ae70fea-3766-4c19-875d-da1c3d336abe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Received event network-vif-deleted-c51cc596-c273-4444-b624-c7f87bb78323 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0039782244556113905 of space, bias 1.0, pg target 1.1934673366834172 quantized to 32 (current 32)
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:20:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 227 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.6 MiB/s wr, 348 op/s
Dec 06 07:20:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/511668033' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:26 compute-0 ceph-mon[74339]: pgmap v1876: 305 pgs: 305 active+clean; 227 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.4 MiB/s wr, 360 op/s
Dec 06 07:20:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:20:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648838874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:26 compute-0 nova_compute[251992]: 2025-12-06 07:20:26.371 251996 DEBUG oslo_concurrency.processutils [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.746s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:26 compute-0 nova_compute[251992]: 2025-12-06 07:20:26.379 251996 DEBUG nova.compute.provider_tree [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:20:26 compute-0 nova_compute[251992]: 2025-12-06 07:20:26.397 251996 DEBUG nova.scheduler.client.report [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:20:26 compute-0 nova_compute[251992]: 2025-12-06 07:20:26.435 251996 DEBUG oslo_concurrency.lockutils [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:26 compute-0 nova_compute[251992]: 2025-12-06 07:20:26.466 251996 INFO nova.scheduler.client.report [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Deleted allocations for instance 288aae5a-11e0-4906-903d-acea3cebcf63
Dec 06 07:20:26 compute-0 nova_compute[251992]: 2025-12-06 07:20:26.537 251996 DEBUG oslo_concurrency.lockutils [None req-2453f36b-8426-4583-ad7f-e517d6f2ff7f 06f5b46553b24b39a1493d96ec4e503e 35df5125c2cf4d29a6b975951af14910 - - default default] Lock "288aae5a-11e0-4906-903d-acea3cebcf63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:26.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:26.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:27 compute-0 ceph-mon[74339]: pgmap v1877: 305 pgs: 305 active+clean; 227 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.6 MiB/s wr, 348 op/s
Dec 06 07:20:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1648838874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:27 compute-0 nova_compute[251992]: 2025-12-06 07:20:27.504 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 227 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.3 MiB/s wr, 358 op/s
Dec 06 07:20:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:28.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:28 compute-0 nova_compute[251992]: 2025-12-06 07:20:28.785 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:28.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:29 compute-0 nova_compute[251992]: 2025-12-06 07:20:29.153 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:29 compute-0 nova_compute[251992]: 2025-12-06 07:20:29.154 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:29 compute-0 nova_compute[251992]: 2025-12-06 07:20:29.175 251996 DEBUG nova.compute.manager [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:20:29 compute-0 nova_compute[251992]: 2025-12-06 07:20:29.238 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:29 compute-0 nova_compute[251992]: 2025-12-06 07:20:29.239 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:29 compute-0 nova_compute[251992]: 2025-12-06 07:20:29.246 251996 DEBUG nova.virt.hardware [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:20:29 compute-0 nova_compute[251992]: 2025-12-06 07:20:29.247 251996 INFO nova.compute.claims [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:20:29 compute-0 nova_compute[251992]: 2025-12-06 07:20:29.356 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 227 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.2 MiB/s wr, 347 op/s
Dec 06 07:20:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:30.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:30.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:20:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3117456983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:30 compute-0 nova_compute[251992]: 2025-12-06 07:20:30.952 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:30 compute-0 nova_compute[251992]: 2025-12-06 07:20:30.960 251996 DEBUG nova.compute.provider_tree [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:20:30 compute-0 nova_compute[251992]: 2025-12-06 07:20:30.980 251996 DEBUG nova.scheduler.client.report [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:30.999 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.000 251996 DEBUG nova.compute.manager [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.064 251996 DEBUG nova.compute.manager [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.064 251996 DEBUG nova.network.neutron [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.086 251996 INFO nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.108 251996 DEBUG nova.compute.manager [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.214 251996 DEBUG nova.compute.manager [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.216 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.216 251996 INFO nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Creating image(s)
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.254 251996 DEBUG nova.storage.rbd_utils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.290 251996 DEBUG nova.storage.rbd_utils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.320 251996 DEBUG nova.storage.rbd_utils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.324 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.358 251996 DEBUG nova.policy [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '627c36bb63534e52a4b1d5adf47e6ffd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '929e2be1488d4b80b7ad8946093a6abe', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.409 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.410 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.411 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.411 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.442 251996 DEBUG nova.storage.rbd_utils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:31 compute-0 nova_compute[251992]: 2025-12-06 07:20:31.446 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 242 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 5.1 MiB/s wr, 373 op/s
Dec 06 07:20:32 compute-0 ceph-mon[74339]: pgmap v1878: 305 pgs: 305 active+clean; 227 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.3 MiB/s wr, 358 op/s
Dec 06 07:20:32 compute-0 nova_compute[251992]: 2025-12-06 07:20:32.422 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005617.420941, 288aae5a-11e0-4906-903d-acea3cebcf63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:32 compute-0 nova_compute[251992]: 2025-12-06 07:20:32.423 251996 INFO nova.compute.manager [-] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] VM Stopped (Lifecycle Event)
Dec 06 07:20:32 compute-0 nova_compute[251992]: 2025-12-06 07:20:32.489 251996 DEBUG nova.compute.manager [None req-cee28137-a97c-4d9b-ac6f-3e97b8b61531 - - - - - -] [instance: 288aae5a-11e0-4906-903d-acea3cebcf63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:32 compute-0 nova_compute[251992]: 2025-12-06 07:20:32.507 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:32.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:32 compute-0 nova_compute[251992]: 2025-12-06 07:20:32.767 251996 DEBUG nova.network.neutron [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Successfully created port: 382a0d3e-d0a9-40ed-80e9-3c462d98181c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:20:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:32.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:32 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 06 07:20:33 compute-0 ceph-mon[74339]: pgmap v1879: 305 pgs: 305 active+clean; 227 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.2 MiB/s wr, 347 op/s
Dec 06 07:20:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3117456983' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:33 compute-0 ceph-mon[74339]: pgmap v1880: 305 pgs: 305 active+clean; 242 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 5.1 MiB/s wr, 373 op/s
Dec 06 07:20:33 compute-0 ovn_controller[147168]: 2025-12-06T07:20:33Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:de:30 10.100.0.11
Dec 06 07:20:33 compute-0 ovn_controller[147168]: 2025-12-06T07:20:33Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:de:30 10.100.0.11
Dec 06 07:20:33 compute-0 nova_compute[251992]: 2025-12-06 07:20:33.786 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 243 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.6 MiB/s wr, 238 op/s
Dec 06 07:20:34 compute-0 nova_compute[251992]: 2025-12-06 07:20:34.138 251996 DEBUG nova.network.neutron [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Successfully updated port: 382a0d3e-d0a9-40ed-80e9-3c462d98181c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:20:34 compute-0 nova_compute[251992]: 2025-12-06 07:20:34.156 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:20:34 compute-0 nova_compute[251992]: 2025-12-06 07:20:34.156 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:20:34 compute-0 nova_compute[251992]: 2025-12-06 07:20:34.156 251996 DEBUG nova.network.neutron [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:20:34 compute-0 nova_compute[251992]: 2025-12-06 07:20:34.266 251996 DEBUG nova.compute.manager [req-a547c742-3add-4268-8b60-e1af8f43dfc5 req-39c98cfd-46c8-43df-a74b-6433543795b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-changed-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:34 compute-0 nova_compute[251992]: 2025-12-06 07:20:34.266 251996 DEBUG nova.compute.manager [req-a547c742-3add-4268-8b60-e1af8f43dfc5 req-39c98cfd-46c8-43df-a74b-6433543795b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Refreshing instance network info cache due to event network-changed-382a0d3e-d0a9-40ed-80e9-3c462d98181c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:20:34 compute-0 nova_compute[251992]: 2025-12-06 07:20:34.266 251996 DEBUG oslo_concurrency.lockutils [req-a547c742-3add-4268-8b60-e1af8f43dfc5 req-39c98cfd-46c8-43df-a74b-6433543795b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:20:34 compute-0 nova_compute[251992]: 2025-12-06 07:20:34.336 251996 DEBUG nova.network.neutron [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:20:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:34.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:34 compute-0 ceph-mon[74339]: pgmap v1881: 305 pgs: 305 active+clean; 243 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.6 MiB/s wr, 238 op/s
Dec 06 07:20:34 compute-0 nova_compute[251992]: 2025-12-06 07:20:34.827 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:34.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:34 compute-0 nova_compute[251992]: 2025-12-06 07:20:34.921 251996 DEBUG nova.storage.rbd_utils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] resizing rbd image 46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:20:35 compute-0 ovn_controller[147168]: 2025-12-06T07:20:35Z|00279|binding|INFO|Releasing lport e468a8b5-f72b-484b-afa0-58f88617a344 from this chassis (sb_readonly=0)
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.356 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.691 251996 DEBUG nova.objects.instance [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'migration_context' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.710 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.711 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Ensure instance console log exists: /var/lib/nova/instances/46ad6692-490b-41f5-9d5d-d70ddcf61e04/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.712 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.712 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.712 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.767 251996 DEBUG nova.network.neutron [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Updating instance_info_cache with network_info: [{"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.789 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.790 251996 DEBUG nova.compute.manager [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Instance network_info: |[{"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.790 251996 DEBUG oslo_concurrency.lockutils [req-a547c742-3add-4268-8b60-e1af8f43dfc5 req-39c98cfd-46c8-43df-a74b-6433543795b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.791 251996 DEBUG nova.network.neutron [req-a547c742-3add-4268-8b60-e1af8f43dfc5 req-39c98cfd-46c8-43df-a74b-6433543795b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Refreshing network info cache for port 382a0d3e-d0a9-40ed-80e9-3c462d98181c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.793 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Start _get_guest_xml network_info=[{"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.798 251996 WARNING nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.803 251996 DEBUG nova.virt.libvirt.host [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.804 251996 DEBUG nova.virt.libvirt.host [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.812 251996 DEBUG nova.virt.libvirt.host [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.812 251996 DEBUG nova.virt.libvirt.host [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.814 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.815 251996 DEBUG nova.virt.hardware [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.815 251996 DEBUG nova.virt.hardware [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.816 251996 DEBUG nova.virt.hardware [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.816 251996 DEBUG nova.virt.hardware [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.816 251996 DEBUG nova.virt.hardware [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.816 251996 DEBUG nova.virt.hardware [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.817 251996 DEBUG nova.virt.hardware [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.817 251996 DEBUG nova.virt.hardware [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.817 251996 DEBUG nova.virt.hardware [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.817 251996 DEBUG nova.virt.hardware [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.818 251996 DEBUG nova.virt.hardware [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:20:35 compute-0 nova_compute[251992]: 2025-12-06 07:20:35.821 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 263 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.8 MiB/s wr, 293 op/s
Dec 06 07:20:36 compute-0 ceph-mon[74339]: pgmap v1882: 305 pgs: 305 active+clean; 263 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.8 MiB/s wr, 293 op/s
Dec 06 07:20:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:20:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4203592398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:36 compute-0 nova_compute[251992]: 2025-12-06 07:20:36.619 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.798s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:36.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:36 compute-0 nova_compute[251992]: 2025-12-06 07:20:36.980 251996 DEBUG nova.storage.rbd_utils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:36 compute-0 nova_compute[251992]: 2025-12-06 07:20:36.984 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:20:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887186267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.425 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.427 251996 DEBUG nova.virt.libvirt.vif [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2054432936',display_name='tempest-ServerActionsTestJSON-server-2054432936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2054432936',id=86,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-yj3rnpxj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:20:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=46ad6692-490b-41f5-9d5d-d70ddcf61e04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.427 251996 DEBUG nova.network.os_vif_util [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.428 251996 DEBUG nova.network.os_vif_util [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.429 251996 DEBUG nova.objects.instance [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.453 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:20:37 compute-0 nova_compute[251992]:   <uuid>46ad6692-490b-41f5-9d5d-d70ddcf61e04</uuid>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   <name>instance-00000056</name>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerActionsTestJSON-server-2054432936</nova:name>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:20:35</nova:creationTime>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <nova:user uuid="627c36bb63534e52a4b1d5adf47e6ffd">tempest-ServerActionsTestJSON-1877526843-project-member</nova:user>
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <nova:project uuid="929e2be1488d4b80b7ad8946093a6abe">tempest-ServerActionsTestJSON-1877526843</nova:project>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <nova:port uuid="382a0d3e-d0a9-40ed-80e9-3c462d98181c">
Dec 06 07:20:37 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <system>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <entry name="serial">46ad6692-490b-41f5-9d5d-d70ddcf61e04</entry>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <entry name="uuid">46ad6692-490b-41f5-9d5d-d70ddcf61e04</entry>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     </system>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   <os>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   </os>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   <features>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   </features>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk">
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       </source>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk.config">
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       </source>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:20:37 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:fe:fe:54"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <target dev="tap382a0d3e-d0"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/46ad6692-490b-41f5-9d5d-d70ddcf61e04/console.log" append="off"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <video>
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     </video>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:20:37 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:20:37 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:20:37 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:20:37 compute-0 nova_compute[251992]: </domain>
Dec 06 07:20:37 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.454 251996 DEBUG nova.compute.manager [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Preparing to wait for external event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.455 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.455 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.455 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.456 251996 DEBUG nova.virt.libvirt.vif [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2054432936',display_name='tempest-ServerActionsTestJSON-server-2054432936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2054432936',id=86,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-yj3rnpxj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:20:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=46ad6692-490b-41f5-9d5d-d70ddcf61e04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.456 251996 DEBUG nova.network.os_vif_util [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.457 251996 DEBUG nova.network.os_vif_util [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.457 251996 DEBUG os_vif [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.458 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.458 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.459 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.463 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.463 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap382a0d3e-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.464 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap382a0d3e-d0, col_values=(('external_ids', {'iface-id': '382a0d3e-d0a9-40ed-80e9-3c462d98181c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:fe:54', 'vm-uuid': '46ad6692-490b-41f5-9d5d-d70ddcf61e04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.465 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:37 compute-0 NetworkManager[48965]: <info>  [1765005637.4662] manager: (tap382a0d3e-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.467 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.471 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.472 251996 INFO os_vif [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0')
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.640 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.640 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.640 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] No VIF found with MAC fa:16:3e:fe:fe:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.641 251996 INFO nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Using config drive
Dec 06 07:20:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4203592398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2887186267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.793 251996 DEBUG nova.storage.rbd_utils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.799 251996 DEBUG nova.network.neutron [req-a547c742-3add-4268-8b60-e1af8f43dfc5 req-39c98cfd-46c8-43df-a74b-6433543795b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Updated VIF entry in instance network info cache for port 382a0d3e-d0a9-40ed-80e9-3c462d98181c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.799 251996 DEBUG nova.network.neutron [req-a547c742-3add-4268-8b60-e1af8f43dfc5 req-39c98cfd-46c8-43df-a74b-6433543795b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Updating instance_info_cache with network_info: [{"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:37 compute-0 nova_compute[251992]: 2025-12-06 07:20:37.831 251996 DEBUG oslo_concurrency.lockutils [req-a547c742-3add-4268-8b60-e1af8f43dfc5 req-39c98cfd-46c8-43df-a74b-6433543795b5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:20:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 343 MiB data, 828 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 313 op/s
Dec 06 07:20:38 compute-0 nova_compute[251992]: 2025-12-06 07:20:38.218 251996 INFO nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Creating config drive at /var/lib/nova/instances/46ad6692-490b-41f5-9d5d-d70ddcf61e04/disk.config
Dec 06 07:20:38 compute-0 nova_compute[251992]: 2025-12-06 07:20:38.225 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46ad6692-490b-41f5-9d5d-d70ddcf61e04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp937_rz7h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:38 compute-0 nova_compute[251992]: 2025-12-06 07:20:38.366 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46ad6692-490b-41f5-9d5d-d70ddcf61e04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp937_rz7h" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:38 compute-0 nova_compute[251992]: 2025-12-06 07:20:38.397 251996 DEBUG nova.storage.rbd_utils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] rbd image 46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:20:38 compute-0 nova_compute[251992]: 2025-12-06 07:20:38.401 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/46ad6692-490b-41f5-9d5d-d70ddcf61e04/disk.config 46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:38.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:38 compute-0 nova_compute[251992]: 2025-12-06 07:20:38.789 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:38.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:39 compute-0 ceph-mon[74339]: pgmap v1883: 305 pgs: 305 active+clean; 343 MiB data, 828 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 313 op/s
Dec 06 07:20:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 343 MiB data, 828 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.2 MiB/s wr, 230 op/s
Dec 06 07:20:40 compute-0 ceph-mon[74339]: pgmap v1884: 305 pgs: 305 active+clean; 343 MiB data, 828 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.2 MiB/s wr, 230 op/s
Dec 06 07:20:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:40 compute-0 nova_compute[251992]: 2025-12-06 07:20:40.597 251996 DEBUG oslo_concurrency.processutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/46ad6692-490b-41f5-9d5d-d70ddcf61e04/disk.config 46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:40 compute-0 nova_compute[251992]: 2025-12-06 07:20:40.598 251996 INFO nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Deleting local config drive /var/lib/nova/instances/46ad6692-490b-41f5-9d5d-d70ddcf61e04/disk.config because it was imported into RBD.
Dec 06 07:20:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:40.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:40 compute-0 kernel: tap382a0d3e-d0: entered promiscuous mode
Dec 06 07:20:40 compute-0 NetworkManager[48965]: <info>  [1765005640.6571] manager: (tap382a0d3e-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/151)
Dec 06 07:20:40 compute-0 ovn_controller[147168]: 2025-12-06T07:20:40Z|00280|binding|INFO|Claiming lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c for this chassis.
Dec 06 07:20:40 compute-0 ovn_controller[147168]: 2025-12-06T07:20:40Z|00281|binding|INFO|382a0d3e-d0a9-40ed-80e9-3c462d98181c: Claiming fa:16:3e:fe:fe:54 10.100.0.6
Dec 06 07:20:40 compute-0 nova_compute[251992]: 2025-12-06 07:20:40.658 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.670 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:fe:54 10.100.0.6'], port_security=['fa:16:3e:fe:fe:54 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '46ad6692-490b-41f5-9d5d-d70ddcf61e04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '2', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=382a0d3e-d0a9-40ed-80e9-3c462d98181c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.673 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 382a0d3e-d0a9-40ed-80e9-3c462d98181c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.674 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:20:40 compute-0 ovn_controller[147168]: 2025-12-06T07:20:40Z|00282|binding|INFO|Setting lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c ovn-installed in OVS
Dec 06 07:20:40 compute-0 ovn_controller[147168]: 2025-12-06T07:20:40Z|00283|binding|INFO|Setting lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c up in Southbound
Dec 06 07:20:40 compute-0 nova_compute[251992]: 2025-12-06 07:20:40.677 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:40 compute-0 nova_compute[251992]: 2025-12-06 07:20:40.678 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.688 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e35433-88e0-4cc2-be05-4a55f20b8a74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.689 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:20:40 compute-0 systemd-machined[212986]: New machine qemu-38-instance-00000056.
Dec 06 07:20:40 compute-0 systemd-udevd[306648]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.690 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.691 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca7c2e4-3bc0-4219-b8a4-dd9cd5a4b80f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.691 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6e6e8ebe-68ab-43b9-aeca-ea7eb8674583]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.702 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[eeda5524-f888-4e47-bf58-f0caf670174b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 NetworkManager[48965]: <info>  [1765005640.7038] device (tap382a0d3e-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:20:40 compute-0 NetworkManager[48965]: <info>  [1765005640.7047] device (tap382a0d3e-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:20:40 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000056.
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.725 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dabcce16-dc21-4691-b02a-ee38184d9f1c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.749 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[98abbdcf-e4e8-407a-85bb-bb6fff8bdf62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.753 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ae035104-1293-4b58-88a3-f9d87a31c656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 NetworkManager[48965]: <info>  [1765005640.7541] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/152)
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.782 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2c0616-26e7-46a0-899b-49c057b79b33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.784 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[05ca3216-3b41-4b5e-a135-d51602cf5f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 NetworkManager[48965]: <info>  [1765005640.8024] device (tap4d599401-30): carrier: link connected
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.807 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[42efe35b-58d7-4b33-91b6-e8c48a980b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.826 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f59c18c9-bec8-4e38-b50b-d8f5322896f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591339, 'reachable_time': 30752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306680, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.840 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[da7decc6-a267-4695-8eeb-b9ebba656be1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591339, 'tstamp': 591339}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306681, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.855 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[885f17ff-f764-4388-822e-087ff65f6fa6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591339, 'reachable_time': 30752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306682, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.880 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[365a89e9-ffd7-437b-bd4e-63c588f80027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:40.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.934 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a60e08b3-4b16-4166-88db-ff5c6501df37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.935 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.935 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.935 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:40 compute-0 kernel: tap4d599401-30: entered promiscuous mode
Dec 06 07:20:40 compute-0 NetworkManager[48965]: <info>  [1765005640.9379] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Dec 06 07:20:40 compute-0 nova_compute[251992]: 2025-12-06 07:20:40.937 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:40 compute-0 nova_compute[251992]: 2025-12-06 07:20:40.939 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.940 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:40 compute-0 ovn_controller[147168]: 2025-12-06T07:20:40Z|00284|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec 06 07:20:40 compute-0 nova_compute[251992]: 2025-12-06 07:20:40.941 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:40 compute-0 nova_compute[251992]: 2025-12-06 07:20:40.955 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:40 compute-0 nova_compute[251992]: 2025-12-06 07:20:40.957 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.958 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.959 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c010cb18-86d8-4535-a96b-43911da780cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.959 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:20:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:40.960 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:20:41 compute-0 nova_compute[251992]: 2025-12-06 07:20:41.077 251996 DEBUG nova.compute.manager [req-067750eb-acba-45d3-b56c-6b651db4845f req-ef4963ae-fe88-4f6d-b2b0-84e71b0f1f0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:41 compute-0 nova_compute[251992]: 2025-12-06 07:20:41.077 251996 DEBUG oslo_concurrency.lockutils [req-067750eb-acba-45d3-b56c-6b651db4845f req-ef4963ae-fe88-4f6d-b2b0-84e71b0f1f0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:41 compute-0 nova_compute[251992]: 2025-12-06 07:20:41.078 251996 DEBUG oslo_concurrency.lockutils [req-067750eb-acba-45d3-b56c-6b651db4845f req-ef4963ae-fe88-4f6d-b2b0-84e71b0f1f0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:41 compute-0 nova_compute[251992]: 2025-12-06 07:20:41.078 251996 DEBUG oslo_concurrency.lockutils [req-067750eb-acba-45d3-b56c-6b651db4845f req-ef4963ae-fe88-4f6d-b2b0-84e71b0f1f0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:41 compute-0 nova_compute[251992]: 2025-12-06 07:20:41.078 251996 DEBUG nova.compute.manager [req-067750eb-acba-45d3-b56c-6b651db4845f req-ef4963ae-fe88-4f6d-b2b0-84e71b0f1f0d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Processing event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:20:41 compute-0 podman[306714]: 2025-12-06 07:20:41.279967978 +0000 UTC m=+0.021974491 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:20:41 compute-0 podman[306714]: 2025-12-06 07:20:41.586894957 +0000 UTC m=+0.328901450 container create d50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:20:41 compute-0 systemd[1]: Started libpod-conmon-d50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f.scope.
Dec 06 07:20:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c255f47cad573160f86e766b2f44b9fb8c6baf93858debe94caab3d0c4b6154/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:20:41 compute-0 podman[306714]: 2025-12-06 07:20:41.768761212 +0000 UTC m=+0.510767735 container init d50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:20:41 compute-0 podman[306714]: 2025-12-06 07:20:41.77417793 +0000 UTC m=+0.516184423 container start d50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:20:41 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[306747]: [NOTICE]   (306758) : New worker (306769) forked
Dec 06 07:20:41 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[306747]: [NOTICE]   (306758) : Loading success.
Dec 06 07:20:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 367 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 8.3 MiB/s wr, 300 op/s
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.097 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005642.0972445, 46ad6692-490b-41f5-9d5d-d70ddcf61e04 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.099 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] VM Started (Lifecycle Event)
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.101 251996 DEBUG nova.compute.manager [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.104 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.107 251996 INFO nova.virt.libvirt.driver [-] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Instance spawned successfully.
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.108 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.119 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.124 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.127 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.127 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.128 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.128 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.129 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.129 251996 DEBUG nova.virt.libvirt.driver [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.175 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.176 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005642.0975006, 46ad6692-490b-41f5-9d5d-d70ddcf61e04 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.176 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] VM Paused (Lifecycle Event)
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.223 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.227 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005642.1036437, 46ad6692-490b-41f5-9d5d-d70ddcf61e04 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.227 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] VM Resumed (Lifecycle Event)
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.256 251996 INFO nova.compute.manager [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Took 11.04 seconds to spawn the instance on the hypervisor.
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.257 251996 DEBUG nova.compute.manager [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.258 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.271 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.314 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.352 251996 INFO nova.compute.manager [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Took 13.13 seconds to build instance.
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.366 251996 DEBUG oslo_concurrency.lockutils [None req-fc40327d-45fd-4b64-95e2-8d10eb7cb7e6 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:42 compute-0 nova_compute[251992]: 2025-12-06 07:20:42.467 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:42.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:42.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:20:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:20:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:20:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:20:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:20:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:20:43 compute-0 nova_compute[251992]: 2025-12-06 07:20:43.572 251996 DEBUG nova.compute.manager [req-e8dffb73-3c04-4b35-b565-845ab9a42b1f req-e51c374c-ef97-4bd6-9ed0-4a03ba37cd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:43 compute-0 nova_compute[251992]: 2025-12-06 07:20:43.572 251996 DEBUG oslo_concurrency.lockutils [req-e8dffb73-3c04-4b35-b565-845ab9a42b1f req-e51c374c-ef97-4bd6-9ed0-4a03ba37cd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:43 compute-0 nova_compute[251992]: 2025-12-06 07:20:43.573 251996 DEBUG oslo_concurrency.lockutils [req-e8dffb73-3c04-4b35-b565-845ab9a42b1f req-e51c374c-ef97-4bd6-9ed0-4a03ba37cd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:43 compute-0 nova_compute[251992]: 2025-12-06 07:20:43.573 251996 DEBUG oslo_concurrency.lockutils [req-e8dffb73-3c04-4b35-b565-845ab9a42b1f req-e51c374c-ef97-4bd6-9ed0-4a03ba37cd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:43 compute-0 nova_compute[251992]: 2025-12-06 07:20:43.573 251996 DEBUG nova.compute.manager [req-e8dffb73-3c04-4b35-b565-845ab9a42b1f req-e51c374c-ef97-4bd6-9ed0-4a03ba37cd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] No waiting events found dispatching network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:20:43 compute-0 nova_compute[251992]: 2025-12-06 07:20:43.573 251996 WARNING nova.compute.manager [req-e8dffb73-3c04-4b35-b565-845ab9a42b1f req-e51c374c-ef97-4bd6-9ed0-4a03ba37cd8e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received unexpected event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c for instance with vm_state active and task_state None.
Dec 06 07:20:43 compute-0 nova_compute[251992]: 2025-12-06 07:20:43.792 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 377 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 7.5 MiB/s wr, 244 op/s
Dec 06 07:20:44 compute-0 sudo[306787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:20:44 compute-0 sudo[306787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:20:44 compute-0 sudo[306787]: pam_unix(sudo:session): session closed for user root
Dec 06 07:20:44 compute-0 sudo[306812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:20:44 compute-0 sudo[306812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:20:44 compute-0 sudo[306812]: pam_unix(sudo:session): session closed for user root
Dec 06 07:20:44 compute-0 nova_compute[251992]: 2025-12-06 07:20:44.642 251996 DEBUG nova.compute.manager [req-2479ba38-91d6-40d8-a175-48f14fcd22fd req-3969d021-a774-4411-9926-a06bc1e07858 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-changed-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:44 compute-0 nova_compute[251992]: 2025-12-06 07:20:44.644 251996 DEBUG nova.compute.manager [req-2479ba38-91d6-40d8-a175-48f14fcd22fd req-3969d021-a774-4411-9926-a06bc1e07858 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Refreshing instance network info cache due to event network-changed-382a0d3e-d0a9-40ed-80e9-3c462d98181c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:20:44 compute-0 nova_compute[251992]: 2025-12-06 07:20:44.645 251996 DEBUG oslo_concurrency.lockutils [req-2479ba38-91d6-40d8-a175-48f14fcd22fd req-3969d021-a774-4411-9926-a06bc1e07858 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:20:44 compute-0 nova_compute[251992]: 2025-12-06 07:20:44.645 251996 DEBUG oslo_concurrency.lockutils [req-2479ba38-91d6-40d8-a175-48f14fcd22fd req-3969d021-a774-4411-9926-a06bc1e07858 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:20:44 compute-0 nova_compute[251992]: 2025-12-06 07:20:44.645 251996 DEBUG nova.network.neutron [req-2479ba38-91d6-40d8-a175-48f14fcd22fd req-3969d021-a774-4411-9926-a06bc1e07858 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Refreshing network info cache for port 382a0d3e-d0a9-40ed-80e9-3c462d98181c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:20:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:44.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:44.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:45 compute-0 ceph-mon[74339]: pgmap v1885: 305 pgs: 305 active+clean; 367 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 8.3 MiB/s wr, 300 op/s
Dec 06 07:20:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 394 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 8.5 MiB/s wr, 262 op/s
Dec 06 07:20:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:46 compute-0 ceph-mon[74339]: pgmap v1886: 305 pgs: 305 active+clean; 377 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 7.5 MiB/s wr, 244 op/s
Dec 06 07:20:46 compute-0 ceph-mon[74339]: pgmap v1887: 305 pgs: 305 active+clean; 394 MiB data, 906 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 8.5 MiB/s wr, 262 op/s
Dec 06 07:20:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2225651671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:46.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:46 compute-0 nova_compute[251992]: 2025-12-06 07:20:46.826 251996 DEBUG nova.network.neutron [req-2479ba38-91d6-40d8-a175-48f14fcd22fd req-3969d021-a774-4411-9926-a06bc1e07858 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Updated VIF entry in instance network info cache for port 382a0d3e-d0a9-40ed-80e9-3c462d98181c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:20:46 compute-0 nova_compute[251992]: 2025-12-06 07:20:46.826 251996 DEBUG nova.network.neutron [req-2479ba38-91d6-40d8-a175-48f14fcd22fd req-3969d021-a774-4411-9926-a06bc1e07858 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Updating instance_info_cache with network_info: [{"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:20:46 compute-0 nova_compute[251992]: 2025-12-06 07:20:46.852 251996 DEBUG oslo_concurrency.lockutils [req-2479ba38-91d6-40d8-a175-48f14fcd22fd req-3969d021-a774-4411-9926-a06bc1e07858 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:20:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:46.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:47 compute-0 nova_compute[251992]: 2025-12-06 07:20:47.471 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 394 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.5 MiB/s wr, 274 op/s
Dec 06 07:20:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3154939193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:20:48 compute-0 podman[306839]: 2025-12-06 07:20:48.429835942 +0000 UTC m=+0.091409876 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller)
Dec 06 07:20:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:48.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:48 compute-0 nova_compute[251992]: 2025-12-06 07:20:48.794 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:48.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:49 compute-0 ceph-mon[74339]: pgmap v1888: 305 pgs: 305 active+clean; 394 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.5 MiB/s wr, 274 op/s
Dec 06 07:20:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 394 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 195 op/s
Dec 06 07:20:50 compute-0 ceph-mon[74339]: pgmap v1889: 305 pgs: 305 active+clean; 394 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 195 op/s
Dec 06 07:20:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:50.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:50.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 405 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 243 op/s
Dec 06 07:20:52 compute-0 nova_compute[251992]: 2025-12-06 07:20:52.474 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:52.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:52 compute-0 ceph-mon[74339]: pgmap v1890: 305 pgs: 305 active+clean; 405 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 243 op/s
Dec 06 07:20:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:52.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:53 compute-0 podman[306871]: 2025-12-06 07:20:53.395157423 +0000 UTC m=+0.052579187 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:20:53 compute-0 podman[306870]: 2025-12-06 07:20:53.419541979 +0000 UTC m=+0.079895383 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 06 07:20:53 compute-0 nova_compute[251992]: 2025-12-06 07:20:53.796 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 405 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.0 MiB/s wr, 206 op/s
Dec 06 07:20:53 compute-0 nova_compute[251992]: 2025-12-06 07:20:53.949 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.161 251996 DEBUG oslo_concurrency.lockutils [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Acquiring lock "18df4458-2006-4721-82e1-760c93301d0c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.161 251996 DEBUG oslo_concurrency.lockutils [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.162 251996 DEBUG oslo_concurrency.lockutils [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Acquiring lock "18df4458-2006-4721-82e1-760c93301d0c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.162 251996 DEBUG oslo_concurrency.lockutils [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.163 251996 DEBUG oslo_concurrency.lockutils [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.164 251996 INFO nova.compute.manager [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Terminating instance
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.165 251996 DEBUG nova.compute.manager [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.313 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:54.315 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:20:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:54.318 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:20:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:54 compute-0 kernel: tap459cc986-31 (unregistering): left promiscuous mode
Dec 06 07:20:54 compute-0 NetworkManager[48965]: <info>  [1765005654.8873] device (tap459cc986-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.888 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:54 compute-0 ovn_controller[147168]: 2025-12-06T07:20:54Z|00285|binding|INFO|Releasing lport 459cc986-3132-488a-9684-df0ff049e0b0 from this chassis (sb_readonly=0)
Dec 06 07:20:54 compute-0 ovn_controller[147168]: 2025-12-06T07:20:54Z|00286|binding|INFO|Setting lport 459cc986-3132-488a-9684-df0ff049e0b0 down in Southbound
Dec 06 07:20:54 compute-0 ovn_controller[147168]: 2025-12-06T07:20:54Z|00287|binding|INFO|Removing iface tap459cc986-31 ovn-installed in OVS
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.891 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:54.896 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:de:30 10.100.0.11'], port_security=['fa:16:3e:d2:de:30 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '18df4458-2006-4721-82e1-760c93301d0c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc440c61-543f-4429-abe0-62748a6f425c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a16333d9a99c4d4ba7c9a1c235b6219b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '36d25105-7c4e-4d13-845c-7d81cfe928b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fe8ed7f-5b0d-4b21-9e02-b33082df59c2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=459cc986-3132-488a-9684-df0ff049e0b0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:20:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:54.897 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 459cc986-3132-488a-9684-df0ff049e0b0 in datapath fc440c61-543f-4429-abe0-62748a6f425c unbound from our chassis
Dec 06 07:20:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:54.899 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc440c61-543f-4429-abe0-62748a6f425c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:20:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:54.901 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6f518a-0f1e-4d08-b157-c918af7efe75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:20:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:20:54.902 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c namespace which is not needed anymore
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.913 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:54.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:54 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000052.scope: Deactivated successfully.
Dec 06 07:20:54 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000052.scope: Consumed 15.170s CPU time.
Dec 06 07:20:54 compute-0 systemd-machined[212986]: Machine qemu-37-instance-00000052 terminated.
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.987 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:54 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.991 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:54.999 251996 INFO nova.virt.libvirt.driver [-] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Instance destroyed successfully.
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.001 251996 DEBUG nova.objects.instance [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lazy-loading 'resources' on Instance uuid 18df4458-2006-4721-82e1-760c93301d0c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.015 251996 DEBUG nova.virt.libvirt.vif [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:20:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-2134322638',display_name='tempest-ServersTestManualDisk-server-2134322638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-2134322638',id=82,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMHFsKMYn8W+O2ePOW2j46QuJJaUBmk6IaRT8G15KaA08D2znZoVmjBDp59M3ev7p26P9ukp128lSx41VYL7gsNB5GxX79YvZtMyYICrK66jnTzGoCNoZstRPQlw/UQig==',key_name='tempest-keypair-977953110',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a16333d9a99c4d4ba7c9a1c235b6219b',ramdisk_id='',reservation_id='r-b81q8jpb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1770307957',owner_user_name='tempest-ServersTestManualDisk-1770307957-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:20:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='585886c5d2044f729963a6485c93acd5',uuid=18df4458-2006-4721-82e1-760c93301d0c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "459cc986-3132-488a-9684-df0ff049e0b0", "address": "fa:16:3e:d2:de:30", "network": {"id": "fc440c61-543f-4429-abe0-62748a6f425c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1869055720-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a16333d9a99c4d4ba7c9a1c235b6219b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap459cc986-31", "ovs_interfaceid": "459cc986-3132-488a-9684-df0ff049e0b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.016 251996 DEBUG nova.network.os_vif_util [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Converting VIF {"id": "459cc986-3132-488a-9684-df0ff049e0b0", "address": "fa:16:3e:d2:de:30", "network": {"id": "fc440c61-543f-4429-abe0-62748a6f425c", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1869055720-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a16333d9a99c4d4ba7c9a1c235b6219b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap459cc986-31", "ovs_interfaceid": "459cc986-3132-488a-9684-df0ff049e0b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.017 251996 DEBUG nova.network.os_vif_util [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:de:30,bridge_name='br-int',has_traffic_filtering=True,id=459cc986-3132-488a-9684-df0ff049e0b0,network=Network(fc440c61-543f-4429-abe0-62748a6f425c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap459cc986-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.017 251996 DEBUG os_vif [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:de:30,bridge_name='br-int',has_traffic_filtering=True,id=459cc986-3132-488a-9684-df0ff049e0b0,network=Network(fc440c61-543f-4429-abe0-62748a6f425c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap459cc986-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.020 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.020 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap459cc986-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.021 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.023 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.028 251996 INFO os_vif [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:de:30,bridge_name='br-int',has_traffic_filtering=True,id=459cc986-3132-488a-9684-df0ff049e0b0,network=Network(fc440c61-543f-4429-abe0-62748a6f425c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap459cc986-31')
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.789 251996 DEBUG nova.compute.manager [req-eff44ba3-9210-4d04-9a9d-b705db6c1ce5 req-1d5a9d36-5e1c-4c38-b6d7-1584136a31cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Received event network-vif-unplugged-459cc986-3132-488a-9684-df0ff049e0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.790 251996 DEBUG oslo_concurrency.lockutils [req-eff44ba3-9210-4d04-9a9d-b705db6c1ce5 req-1d5a9d36-5e1c-4c38-b6d7-1584136a31cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "18df4458-2006-4721-82e1-760c93301d0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.790 251996 DEBUG oslo_concurrency.lockutils [req-eff44ba3-9210-4d04-9a9d-b705db6c1ce5 req-1d5a9d36-5e1c-4c38-b6d7-1584136a31cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.790 251996 DEBUG oslo_concurrency.lockutils [req-eff44ba3-9210-4d04-9a9d-b705db6c1ce5 req-1d5a9d36-5e1c-4c38-b6d7-1584136a31cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.790 251996 DEBUG nova.compute.manager [req-eff44ba3-9210-4d04-9a9d-b705db6c1ce5 req-1d5a9d36-5e1c-4c38-b6d7-1584136a31cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] No waiting events found dispatching network-vif-unplugged-459cc986-3132-488a-9684-df0ff049e0b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:20:55 compute-0 nova_compute[251992]: 2025-12-06 07:20:55.791 251996 DEBUG nova.compute.manager [req-eff44ba3-9210-4d04-9a9d-b705db6c1ce5 req-1d5a9d36-5e1c-4c38-b6d7-1584136a31cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Received event network-vif-unplugged-459cc986-3132-488a-9684-df0ff049e0b0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:20:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 405 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.3 MiB/s wr, 189 op/s
Dec 06 07:20:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:20:56 compute-0 neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c[306068]: [NOTICE]   (306072) : haproxy version is 2.8.14-c23fe91
Dec 06 07:20:56 compute-0 neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c[306068]: [NOTICE]   (306072) : path to executable is /usr/sbin/haproxy
Dec 06 07:20:56 compute-0 neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c[306068]: [WARNING]  (306072) : Exiting Master process...
Dec 06 07:20:56 compute-0 neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c[306068]: [WARNING]  (306072) : Exiting Master process...
Dec 06 07:20:56 compute-0 neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c[306068]: [ALERT]    (306072) : Current worker (306074) exited with code 143 (Terminated)
Dec 06 07:20:56 compute-0 neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c[306068]: [WARNING]  (306072) : All workers exited. Exiting... (0)
Dec 06 07:20:56 compute-0 systemd[1]: libpod-b38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7.scope: Deactivated successfully.
Dec 06 07:20:56 compute-0 podman[306932]: 2025-12-06 07:20:56.552539171 +0000 UTC m=+1.558401041 container died b38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:20:56 compute-0 nova_compute[251992]: 2025-12-06 07:20:56.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:56.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:56 compute-0 ceph-mon[74339]: pgmap v1891: 305 pgs: 305 active+clean; 405 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.0 MiB/s wr, 206 op/s
Dec 06 07:20:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:56.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7-userdata-shm.mount: Deactivated successfully.
Dec 06 07:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3ea7e50fdac66e134fb044fb9e3a2947523a4715986d4a6982733c2962ff945-merged.mount: Deactivated successfully.
Dec 06 07:20:57 compute-0 nova_compute[251992]: 2025-12-06 07:20:57.641 251996 DEBUG nova.compute.manager [req-a77f3323-2e87-4e4b-940d-0c7bc8be1cd6 req-482fc1d6-6a22-4e93-9313-044446a50784 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Received event network-vif-plugged-459cc986-3132-488a-9684-df0ff049e0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:20:57 compute-0 nova_compute[251992]: 2025-12-06 07:20:57.642 251996 DEBUG oslo_concurrency.lockutils [req-a77f3323-2e87-4e4b-940d-0c7bc8be1cd6 req-482fc1d6-6a22-4e93-9313-044446a50784 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "18df4458-2006-4721-82e1-760c93301d0c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:57 compute-0 nova_compute[251992]: 2025-12-06 07:20:57.643 251996 DEBUG oslo_concurrency.lockutils [req-a77f3323-2e87-4e4b-940d-0c7bc8be1cd6 req-482fc1d6-6a22-4e93-9313-044446a50784 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:57 compute-0 nova_compute[251992]: 2025-12-06 07:20:57.643 251996 DEBUG oslo_concurrency.lockutils [req-a77f3323-2e87-4e4b-940d-0c7bc8be1cd6 req-482fc1d6-6a22-4e93-9313-044446a50784 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:57 compute-0 nova_compute[251992]: 2025-12-06 07:20:57.643 251996 DEBUG nova.compute.manager [req-a77f3323-2e87-4e4b-940d-0c7bc8be1cd6 req-482fc1d6-6a22-4e93-9313-044446a50784 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] No waiting events found dispatching network-vif-plugged-459cc986-3132-488a-9684-df0ff049e0b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:20:57 compute-0 nova_compute[251992]: 2025-12-06 07:20:57.643 251996 WARNING nova.compute.manager [req-a77f3323-2e87-4e4b-940d-0c7bc8be1cd6 req-482fc1d6-6a22-4e93-9313-044446a50784 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Received unexpected event network-vif-plugged-459cc986-3132-488a-9684-df0ff049e0b0 for instance with vm_state active and task_state deleting.
Dec 06 07:20:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 405 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 209 KiB/s wr, 156 op/s
Dec 06 07:20:58 compute-0 nova_compute[251992]: 2025-12-06 07:20:58.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:58 compute-0 nova_compute[251992]: 2025-12-06 07:20:58.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:20:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:20:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:20:58.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:20:58 compute-0 nova_compute[251992]: 2025-12-06 07:20:58.701 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:58 compute-0 nova_compute[251992]: 2025-12-06 07:20:58.703 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:20:58 compute-0 nova_compute[251992]: 2025-12-06 07:20:58.703 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:20:58 compute-0 nova_compute[251992]: 2025-12-06 07:20:58.703 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:20:58 compute-0 nova_compute[251992]: 2025-12-06 07:20:58.704 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:20:58 compute-0 nova_compute[251992]: 2025-12-06 07:20:58.799 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:20:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:20:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:20:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:20:58.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:20:59 compute-0 podman[306932]: 2025-12-06 07:20:59.080519499 +0000 UTC m=+4.086381349 container cleanup b38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:20:59 compute-0 systemd[1]: libpod-conmon-b38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7.scope: Deactivated successfully.
Dec 06 07:20:59 compute-0 ceph-mon[74339]: pgmap v1892: 305 pgs: 305 active+clean; 405 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.3 MiB/s wr, 189 op/s
Dec 06 07:20:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:20:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2743282048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:20:59 compute-0 nova_compute[251992]: 2025-12-06 07:20:59.649 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.945s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:20:59 compute-0 nova_compute[251992]: 2025-12-06 07:20:59.786 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:20:59 compute-0 nova_compute[251992]: 2025-12-06 07:20:59.787 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:20:59 compute-0 nova_compute[251992]: 2025-12-06 07:20:59.790 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:20:59 compute-0 nova_compute[251992]: 2025-12-06 07:20:59.790 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:20:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 405 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 115 KiB/s wr, 88 op/s
Dec 06 07:20:59 compute-0 nova_compute[251992]: 2025-12-06 07:20:59.946 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:20:59 compute-0 nova_compute[251992]: 2025-12-06 07:20:59.947 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4329MB free_disk=20.78545379638672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:20:59 compute-0 nova_compute[251992]: 2025-12-06 07:20:59.947 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:20:59 compute-0 nova_compute[251992]: 2025-12-06 07:20:59.947 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:00 compute-0 nova_compute[251992]: 2025-12-06 07:21:00.023 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:00 compute-0 nova_compute[251992]: 2025-12-06 07:21:00.045 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 18df4458-2006-4721-82e1-760c93301d0c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:21:00 compute-0 nova_compute[251992]: 2025-12-06 07:21:00.046 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 46ad6692-490b-41f5-9d5d-d70ddcf61e04 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:21:00 compute-0 nova_compute[251992]: 2025-12-06 07:21:00.046 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:21:00 compute-0 nova_compute[251992]: 2025-12-06 07:21:00.046 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:21:00 compute-0 nova_compute[251992]: 2025-12-06 07:21:00.098 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:00 compute-0 podman[307012]: 2025-12-06 07:21:00.62135463 +0000 UTC m=+1.517412092 container remove b38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:21:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:00.630 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[372c1a5b-15e1-44e0-8f28-ee1cf9cd0cc3]: (4, ('Sat Dec  6 07:20:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c (b38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7)\nb38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7\nSat Dec  6 07:20:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c (b38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7)\nb38f5e6bcd069521f2d2938431bb1d7d85a508574ef7d6af3580b173fb3a7ad7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:00.632 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[305fc390-961b-4827-850b-d243f8478293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:00.634 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc440c61-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:00 compute-0 nova_compute[251992]: 2025-12-06 07:21:00.636 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:00 compute-0 kernel: tapfc440c61-50: left promiscuous mode
Dec 06 07:21:00 compute-0 nova_compute[251992]: 2025-12-06 07:21:00.652 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:00 compute-0 nova_compute[251992]: 2025-12-06 07:21:00.652 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:00.655 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c86a6b08-3706-4339-a647-01d080344625]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:00.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:00.671 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a3cb3eb3-ade2-4142-b657-7eee59bbd1c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:00.672 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec49149-37bc-4cfb-8480-fa20b0a43359]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:00.696 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2df7ea-9b34-440c-8976-443f823c8335]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588665, 'reachable_time': 39638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307040, 'error': None, 'target': 'ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:00 compute-0 systemd[1]: run-netns-ovnmeta\x2dfc440c61\x2d543f\x2d4429\x2dabe0\x2d62748a6f425c.mount: Deactivated successfully.
Dec 06 07:21:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:00.701 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc440c61-543f-4429-abe0-62748a6f425c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:21:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:00.702 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[690a1d01-10d6-4de8-bba6-b2693d721e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:00.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:01 compute-0 anacron[30883]: Job `cron.monthly' started
Dec 06 07:21:01 compute-0 anacron[30883]: Job `cron.monthly' terminated
Dec 06 07:21:01 compute-0 anacron[30883]: Normal exit (3 jobs run)
Dec 06 07:21:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:01.320 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:01 compute-0 nova_compute[251992]: 2025-12-06 07:21:01.727 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:01 compute-0 ceph-mon[74339]: pgmap v1893: 305 pgs: 305 active+clean; 405 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 209 KiB/s wr, 156 op/s
Dec 06 07:21:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2743282048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:01 compute-0 ceph-mon[74339]: pgmap v1894: 305 pgs: 305 active+clean; 405 MiB data, 926 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 115 KiB/s wr, 88 op/s
Dec 06 07:21:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 354 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 148 op/s
Dec 06 07:21:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:21:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546256446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:02 compute-0 nova_compute[251992]: 2025-12-06 07:21:02.141 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:02 compute-0 nova_compute[251992]: 2025-12-06 07:21:02.147 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:21:02 compute-0 nova_compute[251992]: 2025-12-06 07:21:02.163 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:21:02 compute-0 nova_compute[251992]: 2025-12-06 07:21:02.187 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:21:02 compute-0 nova_compute[251992]: 2025-12-06 07:21:02.187 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:02 compute-0 nova_compute[251992]: 2025-12-06 07:21:02.543 251996 INFO nova.virt.libvirt.driver [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Deleting instance files /var/lib/nova/instances/18df4458-2006-4721-82e1-760c93301d0c_del
Dec 06 07:21:02 compute-0 nova_compute[251992]: 2025-12-06 07:21:02.545 251996 INFO nova.virt.libvirt.driver [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Deletion of /var/lib/nova/instances/18df4458-2006-4721-82e1-760c93301d0c_del complete
Dec 06 07:21:02 compute-0 nova_compute[251992]: 2025-12-06 07:21:02.624 251996 INFO nova.compute.manager [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Took 8.46 seconds to destroy the instance on the hypervisor.
Dec 06 07:21:02 compute-0 nova_compute[251992]: 2025-12-06 07:21:02.625 251996 DEBUG oslo.service.loopingcall [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:21:02 compute-0 nova_compute[251992]: 2025-12-06 07:21:02.625 251996 DEBUG nova.compute.manager [-] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:21:02 compute-0 nova_compute[251992]: 2025-12-06 07:21:02.626 251996 DEBUG nova.network.neutron [-] [instance: 18df4458-2006-4721-82e1-760c93301d0c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:21:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:02.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:02.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3765889850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3683988776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:02 compute-0 ceph-mon[74339]: pgmap v1895: 305 pgs: 305 active+clean; 354 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 148 op/s
Dec 06 07:21:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/546256446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:03 compute-0 sudo[307059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:03 compute-0 sudo[307059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:03 compute-0 sudo[307059]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:03 compute-0 nova_compute[251992]: 2025-12-06 07:21:03.181 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:03 compute-0 nova_compute[251992]: 2025-12-06 07:21:03.181 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:03 compute-0 nova_compute[251992]: 2025-12-06 07:21:03.199 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:03 compute-0 nova_compute[251992]: 2025-12-06 07:21:03.200 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:03 compute-0 nova_compute[251992]: 2025-12-06 07:21:03.200 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:03 compute-0 sudo[307084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:21:03 compute-0 sudo[307084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:03 compute-0 sudo[307084]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:03 compute-0 sudo[307109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:03 compute-0 sudo[307109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:03 compute-0 sudo[307109]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:03 compute-0 sudo[307134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:21:03 compute-0 sudo[307134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:03 compute-0 nova_compute[251992]: 2025-12-06 07:21:03.444 251996 DEBUG nova.network.neutron [-] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:03 compute-0 nova_compute[251992]: 2025-12-06 07:21:03.466 251996 INFO nova.compute.manager [-] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Took 0.84 seconds to deallocate network for instance.
Dec 06 07:21:03 compute-0 nova_compute[251992]: 2025-12-06 07:21:03.526 251996 DEBUG oslo_concurrency.lockutils [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:03 compute-0 nova_compute[251992]: 2025-12-06 07:21:03.526 251996 DEBUG oslo_concurrency.lockutils [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:03 compute-0 nova_compute[251992]: 2025-12-06 07:21:03.595 251996 DEBUG oslo_concurrency.processutils [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:21:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:21:03 compute-0 nova_compute[251992]: 2025-12-06 07:21:03.801 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:03.828 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:03.829 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:03.831 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:03 compute-0 sudo[307134]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 347 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 112 op/s
Dec 06 07:21:03 compute-0 nova_compute[251992]: 2025-12-06 07:21:03.906 251996 DEBUG nova.compute.manager [req-83d9b0b1-5d28-4003-b7c3-074608055de2 req-f76f23a3-f2f1-46a6-9227-ed9c6019c6ff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Received event network-vif-deleted-459cc986-3132-488a-9684-df0ff049e0b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 07:21:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:21:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 07:21:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:21:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:21:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2274334364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:04 compute-0 nova_compute[251992]: 2025-12-06 07:21:04.055 251996 DEBUG oslo_concurrency.processutils [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:04 compute-0 nova_compute[251992]: 2025-12-06 07:21:04.062 251996 DEBUG nova.compute.provider_tree [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:21:04 compute-0 nova_compute[251992]: 2025-12-06 07:21:04.079 251996 DEBUG nova.scheduler.client.report [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:21:04 compute-0 nova_compute[251992]: 2025-12-06 07:21:04.099 251996 DEBUG oslo_concurrency.lockutils [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:04 compute-0 nova_compute[251992]: 2025-12-06 07:21:04.148 251996 INFO nova.scheduler.client.report [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Deleted allocations for instance 18df4458-2006-4721-82e1-760c93301d0c
Dec 06 07:21:04 compute-0 nova_compute[251992]: 2025-12-06 07:21:04.239 251996 DEBUG oslo_concurrency.lockutils [None req-3f779a1b-936e-4c09-89c6-d4eaffe41b97 585886c5d2044f729963a6485c93acd5 a16333d9a99c4d4ba7c9a1c235b6219b - - default default] Lock "18df4458-2006-4721-82e1-760c93301d0c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:04 compute-0 sudo[307212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:04 compute-0 sudo[307212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:04 compute-0 sudo[307212]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:04 compute-0 sudo[307237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:04 compute-0 sudo[307237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:04 compute-0 sudo[307237]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:04.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:04 compute-0 ceph-mon[74339]: pgmap v1896: 305 pgs: 305 active+clean; 347 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 112 op/s
Dec 06 07:21:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:21:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:21:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2274334364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:04 compute-0 ovn_controller[147168]: 2025-12-06T07:21:04Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:fe:54 10.100.0.6
Dec 06 07:21:04 compute-0 ovn_controller[147168]: 2025-12-06T07:21:04Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:fe:54 10.100.0.6
Dec 06 07:21:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:04.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:05 compute-0 nova_compute[251992]: 2025-12-06 07:21:05.025 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:21:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:21:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:21:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:21:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:21:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 300 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 618 KiB/s rd, 2.0 MiB/s wr, 107 op/s
Dec 06 07:21:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 84d8d0dd-9ae9-46ec-8d9e-35a3c538efe7 does not exist
Dec 06 07:21:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6f1f1c79-beff-48f3-bb20-8405e51a30c1 does not exist
Dec 06 07:21:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8875784b-9d3e-4a11-b72d-2363a3786b02 does not exist
Dec 06 07:21:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:21:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:21:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:21:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:21:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:21:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:21:06 compute-0 sudo[307263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:06 compute-0 sudo[307263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:06 compute-0 sudo[307263]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:06 compute-0 sudo[307288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:21:06 compute-0 sudo[307288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:06 compute-0 sudo[307288]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:06 compute-0 sudo[307313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:06 compute-0 sudo[307313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:06 compute-0 sudo[307313]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:06 compute-0 sudo[307338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:21:06 compute-0 sudo[307338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:06 compute-0 podman[307403]: 2025-12-06 07:21:06.612681658 +0000 UTC m=+0.048444743 container create b1fe125a07a12714c58c001aa61f7a2176ef33d95f7babc0c70d47583be94a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_fermi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Dec 06 07:21:06 compute-0 systemd[1]: Started libpod-conmon-b1fe125a07a12714c58c001aa61f7a2176ef33d95f7babc0c70d47583be94a6d.scope.
Dec 06 07:21:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:06.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:21:06 compute-0 podman[307403]: 2025-12-06 07:21:06.592409635 +0000 UTC m=+0.028172750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:21:06 compute-0 podman[307403]: 2025-12-06 07:21:06.703679452 +0000 UTC m=+0.139442557 container init b1fe125a07a12714c58c001aa61f7a2176ef33d95f7babc0c70d47583be94a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_fermi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 07:21:06 compute-0 podman[307403]: 2025-12-06 07:21:06.718064435 +0000 UTC m=+0.153827520 container start b1fe125a07a12714c58c001aa61f7a2176ef33d95f7babc0c70d47583be94a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 07:21:06 compute-0 podman[307403]: 2025-12-06 07:21:06.722126915 +0000 UTC m=+0.157890030 container attach b1fe125a07a12714c58c001aa61f7a2176ef33d95f7babc0c70d47583be94a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_fermi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 07:21:06 compute-0 nervous_fermi[307419]: 167 167
Dec 06 07:21:06 compute-0 systemd[1]: libpod-b1fe125a07a12714c58c001aa61f7a2176ef33d95f7babc0c70d47583be94a6d.scope: Deactivated successfully.
Dec 06 07:21:06 compute-0 podman[307403]: 2025-12-06 07:21:06.730291768 +0000 UTC m=+0.166054853 container died b1fe125a07a12714c58c001aa61f7a2176ef33d95f7babc0c70d47583be94a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 07:21:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-06c7b7650af8318c03645c3170589455e878542d1470434336790d8eb2e1adbc-merged.mount: Deactivated successfully.
Dec 06 07:21:06 compute-0 podman[307403]: 2025-12-06 07:21:06.781934169 +0000 UTC m=+0.217697254 container remove b1fe125a07a12714c58c001aa61f7a2176ef33d95f7babc0c70d47583be94a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:21:06 compute-0 systemd[1]: libpod-conmon-b1fe125a07a12714c58c001aa61f7a2176ef33d95f7babc0c70d47583be94a6d.scope: Deactivated successfully.
Dec 06 07:21:06 compute-0 podman[307443]: 2025-12-06 07:21:06.945774111 +0000 UTC m=+0.042021159 container create b1f1c321a4463f1c9c83d450516897b664fa341e069090e6df1d9f058a88b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pascal, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 07:21:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:06.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:06 compute-0 systemd[1]: Started libpod-conmon-b1f1c321a4463f1c9c83d450516897b664fa341e069090e6df1d9f058a88b49d.scope.
Dec 06 07:21:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:21:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246551fabc4b87ba1d3c8cb4c1af0d98f194ac449e0f707d75c9331aeb3346ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:07 compute-0 podman[307443]: 2025-12-06 07:21:06.926868734 +0000 UTC m=+0.023115802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:21:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246551fabc4b87ba1d3c8cb4c1af0d98f194ac449e0f707d75c9331aeb3346ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246551fabc4b87ba1d3c8cb4c1af0d98f194ac449e0f707d75c9331aeb3346ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246551fabc4b87ba1d3c8cb4c1af0d98f194ac449e0f707d75c9331aeb3346ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/246551fabc4b87ba1d3c8cb4c1af0d98f194ac449e0f707d75c9331aeb3346ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:07 compute-0 podman[307443]: 2025-12-06 07:21:07.043889799 +0000 UTC m=+0.140136867 container init b1f1c321a4463f1c9c83d450516897b664fa341e069090e6df1d9f058a88b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pascal, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 07:21:07 compute-0 podman[307443]: 2025-12-06 07:21:07.051312932 +0000 UTC m=+0.147559980 container start b1f1c321a4463f1c9c83d450516897b664fa341e069090e6df1d9f058a88b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 07:21:07 compute-0 podman[307443]: 2025-12-06 07:21:07.057296405 +0000 UTC m=+0.153543573 container attach b1f1c321a4463f1c9c83d450516897b664fa341e069090e6df1d9f058a88b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pascal, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:21:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3572484380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:21:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:21:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/934338195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:07 compute-0 nova_compute[251992]: 2025-12-06 07:21:07.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:07 compute-0 nova_compute[251992]: 2025-12-06 07:21:07.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:21:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 272 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 856 KiB/s rd, 2.2 MiB/s wr, 146 op/s
Dec 06 07:21:07 compute-0 wizardly_pascal[307460]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:21:07 compute-0 wizardly_pascal[307460]: --> relative data size: 1.0
Dec 06 07:21:07 compute-0 wizardly_pascal[307460]: --> All data devices are unavailable
Dec 06 07:21:07 compute-0 systemd[1]: libpod-b1f1c321a4463f1c9c83d450516897b664fa341e069090e6df1d9f058a88b49d.scope: Deactivated successfully.
Dec 06 07:21:07 compute-0 podman[307443]: 2025-12-06 07:21:07.915354768 +0000 UTC m=+1.011601816 container died b1f1c321a4463f1c9c83d450516897b664fa341e069090e6df1d9f058a88b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-246551fabc4b87ba1d3c8cb4c1af0d98f194ac449e0f707d75c9331aeb3346ce-merged.mount: Deactivated successfully.
Dec 06 07:21:07 compute-0 podman[307443]: 2025-12-06 07:21:07.968700764 +0000 UTC m=+1.064947812 container remove b1f1c321a4463f1c9c83d450516897b664fa341e069090e6df1d9f058a88b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 07:21:07 compute-0 systemd[1]: libpod-conmon-b1f1c321a4463f1c9c83d450516897b664fa341e069090e6df1d9f058a88b49d.scope: Deactivated successfully.
Dec 06 07:21:07 compute-0 sudo[307338]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:08 compute-0 sudo[307490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:08 compute-0 sudo[307490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:08 compute-0 sudo[307490]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:08 compute-0 sudo[307515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:21:08 compute-0 sudo[307515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:08 compute-0 sudo[307515]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:08 compute-0 sudo[307540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:08 compute-0 sudo[307540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:08 compute-0 sudo[307540]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:08 compute-0 sudo[307565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:21:08 compute-0 sudo[307565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:08 compute-0 ovn_controller[147168]: 2025-12-06T07:21:08Z|00288|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec 06 07:21:08 compute-0 ceph-mon[74339]: pgmap v1897: 305 pgs: 305 active+clean; 300 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 618 KiB/s rd, 2.0 MiB/s wr, 107 op/s
Dec 06 07:21:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:21:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:21:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:21:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2443488506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:08 compute-0 ceph-mon[74339]: pgmap v1898: 305 pgs: 305 active+clean; 272 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 856 KiB/s rd, 2.2 MiB/s wr, 146 op/s
Dec 06 07:21:08 compute-0 nova_compute[251992]: 2025-12-06 07:21:08.385 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:08 compute-0 podman[307630]: 2025-12-06 07:21:08.546082065 +0000 UTC m=+0.036375904 container create 0cd18fe21bc08c1c736f2648ced6312f280e149915e7d9ec577e9dbc0e40d2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:21:08 compute-0 systemd[1]: Started libpod-conmon-0cd18fe21bc08c1c736f2648ced6312f280e149915e7d9ec577e9dbc0e40d2d2.scope.
Dec 06 07:21:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:21:08 compute-0 podman[307630]: 2025-12-06 07:21:08.622392819 +0000 UTC m=+0.112686668 container init 0cd18fe21bc08c1c736f2648ced6312f280e149915e7d9ec577e9dbc0e40d2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jackson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 07:21:08 compute-0 podman[307630]: 2025-12-06 07:21:08.530755337 +0000 UTC m=+0.021049206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:21:08 compute-0 podman[307630]: 2025-12-06 07:21:08.628457224 +0000 UTC m=+0.118751063 container start 0cd18fe21bc08c1c736f2648ced6312f280e149915e7d9ec577e9dbc0e40d2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jackson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:21:08 compute-0 lucid_jackson[307646]: 167 167
Dec 06 07:21:08 compute-0 systemd[1]: libpod-0cd18fe21bc08c1c736f2648ced6312f280e149915e7d9ec577e9dbc0e40d2d2.scope: Deactivated successfully.
Dec 06 07:21:08 compute-0 podman[307630]: 2025-12-06 07:21:08.634047707 +0000 UTC m=+0.124341546 container attach 0cd18fe21bc08c1c736f2648ced6312f280e149915e7d9ec577e9dbc0e40d2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jackson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 07:21:08 compute-0 podman[307630]: 2025-12-06 07:21:08.634404556 +0000 UTC m=+0.124698395 container died 0cd18fe21bc08c1c736f2648ced6312f280e149915e7d9ec577e9dbc0e40d2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jackson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:21:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9a20ddc9c73ea1ba0402b99daa29073eb4570936f68726f5354c9b4fe7b85cf-merged.mount: Deactivated successfully.
Dec 06 07:21:08 compute-0 nova_compute[251992]: 2025-12-06 07:21:08.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:08 compute-0 nova_compute[251992]: 2025-12-06 07:21:08.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:21:08 compute-0 nova_compute[251992]: 2025-12-06 07:21:08.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:21:08 compute-0 podman[307630]: 2025-12-06 07:21:08.67521611 +0000 UTC m=+0.165509949 container remove 0cd18fe21bc08c1c736f2648ced6312f280e149915e7d9ec577e9dbc0e40d2d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jackson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 07:21:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:08.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:08 compute-0 systemd[1]: libpod-conmon-0cd18fe21bc08c1c736f2648ced6312f280e149915e7d9ec577e9dbc0e40d2d2.scope: Deactivated successfully.
Dec 06 07:21:08 compute-0 nova_compute[251992]: 2025-12-06 07:21:08.802 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:08 compute-0 podman[307668]: 2025-12-06 07:21:08.836835372 +0000 UTC m=+0.039695835 container create d9142970ea85fabb0fcad6b5696e8169fe8d804354e38e53118f5ec1ccabc93b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:21:08 compute-0 systemd[1]: Started libpod-conmon-d9142970ea85fabb0fcad6b5696e8169fe8d804354e38e53118f5ec1ccabc93b.scope.
Dec 06 07:21:08 compute-0 nova_compute[251992]: 2025-12-06 07:21:08.883 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:21:08 compute-0 nova_compute[251992]: 2025-12-06 07:21:08.883 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:21:08 compute-0 nova_compute[251992]: 2025-12-06 07:21:08.884 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:21:08 compute-0 nova_compute[251992]: 2025-12-06 07:21:08.884 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b99bb662bf4838547e634dabf038fcbee9a9196b7691d754c5217d336ec3f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b99bb662bf4838547e634dabf038fcbee9a9196b7691d754c5217d336ec3f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b99bb662bf4838547e634dabf038fcbee9a9196b7691d754c5217d336ec3f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b99bb662bf4838547e634dabf038fcbee9a9196b7691d754c5217d336ec3f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:08 compute-0 podman[307668]: 2025-12-06 07:21:08.821274778 +0000 UTC m=+0.024135261 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:21:08 compute-0 podman[307668]: 2025-12-06 07:21:08.92246492 +0000 UTC m=+0.125325393 container init d9142970ea85fabb0fcad6b5696e8169fe8d804354e38e53118f5ec1ccabc93b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:21:08 compute-0 podman[307668]: 2025-12-06 07:21:08.93276127 +0000 UTC m=+0.135621743 container start d9142970ea85fabb0fcad6b5696e8169fe8d804354e38e53118f5ec1ccabc93b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:21:08 compute-0 podman[307668]: 2025-12-06 07:21:08.938220799 +0000 UTC m=+0.141081272 container attach d9142970ea85fabb0fcad6b5696e8169fe8d804354e38e53118f5ec1ccabc93b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:21:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:08.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:09 compute-0 condescending_perlman[307684]: {
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:     "0": [
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:         {
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             "devices": [
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "/dev/loop3"
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             ],
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             "lv_name": "ceph_lv0",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             "lv_size": "7511998464",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             "name": "ceph_lv0",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             "tags": {
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "ceph.cluster_name": "ceph",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "ceph.crush_device_class": "",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "ceph.encrypted": "0",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "ceph.osd_id": "0",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "ceph.type": "block",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:                 "ceph.vdo": "0"
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             },
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             "type": "block",
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:             "vg_name": "ceph_vg0"
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:         }
Dec 06 07:21:09 compute-0 condescending_perlman[307684]:     ]
Dec 06 07:21:09 compute-0 condescending_perlman[307684]: }
Dec 06 07:21:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/559791660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:21:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/559791660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:21:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2657122733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:09 compute-0 systemd[1]: libpod-d9142970ea85fabb0fcad6b5696e8169fe8d804354e38e53118f5ec1ccabc93b.scope: Deactivated successfully.
Dec 06 07:21:09 compute-0 conmon[307684]: conmon d9142970ea85fabb0fca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9142970ea85fabb0fcad6b5696e8169fe8d804354e38e53118f5ec1ccabc93b.scope/container/memory.events
Dec 06 07:21:09 compute-0 podman[307668]: 2025-12-06 07:21:09.757425062 +0000 UTC m=+0.960285525 container died d9142970ea85fabb0fcad6b5696e8169fe8d804354e38e53118f5ec1ccabc93b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-64b99bb662bf4838547e634dabf038fcbee9a9196b7691d754c5217d336ec3f8-merged.mount: Deactivated successfully.
Dec 06 07:21:09 compute-0 podman[307668]: 2025-12-06 07:21:09.816270788 +0000 UTC m=+1.019131251 container remove d9142970ea85fabb0fcad6b5696e8169fe8d804354e38e53118f5ec1ccabc93b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_perlman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:21:09 compute-0 systemd[1]: libpod-conmon-d9142970ea85fabb0fcad6b5696e8169fe8d804354e38e53118f5ec1ccabc93b.scope: Deactivated successfully.
Dec 06 07:21:09 compute-0 sudo[307565]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 272 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 841 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Dec 06 07:21:09 compute-0 sudo[307707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:09 compute-0 sudo[307707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:09 compute-0 sudo[307707]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:09 compute-0 sudo[307732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:21:09 compute-0 sudo[307732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:09 compute-0 sudo[307732]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:09 compute-0 nova_compute[251992]: 2025-12-06 07:21:09.997 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005654.996229, 18df4458-2006-4721-82e1-760c93301d0c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:09 compute-0 nova_compute[251992]: 2025-12-06 07:21:09.998 251996 INFO nova.compute.manager [-] [instance: 18df4458-2006-4721-82e1-760c93301d0c] VM Stopped (Lifecycle Event)
Dec 06 07:21:10 compute-0 sudo[307757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:10 compute-0 sudo[307757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:10 compute-0 sudo[307757]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:10 compute-0 nova_compute[251992]: 2025-12-06 07:21:10.026 251996 DEBUG nova.compute.manager [None req-8becc7a7-a294-4e76-afd5-ef49c4720f63 - - - - - -] [instance: 18df4458-2006-4721-82e1-760c93301d0c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:10 compute-0 nova_compute[251992]: 2025-12-06 07:21:10.027 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:10 compute-0 sudo[307782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:21:10 compute-0 sudo[307782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:10 compute-0 podman[307847]: 2025-12-06 07:21:10.386415361 +0000 UTC m=+0.045683967 container create 137be923a1996467d270d14a0861c89199c18beef02cea12312e775f8571900b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_haibt, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:21:10 compute-0 systemd[1]: Started libpod-conmon-137be923a1996467d270d14a0861c89199c18beef02cea12312e775f8571900b.scope.
Dec 06 07:21:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:21:10 compute-0 podman[307847]: 2025-12-06 07:21:10.368559994 +0000 UTC m=+0.027828620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:21:10 compute-0 podman[307847]: 2025-12-06 07:21:10.465521891 +0000 UTC m=+0.124790517 container init 137be923a1996467d270d14a0861c89199c18beef02cea12312e775f8571900b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_haibt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:21:10 compute-0 podman[307847]: 2025-12-06 07:21:10.479338148 +0000 UTC m=+0.138606754 container start 137be923a1996467d270d14a0861c89199c18beef02cea12312e775f8571900b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 07:21:10 compute-0 podman[307847]: 2025-12-06 07:21:10.483194534 +0000 UTC m=+0.142463170 container attach 137be923a1996467d270d14a0861c89199c18beef02cea12312e775f8571900b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:21:10 compute-0 musing_haibt[307863]: 167 167
Dec 06 07:21:10 compute-0 systemd[1]: libpod-137be923a1996467d270d14a0861c89199c18beef02cea12312e775f8571900b.scope: Deactivated successfully.
Dec 06 07:21:10 compute-0 podman[307847]: 2025-12-06 07:21:10.486496014 +0000 UTC m=+0.145764620 container died 137be923a1996467d270d14a0861c89199c18beef02cea12312e775f8571900b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_haibt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Dec 06 07:21:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a45587b68fb6a645cc367f4b86046e18d209901ae67fdf0234262f40edb45d91-merged.mount: Deactivated successfully.
Dec 06 07:21:10 compute-0 podman[307847]: 2025-12-06 07:21:10.53068 +0000 UTC m=+0.189948606 container remove 137be923a1996467d270d14a0861c89199c18beef02cea12312e775f8571900b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:21:10 compute-0 systemd[1]: libpod-conmon-137be923a1996467d270d14a0861c89199c18beef02cea12312e775f8571900b.scope: Deactivated successfully.
Dec 06 07:21:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:10.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:10 compute-0 podman[307886]: 2025-12-06 07:21:10.700342691 +0000 UTC m=+0.044284149 container create c1e783097687bdb171e0e73c3229b5f3d09a0587a76e551cf861eddd4d660945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 07:21:10 compute-0 systemd[1]: Started libpod-conmon-c1e783097687bdb171e0e73c3229b5f3d09a0587a76e551cf861eddd4d660945.scope.
Dec 06 07:21:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdacd458b3c1d051c2694114a7de7da82ed176a947fef2e581f3bd97b54f9159/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdacd458b3c1d051c2694114a7de7da82ed176a947fef2e581f3bd97b54f9159/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdacd458b3c1d051c2694114a7de7da82ed176a947fef2e581f3bd97b54f9159/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdacd458b3c1d051c2694114a7de7da82ed176a947fef2e581f3bd97b54f9159/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:10 compute-0 ceph-mon[74339]: pgmap v1899: 305 pgs: 305 active+clean; 272 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 841 KiB/s rd, 2.1 MiB/s wr, 140 op/s
Dec 06 07:21:10 compute-0 podman[307886]: 2025-12-06 07:21:10.765156261 +0000 UTC m=+0.109097719 container init c1e783097687bdb171e0e73c3229b5f3d09a0587a76e551cf861eddd4d660945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:21:10 compute-0 podman[307886]: 2025-12-06 07:21:10.772805819 +0000 UTC m=+0.116747277 container start c1e783097687bdb171e0e73c3229b5f3d09a0587a76e551cf861eddd4d660945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 07:21:10 compute-0 podman[307886]: 2025-12-06 07:21:10.684962711 +0000 UTC m=+0.028904189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:21:10 compute-0 podman[307886]: 2025-12-06 07:21:10.781456275 +0000 UTC m=+0.125397753 container attach c1e783097687bdb171e0e73c3229b5f3d09a0587a76e551cf861eddd4d660945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:21:10 compute-0 nova_compute[251992]: 2025-12-06 07:21:10.819 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Updating instance_info_cache with network_info: [{"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:10 compute-0 nova_compute[251992]: 2025-12-06 07:21:10.864 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:21:10 compute-0 nova_compute[251992]: 2025-12-06 07:21:10.865 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:21:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:10.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:11 compute-0 optimistic_bell[307903]: {
Dec 06 07:21:11 compute-0 optimistic_bell[307903]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:21:11 compute-0 optimistic_bell[307903]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:21:11 compute-0 optimistic_bell[307903]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:21:11 compute-0 optimistic_bell[307903]:         "osd_id": 0,
Dec 06 07:21:11 compute-0 optimistic_bell[307903]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:21:11 compute-0 optimistic_bell[307903]:         "type": "bluestore"
Dec 06 07:21:11 compute-0 optimistic_bell[307903]:     }
Dec 06 07:21:11 compute-0 optimistic_bell[307903]: }
Dec 06 07:21:11 compute-0 systemd[1]: libpod-c1e783097687bdb171e0e73c3229b5f3d09a0587a76e551cf861eddd4d660945.scope: Deactivated successfully.
Dec 06 07:21:11 compute-0 podman[307886]: 2025-12-06 07:21:11.713855077 +0000 UTC m=+1.057796535 container died c1e783097687bdb171e0e73c3229b5f3d09a0587a76e551cf861eddd4d660945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdacd458b3c1d051c2694114a7de7da82ed176a947fef2e581f3bd97b54f9159-merged.mount: Deactivated successfully.
Dec 06 07:21:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:11 compute-0 podman[307886]: 2025-12-06 07:21:11.768626843 +0000 UTC m=+1.112568301 container remove c1e783097687bdb171e0e73c3229b5f3d09a0587a76e551cf861eddd4d660945 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 07:21:11 compute-0 systemd[1]: libpod-conmon-c1e783097687bdb171e0e73c3229b5f3d09a0587a76e551cf861eddd4d660945.scope: Deactivated successfully.
Dec 06 07:21:11 compute-0 sudo[307782]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:21:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:21:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev bf266c91-bcbe-437a-b805-c103bf88aa0a does not exist
Dec 06 07:21:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 627e5ce3-ff2b-49b4-ab5e-c65a962c5d44 does not exist
Dec 06 07:21:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e31eaab2-4691-4db4-8744-22b7a2baa1b6 does not exist
Dec 06 07:21:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 297 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 905 KiB/s rd, 3.9 MiB/s wr, 220 op/s
Dec 06 07:21:11 compute-0 sudo[307935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:11 compute-0 sudo[307935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:11 compute-0 sudo[307935]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:11 compute-0 sudo[307960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:21:11 compute-0 sudo[307960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:11 compute-0 sudo[307960]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:12.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:21:12 compute-0 ceph-mon[74339]: pgmap v1900: 305 pgs: 305 active+clean; 297 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 905 KiB/s rd, 3.9 MiB/s wr, 220 op/s
Dec 06 07:21:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3685708829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/670055331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3839594188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:12.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:21:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:21:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:21:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:21:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:21:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:21:13 compute-0 nova_compute[251992]: 2025-12-06 07:21:13.805 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 275 MiB data, 858 MiB used, 20 GiB / 21 GiB avail; 646 KiB/s rd, 2.5 MiB/s wr, 194 op/s
Dec 06 07:21:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:14.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 07:21:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:14.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 07:21:15 compute-0 nova_compute[251992]: 2025-12-06 07:21:15.030 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:15 compute-0 sshd-session[307986]: Invalid user guest from 91.202.233.33 port 56662
Dec 06 07:21:15 compute-0 ceph-mon[74339]: pgmap v1901: 305 pgs: 305 active+clean; 275 MiB data, 858 MiB used, 20 GiB / 21 GiB avail; 646 KiB/s rd, 2.5 MiB/s wr, 194 op/s
Dec 06 07:21:15 compute-0 sshd-session[307986]: Connection reset by invalid user guest 91.202.233.33 port 56662 [preauth]
Dec 06 07:21:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 218 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 508 KiB/s rd, 2.0 MiB/s wr, 236 op/s
Dec 06 07:21:16 compute-0 nova_compute[251992]: 2025-12-06 07:21:16.322 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:16 compute-0 ceph-mon[74339]: pgmap v1902: 305 pgs: 305 active+clean; 218 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 508 KiB/s rd, 2.0 MiB/s wr, 236 op/s
Dec 06 07:21:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3709436296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:16.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:16 compute-0 nova_compute[251992]: 2025-12-06 07:21:16.732 251996 DEBUG oslo_concurrency.lockutils [None req-0111384f-b4d4-43d9-a6c5-8a5e6995bbe1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:16 compute-0 nova_compute[251992]: 2025-12-06 07:21:16.733 251996 DEBUG oslo_concurrency.lockutils [None req-0111384f-b4d4-43d9-a6c5-8a5e6995bbe1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:16 compute-0 nova_compute[251992]: 2025-12-06 07:21:16.733 251996 DEBUG nova.compute.manager [None req-0111384f-b4d4-43d9-a6c5-8a5e6995bbe1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:16 compute-0 nova_compute[251992]: 2025-12-06 07:21:16.738 251996 DEBUG nova.compute.manager [None req-0111384f-b4d4-43d9-a6c5-8a5e6995bbe1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Dec 06 07:21:16 compute-0 nova_compute[251992]: 2025-12-06 07:21:16.738 251996 DEBUG nova.objects.instance [None req-0111384f-b4d4-43d9-a6c5-8a5e6995bbe1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'flavor' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:16 compute-0 nova_compute[251992]: 2025-12-06 07:21:16.774 251996 DEBUG nova.virt.libvirt.driver [None req-0111384f-b4d4-43d9-a6c5-8a5e6995bbe1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:21:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:16.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 167 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 324 op/s
Dec 06 07:21:17 compute-0 sshd-session[307989]: Connection reset by authenticating user root 91.202.233.33 port 56668 [preauth]
Dec 06 07:21:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:21:18
Dec 06 07:21:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:21:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:21:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'images', 'backups', 'volumes', 'cephfs.cephfs.meta', 'vms']
Dec 06 07:21:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:21:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:18.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:18 compute-0 nova_compute[251992]: 2025-12-06 07:21:18.807 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:18 compute-0 nova_compute[251992]: 2025-12-06 07:21:18.899 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:18 compute-0 ceph-mon[74339]: pgmap v1903: 305 pgs: 305 active+clean; 167 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 324 op/s
Dec 06 07:21:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:18.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:19 compute-0 kernel: tap382a0d3e-d0 (unregistering): left promiscuous mode
Dec 06 07:21:19 compute-0 NetworkManager[48965]: <info>  [1765005679.0374] device (tap382a0d3e-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:21:19 compute-0 nova_compute[251992]: 2025-12-06 07:21:19.044 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:19 compute-0 ovn_controller[147168]: 2025-12-06T07:21:19Z|00289|binding|INFO|Releasing lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c from this chassis (sb_readonly=0)
Dec 06 07:21:19 compute-0 ovn_controller[147168]: 2025-12-06T07:21:19Z|00290|binding|INFO|Setting lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c down in Southbound
Dec 06 07:21:19 compute-0 ovn_controller[147168]: 2025-12-06T07:21:19Z|00291|binding|INFO|Removing iface tap382a0d3e-d0 ovn-installed in OVS
Dec 06 07:21:19 compute-0 nova_compute[251992]: 2025-12-06 07:21:19.046 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.082 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:fe:54 10.100.0.6'], port_security=['fa:16:3e:fe:fe:54 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '46ad6692-490b-41f5-9d5d-d70ddcf61e04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=382a0d3e-d0a9-40ed-80e9-3c462d98181c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.083 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 382a0d3e-d0a9-40ed-80e9-3c462d98181c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.084 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.086 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7f89f0-89d9-4678-ae52-649235283b58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.086 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore
Dec 06 07:21:19 compute-0 nova_compute[251992]: 2025-12-06 07:21:19.095 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:19 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000056.scope: Deactivated successfully.
Dec 06 07:21:19 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000056.scope: Consumed 15.961s CPU time.
Dec 06 07:21:19 compute-0 systemd-machined[212986]: Machine qemu-38-instance-00000056 terminated.
Dec 06 07:21:19 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[306747]: [NOTICE]   (306758) : haproxy version is 2.8.14-c23fe91
Dec 06 07:21:19 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[306747]: [NOTICE]   (306758) : path to executable is /usr/sbin/haproxy
Dec 06 07:21:19 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[306747]: [WARNING]  (306758) : Exiting Master process...
Dec 06 07:21:19 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[306747]: [WARNING]  (306758) : Exiting Master process...
Dec 06 07:21:19 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[306747]: [ALERT]    (306758) : Current worker (306769) exited with code 143 (Terminated)
Dec 06 07:21:19 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[306747]: [WARNING]  (306758) : All workers exited. Exiting... (0)
Dec 06 07:21:19 compute-0 systemd[1]: libpod-d50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f.scope: Deactivated successfully.
Dec 06 07:21:19 compute-0 podman[308039]: 2025-12-06 07:21:19.207342731 +0000 UTC m=+0.041064743 container died d50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:21:19 compute-0 podman[307995]: 2025-12-06 07:21:19.210511077 +0000 UTC m=+0.113470098 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:21:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c255f47cad573160f86e766b2f44b9fb8c6baf93858debe94caab3d0c4b6154-merged.mount: Deactivated successfully.
Dec 06 07:21:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f-userdata-shm.mount: Deactivated successfully.
Dec 06 07:21:19 compute-0 podman[308039]: 2025-12-06 07:21:19.240798074 +0000 UTC m=+0.074520086 container cleanup d50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:21:19 compute-0 systemd[1]: libpod-conmon-d50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f.scope: Deactivated successfully.
Dec 06 07:21:19 compute-0 podman[308077]: 2025-12-06 07:21:19.29705314 +0000 UTC m=+0.036451306 container remove d50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.303 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3068c734-9e23-4533-ae84-b1ffaa06a44d]: (4, ('Sat Dec  6 07:21:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (d50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f)\nd50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f\nSat Dec  6 07:21:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (d50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f)\nd50be6621842bb957486abc919db1ee83e81aa6ac52509c9ee8f3462f81fd26f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.305 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[77d0aa66-3372-4bf7-8c9a-194e05cae9d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.307 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:19 compute-0 nova_compute[251992]: 2025-12-06 07:21:19.308 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:19 compute-0 kernel: tap4d599401-30: left promiscuous mode
Dec 06 07:21:19 compute-0 nova_compute[251992]: 2025-12-06 07:21:19.328 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.330 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f33d7d25-98e2-4e0c-b97c-86ae39c0214a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.349 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b4638102-29b8-4e5a-8042-5c79792a3ab4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.350 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[654e8032-7fc3-48e8-bd15-40d6ad8cfff1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.364 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c797fc1f-18ae-48dd-89c8-037738dfaa46]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591333, 'reachable_time': 18475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308107, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.366 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:19.366 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[2d802e76-951c-4a77-811f-6a5a5add88da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec 06 07:21:19 compute-0 nova_compute[251992]: 2025-12-06 07:21:19.792 251996 INFO nova.virt.libvirt.driver [None req-0111384f-b4d4-43d9-a6c5-8a5e6995bbe1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Instance shutdown successfully after 3 seconds.
Dec 06 07:21:19 compute-0 nova_compute[251992]: 2025-12-06 07:21:19.798 251996 INFO nova.virt.libvirt.driver [-] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Instance destroyed successfully.
Dec 06 07:21:19 compute-0 nova_compute[251992]: 2025-12-06 07:21:19.798 251996 DEBUG nova.objects.instance [None req-0111384f-b4d4-43d9-a6c5-8a5e6995bbe1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'numa_topology' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:19 compute-0 nova_compute[251992]: 2025-12-06 07:21:19.832 251996 DEBUG nova.compute.manager [None req-0111384f-b4d4-43d9-a6c5-8a5e6995bbe1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 167 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 284 op/s
Dec 06 07:21:19 compute-0 nova_compute[251992]: 2025-12-06 07:21:19.892 251996 DEBUG oslo_concurrency.lockutils [None req-0111384f-b4d4-43d9-a6c5-8a5e6995bbe1 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:20 compute-0 nova_compute[251992]: 2025-12-06 07:21:20.033 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4047505159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:20.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:20.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:21 compute-0 nova_compute[251992]: 2025-12-06 07:21:21.176 251996 DEBUG nova.objects.instance [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'flavor' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:21 compute-0 nova_compute[251992]: 2025-12-06 07:21:21.228 251996 DEBUG oslo_concurrency.lockutils [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:21:21 compute-0 nova_compute[251992]: 2025-12-06 07:21:21.229 251996 DEBUG oslo_concurrency.lockutils [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:21:21 compute-0 nova_compute[251992]: 2025-12-06 07:21:21.230 251996 DEBUG nova.network.neutron [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:21:21 compute-0 nova_compute[251992]: 2025-12-06 07:21:21.230 251996 DEBUG nova.objects.instance [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'info_cache' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:21 compute-0 ceph-mon[74339]: pgmap v1904: 305 pgs: 305 active+clean; 167 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 284 op/s
Dec 06 07:21:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 125 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 373 op/s
Dec 06 07:21:22 compute-0 ceph-mon[74339]: pgmap v1905: 305 pgs: 305 active+clean; 125 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 373 op/s
Dec 06 07:21:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:22.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.928 251996 DEBUG nova.network.neutron [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Updating instance_info_cache with network_info: [{"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.942 251996 DEBUG oslo_concurrency.lockutils [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.965 251996 INFO nova.virt.libvirt.driver [-] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Instance destroyed successfully.
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.965 251996 DEBUG nova.objects.instance [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'numa_topology' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:22.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.977 251996 DEBUG nova.objects.instance [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'resources' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.987 251996 DEBUG nova.virt.libvirt.vif [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2054432936',display_name='tempest-ServerActionsTestJSON-server-2054432936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2054432936',id=86,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-yj3rnpxj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=46ad6692-490b-41f5-9d5d-d70ddcf61e04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.987 251996 DEBUG nova.network.os_vif_util [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.988 251996 DEBUG nova.network.os_vif_util [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.988 251996 DEBUG os_vif [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.989 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.990 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap382a0d3e-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.991 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.992 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:22 compute-0 nova_compute[251992]: 2025-12-06 07:21:22.995 251996 INFO os_vif [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0')
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.001 251996 DEBUG nova.virt.libvirt.driver [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Start _get_guest_xml network_info=[{"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.005 251996 WARNING nova.virt.libvirt.driver [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.009 251996 DEBUG nova.virt.libvirt.host [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.010 251996 DEBUG nova.virt.libvirt.host [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.013 251996 DEBUG nova.virt.libvirt.host [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.013 251996 DEBUG nova.virt.libvirt.host [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.014 251996 DEBUG nova.virt.libvirt.driver [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.015 251996 DEBUG nova.virt.hardware [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.015 251996 DEBUG nova.virt.hardware [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.015 251996 DEBUG nova.virt.hardware [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.016 251996 DEBUG nova.virt.hardware [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.016 251996 DEBUG nova.virt.hardware [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.016 251996 DEBUG nova.virt.hardware [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.016 251996 DEBUG nova.virt.hardware [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.017 251996 DEBUG nova.virt.hardware [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.017 251996 DEBUG nova.virt.hardware [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.017 251996 DEBUG nova.virt.hardware [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.017 251996 DEBUG nova.virt.hardware [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.018 251996 DEBUG nova.objects.instance [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'vcpu_model' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.031 251996 DEBUG oslo_concurrency.processutils [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:21:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:21:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1246840157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.492 251996 DEBUG oslo_concurrency.processutils [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1246840157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.534 251996 DEBUG oslo_concurrency.processutils [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.809 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 121 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 94 KiB/s wr, 303 op/s
Dec 06 07:21:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:21:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1804177915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.992 251996 DEBUG oslo_concurrency.processutils [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.993 251996 DEBUG nova.virt.libvirt.vif [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2054432936',display_name='tempest-ServerActionsTestJSON-server-2054432936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2054432936',id=86,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-yj3rnpxj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=46ad6692-490b-41f5-9d5d-d70ddcf61e04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.994 251996 DEBUG nova.network.os_vif_util [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.995 251996 DEBUG nova.network.os_vif_util [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:23 compute-0 nova_compute[251992]: 2025-12-06 07:21:23.996 251996 DEBUG nova.objects.instance [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.016 251996 DEBUG nova.virt.libvirt.driver [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:21:24 compute-0 nova_compute[251992]:   <uuid>46ad6692-490b-41f5-9d5d-d70ddcf61e04</uuid>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   <name>instance-00000056</name>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerActionsTestJSON-server-2054432936</nova:name>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:21:23</nova:creationTime>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <nova:user uuid="627c36bb63534e52a4b1d5adf47e6ffd">tempest-ServerActionsTestJSON-1877526843-project-member</nova:user>
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <nova:project uuid="929e2be1488d4b80b7ad8946093a6abe">tempest-ServerActionsTestJSON-1877526843</nova:project>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <nova:port uuid="382a0d3e-d0a9-40ed-80e9-3c462d98181c">
Dec 06 07:21:24 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <system>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <entry name="serial">46ad6692-490b-41f5-9d5d-d70ddcf61e04</entry>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <entry name="uuid">46ad6692-490b-41f5-9d5d-d70ddcf61e04</entry>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     </system>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   <os>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   </os>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   <features>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   </features>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk">
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       </source>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/46ad6692-490b-41f5-9d5d-d70ddcf61e04_disk.config">
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       </source>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:21:24 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:fe:fe:54"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <target dev="tap382a0d3e-d0"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/46ad6692-490b-41f5-9d5d-d70ddcf61e04/console.log" append="off"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <video>
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     </video>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <input type="keyboard" bus="usb"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:21:24 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:21:24 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:21:24 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:21:24 compute-0 nova_compute[251992]: </domain>
Dec 06 07:21:24 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.019 251996 DEBUG nova.virt.libvirt.driver [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.020 251996 DEBUG nova.virt.libvirt.driver [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.021 251996 DEBUG nova.virt.libvirt.vif [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2054432936',display_name='tempest-ServerActionsTestJSON-server-2054432936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2054432936',id=86,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-yj3rnpxj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=46ad6692-490b-41f5-9d5d-d70ddcf61e04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.022 251996 DEBUG nova.network.os_vif_util [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.023 251996 DEBUG nova.network.os_vif_util [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.023 251996 DEBUG os_vif [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.024 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.025 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.025 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.028 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.028 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap382a0d3e-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.029 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap382a0d3e-d0, col_values=(('external_ids', {'iface-id': '382a0d3e-d0a9-40ed-80e9-3c462d98181c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:fe:54', 'vm-uuid': '46ad6692-490b-41f5-9d5d-d70ddcf61e04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.030 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:24 compute-0 NetworkManager[48965]: <info>  [1765005684.0315] manager: (tap382a0d3e-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.033 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.035 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.036 251996 INFO os_vif [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0')
Dec 06 07:21:24 compute-0 kernel: tap382a0d3e-d0: entered promiscuous mode
Dec 06 07:21:24 compute-0 NetworkManager[48965]: <info>  [1765005684.1110] manager: (tap382a0d3e-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/155)
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.113 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.114 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:24 compute-0 ovn_controller[147168]: 2025-12-06T07:21:24Z|00292|binding|INFO|Claiming lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c for this chassis.
Dec 06 07:21:24 compute-0 ovn_controller[147168]: 2025-12-06T07:21:24Z|00293|binding|INFO|382a0d3e-d0a9-40ed-80e9-3c462d98181c: Claiming fa:16:3e:fe:fe:54 10.100.0.6
Dec 06 07:21:24 compute-0 podman[308175]: 2025-12-06 07:21:24.130189202 +0000 UTC m=+0.059782643 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.136 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:fe:54 10.100.0.6'], port_security=['fa:16:3e:fe:fe:54 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '46ad6692-490b-41f5-9d5d-d70ddcf61e04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '4', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=382a0d3e-d0a9-40ed-80e9-3c462d98181c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.138 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 382a0d3e-d0a9-40ed-80e9-3c462d98181c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.139 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:21:24 compute-0 systemd-udevd[308221]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.150 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[50230a9b-0ba6-4d5d-9e81-3958f6d0e03b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.151 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:21:24 compute-0 ovn_controller[147168]: 2025-12-06T07:21:24Z|00294|binding|INFO|Setting lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c up in Southbound
Dec 06 07:21:24 compute-0 systemd-machined[212986]: New machine qemu-39-instance-00000056.
Dec 06 07:21:24 compute-0 NetworkManager[48965]: <info>  [1765005684.2262] device (tap382a0d3e-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:21:24 compute-0 sshd-session[307992]: Connection reset by authenticating user root 91.202.233.33 port 56690 [preauth]
Dec 06 07:21:24 compute-0 NetworkManager[48965]: <info>  [1765005684.2271] device (tap382a0d3e-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:21:24 compute-0 podman[308176]: 2025-12-06 07:21:24.228389022 +0000 UTC m=+0.155881475 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:21:24 compute-0 ovn_controller[147168]: 2025-12-06T07:21:24Z|00295|binding|INFO|Setting lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c ovn-installed in OVS
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.228 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.226 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.226 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4ac4fd-2961-4c01-88dc-fb66aab4fcdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000056.
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.232 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9ed823-fa3b-42e7-8ab2-14b690613b13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.247 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[fda970f6-ddd0-41bb-87e0-d7a908be8660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.261 251996 DEBUG nova.compute.manager [req-7255fa73-cc07-482c-98bc-81c1195fb14a req-b8624f77-c1d1-4d30-8ef5-bd863fd705df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-unplugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.263 251996 DEBUG oslo_concurrency.lockutils [req-7255fa73-cc07-482c-98bc-81c1195fb14a req-b8624f77-c1d1-4d30-8ef5-bd863fd705df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.263 251996 DEBUG oslo_concurrency.lockutils [req-7255fa73-cc07-482c-98bc-81c1195fb14a req-b8624f77-c1d1-4d30-8ef5-bd863fd705df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.263 251996 DEBUG oslo_concurrency.lockutils [req-7255fa73-cc07-482c-98bc-81c1195fb14a req-b8624f77-c1d1-4d30-8ef5-bd863fd705df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.263 251996 DEBUG nova.compute.manager [req-7255fa73-cc07-482c-98bc-81c1195fb14a req-b8624f77-c1d1-4d30-8ef5-bd863fd705df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] No waiting events found dispatching network-vif-unplugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.263 251996 WARNING nova.compute.manager [req-7255fa73-cc07-482c-98bc-81c1195fb14a req-b8624f77-c1d1-4d30-8ef5-bd863fd705df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received unexpected event network-vif-unplugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c for instance with vm_state stopped and task_state powering-on.
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.270 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b80b6b3c-934f-4f2f-a891-428a72f0f6ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.304 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[dc94c934-9872-4f95-80d6-78b9567a7fb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 NetworkManager[48965]: <info>  [1765005684.3112] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/156)
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.310 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[51e97a9a-7534-45d4-9854-8866bf3848f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.348 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1cfc5941-2825-40f9-8425-096897d52eca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.352 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4dea84be-ecec-4ce5-9fab-f8cb9f0184f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.367 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:24 compute-0 NetworkManager[48965]: <info>  [1765005684.3774] device (tap4d599401-30): carrier: link connected
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.383 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[6042c628-b752-421b-949d-dce5a1d7c76e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.399 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[eefbd15b-bd8c-4278-9a15-c4c69b05f655]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595696, 'reachable_time': 22666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308255, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.415 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[063b636d-2946-4c06-8cfa-ad514372d510]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 595696, 'tstamp': 595696}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308256, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.432 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff816f6-28e6-4518-8907-de4b3007c508]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595696, 'reachable_time': 22666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308257, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.478 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7069e9e4-a507-4993-8c69-eb2e9f01e679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 sudo[308259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:24 compute-0 sudo[308259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:24 compute-0 sudo[308259]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.540 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d00566b4-8558-48c1-b14b-c6065be6d3c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.542 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.542 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.542 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:24 compute-0 kernel: tap4d599401-30: entered promiscuous mode
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.543 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:24 compute-0 NetworkManager[48965]: <info>  [1765005684.5468] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.551 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.552 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:24 compute-0 ovn_controller[147168]: 2025-12-06T07:21:24Z|00296|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.556 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.557 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[097f3a4a-b8bf-4bb9-9805-9bdc930cb4c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.558 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:21:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:24.558 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.576 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:24 compute-0 sudo[308290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:24 compute-0 sudo[308290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:24 compute-0 sudo[308290]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:24 compute-0 ceph-mon[74339]: pgmap v1906: 305 pgs: 305 active+clean; 121 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 94 KiB/s wr, 303 op/s
Dec 06 07:21:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1804177915' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:24.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.844 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for 46ad6692-490b-41f5-9d5d-d70ddcf61e04 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.845 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005684.8441856, 46ad6692-490b-41f5-9d5d-d70ddcf61e04 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.846 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] VM Resumed (Lifecycle Event)
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.848 251996 DEBUG nova.compute.manager [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.852 251996 INFO nova.virt.libvirt.driver [-] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Instance rebooted successfully.
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.853 251996 DEBUG nova.compute.manager [None req-1fb5cb4b-efdb-4cf1-b4d5-7a1c08b7f935 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.867 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.871 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.892 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] During sync_power_state the instance has a pending task (powering-on). Skip.
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.893 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005684.8455236, 46ad6692-490b-41f5-9d5d-d70ddcf61e04 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.893 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] VM Started (Lifecycle Event)
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.916 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:24 compute-0 nova_compute[251992]: 2025-12-06 07:21:24.924 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:21:24 compute-0 podman[308380]: 2025-12-06 07:21:24.935745282 +0000 UTC m=+0.048558616 container create ceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:21:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:24.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:24 compute-0 systemd[1]: Started libpod-conmon-ceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e.scope.
Dec 06 07:21:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3792ace9e3cb9bbc3e65455539ae03ddbd7dac089d29b92e9984fdc0cebc61bc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:25 compute-0 podman[308380]: 2025-12-06 07:21:24.911392587 +0000 UTC m=+0.024205941 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:21:25 compute-0 podman[308380]: 2025-12-06 07:21:25.022414038 +0000 UTC m=+0.135227382 container init ceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 07:21:25 compute-0 podman[308380]: 2025-12-06 07:21:25.027507977 +0000 UTC m=+0.140321311 container start ceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:21:25 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308396]: [NOTICE]   (308400) : New worker (308402) forked
Dec 06 07:21:25 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308396]: [NOTICE]   (308400) : Loading success.
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002173955716080402 of space, bias 1.0, pg target 0.6521867148241206 quantized to 32 (current 32)
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:21:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 121 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 46 KiB/s wr, 269 op/s
Dec 06 07:21:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4052461057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:26 compute-0 ceph-mon[74339]: pgmap v1907: 305 pgs: 305 active+clean; 121 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 46 KiB/s wr, 269 op/s
Dec 06 07:21:26 compute-0 sshd-session[308253]: Connection reset by authenticating user root 91.202.233.33 port 59162 [preauth]
Dec 06 07:21:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:26.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.954 251996 DEBUG nova.compute.manager [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.955 251996 DEBUG oslo_concurrency.lockutils [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.956 251996 DEBUG oslo_concurrency.lockutils [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.956 251996 DEBUG oslo_concurrency.lockutils [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.956 251996 DEBUG nova.compute.manager [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] No waiting events found dispatching network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.956 251996 WARNING nova.compute.manager [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received unexpected event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c for instance with vm_state active and task_state None.
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.956 251996 DEBUG nova.compute.manager [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.956 251996 DEBUG oslo_concurrency.lockutils [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.957 251996 DEBUG oslo_concurrency.lockutils [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.958 251996 DEBUG oslo_concurrency.lockutils [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.958 251996 DEBUG nova.compute.manager [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] No waiting events found dispatching network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.958 251996 WARNING nova.compute.manager [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received unexpected event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c for instance with vm_state active and task_state None.
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.958 251996 DEBUG nova.compute.manager [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.958 251996 DEBUG oslo_concurrency.lockutils [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.958 251996 DEBUG oslo_concurrency.lockutils [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.959 251996 DEBUG oslo_concurrency.lockutils [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.959 251996 DEBUG nova.compute.manager [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] No waiting events found dispatching network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:26 compute-0 nova_compute[251992]: 2025-12-06 07:21:26.959 251996 WARNING nova.compute.manager [req-e88d0553-a4d7-4e36-832f-80259ce9e720 req-419230c7-6bc1-4a97-a2df-b2611863ffbe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received unexpected event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c for instance with vm_state active and task_state None.
Dec 06 07:21:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:26.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 151 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 297 op/s
Dec 06 07:21:28 compute-0 nova_compute[251992]: 2025-12-06 07:21:28.348 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:28.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:28 compute-0 nova_compute[251992]: 2025-12-06 07:21:28.812 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:28.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:29 compute-0 ceph-mon[74339]: pgmap v1908: 305 pgs: 305 active+clean; 151 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 297 op/s
Dec 06 07:21:29 compute-0 nova_compute[251992]: 2025-12-06 07:21:29.031 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 151 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 181 op/s
Dec 06 07:21:30 compute-0 sshd-session[308412]: Connection reset by authenticating user root 91.202.233.33 port 59166 [preauth]
Dec 06 07:21:30 compute-0 nova_compute[251992]: 2025-12-06 07:21:30.127 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:30 compute-0 nova_compute[251992]: 2025-12-06 07:21:30.306 251996 DEBUG nova.objects.instance [None req-de7d5a8d-48c6-4d76-895b-27080b17869f 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:30 compute-0 nova_compute[251992]: 2025-12-06 07:21:30.331 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005690.331053, 46ad6692-490b-41f5-9d5d-d70ddcf61e04 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:30 compute-0 nova_compute[251992]: 2025-12-06 07:21:30.331 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] VM Paused (Lifecycle Event)
Dec 06 07:21:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/875121014' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:30 compute-0 nova_compute[251992]: 2025-12-06 07:21:30.355 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:30 compute-0 nova_compute[251992]: 2025-12-06 07:21:30.359 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:21:30 compute-0 nova_compute[251992]: 2025-12-06 07:21:30.387 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] During sync_power_state the instance has a pending task (suspending). Skip.
Dec 06 07:21:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:30.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:30 compute-0 kernel: tap382a0d3e-d0 (unregistering): left promiscuous mode
Dec 06 07:21:30 compute-0 NetworkManager[48965]: <info>  [1765005690.8820] device (tap382a0d3e-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:21:30 compute-0 ovn_controller[147168]: 2025-12-06T07:21:30Z|00297|binding|INFO|Releasing lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c from this chassis (sb_readonly=0)
Dec 06 07:21:30 compute-0 ovn_controller[147168]: 2025-12-06T07:21:30Z|00298|binding|INFO|Setting lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c down in Southbound
Dec 06 07:21:30 compute-0 nova_compute[251992]: 2025-12-06 07:21:30.890 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:30 compute-0 ovn_controller[147168]: 2025-12-06T07:21:30Z|00299|binding|INFO|Removing iface tap382a0d3e-d0 ovn-installed in OVS
Dec 06 07:21:30 compute-0 nova_compute[251992]: 2025-12-06 07:21:30.892 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:30 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:30.903 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:fe:54 10.100.0.6'], port_security=['fa:16:3e:fe:fe:54 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '46ad6692-490b-41f5-9d5d-d70ddcf61e04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '6', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=382a0d3e-d0a9-40ed-80e9-3c462d98181c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:30 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:30.905 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 382a0d3e-d0a9-40ed-80e9-3c462d98181c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis
Dec 06 07:21:30 compute-0 nova_compute[251992]: 2025-12-06 07:21:30.907 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:30 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:30.909 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:21:30 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:30.910 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[25499f07-d2cb-424f-a18d-2c440003e017]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:30 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:30.912 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore
Dec 06 07:21:30 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000056.scope: Deactivated successfully.
Dec 06 07:21:30 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000056.scope: Consumed 6.473s CPU time.
Dec 06 07:21:30 compute-0 systemd-machined[212986]: Machine qemu-39-instance-00000056 terminated.
Dec 06 07:21:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:30.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:31 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308396]: [NOTICE]   (308400) : haproxy version is 2.8.14-c23fe91
Dec 06 07:21:31 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308396]: [NOTICE]   (308400) : path to executable is /usr/sbin/haproxy
Dec 06 07:21:31 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308396]: [WARNING]  (308400) : Exiting Master process...
Dec 06 07:21:31 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308396]: [ALERT]    (308400) : Current worker (308402) exited with code 143 (Terminated)
Dec 06 07:21:31 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308396]: [WARNING]  (308400) : All workers exited. Exiting... (0)
Dec 06 07:21:31 compute-0 systemd[1]: libpod-ceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e.scope: Deactivated successfully.
Dec 06 07:21:31 compute-0 podman[308443]: 2025-12-06 07:21:31.049485981 +0000 UTC m=+0.042439479 container died ceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:21:31 compute-0 nova_compute[251992]: 2025-12-06 07:21:31.070 251996 DEBUG nova.compute.manager [None req-de7d5a8d-48c6-4d76-895b-27080b17869f 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e-userdata-shm.mount: Deactivated successfully.
Dec 06 07:21:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3792ace9e3cb9bbc3e65455539ae03ddbd7dac089d29b92e9984fdc0cebc61bc-merged.mount: Deactivated successfully.
Dec 06 07:21:31 compute-0 podman[308443]: 2025-12-06 07:21:31.087004876 +0000 UTC m=+0.079958374 container cleanup ceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:21:31 compute-0 systemd[1]: libpod-conmon-ceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e.scope: Deactivated successfully.
Dec 06 07:21:31 compute-0 podman[308482]: 2025-12-06 07:21:31.146636534 +0000 UTC m=+0.037907747 container remove ceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:21:31 compute-0 nova_compute[251992]: 2025-12-06 07:21:31.148 251996 DEBUG nova.compute.manager [req-d569c7fe-7f1b-4a3d-807e-622417813bef req-7c034902-1657-48c6-8187-75c3fe67a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-unplugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:31 compute-0 nova_compute[251992]: 2025-12-06 07:21:31.149 251996 DEBUG oslo_concurrency.lockutils [req-d569c7fe-7f1b-4a3d-807e-622417813bef req-7c034902-1657-48c6-8187-75c3fe67a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:31 compute-0 nova_compute[251992]: 2025-12-06 07:21:31.150 251996 DEBUG oslo_concurrency.lockutils [req-d569c7fe-7f1b-4a3d-807e-622417813bef req-7c034902-1657-48c6-8187-75c3fe67a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:31 compute-0 nova_compute[251992]: 2025-12-06 07:21:31.150 251996 DEBUG oslo_concurrency.lockutils [req-d569c7fe-7f1b-4a3d-807e-622417813bef req-7c034902-1657-48c6-8187-75c3fe67a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:31 compute-0 nova_compute[251992]: 2025-12-06 07:21:31.150 251996 DEBUG nova.compute.manager [req-d569c7fe-7f1b-4a3d-807e-622417813bef req-7c034902-1657-48c6-8187-75c3fe67a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] No waiting events found dispatching network-vif-unplugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:31 compute-0 nova_compute[251992]: 2025-12-06 07:21:31.151 251996 WARNING nova.compute.manager [req-d569c7fe-7f1b-4a3d-807e-622417813bef req-7c034902-1657-48c6-8187-75c3fe67a6cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received unexpected event network-vif-unplugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c for instance with vm_state active and task_state suspending.
Dec 06 07:21:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:31.152 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[993f07c6-8c27-4623-9e84-f722c792695d]: (4, ('Sat Dec  6 07:21:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (ceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e)\nceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e\nSat Dec  6 07:21:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (ceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e)\nceb3a65a89809c8dd54be5a03d939a9059ae45055b1621768a781edc3fb26d8e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:31.154 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[17030f31-ecfa-44e3-8512-54b3ae3c7703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:31.154 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:31 compute-0 nova_compute[251992]: 2025-12-06 07:21:31.267 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:31 compute-0 nova_compute[251992]: 2025-12-06 07:21:31.282 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:31 compute-0 kernel: tap4d599401-30: left promiscuous mode
Dec 06 07:21:31 compute-0 nova_compute[251992]: 2025-12-06 07:21:31.287 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:31.289 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cd30c1f6-30f3-4de4-9f7e-b9d97eab2ab2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:31.319 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4e245ae0-944f-4d1d-8d85-751e2d941899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:31.320 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4e20e3ad-0ad5-4705-880e-1f95dbd5b7af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:31.335 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ff53a3b8-97c6-4eed-9a54-e31b9f88ff2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595688, 'reachable_time': 22452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308501, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:31.337 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:21:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:31.337 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[e3667a96-a69e-469a-a1a0-9a13ef7f82a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec 06 07:21:31 compute-0 ceph-mon[74339]: pgmap v1909: 305 pgs: 305 active+clean; 151 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 181 op/s
Dec 06 07:21:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/297117926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 167 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.258 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "946841c5-aadb-47f4-a772-8b25581f01ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.258 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.286 251996 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.370 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.370 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.377 251996 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.378 251996 INFO nova.compute.claims [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.583 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:32 compute-0 ceph-mon[74339]: pgmap v1910: 305 pgs: 305 active+clean; 167 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 196 op/s
Dec 06 07:21:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:32.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.814 251996 INFO nova.compute.manager [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Resuming
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.816 251996 DEBUG nova.objects.instance [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'flavor' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.855 251996 DEBUG oslo_concurrency.lockutils [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.856 251996 DEBUG oslo_concurrency.lockutils [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquired lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:21:32 compute-0 nova_compute[251992]: 2025-12-06 07:21:32.856 251996 DEBUG nova.network.neutron [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:21:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:32.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:21:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2130103832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.041 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.047 251996 DEBUG nova.compute.provider_tree [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.075 251996 DEBUG nova.scheduler.client.report [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.119 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.120 251996 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.236 251996 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.236 251996 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.259 251996 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.268 251996 DEBUG nova.compute.manager [req-049a0bd1-2668-4008-a4c9-069bc0320aee req-9132d587-72fd-4c3a-b07b-427fbb20d7cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.269 251996 DEBUG oslo_concurrency.lockutils [req-049a0bd1-2668-4008-a4c9-069bc0320aee req-9132d587-72fd-4c3a-b07b-427fbb20d7cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.269 251996 DEBUG oslo_concurrency.lockutils [req-049a0bd1-2668-4008-a4c9-069bc0320aee req-9132d587-72fd-4c3a-b07b-427fbb20d7cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.269 251996 DEBUG oslo_concurrency.lockutils [req-049a0bd1-2668-4008-a4c9-069bc0320aee req-9132d587-72fd-4c3a-b07b-427fbb20d7cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.269 251996 DEBUG nova.compute.manager [req-049a0bd1-2668-4008-a4c9-069bc0320aee req-9132d587-72fd-4c3a-b07b-427fbb20d7cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] No waiting events found dispatching network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.269 251996 WARNING nova.compute.manager [req-049a0bd1-2668-4008-a4c9-069bc0320aee req-9132d587-72fd-4c3a-b07b-427fbb20d7cc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received unexpected event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c for instance with vm_state suspended and task_state resuming.
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.282 251996 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.371 251996 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.372 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.372 251996 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Creating image(s)
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.401 251996 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 946841c5-aadb-47f4-a772-8b25581f01ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.433 251996 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 946841c5-aadb-47f4-a772-8b25581f01ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.460 251996 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 946841c5-aadb-47f4-a772-8b25581f01ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.465 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.492 251996 DEBUG nova.policy [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a52e2b4388994d8791443483bd42cc33', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b558585a6aa14470bdad319926a98046', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.530 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.531 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.531 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.532 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.556 251996 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 946841c5-aadb-47f4-a772-8b25581f01ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.559 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 946841c5-aadb-47f4-a772-8b25581f01ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/743799355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2130103832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:33 compute-0 nova_compute[251992]: 2025-12-06 07:21:33.814 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 167 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.033 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:34.238 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.238 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:34.239 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.299 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 946841c5-aadb-47f4-a772-8b25581f01ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.340 251996 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Successfully created port: 286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.382 251996 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] resizing rbd image 946841c5-aadb-47f4-a772-8b25581f01ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.494 251996 DEBUG nova.objects.instance [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lazy-loading 'migration_context' on Instance uuid 946841c5-aadb-47f4-a772-8b25581f01ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.514 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.514 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Ensure instance console log exists: /var/lib/nova/instances/946841c5-aadb-47f4-a772-8b25581f01ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.515 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.515 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.516 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:34.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.862 251996 DEBUG nova.network.neutron [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Updating instance_info_cache with network_info: [{"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.891 251996 DEBUG oslo_concurrency.lockutils [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Releasing lock "refresh_cache-46ad6692-490b-41f5-9d5d-d70ddcf61e04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.896 251996 DEBUG nova.virt.libvirt.vif [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2054432936',display_name='tempest-ServerActionsTestJSON-server-2054432936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2054432936',id=86,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-yj3rnpxj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=46ad6692-490b-41f5-9d5d-d70ddcf61e04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.897 251996 DEBUG nova.network.os_vif_util [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.898 251996 DEBUG nova.network.os_vif_util [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.898 251996 DEBUG os_vif [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.899 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.899 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.899 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.903 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.903 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap382a0d3e-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.903 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap382a0d3e-d0, col_values=(('external_ids', {'iface-id': '382a0d3e-d0a9-40ed-80e9-3c462d98181c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:fe:54', 'vm-uuid': '46ad6692-490b-41f5-9d5d-d70ddcf61e04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.904 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.904 251996 INFO os_vif [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0')
Dec 06 07:21:34 compute-0 nova_compute[251992]: 2025-12-06 07:21:34.926 251996 DEBUG nova.objects.instance [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'numa_topology' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2879884322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:34 compute-0 ceph-mon[74339]: pgmap v1911: 305 pgs: 305 active+clean; 167 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Dec 06 07:21:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:34.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:34 compute-0 kernel: tap382a0d3e-d0: entered promiscuous mode
Dec 06 07:21:34 compute-0 NetworkManager[48965]: <info>  [1765005694.9938] manager: (tap382a0d3e-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/158)
Dec 06 07:21:35 compute-0 ovn_controller[147168]: 2025-12-06T07:21:35Z|00300|binding|INFO|Claiming lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c for this chassis.
Dec 06 07:21:35 compute-0 ovn_controller[147168]: 2025-12-06T07:21:35Z|00301|binding|INFO|382a0d3e-d0a9-40ed-80e9-3c462d98181c: Claiming fa:16:3e:fe:fe:54 10.100.0.6
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.019 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.036 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:fe:54 10.100.0.6'], port_security=['fa:16:3e:fe:fe:54 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '46ad6692-490b-41f5-9d5d-d70ddcf61e04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '7', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=382a0d3e-d0a9-40ed-80e9-3c462d98181c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.038 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 382a0d3e-d0a9-40ed-80e9-3c462d98181c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 bound to our chassis
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.040 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:21:35 compute-0 systemd-machined[212986]: New machine qemu-40-instance-00000056.
Dec 06 07:21:35 compute-0 ovn_controller[147168]: 2025-12-06T07:21:35Z|00302|binding|INFO|Setting lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c ovn-installed in OVS
Dec 06 07:21:35 compute-0 ovn_controller[147168]: 2025-12-06T07:21:35Z|00303|binding|INFO|Setting lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c up in Southbound
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.046 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.048 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:35 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000056.
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.050 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9fab6889-9fc3-4327-a697-cbf7896648ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.051 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d599401-31 in ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.053 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d599401-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.053 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[143e7e74-0eca-4eeb-93aa-0d0f9456e76e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.054 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3a720566-d4e0-43be-9da4-4bd59d134c03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 systemd-udevd[308707]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:21:35 compute-0 NetworkManager[48965]: <info>  [1765005695.0708] device (tap382a0d3e-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:21:35 compute-0 NetworkManager[48965]: <info>  [1765005695.0722] device (tap382a0d3e-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.071 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[ef02d4f9-71d2-4ffd-a359-596b627ee599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.092 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[40e550c5-343c-4f70-b15f-abaf97b79729]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.120 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa58bfb-93a1-4bbe-94b0-a532f1b4f675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 NetworkManager[48965]: <info>  [1765005695.1270] manager: (tap4d599401-30): new Veth device (/org/freedesktop/NetworkManager/Devices/159)
Dec 06 07:21:35 compute-0 systemd-udevd[308710]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.126 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[437a10be-e2bb-472c-9fbf-625f8bcf122e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.160 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[561ae1f5-3689-46cf-8304-b66e67a9191d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.164 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a98322fd-d4be-426a-bb19-21cd9bd9dfb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 NetworkManager[48965]: <info>  [1765005695.1916] device (tap4d599401-30): carrier: link connected
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.198 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[aa0a14e1-ab2c-409a-9873-6e5ba2590f98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.215 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a30cd3b4-b4a9-4496-82dd-3315c249d828]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596778, 'reachable_time': 19493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308739, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.231 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d2db852b-fd7d-4b60-b565-51db6fdf5b4b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:4cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596778, 'tstamp': 596778}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308740, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.274 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a65ebe47-d3a2-4b36-8a2f-1af3f4d02d04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d599401-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:4c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596778, 'reachable_time': 19493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308741, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.307 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d45a3ff4-3703-4935-8298-ab8c7ab3368f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.355 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[19e16bf1-f23d-4782-af67-d62736b491ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.356 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.357 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.357 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d599401-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.359 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:35 compute-0 kernel: tap4d599401-30: entered promiscuous mode
Dec 06 07:21:35 compute-0 NetworkManager[48965]: <info>  [1765005695.3599] manager: (tap4d599401-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.362 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.363 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d599401-30, col_values=(('external_ids', {'iface-id': 'd5f15755-ab6a-4ce9-857e-63f6c0e19fd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.364 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:35 compute-0 ovn_controller[147168]: 2025-12-06T07:21:35Z|00304|binding|INFO|Releasing lport d5f15755-ab6a-4ce9-857e-63f6c0e19fd8 from this chassis (sb_readonly=0)
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.378 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.380 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.381 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[602855c8-d057-4653-9c53-5672f5a7be39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.382 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/4d599401-3772-4e38-8cd2-d774d370af64.pid.haproxy
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 4d599401-3772-4e38-8cd2-d774d370af64
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:21:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:35.383 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'env', 'PROCESS_TAG=haproxy-4d599401-3772-4e38-8cd2-d774d370af64', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d599401-3772-4e38-8cd2-d774d370af64.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.599 251996 DEBUG nova.compute.manager [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.599 251996 DEBUG oslo_concurrency.lockutils [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.599 251996 DEBUG oslo_concurrency.lockutils [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.599 251996 DEBUG oslo_concurrency.lockutils [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.600 251996 DEBUG nova.compute.manager [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] No waiting events found dispatching network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.600 251996 WARNING nova.compute.manager [req-d79d06ab-f917-474e-bea8-a0210c5c9520 req-3cf7c1a5-388a-472d-b32c-29868e3e4f7e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received unexpected event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c for instance with vm_state suspended and task_state resuming.
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.683 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for 46ad6692-490b-41f5-9d5d-d70ddcf61e04 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.684 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005695.6832547, 46ad6692-490b-41f5-9d5d-d70ddcf61e04 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.684 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] VM Started (Lifecycle Event)
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.694 251996 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Successfully updated port: 286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.699 251996 DEBUG nova.compute.manager [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.699 251996 DEBUG nova.objects.instance [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'pci_devices' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.730 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.732 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "refresh_cache-946841c5-aadb-47f4-a772-8b25581f01ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.732 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquired lock "refresh_cache-946841c5-aadb-47f4-a772-8b25581f01ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.732 251996 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.736 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.741 251996 INFO nova.virt.libvirt.driver [-] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Instance running successfully.
Dec 06 07:21:35 compute-0 virtqemud[251613]: argument unsupported: QEMU guest agent is not configured
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.744 251996 DEBUG nova.virt.libvirt.guest [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.744 251996 DEBUG nova.compute.manager [None req-05a19348-c4cf-4030-aba8-c285619236e5 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:35 compute-0 podman[308815]: 2025-12-06 07:21:35.784961038 +0000 UTC m=+0.082681558 container create cfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.797 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] During sync_power_state the instance has a pending task (resuming). Skip.
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.797 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005695.6872518, 46ad6692-490b-41f5-9d5d-d70ddcf61e04 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.798 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] VM Resumed (Lifecycle Event)
Dec 06 07:21:35 compute-0 podman[308815]: 2025-12-06 07:21:35.728394214 +0000 UTC m=+0.026114754 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.831 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:35 compute-0 systemd[1]: Started libpod-conmon-cfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620.scope.
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.833 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.845 251996 DEBUG nova.compute.manager [req-f21a6286-53cc-4724-a1c0-88e50da370b6 req-c6cffec5-4ced-47a9-9169-2e5b39856c7a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Received event network-changed-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.846 251996 DEBUG nova.compute.manager [req-f21a6286-53cc-4724-a1c0-88e50da370b6 req-c6cffec5-4ced-47a9-9169-2e5b39856c7a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Refreshing instance network info cache due to event network-changed-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.846 251996 DEBUG oslo_concurrency.lockutils [req-f21a6286-53cc-4724-a1c0-88e50da370b6 req-c6cffec5-4ced-47a9-9169-2e5b39856c7a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-946841c5-aadb-47f4-a772-8b25581f01ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:21:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:21:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ba7d99678414264b4e3644f665e26eb11da111ced0b814e300adf41fadbb801/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 237 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 138 op/s
Dec 06 07:21:35 compute-0 nova_compute[251992]: 2025-12-06 07:21:35.912 251996 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:21:35 compute-0 podman[308815]: 2025-12-06 07:21:35.927692645 +0000 UTC m=+0.225413165 container init cfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 07:21:35 compute-0 podman[308815]: 2025-12-06 07:21:35.932640519 +0000 UTC m=+0.230361039 container start cfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:21:35 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308830]: [NOTICE]   (308834) : New worker (308836) forked
Dec 06 07:21:35 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308830]: [NOTICE]   (308834) : Loading success.
Dec 06 07:21:36 compute-0 ceph-mon[74339]: pgmap v1912: 305 pgs: 305 active+clean; 237 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 138 op/s
Dec 06 07:21:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:36.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.815 251996 DEBUG nova.network.neutron [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Updating instance_info_cache with network_info: [{"id": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "address": "fa:16:3e:22:a8:0d", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap286fcec3-a1", "ovs_interfaceid": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.846 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Releasing lock "refresh_cache-946841c5-aadb-47f4-a772-8b25581f01ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.846 251996 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Instance network_info: |[{"id": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "address": "fa:16:3e:22:a8:0d", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap286fcec3-a1", "ovs_interfaceid": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.847 251996 DEBUG oslo_concurrency.lockutils [req-f21a6286-53cc-4724-a1c0-88e50da370b6 req-c6cffec5-4ced-47a9-9169-2e5b39856c7a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-946841c5-aadb-47f4-a772-8b25581f01ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.847 251996 DEBUG nova.network.neutron [req-f21a6286-53cc-4724-a1c0-88e50da370b6 req-c6cffec5-4ced-47a9-9169-2e5b39856c7a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Refreshing network info cache for port 286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.851 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Start _get_guest_xml network_info=[{"id": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "address": "fa:16:3e:22:a8:0d", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap286fcec3-a1", "ovs_interfaceid": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.858 251996 WARNING nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.865 251996 DEBUG nova.virt.libvirt.host [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.866 251996 DEBUG nova.virt.libvirt.host [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.873 251996 DEBUG nova.virt.libvirt.host [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.873 251996 DEBUG nova.virt.libvirt.host [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.875 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.875 251996 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.875 251996 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.876 251996 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.876 251996 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.876 251996 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.876 251996 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.876 251996 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.877 251996 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.877 251996 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.877 251996 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.878 251996 DEBUG nova.virt.hardware [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:21:36 compute-0 nova_compute[251992]: 2025-12-06 07:21:36.881 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:36.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.286 251996 DEBUG oslo_concurrency.lockutils [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.287 251996 DEBUG oslo_concurrency.lockutils [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.287 251996 DEBUG oslo_concurrency.lockutils [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.288 251996 DEBUG oslo_concurrency.lockutils [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.288 251996 DEBUG oslo_concurrency.lockutils [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.289 251996 INFO nova.compute.manager [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Terminating instance
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.290 251996 DEBUG nova.compute.manager [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:21:37 compute-0 kernel: tap382a0d3e-d0 (unregistering): left promiscuous mode
Dec 06 07:21:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:21:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797562651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:37 compute-0 NetworkManager[48965]: <info>  [1765005697.3260] device (tap382a0d3e-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:21:37 compute-0 ovn_controller[147168]: 2025-12-06T07:21:37Z|00305|binding|INFO|Releasing lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c from this chassis (sb_readonly=0)
Dec 06 07:21:37 compute-0 ovn_controller[147168]: 2025-12-06T07:21:37Z|00306|binding|INFO|Setting lport 382a0d3e-d0a9-40ed-80e9-3c462d98181c down in Southbound
Dec 06 07:21:37 compute-0 ovn_controller[147168]: 2025-12-06T07:21:37Z|00307|binding|INFO|Removing iface tap382a0d3e-d0 ovn-installed in OVS
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.336 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:37.340 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:fe:54 10.100.0.6'], port_security=['fa:16:3e:fe:fe:54 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '46ad6692-490b-41f5-9d5d-d70ddcf61e04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d599401-3772-4e38-8cd2-d774d370af64', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e2be1488d4b80b7ad8946093a6abe', 'neutron:revision_number': '8', 'neutron:security_group_ids': '310d97ff-0e42-4be5-a68e-20cbdb7be60d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=222872e8-5260-47b5-883e-369af9b3a47f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=382a0d3e-d0a9-40ed-80e9-3c462d98181c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:37.342 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 382a0d3e-d0a9-40ed-80e9-3c462d98181c in datapath 4d599401-3772-4e38-8cd2-d774d370af64 unbound from our chassis
Dec 06 07:21:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:37.343 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d599401-3772-4e38-8cd2-d774d370af64, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:21:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:37.344 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[62681138-c120-4781-9e6f-27c5c9e6edd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:37.344 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 namespace which is not needed anymore
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.348 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:37 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000056.scope: Deactivated successfully.
Dec 06 07:21:37 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000056.scope: Consumed 2.217s CPU time.
Dec 06 07:21:37 compute-0 systemd-machined[212986]: Machine qemu-40-instance-00000056 terminated.
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.376 251996 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 946841c5-aadb-47f4-a772-8b25581f01ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.381 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.407 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:37 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308830]: [NOTICE]   (308834) : haproxy version is 2.8.14-c23fe91
Dec 06 07:21:37 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308830]: [NOTICE]   (308834) : path to executable is /usr/sbin/haproxy
Dec 06 07:21:37 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308830]: [WARNING]  (308834) : Exiting Master process...
Dec 06 07:21:37 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308830]: [ALERT]    (308834) : Current worker (308836) exited with code 143 (Terminated)
Dec 06 07:21:37 compute-0 neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64[308830]: [WARNING]  (308834) : All workers exited. Exiting... (0)
Dec 06 07:21:37 compute-0 systemd[1]: libpod-cfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620.scope: Deactivated successfully.
Dec 06 07:21:37 compute-0 podman[308910]: 2025-12-06 07:21:37.458489211 +0000 UTC m=+0.040137646 container died cfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:21:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620-userdata-shm.mount: Deactivated successfully.
Dec 06 07:21:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ba7d99678414264b4e3644f665e26eb11da111ced0b814e300adf41fadbb801-merged.mount: Deactivated successfully.
Dec 06 07:21:37 compute-0 podman[308910]: 2025-12-06 07:21:37.494684129 +0000 UTC m=+0.076332564 container cleanup cfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:21:37 compute-0 systemd[1]: libpod-conmon-cfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620.scope: Deactivated successfully.
Dec 06 07:21:37 compute-0 NetworkManager[48965]: <info>  [1765005697.5104] manager: (tap382a0d3e-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/161)
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.511 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.517 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.530 251996 INFO nova.virt.libvirt.driver [-] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Instance destroyed successfully.
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.531 251996 DEBUG nova.objects.instance [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lazy-loading 'resources' on Instance uuid 46ad6692-490b-41f5-9d5d-d70ddcf61e04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.548 251996 DEBUG nova.virt.libvirt.vif [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:20:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-2054432936',display_name='tempest-ServerActionsTestJSON-server-2054432936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-2054432936',id=86,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAYy9PI2opG1Yb015LzaQaZHiAr4KsuqNy5RLRivgn9w0frXJzdA9SLIokq/TNHsTv+OZ3SzlEhSSm/zy2gaUVX2tVfQksdYXi87Z2HYYYX2anFBfTxIFgh3j22gU5Usow==',key_name='tempest-keypair-1101896810',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:20:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='929e2be1488d4b80b7ad8946093a6abe',ramdisk_id='',reservation_id='r-yj3rnpxj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1877526843',owner_user_name='tempest-ServerActionsTestJSON-1877526843-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='627c36bb63534e52a4b1d5adf47e6ffd',uuid=46ad6692-490b-41f5-9d5d-d70ddcf61e04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.549 251996 DEBUG nova.network.os_vif_util [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converting VIF {"id": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "address": "fa:16:3e:fe:fe:54", "network": {"id": "4d599401-3772-4e38-8cd2-d774d370af64", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-809610913-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e2be1488d4b80b7ad8946093a6abe", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap382a0d3e-d0", "ovs_interfaceid": "382a0d3e-d0a9-40ed-80e9-3c462d98181c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.550 251996 DEBUG nova.network.os_vif_util [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.550 251996 DEBUG os_vif [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.553 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.553 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap382a0d3e-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.554 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.557 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.559 251996 INFO os_vif [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:fe:54,bridge_name='br-int',has_traffic_filtering=True,id=382a0d3e-d0a9-40ed-80e9-3c462d98181c,network=Network(4d599401-3772-4e38-8cd2-d774d370af64),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap382a0d3e-d0')
Dec 06 07:21:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1888702857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2797562651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.703 251996 DEBUG nova.compute.manager [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.703 251996 DEBUG oslo_concurrency.lockutils [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.703 251996 DEBUG oslo_concurrency.lockutils [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.704 251996 DEBUG oslo_concurrency.lockutils [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.704 251996 DEBUG nova.compute.manager [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] No waiting events found dispatching network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.704 251996 WARNING nova.compute.manager [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received unexpected event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c for instance with vm_state active and task_state deleting.
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.704 251996 DEBUG nova.compute.manager [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-unplugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.704 251996 DEBUG oslo_concurrency.lockutils [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.705 251996 DEBUG oslo_concurrency.lockutils [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.705 251996 DEBUG oslo_concurrency.lockutils [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.705 251996 DEBUG nova.compute.manager [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] No waiting events found dispatching network-vif-unplugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:37 compute-0 nova_compute[251992]: 2025-12-06 07:21:37.706 251996 DEBUG nova.compute.manager [req-1de4beb6-2ee0-4abb-89e1-77a3290e66ca req-cf2fac8d-7fd4-45ed-b508-f15a2290383b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-unplugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:21:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 306 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.1 MiB/s wr, 229 op/s
Dec 06 07:21:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:21:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550746447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.053 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.054 251996 DEBUG nova.virt.libvirt.vif [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1422412228',display_name='tempest-ListServersNegativeTestJSON-server-1422412228-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1422412228-2',id=90,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b558585a6aa14470bdad319926a98046',ramdisk_id='',reservation_id='r-46lv2gf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-179719916',owner_user_name='tempest-ListServersNegativeTestJSON-179719916-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:21:33Z,user_data=None,user_id='a52e2b4388994d8791443483bd42cc33',uuid=946841c5-aadb-47f4-a772-8b25581f01ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "address": "fa:16:3e:22:a8:0d", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap286fcec3-a1", "ovs_interfaceid": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.054 251996 DEBUG nova.network.os_vif_util [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converting VIF {"id": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "address": "fa:16:3e:22:a8:0d", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap286fcec3-a1", "ovs_interfaceid": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.055 251996 DEBUG nova.network.os_vif_util [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:a8:0d,bridge_name='br-int',has_traffic_filtering=True,id=286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap286fcec3-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.056 251996 DEBUG nova.objects.instance [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lazy-loading 'pci_devices' on Instance uuid 946841c5-aadb-47f4-a772-8b25581f01ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.096 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:21:38 compute-0 nova_compute[251992]:   <uuid>946841c5-aadb-47f4-a772-8b25581f01ef</uuid>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   <name>instance-0000005a</name>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1422412228-2</nova:name>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:21:36</nova:creationTime>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <nova:user uuid="a52e2b4388994d8791443483bd42cc33">tempest-ListServersNegativeTestJSON-179719916-project-member</nova:user>
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <nova:project uuid="b558585a6aa14470bdad319926a98046">tempest-ListServersNegativeTestJSON-179719916</nova:project>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <nova:port uuid="286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09">
Dec 06 07:21:38 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <system>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <entry name="serial">946841c5-aadb-47f4-a772-8b25581f01ef</entry>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <entry name="uuid">946841c5-aadb-47f4-a772-8b25581f01ef</entry>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     </system>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   <os>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   </os>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   <features>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   </features>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/946841c5-aadb-47f4-a772-8b25581f01ef_disk">
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       </source>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/946841c5-aadb-47f4-a772-8b25581f01ef_disk.config">
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       </source>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:21:38 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:22:a8:0d"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <target dev="tap286fcec3-a1"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/946841c5-aadb-47f4-a772-8b25581f01ef/console.log" append="off"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <video>
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     </video>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:21:38 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:21:38 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:21:38 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:21:38 compute-0 nova_compute[251992]: </domain>
Dec 06 07:21:38 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.097 251996 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Preparing to wait for external event network-vif-plugged-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.098 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.098 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.098 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.099 251996 DEBUG nova.virt.libvirt.vif [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1422412228',display_name='tempest-ListServersNegativeTestJSON-server-1422412228-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1422412228-2',id=90,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b558585a6aa14470bdad319926a98046',ramdisk_id='',reservation_id='r-46lv2gf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-179719916',owner_user_name='tempest-ListServersNegativeTestJSON-179719916-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:21:33Z,user_data=None,user_id='a52e2b4388994d8791443483bd42cc33',uuid=946841c5-aadb-47f4-a772-8b25581f01ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "address": "fa:16:3e:22:a8:0d", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap286fcec3-a1", "ovs_interfaceid": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.099 251996 DEBUG nova.network.os_vif_util [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converting VIF {"id": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "address": "fa:16:3e:22:a8:0d", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap286fcec3-a1", "ovs_interfaceid": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.100 251996 DEBUG nova.network.os_vif_util [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:a8:0d,bridge_name='br-int',has_traffic_filtering=True,id=286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap286fcec3-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.100 251996 DEBUG os_vif [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:a8:0d,bridge_name='br-int',has_traffic_filtering=True,id=286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap286fcec3-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.101 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.101 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.101 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.104 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.105 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap286fcec3-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.106 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap286fcec3-a1, col_values=(('external_ids', {'iface-id': '286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:a8:0d', 'vm-uuid': '946841c5-aadb-47f4-a772-8b25581f01ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.108 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:38 compute-0 NetworkManager[48965]: <info>  [1765005698.1091] manager: (tap286fcec3-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.111 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.112 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.113 251996 INFO os_vif [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:a8:0d,bridge_name='br-int',has_traffic_filtering=True,id=286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap286fcec3-a1')
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.296 251996 DEBUG nova.network.neutron [req-f21a6286-53cc-4724-a1c0-88e50da370b6 req-c6cffec5-4ced-47a9-9169-2e5b39856c7a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Updated VIF entry in instance network info cache for port 286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.297 251996 DEBUG nova.network.neutron [req-f21a6286-53cc-4724-a1c0-88e50da370b6 req-c6cffec5-4ced-47a9-9169-2e5b39856c7a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Updating instance_info_cache with network_info: [{"id": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "address": "fa:16:3e:22:a8:0d", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap286fcec3-a1", "ovs_interfaceid": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.316 251996 DEBUG oslo_concurrency.lockutils [req-f21a6286-53cc-4724-a1c0-88e50da370b6 req-c6cffec5-4ced-47a9-9169-2e5b39856c7a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-946841c5-aadb-47f4-a772-8b25581f01ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.436 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.437 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.437 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] No VIF found with MAC fa:16:3e:22:a8:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.438 251996 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Using config drive
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.464 251996 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 946841c5-aadb-47f4-a772-8b25581f01ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:38.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:38 compute-0 podman[308961]: 2025-12-06 07:21:38.766439225 +0000 UTC m=+1.249636383 container remove cfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:21:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:38.772 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3a07ccb3-2087-4157-8cac-a6e7c2834c79]: (4, ('Sat Dec  6 07:21:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (cfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620)\ncfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620\nSat Dec  6 07:21:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 (cfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620)\ncfbac2575e35c2ae40f4406e509443fd76d0f0e405124910210c693048965620\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:38.774 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f13b297b-3fa0-40e2-9894-fd097e374660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:38.774 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d599401-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.776 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:38 compute-0 kernel: tap4d599401-30: left promiscuous mode
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.794 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:38.796 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f139ea-e951-4920-aa13-03827b980371]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:38.814 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c277fbed-2cb3-45eb-aff6-4c22b4ba480c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:38 compute-0 nova_compute[251992]: 2025-12-06 07:21:38.815 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:38.815 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2c058981-36f4-4fbb-a806-a7fe008ec228]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:38.834 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[959a7b35-9992-4783-bc9e-78bb6cc9f4ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596770, 'reachable_time': 15297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309024, 'error': None, 'target': 'ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:38.837 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d599401-3772-4e38-8cd2-d774d370af64 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:21:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:38.837 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[687414d2-7242-4b28-a4ce-d5e2254dc85b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d4d599401\x2d3772\x2d4e38\x2d8cd2\x2dd774d370af64.mount: Deactivated successfully.
Dec 06 07:21:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:38.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.128 251996 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Creating config drive at /var/lib/nova/instances/946841c5-aadb-47f4-a772-8b25581f01ef/disk.config
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.135 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/946841c5-aadb-47f4-a772-8b25581f01ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnsp5v5iv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.271 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/946841c5-aadb-47f4-a772-8b25581f01ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnsp5v5iv" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.299 251996 DEBUG nova.storage.rbd_utils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] rbd image 946841c5-aadb-47f4-a772-8b25581f01ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.302 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/946841c5-aadb-47f4-a772-8b25581f01ef/disk.config 946841c5-aadb-47f4-a772-8b25581f01ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/130116494' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2419729644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:39 compute-0 ceph-mon[74339]: pgmap v1913: 305 pgs: 305 active+clean; 306 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.1 MiB/s wr, 229 op/s
Dec 06 07:21:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3550746447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.451 251996 DEBUG oslo_concurrency.processutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/946841c5-aadb-47f4-a772-8b25581f01ef/disk.config 946841c5-aadb-47f4-a772-8b25581f01ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.452 251996 INFO nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Deleting local config drive /var/lib/nova/instances/946841c5-aadb-47f4-a772-8b25581f01ef/disk.config because it was imported into RBD.
Dec 06 07:21:39 compute-0 kernel: tap286fcec3-a1: entered promiscuous mode
Dec 06 07:21:39 compute-0 NetworkManager[48965]: <info>  [1765005699.4907] manager: (tap286fcec3-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Dec 06 07:21:39 compute-0 systemd-udevd[309026]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:21:39 compute-0 NetworkManager[48965]: <info>  [1765005699.5020] device (tap286fcec3-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:21:39 compute-0 NetworkManager[48965]: <info>  [1765005699.5039] device (tap286fcec3-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:21:39 compute-0 ovn_controller[147168]: 2025-12-06T07:21:39Z|00308|binding|INFO|Claiming lport 286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 for this chassis.
Dec 06 07:21:39 compute-0 ovn_controller[147168]: 2025-12-06T07:21:39Z|00309|binding|INFO|286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09: Claiming fa:16:3e:22:a8:0d 10.100.0.4
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.525 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.528 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.532 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:a8:0d 10.100.0.4'], port_security=['fa:16:3e:22:a8:0d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '946841c5-aadb-47f4-a772-8b25581f01ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b558585a6aa14470bdad319926a98046', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0eb8f52e-2f68-4151-8464-0d3b0eb6798f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c0e6fde-267f-49e6-86d0-b0c0ced92a7c, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.534 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 in datapath 77f3ccc8-bb54-46a1-b015-cc5b8a445202 bound to our chassis
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.535 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77f3ccc8-bb54-46a1-b015-cc5b8a445202
Dec 06 07:21:39 compute-0 ovn_controller[147168]: 2025-12-06T07:21:39Z|00310|binding|INFO|Setting lport 286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 ovn-installed in OVS
Dec 06 07:21:39 compute-0 ovn_controller[147168]: 2025-12-06T07:21:39Z|00311|binding|INFO|Setting lport 286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 up in Southbound
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.541 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.544 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.547 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[268bba83-0126-46b8-8b97-6e79a468c820]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.547 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77f3ccc8-b1 in ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:21:39 compute-0 systemd-machined[212986]: New machine qemu-41-instance-0000005a.
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.549 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77f3ccc8-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.549 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce6c059-ba96-4af6-983f-0e49dd639587]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.549 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[71885a87-f023-428e-af13-460dbe572338]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.558 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[5b84ea96-fa04-4d29-a4ae-b0d745bbca76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-0000005a.
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.580 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ffa7d8-d604-4035-ab6a-541197ffd457]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.601 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4e8a32bb-512a-45db-82bb-0d7bb9fc3d8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.607 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d7100e-ddd7-4db4-88ea-a1b67d16bba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 NetworkManager[48965]: <info>  [1765005699.6086] manager: (tap77f3ccc8-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.635 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[5a73ffe1-1a91-42c4-ac1b-eb2fc4795ef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.639 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[9199dda9-b2ba-4e7e-a5f7-d64589f82491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 NetworkManager[48965]: <info>  [1765005699.6641] device (tap77f3ccc8-b0): carrier: link connected
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.669 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[f79fdb0e-1bdc-4bde-91fd-dc5e2e7edf32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.688 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8d21b153-a98b-4423-96a7-e037b61c1fde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77f3ccc8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:76:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597225, 'reachable_time': 21290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309114, 'error': None, 'target': 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.703 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[83e63391-9f01-4f0d-bed8-e805feef4b21]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:769c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597225, 'tstamp': 597225}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309115, 'error': None, 'target': 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.722 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7a785a76-dbf5-467d-9682-9174d086f487]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77f3ccc8-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:76:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597225, 'reachable_time': 21290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309116, 'error': None, 'target': 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.755 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[474eef1c-57cb-4255-82b9-0afcce985e1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.808 251996 DEBUG nova.compute.manager [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.809 251996 DEBUG oslo_concurrency.lockutils [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.809 251996 DEBUG oslo_concurrency.lockutils [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.809 251996 DEBUG oslo_concurrency.lockutils [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.809 251996 DEBUG nova.compute.manager [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] No waiting events found dispatching network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.809 251996 WARNING nova.compute.manager [req-a91ce6f9-947c-4fbb-9376-2e2b0ac13afc req-e9d20e56-b7b7-4daa-8569-174c6391647e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received unexpected event network-vif-plugged-382a0d3e-d0a9-40ed-80e9-3c462d98181c for instance with vm_state active and task_state deleting.
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.812 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd32355-7f70-49ae-851f-a833681e0eb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.813 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77f3ccc8-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.814 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.814 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77f3ccc8-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.815 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:39 compute-0 NetworkManager[48965]: <info>  [1765005699.8167] manager: (tap77f3ccc8-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Dec 06 07:21:39 compute-0 kernel: tap77f3ccc8-b0: entered promiscuous mode
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.818 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.819 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77f3ccc8-b0, col_values=(('external_ids', {'iface-id': '556411fa-e8d1-4a0d-8496-4416c2200434'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.820 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:39 compute-0 ovn_controller[147168]: 2025-12-06T07:21:39Z|00312|binding|INFO|Releasing lport 556411fa-e8d1-4a0d-8496-4416c2200434 from this chassis (sb_readonly=0)
Dec 06 07:21:39 compute-0 nova_compute[251992]: 2025-12-06 07:21:39.834 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.835 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77f3ccc8-bb54-46a1-b015-cc5b8a445202.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77f3ccc8-bb54-46a1-b015-cc5b8a445202.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.836 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fab85b43-3931-4673-bd2f-729fa1789f2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.837 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-77f3ccc8-bb54-46a1-b015-cc5b8a445202
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/77f3ccc8-bb54-46a1-b015-cc5b8a445202.pid.haproxy
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 77f3ccc8-bb54-46a1-b015-cc5b8a445202
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:21:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:39.838 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'env', 'PROCESS_TAG=haproxy-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77f3ccc8-bb54-46a1-b015-cc5b8a445202.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:21:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 306 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 MiB/s wr, 146 op/s
Dec 06 07:21:40 compute-0 podman[309185]: 2025-12-06 07:21:40.19422238 +0000 UTC m=+0.046651134 container create 209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:21:40 compute-0 nova_compute[251992]: 2025-12-06 07:21:40.226 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005700.2256894, 946841c5-aadb-47f4-a772-8b25581f01ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:40 compute-0 nova_compute[251992]: 2025-12-06 07:21:40.226 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] VM Started (Lifecycle Event)
Dec 06 07:21:40 compute-0 podman[309185]: 2025-12-06 07:21:40.169926677 +0000 UTC m=+0.022355451 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:21:40 compute-0 nova_compute[251992]: 2025-12-06 07:21:40.269 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:40 compute-0 nova_compute[251992]: 2025-12-06 07:21:40.273 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005700.226289, 946841c5-aadb-47f4-a772-8b25581f01ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:40 compute-0 nova_compute[251992]: 2025-12-06 07:21:40.274 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] VM Paused (Lifecycle Event)
Dec 06 07:21:40 compute-0 nova_compute[251992]: 2025-12-06 07:21:40.303 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:40 compute-0 nova_compute[251992]: 2025-12-06 07:21:40.306 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:21:40 compute-0 nova_compute[251992]: 2025-12-06 07:21:40.350 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:21:40 compute-0 systemd[1]: Started libpod-conmon-209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67.scope.
Dec 06 07:21:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:21:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f06aa8c7676fba4725a304ead51984bd8552469cf49930fef7323685a2a8ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:21:40 compute-0 podman[309185]: 2025-12-06 07:21:40.629387419 +0000 UTC m=+0.481816213 container init 209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:21:40 compute-0 podman[309185]: 2025-12-06 07:21:40.635269459 +0000 UTC m=+0.487698223 container start 209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:21:40 compute-0 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[309206]: [NOTICE]   (309210) : New worker (309212) forked
Dec 06 07:21:40 compute-0 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[309206]: [NOTICE]   (309210) : Loading success.
Dec 06 07:21:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:40.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:40.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/311766206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:41 compute-0 ceph-mon[74339]: pgmap v1914: 305 pgs: 305 active+clean; 306 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.6 MiB/s wr, 146 op/s
Dec 06 07:21:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3014881078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:21:41.240 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:21:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:21:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/126457107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 280 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 6.1 MiB/s wr, 215 op/s
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.956 251996 DEBUG nova.compute.manager [req-494e492b-52f1-4743-8c27-33a15f481cc4 req-a98fed9a-c2e1-4250-8f82-b453a31c473b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Received event network-vif-plugged-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.958 251996 DEBUG oslo_concurrency.lockutils [req-494e492b-52f1-4743-8c27-33a15f481cc4 req-a98fed9a-c2e1-4250-8f82-b453a31c473b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.958 251996 DEBUG oslo_concurrency.lockutils [req-494e492b-52f1-4743-8c27-33a15f481cc4 req-a98fed9a-c2e1-4250-8f82-b453a31c473b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.958 251996 DEBUG oslo_concurrency.lockutils [req-494e492b-52f1-4743-8c27-33a15f481cc4 req-a98fed9a-c2e1-4250-8f82-b453a31c473b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.959 251996 DEBUG nova.compute.manager [req-494e492b-52f1-4743-8c27-33a15f481cc4 req-a98fed9a-c2e1-4250-8f82-b453a31c473b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Processing event network-vif-plugged-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.959 251996 DEBUG nova.compute.manager [req-494e492b-52f1-4743-8c27-33a15f481cc4 req-a98fed9a-c2e1-4250-8f82-b453a31c473b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Received event network-vif-plugged-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.960 251996 DEBUG oslo_concurrency.lockutils [req-494e492b-52f1-4743-8c27-33a15f481cc4 req-a98fed9a-c2e1-4250-8f82-b453a31c473b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.960 251996 DEBUG oslo_concurrency.lockutils [req-494e492b-52f1-4743-8c27-33a15f481cc4 req-a98fed9a-c2e1-4250-8f82-b453a31c473b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.960 251996 DEBUG oslo_concurrency.lockutils [req-494e492b-52f1-4743-8c27-33a15f481cc4 req-a98fed9a-c2e1-4250-8f82-b453a31c473b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.961 251996 DEBUG nova.compute.manager [req-494e492b-52f1-4743-8c27-33a15f481cc4 req-a98fed9a-c2e1-4250-8f82-b453a31c473b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] No waiting events found dispatching network-vif-plugged-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.961 251996 WARNING nova.compute.manager [req-494e492b-52f1-4743-8c27-33a15f481cc4 req-a98fed9a-c2e1-4250-8f82-b453a31c473b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Received unexpected event network-vif-plugged-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 for instance with vm_state building and task_state spawning.
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.962 251996 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.966 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005701.9658463, 946841c5-aadb-47f4-a772-8b25581f01ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.966 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] VM Resumed (Lifecycle Event)
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.968 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.972 251996 INFO nova.virt.libvirt.driver [-] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Instance spawned successfully.
Dec 06 07:21:41 compute-0 nova_compute[251992]: 2025-12-06 07:21:41.972 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.012 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.018 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.022 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.023 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.023 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.024 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.024 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.025 251996 DEBUG nova.virt.libvirt.driver [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.068 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.100 251996 INFO nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Took 8.73 seconds to spawn the instance on the hypervisor.
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.101 251996 DEBUG nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.165 251996 INFO nova.compute.manager [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Took 9.83 seconds to build instance.
Dec 06 07:21:42 compute-0 nova_compute[251992]: 2025-12-06 07:21:42.183 251996 DEBUG oslo_concurrency.lockutils [None req-c08965c3-6c1d-4892-ac3f-c10623cb0853 a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:42.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:21:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:21:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:21:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:21:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:21:42 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:21:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:42.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:43 compute-0 nova_compute[251992]: 2025-12-06 07:21:43.110 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/126457107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:43 compute-0 ceph-mon[74339]: pgmap v1915: 305 pgs: 305 active+clean; 280 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 6.1 MiB/s wr, 215 op/s
Dec 06 07:21:43 compute-0 nova_compute[251992]: 2025-12-06 07:21:43.237 251996 INFO nova.virt.libvirt.driver [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Deleting instance files /var/lib/nova/instances/46ad6692-490b-41f5-9d5d-d70ddcf61e04_del
Dec 06 07:21:43 compute-0 nova_compute[251992]: 2025-12-06 07:21:43.238 251996 INFO nova.virt.libvirt.driver [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Deletion of /var/lib/nova/instances/46ad6692-490b-41f5-9d5d-d70ddcf61e04_del complete
Dec 06 07:21:43 compute-0 nova_compute[251992]: 2025-12-06 07:21:43.297 251996 INFO nova.compute.manager [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Took 6.01 seconds to destroy the instance on the hypervisor.
Dec 06 07:21:43 compute-0 nova_compute[251992]: 2025-12-06 07:21:43.298 251996 DEBUG oslo.service.loopingcall [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:21:43 compute-0 nova_compute[251992]: 2025-12-06 07:21:43.299 251996 DEBUG nova.compute.manager [-] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:21:43 compute-0 nova_compute[251992]: 2025-12-06 07:21:43.299 251996 DEBUG nova.network.neutron [-] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:21:43 compute-0 nova_compute[251992]: 2025-12-06 07:21:43.817 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 256 MiB data, 843 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.2 MiB/s wr, 251 op/s
Dec 06 07:21:44 compute-0 nova_compute[251992]: 2025-12-06 07:21:44.290 251996 DEBUG nova.network.neutron [-] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:21:44 compute-0 nova_compute[251992]: 2025-12-06 07:21:44.320 251996 INFO nova.compute.manager [-] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Took 1.02 seconds to deallocate network for instance.
Dec 06 07:21:44 compute-0 nova_compute[251992]: 2025-12-06 07:21:44.374 251996 DEBUG oslo_concurrency.lockutils [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:44 compute-0 nova_compute[251992]: 2025-12-06 07:21:44.375 251996 DEBUG oslo_concurrency.lockutils [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:44 compute-0 nova_compute[251992]: 2025-12-06 07:21:44.453 251996 DEBUG nova.compute.manager [req-bc1c44d7-0fb0-46ea-a1c6-59f854c18617 req-7425e960-1fb6-4aa7-8fab-a4a4b8dd0aea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Received event network-vif-deleted-382a0d3e-d0a9-40ed-80e9-3c462d98181c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:21:44 compute-0 nova_compute[251992]: 2025-12-06 07:21:44.478 251996 DEBUG oslo_concurrency.processutils [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:44 compute-0 ceph-mon[74339]: pgmap v1916: 305 pgs: 305 active+clean; 256 MiB data, 843 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.2 MiB/s wr, 251 op/s
Dec 06 07:21:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2109164090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:44 compute-0 sudo[309228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:44 compute-0 sudo[309228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:44 compute-0 sudo[309228]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:44.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:44 compute-0 sudo[309270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:21:44 compute-0 sudo[309270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:21:44 compute-0 sudo[309270]: pam_unix(sudo:session): session closed for user root
Dec 06 07:21:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:21:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245318948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:44 compute-0 nova_compute[251992]: 2025-12-06 07:21:44.988 251996 DEBUG oslo_concurrency.processutils [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:21:44 compute-0 nova_compute[251992]: 2025-12-06 07:21:44.994 251996 DEBUG nova.compute.provider_tree [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:21:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:44.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:45 compute-0 nova_compute[251992]: 2025-12-06 07:21:45.034 251996 DEBUG nova.scheduler.client.report [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:21:45 compute-0 nova_compute[251992]: 2025-12-06 07:21:45.061 251996 DEBUG oslo_concurrency.lockutils [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:45 compute-0 nova_compute[251992]: 2025-12-06 07:21:45.102 251996 INFO nova.scheduler.client.report [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Deleted allocations for instance 46ad6692-490b-41f5-9d5d-d70ddcf61e04
Dec 06 07:21:45 compute-0 nova_compute[251992]: 2025-12-06 07:21:45.164 251996 DEBUG oslo_concurrency.lockutils [None req-937b6496-fde0-409c-afe3-4a34363c1201 627c36bb63534e52a4b1d5adf47e6ffd 929e2be1488d4b80b7ad8946093a6abe - - default default] Lock "46ad6692-490b-41f5-9d5d-d70ddcf61e04" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1530936474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:21:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4245318948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 273 MiB data, 849 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.1 MiB/s wr, 377 op/s
Dec 06 07:21:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:46.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:21:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:47.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:21:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:47 compute-0 ceph-mon[74339]: pgmap v1917: 305 pgs: 305 active+clean; 273 MiB data, 849 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.1 MiB/s wr, 377 op/s
Dec 06 07:21:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 273 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 4.2 MiB/s wr, 411 op/s
Dec 06 07:21:48 compute-0 nova_compute[251992]: 2025-12-06 07:21:48.114 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:48.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:48 compute-0 nova_compute[251992]: 2025-12-06 07:21:48.819 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:48 compute-0 ceph-mon[74339]: pgmap v1918: 305 pgs: 305 active+clean; 273 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 4.2 MiB/s wr, 411 op/s
Dec 06 07:21:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:49.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:49 compute-0 podman[309299]: 2025-12-06 07:21:49.4489424 +0000 UTC m=+0.103594498 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:21:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 273 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 1.8 MiB/s wr, 319 op/s
Dec 06 07:21:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:50.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:51.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:51 compute-0 ceph-mon[74339]: pgmap v1919: 305 pgs: 305 active+clean; 273 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 1.8 MiB/s wr, 319 op/s
Dec 06 07:21:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 211 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 1.8 MiB/s wr, 409 op/s
Dec 06 07:21:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4182883929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:52 compute-0 ceph-mon[74339]: pgmap v1920: 305 pgs: 305 active+clean; 211 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 1.8 MiB/s wr, 409 op/s
Dec 06 07:21:52 compute-0 ovn_controller[147168]: 2025-12-06T07:21:52Z|00313|binding|INFO|Releasing lport 556411fa-e8d1-4a0d-8496-4416c2200434 from this chassis (sb_readonly=0)
Dec 06 07:21:52 compute-0 nova_compute[251992]: 2025-12-06 07:21:52.486 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:52 compute-0 nova_compute[251992]: 2025-12-06 07:21:52.530 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005697.5285995, 46ad6692-490b-41f5-9d5d-d70ddcf61e04 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:21:52 compute-0 nova_compute[251992]: 2025-12-06 07:21:52.530 251996 INFO nova.compute.manager [-] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] VM Stopped (Lifecycle Event)
Dec 06 07:21:52 compute-0 nova_compute[251992]: 2025-12-06 07:21:52.553 251996 DEBUG nova.compute.manager [None req-6e59f617-fce1-432d-887d-12c8c6d11ef4 - - - - - -] [instance: 46ad6692-490b-41f5-9d5d-d70ddcf61e04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:21:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:21:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:52.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:21:52 compute-0 ovn_controller[147168]: 2025-12-06T07:21:52Z|00314|binding|INFO|Releasing lport 556411fa-e8d1-4a0d-8496-4416c2200434 from this chassis (sb_readonly=0)
Dec 06 07:21:52 compute-0 nova_compute[251992]: 2025-12-06 07:21:52.744 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:53 compute-0 nova_compute[251992]: 2025-12-06 07:21:53.115 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:53 compute-0 nova_compute[251992]: 2025-12-06 07:21:53.823 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/295510486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:21:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 181 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 1.3 MiB/s wr, 364 op/s
Dec 06 07:21:54 compute-0 podman[309328]: 2025-12-06 07:21:54.407727133 +0000 UTC m=+0.060533813 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:21:54 compute-0 podman[309329]: 2025-12-06 07:21:54.409525672 +0000 UTC m=+0.060266525 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 07:21:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:54.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:55.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:55 compute-0 nova_compute[251992]: 2025-12-06 07:21:55.312 251996 DEBUG oslo_concurrency.lockutils [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "946841c5-aadb-47f4-a772-8b25581f01ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:55 compute-0 nova_compute[251992]: 2025-12-06 07:21:55.312 251996 DEBUG oslo_concurrency.lockutils [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:55 compute-0 nova_compute[251992]: 2025-12-06 07:21:55.313 251996 DEBUG oslo_concurrency.lockutils [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:55 compute-0 nova_compute[251992]: 2025-12-06 07:21:55.313 251996 DEBUG oslo_concurrency.lockutils [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:55 compute-0 nova_compute[251992]: 2025-12-06 07:21:55.313 251996 DEBUG oslo_concurrency.lockutils [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:55 compute-0 nova_compute[251992]: 2025-12-06 07:21:55.314 251996 INFO nova.compute.manager [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Terminating instance
Dec 06 07:21:55 compute-0 nova_compute[251992]: 2025-12-06 07:21:55.315 251996 DEBUG nova.compute.manager [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:21:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 187 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 1.8 MiB/s wr, 331 op/s
Dec 06 07:21:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:56.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:56 compute-0 ceph-mon[74339]: pgmap v1921: 305 pgs: 305 active+clean; 181 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 1.3 MiB/s wr, 364 op/s
Dec 06 07:21:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:57.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:21:57 compute-0 nova_compute[251992]: 2025-12-06 07:21:57.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 194 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.2 MiB/s wr, 209 op/s
Dec 06 07:21:58 compute-0 nova_compute[251992]: 2025-12-06 07:21:58.119 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:58 compute-0 nova_compute[251992]: 2025-12-06 07:21:58.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:21:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:21:58.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:21:58 compute-0 nova_compute[251992]: 2025-12-06 07:21:58.824 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:21:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:21:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:21:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:21:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.684 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.684 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.685 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.685 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.685 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.737 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.738 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.754 251996 DEBUG nova.compute.manager [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.839 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.839 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.846 251996 DEBUG nova.virt.hardware [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.846 251996 INFO nova.compute.claims [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:21:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 194 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 135 op/s
Dec 06 07:21:59 compute-0 nova_compute[251992]: 2025-12-06 07:21:59.993 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874194034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.142 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594007488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.431 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.438 251996 DEBUG nova.compute.provider_tree [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.467 251996 DEBUG nova.scheduler.client.report [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.504 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.505 251996 DEBUG nova.compute.manager [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.584 251996 DEBUG nova.compute.manager [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.585 251996 DEBUG nova.network.neutron [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.608 251996 INFO nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.625 251996 DEBUG nova.compute.manager [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.710 251996 DEBUG nova.compute.manager [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.712 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.712 251996 INFO nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Creating image(s)
Dec 06 07:22:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:00.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:00 compute-0 nova_compute[251992]: 2025-12-06 07:22:00.973 251996 DEBUG nova.storage.rbd_utils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:01 compute-0 nova_compute[251992]: 2025-12-06 07:22:01.003 251996 DEBUG nova.storage.rbd_utils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:01 compute-0 nova_compute[251992]: 2025-12-06 07:22:01.028 251996 DEBUG nova.storage.rbd_utils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:01 compute-0 nova_compute[251992]: 2025-12-06 07:22:01.031 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:01 compute-0 nova_compute[251992]: 2025-12-06 07:22:01.059 251996 DEBUG nova.policy [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd966fefcb38a45219b9cc637c46a3d62', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:22:01 compute-0 nova_compute[251992]: 2025-12-06 07:22:01.102 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:01 compute-0 nova_compute[251992]: 2025-12-06 07:22:01.103 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:01 compute-0 nova_compute[251992]: 2025-12-06 07:22:01.104 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:01 compute-0 nova_compute[251992]: 2025-12-06 07:22:01.104 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 194 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Dec 06 07:22:02 compute-0 nova_compute[251992]: 2025-12-06 07:22:02.204 251996 DEBUG nova.storage.rbd_utils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:02 compute-0 nova_compute[251992]: 2025-12-06 07:22:02.207 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:02 compute-0 nova_compute[251992]: 2025-12-06 07:22:02.233 251996 DEBUG nova.network.neutron [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Successfully created port: 261f5dec-8a38-4bc8-ac43-93d094daeba1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:22:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:02.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:03 compute-0 kernel: tap286fcec3-a1 (unregistering): left promiscuous mode
Dec 06 07:22:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:03.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:03 compute-0 NetworkManager[48965]: <info>  [1765005723.0278] device (tap286fcec3-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:22:03 compute-0 ovn_controller[147168]: 2025-12-06T07:22:03Z|00315|binding|INFO|Releasing lport 286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 from this chassis (sb_readonly=0)
Dec 06 07:22:03 compute-0 ovn_controller[147168]: 2025-12-06T07:22:03Z|00316|binding|INFO|Setting lport 286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 down in Southbound
Dec 06 07:22:03 compute-0 ovn_controller[147168]: 2025-12-06T07:22:03Z|00317|binding|INFO|Removing iface tap286fcec3-a1 ovn-installed in OVS
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.035 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.037 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.047 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:a8:0d 10.100.0.4'], port_security=['fa:16:3e:22:a8:0d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '946841c5-aadb-47f4-a772-8b25581f01ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b558585a6aa14470bdad319926a98046', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0eb8f52e-2f68-4151-8464-0d3b0eb6798f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c0e6fde-267f-49e6-86d0-b0c0ced92a7c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.050 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 in datapath 77f3ccc8-bb54-46a1-b015-cc5b8a445202 unbound from our chassis
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.051 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77f3ccc8-bb54-46a1-b015-cc5b8a445202, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.055 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[56372390-595c-4a75-b3c4-a3d3461f0ab4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.056 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 namespace which is not needed anymore
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.060 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:03 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Dec 06 07:22:03 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005a.scope: Consumed 13.084s CPU time.
Dec 06 07:22:03 compute-0 systemd-machined[212986]: Machine qemu-41-instance-0000005a terminated.
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.110 251996 DEBUG nova.network.neutron [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Successfully updated port: 261f5dec-8a38-4bc8-ac43-93d094daeba1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.135 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.139 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "refresh_cache-87ee67ad-8b8f-4a54-83aa-58e5d28e7497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.139 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquired lock "refresh_cache-87ee67ad-8b8f-4a54-83aa-58e5d28e7497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.140 251996 DEBUG nova.network.neutron [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.155 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.163 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.171 251996 INFO nova.virt.libvirt.driver [-] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Instance destroyed successfully.
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.171 251996 DEBUG nova.objects.instance [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lazy-loading 'resources' on Instance uuid 946841c5-aadb-47f4-a772-8b25581f01ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.185 251996 DEBUG nova.virt.libvirt.vif [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1422412228',display_name='tempest-ListServersNegativeTestJSON-server-1422412228-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1422412228-2',id=90,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-12-06T07:21:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b558585a6aa14470bdad319926a98046',ramdisk_id='',reservation_id='r-46lv2gf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-179719916',owner_user_name='tempest-ListServersNegativeTestJSON-179719916-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:21:42Z,user_data=None,user_id='a52e2b4388994d8791443483bd42cc33',uuid=946841c5-aadb-47f4-a772-8b25581f01ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "address": "fa:16:3e:22:a8:0d", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap286fcec3-a1", "ovs_interfaceid": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.186 251996 DEBUG nova.network.os_vif_util [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converting VIF {"id": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "address": "fa:16:3e:22:a8:0d", "network": {"id": "77f3ccc8-bb54-46a1-b015-cc5b8a445202", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-573376355-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b558585a6aa14470bdad319926a98046", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap286fcec3-a1", "ovs_interfaceid": "286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.188 251996 DEBUG nova.network.os_vif_util [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:a8:0d,bridge_name='br-int',has_traffic_filtering=True,id=286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap286fcec3-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.189 251996 DEBUG os_vif [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:a8:0d,bridge_name='br-int',has_traffic_filtering=True,id=286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap286fcec3-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.191 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.192 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap286fcec3-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.194 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.196 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.200 251996 INFO os_vif [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:a8:0d,bridge_name='br-int',has_traffic_filtering=True,id=286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09,network=Network(77f3ccc8-bb54-46a1-b015-cc5b8a445202),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap286fcec3-a1')
Dec 06 07:22:03 compute-0 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[309206]: [NOTICE]   (309210) : haproxy version is 2.8.14-c23fe91
Dec 06 07:22:03 compute-0 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[309206]: [NOTICE]   (309210) : path to executable is /usr/sbin/haproxy
Dec 06 07:22:03 compute-0 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[309206]: [WARNING]  (309210) : Exiting Master process...
Dec 06 07:22:03 compute-0 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[309206]: [ALERT]    (309210) : Current worker (309212) exited with code 143 (Terminated)
Dec 06 07:22:03 compute-0 neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202[309206]: [WARNING]  (309210) : All workers exited. Exiting... (0)
Dec 06 07:22:03 compute-0 systemd[1]: libpod-209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67.scope: Deactivated successfully.
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.220 251996 DEBUG nova.compute.manager [req-0f178fc4-9e13-46dc-a286-fefbd220392f req-99bddf04-8a4f-4da0-9ddc-dc81524c379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Received event network-changed-261f5dec-8a38-4bc8-ac43-93d094daeba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.221 251996 DEBUG nova.compute.manager [req-0f178fc4-9e13-46dc-a286-fefbd220392f req-99bddf04-8a4f-4da0-9ddc-dc81524c379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Refreshing instance network info cache due to event network-changed-261f5dec-8a38-4bc8-ac43-93d094daeba1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.222 251996 DEBUG oslo_concurrency.lockutils [req-0f178fc4-9e13-46dc-a286-fefbd220392f req-99bddf04-8a4f-4da0-9ddc-dc81524c379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-87ee67ad-8b8f-4a54-83aa-58e5d28e7497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:22:03 compute-0 podman[309538]: 2025-12-06 07:22:03.225342192 +0000 UTC m=+0.043607422 container died 209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.245 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.246 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:22:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67-userdata-shm.mount: Deactivated successfully.
Dec 06 07:22:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-55f06aa8c7676fba4725a304ead51984bd8552469cf49930fef7323685a2a8ea-merged.mount: Deactivated successfully.
Dec 06 07:22:03 compute-0 podman[309538]: 2025-12-06 07:22:03.265000314 +0000 UTC m=+0.083265544 container cleanup 209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:22:03 compute-0 systemd[1]: libpod-conmon-209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67.scope: Deactivated successfully.
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.298 251996 DEBUG nova.network.neutron [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.318 251996 DEBUG nova.compute.manager [req-c5d5bb91-0476-4379-a0cf-e557901a9320 req-7ae25fe6-e415-4144-91d5-e14c25d544da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Received event network-vif-unplugged-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.319 251996 DEBUG oslo_concurrency.lockutils [req-c5d5bb91-0476-4379-a0cf-e557901a9320 req-7ae25fe6-e415-4144-91d5-e14c25d544da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.319 251996 DEBUG oslo_concurrency.lockutils [req-c5d5bb91-0476-4379-a0cf-e557901a9320 req-7ae25fe6-e415-4144-91d5-e14c25d544da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.319 251996 DEBUG oslo_concurrency.lockutils [req-c5d5bb91-0476-4379-a0cf-e557901a9320 req-7ae25fe6-e415-4144-91d5-e14c25d544da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.320 251996 DEBUG nova.compute.manager [req-c5d5bb91-0476-4379-a0cf-e557901a9320 req-7ae25fe6-e415-4144-91d5-e14c25d544da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] No waiting events found dispatching network-vif-unplugged-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.320 251996 DEBUG nova.compute.manager [req-c5d5bb91-0476-4379-a0cf-e557901a9320 req-7ae25fe6-e415-4144-91d5-e14c25d544da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Received event network-vif-unplugged-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:22:03 compute-0 podman[309582]: 2025-12-06 07:22:03.336188868 +0000 UTC m=+0.049603195 container remove 209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.342 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[368a9064-9f78-408a-8de2-2f037fc950f0]: (4, ('Sat Dec  6 07:22:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 (209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67)\n209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67\nSat Dec  6 07:22:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 (209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67)\n209dda5d258f6d0825a9c2409be78733c38cb6633c54b741d1de69ee2ba8ba67\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.344 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[58e1cc66-e44d-4629-a03c-28304904033e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.345 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77f3ccc8-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.347 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:03 compute-0 kernel: tap77f3ccc8-b0: left promiscuous mode
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.350 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.353 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8e39b5-faf0-4bc4-804b-92fe071d6019]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.366 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.370 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d6fe86-9433-4b08-a337-499eb7518d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.371 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dc37107b-b840-4e2c-8ab1-ac4ca355bb90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.389 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[53e43f4a-0db0-48bf-b151-0bcf677d2b17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597218, 'reachable_time': 35985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309597, 'error': None, 'target': 'ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d77f3ccc8\x2dbb54\x2d46a1\x2db015\x2dcc5b8a445202.mount: Deactivated successfully.
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.396 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77f3ccc8-bb54-46a1-b015-cc5b8a445202 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.397 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a99e2c-eb6c-4ddd-8b29-ab48af088d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:03 compute-0 ceph-mon[74339]: pgmap v1922: 305 pgs: 305 active+clean; 187 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 1.8 MiB/s wr, 331 op/s
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.460 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.461 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4397MB free_disk=20.913681030273438GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.461 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.462 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.523 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 946841c5-aadb-47f4-a772-8b25581f01ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.523 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 87ee67ad-8b8f-4a54-83aa-58e5d28e7497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.524 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.524 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.594 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:03 compute-0 nova_compute[251992]: 2025-12-06 07:22:03.825 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.828 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.829 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:03.829 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 194 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 450 KiB/s rd, 1.4 MiB/s wr, 56 op/s
Dec 06 07:22:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2984823432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:04 compute-0 nova_compute[251992]: 2025-12-06 07:22:04.061 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:04 compute-0 nova_compute[251992]: 2025-12-06 07:22:04.066 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:22:04 compute-0 nova_compute[251992]: 2025-12-06 07:22:04.086 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:22:04 compute-0 nova_compute[251992]: 2025-12-06 07:22:04.115 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:22:04 compute-0 nova_compute[251992]: 2025-12-06 07:22:04.116 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:04 compute-0 nova_compute[251992]: 2025-12-06 07:22:04.315 251996 DEBUG nova.network.neutron [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Updating instance_info_cache with network_info: [{"id": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "address": "fa:16:3e:d5:a1:d7", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap261f5dec-8a", "ovs_interfaceid": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:04 compute-0 nova_compute[251992]: 2025-12-06 07:22:04.331 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Releasing lock "refresh_cache-87ee67ad-8b8f-4a54-83aa-58e5d28e7497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:22:04 compute-0 nova_compute[251992]: 2025-12-06 07:22:04.332 251996 DEBUG nova.compute.manager [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Instance network_info: |[{"id": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "address": "fa:16:3e:d5:a1:d7", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap261f5dec-8a", "ovs_interfaceid": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:22:04 compute-0 nova_compute[251992]: 2025-12-06 07:22:04.332 251996 DEBUG oslo_concurrency.lockutils [req-0f178fc4-9e13-46dc-a286-fefbd220392f req-99bddf04-8a4f-4da0-9ddc-dc81524c379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-87ee67ad-8b8f-4a54-83aa-58e5d28e7497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:22:04 compute-0 nova_compute[251992]: 2025-12-06 07:22:04.333 251996 DEBUG nova.network.neutron [req-0f178fc4-9e13-46dc-a286-fefbd220392f req-99bddf04-8a4f-4da0-9ddc-dc81524c379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Refreshing network info cache for port 261f5dec-8a38-4bc8-ac43-93d094daeba1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:22:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:04.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:04 compute-0 sudo[309621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:04 compute-0 sudo[309621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:04 compute-0 sudo[309621]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:04 compute-0 sudo[309646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:04 compute-0 sudo[309646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:04 compute-0 sudo[309646]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:22:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:05.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.474 251996 DEBUG nova.compute.manager [req-b643d79e-1d85-4b34-8d72-8a4c70d0dc21 req-23ece902-e201-4fc1-9140-697cce140dfe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Received event network-vif-plugged-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.476 251996 DEBUG oslo_concurrency.lockutils [req-b643d79e-1d85-4b34-8d72-8a4c70d0dc21 req-23ece902-e201-4fc1-9140-697cce140dfe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.476 251996 DEBUG oslo_concurrency.lockutils [req-b643d79e-1d85-4b34-8d72-8a4c70d0dc21 req-23ece902-e201-4fc1-9140-697cce140dfe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.476 251996 DEBUG oslo_concurrency.lockutils [req-b643d79e-1d85-4b34-8d72-8a4c70d0dc21 req-23ece902-e201-4fc1-9140-697cce140dfe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.476 251996 DEBUG nova.compute.manager [req-b643d79e-1d85-4b34-8d72-8a4c70d0dc21 req-23ece902-e201-4fc1-9140-697cce140dfe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] No waiting events found dispatching network-vif-plugged-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.477 251996 WARNING nova.compute.manager [req-b643d79e-1d85-4b34-8d72-8a4c70d0dc21 req-23ece902-e201-4fc1-9140-697cce140dfe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Received unexpected event network-vif-plugged-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 for instance with vm_state active and task_state deleting.
Dec 06 07:22:05 compute-0 ceph-mon[74339]: pgmap v1923: 305 pgs: 305 active+clean; 194 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.2 MiB/s wr, 209 op/s
Dec 06 07:22:05 compute-0 ceph-mon[74339]: pgmap v1924: 305 pgs: 305 active+clean; 194 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 135 op/s
Dec 06 07:22:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1874194034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2594007488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:05 compute-0 ceph-mon[74339]: pgmap v1925: 305 pgs: 305 active+clean; 194 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Dec 06 07:22:05 compute-0 ceph-mon[74339]: pgmap v1926: 305 pgs: 305 active+clean; 194 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 450 KiB/s rd, 1.4 MiB/s wr, 56 op/s
Dec 06 07:22:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2984823432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.726 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.792 251996 DEBUG nova.storage.rbd_utils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] resizing rbd image 87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.876 251996 DEBUG nova.network.neutron [req-0f178fc4-9e13-46dc-a286-fefbd220392f req-99bddf04-8a4f-4da0-9ddc-dc81524c379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Updated VIF entry in instance network info cache for port 261f5dec-8a38-4bc8-ac43-93d094daeba1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.876 251996 DEBUG nova.network.neutron [req-0f178fc4-9e13-46dc-a286-fefbd220392f req-99bddf04-8a4f-4da0-9ddc-dc81524c379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Updating instance_info_cache with network_info: [{"id": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "address": "fa:16:3e:d5:a1:d7", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap261f5dec-8a", "ovs_interfaceid": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.882 251996 DEBUG nova.objects.instance [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'migration_context' on Instance uuid 87ee67ad-8b8f-4a54-83aa-58e5d28e7497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 219 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 2.0 MiB/s wr, 39 op/s
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.895 251996 DEBUG oslo_concurrency.lockutils [req-0f178fc4-9e13-46dc-a286-fefbd220392f req-99bddf04-8a4f-4da0-9ddc-dc81524c379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-87ee67ad-8b8f-4a54-83aa-58e5d28e7497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.896 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.896 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Ensure instance console log exists: /var/lib/nova/instances/87ee67ad-8b8f-4a54-83aa-58e5d28e7497/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.897 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.897 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.897 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.899 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Start _get_guest_xml network_info=[{"id": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "address": "fa:16:3e:d5:a1:d7", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap261f5dec-8a", "ovs_interfaceid": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.903 251996 WARNING nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.907 251996 DEBUG nova.virt.libvirt.host [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.908 251996 DEBUG nova.virt.libvirt.host [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.911 251996 DEBUG nova.virt.libvirt.host [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.911 251996 DEBUG nova.virt.libvirt.host [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.912 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.913 251996 DEBUG nova.virt.hardware [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.913 251996 DEBUG nova.virt.hardware [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.913 251996 DEBUG nova.virt.hardware [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.914 251996 DEBUG nova.virt.hardware [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.914 251996 DEBUG nova.virt.hardware [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.914 251996 DEBUG nova.virt.hardware [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.914 251996 DEBUG nova.virt.hardware [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.915 251996 DEBUG nova.virt.hardware [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.915 251996 DEBUG nova.virt.hardware [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.915 251996 DEBUG nova.virt.hardware [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.915 251996 DEBUG nova.virt.hardware [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:22:05 compute-0 nova_compute[251992]: 2025-12-06 07:22:05.919 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.116 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.117 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.117 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:22:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/116807286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.343 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.364 251996 DEBUG nova.storage.rbd_utils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.368 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/597978741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:06 compute-0 ceph-mon[74339]: pgmap v1927: 305 pgs: 305 active+clean; 219 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 2.0 MiB/s wr, 39 op/s
Dec 06 07:22:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/116807286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/948365744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:06.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:22:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3719153835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.836 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.838 251996 DEBUG nova.virt.libvirt.vif [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:21:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-633928739',display_name='tempest-DeleteServersTestJSON-server-633928739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-633928739',id=93,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-t43t8w0w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:22:00Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=87ee67ad-8b8f-4a54-83aa-58e5d28e7497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "address": "fa:16:3e:d5:a1:d7", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap261f5dec-8a", "ovs_interfaceid": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.839 251996 DEBUG nova.network.os_vif_util [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "address": "fa:16:3e:d5:a1:d7", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap261f5dec-8a", "ovs_interfaceid": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.840 251996 DEBUG nova.network.os_vif_util [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:a1:d7,bridge_name='br-int',has_traffic_filtering=True,id=261f5dec-8a38-4bc8-ac43-93d094daeba1,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap261f5dec-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.842 251996 DEBUG nova.objects.instance [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'pci_devices' on Instance uuid 87ee67ad-8b8f-4a54-83aa-58e5d28e7497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.857 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:22:06 compute-0 nova_compute[251992]:   <uuid>87ee67ad-8b8f-4a54-83aa-58e5d28e7497</uuid>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   <name>instance-0000005d</name>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <nova:name>tempest-DeleteServersTestJSON-server-633928739</nova:name>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:22:05</nova:creationTime>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <nova:user uuid="d966fefcb38a45219b9cc637c46a3d62">tempest-DeleteServersTestJSON-1764569218-project-member</nova:user>
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <nova:project uuid="c6d2f50c0db54315bfa96a24511dda90">tempest-DeleteServersTestJSON-1764569218</nova:project>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <nova:port uuid="261f5dec-8a38-4bc8-ac43-93d094daeba1">
Dec 06 07:22:06 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <system>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <entry name="serial">87ee67ad-8b8f-4a54-83aa-58e5d28e7497</entry>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <entry name="uuid">87ee67ad-8b8f-4a54-83aa-58e5d28e7497</entry>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     </system>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   <os>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   </os>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   <features>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   </features>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk">
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       </source>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk.config">
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       </source>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:22:06 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:d5:a1:d7"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <target dev="tap261f5dec-8a"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/87ee67ad-8b8f-4a54-83aa-58e5d28e7497/console.log" append="off"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <video>
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     </video>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:22:06 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:22:06 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:22:06 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:22:06 compute-0 nova_compute[251992]: </domain>
Dec 06 07:22:06 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.858 251996 DEBUG nova.compute.manager [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Preparing to wait for external event network-vif-plugged-261f5dec-8a38-4bc8-ac43-93d094daeba1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.859 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.859 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.860 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.861 251996 DEBUG nova.virt.libvirt.vif [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:21:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-633928739',display_name='tempest-DeleteServersTestJSON-server-633928739',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-633928739',id=93,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-t43t8w0w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:22:00Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=87ee67ad-8b8f-4a54-83aa-58e5d28e7497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "address": "fa:16:3e:d5:a1:d7", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap261f5dec-8a", "ovs_interfaceid": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.861 251996 DEBUG nova.network.os_vif_util [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "address": "fa:16:3e:d5:a1:d7", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap261f5dec-8a", "ovs_interfaceid": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.861 251996 DEBUG nova.network.os_vif_util [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:a1:d7,bridge_name='br-int',has_traffic_filtering=True,id=261f5dec-8a38-4bc8-ac43-93d094daeba1,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap261f5dec-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.862 251996 DEBUG os_vif [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:a1:d7,bridge_name='br-int',has_traffic_filtering=True,id=261f5dec-8a38-4bc8-ac43-93d094daeba1,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap261f5dec-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.863 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.863 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.864 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.866 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.867 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap261f5dec-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.867 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap261f5dec-8a, col_values=(('external_ids', {'iface-id': '261f5dec-8a38-4bc8-ac43-93d094daeba1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:a1:d7', 'vm-uuid': '87ee67ad-8b8f-4a54-83aa-58e5d28e7497'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.869 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:06 compute-0 NetworkManager[48965]: <info>  [1765005726.8698] manager: (tap261f5dec-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.872 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.875 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.876 251996 INFO os_vif [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:a1:d7,bridge_name='br-int',has_traffic_filtering=True,id=261f5dec-8a38-4bc8-ac43-93d094daeba1,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap261f5dec-8a')
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.924 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.925 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.925 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No VIF found with MAC fa:16:3e:d5:a1:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.926 251996 INFO nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Using config drive
Dec 06 07:22:06 compute-0 nova_compute[251992]: 2025-12-06 07:22:06.946 251996 DEBUG nova.storage.rbd_utils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:07.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 260 MiB data, 841 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 3.8 MiB/s wr, 67 op/s
Dec 06 07:22:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3719153835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/474933109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:08 compute-0 nova_compute[251992]: 2025-12-06 07:22:08.059 251996 INFO nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Creating config drive at /var/lib/nova/instances/87ee67ad-8b8f-4a54-83aa-58e5d28e7497/disk.config
Dec 06 07:22:08 compute-0 nova_compute[251992]: 2025-12-06 07:22:08.064 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/87ee67ad-8b8f-4a54-83aa-58e5d28e7497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsy6nf_dr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:08 compute-0 nova_compute[251992]: 2025-12-06 07:22:08.195 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/87ee67ad-8b8f-4a54-83aa-58e5d28e7497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsy6nf_dr" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:08 compute-0 nova_compute[251992]: 2025-12-06 07:22:08.227 251996 DEBUG nova.storage.rbd_utils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] rbd image 87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:08 compute-0 nova_compute[251992]: 2025-12-06 07:22:08.231 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/87ee67ad-8b8f-4a54-83aa-58e5d28e7497/disk.config 87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:22:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:08.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:22:08 compute-0 nova_compute[251992]: 2025-12-06 07:22:08.826 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:09.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:22:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4019152381' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:22:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:22:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4019152381' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:22:09 compute-0 nova_compute[251992]: 2025-12-06 07:22:09.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:09 compute-0 nova_compute[251992]: 2025-12-06 07:22:09.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:22:09 compute-0 nova_compute[251992]: 2025-12-06 07:22:09.694 251996 DEBUG oslo_concurrency.processutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/87ee67ad-8b8f-4a54-83aa-58e5d28e7497/disk.config 87ee67ad-8b8f-4a54-83aa-58e5d28e7497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:09 compute-0 nova_compute[251992]: 2025-12-06 07:22:09.695 251996 INFO nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Deleting local config drive /var/lib/nova/instances/87ee67ad-8b8f-4a54-83aa-58e5d28e7497/disk.config because it was imported into RBD.
Dec 06 07:22:09 compute-0 kernel: tap261f5dec-8a: entered promiscuous mode
Dec 06 07:22:09 compute-0 NetworkManager[48965]: <info>  [1765005729.7434] manager: (tap261f5dec-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Dec 06 07:22:09 compute-0 ovn_controller[147168]: 2025-12-06T07:22:09Z|00318|binding|INFO|Claiming lport 261f5dec-8a38-4bc8-ac43-93d094daeba1 for this chassis.
Dec 06 07:22:09 compute-0 ovn_controller[147168]: 2025-12-06T07:22:09Z|00319|binding|INFO|261f5dec-8a38-4bc8-ac43-93d094daeba1: Claiming fa:16:3e:d5:a1:d7 10.100.0.10
Dec 06 07:22:09 compute-0 nova_compute[251992]: 2025-12-06 07:22:09.745 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.756 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:a1:d7 10.100.0.10'], port_security=['fa:16:3e:d5:a1:d7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '87ee67ad-8b8f-4a54-83aa-58e5d28e7497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '2', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=261f5dec-8a38-4bc8-ac43-93d094daeba1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.758 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 261f5dec-8a38-4bc8-ac43-93d094daeba1 in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 bound to our chassis
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.759 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.769 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bce6a55e-f395-4371-adc6-5a3c2d57acae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.770 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85cfbf28-71 in ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:22:09 compute-0 systemd-udevd[309883]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:22:09 compute-0 systemd-machined[212986]: New machine qemu-42-instance-0000005d.
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.772 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85cfbf28-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.772 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c191dfe7-de32-42a3-bd7e-64e6cf66444e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.773 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[22ec1aef-4a8f-487c-89be-8702915f64b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 NetworkManager[48965]: <info>  [1765005729.7819] device (tap261f5dec-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:22:09 compute-0 NetworkManager[48965]: <info>  [1765005729.7839] device (tap261f5dec-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.784 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[fb6327cc-ceb1-414f-a224-892f368c4d70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-0000005d.
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.810 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ca092fd1-6f18-4244-8a92-9dba55676374]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 nova_compute[251992]: 2025-12-06 07:22:09.816 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:09 compute-0 ovn_controller[147168]: 2025-12-06T07:22:09Z|00320|binding|INFO|Setting lport 261f5dec-8a38-4bc8-ac43-93d094daeba1 ovn-installed in OVS
Dec 06 07:22:09 compute-0 ovn_controller[147168]: 2025-12-06T07:22:09Z|00321|binding|INFO|Setting lport 261f5dec-8a38-4bc8-ac43-93d094daeba1 up in Southbound
Dec 06 07:22:09 compute-0 nova_compute[251992]: 2025-12-06 07:22:09.821 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.837 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[df21cc32-fd48-4bcc-9108-0a41ffdb4336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.844 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1088cd2e-471d-4464-a198-9f7b8aebccdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 NetworkManager[48965]: <info>  [1765005729.8447] manager: (tap85cfbf28-70): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.876 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f1d3ae-8aaf-4ca6-85dd-507affb17ce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.879 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf90d7e-9e02-4e26-8532-edb9d6895521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 260 MiB data, 841 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 3.4 MiB/s wr, 63 op/s
Dec 06 07:22:09 compute-0 NetworkManager[48965]: <info>  [1765005729.8997] device (tap85cfbf28-70): carrier: link connected
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.903 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[df0d96e6-d6c2-4786-b508-c136b40a6bfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.917 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb91975-efc7-453f-95cd-48d1c7ba7887]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600248, 'reachable_time': 15378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309915, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.929 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d97a0e-d37b-4b43-b2ff-93a263cedb4e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:762'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600248, 'tstamp': 600248}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309916, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.945 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3e183351-a2bf-4268-aee7-966ed86d6885]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600248, 'reachable_time': 15378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309917, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:09.973 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec6ee96-c69b-4a38-a8f8-d808698a3810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/787302535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:09 compute-0 ceph-mon[74339]: pgmap v1928: 305 pgs: 305 active+clean; 260 MiB data, 841 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 3.8 MiB/s wr, 67 op/s
Dec 06 07:22:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1296058430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:10.025 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1632d0-7b45-4856-8bfc-663a5407a147]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:10.026 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:10.026 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:10.027 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85cfbf28-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:10 compute-0 NetworkManager[48965]: <info>  [1765005730.0294] manager: (tap85cfbf28-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Dec 06 07:22:10 compute-0 kernel: tap85cfbf28-70: entered promiscuous mode
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.028 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.031 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:10.032 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85cfbf28-70, col_values=(('external_ids', {'iface-id': '41b1b168-8e0e-4991-9750-9b31221f4863'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.033 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:10 compute-0 ovn_controller[147168]: 2025-12-06T07:22:10Z|00322|binding|INFO|Releasing lport 41b1b168-8e0e-4991-9750-9b31221f4863 from this chassis (sb_readonly=0)
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.048 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:10.049 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:10.050 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7061d1-c363-4fc0-a56d-df46d55967c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:10.051 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:22:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:10.052 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'env', 'PROCESS_TAG=haproxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85cfbf28-7016-4776-8fc2-2eb08a6b8347.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.058 251996 DEBUG nova.compute.manager [req-f7aa7141-faf4-4ec9-946b-fe639990b7fb req-0e066916-2f80-42de-a98d-d9ff6bfdb0cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Received event network-vif-plugged-261f5dec-8a38-4bc8-ac43-93d094daeba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.062 251996 DEBUG oslo_concurrency.lockutils [req-f7aa7141-faf4-4ec9-946b-fe639990b7fb req-0e066916-2f80-42de-a98d-d9ff6bfdb0cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.062 251996 DEBUG oslo_concurrency.lockutils [req-f7aa7141-faf4-4ec9-946b-fe639990b7fb req-0e066916-2f80-42de-a98d-d9ff6bfdb0cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.062 251996 DEBUG oslo_concurrency.lockutils [req-f7aa7141-faf4-4ec9-946b-fe639990b7fb req-0e066916-2f80-42de-a98d-d9ff6bfdb0cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.062 251996 DEBUG nova.compute.manager [req-f7aa7141-faf4-4ec9-946b-fe639990b7fb req-0e066916-2f80-42de-a98d-d9ff6bfdb0cf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Processing event network-vif-plugged-261f5dec-8a38-4bc8-ac43-93d094daeba1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.235 251996 INFO nova.virt.libvirt.driver [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Deleting instance files /var/lib/nova/instances/946841c5-aadb-47f4-a772-8b25581f01ef_del
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.235 251996 INFO nova.virt.libvirt.driver [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Deletion of /var/lib/nova/instances/946841c5-aadb-47f4-a772-8b25581f01ef_del complete
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.349 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005730.3493419, 87ee67ad-8b8f-4a54-83aa-58e5d28e7497 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.350 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] VM Started (Lifecycle Event)
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.352 251996 DEBUG nova.compute.manager [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.355 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.358 251996 INFO nova.virt.libvirt.driver [-] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Instance spawned successfully.
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.358 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.391 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.395 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.395 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.397 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.398 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.398 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.399 251996 DEBUG nova.virt.libvirt.driver [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.403 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:22:10 compute-0 podman[309991]: 2025-12-06 07:22:10.415798124 +0000 UTC m=+0.050162901 container create b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.415 251996 INFO nova.compute.manager [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Took 15.10 seconds to destroy the instance on the hypervisor.
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.417 251996 DEBUG oslo.service.loopingcall [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.417 251996 DEBUG nova.compute.manager [-] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.417 251996 DEBUG nova.network.neutron [-] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.439 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.440 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005730.350697, 87ee67ad-8b8f-4a54-83aa-58e5d28e7497 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.441 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] VM Paused (Lifecycle Event)
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.460 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:10 compute-0 systemd[1]: Started libpod-conmon-b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c.scope.
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.465 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005730.355592, 87ee67ad-8b8f-4a54-83aa-58e5d28e7497 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.466 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] VM Resumed (Lifecycle Event)
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.476 251996 INFO nova.compute.manager [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Took 9.76 seconds to spawn the instance on the hypervisor.
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.477 251996 DEBUG nova.compute.manager [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:10 compute-0 podman[309991]: 2025-12-06 07:22:10.391454319 +0000 UTC m=+0.025819106 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.487 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.493 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:22:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ea7e0e5c99ffc7b5e5aec665abf26bcfb23b54c4ccaa5442867d924d968e3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:10 compute-0 podman[309991]: 2025-12-06 07:22:10.508083243 +0000 UTC m=+0.142448040 container init b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 07:22:10 compute-0 podman[309991]: 2025-12-06 07:22:10.518410544 +0000 UTC m=+0.152775311 container start b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.519 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.535 251996 INFO nova.compute.manager [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Took 10.73 seconds to build instance.
Dec 06 07:22:10 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[310003]: [NOTICE]   (310008) : New worker (310010) forked
Dec 06 07:22:10 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[310003]: [NOTICE]   (310008) : Loading success.
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.551 251996 DEBUG oslo_concurrency.lockutils [None req-3c647faa-4a7e-447d-a3bd-76ffc0a915ff d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.674 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 06 07:22:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:10.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.892 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-87ee67ad-8b8f-4a54-83aa-58e5d28e7497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.893 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-87ee67ad-8b8f-4a54-83aa-58e5d28e7497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.893 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:22:10 compute-0 nova_compute[251992]: 2025-12-06 07:22:10.894 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 87ee67ad-8b8f-4a54-83aa-58e5d28e7497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:11.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4019152381' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:22:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4019152381' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:22:11 compute-0 ceph-mon[74339]: pgmap v1929: 305 pgs: 305 active+clean; 260 MiB data, 841 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 3.4 MiB/s wr, 63 op/s
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.380 251996 DEBUG nova.network.neutron [-] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.397 251996 INFO nova.compute.manager [-] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Took 0.98 seconds to deallocate network for instance.
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.444 251996 DEBUG oslo_concurrency.lockutils [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.444 251996 DEBUG oslo_concurrency.lockutils [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.514 251996 DEBUG oslo_concurrency.processutils [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.640 251996 INFO nova.compute.manager [None req-bf697084-b6c4-413f-b301-b3bdb1445868 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Pausing
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.642 251996 DEBUG nova.objects.instance [None req-bf697084-b6c4-413f-b301-b3bdb1445868 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'flavor' on Instance uuid 87ee67ad-8b8f-4a54-83aa-58e5d28e7497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.678 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005731.6785836, 87ee67ad-8b8f-4a54-83aa-58e5d28e7497 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.679 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] VM Paused (Lifecycle Event)
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.681 251996 DEBUG nova.compute.manager [None req-bf697084-b6c4-413f-b301-b3bdb1445868 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.699 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.702 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.725 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] During sync_power_state the instance has a pending task (pausing). Skip.
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.869 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 256 MiB data, 838 MiB used, 20 GiB / 21 GiB avail; 158 KiB/s rd, 4.9 MiB/s wr, 100 op/s
Dec 06 07:22:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351889120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.986 251996 DEBUG oslo_concurrency.processutils [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:11 compute-0 nova_compute[251992]: 2025-12-06 07:22:11.991 251996 DEBUG nova.compute.provider_tree [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.009 251996 DEBUG nova.scheduler.client.report [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.047 251996 DEBUG oslo_concurrency.lockutils [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.079 251996 INFO nova.scheduler.client.report [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Deleted allocations for instance 946841c5-aadb-47f4-a772-8b25581f01ef
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.158 251996 DEBUG oslo_concurrency.lockutils [None req-cd359c89-d0f7-43fc-906e-b3aea811854b a52e2b4388994d8791443483bd42cc33 b558585a6aa14470bdad319926a98046 - - default default] Lock "946841c5-aadb-47f4-a772-8b25581f01ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.170 251996 DEBUG nova.compute.manager [req-0e4659e1-39c0-439f-8deb-1dff2e8fc07e req-d88b25e8-4cc4-4f6c-aba4-1ab91108e4a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Received event network-vif-plugged-261f5dec-8a38-4bc8-ac43-93d094daeba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.171 251996 DEBUG oslo_concurrency.lockutils [req-0e4659e1-39c0-439f-8deb-1dff2e8fc07e req-d88b25e8-4cc4-4f6c-aba4-1ab91108e4a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.171 251996 DEBUG oslo_concurrency.lockutils [req-0e4659e1-39c0-439f-8deb-1dff2e8fc07e req-d88b25e8-4cc4-4f6c-aba4-1ab91108e4a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.171 251996 DEBUG oslo_concurrency.lockutils [req-0e4659e1-39c0-439f-8deb-1dff2e8fc07e req-d88b25e8-4cc4-4f6c-aba4-1ab91108e4a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.172 251996 DEBUG nova.compute.manager [req-0e4659e1-39c0-439f-8deb-1dff2e8fc07e req-d88b25e8-4cc4-4f6c-aba4-1ab91108e4a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] No waiting events found dispatching network-vif-plugged-261f5dec-8a38-4bc8-ac43-93d094daeba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.172 251996 WARNING nova.compute.manager [req-0e4659e1-39c0-439f-8deb-1dff2e8fc07e req-d88b25e8-4cc4-4f6c-aba4-1ab91108e4a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Received unexpected event network-vif-plugged-261f5dec-8a38-4bc8-ac43-93d094daeba1 for instance with vm_state paused and task_state None.
Dec 06 07:22:12 compute-0 sudo[310041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:12 compute-0 sudo[310041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:12 compute-0 sudo[310041]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:12 compute-0 sudo[310066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:22:12 compute-0 sudo[310066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:12 compute-0 sudo[310066]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:12 compute-0 sudo[310091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:12 compute-0 sudo[310091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:12 compute-0 sudo[310091]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.454 251996 DEBUG nova.compute.manager [req-8dcb79b3-fac6-4bfe-af4f-2f2c0650346d req-8135e333-7ef0-4395-a0e6-566150c99e96 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Received event network-vif-deleted-286fcec3-a1f1-4e67-9aec-e5b3ff3d2a09 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:12 compute-0 sudo[310116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:22:12 compute-0 sudo[310116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.586 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Updating instance_info_cache with network_info: [{"id": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "address": "fa:16:3e:d5:a1:d7", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap261f5dec-8a", "ovs_interfaceid": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.610 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-87ee67ad-8b8f-4a54-83aa-58e5d28e7497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:22:12 compute-0 nova_compute[251992]: 2025-12-06 07:22:12.610 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:22:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:12.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4131463299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:12 compute-0 ceph-mon[74339]: pgmap v1930: 305 pgs: 305 active+clean; 256 MiB data, 838 MiB used, 20 GiB / 21 GiB avail; 158 KiB/s rd, 4.9 MiB/s wr, 100 op/s
Dec 06 07:22:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2351889120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:12 compute-0 sudo[310116]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:22:12 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:22:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:22:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:22:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:22:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:22:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:13.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.706 251996 DEBUG oslo_concurrency.lockutils [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.706 251996 DEBUG oslo_concurrency.lockutils [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.706 251996 DEBUG oslo_concurrency.lockutils [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.706 251996 DEBUG oslo_concurrency.lockutils [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.707 251996 DEBUG oslo_concurrency.lockutils [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.707 251996 INFO nova.compute.manager [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Terminating instance
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.708 251996 DEBUG nova.compute.manager [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:22:13 compute-0 kernel: tap261f5dec-8a (unregistering): left promiscuous mode
Dec 06 07:22:13 compute-0 NetworkManager[48965]: <info>  [1765005733.7439] device (tap261f5dec-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:22:13 compute-0 ovn_controller[147168]: 2025-12-06T07:22:13Z|00323|binding|INFO|Releasing lport 261f5dec-8a38-4bc8-ac43-93d094daeba1 from this chassis (sb_readonly=0)
Dec 06 07:22:13 compute-0 ovn_controller[147168]: 2025-12-06T07:22:13Z|00324|binding|INFO|Setting lport 261f5dec-8a38-4bc8-ac43-93d094daeba1 down in Southbound
Dec 06 07:22:13 compute-0 ovn_controller[147168]: 2025-12-06T07:22:13Z|00325|binding|INFO|Removing iface tap261f5dec-8a ovn-installed in OVS
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.751 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.754 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:13.760 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:a1:d7 10.100.0.10'], port_security=['fa:16:3e:d5:a1:d7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '87ee67ad-8b8f-4a54-83aa-58e5d28e7497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '4', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=261f5dec-8a38-4bc8-ac43-93d094daeba1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2697805056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4258869415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:13.762 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 261f5dec-8a38-4bc8-ac43-93d094daeba1 in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 unbound from our chassis
Dec 06 07:22:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:13.764 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:22:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:13.765 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f43794f0-cce4-47c7-9d4a-f2d1e8d3c43c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:13.765 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace which is not needed anymore
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.773 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:13 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Dec 06 07:22:13 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d0000005d.scope: Consumed 1.973s CPU time.
Dec 06 07:22:13 compute-0 systemd-machined[212986]: Machine qemu-42-instance-0000005d terminated.
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.826 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:22:13 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[310003]: [NOTICE]   (310008) : haproxy version is 2.8.14-c23fe91
Dec 06 07:22:13 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[310003]: [NOTICE]   (310008) : path to executable is /usr/sbin/haproxy
Dec 06 07:22:13 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[310003]: [WARNING]  (310008) : Exiting Master process...
Dec 06 07:22:13 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[310003]: [WARNING]  (310008) : Exiting Master process...
Dec 06 07:22:13 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[310003]: [ALERT]    (310008) : Current worker (310010) exited with code 143 (Terminated)
Dec 06 07:22:13 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[310003]: [WARNING]  (310008) : All workers exited. Exiting... (0)
Dec 06 07:22:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:22:13 compute-0 systemd[1]: libpod-b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c.scope: Deactivated successfully.
Dec 06 07:22:13 compute-0 conmon[310003]: conmon b90440a8fa40bf8721c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c.scope/container/memory.events
Dec 06 07:22:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 241 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 942 KiB/s rd, 6.4 MiB/s wr, 183 op/s
Dec 06 07:22:13 compute-0 podman[310194]: 2025-12-06 07:22:13.89404465 +0000 UTC m=+0.045780850 container died b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:22:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c-userdata-shm.mount: Deactivated successfully.
Dec 06 07:22:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-14ea7e0e5c99ffc7b5e5aec665abf26bcfb23b54c4ccaa5442867d924d968e3a-merged.mount: Deactivated successfully.
Dec 06 07:22:13 compute-0 podman[310194]: 2025-12-06 07:22:13.929497388 +0000 UTC m=+0.081233568 container cleanup b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.938 251996 INFO nova.virt.libvirt.driver [-] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Instance destroyed successfully.
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.940 251996 DEBUG nova.objects.instance [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'resources' on Instance uuid 87ee67ad-8b8f-4a54-83aa-58e5d28e7497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:13 compute-0 systemd[1]: libpod-conmon-b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c.scope: Deactivated successfully.
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.957 251996 DEBUG nova.virt.libvirt.vif [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:21:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-633928739',display_name='tempest-DeleteServersTestJSON-server-633928739',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-633928739',id=93,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:22:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-t43t8w0w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:22:11Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=87ee67ad-8b8f-4a54-83aa-58e5d28e7497,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "address": "fa:16:3e:d5:a1:d7", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap261f5dec-8a", "ovs_interfaceid": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.957 251996 DEBUG nova.network.os_vif_util [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "address": "fa:16:3e:d5:a1:d7", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap261f5dec-8a", "ovs_interfaceid": "261f5dec-8a38-4bc8-ac43-93d094daeba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.958 251996 DEBUG nova.network.os_vif_util [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:a1:d7,bridge_name='br-int',has_traffic_filtering=True,id=261f5dec-8a38-4bc8-ac43-93d094daeba1,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap261f5dec-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.959 251996 DEBUG os_vif [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:a1:d7,bridge_name='br-int',has_traffic_filtering=True,id=261f5dec-8a38-4bc8-ac43-93d094daeba1,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap261f5dec-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.961 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.961 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap261f5dec-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.962 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.965 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:22:13 compute-0 nova_compute[251992]: 2025-12-06 07:22:13.967 251996 INFO os_vif [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:a1:d7,bridge_name='br-int',has_traffic_filtering=True,id=261f5dec-8a38-4bc8-ac43-93d094daeba1,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap261f5dec-8a')
Dec 06 07:22:13 compute-0 podman[310231]: 2025-12-06 07:22:13.999475979 +0000 UTC m=+0.043235251 container remove b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:22:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:14.005 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec1cff7-d5b9-44ae-a5f2-c707c5870204]: (4, ('Sat Dec  6 07:22:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c)\nb90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c\nSat Dec  6 07:22:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (b90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c)\nb90440a8fa40bf8721c893f1a91565a95b01fe3d68a0d887a8af4e32bb1e272c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:14.008 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2689619b-0ddd-4f96-8843-12d929e2a721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:14.009 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:14 compute-0 nova_compute[251992]: 2025-12-06 07:22:14.012 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:14 compute-0 kernel: tap85cfbf28-70: left promiscuous mode
Dec 06 07:22:14 compute-0 nova_compute[251992]: 2025-12-06 07:22:14.028 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:14 compute-0 nova_compute[251992]: 2025-12-06 07:22:14.052 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:14.054 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5b69ed41-4ac2-4447-9389-df4cc001d3df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:14.065 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[07c871d8-5eda-479c-aba2-228a9a182f63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:14.067 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[13797205-1314-476d-8719-0d976f00d8a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:14.082 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5089fccb-7b88-466e-a0a4-028179c0cb84]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600242, 'reachable_time': 23646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310265, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d85cfbf28\x2d7016\x2d4776\x2d8fc2\x2d2eb08a6b8347.mount: Deactivated successfully.
Dec 06 07:22:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:14.087 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:22:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:14.087 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[c58d856e-d3d6-4abe-a432-2b3a02f8c576]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:14 compute-0 nova_compute[251992]: 2025-12-06 07:22:14.390 251996 DEBUG nova.compute.manager [req-9d6b6247-e704-4c15-bbeb-2b5887973361 req-a16539ea-64c2-4d35-9ef7-75e833774c14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Received event network-vif-unplugged-261f5dec-8a38-4bc8-ac43-93d094daeba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:14 compute-0 nova_compute[251992]: 2025-12-06 07:22:14.390 251996 DEBUG oslo_concurrency.lockutils [req-9d6b6247-e704-4c15-bbeb-2b5887973361 req-a16539ea-64c2-4d35-9ef7-75e833774c14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:14 compute-0 nova_compute[251992]: 2025-12-06 07:22:14.390 251996 DEBUG oslo_concurrency.lockutils [req-9d6b6247-e704-4c15-bbeb-2b5887973361 req-a16539ea-64c2-4d35-9ef7-75e833774c14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:14 compute-0 nova_compute[251992]: 2025-12-06 07:22:14.390 251996 DEBUG oslo_concurrency.lockutils [req-9d6b6247-e704-4c15-bbeb-2b5887973361 req-a16539ea-64c2-4d35-9ef7-75e833774c14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:14 compute-0 nova_compute[251992]: 2025-12-06 07:22:14.391 251996 DEBUG nova.compute.manager [req-9d6b6247-e704-4c15-bbeb-2b5887973361 req-a16539ea-64c2-4d35-9ef7-75e833774c14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] No waiting events found dispatching network-vif-unplugged-261f5dec-8a38-4bc8-ac43-93d094daeba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:14 compute-0 nova_compute[251992]: 2025-12-06 07:22:14.391 251996 DEBUG nova.compute.manager [req-9d6b6247-e704-4c15-bbeb-2b5887973361 req-a16539ea-64c2-4d35-9ef7-75e833774c14 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Received event network-vif-unplugged-261f5dec-8a38-4bc8-ac43-93d094daeba1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:22:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:22:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:22:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:22:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:22:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:22:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:14.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:14 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 48853ee4-11a8-4779-ae14-9a577c11e403 does not exist
Dec 06 07:22:14 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2a4c4a6e-fbf9-4bf5-914a-32062b3bfe2d does not exist
Dec 06 07:22:14 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 08e39cd3-a78f-4cec-8772-ff965dbb909b does not exist
Dec 06 07:22:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:22:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:22:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:22:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:22:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:22:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:22:14 compute-0 sudo[310268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:14 compute-0 sudo[310268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:14 compute-0 sudo[310268]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:15 compute-0 sudo[310293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:22:15 compute-0 sudo[310293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:15 compute-0 sudo[310293]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:15.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:15 compute-0 sudo[310318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:15 compute-0 sudo[310318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:15 compute-0 sudo[310318]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:15 compute-0 sudo[310343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:22:15 compute-0 sudo[310343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:15 compute-0 podman[310405]: 2025-12-06 07:22:15.410322142 +0000 UTC m=+0.038284057 container create 49f3e6c847e1aef1855b6c223d3f986f2c44cede88624475142ae009aba21af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:22:15 compute-0 systemd[1]: Started libpod-conmon-49f3e6c847e1aef1855b6c223d3f986f2c44cede88624475142ae009aba21af7.scope.
Dec 06 07:22:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:22:15 compute-0 podman[310405]: 2025-12-06 07:22:15.392930566 +0000 UTC m=+0.020892411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:22:15 compute-0 podman[310405]: 2025-12-06 07:22:15.502877598 +0000 UTC m=+0.130839433 container init 49f3e6c847e1aef1855b6c223d3f986f2c44cede88624475142ae009aba21af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:22:15 compute-0 podman[310405]: 2025-12-06 07:22:15.509761516 +0000 UTC m=+0.137723331 container start 49f3e6c847e1aef1855b6c223d3f986f2c44cede88624475142ae009aba21af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 07:22:15 compute-0 podman[310405]: 2025-12-06 07:22:15.512236623 +0000 UTC m=+0.140198468 container attach 49f3e6c847e1aef1855b6c223d3f986f2c44cede88624475142ae009aba21af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 07:22:15 compute-0 stupefied_archimedes[310422]: 167 167
Dec 06 07:22:15 compute-0 systemd[1]: libpod-49f3e6c847e1aef1855b6c223d3f986f2c44cede88624475142ae009aba21af7.scope: Deactivated successfully.
Dec 06 07:22:15 compute-0 podman[310405]: 2025-12-06 07:22:15.515630046 +0000 UTC m=+0.143591861 container died 49f3e6c847e1aef1855b6c223d3f986f2c44cede88624475142ae009aba21af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 07:22:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1942a1d250a54a01849f572918ce536411c06f84dba97345c8edc73738b3f58f-merged.mount: Deactivated successfully.
Dec 06 07:22:15 compute-0 podman[310405]: 2025-12-06 07:22:15.555088533 +0000 UTC m=+0.183050348 container remove 49f3e6c847e1aef1855b6c223d3f986f2c44cede88624475142ae009aba21af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:22:15 compute-0 systemd[1]: libpod-conmon-49f3e6c847e1aef1855b6c223d3f986f2c44cede88624475142ae009aba21af7.scope: Deactivated successfully.
Dec 06 07:22:15 compute-0 podman[310446]: 2025-12-06 07:22:15.757962091 +0000 UTC m=+0.090555113 container create 80829375e92c90096af02de4174d5fa3d04b2f827859383a8dc01ec7e134fc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:22:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:15 compute-0 ceph-mon[74339]: pgmap v1931: 305 pgs: 305 active+clean; 241 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 942 KiB/s rd, 6.4 MiB/s wr, 183 op/s
Dec 06 07:22:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:22:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:22:15 compute-0 podman[310446]: 2025-12-06 07:22:15.690160331 +0000 UTC m=+0.022753373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:22:15 compute-0 systemd[1]: Started libpod-conmon-80829375e92c90096af02de4174d5fa3d04b2f827859383a8dc01ec7e134fc1c.scope.
Dec 06 07:22:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b2e0617462c10e0a7490aed92fc0f627c66cbb1107dd0f767dee85fab38fe2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b2e0617462c10e0a7490aed92fc0f627c66cbb1107dd0f767dee85fab38fe2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b2e0617462c10e0a7490aed92fc0f627c66cbb1107dd0f767dee85fab38fe2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b2e0617462c10e0a7490aed92fc0f627c66cbb1107dd0f767dee85fab38fe2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b2e0617462c10e0a7490aed92fc0f627c66cbb1107dd0f767dee85fab38fe2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:15 compute-0 podman[310446]: 2025-12-06 07:22:15.869065454 +0000 UTC m=+0.201658486 container init 80829375e92c90096af02de4174d5fa3d04b2f827859383a8dc01ec7e134fc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:22:15 compute-0 podman[310446]: 2025-12-06 07:22:15.875904551 +0000 UTC m=+0.208497573 container start 80829375e92c90096af02de4174d5fa3d04b2f827859383a8dc01ec7e134fc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:22:15 compute-0 podman[310446]: 2025-12-06 07:22:15.880546978 +0000 UTC m=+0.213140020 container attach 80829375e92c90096af02de4174d5fa3d04b2f827859383a8dc01ec7e134fc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:22:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 188 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.3 MiB/s wr, 229 op/s
Dec 06 07:22:16 compute-0 nova_compute[251992]: 2025-12-06 07:22:16.600 251996 DEBUG nova.compute.manager [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Received event network-vif-plugged-261f5dec-8a38-4bc8-ac43-93d094daeba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:16 compute-0 nova_compute[251992]: 2025-12-06 07:22:16.602 251996 DEBUG oslo_concurrency.lockutils [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:16 compute-0 nova_compute[251992]: 2025-12-06 07:22:16.602 251996 DEBUG oslo_concurrency.lockutils [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:16 compute-0 nova_compute[251992]: 2025-12-06 07:22:16.603 251996 DEBUG oslo_concurrency.lockutils [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:16 compute-0 nova_compute[251992]: 2025-12-06 07:22:16.603 251996 DEBUG nova.compute.manager [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] No waiting events found dispatching network-vif-plugged-261f5dec-8a38-4bc8-ac43-93d094daeba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:16 compute-0 nova_compute[251992]: 2025-12-06 07:22:16.603 251996 WARNING nova.compute.manager [req-96d97ac7-44e6-4131-bbe8-1b34cc184d07 req-ad161ac9-cfc4-454b-bdb2-47747b820310 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Received unexpected event network-vif-plugged-261f5dec-8a38-4bc8-ac43-93d094daeba1 for instance with vm_state paused and task_state deleting.
Dec 06 07:22:16 compute-0 wizardly_grothendieck[310462]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:22:16 compute-0 wizardly_grothendieck[310462]: --> relative data size: 1.0
Dec 06 07:22:16 compute-0 wizardly_grothendieck[310462]: --> All data devices are unavailable
Dec 06 07:22:16 compute-0 systemd[1]: libpod-80829375e92c90096af02de4174d5fa3d04b2f827859383a8dc01ec7e134fc1c.scope: Deactivated successfully.
Dec 06 07:22:16 compute-0 podman[310446]: 2025-12-06 07:22:16.692156992 +0000 UTC m=+1.024750024 container died 80829375e92c90096af02de4174d5fa3d04b2f827859383a8dc01ec7e134fc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8b2e0617462c10e0a7490aed92fc0f627c66cbb1107dd0f767dee85fab38fe2-merged.mount: Deactivated successfully.
Dec 06 07:22:16 compute-0 podman[310446]: 2025-12-06 07:22:16.742835275 +0000 UTC m=+1.075428297 container remove 80829375e92c90096af02de4174d5fa3d04b2f827859383a8dc01ec7e134fc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 07:22:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:16.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:16 compute-0 systemd[1]: libpod-conmon-80829375e92c90096af02de4174d5fa3d04b2f827859383a8dc01ec7e134fc1c.scope: Deactivated successfully.
Dec 06 07:22:16 compute-0 sudo[310343]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:16 compute-0 sudo[310489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:16 compute-0 sudo[310489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:16 compute-0 sudo[310489]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:22:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:22:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:22:16 compute-0 ceph-mon[74339]: pgmap v1932: 305 pgs: 305 active+clean; 188 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.3 MiB/s wr, 229 op/s
Dec 06 07:22:16 compute-0 sudo[310514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:22:16 compute-0 sudo[310514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:16 compute-0 sudo[310514]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:16 compute-0 sudo[310539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:16 compute-0 sudo[310539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:16 compute-0 sudo[310539]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:16 compute-0 sudo[310564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:22:16 compute-0 sudo[310564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:17.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:17 compute-0 nova_compute[251992]: 2025-12-06 07:22:17.100 251996 INFO nova.virt.libvirt.driver [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Deleting instance files /var/lib/nova/instances/87ee67ad-8b8f-4a54-83aa-58e5d28e7497_del
Dec 06 07:22:17 compute-0 nova_compute[251992]: 2025-12-06 07:22:17.101 251996 INFO nova.virt.libvirt.driver [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Deletion of /var/lib/nova/instances/87ee67ad-8b8f-4a54-83aa-58e5d28e7497_del complete
Dec 06 07:22:17 compute-0 nova_compute[251992]: 2025-12-06 07:22:17.166 251996 INFO nova.compute.manager [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Took 3.46 seconds to destroy the instance on the hypervisor.
Dec 06 07:22:17 compute-0 nova_compute[251992]: 2025-12-06 07:22:17.167 251996 DEBUG oslo.service.loopingcall [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:22:17 compute-0 nova_compute[251992]: 2025-12-06 07:22:17.167 251996 DEBUG nova.compute.manager [-] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:22:17 compute-0 nova_compute[251992]: 2025-12-06 07:22:17.167 251996 DEBUG nova.network.neutron [-] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:22:17 compute-0 podman[310628]: 2025-12-06 07:22:17.295846912 +0000 UTC m=+0.042261265 container create 76405edad532c1266fbcc75fbd0ace68adb467fd80820680178ab6dc8b3ee911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 07:22:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:17 compute-0 systemd[1]: Started libpod-conmon-76405edad532c1266fbcc75fbd0ace68adb467fd80820680178ab6dc8b3ee911.scope.
Dec 06 07:22:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:22:17 compute-0 podman[310628]: 2025-12-06 07:22:17.274656873 +0000 UTC m=+0.021071286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:22:17 compute-0 podman[310628]: 2025-12-06 07:22:17.380205824 +0000 UTC m=+0.126620207 container init 76405edad532c1266fbcc75fbd0ace68adb467fd80820680178ab6dc8b3ee911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:22:17 compute-0 podman[310628]: 2025-12-06 07:22:17.388589293 +0000 UTC m=+0.135003666 container start 76405edad532c1266fbcc75fbd0ace68adb467fd80820680178ab6dc8b3ee911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:22:17 compute-0 flamboyant_ride[310644]: 167 167
Dec 06 07:22:17 compute-0 systemd[1]: libpod-76405edad532c1266fbcc75fbd0ace68adb467fd80820680178ab6dc8b3ee911.scope: Deactivated successfully.
Dec 06 07:22:17 compute-0 podman[310628]: 2025-12-06 07:22:17.395126471 +0000 UTC m=+0.141540854 container attach 76405edad532c1266fbcc75fbd0ace68adb467fd80820680178ab6dc8b3ee911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 07:22:17 compute-0 podman[310628]: 2025-12-06 07:22:17.39545141 +0000 UTC m=+0.141865773 container died 76405edad532c1266fbcc75fbd0ace68adb467fd80820680178ab6dc8b3ee911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-746064d428636ee156fc867f38b34dd11226899834cd36f1aff12601bef3600a-merged.mount: Deactivated successfully.
Dec 06 07:22:17 compute-0 podman[310628]: 2025-12-06 07:22:17.441617441 +0000 UTC m=+0.188031804 container remove 76405edad532c1266fbcc75fbd0ace68adb467fd80820680178ab6dc8b3ee911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ride, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:22:17 compute-0 systemd[1]: libpod-conmon-76405edad532c1266fbcc75fbd0ace68adb467fd80820680178ab6dc8b3ee911.scope: Deactivated successfully.
Dec 06 07:22:17 compute-0 podman[310667]: 2025-12-06 07:22:17.634795484 +0000 UTC m=+0.052501674 container create 53391287370d6c5f3af7aeda48fced71ea81a7e0929dea2fc9e4581c9c8839ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 07:22:17 compute-0 systemd[1]: Started libpod-conmon-53391287370d6c5f3af7aeda48fced71ea81a7e0929dea2fc9e4581c9c8839ed.scope.
Dec 06 07:22:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9247f62ae697061be2418863675d3116afdeccca063a50b45334f1e66fd1883c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9247f62ae697061be2418863675d3116afdeccca063a50b45334f1e66fd1883c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9247f62ae697061be2418863675d3116afdeccca063a50b45334f1e66fd1883c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9247f62ae697061be2418863675d3116afdeccca063a50b45334f1e66fd1883c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:17 compute-0 podman[310667]: 2025-12-06 07:22:17.696519339 +0000 UTC m=+0.114225579 container init 53391287370d6c5f3af7aeda48fced71ea81a7e0929dea2fc9e4581c9c8839ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pasteur, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:22:17 compute-0 podman[310667]: 2025-12-06 07:22:17.70422193 +0000 UTC m=+0.121928120 container start 53391287370d6c5f3af7aeda48fced71ea81a7e0929dea2fc9e4581c9c8839ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pasteur, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 07:22:17 compute-0 podman[310667]: 2025-12-06 07:22:17.707420567 +0000 UTC m=+0.125126777 container attach 53391287370d6c5f3af7aeda48fced71ea81a7e0929dea2fc9e4581c9c8839ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pasteur, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Dec 06 07:22:17 compute-0 podman[310667]: 2025-12-06 07:22:17.615478757 +0000 UTC m=+0.033185007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:22:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 167 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.6 MiB/s wr, 255 op/s
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.130 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:18.130 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:18.132 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.166 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005723.1660733, 946841c5-aadb-47f4-a772-8b25581f01ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.167 251996 INFO nova.compute.manager [-] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] VM Stopped (Lifecycle Event)
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.188 251996 DEBUG nova.compute.manager [None req-064389ed-98d6-4aba-80e0-7f31a896c446 - - - - - -] [instance: 946841c5-aadb-47f4-a772-8b25581f01ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.387 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:22:18
Dec 06 07:22:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:22:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:22:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.control', 'backups', 'default.rgw.log', '.mgr', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data']
Dec 06 07:22:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.417 251996 DEBUG nova.network.neutron [-] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.430 251996 INFO nova.compute.manager [-] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Took 1.26 seconds to deallocate network for instance.
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]: {
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:     "0": [
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:         {
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             "devices": [
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "/dev/loop3"
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             ],
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             "lv_name": "ceph_lv0",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             "lv_size": "7511998464",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             "name": "ceph_lv0",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             "tags": {
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "ceph.cluster_name": "ceph",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "ceph.crush_device_class": "",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "ceph.encrypted": "0",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "ceph.osd_id": "0",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "ceph.type": "block",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:                 "ceph.vdo": "0"
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             },
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             "type": "block",
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:             "vg_name": "ceph_vg0"
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:         }
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]:     ]
Dec 06 07:22:18 compute-0 dazzling_pasteur[310683]: }
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.476 251996 DEBUG oslo_concurrency.lockutils [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.476 251996 DEBUG oslo_concurrency.lockutils [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.492 251996 DEBUG nova.compute.manager [req-0ea6a925-dc51-4158-b96c-bb7c0a5b96b7 req-6db7bcb0-f6d0-4e44-b034-d24d26396316 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Received event network-vif-deleted-261f5dec-8a38-4bc8-ac43-93d094daeba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:18 compute-0 systemd[1]: libpod-53391287370d6c5f3af7aeda48fced71ea81a7e0929dea2fc9e4581c9c8839ed.scope: Deactivated successfully.
Dec 06 07:22:18 compute-0 podman[310667]: 2025-12-06 07:22:18.498336997 +0000 UTC m=+0.916043197 container died 53391287370d6c5f3af7aeda48fced71ea81a7e0929dea2fc9e4581c9c8839ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.539 251996 DEBUG oslo_concurrency.processutils [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:18.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:18 compute-0 ceph-mon[74339]: pgmap v1933: 305 pgs: 305 active+clean; 167 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.6 MiB/s wr, 255 op/s
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.829 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-9247f62ae697061be2418863675d3116afdeccca063a50b45334f1e66fd1883c-merged.mount: Deactivated successfully.
Dec 06 07:22:18 compute-0 podman[310667]: 2025-12-06 07:22:18.874625158 +0000 UTC m=+1.292331348 container remove 53391287370d6c5f3af7aeda48fced71ea81a7e0929dea2fc9e4581c9c8839ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:22:18 compute-0 systemd[1]: libpod-conmon-53391287370d6c5f3af7aeda48fced71ea81a7e0929dea2fc9e4581c9c8839ed.scope: Deactivated successfully.
Dec 06 07:22:18 compute-0 sudo[310564]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:18 compute-0 nova_compute[251992]: 2025-12-06 07:22:18.963 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:18 compute-0 sudo[310725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:18 compute-0 sudo[310725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:18 compute-0 sudo[310725]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030584476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:19 compute-0 nova_compute[251992]: 2025-12-06 07:22:19.012 251996 DEBUG oslo_concurrency.processutils [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:19 compute-0 nova_compute[251992]: 2025-12-06 07:22:19.018 251996 DEBUG nova.compute.provider_tree [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:22:19 compute-0 sudo[310750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:22:19 compute-0 sudo[310750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:19 compute-0 sudo[310750]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:19.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:19 compute-0 nova_compute[251992]: 2025-12-06 07:22:19.044 251996 DEBUG nova.scheduler.client.report [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:22:19 compute-0 nova_compute[251992]: 2025-12-06 07:22:19.069 251996 DEBUG oslo_concurrency.lockutils [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:19 compute-0 sudo[310777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:19 compute-0 sudo[310777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:19 compute-0 sudo[310777]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:19 compute-0 nova_compute[251992]: 2025-12-06 07:22:19.104 251996 INFO nova.scheduler.client.report [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Deleted allocations for instance 87ee67ad-8b8f-4a54-83aa-58e5d28e7497
Dec 06 07:22:19 compute-0 sudo[310802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:22:19 compute-0 sudo[310802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:19 compute-0 nova_compute[251992]: 2025-12-06 07:22:19.200 251996 DEBUG oslo_concurrency.lockutils [None req-8046a607-ebcc-4c55-a434-cdb7db3dc2b4 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "87ee67ad-8b8f-4a54-83aa-58e5d28e7497" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:19 compute-0 podman[310866]: 2025-12-06 07:22:19.41174459 +0000 UTC m=+0.020083039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:22:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 167 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 210 op/s
Dec 06 07:22:20 compute-0 podman[310866]: 2025-12-06 07:22:20.01836977 +0000 UTC m=+0.626708219 container create 9dfb352937294ad2890c463054f78ad5e9511a16466d7890423bf019db9a39e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_faraday, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:22:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2030584476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:20 compute-0 systemd[1]: Started libpod-conmon-9dfb352937294ad2890c463054f78ad5e9511a16466d7890423bf019db9a39e1.scope.
Dec 06 07:22:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:22:20 compute-0 podman[310866]: 2025-12-06 07:22:20.184221077 +0000 UTC m=+0.792559516 container init 9dfb352937294ad2890c463054f78ad5e9511a16466d7890423bf019db9a39e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 07:22:20 compute-0 podman[310866]: 2025-12-06 07:22:20.191263209 +0000 UTC m=+0.799601638 container start 9dfb352937294ad2890c463054f78ad5e9511a16466d7890423bf019db9a39e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_faraday, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:22:20 compute-0 inspiring_faraday[310895]: 167 167
Dec 06 07:22:20 compute-0 systemd[1]: libpod-9dfb352937294ad2890c463054f78ad5e9511a16466d7890423bf019db9a39e1.scope: Deactivated successfully.
Dec 06 07:22:20 compute-0 conmon[310895]: conmon 9dfb352937294ad2890c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9dfb352937294ad2890c463054f78ad5e9511a16466d7890423bf019db9a39e1.scope/container/memory.events
Dec 06 07:22:20 compute-0 podman[310866]: 2025-12-06 07:22:20.248598874 +0000 UTC m=+0.856937313 container attach 9dfb352937294ad2890c463054f78ad5e9511a16466d7890423bf019db9a39e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:22:20 compute-0 podman[310866]: 2025-12-06 07:22:20.24954502 +0000 UTC m=+0.857883449 container died 9dfb352937294ad2890c463054f78ad5e9511a16466d7890423bf019db9a39e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:22:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab70ee1246d44d12988363815d4fd574eceb371616ac61f76a7cdd47a59d04d2-merged.mount: Deactivated successfully.
Dec 06 07:22:20 compute-0 podman[310866]: 2025-12-06 07:22:20.361406194 +0000 UTC m=+0.969744623 container remove 9dfb352937294ad2890c463054f78ad5e9511a16466d7890423bf019db9a39e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:22:20 compute-0 systemd[1]: libpod-conmon-9dfb352937294ad2890c463054f78ad5e9511a16466d7890423bf019db9a39e1.scope: Deactivated successfully.
Dec 06 07:22:20 compute-0 podman[310880]: 2025-12-06 07:22:20.388315058 +0000 UTC m=+0.330878723 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:22:20 compute-0 podman[310935]: 2025-12-06 07:22:20.517448373 +0000 UTC m=+0.046826149 container create ebf176cfc505e730845759cf34c3f7a5f6c9292a04ee80b84545f225f23c621b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:22:20 compute-0 systemd[1]: Started libpod-conmon-ebf176cfc505e730845759cf34c3f7a5f6c9292a04ee80b84545f225f23c621b.scope.
Dec 06 07:22:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5bd2b4feeac9bf66d634a32f1ed9640e2a5e99857f69109e2297a336b0e368/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5bd2b4feeac9bf66d634a32f1ed9640e2a5e99857f69109e2297a336b0e368/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5bd2b4feeac9bf66d634a32f1ed9640e2a5e99857f69109e2297a336b0e368/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b5bd2b4feeac9bf66d634a32f1ed9640e2a5e99857f69109e2297a336b0e368/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:20 compute-0 podman[310935]: 2025-12-06 07:22:20.587013673 +0000 UTC m=+0.116391469 container init ebf176cfc505e730845759cf34c3f7a5f6c9292a04ee80b84545f225f23c621b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 07:22:20 compute-0 podman[310935]: 2025-12-06 07:22:20.493996073 +0000 UTC m=+0.023373869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:22:20 compute-0 podman[310935]: 2025-12-06 07:22:20.593191771 +0000 UTC m=+0.122569547 container start ebf176cfc505e730845759cf34c3f7a5f6c9292a04ee80b84545f225f23c621b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curie, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:22:20 compute-0 podman[310935]: 2025-12-06 07:22:20.598059064 +0000 UTC m=+0.127436870 container attach ebf176cfc505e730845759cf34c3f7a5f6c9292a04ee80b84545f225f23c621b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curie, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:22:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:20.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:21.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:21 compute-0 ceph-mon[74339]: pgmap v1934: 305 pgs: 305 active+clean; 167 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 210 op/s
Dec 06 07:22:21 compute-0 practical_curie[310952]: {
Dec 06 07:22:21 compute-0 practical_curie[310952]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:22:21 compute-0 practical_curie[310952]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:22:21 compute-0 practical_curie[310952]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:22:21 compute-0 practical_curie[310952]:         "osd_id": 0,
Dec 06 07:22:21 compute-0 practical_curie[310952]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:22:21 compute-0 practical_curie[310952]:         "type": "bluestore"
Dec 06 07:22:21 compute-0 practical_curie[310952]:     }
Dec 06 07:22:21 compute-0 practical_curie[310952]: }
Dec 06 07:22:21 compute-0 systemd[1]: libpod-ebf176cfc505e730845759cf34c3f7a5f6c9292a04ee80b84545f225f23c621b.scope: Deactivated successfully.
Dec 06 07:22:21 compute-0 podman[310973]: 2025-12-06 07:22:21.466970683 +0000 UTC m=+0.022038253 container died ebf176cfc505e730845759cf34c3f7a5f6c9292a04ee80b84545f225f23c621b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curie, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 07:22:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b5bd2b4feeac9bf66d634a32f1ed9640e2a5e99857f69109e2297a336b0e368-merged.mount: Deactivated successfully.
Dec 06 07:22:21 compute-0 podman[310973]: 2025-12-06 07:22:21.530381813 +0000 UTC m=+0.085449373 container remove ebf176cfc505e730845759cf34c3f7a5f6c9292a04ee80b84545f225f23c621b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_curie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:22:21 compute-0 systemd[1]: libpod-conmon-ebf176cfc505e730845759cf34c3f7a5f6c9292a04ee80b84545f225f23c621b.scope: Deactivated successfully.
Dec 06 07:22:21 compute-0 sudo[310802]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:22:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:22:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 56b1090c-8398-4568-bc71-f187baa067bd does not exist
Dec 06 07:22:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6ab232d9-10ef-431d-9e0c-d7521b680933 does not exist
Dec 06 07:22:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e18d3c1e-fe25-4879-b630-5d9ed3989890 does not exist
Dec 06 07:22:21 compute-0 sudo[310988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:21 compute-0 sudo[310988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:21 compute-0 sudo[310988]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:21 compute-0 sudo[311013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:22:21 compute-0 sudo[311013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:21 compute-0 sudo[311013]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 232 op/s
Dec 06 07:22:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:22:22 compute-0 ceph-mon[74339]: pgmap v1935: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 232 op/s
Dec 06 07:22:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:22.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:23.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:22:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1397180867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:22:23 compute-0 nova_compute[251992]: 2025-12-06 07:22:23.831 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 138 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.6 MiB/s wr, 259 op/s
Dec 06 07:22:23 compute-0 nova_compute[251992]: 2025-12-06 07:22:23.965 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:24.134 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:24.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:24 compute-0 ceph-mon[74339]: pgmap v1936: 305 pgs: 305 active+clean; 138 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.6 MiB/s wr, 259 op/s
Dec 06 07:22:25 compute-0 sudo[311040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:25 compute-0 sudo[311040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:25 compute-0 sudo[311040]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:25.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:25 compute-0 sudo[311077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:25 compute-0 sudo[311077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:25 compute-0 sudo[311077]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:25 compute-0 podman[311064]: 2025-12-06 07:22:25.087006581 +0000 UTC m=+0.055829286 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 06 07:22:25 compute-0 podman[311065]: 2025-12-06 07:22:25.119287581 +0000 UTC m=+0.087814938 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:22:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1757352582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002413507060766797 of space, bias 1.0, pg target 0.7240521182300391 quantized to 32 (current 32)
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:22:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 121 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 43 KiB/s wr, 178 op/s
Dec 06 07:22:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:26.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:27.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:27 compute-0 ceph-mon[74339]: pgmap v1937: 305 pgs: 305 active+clean; 121 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 43 KiB/s wr, 178 op/s
Dec 06 07:22:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 150 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Dec 06 07:22:28 compute-0 ceph-mon[74339]: pgmap v1938: 305 pgs: 305 active+clean; 150 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Dec 06 07:22:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:28.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:28 compute-0 nova_compute[251992]: 2025-12-06 07:22:28.832 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:28 compute-0 nova_compute[251992]: 2025-12-06 07:22:28.938 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005733.9370918, 87ee67ad-8b8f-4a54-83aa-58e5d28e7497 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:28 compute-0 nova_compute[251992]: 2025-12-06 07:22:28.938 251996 INFO nova.compute.manager [-] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] VM Stopped (Lifecycle Event)
Dec 06 07:22:28 compute-0 nova_compute[251992]: 2025-12-06 07:22:28.964 251996 DEBUG nova.compute.manager [None req-c2a5a83d-34b5-4dae-bd91-72a7a7567917 - - - - - -] [instance: 87ee67ad-8b8f-4a54-83aa-58e5d28e7497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:28 compute-0 nova_compute[251992]: 2025-12-06 07:22:28.968 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:29.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1655764102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1821247374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 150 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 102 op/s
Dec 06 07:22:30 compute-0 ceph-mon[74339]: pgmap v1939: 305 pgs: 305 active+clean; 150 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 102 op/s
Dec 06 07:22:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:30.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:31.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Dec 06 07:22:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:32.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:33.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:33 compute-0 nova_compute[251992]: 2025-12-06 07:22:33.835 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Dec 06 07:22:33 compute-0 nova_compute[251992]: 2025-12-06 07:22:33.969 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:34.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:35.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Dec 06 07:22:36 compute-0 ceph-mon[74339]: pgmap v1940: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Dec 06 07:22:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:36.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:37.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:37 compute-0 ceph-mon[74339]: pgmap v1941: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Dec 06 07:22:37 compute-0 ceph-mon[74339]: pgmap v1942: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Dec 06 07:22:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/123745592' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 07:22:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:38.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:38 compute-0 nova_compute[251992]: 2025-12-06 07:22:38.837 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:38 compute-0 ceph-mon[74339]: pgmap v1943: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 07:22:38 compute-0 nova_compute[251992]: 2025-12-06 07:22:38.971 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:39.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 590 KiB/s wr, 88 op/s
Dec 06 07:22:40 compute-0 ceph-mon[74339]: pgmap v1944: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 590 KiB/s wr, 88 op/s
Dec 06 07:22:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:40.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:41.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 590 KiB/s wr, 88 op/s
Dec 06 07:22:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:42 compute-0 ceph-mon[74339]: pgmap v1945: 305 pgs: 305 active+clean; 167 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 590 KiB/s wr, 88 op/s
Dec 06 07:22:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:42.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:22:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:22:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:22:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:22:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:22:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:22:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:43.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:43 compute-0 nova_compute[251992]: 2025-12-06 07:22:43.839 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Dec 06 07:22:43 compute-0 nova_compute[251992]: 2025-12-06 07:22:43.973 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:44 compute-0 ceph-mon[74339]: pgmap v1946: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Dec 06 07:22:44 compute-0 nova_compute[251992]: 2025-12-06 07:22:44.402 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "00f56c62-f327-41e3-a105-24f56ae124c0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:44 compute-0 nova_compute[251992]: 2025-12-06 07:22:44.402 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:44 compute-0 nova_compute[251992]: 2025-12-06 07:22:44.418 251996 DEBUG nova.compute.manager [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:22:44 compute-0 nova_compute[251992]: 2025-12-06 07:22:44.442 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Acquiring lock "28f9954c-de98-47d8-a564-dc16e7702d6b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:44 compute-0 nova_compute[251992]: 2025-12-06 07:22:44.442 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:44 compute-0 nova_compute[251992]: 2025-12-06 07:22:44.467 251996 DEBUG nova.compute.manager [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:22:44 compute-0 nova_compute[251992]: 2025-12-06 07:22:44.514 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:44 compute-0 nova_compute[251992]: 2025-12-06 07:22:44.515 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:44 compute-0 nova_compute[251992]: 2025-12-06 07:22:44.525 251996 DEBUG nova.virt.hardware [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:22:44 compute-0 nova_compute[251992]: 2025-12-06 07:22:44.525 251996 INFO nova.compute.claims [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:22:44 compute-0 nova_compute[251992]: 2025-12-06 07:22:44.545 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:44 compute-0 nova_compute[251992]: 2025-12-06 07:22:44.671 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:44.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:45.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565512716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:45 compute-0 sudo[311157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:45 compute-0 sudo[311157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:45 compute-0 sudo[311157]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.153 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.160 251996 DEBUG nova.compute.provider_tree [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:22:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3565512716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.177 251996 DEBUG nova.scheduler.client.report [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:22:45 compute-0 sudo[311184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:22:45 compute-0 sudo[311184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:22:45 compute-0 sudo[311184]: pam_unix(sudo:session): session closed for user root
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.209 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.210 251996 DEBUG nova.compute.manager [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.214 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.222 251996 DEBUG nova.virt.hardware [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.222 251996 INFO nova.compute.claims [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.289 251996 DEBUG nova.compute.manager [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.290 251996 DEBUG nova.network.neutron [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.310 251996 INFO nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.328 251996 DEBUG nova.compute.manager [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.386 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.474 251996 DEBUG nova.compute.manager [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.476 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.477 251996 INFO nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Creating image(s)
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.512 251996 DEBUG nova.storage.rbd_utils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 00f56c62-f327-41e3-a105-24f56ae124c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.549 251996 DEBUG nova.storage.rbd_utils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 00f56c62-f327-41e3-a105-24f56ae124c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.576 251996 DEBUG nova.storage.rbd_utils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 00f56c62-f327-41e3-a105-24f56ae124c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.582 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.619 251996 DEBUG nova.policy [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'baddb65c90da47a58d026b0db966f6c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '001e2256cb8b430d93c1ff613010d199', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.654 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.655 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.656 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.656 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.694 251996 DEBUG nova.storage.rbd_utils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 00f56c62-f327-41e3-a105-24f56ae124c0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.702 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 00f56c62-f327-41e3-a105-24f56ae124c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:22:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596672294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.853 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.860 251996 DEBUG nova.compute.provider_tree [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.879 251996 DEBUG nova.scheduler.client.report [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.903 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 177 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 618 KiB/s wr, 70 op/s
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.924 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Acquiring lock "0614af92-3b5d-4056-a4f5-b38212ec7770" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.924 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "0614af92-3b5d-4056-a4f5-b38212ec7770" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.936 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "0614af92-3b5d-4056-a4f5-b38212ec7770" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.937 251996 DEBUG nova.compute.manager [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.987 251996 DEBUG nova.compute.manager [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:22:45 compute-0 nova_compute[251992]: 2025-12-06 07:22:45.988 251996 DEBUG nova.network.neutron [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.012 251996 INFO nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.034 251996 DEBUG nova.compute.manager [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.147 251996 DEBUG nova.compute.manager [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.148 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.148 251996 INFO nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Creating image(s)
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.177 251996 DEBUG nova.storage.rbd_utils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] rbd image 28f9954c-de98-47d8-a564-dc16e7702d6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.309 251996 DEBUG nova.storage.rbd_utils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] rbd image 28f9954c-de98-47d8-a564-dc16e7702d6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.336 251996 DEBUG nova.storage.rbd_utils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] rbd image 28f9954c-de98-47d8-a564-dc16e7702d6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.341 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.366 251996 DEBUG nova.policy [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ca0a4a5ab2bb41298078e4d8c601925f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '597285dcf6f141219338afe733e28a2a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.370 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 00f56c62-f327-41e3-a105-24f56ae124c0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.437 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.438 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.439 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.439 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:46.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.912 251996 DEBUG nova.storage.rbd_utils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] rbd image 28f9954c-de98-47d8-a564-dc16e7702d6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.916 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 28f9954c-de98-47d8-a564-dc16e7702d6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.942 251996 DEBUG nova.network.neutron [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Successfully created port: c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:22:46 compute-0 nova_compute[251992]: 2025-12-06 07:22:46.950 251996 DEBUG nova.storage.rbd_utils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] resizing rbd image 00f56c62-f327-41e3-a105-24f56ae124c0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:22:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:47.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2596672294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:47 compute-0 ceph-mon[74339]: pgmap v1947: 305 pgs: 305 active+clean; 177 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 618 KiB/s wr, 70 op/s
Dec 06 07:22:47 compute-0 nova_compute[251992]: 2025-12-06 07:22:47.145 251996 DEBUG nova.network.neutron [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Successfully created port: df4ceadc-d14e-461c-bf8c-fb5254125675 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:22:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:47 compute-0 nova_compute[251992]: 2025-12-06 07:22:47.644 251996 DEBUG nova.objects.instance [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'migration_context' on Instance uuid 00f56c62-f327-41e3-a105-24f56ae124c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:47 compute-0 nova_compute[251992]: 2025-12-06 07:22:47.657 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:22:47 compute-0 nova_compute[251992]: 2025-12-06 07:22:47.657 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Ensure instance console log exists: /var/lib/nova/instances/00f56c62-f327-41e3-a105-24f56ae124c0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:22:47 compute-0 nova_compute[251992]: 2025-12-06 07:22:47.658 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:47 compute-0 nova_compute[251992]: 2025-12-06 07:22:47.658 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:47 compute-0 nova_compute[251992]: 2025-12-06 07:22:47.659 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:47 compute-0 nova_compute[251992]: 2025-12-06 07:22:47.838 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 28f9954c-de98-47d8-a564-dc16e7702d6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.922s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:47 compute-0 nova_compute[251992]: 2025-12-06 07:22:47.902 251996 DEBUG nova.storage.rbd_utils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] resizing rbd image 28f9954c-de98-47d8-a564-dc16e7702d6b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:22:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 184 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 110 op/s
Dec 06 07:22:47 compute-0 nova_compute[251992]: 2025-12-06 07:22:47.998 251996 DEBUG nova.objects.instance [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lazy-loading 'migration_context' on Instance uuid 28f9954c-de98-47d8-a564-dc16e7702d6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.042 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.042 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Ensure instance console log exists: /var/lib/nova/instances/28f9954c-de98-47d8-a564-dc16e7702d6b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.043 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.043 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.043 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:48 compute-0 ceph-mon[74339]: pgmap v1948: 305 pgs: 305 active+clean; 184 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 110 op/s
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.689 251996 DEBUG nova.network.neutron [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Successfully updated port: df4ceadc-d14e-461c-bf8c-fb5254125675 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.728 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Acquiring lock "refresh_cache-28f9954c-de98-47d8-a564-dc16e7702d6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.729 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Acquired lock "refresh_cache-28f9954c-de98-47d8-a564-dc16e7702d6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.729 251996 DEBUG nova.network.neutron [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:22:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:48.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.803 251996 DEBUG nova.compute.manager [req-d8939799-ab6a-4b05-8d85-392adec2f321 req-5c0c6be6-5726-42aa-9c0f-0b5610810c53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Received event network-changed-df4ceadc-d14e-461c-bf8c-fb5254125675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.804 251996 DEBUG nova.compute.manager [req-d8939799-ab6a-4b05-8d85-392adec2f321 req-5c0c6be6-5726-42aa-9c0f-0b5610810c53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Refreshing instance network info cache due to event network-changed-df4ceadc-d14e-461c-bf8c-fb5254125675. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.804 251996 DEBUG oslo_concurrency.lockutils [req-d8939799-ab6a-4b05-8d85-392adec2f321 req-5c0c6be6-5726-42aa-9c0f-0b5610810c53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-28f9954c-de98-47d8-a564-dc16e7702d6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.842 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.916 251996 DEBUG nova.network.neutron [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Successfully updated port: c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.940 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.940 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquired lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.940 251996 DEBUG nova.network.neutron [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.966 251996 DEBUG nova.network.neutron [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:22:48 compute-0 nova_compute[251992]: 2025-12-06 07:22:48.974 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:49.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:49 compute-0 nova_compute[251992]: 2025-12-06 07:22:49.087 251996 DEBUG nova.network.neutron [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:22:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 184 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 307 KiB/s rd, 2.2 MiB/s wr, 84 op/s
Dec 06 07:22:50 compute-0 nova_compute[251992]: 2025-12-06 07:22:50.531 251996 DEBUG nova.compute.manager [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Received event network-changed-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:50 compute-0 nova_compute[251992]: 2025-12-06 07:22:50.532 251996 DEBUG nova.compute.manager [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Refreshing instance network info cache due to event network-changed-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:22:50 compute-0 nova_compute[251992]: 2025-12-06 07:22:50.545 251996 DEBUG oslo_concurrency.lockutils [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:22:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:50.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1015985670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:22:50 compute-0 nova_compute[251992]: 2025-12-06 07:22:50.979 251996 DEBUG nova.network.neutron [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Updating instance_info_cache with network_info: [{"id": "df4ceadc-d14e-461c-bf8c-fb5254125675", "address": "fa:16:3e:a8:62:99", "network": {"id": "79af561c-d1d7-4e64-8479-baa182d93bd5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1935062266-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "597285dcf6f141219338afe733e28a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf4ceadc-d1", "ovs_interfaceid": "df4ceadc-d14e-461c-bf8c-fb5254125675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:50.998 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Releasing lock "refresh_cache-28f9954c-de98-47d8-a564-dc16e7702d6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:50.999 251996 DEBUG nova.compute.manager [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Instance network_info: |[{"id": "df4ceadc-d14e-461c-bf8c-fb5254125675", "address": "fa:16:3e:a8:62:99", "network": {"id": "79af561c-d1d7-4e64-8479-baa182d93bd5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1935062266-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "597285dcf6f141219338afe733e28a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf4ceadc-d1", "ovs_interfaceid": "df4ceadc-d14e-461c-bf8c-fb5254125675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:50.999 251996 DEBUG oslo_concurrency.lockutils [req-d8939799-ab6a-4b05-8d85-392adec2f321 req-5c0c6be6-5726-42aa-9c0f-0b5610810c53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-28f9954c-de98-47d8-a564-dc16e7702d6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:50.999 251996 DEBUG nova.network.neutron [req-d8939799-ab6a-4b05-8d85-392adec2f321 req-5c0c6be6-5726-42aa-9c0f-0b5610810c53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Refreshing network info cache for port df4ceadc-d14e-461c-bf8c-fb5254125675 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.002 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Start _get_guest_xml network_info=[{"id": "df4ceadc-d14e-461c-bf8c-fb5254125675", "address": "fa:16:3e:a8:62:99", "network": {"id": "79af561c-d1d7-4e64-8479-baa182d93bd5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1935062266-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "597285dcf6f141219338afe733e28a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf4ceadc-d1", "ovs_interfaceid": "df4ceadc-d14e-461c-bf8c-fb5254125675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.009 251996 WARNING nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.017 251996 DEBUG nova.virt.libvirt.host [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.017 251996 DEBUG nova.virt.libvirt.host [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.021 251996 DEBUG nova.virt.libvirt.host [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.022 251996 DEBUG nova.virt.libvirt.host [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.023 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.023 251996 DEBUG nova.virt.hardware [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.023 251996 DEBUG nova.virt.hardware [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.024 251996 DEBUG nova.virt.hardware [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.024 251996 DEBUG nova.virt.hardware [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.024 251996 DEBUG nova.virt.hardware [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.024 251996 DEBUG nova.virt.hardware [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.024 251996 DEBUG nova.virt.hardware [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.025 251996 DEBUG nova.virt.hardware [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.025 251996 DEBUG nova.virt.hardware [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.025 251996 DEBUG nova.virt.hardware [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.025 251996 DEBUG nova.virt.hardware [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.028 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.209 251996 DEBUG nova.network.neutron [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updating instance_info_cache with network_info: [{"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.231 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Releasing lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.232 251996 DEBUG nova.compute.manager [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Instance network_info: |[{"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.233 251996 DEBUG oslo_concurrency.lockutils [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.233 251996 DEBUG nova.network.neutron [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Refreshing network info cache for port c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.236 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Start _get_guest_xml network_info=[{"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.242 251996 WARNING nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.247 251996 DEBUG nova.virt.libvirt.host [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.248 251996 DEBUG nova.virt.libvirt.host [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.254 251996 DEBUG nova.virt.libvirt.host [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.255 251996 DEBUG nova.virt.libvirt.host [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.256 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.256 251996 DEBUG nova.virt.hardware [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.257 251996 DEBUG nova.virt.hardware [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.257 251996 DEBUG nova.virt.hardware [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.257 251996 DEBUG nova.virt.hardware [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.257 251996 DEBUG nova.virt.hardware [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.258 251996 DEBUG nova.virt.hardware [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.258 251996 DEBUG nova.virt.hardware [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.258 251996 DEBUG nova.virt.hardware [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.258 251996 DEBUG nova.virt.hardware [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.259 251996 DEBUG nova.virt.hardware [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.259 251996 DEBUG nova.virt.hardware [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.262 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:51 compute-0 podman[311587]: 2025-12-06 07:22:51.444050551 +0000 UTC m=+0.100370201 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 06 07:22:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:22:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3752189496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.502 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.532 251996 DEBUG nova.storage.rbd_utils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] rbd image 28f9954c-de98-47d8-a564-dc16e7702d6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.536 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:22:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/957506070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.783 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.808 251996 DEBUG nova.storage.rbd_utils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 00f56c62-f327-41e3-a105-24f56ae124c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.812 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 214 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 5.0 MiB/s wr, 129 op/s
Dec 06 07:22:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:22:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1469328797' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.979 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.981 251996 DEBUG nova.virt.libvirt.vif [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:22:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1877927184',display_name='tempest-ServerGroupTestJSON-server-1877927184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1877927184',id=97,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='597285dcf6f141219338afe733e28a2a',ramdisk_id='',reservation_id='r-m2wtb7eq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1579993364',owner_user_name='tempest-ServerGroupTestJSON-1579993364
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:22:46Z,user_data=None,user_id='ca0a4a5ab2bb41298078e4d8c601925f',uuid=28f9954c-de98-47d8-a564-dc16e7702d6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df4ceadc-d14e-461c-bf8c-fb5254125675", "address": "fa:16:3e:a8:62:99", "network": {"id": "79af561c-d1d7-4e64-8479-baa182d93bd5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1935062266-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "597285dcf6f141219338afe733e28a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf4ceadc-d1", "ovs_interfaceid": "df4ceadc-d14e-461c-bf8c-fb5254125675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.981 251996 DEBUG nova.network.os_vif_util [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Converting VIF {"id": "df4ceadc-d14e-461c-bf8c-fb5254125675", "address": "fa:16:3e:a8:62:99", "network": {"id": "79af561c-d1d7-4e64-8479-baa182d93bd5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1935062266-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "597285dcf6f141219338afe733e28a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf4ceadc-d1", "ovs_interfaceid": "df4ceadc-d14e-461c-bf8c-fb5254125675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.982 251996 DEBUG nova.network.os_vif_util [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:62:99,bridge_name='br-int',has_traffic_filtering=True,id=df4ceadc-d14e-461c-bf8c-fb5254125675,network=Network(79af561c-d1d7-4e64-8479-baa182d93bd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf4ceadc-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.983 251996 DEBUG nova.objects.instance [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lazy-loading 'pci_devices' on Instance uuid 28f9954c-de98-47d8-a564-dc16e7702d6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:51 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.998 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:22:51 compute-0 nova_compute[251992]:   <uuid>28f9954c-de98-47d8-a564-dc16e7702d6b</uuid>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   <name>instance-00000061</name>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerGroupTestJSON-server-1877927184</nova:name>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:22:51</nova:creationTime>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <nova:user uuid="ca0a4a5ab2bb41298078e4d8c601925f">tempest-ServerGroupTestJSON-1579993364-project-member</nova:user>
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <nova:project uuid="597285dcf6f141219338afe733e28a2a">tempest-ServerGroupTestJSON-1579993364</nova:project>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <nova:port uuid="df4ceadc-d14e-461c-bf8c-fb5254125675">
Dec 06 07:22:51 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <system>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <entry name="serial">28f9954c-de98-47d8-a564-dc16e7702d6b</entry>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <entry name="uuid">28f9954c-de98-47d8-a564-dc16e7702d6b</entry>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     </system>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   <os>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   </os>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   <features>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   </features>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/28f9954c-de98-47d8-a564-dc16e7702d6b_disk">
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       </source>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/28f9954c-de98-47d8-a564-dc16e7702d6b_disk.config">
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       </source>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:22:51 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:a8:62:99"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <target dev="tapdf4ceadc-d1"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/28f9954c-de98-47d8-a564-dc16e7702d6b/console.log" append="off"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <video>
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     </video>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:22:51 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:22:51 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:22:51 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:22:51 compute-0 nova_compute[251992]: </domain>
Dec 06 07:22:51 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:51.999 251996 DEBUG nova.compute.manager [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Preparing to wait for external event network-vif-plugged-df4ceadc-d14e-461c-bf8c-fb5254125675 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.000 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Acquiring lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.000 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.000 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.001 251996 DEBUG nova.virt.libvirt.vif [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:22:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1877927184',display_name='tempest-ServerGroupTestJSON-server-1877927184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1877927184',id=97,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='597285dcf6f141219338afe733e28a2a',ramdisk_id='',reservation_id='r-m2wtb7eq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1579993364',owner_user_name='tempest-ServerGroupTestJSON-
1579993364-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:22:46Z,user_data=None,user_id='ca0a4a5ab2bb41298078e4d8c601925f',uuid=28f9954c-de98-47d8-a564-dc16e7702d6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df4ceadc-d14e-461c-bf8c-fb5254125675", "address": "fa:16:3e:a8:62:99", "network": {"id": "79af561c-d1d7-4e64-8479-baa182d93bd5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1935062266-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "597285dcf6f141219338afe733e28a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf4ceadc-d1", "ovs_interfaceid": "df4ceadc-d14e-461c-bf8c-fb5254125675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.001 251996 DEBUG nova.network.os_vif_util [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Converting VIF {"id": "df4ceadc-d14e-461c-bf8c-fb5254125675", "address": "fa:16:3e:a8:62:99", "network": {"id": "79af561c-d1d7-4e64-8479-baa182d93bd5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1935062266-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "597285dcf6f141219338afe733e28a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf4ceadc-d1", "ovs_interfaceid": "df4ceadc-d14e-461c-bf8c-fb5254125675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.002 251996 DEBUG nova.network.os_vif_util [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:62:99,bridge_name='br-int',has_traffic_filtering=True,id=df4ceadc-d14e-461c-bf8c-fb5254125675,network=Network(79af561c-d1d7-4e64-8479-baa182d93bd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf4ceadc-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.002 251996 DEBUG os_vif [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:62:99,bridge_name='br-int',has_traffic_filtering=True,id=df4ceadc-d14e-461c-bf8c-fb5254125675,network=Network(79af561c-d1d7-4e64-8479-baa182d93bd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf4ceadc-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.002 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.003 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.004 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.010 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.010 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf4ceadc-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.011 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf4ceadc-d1, col_values=(('external_ids', {'iface-id': 'df4ceadc-d14e-461c-bf8c-fb5254125675', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:62:99', 'vm-uuid': '28f9954c-de98-47d8-a564-dc16e7702d6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.012 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:52 compute-0 NetworkManager[48965]: <info>  [1765005772.0133] manager: (tapdf4ceadc-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.014 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.019 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.019 251996 INFO os_vif [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:62:99,bridge_name='br-int',has_traffic_filtering=True,id=df4ceadc-d14e-461c-bf8c-fb5254125675,network=Network(79af561c-d1d7-4e64-8479-baa182d93bd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf4ceadc-d1')
Dec 06 07:22:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Dec 06 07:22:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:22:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/9378072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:52 compute-0 ceph-mon[74339]: pgmap v1949: 305 pgs: 305 active+clean; 184 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 307 KiB/s rd, 2.2 MiB/s wr, 84 op/s
Dec 06 07:22:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3752189496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/957506070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.252 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.253 251996 DEBUG nova.virt.libvirt.vif [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:22:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-831125912',display_name='tempest-ServerActionsTestOtherA-server-831125912',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-831125912',id=96,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2zgDxxtT0nLqH8UsyYi0lN8OWWrrFEA5pyLz04zJISRImczknO8hVkmNR6jGCiWeaXsQGs+JSkIuJDu8PO8wxSR1MWFJiUPcyPRnxYT8pR/R9bXgGDk3j+Ho5fOrAeLw==',key_name='tempest-keypair-402640413',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-w55x9tcn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:22:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=00f56c62-f327-41e3-a105-24f56ae124c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.254 251996 DEBUG nova.network.os_vif_util [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.254 251996 DEBUG nova.network.os_vif_util [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:82:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e1aa30-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.256 251996 DEBUG nova.objects.instance [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'pci_devices' on Instance uuid 00f56c62-f327-41e3-a105-24f56ae124c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.271 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:22:52 compute-0 nova_compute[251992]:   <uuid>00f56c62-f327-41e3-a105-24f56ae124c0</uuid>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   <name>instance-00000060</name>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerActionsTestOtherA-server-831125912</nova:name>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:22:51</nova:creationTime>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <nova:user uuid="baddb65c90da47a58d026b0db966f6c8">tempest-ServerActionsTestOtherA-1949739102-project-member</nova:user>
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <nova:project uuid="001e2256cb8b430d93c1ff613010d199">tempest-ServerActionsTestOtherA-1949739102</nova:project>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <nova:port uuid="c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e">
Dec 06 07:22:52 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <system>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <entry name="serial">00f56c62-f327-41e3-a105-24f56ae124c0</entry>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <entry name="uuid">00f56c62-f327-41e3-a105-24f56ae124c0</entry>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     </system>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   <os>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   </os>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   <features>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   </features>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/00f56c62-f327-41e3-a105-24f56ae124c0_disk">
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       </source>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/00f56c62-f327-41e3-a105-24f56ae124c0_disk.config">
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       </source>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:22:52 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:4f:82:3f"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <target dev="tapc1e1aa30-1f"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/00f56c62-f327-41e3-a105-24f56ae124c0/console.log" append="off"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <video>
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     </video>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:22:52 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:22:52 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:22:52 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:22:52 compute-0 nova_compute[251992]: </domain>
Dec 06 07:22:52 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.273 251996 DEBUG nova.compute.manager [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Preparing to wait for external event network-vif-plugged-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.273 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.273 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.273 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.274 251996 DEBUG nova.virt.libvirt.vif [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:22:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-831125912',display_name='tempest-ServerActionsTestOtherA-server-831125912',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-831125912',id=96,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2zgDxxtT0nLqH8UsyYi0lN8OWWrrFEA5pyLz04zJISRImczknO8hVkmNR6jGCiWeaXsQGs+JSkIuJDu8PO8wxSR1MWFJiUPcyPRnxYT8pR/R9bXgGDk3j+Ho5fOrAeLw==',key_name='tempest-keypair-402640413',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-w55x9tcn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:22:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=00f56c62-f327-41e3-a105-24f56ae124c0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.274 251996 DEBUG nova.network.os_vif_util [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.275 251996 DEBUG nova.network.os_vif_util [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:82:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e1aa30-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.275 251996 DEBUG os_vif [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:82:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e1aa30-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.276 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.276 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.276 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.279 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.280 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1e1aa30-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.280 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1e1aa30-1f, col_values=(('external_ids', {'iface-id': 'c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:82:3f', 'vm-uuid': '00f56c62-f327-41e3-a105-24f56ae124c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.282 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:52 compute-0 NetworkManager[48965]: <info>  [1765005772.2829] manager: (tapc1e1aa30-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.285 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.288 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.289 251996 INFO os_vif [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:82:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e1aa30-1f')
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.476 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.476 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.476 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] No VIF found with MAC fa:16:3e:a8:62:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.477 251996 INFO nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Using config drive
Dec 06 07:22:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:52.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.923 251996 DEBUG nova.storage.rbd_utils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] rbd image 28f9954c-de98-47d8-a564-dc16e7702d6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.982 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.982 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.982 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No VIF found with MAC fa:16:3e:4f:82:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:22:52 compute-0 nova_compute[251992]: 2025-12-06 07:22:52.983 251996 INFO nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Using config drive
Dec 06 07:22:53 compute-0 nova_compute[251992]: 2025-12-06 07:22:53.009 251996 DEBUG nova.storage.rbd_utils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 00f56c62-f327-41e3-a105-24f56ae124c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:53.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:53 compute-0 nova_compute[251992]: 2025-12-06 07:22:53.323 251996 INFO nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Creating config drive at /var/lib/nova/instances/28f9954c-de98-47d8-a564-dc16e7702d6b/disk.config
Dec 06 07:22:53 compute-0 nova_compute[251992]: 2025-12-06 07:22:53.328 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/28f9954c-de98-47d8-a564-dc16e7702d6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4hpgr3y1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:53 compute-0 nova_compute[251992]: 2025-12-06 07:22:53.363 251996 DEBUG nova.network.neutron [req-d8939799-ab6a-4b05-8d85-392adec2f321 req-5c0c6be6-5726-42aa-9c0f-0b5610810c53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Updated VIF entry in instance network info cache for port df4ceadc-d14e-461c-bf8c-fb5254125675. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:22:53 compute-0 nova_compute[251992]: 2025-12-06 07:22:53.364 251996 DEBUG nova.network.neutron [req-d8939799-ab6a-4b05-8d85-392adec2f321 req-5c0c6be6-5726-42aa-9c0f-0b5610810c53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Updating instance_info_cache with network_info: [{"id": "df4ceadc-d14e-461c-bf8c-fb5254125675", "address": "fa:16:3e:a8:62:99", "network": {"id": "79af561c-d1d7-4e64-8479-baa182d93bd5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1935062266-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "597285dcf6f141219338afe733e28a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf4ceadc-d1", "ovs_interfaceid": "df4ceadc-d14e-461c-bf8c-fb5254125675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:53 compute-0 nova_compute[251992]: 2025-12-06 07:22:53.391 251996 DEBUG oslo_concurrency.lockutils [req-d8939799-ab6a-4b05-8d85-392adec2f321 req-5c0c6be6-5726-42aa-9c0f-0b5610810c53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-28f9954c-de98-47d8-a564-dc16e7702d6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:22:53 compute-0 nova_compute[251992]: 2025-12-06 07:22:53.465 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/28f9954c-de98-47d8-a564-dc16e7702d6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4hpgr3y1" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:53 compute-0 nova_compute[251992]: 2025-12-06 07:22:53.498 251996 DEBUG nova.storage.rbd_utils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] rbd image 28f9954c-de98-47d8-a564-dc16e7702d6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:53 compute-0 nova_compute[251992]: 2025-12-06 07:22:53.503 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/28f9954c-de98-47d8-a564-dc16e7702d6b/disk.config 28f9954c-de98-47d8-a564-dc16e7702d6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Dec 06 07:22:53 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Dec 06 07:22:53 compute-0 nova_compute[251992]: 2025-12-06 07:22:53.844 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 213 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 436 KiB/s rd, 6.9 MiB/s wr, 184 op/s
Dec 06 07:22:53 compute-0 nova_compute[251992]: 2025-12-06 07:22:53.922 251996 INFO nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Creating config drive at /var/lib/nova/instances/00f56c62-f327-41e3-a105-24f56ae124c0/disk.config
Dec 06 07:22:53 compute-0 nova_compute[251992]: 2025-12-06 07:22:53.929 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/00f56c62-f327-41e3-a105-24f56ae124c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpimt68i6t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:54 compute-0 nova_compute[251992]: 2025-12-06 07:22:54.062 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/00f56c62-f327-41e3-a105-24f56ae124c0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpimt68i6t" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:54 compute-0 nova_compute[251992]: 2025-12-06 07:22:54.652 251996 DEBUG nova.storage.rbd_utils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 00f56c62-f327-41e3-a105-24f56ae124c0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:22:54 compute-0 nova_compute[251992]: 2025-12-06 07:22:54.656 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/00f56c62-f327-41e3-a105-24f56ae124c0/disk.config 00f56c62-f327-41e3-a105-24f56ae124c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:54 compute-0 nova_compute[251992]: 2025-12-06 07:22:54.687 251996 DEBUG nova.network.neutron [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updated VIF entry in instance network info cache for port c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:22:54 compute-0 nova_compute[251992]: 2025-12-06 07:22:54.688 251996 DEBUG nova.network.neutron [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updating instance_info_cache with network_info: [{"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:22:54 compute-0 nova_compute[251992]: 2025-12-06 07:22:54.722 251996 DEBUG oslo_concurrency.lockutils [req-c9fe4570-71a0-4c26-8492-e7f968e262d4 req-b44899c1-29f6-47c0-a33b-fd4765d2cc7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:22:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:54.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:55.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:55 compute-0 ceph-mon[74339]: pgmap v1950: 305 pgs: 305 active+clean; 214 MiB data, 829 MiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 5.0 MiB/s wr, 129 op/s
Dec 06 07:22:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1469328797' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/9378072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:22:55 compute-0 podman[311836]: 2025-12-06 07:22:55.410246079 +0000 UTC m=+0.058044465 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:22:55 compute-0 podman[311835]: 2025-12-06 07:22:55.410166887 +0000 UTC m=+0.061074599 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:22:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 213 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 6.2 MiB/s wr, 164 op/s
Dec 06 07:22:55 compute-0 nova_compute[251992]: 2025-12-06 07:22:55.910 251996 DEBUG oslo_concurrency.processutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/28f9954c-de98-47d8-a564-dc16e7702d6b/disk.config 28f9954c-de98-47d8-a564-dc16e7702d6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:55 compute-0 nova_compute[251992]: 2025-12-06 07:22:55.911 251996 INFO nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Deleting local config drive /var/lib/nova/instances/28f9954c-de98-47d8-a564-dc16e7702d6b/disk.config because it was imported into RBD.
Dec 06 07:22:55 compute-0 nova_compute[251992]: 2025-12-06 07:22:55.974 251996 DEBUG oslo_concurrency.processutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/00f56c62-f327-41e3-a105-24f56ae124c0/disk.config 00f56c62-f327-41e3-a105-24f56ae124c0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:22:55 compute-0 nova_compute[251992]: 2025-12-06 07:22:55.975 251996 INFO nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Deleting local config drive /var/lib/nova/instances/00f56c62-f327-41e3-a105-24f56ae124c0/disk.config because it was imported into RBD.
Dec 06 07:22:55 compute-0 NetworkManager[48965]: <info>  [1765005775.9777] manager: (tapdf4ceadc-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Dec 06 07:22:55 compute-0 kernel: tapdf4ceadc-d1: entered promiscuous mode
Dec 06 07:22:55 compute-0 ovn_controller[147168]: 2025-12-06T07:22:55Z|00326|binding|INFO|Claiming lport df4ceadc-d14e-461c-bf8c-fb5254125675 for this chassis.
Dec 06 07:22:55 compute-0 nova_compute[251992]: 2025-12-06 07:22:55.985 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:55 compute-0 ovn_controller[147168]: 2025-12-06T07:22:55Z|00327|binding|INFO|df4ceadc-d14e-461c-bf8c-fb5254125675: Claiming fa:16:3e:a8:62:99 10.100.0.11
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:55.999 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:62:99 10.100.0.11'], port_security=['fa:16:3e:a8:62:99 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '28f9954c-de98-47d8-a564-dc16e7702d6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79af561c-d1d7-4e64-8479-baa182d93bd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '597285dcf6f141219338afe733e28a2a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75799009-9bae-4b08-b6b5-839943677542', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=052c5496-31ea-4482-b675-8b06085e3df8, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=df4ceadc-d14e-461c-bf8c-fb5254125675) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.002 158118 INFO neutron.agent.ovn.metadata.agent [-] Port df4ceadc-d14e-461c-bf8c-fb5254125675 in datapath 79af561c-d1d7-4e64-8479-baa182d93bd5 bound to our chassis
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.006 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79af561c-d1d7-4e64-8479-baa182d93bd5
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.052 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[26d0d29e-d62b-4e93-bd69-6d03ecd72281]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.053 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79af561c-d1 in ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.056 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79af561c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.056 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7bef5374-06a1-4a82-9687-aa0766e8eb40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 systemd-udevd[311895]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.057 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[feabc2ac-1ded-405d-a507-11cde7757448]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 systemd-machined[212986]: New machine qemu-43-instance-00000061.
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.071 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[05712ba2-b7fc-47e7-b4f7-d78dac678f00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000061.
Dec 06 07:22:56 compute-0 NetworkManager[48965]: <info>  [1765005776.0788] device (tapdf4ceadc-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:22:56 compute-0 NetworkManager[48965]: <info>  [1765005776.0809] device (tapdf4ceadc-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:22:56 compute-0 NetworkManager[48965]: <info>  [1765005776.0899] manager: (tapc1e1aa30-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.099 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a1ed4c-da88-49e0-8353-ff26399aaa2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 kernel: tapc1e1aa30-1f: entered promiscuous mode
Dec 06 07:22:56 compute-0 NetworkManager[48965]: <info>  [1765005776.1039] device (tapc1e1aa30-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:22:56 compute-0 NetworkManager[48965]: <info>  [1765005776.1050] device (tapc1e1aa30-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:22:56 compute-0 ovn_controller[147168]: 2025-12-06T07:22:56Z|00328|binding|INFO|Claiming lport c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e for this chassis.
Dec 06 07:22:56 compute-0 ovn_controller[147168]: 2025-12-06T07:22:56Z|00329|binding|INFO|c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e: Claiming fa:16:3e:4f:82:3f 10.100.0.6
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.107 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.116 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.129 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:82:3f 10.100.0.6'], port_security=['fa:16:3e:4f:82:3f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '00f56c62-f327-41e3-a105-24f56ae124c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56e13d32-a2bf-49aa-a4ac-9182c3684195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.128 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7a94d4-96e0-43ff-bcf3-3152f281c7f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 ovn_controller[147168]: 2025-12-06T07:22:56Z|00330|binding|INFO|Setting lport df4ceadc-d14e-461c-bf8c-fb5254125675 ovn-installed in OVS
Dec 06 07:22:56 compute-0 ovn_controller[147168]: 2025-12-06T07:22:56Z|00331|binding|INFO|Setting lport df4ceadc-d14e-461c-bf8c-fb5254125675 up in Southbound
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.137 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[53276481-e888-4387-b54c-1ea7582b89fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.136 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:56 compute-0 NetworkManager[48965]: <info>  [1765005776.1387] manager: (tap79af561c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/174)
Dec 06 07:22:56 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000060.
Dec 06 07:22:56 compute-0 systemd-machined[212986]: New machine qemu-44-instance-00000060.
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.168 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[fb37ccf5-e3f3-464d-a180-c224ac601776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.179 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc56eba-2cd9-4468-be18-62cbe6149c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 NetworkManager[48965]: <info>  [1765005776.2047] device (tap79af561c-d0): carrier: link connected
Dec 06 07:22:56 compute-0 ovn_controller[147168]: 2025-12-06T07:22:56Z|00332|binding|INFO|Setting lport c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e ovn-installed in OVS
Dec 06 07:22:56 compute-0 ovn_controller[147168]: 2025-12-06T07:22:56Z|00333|binding|INFO|Setting lport c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e up in Southbound
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.206 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.212 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[71cddeaf-8e7b-4bb7-87fa-9758bcefb480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.232 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[26b7a5e5-d938-4e87-8e76-ac5e1312a30e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79af561c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:10:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604879, 'reachable_time': 40681, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311943, 'error': None, 'target': 'ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.248 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[141ad568-9e6a-4ff2-85e0-9303342978eb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6c:100d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 604879, 'tstamp': 604879}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311944, 'error': None, 'target': 'ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.268 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[85304819-3c9f-49e6-a23b-c63df35aeb39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79af561c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:10:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604879, 'reachable_time': 40681, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311946, 'error': None, 'target': 'ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 ceph-mon[74339]: osdmap e259: 3 total, 3 up, 3 in
Dec 06 07:22:56 compute-0 ceph-mon[74339]: pgmap v1952: 305 pgs: 305 active+clean; 213 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 436 KiB/s rd, 6.9 MiB/s wr, 184 op/s
Dec 06 07:22:56 compute-0 ceph-mon[74339]: pgmap v1953: 305 pgs: 305 active+clean; 213 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 6.2 MiB/s wr, 164 op/s
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.300 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[259c92e7-3425-4df9-84fa-6fc5513e5da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.356 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a2a5b0-d9ff-483c-968e-5b65efb0c82e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.357 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79af561c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.358 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.358 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79af561c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:56 compute-0 kernel: tap79af561c-d0: entered promiscuous mode
Dec 06 07:22:56 compute-0 NetworkManager[48965]: <info>  [1765005776.3605] manager: (tap79af561c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.362 251996 DEBUG nova.compute.manager [req-c84beabd-6093-4604-873e-e0bd831f7d0d req-79733cd5-d28e-4855-86e9-66fd66e8d53b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Received event network-vif-plugged-df4ceadc-d14e-461c-bf8c-fb5254125675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.362 251996 DEBUG oslo_concurrency.lockutils [req-c84beabd-6093-4604-873e-e0bd831f7d0d req-79733cd5-d28e-4855-86e9-66fd66e8d53b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.362 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79af561c-d0, col_values=(('external_ids', {'iface-id': '5218aed6-786e-4ad6-bc47-c7b72b7fefcd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.362 251996 DEBUG oslo_concurrency.lockutils [req-c84beabd-6093-4604-873e-e0bd831f7d0d req-79733cd5-d28e-4855-86e9-66fd66e8d53b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.363 251996 DEBUG oslo_concurrency.lockutils [req-c84beabd-6093-4604-873e-e0bd831f7d0d req-79733cd5-d28e-4855-86e9-66fd66e8d53b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.363 251996 DEBUG nova.compute.manager [req-c84beabd-6093-4604-873e-e0bd831f7d0d req-79733cd5-d28e-4855-86e9-66fd66e8d53b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Processing event network-vif-plugged-df4ceadc-d14e-461c-bf8c-fb5254125675 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.363 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:56 compute-0 ovn_controller[147168]: 2025-12-06T07:22:56Z|00334|binding|INFO|Releasing lport 5218aed6-786e-4ad6-bc47-c7b72b7fefcd from this chassis (sb_readonly=0)
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.378 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.380 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79af561c-d1d7-4e64-8479-baa182d93bd5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79af561c-d1d7-4e64-8479-baa182d93bd5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.380 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[489f67ea-0334-4e14-b259-ffcda06abf64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.381 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-79af561c-d1d7-4e64-8479-baa182d93bd5
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/79af561c-d1d7-4e64-8479-baa182d93bd5.pid.haproxy
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 79af561c-d1d7-4e64-8479-baa182d93bd5
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:22:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:56.382 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5', 'env', 'PROCESS_TAG=haproxy-79af561c-d1d7-4e64-8479-baa182d93bd5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79af561c-d1d7-4e64-8479-baa182d93bd5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.704 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005776.703674, 28f9954c-de98-47d8-a564-dc16e7702d6b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.705 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] VM Started (Lifecycle Event)
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.710 251996 DEBUG nova.compute.manager [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.713 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.717 251996 INFO nova.virt.libvirt.driver [-] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Instance spawned successfully.
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.718 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.742 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.743 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.744 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.744 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.745 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.745 251996 DEBUG nova.virt.libvirt.driver [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.753 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.757 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:22:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:56.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.814 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.815 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005776.708208, 28f9954c-de98-47d8-a564-dc16e7702d6b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.815 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] VM Paused (Lifecycle Event)
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.837 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.841 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005776.7090657, 00f56c62-f327-41e3-a105-24f56ae124c0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.842 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] VM Started (Lifecycle Event)
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.851 251996 INFO nova.compute.manager [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Took 10.70 seconds to spawn the instance on the hypervisor.
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.852 251996 DEBUG nova.compute.manager [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:56 compute-0 podman[312062]: 2025-12-06 07:22:56.762768739 +0000 UTC m=+0.032472837 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.862 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.865 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005776.7091208, 00f56c62-f327-41e3-a105-24f56ae124c0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.865 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] VM Paused (Lifecycle Event)
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.883 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.888 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.909 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.909 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005776.7132287, 28f9954c-de98-47d8-a564-dc16e7702d6b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.909 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] VM Resumed (Lifecycle Event)
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.920 251996 INFO nova.compute.manager [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Took 12.41 seconds to build instance.
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.931 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:56 compute-0 podman[312062]: 2025-12-06 07:22:56.932420041 +0000 UTC m=+0.202124119 container create f25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.935 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:22:56 compute-0 nova_compute[251992]: 2025-12-06 07:22:56.941 251996 DEBUG oslo_concurrency.lockutils [None req-10c0ac46-461b-469c-a2ac-7a86f434664c ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.092 251996 DEBUG nova.compute.manager [req-e83d9154-3412-47f2-91f0-5d6c395ae7a3 req-82f45195-be9e-40bc-b84c-fabf6e1b5321 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Received event network-vif-plugged-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.093 251996 DEBUG oslo_concurrency.lockutils [req-e83d9154-3412-47f2-91f0-5d6c395ae7a3 req-82f45195-be9e-40bc-b84c-fabf6e1b5321 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.093 251996 DEBUG oslo_concurrency.lockutils [req-e83d9154-3412-47f2-91f0-5d6c395ae7a3 req-82f45195-be9e-40bc-b84c-fabf6e1b5321 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.093 251996 DEBUG oslo_concurrency.lockutils [req-e83d9154-3412-47f2-91f0-5d6c395ae7a3 req-82f45195-be9e-40bc-b84c-fabf6e1b5321 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.094 251996 DEBUG nova.compute.manager [req-e83d9154-3412-47f2-91f0-5d6c395ae7a3 req-82f45195-be9e-40bc-b84c-fabf6e1b5321 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Processing event network-vif-plugged-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.095 251996 DEBUG nova.compute.manager [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:22:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:57.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.098 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005777.0982366, 00f56c62-f327-41e3-a105-24f56ae124c0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.098 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] VM Resumed (Lifecycle Event)
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.100 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.104 251996 INFO nova.virt.libvirt.driver [-] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Instance spawned successfully.
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.105 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.122 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.126 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.131 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.131 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.132 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.133 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.133 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.133 251996 DEBUG nova.virt.libvirt.driver [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.163 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.189 251996 INFO nova.compute.manager [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Took 11.71 seconds to spawn the instance on the hypervisor.
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.189 251996 DEBUG nova.compute.manager [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.249 251996 INFO nova.compute.manager [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Took 12.78 seconds to build instance.
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.264 251996 DEBUG oslo_concurrency.lockutils [None req-b5b93d33-cfaa-492f-a283-a85585942bd1 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:57 compute-0 nova_compute[251992]: 2025-12-06 07:22:57.282 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:57 compute-0 systemd[1]: Started libpod-conmon-f25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842.scope.
Dec 06 07:22:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:22:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6095cb1ee1d70e694137a88a9770610b3910b116f1b2245ed78a8aa866d369a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:57 compute-0 podman[312062]: 2025-12-06 07:22:57.855803036 +0000 UTC m=+1.125507134 container init f25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:22:57 compute-0 podman[312062]: 2025-12-06 07:22:57.863932668 +0000 UTC m=+1.133636746 container start f25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:22:57 compute-0 neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5[312077]: [NOTICE]   (312081) : New worker (312083) forked
Dec 06 07:22:57 compute-0 neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5[312077]: [NOTICE]   (312081) : Loading success.
Dec 06 07:22:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:22:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 213 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.2 MiB/s wr, 111 op/s
Dec 06 07:22:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:57.949 158118 INFO neutron.agent.ovn.metadata.agent [-] Port c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e unbound from our chassis
Dec 06 07:22:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:57.952 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:22:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:57.963 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e0aa9ec1-0cd1-4f44-8fe4-024483b43d66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:57.964 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6209aab-d1 in ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:22:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:57.967 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6209aab-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:22:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:57.968 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ab5904-3ea8-40e8-a955-aee3db53f97c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:57.968 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8e289d6b-4dce-42ad-a791-efc711ca5475]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:57.981 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[52e784f9-75b3-4e16-a9f9-3eecadb7099a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.009 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a7cd54b1-6b01-4f7c-9687-25f5ca93c369]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.040 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[18a5fc70-75cb-4a47-983a-fd4ca0b31237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.046 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ee8dde-4fea-4ecf-8ce5-57c983cc2e77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 NetworkManager[48965]: <info>  [1765005778.0484] manager: (tapf6209aab-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/176)
Dec 06 07:22:58 compute-0 systemd-udevd[311928]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.077 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[aa865a8c-3928-4841-8390-4c3074742b6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.082 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[837ee22d-650c-4209-8c16-7ed9031c5f3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 NetworkManager[48965]: <info>  [1765005778.1087] device (tapf6209aab-d0): carrier: link connected
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.118 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[65754b92-88e2-47d0-9598-2b9837004f9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.139 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c08a7fce-3307-4a62-a45d-eb39b44b79a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605069, 'reachable_time': 29413, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312102, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.158 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd8a657-5c70-4086-bebe-1664cbebf0a3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:c5a9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605069, 'tstamp': 605069}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312103, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.177 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5fbb155d-52ac-4fb7-895e-fe7884a59f27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605069, 'reachable_time': 29413, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312104, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.207 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ed64cb0e-a479-4aec-b068-a6f04dfd6cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.266 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cd918a05-ac4c-4ec8-8873-fdc56aa8d0ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.268 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.268 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.269 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6209aab-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:58 compute-0 NetworkManager[48965]: <info>  [1765005778.2718] manager: (tapf6209aab-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Dec 06 07:22:58 compute-0 kernel: tapf6209aab-d0: entered promiscuous mode
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.271 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.275 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.277 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6209aab-d0, col_values=(('external_ids', {'iface-id': '1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.279 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:58 compute-0 ovn_controller[147168]: 2025-12-06T07:22:58Z|00335|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.295 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.298 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.303 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[47320a45-4f9c-410f-8450-9891676a2f73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.304 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/f6209aab-d53f-4d58-9b94-ffb7adc6239e.pid.haproxy
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:22:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:22:58.305 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'env', 'PROCESS_TAG=haproxy-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6209aab-d53f-4d58-9b94-ffb7adc6239e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.606 251996 DEBUG nova.compute.manager [req-71cd69f3-f239-4df1-89ff-48d84a76b26e req-1ec961e6-44dd-43ea-b76e-4351181d9069 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Received event network-vif-plugged-df4ceadc-d14e-461c-bf8c-fb5254125675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.607 251996 DEBUG oslo_concurrency.lockutils [req-71cd69f3-f239-4df1-89ff-48d84a76b26e req-1ec961e6-44dd-43ea-b76e-4351181d9069 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.607 251996 DEBUG oslo_concurrency.lockutils [req-71cd69f3-f239-4df1-89ff-48d84a76b26e req-1ec961e6-44dd-43ea-b76e-4351181d9069 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.607 251996 DEBUG oslo_concurrency.lockutils [req-71cd69f3-f239-4df1-89ff-48d84a76b26e req-1ec961e6-44dd-43ea-b76e-4351181d9069 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.607 251996 DEBUG nova.compute.manager [req-71cd69f3-f239-4df1-89ff-48d84a76b26e req-1ec961e6-44dd-43ea-b76e-4351181d9069 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] No waiting events found dispatching network-vif-plugged-df4ceadc-d14e-461c-bf8c-fb5254125675 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.607 251996 WARNING nova.compute.manager [req-71cd69f3-f239-4df1-89ff-48d84a76b26e req-1ec961e6-44dd-43ea-b76e-4351181d9069 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Received unexpected event network-vif-plugged-df4ceadc-d14e-461c-bf8c-fb5254125675 for instance with vm_state active and task_state None.
Dec 06 07:22:58 compute-0 podman[312138]: 2025-12-06 07:22:58.649591645 +0000 UTC m=+0.050297684 container create 85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:58 compute-0 systemd[1]: Started libpod-conmon-85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad.scope.
Dec 06 07:22:58 compute-0 podman[312138]: 2025-12-06 07:22:58.622265489 +0000 UTC m=+0.022971558 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:22:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da43c18ad159212d4775829917d9e471afab2659cc051bb2cad42e231f6415da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:22:58 compute-0 podman[312138]: 2025-12-06 07:22:58.742848931 +0000 UTC m=+0.143555000 container init 85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 06 07:22:58 compute-0 podman[312138]: 2025-12-06 07:22:58.748954877 +0000 UTC m=+0.149660906 container start 85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:22:58 compute-0 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[312153]: [NOTICE]   (312157) : New worker (312159) forked
Dec 06 07:22:58 compute-0 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[312153]: [NOTICE]   (312157) : Loading success.
Dec 06 07:22:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:22:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:22:58.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.843 251996 DEBUG oslo_concurrency.lockutils [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Acquiring lock "28f9954c-de98-47d8-a564-dc16e7702d6b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.843 251996 DEBUG oslo_concurrency.lockutils [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.844 251996 DEBUG oslo_concurrency.lockutils [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Acquiring lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.844 251996 DEBUG oslo_concurrency.lockutils [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.844 251996 DEBUG oslo_concurrency.lockutils [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.845 251996 INFO nova.compute.manager [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Terminating instance
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.846 251996 DEBUG nova.compute.manager [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:22:58 compute-0 nova_compute[251992]: 2025-12-06 07:22:58.849 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:22:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:22:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:22:59.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.174 251996 DEBUG nova.compute.manager [req-0e820ae6-b800-4ba7-8af2-cd1aab78b965 req-033a6341-95b7-41ca-9b79-6ab6707fb002 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Received event network-vif-plugged-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.174 251996 DEBUG oslo_concurrency.lockutils [req-0e820ae6-b800-4ba7-8af2-cd1aab78b965 req-033a6341-95b7-41ca-9b79-6ab6707fb002 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.174 251996 DEBUG oslo_concurrency.lockutils [req-0e820ae6-b800-4ba7-8af2-cd1aab78b965 req-033a6341-95b7-41ca-9b79-6ab6707fb002 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.174 251996 DEBUG oslo_concurrency.lockutils [req-0e820ae6-b800-4ba7-8af2-cd1aab78b965 req-033a6341-95b7-41ca-9b79-6ab6707fb002 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.174 251996 DEBUG nova.compute.manager [req-0e820ae6-b800-4ba7-8af2-cd1aab78b965 req-033a6341-95b7-41ca-9b79-6ab6707fb002 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] No waiting events found dispatching network-vif-plugged-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.175 251996 WARNING nova.compute.manager [req-0e820ae6-b800-4ba7-8af2-cd1aab78b965 req-033a6341-95b7-41ca-9b79-6ab6707fb002 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Received unexpected event network-vif-plugged-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e for instance with vm_state active and task_state None.
Dec 06 07:22:59 compute-0 NetworkManager[48965]: <info>  [1765005779.6011] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Dec 06 07:22:59 compute-0 NetworkManager[48965]: <info>  [1765005779.6017] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.602 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.677 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.677 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.677 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.678 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.678 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.791 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:59 compute-0 ovn_controller[147168]: 2025-12-06T07:22:59Z|00336|binding|INFO|Releasing lport 5218aed6-786e-4ad6-bc47-c7b72b7fefcd from this chassis (sb_readonly=0)
Dec 06 07:22:59 compute-0 ovn_controller[147168]: 2025-12-06T07:22:59Z|00337|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:22:59 compute-0 nova_compute[251992]: 2025-12-06 07:22:59.810 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:22:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 213 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.2 MiB/s wr, 111 op/s
Dec 06 07:23:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:23:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/512214763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:00 compute-0 nova_compute[251992]: 2025-12-06 07:23:00.123 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:00 compute-0 nova_compute[251992]: 2025-12-06 07:23:00.678 251996 DEBUG nova.compute.manager [req-6720ae90-e8a8-4b70-9f32-b492f030f0f8 req-9b9195ba-441f-4a55-94f4-84ac867448dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Received event network-changed-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:00 compute-0 nova_compute[251992]: 2025-12-06 07:23:00.678 251996 DEBUG nova.compute.manager [req-6720ae90-e8a8-4b70-9f32-b492f030f0f8 req-9b9195ba-441f-4a55-94f4-84ac867448dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Refreshing instance network info cache due to event network-changed-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:23:00 compute-0 nova_compute[251992]: 2025-12-06 07:23:00.678 251996 DEBUG oslo_concurrency.lockutils [req-6720ae90-e8a8-4b70-9f32-b492f030f0f8 req-9b9195ba-441f-4a55-94f4-84ac867448dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:23:00 compute-0 nova_compute[251992]: 2025-12-06 07:23:00.679 251996 DEBUG oslo_concurrency.lockutils [req-6720ae90-e8a8-4b70-9f32-b492f030f0f8 req-9b9195ba-441f-4a55-94f4-84ac867448dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:23:00 compute-0 nova_compute[251992]: 2025-12-06 07:23:00.679 251996 DEBUG nova.network.neutron [req-6720ae90-e8a8-4b70-9f32-b492f030f0f8 req-9b9195ba-441f-4a55-94f4-84ac867448dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Refreshing network info cache for port c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:23:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:00.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:00 compute-0 ceph-mon[74339]: pgmap v1954: 305 pgs: 305 active+clean; 213 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.2 MiB/s wr, 111 op/s
Dec 06 07:23:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:01.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 225 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 129 op/s
Dec 06 07:23:02 compute-0 kernel: tapdf4ceadc-d1 (unregistering): left promiscuous mode
Dec 06 07:23:02 compute-0 NetworkManager[48965]: <info>  [1765005782.0649] device (tapdf4ceadc-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.072 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:02 compute-0 ovn_controller[147168]: 2025-12-06T07:23:02Z|00338|binding|INFO|Releasing lport df4ceadc-d14e-461c-bf8c-fb5254125675 from this chassis (sb_readonly=0)
Dec 06 07:23:02 compute-0 ovn_controller[147168]: 2025-12-06T07:23:02Z|00339|binding|INFO|Setting lport df4ceadc-d14e-461c-bf8c-fb5254125675 down in Southbound
Dec 06 07:23:02 compute-0 ovn_controller[147168]: 2025-12-06T07:23:02Z|00340|binding|INFO|Removing iface tapdf4ceadc-d1 ovn-installed in OVS
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.075 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.082 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:62:99 10.100.0.11'], port_security=['fa:16:3e:a8:62:99 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '28f9954c-de98-47d8-a564-dc16e7702d6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79af561c-d1d7-4e64-8479-baa182d93bd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '597285dcf6f141219338afe733e28a2a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75799009-9bae-4b08-b6b5-839943677542', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=052c5496-31ea-4482-b675-8b06085e3df8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=df4ceadc-d14e-461c-bf8c-fb5254125675) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.084 158118 INFO neutron.agent.ovn.metadata.agent [-] Port df4ceadc-d14e-461c-bf8c-fb5254125675 in datapath 79af561c-d1d7-4e64-8479-baa182d93bd5 unbound from our chassis
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.085 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79af561c-d1d7-4e64-8479-baa182d93bd5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.088 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5142537a-eacd-4723-8b20-6820e4109a64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.088 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5 namespace which is not needed anymore
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.094 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:02 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000061.scope: Deactivated successfully.
Dec 06 07:23:02 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000061.scope: Consumed 2.557s CPU time.
Dec 06 07:23:02 compute-0 systemd-machined[212986]: Machine qemu-43-instance-00000061 terminated.
Dec 06 07:23:02 compute-0 neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5[312077]: [NOTICE]   (312081) : haproxy version is 2.8.14-c23fe91
Dec 06 07:23:02 compute-0 neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5[312077]: [NOTICE]   (312081) : path to executable is /usr/sbin/haproxy
Dec 06 07:23:02 compute-0 neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5[312077]: [WARNING]  (312081) : Exiting Master process...
Dec 06 07:23:02 compute-0 neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5[312077]: [ALERT]    (312081) : Current worker (312083) exited with code 143 (Terminated)
Dec 06 07:23:02 compute-0 neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5[312077]: [WARNING]  (312081) : All workers exited. Exiting... (0)
Dec 06 07:23:02 compute-0 systemd[1]: libpod-f25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842.scope: Deactivated successfully.
Dec 06 07:23:02 compute-0 podman[312216]: 2025-12-06 07:23:02.234260617 +0000 UTC m=+0.048046622 container died f25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 07:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842-userdata-shm.mount: Deactivated successfully.
Dec 06 07:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6095cb1ee1d70e694137a88a9770610b3910b116f1b2245ed78a8aa866d369a4-merged.mount: Deactivated successfully.
Dec 06 07:23:02 compute-0 podman[312216]: 2025-12-06 07:23:02.289206508 +0000 UTC m=+0.102992513 container cleanup f25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.287 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:02 compute-0 systemd[1]: libpod-conmon-f25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842.scope: Deactivated successfully.
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.301 251996 INFO nova.virt.libvirt.driver [-] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Instance destroyed successfully.
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.302 251996 DEBUG nova.objects.instance [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lazy-loading 'resources' on Instance uuid 28f9954c-de98-47d8-a564-dc16e7702d6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.320 251996 DEBUG nova.virt.libvirt.vif [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:22:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1877927184',display_name='tempest-ServerGroupTestJSON-server-1877927184',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1877927184',id=97,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:22:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='597285dcf6f141219338afe733e28a2a',ramdisk_id='',reservation_id='r-m2wtb7eq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1579993364',owner_user_name='tempest-ServerGroupTestJSON-1579993364-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:22:56Z,user_data=None,user_id='ca0a4a5ab2bb41298078e4d8c601925f',uuid=28f9954c-de98-47d8-a564-dc16e7702d6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df4ceadc-d14e-461c-bf8c-fb5254125675", "address": "fa:16:3e:a8:62:99", "network": {"id": "79af561c-d1d7-4e64-8479-baa182d93bd5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1935062266-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "597285dcf6f141219338afe733e28a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf4ceadc-d1", "ovs_interfaceid": "df4ceadc-d14e-461c-bf8c-fb5254125675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.321 251996 DEBUG nova.network.os_vif_util [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Converting VIF {"id": "df4ceadc-d14e-461c-bf8c-fb5254125675", "address": "fa:16:3e:a8:62:99", "network": {"id": "79af561c-d1d7-4e64-8479-baa182d93bd5", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1935062266-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "597285dcf6f141219338afe733e28a2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf4ceadc-d1", "ovs_interfaceid": "df4ceadc-d14e-461c-bf8c-fb5254125675", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.322 251996 DEBUG nova.network.os_vif_util [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:62:99,bridge_name='br-int',has_traffic_filtering=True,id=df4ceadc-d14e-461c-bf8c-fb5254125675,network=Network(79af561c-d1d7-4e64-8479-baa182d93bd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf4ceadc-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.322 251996 DEBUG os_vif [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:62:99,bridge_name='br-int',has_traffic_filtering=True,id=df4ceadc-d14e-461c-bf8c-fb5254125675,network=Network(79af561c-d1d7-4e64-8479-baa182d93bd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf4ceadc-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.324 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.325 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf4ceadc-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.326 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.329 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.335 251996 INFO os_vif [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:62:99,bridge_name='br-int',has_traffic_filtering=True,id=df4ceadc-d14e-461c-bf8c-fb5254125675,network=Network(79af561c-d1d7-4e64-8479-baa182d93bd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf4ceadc-d1')
Dec 06 07:23:02 compute-0 podman[312257]: 2025-12-06 07:23:02.36880647 +0000 UTC m=+0.049116231 container remove f25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.375 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[88e46bb8-0c9a-4a2a-9d9f-f79098b96757]: (4, ('Sat Dec  6 07:23:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5 (f25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842)\nf25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842\nSat Dec  6 07:23:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5 (f25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842)\nf25f5fc5b0fce4cbbd0da4de3a9707e66f45a81953ded1c3cbbec7ef6bd37842\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.377 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f4bcb10f-e85e-45b1-ba95-0888c60f6887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.378 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79af561c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.380 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:02 compute-0 kernel: tap79af561c-d0: left promiscuous mode
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.396 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.398 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d74a98bf-9c55-484e-87d0-3cf18d87be76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.406 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.406 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.412 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7d90cf-0e8b-43b1-9696-778eda6fd81e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.414 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.414 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9967627b-8eec-4c4a-8fe6-56ffa1687af9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.414 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.432 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[68deb1dd-f692-41cd-bef6-374453677003]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 604871, 'reachable_time': 15216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312288, 'error': None, 'target': 'ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d79af561c\x2dd1d7\x2d4e64\x2d8479\x2dbaa182d93bd5.mount: Deactivated successfully.
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.437 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79af561c-d1d7-4e64-8479-baa182d93bd5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:23:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:02.437 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[d9edfed4-2bea-4ef5-a3d6-1a48d635ed17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.585 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.587 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4353MB free_disk=20.90105438232422GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.587 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.587 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.663 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 00f56c62-f327-41e3-a105-24f56ae124c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.663 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 28f9954c-de98-47d8-a564-dc16e7702d6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.664 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.664 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.737 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.766 251996 DEBUG nova.compute.manager [req-5e996240-d625-4528-9636-8a01b9e02d66 req-221ca1aa-a77c-4cf0-b56b-14a3f86ac9cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Received event network-vif-unplugged-df4ceadc-d14e-461c-bf8c-fb5254125675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.767 251996 DEBUG oslo_concurrency.lockutils [req-5e996240-d625-4528-9636-8a01b9e02d66 req-221ca1aa-a77c-4cf0-b56b-14a3f86ac9cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.768 251996 DEBUG oslo_concurrency.lockutils [req-5e996240-d625-4528-9636-8a01b9e02d66 req-221ca1aa-a77c-4cf0-b56b-14a3f86ac9cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.768 251996 DEBUG oslo_concurrency.lockutils [req-5e996240-d625-4528-9636-8a01b9e02d66 req-221ca1aa-a77c-4cf0-b56b-14a3f86ac9cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.768 251996 DEBUG nova.compute.manager [req-5e996240-d625-4528-9636-8a01b9e02d66 req-221ca1aa-a77c-4cf0-b56b-14a3f86ac9cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] No waiting events found dispatching network-vif-unplugged-df4ceadc-d14e-461c-bf8c-fb5254125675 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:23:02 compute-0 nova_compute[251992]: 2025-12-06 07:23:02.769 251996 DEBUG nova.compute.manager [req-5e996240-d625-4528-9636-8a01b9e02d66 req-221ca1aa-a77c-4cf0-b56b-14a3f86ac9cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Received event network-vif-unplugged-df4ceadc-d14e-461c-bf8c-fb5254125675 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:23:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:02.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:03.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:03 compute-0 nova_compute[251992]: 2025-12-06 07:23:03.161 251996 DEBUG nova.network.neutron [req-6720ae90-e8a8-4b70-9f32-b492f030f0f8 req-9b9195ba-441f-4a55-94f4-84ac867448dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updated VIF entry in instance network info cache for port c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:23:03 compute-0 nova_compute[251992]: 2025-12-06 07:23:03.163 251996 DEBUG nova.network.neutron [req-6720ae90-e8a8-4b70-9f32-b492f030f0f8 req-9b9195ba-441f-4a55-94f4-84ac867448dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updating instance_info_cache with network_info: [{"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:23:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2070338189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:03 compute-0 nova_compute[251992]: 2025-12-06 07:23:03.199 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:03 compute-0 nova_compute[251992]: 2025-12-06 07:23:03.205 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:23:03 compute-0 nova_compute[251992]: 2025-12-06 07:23:03.284 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:23:03 compute-0 nova_compute[251992]: 2025-12-06 07:23:03.289 251996 DEBUG oslo_concurrency.lockutils [req-6720ae90-e8a8-4b70-9f32-b492f030f0f8 req-9b9195ba-441f-4a55-94f4-84ac867448dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:23:03 compute-0 nova_compute[251992]: 2025-12-06 07:23:03.307 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:23:03 compute-0 nova_compute[251992]: 2025-12-06 07:23:03.307 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:03 compute-0 ceph-mon[74339]: pgmap v1955: 305 pgs: 305 active+clean; 213 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.2 MiB/s wr, 111 op/s
Dec 06 07:23:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/512214763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3269548074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:03.737859) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005783737923, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2046, "num_deletes": 255, "total_data_size": 3621173, "memory_usage": 3681240, "flush_reason": "Manual Compaction"}
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec 06 07:23:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:03.828 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:03.829 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:03.830 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:03 compute-0 nova_compute[251992]: 2025-12-06 07:23:03.848 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005783869896, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2182472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37811, "largest_seqno": 39856, "table_properties": {"data_size": 2175312, "index_size": 3782, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19466, "raw_average_key_size": 21, "raw_value_size": 2159328, "raw_average_value_size": 2399, "num_data_blocks": 166, "num_entries": 900, "num_filter_entries": 900, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005574, "oldest_key_time": 1765005574, "file_creation_time": 1765005783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 132078 microseconds, and 6491 cpu microseconds.
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:03.869937) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2182472 bytes OK
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:03.869957) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:03.872645) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:03.872662) EVENT_LOG_v1 {"time_micros": 1765005783872656, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:03.872680) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3612560, prev total WAL file size 3612560, number of live WAL files 2.
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:03.873785) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323731' seq:72057594037927935, type:22 .. '6D6772737461740031353236' seq:0, type:0; will stop at (end)
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2131KB)], [80(10MB)]
Dec 06 07:23:03 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005783873850, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 13469142, "oldest_snapshot_seqno": -1}
Dec 06 07:23:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 269 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 3.5 MiB/s wr, 156 op/s
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7273 keys, 10897737 bytes, temperature: kUnknown
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005784102025, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 10897737, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10849824, "index_size": 28601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 187111, "raw_average_key_size": 25, "raw_value_size": 10720442, "raw_average_value_size": 1474, "num_data_blocks": 1140, "num_entries": 7273, "num_filter_entries": 7273, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765005783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:04.102378) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 10897737 bytes
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:04.137416) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 59.0 rd, 47.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 10.8 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(11.2) write-amplify(5.0) OK, records in: 7715, records dropped: 442 output_compression: NoCompression
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:04.137464) EVENT_LOG_v1 {"time_micros": 1765005784137446, "job": 46, "event": "compaction_finished", "compaction_time_micros": 228323, "compaction_time_cpu_micros": 32243, "output_level": 6, "num_output_files": 1, "total_output_size": 10897737, "num_input_records": 7715, "num_output_records": 7273, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005784138239, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005784141129, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:03.873704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:04.141283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:04.141418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:04.141422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:04.141424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:04 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:04.141426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:04 compute-0 nova_compute[251992]: 2025-12-06 07:23:04.301 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:04 compute-0 nova_compute[251992]: 2025-12-06 07:23:04.319 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:04 compute-0 nova_compute[251992]: 2025-12-06 07:23:04.319 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:04 compute-0 nova_compute[251992]: 2025-12-06 07:23:04.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:04 compute-0 nova_compute[251992]: 2025-12-06 07:23:04.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:04.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:04 compute-0 nova_compute[251992]: 2025-12-06 07:23:04.879 251996 DEBUG nova.compute.manager [req-15e83fb1-d914-44a9-a314-e3d6c3552caa req-59c069c3-1182-4b64-91a6-68c06fa38ebe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Received event network-vif-plugged-df4ceadc-d14e-461c-bf8c-fb5254125675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:04 compute-0 nova_compute[251992]: 2025-12-06 07:23:04.880 251996 DEBUG oslo_concurrency.lockutils [req-15e83fb1-d914-44a9-a314-e3d6c3552caa req-59c069c3-1182-4b64-91a6-68c06fa38ebe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:04 compute-0 nova_compute[251992]: 2025-12-06 07:23:04.880 251996 DEBUG oslo_concurrency.lockutils [req-15e83fb1-d914-44a9-a314-e3d6c3552caa req-59c069c3-1182-4b64-91a6-68c06fa38ebe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:04 compute-0 nova_compute[251992]: 2025-12-06 07:23:04.881 251996 DEBUG oslo_concurrency.lockutils [req-15e83fb1-d914-44a9-a314-e3d6c3552caa req-59c069c3-1182-4b64-91a6-68c06fa38ebe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:04 compute-0 nova_compute[251992]: 2025-12-06 07:23:04.881 251996 DEBUG nova.compute.manager [req-15e83fb1-d914-44a9-a314-e3d6c3552caa req-59c069c3-1182-4b64-91a6-68c06fa38ebe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] No waiting events found dispatching network-vif-plugged-df4ceadc-d14e-461c-bf8c-fb5254125675 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:23:04 compute-0 nova_compute[251992]: 2025-12-06 07:23:04.882 251996 WARNING nova.compute.manager [req-15e83fb1-d914-44a9-a314-e3d6c3552caa req-59c069c3-1182-4b64-91a6-68c06fa38ebe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Received unexpected event network-vif-plugged-df4ceadc-d14e-461c-bf8c-fb5254125675 for instance with vm_state active and task_state deleting.
Dec 06 07:23:04 compute-0 ceph-mon[74339]: pgmap v1956: 305 pgs: 305 active+clean; 225 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.5 MiB/s wr, 129 op/s
Dec 06 07:23:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2070338189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:04 compute-0 ceph-mon[74339]: pgmap v1957: 305 pgs: 305 active+clean; 269 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 3.5 MiB/s wr, 156 op/s
Dec 06 07:23:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/183565267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:05.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:05 compute-0 sudo[312316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:05 compute-0 sudo[312316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:05 compute-0 sudo[312316]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:05 compute-0 sudo[312341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:05 compute-0 sudo[312341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:05 compute-0 sudo[312341]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 272 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.9 MiB/s wr, 183 op/s
Dec 06 07:23:06 compute-0 nova_compute[251992]: 2025-12-06 07:23:06.150 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:06.151 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:23:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:06.152 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:23:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:06.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:07.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:07 compute-0 ceph-mon[74339]: pgmap v1958: 305 pgs: 305 active+clean; 272 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.9 MiB/s wr, 183 op/s
Dec 06 07:23:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3978424021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:07 compute-0 nova_compute[251992]: 2025-12-06 07:23:07.328 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 261 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.9 MiB/s wr, 189 op/s
Dec 06 07:23:08 compute-0 ovn_controller[147168]: 2025-12-06T07:23:08Z|00341|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:23:08 compute-0 nova_compute[251992]: 2025-12-06 07:23:08.353 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:08 compute-0 nova_compute[251992]: 2025-12-06 07:23:08.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:08 compute-0 nova_compute[251992]: 2025-12-06 07:23:08.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:23:08 compute-0 nova_compute[251992]: 2025-12-06 07:23:08.672 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:23:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:08.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:08 compute-0 nova_compute[251992]: 2025-12-06 07:23:08.850 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:09.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:23:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1387182121' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:23:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:23:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1387182121' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:23:09 compute-0 nova_compute[251992]: 2025-12-06 07:23:09.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3828009080' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:23:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3828009080' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:23:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2534009470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:09 compute-0 ceph-mon[74339]: pgmap v1959: 305 pgs: 305 active+clean; 261 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.9 MiB/s wr, 189 op/s
Dec 06 07:23:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 273 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.9 MiB/s wr, 186 op/s
Dec 06 07:23:10 compute-0 nova_compute[251992]: 2025-12-06 07:23:10.668 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:10 compute-0 nova_compute[251992]: 2025-12-06 07:23:10.669 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:23:10 compute-0 nova_compute[251992]: 2025-12-06 07:23:10.684 251996 INFO nova.virt.libvirt.driver [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Deleting instance files /var/lib/nova/instances/28f9954c-de98-47d8-a564-dc16e7702d6b_del
Dec 06 07:23:10 compute-0 nova_compute[251992]: 2025-12-06 07:23:10.685 251996 INFO nova.virt.libvirt.driver [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Deletion of /var/lib/nova/instances/28f9954c-de98-47d8-a564-dc16e7702d6b_del complete
Dec 06 07:23:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1387182121' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:23:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1387182121' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:23:10 compute-0 ceph-mon[74339]: pgmap v1960: 305 pgs: 305 active+clean; 273 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.9 MiB/s wr, 186 op/s
Dec 06 07:23:10 compute-0 nova_compute[251992]: 2025-12-06 07:23:10.787 251996 INFO nova.compute.manager [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Took 11.94 seconds to destroy the instance on the hypervisor.
Dec 06 07:23:10 compute-0 nova_compute[251992]: 2025-12-06 07:23:10.787 251996 DEBUG oslo.service.loopingcall [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:23:10 compute-0 nova_compute[251992]: 2025-12-06 07:23:10.788 251996 DEBUG nova.compute.manager [-] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:23:10 compute-0 nova_compute[251992]: 2025-12-06 07:23:10.788 251996 DEBUG nova.network.neutron [-] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:23:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:10.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:23:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:11.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:23:11 compute-0 nova_compute[251992]: 2025-12-06 07:23:11.434 251996 DEBUG nova.network.neutron [-] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:11 compute-0 nova_compute[251992]: 2025-12-06 07:23:11.452 251996 INFO nova.compute.manager [-] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Took 0.66 seconds to deallocate network for instance.
Dec 06 07:23:11 compute-0 nova_compute[251992]: 2025-12-06 07:23:11.499 251996 DEBUG oslo_concurrency.lockutils [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:11 compute-0 nova_compute[251992]: 2025-12-06 07:23:11.499 251996 DEBUG oslo_concurrency.lockutils [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:11 compute-0 nova_compute[251992]: 2025-12-06 07:23:11.516 251996 DEBUG nova.compute.manager [req-fa8fc93d-9037-46fb-8eb9-5bb8b2dc7254 req-b6f15f25-9367-4ea0-b76b-fb47948d844d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Received event network-vif-deleted-df4ceadc-d14e-461c-bf8c-fb5254125675 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Dec 06 07:23:11 compute-0 nova_compute[251992]: 2025-12-06 07:23:11.595 251996 DEBUG oslo_concurrency.processutils [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Dec 06 07:23:11 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Dec 06 07:23:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 246 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.1 MiB/s wr, 162 op/s
Dec 06 07:23:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:23:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444199091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:12 compute-0 ovn_controller[147168]: 2025-12-06T07:23:12Z|00342|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.071 251996 DEBUG oslo_concurrency.processutils [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.090 251996 DEBUG nova.compute.provider_tree [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.113 251996 DEBUG nova.scheduler.client.report [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.122 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.139 251996 DEBUG oslo_concurrency.lockutils [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.175 251996 INFO nova.scheduler.client.report [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Deleted allocations for instance 28f9954c-de98-47d8-a564-dc16e7702d6b
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.238 251996 DEBUG oslo_concurrency.lockutils [None req-72fd22b9-b181-48b7-96d8-09acd5a56409 ca0a4a5ab2bb41298078e4d8c601925f 597285dcf6f141219338afe733e28a2a - - default default] Lock "28f9954c-de98-47d8-a564-dc16e7702d6b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.330 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:23:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:12.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.877 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.878 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.878 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:23:12 compute-0 nova_compute[251992]: 2025-12-06 07:23:12.878 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 00f56c62-f327-41e3-a105-24f56ae124c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:23:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:23:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:23:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:23:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:23:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:23:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:13.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Dec 06 07:23:13 compute-0 ceph-mon[74339]: osdmap e260: 3 total, 3 up, 3 in
Dec 06 07:23:13 compute-0 ceph-mon[74339]: pgmap v1962: 305 pgs: 305 active+clean; 246 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.1 MiB/s wr, 162 op/s
Dec 06 07:23:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/444199091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:13 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Dec 06 07:23:13 compute-0 nova_compute[251992]: 2025-12-06 07:23:13.852 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 246 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 06 07:23:14 compute-0 ceph-mon[74339]: osdmap e261: 3 total, 3 up, 3 in
Dec 06 07:23:14 compute-0 ceph-mon[74339]: pgmap v1964: 305 pgs: 305 active+clean; 246 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec 06 07:23:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:14.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:14 compute-0 nova_compute[251992]: 2025-12-06 07:23:14.993 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updating instance_info_cache with network_info: [{"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:15 compute-0 nova_compute[251992]: 2025-12-06 07:23:15.009 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:23:15 compute-0 nova_compute[251992]: 2025-12-06 07:23:15.010 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:23:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:15.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:15.154 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 246 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 1.5 MiB/s wr, 49 op/s
Dec 06 07:23:16 compute-0 ovn_controller[147168]: 2025-12-06T07:23:16Z|00343|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:23:16 compute-0 nova_compute[251992]: 2025-12-06 07:23:16.340 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:16.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:16 compute-0 ovn_controller[147168]: 2025-12-06T07:23:16Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:82:3f 10.100.0.6
Dec 06 07:23:16 compute-0 ovn_controller[147168]: 2025-12-06T07:23:16Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:82:3f 10.100.0.6
Dec 06 07:23:17 compute-0 ceph-mon[74339]: pgmap v1965: 305 pgs: 305 active+clean; 246 MiB data, 845 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 1.5 MiB/s wr, 49 op/s
Dec 06 07:23:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:23:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:17.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:23:17 compute-0 nova_compute[251992]: 2025-12-06 07:23:17.290 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005782.2881985, 28f9954c-de98-47d8-a564-dc16e7702d6b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:23:17 compute-0 nova_compute[251992]: 2025-12-06 07:23:17.290 251996 INFO nova.compute.manager [-] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] VM Stopped (Lifecycle Event)
Dec 06 07:23:17 compute-0 nova_compute[251992]: 2025-12-06 07:23:17.313 251996 DEBUG nova.compute.manager [None req-2f610347-1e22-42f2-91a1-09e970483aa6 - - - - - -] [instance: 28f9954c-de98-47d8-a564-dc16e7702d6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:17 compute-0 nova_compute[251992]: 2025-12-06 07:23:17.332 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 258 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 770 KiB/s wr, 54 op/s
Dec 06 07:23:18 compute-0 ceph-mon[74339]: pgmap v1966: 305 pgs: 305 active+clean; 258 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 770 KiB/s wr, 54 op/s
Dec 06 07:23:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:23:18
Dec 06 07:23:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:23:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:23:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.mgr', 'images', 'vms', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.rgw.root']
Dec 06 07:23:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:23:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:18.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:18 compute-0 nova_compute[251992]: 2025-12-06 07:23:18.854 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:19.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 264 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 1.4 MiB/s wr, 62 op/s
Dec 06 07:23:20 compute-0 ceph-mon[74339]: pgmap v1967: 305 pgs: 305 active+clean; 264 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 1.4 MiB/s wr, 62 op/s
Dec 06 07:23:20 compute-0 nova_compute[251992]: 2025-12-06 07:23:20.321 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:20.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:21.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:21 compute-0 nova_compute[251992]: 2025-12-06 07:23:21.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:21 compute-0 nova_compute[251992]: 2025-12-06 07:23:21.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:23:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 249 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 416 KiB/s rd, 2.6 MiB/s wr, 110 op/s
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.047313) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802047390, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 424, "num_deletes": 251, "total_data_size": 382563, "memory_usage": 391952, "flush_reason": "Manual Compaction"}
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802051556, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 379122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39857, "largest_seqno": 40280, "table_properties": {"data_size": 376555, "index_size": 667, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6341, "raw_average_key_size": 19, "raw_value_size": 371351, "raw_average_value_size": 1121, "num_data_blocks": 30, "num_entries": 331, "num_filter_entries": 331, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005784, "oldest_key_time": 1765005784, "file_creation_time": 1765005802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 4269 microseconds, and 1828 cpu microseconds.
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.051594) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 379122 bytes OK
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.051612) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.053869) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.053912) EVENT_LOG_v1 {"time_micros": 1765005802053903, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.053934) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 379920, prev total WAL file size 379920, number of live WAL files 2.
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.054425) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(370KB)], [83(10MB)]
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802054476, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 11276859, "oldest_snapshot_seqno": -1}
Dec 06 07:23:22 compute-0 sudo[312397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:22 compute-0 sudo[312397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:22 compute-0 sudo[312397]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 7089 keys, 9293921 bytes, temperature: kUnknown
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802131071, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 9293921, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9248845, "index_size": 26276, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 184066, "raw_average_key_size": 25, "raw_value_size": 9124202, "raw_average_value_size": 1287, "num_data_blocks": 1033, "num_entries": 7089, "num_filter_entries": 7089, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765005802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.131310) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9293921 bytes
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.133227) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.1 rd, 121.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(54.3) write-amplify(24.5) OK, records in: 7604, records dropped: 515 output_compression: NoCompression
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.133249) EVENT_LOG_v1 {"time_micros": 1765005802133239, "job": 48, "event": "compaction_finished", "compaction_time_micros": 76679, "compaction_time_cpu_micros": 23725, "output_level": 6, "num_output_files": 1, "total_output_size": 9293921, "num_input_records": 7604, "num_output_records": 7089, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802133486, "job": 48, "event": "table_file_deletion", "file_number": 85}
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765005802135860, "job": 48, "event": "table_file_deletion", "file_number": 83}
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.054344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.135945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.135951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.135952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.135954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:22 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:23:22.135955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:23:22 compute-0 sudo[312428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:23:22 compute-0 sudo[312428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:22 compute-0 sudo[312428]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:22 compute-0 podman[312421]: 2025-12-06 07:23:22.20393672 +0000 UTC m=+0.084264361 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:23:22 compute-0 sudo[312471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:22 compute-0 sudo[312471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:22 compute-0 sudo[312471]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:22 compute-0 sudo[312498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:23:22 compute-0 sudo[312498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:22 compute-0 nova_compute[251992]: 2025-12-06 07:23:22.334 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:22 compute-0 sudo[312498]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:22.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:23:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:23:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:23:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:23:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:23:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Dec 06 07:23:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:23.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:23 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ded4cd87-dd67-4c9b-a112-1fd58aa217e2 does not exist
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9c52b864-2657-4657-87ef-3a6ad9afa142 does not exist
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 931200f2-a132-42af-a4c3-ae108f63a6e5 does not exist
Dec 06 07:23:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:23:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:23:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:23:23 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:23:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:23:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:23:23 compute-0 sudo[312554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:23 compute-0 sudo[312554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:23 compute-0 sudo[312554]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:23:23 compute-0 sudo[312579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:23:23 compute-0 sudo[312579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:23 compute-0 sudo[312579]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:23 compute-0 sudo[312604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:23 compute-0 sudo[312604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:23 compute-0 sudo[312604]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Dec 06 07:23:23 compute-0 ceph-mon[74339]: pgmap v1968: 305 pgs: 305 active+clean; 249 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 416 KiB/s rd, 2.6 MiB/s wr, 110 op/s
Dec 06 07:23:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/24472655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:23:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:23:23 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Dec 06 07:23:23 compute-0 sudo[312629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:23:23 compute-0 sudo[312629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:23 compute-0 nova_compute[251992]: 2025-12-06 07:23:23.708 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:23:23 compute-0 nova_compute[251992]: 2025-12-06 07:23:23.856 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:23 compute-0 podman[312696]: 2025-12-06 07:23:23.906331541 +0000 UTC m=+0.044816445 container create 107ac65c71de4486d285ef4c653e86f90f8888f330794ce630e6a0bf1854a516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 07:23:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 200 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 433 KiB/s rd, 2.6 MiB/s wr, 130 op/s
Dec 06 07:23:23 compute-0 systemd[1]: Started libpod-conmon-107ac65c71de4486d285ef4c653e86f90f8888f330794ce630e6a0bf1854a516.scope.
Dec 06 07:23:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:23:23 compute-0 podman[312696]: 2025-12-06 07:23:23.888510725 +0000 UTC m=+0.026995649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:23:24 compute-0 podman[312696]: 2025-12-06 07:23:24.005793516 +0000 UTC m=+0.144278440 container init 107ac65c71de4486d285ef4c653e86f90f8888f330794ce630e6a0bf1854a516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 07:23:24 compute-0 podman[312696]: 2025-12-06 07:23:24.014430752 +0000 UTC m=+0.152915666 container start 107ac65c71de4486d285ef4c653e86f90f8888f330794ce630e6a0bf1854a516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec 06 07:23:24 compute-0 podman[312696]: 2025-12-06 07:23:24.018989777 +0000 UTC m=+0.157474701 container attach 107ac65c71de4486d285ef4c653e86f90f8888f330794ce630e6a0bf1854a516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 07:23:24 compute-0 magical_mestorf[312712]: 167 167
Dec 06 07:23:24 compute-0 systemd[1]: libpod-107ac65c71de4486d285ef4c653e86f90f8888f330794ce630e6a0bf1854a516.scope: Deactivated successfully.
Dec 06 07:23:24 compute-0 conmon[312712]: conmon 107ac65c71de4486d285 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-107ac65c71de4486d285ef4c653e86f90f8888f330794ce630e6a0bf1854a516.scope/container/memory.events
Dec 06 07:23:24 compute-0 podman[312696]: 2025-12-06 07:23:24.024132796 +0000 UTC m=+0.162617700 container died 107ac65c71de4486d285ef4c653e86f90f8888f330794ce630e6a0bf1854a516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec 06 07:23:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0fd198446528078048b9fd2c8e452dd86c9618898560d38f50f088a0c8bb6bc-merged.mount: Deactivated successfully.
Dec 06 07:23:24 compute-0 podman[312696]: 2025-12-06 07:23:24.073564666 +0000 UTC m=+0.212049570 container remove 107ac65c71de4486d285ef4c653e86f90f8888f330794ce630e6a0bf1854a516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mestorf, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 07:23:24 compute-0 systemd[1]: libpod-conmon-107ac65c71de4486d285ef4c653e86f90f8888f330794ce630e6a0bf1854a516.scope: Deactivated successfully.
Dec 06 07:23:24 compute-0 podman[312736]: 2025-12-06 07:23:24.239903777 +0000 UTC m=+0.041320049 container create 24a6d057a3420518c875fee4bfd54497847893babd6753e38cf1d345de67fc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:23:24 compute-0 nova_compute[251992]: 2025-12-06 07:23:24.282 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:24 compute-0 systemd[1]: Started libpod-conmon-24a6d057a3420518c875fee4bfd54497847893babd6753e38cf1d345de67fc47.scope.
Dec 06 07:23:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:23:24 compute-0 podman[312736]: 2025-12-06 07:23:24.224063444 +0000 UTC m=+0.025479736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a0345b35e8e50b540575d21426ee83d9506b3a4e398dba2eceae39fdd80e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a0345b35e8e50b540575d21426ee83d9506b3a4e398dba2eceae39fdd80e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a0345b35e8e50b540575d21426ee83d9506b3a4e398dba2eceae39fdd80e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a0345b35e8e50b540575d21426ee83d9506b3a4e398dba2eceae39fdd80e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a8a0345b35e8e50b540575d21426ee83d9506b3a4e398dba2eceae39fdd80e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:24 compute-0 podman[312736]: 2025-12-06 07:23:24.337336736 +0000 UTC m=+0.138753008 container init 24a6d057a3420518c875fee4bfd54497847893babd6753e38cf1d345de67fc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:23:24 compute-0 podman[312736]: 2025-12-06 07:23:24.34592639 +0000 UTC m=+0.147342662 container start 24a6d057a3420518c875fee4bfd54497847893babd6753e38cf1d345de67fc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:23:24 compute-0 podman[312736]: 2025-12-06 07:23:24.350965048 +0000 UTC m=+0.152381340 container attach 24a6d057a3420518c875fee4bfd54497847893babd6753e38cf1d345de67fc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:23:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:23:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:23:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:23:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:23:24 compute-0 ceph-mon[74339]: osdmap e262: 3 total, 3 up, 3 in
Dec 06 07:23:24 compute-0 ceph-mon[74339]: pgmap v1970: 305 pgs: 305 active+clean; 200 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 433 KiB/s rd, 2.6 MiB/s wr, 130 op/s
Dec 06 07:23:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Dec 06 07:23:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Dec 06 07:23:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Dec 06 07:23:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:24.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:24 compute-0 nova_compute[251992]: 2025-12-06 07:23:24.846 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:25.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:25 compute-0 nifty_mirzakhani[312753]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:23:25 compute-0 nifty_mirzakhani[312753]: --> relative data size: 1.0
Dec 06 07:23:25 compute-0 nifty_mirzakhani[312753]: --> All data devices are unavailable
Dec 06 07:23:25 compute-0 systemd[1]: libpod-24a6d057a3420518c875fee4bfd54497847893babd6753e38cf1d345de67fc47.scope: Deactivated successfully.
Dec 06 07:23:25 compute-0 podman[312736]: 2025-12-06 07:23:25.239264577 +0000 UTC m=+1.040680849 container died 24a6d057a3420518c875fee4bfd54497847893babd6753e38cf1d345de67fc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 07:23:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a8a0345b35e8e50b540575d21426ee83d9506b3a4e398dba2eceae39fdd80e3-merged.mount: Deactivated successfully.
Dec 06 07:23:25 compute-0 podman[312736]: 2025-12-06 07:23:25.300429216 +0000 UTC m=+1.101845488 container remove 24a6d057a3420518c875fee4bfd54497847893babd6753e38cf1d345de67fc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:23:25 compute-0 systemd[1]: libpod-conmon-24a6d057a3420518c875fee4bfd54497847893babd6753e38cf1d345de67fc47.scope: Deactivated successfully.
Dec 06 07:23:25 compute-0 sudo[312629]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:25 compute-0 sudo[312780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:25 compute-0 sudo[312780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:25 compute-0 sudo[312780]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:25 compute-0 sudo[312805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:25 compute-0 sudo[312806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:23:25 compute-0 sudo[312805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:25 compute-0 sudo[312806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:25 compute-0 sudo[312806]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:25 compute-0 sudo[312805]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:25 compute-0 sudo[312867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:25 compute-0 sudo[312868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:25 compute-0 sudo[312867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:25 compute-0 sudo[312868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:25 compute-0 sudo[312867]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:25 compute-0 sudo[312868]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:25 compute-0 podman[312853]: 2025-12-06 07:23:25.548652313 +0000 UTC m=+0.064774489 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Dec 06 07:23:25 compute-0 podman[312854]: 2025-12-06 07:23:25.549198457 +0000 UTC m=+0.064732448 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:23:25 compute-0 sudo[312942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:23:25 compute-0 sudo[312942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:25 compute-0 ceph-mon[74339]: osdmap e263: 3 total, 3 up, 3 in
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021646862786152987 of space, bias 1.0, pg target 0.6494058835845896 quantized to 32 (current 32)
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004074735657453936 of space, bias 1.0, pg target 1.2224206972361809 quantized to 32 (current 32)
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:23:25 compute-0 podman[313007]: 2025-12-06 07:23:25.919424203 +0000 UTC m=+0.038768339 container create e3a754b1d69761e2b7ef050539c073e73e634f44f89372274d7489fd0ca5eb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_agnesi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:23:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 200 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 518 KiB/s rd, 2.4 MiB/s wr, 138 op/s
Dec 06 07:23:25 compute-0 systemd[1]: Started libpod-conmon-e3a754b1d69761e2b7ef050539c073e73e634f44f89372274d7489fd0ca5eb3c.scope.
Dec 06 07:23:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:23:25 compute-0 podman[313007]: 2025-12-06 07:23:25.901677319 +0000 UTC m=+0.021021475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:23:26 compute-0 podman[313007]: 2025-12-06 07:23:26.014016536 +0000 UTC m=+0.133360702 container init e3a754b1d69761e2b7ef050539c073e73e634f44f89372274d7489fd0ca5eb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_agnesi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 07:23:26 compute-0 podman[313007]: 2025-12-06 07:23:26.021255553 +0000 UTC m=+0.140599689 container start e3a754b1d69761e2b7ef050539c073e73e634f44f89372274d7489fd0ca5eb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:23:26 compute-0 practical_agnesi[313023]: 167 167
Dec 06 07:23:26 compute-0 systemd[1]: libpod-e3a754b1d69761e2b7ef050539c073e73e634f44f89372274d7489fd0ca5eb3c.scope: Deactivated successfully.
Dec 06 07:23:26 compute-0 podman[313007]: 2025-12-06 07:23:26.031622746 +0000 UTC m=+0.150966892 container attach e3a754b1d69761e2b7ef050539c073e73e634f44f89372274d7489fd0ca5eb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 07:23:26 compute-0 podman[313007]: 2025-12-06 07:23:26.03249656 +0000 UTC m=+0.151840696 container died e3a754b1d69761e2b7ef050539c073e73e634f44f89372274d7489fd0ca5eb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 07:23:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa0c5f1059b7d84aa1e800c8441b6e32a8db122e3fd172a7a8ed5d7d532c813b-merged.mount: Deactivated successfully.
Dec 06 07:23:26 compute-0 podman[313007]: 2025-12-06 07:23:26.116494643 +0000 UTC m=+0.235838779 container remove e3a754b1d69761e2b7ef050539c073e73e634f44f89372274d7489fd0ca5eb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_agnesi, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:23:26 compute-0 systemd[1]: libpod-conmon-e3a754b1d69761e2b7ef050539c073e73e634f44f89372274d7489fd0ca5eb3c.scope: Deactivated successfully.
Dec 06 07:23:26 compute-0 podman[313048]: 2025-12-06 07:23:26.294121672 +0000 UTC m=+0.042176942 container create 9192064e268c55a2614642501fec72e3f86f239d3f02fec0d5a932c898ed7fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:23:26 compute-0 systemd[1]: Started libpod-conmon-9192064e268c55a2614642501fec72e3f86f239d3f02fec0d5a932c898ed7fd7.scope.
Dec 06 07:23:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/208757a445bd3a5571a9e3d61ffd48fe7be5641d793916b22b1318fc656442ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/208757a445bd3a5571a9e3d61ffd48fe7be5641d793916b22b1318fc656442ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/208757a445bd3a5571a9e3d61ffd48fe7be5641d793916b22b1318fc656442ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/208757a445bd3a5571a9e3d61ffd48fe7be5641d793916b22b1318fc656442ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:26 compute-0 podman[313048]: 2025-12-06 07:23:26.276459639 +0000 UTC m=+0.024514929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:23:26 compute-0 podman[313048]: 2025-12-06 07:23:26.372557153 +0000 UTC m=+0.120612453 container init 9192064e268c55a2614642501fec72e3f86f239d3f02fec0d5a932c898ed7fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 07:23:26 compute-0 podman[313048]: 2025-12-06 07:23:26.378360591 +0000 UTC m=+0.126415861 container start 9192064e268c55a2614642501fec72e3f86f239d3f02fec0d5a932c898ed7fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_fermat, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 07:23:26 compute-0 podman[313048]: 2025-12-06 07:23:26.387808309 +0000 UTC m=+0.135863599 container attach 9192064e268c55a2614642501fec72e3f86f239d3f02fec0d5a932c898ed7fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 07:23:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:26.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:26 compute-0 ceph-mon[74339]: pgmap v1972: 305 pgs: 305 active+clean; 200 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 518 KiB/s rd, 2.4 MiB/s wr, 138 op/s
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]: {
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:     "0": [
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:         {
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             "devices": [
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "/dev/loop3"
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             ],
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             "lv_name": "ceph_lv0",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             "lv_size": "7511998464",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             "name": "ceph_lv0",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             "tags": {
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "ceph.cluster_name": "ceph",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "ceph.crush_device_class": "",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "ceph.encrypted": "0",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "ceph.osd_id": "0",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "ceph.type": "block",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:                 "ceph.vdo": "0"
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             },
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             "type": "block",
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:             "vg_name": "ceph_vg0"
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:         }
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]:     ]
Dec 06 07:23:27 compute-0 quizzical_fermat[313065]: }
Dec 06 07:23:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:27.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:27 compute-0 systemd[1]: libpod-9192064e268c55a2614642501fec72e3f86f239d3f02fec0d5a932c898ed7fd7.scope: Deactivated successfully.
Dec 06 07:23:27 compute-0 podman[313048]: 2025-12-06 07:23:27.158161048 +0000 UTC m=+0.906216318 container died 9192064e268c55a2614642501fec72e3f86f239d3f02fec0d5a932c898ed7fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:23:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-208757a445bd3a5571a9e3d61ffd48fe7be5641d793916b22b1318fc656442ea-merged.mount: Deactivated successfully.
Dec 06 07:23:27 compute-0 podman[313048]: 2025-12-06 07:23:27.239772965 +0000 UTC m=+0.987828235 container remove 9192064e268c55a2614642501fec72e3f86f239d3f02fec0d5a932c898ed7fd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_fermat, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 07:23:27 compute-0 systemd[1]: libpod-conmon-9192064e268c55a2614642501fec72e3f86f239d3f02fec0d5a932c898ed7fd7.scope: Deactivated successfully.
Dec 06 07:23:27 compute-0 sudo[312942]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:27 compute-0 sudo[313089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:27 compute-0 nova_compute[251992]: 2025-12-06 07:23:27.335 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:27 compute-0 sudo[313089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:27 compute-0 sudo[313089]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:27 compute-0 sudo[313114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:23:27 compute-0 sudo[313114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:27 compute-0 sudo[313114]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:27 compute-0 sudo[313139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:27 compute-0 sudo[313139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:27 compute-0 sudo[313139]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:27 compute-0 sudo[313164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:23:27 compute-0 sudo[313164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:27 compute-0 podman[313229]: 2025-12-06 07:23:27.810129565 +0000 UTC m=+0.026119834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:23:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 172 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 1.8 MiB/s wr, 139 op/s
Dec 06 07:23:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:28 compute-0 nova_compute[251992]: 2025-12-06 07:23:28.020 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:28 compute-0 podman[313229]: 2025-12-06 07:23:28.069366161 +0000 UTC m=+0.285356410 container create b3aff9ada8036001a383703e4c8b7677a89f1f337b0aa39074a17acabd78bb52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:23:28 compute-0 systemd[1]: Started libpod-conmon-b3aff9ada8036001a383703e4c8b7677a89f1f337b0aa39074a17acabd78bb52.scope.
Dec 06 07:23:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:23:28 compute-0 podman[313229]: 2025-12-06 07:23:28.52903788 +0000 UTC m=+0.745028139 container init b3aff9ada8036001a383703e4c8b7677a89f1f337b0aa39074a17acabd78bb52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cerf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:23:28 compute-0 podman[313229]: 2025-12-06 07:23:28.53565373 +0000 UTC m=+0.751643969 container start b3aff9ada8036001a383703e4c8b7677a89f1f337b0aa39074a17acabd78bb52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:23:28 compute-0 systemd[1]: libpod-b3aff9ada8036001a383703e4c8b7677a89f1f337b0aa39074a17acabd78bb52.scope: Deactivated successfully.
Dec 06 07:23:28 compute-0 youthful_cerf[313245]: 167 167
Dec 06 07:23:28 compute-0 conmon[313245]: conmon b3aff9ada8036001a383 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b3aff9ada8036001a383703e4c8b7677a89f1f337b0aa39074a17acabd78bb52.scope/container/memory.events
Dec 06 07:23:28 compute-0 podman[313229]: 2025-12-06 07:23:28.570639445 +0000 UTC m=+0.786629714 container attach b3aff9ada8036001a383703e4c8b7677a89f1f337b0aa39074a17acabd78bb52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 07:23:28 compute-0 podman[313229]: 2025-12-06 07:23:28.571130869 +0000 UTC m=+0.787121118 container died b3aff9ada8036001a383703e4c8b7677a89f1f337b0aa39074a17acabd78bb52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 07:23:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c03502ea365246a9b4d8fe65c32bcb84d6ef08e0dc6cb8e9cb1f962a0c42005-merged.mount: Deactivated successfully.
Dec 06 07:23:28 compute-0 ceph-mon[74339]: pgmap v1973: 305 pgs: 305 active+clean; 172 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 1.8 MiB/s wr, 139 op/s
Dec 06 07:23:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4237680350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:28 compute-0 podman[313229]: 2025-12-06 07:23:28.810149623 +0000 UTC m=+1.026139872 container remove b3aff9ada8036001a383703e4c8b7677a89f1f337b0aa39074a17acabd78bb52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:23:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:28.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:28 compute-0 nova_compute[251992]: 2025-12-06 07:23:28.859 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:28 compute-0 systemd[1]: libpod-conmon-b3aff9ada8036001a383703e4c8b7677a89f1f337b0aa39074a17acabd78bb52.scope: Deactivated successfully.
Dec 06 07:23:29 compute-0 podman[313270]: 2025-12-06 07:23:28.972810943 +0000 UTC m=+0.022372741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:23:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:29.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:29 compute-0 podman[313270]: 2025-12-06 07:23:29.181315815 +0000 UTC m=+0.230877593 container create db6e93368bdd7588c580a14b1a0c0f6607b98063ccfbfd0ab8cb8b511e049e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_boyd, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:23:29 compute-0 systemd[1]: Started libpod-conmon-db6e93368bdd7588c580a14b1a0c0f6607b98063ccfbfd0ab8cb8b511e049e4c.scope.
Dec 06 07:23:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3660502b76a09e338dcc20cf92aa0d9c3d332ed3b30b21d946c4f11100ac926c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3660502b76a09e338dcc20cf92aa0d9c3d332ed3b30b21d946c4f11100ac926c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3660502b76a09e338dcc20cf92aa0d9c3d332ed3b30b21d946c4f11100ac926c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3660502b76a09e338dcc20cf92aa0d9c3d332ed3b30b21d946c4f11100ac926c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:29 compute-0 podman[313270]: 2025-12-06 07:23:29.510672835 +0000 UTC m=+0.560234633 container init db6e93368bdd7588c580a14b1a0c0f6607b98063ccfbfd0ab8cb8b511e049e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_boyd, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:23:29 compute-0 podman[313270]: 2025-12-06 07:23:29.519573448 +0000 UTC m=+0.569135266 container start db6e93368bdd7588c580a14b1a0c0f6607b98063ccfbfd0ab8cb8b511e049e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_boyd, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 07:23:29 compute-0 podman[313270]: 2025-12-06 07:23:29.814774747 +0000 UTC m=+0.864336525 container attach db6e93368bdd7588c580a14b1a0c0f6607b98063ccfbfd0ab8cb8b511e049e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:23:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 135 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 23 KiB/s wr, 68 op/s
Dec 06 07:23:30 compute-0 zealous_boyd[313286]: {
Dec 06 07:23:30 compute-0 zealous_boyd[313286]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:23:30 compute-0 zealous_boyd[313286]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:23:30 compute-0 zealous_boyd[313286]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:23:30 compute-0 zealous_boyd[313286]:         "osd_id": 0,
Dec 06 07:23:30 compute-0 zealous_boyd[313286]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:23:30 compute-0 zealous_boyd[313286]:         "type": "bluestore"
Dec 06 07:23:30 compute-0 zealous_boyd[313286]:     }
Dec 06 07:23:30 compute-0 zealous_boyd[313286]: }
Dec 06 07:23:30 compute-0 systemd[1]: libpod-db6e93368bdd7588c580a14b1a0c0f6607b98063ccfbfd0ab8cb8b511e049e4c.scope: Deactivated successfully.
Dec 06 07:23:30 compute-0 podman[313270]: 2025-12-06 07:23:30.469695734 +0000 UTC m=+1.519257562 container died db6e93368bdd7588c580a14b1a0c0f6607b98063ccfbfd0ab8cb8b511e049e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_boyd, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 07:23:30 compute-0 nova_compute[251992]: 2025-12-06 07:23:30.800 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Acquiring lock "cd305836-b072-4590-89c0-26c6c94a67a7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:30 compute-0 nova_compute[251992]: 2025-12-06 07:23:30.801 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:30.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:30 compute-0 nova_compute[251992]: 2025-12-06 07:23:30.826 251996 DEBUG nova.compute.manager [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:23:30 compute-0 nova_compute[251992]: 2025-12-06 07:23:30.927 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:30 compute-0 nova_compute[251992]: 2025-12-06 07:23:30.927 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:30 compute-0 nova_compute[251992]: 2025-12-06 07:23:30.934 251996 DEBUG nova.virt.hardware [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:23:30 compute-0 nova_compute[251992]: 2025-12-06 07:23:30.935 251996 INFO nova.compute.claims [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:23:31 compute-0 nova_compute[251992]: 2025-12-06 07:23:31.034 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:31.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 144 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.2 MiB/s wr, 86 op/s
Dec 06 07:23:32 compute-0 nova_compute[251992]: 2025-12-06 07:23:32.338 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:32.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Dec 06 07:23:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:33.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-3660502b76a09e338dcc20cf92aa0d9c3d332ed3b30b21d946c4f11100ac926c-merged.mount: Deactivated successfully.
Dec 06 07:23:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:23:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/610633148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.489 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.495 251996 DEBUG nova.compute.provider_tree [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.509 251996 DEBUG nova.scheduler.client.report [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.534 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.535 251996 DEBUG nova.compute.manager [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.596 251996 DEBUG nova.compute.manager [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.596 251996 DEBUG nova.network.neutron [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.619 251996 INFO nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.635 251996 DEBUG nova.compute.manager [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.758 251996 DEBUG nova.compute.manager [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.760 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.760 251996 INFO nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Creating image(s)
Dec 06 07:23:33 compute-0 podman[313270]: 2025-12-06 07:23:33.787663327 +0000 UTC m=+4.837225105 container remove db6e93368bdd7588c580a14b1a0c0f6607b98063ccfbfd0ab8cb8b511e049e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:23:33 compute-0 systemd[1]: libpod-conmon-db6e93368bdd7588c580a14b1a0c0f6607b98063ccfbfd0ab8cb8b511e049e4c.scope: Deactivated successfully.
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.795 251996 DEBUG nova.storage.rbd_utils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] rbd image cd305836-b072-4590-89c0-26c6c94a67a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:33 compute-0 sudo[313164]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.840 251996 DEBUG nova.storage.rbd_utils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] rbd image cd305836-b072-4590-89c0-26c6c94a67a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.870 251996 DEBUG nova.storage.rbd_utils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] rbd image cd305836-b072-4590-89c0-26c6c94a67a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.873 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.900 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 167 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.940 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.940 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.941 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.942 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.969 251996 DEBUG nova.storage.rbd_utils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] rbd image cd305836-b072-4590-89c0-26c6c94a67a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:33 compute-0 nova_compute[251992]: 2025-12-06 07:23:33.973 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef cd305836-b072-4590-89c0-26c6c94a67a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:34 compute-0 nova_compute[251992]: 2025-12-06 07:23:34.004 251996 DEBUG nova.policy [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd0385d6c425a4b6ca71336ef4a2beb43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '76ed0aa861fd42da90bbded63873a563', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:23:34 compute-0 ceph-mon[74339]: pgmap v1974: 305 pgs: 305 active+clean; 135 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 23 KiB/s wr, 68 op/s
Dec 06 07:23:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1442763614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Dec 06 07:23:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:23:34 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Dec 06 07:23:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:23:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:23:34 compute-0 nova_compute[251992]: 2025-12-06 07:23:34.383 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef cd305836-b072-4590-89c0-26c6c94a67a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 710231d3-4ca4-48ec-9a46-bb6fd3ba6334 does not exist
Dec 06 07:23:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4e8f5dce-32d7-47f0-9052-94825fdd9a0c does not exist
Dec 06 07:23:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b81943e3-d756-498d-bd41-33c4ad6936d3 does not exist
Dec 06 07:23:34 compute-0 sudo[313445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:34 compute-0 sudo[313445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:34 compute-0 sudo[313445]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:34 compute-0 nova_compute[251992]: 2025-12-06 07:23:34.480 251996 DEBUG nova.storage.rbd_utils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] resizing rbd image cd305836-b072-4590-89c0-26c6c94a67a7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:23:34 compute-0 sudo[313500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:23:34 compute-0 sudo[313500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:34 compute-0 sudo[313500]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:34 compute-0 nova_compute[251992]: 2025-12-06 07:23:34.603 251996 DEBUG nova.objects.instance [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lazy-loading 'migration_context' on Instance uuid cd305836-b072-4590-89c0-26c6c94a67a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:34 compute-0 nova_compute[251992]: 2025-12-06 07:23:34.616 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:23:34 compute-0 nova_compute[251992]: 2025-12-06 07:23:34.616 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Ensure instance console log exists: /var/lib/nova/instances/cd305836-b072-4590-89c0-26c6c94a67a7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:23:34 compute-0 nova_compute[251992]: 2025-12-06 07:23:34.617 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:34 compute-0 nova_compute[251992]: 2025-12-06 07:23:34.617 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:34 compute-0 nova_compute[251992]: 2025-12-06 07:23:34.617 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:34.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:35.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:35 compute-0 ceph-mon[74339]: pgmap v1975: 305 pgs: 305 active+clean; 144 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.2 MiB/s wr, 86 op/s
Dec 06 07:23:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/610633148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/377675331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:35 compute-0 ceph-mon[74339]: pgmap v1976: 305 pgs: 305 active+clean; 167 MiB data, 792 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:23:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:23:35 compute-0 ceph-mon[74339]: osdmap e264: 3 total, 3 up, 3 in
Dec 06 07:23:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:23:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2798063090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 205 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 75 KiB/s rd, 4.1 MiB/s wr, 113 op/s
Dec 06 07:23:36 compute-0 nova_compute[251992]: 2025-12-06 07:23:36.574 251996 DEBUG nova.network.neutron [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Successfully created port: 17febd5c-1a77-4f78-a405-50d99a8ade1d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:23:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:36.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:36 compute-0 ceph-mon[74339]: pgmap v1978: 305 pgs: 305 active+clean; 205 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 75 KiB/s rd, 4.1 MiB/s wr, 113 op/s
Dec 06 07:23:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:37.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:37 compute-0 nova_compute[251992]: 2025-12-06 07:23:37.341 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:37 compute-0 nova_compute[251992]: 2025-12-06 07:23:37.848 251996 DEBUG nova.network.neutron [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Successfully updated port: 17febd5c-1a77-4f78-a405-50d99a8ade1d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:23:37 compute-0 nova_compute[251992]: 2025-12-06 07:23:37.868 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Acquiring lock "refresh_cache-cd305836-b072-4590-89c0-26c6c94a67a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:23:37 compute-0 nova_compute[251992]: 2025-12-06 07:23:37.868 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Acquired lock "refresh_cache-cd305836-b072-4590-89c0-26c6c94a67a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:23:37 compute-0 nova_compute[251992]: 2025-12-06 07:23:37.868 251996 DEBUG nova.network.neutron [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:23:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 205 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 4.1 MiB/s wr, 93 op/s
Dec 06 07:23:37 compute-0 nova_compute[251992]: 2025-12-06 07:23:37.999 251996 DEBUG nova.compute.manager [req-16358759-02e0-4309-b9af-28becaa944fb req-b6d9a656-43e1-4bde-bdc3-5b8520195a19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Received event network-changed-17febd5c-1a77-4f78-a405-50d99a8ade1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:38 compute-0 nova_compute[251992]: 2025-12-06 07:23:37.999 251996 DEBUG nova.compute.manager [req-16358759-02e0-4309-b9af-28becaa944fb req-b6d9a656-43e1-4bde-bdc3-5b8520195a19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Refreshing instance network info cache due to event network-changed-17febd5c-1a77-4f78-a405-50d99a8ade1d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:23:38 compute-0 nova_compute[251992]: 2025-12-06 07:23:38.000 251996 DEBUG oslo_concurrency.lockutils [req-16358759-02e0-4309-b9af-28becaa944fb req-b6d9a656-43e1-4bde-bdc3-5b8520195a19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-cd305836-b072-4590-89c0-26c6c94a67a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:23:38 compute-0 nova_compute[251992]: 2025-12-06 07:23:38.082 251996 DEBUG nova.network.neutron [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:23:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:38.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:38 compute-0 nova_compute[251992]: 2025-12-06 07:23:38.863 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.127 251996 DEBUG nova.network.neutron [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Updating instance_info_cache with network_info: [{"id": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "address": "fa:16:3e:c5:8d:0c", "network": {"id": "3e2e3296-3e1f-431b-8634-816b4cd686e8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1999907489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ed0aa861fd42da90bbded63873a563", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17febd5c-1a", "ovs_interfaceid": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.145 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Releasing lock "refresh_cache-cd305836-b072-4590-89c0-26c6c94a67a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.145 251996 DEBUG nova.compute.manager [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Instance network_info: |[{"id": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "address": "fa:16:3e:c5:8d:0c", "network": {"id": "3e2e3296-3e1f-431b-8634-816b4cd686e8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1999907489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ed0aa861fd42da90bbded63873a563", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17febd5c-1a", "ovs_interfaceid": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.146 251996 DEBUG oslo_concurrency.lockutils [req-16358759-02e0-4309-b9af-28becaa944fb req-b6d9a656-43e1-4bde-bdc3-5b8520195a19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-cd305836-b072-4590-89c0-26c6c94a67a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.146 251996 DEBUG nova.network.neutron [req-16358759-02e0-4309-b9af-28becaa944fb req-b6d9a656-43e1-4bde-bdc3-5b8520195a19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Refreshing network info cache for port 17febd5c-1a77-4f78-a405-50d99a8ade1d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:23:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:39.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.152 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Start _get_guest_xml network_info=[{"id": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "address": "fa:16:3e:c5:8d:0c", "network": {"id": "3e2e3296-3e1f-431b-8634-816b4cd686e8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1999907489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ed0aa861fd42da90bbded63873a563", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17febd5c-1a", "ovs_interfaceid": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.158 251996 WARNING nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.163 251996 DEBUG nova.virt.libvirt.host [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.163 251996 DEBUG nova.virt.libvirt.host [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.175 251996 DEBUG nova.virt.libvirt.host [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.175 251996 DEBUG nova.virt.libvirt.host [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.178 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.178 251996 DEBUG nova.virt.hardware [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.179 251996 DEBUG nova.virt.hardware [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.179 251996 DEBUG nova.virt.hardware [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.180 251996 DEBUG nova.virt.hardware [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.180 251996 DEBUG nova.virt.hardware [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.181 251996 DEBUG nova.virt.hardware [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.181 251996 DEBUG nova.virt.hardware [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.182 251996 DEBUG nova.virt.hardware [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.183 251996 DEBUG nova.virt.hardware [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.183 251996 DEBUG nova.virt.hardware [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.184 251996 DEBUG nova.virt.hardware [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.190 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:39 compute-0 ceph-mon[74339]: pgmap v1979: 305 pgs: 305 active+clean; 205 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 4.1 MiB/s wr, 93 op/s
Dec 06 07:23:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2675889841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:23:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605953259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.625 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.668 251996 DEBUG nova.storage.rbd_utils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] rbd image cd305836-b072-4590-89c0-26c6c94a67a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:39 compute-0 nova_compute[251992]: 2025-12-06 07:23:39.672 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 260 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 822 KiB/s rd, 6.4 MiB/s wr, 134 op/s
Dec 06 07:23:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:23:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3502054305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.119 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.122 251996 DEBUG nova.virt.libvirt.vif [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:23:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-812618559',display_name='tempest-ServerMetadataTestJSON-server-812618559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-812618559',id=100,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76ed0aa861fd42da90bbded63873a563',ramdisk_id='',reservation_id='r-8dq4s9ui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-851662312',owner_user_name='tempest-ServerMetadataTestJSO
N-851662312-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:23:33Z,user_data=None,user_id='d0385d6c425a4b6ca71336ef4a2beb43',uuid=cd305836-b072-4590-89c0-26c6c94a67a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "address": "fa:16:3e:c5:8d:0c", "network": {"id": "3e2e3296-3e1f-431b-8634-816b4cd686e8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1999907489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ed0aa861fd42da90bbded63873a563", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17febd5c-1a", "ovs_interfaceid": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.123 251996 DEBUG nova.network.os_vif_util [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Converting VIF {"id": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "address": "fa:16:3e:c5:8d:0c", "network": {"id": "3e2e3296-3e1f-431b-8634-816b4cd686e8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1999907489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ed0aa861fd42da90bbded63873a563", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17febd5c-1a", "ovs_interfaceid": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.125 251996 DEBUG nova.network.os_vif_util [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:8d:0c,bridge_name='br-int',has_traffic_filtering=True,id=17febd5c-1a77-4f78-a405-50d99a8ade1d,network=Network(3e2e3296-3e1f-431b-8634-816b4cd686e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17febd5c-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.128 251996 DEBUG nova.objects.instance [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd305836-b072-4590-89c0-26c6c94a67a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.150 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:23:40 compute-0 nova_compute[251992]:   <uuid>cd305836-b072-4590-89c0-26c6c94a67a7</uuid>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   <name>instance-00000064</name>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerMetadataTestJSON-server-812618559</nova:name>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:23:39</nova:creationTime>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <nova:user uuid="d0385d6c425a4b6ca71336ef4a2beb43">tempest-ServerMetadataTestJSON-851662312-project-member</nova:user>
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <nova:project uuid="76ed0aa861fd42da90bbded63873a563">tempest-ServerMetadataTestJSON-851662312</nova:project>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <nova:port uuid="17febd5c-1a77-4f78-a405-50d99a8ade1d">
Dec 06 07:23:40 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <system>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <entry name="serial">cd305836-b072-4590-89c0-26c6c94a67a7</entry>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <entry name="uuid">cd305836-b072-4590-89c0-26c6c94a67a7</entry>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     </system>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   <os>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   </os>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   <features>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   </features>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/cd305836-b072-4590-89c0-26c6c94a67a7_disk">
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       </source>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/cd305836-b072-4590-89c0-26c6c94a67a7_disk.config">
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       </source>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:23:40 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:c5:8d:0c"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <target dev="tap17febd5c-1a"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/cd305836-b072-4590-89c0-26c6c94a67a7/console.log" append="off"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <video>
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     </video>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:23:40 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:23:40 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:23:40 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:23:40 compute-0 nova_compute[251992]: </domain>
Dec 06 07:23:40 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.152 251996 DEBUG nova.compute.manager [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Preparing to wait for external event network-vif-plugged-17febd5c-1a77-4f78-a405-50d99a8ade1d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.153 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Acquiring lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.153 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.153 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.154 251996 DEBUG nova.virt.libvirt.vif [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:23:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-812618559',display_name='tempest-ServerMetadataTestJSON-server-812618559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-812618559',id=100,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76ed0aa861fd42da90bbded63873a563',ramdisk_id='',reservation_id='r-8dq4s9ui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-851662312',owner_user_name='tempest-ServerMetad
ataTestJSON-851662312-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:23:33Z,user_data=None,user_id='d0385d6c425a4b6ca71336ef4a2beb43',uuid=cd305836-b072-4590-89c0-26c6c94a67a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "address": "fa:16:3e:c5:8d:0c", "network": {"id": "3e2e3296-3e1f-431b-8634-816b4cd686e8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1999907489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ed0aa861fd42da90bbded63873a563", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17febd5c-1a", "ovs_interfaceid": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.154 251996 DEBUG nova.network.os_vif_util [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Converting VIF {"id": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "address": "fa:16:3e:c5:8d:0c", "network": {"id": "3e2e3296-3e1f-431b-8634-816b4cd686e8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1999907489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ed0aa861fd42da90bbded63873a563", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17febd5c-1a", "ovs_interfaceid": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.155 251996 DEBUG nova.network.os_vif_util [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:8d:0c,bridge_name='br-int',has_traffic_filtering=True,id=17febd5c-1a77-4f78-a405-50d99a8ade1d,network=Network(3e2e3296-3e1f-431b-8634-816b4cd686e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17febd5c-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.155 251996 DEBUG os_vif [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:8d:0c,bridge_name='br-int',has_traffic_filtering=True,id=17febd5c-1a77-4f78-a405-50d99a8ade1d,network=Network(3e2e3296-3e1f-431b-8634-816b4cd686e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17febd5c-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.156 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.156 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.157 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.161 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.162 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17febd5c-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.164 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap17febd5c-1a, col_values=(('external_ids', {'iface-id': '17febd5c-1a77-4f78-a405-50d99a8ade1d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:8d:0c', 'vm-uuid': 'cd305836-b072-4590-89c0-26c6c94a67a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.166 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:40 compute-0 NetworkManager[48965]: <info>  [1765005820.1676] manager: (tap17febd5c-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.171 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.175 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.177 251996 INFO os_vif [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:8d:0c,bridge_name='br-int',has_traffic_filtering=True,id=17febd5c-1a77-4f78-a405-50d99a8ade1d,network=Network(3e2e3296-3e1f-431b-8634-816b4cd686e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17febd5c-1a')
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.300 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.301 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.301 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] No VIF found with MAC fa:16:3e:c5:8d:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.302 251996 INFO nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Using config drive
Dec 06 07:23:40 compute-0 nova_compute[251992]: 2025-12-06 07:23:40.330 251996 DEBUG nova.storage.rbd_utils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] rbd image cd305836-b072-4590-89c0-26c6c94a67a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:40.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:41 compute-0 nova_compute[251992]: 2025-12-06 07:23:41.069 251996 INFO nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Creating config drive at /var/lib/nova/instances/cd305836-b072-4590-89c0-26c6c94a67a7/disk.config
Dec 06 07:23:41 compute-0 nova_compute[251992]: 2025-12-06 07:23:41.075 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd305836-b072-4590-89c0-26c6c94a67a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl38icetr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2596147566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1605953259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:41 compute-0 ceph-mon[74339]: pgmap v1980: 305 pgs: 305 active+clean; 260 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 822 KiB/s rd, 6.4 MiB/s wr, 134 op/s
Dec 06 07:23:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3502054305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:41.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:41 compute-0 nova_compute[251992]: 2025-12-06 07:23:41.209 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd305836-b072-4590-89c0-26c6c94a67a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl38icetr" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:41 compute-0 nova_compute[251992]: 2025-12-06 07:23:41.244 251996 DEBUG nova.storage.rbd_utils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] rbd image cd305836-b072-4590-89c0-26c6c94a67a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:23:41 compute-0 nova_compute[251992]: 2025-12-06 07:23:41.248 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd305836-b072-4590-89c0-26c6c94a67a7/disk.config cd305836-b072-4590-89c0-26c6c94a67a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:41 compute-0 nova_compute[251992]: 2025-12-06 07:23:41.399 251996 DEBUG nova.network.neutron [req-16358759-02e0-4309-b9af-28becaa944fb req-b6d9a656-43e1-4bde-bdc3-5b8520195a19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Updated VIF entry in instance network info cache for port 17febd5c-1a77-4f78-a405-50d99a8ade1d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:23:41 compute-0 nova_compute[251992]: 2025-12-06 07:23:41.400 251996 DEBUG nova.network.neutron [req-16358759-02e0-4309-b9af-28becaa944fb req-b6d9a656-43e1-4bde-bdc3-5b8520195a19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Updating instance_info_cache with network_info: [{"id": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "address": "fa:16:3e:c5:8d:0c", "network": {"id": "3e2e3296-3e1f-431b-8634-816b4cd686e8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1999907489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ed0aa861fd42da90bbded63873a563", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17febd5c-1a", "ovs_interfaceid": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:41 compute-0 nova_compute[251992]: 2025-12-06 07:23:41.420 251996 DEBUG oslo_concurrency.lockutils [req-16358759-02e0-4309-b9af-28becaa944fb req-b6d9a656-43e1-4bde-bdc3-5b8520195a19 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-cd305836-b072-4590-89c0-26c6c94a67a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:23:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 260 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.4 MiB/s wr, 154 op/s
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.026 251996 DEBUG oslo_concurrency.processutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd305836-b072-4590-89c0-26c6c94a67a7/disk.config cd305836-b072-4590-89c0-26c6c94a67a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.778s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.027 251996 INFO nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Deleting local config drive /var/lib/nova/instances/cd305836-b072-4590-89c0-26c6c94a67a7/disk.config because it was imported into RBD.
Dec 06 07:23:42 compute-0 NetworkManager[48965]: <info>  [1765005822.0700] manager: (tap17febd5c-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/181)
Dec 06 07:23:42 compute-0 kernel: tap17febd5c-1a: entered promiscuous mode
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.072 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:42 compute-0 ovn_controller[147168]: 2025-12-06T07:23:42Z|00344|binding|INFO|Claiming lport 17febd5c-1a77-4f78-a405-50d99a8ade1d for this chassis.
Dec 06 07:23:42 compute-0 ovn_controller[147168]: 2025-12-06T07:23:42Z|00345|binding|INFO|17febd5c-1a77-4f78-a405-50d99a8ade1d: Claiming fa:16:3e:c5:8d:0c 10.100.0.13
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.080 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:8d:0c 10.100.0.13'], port_security=['fa:16:3e:c5:8d:0c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cd305836-b072-4590-89c0-26c6c94a67a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e2e3296-3e1f-431b-8634-816b4cd686e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76ed0aa861fd42da90bbded63873a563', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfdd690b-0ffd-41f2-b548-5ed3310f184a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=abe73101-9890-4e9e-b5ae-0aaee4624853, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=17febd5c-1a77-4f78-a405-50d99a8ade1d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.081 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 17febd5c-1a77-4f78-a405-50d99a8ade1d in datapath 3e2e3296-3e1f-431b-8634-816b4cd686e8 bound to our chassis
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.083 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e2e3296-3e1f-431b-8634-816b4cd686e8
Dec 06 07:23:42 compute-0 ovn_controller[147168]: 2025-12-06T07:23:42Z|00346|binding|INFO|Setting lport 17febd5c-1a77-4f78-a405-50d99a8ade1d ovn-installed in OVS
Dec 06 07:23:42 compute-0 ovn_controller[147168]: 2025-12-06T07:23:42Z|00347|binding|INFO|Setting lport 17febd5c-1a77-4f78-a405-50d99a8ade1d up in Southbound
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.091 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.092 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.096 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0039fae8-167a-4eff-8f66-090dd3521896]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.097 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3e2e3296-31 in ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.099 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3e2e3296-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.099 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[783c2d5a-f7b8-4778-8e36-3614f4f14375]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.100 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec7ff12-204e-42bf-af5f-d6761d2233bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 systemd-udevd[313702]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:23:42 compute-0 systemd-machined[212986]: New machine qemu-45-instance-00000064.
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.114 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[b2bf745b-a70a-441a-a783-48f94e39b401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000064.
Dec 06 07:23:42 compute-0 NetworkManager[48965]: <info>  [1765005822.1198] device (tap17febd5c-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:23:42 compute-0 NetworkManager[48965]: <info>  [1765005822.1204] device (tap17febd5c-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.136 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f0498588-dee0-4e65-9459-d065bc5fe2f9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.171 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[687c17b7-2239-46dc-80e6-83bfac844a08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.175 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c26a88e8-9215-4ed8-99e4-06cd4e6522b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 systemd-udevd[313707]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:23:42 compute-0 NetworkManager[48965]: <info>  [1765005822.1763] manager: (tap3e2e3296-30): new Veth device (/org/freedesktop/NetworkManager/Devices/182)
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.203 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[255190eb-48a5-4ef1-b2f9-b38a090c3683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.206 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d260113e-edf2-4f2a-86c7-bfdab9dbfd64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 NetworkManager[48965]: <info>  [1765005822.2269] device (tap3e2e3296-30): carrier: link connected
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.233 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9dffd8-a3a8-4f5c-9629-5488bcffab84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.249 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8b34125a-8c11-4f85-ac5e-ae62e98b561f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e2e3296-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:95:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609481, 'reachable_time': 41790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313735, 'error': None, 'target': 'ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.267 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[926c4cc8-d024-4f54-89ea-17865278eff2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:95eb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609481, 'tstamp': 609481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313736, 'error': None, 'target': 'ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.284 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1c2cd8-f337-454e-8ded-c168530e9a10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e2e3296-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:95:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609481, 'reachable_time': 41790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313737, 'error': None, 'target': 'ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.317 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[21ac85ae-44a4-4561-a505-a8f8b99ba453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.380 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1cff3906-2207-44f0-878b-acc68dc55474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.381 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e2e3296-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.382 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.382 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e2e3296-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:42 compute-0 NetworkManager[48965]: <info>  [1765005822.4229] manager: (tap3e2e3296-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Dec 06 07:23:42 compute-0 kernel: tap3e2e3296-30: entered promiscuous mode
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.425 251996 DEBUG nova.compute.manager [req-8f829422-1269-4306-b01c-78ab70aadf17 req-36312713-adad-48cb-b71f-4abcb97fe493 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Received event network-vif-plugged-17febd5c-1a77-4f78-a405-50d99a8ade1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.426 251996 DEBUG oslo_concurrency.lockutils [req-8f829422-1269-4306-b01c-78ab70aadf17 req-36312713-adad-48cb-b71f-4abcb97fe493 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.426 251996 DEBUG oslo_concurrency.lockutils [req-8f829422-1269-4306-b01c-78ab70aadf17 req-36312713-adad-48cb-b71f-4abcb97fe493 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.426 251996 DEBUG oslo_concurrency.lockutils [req-8f829422-1269-4306-b01c-78ab70aadf17 req-36312713-adad-48cb-b71f-4abcb97fe493 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.425 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e2e3296-30, col_values=(('external_ids', {'iface-id': '378555c2-90e8-42c5-ba26-c1bb1c1ef836'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.426 251996 DEBUG nova.compute.manager [req-8f829422-1269-4306-b01c-78ab70aadf17 req-36312713-adad-48cb-b71f-4abcb97fe493 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Processing event network-vif-plugged-17febd5c-1a77-4f78-a405-50d99a8ade1d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.427 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:42 compute-0 ovn_controller[147168]: 2025-12-06T07:23:42Z|00348|binding|INFO|Releasing lport 378555c2-90e8-42c5-ba26-c1bb1c1ef836 from this chassis (sb_readonly=0)
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.430 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e2e3296-3e1f-431b-8634-816b4cd686e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e2e3296-3e1f-431b-8634-816b4cd686e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.432 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[733315f7-798a-4bee-8ae1-0c51bc969377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.433 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-3e2e3296-3e1f-431b-8634-816b4cd686e8
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/3e2e3296-3e1f-431b-8634-816b4cd686e8.pid.haproxy
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 3e2e3296-3e1f-431b-8634-816b4cd686e8
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:23:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:42.434 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8', 'env', 'PROCESS_TAG=haproxy-3e2e3296-3e1f-431b-8634-816b4cd686e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3e2e3296-3e1f-431b-8634-816b4cd686e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.444 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/432722194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:42 compute-0 ceph-mon[74339]: pgmap v1981: 305 pgs: 305 active+clean; 260 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.4 MiB/s wr, 154 op/s
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.656 251996 DEBUG nova.compute.manager [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.657 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005822.6557152, cd305836-b072-4590-89c0-26c6c94a67a7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.658 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] VM Started (Lifecycle Event)
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.662 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.666 251996 INFO nova.virt.libvirt.driver [-] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Instance spawned successfully.
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.666 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.699 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.704 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.705 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.705 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.705 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.706 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.706 251996 DEBUG nova.virt.libvirt.driver [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.710 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.764 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.765 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005822.6558583, cd305836-b072-4590-89c0-26c6c94a67a7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.765 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] VM Paused (Lifecycle Event)
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.797 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.801 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005822.6623175, cd305836-b072-4590-89c0-26c6c94a67a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.801 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] VM Resumed (Lifecycle Event)
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.812 251996 INFO nova.compute.manager [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Took 9.05 seconds to spawn the instance on the hypervisor.
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.813 251996 DEBUG nova.compute.manager [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.826 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.829 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:23:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:42.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.867 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:23:42 compute-0 podman[313811]: 2025-12-06 07:23:42.789989707 +0000 UTC m=+0.027631885 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.925 251996 INFO nova.compute.manager [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Took 12.02 seconds to build instance.
Dec 06 07:23:42 compute-0 nova_compute[251992]: 2025-12-06 07:23:42.948 251996 DEBUG oslo_concurrency.lockutils [None req-13f2074c-e7e6-49a2-a775-a2a4b6538367 d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:23:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:23:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:23:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:23:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:23:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:23:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:43.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:43 compute-0 podman[313811]: 2025-12-06 07:23:43.310484396 +0000 UTC m=+0.548126554 container create e7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:23:43 compute-0 systemd[1]: Started libpod-conmon-e7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6.scope.
Dec 06 07:23:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:23:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30748df81a5e5634ebd867fd2d4d13016e8c5dc5f0d6217dc63f33af31db5c14/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:23:43 compute-0 podman[313811]: 2025-12-06 07:23:43.43003942 +0000 UTC m=+0.667681598 container init e7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:23:43 compute-0 podman[313811]: 2025-12-06 07:23:43.43593804 +0000 UTC m=+0.673580198 container start e7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:23:43 compute-0 neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8[313827]: [NOTICE]   (313831) : New worker (313833) forked
Dec 06 07:23:43 compute-0 neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8[313827]: [NOTICE]   (313831) : Loading success.
Dec 06 07:23:43 compute-0 nova_compute[251992]: 2025-12-06 07:23:43.866 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 291 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.9 MiB/s wr, 193 op/s
Dec 06 07:23:44 compute-0 ceph-mon[74339]: pgmap v1982: 305 pgs: 305 active+clean; 291 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.9 MiB/s wr, 193 op/s
Dec 06 07:23:44 compute-0 nova_compute[251992]: 2025-12-06 07:23:44.519 251996 DEBUG nova.compute.manager [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Received event network-vif-plugged-17febd5c-1a77-4f78-a405-50d99a8ade1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:44 compute-0 nova_compute[251992]: 2025-12-06 07:23:44.520 251996 DEBUG oslo_concurrency.lockutils [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:44 compute-0 nova_compute[251992]: 2025-12-06 07:23:44.521 251996 DEBUG oslo_concurrency.lockutils [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:44 compute-0 nova_compute[251992]: 2025-12-06 07:23:44.522 251996 DEBUG oslo_concurrency.lockutils [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:44 compute-0 nova_compute[251992]: 2025-12-06 07:23:44.523 251996 DEBUG nova.compute.manager [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] No waiting events found dispatching network-vif-plugged-17febd5c-1a77-4f78-a405-50d99a8ade1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:23:44 compute-0 nova_compute[251992]: 2025-12-06 07:23:44.523 251996 WARNING nova.compute.manager [req-ec7ac797-3ffe-4890-97d6-8d085c9251f3 req-b3bb7755-b112-4ae6-9064-6dc09ded981f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Received unexpected event network-vif-plugged-17febd5c-1a77-4f78-a405-50d99a8ade1d for instance with vm_state active and task_state None.
Dec 06 07:23:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:44.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:45.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:45 compute-0 nova_compute[251992]: 2025-12-06 07:23:45.167 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:45 compute-0 sudo[313843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:45 compute-0 sudo[313843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:45 compute-0 sudo[313843]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:45 compute-0 sudo[313868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:23:45 compute-0 sudo[313868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:23:45 compute-0 sudo[313868]: pam_unix(sudo:session): session closed for user root
Dec 06 07:23:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.6 MiB/s wr, 215 op/s
Dec 06 07:23:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:46.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:47.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:47 compute-0 ceph-mon[74339]: pgmap v1983: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.6 MiB/s wr, 215 op/s
Dec 06 07:23:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1704251606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.7 MiB/s wr, 160 op/s
Dec 06 07:23:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2919027265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:23:48 compute-0 ceph-mon[74339]: pgmap v1984: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.7 MiB/s wr, 160 op/s
Dec 06 07:23:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:48.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:48 compute-0 nova_compute[251992]: 2025-12-06 07:23:48.868 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:49.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:49 compute-0 nova_compute[251992]: 2025-12-06 07:23:49.597 251996 DEBUG oslo_concurrency.lockutils [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Acquiring lock "cd305836-b072-4590-89c0-26c6c94a67a7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:49 compute-0 nova_compute[251992]: 2025-12-06 07:23:49.598 251996 DEBUG oslo_concurrency.lockutils [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:49 compute-0 nova_compute[251992]: 2025-12-06 07:23:49.598 251996 DEBUG oslo_concurrency.lockutils [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Acquiring lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:49 compute-0 nova_compute[251992]: 2025-12-06 07:23:49.599 251996 DEBUG oslo_concurrency.lockutils [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:49 compute-0 nova_compute[251992]: 2025-12-06 07:23:49.599 251996 DEBUG oslo_concurrency.lockutils [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:49 compute-0 nova_compute[251992]: 2025-12-06 07:23:49.600 251996 INFO nova.compute.manager [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Terminating instance
Dec 06 07:23:49 compute-0 nova_compute[251992]: 2025-12-06 07:23:49.601 251996 DEBUG nova.compute.manager [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:23:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.7 MiB/s wr, 231 op/s
Dec 06 07:23:50 compute-0 kernel: tap17febd5c-1a (unregistering): left promiscuous mode
Dec 06 07:23:50 compute-0 NetworkManager[48965]: <info>  [1765005830.0251] device (tap17febd5c-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.031 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:50 compute-0 ovn_controller[147168]: 2025-12-06T07:23:50Z|00349|binding|INFO|Releasing lport 17febd5c-1a77-4f78-a405-50d99a8ade1d from this chassis (sb_readonly=0)
Dec 06 07:23:50 compute-0 ovn_controller[147168]: 2025-12-06T07:23:50Z|00350|binding|INFO|Setting lport 17febd5c-1a77-4f78-a405-50d99a8ade1d down in Southbound
Dec 06 07:23:50 compute-0 ovn_controller[147168]: 2025-12-06T07:23:50Z|00351|binding|INFO|Removing iface tap17febd5c-1a ovn-installed in OVS
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.034 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:50.045 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:8d:0c 10.100.0.13'], port_security=['fa:16:3e:c5:8d:0c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cd305836-b072-4590-89c0-26c6c94a67a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e2e3296-3e1f-431b-8634-816b4cd686e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76ed0aa861fd42da90bbded63873a563', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfdd690b-0ffd-41f2-b548-5ed3310f184a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=abe73101-9890-4e9e-b5ae-0aaee4624853, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=17febd5c-1a77-4f78-a405-50d99a8ade1d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:23:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:50.046 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 17febd5c-1a77-4f78-a405-50d99a8ade1d in datapath 3e2e3296-3e1f-431b-8634-816b4cd686e8 unbound from our chassis
Dec 06 07:23:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:50.048 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e2e3296-3e1f-431b-8634-816b4cd686e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:23:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:50.049 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[db272e46-2bb0-4e4b-96e1-698d63b89b26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:50.050 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8 namespace which is not needed anymore
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.050 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:50 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000064.scope: Deactivated successfully.
Dec 06 07:23:50 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000064.scope: Consumed 7.502s CPU time.
Dec 06 07:23:50 compute-0 systemd-machined[212986]: Machine qemu-45-instance-00000064 terminated.
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.169 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:50 compute-0 neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8[313827]: [NOTICE]   (313831) : haproxy version is 2.8.14-c23fe91
Dec 06 07:23:50 compute-0 neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8[313827]: [NOTICE]   (313831) : path to executable is /usr/sbin/haproxy
Dec 06 07:23:50 compute-0 neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8[313827]: [WARNING]  (313831) : Exiting Master process...
Dec 06 07:23:50 compute-0 neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8[313827]: [WARNING]  (313831) : Exiting Master process...
Dec 06 07:23:50 compute-0 neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8[313827]: [ALERT]    (313831) : Current worker (313833) exited with code 143 (Terminated)
Dec 06 07:23:50 compute-0 neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8[313827]: [WARNING]  (313831) : All workers exited. Exiting... (0)
Dec 06 07:23:50 compute-0 systemd[1]: libpod-e7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6.scope: Deactivated successfully.
Dec 06 07:23:50 compute-0 podman[313919]: 2025-12-06 07:23:50.182573246 +0000 UTC m=+0.043855877 container died e7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-30748df81a5e5634ebd867fd2d4d13016e8c5dc5f0d6217dc63f33af31db5c14-merged.mount: Deactivated successfully.
Dec 06 07:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6-userdata-shm.mount: Deactivated successfully.
Dec 06 07:23:50 compute-0 podman[313919]: 2025-12-06 07:23:50.217806519 +0000 UTC m=+0.079089150 container cleanup e7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:23:50 compute-0 systemd[1]: libpod-conmon-e7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6.scope: Deactivated successfully.
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.232 251996 DEBUG nova.compute.manager [req-61dfa41f-53f4-4298-b4c1-a4601fc991c2 req-220c3eb9-0a31-4b9a-a05e-1be779f9c5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Received event network-vif-unplugged-17febd5c-1a77-4f78-a405-50d99a8ade1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.234 251996 DEBUG oslo_concurrency.lockutils [req-61dfa41f-53f4-4298-b4c1-a4601fc991c2 req-220c3eb9-0a31-4b9a-a05e-1be779f9c5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.234 251996 DEBUG oslo_concurrency.lockutils [req-61dfa41f-53f4-4298-b4c1-a4601fc991c2 req-220c3eb9-0a31-4b9a-a05e-1be779f9c5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.235 251996 DEBUG oslo_concurrency.lockutils [req-61dfa41f-53f4-4298-b4c1-a4601fc991c2 req-220c3eb9-0a31-4b9a-a05e-1be779f9c5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.235 251996 DEBUG nova.compute.manager [req-61dfa41f-53f4-4298-b4c1-a4601fc991c2 req-220c3eb9-0a31-4b9a-a05e-1be779f9c5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] No waiting events found dispatching network-vif-unplugged-17febd5c-1a77-4f78-a405-50d99a8ade1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.235 251996 DEBUG nova.compute.manager [req-61dfa41f-53f4-4298-b4c1-a4601fc991c2 req-220c3eb9-0a31-4b9a-a05e-1be779f9c5b4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Received event network-vif-unplugged-17febd5c-1a77-4f78-a405-50d99a8ade1d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.239 251996 INFO nova.virt.libvirt.driver [-] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Instance destroyed successfully.
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.239 251996 DEBUG nova.objects.instance [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lazy-loading 'resources' on Instance uuid cd305836-b072-4590-89c0-26c6c94a67a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.258 251996 DEBUG nova.virt.libvirt.vif [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:23:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-812618559',display_name='tempest-ServerMetadataTestJSON-server-812618559',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-812618559',id=100,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:23:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='76ed0aa861fd42da90bbded63873a563',ramdisk_id='',reservation_id='r-8dq4s9ui',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virti
o',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-851662312',owner_user_name='tempest-ServerMetadataTestJSON-851662312-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:23:49Z,user_data=None,user_id='d0385d6c425a4b6ca71336ef4a2beb43',uuid=cd305836-b072-4590-89c0-26c6c94a67a7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "address": "fa:16:3e:c5:8d:0c", "network": {"id": "3e2e3296-3e1f-431b-8634-816b4cd686e8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1999907489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ed0aa861fd42da90bbded63873a563", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17febd5c-1a", "ovs_interfaceid": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.259 251996 DEBUG nova.network.os_vif_util [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Converting VIF {"id": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "address": "fa:16:3e:c5:8d:0c", "network": {"id": "3e2e3296-3e1f-431b-8634-816b4cd686e8", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1999907489-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ed0aa861fd42da90bbded63873a563", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17febd5c-1a", "ovs_interfaceid": "17febd5c-1a77-4f78-a405-50d99a8ade1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.260 251996 DEBUG nova.network.os_vif_util [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:8d:0c,bridge_name='br-int',has_traffic_filtering=True,id=17febd5c-1a77-4f78-a405-50d99a8ade1d,network=Network(3e2e3296-3e1f-431b-8634-816b4cd686e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17febd5c-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.261 251996 DEBUG os_vif [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:8d:0c,bridge_name='br-int',has_traffic_filtering=True,id=17febd5c-1a77-4f78-a405-50d99a8ade1d,network=Network(3e2e3296-3e1f-431b-8634-816b4cd686e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17febd5c-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.263 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.264 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17febd5c-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.266 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.268 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:23:50 compute-0 nova_compute[251992]: 2025-12-06 07:23:50.272 251996 INFO os_vif [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:8d:0c,bridge_name='br-int',has_traffic_filtering=True,id=17febd5c-1a77-4f78-a405-50d99a8ade1d,network=Network(3e2e3296-3e1f-431b-8634-816b4cd686e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17febd5c-1a')
Dec 06 07:23:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:50.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:51.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:51 compute-0 podman[313957]: 2025-12-06 07:23:51.282211264 +0000 UTC m=+1.040953717 container remove e7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:23:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:51.288 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[427c7860-593a-4c71-a84d-2a77359b3fc5]: (4, ('Sat Dec  6 07:23:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8 (e7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6)\ne7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6\nSat Dec  6 07:23:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8 (e7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6)\ne7982f5c6ce95cfa54ebea1343e7e9da0a5bfaecec0214670aaa7552a7709ed6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:51.290 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8809ab-0f2d-4fe0-9ab5-6a5be78ee1c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:51.291 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e2e3296-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:23:51 compute-0 nova_compute[251992]: 2025-12-06 07:23:51.294 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-0 kernel: tap3e2e3296-30: left promiscuous mode
Dec 06 07:23:51 compute-0 ceph-mon[74339]: pgmap v1985: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.7 MiB/s wr, 231 op/s
Dec 06 07:23:51 compute-0 nova_compute[251992]: 2025-12-06 07:23:51.309 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:51.313 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e78f7a-ef84-4f82-80de-45fa0c4b3eec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:51.333 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dba25f78-516b-463d-b56f-bcbf09e64ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:51.334 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[52759a38-53a9-4e20-93cf-cf337b86812e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:51.349 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[00c68adf-9e17-4231-8c04-b00a008a4ba2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609475, 'reachable_time': 30224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313994, 'error': None, 'target': 'ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:51.352 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3e2e3296-3e1f-431b-8634-816b4cd686e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:23:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:23:51.352 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7af23c-ae1b-4d18-a2b5-10ef4e04ed82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:23:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d3e2e3296\x2d3e1f\x2d431b\x2d8634\x2d816b4cd686e8.mount: Deactivated successfully.
Dec 06 07:23:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 307 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.3 MiB/s wr, 230 op/s
Dec 06 07:23:52 compute-0 nova_compute[251992]: 2025-12-06 07:23:52.355 251996 DEBUG nova.compute.manager [req-00067ae9-5288-4e82-af68-4a63bc16c343 req-9e001eff-eb0c-40e9-bdb9-f39c9f4be001 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Received event network-vif-plugged-17febd5c-1a77-4f78-a405-50d99a8ade1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:52 compute-0 nova_compute[251992]: 2025-12-06 07:23:52.355 251996 DEBUG oslo_concurrency.lockutils [req-00067ae9-5288-4e82-af68-4a63bc16c343 req-9e001eff-eb0c-40e9-bdb9-f39c9f4be001 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:52 compute-0 nova_compute[251992]: 2025-12-06 07:23:52.355 251996 DEBUG oslo_concurrency.lockutils [req-00067ae9-5288-4e82-af68-4a63bc16c343 req-9e001eff-eb0c-40e9-bdb9-f39c9f4be001 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:52 compute-0 nova_compute[251992]: 2025-12-06 07:23:52.355 251996 DEBUG oslo_concurrency.lockutils [req-00067ae9-5288-4e82-af68-4a63bc16c343 req-9e001eff-eb0c-40e9-bdb9-f39c9f4be001 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:52 compute-0 nova_compute[251992]: 2025-12-06 07:23:52.355 251996 DEBUG nova.compute.manager [req-00067ae9-5288-4e82-af68-4a63bc16c343 req-9e001eff-eb0c-40e9-bdb9-f39c9f4be001 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] No waiting events found dispatching network-vif-plugged-17febd5c-1a77-4f78-a405-50d99a8ade1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:23:52 compute-0 nova_compute[251992]: 2025-12-06 07:23:52.356 251996 WARNING nova.compute.manager [req-00067ae9-5288-4e82-af68-4a63bc16c343 req-9e001eff-eb0c-40e9-bdb9-f39c9f4be001 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Received unexpected event network-vif-plugged-17febd5c-1a77-4f78-a405-50d99a8ade1d for instance with vm_state active and task_state deleting.
Dec 06 07:23:52 compute-0 podman[313996]: 2025-12-06 07:23:52.430245052 +0000 UTC m=+0.082670517 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:23:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:52.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:52 compute-0 ceph-mon[74339]: pgmap v1986: 305 pgs: 305 active+clean; 307 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.3 MiB/s wr, 230 op/s
Dec 06 07:23:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:53.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:53 compute-0 nova_compute[251992]: 2025-12-06 07:23:53.868 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 295 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.4 MiB/s wr, 259 op/s
Dec 06 07:23:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:54.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:23:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:55.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:23:55 compute-0 nova_compute[251992]: 2025-12-06 07:23:55.267 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:55 compute-0 nova_compute[251992]: 2025-12-06 07:23:55.886 251996 INFO nova.virt.libvirt.driver [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Deleting instance files /var/lib/nova/instances/cd305836-b072-4590-89c0-26c6c94a67a7_del
Dec 06 07:23:55 compute-0 nova_compute[251992]: 2025-12-06 07:23:55.887 251996 INFO nova.virt.libvirt.driver [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Deletion of /var/lib/nova/instances/cd305836-b072-4590-89c0-26c6c94a67a7_del complete
Dec 06 07:23:55 compute-0 nova_compute[251992]: 2025-12-06 07:23:55.940 251996 INFO nova.compute.manager [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Took 6.34 seconds to destroy the instance on the hypervisor.
Dec 06 07:23:55 compute-0 nova_compute[251992]: 2025-12-06 07:23:55.940 251996 DEBUG oslo.service.loopingcall [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:23:55 compute-0 nova_compute[251992]: 2025-12-06 07:23:55.940 251996 DEBUG nova.compute.manager [-] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:23:55 compute-0 nova_compute[251992]: 2025-12-06 07:23:55.941 251996 DEBUG nova.network.neutron [-] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:23:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 289 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.6 MiB/s wr, 273 op/s
Dec 06 07:23:56 compute-0 ceph-mon[74339]: pgmap v1987: 305 pgs: 305 active+clean; 295 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.4 MiB/s wr, 259 op/s
Dec 06 07:23:56 compute-0 podman[314026]: 2025-12-06 07:23:56.388247656 +0000 UTC m=+0.048307980 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 07:23:56 compute-0 podman[314027]: 2025-12-06 07:23:56.419042586 +0000 UTC m=+0.076137839 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:23:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:56.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:57.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:57 compute-0 ceph-mon[74339]: pgmap v1988: 305 pgs: 305 active+clean; 289 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.6 MiB/s wr, 273 op/s
Dec 06 07:23:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 289 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Dec 06 07:23:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:23:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:23:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:23:58.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:23:58 compute-0 nova_compute[251992]: 2025-12-06 07:23:58.870 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.013 251996 DEBUG nova.network.neutron [-] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.030 251996 INFO nova.compute.manager [-] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Took 3.09 seconds to deallocate network for instance.
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.074 251996 DEBUG oslo_concurrency.lockutils [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.075 251996 DEBUG oslo_concurrency.lockutils [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.139 251996 DEBUG nova.compute.manager [req-f444baf9-5f64-4c9f-b169-0fbd884435b8 req-d4ff889f-1122-4f79-b7ac-b4cdfa08f4f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Received event network-vif-deleted-17febd5c-1a77-4f78-a405-50d99a8ade1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.167 251996 DEBUG nova.scheduler.client.report [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:23:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:23:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:23:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:23:59.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.253 251996 DEBUG nova.scheduler.client.report [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.253 251996 DEBUG nova.compute.provider_tree [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.324 251996 DEBUG nova.scheduler.client.report [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.347 251996 DEBUG nova.scheduler.client.report [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.407 251996 DEBUG oslo_concurrency.processutils [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:23:59 compute-0 ceph-mon[74339]: pgmap v1989: 305 pgs: 305 active+clean; 289 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.1 MiB/s wr, 226 op/s
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.707 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:23:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:23:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2136141956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.842 251996 DEBUG oslo_concurrency.processutils [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.854 251996 DEBUG nova.compute.provider_tree [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.869 251996 DEBUG nova.scheduler.client.report [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.887 251996 DEBUG oslo_concurrency.lockutils [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.925 251996 INFO nova.scheduler.client.report [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Deleted allocations for instance cd305836-b072-4590-89c0-26c6c94a67a7
Dec 06 07:23:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 293 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.1 MiB/s wr, 266 op/s
Dec 06 07:23:59 compute-0 nova_compute[251992]: 2025-12-06 07:23:59.992 251996 DEBUG oslo_concurrency.lockutils [None req-ae4df74e-ca07-4312-89bb-f26a6d7235ff d0385d6c425a4b6ca71336ef4a2beb43 76ed0aa861fd42da90bbded63873a563 - - default default] Lock "cd305836-b072-4590-89c0-26c6c94a67a7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:00 compute-0 nova_compute[251992]: 2025-12-06 07:24:00.270 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:00 compute-0 nova_compute[251992]: 2025-12-06 07:24:00.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:00 compute-0 nova_compute[251992]: 2025-12-06 07:24:00.686 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:00 compute-0 nova_compute[251992]: 2025-12-06 07:24:00.686 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:00 compute-0 nova_compute[251992]: 2025-12-06 07:24:00.686 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:00 compute-0 nova_compute[251992]: 2025-12-06 07:24:00.687 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:24:00 compute-0 nova_compute[251992]: 2025-12-06 07:24:00.687 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:00.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2136141956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:00 compute-0 ceph-mon[74339]: pgmap v1990: 305 pgs: 305 active+clean; 293 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.1 MiB/s wr, 266 op/s
Dec 06 07:24:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1495295474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:24:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3617914429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:01 compute-0 nova_compute[251992]: 2025-12-06 07:24:01.108 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:01.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:01 compute-0 nova_compute[251992]: 2025-12-06 07:24:01.329 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:24:01 compute-0 nova_compute[251992]: 2025-12-06 07:24:01.329 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:24:01 compute-0 nova_compute[251992]: 2025-12-06 07:24:01.496 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:24:01 compute-0 nova_compute[251992]: 2025-12-06 07:24:01.497 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4295MB free_disk=20.855472564697266GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:24:01 compute-0 nova_compute[251992]: 2025-12-06 07:24:01.497 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:01 compute-0 nova_compute[251992]: 2025-12-06 07:24:01.497 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:01 compute-0 nova_compute[251992]: 2025-12-06 07:24:01.560 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 00f56c62-f327-41e3-a105-24f56ae124c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:24:01 compute-0 nova_compute[251992]: 2025-12-06 07:24:01.560 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:24:01 compute-0 nova_compute[251992]: 2025-12-06 07:24:01.561 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:24:01 compute-0 nova_compute[251992]: 2025-12-06 07:24:01.597 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 309 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.5 MiB/s wr, 213 op/s
Dec 06 07:24:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:24:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1586755400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:02 compute-0 nova_compute[251992]: 2025-12-06 07:24:02.085 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:02 compute-0 nova_compute[251992]: 2025-12-06 07:24:02.090 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:24:02 compute-0 nova_compute[251992]: 2025-12-06 07:24:02.110 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:24:02 compute-0 nova_compute[251992]: 2025-12-06 07:24:02.133 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:24:02 compute-0 nova_compute[251992]: 2025-12-06 07:24:02.134 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3617914429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:02 compute-0 nova_compute[251992]: 2025-12-06 07:24:02.265 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:02 compute-0 nova_compute[251992]: 2025-12-06 07:24:02.266 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:02 compute-0 nova_compute[251992]: 2025-12-06 07:24:02.266 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:02 compute-0 nova_compute[251992]: 2025-12-06 07:24:02.286 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Triggering sync for uuid 00f56c62-f327-41e3-a105-24f56ae124c0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:24:02 compute-0 nova_compute[251992]: 2025-12-06 07:24:02.286 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "00f56c62-f327-41e3-a105-24f56ae124c0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:02 compute-0 nova_compute[251992]: 2025-12-06 07:24:02.287 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "00f56c62-f327-41e3-a105-24f56ae124c0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:02 compute-0 nova_compute[251992]: 2025-12-06 07:24:02.320 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "00f56c62-f327-41e3-a105-24f56ae124c0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:02.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:03.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:03 compute-0 ceph-mon[74339]: pgmap v1991: 305 pgs: 305 active+clean; 309 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.5 MiB/s wr, 213 op/s
Dec 06 07:24:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1586755400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2011446636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:03.830 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:03.831 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:03.832 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:03 compute-0 nova_compute[251992]: 2025-12-06 07:24:03.881 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 272 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.7 MiB/s wr, 216 op/s
Dec 06 07:24:04 compute-0 ceph-mon[74339]: pgmap v1992: 305 pgs: 305 active+clean; 272 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.7 MiB/s wr, 216 op/s
Dec 06 07:24:04 compute-0 nova_compute[251992]: 2025-12-06 07:24:04.678 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:04 compute-0 nova_compute[251992]: 2025-12-06 07:24:04.678 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:04.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:05 compute-0 ovn_controller[147168]: 2025-12-06T07:24:05Z|00352|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:24:05 compute-0 nova_compute[251992]: 2025-12-06 07:24:05.172 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:05.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:05 compute-0 nova_compute[251992]: 2025-12-06 07:24:05.237 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005830.233087, cd305836-b072-4590-89c0-26c6c94a67a7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:24:05 compute-0 nova_compute[251992]: 2025-12-06 07:24:05.237 251996 INFO nova.compute.manager [-] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] VM Stopped (Lifecycle Event)
Dec 06 07:24:05 compute-0 nova_compute[251992]: 2025-12-06 07:24:05.260 251996 DEBUG nova.compute.manager [None req-7c7430ad-f428-4984-8f13-aa45e67098ec - - - - - -] [instance: cd305836-b072-4590-89c0-26c6c94a67a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:05 compute-0 nova_compute[251992]: 2025-12-06 07:24:05.272 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:05 compute-0 nova_compute[251992]: 2025-12-06 07:24:05.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:05 compute-0 sudo[314138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:05 compute-0 sudo[314138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:05 compute-0 sudo[314138]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:05 compute-0 sudo[314163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:05 compute-0 sudo[314163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:05 compute-0 sudo[314163]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 246 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.0 MiB/s wr, 179 op/s
Dec 06 07:24:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:06.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:07.086 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:07 compute-0 nova_compute[251992]: 2025-12-06 07:24:07.086 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:07.087 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:24:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:07.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:07 compute-0 ceph-mon[74339]: pgmap v1993: 305 pgs: 305 active+clean; 246 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.0 MiB/s wr, 179 op/s
Dec 06 07:24:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 246 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 122 op/s
Dec 06 07:24:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:24:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3540164652' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:24:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:24:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3540164652' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:24:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:08.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:08 compute-0 nova_compute[251992]: 2025-12-06 07:24:08.884 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:09.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:09 compute-0 ceph-mon[74339]: pgmap v1994: 305 pgs: 305 active+clean; 246 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 122 op/s
Dec 06 07:24:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/637376290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3540164652' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:24:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3540164652' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:24:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 269 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 157 op/s
Dec 06 07:24:10 compute-0 nova_compute[251992]: 2025-12-06 07:24:10.276 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/806788559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:10 compute-0 ceph-mon[74339]: pgmap v1995: 305 pgs: 305 active+clean; 269 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 157 op/s
Dec 06 07:24:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:10.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:11.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1136767738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1705575113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 291 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.0 MiB/s wr, 140 op/s
Dec 06 07:24:12 compute-0 nova_compute[251992]: 2025-12-06 07:24:12.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:12 compute-0 nova_compute[251992]: 2025-12-06 07:24:12.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:24:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:12.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:12 compute-0 ceph-mon[74339]: pgmap v1996: 305 pgs: 305 active+clean; 291 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.0 MiB/s wr, 140 op/s
Dec 06 07:24:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:24:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:24:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:24:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:24:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:24:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:24:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:13.089 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:13.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:13 compute-0 nova_compute[251992]: 2025-12-06 07:24:13.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:24:13 compute-0 nova_compute[251992]: 2025-12-06 07:24:13.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:24:13 compute-0 nova_compute[251992]: 2025-12-06 07:24:13.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:24:13 compute-0 nova_compute[251992]: 2025-12-06 07:24:13.885 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 305 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 139 op/s
Dec 06 07:24:14 compute-0 nova_compute[251992]: 2025-12-06 07:24:14.084 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:24:14 compute-0 nova_compute[251992]: 2025-12-06 07:24:14.084 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:24:14 compute-0 nova_compute[251992]: 2025-12-06 07:24:14.084 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:24:14 compute-0 nova_compute[251992]: 2025-12-06 07:24:14.085 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 00f56c62-f327-41e3-a105-24f56ae124c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:24:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:14.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:24:15 compute-0 ceph-mon[74339]: pgmap v1997: 305 pgs: 305 active+clean; 305 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.5 MiB/s wr, 139 op/s
Dec 06 07:24:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:15.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:15 compute-0 nova_compute[251992]: 2025-12-06 07:24:15.278 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 326 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 142 op/s
Dec 06 07:24:16 compute-0 nova_compute[251992]: 2025-12-06 07:24:16.005 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updating instance_info_cache with network_info: [{"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:24:16 compute-0 nova_compute[251992]: 2025-12-06 07:24:16.017 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:24:16 compute-0 nova_compute[251992]: 2025-12-06 07:24:16.018 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:24:16 compute-0 nova_compute[251992]: 2025-12-06 07:24:16.131 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:16 compute-0 ceph-mon[74339]: pgmap v1998: 305 pgs: 305 active+clean; 326 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 142 op/s
Dec 06 07:24:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:16.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:17.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 326 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 116 op/s
Dec 06 07:24:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:24:18
Dec 06 07:24:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:24:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:24:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'images', 'backups']
Dec 06 07:24:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:24:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:18.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:18 compute-0 nova_compute[251992]: 2025-12-06 07:24:18.887 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:19 compute-0 ceph-mon[74339]: pgmap v1999: 305 pgs: 305 active+clean; 326 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 116 op/s
Dec 06 07:24:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:19.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 293 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 172 op/s
Dec 06 07:24:20 compute-0 ceph-mon[74339]: pgmap v2000: 305 pgs: 305 active+clean; 293 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 172 op/s
Dec 06 07:24:20 compute-0 nova_compute[251992]: 2025-12-06 07:24:20.280 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:20.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:21.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:21 compute-0 nova_compute[251992]: 2025-12-06 07:24:21.590 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:21 compute-0 nova_compute[251992]: 2025-12-06 07:24:21.590 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:21 compute-0 nova_compute[251992]: 2025-12-06 07:24:21.610 251996 DEBUG nova.compute.manager [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:24:21 compute-0 nova_compute[251992]: 2025-12-06 07:24:21.690 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:21 compute-0 nova_compute[251992]: 2025-12-06 07:24:21.697 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:21 compute-0 nova_compute[251992]: 2025-12-06 07:24:21.698 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:21 compute-0 nova_compute[251992]: 2025-12-06 07:24:21.706 251996 DEBUG nova.virt.hardware [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:24:21 compute-0 nova_compute[251992]: 2025-12-06 07:24:21.706 251996 INFO nova.compute.claims [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:24:21 compute-0 nova_compute[251992]: 2025-12-06 07:24:21.898 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 229 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 177 op/s
Dec 06 07:24:22 compute-0 ceph-mon[74339]: pgmap v2001: 305 pgs: 305 active+clean; 229 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 177 op/s
Dec 06 07:24:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:24:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3871687488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.397 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.403 251996 DEBUG nova.compute.provider_tree [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.427 251996 DEBUG nova.scheduler.client.report [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.448 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.449 251996 DEBUG nova.compute.manager [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.516 251996 DEBUG nova.compute.manager [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.517 251996 DEBUG nova.network.neutron [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.540 251996 INFO nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.562 251996 DEBUG nova.compute.manager [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.665 251996 DEBUG nova.compute.manager [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.667 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.667 251996 INFO nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Creating image(s)
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.692 251996 DEBUG nova.storage.rbd_utils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.716 251996 DEBUG nova.storage.rbd_utils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.741 251996 DEBUG nova.storage.rbd_utils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.745 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.809 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.810 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.811 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.811 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.838 251996 DEBUG nova.storage.rbd_utils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.843 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:22.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:22 compute-0 nova_compute[251992]: 2025-12-06 07:24:22.881 251996 DEBUG nova.policy [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd67c136e82ad4001b000848d75eef50d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88f5b34244614321a9b6e902eaba0ece', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:24:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:23.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3871687488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2161360991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:24:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:24:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:24:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:24:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:24:23 compute-0 podman[314313]: 2025-12-06 07:24:23.454422275 +0000 UTC m=+0.113032517 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:24:23 compute-0 nova_compute[251992]: 2025-12-06 07:24:23.747 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.904s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:23 compute-0 nova_compute[251992]: 2025-12-06 07:24:23.816 251996 DEBUG nova.storage.rbd_utils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] resizing rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:24:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 220 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 155 op/s
Dec 06 07:24:24 compute-0 nova_compute[251992]: 2025-12-06 07:24:24.041 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1819027873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:24 compute-0 ceph-mon[74339]: pgmap v2002: 305 pgs: 305 active+clean; 220 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 155 op/s
Dec 06 07:24:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:24:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:24:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:24:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:24:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:24:24 compute-0 nova_compute[251992]: 2025-12-06 07:24:24.577 251996 DEBUG nova.objects.instance [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'migration_context' on Instance uuid 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:24 compute-0 nova_compute[251992]: 2025-12-06 07:24:24.750 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:24:24 compute-0 nova_compute[251992]: 2025-12-06 07:24:24.751 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Ensure instance console log exists: /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:24:24 compute-0 nova_compute[251992]: 2025-12-06 07:24:24.752 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:24 compute-0 nova_compute[251992]: 2025-12-06 07:24:24.752 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:24 compute-0 nova_compute[251992]: 2025-12-06 07:24:24.752 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:24.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:25 compute-0 nova_compute[251992]: 2025-12-06 07:24:25.190 251996 DEBUG nova.network.neutron [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Successfully created port: 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:24:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:25.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:25 compute-0 nova_compute[251992]: 2025-12-06 07:24:25.283 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0049825952680067 of space, bias 1.0, pg target 1.4947785804020102 quantized to 32 (current 32)
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:24:25 compute-0 sudo[314412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:25 compute-0 sudo[314412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:25 compute-0 sudo[314412]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 234 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 184 op/s
Dec 06 07:24:25 compute-0 sudo[314437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:25 compute-0 sudo[314437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:25 compute-0 sudo[314437]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:26 compute-0 ceph-mon[74339]: pgmap v2003: 305 pgs: 305 active+clean; 234 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 184 op/s
Dec 06 07:24:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:26.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:27.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:27 compute-0 podman[314463]: 2025-12-06 07:24:27.385891703 +0000 UTC m=+0.049607464 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 06 07:24:27 compute-0 podman[314464]: 2025-12-06 07:24:27.392125694 +0000 UTC m=+0.054481328 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 07:24:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 234 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 144 op/s
Dec 06 07:24:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:28 compute-0 nova_compute[251992]: 2025-12-06 07:24:28.539 251996 DEBUG nova.network.neutron [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Successfully updated port: 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:24:28 compute-0 nova_compute[251992]: 2025-12-06 07:24:28.558 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "refresh_cache-2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:24:28 compute-0 nova_compute[251992]: 2025-12-06 07:24:28.558 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquired lock "refresh_cache-2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:24:28 compute-0 nova_compute[251992]: 2025-12-06 07:24:28.558 251996 DEBUG nova.network.neutron [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:24:28 compute-0 nova_compute[251992]: 2025-12-06 07:24:28.627 251996 DEBUG nova.compute.manager [req-6043719d-09d5-418b-8d7a-d408e73546ff req-a825028d-b087-4f07-b517-11f130d74925 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-changed-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:28 compute-0 nova_compute[251992]: 2025-12-06 07:24:28.628 251996 DEBUG nova.compute.manager [req-6043719d-09d5-418b-8d7a-d408e73546ff req-a825028d-b087-4f07-b517-11f130d74925 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Refreshing instance network info cache due to event network-changed-43f29a7e-fdfe-4bc7-b164-e60a29234bc2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:24:28 compute-0 nova_compute[251992]: 2025-12-06 07:24:28.628 251996 DEBUG oslo_concurrency.lockutils [req-6043719d-09d5-418b-8d7a-d408e73546ff req-a825028d-b087-4f07-b517-11f130d74925 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:24:28 compute-0 nova_compute[251992]: 2025-12-06 07:24:28.703 251996 DEBUG nova.network.neutron [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:24:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:28.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:28 compute-0 nova_compute[251992]: 2025-12-06 07:24:28.889 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:29 compute-0 ceph-mon[74339]: pgmap v2004: 305 pgs: 305 active+clean; 234 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 144 op/s
Dec 06 07:24:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2549017799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:29.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 193 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.045 251996 DEBUG nova.network.neutron [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Updating instance_info_cache with network_info: [{"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.081 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Releasing lock "refresh_cache-2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.081 251996 DEBUG nova.compute.manager [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Instance network_info: |[{"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.082 251996 DEBUG oslo_concurrency.lockutils [req-6043719d-09d5-418b-8d7a-d408e73546ff req-a825028d-b087-4f07-b517-11f130d74925 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.082 251996 DEBUG nova.network.neutron [req-6043719d-09d5-418b-8d7a-d408e73546ff req-a825028d-b087-4f07-b517-11f130d74925 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Refreshing network info cache for port 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.084 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Start _get_guest_xml network_info=[{"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.091 251996 WARNING nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.095 251996 DEBUG nova.virt.libvirt.host [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.095 251996 DEBUG nova.virt.libvirt.host [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.102 251996 DEBUG nova.virt.libvirt.host [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.103 251996 DEBUG nova.virt.libvirt.host [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.104 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.104 251996 DEBUG nova.virt.hardware [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.105 251996 DEBUG nova.virt.hardware [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.105 251996 DEBUG nova.virt.hardware [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.105 251996 DEBUG nova.virt.hardware [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.106 251996 DEBUG nova.virt.hardware [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.106 251996 DEBUG nova.virt.hardware [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.106 251996 DEBUG nova.virt.hardware [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.106 251996 DEBUG nova.virt.hardware [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.106 251996 DEBUG nova.virt.hardware [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.107 251996 DEBUG nova.virt.hardware [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.107 251996 DEBUG nova.virt.hardware [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.111 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.287 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/472744709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:30 compute-0 ceph-mon[74339]: pgmap v2005: 305 pgs: 305 active+clean; 193 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Dec 06 07:24:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:24:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1738121377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.565 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.593 251996 DEBUG nova.storage.rbd_utils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:30 compute-0 nova_compute[251992]: 2025-12-06 07:24:30.597 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:30.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:24:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531532766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.043 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.045 251996 DEBUG nova.virt.libvirt.vif [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:24:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-57853796',display_name='tempest-ServerDiskConfigTestJSON-server-57853796',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-57853796',id=102,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-psuria6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfig
TestJSON-749654875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:22Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.045 251996 DEBUG nova.network.os_vif_util [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.046 251996 DEBUG nova.network.os_vif_util [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.047 251996 DEBUG nova.objects.instance [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.089 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:24:31 compute-0 nova_compute[251992]:   <uuid>2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8</uuid>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   <name>instance-00000066</name>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-57853796</nova:name>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:24:30</nova:creationTime>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <nova:user uuid="d67c136e82ad4001b000848d75eef50d">tempest-ServerDiskConfigTestJSON-749654875-project-member</nova:user>
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <nova:project uuid="88f5b34244614321a9b6e902eaba0ece">tempest-ServerDiskConfigTestJSON-749654875</nova:project>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <nova:port uuid="43f29a7e-fdfe-4bc7-b164-e60a29234bc2">
Dec 06 07:24:31 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <system>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <entry name="serial">2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8</entry>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <entry name="uuid">2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8</entry>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     </system>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   <os>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   </os>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   <features>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   </features>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk">
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       </source>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk.config">
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       </source>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:24:31 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:ee:e8:fd"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <target dev="tap43f29a7e-fd"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/console.log" append="off"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <video>
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     </video>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:24:31 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:24:31 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:24:31 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:24:31 compute-0 nova_compute[251992]: </domain>
Dec 06 07:24:31 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.090 251996 DEBUG nova.compute.manager [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Preparing to wait for external event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.091 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.091 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.091 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.092 251996 DEBUG nova.virt.libvirt.vif [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:24:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-57853796',display_name='tempest-ServerDiskConfigTestJSON-server-57853796',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-57853796',id=102,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-psuria6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:22Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.093 251996 DEBUG nova.network.os_vif_util [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.093 251996 DEBUG nova.network.os_vif_util [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.094 251996 DEBUG os_vif [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.095 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.095 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.096 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.100 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.100 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43f29a7e-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.100 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43f29a7e-fd, col_values=(('external_ids', {'iface-id': '43f29a7e-fdfe-4bc7-b164-e60a29234bc2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:e8:fd', 'vm-uuid': '2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.102 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:31 compute-0 NetworkManager[48965]: <info>  [1765005871.1032] manager: (tap43f29a7e-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.104 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.108 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.109 251996 INFO os_vif [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd')
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.167 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.168 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.168 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No VIF found with MAC fa:16:3e:ee:e8:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.168 251996 INFO nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Using config drive
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.196 251996 DEBUG nova.storage.rbd_utils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:31.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.934 251996 DEBUG nova.network.neutron [req-6043719d-09d5-418b-8d7a-d408e73546ff req-a825028d-b087-4f07-b517-11f130d74925 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Updated VIF entry in instance network info cache for port 43f29a7e-fdfe-4bc7-b164-e60a29234bc2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.934 251996 DEBUG nova.network.neutron [req-6043719d-09d5-418b-8d7a-d408e73546ff req-a825028d-b087-4f07-b517-11f130d74925 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Updating instance_info_cache with network_info: [{"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:24:31 compute-0 nova_compute[251992]: 2025-12-06 07:24:31.952 251996 DEBUG oslo_concurrency.lockutils [req-6043719d-09d5-418b-8d7a-d408e73546ff req-a825028d-b087-4f07-b517-11f130d74925 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:24:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 167 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 519 KiB/s rd, 1.8 MiB/s wr, 117 op/s
Dec 06 07:24:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1738121377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/531532766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.109 251996 INFO nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Creating config drive at /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/disk.config
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.115 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2d3yp6gw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.260 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2d3yp6gw" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.295 251996 DEBUG nova.storage.rbd_utils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.298 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/disk.config 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.625 251996 DEBUG oslo_concurrency.processutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/disk.config 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.626 251996 INFO nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Deleting local config drive /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/disk.config because it was imported into RBD.
Dec 06 07:24:32 compute-0 kernel: tap43f29a7e-fd: entered promiscuous mode
Dec 06 07:24:32 compute-0 NetworkManager[48965]: <info>  [1765005872.6750] manager: (tap43f29a7e-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.675 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:32 compute-0 ovn_controller[147168]: 2025-12-06T07:24:32Z|00353|binding|INFO|Claiming lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for this chassis.
Dec 06 07:24:32 compute-0 ovn_controller[147168]: 2025-12-06T07:24:32Z|00354|binding|INFO|43f29a7e-fdfe-4bc7-b164-e60a29234bc2: Claiming fa:16:3e:ee:e8:fd 10.100.0.6
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.682 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:e8:fd 10.100.0.6'], port_security=['fa:16:3e:ee:e8:fd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '2', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=43f29a7e-fdfe-4bc7-b164-e60a29234bc2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.684 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a bound to our chassis
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.686 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:24:32 compute-0 ovn_controller[147168]: 2025-12-06T07:24:32Z|00355|binding|INFO|Setting lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 ovn-installed in OVS
Dec 06 07:24:32 compute-0 ovn_controller[147168]: 2025-12-06T07:24:32Z|00356|binding|INFO|Setting lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 up in Southbound
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.693 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.695 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.699 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[605046e7-2d2f-42ea-bb52-2ffcb9659e89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.701 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c014e4e-a1 in ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.704 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c014e4e-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.704 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ca55109a-4d42-4482-96f0-196d6c5a06dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.705 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f427d4da-2f9a-4ffb-ac33-d92b0c24e70e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 systemd-machined[212986]: New machine qemu-46-instance-00000066.
Dec 06 07:24:32 compute-0 systemd-udevd[314638]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.720 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[b9164463-b11e-46ed-b1ac-c7b3fcff7cb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-00000066.
Dec 06 07:24:32 compute-0 NetworkManager[48965]: <info>  [1765005872.7278] device (tap43f29a7e-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:24:32 compute-0 NetworkManager[48965]: <info>  [1765005872.7288] device (tap43f29a7e-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.736 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[227f147b-0dd1-480f-934a-15212e858d4e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.768 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e57af911-e55c-4cff-a8dd-b4ad57a8c1cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.774 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f91cdaa2-7575-41ee-b93b-866e2469bfb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 NetworkManager[48965]: <info>  [1765005872.7753] manager: (tap7c014e4e-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/186)
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.803 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[64d63367-8e2b-4ce2-89ab-d39f7608ad11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.807 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[39cc7afc-b3be-4dd1-928c-1dcc2fec4607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 NetworkManager[48965]: <info>  [1765005872.8307] device (tap7c014e4e-a0): carrier: link connected
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.839 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[86e75699-45d3-492b-81bc-b2e1dadc25d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.857 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1ad267-3ee1-4e59-9563-ef3e0e2b02fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614541, 'reachable_time': 17460, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314669, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.877 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[accfbea7-a346-45da-a837-12623ab384e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:141c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 614541, 'tstamp': 614541}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314670, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:32.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.896 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7d95616c-0c49-494e-baf3-7792288f2ba5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614541, 'reachable_time': 17460, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314671, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.927 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6616d08e-a292-497a-b77a-876b325c73cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.986 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cfee50c1-60f5-478a-82e6-95191d4da726]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.988 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.988 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.988 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c014e4e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.990 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:32 compute-0 NetworkManager[48965]: <info>  [1765005872.9914] manager: (tap7c014e4e-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Dec 06 07:24:32 compute-0 kernel: tap7c014e4e-a0: entered promiscuous mode
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.993 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:32.994 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c014e4e-a0, col_values=(('external_ids', {'iface-id': 'd8dd1a7d-045a-42a3-8829-567c43985ae0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:32 compute-0 nova_compute[251992]: 2025-12-06 07:24:32.995 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:32 compute-0 ovn_controller[147168]: 2025-12-06T07:24:32Z|00357|binding|INFO|Releasing lport d8dd1a7d-045a-42a3-8829-567c43985ae0 from this chassis (sb_readonly=0)
Dec 06 07:24:33 compute-0 nova_compute[251992]: 2025-12-06 07:24:33.017 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:33.019 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:33.020 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2861d6c6-9867-417e-a469-047fecc3dba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:33.021 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:24:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:33.022 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'env', 'PROCESS_TAG=haproxy-7c014e4e-a182-4f60-8285-20525bc99e5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c014e4e-a182-4f60-8285-20525bc99e5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:24:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:33.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:33 compute-0 podman[314721]: 2025-12-06 07:24:33.373184022 +0000 UTC m=+0.042592613 container create 3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:24:33 compute-0 nova_compute[251992]: 2025-12-06 07:24:33.394 251996 DEBUG nova.compute.manager [req-3fcc58cb-fb35-42d1-8e62-fd5747f1d53c req-0241e38c-f3a6-40b9-8fba-c91b98485c27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:33 compute-0 nova_compute[251992]: 2025-12-06 07:24:33.394 251996 DEBUG oslo_concurrency.lockutils [req-3fcc58cb-fb35-42d1-8e62-fd5747f1d53c req-0241e38c-f3a6-40b9-8fba-c91b98485c27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:33 compute-0 nova_compute[251992]: 2025-12-06 07:24:33.395 251996 DEBUG oslo_concurrency.lockutils [req-3fcc58cb-fb35-42d1-8e62-fd5747f1d53c req-0241e38c-f3a6-40b9-8fba-c91b98485c27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:33 compute-0 nova_compute[251992]: 2025-12-06 07:24:33.395 251996 DEBUG oslo_concurrency.lockutils [req-3fcc58cb-fb35-42d1-8e62-fd5747f1d53c req-0241e38c-f3a6-40b9-8fba-c91b98485c27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:33 compute-0 nova_compute[251992]: 2025-12-06 07:24:33.395 251996 DEBUG nova.compute.manager [req-3fcc58cb-fb35-42d1-8e62-fd5747f1d53c req-0241e38c-f3a6-40b9-8fba-c91b98485c27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Processing event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:24:33 compute-0 systemd[1]: Started libpod-conmon-3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528.scope.
Dec 06 07:24:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:24:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28c66720511eee7691329e0a4341946cc945b9b50c2c18ac30c41c00b5b50df0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:33 compute-0 podman[314721]: 2025-12-06 07:24:33.351898751 +0000 UTC m=+0.021307372 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:24:33 compute-0 podman[314721]: 2025-12-06 07:24:33.449852555 +0000 UTC m=+0.119261166 container init 3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 07:24:33 compute-0 podman[314721]: 2025-12-06 07:24:33.455278193 +0000 UTC m=+0.124686784 container start 3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:24:33 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[314736]: [NOTICE]   (314740) : New worker (314742) forked
Dec 06 07:24:33 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[314736]: [NOTICE]   (314740) : Loading success.
Dec 06 07:24:33 compute-0 ceph-mon[74339]: pgmap v2006: 305 pgs: 305 active+clean; 167 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 519 KiB/s rd, 1.8 MiB/s wr, 117 op/s
Dec 06 07:24:33 compute-0 nova_compute[251992]: 2025-12-06 07:24:33.891 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 187 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 2.9 MiB/s wr, 89 op/s
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.405 251996 DEBUG nova.compute.manager [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.406 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005874.4052763, 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.407 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] VM Started (Lifecycle Event)
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.409 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.412 251996 INFO nova.virt.libvirt.driver [-] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Instance spawned successfully.
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.413 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.441 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.445 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.446 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.447 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.447 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.448 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.448 251996 DEBUG nova.virt.libvirt.driver [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.452 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.503 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.503 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005874.4065545, 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.504 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] VM Paused (Lifecycle Event)
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.528 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.530 251996 INFO nova.compute.manager [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Took 11.86 seconds to spawn the instance on the hypervisor.
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.530 251996 DEBUG nova.compute.manager [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.533 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005874.4085822, 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.534 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] VM Resumed (Lifecycle Event)
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.565 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.569 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.595 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.606 251996 INFO nova.compute.manager [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Took 12.95 seconds to build instance.
Dec 06 07:24:34 compute-0 nova_compute[251992]: 2025-12-06 07:24:34.622 251996 DEBUG oslo_concurrency.lockutils [None req-7a05c5b0-7922-42a2-bd0d-cd136b3f42ca d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:34 compute-0 sudo[314777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:34 compute-0 sudo[314777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:34 compute-0 sudo[314777]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:34.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:34 compute-0 sudo[314802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:24:34 compute-0 sudo[314802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:34 compute-0 sudo[314802]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:34 compute-0 ceph-mon[74339]: pgmap v2007: 305 pgs: 305 active+clean; 187 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 2.9 MiB/s wr, 89 op/s
Dec 06 07:24:34 compute-0 sudo[314827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:34 compute-0 sudo[314827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:34 compute-0 sudo[314827]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:35 compute-0 sudo[314852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:24:35 compute-0 sudo[314852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:35.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:35 compute-0 sudo[314852]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:24:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:24:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:24:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:24:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:24:35 compute-0 nova_compute[251992]: 2025-12-06 07:24:35.910 251996 DEBUG nova.compute.manager [req-1cc04226-6dac-48d9-83fb-4d5ce34b67a3 req-1afd1515-eae1-48db-a246-59683038ff99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:35 compute-0 nova_compute[251992]: 2025-12-06 07:24:35.911 251996 DEBUG oslo_concurrency.lockutils [req-1cc04226-6dac-48d9-83fb-4d5ce34b67a3 req-1afd1515-eae1-48db-a246-59683038ff99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:35 compute-0 nova_compute[251992]: 2025-12-06 07:24:35.911 251996 DEBUG oslo_concurrency.lockutils [req-1cc04226-6dac-48d9-83fb-4d5ce34b67a3 req-1afd1515-eae1-48db-a246-59683038ff99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:35 compute-0 nova_compute[251992]: 2025-12-06 07:24:35.912 251996 DEBUG oslo_concurrency.lockutils [req-1cc04226-6dac-48d9-83fb-4d5ce34b67a3 req-1afd1515-eae1-48db-a246-59683038ff99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:35 compute-0 nova_compute[251992]: 2025-12-06 07:24:35.912 251996 DEBUG nova.compute.manager [req-1cc04226-6dac-48d9-83fb-4d5ce34b67a3 req-1afd1515-eae1-48db-a246-59683038ff99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] No waiting events found dispatching network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:24:35 compute-0 nova_compute[251992]: 2025-12-06 07:24:35.912 251996 WARNING nova.compute.manager [req-1cc04226-6dac-48d9-83fb-4d5ce34b67a3 req-1afd1515-eae1-48db-a246-59683038ff99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received unexpected event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for instance with vm_state active and task_state None.
Dec 06 07:24:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 293 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 168 KiB/s rd, 6.6 MiB/s wr, 155 op/s
Dec 06 07:24:36 compute-0 nova_compute[251992]: 2025-12-06 07:24:36.104 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:24:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:24:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:24:36 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 15361713-0c47-43de-b1aa-e55e5cd687aa does not exist
Dec 06 07:24:36 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c7e25b3d-f405-4ff4-b05f-54e07754e64f does not exist
Dec 06 07:24:36 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 60333cd2-7272-46e4-a741-73a96db03ad7 does not exist
Dec 06 07:24:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:24:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:24:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:24:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:24:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:24:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:24:36 compute-0 sudo[314908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:36 compute-0 sudo[314908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:36 compute-0 sudo[314908]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:36 compute-0 sudo[314933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:24:36 compute-0 sudo[314933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:36 compute-0 sudo[314933]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:36 compute-0 sudo[314959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:36 compute-0 sudo[314959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:36 compute-0 sudo[314959]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:36 compute-0 sudo[314984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:24:36 compute-0 sudo[314984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:36.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:36 compute-0 podman[315050]: 2025-12-06 07:24:36.946044232 +0000 UTC m=+0.076558771 container create 047e66b4fa61d96b203c17430d4678aee9247f9f366734521463f910c3a48972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:24:36 compute-0 podman[315050]: 2025-12-06 07:24:36.889369965 +0000 UTC m=+0.019884514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:24:36 compute-0 systemd[1]: Started libpod-conmon-047e66b4fa61d96b203c17430d4678aee9247f9f366734521463f910c3a48972.scope.
Dec 06 07:24:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:24:37 compute-0 podman[315050]: 2025-12-06 07:24:37.059005905 +0000 UTC m=+0.189520464 container init 047e66b4fa61d96b203c17430d4678aee9247f9f366734521463f910c3a48972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:24:37 compute-0 podman[315050]: 2025-12-06 07:24:37.066206162 +0000 UTC m=+0.196720711 container start 047e66b4fa61d96b203c17430d4678aee9247f9f366734521463f910c3a48972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 07:24:37 compute-0 podman[315050]: 2025-12-06 07:24:37.069616365 +0000 UTC m=+0.200130914 container attach 047e66b4fa61d96b203c17430d4678aee9247f9f366734521463f910c3a48972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 07:24:37 compute-0 stoic_dewdney[315067]: 167 167
Dec 06 07:24:37 compute-0 systemd[1]: libpod-047e66b4fa61d96b203c17430d4678aee9247f9f366734521463f910c3a48972.scope: Deactivated successfully.
Dec 06 07:24:37 compute-0 conmon[315067]: conmon 047e66b4fa61d96b203c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-047e66b4fa61d96b203c17430d4678aee9247f9f366734521463f910c3a48972.scope/container/memory.events
Dec 06 07:24:37 compute-0 podman[315050]: 2025-12-06 07:24:37.073873502 +0000 UTC m=+0.204388051 container died 047e66b4fa61d96b203c17430d4678aee9247f9f366734521463f910c3a48972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-212c1bcd713658c1445839adebc3ca81de49fd9a1826555c4d0f883cced21d72-merged.mount: Deactivated successfully.
Dec 06 07:24:37 compute-0 podman[315050]: 2025-12-06 07:24:37.111490578 +0000 UTC m=+0.242005127 container remove 047e66b4fa61d96b203c17430d4678aee9247f9f366734521463f910c3a48972 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_dewdney, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:24:37 compute-0 systemd[1]: libpod-conmon-047e66b4fa61d96b203c17430d4678aee9247f9f366734521463f910c3a48972.scope: Deactivated successfully.
Dec 06 07:24:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:37.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:37 compute-0 podman[315091]: 2025-12-06 07:24:37.325229662 +0000 UTC m=+0.061903080 container create 380f6406ed3446186a0101586c0817cec1e22c7b235aaaf33ed3d852b6f5a944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banzai, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 07:24:37 compute-0 systemd[1]: Started libpod-conmon-380f6406ed3446186a0101586c0817cec1e22c7b235aaaf33ed3d852b6f5a944.scope.
Dec 06 07:24:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a188d1809adef8bcc38466caa165fc64562e8da23273bf2ea4d7269a6dca1570/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a188d1809adef8bcc38466caa165fc64562e8da23273bf2ea4d7269a6dca1570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a188d1809adef8bcc38466caa165fc64562e8da23273bf2ea4d7269a6dca1570/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:37 compute-0 podman[315091]: 2025-12-06 07:24:37.301220307 +0000 UTC m=+0.037893745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a188d1809adef8bcc38466caa165fc64562e8da23273bf2ea4d7269a6dca1570/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a188d1809adef8bcc38466caa165fc64562e8da23273bf2ea4d7269a6dca1570/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:37 compute-0 podman[315091]: 2025-12-06 07:24:37.403386226 +0000 UTC m=+0.140059664 container init 380f6406ed3446186a0101586c0817cec1e22c7b235aaaf33ed3d852b6f5a944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:24:37 compute-0 podman[315091]: 2025-12-06 07:24:37.413569264 +0000 UTC m=+0.150242682 container start 380f6406ed3446186a0101586c0817cec1e22c7b235aaaf33ed3d852b6f5a944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:24:37 compute-0 podman[315091]: 2025-12-06 07:24:37.416389611 +0000 UTC m=+0.153063049 container attach 380f6406ed3446186a0101586c0817cec1e22c7b235aaaf33ed3d852b6f5a944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 07:24:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/495689201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:37 compute-0 ceph-mon[74339]: pgmap v2008: 305 pgs: 305 active+clean; 293 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 168 KiB/s rd, 6.6 MiB/s wr, 155 op/s
Dec 06 07:24:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:24:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:24:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:24:37 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:24:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2049410736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 293 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 136 KiB/s rd, 5.5 MiB/s wr, 109 op/s
Dec 06 07:24:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:38 compute-0 quirky_banzai[315107]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:24:38 compute-0 quirky_banzai[315107]: --> relative data size: 1.0
Dec 06 07:24:38 compute-0 quirky_banzai[315107]: --> All data devices are unavailable
Dec 06 07:24:38 compute-0 systemd[1]: libpod-380f6406ed3446186a0101586c0817cec1e22c7b235aaaf33ed3d852b6f5a944.scope: Deactivated successfully.
Dec 06 07:24:38 compute-0 podman[315122]: 2025-12-06 07:24:38.344000203 +0000 UTC m=+0.041783972 container died 380f6406ed3446186a0101586c0817cec1e22c7b235aaaf33ed3d852b6f5a944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banzai, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 07:24:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a188d1809adef8bcc38466caa165fc64562e8da23273bf2ea4d7269a6dca1570-merged.mount: Deactivated successfully.
Dec 06 07:24:38 compute-0 podman[315122]: 2025-12-06 07:24:38.422622559 +0000 UTC m=+0.120406228 container remove 380f6406ed3446186a0101586c0817cec1e22c7b235aaaf33ed3d852b6f5a944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:24:38 compute-0 systemd[1]: libpod-conmon-380f6406ed3446186a0101586c0817cec1e22c7b235aaaf33ed3d852b6f5a944.scope: Deactivated successfully.
Dec 06 07:24:38 compute-0 sudo[314984]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/735819537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:38 compute-0 ceph-mon[74339]: pgmap v2009: 305 pgs: 305 active+clean; 293 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 136 KiB/s rd, 5.5 MiB/s wr, 109 op/s
Dec 06 07:24:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3125607217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/936231557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:38 compute-0 sudo[315138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:38 compute-0 sudo[315138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:38 compute-0 sudo[315138]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:38 compute-0 sudo[315164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:24:38 compute-0 sudo[315164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:38 compute-0 sudo[315164]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:38 compute-0 sudo[315189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:38 compute-0 sudo[315189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:38 compute-0 sudo[315189]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:38 compute-0 sudo[315214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:24:38 compute-0 sudo[315214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:38.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:38 compute-0 nova_compute[251992]: 2025-12-06 07:24:38.892 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:39 compute-0 podman[315280]: 2025-12-06 07:24:39.002229881 +0000 UTC m=+0.045339299 container create 3db372467ae8c08dc6626b5c0413a7d2855d2264da6dd22ac1c5a31274e591c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swanson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:24:39 compute-0 systemd[1]: Started libpod-conmon-3db372467ae8c08dc6626b5c0413a7d2855d2264da6dd22ac1c5a31274e591c5.scope.
Dec 06 07:24:39 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:24:39 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:24:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:24:39 compute-0 podman[315280]: 2025-12-06 07:24:38.982259536 +0000 UTC m=+0.025368994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:24:39 compute-0 podman[315280]: 2025-12-06 07:24:39.084944339 +0000 UTC m=+0.128053767 container init 3db372467ae8c08dc6626b5c0413a7d2855d2264da6dd22ac1c5a31274e591c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swanson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:24:39 compute-0 podman[315280]: 2025-12-06 07:24:39.098638093 +0000 UTC m=+0.141747531 container start 3db372467ae8c08dc6626b5c0413a7d2855d2264da6dd22ac1c5a31274e591c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:24:39 compute-0 podman[315280]: 2025-12-06 07:24:39.102674733 +0000 UTC m=+0.145784161 container attach 3db372467ae8c08dc6626b5c0413a7d2855d2264da6dd22ac1c5a31274e591c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:24:39 compute-0 thirsty_swanson[315296]: 167 167
Dec 06 07:24:39 compute-0 systemd[1]: libpod-3db372467ae8c08dc6626b5c0413a7d2855d2264da6dd22ac1c5a31274e591c5.scope: Deactivated successfully.
Dec 06 07:24:39 compute-0 podman[315280]: 2025-12-06 07:24:39.107031992 +0000 UTC m=+0.150141420 container died 3db372467ae8c08dc6626b5c0413a7d2855d2264da6dd22ac1c5a31274e591c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swanson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 07:24:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-8daf79abd697ddc00a4633b95aa154dd1053e92e5a8f8fc891f39c43c6d60f12-merged.mount: Deactivated successfully.
Dec 06 07:24:39 compute-0 podman[315280]: 2025-12-06 07:24:39.143183308 +0000 UTC m=+0.186292736 container remove 3db372467ae8c08dc6626b5c0413a7d2855d2264da6dd22ac1c5a31274e591c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_swanson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:24:39 compute-0 systemd[1]: libpod-conmon-3db372467ae8c08dc6626b5c0413a7d2855d2264da6dd22ac1c5a31274e591c5.scope: Deactivated successfully.
Dec 06 07:24:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:39.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:39 compute-0 podman[315319]: 2025-12-06 07:24:39.320691074 +0000 UTC m=+0.045855433 container create b9ab777881e5a2ed717cbebac7fe758a2e8f2851319c7f66691ef477282bf244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_meitner, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:24:39 compute-0 systemd[1]: Started libpod-conmon-b9ab777881e5a2ed717cbebac7fe758a2e8f2851319c7f66691ef477282bf244.scope.
Dec 06 07:24:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:24:39 compute-0 podman[315319]: 2025-12-06 07:24:39.295441125 +0000 UTC m=+0.020605504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920385a9093d029acd6d0e2ce4931bb7c2d9df4de2cb7f88b8d8b2accd4ba2ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920385a9093d029acd6d0e2ce4931bb7c2d9df4de2cb7f88b8d8b2accd4ba2ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920385a9093d029acd6d0e2ce4931bb7c2d9df4de2cb7f88b8d8b2accd4ba2ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/920385a9093d029acd6d0e2ce4931bb7c2d9df4de2cb7f88b8d8b2accd4ba2ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:39 compute-0 podman[315319]: 2025-12-06 07:24:39.408730607 +0000 UTC m=+0.133894996 container init b9ab777881e5a2ed717cbebac7fe758a2e8f2851319c7f66691ef477282bf244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:24:39 compute-0 podman[315319]: 2025-12-06 07:24:39.416502119 +0000 UTC m=+0.141666478 container start b9ab777881e5a2ed717cbebac7fe758a2e8f2851319c7f66691ef477282bf244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:24:39 compute-0 podman[315319]: 2025-12-06 07:24:39.419870131 +0000 UTC m=+0.145034510 container attach b9ab777881e5a2ed717cbebac7fe758a2e8f2851319c7f66691ef477282bf244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_meitner, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:24:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 6.0 MiB/s wr, 152 op/s
Dec 06 07:24:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1445743369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:24:40 compute-0 friendly_meitner[315335]: {
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:     "0": [
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:         {
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             "devices": [
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "/dev/loop3"
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             ],
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             "lv_name": "ceph_lv0",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             "lv_size": "7511998464",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             "name": "ceph_lv0",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             "tags": {
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "ceph.cluster_name": "ceph",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "ceph.crush_device_class": "",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "ceph.encrypted": "0",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "ceph.osd_id": "0",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "ceph.type": "block",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:                 "ceph.vdo": "0"
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             },
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             "type": "block",
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:             "vg_name": "ceph_vg0"
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:         }
Dec 06 07:24:40 compute-0 friendly_meitner[315335]:     ]
Dec 06 07:24:40 compute-0 friendly_meitner[315335]: }
Dec 06 07:24:40 compute-0 systemd[1]: libpod-b9ab777881e5a2ed717cbebac7fe758a2e8f2851319c7f66691ef477282bf244.scope: Deactivated successfully.
Dec 06 07:24:40 compute-0 podman[315319]: 2025-12-06 07:24:40.150513376 +0000 UTC m=+0.875677745 container died b9ab777881e5a2ed717cbebac7fe758a2e8f2851319c7f66691ef477282bf244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_meitner, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:24:40 compute-0 nova_compute[251992]: 2025-12-06 07:24:40.379 251996 INFO nova.compute.manager [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Rebuilding instance
Dec 06 07:24:40 compute-0 nova_compute[251992]: 2025-12-06 07:24:40.840 251996 DEBUG nova.objects.instance [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:40 compute-0 nova_compute[251992]: 2025-12-06 07:24:40.873 251996 DEBUG nova.compute.manager [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:24:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:40.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:40 compute-0 nova_compute[251992]: 2025-12-06 07:24:40.940 251996 DEBUG nova.objects.instance [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_requests' on Instance uuid 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:40 compute-0 nova_compute[251992]: 2025-12-06 07:24:40.954 251996 DEBUG nova.objects.instance [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:40 compute-0 nova_compute[251992]: 2025-12-06 07:24:40.966 251996 DEBUG nova.objects.instance [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'resources' on Instance uuid 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:40 compute-0 nova_compute[251992]: 2025-12-06 07:24:40.981 251996 DEBUG nova.objects.instance [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'migration_context' on Instance uuid 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:24:40 compute-0 nova_compute[251992]: 2025-12-06 07:24:40.992 251996 DEBUG nova.objects.instance [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:24:40 compute-0 nova_compute[251992]: 2025-12-06 07:24:40.997 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:24:41 compute-0 nova_compute[251992]: 2025-12-06 07:24:41.108 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:24:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:41.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:24:41 compute-0 ceph-mon[74339]: pgmap v2010: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 6.0 MiB/s wr, 152 op/s
Dec 06 07:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-920385a9093d029acd6d0e2ce4931bb7c2d9df4de2cb7f88b8d8b2accd4ba2ef-merged.mount: Deactivated successfully.
Dec 06 07:24:41 compute-0 podman[315319]: 2025-12-06 07:24:41.544014085 +0000 UTC m=+2.269178454 container remove b9ab777881e5a2ed717cbebac7fe758a2e8f2851319c7f66691ef477282bf244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_meitner, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:24:41 compute-0 sudo[315214]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:41 compute-0 systemd[1]: libpod-conmon-b9ab777881e5a2ed717cbebac7fe758a2e8f2851319c7f66691ef477282bf244.scope: Deactivated successfully.
Dec 06 07:24:41 compute-0 sudo[315358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:41 compute-0 sudo[315358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:41 compute-0 sudo[315358]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:41 compute-0 sudo[315383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:24:41 compute-0 sudo[315383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:41 compute-0 sudo[315383]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:41 compute-0 sudo[315408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:41 compute-0 sudo[315408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:41 compute-0 sudo[315408]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:41 compute-0 sudo[315433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:24:41 compute-0 sudo[315433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.3 MiB/s wr, 203 op/s
Dec 06 07:24:42 compute-0 podman[315501]: 2025-12-06 07:24:42.188584078 +0000 UTC m=+0.043175476 container create 0e7acf076cd716c6faea2c4859a45e61223a6a1ea763f1937e22cca9f03ec486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_agnesi, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 07:24:42 compute-0 systemd[1]: Started libpod-conmon-0e7acf076cd716c6faea2c4859a45e61223a6a1ea763f1937e22cca9f03ec486.scope.
Dec 06 07:24:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:24:42 compute-0 podman[315501]: 2025-12-06 07:24:42.171718751 +0000 UTC m=+0.026310169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:24:42 compute-0 podman[315501]: 2025-12-06 07:24:42.281966041 +0000 UTC m=+0.136557469 container init 0e7acf076cd716c6faea2c4859a45e61223a6a1ea763f1937e22cca9f03ec486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:24:42 compute-0 podman[315501]: 2025-12-06 07:24:42.288194073 +0000 UTC m=+0.142785471 container start 0e7acf076cd716c6faea2c4859a45e61223a6a1ea763f1937e22cca9f03ec486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_agnesi, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:24:42 compute-0 podman[315501]: 2025-12-06 07:24:42.291611767 +0000 UTC m=+0.146203185 container attach 0e7acf076cd716c6faea2c4859a45e61223a6a1ea763f1937e22cca9f03ec486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_agnesi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 07:24:42 compute-0 clever_agnesi[315517]: 167 167
Dec 06 07:24:42 compute-0 systemd[1]: libpod-0e7acf076cd716c6faea2c4859a45e61223a6a1ea763f1937e22cca9f03ec486.scope: Deactivated successfully.
Dec 06 07:24:42 compute-0 podman[315501]: 2025-12-06 07:24:42.293359256 +0000 UTC m=+0.147950654 container died 0e7acf076cd716c6faea2c4859a45e61223a6a1ea763f1937e22cca9f03ec486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_agnesi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9476211347e005f5b91bacfd528fb0e476a4ce12003a174fcd0066143287853-merged.mount: Deactivated successfully.
Dec 06 07:24:42 compute-0 podman[315501]: 2025-12-06 07:24:42.327631494 +0000 UTC m=+0.182222892 container remove 0e7acf076cd716c6faea2c4859a45e61223a6a1ea763f1937e22cca9f03ec486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:24:42 compute-0 systemd[1]: libpod-conmon-0e7acf076cd716c6faea2c4859a45e61223a6a1ea763f1937e22cca9f03ec486.scope: Deactivated successfully.
Dec 06 07:24:42 compute-0 podman[315541]: 2025-12-06 07:24:42.502461191 +0000 UTC m=+0.038464865 container create 51792a461d3ecdcdb736c1432c4a9b1eea81a396da858c989cf4eb23cc52f59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:24:42 compute-0 systemd[1]: Started libpod-conmon-51792a461d3ecdcdb736c1432c4a9b1eea81a396da858c989cf4eb23cc52f59c.scope.
Dec 06 07:24:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cbede1cc1127e7089cb1bac84bb976b4fc1764c68fcae09f74b5ee4f856c5a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cbede1cc1127e7089cb1bac84bb976b4fc1764c68fcae09f74b5ee4f856c5a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cbede1cc1127e7089cb1bac84bb976b4fc1764c68fcae09f74b5ee4f856c5a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cbede1cc1127e7089cb1bac84bb976b4fc1764c68fcae09f74b5ee4f856c5a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:24:42 compute-0 podman[315541]: 2025-12-06 07:24:42.486607552 +0000 UTC m=+0.022611246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:24:42 compute-0 podman[315541]: 2025-12-06 07:24:42.597782528 +0000 UTC m=+0.133786212 container init 51792a461d3ecdcdb736c1432c4a9b1eea81a396da858c989cf4eb23cc52f59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_einstein, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:24:42 compute-0 podman[315541]: 2025-12-06 07:24:42.60363292 +0000 UTC m=+0.139636594 container start 51792a461d3ecdcdb736c1432c4a9b1eea81a396da858c989cf4eb23cc52f59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_einstein, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:24:42 compute-0 podman[315541]: 2025-12-06 07:24:42.610685444 +0000 UTC m=+0.146689148 container attach 51792a461d3ecdcdb736c1432c4a9b1eea81a396da858c989cf4eb23cc52f59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_einstein, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 07:24:42 compute-0 ceph-mon[74339]: pgmap v2011: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.3 MiB/s wr, 203 op/s
Dec 06 07:24:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:42.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:24:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:24:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:24:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:24:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:24:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:24:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:43.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:43 compute-0 reverent_einstein[315558]: {
Dec 06 07:24:43 compute-0 reverent_einstein[315558]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:24:43 compute-0 reverent_einstein[315558]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:24:43 compute-0 reverent_einstein[315558]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:24:43 compute-0 reverent_einstein[315558]:         "osd_id": 0,
Dec 06 07:24:43 compute-0 reverent_einstein[315558]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:24:43 compute-0 reverent_einstein[315558]:         "type": "bluestore"
Dec 06 07:24:43 compute-0 reverent_einstein[315558]:     }
Dec 06 07:24:43 compute-0 reverent_einstein[315558]: }
Dec 06 07:24:43 compute-0 systemd[1]: libpod-51792a461d3ecdcdb736c1432c4a9b1eea81a396da858c989cf4eb23cc52f59c.scope: Deactivated successfully.
Dec 06 07:24:43 compute-0 podman[315541]: 2025-12-06 07:24:43.453309785 +0000 UTC m=+0.989313459 container died 51792a461d3ecdcdb736c1432c4a9b1eea81a396da858c989cf4eb23cc52f59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cbede1cc1127e7089cb1bac84bb976b4fc1764c68fcae09f74b5ee4f856c5a1-merged.mount: Deactivated successfully.
Dec 06 07:24:43 compute-0 podman[315541]: 2025-12-06 07:24:43.513769608 +0000 UTC m=+1.049773282 container remove 51792a461d3ecdcdb736c1432c4a9b1eea81a396da858c989cf4eb23cc52f59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 07:24:43 compute-0 systemd[1]: libpod-conmon-51792a461d3ecdcdb736c1432c4a9b1eea81a396da858c989cf4eb23cc52f59c.scope: Deactivated successfully.
Dec 06 07:24:43 compute-0 sudo[315433]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:24:43 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:24:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:24:43 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:24:43 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f3a7b496-dd2f-4812-87dd-3af86d3ac6e8 does not exist
Dec 06 07:24:43 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 38bbcd72-4731-434d-984e-cf29735ae7ac does not exist
Dec 06 07:24:43 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4d8cc557-6a55-4a07-950a-d070fbb5ec8e does not exist
Dec 06 07:24:43 compute-0 nova_compute[251992]: 2025-12-06 07:24:43.894 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:43 compute-0 sudo[315593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:43 compute-0 sudo[315593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:43 compute-0 sudo[315593]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.4 MiB/s wr, 212 op/s
Dec 06 07:24:44 compute-0 sudo[315618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:24:44 compute-0 sudo[315618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:44 compute-0 sudo[315618]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:24:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:24:44 compute-0 ceph-mon[74339]: pgmap v2012: 305 pgs: 305 active+clean; 306 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.4 MiB/s wr, 212 op/s
Dec 06 07:24:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:45.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 307 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.2 MiB/s wr, 265 op/s
Dec 06 07:24:46 compute-0 sudo[315644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:46 compute-0 sudo[315644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:46 compute-0 sudo[315644]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:46 compute-0 sudo[315669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:24:46 compute-0 sudo[315669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:24:46 compute-0 sudo[315669]: pam_unix(sudo:session): session closed for user root
Dec 06 07:24:46 compute-0 nova_compute[251992]: 2025-12-06 07:24:46.117 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:46 compute-0 ceph-mon[74339]: pgmap v2013: 305 pgs: 305 active+clean; 307 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.2 MiB/s wr, 265 op/s
Dec 06 07:24:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:47.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 307 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 467 KiB/s wr, 197 op/s
Dec 06 07:24:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:48 compute-0 ceph-mon[74339]: pgmap v2014: 305 pgs: 305 active+clean; 307 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 467 KiB/s wr, 197 op/s
Dec 06 07:24:48 compute-0 nova_compute[251992]: 2025-12-06 07:24:48.896 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:48.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:49.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 301 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.5 MiB/s wr, 244 op/s
Dec 06 07:24:50 compute-0 ceph-mon[74339]: pgmap v2015: 305 pgs: 305 active+clean; 301 MiB data, 851 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.5 MiB/s wr, 244 op/s
Dec 06 07:24:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:50.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:51 compute-0 nova_compute[251992]: 2025-12-06 07:24:51.092 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:24:51 compute-0 nova_compute[251992]: 2025-12-06 07:24:51.121 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:51 compute-0 nova_compute[251992]: 2025-12-06 07:24:51.169 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:51.170 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:51.172 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:24:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:51.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:51 compute-0 ovn_controller[147168]: 2025-12-06T07:24:51Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:e8:fd 10.100.0.6
Dec 06 07:24:51 compute-0 ovn_controller[147168]: 2025-12-06T07:24:51Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:e8:fd 10.100.0.6
Dec 06 07:24:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 283 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.9 MiB/s wr, 252 op/s
Dec 06 07:24:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3534498576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:24:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:52.174 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:52.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:53.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:53 compute-0 ceph-mon[74339]: pgmap v2016: 305 pgs: 305 active+clean; 283 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.9 MiB/s wr, 252 op/s
Dec 06 07:24:53 compute-0 nova_compute[251992]: 2025-12-06 07:24:53.898 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 291 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 197 op/s
Dec 06 07:24:54 compute-0 podman[315698]: 2025-12-06 07:24:54.430316614 +0000 UTC m=+0.081578138 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:24:54 compute-0 ceph-mon[74339]: pgmap v2017: 305 pgs: 305 active+clean; 291 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 197 op/s
Dec 06 07:24:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:54.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:55.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 314 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 227 op/s
Dec 06 07:24:56 compute-0 nova_compute[251992]: 2025-12-06 07:24:56.113 251996 INFO nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Instance shutdown successfully after 15 seconds.
Dec 06 07:24:56 compute-0 nova_compute[251992]: 2025-12-06 07:24:56.124 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:56.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:24:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:57.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:24:57 compute-0 ceph-mon[74339]: pgmap v2018: 305 pgs: 305 active+clean; 314 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 227 op/s
Dec 06 07:24:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 314 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.2 MiB/s wr, 161 op/s
Dec 06 07:24:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:24:58 compute-0 kernel: tap43f29a7e-fd (unregistering): left promiscuous mode
Dec 06 07:24:58 compute-0 NetworkManager[48965]: <info>  [1765005898.3487] device (tap43f29a7e-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.355 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00358|binding|INFO|Releasing lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 from this chassis (sb_readonly=0)
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00359|binding|INFO|Setting lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 down in Southbound
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00360|binding|INFO|Removing iface tap43f29a7e-fd ovn-installed in OVS
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.358 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.363 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:e8:fd 10.100.0.6'], port_security=['fa:16:3e:ee:e8:fd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '4', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=43f29a7e-fdfe-4bc7-b164-e60a29234bc2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.364 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a unbound from our chassis
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.365 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c014e4e-a182-4f60-8285-20525bc99e5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.367 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c75ea78f-a48c-4ccd-a9c3-79af3484dfd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.367 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace which is not needed anymore
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.380 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:58 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000066.scope: Deactivated successfully.
Dec 06 07:24:58 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000066.scope: Consumed 14.356s CPU time.
Dec 06 07:24:58 compute-0 systemd-machined[212986]: Machine qemu-46-instance-00000066 terminated.
Dec 06 07:24:58 compute-0 podman[315727]: 2025-12-06 07:24:58.413999938 +0000 UTC m=+0.067066705 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 06 07:24:58 compute-0 podman[315728]: 2025-12-06 07:24:58.42092668 +0000 UTC m=+0.067868528 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:24:58 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[314736]: [NOTICE]   (314740) : haproxy version is 2.8.14-c23fe91
Dec 06 07:24:58 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[314736]: [NOTICE]   (314740) : path to executable is /usr/sbin/haproxy
Dec 06 07:24:58 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[314736]: [WARNING]  (314740) : Exiting Master process...
Dec 06 07:24:58 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[314736]: [ALERT]    (314740) : Current worker (314742) exited with code 143 (Terminated)
Dec 06 07:24:58 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[314736]: [WARNING]  (314740) : All workers exited. Exiting... (0)
Dec 06 07:24:58 compute-0 systemd[1]: libpod-3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528.scope: Deactivated successfully.
Dec 06 07:24:58 compute-0 podman[315786]: 2025-12-06 07:24:58.493861468 +0000 UTC m=+0.041624893 container died 3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 07:24:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528-userdata-shm.mount: Deactivated successfully.
Dec 06 07:24:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-28c66720511eee7691329e0a4341946cc945b9b50c2c18ac30c41c00b5b50df0-merged.mount: Deactivated successfully.
Dec 06 07:24:58 compute-0 kernel: tap43f29a7e-fd: entered promiscuous mode
Dec 06 07:24:58 compute-0 NetworkManager[48965]: <info>  [1765005898.5305] manager: (tap43f29a7e-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Dec 06 07:24:58 compute-0 systemd-udevd[315750]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:24:58 compute-0 kernel: tap43f29a7e-fd (unregistering): left promiscuous mode
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00361|binding|INFO|Claiming lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for this chassis.
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00362|binding|INFO|43f29a7e-fdfe-4bc7-b164-e60a29234bc2: Claiming fa:16:3e:ee:e8:fd 10.100.0.6
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.532 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:58 compute-0 podman[315786]: 2025-12-06 07:24:58.537705361 +0000 UTC m=+0.085468786 container cleanup 3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 07:24:58 compute-0 systemd[1]: libpod-conmon-3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528.scope: Deactivated successfully.
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.556 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:e8:fd 10.100.0.6'], port_security=['fa:16:3e:ee:e8:fd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '4', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=43f29a7e-fdfe-4bc7-b164-e60a29234bc2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.561 251996 INFO nova.virt.libvirt.driver [-] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Instance destroyed successfully.
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00363|binding|INFO|Setting lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 ovn-installed in OVS
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00364|binding|INFO|Setting lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 up in Southbound
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00365|binding|INFO|Releasing lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 from this chassis (sb_readonly=1)
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.563 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00366|if_status|INFO|Dropped 18 log messages in last 807 seconds (most recently, 807 seconds ago) due to excessive rate
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00367|if_status|INFO|Not setting lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 down as sb is readonly
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00368|binding|INFO|Removing iface tap43f29a7e-fd ovn-installed in OVS
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00369|binding|INFO|Releasing lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 from this chassis (sb_readonly=0)
Dec 06 07:24:58 compute-0 ovn_controller[147168]: 2025-12-06T07:24:58Z|00370|binding|INFO|Setting lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 down in Southbound
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.571 251996 INFO nova.virt.libvirt.driver [-] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Instance destroyed successfully.
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.572 251996 DEBUG nova.virt.libvirt.vif [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:24:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-57853796',display_name='tempest-ServerDiskConfigTestJSON-server-57853796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-57853796',id=102,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:24:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-psuria6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:24:39Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.572 251996 DEBUG nova.network.os_vif_util [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.573 251996 DEBUG nova.network.os_vif_util [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.574 251996 DEBUG os_vif [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.574 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:e8:fd 10.100.0.6'], port_security=['fa:16:3e:ee:e8:fd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '4', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=43f29a7e-fdfe-4bc7-b164-e60a29234bc2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.577 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.578 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43f29a7e-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.579 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.581 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.584 251996 INFO os_vif [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd')
Dec 06 07:24:58 compute-0 podman[315818]: 2025-12-06 07:24:58.611487352 +0000 UTC m=+0.046047355 container remove 3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.613 251996 DEBUG nova.compute.manager [req-70868237-ee8d-46ff-8886-77a6e18951e5 req-b543c1fd-ad75-4d08-91cc-0c2b8e47ebdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-unplugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.614 251996 DEBUG oslo_concurrency.lockutils [req-70868237-ee8d-46ff-8886-77a6e18951e5 req-b543c1fd-ad75-4d08-91cc-0c2b8e47ebdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.615 251996 DEBUG oslo_concurrency.lockutils [req-70868237-ee8d-46ff-8886-77a6e18951e5 req-b543c1fd-ad75-4d08-91cc-0c2b8e47ebdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.615 251996 DEBUG oslo_concurrency.lockutils [req-70868237-ee8d-46ff-8886-77a6e18951e5 req-b543c1fd-ad75-4d08-91cc-0c2b8e47ebdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.615 251996 DEBUG nova.compute.manager [req-70868237-ee8d-46ff-8886-77a6e18951e5 req-b543c1fd-ad75-4d08-91cc-0c2b8e47ebdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] No waiting events found dispatching network-vif-unplugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.615 251996 WARNING nova.compute.manager [req-70868237-ee8d-46ff-8886-77a6e18951e5 req-b543c1fd-ad75-4d08-91cc-0c2b8e47ebdb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received unexpected event network-vif-unplugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for instance with vm_state active and task_state rebuilding.
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.620 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b5af671e-7e1a-4b40-9553-a6d4bc0939df]: (4, ('Sat Dec  6 07:24:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528)\n3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528\nSat Dec  6 07:24:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528)\n3e527dbe8e457f2184bc71d20a8a8ad874a17ecb6e1f7e2eba48bc151fa29528\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.623 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ee148733-9e6f-4c7d-932f-bba8e911d48c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.624 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:24:58 compute-0 kernel: tap7c014e4e-a0: left promiscuous mode
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.626 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.631 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[41f7119c-ca2c-4bce-b18c-fabfdd93ee64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.641 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.645 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[678bc1dd-8935-4c0f-a9f1-405cc861b83d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.646 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[37c414e4-2d83-4d17-b72b-ce03448c7ad8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.662 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[afbe390f-de8f-4b25-8cf4-dd0d7c415358]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614535, 'reachable_time': 22205, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315852, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.666 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.666 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[51b15621-6e9c-4d48-9560-5a4122cc986f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c014e4e\x2da182\x2d4f60\x2d8285\x2d20525bc99e5a.mount: Deactivated successfully.
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.667 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a unbound from our chassis
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.668 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c014e4e-a182-4f60-8285-20525bc99e5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.669 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[78ef8c9a-7626-4460-85f6-9769599ca738]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.670 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a unbound from our chassis
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.671 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c014e4e-a182-4f60-8285-20525bc99e5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:24:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:24:58.672 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d8d64d-a0b2-469b-a5dc-4d53828d495e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:24:58 compute-0 nova_compute[251992]: 2025-12-06 07:24:58.899 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:24:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:24:58.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:24:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:24:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:24:59.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:24:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 314 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.2 MiB/s wr, 175 op/s
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:00 compute-0 ceph-mon[74339]: pgmap v2019: 305 pgs: 305 active+clean; 314 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.2 MiB/s wr, 161 op/s
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.745 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.745 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.745 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.746 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.746 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.780 251996 DEBUG nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.782 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.782 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.783 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.783 251996 DEBUG nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] No waiting events found dispatching network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.783 251996 WARNING nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received unexpected event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for instance with vm_state active and task_state rebuilding.
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.783 251996 DEBUG nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.784 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.784 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.784 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.785 251996 DEBUG nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] No waiting events found dispatching network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.785 251996 WARNING nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received unexpected event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for instance with vm_state active and task_state rebuilding.
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.785 251996 DEBUG nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.785 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.786 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.786 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.786 251996 DEBUG nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] No waiting events found dispatching network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.786 251996 WARNING nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received unexpected event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for instance with vm_state active and task_state rebuilding.
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.787 251996 DEBUG nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-unplugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.787 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.787 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.788 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.788 251996 DEBUG nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] No waiting events found dispatching network-vif-unplugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.788 251996 WARNING nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received unexpected event network-vif-unplugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for instance with vm_state active and task_state rebuilding.
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.788 251996 DEBUG nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.789 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.789 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.789 251996 DEBUG oslo_concurrency.lockutils [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.789 251996 DEBUG nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] No waiting events found dispatching network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:25:00 compute-0 nova_compute[251992]: 2025-12-06 07:25:00.790 251996 WARNING nova.compute.manager [req-f35a0280-4775-4733-b89b-8248c8b3d1fc req-c7bdbae4-cad5-43a8-914c-9aedea1204c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received unexpected event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for instance with vm_state active and task_state rebuilding.
Dec 06 07:25:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:00.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:25:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/970776473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.176 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:01.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.338 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.338 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.341 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.341 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.484 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.486 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4290MB free_disk=20.83203125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.486 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.486 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:01 compute-0 sshd-session[315853]: Invalid user ubuntu from 45.140.17.124 port 33910
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.602 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 00f56c62-f327-41e3-a105-24f56ae124c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.602 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.602 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.603 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:25:01 compute-0 nova_compute[251992]: 2025-12-06 07:25:01.694 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:01 compute-0 sshd-session[315853]: Connection reset by invalid user ubuntu 45.140.17.124 port 33910 [preauth]
Dec 06 07:25:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 314 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 703 KiB/s rd, 3.2 MiB/s wr, 138 op/s
Dec 06 07:25:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:25:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3158887967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:02 compute-0 nova_compute[251992]: 2025-12-06 07:25:02.135 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:02 compute-0 nova_compute[251992]: 2025-12-06 07:25:02.142 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:25:02 compute-0 nova_compute[251992]: 2025-12-06 07:25:02.193 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:25:02 compute-0 nova_compute[251992]: 2025-12-06 07:25:02.233 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:25:02 compute-0 nova_compute[251992]: 2025-12-06 07:25:02.233 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:02 compute-0 ceph-mon[74339]: pgmap v2020: 305 pgs: 305 active+clean; 314 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.2 MiB/s wr, 175 op/s
Dec 06 07:25:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/970776473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:02.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:03 compute-0 nova_compute[251992]: 2025-12-06 07:25:03.234 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:03 compute-0 nova_compute[251992]: 2025-12-06 07:25:03.258 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:03 compute-0 nova_compute[251992]: 2025-12-06 07:25:03.259 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:03.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:03 compute-0 nova_compute[251992]: 2025-12-06 07:25:03.582 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:03 compute-0 nova_compute[251992]: 2025-12-06 07:25:03.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:03.831 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:03.831 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:03.832 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:03 compute-0 nova_compute[251992]: 2025-12-06 07:25:03.902 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 306 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 440 KiB/s rd, 2.3 MiB/s wr, 94 op/s
Dec 06 07:25:03 compute-0 ceph-mon[74339]: pgmap v2021: 305 pgs: 305 active+clean; 314 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 703 KiB/s rd, 3.2 MiB/s wr, 138 op/s
Dec 06 07:25:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3158887967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:04 compute-0 sshd-session[315901]: Connection reset by authenticating user root 45.140.17.124 port 33934 [preauth]
Dec 06 07:25:04 compute-0 nova_compute[251992]: 2025-12-06 07:25:04.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:04 compute-0 nova_compute[251992]: 2025-12-06 07:25:04.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:04.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:04 compute-0 ceph-mon[74339]: pgmap v2022: 305 pgs: 305 active+clean; 306 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 440 KiB/s rd, 2.3 MiB/s wr, 94 op/s
Dec 06 07:25:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/278321561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:05.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:05 compute-0 nova_compute[251992]: 2025-12-06 07:25:05.947 251996 INFO nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Deleting instance files /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_del
Dec 06 07:25:05 compute-0 nova_compute[251992]: 2025-12-06 07:25:05.948 251996 INFO nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Deletion of /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_del complete
Dec 06 07:25:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 231 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Dec 06 07:25:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1659571722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.155 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.156 251996 INFO nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Creating image(s)
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.181 251996 DEBUG nova.storage.rbd_utils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.213 251996 DEBUG nova.storage.rbd_utils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:06 compute-0 sudo[315924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:06 compute-0 sudo[315924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:06 compute-0 sudo[315924]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.245 251996 DEBUG nova.storage.rbd_utils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.249 251996 DEBUG oslo_concurrency.processutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:06 compute-0 sudo[315985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:06 compute-0 sudo[315985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:06 compute-0 sudo[315985]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.319 251996 DEBUG oslo_concurrency.processutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.320 251996 DEBUG oslo_concurrency.lockutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.321 251996 DEBUG oslo_concurrency.lockutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.321 251996 DEBUG oslo_concurrency.lockutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.349 251996 DEBUG nova.storage.rbd_utils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.353 251996 DEBUG oslo_concurrency.processutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.651 251996 DEBUG oslo_concurrency.processutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.724 251996 DEBUG nova.storage.rbd_utils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] resizing rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.828 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.829 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Ensure instance console log exists: /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.829 251996 DEBUG oslo_concurrency.lockutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.830 251996 DEBUG oslo_concurrency.lockutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.830 251996 DEBUG oslo_concurrency.lockutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.832 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Start _get_guest_xml network_info=[{"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:25:06 compute-0 sshd-session[315905]: Connection reset by authenticating user root 45.140.17.124 port 27210 [preauth]
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.836 251996 WARNING nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.841 251996 DEBUG nova.virt.libvirt.host [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.842 251996 DEBUG nova.virt.libvirt.host [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.845 251996 DEBUG nova.virt.libvirt.host [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.845 251996 DEBUG nova.virt.libvirt.host [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.847 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.847 251996 DEBUG nova.virt.hardware [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.848 251996 DEBUG nova.virt.hardware [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.848 251996 DEBUG nova.virt.hardware [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.848 251996 DEBUG nova.virt.hardware [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.848 251996 DEBUG nova.virt.hardware [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.848 251996 DEBUG nova.virt.hardware [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.849 251996 DEBUG nova.virt.hardware [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.849 251996 DEBUG nova.virt.hardware [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.849 251996 DEBUG nova.virt.hardware [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.849 251996 DEBUG nova.virt.hardware [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.850 251996 DEBUG nova.virt.hardware [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.850 251996 DEBUG nova.objects.instance [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:06 compute-0 nova_compute[251992]: 2025-12-06 07:25:06.880 251996 DEBUG oslo_concurrency.processutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:06.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:07 compute-0 ceph-mon[74339]: pgmap v2023: 305 pgs: 305 active+clean; 231 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Dec 06 07:25:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3503138643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:07.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:25:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1281897683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.332 251996 DEBUG oslo_concurrency.processutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.365 251996 DEBUG nova.storage.rbd_utils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.369 251996 DEBUG oslo_concurrency.processutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:25:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/904149066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.893 251996 DEBUG oslo_concurrency.processutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.895 251996 DEBUG nova.virt.libvirt.vif [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:24:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-57853796',display_name='tempest-ServerDiskConfigTestJSON-server-57853796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-57853796',id=102,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:24:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-psuria6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-Serve
rDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:25:06Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.895 251996 DEBUG nova.network.os_vif_util [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.896 251996 DEBUG nova.network.os_vif_util [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.899 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:25:07 compute-0 nova_compute[251992]:   <uuid>2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8</uuid>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   <name>instance-00000066</name>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-57853796</nova:name>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:25:06</nova:creationTime>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <nova:user uuid="d67c136e82ad4001b000848d75eef50d">tempest-ServerDiskConfigTestJSON-749654875-project-member</nova:user>
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <nova:project uuid="88f5b34244614321a9b6e902eaba0ece">tempest-ServerDiskConfigTestJSON-749654875</nova:project>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="412dd61d-1b1e-439f-b7f9-7e7c4e42924c"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <nova:port uuid="43f29a7e-fdfe-4bc7-b164-e60a29234bc2">
Dec 06 07:25:07 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <system>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <entry name="serial">2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8</entry>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <entry name="uuid">2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8</entry>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     </system>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   <os>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   </os>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   <features>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   </features>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk">
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       </source>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk.config">
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       </source>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:25:07 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:ee:e8:fd"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <target dev="tap43f29a7e-fd"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/console.log" append="off"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <video>
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     </video>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:25:07 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:25:07 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:25:07 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:25:07 compute-0 nova_compute[251992]: </domain>
Dec 06 07:25:07 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.900 251996 DEBUG nova.compute.manager [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Preparing to wait for external event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.900 251996 DEBUG oslo_concurrency.lockutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.900 251996 DEBUG oslo_concurrency.lockutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.901 251996 DEBUG oslo_concurrency.lockutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.901 251996 DEBUG nova.virt.libvirt.vif [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:24:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-57853796',display_name='tempest-ServerDiskConfigTestJSON-server-57853796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-57853796',id=102,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:24:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-psuria6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-Serve
rDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:25:06Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.901 251996 DEBUG nova.network.os_vif_util [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.902 251996 DEBUG nova.network.os_vif_util [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.902 251996 DEBUG os_vif [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.903 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.903 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.903 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.905 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.906 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43f29a7e-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.906 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43f29a7e-fd, col_values=(('external_ids', {'iface-id': '43f29a7e-fdfe-4bc7-b164-e60a29234bc2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:e8:fd', 'vm-uuid': '2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.907 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:07 compute-0 NetworkManager[48965]: <info>  [1765005907.9089] manager: (tap43f29a7e-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.910 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.913 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.914 251996 INFO os_vif [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd')
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.960 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.960 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.961 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No VIF found with MAC fa:16:3e:ee:e8:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.961 251996 INFO nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Using config drive
Dec 06 07:25:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 231 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 277 KiB/s rd, 126 KiB/s wr, 59 op/s
Dec 06 07:25:07 compute-0 nova_compute[251992]: 2025-12-06 07:25:07.996 251996 DEBUG nova.storage.rbd_utils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:08 compute-0 nova_compute[251992]: 2025-12-06 07:25:08.023 251996 DEBUG nova.objects.instance [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:08 compute-0 nova_compute[251992]: 2025-12-06 07:25:08.057 251996 DEBUG nova.objects.instance [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'keypairs' on Instance uuid 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1281897683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1999743962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/904149066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:08 compute-0 nova_compute[251992]: 2025-12-06 07:25:08.904 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:08.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:09 compute-0 nova_compute[251992]: 2025-12-06 07:25:09.103 251996 INFO nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Creating config drive at /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/disk.config
Dec 06 07:25:09 compute-0 nova_compute[251992]: 2025-12-06 07:25:09.107 251996 DEBUG oslo_concurrency.processutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbzs74o31 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:09 compute-0 sshd-session[316145]: Connection reset by authenticating user root 45.140.17.124 port 27224 [preauth]
Dec 06 07:25:09 compute-0 nova_compute[251992]: 2025-12-06 07:25:09.242 251996 DEBUG oslo_concurrency.processutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbzs74o31" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:09.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:09 compute-0 nova_compute[251992]: 2025-12-06 07:25:09.273 251996 DEBUG nova.storage.rbd_utils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] rbd image 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:25:09 compute-0 nova_compute[251992]: 2025-12-06 07:25:09.276 251996 DEBUG oslo_concurrency.processutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/disk.config 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:09 compute-0 ceph-mon[74339]: pgmap v2024: 305 pgs: 305 active+clean; 231 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 277 KiB/s rd, 126 KiB/s wr, 59 op/s
Dec 06 07:25:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1966551518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3840847270' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:25:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3840847270' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:25:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 275 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 3.1 MiB/s wr, 122 op/s
Dec 06 07:25:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:10.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2880801786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:11 compute-0 ceph-mon[74339]: pgmap v2025: 305 pgs: 305 active+clean; 275 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 3.1 MiB/s wr, 122 op/s
Dec 06 07:25:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:11.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:11 compute-0 nova_compute[251992]: 2025-12-06 07:25:11.521 251996 DEBUG oslo_concurrency.processutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/disk.config 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:11 compute-0 nova_compute[251992]: 2025-12-06 07:25:11.522 251996 INFO nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Deleting local config drive /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8/disk.config because it was imported into RBD.
Dec 06 07:25:11 compute-0 sshd-session[316247]: Connection reset by authenticating user root 45.140.17.124 port 27242 [preauth]
Dec 06 07:25:11 compute-0 kernel: tap43f29a7e-fd: entered promiscuous mode
Dec 06 07:25:11 compute-0 NetworkManager[48965]: <info>  [1765005911.5727] manager: (tap43f29a7e-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/190)
Dec 06 07:25:11 compute-0 ovn_controller[147168]: 2025-12-06T07:25:11Z|00371|binding|INFO|Claiming lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for this chassis.
Dec 06 07:25:11 compute-0 ovn_controller[147168]: 2025-12-06T07:25:11Z|00372|binding|INFO|43f29a7e-fdfe-4bc7-b164-e60a29234bc2: Claiming fa:16:3e:ee:e8:fd 10.100.0.6
Dec 06 07:25:11 compute-0 nova_compute[251992]: 2025-12-06 07:25:11.573 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:11 compute-0 ovn_controller[147168]: 2025-12-06T07:25:11Z|00373|binding|INFO|Setting lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 ovn-installed in OVS
Dec 06 07:25:11 compute-0 nova_compute[251992]: 2025-12-06 07:25:11.591 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:11 compute-0 systemd-udevd[316263]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:25:11 compute-0 NetworkManager[48965]: <info>  [1765005911.6164] device (tap43f29a7e-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:25:11 compute-0 NetworkManager[48965]: <info>  [1765005911.6170] device (tap43f29a7e-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:25:11 compute-0 systemd-machined[212986]: New machine qemu-47-instance-00000066.
Dec 06 07:25:11 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-00000066.
Dec 06 07:25:11 compute-0 ovn_controller[147168]: 2025-12-06T07:25:11Z|00374|binding|INFO|Setting lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 up in Southbound
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.665 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:e8:fd 10.100.0.6'], port_security=['fa:16:3e:ee:e8:fd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '7', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=43f29a7e-fdfe-4bc7-b164-e60a29234bc2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.666 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a bound to our chassis
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.668 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.678 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6940d911-10c6-4d4e-8f65-8c55b03af950]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.678 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c014e4e-a1 in ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.681 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c014e4e-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.681 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e93cbc00-758f-4f87-81a3-22c2f7919a43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.682 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[be01e591-7f63-49f2-8173-2eaf6c127e18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.692 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[953b64ba-318a-4c41-97b8-1c9e4c8112c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.704 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0e00121f-6e2d-4f22-ae80-c040033b249c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.733 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[5560d75b-e62f-47d7-8ea2-fa95d338847f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.739 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[46635a96-d66b-4822-ae0b-3eb5e4448331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 NetworkManager[48965]: <info>  [1765005911.7403] manager: (tap7c014e4e-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/191)
Dec 06 07:25:11 compute-0 systemd-udevd[316267]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.773 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a26f308b-2ecb-4957-b307-fb19e01ee1d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.775 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[19ccdcd6-ac38-4cf0-826f-9ac3dcb21e92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 NetworkManager[48965]: <info>  [1765005911.7954] device (tap7c014e4e-a0): carrier: link connected
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.800 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[bb474983-1e67-4236-9f84-68837b72c0a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.815 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[56ecfd15-8065-4cf6-9a18-610a0224f5ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618438, 'reachable_time': 18560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316299, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.830 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd68972-f27e-4c3f-8f1d-2cf233da3416]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:141c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 618438, 'tstamp': 618438}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316300, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.847 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0a8800-8170-4ba0-b048-92231685da44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618438, 'reachable_time': 18560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316301, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.874 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[18029368-1125-4a69-90ad-762240e3dcb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.929 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cb13dfb0-bc10-43bc-8028-d32b6d82abf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.931 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.931 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.932 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c014e4e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:11 compute-0 nova_compute[251992]: 2025-12-06 07:25:11.934 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:11 compute-0 NetworkManager[48965]: <info>  [1765005911.9349] manager: (tap7c014e4e-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Dec 06 07:25:11 compute-0 kernel: tap7c014e4e-a0: entered promiscuous mode
Dec 06 07:25:11 compute-0 nova_compute[251992]: 2025-12-06 07:25:11.937 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.938 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c014e4e-a0, col_values=(('external_ids', {'iface-id': 'd8dd1a7d-045a-42a3-8829-567c43985ae0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:11 compute-0 nova_compute[251992]: 2025-12-06 07:25:11.939 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:11 compute-0 ovn_controller[147168]: 2025-12-06T07:25:11Z|00375|binding|INFO|Releasing lport d8dd1a7d-045a-42a3-8829-567c43985ae0 from this chassis (sb_readonly=0)
Dec 06 07:25:11 compute-0 nova_compute[251992]: 2025-12-06 07:25:11.960 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.961 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.962 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[63f68f4d-f80f-434b-926e-649e1ee90f1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.963 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:25:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:11.964 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'env', 'PROCESS_TAG=haproxy-7c014e4e-a182-4f60-8285-20525bc99e5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c014e4e-a182-4f60-8285-20525bc99e5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:25:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 327 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 236 KiB/s rd, 4.9 MiB/s wr, 144 op/s
Dec 06 07:25:12 compute-0 podman[316369]: 2025-12-06 07:25:12.313768982 +0000 UTC m=+0.047096834 container create 5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:25:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3014821163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2845940397' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:12 compute-0 ceph-mon[74339]: pgmap v2026: 305 pgs: 305 active+clean; 327 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 236 KiB/s rd, 4.9 MiB/s wr, 144 op/s
Dec 06 07:25:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1981577113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:12 compute-0 systemd[1]: Started libpod-conmon-5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39.scope.
Dec 06 07:25:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1267671f765e00b5a51bf8ae21b0b0c2cf195016db74ad9b34369546e26d336a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:12 compute-0 podman[316369]: 2025-12-06 07:25:12.28943602 +0000 UTC m=+0.022763892 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:25:12 compute-0 podman[316369]: 2025-12-06 07:25:12.388060898 +0000 UTC m=+0.121388770 container init 5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 06 07:25:12 compute-0 podman[316369]: 2025-12-06 07:25:12.394027453 +0000 UTC m=+0.127355305 container start 5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:25:12 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[316385]: [NOTICE]   (316389) : New worker (316391) forked
Dec 06 07:25:12 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[316385]: [NOTICE]   (316389) : Loading success.
Dec 06 07:25:12 compute-0 nova_compute[251992]: 2025-12-06 07:25:12.612 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:25:12 compute-0 nova_compute[251992]: 2025-12-06 07:25:12.613 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005912.6113746, 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:25:12 compute-0 nova_compute[251992]: 2025-12-06 07:25:12.613 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] VM Started (Lifecycle Event)
Dec 06 07:25:12 compute-0 nova_compute[251992]: 2025-12-06 07:25:12.638 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:12 compute-0 nova_compute[251992]: 2025-12-06 07:25:12.642 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005912.6117163, 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:25:12 compute-0 nova_compute[251992]: 2025-12-06 07:25:12.642 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] VM Paused (Lifecycle Event)
Dec 06 07:25:12 compute-0 nova_compute[251992]: 2025-12-06 07:25:12.677 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:12 compute-0 nova_compute[251992]: 2025-12-06 07:25:12.680 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:25:12 compute-0 nova_compute[251992]: 2025-12-06 07:25:12.714 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:25:12 compute-0 nova_compute[251992]: 2025-12-06 07:25:12.908 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:12.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:25:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:25:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:25:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:25:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:25:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:25:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:13.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.315 251996 DEBUG nova.compute.manager [req-6497d959-aca7-48a4-9d01-6595654eb8c1 req-5d29a94c-47a4-4392-98a9-e1c7ab0eeefa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.315 251996 DEBUG oslo_concurrency.lockutils [req-6497d959-aca7-48a4-9d01-6595654eb8c1 req-5d29a94c-47a4-4392-98a9-e1c7ab0eeefa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.316 251996 DEBUG oslo_concurrency.lockutils [req-6497d959-aca7-48a4-9d01-6595654eb8c1 req-5d29a94c-47a4-4392-98a9-e1c7ab0eeefa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.316 251996 DEBUG oslo_concurrency.lockutils [req-6497d959-aca7-48a4-9d01-6595654eb8c1 req-5d29a94c-47a4-4392-98a9-e1c7ab0eeefa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.316 251996 DEBUG nova.compute.manager [req-6497d959-aca7-48a4-9d01-6595654eb8c1 req-5d29a94c-47a4-4392-98a9-e1c7ab0eeefa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Processing event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.316 251996 DEBUG nova.compute.manager [req-6497d959-aca7-48a4-9d01-6595654eb8c1 req-5d29a94c-47a4-4392-98a9-e1c7ab0eeefa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.317 251996 DEBUG oslo_concurrency.lockutils [req-6497d959-aca7-48a4-9d01-6595654eb8c1 req-5d29a94c-47a4-4392-98a9-e1c7ab0eeefa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.317 251996 DEBUG oslo_concurrency.lockutils [req-6497d959-aca7-48a4-9d01-6595654eb8c1 req-5d29a94c-47a4-4392-98a9-e1c7ab0eeefa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.317 251996 DEBUG oslo_concurrency.lockutils [req-6497d959-aca7-48a4-9d01-6595654eb8c1 req-5d29a94c-47a4-4392-98a9-e1c7ab0eeefa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.317 251996 DEBUG nova.compute.manager [req-6497d959-aca7-48a4-9d01-6595654eb8c1 req-5d29a94c-47a4-4392-98a9-e1c7ab0eeefa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] No waiting events found dispatching network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.318 251996 WARNING nova.compute.manager [req-6497d959-aca7-48a4-9d01-6595654eb8c1 req-5d29a94c-47a4-4392-98a9-e1c7ab0eeefa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received unexpected event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for instance with vm_state active and task_state rebuild_spawning.
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.318 251996 DEBUG nova.compute.manager [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.322 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005913.321804, 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.322 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] VM Resumed (Lifecycle Event)
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.324 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.326 251996 INFO nova.virt.libvirt.driver [-] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Instance spawned successfully.
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.327 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.595 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.599 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.600 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.600 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.601 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.601 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.602 251996 DEBUG nova.virt.libvirt.driver [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.606 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.774 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.822 251996 DEBUG nova.compute.manager [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4110808184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1268851066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/534500885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:13 compute-0 nova_compute[251992]: 2025-12-06 07:25:13.906 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 342 MiB data, 880 MiB used, 20 GiB / 21 GiB avail; 132 KiB/s rd, 5.1 MiB/s wr, 139 op/s
Dec 06 07:25:14 compute-0 nova_compute[251992]: 2025-12-06 07:25:14.030 251996 DEBUG oslo_concurrency.lockutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:14 compute-0 nova_compute[251992]: 2025-12-06 07:25:14.031 251996 DEBUG oslo_concurrency.lockutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:14 compute-0 nova_compute[251992]: 2025-12-06 07:25:14.031 251996 DEBUG nova.objects.instance [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:25:14 compute-0 nova_compute[251992]: 2025-12-06 07:25:14.097 251996 DEBUG oslo_concurrency.lockutils [None req-958bd860-e4a1-4806-91f1-01803d374cfb d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:14.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:15 compute-0 ceph-mon[74339]: pgmap v2027: 305 pgs: 305 active+clean; 342 MiB data, 880 MiB used, 20 GiB / 21 GiB avail; 132 KiB/s rd, 5.1 MiB/s wr, 139 op/s
Dec 06 07:25:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:25:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:15.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:25:15 compute-0 nova_compute[251992]: 2025-12-06 07:25:15.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:25:15 compute-0 nova_compute[251992]: 2025-12-06 07:25:15.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:25:15 compute-0 nova_compute[251992]: 2025-12-06 07:25:15.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:25:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 380 MiB data, 908 MiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 6.9 MiB/s wr, 176 op/s
Dec 06 07:25:16 compute-0 nova_compute[251992]: 2025-12-06 07:25:16.011 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:25:16 compute-0 nova_compute[251992]: 2025-12-06 07:25:16.012 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:25:16 compute-0 nova_compute[251992]: 2025-12-06 07:25:16.013 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:25:16 compute-0 nova_compute[251992]: 2025-12-06 07:25:16.013 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 00f56c62-f327-41e3-a105-24f56ae124c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:16 compute-0 ceph-mon[74339]: pgmap v2028: 305 pgs: 305 active+clean; 380 MiB data, 908 MiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 6.9 MiB/s wr, 176 op/s
Dec 06 07:25:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:16.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:17.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:17 compute-0 nova_compute[251992]: 2025-12-06 07:25:17.911 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 380 MiB data, 908 MiB used, 20 GiB / 21 GiB avail; 626 KiB/s rd, 6.8 MiB/s wr, 147 op/s
Dec 06 07:25:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:25:18
Dec 06 07:25:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:25:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:25:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'backups', 'images', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes']
Dec 06 07:25:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:25:18 compute-0 nova_compute[251992]: 2025-12-06 07:25:18.909 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:18.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:19 compute-0 ceph-mon[74339]: pgmap v2029: 305 pgs: 305 active+clean; 380 MiB data, 908 MiB used, 20 GiB / 21 GiB avail; 626 KiB/s rd, 6.8 MiB/s wr, 147 op/s
Dec 06 07:25:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:19.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.376 251996 DEBUG oslo_concurrency.lockutils [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.376 251996 DEBUG oslo_concurrency.lockutils [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.377 251996 DEBUG oslo_concurrency.lockutils [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.377 251996 DEBUG oslo_concurrency.lockutils [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.377 251996 DEBUG oslo_concurrency.lockutils [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.378 251996 INFO nova.compute.manager [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Terminating instance
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.379 251996 DEBUG nova.compute.manager [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:25:19 compute-0 kernel: tap43f29a7e-fd (unregistering): left promiscuous mode
Dec 06 07:25:19 compute-0 NetworkManager[48965]: <info>  [1765005919.6134] device (tap43f29a7e-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:25:19 compute-0 ovn_controller[147168]: 2025-12-06T07:25:19Z|00376|binding|INFO|Releasing lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 from this chassis (sb_readonly=0)
Dec 06 07:25:19 compute-0 ovn_controller[147168]: 2025-12-06T07:25:19Z|00377|binding|INFO|Setting lport 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 down in Southbound
Dec 06 07:25:19 compute-0 ovn_controller[147168]: 2025-12-06T07:25:19Z|00378|binding|INFO|Removing iface tap43f29a7e-fd ovn-installed in OVS
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.621 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.622 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.635 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:e8:fd 10.100.0.6'], port_security=['fa:16:3e:ee:e8:fd 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '8', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=43f29a7e-fdfe-4bc7-b164-e60a29234bc2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.637 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 43f29a7e-fdfe-4bc7-b164-e60a29234bc2 in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a unbound from our chassis
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.638 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.639 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c014e4e-a182-4f60-8285-20525bc99e5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.640 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8f051b75-07c2-4822-b8ad-8ef848c95ed0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.640 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace which is not needed anymore
Dec 06 07:25:19 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000066.scope: Deactivated successfully.
Dec 06 07:25:19 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000066.scope: Consumed 6.867s CPU time.
Dec 06 07:25:19 compute-0 systemd-machined[212986]: Machine qemu-47-instance-00000066 terminated.
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.696 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updating instance_info_cache with network_info: [{"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.743 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.744 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:25:19 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[316385]: [NOTICE]   (316389) : haproxy version is 2.8.14-c23fe91
Dec 06 07:25:19 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[316385]: [NOTICE]   (316389) : path to executable is /usr/sbin/haproxy
Dec 06 07:25:19 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[316385]: [WARNING]  (316389) : Exiting Master process...
Dec 06 07:25:19 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[316385]: [ALERT]    (316389) : Current worker (316391) exited with code 143 (Terminated)
Dec 06 07:25:19 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[316385]: [WARNING]  (316389) : All workers exited. Exiting... (0)
Dec 06 07:25:19 compute-0 systemd[1]: libpod-5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39.scope: Deactivated successfully.
Dec 06 07:25:19 compute-0 podman[316432]: 2025-12-06 07:25:19.768210773 +0000 UTC m=+0.043803282 container died 5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:25:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1267671f765e00b5a51bf8ae21b0b0c2cf195016db74ad9b34369546e26d336a-merged.mount: Deactivated successfully.
Dec 06 07:25:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39-userdata-shm.mount: Deactivated successfully.
Dec 06 07:25:19 compute-0 podman[316432]: 2025-12-06 07:25:19.813262149 +0000 UTC m=+0.088854658 container cleanup 5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:25:19 compute-0 systemd[1]: libpod-conmon-5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39.scope: Deactivated successfully.
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.825 251996 INFO nova.virt.libvirt.driver [-] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Instance destroyed successfully.
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.825 251996 DEBUG nova.objects.instance [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'resources' on Instance uuid 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:19 compute-0 podman[316473]: 2025-12-06 07:25:19.877839336 +0000 UTC m=+0.041173179 container remove 5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.886 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[14d4c622-1ded-4074-93f3-e8cbc6ac6bf7]: (4, ('Sat Dec  6 07:25:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39)\n5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39\nSat Dec  6 07:25:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39)\n5808475a4134d9c79c5f1546403267d5174dee825cd2607f5968d21be70b3d39\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.887 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb58db1-814f-453d-b8ec-f36c97cefbab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.889 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.891 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:19 compute-0 kernel: tap7c014e4e-a0: left promiscuous mode
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.908 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.912 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0ebffc-cedc-48d5-8700-bde9c3ce8365]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.926 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[22c7ddc3-0116-45c3-9bb2-753fb48b0527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.927 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d5ff2c-801a-4462-9c74-ab5f226a8019]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.942 251996 DEBUG nova.virt.libvirt.vif [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:24:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-57853796',display_name='tempest-ServerDiskConfigTestJSON-server-57853796',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-57853796',id=102,image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:25:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-psuria6w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='412dd61d-1b1e-439f-b7f9-7e7c4e42924c',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:25:14Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.943 251996 DEBUG nova.network.os_vif_util [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "address": "fa:16:3e:ee:e8:fd", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43f29a7e-fd", "ovs_interfaceid": "43f29a7e-fdfe-4bc7-b164-e60a29234bc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.944 251996 DEBUG nova.network.os_vif_util [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.944 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[381c3ef2-d93c-4504-94ec-66a38259d20a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 618431, 'reachable_time': 28312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316493, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.944 251996 DEBUG os_vif [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.946 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:25:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:19.947 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[f09500e5-1ab1-4691-9099-4c9d174b4427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.948 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.948 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43f29a7e-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c014e4e\x2da182\x2d4f60\x2d8285\x2d20525bc99e5a.mount: Deactivated successfully.
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.950 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.952 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:25:19 compute-0 nova_compute[251992]: 2025-12-06 07:25:19.955 251996 INFO os_vif [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:e8:fd,bridge_name='br-int',has_traffic_filtering=True,id=43f29a7e-fdfe-4bc7-b164-e60a29234bc2,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43f29a7e-fd')
Dec 06 07:25:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 386 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.2 MiB/s wr, 291 op/s
Dec 06 07:25:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:20.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:21.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:21 compute-0 nova_compute[251992]: 2025-12-06 07:25:21.351 251996 DEBUG nova.compute.manager [req-bffe1ca5-bbcb-4ef9-8409-51365aa2ca9a req-aa7c2803-3e2d-4dcd-8d27-1d2efac863ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-unplugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:21 compute-0 nova_compute[251992]: 2025-12-06 07:25:21.351 251996 DEBUG oslo_concurrency.lockutils [req-bffe1ca5-bbcb-4ef9-8409-51365aa2ca9a req-aa7c2803-3e2d-4dcd-8d27-1d2efac863ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:21 compute-0 nova_compute[251992]: 2025-12-06 07:25:21.351 251996 DEBUG oslo_concurrency.lockutils [req-bffe1ca5-bbcb-4ef9-8409-51365aa2ca9a req-aa7c2803-3e2d-4dcd-8d27-1d2efac863ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:21 compute-0 nova_compute[251992]: 2025-12-06 07:25:21.352 251996 DEBUG oslo_concurrency.lockutils [req-bffe1ca5-bbcb-4ef9-8409-51365aa2ca9a req-aa7c2803-3e2d-4dcd-8d27-1d2efac863ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:21 compute-0 nova_compute[251992]: 2025-12-06 07:25:21.352 251996 DEBUG nova.compute.manager [req-bffe1ca5-bbcb-4ef9-8409-51365aa2ca9a req-aa7c2803-3e2d-4dcd-8d27-1d2efac863ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] No waiting events found dispatching network-vif-unplugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:25:21 compute-0 nova_compute[251992]: 2025-12-06 07:25:21.352 251996 DEBUG nova.compute.manager [req-bffe1ca5-bbcb-4ef9-8409-51365aa2ca9a req-aa7c2803-3e2d-4dcd-8d27-1d2efac863ea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-unplugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:25:21 compute-0 ceph-mon[74339]: pgmap v2030: 305 pgs: 305 active+clean; 386 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.2 MiB/s wr, 291 op/s
Dec 06 07:25:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 386 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.2 MiB/s wr, 290 op/s
Dec 06 07:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9220 writes, 41K keys, 9212 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 9220 writes, 9212 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1447 writes, 6414 keys, 1447 commit groups, 1.0 writes per commit group, ingest: 9.94 MB, 0.02 MB/s
                                           Interval WAL: 1447 writes, 1447 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     46.9      1.11              0.17        24    0.046       0      0       0.0       0.0
                                             L6      1/0    8.86 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1    103.4     86.0      2.50              0.70        23    0.109    137K    13K       0.0       0.0
                                            Sum      1/0    8.86 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1     71.5     73.9      3.61              0.86        47    0.077    137K    13K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.3     48.4     48.2      1.20              0.18        10    0.120     37K   2530       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    103.4     86.0      2.50              0.70        23    0.109    137K    13K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     47.0      1.11              0.17        23    0.048       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.051, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.26 GB write, 0.07 MB/s write, 0.25 GB read, 0.07 MB/s read, 3.6 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 28.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000241 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1680,27.75 MB,9.12729%) FilterBlock(48,392.11 KB,0.12596%) IndexBlock(48,667.28 KB,0.214356%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 07:25:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:22.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:23 compute-0 ceph-mon[74339]: pgmap v2031: 305 pgs: 305 active+clean; 386 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.2 MiB/s wr, 290 op/s
Dec 06 07:25:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:23.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:25:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:25:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:25:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:25:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:25:23 compute-0 nova_compute[251992]: 2025-12-06 07:25:23.920 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 386 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.3 MiB/s wr, 254 op/s
Dec 06 07:25:24 compute-0 ceph-mon[74339]: pgmap v2032: 305 pgs: 305 active+clean; 386 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.3 MiB/s wr, 254 op/s
Dec 06 07:25:24 compute-0 nova_compute[251992]: 2025-12-06 07:25:24.339 251996 DEBUG nova.compute.manager [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:24 compute-0 nova_compute[251992]: 2025-12-06 07:25:24.340 251996 DEBUG oslo_concurrency.lockutils [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:24 compute-0 nova_compute[251992]: 2025-12-06 07:25:24.340 251996 DEBUG oslo_concurrency.lockutils [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:24 compute-0 nova_compute[251992]: 2025-12-06 07:25:24.341 251996 DEBUG oslo_concurrency.lockutils [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:24 compute-0 nova_compute[251992]: 2025-12-06 07:25:24.341 251996 DEBUG nova.compute.manager [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] No waiting events found dispatching network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:25:24 compute-0 nova_compute[251992]: 2025-12-06 07:25:24.341 251996 WARNING nova.compute.manager [req-e6378134-4133-4535-ae25-c7c52734a900 req-a9f51d50-0691-4b3d-876f-c7f781468bdc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received unexpected event network-vif-plugged-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 for instance with vm_state active and task_state deleting.
Dec 06 07:25:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:25:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:25:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:25:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:25:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:25:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:24.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:24 compute-0 nova_compute[251992]: 2025-12-06 07:25:24.977 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:25 compute-0 nova_compute[251992]: 2025-12-06 07:25:25.022 251996 INFO nova.virt.libvirt.driver [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Deleting instance files /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_del
Dec 06 07:25:25 compute-0 nova_compute[251992]: 2025-12-06 07:25:25.023 251996 INFO nova.virt.libvirt.driver [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Deletion of /var/lib/nova/instances/2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8_del complete
Dec 06 07:25:25 compute-0 nova_compute[251992]: 2025-12-06 07:25:25.112 251996 INFO nova.compute.manager [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Took 5.73 seconds to destroy the instance on the hypervisor.
Dec 06 07:25:25 compute-0 nova_compute[251992]: 2025-12-06 07:25:25.113 251996 DEBUG oslo.service.loopingcall [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:25:25 compute-0 nova_compute[251992]: 2025-12-06 07:25:25.113 251996 DEBUG nova.compute.manager [-] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:25:25 compute-0 nova_compute[251992]: 2025-12-06 07:25:25.113 251996 DEBUG nova.network.neutron [-] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:25:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:25:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:25.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:25:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3300964011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:25 compute-0 podman[316516]: 2025-12-06 07:25:25.446155559 +0000 UTC m=+0.104869152 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008316321189279732 of space, bias 1.0, pg target 2.4948963567839195 quantized to 32 (current 32)
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027081297692164525 quantized to 32 (current 32)
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Dec 06 07:25:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 376 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.1 MiB/s wr, 261 op/s
Dec 06 07:25:26 compute-0 nova_compute[251992]: 2025-12-06 07:25:26.203 251996 DEBUG nova.network.neutron [-] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:25:26 compute-0 nova_compute[251992]: 2025-12-06 07:25:26.228 251996 INFO nova.compute.manager [-] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Took 1.12 seconds to deallocate network for instance.
Dec 06 07:25:26 compute-0 nova_compute[251992]: 2025-12-06 07:25:26.267 251996 DEBUG oslo_concurrency.lockutils [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:26 compute-0 nova_compute[251992]: 2025-12-06 07:25:26.267 251996 DEBUG oslo_concurrency.lockutils [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1548510838' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1295261342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:26 compute-0 ceph-mon[74339]: pgmap v2033: 305 pgs: 305 active+clean; 376 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.1 MiB/s wr, 261 op/s
Dec 06 07:25:26 compute-0 nova_compute[251992]: 2025-12-06 07:25:26.379 251996 DEBUG oslo_concurrency.processutils [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:26 compute-0 sudo[316542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:26 compute-0 sudo[316542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:26 compute-0 sudo[316542]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:26 compute-0 nova_compute[251992]: 2025-12-06 07:25:26.445 251996 DEBUG nova.compute.manager [req-26b1720e-2a8d-4def-97ad-8cd5e188693f req-93e20844-6fc9-4a21-a1ea-3bcf3f3916c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Received event network-vif-deleted-43f29a7e-fdfe-4bc7-b164-e60a29234bc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:25:26 compute-0 sudo[316568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:26 compute-0 sudo[316568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:26 compute-0 sudo[316568]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:25:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3204653884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:26.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:26 compute-0 nova_compute[251992]: 2025-12-06 07:25:26.943 251996 DEBUG oslo_concurrency.processutils [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:26 compute-0 nova_compute[251992]: 2025-12-06 07:25:26.951 251996 DEBUG nova.compute.provider_tree [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:25:26 compute-0 nova_compute[251992]: 2025-12-06 07:25:26.981 251996 DEBUG nova.scheduler.client.report [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:25:27 compute-0 nova_compute[251992]: 2025-12-06 07:25:27.010 251996 DEBUG oslo_concurrency.lockutils [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:27 compute-0 nova_compute[251992]: 2025-12-06 07:25:27.048 251996 INFO nova.scheduler.client.report [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Deleted allocations for instance 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8
Dec 06 07:25:27 compute-0 nova_compute[251992]: 2025-12-06 07:25:27.122 251996 DEBUG oslo_concurrency.lockutils [None req-d183924a-20b3-4746-9910-75b0529d8843 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:27.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2019194234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3204653884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 376 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 401 KiB/s wr, 218 op/s
Dec 06 07:25:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:28 compute-0 ceph-mon[74339]: pgmap v2034: 305 pgs: 305 active+clean; 376 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 401 KiB/s wr, 218 op/s
Dec 06 07:25:28 compute-0 nova_compute[251992]: 2025-12-06 07:25:28.924 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:28.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:29.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:29 compute-0 podman[316616]: 2025-12-06 07:25:29.386922067 +0000 UTC m=+0.048385820 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:25:29 compute-0 podman[316617]: 2025-12-06 07:25:29.394420614 +0000 UTC m=+0.053805789 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:25:29 compute-0 nova_compute[251992]: 2025-12-06 07:25:29.979 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 395 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.4 MiB/s wr, 304 op/s
Dec 06 07:25:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:30.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:31.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:31 compute-0 ceph-mon[74339]: pgmap v2035: 305 pgs: 305 active+clean; 395 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.4 MiB/s wr, 304 op/s
Dec 06 07:25:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 456 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 231 op/s
Dec 06 07:25:32 compute-0 ceph-mon[74339]: pgmap v2036: 305 pgs: 305 active+clean; 456 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 231 op/s
Dec 06 07:25:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:32.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:33.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:33 compute-0 nova_compute[251992]: 2025-12-06 07:25:33.926 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 457 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.7 MiB/s wr, 220 op/s
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.141 251996 DEBUG nova.compute.manager [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.234 251996 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.235 251996 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.257 251996 DEBUG nova.objects.instance [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'pci_requests' on Instance uuid f32ea15c-cf80-482c-9f9a-22392bc79e78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.281 251996 DEBUG nova.virt.hardware [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.281 251996 INFO nova.compute.claims [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.282 251996 DEBUG nova.objects.instance [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'resources' on Instance uuid f32ea15c-cf80-482c-9f9a-22392bc79e78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.297 251996 DEBUG nova.objects.instance [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'pci_devices' on Instance uuid f32ea15c-cf80-482c-9f9a-22392bc79e78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.360 251996 INFO nova.compute.resource_tracker [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating resource usage from migration 4e7568f7-0b1a-4d39-a986-35ec8e8330d4
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.361 251996 DEBUG nova.compute.resource_tracker [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Starting to track incoming migration 4e7568f7-0b1a-4d39-a986-35ec8e8330d4 with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.481 251996 DEBUG oslo_concurrency.processutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.822 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765005919.8202698, 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.823 251996 INFO nova.compute.manager [-] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] VM Stopped (Lifecycle Event)
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.840 251996 DEBUG nova.compute.manager [None req-5295e246-54a8-4114-868d-a061c702dee1 - - - - - -] [instance: 2ccfb1e1-ce2f-4e24-9fab-b379d7e357e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:25:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:25:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4074039243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.932 251996 DEBUG oslo_concurrency.processutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.938 251996 DEBUG nova.compute.provider_tree [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:25:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:34.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.953 251996 DEBUG nova.scheduler.client.report [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:25:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/996845718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3612033471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:34 compute-0 ceph-mon[74339]: pgmap v2037: 305 pgs: 305 active+clean; 457 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.7 MiB/s wr, 220 op/s
Dec 06 07:25:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3723579441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1114235426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.980 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.987 251996 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:34 compute-0 nova_compute[251992]: 2025-12-06 07:25:34.988 251996 INFO nova.compute.manager [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Migrating
Dec 06 07:25:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:35.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 465 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 274 op/s
Dec 06 07:25:36 compute-0 sshd-session[316679]: Accepted publickey for nova from 192.168.122.101 port 33682 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 07:25:36 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Dec 06 07:25:36 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec 06 07:25:36 compute-0 systemd-logind[798]: New session 56 of user nova.
Dec 06 07:25:36 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec 06 07:25:36 compute-0 systemd[1]: Starting User Manager for UID 42436...
Dec 06 07:25:36 compute-0 systemd[316683]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:25:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:36.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:36 compute-0 systemd[316683]: Queued start job for default target Main User Target.
Dec 06 07:25:37 compute-0 systemd[316683]: Created slice User Application Slice.
Dec 06 07:25:37 compute-0 systemd[316683]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:25:37 compute-0 systemd[316683]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 07:25:37 compute-0 systemd[316683]: Reached target Paths.
Dec 06 07:25:37 compute-0 systemd[316683]: Reached target Timers.
Dec 06 07:25:37 compute-0 systemd[316683]: Starting D-Bus User Message Bus Socket...
Dec 06 07:25:37 compute-0 systemd[316683]: Starting Create User's Volatile Files and Directories...
Dec 06 07:25:37 compute-0 systemd[316683]: Listening on D-Bus User Message Bus Socket.
Dec 06 07:25:37 compute-0 systemd[316683]: Reached target Sockets.
Dec 06 07:25:37 compute-0 systemd[316683]: Finished Create User's Volatile Files and Directories.
Dec 06 07:25:37 compute-0 systemd[316683]: Reached target Basic System.
Dec 06 07:25:37 compute-0 systemd[316683]: Reached target Main User Target.
Dec 06 07:25:37 compute-0 systemd[316683]: Startup finished in 141ms.
Dec 06 07:25:37 compute-0 systemd[1]: Started User Manager for UID 42436.
Dec 06 07:25:37 compute-0 systemd[1]: Started Session 56 of User nova.
Dec 06 07:25:37 compute-0 sshd-session[316679]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:25:37 compute-0 sshd-session[316698]: Received disconnect from 192.168.122.101 port 33682:11: disconnected by user
Dec 06 07:25:37 compute-0 sshd-session[316698]: Disconnected from user nova 192.168.122.101 port 33682
Dec 06 07:25:37 compute-0 sshd-session[316679]: pam_unix(sshd:session): session closed for user nova
Dec 06 07:25:37 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Dec 06 07:25:37 compute-0 systemd-logind[798]: Session 56 logged out. Waiting for processes to exit.
Dec 06 07:25:37 compute-0 systemd-logind[798]: Removed session 56.
Dec 06 07:25:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:37.117 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:25:37 compute-0 nova_compute[251992]: 2025-12-06 07:25:37.117 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:37.119 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:25:37 compute-0 sshd-session[316700]: Accepted publickey for nova from 192.168.122.101 port 33698 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 07:25:37 compute-0 systemd-logind[798]: New session 58 of user nova.
Dec 06 07:25:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:37.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:37 compute-0 systemd[1]: Started Session 58 of User nova.
Dec 06 07:25:37 compute-0 sshd-session[316700]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:25:37 compute-0 sshd-session[316703]: Received disconnect from 192.168.122.101 port 33698:11: disconnected by user
Dec 06 07:25:37 compute-0 sshd-session[316703]: Disconnected from user nova 192.168.122.101 port 33698
Dec 06 07:25:37 compute-0 sshd-session[316700]: pam_unix(sshd:session): session closed for user nova
Dec 06 07:25:37 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Dec 06 07:25:37 compute-0 systemd-logind[798]: Session 58 logged out. Waiting for processes to exit.
Dec 06 07:25:37 compute-0 systemd-logind[798]: Removed session 58.
Dec 06 07:25:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 465 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 262 op/s
Dec 06 07:25:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4074039243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:38.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:38 compute-0 nova_compute[251992]: 2025-12-06 07:25:38.994 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:39 compute-0 ceph-mon[74339]: pgmap v2038: 305 pgs: 305 active+clean; 465 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 274 op/s
Dec 06 07:25:39 compute-0 ceph-mon[74339]: pgmap v2039: 305 pgs: 305 active+clean; 465 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 262 op/s
Dec 06 07:25:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:39.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:39 compute-0 nova_compute[251992]: 2025-12-06 07:25:39.982 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 467 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 277 op/s
Dec 06 07:25:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:40.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:41 compute-0 ceph-mon[74339]: pgmap v2040: 305 pgs: 305 active+clean; 467 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 277 op/s
Dec 06 07:25:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:41.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 467 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 236 op/s
Dec 06 07:25:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:42.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:25:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:25:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:25:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:25:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:25:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:25:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:43.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:43 compute-0 ceph-mon[74339]: pgmap v2041: 305 pgs: 305 active+clean; 467 MiB data, 996 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.7 MiB/s wr, 236 op/s
Dec 06 07:25:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 470 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 754 KiB/s wr, 240 op/s
Dec 06 07:25:44 compute-0 nova_compute[251992]: 2025-12-06 07:25:44.041 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:44 compute-0 sudo[316708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:44 compute-0 sudo[316708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:44 compute-0 sudo[316708]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:44 compute-0 sudo[316733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:25:44 compute-0 sudo[316733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:44 compute-0 sudo[316733]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:44 compute-0 sudo[316758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:44 compute-0 sudo[316758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:44 compute-0 sudo[316758]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:44 compute-0 sudo[316783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:25:44 compute-0 sudo[316783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:44 compute-0 nova_compute[251992]: 2025-12-06 07:25:44.684 251996 DEBUG nova.compute.manager [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Dec 06 07:25:44 compute-0 nova_compute[251992]: 2025-12-06 07:25:44.821 251996 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:25:44 compute-0 nova_compute[251992]: 2025-12-06 07:25:44.821 251996 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:25:44 compute-0 nova_compute[251992]: 2025-12-06 07:25:44.878 251996 DEBUG nova.objects.instance [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_requests' on Instance uuid ea8c0005-4b7a-4697-89ae-91f4bef22e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:44 compute-0 nova_compute[251992]: 2025-12-06 07:25:44.911 251996 DEBUG nova.virt.hardware [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:25:44 compute-0 nova_compute[251992]: 2025-12-06 07:25:44.911 251996 INFO nova.compute.claims [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:25:44 compute-0 nova_compute[251992]: 2025-12-06 07:25:44.911 251996 DEBUG nova.objects.instance [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'resources' on Instance uuid ea8c0005-4b7a-4697-89ae-91f4bef22e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:44 compute-0 nova_compute[251992]: 2025-12-06 07:25:44.923 251996 DEBUG nova.objects.instance [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'pci_devices' on Instance uuid ea8c0005-4b7a-4697-89ae-91f4bef22e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:25:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:44.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:44 compute-0 sudo[316783]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:44 compute-0 nova_compute[251992]: 2025-12-06 07:25:44.981 251996 INFO nova.compute.resource_tracker [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating resource usage from migration 2799ffd6-1eb2-4617-b3b6-b7669cd5f78c
Dec 06 07:25:44 compute-0 nova_compute[251992]: 2025-12-06 07:25:44.982 251996 DEBUG nova.compute.resource_tracker [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Starting to track incoming migration 2799ffd6-1eb2-4617-b3b6-b7669cd5f78c with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Dec 06 07:25:44 compute-0 nova_compute[251992]: 2025-12-06 07:25:44.984 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:45 compute-0 nova_compute[251992]: 2025-12-06 07:25:45.138 251996 DEBUG oslo_concurrency.processutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:25:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 07:25:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:25:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:25:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:25:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:25:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:25:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:25:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:45.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:25:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451184587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:45 compute-0 nova_compute[251992]: 2025-12-06 07:25:45.561 251996 DEBUG oslo_concurrency.processutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:25:45 compute-0 nova_compute[251992]: 2025-12-06 07:25:45.566 251996 DEBUG nova.compute.provider_tree [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:25:45 compute-0 nova_compute[251992]: 2025-12-06 07:25:45.770 251996 DEBUG nova.scheduler.client.report [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:25:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 477 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.3 MiB/s wr, 223 op/s
Dec 06 07:25:46 compute-0 nova_compute[251992]: 2025-12-06 07:25:46.075 251996 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:25:46 compute-0 nova_compute[251992]: 2025-12-06 07:25:46.075 251996 INFO nova.compute.manager [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Migrating
Dec 06 07:25:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1747949989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:25:46 compute-0 ceph-mon[74339]: pgmap v2042: 305 pgs: 305 active+clean; 470 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 754 KiB/s wr, 240 op/s
Dec 06 07:25:46 compute-0 sudo[316862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:46 compute-0 sudo[316862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:46 compute-0 sudo[316862]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:46 compute-0 sudo[316888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:46 compute-0 sudo[316888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:46 compute-0 sudo[316888]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:25:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:46.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:25:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:25:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9154332d-0896-4aad-be92-c29ab4425af2 does not exist
Dec 06 07:25:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7aa13201-a5b8-4a6d-8291-0cc579f31fb8 does not exist
Dec 06 07:25:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c6d0b2e2-de3b-41df-bf0b-b1f693298bdb does not exist
Dec 06 07:25:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:25:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:25:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:25:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:25:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:25:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:25:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:25:47.122 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:25:47 compute-0 sudo[316913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:47 compute-0 sudo[316913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:47 compute-0 sudo[316913]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:47 compute-0 sudo[316938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:25:47 compute-0 sudo[316938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:47 compute-0 sudo[316938]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:47 compute-0 sudo[316963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:47 compute-0 sudo[316963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:47 compute-0 sudo[316963]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:47 compute-0 sudo[316988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:25:47 compute-0 sudo[316988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:25:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:47.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:25:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:25:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:25:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:25:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2451184587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:25:47 compute-0 ceph-mon[74339]: pgmap v2043: 305 pgs: 305 active+clean; 477 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.3 MiB/s wr, 223 op/s
Dec 06 07:25:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:25:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:25:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:25:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:25:47 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Dec 06 07:25:47 compute-0 systemd[316683]: Activating special unit Exit the Session...
Dec 06 07:25:47 compute-0 systemd[316683]: Stopped target Main User Target.
Dec 06 07:25:47 compute-0 systemd[316683]: Stopped target Basic System.
Dec 06 07:25:47 compute-0 systemd[316683]: Stopped target Paths.
Dec 06 07:25:47 compute-0 systemd[316683]: Stopped target Sockets.
Dec 06 07:25:47 compute-0 systemd[316683]: Stopped target Timers.
Dec 06 07:25:47 compute-0 systemd[316683]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:25:47 compute-0 systemd[316683]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 07:25:47 compute-0 systemd[316683]: Closed D-Bus User Message Bus Socket.
Dec 06 07:25:47 compute-0 systemd[316683]: Stopped Create User's Volatile Files and Directories.
Dec 06 07:25:47 compute-0 systemd[316683]: Removed slice User Application Slice.
Dec 06 07:25:47 compute-0 systemd[316683]: Reached target Shutdown.
Dec 06 07:25:47 compute-0 systemd[316683]: Finished Exit the Session.
Dec 06 07:25:47 compute-0 systemd[316683]: Reached target Exit the Session.
Dec 06 07:25:47 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Dec 06 07:25:47 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Dec 06 07:25:47 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec 06 07:25:47 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec 06 07:25:47 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec 06 07:25:47 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec 06 07:25:47 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Dec 06 07:25:47 compute-0 podman[317053]: 2025-12-06 07:25:47.645908596 +0000 UTC m=+0.061484482 container create 8b8f54f4bb007b87ca6c6c2856eef25208f75c65c32686459d74ea1afa7e2628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_tesla, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:25:47 compute-0 podman[317053]: 2025-12-06 07:25:47.61823422 +0000 UTC m=+0.033810166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:25:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 477 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 169 op/s
Dec 06 07:25:48 compute-0 systemd[1]: Started libpod-conmon-8b8f54f4bb007b87ca6c6c2856eef25208f75c65c32686459d74ea1afa7e2628.scope.
Dec 06 07:25:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:25:48 compute-0 podman[317053]: 2025-12-06 07:25:48.138382699 +0000 UTC m=+0.553958615 container init 8b8f54f4bb007b87ca6c6c2856eef25208f75c65c32686459d74ea1afa7e2628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:25:48 compute-0 podman[317053]: 2025-12-06 07:25:48.146484633 +0000 UTC m=+0.562060519 container start 8b8f54f4bb007b87ca6c6c2856eef25208f75c65c32686459d74ea1afa7e2628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:25:48 compute-0 podman[317053]: 2025-12-06 07:25:48.150991538 +0000 UTC m=+0.566567464 container attach 8b8f54f4bb007b87ca6c6c2856eef25208f75c65c32686459d74ea1afa7e2628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_tesla, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:25:48 compute-0 beautiful_tesla[317070]: 167 167
Dec 06 07:25:48 compute-0 systemd[1]: libpod-8b8f54f4bb007b87ca6c6c2856eef25208f75c65c32686459d74ea1afa7e2628.scope: Deactivated successfully.
Dec 06 07:25:48 compute-0 podman[317053]: 2025-12-06 07:25:48.154493425 +0000 UTC m=+0.570069311 container died 8b8f54f4bb007b87ca6c6c2856eef25208f75c65c32686459d74ea1afa7e2628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_tesla, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1ee898b9832247019851b9479132edf8dbb861080f1bd7f99877003a262360f-merged.mount: Deactivated successfully.
Dec 06 07:25:48 compute-0 podman[317053]: 2025-12-06 07:25:48.212762697 +0000 UTC m=+0.628338583 container remove 8b8f54f4bb007b87ca6c6c2856eef25208f75c65c32686459d74ea1afa7e2628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_tesla, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:25:48 compute-0 systemd[1]: libpod-conmon-8b8f54f4bb007b87ca6c6c2856eef25208f75c65c32686459d74ea1afa7e2628.scope: Deactivated successfully.
Dec 06 07:25:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:48 compute-0 ceph-mon[74339]: pgmap v2044: 305 pgs: 305 active+clean; 477 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 169 op/s
Dec 06 07:25:48 compute-0 podman[317094]: 2025-12-06 07:25:48.379572771 +0000 UTC m=+0.038003052 container create 488cc6b467225c54a1b3ce228b15f8dbd000a529f1424de8e3b4cf4b80045639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:25:48 compute-0 systemd[1]: Started libpod-conmon-488cc6b467225c54a1b3ce228b15f8dbd000a529f1424de8e3b4cf4b80045639.scope.
Dec 06 07:25:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d15020cd02ddab34857e855af1ee92e05f1dc4d013af97fa40d2e76da3ef04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d15020cd02ddab34857e855af1ee92e05f1dc4d013af97fa40d2e76da3ef04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d15020cd02ddab34857e855af1ee92e05f1dc4d013af97fa40d2e76da3ef04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d15020cd02ddab34857e855af1ee92e05f1dc4d013af97fa40d2e76da3ef04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d15020cd02ddab34857e855af1ee92e05f1dc4d013af97fa40d2e76da3ef04/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:48 compute-0 podman[317094]: 2025-12-06 07:25:48.363721803 +0000 UTC m=+0.022152104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:25:48 compute-0 podman[317094]: 2025-12-06 07:25:48.459639217 +0000 UTC m=+0.118069528 container init 488cc6b467225c54a1b3ce228b15f8dbd000a529f1424de8e3b4cf4b80045639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lichterman, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 07:25:48 compute-0 podman[317094]: 2025-12-06 07:25:48.466562689 +0000 UTC m=+0.124992970 container start 488cc6b467225c54a1b3ce228b15f8dbd000a529f1424de8e3b4cf4b80045639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:25:48 compute-0 podman[317094]: 2025-12-06 07:25:48.470214649 +0000 UTC m=+0.128644960 container attach 488cc6b467225c54a1b3ce228b15f8dbd000a529f1424de8e3b4cf4b80045639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lichterman, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 07:25:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:48.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:49 compute-0 nova_compute[251992]: 2025-12-06 07:25:49.043 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:49 compute-0 mystifying_lichterman[317111]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:25:49 compute-0 mystifying_lichterman[317111]: --> relative data size: 1.0
Dec 06 07:25:49 compute-0 mystifying_lichterman[317111]: --> All data devices are unavailable
Dec 06 07:25:49 compute-0 systemd[1]: libpod-488cc6b467225c54a1b3ce228b15f8dbd000a529f1424de8e3b4cf4b80045639.scope: Deactivated successfully.
Dec 06 07:25:49 compute-0 podman[317094]: 2025-12-06 07:25:49.316623694 +0000 UTC m=+0.975053975 container died 488cc6b467225c54a1b3ce228b15f8dbd000a529f1424de8e3b4cf4b80045639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:25:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:49.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-21d15020cd02ddab34857e855af1ee92e05f1dc4d013af97fa40d2e76da3ef04-merged.mount: Deactivated successfully.
Dec 06 07:25:49 compute-0 podman[317094]: 2025-12-06 07:25:49.736794258 +0000 UTC m=+1.395224539 container remove 488cc6b467225c54a1b3ce228b15f8dbd000a529f1424de8e3b4cf4b80045639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 07:25:49 compute-0 sudo[316988]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:49 compute-0 systemd[1]: libpod-conmon-488cc6b467225c54a1b3ce228b15f8dbd000a529f1424de8e3b4cf4b80045639.scope: Deactivated successfully.
Dec 06 07:25:49 compute-0 sudo[317139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:49 compute-0 sudo[317139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:49 compute-0 sudo[317139]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:49 compute-0 sudo[317164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:25:49 compute-0 sudo[317164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:49 compute-0 sudo[317164]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:49 compute-0 sudo[317189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:49 compute-0 sudo[317189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:49 compute-0 sudo[317189]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:49 compute-0 nova_compute[251992]: 2025-12-06 07:25:49.985 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 488 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Dec 06 07:25:50 compute-0 sudo[317214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:25:50 compute-0 sudo[317214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:50 compute-0 podman[317279]: 2025-12-06 07:25:50.437349278 +0000 UTC m=+0.036697866 container create 842a0d1124722d27a7591a678121686574ee91c314df5d15cf4a321a29ea9243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 07:25:50 compute-0 systemd[1]: Started libpod-conmon-842a0d1124722d27a7591a678121686574ee91c314df5d15cf4a321a29ea9243.scope.
Dec 06 07:25:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:25:50 compute-0 podman[317279]: 2025-12-06 07:25:50.419859885 +0000 UTC m=+0.019208493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:25:50 compute-0 podman[317279]: 2025-12-06 07:25:50.526733921 +0000 UTC m=+0.126082529 container init 842a0d1124722d27a7591a678121686574ee91c314df5d15cf4a321a29ea9243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:25:50 compute-0 podman[317279]: 2025-12-06 07:25:50.532735807 +0000 UTC m=+0.132084395 container start 842a0d1124722d27a7591a678121686574ee91c314df5d15cf4a321a29ea9243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 07:25:50 compute-0 podman[317279]: 2025-12-06 07:25:50.536374437 +0000 UTC m=+0.135723025 container attach 842a0d1124722d27a7591a678121686574ee91c314df5d15cf4a321a29ea9243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_thompson, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:25:50 compute-0 interesting_thompson[317296]: 167 167
Dec 06 07:25:50 compute-0 systemd[1]: libpod-842a0d1124722d27a7591a678121686574ee91c314df5d15cf4a321a29ea9243.scope: Deactivated successfully.
Dec 06 07:25:50 compute-0 podman[317279]: 2025-12-06 07:25:50.538402524 +0000 UTC m=+0.137751112 container died 842a0d1124722d27a7591a678121686574ee91c314df5d15cf4a321a29ea9243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_thompson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:25:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-00b8712aa25aa79e3ec79e0dcf9ebfdc701ee1779f3a80bb809c95beeac4aa00-merged.mount: Deactivated successfully.
Dec 06 07:25:50 compute-0 podman[317279]: 2025-12-06 07:25:50.575327535 +0000 UTC m=+0.174676123 container remove 842a0d1124722d27a7591a678121686574ee91c314df5d15cf4a321a29ea9243 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:25:50 compute-0 systemd[1]: libpod-conmon-842a0d1124722d27a7591a678121686574ee91c314df5d15cf4a321a29ea9243.scope: Deactivated successfully.
Dec 06 07:25:50 compute-0 podman[317322]: 2025-12-06 07:25:50.732653377 +0000 UTC m=+0.037474697 container create fabc5590b739fd4fa0e38c735b7354ca01ad95014489632e564b6468d6dacaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_satoshi, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:25:50 compute-0 systemd[1]: Started libpod-conmon-fabc5590b739fd4fa0e38c735b7354ca01ad95014489632e564b6468d6dacaaa.scope.
Dec 06 07:25:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8577a3b7f2a84095affef7cf2fa7b5e0018220fa549ba1ddd4d6244ca05187/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8577a3b7f2a84095affef7cf2fa7b5e0018220fa549ba1ddd4d6244ca05187/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8577a3b7f2a84095affef7cf2fa7b5e0018220fa549ba1ddd4d6244ca05187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8577a3b7f2a84095affef7cf2fa7b5e0018220fa549ba1ddd4d6244ca05187/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:50 compute-0 podman[317322]: 2025-12-06 07:25:50.799295011 +0000 UTC m=+0.104116351 container init fabc5590b739fd4fa0e38c735b7354ca01ad95014489632e564b6468d6dacaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_satoshi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:25:50 compute-0 podman[317322]: 2025-12-06 07:25:50.806160831 +0000 UTC m=+0.110982151 container start fabc5590b739fd4fa0e38c735b7354ca01ad95014489632e564b6468d6dacaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:25:50 compute-0 podman[317322]: 2025-12-06 07:25:50.810438179 +0000 UTC m=+0.115259489 container attach fabc5590b739fd4fa0e38c735b7354ca01ad95014489632e564b6468d6dacaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:25:50 compute-0 podman[317322]: 2025-12-06 07:25:50.716312996 +0000 UTC m=+0.021134346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:25:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:50.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:51 compute-0 ceph-mon[74339]: pgmap v2045: 305 pgs: 305 active+clean; 488 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Dec 06 07:25:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:51.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]: {
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:     "0": [
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:         {
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             "devices": [
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "/dev/loop3"
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             ],
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             "lv_name": "ceph_lv0",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             "lv_size": "7511998464",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             "name": "ceph_lv0",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             "tags": {
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "ceph.cluster_name": "ceph",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "ceph.crush_device_class": "",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "ceph.encrypted": "0",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "ceph.osd_id": "0",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "ceph.type": "block",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:                 "ceph.vdo": "0"
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             },
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             "type": "block",
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:             "vg_name": "ceph_vg0"
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:         }
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]:     ]
Dec 06 07:25:51 compute-0 reverent_satoshi[317338]: }
Dec 06 07:25:51 compute-0 systemd[1]: libpod-fabc5590b739fd4fa0e38c735b7354ca01ad95014489632e564b6468d6dacaaa.scope: Deactivated successfully.
Dec 06 07:25:51 compute-0 podman[317322]: 2025-12-06 07:25:51.581119589 +0000 UTC m=+0.885940909 container died fabc5590b739fd4fa0e38c735b7354ca01ad95014489632e564b6468d6dacaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_satoshi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d8577a3b7f2a84095affef7cf2fa7b5e0018220fa549ba1ddd4d6244ca05187-merged.mount: Deactivated successfully.
Dec 06 07:25:51 compute-0 podman[317322]: 2025-12-06 07:25:51.637690575 +0000 UTC m=+0.942511895 container remove fabc5590b739fd4fa0e38c735b7354ca01ad95014489632e564b6468d6dacaaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:25:51 compute-0 systemd[1]: libpod-conmon-fabc5590b739fd4fa0e38c735b7354ca01ad95014489632e564b6468d6dacaaa.scope: Deactivated successfully.
Dec 06 07:25:51 compute-0 sudo[317214]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:51 compute-0 sudo[317361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:51 compute-0 sudo[317361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:51 compute-0 sudo[317361]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:51 compute-0 sudo[317386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:25:51 compute-0 sudo[317386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:51 compute-0 sudo[317386]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:51 compute-0 sudo[317411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:51 compute-0 sudo[317411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:51 compute-0 sudo[317411]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:51 compute-0 sudo[317436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:25:51 compute-0 sudo[317436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 492 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 184 op/s
Dec 06 07:25:52 compute-0 podman[317502]: 2025-12-06 07:25:52.198678804 +0000 UTC m=+0.042416164 container create dd5f7bd3e431ea28866bbc9056fd6ad1d8d1c8cc8fd52b1f5d5ceb6ed67d682d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_grothendieck, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Dec 06 07:25:52 compute-0 systemd[1]: Started libpod-conmon-dd5f7bd3e431ea28866bbc9056fd6ad1d8d1c8cc8fd52b1f5d5ceb6ed67d682d.scope.
Dec 06 07:25:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:25:52 compute-0 podman[317502]: 2025-12-06 07:25:52.181612392 +0000 UTC m=+0.025349752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:25:52 compute-0 podman[317502]: 2025-12-06 07:25:52.279791898 +0000 UTC m=+0.123529278 container init dd5f7bd3e431ea28866bbc9056fd6ad1d8d1c8cc8fd52b1f5d5ceb6ed67d682d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_grothendieck, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:25:52 compute-0 podman[317502]: 2025-12-06 07:25:52.286338549 +0000 UTC m=+0.130075909 container start dd5f7bd3e431ea28866bbc9056fd6ad1d8d1c8cc8fd52b1f5d5ceb6ed67d682d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_grothendieck, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 07:25:52 compute-0 podman[317502]: 2025-12-06 07:25:52.289726572 +0000 UTC m=+0.133463932 container attach dd5f7bd3e431ea28866bbc9056fd6ad1d8d1c8cc8fd52b1f5d5ceb6ed67d682d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:25:52 compute-0 keen_grothendieck[317518]: 167 167
Dec 06 07:25:52 compute-0 systemd[1]: libpod-dd5f7bd3e431ea28866bbc9056fd6ad1d8d1c8cc8fd52b1f5d5ceb6ed67d682d.scope: Deactivated successfully.
Dec 06 07:25:52 compute-0 podman[317502]: 2025-12-06 07:25:52.291304297 +0000 UTC m=+0.135041657 container died dd5f7bd3e431ea28866bbc9056fd6ad1d8d1c8cc8fd52b1f5d5ceb6ed67d682d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-df3fd2a6c37722bd1a4aa26da8d83cfca042214f7b7914007199f95e685d8469-merged.mount: Deactivated successfully.
Dec 06 07:25:52 compute-0 podman[317502]: 2025-12-06 07:25:52.324951177 +0000 UTC m=+0.168688547 container remove dd5f7bd3e431ea28866bbc9056fd6ad1d8d1c8cc8fd52b1f5d5ceb6ed67d682d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 07:25:52 compute-0 systemd[1]: libpod-conmon-dd5f7bd3e431ea28866bbc9056fd6ad1d8d1c8cc8fd52b1f5d5ceb6ed67d682d.scope: Deactivated successfully.
Dec 06 07:25:52 compute-0 ceph-mon[74339]: pgmap v2046: 305 pgs: 305 active+clean; 492 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 184 op/s
Dec 06 07:25:52 compute-0 podman[317542]: 2025-12-06 07:25:52.467756978 +0000 UTC m=+0.022854283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:25:52 compute-0 sshd-session[317557]: Accepted publickey for nova from 192.168.122.102 port 54020 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 07:25:52 compute-0 systemd-logind[798]: New session 59 of user nova.
Dec 06 07:25:52 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Dec 06 07:25:52 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec 06 07:25:52 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec 06 07:25:52 compute-0 systemd[1]: Starting User Manager for UID 42436...
Dec 06 07:25:52 compute-0 systemd[317561]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:25:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:52.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:53 compute-0 systemd[317561]: Queued start job for default target Main User Target.
Dec 06 07:25:53 compute-0 systemd[317561]: Created slice User Application Slice.
Dec 06 07:25:53 compute-0 systemd[317561]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:25:53 compute-0 systemd[317561]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 07:25:53 compute-0 systemd[317561]: Reached target Paths.
Dec 06 07:25:53 compute-0 systemd[317561]: Reached target Timers.
Dec 06 07:25:53 compute-0 systemd[317561]: Starting D-Bus User Message Bus Socket...
Dec 06 07:25:53 compute-0 systemd[317561]: Starting Create User's Volatile Files and Directories...
Dec 06 07:25:53 compute-0 systemd[317561]: Finished Create User's Volatile Files and Directories.
Dec 06 07:25:53 compute-0 systemd[317561]: Listening on D-Bus User Message Bus Socket.
Dec 06 07:25:53 compute-0 systemd[317561]: Reached target Sockets.
Dec 06 07:25:53 compute-0 systemd[317561]: Reached target Basic System.
Dec 06 07:25:53 compute-0 systemd[317561]: Reached target Main User Target.
Dec 06 07:25:53 compute-0 systemd[317561]: Startup finished in 141ms.
Dec 06 07:25:53 compute-0 systemd[1]: Started User Manager for UID 42436.
Dec 06 07:25:53 compute-0 systemd[1]: Started Session 59 of User nova.
Dec 06 07:25:53 compute-0 sshd-session[317557]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:25:53 compute-0 sshd-session[317576]: Received disconnect from 192.168.122.102 port 54020:11: disconnected by user
Dec 06 07:25:53 compute-0 sshd-session[317576]: Disconnected from user nova 192.168.122.102 port 54020
Dec 06 07:25:53 compute-0 sshd-session[317557]: pam_unix(sshd:session): session closed for user nova
Dec 06 07:25:53 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Dec 06 07:25:53 compute-0 systemd-logind[798]: Session 59 logged out. Waiting for processes to exit.
Dec 06 07:25:53 compute-0 systemd-logind[798]: Removed session 59.
Dec 06 07:25:53 compute-0 podman[317542]: 2025-12-06 07:25:53.136445497 +0000 UTC m=+0.691542762 container create 0a9e623e05970659be46579b27a973a994850f5f8320f37b4f1f718c4b451901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 07:25:53 compute-0 systemd[1]: Started libpod-conmon-0a9e623e05970659be46579b27a973a994850f5f8320f37b4f1f718c4b451901.scope.
Dec 06 07:25:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415db2f36502c1bd1f2fb63ca2440c7e0c558d5cffa66accba0ab0c064429b9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415db2f36502c1bd1f2fb63ca2440c7e0c558d5cffa66accba0ab0c064429b9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415db2f36502c1bd1f2fb63ca2440c7e0c558d5cffa66accba0ab0c064429b9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415db2f36502c1bd1f2fb63ca2440c7e0c558d5cffa66accba0ab0c064429b9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:25:53 compute-0 sshd-session[317578]: Accepted publickey for nova from 192.168.122.102 port 54036 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 07:25:53 compute-0 systemd-logind[798]: New session 61 of user nova.
Dec 06 07:25:53 compute-0 systemd[1]: Started Session 61 of User nova.
Dec 06 07:25:53 compute-0 sshd-session[317578]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 07:25:53 compute-0 podman[317542]: 2025-12-06 07:25:53.288058751 +0000 UTC m=+0.843156036 container init 0a9e623e05970659be46579b27a973a994850f5f8320f37b4f1f718c4b451901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 07:25:53 compute-0 podman[317542]: 2025-12-06 07:25:53.298083068 +0000 UTC m=+0.853180333 container start 0a9e623e05970659be46579b27a973a994850f5f8320f37b4f1f718c4b451901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:25:53 compute-0 podman[317542]: 2025-12-06 07:25:53.30284797 +0000 UTC m=+0.857945245 container attach 0a9e623e05970659be46579b27a973a994850f5f8320f37b4f1f718c4b451901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 07:25:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:53 compute-0 sshd-session[317586]: Received disconnect from 192.168.122.102 port 54036:11: disconnected by user
Dec 06 07:25:53 compute-0 sshd-session[317586]: Disconnected from user nova 192.168.122.102 port 54036
Dec 06 07:25:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:53.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:53 compute-0 sshd-session[317578]: pam_unix(sshd:session): session closed for user nova
Dec 06 07:25:53 compute-0 systemd[1]: session-61.scope: Deactivated successfully.
Dec 06 07:25:53 compute-0 systemd-logind[798]: Session 61 logged out. Waiting for processes to exit.
Dec 06 07:25:53 compute-0 systemd-logind[798]: Removed session 61.
Dec 06 07:25:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 506 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.2 MiB/s wr, 169 op/s
Dec 06 07:25:54 compute-0 nova_compute[251992]: 2025-12-06 07:25:54.044 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:54 compute-0 mystifying_satoshi[317582]: {
Dec 06 07:25:54 compute-0 mystifying_satoshi[317582]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:25:54 compute-0 mystifying_satoshi[317582]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:25:54 compute-0 mystifying_satoshi[317582]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:25:54 compute-0 mystifying_satoshi[317582]:         "osd_id": 0,
Dec 06 07:25:54 compute-0 mystifying_satoshi[317582]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:25:54 compute-0 mystifying_satoshi[317582]:         "type": "bluestore"
Dec 06 07:25:54 compute-0 mystifying_satoshi[317582]:     }
Dec 06 07:25:54 compute-0 mystifying_satoshi[317582]: }
Dec 06 07:25:54 compute-0 systemd[1]: libpod-0a9e623e05970659be46579b27a973a994850f5f8320f37b4f1f718c4b451901.scope: Deactivated successfully.
Dec 06 07:25:54 compute-0 podman[317542]: 2025-12-06 07:25:54.146859748 +0000 UTC m=+1.701957013 container died 0a9e623e05970659be46579b27a973a994850f5f8320f37b4f1f718c4b451901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 07:25:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-415db2f36502c1bd1f2fb63ca2440c7e0c558d5cffa66accba0ab0c064429b9b-merged.mount: Deactivated successfully.
Dec 06 07:25:54 compute-0 podman[317542]: 2025-12-06 07:25:54.200388599 +0000 UTC m=+1.755485864 container remove 0a9e623e05970659be46579b27a973a994850f5f8320f37b4f1f718c4b451901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 07:25:54 compute-0 systemd[1]: libpod-conmon-0a9e623e05970659be46579b27a973a994850f5f8320f37b4f1f718c4b451901.scope: Deactivated successfully.
Dec 06 07:25:54 compute-0 sudo[317436]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:25:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:54.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:54 compute-0 nova_compute[251992]: 2025-12-06 07:25:54.987 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:55 compute-0 ceph-mon[74339]: pgmap v2047: 305 pgs: 305 active+clean; 506 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.2 MiB/s wr, 169 op/s
Dec 06 07:25:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:25:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:25:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:55.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:25:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 002b9d90-a959-470e-b391-c54c0ebf0343 does not exist
Dec 06 07:25:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c7eb3d26-a4ae-4ad2-8fdc-c0bfc6fe6da1 does not exist
Dec 06 07:25:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6bb58952-4f4c-4b49-8968-4a6c8755da58 does not exist
Dec 06 07:25:55 compute-0 sudo[317622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:25:55 compute-0 sudo[317622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:55 compute-0 sudo[317622]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:55 compute-0 sudo[317653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:25:55 compute-0 sudo[317653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:25:55 compute-0 sudo[317653]: pam_unix(sudo:session): session closed for user root
Dec 06 07:25:55 compute-0 podman[317646]: 2025-12-06 07:25:55.941874337 +0000 UTC m=+0.079837791 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:25:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 537 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.1 MiB/s wr, 123 op/s
Dec 06 07:25:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:56.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:25:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:25:57 compute-0 ceph-mon[74339]: pgmap v2048: 305 pgs: 305 active+clean; 537 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.1 MiB/s wr, 123 op/s
Dec 06 07:25:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:57.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 537 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 387 KiB/s rd, 4.4 MiB/s wr, 89 op/s
Dec 06 07:25:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:25:58 compute-0 ceph-mon[74339]: pgmap v2049: 305 pgs: 305 active+clean; 537 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 387 KiB/s rd, 4.4 MiB/s wr, 89 op/s
Dec 06 07:25:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:25:58.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:59 compute-0 nova_compute[251992]: 2025-12-06 07:25:59.046 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:59 compute-0 ovn_controller[147168]: 2025-12-06T07:25:59Z|00379|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 07:25:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:25:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:25:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:25:59.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:25:59 compute-0 nova_compute[251992]: 2025-12-06 07:25:59.988 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:25:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 542 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 457 KiB/s rd, 4.9 MiB/s wr, 103 op/s
Dec 06 07:26:00 compute-0 ceph-mon[74339]: pgmap v2050: 305 pgs: 305 active+clean; 542 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 457 KiB/s rd, 4.9 MiB/s wr, 103 op/s
Dec 06 07:26:00 compute-0 podman[317702]: 2025-12-06 07:26:00.414033414 +0000 UTC m=+0.070360718 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:26:00 compute-0 podman[317701]: 2025-12-06 07:26:00.421987844 +0000 UTC m=+0.069771131 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:26:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:00.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:01.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 546 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 741 KiB/s rd, 4.2 MiB/s wr, 118 op/s
Dec 06 07:26:02 compute-0 ceph-mon[74339]: pgmap v2051: 305 pgs: 305 active+clean; 546 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 741 KiB/s rd, 4.2 MiB/s wr, 118 op/s
Dec 06 07:26:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1219810440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:02 compute-0 nova_compute[251992]: 2025-12-06 07:26:02.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:02 compute-0 nova_compute[251992]: 2025-12-06 07:26:02.720 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:02 compute-0 nova_compute[251992]: 2025-12-06 07:26:02.721 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:02 compute-0 nova_compute[251992]: 2025-12-06 07:26:02.721 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:02 compute-0 nova_compute[251992]: 2025-12-06 07:26:02.721 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:26:02 compute-0 nova_compute[251992]: 2025-12-06 07:26:02.721 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:02.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:26:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/429288119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.153 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.304 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.304 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:26:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:03.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.469 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.470 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4226MB free_disk=20.69593048095703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.470 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.470 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.545 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Migration for instance ea8c0005-4b7a-4697-89ae-91f4bef22e36 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.545 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Migration for instance f32ea15c-cf80-482c-9f9a-22392bc79e78 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 06 07:26:03 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Dec 06 07:26:03 compute-0 systemd[317561]: Activating special unit Exit the Session...
Dec 06 07:26:03 compute-0 systemd[317561]: Stopped target Main User Target.
Dec 06 07:26:03 compute-0 systemd[317561]: Stopped target Basic System.
Dec 06 07:26:03 compute-0 systemd[317561]: Stopped target Paths.
Dec 06 07:26:03 compute-0 systemd[317561]: Stopped target Sockets.
Dec 06 07:26:03 compute-0 systemd[317561]: Stopped target Timers.
Dec 06 07:26:03 compute-0 systemd[317561]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:26:03 compute-0 systemd[317561]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 07:26:03 compute-0 systemd[317561]: Closed D-Bus User Message Bus Socket.
Dec 06 07:26:03 compute-0 systemd[317561]: Stopped Create User's Volatile Files and Directories.
Dec 06 07:26:03 compute-0 systemd[317561]: Removed slice User Application Slice.
Dec 06 07:26:03 compute-0 systemd[317561]: Reached target Shutdown.
Dec 06 07:26:03 compute-0 systemd[317561]: Finished Exit the Session.
Dec 06 07:26:03 compute-0 systemd[317561]: Reached target Exit the Session.
Dec 06 07:26:03 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Dec 06 07:26:03 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Dec 06 07:26:03 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec 06 07:26:03 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec 06 07:26:03 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec 06 07:26:03 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec 06 07:26:03 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.675 251996 INFO nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating resource usage from migration 2799ffd6-1eb2-4617-b3b6-b7669cd5f78c
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.675 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Starting to track incoming migration 2799ffd6-1eb2-4617-b3b6-b7669cd5f78c with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.675 251996 INFO nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating resource usage from migration 4e7568f7-0b1a-4d39-a986-35ec8e8330d4
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.676 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Starting to track incoming migration 4e7568f7-0b1a-4d39-a986-35ec8e8330d4 with flavor fb97f55a-36c0-42f2-8156-c1b04eb23dd0 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.714 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 00f56c62-f327-41e3-a105-24f56ae124c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:26:03 compute-0 nova_compute[251992]: 2025-12-06 07:26:03.792 251996 WARNING nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance f32ea15c-cf80-482c-9f9a-22392bc79e78 has been moved to another host compute-1.ctlplane.example.com(compute-1.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.
Dec 06 07:26:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:03.832 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:03.833 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:03.834 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 770 KiB/s rd, 4.3 MiB/s wr, 136 op/s
Dec 06 07:26:04 compute-0 nova_compute[251992]: 2025-12-06 07:26:04.004 251996 WARNING nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance ea8c0005-4b7a-4697-89ae-91f4bef22e36 has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.
Dec 06 07:26:04 compute-0 nova_compute[251992]: 2025-12-06 07:26:04.004 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:26:04 compute-0 nova_compute[251992]: 2025-12-06 07:26:04.005 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:26:04 compute-0 nova_compute[251992]: 2025-12-06 07:26:04.047 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:04 compute-0 nova_compute[251992]: 2025-12-06 07:26:04.065 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/429288119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/142966572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:26:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2150822182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:04 compute-0 nova_compute[251992]: 2025-12-06 07:26:04.514 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:04 compute-0 nova_compute[251992]: 2025-12-06 07:26:04.520 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:26:04 compute-0 nova_compute[251992]: 2025-12-06 07:26:04.554 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:26:04 compute-0 nova_compute[251992]: 2025-12-06 07:26:04.821 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:26:04 compute-0 nova_compute[251992]: 2025-12-06 07:26:04.822 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:04.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:04 compute-0 nova_compute[251992]: 2025-12-06 07:26:04.990 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:05 compute-0 ceph-mon[74339]: pgmap v2052: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 770 KiB/s rd, 4.3 MiB/s wr, 136 op/s
Dec 06 07:26:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2150822182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:05.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:05 compute-0 nova_compute[251992]: 2025-12-06 07:26:05.822 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:05 compute-0 nova_compute[251992]: 2025-12-06 07:26:05.822 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:05 compute-0 nova_compute[251992]: 2025-12-06 07:26:05.823 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 474 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 775 KiB/s rd, 3.2 MiB/s wr, 141 op/s
Dec 06 07:26:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2324340893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:06 compute-0 ceph-mon[74339]: pgmap v2053: 305 pgs: 305 active+clean; 474 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 775 KiB/s rd, 3.2 MiB/s wr, 141 op/s
Dec 06 07:26:06 compute-0 sudo[317792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:06 compute-0 sudo[317792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:06 compute-0 sudo[317792]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:06 compute-0 nova_compute[251992]: 2025-12-06 07:26:06.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:06 compute-0 nova_compute[251992]: 2025-12-06 07:26:06.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:06 compute-0 sudo[317817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:06 compute-0 sudo[317817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:06 compute-0 sudo[317817]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:06.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:07.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:07 compute-0 nova_compute[251992]: 2025-12-06 07:26:07.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 474 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 743 KiB/s rd, 801 KiB/s wr, 112 op/s
Dec 06 07:26:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:08 compute-0 ceph-mon[74339]: pgmap v2054: 305 pgs: 305 active+clean; 474 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 743 KiB/s rd, 801 KiB/s wr, 112 op/s
Dec 06 07:26:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:08.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:09 compute-0 nova_compute[251992]: 2025-12-06 07:26:09.050 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:09.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:09 compute-0 nova_compute[251992]: 2025-12-06 07:26:09.992 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 514 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 801 KiB/s rd, 1.6 MiB/s wr, 147 op/s
Dec 06 07:26:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2900093873' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3294259920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:26:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3294259920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:26:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:10.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:11 compute-0 nova_compute[251992]: 2025-12-06 07:26:11.070 251996 INFO nova.network.neutron [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating port 0e5e71bc-7098-4091-938e-6299f989917f with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 06 07:26:11 compute-0 ceph-mon[74339]: pgmap v2055: 305 pgs: 305 active+clean; 514 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 801 KiB/s rd, 1.6 MiB/s wr, 147 op/s
Dec 06 07:26:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4230824791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 532 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 733 KiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 06 07:26:12 compute-0 nova_compute[251992]: 2025-12-06 07:26:12.405 251996 DEBUG nova.compute.manager [req-ba61c127-e952-4e85-a3a2-2df7823cd9ef req-4aa2d9d8-ff30-4542-b06e-cc21c1222bf0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-unplugged-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:12 compute-0 nova_compute[251992]: 2025-12-06 07:26:12.406 251996 DEBUG oslo_concurrency.lockutils [req-ba61c127-e952-4e85-a3a2-2df7823cd9ef req-4aa2d9d8-ff30-4542-b06e-cc21c1222bf0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:12 compute-0 nova_compute[251992]: 2025-12-06 07:26:12.406 251996 DEBUG oslo_concurrency.lockutils [req-ba61c127-e952-4e85-a3a2-2df7823cd9ef req-4aa2d9d8-ff30-4542-b06e-cc21c1222bf0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:12 compute-0 nova_compute[251992]: 2025-12-06 07:26:12.406 251996 DEBUG oslo_concurrency.lockutils [req-ba61c127-e952-4e85-a3a2-2df7823cd9ef req-4aa2d9d8-ff30-4542-b06e-cc21c1222bf0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:12 compute-0 nova_compute[251992]: 2025-12-06 07:26:12.406 251996 DEBUG nova.compute.manager [req-ba61c127-e952-4e85-a3a2-2df7823cd9ef req-4aa2d9d8-ff30-4542-b06e-cc21c1222bf0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] No waiting events found dispatching network-vif-unplugged-0e5e71bc-7098-4091-938e-6299f989917f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:12 compute-0 nova_compute[251992]: 2025-12-06 07:26:12.407 251996 WARNING nova.compute.manager [req-ba61c127-e952-4e85-a3a2-2df7823cd9ef req-4aa2d9d8-ff30-4542-b06e-cc21c1222bf0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received unexpected event network-vif-unplugged-0e5e71bc-7098-4091-938e-6299f989917f for instance with vm_state active and task_state resize_migrated.
Dec 06 07:26:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:12.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:26:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:26:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:26:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:26:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:26:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:26:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:13.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:13 compute-0 nova_compute[251992]: 2025-12-06 07:26:13.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:13 compute-0 nova_compute[251992]: 2025-12-06 07:26:13.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:26:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 560 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 260 KiB/s rd, 2.7 MiB/s wr, 112 op/s
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.051 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:14 compute-0 ceph-mon[74339]: pgmap v2056: 305 pgs: 305 active+clean; 532 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 733 KiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.232 251996 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.232 251996 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquired lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.232 251996 DEBUG nova.network.neutron [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.664 251996 DEBUG nova.compute.manager [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-changed-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.664 251996 DEBUG nova.compute.manager [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Refreshing instance network info cache due to event network-changed-0e5e71bc-7098-4091-938e-6299f989917f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.664 251996 DEBUG oslo_concurrency.lockutils [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.767 251996 DEBUG nova.compute.manager [req-32105171-b0e4-4654-8b18-026a18a02586 req-384ac494-9de0-48ea-8b83-59ae13539018 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.768 251996 DEBUG oslo_concurrency.lockutils [req-32105171-b0e4-4654-8b18-026a18a02586 req-384ac494-9de0-48ea-8b83-59ae13539018 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.768 251996 DEBUG oslo_concurrency.lockutils [req-32105171-b0e4-4654-8b18-026a18a02586 req-384ac494-9de0-48ea-8b83-59ae13539018 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.768 251996 DEBUG oslo_concurrency.lockutils [req-32105171-b0e4-4654-8b18-026a18a02586 req-384ac494-9de0-48ea-8b83-59ae13539018 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.768 251996 DEBUG nova.compute.manager [req-32105171-b0e4-4654-8b18-026a18a02586 req-384ac494-9de0-48ea-8b83-59ae13539018 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] No waiting events found dispatching network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.769 251996 WARNING nova.compute.manager [req-32105171-b0e4-4654-8b18-026a18a02586 req-384ac494-9de0-48ea-8b83-59ae13539018 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received unexpected event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f for instance with vm_state active and task_state resize_migrated.
Dec 06 07:26:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:14.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:14 compute-0 nova_compute[251992]: 2025-12-06 07:26:14.993 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:15.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:15 compute-0 ceph-mon[74339]: pgmap v2057: 305 pgs: 305 active+clean; 560 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 260 KiB/s rd, 2.7 MiB/s wr, 112 op/s
Dec 06 07:26:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 173 KiB/s rd, 3.7 MiB/s wr, 98 op/s
Dec 06 07:26:16 compute-0 nova_compute[251992]: 2025-12-06 07:26:16.295 251996 DEBUG nova.network.neutron [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating instance_info_cache with network_info: [{"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:26:16 compute-0 nova_compute[251992]: 2025-12-06 07:26:16.379 251996 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Releasing lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:26:16 compute-0 nova_compute[251992]: 2025-12-06 07:26:16.382 251996 DEBUG oslo_concurrency.lockutils [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:26:16 compute-0 nova_compute[251992]: 2025-12-06 07:26:16.383 251996 DEBUG nova.network.neutron [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Refreshing network info cache for port 0e5e71bc-7098-4091-938e-6299f989917f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:26:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2648319042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1767994758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1647351199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4140271475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:16 compute-0 ceph-mon[74339]: pgmap v2058: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 173 KiB/s rd, 3.7 MiB/s wr, 98 op/s
Dec 06 07:26:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1655734803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:16 compute-0 nova_compute[251992]: 2025-12-06 07:26:16.587 251996 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Dec 06 07:26:16 compute-0 nova_compute[251992]: 2025-12-06 07:26:16.589 251996 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:26:16 compute-0 nova_compute[251992]: 2025-12-06 07:26:16.589 251996 INFO nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Creating image(s)
Dec 06 07:26:16 compute-0 nova_compute[251992]: 2025-12-06 07:26:16.629 251996 DEBUG nova.storage.rbd_utils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] creating snapshot(nova-resize) on rbd image(ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:26:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:16.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:17 compute-0 nova_compute[251992]: 2025-12-06 07:26:17.011 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:26:17 compute-0 nova_compute[251992]: 2025-12-06 07:26:17.012 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:26:17 compute-0 nova_compute[251992]: 2025-12-06 07:26:17.259 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:26:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:17.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Dec 06 07:26:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:18 compute-0 nova_compute[251992]: 2025-12-06 07:26:18.392 251996 DEBUG nova.network.neutron [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updated VIF entry in instance network info cache for port 0e5e71bc-7098-4091-938e-6299f989917f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:26:18 compute-0 nova_compute[251992]: 2025-12-06 07:26:18.393 251996 DEBUG nova.network.neutron [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating instance_info_cache with network_info: [{"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:26:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:26:18
Dec 06 07:26:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:26:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:26:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', 'volumes', 'default.rgw.meta', 'backups', '.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data']
Dec 06 07:26:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:26:18 compute-0 nova_compute[251992]: 2025-12-06 07:26:18.843 251996 DEBUG oslo_concurrency.lockutils [req-7eb65bcd-f467-4d01-aaa3-2cce1dc1c049 req-5255d6a4-2642-495b-bf89-47ee47721429 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-ea8c0005-4b7a-4697-89ae-91f4bef22e36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:26:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:18.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Dec 06 07:26:19 compute-0 nova_compute[251992]: 2025-12-06 07:26:19.052 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:19.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:19 compute-0 ceph-mon[74339]: pgmap v2059: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Dec 06 07:26:19 compute-0 nova_compute[251992]: 2025-12-06 07:26:19.995 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 3.6 MiB/s wr, 71 op/s
Dec 06 07:26:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Dec 06 07:26:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Dec 06 07:26:20 compute-0 ceph-mon[74339]: pgmap v2060: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 3.6 MiB/s wr, 71 op/s
Dec 06 07:26:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:20.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:21.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Dec 06 07:26:22 compute-0 ceph-mon[74339]: osdmap e265: 3 total, 3 up, 3 in
Dec 06 07:26:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:22.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:23.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:26:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:26:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:26:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:26:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:26:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.3 MiB/s wr, 30 op/s
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.054 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.116 251996 DEBUG nova.objects.instance [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'trusted_certs' on Instance uuid ea8c0005-4b7a-4697-89ae-91f4bef22e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:24 compute-0 ceph-mon[74339]: pgmap v2062: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.357 251996 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.357 251996 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Ensure instance console log exists: /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.358 251996 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.358 251996 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.358 251996 DEBUG oslo_concurrency.lockutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.361 251996 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Start _get_guest_xml network_info=[{"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:ec:96:d5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.365 251996 WARNING nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.369 251996 DEBUG nova.virt.libvirt.host [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.370 251996 DEBUG nova.virt.libvirt.host [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.374 251996 DEBUG nova.virt.libvirt.host [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.375 251996 DEBUG nova.virt.libvirt.host [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.376 251996 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.376 251996 DEBUG nova.virt.hardware [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.377 251996 DEBUG nova.virt.hardware [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.377 251996 DEBUG nova.virt.hardware [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.377 251996 DEBUG nova.virt.hardware [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.378 251996 DEBUG nova.virt.hardware [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.378 251996 DEBUG nova.virt.hardware [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.378 251996 DEBUG nova.virt.hardware [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.379 251996 DEBUG nova.virt.hardware [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.379 251996 DEBUG nova.virt.hardware [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.380 251996 DEBUG nova.virt.hardware [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.380 251996 DEBUG nova.virt.hardware [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.380 251996 DEBUG nova.objects.instance [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'vcpu_model' on Instance uuid ea8c0005-4b7a-4697-89ae-91f4bef22e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:26:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:26:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:26:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:26:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.700 251996 DEBUG oslo_concurrency.processutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:24 compute-0 nova_compute[251992]: 2025-12-06 07:26:24.997 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:26:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:25.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:26:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:26:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557758486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.162 251996 DEBUG oslo_concurrency.processutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.200 251996 DEBUG oslo_concurrency.processutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:25.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:25 compute-0 ceph-mon[74339]: pgmap v2063: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.3 MiB/s wr, 30 op/s
Dec 06 07:26:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1557758486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:26:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1733590926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.663 251996 DEBUG oslo_concurrency.processutils [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.665 251996 DEBUG nova.virt.libvirt.vif [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1037063395',display_name='tempest-ServerDiskConfigTestJSON-server-1037063395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1037063395',id=108,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:25:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-qe8qs13c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',
image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:26:10Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=ea8c0005-4b7a-4697-89ae-91f4bef22e36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:ec:96:d5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.666 251996 DEBUG nova.network.os_vif_util [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:ec:96:d5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.667 251996 DEBUG nova.network.os_vif_util [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.670 251996 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:26:25 compute-0 nova_compute[251992]:   <uuid>ea8c0005-4b7a-4697-89ae-91f4bef22e36</uuid>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   <name>instance-0000006c</name>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   <memory>196608</memory>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1037063395</nova:name>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:26:24</nova:creationTime>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <nova:flavor name="m1.micro">
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <nova:memory>192</nova:memory>
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <nova:user uuid="d67c136e82ad4001b000848d75eef50d">tempest-ServerDiskConfigTestJSON-749654875-project-member</nova:user>
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <nova:project uuid="88f5b34244614321a9b6e902eaba0ece">tempest-ServerDiskConfigTestJSON-749654875</nova:project>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <nova:port uuid="0e5e71bc-7098-4091-938e-6299f989917f">
Dec 06 07:26:25 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <system>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <entry name="serial">ea8c0005-4b7a-4697-89ae-91f4bef22e36</entry>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <entry name="uuid">ea8c0005-4b7a-4697-89ae-91f4bef22e36</entry>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     </system>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   <os>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   </os>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   <features>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   </features>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk">
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       </source>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/ea8c0005-4b7a-4697-89ae-91f4bef22e36_disk.config">
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       </source>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:26:25 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:ec:96:d5"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <target dev="tap0e5e71bc-70"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36/console.log" append="off"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <video>
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     </video>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:26:25 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:26:25 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:26:25 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:26:25 compute-0 nova_compute[251992]: </domain>
Dec 06 07:26:25 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.672 251996 DEBUG nova.virt.libvirt.vif [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1037063395',display_name='tempest-ServerDiskConfigTestJSON-server-1037063395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1037063395',id=108,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:25:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-qe8qs13c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',
image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:26:10Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=ea8c0005-4b7a-4697-89ae-91f4bef22e36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:ec:96:d5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.673 251996 DEBUG nova.network.os_vif_util [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "vif_mac": "fa:16:3e:ec:96:d5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.673 251996 DEBUG nova.network.os_vif_util [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.674 251996 DEBUG os_vif [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.675 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.675 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.676 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.681 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.682 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e5e71bc-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.683 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e5e71bc-70, col_values=(('external_ids', {'iface-id': '0e5e71bc-7098-4091-938e-6299f989917f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:96:d5', 'vm-uuid': 'ea8c0005-4b7a-4697-89ae-91f4bef22e36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.685 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:25 compute-0 NetworkManager[48965]: <info>  [1765005985.6866] manager: (tap0e5e71bc-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.689 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.693 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.695 251996 INFO os_vif [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70')
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.783 251996 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.784 251996 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.784 251996 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] No VIF found with MAC fa:16:3e:ec:96:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.784 251996 INFO nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Using config drive
Dec 06 07:26:25 compute-0 kernel: tap0e5e71bc-70: entered promiscuous mode
Dec 06 07:26:25 compute-0 NetworkManager[48965]: <info>  [1765005985.8816] manager: (tap0e5e71bc-70): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.882 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:25 compute-0 ovn_controller[147168]: 2025-12-06T07:26:25Z|00380|binding|INFO|Claiming lport 0e5e71bc-7098-4091-938e-6299f989917f for this chassis.
Dec 06 07:26:25 compute-0 ovn_controller[147168]: 2025-12-06T07:26:25Z|00381|binding|INFO|0e5e71bc-7098-4091-938e-6299f989917f: Claiming fa:16:3e:ec:96:d5 10.100.0.13
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.885 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:25.895 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:96:d5 10.100.0.13'], port_security=['fa:16:3e:ec:96:d5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ea8c0005-4b7a-4697-89ae-91f4bef22e36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '6', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=0e5e71bc-7098-4091-938e-6299f989917f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:26:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:25.896 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 0e5e71bc-7098-4091-938e-6299f989917f in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a bound to our chassis
Dec 06 07:26:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:25.897 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:26:25 compute-0 ovn_controller[147168]: 2025-12-06T07:26:25Z|00382|binding|INFO|Setting lport 0e5e71bc-7098-4091-938e-6299f989917f ovn-installed in OVS
Dec 06 07:26:25 compute-0 ovn_controller[147168]: 2025-12-06T07:26:25Z|00383|binding|INFO|Setting lport 0e5e71bc-7098-4091-938e-6299f989917f up in Southbound
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.900 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:26:25 compute-0 nova_compute[251992]: 2025-12-06 07:26:25.903 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.013821094767820584 of space, bias 1.0, pg target 4.146328430346175 quantized to 32 (current 32)
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002689954401637819 quantized to 32 (current 32)
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:26:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Dec 06 07:26:25 compute-0 systemd-udevd[318016]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:26:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:25.911 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[66bcc352-5d50-43f5-b7ac-3ef726014273]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:25.913 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c014e4e-a1 in ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:26:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:25.915 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c014e4e-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:26:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:25.915 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[955c5079-73cf-4411-9e1b-598dbe453cc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:25.916 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[32d841a1-8577-42b5-a307-2e84bd38ac1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:25 compute-0 systemd-machined[212986]: New machine qemu-48-instance-0000006c.
Dec 06 07:26:25 compute-0 NetworkManager[48965]: <info>  [1765005985.9241] device (tap0e5e71bc-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:26:25 compute-0 NetworkManager[48965]: <info>  [1765005985.9248] device (tap0e5e71bc-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:26:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:25.929 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[243f735f-c8fc-466f-8119-c7b14932263c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:25 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-0000006c.
Dec 06 07:26:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:25.952 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7f046d56-4a0c-46e6-b7ff-2cb404bc89cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:25.981 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2c0c06-ebac-47f2-8eb1-edf4ef2d74db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:25 compute-0 NetworkManager[48965]: <info>  [1765005985.9887] manager: (tap7c014e4e-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/195)
Dec 06 07:26:25 compute-0 systemd-udevd[318021]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:26:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:25.987 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4410edb2-4223-46bf-9eb2-9d4b8e345797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 19 KiB/s wr, 21 op/s
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.020 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ea794a06-70dd-4570-a15a-0319c1743a8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.023 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[b26e04a4-1e74-48fc-a326-d218b30122fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:26 compute-0 NetworkManager[48965]: <info>  [1765005986.0471] device (tap7c014e4e-a0): carrier: link connected
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.052 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4233b351-92b7-4d4b-91e5-844a2a4606f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.067 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[291caac2-c0df-4bf0-b99b-72b96592c72c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625863, 'reachable_time': 34655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318065, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.084 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[403cdf6f-7b6f-4e2b-a17d-2f2895540f9f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:141c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 625863, 'tstamp': 625863}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318070, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.098 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[84143ed3-01b5-4025-b511-88ad6f71a8b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c014e4e-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:14:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625863, 'reachable_time': 34655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318075, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:26 compute-0 podman[318038]: 2025-12-06 07:26:26.115906333 +0000 UTC m=+0.086047751 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.130 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[601f66c5-2dd3-4c25-9e3a-ef2bc9f12bb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.180 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc1a6b0-0bda-4851-9469-b6396d967c35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.181 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.182 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.182 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c014e4e-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.184 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:26 compute-0 kernel: tap7c014e4e-a0: entered promiscuous mode
Dec 06 07:26:26 compute-0 NetworkManager[48965]: <info>  [1765005986.1863] manager: (tap7c014e4e-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.186 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.189 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c014e4e-a0, col_values=(('external_ids', {'iface-id': 'd8dd1a7d-045a-42a3-8829-567c43985ae0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.190 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:26 compute-0 ovn_controller[147168]: 2025-12-06T07:26:26Z|00384|binding|INFO|Releasing lport d8dd1a7d-045a-42a3-8829-567c43985ae0 from this chassis (sb_readonly=0)
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.191 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.192 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.192 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[45cfcb8b-075a-46d2-b0d9-acc4ebfb457a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.193 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/7c014e4e-a182-4f60-8285-20525bc99e5a.pid.haproxy
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 7c014e4e-a182-4f60-8285-20525bc99e5a
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:26:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:26.194 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'env', 'PROCESS_TAG=haproxy-7c014e4e-a182-4f60-8285-20525bc99e5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c014e4e-a182-4f60-8285-20525bc99e5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.204 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:26 compute-0 podman[318130]: 2025-12-06 07:26:26.561979733 +0000 UTC m=+0.068956258 container create afbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 07:26:26 compute-0 systemd[1]: Started libpod-conmon-afbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590.scope.
Dec 06 07:26:26 compute-0 podman[318130]: 2025-12-06 07:26:26.515713084 +0000 UTC m=+0.022689629 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:26:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:26:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c355e307c6f05a144b5526a1e2e5eb8fa918383f8a9126920545f65c78787544/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:26:26 compute-0 podman[318130]: 2025-12-06 07:26:26.633916314 +0000 UTC m=+0.140892869 container init afbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:26:26 compute-0 podman[318130]: 2025-12-06 07:26:26.639492298 +0000 UTC m=+0.146468823 container start afbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:26:26 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[318146]: [NOTICE]   (318168) : New worker (318173) forked
Dec 06 07:26:26 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[318146]: [NOTICE]   (318168) : Loading success.
Dec 06 07:26:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1733590926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:26 compute-0 ceph-mon[74339]: pgmap v2064: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 19 KiB/s wr, 21 op/s
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.750 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005986.750271, ea8c0005-4b7a-4697-89ae-91f4bef22e36 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.751 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] VM Resumed (Lifecycle Event)
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.753 251996 DEBUG nova.compute.manager [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.757 251996 INFO nova.virt.libvirt.driver [-] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Instance running successfully.
Dec 06 07:26:26 compute-0 virtqemud[251613]: argument unsupported: QEMU guest agent is not configured
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.760 251996 DEBUG nova.virt.libvirt.guest [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.761 251996 DEBUG nova.virt.libvirt.driver [None req-c51e62a6-81ad-4fa7-9b2c-22b33e8be37d d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.788 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.793 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:26:26 compute-0 sudo[318185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:26 compute-0 sudo[318185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:26 compute-0 sudo[318185]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.839 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.839 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765005986.751506, ea8c0005-4b7a-4697-89ae-91f4bef22e36 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.839 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] VM Started (Lifecycle Event)
Dec 06 07:26:26 compute-0 sudo[318210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:26 compute-0 sudo[318210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:26 compute-0 sudo[318210]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.953 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:26:26 compute-0 nova_compute[251992]: 2025-12-06 07:26:26.958 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:26:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:27.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:27.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 19 KiB/s wr, 21 op/s
Dec 06 07:26:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:28 compute-0 ceph-mon[74339]: pgmap v2065: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 19 KiB/s wr, 21 op/s
Dec 06 07:26:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:29.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:29 compute-0 nova_compute[251992]: 2025-12-06 07:26:29.056 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:26:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:29.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:26:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 54 KiB/s wr, 113 op/s
Dec 06 07:26:30 compute-0 ceph-mon[74339]: pgmap v2066: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 54 KiB/s wr, 113 op/s
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.686 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.697 251996 DEBUG nova.compute.manager [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.697 251996 DEBUG oslo_concurrency.lockutils [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.698 251996 DEBUG oslo_concurrency.lockutils [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.698 251996 DEBUG oslo_concurrency.lockutils [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.698 251996 DEBUG nova.compute.manager [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] No waiting events found dispatching network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.698 251996 WARNING nova.compute.manager [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received unexpected event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f for instance with vm_state resized and task_state None.
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.698 251996 DEBUG nova.compute.manager [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.699 251996 DEBUG oslo_concurrency.lockutils [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.699 251996 DEBUG oslo_concurrency.lockutils [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.699 251996 DEBUG oslo_concurrency.lockutils [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.699 251996 DEBUG nova.compute.manager [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] No waiting events found dispatching network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:30 compute-0 nova_compute[251992]: 2025-12-06 07:26:30.700 251996 WARNING nova.compute.manager [req-62b88ed4-dec2-4bfd-8865-d2dbf8149fc0 req-ac337f8d-1cb5-454b-a554-0489855c8f56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received unexpected event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f for instance with vm_state resized and task_state None.
Dec 06 07:26:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:31.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:31.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:31 compute-0 podman[318237]: 2025-12-06 07:26:31.392766523 +0000 UTC m=+0.051299300 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true)
Dec 06 07:26:31 compute-0 podman[318238]: 2025-12-06 07:26:31.399089118 +0000 UTC m=+0.056475793 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:26:31 compute-0 nova_compute[251992]: 2025-12-06 07:26:31.618 251996 DEBUG nova.compute.manager [req-65876387-7edd-44dd-83ab-3de8cd8473b9 req-af2ee967-9d69-43da-88b6-1744466d9e23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-unplugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:31 compute-0 nova_compute[251992]: 2025-12-06 07:26:31.619 251996 DEBUG oslo_concurrency.lockutils [req-65876387-7edd-44dd-83ab-3de8cd8473b9 req-af2ee967-9d69-43da-88b6-1744466d9e23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:31 compute-0 nova_compute[251992]: 2025-12-06 07:26:31.619 251996 DEBUG oslo_concurrency.lockutils [req-65876387-7edd-44dd-83ab-3de8cd8473b9 req-af2ee967-9d69-43da-88b6-1744466d9e23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:31 compute-0 nova_compute[251992]: 2025-12-06 07:26:31.620 251996 DEBUG oslo_concurrency.lockutils [req-65876387-7edd-44dd-83ab-3de8cd8473b9 req-af2ee967-9d69-43da-88b6-1744466d9e23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:31 compute-0 nova_compute[251992]: 2025-12-06 07:26:31.620 251996 DEBUG nova.compute.manager [req-65876387-7edd-44dd-83ab-3de8cd8473b9 req-af2ee967-9d69-43da-88b6-1744466d9e23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] No waiting events found dispatching network-vif-unplugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:31 compute-0 nova_compute[251992]: 2025-12-06 07:26:31.621 251996 WARNING nova.compute.manager [req-65876387-7edd-44dd-83ab-3de8cd8473b9 req-af2ee967-9d69-43da-88b6-1744466d9e23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received unexpected event network-vif-unplugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f for instance with vm_state active and task_state resize_migrating.
Dec 06 07:26:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 51 KiB/s wr, 205 op/s
Dec 06 07:26:32 compute-0 ceph-mon[74339]: pgmap v2067: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 51 KiB/s wr, 205 op/s
Dec 06 07:26:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:33.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:33 compute-0 nova_compute[251992]: 2025-12-06 07:26:33.067 251996 INFO nova.network.neutron [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating port fc5020f3-51a1-4ca2-b0b5-ff2add84607f with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 06 07:26:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:33.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 50 KiB/s wr, 237 op/s
Dec 06 07:26:34 compute-0 nova_compute[251992]: 2025-12-06 07:26:34.062 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:34 compute-0 nova_compute[251992]: 2025-12-06 07:26:34.223 251996 DEBUG nova.compute.manager [req-694d4887-e653-4c05-86de-3013aeb27438 req-e4d860d6-9acc-4685-a2a0-26afeb775a79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:34 compute-0 nova_compute[251992]: 2025-12-06 07:26:34.224 251996 DEBUG oslo_concurrency.lockutils [req-694d4887-e653-4c05-86de-3013aeb27438 req-e4d860d6-9acc-4685-a2a0-26afeb775a79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:34 compute-0 nova_compute[251992]: 2025-12-06 07:26:34.224 251996 DEBUG oslo_concurrency.lockutils [req-694d4887-e653-4c05-86de-3013aeb27438 req-e4d860d6-9acc-4685-a2a0-26afeb775a79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:34 compute-0 nova_compute[251992]: 2025-12-06 07:26:34.224 251996 DEBUG oslo_concurrency.lockutils [req-694d4887-e653-4c05-86de-3013aeb27438 req-e4d860d6-9acc-4685-a2a0-26afeb775a79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:34 compute-0 nova_compute[251992]: 2025-12-06 07:26:34.225 251996 DEBUG nova.compute.manager [req-694d4887-e653-4c05-86de-3013aeb27438 req-e4d860d6-9acc-4685-a2a0-26afeb775a79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] No waiting events found dispatching network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:34 compute-0 nova_compute[251992]: 2025-12-06 07:26:34.225 251996 WARNING nova.compute.manager [req-694d4887-e653-4c05-86de-3013aeb27438 req-e4d860d6-9acc-4685-a2a0-26afeb775a79 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received unexpected event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f for instance with vm_state active and task_state resize_migrated.
Dec 06 07:26:34 compute-0 ceph-mon[74339]: pgmap v2068: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 50 KiB/s wr, 237 op/s
Dec 06 07:26:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:35.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:35.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:35 compute-0 nova_compute[251992]: 2025-12-06 07:26:35.471 251996 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:26:35 compute-0 nova_compute[251992]: 2025-12-06 07:26:35.472 251996 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquired lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:26:35 compute-0 nova_compute[251992]: 2025-12-06 07:26:35.472 251996 DEBUG nova.network.neutron [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:26:35 compute-0 nova_compute[251992]: 2025-12-06 07:26:35.688 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:35 compute-0 nova_compute[251992]: 2025-12-06 07:26:35.704 251996 DEBUG nova.compute.manager [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-changed-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:35 compute-0 nova_compute[251992]: 2025-12-06 07:26:35.704 251996 DEBUG nova.compute.manager [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Refreshing instance network info cache due to event network-changed-fc5020f3-51a1-4ca2-b0b5-ff2add84607f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:26:35 compute-0 nova_compute[251992]: 2025-12-06 07:26:35.705 251996 DEBUG oslo_concurrency.lockutils [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:26:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 50 KiB/s wr, 231 op/s
Dec 06 07:26:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:26:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:37.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:26:37 compute-0 ceph-mon[74339]: pgmap v2069: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 50 KiB/s wr, 231 op/s
Dec 06 07:26:37 compute-0 nova_compute[251992]: 2025-12-06 07:26:37.371 251996 DEBUG nova.network.neutron [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating instance_info_cache with network_info: [{"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:26:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:37.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:37 compute-0 nova_compute[251992]: 2025-12-06 07:26:37.610 251996 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Releasing lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:26:37 compute-0 nova_compute[251992]: 2025-12-06 07:26:37.614 251996 DEBUG oslo_concurrency.lockutils [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:26:37 compute-0 nova_compute[251992]: 2025-12-06 07:26:37.614 251996 DEBUG nova.network.neutron [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Refreshing network info cache for port fc5020f3-51a1-4ca2-b0b5-ff2add84607f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:26:37 compute-0 nova_compute[251992]: 2025-12-06 07:26:37.783 251996 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Dec 06 07:26:37 compute-0 nova_compute[251992]: 2025-12-06 07:26:37.785 251996 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:26:37 compute-0 nova_compute[251992]: 2025-12-06 07:26:37.785 251996 INFO nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Creating image(s)
Dec 06 07:26:37 compute-0 nova_compute[251992]: 2025-12-06 07:26:37.820 251996 DEBUG nova.storage.rbd_utils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] creating snapshot(nova-resize) on rbd image(f32ea15c-cf80-482c-9f9a-22392bc79e78_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:26:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 35 KiB/s wr, 227 op/s
Dec 06 07:26:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:38.888 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:26:38 compute-0 nova_compute[251992]: 2025-12-06 07:26:38.889 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:38.890 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:26:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:39.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:39 compute-0 nova_compute[251992]: 2025-12-06 07:26:39.062 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:39 compute-0 ceph-mon[74339]: pgmap v2070: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 35 KiB/s wr, 227 op/s
Dec 06 07:26:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:39.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Dec 06 07:26:39 compute-0 nova_compute[251992]: 2025-12-06 07:26:39.477 251996 DEBUG nova.network.neutron [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updated VIF entry in instance network info cache for port fc5020f3-51a1-4ca2-b0b5-ff2add84607f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:26:39 compute-0 nova_compute[251992]: 2025-12-06 07:26:39.478 251996 DEBUG nova.network.neutron [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating instance_info_cache with network_info: [{"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:26:39 compute-0 nova_compute[251992]: 2025-12-06 07:26:39.521 251996 DEBUG oslo_concurrency.lockutils [req-565dad0d-0baa-44bf-a4b5-87d2d96500da req-45f4721b-5d66-4b07-96e4-d1258a614246 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f32ea15c-cf80-482c-9f9a-22392bc79e78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:26:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 36 KiB/s wr, 248 op/s
Dec 06 07:26:40 compute-0 nova_compute[251992]: 2025-12-06 07:26:40.691 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Dec 06 07:26:40 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Dec 06 07:26:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:41.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:41.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:41 compute-0 ceph-mon[74339]: pgmap v2071: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 36 KiB/s wr, 248 op/s
Dec 06 07:26:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Dec 06 07:26:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:41.892 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 KiB/s wr, 114 op/s
Dec 06 07:26:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Dec 06 07:26:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Dec 06 07:26:42 compute-0 ceph-mon[74339]: osdmap e266: 3 total, 3 up, 3 in
Dec 06 07:26:42 compute-0 ceph-mon[74339]: pgmap v2073: 305 pgs: 305 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 KiB/s wr, 114 op/s
Dec 06 07:26:42 compute-0 ceph-mon[74339]: osdmap e267: 3 total, 3 up, 3 in
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.550 251996 DEBUG nova.objects.instance [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'trusted_certs' on Instance uuid f32ea15c-cf80-482c-9f9a-22392bc79e78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.826 251996 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.827 251996 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Ensure instance console log exists: /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.828 251996 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.829 251996 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.830 251996 DEBUG oslo_concurrency.lockutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.835 251996 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Start _get_guest_xml network_info=[{"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-855821425-network", "vif_mac": "fa:16:3e:39:43:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.840 251996 WARNING nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.848 251996 DEBUG nova.virt.libvirt.host [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.848 251996 DEBUG nova.virt.libvirt.host [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.852 251996 DEBUG nova.virt.libvirt.host [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.852 251996 DEBUG nova.virt.libvirt.host [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.853 251996 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.853 251996 DEBUG nova.virt.hardware [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:24Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='fb97f55a-36c0-42f2-8156-c1b04eb23dd0',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.854 251996 DEBUG nova.virt.hardware [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.854 251996 DEBUG nova.virt.hardware [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.854 251996 DEBUG nova.virt.hardware [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.854 251996 DEBUG nova.virt.hardware [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.855 251996 DEBUG nova.virt.hardware [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.855 251996 DEBUG nova.virt.hardware [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.855 251996 DEBUG nova.virt.hardware [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.855 251996 DEBUG nova.virt.hardware [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.856 251996 DEBUG nova.virt.hardware [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.856 251996 DEBUG nova.virt.hardware [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:26:42 compute-0 nova_compute[251992]: 2025-12-06 07:26:42.856 251996 DEBUG nova.objects.instance [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f32ea15c-cf80-482c-9f9a-22392bc79e78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:26:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:26:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:26:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:26:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:26:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:26:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:43.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:43.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:43 compute-0 nova_compute[251992]: 2025-12-06 07:26:43.587 251996 DEBUG oslo_concurrency.processutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.7 KiB/s wr, 183 op/s
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.065 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:26:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2168580241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.163 251996 DEBUG oslo_concurrency.processutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.204 251996 DEBUG oslo_concurrency.processutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:26:44 compute-0 ceph-mon[74339]: pgmap v2075: 305 pgs: 305 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.7 KiB/s wr, 183 op/s
Dec 06 07:26:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2168580241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:26:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3034104894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.666 251996 DEBUG oslo_concurrency.processutils [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.668 251996 DEBUG nova.virt.libvirt.vif [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1353793859',display_name='tempest-DeleteServersTestJSON-server-1353793859',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1353793859',id=106,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:25:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-jutadpu4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_
video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:26:32Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=f32ea15c-cf80-482c-9f9a-22392bc79e78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-855821425-network", "vif_mac": "fa:16:3e:39:43:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.668 251996 DEBUG nova.network.os_vif_util [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-855821425-network", "vif_mac": "fa:16:3e:39:43:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.669 251996 DEBUG nova.network.os_vif_util [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.672 251996 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:26:44 compute-0 nova_compute[251992]:   <uuid>f32ea15c-cf80-482c-9f9a-22392bc79e78</uuid>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   <name>instance-0000006a</name>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   <memory>196608</memory>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <nova:name>tempest-DeleteServersTestJSON-server-1353793859</nova:name>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:26:42</nova:creationTime>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <nova:flavor name="m1.micro">
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <nova:memory>192</nova:memory>
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <nova:user uuid="d966fefcb38a45219b9cc637c46a3d62">tempest-DeleteServersTestJSON-1764569218-project-member</nova:user>
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <nova:project uuid="c6d2f50c0db54315bfa96a24511dda90">tempest-DeleteServersTestJSON-1764569218</nova:project>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <nova:port uuid="fc5020f3-51a1-4ca2-b0b5-ff2add84607f">
Dec 06 07:26:44 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <system>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <entry name="serial">f32ea15c-cf80-482c-9f9a-22392bc79e78</entry>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <entry name="uuid">f32ea15c-cf80-482c-9f9a-22392bc79e78</entry>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     </system>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   <os>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   </os>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   <features>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   </features>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f32ea15c-cf80-482c-9f9a-22392bc79e78_disk">
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       </source>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f32ea15c-cf80-482c-9f9a-22392bc79e78_disk.config">
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       </source>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:26:44 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:39:43:3f"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <target dev="tapfc5020f3-51"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78/console.log" append="off"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <video>
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     </video>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:26:44 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:26:44 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:26:44 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:26:44 compute-0 nova_compute[251992]: </domain>
Dec 06 07:26:44 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.673 251996 DEBUG nova.virt.libvirt.vif [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1353793859',display_name='tempest-DeleteServersTestJSON-server-1353793859',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1353793859',id=106,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:25:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-jutadpu4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_
video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:26:32Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=f32ea15c-cf80-482c-9f9a-22392bc79e78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-855821425-network", "vif_mac": "fa:16:3e:39:43:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.674 251996 DEBUG nova.network.os_vif_util [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-855821425-network", "vif_mac": "fa:16:3e:39:43:3f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.674 251996 DEBUG nova.network.os_vif_util [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.674 251996 DEBUG os_vif [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.675 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.675 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.676 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.679 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.679 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc5020f3-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.680 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc5020f3-51, col_values=(('external_ids', {'iface-id': 'fc5020f3-51a1-4ca2-b0b5-ff2add84607f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:43:3f', 'vm-uuid': 'f32ea15c-cf80-482c-9f9a-22392bc79e78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.681 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:44 compute-0 NetworkManager[48965]: <info>  [1765006004.6822] manager: (tapfc5020f3-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.683 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.688 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.688 251996 INFO os_vif [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51')
Dec 06 07:26:44 compute-0 ovn_controller[147168]: 2025-12-06T07:26:44Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:96:d5 10.100.0.13
Dec 06 07:26:44 compute-0 ovn_controller[147168]: 2025-12-06T07:26:44Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:96:d5 10.100.0.13
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.829 251996 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.830 251996 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.830 251996 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] No VIF found with MAC fa:16:3e:39:43:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.831 251996 INFO nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Using config drive
Dec 06 07:26:44 compute-0 NetworkManager[48965]: <info>  [1765006004.9189] manager: (tapfc5020f3-51): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Dec 06 07:26:44 compute-0 kernel: tapfc5020f3-51: entered promiscuous mode
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.922 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:44 compute-0 ovn_controller[147168]: 2025-12-06T07:26:44Z|00385|binding|INFO|Claiming lport fc5020f3-51a1-4ca2-b0b5-ff2add84607f for this chassis.
Dec 06 07:26:44 compute-0 ovn_controller[147168]: 2025-12-06T07:26:44Z|00386|binding|INFO|fc5020f3-51a1-4ca2-b0b5-ff2add84607f: Claiming fa:16:3e:39:43:3f 10.100.0.11
Dec 06 07:26:44 compute-0 ovn_controller[147168]: 2025-12-06T07:26:44Z|00387|binding|INFO|Setting lport fc5020f3-51a1-4ca2-b0b5-ff2add84607f ovn-installed in OVS
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.948 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:44 compute-0 nova_compute[251992]: 2025-12-06 07:26:44.950 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:44 compute-0 systemd-udevd[318449]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:26:44 compute-0 systemd-machined[212986]: New machine qemu-49-instance-0000006a.
Dec 06 07:26:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:44.971 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:43:3f 10.100.0.11'], port_security=['fa:16:3e:39:43:3f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f32ea15c-cf80-482c-9f9a-22392bc79e78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '6', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=fc5020f3-51a1-4ca2-b0b5-ff2add84607f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:26:44 compute-0 ovn_controller[147168]: 2025-12-06T07:26:44Z|00388|binding|INFO|Setting lport fc5020f3-51a1-4ca2-b0b5-ff2add84607f up in Southbound
Dec 06 07:26:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:44.973 158118 INFO neutron.agent.ovn.metadata.agent [-] Port fc5020f3-51a1-4ca2-b0b5-ff2add84607f in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 bound to our chassis
Dec 06 07:26:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:44.974 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:26:44 compute-0 NetworkManager[48965]: <info>  [1765006004.9805] device (tapfc5020f3-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:26:44 compute-0 NetworkManager[48965]: <info>  [1765006004.9817] device (tapfc5020f3-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:26:44 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000006a.
Dec 06 07:26:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:44.987 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[45c86a97-c355-463a-b566-4ec12531f028]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:44.989 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85cfbf28-71 in ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:26:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:44.991 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85cfbf28-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:26:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:44.991 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f3b49e-6eb0-41cb-b16e-ba646fcb61a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:44.992 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[270ef2ab-a76b-4b39-b6c2-00cc56ba12b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.003 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0a0695-9b54-4837-918a-f35ec52e4745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.018 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b02ac646-ba9c-4a46-ba2e-3b873e5660e7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:45.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.043 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4676ae14-51e9-4d32-8a54-0790e5bb4e4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.050 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f1a429-121d-4373-ad77-1ef48e566ed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 NetworkManager[48965]: <info>  [1765006005.0517] manager: (tap85cfbf28-70): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.083 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4743c814-9482-4013-9e0a-9b7aac828e00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.086 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[334d8528-fd6d-4c63-a2f1-323be393c335]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 NetworkManager[48965]: <info>  [1765006005.1096] device (tap85cfbf28-70): carrier: link connected
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.119 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a355a1-a1d2-46b0-9c71-445f7158bb0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.139 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2184e88c-a57f-4cf5-96d6-a09747cb00bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627769, 'reachable_time': 26286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318483, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.155 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[740ebc84-5a13-45f5-a373-6dafcc8ef020]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:762'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 627769, 'tstamp': 627769}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318484, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.173 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[142dcfb9-da6d-4d50-916e-33883678397b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85cfbf28-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:07:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627769, 'reachable_time': 26286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318485, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.204 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a321e738-e6ca-48d9-83eb-4b94822d3542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.267 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3f97b497-4883-4586-951b-2acc303ce116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.269 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.269 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.270 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85cfbf28-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:45 compute-0 kernel: tap85cfbf28-70: entered promiscuous mode
Dec 06 07:26:45 compute-0 NetworkManager[48965]: <info>  [1765006005.2737] manager: (tap85cfbf28-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.277 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85cfbf28-70, col_values=(('external_ids', {'iface-id': '41b1b168-8e0e-4991-9750-9b31221f4863'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:26:45 compute-0 ovn_controller[147168]: 2025-12-06T07:26:45Z|00389|binding|INFO|Releasing lport 41b1b168-8e0e-4991-9750-9b31221f4863 from this chassis (sb_readonly=0)
Dec 06 07:26:45 compute-0 nova_compute[251992]: 2025-12-06 07:26:45.278 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.279 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.280 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8703a3-73cb-4b3e-b4c5-7dba4e0e4217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.281 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/85cfbf28-7016-4776-8fc2-2eb08a6b8347.pid.haproxy
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 85cfbf28-7016-4776-8fc2-2eb08a6b8347
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:26:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:26:45.282 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'env', 'PROCESS_TAG=haproxy-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85cfbf28-7016-4776-8fc2-2eb08a6b8347.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:26:45 compute-0 nova_compute[251992]: 2025-12-06 07:26:45.295 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:45.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:45 compute-0 podman[318535]: 2025-12-06 07:26:45.628137741 +0000 UTC m=+0.024672624 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:26:45 compute-0 podman[318535]: 2025-12-06 07:26:45.738561536 +0000 UTC m=+0.135096399 container create dc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 06 07:26:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3034104894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:26:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/514090520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:45 compute-0 systemd[1]: Started libpod-conmon-dc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f.scope.
Dec 06 07:26:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:26:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fa9aeb4d4d41aa84985347f8e119a02a23c5733331f79938e3de3db38e672bc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:26:45 compute-0 podman[318535]: 2025-12-06 07:26:45.841122693 +0000 UTC m=+0.237657576 container init dc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:26:45 compute-0 podman[318535]: 2025-12-06 07:26:45.847719536 +0000 UTC m=+0.244254389 container start dc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 06 07:26:45 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[318570]: [NOTICE]   (318578) : New worker (318580) forked
Dec 06 07:26:45 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[318570]: [NOTICE]   (318578) : Loading success.
Dec 06 07:26:45 compute-0 nova_compute[251992]: 2025-12-06 07:26:45.889 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006005.8890717, f32ea15c-cf80-482c-9f9a-22392bc79e78 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:26:45 compute-0 nova_compute[251992]: 2025-12-06 07:26:45.890 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] VM Resumed (Lifecycle Event)
Dec 06 07:26:45 compute-0 nova_compute[251992]: 2025-12-06 07:26:45.892 251996 DEBUG nova.compute.manager [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:26:45 compute-0 nova_compute[251992]: 2025-12-06 07:26:45.896 251996 INFO nova.virt.libvirt.driver [-] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance running successfully.
Dec 06 07:26:45 compute-0 virtqemud[251613]: argument unsupported: QEMU guest agent is not configured
Dec 06 07:26:45 compute-0 nova_compute[251992]: 2025-12-06 07:26:45.897 251996 DEBUG nova.virt.libvirt.guest [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 06 07:26:45 compute-0 nova_compute[251992]: 2025-12-06 07:26:45.898 251996 DEBUG nova.virt.libvirt.driver [None req-348c20ac-c66f-4207-9aae-58c58b93b866 d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Dec 06 07:26:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 529 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 21 KiB/s wr, 236 op/s
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.821 251996 DEBUG nova.compute.manager [req-a5a38c78-2713-4f0f-ba00-a384b19f9674 req-18565a16-eb22-4df2-81c3-aaf051fa3ee7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.822 251996 DEBUG oslo_concurrency.lockutils [req-a5a38c78-2713-4f0f-ba00-a384b19f9674 req-18565a16-eb22-4df2-81c3-aaf051fa3ee7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.822 251996 DEBUG oslo_concurrency.lockutils [req-a5a38c78-2713-4f0f-ba00-a384b19f9674 req-18565a16-eb22-4df2-81c3-aaf051fa3ee7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.822 251996 DEBUG oslo_concurrency.lockutils [req-a5a38c78-2713-4f0f-ba00-a384b19f9674 req-18565a16-eb22-4df2-81c3-aaf051fa3ee7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.823 251996 DEBUG nova.compute.manager [req-a5a38c78-2713-4f0f-ba00-a384b19f9674 req-18565a16-eb22-4df2-81c3-aaf051fa3ee7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] No waiting events found dispatching network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.823 251996 WARNING nova.compute.manager [req-a5a38c78-2713-4f0f-ba00-a384b19f9674 req-18565a16-eb22-4df2-81c3-aaf051fa3ee7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received unexpected event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f for instance with vm_state active and task_state resize_finish.
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.830 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.833 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.863 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.864 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006005.8902204, f32ea15c-cf80-482c-9f9a-22392bc79e78 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.864 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] VM Started (Lifecycle Event)
Dec 06 07:26:46 compute-0 sudo[318590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:46 compute-0 sudo[318590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:46 compute-0 sudo[318590]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.989 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:26:46 compute-0 nova_compute[251992]: 2025-12-06 07:26:46.992 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:26:46 compute-0 sudo[318615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:46 compute-0 sudo[318615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:46 compute-0 sudo[318615]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:47.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:47 compute-0 ceph-mon[74339]: pgmap v2076: 305 pgs: 305 active+clean; 529 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 21 KiB/s wr, 236 op/s
Dec 06 07:26:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:47.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 529 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 19 KiB/s wr, 205 op/s
Dec 06 07:26:48 compute-0 ceph-mon[74339]: pgmap v2077: 305 pgs: 305 active+clean; 529 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 19 KiB/s wr, 205 op/s
Dec 06 07:26:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1610378948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Dec 06 07:26:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Dec 06 07:26:48 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Dec 06 07:26:48 compute-0 nova_compute[251992]: 2025-12-06 07:26:48.979 251996 DEBUG nova.compute.manager [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:26:48 compute-0 nova_compute[251992]: 2025-12-06 07:26:48.979 251996 DEBUG oslo_concurrency.lockutils [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:48 compute-0 nova_compute[251992]: 2025-12-06 07:26:48.979 251996 DEBUG oslo_concurrency.lockutils [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:48 compute-0 nova_compute[251992]: 2025-12-06 07:26:48.979 251996 DEBUG oslo_concurrency.lockutils [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:48 compute-0 nova_compute[251992]: 2025-12-06 07:26:48.980 251996 DEBUG nova.compute.manager [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] No waiting events found dispatching network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:26:48 compute-0 nova_compute[251992]: 2025-12-06 07:26:48.980 251996 WARNING nova.compute.manager [req-48d3a981-9242-40eb-b9b0-e4196abe0edb req-835c57db-229d-48be-9070-206ab8c89917 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received unexpected event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f for instance with vm_state resized and task_state None.
Dec 06 07:26:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:49.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:49 compute-0 nova_compute[251992]: 2025-12-06 07:26:49.067 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:49.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:49 compute-0 ceph-mon[74339]: osdmap e268: 3 total, 3 up, 3 in
Dec 06 07:26:49 compute-0 nova_compute[251992]: 2025-12-06 07:26:49.681 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 463 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 22 KiB/s wr, 267 op/s
Dec 06 07:26:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:51.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:51 compute-0 ceph-mon[74339]: pgmap v2079: 305 pgs: 305 active+clean; 463 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 22 KiB/s wr, 267 op/s
Dec 06 07:26:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:51.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 417 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 31 KiB/s wr, 286 op/s
Dec 06 07:26:52 compute-0 ceph-mon[74339]: pgmap v2080: 305 pgs: 305 active+clean; 417 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 31 KiB/s wr, 286 op/s
Dec 06 07:26:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:53.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:53.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 409 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 29 KiB/s wr, 205 op/s
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:54.059590) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006014059641, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 2104, "num_deletes": 258, "total_data_size": 3735203, "memory_usage": 3785392, "flush_reason": "Manual Compaction"}
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Dec 06 07:26:54 compute-0 nova_compute[251992]: 2025-12-06 07:26:54.068 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006014183252, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 3668690, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40281, "largest_seqno": 42384, "table_properties": {"data_size": 3659055, "index_size": 6065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20455, "raw_average_key_size": 20, "raw_value_size": 3639577, "raw_average_value_size": 3665, "num_data_blocks": 264, "num_entries": 993, "num_filter_entries": 993, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765005802, "oldest_key_time": 1765005802, "file_creation_time": 1765006014, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 123711 microseconds, and 9448 cpu microseconds.
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:54.183297) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 3668690 bytes OK
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:54.183317) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:54.229874) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:54.229928) EVENT_LOG_v1 {"time_micros": 1765006014229912, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:54.229955) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3726457, prev total WAL file size 3727210, number of live WAL files 2.
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:54.231004) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323632' seq:72057594037927935, type:22 .. '6C6F676D0031353133' seq:0, type:0; will stop at (end)
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(3582KB)], [86(9076KB)]
Dec 06 07:26:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006014231094, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 12962611, "oldest_snapshot_seqno": -1}
Dec 06 07:26:54 compute-0 nova_compute[251992]: 2025-12-06 07:26:54.684 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:55.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 7549 keys, 12813768 bytes, temperature: kUnknown
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006015084730, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 12813768, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12762047, "index_size": 31769, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 194986, "raw_average_key_size": 25, "raw_value_size": 12625913, "raw_average_value_size": 1672, "num_data_blocks": 1263, "num_entries": 7549, "num_filter_entries": 7549, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765006014, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:55.085019) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 12813768 bytes
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:55.101388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 15.2 rd, 15.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.9 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(7.0) write-amplify(3.5) OK, records in: 8082, records dropped: 533 output_compression: NoCompression
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:55.101441) EVENT_LOG_v1 {"time_micros": 1765006015101407, "job": 50, "event": "compaction_finished", "compaction_time_micros": 853724, "compaction_time_cpu_micros": 50439, "output_level": 6, "num_output_files": 1, "total_output_size": 12813768, "num_input_records": 8082, "num_output_records": 7549, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006015102379, "job": 50, "event": "table_file_deletion", "file_number": 88}
Dec 06 07:26:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3505948962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006015104834, "job": 50, "event": "table_file_deletion", "file_number": 86}
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:54.230855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:55.104925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:55.104932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:55.104935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:55.104938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:26:55 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:26:55.104940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:26:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:55.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 409 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 17 KiB/s wr, 162 op/s
Dec 06 07:26:56 compute-0 ceph-mon[74339]: pgmap v2081: 305 pgs: 305 active+clean; 409 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 29 KiB/s wr, 205 op/s
Dec 06 07:26:56 compute-0 sudo[318644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:56 compute-0 sudo[318644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:56 compute-0 sudo[318644]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:56 compute-0 sudo[318670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:26:56 compute-0 sudo[318670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:56 compute-0 sudo[318670]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:56 compute-0 sudo[318695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:56 compute-0 sudo[318695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:56 compute-0 sudo[318695]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:56 compute-0 sudo[318720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:26:56 compute-0 sudo[318720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:56 compute-0 podman[318668]: 2025-12-06 07:26:56.4920238 +0000 UTC m=+0.108838442 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 06 07:26:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Dec 06 07:26:56 compute-0 sudo[318720]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:57.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:57 compute-0 sudo[318799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:57 compute-0 sudo[318799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:57 compute-0 sudo[318799]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:57 compute-0 sudo[318824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:26:57 compute-0 sudo[318824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:57 compute-0 sudo[318824]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Dec 06 07:26:57 compute-0 sudo[318849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:57 compute-0 sudo[318849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:57 compute-0 sudo[318849]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:57 compute-0 sudo[318874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 06 07:26:57 compute-0 sudo[318874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:57 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Dec 06 07:26:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:57.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:57 compute-0 sudo[318874]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:26:57 compute-0 ceph-mon[74339]: pgmap v2082: 305 pgs: 305 active+clean; 409 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 17 KiB/s wr, 162 op/s
Dec 06 07:26:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 409 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 123 op/s
Dec 06 07:26:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:26:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:26:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:26:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:26:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:26:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:26:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:26:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:26:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:26:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:26:58 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5dfa9b42-17a2-48ec-bd89-d034acf052a5 does not exist
Dec 06 07:26:58 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6eca9bfb-6d33-4df8-8fac-b589987aa18e does not exist
Dec 06 07:26:58 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0ba7c2e3-cbe9-43aa-9ddc-3463f602d120 does not exist
Dec 06 07:26:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:26:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:26:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:26:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:26:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:26:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:26:58 compute-0 sudo[318918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:58 compute-0 sudo[318918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:58 compute-0 sudo[318918]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:58 compute-0 sudo[318943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:26:58 compute-0 sudo[318943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:58 compute-0 sudo[318943]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:58 compute-0 ceph-mon[74339]: osdmap e269: 3 total, 3 up, 3 in
Dec 06 07:26:58 compute-0 ceph-mon[74339]: pgmap v2084: 305 pgs: 305 active+clean; 409 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 123 op/s
Dec 06 07:26:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:26:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:26:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:26:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:26:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:26:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:26:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:26:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:26:58 compute-0 sudo[318968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:26:58 compute-0 sudo[318968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:58 compute-0 sudo[318968]: pam_unix(sudo:session): session closed for user root
Dec 06 07:26:59 compute-0 sudo[318993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:26:59 compute-0 sudo[318993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:26:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:26:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:26:59.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.070 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.363 251996 DEBUG oslo_concurrency.lockutils [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.363 251996 DEBUG oslo_concurrency.lockutils [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.364 251996 DEBUG oslo_concurrency.lockutils [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.364 251996 DEBUG oslo_concurrency.lockutils [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.364 251996 DEBUG oslo_concurrency.lockutils [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.365 251996 INFO nova.compute.manager [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Terminating instance
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.366 251996 DEBUG nova.compute.manager [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:26:59 compute-0 podman[319060]: 2025-12-06 07:26:59.39107111 +0000 UTC m=+0.083770399 container create 575d891f5ad66d960c01170ceb8adecaf51b2e3418225718ff830e6bb58f02bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 07:26:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:26:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:26:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:26:59.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:26:59 compute-0 podman[319060]: 2025-12-06 07:26:59.350740145 +0000 UTC m=+0.043439464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:26:59 compute-0 kernel: tap0e5e71bc-70 (unregistering): left promiscuous mode
Dec 06 07:26:59 compute-0 NetworkManager[48965]: <info>  [1765006019.4426] device (tap0e5e71bc-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:26:59 compute-0 systemd[1]: Started libpod-conmon-575d891f5ad66d960c01170ceb8adecaf51b2e3418225718ff830e6bb58f02bc.scope.
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.450 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:59 compute-0 ovn_controller[147168]: 2025-12-06T07:26:59Z|00390|binding|INFO|Releasing lport 0e5e71bc-7098-4091-938e-6299f989917f from this chassis (sb_readonly=0)
Dec 06 07:26:59 compute-0 ovn_controller[147168]: 2025-12-06T07:26:59Z|00391|binding|INFO|Setting lport 0e5e71bc-7098-4091-938e-6299f989917f down in Southbound
Dec 06 07:26:59 compute-0 ovn_controller[147168]: 2025-12-06T07:26:59Z|00392|binding|INFO|Removing iface tap0e5e71bc-70 ovn-installed in OVS
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.468 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:26:59 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Dec 06 07:26:59 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006c.scope: Consumed 15.593s CPU time.
Dec 06 07:26:59 compute-0 systemd-machined[212986]: Machine qemu-48-instance-0000006c terminated.
Dec 06 07:26:59 compute-0 podman[319060]: 2025-12-06 07:26:59.557768132 +0000 UTC m=+0.250467441 container init 575d891f5ad66d960c01170ceb8adecaf51b2e3418225718ff830e6bb58f02bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:26:59 compute-0 podman[319060]: 2025-12-06 07:26:59.567647705 +0000 UTC m=+0.260346994 container start 575d891f5ad66d960c01170ceb8adecaf51b2e3418225718ff830e6bb58f02bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:26:59 compute-0 naughty_khayyam[319078]: 167 167
Dec 06 07:26:59 compute-0 systemd[1]: libpod-575d891f5ad66d960c01170ceb8adecaf51b2e3418225718ff830e6bb58f02bc.scope: Deactivated successfully.
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.588 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.595 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.607 251996 INFO nova.virt.libvirt.driver [-] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Instance destroyed successfully.
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.607 251996 DEBUG nova.objects.instance [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lazy-loading 'resources' on Instance uuid ea8c0005-4b7a-4697-89ae-91f4bef22e36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:26:59 compute-0 podman[319060]: 2025-12-06 07:26:59.622745779 +0000 UTC m=+0.315445068 container attach 575d891f5ad66d960c01170ceb8adecaf51b2e3418225718ff830e6bb58f02bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:26:59 compute-0 podman[319060]: 2025-12-06 07:26:59.624313892 +0000 UTC m=+0.317013171 container died 575d891f5ad66d960c01170ceb8adecaf51b2e3418225718ff830e6bb58f02bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:26:59 compute-0 nova_compute[251992]: 2025-12-06 07:26:59.685 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-af6aeffad922c995a71f3e6d2b3b0ca4dbacc56caded74d0ae3cdd6cff6f43d4-merged.mount: Deactivated successfully.
Dec 06 07:26:59 compute-0 podman[319060]: 2025-12-06 07:26:59.767057891 +0000 UTC m=+0.459757180 container remove 575d891f5ad66d960c01170ceb8adecaf51b2e3418225718ff830e6bb58f02bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:26:59 compute-0 systemd[1]: libpod-conmon-575d891f5ad66d960c01170ceb8adecaf51b2e3418225718ff830e6bb58f02bc.scope: Deactivated successfully.
Dec 06 07:26:59 compute-0 podman[319120]: 2025-12-06 07:26:59.994796092 +0000 UTC m=+0.085998690 container create 69cd09108265ffc94ff24576752e19520350e8c03bb5f021512cfb0542b18bc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_thompson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 07:27:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 367 MiB data, 994 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 16 KiB/s wr, 90 op/s
Dec 06 07:27:00 compute-0 podman[319120]: 2025-12-06 07:26:59.935881732 +0000 UTC m=+0.027084350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:27:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/392178062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:00 compute-0 systemd[1]: Started libpod-conmon-69cd09108265ffc94ff24576752e19520350e8c03bb5f021512cfb0542b18bc5.scope.
Dec 06 07:27:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f42a645c53da5ac62de98d369343d13bd51d645c08a5b0a89d003d92fcd76a4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f42a645c53da5ac62de98d369343d13bd51d645c08a5b0a89d003d92fcd76a4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f42a645c53da5ac62de98d369343d13bd51d645c08a5b0a89d003d92fcd76a4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f42a645c53da5ac62de98d369343d13bd51d645c08a5b0a89d003d92fcd76a4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f42a645c53da5ac62de98d369343d13bd51d645c08a5b0a89d003d92fcd76a4b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:00 compute-0 podman[319120]: 2025-12-06 07:27:00.198479416 +0000 UTC m=+0.289682044 container init 69cd09108265ffc94ff24576752e19520350e8c03bb5f021512cfb0542b18bc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_thompson, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:27:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:00.203 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:96:d5 10.100.0.13'], port_security=['fa:16:3e:ec:96:d5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ea8c0005-4b7a-4697-89ae-91f4bef22e36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c014e4e-a182-4f60-8285-20525bc99e5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88f5b34244614321a9b6e902eaba0ece', 'neutron:revision_number': '8', 'neutron:security_group_ids': '562c0019-973b-497e-ab29-636b40b9ed6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7228f8e4-751e-45fe-ae64-cd2ffef9b9bb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=0e5e71bc-7098-4091-938e-6299f989917f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:27:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:00.206 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 0e5e71bc-7098-4091-938e-6299f989917f in datapath 7c014e4e-a182-4f60-8285-20525bc99e5a unbound from our chassis
Dec 06 07:27:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:00.208 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c014e4e-a182-4f60-8285-20525bc99e5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:27:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:00.211 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca527a6-f5e7-4fcd-8a80-88b597259e33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:00.212 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a namespace which is not needed anymore
Dec 06 07:27:00 compute-0 podman[319120]: 2025-12-06 07:27:00.213593055 +0000 UTC m=+0.304795653 container start 69cd09108265ffc94ff24576752e19520350e8c03bb5f021512cfb0542b18bc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_thompson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:27:00 compute-0 nova_compute[251992]: 2025-12-06 07:27:00.228 251996 DEBUG nova.virt.libvirt.vif [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1037063395',display_name='tempest-ServerDiskConfigTestJSON-server-1037063395',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1037063395',id=108,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:26:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88f5b34244614321a9b6e902eaba0ece',ramdisk_id='',reservation_id='r-qe8qs13c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-749654875',owner_user_name='tempest-ServerDiskConfigTestJSON-749654875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:26:47Z,user_data=None,user_id='d67c136e82ad4001b000848d75eef50d',uuid=ea8c0005-4b7a-4697-89ae-91f4bef22e36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:27:00 compute-0 nova_compute[251992]: 2025-12-06 07:27:00.230 251996 DEBUG nova.network.os_vif_util [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converting VIF {"id": "0e5e71bc-7098-4091-938e-6299f989917f", "address": "fa:16:3e:ec:96:d5", "network": {"id": "7c014e4e-a182-4f60-8285-20525bc99e5a", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-602234112-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88f5b34244614321a9b6e902eaba0ece", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e5e71bc-70", "ovs_interfaceid": "0e5e71bc-7098-4091-938e-6299f989917f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:27:00 compute-0 nova_compute[251992]: 2025-12-06 07:27:00.231 251996 DEBUG nova.network.os_vif_util [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:27:00 compute-0 nova_compute[251992]: 2025-12-06 07:27:00.231 251996 DEBUG os_vif [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:27:00 compute-0 nova_compute[251992]: 2025-12-06 07:27:00.233 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:00 compute-0 nova_compute[251992]: 2025-12-06 07:27:00.234 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e5e71bc-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:00 compute-0 nova_compute[251992]: 2025-12-06 07:27:00.238 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:27:00 compute-0 nova_compute[251992]: 2025-12-06 07:27:00.242 251996 INFO os_vif [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:96:d5,bridge_name='br-int',has_traffic_filtering=True,id=0e5e71bc-7098-4091-938e-6299f989917f,network=Network(7c014e4e-a182-4f60-8285-20525bc99e5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e5e71bc-70')
Dec 06 07:27:00 compute-0 podman[319120]: 2025-12-06 07:27:00.376548383 +0000 UTC m=+0.467750981 container attach 69cd09108265ffc94ff24576752e19520350e8c03bb5f021512cfb0542b18bc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:27:00 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[318146]: [NOTICE]   (318168) : haproxy version is 2.8.14-c23fe91
Dec 06 07:27:00 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[318146]: [NOTICE]   (318168) : path to executable is /usr/sbin/haproxy
Dec 06 07:27:00 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[318146]: [WARNING]  (318168) : Exiting Master process...
Dec 06 07:27:00 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[318146]: [ALERT]    (318168) : Current worker (318173) exited with code 143 (Terminated)
Dec 06 07:27:00 compute-0 neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a[318146]: [WARNING]  (318168) : All workers exited. Exiting... (0)
Dec 06 07:27:00 compute-0 systemd[1]: libpod-afbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590.scope: Deactivated successfully.
Dec 06 07:27:00 compute-0 podman[319175]: 2025-12-06 07:27:00.561781877 +0000 UTC m=+0.136479707 container died afbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:27:00 compute-0 ovn_controller[147168]: 2025-12-06T07:27:00Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:43:3f 10.100.0.11
Dec 06 07:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-afbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590-userdata-shm.mount: Deactivated successfully.
Dec 06 07:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c355e307c6f05a144b5526a1e2e5eb8fa918383f8a9126920545f65c78787544-merged.mount: Deactivated successfully.
Dec 06 07:27:00 compute-0 podman[319175]: 2025-12-06 07:27:00.778215164 +0000 UTC m=+0.352912994 container cleanup afbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:27:00 compute-0 systemd[1]: libpod-conmon-afbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590.scope: Deactivated successfully.
Dec 06 07:27:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:01.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:01 compute-0 romantic_thompson[319136]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:27:01 compute-0 romantic_thompson[319136]: --> relative data size: 1.0
Dec 06 07:27:01 compute-0 romantic_thompson[319136]: --> All data devices are unavailable
Dec 06 07:27:01 compute-0 systemd[1]: libpod-69cd09108265ffc94ff24576752e19520350e8c03bb5f021512cfb0542b18bc5.scope: Deactivated successfully.
Dec 06 07:27:01 compute-0 podman[319120]: 2025-12-06 07:27:01.103706269 +0000 UTC m=+1.194908867 container died 69cd09108265ffc94ff24576752e19520350e8c03bb5f021512cfb0542b18bc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_thompson, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:27:01 compute-0 ceph-mon[74339]: pgmap v2085: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 367 MiB data, 994 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 16 KiB/s wr, 90 op/s
Dec 06 07:27:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:01.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f42a645c53da5ac62de98d369343d13bd51d645c08a5b0a89d003d92fcd76a4b-merged.mount: Deactivated successfully.
Dec 06 07:27:01 compute-0 podman[319120]: 2025-12-06 07:27:01.705879367 +0000 UTC m=+1.797081965 container remove 69cd09108265ffc94ff24576752e19520350e8c03bb5f021512cfb0542b18bc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_thompson, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 07:27:01 compute-0 sudo[318993]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:01 compute-0 podman[319208]: 2025-12-06 07:27:01.782160948 +0000 UTC m=+0.982567083 container remove afbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:27:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:01.789 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f14f2c-b1cf-43d0-8006-09237e29aee8]: (4, ('Sat Dec  6 07:27:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (afbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590)\nafbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590\nSat Dec  6 07:27:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a (afbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590)\nafbd9b5a723df5e256d9f54b79fc8e5e2e3ff3b32d21a6a92d689ab6572d4590\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:01 compute-0 sudo[319265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:27:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:01.791 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0304bf-35bd-4be0-b9c9-3d525e829b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:01.792 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c014e4e-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:01 compute-0 sudo[319265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:01 compute-0 sudo[319265]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:01 compute-0 nova_compute[251992]: 2025-12-06 07:27:01.805 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:01 compute-0 kernel: tap7c014e4e-a0: left promiscuous mode
Dec 06 07:27:01 compute-0 nova_compute[251992]: 2025-12-06 07:27:01.829 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:01.832 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ce09f507-c1f4-45ab-8cf1-33082213d374]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:01.846 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bb560c0f-98ce-47f0-ae31-943c1224a913]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:01.847 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[630d28b0-b4d3-4958-bd2d-e40579522e67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:01 compute-0 podman[319245]: 2025-12-06 07:27:01.85600178 +0000 UTC m=+0.303444845 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:27:01 compute-0 sudo[319290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:27:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:01.865 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[08195674-81bc-4c91-b6f9-d7991040e9c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625856, 'reachable_time': 28312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319329, 'error': None, 'target': 'ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:01 compute-0 sudo[319290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:01.867 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c014e4e-a182-4f60-8285-20525bc99e5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:27:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:01.867 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[5671de0b-4990-49f6-af85-c26a4fd24e3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d7c014e4e\x2da182\x2d4f60\x2d8285\x2d20525bc99e5a.mount: Deactivated successfully.
Dec 06 07:27:01 compute-0 sudo[319290]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:01 compute-0 podman[319244]: 2025-12-06 07:27:01.875852999 +0000 UTC m=+0.324282052 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:27:01 compute-0 systemd[1]: libpod-conmon-69cd09108265ffc94ff24576752e19520350e8c03bb5f021512cfb0542b18bc5.scope: Deactivated successfully.
Dec 06 07:27:01 compute-0 sudo[319332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:27:01 compute-0 sudo[319332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:01 compute-0 sudo[319332]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:02 compute-0 sudo[319358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:27:02 compute-0 sudo[319358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 302 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 171 KiB/s rd, 5.4 KiB/s wr, 80 op/s
Dec 06 07:27:02 compute-0 podman[319423]: 2025-12-06 07:27:02.2955451 +0000 UTC m=+0.021333032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:27:02 compute-0 ceph-mon[74339]: pgmap v2086: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 302 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 171 KiB/s rd, 5.4 KiB/s wr, 80 op/s
Dec 06 07:27:02 compute-0 podman[319423]: 2025-12-06 07:27:02.413285247 +0000 UTC m=+0.139073159 container create 5afd6d2beceafd58d5363877f1267c997275e2b0da51421bbc617f6eb1960d0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:27:02 compute-0 nova_compute[251992]: 2025-12-06 07:27:02.438 251996 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:02 compute-0 nova_compute[251992]: 2025-12-06 07:27:02.439 251996 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:02 compute-0 nova_compute[251992]: 2025-12-06 07:27:02.439 251996 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:02 compute-0 nova_compute[251992]: 2025-12-06 07:27:02.439 251996 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:02 compute-0 nova_compute[251992]: 2025-12-06 07:27:02.439 251996 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:02 compute-0 nova_compute[251992]: 2025-12-06 07:27:02.440 251996 INFO nova.compute.manager [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Terminating instance
Dec 06 07:27:02 compute-0 nova_compute[251992]: 2025-12-06 07:27:02.441 251996 DEBUG nova.compute.manager [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:27:02 compute-0 systemd[1]: Started libpod-conmon-5afd6d2beceafd58d5363877f1267c997275e2b0da51421bbc617f6eb1960d0d.scope.
Dec 06 07:27:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:27:02 compute-0 podman[319423]: 2025-12-06 07:27:02.546524943 +0000 UTC m=+0.272312875 container init 5afd6d2beceafd58d5363877f1267c997275e2b0da51421bbc617f6eb1960d0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 07:27:02 compute-0 podman[319423]: 2025-12-06 07:27:02.553685181 +0000 UTC m=+0.279473093 container start 5afd6d2beceafd58d5363877f1267c997275e2b0da51421bbc617f6eb1960d0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:27:02 compute-0 dazzling_sanderson[319439]: 167 167
Dec 06 07:27:02 compute-0 systemd[1]: libpod-5afd6d2beceafd58d5363877f1267c997275e2b0da51421bbc617f6eb1960d0d.scope: Deactivated successfully.
Dec 06 07:27:02 compute-0 podman[319423]: 2025-12-06 07:27:02.691324049 +0000 UTC m=+0.417111981 container attach 5afd6d2beceafd58d5363877f1267c997275e2b0da51421bbc617f6eb1960d0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:27:02 compute-0 podman[319423]: 2025-12-06 07:27:02.692294226 +0000 UTC m=+0.418082158 container died 5afd6d2beceafd58d5363877f1267c997275e2b0da51421bbc617f6eb1960d0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 07:27:02 compute-0 nova_compute[251992]: 2025-12-06 07:27:02.897 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:02.927782) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006022927838, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 380, "num_deletes": 252, "total_data_size": 249427, "memory_usage": 256560, "flush_reason": "Manual Compaction"}
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006022970340, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 247460, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42385, "largest_seqno": 42764, "table_properties": {"data_size": 245065, "index_size": 495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6156, "raw_average_key_size": 19, "raw_value_size": 240144, "raw_average_value_size": 755, "num_data_blocks": 21, "num_entries": 318, "num_filter_entries": 318, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006014, "oldest_key_time": 1765006014, "file_creation_time": 1765006022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 42612 microseconds, and 2348 cpu microseconds.
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:02.970395) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 247460 bytes OK
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:02.970420) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:02.972401) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:02.972446) EVENT_LOG_v1 {"time_micros": 1765006022972436, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:02.972467) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 246935, prev total WAL file size 246935, number of live WAL files 2.
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:02.972920) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(241KB)], [89(12MB)]
Dec 06 07:27:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006022972986, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 13061228, "oldest_snapshot_seqno": -1}
Dec 06 07:27:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-76916fd33695137dbd072dde82070fb5f8350e8578994b84ae5e448bee5dfe37-merged.mount: Deactivated successfully.
Dec 06 07:27:03 compute-0 kernel: tapfc5020f3-51 (unregistering): left promiscuous mode
Dec 06 07:27:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:03.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:03 compute-0 NetworkManager[48965]: <info>  [1765006023.0619] device (tapfc5020f3-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:27:03 compute-0 ovn_controller[147168]: 2025-12-06T07:27:03Z|00393|binding|INFO|Releasing lport fc5020f3-51a1-4ca2-b0b5-ff2add84607f from this chassis (sb_readonly=0)
Dec 06 07:27:03 compute-0 ovn_controller[147168]: 2025-12-06T07:27:03Z|00394|binding|INFO|Setting lport fc5020f3-51a1-4ca2-b0b5-ff2add84607f down in Southbound
Dec 06 07:27:03 compute-0 ovn_controller[147168]: 2025-12-06T07:27:03Z|00395|binding|INFO|Removing iface tapfc5020f3-51 ovn-installed in OVS
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.129 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.131 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.146 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:03 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Dec 06 07:27:03 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006a.scope: Consumed 14.846s CPU time.
Dec 06 07:27:03 compute-0 systemd-machined[212986]: Machine qemu-49-instance-0000006a terminated.
Dec 06 07:27:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:03.226 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:43:3f 10.100.0.11'], port_security=['fa:16:3e:39:43:3f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f32ea15c-cf80-482c-9f9a-22392bc79e78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6d2f50c0db54315bfa96a24511dda90', 'neutron:revision_number': '8', 'neutron:security_group_ids': '859a0bc3-7542-4622-9180-7c67df8e913c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e462675c-3feb-4b24-a87b-c5ebd92a4b8b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=fc5020f3-51a1-4ca2-b0b5-ff2add84607f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:27:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:03.227 158118 INFO neutron.agent.ovn.metadata.agent [-] Port fc5020f3-51a1-4ca2-b0b5-ff2add84607f in datapath 85cfbf28-7016-4776-8fc2-2eb08a6b8347 unbound from our chassis
Dec 06 07:27:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:03.229 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85cfbf28-7016-4776-8fc2-2eb08a6b8347, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:27:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:03.230 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[74d40dcd-1d06-4a13-896f-6b2a7c288415]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:03.231 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 namespace which is not needed anymore
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.265 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.276 251996 INFO nova.virt.libvirt.driver [-] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Instance destroyed successfully.
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.277 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.277 251996 DEBUG nova.objects.instance [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lazy-loading 'resources' on Instance uuid f32ea15c-cf80-482c-9f9a-22392bc79e78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 7347 keys, 11109990 bytes, temperature: kUnknown
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006023282858, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11109990, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11060988, "index_size": 29562, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 191502, "raw_average_key_size": 26, "raw_value_size": 10929633, "raw_average_value_size": 1487, "num_data_blocks": 1164, "num_entries": 7347, "num_filter_entries": 7347, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765006022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:03.283319) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11109990 bytes
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:03.285128) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 42.1 rd, 35.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.2 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(97.7) write-amplify(44.9) OK, records in: 7867, records dropped: 520 output_compression: NoCompression
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:03.285152) EVENT_LOG_v1 {"time_micros": 1765006023285141, "job": 52, "event": "compaction_finished", "compaction_time_micros": 310114, "compaction_time_cpu_micros": 24584, "output_level": 6, "num_output_files": 1, "total_output_size": 11109990, "num_input_records": 7867, "num_output_records": 7347, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006023285390, "job": 52, "event": "table_file_deletion", "file_number": 91}
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006023287625, "job": 52, "event": "table_file_deletion", "file_number": 89}
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:02.972811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:03.287790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:03.287796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:03.287798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:03.287799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:27:03 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:27:03.287801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:27:03 compute-0 podman[319423]: 2025-12-06 07:27:03.299474942 +0000 UTC m=+1.025262854 container remove 5afd6d2beceafd58d5363877f1267c997275e2b0da51421bbc617f6eb1960d0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.306 251996 INFO nova.virt.libvirt.driver [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Deleting instance files /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36_del
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.307 251996 INFO nova.virt.libvirt.driver [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Deletion of /var/lib/nova/instances/ea8c0005-4b7a-4697-89ae-91f4bef22e36_del complete
Dec 06 07:27:03 compute-0 systemd[1]: libpod-conmon-5afd6d2beceafd58d5363877f1267c997275e2b0da51421bbc617f6eb1960d0d.scope: Deactivated successfully.
Dec 06 07:27:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:03.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.636 251996 DEBUG nova.virt.libvirt.vif [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:25:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1353793859',display_name='tempest-DeleteServersTestJSON-server-1353793859',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1353793859',id=106,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:26:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c6d2f50c0db54315bfa96a24511dda90',ramdisk_id='',reservation_id='r-jutadpu4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1764569218',owner_user_name='tempest-DeleteServersTestJSON-1764569218-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:26:47Z,user_data=None,user_id='d966fefcb38a45219b9cc637c46a3d62',uuid=f32ea15c-cf80-482c-9f9a-22392bc79e78,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.638 251996 DEBUG nova.network.os_vif_util [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converting VIF {"id": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "address": "fa:16:3e:39:43:3f", "network": {"id": "85cfbf28-7016-4776-8fc2-2eb08a6b8347", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-855821425-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c6d2f50c0db54315bfa96a24511dda90", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc5020f3-51", "ovs_interfaceid": "fc5020f3-51a1-4ca2-b0b5-ff2add84607f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.639 251996 DEBUG nova.network.os_vif_util [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.639 251996 DEBUG os_vif [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.641 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.641 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc5020f3-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.643 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.646 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.648 251996 INFO os_vif [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:43:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f,network=Network(85cfbf28-7016-4776-8fc2-2eb08a6b8347),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc5020f3-51')
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.828 251996 INFO nova.compute.manager [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Took 4.46 seconds to destroy the instance on the hypervisor.
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.829 251996 DEBUG oslo.service.loopingcall [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.829 251996 DEBUG nova.compute.manager [-] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:27:03 compute-0 nova_compute[251992]: 2025-12-06 07:27:03.829 251996 DEBUG nova.network.neutron [-] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:27:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:03.833 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:03.833 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:03.834 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 233 MiB data, 934 MiB used, 20 GiB / 21 GiB avail; 678 KiB/s rd, 19 KiB/s wr, 158 op/s
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.072 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:04 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[318570]: [NOTICE]   (318578) : haproxy version is 2.8.14-c23fe91
Dec 06 07:27:04 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[318570]: [NOTICE]   (318578) : path to executable is /usr/sbin/haproxy
Dec 06 07:27:04 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[318570]: [WARNING]  (318578) : Exiting Master process...
Dec 06 07:27:04 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[318570]: [WARNING]  (318578) : Exiting Master process...
Dec 06 07:27:04 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[318570]: [ALERT]    (318578) : Current worker (318580) exited with code 143 (Terminated)
Dec 06 07:27:04 compute-0 neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347[318570]: [WARNING]  (318578) : All workers exited. Exiting... (0)
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.272 251996 DEBUG nova.compute.manager [req-c3f957c8-a265-4afe-88c5-99d81aff5cf2 req-561b612a-39d0-417b-9b2d-411dc9aa7d1f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-unplugged-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.273 251996 DEBUG oslo_concurrency.lockutils [req-c3f957c8-a265-4afe-88c5-99d81aff5cf2 req-561b612a-39d0-417b-9b2d-411dc9aa7d1f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.273 251996 DEBUG oslo_concurrency.lockutils [req-c3f957c8-a265-4afe-88c5-99d81aff5cf2 req-561b612a-39d0-417b-9b2d-411dc9aa7d1f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.274 251996 DEBUG oslo_concurrency.lockutils [req-c3f957c8-a265-4afe-88c5-99d81aff5cf2 req-561b612a-39d0-417b-9b2d-411dc9aa7d1f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:04 compute-0 systemd[1]: libpod-dc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f.scope: Deactivated successfully.
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.274 251996 DEBUG nova.compute.manager [req-c3f957c8-a265-4afe-88c5-99d81aff5cf2 req-561b612a-39d0-417b-9b2d-411dc9aa7d1f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] No waiting events found dispatching network-vif-unplugged-0e5e71bc-7098-4091-938e-6299f989917f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.276 251996 DEBUG nova.compute.manager [req-c3f957c8-a265-4afe-88c5-99d81aff5cf2 req-561b612a-39d0-417b-9b2d-411dc9aa7d1f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-unplugged-0e5e71bc-7098-4091-938e-6299f989917f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:27:04 compute-0 podman[319494]: 2025-12-06 07:27:04.280497482 +0000 UTC m=+0.945878528 container died dc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 07:27:04 compute-0 podman[319512]: 2025-12-06 07:27:04.284929674 +0000 UTC m=+0.854668525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:27:04 compute-0 podman[319512]: 2025-12-06 07:27:04.408872752 +0000 UTC m=+0.978611583 container create e9fc5bea9f76269e05f646c22f1722339989a4a4f3200b964d37c2aaf156f74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:27:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fa9aeb4d4d41aa84985347f8e119a02a23c5733331f79938e3de3db38e672bc-merged.mount: Deactivated successfully.
Dec 06 07:27:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f-userdata-shm.mount: Deactivated successfully.
Dec 06 07:27:04 compute-0 systemd[1]: Started libpod-conmon-e9fc5bea9f76269e05f646c22f1722339989a4a4f3200b964d37c2aaf156f74b.scope.
Dec 06 07:27:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f564192228f2642127e5554351e284328b1b7f20f5ecb98bbc3a02585e0ac439/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f564192228f2642127e5554351e284328b1b7f20f5ecb98bbc3a02585e0ac439/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f564192228f2642127e5554351e284328b1b7f20f5ecb98bbc3a02585e0ac439/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f564192228f2642127e5554351e284328b1b7f20f5ecb98bbc3a02585e0ac439/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:04 compute-0 podman[319494]: 2025-12-06 07:27:04.554741598 +0000 UTC m=+1.220122634 container cleanup dc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:27:04 compute-0 podman[319512]: 2025-12-06 07:27:04.576089399 +0000 UTC m=+1.145828260 container init e9fc5bea9f76269e05f646c22f1722339989a4a4f3200b964d37c2aaf156f74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wing, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:27:04 compute-0 podman[319512]: 2025-12-06 07:27:04.586492797 +0000 UTC m=+1.156231628 container start e9fc5bea9f76269e05f646c22f1722339989a4a4f3200b964d37c2aaf156f74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wing, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/219169138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.979 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.979 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.979 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.980 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:27:04 compute-0 nova_compute[251992]: 2025-12-06 07:27:04.980 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:04 compute-0 podman[319512]: 2025-12-06 07:27:04.988863647 +0000 UTC m=+1.558602478 container attach e9fc5bea9f76269e05f646c22f1722339989a4a4f3200b964d37c2aaf156f74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:27:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:05.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:05 compute-0 podman[319567]: 2025-12-06 07:27:05.164973339 +0000 UTC m=+0.586774613 container remove dc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:27:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:05.171 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b94f26-a8eb-4492-84a1-8448e3cca4e2]: (4, ('Sat Dec  6 07:27:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (dc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f)\ndc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f\nSat Dec  6 07:27:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 (dc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f)\ndc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:05 compute-0 systemd[1]: libpod-conmon-dc3464389249c79d5f2a71a05eab4f484ec39f45073e24034e379765ac3edb4f.scope: Deactivated successfully.
Dec 06 07:27:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:05.174 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f1327480-e795-4f8d-ac27-055221813f2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:05.176 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85cfbf28-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:05 compute-0 nova_compute[251992]: 2025-12-06 07:27:05.178 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:05 compute-0 kernel: tap85cfbf28-70: left promiscuous mode
Dec 06 07:27:05 compute-0 nova_compute[251992]: 2025-12-06 07:27:05.195 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:05.198 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1f1a14-a59c-4488-8fac-677c8b6c2bf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:05.211 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6fed35-c3b0-4203-904e-7f4fde7f8c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:05.212 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d15f0906-5340-45ce-931f-2681d2a25f63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:05.226 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1918bd21-84d6-4571-88d7-663d12dac0a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 627762, 'reachable_time': 44238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319605, 'error': None, 'target': 'ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d85cfbf28\x2d7016\x2d4776\x2d8fc2\x2d2eb08a6b8347.mount: Deactivated successfully.
Dec 06 07:27:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:05.231 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85cfbf28-7016-4776-8fc2-2eb08a6b8347 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:27:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:05.231 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[68bc35b0-2e75-4b4b-ae88-fbdb0781adb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]: {
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:     "0": [
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:         {
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             "devices": [
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "/dev/loop3"
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             ],
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             "lv_name": "ceph_lv0",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             "lv_size": "7511998464",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             "name": "ceph_lv0",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             "tags": {
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "ceph.cluster_name": "ceph",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "ceph.crush_device_class": "",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "ceph.encrypted": "0",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "ceph.osd_id": "0",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "ceph.type": "block",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:                 "ceph.vdo": "0"
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             },
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             "type": "block",
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:             "vg_name": "ceph_vg0"
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:         }
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]:     ]
Dec 06 07:27:05 compute-0 flamboyant_wing[319562]: }
Dec 06 07:27:05 compute-0 systemd[1]: libpod-e9fc5bea9f76269e05f646c22f1722339989a4a4f3200b964d37c2aaf156f74b.scope: Deactivated successfully.
Dec 06 07:27:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:05.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:05 compute-0 podman[319610]: 2025-12-06 07:27:05.42777154 +0000 UTC m=+0.029136187 container died e9fc5bea9f76269e05f646c22f1722339989a4a4f3200b964d37c2aaf156f74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wing, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:27:05 compute-0 nova_compute[251992]: 2025-12-06 07:27:05.475 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "dd85818c-bf82-473d-8650-6b391dbfa300" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:05 compute-0 nova_compute[251992]: 2025-12-06 07:27:05.476 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:05 compute-0 nova_compute[251992]: 2025-12-06 07:27:05.585 251996 DEBUG nova.compute.manager [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:27:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:27:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2829125190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:05 compute-0 nova_compute[251992]: 2025-12-06 07:27:05.793 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.813s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f564192228f2642127e5554351e284328b1b7f20f5ecb98bbc3a02585e0ac439-merged.mount: Deactivated successfully.
Dec 06 07:27:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 200 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 712 KiB/s rd, 19 KiB/s wr, 163 op/s
Dec 06 07:27:06 compute-0 ceph-mon[74339]: pgmap v2087: 305 pgs: 305 active+clean; 233 MiB data, 934 MiB used, 20 GiB / 21 GiB avail; 678 KiB/s rd, 19 KiB/s wr, 158 op/s
Dec 06 07:27:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1290920532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2829125190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:06 compute-0 podman[319610]: 2025-12-06 07:27:06.076756934 +0000 UTC m=+0.678121581 container remove e9fc5bea9f76269e05f646c22f1722339989a4a4f3200b964d37c2aaf156f74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wing, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:27:06 compute-0 systemd[1]: libpod-conmon-e9fc5bea9f76269e05f646c22f1722339989a4a4f3200b964d37c2aaf156f74b.scope: Deactivated successfully.
Dec 06 07:27:06 compute-0 sudo[319358]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:06 compute-0 sudo[319627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:27:06 compute-0 sudo[319627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:06 compute-0 sudo[319627]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:06 compute-0 sudo[319652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:27:06 compute-0 sudo[319652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:06 compute-0 sudo[319652]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:06 compute-0 sudo[319677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:27:06 compute-0 sudo[319677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:06 compute-0 sudo[319677]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:06 compute-0 sudo[319702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:27:06 compute-0 sudo[319702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.412 251996 DEBUG nova.compute.manager [req-0d326729-2a89-4fbb-b711-3cfb9c0c73f3 req-bf92bfa8-0f63-44d3-abc1-413f9c876249 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-unplugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.413 251996 DEBUG oslo_concurrency.lockutils [req-0d326729-2a89-4fbb-b711-3cfb9c0c73f3 req-bf92bfa8-0f63-44d3-abc1-413f9c876249 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.413 251996 DEBUG oslo_concurrency.lockutils [req-0d326729-2a89-4fbb-b711-3cfb9c0c73f3 req-bf92bfa8-0f63-44d3-abc1-413f9c876249 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.413 251996 DEBUG oslo_concurrency.lockutils [req-0d326729-2a89-4fbb-b711-3cfb9c0c73f3 req-bf92bfa8-0f63-44d3-abc1-413f9c876249 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.414 251996 DEBUG nova.compute.manager [req-0d326729-2a89-4fbb-b711-3cfb9c0c73f3 req-bf92bfa8-0f63-44d3-abc1-413f9c876249 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] No waiting events found dispatching network-vif-unplugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.414 251996 WARNING nova.compute.manager [req-0d326729-2a89-4fbb-b711-3cfb9c0c73f3 req-bf92bfa8-0f63-44d3-abc1-413f9c876249 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received unexpected event network-vif-unplugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f for instance with vm_state active and task_state None.
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.417 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.417 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.425 251996 DEBUG nova.virt.hardware [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.425 251996 INFO nova.compute.claims [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.448 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.448 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.451 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.452 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.612 251996 DEBUG nova.compute.manager [req-f56f2566-e10c-4bed-a408-5a83422b2894 req-2fcc20d7-6786-4880-8e47-fc0e52eb71c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.613 251996 DEBUG oslo_concurrency.lockutils [req-f56f2566-e10c-4bed-a408-5a83422b2894 req-2fcc20d7-6786-4880-8e47-fc0e52eb71c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.613 251996 DEBUG oslo_concurrency.lockutils [req-f56f2566-e10c-4bed-a408-5a83422b2894 req-2fcc20d7-6786-4880-8e47-fc0e52eb71c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.613 251996 DEBUG oslo_concurrency.lockutils [req-f56f2566-e10c-4bed-a408-5a83422b2894 req-2fcc20d7-6786-4880-8e47-fc0e52eb71c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.613 251996 DEBUG nova.compute.manager [req-f56f2566-e10c-4bed-a408-5a83422b2894 req-2fcc20d7-6786-4880-8e47-fc0e52eb71c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] No waiting events found dispatching network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.614 251996 WARNING nova.compute.manager [req-f56f2566-e10c-4bed-a408-5a83422b2894 req-2fcc20d7-6786-4880-8e47-fc0e52eb71c3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received unexpected event network-vif-plugged-0e5e71bc-7098-4091-938e-6299f989917f for instance with vm_state active and task_state deleting.
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.628 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.629 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4176MB free_disk=20.837993621826172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:27:06 compute-0 nova_compute[251992]: 2025-12-06 07:27:06.629 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:06 compute-0 podman[319768]: 2025-12-06 07:27:06.770597297 +0000 UTC m=+0.099622536 container create 4b1db0e5dd4fa50d7a59d47dca7c1f91f29ac4a742fbace427bd711d29038683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:27:06 compute-0 podman[319768]: 2025-12-06 07:27:06.693483065 +0000 UTC m=+0.022508324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:27:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:07.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:07 compute-0 sudo[319782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:27:07 compute-0 sudo[319782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:07 compute-0 sudo[319782]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:07 compute-0 systemd[1]: Started libpod-conmon-4b1db0e5dd4fa50d7a59d47dca7c1f91f29ac4a742fbace427bd711d29038683.scope.
Dec 06 07:27:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:27:07 compute-0 sudo[319807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:27:07 compute-0 sudo[319807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:07 compute-0 sudo[319807]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:07 compute-0 podman[319768]: 2025-12-06 07:27:07.16541973 +0000 UTC m=+0.494444999 container init 4b1db0e5dd4fa50d7a59d47dca7c1f91f29ac4a742fbace427bd711d29038683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:27:07 compute-0 podman[319768]: 2025-12-06 07:27:07.172782814 +0000 UTC m=+0.501808053 container start 4b1db0e5dd4fa50d7a59d47dca7c1f91f29ac4a742fbace427bd711d29038683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:27:07 compute-0 inspiring_driscoll[319830]: 167 167
Dec 06 07:27:07 compute-0 podman[319768]: 2025-12-06 07:27:07.178856712 +0000 UTC m=+0.507881951 container attach 4b1db0e5dd4fa50d7a59d47dca7c1f91f29ac4a742fbace427bd711d29038683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:27:07 compute-0 systemd[1]: libpod-4b1db0e5dd4fa50d7a59d47dca7c1f91f29ac4a742fbace427bd711d29038683.scope: Deactivated successfully.
Dec 06 07:27:07 compute-0 conmon[319830]: conmon 4b1db0e5dd4fa50d7a59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b1db0e5dd4fa50d7a59d47dca7c1f91f29ac4a742fbace427bd711d29038683.scope/container/memory.events
Dec 06 07:27:07 compute-0 podman[319768]: 2025-12-06 07:27:07.181245998 +0000 UTC m=+0.510271257 container died 4b1db0e5dd4fa50d7a59d47dca7c1f91f29ac4a742fbace427bd711d29038683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:27:07 compute-0 ceph-mon[74339]: pgmap v2088: 305 pgs: 305 active+clean; 200 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 712 KiB/s rd, 19 KiB/s wr, 163 op/s
Dec 06 07:27:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1443338164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:07 compute-0 nova_compute[251992]: 2025-12-06 07:27:07.205 251996 INFO nova.virt.libvirt.driver [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Deleting instance files /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78_del
Dec 06 07:27:07 compute-0 nova_compute[251992]: 2025-12-06 07:27:07.206 251996 INFO nova.virt.libvirt.driver [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Deletion of /var/lib/nova/instances/f32ea15c-cf80-482c-9f9a-22392bc79e78_del complete
Dec 06 07:27:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd7df6dce206ca17b999fc93471034e068ffc9563936b482f10cd2b8c50d6c6b-merged.mount: Deactivated successfully.
Dec 06 07:27:07 compute-0 podman[319768]: 2025-12-06 07:27:07.224703641 +0000 UTC m=+0.553728870 container remove 4b1db0e5dd4fa50d7a59d47dca7c1f91f29ac4a742fbace427bd711d29038683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_driscoll, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 07:27:07 compute-0 systemd[1]: libpod-conmon-4b1db0e5dd4fa50d7a59d47dca7c1f91f29ac4a742fbace427bd711d29038683.scope: Deactivated successfully.
Dec 06 07:27:07 compute-0 nova_compute[251992]: 2025-12-06 07:27:07.259 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:07.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:07 compute-0 podman[319858]: 2025-12-06 07:27:07.445826557 +0000 UTC m=+0.091864282 container create 748a43fbd522248570536953c7ba4e40ca4920ea91012f785971137a1f9da345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:27:07 compute-0 podman[319858]: 2025-12-06 07:27:07.388483431 +0000 UTC m=+0.034521206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:27:07 compute-0 systemd[1]: Started libpod-conmon-748a43fbd522248570536953c7ba4e40ca4920ea91012f785971137a1f9da345.scope.
Dec 06 07:27:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd822268ea07a370881b94b03015299985fa809f6dc46c34479f801c5c2183e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd822268ea07a370881b94b03015299985fa809f6dc46c34479f801c5c2183e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd822268ea07a370881b94b03015299985fa809f6dc46c34479f801c5c2183e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd822268ea07a370881b94b03015299985fa809f6dc46c34479f801c5c2183e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:27:07 compute-0 podman[319858]: 2025-12-06 07:27:07.655879658 +0000 UTC m=+0.301917403 container init 748a43fbd522248570536953c7ba4e40ca4920ea91012f785971137a1f9da345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclaren, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 07:27:07 compute-0 podman[319858]: 2025-12-06 07:27:07.662773299 +0000 UTC m=+0.308811024 container start 748a43fbd522248570536953c7ba4e40ca4920ea91012f785971137a1f9da345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:27:07 compute-0 podman[319858]: 2025-12-06 07:27:07.667081728 +0000 UTC m=+0.313119453 container attach 748a43fbd522248570536953c7ba4e40ca4920ea91012f785971137a1f9da345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:27:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:27:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054989693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:07 compute-0 nova_compute[251992]: 2025-12-06 07:27:07.721 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:07 compute-0 nova_compute[251992]: 2025-12-06 07:27:07.728 251996 DEBUG nova.compute.provider_tree [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:27:07 compute-0 nova_compute[251992]: 2025-12-06 07:27:07.882 251996 DEBUG nova.network.neutron [-] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:27:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 200 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 668 KiB/s rd, 18 KiB/s wr, 153 op/s
Dec 06 07:27:08 compute-0 nova_compute[251992]: 2025-12-06 07:27:08.046 251996 DEBUG nova.scheduler.client.report [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:27:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Dec 06 07:27:08 compute-0 fervent_mclaren[319892]: {
Dec 06 07:27:08 compute-0 fervent_mclaren[319892]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:27:08 compute-0 fervent_mclaren[319892]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:27:08 compute-0 fervent_mclaren[319892]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:27:08 compute-0 fervent_mclaren[319892]:         "osd_id": 0,
Dec 06 07:27:08 compute-0 fervent_mclaren[319892]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:27:08 compute-0 fervent_mclaren[319892]:         "type": "bluestore"
Dec 06 07:27:08 compute-0 fervent_mclaren[319892]:     }
Dec 06 07:27:08 compute-0 fervent_mclaren[319892]: }
Dec 06 07:27:08 compute-0 systemd[1]: libpod-748a43fbd522248570536953c7ba4e40ca4920ea91012f785971137a1f9da345.scope: Deactivated successfully.
Dec 06 07:27:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2054989693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:08 compute-0 ceph-mon[74339]: pgmap v2089: 305 pgs: 305 active+clean; 200 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 668 KiB/s rd, 18 KiB/s wr, 153 op/s
Dec 06 07:27:08 compute-0 podman[319915]: 2025-12-06 07:27:08.506927592 +0000 UTC m=+0.023084950 container died 748a43fbd522248570536953c7ba4e40ca4920ea91012f785971137a1f9da345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 07:27:08 compute-0 nova_compute[251992]: 2025-12-06 07:27:08.539 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:08 compute-0 nova_compute[251992]: 2025-12-06 07:27:08.540 251996 DEBUG nova.compute.manager [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:27:08 compute-0 nova_compute[251992]: 2025-12-06 07:27:08.542 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:08 compute-0 nova_compute[251992]: 2025-12-06 07:27:08.647 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-acd822268ea07a370881b94b03015299985fa809f6dc46c34479f801c5c2183e-merged.mount: Deactivated successfully.
Dec 06 07:27:08 compute-0 podman[319915]: 2025-12-06 07:27:08.966737522 +0000 UTC m=+0.482894840 container remove 748a43fbd522248570536953c7ba4e40ca4920ea91012f785971137a1f9da345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:27:08 compute-0 systemd[1]: libpod-conmon-748a43fbd522248570536953c7ba4e40ca4920ea91012f785971137a1f9da345.scope: Deactivated successfully.
Dec 06 07:27:09 compute-0 sudo[319702]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:27:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Dec 06 07:27:09 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Dec 06 07:27:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:09.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:09 compute-0 nova_compute[251992]: 2025-12-06 07:27:09.074 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:09.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:27:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:27:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:27:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 128851f7-ae7d-484d-9662-fbf4d5a7acf0 does not exist
Dec 06 07:27:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev fe456464-929b-4369-a85b-b67ddc00fc3a does not exist
Dec 06 07:27:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e33102c0-0e6b-4e5d-b545-b581e1e947ed does not exist
Dec 06 07:27:09 compute-0 sudo[319931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:27:09 compute-0 sudo[319931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:09 compute-0 sudo[319931]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 162 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 711 KiB/s rd, 19 KiB/s wr, 163 op/s
Dec 06 07:27:10 compute-0 sudo[319956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:27:10 compute-0 sudo[319956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:10 compute-0 sudo[319956]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:11.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/874349504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:11 compute-0 ceph-mon[74339]: osdmap e270: 3 total, 3 up, 3 in
Dec 06 07:27:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/479479914' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:27:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/479479914' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:27:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:27:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.241 251996 INFO nova.compute.manager [-] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Took 7.41 seconds to deallocate network for instance.
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.370 251996 INFO nova.compute.manager [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Took 8.93 seconds to destroy the instance on the hypervisor.
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.370 251996 DEBUG oslo.service.loopingcall [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.370 251996 DEBUG nova.compute.manager [-] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.371 251996 DEBUG nova.network.neutron [-] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:27:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:11.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.944 251996 DEBUG nova.compute.manager [req-087fdb89-26f5-472f-885b-05c985f531ef req-01d08de8-434b-4412-96c7-4d54c4e59b00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.945 251996 DEBUG oslo_concurrency.lockutils [req-087fdb89-26f5-472f-885b-05c985f531ef req-01d08de8-434b-4412-96c7-4d54c4e59b00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.945 251996 DEBUG oslo_concurrency.lockutils [req-087fdb89-26f5-472f-885b-05c985f531ef req-01d08de8-434b-4412-96c7-4d54c4e59b00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.945 251996 DEBUG oslo_concurrency.lockutils [req-087fdb89-26f5-472f-885b-05c985f531ef req-01d08de8-434b-4412-96c7-4d54c4e59b00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.946 251996 DEBUG nova.compute.manager [req-087fdb89-26f5-472f-885b-05c985f531ef req-01d08de8-434b-4412-96c7-4d54c4e59b00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] No waiting events found dispatching network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.946 251996 WARNING nova.compute.manager [req-087fdb89-26f5-472f-885b-05c985f531ef req-01d08de8-434b-4412-96c7-4d54c4e59b00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received unexpected event network-vif-plugged-fc5020f3-51a1-4ca2-b0b5-ff2add84607f for instance with vm_state active and task_state None.
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.946 251996 DEBUG nova.compute.manager [req-087fdb89-26f5-472f-885b-05c985f531ef req-01d08de8-434b-4412-96c7-4d54c4e59b00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Received event network-vif-deleted-0e5e71bc-7098-4091-938e-6299f989917f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.946 251996 INFO nova.compute.manager [req-087fdb89-26f5-472f-885b-05c985f531ef req-01d08de8-434b-4412-96c7-4d54c4e59b00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Neutron deleted interface 0e5e71bc-7098-4091-938e-6299f989917f; detaching it from the instance and deleting it from the info cache
Dec 06 07:27:11 compute-0 nova_compute[251992]: 2025-12-06 07:27:11.947 251996 DEBUG nova.network.neutron [req-087fdb89-26f5-472f-885b-05c985f531ef req-01d08de8-434b-4412-96c7-4d54c4e59b00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:27:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 567 KiB/s rd, 17 KiB/s wr, 122 op/s
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.096 251996 DEBUG nova.compute.manager [req-087fdb89-26f5-472f-885b-05c985f531ef req-01d08de8-434b-4412-96c7-4d54c4e59b00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Detach interface failed, port_id=0e5e71bc-7098-4091-938e-6299f989917f, reason: Instance ea8c0005-4b7a-4697-89ae-91f4bef22e36 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.233 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 00f56c62-f327-41e3-a105-24f56ae124c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.233 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance f32ea15c-cf80-482c-9f9a-22392bc79e78 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.233 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance ea8c0005-4b7a-4697-89ae-91f4bef22e36 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.233 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance dd85818c-bf82-473d-8650-6b391dbfa300 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.234 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.234 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:27:12 compute-0 ceph-mon[74339]: pgmap v2091: 305 pgs: 305 active+clean; 162 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 711 KiB/s rd, 19 KiB/s wr, 163 op/s
Dec 06 07:27:12 compute-0 ceph-mon[74339]: pgmap v2092: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 567 KiB/s rd, 17 KiB/s wr, 122 op/s
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.492 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.521 251996 DEBUG nova.compute.manager [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.522 251996 DEBUG nova.network.neutron [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.607 251996 DEBUG oslo_concurrency.lockutils [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:27:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3772906005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.941 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:12 compute-0 nova_compute[251992]: 2025-12-06 07:27:12.949 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:27:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:27:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:27:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:27:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:27:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:27:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:27:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:13.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:13.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1591755363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3772906005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:13 compute-0 nova_compute[251992]: 2025-12-06 07:27:13.634 251996 INFO nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:27:13 compute-0 nova_compute[251992]: 2025-12-06 07:27:13.652 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.8 KiB/s wr, 38 op/s
Dec 06 07:27:14 compute-0 nova_compute[251992]: 2025-12-06 07:27:14.075 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:14 compute-0 nova_compute[251992]: 2025-12-06 07:27:14.187 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:27:14 compute-0 nova_compute[251992]: 2025-12-06 07:27:14.212 251996 DEBUG nova.compute.manager [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:27:14 compute-0 nova_compute[251992]: 2025-12-06 07:27:14.605 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006019.6044083, ea8c0005-4b7a-4697-89ae-91f4bef22e36 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:27:14 compute-0 nova_compute[251992]: 2025-12-06 07:27:14.606 251996 INFO nova.compute.manager [-] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] VM Stopped (Lifecycle Event)
Dec 06 07:27:15 compute-0 nova_compute[251992]: 2025-12-06 07:27:15.019 251996 DEBUG nova.compute.manager [None req-5d3b5881-6cca-4a16-b3d2-c2eee02da51b - - - - - -] [instance: ea8c0005-4b7a-4697-89ae-91f4bef22e36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:27:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:15 compute-0 nova_compute[251992]: 2025-12-06 07:27:15.251 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:27:15 compute-0 nova_compute[251992]: 2025-12-06 07:27:15.251 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:15 compute-0 nova_compute[251992]: 2025-12-06 07:27:15.252 251996 DEBUG oslo_concurrency.lockutils [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 2.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:15.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:15 compute-0 nova_compute[251992]: 2025-12-06 07:27:15.545 251996 DEBUG oslo_concurrency.processutils [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:15 compute-0 nova_compute[251992]: 2025-12-06 07:27:15.923 251996 DEBUG nova.policy [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'baddb65c90da47a58d026b0db966f6c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '001e2256cb8b430d93c1ff613010d199', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:27:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:27:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3341888395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 KiB/s wr, 32 op/s
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.031 251996 DEBUG oslo_concurrency.processutils [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.036 251996 DEBUG nova.compute.provider_tree [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.065 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:16.066 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:27:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:16.067 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.101 251996 DEBUG nova.compute.manager [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.102 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.103 251996 INFO nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Creating image(s)
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.127 251996 DEBUG nova.storage.rbd_utils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image dd85818c-bf82-473d-8650-6b391dbfa300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.155 251996 DEBUG nova.storage.rbd_utils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image dd85818c-bf82-473d-8650-6b391dbfa300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.189 251996 DEBUG nova.storage.rbd_utils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image dd85818c-bf82-473d-8650-6b391dbfa300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.194 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.224 251996 DEBUG nova.scheduler.client.report [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.252 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.253 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.253 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.254 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.254 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.254 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.255 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.264 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.265 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.265 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.266 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.296 251996 DEBUG nova.storage.rbd_utils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image dd85818c-bf82-473d-8650-6b391dbfa300_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:16 compute-0 nova_compute[251992]: 2025-12-06 07:27:16.301 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef dd85818c-bf82-473d-8650-6b391dbfa300_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:17.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:17 compute-0 nova_compute[251992]: 2025-12-06 07:27:17.080 251996 DEBUG oslo_concurrency.lockutils [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:17 compute-0 nova_compute[251992]: 2025-12-06 07:27:17.140 251996 DEBUG nova.network.neutron [-] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:27:17 compute-0 nova_compute[251992]: 2025-12-06 07:27:17.198 251996 INFO nova.scheduler.client.report [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Deleted allocations for instance ea8c0005-4b7a-4697-89ae-91f4bef22e36
Dec 06 07:27:17 compute-0 nova_compute[251992]: 2025-12-06 07:27:17.235 251996 INFO nova.compute.manager [-] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Took 5.86 seconds to deallocate network for instance.
Dec 06 07:27:17 compute-0 nova_compute[251992]: 2025-12-06 07:27:17.289 251996 DEBUG nova.compute.manager [req-de22ebde-8810-439a-af54-7d31732dcd31 req-1f0f82e1-adc7-42c9-a938-744bbf8b6458 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Received event network-vif-deleted-fc5020f3-51a1-4ca2-b0b5-ff2add84607f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:17 compute-0 nova_compute[251992]: 2025-12-06 07:27:17.289 251996 INFO nova.compute.manager [req-de22ebde-8810-439a-af54-7d31732dcd31 req-1f0f82e1-adc7-42c9-a938-744bbf8b6458 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Neutron deleted interface fc5020f3-51a1-4ca2-b0b5-ff2add84607f; detaching it from the instance and deleting it from the info cache
Dec 06 07:27:17 compute-0 nova_compute[251992]: 2025-12-06 07:27:17.290 251996 DEBUG nova.network.neutron [req-de22ebde-8810-439a-af54-7d31732dcd31 req-1f0f82e1-adc7-42c9-a938-744bbf8b6458 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:27:17 compute-0 nova_compute[251992]: 2025-12-06 07:27:17.329 251996 DEBUG nova.compute.manager [req-de22ebde-8810-439a-af54-7d31732dcd31 req-1f0f82e1-adc7-42c9-a938-744bbf8b6458 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Detach interface failed, port_id=fc5020f3-51a1-4ca2-b0b5-ff2add84607f, reason: Instance f32ea15c-cf80-482c-9f9a-22392bc79e78 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 06 07:27:17 compute-0 nova_compute[251992]: 2025-12-06 07:27:17.429 251996 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:17 compute-0 nova_compute[251992]: 2025-12-06 07:27:17.430 251996 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:17.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:17 compute-0 nova_compute[251992]: 2025-12-06 07:27:17.485 251996 DEBUG oslo_concurrency.lockutils [None req-9cb965a9-1bf4-44a9-a3d3-4e3a8e43aaa8 d67c136e82ad4001b000848d75eef50d 88f5b34244614321a9b6e902eaba0ece - - default default] Lock "ea8c0005-4b7a-4697-89ae-91f4bef22e36" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 18.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:17 compute-0 nova_compute[251992]: 2025-12-06 07:27:17.625 251996 DEBUG oslo_concurrency.processutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 KiB/s wr, 32 op/s
Dec 06 07:27:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:27:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/679720062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.093 251996 DEBUG oslo_concurrency.processutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.101 251996 DEBUG nova.compute.provider_tree [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.275 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006023.2737894, f32ea15c-cf80-482c-9f9a-22392bc79e78 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.275 251996 INFO nova.compute.manager [-] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] VM Stopped (Lifecycle Event)
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.282 251996 DEBUG nova.scheduler.client.report [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.334 251996 DEBUG nova.network.neutron [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Successfully created port: 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:27:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:27:18
Dec 06 07:27:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:27:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:27:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'backups', 'images', '.rgw.root', 'default.rgw.control', '.mgr']
Dec 06 07:27:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.445 251996 DEBUG nova.compute.manager [None req-7afd4c69-72fb-469b-8d77-2f8b3ad0d927 - - - - - -] [instance: f32ea15c-cf80-482c-9f9a-22392bc79e78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.505 251996 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.576 251996 INFO nova.scheduler.client.report [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Deleted allocations for instance f32ea15c-cf80-482c-9f9a-22392bc79e78
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.655 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.765 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 07:27:18 compute-0 nova_compute[251992]: 2025-12-06 07:27:18.775 251996 DEBUG oslo_concurrency.lockutils [None req-7456b1bf-2aa4-4ccd-a6c0-246db503eb0b d966fefcb38a45219b9cc637c46a3d62 c6d2f50c0db54315bfa96a24511dda90 - - default default] Lock "f32ea15c-cf80-482c-9f9a-22392bc79e78" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:19.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:19.069 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:19 compute-0 nova_compute[251992]: 2025-12-06 07:27:19.078 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:19 compute-0 nova_compute[251992]: 2025-12-06 07:27:19.236 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:27:19 compute-0 nova_compute[251992]: 2025-12-06 07:27:19.236 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:27:19 compute-0 nova_compute[251992]: 2025-12-06 07:27:19.236 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:27:19 compute-0 nova_compute[251992]: 2025-12-06 07:27:19.237 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 00f56c62-f327-41e3-a105-24f56ae124c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:27:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:19.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 464 B/s wr, 11 op/s
Dec 06 07:27:20 compute-0 ceph-mon[74339]: pgmap v2093: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.8 KiB/s wr, 38 op/s
Dec 06 07:27:20 compute-0 nova_compute[251992]: 2025-12-06 07:27:20.705 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef dd85818c-bf82-473d-8650-6b391dbfa300_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:20 compute-0 nova_compute[251992]: 2025-12-06 07:27:20.783 251996 DEBUG nova.storage.rbd_utils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] resizing rbd image dd85818c-bf82-473d-8650-6b391dbfa300_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:27:20 compute-0 nova_compute[251992]: 2025-12-06 07:27:20.921 251996 DEBUG nova.objects.instance [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'migration_context' on Instance uuid dd85818c-bf82-473d-8650-6b391dbfa300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:27:20 compute-0 nova_compute[251992]: 2025-12-06 07:27:20.956 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:27:20 compute-0 nova_compute[251992]: 2025-12-06 07:27:20.957 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Ensure instance console log exists: /var/lib/nova/instances/dd85818c-bf82-473d-8650-6b391dbfa300/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:27:20 compute-0 nova_compute[251992]: 2025-12-06 07:27:20.957 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:20 compute-0 nova_compute[251992]: 2025-12-06 07:27:20.958 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:20 compute-0 nova_compute[251992]: 2025-12-06 07:27:20.958 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:27:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:21.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:27:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3341888395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:21 compute-0 ceph-mon[74339]: pgmap v2094: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 KiB/s wr, 32 op/s
Dec 06 07:27:21 compute-0 ceph-mon[74339]: pgmap v2095: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 KiB/s wr, 32 op/s
Dec 06 07:27:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/679720062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:21 compute-0 ceph-mon[74339]: pgmap v2096: 305 pgs: 305 active+clean; 121 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 464 B/s wr, 11 op/s
Dec 06 07:27:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3762767992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:21.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:21 compute-0 nova_compute[251992]: 2025-12-06 07:27:21.480 251996 DEBUG nova.network.neutron [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Successfully updated port: 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:27:21 compute-0 nova_compute[251992]: 2025-12-06 07:27:21.513 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "refresh_cache-dd85818c-bf82-473d-8650-6b391dbfa300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:27:21 compute-0 nova_compute[251992]: 2025-12-06 07:27:21.513 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquired lock "refresh_cache-dd85818c-bf82-473d-8650-6b391dbfa300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:27:21 compute-0 nova_compute[251992]: 2025-12-06 07:27:21.514 251996 DEBUG nova.network.neutron [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:27:21 compute-0 nova_compute[251992]: 2025-12-06 07:27:21.754 251996 DEBUG nova.compute.manager [req-7743d8cc-0b3f-46b0-80e6-fdab8367d0f8 req-a2991329-81a3-400a-962e-fc2bf2cbf8aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Received event network-changed-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:21 compute-0 nova_compute[251992]: 2025-12-06 07:27:21.755 251996 DEBUG nova.compute.manager [req-7743d8cc-0b3f-46b0-80e6-fdab8367d0f8 req-a2991329-81a3-400a-962e-fc2bf2cbf8aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Refreshing instance network info cache due to event network-changed-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:27:21 compute-0 nova_compute[251992]: 2025-12-06 07:27:21.755 251996 DEBUG oslo_concurrency.lockutils [req-7743d8cc-0b3f-46b0-80e6-fdab8367d0f8 req-a2991329-81a3-400a-962e-fc2bf2cbf8aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-dd85818c-bf82-473d-8650-6b391dbfa300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:27:21 compute-0 nova_compute[251992]: 2025-12-06 07:27:21.995 251996 DEBUG nova.network.neutron [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:27:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 149 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Dec 06 07:27:22 compute-0 ovn_controller[147168]: 2025-12-06T07:27:22Z|00396|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:27:22 compute-0 nova_compute[251992]: 2025-12-06 07:27:22.353 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:22 compute-0 ceph-mon[74339]: pgmap v2097: 305 pgs: 305 active+clean; 149 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.2 MiB/s wr, 25 op/s
Dec 06 07:27:22 compute-0 nova_compute[251992]: 2025-12-06 07:27:22.828 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updating instance_info_cache with network_info: [{"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:27:22 compute-0 nova_compute[251992]: 2025-12-06 07:27:22.865 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:27:22 compute-0 nova_compute[251992]: 2025-12-06 07:27:22.865 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:27:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:23.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:27:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:27:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:23.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:27:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:27:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:27:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:27:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:27:23 compute-0 nova_compute[251992]: 2025-12-06 07:27:23.658 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 204 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.125 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.172 251996 DEBUG nova.network.neutron [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Updating instance_info_cache with network_info: [{"id": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "address": "fa:16:3e:b8:67:71", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3edb8e90-65", "ovs_interfaceid": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.195 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Releasing lock "refresh_cache-dd85818c-bf82-473d-8650-6b391dbfa300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.195 251996 DEBUG nova.compute.manager [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Instance network_info: |[{"id": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "address": "fa:16:3e:b8:67:71", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3edb8e90-65", "ovs_interfaceid": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.196 251996 DEBUG oslo_concurrency.lockutils [req-7743d8cc-0b3f-46b0-80e6-fdab8367d0f8 req-a2991329-81a3-400a-962e-fc2bf2cbf8aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-dd85818c-bf82-473d-8650-6b391dbfa300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.196 251996 DEBUG nova.network.neutron [req-7743d8cc-0b3f-46b0-80e6-fdab8367d0f8 req-a2991329-81a3-400a-962e-fc2bf2cbf8aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Refreshing network info cache for port 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.199 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Start _get_guest_xml network_info=[{"id": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "address": "fa:16:3e:b8:67:71", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3edb8e90-65", "ovs_interfaceid": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.202 251996 WARNING nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.206 251996 DEBUG nova.virt.libvirt.host [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.207 251996 DEBUG nova.virt.libvirt.host [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.210 251996 DEBUG nova.virt.libvirt.host [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.211 251996 DEBUG nova.virt.libvirt.host [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.212 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.212 251996 DEBUG nova.virt.hardware [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.213 251996 DEBUG nova.virt.hardware [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.213 251996 DEBUG nova.virt.hardware [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.213 251996 DEBUG nova.virt.hardware [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.213 251996 DEBUG nova.virt.hardware [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.213 251996 DEBUG nova.virt.hardware [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.214 251996 DEBUG nova.virt.hardware [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.214 251996 DEBUG nova.virt.hardware [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.214 251996 DEBUG nova.virt.hardware [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.214 251996 DEBUG nova.virt.hardware [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.215 251996 DEBUG nova.virt.hardware [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.218 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:27:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:27:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:27:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:27:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:27:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:27:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/803218215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.686 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.714 251996 DEBUG nova.storage.rbd_utils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image dd85818c-bf82-473d-8650-6b391dbfa300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:24 compute-0 nova_compute[251992]: 2025-12-06 07:27:24.718 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:25.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:27:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2939792088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.179 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.181 251996 DEBUG nova.virt.libvirt.vif [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1902272375',display_name='tempest-ServerActionsTestOtherA-server-1902272375',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1902272375',id=109,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-jio410z8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActions
TestOtherA-1949739102-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:27:15Z,user_data=None,user_id='baddb65c90da47a58d026b0db966f6c8',uuid=dd85818c-bf82-473d-8650-6b391dbfa300,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "address": "fa:16:3e:b8:67:71", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3edb8e90-65", "ovs_interfaceid": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.181 251996 DEBUG nova.network.os_vif_util [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "address": "fa:16:3e:b8:67:71", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3edb8e90-65", "ovs_interfaceid": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.182 251996 DEBUG nova.network.os_vif_util [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:67:71,bridge_name='br-int',has_traffic_filtering=True,id=3edb8e90-653f-4c6f-9f6e-90dfc0fdb014,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3edb8e90-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.184 251996 DEBUG nova.objects.instance [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd85818c-bf82-473d-8650-6b391dbfa300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.258 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:27:25 compute-0 nova_compute[251992]:   <uuid>dd85818c-bf82-473d-8650-6b391dbfa300</uuid>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   <name>instance-0000006d</name>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerActionsTestOtherA-server-1902272375</nova:name>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:27:24</nova:creationTime>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <nova:user uuid="baddb65c90da47a58d026b0db966f6c8">tempest-ServerActionsTestOtherA-1949739102-project-member</nova:user>
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <nova:project uuid="001e2256cb8b430d93c1ff613010d199">tempest-ServerActionsTestOtherA-1949739102</nova:project>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <nova:port uuid="3edb8e90-653f-4c6f-9f6e-90dfc0fdb014">
Dec 06 07:27:25 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <system>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <entry name="serial">dd85818c-bf82-473d-8650-6b391dbfa300</entry>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <entry name="uuid">dd85818c-bf82-473d-8650-6b391dbfa300</entry>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     </system>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   <os>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   </os>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   <features>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   </features>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/dd85818c-bf82-473d-8650-6b391dbfa300_disk">
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       </source>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/dd85818c-bf82-473d-8650-6b391dbfa300_disk.config">
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       </source>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:27:25 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:b8:67:71"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <target dev="tap3edb8e90-65"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/dd85818c-bf82-473d-8650-6b391dbfa300/console.log" append="off"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <video>
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     </video>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:27:25 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:27:25 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:27:25 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:27:25 compute-0 nova_compute[251992]: </domain>
Dec 06 07:27:25 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.259 251996 DEBUG nova.compute.manager [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Preparing to wait for external event network-vif-plugged-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.259 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.259 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.259 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.260 251996 DEBUG nova.virt.libvirt.vif [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1902272375',display_name='tempest-ServerActionsTestOtherA-server-1902272375',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1902272375',id=109,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-jio410z8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:27:15Z,user_data=None,user_id='baddb65c90da47a58d026b0db966f6c8',uuid=dd85818c-bf82-473d-8650-6b391dbfa300,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "address": "fa:16:3e:b8:67:71", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3edb8e90-65", "ovs_interfaceid": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.260 251996 DEBUG nova.network.os_vif_util [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "address": "fa:16:3e:b8:67:71", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3edb8e90-65", "ovs_interfaceid": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.261 251996 DEBUG nova.network.os_vif_util [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:67:71,bridge_name='br-int',has_traffic_filtering=True,id=3edb8e90-653f-4c6f-9f6e-90dfc0fdb014,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3edb8e90-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.261 251996 DEBUG os_vif [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:67:71,bridge_name='br-int',has_traffic_filtering=True,id=3edb8e90-653f-4c6f-9f6e-90dfc0fdb014,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3edb8e90-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.262 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.262 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.263 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.266 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.267 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3edb8e90-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.267 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3edb8e90-65, col_values=(('external_ids', {'iface-id': '3edb8e90-653f-4c6f-9f6e-90dfc0fdb014', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:67:71', 'vm-uuid': 'dd85818c-bf82-473d-8650-6b391dbfa300'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:25 compute-0 NetworkManager[48965]: <info>  [1765006045.2705] manager: (tap3edb8e90-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.273 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.275 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.277 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.277 251996 INFO os_vif [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:67:71,bridge_name='br-int',has_traffic_filtering=True,id=3edb8e90-653f-4c6f-9f6e-90dfc0fdb014,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3edb8e90-65')
Dec 06 07:27:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:25.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:25 compute-0 ceph-mon[74339]: pgmap v2098: 305 pgs: 305 active+clean; 204 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.654 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.655 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.655 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No VIF found with MAC fa:16:3e:b8:67:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.656 251996 INFO nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Using config drive
Dec 06 07:27:25 compute-0 nova_compute[251992]: 2025-12-06 07:27:25.683 251996 DEBUG nova.storage.rbd_utils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image dd85818c-bf82-473d-8650-6b391dbfa300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004033114065698865 of space, bias 1.0, pg target 1.2099342197096594 quantized to 32 (current 32)
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:27:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:27:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 213 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 53 op/s
Dec 06 07:27:26 compute-0 nova_compute[251992]: 2025-12-06 07:27:26.475 251996 INFO nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Creating config drive at /var/lib/nova/instances/dd85818c-bf82-473d-8650-6b391dbfa300/disk.config
Dec 06 07:27:26 compute-0 nova_compute[251992]: 2025-12-06 07:27:26.480 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd85818c-bf82-473d-8650-6b391dbfa300/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3pqtlnjb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:26 compute-0 nova_compute[251992]: 2025-12-06 07:27:26.612 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd85818c-bf82-473d-8650-6b391dbfa300/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3pqtlnjb" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:26 compute-0 nova_compute[251992]: 2025-12-06 07:27:26.812 251996 DEBUG nova.storage.rbd_utils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image dd85818c-bf82-473d-8650-6b391dbfa300_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:27:26 compute-0 nova_compute[251992]: 2025-12-06 07:27:26.815 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd85818c-bf82-473d-8650-6b391dbfa300/disk.config dd85818c-bf82-473d-8650-6b391dbfa300_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:27:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/803218215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:27:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2939792088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:27:26 compute-0 ceph-mon[74339]: pgmap v2099: 305 pgs: 305 active+clean; 213 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 53 op/s
Dec 06 07:27:27 compute-0 nova_compute[251992]: 2025-12-06 07:27:27.022 251996 DEBUG nova.network.neutron [req-7743d8cc-0b3f-46b0-80e6-fdab8367d0f8 req-a2991329-81a3-400a-962e-fc2bf2cbf8aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Updated VIF entry in instance network info cache for port 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:27:27 compute-0 nova_compute[251992]: 2025-12-06 07:27:27.023 251996 DEBUG nova.network.neutron [req-7743d8cc-0b3f-46b0-80e6-fdab8367d0f8 req-a2991329-81a3-400a-962e-fc2bf2cbf8aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Updating instance_info_cache with network_info: [{"id": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "address": "fa:16:3e:b8:67:71", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3edb8e90-65", "ovs_interfaceid": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:27:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:27.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:27 compute-0 nova_compute[251992]: 2025-12-06 07:27:27.072 251996 DEBUG oslo_concurrency.lockutils [req-7743d8cc-0b3f-46b0-80e6-fdab8367d0f8 req-a2991329-81a3-400a-962e-fc2bf2cbf8aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-dd85818c-bf82-473d-8650-6b391dbfa300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:27:27 compute-0 sudo[320344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:27:27 compute-0 sudo[320344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:27 compute-0 sudo[320344]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:27 compute-0 sudo[320375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:27:27 compute-0 sudo[320375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:27 compute-0 sudo[320375]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:27 compute-0 podman[320368]: 2025-12-06 07:27:27.302792934 +0000 UTC m=+0.083262955 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:27:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:27:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:27:27 compute-0 nova_compute[251992]: 2025-12-06 07:27:27.466 251996 DEBUG oslo_concurrency.processutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd85818c-bf82-473d-8650-6b391dbfa300/disk.config dd85818c-bf82-473d-8650-6b391dbfa300_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:27:27 compute-0 nova_compute[251992]: 2025-12-06 07:27:27.466 251996 INFO nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Deleting local config drive /var/lib/nova/instances/dd85818c-bf82-473d-8650-6b391dbfa300/disk.config because it was imported into RBD.
Dec 06 07:27:27 compute-0 kernel: tap3edb8e90-65: entered promiscuous mode
Dec 06 07:27:27 compute-0 NetworkManager[48965]: <info>  [1765006047.5220] manager: (tap3edb8e90-65): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Dec 06 07:27:27 compute-0 ovn_controller[147168]: 2025-12-06T07:27:27Z|00397|binding|INFO|Claiming lport 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 for this chassis.
Dec 06 07:27:27 compute-0 ovn_controller[147168]: 2025-12-06T07:27:27Z|00398|binding|INFO|3edb8e90-653f-4c6f-9f6e-90dfc0fdb014: Claiming fa:16:3e:b8:67:71 10.100.0.11
Dec 06 07:27:27 compute-0 nova_compute[251992]: 2025-12-06 07:27:27.522 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:27 compute-0 ovn_controller[147168]: 2025-12-06T07:27:27Z|00399|binding|INFO|Setting lport 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 ovn-installed in OVS
Dec 06 07:27:27 compute-0 nova_compute[251992]: 2025-12-06 07:27:27.541 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:27 compute-0 nova_compute[251992]: 2025-12-06 07:27:27.543 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:27 compute-0 ovn_controller[147168]: 2025-12-06T07:27:27Z|00400|binding|INFO|Setting lport 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 up in Southbound
Dec 06 07:27:27 compute-0 systemd-udevd[320433]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:27:27 compute-0 systemd-machined[212986]: New machine qemu-50-instance-0000006d.
Dec 06 07:27:27 compute-0 NetworkManager[48965]: <info>  [1765006047.5665] device (tap3edb8e90-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:27:27 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000006d.
Dec 06 07:27:27 compute-0 NetworkManager[48965]: <info>  [1765006047.5671] device (tap3edb8e90-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.572 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:67:71 10.100.0.11'], port_security=['fa:16:3e:b8:67:71 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dd85818c-bf82-473d-8650-6b391dbfa300', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f2ed01ce-ee24-45dc-b59f-29fb74c119b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=3edb8e90-653f-4c6f-9f6e-90dfc0fdb014) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.574 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e bound to our chassis
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.575 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.593 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c7208698-dd2e-4ce2-a181-bfba0ddb33d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.624 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[788a8ccb-0c27-47d2-9148-64b485f37258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.627 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7048f6e7-dc82-48ed-8e97-8e9b2b2931c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.657 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d4218ed8-6c9e-4c6b-b609-941fca9306da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.679 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d73f46bd-2ee0-4e12-9729-f49e46535a94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605069, 'reachable_time': 42946, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320447, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.698 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0f4a0d-1f3c-4451-957c-7c0487fcc22c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6209aab-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605081, 'tstamp': 605081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320448, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6209aab-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605084, 'tstamp': 605084}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320448, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.701 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.705 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6209aab-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:27 compute-0 nova_compute[251992]: 2025-12-06 07:27:27.705 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.706 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.707 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6209aab-d0, col_values=(('external_ids', {'iface-id': '1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:27:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:27.707 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:27:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 213 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 53 op/s
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.288 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006048.288409, dd85818c-bf82-473d-8650-6b391dbfa300 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.289 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] VM Started (Lifecycle Event)
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.296 251996 DEBUG nova.compute.manager [req-02a721de-0148-4843-915c-a735c2c37e33 req-b9d66b3d-174a-4ae5-815a-bea8d52c2222 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Received event network-vif-plugged-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.296 251996 DEBUG oslo_concurrency.lockutils [req-02a721de-0148-4843-915c-a735c2c37e33 req-b9d66b3d-174a-4ae5-815a-bea8d52c2222 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.296 251996 DEBUG oslo_concurrency.lockutils [req-02a721de-0148-4843-915c-a735c2c37e33 req-b9d66b3d-174a-4ae5-815a-bea8d52c2222 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.296 251996 DEBUG oslo_concurrency.lockutils [req-02a721de-0148-4843-915c-a735c2c37e33 req-b9d66b3d-174a-4ae5-815a-bea8d52c2222 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.297 251996 DEBUG nova.compute.manager [req-02a721de-0148-4843-915c-a735c2c37e33 req-b9d66b3d-174a-4ae5-815a-bea8d52c2222 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Processing event network-vif-plugged-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.297 251996 DEBUG nova.compute.manager [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.301 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.304 251996 INFO nova.virt.libvirt.driver [-] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Instance spawned successfully.
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.304 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.317 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.320 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:27:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1297532924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:27:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2529783913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.335 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.336 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.336 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.337 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.337 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.338 251996 DEBUG nova.virt.libvirt.driver [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.349 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.350 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006048.2885454, dd85818c-bf82-473d-8650-6b391dbfa300 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.350 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] VM Paused (Lifecycle Event)
Dec 06 07:27:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.419 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.423 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006048.2999845, dd85818c-bf82-473d-8650-6b391dbfa300 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.423 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] VM Resumed (Lifecycle Event)
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.471 251996 INFO nova.compute.manager [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Took 12.37 seconds to spawn the instance on the hypervisor.
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.471 251996 DEBUG nova.compute.manager [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.477 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.479 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.539 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.570 251996 INFO nova.compute.manager [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Took 22.27 seconds to build instance.
Dec 06 07:27:28 compute-0 nova_compute[251992]: 2025-12-06 07:27:28.587 251996 DEBUG oslo_concurrency.lockutils [None req-3ec27200-9910-4832-bf01-354cbfb0c9fa baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:29.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:29 compute-0 nova_compute[251992]: 2025-12-06 07:27:29.126 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:29.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 213 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Dec 06 07:27:30 compute-0 ceph-mon[74339]: pgmap v2100: 305 pgs: 305 active+clean; 213 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 53 op/s
Dec 06 07:27:30 compute-0 nova_compute[251992]: 2025-12-06 07:27:30.270 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:30 compute-0 nova_compute[251992]: 2025-12-06 07:27:30.709 251996 DEBUG nova.compute.manager [req-cf19d3f8-2342-4cef-8fcb-b0472545abf8 req-be5dd23e-0eab-4c0a-9cf7-bbb1750c1491 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Received event network-vif-plugged-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:30 compute-0 nova_compute[251992]: 2025-12-06 07:27:30.710 251996 DEBUG oslo_concurrency.lockutils [req-cf19d3f8-2342-4cef-8fcb-b0472545abf8 req-be5dd23e-0eab-4c0a-9cf7-bbb1750c1491 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:27:30 compute-0 nova_compute[251992]: 2025-12-06 07:27:30.710 251996 DEBUG oslo_concurrency.lockutils [req-cf19d3f8-2342-4cef-8fcb-b0472545abf8 req-be5dd23e-0eab-4c0a-9cf7-bbb1750c1491 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:27:30 compute-0 nova_compute[251992]: 2025-12-06 07:27:30.711 251996 DEBUG oslo_concurrency.lockutils [req-cf19d3f8-2342-4cef-8fcb-b0472545abf8 req-be5dd23e-0eab-4c0a-9cf7-bbb1750c1491 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:27:30 compute-0 nova_compute[251992]: 2025-12-06 07:27:30.712 251996 DEBUG nova.compute.manager [req-cf19d3f8-2342-4cef-8fcb-b0472545abf8 req-be5dd23e-0eab-4c0a-9cf7-bbb1750c1491 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] No waiting events found dispatching network-vif-plugged-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:27:30 compute-0 nova_compute[251992]: 2025-12-06 07:27:30.712 251996 WARNING nova.compute.manager [req-cf19d3f8-2342-4cef-8fcb-b0472545abf8 req-be5dd23e-0eab-4c0a-9cf7-bbb1750c1491 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Received unexpected event network-vif-plugged-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 for instance with vm_state active and task_state None.
Dec 06 07:27:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:31.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:31.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:31 compute-0 ceph-mon[74339]: pgmap v2101: 305 pgs: 305 active+clean; 213 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 3.6 MiB/s wr, 70 op/s
Dec 06 07:27:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 213 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Dec 06 07:27:32 compute-0 nova_compute[251992]: 2025-12-06 07:27:32.055 251996 DEBUG nova.compute.manager [req-ab6150fe-9296-4964-b35a-89838e6a5e7d req-7a29ad2a-f5b8-4b8d-862e-3ddff99a3e45 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Received event network-changed-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:27:32 compute-0 nova_compute[251992]: 2025-12-06 07:27:32.055 251996 DEBUG nova.compute.manager [req-ab6150fe-9296-4964-b35a-89838e6a5e7d req-7a29ad2a-f5b8-4b8d-862e-3ddff99a3e45 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Refreshing instance network info cache due to event network-changed-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:27:32 compute-0 nova_compute[251992]: 2025-12-06 07:27:32.056 251996 DEBUG oslo_concurrency.lockutils [req-ab6150fe-9296-4964-b35a-89838e6a5e7d req-7a29ad2a-f5b8-4b8d-862e-3ddff99a3e45 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-dd85818c-bf82-473d-8650-6b391dbfa300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:27:32 compute-0 nova_compute[251992]: 2025-12-06 07:27:32.056 251996 DEBUG oslo_concurrency.lockutils [req-ab6150fe-9296-4964-b35a-89838e6a5e7d req-7a29ad2a-f5b8-4b8d-862e-3ddff99a3e45 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-dd85818c-bf82-473d-8650-6b391dbfa300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:27:32 compute-0 nova_compute[251992]: 2025-12-06 07:27:32.056 251996 DEBUG nova.network.neutron [req-ab6150fe-9296-4964-b35a-89838e6a5e7d req-7a29ad2a-f5b8-4b8d-862e-3ddff99a3e45 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Refreshing network info cache for port 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:27:32 compute-0 podman[320494]: 2025-12-06 07:27:32.405152536 +0000 UTC m=+0.053496171 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:27:32 compute-0 podman[320495]: 2025-12-06 07:27:32.407188212 +0000 UTC m=+0.055760403 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:27:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:33.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:33 compute-0 ceph-mon[74339]: pgmap v2102: 305 pgs: 305 active+clean; 213 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Dec 06 07:27:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/326643290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:27:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/326643290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:27:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:33.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 133 op/s
Dec 06 07:27:34 compute-0 nova_compute[251992]: 2025-12-06 07:27:34.128 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:34 compute-0 ceph-mon[74339]: pgmap v2103: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 133 op/s
Dec 06 07:27:34 compute-0 nova_compute[251992]: 2025-12-06 07:27:34.904 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:35.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:35 compute-0 nova_compute[251992]: 2025-12-06 07:27:35.266 251996 DEBUG nova.network.neutron [req-ab6150fe-9296-4964-b35a-89838e6a5e7d req-7a29ad2a-f5b8-4b8d-862e-3ddff99a3e45 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Updated VIF entry in instance network info cache for port 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:27:35 compute-0 nova_compute[251992]: 2025-12-06 07:27:35.268 251996 DEBUG nova.network.neutron [req-ab6150fe-9296-4964-b35a-89838e6a5e7d req-7a29ad2a-f5b8-4b8d-862e-3ddff99a3e45 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Updating instance_info_cache with network_info: [{"id": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "address": "fa:16:3e:b8:67:71", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3edb8e90-65", "ovs_interfaceid": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:27:35 compute-0 nova_compute[251992]: 2025-12-06 07:27:35.272 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:35 compute-0 nova_compute[251992]: 2025-12-06 07:27:35.306 251996 DEBUG oslo_concurrency.lockutils [req-ab6150fe-9296-4964-b35a-89838e6a5e7d req-7a29ad2a-f5b8-4b8d-862e-3ddff99a3e45 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-dd85818c-bf82-473d-8650-6b391dbfa300" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:27:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:35.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 240 KiB/s wr, 110 op/s
Dec 06 07:27:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:37.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:37.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:37 compute-0 ceph-mon[74339]: pgmap v2104: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 240 KiB/s wr, 110 op/s
Dec 06 07:27:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 25 KiB/s wr, 99 op/s
Dec 06 07:27:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:38 compute-0 ovn_controller[147168]: 2025-12-06T07:27:38Z|00401|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:27:38 compute-0 nova_compute[251992]: 2025-12-06 07:27:38.516 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:38 compute-0 ceph-mon[74339]: pgmap v2105: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 25 KiB/s wr, 99 op/s
Dec 06 07:27:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:39.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:39 compute-0 nova_compute[251992]: 2025-12-06 07:27:39.130 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:39.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 25 KiB/s wr, 155 op/s
Dec 06 07:27:40 compute-0 nova_compute[251992]: 2025-12-06 07:27:40.275 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:40 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Dec 06 07:27:40 compute-0 nova_compute[251992]: 2025-12-06 07:27:40.940 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:41.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:41 compute-0 ceph-mon[74339]: pgmap v2106: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 25 KiB/s wr, 155 op/s
Dec 06 07:27:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:41.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 145 op/s
Dec 06 07:27:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:27:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:27:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:27:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:27:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:27:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:27:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:43.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:43.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 220 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 630 KiB/s wr, 105 op/s
Dec 06 07:27:44 compute-0 nova_compute[251992]: 2025-12-06 07:27:44.132 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:44 compute-0 ceph-mon[74339]: pgmap v2107: 305 pgs: 305 active+clean; 214 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 145 op/s
Dec 06 07:27:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:45.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:45 compute-0 nova_compute[251992]: 2025-12-06 07:27:45.278 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:45.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 225 MiB data, 937 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 84 op/s
Dec 06 07:27:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:47.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:47 compute-0 sudo[320543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:27:47 compute-0 sudo[320543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:47 compute-0 sudo[320543]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:47 compute-0 sudo[320568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:27:47 compute-0 sudo[320568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:27:47 compute-0 sudo[320568]: pam_unix(sudo:session): session closed for user root
Dec 06 07:27:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:47.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 225 MiB data, 937 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 79 op/s
Dec 06 07:27:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3802733494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:49 compute-0 ceph-mon[74339]: pgmap v2108: 305 pgs: 305 active+clean; 220 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 630 KiB/s wr, 105 op/s
Dec 06 07:27:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:49.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:49 compute-0 nova_compute[251992]: 2025-12-06 07:27:49.134 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:49.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 227 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 90 op/s
Dec 06 07:27:50 compute-0 nova_compute[251992]: 2025-12-06 07:27:50.280 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:51.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:51.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 238 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 252 KiB/s rd, 2.4 MiB/s wr, 48 op/s
Dec 06 07:27:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:53.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:53.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 244 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 3.0 MiB/s wr, 46 op/s
Dec 06 07:27:54 compute-0 nova_compute[251992]: 2025-12-06 07:27:54.136 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:54 compute-0 nova_compute[251992]: 2025-12-06 07:27:54.239 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:55 compute-0 nova_compute[251992]: 2025-12-06 07:27:55.013 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:55.016 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:27:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:27:55.017 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:27:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:55 compute-0 nova_compute[251992]: 2025-12-06 07:27:55.373 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:55.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:55 compute-0 ceph-mon[74339]: pgmap v2109: 305 pgs: 305 active+clean; 225 MiB data, 937 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 84 op/s
Dec 06 07:27:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4280102292' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:27:55 compute-0 ceph-mon[74339]: pgmap v2110: 305 pgs: 305 active+clean; 225 MiB data, 937 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 79 op/s
Dec 06 07:27:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 279 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 55 op/s
Dec 06 07:27:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:57.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:57 compute-0 podman[320598]: 2025-12-06 07:27:57.434994043 +0000 UTC m=+0.079592703 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:27:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:57.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:27:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 279 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 47 op/s
Dec 06 07:27:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:27:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:27:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:27:59.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:27:59 compute-0 nova_compute[251992]: 2025-12-06 07:27:59.139 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:27:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:27:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:27:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:27:59.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 281 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 47 op/s
Dec 06 07:28:00 compute-0 ceph-mon[74339]: pgmap v2111: 305 pgs: 305 active+clean; 227 MiB data, 938 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 90 op/s
Dec 06 07:28:00 compute-0 ceph-mon[74339]: pgmap v2112: 305 pgs: 305 active+clean; 238 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 252 KiB/s rd, 2.4 MiB/s wr, 48 op/s
Dec 06 07:28:00 compute-0 ceph-mon[74339]: pgmap v2113: 305 pgs: 305 active+clean; 244 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 3.0 MiB/s wr, 46 op/s
Dec 06 07:28:00 compute-0 ceph-mon[74339]: pgmap v2114: 305 pgs: 305 active+clean; 279 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 55 op/s
Dec 06 07:28:00 compute-0 nova_compute[251992]: 2025-12-06 07:28:00.375 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:01.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:01.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 300 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.1 MiB/s wr, 41 op/s
Dec 06 07:28:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:03.019 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:28:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:03.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:03 compute-0 podman[320629]: 2025-12-06 07:28:03.41516757 +0000 UTC m=+0.069629028 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:28:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:03 compute-0 podman[320628]: 2025-12-06 07:28:03.440980683 +0000 UTC m=+0.099147554 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:28:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:03.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:03.833 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:03.834 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:03.835 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 302 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 34 op/s
Dec 06 07:28:04 compute-0 nova_compute[251992]: 2025-12-06 07:28:04.140 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:04 compute-0 nova_compute[251992]: 2025-12-06 07:28:04.563 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:05.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:05 compute-0 nova_compute[251992]: 2025-12-06 07:28:05.377 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:05.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:05 compute-0 nova_compute[251992]: 2025-12-06 07:28:05.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:05 compute-0 nova_compute[251992]: 2025-12-06 07:28:05.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:05 compute-0 nova_compute[251992]: 2025-12-06 07:28:05.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:05 compute-0 nova_compute[251992]: 2025-12-06 07:28:05.726 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:05 compute-0 nova_compute[251992]: 2025-12-06 07:28:05.727 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:05 compute-0 nova_compute[251992]: 2025-12-06 07:28:05.727 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:05 compute-0 nova_compute[251992]: 2025-12-06 07:28:05.727 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:28:05 compute-0 nova_compute[251992]: 2025-12-06 07:28:05.728 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:28:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 307 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 53 op/s
Dec 06 07:28:06 compute-0 ceph-mon[74339]: pgmap v2115: 305 pgs: 305 active+clean; 279 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 47 op/s
Dec 06 07:28:06 compute-0 ceph-mon[74339]: pgmap v2116: 305 pgs: 305 active+clean; 281 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 47 op/s
Dec 06 07:28:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:28:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2598187878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.192 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.281 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.282 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.286 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.286 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.457 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.458 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4075MB free_disk=20.83625030517578GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.458 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.458 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.552 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 00f56c62-f327-41e3-a105-24f56ae124c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.553 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance dd85818c-bf82-473d-8650-6b391dbfa300 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.553 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.553 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:28:06 compute-0 nova_compute[251992]: 2025-12-06 07:28:06.653 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:28:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:07.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:28:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741694525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:07 compute-0 nova_compute[251992]: 2025-12-06 07:28:07.151 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:28:07 compute-0 nova_compute[251992]: 2025-12-06 07:28:07.157 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:28:07 compute-0 sudo[320710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:07 compute-0 sudo[320710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:07 compute-0 sudo[320710]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:07.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:07 compute-0 sudo[320735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:07 compute-0 sudo[320735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:07 compute-0 sudo[320735]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:07 compute-0 nova_compute[251992]: 2025-12-06 07:28:07.601 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:28:08 compute-0 nova_compute[251992]: 2025-12-06 07:28:08.021 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:28:08 compute-0 nova_compute[251992]: 2025-12-06 07:28:08.021 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 307 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 224 KiB/s rd, 1.0 MiB/s wr, 35 op/s
Dec 06 07:28:08 compute-0 ceph-mon[74339]: pgmap v2117: 305 pgs: 305 active+clean; 300 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.1 MiB/s wr, 41 op/s
Dec 06 07:28:08 compute-0 ceph-mon[74339]: pgmap v2118: 305 pgs: 305 active+clean; 302 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 34 op/s
Dec 06 07:28:08 compute-0 ceph-mon[74339]: pgmap v2119: 305 pgs: 305 active+clean; 307 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 MiB/s wr, 53 op/s
Dec 06 07:28:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2598187878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/164684786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3741694525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:09 compute-0 nova_compute[251992]: 2025-12-06 07:28:09.021 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:09 compute-0 nova_compute[251992]: 2025-12-06 07:28:09.021 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:09 compute-0 nova_compute[251992]: 2025-12-06 07:28:09.021 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:09 compute-0 nova_compute[251992]: 2025-12-06 07:28:09.022 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:09.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:09 compute-0 nova_compute[251992]: 2025-12-06 07:28:09.144 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:09.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 309 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Dec 06 07:28:10 compute-0 ceph-mon[74339]: pgmap v2120: 305 pgs: 305 active+clean; 307 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 224 KiB/s rd, 1.0 MiB/s wr, 35 op/s
Dec 06 07:28:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1106945750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2551767847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2081102898' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:28:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2081102898' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:28:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1049557105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1704356668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:10 compute-0 sudo[320761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:10 compute-0 sudo[320761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:10 compute-0 sudo[320761]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:10 compute-0 nova_compute[251992]: 2025-12-06 07:28:10.431 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:10 compute-0 sudo[320786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:28:10 compute-0 sudo[320786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:10 compute-0 sudo[320786]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:10 compute-0 sudo[320811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:10 compute-0 sudo[320811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:10 compute-0 sudo[320811]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:10 compute-0 sudo[320837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 07:28:10 compute-0 sudo[320837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:11.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:11.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:11 compute-0 nova_compute[251992]: 2025-12-06 07:28:11.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:11 compute-0 nova_compute[251992]: 2025-12-06 07:28:11.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:28:11 compute-0 nova_compute[251992]: 2025-12-06 07:28:11.672 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:28:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:28:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 318 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 578 KiB/s rd, 1.2 MiB/s wr, 83 op/s
Dec 06 07:28:12 compute-0 podman[320935]: 2025-12-06 07:28:12.315271752 +0000 UTC m=+1.328946235 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 07:28:12 compute-0 ceph-mon[74339]: pgmap v2121: 305 pgs: 305 active+clean; 309 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Dec 06 07:28:12 compute-0 podman[320935]: 2025-12-06 07:28:12.959600187 +0000 UTC m=+1.973274680 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:28:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:28:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:28:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:28:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:28:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:28:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:28:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:28:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:13.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:13.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3051144945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:13 compute-0 ceph-mon[74339]: pgmap v2122: 305 pgs: 305 active+clean; 318 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 578 KiB/s rd, 1.2 MiB/s wr, 83 op/s
Dec 06 07:28:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:13 compute-0 podman[321090]: 2025-12-06 07:28:13.628646295 +0000 UTC m=+0.053815100 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 07:28:13 compute-0 podman[321090]: 2025-12-06 07:28:13.639495375 +0000 UTC m=+0.064664160 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 07:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 32K writes, 120K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 32K writes, 12K syncs, 2.72 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6524 writes, 26K keys, 6524 commit groups, 1.0 writes per commit group, ingest: 26.42 MB, 0.04 MB/s
                                           Interval WAL: 6524 writes, 2591 syncs, 2.52 writes per sync, written: 0.03 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 07:28:13 compute-0 podman[321154]: 2025-12-06 07:28:13.859126141 +0000 UTC m=+0.049605433 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, architecture=x86_64, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=2.2.4, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=keepalived, io.buildah.version=1.28.2, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec 06 07:28:13 compute-0 podman[321154]: 2025-12-06 07:28:13.874428684 +0000 UTC m=+0.064907976 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, name=keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9)
Dec 06 07:28:13 compute-0 sudo[320837]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:28:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 321 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 633 KiB/s rd, 436 KiB/s wr, 84 op/s
Dec 06 07:28:14 compute-0 nova_compute[251992]: 2025-12-06 07:28:14.146 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:28:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:14 compute-0 sudo[321186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:14 compute-0 sudo[321186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:14 compute-0 sudo[321186]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:14 compute-0 sudo[321212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:28:14 compute-0 sudo[321212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:14 compute-0 sudo[321212]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:14 compute-0 nova_compute[251992]: 2025-12-06 07:28:14.671 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:14 compute-0 nova_compute[251992]: 2025-12-06 07:28:14.671 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:28:14 compute-0 sudo[321237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:14 compute-0 sudo[321237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:14 compute-0 sudo[321237]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:14 compute-0 sudo[321262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:28:14 compute-0 sudo[321262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:15.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:15 compute-0 ceph-mon[74339]: pgmap v2123: 305 pgs: 305 active+clean; 321 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 633 KiB/s rd, 436 KiB/s wr, 84 op/s
Dec 06 07:28:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:15 compute-0 sudo[321262]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:28:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:28:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:28:15 compute-0 nova_compute[251992]: 2025-12-06 07:28:15.434 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:15.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3978c90d-dd5d-4a2a-95a4-48d85a6507e1 does not exist
Dec 06 07:28:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 304f6831-c65a-44a7-9b96-164b771727a1 does not exist
Dec 06 07:28:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9a26f216-918e-42a1-9e54-e8af795bf319 does not exist
Dec 06 07:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:28:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:28:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:28:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:28:15 compute-0 sudo[321318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:15 compute-0 sudo[321318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:15 compute-0 sudo[321318]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:15 compute-0 sudo[321343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:28:15 compute-0 sudo[321343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:15 compute-0 sudo[321343]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:15 compute-0 sudo[321368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:15 compute-0 sudo[321368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:15 compute-0 sudo[321368]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:15 compute-0 sudo[321393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:28:15 compute-0 sudo[321393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 359 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.6 MiB/s wr, 188 op/s
Dec 06 07:28:16 compute-0 podman[321456]: 2025-12-06 07:28:16.098238924 +0000 UTC m=+0.021219839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:28:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:17.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:17.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 359 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 163 op/s
Dec 06 07:28:18 compute-0 nova_compute[251992]: 2025-12-06 07:28:18.262 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:28:18
Dec 06 07:28:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:28:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:28:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.log', 'images']
Dec 06 07:28:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:28:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:19.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:19 compute-0 nova_compute[251992]: 2025-12-06 07:28:19.149 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:19 compute-0 podman[321456]: 2025-12-06 07:28:19.243351571 +0000 UTC m=+3.166332466 container create 062da19d852f8a0eaca2887ced96ee8bedf18bbda435781da2c358795ba9b037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:28:19 compute-0 nova_compute[251992]: 2025-12-06 07:28:19.360 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:19 compute-0 nova_compute[251992]: 2025-12-06 07:28:19.360 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:19 compute-0 systemd[1]: Started libpod-conmon-062da19d852f8a0eaca2887ced96ee8bedf18bbda435781da2c358795ba9b037.scope.
Dec 06 07:28:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:28:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:19.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:19 compute-0 nova_compute[251992]: 2025-12-06 07:28:19.523 251996 DEBUG nova.compute.manager [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:28:19 compute-0 nova_compute[251992]: 2025-12-06 07:28:19.629 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:19 compute-0 nova_compute[251992]: 2025-12-06 07:28:19.629 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:19 compute-0 nova_compute[251992]: 2025-12-06 07:28:19.636 251996 DEBUG nova.virt.hardware [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:28:19 compute-0 nova_compute[251992]: 2025-12-06 07:28:19.636 251996 INFO nova.compute.claims [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:28:19 compute-0 nova_compute[251992]: 2025-12-06 07:28:19.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:19 compute-0 nova_compute[251992]: 2025-12-06 07:28:19.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:28:19 compute-0 nova_compute[251992]: 2025-12-06 07:28:19.766 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:28:19 compute-0 nova_compute[251992]: 2025-12-06 07:28:19.767 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:19 compute-0 podman[321456]: 2025-12-06 07:28:19.779304267 +0000 UTC m=+3.702285212 container init 062da19d852f8a0eaca2887ced96ee8bedf18bbda435781da2c358795ba9b037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:28:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:28:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:28:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:28:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:28:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:28:19 compute-0 ceph-mon[74339]: pgmap v2124: 305 pgs: 305 active+clean; 359 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.6 MiB/s wr, 188 op/s
Dec 06 07:28:19 compute-0 podman[321456]: 2025-12-06 07:28:19.788311546 +0000 UTC m=+3.711292441 container start 062da19d852f8a0eaca2887ced96ee8bedf18bbda435781da2c358795ba9b037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 07:28:19 compute-0 elastic_feistel[321475]: 167 167
Dec 06 07:28:19 compute-0 systemd[1]: libpod-062da19d852f8a0eaca2887ced96ee8bedf18bbda435781da2c358795ba9b037.scope: Deactivated successfully.
Dec 06 07:28:19 compute-0 conmon[321475]: conmon 062da19d852f8a0eaca2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-062da19d852f8a0eaca2887ced96ee8bedf18bbda435781da2c358795ba9b037.scope/container/memory.events
Dec 06 07:28:19 compute-0 podman[321456]: 2025-12-06 07:28:19.918064666 +0000 UTC m=+3.841045561 container attach 062da19d852f8a0eaca2887ced96ee8bedf18bbda435781da2c358795ba9b037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:28:19 compute-0 podman[321456]: 2025-12-06 07:28:19.919133185 +0000 UTC m=+3.842114100 container died 062da19d852f8a0eaca2887ced96ee8bedf18bbda435781da2c358795ba9b037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 07:28:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Dec 06 07:28:20 compute-0 nova_compute[251992]: 2025-12-06 07:28:20.054 251996 DEBUG oslo_concurrency.processutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:28:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:28:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737581334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:20 compute-0 nova_compute[251992]: 2025-12-06 07:28:20.508 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:20 compute-0 nova_compute[251992]: 2025-12-06 07:28:20.518 251996 DEBUG oslo_concurrency.processutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:28:20 compute-0 nova_compute[251992]: 2025-12-06 07:28:20.524 251996 DEBUG nova.compute.provider_tree [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:28:20 compute-0 nova_compute[251992]: 2025-12-06 07:28:20.637 251996 DEBUG nova.scheduler.client.report [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:28:20 compute-0 nova_compute[251992]: 2025-12-06 07:28:20.691 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:20 compute-0 nova_compute[251992]: 2025-12-06 07:28:20.692 251996 DEBUG nova.compute.manager [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:28:20 compute-0 nova_compute[251992]: 2025-12-06 07:28:20.845 251996 DEBUG nova.compute.manager [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:28:20 compute-0 nova_compute[251992]: 2025-12-06 07:28:20.846 251996 DEBUG nova.network.neutron [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb355728f7b7889f5e7b6ceda07b162f494ddf6463451b42ba15b802ff875d90-merged.mount: Deactivated successfully.
Dec 06 07:28:21 compute-0 nova_compute[251992]: 2025-12-06 07:28:21.020 251996 INFO nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:28:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:21.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:21 compute-0 nova_compute[251992]: 2025-12-06 07:28:21.158 251996 DEBUG nova.compute.manager [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:28:21 compute-0 ceph-mon[74339]: pgmap v2125: 305 pgs: 305 active+clean; 359 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 163 op/s
Dec 06 07:28:21 compute-0 ceph-mon[74339]: pgmap v2126: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Dec 06 07:28:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1737581334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:28:21 compute-0 podman[321456]: 2025-12-06 07:28:21.302628749 +0000 UTC m=+5.225609644 container remove 062da19d852f8a0eaca2887ced96ee8bedf18bbda435781da2c358795ba9b037 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 07:28:21 compute-0 systemd[1]: libpod-conmon-062da19d852f8a0eaca2887ced96ee8bedf18bbda435781da2c358795ba9b037.scope: Deactivated successfully.
Dec 06 07:28:21 compute-0 nova_compute[251992]: 2025-12-06 07:28:21.440 251996 DEBUG nova.policy [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'baddb65c90da47a58d026b0db966f6c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '001e2256cb8b430d93c1ff613010d199', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:28:21 compute-0 nova_compute[251992]: 2025-12-06 07:28:21.452 251996 INFO nova.virt.block_device [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Booting with volume f889bfdf-c600-4260-90f0-d332e8e3d79b at /dev/vda
Dec 06 07:28:21 compute-0 podman[321521]: 2025-12-06 07:28:21.484004796 +0000 UTC m=+0.055344652 container create beb5e3e9221169bad7d7c45ac4d076c44fedc9436922b97e49792f4e4c7c982c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:28:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:21.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:21 compute-0 systemd[1]: Started libpod-conmon-beb5e3e9221169bad7d7c45ac4d076c44fedc9436922b97e49792f4e4c7c982c.scope.
Dec 06 07:28:21 compute-0 podman[321521]: 2025-12-06 07:28:21.454944382 +0000 UTC m=+0.026284258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:28:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace99a0f1e6cd7fac3844bf3aeedabd749942a2f96d2ccde41c43c4bded7aac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace99a0f1e6cd7fac3844bf3aeedabd749942a2f96d2ccde41c43c4bded7aac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace99a0f1e6cd7fac3844bf3aeedabd749942a2f96d2ccde41c43c4bded7aac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace99a0f1e6cd7fac3844bf3aeedabd749942a2f96d2ccde41c43c4bded7aac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace99a0f1e6cd7fac3844bf3aeedabd749942a2f96d2ccde41c43c4bded7aac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:21 compute-0 podman[321521]: 2025-12-06 07:28:21.628305628 +0000 UTC m=+0.199645504 container init beb5e3e9221169bad7d7c45ac4d076c44fedc9436922b97e49792f4e4c7c982c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williams, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 07:28:21 compute-0 podman[321521]: 2025-12-06 07:28:21.635727003 +0000 UTC m=+0.207066859 container start beb5e3e9221169bad7d7c45ac4d076c44fedc9436922b97e49792f4e4c7c982c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williams, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:28:21 compute-0 podman[321521]: 2025-12-06 07:28:21.644002102 +0000 UTC m=+0.215341958 container attach beb5e3e9221169bad7d7c45ac4d076c44fedc9436922b97e49792f4e4c7c982c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:28:21 compute-0 nova_compute[251992]: 2025-12-06 07:28:21.778 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:28:21 compute-0 nova_compute[251992]: 2025-12-06 07:28:21.779 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:28:21 compute-0 nova_compute[251992]: 2025-12-06 07:28:21.976 251996 DEBUG os_brick.utils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:28:21 compute-0 nova_compute[251992]: 2025-12-06 07:28:21.978 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:28:21 compute-0 nova_compute[251992]: 2025-12-06 07:28:21.992 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:28:21 compute-0 nova_compute[251992]: 2025-12-06 07:28:21.992 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[5d60ce53-9d91-4c9a-8bef-b6e7ae61f1a5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:21 compute-0 nova_compute[251992]: 2025-12-06 07:28:21.995 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.002 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.002 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f76638-d8ea-41bf-b11c-19241a0229c8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.004 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.011 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.011 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f57f62-f6d4-4953-9bc9-d400b38d1fb2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.013 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[b920dde0-d97f-4022-963d-a75dc334e16e]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.014 251996 DEBUG oslo_concurrency.processutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.043 251996 DEBUG oslo_concurrency.processutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.046 251996 DEBUG os_brick.initiator.connectors.lightos [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.046 251996 DEBUG os_brick.initiator.connectors.lightos [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.047 251996 DEBUG os_brick.initiator.connectors.lightos [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.047 251996 DEBUG os_brick.utils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:28:22 compute-0 nova_compute[251992]: 2025-12-06 07:28:22.048 251996 DEBUG nova.virt.block_device [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating existing volume attachment record: 9a973fe2-99ba-4c5d-8004-b747f98eb275 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:28:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Dec 06 07:28:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 07:28:22 compute-0 flamboyant_williams[321537]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:28:22 compute-0 flamboyant_williams[321537]: --> relative data size: 1.0
Dec 06 07:28:22 compute-0 flamboyant_williams[321537]: --> All data devices are unavailable
Dec 06 07:28:22 compute-0 systemd[1]: libpod-beb5e3e9221169bad7d7c45ac4d076c44fedc9436922b97e49792f4e4c7c982c.scope: Deactivated successfully.
Dec 06 07:28:22 compute-0 podman[321521]: 2025-12-06 07:28:22.555421195 +0000 UTC m=+1.126761061 container died beb5e3e9221169bad7d7c45ac4d076c44fedc9436922b97e49792f4e4c7c982c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williams, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-eace99a0f1e6cd7fac3844bf3aeedabd749942a2f96d2ccde41c43c4bded7aac-merged.mount: Deactivated successfully.
Dec 06 07:28:22 compute-0 podman[321521]: 2025-12-06 07:28:22.663136915 +0000 UTC m=+1.234476771 container remove beb5e3e9221169bad7d7c45ac4d076c44fedc9436922b97e49792f4e4c7c982c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_williams, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:28:22 compute-0 systemd[1]: libpod-conmon-beb5e3e9221169bad7d7c45ac4d076c44fedc9436922b97e49792f4e4c7c982c.scope: Deactivated successfully.
Dec 06 07:28:22 compute-0 sudo[321393]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:22 compute-0 sudo[321574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:22 compute-0 sudo[321574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:22 compute-0 sudo[321574]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:22 compute-0 sudo[321599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:28:22 compute-0 sudo[321599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:22 compute-0 sudo[321599]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:22 compute-0 sudo[321624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:22 compute-0 sudo[321624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:22 compute-0 ceph-mon[74339]: pgmap v2127: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Dec 06 07:28:22 compute-0 sudo[321624]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:22 compute-0 sudo[321649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:28:22 compute-0 sudo[321649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:23.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:23 compute-0 podman[321711]: 2025-12-06 07:28:23.324775839 +0000 UTC m=+0.059137117 container create 32c0d9eeb60a1c037d68d63606e9abd8badb3049d101e43519f2411d79e58b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_grothendieck, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:28:23 compute-0 systemd[1]: Started libpod-conmon-32c0d9eeb60a1c037d68d63606e9abd8badb3049d101e43519f2411d79e58b53.scope.
Dec 06 07:28:23 compute-0 podman[321711]: 2025-12-06 07:28:23.287082126 +0000 UTC m=+0.021443424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:28:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:28:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:28:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2126141649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:23 compute-0 podman[321711]: 2025-12-06 07:28:23.449396676 +0000 UTC m=+0.183757984 container init 32c0d9eeb60a1c037d68d63606e9abd8badb3049d101e43519f2411d79e58b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 07:28:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:28:23 compute-0 podman[321711]: 2025-12-06 07:28:23.457304445 +0000 UTC m=+0.191665723 container start 32c0d9eeb60a1c037d68d63606e9abd8badb3049d101e43519f2411d79e58b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_grothendieck, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:28:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:28:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:28:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:28:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:28:23 compute-0 podman[321711]: 2025-12-06 07:28:23.461534032 +0000 UTC m=+0.195895330 container attach 32c0d9eeb60a1c037d68d63606e9abd8badb3049d101e43519f2411d79e58b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:28:23 compute-0 wonderful_grothendieck[321727]: 167 167
Dec 06 07:28:23 compute-0 systemd[1]: libpod-32c0d9eeb60a1c037d68d63606e9abd8badb3049d101e43519f2411d79e58b53.scope: Deactivated successfully.
Dec 06 07:28:23 compute-0 podman[321711]: 2025-12-06 07:28:23.464441053 +0000 UTC m=+0.198802331 container died 32c0d9eeb60a1c037d68d63606e9abd8badb3049d101e43519f2411d79e58b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 07:28:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-de0f824e80a241c3e3b0740a39f6415cf21ba5be06fefe7fa679d7d1acd3bd20-merged.mount: Deactivated successfully.
Dec 06 07:28:23 compute-0 podman[321711]: 2025-12-06 07:28:23.51241632 +0000 UTC m=+0.246777598 container remove 32c0d9eeb60a1c037d68d63606e9abd8badb3049d101e43519f2411d79e58b53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 07:28:23 compute-0 systemd[1]: libpod-conmon-32c0d9eeb60a1c037d68d63606e9abd8badb3049d101e43519f2411d79e58b53.scope: Deactivated successfully.
Dec 06 07:28:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:23.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:23 compute-0 podman[321751]: 2025-12-06 07:28:23.702806617 +0000 UTC m=+0.046283442 container create c50eb273a37c31fc2877741a2fe72d890bcf54ffc6a780e8b3a274b73e7c58b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ramanujan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 07:28:23 compute-0 systemd[1]: Started libpod-conmon-c50eb273a37c31fc2877741a2fe72d890bcf54ffc6a780e8b3a274b73e7c58b3.scope.
Dec 06 07:28:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e004a55ea84f4c09b351748102fb2a38ca637ab4a86acef09ae8fa0e3d05969d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e004a55ea84f4c09b351748102fb2a38ca637ab4a86acef09ae8fa0e3d05969d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e004a55ea84f4c09b351748102fb2a38ca637ab4a86acef09ae8fa0e3d05969d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e004a55ea84f4c09b351748102fb2a38ca637ab4a86acef09ae8fa0e3d05969d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:23 compute-0 podman[321751]: 2025-12-06 07:28:23.681523368 +0000 UTC m=+0.025000193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:28:23 compute-0 podman[321751]: 2025-12-06 07:28:23.784879947 +0000 UTC m=+0.128356802 container init c50eb273a37c31fc2877741a2fe72d890bcf54ffc6a780e8b3a274b73e7c58b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:28:23 compute-0 podman[321751]: 2025-12-06 07:28:23.792597791 +0000 UTC m=+0.136074616 container start c50eb273a37c31fc2877741a2fe72d890bcf54ffc6a780e8b3a274b73e7c58b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 07:28:23 compute-0 podman[321751]: 2025-12-06 07:28:23.801463426 +0000 UTC m=+0.144940281 container attach c50eb273a37c31fc2877741a2fe72d890bcf54ffc6a780e8b3a274b73e7c58b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ramanujan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:28:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 125 op/s
Dec 06 07:28:24 compute-0 nova_compute[251992]: 2025-12-06 07:28:24.150 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:24 compute-0 nova_compute[251992]: 2025-12-06 07:28:24.505 251996 DEBUG nova.compute.manager [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:28:24 compute-0 nova_compute[251992]: 2025-12-06 07:28:24.507 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:28:24 compute-0 nova_compute[251992]: 2025-12-06 07:28:24.507 251996 INFO nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Creating image(s)
Dec 06 07:28:24 compute-0 nova_compute[251992]: 2025-12-06 07:28:24.508 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:28:24 compute-0 nova_compute[251992]: 2025-12-06 07:28:24.508 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Ensure instance console log exists: /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:28:24 compute-0 nova_compute[251992]: 2025-12-06 07:28:24.508 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:24 compute-0 nova_compute[251992]: 2025-12-06 07:28:24.509 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:24 compute-0 nova_compute[251992]: 2025-12-06 07:28:24.509 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:28:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:28:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:28:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:28:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]: {
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:     "0": [
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:         {
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             "devices": [
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "/dev/loop3"
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             ],
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             "lv_name": "ceph_lv0",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             "lv_size": "7511998464",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             "name": "ceph_lv0",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             "tags": {
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "ceph.cluster_name": "ceph",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "ceph.crush_device_class": "",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "ceph.encrypted": "0",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "ceph.osd_id": "0",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "ceph.type": "block",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:                 "ceph.vdo": "0"
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             },
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             "type": "block",
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:             "vg_name": "ceph_vg0"
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:         }
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]:     ]
Dec 06 07:28:24 compute-0 heuristic_ramanujan[321768]: }
Dec 06 07:28:24 compute-0 systemd[1]: libpod-c50eb273a37c31fc2877741a2fe72d890bcf54ffc6a780e8b3a274b73e7c58b3.scope: Deactivated successfully.
Dec 06 07:28:24 compute-0 podman[321751]: 2025-12-06 07:28:24.646853484 +0000 UTC m=+0.990330309 container died c50eb273a37c31fc2877741a2fe72d890bcf54ffc6a780e8b3a274b73e7c58b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ramanujan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 07:28:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2126141649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:24 compute-0 ceph-mon[74339]: pgmap v2128: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 125 op/s
Dec 06 07:28:24 compute-0 nova_compute[251992]: 2025-12-06 07:28:24.885 251996 DEBUG nova.network.neutron [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Successfully created port: 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:28:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e004a55ea84f4c09b351748102fb2a38ca637ab4a86acef09ae8fa0e3d05969d-merged.mount: Deactivated successfully.
Dec 06 07:28:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:25.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:25 compute-0 nova_compute[251992]: 2025-12-06 07:28:25.512 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:25.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007508971361436813 of space, bias 1.0, pg target 2.252691408431044 quantized to 32 (current 32)
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.294807006676903 quantized to 32 (current 32)
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:28:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Dec 06 07:28:25 compute-0 podman[321751]: 2025-12-06 07:28:25.991847411 +0000 UTC m=+2.335324246 container remove c50eb273a37c31fc2877741a2fe72d890bcf54ffc6a780e8b3a274b73e7c58b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 07:28:26 compute-0 systemd[1]: libpod-conmon-c50eb273a37c31fc2877741a2fe72d890bcf54ffc6a780e8b3a274b73e7c58b3.scope: Deactivated successfully.
Dec 06 07:28:26 compute-0 sudo[321649]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Dec 06 07:28:26 compute-0 sudo[321789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:26 compute-0 sudo[321789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:26 compute-0 sudo[321789]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:26 compute-0 sudo[321814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:28:26 compute-0 sudo[321814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:26 compute-0 sudo[321814]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:26 compute-0 sudo[321839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:26 compute-0 sudo[321839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:26 compute-0 sudo[321839]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:26 compute-0 sudo[321864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:28:26 compute-0 sudo[321864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:26 compute-0 podman[321930]: 2025-12-06 07:28:26.623360191 +0000 UTC m=+0.102174177 container create 24d5609f438418151ae3a48e99320ae37b6313d9090ea653761aa3e4b5865f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hopper, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 07:28:26 compute-0 podman[321930]: 2025-12-06 07:28:26.542396691 +0000 UTC m=+0.021210697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:28:26 compute-0 systemd[1]: Started libpod-conmon-24d5609f438418151ae3a48e99320ae37b6313d9090ea653761aa3e4b5865f00.scope.
Dec 06 07:28:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:28:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:27.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:27 compute-0 podman[321930]: 2025-12-06 07:28:27.221631532 +0000 UTC m=+0.700445608 container init 24d5609f438418151ae3a48e99320ae37b6313d9090ea653761aa3e4b5865f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:28:27 compute-0 podman[321930]: 2025-12-06 07:28:27.230038355 +0000 UTC m=+0.708852351 container start 24d5609f438418151ae3a48e99320ae37b6313d9090ea653761aa3e4b5865f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 07:28:27 compute-0 keen_hopper[321946]: 167 167
Dec 06 07:28:27 compute-0 systemd[1]: libpod-24d5609f438418151ae3a48e99320ae37b6313d9090ea653761aa3e4b5865f00.scope: Deactivated successfully.
Dec 06 07:28:27 compute-0 podman[321930]: 2025-12-06 07:28:27.358729685 +0000 UTC m=+0.837543711 container attach 24d5609f438418151ae3a48e99320ae37b6313d9090ea653761aa3e4b5865f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:28:27 compute-0 podman[321930]: 2025-12-06 07:28:27.359478416 +0000 UTC m=+0.838292412 container died 24d5609f438418151ae3a48e99320ae37b6313d9090ea653761aa3e4b5865f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:28:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2f74e43200cc47a8afa4332772cbc64f23c066cc977062b8eac17cd5216d1ca-merged.mount: Deactivated successfully.
Dec 06 07:28:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:27.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:27 compute-0 podman[321930]: 2025-12-06 07:28:27.56060476 +0000 UTC m=+1.039418746 container remove 24d5609f438418151ae3a48e99320ae37b6313d9090ea653761aa3e4b5865f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:28:27 compute-0 systemd[1]: libpod-conmon-24d5609f438418151ae3a48e99320ae37b6313d9090ea653761aa3e4b5865f00.scope: Deactivated successfully.
Dec 06 07:28:27 compute-0 sudo[321971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:27 compute-0 sudo[321971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:27 compute-0 sudo[321971]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:27 compute-0 podman[321965]: 2025-12-06 07:28:27.645171219 +0000 UTC m=+0.105266814 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:28:27 compute-0 sudo[322015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:27 compute-0 sudo[322015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:27 compute-0 sudo[322015]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:27 compute-0 podman[322049]: 2025-12-06 07:28:27.771585696 +0000 UTC m=+0.052570406 container create ad27b0705e833d29cd26c71120716322c8e58d8e1e8d6f829c448699c45fc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_babbage, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 07:28:27 compute-0 systemd[1]: Started libpod-conmon-ad27b0705e833d29cd26c71120716322c8e58d8e1e8d6f829c448699c45fc983.scope.
Dec 06 07:28:27 compute-0 podman[322049]: 2025-12-06 07:28:27.742883012 +0000 UTC m=+0.023867742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:28:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5beffd81e5fbb4651b6b73ebdc632b7ff343830d68c674ae58a59109e2e6cf6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5beffd81e5fbb4651b6b73ebdc632b7ff343830d68c674ae58a59109e2e6cf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5beffd81e5fbb4651b6b73ebdc632b7ff343830d68c674ae58a59109e2e6cf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5beffd81e5fbb4651b6b73ebdc632b7ff343830d68c674ae58a59109e2e6cf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:28:27 compute-0 podman[322049]: 2025-12-06 07:28:27.921360739 +0000 UTC m=+0.202345449 container init ad27b0705e833d29cd26c71120716322c8e58d8e1e8d6f829c448699c45fc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_babbage, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:28:27 compute-0 podman[322049]: 2025-12-06 07:28:27.930901143 +0000 UTC m=+0.211885853 container start ad27b0705e833d29cd26c71120716322c8e58d8e1e8d6f829c448699c45fc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_babbage, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:28:27 compute-0 podman[322049]: 2025-12-06 07:28:27.951660947 +0000 UTC m=+0.232645677 container attach ad27b0705e833d29cd26c71120716322c8e58d8e1e8d6f829c448699c45fc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_babbage, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:28:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 523 KiB/s wr, 8 op/s
Dec 06 07:28:28 compute-0 beautiful_babbage[322067]: {
Dec 06 07:28:28 compute-0 beautiful_babbage[322067]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:28:28 compute-0 beautiful_babbage[322067]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:28:28 compute-0 beautiful_babbage[322067]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:28:28 compute-0 beautiful_babbage[322067]:         "osd_id": 0,
Dec 06 07:28:28 compute-0 beautiful_babbage[322067]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:28:28 compute-0 beautiful_babbage[322067]:         "type": "bluestore"
Dec 06 07:28:28 compute-0 beautiful_babbage[322067]:     }
Dec 06 07:28:28 compute-0 beautiful_babbage[322067]: }
Dec 06 07:28:28 compute-0 systemd[1]: libpod-ad27b0705e833d29cd26c71120716322c8e58d8e1e8d6f829c448699c45fc983.scope: Deactivated successfully.
Dec 06 07:28:28 compute-0 podman[322049]: 2025-12-06 07:28:28.778900633 +0000 UTC m=+1.059885353 container died ad27b0705e833d29cd26c71120716322c8e58d8e1e8d6f829c448699c45fc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Dec 06 07:28:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5beffd81e5fbb4651b6b73ebdc632b7ff343830d68c674ae58a59109e2e6cf6-merged.mount: Deactivated successfully.
Dec 06 07:28:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:29.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:29 compute-0 nova_compute[251992]: 2025-12-06 07:28:29.152 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:29 compute-0 nova_compute[251992]: 2025-12-06 07:28:29.269 251996 DEBUG nova.network.neutron [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Successfully updated port: 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:28:29 compute-0 podman[322049]: 2025-12-06 07:28:29.288596423 +0000 UTC m=+1.569581133 container remove ad27b0705e833d29cd26c71120716322c8e58d8e1e8d6f829c448699c45fc983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 07:28:29 compute-0 systemd[1]: libpod-conmon-ad27b0705e833d29cd26c71120716322c8e58d8e1e8d6f829c448699c45fc983.scope: Deactivated successfully.
Dec 06 07:28:29 compute-0 sudo[321864]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:28:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:29.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:29 compute-0 ceph-mon[74339]: pgmap v2129: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Dec 06 07:28:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 148 KiB/s rd, 523 KiB/s wr, 11 op/s
Dec 06 07:28:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:28:30 compute-0 nova_compute[251992]: 2025-12-06 07:28:30.591 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b12bf94e-534f-44d0-b8ed-5257667c2043 does not exist
Dec 06 07:28:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c9951b02-6057-4438-a335-d953a39251be does not exist
Dec 06 07:28:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9c4cc800-ff78-4fbc-b198-b8aab3b06d91 does not exist
Dec 06 07:28:31 compute-0 sudo[322103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:31 compute-0 sudo[322103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:31 compute-0 sudo[322103]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:31.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:31 compute-0 sudo[322128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:28:31 compute-0 sudo[322128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:31 compute-0 sudo[322128]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:31 compute-0 nova_compute[251992]: 2025-12-06 07:28:31.359 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:31.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:31 compute-0 nova_compute[251992]: 2025-12-06 07:28:31.852 251996 DEBUG nova.compute.manager [req-4ccc6949-c44a-4f9b-b201-07159d66f953 req-75f40a81-4b8b-43b6-92cb-e5bbc2af131c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-changed-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:28:31 compute-0 nova_compute[251992]: 2025-12-06 07:28:31.852 251996 DEBUG nova.compute.manager [req-4ccc6949-c44a-4f9b-b201-07159d66f953 req-75f40a81-4b8b-43b6-92cb-e5bbc2af131c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Refreshing instance network info cache due to event network-changed-06cbbb2f-cba8-4b99-b2dd-778c71df2d23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:28:31 compute-0 nova_compute[251992]: 2025-12-06 07:28:31.853 251996 DEBUG oslo_concurrency.lockutils [req-4ccc6949-c44a-4f9b-b201-07159d66f953 req-75f40a81-4b8b-43b6-92cb-e5bbc2af131c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:28:31 compute-0 nova_compute[251992]: 2025-12-06 07:28:31.853 251996 DEBUG oslo_concurrency.lockutils [req-4ccc6949-c44a-4f9b-b201-07159d66f953 req-75f40a81-4b8b-43b6-92cb-e5bbc2af131c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:28:31 compute-0 nova_compute[251992]: 2025-12-06 07:28:31.853 251996 DEBUG nova.network.neutron [req-4ccc6949-c44a-4f9b-b201-07159d66f953 req-75f40a81-4b8b-43b6-92cb-e5bbc2af131c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Refreshing network info cache for port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:28:31 compute-0 nova_compute[251992]: 2025-12-06 07:28:31.905 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:28:32 compute-0 ceph-mon[74339]: pgmap v2130: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 523 KiB/s wr, 8 op/s
Dec 06 07:28:32 compute-0 ceph-mon[74339]: pgmap v2131: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 148 KiB/s rd, 523 KiB/s wr, 11 op/s
Dec 06 07:28:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 384 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 122 KiB/s rd, 1.4 MiB/s wr, 23 op/s
Dec 06 07:28:32 compute-0 nova_compute[251992]: 2025-12-06 07:28:32.300 251996 DEBUG nova.network.neutron [req-4ccc6949-c44a-4f9b-b201-07159d66f953 req-75f40a81-4b8b-43b6-92cb-e5bbc2af131c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:28:32 compute-0 nova_compute[251992]: 2025-12-06 07:28:32.900 251996 DEBUG nova.network.neutron [req-4ccc6949-c44a-4f9b-b201-07159d66f953 req-75f40a81-4b8b-43b6-92cb-e5bbc2af131c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:28:32 compute-0 nova_compute[251992]: 2025-12-06 07:28:32.917 251996 DEBUG oslo_concurrency.lockutils [req-4ccc6949-c44a-4f9b-b201-07159d66f953 req-75f40a81-4b8b-43b6-92cb-e5bbc2af131c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:28:32 compute-0 nova_compute[251992]: 2025-12-06 07:28:32.918 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquired lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:28:32 compute-0 nova_compute[251992]: 2025-12-06 07:28:32.918 251996 DEBUG nova.network.neutron [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:28:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:28:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:33.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:28:33 compute-0 nova_compute[251992]: 2025-12-06 07:28:33.376 251996 DEBUG nova.network.neutron [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:28:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:33.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 384 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 20 op/s
Dec 06 07:28:34 compute-0 nova_compute[251992]: 2025-12-06 07:28:34.154 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:34 compute-0 podman[322155]: 2025-12-06 07:28:34.406291208 +0000 UTC m=+0.063009194 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 06 07:28:34 compute-0 podman[322154]: 2025-12-06 07:28:34.425814918 +0000 UTC m=+0.084205980 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:28:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:35.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:35.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:35 compute-0 nova_compute[251992]: 2025-12-06 07:28:35.594 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:28:35 compute-0 ceph-mon[74339]: pgmap v2132: 305 pgs: 305 active+clean; 384 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 122 KiB/s rd, 1.4 MiB/s wr, 23 op/s
Dec 06 07:28:35 compute-0 nova_compute[251992]: 2025-12-06 07:28:35.927 251996 DEBUG nova.network.neutron [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating instance_info_cache with network_info: [{"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.023 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Releasing lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.023 251996 DEBUG nova.compute.manager [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Instance network_info: |[{"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.026 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Start _get_guest_xml network_info=[{"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f889bfdf-c600-4260-90f0-d332e8e3d79b', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f889bfdf-c600-4260-90f0-d332e8e3d79b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6ee4f2f5-3303-4c84-b708-eb35a65082b6', 'attached_at': '', 'detached_at': '', 'volume_id': 'f889bfdf-c600-4260-90f0-d332e8e3d79b', 'serial': 'f889bfdf-c600-4260-90f0-d332e8e3d79b'}, 'attachment_id': '9a973fe2-99ba-4c5d-8004-b747f98eb275', 'guest_format': None, 'delete_on_termination': True, 'disk_bus': 'virtio', 'boot_index': 0, 'device_type': 'disk', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.031 251996 WARNING nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.041 251996 DEBUG nova.virt.libvirt.host [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.042 251996 DEBUG nova.virt.libvirt.host [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.050 251996 DEBUG nova.virt.libvirt.host [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.051 251996 DEBUG nova.virt.libvirt.host [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.053 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.053 251996 DEBUG nova.virt.hardware [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.054 251996 DEBUG nova.virt.hardware [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.054 251996 DEBUG nova.virt.hardware [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.054 251996 DEBUG nova.virt.hardware [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.054 251996 DEBUG nova.virt.hardware [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.055 251996 DEBUG nova.virt.hardware [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.055 251996 DEBUG nova.virt.hardware [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.055 251996 DEBUG nova.virt.hardware [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.055 251996 DEBUG nova.virt.hardware [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.056 251996 DEBUG nova.virt.hardware [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.056 251996 DEBUG nova.virt.hardware [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:28:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 29 op/s
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.086 251996 DEBUG nova.storage.rbd_utils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 6ee4f2f5-3303-4c84-b708-eb35a65082b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.091 251996 DEBUG oslo_concurrency.processutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:28:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:28:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1676437557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.563 251996 DEBUG oslo_concurrency.processutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.588 251996 DEBUG nova.virt.libvirt.vif [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1330317525',display_name='tempest-ServerActionsTestOtherA-server-1330317525',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1330317525',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2zgDxxtT0nLqH8UsyYi0lN8OWWrrFEA5pyLz04zJISRImczknO8hVkmNR6jGCiWeaXsQGs+JSkIuJDu8PO8wxSR1MWFJiUPcyPRnxYT8pR/R9bXgGDk3j+Ho5fOrAeLw==',key_name='tempest-keypair-402640413',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-ul2d1zzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:28:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=6ee4f2f5-3303-4c84-b708-eb35a65082b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.588 251996 DEBUG nova.network.os_vif_util [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.589 251996 DEBUG nova.network.os_vif_util [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.591 251996 DEBUG nova.objects.instance [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6ee4f2f5-3303-4c84-b708-eb35a65082b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.605 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:28:36 compute-0 nova_compute[251992]:   <uuid>6ee4f2f5-3303-4c84-b708-eb35a65082b6</uuid>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   <name>instance-00000070</name>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerActionsTestOtherA-server-1330317525</nova:name>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:28:36</nova:creationTime>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <nova:user uuid="baddb65c90da47a58d026b0db966f6c8">tempest-ServerActionsTestOtherA-1949739102-project-member</nova:user>
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <nova:project uuid="001e2256cb8b430d93c1ff613010d199">tempest-ServerActionsTestOtherA-1949739102</nova:project>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <nova:port uuid="06cbbb2f-cba8-4b99-b2dd-778c71df2d23">
Dec 06 07:28:36 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <system>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <entry name="serial">6ee4f2f5-3303-4c84-b708-eb35a65082b6</entry>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <entry name="uuid">6ee4f2f5-3303-4c84-b708-eb35a65082b6</entry>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     </system>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   <os>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   </os>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   <features>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   </features>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/6ee4f2f5-3303-4c84-b708-eb35a65082b6_disk.config">
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       </source>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <source protocol="rbd" name="volumes/volume-f889bfdf-c600-4260-90f0-d332e8e3d79b">
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       </source>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:28:36 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <serial>f889bfdf-c600-4260-90f0-d332e8e3d79b</serial>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:14:2c:72"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <target dev="tap06cbbb2f-cb"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/console.log" append="off"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <video>
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     </video>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:28:36 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:28:36 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:28:36 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:28:36 compute-0 nova_compute[251992]: </domain>
Dec 06 07:28:36 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.606 251996 DEBUG nova.compute.manager [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Preparing to wait for external event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.606 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.607 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.607 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.607 251996 DEBUG nova.virt.libvirt.vif [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1330317525',display_name='tempest-ServerActionsTestOtherA-server-1330317525',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1330317525',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2zgDxxtT0nLqH8UsyYi0lN8OWWrrFEA5pyLz04zJISRImczknO8hVkmNR6jGCiWeaXsQGs+JSkIuJDu8PO8wxSR1MWFJiUPcyPRnxYT8pR/R9bXgGDk3j+Ho5fOrAeLw==',key_name='tempest-keypair-402640413',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-ul2d1zzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:28:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=6ee4f2f5-3303-4c84-b708-eb35a65082b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.608 251996 DEBUG nova.network.os_vif_util [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.609 251996 DEBUG nova.network.os_vif_util [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.609 251996 DEBUG os_vif [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.610 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.611 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.611 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.615 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.616 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06cbbb2f-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.617 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06cbbb2f-cb, col_values=(('external_ids', {'iface-id': '06cbbb2f-cba8-4b99-b2dd-778c71df2d23', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:2c:72', 'vm-uuid': '6ee4f2f5-3303-4c84-b708-eb35a65082b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.618 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:36 compute-0 NetworkManager[48965]: <info>  [1765006116.6200] manager: (tap06cbbb2f-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.620 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.629 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.631 251996 INFO os_vif [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb')
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.694 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.695 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.695 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] No VIF found with MAC fa:16:3e:14:2c:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.695 251996 INFO nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Using config drive
Dec 06 07:28:36 compute-0 nova_compute[251992]: 2025-12-06 07:28:36.723 251996 DEBUG nova.storage.rbd_utils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 6ee4f2f5-3303-4c84-b708-eb35a65082b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:28:37 compute-0 ceph-mon[74339]: pgmap v2133: 305 pgs: 305 active+clean; 384 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 20 op/s
Dec 06 07:28:37 compute-0 ceph-mon[74339]: pgmap v2134: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 29 op/s
Dec 06 07:28:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1676437557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:37.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:37 compute-0 nova_compute[251992]: 2025-12-06 07:28:37.298 251996 INFO nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Creating config drive at /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/disk.config
Dec 06 07:28:37 compute-0 nova_compute[251992]: 2025-12-06 07:28:37.304 251996 DEBUG oslo_concurrency.processutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqjojf1e6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:28:37 compute-0 nova_compute[251992]: 2025-12-06 07:28:37.438 251996 DEBUG oslo_concurrency.processutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqjojf1e6" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:28:37 compute-0 nova_compute[251992]: 2025-12-06 07:28:37.467 251996 DEBUG nova.storage.rbd_utils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 6ee4f2f5-3303-4c84-b708-eb35a65082b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:28:37 compute-0 nova_compute[251992]: 2025-12-06 07:28:37.470 251996 DEBUG oslo_concurrency.processutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/disk.config 6ee4f2f5-3303-4c84-b708-eb35a65082b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:28:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:28:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:37.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:28:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Dec 06 07:28:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 29 op/s
Dec 06 07:28:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Dec 06 07:28:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:39.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:39 compute-0 nova_compute[251992]: 2025-12-06 07:28:39.157 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:39 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Dec 06 07:28:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:39.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 2.4 MiB/s wr, 32 op/s
Dec 06 07:28:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:41.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:41 compute-0 ceph-mon[74339]: pgmap v2135: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 29 op/s
Dec 06 07:28:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:41.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:41 compute-0 nova_compute[251992]: 2025-12-06 07:28:41.621 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 784 KiB/s wr, 17 op/s
Dec 06 07:28:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:28:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:28:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:28:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:28:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:28:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:28:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:28:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:43.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:28:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.004000110s ======
Dec 06 07:28:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:43.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000110s
Dec 06 07:28:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 756 KiB/s wr, 24 op/s
Dec 06 07:28:44 compute-0 nova_compute[251992]: 2025-12-06 07:28:44.158 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:28:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:45.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:28:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:45.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:46 compute-0 ceph-mon[74339]: osdmap e271: 3 total, 3 up, 3 in
Dec 06 07:28:46 compute-0 ceph-mon[74339]: pgmap v2137: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 2.4 MiB/s wr, 32 op/s
Dec 06 07:28:46 compute-0 ceph-mon[74339]: pgmap v2138: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 784 KiB/s wr, 17 op/s
Dec 06 07:28:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 403 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 175 KiB/s wr, 59 op/s
Dec 06 07:28:46 compute-0 nova_compute[251992]: 2025-12-06 07:28:46.625 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:47.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:47.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:47 compute-0 sudo[322302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:47 compute-0 sudo[322302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:47 compute-0 sudo[322302]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:47 compute-0 sudo[322327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:28:47 compute-0 sudo[322327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:28:47 compute-0 sudo[322327]: pam_unix(sudo:session): session closed for user root
Dec 06 07:28:48 compute-0 ceph-mon[74339]: pgmap v2139: 305 pgs: 305 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 756 KiB/s wr, 24 op/s
Dec 06 07:28:48 compute-0 ceph-mon[74339]: pgmap v2140: 305 pgs: 305 active+clean; 403 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 175 KiB/s wr, 59 op/s
Dec 06 07:28:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 403 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 175 KiB/s wr, 59 op/s
Dec 06 07:28:48 compute-0 nova_compute[251992]: 2025-12-06 07:28:48.563 251996 DEBUG oslo_concurrency.processutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/disk.config 6ee4f2f5-3303-4c84-b708-eb35a65082b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 11.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:28:48 compute-0 nova_compute[251992]: 2025-12-06 07:28:48.564 251996 INFO nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Deleting local config drive /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/disk.config because it was imported into RBD.
Dec 06 07:28:48 compute-0 kernel: tap06cbbb2f-cb: entered promiscuous mode
Dec 06 07:28:48 compute-0 NetworkManager[48965]: <info>  [1765006128.6097] manager: (tap06cbbb2f-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Dec 06 07:28:48 compute-0 nova_compute[251992]: 2025-12-06 07:28:48.610 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:48 compute-0 ovn_controller[147168]: 2025-12-06T07:28:48Z|00402|binding|INFO|Claiming lport 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for this chassis.
Dec 06 07:28:48 compute-0 ovn_controller[147168]: 2025-12-06T07:28:48Z|00403|binding|INFO|06cbbb2f-cba8-4b99-b2dd-778c71df2d23: Claiming fa:16:3e:14:2c:72 10.100.0.7
Dec 06 07:28:48 compute-0 ovn_controller[147168]: 2025-12-06T07:28:48Z|00404|binding|INFO|Setting lport 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 ovn-installed in OVS
Dec 06 07:28:48 compute-0 nova_compute[251992]: 2025-12-06 07:28:48.628 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:48 compute-0 nova_compute[251992]: 2025-12-06 07:28:48.629 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:48 compute-0 systemd-machined[212986]: New machine qemu-51-instance-00000070.
Dec 06 07:28:48 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-00000070.
Dec 06 07:28:48 compute-0 systemd-udevd[322366]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:28:48 compute-0 NetworkManager[48965]: <info>  [1765006128.6684] device (tap06cbbb2f-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:28:48 compute-0 NetworkManager[48965]: <info>  [1765006128.6697] device (tap06cbbb2f-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:28:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:49.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:49 compute-0 nova_compute[251992]: 2025-12-06 07:28:49.161 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2549393210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:49 compute-0 ceph-mon[74339]: pgmap v2141: 305 pgs: 305 active+clean; 403 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 175 KiB/s wr, 59 op/s
Dec 06 07:28:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1698605546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:28:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:49.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 179 KiB/s wr, 58 op/s
Dec 06 07:28:50 compute-0 nova_compute[251992]: 2025-12-06 07:28:50.295 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006130.2939053, 6ee4f2f5-3303-4c84-b708-eb35a65082b6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:28:50 compute-0 nova_compute[251992]: 2025-12-06 07:28:50.296 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] VM Started (Lifecycle Event)
Dec 06 07:28:50 compute-0 ceph-mon[74339]: pgmap v2142: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 179 KiB/s wr, 58 op/s
Dec 06 07:28:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:51.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:51 compute-0 ovn_controller[147168]: 2025-12-06T07:28:51Z|00405|binding|INFO|Setting lport 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 up in Southbound
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.229 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:2c:72 10.100.0.7'], port_security=['fa:16:3e:14:2c:72 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6ee4f2f5-3303-4c84-b708-eb35a65082b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56e13d32-a2bf-49aa-a4ac-9182c3684195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=06cbbb2f-cba8-4b99-b2dd-778c71df2d23) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.232 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e bound to our chassis
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.235 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.256 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0e40e836-bf89-4434-99b8-04fbb14687a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.290 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ce240437-6727-421e-8b21-613f100da8b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.295 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[0241d881-6ee6-4bdd-b30a-862b8e37209f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.323 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[33451dee-41dd-419a-8578-ca5bd099c786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.341 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a825b266-6d46-42a0-9e13-dd93464d5dc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605069, 'reachable_time': 42946, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322423, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.357 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[eab3501e-08e0-420d-be6e-4351d96b0ccb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6209aab-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605081, 'tstamp': 605081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322424, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6209aab-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605084, 'tstamp': 605084}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322424, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.360 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:28:51 compute-0 nova_compute[251992]: 2025-12-06 07:28:51.362 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:51 compute-0 nova_compute[251992]: 2025-12-06 07:28:51.363 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.364 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6209aab-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.364 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.364 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6209aab-d0, col_values=(('external_ids', {'iface-id': '1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:28:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:28:51.365 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:28:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:51.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:51 compute-0 nova_compute[251992]: 2025-12-06 07:28:51.627 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 162 KiB/s wr, 59 op/s
Dec 06 07:28:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:53.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:53.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 105 KiB/s wr, 58 op/s
Dec 06 07:28:54 compute-0 nova_compute[251992]: 2025-12-06 07:28:54.163 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:54 compute-0 nova_compute[251992]: 2025-12-06 07:28:54.499 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:28:54 compute-0 nova_compute[251992]: 2025-12-06 07:28:54.504 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006130.29426, 6ee4f2f5-3303-4c84-b708-eb35a65082b6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:28:54 compute-0 nova_compute[251992]: 2025-12-06 07:28:54.504 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] VM Paused (Lifecycle Event)
Dec 06 07:28:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:55.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:55.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 107 KiB/s wr, 54 op/s
Dec 06 07:28:56 compute-0 nova_compute[251992]: 2025-12-06 07:28:56.078 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:28:56 compute-0 nova_compute[251992]: 2025-12-06 07:28:56.081 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:28:56 compute-0 nova_compute[251992]: 2025-12-06 07:28:56.629 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:57.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:57.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 18 KiB/s wr, 16 op/s
Dec 06 07:28:58 compute-0 podman[322428]: 2025-12-06 07:28:58.4296907 +0000 UTC m=+0.084826350 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 07:28:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:28:59.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.164 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:28:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:28:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:28:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:28:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:28:59.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.632 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.942 251996 DEBUG nova.compute.manager [req-39b3ef9e-b2f8-404f-9e2c-10b0f38d781e req-44c0c1a3-e0b6-4d22-91c0-e1b638b982aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.942 251996 DEBUG oslo_concurrency.lockutils [req-39b3ef9e-b2f8-404f-9e2c-10b0f38d781e req-44c0c1a3-e0b6-4d22-91c0-e1b638b982aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.943 251996 DEBUG oslo_concurrency.lockutils [req-39b3ef9e-b2f8-404f-9e2c-10b0f38d781e req-44c0c1a3-e0b6-4d22-91c0-e1b638b982aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.943 251996 DEBUG oslo_concurrency.lockutils [req-39b3ef9e-b2f8-404f-9e2c-10b0f38d781e req-44c0c1a3-e0b6-4d22-91c0-e1b638b982aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.943 251996 DEBUG nova.compute.manager [req-39b3ef9e-b2f8-404f-9e2c-10b0f38d781e req-44c0c1a3-e0b6-4d22-91c0-e1b638b982aa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Processing event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.944 251996 DEBUG nova.compute.manager [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Instance event wait completed in 9 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.948 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006139.9482653, 6ee4f2f5-3303-4c84-b708-eb35a65082b6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.948 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] VM Resumed (Lifecycle Event)
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.950 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.954 251996 INFO nova.virt.libvirt.driver [-] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Instance spawned successfully.
Dec 06 07:28:59 compute-0 nova_compute[251992]: 2025-12-06 07:28:59.954 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:29:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 18 KiB/s wr, 19 op/s
Dec 06 07:29:00 compute-0 ceph-mon[74339]: pgmap v2143: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 162 KiB/s wr, 59 op/s
Dec 06 07:29:01 compute-0 nova_compute[251992]: 2025-12-06 07:29:01.022 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:29:01 compute-0 nova_compute[251992]: 2025-12-06 07:29:01.028 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:29:01 compute-0 nova_compute[251992]: 2025-12-06 07:29:01.031 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:29:01 compute-0 nova_compute[251992]: 2025-12-06 07:29:01.031 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:29:01 compute-0 nova_compute[251992]: 2025-12-06 07:29:01.032 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:29:01 compute-0 nova_compute[251992]: 2025-12-06 07:29:01.032 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:29:01 compute-0 nova_compute[251992]: 2025-12-06 07:29:01.033 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:29:01 compute-0 nova_compute[251992]: 2025-12-06 07:29:01.033 251996 DEBUG nova.virt.libvirt.driver [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:29:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:01.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:01.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:01 compute-0 nova_compute[251992]: 2025-12-06 07:29:01.630 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:01 compute-0 nova_compute[251992]: 2025-12-06 07:29:01.860 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:29:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 2.9 KiB/s wr, 19 op/s
Dec 06 07:29:02 compute-0 ceph-mon[74339]: pgmap v2144: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 105 KiB/s wr, 58 op/s
Dec 06 07:29:02 compute-0 ceph-mon[74339]: pgmap v2145: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 107 KiB/s wr, 54 op/s
Dec 06 07:29:02 compute-0 ceph-mon[74339]: pgmap v2146: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 18 KiB/s wr, 16 op/s
Dec 06 07:29:02 compute-0 ceph-mon[74339]: pgmap v2147: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 18 KiB/s wr, 19 op/s
Dec 06 07:29:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:03.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:03.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:03 compute-0 nova_compute[251992]: 2025-12-06 07:29:03.600 251996 DEBUG nova.compute.manager [req-2189e167-3400-4db3-8cf4-43f53303ca5e req-fe3f1470-6d4f-4dfc-b98d-2ae4e7265a6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:03 compute-0 nova_compute[251992]: 2025-12-06 07:29:03.600 251996 DEBUG oslo_concurrency.lockutils [req-2189e167-3400-4db3-8cf4-43f53303ca5e req-fe3f1470-6d4f-4dfc-b98d-2ae4e7265a6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:03 compute-0 nova_compute[251992]: 2025-12-06 07:29:03.601 251996 DEBUG oslo_concurrency.lockutils [req-2189e167-3400-4db3-8cf4-43f53303ca5e req-fe3f1470-6d4f-4dfc-b98d-2ae4e7265a6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:03 compute-0 nova_compute[251992]: 2025-12-06 07:29:03.601 251996 DEBUG oslo_concurrency.lockutils [req-2189e167-3400-4db3-8cf4-43f53303ca5e req-fe3f1470-6d4f-4dfc-b98d-2ae4e7265a6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:03 compute-0 nova_compute[251992]: 2025-12-06 07:29:03.601 251996 DEBUG nova.compute.manager [req-2189e167-3400-4db3-8cf4-43f53303ca5e req-fe3f1470-6d4f-4dfc-b98d-2ae4e7265a6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] No waiting events found dispatching network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:29:03 compute-0 nova_compute[251992]: 2025-12-06 07:29:03.601 251996 WARNING nova.compute.manager [req-2189e167-3400-4db3-8cf4-43f53303ca5e req-fe3f1470-6d4f-4dfc-b98d-2ae4e7265a6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received unexpected event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for instance with vm_state building and task_state spawning.
Dec 06 07:29:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:03.836 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:03.836 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:03.837 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 KiB/s wr, 71 op/s
Dec 06 07:29:04 compute-0 nova_compute[251992]: 2025-12-06 07:29:04.165 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:04 compute-0 nova_compute[251992]: 2025-12-06 07:29:04.467 251996 INFO nova.compute.manager [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Took 39.96 seconds to spawn the instance on the hypervisor.
Dec 06 07:29:04 compute-0 nova_compute[251992]: 2025-12-06 07:29:04.468 251996 DEBUG nova.compute.manager [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:29:04 compute-0 nova_compute[251992]: 2025-12-06 07:29:04.685 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:05.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:05 compute-0 podman[322458]: 2025-12-06 07:29:05.401757513 +0000 UTC m=+0.058085859 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 07:29:05 compute-0 podman[322459]: 2025-12-06 07:29:05.40870822 +0000 UTC m=+0.056052024 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:29:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:05.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 144 op/s
Dec 06 07:29:06 compute-0 nova_compute[251992]: 2025-12-06 07:29:06.632 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:06 compute-0 ceph-mon[74339]: pgmap v2148: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 2.9 KiB/s wr, 19 op/s
Dec 06 07:29:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:07.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:07.247 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:29:07 compute-0 nova_compute[251992]: 2025-12-06 07:29:07.247 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:07.248 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:29:07 compute-0 nova_compute[251992]: 2025-12-06 07:29:07.585 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:07.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:07 compute-0 nova_compute[251992]: 2025-12-06 07:29:07.625 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:07 compute-0 nova_compute[251992]: 2025-12-06 07:29:07.625 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:07 compute-0 nova_compute[251992]: 2025-12-06 07:29:07.625 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:07 compute-0 nova_compute[251992]: 2025-12-06 07:29:07.626 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:29:07 compute-0 nova_compute[251992]: 2025-12-06 07:29:07.626 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:07 compute-0 nova_compute[251992]: 2025-12-06 07:29:07.662 251996 INFO nova.compute.manager [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Took 48.06 seconds to build instance.
Dec 06 07:29:07 compute-0 nova_compute[251992]: 2025-12-06 07:29:07.681 251996 DEBUG oslo_concurrency.lockutils [None req-239442d4-b708-41c0-852e-3a713ff56226 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 48.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:07 compute-0 ceph-mon[74339]: pgmap v2149: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 KiB/s wr, 71 op/s
Dec 06 07:29:07 compute-0 ceph-mon[74339]: pgmap v2150: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 144 op/s
Dec 06 07:29:07 compute-0 sudo[322516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:07 compute-0 sudo[322516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:07 compute-0 sudo[322516]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:07 compute-0 sudo[322541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:07 compute-0 sudo[322541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:07 compute-0 sudo[322541]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 23 KiB/s wr, 141 op/s
Dec 06 07:29:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:29:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/803272312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:08 compute-0 nova_compute[251992]: 2025-12-06 07:29:08.107 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:29:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2527059766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:29:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:29:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2527059766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:29:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/851716319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:09 compute-0 ceph-mon[74339]: pgmap v2151: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 23 KiB/s wr, 141 op/s
Dec 06 07:29:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/803272312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2527059766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:29:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2527059766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:29:09 compute-0 nova_compute[251992]: 2025-12-06 07:29:09.168 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:09.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:09.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 31 KiB/s wr, 142 op/s
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.356 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.356 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.363 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.364 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.368 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.369 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.578 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.580 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3920MB free_disk=20.805904388427734GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.581 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.581 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.953 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 00f56c62-f327-41e3-a105-24f56ae124c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.953 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance dd85818c-bf82-473d-8650-6b391dbfa300 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.954 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 6ee4f2f5-3303-4c84-b708-eb35a65082b6 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.954 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:29:10 compute-0 nova_compute[251992]: 2025-12-06 07:29:10.954 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:29:11 compute-0 nova_compute[251992]: 2025-12-06 07:29:11.058 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:29:11 compute-0 nova_compute[251992]: 2025-12-06 07:29:11.085 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:29:11 compute-0 nova_compute[251992]: 2025-12-06 07:29:11.086 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:29:11 compute-0 nova_compute[251992]: 2025-12-06 07:29:11.134 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:29:11 compute-0 nova_compute[251992]: 2025-12-06 07:29:11.169 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:29:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:11.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:11 compute-0 ceph-mon[74339]: pgmap v2152: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 31 KiB/s wr, 142 op/s
Dec 06 07:29:11 compute-0 nova_compute[251992]: 2025-12-06 07:29:11.273 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:11.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:11 compute-0 nova_compute[251992]: 2025-12-06 07:29:11.633 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:29:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3153342435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:11 compute-0 nova_compute[251992]: 2025-12-06 07:29:11.774 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:11 compute-0 nova_compute[251992]: 2025-12-06 07:29:11.779 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:29:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 430 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 959 KiB/s wr, 158 op/s
Dec 06 07:29:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1467730153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3153342435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:12 compute-0 ceph-mon[74339]: pgmap v2153: 305 pgs: 305 active+clean; 430 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 959 KiB/s wr, 158 op/s
Dec 06 07:29:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:29:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:29:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:29:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:29:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:29:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:29:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:13.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:13.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 445 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 164 op/s
Dec 06 07:29:14 compute-0 nova_compute[251992]: 2025-12-06 07:29:14.169 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:14.250 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:15.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:15.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Dec 06 07:29:16 compute-0 ceph-mon[74339]: pgmap v2154: 305 pgs: 305 active+clean; 445 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 164 op/s
Dec 06 07:29:16 compute-0 nova_compute[251992]: 2025-12-06 07:29:16.567 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:29:16 compute-0 nova_compute[251992]: 2025-12-06 07:29:16.635 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:17.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:17 compute-0 nova_compute[251992]: 2025-12-06 07:29:17.439 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:29:17 compute-0 nova_compute[251992]: 2025-12-06 07:29:17.439 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:17 compute-0 nova_compute[251992]: 2025-12-06 07:29:17.511 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:17 compute-0 nova_compute[251992]: 2025-12-06 07:29:17.512 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:17 compute-0 nova_compute[251992]: 2025-12-06 07:29:17.512 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:17 compute-0 nova_compute[251992]: 2025-12-06 07:29:17.512 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:17 compute-0 nova_compute[251992]: 2025-12-06 07:29:17.512 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:17 compute-0 nova_compute[251992]: 2025-12-06 07:29:17.513 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:17 compute-0 nova_compute[251992]: 2025-12-06 07:29:17.513 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:17 compute-0 nova_compute[251992]: 2025-12-06 07:29:17.513 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:29:17 compute-0 ceph-mon[74339]: pgmap v2155: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Dec 06 07:29:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1424801763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:17.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Dec 06 07:29:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:29:18
Dec 06 07:29:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:29:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:29:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'backups', 'images', 'vms', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec 06 07:29:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:29:19 compute-0 nova_compute[251992]: 2025-12-06 07:29:19.173 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:19.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:19.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Dec 06 07:29:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1045039534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:20 compute-0 ceph-mon[74339]: pgmap v2156: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Dec 06 07:29:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1311121322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1032885356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:20 compute-0 nova_compute[251992]: 2025-12-06 07:29:20.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:29:20 compute-0 nova_compute[251992]: 2025-12-06 07:29:20.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:29:20 compute-0 nova_compute[251992]: 2025-12-06 07:29:20.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:29:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:21.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:21.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:21 compute-0 nova_compute[251992]: 2025-12-06 07:29:21.637 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 503 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 323 KiB/s rd, 4.9 MiB/s wr, 86 op/s
Dec 06 07:29:22 compute-0 ceph-mon[74339]: pgmap v2157: 305 pgs: 305 active+clean; 462 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.7 MiB/s wr, 41 op/s
Dec 06 07:29:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:23.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:29:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:29:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:29:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:29:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:29:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:23.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 503 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 3.9 MiB/s wr, 67 op/s
Dec 06 07:29:24 compute-0 nova_compute[251992]: 2025-12-06 07:29:24.174 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:24 compute-0 ceph-mon[74339]: pgmap v2158: 305 pgs: 305 active+clean; 503 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 323 KiB/s rd, 4.9 MiB/s wr, 86 op/s
Dec 06 07:29:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:29:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:29:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:29:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:29:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:29:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:25.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:25.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:25 compute-0 nova_compute[251992]: 2025-12-06 07:29:25.733 251996 DEBUG nova.compute.manager [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Received event network-changed-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:25 compute-0 nova_compute[251992]: 2025-12-06 07:29:25.734 251996 DEBUG nova.compute.manager [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Refreshing instance network info cache due to event network-changed-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:29:25 compute-0 nova_compute[251992]: 2025-12-06 07:29:25.734 251996 DEBUG oslo_concurrency.lockutils [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:29:25 compute-0 nova_compute[251992]: 2025-12-06 07:29:25.734 251996 DEBUG oslo_concurrency.lockutils [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:29:25 compute-0 nova_compute[251992]: 2025-12-06 07:29:25.734 251996 DEBUG nova.network.neutron [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Refreshing network info cache for port c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009683654092220361 of space, bias 1.0, pg target 2.9050962276661085 quantized to 32 (current 32)
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029013339265773312 of space, bias 1.0, pg target 0.8645975101200447 quantized to 32 (current 32)
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:29:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Dec 06 07:29:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 652 KiB/s rd, 3.7 MiB/s wr, 91 op/s
Dec 06 07:29:26 compute-0 nova_compute[251992]: 2025-12-06 07:29:26.305 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:29:26 compute-0 nova_compute[251992]: 2025-12-06 07:29:26.639 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:26 compute-0 ovn_controller[147168]: 2025-12-06T07:29:26Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:2c:72 10.100.0.7
Dec 06 07:29:26 compute-0 ovn_controller[147168]: 2025-12-06T07:29:26Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:2c:72 10.100.0.7
Dec 06 07:29:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:27.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:27 compute-0 ceph-mon[74339]: pgmap v2159: 305 pgs: 305 active+clean; 503 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 3.9 MiB/s wr, 67 op/s
Dec 06 07:29:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:27.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:28 compute-0 nova_compute[251992]: 2025-12-06 07:29:28.019 251996 DEBUG nova.network.neutron [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updated VIF entry in instance network info cache for port c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:29:28 compute-0 nova_compute[251992]: 2025-12-06 07:29:28.020 251996 DEBUG nova.network.neutron [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updating instance_info_cache with network_info: [{"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:29:28 compute-0 sudo[322601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:28 compute-0 sudo[322601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:28 compute-0 sudo[322601]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 625 KiB/s rd, 2.9 MiB/s wr, 78 op/s
Dec 06 07:29:28 compute-0 sudo[322626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:28 compute-0 sudo[322626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:28 compute-0 sudo[322626]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:28 compute-0 nova_compute[251992]: 2025-12-06 07:29:28.092 251996 DEBUG oslo_concurrency.lockutils [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:29:28 compute-0 nova_compute[251992]: 2025-12-06 07:29:28.094 251996 DEBUG nova.compute.manager [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-changed-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:28 compute-0 nova_compute[251992]: 2025-12-06 07:29:28.094 251996 DEBUG nova.compute.manager [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Refreshing instance network info cache due to event network-changed-06cbbb2f-cba8-4b99-b2dd-778c71df2d23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:29:28 compute-0 nova_compute[251992]: 2025-12-06 07:29:28.094 251996 DEBUG oslo_concurrency.lockutils [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:29:28 compute-0 nova_compute[251992]: 2025-12-06 07:29:28.094 251996 DEBUG oslo_concurrency.lockutils [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:29:28 compute-0 nova_compute[251992]: 2025-12-06 07:29:28.094 251996 DEBUG nova.network.neutron [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Refreshing network info cache for port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:29:28 compute-0 nova_compute[251992]: 2025-12-06 07:29:28.096 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:29:28 compute-0 nova_compute[251992]: 2025-12-06 07:29:28.096 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:29:28 compute-0 nova_compute[251992]: 2025-12-06 07:29:28.096 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 00f56c62-f327-41e3-a105-24f56ae124c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:29:29 compute-0 nova_compute[251992]: 2025-12-06 07:29:29.175 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:29.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:29 compute-0 podman[322652]: 2025-12-06 07:29:29.416151497 +0000 UTC m=+0.074592085 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 06 07:29:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:29.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:29 compute-0 ceph-mon[74339]: pgmap v2160: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 652 KiB/s rd, 3.7 MiB/s wr, 91 op/s
Dec 06 07:29:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1719017473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:29 compute-0 ceph-mon[74339]: pgmap v2161: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 625 KiB/s rd, 2.9 MiB/s wr, 78 op/s
Dec 06 07:29:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 625 KiB/s rd, 2.9 MiB/s wr, 78 op/s
Dec 06 07:29:30 compute-0 nova_compute[251992]: 2025-12-06 07:29:30.274 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updating instance_info_cache with network_info: [{"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:29:30 compute-0 nova_compute[251992]: 2025-12-06 07:29:30.305 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-00f56c62-f327-41e3-a105-24f56ae124c0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:29:30 compute-0 nova_compute[251992]: 2025-12-06 07:29:30.305 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:29:30 compute-0 nova_compute[251992]: 2025-12-06 07:29:30.364 251996 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:29:30 compute-0 nova_compute[251992]: 2025-12-06 07:29:30.497 251996 DEBUG nova.network.neutron [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updated VIF entry in instance network info cache for port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:29:30 compute-0 nova_compute[251992]: 2025-12-06 07:29:30.498 251996 DEBUG nova.network.neutron [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating instance_info_cache with network_info: [{"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:29:30 compute-0 nova_compute[251992]: 2025-12-06 07:29:30.529 251996 DEBUG oslo_concurrency.lockutils [req-0f626028-9b68-4b25-92bc-3d234aae2d44 req-7d155216-8bfe-4e5d-8817-8223b3a16744 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:29:30 compute-0 nova_compute[251992]: 2025-12-06 07:29:30.529 251996 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquired lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:29:30 compute-0 nova_compute[251992]: 2025-12-06 07:29:30.529 251996 DEBUG nova.network.neutron [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:31 compute-0 ceph-mon[74339]: pgmap v2162: 305 pgs: 305 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 625 KiB/s rd, 2.9 MiB/s wr, 78 op/s
Dec 06 07:29:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2722503193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:31.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:31 compute-0 sudo[322679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:31 compute-0 sudo[322679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:31 compute-0 sudo[322679]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:31 compute-0 sudo[322704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:29:31 compute-0 sudo[322704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:31 compute-0 sudo[322704]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Dec 06 07:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Dec 06 07:29:31 compute-0 sudo[322729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:31 compute-0 sudo[322729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:31 compute-0 sudo[322729]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:31.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:31 compute-0 nova_compute[251992]: 2025-12-06 07:29:31.641 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:31 compute-0 sudo[322754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 07:29:31 compute-0 sudo[322754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Dec 06 07:29:31 compute-0 sudo[322754]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:29:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 525 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 612 KiB/s rd, 1017 KiB/s wr, 87 op/s
Dec 06 07:29:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:29:32 compute-0 nova_compute[251992]: 2025-12-06 07:29:32.745 251996 DEBUG nova.network.neutron [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating instance_info_cache with network_info: [{"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:29:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:29:32 compute-0 nova_compute[251992]: 2025-12-06 07:29:32.764 251996 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Releasing lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:29:32 compute-0 sudo[322800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:32 compute-0 sudo[322800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:32 compute-0 sudo[322800]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:32 compute-0 sudo[322825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:29:32 compute-0 sudo[322825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:32 compute-0 sudo[322825]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:32 compute-0 nova_compute[251992]: 2025-12-06 07:29:32.881 251996 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 07:29:32 compute-0 nova_compute[251992]: 2025-12-06 07:29:32.882 251996 DEBUG nova.virt.libvirt.volume.remotefs [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Creating file /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/7506689ed72a4790ade08fb270984095.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Dec 06 07:29:32 compute-0 nova_compute[251992]: 2025-12-06 07:29:32.882 251996 DEBUG oslo_concurrency.processutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/7506689ed72a4790ade08fb270984095.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:32 compute-0 sudo[322850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:32 compute-0 sudo[322850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:32 compute-0 sudo[322850]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:32 compute-0 sudo[322876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:29:32 compute-0 sudo[322876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:33 compute-0 ceph-mon[74339]: osdmap e272: 3 total, 3 up, 3 in
Dec 06 07:29:33 compute-0 ceph-mon[74339]: pgmap v2164: 305 pgs: 305 active+clean; 525 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 612 KiB/s rd, 1017 KiB/s wr, 87 op/s
Dec 06 07:29:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1251781797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:33.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:33 compute-0 nova_compute[251992]: 2025-12-06 07:29:33.324 251996 DEBUG oslo_concurrency.processutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/7506689ed72a4790ade08fb270984095.tmp" returned: 1 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:33 compute-0 nova_compute[251992]: 2025-12-06 07:29:33.324 251996 DEBUG oslo_concurrency.processutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6/7506689ed72a4790ade08fb270984095.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 07:29:33 compute-0 nova_compute[251992]: 2025-12-06 07:29:33.325 251996 DEBUG nova.virt.libvirt.volume.remotefs [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Creating directory /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Dec 06 07:29:33 compute-0 nova_compute[251992]: 2025-12-06 07:29:33.325 251996 DEBUG oslo_concurrency.processutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:33 compute-0 sudo[322876]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:29:33 compute-0 sudo[322933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:33 compute-0 nova_compute[251992]: 2025-12-06 07:29:33.529 251996 DEBUG oslo_concurrency.processutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/6ee4f2f5-3303-4c84-b708-eb35a65082b6" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:33 compute-0 sudo[322933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:33 compute-0 nova_compute[251992]: 2025-12-06 07:29:33.532 251996 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:29:33 compute-0 sudo[322933]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:33 compute-0 sudo[322958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:29:33 compute-0 sudo[322958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:33.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:33 compute-0 sudo[322958]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:33 compute-0 sudo[322983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:33 compute-0 sudo[322983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:33 compute-0 sudo[322983]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:33 compute-0 sudo[323008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty --filter-for-batch
Dec 06 07:29:33 compute-0 sudo[323008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 525 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 612 KiB/s rd, 1017 KiB/s wr, 87 op/s
Dec 06 07:29:34 compute-0 podman[323075]: 2025-12-06 07:29:34.02052223 +0000 UTC m=+0.020088454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:29:34 compute-0 nova_compute[251992]: 2025-12-06 07:29:34.178 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:29:34 compute-0 podman[323075]: 2025-12-06 07:29:34.399791109 +0000 UTC m=+0.399357313 container create 453c963442ffa18b524967a32a2728873104f2de89937a0df6e9b414dc6bb160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:29:34 compute-0 systemd[1]: Started libpod-conmon-453c963442ffa18b524967a32a2728873104f2de89937a0df6e9b414dc6bb160.scope.
Dec 06 07:29:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:29:34 compute-0 podman[323075]: 2025-12-06 07:29:34.704946627 +0000 UTC m=+0.704512871 container init 453c963442ffa18b524967a32a2728873104f2de89937a0df6e9b414dc6bb160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brown, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 07:29:34 compute-0 podman[323075]: 2025-12-06 07:29:34.711067442 +0000 UTC m=+0.710633646 container start 453c963442ffa18b524967a32a2728873104f2de89937a0df6e9b414dc6bb160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brown, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:29:34 compute-0 modest_brown[323092]: 167 167
Dec 06 07:29:34 compute-0 systemd[1]: libpod-453c963442ffa18b524967a32a2728873104f2de89937a0df6e9b414dc6bb160.scope: Deactivated successfully.
Dec 06 07:29:34 compute-0 conmon[323092]: conmon 453c963442ffa18b5249 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-453c963442ffa18b524967a32a2728873104f2de89937a0df6e9b414dc6bb160.scope/container/memory.events
Dec 06 07:29:34 compute-0 podman[323075]: 2025-12-06 07:29:34.785391589 +0000 UTC m=+0.784957813 container attach 453c963442ffa18b524967a32a2728873104f2de89937a0df6e9b414dc6bb160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brown, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:29:34 compute-0 podman[323075]: 2025-12-06 07:29:34.786680613 +0000 UTC m=+0.786246817 container died 453c963442ffa18b524967a32a2728873104f2de89937a0df6e9b414dc6bb160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 07:29:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e73b0e94c11f33a298b60701cb4c8701ee624211b8c2c79f40fceb114def752f-merged.mount: Deactivated successfully.
Dec 06 07:29:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:29:35 compute-0 podman[323075]: 2025-12-06 07:29:35.013445515 +0000 UTC m=+1.013011719 container remove 453c963442ffa18b524967a32a2728873104f2de89937a0df6e9b414dc6bb160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 07:29:35 compute-0 systemd[1]: libpod-conmon-453c963442ffa18b524967a32a2728873104f2de89937a0df6e9b414dc6bb160.scope: Deactivated successfully.
Dec 06 07:29:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:35.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:35 compute-0 podman[323119]: 2025-12-06 07:29:35.257528745 +0000 UTC m=+0.096751173 container create 40442c481f285c63c8c22e03ee920b9e2b6afd7af54c28f65b1d968d6377326f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Dec 06 07:29:35 compute-0 podman[323119]: 2025-12-06 07:29:35.187883674 +0000 UTC m=+0.027106122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:29:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:35 compute-0 systemd[1]: Started libpod-conmon-40442c481f285c63c8c22e03ee920b9e2b6afd7af54c28f65b1d968d6377326f.scope.
Dec 06 07:29:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f065dba106b5a83538c9c927538d969db6987d107f57d8da5e25d1f1110349d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f065dba106b5a83538c9c927538d969db6987d107f57d8da5e25d1f1110349d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f065dba106b5a83538c9c927538d969db6987d107f57d8da5e25d1f1110349d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f065dba106b5a83538c9c927538d969db6987d107f57d8da5e25d1f1110349d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:35 compute-0 podman[323119]: 2025-12-06 07:29:35.461100161 +0000 UTC m=+0.300322599 container init 40442c481f285c63c8c22e03ee920b9e2b6afd7af54c28f65b1d968d6377326f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:29:35 compute-0 podman[323119]: 2025-12-06 07:29:35.470552986 +0000 UTC m=+0.309775414 container start 40442c481f285c63c8c22e03ee920b9e2b6afd7af54c28f65b1d968d6377326f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:29:35 compute-0 podman[323119]: 2025-12-06 07:29:35.563443154 +0000 UTC m=+0.402665602 container attach 40442c481f285c63c8c22e03ee920b9e2b6afd7af54c28f65b1d968d6377326f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:29:35 compute-0 podman[323138]: 2025-12-06 07:29:35.609438916 +0000 UTC m=+0.174088981 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 07:29:35 compute-0 podman[323139]: 2025-12-06 07:29:35.611221524 +0000 UTC m=+0.170273258 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:29:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:35.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 191 KiB/s wr, 157 op/s
Dec 06 07:29:36 compute-0 ceph-mon[74339]: pgmap v2165: 305 pgs: 305 active+clean; 525 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 612 KiB/s rd, 1017 KiB/s wr, 87 op/s
Dec 06 07:29:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:36 compute-0 ceph-mon[74339]: pgmap v2166: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 191 KiB/s wr, 157 op/s
Dec 06 07:29:36 compute-0 nova_compute[251992]: 2025-12-06 07:29:36.643 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:36 compute-0 nifty_joliot[323135]: [
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:     {
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:         "available": false,
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:         "ceph_device": false,
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:         "lsm_data": {},
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:         "lvs": [],
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:         "path": "/dev/sr0",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:         "rejected_reasons": [
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "Insufficient space (<5GB)",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "Has a FileSystem"
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:         ],
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:         "sys_api": {
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "actuators": null,
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "device_nodes": "sr0",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "devname": "sr0",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "human_readable_size": "482.00 KB",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "id_bus": "ata",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "model": "QEMU DVD-ROM",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "nr_requests": "2",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "parent": "/dev/sr0",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "partitions": {},
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "path": "/dev/sr0",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "removable": "1",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "rev": "2.5+",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "ro": "0",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "rotational": "1",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "sas_address": "",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "sas_device_handle": "",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "scheduler_mode": "mq-deadline",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "sectors": 0,
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "sectorsize": "2048",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "size": 493568.0,
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "support_discard": "2048",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "type": "disk",
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:             "vendor": "QEMU"
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:         }
Dec 06 07:29:36 compute-0 nifty_joliot[323135]:     }
Dec 06 07:29:36 compute-0 nifty_joliot[323135]: ]
Dec 06 07:29:36 compute-0 systemd[1]: libpod-40442c481f285c63c8c22e03ee920b9e2b6afd7af54c28f65b1d968d6377326f.scope: Deactivated successfully.
Dec 06 07:29:36 compute-0 podman[323119]: 2025-12-06 07:29:36.759684398 +0000 UTC m=+1.598906826 container died 40442c481f285c63c8c22e03ee920b9e2b6afd7af54c28f65b1d968d6377326f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 07:29:36 compute-0 systemd[1]: libpod-40442c481f285c63c8c22e03ee920b9e2b6afd7af54c28f65b1d968d6377326f.scope: Consumed 1.289s CPU time.
Dec 06 07:29:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:37.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f065dba106b5a83538c9c927538d969db6987d107f57d8da5e25d1f1110349d8-merged.mount: Deactivated successfully.
Dec 06 07:29:37 compute-0 kernel: tap06cbbb2f-cb (unregistering): left promiscuous mode
Dec 06 07:29:37 compute-0 NetworkManager[48965]: <info>  [1765006177.4685] device (tap06cbbb2f-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:29:37 compute-0 ovn_controller[147168]: 2025-12-06T07:29:37Z|00406|binding|INFO|Releasing lport 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 from this chassis (sb_readonly=0)
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.474 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:37 compute-0 ovn_controller[147168]: 2025-12-06T07:29:37Z|00407|binding|INFO|Setting lport 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 down in Southbound
Dec 06 07:29:37 compute-0 ovn_controller[147168]: 2025-12-06T07:29:37Z|00408|binding|INFO|Removing iface tap06cbbb2f-cb ovn-installed in OVS
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.479 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.492 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:37 compute-0 podman[323119]: 2025-12-06 07:29:37.495918274 +0000 UTC m=+2.335140702 container remove 40442c481f285c63c8c22e03ee920b9e2b6afd7af54c28f65b1d968d6377326f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.504 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:2c:72 10.100.0.7'], port_security=['fa:16:3e:14:2c:72 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6ee4f2f5-3303-4c84-b708-eb35a65082b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56e13d32-a2bf-49aa-a4ac-9182c3684195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=06cbbb2f-cba8-4b99-b2dd-778c71df2d23) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.506 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e unbound from our chassis
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.509 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.531 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7100f48f-897d-4db2-9d74-68856c605ba2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:37 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000070.scope: Deactivated successfully.
Dec 06 07:29:37 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000070.scope: Consumed 16.148s CPU time.
Dec 06 07:29:37 compute-0 systemd[1]: libpod-conmon-40442c481f285c63c8c22e03ee920b9e2b6afd7af54c28f65b1d968d6377326f.scope: Deactivated successfully.
Dec 06 07:29:37 compute-0 systemd-machined[212986]: Machine qemu-51-instance-00000070 terminated.
Dec 06 07:29:37 compute-0 sudo[323008]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.568 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3831b224-0bdb-49b4-8dfc-4cc40e793763]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.571 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae9a4db-7260-4ca9-9d10-d9c43ff83086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.605 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c39487b2-c9d7-4d36-8430-e710e909c267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.625 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[95664a1a-ba88-4e86-a977-e624eab3ce6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605069, 'reachable_time': 42946, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324344, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:37.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.645 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1f1569ad-6362-4bfa-9c6e-105aacd7edae]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6209aab-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605081, 'tstamp': 605081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324345, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6209aab-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605084, 'tstamp': 605084}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324345, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.647 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.648 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.656 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.657 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6209aab-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.657 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.658 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6209aab-d0, col_values=(('external_ids', {'iface-id': '1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:29:37.658 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.719 251996 INFO nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Instance shutdown successfully after 4 seconds.
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.728 251996 INFO nova.virt.libvirt.driver [-] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Instance destroyed successfully.
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.730 251996 DEBUG nova.virt.libvirt.vif [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1330317525',display_name='tempest-ServerActionsTestOtherA-server-1330317525',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1330317525',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2zgDxxtT0nLqH8UsyYi0lN8OWWrrFEA5pyLz04zJISRImczknO8hVkmNR6jGCiWeaXsQGs+JSkIuJDu8PO8wxSR1MWFJiUPcyPRnxYT8pR/R9bXgGDk3j+Ho5fOrAeLw==',key_name='tempest-keypair-402640413',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:29:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-ul2d1zzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:29:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=6ee4f2f5-3303-4c84-b708-eb35a65082b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1643604044-network", "vif_mac": "fa:16:3e:14:2c:72"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.730 251996 DEBUG nova.network.os_vif_util [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-1643604044-network", "vif_mac": "fa:16:3e:14:2c:72"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.732 251996 DEBUG nova.network.os_vif_util [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.733 251996 DEBUG os_vif [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.737 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.737 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06cbbb2f-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.739 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.743 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.749 251996 INFO os_vif [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb')
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.756 251996 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.757 251996 DEBUG nova.virt.libvirt.driver [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:29:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.892 251996 DEBUG nova.compute.manager [req-094c63d3-a54f-45e8-955e-e82634f34ee6 req-aa75a05c-e86d-48ad-94ac-4f5fc1eccd27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-unplugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.892 251996 DEBUG oslo_concurrency.lockutils [req-094c63d3-a54f-45e8-955e-e82634f34ee6 req-aa75a05c-e86d-48ad-94ac-4f5fc1eccd27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.893 251996 DEBUG oslo_concurrency.lockutils [req-094c63d3-a54f-45e8-955e-e82634f34ee6 req-aa75a05c-e86d-48ad-94ac-4f5fc1eccd27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.893 251996 DEBUG oslo_concurrency.lockutils [req-094c63d3-a54f-45e8-955e-e82634f34ee6 req-aa75a05c-e86d-48ad-94ac-4f5fc1eccd27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.893 251996 DEBUG nova.compute.manager [req-094c63d3-a54f-45e8-955e-e82634f34ee6 req-aa75a05c-e86d-48ad-94ac-4f5fc1eccd27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] No waiting events found dispatching network-vif-unplugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:29:37 compute-0 nova_compute[251992]: 2025-12-06 07:29:37.894 251996 WARNING nova.compute.manager [req-094c63d3-a54f-45e8-955e-e82634f34ee6 req-aa75a05c-e86d-48ad-94ac-4f5fc1eccd27 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received unexpected event network-vif-unplugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for instance with vm_state active and task_state resize_migrating.
Dec 06 07:29:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 191 KiB/s wr, 157 op/s
Dec 06 07:29:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:38 compute-0 ceph-mon[74339]: pgmap v2167: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 191 KiB/s wr, 157 op/s
Dec 06 07:29:39 compute-0 nova_compute[251992]: 2025-12-06 07:29:39.181 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:39.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:39 compute-0 nova_compute[251992]: 2025-12-06 07:29:39.361 251996 DEBUG neutronclient.v2_0.client [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Dec 06 07:29:39 compute-0 nova_compute[251992]: 2025-12-06 07:29:39.595 251996 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:39 compute-0 nova_compute[251992]: 2025-12-06 07:29:39.596 251996 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:39 compute-0 nova_compute[251992]: 2025-12-06 07:29:39.596 251996 DEBUG oslo_concurrency.lockutils [None req-59a600ea-aaab-4445-a9f8-96c7d9151f8c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:39.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 191 KiB/s wr, 157 op/s
Dec 06 07:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:29:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:29:40 compute-0 nova_compute[251992]: 2025-12-06 07:29:40.288 251996 DEBUG nova.compute.manager [req-c60a2151-c412-4eac-8326-6294a7438cfd req-4a6a5742-7bec-4411-88c1-d2a1c29107fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:40 compute-0 nova_compute[251992]: 2025-12-06 07:29:40.289 251996 DEBUG oslo_concurrency.lockutils [req-c60a2151-c412-4eac-8326-6294a7438cfd req-4a6a5742-7bec-4411-88c1-d2a1c29107fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:40 compute-0 nova_compute[251992]: 2025-12-06 07:29:40.289 251996 DEBUG oslo_concurrency.lockutils [req-c60a2151-c412-4eac-8326-6294a7438cfd req-4a6a5742-7bec-4411-88c1-d2a1c29107fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:40 compute-0 nova_compute[251992]: 2025-12-06 07:29:40.289 251996 DEBUG oslo_concurrency.lockutils [req-c60a2151-c412-4eac-8326-6294a7438cfd req-4a6a5742-7bec-4411-88c1-d2a1c29107fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:40 compute-0 nova_compute[251992]: 2025-12-06 07:29:40.289 251996 DEBUG nova.compute.manager [req-c60a2151-c412-4eac-8326-6294a7438cfd req-4a6a5742-7bec-4411-88c1-d2a1c29107fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] No waiting events found dispatching network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:29:40 compute-0 nova_compute[251992]: 2025-12-06 07:29:40.290 251996 WARNING nova.compute.manager [req-c60a2151-c412-4eac-8326-6294a7438cfd req-4a6a5742-7bec-4411-88c1-d2a1c29107fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received unexpected event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for instance with vm_state active and task_state resize_migrated.
Dec 06 07:29:40 compute-0 ceph-mon[74339]: pgmap v2168: 305 pgs: 305 active+clean; 533 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 191 KiB/s wr, 157 op/s
Dec 06 07:29:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:29:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:29:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:29:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8003a8f2-a24b-4792-a1fe-7f6d4c387bf1 does not exist
Dec 06 07:29:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a4b9dc19-bf7d-4bb3-a5d8-619c1a7b6e2d does not exist
Dec 06 07:29:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 37d246b3-044c-4d56-9e98-9eba4ebb204f does not exist
Dec 06 07:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:29:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:29:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:29:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Dec 06 07:29:40 compute-0 sudo[324360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:40 compute-0 sudo[324360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:40 compute-0 sudo[324360]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:41 compute-0 sudo[324385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:29:41 compute-0 sudo[324385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:41 compute-0 sudo[324385]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:41 compute-0 sudo[324410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:41 compute-0 sudo[324410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:41 compute-0 sudo[324410]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Dec 06 07:29:41 compute-0 sudo[324435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:29:41 compute-0 sudo[324435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Dec 06 07:29:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:41.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:41 compute-0 podman[324502]: 2025-12-06 07:29:41.54115462 +0000 UTC m=+0.040338190 container create 87b46445d9dd712d1f504266c1e20c009c24354c2819e36c6d9a115c6c53cd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bassi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:29:41 compute-0 systemd[1]: Started libpod-conmon-87b46445d9dd712d1f504266c1e20c009c24354c2819e36c6d9a115c6c53cd65.scope.
Dec 06 07:29:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:29:41 compute-0 podman[324502]: 2025-12-06 07:29:41.524319716 +0000 UTC m=+0.023503316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:29:41 compute-0 podman[324502]: 2025-12-06 07:29:41.63227662 +0000 UTC m=+0.131460220 container init 87b46445d9dd712d1f504266c1e20c009c24354c2819e36c6d9a115c6c53cd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 07:29:41 compute-0 podman[324502]: 2025-12-06 07:29:41.641968332 +0000 UTC m=+0.141151902 container start 87b46445d9dd712d1f504266c1e20c009c24354c2819e36c6d9a115c6c53cd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bassi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:29:41 compute-0 podman[324502]: 2025-12-06 07:29:41.645524118 +0000 UTC m=+0.144707718 container attach 87b46445d9dd712d1f504266c1e20c009c24354c2819e36c6d9a115c6c53cd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:29:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:41.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:41 compute-0 zealous_bassi[324518]: 167 167
Dec 06 07:29:41 compute-0 systemd[1]: libpod-87b46445d9dd712d1f504266c1e20c009c24354c2819e36c6d9a115c6c53cd65.scope: Deactivated successfully.
Dec 06 07:29:41 compute-0 podman[324502]: 2025-12-06 07:29:41.649051263 +0000 UTC m=+0.148234843 container died 87b46445d9dd712d1f504266c1e20c009c24354c2819e36c6d9a115c6c53cd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1db479df0e7169c20dba32fbeee7650493f70f7b752a1bb04425b475b4f874f2-merged.mount: Deactivated successfully.
Dec 06 07:29:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:29:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:29:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:29:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:29:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:29:41 compute-0 ceph-mon[74339]: osdmap e273: 3 total, 3 up, 3 in
Dec 06 07:29:41 compute-0 podman[324502]: 2025-12-06 07:29:41.696261738 +0000 UTC m=+0.195445318 container remove 87b46445d9dd712d1f504266c1e20c009c24354c2819e36c6d9a115c6c53cd65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:29:41 compute-0 systemd[1]: libpod-conmon-87b46445d9dd712d1f504266c1e20c009c24354c2819e36c6d9a115c6c53cd65.scope: Deactivated successfully.
Dec 06 07:29:41 compute-0 podman[324541]: 2025-12-06 07:29:41.873553734 +0000 UTC m=+0.044063951 container create 9e8ccce301027d802f38b500cf3d1a93d892c862332f61c2ab5d274f2ab2b36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:29:41 compute-0 systemd[1]: Started libpod-conmon-9e8ccce301027d802f38b500cf3d1a93d892c862332f61c2ab5d274f2ab2b36b.scope.
Dec 06 07:29:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c276ac5fb0e4f58930d8d212d791cfa33825f1b26e4f15e60369cc1ab231a20d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c276ac5fb0e4f58930d8d212d791cfa33825f1b26e4f15e60369cc1ab231a20d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c276ac5fb0e4f58930d8d212d791cfa33825f1b26e4f15e60369cc1ab231a20d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c276ac5fb0e4f58930d8d212d791cfa33825f1b26e4f15e60369cc1ab231a20d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c276ac5fb0e4f58930d8d212d791cfa33825f1b26e4f15e60369cc1ab231a20d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:41 compute-0 podman[324541]: 2025-12-06 07:29:41.856531035 +0000 UTC m=+0.027041282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:29:41 compute-0 podman[324541]: 2025-12-06 07:29:41.954846649 +0000 UTC m=+0.125356876 container init 9e8ccce301027d802f38b500cf3d1a93d892c862332f61c2ab5d274f2ab2b36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 07:29:41 compute-0 podman[324541]: 2025-12-06 07:29:41.963616705 +0000 UTC m=+0.134126922 container start 9e8ccce301027d802f38b500cf3d1a93d892c862332f61c2ab5d274f2ab2b36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:29:41 compute-0 podman[324541]: 2025-12-06 07:29:41.967311795 +0000 UTC m=+0.137822032 container attach 9e8ccce301027d802f38b500cf3d1a93d892c862332f61c2ab5d274f2ab2b36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Dec 06 07:29:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 85 KiB/s wr, 142 op/s
Dec 06 07:29:42 compute-0 nova_compute[251992]: 2025-12-06 07:29:42.739 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:42 compute-0 stoic_allen[324559]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:29:42 compute-0 stoic_allen[324559]: --> relative data size: 1.0
Dec 06 07:29:42 compute-0 stoic_allen[324559]: --> All data devices are unavailable
Dec 06 07:29:42 compute-0 systemd[1]: libpod-9e8ccce301027d802f38b500cf3d1a93d892c862332f61c2ab5d274f2ab2b36b.scope: Deactivated successfully.
Dec 06 07:29:42 compute-0 podman[324541]: 2025-12-06 07:29:42.796989344 +0000 UTC m=+0.967499561 container died 9e8ccce301027d802f38b500cf3d1a93d892c862332f61c2ab5d274f2ab2b36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:29:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:29:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:29:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:29:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:29:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:29:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:29:43 compute-0 nova_compute[251992]: 2025-12-06 07:29:43.033 251996 DEBUG nova.compute.manager [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-changed-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:43 compute-0 nova_compute[251992]: 2025-12-06 07:29:43.034 251996 DEBUG nova.compute.manager [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Refreshing instance network info cache due to event network-changed-06cbbb2f-cba8-4b99-b2dd-778c71df2d23. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:29:43 compute-0 nova_compute[251992]: 2025-12-06 07:29:43.034 251996 DEBUG oslo_concurrency.lockutils [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:29:43 compute-0 nova_compute[251992]: 2025-12-06 07:29:43.034 251996 DEBUG oslo_concurrency.lockutils [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:29:43 compute-0 nova_compute[251992]: 2025-12-06 07:29:43.034 251996 DEBUG nova.network.neutron [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Refreshing network info cache for port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:29:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:43.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:43 compute-0 ceph-mon[74339]: pgmap v2170: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 85 KiB/s wr, 142 op/s
Dec 06 07:29:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/54487037' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4188191624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:43.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c276ac5fb0e4f58930d8d212d791cfa33825f1b26e4f15e60369cc1ab231a20d-merged.mount: Deactivated successfully.
Dec 06 07:29:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 85 KiB/s wr, 142 op/s
Dec 06 07:29:44 compute-0 nova_compute[251992]: 2025-12-06 07:29:44.183 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:44 compute-0 podman[324541]: 2025-12-06 07:29:44.339548378 +0000 UTC m=+2.510058595 container remove 9e8ccce301027d802f38b500cf3d1a93d892c862332f61c2ab5d274f2ab2b36b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_allen, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 07:29:44 compute-0 systemd[1]: libpod-conmon-9e8ccce301027d802f38b500cf3d1a93d892c862332f61c2ab5d274f2ab2b36b.scope: Deactivated successfully.
Dec 06 07:29:44 compute-0 sudo[324435]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:44 compute-0 sudo[324588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:44 compute-0 sudo[324588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:44 compute-0 sudo[324588]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:44 compute-0 sudo[324613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:29:44 compute-0 sudo[324613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:44 compute-0 sudo[324613]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:44 compute-0 sudo[324638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:44 compute-0 sudo[324638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:44 compute-0 sudo[324638]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:44 compute-0 sudo[324664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:29:44 compute-0 sudo[324664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:45 compute-0 podman[324730]: 2025-12-06 07:29:44.916431182 +0000 UTC m=+0.021865452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:29:45 compute-0 ceph-mon[74339]: pgmap v2171: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 85 KiB/s wr, 142 op/s
Dec 06 07:29:45 compute-0 podman[324730]: 2025-12-06 07:29:45.050346567 +0000 UTC m=+0.155780847 container create c4fe2b6c18aa0d99f7177bbc09a4f79d90848d008115f91ef9db14df16e5824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_torvalds, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 07:29:45 compute-0 systemd[1]: Started libpod-conmon-c4fe2b6c18aa0d99f7177bbc09a4f79d90848d008115f91ef9db14df16e5824b.scope.
Dec 06 07:29:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:29:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:45.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:45 compute-0 podman[324730]: 2025-12-06 07:29:45.218285861 +0000 UTC m=+0.323720131 container init c4fe2b6c18aa0d99f7177bbc09a4f79d90848d008115f91ef9db14df16e5824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:29:45 compute-0 podman[324730]: 2025-12-06 07:29:45.226267037 +0000 UTC m=+0.331701287 container start c4fe2b6c18aa0d99f7177bbc09a4f79d90848d008115f91ef9db14df16e5824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_torvalds, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:29:45 compute-0 bold_torvalds[324747]: 167 167
Dec 06 07:29:45 compute-0 systemd[1]: libpod-c4fe2b6c18aa0d99f7177bbc09a4f79d90848d008115f91ef9db14df16e5824b.scope: Deactivated successfully.
Dec 06 07:29:45 compute-0 podman[324730]: 2025-12-06 07:29:45.270383797 +0000 UTC m=+0.375818077 container attach c4fe2b6c18aa0d99f7177bbc09a4f79d90848d008115f91ef9db14df16e5824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_torvalds, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:29:45 compute-0 podman[324730]: 2025-12-06 07:29:45.270984974 +0000 UTC m=+0.376419214 container died c4fe2b6c18aa0d99f7177bbc09a4f79d90848d008115f91ef9db14df16e5824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 07:29:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:45.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe39d7200fc727a952fbf2ca5f3cb20bd162e0e2f0b9d33e21fe57c15e86edf4-merged.mount: Deactivated successfully.
Dec 06 07:29:45 compute-0 podman[324730]: 2025-12-06 07:29:45.756563552 +0000 UTC m=+0.861997802 container remove c4fe2b6c18aa0d99f7177bbc09a4f79d90848d008115f91ef9db14df16e5824b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 07:29:45 compute-0 systemd[1]: libpod-conmon-c4fe2b6c18aa0d99f7177bbc09a4f79d90848d008115f91ef9db14df16e5824b.scope: Deactivated successfully.
Dec 06 07:29:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:45 compute-0 nova_compute[251992]: 2025-12-06 07:29:45.949 251996 DEBUG nova.network.neutron [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updated VIF entry in instance network info cache for port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:29:45 compute-0 nova_compute[251992]: 2025-12-06 07:29:45.950 251996 DEBUG nova.network.neutron [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating instance_info_cache with network_info: [{"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:29:45 compute-0 nova_compute[251992]: 2025-12-06 07:29:45.975 251996 DEBUG oslo_concurrency.lockutils [req-c6fe95e3-d156-42c7-92b8-9f51c643c0d1 req-57acfcb8-465b-4f6a-8c47-533eaede7f43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:29:46 compute-0 podman[324772]: 2025-12-06 07:29:45.93167897 +0000 UTC m=+0.026928148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:29:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 498 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 535 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 06 07:29:46 compute-0 podman[324772]: 2025-12-06 07:29:46.138283267 +0000 UTC m=+0.233532415 container create 60cfdc9ee77dc0e771364e0f4fe8073b9fda231dd57501b42599a99de9cda3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bardeen, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:29:46 compute-0 systemd[1]: Started libpod-conmon-60cfdc9ee77dc0e771364e0f4fe8073b9fda231dd57501b42599a99de9cda3c2.scope.
Dec 06 07:29:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:29:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a65d5a8e42a025c931c59376434ea0436c674bfa84fd56a4b9aa1996a0cede/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a65d5a8e42a025c931c59376434ea0436c674bfa84fd56a4b9aa1996a0cede/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a65d5a8e42a025c931c59376434ea0436c674bfa84fd56a4b9aa1996a0cede/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a65d5a8e42a025c931c59376434ea0436c674bfa84fd56a4b9aa1996a0cede/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:46 compute-0 podman[324772]: 2025-12-06 07:29:46.387954877 +0000 UTC m=+0.483204045 container init 60cfdc9ee77dc0e771364e0f4fe8073b9fda231dd57501b42599a99de9cda3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 07:29:46 compute-0 podman[324772]: 2025-12-06 07:29:46.397033033 +0000 UTC m=+0.492282181 container start 60cfdc9ee77dc0e771364e0f4fe8073b9fda231dd57501b42599a99de9cda3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bardeen, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:29:46 compute-0 podman[324772]: 2025-12-06 07:29:46.770951387 +0000 UTC m=+0.866200535 container attach 60cfdc9ee77dc0e771364e0f4fe8073b9fda231dd57501b42599a99de9cda3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:29:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1839537884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:46 compute-0 ceph-mon[74339]: pgmap v2172: 305 pgs: 305 active+clean; 498 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 535 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]: {
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:     "0": [
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:         {
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             "devices": [
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "/dev/loop3"
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             ],
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             "lv_name": "ceph_lv0",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             "lv_size": "7511998464",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             "name": "ceph_lv0",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             "tags": {
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "ceph.cluster_name": "ceph",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "ceph.crush_device_class": "",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "ceph.encrypted": "0",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "ceph.osd_id": "0",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "ceph.type": "block",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:                 "ceph.vdo": "0"
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             },
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             "type": "block",
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:             "vg_name": "ceph_vg0"
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:         }
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]:     ]
Dec 06 07:29:47 compute-0 pensive_bardeen[324788]: }
Dec 06 07:29:47 compute-0 systemd[1]: libpod-60cfdc9ee77dc0e771364e0f4fe8073b9fda231dd57501b42599a99de9cda3c2.scope: Deactivated successfully.
Dec 06 07:29:47 compute-0 podman[324772]: 2025-12-06 07:29:47.137743079 +0000 UTC m=+1.232992247 container died 60cfdc9ee77dc0e771364e0f4fe8073b9fda231dd57501b42599a99de9cda3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:29:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:47.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:47.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:47 compute-0 nova_compute[251992]: 2025-12-06 07:29:47.742 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 498 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 535 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 06 07:29:48 compute-0 sudo[324811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:48 compute-0 sudo[324811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:48 compute-0 sudo[324811]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:48 compute-0 sudo[324836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:48 compute-0 sudo[324836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:48 compute-0 sudo[324836]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2507357678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6a65d5a8e42a025c931c59376434ea0436c674bfa84fd56a4b9aa1996a0cede-merged.mount: Deactivated successfully.
Dec 06 07:29:48 compute-0 podman[324772]: 2025-12-06 07:29:48.836369206 +0000 UTC m=+2.931618354 container remove 60cfdc9ee77dc0e771364e0f4fe8073b9fda231dd57501b42599a99de9cda3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 07:29:48 compute-0 systemd[1]: libpod-conmon-60cfdc9ee77dc0e771364e0f4fe8073b9fda231dd57501b42599a99de9cda3c2.scope: Deactivated successfully.
Dec 06 07:29:48 compute-0 sudo[324664]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:48 compute-0 nova_compute[251992]: 2025-12-06 07:29:48.929 251996 DEBUG nova.compute.manager [req-2c93b5b1-c083-4b8a-bed7-8c7e2110104c req-c9e21c6f-e8f0-488f-a573-c592461fa4cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:48 compute-0 nova_compute[251992]: 2025-12-06 07:29:48.930 251996 DEBUG oslo_concurrency.lockutils [req-2c93b5b1-c083-4b8a-bed7-8c7e2110104c req-c9e21c6f-e8f0-488f-a573-c592461fa4cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:48 compute-0 nova_compute[251992]: 2025-12-06 07:29:48.931 251996 DEBUG oslo_concurrency.lockutils [req-2c93b5b1-c083-4b8a-bed7-8c7e2110104c req-c9e21c6f-e8f0-488f-a573-c592461fa4cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:48 compute-0 nova_compute[251992]: 2025-12-06 07:29:48.931 251996 DEBUG oslo_concurrency.lockutils [req-2c93b5b1-c083-4b8a-bed7-8c7e2110104c req-c9e21c6f-e8f0-488f-a573-c592461fa4cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:48 compute-0 nova_compute[251992]: 2025-12-06 07:29:48.931 251996 DEBUG nova.compute.manager [req-2c93b5b1-c083-4b8a-bed7-8c7e2110104c req-c9e21c6f-e8f0-488f-a573-c592461fa4cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] No waiting events found dispatching network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:29:48 compute-0 nova_compute[251992]: 2025-12-06 07:29:48.931 251996 WARNING nova.compute.manager [req-2c93b5b1-c083-4b8a-bed7-8c7e2110104c req-c9e21c6f-e8f0-488f-a573-c592461fa4cb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received unexpected event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for instance with vm_state active and task_state resize_finish.
Dec 06 07:29:48 compute-0 sudo[324863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:48 compute-0 sudo[324863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:48 compute-0 sudo[324863]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:49 compute-0 sudo[324888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:29:49 compute-0 sudo[324888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:49 compute-0 sudo[324888]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:49 compute-0 sudo[324913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:49 compute-0 sudo[324913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:49 compute-0 sudo[324913]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:49 compute-0 sudo[324938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:29:49 compute-0 sudo[324938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:49 compute-0 nova_compute[251992]: 2025-12-06 07:29:49.185 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:49.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:49 compute-0 podman[325003]: 2025-12-06 07:29:49.441464122 +0000 UTC m=+0.021484661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:29:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:49.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:49 compute-0 podman[325003]: 2025-12-06 07:29:49.884948885 +0000 UTC m=+0.464969394 container create 20cf97eaac37b40630aa30d2fab06a54822dcaabded395b79f2078c5aa453bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_neumann, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:29:49 compute-0 ceph-mon[74339]: pgmap v2173: 305 pgs: 305 active+clean; 498 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 535 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 06 07:29:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3309389964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3301973627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:29:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 498 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 535 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 06 07:29:50 compute-0 systemd[1]: Started libpod-conmon-20cf97eaac37b40630aa30d2fab06a54822dcaabded395b79f2078c5aa453bd3.scope.
Dec 06 07:29:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:29:50 compute-0 podman[325003]: 2025-12-06 07:29:50.339822514 +0000 UTC m=+0.919843013 container init 20cf97eaac37b40630aa30d2fab06a54822dcaabded395b79f2078c5aa453bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_neumann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:29:50 compute-0 podman[325003]: 2025-12-06 07:29:50.348431756 +0000 UTC m=+0.928452265 container start 20cf97eaac37b40630aa30d2fab06a54822dcaabded395b79f2078c5aa453bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:29:50 compute-0 elegant_neumann[325019]: 167 167
Dec 06 07:29:50 compute-0 systemd[1]: libpod-20cf97eaac37b40630aa30d2fab06a54822dcaabded395b79f2078c5aa453bd3.scope: Deactivated successfully.
Dec 06 07:29:50 compute-0 nova_compute[251992]: 2025-12-06 07:29:50.452 251996 DEBUG oslo_concurrency.lockutils [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:50 compute-0 nova_compute[251992]: 2025-12-06 07:29:50.452 251996 DEBUG oslo_concurrency.lockutils [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:50 compute-0 nova_compute[251992]: 2025-12-06 07:29:50.453 251996 DEBUG nova.compute.manager [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Going to confirm migration 16 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Dec 06 07:29:50 compute-0 podman[325003]: 2025-12-06 07:29:50.84044748 +0000 UTC m=+1.420468009 container attach 20cf97eaac37b40630aa30d2fab06a54822dcaabded395b79f2078c5aa453bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 07:29:50 compute-0 podman[325003]: 2025-12-06 07:29:50.841816726 +0000 UTC m=+1.421837225 container died 20cf97eaac37b40630aa30d2fab06a54822dcaabded395b79f2078c5aa453bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_neumann, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:29:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:51 compute-0 nova_compute[251992]: 2025-12-06 07:29:51.050 251996 DEBUG neutronclient.v2_0.client [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Dec 06 07:29:51 compute-0 nova_compute[251992]: 2025-12-06 07:29:51.052 251996 DEBUG oslo_concurrency.lockutils [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:29:51 compute-0 nova_compute[251992]: 2025-12-06 07:29:51.052 251996 DEBUG oslo_concurrency.lockutils [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquired lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:29:51 compute-0 nova_compute[251992]: 2025-12-06 07:29:51.053 251996 DEBUG nova.network.neutron [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:29:51 compute-0 nova_compute[251992]: 2025-12-06 07:29:51.053 251996 DEBUG nova.objects.instance [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'info_cache' on Instance uuid 6ee4f2f5-3303-4c84-b708-eb35a65082b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:29:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:51.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e47992e1aaeabd8dbb5ff69bc84b285218682c847a7a5df6803b6f007166376-merged.mount: Deactivated successfully.
Dec 06 07:29:51 compute-0 podman[325003]: 2025-12-06 07:29:51.328398142 +0000 UTC m=+1.908418651 container remove 20cf97eaac37b40630aa30d2fab06a54822dcaabded395b79f2078c5aa453bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:29:51 compute-0 systemd[1]: libpod-conmon-20cf97eaac37b40630aa30d2fab06a54822dcaabded395b79f2078c5aa453bd3.scope: Deactivated successfully.
Dec 06 07:29:51 compute-0 ceph-mon[74339]: pgmap v2174: 305 pgs: 305 active+clean; 498 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 535 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 06 07:29:51 compute-0 podman[325044]: 2025-12-06 07:29:51.490465788 +0000 UTC m=+0.020722851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:29:51 compute-0 podman[325044]: 2025-12-06 07:29:51.624191738 +0000 UTC m=+0.154448781 container create d20e0bca4c593f6c8e4c64d77e49198577f367974663318406d1a2b359d41150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:29:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:51.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 178 op/s
Dec 06 07:29:52 compute-0 systemd[1]: Started libpod-conmon-d20e0bca4c593f6c8e4c64d77e49198577f367974663318406d1a2b359d41150.scope.
Dec 06 07:29:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057db1a76a0a73228e98133660c7e992e73d0c23483adb42e68de7f27c11c943/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057db1a76a0a73228e98133660c7e992e73d0c23483adb42e68de7f27c11c943/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057db1a76a0a73228e98133660c7e992e73d0c23483adb42e68de7f27c11c943/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057db1a76a0a73228e98133660c7e992e73d0c23483adb42e68de7f27c11c943/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:29:52 compute-0 podman[325044]: 2025-12-06 07:29:52.394602576 +0000 UTC m=+0.924859639 container init d20e0bca4c593f6c8e4c64d77e49198577f367974663318406d1a2b359d41150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cannon, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:29:52 compute-0 podman[325044]: 2025-12-06 07:29:52.401022089 +0000 UTC m=+0.931279132 container start d20e0bca4c593f6c8e4c64d77e49198577f367974663318406d1a2b359d41150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cannon, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 07:29:52 compute-0 podman[325044]: 2025-12-06 07:29:52.523824335 +0000 UTC m=+1.054081398 container attach d20e0bca4c593f6c8e4c64d77e49198577f367974663318406d1a2b359d41150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cannon, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:29:52 compute-0 nova_compute[251992]: 2025-12-06 07:29:52.537 251996 DEBUG nova.compute.manager [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:29:52 compute-0 nova_compute[251992]: 2025-12-06 07:29:52.539 251996 DEBUG oslo_concurrency.lockutils [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:52 compute-0 nova_compute[251992]: 2025-12-06 07:29:52.539 251996 DEBUG oslo_concurrency.lockutils [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:52 compute-0 nova_compute[251992]: 2025-12-06 07:29:52.539 251996 DEBUG oslo_concurrency.lockutils [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:52 compute-0 nova_compute[251992]: 2025-12-06 07:29:52.539 251996 DEBUG nova.compute.manager [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] No waiting events found dispatching network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:29:52 compute-0 nova_compute[251992]: 2025-12-06 07:29:52.540 251996 WARNING nova.compute.manager [req-36b7657e-7c8c-4d45-815f-33cee22ec720 req-d849f725-5fe6-47f9-8af2-baac53f2d91e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Received unexpected event network-vif-plugged-06cbbb2f-cba8-4b99-b2dd-778c71df2d23 for instance with vm_state resized and task_state None.
Dec 06 07:29:52 compute-0 nova_compute[251992]: 2025-12-06 07:29:52.720 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006177.718902, 6ee4f2f5-3303-4c84-b708-eb35a65082b6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:29:52 compute-0 nova_compute[251992]: 2025-12-06 07:29:52.720 251996 INFO nova.compute.manager [-] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] VM Stopped (Lifecycle Event)
Dec 06 07:29:52 compute-0 nova_compute[251992]: 2025-12-06 07:29:52.742 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:52 compute-0 nova_compute[251992]: 2025-12-06 07:29:52.753 251996 DEBUG nova.compute.manager [None req-7cade469-da0e-401d-abfd-4eb20b2a3a0c - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:29:52 compute-0 nova_compute[251992]: 2025-12-06 07:29:52.757 251996 DEBUG nova.compute.manager [None req-7cade469-da0e-401d-abfd-4eb20b2a3a0c - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:29:52 compute-0 nova_compute[251992]: 2025-12-06 07:29:52.781 251996 INFO nova.compute.manager [None req-7cade469-da0e-401d-abfd-4eb20b2a3a0c - - - - - -] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Dec 06 07:29:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:53.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:29:53 compute-0 frosty_cannon[325061]: {
Dec 06 07:29:53 compute-0 frosty_cannon[325061]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:29:53 compute-0 frosty_cannon[325061]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:29:53 compute-0 frosty_cannon[325061]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:29:53 compute-0 frosty_cannon[325061]:         "osd_id": 0,
Dec 06 07:29:53 compute-0 frosty_cannon[325061]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:29:53 compute-0 frosty_cannon[325061]:         "type": "bluestore"
Dec 06 07:29:53 compute-0 frosty_cannon[325061]:     }
Dec 06 07:29:53 compute-0 frosty_cannon[325061]: }
Dec 06 07:29:53 compute-0 systemd[1]: libpod-d20e0bca4c593f6c8e4c64d77e49198577f367974663318406d1a2b359d41150.scope: Deactivated successfully.
Dec 06 07:29:53 compute-0 podman[325044]: 2025-12-06 07:29:53.278323943 +0000 UTC m=+1.808580986 container died d20e0bca4c593f6c8e4c64d77e49198577f367974663318406d1a2b359d41150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 07:29:53 compute-0 ceph-mon[74339]: pgmap v2175: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 178 op/s
Dec 06 07:29:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:53.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-057db1a76a0a73228e98133660c7e992e73d0c23483adb42e68de7f27c11c943-merged.mount: Deactivated successfully.
Dec 06 07:29:53 compute-0 podman[325044]: 2025-12-06 07:29:53.845248877 +0000 UTC m=+2.375505920 container remove d20e0bca4c593f6c8e4c64d77e49198577f367974663318406d1a2b359d41150 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Dec 06 07:29:53 compute-0 sudo[324938]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:29:53 compute-0 systemd[1]: libpod-conmon-d20e0bca4c593f6c8e4c64d77e49198577f367974663318406d1a2b359d41150.scope: Deactivated successfully.
Dec 06 07:29:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Dec 06 07:29:54 compute-0 nova_compute[251992]: 2025-12-06 07:29:54.188 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:29:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2796408c-6da8-4a97-b7d4-c67f6b6a25a4 does not exist
Dec 06 07:29:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5c03e167-870d-48fa-b8c9-a4da9d45055e does not exist
Dec 06 07:29:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 95a84041-176b-4da6-9ec5-07450e9487a1 does not exist
Dec 06 07:29:54 compute-0 sudo[325095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:29:54 compute-0 sudo[325095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:54 compute-0 sudo[325095]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:54 compute-0 sudo[325120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:29:54 compute-0 sudo[325120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:29:54 compute-0 sudo[325120]: pam_unix(sudo:session): session closed for user root
Dec 06 07:29:54 compute-0 ceph-mon[74339]: pgmap v2176: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Dec 06 07:29:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:29:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:55.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:55.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:55 compute-0 nova_compute[251992]: 2025-12-06 07:29:55.879 251996 DEBUG nova.network.neutron [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 6ee4f2f5-3303-4c84-b708-eb35a65082b6] Updating instance_info_cache with network_info: [{"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:29:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.080 251996 DEBUG oslo_concurrency.lockutils [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Releasing lock "refresh_cache-6ee4f2f5-3303-4c84-b708-eb35a65082b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.081 251996 DEBUG nova.objects.instance [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'migration_context' on Instance uuid 6ee4f2f5-3303-4c84-b708-eb35a65082b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:29:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.250 251996 DEBUG nova.storage.rbd_utils [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] rbd image 6ee4f2f5-3303-4c84-b708-eb35a65082b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.257 251996 DEBUG nova.virt.libvirt.vif [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:28:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1330317525',display_name='tempest-ServerActionsTestOtherA-server-1330317525',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1330317525',id=112,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2zgDxxtT0nLqH8UsyYi0lN8OWWrrFEA5pyLz04zJISRImczknO8hVkmNR6jGCiWeaXsQGs+JSkIuJDu8PO8wxSR1MWFJiUPcyPRnxYT8pR/R9bXgGDk3j+Ho5fOrAeLw==',key_name='tempest-keypair-402640413',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-ul2d1zzd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:29:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=6ee4f2f5-3303-4c84-b708-eb35a65082b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", 
"type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.258 251996 DEBUG nova.network.os_vif_util [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "address": "fa:16:3e:14:2c:72", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cbbb2f-cb", "ovs_interfaceid": "06cbbb2f-cba8-4b99-b2dd-778c71df2d23", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.258 251996 DEBUG nova.network.os_vif_util [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.258 251996 DEBUG os_vif [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.260 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.260 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06cbbb2f-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.260 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.262 251996 INFO os_vif [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:2c:72,bridge_name='br-int',has_traffic_filtering=True,id=06cbbb2f-cba8-4b99-b2dd-778c71df2d23,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cbbb2f-cb')
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.262 251996 DEBUG oslo_concurrency.lockutils [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.263 251996 DEBUG oslo_concurrency.lockutils [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:29:56 compute-0 nova_compute[251992]: 2025-12-06 07:29:56.985 251996 DEBUG oslo_concurrency.processutils [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:29:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:57.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:29:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1937180653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:57 compute-0 nova_compute[251992]: 2025-12-06 07:29:57.495 251996 DEBUG oslo_concurrency.processutils [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:29:57 compute-0 nova_compute[251992]: 2025-12-06 07:29:57.504 251996 DEBUG nova.compute.provider_tree [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:29:57 compute-0 nova_compute[251992]: 2025-12-06 07:29:57.532 251996 DEBUG nova.scheduler.client.report [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:29:57 compute-0 ceph-mon[74339]: pgmap v2177: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Dec 06 07:29:57 compute-0 nova_compute[251992]: 2025-12-06 07:29:57.591 251996 DEBUG oslo_concurrency.lockutils [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:29:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:57.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:29:57 compute-0 nova_compute[251992]: 2025-12-06 07:29:57.744 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:58 compute-0 nova_compute[251992]: 2025-12-06 07:29:58.030 251996 INFO nova.scheduler.client.report [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Deleted allocation for migration c33ab03a-95a6-4e21-b04b-af3607324828
Dec 06 07:29:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 129 op/s
Dec 06 07:29:58 compute-0 nova_compute[251992]: 2025-12-06 07:29:58.138 251996 DEBUG oslo_concurrency.lockutils [None req-2af82cad-48bd-4eb5-a93f-d9678ca8479c baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "6ee4f2f5-3303-4c84-b708-eb35a65082b6" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 7.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1937180653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:29:58 compute-0 ceph-mon[74339]: pgmap v2178: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 129 op/s
Dec 06 07:29:59 compute-0 nova_compute[251992]: 2025-12-06 07:29:59.191 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:29:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:29:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:29:59.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:29:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:29:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:29:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:29:59.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 07:30:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 129 op/s
Dec 06 07:30:00 compute-0 podman[325187]: 2025-12-06 07:30:00.462148809 +0000 UTC m=+0.118379386 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 06 07:30:00 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 07:30:00 compute-0 ceph-mon[74339]: pgmap v2179: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 129 op/s
Dec 06 07:30:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:01.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:01.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 24 KiB/s wr, 167 op/s
Dec 06 07:30:02 compute-0 nova_compute[251992]: 2025-12-06 07:30:02.747 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:02 compute-0 ceph-mon[74339]: pgmap v2180: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 24 KiB/s wr, 167 op/s
Dec 06 07:30:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:03.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:03.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:03.836 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:03.837 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:03.837 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Dec 06 07:30:04 compute-0 nova_compute[251992]: 2025-12-06 07:30:04.193 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:04 compute-0 ceph-mon[74339]: pgmap v2181: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 76 op/s
Dec 06 07:30:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:05.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:05 compute-0 nova_compute[251992]: 2025-12-06 07:30:05.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:05.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:05 compute-0 nova_compute[251992]: 2025-12-06 07:30:05.797 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:05 compute-0 nova_compute[251992]: 2025-12-06 07:30:05.797 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:05 compute-0 nova_compute[251992]: 2025-12-06 07:30:05.798 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:05 compute-0 nova_compute[251992]: 2025-12-06 07:30:05.798 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:30:05 compute-0 nova_compute[251992]: 2025-12-06 07:30:05.798 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 390 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 28 KiB/s wr, 161 op/s
Dec 06 07:30:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:30:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/881476132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.302 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:06 compute-0 podman[325238]: 2025-12-06 07:30:06.405424748 +0000 UTC m=+0.056420024 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 07:30:06 compute-0 podman[325239]: 2025-12-06 07:30:06.414514643 +0000 UTC m=+0.063066223 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.442 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.442 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.446 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.446 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.635 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.636 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4031MB free_disk=20.863555908203125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.637 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.637 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.714 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 00f56c62-f327-41e3-a105-24f56ae124c0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.714 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance dd85818c-bf82-473d-8650-6b391dbfa300 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.715 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.715 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:30:06 compute-0 nova_compute[251992]: 2025-12-06 07:30:06.765 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:07.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:30:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/434332849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:07 compute-0 nova_compute[251992]: 2025-12-06 07:30:07.315 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:07 compute-0 nova_compute[251992]: 2025-12-06 07:30:07.324 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:30:07 compute-0 nova_compute[251992]: 2025-12-06 07:30:07.348 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:30:07 compute-0 nova_compute[251992]: 2025-12-06 07:30:07.403 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:30:07 compute-0 nova_compute[251992]: 2025-12-06 07:30:07.403 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:07 compute-0 ceph-mon[74339]: pgmap v2182: 305 pgs: 305 active+clean; 390 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 28 KiB/s wr, 161 op/s
Dec 06 07:30:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/881476132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:07 compute-0 nova_compute[251992]: 2025-12-06 07:30:07.748 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 390 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 16 KiB/s wr, 123 op/s
Dec 06 07:30:08 compute-0 sudo[325298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:08 compute-0 sudo[325298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:08 compute-0 sudo[325298]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:08 compute-0 sudo[325323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:08 compute-0 sudo[325323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:08 compute-0 sudo[325323]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/434332849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1666525427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3067774594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:08 compute-0 ceph-mon[74339]: pgmap v2183: 305 pgs: 305 active+clean; 390 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 16 KiB/s wr, 123 op/s
Dec 06 07:30:09 compute-0 nova_compute[251992]: 2025-12-06 07:30:09.194 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:09.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/815563372' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:30:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/815563372' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:30:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3057662245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/20788799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:09.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 390 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 16 KiB/s wr, 123 op/s
Dec 06 07:30:10 compute-0 nova_compute[251992]: 2025-12-06 07:30:10.404 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:10 compute-0 nova_compute[251992]: 2025-12-06 07:30:10.405 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:10 compute-0 nova_compute[251992]: 2025-12-06 07:30:10.405 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:10 compute-0 nova_compute[251992]: 2025-12-06 07:30:10.405 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:10 compute-0 nova_compute[251992]: 2025-12-06 07:30:10.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:10 compute-0 nova_compute[251992]: 2025-12-06 07:30:10.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:10 compute-0 nova_compute[251992]: 2025-12-06 07:30:10.854 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:10.856 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:30:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:10.857 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:30:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:11.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:11 compute-0 ovn_controller[147168]: 2025-12-06T07:30:11Z|00409|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:30:11 compute-0 nova_compute[251992]: 2025-12-06 07:30:11.351 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/712143605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:11 compute-0 ceph-mon[74339]: pgmap v2184: 305 pgs: 305 active+clean; 390 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 16 KiB/s wr, 123 op/s
Dec 06 07:30:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:11.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:11.859 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 328 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 28 KiB/s wr, 161 op/s
Dec 06 07:30:12 compute-0 nova_compute[251992]: 2025-12-06 07:30:12.769 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3433212711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/878054034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:12 compute-0 ceph-mon[74339]: pgmap v2185: 305 pgs: 305 active+clean; 328 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 28 KiB/s wr, 161 op/s
Dec 06 07:30:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/87351210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2865460060' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:30:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:30:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:30:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:30:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:30:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:30:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:13.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:13.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 328 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 579 KiB/s rd, 26 KiB/s wr, 122 op/s
Dec 06 07:30:14 compute-0 nova_compute[251992]: 2025-12-06 07:30:14.196 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:15.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:15 compute-0 ovn_controller[147168]: 2025-12-06T07:30:15Z|00410|binding|INFO|Releasing lport 1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c from this chassis (sb_readonly=0)
Dec 06 07:30:15 compute-0 nova_compute[251992]: 2025-12-06 07:30:15.474 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:30:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:15.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:30:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 328 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 589 KiB/s rd, 26 KiB/s wr, 134 op/s
Dec 06 07:30:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:17.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:17 compute-0 nova_compute[251992]: 2025-12-06 07:30:17.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:17 compute-0 nova_compute[251992]: 2025-12-06 07:30:17.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:30:17 compute-0 ceph-mon[74339]: pgmap v2186: 305 pgs: 305 active+clean; 328 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 579 KiB/s rd, 26 KiB/s wr, 122 op/s
Dec 06 07:30:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/693340743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:17 compute-0 ceph-mon[74339]: pgmap v2187: 305 pgs: 305 active+clean; 328 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 589 KiB/s rd, 26 KiB/s wr, 134 op/s
Dec 06 07:30:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:17.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:17 compute-0 nova_compute[251992]: 2025-12-06 07:30:17.772 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 328 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 13 KiB/s wr, 49 op/s
Dec 06 07:30:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:30:18
Dec 06 07:30:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:30:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:30:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'volumes']
Dec 06 07:30:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:30:18 compute-0 ceph-mon[74339]: pgmap v2188: 305 pgs: 305 active+clean; 328 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 13 KiB/s wr, 49 op/s
Dec 06 07:30:19 compute-0 nova_compute[251992]: 2025-12-06 07:30:19.198 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:30:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:19.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:30:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:19.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3715423026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 328 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 13 KiB/s wr, 49 op/s
Dec 06 07:30:21 compute-0 ceph-mon[74339]: pgmap v2189: 305 pgs: 305 active+clean; 328 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 13 KiB/s wr, 49 op/s
Dec 06 07:30:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2634629622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:21.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.434 251996 DEBUG oslo_concurrency.lockutils [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "dd85818c-bf82-473d-8650-6b391dbfa300" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.434 251996 DEBUG oslo_concurrency.lockutils [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.435 251996 DEBUG oslo_concurrency.lockutils [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.435 251996 DEBUG oslo_concurrency.lockutils [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.435 251996 DEBUG oslo_concurrency.lockutils [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.436 251996 INFO nova.compute.manager [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Terminating instance
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.437 251996 DEBUG nova.compute.manager [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:30:21 compute-0 kernel: tap3edb8e90-65 (unregistering): left promiscuous mode
Dec 06 07:30:21 compute-0 NetworkManager[48965]: <info>  [1765006221.4947] device (tap3edb8e90-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.502 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:21 compute-0 ovn_controller[147168]: 2025-12-06T07:30:21Z|00411|binding|INFO|Releasing lport 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 from this chassis (sb_readonly=0)
Dec 06 07:30:21 compute-0 ovn_controller[147168]: 2025-12-06T07:30:21Z|00412|binding|INFO|Setting lport 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 down in Southbound
Dec 06 07:30:21 compute-0 ovn_controller[147168]: 2025-12-06T07:30:21Z|00413|binding|INFO|Removing iface tap3edb8e90-65 ovn-installed in OVS
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.505 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.513 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:67:71 10.100.0.11'], port_security=['fa:16:3e:b8:67:71 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'dd85818c-bf82-473d-8650-6b391dbfa300', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=3edb8e90-653f-4c6f-9f6e-90dfc0fdb014) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.515 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e unbound from our chassis
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.516 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6209aab-d53f-4d58-9b94-ffb7adc6239e
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.518 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.532 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5e39beb1-7f2b-4905-9c9c-fbeddd12778c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:21 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Dec 06 07:30:21 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006d.scope: Consumed 20.756s CPU time.
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.566 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2106eb1c-2112-4616-ab12-4733a0e03f3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:21 compute-0 systemd-machined[212986]: Machine qemu-50-instance-0000006d terminated.
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.570 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[9c160a1c-8eee-44df-8aa6-7b0b009f0178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.599 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7aef1a75-3727-415d-9f76-534b43a21671]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.618 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab869f1-af48-4dc6-aca9-fc7f11a8c2ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6209aab-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:c5:a9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 109], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605069, 'reachable_time': 42946, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325367, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.632 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ec0423-131e-4b5b-b064-5ab7d5245ae1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6209aab-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605081, 'tstamp': 605081}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325368, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6209aab-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 605084, 'tstamp': 605084}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325368, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.634 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.636 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.640 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.641 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6209aab-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.641 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.641 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6209aab-d0, col_values=(('external_ids', {'iface-id': '1b6e9f57-9cda-4f5d-b858-8c0a4d2d498c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:21.642 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.655 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.659 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.677 251996 INFO nova.virt.libvirt.driver [-] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Instance destroyed successfully.
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.677 251996 DEBUG nova.objects.instance [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'resources' on Instance uuid dd85818c-bf82-473d-8650-6b391dbfa300 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.682 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.694 251996 DEBUG nova.virt.libvirt.vif [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1902272375',display_name='tempest-ServerActionsTestOtherA-server-1902272375',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1902272375',id=109,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:27:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-jio410z8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:27:28Z,user_data=None,user_id='baddb65c90da47a58d026b0db966f6c8',uuid=dd85818c-bf82-473d-8650-6b391dbfa300,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "address": "fa:16:3e:b8:67:71", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3edb8e90-65", "ovs_interfaceid": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.696 251996 DEBUG nova.network.os_vif_util [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "address": "fa:16:3e:b8:67:71", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3edb8e90-65", "ovs_interfaceid": "3edb8e90-653f-4c6f-9f6e-90dfc0fdb014", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.697 251996 DEBUG nova.network.os_vif_util [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:67:71,bridge_name='br-int',has_traffic_filtering=True,id=3edb8e90-653f-4c6f-9f6e-90dfc0fdb014,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3edb8e90-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.697 251996 DEBUG os_vif [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:67:71,bridge_name='br-int',has_traffic_filtering=True,id=3edb8e90-653f-4c6f-9f6e-90dfc0fdb014,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3edb8e90-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.702 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.703 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3edb8e90-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:21.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.706 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.707 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.709 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.711 251996 INFO os_vif [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:67:71,bridge_name='br-int',has_traffic_filtering=True,id=3edb8e90-653f-4c6f-9f6e-90dfc0fdb014,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3edb8e90-65')
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.836 251996 DEBUG nova.compute.manager [req-2507518f-eb5a-4b17-8c0c-963961026332 req-9907209a-f1aa-4b9b-aac5-01f8f37a6b87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Received event network-vif-unplugged-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.836 251996 DEBUG oslo_concurrency.lockutils [req-2507518f-eb5a-4b17-8c0c-963961026332 req-9907209a-f1aa-4b9b-aac5-01f8f37a6b87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.836 251996 DEBUG oslo_concurrency.lockutils [req-2507518f-eb5a-4b17-8c0c-963961026332 req-9907209a-f1aa-4b9b-aac5-01f8f37a6b87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.837 251996 DEBUG oslo_concurrency.lockutils [req-2507518f-eb5a-4b17-8c0c-963961026332 req-9907209a-f1aa-4b9b-aac5-01f8f37a6b87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.837 251996 DEBUG nova.compute.manager [req-2507518f-eb5a-4b17-8c0c-963961026332 req-9907209a-f1aa-4b9b-aac5-01f8f37a6b87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] No waiting events found dispatching network-vif-unplugged-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:30:21 compute-0 nova_compute[251992]: 2025-12-06 07:30:21.837 251996 DEBUG nova.compute.manager [req-2507518f-eb5a-4b17-8c0c-963961026332 req-9907209a-f1aa-4b9b-aac5-01f8f37a6b87 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Received event network-vif-unplugged-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:30:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 327 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 13 KiB/s wr, 57 op/s
Dec 06 07:30:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/872411589' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:30:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/872411589' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:30:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2043406885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1557114716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:22 compute-0 nova_compute[251992]: 2025-12-06 07:30:22.608 251996 INFO nova.virt.libvirt.driver [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Deleting instance files /var/lib/nova/instances/dd85818c-bf82-473d-8650-6b391dbfa300_del
Dec 06 07:30:22 compute-0 nova_compute[251992]: 2025-12-06 07:30:22.609 251996 INFO nova.virt.libvirt.driver [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Deletion of /var/lib/nova/instances/dd85818c-bf82-473d-8650-6b391dbfa300_del complete
Dec 06 07:30:22 compute-0 nova_compute[251992]: 2025-12-06 07:30:22.658 251996 INFO nova.compute.manager [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Took 1.22 seconds to destroy the instance on the hypervisor.
Dec 06 07:30:22 compute-0 nova_compute[251992]: 2025-12-06 07:30:22.658 251996 DEBUG oslo.service.loopingcall [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:30:22 compute-0 nova_compute[251992]: 2025-12-06 07:30:22.658 251996 DEBUG nova.compute.manager [-] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:30:22 compute-0 nova_compute[251992]: 2025-12-06 07:30:22.659 251996 DEBUG nova.network.neutron [-] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:30:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:23.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:23 compute-0 ceph-mon[74339]: pgmap v2190: 305 pgs: 305 active+clean; 327 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 13 KiB/s wr, 57 op/s
Dec 06 07:30:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:30:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:30:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:30:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:30:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:30:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:23.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:23 compute-0 nova_compute[251992]: 2025-12-06 07:30:23.906 251996 DEBUG nova.network.neutron [-] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:30:23 compute-0 nova_compute[251992]: 2025-12-06 07:30:23.938 251996 INFO nova.compute.manager [-] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Took 1.28 seconds to deallocate network for instance.
Dec 06 07:30:23 compute-0 nova_compute[251992]: 2025-12-06 07:30:23.989 251996 DEBUG nova.compute.manager [req-8e4c3060-e414-4d42-9ff4-5fafa725cf92 req-4f711a01-a91e-497c-b649-b051fed5ad4f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Received event network-vif-plugged-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:23 compute-0 nova_compute[251992]: 2025-12-06 07:30:23.990 251996 DEBUG oslo_concurrency.lockutils [req-8e4c3060-e414-4d42-9ff4-5fafa725cf92 req-4f711a01-a91e-497c-b649-b051fed5ad4f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:23 compute-0 nova_compute[251992]: 2025-12-06 07:30:23.990 251996 DEBUG oslo_concurrency.lockutils [req-8e4c3060-e414-4d42-9ff4-5fafa725cf92 req-4f711a01-a91e-497c-b649-b051fed5ad4f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:23 compute-0 nova_compute[251992]: 2025-12-06 07:30:23.990 251996 DEBUG oslo_concurrency.lockutils [req-8e4c3060-e414-4d42-9ff4-5fafa725cf92 req-4f711a01-a91e-497c-b649-b051fed5ad4f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:23 compute-0 nova_compute[251992]: 2025-12-06 07:30:23.991 251996 DEBUG nova.compute.manager [req-8e4c3060-e414-4d42-9ff4-5fafa725cf92 req-4f711a01-a91e-497c-b649-b051fed5ad4f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] No waiting events found dispatching network-vif-plugged-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:30:23 compute-0 nova_compute[251992]: 2025-12-06 07:30:23.991 251996 WARNING nova.compute.manager [req-8e4c3060-e414-4d42-9ff4-5fafa725cf92 req-4f711a01-a91e-497c-b649-b051fed5ad4f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Received unexpected event network-vif-plugged-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 for instance with vm_state active and task_state deleting.
Dec 06 07:30:23 compute-0 nova_compute[251992]: 2025-12-06 07:30:23.999 251996 DEBUG oslo_concurrency.lockutils [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:24 compute-0 nova_compute[251992]: 2025-12-06 07:30:23.999 251996 DEBUG oslo_concurrency.lockutils [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:24 compute-0 nova_compute[251992]: 2025-12-06 07:30:24.080 251996 DEBUG oslo_concurrency.processutils [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 327 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 682 B/s wr, 20 op/s
Dec 06 07:30:24 compute-0 nova_compute[251992]: 2025-12-06 07:30:24.201 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:24 compute-0 ceph-mon[74339]: pgmap v2191: 305 pgs: 305 active+clean; 327 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 682 B/s wr, 20 op/s
Dec 06 07:30:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:30:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/946517001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:24 compute-0 nova_compute[251992]: 2025-12-06 07:30:24.571 251996 DEBUG oslo_concurrency.processutils [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:24 compute-0 nova_compute[251992]: 2025-12-06 07:30:24.577 251996 DEBUG nova.compute.provider_tree [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:30:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:30:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:30:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:30:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:30:24 compute-0 nova_compute[251992]: 2025-12-06 07:30:24.601 251996 DEBUG nova.scheduler.client.report [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:30:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:30:24 compute-0 nova_compute[251992]: 2025-12-06 07:30:24.619 251996 DEBUG oslo_concurrency.lockutils [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:24 compute-0 nova_compute[251992]: 2025-12-06 07:30:24.645 251996 INFO nova.scheduler.client.report [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Deleted allocations for instance dd85818c-bf82-473d-8650-6b391dbfa300
Dec 06 07:30:24 compute-0 nova_compute[251992]: 2025-12-06 07:30:24.709 251996 DEBUG oslo_concurrency.lockutils [None req-ba95e88f-b586-406a-b0f0-a1f9e577b7d9 baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "dd85818c-bf82-473d-8650-6b391dbfa300" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:24 compute-0 nova_compute[251992]: 2025-12-06 07:30:24.743 251996 DEBUG nova.compute.manager [req-0f58004f-54fa-470f-88ce-ce5335995cef req-c812a80e-a35d-4aab-b558-b99f541f09be 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Received event network-vif-deleted-3edb8e90-653f-4c6f-9f6e-90dfc0fdb014 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:24 compute-0 nova_compute[251992]: 2025-12-06 07:30:24.799 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:25.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/946517001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:25.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004600003780476456 of space, bias 1.0, pg target 1.3800011341429368 quantized to 32 (current 32)
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003154698550623488 of space, bias 1.0, pg target 0.9432548666364229 quantized to 32 (current 32)
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:30:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:30:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 213 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Dec 06 07:30:26 compute-0 ceph-mon[74339]: pgmap v2192: 305 pgs: 305 active+clean; 213 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Dec 06 07:30:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2767874342' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:30:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2767874342' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:30:26 compute-0 nova_compute[251992]: 2025-12-06 07:30:26.705 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:27.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.567 251996 DEBUG oslo_concurrency.lockutils [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "00f56c62-f327-41e3-a105-24f56ae124c0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.567 251996 DEBUG oslo_concurrency.lockutils [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.568 251996 DEBUG oslo_concurrency.lockutils [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.568 251996 DEBUG oslo_concurrency.lockutils [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.568 251996 DEBUG oslo_concurrency.lockutils [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.569 251996 INFO nova.compute.manager [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Terminating instance
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.570 251996 DEBUG nova.compute.manager [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:30:27 compute-0 kernel: tapc1e1aa30-1f (unregistering): left promiscuous mode
Dec 06 07:30:27 compute-0 NetworkManager[48965]: <info>  [1765006227.7116] device (tapc1e1aa30-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:30:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:30:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:27.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:30:27 compute-0 ovn_controller[147168]: 2025-12-06T07:30:27Z|00414|binding|INFO|Releasing lport c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e from this chassis (sb_readonly=0)
Dec 06 07:30:27 compute-0 ovn_controller[147168]: 2025-12-06T07:30:27Z|00415|binding|INFO|Setting lport c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e down in Southbound
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.718 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:27 compute-0 ovn_controller[147168]: 2025-12-06T07:30:27Z|00416|binding|INFO|Removing iface tapc1e1aa30-1f ovn-installed in OVS
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.719 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:27.726 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:82:3f 10.100.0.6'], port_security=['fa:16:3e:4f:82:3f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '00f56c62-f327-41e3-a105-24f56ae124c0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '001e2256cb8b430d93c1ff613010d199', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56e13d32-a2bf-49aa-a4ac-9182c3684195', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f021186b-c663-4a37-b593-75e967e588a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:30:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:27.728 158118 INFO neutron.agent.ovn.metadata.agent [-] Port c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e in datapath f6209aab-d53f-4d58-9b94-ffb7adc6239e unbound from our chassis
Dec 06 07:30:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:27.729 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6209aab-d53f-4d58-9b94-ffb7adc6239e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:30:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:27.731 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8ce7ce-27ef-480d-824e-ccaf7e67e7bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:27.731 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e namespace which is not needed anymore
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.736 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:27 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000060.scope: Deactivated successfully.
Dec 06 07:30:27 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000060.scope: Consumed 37.486s CPU time.
Dec 06 07:30:27 compute-0 systemd-machined[212986]: Machine qemu-44-instance-00000060 terminated.
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.804 251996 INFO nova.virt.libvirt.driver [-] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Instance destroyed successfully.
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.805 251996 DEBUG nova.objects.instance [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lazy-loading 'resources' on Instance uuid 00f56c62-f327-41e3-a105-24f56ae124c0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.829 251996 DEBUG nova.virt.libvirt.vif [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:22:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-831125912',display_name='tempest-ServerActionsTestOtherA-server-831125912',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-831125912',id=96,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG2zgDxxtT0nLqH8UsyYi0lN8OWWrrFEA5pyLz04zJISRImczknO8hVkmNR6jGCiWeaXsQGs+JSkIuJDu8PO8wxSR1MWFJiUPcyPRnxYT8pR/R9bXgGDk3j+Ho5fOrAeLw==',key_name='tempest-keypair-402640413',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:22:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='001e2256cb8b430d93c1ff613010d199',ramdisk_id='',reservation_id='r-w55x9tcn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1949739102',owner_user_name='tempest-ServerActionsTestOtherA-1949739102-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:22:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='baddb65c90da47a58d026b0db966f6c8',uuid=00f56c62-f327-41e3-a105-24f56ae124c0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.830 251996 DEBUG nova.network.os_vif_util [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converting VIF {"id": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "address": "fa:16:3e:4f:82:3f", "network": {"id": "f6209aab-d53f-4d58-9b94-ffb7adc6239e", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-1643604044-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "001e2256cb8b430d93c1ff613010d199", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1e1aa30-1f", "ovs_interfaceid": "c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.831 251996 DEBUG nova.network.os_vif_util [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:82:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e1aa30-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.831 251996 DEBUG os_vif [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:82:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e1aa30-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.833 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.834 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1e1aa30-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.835 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.837 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.840 251996 INFO os_vif [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:82:3f,bridge_name='br-int',has_traffic_filtering=True,id=c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e,network=Network(f6209aab-d53f-4d58-9b94-ffb7adc6239e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1e1aa30-1f')
Dec 06 07:30:27 compute-0 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[312153]: [NOTICE]   (312157) : haproxy version is 2.8.14-c23fe91
Dec 06 07:30:27 compute-0 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[312153]: [NOTICE]   (312157) : path to executable is /usr/sbin/haproxy
Dec 06 07:30:27 compute-0 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[312153]: [WARNING]  (312157) : Exiting Master process...
Dec 06 07:30:27 compute-0 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[312153]: [ALERT]    (312157) : Current worker (312159) exited with code 143 (Terminated)
Dec 06 07:30:27 compute-0 neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e[312153]: [WARNING]  (312157) : All workers exited. Exiting... (0)
Dec 06 07:30:27 compute-0 systemd[1]: libpod-85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad.scope: Deactivated successfully.
Dec 06 07:30:27 compute-0 podman[325457]: 2025-12-06 07:30:27.876588293 +0000 UTC m=+0.047363899 container died 85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:30:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad-userdata-shm.mount: Deactivated successfully.
Dec 06 07:30:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-da43c18ad159212d4775829917d9e471afab2659cc051bb2cad42e231f6415da-merged.mount: Deactivated successfully.
Dec 06 07:30:27 compute-0 podman[325457]: 2025-12-06 07:30:27.913134949 +0000 UTC m=+0.083910555 container cleanup 85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:30:27 compute-0 systemd[1]: libpod-conmon-85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad.scope: Deactivated successfully.
Dec 06 07:30:27 compute-0 podman[325505]: 2025-12-06 07:30:27.964393974 +0000 UTC m=+0.033145696 container remove 85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:30:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:27.970 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab41364-84fd-4b94-ac60-9c5a4fe5ec0a]: (4, ('Sat Dec  6 07:30:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e (85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad)\n85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad\nSat Dec  6 07:30:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e (85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad)\n85037b2e63b57c2783f753f7b445c37d1fe20591949ad1639faf06de7cb0d2ad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:27.972 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[15e91ddd-58b3-4e47-a4d9-d120a371d1a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:27.973 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6209aab-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.975 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:27 compute-0 kernel: tapf6209aab-d0: left promiscuous mode
Dec 06 07:30:27 compute-0 nova_compute[251992]: 2025-12-06 07:30:27.988 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:27.991 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e8cbf406-851c-4a73-a9c5-a36a0f28904c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:28.001 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c01773ae-13db-4c4c-a6c6-6555a50a9282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:28.002 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4bba612c-97d7-4c11-b1e9-c7dbef799be5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:28.019 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d5907eb2-05c1-4551-8f5e-e997d8d1a4db]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 605062, 'reachable_time': 42752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325520, 'error': None, 'target': 'ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:28 compute-0 systemd[1]: run-netns-ovnmeta\x2df6209aab\x2dd53f\x2d4d58\x2d9b94\x2dffb7adc6239e.mount: Deactivated successfully.
Dec 06 07:30:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:28.022 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6209aab-d53f-4d58-9b94-ffb7adc6239e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:30:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:28.023 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[95183ca8-9ae0-4065-841a-bd65c9e619a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 213 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Dec 06 07:30:28 compute-0 ceph-mon[74339]: pgmap v2193: 305 pgs: 305 active+clean; 213 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Dec 06 07:30:28 compute-0 nova_compute[251992]: 2025-12-06 07:30:28.383 251996 DEBUG nova.compute.manager [req-ba076c39-4f68-405f-9b2b-6a6909611bd8 req-47dd4bef-ab5f-4e92-84c2-695ddb7c7b1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Received event network-vif-unplugged-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:28 compute-0 nova_compute[251992]: 2025-12-06 07:30:28.383 251996 DEBUG oslo_concurrency.lockutils [req-ba076c39-4f68-405f-9b2b-6a6909611bd8 req-47dd4bef-ab5f-4e92-84c2-695ddb7c7b1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:28 compute-0 nova_compute[251992]: 2025-12-06 07:30:28.384 251996 DEBUG oslo_concurrency.lockutils [req-ba076c39-4f68-405f-9b2b-6a6909611bd8 req-47dd4bef-ab5f-4e92-84c2-695ddb7c7b1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:28 compute-0 nova_compute[251992]: 2025-12-06 07:30:28.384 251996 DEBUG oslo_concurrency.lockutils [req-ba076c39-4f68-405f-9b2b-6a6909611bd8 req-47dd4bef-ab5f-4e92-84c2-695ddb7c7b1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:28 compute-0 nova_compute[251992]: 2025-12-06 07:30:28.384 251996 DEBUG nova.compute.manager [req-ba076c39-4f68-405f-9b2b-6a6909611bd8 req-47dd4bef-ab5f-4e92-84c2-695ddb7c7b1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] No waiting events found dispatching network-vif-unplugged-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:30:28 compute-0 nova_compute[251992]: 2025-12-06 07:30:28.384 251996 DEBUG nova.compute.manager [req-ba076c39-4f68-405f-9b2b-6a6909611bd8 req-47dd4bef-ab5f-4e92-84c2-695ddb7c7b1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Received event network-vif-unplugged-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:30:28 compute-0 sudo[325521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:28 compute-0 sudo[325521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:28 compute-0 sudo[325521]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:28 compute-0 sudo[325546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:28 compute-0 sudo[325546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:28 compute-0 sudo[325546]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:29 compute-0 nova_compute[251992]: 2025-12-06 07:30:29.204 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:29.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:30:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:29.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:30:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 213 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Dec 06 07:30:30 compute-0 ceph-mon[74339]: pgmap v2194: 305 pgs: 305 active+clean; 213 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Dec 06 07:30:30 compute-0 nova_compute[251992]: 2025-12-06 07:30:30.520 251996 DEBUG nova.compute.manager [req-bb6b788f-552e-4e60-9af9-02ac2f59d31c req-477f4d4b-2772-4171-9cfc-e1712018c2cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Received event network-vif-plugged-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:30 compute-0 nova_compute[251992]: 2025-12-06 07:30:30.521 251996 DEBUG oslo_concurrency.lockutils [req-bb6b788f-552e-4e60-9af9-02ac2f59d31c req-477f4d4b-2772-4171-9cfc-e1712018c2cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:30 compute-0 nova_compute[251992]: 2025-12-06 07:30:30.521 251996 DEBUG oslo_concurrency.lockutils [req-bb6b788f-552e-4e60-9af9-02ac2f59d31c req-477f4d4b-2772-4171-9cfc-e1712018c2cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:30 compute-0 nova_compute[251992]: 2025-12-06 07:30:30.521 251996 DEBUG oslo_concurrency.lockutils [req-bb6b788f-552e-4e60-9af9-02ac2f59d31c req-477f4d4b-2772-4171-9cfc-e1712018c2cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:30 compute-0 nova_compute[251992]: 2025-12-06 07:30:30.521 251996 DEBUG nova.compute.manager [req-bb6b788f-552e-4e60-9af9-02ac2f59d31c req-477f4d4b-2772-4171-9cfc-e1712018c2cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] No waiting events found dispatching network-vif-plugged-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:30:30 compute-0 nova_compute[251992]: 2025-12-06 07:30:30.521 251996 WARNING nova.compute.manager [req-bb6b788f-552e-4e60-9af9-02ac2f59d31c req-477f4d4b-2772-4171-9cfc-e1712018c2cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Received unexpected event network-vif-plugged-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e for instance with vm_state active and task_state deleting.
Dec 06 07:30:30 compute-0 nova_compute[251992]: 2025-12-06 07:30:30.632 251996 INFO nova.virt.libvirt.driver [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Deleting instance files /var/lib/nova/instances/00f56c62-f327-41e3-a105-24f56ae124c0_del
Dec 06 07:30:30 compute-0 nova_compute[251992]: 2025-12-06 07:30:30.633 251996 INFO nova.virt.libvirt.driver [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Deletion of /var/lib/nova/instances/00f56c62-f327-41e3-a105-24f56ae124c0_del complete
Dec 06 07:30:30 compute-0 nova_compute[251992]: 2025-12-06 07:30:30.684 251996 INFO nova.compute.manager [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Took 3.11 seconds to destroy the instance on the hypervisor.
Dec 06 07:30:30 compute-0 nova_compute[251992]: 2025-12-06 07:30:30.685 251996 DEBUG oslo.service.loopingcall [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:30:30 compute-0 nova_compute[251992]: 2025-12-06 07:30:30.685 251996 DEBUG nova.compute.manager [-] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:30:30 compute-0 nova_compute[251992]: 2025-12-06 07:30:30.685 251996 DEBUG nova.network.neutron [-] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:30:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:31.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:31 compute-0 podman[325574]: 2025-12-06 07:30:31.450944667 +0000 UTC m=+0.106395613 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:30:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:31.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 177 op/s
Dec 06 07:30:32 compute-0 ceph-mon[74339]: pgmap v2195: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 177 op/s
Dec 06 07:30:32 compute-0 nova_compute[251992]: 2025-12-06 07:30:32.837 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:30:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:33.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:30:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:33.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Dec 06 07:30:34 compute-0 nova_compute[251992]: 2025-12-06 07:30:34.206 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:34 compute-0 ceph-mon[74339]: pgmap v2196: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Dec 06 07:30:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:35.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:35 compute-0 nova_compute[251992]: 2025-12-06 07:30:35.406 251996 DEBUG nova.network.neutron [-] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:30:35 compute-0 nova_compute[251992]: 2025-12-06 07:30:35.428 251996 INFO nova.compute.manager [-] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Took 4.74 seconds to deallocate network for instance.
Dec 06 07:30:35 compute-0 nova_compute[251992]: 2025-12-06 07:30:35.507 251996 DEBUG oslo_concurrency.lockutils [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:35 compute-0 nova_compute[251992]: 2025-12-06 07:30:35.507 251996 DEBUG oslo_concurrency.lockutils [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:35 compute-0 nova_compute[251992]: 2025-12-06 07:30:35.552 251996 DEBUG nova.compute.manager [req-ba4e1dc6-e607-4ae0-b2a1-d58e1bcb778c req-7b4994e2-aec2-4458-a9f3-9af90e55a7d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Received event network-vif-deleted-c1e1aa30-1fdd-4de1-9c91-3c4a358dc57e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:35 compute-0 nova_compute[251992]: 2025-12-06 07:30:35.616 251996 DEBUG oslo_concurrency.processutils [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:35.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:30:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228057039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:36 compute-0 nova_compute[251992]: 2025-12-06 07:30:36.093 251996 DEBUG oslo_concurrency.processutils [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:36 compute-0 nova_compute[251992]: 2025-12-06 07:30:36.099 251996 DEBUG nova.compute.provider_tree [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:30:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Dec 06 07:30:36 compute-0 nova_compute[251992]: 2025-12-06 07:30:36.115 251996 DEBUG nova.scheduler.client.report [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:30:36 compute-0 nova_compute[251992]: 2025-12-06 07:30:36.135 251996 DEBUG oslo_concurrency.lockutils [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:36 compute-0 nova_compute[251992]: 2025-12-06 07:30:36.163 251996 INFO nova.scheduler.client.report [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Deleted allocations for instance 00f56c62-f327-41e3-a105-24f56ae124c0
Dec 06 07:30:36 compute-0 nova_compute[251992]: 2025-12-06 07:30:36.245 251996 DEBUG oslo_concurrency.lockutils [None req-bacdc478-02eb-4dc5-a1c1-40ca80e007df baddb65c90da47a58d026b0db966f6c8 001e2256cb8b430d93c1ff613010d199 - - default default] Lock "00f56c62-f327-41e3-a105-24f56ae124c0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:36 compute-0 nova_compute[251992]: 2025-12-06 07:30:36.624 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:36 compute-0 nova_compute[251992]: 2025-12-06 07:30:36.675 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006221.6738203, dd85818c-bf82-473d-8650-6b391dbfa300 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:30:36 compute-0 nova_compute[251992]: 2025-12-06 07:30:36.675 251996 INFO nova.compute.manager [-] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] VM Stopped (Lifecycle Event)
Dec 06 07:30:36 compute-0 nova_compute[251992]: 2025-12-06 07:30:36.706 251996 DEBUG nova.compute.manager [None req-a486be07-2149-4f26-87cc-83add2c0afed - - - - - -] [instance: dd85818c-bf82-473d-8650-6b391dbfa300] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:30:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4228057039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:30:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:37.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:30:37 compute-0 podman[325626]: 2025-12-06 07:30:37.41703248 +0000 UTC m=+0.077705109 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 07:30:37 compute-0 podman[325627]: 2025-12-06 07:30:37.425288873 +0000 UTC m=+0.081536712 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:30:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:37.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:37 compute-0 nova_compute[251992]: 2025-12-06 07:30:37.840 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:38 compute-0 ceph-mon[74339]: pgmap v2197: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Dec 06 07:30:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Dec 06 07:30:39 compute-0 ceph-mon[74339]: pgmap v2198: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Dec 06 07:30:39 compute-0 nova_compute[251992]: 2025-12-06 07:30:39.208 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:39.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:39.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Dec 06 07:30:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/428252605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:40 compute-0 ceph-mon[74339]: pgmap v2199: 305 pgs: 305 active+clean; 134 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Dec 06 07:30:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:41.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:41 compute-0 nova_compute[251992]: 2025-12-06 07:30:41.504 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Acquiring lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:41 compute-0 nova_compute[251992]: 2025-12-06 07:30:41.504 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:41 compute-0 nova_compute[251992]: 2025-12-06 07:30:41.533 251996 DEBUG nova.compute.manager [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:30:41 compute-0 nova_compute[251992]: 2025-12-06 07:30:41.650 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:41 compute-0 nova_compute[251992]: 2025-12-06 07:30:41.651 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:41 compute-0 nova_compute[251992]: 2025-12-06 07:30:41.659 251996 DEBUG nova.virt.hardware [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:30:41 compute-0 nova_compute[251992]: 2025-12-06 07:30:41.660 251996 INFO nova.compute.claims [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:30:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:41.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:41 compute-0 nova_compute[251992]: 2025-12-06 07:30:41.769 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 167 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Dec 06 07:30:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:30:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4097164889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.249 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.256 251996 DEBUG nova.compute.provider_tree [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.278 251996 DEBUG nova.scheduler.client.report [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.306 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.307 251996 DEBUG nova.compute.manager [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.474 251996 DEBUG nova.compute.manager [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.475 251996 DEBUG nova.network.neutron [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:30:42 compute-0 ceph-mon[74339]: pgmap v2200: 305 pgs: 305 active+clean; 167 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Dec 06 07:30:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4097164889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.668 251996 DEBUG nova.policy [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c3e74d72d7114a37b1fa3e366712c49e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c7c878bda5244b67976c298fc026bbcd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.694 251996 INFO nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.722 251996 DEBUG nova.compute.manager [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.802 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006227.8012836, 00f56c62-f327-41e3-a105-24f56ae124c0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.802 251996 INFO nova.compute.manager [-] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] VM Stopped (Lifecycle Event)
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.816 251996 DEBUG nova.compute.manager [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.818 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.818 251996 INFO nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Creating image(s)
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.849 251996 DEBUG nova.storage.rbd_utils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] rbd image e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.877 251996 DEBUG nova.storage.rbd_utils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] rbd image e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.905 251996 DEBUG nova.storage.rbd_utils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] rbd image e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.908 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.933 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.935 251996 DEBUG nova.compute.manager [None req-c512324c-99a2-43f4-a13a-63b5739dcc87 - - - - - -] [instance: 00f56c62-f327-41e3-a105-24f56ae124c0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.973 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.973 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.974 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:42 compute-0 nova_compute[251992]: 2025-12-06 07:30:42.974 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:43 compute-0 nova_compute[251992]: 2025-12-06 07:30:43.004 251996 DEBUG nova.storage.rbd_utils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] rbd image e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:30:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:30:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:30:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:30:43 compute-0 nova_compute[251992]: 2025-12-06 07:30:43.009 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:30:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:30:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:43.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:43 compute-0 nova_compute[251992]: 2025-12-06 07:30:43.552 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:30:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:43.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:30:43 compute-0 nova_compute[251992]: 2025-12-06 07:30:43.765 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 167 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 07:30:44 compute-0 nova_compute[251992]: 2025-12-06 07:30:44.210 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:44 compute-0 nova_compute[251992]: 2025-12-06 07:30:44.543 251996 DEBUG nova.network.neutron [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Successfully created port: d704120a-a680-4b8b-9b78-a44a11524971 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:30:44 compute-0 nova_compute[251992]: 2025-12-06 07:30:44.569 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:44 compute-0 nova_compute[251992]: 2025-12-06 07:30:44.806 251996 DEBUG nova.storage.rbd_utils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] resizing rbd image e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:30:44 compute-0 nova_compute[251992]: 2025-12-06 07:30:44.914 251996 DEBUG nova.objects.instance [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lazy-loading 'migration_context' on Instance uuid e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:44 compute-0 nova_compute[251992]: 2025-12-06 07:30:44.931 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:30:44 compute-0 nova_compute[251992]: 2025-12-06 07:30:44.931 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Ensure instance console log exists: /var/lib/nova/instances/e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:30:44 compute-0 nova_compute[251992]: 2025-12-06 07:30:44.932 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:44 compute-0 nova_compute[251992]: 2025-12-06 07:30:44.932 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:44 compute-0 nova_compute[251992]: 2025-12-06 07:30:44.932 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:30:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:45.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:30:45 compute-0 ceph-mon[74339]: pgmap v2201: 305 pgs: 305 active+clean; 167 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 07:30:45 compute-0 nova_compute[251992]: 2025-12-06 07:30:45.590 251996 DEBUG nova.network.neutron [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Successfully updated port: d704120a-a680-4b8b-9b78-a44a11524971 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:30:45 compute-0 nova_compute[251992]: 2025-12-06 07:30:45.608 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Acquiring lock "refresh_cache-e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:30:45 compute-0 nova_compute[251992]: 2025-12-06 07:30:45.608 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Acquired lock "refresh_cache-e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:30:45 compute-0 nova_compute[251992]: 2025-12-06 07:30:45.608 251996 DEBUG nova.network.neutron [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:30:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:30:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:45.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:30:45 compute-0 nova_compute[251992]: 2025-12-06 07:30:45.981 251996 DEBUG nova.compute.manager [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Received event network-changed-d704120a-a680-4b8b-9b78-a44a11524971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:45 compute-0 nova_compute[251992]: 2025-12-06 07:30:45.982 251996 DEBUG nova.compute.manager [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Refreshing instance network info cache due to event network-changed-d704120a-a680-4b8b-9b78-a44a11524971. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:30:45 compute-0 nova_compute[251992]: 2025-12-06 07:30:45.982 251996 DEBUG oslo_concurrency.lockutils [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:30:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 112 op/s
Dec 06 07:30:46 compute-0 nova_compute[251992]: 2025-12-06 07:30:46.372 251996 DEBUG nova.network.neutron [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:30:46 compute-0 ceph-mon[74339]: pgmap v2202: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 112 op/s
Dec 06 07:30:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:47.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.346 251996 DEBUG nova.network.neutron [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Updating instance_info_cache with network_info: [{"id": "d704120a-a680-4b8b-9b78-a44a11524971", "address": "fa:16:3e:aa:ab:a2", "network": {"id": "04397a5a-4db7-4588-ae81-fc28d358d032", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1768382234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7c878bda5244b67976c298fc026bbcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd704120a-a6", "ovs_interfaceid": "d704120a-a680-4b8b-9b78-a44a11524971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.386 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Releasing lock "refresh_cache-e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.387 251996 DEBUG nova.compute.manager [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Instance network_info: |[{"id": "d704120a-a680-4b8b-9b78-a44a11524971", "address": "fa:16:3e:aa:ab:a2", "network": {"id": "04397a5a-4db7-4588-ae81-fc28d358d032", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1768382234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7c878bda5244b67976c298fc026bbcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd704120a-a6", "ovs_interfaceid": "d704120a-a680-4b8b-9b78-a44a11524971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.387 251996 DEBUG oslo_concurrency.lockutils [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.388 251996 DEBUG nova.network.neutron [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Refreshing network info cache for port d704120a-a680-4b8b-9b78-a44a11524971 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.390 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Start _get_guest_xml network_info=[{"id": "d704120a-a680-4b8b-9b78-a44a11524971", "address": "fa:16:3e:aa:ab:a2", "network": {"id": "04397a5a-4db7-4588-ae81-fc28d358d032", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1768382234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7c878bda5244b67976c298fc026bbcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd704120a-a6", "ovs_interfaceid": "d704120a-a680-4b8b-9b78-a44a11524971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.394 251996 WARNING nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.400 251996 DEBUG nova.virt.libvirt.host [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.401 251996 DEBUG nova.virt.libvirt.host [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.405 251996 DEBUG nova.virt.libvirt.host [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.405 251996 DEBUG nova.virt.libvirt.host [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.406 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.407 251996 DEBUG nova.virt.hardware [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.407 251996 DEBUG nova.virt.hardware [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.407 251996 DEBUG nova.virt.hardware [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.407 251996 DEBUG nova.virt.hardware [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.408 251996 DEBUG nova.virt.hardware [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.408 251996 DEBUG nova.virt.hardware [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.408 251996 DEBUG nova.virt.hardware [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.408 251996 DEBUG nova.virt.hardware [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.409 251996 DEBUG nova.virt.hardware [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.409 251996 DEBUG nova.virt.hardware [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.409 251996 DEBUG nova.virt.hardware [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.413 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:47.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:30:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3395693374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.896 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.924 251996 DEBUG nova.storage.rbd_utils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] rbd image e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.928 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:47 compute-0 nova_compute[251992]: 2025-12-06 07:30:47.951 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3395693374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Dec 06 07:30:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:30:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817414365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.404 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.405 251996 DEBUG nova.virt.libvirt.vif [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:30:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1362919657',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1362919657',id=116,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c7c878bda5244b67976c298fc026bbcd',ramdisk_id='',reservation_id='r-1c2xrqq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-1528823652',owner_user_name='tempest-InstanceActionsV221TestJSON-1528823652-project-m
ember'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:42Z,user_data=None,user_id='c3e74d72d7114a37b1fa3e366712c49e',uuid=e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d704120a-a680-4b8b-9b78-a44a11524971", "address": "fa:16:3e:aa:ab:a2", "network": {"id": "04397a5a-4db7-4588-ae81-fc28d358d032", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1768382234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7c878bda5244b67976c298fc026bbcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd704120a-a6", "ovs_interfaceid": "d704120a-a680-4b8b-9b78-a44a11524971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.406 251996 DEBUG nova.network.os_vif_util [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Converting VIF {"id": "d704120a-a680-4b8b-9b78-a44a11524971", "address": "fa:16:3e:aa:ab:a2", "network": {"id": "04397a5a-4db7-4588-ae81-fc28d358d032", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1768382234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7c878bda5244b67976c298fc026bbcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd704120a-a6", "ovs_interfaceid": "d704120a-a680-4b8b-9b78-a44a11524971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.407 251996 DEBUG nova.network.os_vif_util [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ab:a2,bridge_name='br-int',has_traffic_filtering=True,id=d704120a-a680-4b8b-9b78-a44a11524971,network=Network(04397a5a-4db7-4588-ae81-fc28d358d032),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd704120a-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.409 251996 DEBUG nova.objects.instance [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lazy-loading 'pci_devices' on Instance uuid e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.427 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:30:48 compute-0 nova_compute[251992]:   <uuid>e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c</uuid>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   <name>instance-00000074</name>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-1362919657</nova:name>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:30:47</nova:creationTime>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <nova:user uuid="c3e74d72d7114a37b1fa3e366712c49e">tempest-InstanceActionsV221TestJSON-1528823652-project-member</nova:user>
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <nova:project uuid="c7c878bda5244b67976c298fc026bbcd">tempest-InstanceActionsV221TestJSON-1528823652</nova:project>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <nova:port uuid="d704120a-a680-4b8b-9b78-a44a11524971">
Dec 06 07:30:48 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <system>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <entry name="serial">e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c</entry>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <entry name="uuid">e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c</entry>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     </system>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   <os>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   </os>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   <features>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   </features>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk">
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       </source>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk.config">
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       </source>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:30:48 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:aa:ab:a2"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <target dev="tapd704120a-a6"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c/console.log" append="off"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <video>
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     </video>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:30:48 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:30:48 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:30:48 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:30:48 compute-0 nova_compute[251992]: </domain>
Dec 06 07:30:48 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.429 251996 DEBUG nova.compute.manager [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Preparing to wait for external event network-vif-plugged-d704120a-a680-4b8b-9b78-a44a11524971 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.429 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Acquiring lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.429 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.430 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.430 251996 DEBUG nova.virt.libvirt.vif [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:30:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1362919657',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1362919657',id=116,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c7c878bda5244b67976c298fc026bbcd',ramdisk_id='',reservation_id='r-1c2xrqq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-1528823652',owner_user_name='tempest-InstanceActionsV221TestJSON-1528823652-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:42Z,user_data=None,user_id='c3e74d72d7114a37b1fa3e366712c49e',uuid=e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d704120a-a680-4b8b-9b78-a44a11524971", "address": "fa:16:3e:aa:ab:a2", "network": {"id": "04397a5a-4db7-4588-ae81-fc28d358d032", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1768382234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7c878bda5244b67976c298fc026bbcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd704120a-a6", "ovs_interfaceid": "d704120a-a680-4b8b-9b78-a44a11524971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.431 251996 DEBUG nova.network.os_vif_util [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Converting VIF {"id": "d704120a-a680-4b8b-9b78-a44a11524971", "address": "fa:16:3e:aa:ab:a2", "network": {"id": "04397a5a-4db7-4588-ae81-fc28d358d032", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1768382234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7c878bda5244b67976c298fc026bbcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd704120a-a6", "ovs_interfaceid": "d704120a-a680-4b8b-9b78-a44a11524971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.431 251996 DEBUG nova.network.os_vif_util [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ab:a2,bridge_name='br-int',has_traffic_filtering=True,id=d704120a-a680-4b8b-9b78-a44a11524971,network=Network(04397a5a-4db7-4588-ae81-fc28d358d032),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd704120a-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.432 251996 DEBUG os_vif [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ab:a2,bridge_name='br-int',has_traffic_filtering=True,id=d704120a-a680-4b8b-9b78-a44a11524971,network=Network(04397a5a-4db7-4588-ae81-fc28d358d032),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd704120a-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.433 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.433 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.433 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.437 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.438 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd704120a-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.438 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd704120a-a6, col_values=(('external_ids', {'iface-id': 'd704120a-a680-4b8b-9b78-a44a11524971', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:ab:a2', 'vm-uuid': 'e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.439 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:48 compute-0 NetworkManager[48965]: <info>  [1765006248.4407] manager: (tapd704120a-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.442 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.446 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.446 251996 INFO os_vif [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ab:a2,bridge_name='br-int',has_traffic_filtering=True,id=d704120a-a680-4b8b-9b78-a44a11524971,network=Network(04397a5a-4db7-4588-ae81-fc28d358d032),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd704120a-a6')
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.518 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.518 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.519 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] No VIF found with MAC fa:16:3e:aa:ab:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.520 251996 INFO nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Using config drive
Dec 06 07:30:48 compute-0 sudo[325924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:48 compute-0 sudo[325924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:48 compute-0 sudo[325924]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.550 251996 DEBUG nova.storage.rbd_utils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] rbd image e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:48 compute-0 sudo[325965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:48 compute-0 sudo[325965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:48 compute-0 sudo[325965]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.952 251996 INFO nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Creating config drive at /var/lib/nova/instances/e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c/disk.config
Dec 06 07:30:48 compute-0 nova_compute[251992]: 2025-12-06 07:30:48.957 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzpuwn5cx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:49 compute-0 nova_compute[251992]: 2025-12-06 07:30:49.034 251996 DEBUG nova.network.neutron [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Updated VIF entry in instance network info cache for port d704120a-a680-4b8b-9b78-a44a11524971. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:30:49 compute-0 nova_compute[251992]: 2025-12-06 07:30:49.035 251996 DEBUG nova.network.neutron [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Updating instance_info_cache with network_info: [{"id": "d704120a-a680-4b8b-9b78-a44a11524971", "address": "fa:16:3e:aa:ab:a2", "network": {"id": "04397a5a-4db7-4588-ae81-fc28d358d032", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1768382234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7c878bda5244b67976c298fc026bbcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd704120a-a6", "ovs_interfaceid": "d704120a-a680-4b8b-9b78-a44a11524971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:30:49 compute-0 nova_compute[251992]: 2025-12-06 07:30:49.054 251996 DEBUG oslo_concurrency.lockutils [req-367da855-e291-4f8d-9dd8-4621e921dabb req-cddd5c79-4748-4e7f-97db-57dfdd445457 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:30:49 compute-0 nova_compute[251992]: 2025-12-06 07:30:49.090 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzpuwn5cx" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:49.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:49 compute-0 nova_compute[251992]: 2025-12-06 07:30:49.397 251996 DEBUG nova.storage.rbd_utils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] rbd image e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:49 compute-0 nova_compute[251992]: 2025-12-06 07:30:49.401 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c/disk.config e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:49 compute-0 nova_compute[251992]: 2025-12-06 07:30:49.427 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:49.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:50 compute-0 ceph-mon[74339]: pgmap v2203: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Dec 06 07:30:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1817414365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:30:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 133 op/s
Dec 06 07:30:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:51.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:30:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:51.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:30:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 174 op/s
Dec 06 07:30:52 compute-0 ceph-mon[74339]: pgmap v2204: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 133 op/s
Dec 06 07:30:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:53.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:53 compute-0 nova_compute[251992]: 2025-12-06 07:30:53.440 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:53 compute-0 ceph-mon[74339]: pgmap v2205: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 174 op/s
Dec 06 07:30:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Dec 06 07:30:54 compute-0 nova_compute[251992]: 2025-12-06 07:30:54.214 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:54 compute-0 nova_compute[251992]: 2025-12-06 07:30:54.377 251996 DEBUG oslo_concurrency.processutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c/disk.config e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.975s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:54 compute-0 nova_compute[251992]: 2025-12-06 07:30:54.377 251996 INFO nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Deleting local config drive /var/lib/nova/instances/e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c/disk.config because it was imported into RBD.
Dec 06 07:30:54 compute-0 kernel: tapd704120a-a6: entered promiscuous mode
Dec 06 07:30:54 compute-0 ovn_controller[147168]: 2025-12-06T07:30:54Z|00417|binding|INFO|Claiming lport d704120a-a680-4b8b-9b78-a44a11524971 for this chassis.
Dec 06 07:30:54 compute-0 nova_compute[251992]: 2025-12-06 07:30:54.447 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:54 compute-0 ovn_controller[147168]: 2025-12-06T07:30:54Z|00418|binding|INFO|d704120a-a680-4b8b-9b78-a44a11524971: Claiming fa:16:3e:aa:ab:a2 10.100.0.12
Dec 06 07:30:54 compute-0 NetworkManager[48965]: <info>  [1765006254.4515] manager: (tapd704120a-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Dec 06 07:30:54 compute-0 nova_compute[251992]: 2025-12-06 07:30:54.456 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.472 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ab:a2 10.100.0.12'], port_security=['fa:16:3e:aa:ab:a2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04397a5a-4db7-4588-ae81-fc28d358d032', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7c878bda5244b67976c298fc026bbcd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6c823a00-750b-4bc6-9b4e-05438bbc018c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c3eb8a9-c520-4585-9c13-247bc4f15644, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=d704120a-a680-4b8b-9b78-a44a11524971) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.475 158118 INFO neutron.agent.ovn.metadata.agent [-] Port d704120a-a680-4b8b-9b78-a44a11524971 in datapath 04397a5a-4db7-4588-ae81-fc28d358d032 bound to our chassis
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.480 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 04397a5a-4db7-4588-ae81-fc28d358d032
Dec 06 07:30:54 compute-0 systemd-udevd[326048]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.494 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c38620b9-d193-4542-8db4-61679a7da2de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.498 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap04397a5a-41 in ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.502 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap04397a5a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.502 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4aeb8e-6c94-4837-a9f6-15e4c01723c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.504 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2b7d45-406d-495c-8935-a37069123986]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 systemd-machined[212986]: New machine qemu-52-instance-00000074.
Dec 06 07:30:54 compute-0 NetworkManager[48965]: <info>  [1765006254.5180] device (tapd704120a-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:30:54 compute-0 NetworkManager[48965]: <info>  [1765006254.5188] device (tapd704120a-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.518 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[0abd46a4-7586-40e3-a638-4e23dbbd3a8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 nova_compute[251992]: 2025-12-06 07:30:54.529 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:54 compute-0 nova_compute[251992]: 2025-12-06 07:30:54.533 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:54 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-00000074.
Dec 06 07:30:54 compute-0 ovn_controller[147168]: 2025-12-06T07:30:54Z|00419|binding|INFO|Setting lport d704120a-a680-4b8b-9b78-a44a11524971 ovn-installed in OVS
Dec 06 07:30:54 compute-0 ovn_controller[147168]: 2025-12-06T07:30:54Z|00420|binding|INFO|Setting lport d704120a-a680-4b8b-9b78-a44a11524971 up in Southbound
Dec 06 07:30:54 compute-0 nova_compute[251992]: 2025-12-06 07:30:54.537 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.536 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfefd74-dad2-49ff-9f1f-b5e081b5fbd6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.579 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[45602b6f-040b-4c5b-9766-c09d09e8376d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 NetworkManager[48965]: <info>  [1765006254.5909] manager: (tap04397a5a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/207)
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.589 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6c88d740-fd8f-4502-956d-7a7203b33ada]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.621 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c32da4-ca37-4fb1-9cd0-39fa48cbb378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.625 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[bdee783d-a19e-49c7-a2d4-1d127c325c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 NetworkManager[48965]: <info>  [1765006254.6553] device (tap04397a5a-40): carrier: link connected
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.660 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[62333ade-7e71-4ab5-b7be-ed2259850e4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.677 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[07161bfa-f024-43ff-96b6-81508abbbb7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap04397a5a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:0e:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652724, 'reachable_time': 25317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326082, 'error': None, 'target': 'ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.696 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fe39df94-5ff0-4769-b63e-6c6b46563498]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:ed4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 652724, 'tstamp': 652724}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326083, 'error': None, 'target': 'ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.713 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e52a6f-2e19-420f-9036-ee10e1a3d138]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap04397a5a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:0e:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652724, 'reachable_time': 25317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326084, 'error': None, 'target': 'ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.750 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[52822e84-4381-4eba-ba3a-e61bdd7ded4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.822 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbcaa58-45c0-406b-a844-60629dfde98e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.824 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap04397a5a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.824 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.824 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap04397a5a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:54 compute-0 nova_compute[251992]: 2025-12-06 07:30:54.826 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:54 compute-0 NetworkManager[48965]: <info>  [1765006254.8273] manager: (tap04397a5a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/208)
Dec 06 07:30:54 compute-0 kernel: tap04397a5a-40: entered promiscuous mode
Dec 06 07:30:54 compute-0 nova_compute[251992]: 2025-12-06 07:30:54.828 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.829 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap04397a5a-40, col_values=(('external_ids', {'iface-id': '5c54b80e-c363-4117-b1e5-2d9c5e886d98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:54 compute-0 nova_compute[251992]: 2025-12-06 07:30:54.830 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:54 compute-0 ovn_controller[147168]: 2025-12-06T07:30:54Z|00421|binding|INFO|Releasing lport 5c54b80e-c363-4117-b1e5-2d9c5e886d98 from this chassis (sb_readonly=0)
Dec 06 07:30:54 compute-0 nova_compute[251992]: 2025-12-06 07:30:54.845 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.846 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/04397a5a-4db7-4588-ae81-fc28d358d032.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/04397a5a-4db7-4588-ae81-fc28d358d032.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.848 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc150e9-2467-497a-b4f9-057f3499fc14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.848 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-04397a5a-4db7-4588-ae81-fc28d358d032
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/04397a5a-4db7-4588-ae81-fc28d358d032.pid.haproxy
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 04397a5a-4db7-4588-ae81-fc28d358d032
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:30:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:54.849 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032', 'env', 'PROCESS_TAG=haproxy-04397a5a-4db7-4588-ae81-fc28d358d032', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/04397a5a-4db7-4588-ae81-fc28d358d032.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:30:54 compute-0 sudo[326101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:54 compute-0 sudo[326101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:54 compute-0 sudo[326101]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:55 compute-0 sudo[326133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:30:55 compute-0 sudo[326133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:55 compute-0 sudo[326133]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:55 compute-0 sudo[326158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:55 compute-0 sudo[326158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:55 compute-0 sudo[326158]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:55 compute-0 sudo[326195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:30:55 compute-0 sudo[326195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:55.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:55 compute-0 podman[326251]: 2025-12-06 07:30:55.219063783 +0000 UTC m=+0.030774792 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:30:55 compute-0 ceph-mon[74339]: pgmap v2206: 305 pgs: 305 active+clean; 213 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.566 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006255.5660968, e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.567 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] VM Started (Lifecycle Event)
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.584 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.590 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006255.568295, e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.590 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] VM Paused (Lifecycle Event)
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.606 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.609 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:30:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.628 251996 DEBUG nova.compute.manager [req-7cafd346-391c-4c1d-9108-b538e0465f07 req-9c22ad80-5b27-4b1e-971f-9ec267752c7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Received event network-vif-plugged-d704120a-a680-4b8b-9b78-a44a11524971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.629 251996 DEBUG oslo_concurrency.lockutils [req-7cafd346-391c-4c1d-9108-b538e0465f07 req-9c22ad80-5b27-4b1e-971f-9ec267752c7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.629 251996 DEBUG oslo_concurrency.lockutils [req-7cafd346-391c-4c1d-9108-b538e0465f07 req-9c22ad80-5b27-4b1e-971f-9ec267752c7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.629 251996 DEBUG oslo_concurrency.lockutils [req-7cafd346-391c-4c1d-9108-b538e0465f07 req-9c22ad80-5b27-4b1e-971f-9ec267752c7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.629 251996 DEBUG nova.compute.manager [req-7cafd346-391c-4c1d-9108-b538e0465f07 req-9c22ad80-5b27-4b1e-971f-9ec267752c7d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Processing event network-vif-plugged-d704120a-a680-4b8b-9b78-a44a11524971 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.633 251996 DEBUG nova.compute.manager [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.635 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.639 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006255.6388013, e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.639 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] VM Resumed (Lifecycle Event)
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.641 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.645 251996 INFO nova.virt.libvirt.driver [-] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Instance spawned successfully.
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.646 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.659 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.668 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.668 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.669 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.669 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.670 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.670 251996 DEBUG nova.virt.libvirt.driver [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.673 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.708 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.733 251996 INFO nova.compute.manager [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Took 12.92 seconds to spawn the instance on the hypervisor.
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.734 251996 DEBUG nova.compute.manager [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:30:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:55.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.849 251996 INFO nova.compute.manager [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Took 14.24 seconds to build instance.
Dec 06 07:30:55 compute-0 nova_compute[251992]: 2025-12-06 07:30:55.874 251996 DEBUG oslo_concurrency.lockutils [None req-ad610092-7468-4414-acff-ca7d887fe536 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:55 compute-0 podman[326251]: 2025-12-06 07:30:55.894329823 +0000 UTC m=+0.706040792 container create f4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:30:56 compute-0 sudo[326195]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:56 compute-0 systemd[1]: Started libpod-conmon-f4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea.scope.
Dec 06 07:30:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:30:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/993e60158888c711d42fb8197caaadcfb44b45cf82fb875496baa32c2911d96c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:30:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 214 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Dec 06 07:30:56 compute-0 podman[326251]: 2025-12-06 07:30:56.211644839 +0000 UTC m=+1.023355828 container init f4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:30:56 compute-0 podman[326251]: 2025-12-06 07:30:56.217025844 +0000 UTC m=+1.028736813 container start f4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:30:56 compute-0 neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032[326304]: [NOTICE]   (326308) : New worker (326310) forked
Dec 06 07:30:56 compute-0 neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032[326304]: [NOTICE]   (326308) : Loading success.
Dec 06 07:30:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:30:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:30:56 compute-0 ceph-mon[74339]: pgmap v2207: 305 pgs: 305 active+clean; 214 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Dec 06 07:30:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:30:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:30:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:30:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:57.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.731 251996 DEBUG nova.compute.manager [req-c2802ff4-c890-4cd5-942d-31f6a99e3396 req-79a907b0-73f1-4570-817e-8ac544bd9804 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Received event network-vif-plugged-d704120a-a680-4b8b-9b78-a44a11524971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.732 251996 DEBUG oslo_concurrency.lockutils [req-c2802ff4-c890-4cd5-942d-31f6a99e3396 req-79a907b0-73f1-4570-817e-8ac544bd9804 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.732 251996 DEBUG oslo_concurrency.lockutils [req-c2802ff4-c890-4cd5-942d-31f6a99e3396 req-79a907b0-73f1-4570-817e-8ac544bd9804 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.732 251996 DEBUG oslo_concurrency.lockutils [req-c2802ff4-c890-4cd5-942d-31f6a99e3396 req-79a907b0-73f1-4570-817e-8ac544bd9804 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.733 251996 DEBUG nova.compute.manager [req-c2802ff4-c890-4cd5-942d-31f6a99e3396 req-79a907b0-73f1-4570-817e-8ac544bd9804 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] No waiting events found dispatching network-vif-plugged-d704120a-a680-4b8b-9b78-a44a11524971 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.733 251996 WARNING nova.compute.manager [req-c2802ff4-c890-4cd5-942d-31f6a99e3396 req-79a907b0-73f1-4570-817e-8ac544bd9804 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Received unexpected event network-vif-plugged-d704120a-a680-4b8b-9b78-a44a11524971 for instance with vm_state active and task_state None.
Dec 06 07:30:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:30:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:30:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:30:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:30:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.746 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Acquiring lock "29672a35-c42f-4bd4-9bdd-be0122c29963" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.746 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:57.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.769 251996 DEBUG nova.compute.manager [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.774 251996 DEBUG oslo_concurrency.lockutils [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Acquiring lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.774 251996 DEBUG oslo_concurrency.lockutils [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.774 251996 DEBUG oslo_concurrency.lockutils [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Acquiring lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.775 251996 DEBUG oslo_concurrency.lockutils [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.775 251996 DEBUG oslo_concurrency.lockutils [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.776 251996 INFO nova.compute.manager [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Terminating instance
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.777 251996 DEBUG nova.compute.manager [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:30:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:30:57 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 49e28828-4f69-4003-96ba-5f0c112951a2 does not exist
Dec 06 07:30:57 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f3e52ef3-2e2b-407e-8203-099c60eb820f does not exist
Dec 06 07:30:57 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0c885cee-46ac-4a61-a38e-f9503ed92596 does not exist
Dec 06 07:30:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:30:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:30:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:30:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:30:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:30:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.833 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.834 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.841 251996 DEBUG nova.virt.hardware [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.842 251996 INFO nova.compute.claims [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:30:57 compute-0 sudo[326320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:57 compute-0 sudo[326320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:57 compute-0 sudo[326320]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:57 compute-0 sudo[326345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:30:57 compute-0 sudo[326345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:57 compute-0 sudo[326345]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:57 compute-0 sudo[326370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:30:57 compute-0 sudo[326370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:57 compute-0 sudo[326370]: pam_unix(sudo:session): session closed for user root
Dec 06 07:30:57 compute-0 nova_compute[251992]: 2025-12-06 07:30:57.971 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:58 compute-0 sudo[326395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:30:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:30:58 compute-0 kernel: tapd704120a-a6 (unregistering): left promiscuous mode
Dec 06 07:30:58 compute-0 sudo[326395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:30:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:30:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:30:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:30:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:30:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:30:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:30:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:30:58 compute-0 NetworkManager[48965]: <info>  [1765006258.0137] device (tapd704120a-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:30:58 compute-0 ovn_controller[147168]: 2025-12-06T07:30:58Z|00422|binding|INFO|Releasing lport d704120a-a680-4b8b-9b78-a44a11524971 from this chassis (sb_readonly=0)
Dec 06 07:30:58 compute-0 ovn_controller[147168]: 2025-12-06T07:30:58Z|00423|binding|INFO|Setting lport d704120a-a680-4b8b-9b78-a44a11524971 down in Southbound
Dec 06 07:30:58 compute-0 ovn_controller[147168]: 2025-12-06T07:30:58Z|00424|binding|INFO|Removing iface tapd704120a-a6 ovn-installed in OVS
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.023 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.032 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:ab:a2 10.100.0.12'], port_security=['fa:16:3e:aa:ab:a2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04397a5a-4db7-4588-ae81-fc28d358d032', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7c878bda5244b67976c298fc026bbcd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6c823a00-750b-4bc6-9b4e-05438bbc018c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5c3eb8a9-c520-4585-9c13-247bc4f15644, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=d704120a-a680-4b8b-9b78-a44a11524971) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.033 158118 INFO neutron.agent.ovn.metadata.agent [-] Port d704120a-a680-4b8b-9b78-a44a11524971 in datapath 04397a5a-4db7-4588-ae81-fc28d358d032 unbound from our chassis
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.035 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 04397a5a-4db7-4588-ae81-fc28d358d032, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.037 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdabccd-e987-4abf-9498-22576f1805cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.037 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032 namespace which is not needed anymore
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.043 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:58 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000074.scope: Deactivated successfully.
Dec 06 07:30:58 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000074.scope: Consumed 2.704s CPU time.
Dec 06 07:30:58 compute-0 systemd-machined[212986]: Machine qemu-52-instance-00000074 terminated.
Dec 06 07:30:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 214 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 87 op/s
Dec 06 07:30:58 compute-0 neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032[326304]: [NOTICE]   (326308) : haproxy version is 2.8.14-c23fe91
Dec 06 07:30:58 compute-0 neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032[326304]: [NOTICE]   (326308) : path to executable is /usr/sbin/haproxy
Dec 06 07:30:58 compute-0 neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032[326304]: [WARNING]  (326308) : Exiting Master process...
Dec 06 07:30:58 compute-0 neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032[326304]: [ALERT]    (326308) : Current worker (326310) exited with code 143 (Terminated)
Dec 06 07:30:58 compute-0 neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032[326304]: [WARNING]  (326308) : All workers exited. Exiting... (0)
Dec 06 07:30:58 compute-0 systemd[1]: libpod-f4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea.scope: Deactivated successfully.
Dec 06 07:30:58 compute-0 podman[326463]: 2025-12-06 07:30:58.176846333 +0000 UTC m=+0.044099072 container died f4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.209 251996 INFO nova.virt.libvirt.driver [-] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Instance destroyed successfully.
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.210 251996 DEBUG nova.objects.instance [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lazy-loading 'resources' on Instance uuid e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:30:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea-userdata-shm.mount: Deactivated successfully.
Dec 06 07:30:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-993e60158888c711d42fb8197caaadcfb44b45cf82fb875496baa32c2911d96c-merged.mount: Deactivated successfully.
Dec 06 07:30:58 compute-0 podman[326463]: 2025-12-06 07:30:58.245399563 +0000 UTC m=+0.112652312 container cleanup f4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 07:30:58 compute-0 systemd[1]: libpod-conmon-f4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea.scope: Deactivated successfully.
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.304 251996 DEBUG nova.virt.libvirt.vif [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1362919657',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1362919657',id=116,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:30:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c7c878bda5244b67976c298fc026bbcd',ramdisk_id='',reservation_id='r-1c2xrqq6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-1528823652',owner_user_name='tempest-InstanceActionsV221TestJSON-1528823652-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:30:55Z,user_data=None,user_id='c3e74d72d7114a37b1fa3e366712c49e',uuid=e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d704120a-a680-4b8b-9b78-a44a11524971", "address": "fa:16:3e:aa:ab:a2", "network": {"id": "04397a5a-4db7-4588-ae81-fc28d358d032", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1768382234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7c878bda5244b67976c298fc026bbcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd704120a-a6", "ovs_interfaceid": "d704120a-a680-4b8b-9b78-a44a11524971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.305 251996 DEBUG nova.network.os_vif_util [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Converting VIF {"id": "d704120a-a680-4b8b-9b78-a44a11524971", "address": "fa:16:3e:aa:ab:a2", "network": {"id": "04397a5a-4db7-4588-ae81-fc28d358d032", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1768382234-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7c878bda5244b67976c298fc026bbcd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd704120a-a6", "ovs_interfaceid": "d704120a-a680-4b8b-9b78-a44a11524971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.306 251996 DEBUG nova.network.os_vif_util [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ab:a2,bridge_name='br-int',has_traffic_filtering=True,id=d704120a-a680-4b8b-9b78-a44a11524971,network=Network(04397a5a-4db7-4588-ae81-fc28d358d032),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd704120a-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.306 251996 DEBUG os_vif [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ab:a2,bridge_name='br-int',has_traffic_filtering=True,id=d704120a-a680-4b8b-9b78-a44a11524971,network=Network(04397a5a-4db7-4588-ae81-fc28d358d032),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd704120a-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.309 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.309 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd704120a-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.311 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.313 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.315 251996 INFO os_vif [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:ab:a2,bridge_name='br-int',has_traffic_filtering=True,id=d704120a-a680-4b8b-9b78-a44a11524971,network=Network(04397a5a-4db7-4588-ae81-fc28d358d032),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd704120a-a6')
Dec 06 07:30:58 compute-0 podman[326527]: 2025-12-06 07:30:58.317649104 +0000 UTC m=+0.045196391 container remove f4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.322 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f21bcbd6-ca1f-4780-b5e2-932d6bd3c5d2]: (4, ('Sat Dec  6 07:30:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032 (f4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea)\nf4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea\nSat Dec  6 07:30:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032 (f4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea)\nf4853a3b318e49aae398ec563805de20accb58df3375b566f6bc3e29fd45fdea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.324 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2d46747a-1140-43f0-a981-7fefcd11f740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.325 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap04397a5a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:30:58 compute-0 kernel: tap04397a5a-40: left promiscuous mode
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.334 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.342 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.345 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d3deb180-fa15-4ef4-8483-32e761ffe285]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.362 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d401b7f8-9e01-4cfa-b12e-bbf8c5c0709a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.364 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3abc084e-b956-4130-bd4d-de5cb86aedcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.382 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[70c13522-5c8d-4d89-b8e4-62be7d212482]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 652716, 'reachable_time': 21893, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326575, 'error': None, 'target': 'ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.385 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-04397a5a-4db7-4588-ae81-fc28d358d032 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:30:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:30:58.385 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[2529fe02-710a-409b-8316-2597826d75aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:30:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d04397a5a\x2d4db7\x2d4588\x2dae81\x2dfc28d358d032.mount: Deactivated successfully.
Dec 06 07:30:58 compute-0 podman[326566]: 2025-12-06 07:30:58.40231908 +0000 UTC m=+0.040264148 container create f636ceaa7443abfcb565686abb0825abf4106d665ab263a091d9d31c964ec611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 07:30:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:30:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1648050588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:58 compute-0 systemd[1]: Started libpod-conmon-f636ceaa7443abfcb565686abb0825abf4106d665ab263a091d9d31c964ec611.scope.
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.455 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.461 251996 DEBUG nova.compute.provider_tree [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:30:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:30:58 compute-0 podman[326566]: 2025-12-06 07:30:58.381924829 +0000 UTC m=+0.019869907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.478 251996 DEBUG nova.scheduler.client.report [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.505 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.506 251996 DEBUG nova.compute.manager [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.712 251996 DEBUG nova.compute.manager [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.713 251996 DEBUG nova.network.neutron [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.729 251996 INFO nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.745 251996 DEBUG nova.compute.manager [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:30:58 compute-0 podman[326566]: 2025-12-06 07:30:58.791484405 +0000 UTC m=+0.429429493 container init f636ceaa7443abfcb565686abb0825abf4106d665ab263a091d9d31c964ec611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:30:58 compute-0 podman[326566]: 2025-12-06 07:30:58.801494716 +0000 UTC m=+0.439439764 container start f636ceaa7443abfcb565686abb0825abf4106d665ab263a091d9d31c964ec611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_euler, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:30:58 compute-0 competent_euler[326591]: 167 167
Dec 06 07:30:58 compute-0 systemd[1]: libpod-f636ceaa7443abfcb565686abb0825abf4106d665ab263a091d9d31c964ec611.scope: Deactivated successfully.
Dec 06 07:30:58 compute-0 conmon[326591]: conmon f636ceaa7443abfcb565 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f636ceaa7443abfcb565686abb0825abf4106d665ab263a091d9d31c964ec611.scope/container/memory.events
Dec 06 07:30:58 compute-0 podman[326566]: 2025-12-06 07:30:58.824291101 +0000 UTC m=+0.462236159 container attach f636ceaa7443abfcb565686abb0825abf4106d665ab263a091d9d31c964ec611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_euler, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:30:58 compute-0 podman[326566]: 2025-12-06 07:30:58.825013971 +0000 UTC m=+0.462959029 container died f636ceaa7443abfcb565686abb0825abf4106d665ab263a091d9d31c964ec611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_euler, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.832 251996 DEBUG nova.compute.manager [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.833 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.834 251996 INFO nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Creating image(s)
Dec 06 07:30:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdfc0d243278a8c51ab441e0b61efb20386ed27737f35b1c90112544f321d8a1-merged.mount: Deactivated successfully.
Dec 06 07:30:58 compute-0 podman[326566]: 2025-12-06 07:30:58.86425988 +0000 UTC m=+0.502204938 container remove f636ceaa7443abfcb565686abb0825abf4106d665ab263a091d9d31c964ec611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_euler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.867 251996 DEBUG nova.storage.rbd_utils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] rbd image 29672a35-c42f-4bd4-9bdd-be0122c29963_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:58 compute-0 systemd[1]: libpod-conmon-f636ceaa7443abfcb565686abb0825abf4106d665ab263a091d9d31c964ec611.scope: Deactivated successfully.
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.930 251996 DEBUG nova.storage.rbd_utils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] rbd image 29672a35-c42f-4bd4-9bdd-be0122c29963_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.962 251996 DEBUG nova.storage.rbd_utils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] rbd image 29672a35-c42f-4bd4-9bdd-be0122c29963_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.965 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:58 compute-0 nova_compute[251992]: 2025-12-06 07:30:58.997 251996 DEBUG nova.policy [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ba221a62b5d5452c80ef6e9223ab018d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f577827abb5458f902bb5d5580b7d69', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:30:59 compute-0 nova_compute[251992]: 2025-12-06 07:30:59.040 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:30:59 compute-0 nova_compute[251992]: 2025-12-06 07:30:59.041 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:30:59 compute-0 nova_compute[251992]: 2025-12-06 07:30:59.042 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:30:59 compute-0 nova_compute[251992]: 2025-12-06 07:30:59.042 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:30:59 compute-0 nova_compute[251992]: 2025-12-06 07:30:59.075 251996 DEBUG nova.storage.rbd_utils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] rbd image 29672a35-c42f-4bd4-9bdd-be0122c29963_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:30:59 compute-0 nova_compute[251992]: 2025-12-06 07:30:59.079 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 29672a35-c42f-4bd4-9bdd-be0122c29963_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:30:59 compute-0 podman[326669]: 2025-12-06 07:30:59.055266046 +0000 UTC m=+0.033445343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:30:59 compute-0 nova_compute[251992]: 2025-12-06 07:30:59.217 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:30:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:30:59.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:30:59 compute-0 ceph-mon[74339]: pgmap v2208: 305 pgs: 305 active+clean; 214 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 87 op/s
Dec 06 07:30:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1648050588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:30:59 compute-0 podman[326669]: 2025-12-06 07:30:59.637861165 +0000 UTC m=+0.616040432 container create de6b750f4f495d839576954d53df7a108993d9065648ee95d76ec95378c957a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_curie, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:30:59 compute-0 systemd[1]: Started libpod-conmon-de6b750f4f495d839576954d53df7a108993d9065648ee95d76ec95378c957a5.scope.
Dec 06 07:30:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038a482b460fd729821bf832ead20113552c2694bcb43caa9d9386780c30cb02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038a482b460fd729821bf832ead20113552c2694bcb43caa9d9386780c30cb02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038a482b460fd729821bf832ead20113552c2694bcb43caa9d9386780c30cb02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038a482b460fd729821bf832ead20113552c2694bcb43caa9d9386780c30cb02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:30:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038a482b460fd729821bf832ead20113552c2694bcb43caa9d9386780c30cb02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:30:59 compute-0 podman[326669]: 2025-12-06 07:30:59.720796924 +0000 UTC m=+0.698976221 container init de6b750f4f495d839576954d53df7a108993d9065648ee95d76ec95378c957a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_curie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:30:59 compute-0 podman[326669]: 2025-12-06 07:30:59.727253698 +0000 UTC m=+0.705432965 container start de6b750f4f495d839576954d53df7a108993d9065648ee95d76ec95378c957a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_curie, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 07:30:59 compute-0 podman[326669]: 2025-12-06 07:30:59.731253616 +0000 UTC m=+0.709432883 container attach de6b750f4f495d839576954d53df7a108993d9065648ee95d76ec95378c957a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:30:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:30:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:30:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:30:59.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.004 251996 DEBUG nova.compute.manager [req-35c9e1fa-9026-4b64-bb4e-f89c979c9d2b req-6810b18a-b7a7-4ee6-8a48-30edeb55ba1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Received event network-vif-unplugged-d704120a-a680-4b8b-9b78-a44a11524971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.005 251996 DEBUG oslo_concurrency.lockutils [req-35c9e1fa-9026-4b64-bb4e-f89c979c9d2b req-6810b18a-b7a7-4ee6-8a48-30edeb55ba1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.005 251996 DEBUG oslo_concurrency.lockutils [req-35c9e1fa-9026-4b64-bb4e-f89c979c9d2b req-6810b18a-b7a7-4ee6-8a48-30edeb55ba1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.006 251996 DEBUG oslo_concurrency.lockutils [req-35c9e1fa-9026-4b64-bb4e-f89c979c9d2b req-6810b18a-b7a7-4ee6-8a48-30edeb55ba1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.006 251996 DEBUG nova.compute.manager [req-35c9e1fa-9026-4b64-bb4e-f89c979c9d2b req-6810b18a-b7a7-4ee6-8a48-30edeb55ba1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] No waiting events found dispatching network-vif-unplugged-d704120a-a680-4b8b-9b78-a44a11524971 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.006 251996 DEBUG nova.compute.manager [req-35c9e1fa-9026-4b64-bb4e-f89c979c9d2b req-6810b18a-b7a7-4ee6-8a48-30edeb55ba1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Received event network-vif-unplugged-d704120a-a680-4b8b-9b78-a44a11524971 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.006 251996 DEBUG nova.compute.manager [req-35c9e1fa-9026-4b64-bb4e-f89c979c9d2b req-6810b18a-b7a7-4ee6-8a48-30edeb55ba1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Received event network-vif-plugged-d704120a-a680-4b8b-9b78-a44a11524971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.007 251996 DEBUG oslo_concurrency.lockutils [req-35c9e1fa-9026-4b64-bb4e-f89c979c9d2b req-6810b18a-b7a7-4ee6-8a48-30edeb55ba1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.007 251996 DEBUG oslo_concurrency.lockutils [req-35c9e1fa-9026-4b64-bb4e-f89c979c9d2b req-6810b18a-b7a7-4ee6-8a48-30edeb55ba1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.007 251996 DEBUG oslo_concurrency.lockutils [req-35c9e1fa-9026-4b64-bb4e-f89c979c9d2b req-6810b18a-b7a7-4ee6-8a48-30edeb55ba1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.008 251996 DEBUG nova.compute.manager [req-35c9e1fa-9026-4b64-bb4e-f89c979c9d2b req-6810b18a-b7a7-4ee6-8a48-30edeb55ba1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] No waiting events found dispatching network-vif-plugged-d704120a-a680-4b8b-9b78-a44a11524971 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.008 251996 WARNING nova.compute.manager [req-35c9e1fa-9026-4b64-bb4e-f89c979c9d2b req-6810b18a-b7a7-4ee6-8a48-30edeb55ba1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Received unexpected event network-vif-plugged-d704120a-a680-4b8b-9b78-a44a11524971 for instance with vm_state active and task_state deleting.
Dec 06 07:31:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 202 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 789 KiB/s wr, 144 op/s
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.132 251996 DEBUG nova.network.neutron [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Successfully created port: 95912911-2009-4ed7-8b3a-823a64d0c5e5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:31:00 compute-0 tender_curie[326724]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:31:00 compute-0 tender_curie[326724]: --> relative data size: 1.0
Dec 06 07:31:00 compute-0 tender_curie[326724]: --> All data devices are unavailable
Dec 06 07:31:00 compute-0 systemd[1]: libpod-de6b750f4f495d839576954d53df7a108993d9065648ee95d76ec95378c957a5.scope: Deactivated successfully.
Dec 06 07:31:00 compute-0 podman[326669]: 2025-12-06 07:31:00.600331049 +0000 UTC m=+1.578510316 container died de6b750f4f495d839576954d53df7a108993d9065648ee95d76ec95378c957a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:31:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-038a482b460fd729821bf832ead20113552c2694bcb43caa9d9386780c30cb02-merged.mount: Deactivated successfully.
Dec 06 07:31:00 compute-0 podman[326669]: 2025-12-06 07:31:00.66042264 +0000 UTC m=+1.638601907 container remove de6b750f4f495d839576954d53df7a108993d9065648ee95d76ec95378c957a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_curie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:31:00 compute-0 systemd[1]: libpod-conmon-de6b750f4f495d839576954d53df7a108993d9065648ee95d76ec95378c957a5.scope: Deactivated successfully.
Dec 06 07:31:00 compute-0 sudo[326395]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:00 compute-0 sudo[326755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:31:00 compute-0 sudo[326755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:00 compute-0 sudo[326755]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:00 compute-0 sudo[326780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:31:00 compute-0 sudo[326780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:00 compute-0 sudo[326780]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:00 compute-0 sudo[326805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:31:00 compute-0 sudo[326805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:00 compute-0 sudo[326805]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.942 251996 DEBUG nova.network.neutron [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Successfully updated port: 95912911-2009-4ed7-8b3a-823a64d0c5e5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:31:00 compute-0 sudo[326830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:31:00 compute-0 sudo[326830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.961 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Acquiring lock "refresh_cache-29672a35-c42f-4bd4-9bdd-be0122c29963" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.961 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Acquired lock "refresh_cache-29672a35-c42f-4bd4-9bdd-be0122c29963" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:31:00 compute-0 nova_compute[251992]: 2025-12-06 07:31:00.961 251996 DEBUG nova.network.neutron [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:31:01 compute-0 nova_compute[251992]: 2025-12-06 07:31:01.266 251996 DEBUG nova.compute.manager [req-dea53796-efac-4dc4-9a20-c8b9f529b571 req-28c74aa0-c04b-4992-9d26-d442d4875e5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Received event network-changed-95912911-2009-4ed7-8b3a-823a64d0c5e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:31:01 compute-0 nova_compute[251992]: 2025-12-06 07:31:01.266 251996 DEBUG nova.compute.manager [req-dea53796-efac-4dc4-9a20-c8b9f529b571 req-28c74aa0-c04b-4992-9d26-d442d4875e5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Refreshing instance network info cache due to event network-changed-95912911-2009-4ed7-8b3a-823a64d0c5e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:31:01 compute-0 nova_compute[251992]: 2025-12-06 07:31:01.266 251996 DEBUG oslo_concurrency.lockutils [req-dea53796-efac-4dc4-9a20-c8b9f529b571 req-28c74aa0-c04b-4992-9d26-d442d4875e5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-29672a35-c42f-4bd4-9bdd-be0122c29963" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:31:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:01.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:31:01 compute-0 podman[326896]: 2025-12-06 07:31:01.31493943 +0000 UTC m=+0.053306660 container create 1aa2f59a9ea903bcf3c3e2b2925c1d7d80685d10db4f638fac9fb8942e095f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:31:01 compute-0 systemd[1]: Started libpod-conmon-1aa2f59a9ea903bcf3c3e2b2925c1d7d80685d10db4f638fac9fb8942e095f3c.scope.
Dec 06 07:31:01 compute-0 nova_compute[251992]: 2025-12-06 07:31:01.363 251996 DEBUG nova.network.neutron [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:31:01 compute-0 podman[326896]: 2025-12-06 07:31:01.295765343 +0000 UTC m=+0.034132603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:31:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:31:01 compute-0 podman[326896]: 2025-12-06 07:31:01.409080991 +0000 UTC m=+0.147448251 container init 1aa2f59a9ea903bcf3c3e2b2925c1d7d80685d10db4f638fac9fb8942e095f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:31:01 compute-0 podman[326896]: 2025-12-06 07:31:01.416506232 +0000 UTC m=+0.154873462 container start 1aa2f59a9ea903bcf3c3e2b2925c1d7d80685d10db4f638fac9fb8942e095f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 07:31:01 compute-0 podman[326896]: 2025-12-06 07:31:01.419872343 +0000 UTC m=+0.158239573 container attach 1aa2f59a9ea903bcf3c3e2b2925c1d7d80685d10db4f638fac9fb8942e095f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:31:01 compute-0 distracted_elgamal[326912]: 167 167
Dec 06 07:31:01 compute-0 systemd[1]: libpod-1aa2f59a9ea903bcf3c3e2b2925c1d7d80685d10db4f638fac9fb8942e095f3c.scope: Deactivated successfully.
Dec 06 07:31:01 compute-0 podman[326896]: 2025-12-06 07:31:01.422777801 +0000 UTC m=+0.161145031 container died 1aa2f59a9ea903bcf3c3e2b2925c1d7d80685d10db4f638fac9fb8942e095f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 07:31:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-de34b91d4241b6a702cc7c1ea8975b89ebbfe8a864ee07bc53e59d092edfb712-merged.mount: Deactivated successfully.
Dec 06 07:31:01 compute-0 podman[326896]: 2025-12-06 07:31:01.459388649 +0000 UTC m=+0.197755879 container remove 1aa2f59a9ea903bcf3c3e2b2925c1d7d80685d10db4f638fac9fb8942e095f3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:31:01 compute-0 systemd[1]: libpod-conmon-1aa2f59a9ea903bcf3c3e2b2925c1d7d80685d10db4f638fac9fb8942e095f3c.scope: Deactivated successfully.
Dec 06 07:31:01 compute-0 podman[326930]: 2025-12-06 07:31:01.621390253 +0000 UTC m=+0.102384444 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:31:01 compute-0 podman[326952]: 2025-12-06 07:31:01.638386762 +0000 UTC m=+0.049874088 container create 665ecaf49e3163379cfd93f1aeebc39f7e701599e71fc7730eaa1e0b48e79b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:31:01 compute-0 systemd[1]: Started libpod-conmon-665ecaf49e3163379cfd93f1aeebc39f7e701599e71fc7730eaa1e0b48e79b0e.scope.
Dec 06 07:31:01 compute-0 podman[326952]: 2025-12-06 07:31:01.616143061 +0000 UTC m=+0.027630417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:31:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d1e7c69480747b7533af7f13ea59fccf6d61d9bbc2cdc583f3f4410e935d05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d1e7c69480747b7533af7f13ea59fccf6d61d9bbc2cdc583f3f4410e935d05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d1e7c69480747b7533af7f13ea59fccf6d61d9bbc2cdc583f3f4410e935d05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:31:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d1e7c69480747b7533af7f13ea59fccf6d61d9bbc2cdc583f3f4410e935d05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:31:01 compute-0 podman[326952]: 2025-12-06 07:31:01.740495349 +0000 UTC m=+0.151982685 container init 665ecaf49e3163379cfd93f1aeebc39f7e701599e71fc7730eaa1e0b48e79b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_dubinsky, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:31:01 compute-0 podman[326952]: 2025-12-06 07:31:01.749458611 +0000 UTC m=+0.160945927 container start 665ecaf49e3163379cfd93f1aeebc39f7e701599e71fc7730eaa1e0b48e79b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_dubinsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:31:01 compute-0 podman[326952]: 2025-12-06 07:31:01.753759586 +0000 UTC m=+0.165246902 container attach 665ecaf49e3163379cfd93f1aeebc39f7e701599e71fc7730eaa1e0b48e79b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 07:31:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:01.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 194 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 131 op/s
Dec 06 07:31:02 compute-0 nova_compute[251992]: 2025-12-06 07:31:02.222 251996 DEBUG nova.network.neutron [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Updating instance_info_cache with network_info: [{"id": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "address": "fa:16:3e:6f:e8:fa", "network": {"id": "d61cbad0-c569-40a3-a629-90a480791a82", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-189297030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f577827abb5458f902bb5d5580b7d69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95912911-20", "ovs_interfaceid": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:31:02 compute-0 nova_compute[251992]: 2025-12-06 07:31:02.245 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Releasing lock "refresh_cache-29672a35-c42f-4bd4-9bdd-be0122c29963" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:31:02 compute-0 nova_compute[251992]: 2025-12-06 07:31:02.245 251996 DEBUG nova.compute.manager [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Instance network_info: |[{"id": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "address": "fa:16:3e:6f:e8:fa", "network": {"id": "d61cbad0-c569-40a3-a629-90a480791a82", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-189297030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f577827abb5458f902bb5d5580b7d69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95912911-20", "ovs_interfaceid": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:31:02 compute-0 nova_compute[251992]: 2025-12-06 07:31:02.246 251996 DEBUG oslo_concurrency.lockutils [req-dea53796-efac-4dc4-9a20-c8b9f529b571 req-28c74aa0-c04b-4992-9d26-d442d4875e5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-29672a35-c42f-4bd4-9bdd-be0122c29963" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:31:02 compute-0 nova_compute[251992]: 2025-12-06 07:31:02.246 251996 DEBUG nova.network.neutron [req-dea53796-efac-4dc4-9a20-c8b9f529b571 req-28c74aa0-c04b-4992-9d26-d442d4875e5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Refreshing network info cache for port 95912911-2009-4ed7-8b3a-823a64d0c5e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]: {
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:     "0": [
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:         {
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             "devices": [
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "/dev/loop3"
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             ],
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             "lv_name": "ceph_lv0",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             "lv_size": "7511998464",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             "name": "ceph_lv0",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             "tags": {
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "ceph.cluster_name": "ceph",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "ceph.crush_device_class": "",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "ceph.encrypted": "0",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "ceph.osd_id": "0",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "ceph.type": "block",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:                 "ceph.vdo": "0"
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             },
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             "type": "block",
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:             "vg_name": "ceph_vg0"
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:         }
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]:     ]
Dec 06 07:31:02 compute-0 reverent_dubinsky[326977]: }
Dec 06 07:31:02 compute-0 systemd[1]: libpod-665ecaf49e3163379cfd93f1aeebc39f7e701599e71fc7730eaa1e0b48e79b0e.scope: Deactivated successfully.
Dec 06 07:31:02 compute-0 podman[326952]: 2025-12-06 07:31:02.57323028 +0000 UTC m=+0.984717596 container died 665ecaf49e3163379cfd93f1aeebc39f7e701599e71fc7730eaa1e0b48e79b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_dubinsky, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:31:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-30d1e7c69480747b7533af7f13ea59fccf6d61d9bbc2cdc583f3f4410e935d05-merged.mount: Deactivated successfully.
Dec 06 07:31:02 compute-0 podman[326952]: 2025-12-06 07:31:02.634007761 +0000 UTC m=+1.045495077 container remove 665ecaf49e3163379cfd93f1aeebc39f7e701599e71fc7730eaa1e0b48e79b0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_dubinsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:31:02 compute-0 systemd[1]: libpod-conmon-665ecaf49e3163379cfd93f1aeebc39f7e701599e71fc7730eaa1e0b48e79b0e.scope: Deactivated successfully.
Dec 06 07:31:02 compute-0 sudo[326830]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:02 compute-0 sudo[327001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:31:02 compute-0 sudo[327001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:02 compute-0 sudo[327001]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:02 compute-0 sudo[327026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:31:02 compute-0 sudo[327026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:02 compute-0 sudo[327026]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:02 compute-0 sudo[327051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:31:02 compute-0 sudo[327051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:02 compute-0 sudo[327051]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:02 compute-0 sudo[327076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:31:02 compute-0 sudo[327076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:03 compute-0 podman[327141]: 2025-12-06 07:31:03.195231891 +0000 UTC m=+0.023053143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:31:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:03.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:03 compute-0 nova_compute[251992]: 2025-12-06 07:31:03.312 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:03 compute-0 podman[327141]: 2025-12-06 07:31:03.33222814 +0000 UTC m=+0.160049372 container create 26ed134e4f8bfe45804533be4af7b94d8564100090da55e84ad370d486f47786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 07:31:03 compute-0 systemd[1]: Started libpod-conmon-26ed134e4f8bfe45804533be4af7b94d8564100090da55e84ad370d486f47786.scope.
Dec 06 07:31:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:31:03 compute-0 podman[327141]: 2025-12-06 07:31:03.460230456 +0000 UTC m=+0.288051708 container init 26ed134e4f8bfe45804533be4af7b94d8564100090da55e84ad370d486f47786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:31:03 compute-0 podman[327141]: 2025-12-06 07:31:03.467682996 +0000 UTC m=+0.295504228 container start 26ed134e4f8bfe45804533be4af7b94d8564100090da55e84ad370d486f47786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 07:31:03 compute-0 podman[327141]: 2025-12-06 07:31:03.4707977 +0000 UTC m=+0.298618932 container attach 26ed134e4f8bfe45804533be4af7b94d8564100090da55e84ad370d486f47786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:31:03 compute-0 amazing_poincare[327157]: 167 167
Dec 06 07:31:03 compute-0 systemd[1]: libpod-26ed134e4f8bfe45804533be4af7b94d8564100090da55e84ad370d486f47786.scope: Deactivated successfully.
Dec 06 07:31:03 compute-0 podman[327141]: 2025-12-06 07:31:03.472651071 +0000 UTC m=+0.300472313 container died 26ed134e4f8bfe45804533be4af7b94d8564100090da55e84ad370d486f47786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 07:31:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-68110f5c39dcd98d5c56cd0dae94f8967c197920ceb7f750bdb13bb19343375f-merged.mount: Deactivated successfully.
Dec 06 07:31:03 compute-0 podman[327141]: 2025-12-06 07:31:03.509249669 +0000 UTC m=+0.337070901 container remove 26ed134e4f8bfe45804533be4af7b94d8564100090da55e84ad370d486f47786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec 06 07:31:03 compute-0 systemd[1]: libpod-conmon-26ed134e4f8bfe45804533be4af7b94d8564100090da55e84ad370d486f47786.scope: Deactivated successfully.
Dec 06 07:31:03 compute-0 podman[327180]: 2025-12-06 07:31:03.658161209 +0000 UTC m=+0.040821353 container create 29c9b3d1beda95f058f00d92bdc468c081857275916fbbb353a329e2a5930ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mayer, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 07:31:03 compute-0 systemd[1]: Started libpod-conmon-29c9b3d1beda95f058f00d92bdc468c081857275916fbbb353a329e2a5930ad1.scope.
Dec 06 07:31:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b588fb439b4ad4c643a4e913a08b1c373e00cff1183cff41a7ba6a5c04593a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b588fb439b4ad4c643a4e913a08b1c373e00cff1183cff41a7ba6a5c04593a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b588fb439b4ad4c643a4e913a08b1c373e00cff1183cff41a7ba6a5c04593a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:31:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b588fb439b4ad4c643a4e913a08b1c373e00cff1183cff41a7ba6a5c04593a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:31:03 compute-0 podman[327180]: 2025-12-06 07:31:03.726954516 +0000 UTC m=+0.109614660 container init 29c9b3d1beda95f058f00d92bdc468c081857275916fbbb353a329e2a5930ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mayer, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:31:03 compute-0 podman[327180]: 2025-12-06 07:31:03.732975959 +0000 UTC m=+0.115636103 container start 29c9b3d1beda95f058f00d92bdc468c081857275916fbbb353a329e2a5930ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 07:31:03 compute-0 podman[327180]: 2025-12-06 07:31:03.638387475 +0000 UTC m=+0.021047639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:31:03 compute-0 podman[327180]: 2025-12-06 07:31:03.736425951 +0000 UTC m=+0.119086115 container attach 29c9b3d1beda95f058f00d92bdc468c081857275916fbbb353a329e2a5930ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mayer, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 07:31:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:03.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:03.838 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:03.839 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:03.839 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 194 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 1006 KiB/s rd, 1.1 MiB/s wr, 90 op/s
Dec 06 07:31:04 compute-0 nova_compute[251992]: 2025-12-06 07:31:04.218 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:04 compute-0 zealous_mayer[327197]: {
Dec 06 07:31:04 compute-0 zealous_mayer[327197]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:31:04 compute-0 zealous_mayer[327197]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:31:04 compute-0 zealous_mayer[327197]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:31:04 compute-0 zealous_mayer[327197]:         "osd_id": 0,
Dec 06 07:31:04 compute-0 zealous_mayer[327197]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:31:04 compute-0 zealous_mayer[327197]:         "type": "bluestore"
Dec 06 07:31:04 compute-0 zealous_mayer[327197]:     }
Dec 06 07:31:04 compute-0 zealous_mayer[327197]: }
Dec 06 07:31:04 compute-0 systemd[1]: libpod-29c9b3d1beda95f058f00d92bdc468c081857275916fbbb353a329e2a5930ad1.scope: Deactivated successfully.
Dec 06 07:31:04 compute-0 podman[327180]: 2025-12-06 07:31:04.572556225 +0000 UTC m=+0.955216369 container died 29c9b3d1beda95f058f00d92bdc468c081857275916fbbb353a329e2a5930ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 07:31:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b588fb439b4ad4c643a4e913a08b1c373e00cff1183cff41a7ba6a5c04593a6-merged.mount: Deactivated successfully.
Dec 06 07:31:04 compute-0 podman[327180]: 2025-12-06 07:31:04.648650979 +0000 UTC m=+1.031311113 container remove 29c9b3d1beda95f058f00d92bdc468c081857275916fbbb353a329e2a5930ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mayer, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:31:04 compute-0 systemd[1]: libpod-conmon-29c9b3d1beda95f058f00d92bdc468c081857275916fbbb353a329e2a5930ad1.scope: Deactivated successfully.
Dec 06 07:31:04 compute-0 sudo[327076]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:31:04 compute-0 nova_compute[251992]: 2025-12-06 07:31:04.702 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:05.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:05 compute-0 nova_compute[251992]: 2025-12-06 07:31:05.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:05 compute-0 nova_compute[251992]: 2025-12-06 07:31:05.680 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:05 compute-0 nova_compute[251992]: 2025-12-06 07:31:05.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:05 compute-0 nova_compute[251992]: 2025-12-06 07:31:05.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:05 compute-0 nova_compute[251992]: 2025-12-06 07:31:05.681 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:31:05 compute-0 nova_compute[251992]: 2025-12-06 07:31:05.682 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:05.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:05 compute-0 nova_compute[251992]: 2025-12-06 07:31:05.932 251996 DEBUG nova.network.neutron [req-dea53796-efac-4dc4-9a20-c8b9f529b571 req-28c74aa0-c04b-4992-9d26-d442d4875e5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Updated VIF entry in instance network info cache for port 95912911-2009-4ed7-8b3a-823a64d0c5e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:31:05 compute-0 nova_compute[251992]: 2025-12-06 07:31:05.933 251996 DEBUG nova.network.neutron [req-dea53796-efac-4dc4-9a20-c8b9f529b571 req-28c74aa0-c04b-4992-9d26-d442d4875e5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Updating instance_info_cache with network_info: [{"id": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "address": "fa:16:3e:6f:e8:fa", "network": {"id": "d61cbad0-c569-40a3-a629-90a480791a82", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-189297030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f577827abb5458f902bb5d5580b7d69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95912911-20", "ovs_interfaceid": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:31:05 compute-0 nova_compute[251992]: 2025-12-06 07:31:05.948 251996 DEBUG oslo_concurrency.lockutils [req-dea53796-efac-4dc4-9a20-c8b9f529b571 req-28c74aa0-c04b-4992-9d26-d442d4875e5e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-29672a35-c42f-4bd4-9bdd-be0122c29963" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:31:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:31:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1541494324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 144 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 135 op/s
Dec 06 07:31:06 compute-0 nova_compute[251992]: 2025-12-06 07:31:06.135 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:06 compute-0 nova_compute[251992]: 2025-12-06 07:31:06.204 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:31:06 compute-0 nova_compute[251992]: 2025-12-06 07:31:06.204 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:31:06 compute-0 nova_compute[251992]: 2025-12-06 07:31:06.333 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:31:06 compute-0 nova_compute[251992]: 2025-12-06 07:31:06.334 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4363MB free_disk=20.921672821044922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:31:06 compute-0 nova_compute[251992]: 2025-12-06 07:31:06.335 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:06 compute-0 nova_compute[251992]: 2025-12-06 07:31:06.335 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:06 compute-0 nova_compute[251992]: 2025-12-06 07:31:06.594 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:31:06 compute-0 nova_compute[251992]: 2025-12-06 07:31:06.595 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 29672a35-c42f-4bd4-9bdd-be0122c29963 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:31:06 compute-0 nova_compute[251992]: 2025-12-06 07:31:06.595 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:31:06 compute-0 nova_compute[251992]: 2025-12-06 07:31:06.595 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:31:06 compute-0 nova_compute[251992]: 2025-12-06 07:31:06.650 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:07.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:31:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4247156057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:07 compute-0 nova_compute[251992]: 2025-12-06 07:31:07.533 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.882s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:07 compute-0 nova_compute[251992]: 2025-12-06 07:31:07.540 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:31:07 compute-0 nova_compute[251992]: 2025-12-06 07:31:07.574 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:31:07 compute-0 nova_compute[251992]: 2025-12-06 07:31:07.602 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:31:07 compute-0 nova_compute[251992]: 2025-12-06 07:31:07.603 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:07.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 144 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 122 op/s
Dec 06 07:31:08 compute-0 nova_compute[251992]: 2025-12-06 07:31:08.316 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:08 compute-0 podman[327280]: 2025-12-06 07:31:08.407018861 +0000 UTC m=+0.057501573 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 06 07:31:08 compute-0 podman[327281]: 2025-12-06 07:31:08.411890423 +0000 UTC m=+0.061206054 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:31:09 compute-0 nova_compute[251992]: 2025-12-06 07:31:09.221 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:09.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:09 compute-0 ceph-mon[74339]: pgmap v2209: 305 pgs: 305 active+clean; 202 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 789 KiB/s wr, 144 op/s
Dec 06 07:31:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:09.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 172 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 131 op/s
Dec 06 07:31:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:31:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:31:10 compute-0 nova_compute[251992]: 2025-12-06 07:31:10.399 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 29672a35-c42f-4bd4-9bdd-be0122c29963_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 11.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:10 compute-0 nova_compute[251992]: 2025-12-06 07:31:10.470 251996 DEBUG nova.storage.rbd_utils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] resizing rbd image 29672a35-c42f-4bd4-9bdd-be0122c29963_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:31:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:11.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:11.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:31:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 177 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 89 op/s
Dec 06 07:31:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:31:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:31:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:31:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:31:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:31:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.160 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.162 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.163 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.164 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.164 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.165 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.170 251996 DEBUG nova.objects.instance [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lazy-loading 'migration_context' on Instance uuid 29672a35-c42f-4bd4-9bdd-be0122c29963 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.190 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.191 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Ensure instance console log exists: /var/lib/nova/instances/29672a35-c42f-4bd4-9bdd-be0122c29963/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.191 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.192 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.193 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.197 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Start _get_guest_xml network_info=[{"id": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "address": "fa:16:3e:6f:e8:fa", "network": {"id": "d61cbad0-c569-40a3-a629-90a480791a82", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-189297030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f577827abb5458f902bb5d5580b7d69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95912911-20", "ovs_interfaceid": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.204 251996 WARNING nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.207 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006258.2065616, e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.208 251996 INFO nova.compute.manager [-] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] VM Stopped (Lifecycle Event)
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.210 251996 DEBUG nova.virt.libvirt.host [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.211 251996 DEBUG nova.virt.libvirt.host [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.215 251996 DEBUG nova.virt.libvirt.host [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.216 251996 DEBUG nova.virt.libvirt.host [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.217 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.217 251996 DEBUG nova.virt.hardware [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.218 251996 DEBUG nova.virt.hardware [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.218 251996 DEBUG nova.virt.hardware [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.218 251996 DEBUG nova.virt.hardware [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.218 251996 DEBUG nova.virt.hardware [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.219 251996 DEBUG nova.virt.hardware [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.219 251996 DEBUG nova.virt.hardware [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.219 251996 DEBUG nova.virt.hardware [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.220 251996 DEBUG nova.virt.hardware [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.220 251996 DEBUG nova.virt.hardware [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.220 251996 DEBUG nova.virt.hardware [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.223 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.252 251996 DEBUG nova.compute.manager [None req-72af3123-3242-42b2-a905-6c638df50da4 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.259 251996 DEBUG nova.compute.manager [None req-72af3123-3242-42b2-a905-6c638df50da4 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.280 251996 INFO nova.compute.manager [None req-72af3123-3242-42b2-a905-6c638df50da4 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] During sync_power_state the instance has a pending task (deleting). Skip.
Dec 06 07:31:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:13.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.320 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:31:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1200503693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.666 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.703 251996 DEBUG nova.storage.rbd_utils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] rbd image 29672a35-c42f-4bd4-9bdd-be0122c29963_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:13 compute-0 nova_compute[251992]: 2025-12-06 07:31:13.710 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:13.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 185 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.4 MiB/s wr, 86 op/s
Dec 06 07:31:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:31:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2339104030' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.176 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.179 251996 DEBUG nova.virt.libvirt.vif [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1643256774',display_name='tempest-ServerMetadataNegativeTestJSON-server-1643256774',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1643256774',id=117,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f577827abb5458f902bb5d5580b7d69',ramdisk_id='',reservation_id='r-33uf1sjp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-408999880',owner_user_name='tempest-ServerMetadataNegativeTestJSON-408999880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:58Z,user_data=None,user_id='ba221a62b5d5452c80ef6e9223ab018d',uuid=29672a35-c42f-4bd4-9bdd-be0122c29963,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "address": "fa:16:3e:6f:e8:fa", "network": {"id": "d61cbad0-c569-40a3-a629-90a480791a82", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-189297030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f577827abb5458f902bb5d5580b7d69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95912911-20", "ovs_interfaceid": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.180 251996 DEBUG nova.network.os_vif_util [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Converting VIF {"id": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "address": "fa:16:3e:6f:e8:fa", "network": {"id": "d61cbad0-c569-40a3-a629-90a480791a82", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-189297030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f577827abb5458f902bb5d5580b7d69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95912911-20", "ovs_interfaceid": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.181 251996 DEBUG nova.network.os_vif_util [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:e8:fa,bridge_name='br-int',has_traffic_filtering=True,id=95912911-2009-4ed7-8b3a-823a64d0c5e5,network=Network(d61cbad0-c569-40a3-a629-90a480791a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95912911-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.183 251996 DEBUG nova.objects.instance [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29672a35-c42f-4bd4-9bdd-be0122c29963 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.209 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:31:14 compute-0 nova_compute[251992]:   <uuid>29672a35-c42f-4bd4-9bdd-be0122c29963</uuid>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   <name>instance-00000075</name>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1643256774</nova:name>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:31:13</nova:creationTime>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <nova:user uuid="ba221a62b5d5452c80ef6e9223ab018d">tempest-ServerMetadataNegativeTestJSON-408999880-project-member</nova:user>
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <nova:project uuid="7f577827abb5458f902bb5d5580b7d69">tempest-ServerMetadataNegativeTestJSON-408999880</nova:project>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <nova:port uuid="95912911-2009-4ed7-8b3a-823a64d0c5e5">
Dec 06 07:31:14 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <system>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <entry name="serial">29672a35-c42f-4bd4-9bdd-be0122c29963</entry>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <entry name="uuid">29672a35-c42f-4bd4-9bdd-be0122c29963</entry>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     </system>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   <os>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   </os>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   <features>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   </features>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/29672a35-c42f-4bd4-9bdd-be0122c29963_disk">
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       </source>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/29672a35-c42f-4bd4-9bdd-be0122c29963_disk.config">
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       </source>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:31:14 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:6f:e8:fa"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <target dev="tap95912911-20"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/29672a35-c42f-4bd4-9bdd-be0122c29963/console.log" append="off"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <video>
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     </video>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:31:14 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:31:14 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:31:14 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:31:14 compute-0 nova_compute[251992]: </domain>
Dec 06 07:31:14 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.211 251996 DEBUG nova.compute.manager [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Preparing to wait for external event network-vif-plugged-95912911-2009-4ed7-8b3a-823a64d0c5e5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.212 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Acquiring lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.213 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.213 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.215 251996 DEBUG nova.virt.libvirt.vif [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1643256774',display_name='tempest-ServerMetadataNegativeTestJSON-server-1643256774',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1643256774',id=117,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f577827abb5458f902bb5d5580b7d69',ramdisk_id='',reservation_id='r-33uf1sjp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-408999880',owner_user_name='tempest-ServerMetadataNegativeTestJSON-408999880-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:30:58Z,user_data=None,user_id='ba221a62b5d5452c80ef6e9223ab018d',uuid=29672a35-c42f-4bd4-9bdd-be0122c29963,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "address": "fa:16:3e:6f:e8:fa", "network": {"id": "d61cbad0-c569-40a3-a629-90a480791a82", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-189297030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f577827abb5458f902bb5d5580b7d69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95912911-20", "ovs_interfaceid": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.215 251996 DEBUG nova.network.os_vif_util [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Converting VIF {"id": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "address": "fa:16:3e:6f:e8:fa", "network": {"id": "d61cbad0-c569-40a3-a629-90a480791a82", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-189297030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f577827abb5458f902bb5d5580b7d69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95912911-20", "ovs_interfaceid": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.217 251996 DEBUG nova.network.os_vif_util [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:e8:fa,bridge_name='br-int',has_traffic_filtering=True,id=95912911-2009-4ed7-8b3a-823a64d0c5e5,network=Network(d61cbad0-c569-40a3-a629-90a480791a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95912911-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.218 251996 DEBUG os_vif [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:e8:fa,bridge_name='br-int',has_traffic_filtering=True,id=95912911-2009-4ed7-8b3a-823a64d0c5e5,network=Network(d61cbad0-c569-40a3-a629-90a480791a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95912911-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.219 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.220 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.222 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.225 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.228 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.228 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95912911-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.229 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95912911-20, col_values=(('external_ids', {'iface-id': '95912911-2009-4ed7-8b3a-823a64d0c5e5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:e8:fa', 'vm-uuid': '29672a35-c42f-4bd4-9bdd-be0122c29963'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.231 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:14 compute-0 NetworkManager[48965]: <info>  [1765006274.2341] manager: (tap95912911-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.234 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.239 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.241 251996 INFO os_vif [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:e8:fa,bridge_name='br-int',has_traffic_filtering=True,id=95912911-2009-4ed7-8b3a-823a64d0c5e5,network=Network(d61cbad0-c569-40a3-a629-90a480791a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95912911-20')
Dec 06 07:31:14 compute-0 ceph-mon[74339]: pgmap v2210: 305 pgs: 305 active+clean; 194 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 131 op/s
Dec 06 07:31:14 compute-0 ceph-mon[74339]: pgmap v2211: 305 pgs: 305 active+clean; 194 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 1006 KiB/s rd, 1.1 MiB/s wr, 90 op/s
Dec 06 07:31:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1541494324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:14 compute-0 ceph-mon[74339]: pgmap v2212: 305 pgs: 305 active+clean; 144 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 135 op/s
Dec 06 07:31:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1699218702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4247156057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:14 compute-0 ceph-mon[74339]: pgmap v2213: 305 pgs: 305 active+clean; 144 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 122 op/s
Dec 06 07:31:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1080974136' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:31:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1080974136' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:31:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4135206798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:14 compute-0 ceph-mon[74339]: pgmap v2214: 305 pgs: 305 active+clean; 172 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 131 op/s
Dec 06 07:31:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.879 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.880 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.880 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] No VIF found with MAC fa:16:3e:6f:e8:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:31:14 compute-0 nova_compute[251992]: 2025-12-06 07:31:14.881 251996 INFO nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Using config drive
Dec 06 07:31:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:31:15 compute-0 nova_compute[251992]: 2025-12-06 07:31:15.128 251996 DEBUG nova.storage.rbd_utils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] rbd image 29672a35-c42f-4bd4-9bdd-be0122c29963_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f98c620e-747b-4da2-bd02-84bc48c10f12 does not exist
Dec 06 07:31:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ea686f69-a60a-487d-8d87-3ab17a1ba4f3 does not exist
Dec 06 07:31:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3ca0eaea-01b8-45a0-a7d9-581ab9b43cb6 does not exist
Dec 06 07:31:15 compute-0 sudo[327480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:31:15 compute-0 sudo[327478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:31:15 compute-0 sudo[327480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:15 compute-0 sudo[327478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:15 compute-0 sudo[327480]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:15 compute-0 sudo[327478]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:15 compute-0 sudo[327529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:31:15 compute-0 sudo[327528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:31:15 compute-0 sudo[327528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:15 compute-0 sudo[327529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:15 compute-0 sudo[327528]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:15 compute-0 sudo[327529]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:15.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:15 compute-0 nova_compute[251992]: 2025-12-06 07:31:15.540 251996 INFO nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Creating config drive at /var/lib/nova/instances/29672a35-c42f-4bd4-9bdd-be0122c29963/disk.config
Dec 06 07:31:15 compute-0 nova_compute[251992]: 2025-12-06 07:31:15.545 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29672a35-c42f-4bd4-9bdd-be0122c29963/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zsxjrx3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:15 compute-0 nova_compute[251992]: 2025-12-06 07:31:15.679 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29672a35-c42f-4bd4-9bdd-be0122c29963/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zsxjrx3" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:15 compute-0 nova_compute[251992]: 2025-12-06 07:31:15.707 251996 DEBUG nova.storage.rbd_utils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] rbd image 29672a35-c42f-4bd4-9bdd-be0122c29963_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:31:15 compute-0 nova_compute[251992]: 2025-12-06 07:31:15.710 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29672a35-c42f-4bd4-9bdd-be0122c29963/disk.config 29672a35-c42f-4bd4-9bdd-be0122c29963_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:15.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 185 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.4 MiB/s wr, 97 op/s
Dec 06 07:31:16 compute-0 radosgw[91889]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec 06 07:31:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.292 251996 DEBUG oslo_concurrency.processutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29672a35-c42f-4bd4-9bdd-be0122c29963/disk.config 29672a35-c42f-4bd4-9bdd-be0122c29963_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.293 251996 INFO nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Deleting local config drive /var/lib/nova/instances/29672a35-c42f-4bd4-9bdd-be0122c29963/disk.config because it was imported into RBD.
Dec 06 07:31:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:17.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:17 compute-0 kernel: tap95912911-20: entered promiscuous mode
Dec 06 07:31:17 compute-0 NetworkManager[48965]: <info>  [1765006277.3665] manager: (tap95912911-20): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Dec 06 07:31:17 compute-0 ovn_controller[147168]: 2025-12-06T07:31:17Z|00425|binding|INFO|Claiming lport 95912911-2009-4ed7-8b3a-823a64d0c5e5 for this chassis.
Dec 06 07:31:17 compute-0 ovn_controller[147168]: 2025-12-06T07:31:17Z|00426|binding|INFO|95912911-2009-4ed7-8b3a-823a64d0c5e5: Claiming fa:16:3e:6f:e8:fa 10.100.0.9
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.367 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.377 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:e8:fa 10.100.0.9'], port_security=['fa:16:3e:6f:e8:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '29672a35-c42f-4bd4-9bdd-be0122c29963', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d61cbad0-c569-40a3-a629-90a480791a82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f577827abb5458f902bb5d5580b7d69', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0168991e-1688-418f-baa3-6dbb587ea903', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ddf7a5e-35ae-4905-a43f-2a18b3417570, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=95912911-2009-4ed7-8b3a-823a64d0c5e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.378 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 95912911-2009-4ed7-8b3a-823a64d0c5e5 in datapath d61cbad0-c569-40a3-a629-90a480791a82 bound to our chassis
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.379 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d61cbad0-c569-40a3-a629-90a480791a82
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.389 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9bca40f4-06cb-4c89-844f-392a164b621e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.390 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd61cbad0-c1 in ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.392 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd61cbad0-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.392 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c45ab9fe-2e98-4625-8e81-ae79debf9cfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.393 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e85450b7-4272-4ae0-b3d0-3c0e3c6de1ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.403 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[1b31d86b-7f91-4bd8-ba7d-f5ce667f0fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 systemd-udevd[327634]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:31:17 compute-0 systemd-machined[212986]: New machine qemu-53-instance-00000075.
Dec 06 07:31:17 compute-0 NetworkManager[48965]: <info>  [1765006277.4172] device (tap95912911-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:31:17 compute-0 NetworkManager[48965]: <info>  [1765006277.4189] device (tap95912911-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.426 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c291ae8b-5716-4f65-854a-8b82fcdedbfb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.430 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:17 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-00000075.
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.435 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:17 compute-0 ovn_controller[147168]: 2025-12-06T07:31:17Z|00427|binding|INFO|Setting lport 95912911-2009-4ed7-8b3a-823a64d0c5e5 ovn-installed in OVS
Dec 06 07:31:17 compute-0 ovn_controller[147168]: 2025-12-06T07:31:17Z|00428|binding|INFO|Setting lport 95912911-2009-4ed7-8b3a-823a64d0c5e5 up in Southbound
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.440 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.461 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[522c56c7-29f1-48fd-9dc3-06d9ba9552fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 systemd-udevd[327637]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.467 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[acd290bb-efbd-431e-b205-0de02bf889a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 NetworkManager[48965]: <info>  [1765006277.4682] manager: (tapd61cbad0-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/211)
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.499 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[259ad33e-9703-48c2-8f42-439c75a68ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.502 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2d04774d-ffb2-4557-8eb7-cc8feef52e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 NetworkManager[48965]: <info>  [1765006277.5276] device (tapd61cbad0-c0): carrier: link connected
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.534 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2f00bc-612f-47a6-a958-12d36ac4983e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.552 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c023dcc4-5a21-45d3-915d-5414e77c3eb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd61cbad0-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:fb:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 135], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655011, 'reachable_time': 37700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327665, 'error': None, 'target': 'ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.567 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ea24bc-21a3-43d6-b8d1-28ac3908c640]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed9:fb99'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655011, 'tstamp': 655011}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327666, 'error': None, 'target': 'ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.588 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[99344378-6c38-4ae4-986c-f09b88877023]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd61cbad0-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:fb:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 135], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655011, 'reachable_time': 37700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327667, 'error': None, 'target': 'ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.627 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[621efc11-2a33-4477-ba35-6d8cedd671c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.694 251996 DEBUG nova.compute.manager [req-8d98dafe-8501-4ea3-a887-76bb595394d7 req-704deb1c-4d27-4a9d-990e-19f5b661f623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Received event network-vif-plugged-95912911-2009-4ed7-8b3a-823a64d0c5e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.694 251996 DEBUG oslo_concurrency.lockutils [req-8d98dafe-8501-4ea3-a887-76bb595394d7 req-704deb1c-4d27-4a9d-990e-19f5b661f623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.694 251996 DEBUG oslo_concurrency.lockutils [req-8d98dafe-8501-4ea3-a887-76bb595394d7 req-704deb1c-4d27-4a9d-990e-19f5b661f623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.695 251996 DEBUG oslo_concurrency.lockutils [req-8d98dafe-8501-4ea3-a887-76bb595394d7 req-704deb1c-4d27-4a9d-990e-19f5b661f623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.695 251996 DEBUG nova.compute.manager [req-8d98dafe-8501-4ea3-a887-76bb595394d7 req-704deb1c-4d27-4a9d-990e-19f5b661f623 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Processing event network-vif-plugged-95912911-2009-4ed7-8b3a-823a64d0c5e5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.697 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8ffada74-c66b-474a-b277-64467bcbf6dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.698 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd61cbad0-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.698 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.698 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd61cbad0-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:31:17 compute-0 NetworkManager[48965]: <info>  [1765006277.7006] manager: (tapd61cbad0-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.700 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:17 compute-0 kernel: tapd61cbad0-c0: entered promiscuous mode
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.702 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.704 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd61cbad0-c0, col_values=(('external_ids', {'iface-id': 'b9da16af-2070-46f8-847f-44fa1f736bd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.704 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:17 compute-0 ovn_controller[147168]: 2025-12-06T07:31:17Z|00429|binding|INFO|Releasing lport b9da16af-2070-46f8-847f-44fa1f736bd1 from this chassis (sb_readonly=0)
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.717 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.717 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d61cbad0-c569-40a3-a629-90a480791a82.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d61cbad0-c569-40a3-a629-90a480791a82.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.719 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[04870483-152c-4310-a5e4-e2c8addfe4e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.720 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-d61cbad0-c569-40a3-a629-90a480791a82
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/d61cbad0-c569-40a3-a629-90a480791a82.pid.haproxy
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID d61cbad0-c569-40a3-a629-90a480791a82
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.721 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82', 'env', 'PROCESS_TAG=haproxy-d61cbad0-c569-40a3-a629-90a480791a82', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d61cbad0-c569-40a3-a629-90a480791a82.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:31:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:17.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:31:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:17.793 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:31:17 compute-0 nova_compute[251992]: 2025-12-06 07:31:17.793 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 185 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 3.4 MiB/s wr, 52 op/s
Dec 06 07:31:18 compute-0 podman[327717]: 2025-12-06 07:31:18.081244523 +0000 UTC m=+0.020290149 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:31:18 compute-0 radosgw[91889]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Dec 06 07:31:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:31:18
Dec 06 07:31:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:31:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:31:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Dec 06 07:31:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:31:18 compute-0 nova_compute[251992]: 2025-12-06 07:31:18.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:18 compute-0 nova_compute[251992]: 2025-12-06 07:31:18.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:31:18 compute-0 ceph-mon[74339]: pgmap v2215: 305 pgs: 305 active+clean; 177 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 89 op/s
Dec 06 07:31:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1200503693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:31:18 compute-0 ceph-mon[74339]: pgmap v2216: 305 pgs: 305 active+clean; 185 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.4 MiB/s wr, 86 op/s
Dec 06 07:31:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2339104030' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:31:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:31:19 compute-0 nova_compute[251992]: 2025-12-06 07:31:19.225 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:19 compute-0 nova_compute[251992]: 2025-12-06 07:31:19.231 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:19.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:31:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:19.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:31:19 compute-0 nova_compute[251992]: 2025-12-06 07:31:19.798 251996 DEBUG nova.compute.manager [req-afe8ed4f-e833-4db9-ae9c-4c6031eca166 req-bdd49278-0d05-4136-9e12-acc8722d21f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Received event network-vif-plugged-95912911-2009-4ed7-8b3a-823a64d0c5e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:31:19 compute-0 nova_compute[251992]: 2025-12-06 07:31:19.798 251996 DEBUG oslo_concurrency.lockutils [req-afe8ed4f-e833-4db9-ae9c-4c6031eca166 req-bdd49278-0d05-4136-9e12-acc8722d21f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:19 compute-0 nova_compute[251992]: 2025-12-06 07:31:19.798 251996 DEBUG oslo_concurrency.lockutils [req-afe8ed4f-e833-4db9-ae9c-4c6031eca166 req-bdd49278-0d05-4136-9e12-acc8722d21f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:19 compute-0 nova_compute[251992]: 2025-12-06 07:31:19.799 251996 DEBUG oslo_concurrency.lockutils [req-afe8ed4f-e833-4db9-ae9c-4c6031eca166 req-bdd49278-0d05-4136-9e12-acc8722d21f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:19 compute-0 nova_compute[251992]: 2025-12-06 07:31:19.799 251996 DEBUG nova.compute.manager [req-afe8ed4f-e833-4db9-ae9c-4c6031eca166 req-bdd49278-0d05-4136-9e12-acc8722d21f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] No waiting events found dispatching network-vif-plugged-95912911-2009-4ed7-8b3a-823a64d0c5e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:31:19 compute-0 nova_compute[251992]: 2025-12-06 07:31:19.799 251996 WARNING nova.compute.manager [req-afe8ed4f-e833-4db9-ae9c-4c6031eca166 req-bdd49278-0d05-4136-9e12-acc8722d21f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Received unexpected event network-vif-plugged-95912911-2009-4ed7-8b3a-823a64d0c5e5 for instance with vm_state building and task_state spawning.
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.113 251996 DEBUG nova.compute.manager [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.114 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006280.1127512, 29672a35-c42f-4bd4-9bdd-be0122c29963 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.114 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] VM Started (Lifecycle Event)
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.117 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.121 251996 INFO nova.virt.libvirt.driver [-] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Instance spawned successfully.
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.121 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:31:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 196 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 4.0 MiB/s wr, 74 op/s
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.143 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.149 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.151 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.152 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.152 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.153 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.153 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.153 251996 DEBUG nova.virt.libvirt.driver [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.189 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.190 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006280.1130364, 29672a35-c42f-4bd4-9bdd-be0122c29963 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.190 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] VM Paused (Lifecycle Event)
Dec 06 07:31:20 compute-0 podman[327717]: 2025-12-06 07:31:20.196581779 +0000 UTC m=+2.135627385 container create acf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.232 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.236 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006280.1162982, 29672a35-c42f-4bd4-9bdd-be0122c29963 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.236 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] VM Resumed (Lifecycle Event)
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.244 251996 INFO nova.compute.manager [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Took 21.41 seconds to spawn the instance on the hypervisor.
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.245 251996 DEBUG nova.compute.manager [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.255 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.257 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.286 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.311 251996 INFO nova.compute.manager [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Took 22.50 seconds to build instance.
Dec 06 07:31:20 compute-0 nova_compute[251992]: 2025-12-06 07:31:20.328 251996 DEBUG oslo_concurrency.lockutils [None req-7855c102-aecb-426d-a128-a9705305c4b9 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:20 compute-0 radosgw[91889]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Dec 06 07:31:20 compute-0 systemd[1]: Started libpod-conmon-acf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a.scope.
Dec 06 07:31:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aff5b5dc590f30223b07b70b0cbb9da8a0597feb9783a1bea39e7e562fb8fcd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:31:20 compute-0 radosgw[91889]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Dec 06 07:31:20 compute-0 podman[327717]: 2025-12-06 07:31:20.818985151 +0000 UTC m=+2.758030767 container init acf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:31:20 compute-0 podman[327717]: 2025-12-06 07:31:20.825033344 +0000 UTC m=+2.764078950 container start acf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:31:20 compute-0 neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82[327759]: [NOTICE]   (327763) : New worker (327765) forked
Dec 06 07:31:20 compute-0 neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82[327759]: [NOTICE]   (327763) : Loading success.
Dec 06 07:31:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:21.146 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:31:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:21.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:21 compute-0 nova_compute[251992]: 2025-12-06 07:31:21.659 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:31:21 compute-0 nova_compute[251992]: 2025-12-06 07:31:21.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:31:21 compute-0 nova_compute[251992]: 2025-12-06 07:31:21.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:31:21 compute-0 nova_compute[251992]: 2025-12-06 07:31:21.686 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 06 07:31:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:21.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 199 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 2.6 MiB/s wr, 82 op/s
Dec 06 07:31:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:22 compute-0 nova_compute[251992]: 2025-12-06 07:31:22.374 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-29672a35-c42f-4bd4-9bdd-be0122c29963" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:31:22 compute-0 nova_compute[251992]: 2025-12-06 07:31:22.374 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-29672a35-c42f-4bd4-9bdd-be0122c29963" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:31:22 compute-0 nova_compute[251992]: 2025-12-06 07:31:22.374 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:31:22 compute-0 nova_compute[251992]: 2025-12-06 07:31:22.375 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 29672a35-c42f-4bd4-9bdd-be0122c29963 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:31:22 compute-0 nova_compute[251992]: 2025-12-06 07:31:22.451 251996 INFO nova.virt.libvirt.driver [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Deleting instance files /var/lib/nova/instances/e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_del
Dec 06 07:31:22 compute-0 nova_compute[251992]: 2025-12-06 07:31:22.453 251996 INFO nova.virt.libvirt.driver [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Deletion of /var/lib/nova/instances/e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c_del complete
Dec 06 07:31:22 compute-0 nova_compute[251992]: 2025-12-06 07:31:22.513 251996 INFO nova.compute.manager [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Took 24.74 seconds to destroy the instance on the hypervisor.
Dec 06 07:31:22 compute-0 nova_compute[251992]: 2025-12-06 07:31:22.513 251996 DEBUG oslo.service.loopingcall [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:31:22 compute-0 nova_compute[251992]: 2025-12-06 07:31:22.514 251996 DEBUG nova.compute.manager [-] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:31:22 compute-0 nova_compute[251992]: 2025-12-06 07:31:22.514 251996 DEBUG nova.network.neutron [-] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:31:22 compute-0 ceph-mon[74339]: pgmap v2217: 305 pgs: 305 active+clean; 185 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.4 MiB/s wr, 97 op/s
Dec 06 07:31:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/920108480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:22 compute-0 ceph-mon[74339]: pgmap v2218: 305 pgs: 305 active+clean; 185 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 3.4 MiB/s wr, 52 op/s
Dec 06 07:31:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:23.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:31:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:31:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:31:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:31:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:31:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:31:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:23.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:23 compute-0 nova_compute[251992]: 2025-12-06 07:31:23.898 251996 DEBUG nova.network.neutron [-] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:31:23 compute-0 nova_compute[251992]: 2025-12-06 07:31:23.927 251996 INFO nova.compute.manager [-] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Took 1.41 seconds to deallocate network for instance.
Dec 06 07:31:23 compute-0 nova_compute[251992]: 2025-12-06 07:31:23.980 251996 DEBUG oslo_concurrency.lockutils [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:23 compute-0 nova_compute[251992]: 2025-12-06 07:31:23.980 251996 DEBUG oslo_concurrency.lockutils [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:23 compute-0 nova_compute[251992]: 2025-12-06 07:31:23.988 251996 DEBUG nova.compute.manager [req-b646c619-030e-4fb9-8bf9-318bf00bdbc7 req-d7c5f007-0b3b-4b0e-b9e4-7575db11e9a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c] Received event network-vif-deleted-d704120a-a680-4b8b-9b78-a44a11524971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.081 251996 DEBUG oslo_concurrency.processutils [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 202 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 2.4 MiB/s wr, 103 op/s
Dec 06 07:31:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/759483955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:24 compute-0 ceph-mon[74339]: pgmap v2219: 305 pgs: 305 active+clean; 196 MiB data, 927 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 4.0 MiB/s wr, 74 op/s
Dec 06 07:31:24 compute-0 ceph-mon[74339]: pgmap v2220: 305 pgs: 305 active+clean; 199 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 2.6 MiB/s wr, 82 op/s
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.227 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.310 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Updating instance_info_cache with network_info: [{"id": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "address": "fa:16:3e:6f:e8:fa", "network": {"id": "d61cbad0-c569-40a3-a629-90a480791a82", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-189297030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f577827abb5458f902bb5d5580b7d69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95912911-20", "ovs_interfaceid": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.330 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-29672a35-c42f-4bd4-9bdd-be0122c29963" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.330 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:31:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:31:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:31:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:31:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:31:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.813 251996 DEBUG oslo_concurrency.lockutils [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Acquiring lock "29672a35-c42f-4bd4-9bdd-be0122c29963" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.814 251996 DEBUG oslo_concurrency.lockutils [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.814 251996 DEBUG oslo_concurrency.lockutils [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Acquiring lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.814 251996 DEBUG oslo_concurrency.lockutils [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.815 251996 DEBUG oslo_concurrency.lockutils [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.818 251996 INFO nova.compute.manager [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Terminating instance
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.819 251996 DEBUG nova.compute.manager [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:31:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:31:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/249822656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.888 251996 DEBUG oslo_concurrency.processutils [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.807s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.896 251996 DEBUG nova.compute.provider_tree [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.911 251996 DEBUG nova.scheduler.client.report [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.929 251996 DEBUG oslo_concurrency.lockutils [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:24 compute-0 nova_compute[251992]: 2025-12-06 07:31:24.953 251996 INFO nova.scheduler.client.report [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Deleted allocations for instance e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.047 251996 DEBUG oslo_concurrency.lockutils [None req-cf8be368-1b9c-4ce4-b59a-80c8c161ac54 c3e74d72d7114a37b1fa3e366712c49e c7c878bda5244b67976c298fc026bbcd - - default default] Lock "e14b8dd0-e3f2-4d2d-8c88-d4c7a5d9144c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 27.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:25 compute-0 kernel: tap95912911-20 (unregistering): left promiscuous mode
Dec 06 07:31:25 compute-0 NetworkManager[48965]: <info>  [1765006285.0749] device (tap95912911-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:31:25 compute-0 ovn_controller[147168]: 2025-12-06T07:31:25Z|00430|binding|INFO|Releasing lport 95912911-2009-4ed7-8b3a-823a64d0c5e5 from this chassis (sb_readonly=0)
Dec 06 07:31:25 compute-0 ovn_controller[147168]: 2025-12-06T07:31:25Z|00431|binding|INFO|Setting lport 95912911-2009-4ed7-8b3a-823a64d0c5e5 down in Southbound
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.082 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:25 compute-0 ovn_controller[147168]: 2025-12-06T07:31:25Z|00432|binding|INFO|Removing iface tap95912911-20 ovn-installed in OVS
Dec 06 07:31:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:25.093 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:e8:fa 10.100.0.9'], port_security=['fa:16:3e:6f:e8:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '29672a35-c42f-4bd4-9bdd-be0122c29963', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d61cbad0-c569-40a3-a629-90a480791a82', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f577827abb5458f902bb5d5580b7d69', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0168991e-1688-418f-baa3-6dbb587ea903', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ddf7a5e-35ae-4905-a43f-2a18b3417570, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=95912911-2009-4ed7-8b3a-823a64d0c5e5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:31:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:25.094 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 95912911-2009-4ed7-8b3a-823a64d0c5e5 in datapath d61cbad0-c569-40a3-a629-90a480791a82 unbound from our chassis
Dec 06 07:31:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:25.096 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d61cbad0-c569-40a3-a629-90a480791a82, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:31:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:25.097 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[660d438b-419f-4db5-8c07-9ddf66445046]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:25.097 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82 namespace which is not needed anymore
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.099 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:25.148 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:31:25 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000075.scope: Deactivated successfully.
Dec 06 07:31:25 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000075.scope: Consumed 5.598s CPU time.
Dec 06 07:31:25 compute-0 systemd-machined[212986]: Machine qemu-53-instance-00000075 terminated.
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.237 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.242 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.259 251996 INFO nova.virt.libvirt.driver [-] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Instance destroyed successfully.
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.260 251996 DEBUG nova.objects.instance [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lazy-loading 'resources' on Instance uuid 29672a35-c42f-4bd4-9bdd-be0122c29963 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:31:25 compute-0 neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82[327759]: [NOTICE]   (327763) : haproxy version is 2.8.14-c23fe91
Dec 06 07:31:25 compute-0 neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82[327759]: [NOTICE]   (327763) : path to executable is /usr/sbin/haproxy
Dec 06 07:31:25 compute-0 neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82[327759]: [WARNING]  (327763) : Exiting Master process...
Dec 06 07:31:25 compute-0 neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82[327759]: [WARNING]  (327763) : Exiting Master process...
Dec 06 07:31:25 compute-0 neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82[327759]: [ALERT]    (327763) : Current worker (327765) exited with code 143 (Terminated)
Dec 06 07:31:25 compute-0 neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82[327759]: [WARNING]  (327763) : All workers exited. Exiting... (0)
Dec 06 07:31:25 compute-0 systemd[1]: libpod-acf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a.scope: Deactivated successfully.
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.275 251996 DEBUG nova.virt.libvirt.vif [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:30:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1643256774',display_name='tempest-ServerMetadataNegativeTestJSON-server-1643256774',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1643256774',id=117,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:31:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f577827abb5458f902bb5d5580b7d69',ramdisk_id='',reservation_id='r-33uf1sjp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-408999880',owner_user_name='tempest-ServerMetadataNegativeTestJSON-408999880-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:31:20Z,user_data=None,user_id='ba221a62b5d5452c80ef6e9223ab018d',uuid=29672a35-c42f-4bd4-9bdd-be0122c29963,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "address": "fa:16:3e:6f:e8:fa", "network": {"id": "d61cbad0-c569-40a3-a629-90a480791a82", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-189297030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f577827abb5458f902bb5d5580b7d69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95912911-20", "ovs_interfaceid": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.276 251996 DEBUG nova.network.os_vif_util [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Converting VIF {"id": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "address": "fa:16:3e:6f:e8:fa", "network": {"id": "d61cbad0-c569-40a3-a629-90a480791a82", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-189297030-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f577827abb5458f902bb5d5580b7d69", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95912911-20", "ovs_interfaceid": "95912911-2009-4ed7-8b3a-823a64d0c5e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.277 251996 DEBUG nova.network.os_vif_util [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:e8:fa,bridge_name='br-int',has_traffic_filtering=True,id=95912911-2009-4ed7-8b3a-823a64d0c5e5,network=Network(d61cbad0-c569-40a3-a629-90a480791a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95912911-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.277 251996 DEBUG os_vif [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:e8:fa,bridge_name='br-int',has_traffic_filtering=True,id=95912911-2009-4ed7-8b3a-823a64d0c5e5,network=Network(d61cbad0-c569-40a3-a629-90a480791a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95912911-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.280 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:25 compute-0 podman[327823]: 2025-12-06 07:31:25.280306602 +0000 UTC m=+0.090511606 container died acf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.280 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95912911-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:31:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:25.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.339 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.342 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.345 251996 INFO os_vif [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:e8:fa,bridge_name='br-int',has_traffic_filtering=True,id=95912911-2009-4ed7-8b3a-823a64d0c5e5,network=Network(d61cbad0-c569-40a3-a629-90a480791a82),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95912911-20')
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.571 251996 DEBUG nova.compute.manager [req-1e7dce2c-c049-43d4-a437-7d5cbde262d2 req-7973ec2b-68b1-4eb0-95cc-677f0dc26bc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Received event network-vif-unplugged-95912911-2009-4ed7-8b3a-823a64d0c5e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.571 251996 DEBUG oslo_concurrency.lockutils [req-1e7dce2c-c049-43d4-a437-7d5cbde262d2 req-7973ec2b-68b1-4eb0-95cc-677f0dc26bc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.572 251996 DEBUG oslo_concurrency.lockutils [req-1e7dce2c-c049-43d4-a437-7d5cbde262d2 req-7973ec2b-68b1-4eb0-95cc-677f0dc26bc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.572 251996 DEBUG oslo_concurrency.lockutils [req-1e7dce2c-c049-43d4-a437-7d5cbde262d2 req-7973ec2b-68b1-4eb0-95cc-677f0dc26bc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.572 251996 DEBUG nova.compute.manager [req-1e7dce2c-c049-43d4-a437-7d5cbde262d2 req-7973ec2b-68b1-4eb0-95cc-677f0dc26bc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] No waiting events found dispatching network-vif-unplugged-95912911-2009-4ed7-8b3a-823a64d0c5e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:31:25 compute-0 nova_compute[251992]: 2025-12-06 07:31:25.572 251996 DEBUG nova.compute.manager [req-1e7dce2c-c049-43d4-a437-7d5cbde262d2 req-7973ec2b-68b1-4eb0-95cc-677f0dc26bc3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Received event network-vif-unplugged-95912911-2009-4ed7-8b3a-823a64d0c5e5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:31:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:25.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019931108086729946 of space, bias 1.0, pg target 0.5979332426018984 quantized to 32 (current 32)
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002120156628047646 of space, bias 1.0, pg target 0.6360469884142937 quantized to 32 (current 32)
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:31:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:31:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1817503598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:31:25 compute-0 ceph-mon[74339]: pgmap v2221: 305 pgs: 305 active+clean; 202 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 2.4 MiB/s wr, 103 op/s
Dec 06 07:31:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4164864406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:31:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/249822656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a-userdata-shm.mount: Deactivated successfully.
Dec 06 07:31:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aff5b5dc590f30223b07b70b0cbb9da8a0597feb9783a1bea39e7e562fb8fcd-merged.mount: Deactivated successfully.
Dec 06 07:31:26 compute-0 podman[327823]: 2025-12-06 07:31:26.02184019 +0000 UTC m=+0.832045194 container cleanup acf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:31:26 compute-0 systemd[1]: libpod-conmon-acf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a.scope: Deactivated successfully.
Dec 06 07:31:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 202 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 174 op/s
Dec 06 07:31:26 compute-0 podman[327881]: 2025-12-06 07:31:26.368272092 +0000 UTC m=+0.326681450 container remove acf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 07:31:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:26.374 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e5591c9d-38e2-48c5-8772-dfa738c1c6c0]: (4, ('Sat Dec  6 07:31:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82 (acf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a)\nacf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a\nSat Dec  6 07:31:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82 (acf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a)\nacf5ad61b7d5a300cae58020f8b4313ae53b4e71b603519674c6fdfe68a0440a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:26.375 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fbeb6681-eed9-439d-a022-df499aa7738a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:26.376 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd61cbad0-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:31:26 compute-0 nova_compute[251992]: 2025-12-06 07:31:26.412 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:26 compute-0 kernel: tapd61cbad0-c0: left promiscuous mode
Dec 06 07:31:26 compute-0 nova_compute[251992]: 2025-12-06 07:31:26.429 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:26.433 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3f132310-95fc-4538-b67b-3fa9392ecee7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:26.457 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e6fb14-ae76-4f0b-a571-7d4fede38651]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:26.458 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[abd787a4-f88b-4412-948c-e51d087ae82b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:26.474 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[969eea6b-284e-4962-b0aa-74cfd130bcbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655004, 'reachable_time': 32814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327898, 'error': None, 'target': 'ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:26 compute-0 systemd[1]: run-netns-ovnmeta\x2dd61cbad0\x2dc569\x2d40a3\x2da629\x2d90a480791a82.mount: Deactivated successfully.
Dec 06 07:31:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:26.480 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d61cbad0-c569-40a3-a629-90a480791a82 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:31:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:31:26.481 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[c0cb3166-d135-47e5-b25a-d10cb739fd58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:31:27 compute-0 ceph-mon[74339]: pgmap v2222: 305 pgs: 305 active+clean; 202 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 174 op/s
Dec 06 07:31:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:27.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:27 compute-0 nova_compute[251992]: 2025-12-06 07:31:27.658 251996 DEBUG nova.compute.manager [req-720503f9-d284-4e06-bdf4-3f2dd8d9a1dc req-1df16ee0-9fb2-4334-b76f-413f62acf2e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Received event network-vif-plugged-95912911-2009-4ed7-8b3a-823a64d0c5e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:31:27 compute-0 nova_compute[251992]: 2025-12-06 07:31:27.658 251996 DEBUG oslo_concurrency.lockutils [req-720503f9-d284-4e06-bdf4-3f2dd8d9a1dc req-1df16ee0-9fb2-4334-b76f-413f62acf2e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:27 compute-0 nova_compute[251992]: 2025-12-06 07:31:27.658 251996 DEBUG oslo_concurrency.lockutils [req-720503f9-d284-4e06-bdf4-3f2dd8d9a1dc req-1df16ee0-9fb2-4334-b76f-413f62acf2e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:27 compute-0 nova_compute[251992]: 2025-12-06 07:31:27.659 251996 DEBUG oslo_concurrency.lockutils [req-720503f9-d284-4e06-bdf4-3f2dd8d9a1dc req-1df16ee0-9fb2-4334-b76f-413f62acf2e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:27 compute-0 nova_compute[251992]: 2025-12-06 07:31:27.659 251996 DEBUG nova.compute.manager [req-720503f9-d284-4e06-bdf4-3f2dd8d9a1dc req-1df16ee0-9fb2-4334-b76f-413f62acf2e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] No waiting events found dispatching network-vif-plugged-95912911-2009-4ed7-8b3a-823a64d0c5e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:31:27 compute-0 nova_compute[251992]: 2025-12-06 07:31:27.659 251996 WARNING nova.compute.manager [req-720503f9-d284-4e06-bdf4-3f2dd8d9a1dc req-1df16ee0-9fb2-4334-b76f-413f62acf2e6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Received unexpected event network-vif-plugged-95912911-2009-4ed7-8b3a-823a64d0c5e5 for instance with vm_state active and task_state deleting.
Dec 06 07:31:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:27.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 202 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 162 op/s
Dec 06 07:31:28 compute-0 nova_compute[251992]: 2025-12-06 07:31:28.681 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:28 compute-0 ceph-mon[74339]: pgmap v2223: 305 pgs: 305 active+clean; 202 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 162 op/s
Dec 06 07:31:29 compute-0 nova_compute[251992]: 2025-12-06 07:31:29.228 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:29.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:29.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:31:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 181 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.2 MiB/s wr, 278 op/s
Dec 06 07:31:30 compute-0 nova_compute[251992]: 2025-12-06 07:31:30.339 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:30 compute-0 ceph-mon[74339]: pgmap v2224: 305 pgs: 305 active+clean; 181 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.2 MiB/s wr, 278 op/s
Dec 06 07:31:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:31.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:31.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 170 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 508 KiB/s wr, 283 op/s
Dec 06 07:31:32 compute-0 podman[327903]: 2025-12-06 07:31:32.408601179 +0000 UTC m=+0.070148865 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller)
Dec 06 07:31:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:33 compute-0 nova_compute[251992]: 2025-12-06 07:31:33.024 251996 INFO nova.virt.libvirt.driver [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Deleting instance files /var/lib/nova/instances/29672a35-c42f-4bd4-9bdd-be0122c29963_del
Dec 06 07:31:33 compute-0 nova_compute[251992]: 2025-12-06 07:31:33.025 251996 INFO nova.virt.libvirt.driver [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Deletion of /var/lib/nova/instances/29672a35-c42f-4bd4-9bdd-be0122c29963_del complete
Dec 06 07:31:33 compute-0 ceph-mon[74339]: pgmap v2225: 305 pgs: 305 active+clean; 170 MiB data, 918 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 508 KiB/s wr, 283 op/s
Dec 06 07:31:33 compute-0 nova_compute[251992]: 2025-12-06 07:31:33.144 251996 INFO nova.compute.manager [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Took 8.32 seconds to destroy the instance on the hypervisor.
Dec 06 07:31:33 compute-0 nova_compute[251992]: 2025-12-06 07:31:33.145 251996 DEBUG oslo.service.loopingcall [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:31:33 compute-0 nova_compute[251992]: 2025-12-06 07:31:33.145 251996 DEBUG nova.compute.manager [-] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:31:33 compute-0 nova_compute[251992]: 2025-12-06 07:31:33.146 251996 DEBUG nova.network.neutron [-] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:31:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:33.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:33.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 162 MiB data, 910 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 365 KiB/s wr, 277 op/s
Dec 06 07:31:34 compute-0 nova_compute[251992]: 2025-12-06 07:31:34.230 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:34 compute-0 nova_compute[251992]: 2025-12-06 07:31:34.368 251996 DEBUG nova.network.neutron [-] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:31:34 compute-0 nova_compute[251992]: 2025-12-06 07:31:34.399 251996 INFO nova.compute.manager [-] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Took 1.25 seconds to deallocate network for instance.
Dec 06 07:31:34 compute-0 nova_compute[251992]: 2025-12-06 07:31:34.476 251996 DEBUG oslo_concurrency.lockutils [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:31:34 compute-0 nova_compute[251992]: 2025-12-06 07:31:34.477 251996 DEBUG oslo_concurrency.lockutils [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:31:34 compute-0 nova_compute[251992]: 2025-12-06 07:31:34.502 251996 DEBUG nova.compute.manager [req-ecd85736-5636-44b8-94c9-4b251eae9bd4 req-67ca9041-d997-4709-901a-5b51fda21816 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Received event network-vif-deleted-95912911-2009-4ed7-8b3a-823a64d0c5e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:31:34 compute-0 nova_compute[251992]: 2025-12-06 07:31:34.521 251996 DEBUG oslo_concurrency.processutils [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:31:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:31:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449106417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:35 compute-0 nova_compute[251992]: 2025-12-06 07:31:35.078 251996 DEBUG oslo_concurrency.processutils [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:31:35 compute-0 nova_compute[251992]: 2025-12-06 07:31:35.086 251996 DEBUG nova.compute.provider_tree [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:31:35 compute-0 nova_compute[251992]: 2025-12-06 07:31:35.103 251996 DEBUG nova.scheduler.client.report [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:31:35 compute-0 nova_compute[251992]: 2025-12-06 07:31:35.128 251996 DEBUG oslo_concurrency.lockutils [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:35 compute-0 nova_compute[251992]: 2025-12-06 07:31:35.152 251996 INFO nova.scheduler.client.report [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Deleted allocations for instance 29672a35-c42f-4bd4-9bdd-be0122c29963
Dec 06 07:31:35 compute-0 nova_compute[251992]: 2025-12-06 07:31:35.229 251996 DEBUG oslo_concurrency.lockutils [None req-d0aff1a1-39ab-41a1-8469-f963fbd52839 ba221a62b5d5452c80ef6e9223ab018d 7f577827abb5458f902bb5d5580b7d69 - - default default] Lock "29672a35-c42f-4bd4-9bdd-be0122c29963" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:31:35 compute-0 ceph-mon[74339]: pgmap v2226: 305 pgs: 305 active+clean; 162 MiB data, 910 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 365 KiB/s wr, 277 op/s
Dec 06 07:31:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:35.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:35 compute-0 nova_compute[251992]: 2025-12-06 07:31:35.341 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:35 compute-0 sudo[327957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:31:35 compute-0 sudo[327957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:35 compute-0 sudo[327957]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:35 compute-0 sudo[327982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:31:35 compute-0 sudo[327982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:35 compute-0 sudo[327982]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:35.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 139 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 117 KiB/s wr, 287 op/s
Dec 06 07:31:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/449106417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:36 compute-0 ceph-mon[74339]: pgmap v2227: 305 pgs: 305 active+clean; 139 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 117 KiB/s wr, 287 op/s
Dec 06 07:31:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:37.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:37.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 139 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 114 KiB/s wr, 199 op/s
Dec 06 07:31:39 compute-0 nova_compute[251992]: 2025-12-06 07:31:39.232 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:39 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Dec 06 07:31:39 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:39.293531) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:31:39 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Dec 06 07:31:39 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006299293617, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2275, "num_deletes": 253, "total_data_size": 4265009, "memory_usage": 4329224, "flush_reason": "Manual Compaction"}
Dec 06 07:31:39 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Dec 06 07:31:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:39.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:39 compute-0 podman[328009]: 2025-12-06 07:31:39.400137957 +0000 UTC m=+0.057939806 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 07:31:39 compute-0 podman[328010]: 2025-12-06 07:31:39.416133099 +0000 UTC m=+0.068367027 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 06 07:31:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:39.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:39 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006299997877, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 4120130, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42765, "largest_seqno": 45039, "table_properties": {"data_size": 4109543, "index_size": 6761, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22728, "raw_average_key_size": 21, "raw_value_size": 4088292, "raw_average_value_size": 3796, "num_data_blocks": 293, "num_entries": 1077, "num_filter_entries": 1077, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006024, "oldest_key_time": 1765006024, "file_creation_time": 1765006299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:31:39 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 704430 microseconds, and 13129 cpu microseconds.
Dec 06 07:31:39 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:31:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 124 KiB/s wr, 222 op/s
Dec 06 07:31:40 compute-0 nova_compute[251992]: 2025-12-06 07:31:40.257 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006285.2565882, 29672a35-c42f-4bd4-9bdd-be0122c29963 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:31:40 compute-0 nova_compute[251992]: 2025-12-06 07:31:40.258 251996 INFO nova.compute.manager [-] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] VM Stopped (Lifecycle Event)
Dec 06 07:31:40 compute-0 nova_compute[251992]: 2025-12-06 07:31:40.302 251996 DEBUG nova.compute.manager [None req-2b01cfb3-18ac-4645-9d35-dd7660881bed - - - - - -] [instance: 29672a35-c42f-4bd4-9bdd-be0122c29963] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:31:40 compute-0 nova_compute[251992]: 2025-12-06 07:31:40.344 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:39.997958) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 4120130 bytes OK
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:39.997986) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.422394) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.422445) EVENT_LOG_v1 {"time_micros": 1765006300422436, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.422466) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 4255551, prev total WAL file size 4273251, number of live WAL files 2.
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.423718) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(4023KB)], [92(10MB)]
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006300423767, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 15230120, "oldest_snapshot_seqno": -1}
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 7897 keys, 13150300 bytes, temperature: kUnknown
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006300668898, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 13150300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13095916, "index_size": 33536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19781, "raw_key_size": 204256, "raw_average_key_size": 25, "raw_value_size": 12953567, "raw_average_value_size": 1640, "num_data_blocks": 1327, "num_entries": 7897, "num_filter_entries": 7897, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765006300, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.669241) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 13150300 bytes
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.773663) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 62.1 rd, 53.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 10.6 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 8424, records dropped: 527 output_compression: NoCompression
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.773702) EVENT_LOG_v1 {"time_micros": 1765006300773687, "job": 54, "event": "compaction_finished", "compaction_time_micros": 245274, "compaction_time_cpu_micros": 32739, "output_level": 6, "num_output_files": 1, "total_output_size": 13150300, "num_input_records": 8424, "num_output_records": 7897, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006300775405, "job": 54, "event": "table_file_deletion", "file_number": 94}
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006300777669, "job": 54, "event": "table_file_deletion", "file_number": 92}
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.423591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.777743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.777749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.777751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.777753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:31:40 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:31:40.777754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:31:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:41.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:41.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 212 KiB/s rd, 65 KiB/s wr, 109 op/s
Dec 06 07:31:42 compute-0 ceph-mon[74339]: pgmap v2228: 305 pgs: 305 active+clean; 139 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 114 KiB/s wr, 199 op/s
Dec 06 07:31:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:31:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:31:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:31:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:31:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:31:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:31:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:43.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:43.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 52 KiB/s wr, 83 op/s
Dec 06 07:31:44 compute-0 nova_compute[251992]: 2025-12-06 07:31:44.233 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:45 compute-0 nova_compute[251992]: 2025-12-06 07:31:45.346 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:45.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:45 compute-0 ceph-mon[74339]: pgmap v2229: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 124 KiB/s wr, 222 op/s
Dec 06 07:31:45 compute-0 ceph-mon[74339]: pgmap v2230: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 212 KiB/s rd, 65 KiB/s wr, 109 op/s
Dec 06 07:31:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:45.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 21 KiB/s wr, 77 op/s
Dec 06 07:31:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:47.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1659011349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:47 compute-0 ceph-mon[74339]: pgmap v2231: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 52 KiB/s wr, 83 op/s
Dec 06 07:31:47 compute-0 ceph-mon[74339]: pgmap v2232: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 21 KiB/s wr, 77 op/s
Dec 06 07:31:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:47.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 33 op/s
Dec 06 07:31:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:49 compute-0 nova_compute[251992]: 2025-12-06 07:31:49.249 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:49.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:31:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:49.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:31:50 compute-0 nova_compute[251992]: 2025-12-06 07:31:50.387 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 13 KiB/s wr, 40 op/s
Dec 06 07:31:50 compute-0 ceph-mon[74339]: pgmap v2233: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 33 op/s
Dec 06 07:31:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:51.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:51.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:31:52 compute-0 ceph-mon[74339]: pgmap v2234: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 13 KiB/s wr, 40 op/s
Dec 06 07:31:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 3.6 KiB/s wr, 18 op/s
Dec 06 07:31:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:53.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:31:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:31:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:53.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:31:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:54 compute-0 nova_compute[251992]: 2025-12-06 07:31:54.252 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.4 KiB/s wr, 19 op/s
Dec 06 07:31:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:55.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:55 compute-0 nova_compute[251992]: 2025-12-06 07:31:55.390 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:55 compute-0 sudo[328057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:31:55 compute-0 sudo[328057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:55 compute-0 sudo[328057]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:55 compute-0 sudo[328082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:31:55 compute-0 sudo[328082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:31:55 compute-0 sudo[328082]: pam_unix(sudo:session): session closed for user root
Dec 06 07:31:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:55.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:56 compute-0 ceph-mon[74339]: pgmap v2235: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 3.6 KiB/s wr, 18 op/s
Dec 06 07:31:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 978 B/s wr, 22 op/s
Dec 06 07:31:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:57.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:57.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:58 compute-0 ceph-mon[74339]: pgmap v2236: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.4 KiB/s wr, 19 op/s
Dec 06 07:31:58 compute-0 ceph-mon[74339]: pgmap v2237: 305 pgs: 305 active+clean; 121 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 978 B/s wr, 22 op/s
Dec 06 07:31:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 121 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.0 KiB/s wr, 19 op/s
Dec 06 07:31:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:31:59 compute-0 nova_compute[251992]: 2025-12-06 07:31:59.253 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:31:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:31:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:31:59.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:31:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3610760591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:59 compute-0 ceph-mon[74339]: pgmap v2238: 305 pgs: 305 active+clean; 121 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.0 KiB/s wr, 19 op/s
Dec 06 07:31:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/53891712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:31:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:31:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:31:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:31:59.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:32:00 compute-0 nova_compute[251992]: 2025-12-06 07:32:00.393 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 134 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 572 KiB/s wr, 47 op/s
Dec 06 07:32:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:01.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:32:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:01.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:32:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 134 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 572 KiB/s wr, 40 op/s
Dec 06 07:32:02 compute-0 ceph-mon[74339]: pgmap v2239: 305 pgs: 305 active+clean; 134 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 572 KiB/s wr, 47 op/s
Dec 06 07:32:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:32:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:03.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:32:03 compute-0 podman[328111]: 2025-12-06 07:32:03.499140904 +0000 UTC m=+0.162688363 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 06 07:32:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:32:03.838 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:32:03.839 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:32:03.839 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:03.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:03 compute-0 ceph-mon[74339]: pgmap v2240: 305 pgs: 305 active+clean; 134 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 572 KiB/s wr, 40 op/s
Dec 06 07:32:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:04 compute-0 nova_compute[251992]: 2025-12-06 07:32:04.255 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 150 MiB data, 880 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.2 MiB/s wr, 60 op/s
Dec 06 07:32:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:05.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:05 compute-0 nova_compute[251992]: 2025-12-06 07:32:05.396 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:05.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:06 compute-0 ceph-mon[74339]: pgmap v2241: 305 pgs: 305 active+clean; 150 MiB data, 880 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.2 MiB/s wr, 60 op/s
Dec 06 07:32:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3743405159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:32:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1789558873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:32:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 3.5 MiB/s wr, 95 op/s
Dec 06 07:32:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:07.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:07 compute-0 nova_compute[251992]: 2025-12-06 07:32:07.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:07 compute-0 nova_compute[251992]: 2025-12-06 07:32:07.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:07 compute-0 nova_compute[251992]: 2025-12-06 07:32:07.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:07 compute-0 nova_compute[251992]: 2025-12-06 07:32:07.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:07 compute-0 nova_compute[251992]: 2025-12-06 07:32:07.682 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:32:07 compute-0 nova_compute[251992]: 2025-12-06 07:32:07.682 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:07.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:32:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1179823086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.135 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.299 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.301 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4460MB free_disk=20.970619201660156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.301 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.302 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:08 compute-0 ceph-mon[74339]: pgmap v2242: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 3.5 MiB/s wr, 95 op/s
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.395 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.395 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.415 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:32:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 3.5 MiB/s wr, 102 op/s
Dec 06 07:32:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:32:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3113212600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.849 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.853 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.876 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.902 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:32:08 compute-0 nova_compute[251992]: 2025-12-06 07:32:08.902 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:09 compute-0 nova_compute[251992]: 2025-12-06 07:32:09.256 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:09.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:09.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:10 compute-0 podman[328185]: 2025-12-06 07:32:10.390989202 +0000 UTC m=+0.047478143 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:32:10 compute-0 nova_compute[251992]: 2025-12-06 07:32:10.397 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:10 compute-0 podman[328186]: 2025-12-06 07:32:10.424938728 +0000 UTC m=+0.078289294 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 06 07:32:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1179823086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:10 compute-0 ceph-mon[74339]: pgmap v2243: 305 pgs: 305 active+clean; 214 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 3.5 MiB/s wr, 102 op/s
Dec 06 07:32:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3113212600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 3.5 MiB/s wr, 102 op/s
Dec 06 07:32:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:11.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:11.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1774354065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:32:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1774354065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:32:12 compute-0 ceph-mon[74339]: pgmap v2244: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 3.5 MiB/s wr, 102 op/s
Dec 06 07:32:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.0 MiB/s wr, 71 op/s
Dec 06 07:32:12 compute-0 nova_compute[251992]: 2025-12-06 07:32:12.896 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:12 compute-0 nova_compute[251992]: 2025-12-06 07:32:12.896 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:12 compute-0 nova_compute[251992]: 2025-12-06 07:32:12.896 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:12 compute-0 nova_compute[251992]: 2025-12-06 07:32:12.897 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:12 compute-0 nova_compute[251992]: 2025-12-06 07:32:12.897 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:32:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:32:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:32:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:32:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:32:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:32:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:32:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:13.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:32:13 compute-0 nova_compute[251992]: 2025-12-06 07:32:13.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:13.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:14 compute-0 nova_compute[251992]: 2025-12-06 07:32:14.258 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 3.0 MiB/s wr, 73 op/s
Dec 06 07:32:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.003000080s ======
Dec 06 07:32:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:15.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec 06 07:32:15 compute-0 nova_compute[251992]: 2025-12-06 07:32:15.400 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:15 compute-0 sudo[328227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:15 compute-0 sudo[328230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:15 compute-0 sudo[328230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:15 compute-0 sudo[328227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:15 compute-0 sudo[328227]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:15 compute-0 sudo[328230]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:15 compute-0 sudo[328278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:32:15 compute-0 sudo[328278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:15 compute-0 sudo[328277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:15 compute-0 sudo[328278]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:15 compute-0 sudo[328277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:15 compute-0 sudo[328277]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:15 compute-0 sudo[328327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:15 compute-0 sudo[328327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:15 compute-0 sudo[328327]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:15 compute-0 sudo[328352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:32:15 compute-0 sudo[328352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:15.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:32:16 compute-0 sudo[328352]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:16 compute-0 ceph-mon[74339]: pgmap v2245: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.0 MiB/s wr, 71 op/s
Dec 06 07:32:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 07:32:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:32:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 07:32:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:32:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.3 MiB/s wr, 57 op/s
Dec 06 07:32:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:32:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:17.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:17.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3515848525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:17 compute-0 ceph-mon[74339]: pgmap v2246: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 3.0 MiB/s wr, 73 op/s
Dec 06 07:32:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3910440295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1222227724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:32:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:32:17 compute-0 ceph-mon[74339]: pgmap v2247: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.3 MiB/s wr, 57 op/s
Dec 06 07:32:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:32:18
Dec 06 07:32:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:32:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:32:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'vms', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control']
Dec 06 07:32:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:32:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 13 KiB/s wr, 19 op/s
Dec 06 07:32:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:19 compute-0 nova_compute[251992]: 2025-12-06 07:32:19.295 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:19.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:32:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:19.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:20 compute-0 nova_compute[251992]: 2025-12-06 07:32:20.437 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:20 compute-0 nova_compute[251992]: 2025-12-06 07:32:20.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:20 compute-0 nova_compute[251992]: 2025-12-06 07:32:20.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:32:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 49 op/s
Dec 06 07:32:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:21.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:21 compute-0 nova_compute[251992]: 2025-12-06 07:32:21.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:21 compute-0 nova_compute[251992]: 2025-12-06 07:32:21.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:32:21 compute-0 nova_compute[251992]: 2025-12-06 07:32:21.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:32:21 compute-0 nova_compute[251992]: 2025-12-06 07:32:21.673 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:32:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:21.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 47 op/s
Dec 06 07:32:22 compute-0 nova_compute[251992]: 2025-12-06 07:32:22.780 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:32:22.780 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:32:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:32:22.782 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:32:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:23.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:32:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:32:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:32:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:32:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:32:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:23.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:24 compute-0 nova_compute[251992]: 2025-12-06 07:32:24.298 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:32:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:32:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:32:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:32:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:32:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 64 op/s
Dec 06 07:32:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:25.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:25 compute-0 nova_compute[251992]: 2025-12-06 07:32:25.438 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:25.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019845683859110366 of space, bias 1.0, pg target 0.595370515773311 quantized to 32 (current 32)
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002166503815373162 of space, bias 1.0, pg target 0.6499511446119486 quantized to 32 (current 32)
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:32:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:32:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Dec 06 07:32:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:27.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:27.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:28 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 07:32:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Dec 06 07:32:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).paxos(paxos updating c 4017..4552) accept timeout, calling fresh election
Dec 06 07:32:29 compute-0 ceph-mon[74339]: mon.compute-0@0(probing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:32:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 07:32:29 compute-0 ceph-mon[74339]: paxos.0).electionLogic(52) init, last seen epoch 52
Dec 06 07:32:29 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:32:29 compute-0 nova_compute[251992]: 2025-12-06 07:32:29.299 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:29.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:29.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:30 compute-0 nova_compute[251992]: 2025-12-06 07:32:30.441 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Dec 06 07:32:30 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:32:30.784 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:32:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:31.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:31.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:32 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 07:32:32 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:32 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:32 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:32 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 822 KiB/s rd, 29 op/s
Dec 06 07:32:32 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:32 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:32 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:33 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:33 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:33 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:33.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.657 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.659 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.659 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.659 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.660 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.660 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.760 251996 DEBUG nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.761 251996 WARNING nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.762 251996 WARNING nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.762 251996 INFO nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Removable base files: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.763 251996 INFO nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.764 251996 INFO nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.764 251996 DEBUG nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.765 251996 DEBUG nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec 06 07:32:33 compute-0 nova_compute[251992]: 2025-12-06 07:32:33.765 251996 DEBUG nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec 06 07:32:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:33.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:34 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:34 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:34 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:34 compute-0 ceph-mon[74339]: paxos.0).electionLogic(53) init, last seen epoch 53, mid-election, bumping
Dec 06 07:32:34 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:32:34 compute-0 nova_compute[251992]: 2025-12-06 07:32:34.303 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:34 compute-0 podman[328416]: 2025-12-06 07:32:34.442695476 +0000 UTC m=+0.102530728 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:32:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 822 KiB/s rd, 29 op/s
Dec 06 07:32:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:35.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:35 compute-0 nova_compute[251992]: 2025-12-06 07:32:35.443 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:35 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:35 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:35 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:35 compute-0 sudo[328444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:35 compute-0 sudo[328444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:35 compute-0 sudo[328444]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:35 compute-0 sudo[328469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:35 compute-0 sudo[328469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:35 compute-0 sudo[328469]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:35 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:35 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:35 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:35.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:36 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:36 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:36 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:36 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 07:32:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 403 KiB/s rd, 13 op/s
Dec 06 07:32:37 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:37 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:37 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 07:32:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:32:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:37.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:32:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:32:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:37.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:32:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 07:32:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3435441625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:32:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/941915969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:32:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:32:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:32:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Dec 06 07:32:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 66m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:32:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 07:32:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:32:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail
Dec 06 07:32:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:32:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:32:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:32:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:32:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:32:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 266a51bf-052d-4604-af45-5ce6592c2774 does not exist
Dec 06 07:32:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev af265b6a-6b1a-4e00-aa7f-ece2da49c77a does not exist
Dec 06 07:32:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7010ab35-c88e-45cf-9e45-cf248d706f5c does not exist
Dec 06 07:32:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:32:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:32:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:32:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:32:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:32:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:32:39 compute-0 sudo[328496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:39 compute-0 sudo[328496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:39 compute-0 sudo[328496]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:39 compute-0 ceph-mon[74339]: pgmap v2248: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 13 KiB/s wr, 19 op/s
Dec 06 07:32:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/720906342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:32:39 compute-0 ceph-mon[74339]: pgmap v2249: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 49 op/s
Dec 06 07:32:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/72302301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:32:39 compute-0 ceph-mon[74339]: pgmap v2250: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 47 op/s
Dec 06 07:32:39 compute-0 ceph-mon[74339]: pgmap v2251: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 64 op/s
Dec 06 07:32:39 compute-0 ceph-mon[74339]: pgmap v2252: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Dec 06 07:32:39 compute-0 ceph-mon[74339]: pgmap v2253: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Dec 06 07:32:39 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 07:32:39 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 07:32:39 compute-0 ceph-mon[74339]: pgmap v2254: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 71 op/s
Dec 06 07:32:39 compute-0 ceph-mon[74339]: mon.compute-1 calling monitor election
Dec 06 07:32:39 compute-0 ceph-mon[74339]: pgmap v2255: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 822 KiB/s rd, 29 op/s
Dec 06 07:32:39 compute-0 ceph-mon[74339]: pgmap v2256: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 822 KiB/s rd, 29 op/s
Dec 06 07:32:39 compute-0 ceph-mon[74339]: pgmap v2257: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 403 KiB/s rd, 13 op/s
Dec 06 07:32:39 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 07:32:39 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:32:39 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:32:39 compute-0 ceph-mon[74339]: osdmap e273: 3 total, 3 up, 3 in
Dec 06 07:32:39 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 66m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:32:39 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 07:32:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:32:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:32:39 compute-0 sudo[328521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:32:39 compute-0 sudo[328521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:39 compute-0 sudo[328521]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:39 compute-0 sudo[328546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:39 compute-0 sudo[328546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:39 compute-0 sudo[328546]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:39 compute-0 nova_compute[251992]: 2025-12-06 07:32:39.304 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:39 compute-0 sudo[328571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:32:39 compute-0 sudo[328571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:39.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:39 compute-0 podman[328638]: 2025-12-06 07:32:39.642362039 +0000 UTC m=+0.038947933 container create b5017896c6312f972eb8ecdbe5ece2661ffd1d49505378852b4c00caef540507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:32:39 compute-0 systemd[1]: Started libpod-conmon-b5017896c6312f972eb8ecdbe5ece2661ffd1d49505378852b4c00caef540507.scope.
Dec 06 07:32:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:32:39 compute-0 podman[328638]: 2025-12-06 07:32:39.625974047 +0000 UTC m=+0.022559961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:32:39 compute-0 podman[328638]: 2025-12-06 07:32:39.729549504 +0000 UTC m=+0.126135418 container init b5017896c6312f972eb8ecdbe5ece2661ffd1d49505378852b4c00caef540507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:32:39 compute-0 podman[328638]: 2025-12-06 07:32:39.738008102 +0000 UTC m=+0.134593996 container start b5017896c6312f972eb8ecdbe5ece2661ffd1d49505378852b4c00caef540507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 07:32:39 compute-0 nostalgic_cartwright[328654]: 167 167
Dec 06 07:32:39 compute-0 systemd[1]: libpod-b5017896c6312f972eb8ecdbe5ece2661ffd1d49505378852b4c00caef540507.scope: Deactivated successfully.
Dec 06 07:32:39 compute-0 conmon[328654]: conmon b5017896c6312f972eb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5017896c6312f972eb8ecdbe5ece2661ffd1d49505378852b4c00caef540507.scope/container/memory.events
Dec 06 07:32:39 compute-0 podman[328638]: 2025-12-06 07:32:39.751122265 +0000 UTC m=+0.147708189 container attach b5017896c6312f972eb8ecdbe5ece2661ffd1d49505378852b4c00caef540507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 07:32:39 compute-0 podman[328638]: 2025-12-06 07:32:39.752040951 +0000 UTC m=+0.148626835 container died b5017896c6312f972eb8ecdbe5ece2661ffd1d49505378852b4c00caef540507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:32:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-3682fa55e6476167ef0b0377637b3731c62074a8462a6222b4d314c768b5b081-merged.mount: Deactivated successfully.
Dec 06 07:32:39 compute-0 podman[328638]: 2025-12-06 07:32:39.903620573 +0000 UTC m=+0.300206477 container remove b5017896c6312f972eb8ecdbe5ece2661ffd1d49505378852b4c00caef540507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_cartwright, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:32:39 compute-0 systemd[1]: libpod-conmon-b5017896c6312f972eb8ecdbe5ece2661ffd1d49505378852b4c00caef540507.scope: Deactivated successfully.
Dec 06 07:32:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:39.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:40 compute-0 podman[328680]: 2025-12-06 07:32:40.043698744 +0000 UTC m=+0.023273250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:32:40 compute-0 podman[328680]: 2025-12-06 07:32:40.310448395 +0000 UTC m=+0.290022851 container create ed79da762c9b13888d2b6e226005c2893e6f85c8d8e337419f00bfa409f00e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_margulis, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:32:40 compute-0 systemd[1]: Started libpod-conmon-ed79da762c9b13888d2b6e226005c2893e6f85c8d8e337419f00bfa409f00e8e.scope.
Dec 06 07:32:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5609daf21361ddf6b626e40a01cba44ce0e1ec337cad48e4c274db1b1312b875/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5609daf21361ddf6b626e40a01cba44ce0e1ec337cad48e4c274db1b1312b875/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5609daf21361ddf6b626e40a01cba44ce0e1ec337cad48e4c274db1b1312b875/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5609daf21361ddf6b626e40a01cba44ce0e1ec337cad48e4c274db1b1312b875/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5609daf21361ddf6b626e40a01cba44ce0e1ec337cad48e4c274db1b1312b875/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:40 compute-0 podman[328680]: 2025-12-06 07:32:40.437903186 +0000 UTC m=+0.417477642 container init ed79da762c9b13888d2b6e226005c2893e6f85c8d8e337419f00bfa409f00e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 07:32:40 compute-0 podman[328680]: 2025-12-06 07:32:40.448211845 +0000 UTC m=+0.427786301 container start ed79da762c9b13888d2b6e226005c2893e6f85c8d8e337419f00bfa409f00e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 07:32:40 compute-0 nova_compute[251992]: 2025-12-06 07:32:40.445 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:40 compute-0 podman[328680]: 2025-12-06 07:32:40.452188902 +0000 UTC m=+0.431763358 container attach ed79da762c9b13888d2b6e226005c2893e6f85c8d8e337419f00bfa409f00e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 07:32:40 compute-0 podman[328698]: 2025-12-06 07:32:40.478995516 +0000 UTC m=+0.058025228 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes 
Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:32:40 compute-0 podman[328718]: 2025-12-06 07:32:40.570125496 +0000 UTC m=+0.059215760 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:32:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 07:32:41 compute-0 practical_margulis[328697]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:32:41 compute-0 practical_margulis[328697]: --> relative data size: 1.0
Dec 06 07:32:41 compute-0 practical_margulis[328697]: --> All data devices are unavailable
Dec 06 07:32:41 compute-0 systemd[1]: libpod-ed79da762c9b13888d2b6e226005c2893e6f85c8d8e337419f00bfa409f00e8e.scope: Deactivated successfully.
Dec 06 07:32:41 compute-0 podman[328680]: 2025-12-06 07:32:41.243480314 +0000 UTC m=+1.223054770 container died ed79da762c9b13888d2b6e226005c2893e6f85c8d8e337419f00bfa409f00e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_margulis, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 07:32:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5609daf21361ddf6b626e40a01cba44ce0e1ec337cad48e4c274db1b1312b875-merged.mount: Deactivated successfully.
Dec 06 07:32:41 compute-0 podman[328680]: 2025-12-06 07:32:41.306896627 +0000 UTC m=+1.286471083 container remove ed79da762c9b13888d2b6e226005c2893e6f85c8d8e337419f00bfa409f00e8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_margulis, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:32:41 compute-0 systemd[1]: libpod-conmon-ed79da762c9b13888d2b6e226005c2893e6f85c8d8e337419f00bfa409f00e8e.scope: Deactivated successfully.
Dec 06 07:32:41 compute-0 sudo[328571]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:41 compute-0 sudo[328763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:41 compute-0 sudo[328763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:41 compute-0 sudo[328763]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:41.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:41 compute-0 sudo[328788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:32:41 compute-0 sudo[328788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:41 compute-0 sudo[328788]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:41 compute-0 sudo[328813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:41 compute-0 sudo[328813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:41 compute-0 sudo[328813]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:41 compute-0 ceph-mon[74339]: pgmap v2258: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail
Dec 06 07:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:32:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:32:41 compute-0 sudo[328838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:32:41 compute-0 sudo[328838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:32:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:41.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:32:41 compute-0 podman[328901]: 2025-12-06 07:32:41.943426401 +0000 UTC m=+0.068653775 container create 82f92ba7fda91afd611ec4238cd4db92a96feca115c8fecf379b186e868adc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:32:41 compute-0 systemd[1]: Started libpod-conmon-82f92ba7fda91afd611ec4238cd4db92a96feca115c8fecf379b186e868adc8a.scope.
Dec 06 07:32:41 compute-0 podman[328901]: 2025-12-06 07:32:41.897323235 +0000 UTC m=+0.022550629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:32:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:32:42 compute-0 podman[328901]: 2025-12-06 07:32:42.029263208 +0000 UTC m=+0.154490602 container init 82f92ba7fda91afd611ec4238cd4db92a96feca115c8fecf379b186e868adc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:32:42 compute-0 podman[328901]: 2025-12-06 07:32:42.035900617 +0000 UTC m=+0.161127991 container start 82f92ba7fda91afd611ec4238cd4db92a96feca115c8fecf379b186e868adc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:32:42 compute-0 priceless_bose[328918]: 167 167
Dec 06 07:32:42 compute-0 systemd[1]: libpod-82f92ba7fda91afd611ec4238cd4db92a96feca115c8fecf379b186e868adc8a.scope: Deactivated successfully.
Dec 06 07:32:42 compute-0 podman[328901]: 2025-12-06 07:32:42.040605754 +0000 UTC m=+0.165833158 container attach 82f92ba7fda91afd611ec4238cd4db92a96feca115c8fecf379b186e868adc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 07:32:42 compute-0 podman[328901]: 2025-12-06 07:32:42.041141379 +0000 UTC m=+0.166368773 container died 82f92ba7fda91afd611ec4238cd4db92a96feca115c8fecf379b186e868adc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:32:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-76ff956625ed6bf849f1510df58824df08736e00653100438ab8c4613aeeb7d1-merged.mount: Deactivated successfully.
Dec 06 07:32:42 compute-0 podman[328901]: 2025-12-06 07:32:42.077719767 +0000 UTC m=+0.202947141 container remove 82f92ba7fda91afd611ec4238cd4db92a96feca115c8fecf379b186e868adc8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:32:42 compute-0 systemd[1]: libpod-conmon-82f92ba7fda91afd611ec4238cd4db92a96feca115c8fecf379b186e868adc8a.scope: Deactivated successfully.
Dec 06 07:32:42 compute-0 podman[328942]: 2025-12-06 07:32:42.222628958 +0000 UTC m=+0.036799764 container create a496fa4167509facd5591e46536223d97f882b767cfd9f622907c3b2909f2391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:32:42 compute-0 systemd[1]: Started libpod-conmon-a496fa4167509facd5591e46536223d97f882b767cfd9f622907c3b2909f2391.scope.
Dec 06 07:32:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a40fe98aafc884277625678a6a8733fceb0538817e1f4ab69c278c91b9b6b4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a40fe98aafc884277625678a6a8733fceb0538817e1f4ab69c278c91b9b6b4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a40fe98aafc884277625678a6a8733fceb0538817e1f4ab69c278c91b9b6b4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a40fe98aafc884277625678a6a8733fceb0538817e1f4ab69c278c91b9b6b4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:42 compute-0 podman[328942]: 2025-12-06 07:32:42.206159104 +0000 UTC m=+0.020329940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:32:42 compute-0 podman[328942]: 2025-12-06 07:32:42.307441338 +0000 UTC m=+0.121612164 container init a496fa4167509facd5591e46536223d97f882b767cfd9f622907c3b2909f2391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elbakyan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:32:42 compute-0 podman[328942]: 2025-12-06 07:32:42.314898889 +0000 UTC m=+0.129069695 container start a496fa4167509facd5591e46536223d97f882b767cfd9f622907c3b2909f2391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elbakyan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 07:32:42 compute-0 podman[328942]: 2025-12-06 07:32:42.318292901 +0000 UTC m=+0.132463737 container attach a496fa4167509facd5591e46536223d97f882b767cfd9f622907c3b2909f2391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elbakyan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:32:42 compute-0 ceph-osd[84884]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Dec 06 07:32:42 compute-0 ceph-mon[74339]: pgmap v2259: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 07:32:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 07:32:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:32:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:32:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:32:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:32:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:32:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]: {
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:     "0": [
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:         {
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             "devices": [
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "/dev/loop3"
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             ],
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             "lv_name": "ceph_lv0",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             "lv_size": "7511998464",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             "name": "ceph_lv0",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             "tags": {
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "ceph.cluster_name": "ceph",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "ceph.crush_device_class": "",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "ceph.encrypted": "0",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "ceph.osd_id": "0",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "ceph.type": "block",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:                 "ceph.vdo": "0"
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             },
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             "type": "block",
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:             "vg_name": "ceph_vg0"
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:         }
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]:     ]
Dec 06 07:32:43 compute-0 lucid_elbakyan[328958]: }
Dec 06 07:32:43 compute-0 systemd[1]: libpod-a496fa4167509facd5591e46536223d97f882b767cfd9f622907c3b2909f2391.scope: Deactivated successfully.
Dec 06 07:32:43 compute-0 podman[328942]: 2025-12-06 07:32:43.102140472 +0000 UTC m=+0.916311278 container died a496fa4167509facd5591e46536223d97f882b767cfd9f622907c3b2909f2391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elbakyan, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:32:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:43.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a40fe98aafc884277625678a6a8733fceb0538817e1f4ab69c278c91b9b6b4a-merged.mount: Deactivated successfully.
Dec 06 07:32:43 compute-0 podman[328942]: 2025-12-06 07:32:43.501333289 +0000 UTC m=+1.315504095 container remove a496fa4167509facd5591e46536223d97f882b767cfd9f622907c3b2909f2391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_elbakyan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 07:32:43 compute-0 systemd[1]: libpod-conmon-a496fa4167509facd5591e46536223d97f882b767cfd9f622907c3b2909f2391.scope: Deactivated successfully.
Dec 06 07:32:43 compute-0 sudo[328838]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:43 compute-0 sudo[328983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:43 compute-0 sudo[328983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:43 compute-0 sudo[328983]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:43 compute-0 sudo[329008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:32:43 compute-0 sudo[329008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:43 compute-0 sudo[329008]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:43 compute-0 sudo[329033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:43 compute-0 sudo[329033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:43 compute-0 sudo[329033]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:43 compute-0 sudo[329058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:32:43 compute-0 sudo[329058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:43.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:44 compute-0 podman[329122]: 2025-12-06 07:32:44.068036808 +0000 UTC m=+0.040328689 container create 09313182e39f00050314a5e3a183b04c9c687306d9354f9f52d854f03dbc3bb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 07:32:44 compute-0 systemd[1]: Started libpod-conmon-09313182e39f00050314a5e3a183b04c9c687306d9354f9f52d854f03dbc3bb8.scope.
Dec 06 07:32:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:32:44 compute-0 podman[329122]: 2025-12-06 07:32:44.141379668 +0000 UTC m=+0.113671569 container init 09313182e39f00050314a5e3a183b04c9c687306d9354f9f52d854f03dbc3bb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:32:44 compute-0 podman[329122]: 2025-12-06 07:32:44.049496248 +0000 UTC m=+0.021788159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:32:44 compute-0 podman[329122]: 2025-12-06 07:32:44.146440635 +0000 UTC m=+0.118732516 container start 09313182e39f00050314a5e3a183b04c9c687306d9354f9f52d854f03dbc3bb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 07:32:44 compute-0 great_curie[329138]: 167 167
Dec 06 07:32:44 compute-0 systemd[1]: libpod-09313182e39f00050314a5e3a183b04c9c687306d9354f9f52d854f03dbc3bb8.scope: Deactivated successfully.
Dec 06 07:32:44 compute-0 podman[329122]: 2025-12-06 07:32:44.150941326 +0000 UTC m=+0.123233207 container attach 09313182e39f00050314a5e3a183b04c9c687306d9354f9f52d854f03dbc3bb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curie, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:32:44 compute-0 podman[329122]: 2025-12-06 07:32:44.151181963 +0000 UTC m=+0.123473854 container died 09313182e39f00050314a5e3a183b04c9c687306d9354f9f52d854f03dbc3bb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:32:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-de53af5fffece1faaadb01cf7a054aa53024a9970c124dd56b269de0dd2c42f9-merged.mount: Deactivated successfully.
Dec 06 07:32:44 compute-0 podman[329122]: 2025-12-06 07:32:44.18552606 +0000 UTC m=+0.157817941 container remove 09313182e39f00050314a5e3a183b04c9c687306d9354f9f52d854f03dbc3bb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_curie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 07:32:44 compute-0 systemd[1]: libpod-conmon-09313182e39f00050314a5e3a183b04c9c687306d9354f9f52d854f03dbc3bb8.scope: Deactivated successfully.
Dec 06 07:32:44 compute-0 nova_compute[251992]: 2025-12-06 07:32:44.306 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:44 compute-0 podman[329161]: 2025-12-06 07:32:44.326735272 +0000 UTC m=+0.039762715 container create c0ce60ce34d01ac3ead1e6c98223d21fc83f2b69c1def48a1eb0eb3b6af09f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:32:44 compute-0 systemd[1]: Started libpod-conmon-c0ce60ce34d01ac3ead1e6c98223d21fc83f2b69c1def48a1eb0eb3b6af09f06.scope.
Dec 06 07:32:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3bf8ccbe000dfdf86c956390e74f76b65ba5af9acf3203decd124c83e2fd953/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3bf8ccbe000dfdf86c956390e74f76b65ba5af9acf3203decd124c83e2fd953/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3bf8ccbe000dfdf86c956390e74f76b65ba5af9acf3203decd124c83e2fd953/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3bf8ccbe000dfdf86c956390e74f76b65ba5af9acf3203decd124c83e2fd953/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:32:44 compute-0 podman[329161]: 2025-12-06 07:32:44.309432024 +0000 UTC m=+0.022459497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:32:44 compute-0 podman[329161]: 2025-12-06 07:32:44.415660332 +0000 UTC m=+0.128687805 container init c0ce60ce34d01ac3ead1e6c98223d21fc83f2b69c1def48a1eb0eb3b6af09f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 07:32:44 compute-0 podman[329161]: 2025-12-06 07:32:44.428341555 +0000 UTC m=+0.141368988 container start c0ce60ce34d01ac3ead1e6c98223d21fc83f2b69c1def48a1eb0eb3b6af09f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bouman, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:32:44 compute-0 podman[329161]: 2025-12-06 07:32:44.432220009 +0000 UTC m=+0.145247542 container attach c0ce60ce34d01ac3ead1e6c98223d21fc83f2b69c1def48a1eb0eb3b6af09f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bouman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:32:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 206 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 15 KiB/s wr, 22 op/s
Dec 06 07:32:44 compute-0 ceph-mon[74339]: pgmap v2260: 305 pgs: 305 active+clean; 213 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 5.5 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 07:32:45 compute-0 frosty_bouman[329177]: {
Dec 06 07:32:45 compute-0 frosty_bouman[329177]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:32:45 compute-0 frosty_bouman[329177]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:32:45 compute-0 frosty_bouman[329177]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:32:45 compute-0 frosty_bouman[329177]:         "osd_id": 0,
Dec 06 07:32:45 compute-0 frosty_bouman[329177]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:32:45 compute-0 frosty_bouman[329177]:         "type": "bluestore"
Dec 06 07:32:45 compute-0 frosty_bouman[329177]:     }
Dec 06 07:32:45 compute-0 frosty_bouman[329177]: }
Dec 06 07:32:45 compute-0 systemd[1]: libpod-c0ce60ce34d01ac3ead1e6c98223d21fc83f2b69c1def48a1eb0eb3b6af09f06.scope: Deactivated successfully.
Dec 06 07:32:45 compute-0 podman[329200]: 2025-12-06 07:32:45.358130226 +0000 UTC m=+0.024303797 container died c0ce60ce34d01ac3ead1e6c98223d21fc83f2b69c1def48a1eb0eb3b6af09f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:32:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3bf8ccbe000dfdf86c956390e74f76b65ba5af9acf3203decd124c83e2fd953-merged.mount: Deactivated successfully.
Dec 06 07:32:45 compute-0 podman[329200]: 2025-12-06 07:32:45.408060474 +0000 UTC m=+0.074234025 container remove c0ce60ce34d01ac3ead1e6c98223d21fc83f2b69c1def48a1eb0eb3b6af09f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:32:45 compute-0 systemd[1]: libpod-conmon-c0ce60ce34d01ac3ead1e6c98223d21fc83f2b69c1def48a1eb0eb3b6af09f06.scope: Deactivated successfully.
Dec 06 07:32:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:32:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:45.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:32:45 compute-0 sudo[329058]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:32:45 compute-0 nova_compute[251992]: 2025-12-06 07:32:45.449 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:45.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:46 compute-0 ceph-mon[74339]: pgmap v2261: 305 pgs: 305 active+clean; 206 MiB data, 936 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 15 KiB/s wr, 22 op/s
Dec 06 07:32:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:32:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 71bd0ebe-250f-49c2-959a-6476fd695b3d does not exist
Dec 06 07:32:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1d5aefd4-0182-499f-a144-bac2e5b87b36 does not exist
Dec 06 07:32:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1e972a40-a774-4052-92b7-a9a150f3badd does not exist
Dec 06 07:32:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 167 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 15 KiB/s wr, 92 op/s
Dec 06 07:32:46 compute-0 sudo[329216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:46 compute-0 sudo[329216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:46 compute-0 sudo[329216]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:46 compute-0 ovn_controller[147168]: 2025-12-06T07:32:46Z|00433|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 06 07:32:46 compute-0 sudo[329241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:32:46 compute-0 sudo[329241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:46 compute-0 sudo[329241]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:47.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:32:47 compute-0 ceph-mon[74339]: pgmap v2262: 305 pgs: 305 active+clean; 167 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 15 KiB/s wr, 92 op/s
Dec 06 07:32:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:47.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 16 KiB/s wr, 108 op/s
Dec 06 07:32:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:49.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:49 compute-0 nova_compute[251992]: 2025-12-06 07:32:49.505 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:49.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:50 compute-0 nova_compute[251992]: 2025-12-06 07:32:50.452 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 16 KiB/s wr, 122 op/s
Dec 06 07:32:50 compute-0 ceph-mon[74339]: pgmap v2263: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 16 KiB/s wr, 108 op/s
Dec 06 07:32:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:51.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:51.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 16 KiB/s wr, 115 op/s
Dec 06 07:32:53 compute-0 ceph-mon[74339]: pgmap v2264: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 16 KiB/s wr, 122 op/s
Dec 06 07:32:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:32:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:53.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:32:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:53.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.209609) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374209635, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 711, "num_deletes": 250, "total_data_size": 1004257, "memory_usage": 1017824, "flush_reason": "Manual Compaction"}
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374219404, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 700514, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45040, "largest_seqno": 45750, "table_properties": {"data_size": 697108, "index_size": 1186, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9385, "raw_average_key_size": 21, "raw_value_size": 689914, "raw_average_value_size": 1582, "num_data_blocks": 51, "num_entries": 436, "num_filter_entries": 436, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006300, "oldest_key_time": 1765006300, "file_creation_time": 1765006374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 9839 microseconds, and 2456 cpu microseconds.
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.219445) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 700514 bytes OK
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.219460) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.221607) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.221619) EVENT_LOG_v1 {"time_micros": 1765006374221615, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.221631) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 1000542, prev total WAL file size 1000542, number of live WAL files 2.
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.222046) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353235' seq:72057594037927935, type:22 .. '6D6772737461740031373736' seq:0, type:0; will stop at (end)
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(684KB)], [95(12MB)]
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374222073, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 13850814, "oldest_snapshot_seqno": -1}
Dec 06 07:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:32:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1505189984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:32:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1505189984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 7833 keys, 10259588 bytes, temperature: kUnknown
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374289526, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10259588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10209867, "index_size": 29065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 203379, "raw_average_key_size": 25, "raw_value_size": 10072705, "raw_average_value_size": 1285, "num_data_blocks": 1141, "num_entries": 7833, "num_filter_entries": 7833, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765006374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.289823) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10259588 bytes
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.291380) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.0 rd, 151.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(34.4) write-amplify(14.6) OK, records in: 8333, records dropped: 500 output_compression: NoCompression
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.291398) EVENT_LOG_v1 {"time_micros": 1765006374291389, "job": 56, "event": "compaction_finished", "compaction_time_micros": 67561, "compaction_time_cpu_micros": 24650, "output_level": 6, "num_output_files": 1, "total_output_size": 10259588, "num_input_records": 8333, "num_output_records": 7833, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374291827, "job": 56, "event": "table_file_deletion", "file_number": 97}
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006374294302, "job": 56, "event": "table_file_deletion", "file_number": 95}
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.221970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.294415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.294419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.294421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.294423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:32:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:32:54.294425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:32:54 compute-0 nova_compute[251992]: 2025-12-06 07:32:54.508 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 16 KiB/s wr, 116 op/s
Dec 06 07:32:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:55.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:55 compute-0 nova_compute[251992]: 2025-12-06 07:32:55.454 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:55 compute-0 ceph-mon[74339]: pgmap v2265: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 16 KiB/s wr, 115 op/s
Dec 06 07:32:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1505189984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:32:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1505189984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:32:55 compute-0 sudo[329270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:55 compute-0 sudo[329270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:55 compute-0 sudo[329270]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:55 compute-0 sudo[329295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:32:55 compute-0 sudo[329295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:32:55 compute-0 sudo[329295]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:55.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 KiB/s wr, 108 op/s
Dec 06 07:32:56 compute-0 ceph-mon[74339]: pgmap v2266: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 16 KiB/s wr, 116 op/s
Dec 06 07:32:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:32:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:57.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:32:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:32:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:57.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:32:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 172 MiB data, 919 MiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 360 KiB/s wr, 47 op/s
Dec 06 07:32:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:32:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:32:59.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:32:59 compute-0 nova_compute[251992]: 2025-12-06 07:32:59.510 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:32:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:32:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:32:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:32:59.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:00 compute-0 ceph-mon[74339]: pgmap v2267: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 KiB/s wr, 108 op/s
Dec 06 07:33:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3433138790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:00 compute-0 nova_compute[251992]: 2025-12-06 07:33:00.456 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 189 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.0 MiB/s wr, 58 op/s
Dec 06 07:33:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:01.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:01 compute-0 ceph-mon[74339]: pgmap v2268: 305 pgs: 305 active+clean; 172 MiB data, 919 MiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 360 KiB/s wr, 47 op/s
Dec 06 07:33:01 compute-0 ceph-mon[74339]: pgmap v2269: 305 pgs: 305 active+clean; 189 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.0 MiB/s wr, 58 op/s
Dec 06 07:33:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:01.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 189 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Dec 06 07:33:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:03.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:03.840 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:03.840 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:03.841 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:03.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:04 compute-0 ceph-mon[74339]: pgmap v2270: 305 pgs: 305 active+clean; 189 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Dec 06 07:33:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:33:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3258911594' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:33:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:33:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3258911594' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:33:04 compute-0 nova_compute[251992]: 2025-12-06 07:33:04.511 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 189 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Dec 06 07:33:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:05.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:05 compute-0 nova_compute[251992]: 2025-12-06 07:33:05.457 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:05 compute-0 podman[329325]: 2025-12-06 07:33:05.476140881 +0000 UTC m=+0.129700872 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 07:33:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3258911594' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:33:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3258911594' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:33:05 compute-0 ceph-mon[74339]: pgmap v2271: 305 pgs: 305 active+clean; 189 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Dec 06 07:33:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:05.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 231 MiB data, 955 MiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 3.4 MiB/s wr, 87 op/s
Dec 06 07:33:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:07.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:08 compute-0 ceph-mon[74339]: pgmap v2272: 305 pgs: 305 active+clean; 231 MiB data, 955 MiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 3.4 MiB/s wr, 87 op/s
Dec 06 07:33:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 242 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 06 07:33:08 compute-0 nova_compute[251992]: 2025-12-06 07:33:08.760 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:08 compute-0 nova_compute[251992]: 2025-12-06 07:33:08.785 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:08 compute-0 nova_compute[251992]: 2025-12-06 07:33:08.810 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:08 compute-0 nova_compute[251992]: 2025-12-06 07:33:08.810 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:08 compute-0 nova_compute[251992]: 2025-12-06 07:33:08.810 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:08 compute-0 nova_compute[251992]: 2025-12-06 07:33:08.810 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:33:08 compute-0 nova_compute[251992]: 2025-12-06 07:33:08.811 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:33:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1667020030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:09 compute-0 nova_compute[251992]: 2025-12-06 07:33:09.276 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2456836380' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:33:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2456836380' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:33:09 compute-0 nova_compute[251992]: 2025-12-06 07:33:09.433 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:33:09 compute-0 nova_compute[251992]: 2025-12-06 07:33:09.435 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4445MB free_disk=20.94310760498047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:33:09 compute-0 nova_compute[251992]: 2025-12-06 07:33:09.435 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:09 compute-0 nova_compute[251992]: 2025-12-06 07:33:09.435 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:09.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:09 compute-0 nova_compute[251992]: 2025-12-06 07:33:09.511 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:09 compute-0 nova_compute[251992]: 2025-12-06 07:33:09.551 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:33:09 compute-0 nova_compute[251992]: 2025-12-06 07:33:09.551 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:33:09 compute-0 nova_compute[251992]: 2025-12-06 07:33:09.585 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:09.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:33:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192691524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:10 compute-0 nova_compute[251992]: 2025-12-06 07:33:10.040 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:10 compute-0 nova_compute[251992]: 2025-12-06 07:33:10.047 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:33:10 compute-0 nova_compute[251992]: 2025-12-06 07:33:10.075 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:33:10 compute-0 nova_compute[251992]: 2025-12-06 07:33:10.078 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:33:10 compute-0 nova_compute[251992]: 2025-12-06 07:33:10.078 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:10 compute-0 ceph-mon[74339]: pgmap v2273: 305 pgs: 305 active+clean; 242 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 06 07:33:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1667020030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2134285174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/192691524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:10 compute-0 nova_compute[251992]: 2025-12-06 07:33:10.460 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 221 MiB data, 947 MiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 3.6 MiB/s wr, 103 op/s
Dec 06 07:33:11 compute-0 podman[329398]: 2025-12-06 07:33:11.412121891 +0000 UTC m=+0.074839261 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:33:11 compute-0 podman[329399]: 2025-12-06 07:33:11.412125102 +0000 UTC m=+0.065319394 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:33:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:33:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:11.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:33:11 compute-0 ceph-mon[74339]: pgmap v2274: 305 pgs: 305 active+clean; 221 MiB data, 947 MiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 3.6 MiB/s wr, 103 op/s
Dec 06 07:33:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4278511991' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:33:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4278511991' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:33:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4150334743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:11.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 221 MiB data, 947 MiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 1.9 MiB/s wr, 77 op/s
Dec 06 07:33:12 compute-0 nova_compute[251992]: 2025-12-06 07:33:12.950 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:12 compute-0 nova_compute[251992]: 2025-12-06 07:33:12.951 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:33:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:33:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:33:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:33:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:33:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:33:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:13.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:13 compute-0 nova_compute[251992]: 2025-12-06 07:33:13.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:13 compute-0 nova_compute[251992]: 2025-12-06 07:33:13.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:13.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:14 compute-0 nova_compute[251992]: 2025-12-06 07:33:14.515 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:14 compute-0 nova_compute[251992]: 2025-12-06 07:33:14.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:14 compute-0 nova_compute[251992]: 2025-12-06 07:33:14.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:14 compute-0 ceph-mon[74339]: pgmap v2275: 305 pgs: 305 active+clean; 221 MiB data, 947 MiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 1.9 MiB/s wr, 77 op/s
Dec 06 07:33:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 194 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 1.9 MiB/s wr, 85 op/s
Dec 06 07:33:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:15.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:15 compute-0 nova_compute[251992]: 2025-12-06 07:33:15.462 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:15.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:16 compute-0 sudo[329439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:16 compute-0 sudo[329439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:16 compute-0 sudo[329439]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:16 compute-0 sudo[329464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:16 compute-0 sudo[329464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:16 compute-0 sudo[329464]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:16 compute-0 ceph-mon[74339]: pgmap v2276: 305 pgs: 305 active+clean; 194 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 1.9 MiB/s wr, 85 op/s
Dec 06 07:33:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/527801579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 274 KiB/s rd, 1.9 MiB/s wr, 73 op/s
Dec 06 07:33:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:17.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:17.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:33:18
Dec 06 07:33:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:33:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:33:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'images', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta']
Dec 06 07:33:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:33:18 compute-0 ceph-mon[74339]: pgmap v2277: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 274 KiB/s rd, 1.9 MiB/s wr, 73 op/s
Dec 06 07:33:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2460644784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1407645777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 551 KiB/s wr, 42 op/s
Dec 06 07:33:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:19.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:19 compute-0 nova_compute[251992]: 2025-12-06 07:33:19.518 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:19 compute-0 ceph-mon[74339]: pgmap v2278: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 551 KiB/s wr, 42 op/s
Dec 06 07:33:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/107927179' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:33:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:19.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:20 compute-0 nova_compute[251992]: 2025-12-06 07:33:20.464 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:20 compute-0 nova_compute[251992]: 2025-12-06 07:33:20.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:20 compute-0 nova_compute[251992]: 2025-12-06 07:33:20.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:33:20 compute-0 nova_compute[251992]: 2025-12-06 07:33:20.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:20 compute-0 nova_compute[251992]: 2025-12-06 07:33:20.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:33:20 compute-0 nova_compute[251992]: 2025-12-06 07:33:20.705 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:33:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 38 KiB/s wr, 24 op/s
Dec 06 07:33:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:21.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:21 compute-0 nova_compute[251992]: 2025-12-06 07:33:21.704 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:21 compute-0 nova_compute[251992]: 2025-12-06 07:33:21.704 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:33:21 compute-0 nova_compute[251992]: 2025-12-06 07:33:21.704 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:33:21 compute-0 nova_compute[251992]: 2025-12-06 07:33:21.729 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:33:21 compute-0 ceph-mon[74339]: pgmap v2279: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 38 KiB/s wr, 24 op/s
Dec 06 07:33:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:33:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:21.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:33:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 5.8 KiB/s rd, 13 KiB/s wr, 9 op/s
Dec 06 07:33:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:33:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:23.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:33:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:33:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:33:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:33:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:33:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:33:23 compute-0 ceph-mon[74339]: pgmap v2280: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 5.8 KiB/s rd, 13 KiB/s wr, 9 op/s
Dec 06 07:33:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:33:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:23.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:33:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:24.175 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:33:24 compute-0 nova_compute[251992]: 2025-12-06 07:33:24.176 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:24.176 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:33:24 compute-0 nova_compute[251992]: 2025-12-06 07:33:24.519 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:33:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:33:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:33:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:33:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:33:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 5.8 KiB/s rd, 13 KiB/s wr, 9 op/s
Dec 06 07:33:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:25 compute-0 nova_compute[251992]: 2025-12-06 07:33:25.466 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:25.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021710476572678206 of space, bias 1.0, pg target 0.6513142971803462 quantized to 32 (current 32)
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009887399962776847 of space, bias 1.0, pg target 0.2966219988833054 quantized to 32 (current 32)
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:33:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:33:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:25.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:26 compute-0 ceph-mon[74339]: pgmap v2281: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 5.8 KiB/s rd, 13 KiB/s wr, 9 op/s
Dec 06 07:33:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3403146162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:33:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 14 KiB/s wr, 1 op/s
Dec 06 07:33:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:27.178 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:33:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Dec 06 07:33:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:27.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:27 compute-0 ceph-mon[74339]: pgmap v2282: 305 pgs: 305 active+clean; 167 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 14 KiB/s wr, 1 op/s
Dec 06 07:33:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Dec 06 07:33:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:27.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:28 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Dec 06 07:33:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.4 KiB/s rd, 16 KiB/s wr, 3 op/s
Dec 06 07:33:29 compute-0 ceph-mon[74339]: osdmap e274: 3 total, 3 up, 3 in
Dec 06 07:33:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:29.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:29 compute-0 nova_compute[251992]: 2025-12-06 07:33:29.521 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:29 compute-0 nova_compute[251992]: 2025-12-06 07:33:29.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:29 compute-0 nova_compute[251992]: 2025-12-06 07:33:29.662 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "70928eda-043f-429b-aa4e-af1f3189a7c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:29 compute-0 nova_compute[251992]: 2025-12-06 07:33:29.663 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:29 compute-0 nova_compute[251992]: 2025-12-06 07:33:29.699 251996 DEBUG nova.compute.manager [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:33:29 compute-0 nova_compute[251992]: 2025-12-06 07:33:29.813 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:29 compute-0 nova_compute[251992]: 2025-12-06 07:33:29.814 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:29 compute-0 nova_compute[251992]: 2025-12-06 07:33:29.821 251996 DEBUG nova.virt.hardware [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:33:29 compute-0 nova_compute[251992]: 2025-12-06 07:33:29.821 251996 INFO nova.compute.claims [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:33:29 compute-0 nova_compute[251992]: 2025-12-06 07:33:29.958 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:29.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:30 compute-0 ceph-mon[74339]: pgmap v2284: 305 pgs: 305 active+clean; 167 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.4 KiB/s rd, 16 KiB/s wr, 3 op/s
Dec 06 07:33:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:33:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2309148104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.415 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.421 251996 DEBUG nova.compute.provider_tree [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.440 251996 DEBUG nova.scheduler.client.report [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.469 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.473 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.474 251996 DEBUG nova.compute.manager [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.540 251996 DEBUG nova.compute.manager [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.541 251996 DEBUG nova.network.neutron [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.578 251996 INFO nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.603 251996 DEBUG nova.compute.manager [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:33:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 205 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.754 251996 DEBUG nova.compute.manager [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.756 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.757 251996 INFO nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Creating image(s)
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.792 251996 DEBUG nova.storage.rbd_utils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 70928eda-043f-429b-aa4e-af1f3189a7c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.820 251996 DEBUG nova.storage.rbd_utils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 70928eda-043f-429b-aa4e-af1f3189a7c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.847 251996 DEBUG nova.storage.rbd_utils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 70928eda-043f-429b-aa4e-af1f3189a7c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.851 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.914 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.915 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.916 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.916 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.943 251996 DEBUG nova.storage.rbd_utils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 70928eda-043f-429b-aa4e-af1f3189a7c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.947 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 70928eda-043f-429b-aa4e-af1f3189a7c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:30 compute-0 nova_compute[251992]: 2025-12-06 07:33:30.976 251996 DEBUG nova.policy [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2aa5b15c15f84a8cb24776d5c781eb09', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:33:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:31.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:33:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:31.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:33:32 compute-0 nova_compute[251992]: 2025-12-06 07:33:32.222 251996 DEBUG nova.network.neutron [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Successfully created port: dc251d3b-d52d-4043-b1db-dc8528b247d0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:33:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2309148104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 205 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Dec 06 07:33:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:33:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:33:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:34.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:34 compute-0 nova_compute[251992]: 2025-12-06 07:33:34.016 251996 DEBUG nova.network.neutron [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Successfully updated port: dc251d3b-d52d-4043-b1db-dc8528b247d0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:33:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:34 compute-0 nova_compute[251992]: 2025-12-06 07:33:34.118 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:33:34 compute-0 nova_compute[251992]: 2025-12-06 07:33:34.119 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquired lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:33:34 compute-0 nova_compute[251992]: 2025-12-06 07:33:34.119 251996 DEBUG nova.network.neutron [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:33:34 compute-0 nova_compute[251992]: 2025-12-06 07:33:34.166 251996 DEBUG nova.compute.manager [req-cdd31f68-a775-4942-933b-b6c258336744 req-cd4399a2-e668-46cc-91e7-3dad77fa334b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Received event network-changed-dc251d3b-d52d-4043-b1db-dc8528b247d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:33:34 compute-0 nova_compute[251992]: 2025-12-06 07:33:34.166 251996 DEBUG nova.compute.manager [req-cdd31f68-a775-4942-933b-b6c258336744 req-cd4399a2-e668-46cc-91e7-3dad77fa334b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Refreshing instance network info cache due to event network-changed-dc251d3b-d52d-4043-b1db-dc8528b247d0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:33:34 compute-0 nova_compute[251992]: 2025-12-06 07:33:34.166 251996 DEBUG oslo_concurrency.lockutils [req-cdd31f68-a775-4942-933b-b6c258336744 req-cd4399a2-e668-46cc-91e7-3dad77fa334b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:33:34 compute-0 nova_compute[251992]: 2025-12-06 07:33:34.347 251996 DEBUG nova.network.neutron [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:33:34 compute-0 nova_compute[251992]: 2025-12-06 07:33:34.556 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:34 compute-0 nova_compute[251992]: 2025-12-06 07:33:34.679 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:33:34 compute-0 nova_compute[251992]: 2025-12-06 07:33:34.679 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:33:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 254 MiB data, 967 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 5.0 MiB/s wr, 161 op/s
Dec 06 07:33:35 compute-0 nova_compute[251992]: 2025-12-06 07:33:35.472 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:35.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:36.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:36 compute-0 ceph-mon[74339]: pgmap v2285: 305 pgs: 305 active+clean; 205 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Dec 06 07:33:36 compute-0 ceph-mon[74339]: pgmap v2286: 305 pgs: 305 active+clean; 205 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Dec 06 07:33:36 compute-0 sudo[329615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:36 compute-0 sudo[329615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:36 compute-0 sudo[329615]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:36 compute-0 sudo[329646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:36 compute-0 sudo[329646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:36 compute-0 sudo[329646]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:36 compute-0 podman[329639]: 2025-12-06 07:33:36.284519396 +0000 UTC m=+0.098918612 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 06 07:33:36 compute-0 nova_compute[251992]: 2025-12-06 07:33:36.539 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 70928eda-043f-429b-aa4e-af1f3189a7c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:36 compute-0 nova_compute[251992]: 2025-12-06 07:33:36.620 251996 DEBUG nova.storage.rbd_utils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] resizing rbd image 70928eda-043f-429b-aa4e-af1f3189a7c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:33:36 compute-0 nova_compute[251992]: 2025-12-06 07:33:36.726 251996 DEBUG nova.objects.instance [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'migration_context' on Instance uuid 70928eda-043f-429b-aa4e-af1f3189a7c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:33:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 286 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.6 MiB/s wr, 189 op/s
Dec 06 07:33:36 compute-0 nova_compute[251992]: 2025-12-06 07:33:36.800 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:33:36 compute-0 nova_compute[251992]: 2025-12-06 07:33:36.802 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Ensure instance console log exists: /var/lib/nova/instances/70928eda-043f-429b-aa4e-af1f3189a7c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:33:36 compute-0 nova_compute[251992]: 2025-12-06 07:33:36.802 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:36 compute-0 nova_compute[251992]: 2025-12-06 07:33:36.802 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:36 compute-0 nova_compute[251992]: 2025-12-06 07:33:36.803 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:37 compute-0 ceph-mon[74339]: pgmap v2287: 305 pgs: 305 active+clean; 254 MiB data, 967 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 5.0 MiB/s wr, 161 op/s
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.400 251996 DEBUG nova.network.neutron [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Updating instance_info_cache with network_info: [{"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.438 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Releasing lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.438 251996 DEBUG nova.compute.manager [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Instance network_info: |[{"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.438 251996 DEBUG oslo_concurrency.lockutils [req-cdd31f68-a775-4942-933b-b6c258336744 req-cd4399a2-e668-46cc-91e7-3dad77fa334b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.439 251996 DEBUG nova.network.neutron [req-cdd31f68-a775-4942-933b-b6c258336744 req-cd4399a2-e668-46cc-91e7-3dad77fa334b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Refreshing network info cache for port dc251d3b-d52d-4043-b1db-dc8528b247d0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.442 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Start _get_guest_xml network_info=[{"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.447 251996 WARNING nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.452 251996 DEBUG nova.virt.libvirt.host [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.452 251996 DEBUG nova.virt.libvirt.host [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.457 251996 DEBUG nova.virt.libvirt.host [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.458 251996 DEBUG nova.virt.libvirt.host [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.459 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.459 251996 DEBUG nova.virt.hardware [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.460 251996 DEBUG nova.virt.hardware [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.460 251996 DEBUG nova.virt.hardware [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.460 251996 DEBUG nova.virt.hardware [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.460 251996 DEBUG nova.virt.hardware [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.460 251996 DEBUG nova.virt.hardware [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.461 251996 DEBUG nova.virt.hardware [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.461 251996 DEBUG nova.virt.hardware [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.461 251996 DEBUG nova.virt.hardware [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.461 251996 DEBUG nova.virt.hardware [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.462 251996 DEBUG nova.virt.hardware [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.465 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:37.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Dec 06 07:33:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Dec 06 07:33:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Dec 06 07:33:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:33:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378557285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.925 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.952 251996 DEBUG nova.storage.rbd_utils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 70928eda-043f-429b-aa4e-af1f3189a7c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:33:37 compute-0 nova_compute[251992]: 2025-12-06 07:33:37.956 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:38.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:38 compute-0 ceph-mon[74339]: pgmap v2288: 305 pgs: 305 active+clean; 286 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.6 MiB/s wr, 189 op/s
Dec 06 07:33:38 compute-0 ceph-mon[74339]: osdmap e275: 3 total, 3 up, 3 in
Dec 06 07:33:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2378557285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:33:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:33:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2246063812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.483 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.485 251996 DEBUG nova.virt.libvirt.vif [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:33:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-840641308',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-840641308',id=121,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-moptynqn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:33:30Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=70928eda-043f-429b-aa4e-af1f3189a7c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.486 251996 DEBUG nova.network.os_vif_util [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.487 251996 DEBUG nova.network.os_vif_util [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:63:d9,bridge_name='br-int',has_traffic_filtering=True,id=dc251d3b-d52d-4043-b1db-dc8528b247d0,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc251d3b-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.488 251996 DEBUG nova.objects.instance [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'pci_devices' on Instance uuid 70928eda-043f-429b-aa4e-af1f3189a7c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.520 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:33:38 compute-0 nova_compute[251992]:   <uuid>70928eda-043f-429b-aa4e-af1f3189a7c1</uuid>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   <name>instance-00000079</name>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-840641308</nova:name>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:33:37</nova:creationTime>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <nova:user uuid="2aa5b15c15f84a8cb24776d5c781eb09">tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member</nova:user>
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <nova:project uuid="17cdfa63c4424ec7a0eb4bb3d7372c14">tempest-ServerBootFromVolumeStableRescueTest-344238221</nova:project>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <nova:port uuid="dc251d3b-d52d-4043-b1db-dc8528b247d0">
Dec 06 07:33:38 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <system>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <entry name="serial">70928eda-043f-429b-aa4e-af1f3189a7c1</entry>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <entry name="uuid">70928eda-043f-429b-aa4e-af1f3189a7c1</entry>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     </system>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   <os>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   </os>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   <features>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   </features>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/70928eda-043f-429b-aa4e-af1f3189a7c1_disk">
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       </source>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/70928eda-043f-429b-aa4e-af1f3189a7c1_disk.config">
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       </source>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:33:38 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:63:63:d9"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <target dev="tapdc251d3b-d5"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/70928eda-043f-429b-aa4e-af1f3189a7c1/console.log" append="off"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <video>
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     </video>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:33:38 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:33:38 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:33:38 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:33:38 compute-0 nova_compute[251992]: </domain>
Dec 06 07:33:38 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.521 251996 DEBUG nova.compute.manager [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Preparing to wait for external event network-vif-plugged-dc251d3b-d52d-4043-b1db-dc8528b247d0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.522 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.522 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.523 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.524 251996 DEBUG nova.virt.libvirt.vif [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:33:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-840641308',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-840641308',id=121,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-moptynqn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:33:30Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=70928eda-043f-429b-aa4e-af1f3189a7c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.524 251996 DEBUG nova.network.os_vif_util [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.525 251996 DEBUG nova.network.os_vif_util [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:63:d9,bridge_name='br-int',has_traffic_filtering=True,id=dc251d3b-d52d-4043-b1db-dc8528b247d0,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc251d3b-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.525 251996 DEBUG os_vif [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:63:d9,bridge_name='br-int',has_traffic_filtering=True,id=dc251d3b-d52d-4043-b1db-dc8528b247d0,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc251d3b-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.526 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.527 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.527 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.532 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.532 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc251d3b-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.533 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc251d3b-d5, col_values=(('external_ids', {'iface-id': 'dc251d3b-d52d-4043-b1db-dc8528b247d0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:63:d9', 'vm-uuid': '70928eda-043f-429b-aa4e-af1f3189a7c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.534 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:38 compute-0 NetworkManager[48965]: <info>  [1765006418.5357] manager: (tapdc251d3b-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.537 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.541 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.542 251996 INFO os_vif [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:63:d9,bridge_name='br-int',has_traffic_filtering=True,id=dc251d3b-d52d-4043-b1db-dc8528b247d0,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc251d3b-d5')
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.611 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.612 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.612 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No VIF found with MAC fa:16:3e:63:63:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.613 251996 INFO nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Using config drive
Dec 06 07:33:38 compute-0 nova_compute[251992]: 2025-12-06 07:33:38.638 251996 DEBUG nova.storage.rbd_utils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 70928eda-043f-429b-aa4e-af1f3189a7c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:33:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 293 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.8 MiB/s wr, 188 op/s
Dec 06 07:33:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Dec 06 07:33:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Dec 06 07:33:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Dec 06 07:33:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.304 251996 INFO nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Creating config drive at /var/lib/nova/instances/70928eda-043f-429b-aa4e-af1f3189a7c1/disk.config
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.310 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/70928eda-043f-429b-aa4e-af1f3189a7c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp418j7w_p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.442 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/70928eda-043f-429b-aa4e-af1f3189a7c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp418j7w_p" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.469 251996 DEBUG nova.storage.rbd_utils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image 70928eda-043f-429b-aa4e-af1f3189a7c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.473 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/70928eda-043f-429b-aa4e-af1f3189a7c1/disk.config 70928eda-043f-429b-aa4e-af1f3189a7c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:33:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:39.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.557 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2246063812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:33:39 compute-0 ceph-mon[74339]: osdmap e276: 3 total, 3 up, 3 in
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.760 251996 DEBUG nova.network.neutron [req-cdd31f68-a775-4942-933b-b6c258336744 req-cd4399a2-e668-46cc-91e7-3dad77fa334b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Updated VIF entry in instance network info cache for port dc251d3b-d52d-4043-b1db-dc8528b247d0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.760 251996 DEBUG nova.network.neutron [req-cdd31f68-a775-4942-933b-b6c258336744 req-cd4399a2-e668-46cc-91e7-3dad77fa334b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Updating instance_info_cache with network_info: [{"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.783 251996 DEBUG oslo_concurrency.lockutils [req-cdd31f68-a775-4942-933b-b6c258336744 req-cd4399a2-e668-46cc-91e7-3dad77fa334b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.886 251996 DEBUG oslo_concurrency.processutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/70928eda-043f-429b-aa4e-af1f3189a7c1/disk.config 70928eda-043f-429b-aa4e-af1f3189a7c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.887 251996 INFO nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Deleting local config drive /var/lib/nova/instances/70928eda-043f-429b-aa4e-af1f3189a7c1/disk.config because it was imported into RBD.
Dec 06 07:33:39 compute-0 kernel: tapdc251d3b-d5: entered promiscuous mode
Dec 06 07:33:39 compute-0 NetworkManager[48965]: <info>  [1765006419.9399] manager: (tapdc251d3b-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Dec 06 07:33:39 compute-0 ovn_controller[147168]: 2025-12-06T07:33:39Z|00434|binding|INFO|Claiming lport dc251d3b-d52d-4043-b1db-dc8528b247d0 for this chassis.
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.940 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:39 compute-0 ovn_controller[147168]: 2025-12-06T07:33:39Z|00435|binding|INFO|dc251d3b-d52d-4043-b1db-dc8528b247d0: Claiming fa:16:3e:63:63:d9 10.100.0.9
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.946 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:39 compute-0 nova_compute[251992]: 2025-12-06 07:33:39.948 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:39.954 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:63:d9 10.100.0.9'], port_security=['fa:16:3e:63:63:d9 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '70928eda-043f-429b-aa4e-af1f3189a7c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=dc251d3b-d52d-4043-b1db-dc8528b247d0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:33:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:39.956 158118 INFO neutron.agent.ovn.metadata.agent [-] Port dc251d3b-d52d-4043-b1db-dc8528b247d0 in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 bound to our chassis
Dec 06 07:33:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:39.957 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:33:39 compute-0 systemd-udevd[329899]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:33:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:39.972 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a74d7164-29ea-43d8-80bf-339b57178308]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:39.974 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap40bc9d32-81 in ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:33:39 compute-0 systemd-machined[212986]: New machine qemu-54-instance-00000079.
Dec 06 07:33:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:39.976 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap40bc9d32-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:33:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:39.976 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2552c3b3-2dfe-470c-bc7b-4d7652f1a5c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:39.979 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c4bcb092-e525-4c55-bcca-a4ff1dd1681c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:39 compute-0 NetworkManager[48965]: <info>  [1765006419.9894] device (tapdc251d3b-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:33:39 compute-0 NetworkManager[48965]: <info>  [1765006419.9906] device (tapdc251d3b-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:33:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:39.992 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[87cf0192-d602-4b2d-9ffa-d872cc0c2717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.010 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:40.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:40 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-00000079.
Dec 06 07:33:40 compute-0 ovn_controller[147168]: 2025-12-06T07:33:40Z|00436|binding|INFO|Setting lport dc251d3b-d52d-4043-b1db-dc8528b247d0 ovn-installed in OVS
Dec 06 07:33:40 compute-0 ovn_controller[147168]: 2025-12-06T07:33:40Z|00437|binding|INFO|Setting lport dc251d3b-d52d-4043-b1db-dc8528b247d0 up in Southbound
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.015 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.017 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c0104386-abdf-4061-a8f0-f26785fd6180]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.055 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ab048e8d-d3b3-41c0-af35-220b4860110f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.060 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[21e1714c-a839-4406-bd3d-ccede8608f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 NetworkManager[48965]: <info>  [1765006420.0615] manager: (tap40bc9d32-80): new Veth device (/org/freedesktop/NetworkManager/Devices/215)
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.090 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba4c3e7-ae4e-4d52-9a83-3968db613080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.093 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f69bf9-57f8-4a65-b559-801d5212683b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 NetworkManager[48965]: <info>  [1765006420.1236] device (tap40bc9d32-80): carrier: link connected
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.128 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[43f02256-f60b-44d7-bf88-b7cdbdc4772f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.147 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ab091a0e-6171-47d8-bce1-4d5327ba3642]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 42370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329931, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.165 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6492ccba-e0ee-469e-985e-5fab367f78d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:6673'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669271, 'tstamp': 669271}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329932, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.184 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1e136822-a70f-4bd8-bd37-67d8505a890d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 42370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329933, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.218 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[322f714a-8343-4a9b-87b9-25e8ded6e4d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.273 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[36e0a810-2b98-4f4c-a38a-078bee3aaae9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.277 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.277 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.277 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:33:40 compute-0 kernel: tap40bc9d32-80: entered promiscuous mode
Dec 06 07:33:40 compute-0 NetworkManager[48965]: <info>  [1765006420.2799] manager: (tap40bc9d32-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.279 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.282 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.283 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:40 compute-0 ovn_controller[147168]: 2025-12-06T07:33:40Z|00438|binding|INFO|Releasing lport 0d2044a5-87cb-4c28-912c-9a2682bb94de from this chassis (sb_readonly=0)
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.297 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.298 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.298 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/40bc9d32-839b-4591-acbc-c5d535123ff1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/40bc9d32-839b-4591-acbc-c5d535123ff1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.299 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8941d46c-526f-4cb9-b563-03beed15da35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.301 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/40bc9d32-839b-4591-acbc-c5d535123ff1.pid.haproxy
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:33:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:33:40.303 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'env', 'PROCESS_TAG=haproxy-40bc9d32-839b-4591-acbc-c5d535123ff1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/40bc9d32-839b-4591-acbc-c5d535123ff1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.477 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006420.4766986, 70928eda-043f-429b-aa4e-af1f3189a7c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.479 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] VM Started (Lifecycle Event)
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.514 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.519 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006420.4768295, 70928eda-043f-429b-aa4e-af1f3189a7c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.520 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] VM Paused (Lifecycle Event)
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.544 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.550 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.578 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:33:40 compute-0 podman[330008]: 2025-12-06 07:33:40.683608945 +0000 UTC m=+0.049665172 container create 744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:33:40 compute-0 systemd[1]: Started libpod-conmon-744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31.scope.
Dec 06 07:33:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 293 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.0 MiB/s wr, 148 op/s
Dec 06 07:33:40 compute-0 ceph-mon[74339]: pgmap v2290: 305 pgs: 305 active+clean; 293 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.8 MiB/s wr, 188 op/s
Dec 06 07:33:40 compute-0 podman[330008]: 2025-12-06 07:33:40.65747278 +0000 UTC m=+0.023529027 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:33:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:33:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9caba10c024b8ae751ba6f3ad6a10afe0dd02e2e759963162327dfaef4c9d31c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:40 compute-0 podman[330008]: 2025-12-06 07:33:40.773174983 +0000 UTC m=+0.139231220 container init 744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:33:40 compute-0 podman[330008]: 2025-12-06 07:33:40.78008233 +0000 UTC m=+0.146138557 container start 744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:33:40 compute-0 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[330023]: [NOTICE]   (330027) : New worker (330029) forked
Dec 06 07:33:40 compute-0 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[330023]: [NOTICE]   (330027) : Loading success.
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.962 251996 DEBUG nova.compute.manager [req-9548acc7-0efb-417d-a245-1eff31f9137f req-75c12da4-6186-46c8-95c3-15bfaa3dff5d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Received event network-vif-plugged-dc251d3b-d52d-4043-b1db-dc8528b247d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.964 251996 DEBUG oslo_concurrency.lockutils [req-9548acc7-0efb-417d-a245-1eff31f9137f req-75c12da4-6186-46c8-95c3-15bfaa3dff5d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.965 251996 DEBUG oslo_concurrency.lockutils [req-9548acc7-0efb-417d-a245-1eff31f9137f req-75c12da4-6186-46c8-95c3-15bfaa3dff5d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.965 251996 DEBUG oslo_concurrency.lockutils [req-9548acc7-0efb-417d-a245-1eff31f9137f req-75c12da4-6186-46c8-95c3-15bfaa3dff5d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.965 251996 DEBUG nova.compute.manager [req-9548acc7-0efb-417d-a245-1eff31f9137f req-75c12da4-6186-46c8-95c3-15bfaa3dff5d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Processing event network-vif-plugged-dc251d3b-d52d-4043-b1db-dc8528b247d0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.966 251996 DEBUG nova.compute.manager [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.970 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006420.9701319, 70928eda-043f-429b-aa4e-af1f3189a7c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.970 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] VM Resumed (Lifecycle Event)
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.972 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.975 251996 INFO nova.virt.libvirt.driver [-] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Instance spawned successfully.
Dec 06 07:33:40 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.975 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:40.999 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:41.000 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:41.000 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:41.001 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:41.001 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:41.002 251996 DEBUG nova.virt.libvirt.driver [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:41.006 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:41.010 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:41.041 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:41.061 251996 INFO nova.compute.manager [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Took 10.31 seconds to spawn the instance on the hypervisor.
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:41.062 251996 DEBUG nova.compute.manager [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:41.136 251996 INFO nova.compute.manager [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Took 11.36 seconds to build instance.
Dec 06 07:33:41 compute-0 nova_compute[251992]: 2025-12-06 07:33:41.151 251996 DEBUG oslo_concurrency.lockutils [None req-f40f77df-8f19-43b1-96f0-1697374268b6 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:41.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:42.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:42 compute-0 podman[330038]: 2025-12-06 07:33:42.399942661 +0000 UTC m=+0.057896775 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 07:33:42 compute-0 podman[330039]: 2025-12-06 07:33:42.411936464 +0000 UTC m=+0.066450345 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec 06 07:33:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 293 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 263 KiB/s rd, 2.3 MiB/s wr, 80 op/s
Dec 06 07:33:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:33:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:33:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:33:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:33:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:33:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:33:43 compute-0 nova_compute[251992]: 2025-12-06 07:33:43.159 251996 DEBUG nova.compute.manager [req-713cd30f-661e-41ae-a63e-a6614e20a2b5 req-b11d2bba-606b-4764-976c-089fd7d9f439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Received event network-vif-plugged-dc251d3b-d52d-4043-b1db-dc8528b247d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:33:43 compute-0 nova_compute[251992]: 2025-12-06 07:33:43.159 251996 DEBUG oslo_concurrency.lockutils [req-713cd30f-661e-41ae-a63e-a6614e20a2b5 req-b11d2bba-606b-4764-976c-089fd7d9f439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:33:43 compute-0 nova_compute[251992]: 2025-12-06 07:33:43.159 251996 DEBUG oslo_concurrency.lockutils [req-713cd30f-661e-41ae-a63e-a6614e20a2b5 req-b11d2bba-606b-4764-976c-089fd7d9f439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:33:43 compute-0 nova_compute[251992]: 2025-12-06 07:33:43.159 251996 DEBUG oslo_concurrency.lockutils [req-713cd30f-661e-41ae-a63e-a6614e20a2b5 req-b11d2bba-606b-4764-976c-089fd7d9f439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:33:43 compute-0 nova_compute[251992]: 2025-12-06 07:33:43.160 251996 DEBUG nova.compute.manager [req-713cd30f-661e-41ae-a63e-a6614e20a2b5 req-b11d2bba-606b-4764-976c-089fd7d9f439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] No waiting events found dispatching network-vif-plugged-dc251d3b-d52d-4043-b1db-dc8528b247d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:33:43 compute-0 nova_compute[251992]: 2025-12-06 07:33:43.160 251996 WARNING nova.compute.manager [req-713cd30f-661e-41ae-a63e-a6614e20a2b5 req-b11d2bba-606b-4764-976c-089fd7d9f439 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Received unexpected event network-vif-plugged-dc251d3b-d52d-4043-b1db-dc8528b247d0 for instance with vm_state active and task_state None.
Dec 06 07:33:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:33:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:43.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:33:43 compute-0 nova_compute[251992]: 2025-12-06 07:33:43.549 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:43 compute-0 nova_compute[251992]: 2025-12-06 07:33:43.702 251996 DEBUG nova.compute.manager [None req-c320b73d-1078-454d-8053-092a7610861a 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:33:43 compute-0 nova_compute[251992]: 2025-12-06 07:33:43.753 251996 INFO nova.compute.manager [None req-c320b73d-1078-454d-8053-092a7610861a 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] instance snapshotting
Dec 06 07:33:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:44.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Dec 06 07:33:44 compute-0 nova_compute[251992]: 2025-12-06 07:33:44.129 251996 INFO nova.virt.libvirt.driver [None req-c320b73d-1078-454d-8053-092a7610861a 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Beginning live snapshot process
Dec 06 07:33:44 compute-0 nova_compute[251992]: 2025-12-06 07:33:44.270 251996 DEBUG nova.virt.libvirt.imagebackend [None req-c320b73d-1078-454d-8053-092a7610861a 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:33:44 compute-0 nova_compute[251992]: 2025-12-06 07:33:44.560 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:44 compute-0 nova_compute[251992]: 2025-12-06 07:33:44.703 251996 DEBUG nova.storage.rbd_utils [None req-c320b73d-1078-454d-8053-092a7610861a 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] creating snapshot(d1675ca2d21b40a7aa71abad2fce445d) on rbd image(70928eda-043f-429b-aa4e-af1f3189a7c1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:33:44 compute-0 ceph-mon[74339]: pgmap v2292: 305 pgs: 305 active+clean; 293 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.0 MiB/s wr, 148 op/s
Dec 06 07:33:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 293 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 731 KiB/s rd, 321 KiB/s wr, 74 op/s
Dec 06 07:33:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:33:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:45.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:33:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:33:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:46.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:33:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Dec 06 07:33:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Dec 06 07:33:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 299 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.6 MiB/s wr, 184 op/s
Dec 06 07:33:47 compute-0 sudo[330131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:47 compute-0 sudo[330131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:47 compute-0 sudo[330131]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Dec 06 07:33:47 compute-0 sudo[330156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:33:47 compute-0 sudo[330156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:47 compute-0 sudo[330156]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:47 compute-0 ceph-mon[74339]: pgmap v2293: 305 pgs: 305 active+clean; 293 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 263 KiB/s rd, 2.3 MiB/s wr, 80 op/s
Dec 06 07:33:47 compute-0 ceph-mon[74339]: pgmap v2294: 305 pgs: 305 active+clean; 293 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 731 KiB/s rd, 321 KiB/s wr, 74 op/s
Dec 06 07:33:47 compute-0 sudo[330181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:47 compute-0 sudo[330181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:47 compute-0 sudo[330181]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:47 compute-0 sudo[330206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:33:47 compute-0 sudo[330206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:47.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:47 compute-0 sudo[330206]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:48.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:48 compute-0 nova_compute[251992]: 2025-12-06 07:33:48.552 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 305 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 153 op/s
Dec 06 07:33:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Dec 06 07:33:49 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Dec 06 07:33:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:49.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:49 compute-0 nova_compute[251992]: 2025-12-06 07:33:49.561 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:50.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:33:50 compute-0 ceph-mon[74339]: osdmap e277: 3 total, 3 up, 3 in
Dec 06 07:33:50 compute-0 ceph-mon[74339]: pgmap v2296: 305 pgs: 305 active+clean; 299 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.6 MiB/s wr, 184 op/s
Dec 06 07:33:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2904693144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:50 compute-0 nova_compute[251992]: 2025-12-06 07:33:50.461 251996 DEBUG nova.storage.rbd_utils [None req-c320b73d-1078-454d-8053-092a7610861a 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] cloning vms/70928eda-043f-429b-aa4e-af1f3189a7c1_disk@d1675ca2d21b40a7aa71abad2fce445d to images/deed3bf2-6426-476b-8bb8-98ac272255a2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:33:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:33:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:33:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 313 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 158 op/s
Dec 06 07:33:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:33:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:33:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:33:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:33:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:33:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:33:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:33:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3d0c9fde-bdd1-4ef3-90db-532c7bb0712e does not exist
Dec 06 07:33:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4b28a674-c7ac-493b-8206-a33fa615facc does not exist
Dec 06 07:33:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b2343d76-49f3-45de-845e-7e5543312971 does not exist
Dec 06 07:33:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:33:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:33:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:33:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:33:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:33:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:33:51 compute-0 sudo[330300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:51 compute-0 sudo[330300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:51 compute-0 sudo[330300]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:51 compute-0 sudo[330325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:33:51 compute-0 sudo[330325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:51 compute-0 sudo[330325]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:51 compute-0 sudo[330350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:51 compute-0 sudo[330350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:51 compute-0 sudo[330350]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:51 compute-0 ceph-mon[74339]: pgmap v2297: 305 pgs: 305 active+clean; 305 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 153 op/s
Dec 06 07:33:51 compute-0 ceph-mon[74339]: osdmap e278: 3 total, 3 up, 3 in
Dec 06 07:33:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:33:51 compute-0 ceph-mon[74339]: pgmap v2299: 305 pgs: 305 active+clean; 313 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 158 op/s
Dec 06 07:33:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:33:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:33:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:33:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:33:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:33:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:33:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:33:51 compute-0 sudo[330375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:33:51 compute-0 sudo[330375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:51.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:51 compute-0 podman[330442]: 2025-12-06 07:33:51.618227941 +0000 UTC m=+0.041587753 container create ed673a80aa2cd44a26948de2ab78c22327f0f57bc2a5761ecbe41ca5b28bb164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 07:33:51 compute-0 nova_compute[251992]: 2025-12-06 07:33:51.649 251996 DEBUG nova.storage.rbd_utils [None req-c320b73d-1078-454d-8053-092a7610861a 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] flattening images/deed3bf2-6426-476b-8bb8-98ac272255a2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:33:51 compute-0 systemd[1]: Started libpod-conmon-ed673a80aa2cd44a26948de2ab78c22327f0f57bc2a5761ecbe41ca5b28bb164.scope.
Dec 06 07:33:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:33:51 compute-0 podman[330442]: 2025-12-06 07:33:51.600490303 +0000 UTC m=+0.023850135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:33:51 compute-0 podman[330442]: 2025-12-06 07:33:51.704788278 +0000 UTC m=+0.128148250 container init ed673a80aa2cd44a26948de2ab78c22327f0f57bc2a5761ecbe41ca5b28bb164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 07:33:51 compute-0 podman[330442]: 2025-12-06 07:33:51.711371347 +0000 UTC m=+0.134731159 container start ed673a80aa2cd44a26948de2ab78c22327f0f57bc2a5761ecbe41ca5b28bb164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:33:51 compute-0 podman[330442]: 2025-12-06 07:33:51.715288402 +0000 UTC m=+0.138648234 container attach ed673a80aa2cd44a26948de2ab78c22327f0f57bc2a5761ecbe41ca5b28bb164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feistel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 07:33:51 compute-0 stupefied_feistel[330468]: 167 167
Dec 06 07:33:51 compute-0 systemd[1]: libpod-ed673a80aa2cd44a26948de2ab78c22327f0f57bc2a5761ecbe41ca5b28bb164.scope: Deactivated successfully.
Dec 06 07:33:51 compute-0 podman[330442]: 2025-12-06 07:33:51.720230206 +0000 UTC m=+0.143590018 container died ed673a80aa2cd44a26948de2ab78c22327f0f57bc2a5761ecbe41ca5b28bb164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feistel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 07:33:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5da5f0b8a0e830c8594128b553db948507c32f05c2dfa35777f772ee7a3d51b8-merged.mount: Deactivated successfully.
Dec 06 07:33:51 compute-0 podman[330442]: 2025-12-06 07:33:51.766909175 +0000 UTC m=+0.190268987 container remove ed673a80aa2cd44a26948de2ab78c22327f0f57bc2a5761ecbe41ca5b28bb164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:33:51 compute-0 systemd[1]: libpod-conmon-ed673a80aa2cd44a26948de2ab78c22327f0f57bc2a5761ecbe41ca5b28bb164.scope: Deactivated successfully.
Dec 06 07:33:51 compute-0 podman[330502]: 2025-12-06 07:33:51.951820427 +0000 UTC m=+0.041190782 container create 2e9cac030ef244ed80c7b9ba4b1d0ea7d65e803cdc19272fc976cea26d50c4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 07:33:51 compute-0 systemd[1]: Started libpod-conmon-2e9cac030ef244ed80c7b9ba4b1d0ea7d65e803cdc19272fc976cea26d50c4d4.scope.
Dec 06 07:33:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1c67ac6d1fb9e05694d8b5e841659ea228b4232d7f7b01af7076f162155a64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1c67ac6d1fb9e05694d8b5e841659ea228b4232d7f7b01af7076f162155a64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1c67ac6d1fb9e05694d8b5e841659ea228b4232d7f7b01af7076f162155a64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1c67ac6d1fb9e05694d8b5e841659ea228b4232d7f7b01af7076f162155a64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af1c67ac6d1fb9e05694d8b5e841659ea228b4232d7f7b01af7076f162155a64/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:52 compute-0 podman[330502]: 2025-12-06 07:33:51.934684184 +0000 UTC m=+0.024054539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:33:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:52.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:52 compute-0 podman[330502]: 2025-12-06 07:33:52.046924204 +0000 UTC m=+0.136294609 container init 2e9cac030ef244ed80c7b9ba4b1d0ea7d65e803cdc19272fc976cea26d50c4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec 06 07:33:52 compute-0 nova_compute[251992]: 2025-12-06 07:33:52.049 251996 DEBUG nova.storage.rbd_utils [None req-c320b73d-1078-454d-8053-092a7610861a 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] removing snapshot(d1675ca2d21b40a7aa71abad2fce445d) on rbd image(70928eda-043f-429b-aa4e-af1f3189a7c1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:33:52 compute-0 podman[330502]: 2025-12-06 07:33:52.055401723 +0000 UTC m=+0.144772078 container start 2e9cac030ef244ed80c7b9ba4b1d0ea7d65e803cdc19272fc976cea26d50c4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ganguly, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:33:52 compute-0 podman[330502]: 2025-12-06 07:33:52.058875978 +0000 UTC m=+0.148246333 container attach 2e9cac030ef244ed80c7b9ba4b1d0ea7d65e803cdc19272fc976cea26d50c4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:33:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Dec 06 07:33:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 313 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 129 op/s
Dec 06 07:33:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Dec 06 07:33:52 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Dec 06 07:33:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3573876812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:33:52 compute-0 elastic_ganguly[330525]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:33:52 compute-0 elastic_ganguly[330525]: --> relative data size: 1.0
Dec 06 07:33:52 compute-0 elastic_ganguly[330525]: --> All data devices are unavailable
Dec 06 07:33:52 compute-0 systemd[1]: libpod-2e9cac030ef244ed80c7b9ba4b1d0ea7d65e803cdc19272fc976cea26d50c4d4.scope: Deactivated successfully.
Dec 06 07:33:52 compute-0 nova_compute[251992]: 2025-12-06 07:33:52.851 251996 DEBUG nova.storage.rbd_utils [None req-c320b73d-1078-454d-8053-092a7610861a 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] creating snapshot(snap) on rbd image(deed3bf2-6426-476b-8bb8-98ac272255a2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:33:52 compute-0 podman[330502]: 2025-12-06 07:33:52.853175571 +0000 UTC m=+0.942545916 container died 2e9cac030ef244ed80c7b9ba4b1d0ea7d65e803cdc19272fc976cea26d50c4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ganguly, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:33:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-af1c67ac6d1fb9e05694d8b5e841659ea228b4232d7f7b01af7076f162155a64-merged.mount: Deactivated successfully.
Dec 06 07:33:52 compute-0 podman[330502]: 2025-12-06 07:33:52.904203008 +0000 UTC m=+0.993573363 container remove 2e9cac030ef244ed80c7b9ba4b1d0ea7d65e803cdc19272fc976cea26d50c4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 07:33:52 compute-0 systemd[1]: libpod-conmon-2e9cac030ef244ed80c7b9ba4b1d0ea7d65e803cdc19272fc976cea26d50c4d4.scope: Deactivated successfully.
Dec 06 07:33:52 compute-0 sudo[330375]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:52 compute-0 sudo[330584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:53 compute-0 sudo[330584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:53 compute-0 sudo[330584]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:53 compute-0 sudo[330609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:33:53 compute-0 sudo[330609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:53 compute-0 sudo[330609]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:53 compute-0 sudo[330634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:53 compute-0 sudo[330634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:53 compute-0 sudo[330634]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:53 compute-0 sudo[330659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:33:53 compute-0 sudo[330659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:53.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:53 compute-0 podman[330724]: 2025-12-06 07:33:53.52136748 +0000 UTC m=+0.060946707 container create 3e22c8fb4aee8c6d971ab58f588dffe58a8c80e7ce937729f6d64a6f022bd204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 07:33:53 compute-0 nova_compute[251992]: 2025-12-06 07:33:53.556 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:53 compute-0 systemd[1]: Started libpod-conmon-3e22c8fb4aee8c6d971ab58f588dffe58a8c80e7ce937729f6d64a6f022bd204.scope.
Dec 06 07:33:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:33:53 compute-0 podman[330724]: 2025-12-06 07:33:53.490375143 +0000 UTC m=+0.029954400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:33:53 compute-0 podman[330724]: 2025-12-06 07:33:53.597023352 +0000 UTC m=+0.136602599 container init 3e22c8fb4aee8c6d971ab58f588dffe58a8c80e7ce937729f6d64a6f022bd204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:33:53 compute-0 podman[330724]: 2025-12-06 07:33:53.605121661 +0000 UTC m=+0.144700888 container start 3e22c8fb4aee8c6d971ab58f588dffe58a8c80e7ce937729f6d64a6f022bd204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gagarin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:33:53 compute-0 podman[330724]: 2025-12-06 07:33:53.607944847 +0000 UTC m=+0.147524094 container attach 3e22c8fb4aee8c6d971ab58f588dffe58a8c80e7ce937729f6d64a6f022bd204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gagarin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:33:53 compute-0 silly_gagarin[330741]: 167 167
Dec 06 07:33:53 compute-0 systemd[1]: libpod-3e22c8fb4aee8c6d971ab58f588dffe58a8c80e7ce937729f6d64a6f022bd204.scope: Deactivated successfully.
Dec 06 07:33:53 compute-0 podman[330724]: 2025-12-06 07:33:53.610379502 +0000 UTC m=+0.149958729 container died 3e22c8fb4aee8c6d971ab58f588dffe58a8c80e7ce937729f6d64a6f022bd204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:33:53 compute-0 sshd-session[330490]: Connection reset by authenticating user root 45.135.232.92 port 39938 [preauth]
Dec 06 07:33:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-013a7d7232fde6bb5432153abfa036c2039af656f7e62decfe92bd9e9919e4e5-merged.mount: Deactivated successfully.
Dec 06 07:33:53 compute-0 podman[330724]: 2025-12-06 07:33:53.641934585 +0000 UTC m=+0.181513812 container remove 3e22c8fb4aee8c6d971ab58f588dffe58a8c80e7ce937729f6d64a6f022bd204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_gagarin, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:33:53 compute-0 systemd[1]: libpod-conmon-3e22c8fb4aee8c6d971ab58f588dffe58a8c80e7ce937729f6d64a6f022bd204.scope: Deactivated successfully.
Dec 06 07:33:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Dec 06 07:33:53 compute-0 ceph-mon[74339]: pgmap v2300: 305 pgs: 305 active+clean; 313 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 129 op/s
Dec 06 07:33:53 compute-0 ceph-mon[74339]: osdmap e279: 3 total, 3 up, 3 in
Dec 06 07:33:53 compute-0 podman[330766]: 2025-12-06 07:33:53.799538599 +0000 UTC m=+0.036700521 container create 32341b52e4572b6dbf6e529392ae5f9baf41bca2d4b01ee36b32db9c9a45f47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_darwin, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:33:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Dec 06 07:33:53 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Dec 06 07:33:53 compute-0 systemd[1]: Started libpod-conmon-32341b52e4572b6dbf6e529392ae5f9baf41bca2d4b01ee36b32db9c9a45f47e.scope.
Dec 06 07:33:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8156839383aff8955aaa57e610729fe96f7874744a341364ff860db8cda59e1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8156839383aff8955aaa57e610729fe96f7874744a341364ff860db8cda59e1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8156839383aff8955aaa57e610729fe96f7874744a341364ff860db8cda59e1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8156839383aff8955aaa57e610729fe96f7874744a341364ff860db8cda59e1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:53 compute-0 podman[330766]: 2025-12-06 07:33:53.876941659 +0000 UTC m=+0.114103601 container init 32341b52e4572b6dbf6e529392ae5f9baf41bca2d4b01ee36b32db9c9a45f47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_darwin, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:33:53 compute-0 podman[330766]: 2025-12-06 07:33:53.784497044 +0000 UTC m=+0.021658986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:33:53 compute-0 podman[330766]: 2025-12-06 07:33:53.883896767 +0000 UTC m=+0.121058689 container start 32341b52e4572b6dbf6e529392ae5f9baf41bca2d4b01ee36b32db9c9a45f47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:33:53 compute-0 podman[330766]: 2025-12-06 07:33:53.887268638 +0000 UTC m=+0.124430560 container attach 32341b52e4572b6dbf6e529392ae5f9baf41bca2d4b01ee36b32db9c9a45f47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_darwin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:33:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:33:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:54.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:33:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:54 compute-0 ovn_controller[147168]: 2025-12-06T07:33:54Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:63:d9 10.100.0.9
Dec 06 07:33:54 compute-0 ovn_controller[147168]: 2025-12-06T07:33:54Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:63:d9 10.100.0.9
Dec 06 07:33:54 compute-0 nova_compute[251992]: 2025-12-06 07:33:54.562 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]: {
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:     "0": [
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:         {
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             "devices": [
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "/dev/loop3"
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             ],
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             "lv_name": "ceph_lv0",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             "lv_size": "7511998464",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             "name": "ceph_lv0",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             "tags": {
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "ceph.cluster_name": "ceph",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "ceph.crush_device_class": "",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "ceph.encrypted": "0",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "ceph.osd_id": "0",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "ceph.type": "block",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:                 "ceph.vdo": "0"
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             },
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             "type": "block",
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:             "vg_name": "ceph_vg0"
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:         }
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]:     ]
Dec 06 07:33:54 compute-0 stupefied_darwin[330780]: }
Dec 06 07:33:54 compute-0 systemd[1]: libpod-32341b52e4572b6dbf6e529392ae5f9baf41bca2d4b01ee36b32db9c9a45f47e.scope: Deactivated successfully.
Dec 06 07:33:54 compute-0 podman[330766]: 2025-12-06 07:33:54.658288963 +0000 UTC m=+0.895450895 container died 32341b52e4572b6dbf6e529392ae5f9baf41bca2d4b01ee36b32db9c9a45f47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:33:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8156839383aff8955aaa57e610729fe96f7874744a341364ff860db8cda59e1b-merged.mount: Deactivated successfully.
Dec 06 07:33:54 compute-0 podman[330766]: 2025-12-06 07:33:54.713603676 +0000 UTC m=+0.950765598 container remove 32341b52e4572b6dbf6e529392ae5f9baf41bca2d4b01ee36b32db9c9a45f47e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_darwin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 07:33:54 compute-0 systemd[1]: libpod-conmon-32341b52e4572b6dbf6e529392ae5f9baf41bca2d4b01ee36b32db9c9a45f47e.scope: Deactivated successfully.
Dec 06 07:33:54 compute-0 sudo[330659]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.8 MiB/s wr, 178 op/s
Dec 06 07:33:54 compute-0 sudo[330807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:54 compute-0 sudo[330807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:54 compute-0 sudo[330807]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Dec 06 07:33:54 compute-0 sudo[330832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:33:54 compute-0 sudo[330832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:54 compute-0 sudo[330832]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:54 compute-0 sudo[330857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:54 compute-0 sudo[330857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:54 compute-0 sudo[330857]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:54 compute-0 ceph-mon[74339]: osdmap e280: 3 total, 3 up, 3 in
Dec 06 07:33:54 compute-0 sudo[330882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:33:54 compute-0 sudo[330882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Dec 06 07:33:54 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Dec 06 07:33:55 compute-0 sshd-session[330761]: Invalid user user from 45.135.232.92 port 39950
Dec 06 07:33:55 compute-0 podman[330947]: 2025-12-06 07:33:55.288351162 +0000 UTC m=+0.041210863 container create f9bc9fb604e51c299a0ce898e0e35af686f800d19c403b9e56713f6894adf865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_montalcini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 07:33:55 compute-0 systemd[1]: Started libpod-conmon-f9bc9fb604e51c299a0ce898e0e35af686f800d19c403b9e56713f6894adf865.scope.
Dec 06 07:33:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:33:55 compute-0 podman[330947]: 2025-12-06 07:33:55.360810218 +0000 UTC m=+0.113669979 container init f9bc9fb604e51c299a0ce898e0e35af686f800d19c403b9e56713f6894adf865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:33:55 compute-0 podman[330947]: 2025-12-06 07:33:55.268778233 +0000 UTC m=+0.021637954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:33:55 compute-0 podman[330947]: 2025-12-06 07:33:55.372145754 +0000 UTC m=+0.125005455 container start f9bc9fb604e51c299a0ce898e0e35af686f800d19c403b9e56713f6894adf865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_montalcini, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:33:55 compute-0 podman[330947]: 2025-12-06 07:33:55.375170335 +0000 UTC m=+0.128030036 container attach f9bc9fb604e51c299a0ce898e0e35af686f800d19c403b9e56713f6894adf865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 07:33:55 compute-0 charming_montalcini[330963]: 167 167
Dec 06 07:33:55 compute-0 systemd[1]: libpod-f9bc9fb604e51c299a0ce898e0e35af686f800d19c403b9e56713f6894adf865.scope: Deactivated successfully.
Dec 06 07:33:55 compute-0 podman[330947]: 2025-12-06 07:33:55.378588778 +0000 UTC m=+0.131448499 container died f9bc9fb604e51c299a0ce898e0e35af686f800d19c403b9e56713f6894adf865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_montalcini, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 07:33:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-eaaea9af87fcb1330d9a06eb3e6155c6dd3d38a0bb894b4b44b1ba26ec0c208b-merged.mount: Deactivated successfully.
Dec 06 07:33:55 compute-0 podman[330947]: 2025-12-06 07:33:55.421598999 +0000 UTC m=+0.174458700 container remove f9bc9fb604e51c299a0ce898e0e35af686f800d19c403b9e56713f6894adf865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 07:33:55 compute-0 systemd[1]: libpod-conmon-f9bc9fb604e51c299a0ce898e0e35af686f800d19c403b9e56713f6894adf865.scope: Deactivated successfully.
Dec 06 07:33:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:55.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:55 compute-0 podman[330987]: 2025-12-06 07:33:55.588455634 +0000 UTC m=+0.037239437 container create aadb12679bfd9c66c0384081f71cff8727e1e61c5813e490ca11f0bea9827e6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:33:55 compute-0 systemd[1]: Started libpod-conmon-aadb12679bfd9c66c0384081f71cff8727e1e61c5813e490ca11f0bea9827e6a.scope.
Dec 06 07:33:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f4125156ee0c9eca9427eb73dda480ccb1127b2bfe75e0e14b2f5a13f09d70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f4125156ee0c9eca9427eb73dda480ccb1127b2bfe75e0e14b2f5a13f09d70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f4125156ee0c9eca9427eb73dda480ccb1127b2bfe75e0e14b2f5a13f09d70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2f4125156ee0c9eca9427eb73dda480ccb1127b2bfe75e0e14b2f5a13f09d70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:33:55 compute-0 sshd-session[330761]: Connection reset by invalid user user 45.135.232.92 port 39950 [preauth]
Dec 06 07:33:55 compute-0 podman[330987]: 2025-12-06 07:33:55.572161724 +0000 UTC m=+0.020945547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:33:55 compute-0 podman[330987]: 2025-12-06 07:33:55.674095606 +0000 UTC m=+0.122879429 container init aadb12679bfd9c66c0384081f71cff8727e1e61c5813e490ca11f0bea9827e6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 07:33:55 compute-0 podman[330987]: 2025-12-06 07:33:55.680215831 +0000 UTC m=+0.128999634 container start aadb12679bfd9c66c0384081f71cff8727e1e61c5813e490ca11f0bea9827e6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:33:55 compute-0 podman[330987]: 2025-12-06 07:33:55.68384915 +0000 UTC m=+0.132632973 container attach aadb12679bfd9c66c0384081f71cff8727e1e61c5813e490ca11f0bea9827e6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:33:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:56.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:56 compute-0 ceph-mon[74339]: pgmap v2303: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.8 MiB/s wr, 178 op/s
Dec 06 07:33:56 compute-0 ceph-mon[74339]: osdmap e281: 3 total, 3 up, 3 in
Dec 06 07:33:56 compute-0 sudo[331010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:56 compute-0 sudo[331010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:56 compute-0 sudo[331010]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:56 compute-0 sudo[331037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:56 compute-0 sudo[331037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:56 compute-0 sudo[331037]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:56 compute-0 angry_pasteur[331003]: {
Dec 06 07:33:56 compute-0 angry_pasteur[331003]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:33:56 compute-0 angry_pasteur[331003]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:33:56 compute-0 angry_pasteur[331003]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:33:56 compute-0 angry_pasteur[331003]:         "osd_id": 0,
Dec 06 07:33:56 compute-0 angry_pasteur[331003]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:33:56 compute-0 angry_pasteur[331003]:         "type": "bluestore"
Dec 06 07:33:56 compute-0 angry_pasteur[331003]:     }
Dec 06 07:33:56 compute-0 angry_pasteur[331003]: }
Dec 06 07:33:56 compute-0 systemd[1]: libpod-aadb12679bfd9c66c0384081f71cff8727e1e61c5813e490ca11f0bea9827e6a.scope: Deactivated successfully.
Dec 06 07:33:56 compute-0 podman[330987]: 2025-12-06 07:33:56.495780858 +0000 UTC m=+0.944564661 container died aadb12679bfd9c66c0384081f71cff8727e1e61c5813e490ca11f0bea9827e6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pasteur, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 07:33:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2f4125156ee0c9eca9427eb73dda480ccb1127b2bfe75e0e14b2f5a13f09d70-merged.mount: Deactivated successfully.
Dec 06 07:33:56 compute-0 podman[330987]: 2025-12-06 07:33:56.550986089 +0000 UTC m=+0.999769892 container remove aadb12679bfd9c66c0384081f71cff8727e1e61c5813e490ca11f0bea9827e6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_pasteur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:33:56 compute-0 systemd[1]: libpod-conmon-aadb12679bfd9c66c0384081f71cff8727e1e61c5813e490ca11f0bea9827e6a.scope: Deactivated successfully.
Dec 06 07:33:56 compute-0 sudo[330882]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:33:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 490 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 15 MiB/s wr, 356 op/s
Dec 06 07:33:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:33:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:57.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:33:57 compute-0 sshd-session[331008]: Connection reset by authenticating user root 45.135.232.92 port 33968 [preauth]
Dec 06 07:33:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:33:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:33:58.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:33:58 compute-0 nova_compute[251992]: 2025-12-06 07:33:58.601 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 490 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 MiB/s wr, 405 op/s
Dec 06 07:33:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:33:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:33:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:33:59 compute-0 ceph-mon[74339]: pgmap v2305: 305 pgs: 305 active+clean; 490 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 15 MiB/s wr, 356 op/s
Dec 06 07:33:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:33:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:33:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:33:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:33:59 compute-0 nova_compute[251992]: 2025-12-06 07:33:59.564 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:33:59 compute-0 nova_compute[251992]: 2025-12-06 07:33:59.657 251996 INFO nova.virt.libvirt.driver [None req-c320b73d-1078-454d-8053-092a7610861a 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Snapshot image upload complete
Dec 06 07:33:59 compute-0 nova_compute[251992]: 2025-12-06 07:33:59.658 251996 INFO nova.compute.manager [None req-c320b73d-1078-454d-8053-092a7610861a 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Took 15.90 seconds to snapshot the instance on the hypervisor.
Dec 06 07:33:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:33:59 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 70645f5b-43d5-480f-92a9-5d6ea2e2e076 does not exist
Dec 06 07:33:59 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2de5b1de-675e-4de0-a992-15f4f6cb9b57 does not exist
Dec 06 07:33:59 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ef5d5179-1f82-4ec1-b3ee-447ff6c1f6b9 does not exist
Dec 06 07:33:59 compute-0 sshd-session[331091]: Connection reset by authenticating user root 45.135.232.92 port 33980 [preauth]
Dec 06 07:33:59 compute-0 sudo[331094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:33:59 compute-0 sudo[331094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:59 compute-0 sudo[331094]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:59 compute-0 sudo[331119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:33:59 compute-0 sudo[331119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:33:59 compute-0 sudo[331119]: pam_unix(sudo:session): session closed for user root
Dec 06 07:34:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:00.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/789745650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:00 compute-0 ceph-mon[74339]: pgmap v2306: 305 pgs: 305 active+clean; 490 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 MiB/s wr, 405 op/s
Dec 06 07:34:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:34:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/494249757' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3797525378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:34:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 492 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 12 MiB/s wr, 325 op/s
Dec 06 07:34:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:01 compute-0 sshd-session[331144]: Connection reset by authenticating user root 45.135.232.92 port 33994 [preauth]
Dec 06 07:34:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:02.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2964872426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:02 compute-0 ceph-mon[74339]: pgmap v2307: 305 pgs: 305 active+clean; 492 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 12 MiB/s wr, 325 op/s
Dec 06 07:34:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 492 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 7.1 MiB/s wr, 181 op/s
Dec 06 07:34:02 compute-0 nova_compute[251992]: 2025-12-06 07:34:02.913 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:02.913 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:34:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:02.915 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:34:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Dec 06 07:34:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Dec 06 07:34:03 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Dec 06 07:34:03 compute-0 nova_compute[251992]: 2025-12-06 07:34:03.247 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:03 compute-0 nova_compute[251992]: 2025-12-06 07:34:03.247 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:03 compute-0 nova_compute[251992]: 2025-12-06 07:34:03.267 251996 DEBUG nova.compute.manager [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:34:03 compute-0 nova_compute[251992]: 2025-12-06 07:34:03.351 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:03 compute-0 nova_compute[251992]: 2025-12-06 07:34:03.352 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:03 compute-0 nova_compute[251992]: 2025-12-06 07:34:03.359 251996 DEBUG nova.virt.hardware [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:34:03 compute-0 nova_compute[251992]: 2025-12-06 07:34:03.359 251996 INFO nova.compute.claims [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:34:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:03.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:03 compute-0 nova_compute[251992]: 2025-12-06 07:34:03.537 251996 DEBUG oslo_concurrency.processutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:03 compute-0 nova_compute[251992]: 2025-12-06 07:34:03.612 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:03.841 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:03.842 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:03.842 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:34:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3598755728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:04 compute-0 nova_compute[251992]: 2025-12-06 07:34:04.002 251996 DEBUG oslo_concurrency.processutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:04 compute-0 nova_compute[251992]: 2025-12-06 07:34:04.009 251996 DEBUG nova.compute.provider_tree [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:34:04 compute-0 nova_compute[251992]: 2025-12-06 07:34:04.031 251996 DEBUG nova.scheduler.client.report [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:34:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:04.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:04 compute-0 nova_compute[251992]: 2025-12-06 07:34:04.056 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:04 compute-0 nova_compute[251992]: 2025-12-06 07:34:04.056 251996 DEBUG nova.compute.manager [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:34:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Dec 06 07:34:04 compute-0 nova_compute[251992]: 2025-12-06 07:34:04.101 251996 DEBUG nova.compute.manager [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:34:04 compute-0 nova_compute[251992]: 2025-12-06 07:34:04.101 251996 DEBUG nova.network.neutron [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:34:04 compute-0 nova_compute[251992]: 2025-12-06 07:34:04.129 251996 INFO nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:34:04 compute-0 nova_compute[251992]: 2025-12-06 07:34:04.145 251996 DEBUG nova.compute.manager [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:34:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Dec 06 07:34:04 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Dec 06 07:34:04 compute-0 nova_compute[251992]: 2025-12-06 07:34:04.189 251996 INFO nova.virt.block_device [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Booting with blank volume at /dev/vda
Dec 06 07:34:04 compute-0 nova_compute[251992]: 2025-12-06 07:34:04.279 251996 DEBUG nova.policy [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2aa5b15c15f84a8cb24776d5c781eb09', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:34:04 compute-0 ceph-mon[74339]: pgmap v2308: 305 pgs: 305 active+clean; 492 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 7.1 MiB/s wr, 181 op/s
Dec 06 07:34:04 compute-0 ceph-mon[74339]: osdmap e282: 3 total, 3 up, 3 in
Dec 06 07:34:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3598755728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:04 compute-0 nova_compute[251992]: 2025-12-06 07:34:04.567 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 525 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.294 251996 DEBUG nova.network.neutron [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Successfully created port: e61ac68e-e534-4351-b3ce-b20fa32579fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.387 251996 DEBUG os_brick.utils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.389 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.402 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.402 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[52fa67ca-c219-4e9f-8b48-262ea6ac71f4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.404 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.412 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.412 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[bc85144e-3025-4e50-a465-7080714f8ef5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.413 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.421 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.422 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[83c64220-8318-4c4b-ba38-1dfa3c7005e5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.423 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[76c995f9-e70b-45d7-ad77-f64a399ce28f]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.424 251996 DEBUG oslo_concurrency.processutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:05 compute-0 ceph-mon[74339]: osdmap e283: 3 total, 3 up, 3 in
Dec 06 07:34:05 compute-0 ceph-mon[74339]: pgmap v2311: 305 pgs: 305 active+clean; 525 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.454 251996 DEBUG oslo_concurrency.processutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.456 251996 DEBUG os_brick.initiator.connectors.lightos [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.456 251996 DEBUG os_brick.initiator.connectors.lightos [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.457 251996 DEBUG os_brick.initiator.connectors.lightos [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.457 251996 DEBUG os_brick.utils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:34:05 compute-0 nova_compute[251992]: 2025-12-06 07:34:05.457 251996 DEBUG nova.virt.block_device [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Updating existing volume attachment record: c92e5c00-d68a-4954-9986-06588c40c17d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:34:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:34:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:05.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:34:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:06.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.189 251996 DEBUG nova.network.neutron [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Successfully updated port: e61ac68e-e534-4351-b3ce-b20fa32579fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:34:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:34:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3739392517' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.212 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.213 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquired lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.213 251996 DEBUG nova.network.neutron [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.429 251996 DEBUG nova.network.neutron [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:34:06 compute-0 podman[331178]: 2025-12-06 07:34:06.441749412 +0000 UTC m=+0.102912529 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.726 251996 DEBUG nova.compute.manager [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.728 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.729 251996 INFO nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Creating image(s)
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.729 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.729 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Ensure instance console log exists: /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.730 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.730 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.730 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 5.9 MiB/s wr, 254 op/s
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.809 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.834 251996 WARNING nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.834 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Triggering sync for uuid 70928eda-043f-429b-aa4e-af1f3189a7c1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.835 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Triggering sync for uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.836 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "70928eda-043f-429b-aa4e-af1f3189a7c1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.837 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.837 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:06 compute-0 nova_compute[251992]: 2025-12-06 07:34:06.864 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3739392517' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:07 compute-0 nova_compute[251992]: 2025-12-06 07:34:07.007 251996 DEBUG nova.compute.manager [req-45ecf386-fb24-41f6-bdd2-cc1eaf7346dd req-f7c2d28f-3eb6-4161-afe7-6e28c0deb87f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-changed-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:34:07 compute-0 nova_compute[251992]: 2025-12-06 07:34:07.007 251996 DEBUG nova.compute.manager [req-45ecf386-fb24-41f6-bdd2-cc1eaf7346dd req-f7c2d28f-3eb6-4161-afe7-6e28c0deb87f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Refreshing instance network info cache due to event network-changed-e61ac68e-e534-4351-b3ce-b20fa32579fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:34:07 compute-0 nova_compute[251992]: 2025-12-06 07:34:07.008 251996 DEBUG oslo_concurrency.lockutils [req-45ecf386-fb24-41f6-bdd2-cc1eaf7346dd req-f7c2d28f-3eb6-4161-afe7-6e28c0deb87f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:34:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:07.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:08.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:08 compute-0 ceph-mon[74339]: pgmap v2312: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 5.9 MiB/s wr, 254 op/s
Dec 06 07:34:08 compute-0 nova_compute[251992]: 2025-12-06 07:34:08.616 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 5.9 MiB/s wr, 302 op/s
Dec 06 07:34:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.369 251996 DEBUG nova.network.neutron [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Updating instance_info_cache with network_info: [{"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:34:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Dec 06 07:34:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4141776775' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:34:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4141776775' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:34:09 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.435 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Releasing lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.436 251996 DEBUG nova.compute.manager [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance network_info: |[{"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.436 251996 DEBUG oslo_concurrency.lockutils [req-45ecf386-fb24-41f6-bdd2-cc1eaf7346dd req-f7c2d28f-3eb6-4161-afe7-6e28c0deb87f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.436 251996 DEBUG nova.network.neutron [req-45ecf386-fb24-41f6-bdd2-cc1eaf7346dd req-f7c2d28f-3eb6-4161-afe7-6e28c0deb87f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Refreshing network info cache for port e61ac68e-e534-4351-b3ce-b20fa32579fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.439 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Start _get_guest_xml network_info=[{"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-95b7906f-ca03-4ae4-bdc0-817cf9423acd', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '95b7906f-ca03-4ae4-bdc0-817cf9423acd', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'c2e6b8fd-375c-4658-b338-f2d334041ba3', 'attached_at': '', 'detached_at': '', 'volume_id': '95b7906f-ca03-4ae4-bdc0-817cf9423acd', 'serial': '95b7906f-ca03-4ae4-bdc0-817cf9423acd'}, 'attachment_id': 'c92e5c00-d68a-4954-9986-06588c40c17d', 'guest_format': None, 'delete_on_termination': False, 'disk_bus': 'virtio', 'boot_index': 0, 'device_type': 'disk', 'mount_device': '/dev/vda', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.444 251996 WARNING nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.448 251996 DEBUG nova.virt.libvirt.host [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.449 251996 DEBUG nova.virt.libvirt.host [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.453 251996 DEBUG nova.virt.libvirt.host [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.453 251996 DEBUG nova.virt.libvirt.host [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.454 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.455 251996 DEBUG nova.virt.hardware [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.455 251996 DEBUG nova.virt.hardware [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.455 251996 DEBUG nova.virt.hardware [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.456 251996 DEBUG nova.virt.hardware [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.456 251996 DEBUG nova.virt.hardware [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.456 251996 DEBUG nova.virt.hardware [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.456 251996 DEBUG nova.virt.hardware [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.456 251996 DEBUG nova.virt.hardware [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.457 251996 DEBUG nova.virt.hardware [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.457 251996 DEBUG nova.virt.hardware [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:34:09 compute-0 nova_compute[251992]: 2025-12-06 07:34:09.457 251996 DEBUG nova.virt.hardware [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:34:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:09.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:10.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.422 251996 DEBUG nova.storage.rbd_utils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.428 251996 DEBUG oslo_concurrency.processutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.455 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.699 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.699 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.700 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.700 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.700 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 6.2 MiB/s wr, 363 op/s
Dec 06 07:34:10 compute-0 ceph-mon[74339]: pgmap v2313: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 5.9 MiB/s wr, 302 op/s
Dec 06 07:34:10 compute-0 ceph-mon[74339]: osdmap e284: 3 total, 3 up, 3 in
Dec 06 07:34:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:34:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/30104927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.880 251996 DEBUG oslo_concurrency.processutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:10.917 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.933 251996 DEBUG nova.virt.libvirt.vif [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:34:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1801137848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1801137848',id=124,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-blw02nr2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:34:04Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=c2e6b8fd-375c-4658-b338-f2d334041ba3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.934 251996 DEBUG nova.network.os_vif_util [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.935 251996 DEBUG nova.network.os_vif_util [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=e61ac68e-e534-4351-b3ce-b20fa32579fc,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ac68e-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.936 251996 DEBUG nova.objects.instance [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:10 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.997 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:34:10 compute-0 nova_compute[251992]:   <uuid>c2e6b8fd-375c-4658-b338-f2d334041ba3</uuid>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   <name>instance-0000007c</name>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1801137848</nova:name>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:34:09</nova:creationTime>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <nova:user uuid="2aa5b15c15f84a8cb24776d5c781eb09">tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member</nova:user>
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <nova:project uuid="17cdfa63c4424ec7a0eb4bb3d7372c14">tempest-ServerBootFromVolumeStableRescueTest-344238221</nova:project>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <nova:port uuid="e61ac68e-e534-4351-b3ce-b20fa32579fc">
Dec 06 07:34:10 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <system>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <entry name="serial">c2e6b8fd-375c-4658-b338-f2d334041ba3</entry>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <entry name="uuid">c2e6b8fd-375c-4658-b338-f2d334041ba3</entry>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     </system>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   <os>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   </os>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   <features>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   </features>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.config">
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       </source>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <source protocol="rbd" name="volumes/volume-95b7906f-ca03-4ae4-bdc0-817cf9423acd">
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       </source>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:34:10 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <serial>95b7906f-ca03-4ae4-bdc0-817cf9423acd</serial>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:99:1b:d0"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <target dev="tape61ac68e-e5"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/console.log" append="off"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <video>
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     </video>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:34:10 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:34:10 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:34:10 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:34:10 compute-0 nova_compute[251992]: </domain>
Dec 06 07:34:10 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:10.999 251996 DEBUG nova.compute.manager [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Preparing to wait for external event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.000 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.001 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.001 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.002 251996 DEBUG nova.virt.libvirt.vif [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:34:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1801137848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1801137848',id=124,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-blw02nr2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:34:04Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=c2e6b8fd-375c-4658-b338-f2d334041ba3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.002 251996 DEBUG nova.network.os_vif_util [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.003 251996 DEBUG nova.network.os_vif_util [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=e61ac68e-e534-4351-b3ce-b20fa32579fc,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ac68e-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.004 251996 DEBUG os_vif [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=e61ac68e-e534-4351-b3ce-b20fa32579fc,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ac68e-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.005 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.005 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.006 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.010 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.010 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape61ac68e-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.011 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape61ac68e-e5, col_values=(('external_ids', {'iface-id': 'e61ac68e-e534-4351-b3ce-b20fa32579fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:1b:d0', 'vm-uuid': 'c2e6b8fd-375c-4658-b338-f2d334041ba3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.012 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:11 compute-0 NetworkManager[48965]: <info>  [1765006451.0137] manager: (tape61ac68e-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.015 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.021 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.023 251996 INFO os_vif [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=e61ac68e-e534-4351-b3ce-b20fa32579fc,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ac68e-e5')
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.083 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.083 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.083 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No VIF found with MAC fa:16:3e:99:1b:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.084 251996 INFO nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Using config drive
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.107 251996 DEBUG nova.storage.rbd_utils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:34:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:34:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1171411006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.159 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.242 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.243 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.246 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.246 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.408 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.410 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4224MB free_disk=20.855270385742188GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.410 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.410 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:11.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.697 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 70928eda-043f-429b-aa4e-af1f3189a7c1 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.697 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c2e6b8fd-375c-4658-b338-f2d334041ba3 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.697 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.698 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.862 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.976 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.977 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.987 251996 INFO nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Creating config drive at /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/disk.config
Dec 06 07:34:11 compute-0 nova_compute[251992]: 2025-12-06 07:34:11.991 251996 DEBUG oslo_concurrency.processutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpex_4582e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.018 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.045 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:34:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:12.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.083 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.125 251996 DEBUG oslo_concurrency.processutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpex_4582e" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.153 251996 DEBUG nova.storage.rbd_utils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.157 251996 DEBUG oslo_concurrency.processutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/disk.config c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:34:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1991038349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:12 compute-0 ceph-mon[74339]: pgmap v2315: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 6.2 MiB/s wr, 363 op/s
Dec 06 07:34:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/30104927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1171411006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3782614172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Dec 06 07:34:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:34:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1153200742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.517 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.524 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.543 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.577 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.578 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Dec 06 07:34:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Dec 06 07:34:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 3.9 MiB/s wr, 274 op/s
Dec 06 07:34:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:34:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/853046914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.996 251996 DEBUG nova.network.neutron [req-45ecf386-fb24-41f6-bdd2-cc1eaf7346dd req-f7c2d28f-3eb6-4161-afe7-6e28c0deb87f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Updated VIF entry in instance network info cache for port e61ac68e-e534-4351-b3ce-b20fa32579fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:34:12 compute-0 nova_compute[251992]: 2025-12-06 07:34:12.996 251996 DEBUG nova.network.neutron [req-45ecf386-fb24-41f6-bdd2-cc1eaf7346dd req-f7c2d28f-3eb6-4161-afe7-6e28c0deb87f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Updating instance_info_cache with network_info: [{"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:34:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:34:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:34:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:34:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:34:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:34:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:34:13 compute-0 nova_compute[251992]: 2025-12-06 07:34:13.020 251996 DEBUG oslo_concurrency.lockutils [req-45ecf386-fb24-41f6-bdd2-cc1eaf7346dd req-f7c2d28f-3eb6-4161-afe7-6e28c0deb87f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:34:13 compute-0 podman[331353]: 2025-12-06 07:34:13.40445832 +0000 UTC m=+0.056252509 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 07:34:13 compute-0 podman[331354]: 2025-12-06 07:34:13.420795582 +0000 UTC m=+0.065965242 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:34:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1849849855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1153200742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3016613992' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:13 compute-0 ceph-mon[74339]: osdmap e285: 3 total, 3 up, 3 in
Dec 06 07:34:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/853046914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:34:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:13.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:34:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:14.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:14 compute-0 nova_compute[251992]: 2025-12-06 07:34:14.570 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 18 KiB/s wr, 121 op/s
Dec 06 07:34:15 compute-0 ceph-mon[74339]: pgmap v2317: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 3.9 MiB/s wr, 274 op/s
Dec 06 07:34:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:15.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:15 compute-0 nova_compute[251992]: 2025-12-06 07:34:15.571 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:15 compute-0 nova_compute[251992]: 2025-12-06 07:34:15.572 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:15 compute-0 nova_compute[251992]: 2025-12-06 07:34:15.572 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:15 compute-0 nova_compute[251992]: 2025-12-06 07:34:15.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:16 compute-0 nova_compute[251992]: 2025-12-06 07:34:16.014 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:16.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:16 compute-0 sudo[331393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:34:16 compute-0 sudo[331393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:34:16 compute-0 sudo[331393]: pam_unix(sudo:session): session closed for user root
Dec 06 07:34:16 compute-0 sudo[331418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:34:16 compute-0 sudo[331418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:34:16 compute-0 sudo[331418]: pam_unix(sudo:session): session closed for user root
Dec 06 07:34:16 compute-0 nova_compute[251992]: 2025-12-06 07:34:16.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:16 compute-0 nova_compute[251992]: 2025-12-06 07:34:16.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 613 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.3 MiB/s wr, 122 op/s
Dec 06 07:34:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:17.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:18.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:34:18
Dec 06 07:34:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:34:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:34:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', '.rgw.root', 'backups', '.mgr', 'default.rgw.control']
Dec 06 07:34:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:34:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 613 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.8 MiB/s wr, 104 op/s
Dec 06 07:34:18 compute-0 ceph-mon[74339]: pgmap v2318: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 18 KiB/s wr, 121 op/s
Dec 06 07:34:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:19 compute-0 nova_compute[251992]: 2025-12-06 07:34:19.269 251996 DEBUG oslo_concurrency.processutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/disk.config c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:34:19 compute-0 nova_compute[251992]: 2025-12-06 07:34:19.270 251996 INFO nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Deleting local config drive /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/disk.config because it was imported into RBD.
Dec 06 07:34:19 compute-0 kernel: tape61ac68e-e5: entered promiscuous mode
Dec 06 07:34:19 compute-0 NetworkManager[48965]: <info>  [1765006459.3260] manager: (tape61ac68e-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Dec 06 07:34:19 compute-0 nova_compute[251992]: 2025-12-06 07:34:19.327 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:19 compute-0 ovn_controller[147168]: 2025-12-06T07:34:19Z|00439|binding|INFO|Claiming lport e61ac68e-e534-4351-b3ce-b20fa32579fc for this chassis.
Dec 06 07:34:19 compute-0 ovn_controller[147168]: 2025-12-06T07:34:19Z|00440|binding|INFO|e61ac68e-e534-4351-b3ce-b20fa32579fc: Claiming fa:16:3e:99:1b:d0 10.100.0.14
Dec 06 07:34:19 compute-0 ovn_controller[147168]: 2025-12-06T07:34:19Z|00441|binding|INFO|Setting lport e61ac68e-e534-4351-b3ce-b20fa32579fc ovn-installed in OVS
Dec 06 07:34:19 compute-0 nova_compute[251992]: 2025-12-06 07:34:19.347 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:19 compute-0 systemd-udevd[331457]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:34:19 compute-0 systemd-machined[212986]: New machine qemu-55-instance-0000007c.
Dec 06 07:34:19 compute-0 NetworkManager[48965]: <info>  [1765006459.3669] device (tape61ac68e-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:34:19 compute-0 NetworkManager[48965]: <info>  [1765006459.3687] device (tape61ac68e-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:34:19 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-0000007c.
Dec 06 07:34:19 compute-0 ovn_controller[147168]: 2025-12-06T07:34:19Z|00442|binding|INFO|Setting lport e61ac68e-e534-4351-b3ce-b20fa32579fc up in Southbound
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.445 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:1b:d0 10.100.0.14'], port_security=['fa:16:3e:99:1b:d0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c2e6b8fd-375c-4658-b338-f2d334041ba3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=e61ac68e-e534-4351-b3ce-b20fa32579fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.447 158118 INFO neutron.agent.ovn.metadata.agent [-] Port e61ac68e-e534-4351-b3ce-b20fa32579fc in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 bound to our chassis
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.449 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.465 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bf79d9a5-adfb-4965-bf8e-1aa35bafd9a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.494 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e714e8-a0dc-4df5-8676-49053254a507]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.497 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[22ba11d9-1278-422b-ba0f-2a856446f7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.524 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8c4a8b-37f2-42c4-b74e-06327f8532f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:34:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:19.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.540 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f72a642c-412e-4230-937e-dd4b740943e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 42370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331472, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.553 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a42f39b8-d7bc-465f-823e-1f890307c87c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669283, 'tstamp': 669283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331473, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669285, 'tstamp': 669285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331473, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.555 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:19 compute-0 nova_compute[251992]: 2025-12-06 07:34:19.556 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:19 compute-0 nova_compute[251992]: 2025-12-06 07:34:19.557 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.558 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.558 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.559 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:34:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:34:19.559 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:34:19 compute-0 nova_compute[251992]: 2025-12-06 07:34:19.571 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:19 compute-0 nova_compute[251992]: 2025-12-06 07:34:19.927 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006459.9266713, c2e6b8fd-375c-4658-b338-f2d334041ba3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:34:19 compute-0 nova_compute[251992]: 2025-12-06 07:34:19.928 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] VM Started (Lifecycle Event)
Dec 06 07:34:19 compute-0 ceph-mon[74339]: pgmap v2319: 305 pgs: 305 active+clean; 613 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.3 MiB/s wr, 122 op/s
Dec 06 07:34:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/420931447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:19 compute-0 ceph-mon[74339]: pgmap v2320: 305 pgs: 305 active+clean; 613 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.8 MiB/s wr, 104 op/s
Dec 06 07:34:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:20.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.102 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.106 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006459.926823, c2e6b8fd-375c-4658-b338-f2d334041ba3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.106 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] VM Paused (Lifecycle Event)
Dec 06 07:34:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.142 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.146 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.170 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:34:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Dec 06 07:34:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.220 251996 DEBUG nova.compute.manager [req-572abadd-f52e-4b35-b1bd-cc28ad4df4e8 req-4b8ef3ac-879c-4c18-9286-ebec63f6bfed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.220 251996 DEBUG oslo_concurrency.lockutils [req-572abadd-f52e-4b35-b1bd-cc28ad4df4e8 req-4b8ef3ac-879c-4c18-9286-ebec63f6bfed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.221 251996 DEBUG oslo_concurrency.lockutils [req-572abadd-f52e-4b35-b1bd-cc28ad4df4e8 req-4b8ef3ac-879c-4c18-9286-ebec63f6bfed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.221 251996 DEBUG oslo_concurrency.lockutils [req-572abadd-f52e-4b35-b1bd-cc28ad4df4e8 req-4b8ef3ac-879c-4c18-9286-ebec63f6bfed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.221 251996 DEBUG nova.compute.manager [req-572abadd-f52e-4b35-b1bd-cc28ad4df4e8 req-4b8ef3ac-879c-4c18-9286-ebec63f6bfed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Processing event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.222 251996 DEBUG nova.compute.manager [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.228 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006460.227856, c2e6b8fd-375c-4658-b338-f2d334041ba3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.229 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] VM Resumed (Lifecycle Event)
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.230 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.237 251996 INFO nova.virt.libvirt.driver [-] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance spawned successfully.
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.237 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.253 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.269 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.272 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.272 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.273 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.273 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.273 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.274 251996 DEBUG nova.virt.libvirt.driver [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.308 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.335 251996 INFO nova.compute.manager [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Took 13.61 seconds to spawn the instance on the hypervisor.
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.336 251996 DEBUG nova.compute.manager [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.415 251996 INFO nova.compute.manager [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Took 17.10 seconds to build instance.
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.433 251996 DEBUG oslo_concurrency.lockutils [None req-0e5dc67d-3366-4b18-9ec5-8876930ceab0 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.434 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 13.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.434 251996 INFO nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.434 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:20 compute-0 nova_compute[251992]: 2025-12-06 07:34:20.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:34:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 695 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 9.7 MiB/s wr, 181 op/s
Dec 06 07:34:21 compute-0 nova_compute[251992]: 2025-12-06 07:34:21.017 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Dec 06 07:34:21 compute-0 ceph-mon[74339]: osdmap e286: 3 total, 3 up, 3 in
Dec 06 07:34:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/717285692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1013349408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1044629662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Dec 06 07:34:21 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Dec 06 07:34:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:21.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:21 compute-0 nova_compute[251992]: 2025-12-06 07:34:21.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:34:21 compute-0 nova_compute[251992]: 2025-12-06 07:34:21.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:34:21 compute-0 nova_compute[251992]: 2025-12-06 07:34:21.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:34:21 compute-0 nova_compute[251992]: 2025-12-06 07:34:21.954 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:34:21 compute-0 nova_compute[251992]: 2025-12-06 07:34:21.955 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:34:21 compute-0 nova_compute[251992]: 2025-12-06 07:34:21.955 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:34:21 compute-0 nova_compute[251992]: 2025-12-06 07:34:21.955 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 70928eda-043f-429b-aa4e-af1f3189a7c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:34:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:22.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:22 compute-0 nova_compute[251992]: 2025-12-06 07:34:22.298 251996 DEBUG nova.compute.manager [req-14e4c8a5-a230-4cf5-8e90-c74fcbdb0590 req-b7aeb240-0e57-43d0-a438-ae4d458e0f01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:34:22 compute-0 nova_compute[251992]: 2025-12-06 07:34:22.298 251996 DEBUG oslo_concurrency.lockutils [req-14e4c8a5-a230-4cf5-8e90-c74fcbdb0590 req-b7aeb240-0e57-43d0-a438-ae4d458e0f01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:22 compute-0 nova_compute[251992]: 2025-12-06 07:34:22.299 251996 DEBUG oslo_concurrency.lockutils [req-14e4c8a5-a230-4cf5-8e90-c74fcbdb0590 req-b7aeb240-0e57-43d0-a438-ae4d458e0f01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:22 compute-0 nova_compute[251992]: 2025-12-06 07:34:22.299 251996 DEBUG oslo_concurrency.lockutils [req-14e4c8a5-a230-4cf5-8e90-c74fcbdb0590 req-b7aeb240-0e57-43d0-a438-ae4d458e0f01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:34:22 compute-0 nova_compute[251992]: 2025-12-06 07:34:22.299 251996 DEBUG nova.compute.manager [req-14e4c8a5-a230-4cf5-8e90-c74fcbdb0590 req-b7aeb240-0e57-43d0-a438-ae4d458e0f01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] No waiting events found dispatching network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:34:22 compute-0 nova_compute[251992]: 2025-12-06 07:34:22.299 251996 WARNING nova.compute.manager [req-14e4c8a5-a230-4cf5-8e90-c74fcbdb0590 req-b7aeb240-0e57-43d0-a438-ae4d458e0f01 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received unexpected event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc for instance with vm_state active and task_state None.
Dec 06 07:34:22 compute-0 ceph-mon[74339]: pgmap v2322: 305 pgs: 305 active+clean; 695 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 9.7 MiB/s wr, 181 op/s
Dec 06 07:34:22 compute-0 ceph-mon[74339]: osdmap e287: 3 total, 3 up, 3 in
Dec 06 07:34:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 695 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 9.9 MiB/s wr, 173 op/s
Dec 06 07:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:34:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:23.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:23 compute-0 nova_compute[251992]: 2025-12-06 07:34:23.733 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Updating instance_info_cache with network_info: [{"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:34:23 compute-0 nova_compute[251992]: 2025-12-06 07:34:23.763 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:34:23 compute-0 nova_compute[251992]: 2025-12-06 07:34:23.764 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:34:24 compute-0 ceph-mon[74339]: pgmap v2324: 305 pgs: 305 active+clean; 695 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 9.9 MiB/s wr, 173 op/s
Dec 06 07:34:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3848048345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:24.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:24 compute-0 nova_compute[251992]: 2025-12-06 07:34:24.387 251996 INFO nova.compute.manager [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Rescuing
Dec 06 07:34:24 compute-0 nova_compute[251992]: 2025-12-06 07:34:24.387 251996 DEBUG oslo_concurrency.lockutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:34:24 compute-0 nova_compute[251992]: 2025-12-06 07:34:24.387 251996 DEBUG oslo_concurrency.lockutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquired lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:34:24 compute-0 nova_compute[251992]: 2025-12-06 07:34:24.387 251996 DEBUG nova.network.neutron [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:34:24 compute-0 nova_compute[251992]: 2025-12-06 07:34:24.615 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:34:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:34:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:34:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:34:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:34:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 705 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 7.5 MiB/s wr, 187 op/s
Dec 06 07:34:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Dec 06 07:34:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:25.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:25 compute-0 ceph-mon[74339]: pgmap v2325: 305 pgs: 305 active+clean; 705 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 7.5 MiB/s wr, 187 op/s
Dec 06 07:34:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Dec 06 07:34:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007513696957007258 of space, bias 1.0, pg target 2.2541090871021776 quantized to 32 (current 32)
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021608694514237857 of space, bias 1.0, pg target 0.6439390965242882 quantized to 32 (current 32)
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.009385032802903407 of space, bias 1.0, pg target 2.7967397752652152 quantized to 32 (current 32)
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:34:25 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Dec 06 07:34:26 compute-0 nova_compute[251992]: 2025-12-06 07:34:26.020 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:26.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:26 compute-0 nova_compute[251992]: 2025-12-06 07:34:26.589 251996 DEBUG nova.network.neutron [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Updating instance_info_cache with network_info: [{"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:34:26 compute-0 nova_compute[251992]: 2025-12-06 07:34:26.681 251996 DEBUG oslo_concurrency.lockutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Releasing lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:34:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 864 KiB/s rd, 3.9 MiB/s wr, 271 op/s
Dec 06 07:34:26 compute-0 ceph-mon[74339]: osdmap e288: 3 total, 3 up, 3 in
Dec 06 07:34:27 compute-0 nova_compute[251992]: 2025-12-06 07:34:27.050 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:34:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:34:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:27.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:34:27 compute-0 ceph-mon[74339]: pgmap v2327: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 864 KiB/s rd, 3.9 MiB/s wr, 271 op/s
Dec 06 07:34:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:28.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 1.8 MiB/s wr, 180 op/s
Dec 06 07:34:28 compute-0 ceph-osd[84884]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Dec 06 07:34:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Dec 06 07:34:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Dec 06 07:34:28 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Dec 06 07:34:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Dec 06 07:34:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Dec 06 07:34:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Dec 06 07:34:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:29.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:29 compute-0 nova_compute[251992]: 2025-12-06 07:34:29.617 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:29 compute-0 ceph-mon[74339]: pgmap v2328: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 559 KiB/s rd, 1.8 MiB/s wr, 180 op/s
Dec 06 07:34:29 compute-0 ceph-mon[74339]: osdmap e289: 3 total, 3 up, 3 in
Dec 06 07:34:29 compute-0 ceph-mon[74339]: osdmap e290: 3 total, 3 up, 3 in
Dec 06 07:34:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:30.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Dec 06 07:34:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Dec 06 07:34:30 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Dec 06 07:34:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 37 KiB/s wr, 257 op/s
Dec 06 07:34:31 compute-0 nova_compute[251992]: 2025-12-06 07:34:31.023 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:34:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:31.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:34:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:34:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1262995560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:34:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:32.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:34:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 30 KiB/s wr, 208 op/s
Dec 06 07:34:33 compute-0 ceph-mon[74339]: osdmap e291: 3 total, 3 up, 3 in
Dec 06 07:34:33 compute-0 ceph-mon[74339]: pgmap v2332: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 37 KiB/s wr, 257 op/s
Dec 06 07:34:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:33.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:34.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Dec 06 07:34:34 compute-0 nova_compute[251992]: 2025-12-06 07:34:34.619 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 32 KiB/s wr, 233 op/s
Dec 06 07:34:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:35.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:36 compute-0 nova_compute[251992]: 2025-12-06 07:34:36.025 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:36.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:36 compute-0 sudo[331525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:34:36 compute-0 sudo[331525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:34:36 compute-0 sudo[331525]: pam_unix(sudo:session): session closed for user root
Dec 06 07:34:36 compute-0 sudo[331556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:34:36 compute-0 sudo[331556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:34:36 compute-0 sudo[331556]: pam_unix(sudo:session): session closed for user root
Dec 06 07:34:36 compute-0 podman[331549]: 2025-12-06 07:34:36.715029109 +0000 UTC m=+0.100929756 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:34:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 25 KiB/s wr, 164 op/s
Dec 06 07:34:37 compute-0 nova_compute[251992]: 2025-12-06 07:34:37.093 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:34:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:37.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:38.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1262995560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:38 compute-0 ceph-mon[74339]: pgmap v2333: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 30 KiB/s wr, 208 op/s
Dec 06 07:34:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 21 KiB/s wr, 134 op/s
Dec 06 07:34:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:39.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:39 compute-0 nova_compute[251992]: 2025-12-06 07:34:39.622 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Dec 06 07:34:39 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Dec 06 07:34:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:40.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec 06 07:34:41 compute-0 nova_compute[251992]: 2025-12-06 07:34:41.027 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:41.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:42.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:34:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/461198439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec 06 07:34:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:34:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:34:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:34:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:34:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:34:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:34:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:43.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:43 compute-0 ceph-mon[74339]: pgmap v2334: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 32 KiB/s wr, 233 op/s
Dec 06 07:34:43 compute-0 ceph-mon[74339]: pgmap v2335: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 25 KiB/s wr, 164 op/s
Dec 06 07:34:43 compute-0 ceph-mon[74339]: pgmap v2336: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 21 KiB/s wr, 134 op/s
Dec 06 07:34:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:44.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:44 compute-0 podman[331604]: 2025-12-06 07:34:44.385084636 +0000 UTC m=+0.047496053 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:34:44 compute-0 podman[331605]: 2025-12-06 07:34:44.389818294 +0000 UTC m=+0.049502657 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:34:44 compute-0 nova_compute[251992]: 2025-12-06 07:34:44.625 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.4 MiB/s wr, 28 op/s
Dec 06 07:34:45 compute-0 ceph-mon[74339]: osdmap e292: 3 total, 3 up, 3 in
Dec 06 07:34:45 compute-0 ceph-mon[74339]: pgmap v2338: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec 06 07:34:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/461198439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:45 compute-0 ceph-mon[74339]: pgmap v2339: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.4 MiB/s wr, 40 op/s
Dec 06 07:34:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:45.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:46 compute-0 nova_compute[251992]: 2025-12-06 07:34:46.071 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:34:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:46.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:34:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 06 07:34:47 compute-0 ceph-mon[74339]: pgmap v2340: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.4 MiB/s wr, 28 op/s
Dec 06 07:34:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:47.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:34:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:48.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:34:48 compute-0 nova_compute[251992]: 2025-12-06 07:34:48.137 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:34:48 compute-0 ceph-mon[74339]: pgmap v2341: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 06 07:34:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2908565525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 06 07:34:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Dec 06 07:34:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:34:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:49.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:34:49 compute-0 nova_compute[251992]: 2025-12-06 07:34:49.627 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Dec 06 07:34:49 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Dec 06 07:34:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:50.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:50 compute-0 sshd-session[331644]: Invalid user admin from 78.128.112.74 port 33430
Dec 06 07:34:50 compute-0 sshd-session[331644]: Connection closed by invalid user admin 78.128.112.74 port 33430 [preauth]
Dec 06 07:34:50 compute-0 ceph-mon[74339]: pgmap v2342: 305 pgs: 305 active+clean; 540 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 06 07:34:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 479 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Dec 06 07:34:51 compute-0 nova_compute[251992]: 2025-12-06 07:34:51.073 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:51.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:51 compute-0 ceph-mon[74339]: osdmap e293: 3 total, 3 up, 3 in
Dec 06 07:34:51 compute-0 ceph-mon[74339]: pgmap v2344: 305 pgs: 305 active+clean; 479 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Dec 06 07:34:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:52.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 479 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Dec 06 07:34:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:53.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:54.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:54 compute-0 ceph-mon[74339]: pgmap v2345: 305 pgs: 305 active+clean; 479 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.1 MiB/s wr, 26 op/s
Dec 06 07:34:54 compute-0 nova_compute[251992]: 2025-12-06 07:34:54.630 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 491 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.5 MiB/s wr, 46 op/s
Dec 06 07:34:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:55.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:55 compute-0 ceph-mon[74339]: pgmap v2346: 305 pgs: 305 active+clean; 491 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.5 MiB/s wr, 46 op/s
Dec 06 07:34:56 compute-0 nova_compute[251992]: 2025-12-06 07:34:56.074 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:34:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:56.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:34:56 compute-0 sudo[331651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:34:56 compute-0 sudo[331651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:34:56 compute-0 sudo[331651]: pam_unix(sudo:session): session closed for user root
Dec 06 07:34:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 524 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 3.2 MiB/s wr, 84 op/s
Dec 06 07:34:56 compute-0 sudo[331676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:34:56 compute-0 sudo[331676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:34:56 compute-0 sudo[331676]: pam_unix(sudo:session): session closed for user root
Dec 06 07:34:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:34:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:57.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:34:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:34:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:34:58.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:34:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 524 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 3.2 MiB/s wr, 84 op/s
Dec 06 07:34:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3450741167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3678721108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:34:58 compute-0 ceph-mon[74339]: pgmap v2347: 305 pgs: 305 active+clean; 524 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 3.2 MiB/s wr, 84 op/s
Dec 06 07:34:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4270521250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:34:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:34:59 compute-0 nova_compute[251992]: 2025-12-06 07:34:59.194 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:34:59 compute-0 nova_compute[251992]: 2025-12-06 07:34:59.390 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "c1ef1073-7c66-428c-a02b-e4daa3551d22" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:59 compute-0 nova_compute[251992]: 2025-12-06 07:34:59.390 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:59 compute-0 nova_compute[251992]: 2025-12-06 07:34:59.451 251996 DEBUG nova.compute.manager [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:34:59 compute-0 nova_compute[251992]: 2025-12-06 07:34:59.553 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:34:59 compute-0 nova_compute[251992]: 2025-12-06 07:34:59.554 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:34:59 compute-0 nova_compute[251992]: 2025-12-06 07:34:59.560 251996 DEBUG nova.virt.hardware [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:34:59 compute-0 nova_compute[251992]: 2025-12-06 07:34:59.560 251996 INFO nova.compute.claims [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:34:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:34:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:34:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:34:59.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:34:59 compute-0 nova_compute[251992]: 2025-12-06 07:34:59.631 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:34:59 compute-0 nova_compute[251992]: 2025-12-06 07:34:59.701 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:00.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:35:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1378460008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:00 compute-0 nova_compute[251992]: 2025-12-06 07:35:00.146 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:00 compute-0 nova_compute[251992]: 2025-12-06 07:35:00.152 251996 DEBUG nova.compute.provider_tree [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:35:00 compute-0 nova_compute[251992]: 2025-12-06 07:35:00.167 251996 DEBUG nova.scheduler.client.report [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:35:00 compute-0 sudo[331724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:00 compute-0 sudo[331724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:00 compute-0 sudo[331724]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:00 compute-0 nova_compute[251992]: 2025-12-06 07:35:00.188 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:00 compute-0 nova_compute[251992]: 2025-12-06 07:35:00.189 251996 DEBUG nova.compute.manager [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:35:00 compute-0 nova_compute[251992]: 2025-12-06 07:35:00.231 251996 DEBUG nova.compute.manager [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:35:00 compute-0 nova_compute[251992]: 2025-12-06 07:35:00.232 251996 DEBUG nova.network.neutron [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:35:00 compute-0 sudo[331749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:35:00 compute-0 sudo[331749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:00 compute-0 sudo[331749]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:00 compute-0 nova_compute[251992]: 2025-12-06 07:35:00.254 251996 INFO nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:35:00 compute-0 nova_compute[251992]: 2025-12-06 07:35:00.271 251996 DEBUG nova.compute.manager [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:35:00 compute-0 sudo[331774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:00 compute-0 sudo[331774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:00 compute-0 sudo[331774]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:00 compute-0 nova_compute[251992]: 2025-12-06 07:35:00.345 251996 DEBUG nova.compute.manager [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:35:00 compute-0 nova_compute[251992]: 2025-12-06 07:35:00.346 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:35:00 compute-0 nova_compute[251992]: 2025-12-06 07:35:00.347 251996 INFO nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Creating image(s)
Dec 06 07:35:00 compute-0 sudo[331799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:35:00 compute-0 sudo[331799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 555 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 307 KiB/s rd, 3.8 MiB/s wr, 90 op/s
Dec 06 07:35:00 compute-0 sudo[331799]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:35:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:35:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:35:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:35:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:35:01 compute-0 nova_compute[251992]: 2025-12-06 07:35:01.145 251996 DEBUG nova.storage.rbd_utils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image c1ef1073-7c66-428c-a02b-e4daa3551d22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:35:01 compute-0 nova_compute[251992]: 2025-12-06 07:35:01.169 251996 DEBUG nova.storage.rbd_utils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image c1ef1073-7c66-428c-a02b-e4daa3551d22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:35:01 compute-0 nova_compute[251992]: 2025-12-06 07:35:01.191 251996 DEBUG nova.storage.rbd_utils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image c1ef1073-7c66-428c-a02b-e4daa3551d22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:35:01 compute-0 nova_compute[251992]: 2025-12-06 07:35:01.194 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:01 compute-0 nova_compute[251992]: 2025-12-06 07:35:01.221 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:01 compute-0 nova_compute[251992]: 2025-12-06 07:35:01.224 251996 DEBUG nova.policy [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a70f6c3c5e2c402bb6fa0e0507e9b6dc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b10aa03d68eb4d4799d53538521cc364', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:35:01 compute-0 nova_compute[251992]: 2025-12-06 07:35:01.267 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:01 compute-0 nova_compute[251992]: 2025-12-06 07:35:01.267 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:01 compute-0 nova_compute[251992]: 2025-12-06 07:35:01.268 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:01 compute-0 nova_compute[251992]: 2025-12-06 07:35:01.268 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:01 compute-0 nova_compute[251992]: 2025-12-06 07:35:01.355 251996 DEBUG nova.storage.rbd_utils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image c1ef1073-7c66-428c-a02b-e4daa3551d22_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:35:01 compute-0 nova_compute[251992]: 2025-12-06 07:35:01.358 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c1ef1073-7c66-428c-a02b-e4daa3551d22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:01.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:01 compute-0 ceph-mon[74339]: pgmap v2348: 305 pgs: 305 active+clean; 524 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 3.2 MiB/s wr, 84 op/s
Dec 06 07:35:02 compute-0 nova_compute[251992]: 2025-12-06 07:35:02.070 251996 DEBUG nova.network.neutron [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Successfully created port: 16f011c3-09ff-46c7-b7cc-7ad9cdaac07d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:35:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:02.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 555 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 280 KiB/s rd, 2.8 MiB/s wr, 71 op/s
Dec 06 07:35:03 compute-0 nova_compute[251992]: 2025-12-06 07:35:03.053 251996 DEBUG nova.network.neutron [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Successfully updated port: 16f011c3-09ff-46c7-b7cc-7ad9cdaac07d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:35:03 compute-0 nova_compute[251992]: 2025-12-06 07:35:03.110 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:35:03 compute-0 nova_compute[251992]: 2025-12-06 07:35:03.110 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:35:03 compute-0 nova_compute[251992]: 2025-12-06 07:35:03.111 251996 DEBUG nova.network.neutron [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:35:03 compute-0 nova_compute[251992]: 2025-12-06 07:35:03.154 251996 DEBUG nova.compute.manager [req-f5c45558-5ad5-441c-8729-cd0f316f2b50 req-c95c4b9c-9d82-4377-a67a-17a2ae9f2ad8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Received event network-changed-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:03 compute-0 nova_compute[251992]: 2025-12-06 07:35:03.154 251996 DEBUG nova.compute.manager [req-f5c45558-5ad5-441c-8729-cd0f316f2b50 req-c95c4b9c-9d82-4377-a67a-17a2ae9f2ad8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Refreshing instance network info cache due to event network-changed-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:35:03 compute-0 nova_compute[251992]: 2025-12-06 07:35:03.154 251996 DEBUG oslo_concurrency.lockutils [req-f5c45558-5ad5-441c-8729-cd0f316f2b50 req-c95c4b9c-9d82-4377-a67a-17a2ae9f2ad8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:35:03 compute-0 nova_compute[251992]: 2025-12-06 07:35:03.261 251996 DEBUG nova.network.neutron [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:35:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:35:03 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 81417b08-4ff9-43c8-b7af-ed2f9cfddef9 does not exist
Dec 06 07:35:03 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e7e246bf-b571-4179-8f6b-467743846c82 does not exist
Dec 06 07:35:03 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9422fb91-9ef0-461e-8edd-2e584e925adb does not exist
Dec 06 07:35:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:35:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:35:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:35:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:35:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:35:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:35:03 compute-0 sudo[331951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:03 compute-0 sudo[331951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:03 compute-0 sudo[331951]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:03 compute-0 sudo[331976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:35:03 compute-0 sudo[331976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:03 compute-0 sudo[331976]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:03.842 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:03.842 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:03.843 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:03 compute-0 sudo[332001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:03 compute-0 sudo[332001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:03 compute-0 sudo[332001]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:03 compute-0 sudo[332026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:35:03 compute-0 sudo[332026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:04 compute-0 nova_compute[251992]: 2025-12-06 07:35:04.120 251996 DEBUG nova.network.neutron [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Updating instance_info_cache with network_info: [{"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:35:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:04.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:04 compute-0 nova_compute[251992]: 2025-12-06 07:35:04.160 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:35:04 compute-0 nova_compute[251992]: 2025-12-06 07:35:04.161 251996 DEBUG nova.compute.manager [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Instance network_info: |[{"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:35:04 compute-0 nova_compute[251992]: 2025-12-06 07:35:04.161 251996 DEBUG oslo_concurrency.lockutils [req-f5c45558-5ad5-441c-8729-cd0f316f2b50 req-c95c4b9c-9d82-4377-a67a-17a2ae9f2ad8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:35:04 compute-0 nova_compute[251992]: 2025-12-06 07:35:04.161 251996 DEBUG nova.network.neutron [req-f5c45558-5ad5-441c-8729-cd0f316f2b50 req-c95c4b9c-9d82-4377-a67a-17a2ae9f2ad8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Refreshing network info cache for port 16f011c3-09ff-46c7-b7cc-7ad9cdaac07d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:35:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:04 compute-0 podman[332094]: 2025-12-06 07:35:04.24371375 +0000 UTC m=+0.044305777 container create 35f8bdf9093fae5f443fded4ae661ff6f07cc61ef1efcdd51498df68711174f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_haslett, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:35:04 compute-0 systemd[1]: Started libpod-conmon-35f8bdf9093fae5f443fded4ae661ff6f07cc61ef1efcdd51498df68711174f7.scope.
Dec 06 07:35:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:35:04 compute-0 podman[332094]: 2025-12-06 07:35:04.222741744 +0000 UTC m=+0.023333811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:35:04 compute-0 podman[332094]: 2025-12-06 07:35:04.330171374 +0000 UTC m=+0.130763431 container init 35f8bdf9093fae5f443fded4ae661ff6f07cc61ef1efcdd51498df68711174f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 07:35:04 compute-0 podman[332094]: 2025-12-06 07:35:04.338511339 +0000 UTC m=+0.139103376 container start 35f8bdf9093fae5f443fded4ae661ff6f07cc61ef1efcdd51498df68711174f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_haslett, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Dec 06 07:35:04 compute-0 podman[332094]: 2025-12-06 07:35:04.342085445 +0000 UTC m=+0.142677482 container attach 35f8bdf9093fae5f443fded4ae661ff6f07cc61ef1efcdd51498df68711174f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_haslett, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:35:04 compute-0 frosty_haslett[332110]: 167 167
Dec 06 07:35:04 compute-0 systemd[1]: libpod-35f8bdf9093fae5f443fded4ae661ff6f07cc61ef1efcdd51498df68711174f7.scope: Deactivated successfully.
Dec 06 07:35:04 compute-0 podman[332094]: 2025-12-06 07:35:04.345414716 +0000 UTC m=+0.146006753 container died 35f8bdf9093fae5f443fded4ae661ff6f07cc61ef1efcdd51498df68711174f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Dec 06 07:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-14c19fe6cb9ae0ab696a23b958d941e12e8172e86ea74327abdfcf6ef40a743c-merged.mount: Deactivated successfully.
Dec 06 07:35:04 compute-0 podman[332094]: 2025-12-06 07:35:04.388391876 +0000 UTC m=+0.188983913 container remove 35f8bdf9093fae5f443fded4ae661ff6f07cc61ef1efcdd51498df68711174f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_haslett, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:35:04 compute-0 systemd[1]: libpod-conmon-35f8bdf9093fae5f443fded4ae661ff6f07cc61ef1efcdd51498df68711174f7.scope: Deactivated successfully.
Dec 06 07:35:04 compute-0 podman[332133]: 2025-12-06 07:35:04.553402761 +0000 UTC m=+0.043599279 container create 364866b9a2a2c5a367b6da9a52b8fbcef18fed3c786de7e1ccf1082cc37b02ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_volhard, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:35:04 compute-0 systemd[1]: Started libpod-conmon-364866b9a2a2c5a367b6da9a52b8fbcef18fed3c786de7e1ccf1082cc37b02ff.scope.
Dec 06 07:35:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:35:04 compute-0 podman[332133]: 2025-12-06 07:35:04.533506204 +0000 UTC m=+0.023702742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be2fdd8423ec85d1bf4317d363a75494388f5873e17d68ab143b74fdf64a2a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be2fdd8423ec85d1bf4317d363a75494388f5873e17d68ab143b74fdf64a2a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be2fdd8423ec85d1bf4317d363a75494388f5873e17d68ab143b74fdf64a2a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be2fdd8423ec85d1bf4317d363a75494388f5873e17d68ab143b74fdf64a2a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be2fdd8423ec85d1bf4317d363a75494388f5873e17d68ab143b74fdf64a2a5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:04 compute-0 nova_compute[251992]: 2025-12-06 07:35:04.632 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:04 compute-0 podman[332133]: 2025-12-06 07:35:04.652153867 +0000 UTC m=+0.142350395 container init 364866b9a2a2c5a367b6da9a52b8fbcef18fed3c786de7e1ccf1082cc37b02ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:35:04 compute-0 podman[332133]: 2025-12-06 07:35:04.658017585 +0000 UTC m=+0.148214103 container start 364866b9a2a2c5a367b6da9a52b8fbcef18fed3c786de7e1ccf1082cc37b02ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:35:04 compute-0 podman[332133]: 2025-12-06 07:35:04.662315311 +0000 UTC m=+0.152511849 container attach 364866b9a2a2c5a367b6da9a52b8fbcef18fed3c786de7e1ccf1082cc37b02ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_volhard, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:35:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1378460008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:04 compute-0 ceph-mon[74339]: pgmap v2349: 305 pgs: 305 active+clean; 555 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 307 KiB/s rd, 3.8 MiB/s wr, 90 op/s
Dec 06 07:35:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:35:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:35:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 570 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 3.6 MiB/s wr, 79 op/s
Dec 06 07:35:05 compute-0 nova_compute[251992]: 2025-12-06 07:35:05.154 251996 DEBUG nova.network.neutron [req-f5c45558-5ad5-441c-8729-cd0f316f2b50 req-c95c4b9c-9d82-4377-a67a-17a2ae9f2ad8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Updated VIF entry in instance network info cache for port 16f011c3-09ff-46c7-b7cc-7ad9cdaac07d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:35:05 compute-0 nova_compute[251992]: 2025-12-06 07:35:05.155 251996 DEBUG nova.network.neutron [req-f5c45558-5ad5-441c-8729-cd0f316f2b50 req-c95c4b9c-9d82-4377-a67a-17a2ae9f2ad8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Updating instance_info_cache with network_info: [{"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:35:05 compute-0 nova_compute[251992]: 2025-12-06 07:35:05.171 251996 DEBUG oslo_concurrency.lockutils [req-f5c45558-5ad5-441c-8729-cd0f316f2b50 req-c95c4b9c-9d82-4377-a67a-17a2ae9f2ad8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:35:05 compute-0 goofy_volhard[332149]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:35:05 compute-0 goofy_volhard[332149]: --> relative data size: 1.0
Dec 06 07:35:05 compute-0 goofy_volhard[332149]: --> All data devices are unavailable
Dec 06 07:35:05 compute-0 systemd[1]: libpod-364866b9a2a2c5a367b6da9a52b8fbcef18fed3c786de7e1ccf1082cc37b02ff.scope: Deactivated successfully.
Dec 06 07:35:05 compute-0 podman[332133]: 2025-12-06 07:35:05.462804621 +0000 UTC m=+0.953001159 container died 364866b9a2a2c5a367b6da9a52b8fbcef18fed3c786de7e1ccf1082cc37b02ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 07:35:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-2be2fdd8423ec85d1bf4317d363a75494388f5873e17d68ab143b74fdf64a2a5-merged.mount: Deactivated successfully.
Dec 06 07:35:05 compute-0 podman[332133]: 2025-12-06 07:35:05.519687447 +0000 UTC m=+1.009883965 container remove 364866b9a2a2c5a367b6da9a52b8fbcef18fed3c786de7e1ccf1082cc37b02ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_volhard, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 07:35:05 compute-0 systemd[1]: libpod-conmon-364866b9a2a2c5a367b6da9a52b8fbcef18fed3c786de7e1ccf1082cc37b02ff.scope: Deactivated successfully.
Dec 06 07:35:05 compute-0 sudo[332026]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:05.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:05 compute-0 sudo[332178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:05 compute-0 sudo[332178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:05 compute-0 sudo[332178]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:05 compute-0 sudo[332203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:35:05 compute-0 sudo[332203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:05 compute-0 sudo[332203]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:05 compute-0 sudo[332228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:05 compute-0 sudo[332228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:05 compute-0 sudo[332228]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:05 compute-0 sudo[332253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:35:05 compute-0 sudo[332253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:06 compute-0 podman[332319]: 2025-12-06 07:35:06.099199842 +0000 UTC m=+0.041492921 container create 8459ce697933ef4cdb581fca42ed9b353a754cd5f3355913b72f11ca54672920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 07:35:06 compute-0 systemd[1]: Started libpod-conmon-8459ce697933ef4cdb581fca42ed9b353a754cd5f3355913b72f11ca54672920.scope.
Dec 06 07:35:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:06.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:35:06 compute-0 podman[332319]: 2025-12-06 07:35:06.177674831 +0000 UTC m=+0.119967920 container init 8459ce697933ef4cdb581fca42ed9b353a754cd5f3355913b72f11ca54672920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:35:06 compute-0 podman[332319]: 2025-12-06 07:35:06.082566263 +0000 UTC m=+0.024859362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:35:06 compute-0 podman[332319]: 2025-12-06 07:35:06.186197721 +0000 UTC m=+0.128490800 container start 8459ce697933ef4cdb581fca42ed9b353a754cd5f3355913b72f11ca54672920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaum, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 07:35:06 compute-0 podman[332319]: 2025-12-06 07:35:06.189537971 +0000 UTC m=+0.131831070 container attach 8459ce697933ef4cdb581fca42ed9b353a754cd5f3355913b72f11ca54672920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 07:35:06 compute-0 musing_chaum[332336]: 167 167
Dec 06 07:35:06 compute-0 systemd[1]: libpod-8459ce697933ef4cdb581fca42ed9b353a754cd5f3355913b72f11ca54672920.scope: Deactivated successfully.
Dec 06 07:35:06 compute-0 podman[332319]: 2025-12-06 07:35:06.190512147 +0000 UTC m=+0.132805226 container died 8459ce697933ef4cdb581fca42ed9b353a754cd5f3355913b72f11ca54672920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaum, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-93363f2e01697eda856af92b62e6c515779971d5bd49ff47b720034a52ccce6f-merged.mount: Deactivated successfully.
Dec 06 07:35:06 compute-0 nova_compute[251992]: 2025-12-06 07:35:06.224 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:06 compute-0 podman[332319]: 2025-12-06 07:35:06.232648234 +0000 UTC m=+0.174941313 container remove 8459ce697933ef4cdb581fca42ed9b353a754cd5f3355913b72f11ca54672920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:35:06 compute-0 systemd[1]: libpod-conmon-8459ce697933ef4cdb581fca42ed9b353a754cd5f3355913b72f11ca54672920.scope: Deactivated successfully.
Dec 06 07:35:06 compute-0 podman[332360]: 2025-12-06 07:35:06.396414636 +0000 UTC m=+0.046946139 container create dbff5a0add0da1beb9d9d381ef726ab989b527d9f9217316326b24460c17618a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:35:06 compute-0 systemd[1]: Started libpod-conmon-dbff5a0add0da1beb9d9d381ef726ab989b527d9f9217316326b24460c17618a.scope.
Dec 06 07:35:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3445b590fe2eeeff5f2b27402509184e83ba4fbdbfe7b996f23a3d9f314ded53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3445b590fe2eeeff5f2b27402509184e83ba4fbdbfe7b996f23a3d9f314ded53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3445b590fe2eeeff5f2b27402509184e83ba4fbdbfe7b996f23a3d9f314ded53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3445b590fe2eeeff5f2b27402509184e83ba4fbdbfe7b996f23a3d9f314ded53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:06 compute-0 podman[332360]: 2025-12-06 07:35:06.370265109 +0000 UTC m=+0.020796642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:35:06 compute-0 podman[332360]: 2025-12-06 07:35:06.474229606 +0000 UTC m=+0.124761139 container init dbff5a0add0da1beb9d9d381ef726ab989b527d9f9217316326b24460c17618a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:35:06 compute-0 podman[332360]: 2025-12-06 07:35:06.480911797 +0000 UTC m=+0.131443300 container start dbff5a0add0da1beb9d9d381ef726ab989b527d9f9217316326b24460c17618a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:35:06 compute-0 podman[332360]: 2025-12-06 07:35:06.484430852 +0000 UTC m=+0.134962375 container attach dbff5a0add0da1beb9d9d381ef726ab989b527d9f9217316326b24460c17618a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 07:35:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 576 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 256 KiB/s rd, 3.3 MiB/s wr, 74 op/s
Dec 06 07:35:07 compute-0 happy_borg[332376]: {
Dec 06 07:35:07 compute-0 happy_borg[332376]:     "0": [
Dec 06 07:35:07 compute-0 happy_borg[332376]:         {
Dec 06 07:35:07 compute-0 happy_borg[332376]:             "devices": [
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "/dev/loop3"
Dec 06 07:35:07 compute-0 happy_borg[332376]:             ],
Dec 06 07:35:07 compute-0 happy_borg[332376]:             "lv_name": "ceph_lv0",
Dec 06 07:35:07 compute-0 happy_borg[332376]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:35:07 compute-0 happy_borg[332376]:             "lv_size": "7511998464",
Dec 06 07:35:07 compute-0 happy_borg[332376]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:35:07 compute-0 happy_borg[332376]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:35:07 compute-0 happy_borg[332376]:             "name": "ceph_lv0",
Dec 06 07:35:07 compute-0 happy_borg[332376]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:35:07 compute-0 happy_borg[332376]:             "tags": {
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "ceph.cluster_name": "ceph",
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "ceph.crush_device_class": "",
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "ceph.encrypted": "0",
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "ceph.osd_id": "0",
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "ceph.type": "block",
Dec 06 07:35:07 compute-0 happy_borg[332376]:                 "ceph.vdo": "0"
Dec 06 07:35:07 compute-0 happy_borg[332376]:             },
Dec 06 07:35:07 compute-0 happy_borg[332376]:             "type": "block",
Dec 06 07:35:07 compute-0 happy_borg[332376]:             "vg_name": "ceph_vg0"
Dec 06 07:35:07 compute-0 happy_borg[332376]:         }
Dec 06 07:35:07 compute-0 happy_borg[332376]:     ]
Dec 06 07:35:07 compute-0 happy_borg[332376]: }
Dec 06 07:35:07 compute-0 podman[332360]: 2025-12-06 07:35:07.341951031 +0000 UTC m=+0.992482534 container died dbff5a0add0da1beb9d9d381ef726ab989b527d9f9217316326b24460c17618a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 07:35:07 compute-0 systemd[1]: libpod-dbff5a0add0da1beb9d9d381ef726ab989b527d9f9217316326b24460c17618a.scope: Deactivated successfully.
Dec 06 07:35:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3445b590fe2eeeff5f2b27402509184e83ba4fbdbfe7b996f23a3d9f314ded53-merged.mount: Deactivated successfully.
Dec 06 07:35:07 compute-0 podman[332360]: 2025-12-06 07:35:07.396647228 +0000 UTC m=+1.047178731 container remove dbff5a0add0da1beb9d9d381ef726ab989b527d9f9217316326b24460c17618a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_borg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:35:07 compute-0 systemd[1]: libpod-conmon-dbff5a0add0da1beb9d9d381ef726ab989b527d9f9217316326b24460c17618a.scope: Deactivated successfully.
Dec 06 07:35:07 compute-0 sudo[332253]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:07 compute-0 podman[332386]: 2025-12-06 07:35:07.42896937 +0000 UTC m=+0.087521053 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:35:07 compute-0 sudo[332422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:07 compute-0 sudo[332422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:07 compute-0 sudo[332422]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:07 compute-0 sudo[332447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:35:07 compute-0 sudo[332447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:07 compute-0 sudo[332447]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:07.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:07 compute-0 sudo[332472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:07 compute-0 sudo[332472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:07 compute-0 sudo[332472]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:07 compute-0 sudo[332497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:35:07 compute-0 sudo[332497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:07 compute-0 ceph-mon[74339]: pgmap v2350: 305 pgs: 305 active+clean; 555 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 280 KiB/s rd, 2.8 MiB/s wr, 71 op/s
Dec 06 07:35:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:35:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:35:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:35:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:35:07 compute-0 ceph-mon[74339]: pgmap v2351: 305 pgs: 305 active+clean; 570 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 3.6 MiB/s wr, 79 op/s
Dec 06 07:35:07 compute-0 podman[332562]: 2025-12-06 07:35:07.972219036 +0000 UTC m=+0.038166680 container create 996a2aaf8c33ebba695335197262d8a15dcc728e37bd142b877f5d83f96f9f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_fermi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:35:08 compute-0 systemd[1]: Started libpod-conmon-996a2aaf8c33ebba695335197262d8a15dcc728e37bd142b877f5d83f96f9f9b.scope.
Dec 06 07:35:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:35:08 compute-0 podman[332562]: 2025-12-06 07:35:08.049717749 +0000 UTC m=+0.115665413 container init 996a2aaf8c33ebba695335197262d8a15dcc728e37bd142b877f5d83f96f9f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 07:35:08 compute-0 podman[332562]: 2025-12-06 07:35:07.955967687 +0000 UTC m=+0.021915351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:35:08 compute-0 podman[332562]: 2025-12-06 07:35:08.056563633 +0000 UTC m=+0.122511277 container start 996a2aaf8c33ebba695335197262d8a15dcc728e37bd142b877f5d83f96f9f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_fermi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:35:08 compute-0 podman[332562]: 2025-12-06 07:35:08.059891213 +0000 UTC m=+0.125838877 container attach 996a2aaf8c33ebba695335197262d8a15dcc728e37bd142b877f5d83f96f9f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_fermi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:35:08 compute-0 busy_fermi[332579]: 167 167
Dec 06 07:35:08 compute-0 systemd[1]: libpod-996a2aaf8c33ebba695335197262d8a15dcc728e37bd142b877f5d83f96f9f9b.scope: Deactivated successfully.
Dec 06 07:35:08 compute-0 podman[332562]: 2025-12-06 07:35:08.063733357 +0000 UTC m=+0.129681001 container died 996a2aaf8c33ebba695335197262d8a15dcc728e37bd142b877f5d83f96f9f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_fermi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:35:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5395bf76cba16402904d67bd940e448272714e63c6bb65232cfad95cee3030ae-merged.mount: Deactivated successfully.
Dec 06 07:35:08 compute-0 podman[332562]: 2025-12-06 07:35:08.102493424 +0000 UTC m=+0.168441068 container remove 996a2aaf8c33ebba695335197262d8a15dcc728e37bd142b877f5d83f96f9f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_fermi, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:35:08 compute-0 systemd[1]: libpod-conmon-996a2aaf8c33ebba695335197262d8a15dcc728e37bd142b877f5d83f96f9f9b.scope: Deactivated successfully.
Dec 06 07:35:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:08.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:08 compute-0 podman[332603]: 2025-12-06 07:35:08.317082056 +0000 UTC m=+0.040918935 container create 09c9ef96ebe5121c7566072c7ea4ceba54a8ba746e65f5936b12411a4b1f2b7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:35:08 compute-0 systemd[1]: Started libpod-conmon-09c9ef96ebe5121c7566072c7ea4ceba54a8ba746e65f5936b12411a4b1f2b7d.scope.
Dec 06 07:35:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cb731684ee9930316a39e8f287b8d45ba3e0009e53783ae292302d080b5722/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cb731684ee9930316a39e8f287b8d45ba3e0009e53783ae292302d080b5722/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cb731684ee9930316a39e8f287b8d45ba3e0009e53783ae292302d080b5722/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cb731684ee9930316a39e8f287b8d45ba3e0009e53783ae292302d080b5722/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:08 compute-0 podman[332603]: 2025-12-06 07:35:08.378847654 +0000 UTC m=+0.102684543 container init 09c9ef96ebe5121c7566072c7ea4ceba54a8ba746e65f5936b12411a4b1f2b7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:35:08 compute-0 podman[332603]: 2025-12-06 07:35:08.387047445 +0000 UTC m=+0.110884324 container start 09c9ef96ebe5121c7566072c7ea4ceba54a8ba746e65f5936b12411a4b1f2b7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 07:35:08 compute-0 podman[332603]: 2025-12-06 07:35:08.390455407 +0000 UTC m=+0.114292326 container attach 09c9ef96ebe5121c7566072c7ea4ceba54a8ba746e65f5936b12411a4b1f2b7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:35:08 compute-0 podman[332603]: 2025-12-06 07:35:08.299678077 +0000 UTC m=+0.023514976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:35:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 576 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.9 MiB/s wr, 42 op/s
Dec 06 07:35:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:09 compute-0 brave_nash[332619]: {
Dec 06 07:35:09 compute-0 brave_nash[332619]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:35:09 compute-0 brave_nash[332619]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:35:09 compute-0 brave_nash[332619]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:35:09 compute-0 brave_nash[332619]:         "osd_id": 0,
Dec 06 07:35:09 compute-0 brave_nash[332619]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:35:09 compute-0 brave_nash[332619]:         "type": "bluestore"
Dec 06 07:35:09 compute-0 brave_nash[332619]:     }
Dec 06 07:35:09 compute-0 brave_nash[332619]: }
Dec 06 07:35:09 compute-0 systemd[1]: libpod-09c9ef96ebe5121c7566072c7ea4ceba54a8ba746e65f5936b12411a4b1f2b7d.scope: Deactivated successfully.
Dec 06 07:35:09 compute-0 conmon[332619]: conmon 09c9ef96ebe5121c7566 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-09c9ef96ebe5121c7566072c7ea4ceba54a8ba746e65f5936b12411a4b1f2b7d.scope/container/memory.events
Dec 06 07:35:09 compute-0 podman[332603]: 2025-12-06 07:35:09.303084515 +0000 UTC m=+1.026921394 container died 09c9ef96ebe5121c7566072c7ea4ceba54a8ba746e65f5936b12411a4b1f2b7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 07:35:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-58cb731684ee9930316a39e8f287b8d45ba3e0009e53783ae292302d080b5722-merged.mount: Deactivated successfully.
Dec 06 07:35:09 compute-0 podman[332603]: 2025-12-06 07:35:09.355023108 +0000 UTC m=+1.078859977 container remove 09c9ef96ebe5121c7566072c7ea4ceba54a8ba746e65f5936b12411a4b1f2b7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_nash, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:35:09 compute-0 systemd[1]: libpod-conmon-09c9ef96ebe5121c7566072c7ea4ceba54a8ba746e65f5936b12411a4b1f2b7d.scope: Deactivated successfully.
Dec 06 07:35:09 compute-0 sudo[332497]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:35:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:35:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:09.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:35:09 compute-0 nova_compute[251992]: 2025-12-06 07:35:09.635 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:35:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:10.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:35:10 compute-0 ceph-mon[74339]: pgmap v2352: 305 pgs: 305 active+clean; 576 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 256 KiB/s rd, 3.3 MiB/s wr, 74 op/s
Dec 06 07:35:10 compute-0 nova_compute[251992]: 2025-12-06 07:35:10.271 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:35:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 623 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Dec 06 07:35:11 compute-0 nova_compute[251992]: 2025-12-06 07:35:11.229 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:11.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:12.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:12 compute-0 nova_compute[251992]: 2025-12-06 07:35:12.575 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c1ef1073-7c66-428c-a02b-e4daa3551d22_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 11.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:12 compute-0 nova_compute[251992]: 2025-12-06 07:35:12.647 251996 DEBUG nova.storage.rbd_utils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] resizing rbd image c1ef1073-7c66-428c-a02b-e4daa3551d22_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:35:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:35:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:35:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 623 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.6 MiB/s wr, 38 op/s
Dec 06 07:35:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:35:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:35:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:35:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:35:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:35:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.091 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:13 compute-0 ceph-mon[74339]: pgmap v2353: 305 pgs: 305 active+clean; 576 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.9 MiB/s wr, 42 op/s
Dec 06 07:35:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3238803652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.131 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.160 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.161 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.161 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.161 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.161 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:35:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e537c112-121c-4139-aaee-9457e71dc2d7 does not exist
Dec 06 07:35:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev dbe3e825-e4ac-4876-b19c-a847aa3f1bd4 does not exist
Dec 06 07:35:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8d3402ae-25b7-4840-b9a6-0bc834459079 does not exist
Dec 06 07:35:13 compute-0 sudo[332712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:13 compute-0 sudo[332712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:13 compute-0 sudo[332712]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:13 compute-0 sudo[332747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:35:13 compute-0 sudo[332747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:13 compute-0 sudo[332747]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:35:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1926352857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:35:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.643 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.711 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.713 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.716 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.716 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.831 251996 DEBUG nova.objects.instance [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'migration_context' on Instance uuid c1ef1073-7c66-428c-a02b-e4daa3551d22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.848 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.850 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Ensure instance console log exists: /var/lib/nova/instances/c1ef1073-7c66-428c-a02b-e4daa3551d22/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.850 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.851 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.851 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.854 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Start _get_guest_xml network_info=[{"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.859 251996 WARNING nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.865 251996 DEBUG nova.virt.libvirt.host [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.865 251996 DEBUG nova.virt.libvirt.host [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.869 251996 DEBUG nova.virt.libvirt.host [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.870 251996 DEBUG nova.virt.libvirt.host [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.871 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.871 251996 DEBUG nova.virt.hardware [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.872 251996 DEBUG nova.virt.hardware [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.872 251996 DEBUG nova.virt.hardware [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.872 251996 DEBUG nova.virt.hardware [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.872 251996 DEBUG nova.virt.hardware [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.873 251996 DEBUG nova.virt.hardware [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.873 251996 DEBUG nova.virt.hardware [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.873 251996 DEBUG nova.virt.hardware [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.873 251996 DEBUG nova.virt.hardware [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.873 251996 DEBUG nova.virt.hardware [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.874 251996 DEBUG nova.virt.hardware [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.876 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.954 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.955 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4149MB free_disk=20.764301300048828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.955 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:13 compute-0 nova_compute[251992]: 2025-12-06 07:35:13.956 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.054 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 70928eda-043f-429b-aa4e-af1f3189a7c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.054 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c2e6b8fd-375c-4658-b338-f2d334041ba3 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.054 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c1ef1073-7c66-428c-a02b-e4daa3551d22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.054 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.055 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.115 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3009168779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:35:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3009168779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:35:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1980977133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:14 compute-0 ceph-mon[74339]: pgmap v2354: 305 pgs: 305 active+clean; 623 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Dec 06 07:35:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:35:14 compute-0 ceph-mon[74339]: pgmap v2355: 305 pgs: 305 active+clean; 623 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.6 MiB/s wr, 38 op/s
Dec 06 07:35:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:35:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1926352857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/743687950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:14.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:35:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/288153005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.334 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.374 251996 DEBUG nova.storage.rbd_utils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image c1ef1073-7c66-428c-a02b-e4daa3551d22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.380 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:35:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3588510582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.574 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.580 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.662 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.674 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 2.7 MiB/s wr, 57 op/s
Dec 06 07:35:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:35:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2354002293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.856 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.858 251996 DEBUG nova.virt.libvirt.vif [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:34:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-144167502',display_name='tempest-ServerActionsTestOtherB-server-144167502',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-144167502',id=128,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-x1yakn5i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:35:00Z,user_data=None,user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=c1ef1073-7c66-428c-a02b-e4daa3551d22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.858 251996 DEBUG nova.network.os_vif_util [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.859 251996 DEBUG nova.network.os_vif_util [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:cc:7f,bridge_name='br-int',has_traffic_filtering=True,id=16f011c3-09ff-46c7-b7cc-7ad9cdaac07d,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16f011c3-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.860 251996 DEBUG nova.objects.instance [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'pci_devices' on Instance uuid c1ef1073-7c66-428c-a02b-e4daa3551d22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.863 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:35:14 compute-0 nova_compute[251992]: 2025-12-06 07:35:14.863 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.037 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:35:15 compute-0 nova_compute[251992]:   <uuid>c1ef1073-7c66-428c-a02b-e4daa3551d22</uuid>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   <name>instance-00000080</name>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerActionsTestOtherB-server-144167502</nova:name>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:35:13</nova:creationTime>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <nova:user uuid="a70f6c3c5e2c402bb6fa0e0507e9b6dc">tempest-ServerActionsTestOtherB-874907570-project-member</nova:user>
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <nova:project uuid="b10aa03d68eb4d4799d53538521cc364">tempest-ServerActionsTestOtherB-874907570</nova:project>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <nova:port uuid="16f011c3-09ff-46c7-b7cc-7ad9cdaac07d">
Dec 06 07:35:15 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <system>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <entry name="serial">c1ef1073-7c66-428c-a02b-e4daa3551d22</entry>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <entry name="uuid">c1ef1073-7c66-428c-a02b-e4daa3551d22</entry>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     </system>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   <os>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   </os>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   <features>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   </features>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c1ef1073-7c66-428c-a02b-e4daa3551d22_disk">
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       </source>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c1ef1073-7c66-428c-a02b-e4daa3551d22_disk.config">
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       </source>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:35:15 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:36:cc:7f"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <target dev="tap16f011c3-09"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/c1ef1073-7c66-428c-a02b-e4daa3551d22/console.log" append="off"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <video>
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     </video>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:35:15 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:35:15 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:35:15 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:35:15 compute-0 nova_compute[251992]: </domain>
Dec 06 07:35:15 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.038 251996 DEBUG nova.compute.manager [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Preparing to wait for external event network-vif-plugged-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.039 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.039 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.040 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.040 251996 DEBUG nova.virt.libvirt.vif [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:34:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-144167502',display_name='tempest-ServerActionsTestOtherB-server-144167502',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-144167502',id=128,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-x1yakn5i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:35:00Z,user_data=None,user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=c1ef1073-7c66-428c-a02b-e4daa3551d22,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.041 251996 DEBUG nova.network.os_vif_util [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.041 251996 DEBUG nova.network.os_vif_util [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:cc:7f,bridge_name='br-int',has_traffic_filtering=True,id=16f011c3-09ff-46c7-b7cc-7ad9cdaac07d,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16f011c3-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.042 251996 DEBUG os_vif [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:cc:7f,bridge_name='br-int',has_traffic_filtering=True,id=16f011c3-09ff-46c7-b7cc-7ad9cdaac07d,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16f011c3-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.042 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.043 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.043 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.048 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.049 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16f011c3-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.049 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap16f011c3-09, col_values=(('external_ids', {'iface-id': '16f011c3-09ff-46c7-b7cc-7ad9cdaac07d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:cc:7f', 'vm-uuid': 'c1ef1073-7c66-428c-a02b-e4daa3551d22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.051 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:15 compute-0 NetworkManager[48965]: <info>  [1765006515.0529] manager: (tap16f011c3-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/219)
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.055 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.058 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.059 251996 INFO os_vif [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:cc:7f,bridge_name='br-int',has_traffic_filtering=True,id=16f011c3-09ff-46c7-b7cc-7ad9cdaac07d,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16f011c3-09')
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.106 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.107 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.107 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No VIF found with MAC fa:16:3e:36:cc:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.107 251996 INFO nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Using config drive
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.135 251996 DEBUG nova.storage.rbd_utils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image c1ef1073-7c66-428c-a02b-e4daa3551d22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:35:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/288153005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/524363761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3588510582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2354002293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:15 compute-0 podman[332908]: 2025-12-06 07:35:15.404196102 +0000 UTC m=+0.059848106 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.409 251996 INFO nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Creating config drive at /var/lib/nova/instances/c1ef1073-7c66-428c-a02b-e4daa3551d22/disk.config
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.413 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c1ef1073-7c66-428c-a02b-e4daa3551d22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_eo46k4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:15 compute-0 podman[332907]: 2025-12-06 07:35:15.427380178 +0000 UTC m=+0.083028862 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.547 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c1ef1073-7c66-428c-a02b-e4daa3551d22/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_eo46k4" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.579 251996 DEBUG nova.storage.rbd_utils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image c1ef1073-7c66-428c-a02b-e4daa3551d22_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.583 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c1ef1073-7c66-428c-a02b-e4daa3551d22/disk.config c1ef1073-7c66-428c-a02b-e4daa3551d22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:15.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.818 251996 DEBUG oslo_concurrency.processutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c1ef1073-7c66-428c-a02b-e4daa3551d22/disk.config c1ef1073-7c66-428c-a02b-e4daa3551d22_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.819 251996 INFO nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Deleting local config drive /var/lib/nova/instances/c1ef1073-7c66-428c-a02b-e4daa3551d22/disk.config because it was imported into RBD.
Dec 06 07:35:15 compute-0 kernel: tap16f011c3-09: entered promiscuous mode
Dec 06 07:35:15 compute-0 NetworkManager[48965]: <info>  [1765006515.8744] manager: (tap16f011c3-09): new Tun device (/org/freedesktop/NetworkManager/Devices/220)
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.875 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:15 compute-0 ovn_controller[147168]: 2025-12-06T07:35:15Z|00443|binding|INFO|Claiming lport 16f011c3-09ff-46c7-b7cc-7ad9cdaac07d for this chassis.
Dec 06 07:35:15 compute-0 ovn_controller[147168]: 2025-12-06T07:35:15Z|00444|binding|INFO|16f011c3-09ff-46c7-b7cc-7ad9cdaac07d: Claiming fa:16:3e:36:cc:7f 10.100.0.10
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.878 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.884 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:15 compute-0 NetworkManager[48965]: <info>  [1765006515.8948] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Dec 06 07:35:15 compute-0 NetworkManager[48965]: <info>  [1765006515.8955] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Dec 06 07:35:15 compute-0 nova_compute[251992]: 2025-12-06 07:35:15.895 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:15.899 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:cc:7f 10.100.0.10'], port_security=['fa:16:3e:36:cc:7f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1ef1073-7c66-428c-a02b-e4daa3551d22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9f7f4f14-4f63-443a-af4a-951f8b77b0f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=16f011c3-09ff-46c7-b7cc-7ad9cdaac07d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:35:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:15.901 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 16f011c3-09ff-46c7-b7cc-7ad9cdaac07d in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 bound to our chassis
Dec 06 07:35:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:15.903 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:35:15 compute-0 systemd-udevd[332998]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:35:15 compute-0 systemd-machined[212986]: New machine qemu-56-instance-00000080.
Dec 06 07:35:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:15.916 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7eebf173-ab7b-465e-8c45-625de7e8114b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:15.917 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3beede49-11 in ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:35:15 compute-0 NetworkManager[48965]: <info>  [1765006515.9192] device (tap16f011c3-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:35:15 compute-0 NetworkManager[48965]: <info>  [1765006515.9205] device (tap16f011c3-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:35:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:15.919 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3beede49-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:35:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:15.919 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[83dbf43a-7599-4bc8-b8fc-39198c83e8d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:15.920 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[be5979f0-c5b9-448b-b27f-6e7ce792d9cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:15 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000080.
Dec 06 07:35:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:15.937 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[b994f3eb-9dc0-4a8a-906f-ae00c870d19c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:15.962 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a2cf5e-35d9-40bd-a417-661e5872226e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:15.993 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7d166642-e302-4997-8b1d-591e3ab2e09a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:16 compute-0 NetworkManager[48965]: <info>  [1765006516.0024] manager: (tap3beede49-10): new Veth device (/org/freedesktop/NetworkManager/Devices/223)
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.001 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[edc9643a-39c3-4eab-911a-388d270f09f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.039 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[92e1779f-c511-4adf-b2f7-9a82199bb17c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.043 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ad354f68-01ad-43a2-acf0-d557895ec2f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:16 compute-0 NetworkManager[48965]: <info>  [1765006516.0676] device (tap3beede49-10): carrier: link connected
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.073 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[acbc25d1-eeaa-46b5-8789-b8b09d7c4a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.086 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.093 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8b44e7-97fe-49d0-b18c-0e26700b4cb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678865, 'reachable_time': 30294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333031, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:16 compute-0 ovn_controller[147168]: 2025-12-06T07:35:16Z|00445|binding|INFO|Releasing lport 0d2044a5-87cb-4c28-912c-9a2682bb94de from this chassis (sb_readonly=0)
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.109 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:16 compute-0 ovn_controller[147168]: 2025-12-06T07:35:16Z|00446|binding|INFO|Setting lport 16f011c3-09ff-46c7-b7cc-7ad9cdaac07d ovn-installed in OVS
Dec 06 07:35:16 compute-0 ovn_controller[147168]: 2025-12-06T07:35:16Z|00447|binding|INFO|Setting lport 16f011c3-09ff-46c7-b7cc-7ad9cdaac07d up in Southbound
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.122 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.122 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5eef6467-27de-47f8-a4db-d39666f74080]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:c755'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678865, 'tstamp': 678865}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333032, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.139 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5e741c76-564e-4baf-83b1-87efd27aa461]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678865, 'reachable_time': 30294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333033, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:16.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.169 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbba702-7ce6-4f0b-a4a4-9d90c9c02ede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.232 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbf2304-d931-4d72-9f2e-e53a589dbe02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.233 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.234 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.234 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3beede49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.236 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:16 compute-0 NetworkManager[48965]: <info>  [1765006516.2368] manager: (tap3beede49-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Dec 06 07:35:16 compute-0 kernel: tap3beede49-10: entered promiscuous mode
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.238 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3beede49-10, col_values=(('external_ids', {'iface-id': '058fee39-af19-4b00-b556-fb88bc823747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:16 compute-0 ovn_controller[147168]: 2025-12-06T07:35:16Z|00448|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.240 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.255 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.256 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.257 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[66542b4a-e1c0-40dd-a351-304e476c9b51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.258 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/3beede49-1cbb-425c-b1af-82f43dc57163.pid.haproxy
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:35:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:16.259 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'env', 'PROCESS_TAG=haproxy-3beede49-1cbb-425c-b1af-82f43dc57163', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3beede49-1cbb-425c-b1af-82f43dc57163.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:35:16 compute-0 ceph-mon[74339]: pgmap v2356: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 2.7 MiB/s wr, 57 op/s
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.338 251996 DEBUG nova.compute.manager [req-4524b9af-6c05-4197-b39b-a2bc8925484a req-6309371b-cb83-482e-aa01-d9e50adfd723 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Received event network-vif-plugged-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.338 251996 DEBUG oslo_concurrency.lockutils [req-4524b9af-6c05-4197-b39b-a2bc8925484a req-6309371b-cb83-482e-aa01-d9e50adfd723 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.338 251996 DEBUG oslo_concurrency.lockutils [req-4524b9af-6c05-4197-b39b-a2bc8925484a req-6309371b-cb83-482e-aa01-d9e50adfd723 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.339 251996 DEBUG oslo_concurrency.lockutils [req-4524b9af-6c05-4197-b39b-a2bc8925484a req-6309371b-cb83-482e-aa01-d9e50adfd723 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.339 251996 DEBUG nova.compute.manager [req-4524b9af-6c05-4197-b39b-a2bc8925484a req-6309371b-cb83-482e-aa01-d9e50adfd723 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Processing event network-vif-plugged-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.389 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.389 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.390 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.390 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:16 compute-0 podman[333084]: 2025-12-06 07:35:16.631378742 +0000 UTC m=+0.048591432 container create 9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:35:16 compute-0 systemd[1]: Started libpod-conmon-9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a.scope.
Dec 06 07:35:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:35:16 compute-0 podman[333084]: 2025-12-06 07:35:16.605083683 +0000 UTC m=+0.022296393 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a104929a99b62ea8c0cb82ecb3fee48c6e0ac246ae6f8d7d095968df1a85716f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:35:16 compute-0 podman[333084]: 2025-12-06 07:35:16.715547915 +0000 UTC m=+0.132760625 container init 9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 07:35:16 compute-0 podman[333084]: 2025-12-06 07:35:16.72202474 +0000 UTC m=+0.139237430 container start 9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:35:16 compute-0 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[333119]: [NOTICE]   (333125) : New worker (333128) forked
Dec 06 07:35:16 compute-0 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[333119]: [NOTICE]   (333125) : Loading success.
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.766 251996 DEBUG nova.compute.manager [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.767 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006516.7656696, c1ef1073-7c66-428c-a02b-e4daa3551d22 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.767 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] VM Started (Lifecycle Event)
Dec 06 07:35:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 782 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.775 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.779 251996 INFO nova.virt.libvirt.driver [-] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Instance spawned successfully.
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.779 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.803 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.808 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.809 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.809 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.810 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.810 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.810 251996 DEBUG nova.virt.libvirt.driver [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.816 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.862 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.862 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006516.7668624, c1ef1073-7c66-428c-a02b-e4daa3551d22 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.862 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] VM Paused (Lifecycle Event)
Dec 06 07:35:16 compute-0 sudo[333137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:16 compute-0 sudo[333137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:16 compute-0 sudo[333137]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.897 251996 INFO nova.compute.manager [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Took 16.55 seconds to spawn the instance on the hypervisor.
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.898 251996 DEBUG nova.compute.manager [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.901 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.909 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006516.770658, c1ef1073-7c66-428c-a02b-e4daa3551d22 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.909 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] VM Resumed (Lifecycle Event)
Dec 06 07:35:16 compute-0 sudo[333162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:16 compute-0 sudo[333162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:16 compute-0 sudo[333162]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.943 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.945 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:35:16 compute-0 nova_compute[251992]: 2025-12-06 07:35:16.982 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:35:17 compute-0 nova_compute[251992]: 2025-12-06 07:35:17.000 251996 INFO nova.compute.manager [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Took 17.50 seconds to build instance.
Dec 06 07:35:17 compute-0 nova_compute[251992]: 2025-12-06 07:35:17.018 251996 DEBUG oslo_concurrency.lockutils [None req-b5eda03b-3443-4264-b049-a673508de265 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:35:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:17.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:35:17 compute-0 nova_compute[251992]: 2025-12-06 07:35:17.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:17 compute-0 ceph-mon[74339]: pgmap v2357: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 782 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Dec 06 07:35:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:18.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:18 compute-0 nova_compute[251992]: 2025-12-06 07:35:18.435 251996 DEBUG nova.compute.manager [req-57698063-8125-4e3f-8b42-718364d21b46 req-af91d694-f52f-418a-9c74-620cd17df107 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Received event network-vif-plugged-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:18 compute-0 nova_compute[251992]: 2025-12-06 07:35:18.435 251996 DEBUG oslo_concurrency.lockutils [req-57698063-8125-4e3f-8b42-718364d21b46 req-af91d694-f52f-418a-9c74-620cd17df107 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:18 compute-0 nova_compute[251992]: 2025-12-06 07:35:18.436 251996 DEBUG oslo_concurrency.lockutils [req-57698063-8125-4e3f-8b42-718364d21b46 req-af91d694-f52f-418a-9c74-620cd17df107 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:18 compute-0 nova_compute[251992]: 2025-12-06 07:35:18.436 251996 DEBUG oslo_concurrency.lockutils [req-57698063-8125-4e3f-8b42-718364d21b46 req-af91d694-f52f-418a-9c74-620cd17df107 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:18 compute-0 nova_compute[251992]: 2025-12-06 07:35:18.436 251996 DEBUG nova.compute.manager [req-57698063-8125-4e3f-8b42-718364d21b46 req-af91d694-f52f-418a-9c74-620cd17df107 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] No waiting events found dispatching network-vif-plugged-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:35:18 compute-0 nova_compute[251992]: 2025-12-06 07:35:18.436 251996 WARNING nova.compute.manager [req-57698063-8125-4e3f-8b42-718364d21b46 req-af91d694-f52f-418a-9c74-620cd17df107 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Received unexpected event network-vif-plugged-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d for instance with vm_state active and task_state None.
Dec 06 07:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:35:18
Dec 06 07:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', '.rgw.root', 'backups', 'vms', 'images', 'default.rgw.meta']
Dec 06 07:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:35:18 compute-0 nova_compute[251992]: 2025-12-06 07:35:18.620 251996 INFO nova.compute.manager [None req-69aedd84-a3e8-4c59-894e-c4f3467382b3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Get console output
Dec 06 07:35:18 compute-0 nova_compute[251992]: 2025-12-06 07:35:18.626 251996 INFO oslo.privsep.daemon [None req-69aedd84-a3e8-4c59-894e-c4f3467382b3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpd5y84mnb/privsep.sock']
Dec 06 07:35:18 compute-0 nova_compute[251992]: 2025-12-06 07:35:18.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 773 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 06 07:35:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:19 compute-0 nova_compute[251992]: 2025-12-06 07:35:19.312 251996 INFO oslo.privsep.daemon [None req-69aedd84-a3e8-4c59-894e-c4f3467382b3 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Spawned new privsep daemon via rootwrap
Dec 06 07:35:19 compute-0 nova_compute[251992]: 2025-12-06 07:35:19.190 333192 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 07:35:19 compute-0 nova_compute[251992]: 2025-12-06 07:35:19.194 333192 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 07:35:19 compute-0 nova_compute[251992]: 2025-12-06 07:35:19.196 333192 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 06 07:35:19 compute-0 nova_compute[251992]: 2025-12-06 07:35:19.196 333192 INFO oslo.privsep.daemon [-] privsep daemon running as pid 333192
Dec 06 07:35:19 compute-0 nova_compute[251992]: 2025-12-06 07:35:19.408 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 07:35:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:19.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:19 compute-0 nova_compute[251992]: 2025-12-06 07:35:19.677 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:20 compute-0 nova_compute[251992]: 2025-12-06 07:35:20.051 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:20.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:20 compute-0 nova_compute[251992]: 2025-12-06 07:35:20.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:20 compute-0 nova_compute[251992]: 2025-12-06 07:35:20.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:35:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 256 op/s
Dec 06 07:35:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1096218273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/661832997' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:21 compute-0 nova_compute[251992]: 2025-12-06 07:35:21.315 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:35:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:21.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:22.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:35:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 46K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1308 writes, 5751 keys, 1307 commit groups, 1.0 writes per commit group, ingest: 9.28 MB, 0.02 MB/s
                                           Interval WAL: 1308 writes, 1307 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     30.4      2.00              0.20        28    0.071       0      0       0.0       0.0
                                             L6      1/0    9.78 MB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   4.3     78.2     65.4      3.97              0.83        27    0.147    170K    15K       0.0       0.0
                                            Sum      1/0    9.78 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3     52.0     53.7      5.97              1.02        55    0.109    170K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.4     22.3     22.7      2.36              0.16         8    0.295     32K   2080       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   0.0     78.2     65.4      3.97              0.83        27    0.147    170K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     30.4      1.99              0.20        27    0.074       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.059, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.31 GB write, 0.08 MB/s write, 0.30 GB read, 0.07 MB/s read, 6.0 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 2.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 34.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000339 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1992,33.14 MB,10.8999%) FilterBlock(56,479.73 KB,0.154109%) IndexBlock(56,819.66 KB,0.263304%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 07:35:22 compute-0 nova_compute[251992]: 2025-12-06 07:35:22.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:35:22 compute-0 nova_compute[251992]: 2025-12-06 07:35:22.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:35:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 40 KiB/s wr, 239 op/s
Dec 06 07:35:22 compute-0 nova_compute[251992]: 2025-12-06 07:35:22.797 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:35:22 compute-0 nova_compute[251992]: 2025-12-06 07:35:22.798 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:35:22 compute-0 nova_compute[251992]: 2025-12-06 07:35:22.798 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:35:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:35:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:23.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:35:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:24.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:35:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:35:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:35:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:35:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:35:24 compute-0 ceph-mon[74339]: pgmap v2358: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 773 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Dec 06 07:35:24 compute-0 ceph-mon[74339]: pgmap v2359: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 256 op/s
Dec 06 07:35:24 compute-0 nova_compute[251992]: 2025-12-06 07:35:24.678 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 40 KiB/s wr, 239 op/s
Dec 06 07:35:25 compute-0 nova_compute[251992]: 2025-12-06 07:35:25.026 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Updating instance_info_cache with network_info: [{"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:35:25 compute-0 nova_compute[251992]: 2025-12-06 07:35:25.079 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:35:25 compute-0 nova_compute[251992]: 2025-12-06 07:35:25.079 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:35:25 compute-0 nova_compute[251992]: 2025-12-06 07:35:25.099 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:25.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:25.601 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:35:25 compute-0 nova_compute[251992]: 2025-12-06 07:35:25.601 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:25.602 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:35:25 compute-0 ceph-mon[74339]: pgmap v2360: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 40 KiB/s wr, 239 op/s
Dec 06 07:35:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3294554558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:25 compute-0 ceph-mon[74339]: pgmap v2361: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 40 KiB/s wr, 239 op/s
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01168748836776475 of space, bias 1.0, pg target 3.506246510329425 quantized to 32 (current 32)
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161051205099572 of space, bias 1.0, pg target 0.641832207914573 quantized to 32 (current 32)
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8589431532663316 quantized to 32 (current 32)
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Dec 06 07:35:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 07:35:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:26.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 07:35:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:26.604 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 37 KiB/s wr, 225 op/s
Dec 06 07:35:27 compute-0 nova_compute[251992]: 2025-12-06 07:35:27.337 251996 INFO nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance failed to shutdown in 60 seconds.
Dec 06 07:35:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:27.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:27 compute-0 ceph-mon[74339]: pgmap v2362: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 37 KiB/s wr, 225 op/s
Dec 06 07:35:28 compute-0 kernel: tape61ac68e-e5 (unregistering): left promiscuous mode
Dec 06 07:35:28 compute-0 NetworkManager[48965]: <info>  [1765006528.0439] device (tape61ac68e-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:35:28 compute-0 ovn_controller[147168]: 2025-12-06T07:35:28Z|00449|binding|INFO|Releasing lport e61ac68e-e534-4351-b3ce-b20fa32579fc from this chassis (sb_readonly=0)
Dec 06 07:35:28 compute-0 ovn_controller[147168]: 2025-12-06T07:35:28Z|00450|binding|INFO|Setting lport e61ac68e-e534-4351-b3ce-b20fa32579fc down in Southbound
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.051 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:28 compute-0 ovn_controller[147168]: 2025-12-06T07:35:28Z|00451|binding|INFO|Removing iface tape61ac68e-e5 ovn-installed in OVS
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.054 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.059 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:1b:d0 10.100.0.14'], port_security=['fa:16:3e:99:1b:d0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c2e6b8fd-375c-4658-b338-f2d334041ba3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=e61ac68e-e534-4351-b3ce-b20fa32579fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.060 158118 INFO neutron.agent.ovn.metadata.agent [-] Port e61ac68e-e534-4351-b3ce-b20fa32579fc in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 unbound from our chassis
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.064 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.079 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[044c6a5a-4a5b-4b49-9193-b239a1dac029]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.079 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:28 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Dec 06 07:35:28 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d0000007c.scope: Consumed 1.272s CPU time.
Dec 06 07:35:28 compute-0 systemd-machined[212986]: Machine qemu-55-instance-0000007c terminated.
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.111 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[9417958e-baad-4067-b014-e16797e32147]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.115 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4b17c4dc-6195-4bef-8a05-ed7f8be58bd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.141 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1350f3-77e6-461a-ad1d-b4ac08a5e4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:28 compute-0 NetworkManager[48965]: <info>  [1765006528.1557] manager: (tape61ac68e-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/225)
Dec 06 07:35:28 compute-0 systemd-udevd[333200]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.160 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[08ed08d1-ad2a-40d8-93c6-dfa31b9cf38b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 42370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333209, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:35:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:28.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.179 251996 INFO nova.virt.libvirt.driver [-] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance destroyed successfully.
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.178 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[17d5b3df-0673-4feb-ad65-e7d71436c2ab]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669283, 'tstamp': 669283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333215, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669285, 'tstamp': 669285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333215, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.179 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.180 251996 DEBUG nova.objects.instance [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'numa_topology' on Instance uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.181 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.184 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.185 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.185 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.185 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:28.186 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.230 251996 INFO nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Attempting a stable device rescue
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.425 251996 DEBUG nova.compute.manager [req-d1bbec6b-6f93-46ee-839f-4342871f0a4f req-2533d2da-9979-451f-89cd-133e17fdbefe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-unplugged-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.426 251996 DEBUG oslo_concurrency.lockutils [req-d1bbec6b-6f93-46ee-839f-4342871f0a4f req-2533d2da-9979-451f-89cd-133e17fdbefe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.426 251996 DEBUG oslo_concurrency.lockutils [req-d1bbec6b-6f93-46ee-839f-4342871f0a4f req-2533d2da-9979-451f-89cd-133e17fdbefe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.427 251996 DEBUG oslo_concurrency.lockutils [req-d1bbec6b-6f93-46ee-839f-4342871f0a4f req-2533d2da-9979-451f-89cd-133e17fdbefe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.427 251996 DEBUG nova.compute.manager [req-d1bbec6b-6f93-46ee-839f-4342871f0a4f req-2533d2da-9979-451f-89cd-133e17fdbefe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] No waiting events found dispatching network-vif-unplugged-e61ac68e-e534-4351-b3ce-b20fa32579fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.427 251996 WARNING nova.compute.manager [req-d1bbec6b-6f93-46ee-839f-4342871f0a4f req-2533d2da-9979-451f-89cd-133e17fdbefe 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received unexpected event network-vif-unplugged-e61ac68e-e534-4351-b3ce-b20fa32579fc for instance with vm_state active and task_state rescuing.
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.465 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.469 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.470 251996 INFO nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Creating image(s)
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.498 251996 DEBUG nova.storage.rbd_utils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.503 251996 DEBUG nova.objects.instance [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.547 251996 DEBUG nova.storage.rbd_utils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.574 251996 DEBUG nova.storage.rbd_utils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.579 251996 DEBUG oslo_concurrency.lockutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "adb38d4ffc13c233e6b4d86394e9e864099e8499" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.580 251996 DEBUG oslo_concurrency.lockutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "adb38d4ffc13c233e6b4d86394e9e864099e8499" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 22 KiB/s wr, 192 op/s
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.822 251996 DEBUG nova.virt.libvirt.imagebackend [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/deed3bf2-6426-476b-8bb8-98ac272255a2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/deed3bf2-6426-476b-8bb8-98ac272255a2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.882 251996 DEBUG nova.virt.libvirt.imagebackend [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/deed3bf2-6426-476b-8bb8-98ac272255a2/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 07:35:28 compute-0 nova_compute[251992]: 2025-12-06 07:35:28.883 251996 DEBUG nova.storage.rbd_utils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] cloning images/deed3bf2-6426-476b-8bb8-98ac272255a2@snap to None/c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:35:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:29.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:29 compute-0 nova_compute[251992]: 2025-12-06 07:35:29.680 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:30 compute-0 nova_compute[251992]: 2025-12-06 07:35:30.100 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:30.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:30 compute-0 nova_compute[251992]: 2025-12-06 07:35:30.557 251996 DEBUG nova.compute.manager [req-63f89d19-5ff9-427a-8938-316409f3df40 req-c9f0208f-4990-4fab-9746-cc504b02bee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:30 compute-0 nova_compute[251992]: 2025-12-06 07:35:30.557 251996 DEBUG oslo_concurrency.lockutils [req-63f89d19-5ff9-427a-8938-316409f3df40 req-c9f0208f-4990-4fab-9746-cc504b02bee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:30 compute-0 nova_compute[251992]: 2025-12-06 07:35:30.557 251996 DEBUG oslo_concurrency.lockutils [req-63f89d19-5ff9-427a-8938-316409f3df40 req-c9f0208f-4990-4fab-9746-cc504b02bee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:30 compute-0 nova_compute[251992]: 2025-12-06 07:35:30.558 251996 DEBUG oslo_concurrency.lockutils [req-63f89d19-5ff9-427a-8938-316409f3df40 req-c9f0208f-4990-4fab-9746-cc504b02bee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:30 compute-0 nova_compute[251992]: 2025-12-06 07:35:30.558 251996 DEBUG nova.compute.manager [req-63f89d19-5ff9-427a-8938-316409f3df40 req-c9f0208f-4990-4fab-9746-cc504b02bee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] No waiting events found dispatching network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:35:30 compute-0 nova_compute[251992]: 2025-12-06 07:35:30.558 251996 WARNING nova.compute.manager [req-63f89d19-5ff9-427a-8938-316409f3df40 req-c9f0208f-4990-4fab-9746-cc504b02bee6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received unexpected event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc for instance with vm_state active and task_state rescuing.
Dec 06 07:35:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 670 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.8 MiB/s wr, 231 op/s
Dec 06 07:35:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:31.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:31 compute-0 ceph-mon[74339]: pgmap v2363: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 22 KiB/s wr, 192 op/s
Dec 06 07:35:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:35:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:32.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:35:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 670 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.391 251996 DEBUG oslo_concurrency.lockutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "adb38d4ffc13c233e6b4d86394e9e864099e8499" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.438 251996 DEBUG nova.objects.instance [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'migration_context' on Instance uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:33.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.730 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.733 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Start _get_guest_xml network_info=[{"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "vif_mac": "fa:16:3e:99:1b:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'deed3bf2-6426-476b-8bb8-98ac272255a2', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-95b7906f-ca03-4ae4-bdc0-817cf9423acd', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '95b7906f-ca03-4ae4-bdc0-817cf9423acd', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'c2e6b8fd-375c-4658-b338-f2d334041ba3', 'attached_at': '', 'detached_at': '', 'volume_id': '95b7906f-ca03-4ae4-bdc0-817cf9423acd', 'serial': '95b7906f-ca03-4ae4-bdc0-817cf9423acd'}, 'attachment_id': 'c92e5c00-d68a-4954-9986-06588c40c17d', 'guest_format': None, 'delete_on_termination': False, 'disk_bus': 'virtio', 'boot_index': 0, 'device_type': 'disk', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.733 251996 DEBUG nova.objects.instance [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'resources' on Instance uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.803 251996 WARNING nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.811 251996 DEBUG nova.virt.libvirt.host [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.812 251996 DEBUG nova.virt.libvirt.host [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.818 251996 DEBUG nova.virt.libvirt.host [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.819 251996 DEBUG nova.virt.libvirt.host [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.820 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.821 251996 DEBUG nova.virt.hardware [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.821 251996 DEBUG nova.virt.hardware [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.821 251996 DEBUG nova.virt.hardware [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.822 251996 DEBUG nova.virt.hardware [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.822 251996 DEBUG nova.virt.hardware [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.822 251996 DEBUG nova.virt.hardware [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.823 251996 DEBUG nova.virt.hardware [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.823 251996 DEBUG nova.virt.hardware [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.823 251996 DEBUG nova.virt.hardware [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.823 251996 DEBUG nova.virt.hardware [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.824 251996 DEBUG nova.virt.hardware [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.824 251996 DEBUG nova.objects.instance [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:33 compute-0 nova_compute[251992]: 2025-12-06 07:35:33.869 251996 DEBUG oslo_concurrency.processutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:34 compute-0 ceph-mon[74339]: pgmap v2364: 305 pgs: 305 active+clean; 670 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.8 MiB/s wr, 231 op/s
Dec 06 07:35:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:34.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:35:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2666969850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:34 compute-0 nova_compute[251992]: 2025-12-06 07:35:34.310 251996 DEBUG oslo_concurrency.processutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:34 compute-0 nova_compute[251992]: 2025-12-06 07:35:34.423 251996 DEBUG oslo_concurrency.processutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:34 compute-0 nova_compute[251992]: 2025-12-06 07:35:34.683 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 657 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 2.3 MiB/s wr, 65 op/s
Dec 06 07:35:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:35:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/839873704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:34 compute-0 nova_compute[251992]: 2025-12-06 07:35:34.908 251996 DEBUG oslo_concurrency.processutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:34 compute-0 nova_compute[251992]: 2025-12-06 07:35:34.910 251996 DEBUG nova.virt.libvirt.vif [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:34:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1801137848',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1801137848',id=124,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:34:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-blw02nr2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:34:20Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=c2e6b8fd-375c-4658-b338-f2d334041ba3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "vif_mac": "fa:16:3e:99:1b:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:35:34 compute-0 nova_compute[251992]: 2025-12-06 07:35:34.910 251996 DEBUG nova.network.os_vif_util [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "vif_mac": "fa:16:3e:99:1b:d0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:35:34 compute-0 nova_compute[251992]: 2025-12-06 07:35:34.911 251996 DEBUG nova.network.os_vif_util [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=e61ac68e-e534-4351-b3ce-b20fa32579fc,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ac68e-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:35:34 compute-0 nova_compute[251992]: 2025-12-06 07:35:34.913 251996 DEBUG nova.objects.instance [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.008 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:35:35 compute-0 nova_compute[251992]:   <uuid>c2e6b8fd-375c-4658-b338-f2d334041ba3</uuid>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   <name>instance-0000007c</name>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1801137848</nova:name>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:35:33</nova:creationTime>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <nova:user uuid="2aa5b15c15f84a8cb24776d5c781eb09">tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member</nova:user>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <nova:project uuid="17cdfa63c4424ec7a0eb4bb3d7372c14">tempest-ServerBootFromVolumeStableRescueTest-344238221</nova:project>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <nova:port uuid="e61ac68e-e534-4351-b3ce-b20fa32579fc">
Dec 06 07:35:35 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <system>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <entry name="serial">c2e6b8fd-375c-4658-b338-f2d334041ba3</entry>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <entry name="uuid">c2e6b8fd-375c-4658-b338-f2d334041ba3</entry>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     </system>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   <os>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   </os>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   <features>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   </features>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.config">
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       </source>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <source protocol="rbd" name="volumes/volume-95b7906f-ca03-4ae4-bdc0-817cf9423acd">
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       </source>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <serial>95b7906f-ca03-4ae4-bdc0-817cf9423acd</serial>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.rescue">
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       </source>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:35:35 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <target dev="vdb" bus="virtio"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <boot order="1"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:99:1b:d0"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <target dev="tape61ac68e-e5"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/console.log" append="off"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <video>
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     </video>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:35:35 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:35:35 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:35:35 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:35:35 compute-0 nova_compute[251992]: </domain>
Dec 06 07:35:35 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.016 251996 INFO nova.virt.libvirt.driver [-] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance destroyed successfully.
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.100 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.100 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.101 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.101 251996 DEBUG nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No VIF found with MAC fa:16:3e:99:1b:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.101 251996 INFO nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Using config drive
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.127 251996 DEBUG nova.storage.rbd_utils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.133 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:35 compute-0 ceph-mon[74339]: pgmap v2365: 305 pgs: 305 active+clean; 670 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Dec 06 07:35:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2666969850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/839873704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.192 251996 DEBUG nova.objects.instance [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.245 251996 DEBUG nova.objects.instance [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'keypairs' on Instance uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:35.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.913 251996 INFO nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Creating config drive at /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/disk.config.rescue
Dec 06 07:35:35 compute-0 nova_compute[251992]: 2025-12-06 07:35:35.918 251996 DEBUG oslo_concurrency.processutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm7_ezsw1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:36 compute-0 nova_compute[251992]: 2025-12-06 07:35:36.055 251996 DEBUG oslo_concurrency.processutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm7_ezsw1" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:36 compute-0 nova_compute[251992]: 2025-12-06 07:35:36.114 251996 DEBUG nova.storage.rbd_utils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:35:36 compute-0 nova_compute[251992]: 2025-12-06 07:35:36.118 251996 DEBUG oslo_concurrency.processutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/disk.config.rescue c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:35:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:36.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:36 compute-0 ceph-mon[74339]: pgmap v2366: 305 pgs: 305 active+clean; 657 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 2.3 MiB/s wr, 65 op/s
Dec 06 07:35:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4096406409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2344886864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:35:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 656 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 5.1 MiB/s wr, 103 op/s
Dec 06 07:35:37 compute-0 sudo[333486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:37 compute-0 sudo[333486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:37 compute-0 sudo[333486]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:37 compute-0 sudo[333511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:37 compute-0 sudo[333511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:37 compute-0 sudo[333511]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:37.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:38.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:38 compute-0 ceph-mon[74339]: pgmap v2367: 305 pgs: 305 active+clean; 656 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 5.1 MiB/s wr, 103 op/s
Dec 06 07:35:38 compute-0 podman[333536]: 2025-12-06 07:35:38.433223477 +0000 UTC m=+0.084584634 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:35:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 656 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 5.1 MiB/s wr, 98 op/s
Dec 06 07:35:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:35:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:39.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:35:39 compute-0 nova_compute[251992]: 2025-12-06 07:35:39.683 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:40 compute-0 nova_compute[251992]: 2025-12-06 07:35:40.135 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:40.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 667 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 5.9 MiB/s wr, 138 op/s
Dec 06 07:35:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:41.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:42.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 667 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 4.1 MiB/s wr, 99 op/s
Dec 06 07:35:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:35:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:35:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:35:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:35:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:35:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:35:43 compute-0 nova_compute[251992]: 2025-12-06 07:35:43.178 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006528.1764278, c2e6b8fd-375c-4658-b338-f2d334041ba3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:35:43 compute-0 nova_compute[251992]: 2025-12-06 07:35:43.179 251996 INFO nova.compute.manager [-] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] VM Stopped (Lifecycle Event)
Dec 06 07:35:43 compute-0 nova_compute[251992]: 2025-12-06 07:35:43.232 251996 DEBUG nova.compute.manager [None req-d91f7f3a-e5d3-4e43-b33d-18af9f374d5a - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:43 compute-0 nova_compute[251992]: 2025-12-06 07:35:43.236 251996 DEBUG nova.compute.manager [None req-d91f7f3a-e5d3-4e43-b33d-18af9f374d5a - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:35:43 compute-0 nova_compute[251992]: 2025-12-06 07:35:43.261 251996 INFO nova.compute.manager [None req-d91f7f3a-e5d3-4e43-b33d-18af9f374d5a - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 06 07:35:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:35:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:43.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:35:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:44.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:44 compute-0 ceph-mon[74339]: pgmap v2368: 305 pgs: 305 active+clean; 656 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 5.1 MiB/s wr, 98 op/s
Dec 06 07:35:44 compute-0 nova_compute[251992]: 2025-12-06 07:35:44.685 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 667 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 4.1 MiB/s wr, 104 op/s
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.072 251996 DEBUG oslo_concurrency.processutils [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/disk.config.rescue c2e6b8fd-375c-4658-b338-f2d334041ba3_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.955s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.073 251996 INFO nova.virt.libvirt.driver [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Deleting local config drive /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3/disk.config.rescue because it was imported into RBD.
Dec 06 07:35:45 compute-0 NetworkManager[48965]: <info>  [1765006545.1224] manager: (tape61ac68e-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Dec 06 07:35:45 compute-0 kernel: tape61ac68e-e5: entered promiscuous mode
Dec 06 07:35:45 compute-0 ovn_controller[147168]: 2025-12-06T07:35:45Z|00452|binding|INFO|Claiming lport e61ac68e-e534-4351-b3ce-b20fa32579fc for this chassis.
Dec 06 07:35:45 compute-0 ovn_controller[147168]: 2025-12-06T07:35:45Z|00453|binding|INFO|e61ac68e-e534-4351-b3ce-b20fa32579fc: Claiming fa:16:3e:99:1b:d0 10.100.0.14
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.125 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.133 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:1b:d0 10.100.0.14'], port_security=['fa:16:3e:99:1b:d0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c2e6b8fd-375c-4658-b338-f2d334041ba3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '5', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=e61ac68e-e534-4351-b3ce-b20fa32579fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.135 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.135 158118 INFO neutron.agent.ovn.metadata.agent [-] Port e61ac68e-e534-4351-b3ce-b20fa32579fc in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 bound to our chassis
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.137 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.152 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.155 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:45 compute-0 ovn_controller[147168]: 2025-12-06T07:35:45Z|00454|binding|INFO|Setting lport e61ac68e-e534-4351-b3ce-b20fa32579fc ovn-installed in OVS
Dec 06 07:35:45 compute-0 ovn_controller[147168]: 2025-12-06T07:35:45Z|00455|binding|INFO|Setting lport e61ac68e-e534-4351-b3ce-b20fa32579fc up in Southbound
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.156 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[96a7ade4-9299-44b4-ac47-61589f87ccf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:45 compute-0 systemd-machined[212986]: New machine qemu-57-instance-0000007c.
Dec 06 07:35:45 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-0000007c.
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.198 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e3e411-e48a-4060-a81e-3f791b12b697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.203 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[00103d3a-abdf-4c21-9c47-5ed13b55f4b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:45 compute-0 systemd-udevd[333583]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:35:45 compute-0 NetworkManager[48965]: <info>  [1765006545.2186] device (tape61ac68e-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:35:45 compute-0 NetworkManager[48965]: <info>  [1765006545.2199] device (tape61ac68e-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.231 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c26f6235-b6aa-4852-9943-d96a9923a4ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.248 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d273ba22-f3de-489a-be6d-213737b18ad3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 42370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333591, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.262 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[14085c5d-e6c2-44e5-84ca-5ec15360db00]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669283, 'tstamp': 669283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333594, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669285, 'tstamp': 669285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333594, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.264 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.265 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.266 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.267 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.267 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.268 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:45.268 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:35:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:45.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.836 251996 DEBUG nova.compute.manager [req-4a7522d1-6dd4-4b45-840a-5cca9231e5d8 req-cadb1a48-f544-4a32-9b56-c5d1911ded23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.837 251996 DEBUG oslo_concurrency.lockutils [req-4a7522d1-6dd4-4b45-840a-5cca9231e5d8 req-cadb1a48-f544-4a32-9b56-c5d1911ded23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.838 251996 DEBUG oslo_concurrency.lockutils [req-4a7522d1-6dd4-4b45-840a-5cca9231e5d8 req-cadb1a48-f544-4a32-9b56-c5d1911ded23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.838 251996 DEBUG oslo_concurrency.lockutils [req-4a7522d1-6dd4-4b45-840a-5cca9231e5d8 req-cadb1a48-f544-4a32-9b56-c5d1911ded23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.838 251996 DEBUG nova.compute.manager [req-4a7522d1-6dd4-4b45-840a-5cca9231e5d8 req-cadb1a48-f544-4a32-9b56-c5d1911ded23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] No waiting events found dispatching network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:35:45 compute-0 nova_compute[251992]: 2025-12-06 07:35:45.839 251996 WARNING nova.compute.manager [req-4a7522d1-6dd4-4b45-840a-5cca9231e5d8 req-cadb1a48-f544-4a32-9b56-c5d1911ded23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received unexpected event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc for instance with vm_state active and task_state rescuing.
Dec 06 07:35:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:46.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:46 compute-0 podman[333614]: 2025-12-06 07:35:46.397650508 +0000 UTC m=+0.058164971 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 06 07:35:46 compute-0 podman[333615]: 2025-12-06 07:35:46.398424109 +0000 UTC m=+0.057148204 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 07:35:46 compute-0 ovn_controller[147168]: 2025-12-06T07:35:46Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:cc:7f 10.100.0.10
Dec 06 07:35:46 compute-0 ovn_controller[147168]: 2025-12-06T07:35:46Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:cc:7f 10.100.0.10
Dec 06 07:35:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.7 MiB/s wr, 127 op/s
Dec 06 07:35:47 compute-0 ceph-mon[74339]: pgmap v2369: 305 pgs: 305 active+clean; 667 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 5.9 MiB/s wr, 138 op/s
Dec 06 07:35:47 compute-0 ceph-mon[74339]: pgmap v2370: 305 pgs: 305 active+clean; 667 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 4.1 MiB/s wr, 99 op/s
Dec 06 07:35:47 compute-0 ceph-mon[74339]: pgmap v2371: 305 pgs: 305 active+clean; 667 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 4.1 MiB/s wr, 104 op/s
Dec 06 07:35:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:47.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.155 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006548.1553605, c2e6b8fd-375c-4658-b338-f2d334041ba3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.157 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] VM Resumed (Lifecycle Event)
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.162 251996 DEBUG nova.compute.manager [None req-b48878d6-8558-4670-8361-59170c04b5f1 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:48.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.466 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.470 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.509 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.509 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006548.1569057, c2e6b8fd-375c-4658-b338-f2d334041ba3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.509 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] VM Started (Lifecycle Event)
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.536 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.539 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.594 251996 DEBUG nova.compute.manager [req-bb1960bb-3757-4928-9437-a892fab624fc req-51a992fe-8021-443e-92b8-0b0f63810f92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.594 251996 DEBUG oslo_concurrency.lockutils [req-bb1960bb-3757-4928-9437-a892fab624fc req-51a992fe-8021-443e-92b8-0b0f63810f92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.595 251996 DEBUG oslo_concurrency.lockutils [req-bb1960bb-3757-4928-9437-a892fab624fc req-51a992fe-8021-443e-92b8-0b0f63810f92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.595 251996 DEBUG oslo_concurrency.lockutils [req-bb1960bb-3757-4928-9437-a892fab624fc req-51a992fe-8021-443e-92b8-0b0f63810f92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.595 251996 DEBUG nova.compute.manager [req-bb1960bb-3757-4928-9437-a892fab624fc req-51a992fe-8021-443e-92b8-0b0f63810f92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] No waiting events found dispatching network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:35:48 compute-0 nova_compute[251992]: 2025-12-06 07:35:48.595 251996 WARNING nova.compute.manager [req-bb1960bb-3757-4928-9437-a892fab624fc req-51a992fe-8021-443e-92b8-0b0f63810f92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received unexpected event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc for instance with vm_state rescued and task_state None.
Dec 06 07:35:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 891 KiB/s wr, 89 op/s
Dec 06 07:35:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:49 compute-0 ceph-mon[74339]: pgmap v2372: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.7 MiB/s wr, 127 op/s
Dec 06 07:35:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:35:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:49.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:35:49 compute-0 nova_compute[251992]: 2025-12-06 07:35:49.686 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:50 compute-0 nova_compute[251992]: 2025-12-06 07:35:50.137 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:50.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:50 compute-0 nova_compute[251992]: 2025-12-06 07:35:50.632 251996 INFO nova.compute.manager [None req-ad3b5402-fdf2-48c7-9177-841e42bf1582 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Unrescuing
Dec 06 07:35:50 compute-0 nova_compute[251992]: 2025-12-06 07:35:50.632 251996 DEBUG oslo_concurrency.lockutils [None req-ad3b5402-fdf2-48c7-9177-841e42bf1582 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:35:50 compute-0 nova_compute[251992]: 2025-12-06 07:35:50.633 251996 DEBUG oslo_concurrency.lockutils [None req-ad3b5402-fdf2-48c7-9177-841e42bf1582 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquired lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:35:50 compute-0 nova_compute[251992]: 2025-12-06 07:35:50.633 251996 DEBUG nova.network.neutron [None req-ad3b5402-fdf2-48c7-9177-841e42bf1582 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:35:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 675 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 944 KiB/s wr, 159 op/s
Dec 06 07:35:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1498299116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:35:51 compute-0 ceph-mon[74339]: pgmap v2373: 305 pgs: 305 active+clean; 671 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 891 KiB/s wr, 89 op/s
Dec 06 07:35:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:35:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:51.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:35:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:52.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:52 compute-0 nova_compute[251992]: 2025-12-06 07:35:52.235 251996 DEBUG nova.network.neutron [None req-ad3b5402-fdf2-48c7-9177-841e42bf1582 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Updating instance_info_cache with network_info: [{"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:35:52 compute-0 nova_compute[251992]: 2025-12-06 07:35:52.269 251996 DEBUG oslo_concurrency.lockutils [None req-ad3b5402-fdf2-48c7-9177-841e42bf1582 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Releasing lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:35:52 compute-0 nova_compute[251992]: 2025-12-06 07:35:52.271 251996 DEBUG nova.objects.instance [None req-ad3b5402-fdf2-48c7-9177-841e42bf1582 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'flavor' on Instance uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 676 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 126 KiB/s wr, 153 op/s
Dec 06 07:35:52 compute-0 ceph-mon[74339]: pgmap v2374: 305 pgs: 305 active+clean; 675 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 944 KiB/s wr, 159 op/s
Dec 06 07:35:52 compute-0 kernel: tape61ac68e-e5 (unregistering): left promiscuous mode
Dec 06 07:35:52 compute-0 NetworkManager[48965]: <info>  [1765006552.8831] device (tape61ac68e-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:35:52 compute-0 ovn_controller[147168]: 2025-12-06T07:35:52Z|00456|binding|INFO|Releasing lport e61ac68e-e534-4351-b3ce-b20fa32579fc from this chassis (sb_readonly=0)
Dec 06 07:35:52 compute-0 ovn_controller[147168]: 2025-12-06T07:35:52Z|00457|binding|INFO|Setting lport e61ac68e-e534-4351-b3ce-b20fa32579fc down in Southbound
Dec 06 07:35:52 compute-0 nova_compute[251992]: 2025-12-06 07:35:52.893 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:52 compute-0 ovn_controller[147168]: 2025-12-06T07:35:52Z|00458|binding|INFO|Removing iface tape61ac68e-e5 ovn-installed in OVS
Dec 06 07:35:52 compute-0 nova_compute[251992]: 2025-12-06 07:35:52.895 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:52.899 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:1b:d0 10.100.0.14'], port_security=['fa:16:3e:99:1b:d0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c2e6b8fd-375c-4658-b338-f2d334041ba3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '6', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=e61ac68e-e534-4351-b3ce-b20fa32579fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:35:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:52.901 158118 INFO neutron.agent.ovn.metadata.agent [-] Port e61ac68e-e534-4351-b3ce-b20fa32579fc in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 unbound from our chassis
Dec 06 07:35:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:52.903 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:35:52 compute-0 nova_compute[251992]: 2025-12-06 07:35:52.910 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:52.918 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d53a7d14-6ae0-4024-bb3f-8cd58599f340]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:52 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Dec 06 07:35:52 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d0000007c.scope: Consumed 5.108s CPU time.
Dec 06 07:35:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:52.945 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[56555687-54e9-4c91-9e49-9ef343647b09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:52 compute-0 systemd-machined[212986]: Machine qemu-57-instance-0000007c terminated.
Dec 06 07:35:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:52.949 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[0169f72e-0093-4ae3-b85f-5f1b8d9e00ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:52 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:35:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:52.978 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2a3ded5d-7fad-4c49-9edc-d942a3149170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.008 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1ccc66-11ef-494d-8ea2-3b989c220deb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 42370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333710, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.022 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5dbd6a-9721-4727-9bf5-c73ba7800c7c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669283, 'tstamp': 669283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333711, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669285, 'tstamp': 669285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333711, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.023 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.025 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.029 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.030 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.030 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.030 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.031 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.134 251996 INFO nova.virt.libvirt.driver [-] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance destroyed successfully.
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.135 251996 DEBUG nova.objects.instance [None req-ad3b5402-fdf2-48c7-9177-841e42bf1582 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'numa_topology' on Instance uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:35:53 compute-0 kernel: tape61ac68e-e5: entered promiscuous mode
Dec 06 07:35:53 compute-0 systemd-udevd[333700]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:35:53 compute-0 NetworkManager[48965]: <info>  [1765006553.3149] manager: (tape61ac68e-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Dec 06 07:35:53 compute-0 ovn_controller[147168]: 2025-12-06T07:35:53Z|00459|binding|INFO|Claiming lport e61ac68e-e534-4351-b3ce-b20fa32579fc for this chassis.
Dec 06 07:35:53 compute-0 ovn_controller[147168]: 2025-12-06T07:35:53Z|00460|binding|INFO|e61ac68e-e534-4351-b3ce-b20fa32579fc: Claiming fa:16:3e:99:1b:d0 10.100.0.14
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.316 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:53 compute-0 NetworkManager[48965]: <info>  [1765006553.3272] device (tape61ac68e-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:35:53 compute-0 NetworkManager[48965]: <info>  [1765006553.3278] device (tape61ac68e-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:35:53 compute-0 ovn_controller[147168]: 2025-12-06T07:35:53Z|00461|binding|INFO|Setting lport e61ac68e-e534-4351-b3ce-b20fa32579fc ovn-installed in OVS
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.333 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.335 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Dec 06 07:35:53 compute-0 systemd-machined[212986]: New machine qemu-58-instance-0000007c.
Dec 06 07:35:53 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-0000007c.
Dec 06 07:35:53 compute-0 ovn_controller[147168]: 2025-12-06T07:35:53Z|00462|binding|INFO|Setting lport e61ac68e-e534-4351-b3ce-b20fa32579fc up in Southbound
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.403 251996 DEBUG nova.compute.manager [req-77fe4f12-809f-4cb2-972e-9f3150f97c41 req-ef1e11b3-db11-4ec8-b0ee-130129272c81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-unplugged-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.404 251996 DEBUG oslo_concurrency.lockutils [req-77fe4f12-809f-4cb2-972e-9f3150f97c41 req-ef1e11b3-db11-4ec8-b0ee-130129272c81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.404 251996 DEBUG oslo_concurrency.lockutils [req-77fe4f12-809f-4cb2-972e-9f3150f97c41 req-ef1e11b3-db11-4ec8-b0ee-130129272c81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.404 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:1b:d0 10.100.0.14'], port_security=['fa:16:3e:99:1b:d0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c2e6b8fd-375c-4658-b338-f2d334041ba3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '6', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=e61ac68e-e534-4351-b3ce-b20fa32579fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.404 251996 DEBUG oslo_concurrency.lockutils [req-77fe4f12-809f-4cb2-972e-9f3150f97c41 req-ef1e11b3-db11-4ec8-b0ee-130129272c81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.405 251996 DEBUG nova.compute.manager [req-77fe4f12-809f-4cb2-972e-9f3150f97c41 req-ef1e11b3-db11-4ec8-b0ee-130129272c81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] No waiting events found dispatching network-vif-unplugged-e61ac68e-e534-4351-b3ce-b20fa32579fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.405 251996 WARNING nova.compute.manager [req-77fe4f12-809f-4cb2-972e-9f3150f97c41 req-ef1e11b3-db11-4ec8-b0ee-130129272c81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received unexpected event network-vif-unplugged-e61ac68e-e534-4351-b3ce-b20fa32579fc for instance with vm_state rescued and task_state unrescuing.
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.405 158118 INFO neutron.agent.ovn.metadata.agent [-] Port e61ac68e-e534-4351-b3ce-b20fa32579fc in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 bound to our chassis
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.407 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.421 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5dcc37-90aa-4a80-8027-1afae227568a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.447 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5ca054-6919-4b81-9925-b8a0856d7764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.450 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a34de61a-cdd8-4c90-90d5-bd49382f82ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.477 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[fa239de8-0435-4d3b-a6cc-41dd9b538db4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.493 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[98a9db2b-72db-48b8-b286-b22df99692b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 616, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 616, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 42370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333749, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.510 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d0b867-7bfe-4ee8-97b3-1ec54f615a05]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669283, 'tstamp': 669283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333757, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669285, 'tstamp': 669285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333757, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.512 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.513 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:53 compute-0 nova_compute[251992]: 2025-12-06 07:35:53.514 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.516 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.516 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.516 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:35:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:35:53.517 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:35:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:53.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:54 compute-0 nova_compute[251992]: 2025-12-06 07:35:54.149 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for c2e6b8fd-375c-4658-b338-f2d334041ba3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:35:54 compute-0 nova_compute[251992]: 2025-12-06 07:35:54.150 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006554.1493518, c2e6b8fd-375c-4658-b338-f2d334041ba3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:35:54 compute-0 nova_compute[251992]: 2025-12-06 07:35:54.151 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] VM Resumed (Lifecycle Event)
Dec 06 07:35:54 compute-0 nova_compute[251992]: 2025-12-06 07:35:54.180 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:54 compute-0 nova_compute[251992]: 2025-12-06 07:35:54.183 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:35:54 compute-0 nova_compute[251992]: 2025-12-06 07:35:54.203 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] During sync_power_state the instance has a pending task (unrescuing). Skip.
Dec 06 07:35:54 compute-0 nova_compute[251992]: 2025-12-06 07:35:54.204 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006554.1503124, c2e6b8fd-375c-4658-b338-f2d334041ba3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:35:54 compute-0 nova_compute[251992]: 2025-12-06 07:35:54.204 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] VM Started (Lifecycle Event)
Dec 06 07:35:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:54.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:54 compute-0 nova_compute[251992]: 2025-12-06 07:35:54.227 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:35:54 compute-0 nova_compute[251992]: 2025-12-06 07:35:54.230 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:35:54 compute-0 nova_compute[251992]: 2025-12-06 07:35:54.247 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] During sync_power_state the instance has a pending task (unrescuing). Skip.
Dec 06 07:35:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Dec 06 07:35:54 compute-0 ceph-mon[74339]: pgmap v2375: 305 pgs: 305 active+clean; 676 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 126 KiB/s wr, 153 op/s
Dec 06 07:35:54 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Dec 06 07:35:54 compute-0 nova_compute[251992]: 2025-12-06 07:35:54.688 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 684 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 165 KiB/s wr, 227 op/s
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.140 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.509 251996 DEBUG nova.compute.manager [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.510 251996 DEBUG oslo_concurrency.lockutils [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.510 251996 DEBUG oslo_concurrency.lockutils [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.511 251996 DEBUG oslo_concurrency.lockutils [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.511 251996 DEBUG nova.compute.manager [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] No waiting events found dispatching network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.511 251996 WARNING nova.compute.manager [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received unexpected event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc for instance with vm_state rescued and task_state unrescuing.
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.512 251996 DEBUG nova.compute.manager [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.512 251996 DEBUG oslo_concurrency.lockutils [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.512 251996 DEBUG oslo_concurrency.lockutils [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.512 251996 DEBUG oslo_concurrency.lockutils [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.513 251996 DEBUG nova.compute.manager [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] No waiting events found dispatching network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.513 251996 WARNING nova.compute.manager [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received unexpected event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc for instance with vm_state rescued and task_state unrescuing.
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.513 251996 DEBUG nova.compute.manager [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.514 251996 DEBUG oslo_concurrency.lockutils [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.514 251996 DEBUG oslo_concurrency.lockutils [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.514 251996 DEBUG oslo_concurrency.lockutils [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.514 251996 DEBUG nova.compute.manager [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] No waiting events found dispatching network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:35:55 compute-0 nova_compute[251992]: 2025-12-06 07:35:55.515 251996 WARNING nova.compute.manager [req-2866db53-cd07-484a-a2fb-a08835b42a3c req-28dc5c95-dae0-420e-8ad7-04a9fe54f294 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received unexpected event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc for instance with vm_state rescued and task_state unrescuing.
Dec 06 07:35:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:55.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:56.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 690 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 183 KiB/s wr, 204 op/s
Dec 06 07:35:57 compute-0 ceph-mon[74339]: osdmap e294: 3 total, 3 up, 3 in
Dec 06 07:35:57 compute-0 ceph-mon[74339]: pgmap v2377: 305 pgs: 305 active+clean; 684 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 165 KiB/s wr, 227 op/s
Dec 06 07:35:57 compute-0 sudo[333813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:57 compute-0 sudo[333813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:57 compute-0 sudo[333813]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:57 compute-0 sudo[333838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:35:57 compute-0 sudo[333838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:35:57 compute-0 sudo[333838]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:57.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:35:58.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:58 compute-0 ceph-mon[74339]: pgmap v2378: 305 pgs: 305 active+clean; 690 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 183 KiB/s wr, 204 op/s
Dec 06 07:35:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 690 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 183 KiB/s wr, 204 op/s
Dec 06 07:35:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:35:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:35:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:35:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:35:59.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:35:59 compute-0 nova_compute[251992]: 2025-12-06 07:35:59.692 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:00 compute-0 nova_compute[251992]: 2025-12-06 07:36:00.187 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:00.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 694 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 807 KiB/s wr, 152 op/s
Dec 06 07:36:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:01.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4059959374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:01 compute-0 ceph-mon[74339]: pgmap v2379: 305 pgs: 305 active+clean; 690 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 183 KiB/s wr, 204 op/s
Dec 06 07:36:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:02.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 694 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 795 KiB/s wr, 111 op/s
Dec 06 07:36:03 compute-0 ceph-mon[74339]: pgmap v2380: 305 pgs: 305 active+clean; 694 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 807 KiB/s wr, 152 op/s
Dec 06 07:36:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:03 compute-0 nova_compute[251992]: 2025-12-06 07:36:03.791 251996 DEBUG nova.compute.manager [None req-ad3b5402-fdf2-48c7-9177-841e42bf1582 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:36:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:36:03.843 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:36:03.844 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:36:03.844 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:04.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:04 compute-0 ceph-mon[74339]: pgmap v2381: 305 pgs: 305 active+clean; 694 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 795 KiB/s wr, 111 op/s
Dec 06 07:36:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3407513672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:04 compute-0 nova_compute[251992]: 2025-12-06 07:36:04.694 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 703 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 2.0 MiB/s wr, 88 op/s
Dec 06 07:36:05 compute-0 nova_compute[251992]: 2025-12-06 07:36:05.189 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:05.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:06 compute-0 ceph-mon[74339]: pgmap v2382: 305 pgs: 305 active+clean; 703 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 2.0 MiB/s wr, 88 op/s
Dec 06 07:36:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:06.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 711 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 116 op/s
Dec 06 07:36:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2192358059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1808890436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:07 compute-0 ceph-mon[74339]: pgmap v2383: 305 pgs: 305 active+clean; 711 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 116 op/s
Dec 06 07:36:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:07.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:08.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 711 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Dec 06 07:36:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:36:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/690703462' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:36:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:36:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/690703462' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:36:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/690703462' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:36:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/690703462' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:36:09 compute-0 podman[333870]: 2025-12-06 07:36:09.500429925 +0000 UTC m=+0.150774772 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 06 07:36:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:09.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:09 compute-0 nova_compute[251992]: 2025-12-06 07:36:09.695 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:10 compute-0 nova_compute[251992]: 2025-12-06 07:36:10.191 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:10.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:10 compute-0 ceph-mon[74339]: pgmap v2384: 305 pgs: 305 active+clean; 711 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 310 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Dec 06 07:36:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 723 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Dec 06 07:36:11 compute-0 ovn_controller[147168]: 2025-12-06T07:36:11Z|00463|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:36:11 compute-0 ovn_controller[147168]: 2025-12-06T07:36:11Z|00464|binding|INFO|Releasing lport 0d2044a5-87cb-4c28-912c-9a2682bb94de from this chassis (sb_readonly=0)
Dec 06 07:36:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.435832) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571435917, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1937, "num_deletes": 258, "total_data_size": 3339238, "memory_usage": 3403232, "flush_reason": "Manual Compaction"}
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Dec 06 07:36:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Dec 06 07:36:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3904011872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:11 compute-0 ceph-mon[74339]: pgmap v2385: 305 pgs: 305 active+clean; 723 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 2.2 MiB/s wr, 110 op/s
Dec 06 07:36:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1025593138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:11 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Dec 06 07:36:11 compute-0 nova_compute[251992]: 2025-12-06 07:36:11.452 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571463063, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 3284966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45751, "largest_seqno": 47687, "table_properties": {"data_size": 3275826, "index_size": 5698, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19961, "raw_average_key_size": 21, "raw_value_size": 3257343, "raw_average_value_size": 3472, "num_data_blocks": 245, "num_entries": 938, "num_filter_entries": 938, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006374, "oldest_key_time": 1765006374, "file_creation_time": 1765006571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 27391 microseconds, and 15933 cpu microseconds.
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.463214) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 3284966 bytes OK
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.463269) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.464744) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.464832) EVENT_LOG_v1 {"time_micros": 1765006571464811, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.464875) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 3330929, prev total WAL file size 3330970, number of live WAL files 2.
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.467177) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(3207KB)], [98(10019KB)]
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571467277, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 13544554, "oldest_snapshot_seqno": -1}
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 8241 keys, 11524541 bytes, temperature: kUnknown
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571524607, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 11524541, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11470859, "index_size": 32002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20613, "raw_key_size": 213239, "raw_average_key_size": 25, "raw_value_size": 11325274, "raw_average_value_size": 1374, "num_data_blocks": 1258, "num_entries": 8241, "num_filter_entries": 8241, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765006571, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.525023) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 11524541 bytes
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.526078) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.2 rd, 201.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.8 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 8771, records dropped: 530 output_compression: NoCompression
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.526099) EVENT_LOG_v1 {"time_micros": 1765006571526089, "job": 58, "event": "compaction_finished", "compaction_time_micros": 57349, "compaction_time_cpu_micros": 31799, "output_level": 6, "num_output_files": 1, "total_output_size": 11524541, "num_input_records": 8771, "num_output_records": 8241, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571526966, "job": 58, "event": "table_file_deletion", "file_number": 100}
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006571529227, "job": 58, "event": "table_file_deletion", "file_number": 98}
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.466959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.529299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.529309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.529311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.529313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:11 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:11.529315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:11.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:36:12.008 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:36:12 compute-0 nova_compute[251992]: 2025-12-06 07:36:12.009 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:36:12.009 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:36:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:12.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:12 compute-0 ceph-mon[74339]: osdmap e295: 3 total, 3 up, 3 in
Dec 06 07:36:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2445399869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 723 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 1.9 MiB/s wr, 101 op/s
Dec 06 07:36:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:36:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:36:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:36:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:36:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:36:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:36:13 compute-0 sudo[333899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:13 compute-0 sudo[333899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:13 compute-0 nova_compute[251992]: 2025-12-06 07:36:13.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:13 compute-0 sudo[333899]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:13.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:13 compute-0 nova_compute[251992]: 2025-12-06 07:36:13.695 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:13 compute-0 nova_compute[251992]: 2025-12-06 07:36:13.695 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:13 compute-0 nova_compute[251992]: 2025-12-06 07:36:13.695 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:13 compute-0 nova_compute[251992]: 2025-12-06 07:36:13.696 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:36:13 compute-0 nova_compute[251992]: 2025-12-06 07:36:13.696 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:13 compute-0 sudo[333924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:36:13 compute-0 sudo[333924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:13 compute-0 sudo[333924]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:13 compute-0 sudo[333950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:13 compute-0 sudo[333950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:13 compute-0 sudo[333950]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4100487112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:13 compute-0 ceph-mon[74339]: pgmap v2387: 305 pgs: 305 active+clean; 723 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 1.9 MiB/s wr, 101 op/s
Dec 06 07:36:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1337473103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:13 compute-0 sudo[333975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:36:13 compute-0 sudo[333975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:36:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1884224956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.161 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:14.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.237622) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574237701, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 292, "num_deletes": 261, "total_data_size": 50414, "memory_usage": 56184, "flush_reason": "Manual Compaction"}
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574240565, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 50360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47688, "largest_seqno": 47979, "table_properties": {"data_size": 48413, "index_size": 112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4970, "raw_average_key_size": 17, "raw_value_size": 44515, "raw_average_value_size": 157, "num_data_blocks": 5, "num_entries": 282, "num_filter_entries": 282, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006571, "oldest_key_time": 1765006571, "file_creation_time": 1765006574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 2961 microseconds, and 1399 cpu microseconds.
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.240597) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 50360 bytes OK
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.240615) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.241592) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.241604) EVENT_LOG_v1 {"time_micros": 1765006574241600, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.241621) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 48229, prev total WAL file size 48229, number of live WAL files 2.
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.241963) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353132' seq:72057594037927935, type:22 .. '6C6F676D0031373639' seq:0, type:0; will stop at (end)
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(49KB)], [101(10MB)]
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574242000, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11574901, "oldest_snapshot_seqno": -1}
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.261 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.261 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.265 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.265 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.268 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.268 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 7992 keys, 11428077 bytes, temperature: kUnknown
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574297791, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 11428077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11375574, "index_size": 31399, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20037, "raw_key_size": 209042, "raw_average_key_size": 26, "raw_value_size": 11234032, "raw_average_value_size": 1405, "num_data_blocks": 1228, "num_entries": 7992, "num_filter_entries": 7992, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765006574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.298077) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 11428077 bytes
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.299614) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.0 rd, 204.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 11.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(456.8) write-amplify(226.9) OK, records in: 8523, records dropped: 531 output_compression: NoCompression
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.299635) EVENT_LOG_v1 {"time_micros": 1765006574299626, "job": 60, "event": "compaction_finished", "compaction_time_micros": 55912, "compaction_time_cpu_micros": 28782, "output_level": 6, "num_output_files": 1, "total_output_size": 11428077, "num_input_records": 8523, "num_output_records": 7992, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574300068, "job": 60, "event": "table_file_deletion", "file_number": 103}
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006574304373, "job": 60, "event": "table_file_deletion", "file_number": 101}
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.241883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.304486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.304492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.304494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.304496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:36:14.304497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:36:14 compute-0 sudo[333975]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.441 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.443 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3895MB free_disk=20.669200897216797GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.443 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.443 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 07:36:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:36:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:36:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:36:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:36:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:36:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.526 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 70928eda-043f-429b-aa4e-af1f3189a7c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.527 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c2e6b8fd-375c-4658-b338-f2d334041ba3 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.527 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c1ef1073-7c66-428c-a02b-e4daa3551d22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.527 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.528 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:36:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:36:14 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c4810618-50f4-4a2d-a30e-6648cfedda66 does not exist
Dec 06 07:36:14 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 06dc43c1-53f6-4b6e-91c9-c7fff0d27e16 does not exist
Dec 06 07:36:14 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8ac93268-2ea8-4c51-a920-ab823190210f does not exist
Dec 06 07:36:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:36:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:36:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:36:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:36:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:36:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:36:14 compute-0 sudo[334053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:14 compute-0 sudo[334053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:14 compute-0 sudo[334053]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.645 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:14 compute-0 sudo[334078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:36:14 compute-0 sudo[334078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:14 compute-0 sudo[334078]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:14 compute-0 nova_compute[251992]: 2025-12-06 07:36:14.697 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:14 compute-0 sudo[334104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:14 compute-0 sudo[334104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:14 compute-0 sudo[334104]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:14 compute-0 sudo[334129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:36:14 compute-0 sudo[334129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 740 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 1.6 MiB/s wr, 88 op/s
Dec 06 07:36:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1884224956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:36:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:36:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:36:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:36:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:36:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:36:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:36:15 compute-0 podman[334214]: 2025-12-06 07:36:15.060035955 +0000 UTC m=+0.038231682 container create 8eeb1de42d9f4fa6a7382f015def5da56d24cdd26c7df947e16a221118b815ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_northcutt, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:36:15 compute-0 systemd[1]: Started libpod-conmon-8eeb1de42d9f4fa6a7382f015def5da56d24cdd26c7df947e16a221118b815ea.scope.
Dec 06 07:36:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:36:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:36:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3988076846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:15 compute-0 podman[334214]: 2025-12-06 07:36:15.042831101 +0000 UTC m=+0.021026868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:36:15 compute-0 podman[334214]: 2025-12-06 07:36:15.147401023 +0000 UTC m=+0.125596800 container init 8eeb1de42d9f4fa6a7382f015def5da56d24cdd26c7df947e16a221118b815ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:36:15 compute-0 podman[334214]: 2025-12-06 07:36:15.1561246 +0000 UTC m=+0.134320347 container start 8eeb1de42d9f4fa6a7382f015def5da56d24cdd26c7df947e16a221118b815ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_northcutt, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 07:36:15 compute-0 podman[334214]: 2025-12-06 07:36:15.160520988 +0000 UTC m=+0.138716745 container attach 8eeb1de42d9f4fa6a7382f015def5da56d24cdd26c7df947e16a221118b815ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_northcutt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 07:36:15 compute-0 agitated_northcutt[334230]: 167 167
Dec 06 07:36:15 compute-0 systemd[1]: libpod-8eeb1de42d9f4fa6a7382f015def5da56d24cdd26c7df947e16a221118b815ea.scope: Deactivated successfully.
Dec 06 07:36:15 compute-0 podman[334214]: 2025-12-06 07:36:15.164251668 +0000 UTC m=+0.142447415 container died 8eeb1de42d9f4fa6a7382f015def5da56d24cdd26c7df947e16a221118b815ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:36:15 compute-0 nova_compute[251992]: 2025-12-06 07:36:15.163 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:15 compute-0 nova_compute[251992]: 2025-12-06 07:36:15.171 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a21b84fc714cc4212f811f3fa86146d2afe73e9b657dc09404007d13c5a39176-merged.mount: Deactivated successfully.
Dec 06 07:36:15 compute-0 nova_compute[251992]: 2025-12-06 07:36:15.189 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:36:15 compute-0 nova_compute[251992]: 2025-12-06 07:36:15.193 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:15 compute-0 podman[334214]: 2025-12-06 07:36:15.201855774 +0000 UTC m=+0.180051511 container remove 8eeb1de42d9f4fa6a7382f015def5da56d24cdd26c7df947e16a221118b815ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_northcutt, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:36:15 compute-0 systemd[1]: libpod-conmon-8eeb1de42d9f4fa6a7382f015def5da56d24cdd26c7df947e16a221118b815ea.scope: Deactivated successfully.
Dec 06 07:36:15 compute-0 nova_compute[251992]: 2025-12-06 07:36:15.217 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:36:15 compute-0 nova_compute[251992]: 2025-12-06 07:36:15.218 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:15 compute-0 podman[334254]: 2025-12-06 07:36:15.374929127 +0000 UTC m=+0.043614689 container create 17f3376a02f93f9b91c82aeddfb2deaa47a1d0262c5f8ca1a4d47df7ce70c028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:36:15 compute-0 systemd[1]: Started libpod-conmon-17f3376a02f93f9b91c82aeddfb2deaa47a1d0262c5f8ca1a4d47df7ce70c028.scope.
Dec 06 07:36:15 compute-0 podman[334254]: 2025-12-06 07:36:15.355701037 +0000 UTC m=+0.024386609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:36:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c1f55ec50ec2adc2f926bdad1b30f0044e38b49455acd02ef2166bd027ed7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c1f55ec50ec2adc2f926bdad1b30f0044e38b49455acd02ef2166bd027ed7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c1f55ec50ec2adc2f926bdad1b30f0044e38b49455acd02ef2166bd027ed7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c1f55ec50ec2adc2f926bdad1b30f0044e38b49455acd02ef2166bd027ed7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c1f55ec50ec2adc2f926bdad1b30f0044e38b49455acd02ef2166bd027ed7e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:15 compute-0 podman[334254]: 2025-12-06 07:36:15.470428855 +0000 UTC m=+0.139114427 container init 17f3376a02f93f9b91c82aeddfb2deaa47a1d0262c5f8ca1a4d47df7ce70c028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 07:36:15 compute-0 podman[334254]: 2025-12-06 07:36:15.479279453 +0000 UTC m=+0.147965005 container start 17f3376a02f93f9b91c82aeddfb2deaa47a1d0262c5f8ca1a4d47df7ce70c028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 07:36:15 compute-0 podman[334254]: 2025-12-06 07:36:15.482090849 +0000 UTC m=+0.150776401 container attach 17f3376a02f93f9b91c82aeddfb2deaa47a1d0262c5f8ca1a4d47df7ce70c028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 07:36:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:15.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:16 compute-0 ceph-mon[74339]: pgmap v2388: 305 pgs: 305 active+clean; 740 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 1.6 MiB/s wr, 88 op/s
Dec 06 07:36:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3988076846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2477259455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:16.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:16 compute-0 gallant_ride[334270]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:36:16 compute-0 gallant_ride[334270]: --> relative data size: 1.0
Dec 06 07:36:16 compute-0 gallant_ride[334270]: --> All data devices are unavailable
Dec 06 07:36:16 compute-0 systemd[1]: libpod-17f3376a02f93f9b91c82aeddfb2deaa47a1d0262c5f8ca1a4d47df7ce70c028.scope: Deactivated successfully.
Dec 06 07:36:16 compute-0 podman[334285]: 2025-12-06 07:36:16.422119706 +0000 UTC m=+0.024717247 container died 17f3376a02f93f9b91c82aeddfb2deaa47a1d0262c5f8ca1a4d47df7ce70c028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:36:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0c1f55ec50ec2adc2f926bdad1b30f0044e38b49455acd02ef2166bd027ed7e-merged.mount: Deactivated successfully.
Dec 06 07:36:16 compute-0 podman[334285]: 2025-12-06 07:36:16.472565888 +0000 UTC m=+0.075163399 container remove 17f3376a02f93f9b91c82aeddfb2deaa47a1d0262c5f8ca1a4d47df7ce70c028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:36:16 compute-0 systemd[1]: libpod-conmon-17f3376a02f93f9b91c82aeddfb2deaa47a1d0262c5f8ca1a4d47df7ce70c028.scope: Deactivated successfully.
Dec 06 07:36:16 compute-0 sudo[334129]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:16 compute-0 podman[334301]: 2025-12-06 07:36:16.528058016 +0000 UTC m=+0.057174604 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:36:16 compute-0 podman[334302]: 2025-12-06 07:36:16.533157754 +0000 UTC m=+0.060234186 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 07:36:16 compute-0 sudo[334335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:16 compute-0 sudo[334335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:16 compute-0 sudo[334335]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:16 compute-0 sudo[334364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:36:16 compute-0 sudo[334364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:16 compute-0 sudo[334364]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:16 compute-0 sudo[334389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:16 compute-0 sudo[334389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:16 compute-0 sudo[334389]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:16 compute-0 sudo[334414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:36:16 compute-0 sudo[334414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 06 07:36:17 compute-0 podman[334479]: 2025-12-06 07:36:17.031162379 +0000 UTC m=+0.041150953 container create b26065a00fc1c890dfef012abd1282542b844f6373f0301d7ced6d01e5e4bcba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_taussig, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:36:17 compute-0 systemd[1]: Started libpod-conmon-b26065a00fc1c890dfef012abd1282542b844f6373f0301d7ced6d01e5e4bcba.scope.
Dec 06 07:36:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:36:17 compute-0 podman[334479]: 2025-12-06 07:36:17.01156632 +0000 UTC m=+0.021554944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:36:17 compute-0 podman[334479]: 2025-12-06 07:36:17.107787167 +0000 UTC m=+0.117775761 container init b26065a00fc1c890dfef012abd1282542b844f6373f0301d7ced6d01e5e4bcba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:36:17 compute-0 podman[334479]: 2025-12-06 07:36:17.115184637 +0000 UTC m=+0.125173231 container start b26065a00fc1c890dfef012abd1282542b844f6373f0301d7ced6d01e5e4bcba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_taussig, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:36:17 compute-0 podman[334479]: 2025-12-06 07:36:17.118682791 +0000 UTC m=+0.128671415 container attach b26065a00fc1c890dfef012abd1282542b844f6373f0301d7ced6d01e5e4bcba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_taussig, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:36:17 compute-0 affectionate_taussig[334495]: 167 167
Dec 06 07:36:17 compute-0 systemd[1]: libpod-b26065a00fc1c890dfef012abd1282542b844f6373f0301d7ced6d01e5e4bcba.scope: Deactivated successfully.
Dec 06 07:36:17 compute-0 podman[334479]: 2025-12-06 07:36:17.120138961 +0000 UTC m=+0.130127555 container died b26065a00fc1c890dfef012abd1282542b844f6373f0301d7ced6d01e5e4bcba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_taussig, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Dec 06 07:36:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-99138e395726f8803f33eed37dddff50248fe325ad4f23c53e0bf2b8a755938d-merged.mount: Deactivated successfully.
Dec 06 07:36:17 compute-0 podman[334479]: 2025-12-06 07:36:17.156474402 +0000 UTC m=+0.166462986 container remove b26065a00fc1c890dfef012abd1282542b844f6373f0301d7ced6d01e5e4bcba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_taussig, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:36:17 compute-0 ceph-mon[74339]: pgmap v2389: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 06 07:36:17 compute-0 systemd[1]: libpod-conmon-b26065a00fc1c890dfef012abd1282542b844f6373f0301d7ced6d01e5e4bcba.scope: Deactivated successfully.
Dec 06 07:36:17 compute-0 nova_compute[251992]: 2025-12-06 07:36:17.212 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:17 compute-0 nova_compute[251992]: 2025-12-06 07:36:17.212 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:17 compute-0 nova_compute[251992]: 2025-12-06 07:36:17.213 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:17 compute-0 sudo[334514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:17 compute-0 sudo[334514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:17 compute-0 sudo[334514]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:17 compute-0 podman[334542]: 2025-12-06 07:36:17.331968659 +0000 UTC m=+0.037470532 container create 7fefd2c49fdf4fb782a1eacd1cdc6550774b83cf0d2a380d40bec5c58446f9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 07:36:17 compute-0 systemd[1]: Started libpod-conmon-7fefd2c49fdf4fb782a1eacd1cdc6550774b83cf0d2a380d40bec5c58446f9b0.scope.
Dec 06 07:36:17 compute-0 sudo[334551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:17 compute-0 sudo[334551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:17 compute-0 sudo[334551]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:36:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c45804798b5707f3c1d414e8eb2907c3901972d1037a0d78d63c90e71aff83c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c45804798b5707f3c1d414e8eb2907c3901972d1037a0d78d63c90e71aff83c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c45804798b5707f3c1d414e8eb2907c3901972d1037a0d78d63c90e71aff83c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c45804798b5707f3c1d414e8eb2907c3901972d1037a0d78d63c90e71aff83c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:17 compute-0 podman[334542]: 2025-12-06 07:36:17.396972284 +0000 UTC m=+0.102474157 container init 7fefd2c49fdf4fb782a1eacd1cdc6550774b83cf0d2a380d40bec5c58446f9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcnulty, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 07:36:17 compute-0 podman[334542]: 2025-12-06 07:36:17.405144194 +0000 UTC m=+0.110646067 container start 7fefd2c49fdf4fb782a1eacd1cdc6550774b83cf0d2a380d40bec5c58446f9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:36:17 compute-0 podman[334542]: 2025-12-06 07:36:17.407892109 +0000 UTC m=+0.113393982 container attach 7fefd2c49fdf4fb782a1eacd1cdc6550774b83cf0d2a380d40bec5c58446f9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcnulty, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:36:17 compute-0 podman[334542]: 2025-12-06 07:36:17.316748339 +0000 UTC m=+0.022250242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:36:17 compute-0 nova_compute[251992]: 2025-12-06 07:36:17.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:17 compute-0 nova_compute[251992]: 2025-12-06 07:36:17.660 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:17.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]: {
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:     "0": [
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:         {
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             "devices": [
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "/dev/loop3"
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             ],
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             "lv_name": "ceph_lv0",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             "lv_size": "7511998464",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             "name": "ceph_lv0",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             "tags": {
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "ceph.cluster_name": "ceph",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "ceph.crush_device_class": "",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "ceph.encrypted": "0",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "ceph.osd_id": "0",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "ceph.type": "block",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:                 "ceph.vdo": "0"
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             },
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             "type": "block",
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:             "vg_name": "ceph_vg0"
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:         }
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]:     ]
Dec 06 07:36:18 compute-0 intelligent_mcnulty[334585]: }
Dec 06 07:36:18 compute-0 systemd[1]: libpod-7fefd2c49fdf4fb782a1eacd1cdc6550774b83cf0d2a380d40bec5c58446f9b0.scope: Deactivated successfully.
Dec 06 07:36:18 compute-0 podman[334542]: 2025-12-06 07:36:18.192015207 +0000 UTC m=+0.897517090 container died 7fefd2c49fdf4fb782a1eacd1cdc6550774b83cf0d2a380d40bec5c58446f9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:36:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c45804798b5707f3c1d414e8eb2907c3901972d1037a0d78d63c90e71aff83c-merged.mount: Deactivated successfully.
Dec 06 07:36:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:18.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:18 compute-0 podman[334542]: 2025-12-06 07:36:18.248383059 +0000 UTC m=+0.953884932 container remove 7fefd2c49fdf4fb782a1eacd1cdc6550774b83cf0d2a380d40bec5c58446f9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcnulty, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:36:18 compute-0 systemd[1]: libpod-conmon-7fefd2c49fdf4fb782a1eacd1cdc6550774b83cf0d2a380d40bec5c58446f9b0.scope: Deactivated successfully.
Dec 06 07:36:18 compute-0 sudo[334414]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:18 compute-0 sudo[334609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:18 compute-0 sudo[334609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:18 compute-0 sudo[334609]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:18 compute-0 sudo[334634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:36:18 compute-0 sudo[334634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:18 compute-0 sudo[334634]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:18 compute-0 sudo[334659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:18 compute-0 sudo[334659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:18 compute-0 sudo[334659]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:36:18
Dec 06 07:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.meta', 'images', '.rgw.root', 'volumes', 'backups', 'default.rgw.control']
Dec 06 07:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:36:18 compute-0 sudo[334684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:36:18 compute-0 sudo[334684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1306820683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2282150798' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1026922230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 06 07:36:18 compute-0 podman[334749]: 2025-12-06 07:36:18.827663948 +0000 UTC m=+0.048765467 container create 2900feab28d3e5208ab9a67453edbab9ce2fa1f19745667744919ddcec4ecc44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:36:18 compute-0 podman[334749]: 2025-12-06 07:36:18.807205406 +0000 UTC m=+0.028306945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:36:18 compute-0 systemd[1]: Started libpod-conmon-2900feab28d3e5208ab9a67453edbab9ce2fa1f19745667744919ddcec4ecc44.scope.
Dec 06 07:36:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:36:18 compute-0 podman[334749]: 2025-12-06 07:36:18.961119711 +0000 UTC m=+0.182221240 container init 2900feab28d3e5208ab9a67453edbab9ce2fa1f19745667744919ddcec4ecc44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 07:36:18 compute-0 podman[334749]: 2025-12-06 07:36:18.967112612 +0000 UTC m=+0.188214121 container start 2900feab28d3e5208ab9a67453edbab9ce2fa1f19745667744919ddcec4ecc44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_noyce, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 07:36:18 compute-0 podman[334749]: 2025-12-06 07:36:18.969783035 +0000 UTC m=+0.190884544 container attach 2900feab28d3e5208ab9a67453edbab9ce2fa1f19745667744919ddcec4ecc44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:36:18 compute-0 clever_noyce[334765]: 167 167
Dec 06 07:36:18 compute-0 systemd[1]: libpod-2900feab28d3e5208ab9a67453edbab9ce2fa1f19745667744919ddcec4ecc44.scope: Deactivated successfully.
Dec 06 07:36:18 compute-0 podman[334749]: 2025-12-06 07:36:18.971921713 +0000 UTC m=+0.193023222 container died 2900feab28d3e5208ab9a67453edbab9ce2fa1f19745667744919ddcec4ecc44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:36:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd70771e016cb8a309f5a4dda82e721b6058b5e365574b4f31a759c637ea6aa9-merged.mount: Deactivated successfully.
Dec 06 07:36:19 compute-0 podman[334749]: 2025-12-06 07:36:19.003000162 +0000 UTC m=+0.224101671 container remove 2900feab28d3e5208ab9a67453edbab9ce2fa1f19745667744919ddcec4ecc44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:36:19 compute-0 systemd[1]: libpod-conmon-2900feab28d3e5208ab9a67453edbab9ce2fa1f19745667744919ddcec4ecc44.scope: Deactivated successfully.
Dec 06 07:36:19 compute-0 podman[334789]: 2025-12-06 07:36:19.174398088 +0000 UTC m=+0.040677159 container create dd38d34c7c51aed345054261402801c1479e9cf5ba73480cc464c1533a3dc091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:36:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Dec 06 07:36:19 compute-0 systemd[1]: Started libpod-conmon-dd38d34c7c51aed345054261402801c1479e9cf5ba73480cc464c1533a3dc091.scope.
Dec 06 07:36:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f1146622e4fc8b55c4e67b7ceb31b85fc661efed8466075d1bb5453d3e77aab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f1146622e4fc8b55c4e67b7ceb31b85fc661efed8466075d1bb5453d3e77aab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f1146622e4fc8b55c4e67b7ceb31b85fc661efed8466075d1bb5453d3e77aab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f1146622e4fc8b55c4e67b7ceb31b85fc661efed8466075d1bb5453d3e77aab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:36:19 compute-0 podman[334789]: 2025-12-06 07:36:19.156151045 +0000 UTC m=+0.022430136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:36:19 compute-0 podman[334789]: 2025-12-06 07:36:19.265493417 +0000 UTC m=+0.131772508 container init dd38d34c7c51aed345054261402801c1479e9cf5ba73480cc464c1533a3dc091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 07:36:19 compute-0 podman[334789]: 2025-12-06 07:36:19.27187425 +0000 UTC m=+0.138153321 container start dd38d34c7c51aed345054261402801c1479e9cf5ba73480cc464c1533a3dc091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 07:36:19 compute-0 podman[334789]: 2025-12-06 07:36:19.275233031 +0000 UTC m=+0.141512122 container attach dd38d34c7c51aed345054261402801c1479e9cf5ba73480cc464c1533a3dc091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:36:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Dec 06 07:36:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Dec 06 07:36:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:19.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:19 compute-0 ceph-mon[74339]: pgmap v2390: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 141 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Dec 06 07:36:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1629657261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:19 compute-0 ceph-mon[74339]: osdmap e296: 3 total, 3 up, 3 in
Dec 06 07:36:19 compute-0 nova_compute[251992]: 2025-12-06 07:36:19.726 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:20 compute-0 lucid_fermi[334805]: {
Dec 06 07:36:20 compute-0 lucid_fermi[334805]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:36:20 compute-0 lucid_fermi[334805]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:36:20 compute-0 lucid_fermi[334805]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:36:20 compute-0 lucid_fermi[334805]:         "osd_id": 0,
Dec 06 07:36:20 compute-0 lucid_fermi[334805]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:36:20 compute-0 lucid_fermi[334805]:         "type": "bluestore"
Dec 06 07:36:20 compute-0 lucid_fermi[334805]:     }
Dec 06 07:36:20 compute-0 lucid_fermi[334805]: }
Dec 06 07:36:20 compute-0 systemd[1]: libpod-dd38d34c7c51aed345054261402801c1479e9cf5ba73480cc464c1533a3dc091.scope: Deactivated successfully.
Dec 06 07:36:20 compute-0 podman[334789]: 2025-12-06 07:36:20.099057521 +0000 UTC m=+0.965336612 container died dd38d34c7c51aed345054261402801c1479e9cf5ba73480cc464c1533a3dc091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:36:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f1146622e4fc8b55c4e67b7ceb31b85fc661efed8466075d1bb5453d3e77aab-merged.mount: Deactivated successfully.
Dec 06 07:36:20 compute-0 nova_compute[251992]: 2025-12-06 07:36:20.194 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:20 compute-0 podman[334789]: 2025-12-06 07:36:20.196625735 +0000 UTC m=+1.062904806 container remove dd38d34c7c51aed345054261402801c1479e9cf5ba73480cc464c1533a3dc091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:36:20 compute-0 systemd[1]: libpod-conmon-dd38d34c7c51aed345054261402801c1479e9cf5ba73480cc464c1533a3dc091.scope: Deactivated successfully.
Dec 06 07:36:20 compute-0 sudo[334684]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:36:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:20.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:36:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:36:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:36:20 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b1635ffb-db0b-4e4d-bc41-e8bc16196fa7 does not exist
Dec 06 07:36:20 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a7e418f5-3dd5-4cd7-abe7-e76b69947fcb does not exist
Dec 06 07:36:20 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8af50080-4ac1-427c-b0bf-ea364f51c2d1 does not exist
Dec 06 07:36:20 compute-0 nova_compute[251992]: 2025-12-06 07:36:20.659 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:20 compute-0 sudo[334840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:20 compute-0 sudo[334840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:20 compute-0 sudo[334840]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:20 compute-0 sudo[334865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:36:20 compute-0 sudo[334865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:20 compute-0 sudo[334865]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 2.3 MiB/s wr, 54 op/s
Dec 06 07:36:20 compute-0 nova_compute[251992]: 2025-12-06 07:36:20.970 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:36:21.011 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:36:21 compute-0 nova_compute[251992]: 2025-12-06 07:36:21.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:21 compute-0 nova_compute[251992]: 2025-12-06 07:36:21.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:36:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:21.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:36:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:36:22 compute-0 ceph-mon[74339]: pgmap v2392: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 2.3 MiB/s wr, 54 op/s
Dec 06 07:36:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/754207191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:22.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 MiB/s wr, 50 op/s
Dec 06 07:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:36:23 compute-0 nova_compute[251992]: 2025-12-06 07:36:23.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:36:23 compute-0 nova_compute[251992]: 2025-12-06 07:36:23.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:36:23 compute-0 nova_compute[251992]: 2025-12-06 07:36:23.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:36:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:23.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:24 compute-0 nova_compute[251992]: 2025-12-06 07:36:24.239 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:36:24 compute-0 nova_compute[251992]: 2025-12-06 07:36:24.239 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:36:24 compute-0 nova_compute[251992]: 2025-12-06 07:36:24.239 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:36:24 compute-0 nova_compute[251992]: 2025-12-06 07:36:24.240 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 70928eda-043f-429b-aa4e-af1f3189a7c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:24.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:36:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:36:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:36:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:36:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:36:24 compute-0 nova_compute[251992]: 2025-12-06 07:36:24.728 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 MiB/s wr, 77 op/s
Dec 06 07:36:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1505987283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:24 compute-0 ceph-mon[74339]: pgmap v2393: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 MiB/s wr, 50 op/s
Dec 06 07:36:25 compute-0 nova_compute[251992]: 2025-12-06 07:36:25.196 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:25.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:36:26 compute-0 nova_compute[251992]: 2025-12-06 07:36:26.017 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.016208429299274147 of space, bias 1.0, pg target 4.862528789782244 quantized to 32 (current 32)
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002161232958775358 of space, bias 1.0, pg target 0.639724955797506 quantized to 32 (current 32)
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8560510887772195 quantized to 32 (current 32)
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Dec 06 07:36:26 compute-0 nova_compute[251992]: 2025-12-06 07:36:26.215 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Updating instance_info_cache with network_info: [{"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:36:26 compute-0 nova_compute[251992]: 2025-12-06 07:36:26.232 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:36:26 compute-0 nova_compute[251992]: 2025-12-06 07:36:26.232 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:36:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:26.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:26 compute-0 ceph-mon[74339]: pgmap v2394: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 MiB/s wr, 77 op/s
Dec 06 07:36:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 96 op/s
Dec 06 07:36:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:27.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:27 compute-0 ceph-mon[74339]: pgmap v2395: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 96 op/s
Dec 06 07:36:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:28.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 96 op/s
Dec 06 07:36:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:29.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:29 compute-0 nova_compute[251992]: 2025-12-06 07:36:29.766 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:30 compute-0 nova_compute[251992]: 2025-12-06 07:36:30.197 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:30.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 33 KiB/s wr, 164 op/s
Dec 06 07:36:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:31.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:32.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4090472694' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.8 KiB/s wr, 153 op/s
Dec 06 07:36:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Dec 06 07:36:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:33.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:34.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Dec 06 07:36:34 compute-0 ceph-mon[74339]: pgmap v2396: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 36 KiB/s wr, 96 op/s
Dec 06 07:36:34 compute-0 ceph-mon[74339]: pgmap v2397: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 33 KiB/s wr, 164 op/s
Dec 06 07:36:34 compute-0 ceph-mon[74339]: pgmap v2398: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.8 KiB/s wr, 153 op/s
Dec 06 07:36:34 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Dec 06 07:36:34 compute-0 nova_compute[251992]: 2025-12-06 07:36:34.768 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 KiB/s wr, 141 op/s
Dec 06 07:36:35 compute-0 nova_compute[251992]: 2025-12-06 07:36:35.199 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Dec 06 07:36:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:36:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:35.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:36:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:36.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Dec 06 07:36:36 compute-0 ceph-mon[74339]: osdmap e297: 3 total, 3 up, 3 in
Dec 06 07:36:36 compute-0 ceph-mon[74339]: pgmap v2400: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 KiB/s wr, 141 op/s
Dec 06 07:36:36 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Dec 06 07:36:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 14 KiB/s wr, 133 op/s
Dec 06 07:36:37 compute-0 sudo[334900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:37 compute-0 sudo[334900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:37 compute-0 sudo[334900]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:37 compute-0 sudo[334925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:37 compute-0 sudo[334925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:37 compute-0 sudo[334925]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:37 compute-0 ceph-mon[74339]: osdmap e298: 3 total, 3 up, 3 in
Dec 06 07:36:37 compute-0 ceph-mon[74339]: pgmap v2402: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 14 KiB/s wr, 133 op/s
Dec 06 07:36:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:37.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:38.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/607710496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 11 KiB/s wr, 16 op/s
Dec 06 07:36:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:39.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:39 compute-0 nova_compute[251992]: 2025-12-06 07:36:39.804 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:39 compute-0 ceph-mon[74339]: pgmap v2403: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 11 KiB/s wr, 16 op/s
Dec 06 07:36:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3620983311' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:36:40 compute-0 nova_compute[251992]: 2025-12-06 07:36:40.201 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:40.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:40 compute-0 podman[334951]: 2025-12-06 07:36:40.445002941 +0000 UTC m=+0.100324560 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:36:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 766 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.7 MiB/s wr, 202 op/s
Dec 06 07:36:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Dec 06 07:36:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Dec 06 07:36:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Dec 06 07:36:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:41.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:42.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 766 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.7 MiB/s wr, 199 op/s
Dec 06 07:36:42 compute-0 ceph-mon[74339]: pgmap v2404: 305 pgs: 305 active+clean; 766 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.7 MiB/s wr, 202 op/s
Dec 06 07:36:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:36:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:36:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:36:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:36:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:36:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:36:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:43.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:44.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:44 compute-0 ceph-mon[74339]: osdmap e299: 3 total, 3 up, 3 in
Dec 06 07:36:44 compute-0 ceph-mon[74339]: pgmap v2406: 305 pgs: 305 active+clean; 766 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.7 MiB/s wr, 199 op/s
Dec 06 07:36:44 compute-0 nova_compute[251992]: 2025-12-06 07:36:44.805 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 765 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 5.6 MiB/s wr, 241 op/s
Dec 06 07:36:45 compute-0 nova_compute[251992]: 2025-12-06 07:36:45.201 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Dec 06 07:36:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Dec 06 07:36:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Dec 06 07:36:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:45.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:46 compute-0 ceph-mon[74339]: pgmap v2407: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 765 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 5.6 MiB/s wr, 241 op/s
Dec 06 07:36:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:46.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 763 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.8 MiB/s wr, 341 op/s
Dec 06 07:36:47 compute-0 ceph-mon[74339]: osdmap e300: 3 total, 3 up, 3 in
Dec 06 07:36:47 compute-0 ceph-mon[74339]: pgmap v2409: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 763 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.8 MiB/s wr, 341 op/s
Dec 06 07:36:47 compute-0 podman[334982]: 2025-12-06 07:36:47.430245666 +0000 UTC m=+0.075208952 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:36:47 compute-0 podman[334983]: 2025-12-06 07:36:47.438604472 +0000 UTC m=+0.073909856 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 07:36:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:47.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:48.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 763 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 55 KiB/s wr, 155 op/s
Dec 06 07:36:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:49.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:49 compute-0 nova_compute[251992]: 2025-12-06 07:36:49.849 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:50 compute-0 nova_compute[251992]: 2025-12-06 07:36:50.203 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:50.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 126 KiB/s wr, 180 op/s
Dec 06 07:36:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:51 compute-0 ceph-mon[74339]: pgmap v2410: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 763 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 55 KiB/s wr, 155 op/s
Dec 06 07:36:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:52.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:52 compute-0 nova_compute[251992]: 2025-12-06 07:36:52.577 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:52 compute-0 nova_compute[251992]: 2025-12-06 07:36:52.578 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:52 compute-0 nova_compute[251992]: 2025-12-06 07:36:52.700 251996 DEBUG nova.compute.manager [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:36:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 117 KiB/s wr, 167 op/s
Dec 06 07:36:52 compute-0 nova_compute[251992]: 2025-12-06 07:36:52.893 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:52 compute-0 nova_compute[251992]: 2025-12-06 07:36:52.893 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:52 compute-0 nova_compute[251992]: 2025-12-06 07:36:52.900 251996 DEBUG nova.virt.hardware [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:36:52 compute-0 nova_compute[251992]: 2025-12-06 07:36:52.900 251996 INFO nova.compute.claims [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:36:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Dec 06 07:36:53 compute-0 ceph-mon[74339]: pgmap v2411: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 126 KiB/s wr, 180 op/s
Dec 06 07:36:53 compute-0 nova_compute[251992]: 2025-12-06 07:36:53.236 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Dec 06 07:36:53 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Dec 06 07:36:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:36:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4217498047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:53 compute-0 nova_compute[251992]: 2025-12-06 07:36:53.700 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:53.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:53 compute-0 nova_compute[251992]: 2025-12-06 07:36:53.707 251996 DEBUG nova.compute.provider_tree [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:36:53 compute-0 nova_compute[251992]: 2025-12-06 07:36:53.782 251996 DEBUG nova.scheduler.client.report [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:36:53 compute-0 nova_compute[251992]: 2025-12-06 07:36:53.869 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:53 compute-0 nova_compute[251992]: 2025-12-06 07:36:53.870 251996 DEBUG nova.compute.manager [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.039 251996 DEBUG nova.compute.manager [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.039 251996 DEBUG nova.network.neutron [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.088 251996 INFO nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.098 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.098 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.139 251996 DEBUG nova.compute.manager [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.142 251996 DEBUG nova.compute.manager [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:36:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.260 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.261 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.267 251996 DEBUG nova.virt.hardware [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.267 251996 INFO nova.compute.claims [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:36:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:54.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.362 251996 DEBUG nova.compute.manager [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.364 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.365 251996 INFO nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Creating image(s)
Dec 06 07:36:54 compute-0 ceph-mon[74339]: pgmap v2412: 305 pgs: 305 active+clean; 769 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 117 KiB/s wr, 167 op/s
Dec 06 07:36:54 compute-0 ceph-mon[74339]: osdmap e301: 3 total, 3 up, 3 in
Dec 06 07:36:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4217498047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1062886133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.399 251996 DEBUG nova.storage.rbd_utils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2de097e3-8182-48e5-b69d-88acbfb84e66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.436 251996 DEBUG nova.storage.rbd_utils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2de097e3-8182-48e5-b69d-88acbfb84e66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.470 251996 DEBUG nova.storage.rbd_utils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2de097e3-8182-48e5-b69d-88acbfb84e66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.475 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.547 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.548 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.549 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.549 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.576 251996 DEBUG nova.storage.rbd_utils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2de097e3-8182-48e5-b69d-88acbfb84e66_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.580 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2de097e3-8182-48e5-b69d-88acbfb84e66_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.616 251996 DEBUG nova.policy [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a70f6c3c5e2c402bb6fa0e0507e9b6dc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b10aa03d68eb4d4799d53538521cc364', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.787 251996 DEBUG oslo_concurrency.processutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:36:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 786 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 188 KiB/s rd, 880 KiB/s wr, 79 op/s
Dec 06 07:36:54 compute-0 nova_compute[251992]: 2025-12-06 07:36:54.903 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Dec 06 07:36:55 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Dec 06 07:36:55 compute-0 nova_compute[251992]: 2025-12-06 07:36:55.204 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:36:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392313173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:55 compute-0 nova_compute[251992]: 2025-12-06 07:36:55.226 251996 DEBUG oslo_concurrency.processutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:55 compute-0 nova_compute[251992]: 2025-12-06 07:36:55.231 251996 DEBUG nova.compute.provider_tree [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:36:55 compute-0 nova_compute[251992]: 2025-12-06 07:36:55.446 251996 DEBUG nova.scheduler.client.report [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:36:55 compute-0 nova_compute[251992]: 2025-12-06 07:36:55.483 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:55 compute-0 nova_compute[251992]: 2025-12-06 07:36:55.484 251996 DEBUG nova.compute.manager [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:36:55 compute-0 nova_compute[251992]: 2025-12-06 07:36:55.551 251996 DEBUG nova.compute.manager [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:36:55 compute-0 nova_compute[251992]: 2025-12-06 07:36:55.551 251996 DEBUG nova.network.neutron [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:36:55 compute-0 nova_compute[251992]: 2025-12-06 07:36:55.582 251996 INFO nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:36:55 compute-0 nova_compute[251992]: 2025-12-06 07:36:55.598 251996 DEBUG nova.compute.manager [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:36:55 compute-0 nova_compute[251992]: 2025-12-06 07:36:55.668 251996 INFO nova.virt.block_device [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Booting with volume-backed-image 6efab05d-c7cf-4770-a5c3-c806a2739063 at /dev/vda
Dec 06 07:36:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:36:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:55.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:36:55 compute-0 nova_compute[251992]: 2025-12-06 07:36:55.834 251996 DEBUG nova.policy [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2aa5b15c15f84a8cb24776d5c781eb09', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:36:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:36:56.244 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:36:56 compute-0 nova_compute[251992]: 2025-12-06 07:36:56.244 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:36:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:36:56.245 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:36:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:56.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:56 compute-0 nova_compute[251992]: 2025-12-06 07:36:56.735 251996 DEBUG nova.network.neutron [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Successfully created port: 8690867c-c0a8-4574-b54f-38486691e339 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:36:56 compute-0 nova_compute[251992]: 2025-12-06 07:36:56.772 251996 DEBUG nova.network.neutron [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Successfully created port: 83a9b755-339a-4da1-ade2-590aecb2c951 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:36:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 830 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 711 KiB/s rd, 3.1 MiB/s wr, 157 op/s
Dec 06 07:36:57 compute-0 ceph-mon[74339]: pgmap v2414: 305 pgs: 305 active+clean; 786 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 188 KiB/s rd, 880 KiB/s wr, 79 op/s
Dec 06 07:36:57 compute-0 ceph-mon[74339]: osdmap e302: 3 total, 3 up, 3 in
Dec 06 07:36:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2392313173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.141 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2de097e3-8182-48e5-b69d-88acbfb84e66_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.272 251996 DEBUG nova.storage.rbd_utils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] resizing rbd image 2de097e3-8182-48e5-b69d-88acbfb84e66_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.383 251996 DEBUG nova.objects.instance [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'migration_context' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.425 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.426 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Ensure instance console log exists: /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.427 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.427 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.427 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:36:57 compute-0 sudo[335233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:57 compute-0 sudo[335233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:57 compute-0 sudo[335233]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:57 compute-0 sudo[335258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:36:57 compute-0 sudo[335258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:36:57 compute-0 sudo[335258]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:57.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.862 251996 DEBUG nova.network.neutron [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Successfully updated port: 83a9b755-339a-4da1-ade2-590aecb2c951 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.886 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "refresh_cache-f37cdbe1-70ec-41d7-8e94-24a34612404f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.886 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquired lock "refresh_cache-f37cdbe1-70ec-41d7-8e94-24a34612404f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.886 251996 DEBUG nova.network.neutron [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.985 251996 DEBUG nova.compute.manager [req-6130402b-ac78-490d-a70c-12c8a801a4c3 req-7e718a5d-511b-4586-9ee9-5c70230e2228 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-changed-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.985 251996 DEBUG nova.compute.manager [req-6130402b-ac78-490d-a70c-12c8a801a4c3 req-7e718a5d-511b-4586-9ee9-5c70230e2228 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Refreshing instance network info cache due to event network-changed-83a9b755-339a-4da1-ade2-590aecb2c951. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:36:57 compute-0 nova_compute[251992]: 2025-12-06 07:36:57.986 251996 DEBUG oslo_concurrency.lockutils [req-6130402b-ac78-490d-a70c-12c8a801a4c3 req-7e718a5d-511b-4586-9ee9-5c70230e2228 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f37cdbe1-70ec-41d7-8e94-24a34612404f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:36:58 compute-0 nova_compute[251992]: 2025-12-06 07:36:58.090 251996 DEBUG nova.network.neutron [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:36:58 compute-0 nova_compute[251992]: 2025-12-06 07:36:58.149 251996 DEBUG nova.network.neutron [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Successfully updated port: 8690867c-c0a8-4574-b54f-38486691e339 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:36:58 compute-0 nova_compute[251992]: 2025-12-06 07:36:58.176 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:36:58 compute-0 nova_compute[251992]: 2025-12-06 07:36:58.176 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:36:58 compute-0 nova_compute[251992]: 2025-12-06 07:36:58.176 251996 DEBUG nova.network.neutron [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:36:58 compute-0 ceph-mon[74339]: pgmap v2416: 305 pgs: 305 active+clean; 830 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 711 KiB/s rd, 3.1 MiB/s wr, 157 op/s
Dec 06 07:36:58 compute-0 nova_compute[251992]: 2025-12-06 07:36:58.282 251996 DEBUG nova.compute.manager [req-23fdbe9d-53b1-49fa-b8f3-5c9b32a64142 req-0e2c857c-74f4-42e4-8ac5-53a6cf94175e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-changed-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:36:58 compute-0 nova_compute[251992]: 2025-12-06 07:36:58.282 251996 DEBUG nova.compute.manager [req-23fdbe9d-53b1-49fa-b8f3-5c9b32a64142 req-0e2c857c-74f4-42e4-8ac5-53a6cf94175e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Refreshing instance network info cache due to event network-changed-8690867c-c0a8-4574-b54f-38486691e339. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:36:58 compute-0 nova_compute[251992]: 2025-12-06 07:36:58.282 251996 DEBUG oslo_concurrency.lockutils [req-23fdbe9d-53b1-49fa-b8f3-5c9b32a64142 req-0e2c857c-74f4-42e4-8ac5-53a6cf94175e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:36:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:36:58.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:58 compute-0 nova_compute[251992]: 2025-12-06 07:36:58.471 251996 DEBUG nova.network.neutron [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:36:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 830 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 3.1 MiB/s wr, 103 op/s
Dec 06 07:36:59 compute-0 nova_compute[251992]: 2025-12-06 07:36:59.102 251996 DEBUG nova.network.neutron [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Updating instance_info_cache with network_info: [{"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:36:59 compute-0 nova_compute[251992]: 2025-12-06 07:36:59.125 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Releasing lock "refresh_cache-f37cdbe1-70ec-41d7-8e94-24a34612404f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:36:59 compute-0 nova_compute[251992]: 2025-12-06 07:36:59.126 251996 DEBUG nova.compute.manager [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance network_info: |[{"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:36:59 compute-0 nova_compute[251992]: 2025-12-06 07:36:59.126 251996 DEBUG oslo_concurrency.lockutils [req-6130402b-ac78-490d-a70c-12c8a801a4c3 req-7e718a5d-511b-4586-9ee9-5c70230e2228 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f37cdbe1-70ec-41d7-8e94-24a34612404f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:36:59 compute-0 nova_compute[251992]: 2025-12-06 07:36:59.126 251996 DEBUG nova.network.neutron [req-6130402b-ac78-490d-a70c-12c8a801a4c3 req-7e718a5d-511b-4586-9ee9-5c70230e2228 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Refreshing network info cache for port 83a9b755-339a-4da1-ade2-590aecb2c951 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:36:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:36:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Dec 06 07:36:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Dec 06 07:36:59 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Dec 06 07:36:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:36:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:36:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:36:59.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:36:59 compute-0 nova_compute[251992]: 2025-12-06 07:36:59.900 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.040 251996 DEBUG nova.network.neutron [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance_info_cache with network_info: [{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.060 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.061 251996 DEBUG nova.compute.manager [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance network_info: |[{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.061 251996 DEBUG oslo_concurrency.lockutils [req-23fdbe9d-53b1-49fa-b8f3-5c9b32a64142 req-0e2c857c-74f4-42e4-8ac5-53a6cf94175e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.061 251996 DEBUG nova.network.neutron [req-23fdbe9d-53b1-49fa-b8f3-5c9b32a64142 req-0e2c857c-74f4-42e4-8ac5-53a6cf94175e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Refreshing network info cache for port 8690867c-c0a8-4574-b54f-38486691e339 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.064 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Start _get_guest_xml network_info=[{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.068 251996 WARNING nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.072 251996 DEBUG nova.virt.libvirt.host [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.073 251996 DEBUG nova.virt.libvirt.host [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.094 251996 DEBUG nova.virt.libvirt.host [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.094 251996 DEBUG nova.virt.libvirt.host [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.096 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.096 251996 DEBUG nova.virt.hardware [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.096 251996 DEBUG nova.virt.hardware [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.096 251996 DEBUG nova.virt.hardware [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.097 251996 DEBUG nova.virt.hardware [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.097 251996 DEBUG nova.virt.hardware [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.097 251996 DEBUG nova.virt.hardware [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.097 251996 DEBUG nova.virt.hardware [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.097 251996 DEBUG nova.virt.hardware [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.098 251996 DEBUG nova.virt.hardware [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.098 251996 DEBUG nova.virt.hardware [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.098 251996 DEBUG nova.virt.hardware [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.101 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.206 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:00 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:00.247 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:00.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:37:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1614414098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:00 compute-0 ceph-mon[74339]: pgmap v2417: 305 pgs: 305 active+clean; 830 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 3.1 MiB/s wr, 103 op/s
Dec 06 07:37:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/825412902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.542 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.572 251996 DEBUG nova.storage.rbd_utils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2de097e3-8182-48e5-b69d-88acbfb84e66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:37:00 compute-0 nova_compute[251992]: 2025-12-06 07:37:00.576 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 861 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 959 KiB/s rd, 5.7 MiB/s wr, 160 op/s
Dec 06 07:37:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:37:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/923916124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.056 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.058 251996 DEBUG nova.virt.libvirt.vif [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:36:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1439654070',display_name='tempest-ServerActionsTestOtherB-server-1439654070',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1439654070',id=131,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-i11mga9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:36:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=2de097e3-8182-48e5-b69d-88acbfb84e66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.058 251996 DEBUG nova.network.os_vif_util [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.059 251996 DEBUG nova.network.os_vif_util [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.060 251996 DEBUG nova.objects.instance [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.073 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:37:01 compute-0 nova_compute[251992]:   <uuid>2de097e3-8182-48e5-b69d-88acbfb84e66</uuid>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   <name>instance-00000083</name>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerActionsTestOtherB-server-1439654070</nova:name>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:37:00</nova:creationTime>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <nova:user uuid="a70f6c3c5e2c402bb6fa0e0507e9b6dc">tempest-ServerActionsTestOtherB-874907570-project-member</nova:user>
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <nova:project uuid="b10aa03d68eb4d4799d53538521cc364">tempest-ServerActionsTestOtherB-874907570</nova:project>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <nova:port uuid="8690867c-c0a8-4574-b54f-38486691e339">
Dec 06 07:37:01 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <system>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <entry name="serial">2de097e3-8182-48e5-b69d-88acbfb84e66</entry>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <entry name="uuid">2de097e3-8182-48e5-b69d-88acbfb84e66</entry>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     </system>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   <os>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   </os>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   <features>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   </features>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/2de097e3-8182-48e5-b69d-88acbfb84e66_disk">
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       </source>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/2de097e3-8182-48e5-b69d-88acbfb84e66_disk.config">
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       </source>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:37:01 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:4a:1f:3e"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <target dev="tap8690867c-c0"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/console.log" append="off"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <video>
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     </video>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:37:01 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:37:01 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:37:01 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:37:01 compute-0 nova_compute[251992]: </domain>
Dec 06 07:37:01 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.075 251996 DEBUG nova.compute.manager [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Preparing to wait for external event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.075 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.076 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.076 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.078 251996 DEBUG nova.virt.libvirt.vif [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:36:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1439654070',display_name='tempest-ServerActionsTestOtherB-server-1439654070',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1439654070',id=131,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-i11mga9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:36:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=2de097e3-8182-48e5-b69d-88acbfb84e66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.078 251996 DEBUG nova.network.os_vif_util [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.079 251996 DEBUG nova.network.os_vif_util [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.080 251996 DEBUG os_vif [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.081 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.082 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.083 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.089 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.089 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8690867c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.090 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8690867c-c0, col_values=(('external_ids', {'iface-id': '8690867c-c0a8-4574-b54f-38486691e339', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:1f:3e', 'vm-uuid': '2de097e3-8182-48e5-b69d-88acbfb84e66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.091 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:01 compute-0 NetworkManager[48965]: <info>  [1765006621.0929] manager: (tap8690867c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.095 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.098 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.099 251996 INFO os_vif [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0')
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.149 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.150 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.150 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No VIF found with MAC fa:16:3e:4a:1f:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.150 251996 INFO nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Using config drive
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.175 251996 DEBUG nova.storage.rbd_utils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2de097e3-8182-48e5-b69d-88acbfb84e66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.180 251996 DEBUG nova.network.neutron [req-6130402b-ac78-490d-a70c-12c8a801a4c3 req-7e718a5d-511b-4586-9ee9-5c70230e2228 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Updated VIF entry in instance network info cache for port 83a9b755-339a-4da1-ade2-590aecb2c951. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.180 251996 DEBUG nova.network.neutron [req-6130402b-ac78-490d-a70c-12c8a801a4c3 req-7e718a5d-511b-4586-9ee9-5c70230e2228 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Updating instance_info_cache with network_info: [{"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.202 251996 DEBUG oslo_concurrency.lockutils [req-6130402b-ac78-490d-a70c-12c8a801a4c3 req-7e718a5d-511b-4586-9ee9-5c70230e2228 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f37cdbe1-70ec-41d7-8e94-24a34612404f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.539 251996 INFO nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Creating config drive at /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/disk.config
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.548 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprl5i2npu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.684 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprl5i2npu" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:01 compute-0 ceph-mon[74339]: osdmap e303: 3 total, 3 up, 3 in
Dec 06 07:37:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1614414098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:01 compute-0 ceph-mon[74339]: pgmap v2419: 305 pgs: 305 active+clean; 861 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 959 KiB/s rd, 5.7 MiB/s wr, 160 op/s
Dec 06 07:37:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3009061304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/923916124' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:01.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.718 251996 DEBUG nova.storage.rbd_utils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image 2de097e3-8182-48e5-b69d-88acbfb84e66_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:37:01 compute-0 nova_compute[251992]: 2025-12-06 07:37:01.722 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/disk.config 2de097e3-8182-48e5-b69d-88acbfb84e66_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:02.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:02 compute-0 nova_compute[251992]: 2025-12-06 07:37:02.622 251996 DEBUG oslo_concurrency.processutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/disk.config 2de097e3-8182-48e5-b69d-88acbfb84e66_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.900s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:02 compute-0 nova_compute[251992]: 2025-12-06 07:37:02.623 251996 INFO nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Deleting local config drive /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/disk.config because it was imported into RBD.
Dec 06 07:37:02 compute-0 kernel: tap8690867c-c0: entered promiscuous mode
Dec 06 07:37:02 compute-0 NetworkManager[48965]: <info>  [1765006622.6723] manager: (tap8690867c-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/229)
Dec 06 07:37:02 compute-0 ovn_controller[147168]: 2025-12-06T07:37:02Z|00465|binding|INFO|Claiming lport 8690867c-c0a8-4574-b54f-38486691e339 for this chassis.
Dec 06 07:37:02 compute-0 ovn_controller[147168]: 2025-12-06T07:37:02Z|00466|binding|INFO|8690867c-c0a8-4574-b54f-38486691e339: Claiming fa:16:3e:4a:1f:3e 10.100.0.11
Dec 06 07:37:02 compute-0 nova_compute[251992]: 2025-12-06 07:37:02.673 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:02 compute-0 ovn_controller[147168]: 2025-12-06T07:37:02Z|00467|binding|INFO|Setting lport 8690867c-c0a8-4574-b54f-38486691e339 ovn-installed in OVS
Dec 06 07:37:02 compute-0 nova_compute[251992]: 2025-12-06 07:37:02.691 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:02 compute-0 nova_compute[251992]: 2025-12-06 07:37:02.693 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:02 compute-0 systemd-machined[212986]: New machine qemu-59-instance-00000083.
Dec 06 07:37:02 compute-0 systemd-udevd[335420]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:37:02 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000083.
Dec 06 07:37:02 compute-0 NetworkManager[48965]: <info>  [1765006622.7266] device (tap8690867c-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:37:02 compute-0 NetworkManager[48965]: <info>  [1765006622.7275] device (tap8690867c-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:37:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 861 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 742 KiB/s rd, 4.5 MiB/s wr, 114 op/s
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.136 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006623.1364274, 2de097e3-8182-48e5-b69d-88acbfb84e66 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.137 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] VM Started (Lifecycle Event)
Dec 06 07:37:03 compute-0 ovn_controller[147168]: 2025-12-06T07:37:03Z|00468|binding|INFO|Setting lport 8690867c-c0a8-4574-b54f-38486691e339 up in Southbound
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.265 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:1f:3e 10.100.0.11'], port_security=['fa:16:3e:4a:1f:3e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2de097e3-8182-48e5-b69d-88acbfb84e66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=8690867c-c0a8-4574-b54f-38486691e339) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.266 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 8690867c-c0a8-4574-b54f-38486691e339 in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 bound to our chassis
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.268 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.284 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8a74119d-7983-445d-9b77-b4407b83fe9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.299 251996 DEBUG nova.network.neutron [req-23fdbe9d-53b1-49fa-b8f3-5c9b32a64142 req-0e2c857c-74f4-42e4-8ac5-53a6cf94175e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updated VIF entry in instance network info cache for port 8690867c-c0a8-4574-b54f-38486691e339. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.300 251996 DEBUG nova.network.neutron [req-23fdbe9d-53b1-49fa-b8f3-5c9b32a64142 req-0e2c857c-74f4-42e4-8ac5-53a6cf94175e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance_info_cache with network_info: [{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.316 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[fdea87d1-a3c0-4450-8f64-300642136b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.320 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9a3cb6-ea69-47b3-8bd5-4d970906f2e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.354 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f33efe-8ecf-4b7d-a5b1-75eca24c0d15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.378 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d7515fc8-7ab6-4d68-84f3-8a6a3ead8fcf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678865, 'reachable_time': 30294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335477, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.396 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb8cd7e-4f9a-46d6-8fae-0dab3f6911d4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3beede49-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678878, 'tstamp': 678878}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335478, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3beede49-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678881, 'tstamp': 678881}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335478, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.398 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.445 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.445 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.446 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3beede49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.446 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.447 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3beede49-10, col_values=(('external_ids', {'iface-id': '058fee39-af19-4b00-b556-fb88bc823747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.447 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.508 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.510 251996 DEBUG oslo_concurrency.lockutils [req-23fdbe9d-53b1-49fa-b8f3-5c9b32a64142 req-0e2c857c-74f4-42e4-8ac5-53a6cf94175e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.513 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006623.136631, 2de097e3-8182-48e5-b69d-88acbfb84e66 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.514 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] VM Paused (Lifecycle Event)
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.586 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.588 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:37:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:03.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:03 compute-0 nova_compute[251992]: 2025-12-06 07:37:03.800 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.844 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.845 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:03.845 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:04 compute-0 ceph-mon[74339]: pgmap v2420: 305 pgs: 305 active+clean; 861 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 742 KiB/s rd, 4.5 MiB/s wr, 114 op/s
Dec 06 07:37:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:04.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 861 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 65 op/s
Dec 06 07:37:04 compute-0 nova_compute[251992]: 2025-12-06 07:37:04.902 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:05.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:06 compute-0 ceph-mon[74339]: pgmap v2421: 305 pgs: 305 active+clean; 861 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 65 op/s
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.093 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.133 251996 DEBUG nova.compute.manager [req-3297072b-2f43-4e00-a14d-9bfd2b867104 req-7f7bb52f-97b8-4997-9ecb-8004142b4697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.133 251996 DEBUG oslo_concurrency.lockutils [req-3297072b-2f43-4e00-a14d-9bfd2b867104 req-7f7bb52f-97b8-4997-9ecb-8004142b4697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.134 251996 DEBUG oslo_concurrency.lockutils [req-3297072b-2f43-4e00-a14d-9bfd2b867104 req-7f7bb52f-97b8-4997-9ecb-8004142b4697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.134 251996 DEBUG oslo_concurrency.lockutils [req-3297072b-2f43-4e00-a14d-9bfd2b867104 req-7f7bb52f-97b8-4997-9ecb-8004142b4697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.134 251996 DEBUG nova.compute.manager [req-3297072b-2f43-4e00-a14d-9bfd2b867104 req-7f7bb52f-97b8-4997-9ecb-8004142b4697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Processing event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.135 251996 DEBUG nova.compute.manager [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.139 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.140 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006626.1392775, 2de097e3-8182-48e5-b69d-88acbfb84e66 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.140 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] VM Resumed (Lifecycle Event)
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.144 251996 INFO nova.virt.libvirt.driver [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance spawned successfully.
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.144 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:37:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:06.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.453 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.454 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.454 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.455 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.455 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.456 251996 DEBUG nova.virt.libvirt.driver [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.460 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.464 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.651 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.742 251996 INFO nova.compute.manager [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Took 12.38 seconds to spawn the instance on the hypervisor.
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.743 251996 DEBUG nova.compute.manager [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:37:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 59 op/s
Dec 06 07:37:06 compute-0 nova_compute[251992]: 2025-12-06 07:37:06.941 251996 INFO nova.compute.manager [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Took 14.10 seconds to build instance.
Dec 06 07:37:07 compute-0 nova_compute[251992]: 2025-12-06 07:37:07.161 251996 DEBUG oslo_concurrency.lockutils [None req-142afa5f-fa27-4575-a4c9-6c27d0d6fe59 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:07.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:08 compute-0 nova_compute[251992]: 2025-12-06 07:37:08.269 251996 DEBUG nova.compute.manager [req-a1f97c63-bf76-414a-a33f-77d87764aa40 req-0d1cf636-360d-4902-b289-7f73f30a44b3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:37:08 compute-0 nova_compute[251992]: 2025-12-06 07:37:08.270 251996 DEBUG oslo_concurrency.lockutils [req-a1f97c63-bf76-414a-a33f-77d87764aa40 req-0d1cf636-360d-4902-b289-7f73f30a44b3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:08 compute-0 nova_compute[251992]: 2025-12-06 07:37:08.270 251996 DEBUG oslo_concurrency.lockutils [req-a1f97c63-bf76-414a-a33f-77d87764aa40 req-0d1cf636-360d-4902-b289-7f73f30a44b3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:08 compute-0 nova_compute[251992]: 2025-12-06 07:37:08.270 251996 DEBUG oslo_concurrency.lockutils [req-a1f97c63-bf76-414a-a33f-77d87764aa40 req-0d1cf636-360d-4902-b289-7f73f30a44b3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:08 compute-0 nova_compute[251992]: 2025-12-06 07:37:08.270 251996 DEBUG nova.compute.manager [req-a1f97c63-bf76-414a-a33f-77d87764aa40 req-0d1cf636-360d-4902-b289-7f73f30a44b3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:37:08 compute-0 nova_compute[251992]: 2025-12-06 07:37:08.270 251996 WARNING nova.compute.manager [req-a1f97c63-bf76-414a-a33f-77d87764aa40 req-0d1cf636-360d-4902-b289-7f73f30a44b3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state active and task_state None.
Dec 06 07:37:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:08.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 59 op/s
Dec 06 07:37:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:09 compute-0 ceph-mon[74339]: pgmap v2422: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 59 op/s
Dec 06 07:37:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:09.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:09 compute-0 nova_compute[251992]: 2025-12-06 07:37:09.904 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:10.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/225869628' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:37:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/225869628' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:37:10 compute-0 ceph-mon[74339]: pgmap v2423: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 59 op/s
Dec 06 07:37:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1594552602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:10 compute-0 nova_compute[251992]: 2025-12-06 07:37:10.521 251996 DEBUG nova.compute.manager [req-b96d5a63-6edd-4f55-8d47-47b6e03f4b69 req-e27dfb8b-bb43-4ce1-9e69-cdc049d11813 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-changed-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:37:10 compute-0 nova_compute[251992]: 2025-12-06 07:37:10.521 251996 DEBUG nova.compute.manager [req-b96d5a63-6edd-4f55-8d47-47b6e03f4b69 req-e27dfb8b-bb43-4ce1-9e69-cdc049d11813 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Refreshing instance network info cache due to event network-changed-8690867c-c0a8-4574-b54f-38486691e339. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:37:10 compute-0 nova_compute[251992]: 2025-12-06 07:37:10.522 251996 DEBUG oslo_concurrency.lockutils [req-b96d5a63-6edd-4f55-8d47-47b6e03f4b69 req-e27dfb8b-bb43-4ce1-9e69-cdc049d11813 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:37:10 compute-0 nova_compute[251992]: 2025-12-06 07:37:10.522 251996 DEBUG oslo_concurrency.lockutils [req-b96d5a63-6edd-4f55-8d47-47b6e03f4b69 req-e27dfb8b-bb43-4ce1-9e69-cdc049d11813 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:37:10 compute-0 nova_compute[251992]: 2025-12-06 07:37:10.522 251996 DEBUG nova.network.neutron [req-b96d5a63-6edd-4f55-8d47-47b6e03f4b69 req-e27dfb8b-bb43-4ce1-9e69-cdc049d11813 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Refreshing network info cache for port 8690867c-c0a8-4574-b54f-38486691e339 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:37:10 compute-0 sshd-session[335482]: Connection closed by 205.210.31.205 port 51078
Dec 06 07:37:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Dec 06 07:37:11 compute-0 nova_compute[251992]: 2025-12-06 07:37:11.095 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:11 compute-0 podman[335484]: 2025-12-06 07:37:11.437065596 +0000 UTC m=+0.095904960 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:37:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:11.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/407077210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:11 compute-0 ceph-mon[74339]: pgmap v2424: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Dec 06 07:37:12 compute-0 nova_compute[251992]: 2025-12-06 07:37:12.283 251996 DEBUG nova.network.neutron [req-b96d5a63-6edd-4f55-8d47-47b6e03f4b69 req-e27dfb8b-bb43-4ce1-9e69-cdc049d11813 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updated VIF entry in instance network info cache for port 8690867c-c0a8-4574-b54f-38486691e339. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:37:12 compute-0 nova_compute[251992]: 2025-12-06 07:37:12.284 251996 DEBUG nova.network.neutron [req-b96d5a63-6edd-4f55-8d47-47b6e03f4b69 req-e27dfb8b-bb43-4ce1-9e69-cdc049d11813 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance_info_cache with network_info: [{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:37:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:12.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:12 compute-0 nova_compute[251992]: 2025-12-06 07:37:12.398 251996 DEBUG oslo_concurrency.lockutils [req-b96d5a63-6edd-4f55-8d47-47b6e03f4b69 req-e27dfb8b-bb43-4ce1-9e69-cdc049d11813 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:37:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 38 KiB/s wr, 83 op/s
Dec 06 07:37:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3571071003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:37:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:37:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:37:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:37:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:37:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:37:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:13.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:13 compute-0 ceph-mon[74339]: pgmap v2425: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 38 KiB/s wr, 83 op/s
Dec 06 07:37:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/679455447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:14.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:14 compute-0 nova_compute[251992]: 2025-12-06 07:37:14.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:14 compute-0 nova_compute[251992]: 2025-12-06 07:37:14.684 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:14 compute-0 nova_compute[251992]: 2025-12-06 07:37:14.708 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:14 compute-0 nova_compute[251992]: 2025-12-06 07:37:14.708 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:14 compute-0 nova_compute[251992]: 2025-12-06 07:37:14.709 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:14 compute-0 nova_compute[251992]: 2025-12-06 07:37:14.709 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:37:14 compute-0 nova_compute[251992]: 2025-12-06 07:37:14.710 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 39 KiB/s wr, 88 op/s
Dec 06 07:37:14 compute-0 nova_compute[251992]: 2025-12-06 07:37:14.907 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:37:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1918043786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:15 compute-0 ceph-mon[74339]: pgmap v2426: 305 pgs: 305 active+clean; 863 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 39 KiB/s wr, 88 op/s
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.182 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.260 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.262 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.265 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.265 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.270 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.271 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.274 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.274 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.461 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.462 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3779MB free_disk=20.64812469482422GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.463 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.463 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.544 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 70928eda-043f-429b-aa4e-af1f3189a7c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.545 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c2e6b8fd-375c-4658-b338-f2d334041ba3 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.545 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c1ef1073-7c66-428c-a02b-e4daa3551d22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.546 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 2de097e3-8182-48e5-b69d-88acbfb84e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.546 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance f37cdbe1-70ec-41d7-8e94-24a34612404f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.546 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.546 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:37:15 compute-0 nova_compute[251992]: 2025-12-06 07:37:15.680 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:15.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:16 compute-0 nova_compute[251992]: 2025-12-06 07:37:16.098 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:37:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2798193347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:16 compute-0 nova_compute[251992]: 2025-12-06 07:37:16.149 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:16 compute-0 nova_compute[251992]: 2025-12-06 07:37:16.156 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:37:16 compute-0 nova_compute[251992]: 2025-12-06 07:37:16.188 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:37:16 compute-0 nova_compute[251992]: 2025-12-06 07:37:16.210 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:37:16 compute-0 nova_compute[251992]: 2025-12-06 07:37:16.210 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1918043786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2798193347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:16.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 864 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 67 KiB/s wr, 119 op/s
Dec 06 07:37:17 compute-0 ceph-mon[74339]: pgmap v2427: 305 pgs: 305 active+clean; 864 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 67 KiB/s wr, 119 op/s
Dec 06 07:37:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:17.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:17 compute-0 sudo[335559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:17 compute-0 sudo[335559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:17 compute-0 sudo[335559]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:17 compute-0 sudo[335596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:17 compute-0 sudo[335596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:17 compute-0 sudo[335596]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:17 compute-0 podman[335584]: 2025-12-06 07:37:17.810314512 +0000 UTC m=+0.062875768 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:37:17 compute-0 podman[335583]: 2025-12-06 07:37:17.830947418 +0000 UTC m=+0.086665051 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:37:18 compute-0 nova_compute[251992]: 2025-12-06 07:37:18.184 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:18 compute-0 nova_compute[251992]: 2025-12-06 07:37:18.184 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:18.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:37:18
Dec 06 07:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'volumes', '.mgr']
Dec 06 07:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:37:18 compute-0 nova_compute[251992]: 2025-12-06 07:37:18.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:18 compute-0 nova_compute[251992]: 2025-12-06 07:37:18.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:18 compute-0 nova_compute[251992]: 2025-12-06 07:37:18.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 864 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 41 KiB/s wr, 112 op/s
Dec 06 07:37:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:19 compute-0 ceph-mon[74339]: pgmap v2428: 305 pgs: 305 active+clean; 864 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 41 KiB/s wr, 112 op/s
Dec 06 07:37:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:19.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:19 compute-0 nova_compute[251992]: 2025-12-06 07:37:19.911 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:20.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:20 compute-0 nova_compute[251992]: 2025-12-06 07:37:20.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 926 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.3 MiB/s wr, 203 op/s
Dec 06 07:37:21 compute-0 sudo[335650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:21 compute-0 sudo[335650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:21 compute-0 sudo[335650]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:21 compute-0 nova_compute[251992]: 2025-12-06 07:37:21.102 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:21 compute-0 sudo[335675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:37:21 compute-0 sudo[335675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:21 compute-0 sudo[335675]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:21 compute-0 sudo[335700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:21 compute-0 sudo[335700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:21 compute-0 sudo[335700]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:21 compute-0 sudo[335725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:37:21 compute-0 sudo[335725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3578164270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2304411446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:21 compute-0 ceph-mon[74339]: pgmap v2429: 305 pgs: 305 active+clean; 926 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.3 MiB/s wr, 203 op/s
Dec 06 07:37:21 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Dec 06 07:37:21 compute-0 sudo[335725]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:21.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:37:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:37:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:37:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:37:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:37:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:37:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev cd407a78-d137-4813-a703-4795d816e2eb does not exist
Dec 06 07:37:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6e6eacbb-c962-465c-b168-92c7d87b0c2f does not exist
Dec 06 07:37:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 988769a1-d4b6-44ca-b523-59bae2c7fd79 does not exist
Dec 06 07:37:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:37:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:37:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:37:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:37:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:37:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:37:22 compute-0 sudo[335783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:22 compute-0 sudo[335783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:22 compute-0 sudo[335783]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:22 compute-0 sudo[335808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:37:22 compute-0 sudo[335808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:22 compute-0 sudo[335808]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:22 compute-0 sudo[335833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:22 compute-0 sudo[335833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:22 compute-0 sudo[335833]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:22 compute-0 sudo[335858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:37:22 compute-0 sudo[335858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:22.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:22 compute-0 podman[335927]: 2025-12-06 07:37:22.540947542 +0000 UTC m=+0.038768867 container create 7c72d009ae32e63417ecc8dead343ab9eb34966c4ac21dd56c4dad233cbe82dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:37:22 compute-0 ovn_controller[147168]: 2025-12-06T07:37:22Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:1f:3e 10.100.0.11
Dec 06 07:37:22 compute-0 ovn_controller[147168]: 2025-12-06T07:37:22Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:1f:3e 10.100.0.11
Dec 06 07:37:22 compute-0 systemd[1]: Started libpod-conmon-7c72d009ae32e63417ecc8dead343ab9eb34966c4ac21dd56c4dad233cbe82dd.scope.
Dec 06 07:37:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:37:22 compute-0 podman[335927]: 2025-12-06 07:37:22.524303373 +0000 UTC m=+0.022124708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:37:22 compute-0 podman[335927]: 2025-12-06 07:37:22.628991489 +0000 UTC m=+0.126812814 container init 7c72d009ae32e63417ecc8dead343ab9eb34966c4ac21dd56c4dad233cbe82dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 07:37:22 compute-0 podman[335927]: 2025-12-06 07:37:22.635050502 +0000 UTC m=+0.132871817 container start 7c72d009ae32e63417ecc8dead343ab9eb34966c4ac21dd56c4dad233cbe82dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:37:22 compute-0 podman[335927]: 2025-12-06 07:37:22.637994492 +0000 UTC m=+0.135815807 container attach 7c72d009ae32e63417ecc8dead343ab9eb34966c4ac21dd56c4dad233cbe82dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_payne, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 07:37:22 compute-0 systemd[1]: libpod-7c72d009ae32e63417ecc8dead343ab9eb34966c4ac21dd56c4dad233cbe82dd.scope: Deactivated successfully.
Dec 06 07:37:22 compute-0 priceless_payne[335944]: 167 167
Dec 06 07:37:22 compute-0 conmon[335944]: conmon 7c72d009ae32e63417ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c72d009ae32e63417ecc8dead343ab9eb34966c4ac21dd56c4dad233cbe82dd.scope/container/memory.events
Dec 06 07:37:22 compute-0 podman[335927]: 2025-12-06 07:37:22.643544701 +0000 UTC m=+0.141366006 container died 7c72d009ae32e63417ecc8dead343ab9eb34966c4ac21dd56c4dad233cbe82dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:37:22 compute-0 nova_compute[251992]: 2025-12-06 07:37:22.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:22 compute-0 nova_compute[251992]: 2025-12-06 07:37:22.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:37:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a31a68903429ca5fcd847c2e53234b53c9ebacbc829df42d8ae9994e25966af1-merged.mount: Deactivated successfully.
Dec 06 07:37:22 compute-0 podman[335927]: 2025-12-06 07:37:22.680213931 +0000 UTC m=+0.178035246 container remove 7c72d009ae32e63417ecc8dead343ab9eb34966c4ac21dd56c4dad233cbe82dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_payne, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:37:22 compute-0 systemd[1]: libpod-conmon-7c72d009ae32e63417ecc8dead343ab9eb34966c4ac21dd56c4dad233cbe82dd.scope: Deactivated successfully.
Dec 06 07:37:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 926 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 137 op/s
Dec 06 07:37:22 compute-0 podman[335967]: 2025-12-06 07:37:22.849206034 +0000 UTC m=+0.036187328 container create bc3aabd2bec253617e5244e2cfcb5809ecd55266d02946bf80c9e9dc8abf74fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 07:37:22 compute-0 systemd[1]: Started libpod-conmon-bc3aabd2bec253617e5244e2cfcb5809ecd55266d02946bf80c9e9dc8abf74fb.scope.
Dec 06 07:37:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8060a9ec50cd6ac10cd90c3f5e6bb9942d8ad3e7114e4a4ce3cacc887868a1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8060a9ec50cd6ac10cd90c3f5e6bb9942d8ad3e7114e4a4ce3cacc887868a1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8060a9ec50cd6ac10cd90c3f5e6bb9942d8ad3e7114e4a4ce3cacc887868a1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8060a9ec50cd6ac10cd90c3f5e6bb9942d8ad3e7114e4a4ce3cacc887868a1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8060a9ec50cd6ac10cd90c3f5e6bb9942d8ad3e7114e4a4ce3cacc887868a1a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:22 compute-0 podman[335967]: 2025-12-06 07:37:22.928437023 +0000 UTC m=+0.115418347 container init bc3aabd2bec253617e5244e2cfcb5809ecd55266d02946bf80c9e9dc8abf74fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meitner, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:37:22 compute-0 podman[335967]: 2025-12-06 07:37:22.83426205 +0000 UTC m=+0.021243364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:37:22 compute-0 podman[335967]: 2025-12-06 07:37:22.936671805 +0000 UTC m=+0.123653099 container start bc3aabd2bec253617e5244e2cfcb5809ecd55266d02946bf80c9e9dc8abf74fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meitner, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:37:22 compute-0 podman[335967]: 2025-12-06 07:37:22.939499961 +0000 UTC m=+0.126481285 container attach bc3aabd2bec253617e5244e2cfcb5809ecd55266d02946bf80c9e9dc8abf74fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meitner, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 07:37:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:37:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:37:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:37:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:37:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:37:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.264 251996 DEBUG os_brick.utils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.266 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.279 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.279 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5d2b70-c2f1-4cce-999c-aa50314242e5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.280 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.288 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.288 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[ad866529-84c5-4356-be01-3a853e4e9c6f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.290 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.298 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.298 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f9c3e6-8ad3-44c3-82fd-e6099f23c00d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.299 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[079dd543-e4f2-4396-87fb-0366119f134f]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.300 251996 DEBUG oslo_concurrency.processutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.332 251996 DEBUG oslo_concurrency.processutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.335 251996 DEBUG os_brick.initiator.connectors.lightos [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.335 251996 DEBUG os_brick.initiator.connectors.lightos [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.336 251996 DEBUG os_brick.initiator.connectors.lightos [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.336 251996 DEBUG os_brick.utils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.337 251996 DEBUG nova.virt.block_device [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Updating existing volume attachment record: 915aa72a-4057-4688-bfcf-15f26826837c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.659 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:37:23 compute-0 quirky_meitner[335983]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:37:23 compute-0 quirky_meitner[335983]: --> relative data size: 1.0
Dec 06 07:37:23 compute-0 quirky_meitner[335983]: --> All data devices are unavailable
Dec 06 07:37:23 compute-0 systemd[1]: libpod-bc3aabd2bec253617e5244e2cfcb5809ecd55266d02946bf80c9e9dc8abf74fb.scope: Deactivated successfully.
Dec 06 07:37:23 compute-0 podman[335967]: 2025-12-06 07:37:23.733503896 +0000 UTC m=+0.920485190 container died bc3aabd2bec253617e5244e2cfcb5809ecd55266d02946bf80c9e9dc8abf74fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:37:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:23.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8060a9ec50cd6ac10cd90c3f5e6bb9942d8ad3e7114e4a4ce3cacc887868a1a-merged.mount: Deactivated successfully.
Dec 06 07:37:23 compute-0 podman[335967]: 2025-12-06 07:37:23.787059942 +0000 UTC m=+0.974041236 container remove bc3aabd2bec253617e5244e2cfcb5809ecd55266d02946bf80c9e9dc8abf74fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:37:23 compute-0 systemd[1]: libpod-conmon-bc3aabd2bec253617e5244e2cfcb5809ecd55266d02946bf80c9e9dc8abf74fb.scope: Deactivated successfully.
Dec 06 07:37:23 compute-0 sudo[335858]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:23 compute-0 sudo[336016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:23 compute-0 sudo[336016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:23 compute-0 sudo[336016]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.925 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.926 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:37:23 compute-0 nova_compute[251992]: 2025-12-06 07:37:23.926 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:37:23 compute-0 sudo[336041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:37:23 compute-0 sudo[336041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:23 compute-0 sudo[336041]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:24 compute-0 sudo[336066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:24 compute-0 sudo[336066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:24 compute-0 sudo[336066]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:24 compute-0 sudo[336091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:37:24 compute-0 sudo[336091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:24 compute-0 ceph-mon[74339]: pgmap v2430: 305 pgs: 305 active+clean; 926 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 137 op/s
Dec 06 07:37:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/873242481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:24.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:24 compute-0 podman[336157]: 2025-12-06 07:37:24.377354618 +0000 UTC m=+0.038958393 container create f950abc13ed1a8398adbe24e1a5ccdf71e0f19c14ab2c7ba0fa7b8f3a9a69085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 07:37:24 compute-0 systemd[1]: Started libpod-conmon-f950abc13ed1a8398adbe24e1a5ccdf71e0f19c14ab2c7ba0fa7b8f3a9a69085.scope.
Dec 06 07:37:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.443 251996 DEBUG nova.compute.manager [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.445 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.446 251996 INFO nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Creating image(s)
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.446 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.446 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Ensure instance console log exists: /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.447 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.447 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.447 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.450 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Start _get_guest_xml network_info=[{"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4cae51dd-790a-4050-b673-0850eb817a06', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4cae51dd-790a-4050-b673-0850eb817a06', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'f37cdbe1-70ec-41d7-8e94-24a34612404f', 'attached_at': '', 'detached_at': '', 'volume_id': '4cae51dd-790a-4050-b673-0850eb817a06', 'serial': '4cae51dd-790a-4050-b673-0850eb817a06'}, 'attachment_id': '915aa72a-4057-4688-bfcf-15f26826837c', 'guest_format': None, 'delete_on_termination': False, 'disk_bus': 'virtio', 'boot_index': 0, 'device_type': 'disk', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:37:24 compute-0 podman[336157]: 2025-12-06 07:37:24.361542562 +0000 UTC m=+0.023146357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:37:24 compute-0 podman[336157]: 2025-12-06 07:37:24.4581577 +0000 UTC m=+0.119761495 container init f950abc13ed1a8398adbe24e1a5ccdf71e0f19c14ab2c7ba0fa7b8f3a9a69085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.459 251996 WARNING nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.465 251996 DEBUG nova.virt.libvirt.host [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.466 251996 DEBUG nova.virt.libvirt.host [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:37:24 compute-0 podman[336157]: 2025-12-06 07:37:24.47002704 +0000 UTC m=+0.131630815 container start f950abc13ed1a8398adbe24e1a5ccdf71e0f19c14ab2c7ba0fa7b8f3a9a69085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_elion, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.470 251996 DEBUG nova.virt.libvirt.host [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.471 251996 DEBUG nova.virt.libvirt.host [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.472 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:37:24 compute-0 podman[336157]: 2025-12-06 07:37:24.473068302 +0000 UTC m=+0.134672097 container attach f950abc13ed1a8398adbe24e1a5ccdf71e0f19c14ab2c7ba0fa7b8f3a9a69085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.473 251996 DEBUG nova.virt.hardware [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.473 251996 DEBUG nova.virt.hardware [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.473 251996 DEBUG nova.virt.hardware [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.473 251996 DEBUG nova.virt.hardware [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.474 251996 DEBUG nova.virt.hardware [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.474 251996 DEBUG nova.virt.hardware [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.474 251996 DEBUG nova.virt.hardware [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.474 251996 DEBUG nova.virt.hardware [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.474 251996 DEBUG nova.virt.hardware [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.474 251996 DEBUG nova.virt.hardware [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.475 251996 DEBUG nova.virt.hardware [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:37:24 compute-0 wonderful_elion[336174]: 167 167
Dec 06 07:37:24 compute-0 systemd[1]: libpod-f950abc13ed1a8398adbe24e1a5ccdf71e0f19c14ab2c7ba0fa7b8f3a9a69085.scope: Deactivated successfully.
Dec 06 07:37:24 compute-0 podman[336157]: 2025-12-06 07:37:24.478397946 +0000 UTC m=+0.140001721 container died f950abc13ed1a8398adbe24e1a5ccdf71e0f19c14ab2c7ba0fa7b8f3a9a69085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:37:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa5a93723bfb5ec7d6569ed88552ef74e5ba408efb17e09174f8fb91c0c2aa4b-merged.mount: Deactivated successfully.
Dec 06 07:37:24 compute-0 podman[336157]: 2025-12-06 07:37:24.512605549 +0000 UTC m=+0.174209324 container remove f950abc13ed1a8398adbe24e1a5ccdf71e0f19c14ab2c7ba0fa7b8f3a9a69085 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.514 251996 DEBUG nova.storage.rbd_utils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.519 251996 DEBUG oslo_concurrency.processutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:24 compute-0 systemd[1]: libpod-conmon-f950abc13ed1a8398adbe24e1a5ccdf71e0f19c14ab2c7ba0fa7b8f3a9a69085.scope: Deactivated successfully.
Dec 06 07:37:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:37:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:37:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:37:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:37:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:37:24 compute-0 podman[336218]: 2025-12-06 07:37:24.69595915 +0000 UTC m=+0.042037296 container create 5837240614c1d94df2314e3ae14a94379710c5bc83c7f79f82d66309ba2d13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:37:24 compute-0 systemd[1]: Started libpod-conmon-5837240614c1d94df2314e3ae14a94379710c5bc83c7f79f82d66309ba2d13f2.scope.
Dec 06 07:37:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de379bbf3ec7574d2c50327b4964202abaddf570fac4426fabe33deeba8fd5f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:24 compute-0 podman[336218]: 2025-12-06 07:37:24.678686934 +0000 UTC m=+0.024765100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de379bbf3ec7574d2c50327b4964202abaddf570fac4426fabe33deeba8fd5f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de379bbf3ec7574d2c50327b4964202abaddf570fac4426fabe33deeba8fd5f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de379bbf3ec7574d2c50327b4964202abaddf570fac4426fabe33deeba8fd5f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:24 compute-0 podman[336218]: 2025-12-06 07:37:24.793040731 +0000 UTC m=+0.139118897 container init 5837240614c1d94df2314e3ae14a94379710c5bc83c7f79f82d66309ba2d13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:37:24 compute-0 podman[336218]: 2025-12-06 07:37:24.799568607 +0000 UTC m=+0.145646763 container start 5837240614c1d94df2314e3ae14a94379710c5bc83c7f79f82d66309ba2d13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:37:24 compute-0 podman[336218]: 2025-12-06 07:37:24.804785958 +0000 UTC m=+0.150864124 container attach 5837240614c1d94df2314e3ae14a94379710c5bc83c7f79f82d66309ba2d13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:37:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 934 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 141 op/s
Dec 06 07:37:24 compute-0 nova_compute[251992]: 2025-12-06 07:37:24.913 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:37:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1351833603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.000 251996 DEBUG oslo_concurrency.processutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.033 251996 DEBUG nova.virt.libvirt.vif [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:36:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-40970156',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-40970156',id=132,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-1min8f00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:36:55Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=f37cdbe1-70ec-41d7-8e94-24a34612404f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.034 251996 DEBUG nova.network.os_vif_util [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.034 251996 DEBUG nova.network.os_vif_util [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:91:ba,bridge_name='br-int',has_traffic_filtering=True,id=83a9b755-339a-4da1-ade2-590aecb2c951,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9b755-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.036 251996 DEBUG nova.objects.instance [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'pci_devices' on Instance uuid f37cdbe1-70ec-41d7-8e94-24a34612404f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.053 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:37:25 compute-0 nova_compute[251992]:   <uuid>f37cdbe1-70ec-41d7-8e94-24a34612404f</uuid>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   <name>instance-00000084</name>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-40970156</nova:name>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:37:24</nova:creationTime>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <nova:user uuid="2aa5b15c15f84a8cb24776d5c781eb09">tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member</nova:user>
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <nova:project uuid="17cdfa63c4424ec7a0eb4bb3d7372c14">tempest-ServerBootFromVolumeStableRescueTest-344238221</nova:project>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <nova:port uuid="83a9b755-339a-4da1-ade2-590aecb2c951">
Dec 06 07:37:25 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <system>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <entry name="serial">f37cdbe1-70ec-41d7-8e94-24a34612404f</entry>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <entry name="uuid">f37cdbe1-70ec-41d7-8e94-24a34612404f</entry>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     </system>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   <os>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   </os>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   <features>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   </features>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.config">
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       </source>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <source protocol="rbd" name="volumes/volume-4cae51dd-790a-4050-b673-0850eb817a06">
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       </source>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:37:25 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <serial>4cae51dd-790a-4050-b673-0850eb817a06</serial>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:cf:91:ba"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <target dev="tap83a9b755-33"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/console.log" append="off"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <video>
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     </video>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:37:25 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:37:25 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:37:25 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:37:25 compute-0 nova_compute[251992]: </domain>
Dec 06 07:37:25 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.053 251996 DEBUG nova.compute.manager [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Preparing to wait for external event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.054 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.054 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.054 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.055 251996 DEBUG nova.virt.libvirt.vif [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:36:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-40970156',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-40970156',id=132,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-1min8f00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:36:55Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=f37cdbe1-70ec-41d7-8e94-24a34612404f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.055 251996 DEBUG nova.network.os_vif_util [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.056 251996 DEBUG nova.network.os_vif_util [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:91:ba,bridge_name='br-int',has_traffic_filtering=True,id=83a9b755-339a-4da1-ade2-590aecb2c951,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9b755-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.056 251996 DEBUG os_vif [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:91:ba,bridge_name='br-int',has_traffic_filtering=True,id=83a9b755-339a-4da1-ade2-590aecb2c951,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9b755-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.056 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.057 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.057 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.061 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.061 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83a9b755-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.062 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap83a9b755-33, col_values=(('external_ids', {'iface-id': '83a9b755-339a-4da1-ade2-590aecb2c951', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:91:ba', 'vm-uuid': 'f37cdbe1-70ec-41d7-8e94-24a34612404f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.063 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:25 compute-0 NetworkManager[48965]: <info>  [1765006645.0645] manager: (tap83a9b755-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.065 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.072 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.073 251996 INFO os_vif [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:91:ba,bridge_name='br-int',has_traffic_filtering=True,id=83a9b755-339a-4da1-ade2-590aecb2c951,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9b755-33')
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.131 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.132 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.132 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No VIF found with MAC fa:16:3e:cf:91:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.133 251996 INFO nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Using config drive
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.159 251996 DEBUG nova.storage.rbd_utils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:37:25 compute-0 ceph-mon[74339]: pgmap v2431: 305 pgs: 305 active+clean; 934 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 141 op/s
Dec 06 07:37:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/671171046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1351833603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]: {
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:     "0": [
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:         {
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             "devices": [
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "/dev/loop3"
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             ],
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             "lv_name": "ceph_lv0",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             "lv_size": "7511998464",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             "name": "ceph_lv0",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             "tags": {
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "ceph.cluster_name": "ceph",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "ceph.crush_device_class": "",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "ceph.encrypted": "0",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "ceph.osd_id": "0",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "ceph.type": "block",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:                 "ceph.vdo": "0"
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             },
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             "type": "block",
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:             "vg_name": "ceph_vg0"
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:         }
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]:     ]
Dec 06 07:37:25 compute-0 sleepy_chatterjee[336251]: }
Dec 06 07:37:25 compute-0 systemd[1]: libpod-5837240614c1d94df2314e3ae14a94379710c5bc83c7f79f82d66309ba2d13f2.scope: Deactivated successfully.
Dec 06 07:37:25 compute-0 podman[336218]: 2025-12-06 07:37:25.631487206 +0000 UTC m=+0.977565362 container died 5837240614c1d94df2314e3ae14a94379710c5bc83c7f79f82d66309ba2d13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:37:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-de379bbf3ec7574d2c50327b4964202abaddf570fac4426fabe33deeba8fd5f0-merged.mount: Deactivated successfully.
Dec 06 07:37:25 compute-0 podman[336218]: 2025-12-06 07:37:25.68202765 +0000 UTC m=+1.028105796 container remove 5837240614c1d94df2314e3ae14a94379710c5bc83c7f79f82d66309ba2d13f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:37:25 compute-0 systemd[1]: libpod-conmon-5837240614c1d94df2314e3ae14a94379710c5bc83c7f79f82d66309ba2d13f2.scope: Deactivated successfully.
Dec 06 07:37:25 compute-0 sudo[336091]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:25.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:25 compute-0 sudo[336295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:25 compute-0 sudo[336295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:25 compute-0 sudo[336295]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:25 compute-0 sudo[336320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:37:25 compute-0 sudo[336320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:25 compute-0 sudo[336320]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:25 compute-0 sudo[336345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:25 compute-0 sudo[336345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:25 compute-0 sudo[336345]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:25 compute-0 sudo[336370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.938 251996 INFO nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Creating config drive at /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/disk.config
Dec 06 07:37:25 compute-0 sudo[336370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:25 compute-0 nova_compute[251992]: 2025-12-06 07:37:25.944 251996 DEBUG oslo_concurrency.processutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa5lajbjn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:26 compute-0 nova_compute[251992]: 2025-12-06 07:37:26.019 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Updating instance_info_cache with network_info: [{"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.017323669853899125 of space, bias 1.0, pg target 5.197100956169738 quantized to 32 (current 32)
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004138712951330728 of space, bias 1.0, pg target 1.2209203206425647 quantized to 32 (current 32)
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.1410099542853156 quantized to 32 (current 32)
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017041224641727154 quantized to 16 (current 16)
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021301530802158943 quantized to 32 (current 32)
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018106301181835102 quantized to 32 (current 32)
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042603061604317886 quantized to 32 (current 32)
Dec 06 07:37:26 compute-0 nova_compute[251992]: 2025-12-06 07:37:26.034 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-c2e6b8fd-375c-4658-b338-f2d334041ba3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:37:26 compute-0 nova_compute[251992]: 2025-12-06 07:37:26.034 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:37:26 compute-0 nova_compute[251992]: 2025-12-06 07:37:26.077 251996 DEBUG oslo_concurrency.processutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa5lajbjn" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:26 compute-0 nova_compute[251992]: 2025-12-06 07:37:26.106 251996 DEBUG nova.storage.rbd_utils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:37:26 compute-0 nova_compute[251992]: 2025-12-06 07:37:26.113 251996 DEBUG oslo_concurrency.processutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/disk.config f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:26 compute-0 podman[336474]: 2025-12-06 07:37:26.261433752 +0000 UTC m=+0.036693891 container create 3ba82df9e5a77a33f2e20d38f346a5f884de0388c0b2e1b87cbcf698f56576bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lehmann, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 07:37:26 compute-0 systemd[1]: Started libpod-conmon-3ba82df9e5a77a33f2e20d38f346a5f884de0388c0b2e1b87cbcf698f56576bd.scope.
Dec 06 07:37:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:37:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:37:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:26.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:37:26 compute-0 podman[336474]: 2025-12-06 07:37:26.338526003 +0000 UTC m=+0.113786162 container init 3ba82df9e5a77a33f2e20d38f346a5f884de0388c0b2e1b87cbcf698f56576bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:37:26 compute-0 podman[336474]: 2025-12-06 07:37:26.246786947 +0000 UTC m=+0.022047106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:37:26 compute-0 podman[336474]: 2025-12-06 07:37:26.346844528 +0000 UTC m=+0.122104667 container start 3ba82df9e5a77a33f2e20d38f346a5f884de0388c0b2e1b87cbcf698f56576bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 07:37:26 compute-0 podman[336474]: 2025-12-06 07:37:26.350315411 +0000 UTC m=+0.125575550 container attach 3ba82df9e5a77a33f2e20d38f346a5f884de0388c0b2e1b87cbcf698f56576bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lehmann, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 07:37:26 compute-0 zen_lehmann[336490]: 167 167
Dec 06 07:37:26 compute-0 systemd[1]: libpod-3ba82df9e5a77a33f2e20d38f346a5f884de0388c0b2e1b87cbcf698f56576bd.scope: Deactivated successfully.
Dec 06 07:37:26 compute-0 podman[336474]: 2025-12-06 07:37:26.352467649 +0000 UTC m=+0.127727788 container died 3ba82df9e5a77a33f2e20d38f346a5f884de0388c0b2e1b87cbcf698f56576bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Dec 06 07:37:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3354ad14bb25009357211f437b2bf89fd7905f56d070472c5ee1b89fa7c49afc-merged.mount: Deactivated successfully.
Dec 06 07:37:26 compute-0 podman[336474]: 2025-12-06 07:37:26.382719846 +0000 UTC m=+0.157979985 container remove 3ba82df9e5a77a33f2e20d38f346a5f884de0388c0b2e1b87cbcf698f56576bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 07:37:26 compute-0 systemd[1]: libpod-conmon-3ba82df9e5a77a33f2e20d38f346a5f884de0388c0b2e1b87cbcf698f56576bd.scope: Deactivated successfully.
Dec 06 07:37:26 compute-0 podman[336515]: 2025-12-06 07:37:26.566380975 +0000 UTC m=+0.041800320 container create 2f4feec55754db31efa5d398ad83c4f07e0c742cfc014a0bb69129481ff35509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 07:37:26 compute-0 systemd[1]: Started libpod-conmon-2f4feec55754db31efa5d398ad83c4f07e0c742cfc014a0bb69129481ff35509.scope.
Dec 06 07:37:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:37:26 compute-0 podman[336515]: 2025-12-06 07:37:26.547049833 +0000 UTC m=+0.022469188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e6a5954c984937692f37fab627e5b2e6a5411d59dbdfad51a54c876d89bf67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e6a5954c984937692f37fab627e5b2e6a5411d59dbdfad51a54c876d89bf67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e6a5954c984937692f37fab627e5b2e6a5411d59dbdfad51a54c876d89bf67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13e6a5954c984937692f37fab627e5b2e6a5411d59dbdfad51a54c876d89bf67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:37:26 compute-0 podman[336515]: 2025-12-06 07:37:26.658770089 +0000 UTC m=+0.134189454 container init 2f4feec55754db31efa5d398ad83c4f07e0c742cfc014a0bb69129481ff35509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackburn, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:37:26 compute-0 podman[336515]: 2025-12-06 07:37:26.664985767 +0000 UTC m=+0.140405102 container start 2f4feec55754db31efa5d398ad83c4f07e0c742cfc014a0bb69129481ff35509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackburn, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:37:26 compute-0 podman[336515]: 2025-12-06 07:37:26.669055496 +0000 UTC m=+0.144474831 container attach 2f4feec55754db31efa5d398ad83c4f07e0c742cfc014a0bb69129481ff35509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackburn, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 07:37:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 943 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Dec 06 07:37:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Dec 06 07:37:27 compute-0 jovial_blackburn[336532]: {
Dec 06 07:37:27 compute-0 jovial_blackburn[336532]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:37:27 compute-0 jovial_blackburn[336532]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:37:27 compute-0 jovial_blackburn[336532]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:37:27 compute-0 jovial_blackburn[336532]:         "osd_id": 0,
Dec 06 07:37:27 compute-0 jovial_blackburn[336532]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:37:27 compute-0 jovial_blackburn[336532]:         "type": "bluestore"
Dec 06 07:37:27 compute-0 jovial_blackburn[336532]:     }
Dec 06 07:37:27 compute-0 jovial_blackburn[336532]: }
Dec 06 07:37:27 compute-0 systemd[1]: libpod-2f4feec55754db31efa5d398ad83c4f07e0c742cfc014a0bb69129481ff35509.scope: Deactivated successfully.
Dec 06 07:37:27 compute-0 podman[336557]: 2025-12-06 07:37:27.57152472 +0000 UTC m=+0.025194602 container died 2f4feec55754db31efa5d398ad83c4f07e0c742cfc014a0bb69129481ff35509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackburn, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 06 07:37:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-13e6a5954c984937692f37fab627e5b2e6a5411d59dbdfad51a54c876d89bf67-merged.mount: Deactivated successfully.
Dec 06 07:37:27 compute-0 podman[336557]: 2025-12-06 07:37:27.623058691 +0000 UTC m=+0.076728543 container remove 2f4feec55754db31efa5d398ad83c4f07e0c742cfc014a0bb69129481ff35509 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_blackburn, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:37:27 compute-0 systemd[1]: libpod-conmon-2f4feec55754db31efa5d398ad83c4f07e0c742cfc014a0bb69129481ff35509.scope: Deactivated successfully.
Dec 06 07:37:27 compute-0 sudo[336370]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:37:27 compute-0 ceph-mon[74339]: pgmap v2432: 305 pgs: 305 active+clean; 943 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Dec 06 07:37:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:27.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:28.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 943 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 135 op/s
Dec 06 07:37:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:29.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:29 compute-0 nova_compute[251992]: 2025-12-06 07:37:29.914 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:30 compute-0 nova_compute[251992]: 2025-12-06 07:37:30.063 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:30.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 947 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.4 MiB/s wr, 146 op/s
Dec 06 07:37:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:31.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:32.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 948 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 261 KiB/s rd, 1.2 MiB/s wr, 65 op/s
Dec 06 07:37:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:33.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Dec 06 07:37:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:37:33 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Dec 06 07:37:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:37:33 compute-0 ceph-mon[74339]: pgmap v2433: 305 pgs: 305 active+clean; 943 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 135 op/s
Dec 06 07:37:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.092 251996 DEBUG oslo_concurrency.processutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/disk.config f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.980s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.093 251996 INFO nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Deleting local config drive /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/disk.config because it was imported into RBD.
Dec 06 07:37:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 452471d0-450d-40f8-a86a-2f9a680ca613 does not exist
Dec 06 07:37:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 79625098-b796-42e8-9ef5-9a0ada40b235 does not exist
Dec 06 07:37:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 721365aa-96e3-4b5f-9208-1b217f0db929 does not exist
Dec 06 07:37:34 compute-0 kernel: tap83a9b755-33: entered promiscuous mode
Dec 06 07:37:34 compute-0 NetworkManager[48965]: <info>  [1765006654.1490] manager: (tap83a9b755-33): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Dec 06 07:37:34 compute-0 ovn_controller[147168]: 2025-12-06T07:37:34Z|00469|binding|INFO|Claiming lport 83a9b755-339a-4da1-ade2-590aecb2c951 for this chassis.
Dec 06 07:37:34 compute-0 ovn_controller[147168]: 2025-12-06T07:37:34Z|00470|binding|INFO|83a9b755-339a-4da1-ade2-590aecb2c951: Claiming fa:16:3e:cf:91:ba 10.100.0.7
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.150 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.167 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:34 compute-0 ovn_controller[147168]: 2025-12-06T07:37:34Z|00471|binding|INFO|Setting lport 83a9b755-339a-4da1-ade2-590aecb2c951 ovn-installed in OVS
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.170 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:34 compute-0 sudo[336577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:34 compute-0 sudo[336577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:34 compute-0 systemd-machined[212986]: New machine qemu-60-instance-00000084.
Dec 06 07:37:34 compute-0 sudo[336577]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:34 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000084.
Dec 06 07:37:34 compute-0 systemd-udevd[336621]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:37:34 compute-0 NetworkManager[48965]: <info>  [1765006654.2206] device (tap83a9b755-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:37:34 compute-0 ovn_controller[147168]: 2025-12-06T07:37:34Z|00472|binding|INFO|Setting lport 83a9b755-339a-4da1-ade2-590aecb2c951 up in Southbound
Dec 06 07:37:34 compute-0 NetworkManager[48965]: <info>  [1765006654.2219] device (tap83a9b755-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.220 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:91:ba 10.100.0.7'], port_security=['fa:16:3e:cf:91:ba 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f37cdbe1-70ec-41d7-8e94-24a34612404f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=83a9b755-339a-4da1-ade2-590aecb2c951) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.223 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 83a9b755-339a-4da1-ade2-590aecb2c951 in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 bound to our chassis
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.227 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:37:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:34 compute-0 sudo[336613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.246 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2a177ed3-6549-4620-9d6f-84b6bf84965a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:34 compute-0 sudo[336613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:34 compute-0 sudo[336613]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.279 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[08f749df-321d-43cc-a3c6-de415a0038f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.282 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8471bc-8d3b-43fd-a97d-1952319f4240]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.312 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e95c57bf-75e1-4dc3-829c-37edca4fc963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.331 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d0012b-82af-44e1-bd0e-bb2f6965258d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 16, 'rx_bytes': 616, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 16, 'rx_bytes': 616, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 40379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336652, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:34.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.348 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[92874046-fae7-4f22-b03b-a34fd8e85a5f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669283, 'tstamp': 669283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336653, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669285, 'tstamp': 669285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336653, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.350 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.351 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.353 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.353 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.353 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.353 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:37:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:37:34.354 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.606 251996 DEBUG nova.compute.manager [req-dcdc80c8-c6dc-49b3-94ba-89613678105c req-ef5092ed-a6d2-4d87-ba8b-a7f3a02679fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.606 251996 DEBUG oslo_concurrency.lockutils [req-dcdc80c8-c6dc-49b3-94ba-89613678105c req-ef5092ed-a6d2-4d87-ba8b-a7f3a02679fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.606 251996 DEBUG oslo_concurrency.lockutils [req-dcdc80c8-c6dc-49b3-94ba-89613678105c req-ef5092ed-a6d2-4d87-ba8b-a7f3a02679fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.607 251996 DEBUG oslo_concurrency.lockutils [req-dcdc80c8-c6dc-49b3-94ba-89613678105c req-ef5092ed-a6d2-4d87-ba8b-a7f3a02679fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.607 251996 DEBUG nova.compute.manager [req-dcdc80c8-c6dc-49b3-94ba-89613678105c req-ef5092ed-a6d2-4d87-ba8b-a7f3a02679fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Processing event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.786 251996 DEBUG nova.compute.manager [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.787 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006654.7863169, f37cdbe1-70ec-41d7-8e94-24a34612404f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.788 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] VM Started (Lifecycle Event)
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.790 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.793 251996 INFO nova.virt.libvirt.driver [-] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance spawned successfully.
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.793 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.810 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.815 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.818 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.818 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.819 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.819 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.819 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.820 251996 DEBUG nova.virt.libvirt.driver [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:37:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 948 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 873 KiB/s wr, 73 op/s
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.847 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.848 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006654.7864277, f37cdbe1-70ec-41d7-8e94-24a34612404f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.848 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] VM Paused (Lifecycle Event)
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.880 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.883 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006654.7894213, f37cdbe1-70ec-41d7-8e94-24a34612404f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.883 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] VM Resumed (Lifecycle Event)
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.888 251996 INFO nova.compute.manager [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Took 10.44 seconds to spawn the instance on the hypervisor.
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.889 251996 DEBUG nova.compute.manager [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.911 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.913 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.916 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.937 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.952 251996 INFO nova.compute.manager [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Took 40.71 seconds to build instance.
Dec 06 07:37:34 compute-0 ceph-mon[74339]: pgmap v2434: 305 pgs: 305 active+clean; 947 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.4 MiB/s wr, 146 op/s
Dec 06 07:37:34 compute-0 ceph-mon[74339]: pgmap v2435: 305 pgs: 305 active+clean; 948 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 261 KiB/s rd, 1.2 MiB/s wr, 65 op/s
Dec 06 07:37:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:37:34 compute-0 ceph-mon[74339]: osdmap e304: 3 total, 3 up, 3 in
Dec 06 07:37:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:37:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4211913717' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:34 compute-0 nova_compute[251992]: 2025-12-06 07:37:34.979 251996 DEBUG oslo_concurrency.lockutils [None req-ee2dd62c-15e2-4a93-a7c9-4c171f9943c2 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 40.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:35 compute-0 nova_compute[251992]: 2025-12-06 07:37:35.065 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:35.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:35 compute-0 ceph-mon[74339]: pgmap v2437: 305 pgs: 305 active+clean; 948 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 309 KiB/s rd, 873 KiB/s wr, 73 op/s
Dec 06 07:37:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3011747925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:36.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:36 compute-0 nova_compute[251992]: 2025-12-06 07:37:36.723 251996 DEBUG nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:37:36 compute-0 nova_compute[251992]: 2025-12-06 07:37:36.724 251996 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:36 compute-0 nova_compute[251992]: 2025-12-06 07:37:36.724 251996 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:36 compute-0 nova_compute[251992]: 2025-12-06 07:37:36.725 251996 DEBUG oslo_concurrency.lockutils [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:36 compute-0 nova_compute[251992]: 2025-12-06 07:37:36.725 251996 DEBUG nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] No waiting events found dispatching network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:37:36 compute-0 nova_compute[251992]: 2025-12-06 07:37:36.726 251996 WARNING nova.compute.manager [req-3c04fd27-9e92-4ffd-9360-d648b4956f92 req-423fd49f-994a-4764-9cf5-971b67649254 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received unexpected event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 for instance with vm_state active and task_state None.
Dec 06 07:37:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 961 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 1.9 MiB/s wr, 79 op/s
Dec 06 07:37:36 compute-0 nova_compute[251992]: 2025-12-06 07:37:36.925 251996 INFO nova.compute.manager [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Rescuing
Dec 06 07:37:36 compute-0 nova_compute[251992]: 2025-12-06 07:37:36.926 251996 DEBUG oslo_concurrency.lockutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "refresh_cache-f37cdbe1-70ec-41d7-8e94-24a34612404f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:37:36 compute-0 nova_compute[251992]: 2025-12-06 07:37:36.926 251996 DEBUG oslo_concurrency.lockutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquired lock "refresh_cache-f37cdbe1-70ec-41d7-8e94-24a34612404f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:37:36 compute-0 nova_compute[251992]: 2025-12-06 07:37:36.926 251996 DEBUG nova.network.neutron [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:37:37 compute-0 ceph-mon[74339]: pgmap v2438: 305 pgs: 305 active+clean; 961 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 1.9 MiB/s wr, 79 op/s
Dec 06 07:37:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:37.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:37 compute-0 sudo[336699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:37 compute-0 sudo[336699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:37 compute-0 sudo[336699]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:37 compute-0 sudo[336724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:37 compute-0 sudo[336724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:37 compute-0 sudo[336724]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:38.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 972 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 169 op/s
Dec 06 07:37:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:39 compute-0 nova_compute[251992]: 2025-12-06 07:37:39.604 251996 DEBUG nova.network.neutron [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Updating instance_info_cache with network_info: [{"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:37:39 compute-0 nova_compute[251992]: 2025-12-06 07:37:39.626 251996 DEBUG oslo_concurrency.lockutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Releasing lock "refresh_cache-f37cdbe1-70ec-41d7-8e94-24a34612404f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:37:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:39.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:39 compute-0 nova_compute[251992]: 2025-12-06 07:37:39.885 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:37:39 compute-0 nova_compute[251992]: 2025-12-06 07:37:39.918 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:40 compute-0 ceph-mon[74339]: pgmap v2439: 305 pgs: 305 active+clean; 972 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 169 op/s
Dec 06 07:37:40 compute-0 nova_compute[251992]: 2025-12-06 07:37:40.067 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:40.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 976 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.1 MiB/s wr, 262 op/s
Dec 06 07:37:41 compute-0 ceph-mon[74339]: pgmap v2440: 305 pgs: 305 active+clean; 976 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.1 MiB/s wr, 262 op/s
Dec 06 07:37:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:41.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Dec 06 07:37:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Dec 06 07:37:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Dec 06 07:37:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:42.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:42 compute-0 podman[336751]: 2025-12-06 07:37:42.450956556 +0000 UTC m=+0.102413781 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 06 07:37:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 976 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.1 MiB/s wr, 282 op/s
Dec 06 07:37:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:37:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:37:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:37:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:37:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:37:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:37:43 compute-0 ceph-mon[74339]: osdmap e305: 3 total, 3 up, 3 in
Dec 06 07:37:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3416436855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:37:43 compute-0 ceph-mon[74339]: pgmap v2442: 305 pgs: 305 active+clean; 976 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.1 MiB/s wr, 282 op/s
Dec 06 07:37:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:43.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:44.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 976 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.9 MiB/s wr, 252 op/s
Dec 06 07:37:44 compute-0 nova_compute[251992]: 2025-12-06 07:37:44.920 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:45 compute-0 nova_compute[251992]: 2025-12-06 07:37:45.069 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:45.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:45 compute-0 ceph-mon[74339]: pgmap v2443: 305 pgs: 305 active+clean; 976 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.9 MiB/s wr, 252 op/s
Dec 06 07:37:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:46.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 976 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 771 KiB/s wr, 207 op/s
Dec 06 07:37:47 compute-0 ceph-mon[74339]: pgmap v2444: 305 pgs: 305 active+clean; 976 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 771 KiB/s wr, 207 op/s
Dec 06 07:37:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:47.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:48 compute-0 podman[336781]: 2025-12-06 07:37:48.395407967 +0000 UTC m=+0.053618832 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:37:48 compute-0 podman[336782]: 2025-12-06 07:37:48.405423851 +0000 UTC m=+0.062249139 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Dec 06 07:37:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:48.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 977 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 631 KiB/s wr, 129 op/s
Dec 06 07:37:49 compute-0 nova_compute[251992]: 2025-12-06 07:37:49.079 251996 DEBUG oslo_concurrency.lockutils [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:49 compute-0 nova_compute[251992]: 2025-12-06 07:37:49.079 251996 DEBUG oslo_concurrency.lockutils [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Dec 06 07:37:49 compute-0 nova_compute[251992]: 2025-12-06 07:37:49.529 251996 DEBUG nova.objects.instance [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'flavor' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:37:49 compute-0 nova_compute[251992]: 2025-12-06 07:37:49.732 251996 DEBUG oslo_concurrency.lockutils [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:49.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:49 compute-0 nova_compute[251992]: 2025-12-06 07:37:49.928 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:37:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Dec 06 07:37:49 compute-0 ceph-mon[74339]: pgmap v2445: 305 pgs: 305 active+clean; 977 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 631 KiB/s wr, 129 op/s
Dec 06 07:37:49 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Dec 06 07:37:49 compute-0 nova_compute[251992]: 2025-12-06 07:37:49.972 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:50 compute-0 nova_compute[251992]: 2025-12-06 07:37:50.071 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:50.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:50 compute-0 ovn_controller[147168]: 2025-12-06T07:37:50Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:91:ba 10.100.0.7
Dec 06 07:37:50 compute-0 ovn_controller[147168]: 2025-12-06T07:37:50Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:91:ba 10.100.0.7
Dec 06 07:37:50 compute-0 nova_compute[251992]: 2025-12-06 07:37:50.624 251996 DEBUG oslo_concurrency.lockutils [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:37:50 compute-0 nova_compute[251992]: 2025-12-06 07:37:50.625 251996 DEBUG oslo_concurrency.lockutils [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:37:50 compute-0 nova_compute[251992]: 2025-12-06 07:37:50.625 251996 INFO nova.compute.manager [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Attaching volume 5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6 to /dev/vdb
Dec 06 07:37:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 997 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 412 KiB/s rd, 2.7 MiB/s wr, 88 op/s
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.053 251996 DEBUG os_brick.utils [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.055 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.065 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.065 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[ee032ad1-6f52-48dc-ac06-c8a1bbfa5226]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.066 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.074 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.074 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[9f715548-2ebc-457a-949b-6346150485ff]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.076 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.084 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.084 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e8f8c0-ac54-47a6-a718-1db5c9e34183]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.085 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[173a008c-f500-4e79-ac89-de7c9153472b]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.086 251996 DEBUG oslo_concurrency.processutils [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.114 251996 DEBUG oslo_concurrency.processutils [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.116 251996 DEBUG os_brick.initiator.connectors.lightos [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.116 251996 DEBUG os_brick.initiator.connectors.lightos [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.116 251996 DEBUG os_brick.initiator.connectors.lightos [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.116 251996 DEBUG os_brick.utils [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:37:51 compute-0 nova_compute[251992]: 2025-12-06 07:37:51.117 251996 DEBUG nova.virt.block_device [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating existing volume attachment record: a799365d-be42-4ca8-87e9-e01efde0fc10 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:37:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:51.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:51 compute-0 ceph-mon[74339]: osdmap e306: 3 total, 3 up, 3 in
Dec 06 07:37:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:37:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/474200040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:52 compute-0 nova_compute[251992]: 2025-12-06 07:37:52.139 251996 DEBUG nova.objects.instance [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'flavor' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:37:52 compute-0 nova_compute[251992]: 2025-12-06 07:37:52.175 251996 DEBUG nova.virt.libvirt.driver [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Attempting to attach volume 5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:37:52 compute-0 nova_compute[251992]: 2025-12-06 07:37:52.178 251996 DEBUG nova.virt.libvirt.guest [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:37:52 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:37:52 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6">
Dec 06 07:37:52 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:37:52 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:37:52 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:37:52 compute-0 nova_compute[251992]:   </source>
Dec 06 07:37:52 compute-0 nova_compute[251992]:   <auth username="openstack">
Dec 06 07:37:52 compute-0 nova_compute[251992]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:37:52 compute-0 nova_compute[251992]:   </auth>
Dec 06 07:37:52 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:37:52 compute-0 nova_compute[251992]:   <serial>5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6</serial>
Dec 06 07:37:52 compute-0 nova_compute[251992]: </disk>
Dec 06 07:37:52 compute-0 nova_compute[251992]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:37:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:52.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 997 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 865 KiB/s rd, 2.5 MiB/s wr, 108 op/s
Dec 06 07:37:53 compute-0 nova_compute[251992]: 2025-12-06 07:37:53.051 251996 DEBUG nova.virt.libvirt.driver [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:37:53 compute-0 nova_compute[251992]: 2025-12-06 07:37:53.051 251996 DEBUG nova.virt.libvirt.driver [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:37:53 compute-0 nova_compute[251992]: 2025-12-06 07:37:53.052 251996 DEBUG nova.virt.libvirt.driver [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:37:53 compute-0 nova_compute[251992]: 2025-12-06 07:37:53.052 251996 DEBUG nova.virt.libvirt.driver [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No VIF found with MAC fa:16:3e:4a:1f:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:37:53 compute-0 nova_compute[251992]: 2025-12-06 07:37:53.585 251996 DEBUG oslo_concurrency.lockutils [None req-ca2298b6-733e-43b5-8db3-dd1b8b2aee52 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:37:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:53.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:54 compute-0 ceph-mon[74339]: pgmap v2447: 305 pgs: 305 active+clean; 997 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 412 KiB/s rd, 2.7 MiB/s wr, 88 op/s
Dec 06 07:37:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/474200040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:37:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:54.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 997 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 865 KiB/s rd, 2.5 MiB/s wr, 108 op/s
Dec 06 07:37:54 compute-0 nova_compute[251992]: 2025-12-06 07:37:54.975 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:55 compute-0 nova_compute[251992]: 2025-12-06 07:37:55.073 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:37:55 compute-0 ceph-mon[74339]: pgmap v2448: 305 pgs: 305 active+clean; 997 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 865 KiB/s rd, 2.5 MiB/s wr, 108 op/s
Dec 06 07:37:55 compute-0 ceph-mon[74339]: pgmap v2449: 305 pgs: 305 active+clean; 997 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 865 KiB/s rd, 2.5 MiB/s wr, 108 op/s
Dec 06 07:37:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:37:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:55.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:37:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:56.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 1006 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 130 op/s
Dec 06 07:37:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:57.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:57 compute-0 ceph-mon[74339]: pgmap v2450: 305 pgs: 305 active+clean; 1006 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 130 op/s
Dec 06 07:37:58 compute-0 sudo[336854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:58 compute-0 sudo[336854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:58 compute-0 sudo[336854]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:58 compute-0 sudo[336879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:37:58 compute-0 sudo[336879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:37:58 compute-0 sudo[336879]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:37:58.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 1009 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 133 op/s
Dec 06 07:37:59 compute-0 ceph-mon[74339]: pgmap v2451: 305 pgs: 305 active+clean; 1009 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 133 op/s
Dec 06 07:37:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:37:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:37:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:37:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:37:59.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:37:59 compute-0 nova_compute[251992]: 2025-12-06 07:37:59.977 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:00 compute-0 nova_compute[251992]: 2025-12-06 07:38:00.075 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:00.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 1010 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1020 KiB/s rd, 1.6 MiB/s wr, 136 op/s
Dec 06 07:38:00 compute-0 nova_compute[251992]: 2025-12-06 07:38:00.972 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:38:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:01.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:01 compute-0 nova_compute[251992]: 2025-12-06 07:38:01.903 251996 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:38:01 compute-0 nova_compute[251992]: 2025-12-06 07:38:01.904 251996 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:38:01 compute-0 nova_compute[251992]: 2025-12-06 07:38:01.904 251996 DEBUG nova.network.neutron [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:38:01 compute-0 ceph-mon[74339]: pgmap v2452: 305 pgs: 305 active+clean; 1010 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1020 KiB/s rd, 1.6 MiB/s wr, 136 op/s
Dec 06 07:38:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1196958827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:02.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 1010 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 194 KiB/s wr, 91 op/s
Dec 06 07:38:03 compute-0 ceph-mon[74339]: pgmap v2453: 305 pgs: 305 active+clean; 1010 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 194 KiB/s wr, 91 op/s
Dec 06 07:38:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:03.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:03 compute-0 nova_compute[251992]: 2025-12-06 07:38:03.818 251996 DEBUG nova.network.neutron [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance_info_cache with network_info: [{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:38:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:03.845 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:03.846 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:03.847 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:03 compute-0 nova_compute[251992]: 2025-12-06 07:38:03.878 251996 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:38:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:04 compute-0 nova_compute[251992]: 2025-12-06 07:38:04.355 251996 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 07:38:04 compute-0 nova_compute[251992]: 2025-12-06 07:38:04.355 251996 DEBUG nova.virt.libvirt.volume.remotefs [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Creating file /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/943005edad41492d879ca0dc4ce3621a.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Dec 06 07:38:04 compute-0 nova_compute[251992]: 2025-12-06 07:38:04.356 251996 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/943005edad41492d879ca0dc4ce3621a.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:38:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:04.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:04 compute-0 nova_compute[251992]: 2025-12-06 07:38:04.793 251996 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/943005edad41492d879ca0dc4ce3621a.tmp" returned: 1 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:38:04 compute-0 nova_compute[251992]: 2025-12-06 07:38:04.794 251996 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/943005edad41492d879ca0dc4ce3621a.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 07:38:04 compute-0 nova_compute[251992]: 2025-12-06 07:38:04.794 251996 DEBUG nova.virt.libvirt.volume.remotefs [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Creating directory /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Dec 06 07:38:04 compute-0 nova_compute[251992]: 2025-12-06 07:38:04.795 251996 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:38:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 1010 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 925 KiB/s rd, 131 KiB/s wr, 64 op/s
Dec 06 07:38:05 compute-0 nova_compute[251992]: 2025-12-06 07:38:05.001 251996 DEBUG oslo_concurrency.processutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:38:05 compute-0 nova_compute[251992]: 2025-12-06 07:38:05.007 251996 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:38:05 compute-0 nova_compute[251992]: 2025-12-06 07:38:05.036 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:05 compute-0 nova_compute[251992]: 2025-12-06 07:38:05.076 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:05 compute-0 ceph-mon[74339]: pgmap v2454: 305 pgs: 305 active+clean; 1010 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 925 KiB/s rd, 131 KiB/s wr, 64 op/s
Dec 06 07:38:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:05.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:06.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 1010 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 156 KiB/s wr, 68 op/s
Dec 06 07:38:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:38:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:07.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.018 251996 INFO nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance shutdown successfully after 3 seconds.
Dec 06 07:38:08 compute-0 ceph-mon[74339]: pgmap v2455: 305 pgs: 305 active+clean; 1010 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 156 KiB/s wr, 68 op/s
Dec 06 07:38:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:08.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:08 compute-0 kernel: tap8690867c-c0 (unregistering): left promiscuous mode
Dec 06 07:38:08 compute-0 NetworkManager[48965]: <info>  [1765006688.5089] device (tap8690867c-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.516 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:08 compute-0 ovn_controller[147168]: 2025-12-06T07:38:08Z|00473|binding|INFO|Releasing lport 8690867c-c0a8-4574-b54f-38486691e339 from this chassis (sb_readonly=0)
Dec 06 07:38:08 compute-0 ovn_controller[147168]: 2025-12-06T07:38:08Z|00474|binding|INFO|Setting lport 8690867c-c0a8-4574-b54f-38486691e339 down in Southbound
Dec 06 07:38:08 compute-0 ovn_controller[147168]: 2025-12-06T07:38:08Z|00475|binding|INFO|Removing iface tap8690867c-c0 ovn-installed in OVS
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.519 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.535 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.550 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:1f:3e 10.100.0.11'], port_security=['fa:16:3e:4a:1f:3e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2de097e3-8182-48e5-b69d-88acbfb84e66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=8690867c-c0a8-4574-b54f-38486691e339) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.551 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 8690867c-c0a8-4574-b54f-38486691e339 in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 unbound from our chassis
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.553 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.573 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0547af-8fc8-4fed-b719-81d553faa316]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:08 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000083.scope: Deactivated successfully.
Dec 06 07:38:08 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000083.scope: Consumed 16.260s CPU time.
Dec 06 07:38:08 compute-0 systemd-machined[212986]: Machine qemu-59-instance-00000083 terminated.
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.607 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[074107fa-e220-4056-9803-5d7ecac77737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.611 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[47e44570-0051-41e1-9cbd-cb281eaf7441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.637 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.643 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.643 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[fd6081a3-a288-4529-8e44-1b8a0188bdfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.656 251996 INFO nova.virt.libvirt.driver [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance destroyed successfully.
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.657 251996 DEBUG nova.virt.libvirt.vif [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:36:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1439654070',display_name='tempest-ServerActionsTestOtherB-server-1439654070',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1439654070',id=131,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:37:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-i11mga9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:37:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=2de097e3-8182-48e5-b69d-88acbfb84e66,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:4a:1f:3e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.658 251996 DEBUG nova.network.os_vif_util [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-619240463-network", "vif_mac": "fa:16:3e:4a:1f:3e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.658 251996 DEBUG nova.network.os_vif_util [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.659 251996 DEBUG os_vif [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.661 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.662 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8690867c-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.664 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.663 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b67ec210-b45b-4e34-9fba-4316890d4c2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678865, 'reachable_time': 42731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336930, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.665 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.669 251996 INFO os_vif [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0')
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.675 251996 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.675 251996 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.675 251996 DEBUG nova.virt.libvirt.driver [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.680 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ff3fe2-857f-4fa2-b8e2-b1556dbf1a05]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3beede49-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678878, 'tstamp': 678878}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336933, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3beede49-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678881, 'tstamp': 678881}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336933, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.682 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:38:08 compute-0 nova_compute[251992]: 2025-12-06 07:38:08.683 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.685 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3beede49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.685 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.685 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3beede49-10, col_values=(('external_ids', {'iface-id': '058fee39-af19-4b00-b556-fb88bc823747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:38:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:08.685 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:38:08 compute-0 ceph-osd[84884]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Dec 06 07:38:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 982 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 76 KiB/s wr, 57 op/s
Dec 06 07:38:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3978816651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:38:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3978816651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:38:09 compute-0 ceph-mon[74339]: pgmap v2456: 305 pgs: 305 active+clean; 982 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 76 KiB/s wr, 57 op/s
Dec 06 07:38:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:09.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:10 compute-0 nova_compute[251992]: 2025-12-06 07:38:10.038 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:10 compute-0 nova_compute[251992]: 2025-12-06 07:38:10.126 251996 DEBUG nova.compute.manager [req-83848317-da69-4a02-80fa-26be22a7926d req-773633dc-177b-42a5-8c9f-b20f7a0ee5c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:38:10 compute-0 nova_compute[251992]: 2025-12-06 07:38:10.127 251996 DEBUG oslo_concurrency.lockutils [req-83848317-da69-4a02-80fa-26be22a7926d req-773633dc-177b-42a5-8c9f-b20f7a0ee5c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:10 compute-0 nova_compute[251992]: 2025-12-06 07:38:10.127 251996 DEBUG oslo_concurrency.lockutils [req-83848317-da69-4a02-80fa-26be22a7926d req-773633dc-177b-42a5-8c9f-b20f7a0ee5c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:10 compute-0 nova_compute[251992]: 2025-12-06 07:38:10.127 251996 DEBUG oslo_concurrency.lockutils [req-83848317-da69-4a02-80fa-26be22a7926d req-773633dc-177b-42a5-8c9f-b20f7a0ee5c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:10 compute-0 nova_compute[251992]: 2025-12-06 07:38:10.127 251996 DEBUG nova.compute.manager [req-83848317-da69-4a02-80fa-26be22a7926d req-773633dc-177b-42a5-8c9f-b20f7a0ee5c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:38:10 compute-0 nova_compute[251992]: 2025-12-06 07:38:10.128 251996 WARNING nova.compute.manager [req-83848317-da69-4a02-80fa-26be22a7926d req-773633dc-177b-42a5-8c9f-b20f7a0ee5c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state active and task_state resize_migrating.
Dec 06 07:38:10 compute-0 nova_compute[251992]: 2025-12-06 07:38:10.361 251996 DEBUG neutronclient.v2_0.client [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8690867c-c0a8-4574-b54f-38486691e339 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Dec 06 07:38:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:10.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 931 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 52 KiB/s wr, 52 op/s
Dec 06 07:38:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:10.943 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:38:10 compute-0 nova_compute[251992]: 2025-12-06 07:38:10.943 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:10.944 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:38:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:10.945 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:38:11 compute-0 ceph-mon[74339]: pgmap v2457: 305 pgs: 305 active+clean; 931 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 52 KiB/s wr, 52 op/s
Dec 06 07:38:11 compute-0 nova_compute[251992]: 2025-12-06 07:38:11.323 251996 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:11 compute-0 nova_compute[251992]: 2025-12-06 07:38:11.323 251996 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:11 compute-0 nova_compute[251992]: 2025-12-06 07:38:11.324 251996 DEBUG oslo_concurrency.lockutils [None req-50d60388-e28c-430d-bb2f-1abe5533371b a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:11.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:12 compute-0 nova_compute[251992]: 2025-12-06 07:38:12.031 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:38:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:12.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:12 compute-0 nova_compute[251992]: 2025-12-06 07:38:12.538 251996 DEBUG nova.compute.manager [req-6d09819a-9f27-4428-9628-f3ab4542321f req-facd5f5f-9ce2-4d5b-8d11-9423a4a0d59b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:38:12 compute-0 nova_compute[251992]: 2025-12-06 07:38:12.538 251996 DEBUG oslo_concurrency.lockutils [req-6d09819a-9f27-4428-9628-f3ab4542321f req-facd5f5f-9ce2-4d5b-8d11-9423a4a0d59b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:12 compute-0 nova_compute[251992]: 2025-12-06 07:38:12.538 251996 DEBUG oslo_concurrency.lockutils [req-6d09819a-9f27-4428-9628-f3ab4542321f req-facd5f5f-9ce2-4d5b-8d11-9423a4a0d59b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:12 compute-0 nova_compute[251992]: 2025-12-06 07:38:12.539 251996 DEBUG oslo_concurrency.lockutils [req-6d09819a-9f27-4428-9628-f3ab4542321f req-facd5f5f-9ce2-4d5b-8d11-9423a4a0d59b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:12 compute-0 nova_compute[251992]: 2025-12-06 07:38:12.539 251996 DEBUG nova.compute.manager [req-6d09819a-9f27-4428-9628-f3ab4542321f req-facd5f5f-9ce2-4d5b-8d11-9423a4a0d59b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:38:12 compute-0 nova_compute[251992]: 2025-12-06 07:38:12.539 251996 WARNING nova.compute.manager [req-6d09819a-9f27-4428-9628-f3ab4542321f req-facd5f5f-9ce2-4d5b-8d11-9423a4a0d59b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state active and task_state resize_migrated.
Dec 06 07:38:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 950 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 803 KiB/s wr, 48 op/s
Dec 06 07:38:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:38:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:38:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:38:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:38:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:38:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:38:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1245743315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:13 compute-0 podman[336937]: 2025-12-06 07:38:13.436956823 +0000 UTC m=+0.081863477 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller)
Dec 06 07:38:13 compute-0 nova_compute[251992]: 2025-12-06 07:38:13.664 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:38:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 37K writes, 139K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 14K syncs, 2.71 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5251 writes, 19K keys, 5251 commit groups, 1.0 writes per commit group, ingest: 21.95 MB, 0.04 MB/s
                                           Interval WAL: 5251 writes, 2004 syncs, 2.62 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 07:38:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:13.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:14 compute-0 nova_compute[251992]: 2025-12-06 07:38:14.106 251996 DEBUG nova.compute.manager [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-changed-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:38:14 compute-0 nova_compute[251992]: 2025-12-06 07:38:14.106 251996 DEBUG nova.compute.manager [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Refreshing instance network info cache due to event network-changed-8690867c-c0a8-4574-b54f-38486691e339. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:38:14 compute-0 nova_compute[251992]: 2025-12-06 07:38:14.107 251996 DEBUG oslo_concurrency.lockutils [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:38:14 compute-0 nova_compute[251992]: 2025-12-06 07:38:14.107 251996 DEBUG oslo_concurrency.lockutils [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:38:14 compute-0 nova_compute[251992]: 2025-12-06 07:38:14.107 251996 DEBUG nova.network.neutron [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Refreshing network info cache for port 8690867c-c0a8-4574-b54f-38486691e339 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:38:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:38:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1172078197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:14 compute-0 ceph-mon[74339]: pgmap v2458: 305 pgs: 305 active+clean; 950 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 803 KiB/s wr, 48 op/s
Dec 06 07:38:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1172078197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:14.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 950 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 796 KiB/s wr, 42 op/s
Dec 06 07:38:15 compute-0 nova_compute[251992]: 2025-12-06 07:38:15.040 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:15 compute-0 ceph-mon[74339]: pgmap v2459: 305 pgs: 305 active+clean; 950 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 796 KiB/s wr, 42 op/s
Dec 06 07:38:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1309206469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:15.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:15 compute-0 nova_compute[251992]: 2025-12-06 07:38:15.985 251996 DEBUG nova.network.neutron [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updated VIF entry in instance network info cache for port 8690867c-c0a8-4574-b54f-38486691e339. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:38:15 compute-0 nova_compute[251992]: 2025-12-06 07:38:15.986 251996 DEBUG nova.network.neutron [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance_info_cache with network_info: [{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:38:16 compute-0 nova_compute[251992]: 2025-12-06 07:38:16.004 251996 DEBUG oslo_concurrency.lockutils [req-204fdbdd-e574-4979-960e-31b410d8a3c8 req-c465d2d6-bf60-4327-832c-ae16cb6cf512 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:38:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:16.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:16 compute-0 nova_compute[251992]: 2025-12-06 07:38:16.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:16 compute-0 nova_compute[251992]: 2025-12-06 07:38:16.702 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:16 compute-0 nova_compute[251992]: 2025-12-06 07:38:16.702 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:16 compute-0 nova_compute[251992]: 2025-12-06 07:38:16.702 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:16 compute-0 nova_compute[251992]: 2025-12-06 07:38:16.702 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:38:16 compute-0 nova_compute[251992]: 2025-12-06 07:38:16.703 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:38:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 978 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 06 07:38:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:38:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3489610966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.131 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.237 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.237 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.241 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.241 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.243 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.243 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.243 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.246 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.246 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.249 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.249 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.420 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.422 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3684MB free_disk=20.623065948486328GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.422 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.422 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.492 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Migration for instance 2de097e3-8182-48e5-b69d-88acbfb84e66 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.525 251996 INFO nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating resource usage from migration 47ca4852-1aea-48fa-a209-2395d57c0b69
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.526 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Starting to track outgoing migration 47ca4852-1aea-48fa-a209-2395d57c0b69 with flavor 25848a18-11d9-4f11-80b5-5d005675c76d _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.553 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 70928eda-043f-429b-aa4e-af1f3189a7c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.553 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c2e6b8fd-375c-4658-b338-f2d334041ba3 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.553 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c1ef1073-7c66-428c-a02b-e4daa3551d22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.553 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance f37cdbe1-70ec-41d7-8e94-24a34612404f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.554 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Migration 47ca4852-1aea-48fa-a209-2395d57c0b69 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.554 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.554 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:38:17 compute-0 nova_compute[251992]: 2025-12-06 07:38:17.671 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:38:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:17.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1297615286' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:17 compute-0 ceph-mon[74339]: pgmap v2460: 305 pgs: 305 active+clean; 978 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 06 07:38:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3489610966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2276773940' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:38:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1056692813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:18 compute-0 nova_compute[251992]: 2025-12-06 07:38:18.105 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:38:18 compute-0 nova_compute[251992]: 2025-12-06 07:38:18.111 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:38:18 compute-0 nova_compute[251992]: 2025-12-06 07:38:18.137 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:38:18 compute-0 nova_compute[251992]: 2025-12-06 07:38:18.161 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:38:18 compute-0 nova_compute[251992]: 2025-12-06 07:38:18.161 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:18 compute-0 sudo[337012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:18 compute-0 sudo[337012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:18 compute-0 sudo[337012]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:18 compute-0 sudo[337037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:18 compute-0 sudo[337037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:18 compute-0 sudo[337037]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:18.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:38:18
Dec 06 07:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'vms']
Dec 06 07:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:38:18 compute-0 nova_compute[251992]: 2025-12-06 07:38:18.698 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 978 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 55 op/s
Dec 06 07:38:19 compute-0 nova_compute[251992]: 2025-12-06 07:38:19.162 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:19 compute-0 nova_compute[251992]: 2025-12-06 07:38:19.162 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:19 compute-0 nova_compute[251992]: 2025-12-06 07:38:19.163 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:19 compute-0 podman[337063]: 2025-12-06 07:38:19.412149226 +0000 UTC m=+0.055148175 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:38:19 compute-0 podman[337064]: 2025-12-06 07:38:19.413957385 +0000 UTC m=+0.056643395 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible)
Dec 06 07:38:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1056692813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:19 compute-0 nova_compute[251992]: 2025-12-06 07:38:19.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:19.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Dec 06 07:38:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Dec 06 07:38:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Dec 06 07:38:20 compute-0 nova_compute[251992]: 2025-12-06 07:38:20.043 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:20.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:20 compute-0 nova_compute[251992]: 2025-12-06 07:38:20.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:20 compute-0 nova_compute[251992]: 2025-12-06 07:38:20.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:20 compute-0 nova_compute[251992]: 2025-12-06 07:38:20.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:38:20 compute-0 nova_compute[251992]: 2025-12-06 07:38:20.706 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:38:20 compute-0 ceph-mon[74339]: pgmap v2461: 305 pgs: 305 active+clean; 978 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 55 op/s
Dec 06 07:38:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2196679724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:20 compute-0 ceph-mon[74339]: osdmap e307: 3 total, 3 up, 3 in
Dec 06 07:38:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 978 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 886 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 07:38:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:21.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:22 compute-0 ovn_controller[147168]: 2025-12-06T07:38:22Z|00476|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 07:38:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:22.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 07:38:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2244263686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:22 compute-0 ceph-mon[74339]: pgmap v2463: 305 pgs: 305 active+clean; 978 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 886 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 07:38:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2972126965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:22 compute-0 nova_compute[251992]: 2025-12-06 07:38:22.706 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Dec 06 07:38:22 compute-0 nova_compute[251992]: 2025-12-06 07:38:22.976 251996 DEBUG nova.compute.manager [req-20424f3a-18bf-45c4-bc80-80fa494626f9 req-1a644287-3900-4b54-84f0-2542bf897650 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:38:22 compute-0 nova_compute[251992]: 2025-12-06 07:38:22.976 251996 DEBUG oslo_concurrency.lockutils [req-20424f3a-18bf-45c4-bc80-80fa494626f9 req-1a644287-3900-4b54-84f0-2542bf897650 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:22 compute-0 nova_compute[251992]: 2025-12-06 07:38:22.976 251996 DEBUG oslo_concurrency.lockutils [req-20424f3a-18bf-45c4-bc80-80fa494626f9 req-1a644287-3900-4b54-84f0-2542bf897650 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:22 compute-0 nova_compute[251992]: 2025-12-06 07:38:22.977 251996 DEBUG oslo_concurrency.lockutils [req-20424f3a-18bf-45c4-bc80-80fa494626f9 req-1a644287-3900-4b54-84f0-2542bf897650 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:22 compute-0 nova_compute[251992]: 2025-12-06 07:38:22.977 251996 DEBUG nova.compute.manager [req-20424f3a-18bf-45c4-bc80-80fa494626f9 req-1a644287-3900-4b54-84f0-2542bf897650 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:38:22 compute-0 nova_compute[251992]: 2025-12-06 07:38:22.977 251996 WARNING nova.compute.manager [req-20424f3a-18bf-45c4-bc80-80fa494626f9 req-1a644287-3900-4b54-84f0-2542bf897650 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state active and task_state resize_finish.
Dec 06 07:38:23 compute-0 nova_compute[251992]: 2025-12-06 07:38:23.075 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:38:23 compute-0 nova_compute[251992]: 2025-12-06 07:38:23.654 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006688.6537073, 2de097e3-8182-48e5-b69d-88acbfb84e66 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:38:23 compute-0 nova_compute[251992]: 2025-12-06 07:38:23.654 251996 INFO nova.compute.manager [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] VM Stopped (Lifecycle Event)
Dec 06 07:38:23 compute-0 nova_compute[251992]: 2025-12-06 07:38:23.655 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:23 compute-0 nova_compute[251992]: 2025-12-06 07:38:23.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:38:23 compute-0 nova_compute[251992]: 2025-12-06 07:38:23.683 251996 DEBUG nova.compute.manager [None req-fbc3c26f-c3e8-4330-8713-e5a62dde7444 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:38:23 compute-0 nova_compute[251992]: 2025-12-06 07:38:23.686 251996 DEBUG nova.compute.manager [None req-fbc3c26f-c3e8-4330-8713-e5a62dde7444 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:38:23 compute-0 nova_compute[251992]: 2025-12-06 07:38:23.702 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:23 compute-0 nova_compute[251992]: 2025-12-06 07:38:23.709 251996 INFO nova.compute.manager [None req-fbc3c26f-c3e8-4330-8713-e5a62dde7444 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com
Dec 06 07:38:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:23.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:24 compute-0 nova_compute[251992]: 2025-12-06 07:38:24.166 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:38:24 compute-0 nova_compute[251992]: 2025-12-06 07:38:24.166 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:38:24 compute-0 nova_compute[251992]: 2025-12-06 07:38:24.166 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:38:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:24.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:38:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:38:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:38:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:38:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:38:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Dec 06 07:38:25 compute-0 nova_compute[251992]: 2025-12-06 07:38:25.094 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:25 compute-0 nova_compute[251992]: 2025-12-06 07:38:25.099 251996 DEBUG nova.compute.manager [req-66d28d30-10d3-4978-be87-b2a09ae103d2 req-5a90aee5-1a83-49a4-8dda-3c052ed1dc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:38:25 compute-0 nova_compute[251992]: 2025-12-06 07:38:25.099 251996 DEBUG oslo_concurrency.lockutils [req-66d28d30-10d3-4978-be87-b2a09ae103d2 req-5a90aee5-1a83-49a4-8dda-3c052ed1dc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:25 compute-0 nova_compute[251992]: 2025-12-06 07:38:25.099 251996 DEBUG oslo_concurrency.lockutils [req-66d28d30-10d3-4978-be87-b2a09ae103d2 req-5a90aee5-1a83-49a4-8dda-3c052ed1dc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:25 compute-0 nova_compute[251992]: 2025-12-06 07:38:25.100 251996 DEBUG oslo_concurrency.lockutils [req-66d28d30-10d3-4978-be87-b2a09ae103d2 req-5a90aee5-1a83-49a4-8dda-3c052ed1dc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:25 compute-0 nova_compute[251992]: 2025-12-06 07:38:25.100 251996 DEBUG nova.compute.manager [req-66d28d30-10d3-4978-be87-b2a09ae103d2 req-5a90aee5-1a83-49a4-8dda-3c052ed1dc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:38:25 compute-0 nova_compute[251992]: 2025-12-06 07:38:25.100 251996 WARNING nova.compute.manager [req-66d28d30-10d3-4978-be87-b2a09ae103d2 req-5a90aee5-1a83-49a4-8dda-3c052ed1dc00 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state active and task_state resize_finish.
Dec 06 07:38:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:25.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.017410184603573423 of space, bias 1.0, pg target 5.223055381072027 quantized to 32 (current 32)
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005311387667504187 of space, bias 1.0, pg target 1.5668593619137352 quantized to 32 (current 32)
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.1410099542853156 quantized to 32 (current 32)
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017041224641727154 quantized to 16 (current 16)
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021301530802158943 quantized to 32 (current 32)
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018106301181835102 quantized to 32 (current 32)
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042603061604317886 quantized to 32 (current 32)
Dec 06 07:38:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:26.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 24 KiB/s wr, 116 op/s
Dec 06 07:38:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:38:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:27.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:38:27 compute-0 nova_compute[251992]: 2025-12-06 07:38:27.900 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Updating instance_info_cache with network_info: [{"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:38:27 compute-0 nova_compute[251992]: 2025-12-06 07:38:27.923 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:38:27 compute-0 nova_compute[251992]: 2025-12-06 07:38:27.924 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:38:27 compute-0 nova_compute[251992]: 2025-12-06 07:38:27.924 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:27 compute-0 nova_compute[251992]: 2025-12-06 07:38:27.924 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:38:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:28.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:28 compute-0 nova_compute[251992]: 2025-12-06 07:38:28.737 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 120 op/s
Dec 06 07:38:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:29.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:30 compute-0 nova_compute[251992]: 2025-12-06 07:38:30.132 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:30.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 7.1 KiB/s wr, 78 op/s
Dec 06 07:38:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:31.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:32 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 07:38:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:32.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 07:38:32 compute-0 ceph-mon[74339]: paxos.0).electionLogic(57) init, last seen epoch 57, mid-election, bumping
Dec 06 07:38:32 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:38:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.3 KiB/s wr, 67 op/s
Dec 06 07:38:33 compute-0 nova_compute[251992]: 2025-12-06 07:38:33.781 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:33.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:34 compute-0 nova_compute[251992]: 2025-12-06 07:38:34.149 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:38:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:34.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:34 compute-0 sudo[337108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:34 compute-0 sudo[337108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:34 compute-0 sudo[337108]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:34 compute-0 sudo[337133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:38:34 compute-0 sudo[337133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:34 compute-0 sudo[337133]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:34 compute-0 sudo[337158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:34 compute-0 sudo[337158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:34 compute-0 sudo[337158]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:34 compute-0 sudo[337183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 07:38:34 compute-0 sudo[337183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 438 KiB/s rd, 3.8 KiB/s wr, 33 op/s
Dec 06 07:38:35 compute-0 nova_compute[251992]: 2025-12-06 07:38:35.160 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:35 compute-0 podman[337281]: 2025-12-06 07:38:35.251518164 +0000 UTC m=+0.059304138 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 07:38:35 compute-0 podman[337281]: 2025-12-06 07:38:35.359404934 +0000 UTC m=+0.167190908 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:38:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:35.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:35 compute-0 podman[337438]: 2025-12-06 07:38:35.90371073 +0000 UTC m=+0.048547694 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 07:38:35 compute-0 podman[337438]: 2025-12-06 07:38:35.910840445 +0000 UTC m=+0.055677389 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 07:38:36 compute-0 podman[337506]: 2025-12-06 07:38:36.093170038 +0000 UTC m=+0.047579707 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, name=keepalived, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793)
Dec 06 07:38:36 compute-0 podman[337506]: 2025-12-06 07:38:36.104682424 +0000 UTC m=+0.059092073 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, vcs-type=git, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, name=keepalived, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.openshift.expose-services=)
Dec 06 07:38:36 compute-0 sudo[337183]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:36 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:38:36 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 07:38:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:36.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 447 KiB/s rd, 3.8 KiB/s wr, 36 op/s
Dec 06 07:38:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 07:38:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1646551191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:37 compute-0 ceph-mon[74339]: pgmap v2464: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Dec 06 07:38:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:38:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:38:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:38:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Dec 06 07:38:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 72m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:38:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 07:38:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 07:38:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 07:38:37 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 07:38:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:38:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:37 compute-0 sudo[337540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:37 compute-0 sudo[337540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:37 compute-0 sudo[337540]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:37.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:37 compute-0 sudo[337565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:38:37 compute-0 sudo[337565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:37 compute-0 sudo[337565]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:37 compute-0 sudo[337590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:37 compute-0 sudo[337590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:37 compute-0 sudo[337590]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:37 compute-0 sudo[337615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:38:37 compute-0 sudo[337615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:38 compute-0 sudo[337655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:38 compute-0 sudo[337655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:38 compute-0 sudo[337655]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:38 compute-0 sudo[337681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:38 compute-0 sudo[337681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:38 compute-0 sudo[337681]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:38.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:38 compute-0 sudo[337615]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:38:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:38:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:38:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:38:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:38:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 05924983-55f0-4a38-b6d9-442580389f60 does not exist
Dec 06 07:38:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 38aabeb4-080e-41ac-b2c0-de2b64125be8 does not exist
Dec 06 07:38:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2716f341-4fe8-4b67-b1c9-02f4e7b0c61a does not exist
Dec 06 07:38:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:38:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:38:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:38:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:38:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:38:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:38:38 compute-0 sudo[337721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:38 compute-0 sudo[337721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:38 compute-0 nova_compute[251992]: 2025-12-06 07:38:38.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:38 compute-0 sudo[337721]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:38 compute-0 sudo[337746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:38:38 compute-0 sudo[337746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:38 compute-0 sudo[337746]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:38 compute-0 ceph-mon[74339]: pgmap v2465: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 105 op/s
Dec 06 07:38:38 compute-0 ceph-mon[74339]: pgmap v2466: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 24 KiB/s wr, 116 op/s
Dec 06 07:38:38 compute-0 ceph-mon[74339]: pgmap v2467: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 120 op/s
Dec 06 07:38:38 compute-0 ceph-mon[74339]: pgmap v2468: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 7.1 KiB/s wr, 78 op/s
Dec 06 07:38:38 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 07:38:38 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 07:38:38 compute-0 ceph-mon[74339]: pgmap v2469: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.3 KiB/s wr, 67 op/s
Dec 06 07:38:38 compute-0 ceph-mon[74339]: pgmap v2470: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 438 KiB/s rd, 3.8 KiB/s wr, 33 op/s
Dec 06 07:38:38 compute-0 ceph-mon[74339]: pgmap v2471: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 447 KiB/s rd, 3.8 KiB/s wr, 36 op/s
Dec 06 07:38:38 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 07:38:38 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:38:38 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:38:38 compute-0 ceph-mon[74339]: osdmap e307: 3 total, 3 up, 3 in
Dec 06 07:38:38 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 72m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:38:38 compute-0 ceph-mon[74339]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 07:38:38 compute-0 ceph-mon[74339]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 07:38:38 compute-0 ceph-mon[74339]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 07:38:38 compute-0 ceph-mon[74339]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 07:38:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:38:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:38:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:38:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:38:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:38:38 compute-0 sudo[337771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:38 compute-0 sudo[337771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:38 compute-0 nova_compute[251992]: 2025-12-06 07:38:38.782 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:38 compute-0 sudo[337771]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:38 compute-0 sudo[337796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:38:38 compute-0 sudo[337796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 853 B/s wr, 11 op/s
Dec 06 07:38:39 compute-0 podman[337864]: 2025-12-06 07:38:39.161338366 +0000 UTC m=+0.044557443 container create f44629fdc6cc67db4f233a6f9ba4317b76cf73d7781a069de103c3e78b161f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jang, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:38:39 compute-0 systemd[1]: Started libpod-conmon-f44629fdc6cc67db4f233a6f9ba4317b76cf73d7781a069de103c3e78b161f6a.scope.
Dec 06 07:38:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:38:39 compute-0 podman[337864]: 2025-12-06 07:38:39.144573756 +0000 UTC m=+0.027792853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:38:39 compute-0 podman[337864]: 2025-12-06 07:38:39.241587828 +0000 UTC m=+0.124806925 container init f44629fdc6cc67db4f233a6f9ba4317b76cf73d7781a069de103c3e78b161f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:38:39 compute-0 podman[337864]: 2025-12-06 07:38:39.248539829 +0000 UTC m=+0.131758906 container start f44629fdc6cc67db4f233a6f9ba4317b76cf73d7781a069de103c3e78b161f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 07:38:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:39 compute-0 sleepy_jang[337880]: 167 167
Dec 06 07:38:39 compute-0 podman[337864]: 2025-12-06 07:38:39.253277698 +0000 UTC m=+0.136496795 container attach f44629fdc6cc67db4f233a6f9ba4317b76cf73d7781a069de103c3e78b161f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jang, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 07:38:39 compute-0 systemd[1]: libpod-f44629fdc6cc67db4f233a6f9ba4317b76cf73d7781a069de103c3e78b161f6a.scope: Deactivated successfully.
Dec 06 07:38:39 compute-0 conmon[337880]: conmon f44629fdc6cc67db4f23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f44629fdc6cc67db4f233a6f9ba4317b76cf73d7781a069de103c3e78b161f6a.scope/container/memory.events
Dec 06 07:38:39 compute-0 podman[337864]: 2025-12-06 07:38:39.255214692 +0000 UTC m=+0.138433789 container died f44629fdc6cc67db4f233a6f9ba4317b76cf73d7781a069de103c3e78b161f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jang, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 07:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c43855ca606dc850c3b95f13a54263d83ab6c955b2752e6818cb253b4d5314e-merged.mount: Deactivated successfully.
Dec 06 07:38:39 compute-0 podman[337864]: 2025-12-06 07:38:39.29122725 +0000 UTC m=+0.174446327 container remove f44629fdc6cc67db4f233a6f9ba4317b76cf73d7781a069de103c3e78b161f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:38:39 compute-0 systemd[1]: libpod-conmon-f44629fdc6cc67db4f233a6f9ba4317b76cf73d7781a069de103c3e78b161f6a.scope: Deactivated successfully.
Dec 06 07:38:39 compute-0 podman[337904]: 2025-12-06 07:38:39.482662772 +0000 UTC m=+0.041097278 container create 638678c5b7bc3ce2b57ab63e202e979e9845c086515e331b5a2b4c791292d5fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:38:39 compute-0 systemd[1]: Started libpod-conmon-638678c5b7bc3ce2b57ab63e202e979e9845c086515e331b5a2b4c791292d5fd.scope.
Dec 06 07:38:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:38:39 compute-0 podman[337904]: 2025-12-06 07:38:39.464603047 +0000 UTC m=+0.023037573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f8c4c7d4a847352aa103216b72bfef59cb56ab493d28a6e121f3e1f208da47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f8c4c7d4a847352aa103216b72bfef59cb56ab493d28a6e121f3e1f208da47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f8c4c7d4a847352aa103216b72bfef59cb56ab493d28a6e121f3e1f208da47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f8c4c7d4a847352aa103216b72bfef59cb56ab493d28a6e121f3e1f208da47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f8c4c7d4a847352aa103216b72bfef59cb56ab493d28a6e121f3e1f208da47/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:39 compute-0 podman[337904]: 2025-12-06 07:38:39.580138177 +0000 UTC m=+0.138572693 container init 638678c5b7bc3ce2b57ab63e202e979e9845c086515e331b5a2b4c791292d5fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_morse, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:38:39 compute-0 podman[337904]: 2025-12-06 07:38:39.588747853 +0000 UTC m=+0.147182349 container start 638678c5b7bc3ce2b57ab63e202e979e9845c086515e331b5a2b4c791292d5fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 07:38:39 compute-0 podman[337904]: 2025-12-06 07:38:39.592494916 +0000 UTC m=+0.150929412 container attach 638678c5b7bc3ce2b57ab63e202e979e9845c086515e331b5a2b4c791292d5fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 07:38:39 compute-0 ceph-mon[74339]: pgmap v2472: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 853 B/s wr, 11 op/s
Dec 06 07:38:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:40 compute-0 nova_compute[251992]: 2025-12-06 07:38:40.163 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:40 compute-0 nova_compute[251992]: 2025-12-06 07:38:40.183 251996 INFO nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance failed to shutdown in 60 seconds.
Dec 06 07:38:40 compute-0 intelligent_morse[337920]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:38:40 compute-0 intelligent_morse[337920]: --> relative data size: 1.0
Dec 06 07:38:40 compute-0 intelligent_morse[337920]: --> All data devices are unavailable
Dec 06 07:38:40 compute-0 systemd[1]: libpod-638678c5b7bc3ce2b57ab63e202e979e9845c086515e331b5a2b4c791292d5fd.scope: Deactivated successfully.
Dec 06 07:38:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:40 compute-0 podman[337904]: 2025-12-06 07:38:40.483558797 +0000 UTC m=+1.041993303 container died 638678c5b7bc3ce2b57ab63e202e979e9845c086515e331b5a2b4c791292d5fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 07:38:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:40.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8f8c4c7d4a847352aa103216b72bfef59cb56ab493d28a6e121f3e1f208da47-merged.mount: Deactivated successfully.
Dec 06 07:38:40 compute-0 podman[337904]: 2025-12-06 07:38:40.539088329 +0000 UTC m=+1.097522825 container remove 638678c5b7bc3ce2b57ab63e202e979e9845c086515e331b5a2b4c791292d5fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:38:40 compute-0 systemd[1]: libpod-conmon-638678c5b7bc3ce2b57ab63e202e979e9845c086515e331b5a2b4c791292d5fd.scope: Deactivated successfully.
Dec 06 07:38:40 compute-0 sudo[337796]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:40 compute-0 sudo[337947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:40 compute-0 sudo[337947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:40 compute-0 sudo[337947]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:40 compute-0 sudo[337972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:38:40 compute-0 sudo[337972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:40 compute-0 sudo[337972]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:40 compute-0 sudo[337997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:40 compute-0 sudo[337997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:40 compute-0 sudo[337997]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:40 compute-0 sudo[338022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:38:40 compute-0 sudo[338022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 07:38:41 compute-0 podman[338089]: 2025-12-06 07:38:41.117744308 +0000 UTC m=+0.034651912 container create 93d088651ffa156a84507f4c52b099e3f9bfa63195893ccb60b9affc10e6d574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leakey, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:38:41 compute-0 systemd[1]: Started libpod-conmon-93d088651ffa156a84507f4c52b099e3f9bfa63195893ccb60b9affc10e6d574.scope.
Dec 06 07:38:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:38:41 compute-0 podman[338089]: 2025-12-06 07:38:41.170846545 +0000 UTC m=+0.087754149 container init 93d088651ffa156a84507f4c52b099e3f9bfa63195893ccb60b9affc10e6d574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leakey, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:38:41 compute-0 podman[338089]: 2025-12-06 07:38:41.177032364 +0000 UTC m=+0.093939968 container start 93d088651ffa156a84507f4c52b099e3f9bfa63195893ccb60b9affc10e6d574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leakey, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:38:41 compute-0 podman[338089]: 2025-12-06 07:38:41.179877832 +0000 UTC m=+0.096785466 container attach 93d088651ffa156a84507f4c52b099e3f9bfa63195893ccb60b9affc10e6d574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leakey, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:38:41 compute-0 zealous_leakey[338106]: 167 167
Dec 06 07:38:41 compute-0 systemd[1]: libpod-93d088651ffa156a84507f4c52b099e3f9bfa63195893ccb60b9affc10e6d574.scope: Deactivated successfully.
Dec 06 07:38:41 compute-0 podman[338089]: 2025-12-06 07:38:41.181402565 +0000 UTC m=+0.098310169 container died 93d088651ffa156a84507f4c52b099e3f9bfa63195893ccb60b9affc10e6d574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leakey, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:38:41 compute-0 podman[338089]: 2025-12-06 07:38:41.102280443 +0000 UTC m=+0.019188067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:38:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-b308baaa8e3e362f7f636283b7879c2ec7b57260c1ce21f028eb542c444a3eac-merged.mount: Deactivated successfully.
Dec 06 07:38:41 compute-0 podman[338089]: 2025-12-06 07:38:41.216524278 +0000 UTC m=+0.133431882 container remove 93d088651ffa156a84507f4c52b099e3f9bfa63195893ccb60b9affc10e6d574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leakey, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:38:41 compute-0 systemd[1]: libpod-conmon-93d088651ffa156a84507f4c52b099e3f9bfa63195893ccb60b9affc10e6d574.scope: Deactivated successfully.
Dec 06 07:38:41 compute-0 podman[338130]: 2025-12-06 07:38:41.389509765 +0000 UTC m=+0.040883134 container create eec4c8ba3ca3ddd3d1a4068beb9547929c0292f35ab7094c52226b33581eb9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_herschel, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:38:41 compute-0 systemd[1]: Started libpod-conmon-eec4c8ba3ca3ddd3d1a4068beb9547929c0292f35ab7094c52226b33581eb9c1.scope.
Dec 06 07:38:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9a71828afadadb3ba802579f303f1fba3656a3b0b36cae3e730d4d5a608811/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9a71828afadadb3ba802579f303f1fba3656a3b0b36cae3e730d4d5a608811/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9a71828afadadb3ba802579f303f1fba3656a3b0b36cae3e730d4d5a608811/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b9a71828afadadb3ba802579f303f1fba3656a3b0b36cae3e730d4d5a608811/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:41 compute-0 podman[338130]: 2025-12-06 07:38:41.373059313 +0000 UTC m=+0.024432722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:38:41 compute-0 podman[338130]: 2025-12-06 07:38:41.469698505 +0000 UTC m=+0.121071904 container init eec4c8ba3ca3ddd3d1a4068beb9547929c0292f35ab7094c52226b33581eb9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_herschel, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:38:41 compute-0 podman[338130]: 2025-12-06 07:38:41.476637375 +0000 UTC m=+0.128010774 container start eec4c8ba3ca3ddd3d1a4068beb9547929c0292f35ab7094c52226b33581eb9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_herschel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 07:38:41 compute-0 podman[338130]: 2025-12-06 07:38:41.480050679 +0000 UTC m=+0.131424058 container attach eec4c8ba3ca3ddd3d1a4068beb9547929c0292f35ab7094c52226b33581eb9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_herschel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 07:38:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:41 compute-0 ceph-mon[74339]: pgmap v2473: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 07:38:42 compute-0 romantic_herschel[338147]: {
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:     "0": [
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:         {
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             "devices": [
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "/dev/loop3"
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             ],
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             "lv_name": "ceph_lv0",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             "lv_size": "7511998464",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             "name": "ceph_lv0",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             "tags": {
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "ceph.cluster_name": "ceph",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "ceph.crush_device_class": "",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "ceph.encrypted": "0",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "ceph.osd_id": "0",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "ceph.type": "block",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:                 "ceph.vdo": "0"
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             },
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             "type": "block",
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:             "vg_name": "ceph_vg0"
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:         }
Dec 06 07:38:42 compute-0 romantic_herschel[338147]:     ]
Dec 06 07:38:42 compute-0 romantic_herschel[338147]: }
Dec 06 07:38:42 compute-0 systemd[1]: libpod-eec4c8ba3ca3ddd3d1a4068beb9547929c0292f35ab7094c52226b33581eb9c1.scope: Deactivated successfully.
Dec 06 07:38:42 compute-0 conmon[338147]: conmon eec4c8ba3ca3ddd3d1a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eec4c8ba3ca3ddd3d1a4068beb9547929c0292f35ab7094c52226b33581eb9c1.scope/container/memory.events
Dec 06 07:38:42 compute-0 podman[338130]: 2025-12-06 07:38:42.237260616 +0000 UTC m=+0.888633995 container died eec4c8ba3ca3ddd3d1a4068beb9547929c0292f35ab7094c52226b33581eb9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_herschel, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b9a71828afadadb3ba802579f303f1fba3656a3b0b36cae3e730d4d5a608811-merged.mount: Deactivated successfully.
Dec 06 07:38:42 compute-0 podman[338130]: 2025-12-06 07:38:42.292156752 +0000 UTC m=+0.943530131 container remove eec4c8ba3ca3ddd3d1a4068beb9547929c0292f35ab7094c52226b33581eb9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_herschel, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:38:42 compute-0 systemd[1]: libpod-conmon-eec4c8ba3ca3ddd3d1a4068beb9547929c0292f35ab7094c52226b33581eb9c1.scope: Deactivated successfully.
Dec 06 07:38:42 compute-0 sudo[338022]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:42 compute-0 sudo[338170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:42 compute-0 sudo[338170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:42 compute-0 sudo[338170]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:42 compute-0 sudo[338195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:38:42 compute-0 sudo[338195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:42 compute-0 sudo[338195]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:42.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:42 compute-0 sudo[338220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:42 compute-0 sudo[338220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:42 compute-0 sudo[338220]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:42 compute-0 sudo[338245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:38:42 compute-0 sudo[338245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 07:38:42 compute-0 podman[338311]: 2025-12-06 07:38:42.873817163 +0000 UTC m=+0.042976101 container create 95a85ce3d1a06e666b057fd7bc98ca8f49a5ccdd4df51eefec2367fe3d8a5e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:38:42 compute-0 systemd[1]: Started libpod-conmon-95a85ce3d1a06e666b057fd7bc98ca8f49a5ccdd4df51eefec2367fe3d8a5e9c.scope.
Dec 06 07:38:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:38:42 compute-0 podman[338311]: 2025-12-06 07:38:42.93314318 +0000 UTC m=+0.102302138 container init 95a85ce3d1a06e666b057fd7bc98ca8f49a5ccdd4df51eefec2367fe3d8a5e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:38:42 compute-0 podman[338311]: 2025-12-06 07:38:42.939369041 +0000 UTC m=+0.108527979 container start 95a85ce3d1a06e666b057fd7bc98ca8f49a5ccdd4df51eefec2367fe3d8a5e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 07:38:42 compute-0 podman[338311]: 2025-12-06 07:38:42.942466436 +0000 UTC m=+0.111625374 container attach 95a85ce3d1a06e666b057fd7bc98ca8f49a5ccdd4df51eefec2367fe3d8a5e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:38:42 compute-0 eager_tharp[338327]: 167 167
Dec 06 07:38:42 compute-0 systemd[1]: libpod-95a85ce3d1a06e666b057fd7bc98ca8f49a5ccdd4df51eefec2367fe3d8a5e9c.scope: Deactivated successfully.
Dec 06 07:38:42 compute-0 podman[338311]: 2025-12-06 07:38:42.944353868 +0000 UTC m=+0.113512816 container died 95a85ce3d1a06e666b057fd7bc98ca8f49a5ccdd4df51eefec2367fe3d8a5e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 07:38:42 compute-0 podman[338311]: 2025-12-06 07:38:42.856424236 +0000 UTC m=+0.025583204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1e192bae1a4383933b9f6b81295e607c0aa327f52d8cc78b7b76384402813b6-merged.mount: Deactivated successfully.
Dec 06 07:38:42 compute-0 podman[338311]: 2025-12-06 07:38:42.971600276 +0000 UTC m=+0.140759224 container remove 95a85ce3d1a06e666b057fd7bc98ca8f49a5ccdd4df51eefec2367fe3d8a5e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tharp, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:38:42 compute-0 systemd[1]: libpod-conmon-95a85ce3d1a06e666b057fd7bc98ca8f49a5ccdd4df51eefec2367fe3d8a5e9c.scope: Deactivated successfully.
Dec 06 07:38:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:38:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:38:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:38:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:38:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:38:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:38:43 compute-0 podman[338351]: 2025-12-06 07:38:43.181929537 +0000 UTC m=+0.055732810 container create 8c1533a0431076f40d5a0b23bb151c21110b657d7da1c5cd8fd18c8d1d94fb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_austin, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:38:43 compute-0 systemd[1]: Started libpod-conmon-8c1533a0431076f40d5a0b23bb151c21110b657d7da1c5cd8fd18c8d1d94fb65.scope.
Dec 06 07:38:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:38:43 compute-0 podman[338351]: 2025-12-06 07:38:43.164746946 +0000 UTC m=+0.038550239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ebf42f32103942a99754b1c2c6ffd87d7f5524c97f0824662b5c978a604afd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ebf42f32103942a99754b1c2c6ffd87d7f5524c97f0824662b5c978a604afd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ebf42f32103942a99754b1c2c6ffd87d7f5524c97f0824662b5c978a604afd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ebf42f32103942a99754b1c2c6ffd87d7f5524c97f0824662b5c978a604afd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:38:43 compute-0 podman[338351]: 2025-12-06 07:38:43.277421667 +0000 UTC m=+0.151224970 container init 8c1533a0431076f40d5a0b23bb151c21110b657d7da1c5cd8fd18c8d1d94fb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:38:43 compute-0 podman[338351]: 2025-12-06 07:38:43.283895525 +0000 UTC m=+0.157698798 container start 8c1533a0431076f40d5a0b23bb151c21110b657d7da1c5cd8fd18c8d1d94fb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_austin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 07:38:43 compute-0 podman[338351]: 2025-12-06 07:38:43.286778784 +0000 UTC m=+0.160582057 container attach 8c1533a0431076f40d5a0b23bb151c21110b657d7da1c5cd8fd18c8d1d94fb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_austin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:38:43 compute-0 nova_compute[251992]: 2025-12-06 07:38:43.784 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:43 compute-0 ceph-mon[74339]: pgmap v2474: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 07:38:44 compute-0 modest_austin[338367]: {
Dec 06 07:38:44 compute-0 modest_austin[338367]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:38:44 compute-0 modest_austin[338367]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:38:44 compute-0 modest_austin[338367]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:38:44 compute-0 modest_austin[338367]:         "osd_id": 0,
Dec 06 07:38:44 compute-0 modest_austin[338367]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:38:44 compute-0 modest_austin[338367]:         "type": "bluestore"
Dec 06 07:38:44 compute-0 modest_austin[338367]:     }
Dec 06 07:38:44 compute-0 modest_austin[338367]: }
Dec 06 07:38:44 compute-0 systemd[1]: libpod-8c1533a0431076f40d5a0b23bb151c21110b657d7da1c5cd8fd18c8d1d94fb65.scope: Deactivated successfully.
Dec 06 07:38:44 compute-0 podman[338351]: 2025-12-06 07:38:44.097484749 +0000 UTC m=+0.971288022 container died 8c1533a0431076f40d5a0b23bb151c21110b657d7da1c5cd8fd18c8d1d94fb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_austin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-33ebf42f32103942a99754b1c2c6ffd87d7f5524c97f0824662b5c978a604afd-merged.mount: Deactivated successfully.
Dec 06 07:38:44 compute-0 podman[338351]: 2025-12-06 07:38:44.150370929 +0000 UTC m=+1.024174202 container remove 8c1533a0431076f40d5a0b23bb151c21110b657d7da1c5cd8fd18c8d1d94fb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_austin, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 07:38:44 compute-0 systemd[1]: libpod-conmon-8c1533a0431076f40d5a0b23bb151c21110b657d7da1c5cd8fd18c8d1d94fb65.scope: Deactivated successfully.
Dec 06 07:38:44 compute-0 sudo[338245]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:38:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:38:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:44 compute-0 podman[338388]: 2025-12-06 07:38:44.215694143 +0000 UTC m=+0.085946990 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:38:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 870c269a-6a61-4c8d-b54c-f88a6b0cbfea does not exist
Dec 06 07:38:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5d36035e-2741-4897-857e-5568b7c80221 does not exist
Dec 06 07:38:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev fecbd204-b8de-4db0-87c0-2ff63d1a25ad does not exist
Dec 06 07:38:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:44 compute-0 sudo[338426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:44 compute-0 sudo[338426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:44 compute-0 sudo[338426]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:44 compute-0 sudo[338451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:38:44 compute-0 sudo[338451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:44 compute-0 sudo[338451]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:44.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 07:38:45 compute-0 nova_compute[251992]: 2025-12-06 07:38:45.109 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:38:45 compute-0 nova_compute[251992]: 2025-12-06 07:38:45.110 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:38:45 compute-0 nova_compute[251992]: 2025-12-06 07:38:45.164 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:38:45 compute-0 ceph-mon[74339]: pgmap v2475: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 07:38:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:45.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:46.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 07:38:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:47.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:48 compute-0 ceph-mon[74339]: pgmap v2476: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 2 op/s
Dec 06 07:38:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:38:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:48.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:38:48 compute-0 nova_compute[251992]: 2025-12-06 07:38:48.786 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 426 B/s wr, 12 op/s
Dec 06 07:38:49 compute-0 ceph-mon[74339]: pgmap v2477: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 426 B/s wr, 12 op/s
Dec 06 07:38:49 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 07:38:49 compute-0 ceph-mon[74339]: paxos.0).electionLogic(60) init, last seen epoch 60
Dec 06 07:38:49 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:38:49 compute-0 kernel: tap83a9b755-33 (unregistering): left promiscuous mode
Dec 06 07:38:49 compute-0 NetworkManager[48965]: <info>  [1765006729.2530] device (tap83a9b755-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.266 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:49 compute-0 ovn_controller[147168]: 2025-12-06T07:38:49Z|00477|binding|INFO|Releasing lport 83a9b755-339a-4da1-ade2-590aecb2c951 from this chassis (sb_readonly=0)
Dec 06 07:38:49 compute-0 ovn_controller[147168]: 2025-12-06T07:38:49Z|00478|binding|INFO|Setting lport 83a9b755-339a-4da1-ade2-590aecb2c951 down in Southbound
Dec 06 07:38:49 compute-0 ovn_controller[147168]: 2025-12-06T07:38:49Z|00479|binding|INFO|Removing iface tap83a9b755-33 ovn-installed in OVS
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.271 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.282 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:91:ba 10.100.0.7'], port_security=['fa:16:3e:cf:91:ba 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f37cdbe1-70ec-41d7-8e94-24a34612404f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=83a9b755-339a-4da1-ade2-590aecb2c951) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.285 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 83a9b755-339a-4da1-ade2-590aecb2c951 in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 unbound from our chassis
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.285 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.287 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.308 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8268a7c3-b330-4d5d-97b8-e62c8f204600]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:49 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000084.scope: Deactivated successfully.
Dec 06 07:38:49 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000084.scope: Consumed 16.163s CPU time.
Dec 06 07:38:49 compute-0 systemd-machined[212986]: Machine qemu-60-instance-00000084 terminated.
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.349 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdaa287-214a-4bf8-b3ba-1c0a0b739531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.353 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[44686636-fedb-4b36-bb21-92d079f98ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.386 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[64ec4758-365a-4e23-b10c-6ec7afe33527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.407 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[babf76f5-6a49-49d0-b886-a5f6545b5189]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 700, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 700, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 40379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338491, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.429 251996 INFO nova.virt.libvirt.driver [-] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance destroyed successfully.
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.430 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a388eeaa-66b3-4057-a539-11142999f150]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669283, 'tstamp': 669283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338496, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669285, 'tstamp': 669285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338496, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.431 251996 DEBUG nova.objects.instance [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'numa_topology' on Instance uuid f37cdbe1-70ec-41d7-8e94-24a34612404f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.432 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.433 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.438 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.438 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.439 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.439 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:38:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:49.439 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.456 251996 INFO nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Attempting a stable device rescue
Dec 06 07:38:49 compute-0 podman[338506]: 2025-12-06 07:38:49.55251346 +0000 UTC m=+0.068790259 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true)
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.557 251996 DEBUG nova.compute.manager [req-adaef6b4-6ec5-4b1b-87d2-9b2527484ad2 req-e51b33c5-837c-4686-972d-d76ba8ba8b3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-unplugged-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.557 251996 DEBUG oslo_concurrency.lockutils [req-adaef6b4-6ec5-4b1b-87d2-9b2527484ad2 req-e51b33c5-837c-4686-972d-d76ba8ba8b3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.557 251996 DEBUG oslo_concurrency.lockutils [req-adaef6b4-6ec5-4b1b-87d2-9b2527484ad2 req-e51b33c5-837c-4686-972d-d76ba8ba8b3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.558 251996 DEBUG oslo_concurrency.lockutils [req-adaef6b4-6ec5-4b1b-87d2-9b2527484ad2 req-e51b33c5-837c-4686-972d-d76ba8ba8b3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.558 251996 DEBUG nova.compute.manager [req-adaef6b4-6ec5-4b1b-87d2-9b2527484ad2 req-e51b33c5-837c-4686-972d-d76ba8ba8b3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] No waiting events found dispatching network-vif-unplugged-83a9b755-339a-4da1-ade2-590aecb2c951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.558 251996 WARNING nova.compute.manager [req-adaef6b4-6ec5-4b1b-87d2-9b2527484ad2 req-e51b33c5-837c-4686-972d-d76ba8ba8b3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received unexpected event network-vif-unplugged-83a9b755-339a-4da1-ade2-590aecb2c951 for instance with vm_state active and task_state rescuing.
Dec 06 07:38:49 compute-0 podman[338505]: 2025-12-06 07:38:49.558905146 +0000 UTC m=+0.077938970 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:38:49 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.794 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.799 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.799 251996 INFO nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Creating image(s)
Dec 06 07:38:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:49.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.833 251996 DEBUG nova.storage.rbd_utils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.838 251996 DEBUG nova.objects.instance [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'trusted_certs' on Instance uuid f37cdbe1-70ec-41d7-8e94-24a34612404f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.926 251996 DEBUG nova.storage.rbd_utils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.952 251996 DEBUG nova.storage.rbd_utils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.955 251996 DEBUG oslo_concurrency.lockutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "d1af9eaaea02f449337ad0b10fe61866cc1dcf65" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:49 compute-0 nova_compute[251992]: 2025-12-06 07:38:49.956 251996 DEBUG oslo_concurrency.lockutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "d1af9eaaea02f449337ad0b10fe61866cc1dcf65" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.197 251996 DEBUG nova.virt.libvirt.imagebackend [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/441cc72d-7648-41a4-9dbf-54930196cf56/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/441cc72d-7648-41a4-9dbf-54930196cf56/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.246 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.250 251996 DEBUG nova.virt.libvirt.imagebackend [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/441cc72d-7648-41a4-9dbf-54930196cf56/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.251 251996 DEBUG nova.storage.rbd_utils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] cloning images/441cc72d-7648-41a4-9dbf-54930196cf56@snap to None/f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.366 251996 DEBUG oslo_concurrency.lockutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "d1af9eaaea02f449337ad0b10fe61866cc1dcf65" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.409 251996 DEBUG nova.objects.instance [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'migration_context' on Instance uuid f37cdbe1-70ec-41d7-8e94-24a34612404f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.422 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.424 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Start _get_guest_xml network_info=[{"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "vif_mac": "fa:16:3e:cf:91:ba"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '441cc72d-7648-41a4-9dbf-54930196cf56', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4cae51dd-790a-4050-b673-0850eb817a06', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4cae51dd-790a-4050-b673-0850eb817a06', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'f37cdbe1-70ec-41d7-8e94-24a34612404f', 'attached_at': '', 'detached_at': '', 'volume_id': '4cae51dd-790a-4050-b673-0850eb817a06', 'serial': '4cae51dd-790a-4050-b673-0850eb817a06'}, 'attachment_id': '915aa72a-4057-4688-bfcf-15f26826837c', 'guest_format': None, 'delete_on_termination': False, 'disk_bus': 'virtio', 'boot_index': 0, 'device_type': 'disk', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.425 251996 DEBUG nova.objects.instance [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'resources' on Instance uuid f37cdbe1-70ec-41d7-8e94-24a34612404f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.440 251996 WARNING nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.448 251996 DEBUG nova.virt.libvirt.host [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.449 251996 DEBUG nova.virt.libvirt.host [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.454 251996 DEBUG nova.virt.libvirt.host [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.454 251996 DEBUG nova.virt.libvirt.host [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.455 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.456 251996 DEBUG nova.virt.hardware [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.456 251996 DEBUG nova.virt.hardware [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.456 251996 DEBUG nova.virt.hardware [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.457 251996 DEBUG nova.virt.hardware [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.457 251996 DEBUG nova.virt.hardware [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.457 251996 DEBUG nova.virt.hardware [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.457 251996 DEBUG nova.virt.hardware [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.458 251996 DEBUG nova.virt.hardware [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.458 251996 DEBUG nova.virt.hardware [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.458 251996 DEBUG nova.virt.hardware [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.459 251996 DEBUG nova.virt.hardware [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.459 251996 DEBUG nova.objects.instance [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f37cdbe1-70ec-41d7-8e94-24a34612404f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:38:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:50.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.512 251996 DEBUG oslo_concurrency.processutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:38:50 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:38:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 07:38:50 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:38:50 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Dec 06 07:38:50 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 72m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:38:50 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 07:38:50 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 07:38:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 982 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 743 KiB/s wr, 77 op/s
Dec 06 07:38:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:38:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314860632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:50 compute-0 nova_compute[251992]: 2025-12-06 07:38:50.993 251996 DEBUG oslo_concurrency.processutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.047 251996 DEBUG oslo_concurrency.processutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:38:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:38:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1306523317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.523 251996 DEBUG oslo_concurrency.processutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.525 251996 DEBUG nova.virt.libvirt.vif [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:36:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-40970156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-40970156',id=132,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:37:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-1min8f00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_m
in_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:37:34Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=f37cdbe1-70ec-41d7-8e94-24a34612404f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "vif_mac": "fa:16:3e:cf:91:ba"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.526 251996 DEBUG nova.network.os_vif_util [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "vif_mac": "fa:16:3e:cf:91:ba"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.527 251996 DEBUG nova.network.os_vif_util [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:91:ba,bridge_name='br-int',has_traffic_filtering=True,id=83a9b755-339a-4da1-ade2-590aecb2c951,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9b755-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.528 251996 DEBUG nova.objects.instance [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'pci_devices' on Instance uuid f37cdbe1-70ec-41d7-8e94-24a34612404f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.546 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:38:51 compute-0 nova_compute[251992]:   <uuid>f37cdbe1-70ec-41d7-8e94-24a34612404f</uuid>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   <name>instance-00000084</name>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-40970156</nova:name>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:38:50</nova:creationTime>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <nova:user uuid="2aa5b15c15f84a8cb24776d5c781eb09">tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member</nova:user>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <nova:project uuid="17cdfa63c4424ec7a0eb4bb3d7372c14">tempest-ServerBootFromVolumeStableRescueTest-344238221</nova:project>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <nova:port uuid="83a9b755-339a-4da1-ade2-590aecb2c951">
Dec 06 07:38:51 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <system>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <entry name="serial">f37cdbe1-70ec-41d7-8e94-24a34612404f</entry>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <entry name="uuid">f37cdbe1-70ec-41d7-8e94-24a34612404f</entry>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     </system>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   <os>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   </os>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   <features>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   </features>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.config">
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       </source>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <source protocol="rbd" name="volumes/volume-4cae51dd-790a-4050-b673-0850eb817a06">
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       </source>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <serial>4cae51dd-790a-4050-b673-0850eb817a06</serial>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.rescue">
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       </source>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:38:51 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <target dev="vdb" bus="virtio"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <boot order="1"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:cf:91:ba"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <target dev="tap83a9b755-33"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/console.log" append="off"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <video>
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     </video>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:38:51 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:38:51 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:38:51 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:38:51 compute-0 nova_compute[251992]: </domain>
Dec 06 07:38:51 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.557 251996 INFO nova.virt.libvirt.driver [-] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance destroyed successfully.
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.634 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.635 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.635 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.635 251996 DEBUG nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] No VIF found with MAC fa:16:3e:cf:91:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.636 251996 INFO nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Using config drive
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.659 251996 DEBUG nova.storage.rbd_utils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.691 251996 DEBUG nova.objects.instance [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'ec2_ids' on Instance uuid f37cdbe1-70ec-41d7-8e94-24a34612404f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.761 251996 DEBUG nova.objects.instance [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'keypairs' on Instance uuid f37cdbe1-70ec-41d7-8e94-24a34612404f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.823 251996 DEBUG nova.compute.manager [req-e7464d5d-fe65-48c0-ad0c-9a99ed4a921b req-5e4f1521-11e2-420a-a13c-e67ad8218f6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.824 251996 DEBUG oslo_concurrency.lockutils [req-e7464d5d-fe65-48c0-ad0c-9a99ed4a921b req-5e4f1521-11e2-420a-a13c-e67ad8218f6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.825 251996 DEBUG oslo_concurrency.lockutils [req-e7464d5d-fe65-48c0-ad0c-9a99ed4a921b req-5e4f1521-11e2-420a-a13c-e67ad8218f6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.825 251996 DEBUG oslo_concurrency.lockutils [req-e7464d5d-fe65-48c0-ad0c-9a99ed4a921b req-5e4f1521-11e2-420a-a13c-e67ad8218f6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.825 251996 DEBUG nova.compute.manager [req-e7464d5d-fe65-48c0-ad0c-9a99ed4a921b req-5e4f1521-11e2-420a-a13c-e67ad8218f6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] No waiting events found dispatching network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:38:51 compute-0 nova_compute[251992]: 2025-12-06 07:38:51.826 251996 WARNING nova.compute.manager [req-e7464d5d-fe65-48c0-ad0c-9a99ed4a921b req-5e4f1521-11e2-420a-a13c-e67ad8218f6a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received unexpected event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 for instance with vm_state active and task_state rescuing.
Dec 06 07:38:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:51.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:51 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 07:38:52 compute-0 nova_compute[251992]: 2025-12-06 07:38:52.232 251996 INFO nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Creating config drive at /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/disk.config.rescue
Dec 06 07:38:52 compute-0 nova_compute[251992]: 2025-12-06 07:38:52.237 251996 DEBUG oslo_concurrency.processutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkb4nunct execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:38:52 compute-0 nova_compute[251992]: 2025-12-06 07:38:52.370 251996 DEBUG oslo_concurrency.processutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkb4nunct" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:38:52 compute-0 nova_compute[251992]: 2025-12-06 07:38:52.396 251996 DEBUG nova.storage.rbd_utils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] rbd image f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:38:52 compute-0 nova_compute[251992]: 2025-12-06 07:38:52.399 251996 DEBUG oslo_concurrency.processutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/disk.config.rescue f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:38:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:52.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 987 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 103 op/s
Dec 06 07:38:52 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 07:38:52 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 07:38:52 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 07:38:52 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 07:38:52 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 07:38:52 compute-0 ceph-mon[74339]: osdmap e307: 3 total, 3 up, 3 in
Dec 06 07:38:52 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 72m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 07:38:52 compute-0 ceph-mon[74339]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 07:38:52 compute-0 ceph-mon[74339]: Cluster is now healthy
Dec 06 07:38:53 compute-0 nova_compute[251992]: 2025-12-06 07:38:53.789 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:53.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:54 compute-0 ceph-mon[74339]: mon.compute-1 calling monitor election
Dec 06 07:38:54 compute-0 ceph-mon[74339]: pgmap v2478: 305 pgs: 305 active+clean; 982 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 743 KiB/s wr, 77 op/s
Dec 06 07:38:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1314860632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1306523317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:38:54 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 07:38:54 compute-0 ceph-mon[74339]: pgmap v2479: 305 pgs: 305 active+clean; 987 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 103 op/s
Dec 06 07:38:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:54.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 987 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 103 op/s
Dec 06 07:38:55 compute-0 nova_compute[251992]: 2025-12-06 07:38:55.224 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:55 compute-0 nova_compute[251992]: 2025-12-06 07:38:55.291 251996 DEBUG oslo_concurrency.processutils [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/disk.config.rescue f37cdbe1-70ec-41d7-8e94-24a34612404f_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.891s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:38:55 compute-0 nova_compute[251992]: 2025-12-06 07:38:55.291 251996 INFO nova.virt.libvirt.driver [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Deleting local config drive /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f/disk.config.rescue because it was imported into RBD.
Dec 06 07:38:55 compute-0 kernel: tap83a9b755-33: entered promiscuous mode
Dec 06 07:38:55 compute-0 NetworkManager[48965]: <info>  [1765006735.3548] manager: (tap83a9b755-33): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Dec 06 07:38:55 compute-0 ovn_controller[147168]: 2025-12-06T07:38:55Z|00480|binding|INFO|Claiming lport 83a9b755-339a-4da1-ade2-590aecb2c951 for this chassis.
Dec 06 07:38:55 compute-0 ovn_controller[147168]: 2025-12-06T07:38:55Z|00481|binding|INFO|83a9b755-339a-4da1-ade2-590aecb2c951: Claiming fa:16:3e:cf:91:ba 10.100.0.7
Dec 06 07:38:55 compute-0 nova_compute[251992]: 2025-12-06 07:38:55.355 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:55 compute-0 ovn_controller[147168]: 2025-12-06T07:38:55Z|00482|binding|INFO|Setting lport 83a9b755-339a-4da1-ade2-590aecb2c951 ovn-installed in OVS
Dec 06 07:38:55 compute-0 nova_compute[251992]: 2025-12-06 07:38:55.375 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:55 compute-0 ovn_controller[147168]: 2025-12-06T07:38:55Z|00483|binding|INFO|Setting lport 83a9b755-339a-4da1-ade2-590aecb2c951 up in Southbound
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.379 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:91:ba 10.100.0.7'], port_security=['fa:16:3e:cf:91:ba 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f37cdbe1-70ec-41d7-8e94-24a34612404f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '5', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=83a9b755-339a-4da1-ade2-590aecb2c951) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.381 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 83a9b755-339a-4da1-ade2-590aecb2c951 in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 bound to our chassis
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.383 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:38:55 compute-0 systemd-machined[212986]: New machine qemu-61-instance-00000084.
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.403 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[76d769bd-2139-429e-929e-988e6d6f5506]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:55 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000084.
Dec 06 07:38:55 compute-0 systemd-udevd[338825]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:38:55 compute-0 NetworkManager[48965]: <info>  [1765006735.4336] device (tap83a9b755-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:38:55 compute-0 NetworkManager[48965]: <info>  [1765006735.4345] device (tap83a9b755-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.435 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba74757-a242-42bd-a98c-7d9d49bcbf54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.439 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3730cc-ceea-42d0-aae9-ff16f1cec180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.471 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[5a1bbf10-fd09-402a-87ee-4ae11d3aff7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.490 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d23cd747-880e-4dda-8b70-eeea138dc54d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 700, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 700, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 40379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338835, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.507 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[102bfe61-eaa1-42a5-b09d-53bdcdfe7c97]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669283, 'tstamp': 669283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338837, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669285, 'tstamp': 669285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338837, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.508 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:38:55 compute-0 nova_compute[251992]: 2025-12-06 07:38:55.510 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:55 compute-0 nova_compute[251992]: 2025-12-06 07:38:55.511 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.512 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.512 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.513 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:38:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:38:55.513 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:38:55 compute-0 ceph-mon[74339]: pgmap v2480: 305 pgs: 305 active+clean; 987 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 103 op/s
Dec 06 07:38:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:55.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:56 compute-0 nova_compute[251992]: 2025-12-06 07:38:56.363 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for f37cdbe1-70ec-41d7-8e94-24a34612404f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:38:56 compute-0 nova_compute[251992]: 2025-12-06 07:38:56.364 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006736.3625808, f37cdbe1-70ec-41d7-8e94-24a34612404f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:38:56 compute-0 nova_compute[251992]: 2025-12-06 07:38:56.364 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] VM Resumed (Lifecycle Event)
Dec 06 07:38:56 compute-0 nova_compute[251992]: 2025-12-06 07:38:56.368 251996 DEBUG nova.compute.manager [None req-2a604e87-1dd7-4a80-b5dc-b8816dc41921 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:38:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:56.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 118 op/s
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.026 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.030 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.073 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.073 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006736.3628917, f37cdbe1-70ec-41d7-8e94-24a34612404f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.074 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] VM Started (Lifecycle Event)
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.110 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.114 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.289 251996 DEBUG nova.compute.manager [req-d803737f-98dc-4f85-acc3-24497ded0031 req-646e416b-52d7-4605-b7a7-8aa64458b833 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.290 251996 DEBUG oslo_concurrency.lockutils [req-d803737f-98dc-4f85-acc3-24497ded0031 req-646e416b-52d7-4605-b7a7-8aa64458b833 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.292 251996 DEBUG oslo_concurrency.lockutils [req-d803737f-98dc-4f85-acc3-24497ded0031 req-646e416b-52d7-4605-b7a7-8aa64458b833 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.292 251996 DEBUG oslo_concurrency.lockutils [req-d803737f-98dc-4f85-acc3-24497ded0031 req-646e416b-52d7-4605-b7a7-8aa64458b833 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.293 251996 DEBUG nova.compute.manager [req-d803737f-98dc-4f85-acc3-24497ded0031 req-646e416b-52d7-4605-b7a7-8aa64458b833 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] No waiting events found dispatching network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.293 251996 WARNING nova.compute.manager [req-d803737f-98dc-4f85-acc3-24497ded0031 req-646e416b-52d7-4605-b7a7-8aa64458b833 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received unexpected event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 for instance with vm_state rescued and task_state None.
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.494 251996 DEBUG nova.compute.manager [req-318c534d-e77d-4de2-b068-d42c6ed76f00 req-cb7a7dcf-75e6-4de5-8e9d-53b39a63255f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.495 251996 DEBUG oslo_concurrency.lockutils [req-318c534d-e77d-4de2-b068-d42c6ed76f00 req-cb7a7dcf-75e6-4de5-8e9d-53b39a63255f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.496 251996 DEBUG oslo_concurrency.lockutils [req-318c534d-e77d-4de2-b068-d42c6ed76f00 req-cb7a7dcf-75e6-4de5-8e9d-53b39a63255f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.496 251996 DEBUG oslo_concurrency.lockutils [req-318c534d-e77d-4de2-b068-d42c6ed76f00 req-cb7a7dcf-75e6-4de5-8e9d-53b39a63255f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.496 251996 DEBUG nova.compute.manager [req-318c534d-e77d-4de2-b068-d42c6ed76f00 req-cb7a7dcf-75e6-4de5-8e9d-53b39a63255f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:38:57 compute-0 nova_compute[251992]: 2025-12-06 07:38:57.497 251996 WARNING nova.compute.manager [req-318c534d-e77d-4de2-b068-d42c6ed76f00 req-cb7a7dcf-75e6-4de5-8e9d-53b39a63255f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state resized and task_state resize_reverting.
Dec 06 07:38:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:38:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:57.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:38:58 compute-0 ceph-mon[74339]: pgmap v2481: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 118 op/s
Dec 06 07:38:58 compute-0 sudo[338899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:58 compute-0 sudo[338899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:58 compute-0 sudo[338899]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:38:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:38:58.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:38:58 compute-0 sudo[338924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:38:58 compute-0 sudo[338924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:38:58 compute-0 sudo[338924]: pam_unix(sudo:session): session closed for user root
Dec 06 07:38:58 compute-0 nova_compute[251992]: 2025-12-06 07:38:58.792 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:38:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Dec 06 07:38:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:38:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1863463196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:38:59 compute-0 ceph-mon[74339]: pgmap v2482: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Dec 06 07:38:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:38:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:38:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:38:59.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:38:59 compute-0 nova_compute[251992]: 2025-12-06 07:38:59.869 251996 DEBUG nova.compute.manager [req-842218eb-4242-4d52-b5f4-60739b9a8fb3 req-cfd0353f-d4ca-4ccf-aded-93c660ef7a30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:38:59 compute-0 nova_compute[251992]: 2025-12-06 07:38:59.869 251996 DEBUG oslo_concurrency.lockutils [req-842218eb-4242-4d52-b5f4-60739b9a8fb3 req-cfd0353f-d4ca-4ccf-aded-93c660ef7a30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:38:59 compute-0 nova_compute[251992]: 2025-12-06 07:38:59.870 251996 DEBUG oslo_concurrency.lockutils [req-842218eb-4242-4d52-b5f4-60739b9a8fb3 req-cfd0353f-d4ca-4ccf-aded-93c660ef7a30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:38:59 compute-0 nova_compute[251992]: 2025-12-06 07:38:59.870 251996 DEBUG oslo_concurrency.lockutils [req-842218eb-4242-4d52-b5f4-60739b9a8fb3 req-cfd0353f-d4ca-4ccf-aded-93c660ef7a30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:38:59 compute-0 nova_compute[251992]: 2025-12-06 07:38:59.871 251996 DEBUG nova.compute.manager [req-842218eb-4242-4d52-b5f4-60739b9a8fb3 req-cfd0353f-d4ca-4ccf-aded-93c660ef7a30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] No waiting events found dispatching network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:38:59 compute-0 nova_compute[251992]: 2025-12-06 07:38:59.871 251996 WARNING nova.compute.manager [req-842218eb-4242-4d52-b5f4-60739b9a8fb3 req-cfd0353f-d4ca-4ccf-aded-93c660ef7a30 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received unexpected event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 for instance with vm_state rescued and task_state None.
Dec 06 07:39:00 compute-0 nova_compute[251992]: 2025-12-06 07:39:00.036 251996 DEBUG nova.compute.manager [req-22459016-4ac9-496a-8e40-46c13e627dae req-b481a38a-02a9-4f00-b94f-8e24ca387fba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:00 compute-0 nova_compute[251992]: 2025-12-06 07:39:00.037 251996 DEBUG oslo_concurrency.lockutils [req-22459016-4ac9-496a-8e40-46c13e627dae req-b481a38a-02a9-4f00-b94f-8e24ca387fba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:00 compute-0 nova_compute[251992]: 2025-12-06 07:39:00.037 251996 DEBUG oslo_concurrency.lockutils [req-22459016-4ac9-496a-8e40-46c13e627dae req-b481a38a-02a9-4f00-b94f-8e24ca387fba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:00 compute-0 nova_compute[251992]: 2025-12-06 07:39:00.038 251996 DEBUG oslo_concurrency.lockutils [req-22459016-4ac9-496a-8e40-46c13e627dae req-b481a38a-02a9-4f00-b94f-8e24ca387fba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:00 compute-0 nova_compute[251992]: 2025-12-06 07:39:00.038 251996 DEBUG nova.compute.manager [req-22459016-4ac9-496a-8e40-46c13e627dae req-b481a38a-02a9-4f00-b94f-8e24ca387fba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:00 compute-0 nova_compute[251992]: 2025-12-06 07:39:00.038 251996 WARNING nova.compute.manager [req-22459016-4ac9-496a-8e40-46c13e627dae req-b481a38a-02a9-4f00-b94f-8e24ca387fba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state resized and task_state resize_reverting.
Dec 06 07:39:00 compute-0 nova_compute[251992]: 2025-12-06 07:39:00.189 251996 INFO nova.compute.manager [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Swapping old allocation on dict_keys(['e75da5bf-16fa-49b1-b5e1-3aa61daf0433']) held by migration 47ca4852-1aea-48fa-a209-2395d57c0b69 for instance
Dec 06 07:39:00 compute-0 nova_compute[251992]: 2025-12-06 07:39:00.227 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:00 compute-0 nova_compute[251992]: 2025-12-06 07:39:00.230 251996 DEBUG nova.scheduler.client.report [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Overwriting current allocation {'allocations': {'6d00757a-082f-486d-ae84-869a2ba2e6e7': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 65}}, 'project_id': 'b10aa03d68eb4d4799d53538521cc364', 'user_id': 'a70f6c3c5e2c402bb6fa0e0507e9b6dc', 'consumer_generation': 1} on consumer 2de097e3-8182-48e5-b69d-88acbfb84e66 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Dec 06 07:39:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:00.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:00 compute-0 nova_compute[251992]: 2025-12-06 07:39:00.611 251996 INFO nova.network.neutron [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating port 8690867c-c0a8-4574-b54f-38486691e339 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 06 07:39:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Dec 06 07:39:01 compute-0 nova_compute[251992]: 2025-12-06 07:39:01.620 251996 INFO nova.compute.manager [None req-1bd075a5-f3fc-43b8-8ea1-0b62d7953ca5 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Unrescuing
Dec 06 07:39:01 compute-0 nova_compute[251992]: 2025-12-06 07:39:01.621 251996 DEBUG oslo_concurrency.lockutils [None req-1bd075a5-f3fc-43b8-8ea1-0b62d7953ca5 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "refresh_cache-f37cdbe1-70ec-41d7-8e94-24a34612404f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:39:01 compute-0 nova_compute[251992]: 2025-12-06 07:39:01.621 251996 DEBUG oslo_concurrency.lockutils [None req-1bd075a5-f3fc-43b8-8ea1-0b62d7953ca5 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquired lock "refresh_cache-f37cdbe1-70ec-41d7-8e94-24a34612404f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:39:01 compute-0 nova_compute[251992]: 2025-12-06 07:39:01.622 251996 DEBUG nova.network.neutron [None req-1bd075a5-f3fc-43b8-8ea1-0b62d7953ca5 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:39:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:01.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:02 compute-0 ceph-mon[74339]: pgmap v2483: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Dec 06 07:39:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:39:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:02.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:39:02 compute-0 nova_compute[251992]: 2025-12-06 07:39:02.807 251996 DEBUG oslo_concurrency.lockutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:39:02 compute-0 nova_compute[251992]: 2025-12-06 07:39:02.807 251996 DEBUG oslo_concurrency.lockutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:39:02 compute-0 nova_compute[251992]: 2025-12-06 07:39:02.808 251996 DEBUG nova.network.neutron [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:39:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.2 MiB/s wr, 169 op/s
Dec 06 07:39:03 compute-0 ceph-mon[74339]: pgmap v2484: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.2 MiB/s wr, 169 op/s
Dec 06 07:39:03 compute-0 nova_compute[251992]: 2025-12-06 07:39:03.797 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:03.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:03.847 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:03.849 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:03.849 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:04.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 143 op/s
Dec 06 07:39:04 compute-0 nova_compute[251992]: 2025-12-06 07:39:04.933 251996 DEBUG nova.compute.manager [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-changed-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:04 compute-0 nova_compute[251992]: 2025-12-06 07:39:04.934 251996 DEBUG nova.compute.manager [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Refreshing instance network info cache due to event network-changed-8690867c-c0a8-4574-b54f-38486691e339. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:39:04 compute-0 nova_compute[251992]: 2025-12-06 07:39:04.934 251996 DEBUG oslo_concurrency.lockutils [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:39:05 compute-0 nova_compute[251992]: 2025-12-06 07:39:05.228 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:05.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:06 compute-0 ceph-mon[74339]: pgmap v2485: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 143 op/s
Dec 06 07:39:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:06.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 144 op/s
Dec 06 07:39:07 compute-0 ceph-mon[74339]: pgmap v2486: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 144 op/s
Dec 06 07:39:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:07.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:08.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:08 compute-0 nova_compute[251992]: 2025-12-06 07:39:08.845 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:08 compute-0 nova_compute[251992]: 2025-12-06 07:39:08.866 251996 DEBUG nova.network.neutron [None req-1bd075a5-f3fc-43b8-8ea1-0b62d7953ca5 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Updating instance_info_cache with network_info: [{"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:39:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 544 KiB/s wr, 129 op/s
Dec 06 07:39:08 compute-0 nova_compute[251992]: 2025-12-06 07:39:08.919 251996 DEBUG oslo_concurrency.lockutils [None req-1bd075a5-f3fc-43b8-8ea1-0b62d7953ca5 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Releasing lock "refresh_cache-f37cdbe1-70ec-41d7-8e94-24a34612404f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:39:08 compute-0 nova_compute[251992]: 2025-12-06 07:39:08.920 251996 DEBUG nova.objects.instance [None req-1bd075a5-f3fc-43b8-8ea1-0b62d7953ca5 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'flavor' on Instance uuid f37cdbe1-70ec-41d7-8e94-24a34612404f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:09 compute-0 kernel: tap83a9b755-33 (unregistering): left promiscuous mode
Dec 06 07:39:09 compute-0 NetworkManager[48965]: <info>  [1765006749.0069] device (tap83a9b755-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:39:09 compute-0 ovn_controller[147168]: 2025-12-06T07:39:09Z|00484|binding|INFO|Releasing lport 83a9b755-339a-4da1-ade2-590aecb2c951 from this chassis (sb_readonly=0)
Dec 06 07:39:09 compute-0 nova_compute[251992]: 2025-12-06 07:39:09.015 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:09 compute-0 ovn_controller[147168]: 2025-12-06T07:39:09Z|00485|binding|INFO|Setting lport 83a9b755-339a-4da1-ade2-590aecb2c951 down in Southbound
Dec 06 07:39:09 compute-0 ovn_controller[147168]: 2025-12-06T07:39:09Z|00486|binding|INFO|Removing iface tap83a9b755-33 ovn-installed in OVS
Dec 06 07:39:09 compute-0 nova_compute[251992]: 2025-12-06 07:39:09.020 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:09 compute-0 nova_compute[251992]: 2025-12-06 07:39:09.033 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:09 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000084.scope: Deactivated successfully.
Dec 06 07:39:09 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000084.scope: Consumed 13.420s CPU time.
Dec 06 07:39:09 compute-0 systemd-machined[212986]: Machine qemu-61-instance-00000084 terminated.
Dec 06 07:39:09 compute-0 nova_compute[251992]: 2025-12-06 07:39:09.188 251996 INFO nova.virt.libvirt.driver [-] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance destroyed successfully.
Dec 06 07:39:09 compute-0 nova_compute[251992]: 2025-12-06 07:39:09.188 251996 DEBUG nova.objects.instance [None req-1bd075a5-f3fc-43b8-8ea1-0b62d7953ca5 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'numa_topology' on Instance uuid f37cdbe1-70ec-41d7-8e94-24a34612404f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.232 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:91:ba 10.100.0.7'], port_security=['fa:16:3e:cf:91:ba 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f37cdbe1-70ec-41d7-8e94-24a34612404f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '6', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=83a9b755-339a-4da1-ade2-590aecb2c951) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.234 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 83a9b755-339a-4da1-ade2-590aecb2c951 in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 unbound from our chassis
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.235 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.251 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[059314d0-cf49-45fa-8228-cea3cf7a21d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.282 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c0255c-f76c-4335-a113-5723eb4ed7cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.285 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[552dbb6b-6490-4bb9-8a65-89a8a58dcfac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.307 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[404c4a80-e517-4359-8a05-a0869c1c996f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.326 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[26e86095-99fa-4ad7-a307-e15dc05341e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 22, 'rx_bytes': 700, 'tx_bytes': 1116, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 22, 'rx_bytes': 700, 'tx_bytes': 1116, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 40379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338978, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.341 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3638af-0753-45c3-85b4-ba851fb6510e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669283, 'tstamp': 669283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338979, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669285, 'tstamp': 669285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338979, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.342 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:09 compute-0 nova_compute[251992]: 2025-12-06 07:39:09.344 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:09 compute-0 nova_compute[251992]: 2025-12-06 07:39:09.347 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.348 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.348 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.348 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:09.349 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:39:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:09.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:39:09 compute-0 nova_compute[251992]: 2025-12-06 07:39:09.939 251996 DEBUG nova.network.neutron [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance_info_cache with network_info: [{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:39:09 compute-0 ceph-mon[74339]: pgmap v2487: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 544 KiB/s wr, 129 op/s
Dec 06 07:39:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/518087762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:39:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/518087762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:39:10 compute-0 nova_compute[251992]: 2025-12-06 07:39:10.230 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:10.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 37 KiB/s wr, 86 op/s
Dec 06 07:39:11 compute-0 ceph-mon[74339]: pgmap v2488: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 37 KiB/s wr, 86 op/s
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.679 251996 DEBUG oslo_concurrency.lockutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.680 251996 DEBUG os_brick.utils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.682 251996 DEBUG oslo_concurrency.lockutils [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.683 251996 DEBUG nova.network.neutron [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Refreshing network info cache for port 8690867c-c0a8-4574-b54f-38486691e339 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.683 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.695 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.696 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[623c6356-b26e-40c4-a995-68e3711070a0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.697 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.705 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.705 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[452cfe67-8767-485a-afe5-4aacd3d0c23b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.707 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.716 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.717 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8e3560-0f6d-4718-a25c-70245bcb6950]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.718 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[b08a8462-3dd2-41f4-bdbf-59adb8e05558]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.719 251996 DEBUG oslo_concurrency.processutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:11 compute-0 kernel: tap83a9b755-33: entered promiscuous mode
Dec 06 07:39:11 compute-0 NetworkManager[48965]: <info>  [1765006751.7340] manager: (tap83a9b755-33): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Dec 06 07:39:11 compute-0 systemd-udevd[338958]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:39:11 compute-0 ovn_controller[147168]: 2025-12-06T07:39:11Z|00487|binding|INFO|Claiming lport 83a9b755-339a-4da1-ade2-590aecb2c951 for this chassis.
Dec 06 07:39:11 compute-0 ovn_controller[147168]: 2025-12-06T07:39:11Z|00488|binding|INFO|83a9b755-339a-4da1-ade2-590aecb2c951: Claiming fa:16:3e:cf:91:ba 10.100.0.7
Dec 06 07:39:11 compute-0 NetworkManager[48965]: <info>  [1765006751.7466] device (tap83a9b755-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:39:11 compute-0 NetworkManager[48965]: <info>  [1765006751.7478] device (tap83a9b755-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.747 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:91:ba 10.100.0.7'], port_security=['fa:16:3e:cf:91:ba 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f37cdbe1-70ec-41d7-8e94-24a34612404f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '6', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=83a9b755-339a-4da1-ade2-590aecb2c951) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.747 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.749 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 83a9b755-339a-4da1-ade2-590aecb2c951 in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 bound to our chassis
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.751 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:39:11 compute-0 ovn_controller[147168]: 2025-12-06T07:39:11Z|00489|binding|INFO|Setting lport 83a9b755-339a-4da1-ade2-590aecb2c951 ovn-installed in OVS
Dec 06 07:39:11 compute-0 ovn_controller[147168]: 2025-12-06T07:39:11Z|00490|binding|INFO|Setting lport 83a9b755-339a-4da1-ade2-590aecb2c951 up in Southbound
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.753 251996 DEBUG oslo_concurrency.processutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.755 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.757 251996 DEBUG os_brick.initiator.connectors.lightos [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.757 251996 DEBUG os_brick.initiator.connectors.lightos [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.757 251996 DEBUG os_brick.initiator.connectors.lightos [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.758 251996 DEBUG os_brick.utils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.762 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.766 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[eff54516-9ff5-4a35-b694-ee24e7c90230]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:11 compute-0 systemd-machined[212986]: New machine qemu-62-instance-00000084.
Dec 06 07:39:11 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000084.
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.799 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e0262f-7cf2-4956-be6f-8df278637748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.801 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[97f80a1e-964b-431e-92b4-176f19d27db8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.824 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[22e605d4-6dd6-4d4e-9c8e-03073c3e6805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.841 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[18de0b5e-8dbd-4695-a4c6-7aed5f59a44b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 24, 'rx_bytes': 700, 'tx_bytes': 1200, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 24, 'rx_bytes': 700, 'tx_bytes': 1200, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 40379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339010, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:39:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:11.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.857 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f2953ade-a43f-4d79-a003-25412b6e177b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669283, 'tstamp': 669283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339015, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669285, 'tstamp': 669285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339015, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.859 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.860 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:11 compute-0 nova_compute[251992]: 2025-12-06 07:39:11.861 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.862 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.862 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.863 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:11.863 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:12 compute-0 nova_compute[251992]: 2025-12-06 07:39:12.119 251996 DEBUG nova.compute.manager [req-6d3238fe-7afb-494b-9ffa-aea95ee0f33c req-7d5030cd-8a89-4b29-a4f6-551b4dea54f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-unplugged-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:12 compute-0 nova_compute[251992]: 2025-12-06 07:39:12.119 251996 DEBUG oslo_concurrency.lockutils [req-6d3238fe-7afb-494b-9ffa-aea95ee0f33c req-7d5030cd-8a89-4b29-a4f6-551b4dea54f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:12 compute-0 nova_compute[251992]: 2025-12-06 07:39:12.120 251996 DEBUG oslo_concurrency.lockutils [req-6d3238fe-7afb-494b-9ffa-aea95ee0f33c req-7d5030cd-8a89-4b29-a4f6-551b4dea54f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:12 compute-0 nova_compute[251992]: 2025-12-06 07:39:12.120 251996 DEBUG oslo_concurrency.lockutils [req-6d3238fe-7afb-494b-9ffa-aea95ee0f33c req-7d5030cd-8a89-4b29-a4f6-551b4dea54f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:12 compute-0 nova_compute[251992]: 2025-12-06 07:39:12.120 251996 DEBUG nova.compute.manager [req-6d3238fe-7afb-494b-9ffa-aea95ee0f33c req-7d5030cd-8a89-4b29-a4f6-551b4dea54f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] No waiting events found dispatching network-vif-unplugged-83a9b755-339a-4da1-ade2-590aecb2c951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:12 compute-0 nova_compute[251992]: 2025-12-06 07:39:12.120 251996 WARNING nova.compute.manager [req-6d3238fe-7afb-494b-9ffa-aea95ee0f33c req-7d5030cd-8a89-4b29-a4f6-551b4dea54f6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received unexpected event network-vif-unplugged-83a9b755-339a-4da1-ade2-590aecb2c951 for instance with vm_state rescued and task_state unrescuing.
Dec 06 07:39:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:12.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 189 KiB/s rd, 14 KiB/s wr, 8 op/s
Dec 06 07:39:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:39:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:39:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:39:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:39:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:39:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:39:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1505260623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/896338344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:13 compute-0 nova_compute[251992]: 2025-12-06 07:39:13.749 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:13.751 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:39:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:13.753 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:39:13 compute-0 nova_compute[251992]: 2025-12-06 07:39:13.847 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:13.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:14 compute-0 podman[339036]: 2025-12-06 07:39:14.473261352 +0000 UTC m=+0.120591280 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:39:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:39:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:14.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:39:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s rd, 14 KiB/s wr, 2 op/s
Dec 06 07:39:15 compute-0 nova_compute[251992]: 2025-12-06 07:39:15.076 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for f37cdbe1-70ec-41d7-8e94-24a34612404f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:39:15 compute-0 nova_compute[251992]: 2025-12-06 07:39:15.077 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006755.0762725, f37cdbe1-70ec-41d7-8e94-24a34612404f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:39:15 compute-0 nova_compute[251992]: 2025-12-06 07:39:15.077 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] VM Resumed (Lifecycle Event)
Dec 06 07:39:15 compute-0 nova_compute[251992]: 2025-12-06 07:39:15.233 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/846822878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:15 compute-0 ceph-mon[74339]: pgmap v2489: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 189 KiB/s rd, 14 KiB/s wr, 8 op/s
Dec 06 07:39:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:15.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.409 251996 DEBUG nova.compute.manager [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.409 251996 DEBUG oslo_concurrency.lockutils [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.410 251996 DEBUG oslo_concurrency.lockutils [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.410 251996 DEBUG oslo_concurrency.lockutils [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.410 251996 DEBUG nova.compute.manager [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] No waiting events found dispatching network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.410 251996 WARNING nova.compute.manager [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received unexpected event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 for instance with vm_state rescued and task_state unrescuing.
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.411 251996 DEBUG nova.compute.manager [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.411 251996 DEBUG oslo_concurrency.lockutils [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.411 251996 DEBUG oslo_concurrency.lockutils [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.411 251996 DEBUG oslo_concurrency.lockutils [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.412 251996 DEBUG nova.compute.manager [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] No waiting events found dispatching network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.412 251996 WARNING nova.compute.manager [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received unexpected event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 for instance with vm_state rescued and task_state unrescuing.
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.412 251996 DEBUG nova.compute.manager [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.412 251996 DEBUG oslo_concurrency.lockutils [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.412 251996 DEBUG oslo_concurrency.lockutils [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.413 251996 DEBUG oslo_concurrency.lockutils [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.413 251996 DEBUG nova.compute.manager [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] No waiting events found dispatching network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.413 251996 WARNING nova.compute.manager [req-69646e03-2a0a-49d1-9f8a-ef00a94e03a0 req-40bb26ac-816f-4e85-8052-4265c2b657dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received unexpected event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 for instance with vm_state rescued and task_state unrescuing.
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.439 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.443 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.484 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] During sync_power_state the instance has a pending task (unrescuing). Skip.
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.484 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006755.0765316, f37cdbe1-70ec-41d7-8e94-24a34612404f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.484 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] VM Started (Lifecycle Event)
Dec 06 07:39:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:16.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.853 251996 DEBUG nova.virt.libvirt.driver [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Dec 06 07:39:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.8 KiB/s rd, 14 KiB/s wr, 4 op/s
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.939 251996 DEBUG nova.network.neutron [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updated VIF entry in instance network info cache for port 8690867c-c0a8-4574-b54f-38486691e339. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.940 251996 DEBUG nova.network.neutron [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance_info_cache with network_info: [{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:39:16 compute-0 nova_compute[251992]: 2025-12-06 07:39:16.943 251996 DEBUG nova.storage.rbd_utils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rolling back rbd image(2de097e3-8182-48e5-b69d-88acbfb84e66_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Dec 06 07:39:17 compute-0 nova_compute[251992]: 2025-12-06 07:39:17.051 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:17 compute-0 nova_compute[251992]: 2025-12-06 07:39:17.055 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:39:17 compute-0 nova_compute[251992]: 2025-12-06 07:39:17.076 251996 DEBUG oslo_concurrency.lockutils [req-36499926-4c9f-4b02-a46f-4ff6e485ad8e req-2a2142da-b068-4ce9-92f3-d03dcc19d005 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2de097e3-8182-48e5-b69d-88acbfb84e66" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:39:17 compute-0 nova_compute[251992]: 2025-12-06 07:39:17.093 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] During sync_power_state the instance has a pending task (unrescuing). Skip.
Dec 06 07:39:17 compute-0 ceph-mon[74339]: pgmap v2490: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s rd, 14 KiB/s wr, 2 op/s
Dec 06 07:39:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2832921854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:17.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:17 compute-0 nova_compute[251992]: 2025-12-06 07:39:17.984 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:17 compute-0 nova_compute[251992]: 2025-12-06 07:39:17.988 251996 DEBUG nova.storage.rbd_utils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] removing snapshot(nova-resize) on rbd image(2de097e3-8182-48e5-b69d-88acbfb84e66_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:39:18
Dec 06 07:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.meta', '.mgr']
Dec 06 07:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:39:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:18.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:18 compute-0 sudo[339161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:18 compute-0 sudo[339161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:18 compute-0 sudo[339161]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:18 compute-0 sudo[339186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:18 compute-0 sudo[339186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:18 compute-0 sudo[339186]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Dec 06 07:39:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:18.755 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1023 KiB/s rd, 13 KiB/s wr, 40 op/s
Dec 06 07:39:18 compute-0 nova_compute[251992]: 2025-12-06 07:39:18.895 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.377 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.377 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.377 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.425 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.425 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.426 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.426 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.426 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Dec 06 07:39:19 compute-0 ceph-mon[74339]: pgmap v2491: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.8 KiB/s rd, 14 KiB/s wr, 4 op/s
Dec 06 07:39:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4080526882' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.555 251996 DEBUG nova.virt.libvirt.driver [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Start _get_guest_xml network_info=[{"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '2de097e3-8182-48e5-b69d-88acbfb84e66', 'attached_at': '2025-12-06T07:39:13.000000', 'detached_at': '', 'volume_id': '5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6', 'serial': '5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6'}, 'attachment_id': 'e7a3f41d-7bea-4fd4-a1d8-f3e314ae080a', 'guest_format': None, 'delete_on_termination': False, 'disk_bus': 'virtio', 'boot_index': None, 'device_type': 'disk', 'mount_device': '/dev/vdb', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.560 251996 WARNING nova.virt.libvirt.driver [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.566 251996 DEBUG nova.virt.libvirt.host [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.567 251996 DEBUG nova.virt.libvirt.host [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.576 251996 DEBUG nova.virt.libvirt.host [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.577 251996 DEBUG nova.virt.libvirt.host [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.579 251996 DEBUG nova.virt.libvirt.driver [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.580 251996 DEBUG nova.virt.hardware [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.582 251996 DEBUG nova.virt.hardware [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.582 251996 DEBUG nova.virt.hardware [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.582 251996 DEBUG nova.virt.hardware [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.583 251996 DEBUG nova.virt.hardware [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.583 251996 DEBUG nova.virt.hardware [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.583 251996 DEBUG nova.virt.hardware [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.584 251996 DEBUG nova.virt.hardware [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.584 251996 DEBUG nova.virt.hardware [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.584 251996 DEBUG nova.virt.hardware [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.585 251996 DEBUG nova.virt.hardware [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.585 251996 DEBUG nova.objects.instance [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.663 251996 DEBUG oslo_concurrency.processutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:19.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:39:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2166284350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:19 compute-0 nova_compute[251992]: 2025-12-06 07:39:19.938 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:20 compute-0 podman[339256]: 2025-12-06 07:39:20.046239789 +0000 UTC m=+0.059375180 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:39:20 compute-0 podman[339255]: 2025-12-06 07:39:20.046619829 +0000 UTC m=+0.059675718 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.068 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.068 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.072 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.073 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.075 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.076 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.076 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.079 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.080 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.083 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.083 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:39:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:39:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1132695538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.146 251996 DEBUG oslo_concurrency.processutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.183 251996 DEBUG oslo_concurrency.processutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.235 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.344 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.346 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3680MB free_disk=20.622718811035156GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.346 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.347 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:20.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:20 compute-0 ceph-mon[74339]: pgmap v2492: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1023 KiB/s rd, 13 KiB/s wr, 40 op/s
Dec 06 07:39:20 compute-0 ceph-mon[74339]: osdmap e308: 3 total, 3 up, 3 in
Dec 06 07:39:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2166284350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1132695538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:39:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694726960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.629 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 70928eda-043f-429b-aa4e-af1f3189a7c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.630 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c2e6b8fd-375c-4658-b338-f2d334041ba3 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.630 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c1ef1073-7c66-428c-a02b-e4daa3551d22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.630 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance f37cdbe1-70ec-41d7-8e94-24a34612404f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.631 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 2de097e3-8182-48e5-b69d-88acbfb84e66 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.631 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.631 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.642 251996 DEBUG oslo_concurrency.processutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.669 251996 DEBUG nova.virt.libvirt.vif [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:36:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1439654070',display_name='tempest-ServerActionsTestOtherB-server-1439654070',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1439654070',id=131,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:38:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-i11mga9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:38:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=2de097e3-8182-48e5-b69d-88acbfb84e66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.670 251996 DEBUG nova.network.os_vif_util [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.671 251996 DEBUG nova.network.os_vif_util [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.674 251996 DEBUG nova.virt.libvirt.driver [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:39:20 compute-0 nova_compute[251992]:   <uuid>2de097e3-8182-48e5-b69d-88acbfb84e66</uuid>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   <name>instance-00000083</name>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerActionsTestOtherB-server-1439654070</nova:name>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:39:19</nova:creationTime>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <nova:user uuid="a70f6c3c5e2c402bb6fa0e0507e9b6dc">tempest-ServerActionsTestOtherB-874907570-project-member</nova:user>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <nova:project uuid="b10aa03d68eb4d4799d53538521cc364">tempest-ServerActionsTestOtherB-874907570</nova:project>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <nova:port uuid="8690867c-c0a8-4574-b54f-38486691e339">
Dec 06 07:39:20 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <system>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <entry name="serial">2de097e3-8182-48e5-b69d-88acbfb84e66</entry>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <entry name="uuid">2de097e3-8182-48e5-b69d-88acbfb84e66</entry>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     </system>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   <os>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   </os>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   <features>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   </features>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/2de097e3-8182-48e5-b69d-88acbfb84e66_disk">
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       </source>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/2de097e3-8182-48e5-b69d-88acbfb84e66_disk.config">
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       </source>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <source protocol="rbd" name="volumes/volume-5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6">
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       </source>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:39:20 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <target dev="vdb" bus="virtio"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <serial>5ac3b430-9e6d-4b6b-b6db-a57a4cebd8e6</serial>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:4a:1f:3e"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <target dev="tap8690867c-c0"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66/console.log" append="off"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <video>
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     </video>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <input type="keyboard" bus="usb"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:39:20 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:39:20 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:39:20 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:39:20 compute-0 nova_compute[251992]: </domain>
Dec 06 07:39:20 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.680 251996 DEBUG nova.compute.manager [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Preparing to wait for external event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.681 251996 DEBUG oslo_concurrency.lockutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.681 251996 DEBUG oslo_concurrency.lockutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.681 251996 DEBUG oslo_concurrency.lockutils [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.682 251996 DEBUG nova.virt.libvirt.vif [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:36:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1439654070',display_name='tempest-ServerActionsTestOtherB-server-1439654070',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1439654070',id=131,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:38:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-i11mga9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:38:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=2de097e3-8182-48e5-b69d-88acbfb84e66,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.682 251996 DEBUG nova.network.os_vif_util [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.683 251996 DEBUG nova.network.os_vif_util [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.684 251996 DEBUG os_vif [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.684 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.685 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.685 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.690 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.690 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8690867c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.691 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8690867c-c0, col_values=(('external_ids', {'iface-id': '8690867c-c0a8-4574-b54f-38486691e339', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:1f:3e', 'vm-uuid': '2de097e3-8182-48e5-b69d-88acbfb84e66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.692 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:20 compute-0 NetworkManager[48965]: <info>  [1765006760.6940] manager: (tap8690867c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.696 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.699 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.700 251996 INFO os_vif [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0')
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.746 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:39:20 compute-0 NetworkManager[48965]: <info>  [1765006760.7629] manager: (tap8690867c-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/235)
Dec 06 07:39:20 compute-0 kernel: tap8690867c-c0: entered promiscuous mode
Dec 06 07:39:20 compute-0 ovn_controller[147168]: 2025-12-06T07:39:20Z|00491|binding|INFO|Claiming lport 8690867c-c0a8-4574-b54f-38486691e339 for this chassis.
Dec 06 07:39:20 compute-0 ovn_controller[147168]: 2025-12-06T07:39:20Z|00492|binding|INFO|8690867c-c0a8-4574-b54f-38486691e339: Claiming fa:16:3e:4a:1f:3e 10.100.0.11
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.770 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.784 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:1f:3e 10.100.0.11'], port_security=['fa:16:3e:4a:1f:3e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2de097e3-8182-48e5-b69d-88acbfb84e66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=8690867c-c0a8-4574-b54f-38486691e339) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:39:20 compute-0 ovn_controller[147168]: 2025-12-06T07:39:20Z|00493|binding|INFO|Setting lport 8690867c-c0a8-4574-b54f-38486691e339 ovn-installed in OVS
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.786 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 8690867c-c0a8-4574-b54f-38486691e339 in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 bound to our chassis
Dec 06 07:39:20 compute-0 ovn_controller[147168]: 2025-12-06T07:39:20Z|00494|binding|INFO|Setting lport 8690867c-c0a8-4574-b54f-38486691e339 up in Southbound
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.788 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.789 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:39:20 compute-0 systemd-udevd[339346]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:39:20 compute-0 NetworkManager[48965]: <info>  [1765006760.8071] device (tap8690867c-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:39:20 compute-0 NetworkManager[48965]: <info>  [1765006760.8083] device (tap8690867c-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:39:20 compute-0 systemd-machined[212986]: New machine qemu-63-instance-00000083.
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.814 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[42ed91bc-305f-414e-be17-88b64d89007d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:20 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000083.
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.851 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[02f7db75-0cf1-4c89-a60e-eca00cb2c4c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.856 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d14f32cf-4623-4a13-a298-bc832e28684e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 KiB/s wr, 110 op/s
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.884 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e89780e4-1679-4b27-90ab-6e73f151964d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.891 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.891 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.903 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[50d49b29-81b4-4144-b533-4d6e0ef2a29a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678865, 'reachable_time': 42731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339361, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.919 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[45b5db6c-1fd1-43ef-a7ac-b473246ac82b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3beede49-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678878, 'tstamp': 678878}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339363, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3beede49-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678881, 'tstamp': 678881}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339363, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.921 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.922 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.923 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.924 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3beede49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.925 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.926 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3beede49-10, col_values=(('external_ids', {'iface-id': '058fee39-af19-4b00-b556-fb88bc823747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:20.926 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.926 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:39:20 compute-0 nova_compute[251992]: 2025-12-06 07:39:20.954 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:39:21 compute-0 nova_compute[251992]: 2025-12-06 07:39:21.292 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:21 compute-0 nova_compute[251992]: 2025-12-06 07:39:21.337 251996 DEBUG nova.compute.manager [req-ba3207e8-1771-46f3-b558-329dbab6cb88 req-2cf38406-48ba-4969-94d3-747e99a0db72 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:21 compute-0 nova_compute[251992]: 2025-12-06 07:39:21.338 251996 DEBUG oslo_concurrency.lockutils [req-ba3207e8-1771-46f3-b558-329dbab6cb88 req-2cf38406-48ba-4969-94d3-747e99a0db72 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:21 compute-0 nova_compute[251992]: 2025-12-06 07:39:21.338 251996 DEBUG oslo_concurrency.lockutils [req-ba3207e8-1771-46f3-b558-329dbab6cb88 req-2cf38406-48ba-4969-94d3-747e99a0db72 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:21 compute-0 nova_compute[251992]: 2025-12-06 07:39:21.339 251996 DEBUG oslo_concurrency.lockutils [req-ba3207e8-1771-46f3-b558-329dbab6cb88 req-2cf38406-48ba-4969-94d3-747e99a0db72 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:21 compute-0 nova_compute[251992]: 2025-12-06 07:39:21.339 251996 DEBUG nova.compute.manager [req-ba3207e8-1771-46f3-b558-329dbab6cb88 req-2cf38406-48ba-4969-94d3-747e99a0db72 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Processing event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:39:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3987226000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/694726960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:21 compute-0 ceph-mon[74339]: pgmap v2494: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 KiB/s wr, 110 op/s
Dec 06 07:39:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:39:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3527279665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:21 compute-0 nova_compute[251992]: 2025-12-06 07:39:21.735 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:21 compute-0 nova_compute[251992]: 2025-12-06 07:39:21.743 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:39:21 compute-0 nova_compute[251992]: 2025-12-06 07:39:21.776 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:39:21 compute-0 nova_compute[251992]: 2025-12-06 07:39:21.812 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:39:21 compute-0 nova_compute[251992]: 2025-12-06 07:39:21.813 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:39:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:21.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.198 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006762.197596, 2de097e3-8182-48e5-b69d-88acbfb84e66 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.199 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] VM Started (Lifecycle Event)
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.201 251996 DEBUG nova.compute.manager [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.207 251996 INFO nova.virt.libvirt.driver [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance running successfully.
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.208 251996 DEBUG nova.virt.libvirt.driver [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.240 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.247 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.302 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.303 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006762.198877, 2de097e3-8182-48e5-b69d-88acbfb84e66 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.303 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] VM Paused (Lifecycle Event)
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.349 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.357 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006762.2042825, 2de097e3-8182-48e5-b69d-88acbfb84e66 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.358 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] VM Resumed (Lifecycle Event)
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.428 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.431 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.469 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Dec 06 07:39:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:22.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:22 compute-0 nova_compute[251992]: 2025-12-06 07:39:22.663 251996 INFO nova.compute.manager [None req-f66d8321-66e9-4697-a3c7-65e56ce6abba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance to original state: 'active'
Dec 06 07:39:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 122 op/s
Dec 06 07:39:23 compute-0 nova_compute[251992]: 2025-12-06 07:39:23.092 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:23 compute-0 nova_compute[251992]: 2025-12-06 07:39:23.093 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:23 compute-0 nova_compute[251992]: 2025-12-06 07:39:23.093 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:39:23 compute-0 nova_compute[251992]: 2025-12-06 07:39:23.545 251996 DEBUG nova.compute.manager [req-78b61b97-cf0b-412e-8c4f-7424302d6fa0 req-319cf818-45c6-4d62-868e-ffa72df3555c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:23 compute-0 nova_compute[251992]: 2025-12-06 07:39:23.546 251996 DEBUG oslo_concurrency.lockutils [req-78b61b97-cf0b-412e-8c4f-7424302d6fa0 req-319cf818-45c6-4d62-868e-ffa72df3555c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:23 compute-0 nova_compute[251992]: 2025-12-06 07:39:23.546 251996 DEBUG oslo_concurrency.lockutils [req-78b61b97-cf0b-412e-8c4f-7424302d6fa0 req-319cf818-45c6-4d62-868e-ffa72df3555c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:23 compute-0 nova_compute[251992]: 2025-12-06 07:39:23.547 251996 DEBUG oslo_concurrency.lockutils [req-78b61b97-cf0b-412e-8c4f-7424302d6fa0 req-319cf818-45c6-4d62-868e-ffa72df3555c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:23 compute-0 nova_compute[251992]: 2025-12-06 07:39:23.547 251996 DEBUG nova.compute.manager [req-78b61b97-cf0b-412e-8c4f-7424302d6fa0 req-319cf818-45c6-4d62-868e-ffa72df3555c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:23 compute-0 nova_compute[251992]: 2025-12-06 07:39:23.547 251996 WARNING nova.compute.manager [req-78b61b97-cf0b-412e-8c4f-7424302d6fa0 req-319cf818-45c6-4d62-868e-ffa72df3555c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state active and task_state None.
Dec 06 07:39:23 compute-0 nova_compute[251992]: 2025-12-06 07:39:23.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:23 compute-0 nova_compute[251992]: 2025-12-06 07:39:23.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:39:23 compute-0 nova_compute[251992]: 2025-12-06 07:39:23.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:39:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:23.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3875607001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3527279665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:24 compute-0 nova_compute[251992]: 2025-12-06 07:39:24.119 251996 DEBUG nova.compute.manager [None req-1bd075a5-f3fc-43b8-8ea1-0b62d7953ca5 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:24 compute-0 nova_compute[251992]: 2025-12-06 07:39:24.204 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:39:24 compute-0 nova_compute[251992]: 2025-12-06 07:39:24.205 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:39:24 compute-0 nova_compute[251992]: 2025-12-06 07:39:24.205 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:39:24 compute-0 nova_compute[251992]: 2025-12-06 07:39:24.205 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 70928eda-043f-429b-aa4e-af1f3189a7c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:24.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:39:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:39:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:39:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:39:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:39:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 122 op/s
Dec 06 07:39:25 compute-0 nova_compute[251992]: 2025-12-06 07:39:25.239 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:25 compute-0 nova_compute[251992]: 2025-12-06 07:39:25.693 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:25 compute-0 ceph-mon[74339]: pgmap v2495: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 122 op/s
Dec 06 07:39:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01741690948957752 of space, bias 1.0, pg target 5.225072846873256 quantized to 32 (current 32)
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0074757104387679135 of space, bias 1.0, pg target 2.2053345794365344 quantized to 32 (current 32)
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.1371289680462497 quantized to 32 (current 32)
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0016983063465475525 quantized to 16 (current 16)
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021228829331844406 quantized to 32 (current 32)
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018044504932067747 quantized to 32 (current 32)
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004245765866368881 quantized to 32 (current 32)
Dec 06 07:39:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:26.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.2 KiB/s wr, 195 op/s
Dec 06 07:39:26 compute-0 nova_compute[251992]: 2025-12-06 07:39:26.962 251996 DEBUG oslo_concurrency.lockutils [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:26 compute-0 nova_compute[251992]: 2025-12-06 07:39:26.962 251996 DEBUG oslo_concurrency.lockutils [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:26 compute-0 nova_compute[251992]: 2025-12-06 07:39:26.963 251996 DEBUG oslo_concurrency.lockutils [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:26 compute-0 nova_compute[251992]: 2025-12-06 07:39:26.963 251996 DEBUG oslo_concurrency.lockutils [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:26 compute-0 nova_compute[251992]: 2025-12-06 07:39:26.963 251996 DEBUG oslo_concurrency.lockutils [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:26 compute-0 nova_compute[251992]: 2025-12-06 07:39:26.964 251996 INFO nova.compute.manager [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Terminating instance
Dec 06 07:39:26 compute-0 nova_compute[251992]: 2025-12-06 07:39:26.965 251996 DEBUG nova.compute.manager [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:39:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3445992299' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:26 compute-0 ceph-mon[74339]: pgmap v2496: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 KiB/s wr, 122 op/s
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.155 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Updating instance_info_cache with network_info: [{"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.187 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-70928eda-043f-429b-aa4e-af1f3189a7c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.187 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.188 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.188 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.188 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.290 251996 DEBUG oslo_concurrency.lockutils [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.290 251996 DEBUG oslo_concurrency.lockutils [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.291 251996 DEBUG oslo_concurrency.lockutils [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.291 251996 DEBUG oslo_concurrency.lockutils [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.291 251996 DEBUG oslo_concurrency.lockutils [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.294 251996 INFO nova.compute.manager [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Terminating instance
Dec 06 07:39:27 compute-0 nova_compute[251992]: 2025-12-06 07:39:27.296 251996 DEBUG nova.compute.manager [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:39:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:39:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:27.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:39:28 compute-0 ceph-mon[74339]: pgmap v2497: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.2 KiB/s wr, 195 op/s
Dec 06 07:39:28 compute-0 kernel: tap8690867c-c0 (unregistering): left promiscuous mode
Dec 06 07:39:28 compute-0 NetworkManager[48965]: <info>  [1765006768.1731] device (tap8690867c-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:39:28 compute-0 ovn_controller[147168]: 2025-12-06T07:39:28Z|00495|binding|INFO|Releasing lport 8690867c-c0a8-4574-b54f-38486691e339 from this chassis (sb_readonly=0)
Dec 06 07:39:28 compute-0 ovn_controller[147168]: 2025-12-06T07:39:28Z|00496|binding|INFO|Setting lport 8690867c-c0a8-4574-b54f-38486691e339 down in Southbound
Dec 06 07:39:28 compute-0 ovn_controller[147168]: 2025-12-06T07:39:28Z|00497|binding|INFO|Removing iface tap8690867c-c0 ovn-installed in OVS
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.183 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.185 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.195 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:1f:3e 10.100.0.11'], port_security=['fa:16:3e:4a:1f:3e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2de097e3-8182-48e5-b69d-88acbfb84e66', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=8690867c-c0a8-4574-b54f-38486691e339) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.196 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 8690867c-c0a8-4574-b54f-38486691e339 in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 unbound from our chassis
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.197 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.199 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 kernel: tap83a9b755-33 (unregistering): left promiscuous mode
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.212 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2fcd2159-042a-4943-9ace-7c0760694ce2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:28 compute-0 NetworkManager[48965]: <info>  [1765006768.2140] device (tap83a9b755-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:39:28 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000083.scope: Deactivated successfully.
Dec 06 07:39:28 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000083.scope: Consumed 5.603s CPU time.
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.245 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 ovn_controller[147168]: 2025-12-06T07:39:28Z|00498|binding|INFO|Releasing lport 83a9b755-339a-4da1-ade2-590aecb2c951 from this chassis (sb_readonly=0)
Dec 06 07:39:28 compute-0 ovn_controller[147168]: 2025-12-06T07:39:28Z|00499|binding|INFO|Setting lport 83a9b755-339a-4da1-ade2-590aecb2c951 down in Southbound
Dec 06 07:39:28 compute-0 ovn_controller[147168]: 2025-12-06T07:39:28Z|00500|binding|INFO|Removing iface tap83a9b755-33 ovn-installed in OVS
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.247 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 systemd-machined[212986]: Machine qemu-63-instance-00000083 terminated.
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.256 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:91:ba 10.100.0.7'], port_security=['fa:16:3e:cf:91:ba 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f37cdbe1-70ec-41d7-8e94-24a34612404f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '8', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=83a9b755-339a-4da1-ade2-590aecb2c951) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.260 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[68c1e09a-64ee-430d-9911-1cdcbf14156d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.264 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[6c96cbb5-cdc5-47cf-9de4-44759c9354ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.267 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.296 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d0f288-ed3a-470a-ab7f-806eeb22627a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:28 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000084.scope: Deactivated successfully.
Dec 06 07:39:28 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000084.scope: Consumed 13.884s CPU time.
Dec 06 07:39:28 compute-0 systemd-machined[212986]: Machine qemu-62-instance-00000084 terminated.
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.313 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1a18eb-5392-4ded-8db5-82a5bbf11ed7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678865, 'reachable_time': 42731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339467, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.330 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e4673470-bfc1-4a1a-9d70-0c60724721ea]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3beede49-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678878, 'tstamp': 678878}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339468, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3beede49-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678881, 'tstamp': 678881}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339468, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.331 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.333 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.339 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.340 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3beede49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.340 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.340 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3beede49-10, col_values=(('external_ids', {'iface-id': '058fee39-af19-4b00-b556-fb88bc823747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.340 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.341 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 83a9b755-339a-4da1-ade2-590aecb2c951 in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 unbound from our chassis
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.343 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.359 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[39971b49-21b2-4711-acb6-2f2fdd6651c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.392 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[879cf723-0ffd-4646-8c0a-652a46713734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.395 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf69410-29d1-429f-9c4f-7b41fdaacb03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.409 251996 INFO nova.virt.libvirt.driver [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Instance destroyed successfully.
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.410 251996 DEBUG nova.objects.instance [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'resources' on Instance uuid 2de097e3-8182-48e5-b69d-88acbfb84e66 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.430 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[792fd631-caa7-421d-afcc-e0fa35dd56e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.432 251996 DEBUG nova.virt.libvirt.vif [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:36:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1439654070',display_name='tempest-ServerActionsTestOtherB-server-1439654070',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1439654070',id=131,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:39:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-i11mga9l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:39:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=2de097e3-8182-48e5-b69d-88acbfb84e66,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.433 251996 DEBUG nova.network.os_vif_util [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "8690867c-c0a8-4574-b54f-38486691e339", "address": "fa:16:3e:4a:1f:3e", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8690867c-c0", "ovs_interfaceid": "8690867c-c0a8-4574-b54f-38486691e339", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.433 251996 DEBUG nova.network.os_vif_util [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.434 251996 DEBUG os_vif [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.437 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.437 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8690867c-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.439 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.441 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.443 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.447 251996 INFO os_vif [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:1f:3e,bridge_name='br-int',has_traffic_filtering=True,id=8690867c-c0a8-4574-b54f-38486691e339,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8690867c-c0')
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.450 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5d22066a-8471-469f-b0d7-23024ec6502f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 26, 'rx_bytes': 700, 'tx_bytes': 1284, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 26, 'rx_bytes': 700, 'tx_bytes': 1284, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 40379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339489, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.465 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f6250a0d-04fa-45e3-a782-a8506718b37b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669283, 'tstamp': 669283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339493, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669285, 'tstamp': 669285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339493, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.467 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.468 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.471 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.471 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.472 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.472 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:28.472 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.526 251996 INFO nova.virt.libvirt.driver [-] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Instance destroyed successfully.
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.527 251996 DEBUG nova.objects.instance [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'resources' on Instance uuid f37cdbe1-70ec-41d7-8e94-24a34612404f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.543 251996 DEBUG nova.virt.libvirt.vif [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:36:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-40970156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-40970156',id=132,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:38:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-1min8f00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:39:24Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=f37cdbe1-70ec-41d7-8e94-24a34612404f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.543 251996 DEBUG nova.network.os_vif_util [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "83a9b755-339a-4da1-ade2-590aecb2c951", "address": "fa:16:3e:cf:91:ba", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83a9b755-33", "ovs_interfaceid": "83a9b755-339a-4da1-ade2-590aecb2c951", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.544 251996 DEBUG nova.network.os_vif_util [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:91:ba,bridge_name='br-int',has_traffic_filtering=True,id=83a9b755-339a-4da1-ade2-590aecb2c951,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9b755-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.544 251996 DEBUG os_vif [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:91:ba,bridge_name='br-int',has_traffic_filtering=True,id=83a9b755-339a-4da1-ade2-590aecb2c951,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9b755-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.546 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.546 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83a9b755-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.547 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:28.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.551 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.553 251996 INFO os_vif [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:91:ba,bridge_name='br-int',has_traffic_filtering=True,id=83a9b755-339a-4da1-ade2-590aecb2c951,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83a9b755-33')
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.667 251996 DEBUG nova.compute.manager [req-be4a61b4-c0e1-4b3a-9e28-92a6fc523be1 req-530efb94-652b-479e-8bf8-beeffbe14622 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.668 251996 DEBUG oslo_concurrency.lockutils [req-be4a61b4-c0e1-4b3a-9e28-92a6fc523be1 req-530efb94-652b-479e-8bf8-beeffbe14622 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.668 251996 DEBUG oslo_concurrency.lockutils [req-be4a61b4-c0e1-4b3a-9e28-92a6fc523be1 req-530efb94-652b-479e-8bf8-beeffbe14622 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.669 251996 DEBUG oslo_concurrency.lockutils [req-be4a61b4-c0e1-4b3a-9e28-92a6fc523be1 req-530efb94-652b-479e-8bf8-beeffbe14622 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.670 251996 DEBUG nova.compute.manager [req-be4a61b4-c0e1-4b3a-9e28-92a6fc523be1 req-530efb94-652b-479e-8bf8-beeffbe14622 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.670 251996 DEBUG nova.compute.manager [req-be4a61b4-c0e1-4b3a-9e28-92a6fc523be1 req-530efb94-652b-479e-8bf8-beeffbe14622 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-unplugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:39:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.4 KiB/s wr, 175 op/s
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.953 251996 DEBUG nova.compute.manager [req-44a40074-5f67-441f-8740-39c47beb556b req-cb9b659b-d57d-42a1-b5ef-18ce129c8c12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-unplugged-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.954 251996 DEBUG oslo_concurrency.lockutils [req-44a40074-5f67-441f-8740-39c47beb556b req-cb9b659b-d57d-42a1-b5ef-18ce129c8c12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.954 251996 DEBUG oslo_concurrency.lockutils [req-44a40074-5f67-441f-8740-39c47beb556b req-cb9b659b-d57d-42a1-b5ef-18ce129c8c12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.954 251996 DEBUG oslo_concurrency.lockutils [req-44a40074-5f67-441f-8740-39c47beb556b req-cb9b659b-d57d-42a1-b5ef-18ce129c8c12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.954 251996 DEBUG nova.compute.manager [req-44a40074-5f67-441f-8740-39c47beb556b req-cb9b659b-d57d-42a1-b5ef-18ce129c8c12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] No waiting events found dispatching network-vif-unplugged-83a9b755-339a-4da1-ade2-590aecb2c951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:28 compute-0 nova_compute[251992]: 2025-12-06 07:39:28.955 251996 DEBUG nova.compute.manager [req-44a40074-5f67-441f-8740-39c47beb556b req-cb9b659b-d57d-42a1-b5ef-18ce129c8c12 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-unplugged-83a9b755-339a-4da1-ade2-590aecb2c951 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:39:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Dec 06 07:39:29 compute-0 ceph-mon[74339]: pgmap v2498: 305 pgs: 305 active+clean; 1.0 GiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.4 KiB/s wr, 175 op/s
Dec 06 07:39:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Dec 06 07:39:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Dec 06 07:39:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:29.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:30 compute-0 nova_compute[251992]: 2025-12-06 07:39:30.243 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:30 compute-0 ceph-mon[74339]: osdmap e309: 3 total, 3 up, 3 in
Dec 06 07:39:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/704595085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:30.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 997 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 161 op/s
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.291 251996 DEBUG nova.compute.manager [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.292 251996 DEBUG oslo_concurrency.lockutils [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.292 251996 DEBUG oslo_concurrency.lockutils [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.292 251996 DEBUG oslo_concurrency.lockutils [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.292 251996 DEBUG nova.compute.manager [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] No waiting events found dispatching network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.293 251996 WARNING nova.compute.manager [req-3b602ec7-23e4-4c03-8ee1-cf68253a7749 req-f342b933-c189-4524-b46e-de5c09245bfd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received unexpected event network-vif-plugged-8690867c-c0a8-4574-b54f-38486691e339 for instance with vm_state active and task_state deleting.
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.346 251996 DEBUG nova.compute.manager [req-565baa6e-9c12-4f85-9484-12d081e6f750 req-beb63c19-f910-4da0-bc75-3782112e16c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.346 251996 DEBUG oslo_concurrency.lockutils [req-565baa6e-9c12-4f85-9484-12d081e6f750 req-beb63c19-f910-4da0-bc75-3782112e16c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.346 251996 DEBUG oslo_concurrency.lockutils [req-565baa6e-9c12-4f85-9484-12d081e6f750 req-beb63c19-f910-4da0-bc75-3782112e16c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.347 251996 DEBUG oslo_concurrency.lockutils [req-565baa6e-9c12-4f85-9484-12d081e6f750 req-beb63c19-f910-4da0-bc75-3782112e16c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.347 251996 DEBUG nova.compute.manager [req-565baa6e-9c12-4f85-9484-12d081e6f750 req-beb63c19-f910-4da0-bc75-3782112e16c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] No waiting events found dispatching network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.347 251996 WARNING nova.compute.manager [req-565baa6e-9c12-4f85-9484-12d081e6f750 req-beb63c19-f910-4da0-bc75-3782112e16c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received unexpected event network-vif-plugged-83a9b755-339a-4da1-ade2-590aecb2c951 for instance with vm_state active and task_state deleting.
Dec 06 07:39:31 compute-0 ceph-mon[74339]: pgmap v2500: 305 pgs: 305 active+clean; 997 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 161 op/s
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.816 251996 INFO nova.virt.libvirt.driver [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Deleting instance files /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f_del
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.817 251996 INFO nova.virt.libvirt.driver [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Deletion of /var/lib/nova/instances/f37cdbe1-70ec-41d7-8e94-24a34612404f_del complete
Dec 06 07:39:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:31.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.953 251996 INFO nova.compute.manager [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Took 4.66 seconds to destroy the instance on the hypervisor.
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.954 251996 DEBUG oslo.service.loopingcall [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.955 251996 DEBUG nova.compute.manager [-] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:39:31 compute-0 nova_compute[251992]: 2025-12-06 07:39:31.955 251996 DEBUG nova.network.neutron [-] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:39:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:32.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 28 KiB/s wr, 203 op/s
Dec 06 07:39:33 compute-0 ceph-mon[74339]: pgmap v2501: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 28 KiB/s wr, 203 op/s
Dec 06 07:39:33 compute-0 nova_compute[251992]: 2025-12-06 07:39:33.437 251996 INFO nova.virt.libvirt.driver [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Deleting instance files /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66_del
Dec 06 07:39:33 compute-0 nova_compute[251992]: 2025-12-06 07:39:33.438 251996 INFO nova.virt.libvirt.driver [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Deletion of /var/lib/nova/instances/2de097e3-8182-48e5-b69d-88acbfb84e66_del complete
Dec 06 07:39:33 compute-0 nova_compute[251992]: 2025-12-06 07:39:33.543 251996 INFO nova.compute.manager [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Took 6.58 seconds to destroy the instance on the hypervisor.
Dec 06 07:39:33 compute-0 nova_compute[251992]: 2025-12-06 07:39:33.544 251996 DEBUG oslo.service.loopingcall [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:39:33 compute-0 nova_compute[251992]: 2025-12-06 07:39:33.544 251996 DEBUG nova.compute.manager [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:39:33 compute-0 nova_compute[251992]: 2025-12-06 07:39:33.544 251996 DEBUG nova.network.neutron [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:39:33 compute-0 nova_compute[251992]: 2025-12-06 07:39:33.575 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:33.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:33 compute-0 nova_compute[251992]: 2025-12-06 07:39:33.908 251996 DEBUG nova.network.neutron [-] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:39:33 compute-0 nova_compute[251992]: 2025-12-06 07:39:33.960 251996 INFO nova.compute.manager [-] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Took 2.00 seconds to deallocate network for instance.
Dec 06 07:39:34 compute-0 nova_compute[251992]: 2025-12-06 07:39:34.051 251996 DEBUG nova.compute.manager [req-9d9d8d74-eaec-447d-a340-5fe2a1792bc8 req-c00f825d-d627-4558-a559-f343946c7823 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Received event network-vif-deleted-83a9b755-339a-4da1-ade2-590aecb2c951 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:34 compute-0 nova_compute[251992]: 2025-12-06 07:39:34.272 251996 INFO nova.compute.manager [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Took 0.31 seconds to detach 1 volumes for instance.
Dec 06 07:39:34 compute-0 nova_compute[251992]: 2025-12-06 07:39:34.353 251996 DEBUG oslo_concurrency.lockutils [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:34 compute-0 nova_compute[251992]: 2025-12-06 07:39:34.353 251996 DEBUG oslo_concurrency.lockutils [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:34.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:34 compute-0 nova_compute[251992]: 2025-12-06 07:39:34.616 251996 DEBUG nova.network.neutron [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:39:34 compute-0 nova_compute[251992]: 2025-12-06 07:39:34.628 251996 DEBUG oslo_concurrency.processutils [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:34 compute-0 nova_compute[251992]: 2025-12-06 07:39:34.664 251996 INFO nova.compute.manager [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Took 1.12 seconds to deallocate network for instance.
Dec 06 07:39:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 28 KiB/s wr, 203 op/s
Dec 06 07:39:34 compute-0 nova_compute[251992]: 2025-12-06 07:39:34.930 251996 DEBUG nova.compute.manager [req-dc0165f4-279a-4f70-97b1-11030f7db615 req-cab0d936-14c6-4f3e-89e8-3413bd18d5b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Received event network-vif-deleted-8690867c-c0a8-4574-b54f-38486691e339 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:39:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089923814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:35 compute-0 nova_compute[251992]: 2025-12-06 07:39:35.100 251996 DEBUG oslo_concurrency.processutils [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:35 compute-0 nova_compute[251992]: 2025-12-06 07:39:35.106 251996 DEBUG nova.compute.provider_tree [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:39:35 compute-0 nova_compute[251992]: 2025-12-06 07:39:35.156 251996 DEBUG nova.scheduler.client.report [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:39:35 compute-0 nova_compute[251992]: 2025-12-06 07:39:35.244 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:35 compute-0 nova_compute[251992]: 2025-12-06 07:39:35.347 251996 DEBUG oslo_concurrency.lockutils [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:35 compute-0 ceph-mon[74339]: pgmap v2502: 305 pgs: 305 active+clean; 978 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 28 KiB/s wr, 203 op/s
Dec 06 07:39:35 compute-0 nova_compute[251992]: 2025-12-06 07:39:35.612 251996 INFO nova.scheduler.client.report [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Deleted allocations for instance f37cdbe1-70ec-41d7-8e94-24a34612404f
Dec 06 07:39:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:35.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:39:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3332702504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:39:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:39:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3332702504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:39:36 compute-0 nova_compute[251992]: 2025-12-06 07:39:36.304 251996 INFO nova.compute.manager [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Took 1.64 seconds to detach 1 volumes for instance.
Dec 06 07:39:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:36.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:36 compute-0 nova_compute[251992]: 2025-12-06 07:39:36.611 251996 DEBUG oslo_concurrency.lockutils [None req-e58a7d82-abb9-41e4-923b-0496cf61eacd 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "f37cdbe1-70ec-41d7-8e94-24a34612404f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:36 compute-0 nova_compute[251992]: 2025-12-06 07:39:36.705 251996 DEBUG oslo_concurrency.lockutils [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:36 compute-0 nova_compute[251992]: 2025-12-06 07:39:36.705 251996 DEBUG oslo_concurrency.lockutils [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2089923814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3332702504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:39:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3332702504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:39:36 compute-0 nova_compute[251992]: 2025-12-06 07:39:36.852 251996 DEBUG oslo_concurrency.processutils [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 925 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 28 KiB/s wr, 201 op/s
Dec 06 07:39:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:39:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2424324093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:37 compute-0 nova_compute[251992]: 2025-12-06 07:39:37.308 251996 DEBUG oslo_concurrency.processutils [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:37 compute-0 nova_compute[251992]: 2025-12-06 07:39:37.315 251996 DEBUG nova.compute.provider_tree [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:39:37 compute-0 nova_compute[251992]: 2025-12-06 07:39:37.343 251996 DEBUG nova.scheduler.client.report [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:39:37 compute-0 nova_compute[251992]: 2025-12-06 07:39:37.385 251996 DEBUG oslo_concurrency.lockutils [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:37 compute-0 nova_compute[251992]: 2025-12-06 07:39:37.431 251996 INFO nova.scheduler.client.report [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Deleted allocations for instance 2de097e3-8182-48e5-b69d-88acbfb84e66
Dec 06 07:39:37 compute-0 nova_compute[251992]: 2025-12-06 07:39:37.512 251996 DEBUG oslo_concurrency.lockutils [None req-3b8f88b6-fb56-40e8-bab1-efc6fbebc8ba a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "2de097e3-8182-48e5-b69d-88acbfb84e66" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Dec 06 07:39:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:37.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Dec 06 07:39:38 compute-0 ceph-mon[74339]: pgmap v2503: 305 pgs: 305 active+clean; 925 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 28 KiB/s wr, 201 op/s
Dec 06 07:39:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2424324093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Dec 06 07:39:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:38.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:38 compute-0 nova_compute[251992]: 2025-12-06 07:39:38.576 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:38 compute-0 sudo[339591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:38 compute-0 sudo[339591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:38 compute-0 sudo[339591]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:38 compute-0 sudo[339616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:38 compute-0 sudo[339616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:38 compute-0 sudo[339616]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 898 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 30 KiB/s wr, 193 op/s
Dec 06 07:39:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Dec 06 07:39:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Dec 06 07:39:39 compute-0 ceph-mon[74339]: osdmap e310: 3 total, 3 up, 3 in
Dec 06 07:39:39 compute-0 ceph-mon[74339]: pgmap v2505: 305 pgs: 305 active+clean; 898 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 30 KiB/s wr, 193 op/s
Dec 06 07:39:39 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Dec 06 07:39:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:39.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:40 compute-0 ovn_controller[147168]: 2025-12-06T07:39:40Z|00501|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:39:40 compute-0 ovn_controller[147168]: 2025-12-06T07:39:40Z|00502|binding|INFO|Releasing lport 0d2044a5-87cb-4c28-912c-9a2682bb94de from this chassis (sb_readonly=0)
Dec 06 07:39:40 compute-0 nova_compute[251992]: 2025-12-06 07:39:40.237 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:40 compute-0 nova_compute[251992]: 2025-12-06 07:39:40.368 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:40 compute-0 ovn_controller[147168]: 2025-12-06T07:39:40Z|00503|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:39:40 compute-0 ovn_controller[147168]: 2025-12-06T07:39:40Z|00504|binding|INFO|Releasing lport 0d2044a5-87cb-4c28-912c-9a2682bb94de from this chassis (sb_readonly=0)
Dec 06 07:39:40 compute-0 nova_compute[251992]: 2025-12-06 07:39:40.380 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:40.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:40 compute-0 ceph-mon[74339]: osdmap e311: 3 total, 3 up, 3 in
Dec 06 07:39:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 859 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.2 KiB/s wr, 140 op/s
Dec 06 07:39:41 compute-0 ceph-mon[74339]: pgmap v2507: 305 pgs: 305 active+clean; 859 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.2 KiB/s wr, 140 op/s
Dec 06 07:39:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:41.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:42.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 816 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.6 KiB/s wr, 180 op/s
Dec 06 07:39:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3522121454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:39:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:39:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:39:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:39:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:39:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:39:43 compute-0 nova_compute[251992]: 2025-12-06 07:39:43.408 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006768.408148, 2de097e3-8182-48e5-b69d-88acbfb84e66 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:39:43 compute-0 nova_compute[251992]: 2025-12-06 07:39:43.409 251996 INFO nova.compute.manager [-] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] VM Stopped (Lifecycle Event)
Dec 06 07:39:43 compute-0 nova_compute[251992]: 2025-12-06 07:39:43.525 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006768.524453, f37cdbe1-70ec-41d7-8e94-24a34612404f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:39:43 compute-0 nova_compute[251992]: 2025-12-06 07:39:43.526 251996 INFO nova.compute.manager [-] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] VM Stopped (Lifecycle Event)
Dec 06 07:39:43 compute-0 nova_compute[251992]: 2025-12-06 07:39:43.579 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:43.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:43 compute-0 nova_compute[251992]: 2025-12-06 07:39:43.940 251996 DEBUG nova.compute.manager [None req-6ff9fc88-1956-4208-ad8d-3e8b116d2592 - - - - - -] [instance: 2de097e3-8182-48e5-b69d-88acbfb84e66] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:44 compute-0 nova_compute[251992]: 2025-12-06 07:39:44.004 251996 DEBUG nova.compute.manager [None req-76e71df5-b943-41cc-b901-80bc3f515f03 - - - - - -] [instance: f37cdbe1-70ec-41d7-8e94-24a34612404f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Dec 06 07:39:44 compute-0 ceph-mon[74339]: pgmap v2508: 305 pgs: 305 active+clean; 816 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.6 KiB/s wr, 180 op/s
Dec 06 07:39:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Dec 06 07:39:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Dec 06 07:39:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:44.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:44 compute-0 sudo[339645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:44 compute-0 sudo[339645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:44 compute-0 sudo[339645]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:44 compute-0 sudo[339671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:39:44 compute-0 sudo[339671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:44 compute-0 sudo[339671]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:44 compute-0 podman[339669]: 2025-12-06 07:39:44.762553752 +0000 UTC m=+0.086298059 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:39:44 compute-0 sudo[339712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:44 compute-0 sudo[339712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:44 compute-0 sudo[339712]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:44 compute-0 sudo[339744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 07:39:44 compute-0 sudo[339744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 816 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 5.0 KiB/s wr, 104 op/s
Dec 06 07:39:45 compute-0 sudo[339744]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:39:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:39:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:45 compute-0 sudo[339790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:45 compute-0 sudo[339790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:45 compute-0 sudo[339790]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:45 compute-0 sudo[339815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:39:45 compute-0 sudo[339815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:45 compute-0 sudo[339815]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:45 compute-0 sudo[339840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:45 compute-0 sudo[339840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:45 compute-0 sudo[339840]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:45 compute-0 ceph-mon[74339]: osdmap e312: 3 total, 3 up, 3 in
Dec 06 07:39:45 compute-0 ceph-mon[74339]: pgmap v2510: 305 pgs: 305 active+clean; 816 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 5.0 KiB/s wr, 104 op/s
Dec 06 07:39:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.372 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:45 compute-0 sudo[339865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:39:45 compute-0 sudo[339865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.411 251996 DEBUG oslo_concurrency.lockutils [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.412 251996 DEBUG oslo_concurrency.lockutils [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.412 251996 DEBUG oslo_concurrency.lockutils [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.413 251996 DEBUG oslo_concurrency.lockutils [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.413 251996 DEBUG oslo_concurrency.lockutils [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.414 251996 INFO nova.compute.manager [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Terminating instance
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.416 251996 DEBUG nova.compute.manager [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:39:45 compute-0 kernel: tape61ac68e-e5 (unregistering): left promiscuous mode
Dec 06 07:39:45 compute-0 NetworkManager[48965]: <info>  [1765006785.4610] device (tape61ac68e-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:39:45 compute-0 ovn_controller[147168]: 2025-12-06T07:39:45Z|00505|binding|INFO|Releasing lport e61ac68e-e534-4351-b3ce-b20fa32579fc from this chassis (sb_readonly=0)
Dec 06 07:39:45 compute-0 ovn_controller[147168]: 2025-12-06T07:39:45Z|00506|binding|INFO|Setting lport e61ac68e-e534-4351-b3ce-b20fa32579fc down in Southbound
Dec 06 07:39:45 compute-0 ovn_controller[147168]: 2025-12-06T07:39:45Z|00507|binding|INFO|Removing iface tape61ac68e-e5 ovn-installed in OVS
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.468 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.470 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.477 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:1b:d0 10.100.0.14'], port_security=['fa:16:3e:99:1b:d0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c2e6b8fd-375c-4658-b338-f2d334041ba3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '8', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=e61ac68e-e534-4351-b3ce-b20fa32579fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.478 158118 INFO neutron.agent.ovn.metadata.agent [-] Port e61ac68e-e534-4351-b3ce-b20fa32579fc in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 unbound from our chassis
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.480 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40bc9d32-839b-4591-acbc-c5d535123ff1
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.485 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.497 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f46ccf12-e45d-47ff-9d82-0c8aa0c57208]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:45 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Dec 06 07:39:45 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007c.scope: Consumed 1.950s CPU time.
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.531 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[5207b844-b3b0-4ec8-a981-158f8138daf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:45 compute-0 systemd-machined[212986]: Machine qemu-58-instance-0000007c terminated.
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.535 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d530d781-194b-49e0-a795-584070963866]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.562 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[850d68cf-b7d7-44a3-8f2a-35e24a52d789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.580 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f92d54a5-4b88-432d-89af-a7ce7aef5663]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40bc9d32-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:66:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 28, 'rx_bytes': 700, 'tx_bytes': 1368, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 28, 'rx_bytes': 700, 'tx_bytes': 1368, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669271, 'reachable_time': 40379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339915, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.597 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dad719a1-d887-41cf-b2f0-30e1d489394a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669283, 'tstamp': 669283}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339916, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap40bc9d32-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669285, 'tstamp': 669285}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339916, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.599 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.600 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.605 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.605 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40bc9d32-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.606 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.606 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40bc9d32-80, col_values=(('external_ids', {'iface-id': '0d2044a5-87cb-4c28-912c-9a2682bb94de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:45.607 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:45 compute-0 kernel: tape61ac68e-e5: entered promiscuous mode
Dec 06 07:39:45 compute-0 kernel: tape61ac68e-e5 (unregistering): left promiscuous mode
Dec 06 07:39:45 compute-0 NetworkManager[48965]: <info>  [1765006785.6439] manager: (tape61ac68e-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.648 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.660 251996 INFO nova.virt.libvirt.driver [-] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Instance destroyed successfully.
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.660 251996 DEBUG nova.objects.instance [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'resources' on Instance uuid c2e6b8fd-375c-4658-b338-f2d334041ba3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.689 251996 DEBUG nova.virt.libvirt.vif [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:34:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1801137848',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1801137848',id=124,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:35:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-blw02nr2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:36:04Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=c2e6b8fd-375c-4658-b338-f2d334041ba3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.690 251996 DEBUG nova.network.os_vif_util [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "address": "fa:16:3e:99:1b:d0", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61ac68e-e5", "ovs_interfaceid": "e61ac68e-e534-4351-b3ce-b20fa32579fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.690 251996 DEBUG nova.network.os_vif_util [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=e61ac68e-e534-4351-b3ce-b20fa32579fc,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ac68e-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.691 251996 DEBUG os_vif [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=e61ac68e-e534-4351-b3ce-b20fa32579fc,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ac68e-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.693 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.693 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape61ac68e-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.695 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.696 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.698 251996 INFO os_vif [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:1b:d0,bridge_name='br-int',has_traffic_filtering=True,id=e61ac68e-e534-4351-b3ce-b20fa32579fc,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61ac68e-e5')
Dec 06 07:39:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:39:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:39:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.858 251996 DEBUG nova.compute.manager [req-b77e62eb-000c-4642-94ca-61e64d233b82 req-347ef619-4977-4a18-92a2-697bbd92aa8d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-unplugged-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.858 251996 DEBUG oslo_concurrency.lockutils [req-b77e62eb-000c-4642-94ca-61e64d233b82 req-347ef619-4977-4a18-92a2-697bbd92aa8d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.858 251996 DEBUG oslo_concurrency.lockutils [req-b77e62eb-000c-4642-94ca-61e64d233b82 req-347ef619-4977-4a18-92a2-697bbd92aa8d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.859 251996 DEBUG oslo_concurrency.lockutils [req-b77e62eb-000c-4642-94ca-61e64d233b82 req-347ef619-4977-4a18-92a2-697bbd92aa8d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.859 251996 DEBUG nova.compute.manager [req-b77e62eb-000c-4642-94ca-61e64d233b82 req-347ef619-4977-4a18-92a2-697bbd92aa8d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] No waiting events found dispatching network-vif-unplugged-e61ac68e-e534-4351-b3ce-b20fa32579fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:45 compute-0 nova_compute[251992]: 2025-12-06 07:39:45.859 251996 DEBUG nova.compute.manager [req-b77e62eb-000c-4642-94ca-61e64d233b82 req-347ef619-4977-4a18-92a2-697bbd92aa8d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-unplugged-e61ac68e-e534-4351-b3ce-b20fa32579fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:39:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:45.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:45 compute-0 sudo[339865]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.115 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.116 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.136 251996 DEBUG nova.compute.manager [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.233 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.234 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.242 251996 DEBUG nova.virt.hardware [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.243 251996 INFO nova.compute.claims [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.320 251996 INFO nova.virt.libvirt.driver [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Deleting instance files /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3_del
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.321 251996 INFO nova.virt.libvirt.driver [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Deletion of /var/lib/nova/instances/c2e6b8fd-375c-4658-b338-f2d334041ba3_del complete
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.403 251996 INFO nova.compute.manager [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Took 0.99 seconds to destroy the instance on the hypervisor.
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.404 251996 DEBUG oslo.service.loopingcall [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.404 251996 DEBUG nova.compute.manager [-] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.404 251996 DEBUG nova.network.neutron [-] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:39:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:46 compute-0 nova_compute[251992]: 2025-12-06 07:39:46.573 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:39:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:46.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:39:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:39:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:39:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:39:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:39:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:39:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 772 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 5.0 KiB/s wr, 109 op/s
Dec 06 07:39:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:39:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183177135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.039 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.045 251996 DEBUG nova.compute.provider_tree [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.067 251996 DEBUG nova.scheduler.client.report [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:39:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 38625eaf-8837-408e-9f8f-e9234d9ea806 does not exist
Dec 06 07:39:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c2f175bd-c132-4b1c-a6a2-04d1e58069a8 does not exist
Dec 06 07:39:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a45208b8-58c9-4977-8ef1-be86923e6392 does not exist
Dec 06 07:39:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:39:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:39:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:39:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:39:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:39:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.093 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.093 251996 DEBUG nova.compute.manager [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:39:47 compute-0 sudo[339986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:47 compute-0 sudo[339986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:47 compute-0 sudo[339986]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.149 251996 DEBUG nova.compute.manager [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.150 251996 DEBUG nova.network.neutron [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.172 251996 INFO nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:39:47 compute-0 sudo[340011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:39:47 compute-0 sudo[340011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:47 compute-0 sudo[340011]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:39:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.236 251996 DEBUG nova.compute.manager [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:39:47 compute-0 sudo[340036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:47 compute-0 sudo[340036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:47 compute-0 sudo[340036]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:47 compute-0 sudo[340061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:39:47 compute-0 sudo[340061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.364 251996 DEBUG nova.compute.manager [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.365 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.366 251996 INFO nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Creating image(s)
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.392 251996 DEBUG nova.storage.rbd_utils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.424 251996 DEBUG nova.storage.rbd_utils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.455 251996 DEBUG nova.storage.rbd_utils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.459 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.525 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.526 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.527 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.527 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.552 251996 DEBUG nova.storage.rbd_utils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.559 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b85968f0-ebd7-48f6-a932-c4e8da09381e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:47 compute-0 nova_compute[251992]: 2025-12-06 07:39:47.594 251996 DEBUG nova.policy [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a70f6c3c5e2c402bb6fa0e0507e9b6dc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b10aa03d68eb4d4799d53538521cc364', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:39:47 compute-0 podman[340203]: 2025-12-06 07:39:47.631034171 +0000 UTC m=+0.042204939 container create 7186a772f016f34227b7d93d925effca318b11e7fcbf1449439be74bc4589d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 07:39:47 compute-0 systemd[1]: Started libpod-conmon-7186a772f016f34227b7d93d925effca318b11e7fcbf1449439be74bc4589d19.scope.
Dec 06 07:39:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:39:47 compute-0 podman[340203]: 2025-12-06 07:39:47.612242455 +0000 UTC m=+0.023413223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:39:47 compute-0 podman[340203]: 2025-12-06 07:39:47.709916605 +0000 UTC m=+0.121087403 container init 7186a772f016f34227b7d93d925effca318b11e7fcbf1449439be74bc4589d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:39:47 compute-0 podman[340203]: 2025-12-06 07:39:47.718245624 +0000 UTC m=+0.129416392 container start 7186a772f016f34227b7d93d925effca318b11e7fcbf1449439be74bc4589d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:39:47 compute-0 podman[340203]: 2025-12-06 07:39:47.722416349 +0000 UTC m=+0.133909306 container attach 7186a772f016f34227b7d93d925effca318b11e7fcbf1449439be74bc4589d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 07:39:47 compute-0 laughing_elbakyan[340237]: 167 167
Dec 06 07:39:47 compute-0 systemd[1]: libpod-7186a772f016f34227b7d93d925effca318b11e7fcbf1449439be74bc4589d19.scope: Deactivated successfully.
Dec 06 07:39:47 compute-0 conmon[340237]: conmon 7186a772f016f34227b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7186a772f016f34227b7d93d925effca318b11e7fcbf1449439be74bc4589d19.scope/container/memory.events
Dec 06 07:39:47 compute-0 podman[340203]: 2025-12-06 07:39:47.729866863 +0000 UTC m=+0.141037631 container died 7186a772f016f34227b7d93d925effca318b11e7fcbf1449439be74bc4589d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elbakyan, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:39:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd581ec98e8d8ac36bd14c61f74df539570f753b4005b332ede01b5964d8a0ac-merged.mount: Deactivated successfully.
Dec 06 07:39:47 compute-0 podman[340203]: 2025-12-06 07:39:47.775958998 +0000 UTC m=+0.187129766 container remove 7186a772f016f34227b7d93d925effca318b11e7fcbf1449439be74bc4589d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elbakyan, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 07:39:47 compute-0 systemd[1]: libpod-conmon-7186a772f016f34227b7d93d925effca318b11e7fcbf1449439be74bc4589d19.scope: Deactivated successfully.
Dec 06 07:39:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:47.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:47 compute-0 podman[340261]: 2025-12-06 07:39:47.951606047 +0000 UTC m=+0.041169740 container create d9860936111b574ee303874e9de3dee10d325a3c87b7dd3d72d1e11e1aa64b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 07:39:47 compute-0 systemd[1]: Started libpod-conmon-d9860936111b574ee303874e9de3dee10d325a3c87b7dd3d72d1e11e1aa64b85.scope.
Dec 06 07:39:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a055c0232be94c08c4f7dadd6e0f47aa438356bbcc504d1c1a17b52dd34677/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a055c0232be94c08c4f7dadd6e0f47aa438356bbcc504d1c1a17b52dd34677/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a055c0232be94c08c4f7dadd6e0f47aa438356bbcc504d1c1a17b52dd34677/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a055c0232be94c08c4f7dadd6e0f47aa438356bbcc504d1c1a17b52dd34677/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15a055c0232be94c08c4f7dadd6e0f47aa438356bbcc504d1c1a17b52dd34677/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:48 compute-0 podman[340261]: 2025-12-06 07:39:47.93423887 +0000 UTC m=+0.023802583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:39:48 compute-0 podman[340261]: 2025-12-06 07:39:48.033212076 +0000 UTC m=+0.122775789 container init d9860936111b574ee303874e9de3dee10d325a3c87b7dd3d72d1e11e1aa64b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jemison, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 07:39:48 compute-0 podman[340261]: 2025-12-06 07:39:48.04029412 +0000 UTC m=+0.129857803 container start d9860936111b574ee303874e9de3dee10d325a3c87b7dd3d72d1e11e1aa64b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:39:48 compute-0 podman[340261]: 2025-12-06 07:39:48.043725754 +0000 UTC m=+0.133289467 container attach d9860936111b574ee303874e9de3dee10d325a3c87b7dd3d72d1e11e1aa64b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.073 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef b85968f0-ebd7-48f6-a932-c4e8da09381e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.165 251996 DEBUG nova.storage.rbd_utils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] resizing rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:39:48 compute-0 ceph-mon[74339]: pgmap v2511: 305 pgs: 305 active+clean; 772 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 5.0 KiB/s wr, 109 op/s
Dec 06 07:39:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2183177135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:39:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:39:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:39:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4058252904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.283 251996 DEBUG nova.compute.manager [req-1e39ad5a-d405-4a68-b658-52a50b17f19d req-3905ce96-db3c-40c3-9535-6ab201bd38ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.283 251996 DEBUG oslo_concurrency.lockutils [req-1e39ad5a-d405-4a68-b658-52a50b17f19d req-3905ce96-db3c-40c3-9535-6ab201bd38ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.284 251996 DEBUG oslo_concurrency.lockutils [req-1e39ad5a-d405-4a68-b658-52a50b17f19d req-3905ce96-db3c-40c3-9535-6ab201bd38ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.284 251996 DEBUG oslo_concurrency.lockutils [req-1e39ad5a-d405-4a68-b658-52a50b17f19d req-3905ce96-db3c-40c3-9535-6ab201bd38ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.284 251996 DEBUG nova.compute.manager [req-1e39ad5a-d405-4a68-b658-52a50b17f19d req-3905ce96-db3c-40c3-9535-6ab201bd38ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] No waiting events found dispatching network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.285 251996 WARNING nova.compute.manager [req-1e39ad5a-d405-4a68-b658-52a50b17f19d req-3905ce96-db3c-40c3-9535-6ab201bd38ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received unexpected event network-vif-plugged-e61ac68e-e534-4351-b3ce-b20fa32579fc for instance with vm_state active and task_state deleting.
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.290 251996 DEBUG nova.objects.instance [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'migration_context' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.346 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.346 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Ensure instance console log exists: /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.347 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.347 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.348 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:39:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:48.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.757 251996 DEBUG nova.network.neutron [-] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:39:48 compute-0 nova_compute[251992]: 2025-12-06 07:39:48.799 251996 INFO nova.compute.manager [-] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Took 2.39 seconds to deallocate network for instance.
Dec 06 07:39:48 compute-0 trusting_jemison[340277]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:39:48 compute-0 trusting_jemison[340277]: --> relative data size: 1.0
Dec 06 07:39:48 compute-0 trusting_jemison[340277]: --> All data devices are unavailable
Dec 06 07:39:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 776 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 37 KiB/s wr, 114 op/s
Dec 06 07:39:48 compute-0 systemd[1]: libpod-d9860936111b574ee303874e9de3dee10d325a3c87b7dd3d72d1e11e1aa64b85.scope: Deactivated successfully.
Dec 06 07:39:48 compute-0 podman[340261]: 2025-12-06 07:39:48.912586725 +0000 UTC m=+1.002150428 container died d9860936111b574ee303874e9de3dee10d325a3c87b7dd3d72d1e11e1aa64b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:39:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-15a055c0232be94c08c4f7dadd6e0f47aa438356bbcc504d1c1a17b52dd34677-merged.mount: Deactivated successfully.
Dec 06 07:39:48 compute-0 podman[340261]: 2025-12-06 07:39:48.961006654 +0000 UTC m=+1.050570347 container remove d9860936111b574ee303874e9de3dee10d325a3c87b7dd3d72d1e11e1aa64b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:39:48 compute-0 systemd[1]: libpod-conmon-d9860936111b574ee303874e9de3dee10d325a3c87b7dd3d72d1e11e1aa64b85.scope: Deactivated successfully.
Dec 06 07:39:48 compute-0 sudo[340061]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:49 compute-0 sudo[340377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:49 compute-0 sudo[340377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:49 compute-0 sudo[340377]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:49 compute-0 sudo[340402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:39:49 compute-0 sudo[340402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:49 compute-0 sudo[340402]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:49 compute-0 nova_compute[251992]: 2025-12-06 07:39:49.148 251996 DEBUG nova.compute.manager [req-ad2c67de-39cb-4f9b-afb6-112a66ba65a7 req-ea51ca03-3992-4623-95a9-47d06d55ee84 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Received event network-vif-deleted-e61ac68e-e534-4351-b3ce-b20fa32579fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:49 compute-0 sudo[340427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:49 compute-0 nova_compute[251992]: 2025-12-06 07:39:49.162 251996 INFO nova.compute.manager [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Took 0.36 seconds to detach 1 volumes for instance.
Dec 06 07:39:49 compute-0 sudo[340427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:49 compute-0 sudo[340427]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:49 compute-0 sudo[340452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:39:49 compute-0 sudo[340452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:49 compute-0 nova_compute[251992]: 2025-12-06 07:39:49.227 251996 DEBUG oslo_concurrency.lockutils [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:49 compute-0 nova_compute[251992]: 2025-12-06 07:39:49.228 251996 DEBUG oslo_concurrency.lockutils [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:49 compute-0 ceph-mon[74339]: pgmap v2512: 305 pgs: 305 active+clean; 776 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 37 KiB/s wr, 114 op/s
Dec 06 07:39:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:49 compute-0 nova_compute[251992]: 2025-12-06 07:39:49.402 251996 DEBUG oslo_concurrency.processutils [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:49 compute-0 podman[340519]: 2025-12-06 07:39:49.521939056 +0000 UTC m=+0.037549402 container create 0baf582772b324219bcee41e9116d73bb14a0bf84ecf553e7dbef921ca85b989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 07:39:49 compute-0 systemd[1]: Started libpod-conmon-0baf582772b324219bcee41e9116d73bb14a0bf84ecf553e7dbef921ca85b989.scope.
Dec 06 07:39:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:39:49 compute-0 podman[340519]: 2025-12-06 07:39:49.506479941 +0000 UTC m=+0.022090307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:39:49 compute-0 podman[340519]: 2025-12-06 07:39:49.607287907 +0000 UTC m=+0.122898263 container init 0baf582772b324219bcee41e9116d73bb14a0bf84ecf553e7dbef921ca85b989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 07:39:49 compute-0 podman[340519]: 2025-12-06 07:39:49.615138573 +0000 UTC m=+0.130748919 container start 0baf582772b324219bcee41e9116d73bb14a0bf84ecf553e7dbef921ca85b989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:39:49 compute-0 busy_keldysh[340552]: 167 167
Dec 06 07:39:49 compute-0 systemd[1]: libpod-0baf582772b324219bcee41e9116d73bb14a0bf84ecf553e7dbef921ca85b989.scope: Deactivated successfully.
Dec 06 07:39:49 compute-0 conmon[340552]: conmon 0baf582772b324219bce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0baf582772b324219bcee41e9116d73bb14a0bf84ecf553e7dbef921ca85b989.scope/container/memory.events
Dec 06 07:39:49 compute-0 podman[340519]: 2025-12-06 07:39:49.620849689 +0000 UTC m=+0.136460065 container attach 0baf582772b324219bcee41e9116d73bb14a0bf84ecf553e7dbef921ca85b989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:39:49 compute-0 podman[340519]: 2025-12-06 07:39:49.621165788 +0000 UTC m=+0.136776134 container died 0baf582772b324219bcee41e9116d73bb14a0bf84ecf553e7dbef921ca85b989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:39:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-31eb4f2bc74ea7eabefccfe717d5fb93934ebf2cb162b4196892ca610922715e-merged.mount: Deactivated successfully.
Dec 06 07:39:49 compute-0 podman[340519]: 2025-12-06 07:39:49.653194277 +0000 UTC m=+0.168804623 container remove 0baf582772b324219bcee41e9116d73bb14a0bf84ecf553e7dbef921ca85b989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_keldysh, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 07:39:49 compute-0 systemd[1]: libpod-conmon-0baf582772b324219bcee41e9116d73bb14a0bf84ecf553e7dbef921ca85b989.scope: Deactivated successfully.
Dec 06 07:39:49 compute-0 podman[340577]: 2025-12-06 07:39:49.835680464 +0000 UTC m=+0.044630555 container create 19654ea5c30e3f0e1325e83d4e3ca0b5166a60221158989b9310192baa7d507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_turing, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 07:39:49 compute-0 systemd[1]: Started libpod-conmon-19654ea5c30e3f0e1325e83d4e3ca0b5166a60221158989b9310192baa7d507e.scope.
Dec 06 07:39:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:39:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2086763005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:39:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:49.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:39:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aeef079adba90397a0c9060785db23f1a0b730d7d0888c7c3337cc4a31524dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:49 compute-0 podman[340577]: 2025-12-06 07:39:49.819066338 +0000 UTC m=+0.028016429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aeef079adba90397a0c9060785db23f1a0b730d7d0888c7c3337cc4a31524dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aeef079adba90397a0c9060785db23f1a0b730d7d0888c7c3337cc4a31524dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aeef079adba90397a0c9060785db23f1a0b730d7d0888c7c3337cc4a31524dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:49 compute-0 nova_compute[251992]: 2025-12-06 07:39:49.920 251996 DEBUG oslo_concurrency.processutils [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:49 compute-0 nova_compute[251992]: 2025-12-06 07:39:49.929 251996 DEBUG nova.compute.provider_tree [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:39:49 compute-0 podman[340577]: 2025-12-06 07:39:49.932017187 +0000 UTC m=+0.140967298 container init 19654ea5c30e3f0e1325e83d4e3ca0b5166a60221158989b9310192baa7d507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_turing, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 07:39:49 compute-0 podman[340577]: 2025-12-06 07:39:49.940444189 +0000 UTC m=+0.149394280 container start 19654ea5c30e3f0e1325e83d4e3ca0b5166a60221158989b9310192baa7d507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:39:49 compute-0 podman[340577]: 2025-12-06 07:39:49.943447601 +0000 UTC m=+0.152397722 container attach 19654ea5c30e3f0e1325e83d4e3ca0b5166a60221158989b9310192baa7d507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_turing, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:39:49 compute-0 nova_compute[251992]: 2025-12-06 07:39:49.953 251996 DEBUG nova.scheduler.client.report [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:39:49 compute-0 nova_compute[251992]: 2025-12-06 07:39:49.994 251996 DEBUG oslo_concurrency.lockutils [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:50 compute-0 nova_compute[251992]: 2025-12-06 07:39:50.030 251996 DEBUG nova.network.neutron [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Successfully created port: f1f563a8-9001-419f-858a-0213c5d6607a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:39:50 compute-0 nova_compute[251992]: 2025-12-06 07:39:50.033 251996 INFO nova.scheduler.client.report [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Deleted allocations for instance c2e6b8fd-375c-4658-b338-f2d334041ba3
Dec 06 07:39:50 compute-0 nova_compute[251992]: 2025-12-06 07:39:50.121 251996 DEBUG oslo_concurrency.lockutils [None req-8efbc8e2-3456-406f-bd8b-74bb7eb562d4 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "c2e6b8fd-375c-4658-b338-f2d334041ba3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Dec 06 07:39:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2086763005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Dec 06 07:39:50 compute-0 nova_compute[251992]: 2025-12-06 07:39:50.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:50 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Dec 06 07:39:50 compute-0 podman[340601]: 2025-12-06 07:39:50.410000843 +0000 UTC m=+0.060137331 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:39:50 compute-0 podman[340602]: 2025-12-06 07:39:50.441170468 +0000 UTC m=+0.092015826 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 06 07:39:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:39:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:50.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:39:50 compute-0 nova_compute[251992]: 2025-12-06 07:39:50.694 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:50 compute-0 focused_turing[340593]: {
Dec 06 07:39:50 compute-0 focused_turing[340593]:     "0": [
Dec 06 07:39:50 compute-0 focused_turing[340593]:         {
Dec 06 07:39:50 compute-0 focused_turing[340593]:             "devices": [
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "/dev/loop3"
Dec 06 07:39:50 compute-0 focused_turing[340593]:             ],
Dec 06 07:39:50 compute-0 focused_turing[340593]:             "lv_name": "ceph_lv0",
Dec 06 07:39:50 compute-0 focused_turing[340593]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:39:50 compute-0 focused_turing[340593]:             "lv_size": "7511998464",
Dec 06 07:39:50 compute-0 focused_turing[340593]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:39:50 compute-0 focused_turing[340593]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:39:50 compute-0 focused_turing[340593]:             "name": "ceph_lv0",
Dec 06 07:39:50 compute-0 focused_turing[340593]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:39:50 compute-0 focused_turing[340593]:             "tags": {
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "ceph.cluster_name": "ceph",
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "ceph.crush_device_class": "",
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "ceph.encrypted": "0",
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "ceph.osd_id": "0",
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "ceph.type": "block",
Dec 06 07:39:50 compute-0 focused_turing[340593]:                 "ceph.vdo": "0"
Dec 06 07:39:50 compute-0 focused_turing[340593]:             },
Dec 06 07:39:50 compute-0 focused_turing[340593]:             "type": "block",
Dec 06 07:39:50 compute-0 focused_turing[340593]:             "vg_name": "ceph_vg0"
Dec 06 07:39:50 compute-0 focused_turing[340593]:         }
Dec 06 07:39:50 compute-0 focused_turing[340593]:     ]
Dec 06 07:39:50 compute-0 focused_turing[340593]: }
Dec 06 07:39:50 compute-0 systemd[1]: libpod-19654ea5c30e3f0e1325e83d4e3ca0b5166a60221158989b9310192baa7d507e.scope: Deactivated successfully.
Dec 06 07:39:50 compute-0 podman[340577]: 2025-12-06 07:39:50.746091085 +0000 UTC m=+0.955041176 container died 19654ea5c30e3f0e1325e83d4e3ca0b5166a60221158989b9310192baa7d507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aeef079adba90397a0c9060785db23f1a0b730d7d0888c7c3337cc4a31524dc-merged.mount: Deactivated successfully.
Dec 06 07:39:50 compute-0 podman[340577]: 2025-12-06 07:39:50.796337304 +0000 UTC m=+1.005287395 container remove 19654ea5c30e3f0e1325e83d4e3ca0b5166a60221158989b9310192baa7d507e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Dec 06 07:39:50 compute-0 systemd[1]: libpod-conmon-19654ea5c30e3f0e1325e83d4e3ca0b5166a60221158989b9310192baa7d507e.scope: Deactivated successfully.
Dec 06 07:39:50 compute-0 sudo[340452]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:50 compute-0 auditd[702]: Audit daemon rotating log files
Dec 06 07:39:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 806 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 2.3 MiB/s wr, 89 op/s
Dec 06 07:39:50 compute-0 sudo[340657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:50 compute-0 sudo[340657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:50 compute-0 sudo[340657]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:50 compute-0 sudo[340682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:39:50 compute-0 sudo[340682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:50 compute-0 sudo[340682]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:51 compute-0 sudo[340707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:51 compute-0 sudo[340707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:51 compute-0 sudo[340707]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:51 compute-0 sudo[340733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:39:51 compute-0 sudo[340733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:51 compute-0 nova_compute[251992]: 2025-12-06 07:39:51.260 251996 DEBUG nova.network.neutron [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Successfully updated port: f1f563a8-9001-419f-858a-0213c5d6607a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:39:51 compute-0 nova_compute[251992]: 2025-12-06 07:39:51.282 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:39:51 compute-0 nova_compute[251992]: 2025-12-06 07:39:51.283 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:39:51 compute-0 nova_compute[251992]: 2025-12-06 07:39:51.283 251996 DEBUG nova.network.neutron [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:39:51 compute-0 podman[340797]: 2025-12-06 07:39:51.375972709 +0000 UTC m=+0.038975921 container create fdd92085f0d3e3f4cda1f89ed7243430cab08e10214f7a9233707da144b01255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 07:39:51 compute-0 systemd[1]: Started libpod-conmon-fdd92085f0d3e3f4cda1f89ed7243430cab08e10214f7a9233707da144b01255.scope.
Dec 06 07:39:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:39:51 compute-0 nova_compute[251992]: 2025-12-06 07:39:51.439 251996 DEBUG nova.compute.manager [req-83ca7808-0567-466c-ade7-4e2a05c0664a req-892fa148-ca0b-4907-bab9-33adbd9f48fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-changed-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:51 compute-0 nova_compute[251992]: 2025-12-06 07:39:51.440 251996 DEBUG nova.compute.manager [req-83ca7808-0567-466c-ade7-4e2a05c0664a req-892fa148-ca0b-4907-bab9-33adbd9f48fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Refreshing instance network info cache due to event network-changed-f1f563a8-9001-419f-858a-0213c5d6607a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:39:51 compute-0 nova_compute[251992]: 2025-12-06 07:39:51.440 251996 DEBUG oslo_concurrency.lockutils [req-83ca7808-0567-466c-ade7-4e2a05c0664a req-892fa148-ca0b-4907-bab9-33adbd9f48fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:39:51 compute-0 podman[340797]: 2025-12-06 07:39:51.452272692 +0000 UTC m=+0.115275924 container init fdd92085f0d3e3f4cda1f89ed7243430cab08e10214f7a9233707da144b01255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rosalind, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 07:39:51 compute-0 podman[340797]: 2025-12-06 07:39:51.358644753 +0000 UTC m=+0.021647995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:39:51 compute-0 podman[340797]: 2025-12-06 07:39:51.45946707 +0000 UTC m=+0.122470272 container start fdd92085f0d3e3f4cda1f89ed7243430cab08e10214f7a9233707da144b01255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rosalind, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:39:51 compute-0 podman[340797]: 2025-12-06 07:39:51.461871145 +0000 UTC m=+0.124874357 container attach fdd92085f0d3e3f4cda1f89ed7243430cab08e10214f7a9233707da144b01255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rosalind, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:39:51 compute-0 busy_rosalind[340814]: 167 167
Dec 06 07:39:51 compute-0 systemd[1]: libpod-fdd92085f0d3e3f4cda1f89ed7243430cab08e10214f7a9233707da144b01255.scope: Deactivated successfully.
Dec 06 07:39:51 compute-0 podman[340797]: 2025-12-06 07:39:51.465544176 +0000 UTC m=+0.128547388 container died fdd92085f0d3e3f4cda1f89ed7243430cab08e10214f7a9233707da144b01255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 07:39:51 compute-0 ceph-mon[74339]: osdmap e313: 3 total, 3 up, 3 in
Dec 06 07:39:51 compute-0 ceph-mon[74339]: pgmap v2514: 305 pgs: 305 active+clean; 806 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 2.3 MiB/s wr, 89 op/s
Dec 06 07:39:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a2ede41450a8f2137f325b6c20bb15bae495c822ca92ee99374b19ce8c754d1-merged.mount: Deactivated successfully.
Dec 06 07:39:51 compute-0 podman[340797]: 2025-12-06 07:39:51.506934051 +0000 UTC m=+0.169937263 container remove fdd92085f0d3e3f4cda1f89ed7243430cab08e10214f7a9233707da144b01255 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rosalind, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:39:51 compute-0 systemd[1]: libpod-conmon-fdd92085f0d3e3f4cda1f89ed7243430cab08e10214f7a9233707da144b01255.scope: Deactivated successfully.
Dec 06 07:39:51 compute-0 nova_compute[251992]: 2025-12-06 07:39:51.523 251996 DEBUG nova.network.neutron [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:39:51 compute-0 podman[340839]: 2025-12-06 07:39:51.690403236 +0000 UTC m=+0.048592714 container create bd63dff12b0cd48a8eb4bb655d3b1095a7500abffc5a4454d19fb7099f129d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wilson, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 07:39:51 compute-0 systemd[1]: Started libpod-conmon-bd63dff12b0cd48a8eb4bb655d3b1095a7500abffc5a4454d19fb7099f129d17.scope.
Dec 06 07:39:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42569aa93e7a95a7ad3290aa45b597669240652c616ce55ef8dabd333419f0f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42569aa93e7a95a7ad3290aa45b597669240652c616ce55ef8dabd333419f0f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42569aa93e7a95a7ad3290aa45b597669240652c616ce55ef8dabd333419f0f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42569aa93e7a95a7ad3290aa45b597669240652c616ce55ef8dabd333419f0f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:39:51 compute-0 podman[340839]: 2025-12-06 07:39:51.754448514 +0000 UTC m=+0.112637992 container init bd63dff12b0cd48a8eb4bb655d3b1095a7500abffc5a4454d19fb7099f129d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 07:39:51 compute-0 podman[340839]: 2025-12-06 07:39:51.66611885 +0000 UTC m=+0.024308358 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:39:51 compute-0 podman[340839]: 2025-12-06 07:39:51.764200761 +0000 UTC m=+0.122390239 container start bd63dff12b0cd48a8eb4bb655d3b1095a7500abffc5a4454d19fb7099f129d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:39:51 compute-0 podman[340839]: 2025-12-06 07:39:51.769176947 +0000 UTC m=+0.127366445 container attach bd63dff12b0cd48a8eb4bb655d3b1095a7500abffc5a4454d19fb7099f129d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wilson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 07:39:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:51.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Dec 06 07:39:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Dec 06 07:39:52 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Dec 06 07:39:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2420898333' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:39:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2420898333' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:39:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:52.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:52 compute-0 sweet_wilson[340855]: {
Dec 06 07:39:52 compute-0 sweet_wilson[340855]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:39:52 compute-0 sweet_wilson[340855]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:39:52 compute-0 sweet_wilson[340855]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:39:52 compute-0 sweet_wilson[340855]:         "osd_id": 0,
Dec 06 07:39:52 compute-0 sweet_wilson[340855]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:39:52 compute-0 sweet_wilson[340855]:         "type": "bluestore"
Dec 06 07:39:52 compute-0 sweet_wilson[340855]:     }
Dec 06 07:39:52 compute-0 sweet_wilson[340855]: }
Dec 06 07:39:52 compute-0 systemd[1]: libpod-bd63dff12b0cd48a8eb4bb655d3b1095a7500abffc5a4454d19fb7099f129d17.scope: Deactivated successfully.
Dec 06 07:39:52 compute-0 podman[340839]: 2025-12-06 07:39:52.623313374 +0000 UTC m=+0.981502852 container died bd63dff12b0cd48a8eb4bb655d3b1095a7500abffc5a4454d19fb7099f129d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wilson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 07:39:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-42569aa93e7a95a7ad3290aa45b597669240652c616ce55ef8dabd333419f0f5-merged.mount: Deactivated successfully.
Dec 06 07:39:52 compute-0 podman[340839]: 2025-12-06 07:39:52.676419122 +0000 UTC m=+1.034608600 container remove bd63dff12b0cd48a8eb4bb655d3b1095a7500abffc5a4454d19fb7099f129d17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:39:52 compute-0 systemd[1]: libpod-conmon-bd63dff12b0cd48a8eb4bb655d3b1095a7500abffc5a4454d19fb7099f129d17.scope: Deactivated successfully.
Dec 06 07:39:52 compute-0 sudo[340733]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:39:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:39:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:52 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3494887b-f07e-4841-a2dc-92ed5aa50896 does not exist
Dec 06 07:39:52 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f15fb430-2f7a-4a49-b22d-5d2c8b283cad does not exist
Dec 06 07:39:52 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 44fc7c15-ae05-4cc6-a1e1-a3261e02dda7 does not exist
Dec 06 07:39:52 compute-0 sudo[340890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:52 compute-0 sudo[340890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:52 compute-0 sudo[340890]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:52 compute-0 sudo[340915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:39:52 compute-0 sudo[340915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:52 compute-0 sudo[340915]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 793 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 2.7 MiB/s wr, 146 op/s
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.888 251996 DEBUG nova.network.neutron [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updating instance_info_cache with network_info: [{"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.910 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.911 251996 DEBUG nova.compute.manager [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Instance network_info: |[{"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.911 251996 DEBUG oslo_concurrency.lockutils [req-83ca7808-0567-466c-ade7-4e2a05c0664a req-892fa148-ca0b-4907-bab9-33adbd9f48fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.911 251996 DEBUG nova.network.neutron [req-83ca7808-0567-466c-ade7-4e2a05c0664a req-892fa148-ca0b-4907-bab9-33adbd9f48fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Refreshing network info cache for port f1f563a8-9001-419f-858a-0213c5d6607a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.914 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Start _get_guest_xml network_info=[{"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.919 251996 WARNING nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.924 251996 DEBUG nova.virt.libvirt.host [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.925 251996 DEBUG nova.virt.libvirt.host [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.931 251996 DEBUG nova.virt.libvirt.host [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.932 251996 DEBUG nova.virt.libvirt.host [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.933 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.933 251996 DEBUG nova.virt.hardware [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.933 251996 DEBUG nova.virt.hardware [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.933 251996 DEBUG nova.virt.hardware [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.934 251996 DEBUG nova.virt.hardware [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.934 251996 DEBUG nova.virt.hardware [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.934 251996 DEBUG nova.virt.hardware [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.934 251996 DEBUG nova.virt.hardware [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.934 251996 DEBUG nova.virt.hardware [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.935 251996 DEBUG nova.virt.hardware [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.935 251996 DEBUG nova.virt.hardware [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.935 251996 DEBUG nova.virt.hardware [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:39:52 compute-0 nova_compute[251992]: 2025-12-06 07:39:52.938 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:39:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204668804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.385 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.414 251996 DEBUG nova.storage.rbd_utils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.418 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.688 251996 DEBUG oslo_concurrency.lockutils [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "70928eda-043f-429b-aa4e-af1f3189a7c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.688 251996 DEBUG oslo_concurrency.lockutils [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.689 251996 DEBUG oslo_concurrency.lockutils [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.689 251996 DEBUG oslo_concurrency.lockutils [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.689 251996 DEBUG oslo_concurrency.lockutils [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:53 compute-0 ceph-mon[74339]: osdmap e314: 3 total, 3 up, 3 in
Dec 06 07:39:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3637923805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:39:53 compute-0 ceph-mon[74339]: pgmap v2516: 305 pgs: 305 active+clean; 793 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 2.7 MiB/s wr, 146 op/s
Dec 06 07:39:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4204668804' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.690 251996 INFO nova.compute.manager [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Terminating instance
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.693 251996 DEBUG nova.compute.manager [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:39:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Dec 06 07:39:53 compute-0 kernel: tapdc251d3b-d5 (unregistering): left promiscuous mode
Dec 06 07:39:53 compute-0 NetworkManager[48965]: <info>  [1765006793.7603] device (tapdc251d3b-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:39:53 compute-0 ovn_controller[147168]: 2025-12-06T07:39:53Z|00508|binding|INFO|Releasing lport dc251d3b-d52d-4043-b1db-dc8528b247d0 from this chassis (sb_readonly=0)
Dec 06 07:39:53 compute-0 ovn_controller[147168]: 2025-12-06T07:39:53Z|00509|binding|INFO|Setting lport dc251d3b-d52d-4043-b1db-dc8528b247d0 down in Southbound
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.767 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:53 compute-0 ovn_controller[147168]: 2025-12-06T07:39:53Z|00510|binding|INFO|Removing iface tapdc251d3b-d5 ovn-installed in OVS
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.771 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Dec 06 07:39:53 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.789 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:53.812 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:63:d9 10.100.0.9'], port_security=['fa:16:3e:63:63:d9 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '70928eda-043f-429b-aa4e-af1f3189a7c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40bc9d32-839b-4591-acbc-c5d535123ff1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17cdfa63c4424ec7a0eb4bb3d7372c14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '953f477d-4c58-4746-93a0-d2fe9cd53d10', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=13d96725-86c3-401b-a660-53c6583b3389, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=dc251d3b-d52d-4043-b1db-dc8528b247d0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:39:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:53.813 158118 INFO neutron.agent.ovn.metadata.agent [-] Port dc251d3b-d52d-4043-b1db-dc8528b247d0 in datapath 40bc9d32-839b-4591-acbc-c5d535123ff1 unbound from our chassis
Dec 06 07:39:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:53.814 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 40bc9d32-839b-4591-acbc-c5d535123ff1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:39:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:53.816 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8fc571-a142-4a69-bac7-8e41f48e9643]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:53.817 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1 namespace which is not needed anymore
Dec 06 07:39:53 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000079.scope: Deactivated successfully.
Dec 06 07:39:53 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000079.scope: Consumed 28.311s CPU time.
Dec 06 07:39:53 compute-0 systemd-machined[212986]: Machine qemu-54-instance-00000079 terminated.
Dec 06 07:39:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:53.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.930 251996 INFO nova.virt.libvirt.driver [-] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Instance destroyed successfully.
Dec 06 07:39:53 compute-0 nova_compute[251992]: 2025-12-06 07:39:53.931 251996 DEBUG nova.objects.instance [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lazy-loading 'resources' on Instance uuid 70928eda-043f-429b-aa4e-af1f3189a7c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:39:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1604656726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:53 compute-0 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[330023]: [NOTICE]   (330027) : haproxy version is 2.8.14-c23fe91
Dec 06 07:39:53 compute-0 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[330023]: [NOTICE]   (330027) : path to executable is /usr/sbin/haproxy
Dec 06 07:39:53 compute-0 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[330023]: [WARNING]  (330027) : Exiting Master process...
Dec 06 07:39:53 compute-0 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[330023]: [WARNING]  (330027) : Exiting Master process...
Dec 06 07:39:53 compute-0 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[330023]: [ALERT]    (330027) : Current worker (330029) exited with code 143 (Terminated)
Dec 06 07:39:53 compute-0 neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1[330023]: [WARNING]  (330027) : All workers exited. Exiting... (0)
Dec 06 07:39:53 compute-0 systemd[1]: libpod-744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31.scope: Deactivated successfully.
Dec 06 07:39:54 compute-0 podman[341025]: 2025-12-06 07:39:54.003522646 +0000 UTC m=+0.096251352 container died 744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.009 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.010 251996 DEBUG nova.virt.libvirt.vif [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:39:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1033932756',display_name='tempest-ServerActionsTestOtherB-server-1033932756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1033932756',id=135,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-kbdg07ib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:39:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=b85968f0-ebd7-48f6-a932-c4e8da09381e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.010 251996 DEBUG nova.network.os_vif_util [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.011 251996 DEBUG nova.network.os_vif_util [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.012 251996 DEBUG nova.objects.instance [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'pci_devices' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.039 251996 DEBUG nova.virt.libvirt.vif [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:33:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-840641308',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-840641308',id=121,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:33:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='17cdfa63c4424ec7a0eb4bb3d7372c14',ramdisk_id='',reservation_id='r-moptynqn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_
min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-344238221',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-344238221-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:33:59Z,user_data=None,user_id='2aa5b15c15f84a8cb24776d5c781eb09',uuid=70928eda-043f-429b-aa4e-af1f3189a7c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.040 251996 DEBUG nova.network.os_vif_util [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converting VIF {"id": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "address": "fa:16:3e:63:63:d9", "network": {"id": "40bc9d32-839b-4591-acbc-c5d535123ff1", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-488326816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17cdfa63c4424ec7a0eb4bb3d7372c14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc251d3b-d5", "ovs_interfaceid": "dc251d3b-d52d-4043-b1db-dc8528b247d0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.040 251996 DEBUG nova.network.os_vif_util [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:63:d9,bridge_name='br-int',has_traffic_filtering=True,id=dc251d3b-d52d-4043-b1db-dc8528b247d0,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc251d3b-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.041 251996 DEBUG os_vif [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:63:d9,bridge_name='br-int',has_traffic_filtering=True,id=dc251d3b-d52d-4043-b1db-dc8528b247d0,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc251d3b-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.043 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.044 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc251d3b-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.047 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.048 251996 INFO os_vif [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:63:d9,bridge_name='br-int',has_traffic_filtering=True,id=dc251d3b-d52d-4043-b1db-dc8528b247d0,network=Network(40bc9d32-839b-4591-acbc-c5d535123ff1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc251d3b-d5')
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.113 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:39:54 compute-0 nova_compute[251992]:   <uuid>b85968f0-ebd7-48f6-a932-c4e8da09381e</uuid>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   <name>instance-00000087</name>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerActionsTestOtherB-server-1033932756</nova:name>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:39:52</nova:creationTime>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <nova:user uuid="a70f6c3c5e2c402bb6fa0e0507e9b6dc">tempest-ServerActionsTestOtherB-874907570-project-member</nova:user>
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <nova:project uuid="b10aa03d68eb4d4799d53538521cc364">tempest-ServerActionsTestOtherB-874907570</nova:project>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <nova:port uuid="f1f563a8-9001-419f-858a-0213c5d6607a">
Dec 06 07:39:54 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <system>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <entry name="serial">b85968f0-ebd7-48f6-a932-c4e8da09381e</entry>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <entry name="uuid">b85968f0-ebd7-48f6-a932-c4e8da09381e</entry>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     </system>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   <os>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   </os>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   <features>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   </features>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/b85968f0-ebd7-48f6-a932-c4e8da09381e_disk">
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       </source>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/b85968f0-ebd7-48f6-a932-c4e8da09381e_disk.config">
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       </source>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:39:54 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:48:b4:88"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <target dev="tapf1f563a8-90"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/console.log" append="off"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <video>
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     </video>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:39:54 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:39:54 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:39:54 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:39:54 compute-0 nova_compute[251992]: </domain>
Dec 06 07:39:54 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.114 251996 DEBUG nova.compute.manager [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Preparing to wait for external event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.114 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.114 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.114 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.115 251996 DEBUG nova.virt.libvirt.vif [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:39:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1033932756',display_name='tempest-ServerActionsTestOtherB-server-1033932756',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1033932756',id=135,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-kbdg07ib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:39:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=b85968f0-ebd7-48f6-a932-c4e8da09381e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.115 251996 DEBUG nova.network.os_vif_util [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.116 251996 DEBUG nova.network.os_vif_util [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.116 251996 DEBUG os_vif [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.116 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.117 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.117 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.119 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.119 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1f563a8-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.119 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1f563a8-90, col_values=(('external_ids', {'iface-id': 'f1f563a8-9001-419f-858a-0213c5d6607a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:b4:88', 'vm-uuid': 'b85968f0-ebd7-48f6-a932-c4e8da09381e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:54 compute-0 NetworkManager[48965]: <info>  [1765006794.1218] manager: (tapf1f563a8-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.123 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.125 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.125 251996 INFO os_vif [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90')
Dec 06 07:39:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31-userdata-shm.mount: Deactivated successfully.
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.498 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.499 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.499 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No VIF found with MAC fa:16:3e:48:b4:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.499 251996 INFO nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Using config drive
Dec 06 07:39:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-9caba10c024b8ae751ba6f3ad6a10afe0dd02e2e759963162327dfaef4c9d31c-merged.mount: Deactivated successfully.
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.524 251996 DEBUG nova.storage.rbd_utils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:39:54 compute-0 podman[341025]: 2025-12-06 07:39:54.526226528 +0000 UTC m=+0.618955234 container cleanup 744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:39:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:54.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:54 compute-0 podman[341104]: 2025-12-06 07:39:54.593475393 +0000 UTC m=+0.047384531 container remove 744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:39:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:54.600 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[45d83e1f-1128-49a0-81cb-4215236947b8]: (4, ('Sat Dec  6 07:39:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1 (744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31)\n744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31\nSat Dec  6 07:39:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1 (744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31)\n744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:54 compute-0 systemd[1]: libpod-conmon-744411cf5346a48846d91b60443213f89444f67d4e59317f5837571dd8f5ac31.scope: Deactivated successfully.
Dec 06 07:39:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:54.602 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9d283bd5-a340-4c3e-9c79-dd9050cffdc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:54.603 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40bc9d32-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:54 compute-0 kernel: tap40bc9d32-80: left promiscuous mode
Dec 06 07:39:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:54.633 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6010f6dc-2ffb-4d5c-8a88-9e4196850de3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:54.649 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[baec2435-0704-45ec-a67f-60895d65b173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:54.651 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbd8a7d-7fe1-4548-bd87-d5c4568aa628]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:54.667 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[82332ab3-853a-472d-ad07-4863c9df5447]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669263, 'reachable_time': 29963, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341121, 'error': None, 'target': 'ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d40bc9d32\x2d839b\x2d4591\x2dacbc\x2dc5d535123ff1.mount: Deactivated successfully.
Dec 06 07:39:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:54.670 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-40bc9d32-839b-4591-acbc-c5d535123ff1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:39:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:54.671 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[8683f2d3-0e3f-4364-988b-31fb5db745c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.738 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.741 251996 DEBUG nova.compute.manager [req-0b1a5639-9849-45ea-8423-196e8637cb5f req-65f4b5f0-9a74-4678-abcd-2cb741b8b6c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Received event network-vif-unplugged-dc251d3b-d52d-4043-b1db-dc8528b247d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.741 251996 DEBUG oslo_concurrency.lockutils [req-0b1a5639-9849-45ea-8423-196e8637cb5f req-65f4b5f0-9a74-4678-abcd-2cb741b8b6c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.741 251996 DEBUG oslo_concurrency.lockutils [req-0b1a5639-9849-45ea-8423-196e8637cb5f req-65f4b5f0-9a74-4678-abcd-2cb741b8b6c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.741 251996 DEBUG oslo_concurrency.lockutils [req-0b1a5639-9849-45ea-8423-196e8637cb5f req-65f4b5f0-9a74-4678-abcd-2cb741b8b6c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.742 251996 DEBUG nova.compute.manager [req-0b1a5639-9849-45ea-8423-196e8637cb5f req-65f4b5f0-9a74-4678-abcd-2cb741b8b6c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] No waiting events found dispatching network-vif-unplugged-dc251d3b-d52d-4043-b1db-dc8528b247d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:54 compute-0 nova_compute[251992]: 2025-12-06 07:39:54.742 251996 DEBUG nova.compute.manager [req-0b1a5639-9849-45ea-8423-196e8637cb5f req-65f4b5f0-9a74-4678-abcd-2cb741b8b6c8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Received event network-vif-unplugged-dc251d3b-d52d-4043-b1db-dc8528b247d0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:39:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 793 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 3.5 MiB/s wr, 130 op/s
Dec 06 07:39:54 compute-0 ceph-mon[74339]: osdmap e315: 3 total, 3 up, 3 in
Dec 06 07:39:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1604656726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:55 compute-0 nova_compute[251992]: 2025-12-06 07:39:55.037 251996 DEBUG nova.network.neutron [req-83ca7808-0567-466c-ade7-4e2a05c0664a req-892fa148-ca0b-4907-bab9-33adbd9f48fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updated VIF entry in instance network info cache for port f1f563a8-9001-419f-858a-0213c5d6607a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:39:55 compute-0 nova_compute[251992]: 2025-12-06 07:39:55.038 251996 DEBUG nova.network.neutron [req-83ca7808-0567-466c-ade7-4e2a05c0664a req-892fa148-ca0b-4907-bab9-33adbd9f48fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updating instance_info_cache with network_info: [{"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:39:55 compute-0 nova_compute[251992]: 2025-12-06 07:39:55.056 251996 DEBUG oslo_concurrency.lockutils [req-83ca7808-0567-466c-ade7-4e2a05c0664a req-892fa148-ca0b-4907-bab9-33adbd9f48fc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:39:55 compute-0 nova_compute[251992]: 2025-12-06 07:39:55.199 251996 INFO nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Creating config drive at /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/disk.config
Dec 06 07:39:55 compute-0 nova_compute[251992]: 2025-12-06 07:39:55.205 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8uffr1tu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:55 compute-0 nova_compute[251992]: 2025-12-06 07:39:55.338 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8uffr1tu" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:55 compute-0 nova_compute[251992]: 2025-12-06 07:39:55.370 251996 DEBUG nova.storage.rbd_utils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] rbd image b85968f0-ebd7-48f6-a932-c4e8da09381e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:39:55 compute-0 nova_compute[251992]: 2025-12-06 07:39:55.374 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/disk.config b85968f0-ebd7-48f6-a932-c4e8da09381e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:55 compute-0 nova_compute[251992]: 2025-12-06 07:39:55.400 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:55.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:56 compute-0 ceph-mon[74339]: pgmap v2518: 305 pgs: 305 active+clean; 793 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 3.5 MiB/s wr, 130 op/s
Dec 06 07:39:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2926417048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.041 251996 DEBUG oslo_concurrency.processutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/disk.config b85968f0-ebd7-48f6-a932-c4e8da09381e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.042 251996 INFO nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Deleting local config drive /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e/disk.config because it was imported into RBD.
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.053 251996 INFO nova.virt.libvirt.driver [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Deleting instance files /var/lib/nova/instances/70928eda-043f-429b-aa4e-af1f3189a7c1_del
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.055 251996 INFO nova.virt.libvirt.driver [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Deletion of /var/lib/nova/instances/70928eda-043f-429b-aa4e-af1f3189a7c1_del complete
Dec 06 07:39:56 compute-0 kernel: tapf1f563a8-90: entered promiscuous mode
Dec 06 07:39:56 compute-0 systemd-udevd[341004]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:39:56 compute-0 NetworkManager[48965]: <info>  [1765006796.0904] manager: (tapf1f563a8-90): new Tun device (/org/freedesktop/NetworkManager/Devices/238)
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.090 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:56 compute-0 ovn_controller[147168]: 2025-12-06T07:39:56Z|00511|binding|INFO|Claiming lport f1f563a8-9001-419f-858a-0213c5d6607a for this chassis.
Dec 06 07:39:56 compute-0 ovn_controller[147168]: 2025-12-06T07:39:56Z|00512|binding|INFO|f1f563a8-9001-419f-858a-0213c5d6607a: Claiming fa:16:3e:48:b4:88 10.100.0.3
Dec 06 07:39:56 compute-0 NetworkManager[48965]: <info>  [1765006796.1026] device (tapf1f563a8-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:39:56 compute-0 NetworkManager[48965]: <info>  [1765006796.1033] device (tapf1f563a8-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:39:56 compute-0 ovn_controller[147168]: 2025-12-06T07:39:56Z|00513|binding|INFO|Setting lport f1f563a8-9001-419f-858a-0213c5d6607a ovn-installed in OVS
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.109 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:56 compute-0 systemd-machined[212986]: New machine qemu-64-instance-00000087.
Dec 06 07:39:56 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-00000087.
Dec 06 07:39:56 compute-0 ovn_controller[147168]: 2025-12-06T07:39:56Z|00514|binding|INFO|Setting lport f1f563a8-9001-419f-858a-0213c5d6607a up in Southbound
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.354 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:b4:88 10.100.0.3'], port_security=['fa:16:3e:48:b4:88 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b85968f0-ebd7-48f6-a932-c4e8da09381e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=f1f563a8-9001-419f-858a-0213c5d6607a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.356 158118 INFO neutron.agent.ovn.metadata.agent [-] Port f1f563a8-9001-419f-858a-0213c5d6607a in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 bound to our chassis
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.359 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.376 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb6a37a-e06e-4343-8ae6-126a75984a06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.410 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[73b71c9e-7b1d-4df9-b09d-c225a27eedae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.414 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[80d8d494-0699-4733-bef0-d1a61d8d7bd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.447 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[22b7b1e3-a2a5-465a-84f8-fe2e5af06280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.469 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[106bcb99-d48b-4f54-a985-941dcf0a57c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678865, 'reachable_time': 42731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341189, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.485 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea42d1a-7fae-4e3a-abe2-ff61f72fb063]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3beede49-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678878, 'tstamp': 678878}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341190, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3beede49-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678881, 'tstamp': 678881}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341190, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.487 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.489 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.490 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3beede49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.490 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.491 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3beede49-10, col_values=(('external_ids', {'iface-id': '058fee39-af19-4b00-b556-fb88bc823747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:39:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:39:56.491 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.504 251996 INFO nova.compute.manager [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Took 2.81 seconds to destroy the instance on the hypervisor.
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.505 251996 DEBUG oslo.service.loopingcall [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.505 251996 DEBUG nova.compute.manager [-] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.505 251996 DEBUG nova.network.neutron [-] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:39:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:56.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.808 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006796.8080728, b85968f0-ebd7-48f6-a932-c4e8da09381e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.809 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] VM Started (Lifecycle Event)
Dec 06 07:39:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 746 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 124 KiB/s rd, 3.9 MiB/s wr, 178 op/s
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.945 251996 DEBUG nova.compute.manager [req-2c5adafe-d4b9-49b0-8ac2-595ec3bd0fdd req-92b02e5b-e631-490e-94b7-15163915cf1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Received event network-vif-plugged-dc251d3b-d52d-4043-b1db-dc8528b247d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.945 251996 DEBUG oslo_concurrency.lockutils [req-2c5adafe-d4b9-49b0-8ac2-595ec3bd0fdd req-92b02e5b-e631-490e-94b7-15163915cf1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.946 251996 DEBUG oslo_concurrency.lockutils [req-2c5adafe-d4b9-49b0-8ac2-595ec3bd0fdd req-92b02e5b-e631-490e-94b7-15163915cf1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.946 251996 DEBUG oslo_concurrency.lockutils [req-2c5adafe-d4b9-49b0-8ac2-595ec3bd0fdd req-92b02e5b-e631-490e-94b7-15163915cf1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.946 251996 DEBUG nova.compute.manager [req-2c5adafe-d4b9-49b0-8ac2-595ec3bd0fdd req-92b02e5b-e631-490e-94b7-15163915cf1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] No waiting events found dispatching network-vif-plugged-dc251d3b-d52d-4043-b1db-dc8528b247d0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.947 251996 WARNING nova.compute.manager [req-2c5adafe-d4b9-49b0-8ac2-595ec3bd0fdd req-92b02e5b-e631-490e-94b7-15163915cf1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Received unexpected event network-vif-plugged-dc251d3b-d52d-4043-b1db-dc8528b247d0 for instance with vm_state active and task_state deleting.
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.953 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.959 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006796.8083491, b85968f0-ebd7-48f6-a932-c4e8da09381e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:39:56 compute-0 nova_compute[251992]: 2025-12-06 07:39:56.959 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] VM Paused (Lifecycle Event)
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.001 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.004 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:39:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4267829070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:39:57 compute-0 ceph-mon[74339]: pgmap v2519: 305 pgs: 305 active+clean; 746 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 124 KiB/s rd, 3.9 MiB/s wr, 178 op/s
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.242 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.738 251996 DEBUG nova.compute.manager [req-b93e619a-4604-4e7d-b9cd-791e855a433c req-49145f95-9d5c-476a-8124-034d52a20d9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.738 251996 DEBUG oslo_concurrency.lockutils [req-b93e619a-4604-4e7d-b9cd-791e855a433c req-49145f95-9d5c-476a-8124-034d52a20d9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.739 251996 DEBUG oslo_concurrency.lockutils [req-b93e619a-4604-4e7d-b9cd-791e855a433c req-49145f95-9d5c-476a-8124-034d52a20d9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.739 251996 DEBUG oslo_concurrency.lockutils [req-b93e619a-4604-4e7d-b9cd-791e855a433c req-49145f95-9d5c-476a-8124-034d52a20d9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.739 251996 DEBUG nova.compute.manager [req-b93e619a-4604-4e7d-b9cd-791e855a433c req-49145f95-9d5c-476a-8124-034d52a20d9d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Processing event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.740 251996 DEBUG nova.compute.manager [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.751 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006797.749964, b85968f0-ebd7-48f6-a932-c4e8da09381e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.751 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] VM Resumed (Lifecycle Event)
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.752 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.757 251996 INFO nova.virt.libvirt.driver [-] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Instance spawned successfully.
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.757 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.788 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.792 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.793 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.793 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.794 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.794 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.794 251996 DEBUG nova.virt.libvirt.driver [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.800 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:39:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:57.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:57 compute-0 nova_compute[251992]: 2025-12-06 07:39:57.919 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:39:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2787199876' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:39:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2787199876' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.066372) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798066430, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2332, "num_deletes": 258, "total_data_size": 4000639, "memory_usage": 4056160, "flush_reason": "Manual Compaction"}
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798086216, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3887597, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47980, "largest_seqno": 50311, "table_properties": {"data_size": 3876926, "index_size": 6845, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23510, "raw_average_key_size": 21, "raw_value_size": 3855123, "raw_average_value_size": 3504, "num_data_blocks": 296, "num_entries": 1100, "num_filter_entries": 1100, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006574, "oldest_key_time": 1765006574, "file_creation_time": 1765006798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 19878 microseconds, and 10043 cpu microseconds.
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.086252) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3887597 bytes OK
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.086271) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.087490) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.087505) EVENT_LOG_v1 {"time_micros": 1765006798087500, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.087523) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3990849, prev total WAL file size 3990849, number of live WAL files 2.
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.088615) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3796KB)], [104(10MB)]
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798088688, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 15315674, "oldest_snapshot_seqno": -1}
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 8553 keys, 13156902 bytes, temperature: kUnknown
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798159829, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 13156902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13099027, "index_size": 35363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21445, "raw_key_size": 222206, "raw_average_key_size": 25, "raw_value_size": 12946024, "raw_average_value_size": 1513, "num_data_blocks": 1390, "num_entries": 8553, "num_filter_entries": 8553, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765006798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:39:58 compute-0 nova_compute[251992]: 2025-12-06 07:39:58.160 251996 INFO nova.compute.manager [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Took 10.80 seconds to spawn the instance on the hypervisor.
Dec 06 07:39:58 compute-0 nova_compute[251992]: 2025-12-06 07:39:58.161 251996 DEBUG nova.compute.manager [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.160467) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 13156902 bytes
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.162447) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.2 rd, 184.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 10.9 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 9092, records dropped: 539 output_compression: NoCompression
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.162488) EVENT_LOG_v1 {"time_micros": 1765006798162473, "job": 62, "event": "compaction_finished", "compaction_time_micros": 71507, "compaction_time_cpu_micros": 34413, "output_level": 6, "num_output_files": 1, "total_output_size": 13156902, "num_input_records": 9092, "num_output_records": 8553, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798164207, "job": 62, "event": "table_file_deletion", "file_number": 106}
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006798165942, "job": 62, "event": "table_file_deletion", "file_number": 104}
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.088488) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.166026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.166032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.166033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.166035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:39:58 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:39:58.166036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:39:58 compute-0 nova_compute[251992]: 2025-12-06 07:39:58.518 251996 INFO nova.compute.manager [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Took 12.33 seconds to build instance.
Dec 06 07:39:58 compute-0 nova_compute[251992]: 2025-12-06 07:39:58.532 251996 DEBUG nova.network.neutron [-] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:39:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:39:58.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:39:58 compute-0 nova_compute[251992]: 2025-12-06 07:39:58.616 251996 DEBUG oslo_concurrency.lockutils [None req-dea20ef5-fd6c-4d78-9b20-81760626a392 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:58 compute-0 nova_compute[251992]: 2025-12-06 07:39:58.705 251996 INFO nova.compute.manager [-] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Took 2.20 seconds to deallocate network for instance.
Dec 06 07:39:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 735 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 126 KiB/s rd, 3.1 MiB/s wr, 177 op/s
Dec 06 07:39:58 compute-0 sudo[341234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:58 compute-0 sudo[341234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:58 compute-0 sudo[341234]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:58 compute-0 nova_compute[251992]: 2025-12-06 07:39:58.956 251996 DEBUG oslo_concurrency.lockutils [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:58 compute-0 nova_compute[251992]: 2025-12-06 07:39:58.957 251996 DEBUG oslo_concurrency.lockutils [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:58 compute-0 sudo[341259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:39:58 compute-0 sudo[341259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:39:58 compute-0 sudo[341259]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.087 251996 DEBUG oslo_concurrency.processutils [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:39:59 compute-0 ceph-mon[74339]: pgmap v2520: 305 pgs: 305 active+clean; 735 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 126 KiB/s rd, 3.1 MiB/s wr, 177 op/s
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.121 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:39:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:39:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Dec 06 07:39:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Dec 06 07:39:59 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Dec 06 07:39:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:39:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351957969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.624 251996 DEBUG oslo_concurrency.processutils [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.630 251996 DEBUG nova.compute.provider_tree [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.698 251996 DEBUG nova.scheduler.client.report [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.738 251996 DEBUG oslo_concurrency.lockutils [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.881 251996 INFO nova.scheduler.client.report [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Deleted allocations for instance 70928eda-043f-429b-aa4e-af1f3189a7c1
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.887 251996 DEBUG nova.compute.manager [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.887 251996 DEBUG oslo_concurrency.lockutils [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.888 251996 DEBUG oslo_concurrency.lockutils [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.888 251996 DEBUG oslo_concurrency.lockutils [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.888 251996 DEBUG nova.compute.manager [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] No waiting events found dispatching network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.889 251996 WARNING nova.compute.manager [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received unexpected event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a for instance with vm_state active and task_state None.
Dec 06 07:39:59 compute-0 nova_compute[251992]: 2025-12-06 07:39:59.889 251996 DEBUG nova.compute.manager [req-cad02275-b162-4fbf-bbcd-cd2d46b65a91 req-38a508d3-bfeb-41c8-8e95-b9288f7de2da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Received event network-vif-deleted-dc251d3b-d52d-4043-b1db-dc8528b247d0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:39:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:39:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:39:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:39:59.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 07:40:00 compute-0 nova_compute[251992]: 2025-12-06 07:40:00.042 251996 DEBUG oslo_concurrency.lockutils [None req-07ae9772-f90f-4178-8fd7-5f9b9dacdd91 2aa5b15c15f84a8cb24776d5c781eb09 17cdfa63c4424ec7a0eb4bb3d7372c14 - - default default] Lock "70928eda-043f-429b-aa4e-af1f3189a7c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:00 compute-0 nova_compute[251992]: 2025-12-06 07:40:00.381 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:00 compute-0 ceph-mon[74339]: osdmap e316: 3 total, 3 up, 3 in
Dec 06 07:40:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/351957969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:00 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 07:40:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:00.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:00 compute-0 nova_compute[251992]: 2025-12-06 07:40:00.659 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006785.658839, c2e6b8fd-375c-4658-b338-f2d334041ba3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:40:00 compute-0 nova_compute[251992]: 2025-12-06 07:40:00.660 251996 INFO nova.compute.manager [-] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] VM Stopped (Lifecycle Event)
Dec 06 07:40:00 compute-0 nova_compute[251992]: 2025-12-06 07:40:00.716 251996 DEBUG nova.compute.manager [None req-671f75bb-39a8-4ff4-8f8c-67ca20f3eac1 - - - - - -] [instance: c2e6b8fd-375c-4658-b338-f2d334041ba3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:40:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 645 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.7 MiB/s wr, 346 op/s
Dec 06 07:40:01 compute-0 nova_compute[251992]: 2025-12-06 07:40:01.526 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:01 compute-0 NetworkManager[48965]: <info>  [1765006801.5272] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Dec 06 07:40:01 compute-0 NetworkManager[48965]: <info>  [1765006801.5284] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Dec 06 07:40:01 compute-0 ceph-mon[74339]: pgmap v2522: 305 pgs: 305 active+clean; 645 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.7 MiB/s wr, 346 op/s
Dec 06 07:40:01 compute-0 nova_compute[251992]: 2025-12-06 07:40:01.615 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:01 compute-0 ovn_controller[147168]: 2025-12-06T07:40:01Z|00515|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:40:01 compute-0 nova_compute[251992]: 2025-12-06 07:40:01.631 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:01.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:02.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/87181181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 612 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.4 MiB/s wr, 348 op/s
Dec 06 07:40:03 compute-0 nova_compute[251992]: 2025-12-06 07:40:03.540 251996 DEBUG nova.compute.manager [req-a21477d4-0261-4d13-800a-ea2c9345baaf req-a4553816-06e7-452a-9411-8bf4e3bd4d26 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-changed-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:40:03 compute-0 nova_compute[251992]: 2025-12-06 07:40:03.540 251996 DEBUG nova.compute.manager [req-a21477d4-0261-4d13-800a-ea2c9345baaf req-a4553816-06e7-452a-9411-8bf4e3bd4d26 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Refreshing instance network info cache due to event network-changed-f1f563a8-9001-419f-858a-0213c5d6607a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:40:03 compute-0 nova_compute[251992]: 2025-12-06 07:40:03.541 251996 DEBUG oslo_concurrency.lockutils [req-a21477d4-0261-4d13-800a-ea2c9345baaf req-a4553816-06e7-452a-9411-8bf4e3bd4d26 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:40:03 compute-0 nova_compute[251992]: 2025-12-06 07:40:03.541 251996 DEBUG oslo_concurrency.lockutils [req-a21477d4-0261-4d13-800a-ea2c9345baaf req-a4553816-06e7-452a-9411-8bf4e3bd4d26 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:40:03 compute-0 nova_compute[251992]: 2025-12-06 07:40:03.541 251996 DEBUG nova.network.neutron [req-a21477d4-0261-4d13-800a-ea2c9345baaf req-a4553816-06e7-452a-9411-8bf4e3bd4d26 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Refreshing network info cache for port f1f563a8-9001-419f-858a-0213c5d6607a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:40:03 compute-0 ceph-mon[74339]: pgmap v2523: 305 pgs: 305 active+clean; 612 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.4 MiB/s wr, 348 op/s
Dec 06 07:40:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:40:03.849 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:40:03.850 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:40:03.850 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:03.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:04 compute-0 nova_compute[251992]: 2025-12-06 07:40:04.125 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:04.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 612 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.2 MiB/s wr, 316 op/s
Dec 06 07:40:05 compute-0 nova_compute[251992]: 2025-12-06 07:40:05.384 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:40:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:05.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:40:05 compute-0 ceph-mon[74339]: pgmap v2524: 305 pgs: 305 active+clean; 612 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.2 MiB/s wr, 316 op/s
Dec 06 07:40:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:06.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 590 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 304 KiB/s wr, 273 op/s
Dec 06 07:40:07 compute-0 ceph-mon[74339]: pgmap v2525: 305 pgs: 305 active+clean; 590 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 304 KiB/s wr, 273 op/s
Dec 06 07:40:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:07.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:08 compute-0 nova_compute[251992]: 2025-12-06 07:40:08.447 251996 DEBUG nova.network.neutron [req-a21477d4-0261-4d13-800a-ea2c9345baaf req-a4553816-06e7-452a-9411-8bf4e3bd4d26 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updated VIF entry in instance network info cache for port f1f563a8-9001-419f-858a-0213c5d6607a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:40:08 compute-0 nova_compute[251992]: 2025-12-06 07:40:08.448 251996 DEBUG nova.network.neutron [req-a21477d4-0261-4d13-800a-ea2c9345baaf req-a4553816-06e7-452a-9411-8bf4e3bd4d26 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updating instance_info_cache with network_info: [{"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:40:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:08.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:08 compute-0 nova_compute[251992]: 2025-12-06 07:40:08.806 251996 DEBUG oslo_concurrency.lockutils [req-a21477d4-0261-4d13-800a-ea2c9345baaf req-a4553816-06e7-452a-9411-8bf4e3bd4d26 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:40:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 556 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 21 KiB/s wr, 258 op/s
Dec 06 07:40:08 compute-0 nova_compute[251992]: 2025-12-06 07:40:08.930 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006793.928995, 70928eda-043f-429b-aa4e-af1f3189a7c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:40:08 compute-0 nova_compute[251992]: 2025-12-06 07:40:08.930 251996 INFO nova.compute.manager [-] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] VM Stopped (Lifecycle Event)
Dec 06 07:40:09 compute-0 nova_compute[251992]: 2025-12-06 07:40:09.127 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2376389813' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:40:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2376389813' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:40:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:09.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:10 compute-0 ceph-mon[74339]: pgmap v2526: 305 pgs: 305 active+clean; 556 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 21 KiB/s wr, 258 op/s
Dec 06 07:40:10 compute-0 nova_compute[251992]: 2025-12-06 07:40:10.386 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:10.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 492 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 547 KiB/s wr, 258 op/s
Dec 06 07:40:11 compute-0 ovn_controller[147168]: 2025-12-06T07:40:11Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:b4:88 10.100.0.3
Dec 06 07:40:11 compute-0 ovn_controller[147168]: 2025-12-06T07:40:11Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:b4:88 10.100.0.3
Dec 06 07:40:11 compute-0 ceph-mon[74339]: pgmap v2527: 305 pgs: 305 active+clean; 492 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 547 KiB/s wr, 258 op/s
Dec 06 07:40:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:11.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:12 compute-0 nova_compute[251992]: 2025-12-06 07:40:12.105 251996 DEBUG nova.compute.manager [None req-1c66256b-63e4-47a3-bd78-ef6fae063f69 - - - - - -] [instance: 70928eda-043f-429b-aa4e-af1f3189a7c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:40:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:12.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4164932125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 499 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 893 KiB/s rd, 1.2 MiB/s wr, 115 op/s
Dec 06 07:40:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:40:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:40:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:40:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:40:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:40:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:40:13 compute-0 ceph-mon[74339]: pgmap v2528: 305 pgs: 305 active+clean; 499 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 893 KiB/s rd, 1.2 MiB/s wr, 115 op/s
Dec 06 07:40:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:13.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:14 compute-0 nova_compute[251992]: 2025-12-06 07:40:14.130 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:14.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/384442653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 499 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 136 KiB/s rd, 1.2 MiB/s wr, 82 op/s
Dec 06 07:40:15 compute-0 nova_compute[251992]: 2025-12-06 07:40:15.388 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:15 compute-0 podman[341316]: 2025-12-06 07:40:15.432764645 +0000 UTC m=+0.086234697 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:40:15 compute-0 ovn_controller[147168]: 2025-12-06T07:40:15Z|00516|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:40:15 compute-0 nova_compute[251992]: 2025-12-06 07:40:15.739 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:15.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:16 compute-0 ceph-mon[74339]: pgmap v2529: 305 pgs: 305 active+clean; 499 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 136 KiB/s rd, 1.2 MiB/s wr, 82 op/s
Dec 06 07:40:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:16.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 118 op/s
Dec 06 07:40:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1248014117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:17 compute-0 nova_compute[251992]: 2025-12-06 07:40:17.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:17 compute-0 nova_compute[251992]: 2025-12-06 07:40:17.737 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:17 compute-0 nova_compute[251992]: 2025-12-06 07:40:17.737 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:17 compute-0 nova_compute[251992]: 2025-12-06 07:40:17.738 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:17 compute-0 nova_compute[251992]: 2025-12-06 07:40:17.738 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:40:17 compute-0 nova_compute[251992]: 2025-12-06 07:40:17.738 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:17.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:40:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3362342613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.228 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:18 compute-0 ceph-mon[74339]: pgmap v2530: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 118 op/s
Dec 06 07:40:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3362342613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:40:18
Dec 06 07:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'volumes', '.mgr']
Dec 06 07:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:40:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:18.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.637 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.637 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.641 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.641 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.788 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.789 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3924MB free_disk=20.806015014648438GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.790 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.790 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:40:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 107 op/s
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.996 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c1ef1073-7c66-428c-a02b-e4daa3551d22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.997 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance b85968f0-ebd7-48f6-a932-c4e8da09381e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.997 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:40:18 compute-0 nova_compute[251992]: 2025-12-06 07:40:18.997 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:40:19 compute-0 sudo[341367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:19 compute-0 sudo[341367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:19 compute-0 sudo[341367]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:19 compute-0 sudo[341392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:19 compute-0 sudo[341392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:19 compute-0 sudo[341392]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:19 compute-0 nova_compute[251992]: 2025-12-06 07:40:19.130 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:40:19 compute-0 nova_compute[251992]: 2025-12-06 07:40:19.154 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:40:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3750253994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:19 compute-0 nova_compute[251992]: 2025-12-06 07:40:19.597 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:40:19 compute-0 nova_compute[251992]: 2025-12-06 07:40:19.603 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:40:19 compute-0 nova_compute[251992]: 2025-12-06 07:40:19.685 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:40:19 compute-0 nova_compute[251992]: 2025-12-06 07:40:19.720 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:40:19 compute-0 nova_compute[251992]: 2025-12-06 07:40:19.720 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:40:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:19.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:20 compute-0 nova_compute[251992]: 2025-12-06 07:40:20.389 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:20.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:20 compute-0 nova_compute[251992]: 2025-12-06 07:40:20.721 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:20 compute-0 nova_compute[251992]: 2025-12-06 07:40:20.722 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:20 compute-0 nova_compute[251992]: 2025-12-06 07:40:20.722 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:20 compute-0 ceph-mon[74339]: pgmap v2531: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 107 op/s
Dec 06 07:40:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Dec 06 07:40:21 compute-0 podman[341440]: 2025-12-06 07:40:21.396269609 +0000 UTC m=+0.053067867 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 07:40:21 compute-0 podman[341441]: 2025-12-06 07:40:21.40941099 +0000 UTC m=+0.060455990 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 06 07:40:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:21.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3750253994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3096176524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:22 compute-0 ceph-mon[74339]: pgmap v2532: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Dec 06 07:40:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/666352619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:22.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:22 compute-0 nova_compute[251992]: 2025-12-06 07:40:22.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 1.6 MiB/s wr, 64 op/s
Dec 06 07:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:40:23 compute-0 nova_compute[251992]: 2025-12-06 07:40:23.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:23 compute-0 nova_compute[251992]: 2025-12-06 07:40:23.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:23 compute-0 nova_compute[251992]: 2025-12-06 07:40:23.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:23 compute-0 nova_compute[251992]: 2025-12-06 07:40:23.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:40:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3163851333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:23 compute-0 ceph-mon[74339]: pgmap v2533: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 1.6 MiB/s wr, 64 op/s
Dec 06 07:40:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3706009052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:40:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3706009052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:40:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:23.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:24 compute-0 nova_compute[251992]: 2025-12-06 07:40:24.158 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:24.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:40:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:40:24 compute-0 nova_compute[251992]: 2025-12-06 07:40:24.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:40:24 compute-0 nova_compute[251992]: 2025-12-06 07:40:24.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:40:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:40:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:40:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:40:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 238 KiB/s rd, 979 KiB/s wr, 46 op/s
Dec 06 07:40:25 compute-0 nova_compute[251992]: 2025-12-06 07:40:25.300 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:40:25 compute-0 nova_compute[251992]: 2025-12-06 07:40:25.300 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:40:25 compute-0 nova_compute[251992]: 2025-12-06 07:40:25.301 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:40:25 compute-0 nova_compute[251992]: 2025-12-06 07:40:25.391 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:25.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008690915515075377 of space, bias 1.0, pg target 2.607274654522613 quantized to 32 (current 32)
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004322829424902289 of space, bias 1.0, pg target 1.2882031686208821 quantized to 32 (current 32)
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Dec 06 07:40:26 compute-0 ceph-mon[74339]: pgmap v2534: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 238 KiB/s rd, 979 KiB/s wr, 46 op/s
Dec 06 07:40:26 compute-0 nova_compute[251992]: 2025-12-06 07:40:26.269 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:40:26.269 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:40:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:40:26.271 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:40:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:26.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 500 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 242 KiB/s rd, 980 KiB/s wr, 54 op/s
Dec 06 07:40:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2930511714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:27 compute-0 ceph-mon[74339]: pgmap v2535: 305 pgs: 305 active+clean; 500 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 242 KiB/s rd, 980 KiB/s wr, 54 op/s
Dec 06 07:40:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:27.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:28 compute-0 nova_compute[251992]: 2025-12-06 07:40:28.078 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Updating instance_info_cache with network_info: [{"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:40:28 compute-0 nova_compute[251992]: 2025-12-06 07:40:28.112 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:40:28 compute-0 nova_compute[251992]: 2025-12-06 07:40:28.113 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:40:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2321178459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:28.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 491 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 615 KiB/s wr, 39 op/s
Dec 06 07:40:29 compute-0 nova_compute[251992]: 2025-12-06 07:40:29.160 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:29 compute-0 ceph-mon[74339]: pgmap v2536: 305 pgs: 305 active+clean; 491 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 615 KiB/s wr, 39 op/s
Dec 06 07:40:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:29.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:30 compute-0 nova_compute[251992]: 2025-12-06 07:40:30.393 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:30.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 479 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.6 MiB/s wr, 64 op/s
Dec 06 07:40:31 compute-0 ceph-mon[74339]: pgmap v2537: 305 pgs: 305 active+clean; 479 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.6 MiB/s wr, 64 op/s
Dec 06 07:40:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:31.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:40:32.273 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:40:32 compute-0 nova_compute[251992]: 2025-12-06 07:40:32.449 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:32 compute-0 nova_compute[251992]: 2025-12-06 07:40:32.516 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/936565583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3844405613' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:32.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 06 07:40:33 compute-0 ceph-mon[74339]: pgmap v2538: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 06 07:40:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:33.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:34 compute-0 nova_compute[251992]: 2025-12-06 07:40:34.163 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:34.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Dec 06 07:40:35 compute-0 ceph-mon[74339]: pgmap v2539: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Dec 06 07:40:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3102066638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:35 compute-0 nova_compute[251992]: 2025-12-06 07:40:35.395 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:35.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:36.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:36 compute-0 nova_compute[251992]: 2025-12-06 07:40:36.720 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Dec 06 07:40:37 compute-0 ceph-mon[74339]: pgmap v2540: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Dec 06 07:40:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:37.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:38 compute-0 nova_compute[251992]: 2025-12-06 07:40:38.062 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:38.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 856 KiB/s rd, 1.8 MiB/s wr, 110 op/s
Dec 06 07:40:39 compute-0 nova_compute[251992]: 2025-12-06 07:40:39.165 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:39 compute-0 sudo[341486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:39 compute-0 sudo[341486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:39 compute-0 sudo[341486]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:39 compute-0 sudo[341511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:39 compute-0 sudo[341511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:39 compute-0 sudo[341511]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:39.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:40 compute-0 ceph-mon[74339]: pgmap v2541: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 856 KiB/s rd, 1.8 MiB/s wr, 110 op/s
Dec 06 07:40:40 compute-0 nova_compute[251992]: 2025-12-06 07:40:40.397 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:40.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Dec 06 07:40:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:41.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:42 compute-0 ceph-mon[74339]: pgmap v2542: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 136 op/s
Dec 06 07:40:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2878607099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:42.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 190 KiB/s wr, 116 op/s
Dec 06 07:40:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:40:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:40:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:40:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:40:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:40:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:40:43 compute-0 ceph-mon[74339]: pgmap v2543: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 190 KiB/s wr, 116 op/s
Dec 06 07:40:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:43.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:44 compute-0 nova_compute[251992]: 2025-12-06 07:40:44.167 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:44.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 108 op/s
Dec 06 07:40:45 compute-0 ceph-mon[74339]: pgmap v2544: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 108 op/s
Dec 06 07:40:45 compute-0 nova_compute[251992]: 2025-12-06 07:40:45.399 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:40:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:45.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:40:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Dec 06 07:40:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1599764996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Dec 06 07:40:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Dec 06 07:40:46 compute-0 podman[341539]: 2025-12-06 07:40:46.415506983 +0000 UTC m=+0.069167139 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:40:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:46.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.5 KiB/s wr, 113 op/s
Dec 06 07:40:47 compute-0 ceph-mon[74339]: osdmap e317: 3 total, 3 up, 3 in
Dec 06 07:40:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/831947400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:47 compute-0 ceph-mon[74339]: pgmap v2546: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.5 KiB/s wr, 113 op/s
Dec 06 07:40:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:47.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Dec 06 07:40:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1497706244' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:40:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1497706244' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:40:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Dec 06 07:40:48 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Dec 06 07:40:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:48.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 46 op/s
Dec 06 07:40:49 compute-0 nova_compute[251992]: 2025-12-06 07:40:49.170 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Dec 06 07:40:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Dec 06 07:40:49 compute-0 ceph-mon[74339]: osdmap e318: 3 total, 3 up, 3 in
Dec 06 07:40:49 compute-0 ceph-mon[74339]: pgmap v2548: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 46 op/s
Dec 06 07:40:49 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Dec 06 07:40:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:49.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:50 compute-0 nova_compute[251992]: 2025-12-06 07:40:50.402 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:50 compute-0 ceph-mon[74339]: osdmap e319: 3 total, 3 up, 3 in
Dec 06 07:40:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:50.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 456 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 10 MiB/s wr, 210 op/s
Dec 06 07:40:51 compute-0 ceph-mon[74339]: pgmap v2550: 305 pgs: 305 active+clean; 456 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 10 MiB/s wr, 210 op/s
Dec 06 07:40:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:51.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:52 compute-0 podman[341572]: 2025-12-06 07:40:52.397874255 +0000 UTC m=+0.048208704 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:40:52 compute-0 podman[341573]: 2025-12-06 07:40:52.429123273 +0000 UTC m=+0.077319033 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 06 07:40:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:52.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 464 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 9.7 MiB/s wr, 216 op/s
Dec 06 07:40:53 compute-0 sudo[341611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:53 compute-0 sudo[341611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:53 compute-0 sudo[341611]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:53 compute-0 sudo[341636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:40:53 compute-0 sudo[341636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:53 compute-0 sudo[341636]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:53 compute-0 sudo[341661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:53 compute-0 sudo[341661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:53 compute-0 sudo[341661]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:53 compute-0 sudo[341686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:40:53 compute-0 sudo[341686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:53 compute-0 sudo[341686]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:40:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:40:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:40:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:40:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:40:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:40:53 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2e925d24-dfef-4b0d-98a0-cedd3ad7e3f9 does not exist
Dec 06 07:40:53 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 72458b00-2de7-4858-9646-dc6d4534e26b does not exist
Dec 06 07:40:53 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2eb633a5-ddbb-4906-bd99-6d17abdffc6a does not exist
Dec 06 07:40:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:40:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:40:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:40:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:40:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:40:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:40:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:53.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:53 compute-0 ceph-mon[74339]: pgmap v2551: 305 pgs: 305 active+clean; 464 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 9.7 MiB/s wr, 216 op/s
Dec 06 07:40:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:40:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:40:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:40:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:40:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:40:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:40:53 compute-0 sudo[341742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:53 compute-0 sudo[341742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:53 compute-0 sudo[341742]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:54 compute-0 sudo[341767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:40:54 compute-0 sudo[341767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:54 compute-0 sudo[341767]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:54 compute-0 sudo[341792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:54 compute-0 sudo[341792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:54 compute-0 sudo[341792]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:54 compute-0 sudo[341817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:40:54 compute-0 sudo[341817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:54 compute-0 nova_compute[251992]: 2025-12-06 07:40:54.172 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Dec 06 07:40:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Dec 06 07:40:54 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Dec 06 07:40:54 compute-0 podman[341882]: 2025-12-06 07:40:54.49139411 +0000 UTC m=+0.044454091 container create d39c93b02f10e573ca8b24f9e6e8ce1e1f6e0f957e11d52cb056f9dfda186e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:40:54 compute-0 systemd[1]: Started libpod-conmon-d39c93b02f10e573ca8b24f9e6e8ce1e1f6e0f957e11d52cb056f9dfda186e04.scope.
Dec 06 07:40:54 compute-0 podman[341882]: 2025-12-06 07:40:54.47244015 +0000 UTC m=+0.025500181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:40:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:40:54 compute-0 podman[341882]: 2025-12-06 07:40:54.596939135 +0000 UTC m=+0.149999136 container init d39c93b02f10e573ca8b24f9e6e8ce1e1f6e0f957e11d52cb056f9dfda186e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 07:40:54 compute-0 podman[341882]: 2025-12-06 07:40:54.603890786 +0000 UTC m=+0.156950767 container start d39c93b02f10e573ca8b24f9e6e8ce1e1f6e0f957e11d52cb056f9dfda186e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bartik, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:40:54 compute-0 pedantic_bartik[341900]: 167 167
Dec 06 07:40:54 compute-0 podman[341882]: 2025-12-06 07:40:54.609388577 +0000 UTC m=+0.162448558 container attach d39c93b02f10e573ca8b24f9e6e8ce1e1f6e0f957e11d52cb056f9dfda186e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bartik, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:40:54 compute-0 systemd[1]: libpod-d39c93b02f10e573ca8b24f9e6e8ce1e1f6e0f957e11d52cb056f9dfda186e04.scope: Deactivated successfully.
Dec 06 07:40:54 compute-0 podman[341882]: 2025-12-06 07:40:54.610846137 +0000 UTC m=+0.163906118 container died d39c93b02f10e573ca8b24f9e6e8ce1e1f6e0f957e11d52cb056f9dfda186e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bartik, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 07:40:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-3106d695a9ea8c8ea082dbab8ac3b3958b2b2c7aca399208256b7d12c21fb99f-merged.mount: Deactivated successfully.
Dec 06 07:40:54 compute-0 podman[341882]: 2025-12-06 07:40:54.654713261 +0000 UTC m=+0.207773242 container remove d39c93b02f10e573ca8b24f9e6e8ce1e1f6e0f957e11d52cb056f9dfda186e04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bartik, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:40:54 compute-0 systemd[1]: libpod-conmon-d39c93b02f10e573ca8b24f9e6e8ce1e1f6e0f957e11d52cb056f9dfda186e04.scope: Deactivated successfully.
Dec 06 07:40:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:54.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:54 compute-0 podman[341925]: 2025-12-06 07:40:54.826302689 +0000 UTC m=+0.042092066 container create b5f5870f743f6b80e596fa6d7d7b6288a81503209a72d0af5a01dae5dbde32ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bouman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:40:54 compute-0 systemd[1]: Started libpod-conmon-b5f5870f743f6b80e596fa6d7d7b6288a81503209a72d0af5a01dae5dbde32ca.scope.
Dec 06 07:40:54 compute-0 podman[341925]: 2025-12-06 07:40:54.808127681 +0000 UTC m=+0.023917078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:40:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:40:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2067c93494a421543f0e1fcc15d255faaf8cdb08bfbf331470e0c608cff11adb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2067c93494a421543f0e1fcc15d255faaf8cdb08bfbf331470e0c608cff11adb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2067c93494a421543f0e1fcc15d255faaf8cdb08bfbf331470e0c608cff11adb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2067c93494a421543f0e1fcc15d255faaf8cdb08bfbf331470e0c608cff11adb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2067c93494a421543f0e1fcc15d255faaf8cdb08bfbf331470e0c608cff11adb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 464 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.6 MiB/s wr, 193 op/s
Dec 06 07:40:54 compute-0 podman[341925]: 2025-12-06 07:40:54.926430487 +0000 UTC m=+0.142219884 container init b5f5870f743f6b80e596fa6d7d7b6288a81503209a72d0af5a01dae5dbde32ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:40:54 compute-0 podman[341925]: 2025-12-06 07:40:54.934688413 +0000 UTC m=+0.150477790 container start b5f5870f743f6b80e596fa6d7d7b6288a81503209a72d0af5a01dae5dbde32ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:40:54 compute-0 podman[341925]: 2025-12-06 07:40:54.938171569 +0000 UTC m=+0.153960966 container attach b5f5870f743f6b80e596fa6d7d7b6288a81503209a72d0af5a01dae5dbde32ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 07:40:55 compute-0 nova_compute[251992]: 2025-12-06 07:40:55.404 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:55 compute-0 ceph-mon[74339]: osdmap e320: 3 total, 3 up, 3 in
Dec 06 07:40:55 compute-0 ceph-mon[74339]: pgmap v2553: 305 pgs: 305 active+clean; 464 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.6 MiB/s wr, 193 op/s
Dec 06 07:40:55 compute-0 trusting_bouman[341942]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:40:55 compute-0 trusting_bouman[341942]: --> relative data size: 1.0
Dec 06 07:40:55 compute-0 trusting_bouman[341942]: --> All data devices are unavailable
Dec 06 07:40:55 compute-0 systemd[1]: libpod-b5f5870f743f6b80e596fa6d7d7b6288a81503209a72d0af5a01dae5dbde32ca.scope: Deactivated successfully.
Dec 06 07:40:55 compute-0 conmon[341942]: conmon b5f5870f743f6b80e596 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5f5870f743f6b80e596fa6d7d7b6288a81503209a72d0af5a01dae5dbde32ca.scope/container/memory.events
Dec 06 07:40:55 compute-0 podman[341925]: 2025-12-06 07:40:55.77318506 +0000 UTC m=+0.988974447 container died b5f5870f743f6b80e596fa6d7d7b6288a81503209a72d0af5a01dae5dbde32ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bouman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 07:40:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-2067c93494a421543f0e1fcc15d255faaf8cdb08bfbf331470e0c608cff11adb-merged.mount: Deactivated successfully.
Dec 06 07:40:55 compute-0 podman[341925]: 2025-12-06 07:40:55.835740878 +0000 UTC m=+1.051530265 container remove b5f5870f743f6b80e596fa6d7d7b6288a81503209a72d0af5a01dae5dbde32ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:40:55 compute-0 systemd[1]: libpod-conmon-b5f5870f743f6b80e596fa6d7d7b6288a81503209a72d0af5a01dae5dbde32ca.scope: Deactivated successfully.
Dec 06 07:40:55 compute-0 sudo[341817]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:55 compute-0 sudo[341970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:55 compute-0 sudo[341970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:55 compute-0 sudo[341970]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:55.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:55 compute-0 sudo[341995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:40:55 compute-0 sudo[341995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:55 compute-0 sudo[341995]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:56 compute-0 sudo[342020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:56 compute-0 sudo[342020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:56 compute-0 sudo[342020]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:56 compute-0 sudo[342045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:40:56 compute-0 sudo[342045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:56 compute-0 podman[342111]: 2025-12-06 07:40:56.407309551 +0000 UTC m=+0.041329425 container create d7701b77b33a7a34ff68440c689543a137cd34b9d830585a13fa3e4b9ac396bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 07:40:56 compute-0 systemd[1]: Started libpod-conmon-d7701b77b33a7a34ff68440c689543a137cd34b9d830585a13fa3e4b9ac396bf.scope.
Dec 06 07:40:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:40:56 compute-0 podman[342111]: 2025-12-06 07:40:56.388086743 +0000 UTC m=+0.022106637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:40:56 compute-0 podman[342111]: 2025-12-06 07:40:56.490241786 +0000 UTC m=+0.124261680 container init d7701b77b33a7a34ff68440c689543a137cd34b9d830585a13fa3e4b9ac396bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:40:56 compute-0 podman[342111]: 2025-12-06 07:40:56.497807814 +0000 UTC m=+0.131827688 container start d7701b77b33a7a34ff68440c689543a137cd34b9d830585a13fa3e4b9ac396bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 07:40:56 compute-0 podman[342111]: 2025-12-06 07:40:56.50095385 +0000 UTC m=+0.134973734 container attach d7701b77b33a7a34ff68440c689543a137cd34b9d830585a13fa3e4b9ac396bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:40:56 compute-0 infallible_sinoussi[342128]: 167 167
Dec 06 07:40:56 compute-0 systemd[1]: libpod-d7701b77b33a7a34ff68440c689543a137cd34b9d830585a13fa3e4b9ac396bf.scope: Deactivated successfully.
Dec 06 07:40:56 compute-0 podman[342111]: 2025-12-06 07:40:56.50351122 +0000 UTC m=+0.137531094 container died d7701b77b33a7a34ff68440c689543a137cd34b9d830585a13fa3e4b9ac396bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:40:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-92101fb97c060cad895ed38c9298b104576a6174bf0310687d8f119c53b2e3dc-merged.mount: Deactivated successfully.
Dec 06 07:40:56 compute-0 podman[342111]: 2025-12-06 07:40:56.546717295 +0000 UTC m=+0.180737169 container remove d7701b77b33a7a34ff68440c689543a137cd34b9d830585a13fa3e4b9ac396bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Dec 06 07:40:56 compute-0 systemd[1]: libpod-conmon-d7701b77b33a7a34ff68440c689543a137cd34b9d830585a13fa3e4b9ac396bf.scope: Deactivated successfully.
Dec 06 07:40:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:56.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:56 compute-0 podman[342150]: 2025-12-06 07:40:56.712242738 +0000 UTC m=+0.041483549 container create 2bbf820f1c234b90bdc333de02ed57d0f0a576ff641c57ef399b7cad644d2e0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 07:40:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/509348024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/74222001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1657904244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:56 compute-0 systemd[1]: Started libpod-conmon-2bbf820f1c234b90bdc333de02ed57d0f0a576ff641c57ef399b7cad644d2e0d.scope.
Dec 06 07:40:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f39f8286d5783d5d0d1a1c91e17a5802336c6b4f0670e744a60a859eafb824/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f39f8286d5783d5d0d1a1c91e17a5802336c6b4f0670e744a60a859eafb824/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f39f8286d5783d5d0d1a1c91e17a5802336c6b4f0670e744a60a859eafb824/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08f39f8286d5783d5d0d1a1c91e17a5802336c6b4f0670e744a60a859eafb824/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:56 compute-0 podman[342150]: 2025-12-06 07:40:56.695373425 +0000 UTC m=+0.024614266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:40:56 compute-0 podman[342150]: 2025-12-06 07:40:56.802219477 +0000 UTC m=+0.131460308 container init 2bbf820f1c234b90bdc333de02ed57d0f0a576ff641c57ef399b7cad644d2e0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:40:56 compute-0 podman[342150]: 2025-12-06 07:40:56.80855438 +0000 UTC m=+0.137795191 container start 2bbf820f1c234b90bdc333de02ed57d0f0a576ff641c57ef399b7cad644d2e0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 07:40:56 compute-0 podman[342150]: 2025-12-06 07:40:56.812021315 +0000 UTC m=+0.141262146 container attach 2bbf820f1c234b90bdc333de02ed57d0f0a576ff641c57ef399b7cad644d2e0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:40:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 464 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 159 op/s
Dec 06 07:40:57 compute-0 trusting_brattain[342166]: {
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:     "0": [
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:         {
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             "devices": [
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "/dev/loop3"
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             ],
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             "lv_name": "ceph_lv0",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             "lv_size": "7511998464",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             "name": "ceph_lv0",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             "tags": {
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "ceph.cluster_name": "ceph",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "ceph.crush_device_class": "",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "ceph.encrypted": "0",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "ceph.osd_id": "0",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "ceph.type": "block",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:                 "ceph.vdo": "0"
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             },
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             "type": "block",
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:             "vg_name": "ceph_vg0"
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:         }
Dec 06 07:40:57 compute-0 trusting_brattain[342166]:     ]
Dec 06 07:40:57 compute-0 trusting_brattain[342166]: }
Dec 06 07:40:57 compute-0 systemd[1]: libpod-2bbf820f1c234b90bdc333de02ed57d0f0a576ff641c57ef399b7cad644d2e0d.scope: Deactivated successfully.
Dec 06 07:40:57 compute-0 podman[342150]: 2025-12-06 07:40:57.660953719 +0000 UTC m=+0.990194530 container died 2bbf820f1c234b90bdc333de02ed57d0f0a576ff641c57ef399b7cad644d2e0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:40:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-08f39f8286d5783d5d0d1a1c91e17a5802336c6b4f0670e744a60a859eafb824-merged.mount: Deactivated successfully.
Dec 06 07:40:57 compute-0 podman[342150]: 2025-12-06 07:40:57.713398409 +0000 UTC m=+1.042639210 container remove 2bbf820f1c234b90bdc333de02ed57d0f0a576ff641c57ef399b7cad644d2e0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec 06 07:40:57 compute-0 systemd[1]: libpod-conmon-2bbf820f1c234b90bdc333de02ed57d0f0a576ff641c57ef399b7cad644d2e0d.scope: Deactivated successfully.
Dec 06 07:40:57 compute-0 ceph-mon[74339]: pgmap v2554: 305 pgs: 305 active+clean; 464 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 159 op/s
Dec 06 07:40:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1682043614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:40:57 compute-0 sudo[342045]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:57 compute-0 sudo[342187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:57 compute-0 sudo[342187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:57 compute-0 sudo[342187]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:57 compute-0 sudo[342212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:40:57 compute-0 sudo[342212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:57 compute-0 sudo[342212]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:57 compute-0 sudo[342237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:57 compute-0 sudo[342237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:57 compute-0 sudo[342237]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:57 compute-0 sudo[342262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:40:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:57.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:57 compute-0 sudo[342262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:58 compute-0 podman[342328]: 2025-12-06 07:40:58.279539733 +0000 UTC m=+0.038187520 container create 62268c8da63ab81b205b1da047c0ec1c0e49c13dbe59eac8f53eb5393484be94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 07:40:58 compute-0 systemd[1]: Started libpod-conmon-62268c8da63ab81b205b1da047c0ec1c0e49c13dbe59eac8f53eb5393484be94.scope.
Dec 06 07:40:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:40:58 compute-0 podman[342328]: 2025-12-06 07:40:58.350203961 +0000 UTC m=+0.108851768 container init 62268c8da63ab81b205b1da047c0ec1c0e49c13dbe59eac8f53eb5393484be94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:40:58 compute-0 podman[342328]: 2025-12-06 07:40:58.357267286 +0000 UTC m=+0.115915073 container start 62268c8da63ab81b205b1da047c0ec1c0e49c13dbe59eac8f53eb5393484be94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 07:40:58 compute-0 podman[342328]: 2025-12-06 07:40:58.262834024 +0000 UTC m=+0.021481841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:40:58 compute-0 cranky_mendel[342346]: 167 167
Dec 06 07:40:58 compute-0 systemd[1]: libpod-62268c8da63ab81b205b1da047c0ec1c0e49c13dbe59eac8f53eb5393484be94.scope: Deactivated successfully.
Dec 06 07:40:58 compute-0 podman[342328]: 2025-12-06 07:40:58.361848391 +0000 UTC m=+0.120496198 container attach 62268c8da63ab81b205b1da047c0ec1c0e49c13dbe59eac8f53eb5393484be94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 07:40:58 compute-0 podman[342328]: 2025-12-06 07:40:58.362492569 +0000 UTC m=+0.121140366 container died 62268c8da63ab81b205b1da047c0ec1c0e49c13dbe59eac8f53eb5393484be94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendel, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:40:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-85f3637f7908ec96abf0a0ec4eee2a78554a762ab0b98a95faea6ae74636866f-merged.mount: Deactivated successfully.
Dec 06 07:40:58 compute-0 podman[342328]: 2025-12-06 07:40:58.415149113 +0000 UTC m=+0.173796900 container remove 62268c8da63ab81b205b1da047c0ec1c0e49c13dbe59eac8f53eb5393484be94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:40:58 compute-0 ovn_controller[147168]: 2025-12-06T07:40:58Z|00517|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:40:58 compute-0 systemd[1]: libpod-conmon-62268c8da63ab81b205b1da047c0ec1c0e49c13dbe59eac8f53eb5393484be94.scope: Deactivated successfully.
Dec 06 07:40:58 compute-0 nova_compute[251992]: 2025-12-06 07:40:58.488 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:58 compute-0 podman[342370]: 2025-12-06 07:40:58.603834091 +0000 UTC m=+0.038458046 container create 5a3e94184ae1f27cc60a0f7cad44540fea31f85e775ad78a56f2ec545140221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_herschel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 07:40:58 compute-0 systemd[1]: Started libpod-conmon-5a3e94184ae1f27cc60a0f7cad44540fea31f85e775ad78a56f2ec545140221b.scope.
Dec 06 07:40:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c71223b05307ff3c80d16019cd656823721ceb94fea1870e31763a705dc01e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c71223b05307ff3c80d16019cd656823721ceb94fea1870e31763a705dc01e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c71223b05307ff3c80d16019cd656823721ceb94fea1870e31763a705dc01e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44c71223b05307ff3c80d16019cd656823721ceb94fea1870e31763a705dc01e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:40:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:40:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:40:58.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:40:58 compute-0 podman[342370]: 2025-12-06 07:40:58.588281184 +0000 UTC m=+0.022905159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:40:58 compute-0 podman[342370]: 2025-12-06 07:40:58.693464551 +0000 UTC m=+0.128088526 container init 5a3e94184ae1f27cc60a0f7cad44540fea31f85e775ad78a56f2ec545140221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 07:40:58 compute-0 podman[342370]: 2025-12-06 07:40:58.699310181 +0000 UTC m=+0.133934126 container start 5a3e94184ae1f27cc60a0f7cad44540fea31f85e775ad78a56f2ec545140221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:40:58 compute-0 podman[342370]: 2025-12-06 07:40:58.702462297 +0000 UTC m=+0.137086252 container attach 5a3e94184ae1f27cc60a0f7cad44540fea31f85e775ad78a56f2ec545140221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 07:40:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 464 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.5 MiB/s wr, 134 op/s
Dec 06 07:40:59 compute-0 nova_compute[251992]: 2025-12-06 07:40:59.175 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:40:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:40:59 compute-0 sudo[342393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:59 compute-0 sudo[342393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:59 compute-0 sudo[342393]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:59 compute-0 sudo[342418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:59 compute-0 sudo[342418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:59 compute-0 sudo[342418]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:59 compute-0 modest_herschel[342387]: {
Dec 06 07:40:59 compute-0 modest_herschel[342387]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:40:59 compute-0 modest_herschel[342387]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:40:59 compute-0 modest_herschel[342387]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:40:59 compute-0 modest_herschel[342387]:         "osd_id": 0,
Dec 06 07:40:59 compute-0 modest_herschel[342387]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:40:59 compute-0 modest_herschel[342387]:         "type": "bluestore"
Dec 06 07:40:59 compute-0 modest_herschel[342387]:     }
Dec 06 07:40:59 compute-0 modest_herschel[342387]: }
Dec 06 07:40:59 compute-0 systemd[1]: libpod-5a3e94184ae1f27cc60a0f7cad44540fea31f85e775ad78a56f2ec545140221b.scope: Deactivated successfully.
Dec 06 07:40:59 compute-0 podman[342370]: 2025-12-06 07:40:59.609495646 +0000 UTC m=+1.044119621 container died 5a3e94184ae1f27cc60a0f7cad44540fea31f85e775ad78a56f2ec545140221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec 06 07:40:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-44c71223b05307ff3c80d16019cd656823721ceb94fea1870e31763a705dc01e-merged.mount: Deactivated successfully.
Dec 06 07:40:59 compute-0 podman[342370]: 2025-12-06 07:40:59.658959382 +0000 UTC m=+1.093583337 container remove 5a3e94184ae1f27cc60a0f7cad44540fea31f85e775ad78a56f2ec545140221b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_herschel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:40:59 compute-0 systemd[1]: libpod-conmon-5a3e94184ae1f27cc60a0f7cad44540fea31f85e775ad78a56f2ec545140221b.scope: Deactivated successfully.
Dec 06 07:40:59 compute-0 sudo[342262]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:40:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:40:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:40:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:40:59 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0b5f03a0-e6c2-4c50-a8c0-1adeae8eebf1 does not exist
Dec 06 07:40:59 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f930cd2d-1753-4312-8a90-c4289f5ba332 does not exist
Dec 06 07:40:59 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d018e96c-a526-44cc-83d1-4398cb469e66 does not exist
Dec 06 07:40:59 compute-0 sudo[342473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:40:59 compute-0 sudo[342473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:59 compute-0 sudo[342473]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:59 compute-0 sudo[342498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:40:59 compute-0 sudo[342498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:40:59 compute-0 sudo[342498]: pam_unix(sudo:session): session closed for user root
Dec 06 07:40:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:40:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:40:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:40:59.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:40:59 compute-0 ceph-mon[74339]: pgmap v2555: 305 pgs: 305 active+clean; 464 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.5 MiB/s wr, 134 op/s
Dec 06 07:40:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1098147259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:40:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:40:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:41:00 compute-0 nova_compute[251992]: 2025-12-06 07:41:00.406 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:00.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 432 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 378 KiB/s wr, 57 op/s
Dec 06 07:41:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:01.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:02 compute-0 ceph-mon[74339]: pgmap v2556: 305 pgs: 305 active+clean; 432 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 378 KiB/s wr, 57 op/s
Dec 06 07:41:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:02.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 31 KiB/s wr, 107 op/s
Dec 06 07:41:03 compute-0 ceph-mon[74339]: pgmap v2557: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 31 KiB/s wr, 107 op/s
Dec 06 07:41:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:03.849 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:03.850 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:03.851 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:03.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:04 compute-0 nova_compute[251992]: 2025-12-06 07:41:04.180 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:04.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 30 KiB/s wr, 103 op/s
Dec 06 07:41:05 compute-0 ceph-mon[74339]: pgmap v2558: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 30 KiB/s wr, 103 op/s
Dec 06 07:41:05 compute-0 nova_compute[251992]: 2025-12-06 07:41:05.408 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:05.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3407117804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:06 compute-0 nova_compute[251992]: 2025-12-06 07:41:06.308 251996 DEBUG oslo_concurrency.lockutils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:06 compute-0 nova_compute[251992]: 2025-12-06 07:41:06.309 251996 DEBUG oslo_concurrency.lockutils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:06 compute-0 nova_compute[251992]: 2025-12-06 07:41:06.309 251996 INFO nova.compute.manager [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Shelving
Dec 06 07:41:06 compute-0 nova_compute[251992]: 2025-12-06 07:41:06.332 251996 DEBUG nova.virt.libvirt.driver [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:41:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:41:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:06.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:41:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 199 op/s
Dec 06 07:41:07 compute-0 ceph-mon[74339]: pgmap v2559: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 199 op/s
Dec 06 07:41:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:07.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:08.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 37 KiB/s wr, 201 op/s
Dec 06 07:41:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3528680710' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:41:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3528680710' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:41:09 compute-0 nova_compute[251992]: 2025-12-06 07:41:09.182 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:09.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:10 compute-0 nova_compute[251992]: 2025-12-06 07:41:10.064 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:10 compute-0 nova_compute[251992]: 2025-12-06 07:41:10.432 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:10 compute-0 ceph-mon[74339]: pgmap v2560: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 37 KiB/s wr, 201 op/s
Dec 06 07:41:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:10.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:10 compute-0 ovn_controller[147168]: 2025-12-06T07:41:10Z|00518|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:41:10 compute-0 nova_compute[251992]: 2025-12-06 07:41:10.832 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 37 KiB/s wr, 217 op/s
Dec 06 07:41:11 compute-0 ceph-mon[74339]: pgmap v2561: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 37 KiB/s wr, 217 op/s
Dec 06 07:41:11 compute-0 kernel: tapf1f563a8-90 (unregistering): left promiscuous mode
Dec 06 07:41:11 compute-0 NetworkManager[48965]: <info>  [1765006871.8902] device (tapf1f563a8-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:41:11 compute-0 ovn_controller[147168]: 2025-12-06T07:41:11Z|00519|binding|INFO|Releasing lport f1f563a8-9001-419f-858a-0213c5d6607a from this chassis (sb_readonly=0)
Dec 06 07:41:11 compute-0 ovn_controller[147168]: 2025-12-06T07:41:11Z|00520|binding|INFO|Setting lport f1f563a8-9001-419f-858a-0213c5d6607a down in Southbound
Dec 06 07:41:11 compute-0 nova_compute[251992]: 2025-12-06 07:41:11.896 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:11 compute-0 ovn_controller[147168]: 2025-12-06T07:41:11Z|00521|binding|INFO|Removing iface tapf1f563a8-90 ovn-installed in OVS
Dec 06 07:41:11 compute-0 nova_compute[251992]: 2025-12-06 07:41:11.898 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:11 compute-0 nova_compute[251992]: 2025-12-06 07:41:11.913 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:11 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000087.scope: Deactivated successfully.
Dec 06 07:41:11 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000087.scope: Consumed 16.661s CPU time.
Dec 06 07:41:11 compute-0 systemd-machined[212986]: Machine qemu-64-instance-00000087 terminated.
Dec 06 07:41:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:11.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.063 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:b4:88 10.100.0.3'], port_security=['fa:16:3e:48:b4:88 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b85968f0-ebd7-48f6-a932-c4e8da09381e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd7c24a87-3909-4046-b7ee-0c4e77c9cc98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=f1f563a8-9001-419f-858a-0213c5d6607a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.064 158118 INFO neutron.agent.ovn.metadata.agent [-] Port f1f563a8-9001-419f-858a-0213c5d6607a in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 unbound from our chassis
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.065 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3beede49-1cbb-425c-b1af-82f43dc57163
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.086 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1d09e2-9e68-451f-967c-5b514fd05b42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:12 compute-0 kernel: tapf1f563a8-90: entered promiscuous mode
Dec 06 07:41:12 compute-0 kernel: tapf1f563a8-90 (unregistering): left promiscuous mode
Dec 06 07:41:12 compute-0 nova_compute[251992]: 2025-12-06 07:41:12.122 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.124 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[27c6d033-0f65-428a-aeac-7a6791415e85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.128 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[551a5c10-c998-468f-849a-88c0c4af4012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.154 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4e90b243-dd5f-4f1e-b19e-b4de88d65296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.170 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7a3281-b58b-465e-b406-075ffba3e117]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3beede49-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:c7:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 16, 'rx_bytes': 784, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 16, 'rx_bytes': 784, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678865, 'reachable_time': 42731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342550, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.186 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9754d5a8-99d3-4dcb-9a10-1417fd979034]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3beede49-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678878, 'tstamp': 678878}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342551, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3beede49-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678881, 'tstamp': 678881}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342551, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.188 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:12 compute-0 nova_compute[251992]: 2025-12-06 07:41:12.189 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.195 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3beede49-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:12 compute-0 nova_compute[251992]: 2025-12-06 07:41:12.195 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.195 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.196 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3beede49-10, col_values=(('external_ids', {'iface-id': '058fee39-af19-4b00-b556-fb88bc823747'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:12 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:12.196 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:41:12 compute-0 nova_compute[251992]: 2025-12-06 07:41:12.363 251996 INFO nova.virt.libvirt.driver [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Instance shutdown successfully after 6 seconds.
Dec 06 07:41:12 compute-0 nova_compute[251992]: 2025-12-06 07:41:12.369 251996 INFO nova.virt.libvirt.driver [-] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Instance destroyed successfully.
Dec 06 07:41:12 compute-0 nova_compute[251992]: 2025-12-06 07:41:12.370 251996 DEBUG nova.objects.instance [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'numa_topology' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:41:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:12.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3426479770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 395 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 776 KiB/s wr, 216 op/s
Dec 06 07:41:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:41:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:41:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:41:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:41:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:41:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:41:13 compute-0 ceph-mgr[74630]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 07:41:13 compute-0 ceph-mon[74339]: pgmap v2562: 305 pgs: 305 active+clean; 395 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 776 KiB/s wr, 216 op/s
Dec 06 07:41:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:13.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:14 compute-0 nova_compute[251992]: 2025-12-06 07:41:14.185 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:14.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 395 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 763 KiB/s wr, 149 op/s
Dec 06 07:41:14 compute-0 nova_compute[251992]: 2025-12-06 07:41:14.926 251996 INFO nova.virt.libvirt.driver [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Beginning cold snapshot process
Dec 06 07:41:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/835378175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:15 compute-0 nova_compute[251992]: 2025-12-06 07:41:15.097 251996 DEBUG nova.virt.libvirt.imagebackend [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:41:15 compute-0 nova_compute[251992]: 2025-12-06 07:41:15.142 251996 DEBUG nova.compute.manager [req-50dc1280-213b-4df5-bcab-9c87a91ab934 req-7d844a96-fae8-4f91-bda2-e4011ac9bfa7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-vif-unplugged-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:15 compute-0 nova_compute[251992]: 2025-12-06 07:41:15.143 251996 DEBUG oslo_concurrency.lockutils [req-50dc1280-213b-4df5-bcab-9c87a91ab934 req-7d844a96-fae8-4f91-bda2-e4011ac9bfa7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:15 compute-0 nova_compute[251992]: 2025-12-06 07:41:15.143 251996 DEBUG oslo_concurrency.lockutils [req-50dc1280-213b-4df5-bcab-9c87a91ab934 req-7d844a96-fae8-4f91-bda2-e4011ac9bfa7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:15 compute-0 nova_compute[251992]: 2025-12-06 07:41:15.143 251996 DEBUG oslo_concurrency.lockutils [req-50dc1280-213b-4df5-bcab-9c87a91ab934 req-7d844a96-fae8-4f91-bda2-e4011ac9bfa7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:15 compute-0 nova_compute[251992]: 2025-12-06 07:41:15.143 251996 DEBUG nova.compute.manager [req-50dc1280-213b-4df5-bcab-9c87a91ab934 req-7d844a96-fae8-4f91-bda2-e4011ac9bfa7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] No waiting events found dispatching network-vif-unplugged-f1f563a8-9001-419f-858a-0213c5d6607a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:41:15 compute-0 nova_compute[251992]: 2025-12-06 07:41:15.144 251996 WARNING nova.compute.manager [req-50dc1280-213b-4df5-bcab-9c87a91ab934 req-7d844a96-fae8-4f91-bda2-e4011ac9bfa7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received unexpected event network-vif-unplugged-f1f563a8-9001-419f-858a-0213c5d6607a for instance with vm_state active and task_state shelving_image_uploading.
Dec 06 07:41:15 compute-0 nova_compute[251992]: 2025-12-06 07:41:15.436 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:15 compute-0 nova_compute[251992]: 2025-12-06 07:41:15.721 251996 DEBUG nova.storage.rbd_utils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] creating snapshot(d90d77da694345b18c7fb33bf11f2de8) on rbd image(b85968f0-ebd7-48f6-a932-c4e8da09381e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:41:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:15.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Dec 06 07:41:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Dec 06 07:41:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Dec 06 07:41:16 compute-0 ceph-mon[74339]: pgmap v2563: 305 pgs: 305 active+clean; 395 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 763 KiB/s wr, 149 op/s
Dec 06 07:41:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2710732886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:16 compute-0 nova_compute[251992]: 2025-12-06 07:41:16.077 251996 DEBUG nova.storage.rbd_utils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] cloning vms/b85968f0-ebd7-48f6-a932-c4e8da09381e_disk@d90d77da694345b18c7fb33bf11f2de8 to images/16647923-36b7-4c61-9de3-fb629f75ca61 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:41:16 compute-0 nova_compute[251992]: 2025-12-06 07:41:16.177 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:16 compute-0 nova_compute[251992]: 2025-12-06 07:41:16.202 251996 DEBUG nova.storage.rbd_utils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] flattening images/16647923-36b7-4c61-9de3-fb629f75ca61 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:41:16 compute-0 nova_compute[251992]: 2025-12-06 07:41:16.589 251996 DEBUG nova.storage.rbd_utils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] removing snapshot(d90d77da694345b18c7fb33bf11f2de8) on rbd image(b85968f0-ebd7-48f6-a932-c4e8da09381e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:41:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:16.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 439 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 392 KiB/s rd, 4.0 MiB/s wr, 201 op/s
Dec 06 07:41:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Dec 06 07:41:17 compute-0 ceph-mon[74339]: osdmap e321: 3 total, 3 up, 3 in
Dec 06 07:41:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Dec 06 07:41:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.084 251996 DEBUG nova.storage.rbd_utils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] creating snapshot(snap) on rbd image(16647923-36b7-4c61-9de3-fb629f75ca61) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.311 251996 DEBUG nova.compute.manager [req-e63945a0-226a-4896-b837-3cd799adb754 req-ec2c5dac-611e-45a1-afc3-eaffb1507e4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.311 251996 DEBUG oslo_concurrency.lockutils [req-e63945a0-226a-4896-b837-3cd799adb754 req-ec2c5dac-611e-45a1-afc3-eaffb1507e4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.312 251996 DEBUG oslo_concurrency.lockutils [req-e63945a0-226a-4896-b837-3cd799adb754 req-ec2c5dac-611e-45a1-afc3-eaffb1507e4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.312 251996 DEBUG oslo_concurrency.lockutils [req-e63945a0-226a-4896-b837-3cd799adb754 req-ec2c5dac-611e-45a1-afc3-eaffb1507e4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.313 251996 DEBUG nova.compute.manager [req-e63945a0-226a-4896-b837-3cd799adb754 req-ec2c5dac-611e-45a1-afc3-eaffb1507e4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] No waiting events found dispatching network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.313 251996 WARNING nova.compute.manager [req-e63945a0-226a-4896-b837-3cd799adb754 req-ec2c5dac-611e-45a1-afc3-eaffb1507e4b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received unexpected event network-vif-plugged-f1f563a8-9001-419f-858a-0213c5d6607a for instance with vm_state active and task_state shelving_image_uploading.
Dec 06 07:41:17 compute-0 podman[342697]: 2025-12-06 07:41:17.448473801 +0000 UTC m=+0.102449302 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.679 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.701 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.701 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.701 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.702 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:41:17 compute-0 nova_compute[251992]: 2025-12-06 07:41:17.702 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:17.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Dec 06 07:41:18 compute-0 ceph-mon[74339]: pgmap v2565: 305 pgs: 305 active+clean; 439 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 392 KiB/s rd, 4.0 MiB/s wr, 201 op/s
Dec 06 07:41:18 compute-0 ceph-mon[74339]: osdmap e322: 3 total, 3 up, 3 in
Dec 06 07:41:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Dec 06 07:41:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Dec 06 07:41:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:41:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/673045323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.281 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.352 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.352 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.356 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.356 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:41:18
Dec 06 07:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'images', 'default.rgw.meta', 'backups', 'volumes', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root']
Dec 06 07:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.517 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.518 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4097MB free_disk=20.83806610107422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.518 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.519 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.664 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c1ef1073-7c66-428c-a02b-e4daa3551d22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.665 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance b85968f0-ebd7-48f6-a932-c4e8da09381e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.665 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.665 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:41:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:18.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:18 compute-0 nova_compute[251992]: 2025-12-06 07:41:18.756 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 9.7 MiB/s wr, 391 op/s
Dec 06 07:41:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:41:19 compute-0 nova_compute[251992]: 2025-12-06 07:41:19.188 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3072884106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:19 compute-0 nova_compute[251992]: 2025-12-06 07:41:19.206 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:19 compute-0 nova_compute[251992]: 2025-12-06 07:41:19.212 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:41:19 compute-0 ceph-mon[74339]: osdmap e323: 3 total, 3 up, 3 in
Dec 06 07:41:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/673045323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2372975583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2685470459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:19 compute-0 ceph-mon[74339]: pgmap v2568: 305 pgs: 305 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 9.7 MiB/s wr, 391 op/s
Dec 06 07:41:19 compute-0 nova_compute[251992]: 2025-12-06 07:41:19.232 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:41:19 compute-0 nova_compute[251992]: 2025-12-06 07:41:19.256 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:41:19 compute-0 nova_compute[251992]: 2025-12-06 07:41:19.256 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:19 compute-0 sudo[342767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:41:19 compute-0 sudo[342767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:41:19 compute-0 sudo[342767]: pam_unix(sudo:session): session closed for user root
Dec 06 07:41:19 compute-0 sudo[342792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:41:19 compute-0 sudo[342792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:41:19 compute-0 sudo[342792]: pam_unix(sudo:session): session closed for user root
Dec 06 07:41:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:19.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:20 compute-0 nova_compute[251992]: 2025-12-06 07:41:20.438 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3072884106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:20.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 14 MiB/s wr, 619 op/s
Dec 06 07:41:21 compute-0 nova_compute[251992]: 2025-12-06 07:41:21.234 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:21 compute-0 nova_compute[251992]: 2025-12-06 07:41:21.234 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:21 compute-0 nova_compute[251992]: 2025-12-06 07:41:21.235 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:21 compute-0 nova_compute[251992]: 2025-12-06 07:41:21.404 251996 INFO nova.virt.libvirt.driver [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Snapshot image upload complete
Dec 06 07:41:21 compute-0 nova_compute[251992]: 2025-12-06 07:41:21.405 251996 DEBUG nova.compute.manager [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:21 compute-0 nova_compute[251992]: 2025-12-06 07:41:21.461 251996 INFO nova.compute.manager [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Shelve offloading
Dec 06 07:41:21 compute-0 nova_compute[251992]: 2025-12-06 07:41:21.467 251996 INFO nova.virt.libvirt.driver [-] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Instance destroyed successfully.
Dec 06 07:41:21 compute-0 nova_compute[251992]: 2025-12-06 07:41:21.468 251996 DEBUG nova.compute.manager [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:21 compute-0 nova_compute[251992]: 2025-12-06 07:41:21.469 251996 DEBUG oslo_concurrency.lockutils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:41:21 compute-0 nova_compute[251992]: 2025-12-06 07:41:21.469 251996 DEBUG oslo_concurrency.lockutils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquired lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:41:21 compute-0 nova_compute[251992]: 2025-12-06 07:41:21.470 251996 DEBUG nova.network.neutron [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:41:21 compute-0 ceph-mon[74339]: pgmap v2569: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 14 MiB/s wr, 619 op/s
Dec 06 07:41:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:21.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:22 compute-0 nova_compute[251992]: 2025-12-06 07:41:22.128 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:22.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3397156865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 352 op/s
Dec 06 07:41:23 compute-0 podman[342820]: 2025-12-06 07:41:23.404382565 +0000 UTC m=+0.062932318 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 06 07:41:23 compute-0 nova_compute[251992]: 2025-12-06 07:41:23.408 251996 DEBUG nova.network.neutron [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updating instance_info_cache with network_info: [{"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:41:23 compute-0 podman[342819]: 2025-12-06 07:41:23.420325592 +0000 UTC m=+0.078078293 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 07:41:23 compute-0 nova_compute[251992]: 2025-12-06 07:41:23.472 251996 DEBUG oslo_concurrency.lockutils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Releasing lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:41:23 compute-0 nova_compute[251992]: 2025-12-06 07:41:23.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:23 compute-0 ceph-mon[74339]: pgmap v2570: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 352 op/s
Dec 06 07:41:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/845513629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1794384952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:23.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:24 compute-0 nova_compute[251992]: 2025-12-06 07:41:24.191 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Dec 06 07:41:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Dec 06 07:41:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Dec 06 07:41:24 compute-0 nova_compute[251992]: 2025-12-06 07:41:24.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:24 compute-0 nova_compute[251992]: 2025-12-06 07:41:24.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:41:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:41:24 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:41:24 compute-0 nova_compute[251992]: 2025-12-06 07:41:24.707 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:41:24 compute-0 nova_compute[251992]: 2025-12-06 07:41:24.708 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:24 compute-0 nova_compute[251992]: 2025-12-06 07:41:24.708 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:24 compute-0 nova_compute[251992]: 2025-12-06 07:41:24.708 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:41:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:24.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 6.8 MiB/s wr, 309 op/s
Dec 06 07:41:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:41:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:41:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.442 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:25 compute-0 ceph-mon[74339]: osdmap e324: 3 total, 3 up, 3 in
Dec 06 07:41:25 compute-0 ceph-mon[74339]: pgmap v2572: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 6.8 MiB/s wr, 309 op/s
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.590 251996 INFO nova.virt.libvirt.driver [-] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Instance destroyed successfully.
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.590 251996 DEBUG nova.objects.instance [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'resources' on Instance uuid b85968f0-ebd7-48f6-a932-c4e8da09381e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.608 251996 DEBUG nova.virt.libvirt.vif [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:39:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1033932756',display_name='tempest-ServerActionsTestOtherB-server-1033932756',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1033932756',id=135,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNN9jQYM4kD1mTnBw0NDX39Zbdx9ux1HYR8eIQywEVZjFzFLOofd0KCZoZVTNe73or3BwcctNg+QkLYSKwQ/ud2tRwFgp+UoYWDz3YSx64mxFih1G20CdOLvEJ79lvWoOg==',key_name='tempest-keypair-1961317761',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:39:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-kbdg07ib',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member',shelved_at='2025-12-06T07:41:21.405475',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='16647923-36b7-4c61-9de3-fb629f75ca61'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:41:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=b85968f0-ebd7-48f6-a932-c4e8da09381e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.609 251996 DEBUG nova.network.os_vif_util [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1f563a8-90", "ovs_interfaceid": "f1f563a8-9001-419f-858a-0213c5d6607a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.610 251996 DEBUG nova.network.os_vif_util [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.610 251996 DEBUG os_vif [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.613 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.613 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1f563a8-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.616 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.621 251996 INFO os_vif [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:b4:88,bridge_name='br-int',has_traffic_filtering=True,id=f1f563a8-9001-419f-858a-0213c5d6607a,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1f563a8-90')
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.795 251996 DEBUG nova.compute.manager [req-94433c17-983d-48b0-bf61-04e087bb08a9 req-0de79f66-7f55-4869-bdcd-0d5cc6abf18d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Received event network-changed-f1f563a8-9001-419f-858a-0213c5d6607a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.795 251996 DEBUG nova.compute.manager [req-94433c17-983d-48b0-bf61-04e087bb08a9 req-0de79f66-7f55-4869-bdcd-0d5cc6abf18d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Refreshing instance network info cache due to event network-changed-f1f563a8-9001-419f-858a-0213c5d6607a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.796 251996 DEBUG oslo_concurrency.lockutils [req-94433c17-983d-48b0-bf61-04e087bb08a9 req-0de79f66-7f55-4869-bdcd-0d5cc6abf18d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.796 251996 DEBUG oslo_concurrency.lockutils [req-94433c17-983d-48b0-bf61-04e087bb08a9 req-0de79f66-7f55-4869-bdcd-0d5cc6abf18d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:41:25 compute-0 nova_compute[251992]: 2025-12-06 07:41:25.796 251996 DEBUG nova.network.neutron [req-94433c17-983d-48b0-bf61-04e087bb08a9 req-0de79f66-7f55-4869-bdcd-0d5cc6abf18d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Refreshing network info cache for port f1f563a8-9001-419f-858a-0213c5d6607a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:41:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:25.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007494976328401266 of space, bias 1.0, pg target 2.2484928985203796 quantized to 32 (current 32)
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021625052345058625 of space, bias 1.0, pg target 0.644426559882747 quantized to 32 (current 32)
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005059113565512749 of space, bias 1.0, pg target 1.5076158425227992 quantized to 32 (current 32)
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Dec 06 07:41:26 compute-0 nova_compute[251992]: 2025-12-06 07:41:26.343 251996 INFO nova.virt.libvirt.driver [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Deleting instance files /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e_del
Dec 06 07:41:26 compute-0 nova_compute[251992]: 2025-12-06 07:41:26.344 251996 INFO nova.virt.libvirt.driver [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Deletion of /var/lib/nova/instances/b85968f0-ebd7-48f6-a932-c4e8da09381e_del complete
Dec 06 07:41:26 compute-0 nova_compute[251992]: 2025-12-06 07:41:26.461 251996 INFO nova.scheduler.client.report [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Deleted allocations for instance b85968f0-ebd7-48f6-a932-c4e8da09381e
Dec 06 07:41:26 compute-0 nova_compute[251992]: 2025-12-06 07:41:26.520 251996 DEBUG oslo_concurrency.lockutils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:26 compute-0 nova_compute[251992]: 2025-12-06 07:41:26.521 251996 DEBUG oslo_concurrency.lockutils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:26 compute-0 nova_compute[251992]: 2025-12-06 07:41:26.591 251996 DEBUG oslo_concurrency.processutils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:26.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 547 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.2 MiB/s wr, 283 op/s
Dec 06 07:41:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:41:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/96112953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:27 compute-0 nova_compute[251992]: 2025-12-06 07:41:27.051 251996 DEBUG oslo_concurrency.processutils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:27 compute-0 nova_compute[251992]: 2025-12-06 07:41:27.057 251996 DEBUG nova.compute.provider_tree [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:41:27 compute-0 nova_compute[251992]: 2025-12-06 07:41:27.102 251996 DEBUG nova.scheduler.client.report [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:41:27 compute-0 nova_compute[251992]: 2025-12-06 07:41:27.129 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006872.129066, b85968f0-ebd7-48f6-a932-c4e8da09381e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:41:27 compute-0 nova_compute[251992]: 2025-12-06 07:41:27.130 251996 INFO nova.compute.manager [-] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] VM Stopped (Lifecycle Event)
Dec 06 07:41:27 compute-0 nova_compute[251992]: 2025-12-06 07:41:27.138 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:27 compute-0 nova_compute[251992]: 2025-12-06 07:41:27.159 251996 DEBUG nova.compute.manager [None req-723a563a-9370-4118-ad3c-e787fd088dee - - - - - -] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:41:27 compute-0 nova_compute[251992]: 2025-12-06 07:41:27.168 251996 DEBUG oslo_concurrency.lockutils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:27 compute-0 nova_compute[251992]: 2025-12-06 07:41:27.266 251996 DEBUG oslo_concurrency.lockutils [None req-04e6c6d8-654c-4a9d-be9b-f8da2d82ddb8 a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "b85968f0-ebd7-48f6-a932-c4e8da09381e" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 20.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:27 compute-0 nova_compute[251992]: 2025-12-06 07:41:27.653 251996 DEBUG nova.network.neutron [req-94433c17-983d-48b0-bf61-04e087bb08a9 req-0de79f66-7f55-4869-bdcd-0d5cc6abf18d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updated VIF entry in instance network info cache for port f1f563a8-9001-419f-858a-0213c5d6607a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:41:27 compute-0 nova_compute[251992]: 2025-12-06 07:41:27.654 251996 DEBUG nova.network.neutron [req-94433c17-983d-48b0-bf61-04e087bb08a9 req-0de79f66-7f55-4869-bdcd-0d5cc6abf18d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: b85968f0-ebd7-48f6-a932-c4e8da09381e] Updating instance_info_cache with network_info: [{"id": "f1f563a8-9001-419f-858a-0213c5d6607a", "address": "fa:16:3e:48:b4:88", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": null, "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapf1f563a8-90", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:41:27 compute-0 nova_compute[251992]: 2025-12-06 07:41:27.676 251996 DEBUG oslo_concurrency.lockutils [req-94433c17-983d-48b0-bf61-04e087bb08a9 req-0de79f66-7f55-4869-bdcd-0d5cc6abf18d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-b85968f0-ebd7-48f6-a932-c4e8da09381e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:41:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:28.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:28 compute-0 ceph-mon[74339]: pgmap v2573: 305 pgs: 305 active+clean; 547 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.2 MiB/s wr, 283 op/s
Dec 06 07:41:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/96112953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3383122313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 523 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.8 MiB/s wr, 282 op/s
Dec 06 07:41:29 compute-0 nova_compute[251992]: 2025-12-06 07:41:29.152 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:29.153 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:41:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:29.155 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:41:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:29.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:30 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:41:30.158 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:41:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3505286723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:30 compute-0 ceph-mon[74339]: pgmap v2574: 305 pgs: 305 active+clean; 523 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.8 MiB/s wr, 282 op/s
Dec 06 07:41:30 compute-0 nova_compute[251992]: 2025-12-06 07:41:30.444 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:30 compute-0 nova_compute[251992]: 2025-12-06 07:41:30.615 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:30.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 197 op/s
Dec 06 07:41:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:31.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:32.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Dec 06 07:41:32 compute-0 ceph-mon[74339]: pgmap v2575: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 197 op/s
Dec 06 07:41:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:34.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:34.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:34 compute-0 ovn_controller[147168]: 2025-12-06T07:41:34Z|00522|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:41:34 compute-0 nova_compute[251992]: 2025-12-06 07:41:34.884 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Dec 06 07:41:35 compute-0 ceph-mon[74339]: pgmap v2576: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Dec 06 07:41:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1684873521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:35 compute-0 nova_compute[251992]: 2025-12-06 07:41:35.446 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:35 compute-0 nova_compute[251992]: 2025-12-06 07:41:35.616 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:36.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Dec 06 07:41:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:38.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/193829452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:38 compute-0 ceph-mon[74339]: pgmap v2577: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Dec 06 07:41:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/257299139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:38.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 467 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 301 KiB/s rd, 1.7 MiB/s wr, 75 op/s
Dec 06 07:41:39 compute-0 ceph-mon[74339]: pgmap v2578: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Dec 06 07:41:39 compute-0 ceph-mon[74339]: pgmap v2579: 305 pgs: 305 active+clean; 467 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 301 KiB/s rd, 1.7 MiB/s wr, 75 op/s
Dec 06 07:41:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:39 compute-0 sudo[342908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:41:39 compute-0 sudo[342908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:41:39 compute-0 sudo[342908]: pam_unix(sudo:session): session closed for user root
Dec 06 07:41:39 compute-0 sudo[342933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:41:39 compute-0 sudo[342933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:41:39 compute-0 sudo[342933]: pam_unix(sudo:session): session closed for user root
Dec 06 07:41:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:40.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2344964929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/523603762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:40 compute-0 nova_compute[251992]: 2025-12-06 07:41:40.448 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:40 compute-0 nova_compute[251992]: 2025-12-06 07:41:40.618 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:40.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 506 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 123 op/s
Dec 06 07:41:41 compute-0 ceph-mon[74339]: pgmap v2580: 305 pgs: 305 active+clean; 506 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 123 op/s
Dec 06 07:41:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:42.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4180631999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3433887939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:41:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:42.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 506 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 80 op/s
Dec 06 07:41:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:41:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:41:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:41:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:41:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:41:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:41:43 compute-0 ceph-mon[74339]: pgmap v2581: 305 pgs: 305 active+clean; 506 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 80 op/s
Dec 06 07:41:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:44.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:44.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 538 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.8 MiB/s wr, 164 op/s
Dec 06 07:41:45 compute-0 nova_compute[251992]: 2025-12-06 07:41:45.450 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:45 compute-0 nova_compute[251992]: 2025-12-06 07:41:45.619 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:46.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:46 compute-0 nova_compute[251992]: 2025-12-06 07:41:46.341 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:46 compute-0 ceph-mon[74339]: pgmap v2582: 305 pgs: 305 active+clean; 538 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.8 MiB/s wr, 164 op/s
Dec 06 07:41:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:46.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 5.7 MiB/s wr, 248 op/s
Dec 06 07:41:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Dec 06 07:41:47 compute-0 ceph-mon[74339]: pgmap v2583: 305 pgs: 305 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 5.7 MiB/s wr, 248 op/s
Dec 06 07:41:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Dec 06 07:41:47 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Dec 06 07:41:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:48.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:48 compute-0 podman[342962]: 2025-12-06 07:41:48.424789373 +0000 UTC m=+0.080579002 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 07:41:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:48.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 560 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.0 MiB/s wr, 345 op/s
Dec 06 07:41:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:49 compute-0 ceph-mon[74339]: osdmap e325: 3 total, 3 up, 3 in
Dec 06 07:41:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:50.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:50 compute-0 nova_compute[251992]: 2025-12-06 07:41:50.452 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:50 compute-0 nova_compute[251992]: 2025-12-06 07:41:50.576 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:50 compute-0 nova_compute[251992]: 2025-12-06 07:41:50.620 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:50.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:50 compute-0 ceph-mon[74339]: pgmap v2585: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 560 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.0 MiB/s wr, 345 op/s
Dec 06 07:41:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 4.0 MiB/s wr, 350 op/s
Dec 06 07:41:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Dec 06 07:41:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:52.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:52.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 4.0 MiB/s wr, 350 op/s
Dec 06 07:41:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1112513091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:53 compute-0 ceph-mon[74339]: pgmap v2586: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 4.0 MiB/s wr, 350 op/s
Dec 06 07:41:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Dec 06 07:41:53 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Dec 06 07:41:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:54.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:54 compute-0 podman[342992]: 2025-12-06 07:41:54.401530809 +0000 UTC m=+0.056413619 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:41:54 compute-0 podman[342993]: 2025-12-06 07:41:54.4085362 +0000 UTC m=+0.060545821 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:41:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:54.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:54 compute-0 ceph-mon[74339]: pgmap v2587: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 4.0 MiB/s wr, 350 op/s
Dec 06 07:41:54 compute-0 ceph-mon[74339]: osdmap e326: 3 total, 3 up, 3 in
Dec 06 07:41:54 compute-0 nova_compute[251992]: 2025-12-06 07:41:54.926 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:54 compute-0 nova_compute[251992]: 2025-12-06 07:41:54.927 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.9 MiB/s wr, 218 op/s
Dec 06 07:41:54 compute-0 nova_compute[251992]: 2025-12-06 07:41:54.946 251996 DEBUG nova.compute.manager [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.043 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.043 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.049 251996 DEBUG nova.virt.hardware [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.049 251996 INFO nova.compute.claims [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.163 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.454 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:41:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011568115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.600 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.609 251996 DEBUG nova.compute.provider_tree [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.622 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.643 251996 DEBUG nova.scheduler.client.report [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.681 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.682 251996 DEBUG nova.compute.manager [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.753 251996 DEBUG nova.compute.manager [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.753 251996 DEBUG nova.network.neutron [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.782 251996 INFO nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.813 251996 DEBUG nova.compute.manager [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.955 251996 DEBUG nova.compute.manager [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.956 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.956 251996 INFO nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Creating image(s)
Dec 06 07:41:55 compute-0 nova_compute[251992]: 2025-12-06 07:41:55.981 251996 DEBUG nova.storage.rbd_utils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:41:56 compute-0 nova_compute[251992]: 2025-12-06 07:41:56.009 251996 DEBUG nova.storage.rbd_utils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:41:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:56.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:56 compute-0 nova_compute[251992]: 2025-12-06 07:41:56.037 251996 DEBUG nova.storage.rbd_utils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:41:56 compute-0 nova_compute[251992]: 2025-12-06 07:41:56.041 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:56 compute-0 nova_compute[251992]: 2025-12-06 07:41:56.069 251996 DEBUG nova.policy [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d8b62a3276f4a8b8349af67b82134c8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eff1f6a1654b45079de20eddb830e76d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:41:56 compute-0 nova_compute[251992]: 2025-12-06 07:41:56.107 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:41:56 compute-0 nova_compute[251992]: 2025-12-06 07:41:56.108 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:41:56 compute-0 nova_compute[251992]: 2025-12-06 07:41:56.108 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:41:56 compute-0 nova_compute[251992]: 2025-12-06 07:41:56.108 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:41:56 compute-0 nova_compute[251992]: 2025-12-06 07:41:56.134 251996 DEBUG nova.storage.rbd_utils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:41:56 compute-0 nova_compute[251992]: 2025-12-06 07:41:56.138 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:41:56 compute-0 ceph-mon[74339]: pgmap v2589: 305 pgs: 305 active+clean; 466 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.9 MiB/s wr, 218 op/s
Dec 06 07:41:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2011568115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:56.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:56 compute-0 nova_compute[251992]: 2025-12-06 07:41:56.784 251996 DEBUG nova.network.neutron [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Successfully created port: 6bec9913-1f72-4458-a269-bab059df9fe1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:41:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 226 op/s
Dec 06 07:41:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:41:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:41:58.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:41:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/26752850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:41:58 compute-0 ceph-mon[74339]: pgmap v2590: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 226 op/s
Dec 06 07:41:58 compute-0 nova_compute[251992]: 2025-12-06 07:41:58.358 251996 DEBUG nova.network.neutron [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Successfully updated port: 6bec9913-1f72-4458-a269-bab059df9fe1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:41:58 compute-0 nova_compute[251992]: 2025-12-06 07:41:58.383 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "refresh_cache-4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:41:58 compute-0 nova_compute[251992]: 2025-12-06 07:41:58.383 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquired lock "refresh_cache-4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:41:58 compute-0 nova_compute[251992]: 2025-12-06 07:41:58.383 251996 DEBUG nova.network.neutron [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:41:58 compute-0 nova_compute[251992]: 2025-12-06 07:41:58.504 251996 DEBUG nova.compute.manager [req-1ea72fd3-38b8-4a79-be67-68127bfc97e8 req-167e6273-8c60-4fe6-9151-487f4f92fcc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Received event network-changed-6bec9913-1f72-4458-a269-bab059df9fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:41:58 compute-0 nova_compute[251992]: 2025-12-06 07:41:58.505 251996 DEBUG nova.compute.manager [req-1ea72fd3-38b8-4a79-be67-68127bfc97e8 req-167e6273-8c60-4fe6-9151-487f4f92fcc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Refreshing instance network info cache due to event network-changed-6bec9913-1f72-4458-a269-bab059df9fe1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:41:58 compute-0 nova_compute[251992]: 2025-12-06 07:41:58.505 251996 DEBUG oslo_concurrency.lockutils [req-1ea72fd3-38b8-4a79-be67-68127bfc97e8 req-167e6273-8c60-4fe6-9151-487f4f92fcc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:41:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:41:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:41:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:41:58.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:41:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 473 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 183 op/s
Dec 06 07:41:59 compute-0 nova_compute[251992]: 2025-12-06 07:41:59.040 251996 DEBUG nova.network.neutron [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:41:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:41:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Dec 06 07:41:59 compute-0 sudo[343149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:41:59 compute-0 sudo[343149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:41:59 compute-0 sudo[343149]: pam_unix(sudo:session): session closed for user root
Dec 06 07:41:59 compute-0 sudo[343174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:41:59 compute-0 sudo[343174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:41:59 compute-0 sudo[343174]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:00.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:00 compute-0 sudo[343199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:00 compute-0 sudo[343199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:00 compute-0 sudo[343199]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Dec 06 07:42:00 compute-0 ceph-mon[74339]: pgmap v2591: 305 pgs: 305 active+clean; 473 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 183 op/s
Dec 06 07:42:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Dec 06 07:42:00 compute-0 sudo[343224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:42:00 compute-0 sudo[343224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:00 compute-0 sudo[343224]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:00 compute-0 nova_compute[251992]: 2025-12-06 07:42:00.235 251996 DEBUG nova.network.neutron [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Updating instance_info_cache with network_info: [{"id": "6bec9913-1f72-4458-a269-bab059df9fe1", "address": "fa:16:3e:a9:9d:eb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bec9913-1f", "ovs_interfaceid": "6bec9913-1f72-4458-a269-bab059df9fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:42:00 compute-0 sudo[343249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:00 compute-0 sudo[343249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:00 compute-0 sudo[343249]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:00 compute-0 sudo[343274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:42:00 compute-0 sudo[343274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:00 compute-0 nova_compute[251992]: 2025-12-06 07:42:00.508 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:00 compute-0 nova_compute[251992]: 2025-12-06 07:42:00.529 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:42:00 compute-0 nova_compute[251992]: 2025-12-06 07:42:00.605 251996 DEBUG nova.storage.rbd_utils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] resizing rbd image 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:42:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:42:00 compute-0 nova_compute[251992]: 2025-12-06 07:42:00.640 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:00 compute-0 nova_compute[251992]: 2025-12-06 07:42:00.698 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Releasing lock "refresh_cache-4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:42:00 compute-0 nova_compute[251992]: 2025-12-06 07:42:00.699 251996 DEBUG nova.compute.manager [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Instance network_info: |[{"id": "6bec9913-1f72-4458-a269-bab059df9fe1", "address": "fa:16:3e:a9:9d:eb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bec9913-1f", "ovs_interfaceid": "6bec9913-1f72-4458-a269-bab059df9fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:42:00 compute-0 nova_compute[251992]: 2025-12-06 07:42:00.699 251996 DEBUG oslo_concurrency.lockutils [req-1ea72fd3-38b8-4a79-be67-68127bfc97e8 req-167e6273-8c60-4fe6-9151-487f4f92fcc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:42:00 compute-0 nova_compute[251992]: 2025-12-06 07:42:00.699 251996 DEBUG nova.network.neutron [req-1ea72fd3-38b8-4a79-be67-68127bfc97e8 req-167e6273-8c60-4fe6-9151-487f4f92fcc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Refreshing network info cache for port 6bec9913-1f72-4458-a269-bab059df9fe1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:42:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:00 compute-0 sudo[343274]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:00.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 533 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 610 KiB/s rd, 6.7 MiB/s wr, 133 op/s
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.048 251996 DEBUG nova.objects.instance [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'migration_context' on Instance uuid 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.077 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.077 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Ensure instance console log exists: /var/lib/nova/instances/4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.078 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.078 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.078 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.080 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Start _get_guest_xml network_info=[{"id": "6bec9913-1f72-4458-a269-bab059df9fe1", "address": "fa:16:3e:a9:9d:eb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bec9913-1f", "ovs_interfaceid": "6bec9913-1f72-4458-a269-bab059df9fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.084 251996 WARNING nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.095 251996 DEBUG nova.virt.libvirt.host [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.096 251996 DEBUG nova.virt.libvirt.host [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.101 251996 DEBUG nova.virt.libvirt.host [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.101 251996 DEBUG nova.virt.libvirt.host [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.103 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.103 251996 DEBUG nova.virt.hardware [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.103 251996 DEBUG nova.virt.hardware [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.103 251996 DEBUG nova.virt.hardware [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.104 251996 DEBUG nova.virt.hardware [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.104 251996 DEBUG nova.virt.hardware [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.104 251996 DEBUG nova.virt.hardware [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.104 251996 DEBUG nova.virt.hardware [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.104 251996 DEBUG nova.virt.hardware [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.105 251996 DEBUG nova.virt.hardware [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.105 251996 DEBUG nova.virt.hardware [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.105 251996 DEBUG nova.virt.hardware [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.109 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:01 compute-0 ceph-mon[74339]: osdmap e327: 3 total, 3 up, 3 in
Dec 06 07:42:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:01 compute-0 ceph-mon[74339]: pgmap v2593: 305 pgs: 305 active+clean; 533 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 610 KiB/s rd, 6.7 MiB/s wr, 133 op/s
Dec 06 07:42:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:42:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3794994888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.544 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.569 251996 DEBUG nova.storage.rbd_utils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:01 compute-0 nova_compute[251992]: 2025-12-06 07:42:01.574 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:42:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:42:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:42:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:42:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:42:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:01 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 215c0a04-afa0-4a70-9e99-01f6d0d1fd5c does not exist
Dec 06 07:42:01 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 34007a08-2aee-4834-aaab-75c8c2b36f41 does not exist
Dec 06 07:42:01 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b44b96b3-4170-47e2-bc24-5ef58a56e80b does not exist
Dec 06 07:42:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:42:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:42:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:42:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:42:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:42:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:42:01 compute-0 sudo[343463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:01 compute-0 sudo[343463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:01 compute-0 sudo[343463]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:01 compute-0 sudo[343488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:42:01 compute-0 sudo[343488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:01 compute-0 sudo[343488]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:01 compute-0 sudo[343513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:01 compute-0 sudo[343513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:01 compute-0 sudo[343513]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:01 compute-0 sudo[343538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:42:01 compute-0 sudo[343538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:42:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280951404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:02.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.045 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.047 251996 DEBUG nova.virt.libvirt.vif [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:41:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-309022851',display_name='tempest-ServersTestJSON-server-309022851',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-309022851',id=143,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-8g0fb13j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=T
agList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:41:55Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6bec9913-1f72-4458-a269-bab059df9fe1", "address": "fa:16:3e:a9:9d:eb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bec9913-1f", "ovs_interfaceid": "6bec9913-1f72-4458-a269-bab059df9fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.048 251996 DEBUG nova.network.os_vif_util [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "6bec9913-1f72-4458-a269-bab059df9fe1", "address": "fa:16:3e:a9:9d:eb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bec9913-1f", "ovs_interfaceid": "6bec9913-1f72-4458-a269-bab059df9fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.049 251996 DEBUG nova.network.os_vif_util [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:9d:eb,bridge_name='br-int',has_traffic_filtering=True,id=6bec9913-1f72-4458-a269-bab059df9fe1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bec9913-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.050 251996 DEBUG nova.objects.instance [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'pci_devices' on Instance uuid 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.067 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:42:02 compute-0 nova_compute[251992]:   <uuid>4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6</uuid>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   <name>instance-0000008f</name>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <nova:name>tempest-ServersTestJSON-server-309022851</nova:name>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:42:01</nova:creationTime>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <nova:user uuid="0d8b62a3276f4a8b8349af67b82134c8">tempest-ServersTestJSON-374151197-project-member</nova:user>
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <nova:project uuid="eff1f6a1654b45079de20eddb830e76d">tempest-ServersTestJSON-374151197</nova:project>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <nova:port uuid="6bec9913-1f72-4458-a269-bab059df9fe1">
Dec 06 07:42:02 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <system>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <entry name="serial">4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6</entry>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <entry name="uuid">4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6</entry>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     </system>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   <os>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   </os>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   <features>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   </features>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk">
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       </source>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk.config">
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       </source>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:42:02 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:a9:9d:eb"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <target dev="tap6bec9913-1f"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6/console.log" append="off"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <video>
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     </video>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:42:02 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:42:02 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:42:02 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:42:02 compute-0 nova_compute[251992]: </domain>
Dec 06 07:42:02 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.070 251996 DEBUG nova.compute.manager [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Preparing to wait for external event network-vif-plugged-6bec9913-1f72-4458-a269-bab059df9fe1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.070 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.070 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.071 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.072 251996 DEBUG nova.virt.libvirt.vif [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:41:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-309022851',display_name='tempest-ServersTestJSON-server-309022851',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-309022851',id=143,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-8g0fb13j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:41:55Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6bec9913-1f72-4458-a269-bab059df9fe1", "address": "fa:16:3e:a9:9d:eb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bec9913-1f", "ovs_interfaceid": "6bec9913-1f72-4458-a269-bab059df9fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.072 251996 DEBUG nova.network.os_vif_util [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "6bec9913-1f72-4458-a269-bab059df9fe1", "address": "fa:16:3e:a9:9d:eb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bec9913-1f", "ovs_interfaceid": "6bec9913-1f72-4458-a269-bab059df9fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.073 251996 DEBUG nova.network.os_vif_util [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:9d:eb,bridge_name='br-int',has_traffic_filtering=True,id=6bec9913-1f72-4458-a269-bab059df9fe1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bec9913-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.073 251996 DEBUG os_vif [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:9d:eb,bridge_name='br-int',has_traffic_filtering=True,id=6bec9913-1f72-4458-a269-bab059df9fe1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bec9913-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.074 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.074 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.075 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.078 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.079 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6bec9913-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.079 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6bec9913-1f, col_values=(('external_ids', {'iface-id': '6bec9913-1f72-4458-a269-bab059df9fe1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:9d:eb', 'vm-uuid': '4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.081 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.083 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:42:02 compute-0 NetworkManager[48965]: <info>  [1765006922.0840] manager: (tap6bec9913-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.088 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.090 251996 INFO os_vif [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:9d:eb,bridge_name='br-int',has_traffic_filtering=True,id=6bec9913-1f72-4458-a269-bab059df9fe1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bec9913-1f')
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.134 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.135 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.135 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] No VIF found with MAC fa:16:3e:a9:9d:eb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.135 251996 INFO nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Using config drive
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.170 251996 DEBUG nova.storage.rbd_utils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.214 251996 DEBUG nova.network.neutron [req-1ea72fd3-38b8-4a79-be67-68127bfc97e8 req-167e6273-8c60-4fe6-9151-487f4f92fcc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Updated VIF entry in instance network info cache for port 6bec9913-1f72-4458-a269-bab059df9fe1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.215 251996 DEBUG nova.network.neutron [req-1ea72fd3-38b8-4a79-be67-68127bfc97e8 req-167e6273-8c60-4fe6-9151-487f4f92fcc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Updating instance_info_cache with network_info: [{"id": "6bec9913-1f72-4458-a269-bab059df9fe1", "address": "fa:16:3e:a9:9d:eb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bec9913-1f", "ovs_interfaceid": "6bec9913-1f72-4458-a269-bab059df9fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.227 251996 DEBUG oslo_concurrency.lockutils [req-1ea72fd3-38b8-4a79-be67-68127bfc97e8 req-167e6273-8c60-4fe6-9151-487f4f92fcc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:42:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3794994888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:42:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:42:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:42:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:42:02 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:42:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/820102900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1280951404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/41133090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:02 compute-0 podman[343624]: 2025-12-06 07:42:02.273178169 +0000 UTC m=+0.037703206 container create fe4db7a2f21fd378d29a5e9f67c37a4a5a467f596b84fdc4dba006f45c837b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:42:02 compute-0 systemd[1]: Started libpod-conmon-fe4db7a2f21fd378d29a5e9f67c37a4a5a467f596b84fdc4dba006f45c837b61.scope.
Dec 06 07:42:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:42:02 compute-0 podman[343624]: 2025-12-06 07:42:02.349378289 +0000 UTC m=+0.113903346 container init fe4db7a2f21fd378d29a5e9f67c37a4a5a467f596b84fdc4dba006f45c837b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:42:02 compute-0 podman[343624]: 2025-12-06 07:42:02.258186487 +0000 UTC m=+0.022711544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:42:02 compute-0 podman[343624]: 2025-12-06 07:42:02.356288228 +0000 UTC m=+0.120813265 container start fe4db7a2f21fd378d29a5e9f67c37a4a5a467f596b84fdc4dba006f45c837b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:42:02 compute-0 podman[343624]: 2025-12-06 07:42:02.360501515 +0000 UTC m=+0.125026572 container attach fe4db7a2f21fd378d29a5e9f67c37a4a5a467f596b84fdc4dba006f45c837b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 07:42:02 compute-0 objective_thompson[343640]: 167 167
Dec 06 07:42:02 compute-0 systemd[1]: libpod-fe4db7a2f21fd378d29a5e9f67c37a4a5a467f596b84fdc4dba006f45c837b61.scope: Deactivated successfully.
Dec 06 07:42:02 compute-0 podman[343624]: 2025-12-06 07:42:02.361904453 +0000 UTC m=+0.126429490 container died fe4db7a2f21fd378d29a5e9f67c37a4a5a467f596b84fdc4dba006f45c837b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:42:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-30879fb3f92b7cb298e5f217fb7f2298e7b3391dab99f924a85f1fa5b0a10654-merged.mount: Deactivated successfully.
Dec 06 07:42:02 compute-0 podman[343624]: 2025-12-06 07:42:02.396160462 +0000 UTC m=+0.160685499 container remove fe4db7a2f21fd378d29a5e9f67c37a4a5a467f596b84fdc4dba006f45c837b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 07:42:02 compute-0 systemd[1]: libpod-conmon-fe4db7a2f21fd378d29a5e9f67c37a4a5a467f596b84fdc4dba006f45c837b61.scope: Deactivated successfully.
Dec 06 07:42:02 compute-0 podman[343664]: 2025-12-06 07:42:02.551920797 +0000 UTC m=+0.042411175 container create 5167084a4f18dcc274ea3adba022e6e817bf84e1338cd361ff5912ac162c2fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_sinoussi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 07:42:02 compute-0 systemd[1]: Started libpod-conmon-5167084a4f18dcc274ea3adba022e6e817bf84e1338cd361ff5912ac162c2fb5.scope.
Dec 06 07:42:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab755119cef9b5c7a7b609860df5818e774575ca22c5f486eeb00b1151541c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:02 compute-0 podman[343664]: 2025-12-06 07:42:02.535448144 +0000 UTC m=+0.025938562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab755119cef9b5c7a7b609860df5818e774575ca22c5f486eeb00b1151541c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab755119cef9b5c7a7b609860df5818e774575ca22c5f486eeb00b1151541c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab755119cef9b5c7a7b609860df5818e774575ca22c5f486eeb00b1151541c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dab755119cef9b5c7a7b609860df5818e774575ca22c5f486eeb00b1151541c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:02 compute-0 podman[343664]: 2025-12-06 07:42:02.643268714 +0000 UTC m=+0.133759222 container init 5167084a4f18dcc274ea3adba022e6e817bf84e1338cd361ff5912ac162c2fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:42:02 compute-0 podman[343664]: 2025-12-06 07:42:02.651809368 +0000 UTC m=+0.142299776 container start 5167084a4f18dcc274ea3adba022e6e817bf84e1338cd361ff5912ac162c2fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:42:02 compute-0 podman[343664]: 2025-12-06 07:42:02.655432677 +0000 UTC m=+0.145923095 container attach 5167084a4f18dcc274ea3adba022e6e817bf84e1338cd361ff5912ac162c2fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_sinoussi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:42:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:02.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.833 251996 INFO nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Creating config drive at /var/lib/nova/instances/4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6/disk.config
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.839 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpabz9c89a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 533 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 515 KiB/s rd, 4.3 MiB/s wr, 90 op/s
Dec 06 07:42:02 compute-0 nova_compute[251992]: 2025-12-06 07:42:02.972 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpabz9c89a" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.006 251996 DEBUG nova.storage.rbd_utils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] rbd image 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.010 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6/disk.config 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.209 251996 DEBUG oslo_concurrency.processutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6/disk.config 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.210 251996 INFO nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Deleting local config drive /var/lib/nova/instances/4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6/disk.config because it was imported into RBD.
Dec 06 07:42:03 compute-0 virtqemud[251613]: End of file while reading data: Input/output error
Dec 06 07:42:03 compute-0 virtqemud[251613]: End of file while reading data: Input/output error
Dec 06 07:42:03 compute-0 kernel: tap6bec9913-1f: entered promiscuous mode
Dec 06 07:42:03 compute-0 ovn_controller[147168]: 2025-12-06T07:42:03Z|00523|binding|INFO|Claiming lport 6bec9913-1f72-4458-a269-bab059df9fe1 for this chassis.
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.278 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:03 compute-0 NetworkManager[48965]: <info>  [1765006923.2793] manager: (tap6bec9913-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Dec 06 07:42:03 compute-0 ovn_controller[147168]: 2025-12-06T07:42:03Z|00524|binding|INFO|6bec9913-1f72-4458-a269-bab059df9fe1: Claiming fa:16:3e:a9:9d:eb 10.100.0.13
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.289 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:9d:eb 10.100.0.13'], port_security=['fa:16:3e:a9:9d:eb 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=6bec9913-1f72-4458-a269-bab059df9fe1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.291 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 6bec9913-1f72-4458-a269-bab059df9fe1 in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 bound to our chassis
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.292 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec 06 07:42:03 compute-0 ovn_controller[147168]: 2025-12-06T07:42:03Z|00525|binding|INFO|Setting lport 6bec9913-1f72-4458-a269-bab059df9fe1 ovn-installed in OVS
Dec 06 07:42:03 compute-0 ovn_controller[147168]: 2025-12-06T07:42:03Z|00526|binding|INFO|Setting lport 6bec9913-1f72-4458-a269-bab059df9fe1 up in Southbound
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.304 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.307 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:03 compute-0 ceph-mon[74339]: pgmap v2594: 305 pgs: 305 active+clean; 533 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 515 KiB/s rd, 4.3 MiB/s wr, 90 op/s
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.309 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8a52bcf8-e0df-427c-a040-0a77838cb58d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.310 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35a27638-31 in ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.313 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35a27638-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.313 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cafa125b-e23b-4b56-9339-49286e84c679]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.315 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c30da095-3181-49dc-8314-e713f34ad627]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 systemd-udevd[343741]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:42:03 compute-0 systemd-machined[212986]: New machine qemu-65-instance-0000008f.
Dec 06 07:42:03 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-0000008f.
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.333 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[319761b4-f693-41da-b699-45f6d1e8f124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 NetworkManager[48965]: <info>  [1765006923.3421] device (tap6bec9913-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:42:03 compute-0 NetworkManager[48965]: <info>  [1765006923.3450] device (tap6bec9913-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.351 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[02731eda-36e0-4e0d-bfb8-f9e34fe4e2a1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.377 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[0c8b2c48-88e7-42ea-9748-cdafc4d8f8b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 systemd-udevd[343744]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.384 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[36cfdc8e-2419-4927-8a6a-a9a23ecb06a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 NetworkManager[48965]: <info>  [1765006923.3860] manager: (tap35a27638-30): new Veth device (/org/freedesktop/NetworkManager/Devices/243)
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.414 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3f17c524-258b-41fe-b7dc-e26aab1781a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.417 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2e33d94a-60ed-4790-82e6-1c57d8ecea64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 NetworkManager[48965]: <info>  [1765006923.4457] device (tap35a27638-30): carrier: link connected
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.451 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ebaab960-2833-424c-98bd-115163f4b820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.466 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[63e2dffb-bea3-406a-973a-cf1a8fcc96a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719603, 'reachable_time': 41852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343779, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.482 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[43c67475-76b9-49ba-a419-a75b75649f67]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:c527'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 719603, 'tstamp': 719603}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343780, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.498 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4200ca6e-7a6e-49ac-9143-2de8d3d4433c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35a27638-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:c5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719603, 'reachable_time': 41852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343783, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 inspiring_sinoussi[343681]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:42:03 compute-0 inspiring_sinoussi[343681]: --> relative data size: 1.0
Dec 06 07:42:03 compute-0 inspiring_sinoussi[343681]: --> All data devices are unavailable
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.530 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8f733c19-6f33-4d0f-8c1e-9c80a672b0fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 systemd[1]: libpod-5167084a4f18dcc274ea3adba022e6e817bf84e1338cd361ff5912ac162c2fb5.scope: Deactivated successfully.
Dec 06 07:42:03 compute-0 podman[343664]: 2025-12-06 07:42:03.53861988 +0000 UTC m=+1.029110258 container died 5167084a4f18dcc274ea3adba022e6e817bf84e1338cd361ff5912ac162c2fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_sinoussi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 07:42:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-dab755119cef9b5c7a7b609860df5818e774575ca22c5f486eeb00b1151541c9-merged.mount: Deactivated successfully.
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.587 251996 DEBUG nova.compute.manager [req-46a2e5a6-f987-46fe-a442-6a74f83c29b0 req-4c3eb183-79c1-4434-9c3a-2d18a610fd5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Received event network-vif-plugged-6bec9913-1f72-4458-a269-bab059df9fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.588 251996 DEBUG oslo_concurrency.lockutils [req-46a2e5a6-f987-46fe-a442-6a74f83c29b0 req-4c3eb183-79c1-4434-9c3a-2d18a610fd5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.588 251996 DEBUG oslo_concurrency.lockutils [req-46a2e5a6-f987-46fe-a442-6a74f83c29b0 req-4c3eb183-79c1-4434-9c3a-2d18a610fd5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.589 251996 DEBUG oslo_concurrency.lockutils [req-46a2e5a6-f987-46fe-a442-6a74f83c29b0 req-4c3eb183-79c1-4434-9c3a-2d18a610fd5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.589 251996 DEBUG nova.compute.manager [req-46a2e5a6-f987-46fe-a442-6a74f83c29b0 req-4c3eb183-79c1-4434-9c3a-2d18a610fd5b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Processing event network-vif-plugged-6bec9913-1f72-4458-a269-bab059df9fe1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:42:03 compute-0 podman[343664]: 2025-12-06 07:42:03.597529137 +0000 UTC m=+1.088019525 container remove 5167084a4f18dcc274ea3adba022e6e817bf84e1338cd361ff5912ac162c2fb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.602 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[71879a29-6e9a-4762-82e5-696b5c114980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.604 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.604 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.605 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35a27638-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:03 compute-0 systemd[1]: libpod-conmon-5167084a4f18dcc274ea3adba022e6e817bf84e1338cd361ff5912ac162c2fb5.scope: Deactivated successfully.
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.607 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:03 compute-0 NetworkManager[48965]: <info>  [1765006923.6078] manager: (tap35a27638-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Dec 06 07:42:03 compute-0 kernel: tap35a27638-30: entered promiscuous mode
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.610 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.612 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35a27638-30, col_values=(('external_ids', {'iface-id': '5e371956-96bf-49df-b4a8-97154044dc54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.613 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:03 compute-0 ovn_controller[147168]: 2025-12-06T07:42:03Z|00527|binding|INFO|Releasing lport 5e371956-96bf-49df-b4a8-97154044dc54 from this chassis (sb_readonly=0)
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.614 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.616 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.617 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0adfe7c6-ad11-491f-b8a5-b4cb8325f5b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.618 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/35a27638-382c-4afb-83b0-edd6d7f4bca8.pid.haproxy
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 35a27638-382c-4afb-83b0-edd6d7f4bca8
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.619 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'env', 'PROCESS_TAG=haproxy-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35a27638-382c-4afb-83b0-edd6d7f4bca8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:42:03 compute-0 sudo[343538]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.629 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:03 compute-0 sudo[343822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:03 compute-0 sudo[343822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:03 compute-0 sudo[343822]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:03 compute-0 sudo[343868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:42:03 compute-0 sudo[343868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:03 compute-0 sudo[343868]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.804 251996 DEBUG nova.compute.manager [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.805 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006923.8048368, 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.805 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] VM Started (Lifecycle Event)
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.808 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.814 251996 INFO nova.virt.libvirt.driver [-] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Instance spawned successfully.
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.815 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:42:03 compute-0 sudo[343897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:03 compute-0 sudo[343897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:03 compute-0 sudo[343897]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.837 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.843 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.846 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.847 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.847 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.847 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.848 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.848 251996 DEBUG nova.virt.libvirt.driver [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.850 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.851 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:03.851 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:03 compute-0 sudo[343923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:42:03 compute-0 sudo[343923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.891 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.891 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006923.8055897, 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.891 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] VM Paused (Lifecycle Event)
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.930 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.939 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006923.80752, 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.939 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] VM Resumed (Lifecycle Event)
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.948 251996 INFO nova.compute.manager [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Took 7.99 seconds to spawn the instance on the hypervisor.
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.948 251996 DEBUG nova.compute.manager [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.960 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.963 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:42:03 compute-0 nova_compute[251992]: 2025-12-06 07:42:03.990 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:42:04 compute-0 nova_compute[251992]: 2025-12-06 07:42:04.023 251996 INFO nova.compute.manager [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Took 9.00 seconds to build instance.
Dec 06 07:42:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:04.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:04 compute-0 nova_compute[251992]: 2025-12-06 07:42:04.040 251996 DEBUG oslo_concurrency.lockutils [None req-e21abea0-4e71-4cf5-ad91-dc115fd20e4f 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:04 compute-0 podman[343970]: 2025-12-06 07:42:03.997866442 +0000 UTC m=+0.020738080 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:42:04 compute-0 podman[343970]: 2025-12-06 07:42:04.141780521 +0000 UTC m=+0.164652149 container create 72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:42:04 compute-0 systemd[1]: Started libpod-conmon-72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496.scope.
Dec 06 07:42:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2800245a484c7d359d891063685899f9a4faf9edcf181b22a97f432efd1c2c47/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:04 compute-0 podman[343970]: 2025-12-06 07:42:04.245395294 +0000 UTC m=+0.268266942 container init 72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 07:42:04 compute-0 podman[343970]: 2025-12-06 07:42:04.253583889 +0000 UTC m=+0.276455517 container start 72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:42:04 compute-0 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[344010]: [NOTICE]   (344014) : New worker (344018) forked
Dec 06 07:42:04 compute-0 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[344010]: [NOTICE]   (344014) : Loading success.
Dec 06 07:42:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Dec 06 07:42:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Dec 06 07:42:04 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Dec 06 07:42:04 compute-0 podman[344038]: 2025-12-06 07:42:04.406370581 +0000 UTC m=+0.052700737 container create 4a4fc21038e4b34705416e47e7b95c86a8f04aa7140b08f064918dd3f9b69af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:42:04 compute-0 systemd[1]: Started libpod-conmon-4a4fc21038e4b34705416e47e7b95c86a8f04aa7140b08f064918dd3f9b69af7.scope.
Dec 06 07:42:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:42:04 compute-0 podman[344038]: 2025-12-06 07:42:04.37866056 +0000 UTC m=+0.024990736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:42:04 compute-0 podman[344038]: 2025-12-06 07:42:04.485773099 +0000 UTC m=+0.132103275 container init 4a4fc21038e4b34705416e47e7b95c86a8f04aa7140b08f064918dd3f9b69af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 07:42:04 compute-0 podman[344038]: 2025-12-06 07:42:04.493541003 +0000 UTC m=+0.139871159 container start 4a4fc21038e4b34705416e47e7b95c86a8f04aa7140b08f064918dd3f9b69af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_panini, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 07:42:04 compute-0 podman[344038]: 2025-12-06 07:42:04.498804887 +0000 UTC m=+0.145135063 container attach 4a4fc21038e4b34705416e47e7b95c86a8f04aa7140b08f064918dd3f9b69af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_panini, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:42:04 compute-0 reverent_panini[344055]: 167 167
Dec 06 07:42:04 compute-0 systemd[1]: libpod-4a4fc21038e4b34705416e47e7b95c86a8f04aa7140b08f064918dd3f9b69af7.scope: Deactivated successfully.
Dec 06 07:42:04 compute-0 podman[344038]: 2025-12-06 07:42:04.501926213 +0000 UTC m=+0.148256369 container died 4a4fc21038e4b34705416e47e7b95c86a8f04aa7140b08f064918dd3f9b69af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_panini, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 07:42:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ba0bbfe6906f48d11572fdd7f7c6cb65f09b28847dd30007fbbecf34567a171-merged.mount: Deactivated successfully.
Dec 06 07:42:04 compute-0 podman[344038]: 2025-12-06 07:42:04.545518139 +0000 UTC m=+0.191848295 container remove 4a4fc21038e4b34705416e47e7b95c86a8f04aa7140b08f064918dd3f9b69af7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_panini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 07:42:04 compute-0 systemd[1]: libpod-conmon-4a4fc21038e4b34705416e47e7b95c86a8f04aa7140b08f064918dd3f9b69af7.scope: Deactivated successfully.
Dec 06 07:42:04 compute-0 podman[344079]: 2025-12-06 07:42:04.730563977 +0000 UTC m=+0.044972675 container create 6277ef80950aed77f268429f2a6c71f7d078171c55a8edfb02cb616c02e83966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 07:42:04 compute-0 systemd[1]: Started libpod-conmon-6277ef80950aed77f268429f2a6c71f7d078171c55a8edfb02cb616c02e83966.scope.
Dec 06 07:42:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:04.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c63a314c0b0d2011d90560c7781e3d67707994287358c8ac4ca1ab42159f9b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c63a314c0b0d2011d90560c7781e3d67707994287358c8ac4ca1ab42159f9b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c63a314c0b0d2011d90560c7781e3d67707994287358c8ac4ca1ab42159f9b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c63a314c0b0d2011d90560c7781e3d67707994287358c8ac4ca1ab42159f9b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:04 compute-0 podman[344079]: 2025-12-06 07:42:04.807365664 +0000 UTC m=+0.121774382 container init 6277ef80950aed77f268429f2a6c71f7d078171c55a8edfb02cb616c02e83966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cray, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:42:04 compute-0 podman[344079]: 2025-12-06 07:42:04.711455932 +0000 UTC m=+0.025864650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:42:04 compute-0 podman[344079]: 2025-12-06 07:42:04.816423722 +0000 UTC m=+0.130832410 container start 6277ef80950aed77f268429f2a6c71f7d078171c55a8edfb02cb616c02e83966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 07:42:04 compute-0 podman[344079]: 2025-12-06 07:42:04.820086443 +0000 UTC m=+0.134495161 container attach 6277ef80950aed77f268429f2a6c71f7d078171c55a8edfb02cb616c02e83966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cray, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 07:42:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 535 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 748 KiB/s rd, 4.6 MiB/s wr, 133 op/s
Dec 06 07:42:05 compute-0 ceph-mon[74339]: osdmap e328: 3 total, 3 up, 3 in
Dec 06 07:42:05 compute-0 ceph-mon[74339]: pgmap v2596: 305 pgs: 305 active+clean; 535 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 748 KiB/s rd, 4.6 MiB/s wr, 133 op/s
Dec 06 07:42:05 compute-0 nova_compute[251992]: 2025-12-06 07:42:05.511 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]: {
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:     "0": [
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:         {
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             "devices": [
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "/dev/loop3"
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             ],
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             "lv_name": "ceph_lv0",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             "lv_size": "7511998464",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             "name": "ceph_lv0",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             "tags": {
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "ceph.cluster_name": "ceph",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "ceph.crush_device_class": "",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "ceph.encrypted": "0",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "ceph.osd_id": "0",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "ceph.type": "block",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:                 "ceph.vdo": "0"
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             },
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             "type": "block",
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:             "vg_name": "ceph_vg0"
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:         }
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]:     ]
Dec 06 07:42:05 compute-0 ecstatic_cray[344095]: }
Dec 06 07:42:05 compute-0 systemd[1]: libpod-6277ef80950aed77f268429f2a6c71f7d078171c55a8edfb02cb616c02e83966.scope: Deactivated successfully.
Dec 06 07:42:05 compute-0 podman[344079]: 2025-12-06 07:42:05.678513797 +0000 UTC m=+0.992922495 container died 6277ef80950aed77f268429f2a6c71f7d078171c55a8edfb02cb616c02e83966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 07:42:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c63a314c0b0d2011d90560c7781e3d67707994287358c8ac4ca1ab42159f9b1-merged.mount: Deactivated successfully.
Dec 06 07:42:05 compute-0 nova_compute[251992]: 2025-12-06 07:42:05.734 251996 DEBUG nova.compute.manager [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Received event network-vif-plugged-6bec9913-1f72-4458-a269-bab059df9fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:05 compute-0 nova_compute[251992]: 2025-12-06 07:42:05.734 251996 DEBUG oslo_concurrency.lockutils [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:05 compute-0 nova_compute[251992]: 2025-12-06 07:42:05.735 251996 DEBUG oslo_concurrency.lockutils [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:05 compute-0 nova_compute[251992]: 2025-12-06 07:42:05.735 251996 DEBUG oslo_concurrency.lockutils [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:05 compute-0 nova_compute[251992]: 2025-12-06 07:42:05.735 251996 DEBUG nova.compute.manager [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] No waiting events found dispatching network-vif-plugged-6bec9913-1f72-4458-a269-bab059df9fe1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:42:05 compute-0 nova_compute[251992]: 2025-12-06 07:42:05.735 251996 WARNING nova.compute.manager [req-09ff50d3-3121-4a9b-b121-392cb2b604b8 req-9c85e857-fe52-4fe6-a6f7-83e602d194f3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Received unexpected event network-vif-plugged-6bec9913-1f72-4458-a269-bab059df9fe1 for instance with vm_state active and task_state None.
Dec 06 07:42:05 compute-0 podman[344079]: 2025-12-06 07:42:05.736244922 +0000 UTC m=+1.050653620 container remove 6277ef80950aed77f268429f2a6c71f7d078171c55a8edfb02cb616c02e83966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cray, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 07:42:05 compute-0 systemd[1]: libpod-conmon-6277ef80950aed77f268429f2a6c71f7d078171c55a8edfb02cb616c02e83966.scope: Deactivated successfully.
Dec 06 07:42:05 compute-0 sudo[343923]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:05 compute-0 sudo[344118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:05 compute-0 sudo[344118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:05 compute-0 sudo[344118]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:05 compute-0 sudo[344143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:42:05 compute-0 sudo[344143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:05 compute-0 sudo[344143]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:05 compute-0 sudo[344168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:05 compute-0 sudo[344168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:05 compute-0 sudo[344168]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:06 compute-0 sudo[344193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:42:06 compute-0 sudo[344193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:06.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:06 compute-0 podman[344259]: 2025-12-06 07:42:06.32588741 +0000 UTC m=+0.036250805 container create b2a7de221442d61a0fa9b2cfda040501125e7c32755a4b9c81e66b15cb1792a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:42:06 compute-0 systemd[1]: Started libpod-conmon-b2a7de221442d61a0fa9b2cfda040501125e7c32755a4b9c81e66b15cb1792a9.scope.
Dec 06 07:42:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:42:06 compute-0 podman[344259]: 2025-12-06 07:42:06.403933062 +0000 UTC m=+0.114296487 container init b2a7de221442d61a0fa9b2cfda040501125e7c32755a4b9c81e66b15cb1792a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_volhard, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 07:42:06 compute-0 podman[344259]: 2025-12-06 07:42:06.31054335 +0000 UTC m=+0.020906765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:42:06 compute-0 podman[344259]: 2025-12-06 07:42:06.410706649 +0000 UTC m=+0.121070044 container start b2a7de221442d61a0fa9b2cfda040501125e7c32755a4b9c81e66b15cb1792a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_volhard, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 07:42:06 compute-0 podman[344259]: 2025-12-06 07:42:06.415298944 +0000 UTC m=+0.125662359 container attach b2a7de221442d61a0fa9b2cfda040501125e7c32755a4b9c81e66b15cb1792a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_volhard, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:42:06 compute-0 amazing_volhard[344276]: 167 167
Dec 06 07:42:06 compute-0 systemd[1]: libpod-b2a7de221442d61a0fa9b2cfda040501125e7c32755a4b9c81e66b15cb1792a9.scope: Deactivated successfully.
Dec 06 07:42:06 compute-0 podman[344259]: 2025-12-06 07:42:06.432353712 +0000 UTC m=+0.142717107 container died b2a7de221442d61a0fa9b2cfda040501125e7c32755a4b9c81e66b15cb1792a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:42:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-43b85ec1be43aadb5f180cab4643e0e8b7728134ee908c11f7e0f26119cc4e2a-merged.mount: Deactivated successfully.
Dec 06 07:42:06 compute-0 podman[344259]: 2025-12-06 07:42:06.47272122 +0000 UTC m=+0.183084615 container remove b2a7de221442d61a0fa9b2cfda040501125e7c32755a4b9c81e66b15cb1792a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:42:06 compute-0 systemd[1]: libpod-conmon-b2a7de221442d61a0fa9b2cfda040501125e7c32755a4b9c81e66b15cb1792a9.scope: Deactivated successfully.
Dec 06 07:42:06 compute-0 podman[344299]: 2025-12-06 07:42:06.655177816 +0000 UTC m=+0.045915061 container create 01bf8240e1c0d862107f2458f6930fc8cd51612ced2530f4b57e74217285c0f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:42:06 compute-0 systemd[1]: Started libpod-conmon-01bf8240e1c0d862107f2458f6930fc8cd51612ced2530f4b57e74217285c0f9.scope.
Dec 06 07:42:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:42:06 compute-0 podman[344299]: 2025-12-06 07:42:06.632687739 +0000 UTC m=+0.023425004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69e4b40ef6813313bbdb5c3b5d1e57fb4c33579218d8f737fa359015fa2faac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69e4b40ef6813313bbdb5c3b5d1e57fb4c33579218d8f737fa359015fa2faac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69e4b40ef6813313bbdb5c3b5d1e57fb4c33579218d8f737fa359015fa2faac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e69e4b40ef6813313bbdb5c3b5d1e57fb4c33579218d8f737fa359015fa2faac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:42:06 compute-0 podman[344299]: 2025-12-06 07:42:06.757471653 +0000 UTC m=+0.148208918 container init 01bf8240e1c0d862107f2458f6930fc8cd51612ced2530f4b57e74217285c0f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 07:42:06 compute-0 podman[344299]: 2025-12-06 07:42:06.765950105 +0000 UTC m=+0.156687350 container start 01bf8240e1c0d862107f2458f6930fc8cd51612ced2530f4b57e74217285c0f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:42:06 compute-0 podman[344299]: 2025-12-06 07:42:06.769536784 +0000 UTC m=+0.160274029 container attach 01bf8240e1c0d862107f2458f6930fc8cd51612ced2530f4b57e74217285c0f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 07:42:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:06.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.5 MiB/s wr, 302 op/s
Dec 06 07:42:07 compute-0 nova_compute[251992]: 2025-12-06 07:42:07.081 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:07 compute-0 magical_jennings[344315]: {
Dec 06 07:42:07 compute-0 magical_jennings[344315]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:42:07 compute-0 magical_jennings[344315]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:42:07 compute-0 magical_jennings[344315]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:42:07 compute-0 magical_jennings[344315]:         "osd_id": 0,
Dec 06 07:42:07 compute-0 magical_jennings[344315]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:42:07 compute-0 magical_jennings[344315]:         "type": "bluestore"
Dec 06 07:42:07 compute-0 magical_jennings[344315]:     }
Dec 06 07:42:07 compute-0 magical_jennings[344315]: }
Dec 06 07:42:07 compute-0 systemd[1]: libpod-01bf8240e1c0d862107f2458f6930fc8cd51612ced2530f4b57e74217285c0f9.scope: Deactivated successfully.
Dec 06 07:42:07 compute-0 podman[344338]: 2025-12-06 07:42:07.670472145 +0000 UTC m=+0.022686624 container died 01bf8240e1c0d862107f2458f6930fc8cd51612ced2530f4b57e74217285c0f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:42:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e69e4b40ef6813313bbdb5c3b5d1e57fb4c33579218d8f737fa359015fa2faac-merged.mount: Deactivated successfully.
Dec 06 07:42:07 compute-0 podman[344338]: 2025-12-06 07:42:07.740587199 +0000 UTC m=+0.092801658 container remove 01bf8240e1c0d862107f2458f6930fc8cd51612ced2530f4b57e74217285c0f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Dec 06 07:42:07 compute-0 systemd[1]: libpod-conmon-01bf8240e1c0d862107f2458f6930fc8cd51612ced2530f4b57e74217285c0f9.scope: Deactivated successfully.
Dec 06 07:42:07 compute-0 sudo[344193]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:42:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:08.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:08.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:08 compute-0 ceph-mon[74339]: pgmap v2597: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.5 MiB/s wr, 302 op/s
Dec 06 07:42:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 531 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.7 MiB/s wr, 295 op/s
Dec 06 07:42:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:42:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d59b32fb-b05f-4275-935a-7ee41c45086e does not exist
Dec 06 07:42:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d6040973-a608-4c92-9dfa-a0a5cf0c283f does not exist
Dec 06 07:42:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 09ca29db-a0d9-4ea8-bc23-94aef8400ec6 does not exist
Dec 06 07:42:09 compute-0 sudo[344354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:09 compute-0 sudo[344354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:09 compute-0 sudo[344354]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:09 compute-0 sudo[344379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:42:09 compute-0 sudo[344379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:09 compute-0 sudo[344379]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:10.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:10 compute-0 nova_compute[251992]: 2025-12-06 07:42:10.512 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:10 compute-0 ceph-mon[74339]: pgmap v2598: 305 pgs: 305 active+clean; 531 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.7 MiB/s wr, 295 op/s
Dec 06 07:42:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3429869309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:42:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3429869309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:42:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:42:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:10.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 483 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 316 op/s
Dec 06 07:42:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/150849439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:11 compute-0 ceph-mon[74339]: pgmap v2599: 305 pgs: 305 active+clean; 483 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 316 op/s
Dec 06 07:42:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:12.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:12 compute-0 nova_compute[251992]: 2025-12-06 07:42:12.085 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:12.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 483 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 316 op/s
Dec 06 07:42:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:42:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:42:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:42:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:42:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:42:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:42:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:14.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:14 compute-0 ceph-mon[74339]: pgmap v2600: 305 pgs: 305 active+clean; 483 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.5 MiB/s wr, 316 op/s
Dec 06 07:42:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:14.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 461 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.3 MiB/s wr, 273 op/s
Dec 06 07:42:15 compute-0 nova_compute[251992]: 2025-12-06 07:42:15.516 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:16.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:16 compute-0 ceph-mon[74339]: pgmap v2601: 305 pgs: 305 active+clean; 461 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.3 MiB/s wr, 273 op/s
Dec 06 07:42:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/79433732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:16.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.4 MiB/s wr, 269 op/s
Dec 06 07:42:17 compute-0 nova_compute[251992]: 2025-12-06 07:42:17.088 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:17 compute-0 nova_compute[251992]: 2025-12-06 07:42:17.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:17 compute-0 nova_compute[251992]: 2025-12-06 07:42:17.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:17 compute-0 nova_compute[251992]: 2025-12-06 07:42:17.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:17 compute-0 nova_compute[251992]: 2025-12-06 07:42:17.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:17 compute-0 nova_compute[251992]: 2025-12-06 07:42:17.683 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:42:17 compute-0 nova_compute[251992]: 2025-12-06 07:42:17.684 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:17 compute-0 ceph-mon[74339]: pgmap v2602: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.4 MiB/s wr, 269 op/s
Dec 06 07:42:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/757135810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:18.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:42:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3917993197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.144 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.352 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.353 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.358 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.359 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:42:18
Dec 06 07:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'vms', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', '.rgw.root']
Dec 06 07:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.570 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.572 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3867MB free_disk=20.809673309326172GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.572 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.572 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.669 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c1ef1073-7c66-428c-a02b-e4daa3551d22 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.669 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.669 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.669 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:42:18 compute-0 nova_compute[251992]: 2025-12-06 07:42:18.732 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:18.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 459 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 153 op/s
Dec 06 07:42:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3917993197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:42:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305989615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:19 compute-0 nova_compute[251992]: 2025-12-06 07:42:19.245 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:19 compute-0 nova_compute[251992]: 2025-12-06 07:42:19.251 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:42:19 compute-0 nova_compute[251992]: 2025-12-06 07:42:19.266 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:42:19 compute-0 ovn_controller[147168]: 2025-12-06T07:42:19Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a9:9d:eb 10.100.0.13
Dec 06 07:42:19 compute-0 ovn_controller[147168]: 2025-12-06T07:42:19Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a9:9d:eb 10.100.0.13
Dec 06 07:42:19 compute-0 nova_compute[251992]: 2025-12-06 07:42:19.300 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:42:19 compute-0 nova_compute[251992]: 2025-12-06 07:42:19.301 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:19 compute-0 podman[344454]: 2025-12-06 07:42:19.445368707 +0000 UTC m=+0.100727224 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:42:19 compute-0 sudo[344480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:19 compute-0 sudo[344480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:19 compute-0 sudo[344480]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:19 compute-0 sudo[344505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:19 compute-0 sudo[344505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:19 compute-0 sudo[344505]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:20.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:20 compute-0 nova_compute[251992]: 2025-12-06 07:42:20.517 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:20 compute-0 ceph-mon[74339]: pgmap v2603: 305 pgs: 305 active+clean; 459 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 153 op/s
Dec 06 07:42:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4069215907' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/305989615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/363974003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2820750602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/733693291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:20.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 468 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 160 op/s
Dec 06 07:42:21 compute-0 nova_compute[251992]: 2025-12-06 07:42:21.300 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:21 compute-0 nova_compute[251992]: 2025-12-06 07:42:21.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:22.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:22 compute-0 nova_compute[251992]: 2025-12-06 07:42:22.091 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:22 compute-0 nova_compute[251992]: 2025-12-06 07:42:22.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:22.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 468 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 282 KiB/s rd, 3.5 MiB/s wr, 107 op/s
Dec 06 07:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:42:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:24.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:24.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.8 MiB/s wr, 123 op/s
Dec 06 07:42:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:42:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:42:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:42:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:42:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:42:25 compute-0 podman[344534]: 2025-12-06 07:42:25.403988128 +0000 UTC m=+0.061227741 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:42:25 compute-0 podman[344533]: 2025-12-06 07:42:25.42375792 +0000 UTC m=+0.082228438 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:42:25 compute-0 nova_compute[251992]: 2025-12-06 07:42:25.519 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:25 compute-0 ovn_controller[147168]: 2025-12-06T07:42:25Z|00528|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:42:25 compute-0 ovn_controller[147168]: 2025-12-06T07:42:25Z|00529|binding|INFO|Releasing lport 5e371956-96bf-49df-b4a8-97154044dc54 from this chassis (sb_readonly=0)
Dec 06 07:42:25 compute-0 nova_compute[251992]: 2025-12-06 07:42:25.556 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:25 compute-0 nova_compute[251992]: 2025-12-06 07:42:25.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:25 compute-0 nova_compute[251992]: 2025-12-06 07:42:25.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:42:25 compute-0 nova_compute[251992]: 2025-12-06 07:42:25.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:42:25 compute-0 ovn_controller[147168]: 2025-12-06T07:42:25Z|00530|binding|INFO|Releasing lport 058fee39-af19-4b00-b556-fb88bc823747 from this chassis (sb_readonly=0)
Dec 06 07:42:25 compute-0 ovn_controller[147168]: 2025-12-06T07:42:25Z|00531|binding|INFO|Releasing lport 5e371956-96bf-49df-b4a8-97154044dc54 from this chassis (sb_readonly=0)
Dec 06 07:42:25 compute-0 nova_compute[251992]: 2025-12-06 07:42:25.701 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:25 compute-0 nova_compute[251992]: 2025-12-06 07:42:25.933 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:42:25 compute-0 nova_compute[251992]: 2025-12-06 07:42:25.933 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:42:25 compute-0 nova_compute[251992]: 2025-12-06 07:42:25.934 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:42:25 compute-0 nova_compute[251992]: 2025-12-06 07:42:25.934 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c1ef1073-7c66-428c-a02b-e4daa3551d22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:26.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009627492206402383 of space, bias 1.0, pg target 2.888247661920715 quantized to 32 (current 32)
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021628687418574354 of space, bias 1.0, pg target 0.6445348850735158 quantized to 32 (current 32)
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Dec 06 07:42:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:26.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 476 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 3.3 MiB/s wr, 100 op/s
Dec 06 07:42:27 compute-0 nova_compute[251992]: 2025-12-06 07:42:27.093 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:27 compute-0 nova_compute[251992]: 2025-12-06 07:42:27.405 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Updating instance_info_cache with network_info: [{"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:42:27 compute-0 nova_compute[251992]: 2025-12-06 07:42:27.424 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-c1ef1073-7c66-428c-a02b-e4daa3551d22" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:42:27 compute-0 nova_compute[251992]: 2025-12-06 07:42:27.424 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:42:27 compute-0 nova_compute[251992]: 2025-12-06 07:42:27.424 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:27 compute-0 nova_compute[251992]: 2025-12-06 07:42:27.425 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:27 compute-0 nova_compute[251992]: 2025-12-06 07:42:27.425 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:27 compute-0 nova_compute[251992]: 2025-12-06 07:42:27.425 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:42:27 compute-0 nova_compute[251992]: 2025-12-06 07:42:27.426 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:42:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:28.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:28.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 476 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 2.0 MiB/s wr, 71 op/s
Dec 06 07:42:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:29 compute-0 nova_compute[251992]: 2025-12-06 07:42:29.481 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:29.480 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:42:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:29.482 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:42:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/559352823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:30 compute-0 ceph-mon[74339]: pgmap v2604: 305 pgs: 305 active+clean; 468 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 160 op/s
Dec 06 07:42:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:30.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:30 compute-0 nova_compute[251992]: 2025-12-06 07:42:30.521 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:30.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 314 KiB/s rd, 2.7 MiB/s wr, 67 op/s
Dec 06 07:42:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:32.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:32 compute-0 nova_compute[251992]: 2025-12-06 07:42:32.095 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:32.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 1.5 MiB/s wr, 31 op/s
Dec 06 07:42:33 compute-0 nova_compute[251992]: 2025-12-06 07:42:33.830 251996 DEBUG oslo_concurrency.lockutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "dff82ae9-39a8-4a01-8dbb-782b5329f293" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:33 compute-0 nova_compute[251992]: 2025-12-06 07:42:33.830 251996 DEBUG oslo_concurrency.lockutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "dff82ae9-39a8-4a01-8dbb-782b5329f293" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:33 compute-0 nova_compute[251992]: 2025-12-06 07:42:33.865 251996 DEBUG nova.compute.manager [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:42:33 compute-0 nova_compute[251992]: 2025-12-06 07:42:33.948 251996 DEBUG oslo_concurrency.lockutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:33 compute-0 nova_compute[251992]: 2025-12-06 07:42:33.949 251996 DEBUG oslo_concurrency.lockutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:33 compute-0 nova_compute[251992]: 2025-12-06 07:42:33.956 251996 DEBUG nova.virt.hardware [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:42:33 compute-0 nova_compute[251992]: 2025-12-06 07:42:33.956 251996 INFO nova.compute.claims [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:42:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:34.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.100 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/743800435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:34 compute-0 ceph-mon[74339]: pgmap v2605: 305 pgs: 305 active+clean; 468 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 282 KiB/s rd, 3.5 MiB/s wr, 107 op/s
Dec 06 07:42:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/946810008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:34 compute-0 ceph-mon[74339]: pgmap v2606: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.8 MiB/s wr, 123 op/s
Dec 06 07:42:34 compute-0 ceph-mon[74339]: pgmap v2607: 305 pgs: 305 active+clean; 476 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 3.3 MiB/s wr, 100 op/s
Dec 06 07:42:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3930922968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:34 compute-0 ceph-mon[74339]: pgmap v2608: 305 pgs: 305 active+clean; 476 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 2.0 MiB/s wr, 71 op/s
Dec 06 07:42:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.500 251996 DEBUG oslo_concurrency.lockutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "81c5c358-132f-4db7-acee-2c7454a0a4d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.501 251996 DEBUG oslo_concurrency.lockutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "81c5c358-132f-4db7-acee-2c7454a0a4d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.520 251996 DEBUG nova.compute.manager [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.594 251996 DEBUG oslo_concurrency.lockutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:42:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626641352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.643 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.649 251996 DEBUG nova.compute.provider_tree [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.669 251996 DEBUG nova.scheduler.client.report [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.696 251996 DEBUG oslo_concurrency.lockutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.697 251996 DEBUG nova.compute.manager [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.700 251996 DEBUG oslo_concurrency.lockutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.707 251996 DEBUG nova.virt.hardware [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.708 251996 INFO nova.compute.claims [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.766 251996 DEBUG nova.compute.manager [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.794 251996 INFO nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.812 251996 DEBUG nova.compute.manager [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:42:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:34.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.878 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.919 251996 DEBUG nova.compute.manager [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.922 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.923 251996 INFO nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Creating image(s)
Dec 06 07:42:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 1.5 MiB/s wr, 32 op/s
Dec 06 07:42:34 compute-0 nova_compute[251992]: 2025-12-06 07:42:34.972 251996 DEBUG nova.storage.rbd_utils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image dff82ae9-39a8-4a01-8dbb-782b5329f293_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.022 251996 DEBUG nova.storage.rbd_utils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image dff82ae9-39a8-4a01-8dbb-782b5329f293_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.055 251996 DEBUG nova.storage.rbd_utils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image dff82ae9-39a8-4a01-8dbb-782b5329f293_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.061 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.155 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.156 251996 DEBUG oslo_concurrency.lockutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.156 251996 DEBUG oslo_concurrency.lockutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.156 251996 DEBUG oslo_concurrency.lockutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.185 251996 DEBUG nova.storage.rbd_utils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image dff82ae9-39a8-4a01-8dbb-782b5329f293_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.189 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef dff82ae9-39a8-4a01-8dbb-782b5329f293_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:42:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/80076762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.339 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.346 251996 DEBUG nova.compute.provider_tree [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.373 251996 DEBUG nova.scheduler.client.report [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.402 251996 DEBUG oslo_concurrency.lockutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.403 251996 DEBUG nova.compute.manager [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.461 251996 DEBUG nova.compute.manager [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.475 251996 INFO nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.493 251996 DEBUG nova.compute.manager [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.523 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.620 251996 DEBUG nova.compute.manager [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.623 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.623 251996 INFO nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Creating image(s)
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.676 251996 DEBUG nova.storage.rbd_utils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.699210) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955699298, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1729, "num_deletes": 258, "total_data_size": 2742812, "memory_usage": 2784328, "flush_reason": "Manual Compaction"}
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.757 251996 DEBUG nova.storage.rbd_utils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955763456, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 1715795, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50312, "largest_seqno": 52040, "table_properties": {"data_size": 1709538, "index_size": 3203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 17153, "raw_average_key_size": 21, "raw_value_size": 1695498, "raw_average_value_size": 2151, "num_data_blocks": 140, "num_entries": 788, "num_filter_entries": 788, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006799, "oldest_key_time": 1765006799, "file_creation_time": 1765006955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 64323 microseconds, and 7924 cpu microseconds.
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.795 251996 DEBUG nova.storage.rbd_utils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.799 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.763531) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 1715795 bytes OK
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.763558) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.843900) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.843951) EVENT_LOG_v1 {"time_micros": 1765006955843941, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.843977) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2735371, prev total WAL file size 2753136, number of live WAL files 2.
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.845330) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373735' seq:72057594037927935, type:22 .. '6D6772737461740032303237' seq:0, type:0; will stop at (end)
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(1675KB)], [107(12MB)]
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955845413, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 14872697, "oldest_snapshot_seqno": -1}
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.896 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.897 251996 DEBUG oslo_concurrency.lockutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.898 251996 DEBUG oslo_concurrency.lockutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.898 251996 DEBUG oslo_concurrency.lockutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.927 251996 DEBUG nova.storage.rbd_utils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:35 compute-0 nova_compute[251992]: 2025-12-06 07:42:35.931 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 8871 keys, 11974916 bytes, temperature: kUnknown
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955954461, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 11974916, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11917499, "index_size": 34137, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 229344, "raw_average_key_size": 25, "raw_value_size": 11761479, "raw_average_value_size": 1325, "num_data_blocks": 1340, "num_entries": 8871, "num_filter_entries": 8871, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765006955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.954921) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 11974916 bytes
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.956667) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.3 rd, 109.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 12.5 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(15.6) write-amplify(7.0) OK, records in: 9341, records dropped: 470 output_compression: NoCompression
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.956691) EVENT_LOG_v1 {"time_micros": 1765006955956681, "job": 64, "event": "compaction_finished", "compaction_time_micros": 109135, "compaction_time_cpu_micros": 37903, "output_level": 6, "num_output_files": 1, "total_output_size": 11974916, "num_input_records": 9341, "num_output_records": 8871, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955957049, "job": 64, "event": "table_file_deletion", "file_number": 109}
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765006955959588, "job": 64, "event": "table_file_deletion", "file_number": 107}
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.845152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.959669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.959674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.959676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.959677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:42:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:42:35.959679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:42:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:36.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:36 compute-0 ceph-mon[74339]: pgmap v2609: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 314 KiB/s rd, 2.7 MiB/s wr, 67 op/s
Dec 06 07:42:36 compute-0 ceph-mon[74339]: pgmap v2610: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 1.5 MiB/s wr, 31 op/s
Dec 06 07:42:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1626641352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:36 compute-0 ceph-mon[74339]: pgmap v2611: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 1.5 MiB/s wr, 32 op/s
Dec 06 07:42:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:36.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 MiB/s wr, 32 op/s
Dec 06 07:42:37 compute-0 nova_compute[251992]: 2025-12-06 07:42:37.097 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:37.485 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:42:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:38.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:42:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3098859861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/80076762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1153323595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:38 compute-0 ceph-mon[74339]: pgmap v2612: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 MiB/s wr, 32 op/s
Dec 06 07:42:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:38.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 523 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 06 07:42:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:39 compute-0 sudo[344811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:39 compute-0 sudo[344811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:39 compute-0 sudo[344811]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:40 compute-0 sudo[344836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:42:40 compute-0 sudo[344836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:42:40 compute-0 sudo[344836]: pam_unix(sudo:session): session closed for user root
Dec 06 07:42:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:40.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:40 compute-0 ceph-mon[74339]: pgmap v2613: 305 pgs: 305 active+clean; 523 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 06 07:42:40 compute-0 nova_compute[251992]: 2025-12-06 07:42:40.555 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:42:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:40.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:42:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 565 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 3.5 MiB/s wr, 57 op/s
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.085 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.214 251996 DEBUG nova.storage.rbd_utils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] resizing rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.322 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef dff82ae9-39a8-4a01-8dbb-782b5329f293_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.365 251996 DEBUG nova.objects.instance [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'migration_context' on Instance uuid 81c5c358-132f-4db7-acee-2c7454a0a4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.413 251996 DEBUG nova.storage.rbd_utils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] resizing rbd image dff82ae9-39a8-4a01-8dbb-782b5329f293_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.457 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.458 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Ensure instance console log exists: /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.458 251996 DEBUG oslo_concurrency.lockutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.459 251996 DEBUG oslo_concurrency.lockutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.459 251996 DEBUG oslo_concurrency.lockutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.460 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.465 251996 WARNING nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.471 251996 DEBUG nova.virt.libvirt.host [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.472 251996 DEBUG nova.virt.libvirt.host [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.477 251996 DEBUG nova.virt.libvirt.host [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.478 251996 DEBUG nova.virt.libvirt.host [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.479 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.479 251996 DEBUG nova.virt.hardware [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.480 251996 DEBUG nova.virt.hardware [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.480 251996 DEBUG nova.virt.hardware [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.480 251996 DEBUG nova.virt.hardware [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.480 251996 DEBUG nova.virt.hardware [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.480 251996 DEBUG nova.virt.hardware [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.481 251996 DEBUG nova.virt.hardware [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.481 251996 DEBUG nova.virt.hardware [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.481 251996 DEBUG nova.virt.hardware [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.481 251996 DEBUG nova.virt.hardware [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.481 251996 DEBUG nova.virt.hardware [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.485 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.623 251996 DEBUG nova.objects.instance [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'migration_context' on Instance uuid dff82ae9-39a8-4a01-8dbb-782b5329f293 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.638 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.638 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Ensure instance console log exists: /var/lib/nova/instances/dff82ae9-39a8-4a01-8dbb-782b5329f293/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.638 251996 DEBUG oslo_concurrency.lockutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.639 251996 DEBUG oslo_concurrency.lockutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.639 251996 DEBUG oslo_concurrency.lockutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.640 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.645 251996 WARNING nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.649 251996 DEBUG nova.virt.libvirt.host [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.650 251996 DEBUG nova.virt.libvirt.host [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.653 251996 DEBUG nova.virt.libvirt.host [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.653 251996 DEBUG nova.virt.libvirt.host [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.656 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.656 251996 DEBUG nova.virt.hardware [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.656 251996 DEBUG nova.virt.hardware [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.657 251996 DEBUG nova.virt.hardware [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.657 251996 DEBUG nova.virt.hardware [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.657 251996 DEBUG nova.virt.hardware [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.657 251996 DEBUG nova.virt.hardware [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.657 251996 DEBUG nova.virt.hardware [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.657 251996 DEBUG nova.virt.hardware [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.657 251996 DEBUG nova.virt.hardware [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.658 251996 DEBUG nova.virt.hardware [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.658 251996 DEBUG nova.virt.hardware [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.660 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:42:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337735335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.963 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:41 compute-0 nova_compute[251992]: 2025-12-06 07:42:41.996 251996 DEBUG nova.storage.rbd_utils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.001 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:42.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.101 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:42:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713282379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.125 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.154 251996 DEBUG nova.storage.rbd_utils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image dff82ae9-39a8-4a01-8dbb-782b5329f293_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.161 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:42 compute-0 ceph-mon[74339]: pgmap v2614: 305 pgs: 305 active+clean; 565 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 3.5 MiB/s wr, 57 op/s
Dec 06 07:42:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2337735335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:42:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204003624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.503 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.507 251996 DEBUG nova.objects.instance [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'pci_devices' on Instance uuid 81c5c358-132f-4db7-acee-2c7454a0a4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.526 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <uuid>81c5c358-132f-4db7-acee-2c7454a0a4d3</uuid>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <name>instance-00000094</name>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerShowV247Test-server-1195711009</nova:name>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:42:41</nova:creationTime>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:user uuid="8b7e1fb80daa458699ec19892dc9a92c">tempest-ServerShowV247Test-375404285-project-member</nova:user>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:project uuid="6061a73c34904608870b68e204d01c42">tempest-ServerShowV247Test-375404285</nova:project>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <system>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <entry name="serial">81c5c358-132f-4db7-acee-2c7454a0a4d3</entry>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <entry name="uuid">81c5c358-132f-4db7-acee-2c7454a0a4d3</entry>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </system>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <os>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </os>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <features>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </features>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/81c5c358-132f-4db7-acee-2c7454a0a4d3_disk">
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       </source>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/81c5c358-132f-4db7-acee-2c7454a0a4d3_disk.config">
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       </source>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/console.log" append="off"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <video>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </video>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:42:42 compute-0 nova_compute[251992]: </domain>
Dec 06 07:42:42 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.596 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.596 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.597 251996 INFO nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Using config drive
Dec 06 07:42:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:42:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1277896716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.630 251996 DEBUG nova.storage.rbd_utils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.639 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.641 251996 DEBUG nova.objects.instance [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'pci_devices' on Instance uuid dff82ae9-39a8-4a01-8dbb-782b5329f293 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.695 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <uuid>dff82ae9-39a8-4a01-8dbb-782b5329f293</uuid>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <name>instance-00000093</name>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerShowV247Test-server-1269525602</nova:name>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:42:41</nova:creationTime>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:user uuid="8b7e1fb80daa458699ec19892dc9a92c">tempest-ServerShowV247Test-375404285-project-member</nova:user>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <nova:project uuid="6061a73c34904608870b68e204d01c42">tempest-ServerShowV247Test-375404285</nova:project>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <system>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <entry name="serial">dff82ae9-39a8-4a01-8dbb-782b5329f293</entry>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <entry name="uuid">dff82ae9-39a8-4a01-8dbb-782b5329f293</entry>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </system>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <os>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </os>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <features>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </features>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/dff82ae9-39a8-4a01-8dbb-782b5329f293_disk">
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       </source>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/dff82ae9-39a8-4a01-8dbb-782b5329f293_disk.config">
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       </source>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:42:42 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/dff82ae9-39a8-4a01-8dbb-782b5329f293/console.log" append="off"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <video>
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </video>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:42:42 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:42:42 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:42:42 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:42:42 compute-0 nova_compute[251992]: </domain>
Dec 06 07:42:42 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.771 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.772 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.772 251996 INFO nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Using config drive
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.795 251996 DEBUG nova.storage.rbd_utils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image dff82ae9-39a8-4a01-8dbb-782b5329f293_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:42.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.891 251996 INFO nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Creating config drive at /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/disk.config
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.895 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2u49gpn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.953 251996 INFO nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Creating config drive at /var/lib/nova/instances/dff82ae9-39a8-4a01-8dbb-782b5329f293/disk.config
Dec 06 07:42:42 compute-0 nova_compute[251992]: 2025-12-06 07:42:42.960 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dff82ae9-39a8-4a01-8dbb-782b5329f293/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmo9i0z5o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 565 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.3 MiB/s wr, 42 op/s
Dec 06 07:42:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:42:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:42:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:42:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:42:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:42:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.025 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2u49gpn" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.055 251996 DEBUG nova.storage.rbd_utils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.058 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/disk.config 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.096 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dff82ae9-39a8-4a01-8dbb-782b5329f293/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmo9i0z5o" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.130 251996 DEBUG nova.storage.rbd_utils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image dff82ae9-39a8-4a01-8dbb-782b5329f293_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.134 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dff82ae9-39a8-4a01-8dbb-782b5329f293/disk.config dff82ae9-39a8-4a01-8dbb-782b5329f293_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.208 251996 DEBUG oslo_concurrency.processutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/disk.config 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.209 251996 INFO nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Deleting local config drive /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/disk.config because it was imported into RBD.
Dec 06 07:42:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/713282379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4204003624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/35790465' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:42:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/35790465' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:42:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1277896716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:42:43 compute-0 ceph-mon[74339]: pgmap v2615: 305 pgs: 305 active+clean; 565 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 2.3 MiB/s wr, 42 op/s
Dec 06 07:42:43 compute-0 systemd-machined[212986]: New machine qemu-66-instance-00000094.
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.293 251996 DEBUG oslo_concurrency.processutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dff82ae9-39a8-4a01-8dbb-782b5329f293/disk.config dff82ae9-39a8-4a01-8dbb-782b5329f293_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.294 251996 INFO nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Deleting local config drive /var/lib/nova/instances/dff82ae9-39a8-4a01-8dbb-782b5329f293/disk.config because it was imported into RBD.
Dec 06 07:42:43 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-00000094.
Dec 06 07:42:43 compute-0 systemd-machined[212986]: New machine qemu-67-instance-00000093.
Dec 06 07:42:43 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-00000093.
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.866 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006963.866055, dff82ae9-39a8-4a01-8dbb-782b5329f293 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.867 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] VM Resumed (Lifecycle Event)
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.870 251996 DEBUG nova.compute.manager [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.870 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.875 251996 INFO nova.virt.libvirt.driver [-] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Instance spawned successfully.
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.875 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.900 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.910 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.911 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.912 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.913 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.914 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.914 251996 DEBUG nova.virt.libvirt.driver [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.920 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.957 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.957 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006963.8663232, dff82ae9-39a8-4a01-8dbb-782b5329f293 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.957 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] VM Started (Lifecycle Event)
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.980 251996 INFO nova.compute.manager [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Took 9.06 seconds to spawn the instance on the hypervisor.
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.980 251996 DEBUG nova.compute.manager [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.982 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:43 compute-0 nova_compute[251992]: 2025-12-06 07:42:43.987 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.021 251996 DEBUG nova.compute.manager [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.021 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.023 251996 INFO nova.virt.libvirt.driver [-] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Instance spawned successfully.
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.023 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.025 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.025 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006964.0196888, 81c5c358-132f-4db7-acee-2c7454a0a4d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.026 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] VM Resumed (Lifecycle Event)
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.050 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.052 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.053 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.053 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.053 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.054 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.054 251996 DEBUG nova.virt.libvirt.driver [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.061 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.063 251996 INFO nova.compute.manager [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Took 10.14 seconds to build instance.
Dec 06 07:42:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:44.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.091 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.092 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006964.0204678, 81c5c358-132f-4db7-acee-2c7454a0a4d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.092 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] VM Started (Lifecycle Event)
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.093 251996 DEBUG oslo_concurrency.lockutils [None req-dfbdcee5-228e-4d06-97b1-579fdc74c188 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "dff82ae9-39a8-4a01-8dbb-782b5329f293" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.120 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.127 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.136 251996 INFO nova.compute.manager [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Took 8.52 seconds to spawn the instance on the hypervisor.
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.136 251996 DEBUG nova.compute.manager [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.164 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.183 251996 DEBUG oslo_concurrency.lockutils [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "c1ef1073-7c66-428c-a02b-e4daa3551d22" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.183 251996 DEBUG oslo_concurrency.lockutils [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.183 251996 DEBUG oslo_concurrency.lockutils [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.184 251996 DEBUG oslo_concurrency.lockutils [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.184 251996 DEBUG oslo_concurrency.lockutils [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.185 251996 INFO nova.compute.manager [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Terminating instance
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.186 251996 DEBUG nova.compute.manager [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.197 251996 INFO nova.compute.manager [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Took 9.62 seconds to build instance.
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.217 251996 DEBUG oslo_concurrency.lockutils [None req-4ed60d03-e4e0-4e66-a490-c8ace4cb9679 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "81c5c358-132f-4db7-acee-2c7454a0a4d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:44 compute-0 kernel: tap16f011c3-09 (unregistering): left promiscuous mode
Dec 06 07:42:44 compute-0 NetworkManager[48965]: <info>  [1765006964.3505] device (tap16f011c3-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:42:44 compute-0 ovn_controller[147168]: 2025-12-06T07:42:44Z|00532|binding|INFO|Releasing lport 16f011c3-09ff-46c7-b7cc-7ad9cdaac07d from this chassis (sb_readonly=0)
Dec 06 07:42:44 compute-0 ovn_controller[147168]: 2025-12-06T07:42:44Z|00533|binding|INFO|Setting lport 16f011c3-09ff-46c7-b7cc-7ad9cdaac07d down in Southbound
Dec 06 07:42:44 compute-0 ovn_controller[147168]: 2025-12-06T07:42:44Z|00534|binding|INFO|Removing iface tap16f011c3-09 ovn-installed in OVS
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.360 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.366 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:cc:7f 10.100.0.10'], port_security=['fa:16:3e:36:cc:7f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'c1ef1073-7c66-428c-a02b-e4daa3551d22', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3beede49-1cbb-425c-b1af-82f43dc57163', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b10aa03d68eb4d4799d53538521cc364', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9f7f4f14-4f63-443a-af4a-951f8b77b0f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f4f51045-db64-4b9b-8a34-a3c617e616e7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=16f011c3-09ff-46c7-b7cc-7ad9cdaac07d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.367 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 16f011c3-09ff-46c7-b7cc-7ad9cdaac07d in datapath 3beede49-1cbb-425c-b1af-82f43dc57163 unbound from our chassis
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.370 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3beede49-1cbb-425c-b1af-82f43dc57163, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.374 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[eccf512b-b88f-449e-b211-e2779dcbeabf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.376 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 namespace which is not needed anymore
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.391 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:44 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000080.scope: Deactivated successfully.
Dec 06 07:42:44 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000080.scope: Consumed 31.470s CPU time.
Dec 06 07:42:44 compute-0 systemd-machined[212986]: Machine qemu-56-instance-00000080 terminated.
Dec 06 07:42:44 compute-0 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[333119]: [NOTICE]   (333125) : haproxy version is 2.8.14-c23fe91
Dec 06 07:42:44 compute-0 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[333119]: [NOTICE]   (333125) : path to executable is /usr/sbin/haproxy
Dec 06 07:42:44 compute-0 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[333119]: [ALERT]    (333125) : Current worker (333128) exited with code 143 (Terminated)
Dec 06 07:42:44 compute-0 neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163[333119]: [WARNING]  (333125) : All workers exited. Exiting... (0)
Dec 06 07:42:44 compute-0 systemd[1]: libpod-9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a.scope: Deactivated successfully.
Dec 06 07:42:44 compute-0 podman[345382]: 2025-12-06 07:42:44.512802613 +0000 UTC m=+0.046235260 container died 9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a-userdata-shm.mount: Deactivated successfully.
Dec 06 07:42:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a104929a99b62ea8c0cb82ecb3fee48c6e0ac246ae6f8d7d095968df1a85716f-merged.mount: Deactivated successfully.
Dec 06 07:42:44 compute-0 podman[345382]: 2025-12-06 07:42:44.555762682 +0000 UTC m=+0.089195329 container cleanup 9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:42:44 compute-0 systemd[1]: libpod-conmon-9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a.scope: Deactivated successfully.
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.609 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.612 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:44 compute-0 podman[345409]: 2025-12-06 07:42:44.617793073 +0000 UTC m=+0.039594737 container remove 9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.622 251996 INFO nova.virt.libvirt.driver [-] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Instance destroyed successfully.
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.623 251996 DEBUG nova.objects.instance [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lazy-loading 'resources' on Instance uuid c1ef1073-7c66-428c-a02b-e4daa3551d22 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.627 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6e53e6e9-c66c-4732-9307-8c9a403bd0b6]: (4, ('Sat Dec  6 07:42:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 (9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a)\n9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a\nSat Dec  6 07:42:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 (9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a)\n9984ca8049f0f233d982c09988cf4dbd0ee2f0cfbac0fdf6ece92f8e48925c3a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.630 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f91791eb-d9a0-4b71-a8a8-42a310d9b345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.631 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3beede49-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:44 compute-0 kernel: tap3beede49-10: left promiscuous mode
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.632 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.639 251996 DEBUG nova.virt.libvirt.vif [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:34:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-144167502',display_name='tempest-ServerActionsTestOtherB-server-144167502',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-144167502',id=128,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:35:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b10aa03d68eb4d4799d53538521cc364',ramdisk_id='',reservation_id='r-x1yakn5i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-874907570',owner_user_name='tempest-ServerActionsTestOtherB-874907570-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:35:16Z,user_data=None,user_id='a70f6c3c5e2c402bb6fa0e0507e9b6dc',uuid=c1ef1073-7c66-428c-a02b-e4daa3551d22,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.639 251996 DEBUG nova.network.os_vif_util [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converting VIF {"id": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "address": "fa:16:3e:36:cc:7f", "network": {"id": "3beede49-1cbb-425c-b1af-82f43dc57163", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-619240463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b10aa03d68eb4d4799d53538521cc364", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap16f011c3-09", "ovs_interfaceid": "16f011c3-09ff-46c7-b7cc-7ad9cdaac07d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.640 251996 DEBUG nova.network.os_vif_util [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:cc:7f,bridge_name='br-int',has_traffic_filtering=True,id=16f011c3-09ff-46c7-b7cc-7ad9cdaac07d,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16f011c3-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.640 251996 DEBUG os_vif [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:cc:7f,bridge_name='br-int',has_traffic_filtering=True,id=16f011c3-09ff-46c7-b7cc-7ad9cdaac07d,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16f011c3-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.642 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.643 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16f011c3-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.644 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.650 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.658 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.659 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.662 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[43b24cab-f5e0-47e3-a46d-99c6fd4e795a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.663 251996 INFO os_vif [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:cc:7f,bridge_name='br-int',has_traffic_filtering=True,id=16f011c3-09ff-46c7-b7cc-7ad9cdaac07d,network=Network(3beede49-1cbb-425c-b1af-82f43dc57163),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap16f011c3-09')
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.678 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ee8ee8-9509-48d7-8ecf-0b4e38b10bd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.680 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[761784b0-6d6e-45e5-8dc0-6babd04a168d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.697 251996 DEBUG nova.compute.manager [req-9e105304-f030-46a8-9a8f-c7b546002c85 req-50915a4a-17de-4076-8d8f-17795f620d2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Received event network-vif-unplugged-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.698 251996 DEBUG oslo_concurrency.lockutils [req-9e105304-f030-46a8-9a8f-c7b546002c85 req-50915a4a-17de-4076-8d8f-17795f620d2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.698 251996 DEBUG oslo_concurrency.lockutils [req-9e105304-f030-46a8-9a8f-c7b546002c85 req-50915a4a-17de-4076-8d8f-17795f620d2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.698 251996 DEBUG oslo_concurrency.lockutils [req-9e105304-f030-46a8-9a8f-c7b546002c85 req-50915a4a-17de-4076-8d8f-17795f620d2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.698 251996 DEBUG nova.compute.manager [req-9e105304-f030-46a8-9a8f-c7b546002c85 req-50915a4a-17de-4076-8d8f-17795f620d2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] No waiting events found dispatching network-vif-unplugged-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:42:44 compute-0 nova_compute[251992]: 2025-12-06 07:42:44.699 251996 DEBUG nova.compute.manager [req-9e105304-f030-46a8-9a8f-c7b546002c85 req-50915a4a-17de-4076-8d8f-17795f620d2d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Received event network-vif-unplugged-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.704 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4f72e8d9-6cd8-46ee-8469-f6896cbd8d3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678857, 'reachable_time': 36395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345450, 'error': None, 'target': 'ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d3beede49\x2d1cbb\x2d425c\x2db1af\x2d82f43dc57163.mount: Deactivated successfully.
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.713 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3beede49-1cbb-425c-b1af-82f43dc57163 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:42:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:44.714 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[76d60fcb-cf5b-4a83-a51e-117d5a12ee3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:44.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 607 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 763 KiB/s rd, 4.0 MiB/s wr, 103 op/s
Dec 06 07:42:45 compute-0 ceph-mon[74339]: pgmap v2616: 305 pgs: 305 active+clean; 607 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 763 KiB/s rd, 4.0 MiB/s wr, 103 op/s
Dec 06 07:42:45 compute-0 nova_compute[251992]: 2025-12-06 07:42:45.557 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:46 compute-0 nova_compute[251992]: 2025-12-06 07:42:46.077 251996 INFO nova.compute.manager [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Rebuilding instance
Dec 06 07:42:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:46.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:46 compute-0 nova_compute[251992]: 2025-12-06 07:42:46.333 251996 DEBUG nova.objects.instance [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 81c5c358-132f-4db7-acee-2c7454a0a4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:46 compute-0 nova_compute[251992]: 2025-12-06 07:42:46.358 251996 DEBUG nova.compute.manager [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:46 compute-0 nova_compute[251992]: 2025-12-06 07:42:46.410 251996 DEBUG nova.objects.instance [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'pci_requests' on Instance uuid 81c5c358-132f-4db7-acee-2c7454a0a4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:46 compute-0 nova_compute[251992]: 2025-12-06 07:42:46.428 251996 DEBUG nova.objects.instance [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'pci_devices' on Instance uuid 81c5c358-132f-4db7-acee-2c7454a0a4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:46 compute-0 nova_compute[251992]: 2025-12-06 07:42:46.443 251996 DEBUG nova.objects.instance [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'resources' on Instance uuid 81c5c358-132f-4db7-acee-2c7454a0a4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:46 compute-0 nova_compute[251992]: 2025-12-06 07:42:46.453 251996 DEBUG nova.objects.instance [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'migration_context' on Instance uuid 81c5c358-132f-4db7-acee-2c7454a0a4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:46 compute-0 nova_compute[251992]: 2025-12-06 07:42:46.464 251996 DEBUG nova.objects.instance [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:42:46 compute-0 nova_compute[251992]: 2025-12-06 07:42:46.467 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:42:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:46.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 623 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.3 MiB/s wr, 289 op/s
Dec 06 07:42:47 compute-0 nova_compute[251992]: 2025-12-06 07:42:47.002 251996 DEBUG nova.compute.manager [req-c6fa9eee-7b1d-4c1d-b8c0-7908692b8de9 req-7a7fcbd5-2a62-476e-b300-ca75be475f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Received event network-vif-plugged-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:47 compute-0 nova_compute[251992]: 2025-12-06 07:42:47.002 251996 DEBUG oslo_concurrency.lockutils [req-c6fa9eee-7b1d-4c1d-b8c0-7908692b8de9 req-7a7fcbd5-2a62-476e-b300-ca75be475f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:47 compute-0 nova_compute[251992]: 2025-12-06 07:42:47.003 251996 DEBUG oslo_concurrency.lockutils [req-c6fa9eee-7b1d-4c1d-b8c0-7908692b8de9 req-7a7fcbd5-2a62-476e-b300-ca75be475f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:47 compute-0 nova_compute[251992]: 2025-12-06 07:42:47.004 251996 DEBUG oslo_concurrency.lockutils [req-c6fa9eee-7b1d-4c1d-b8c0-7908692b8de9 req-7a7fcbd5-2a62-476e-b300-ca75be475f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:47 compute-0 nova_compute[251992]: 2025-12-06 07:42:47.004 251996 DEBUG nova.compute.manager [req-c6fa9eee-7b1d-4c1d-b8c0-7908692b8de9 req-7a7fcbd5-2a62-476e-b300-ca75be475f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] No waiting events found dispatching network-vif-plugged-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:42:47 compute-0 nova_compute[251992]: 2025-12-06 07:42:47.005 251996 WARNING nova.compute.manager [req-c6fa9eee-7b1d-4c1d-b8c0-7908692b8de9 req-7a7fcbd5-2a62-476e-b300-ca75be475f78 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Received unexpected event network-vif-plugged-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d for instance with vm_state active and task_state deleting.
Dec 06 07:42:47 compute-0 nova_compute[251992]: 2025-12-06 07:42:47.009 251996 INFO nova.virt.libvirt.driver [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Deleting instance files /var/lib/nova/instances/c1ef1073-7c66-428c-a02b-e4daa3551d22_del
Dec 06 07:42:47 compute-0 nova_compute[251992]: 2025-12-06 07:42:47.010 251996 INFO nova.virt.libvirt.driver [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Deletion of /var/lib/nova/instances/c1ef1073-7c66-428c-a02b-e4daa3551d22_del complete
Dec 06 07:42:47 compute-0 nova_compute[251992]: 2025-12-06 07:42:47.058 251996 INFO nova.compute.manager [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Took 2.87 seconds to destroy the instance on the hypervisor.
Dec 06 07:42:47 compute-0 nova_compute[251992]: 2025-12-06 07:42:47.058 251996 DEBUG oslo.service.loopingcall [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:42:47 compute-0 nova_compute[251992]: 2025-12-06 07:42:47.059 251996 DEBUG nova.compute.manager [-] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:42:47 compute-0 nova_compute[251992]: 2025-12-06 07:42:47.059 251996 DEBUG nova.network.neutron [-] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:42:47 compute-0 ceph-mon[74339]: pgmap v2617: 305 pgs: 305 active+clean; 623 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.3 MiB/s wr, 289 op/s
Dec 06 07:42:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:48.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:48 compute-0 nova_compute[251992]: 2025-12-06 07:42:48.441 251996 DEBUG nova.network.neutron [-] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:42:48 compute-0 nova_compute[251992]: 2025-12-06 07:42:48.456 251996 INFO nova.compute.manager [-] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Took 1.40 seconds to deallocate network for instance.
Dec 06 07:42:48 compute-0 nova_compute[251992]: 2025-12-06 07:42:48.497 251996 DEBUG oslo_concurrency.lockutils [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:48 compute-0 nova_compute[251992]: 2025-12-06 07:42:48.497 251996 DEBUG oslo_concurrency.lockutils [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:48 compute-0 nova_compute[251992]: 2025-12-06 07:42:48.603 251996 DEBUG oslo_concurrency.processutils [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:48 compute-0 nova_compute[251992]: 2025-12-06 07:42:48.682 251996 DEBUG nova.compute.manager [req-75e1e7f0-877e-474f-8802-6ee6c4580f04 req-62c35658-ec95-4f3a-963c-dff15d3d1367 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Received event network-vif-deleted-16f011c3-09ff-46c7-b7cc-7ad9cdaac07d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:42:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:48.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:42:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 585 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 4.3 MiB/s wr, 374 op/s
Dec 06 07:42:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:42:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2608166669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.060 251996 DEBUG oslo_concurrency.processutils [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.066 251996 DEBUG nova.compute.provider_tree [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.084 251996 DEBUG nova.scheduler.client.report [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.105 251996 DEBUG oslo_concurrency.lockutils [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.124 251996 INFO nova.scheduler.client.report [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Deleted allocations for instance c1ef1073-7c66-428c-a02b-e4daa3551d22
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.182 251996 DEBUG oslo_concurrency.lockutils [None req-dee7e60d-f32f-44e4-9685-d57f2753adbc a70f6c3c5e2c402bb6fa0e0507e9b6dc b10aa03d68eb4d4799d53538521cc364 - - default default] Lock "c1ef1073-7c66-428c-a02b-e4daa3551d22" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.645 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.983 251996 DEBUG oslo_concurrency.lockutils [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.984 251996 DEBUG oslo_concurrency.lockutils [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.984 251996 DEBUG oslo_concurrency.lockutils [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.984 251996 DEBUG oslo_concurrency.lockutils [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.985 251996 DEBUG oslo_concurrency.lockutils [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.986 251996 INFO nova.compute.manager [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Terminating instance
Dec 06 07:42:49 compute-0 nova_compute[251992]: 2025-12-06 07:42:49.987 251996 DEBUG nova.compute.manager [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:42:50 compute-0 ceph-mon[74339]: pgmap v2618: 305 pgs: 305 active+clean; 585 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 4.3 MiB/s wr, 374 op/s
Dec 06 07:42:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2608166669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2237471296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:50 compute-0 kernel: tap6bec9913-1f (unregistering): left promiscuous mode
Dec 06 07:42:50 compute-0 NetworkManager[48965]: <info>  [1765006970.0890] device (tap6bec9913-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:42:50 compute-0 ovn_controller[147168]: 2025-12-06T07:42:50Z|00535|binding|INFO|Releasing lport 6bec9913-1f72-4458-a269-bab059df9fe1 from this chassis (sb_readonly=0)
Dec 06 07:42:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:50.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:50 compute-0 ovn_controller[147168]: 2025-12-06T07:42:50Z|00536|binding|INFO|Setting lport 6bec9913-1f72-4458-a269-bab059df9fe1 down in Southbound
Dec 06 07:42:50 compute-0 ovn_controller[147168]: 2025-12-06T07:42:50Z|00537|binding|INFO|Removing iface tap6bec9913-1f ovn-installed in OVS
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.098 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.106 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:9d:eb 10.100.0.13'], port_security=['fa:16:3e:a9:9d:eb 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eff1f6a1654b45079de20eddb830e76d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b5b8e710-017e-4606-9067-bf1900949ed3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80d3c5d2-eecc-4e72-bceb-41384af759f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=6bec9913-1f72-4458-a269-bab059df9fe1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.107 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 6bec9913-1f72-4458-a269-bab059df9fe1 in datapath 35a27638-382c-4afb-83b0-edd6d7f4bca8 unbound from our chassis
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.118 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35a27638-382c-4afb-83b0-edd6d7f4bca8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.120 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.119 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d41bfda5-0053-4a76-8852-f2e0ec33026d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.124 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 namespace which is not needed anymore
Dec 06 07:42:50 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Dec 06 07:42:50 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000008f.scope: Consumed 15.127s CPU time.
Dec 06 07:42:50 compute-0 systemd-machined[212986]: Machine qemu-65-instance-0000008f terminated.
Dec 06 07:42:50 compute-0 podman[345482]: 2025-12-06 07:42:50.215435767 +0000 UTC m=+0.089015613 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.221 251996 INFO nova.virt.libvirt.driver [-] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Instance destroyed successfully.
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.222 251996 DEBUG nova.objects.instance [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lazy-loading 'resources' on Instance uuid 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.242 251996 DEBUG nova.virt.libvirt.vif [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:41:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-309022851',display_name='tempest-ServersTestJSON-server-309022851',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-309022851',id=143,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:42:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eff1f6a1654b45079de20eddb830e76d',ramdisk_id='',reservation_id='r-8g0fb13j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_
ram='0',owner_project_name='tempest-ServersTestJSON-374151197',owner_user_name='tempest-ServersTestJSON-374151197-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:42:04Z,user_data=None,user_id='0d8b62a3276f4a8b8349af67b82134c8',uuid=4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6bec9913-1f72-4458-a269-bab059df9fe1", "address": "fa:16:3e:a9:9d:eb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bec9913-1f", "ovs_interfaceid": "6bec9913-1f72-4458-a269-bab059df9fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.243 251996 DEBUG nova.network.os_vif_util [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converting VIF {"id": "6bec9913-1f72-4458-a269-bab059df9fe1", "address": "fa:16:3e:a9:9d:eb", "network": {"id": "35a27638-382c-4afb-83b0-edd6d7f4bca8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1603796324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eff1f6a1654b45079de20eddb830e76d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bec9913-1f", "ovs_interfaceid": "6bec9913-1f72-4458-a269-bab059df9fe1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.243 251996 DEBUG nova.network.os_vif_util [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:9d:eb,bridge_name='br-int',has_traffic_filtering=True,id=6bec9913-1f72-4458-a269-bab059df9fe1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bec9913-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.244 251996 DEBUG os_vif [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:9d:eb,bridge_name='br-int',has_traffic_filtering=True,id=6bec9913-1f72-4458-a269-bab059df9fe1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bec9913-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.246 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.246 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6bec9913-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.248 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.250 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:42:50 compute-0 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[344010]: [NOTICE]   (344014) : haproxy version is 2.8.14-c23fe91
Dec 06 07:42:50 compute-0 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[344010]: [NOTICE]   (344014) : path to executable is /usr/sbin/haproxy
Dec 06 07:42:50 compute-0 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[344010]: [WARNING]  (344014) : Exiting Master process...
Dec 06 07:42:50 compute-0 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[344010]: [WARNING]  (344014) : Exiting Master process...
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.253 251996 INFO os_vif [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:9d:eb,bridge_name='br-int',has_traffic_filtering=True,id=6bec9913-1f72-4458-a269-bab059df9fe1,network=Network(35a27638-382c-4afb-83b0-edd6d7f4bca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bec9913-1f')
Dec 06 07:42:50 compute-0 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[344010]: [ALERT]    (344014) : Current worker (344018) exited with code 143 (Terminated)
Dec 06 07:42:50 compute-0 neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8[344010]: [WARNING]  (344014) : All workers exited. Exiting... (0)
Dec 06 07:42:50 compute-0 systemd[1]: libpod-72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496.scope: Deactivated successfully.
Dec 06 07:42:50 compute-0 podman[345526]: 2025-12-06 07:42:50.264565686 +0000 UTC m=+0.055673719 container died 72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 07:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496-userdata-shm.mount: Deactivated successfully.
Dec 06 07:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2800245a484c7d359d891063685899f9a4faf9edcf181b22a97f432efd1c2c47-merged.mount: Deactivated successfully.
Dec 06 07:42:50 compute-0 podman[345526]: 2025-12-06 07:42:50.306430225 +0000 UTC m=+0.097538248 container cleanup 72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:42:50 compute-0 systemd[1]: libpod-conmon-72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496.scope: Deactivated successfully.
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.333 251996 DEBUG nova.compute.manager [req-402f907f-0375-4566-a354-c5acb4c79b6b req-cda1fc72-01d7-4f26-95be-208bee1c2622 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Received event network-vif-unplugged-6bec9913-1f72-4458-a269-bab059df9fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.333 251996 DEBUG oslo_concurrency.lockutils [req-402f907f-0375-4566-a354-c5acb4c79b6b req-cda1fc72-01d7-4f26-95be-208bee1c2622 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.334 251996 DEBUG oslo_concurrency.lockutils [req-402f907f-0375-4566-a354-c5acb4c79b6b req-cda1fc72-01d7-4f26-95be-208bee1c2622 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.334 251996 DEBUG oslo_concurrency.lockutils [req-402f907f-0375-4566-a354-c5acb4c79b6b req-cda1fc72-01d7-4f26-95be-208bee1c2622 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.334 251996 DEBUG nova.compute.manager [req-402f907f-0375-4566-a354-c5acb4c79b6b req-cda1fc72-01d7-4f26-95be-208bee1c2622 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] No waiting events found dispatching network-vif-unplugged-6bec9913-1f72-4458-a269-bab059df9fe1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.334 251996 DEBUG nova.compute.manager [req-402f907f-0375-4566-a354-c5acb4c79b6b req-cda1fc72-01d7-4f26-95be-208bee1c2622 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Received event network-vif-unplugged-6bec9913-1f72-4458-a269-bab059df9fe1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:42:50 compute-0 podman[345591]: 2025-12-06 07:42:50.371288034 +0000 UTC m=+0.044670817 container remove 72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.376 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3546d575-dfd8-40c4-893c-83aef124d433]: (4, ('Sat Dec  6 07:42:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 (72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496)\n72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496\nSat Dec  6 07:42:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 (72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496)\n72e1430998e0e123447ae32be0c9f289fb5d4c43cd490bc2484355d9055e8496\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.378 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[738e15dc-2fb4-4a30-9823-05f8788743a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.379 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35a27638-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:42:50 compute-0 kernel: tap35a27638-30: left promiscuous mode
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.381 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.398 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.400 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0999d1-a209-40e3-80a8-aea4218fe10a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.414 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5f170062-a7c6-49a1-bd80-ef08dc308ee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.415 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[df9ea80a-5fd7-4003-967a-e00cd9597c8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.434 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e0f0f1-d5f7-44fc-9f4b-435f26f88117]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 719596, 'reachable_time': 16387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345603, 'error': None, 'target': 'ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d35a27638\x2d382c\x2d4afb\x2d83b0\x2dedd6d7f4bca8.mount: Deactivated successfully.
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.439 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35a27638-382c-4afb-83b0-edd6d7f4bca8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:42:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:42:50.439 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[203d255a-dd6c-4f7c-b312-7cf6bc5dbb3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.559 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.680 251996 INFO nova.virt.libvirt.driver [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Deleting instance files /var/lib/nova/instances/4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_del
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.681 251996 INFO nova.virt.libvirt.driver [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Deletion of /var/lib/nova/instances/4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6_del complete
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.761 251996 INFO nova.compute.manager [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Took 0.77 seconds to destroy the instance on the hypervisor.
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.762 251996 DEBUG oslo.service.loopingcall [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.762 251996 DEBUG nova.compute.manager [-] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:42:50 compute-0 nova_compute[251992]: 2025-12-06 07:42:50.762 251996 DEBUG nova.network.neutron [-] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:42:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:50.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 478 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 3.7 MiB/s wr, 420 op/s
Dec 06 07:42:51 compute-0 nova_compute[251992]: 2025-12-06 07:42:51.347 251996 DEBUG nova.network.neutron [-] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:42:51 compute-0 nova_compute[251992]: 2025-12-06 07:42:51.365 251996 INFO nova.compute.manager [-] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Took 0.60 seconds to deallocate network for instance.
Dec 06 07:42:51 compute-0 nova_compute[251992]: 2025-12-06 07:42:51.408 251996 DEBUG oslo_concurrency.lockutils [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:51 compute-0 nova_compute[251992]: 2025-12-06 07:42:51.408 251996 DEBUG oslo_concurrency.lockutils [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:51 compute-0 nova_compute[251992]: 2025-12-06 07:42:51.476 251996 DEBUG oslo_concurrency.processutils [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:42:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:42:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3645391525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:51 compute-0 nova_compute[251992]: 2025-12-06 07:42:51.917 251996 DEBUG oslo_concurrency.processutils [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:42:51 compute-0 nova_compute[251992]: 2025-12-06 07:42:51.924 251996 DEBUG nova.compute.provider_tree [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:42:51 compute-0 nova_compute[251992]: 2025-12-06 07:42:51.941 251996 DEBUG nova.scheduler.client.report [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:42:51 compute-0 nova_compute[251992]: 2025-12-06 07:42:51.962 251996 DEBUG oslo_concurrency.lockutils [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:51 compute-0 nova_compute[251992]: 2025-12-06 07:42:51.986 251996 INFO nova.scheduler.client.report [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Deleted allocations for instance 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6
Dec 06 07:42:52 compute-0 ceph-mon[74339]: pgmap v2619: 305 pgs: 305 active+clean; 478 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 3.7 MiB/s wr, 420 op/s
Dec 06 07:42:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3645391525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:52 compute-0 nova_compute[251992]: 2025-12-06 07:42:52.055 251996 DEBUG oslo_concurrency.lockutils [None req-6038862c-edaf-4e80-a7bd-d2b9958525a3 0d8b62a3276f4a8b8349af67b82134c8 eff1f6a1654b45079de20eddb830e76d - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:52.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:52 compute-0 nova_compute[251992]: 2025-12-06 07:42:52.453 251996 DEBUG nova.compute.manager [req-27c9531b-9555-4213-9fce-00d49ab67838 req-9b714331-a4fb-4473-b28b-18b51a051f76 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Received event network-vif-plugged-6bec9913-1f72-4458-a269-bab059df9fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:52 compute-0 nova_compute[251992]: 2025-12-06 07:42:52.453 251996 DEBUG oslo_concurrency.lockutils [req-27c9531b-9555-4213-9fce-00d49ab67838 req-9b714331-a4fb-4473-b28b-18b51a051f76 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:42:52 compute-0 nova_compute[251992]: 2025-12-06 07:42:52.454 251996 DEBUG oslo_concurrency.lockutils [req-27c9531b-9555-4213-9fce-00d49ab67838 req-9b714331-a4fb-4473-b28b-18b51a051f76 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:42:52 compute-0 nova_compute[251992]: 2025-12-06 07:42:52.454 251996 DEBUG oslo_concurrency.lockutils [req-27c9531b-9555-4213-9fce-00d49ab67838 req-9b714331-a4fb-4473-b28b-18b51a051f76 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:42:52 compute-0 nova_compute[251992]: 2025-12-06 07:42:52.454 251996 DEBUG nova.compute.manager [req-27c9531b-9555-4213-9fce-00d49ab67838 req-9b714331-a4fb-4473-b28b-18b51a051f76 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] No waiting events found dispatching network-vif-plugged-6bec9913-1f72-4458-a269-bab059df9fe1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:42:52 compute-0 nova_compute[251992]: 2025-12-06 07:42:52.454 251996 WARNING nova.compute.manager [req-27c9531b-9555-4213-9fce-00d49ab67838 req-9b714331-a4fb-4473-b28b-18b51a051f76 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Received unexpected event network-vif-plugged-6bec9913-1f72-4458-a269-bab059df9fe1 for instance with vm_state deleted and task_state None.
Dec 06 07:42:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:52.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:52 compute-0 nova_compute[251992]: 2025-12-06 07:42:52.963 251996 DEBUG nova.compute.manager [req-360b7d50-21e6-4c27-972c-3c96371519b6 req-b7eac199-4fda-4fda-a78d-8176194a38c9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Received event network-vif-deleted-6bec9913-1f72-4458-a269-bab059df9fe1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:42:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 478 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 2.0 MiB/s wr, 399 op/s
Dec 06 07:42:54 compute-0 ceph-mon[74339]: pgmap v2620: 305 pgs: 305 active+clean; 478 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 2.0 MiB/s wr, 399 op/s
Dec 06 07:42:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:54.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:54 compute-0 nova_compute[251992]: 2025-12-06 07:42:54.202 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:54.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 2.0 MiB/s wr, 406 op/s
Dec 06 07:42:55 compute-0 ceph-mon[74339]: pgmap v2621: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 2.0 MiB/s wr, 406 op/s
Dec 06 07:42:55 compute-0 nova_compute[251992]: 2025-12-06 07:42:55.248 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:55 compute-0 nova_compute[251992]: 2025-12-06 07:42:55.561 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:42:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:56.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:56 compute-0 podman[345631]: 2025-12-06 07:42:56.398148045 +0000 UTC m=+0.055262887 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 07:42:56 compute-0 podman[345632]: 2025-12-06 07:42:56.4008911 +0000 UTC m=+0.056067809 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 06 07:42:56 compute-0 nova_compute[251992]: 2025-12-06 07:42:56.523 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:42:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:56.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2087247517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:42:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 436 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 1.7 MiB/s wr, 388 op/s
Dec 06 07:42:57 compute-0 ceph-mon[74339]: pgmap v2622: 305 pgs: 305 active+clean; 436 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 1.7 MiB/s wr, 388 op/s
Dec 06 07:42:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:42:58.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:42:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:42:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:42:58.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:42:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 254 op/s
Dec 06 07:42:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:42:59 compute-0 nova_compute[251992]: 2025-12-06 07:42:59.621 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006964.6207843, c1ef1073-7c66-428c-a02b-e4daa3551d22 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:42:59 compute-0 nova_compute[251992]: 2025-12-06 07:42:59.622 251996 INFO nova.compute.manager [-] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] VM Stopped (Lifecycle Event)
Dec 06 07:42:59 compute-0 nova_compute[251992]: 2025-12-06 07:42:59.639 251996 DEBUG nova.compute.manager [None req-549358b3-89d2-4df0-bbdf-783f4012dc93 - - - - - -] [instance: c1ef1073-7c66-428c-a02b-e4daa3551d22] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:42:59 compute-0 ceph-mon[74339]: pgmap v2623: 305 pgs: 305 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 254 op/s
Dec 06 07:43:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:00.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:00 compute-0 sudo[345671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:00 compute-0 sudo[345671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:00 compute-0 sudo[345671]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:00 compute-0 sudo[345696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:00 compute-0 sudo[345696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:00 compute-0 sudo[345696]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:00 compute-0 nova_compute[251992]: 2025-12-06 07:43:00.251 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:00 compute-0 nova_compute[251992]: 2025-12-06 07:43:00.563 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:00.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3366057383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 8.2 MiB/s wr, 287 op/s
Dec 06 07:43:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2101538755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:02 compute-0 ceph-mon[74339]: pgmap v2624: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 8.2 MiB/s wr, 287 op/s
Dec 06 07:43:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:02.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:02.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 959 KiB/s rd, 8.2 MiB/s wr, 236 op/s
Dec 06 07:43:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:03.851 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:03.852 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:03.852 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:04.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:04 compute-0 ceph-mon[74339]: pgmap v2625: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 959 KiB/s rd, 8.2 MiB/s wr, 236 op/s
Dec 06 07:43:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:04.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 965 KiB/s rd, 8.2 MiB/s wr, 238 op/s
Dec 06 07:43:05 compute-0 nova_compute[251992]: 2025-12-06 07:43:05.218 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006970.2174602, 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:05 compute-0 nova_compute[251992]: 2025-12-06 07:43:05.218 251996 INFO nova.compute.manager [-] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] VM Stopped (Lifecycle Event)
Dec 06 07:43:05 compute-0 nova_compute[251992]: 2025-12-06 07:43:05.236 251996 DEBUG nova.compute.manager [None req-b197e58a-f1c3-4de3-9416-d706818ae06a - - - - - -] [instance: 4a64ca64-6aaa-4842-85f4-5f66f4b9fcd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:05 compute-0 nova_compute[251992]: 2025-12-06 07:43:05.254 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:05 compute-0 ceph-mon[74339]: pgmap v2626: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 965 KiB/s rd, 8.2 MiB/s wr, 238 op/s
Dec 06 07:43:05 compute-0 nova_compute[251992]: 2025-12-06 07:43:05.564 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:06.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:06.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 8.2 MiB/s wr, 282 op/s
Dec 06 07:43:07 compute-0 nova_compute[251992]: 2025-12-06 07:43:07.567 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:43:08 compute-0 ceph-mon[74339]: pgmap v2627: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 8.2 MiB/s wr, 282 op/s
Dec 06 07:43:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:08.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:08.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 552 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.8 MiB/s wr, 267 op/s
Dec 06 07:43:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3363148672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:43:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3363148672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:43:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:09 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000094.scope: Deactivated successfully.
Dec 06 07:43:09 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000094.scope: Consumed 14.185s CPU time.
Dec 06 07:43:09 compute-0 systemd-machined[212986]: Machine qemu-66-instance-00000094 terminated.
Dec 06 07:43:10 compute-0 ceph-mon[74339]: pgmap v2628: 305 pgs: 305 active+clean; 552 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.8 MiB/s wr, 267 op/s
Dec 06 07:43:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2080084077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:10.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:10 compute-0 sudo[345729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:10 compute-0 sudo[345729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:10 compute-0 sudo[345729]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:10 compute-0 nova_compute[251992]: 2025-12-06 07:43:10.257 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:10 compute-0 sudo[345754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:43:10 compute-0 sudo[345754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:10 compute-0 sudo[345754]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:10 compute-0 sudo[345779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:10 compute-0 sudo[345779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:10 compute-0 sudo[345779]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:10 compute-0 sudo[345804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:43:10 compute-0 sudo[345804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:10 compute-0 nova_compute[251992]: 2025-12-06 07:43:10.566 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:10 compute-0 nova_compute[251992]: 2025-12-06 07:43:10.582 251996 INFO nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Instance shutdown successfully after 24 seconds.
Dec 06 07:43:10 compute-0 nova_compute[251992]: 2025-12-06 07:43:10.591 251996 INFO nova.virt.libvirt.driver [-] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Instance destroyed successfully.
Dec 06 07:43:10 compute-0 nova_compute[251992]: 2025-12-06 07:43:10.596 251996 INFO nova.virt.libvirt.driver [-] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Instance destroyed successfully.
Dec 06 07:43:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:43:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:43:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:10 compute-0 sudo[345804]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 07:43:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:43:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:10.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:10 compute-0 nova_compute[251992]: 2025-12-06 07:43:10.956 251996 INFO nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Deleting instance files /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3_del
Dec 06 07:43:10 compute-0 nova_compute[251992]: 2025-12-06 07:43:10.957 251996 INFO nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Deletion of /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3_del complete
Dec 06 07:43:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 517 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 241 op/s
Dec 06 07:43:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 07:43:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:43:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4142751679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:43:11 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.189 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.190 251996 INFO nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Creating image(s)
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.223 251996 DEBUG nova.storage.rbd_utils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.249 251996 DEBUG nova.storage.rbd_utils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.275 251996 DEBUG nova.storage.rbd_utils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.279 251996 DEBUG oslo_concurrency.processutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.344 251996 DEBUG oslo_concurrency.processutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.346 251996 DEBUG oslo_concurrency.lockutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.346 251996 DEBUG oslo_concurrency.lockutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.347 251996 DEBUG oslo_concurrency.lockutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "40c8d19f192ebe6ef01b2a3ea96d896752dcd737" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.377 251996 DEBUG nova.storage.rbd_utils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.381 251996 DEBUG oslo_concurrency.processutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.712 251996 DEBUG oslo_concurrency.processutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.781 251996 DEBUG nova.storage.rbd_utils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] resizing rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.889 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.889 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Ensure instance console log exists: /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.890 251996 DEBUG oslo_concurrency.lockutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.890 251996 DEBUG oslo_concurrency.lockutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.891 251996 DEBUG oslo_concurrency.lockutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.892 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.896 251996 WARNING nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.900 251996 DEBUG nova.virt.libvirt.host [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.901 251996 DEBUG nova.virt.libvirt.host [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.904 251996 DEBUG nova.virt.libvirt.host [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.905 251996 DEBUG nova.virt.libvirt.host [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.906 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.906 251996 DEBUG nova.virt.hardware [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:38Z,direct_url=<?>,disk_format='qcow2',id=412dd61d-1b1e-439f-b7f9-7e7c4e42924c,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.907 251996 DEBUG nova.virt.hardware [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.907 251996 DEBUG nova.virt.hardware [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.907 251996 DEBUG nova.virt.hardware [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.907 251996 DEBUG nova.virt.hardware [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.907 251996 DEBUG nova.virt.hardware [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.908 251996 DEBUG nova.virt.hardware [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.908 251996 DEBUG nova.virt.hardware [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.908 251996 DEBUG nova.virt.hardware [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.908 251996 DEBUG nova.virt.hardware [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.909 251996 DEBUG nova.virt.hardware [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:43:11 compute-0 nova_compute[251992]: 2025-12-06 07:43:11.909 251996 DEBUG nova.objects.instance [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 81c5c358-132f-4db7-acee-2c7454a0a4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:12 compute-0 nova_compute[251992]: 2025-12-06 07:43:12.015 251996 DEBUG oslo_concurrency.processutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:12.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:12 compute-0 ceph-mon[74339]: pgmap v2629: 305 pgs: 305 active+clean; 517 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 241 op/s
Dec 06 07:43:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:43:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/812162387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:12 compute-0 nova_compute[251992]: 2025-12-06 07:43:12.504 251996 DEBUG oslo_concurrency.processutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:43:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:43:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:12 compute-0 nova_compute[251992]: 2025-12-06 07:43:12.535 251996 DEBUG nova.storage.rbd_utils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:12 compute-0 nova_compute[251992]: 2025-12-06 07:43:12.539 251996 DEBUG oslo_concurrency.processutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:12.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 517 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 73 KiB/s wr, 107 op/s
Dec 06 07:43:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:43:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/700738085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.011 251996 DEBUG oslo_concurrency.processutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.015 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:43:13 compute-0 nova_compute[251992]:   <uuid>81c5c358-132f-4db7-acee-2c7454a0a4d3</uuid>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   <name>instance-00000094</name>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerShowV247Test-server-1195711009</nova:name>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:43:11</nova:creationTime>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <nova:user uuid="8b7e1fb80daa458699ec19892dc9a92c">tempest-ServerShowV247Test-375404285-project-member</nova:user>
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <nova:project uuid="6061a73c34904608870b68e204d01c42">tempest-ServerShowV247Test-375404285</nova:project>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="412dd61d-1b1e-439f-b7f9-7e7c4e42924c"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <nova:ports/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <system>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <entry name="serial">81c5c358-132f-4db7-acee-2c7454a0a4d3</entry>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <entry name="uuid">81c5c358-132f-4db7-acee-2c7454a0a4d3</entry>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     </system>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   <os>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   </os>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   <features>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   </features>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/81c5c358-132f-4db7-acee-2c7454a0a4d3_disk">
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       </source>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/81c5c358-132f-4db7-acee-2c7454a0a4d3_disk.config">
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       </source>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:43:13 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/console.log" append="off"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <video>
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     </video>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:43:13 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:43:13 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:43:13 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:43:13 compute-0 nova_compute[251992]: </domain>
Dec 06 07:43:13 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:43:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:43:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:43:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:43:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:43:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:43:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.065 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.066 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.066 251996 INFO nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Using config drive
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.088 251996 DEBUG nova.storage.rbd_utils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:43:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:43:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:43:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.270 251996 DEBUG nova.objects.instance [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 81c5c358-132f-4db7-acee-2c7454a0a4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:43:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/812162387' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:13 compute-0 ceph-mon[74339]: pgmap v2630: 305 pgs: 305 active+clean; 517 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 73 KiB/s wr, 107 op/s
Dec 06 07:43:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/700738085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:43:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:43:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e70e34c7-ce1b-4b82-94af-aba78daf3d81 does not exist
Dec 06 07:43:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0aa01ea2-bbe9-4ada-9b1b-1a7dcdc467a5 does not exist
Dec 06 07:43:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c080e0f7-e101-4e61-9194-bf23d86ed101 does not exist
Dec 06 07:43:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:43:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:43:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:43:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:43:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:43:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.308 251996 DEBUG nova.objects.instance [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'keypairs' on Instance uuid 81c5c358-132f-4db7-acee-2c7454a0a4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:13 compute-0 sudo[346127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:13 compute-0 sudo[346127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:13 compute-0 sudo[346127]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:13 compute-0 sudo[346152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:43:13 compute-0 sudo[346152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:13 compute-0 sudo[346152]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:13 compute-0 sudo[346177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:13 compute-0 sudo[346177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:13 compute-0 sudo[346177]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:13 compute-0 sudo[346202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:43:13 compute-0 sudo[346202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.578 251996 INFO nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Creating config drive at /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/disk.config
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.583 251996 DEBUG oslo_concurrency.processutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppqx301xk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.712 251996 DEBUG oslo_concurrency.processutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppqx301xk" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.743 251996 DEBUG nova.storage.rbd_utils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] rbd image 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.747 251996 DEBUG oslo_concurrency.processutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/disk.config 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:13 compute-0 podman[346297]: 2025-12-06 07:43:13.857846944 +0000 UTC m=+0.038796775 container create 60505e999d9b74bb4f06f43c3a58a6a2e9f7029bc7aeeeaf38f54a955518fb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lichterman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 07:43:13 compute-0 systemd[1]: Started libpod-conmon-60505e999d9b74bb4f06f43c3a58a6a2e9f7029bc7aeeeaf38f54a955518fb77.scope.
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.917 251996 DEBUG oslo_concurrency.processutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/disk.config 81c5c358-132f-4db7-acee-2c7454a0a4d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:13 compute-0 nova_compute[251992]: 2025-12-06 07:43:13.918 251996 INFO nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Deleting local config drive /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3/disk.config because it was imported into RBD.
Dec 06 07:43:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:43:13 compute-0 podman[346297]: 2025-12-06 07:43:13.841348422 +0000 UTC m=+0.022298263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:43:13 compute-0 podman[346297]: 2025-12-06 07:43:13.944017359 +0000 UTC m=+0.124967170 container init 60505e999d9b74bb4f06f43c3a58a6a2e9f7029bc7aeeeaf38f54a955518fb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lichterman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:43:13 compute-0 podman[346297]: 2025-12-06 07:43:13.949965742 +0000 UTC m=+0.130915563 container start 60505e999d9b74bb4f06f43c3a58a6a2e9f7029bc7aeeeaf38f54a955518fb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:43:13 compute-0 podman[346297]: 2025-12-06 07:43:13.954180267 +0000 UTC m=+0.135130108 container attach 60505e999d9b74bb4f06f43c3a58a6a2e9f7029bc7aeeeaf38f54a955518fb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 07:43:13 compute-0 nifty_lichterman[346323]: 167 167
Dec 06 07:43:13 compute-0 systemd[1]: libpod-60505e999d9b74bb4f06f43c3a58a6a2e9f7029bc7aeeeaf38f54a955518fb77.scope: Deactivated successfully.
Dec 06 07:43:13 compute-0 podman[346297]: 2025-12-06 07:43:13.957488428 +0000 UTC m=+0.138438249 container died 60505e999d9b74bb4f06f43c3a58a6a2e9f7029bc7aeeeaf38f54a955518fb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lichterman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 07:43:13 compute-0 systemd-machined[212986]: New machine qemu-68-instance-00000094.
Dec 06 07:43:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d854f15082d28f3352b5f680aded1566128d404dc89373ec81395d59c92bc56f-merged.mount: Deactivated successfully.
Dec 06 07:43:13 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-00000094.
Dec 06 07:43:13 compute-0 podman[346297]: 2025-12-06 07:43:13.994564196 +0000 UTC m=+0.175514017 container remove 60505e999d9b74bb4f06f43c3a58a6a2e9f7029bc7aeeeaf38f54a955518fb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:43:14 compute-0 systemd[1]: libpod-conmon-60505e999d9b74bb4f06f43c3a58a6a2e9f7029bc7aeeeaf38f54a955518fb77.scope: Deactivated successfully.
Dec 06 07:43:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:14.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:14 compute-0 podman[346362]: 2025-12-06 07:43:14.151831641 +0000 UTC m=+0.042927888 container create 3ecfd4ab7f3162bf18c08e46ee37262f7d502d1cc07bc2710f8bd1acd623e741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brattain, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 07:43:14 compute-0 systemd[1]: Started libpod-conmon-3ecfd4ab7f3162bf18c08e46ee37262f7d502d1cc07bc2710f8bd1acd623e741.scope.
Dec 06 07:43:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f18f67ade069ca03c355fde1569b77c4b5315d8278a93171e474838669f1f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f18f67ade069ca03c355fde1569b77c4b5315d8278a93171e474838669f1f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f18f67ade069ca03c355fde1569b77c4b5315d8278a93171e474838669f1f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f18f67ade069ca03c355fde1569b77c4b5315d8278a93171e474838669f1f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f18f67ade069ca03c355fde1569b77c4b5315d8278a93171e474838669f1f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:14 compute-0 podman[346362]: 2025-12-06 07:43:14.136770407 +0000 UTC m=+0.027866664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:43:14 compute-0 podman[346362]: 2025-12-06 07:43:14.238418337 +0000 UTC m=+0.129514614 container init 3ecfd4ab7f3162bf18c08e46ee37262f7d502d1cc07bc2710f8bd1acd623e741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 07:43:14 compute-0 podman[346362]: 2025-12-06 07:43:14.244440922 +0000 UTC m=+0.135537179 container start 3ecfd4ab7f3162bf18c08e46ee37262f7d502d1cc07bc2710f8bd1acd623e741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brattain, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 07:43:14 compute-0 podman[346362]: 2025-12-06 07:43:14.247456545 +0000 UTC m=+0.138552822 container attach 3ecfd4ab7f3162bf18c08e46ee37262f7d502d1cc07bc2710f8bd1acd623e741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 07:43:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:43:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:43:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:43:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2882213620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3595080009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.474 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for 81c5c358-132f-4db7-acee-2c7454a0a4d3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.476 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006994.4735272, 81c5c358-132f-4db7-acee-2c7454a0a4d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.476 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] VM Resumed (Lifecycle Event)
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.479 251996 DEBUG nova.compute.manager [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.480 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.484 251996 INFO nova.virt.libvirt.driver [-] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Instance spawned successfully.
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.484 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.498 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.502 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.505 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.505 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.506 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.506 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.506 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.507 251996 DEBUG nova.virt.libvirt.driver [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.533 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.534 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765006994.4738195, 81c5c358-132f-4db7-acee-2c7454a0a4d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.534 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] VM Started (Lifecycle Event)
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.568 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.571 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.587 251996 DEBUG nova.compute.manager [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.594 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.667 251996 DEBUG oslo_concurrency.lockutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.668 251996 DEBUG oslo_concurrency.lockutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.668 251996 DEBUG nova.objects.instance [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 06 07:43:14 compute-0 nova_compute[251992]: 2025-12-06 07:43:14.722 251996 DEBUG oslo_concurrency.lockutils [None req-54c4905c-ac70-4a41-9d3f-204ce47f76d8 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:14.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 502 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 873 KiB/s wr, 127 op/s
Dec 06 07:43:15 compute-0 cranky_brattain[346378]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:43:15 compute-0 cranky_brattain[346378]: --> relative data size: 1.0
Dec 06 07:43:15 compute-0 cranky_brattain[346378]: --> All data devices are unavailable
Dec 06 07:43:15 compute-0 systemd[1]: libpod-3ecfd4ab7f3162bf18c08e46ee37262f7d502d1cc07bc2710f8bd1acd623e741.scope: Deactivated successfully.
Dec 06 07:43:15 compute-0 podman[346436]: 2025-12-06 07:43:15.192818665 +0000 UTC m=+0.027408473 container died 3ecfd4ab7f3162bf18c08e46ee37262f7d502d1cc07bc2710f8bd1acd623e741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brattain, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 07:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-41f18f67ade069ca03c355fde1569b77c4b5315d8278a93171e474838669f1f1-merged.mount: Deactivated successfully.
Dec 06 07:43:15 compute-0 podman[346436]: 2025-12-06 07:43:15.249559952 +0000 UTC m=+0.084149750 container remove 3ecfd4ab7f3162bf18c08e46ee37262f7d502d1cc07bc2710f8bd1acd623e741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:43:15 compute-0 systemd[1]: libpod-conmon-3ecfd4ab7f3162bf18c08e46ee37262f7d502d1cc07bc2710f8bd1acd623e741.scope: Deactivated successfully.
Dec 06 07:43:15 compute-0 nova_compute[251992]: 2025-12-06 07:43:15.261 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:15 compute-0 sudo[346202]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/992574613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:15 compute-0 ceph-mon[74339]: pgmap v2631: 305 pgs: 305 active+clean; 502 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 873 KiB/s wr, 127 op/s
Dec 06 07:43:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4227087763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:15 compute-0 sudo[346451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:15 compute-0 sudo[346451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:15 compute-0 sudo[346451]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:15 compute-0 sudo[346476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:43:15 compute-0 sudo[346476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:15 compute-0 sudo[346476]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:15 compute-0 sudo[346501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:15 compute-0 sudo[346501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:15 compute-0 sudo[346501]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:15 compute-0 sudo[346526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:43:15 compute-0 sudo[346526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:15 compute-0 nova_compute[251992]: 2025-12-06 07:43:15.585 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:15 compute-0 nova_compute[251992]: 2025-12-06 07:43:15.644 251996 DEBUG oslo_concurrency.lockutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "81c5c358-132f-4db7-acee-2c7454a0a4d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:15 compute-0 nova_compute[251992]: 2025-12-06 07:43:15.645 251996 DEBUG oslo_concurrency.lockutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "81c5c358-132f-4db7-acee-2c7454a0a4d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:15 compute-0 nova_compute[251992]: 2025-12-06 07:43:15.645 251996 DEBUG oslo_concurrency.lockutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "81c5c358-132f-4db7-acee-2c7454a0a4d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:15 compute-0 nova_compute[251992]: 2025-12-06 07:43:15.645 251996 DEBUG oslo_concurrency.lockutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "81c5c358-132f-4db7-acee-2c7454a0a4d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:15 compute-0 nova_compute[251992]: 2025-12-06 07:43:15.646 251996 DEBUG oslo_concurrency.lockutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "81c5c358-132f-4db7-acee-2c7454a0a4d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:15 compute-0 nova_compute[251992]: 2025-12-06 07:43:15.647 251996 INFO nova.compute.manager [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Terminating instance
Dec 06 07:43:15 compute-0 nova_compute[251992]: 2025-12-06 07:43:15.648 251996 DEBUG oslo_concurrency.lockutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "refresh_cache-81c5c358-132f-4db7-acee-2c7454a0a4d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:43:15 compute-0 nova_compute[251992]: 2025-12-06 07:43:15.648 251996 DEBUG oslo_concurrency.lockutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquired lock "refresh_cache-81c5c358-132f-4db7-acee-2c7454a0a4d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:43:15 compute-0 nova_compute[251992]: 2025-12-06 07:43:15.648 251996 DEBUG nova.network.neutron [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:43:15 compute-0 podman[346590]: 2025-12-06 07:43:15.860582328 +0000 UTC m=+0.040325628 container create 24c24f18141f24e3361a60428b755c868663ea27c709a9de60f068d27c82c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:43:15 compute-0 systemd[1]: Started libpod-conmon-24c24f18141f24e3361a60428b755c868663ea27c709a9de60f068d27c82c9b5.scope.
Dec 06 07:43:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:43:15 compute-0 podman[346590]: 2025-12-06 07:43:15.840415665 +0000 UTC m=+0.020158945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:43:15 compute-0 podman[346590]: 2025-12-06 07:43:15.937088557 +0000 UTC m=+0.116831837 container init 24c24f18141f24e3361a60428b755c868663ea27c709a9de60f068d27c82c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:43:15 compute-0 podman[346590]: 2025-12-06 07:43:15.942833385 +0000 UTC m=+0.122576645 container start 24c24f18141f24e3361a60428b755c868663ea27c709a9de60f068d27c82c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:43:15 compute-0 happy_ride[346607]: 167 167
Dec 06 07:43:15 compute-0 systemd[1]: libpod-24c24f18141f24e3361a60428b755c868663ea27c709a9de60f068d27c82c9b5.scope: Deactivated successfully.
Dec 06 07:43:15 compute-0 podman[346590]: 2025-12-06 07:43:15.948270894 +0000 UTC m=+0.128014144 container attach 24c24f18141f24e3361a60428b755c868663ea27c709a9de60f068d27c82c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ride, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 07:43:15 compute-0 podman[346590]: 2025-12-06 07:43:15.948564572 +0000 UTC m=+0.128307842 container died 24c24f18141f24e3361a60428b755c868663ea27c709a9de60f068d27c82c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ride, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:43:15 compute-0 nova_compute[251992]: 2025-12-06 07:43:15.960 251996 DEBUG nova.network.neutron [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-790740b66e05810bc7e27d2b6d5f9dac4968d320cdb6ef6c243bb4faafff902f-merged.mount: Deactivated successfully.
Dec 06 07:43:15 compute-0 podman[346590]: 2025-12-06 07:43:15.979061789 +0000 UTC m=+0.158805049 container remove 24c24f18141f24e3361a60428b755c868663ea27c709a9de60f068d27c82c9b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ride, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:43:16 compute-0 systemd[1]: libpod-conmon-24c24f18141f24e3361a60428b755c868663ea27c709a9de60f068d27c82c9b5.scope: Deactivated successfully.
Dec 06 07:43:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:16.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:16 compute-0 podman[346630]: 2025-12-06 07:43:16.141154867 +0000 UTC m=+0.040769801 container create 0410a0836d1f72eccfb80a5402ef287e1d5fe19437446465b4af742108ff02b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:43:16 compute-0 systemd[1]: Started libpod-conmon-0410a0836d1f72eccfb80a5402ef287e1d5fe19437446465b4af742108ff02b3.scope.
Dec 06 07:43:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0436e2f5ba5ad581e25b2d058c6f4098370e95e9653e24dbbeb285747912a473/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0436e2f5ba5ad581e25b2d058c6f4098370e95e9653e24dbbeb285747912a473/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0436e2f5ba5ad581e25b2d058c6f4098370e95e9653e24dbbeb285747912a473/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:16 compute-0 podman[346630]: 2025-12-06 07:43:16.123799131 +0000 UTC m=+0.023414085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0436e2f5ba5ad581e25b2d058c6f4098370e95e9653e24dbbeb285747912a473/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:16 compute-0 podman[346630]: 2025-12-06 07:43:16.2396729 +0000 UTC m=+0.139287834 container init 0410a0836d1f72eccfb80a5402ef287e1d5fe19437446465b4af742108ff02b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 07:43:16 compute-0 podman[346630]: 2025-12-06 07:43:16.247651139 +0000 UTC m=+0.147266073 container start 0410a0836d1f72eccfb80a5402ef287e1d5fe19437446465b4af742108ff02b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:43:16 compute-0 podman[346630]: 2025-12-06 07:43:16.250581389 +0000 UTC m=+0.150196343 container attach 0410a0836d1f72eccfb80a5402ef287e1d5fe19437446465b4af742108ff02b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yalow, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:43:16 compute-0 nova_compute[251992]: 2025-12-06 07:43:16.402 251996 DEBUG nova.network.neutron [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:16 compute-0 nova_compute[251992]: 2025-12-06 07:43:16.452 251996 DEBUG oslo_concurrency.lockutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Releasing lock "refresh_cache-81c5c358-132f-4db7-acee-2c7454a0a4d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:43:16 compute-0 nova_compute[251992]: 2025-12-06 07:43:16.452 251996 DEBUG nova.compute.manager [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:43:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3521867990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:16 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000094.scope: Deactivated successfully.
Dec 06 07:43:16 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000094.scope: Consumed 2.556s CPU time.
Dec 06 07:43:16 compute-0 systemd-machined[212986]: Machine qemu-68-instance-00000094 terminated.
Dec 06 07:43:16 compute-0 nova_compute[251992]: 2025-12-06 07:43:16.679 251996 INFO nova.virt.libvirt.driver [-] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Instance destroyed successfully.
Dec 06 07:43:16 compute-0 nova_compute[251992]: 2025-12-06 07:43:16.680 251996 DEBUG nova.objects.instance [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'resources' on Instance uuid 81c5c358-132f-4db7-acee-2c7454a0a4d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:16.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 564 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.0 MiB/s wr, 226 op/s
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]: {
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:     "0": [
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:         {
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             "devices": [
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "/dev/loop3"
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             ],
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             "lv_name": "ceph_lv0",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             "lv_size": "7511998464",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             "name": "ceph_lv0",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             "tags": {
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "ceph.cluster_name": "ceph",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "ceph.crush_device_class": "",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "ceph.encrypted": "0",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "ceph.osd_id": "0",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "ceph.type": "block",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:                 "ceph.vdo": "0"
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             },
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             "type": "block",
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:             "vg_name": "ceph_vg0"
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:         }
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]:     ]
Dec 06 07:43:17 compute-0 inspiring_yalow[346646]: }
Dec 06 07:43:17 compute-0 systemd[1]: libpod-0410a0836d1f72eccfb80a5402ef287e1d5fe19437446465b4af742108ff02b3.scope: Deactivated successfully.
Dec 06 07:43:17 compute-0 podman[346630]: 2025-12-06 07:43:17.082166907 +0000 UTC m=+0.981781851 container died 0410a0836d1f72eccfb80a5402ef287e1d5fe19437446465b4af742108ff02b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yalow, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:43:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0436e2f5ba5ad581e25b2d058c6f4098370e95e9653e24dbbeb285747912a473-merged.mount: Deactivated successfully.
Dec 06 07:43:17 compute-0 podman[346630]: 2025-12-06 07:43:17.308069136 +0000 UTC m=+1.207684070 container remove 0410a0836d1f72eccfb80a5402ef287e1d5fe19437446465b4af742108ff02b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yalow, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:43:17 compute-0 systemd[1]: libpod-conmon-0410a0836d1f72eccfb80a5402ef287e1d5fe19437446465b4af742108ff02b3.scope: Deactivated successfully.
Dec 06 07:43:17 compute-0 sudo[346526]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:17 compute-0 sudo[346687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:17 compute-0 sudo[346687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:17 compute-0 sudo[346687]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:17 compute-0 sudo[346712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:43:17 compute-0 sudo[346712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:17 compute-0 sudo[346712]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:17 compute-0 sudo[346737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:17 compute-0 sudo[346737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:17 compute-0 sudo[346737]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:17 compute-0 sudo[346762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:43:17 compute-0 sudo[346762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:17 compute-0 nova_compute[251992]: 2025-12-06 07:43:17.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:17 compute-0 nova_compute[251992]: 2025-12-06 07:43:17.836 251996 INFO nova.virt.libvirt.driver [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Deleting instance files /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3_del
Dec 06 07:43:17 compute-0 nova_compute[251992]: 2025-12-06 07:43:17.836 251996 INFO nova.virt.libvirt.driver [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Deletion of /var/lib/nova/instances/81c5c358-132f-4db7-acee-2c7454a0a4d3_del complete
Dec 06 07:43:17 compute-0 podman[346828]: 2025-12-06 07:43:17.88403847 +0000 UTC m=+0.053845999 container create 8937a759a51013b468614cb28b3174fe6ba5e41b2579be03346f571f508c3686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:43:17 compute-0 systemd[1]: Started libpod-conmon-8937a759a51013b468614cb28b3174fe6ba5e41b2579be03346f571f508c3686.scope.
Dec 06 07:43:17 compute-0 nova_compute[251992]: 2025-12-06 07:43:17.932 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:17 compute-0 nova_compute[251992]: 2025-12-06 07:43:17.933 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:17 compute-0 nova_compute[251992]: 2025-12-06 07:43:17.934 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:17 compute-0 nova_compute[251992]: 2025-12-06 07:43:17.934 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:43:17 compute-0 nova_compute[251992]: 2025-12-06 07:43:17.935 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:43:17 compute-0 podman[346828]: 2025-12-06 07:43:17.853498381 +0000 UTC m=+0.023305940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:43:17 compute-0 podman[346828]: 2025-12-06 07:43:17.962130663 +0000 UTC m=+0.131938202 container init 8937a759a51013b468614cb28b3174fe6ba5e41b2579be03346f571f508c3686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:43:17 compute-0 podman[346828]: 2025-12-06 07:43:17.969375612 +0000 UTC m=+0.139183151 container start 8937a759a51013b468614cb28b3174fe6ba5e41b2579be03346f571f508c3686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec 06 07:43:17 compute-0 podman[346828]: 2025-12-06 07:43:17.973001271 +0000 UTC m=+0.142808810 container attach 8937a759a51013b468614cb28b3174fe6ba5e41b2579be03346f571f508c3686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:43:17 compute-0 kind_chebyshev[346844]: 167 167
Dec 06 07:43:17 compute-0 systemd[1]: libpod-8937a759a51013b468614cb28b3174fe6ba5e41b2579be03346f571f508c3686.scope: Deactivated successfully.
Dec 06 07:43:17 compute-0 podman[346828]: 2025-12-06 07:43:17.975591382 +0000 UTC m=+0.145398911 container died 8937a759a51013b468614cb28b3174fe6ba5e41b2579be03346f571f508c3686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 07:43:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-dddea54a5877b5cfb2eb1295224c6b0856b79b9ba8d0e3e3baa630cf1de82ff4-merged.mount: Deactivated successfully.
Dec 06 07:43:18 compute-0 podman[346828]: 2025-12-06 07:43:18.011019194 +0000 UTC m=+0.180826723 container remove 8937a759a51013b468614cb28b3174fe6ba5e41b2579be03346f571f508c3686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 07:43:18 compute-0 systemd[1]: libpod-conmon-8937a759a51013b468614cb28b3174fe6ba5e41b2579be03346f571f508c3686.scope: Deactivated successfully.
Dec 06 07:43:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:18.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.142 251996 INFO nova.compute.manager [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Took 1.69 seconds to destroy the instance on the hypervisor.
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.143 251996 DEBUG oslo.service.loopingcall [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.144 251996 DEBUG nova.compute.manager [-] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.144 251996 DEBUG nova.network.neutron [-] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:43:18 compute-0 podman[346889]: 2025-12-06 07:43:18.169081011 +0000 UTC m=+0.046234130 container create f4b1945906db9c20af79bae3a7bc397b140c93dbd1eaaf942aee640b902ce005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 07:43:18 compute-0 ceph-mon[74339]: pgmap v2632: 305 pgs: 305 active+clean; 564 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.0 MiB/s wr, 226 op/s
Dec 06 07:43:18 compute-0 systemd[1]: Started libpod-conmon-f4b1945906db9c20af79bae3a7bc397b140c93dbd1eaaf942aee640b902ce005.scope.
Dec 06 07:43:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7324b9b6770550ea6afb10f70bae42111eea50b53b8a0d69f91bc1aa6949ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7324b9b6770550ea6afb10f70bae42111eea50b53b8a0d69f91bc1aa6949ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7324b9b6770550ea6afb10f70bae42111eea50b53b8a0d69f91bc1aa6949ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7324b9b6770550ea6afb10f70bae42111eea50b53b8a0d69f91bc1aa6949ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:18 compute-0 podman[346889]: 2025-12-06 07:43:18.151801287 +0000 UTC m=+0.028954426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:43:18 compute-0 podman[346889]: 2025-12-06 07:43:18.249794606 +0000 UTC m=+0.126947725 container init f4b1945906db9c20af79bae3a7bc397b140c93dbd1eaaf942aee640b902ce005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:43:18 compute-0 podman[346889]: 2025-12-06 07:43:18.260435347 +0000 UTC m=+0.137588476 container start f4b1945906db9c20af79bae3a7bc397b140c93dbd1eaaf942aee640b902ce005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Dec 06 07:43:18 compute-0 podman[346889]: 2025-12-06 07:43:18.26452439 +0000 UTC m=+0.141677509 container attach f4b1945906db9c20af79bae3a7bc397b140c93dbd1eaaf942aee640b902ce005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 07:43:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:43:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1009578427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.379 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.444 251996 DEBUG nova.network.neutron [-] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.471 251996 DEBUG nova.network.neutron [-] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.476 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.477 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.495 251996 INFO nova.compute.manager [-] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Took 0.35 seconds to deallocate network for instance.
Dec 06 07:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:43:18
Dec 06 07:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.mgr', 'default.rgw.meta', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control']
Dec 06 07:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.564 251996 DEBUG oslo_concurrency.lockutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.564 251996 DEBUG oslo_concurrency.lockutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.650 251996 DEBUG oslo_concurrency.processutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.714 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.716 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4026MB free_disk=20.748096466064453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:43:18 compute-0 nova_compute[251992]: 2025-12-06 07:43:18.716 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:18.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 564 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.3 MiB/s wr, 204 op/s
Dec 06 07:43:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:43:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495926152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:19 compute-0 nova_compute[251992]: 2025-12-06 07:43:19.119 251996 DEBUG oslo_concurrency.processutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:19 compute-0 nova_compute[251992]: 2025-12-06 07:43:19.125 251996 DEBUG nova.compute.provider_tree [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:43:19 compute-0 hopeful_panini[346905]: {
Dec 06 07:43:19 compute-0 hopeful_panini[346905]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:43:19 compute-0 hopeful_panini[346905]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:43:19 compute-0 hopeful_panini[346905]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:43:19 compute-0 hopeful_panini[346905]:         "osd_id": 0,
Dec 06 07:43:19 compute-0 hopeful_panini[346905]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:43:19 compute-0 hopeful_panini[346905]:         "type": "bluestore"
Dec 06 07:43:19 compute-0 hopeful_panini[346905]:     }
Dec 06 07:43:19 compute-0 hopeful_panini[346905]: }
Dec 06 07:43:19 compute-0 systemd[1]: libpod-f4b1945906db9c20af79bae3a7bc397b140c93dbd1eaaf942aee640b902ce005.scope: Deactivated successfully.
Dec 06 07:43:19 compute-0 podman[346952]: 2025-12-06 07:43:19.196877263 +0000 UTC m=+0.024311188 container died f4b1945906db9c20af79bae3a7bc397b140c93dbd1eaaf942aee640b902ce005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb7324b9b6770550ea6afb10f70bae42111eea50b53b8a0d69f91bc1aa6949ab-merged.mount: Deactivated successfully.
Dec 06 07:43:19 compute-0 podman[346952]: 2025-12-06 07:43:19.247728298 +0000 UTC m=+0.075162203 container remove f4b1945906db9c20af79bae3a7bc397b140c93dbd1eaaf942aee640b902ce005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_panini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:43:19 compute-0 systemd[1]: libpod-conmon-f4b1945906db9c20af79bae3a7bc397b140c93dbd1eaaf942aee640b902ce005.scope: Deactivated successfully.
Dec 06 07:43:19 compute-0 sudo[346762]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:43:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1009578427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:19 compute-0 ceph-mon[74339]: pgmap v2633: 305 pgs: 305 active+clean; 564 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.3 MiB/s wr, 204 op/s
Dec 06 07:43:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1495926152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:43:19 compute-0 nova_compute[251992]: 2025-12-06 07:43:19.484 251996 DEBUG nova.scheduler.client.report [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:43:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev eac67c7b-725e-4648-b637-4f3ddab96fe7 does not exist
Dec 06 07:43:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 593118fb-7fcd-4ea1-b507-73c653f79657 does not exist
Dec 06 07:43:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7f78d335-2373-4c15-9f24-d41f3d7e0aa8 does not exist
Dec 06 07:43:19 compute-0 sudo[346967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:19 compute-0 sudo[346967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:19 compute-0 sudo[346967]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:19 compute-0 sudo[346992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:43:19 compute-0 sudo[346992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:19 compute-0 sudo[346992]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:20 compute-0 nova_compute[251992]: 2025-12-06 07:43:20.024 251996 DEBUG oslo_concurrency.lockutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:20 compute-0 nova_compute[251992]: 2025-12-06 07:43:20.027 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:20.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:20 compute-0 nova_compute[251992]: 2025-12-06 07:43:20.267 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:20 compute-0 sudo[347017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:20 compute-0 sudo[347017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:20 compute-0 sudo[347017]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:20 compute-0 sudo[347043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:20 compute-0 sudo[347043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:20 compute-0 sudo[347043]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:20 compute-0 podman[347041]: 2025-12-06 07:43:20.386989469 +0000 UTC m=+0.085662862 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:43:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:43:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4055792314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:20 compute-0 nova_compute[251992]: 2025-12-06 07:43:20.527 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance dff82ae9-39a8-4a01-8dbb-782b5329f293 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:43:20 compute-0 nova_compute[251992]: 2025-12-06 07:43:20.528 251996 WARNING nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 81c5c358-132f-4db7-acee-2c7454a0a4d3 is not being actively managed by this compute host but has allocations referencing this compute host: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. Skipping heal of allocation because we do not know what to do.
Dec 06 07:43:20 compute-0 nova_compute[251992]: 2025-12-06 07:43:20.528 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:43:20 compute-0 nova_compute[251992]: 2025-12-06 07:43:20.528 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:43:20 compute-0 nova_compute[251992]: 2025-12-06 07:43:20.587 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:20 compute-0 nova_compute[251992]: 2025-12-06 07:43:20.589 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:20 compute-0 nova_compute[251992]: 2025-12-06 07:43:20.631 251996 INFO nova.scheduler.client.report [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Deleted allocations for instance 81c5c358-132f-4db7-acee-2c7454a0a4d3
Dec 06 07:43:20 compute-0 nova_compute[251992]: 2025-12-06 07:43:20.777 251996 DEBUG oslo_concurrency.lockutils [None req-19748ca6-ac85-4ac5-bc94-cd2be3999661 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "81c5c358-132f-4db7-acee-2c7454a0a4d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:20.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.4 MiB/s wr, 260 op/s
Dec 06 07:43:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:43:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1758460467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.055 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.061 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.076 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.098 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.099 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1147296475' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:21 compute-0 ceph-mon[74339]: pgmap v2634: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.4 MiB/s wr, 260 op/s
Dec 06 07:43:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1758460467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.980 251996 DEBUG oslo_concurrency.lockutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "dff82ae9-39a8-4a01-8dbb-782b5329f293" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.980 251996 DEBUG oslo_concurrency.lockutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "dff82ae9-39a8-4a01-8dbb-782b5329f293" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.981 251996 DEBUG oslo_concurrency.lockutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "dff82ae9-39a8-4a01-8dbb-782b5329f293-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.981 251996 DEBUG oslo_concurrency.lockutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "dff82ae9-39a8-4a01-8dbb-782b5329f293-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.981 251996 DEBUG oslo_concurrency.lockutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "dff82ae9-39a8-4a01-8dbb-782b5329f293-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.983 251996 INFO nova.compute.manager [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Terminating instance
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.983 251996 DEBUG oslo_concurrency.lockutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "refresh_cache-dff82ae9-39a8-4a01-8dbb-782b5329f293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.984 251996 DEBUG oslo_concurrency.lockutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquired lock "refresh_cache-dff82ae9-39a8-4a01-8dbb-782b5329f293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:43:21 compute-0 nova_compute[251992]: 2025-12-06 07:43:21.984 251996 DEBUG nova.network.neutron [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:43:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:22.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:22 compute-0 nova_compute[251992]: 2025-12-06 07:43:22.233 251996 DEBUG nova.network.neutron [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:43:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:22 compute-0 nova_compute[251992]: 2025-12-06 07:43:22.966 251996 DEBUG nova.network.neutron [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 233 op/s
Dec 06 07:43:22 compute-0 nova_compute[251992]: 2025-12-06 07:43:22.988 251996 DEBUG oslo_concurrency.lockutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Releasing lock "refresh_cache-dff82ae9-39a8-4a01-8dbb-782b5329f293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:43:22 compute-0 nova_compute[251992]: 2025-12-06 07:43:22.989 251996 DEBUG nova.compute.manager [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:43:23 compute-0 nova_compute[251992]: 2025-12-06 07:43:23.099 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:23 compute-0 nova_compute[251992]: 2025-12-06 07:43:23.100 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:23 compute-0 nova_compute[251992]: 2025-12-06 07:43:23.126 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3041811195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:43:23 compute-0 nova_compute[251992]: 2025-12-06 07:43:23.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:24.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:24.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 267 op/s
Dec 06 07:43:25 compute-0 ceph-mon[74339]: pgmap v2635: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 233 op/s
Dec 06 07:43:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:43:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:43:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:43:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:43:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:43:25 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000093.scope: Deactivated successfully.
Dec 06 07:43:25 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000093.scope: Consumed 14.709s CPU time.
Dec 06 07:43:25 compute-0 systemd-machined[212986]: Machine qemu-67-instance-00000093 terminated.
Dec 06 07:43:25 compute-0 nova_compute[251992]: 2025-12-06 07:43:25.217 251996 INFO nova.virt.libvirt.driver [-] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Instance destroyed successfully.
Dec 06 07:43:25 compute-0 nova_compute[251992]: 2025-12-06 07:43:25.217 251996 DEBUG nova.objects.instance [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lazy-loading 'resources' on Instance uuid dff82ae9-39a8-4a01-8dbb-782b5329f293 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:25 compute-0 nova_compute[251992]: 2025-12-06 07:43:25.307 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:25 compute-0 nova_compute[251992]: 2025-12-06 07:43:25.589 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:25 compute-0 nova_compute[251992]: 2025-12-06 07:43:25.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Dec 06 07:43:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Dec 06 07:43:26 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Dec 06 07:43:26 compute-0 ceph-mon[74339]: pgmap v2636: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.3 MiB/s wr, 267 op/s
Dec 06 07:43:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2786445463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:26 compute-0 ceph-mon[74339]: osdmap e329: 3 total, 3 up, 3 in
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.111 251996 INFO nova.virt.libvirt.driver [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Deleting instance files /var/lib/nova/instances/dff82ae9-39a8-4a01-8dbb-782b5329f293_del
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.112 251996 INFO nova.virt.libvirt.driver [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Deletion of /var/lib/nova/instances/dff82ae9-39a8-4a01-8dbb-782b5329f293_del complete
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008492804008468267 of space, bias 1.0, pg target 2.5478412025404804 quantized to 32 (current 32)
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021625052345058625 of space, bias 1.0, pg target 0.644426559882747 quantized to 32 (current 32)
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Dec 06 07:43:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:26.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.170 251996 INFO nova.compute.manager [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Took 3.18 seconds to destroy the instance on the hypervisor.
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.171 251996 DEBUG oslo.service.loopingcall [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.171 251996 DEBUG nova.compute.manager [-] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.172 251996 DEBUG nova.network.neutron [-] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.674 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.675 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.675 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.676 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.676 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.859 251996 DEBUG nova.network.neutron [-] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.889 251996 DEBUG nova.network.neutron [-] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 07:43:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:26.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.917 251996 INFO nova.compute.manager [-] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Took 0.75 seconds to deallocate network for instance.
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.962 251996 DEBUG oslo_concurrency.lockutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:26 compute-0 nova_compute[251992]: 2025-12-06 07:43:26.962 251996 DEBUG oslo_concurrency.lockutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 492 KiB/s wr, 302 op/s
Dec 06 07:43:27 compute-0 nova_compute[251992]: 2025-12-06 07:43:27.042 251996 DEBUG oslo_concurrency.processutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:27 compute-0 podman[347162]: 2025-12-06 07:43:27.406003425 +0000 UTC m=+0.062014572 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd)
Dec 06 07:43:27 compute-0 podman[347161]: 2025-12-06 07:43:27.418856248 +0000 UTC m=+0.069776505 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:43:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:43:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2039062054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:27 compute-0 nova_compute[251992]: 2025-12-06 07:43:27.494 251996 DEBUG oslo_concurrency.processutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:27 compute-0 nova_compute[251992]: 2025-12-06 07:43:27.501 251996 DEBUG nova.compute.provider_tree [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:43:27 compute-0 nova_compute[251992]: 2025-12-06 07:43:27.514 251996 DEBUG nova.scheduler.client.report [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:43:27 compute-0 nova_compute[251992]: 2025-12-06 07:43:27.534 251996 DEBUG oslo_concurrency.lockutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.572s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:27 compute-0 nova_compute[251992]: 2025-12-06 07:43:27.553 251996 INFO nova.scheduler.client.report [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Deleted allocations for instance dff82ae9-39a8-4a01-8dbb-782b5329f293
Dec 06 07:43:27 compute-0 nova_compute[251992]: 2025-12-06 07:43:27.616 251996 DEBUG oslo_concurrency.lockutils [None req-5ded7cb0-55f9-42ba-9208-f38e3754cd9c 8b7e1fb80daa458699ec19892dc9a92c 6061a73c34904608870b68e204d01c42 - - default default] Lock "dff82ae9-39a8-4a01-8dbb-782b5329f293" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:28.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.224428) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008224546, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 792, "num_deletes": 251, "total_data_size": 1100325, "memory_usage": 1119600, "flush_reason": "Manual Compaction"}
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Dec 06 07:43:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4128211629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008235367, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1077711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52041, "largest_seqno": 52832, "table_properties": {"data_size": 1073528, "index_size": 1835, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9844, "raw_average_key_size": 20, "raw_value_size": 1065006, "raw_average_value_size": 2182, "num_data_blocks": 79, "num_entries": 488, "num_filter_entries": 488, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765006955, "oldest_key_time": 1765006955, "file_creation_time": 1765007008, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 11009 microseconds, and 6506 cpu microseconds.
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.235435) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1077711 bytes OK
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.235471) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.236944) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.236965) EVENT_LOG_v1 {"time_micros": 1765007008236958, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.236990) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1096294, prev total WAL file size 1096294, number of live WAL files 2.
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.237808) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1052KB)], [110(11MB)]
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008237861, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 13052627, "oldest_snapshot_seqno": -1}
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 8838 keys, 10855304 bytes, temperature: kUnknown
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008301408, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 10855304, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10799123, "index_size": 32965, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22149, "raw_key_size": 229510, "raw_average_key_size": 25, "raw_value_size": 10644893, "raw_average_value_size": 1204, "num_data_blocks": 1284, "num_entries": 8838, "num_filter_entries": 8838, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765007008, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.301803) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 10855304 bytes
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.304434) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.0 rd, 170.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(22.2) write-amplify(10.1) OK, records in: 9359, records dropped: 521 output_compression: NoCompression
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.304466) EVENT_LOG_v1 {"time_micros": 1765007008304444, "job": 66, "event": "compaction_finished", "compaction_time_micros": 63667, "compaction_time_cpu_micros": 28830, "output_level": 6, "num_output_files": 1, "total_output_size": 10855304, "num_input_records": 9359, "num_output_records": 8838, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008304698, "job": 66, "event": "table_file_deletion", "file_number": 112}
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007008306652, "job": 66, "event": "table_file_deletion", "file_number": 110}
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.237706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.306685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.306690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.306693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.306696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:28 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:28.306698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:28 compute-0 nova_compute[251992]: 2025-12-06 07:43:28.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:28 compute-0 nova_compute[251992]: 2025-12-06 07:43:28.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:43:28 compute-0 nova_compute[251992]: 2025-12-06 07:43:28.671 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:43:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:28.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 416 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 993 KiB/s wr, 311 op/s
Dec 06 07:43:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Dec 06 07:43:29 compute-0 ceph-mon[74339]: pgmap v2638: 305 pgs: 305 active+clean; 425 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 492 KiB/s wr, 302 op/s
Dec 06 07:43:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2039062054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:29 compute-0 ceph-mon[74339]: pgmap v2639: 305 pgs: 305 active+clean; 416 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 993 KiB/s wr, 311 op/s
Dec 06 07:43:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Dec 06 07:43:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Dec 06 07:43:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.354172) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009354227, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 289, "num_deletes": 260, "total_data_size": 63614, "memory_usage": 70208, "flush_reason": "Manual Compaction"}
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009356274, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 63772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52833, "largest_seqno": 53121, "table_properties": {"data_size": 61804, "index_size": 132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5034, "raw_average_key_size": 17, "raw_value_size": 57792, "raw_average_value_size": 204, "num_data_blocks": 6, "num_entries": 283, "num_filter_entries": 283, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007008, "oldest_key_time": 1765007008, "file_creation_time": 1765007009, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 2115 microseconds, and 793 cpu microseconds.
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.356298) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 63772 bytes OK
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.356310) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.360844) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.360856) EVENT_LOG_v1 {"time_micros": 1765007009360853, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.360871) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 61427, prev total WAL file size 61427, number of live WAL files 2.
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.361366) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373638' seq:72057594037927935, type:22 .. '6C6F676D0032303234' seq:0, type:0; will stop at (end)
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(62KB)], [113(10MB)]
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009361555, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10919076, "oldest_snapshot_seqno": -1}
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 8589 keys, 10772317 bytes, temperature: kUnknown
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009421474, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 10772317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10717358, "index_size": 32383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 225329, "raw_average_key_size": 26, "raw_value_size": 10566822, "raw_average_value_size": 1230, "num_data_blocks": 1255, "num_entries": 8589, "num_filter_entries": 8589, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765007009, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.421801) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10772317 bytes
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.423527) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.0 rd, 179.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.4 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(340.1) write-amplify(168.9) OK, records in: 9121, records dropped: 532 output_compression: NoCompression
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.423544) EVENT_LOG_v1 {"time_micros": 1765007009423536, "job": 68, "event": "compaction_finished", "compaction_time_micros": 60007, "compaction_time_cpu_micros": 26319, "output_level": 6, "num_output_files": 1, "total_output_size": 10772317, "num_input_records": 9121, "num_output_records": 8589, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009423658, "job": 68, "event": "table_file_deletion", "file_number": 115}
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007009425407, "job": 68, "event": "table_file_deletion", "file_number": 113}
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.361272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.425434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.425437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.425439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.425440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:43:29.425441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:43:29 compute-0 nova_compute[251992]: 2025-12-06 07:43:29.684 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:29.684 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:43:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:29.686 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:43:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:30.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Dec 06 07:43:30 compute-0 ceph-mon[74339]: osdmap e330: 3 total, 3 up, 3 in
Dec 06 07:43:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Dec 06 07:43:30 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Dec 06 07:43:30 compute-0 nova_compute[251992]: 2025-12-06 07:43:30.308 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:30 compute-0 nova_compute[251992]: 2025-12-06 07:43:30.590 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:30.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 2.6 MiB/s wr, 354 op/s
Dec 06 07:43:31 compute-0 ceph-mon[74339]: osdmap e331: 3 total, 3 up, 3 in
Dec 06 07:43:31 compute-0 ceph-mon[74339]: pgmap v2642: 305 pgs: 305 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 2.6 MiB/s wr, 354 op/s
Dec 06 07:43:31 compute-0 nova_compute[251992]: 2025-12-06 07:43:31.678 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765006996.6767988, 81c5c358-132f-4db7-acee-2c7454a0a4d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:31 compute-0 nova_compute[251992]: 2025-12-06 07:43:31.678 251996 INFO nova.compute.manager [-] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] VM Stopped (Lifecycle Event)
Dec 06 07:43:31 compute-0 nova_compute[251992]: 2025-12-06 07:43:31.699 251996 DEBUG nova.compute.manager [None req-2b0a4934-fea6-4194-85d9-d9a76632f7a1 - - - - - -] [instance: 81c5c358-132f-4db7-acee-2c7454a0a4d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:32.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1543812408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:32.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.1 MiB/s wr, 286 op/s
Dec 06 07:43:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:34.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:34 compute-0 ceph-mon[74339]: pgmap v2643: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.1 MiB/s wr, 286 op/s
Dec 06 07:43:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:34.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.0 MiB/s wr, 204 op/s
Dec 06 07:43:35 compute-0 nova_compute[251992]: 2025-12-06 07:43:35.311 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:35 compute-0 nova_compute[251992]: 2025-12-06 07:43:35.592 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:35.689 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:35 compute-0 ceph-mon[74339]: pgmap v2644: 305 pgs: 305 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.0 MiB/s wr, 204 op/s
Dec 06 07:43:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:36.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:36.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 361 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.8 MiB/s wr, 195 op/s
Dec 06 07:43:37 compute-0 nova_compute[251992]: 2025-12-06 07:43:37.588 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:37 compute-0 nova_compute[251992]: 2025-12-06 07:43:37.589 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:37 compute-0 nova_compute[251992]: 2025-12-06 07:43:37.607 251996 DEBUG nova.compute.manager [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:43:37 compute-0 nova_compute[251992]: 2025-12-06 07:43:37.683 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:37 compute-0 nova_compute[251992]: 2025-12-06 07:43:37.684 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:37 compute-0 nova_compute[251992]: 2025-12-06 07:43:37.690 251996 DEBUG nova.virt.hardware [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:43:37 compute-0 nova_compute[251992]: 2025-12-06 07:43:37.690 251996 INFO nova.compute.claims [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:43:37 compute-0 nova_compute[251992]: 2025-12-06 07:43:37.781 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:38 compute-0 ceph-mon[74339]: pgmap v2645: 305 pgs: 305 active+clean; 361 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.8 MiB/s wr, 195 op/s
Dec 06 07:43:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:38.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:43:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2945964719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.224 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.230 251996 DEBUG nova.compute.provider_tree [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.276 251996 DEBUG nova.scheduler.client.report [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.617 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.618 251996 DEBUG nova.compute.manager [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.675 251996 DEBUG nova.compute.manager [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.676 251996 DEBUG nova.network.neutron [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.701 251996 INFO nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.727 251996 DEBUG nova.compute.manager [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.821 251996 DEBUG nova.compute.manager [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.822 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.823 251996 INFO nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Creating image(s)
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.880 251996 DEBUG nova.storage.rbd_utils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.910 251996 DEBUG nova.storage.rbd_utils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:38.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.941 251996 DEBUG nova.storage.rbd_utils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.945 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:38 compute-0 nova_compute[251992]: 2025-12-06 07:43:38.980 251996 DEBUG nova.policy [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '297bc99c242e4fa8aedea4a6367b61c0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '741dc47f9ced423cbd99fd6f9d32904f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:43:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 380 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.3 MiB/s wr, 203 op/s
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.018 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.019 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.020 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.020 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.051 251996 DEBUG nova.storage.rbd_utils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.054 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2945964719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:39 compute-0 ceph-mon[74339]: pgmap v2646: 305 pgs: 305 active+clean; 380 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.3 MiB/s wr, 203 op/s
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.350 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.433 251996 DEBUG nova.storage.rbd_utils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] resizing rbd image f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.541 251996 DEBUG nova.objects.instance [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'migration_context' on Instance uuid f3e780ab-f17f-4ecf-908b-16e88419d5f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.578 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.579 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Ensure instance console log exists: /var/lib/nova/instances/f3e780ab-f17f-4ecf-908b-16e88419d5f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.579 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.580 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:39 compute-0 nova_compute[251992]: 2025-12-06 07:43:39.580 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Dec 06 07:43:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Dec 06 07:43:39 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Dec 06 07:43:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:40.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:40 compute-0 nova_compute[251992]: 2025-12-06 07:43:40.214 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007005.2128127, dff82ae9-39a8-4a01-8dbb-782b5329f293 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:40 compute-0 nova_compute[251992]: 2025-12-06 07:43:40.214 251996 INFO nova.compute.manager [-] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] VM Stopped (Lifecycle Event)
Dec 06 07:43:40 compute-0 nova_compute[251992]: 2025-12-06 07:43:40.234 251996 DEBUG nova.compute.manager [None req-930245cf-1cf9-4e4b-8e4c-39125f991e21 - - - - - -] [instance: dff82ae9-39a8-4a01-8dbb-782b5329f293] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:40 compute-0 nova_compute[251992]: 2025-12-06 07:43:40.314 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:40 compute-0 sudo[347392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:40 compute-0 sudo[347392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:40 compute-0 sudo[347392]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:40 compute-0 sudo[347417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:43:40 compute-0 sudo[347417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:43:40 compute-0 sudo[347417]: pam_unix(sudo:session): session closed for user root
Dec 06 07:43:40 compute-0 nova_compute[251992]: 2025-12-06 07:43:40.595 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:40 compute-0 ceph-mon[74339]: osdmap e332: 3 total, 3 up, 3 in
Dec 06 07:43:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:40.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 422 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.8 MiB/s wr, 211 op/s
Dec 06 07:43:41 compute-0 ceph-mon[74339]: pgmap v2648: 305 pgs: 305 active+clean; 422 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.8 MiB/s wr, 211 op/s
Dec 06 07:43:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:42.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1921876245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2263409015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:42.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:42 compute-0 nova_compute[251992]: 2025-12-06 07:43:42.978 251996 DEBUG nova.network.neutron [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Successfully created port: 72ebf84f-114c-481c-8735-4ba8278ccfdb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:43:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 439 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 880 KiB/s rd, 7.0 MiB/s wr, 190 op/s
Dec 06 07:43:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:43:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:43:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:43:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:43:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:43:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:43:43 compute-0 ceph-mon[74339]: pgmap v2649: 305 pgs: 305 active+clean; 439 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 880 KiB/s rd, 7.0 MiB/s wr, 190 op/s
Dec 06 07:43:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1346591955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:44 compute-0 nova_compute[251992]: 2025-12-06 07:43:44.038 251996 DEBUG nova.network.neutron [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Successfully updated port: 72ebf84f-114c-481c-8735-4ba8278ccfdb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:43:44 compute-0 nova_compute[251992]: 2025-12-06 07:43:44.060 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "refresh_cache-f3e780ab-f17f-4ecf-908b-16e88419d5f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:43:44 compute-0 nova_compute[251992]: 2025-12-06 07:43:44.060 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquired lock "refresh_cache-f3e780ab-f17f-4ecf-908b-16e88419d5f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:43:44 compute-0 nova_compute[251992]: 2025-12-06 07:43:44.061 251996 DEBUG nova.network.neutron [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:43:44 compute-0 nova_compute[251992]: 2025-12-06 07:43:44.118 251996 DEBUG nova.compute.manager [req-b1dfd0eb-f38a-4c1d-8ac0-a23eafa1db3c req-7f8ab03d-dad6-41b5-ac57-2be3e0f066ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Received event network-changed-72ebf84f-114c-481c-8735-4ba8278ccfdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:44 compute-0 nova_compute[251992]: 2025-12-06 07:43:44.118 251996 DEBUG nova.compute.manager [req-b1dfd0eb-f38a-4c1d-8ac0-a23eafa1db3c req-7f8ab03d-dad6-41b5-ac57-2be3e0f066ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Refreshing instance network info cache due to event network-changed-72ebf84f-114c-481c-8735-4ba8278ccfdb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:43:44 compute-0 nova_compute[251992]: 2025-12-06 07:43:44.118 251996 DEBUG oslo_concurrency.lockutils [req-b1dfd0eb-f38a-4c1d-8ac0-a23eafa1db3c req-7f8ab03d-dad6-41b5-ac57-2be3e0f066ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f3e780ab-f17f-4ecf-908b-16e88419d5f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:43:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:44.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:44 compute-0 nova_compute[251992]: 2025-12-06 07:43:44.213 251996 DEBUG nova.network.neutron [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:43:44 compute-0 nova_compute[251992]: 2025-12-06 07:43:44.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:44 compute-0 nova_compute[251992]: 2025-12-06 07:43:44.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:43:44 compute-0 nova_compute[251992]: 2025-12-06 07:43:44.732 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:43:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3709914878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:44.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 883 KiB/s rd, 6.3 MiB/s wr, 204 op/s
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.310 251996 DEBUG nova.network.neutron [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Updating instance_info_cache with network_info: [{"id": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "address": "fa:16:3e:6c:9c:27", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ebf84f-11", "ovs_interfaceid": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.317 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.338 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Releasing lock "refresh_cache-f3e780ab-f17f-4ecf-908b-16e88419d5f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.338 251996 DEBUG nova.compute.manager [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Instance network_info: |[{"id": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "address": "fa:16:3e:6c:9c:27", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ebf84f-11", "ovs_interfaceid": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.339 251996 DEBUG oslo_concurrency.lockutils [req-b1dfd0eb-f38a-4c1d-8ac0-a23eafa1db3c req-7f8ab03d-dad6-41b5-ac57-2be3e0f066ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f3e780ab-f17f-4ecf-908b-16e88419d5f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.339 251996 DEBUG nova.network.neutron [req-b1dfd0eb-f38a-4c1d-8ac0-a23eafa1db3c req-7f8ab03d-dad6-41b5-ac57-2be3e0f066ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Refreshing network info cache for port 72ebf84f-114c-481c-8735-4ba8278ccfdb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.344 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Start _get_guest_xml network_info=[{"id": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "address": "fa:16:3e:6c:9c:27", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ebf84f-11", "ovs_interfaceid": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.351 251996 WARNING nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.360 251996 DEBUG nova.virt.libvirt.host [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.361 251996 DEBUG nova.virt.libvirt.host [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.367 251996 DEBUG nova.virt.libvirt.host [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.368 251996 DEBUG nova.virt.libvirt.host [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.369 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.369 251996 DEBUG nova.virt.hardware [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.369 251996 DEBUG nova.virt.hardware [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.370 251996 DEBUG nova.virt.hardware [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.370 251996 DEBUG nova.virt.hardware [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.370 251996 DEBUG nova.virt.hardware [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.370 251996 DEBUG nova.virt.hardware [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.370 251996 DEBUG nova.virt.hardware [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.370 251996 DEBUG nova.virt.hardware [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.371 251996 DEBUG nova.virt.hardware [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.371 251996 DEBUG nova.virt.hardware [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.371 251996 DEBUG nova.virt.hardware [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.374 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.599 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:43:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3444232602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:45 compute-0 nova_compute[251992]: 2025-12-06 07:43:45.862 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:46 compute-0 ceph-mon[74339]: pgmap v2650: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 883 KiB/s rd, 6.3 MiB/s wr, 204 op/s
Dec 06 07:43:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3444232602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.041 251996 DEBUG nova.storage.rbd_utils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.045 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:46.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:43:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/138774656' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.523 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.525 251996 DEBUG nova.virt.libvirt.vif [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1333355293',display_name='tempest-AttachVolumeNegativeTest-server-1333355293',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1333355293',id=152,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEqz2bDAWUOfXde68r22NS0cm5MJs8rrPEKWOtfnlImrTM2XFzAu3ww59I+122hdwjnBS2JwHi0p2ZpnbGj6IZ0751PuMJQly9DdwP115KGFoCh/bvypbUECozGCIQ4h9A==',key_name='tempest-keypair-2045615674',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='741dc47f9ced423cbd99fd6f9d32904f',ramdisk_id='',reservation_id='r-d5i0f7ho',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-2080911030',owner_user_name='tempest-AttachVolumeNegativeTest-2080911030-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:43:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='297bc99c242e4fa8aedea4a6367b61c0',uuid=f3e780ab-f17f-4ecf-908b-16e88419d5f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "address": "fa:16:3e:6c:9c:27", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ebf84f-11", "ovs_interfaceid": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.525 251996 DEBUG nova.network.os_vif_util [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converting VIF {"id": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "address": "fa:16:3e:6c:9c:27", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ebf84f-11", "ovs_interfaceid": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.526 251996 DEBUG nova.network.os_vif_util [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=72ebf84f-114c-481c-8735-4ba8278ccfdb,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ebf84f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.527 251996 DEBUG nova.objects.instance [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'pci_devices' on Instance uuid f3e780ab-f17f-4ecf-908b-16e88419d5f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.545 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:43:46 compute-0 nova_compute[251992]:   <uuid>f3e780ab-f17f-4ecf-908b-16e88419d5f4</uuid>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   <name>instance-00000098</name>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <nova:name>tempest-AttachVolumeNegativeTest-server-1333355293</nova:name>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:43:45</nova:creationTime>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <nova:user uuid="297bc99c242e4fa8aedea4a6367b61c0">tempest-AttachVolumeNegativeTest-2080911030-project-member</nova:user>
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <nova:project uuid="741dc47f9ced423cbd99fd6f9d32904f">tempest-AttachVolumeNegativeTest-2080911030</nova:project>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <nova:port uuid="72ebf84f-114c-481c-8735-4ba8278ccfdb">
Dec 06 07:43:46 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <system>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <entry name="serial">f3e780ab-f17f-4ecf-908b-16e88419d5f4</entry>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <entry name="uuid">f3e780ab-f17f-4ecf-908b-16e88419d5f4</entry>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     </system>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   <os>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   </os>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   <features>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   </features>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk">
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       </source>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk.config">
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       </source>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:43:46 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:6c:9c:27"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <target dev="tap72ebf84f-11"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/f3e780ab-f17f-4ecf-908b-16e88419d5f4/console.log" append="off"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <video>
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     </video>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:43:46 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:43:46 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:43:46 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:43:46 compute-0 nova_compute[251992]: </domain>
Dec 06 07:43:46 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.546 251996 DEBUG nova.compute.manager [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Preparing to wait for external event network-vif-plugged-72ebf84f-114c-481c-8735-4ba8278ccfdb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.547 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.547 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.547 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.548 251996 DEBUG nova.virt.libvirt.vif [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1333355293',display_name='tempest-AttachVolumeNegativeTest-server-1333355293',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1333355293',id=152,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEqz2bDAWUOfXde68r22NS0cm5MJs8rrPEKWOtfnlImrTM2XFzAu3ww59I+122hdwjnBS2JwHi0p2ZpnbGj6IZ0751PuMJQly9DdwP115KGFoCh/bvypbUECozGCIQ4h9A==',key_name='tempest-keypair-2045615674',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='741dc47f9ced423cbd99fd6f9d32904f',ramdisk_id='',reservation_id='r-d5i0f7ho',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-2080911030',owner_user_name='tempest-AttachVolumeNegativeTest-2080911030-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:43:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='297bc99c242e4fa8aedea4a6367b61c0',uuid=f3e780ab-f17f-4ecf-908b-16e88419d5f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "address": "fa:16:3e:6c:9c:27", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ebf84f-11", "ovs_interfaceid": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.548 251996 DEBUG nova.network.os_vif_util [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converting VIF {"id": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "address": "fa:16:3e:6c:9c:27", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ebf84f-11", "ovs_interfaceid": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.549 251996 DEBUG nova.network.os_vif_util [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=72ebf84f-114c-481c-8735-4ba8278ccfdb,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ebf84f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.549 251996 DEBUG os_vif [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=72ebf84f-114c-481c-8735-4ba8278ccfdb,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ebf84f-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.550 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.550 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.551 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.555 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.555 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72ebf84f-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.556 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72ebf84f-11, col_values=(('external_ids', {'iface-id': '72ebf84f-114c-481c-8735-4ba8278ccfdb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:9c:27', 'vm-uuid': 'f3e780ab-f17f-4ecf-908b-16e88419d5f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.557 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:46 compute-0 NetworkManager[48965]: <info>  [1765007026.5587] manager: (tap72ebf84f-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.561 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.565 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.566 251996 INFO os_vif [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=72ebf84f-114c-481c-8735-4ba8278ccfdb,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ebf84f-11')
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.678 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.678 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.678 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No VIF found with MAC fa:16:3e:6c:9c:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.679 251996 INFO nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Using config drive
Dec 06 07:43:46 compute-0 nova_compute[251992]: 2025-12-06 07:43:46.705 251996 DEBUG nova.storage.rbd_utils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:46.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 203 op/s
Dec 06 07:43:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/138774656' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:47 compute-0 nova_compute[251992]: 2025-12-06 07:43:47.705 251996 INFO nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Creating config drive at /var/lib/nova/instances/f3e780ab-f17f-4ecf-908b-16e88419d5f4/disk.config
Dec 06 07:43:47 compute-0 nova_compute[251992]: 2025-12-06 07:43:47.712 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3e780ab-f17f-4ecf-908b-16e88419d5f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptn4z4zz3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:47 compute-0 nova_compute[251992]: 2025-12-06 07:43:47.846 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3e780ab-f17f-4ecf-908b-16e88419d5f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptn4z4zz3" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:47 compute-0 nova_compute[251992]: 2025-12-06 07:43:47.875 251996 DEBUG nova.storage.rbd_utils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] rbd image f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:47 compute-0 nova_compute[251992]: 2025-12-06 07:43:47.879 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3e780ab-f17f-4ecf-908b-16e88419d5f4/disk.config f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:48.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:48 compute-0 ceph-mon[74339]: pgmap v2651: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 203 op/s
Dec 06 07:43:48 compute-0 nova_compute[251992]: 2025-12-06 07:43:48.359 251996 DEBUG oslo_concurrency.processutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3e780ab-f17f-4ecf-908b-16e88419d5f4/disk.config f3e780ab-f17f-4ecf-908b-16e88419d5f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:48 compute-0 nova_compute[251992]: 2025-12-06 07:43:48.360 251996 INFO nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Deleting local config drive /var/lib/nova/instances/f3e780ab-f17f-4ecf-908b-16e88419d5f4/disk.config because it was imported into RBD.
Dec 06 07:43:48 compute-0 kernel: tap72ebf84f-11: entered promiscuous mode
Dec 06 07:43:48 compute-0 NetworkManager[48965]: <info>  [1765007028.4069] manager: (tap72ebf84f-11): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Dec 06 07:43:48 compute-0 ovn_controller[147168]: 2025-12-06T07:43:48Z|00538|binding|INFO|Claiming lport 72ebf84f-114c-481c-8735-4ba8278ccfdb for this chassis.
Dec 06 07:43:48 compute-0 ovn_controller[147168]: 2025-12-06T07:43:48Z|00539|binding|INFO|72ebf84f-114c-481c-8735-4ba8278ccfdb: Claiming fa:16:3e:6c:9c:27 10.100.0.11
Dec 06 07:43:48 compute-0 nova_compute[251992]: 2025-12-06 07:43:48.408 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.420 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:9c:27 10.100.0.11'], port_security=['fa:16:3e:6c:9c:27 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f3e780ab-f17f-4ecf-908b-16e88419d5f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '741dc47f9ced423cbd99fd6f9d32904f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '92065a2a-e95c-473c-bbd3-27c37f70c344', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a93df87f-d2df-4d3a-b692-98bba32f2fe1, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=72ebf84f-114c-481c-8735-4ba8278ccfdb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.422 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 72ebf84f-114c-481c-8735-4ba8278ccfdb in datapath 3c5d4817-c3d5-45fc-9890-418e779bacb2 bound to our chassis
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.423 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c5d4817-c3d5-45fc-9890-418e779bacb2
Dec 06 07:43:48 compute-0 systemd-udevd[347579]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.437 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[96f95f7a-9905-46da-be59-d6f9ccfba762]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.439 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c5d4817-c1 in ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:43:48 compute-0 systemd-machined[212986]: New machine qemu-69-instance-00000098.
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.441 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c5d4817-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.442 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ce70c0-3818-4c2a-af57-e2bc03db7fba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 NetworkManager[48965]: <info>  [1765007028.4440] device (tap72ebf84f-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.444 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8674545f-9cf6-4068-9e88-9927ba33d7d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 NetworkManager[48965]: <info>  [1765007028.4462] device (tap72ebf84f-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.459 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[8499c567-8f9f-4317-8fc7-60c2a6701995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-00000098.
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.481 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d468f1-9846-477c-b057-b10bb708f11d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 nova_compute[251992]: 2025-12-06 07:43:48.484 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:48 compute-0 ovn_controller[147168]: 2025-12-06T07:43:48Z|00540|binding|INFO|Setting lport 72ebf84f-114c-481c-8735-4ba8278ccfdb ovn-installed in OVS
Dec 06 07:43:48 compute-0 ovn_controller[147168]: 2025-12-06T07:43:48Z|00541|binding|INFO|Setting lport 72ebf84f-114c-481c-8735-4ba8278ccfdb up in Southbound
Dec 06 07:43:48 compute-0 nova_compute[251992]: 2025-12-06 07:43:48.490 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.510 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec104ed-e80e-46f7-931d-9666ca353eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.516 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5a07910d-99ea-405c-97f9-772287d95d5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 NetworkManager[48965]: <info>  [1765007028.5177] manager: (tap3c5d4817-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/247)
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.545 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[0c51ff8f-deb0-4f1c-9772-dabf40adb0c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.547 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d885d79e-d7d6-416e-ae26-16aafe615f92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 NetworkManager[48965]: <info>  [1765007028.5655] device (tap3c5d4817-c0): carrier: link connected
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.570 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[152f92a1-f430-4722-b01e-94e1a5f856f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.585 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3712d7fc-219b-4a8e-95c8-64d5d0e6ed8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c5d4817-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:51:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730115, 'reachable_time': 36462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347615, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.599 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7c0d8d-5c67-4fd9-bcd0-14cd629ce54e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:517c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 730115, 'tstamp': 730115}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347616, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.614 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[550d623c-d46d-48e4-82de-fa846ed2763d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c5d4817-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:51:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730115, 'reachable_time': 36462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347617, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.638 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0389b38d-d98a-4cf0-87cd-b95367598ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.685 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1c2d1d-bb88-47a7-87e2-195dfd25fecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.690 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c5d4817-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.690 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.691 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c5d4817-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:48 compute-0 NetworkManager[48965]: <info>  [1765007028.6937] manager: (tap3c5d4817-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Dec 06 07:43:48 compute-0 kernel: tap3c5d4817-c0: entered promiscuous mode
Dec 06 07:43:48 compute-0 nova_compute[251992]: 2025-12-06 07:43:48.695 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.695 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c5d4817-c0, col_values=(('external_ids', {'iface-id': 'dc336d05-182d-42ac-ab5e-a73bf30a0662'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:43:48 compute-0 ovn_controller[147168]: 2025-12-06T07:43:48Z|00542|binding|INFO|Releasing lport dc336d05-182d-42ac-ab5e-a73bf30a0662 from this chassis (sb_readonly=0)
Dec 06 07:43:48 compute-0 nova_compute[251992]: 2025-12-06 07:43:48.712 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.713 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c5d4817-c3d5-45fc-9890-418e779bacb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c5d4817-c3d5-45fc-9890-418e779bacb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.715 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[04a0142e-05e3-483f-93da-c78a92dc05fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.715 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-3c5d4817-c3d5-45fc-9890-418e779bacb2
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/3c5d4817-c3d5-45fc-9890-418e779bacb2.pid.haproxy
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 3c5d4817-c3d5-45fc-9890-418e779bacb2
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:43:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:43:48.716 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'env', 'PROCESS_TAG=haproxy-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c5d4817-c3d5-45fc-9890-418e779bacb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:43:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 459 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 228 op/s
Dec 06 07:43:49 compute-0 podman[347668]: 2025-12-06 07:43:49.066903659 +0000 UTC m=+0.050972859 container create 8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:43:49 compute-0 systemd[1]: Started libpod-conmon-8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7.scope.
Dec 06 07:43:49 compute-0 podman[347668]: 2025-12-06 07:43:49.039604101 +0000 UTC m=+0.023673391 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:43:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a90c8a1c1cef740e0217e5ed618daa7a1e7628c7b93c0e41d00101293a70ff5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.222 251996 DEBUG nova.network.neutron [req-b1dfd0eb-f38a-4c1d-8ac0-a23eafa1db3c req-7f8ab03d-dad6-41b5-ac57-2be3e0f066ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Updated VIF entry in instance network info cache for port 72ebf84f-114c-481c-8735-4ba8278ccfdb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.222 251996 DEBUG nova.network.neutron [req-b1dfd0eb-f38a-4c1d-8ac0-a23eafa1db3c req-7f8ab03d-dad6-41b5-ac57-2be3e0f066ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Updating instance_info_cache with network_info: [{"id": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "address": "fa:16:3e:6c:9c:27", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ebf84f-11", "ovs_interfaceid": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.239 251996 DEBUG oslo_concurrency.lockutils [req-b1dfd0eb-f38a-4c1d-8ac0-a23eafa1db3c req-7f8ab03d-dad6-41b5-ac57-2be3e0f066ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f3e780ab-f17f-4ecf-908b-16e88419d5f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:43:49 compute-0 podman[347668]: 2025-12-06 07:43:49.367288762 +0000 UTC m=+0.351357992 container init 8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:43:49 compute-0 podman[347668]: 2025-12-06 07:43:49.372781333 +0000 UTC m=+0.356850533 container start 8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 06 07:43:49 compute-0 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[347684]: [NOTICE]   (347688) : New worker (347690) forked
Dec 06 07:43:49 compute-0 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[347684]: [NOTICE]   (347688) : Loading success.
Dec 06 07:43:49 compute-0 ceph-mon[74339]: pgmap v2652: 305 pgs: 305 active+clean; 459 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 228 op/s
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.604 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007029.6042776, f3e780ab-f17f-4ecf-908b-16e88419d5f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.605 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] VM Started (Lifecycle Event)
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.632 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.636 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007029.6045291, f3e780ab-f17f-4ecf-908b-16e88419d5f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.637 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] VM Paused (Lifecycle Event)
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.662 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.665 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.681 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:43:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.954 251996 DEBUG nova.compute.manager [req-78206fed-a1fa-4166-afe9-0a6ef320c54c req-d76e730c-3c6a-4a6e-a3c2-627a2a75c1a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Received event network-vif-plugged-72ebf84f-114c-481c-8735-4ba8278ccfdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.955 251996 DEBUG oslo_concurrency.lockutils [req-78206fed-a1fa-4166-afe9-0a6ef320c54c req-d76e730c-3c6a-4a6e-a3c2-627a2a75c1a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.955 251996 DEBUG oslo_concurrency.lockutils [req-78206fed-a1fa-4166-afe9-0a6ef320c54c req-d76e730c-3c6a-4a6e-a3c2-627a2a75c1a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.955 251996 DEBUG oslo_concurrency.lockutils [req-78206fed-a1fa-4166-afe9-0a6ef320c54c req-d76e730c-3c6a-4a6e-a3c2-627a2a75c1a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.955 251996 DEBUG nova.compute.manager [req-78206fed-a1fa-4166-afe9-0a6ef320c54c req-d76e730c-3c6a-4a6e-a3c2-627a2a75c1a6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Processing event network-vif-plugged-72ebf84f-114c-481c-8735-4ba8278ccfdb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.956 251996 DEBUG nova.compute.manager [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.959 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007029.9596217, f3e780ab-f17f-4ecf-908b-16e88419d5f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.960 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] VM Resumed (Lifecycle Event)
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.962 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.966 251996 INFO nova.virt.libvirt.driver [-] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Instance spawned successfully.
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.967 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:43:49 compute-0 nova_compute[251992]: 2025-12-06 07:43:49.998 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.004 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.007 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.007 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.008 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.009 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.010 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.010 251996 DEBUG nova.virt.libvirt.driver [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.054 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:43:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:50.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.258 251996 INFO nova.compute.manager [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Took 11.44 seconds to spawn the instance on the hypervisor.
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.259 251996 DEBUG nova.compute.manager [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.337 251996 INFO nova.compute.manager [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Took 12.68 seconds to build instance.
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.373 251996 DEBUG oslo_concurrency.lockutils [None req-c44d4de1-d31c-465e-84f1-d8ac12f94cd0 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:50 compute-0 nova_compute[251992]: 2025-12-06 07:43:50.655 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:50.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 482 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 230 op/s
Dec 06 07:43:51 compute-0 podman[347725]: 2025-12-06 07:43:51.416693756 +0000 UTC m=+0.077337103 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 06 07:43:51 compute-0 nova_compute[251992]: 2025-12-06 07:43:51.557 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:52 compute-0 ceph-mon[74339]: pgmap v2653: 305 pgs: 305 active+clean; 482 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 230 op/s
Dec 06 07:43:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:52.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:52 compute-0 nova_compute[251992]: 2025-12-06 07:43:52.298 251996 DEBUG nova.compute.manager [req-cdc4bfce-5368-41d0-be3d-8a62330ba6d9 req-937ba427-576a-4173-ab80-7c302b6bf987 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Received event network-vif-plugged-72ebf84f-114c-481c-8735-4ba8278ccfdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:52 compute-0 nova_compute[251992]: 2025-12-06 07:43:52.299 251996 DEBUG oslo_concurrency.lockutils [req-cdc4bfce-5368-41d0-be3d-8a62330ba6d9 req-937ba427-576a-4173-ab80-7c302b6bf987 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:52 compute-0 nova_compute[251992]: 2025-12-06 07:43:52.299 251996 DEBUG oslo_concurrency.lockutils [req-cdc4bfce-5368-41d0-be3d-8a62330ba6d9 req-937ba427-576a-4173-ab80-7c302b6bf987 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:52 compute-0 nova_compute[251992]: 2025-12-06 07:43:52.299 251996 DEBUG oslo_concurrency.lockutils [req-cdc4bfce-5368-41d0-be3d-8a62330ba6d9 req-937ba427-576a-4173-ab80-7c302b6bf987 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:52 compute-0 nova_compute[251992]: 2025-12-06 07:43:52.299 251996 DEBUG nova.compute.manager [req-cdc4bfce-5368-41d0-be3d-8a62330ba6d9 req-937ba427-576a-4173-ab80-7c302b6bf987 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] No waiting events found dispatching network-vif-plugged-72ebf84f-114c-481c-8735-4ba8278ccfdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:43:52 compute-0 nova_compute[251992]: 2025-12-06 07:43:52.300 251996 WARNING nova.compute.manager [req-cdc4bfce-5368-41d0-be3d-8a62330ba6d9 req-937ba427-576a-4173-ab80-7c302b6bf987 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Received unexpected event network-vif-plugged-72ebf84f-114c-481c-8735-4ba8278ccfdb for instance with vm_state active and task_state None.
Dec 06 07:43:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:52.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 497 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 190 op/s
Dec 06 07:43:54 compute-0 NetworkManager[48965]: <info>  [1765007034.1177] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Dec 06 07:43:54 compute-0 nova_compute[251992]: 2025-12-06 07:43:54.114 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:54 compute-0 NetworkManager[48965]: <info>  [1765007034.1194] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Dec 06 07:43:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:54.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:54 compute-0 nova_compute[251992]: 2025-12-06 07:43:54.339 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:54 compute-0 ovn_controller[147168]: 2025-12-06T07:43:54Z|00543|binding|INFO|Releasing lport dc336d05-182d-42ac-ab5e-a73bf30a0662 from this chassis (sb_readonly=0)
Dec 06 07:43:54 compute-0 nova_compute[251992]: 2025-12-06 07:43:54.361 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:54 compute-0 nova_compute[251992]: 2025-12-06 07:43:54.493 251996 DEBUG nova.compute.manager [req-fd714dee-ddd7-4008-b073-e044da2ddb7c req-0fbf57cd-dcc1-4432-904b-42e07c17eb96 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Received event network-changed-72ebf84f-114c-481c-8735-4ba8278ccfdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:43:54 compute-0 nova_compute[251992]: 2025-12-06 07:43:54.494 251996 DEBUG nova.compute.manager [req-fd714dee-ddd7-4008-b073-e044da2ddb7c req-0fbf57cd-dcc1-4432-904b-42e07c17eb96 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Refreshing instance network info cache due to event network-changed-72ebf84f-114c-481c-8735-4ba8278ccfdb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:43:54 compute-0 nova_compute[251992]: 2025-12-06 07:43:54.494 251996 DEBUG oslo_concurrency.lockutils [req-fd714dee-ddd7-4008-b073-e044da2ddb7c req-0fbf57cd-dcc1-4432-904b-42e07c17eb96 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f3e780ab-f17f-4ecf-908b-16e88419d5f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:43:54 compute-0 nova_compute[251992]: 2025-12-06 07:43:54.494 251996 DEBUG oslo_concurrency.lockutils [req-fd714dee-ddd7-4008-b073-e044da2ddb7c req-0fbf57cd-dcc1-4432-904b-42e07c17eb96 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f3e780ab-f17f-4ecf-908b-16e88419d5f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:43:54 compute-0 nova_compute[251992]: 2025-12-06 07:43:54.494 251996 DEBUG nova.network.neutron [req-fd714dee-ddd7-4008-b073-e044da2ddb7c req-0fbf57cd-dcc1-4432-904b-42e07c17eb96 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Refreshing network info cache for port 72ebf84f-114c-481c-8735-4ba8278ccfdb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:43:54 compute-0 ceph-mon[74339]: pgmap v2654: 305 pgs: 305 active+clean; 497 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 190 op/s
Dec 06 07:43:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:43:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:54.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 497 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.1 MiB/s wr, 278 op/s
Dec 06 07:43:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2032903449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:55 compute-0 ceph-mon[74339]: pgmap v2655: 305 pgs: 305 active+clean; 497 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.1 MiB/s wr, 278 op/s
Dec 06 07:43:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3309694968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:43:55 compute-0 nova_compute[251992]: 2025-12-06 07:43:55.659 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:56.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:56 compute-0 nova_compute[251992]: 2025-12-06 07:43:56.469 251996 DEBUG nova.network.neutron [req-fd714dee-ddd7-4008-b073-e044da2ddb7c req-0fbf57cd-dcc1-4432-904b-42e07c17eb96 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Updated VIF entry in instance network info cache for port 72ebf84f-114c-481c-8735-4ba8278ccfdb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:43:56 compute-0 nova_compute[251992]: 2025-12-06 07:43:56.469 251996 DEBUG nova.network.neutron [req-fd714dee-ddd7-4008-b073-e044da2ddb7c req-0fbf57cd-dcc1-4432-904b-42e07c17eb96 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Updating instance_info_cache with network_info: [{"id": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "address": "fa:16:3e:6c:9c:27", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ebf84f-11", "ovs_interfaceid": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:43:56 compute-0 nova_compute[251992]: 2025-12-06 07:43:56.493 251996 DEBUG oslo_concurrency.lockutils [req-fd714dee-ddd7-4008-b073-e044da2ddb7c req-0fbf57cd-dcc1-4432-904b-42e07c17eb96 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f3e780ab-f17f-4ecf-908b-16e88419d5f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:43:56 compute-0 nova_compute[251992]: 2025-12-06 07:43:56.560 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:43:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:56.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:56 compute-0 nova_compute[251992]: 2025-12-06 07:43:56.969 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:56 compute-0 nova_compute[251992]: 2025-12-06 07:43:56.970 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 471 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 290 op/s
Dec 06 07:43:56 compute-0 nova_compute[251992]: 2025-12-06 07:43:56.993 251996 DEBUG nova.compute.manager [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.075 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.076 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.084 251996 DEBUG nova.virt.hardware [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.084 251996 INFO nova.compute.claims [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.184 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:43:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1459538352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.647 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.654 251996 DEBUG nova.compute.provider_tree [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.705 251996 DEBUG nova.scheduler.client.report [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.734 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.735 251996 DEBUG nova.compute.manager [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.811 251996 DEBUG nova.compute.manager [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.812 251996 DEBUG nova.network.neutron [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.836 251996 INFO nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.864 251996 DEBUG nova.compute.manager [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.989 251996 DEBUG nova.compute.manager [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.995 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:43:57 compute-0 nova_compute[251992]: 2025-12-06 07:43:57.996 251996 INFO nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Creating image(s)
Dec 06 07:43:58 compute-0 nova_compute[251992]: 2025-12-06 07:43:58.039 251996 DEBUG nova.storage.rbd_utils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:58 compute-0 nova_compute[251992]: 2025-12-06 07:43:58.079 251996 DEBUG nova.storage.rbd_utils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:43:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:43:58.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:43:58 compute-0 nova_compute[251992]: 2025-12-06 07:43:58.193 251996 DEBUG nova.storage.rbd_utils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:58 compute-0 nova_compute[251992]: 2025-12-06 07:43:58.196 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:58 compute-0 nova_compute[251992]: 2025-12-06 07:43:58.227 251996 DEBUG nova.policy [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e997a5eeee174b368a43ed8cb35fa1d0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f44ecb8bdc7e4692a299e29603301124', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:43:58 compute-0 ceph-mon[74339]: pgmap v2656: 305 pgs: 305 active+clean; 471 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 290 op/s
Dec 06 07:43:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1459538352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3852900960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:43:58 compute-0 nova_compute[251992]: 2025-12-06 07:43:58.279 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:43:58 compute-0 nova_compute[251992]: 2025-12-06 07:43:58.280 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:43:58 compute-0 nova_compute[251992]: 2025-12-06 07:43:58.281 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:43:58 compute-0 nova_compute[251992]: 2025-12-06 07:43:58.281 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:43:58 compute-0 nova_compute[251992]: 2025-12-06 07:43:58.304 251996 DEBUG nova.storage.rbd_utils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:43:58 compute-0 nova_compute[251992]: 2025-12-06 07:43:58.308 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:43:58 compute-0 podman[347852]: 2025-12-06 07:43:58.419916428 +0000 UTC m=+0.064326416 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:43:58 compute-0 podman[347854]: 2025-12-06 07:43:58.420271297 +0000 UTC m=+0.067834732 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec 06 07:43:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:43:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:43:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:43:58.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:43:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.8 MiB/s wr, 283 op/s
Dec 06 07:43:59 compute-0 nova_compute[251992]: 2025-12-06 07:43:59.143 251996 DEBUG nova.network.neutron [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Successfully created port: 450480d9-e0c3-414d-ba7e-8b996711a653 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:43:59 compute-0 ceph-mon[74339]: pgmap v2657: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.8 MiB/s wr, 283 op/s
Dec 06 07:43:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:00.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:00 compute-0 nova_compute[251992]: 2025-12-06 07:44:00.328 251996 DEBUG nova.network.neutron [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Successfully updated port: 450480d9-e0c3-414d-ba7e-8b996711a653 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:44:00 compute-0 nova_compute[251992]: 2025-12-06 07:44:00.353 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:44:00 compute-0 nova_compute[251992]: 2025-12-06 07:44:00.354 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquired lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:44:00 compute-0 nova_compute[251992]: 2025-12-06 07:44:00.354 251996 DEBUG nova.network.neutron [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:44:00 compute-0 nova_compute[251992]: 2025-12-06 07:44:00.370 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:00 compute-0 nova_compute[251992]: 2025-12-06 07:44:00.417 251996 DEBUG nova.compute.manager [req-c8d4da6b-b01e-4953-80b7-fd3ae278bd4d req-4d40e8e2-8ce4-4064-9dfb-a667d85b1bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-changed-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:00 compute-0 nova_compute[251992]: 2025-12-06 07:44:00.417 251996 DEBUG nova.compute.manager [req-c8d4da6b-b01e-4953-80b7-fd3ae278bd4d req-4d40e8e2-8ce4-4064-9dfb-a667d85b1bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Refreshing instance network info cache due to event network-changed-450480d9-e0c3-414d-ba7e-8b996711a653. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:44:00 compute-0 nova_compute[251992]: 2025-12-06 07:44:00.417 251996 DEBUG oslo_concurrency.lockutils [req-c8d4da6b-b01e-4953-80b7-fd3ae278bd4d req-4d40e8e2-8ce4-4064-9dfb-a667d85b1bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:44:00 compute-0 nova_compute[251992]: 2025-12-06 07:44:00.461 251996 DEBUG nova.storage.rbd_utils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] resizing rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:44:00 compute-0 sudo[347962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:00 compute-0 sudo[347962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:00 compute-0 sudo[347962]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:00 compute-0 sudo[347987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:00 compute-0 sudo[347987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:00 compute-0 sudo[347987]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:00 compute-0 nova_compute[251992]: 2025-12-06 07:44:00.619 251996 DEBUG nova.network.neutron [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:44:00 compute-0 nova_compute[251992]: 2025-12-06 07:44:00.660 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:00.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 441 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.2 MiB/s wr, 266 op/s
Dec 06 07:44:01 compute-0 nova_compute[251992]: 2025-12-06 07:44:01.562 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:01 compute-0 nova_compute[251992]: 2025-12-06 07:44:01.764 251996 DEBUG nova.network.neutron [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Updating instance_info_cache with network_info: [{"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:44:01 compute-0 nova_compute[251992]: 2025-12-06 07:44:01.801 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Releasing lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:44:01 compute-0 nova_compute[251992]: 2025-12-06 07:44:01.802 251996 DEBUG nova.compute.manager [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Instance network_info: |[{"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:44:01 compute-0 nova_compute[251992]: 2025-12-06 07:44:01.803 251996 DEBUG oslo_concurrency.lockutils [req-c8d4da6b-b01e-4953-80b7-fd3ae278bd4d req-4d40e8e2-8ce4-4064-9dfb-a667d85b1bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:44:01 compute-0 nova_compute[251992]: 2025-12-06 07:44:01.803 251996 DEBUG nova.network.neutron [req-c8d4da6b-b01e-4953-80b7-fd3ae278bd4d req-4d40e8e2-8ce4-4064-9dfb-a667d85b1bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Refreshing network info cache for port 450480d9-e0c3-414d-ba7e-8b996711a653 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:44:01 compute-0 ceph-mon[74339]: pgmap v2658: 305 pgs: 305 active+clean; 441 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.2 MiB/s wr, 266 op/s
Dec 06 07:44:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:02.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.180 251996 DEBUG nova.objects.instance [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'migration_context' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.205 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.206 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Ensure instance console log exists: /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.206 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.206 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.207 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.209 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Start _get_guest_xml network_info=[{"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.214 251996 WARNING nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.220 251996 DEBUG nova.virt.libvirt.host [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.220 251996 DEBUG nova.virt.libvirt.host [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.224 251996 DEBUG nova.virt.libvirt.host [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.224 251996 DEBUG nova.virt.libvirt.host [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.225 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.226 251996 DEBUG nova.virt.hardware [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.226 251996 DEBUG nova.virt.hardware [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.226 251996 DEBUG nova.virt.hardware [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.227 251996 DEBUG nova.virt.hardware [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.227 251996 DEBUG nova.virt.hardware [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.227 251996 DEBUG nova.virt.hardware [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.227 251996 DEBUG nova.virt.hardware [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.228 251996 DEBUG nova.virt.hardware [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.228 251996 DEBUG nova.virt.hardware [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.228 251996 DEBUG nova.virt.hardware [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.229 251996 DEBUG nova.virt.hardware [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.231 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:44:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/158173877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.673 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.699 251996 DEBUG nova.storage.rbd_utils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:02 compute-0 nova_compute[251992]: 2025-12-06 07:44:02.703 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/158173877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:02.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 452 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 279 op/s
Dec 06 07:44:03 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #49. Immutable memtables: 0.
Dec 06 07:44:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:44:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2504808701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.161 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.163 251996 DEBUG nova.virt.libvirt.vif [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:43:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1644415942',display_name='tempest-ServerStableDeviceRescueTest-server-1644415942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1644415942',id=154,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-z5s6nndr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:43:57Z,user_data=None,user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=53cabacd-b2a5-4ad1-a97a-0d0710d43bf9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.163 251996 DEBUG nova.network.os_vif_util [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.164 251996 DEBUG nova.network.os_vif_util [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:3b:e9,bridge_name='br-int',has_traffic_filtering=True,id=450480d9-e0c3-414d-ba7e-8b996711a653,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap450480d9-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.166 251996 DEBUG nova.objects.instance [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'pci_devices' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.190 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:44:03 compute-0 nova_compute[251992]:   <uuid>53cabacd-b2a5-4ad1-a97a-0d0710d43bf9</uuid>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   <name>instance-0000009a</name>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-1644415942</nova:name>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:44:02</nova:creationTime>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <nova:user uuid="e997a5eeee174b368a43ed8cb35fa1d0">tempest-ServerStableDeviceRescueTest-1830949011-project-member</nova:user>
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <nova:project uuid="f44ecb8bdc7e4692a299e29603301124">tempest-ServerStableDeviceRescueTest-1830949011</nova:project>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <nova:port uuid="450480d9-e0c3-414d-ba7e-8b996711a653">
Dec 06 07:44:03 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <system>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <entry name="serial">53cabacd-b2a5-4ad1-a97a-0d0710d43bf9</entry>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <entry name="uuid">53cabacd-b2a5-4ad1-a97a-0d0710d43bf9</entry>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     </system>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   <os>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   </os>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   <features>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   </features>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk">
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       </source>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.config">
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       </source>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:44:03 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:ed:3b:e9"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <target dev="tap450480d9-e0"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/console.log" append="off"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <video>
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     </video>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:44:03 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:44:03 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:44:03 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:44:03 compute-0 nova_compute[251992]: </domain>
Dec 06 07:44:03 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.191 251996 DEBUG nova.compute.manager [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Preparing to wait for external event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.192 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.192 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.192 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.193 251996 DEBUG nova.virt.libvirt.vif [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:43:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1644415942',display_name='tempest-ServerStableDeviceRescueTest-server-1644415942',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1644415942',id=154,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-z5s6nndr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:43:57Z,user_data=None,user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=53cabacd-b2a5-4ad1-a97a-0d0710d43bf9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.193 251996 DEBUG nova.network.os_vif_util [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.194 251996 DEBUG nova.network.os_vif_util [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:3b:e9,bridge_name='br-int',has_traffic_filtering=True,id=450480d9-e0c3-414d-ba7e-8b996711a653,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap450480d9-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.194 251996 DEBUG os_vif [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:3b:e9,bridge_name='br-int',has_traffic_filtering=True,id=450480d9-e0c3-414d-ba7e-8b996711a653,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap450480d9-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.195 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.196 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.196 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.200 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.200 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap450480d9-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.201 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap450480d9-e0, col_values=(('external_ids', {'iface-id': '450480d9-e0c3-414d-ba7e-8b996711a653', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:3b:e9', 'vm-uuid': '53cabacd-b2a5-4ad1-a97a-0d0710d43bf9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.267 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:03 compute-0 NetworkManager[48965]: <info>  [1765007043.2681] manager: (tap450480d9-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.270 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.274 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.276 251996 INFO os_vif [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:3b:e9,bridge_name='br-int',has_traffic_filtering=True,id=450480d9-e0c3-414d-ba7e-8b996711a653,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap450480d9-e0')
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.296 251996 DEBUG nova.network.neutron [req-c8d4da6b-b01e-4953-80b7-fd3ae278bd4d req-4d40e8e2-8ce4-4064-9dfb-a667d85b1bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Updated VIF entry in instance network info cache for port 450480d9-e0c3-414d-ba7e-8b996711a653. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.296 251996 DEBUG nova.network.neutron [req-c8d4da6b-b01e-4953-80b7-fd3ae278bd4d req-4d40e8e2-8ce4-4064-9dfb-a667d85b1bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Updating instance_info_cache with network_info: [{"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.323 251996 DEBUG oslo_concurrency.lockutils [req-c8d4da6b-b01e-4953-80b7-fd3ae278bd4d req-4d40e8e2-8ce4-4064-9dfb-a667d85b1bff 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.470 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.470 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.470 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No VIF found with MAC fa:16:3e:ed:3b:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.473 251996 INFO nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Using config drive
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.496 251996 DEBUG nova.storage.rbd_utils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:03.852 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:03.853 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:03.854 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.909 251996 INFO nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Creating config drive at /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/disk.config
Dec 06 07:44:03 compute-0 nova_compute[251992]: 2025-12-06 07:44:03.918 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfw2_0h0_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:03 compute-0 ceph-mon[74339]: pgmap v2659: 305 pgs: 305 active+clean; 452 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 279 op/s
Dec 06 07:44:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2504808701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/561146328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:04 compute-0 nova_compute[251992]: 2025-12-06 07:44:04.058 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfw2_0h0_" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:04 compute-0 nova_compute[251992]: 2025-12-06 07:44:04.115 251996 DEBUG nova.storage.rbd_utils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:04 compute-0 nova_compute[251992]: 2025-12-06 07:44:04.119 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/disk.config 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:04.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:04.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.5 MiB/s wr, 337 op/s
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.194 251996 DEBUG oslo_concurrency.processutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/disk.config 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.195 251996 INFO nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Deleting local config drive /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/disk.config because it was imported into RBD.
Dec 06 07:44:05 compute-0 kernel: tap450480d9-e0: entered promiscuous mode
Dec 06 07:44:05 compute-0 NetworkManager[48965]: <info>  [1765007045.2438] manager: (tap450480d9-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/252)
Dec 06 07:44:05 compute-0 ovn_controller[147168]: 2025-12-06T07:44:05Z|00544|binding|INFO|Claiming lport 450480d9-e0c3-414d-ba7e-8b996711a653 for this chassis.
Dec 06 07:44:05 compute-0 ovn_controller[147168]: 2025-12-06T07:44:05Z|00545|binding|INFO|450480d9-e0c3-414d-ba7e-8b996711a653: Claiming fa:16:3e:ed:3b:e9 10.100.0.3
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.248 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.255 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:3b:e9 10.100.0.3'], port_security=['fa:16:3e:ed:3b:e9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '53cabacd-b2a5-4ad1-a97a-0d0710d43bf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=450480d9-e0c3-414d-ba7e-8b996711a653) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.257 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 450480d9-e0c3-414d-ba7e-8b996711a653 in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 bound to our chassis
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.260 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:44:05 compute-0 ovn_controller[147168]: 2025-12-06T07:44:05Z|00546|binding|INFO|Setting lport 450480d9-e0c3-414d-ba7e-8b996711a653 ovn-installed in OVS
Dec 06 07:44:05 compute-0 ovn_controller[147168]: 2025-12-06T07:44:05Z|00547|binding|INFO|Setting lport 450480d9-e0c3-414d-ba7e-8b996711a653 up in Southbound
Dec 06 07:44:05 compute-0 systemd-udevd[348170]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.291 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:05 compute-0 systemd-machined[212986]: New machine qemu-70-instance-0000009a.
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.290 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4f7deb3a-5fd2-41ad-a202-0943cd6b421a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.294 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d1a17d6-51 in ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.296 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d1a17d6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.296 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[74eb01d2-442c-4eec-a9ca-3dc78f409ad9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.297 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bf98cdc9-c767-4f1d-98cd-163822ddf0f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 NetworkManager[48965]: <info>  [1765007045.3082] device (tap450480d9-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:44:05 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-0000009a.
Dec 06 07:44:05 compute-0 NetworkManager[48965]: <info>  [1765007045.3090] device (tap450480d9-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.316 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[d15ff8b8-2938-4ce0-8dd3-a8cf989ed79c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.342 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f208c307-e32b-4ead-9cdf-5fca401a0319]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.376 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[cea023c2-6019-4967-b279-5423cc7ac8a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 NetworkManager[48965]: <info>  [1765007045.3847] manager: (tap6d1a17d6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/253)
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.385 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fef92675-38ff-455b-8a66-90bee5eb8a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.418 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddd49db-863e-4055-95b4-7e51a024ad07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.424 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[b513a38c-e236-4409-9146-0fc2ecf08a9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ceph-mon[74339]: pgmap v2660: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.5 MiB/s wr, 337 op/s
Dec 06 07:44:05 compute-0 NetworkManager[48965]: <info>  [1765007045.4474] device (tap6d1a17d6-50): carrier: link connected
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.454 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d0b236-8b6d-46c0-b9c8-ea5ca305b1a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.473 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9f414c20-18f6-44f9-a2df-e83991a44467]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731803, 'reachable_time': 22376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348202, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.490 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[060946a6-afc2-410c-8b66-4093e6a0c3ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:a2f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 731803, 'tstamp': 731803}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348203, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.517 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[008bab52-368a-446e-87ea-689eabc2163e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731803, 'reachable_time': 22376, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348204, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.548 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e6666d-5d40-4e19-bd98-eb6dee98145c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.600 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8a29652d-31a4-4c8f-bcc1-ea811839b73b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.601 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.602 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.602 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.604 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:05 compute-0 NetworkManager[48965]: <info>  [1765007045.6050] manager: (tap6d1a17d6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Dec 06 07:44:05 compute-0 kernel: tap6d1a17d6-50: entered promiscuous mode
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.607 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.608 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:05 compute-0 ovn_controller[147168]: 2025-12-06T07:44:05Z|00548|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.625 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.627 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.628 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3e700c-9f4b-4255-9efd-59e1d7cc4efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.628 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:44:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:05.630 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'env', 'PROCESS_TAG=haproxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d1a17d6-5e44-40b7-832a-81cb86c02e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.661 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.948 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007045.9482749, 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.949 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] VM Started (Lifecycle Event)
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.967 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.972 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007045.9484363, 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.972 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] VM Paused (Lifecycle Event)
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.990 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:05 compute-0 nova_compute[251992]: 2025-12-06 07:44:05.993 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.010 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:44:06 compute-0 podman[348278]: 2025-12-06 07:44:05.95343861 +0000 UTC m=+0.020882893 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:44:06 compute-0 podman[348278]: 2025-12-06 07:44:06.148256906 +0000 UTC m=+0.215701169 container create b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:44:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:06.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:06 compute-0 systemd[1]: Started libpod-conmon-b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c.scope.
Dec 06 07:44:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc3410475ab6911d4da190c38062219c9e8f306533aa6a30e66b2900b0b6626/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:06 compute-0 podman[348278]: 2025-12-06 07:44:06.296280398 +0000 UTC m=+0.363724691 container init b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:44:06 compute-0 podman[348278]: 2025-12-06 07:44:06.301743758 +0000 UTC m=+0.369188021 container start b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:44:06 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[348293]: [NOTICE]   (348297) : New worker (348299) forked
Dec 06 07:44:06 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[348293]: [NOTICE]   (348297) : Loading success.
Dec 06 07:44:06 compute-0 ovn_controller[147168]: 2025-12-06T07:44:06Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:9c:27 10.100.0.11
Dec 06 07:44:06 compute-0 ovn_controller[147168]: 2025-12-06T07:44:06Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:9c:27 10.100.0.11
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.773 251996 DEBUG nova.compute.manager [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.774 251996 DEBUG oslo_concurrency.lockutils [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.774 251996 DEBUG oslo_concurrency.lockutils [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.774 251996 DEBUG oslo_concurrency.lockutils [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.775 251996 DEBUG nova.compute.manager [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Processing event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.775 251996 DEBUG nova.compute.manager [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.775 251996 DEBUG oslo_concurrency.lockutils [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.775 251996 DEBUG oslo_concurrency.lockutils [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.775 251996 DEBUG oslo_concurrency.lockutils [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.775 251996 DEBUG nova.compute.manager [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] No waiting events found dispatching network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.776 251996 WARNING nova.compute.manager [req-779821ab-0af6-4756-85f5-5ceacff18da3 req-a60d5952-d78f-4281-a327-de0e49a34a1d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received unexpected event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 for instance with vm_state building and task_state spawning.
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.776 251996 DEBUG nova.compute.manager [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.779 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007046.779502, 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.779 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] VM Resumed (Lifecycle Event)
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.781 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.783 251996 INFO nova.virt.libvirt.driver [-] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Instance spawned successfully.
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.783 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.805 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.809 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.809 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.810 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.810 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.810 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.811 251996 DEBUG nova.virt.libvirt.driver [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.817 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.849 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.873 251996 INFO nova.compute.manager [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Took 8.88 seconds to spawn the instance on the hypervisor.
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.874 251996 DEBUG nova.compute.manager [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.938 251996 INFO nova.compute.manager [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Took 9.89 seconds to build instance.
Dec 06 07:44:06 compute-0 nova_compute[251992]: 2025-12-06 07:44:06.952 251996 DEBUG oslo_concurrency.lockutils [None req-ac51a80e-3707-41f0-9775-3acfd8940798 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:06.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.2 MiB/s wr, 249 op/s
Dec 06 07:44:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:08.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:08 compute-0 nova_compute[251992]: 2025-12-06 07:44:08.269 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:08 compute-0 ceph-mon[74339]: pgmap v2661: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.2 MiB/s wr, 249 op/s
Dec 06 07:44:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:08.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 457 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.6 MiB/s wr, 288 op/s
Dec 06 07:44:09 compute-0 nova_compute[251992]: 2025-12-06 07:44:09.105 251996 DEBUG nova.compute.manager [None req-51a1e7d9-3db9-48b0-920e-21fa95008805 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:09 compute-0 nova_compute[251992]: 2025-12-06 07:44:09.154 251996 INFO nova.compute.manager [None req-51a1e7d9-3db9-48b0-920e-21fa95008805 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] instance snapshotting
Dec 06 07:44:09 compute-0 nova_compute[251992]: 2025-12-06 07:44:09.399 251996 INFO nova.virt.libvirt.driver [None req-51a1e7d9-3db9-48b0-920e-21fa95008805 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Beginning live snapshot process
Dec 06 07:44:09 compute-0 nova_compute[251992]: 2025-12-06 07:44:09.539 251996 DEBUG nova.virt.libvirt.imagebackend [None req-51a1e7d9-3db9-48b0-920e-21fa95008805 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:44:09 compute-0 nova_compute[251992]: 2025-12-06 07:44:09.709 251996 DEBUG nova.storage.rbd_utils [None req-51a1e7d9-3db9-48b0-920e-21fa95008805 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] creating snapshot(d4bc3b79e6ff427c9a22bada9ce55a8d) on rbd image(53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:44:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/895270717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:44:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/895270717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:44:09 compute-0 ceph-mon[74339]: pgmap v2662: 305 pgs: 305 active+clean; 457 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.6 MiB/s wr, 288 op/s
Dec 06 07:44:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2028827212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:10.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:10 compute-0 nova_compute[251992]: 2025-12-06 07:44:10.664 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Dec 06 07:44:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:10.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 458 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.6 MiB/s wr, 299 op/s
Dec 06 07:44:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Dec 06 07:44:11 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Dec 06 07:44:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2461673043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:12.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:12 compute-0 nova_compute[251992]: 2025-12-06 07:44:12.946 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:12 compute-0 nova_compute[251992]: 2025-12-06 07:44:12.966 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Triggering sync for uuid f3e780ab-f17f-4ecf-908b-16e88419d5f4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:44:12 compute-0 nova_compute[251992]: 2025-12-06 07:44:12.966 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Triggering sync for uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 07:44:12 compute-0 nova_compute[251992]: 2025-12-06 07:44:12.967 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:12 compute-0 nova_compute[251992]: 2025-12-06 07:44:12.967 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:12 compute-0 nova_compute[251992]: 2025-12-06 07:44:12.968 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:12 compute-0 nova_compute[251992]: 2025-12-06 07:44:12.968 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:12 compute-0 nova_compute[251992]: 2025-12-06 07:44:12.969 251996 INFO nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] During sync_power_state the instance has a pending task (image_uploading). Skip.
Dec 06 07:44:12 compute-0 nova_compute[251992]: 2025-12-06 07:44:12.969 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:12.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 462 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.9 MiB/s wr, 306 op/s
Dec 06 07:44:13 compute-0 nova_compute[251992]: 2025-12-06 07:44:12.999 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:44:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:44:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:44:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:44:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:44:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:44:13 compute-0 nova_compute[251992]: 2025-12-06 07:44:13.273 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:13 compute-0 ceph-mon[74339]: pgmap v2663: 305 pgs: 305 active+clean; 458 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.6 MiB/s wr, 299 op/s
Dec 06 07:44:13 compute-0 ceph-mon[74339]: osdmap e333: 3 total, 3 up, 3 in
Dec 06 07:44:13 compute-0 ceph-mon[74339]: pgmap v2665: 305 pgs: 305 active+clean; 462 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.9 MiB/s wr, 306 op/s
Dec 06 07:44:14 compute-0 nova_compute[251992]: 2025-12-06 07:44:14.121 251996 DEBUG nova.storage.rbd_utils [None req-51a1e7d9-3db9-48b0-920e-21fa95008805 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] cloning vms/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk@d4bc3b79e6ff427c9a22bada9ce55a8d to images/9b2bfd1e-afa4-4eb9-a3eb-6b2d196bc5e9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:44:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:14.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:14 compute-0 nova_compute[251992]: 2025-12-06 07:44:14.420 251996 DEBUG nova.storage.rbd_utils [None req-51a1e7d9-3db9-48b0-920e-21fa95008805 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] flattening images/9b2bfd1e-afa4-4eb9-a3eb-6b2d196bc5e9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:44:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:14 compute-0 nova_compute[251992]: 2025-12-06 07:44:14.815 251996 DEBUG nova.storage.rbd_utils [None req-51a1e7d9-3db9-48b0-920e-21fa95008805 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] removing snapshot(d4bc3b79e6ff427c9a22bada9ce55a8d) on rbd image(53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:44:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:14.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 487 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.2 MiB/s wr, 277 op/s
Dec 06 07:44:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Dec 06 07:44:15 compute-0 ceph-mon[74339]: pgmap v2666: 305 pgs: 305 active+clean; 487 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.2 MiB/s wr, 277 op/s
Dec 06 07:44:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Dec 06 07:44:15 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Dec 06 07:44:15 compute-0 nova_compute[251992]: 2025-12-06 07:44:15.314 251996 DEBUG nova.storage.rbd_utils [None req-51a1e7d9-3db9-48b0-920e-21fa95008805 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] creating snapshot(snap) on rbd image(9b2bfd1e-afa4-4eb9-a3eb-6b2d196bc5e9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:44:15 compute-0 nova_compute[251992]: 2025-12-06 07:44:15.665 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Dec 06 07:44:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:16.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:16 compute-0 ceph-mon[74339]: osdmap e334: 3 total, 3 up, 3 in
Dec 06 07:44:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3727198498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Dec 06 07:44:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Dec 06 07:44:16 compute-0 nova_compute[251992]: 2025-12-06 07:44:16.794 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:16.794 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:44:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:16.796 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:44:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:16.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 517 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.1 MiB/s wr, 254 op/s
Dec 06 07:44:17 compute-0 nova_compute[251992]: 2025-12-06 07:44:17.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:17 compute-0 ceph-mon[74339]: osdmap e335: 3 total, 3 up, 3 in
Dec 06 07:44:17 compute-0 ceph-mon[74339]: pgmap v2669: 305 pgs: 305 active+clean; 517 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.1 MiB/s wr, 254 op/s
Dec 06 07:44:17 compute-0 nova_compute[251992]: 2025-12-06 07:44:17.862 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:17 compute-0 nova_compute[251992]: 2025-12-06 07:44:17.863 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:17 compute-0 nova_compute[251992]: 2025-12-06 07:44:17.863 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:17 compute-0 nova_compute[251992]: 2025-12-06 07:44:17.864 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:44:17 compute-0 nova_compute[251992]: 2025-12-06 07:44:17.864 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:18.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:44:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1728587174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.331 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.349 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.474 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.474 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.477 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.478 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:44:18
Dec 06 07:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.control']
Dec 06 07:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.632 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.633 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3838MB free_disk=20.817211151123047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.633 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.633 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.973 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance f3e780ab-f17f-4ecf-908b-16e88419d5f4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.973 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.973 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:44:18 compute-0 nova_compute[251992]: 2025-12-06 07:44:18.973 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:44:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:44:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:18.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:44:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1728587174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 543 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.7 MiB/s wr, 229 op/s
Dec 06 07:44:19 compute-0 nova_compute[251992]: 2025-12-06 07:44:19.058 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:44:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2331378924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:19 compute-0 nova_compute[251992]: 2025-12-06 07:44:19.539 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:19 compute-0 nova_compute[251992]: 2025-12-06 07:44:19.545 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:44:19 compute-0 nova_compute[251992]: 2025-12-06 07:44:19.595 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:44:19 compute-0 nova_compute[251992]: 2025-12-06 07:44:19.675 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:44:19 compute-0 nova_compute[251992]: 2025-12-06 07:44:19.675 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:19 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 6.
Dec 06 07:44:19 compute-0 sudo[348501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:19 compute-0 sudo[348501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:19 compute-0 sudo[348501]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:20 compute-0 sudo[348526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:44:20 compute-0 sudo[348526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:20 compute-0 sudo[348526]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:20 compute-0 sudo[348551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:20 compute-0 sudo[348551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:20 compute-0 sudo[348551]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:20 compute-0 sudo[348576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:44:20 compute-0 sudo[348576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:20.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:20 compute-0 sudo[348576]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:20 compute-0 nova_compute[251992]: 2025-12-06 07:44:20.668 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:44:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:44:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:44:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:44:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:44:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:20.798 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:20 compute-0 sudo[348634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:20 compute-0 sudo[348634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:20 compute-0 sudo[348634]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:20 compute-0 sudo[348659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:20 compute-0 sudo[348659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:20 compute-0 sudo[348659]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:20.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 549 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 6.4 MiB/s wr, 299 op/s
Dec 06 07:44:21 compute-0 ceph-mon[74339]: pgmap v2670: 305 pgs: 305 active+clean; 543 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.7 MiB/s wr, 229 op/s
Dec 06 07:44:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2331378924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3064426482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:44:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f8388ab3-4eda-457b-9da5-46b8ef4a2b13 does not exist
Dec 06 07:44:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 433f31e3-ba3e-4fd8-9ac4-fb9c64bbed1a does not exist
Dec 06 07:44:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev dcd2970f-e100-4319-ac1d-df0b7d993e64 does not exist
Dec 06 07:44:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:44:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:44:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:44:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:44:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:44:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:44:21 compute-0 sudo[348685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:21 compute-0 sudo[348685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:21 compute-0 sudo[348685]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:21 compute-0 sudo[348710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:44:21 compute-0 sudo[348710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:21 compute-0 sudo[348710]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:21 compute-0 sudo[348741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:21 compute-0 sudo[348741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:21 compute-0 sudo[348741]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:21 compute-0 podman[348734]: 2025-12-06 07:44:21.56001709 +0000 UTC m=+0.084366006 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 06 07:44:21 compute-0 sudo[348784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:44:21 compute-0 sudo[348784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:21 compute-0 podman[348854]: 2025-12-06 07:44:21.943914673 +0000 UTC m=+0.056541142 container create 13a8f198cf0a71baa79233ae9038fcd97f1a058b1d087947ead952b5998aa515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 07:44:21 compute-0 systemd[1]: Started libpod-conmon-13a8f198cf0a71baa79233ae9038fcd97f1a058b1d087947ead952b5998aa515.scope.
Dec 06 07:44:22 compute-0 podman[348854]: 2025-12-06 07:44:21.917031256 +0000 UTC m=+0.029657765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:44:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:44:22 compute-0 podman[348854]: 2025-12-06 07:44:22.053589452 +0000 UTC m=+0.166215901 container init 13a8f198cf0a71baa79233ae9038fcd97f1a058b1d087947ead952b5998aa515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:44:22 compute-0 podman[348854]: 2025-12-06 07:44:22.063271579 +0000 UTC m=+0.175898018 container start 13a8f198cf0a71baa79233ae9038fcd97f1a058b1d087947ead952b5998aa515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:44:22 compute-0 podman[348854]: 2025-12-06 07:44:22.067595567 +0000 UTC m=+0.180221996 container attach 13a8f198cf0a71baa79233ae9038fcd97f1a058b1d087947ead952b5998aa515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:44:22 compute-0 hardcore_feynman[348870]: 167 167
Dec 06 07:44:22 compute-0 systemd[1]: libpod-13a8f198cf0a71baa79233ae9038fcd97f1a058b1d087947ead952b5998aa515.scope: Deactivated successfully.
Dec 06 07:44:22 compute-0 podman[348854]: 2025-12-06 07:44:22.073225221 +0000 UTC m=+0.185851660 container died 13a8f198cf0a71baa79233ae9038fcd97f1a058b1d087947ead952b5998aa515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:44:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a75e74e85ea663dfcce738de7dd9818e231522b1c1496a9659930c4bde87f5a-merged.mount: Deactivated successfully.
Dec 06 07:44:22 compute-0 podman[348854]: 2025-12-06 07:44:22.121346513 +0000 UTC m=+0.233972952 container remove 13a8f198cf0a71baa79233ae9038fcd97f1a058b1d087947ead952b5998aa515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feynman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:44:22 compute-0 systemd[1]: libpod-conmon-13a8f198cf0a71baa79233ae9038fcd97f1a058b1d087947ead952b5998aa515.scope: Deactivated successfully.
Dec 06 07:44:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:22.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:22 compute-0 podman[348894]: 2025-12-06 07:44:22.329265197 +0000 UTC m=+0.041167151 container create c2ac2b5743c2cd11b6bd349385e2e4428ed291d44a76c4c09afbadb13140823e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:44:22 compute-0 systemd[1]: Started libpod-conmon-c2ac2b5743c2cd11b6bd349385e2e4428ed291d44a76c4c09afbadb13140823e.scope.
Dec 06 07:44:22 compute-0 podman[348894]: 2025-12-06 07:44:22.313382771 +0000 UTC m=+0.025284745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:44:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4f3cab3cada001b5b464bd6180364f3468589e3aef0d04a3089cfd021a18ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4f3cab3cada001b5b464bd6180364f3468589e3aef0d04a3089cfd021a18ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4f3cab3cada001b5b464bd6180364f3468589e3aef0d04a3089cfd021a18ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4f3cab3cada001b5b464bd6180364f3468589e3aef0d04a3089cfd021a18ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4f3cab3cada001b5b464bd6180364f3468589e3aef0d04a3089cfd021a18ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:22 compute-0 podman[348894]: 2025-12-06 07:44:22.428363256 +0000 UTC m=+0.140265240 container init c2ac2b5743c2cd11b6bd349385e2e4428ed291d44a76c4c09afbadb13140823e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:44:22 compute-0 podman[348894]: 2025-12-06 07:44:22.436951772 +0000 UTC m=+0.148853726 container start c2ac2b5743c2cd11b6bd349385e2e4428ed291d44a76c4c09afbadb13140823e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 07:44:22 compute-0 podman[348894]: 2025-12-06 07:44:22.44086759 +0000 UTC m=+0.152769574 container attach c2ac2b5743c2cd11b6bd349385e2e4428ed291d44a76c4c09afbadb13140823e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_saha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:44:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:44:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:44:22 compute-0 ceph-mon[74339]: pgmap v2671: 305 pgs: 305 active+clean; 549 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 6.4 MiB/s wr, 299 op/s
Dec 06 07:44:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/323194662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:44:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:44:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:44:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:44:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:22.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 550 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.7 MiB/s wr, 252 op/s
Dec 06 07:44:23 compute-0 hardcore_saha[348910]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:44:23 compute-0 hardcore_saha[348910]: --> relative data size: 1.0
Dec 06 07:44:23 compute-0 hardcore_saha[348910]: --> All data devices are unavailable
Dec 06 07:44:23 compute-0 systemd[1]: libpod-c2ac2b5743c2cd11b6bd349385e2e4428ed291d44a76c4c09afbadb13140823e.scope: Deactivated successfully.
Dec 06 07:44:23 compute-0 podman[348894]: 2025-12-06 07:44:23.298588335 +0000 UTC m=+1.010490329 container died c2ac2b5743c2cd11b6bd349385e2e4428ed291d44a76c4c09afbadb13140823e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_saha, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 07:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c4f3cab3cada001b5b464bd6180364f3468589e3aef0d04a3089cfd021a18ed-merged.mount: Deactivated successfully.
Dec 06 07:44:23 compute-0 nova_compute[251992]: 2025-12-06 07:44:23.336 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:23 compute-0 podman[348894]: 2025-12-06 07:44:23.352323159 +0000 UTC m=+1.064225103 container remove c2ac2b5743c2cd11b6bd349385e2e4428ed291d44a76c4c09afbadb13140823e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_saha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:44:23 compute-0 systemd[1]: libpod-conmon-c2ac2b5743c2cd11b6bd349385e2e4428ed291d44a76c4c09afbadb13140823e.scope: Deactivated successfully.
Dec 06 07:44:23 compute-0 sudo[348784]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:23 compute-0 sudo[348939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:23 compute-0 sudo[348939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:23 compute-0 sudo[348939]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:23 compute-0 sudo[348964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:44:23 compute-0 sudo[348964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:23 compute-0 sudo[348964]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:44:23 compute-0 sudo[348989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:44:23 compute-0 sudo[348989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:23 compute-0 sudo[348989]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:23 compute-0 sudo[349014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:44:23 compute-0 sudo[349014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:23 compute-0 nova_compute[251992]: 2025-12-06 07:44:23.669 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:23 compute-0 nova_compute[251992]: 2025-12-06 07:44:23.670 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:23 compute-0 podman[349081]: 2025-12-06 07:44:23.910312469 +0000 UTC m=+0.039074093 container create fec00a825905cbfd7e9bb846c6775242d60a7ad84cc00c2a07b38d0fcf5ced78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haibt, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:44:23 compute-0 systemd[1]: Started libpod-conmon-fec00a825905cbfd7e9bb846c6775242d60a7ad84cc00c2a07b38d0fcf5ced78.scope.
Dec 06 07:44:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:44:23 compute-0 podman[349081]: 2025-12-06 07:44:23.892162912 +0000 UTC m=+0.020924576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:44:23 compute-0 podman[349081]: 2025-12-06 07:44:23.997186213 +0000 UTC m=+0.125947847 container init fec00a825905cbfd7e9bb846c6775242d60a7ad84cc00c2a07b38d0fcf5ced78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haibt, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:44:24 compute-0 podman[349081]: 2025-12-06 07:44:24.004184136 +0000 UTC m=+0.132945760 container start fec00a825905cbfd7e9bb846c6775242d60a7ad84cc00c2a07b38d0fcf5ced78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haibt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:44:24 compute-0 friendly_haibt[349097]: 167 167
Dec 06 07:44:24 compute-0 podman[349081]: 2025-12-06 07:44:24.008141734 +0000 UTC m=+0.136903378 container attach fec00a825905cbfd7e9bb846c6775242d60a7ad84cc00c2a07b38d0fcf5ced78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haibt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:44:24 compute-0 systemd[1]: libpod-fec00a825905cbfd7e9bb846c6775242d60a7ad84cc00c2a07b38d0fcf5ced78.scope: Deactivated successfully.
Dec 06 07:44:24 compute-0 podman[349081]: 2025-12-06 07:44:24.009316127 +0000 UTC m=+0.138077751 container died fec00a825905cbfd7e9bb846c6775242d60a7ad84cc00c2a07b38d0fcf5ced78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 07:44:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-18c13ab6c0b2580ae6acef371d4c947d28161a6f71bc449a9b66c3cf676827ab-merged.mount: Deactivated successfully.
Dec 06 07:44:24 compute-0 podman[349081]: 2025-12-06 07:44:24.060896361 +0000 UTC m=+0.189658015 container remove fec00a825905cbfd7e9bb846c6775242d60a7ad84cc00c2a07b38d0fcf5ced78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:44:24 compute-0 systemd[1]: libpod-conmon-fec00a825905cbfd7e9bb846c6775242d60a7ad84cc00c2a07b38d0fcf5ced78.scope: Deactivated successfully.
Dec 06 07:44:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:24.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:24 compute-0 podman[349121]: 2025-12-06 07:44:24.235888213 +0000 UTC m=+0.040706738 container create 0b06d29876b2e0275f2f042790fb81698b0f38a71e1cca78b20bf7685a183d58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shamir, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:44:24 compute-0 systemd[1]: Started libpod-conmon-0b06d29876b2e0275f2f042790fb81698b0f38a71e1cca78b20bf7685a183d58.scope.
Dec 06 07:44:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1c1972dd29bd0ff4cb553814c7d73ccf2b0d4e86700ae5ddfc3f7c1a870f21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1c1972dd29bd0ff4cb553814c7d73ccf2b0d4e86700ae5ddfc3f7c1a870f21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1c1972dd29bd0ff4cb553814c7d73ccf2b0d4e86700ae5ddfc3f7c1a870f21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca1c1972dd29bd0ff4cb553814c7d73ccf2b0d4e86700ae5ddfc3f7c1a870f21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:24 compute-0 podman[349121]: 2025-12-06 07:44:24.309642447 +0000 UTC m=+0.114461002 container init 0b06d29876b2e0275f2f042790fb81698b0f38a71e1cca78b20bf7685a183d58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shamir, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 07:44:24 compute-0 podman[349121]: 2025-12-06 07:44:24.220379818 +0000 UTC m=+0.025198363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:44:24 compute-0 podman[349121]: 2025-12-06 07:44:24.318164631 +0000 UTC m=+0.122983156 container start 0b06d29876b2e0275f2f042790fb81698b0f38a71e1cca78b20bf7685a183d58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Dec 06 07:44:24 compute-0 podman[349121]: 2025-12-06 07:44:24.320547346 +0000 UTC m=+0.125365871 container attach 0b06d29876b2e0275f2f042790fb81698b0f38a71e1cca78b20bf7685a183d58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shamir, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:44:24 compute-0 ceph-mon[74339]: pgmap v2672: 305 pgs: 305 active+clean; 550 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.7 MiB/s wr, 252 op/s
Dec 06 07:44:24 compute-0 ovn_controller[147168]: 2025-12-06T07:44:24Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ed:3b:e9 10.100.0.3
Dec 06 07:44:24 compute-0 ovn_controller[147168]: 2025-12-06T07:44:24Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ed:3b:e9 10.100.0.3
Dec 06 07:44:24 compute-0 nova_compute[251992]: 2025-12-06 07:44:24.766 251996 INFO nova.virt.libvirt.driver [None req-51a1e7d9-3db9-48b0-920e-21fa95008805 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Snapshot image upload complete
Dec 06 07:44:24 compute-0 nova_compute[251992]: 2025-12-06 07:44:24.767 251996 INFO nova.compute.manager [None req-51a1e7d9-3db9-48b0-920e-21fa95008805 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Took 15.61 seconds to snapshot the instance on the hypervisor.
Dec 06 07:44:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Dec 06 07:44:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:25.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 567 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.6 MiB/s wr, 258 op/s
Dec 06 07:44:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:44:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:44:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:44:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:44:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:44:25 compute-0 sad_shamir[349138]: {
Dec 06 07:44:25 compute-0 sad_shamir[349138]:     "0": [
Dec 06 07:44:25 compute-0 sad_shamir[349138]:         {
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             "devices": [
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "/dev/loop3"
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             ],
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             "lv_name": "ceph_lv0",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             "lv_size": "7511998464",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             "name": "ceph_lv0",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             "tags": {
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "ceph.cluster_name": "ceph",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "ceph.crush_device_class": "",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "ceph.encrypted": "0",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "ceph.osd_id": "0",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "ceph.type": "block",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:                 "ceph.vdo": "0"
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             },
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             "type": "block",
Dec 06 07:44:25 compute-0 sad_shamir[349138]:             "vg_name": "ceph_vg0"
Dec 06 07:44:25 compute-0 sad_shamir[349138]:         }
Dec 06 07:44:25 compute-0 sad_shamir[349138]:     ]
Dec 06 07:44:25 compute-0 sad_shamir[349138]: }
Dec 06 07:44:25 compute-0 systemd[1]: libpod-0b06d29876b2e0275f2f042790fb81698b0f38a71e1cca78b20bf7685a183d58.scope: Deactivated successfully.
Dec 06 07:44:25 compute-0 podman[349148]: 2025-12-06 07:44:25.187528645 +0000 UTC m=+0.032996836 container died 0b06d29876b2e0275f2f042790fb81698b0f38a71e1cca78b20bf7685a183d58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shamir, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:44:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca1c1972dd29bd0ff4cb553814c7d73ccf2b0d4e86700ae5ddfc3f7c1a870f21-merged.mount: Deactivated successfully.
Dec 06 07:44:25 compute-0 podman[349148]: 2025-12-06 07:44:25.239178123 +0000 UTC m=+0.084646264 container remove 0b06d29876b2e0275f2f042790fb81698b0f38a71e1cca78b20bf7685a183d58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shamir, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:44:25 compute-0 systemd[1]: libpod-conmon-0b06d29876b2e0275f2f042790fb81698b0f38a71e1cca78b20bf7685a183d58.scope: Deactivated successfully.
Dec 06 07:44:25 compute-0 sudo[349014]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:25 compute-0 sudo[349163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:25 compute-0 sudo[349163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:25 compute-0 sudo[349163]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:25 compute-0 sudo[349188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:44:25 compute-0 sudo[349188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:25 compute-0 sudo[349188]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:25 compute-0 sudo[349213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:25 compute-0 sudo[349213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:25 compute-0 sudo[349213]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:25 compute-0 sudo[349238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:44:25 compute-0 sudo[349238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:25 compute-0 nova_compute[251992]: 2025-12-06 07:44:25.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:25 compute-0 nova_compute[251992]: 2025-12-06 07:44:25.669 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Dec 06 07:44:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Dec 06 07:44:25 compute-0 podman[349303]: 2025-12-06 07:44:25.837406077 +0000 UTC m=+0.036478822 container create 221d936d9f5dded78debd3b9721b7a8fd8684e13a260079638967974fcf84811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 07:44:25 compute-0 systemd[1]: Started libpod-conmon-221d936d9f5dded78debd3b9721b7a8fd8684e13a260079638967974fcf84811.scope.
Dec 06 07:44:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:44:25 compute-0 podman[349303]: 2025-12-06 07:44:25.821737298 +0000 UTC m=+0.020810063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:44:25 compute-0 podman[349303]: 2025-12-06 07:44:25.922874893 +0000 UTC m=+0.121947638 container init 221d936d9f5dded78debd3b9721b7a8fd8684e13a260079638967974fcf84811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_khorana, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:44:25 compute-0 podman[349303]: 2025-12-06 07:44:25.928676172 +0000 UTC m=+0.127748917 container start 221d936d9f5dded78debd3b9721b7a8fd8684e13a260079638967974fcf84811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_khorana, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:44:25 compute-0 frosty_khorana[349319]: 167 167
Dec 06 07:44:25 compute-0 systemd[1]: libpod-221d936d9f5dded78debd3b9721b7a8fd8684e13a260079638967974fcf84811.scope: Deactivated successfully.
Dec 06 07:44:25 compute-0 podman[349303]: 2025-12-06 07:44:25.932873576 +0000 UTC m=+0.131946341 container attach 221d936d9f5dded78debd3b9721b7a8fd8684e13a260079638967974fcf84811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_khorana, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:44:25 compute-0 podman[349303]: 2025-12-06 07:44:25.933131323 +0000 UTC m=+0.132204078 container died 221d936d9f5dded78debd3b9721b7a8fd8684e13a260079638967974fcf84811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 07:44:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc25430aaa4b3f4632ef510424c8154f7e047955ab9c7dae2a3f110f23787d00-merged.mount: Deactivated successfully.
Dec 06 07:44:25 compute-0 podman[349303]: 2025-12-06 07:44:25.972646418 +0000 UTC m=+0.171719163 container remove 221d936d9f5dded78debd3b9721b7a8fd8684e13a260079638967974fcf84811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_khorana, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:44:25 compute-0 systemd[1]: libpod-conmon-221d936d9f5dded78debd3b9721b7a8fd8684e13a260079638967974fcf84811.scope: Deactivated successfully.
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009609862099851107 of space, bias 1.0, pg target 2.8829586299553323 quantized to 32 (current 32)
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021625052345058625 of space, bias 1.0, pg target 0.644426559882747 quantized to 32 (current 32)
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.1565338992415783 quantized to 32 (current 32)
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:44:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Dec 06 07:44:26 compute-0 podman[349341]: 2025-12-06 07:44:26.146152789 +0000 UTC m=+0.042156928 container create a1a7c9ac130354aad2ce4f5095673e1e6cc838dd968e3dace437c4d86b1aa104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:44:26 compute-0 systemd[1]: Started libpod-conmon-a1a7c9ac130354aad2ce4f5095673e1e6cc838dd968e3dace437c4d86b1aa104.scope.
Dec 06 07:44:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:44:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:26.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70553ddd749ba7c45b95ae3cbb6c116949eda3d6a381a2fec989a260481d111/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70553ddd749ba7c45b95ae3cbb6c116949eda3d6a381a2fec989a260481d111/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70553ddd749ba7c45b95ae3cbb6c116949eda3d6a381a2fec989a260481d111/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70553ddd749ba7c45b95ae3cbb6c116949eda3d6a381a2fec989a260481d111/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:26 compute-0 podman[349341]: 2025-12-06 07:44:26.216618733 +0000 UTC m=+0.112622892 container init a1a7c9ac130354aad2ce4f5095673e1e6cc838dd968e3dace437c4d86b1aa104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:44:26 compute-0 podman[349341]: 2025-12-06 07:44:26.127404865 +0000 UTC m=+0.023409044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:44:26 compute-0 podman[349341]: 2025-12-06 07:44:26.223711257 +0000 UTC m=+0.119715396 container start a1a7c9ac130354aad2ce4f5095673e1e6cc838dd968e3dace437c4d86b1aa104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:44:26 compute-0 podman[349341]: 2025-12-06 07:44:26.226565485 +0000 UTC m=+0.122569624 container attach a1a7c9ac130354aad2ce4f5095673e1e6cc838dd968e3dace437c4d86b1aa104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:44:26 compute-0 ceph-mon[74339]: pgmap v2673: 305 pgs: 305 active+clean; 567 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.6 MiB/s wr, 258 op/s
Dec 06 07:44:26 compute-0 ceph-mon[74339]: osdmap e336: 3 total, 3 up, 3 in
Dec 06 07:44:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3531443425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:26 compute-0 nova_compute[251992]: 2025-12-06 07:44:26.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:26 compute-0 nova_compute[251992]: 2025-12-06 07:44:26.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:44:26 compute-0 nova_compute[251992]: 2025-12-06 07:44:26.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:44:26 compute-0 hungry_pasteur[349359]: {
Dec 06 07:44:26 compute-0 hungry_pasteur[349359]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:44:26 compute-0 hungry_pasteur[349359]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:44:26 compute-0 hungry_pasteur[349359]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:44:26 compute-0 hungry_pasteur[349359]:         "osd_id": 0,
Dec 06 07:44:27 compute-0 hungry_pasteur[349359]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:44:27 compute-0 hungry_pasteur[349359]:         "type": "bluestore"
Dec 06 07:44:27 compute-0 hungry_pasteur[349359]:     }
Dec 06 07:44:27 compute-0 hungry_pasteur[349359]: }
Dec 06 07:44:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:27.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 567 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 211 op/s
Dec 06 07:44:27 compute-0 systemd[1]: libpod-a1a7c9ac130354aad2ce4f5095673e1e6cc838dd968e3dace437c4d86b1aa104.scope: Deactivated successfully.
Dec 06 07:44:27 compute-0 podman[349341]: 2025-12-06 07:44:27.029585549 +0000 UTC m=+0.925589688 container died a1a7c9ac130354aad2ce4f5095673e1e6cc838dd968e3dace437c4d86b1aa104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 07:44:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b70553ddd749ba7c45b95ae3cbb6c116949eda3d6a381a2fec989a260481d111-merged.mount: Deactivated successfully.
Dec 06 07:44:27 compute-0 podman[349341]: 2025-12-06 07:44:27.08062639 +0000 UTC m=+0.976630529 container remove a1a7c9ac130354aad2ce4f5095673e1e6cc838dd968e3dace437c4d86b1aa104 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pasteur, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:44:27 compute-0 systemd[1]: libpod-conmon-a1a7c9ac130354aad2ce4f5095673e1e6cc838dd968e3dace437c4d86b1aa104.scope: Deactivated successfully.
Dec 06 07:44:27 compute-0 sudo[349238]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:44:27 compute-0 nova_compute[251992]: 2025-12-06 07:44:27.173 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-f3e780ab-f17f-4ecf-908b-16e88419d5f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:44:27 compute-0 nova_compute[251992]: 2025-12-06 07:44:27.173 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-f3e780ab-f17f-4ecf-908b-16e88419d5f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:44:27 compute-0 nova_compute[251992]: 2025-12-06 07:44:27.174 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:44:27 compute-0 nova_compute[251992]: 2025-12-06 07:44:27.174 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f3e780ab-f17f-4ecf-908b-16e88419d5f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:44:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:44:27 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:44:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev fc3c364c-8f65-422f-98a6-7634e0eb8630 does not exist
Dec 06 07:44:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 395aadc8-6680-4e6c-988f-eb32c9ba8103 does not exist
Dec 06 07:44:27 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e7e128c4-7e64-43bb-b9d6-a1133c5d9cd8 does not exist
Dec 06 07:44:27 compute-0 sudo[349394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:27 compute-0 sudo[349394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:27 compute-0 sudo[349394]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:27 compute-0 sudo[349419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:44:27 compute-0 sudo[349419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:27 compute-0 sudo[349419]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:28 compute-0 ceph-mon[74339]: pgmap v2675: 305 pgs: 305 active+clean; 567 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 211 op/s
Dec 06 07:44:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/678080406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:28 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:44:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:28.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:28 compute-0 nova_compute[251992]: 2025-12-06 07:44:28.342 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:28 compute-0 nova_compute[251992]: 2025-12-06 07:44:28.874 251996 INFO nova.compute.manager [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Rescuing
Dec 06 07:44:28 compute-0 nova_compute[251992]: 2025-12-06 07:44:28.874 251996 DEBUG oslo_concurrency.lockutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:44:28 compute-0 nova_compute[251992]: 2025-12-06 07:44:28.874 251996 DEBUG oslo_concurrency.lockutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquired lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:44:28 compute-0 nova_compute[251992]: 2025-12-06 07:44:28.874 251996 DEBUG nova.network.neutron [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:44:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 573 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 162 op/s
Dec 06 07:44:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:29.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:29 compute-0 ovn_controller[147168]: 2025-12-06T07:44:29Z|00549|binding|INFO|Releasing lport dc336d05-182d-42ac-ab5e-a73bf30a0662 from this chassis (sb_readonly=0)
Dec 06 07:44:29 compute-0 ovn_controller[147168]: 2025-12-06T07:44:29Z|00550|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:44:29 compute-0 nova_compute[251992]: 2025-12-06 07:44:29.288 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:29 compute-0 podman[349445]: 2025-12-06 07:44:29.396112345 +0000 UTC m=+0.054594310 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:44:29 compute-0 podman[349446]: 2025-12-06 07:44:29.403801595 +0000 UTC m=+0.062269460 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:44:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:44:29 compute-0 nova_compute[251992]: 2025-12-06 07:44:29.542 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Updating instance_info_cache with network_info: [{"id": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "address": "fa:16:3e:6c:9c:27", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ebf84f-11", "ovs_interfaceid": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:44:29 compute-0 nova_compute[251992]: 2025-12-06 07:44:29.591 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-f3e780ab-f17f-4ecf-908b-16e88419d5f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:44:29 compute-0 nova_compute[251992]: 2025-12-06 07:44:29.592 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:44:29 compute-0 nova_compute[251992]: 2025-12-06 07:44:29.592 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:29 compute-0 nova_compute[251992]: 2025-12-06 07:44:29.592 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:29 compute-0 nova_compute[251992]: 2025-12-06 07:44:29.592 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:29 compute-0 nova_compute[251992]: 2025-12-06 07:44:29.592 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:44:29 compute-0 nova_compute[251992]: 2025-12-06 07:44:29.593 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:44:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:30.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:30 compute-0 ceph-mon[74339]: pgmap v2676: 305 pgs: 305 active+clean; 573 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 162 op/s
Dec 06 07:44:30 compute-0 nova_compute[251992]: 2025-12-06 07:44:30.673 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:30 compute-0 nova_compute[251992]: 2025-12-06 07:44:30.836 251996 DEBUG nova.network.neutron [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Updating instance_info_cache with network_info: [{"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:44:30 compute-0 nova_compute[251992]: 2025-12-06 07:44:30.886 251996 DEBUG oslo_concurrency.lockutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Releasing lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:44:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 578 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Dec 06 07:44:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:31.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:31 compute-0 nova_compute[251992]: 2025-12-06 07:44:31.770 251996 DEBUG nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:44:31 compute-0 nova_compute[251992]: 2025-12-06 07:44:31.862 251996 DEBUG oslo_concurrency.lockutils [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:31 compute-0 nova_compute[251992]: 2025-12-06 07:44:31.863 251996 DEBUG oslo_concurrency.lockutils [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:31 compute-0 ceph-mon[74339]: pgmap v2677: 305 pgs: 305 active+clean; 578 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Dec 06 07:44:31 compute-0 nova_compute[251992]: 2025-12-06 07:44:31.953 251996 DEBUG nova.objects.instance [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'flavor' on Instance uuid f3e780ab-f17f-4ecf-908b-16e88419d5f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:31 compute-0 nova_compute[251992]: 2025-12-06 07:44:31.997 251996 DEBUG oslo_concurrency.lockutils [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:32.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.340 251996 DEBUG oslo_concurrency.lockutils [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.341 251996 DEBUG oslo_concurrency.lockutils [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.341 251996 INFO nova.compute.manager [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Attaching volume 13b2825e-5790-477b-b9fb-6ad4efe8c20c to /dev/vdb
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.749 251996 DEBUG os_brick.utils [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.751 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.766 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.766 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[651117e6-0cbf-4bf5-96b8-490d732d8b62]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.768 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.777 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.777 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[8173dcbc-3195-4e01-be22-e18d080df6ce]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.779 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.786 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.787 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfc69b2-6eea-4696-bc2c-c20aaa3473c5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.788 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[bf468584-f5cd-47c9-9c22-65cadee39a64]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.789 251996 DEBUG oslo_concurrency.processutils [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.816 251996 DEBUG oslo_concurrency.processutils [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.820 251996 DEBUG os_brick.initiator.connectors.lightos [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.820 251996 DEBUG os_brick.initiator.connectors.lightos [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.820 251996 DEBUG os_brick.initiator.connectors.lightos [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.821 251996 DEBUG os_brick.utils [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:44:32 compute-0 nova_compute[251992]: 2025-12-06 07:44:32.821 251996 DEBUG nova.virt.block_device [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Updating existing volume attachment record: f12acac1-8fdb-4176-9e37-e39a2ef932e7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:44:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 578 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 447 KiB/s rd, 1.7 MiB/s wr, 79 op/s
Dec 06 07:44:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:33.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:33 compute-0 nova_compute[251992]: 2025-12-06 07:44:33.344 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:33 compute-0 ceph-mon[74339]: pgmap v2678: 305 pgs: 305 active+clean; 578 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 447 KiB/s rd, 1.7 MiB/s wr, 79 op/s
Dec 06 07:44:34 compute-0 sshd-session[349481]: Connection reset by authenticating user root 91.202.233.33 port 63776 [preauth]
Dec 06 07:44:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:34.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:44:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3865790578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:34 compute-0 kernel: tap450480d9-e0 (unregistering): left promiscuous mode
Dec 06 07:44:34 compute-0 NetworkManager[48965]: <info>  [1765007074.4080] device (tap450480d9-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:44:34 compute-0 ovn_controller[147168]: 2025-12-06T07:44:34Z|00551|binding|INFO|Releasing lport 450480d9-e0c3-414d-ba7e-8b996711a653 from this chassis (sb_readonly=0)
Dec 06 07:44:34 compute-0 ovn_controller[147168]: 2025-12-06T07:44:34Z|00552|binding|INFO|Setting lport 450480d9-e0c3-414d-ba7e-8b996711a653 down in Southbound
Dec 06 07:44:34 compute-0 ovn_controller[147168]: 2025-12-06T07:44:34Z|00553|binding|INFO|Removing iface tap450480d9-e0 ovn-installed in OVS
Dec 06 07:44:34 compute-0 nova_compute[251992]: 2025-12-06 07:44:34.414 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:34 compute-0 nova_compute[251992]: 2025-12-06 07:44:34.431 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.433 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:3b:e9 10.100.0.3'], port_security=['fa:16:3e:ed:3b:e9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '53cabacd-b2a5-4ad1-a97a-0d0710d43bf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=450480d9-e0c3-414d-ba7e-8b996711a653) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.435 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 450480d9-e0c3-414d-ba7e-8b996711a653 in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.437 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.439 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4156d223-2986-466e-b277-68bb70082fc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.440 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace which is not needed anymore
Dec 06 07:44:34 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Dec 06 07:44:34 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000009a.scope: Consumed 14.247s CPU time.
Dec 06 07:44:34 compute-0 systemd-machined[212986]: Machine qemu-70-instance-0000009a terminated.
Dec 06 07:44:34 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[348293]: [NOTICE]   (348297) : haproxy version is 2.8.14-c23fe91
Dec 06 07:44:34 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[348293]: [NOTICE]   (348297) : path to executable is /usr/sbin/haproxy
Dec 06 07:44:34 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[348293]: [WARNING]  (348297) : Exiting Master process...
Dec 06 07:44:34 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[348293]: [ALERT]    (348297) : Current worker (348299) exited with code 143 (Terminated)
Dec 06 07:44:34 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[348293]: [WARNING]  (348297) : All workers exited. Exiting... (0)
Dec 06 07:44:34 compute-0 systemd[1]: libpod-b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c.scope: Deactivated successfully.
Dec 06 07:44:34 compute-0 conmon[348293]: conmon b1a4c3769177414d60bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c.scope/container/memory.events
Dec 06 07:44:34 compute-0 podman[349516]: 2025-12-06 07:44:34.565029045 +0000 UTC m=+0.041190182 container died b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 07:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c-userdata-shm.mount: Deactivated successfully.
Dec 06 07:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dc3410475ab6911d4da190c38062219c9e8f306533aa6a30e66b2900b0b6626-merged.mount: Deactivated successfully.
Dec 06 07:44:34 compute-0 podman[349516]: 2025-12-06 07:44:34.608255581 +0000 UTC m=+0.084416728 container cleanup b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 07:44:34 compute-0 systemd[1]: libpod-conmon-b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c.scope: Deactivated successfully.
Dec 06 07:44:34 compute-0 kernel: tap450480d9-e0: entered promiscuous mode
Dec 06 07:44:34 compute-0 kernel: tap450480d9-e0 (unregistering): left promiscuous mode
Dec 06 07:44:34 compute-0 NetworkManager[48965]: <info>  [1765007074.6446] manager: (tap450480d9-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/255)
Dec 06 07:44:34 compute-0 nova_compute[251992]: 2025-12-06 07:44:34.699 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:34 compute-0 nova_compute[251992]: 2025-12-06 07:44:34.744 251996 DEBUG nova.objects.instance [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'flavor' on Instance uuid f3e780ab-f17f-4ecf-908b-16e88419d5f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:34 compute-0 podman[349544]: 2025-12-06 07:44:34.752230213 +0000 UTC m=+0.126197466 container remove b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.758 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[46ce4ed1-971e-4fd5-a20c-291cbfd819c8]: (4, ('Sat Dec  6 07:44:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c)\nb1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c\nSat Dec  6 07:44:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (b1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c)\nb1a4c3769177414d60bde4565cfe95bb000895b702850b1ffc971c6fd8522b5c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.760 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab9e571-04ea-441e-9df0-27bd865897b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.761 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:34 compute-0 nova_compute[251992]: 2025-12-06 07:44:34.762 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:34 compute-0 kernel: tap6d1a17d6-50: left promiscuous mode
Dec 06 07:44:34 compute-0 nova_compute[251992]: 2025-12-06 07:44:34.773 251996 DEBUG nova.virt.libvirt.driver [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Attempting to attach volume 13b2825e-5790-477b-b9fb-6ad4efe8c20c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:44:34 compute-0 nova_compute[251992]: 2025-12-06 07:44:34.776 251996 DEBUG nova.virt.libvirt.guest [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:44:34 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:44:34 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-13b2825e-5790-477b-b9fb-6ad4efe8c20c">
Dec 06 07:44:34 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:44:34 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:44:34 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:44:34 compute-0 nova_compute[251992]:   </source>
Dec 06 07:44:34 compute-0 nova_compute[251992]:   <auth username="openstack">
Dec 06 07:44:34 compute-0 nova_compute[251992]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:44:34 compute-0 nova_compute[251992]:   </auth>
Dec 06 07:44:34 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:44:34 compute-0 nova_compute[251992]:   <serial>13b2825e-5790-477b-b9fb-6ad4efe8c20c</serial>
Dec 06 07:44:34 compute-0 nova_compute[251992]: </disk>
Dec 06 07:44:34 compute-0 nova_compute[251992]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:44:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:34 compute-0 nova_compute[251992]: 2025-12-06 07:44:34.781 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.783 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[97a9a93e-c2fc-4288-8db1-c4e35c61cd16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:34 compute-0 nova_compute[251992]: 2025-12-06 07:44:34.790 251996 INFO nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Instance shutdown successfully after 3 seconds.
Dec 06 07:44:34 compute-0 nova_compute[251992]: 2025-12-06 07:44:34.800 251996 INFO nova.virt.libvirt.driver [-] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Instance destroyed successfully.
Dec 06 07:44:34 compute-0 nova_compute[251992]: 2025-12-06 07:44:34.800 251996 DEBUG nova.objects.instance [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'numa_topology' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.802 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7528861b-1581-4a94-b34a-467309b1394b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.803 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[77e3039e-e63b-4b94-ade4-9dc6e31ec607]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.820 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f3158ca6-80fd-42d7-ad5d-b4bf9c48939e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 731795, 'reachable_time': 37957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349581, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:34 compute-0 nova_compute[251992]: 2025-12-06 07:44:34.822 251996 INFO nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Attempting a stable device rescue
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.824 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:44:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:34.825 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf8750d-20b4-4bb1-b6fd-6eaba1da93d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d6d1a17d6\x2d5e44\x2d40b7\x2d832a\x2d81cb86c02e71.mount: Deactivated successfully.
Dec 06 07:44:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 550 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 193 KiB/s wr, 38 op/s
Dec 06 07:44:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:35.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3865790578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.128 251996 DEBUG nova.compute.manager [req-a7b86917-6d57-4dd2-a8dc-380c63ffe1bd req-791a9a46-ce1f-4397-8aef-5dddb3b1d88c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-unplugged-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.128 251996 DEBUG oslo_concurrency.lockutils [req-a7b86917-6d57-4dd2-a8dc-380c63ffe1bd req-791a9a46-ce1f-4397-8aef-5dddb3b1d88c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.128 251996 DEBUG oslo_concurrency.lockutils [req-a7b86917-6d57-4dd2-a8dc-380c63ffe1bd req-791a9a46-ce1f-4397-8aef-5dddb3b1d88c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.128 251996 DEBUG oslo_concurrency.lockutils [req-a7b86917-6d57-4dd2-a8dc-380c63ffe1bd req-791a9a46-ce1f-4397-8aef-5dddb3b1d88c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.129 251996 DEBUG nova.compute.manager [req-a7b86917-6d57-4dd2-a8dc-380c63ffe1bd req-791a9a46-ce1f-4397-8aef-5dddb3b1d88c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] No waiting events found dispatching network-vif-unplugged-450480d9-e0c3-414d-ba7e-8b996711a653 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.129 251996 WARNING nova.compute.manager [req-a7b86917-6d57-4dd2-a8dc-380c63ffe1bd req-791a9a46-ce1f-4397-8aef-5dddb3b1d88c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received unexpected event network-vif-unplugged-450480d9-e0c3-414d-ba7e-8b996711a653 for instance with vm_state active and task_state rescuing.
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.246 251996 DEBUG nova.virt.libvirt.driver [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.246 251996 DEBUG nova.virt.libvirt.driver [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.246 251996 DEBUG nova.virt.libvirt.driver [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.246 251996 DEBUG nova.virt.libvirt.driver [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] No VIF found with MAC fa:16:3e:6c:9c:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.675 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.875 251996 DEBUG nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'usb', 'dev': 'sdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.879 251996 DEBUG nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.880 251996 INFO nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Creating image(s)
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.911 251996 DEBUG nova.storage.rbd_utils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:35 compute-0 nova_compute[251992]: 2025-12-06 07:44:35.915 251996 DEBUG nova.objects.instance [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:36 compute-0 nova_compute[251992]: 2025-12-06 07:44:36.039 251996 DEBUG nova.storage.rbd_utils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:36 compute-0 nova_compute[251992]: 2025-12-06 07:44:36.066 251996 DEBUG nova.storage.rbd_utils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:36 compute-0 nova_compute[251992]: 2025-12-06 07:44:36.069 251996 DEBUG oslo_concurrency.lockutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "7d836bde7107fe7c8645b33ca2a03567d8be5141" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:36 compute-0 nova_compute[251992]: 2025-12-06 07:44:36.070 251996 DEBUG oslo_concurrency.lockutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "7d836bde7107fe7c8645b33ca2a03567d8be5141" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:36.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:36 compute-0 ceph-mon[74339]: pgmap v2679: 305 pgs: 305 active+clean; 550 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 193 KiB/s wr, 38 op/s
Dec 06 07:44:36 compute-0 nova_compute[251992]: 2025-12-06 07:44:36.344 251996 DEBUG oslo_concurrency.lockutils [None req-20ba102a-1220-489c-a185-9969baca0211 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:36 compute-0 sshd-session[349491]: Connection reset by authenticating user root 91.202.233.33 port 24954 [preauth]
Dec 06 07:44:36 compute-0 nova_compute[251992]: 2025-12-06 07:44:36.934 251996 DEBUG nova.virt.libvirt.imagebackend [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/9b2bfd1e-afa4-4eb9-a3eb-6b2d196bc5e9/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/9b2bfd1e-afa4-4eb9-a3eb-6b2d196bc5e9/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:44:36 compute-0 nova_compute[251992]: 2025-12-06 07:44:36.994 251996 DEBUG nova.virt.libvirt.imagebackend [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/9b2bfd1e-afa4-4eb9-a3eb-6b2d196bc5e9/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 07:44:36 compute-0 nova_compute[251992]: 2025-12-06 07:44:36.995 251996 DEBUG nova.storage.rbd_utils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] cloning images/9b2bfd1e-afa4-4eb9-a3eb-6b2d196bc5e9@snap to None/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:44:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 189 KiB/s wr, 36 op/s
Dec 06 07:44:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:37.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:37 compute-0 ceph-mon[74339]: pgmap v2680: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 189 KiB/s wr, 36 op/s
Dec 06 07:44:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:38.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:38 compute-0 nova_compute[251992]: 2025-12-06 07:44:38.219 251996 DEBUG nova.compute.manager [req-0b1b9ae4-72a0-4b0f-bc95-a26ae78d2dfa req-510bbb93-d3e8-4bfa-901b-95150b4e5b70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:38 compute-0 nova_compute[251992]: 2025-12-06 07:44:38.220 251996 DEBUG oslo_concurrency.lockutils [req-0b1b9ae4-72a0-4b0f-bc95-a26ae78d2dfa req-510bbb93-d3e8-4bfa-901b-95150b4e5b70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:38 compute-0 nova_compute[251992]: 2025-12-06 07:44:38.220 251996 DEBUG oslo_concurrency.lockutils [req-0b1b9ae4-72a0-4b0f-bc95-a26ae78d2dfa req-510bbb93-d3e8-4bfa-901b-95150b4e5b70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:38 compute-0 nova_compute[251992]: 2025-12-06 07:44:38.220 251996 DEBUG oslo_concurrency.lockutils [req-0b1b9ae4-72a0-4b0f-bc95-a26ae78d2dfa req-510bbb93-d3e8-4bfa-901b-95150b4e5b70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:38 compute-0 nova_compute[251992]: 2025-12-06 07:44:38.220 251996 DEBUG nova.compute.manager [req-0b1b9ae4-72a0-4b0f-bc95-a26ae78d2dfa req-510bbb93-d3e8-4bfa-901b-95150b4e5b70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] No waiting events found dispatching network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:38 compute-0 nova_compute[251992]: 2025-12-06 07:44:38.220 251996 WARNING nova.compute.manager [req-0b1b9ae4-72a0-4b0f-bc95-a26ae78d2dfa req-510bbb93-d3e8-4bfa-901b-95150b4e5b70 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received unexpected event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 for instance with vm_state active and task_state rescuing.
Dec 06 07:44:38 compute-0 nova_compute[251992]: 2025-12-06 07:44:38.399 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:38 compute-0 sshd-session[349648]: Connection reset by authenticating user root 91.202.233.33 port 24968 [preauth]
Dec 06 07:44:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 75 KiB/s rd, 85 KiB/s wr, 42 op/s
Dec 06 07:44:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:39.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:39 compute-0 nova_compute[251992]: 2025-12-06 07:44:39.337 251996 DEBUG oslo_concurrency.lockutils [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:39 compute-0 nova_compute[251992]: 2025-12-06 07:44:39.337 251996 DEBUG oslo_concurrency.lockutils [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:39 compute-0 nova_compute[251992]: 2025-12-06 07:44:39.386 251996 INFO nova.compute.manager [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Detaching volume 13b2825e-5790-477b-b9fb-6ad4efe8c20c
Dec 06 07:44:39 compute-0 ceph-mon[74339]: pgmap v2681: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 75 KiB/s rd, 85 KiB/s wr, 42 op/s
Dec 06 07:44:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:39 compute-0 nova_compute[251992]: 2025-12-06 07:44:39.873 251996 DEBUG oslo_concurrency.lockutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "7d836bde7107fe7c8645b33ca2a03567d8be5141" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:39 compute-0 nova_compute[251992]: 2025-12-06 07:44:39.919 251996 DEBUG nova.objects.instance [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'migration_context' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:39 compute-0 nova_compute[251992]: 2025-12-06 07:44:39.970 251996 INFO nova.virt.block_device [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Attempting to driver detach volume 13b2825e-5790-477b-b9fb-6ad4efe8c20c from mountpoint /dev/vdb
Dec 06 07:44:39 compute-0 nova_compute[251992]: 2025-12-06 07:44:39.979 251996 DEBUG nova.virt.libvirt.driver [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Attempting to detach device vdb from instance f3e780ab-f17f-4ecf-908b-16e88419d5f4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:44:39 compute-0 nova_compute[251992]: 2025-12-06 07:44:39.979 251996 DEBUG nova.virt.libvirt.guest [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:44:39 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:44:39 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-13b2825e-5790-477b-b9fb-6ad4efe8c20c">
Dec 06 07:44:39 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:44:39 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:44:39 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:44:39 compute-0 nova_compute[251992]:   </source>
Dec 06 07:44:39 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:44:39 compute-0 nova_compute[251992]:   <serial>13b2825e-5790-477b-b9fb-6ad4efe8c20c</serial>
Dec 06 07:44:39 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:44:39 compute-0 nova_compute[251992]: </disk>
Dec 06 07:44:39 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.059 251996 INFO nova.virt.libvirt.driver [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully detached device vdb from instance f3e780ab-f17f-4ecf-908b-16e88419d5f4 from the persistent domain config.
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.060 251996 DEBUG nova.virt.libvirt.driver [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f3e780ab-f17f-4ecf-908b-16e88419d5f4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.061 251996 DEBUG nova.virt.libvirt.guest [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:44:40 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:44:40 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-13b2825e-5790-477b-b9fb-6ad4efe8c20c">
Dec 06 07:44:40 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:44:40 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:44:40 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:44:40 compute-0 nova_compute[251992]:   </source>
Dec 06 07:44:40 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:44:40 compute-0 nova_compute[251992]:   <serial>13b2825e-5790-477b-b9fb-6ad4efe8c20c</serial>
Dec 06 07:44:40 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:44:40 compute-0 nova_compute[251992]: </disk>
Dec 06 07:44:40 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.063 251996 DEBUG nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.070 251996 DEBUG nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Start _get_guest_xml network_info=[{"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "vif_mac": "fa:16:3e:ed:3b:e9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'usb', 'dev': 'sdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '9b2bfd1e-afa4-4eb9-a3eb-6b2d196bc5e9', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.071 251996 DEBUG nova.objects.instance [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'resources' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.158 251996 WARNING nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.170 251996 DEBUG nova.virt.libvirt.host [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.171 251996 DEBUG nova.virt.libvirt.host [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.182 251996 DEBUG nova.virt.libvirt.host [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.183 251996 DEBUG nova.virt.libvirt.host [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.184 251996 DEBUG nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.184 251996 DEBUG nova.virt.hardware [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.184 251996 DEBUG nova.virt.hardware [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.185 251996 DEBUG nova.virt.hardware [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.185 251996 DEBUG nova.virt.hardware [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.185 251996 DEBUG nova.virt.hardware [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.185 251996 DEBUG nova.virt.hardware [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.187 251996 DEBUG nova.virt.hardware [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.190 251996 DEBUG nova.virt.hardware [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.191 251996 DEBUG nova.virt.hardware [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.191 251996 DEBUG nova.virt.hardware [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.191 251996 DEBUG nova.virt.hardware [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.191 251996 DEBUG nova.objects.instance [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:40.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.310 251996 DEBUG oslo_concurrency.processutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.703 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.749 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Received event <DeviceRemovedEvent: 1765007080.7484832, f3e780ab-f17f-4ecf-908b-16e88419d5f4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.750 251996 DEBUG nova.virt.libvirt.driver [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f3e780ab-f17f-4ecf-908b-16e88419d5f4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.752 251996 INFO nova.virt.libvirt.driver [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully detached device vdb from instance f3e780ab-f17f-4ecf-908b-16e88419d5f4 from the live domain config.
Dec 06 07:44:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:44:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2559247986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.786 251996 DEBUG oslo_concurrency.processutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:40 compute-0 nova_compute[251992]: 2025-12-06 07:44:40.827 251996 DEBUG oslo_concurrency.processutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2559247986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:40 compute-0 sudo[349784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:40 compute-0 sudo[349784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:40 compute-0 sudo[349784]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 78 KiB/s wr, 40 op/s
Dec 06 07:44:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:41.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:41 compute-0 sudo[349828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:44:41 compute-0 sudo[349828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:44:41 compute-0 sudo[349828]: pam_unix(sudo:session): session closed for user root
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.116 251996 DEBUG nova.objects.instance [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'flavor' on Instance uuid f3e780ab-f17f-4ecf-908b-16e88419d5f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.191 251996 DEBUG oslo_concurrency.lockutils [None req-0410dac0-3899-49f8-9b9f-e4645b314494 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:44:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681047041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.283 251996 DEBUG oslo_concurrency.processutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.284 251996 DEBUG oslo_concurrency.processutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:44:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111193492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.753 251996 DEBUG oslo_concurrency.processutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.756 251996 DEBUG nova.virt.libvirt.vif [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:43:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1644415942',display_name='tempest-ServerStableDeviceRescueTest-server-1644415942',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1644415942',id=154,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:44:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-z5s6nndr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_m
odel='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:44:25Z,user_data=None,user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=53cabacd-b2a5-4ad1-a97a-0d0710d43bf9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "vif_mac": "fa:16:3e:ed:3b:e9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.756 251996 DEBUG nova.network.os_vif_util [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "vif_mac": "fa:16:3e:ed:3b:e9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.757 251996 DEBUG nova.network.os_vif_util [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:3b:e9,bridge_name='br-int',has_traffic_filtering=True,id=450480d9-e0c3-414d-ba7e-8b996711a653,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap450480d9-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.759 251996 DEBUG nova.objects.instance [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'pci_devices' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.793 251996 DEBUG nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:44:41 compute-0 nova_compute[251992]:   <uuid>53cabacd-b2a5-4ad1-a97a-0d0710d43bf9</uuid>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   <name>instance-0000009a</name>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-1644415942</nova:name>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:44:40</nova:creationTime>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <nova:user uuid="e997a5eeee174b368a43ed8cb35fa1d0">tempest-ServerStableDeviceRescueTest-1830949011-project-member</nova:user>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <nova:project uuid="f44ecb8bdc7e4692a299e29603301124">tempest-ServerStableDeviceRescueTest-1830949011</nova:project>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <nova:port uuid="450480d9-e0c3-414d-ba7e-8b996711a653">
Dec 06 07:44:41 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <system>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <entry name="serial">53cabacd-b2a5-4ad1-a97a-0d0710d43bf9</entry>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <entry name="uuid">53cabacd-b2a5-4ad1-a97a-0d0710d43bf9</entry>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     </system>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   <os>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   </os>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   <features>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   </features>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk">
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       </source>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.config">
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       </source>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.rescue">
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       </source>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:44:41 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <target dev="sdb" bus="usb"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <boot order="1"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:ed:3b:e9"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <target dev="tap450480d9-e0"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/console.log" append="off"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <video>
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     </video>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:44:41 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:44:41 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:44:41 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:44:41 compute-0 nova_compute[251992]: </domain>
Dec 06 07:44:41 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.801 251996 INFO nova.virt.libvirt.driver [-] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Instance destroyed successfully.
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.862 251996 DEBUG nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.862 251996 DEBUG nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.863 251996 DEBUG nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.863 251996 DEBUG nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No VIF found with MAC fa:16:3e:ed:3b:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.863 251996 INFO nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Using config drive
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.891 251996 DEBUG nova.storage.rbd_utils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.926 251996 DEBUG nova.objects.instance [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:41 compute-0 nova_compute[251992]: 2025-12-06 07:44:41.991 251996 DEBUG nova.objects.instance [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'keypairs' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:42 compute-0 ceph-mon[74339]: pgmap v2682: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 78 KiB/s wr, 40 op/s
Dec 06 07:44:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3681047041' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4111193492' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.194 251996 DEBUG oslo_concurrency.lockutils [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.194 251996 DEBUG oslo_concurrency.lockutils [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.194 251996 DEBUG oslo_concurrency.lockutils [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.195 251996 DEBUG oslo_concurrency.lockutils [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.195 251996 DEBUG oslo_concurrency.lockutils [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.196 251996 INFO nova.compute.manager [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Terminating instance
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.199 251996 DEBUG nova.compute.manager [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:44:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:42.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:42 compute-0 sshd-session[349721]: Connection reset by authenticating user root 91.202.233.33 port 24984 [preauth]
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.545 251996 INFO nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Creating config drive at /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/disk.config.rescue
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.550 251996 DEBUG oslo_concurrency.processutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgbjce9dh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.681 251996 DEBUG oslo_concurrency.processutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgbjce9dh" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.730 251996 DEBUG nova.storage.rbd_utils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.735 251996 DEBUG oslo_concurrency.processutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/disk.config.rescue 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:42 compute-0 kernel: tap72ebf84f-11 (unregistering): left promiscuous mode
Dec 06 07:44:42 compute-0 NetworkManager[48965]: <info>  [1765007082.8964] device (tap72ebf84f-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:44:42 compute-0 ovn_controller[147168]: 2025-12-06T07:44:42Z|00554|binding|INFO|Releasing lport 72ebf84f-114c-481c-8735-4ba8278ccfdb from this chassis (sb_readonly=0)
Dec 06 07:44:42 compute-0 ovn_controller[147168]: 2025-12-06T07:44:42Z|00555|binding|INFO|Setting lport 72ebf84f-114c-481c-8735-4ba8278ccfdb down in Southbound
Dec 06 07:44:42 compute-0 ovn_controller[147168]: 2025-12-06T07:44:42Z|00556|binding|INFO|Removing iface tap72ebf84f-11 ovn-installed in OVS
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.954 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:42.965 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:9c:27 10.100.0.11'], port_security=['fa:16:3e:6c:9c:27 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f3e780ab-f17f-4ecf-908b-16e88419d5f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '741dc47f9ced423cbd99fd6f9d32904f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '92065a2a-e95c-473c-bbd3-27c37f70c344', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a93df87f-d2df-4d3a-b692-98bba32f2fe1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=72ebf84f-114c-481c-8735-4ba8278ccfdb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:44:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:42.966 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 72ebf84f-114c-481c-8735-4ba8278ccfdb in datapath 3c5d4817-c3d5-45fc-9890-418e779bacb2 unbound from our chassis
Dec 06 07:44:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:42.967 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c5d4817-c3d5-45fc-9890-418e779bacb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:44:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:42.968 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[55bc4e93-30d7-4ac5-8381-f775dd21fe09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:42 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:42.969 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 namespace which is not needed anymore
Dec 06 07:44:42 compute-0 nova_compute[251992]: 2025-12-06 07:44:42.972 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 33 KiB/s wr, 36 op/s
Dec 06 07:44:43 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000098.scope: Deactivated successfully.
Dec 06 07:44:43 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000098.scope: Consumed 16.427s CPU time.
Dec 06 07:44:43 compute-0 systemd-machined[212986]: Machine qemu-69-instance-00000098 terminated.
Dec 06 07:44:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:44:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:44:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:44:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:44:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:44:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:44:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:43.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:43 compute-0 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[347684]: [NOTICE]   (347688) : haproxy version is 2.8.14-c23fe91
Dec 06 07:44:43 compute-0 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[347684]: [NOTICE]   (347688) : path to executable is /usr/sbin/haproxy
Dec 06 07:44:43 compute-0 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[347684]: [WARNING]  (347688) : Exiting Master process...
Dec 06 07:44:43 compute-0 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[347684]: [ALERT]    (347688) : Current worker (347690) exited with code 143 (Terminated)
Dec 06 07:44:43 compute-0 neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2[347684]: [WARNING]  (347688) : All workers exited. Exiting... (0)
Dec 06 07:44:43 compute-0 systemd[1]: libpod-8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7.scope: Deactivated successfully.
Dec 06 07:44:43 compute-0 podman[349965]: 2025-12-06 07:44:43.107398438 +0000 UTC m=+0.041272724 container died 8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:44:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7-userdata-shm.mount: Deactivated successfully.
Dec 06 07:44:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a90c8a1c1cef740e0217e5ed618daa7a1e7628c7b93c0e41d00101293a70ff5-merged.mount: Deactivated successfully.
Dec 06 07:44:43 compute-0 podman[349965]: 2025-12-06 07:44:43.142433339 +0000 UTC m=+0.076307625 container cleanup 8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 07:44:43 compute-0 systemd[1]: libpod-conmon-8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7.scope: Deactivated successfully.
Dec 06 07:44:43 compute-0 podman[349993]: 2025-12-06 07:44:43.19785834 +0000 UTC m=+0.036849652 container remove 8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 07:44:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:43.203 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c471a811-9088-40d2-b330-d1b1bf1f513d]: (4, ('Sat Dec  6 07:44:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 (8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7)\n8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7\nSat Dec  6 07:44:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 (8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7)\n8b2547727a955992756ba603a58b73c08f3609b5f8a398a1f44b8ae200b502a7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:43.204 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[49a6e550-c967-4720-a003-9516d8ec10f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:43.205 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c5d4817-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:43 compute-0 kernel: tap3c5d4817-c0: left promiscuous mode
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.208 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:43 compute-0 NetworkManager[48965]: <info>  [1765007083.2189] manager: (tap72ebf84f-11): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.232 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:43.234 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8250fb43-f1f0-4ecd-a6ba-52f1391ebfb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.243 251996 INFO nova.virt.libvirt.driver [-] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Instance destroyed successfully.
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.244 251996 DEBUG nova.objects.instance [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lazy-loading 'resources' on Instance uuid f3e780ab-f17f-4ecf-908b-16e88419d5f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:43.247 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a87759b1-adbd-42af-a607-4fe1957223ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:43.248 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ad0f7bd4-b64e-427b-a6a0-ea09f27dfe6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.258 251996 DEBUG nova.virt.libvirt.vif [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1333355293',display_name='tempest-AttachVolumeNegativeTest-server-1333355293',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1333355293',id=152,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEqz2bDAWUOfXde68r22NS0cm5MJs8rrPEKWOtfnlImrTM2XFzAu3ww59I+122hdwjnBS2JwHi0p2ZpnbGj6IZ0751PuMJQly9DdwP115KGFoCh/bvypbUECozGCIQ4h9A==',key_name='tempest-keypair-2045615674',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:43:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='741dc47f9ced423cbd99fd6f9d32904f',ramdisk_id='',reservation_id='r-d5i0f7ho',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-2080911030',owner_user_name='tempest-AttachVolumeNegativeTest-2080911030-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:43:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='297bc99c242e4fa8aedea4a6367b61c0',uuid=f3e780ab-f17f-4ecf-908b-16e88419d5f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "address": "fa:16:3e:6c:9c:27", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ebf84f-11", "ovs_interfaceid": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.259 251996 DEBUG nova.network.os_vif_util [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converting VIF {"id": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "address": "fa:16:3e:6c:9c:27", "network": {"id": "3c5d4817-c3d5-45fc-9890-418e779bacb2", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1824643193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "741dc47f9ced423cbd99fd6f9d32904f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ebf84f-11", "ovs_interfaceid": "72ebf84f-114c-481c-8735-4ba8278ccfdb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.259 251996 DEBUG nova.network.os_vif_util [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=72ebf84f-114c-481c-8735-4ba8278ccfdb,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ebf84f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.260 251996 DEBUG os_vif [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=72ebf84f-114c-481c-8735-4ba8278ccfdb,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ebf84f-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.262 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.262 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72ebf84f-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.264 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:43.265 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e37de1-0784-4d9c-8de3-db88a595adfa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 730109, 'reachable_time': 15148, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350020, 'error': None, 'target': 'ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.266 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:44:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:43.267 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c5d4817-c3d5-45fc-9890-418e779bacb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:44:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:43.267 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a9753d-f2e3-4f68-a2dc-916e065e4276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d3c5d4817\x2dc3d5\x2d45fc\x2d9890\x2d418e779bacb2.mount: Deactivated successfully.
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.269 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.274 251996 INFO os_vif [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:9c:27,bridge_name='br-int',has_traffic_filtering=True,id=72ebf84f-114c-481c-8735-4ba8278ccfdb,network=Network(3c5d4817-c3d5-45fc-9890-418e779bacb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ebf84f-11')
Dec 06 07:44:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/289831042' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:43 compute-0 ceph-mon[74339]: pgmap v2683: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 33 KiB/s wr, 36 op/s
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.626 251996 DEBUG nova.compute.manager [req-d8b101d2-2ed1-4c67-ad5b-382b82b62bbf req-a566d17a-afbf-4f11-bc52-24706ce63f6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Received event network-vif-unplugged-72ebf84f-114c-481c-8735-4ba8278ccfdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.627 251996 DEBUG oslo_concurrency.lockutils [req-d8b101d2-2ed1-4c67-ad5b-382b82b62bbf req-a566d17a-afbf-4f11-bc52-24706ce63f6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.628 251996 DEBUG oslo_concurrency.lockutils [req-d8b101d2-2ed1-4c67-ad5b-382b82b62bbf req-a566d17a-afbf-4f11-bc52-24706ce63f6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.628 251996 DEBUG oslo_concurrency.lockutils [req-d8b101d2-2ed1-4c67-ad5b-382b82b62bbf req-a566d17a-afbf-4f11-bc52-24706ce63f6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.629 251996 DEBUG nova.compute.manager [req-d8b101d2-2ed1-4c67-ad5b-382b82b62bbf req-a566d17a-afbf-4f11-bc52-24706ce63f6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] No waiting events found dispatching network-vif-unplugged-72ebf84f-114c-481c-8735-4ba8278ccfdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.629 251996 DEBUG nova.compute.manager [req-d8b101d2-2ed1-4c67-ad5b-382b82b62bbf req-a566d17a-afbf-4f11-bc52-24706ce63f6d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Received event network-vif-unplugged-72ebf84f-114c-481c-8735-4ba8278ccfdb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.926 251996 DEBUG oslo_concurrency.processutils [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/disk.config.rescue 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:43 compute-0 nova_compute[251992]: 2025-12-06 07:44:43.929 251996 INFO nova.virt.libvirt.driver [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Deleting local config drive /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9/disk.config.rescue because it was imported into RBD.
Dec 06 07:44:44 compute-0 kernel: tap450480d9-e0: entered promiscuous mode
Dec 06 07:44:44 compute-0 systemd-udevd[349940]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:44:44 compute-0 NetworkManager[48965]: <info>  [1765007084.0030] manager: (tap450480d9-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Dec 06 07:44:44 compute-0 NetworkManager[48965]: <info>  [1765007084.0116] device (tap450480d9-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:44:44 compute-0 NetworkManager[48965]: <info>  [1765007084.0125] device (tap450480d9-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.046 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:44 compute-0 ovn_controller[147168]: 2025-12-06T07:44:44Z|00557|binding|INFO|Claiming lport 450480d9-e0c3-414d-ba7e-8b996711a653 for this chassis.
Dec 06 07:44:44 compute-0 ovn_controller[147168]: 2025-12-06T07:44:44Z|00558|binding|INFO|450480d9-e0c3-414d-ba7e-8b996711a653: Claiming fa:16:3e:ed:3b:e9 10.100.0.3
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.048 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.055 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:3b:e9 10.100.0.3'], port_security=['fa:16:3e:ed:3b:e9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '53cabacd-b2a5-4ad1-a97a-0d0710d43bf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=450480d9-e0c3-414d-ba7e-8b996711a653) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.056 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 450480d9-e0c3-414d-ba7e-8b996711a653 in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 bound to our chassis
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.057 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.066 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a77b34b3-7cbb-4191-b16d-211a68420855]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.067 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d1a17d6-51 in ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.069 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d1a17d6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.069 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[098bd208-da89-4198-8819-e6ce6c8a458c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.070 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[71a6996b-7160-4609-9280-96d45ffca3b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 systemd-machined[212986]: New machine qemu-71-instance-0000009a.
Dec 06 07:44:44 compute-0 ovn_controller[147168]: 2025-12-06T07:44:44Z|00559|binding|INFO|Setting lport 450480d9-e0c3-414d-ba7e-8b996711a653 ovn-installed in OVS
Dec 06 07:44:44 compute-0 ovn_controller[147168]: 2025-12-06T07:44:44Z|00560|binding|INFO|Setting lport 450480d9-e0c3-414d-ba7e-8b996711a653 up in Southbound
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.073 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.081 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[b03ef8fa-7fe2-4f93-8210-6d68832218ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-0000009a.
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.103 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe6601b-7cde-4287-9a30-f4baf92e5c70]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.133 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb527cf-1f15-4cdb-9642-d95bdb0759cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 NetworkManager[48965]: <info>  [1765007084.1396] manager: (tap6d1a17d6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/258)
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.138 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f92091d8-27ed-4acb-95c8-3a06dcd39563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.171 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[816183bb-530a-4948-ab76-166be9eed0b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.174 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[368ed9ec-4e74-4aa7-99f2-60b7592b1899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 NetworkManager[48965]: <info>  [1765007084.1951] device (tap6d1a17d6-50): carrier: link connected
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.203 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[df5ec37f-adec-4179-b29e-c1c57df766de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.220 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed915ec-838b-49af-89aa-09b1a5b95fcf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735678, 'reachable_time': 30010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350087, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:44.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.247 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[aec2f046-4176-4867-a1a9-61223cb8af33]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:a2f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 735678, 'tstamp': 735678}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350088, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.264 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[96f9e7e6-251c-4bae-8356-73b9c1aba08b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735678, 'reachable_time': 30010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350089, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 sshd-session[349897]: Invalid user test2 from 91.202.233.33 port 26000
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.298 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[70817f7e-8243-4abb-ab25-c58cb30cff6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.356 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9be941d9-610b-4358-a656-540cda77a041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.357 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.358 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.358 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:44 compute-0 kernel: tap6d1a17d6-50: entered promiscuous mode
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.360 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:44 compute-0 NetworkManager[48965]: <info>  [1765007084.3627] manager: (tap6d1a17d6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.362 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.364 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.365 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:44 compute-0 ovn_controller[147168]: 2025-12-06T07:44:44Z|00561|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.382 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.383 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.384 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[090b0191-4ad0-4cff-917f-d97d477b6c87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.385 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:44:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:44.385 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'env', 'PROCESS_TAG=haproxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d1a17d6-5e44-40b7-832a-81cb86c02e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:44:44 compute-0 sshd-session[349897]: Connection reset by invalid user test2 91.202.233.33 port 26000 [preauth]
Dec 06 07:44:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:44 compute-0 podman[350180]: 2025-12-06 07:44:44.794232803 +0000 UTC m=+0.050681871 container create 543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.820 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.821 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007084.8199394, 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.821 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] VM Resumed (Lifecycle Event)
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.825 251996 DEBUG nova.compute.manager [None req-d508736f-4789-4bf5-baa7-bfea41b2f738 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:44 compute-0 systemd[1]: Started libpod-conmon-543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08.scope.
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.859 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:44:44 compute-0 podman[350180]: 2025-12-06 07:44:44.767448459 +0000 UTC m=+0.023897547 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.863 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12ffcbb08530249939dfbd46a6514b4b6e844a5989c4e9d348537755a9dbbd73/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:44 compute-0 podman[350180]: 2025-12-06 07:44:44.879355349 +0000 UTC m=+0.135804437 container init 543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:44:44 compute-0 podman[350180]: 2025-12-06 07:44:44.887213654 +0000 UTC m=+0.143662732 container start 543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.897 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.898 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007084.8208814, 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.898 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] VM Started (Lifecycle Event)
Dec 06 07:44:44 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350196]: [NOTICE]   (350200) : New worker (350202) forked
Dec 06 07:44:44 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350196]: [NOTICE]   (350200) : Loading success.
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.923 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:44 compute-0 nova_compute[251992]: 2025-12-06 07:44:44.927 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:44:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 541 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 221 KiB/s wr, 54 op/s
Dec 06 07:44:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:45.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:45 compute-0 ceph-mon[74339]: pgmap v2684: 305 pgs: 305 active+clean; 541 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 221 KiB/s wr, 54 op/s
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.283 251996 DEBUG nova.compute.manager [req-433d7b61-9c97-474d-a6b1-26984c3bd12f req-f389f5ea-1725-46b2-8454-0576c146385f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.285 251996 DEBUG oslo_concurrency.lockutils [req-433d7b61-9c97-474d-a6b1-26984c3bd12f req-f389f5ea-1725-46b2-8454-0576c146385f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.286 251996 DEBUG oslo_concurrency.lockutils [req-433d7b61-9c97-474d-a6b1-26984c3bd12f req-f389f5ea-1725-46b2-8454-0576c146385f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.286 251996 DEBUG oslo_concurrency.lockutils [req-433d7b61-9c97-474d-a6b1-26984c3bd12f req-f389f5ea-1725-46b2-8454-0576c146385f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.287 251996 DEBUG nova.compute.manager [req-433d7b61-9c97-474d-a6b1-26984c3bd12f req-f389f5ea-1725-46b2-8454-0576c146385f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] No waiting events found dispatching network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.287 251996 WARNING nova.compute.manager [req-433d7b61-9c97-474d-a6b1-26984c3bd12f req-f389f5ea-1725-46b2-8454-0576c146385f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received unexpected event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 for instance with vm_state rescued and task_state None.
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.705 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.828 251996 DEBUG nova.compute.manager [req-27f255c9-c18c-49a5-a758-7481c3532de9 req-a8af826c-ec91-4b68-8269-bfae96e0563d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Received event network-vif-plugged-72ebf84f-114c-481c-8735-4ba8278ccfdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.829 251996 DEBUG oslo_concurrency.lockutils [req-27f255c9-c18c-49a5-a758-7481c3532de9 req-a8af826c-ec91-4b68-8269-bfae96e0563d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.829 251996 DEBUG oslo_concurrency.lockutils [req-27f255c9-c18c-49a5-a758-7481c3532de9 req-a8af826c-ec91-4b68-8269-bfae96e0563d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.830 251996 DEBUG oslo_concurrency.lockutils [req-27f255c9-c18c-49a5-a758-7481c3532de9 req-a8af826c-ec91-4b68-8269-bfae96e0563d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.830 251996 DEBUG nova.compute.manager [req-27f255c9-c18c-49a5-a758-7481c3532de9 req-a8af826c-ec91-4b68-8269-bfae96e0563d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] No waiting events found dispatching network-vif-plugged-72ebf84f-114c-481c-8735-4ba8278ccfdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:45 compute-0 nova_compute[251992]: 2025-12-06 07:44:45.830 251996 WARNING nova.compute.manager [req-27f255c9-c18c-49a5-a758-7481c3532de9 req-a8af826c-ec91-4b68-8269-bfae96e0563d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Received unexpected event network-vif-plugged-72ebf84f-114c-481c-8735-4ba8278ccfdb for instance with vm_state active and task_state deleting.
Dec 06 07:44:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:46.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4294926738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2944281054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:46 compute-0 nova_compute[251992]: 2025-12-06 07:44:46.237 251996 INFO nova.virt.libvirt.driver [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Deleting instance files /var/lib/nova/instances/f3e780ab-f17f-4ecf-908b-16e88419d5f4_del
Dec 06 07:44:46 compute-0 nova_compute[251992]: 2025-12-06 07:44:46.238 251996 INFO nova.virt.libvirt.driver [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Deletion of /var/lib/nova/instances/f3e780ab-f17f-4ecf-908b-16e88419d5f4_del complete
Dec 06 07:44:46 compute-0 nova_compute[251992]: 2025-12-06 07:44:46.300 251996 INFO nova.compute.manager [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Took 4.10 seconds to destroy the instance on the hypervisor.
Dec 06 07:44:46 compute-0 nova_compute[251992]: 2025-12-06 07:44:46.301 251996 DEBUG oslo.service.loopingcall [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:44:46 compute-0 nova_compute[251992]: 2025-12-06 07:44:46.301 251996 DEBUG nova.compute.manager [-] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:44:46 compute-0 nova_compute[251992]: 2025-12-06 07:44:46.302 251996 DEBUG nova.network.neutron [-] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:44:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 527 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 205 KiB/s wr, 49 op/s
Dec 06 07:44:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:47.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1507784745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:44:47 compute-0 ceph-mon[74339]: pgmap v2685: 305 pgs: 305 active+clean; 527 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 205 KiB/s wr, 49 op/s
Dec 06 07:44:47 compute-0 nova_compute[251992]: 2025-12-06 07:44:47.623 251996 DEBUG nova.compute.manager [req-ee0b3334-1c39-4a01-8263-51f3c08aceb6 req-e49a82c4-a4d1-48c3-824d-6a70c16447f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:47 compute-0 nova_compute[251992]: 2025-12-06 07:44:47.624 251996 DEBUG oslo_concurrency.lockutils [req-ee0b3334-1c39-4a01-8263-51f3c08aceb6 req-e49a82c4-a4d1-48c3-824d-6a70c16447f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:47 compute-0 nova_compute[251992]: 2025-12-06 07:44:47.625 251996 DEBUG oslo_concurrency.lockutils [req-ee0b3334-1c39-4a01-8263-51f3c08aceb6 req-e49a82c4-a4d1-48c3-824d-6a70c16447f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:47 compute-0 nova_compute[251992]: 2025-12-06 07:44:47.625 251996 DEBUG oslo_concurrency.lockutils [req-ee0b3334-1c39-4a01-8263-51f3c08aceb6 req-e49a82c4-a4d1-48c3-824d-6a70c16447f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:47 compute-0 nova_compute[251992]: 2025-12-06 07:44:47.626 251996 DEBUG nova.compute.manager [req-ee0b3334-1c39-4a01-8263-51f3c08aceb6 req-e49a82c4-a4d1-48c3-824d-6a70c16447f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] No waiting events found dispatching network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:47 compute-0 nova_compute[251992]: 2025-12-06 07:44:47.626 251996 WARNING nova.compute.manager [req-ee0b3334-1c39-4a01-8263-51f3c08aceb6 req-e49a82c4-a4d1-48c3-824d-6a70c16447f2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received unexpected event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 for instance with vm_state rescued and task_state None.
Dec 06 07:44:47 compute-0 nova_compute[251992]: 2025-12-06 07:44:47.777 251996 INFO nova.compute.manager [None req-5fbf03cb-4331-4ee8-8476-a82c85457a85 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Unrescuing
Dec 06 07:44:47 compute-0 nova_compute[251992]: 2025-12-06 07:44:47.778 251996 DEBUG oslo_concurrency.lockutils [None req-5fbf03cb-4331-4ee8-8476-a82c85457a85 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:44:47 compute-0 nova_compute[251992]: 2025-12-06 07:44:47.778 251996 DEBUG oslo_concurrency.lockutils [None req-5fbf03cb-4331-4ee8-8476-a82c85457a85 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquired lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:44:47 compute-0 nova_compute[251992]: 2025-12-06 07:44:47.779 251996 DEBUG nova.network.neutron [None req-5fbf03cb-4331-4ee8-8476-a82c85457a85 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:44:48 compute-0 nova_compute[251992]: 2025-12-06 07:44:48.173 251996 DEBUG nova.network.neutron [-] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:44:48 compute-0 nova_compute[251992]: 2025-12-06 07:44:48.201 251996 INFO nova.compute.manager [-] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Took 1.90 seconds to deallocate network for instance.
Dec 06 07:44:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:48.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:48 compute-0 nova_compute[251992]: 2025-12-06 07:44:48.267 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:48 compute-0 nova_compute[251992]: 2025-12-06 07:44:48.292 251996 DEBUG oslo_concurrency.lockutils [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:48 compute-0 nova_compute[251992]: 2025-12-06 07:44:48.293 251996 DEBUG oslo_concurrency.lockutils [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:48 compute-0 nova_compute[251992]: 2025-12-06 07:44:48.411 251996 DEBUG nova.scheduler.client.report [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:44:48 compute-0 nova_compute[251992]: 2025-12-06 07:44:48.431 251996 DEBUG nova.compute.manager [req-6c244679-f930-4c88-85df-a897f6f6299d req-279645a7-929d-4790-b566-d7974c636da9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Received event network-vif-deleted-72ebf84f-114c-481c-8735-4ba8278ccfdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:48 compute-0 nova_compute[251992]: 2025-12-06 07:44:48.661 251996 DEBUG nova.scheduler.client.report [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:44:48 compute-0 nova_compute[251992]: 2025-12-06 07:44:48.661 251996 DEBUG nova.compute.provider_tree [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:44:48 compute-0 nova_compute[251992]: 2025-12-06 07:44:48.793 251996 DEBUG nova.scheduler.client.report [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:44:48 compute-0 nova_compute[251992]: 2025-12-06 07:44:48.825 251996 DEBUG nova.scheduler.client.report [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:44:48 compute-0 nova_compute[251992]: 2025-12-06 07:44:48.940 251996 DEBUG oslo_concurrency.processutils [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:44:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 510 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 521 KiB/s wr, 99 op/s
Dec 06 07:44:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:49.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:44:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3847219544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:49 compute-0 nova_compute[251992]: 2025-12-06 07:44:49.421 251996 DEBUG oslo_concurrency.processutils [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:44:49 compute-0 nova_compute[251992]: 2025-12-06 07:44:49.427 251996 DEBUG nova.compute.provider_tree [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:44:49 compute-0 nova_compute[251992]: 2025-12-06 07:44:49.455 251996 DEBUG nova.scheduler.client.report [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:44:49 compute-0 nova_compute[251992]: 2025-12-06 07:44:49.509 251996 DEBUG oslo_concurrency.lockutils [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:49 compute-0 nova_compute[251992]: 2025-12-06 07:44:49.592 251996 INFO nova.scheduler.client.report [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Deleted allocations for instance f3e780ab-f17f-4ecf-908b-16e88419d5f4
Dec 06 07:44:49 compute-0 nova_compute[251992]: 2025-12-06 07:44:49.692 251996 DEBUG oslo_concurrency.lockutils [None req-e56c7d60-ab33-45a0-96da-c5ad9339debc 297bc99c242e4fa8aedea4a6367b61c0 741dc47f9ced423cbd99fd6f9d32904f - - default default] Lock "f3e780ab-f17f-4ecf-908b-16e88419d5f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:50 compute-0 ceph-mon[74339]: pgmap v2686: 305 pgs: 305 active+clean; 510 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 521 KiB/s wr, 99 op/s
Dec 06 07:44:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3847219544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:44:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:50.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:50 compute-0 nova_compute[251992]: 2025-12-06 07:44:50.758 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Dec 06 07:44:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:51.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.145 251996 DEBUG nova.network.neutron [None req-5fbf03cb-4331-4ee8-8476-a82c85457a85 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Updating instance_info_cache with network_info: [{"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.170 251996 DEBUG oslo_concurrency.lockutils [None req-5fbf03cb-4331-4ee8-8476-a82c85457a85 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Releasing lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.171 251996 DEBUG nova.objects.instance [None req-5fbf03cb-4331-4ee8-8476-a82c85457a85 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'flavor' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:51 compute-0 kernel: tap450480d9-e0 (unregistering): left promiscuous mode
Dec 06 07:44:51 compute-0 NetworkManager[48965]: <info>  [1765007091.2667] device (tap450480d9-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:44:51 compute-0 ovn_controller[147168]: 2025-12-06T07:44:51Z|00562|binding|INFO|Releasing lport 450480d9-e0c3-414d-ba7e-8b996711a653 from this chassis (sb_readonly=0)
Dec 06 07:44:51 compute-0 ovn_controller[147168]: 2025-12-06T07:44:51Z|00563|binding|INFO|Setting lport 450480d9-e0c3-414d-ba7e-8b996711a653 down in Southbound
Dec 06 07:44:51 compute-0 ovn_controller[147168]: 2025-12-06T07:44:51Z|00564|binding|INFO|Removing iface tap450480d9-e0 ovn-installed in OVS
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.274 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.277 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.288 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:3b:e9 10.100.0.3'], port_security=['fa:16:3e:ed:3b:e9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '53cabacd-b2a5-4ad1-a97a-0d0710d43bf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=450480d9-e0c3-414d-ba7e-8b996711a653) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.289 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 450480d9-e0c3-414d-ba7e-8b996711a653 in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.290 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.291 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[75e1358e-c58c-4d7b-bf6e-e757750faec9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.291 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace which is not needed anymore
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.297 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Dec 06 07:44:51 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000009a.scope: Consumed 6.945s CPU time.
Dec 06 07:44:51 compute-0 systemd-machined[212986]: Machine qemu-71-instance-0000009a terminated.
Dec 06 07:44:51 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350196]: [NOTICE]   (350200) : haproxy version is 2.8.14-c23fe91
Dec 06 07:44:51 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350196]: [NOTICE]   (350200) : path to executable is /usr/sbin/haproxy
Dec 06 07:44:51 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350196]: [WARNING]  (350200) : Exiting Master process...
Dec 06 07:44:51 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350196]: [WARNING]  (350200) : Exiting Master process...
Dec 06 07:44:51 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350196]: [ALERT]    (350200) : Current worker (350202) exited with code 143 (Terminated)
Dec 06 07:44:51 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350196]: [WARNING]  (350200) : All workers exited. Exiting... (0)
Dec 06 07:44:51 compute-0 systemd[1]: libpod-543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08.scope: Deactivated successfully.
Dec 06 07:44:51 compute-0 podman[350264]: 2025-12-06 07:44:51.415015652 +0000 UTC m=+0.042838477 container died 543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:44:51 compute-0 kernel: tap450480d9-e0: entered promiscuous mode
Dec 06 07:44:51 compute-0 kernel: tap450480d9-e0 (unregistering): left promiscuous mode
Dec 06 07:44:51 compute-0 NetworkManager[48965]: <info>  [1765007091.4325] manager: (tap450480d9-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.435 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08-userdata-shm.mount: Deactivated successfully.
Dec 06 07:44:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-12ffcbb08530249939dfbd46a6514b4b6e844a5989c4e9d348537755a9dbbd73-merged.mount: Deactivated successfully.
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.449 251996 INFO nova.virt.libvirt.driver [-] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Instance destroyed successfully.
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.450 251996 DEBUG nova.objects.instance [None req-5fbf03cb-4331-4ee8-8476-a82c85457a85 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'numa_topology' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:44:51 compute-0 podman[350264]: 2025-12-06 07:44:51.468091657 +0000 UTC m=+0.095914482 container cleanup 543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:44:51 compute-0 systemd[1]: libpod-conmon-543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08.scope: Deactivated successfully.
Dec 06 07:44:51 compute-0 podman[350303]: 2025-12-06 07:44:51.526194262 +0000 UTC m=+0.037974233 container remove 543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.532 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[764aea5f-857a-4a39-850b-957131dbb23e]: (4, ('Sat Dec  6 07:44:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08)\n543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08\nSat Dec  6 07:44:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08)\n543f0398da15f8ece3e5186dc065ca40a199cec341d8d5738f33818c8b338f08\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.533 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[15f9f92c-d42b-473a-8b95-20f302be21fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.534 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:51 compute-0 kernel: tap6d1a17d6-50: left promiscuous mode
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.536 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 NetworkManager[48965]: <info>  [1765007091.5480] manager: (tap450480d9-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Dec 06 07:44:51 compute-0 systemd-udevd[350245]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.554 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 kernel: tap450480d9-e0: entered promiscuous mode
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.556 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6d28ae7b-c35b-4bf8-ba68-289a6a0a671d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 NetworkManager[48965]: <info>  [1765007091.5591] device (tap450480d9-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:44:51 compute-0 ovn_controller[147168]: 2025-12-06T07:44:51Z|00565|binding|INFO|Claiming lport 450480d9-e0c3-414d-ba7e-8b996711a653 for this chassis.
Dec 06 07:44:51 compute-0 ovn_controller[147168]: 2025-12-06T07:44:51Z|00566|binding|INFO|450480d9-e0c3-414d-ba7e-8b996711a653: Claiming fa:16:3e:ed:3b:e9 10.100.0.3
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.560 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 NetworkManager[48965]: <info>  [1765007091.5635] device (tap450480d9-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.571 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[03d796da-9dd4-4bb0-8633-89cc6fa56456]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.572 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9aa850-25fa-4fa4-93f9-5c211e6cfc18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.574 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:3b:e9 10.100.0.3'], port_security=['fa:16:3e:ed:3b:e9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '53cabacd-b2a5-4ad1-a97a-0d0710d43bf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=450480d9-e0c3-414d-ba7e-8b996711a653) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:44:51 compute-0 ovn_controller[147168]: 2025-12-06T07:44:51Z|00567|binding|INFO|Setting lport 450480d9-e0c3-414d-ba7e-8b996711a653 ovn-installed in OVS
Dec 06 07:44:51 compute-0 ovn_controller[147168]: 2025-12-06T07:44:51Z|00568|binding|INFO|Setting lport 450480d9-e0c3-414d-ba7e-8b996711a653 up in Southbound
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.579 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 systemd-machined[212986]: New machine qemu-72-instance-0000009a.
Dec 06 07:44:51 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-0000009a.
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.589 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7b6d83-3bbf-4f97-a2d5-fb5752eb8ab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 735671, 'reachable_time': 32066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350333, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d6d1a17d6\x2d5e44\x2d40b7\x2d832a\x2d81cb86c02e71.mount: Deactivated successfully.
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.592 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.592 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[4813237e-99f9-452f-a9b4-7014b325b6e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.592 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 450480d9-e0c3-414d-ba7e-8b996711a653 in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 bound to our chassis
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.594 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.603 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ed103376-8520-46e5-b49f-02691223e993]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.604 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d1a17d6-51 in ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.608 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d1a17d6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.608 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e00714cb-b81e-497e-84ce-255142b1886b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.610 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5729baf8-3280-4d0a-8702-677f49b8160f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.622 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[fe9c7f63-fdfb-4acb-95c3-a2121acedc30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.634 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[221f3908-91c2-4e27-af72-04b0c3c3828e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.662 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7a673fa3-ac71-4d38-b0d6-1f03030152fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 NetworkManager[48965]: <info>  [1765007091.6700] manager: (tap6d1a17d6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/262)
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.668 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[11af26b5-f505-4cce-a3e0-b44fbe01d86c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 podman[350331]: 2025-12-06 07:44:51.694484589 +0000 UTC m=+0.095666015 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.698 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ea11f945-c9e2-4ec0-8b05-032e2f707287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.701 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e49b3603-d17b-4568-9180-fc7d365419b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.725 251996 DEBUG nova.compute.manager [req-a133665f-a273-403e-a5af-6f25e894718a req-11a7113c-0a0a-4d6e-821f-29bc7b924a7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-unplugged-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.725 251996 DEBUG oslo_concurrency.lockutils [req-a133665f-a273-403e-a5af-6f25e894718a req-11a7113c-0a0a-4d6e-821f-29bc7b924a7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.725 251996 DEBUG oslo_concurrency.lockutils [req-a133665f-a273-403e-a5af-6f25e894718a req-11a7113c-0a0a-4d6e-821f-29bc7b924a7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.726 251996 DEBUG oslo_concurrency.lockutils [req-a133665f-a273-403e-a5af-6f25e894718a req-11a7113c-0a0a-4d6e-821f-29bc7b924a7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.726 251996 DEBUG nova.compute.manager [req-a133665f-a273-403e-a5af-6f25e894718a req-11a7113c-0a0a-4d6e-821f-29bc7b924a7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] No waiting events found dispatching network-vif-unplugged-450480d9-e0c3-414d-ba7e-8b996711a653 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.726 251996 WARNING nova.compute.manager [req-a133665f-a273-403e-a5af-6f25e894718a req-11a7113c-0a0a-4d6e-821f-29bc7b924a7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received unexpected event network-vif-unplugged-450480d9-e0c3-414d-ba7e-8b996711a653 for instance with vm_state rescued and task_state unrescuing.
Dec 06 07:44:51 compute-0 NetworkManager[48965]: <info>  [1765007091.7269] device (tap6d1a17d6-50): carrier: link connected
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.732 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[5023a557-82b5-48d0-90e7-31c8aa156316]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.748 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[01784c46-1138-4cc7-9580-dd350fe887dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736431, 'reachable_time': 37333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350387, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.760 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5af48117-6812-4af5-84e0-6846e8ed0a73]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:a2f6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736431, 'tstamp': 736431}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350388, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.777 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[631cb480-d42e-4020-9ea5-de51c6f0f40f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736431, 'reachable_time': 37333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350389, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.806 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5327b362-cca0-43a9-b886-25d855a024a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.862 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffb8cbd-7d70-4f63-be00-78b3f52664b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.863 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.863 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.864 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.904 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 NetworkManager[48965]: <info>  [1765007091.9064] manager: (tap6d1a17d6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Dec 06 07:44:51 compute-0 kernel: tap6d1a17d6-50: entered promiscuous mode
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.908 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.909 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.911 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 ovn_controller[147168]: 2025-12-06T07:44:51Z|00569|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.934 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.935 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.936 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecf11fb-681e-4cb0-adad-eb1d2d069d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.937 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/6d1a17d6-5e44-40b7-832a-81cb86c02e71.pid.haproxy
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:44:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:44:51.937 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'env', 'PROCESS_TAG=haproxy-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d1a17d6-5e44-40b7-832a-81cb86c02e71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.994 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.994 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007091.9919763, 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:44:51 compute-0 nova_compute[251992]: 2025-12-06 07:44:51.995 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] VM Resumed (Lifecycle Event)
Dec 06 07:44:52 compute-0 nova_compute[251992]: 2025-12-06 07:44:52.018 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:52 compute-0 nova_compute[251992]: 2025-12-06 07:44:52.022 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:44:52 compute-0 nova_compute[251992]: 2025-12-06 07:44:52.046 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] During sync_power_state the instance has a pending task (unrescuing). Skip.
Dec 06 07:44:52 compute-0 nova_compute[251992]: 2025-12-06 07:44:52.047 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007091.9942014, 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:44:52 compute-0 nova_compute[251992]: 2025-12-06 07:44:52.047 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] VM Started (Lifecycle Event)
Dec 06 07:44:52 compute-0 nova_compute[251992]: 2025-12-06 07:44:52.080 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:52 compute-0 nova_compute[251992]: 2025-12-06 07:44:52.084 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:44:52 compute-0 ceph-mon[74339]: pgmap v2687: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Dec 06 07:44:52 compute-0 nova_compute[251992]: 2025-12-06 07:44:52.114 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] During sync_power_state the instance has a pending task (unrescuing). Skip.
Dec 06 07:44:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:52.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:52 compute-0 podman[350481]: 2025-12-06 07:44:52.34432538 +0000 UTC m=+0.053057677 container create d2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:44:52 compute-0 systemd[1]: Started libpod-conmon-d2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3.scope.
Dec 06 07:44:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:44:52 compute-0 podman[350481]: 2025-12-06 07:44:52.321370261 +0000 UTC m=+0.030102588 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:44:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4920e51eacefa4a428811437aac51635418e747a2d1cdb2c3b3e73f60176450/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:44:52 compute-0 podman[350481]: 2025-12-06 07:44:52.432289775 +0000 UTC m=+0.141022082 container init d2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:44:52 compute-0 podman[350481]: 2025-12-06 07:44:52.437443855 +0000 UTC m=+0.146176152 container start d2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:44:52 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350497]: [NOTICE]   (350501) : New worker (350503) forked
Dec 06 07:44:52 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350497]: [NOTICE]   (350501) : Loading success.
Dec 06 07:44:52 compute-0 nova_compute[251992]: 2025-12-06 07:44:52.718 251996 DEBUG nova.compute.manager [None req-5fbf03cb-4331-4ee8-8476-a82c85457a85 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Dec 06 07:44:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:53.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:53 compute-0 ceph-mon[74339]: pgmap v2688: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.270 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.851 251996 DEBUG nova.compute.manager [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.852 251996 DEBUG oslo_concurrency.lockutils [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.852 251996 DEBUG oslo_concurrency.lockutils [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.852 251996 DEBUG oslo_concurrency.lockutils [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.853 251996 DEBUG nova.compute.manager [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] No waiting events found dispatching network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.853 251996 WARNING nova.compute.manager [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received unexpected event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 for instance with vm_state active and task_state None.
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.853 251996 DEBUG nova.compute.manager [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.854 251996 DEBUG oslo_concurrency.lockutils [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.854 251996 DEBUG oslo_concurrency.lockutils [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.854 251996 DEBUG oslo_concurrency.lockutils [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.855 251996 DEBUG nova.compute.manager [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] No waiting events found dispatching network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.855 251996 WARNING nova.compute.manager [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received unexpected event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 for instance with vm_state active and task_state None.
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.855 251996 DEBUG nova.compute.manager [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.855 251996 DEBUG oslo_concurrency.lockutils [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.856 251996 DEBUG oslo_concurrency.lockutils [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.856 251996 DEBUG oslo_concurrency.lockutils [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.856 251996 DEBUG nova.compute.manager [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] No waiting events found dispatching network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:44:53 compute-0 nova_compute[251992]: 2025-12-06 07:44:53.857 251996 WARNING nova.compute.manager [req-6b270712-c1a9-40d4-9680-c39e0487522b req-0f972585-7396-4d4d-a109-2bb6f1c848bf 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received unexpected event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 for instance with vm_state active and task_state None.
Dec 06 07:44:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:54.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:44:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 213 op/s
Dec 06 07:44:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:55.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:55 compute-0 nova_compute[251992]: 2025-12-06 07:44:55.759 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:56 compute-0 ceph-mon[74339]: pgmap v2689: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 213 op/s
Dec 06 07:44:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:56.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 219 op/s
Dec 06 07:44:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:57.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:58 compute-0 ceph-mon[74339]: pgmap v2690: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 219 op/s
Dec 06 07:44:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:44:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:44:58.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:44:58 compute-0 nova_compute[251992]: 2025-12-06 07:44:58.242 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007083.2413883, f3e780ab-f17f-4ecf-908b-16e88419d5f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:44:58 compute-0 nova_compute[251992]: 2025-12-06 07:44:58.242 251996 INFO nova.compute.manager [-] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] VM Stopped (Lifecycle Event)
Dec 06 07:44:58 compute-0 nova_compute[251992]: 2025-12-06 07:44:58.274 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:44:58 compute-0 nova_compute[251992]: 2025-12-06 07:44:58.285 251996 DEBUG nova.compute.manager [None req-d6a804f9-335e-4460-a40f-80aa5d1c13c7 - - - - - -] [instance: f3e780ab-f17f-4ecf-908b-16e88419d5f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:44:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 225 op/s
Dec 06 07:44:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:44:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:44:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:44:59.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:44:59 compute-0 ceph-mon[74339]: pgmap v2691: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 225 op/s
Dec 06 07:44:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:00.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:00 compute-0 podman[350516]: 2025-12-06 07:45:00.396924866 +0000 UTC m=+0.056451180 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 06 07:45:00 compute-0 podman[350517]: 2025-12-06 07:45:00.412859103 +0000 UTC m=+0.072529450 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:45:00 compute-0 nova_compute[251992]: 2025-12-06 07:45:00.762 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 176 op/s
Dec 06 07:45:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:01.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:01 compute-0 sudo[350553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:01 compute-0 sudo[350553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:01 compute-0 sudo[350553]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:01 compute-0 sudo[350578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:01 compute-0 sudo[350578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:01 compute-0 sudo[350578]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:01 compute-0 ceph-mon[74339]: pgmap v2692: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 176 op/s
Dec 06 07:45:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:02.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:02 compute-0 nova_compute[251992]: 2025-12-06 07:45:02.904 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:02 compute-0 nova_compute[251992]: 2025-12-06 07:45:02.905 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:02 compute-0 nova_compute[251992]: 2025-12-06 07:45:02.924 251996 DEBUG nova.compute.manager [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:45:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.7 KiB/s wr, 112 op/s
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.031 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.032 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.049 251996 DEBUG nova.virt.hardware [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.051 251996 INFO nova.compute.claims [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:45:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:03.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.274 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.309 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:45:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/485716860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.785 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.790 251996 DEBUG nova.compute.provider_tree [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.812 251996 DEBUG nova.scheduler.client.report [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:45:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:03.852 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:03.853 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:03.854 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.906 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.908 251996 DEBUG nova.compute.manager [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.999 251996 DEBUG nova.compute.manager [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:45:03 compute-0 nova_compute[251992]: 2025-12-06 07:45:03.999 251996 DEBUG nova.network.neutron [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:45:04 compute-0 ceph-mon[74339]: pgmap v2693: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.7 KiB/s wr, 112 op/s
Dec 06 07:45:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/485716860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2923747230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.088 251996 INFO nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.126 251996 DEBUG nova.compute.manager [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:45:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:04.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.392 251996 DEBUG nova.compute.manager [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.395 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.395 251996 INFO nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Creating image(s)
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.423 251996 DEBUG nova.storage.rbd_utils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.450 251996 DEBUG nova.storage.rbd_utils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.475 251996 DEBUG nova.storage.rbd_utils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.479 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.556 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.558 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.559 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.559 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.589 251996 DEBUG nova.storage.rbd_utils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.593 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.655 251996 DEBUG nova.policy [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e997a5eeee174b368a43ed8cb35fa1d0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f44ecb8bdc7e4692a299e29603301124', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:45:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.922 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:04 compute-0 nova_compute[251992]: 2025-12-06 07:45:04.988 251996 DEBUG nova.storage.rbd_utils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] resizing rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:45:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 576 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.5 MiB/s wr, 215 op/s
Dec 06 07:45:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:05.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:05 compute-0 nova_compute[251992]: 2025-12-06 07:45:05.126 251996 DEBUG nova.objects.instance [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'migration_context' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:05 compute-0 nova_compute[251992]: 2025-12-06 07:45:05.142 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:45:05 compute-0 nova_compute[251992]: 2025-12-06 07:45:05.143 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Ensure instance console log exists: /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:45:05 compute-0 nova_compute[251992]: 2025-12-06 07:45:05.144 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:05 compute-0 nova_compute[251992]: 2025-12-06 07:45:05.144 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:05 compute-0 nova_compute[251992]: 2025-12-06 07:45:05.144 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:05 compute-0 nova_compute[251992]: 2025-12-06 07:45:05.521 251996 DEBUG nova.network.neutron [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Successfully created port: 14826742-0679-403f-b2e4-28fb0f26527a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:45:05 compute-0 ovn_controller[147168]: 2025-12-06T07:45:05Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ed:3b:e9 10.100.0.3
Dec 06 07:45:05 compute-0 nova_compute[251992]: 2025-12-06 07:45:05.763 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:06 compute-0 ceph-mon[74339]: pgmap v2694: 305 pgs: 305 active+clean; 576 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.5 MiB/s wr, 215 op/s
Dec 06 07:45:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:06.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 603 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.0 MiB/s wr, 170 op/s
Dec 06 07:45:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:07.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:07 compute-0 ceph-mon[74339]: pgmap v2695: 305 pgs: 305 active+clean; 603 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.0 MiB/s wr, 170 op/s
Dec 06 07:45:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/177074201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:08.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:08 compute-0 nova_compute[251992]: 2025-12-06 07:45:08.313 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:08 compute-0 nova_compute[251992]: 2025-12-06 07:45:08.878 251996 DEBUG nova.network.neutron [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Successfully updated port: 14826742-0679-403f-b2e4-28fb0f26527a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:45:08 compute-0 nova_compute[251992]: 2025-12-06 07:45:08.902 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:45:08 compute-0 nova_compute[251992]: 2025-12-06 07:45:08.902 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquired lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:45:08 compute-0 nova_compute[251992]: 2025-12-06 07:45:08.902 251996 DEBUG nova.network.neutron [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:45:09 compute-0 nova_compute[251992]: 2025-12-06 07:45:09.016 251996 DEBUG nova.compute.manager [req-d864bc3a-7f81-4f10-854e-a6f8f8a2f08a req-0df6fb40-070f-4c4b-b965-be7be4f43320 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-changed-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:45:09 compute-0 nova_compute[251992]: 2025-12-06 07:45:09.016 251996 DEBUG nova.compute.manager [req-d864bc3a-7f81-4f10-854e-a6f8f8a2f08a req-0df6fb40-070f-4c4b-b965-be7be4f43320 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Refreshing instance network info cache due to event network-changed-14826742-0679-403f-b2e4-28fb0f26527a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:45:09 compute-0 nova_compute[251992]: 2025-12-06 07:45:09.016 251996 DEBUG oslo_concurrency.lockutils [req-d864bc3a-7f81-4f10-854e-a6f8f8a2f08a req-0df6fb40-070f-4c4b-b965-be7be4f43320 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:45:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 611 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.1 MiB/s wr, 185 op/s
Dec 06 07:45:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:09.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:09 compute-0 nova_compute[251992]: 2025-12-06 07:45:09.144 251996 DEBUG nova.network.neutron [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:45:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4026041978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2063318395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2063318395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:09 compute-0 ceph-mon[74339]: pgmap v2696: 305 pgs: 305 active+clean; 611 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.1 MiB/s wr, 185 op/s
Dec 06 07:45:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:10.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:10 compute-0 nova_compute[251992]: 2025-12-06 07:45:10.800 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 575 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 832 KiB/s rd, 7.0 MiB/s wr, 211 op/s
Dec 06 07:45:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:11.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:12.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:12 compute-0 ceph-mon[74339]: pgmap v2697: 305 pgs: 305 active+clean; 575 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 832 KiB/s rd, 7.0 MiB/s wr, 211 op/s
Dec 06 07:45:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 594 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 7.5 MiB/s wr, 242 op/s
Dec 06 07:45:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:45:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:45:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:45:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:45:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:45:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:45:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:13.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.316 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.565 251996 DEBUG nova.network.neutron [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Updating instance_info_cache with network_info: [{"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.589 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Releasing lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.589 251996 DEBUG nova.compute.manager [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Instance network_info: |[{"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.590 251996 DEBUG oslo_concurrency.lockutils [req-d864bc3a-7f81-4f10-854e-a6f8f8a2f08a req-0df6fb40-070f-4c4b-b965-be7be4f43320 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.590 251996 DEBUG nova.network.neutron [req-d864bc3a-7f81-4f10-854e-a6f8f8a2f08a req-0df6fb40-070f-4c4b-b965-be7be4f43320 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Refreshing network info cache for port 14826742-0679-403f-b2e4-28fb0f26527a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.594 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Start _get_guest_xml network_info=[{"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.599 251996 WARNING nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.603 251996 DEBUG nova.virt.libvirt.host [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.604 251996 DEBUG nova.virt.libvirt.host [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.611 251996 DEBUG nova.virt.libvirt.host [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.612 251996 DEBUG nova.virt.libvirt.host [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.613 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.613 251996 DEBUG nova.virt.hardware [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.614 251996 DEBUG nova.virt.hardware [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.614 251996 DEBUG nova.virt.hardware [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.614 251996 DEBUG nova.virt.hardware [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.614 251996 DEBUG nova.virt.hardware [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.615 251996 DEBUG nova.virt.hardware [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.615 251996 DEBUG nova.virt.hardware [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.615 251996 DEBUG nova.virt.hardware [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.615 251996 DEBUG nova.virt.hardware [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.616 251996 DEBUG nova.virt.hardware [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.616 251996 DEBUG nova.virt.hardware [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:45:13 compute-0 nova_compute[251992]: 2025-12-06 07:45:13.619 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:13 compute-0 ceph-mon[74339]: pgmap v2698: 305 pgs: 305 active+clean; 594 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 7.5 MiB/s wr, 242 op/s
Dec 06 07:45:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:45:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1826853894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.044 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.070 251996 DEBUG nova.storage.rbd_utils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.074 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:14.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:45:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1910726417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.550 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.552 251996 DEBUG nova.virt.libvirt.vif [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:45:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-592712215',display_name='tempest-ServerStableDeviceRescueTest-server-592712215',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-592712215',id=157,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-o0l8e5p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:45:04Z,user_data=None,user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=53b4413c-a38e-4ad9-9f1b-43babd1fe2a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.552 251996 DEBUG nova.network.os_vif_util [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.553 251996 DEBUG nova.network.os_vif_util [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:c3:cc,bridge_name='br-int',has_traffic_filtering=True,id=14826742-0679-403f-b2e4-28fb0f26527a,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14826742-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.554 251996 DEBUG nova.objects.instance [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'pci_devices' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.583 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:45:14 compute-0 nova_compute[251992]:   <uuid>53b4413c-a38e-4ad9-9f1b-43babd1fe2a5</uuid>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   <name>instance-0000009d</name>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-592712215</nova:name>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:45:13</nova:creationTime>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <nova:user uuid="e997a5eeee174b368a43ed8cb35fa1d0">tempest-ServerStableDeviceRescueTest-1830949011-project-member</nova:user>
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <nova:project uuid="f44ecb8bdc7e4692a299e29603301124">tempest-ServerStableDeviceRescueTest-1830949011</nova:project>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <nova:port uuid="14826742-0679-403f-b2e4-28fb0f26527a">
Dec 06 07:45:14 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <system>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <entry name="serial">53b4413c-a38e-4ad9-9f1b-43babd1fe2a5</entry>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <entry name="uuid">53b4413c-a38e-4ad9-9f1b-43babd1fe2a5</entry>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     </system>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   <os>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   </os>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   <features>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   </features>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk">
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       </source>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.config">
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       </source>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:45:14 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:9a:c3:cc"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <target dev="tap14826742-06"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/console.log" append="off"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <video>
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     </video>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:45:14 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:45:14 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:45:14 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:45:14 compute-0 nova_compute[251992]: </domain>
Dec 06 07:45:14 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.585 251996 DEBUG nova.compute.manager [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Preparing to wait for external event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.585 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.586 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.586 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.587 251996 DEBUG nova.virt.libvirt.vif [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:45:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-592712215',display_name='tempest-ServerStableDeviceRescueTest-server-592712215',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-592712215',id=157,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-o0l8e5p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:45:04Z,user_data=None,user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=53b4413c-a38e-4ad9-9f1b-43babd1fe2a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.587 251996 DEBUG nova.network.os_vif_util [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.588 251996 DEBUG nova.network.os_vif_util [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:c3:cc,bridge_name='br-int',has_traffic_filtering=True,id=14826742-0679-403f-b2e4-28fb0f26527a,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14826742-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.588 251996 DEBUG os_vif [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:c3:cc,bridge_name='br-int',has_traffic_filtering=True,id=14826742-0679-403f-b2e4-28fb0f26527a,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14826742-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.589 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.589 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.590 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.593 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.593 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14826742-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.594 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14826742-06, col_values=(('external_ids', {'iface-id': '14826742-0679-403f-b2e4-28fb0f26527a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:c3:cc', 'vm-uuid': '53b4413c-a38e-4ad9-9f1b-43babd1fe2a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.595 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:14 compute-0 NetworkManager[48965]: <info>  [1765007114.5967] manager: (tap14826742-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.598 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.602 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.603 251996 INFO os_vif [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:c3:cc,bridge_name='br-int',has_traffic_filtering=True,id=14826742-0679-403f-b2e4-28fb0f26527a,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14826742-06')
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.657 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.657 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.657 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No VIF found with MAC fa:16:3e:9a:c3:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.658 251996 INFO nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Using config drive
Dec 06 07:45:14 compute-0 nova_compute[251992]: 2025-12-06 07:45:14.681 251996 DEBUG nova.storage.rbd_utils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1826853894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1910726417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1731268524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 594 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 7.5 MiB/s wr, 294 op/s
Dec 06 07:45:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:15.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:15 compute-0 nova_compute[251992]: 2025-12-06 07:45:15.805 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.004 251996 INFO nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Creating config drive at /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/disk.config
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.011 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwo72gfz9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4129380359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:16 compute-0 ceph-mon[74339]: pgmap v2699: 305 pgs: 305 active+clean; 594 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 7.5 MiB/s wr, 294 op/s
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.153 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwo72gfz9" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.191 251996 DEBUG nova.storage.rbd_utils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.199 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/disk.config 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:16.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.368 251996 DEBUG oslo_concurrency.processutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/disk.config 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.370 251996 INFO nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Deleting local config drive /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/disk.config because it was imported into RBD.
Dec 06 07:45:16 compute-0 kernel: tap14826742-06: entered promiscuous mode
Dec 06 07:45:16 compute-0 NetworkManager[48965]: <info>  [1765007116.4486] manager: (tap14826742-06): new Tun device (/org/freedesktop/NetworkManager/Devices/265)
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.450 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:16 compute-0 ovn_controller[147168]: 2025-12-06T07:45:16Z|00570|binding|INFO|Claiming lport 14826742-0679-403f-b2e4-28fb0f26527a for this chassis.
Dec 06 07:45:16 compute-0 ovn_controller[147168]: 2025-12-06T07:45:16Z|00571|binding|INFO|14826742-0679-403f-b2e4-28fb0f26527a: Claiming fa:16:3e:9a:c3:cc 10.100.0.12
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.460 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:c3:cc 10.100.0.12'], port_security=['fa:16:3e:9a:c3:cc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '53b4413c-a38e-4ad9-9f1b-43babd1fe2a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=14826742-0679-403f-b2e4-28fb0f26527a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.462 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 14826742-0679-403f-b2e4-28fb0f26527a in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 bound to our chassis
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.464 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.469 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:16 compute-0 ovn_controller[147168]: 2025-12-06T07:45:16Z|00572|binding|INFO|Setting lport 14826742-0679-403f-b2e4-28fb0f26527a ovn-installed in OVS
Dec 06 07:45:16 compute-0 ovn_controller[147168]: 2025-12-06T07:45:16Z|00573|binding|INFO|Setting lport 14826742-0679-403f-b2e4-28fb0f26527a up in Southbound
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.471 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:16 compute-0 systemd-machined[212986]: New machine qemu-73-instance-0000009d.
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.487 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[37a6d9f1-dfad-45c0-9ea3-d8428e231537]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:16 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-0000009d.
Dec 06 07:45:16 compute-0 systemd-udevd[350935]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:45:16 compute-0 NetworkManager[48965]: <info>  [1765007116.5195] device (tap14826742-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:45:16 compute-0 NetworkManager[48965]: <info>  [1765007116.5208] device (tap14826742-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.535 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7dfd17db-c804-42dd-8c37-894889f4a5c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.539 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4aeaca5b-a3c4-4e11-8487-610e0e6df50f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.572 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2b753456-55dd-4c51-a656-b4a5ea584755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.588 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cb773323-3399-4671-8c88-3dd2ebc65067]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736431, 'reachable_time': 37333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350947, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.608 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f111763b-9507-4f53-98ff-6064f0d83632]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736441, 'tstamp': 736441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350948, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736444, 'tstamp': 736444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350948, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.611 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.613 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.614 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.615 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.615 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:16.616 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.667 251996 DEBUG nova.network.neutron [req-d864bc3a-7f81-4f10-854e-a6f8f8a2f08a req-0df6fb40-070f-4c4b-b965-be7be4f43320 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Updated VIF entry in instance network info cache for port 14826742-0679-403f-b2e4-28fb0f26527a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.668 251996 DEBUG nova.network.neutron [req-d864bc3a-7f81-4f10-854e-a6f8f8a2f08a req-0df6fb40-070f-4c4b-b965-be7be4f43320 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Updating instance_info_cache with network_info: [{"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:45:16 compute-0 nova_compute[251992]: 2025-12-06 07:45:16.695 251996 DEBUG oslo_concurrency.lockutils [req-d864bc3a-7f81-4f10-854e-a6f8f8a2f08a req-0df6fb40-070f-4c4b-b965-be7be4f43320 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:45:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 573 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.1 MiB/s wr, 196 op/s
Dec 06 07:45:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:17.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.085 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007117.0844517, 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.086 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] VM Started (Lifecycle Event)
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.121 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.126 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007117.0859385, 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.126 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] VM Paused (Lifecycle Event)
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.152 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.155 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.193 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:45:17 compute-0 ceph-mon[74339]: pgmap v2700: 305 pgs: 305 active+clean; 573 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.1 MiB/s wr, 196 op/s
Dec 06 07:45:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4123349726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.711 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.712 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.712 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.712 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.713 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.813 251996 DEBUG nova.compute.manager [req-52670e77-aaeb-4d8f-b435-3a9d4aeb66b8 req-5ea358ea-6e0e-452c-8a44-24b9c3dcd632 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.814 251996 DEBUG oslo_concurrency.lockutils [req-52670e77-aaeb-4d8f-b435-3a9d4aeb66b8 req-5ea358ea-6e0e-452c-8a44-24b9c3dcd632 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.814 251996 DEBUG oslo_concurrency.lockutils [req-52670e77-aaeb-4d8f-b435-3a9d4aeb66b8 req-5ea358ea-6e0e-452c-8a44-24b9c3dcd632 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.815 251996 DEBUG oslo_concurrency.lockutils [req-52670e77-aaeb-4d8f-b435-3a9d4aeb66b8 req-5ea358ea-6e0e-452c-8a44-24b9c3dcd632 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.815 251996 DEBUG nova.compute.manager [req-52670e77-aaeb-4d8f-b435-3a9d4aeb66b8 req-5ea358ea-6e0e-452c-8a44-24b9c3dcd632 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Processing event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.816 251996 DEBUG nova.compute.manager [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.829 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007117.8194573, 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.830 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] VM Resumed (Lifecycle Event)
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.834 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.840 251996 INFO nova.virt.libvirt.driver [-] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Instance spawned successfully.
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.841 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.883 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.890 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.894 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.894 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.895 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.895 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.895 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.896 251996 DEBUG nova.virt.libvirt.driver [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:45:17 compute-0 nova_compute[251992]: 2025-12-06 07:45:17.952 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.044 251996 INFO nova.compute.manager [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Took 13.65 seconds to spawn the instance on the hypervisor.
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.045 251996 DEBUG nova.compute.manager [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.128 251996 INFO nova.compute.manager [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Took 15.13 seconds to build instance.
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.145 251996 DEBUG oslo_concurrency.lockutils [None req-c7e8b422-7b9d-4870-aa3a-1b457ef4466d e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:45:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3556149852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.192 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:18.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.274 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.274 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.278 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.278 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:45:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3556149852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.484 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.485 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4026MB free_disk=20.79522705078125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.485 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.486 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:45:18
Dec 06 07:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', 'backups', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log']
Dec 06 07:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.600 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.601 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.602 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.602 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:45:18 compute-0 nova_compute[251992]: 2025-12-06 07:45:18.697 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 566 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 179 op/s
Dec 06 07:45:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:19.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:45:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/965692880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:19 compute-0 nova_compute[251992]: 2025-12-06 07:45:19.162 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:19 compute-0 nova_compute[251992]: 2025-12-06 07:45:19.169 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:45:19 compute-0 nova_compute[251992]: 2025-12-06 07:45:19.198 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:45:19 compute-0 nova_compute[251992]: 2025-12-06 07:45:19.226 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:45:19 compute-0 nova_compute[251992]: 2025-12-06 07:45:19.226 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/45640886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3903286247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:19 compute-0 ceph-mon[74339]: pgmap v2701: 305 pgs: 305 active+clean; 566 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 179 op/s
Dec 06 07:45:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/965692880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:19 compute-0 nova_compute[251992]: 2025-12-06 07:45:19.597 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:19 compute-0 nova_compute[251992]: 2025-12-06 07:45:19.945 251996 DEBUG nova.compute.manager [req-beefb3b4-2b1b-4b16-a13c-f68b62f8d565 req-e014160f-0db3-4da0-baf7-680796e41da2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:45:19 compute-0 nova_compute[251992]: 2025-12-06 07:45:19.945 251996 DEBUG oslo_concurrency.lockutils [req-beefb3b4-2b1b-4b16-a13c-f68b62f8d565 req-e014160f-0db3-4da0-baf7-680796e41da2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:19 compute-0 nova_compute[251992]: 2025-12-06 07:45:19.946 251996 DEBUG oslo_concurrency.lockutils [req-beefb3b4-2b1b-4b16-a13c-f68b62f8d565 req-e014160f-0db3-4da0-baf7-680796e41da2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:19 compute-0 nova_compute[251992]: 2025-12-06 07:45:19.946 251996 DEBUG oslo_concurrency.lockutils [req-beefb3b4-2b1b-4b16-a13c-f68b62f8d565 req-e014160f-0db3-4da0-baf7-680796e41da2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:19 compute-0 nova_compute[251992]: 2025-12-06 07:45:19.946 251996 DEBUG nova.compute.manager [req-beefb3b4-2b1b-4b16-a13c-f68b62f8d565 req-e014160f-0db3-4da0-baf7-680796e41da2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] No waiting events found dispatching network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:45:19 compute-0 nova_compute[251992]: 2025-12-06 07:45:19.947 251996 WARNING nova.compute.manager [req-beefb3b4-2b1b-4b16-a13c-f68b62f8d565 req-e014160f-0db3-4da0-baf7-680796e41da2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received unexpected event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a for instance with vm_state active and task_state None.
Dec 06 07:45:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:20.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:20 compute-0 nova_compute[251992]: 2025-12-06 07:45:20.839 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 209 op/s
Dec 06 07:45:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:21.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:21 compute-0 sudo[351039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:21 compute-0 sudo[351039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:21 compute-0 sudo[351039]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:21 compute-0 sudo[351064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:21 compute-0 sudo[351064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:21 compute-0 sudo[351064]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:22 compute-0 ceph-mon[74339]: pgmap v2702: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 209 op/s
Dec 06 07:45:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:22.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:22 compute-0 podman[351089]: 2025-12-06 07:45:22.432150644 +0000 UTC m=+0.086466333 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:45:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:45:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 54K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1536 writes, 7081 keys, 1535 commit groups, 1.0 writes per commit group, ingest: 10.19 MB, 0.02 MB/s
                                           Interval WAL: 1536 writes, 1535 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     33.1      2.12              0.24        34    0.062       0      0       0.0       0.0
                                             L6      1/0   10.27 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6     88.0     74.3      4.39              1.02        33    0.133    224K    18K       0.0       0.0
                                            Sum      1/0   10.27 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6     59.3     60.9      6.51              1.25        67    0.097    224K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9    138.7    139.6      0.55              0.23        12    0.045     54K   3123       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     88.0     74.3      4.39              1.02        33    0.133    224K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     33.1      2.12              0.24        33    0.064       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.069, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.39 GB write, 0.08 MB/s write, 0.38 GB read, 0.08 MB/s read, 6.5 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 43.25 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000357 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2483,41.61 MB,13.6867%) FilterBlock(68,630.48 KB,0.202535%) IndexBlock(68,1.03 MB,0.339257%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 07:45:22 compute-0 nova_compute[251992]: 2025-12-06 07:45:22.725 251996 DEBUG nova.compute.manager [None req-90570e00-db7d-450a-8964-930f87d12c65 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:22 compute-0 nova_compute[251992]: 2025-12-06 07:45:22.793 251996 INFO nova.compute.manager [None req-90570e00-db7d-450a-8964-930f87d12c65 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] instance snapshotting
Dec 06 07:45:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 591 KiB/s wr, 224 op/s
Dec 06 07:45:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:23.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:23 compute-0 ceph-mon[74339]: pgmap v2703: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 591 KiB/s wr, 224 op/s
Dec 06 07:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:45:23 compute-0 nova_compute[251992]: 2025-12-06 07:45:23.661 251996 INFO nova.virt.libvirt.driver [None req-90570e00-db7d-450a-8964-930f87d12c65 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Beginning live snapshot process
Dec 06 07:45:23 compute-0 nova_compute[251992]: 2025-12-06 07:45:23.816 251996 DEBUG nova.virt.libvirt.imagebackend [None req-90570e00-db7d-450a-8964-930f87d12c65 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:45:24 compute-0 nova_compute[251992]: 2025-12-06 07:45:24.226 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:24 compute-0 nova_compute[251992]: 2025-12-06 07:45:24.227 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:24 compute-0 nova_compute[251992]: 2025-12-06 07:45:24.264 251996 DEBUG nova.storage.rbd_utils [None req-90570e00-db7d-450a-8964-930f87d12c65 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] creating snapshot(9d8b2bc34c0c460fa9f54fa1b9d330f1) on rbd image(53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:45:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:24.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Dec 06 07:45:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Dec 06 07:45:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Dec 06 07:45:24 compute-0 nova_compute[251992]: 2025-12-06 07:45:24.429 251996 DEBUG nova.storage.rbd_utils [None req-90570e00-db7d-450a-8964-930f87d12c65 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] cloning vms/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk@9d8b2bc34c0c460fa9f54fa1b9d330f1 to images/5b8b504a-6057-4871-810a-63dfe9ed4af8 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:45:24 compute-0 nova_compute[251992]: 2025-12-06 07:45:24.548 251996 DEBUG nova.storage.rbd_utils [None req-90570e00-db7d-450a-8964-930f87d12c65 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] flattening images/5b8b504a-6057-4871-810a-63dfe9ed4af8 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:45:24 compute-0 nova_compute[251992]: 2025-12-06 07:45:24.600 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:24 compute-0 nova_compute[251992]: 2025-12-06 07:45:24.649 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:24 compute-0 nova_compute[251992]: 2025-12-06 07:45:24.853 251996 DEBUG nova.storage.rbd_utils [None req-90570e00-db7d-450a-8964-930f87d12c65 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] removing snapshot(9d8b2bc34c0c460fa9f54fa1b9d330f1) on rbd image(53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:45:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 556 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 502 KiB/s wr, 210 op/s
Dec 06 07:45:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:45:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:45:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:45:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:45:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:45:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:25.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Dec 06 07:45:25 compute-0 ceph-mon[74339]: osdmap e337: 3 total, 3 up, 3 in
Dec 06 07:45:25 compute-0 ceph-mon[74339]: pgmap v2705: 305 pgs: 305 active+clean; 556 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 502 KiB/s wr, 210 op/s
Dec 06 07:45:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Dec 06 07:45:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Dec 06 07:45:25 compute-0 nova_compute[251992]: 2025-12-06 07:45:25.418 251996 DEBUG nova.storage.rbd_utils [None req-90570e00-db7d-450a-8964-930f87d12c65 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] creating snapshot(snap) on rbd image(5b8b504a-6057-4871-810a-63dfe9ed4af8) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:45:25 compute-0 nova_compute[251992]: 2025-12-06 07:45:25.841 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008508616578261678 of space, bias 1.0, pg target 2.5525849734785035 quantized to 32 (current 32)
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.644751535455053 quantized to 32 (current 32)
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004087276661083193 of space, bias 1.0, pg target 1.2180084450027917 quantized to 32 (current 32)
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:45:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Dec 06 07:45:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:26.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Dec 06 07:45:26 compute-0 nova_compute[251992]: 2025-12-06 07:45:26.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:26 compute-0 nova_compute[251992]: 2025-12-06 07:45:26.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:45:26 compute-0 ceph-mon[74339]: osdmap e338: 3 total, 3 up, 3 in
Dec 06 07:45:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Dec 06 07:45:26 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Dec 06 07:45:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 566 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 1.1 MiB/s wr, 222 op/s
Dec 06 07:45:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:27.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:27 compute-0 nova_compute[251992]: 2025-12-06 07:45:27.283 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:45:27 compute-0 nova_compute[251992]: 2025-12-06 07:45:27.284 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:45:27 compute-0 nova_compute[251992]: 2025-12-06 07:45:27.284 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:45:27 compute-0 ceph-mon[74339]: osdmap e339: 3 total, 3 up, 3 in
Dec 06 07:45:27 compute-0 ceph-mon[74339]: pgmap v2708: 305 pgs: 305 active+clean; 566 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 1.1 MiB/s wr, 222 op/s
Dec 06 07:45:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/644100411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:28.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:28 compute-0 sudo[351259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:28 compute-0 sudo[351259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:28 compute-0 sudo[351259]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:28 compute-0 sudo[351284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:45:28 compute-0 sudo[351284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:28 compute-0 sudo[351284]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:28 compute-0 sudo[351309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:28 compute-0 sudo[351309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:28 compute-0 sudo[351309]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:28 compute-0 sudo[351334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:45:28 compute-0 sudo[351334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:28 compute-0 sudo[351334]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 594 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.6 MiB/s wr, 169 op/s
Dec 06 07:45:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:29.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:45:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:45:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:45:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:45:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:45:29 compute-0 nova_compute[251992]: 2025-12-06 07:45:29.207 251996 INFO nova.virt.libvirt.driver [None req-90570e00-db7d-450a-8964-930f87d12c65 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Snapshot image upload complete
Dec 06 07:45:29 compute-0 nova_compute[251992]: 2025-12-06 07:45:29.210 251996 INFO nova.compute.manager [None req-90570e00-db7d-450a-8964-930f87d12c65 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Took 6.41 seconds to snapshot the instance on the hypervisor.
Dec 06 07:45:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3247020229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:45:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b7127e7c-54a9-404b-9f81-df002768318d does not exist
Dec 06 07:45:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 47ad517b-879f-4a0d-bd48-3e6607d93759 does not exist
Dec 06 07:45:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev dd13853b-c36f-4ef5-9f2d-5c5e138cb964 does not exist
Dec 06 07:45:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:45:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:45:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:45:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:45:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:45:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:45:29 compute-0 sudo[351391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:29 compute-0 sudo[351391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:29 compute-0 sudo[351391]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:29 compute-0 sudo[351416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:45:29 compute-0 sudo[351416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:29 compute-0 sudo[351416]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:29 compute-0 nova_compute[251992]: 2025-12-06 07:45:29.602 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:29 compute-0 sudo[351441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:29 compute-0 sudo[351441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:29 compute-0 sudo[351441]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:29 compute-0 sudo[351466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:45:29 compute-0 sudo[351466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:29 compute-0 nova_compute[251992]: 2025-12-06 07:45:29.920 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Updating instance_info_cache with network_info: [{"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:45:29 compute-0 nova_compute[251992]: 2025-12-06 07:45:29.941 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:45:29 compute-0 nova_compute[251992]: 2025-12-06 07:45:29.942 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:45:29 compute-0 nova_compute[251992]: 2025-12-06 07:45:29.942 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:29 compute-0 nova_compute[251992]: 2025-12-06 07:45:29.942 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:29 compute-0 nova_compute[251992]: 2025-12-06 07:45:29.943 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:29 compute-0 nova_compute[251992]: 2025-12-06 07:45:29.943 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:29 compute-0 nova_compute[251992]: 2025-12-06 07:45:29.943 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:45:29 compute-0 nova_compute[251992]: 2025-12-06 07:45:29.944 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:45:30 compute-0 podman[351530]: 2025-12-06 07:45:30.037170468 +0000 UTC m=+0.038487766 container create e342b9244ad30ce39d3b601fcdbb74f91346bf3ba6ef290e79ce7df90cc474f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:45:30 compute-0 systemd[1]: Started libpod-conmon-e342b9244ad30ce39d3b601fcdbb74f91346bf3ba6ef290e79ce7df90cc474f9.scope.
Dec 06 07:45:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:45:30 compute-0 podman[351530]: 2025-12-06 07:45:30.020883481 +0000 UTC m=+0.022200799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:45:30 compute-0 podman[351530]: 2025-12-06 07:45:30.11957786 +0000 UTC m=+0.120895178 container init e342b9244ad30ce39d3b601fcdbb74f91346bf3ba6ef290e79ce7df90cc474f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:45:30 compute-0 podman[351530]: 2025-12-06 07:45:30.127777235 +0000 UTC m=+0.129094533 container start e342b9244ad30ce39d3b601fcdbb74f91346bf3ba6ef290e79ce7df90cc474f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 07:45:30 compute-0 podman[351530]: 2025-12-06 07:45:30.130850419 +0000 UTC m=+0.132167717 container attach e342b9244ad30ce39d3b601fcdbb74f91346bf3ba6ef290e79ce7df90cc474f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_edison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 07:45:30 compute-0 flamboyant_edison[351546]: 167 167
Dec 06 07:45:30 compute-0 systemd[1]: libpod-e342b9244ad30ce39d3b601fcdbb74f91346bf3ba6ef290e79ce7df90cc474f9.scope: Deactivated successfully.
Dec 06 07:45:30 compute-0 podman[351530]: 2025-12-06 07:45:30.134450327 +0000 UTC m=+0.135767625 container died e342b9244ad30ce39d3b601fcdbb74f91346bf3ba6ef290e79ce7df90cc474f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_edison, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 07:45:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ddb2ff4dd21b23ac417c799c61387238e6f0440f370a81bd78d265b2e961a88-merged.mount: Deactivated successfully.
Dec 06 07:45:30 compute-0 podman[351530]: 2025-12-06 07:45:30.170763854 +0000 UTC m=+0.172081152 container remove e342b9244ad30ce39d3b601fcdbb74f91346bf3ba6ef290e79ce7df90cc474f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_edison, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 07:45:30 compute-0 systemd[1]: libpod-conmon-e342b9244ad30ce39d3b601fcdbb74f91346bf3ba6ef290e79ce7df90cc474f9.scope: Deactivated successfully.
Dec 06 07:45:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:30.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:30 compute-0 podman[351570]: 2025-12-06 07:45:30.354051433 +0000 UTC m=+0.045526720 container create a651de3b4c9be4c4d483b3224ba7161aa3961ebae1ee52cdf8521a3acb8070c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 07:45:30 compute-0 systemd[1]: Started libpod-conmon-a651de3b4c9be4c4d483b3224ba7161aa3961ebae1ee52cdf8521a3acb8070c1.scope.
Dec 06 07:45:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:45:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b0285b48d212d607fd419be5500dd33ddea6705dadae6e51da65e770d9abab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b0285b48d212d607fd419be5500dd33ddea6705dadae6e51da65e770d9abab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b0285b48d212d607fd419be5500dd33ddea6705dadae6e51da65e770d9abab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:30 compute-0 podman[351570]: 2025-12-06 07:45:30.334422804 +0000 UTC m=+0.025898101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:45:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b0285b48d212d607fd419be5500dd33ddea6705dadae6e51da65e770d9abab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b0285b48d212d607fd419be5500dd33ddea6705dadae6e51da65e770d9abab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:30 compute-0 podman[351570]: 2025-12-06 07:45:30.444193747 +0000 UTC m=+0.135669034 container init a651de3b4c9be4c4d483b3224ba7161aa3961ebae1ee52cdf8521a3acb8070c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:45:30 compute-0 podman[351570]: 2025-12-06 07:45:30.450399337 +0000 UTC m=+0.141874614 container start a651de3b4c9be4c4d483b3224ba7161aa3961ebae1ee52cdf8521a3acb8070c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lehmann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 07:45:30 compute-0 podman[351570]: 2025-12-06 07:45:30.453762429 +0000 UTC m=+0.145237706 container attach a651de3b4c9be4c4d483b3224ba7161aa3961ebae1ee52cdf8521a3acb8070c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 07:45:30 compute-0 podman[351589]: 2025-12-06 07:45:30.499939907 +0000 UTC m=+0.056544904 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:45:30 compute-0 podman[351590]: 2025-12-06 07:45:30.509990522 +0000 UTC m=+0.066385153 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd)
Dec 06 07:45:30 compute-0 nova_compute[251992]: 2025-12-06 07:45:30.598 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:30 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:30.599 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:45:30 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:30.600 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:45:30 compute-0 nova_compute[251992]: 2025-12-06 07:45:30.885 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 594 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.2 MiB/s wr, 104 op/s
Dec 06 07:45:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:31.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:31 compute-0 elastic_lehmann[351586]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:45:31 compute-0 elastic_lehmann[351586]: --> relative data size: 1.0
Dec 06 07:45:31 compute-0 elastic_lehmann[351586]: --> All data devices are unavailable
Dec 06 07:45:31 compute-0 systemd[1]: libpod-a651de3b4c9be4c4d483b3224ba7161aa3961ebae1ee52cdf8521a3acb8070c1.scope: Deactivated successfully.
Dec 06 07:45:31 compute-0 podman[351570]: 2025-12-06 07:45:31.292903735 +0000 UTC m=+0.984379032 container died a651de3b4c9be4c4d483b3224ba7161aa3961ebae1ee52cdf8521a3acb8070c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-11b0285b48d212d607fd419be5500dd33ddea6705dadae6e51da65e770d9abab-merged.mount: Deactivated successfully.
Dec 06 07:45:31 compute-0 podman[351570]: 2025-12-06 07:45:31.353554078 +0000 UTC m=+1.045029375 container remove a651de3b4c9be4c4d483b3224ba7161aa3961ebae1ee52cdf8521a3acb8070c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lehmann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 07:45:31 compute-0 systemd[1]: libpod-conmon-a651de3b4c9be4c4d483b3224ba7161aa3961ebae1ee52cdf8521a3acb8070c1.scope: Deactivated successfully.
Dec 06 07:45:31 compute-0 ceph-mon[74339]: pgmap v2709: 305 pgs: 305 active+clean; 594 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.6 MiB/s wr, 169 op/s
Dec 06 07:45:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:45:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:45:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:45:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:45:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:45:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:45:31 compute-0 sudo[351466]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:31 compute-0 sudo[351655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:31 compute-0 sudo[351655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:31 compute-0 sudo[351655]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:31 compute-0 sudo[351680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:45:31 compute-0 sudo[351680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:31 compute-0 sudo[351680]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:31 compute-0 sudo[351705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:31 compute-0 sudo[351705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:31 compute-0 sudo[351705]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:31 compute-0 sudo[351730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:45:31 compute-0 sudo[351730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:31 compute-0 podman[351794]: 2025-12-06 07:45:31.990611649 +0000 UTC m=+0.046338963 container create 3f0ed260c156cdb7232467e32b74e9a346365d4476b304664f067505554cdd50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:45:32 compute-0 systemd[1]: Started libpod-conmon-3f0ed260c156cdb7232467e32b74e9a346365d4476b304664f067505554cdd50.scope.
Dec 06 07:45:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:45:32 compute-0 podman[351794]: 2025-12-06 07:45:32.066840911 +0000 UTC m=+0.122568295 container init 3f0ed260c156cdb7232467e32b74e9a346365d4476b304664f067505554cdd50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 07:45:32 compute-0 podman[351794]: 2025-12-06 07:45:31.974147817 +0000 UTC m=+0.029875171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:45:32 compute-0 podman[351794]: 2025-12-06 07:45:32.073818922 +0000 UTC m=+0.129546246 container start 3f0ed260c156cdb7232467e32b74e9a346365d4476b304664f067505554cdd50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_franklin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 07:45:32 compute-0 practical_franklin[351810]: 167 167
Dec 06 07:45:32 compute-0 podman[351794]: 2025-12-06 07:45:32.07886063 +0000 UTC m=+0.134587954 container attach 3f0ed260c156cdb7232467e32b74e9a346365d4476b304664f067505554cdd50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 07:45:32 compute-0 systemd[1]: libpod-3f0ed260c156cdb7232467e32b74e9a346365d4476b304664f067505554cdd50.scope: Deactivated successfully.
Dec 06 07:45:32 compute-0 podman[351794]: 2025-12-06 07:45:32.080407493 +0000 UTC m=+0.136134837 container died 3f0ed260c156cdb7232467e32b74e9a346365d4476b304664f067505554cdd50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_franklin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:45:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f087eb7c2dee74035218fa221d08bb251765f587e405b85bf1ea9458e2b1d12e-merged.mount: Deactivated successfully.
Dec 06 07:45:32 compute-0 podman[351794]: 2025-12-06 07:45:32.113009877 +0000 UTC m=+0.168737201 container remove 3f0ed260c156cdb7232467e32b74e9a346365d4476b304664f067505554cdd50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 07:45:32 compute-0 systemd[1]: libpod-conmon-3f0ed260c156cdb7232467e32b74e9a346365d4476b304664f067505554cdd50.scope: Deactivated successfully.
Dec 06 07:45:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:32.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:32 compute-0 podman[351834]: 2025-12-06 07:45:32.282715214 +0000 UTC m=+0.043798953 container create 8edb3b6945de1b0acb0b95aa070f52e4ea22bcb0520cd8576a9391fd0764b20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:45:32 compute-0 systemd[1]: Started libpod-conmon-8edb3b6945de1b0acb0b95aa070f52e4ea22bcb0520cd8576a9391fd0764b20b.scope.
Dec 06 07:45:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5314183e1d9caa945aedaa7bc10778276be47ac2becccb49fb2f8b0448e928/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5314183e1d9caa945aedaa7bc10778276be47ac2becccb49fb2f8b0448e928/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5314183e1d9caa945aedaa7bc10778276be47ac2becccb49fb2f8b0448e928/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5314183e1d9caa945aedaa7bc10778276be47ac2becccb49fb2f8b0448e928/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:32 compute-0 podman[351834]: 2025-12-06 07:45:32.26251199 +0000 UTC m=+0.023595719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:45:32 compute-0 podman[351834]: 2025-12-06 07:45:32.361974598 +0000 UTC m=+0.123058337 container init 8edb3b6945de1b0acb0b95aa070f52e4ea22bcb0520cd8576a9391fd0764b20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:45:32 compute-0 podman[351834]: 2025-12-06 07:45:32.370008219 +0000 UTC m=+0.131091958 container start 8edb3b6945de1b0acb0b95aa070f52e4ea22bcb0520cd8576a9391fd0764b20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 07:45:32 compute-0 podman[351834]: 2025-12-06 07:45:32.373592497 +0000 UTC m=+0.134676226 container attach 8edb3b6945de1b0acb0b95aa070f52e4ea22bcb0520cd8576a9391fd0764b20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:45:32 compute-0 ceph-mon[74339]: pgmap v2710: 305 pgs: 305 active+clean; 594 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.2 MiB/s wr, 104 op/s
Dec 06 07:45:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 603 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 95 op/s
Dec 06 07:45:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:33.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]: {
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:     "0": [
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:         {
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             "devices": [
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "/dev/loop3"
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             ],
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             "lv_name": "ceph_lv0",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             "lv_size": "7511998464",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             "name": "ceph_lv0",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             "tags": {
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "ceph.cluster_name": "ceph",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "ceph.crush_device_class": "",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "ceph.encrypted": "0",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "ceph.osd_id": "0",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "ceph.type": "block",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:                 "ceph.vdo": "0"
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             },
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             "type": "block",
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:             "vg_name": "ceph_vg0"
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:         }
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]:     ]
Dec 06 07:45:33 compute-0 thirsty_hodgkin[351850]: }
Dec 06 07:45:33 compute-0 systemd[1]: libpod-8edb3b6945de1b0acb0b95aa070f52e4ea22bcb0520cd8576a9391fd0764b20b.scope: Deactivated successfully.
Dec 06 07:45:33 compute-0 podman[351834]: 2025-12-06 07:45:33.155978116 +0000 UTC m=+0.917061845 container died 8edb3b6945de1b0acb0b95aa070f52e4ea22bcb0520cd8576a9391fd0764b20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 07:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c5314183e1d9caa945aedaa7bc10778276be47ac2becccb49fb2f8b0448e928-merged.mount: Deactivated successfully.
Dec 06 07:45:33 compute-0 podman[351834]: 2025-12-06 07:45:33.202637895 +0000 UTC m=+0.963721624 container remove 8edb3b6945de1b0acb0b95aa070f52e4ea22bcb0520cd8576a9391fd0764b20b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:45:33 compute-0 systemd[1]: libpod-conmon-8edb3b6945de1b0acb0b95aa070f52e4ea22bcb0520cd8576a9391fd0764b20b.scope: Deactivated successfully.
Dec 06 07:45:33 compute-0 sudo[351730]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:33 compute-0 sudo[351872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:33 compute-0 sudo[351872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:33 compute-0 sudo[351872]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:33 compute-0 sudo[351897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:45:33 compute-0 sudo[351897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:33 compute-0 sudo[351897]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:33 compute-0 sudo[351922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:33 compute-0 sudo[351922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:33 compute-0 sudo[351922]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:33 compute-0 ceph-mon[74339]: pgmap v2711: 305 pgs: 305 active+clean; 603 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 95 op/s
Dec 06 07:45:33 compute-0 sudo[351947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:45:33 compute-0 sudo[351947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:33 compute-0 podman[352013]: 2025-12-06 07:45:33.86939116 +0000 UTC m=+0.039920996 container create 8714a069fcad054262bfaf011b422561c53c7d65e4281d432a8f8496ea3c0f1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:45:33 compute-0 systemd[1]: Started libpod-conmon-8714a069fcad054262bfaf011b422561c53c7d65e4281d432a8f8496ea3c0f1f.scope.
Dec 06 07:45:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:45:33 compute-0 podman[352013]: 2025-12-06 07:45:33.854609645 +0000 UTC m=+0.025139511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:45:34 compute-0 podman[352013]: 2025-12-06 07:45:34.041680498 +0000 UTC m=+0.212210344 container init 8714a069fcad054262bfaf011b422561c53c7d65e4281d432a8f8496ea3c0f1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:45:34 compute-0 podman[352013]: 2025-12-06 07:45:34.04832937 +0000 UTC m=+0.218859206 container start 8714a069fcad054262bfaf011b422561c53c7d65e4281d432a8f8496ea3c0f1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:45:34 compute-0 compassionate_ishizaka[352030]: 167 167
Dec 06 07:45:34 compute-0 systemd[1]: libpod-8714a069fcad054262bfaf011b422561c53c7d65e4281d432a8f8496ea3c0f1f.scope: Deactivated successfully.
Dec 06 07:45:34 compute-0 podman[352013]: 2025-12-06 07:45:34.072325459 +0000 UTC m=+0.242855325 container attach 8714a069fcad054262bfaf011b422561c53c7d65e4281d432a8f8496ea3c0f1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec 06 07:45:34 compute-0 podman[352013]: 2025-12-06 07:45:34.072806172 +0000 UTC m=+0.243336008 container died 8714a069fcad054262bfaf011b422561c53c7d65e4281d432a8f8496ea3c0f1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:45:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdd343ce644638b9a9fb47251dbbfde6f1e201f5ef657d30c3a8a51723cd5405-merged.mount: Deactivated successfully.
Dec 06 07:45:34 compute-0 podman[352013]: 2025-12-06 07:45:34.138254818 +0000 UTC m=+0.308784654 container remove 8714a069fcad054262bfaf011b422561c53c7d65e4281d432a8f8496ea3c0f1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ishizaka, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:45:34 compute-0 systemd[1]: libpod-conmon-8714a069fcad054262bfaf011b422561c53c7d65e4281d432a8f8496ea3c0f1f.scope: Deactivated successfully.
Dec 06 07:45:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:34.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:34 compute-0 podman[352054]: 2025-12-06 07:45:34.313092776 +0000 UTC m=+0.039687471 container create 9c49d84c0aef509303bc7cddfc9b6ccb8963d165efb9f708f89d5d562cbb6d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 06 07:45:34 compute-0 systemd[1]: Started libpod-conmon-9c49d84c0aef509303bc7cddfc9b6ccb8963d165efb9f708f89d5d562cbb6d38.scope.
Dec 06 07:45:34 compute-0 nova_compute[251992]: 2025-12-06 07:45:34.347 251996 INFO nova.compute.manager [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Rescuing
Dec 06 07:45:34 compute-0 nova_compute[251992]: 2025-12-06 07:45:34.350 251996 DEBUG oslo_concurrency.lockutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:45:34 compute-0 nova_compute[251992]: 2025-12-06 07:45:34.350 251996 DEBUG oslo_concurrency.lockutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquired lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:45:34 compute-0 nova_compute[251992]: 2025-12-06 07:45:34.350 251996 DEBUG nova.network.neutron [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:45:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308d3b2ed630fc210513f5cd3ba40800dbad7420611f3b8034813c083ab1edae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308d3b2ed630fc210513f5cd3ba40800dbad7420611f3b8034813c083ab1edae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308d3b2ed630fc210513f5cd3ba40800dbad7420611f3b8034813c083ab1edae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308d3b2ed630fc210513f5cd3ba40800dbad7420611f3b8034813c083ab1edae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:45:34 compute-0 podman[352054]: 2025-12-06 07:45:34.374714976 +0000 UTC m=+0.101309701 container init 9c49d84c0aef509303bc7cddfc9b6ccb8963d165efb9f708f89d5d562cbb6d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_meninsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 06 07:45:34 compute-0 ovn_controller[147168]: 2025-12-06T07:45:34Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:c3:cc 10.100.0.12
Dec 06 07:45:34 compute-0 ovn_controller[147168]: 2025-12-06T07:45:34Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:c3:cc 10.100.0.12
Dec 06 07:45:34 compute-0 podman[352054]: 2025-12-06 07:45:34.386082468 +0000 UTC m=+0.112677163 container start 9c49d84c0aef509303bc7cddfc9b6ccb8963d165efb9f708f89d5d562cbb6d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 07:45:34 compute-0 podman[352054]: 2025-12-06 07:45:34.389207914 +0000 UTC m=+0.115802629 container attach 9c49d84c0aef509303bc7cddfc9b6ccb8963d165efb9f708f89d5d562cbb6d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_meninsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:45:34 compute-0 podman[352054]: 2025-12-06 07:45:34.296791338 +0000 UTC m=+0.023386053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:45:34 compute-0 nova_compute[251992]: 2025-12-06 07:45:34.639 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Dec 06 07:45:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Dec 06 07:45:34 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Dec 06 07:45:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.9 MiB/s wr, 197 op/s
Dec 06 07:45:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:35.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:35 compute-0 angry_meninsky[352070]: {
Dec 06 07:45:35 compute-0 angry_meninsky[352070]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:45:35 compute-0 angry_meninsky[352070]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:45:35 compute-0 angry_meninsky[352070]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:45:35 compute-0 angry_meninsky[352070]:         "osd_id": 0,
Dec 06 07:45:35 compute-0 angry_meninsky[352070]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:45:35 compute-0 angry_meninsky[352070]:         "type": "bluestore"
Dec 06 07:45:35 compute-0 angry_meninsky[352070]:     }
Dec 06 07:45:35 compute-0 angry_meninsky[352070]: }
Dec 06 07:45:35 compute-0 systemd[1]: libpod-9c49d84c0aef509303bc7cddfc9b6ccb8963d165efb9f708f89d5d562cbb6d38.scope: Deactivated successfully.
Dec 06 07:45:35 compute-0 podman[352092]: 2025-12-06 07:45:35.249662234 +0000 UTC m=+0.025248994 container died 9c49d84c0aef509303bc7cddfc9b6ccb8963d165efb9f708f89d5d562cbb6d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_meninsky, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:45:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-308d3b2ed630fc210513f5cd3ba40800dbad7420611f3b8034813c083ab1edae-merged.mount: Deactivated successfully.
Dec 06 07:45:35 compute-0 podman[352092]: 2025-12-06 07:45:35.29835719 +0000 UTC m=+0.073943940 container remove 9c49d84c0aef509303bc7cddfc9b6ccb8963d165efb9f708f89d5d562cbb6d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:45:35 compute-0 systemd[1]: libpod-conmon-9c49d84c0aef509303bc7cddfc9b6ccb8963d165efb9f708f89d5d562cbb6d38.scope: Deactivated successfully.
Dec 06 07:45:35 compute-0 sudo[351947]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:45:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:45:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:45:35 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:45:35 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 94d9dd58-f1b1-461d-91bc-5b8f6861dd0c does not exist
Dec 06 07:45:35 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6582b24a-7746-45b0-ad45-c3cf8488704a does not exist
Dec 06 07:45:35 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 836da659-4812-4f66-86ae-31716aa17036 does not exist
Dec 06 07:45:35 compute-0 sudo[352108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:35 compute-0 sudo[352108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:35 compute-0 sudo[352108]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:35 compute-0 sudo[352133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:45:35 compute-0 sudo[352133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:35 compute-0 sudo[352133]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:35.604 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:35 compute-0 ceph-mon[74339]: osdmap e340: 3 total, 3 up, 3 in
Dec 06 07:45:35 compute-0 ceph-mon[74339]: pgmap v2713: 305 pgs: 305 active+clean; 636 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.9 MiB/s wr, 197 op/s
Dec 06 07:45:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:45:35 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:45:35 compute-0 nova_compute[251992]: 2025-12-06 07:45:35.888 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:36.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:36 compute-0 nova_compute[251992]: 2025-12-06 07:45:36.406 251996 DEBUG nova.network.neutron [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Updating instance_info_cache with network_info: [{"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:45:36 compute-0 nova_compute[251992]: 2025-12-06 07:45:36.438 251996 DEBUG oslo_concurrency.lockutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Releasing lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:45:36 compute-0 nova_compute[251992]: 2025-12-06 07:45:36.910 251996 DEBUG nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:45:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 654 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.6 MiB/s wr, 189 op/s
Dec 06 07:45:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:37.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:38 compute-0 ceph-mon[74339]: pgmap v2714: 305 pgs: 305 active+clean; 654 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.6 MiB/s wr, 189 op/s
Dec 06 07:45:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2783124359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2783124359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:38.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 653 KiB/s rd, 5.1 MiB/s wr, 167 op/s
Dec 06 07:45:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:39.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:39 compute-0 kernel: tap14826742-06 (unregistering): left promiscuous mode
Dec 06 07:45:39 compute-0 NetworkManager[48965]: <info>  [1765007139.2307] device (tap14826742-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.239 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:39 compute-0 ovn_controller[147168]: 2025-12-06T07:45:39Z|00574|binding|INFO|Releasing lport 14826742-0679-403f-b2e4-28fb0f26527a from this chassis (sb_readonly=0)
Dec 06 07:45:39 compute-0 ovn_controller[147168]: 2025-12-06T07:45:39Z|00575|binding|INFO|Setting lport 14826742-0679-403f-b2e4-28fb0f26527a down in Southbound
Dec 06 07:45:39 compute-0 ovn_controller[147168]: 2025-12-06T07:45:39Z|00576|binding|INFO|Removing iface tap14826742-06 ovn-installed in OVS
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.242 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.248 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:c3:cc 10.100.0.12'], port_security=['fa:16:3e:9a:c3:cc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '53b4413c-a38e-4ad9-9f1b-43babd1fe2a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=14826742-0679-403f-b2e4-28fb0f26527a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.249 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 14826742-0679-403f-b2e4-28fb0f26527a in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.250 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.257 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.270 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[db2b591a-2297-452c-bd40-713d2e4e98c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:39 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009d.scope: Deactivated successfully.
Dec 06 07:45:39 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d0000009d.scope: Consumed 14.627s CPU time.
Dec 06 07:45:39 compute-0 systemd-machined[212986]: Machine qemu-73-instance-0000009d terminated.
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.299 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb5fcea-49f2-4660-b0f1-7df0dc25ab6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.303 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4592b5-d177-4c87-81cc-d6299c929ba8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.324 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[8f760617-8b64-47f0-959a-c73175bdaf74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.340 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[69767ac6-b6b4-4bd2-a4aa-930c32733eb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736431, 'reachable_time': 37333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352172, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.353 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b92d4eeb-4754-443d-81e0-68e962fd3f84]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736441, 'tstamp': 736441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352173, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736444, 'tstamp': 736444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352173, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.355 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.356 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.360 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.360 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.361 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.361 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:39.361 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.459 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.463 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:39 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:45:39 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.614 251996 DEBUG nova.compute.manager [req-ca21c72f-f7b8-4e0f-8e75-16f654d57f32 req-4ca024ac-814a-4384-bbd5-c1e039e8c108 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-unplugged-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.614 251996 DEBUG oslo_concurrency.lockutils [req-ca21c72f-f7b8-4e0f-8e75-16f654d57f32 req-4ca024ac-814a-4384-bbd5-c1e039e8c108 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.615 251996 DEBUG oslo_concurrency.lockutils [req-ca21c72f-f7b8-4e0f-8e75-16f654d57f32 req-4ca024ac-814a-4384-bbd5-c1e039e8c108 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.615 251996 DEBUG oslo_concurrency.lockutils [req-ca21c72f-f7b8-4e0f-8e75-16f654d57f32 req-4ca024ac-814a-4384-bbd5-c1e039e8c108 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.615 251996 DEBUG nova.compute.manager [req-ca21c72f-f7b8-4e0f-8e75-16f654d57f32 req-4ca024ac-814a-4384-bbd5-c1e039e8c108 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] No waiting events found dispatching network-vif-unplugged-14826742-0679-403f-b2e4-28fb0f26527a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.615 251996 WARNING nova.compute.manager [req-ca21c72f-f7b8-4e0f-8e75-16f654d57f32 req-4ca024ac-814a-4384-bbd5-c1e039e8c108 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received unexpected event network-vif-unplugged-14826742-0679-403f-b2e4-28fb0f26527a for instance with vm_state active and task_state rescuing.
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.641 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.926 251996 INFO nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Instance shutdown successfully after 3 seconds.
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.932 251996 INFO nova.virt.libvirt.driver [-] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Instance destroyed successfully.
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.932 251996 DEBUG nova.objects.instance [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'numa_topology' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:39 compute-0 nova_compute[251992]: 2025-12-06 07:45:39.959 251996 INFO nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Attempting a stable device rescue
Dec 06 07:45:40 compute-0 ceph-mon[74339]: pgmap v2715: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 653 KiB/s rd, 5.1 MiB/s wr, 167 op/s
Dec 06 07:45:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:40.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:40 compute-0 nova_compute[251992]: 2025-12-06 07:45:40.620 251996 DEBUG nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Dec 06 07:45:40 compute-0 nova_compute[251992]: 2025-12-06 07:45:40.625 251996 DEBUG nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:45:40 compute-0 nova_compute[251992]: 2025-12-06 07:45:40.625 251996 INFO nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Creating image(s)
Dec 06 07:45:40 compute-0 nova_compute[251992]: 2025-12-06 07:45:40.654 251996 DEBUG nova.storage.rbd_utils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:40 compute-0 nova_compute[251992]: 2025-12-06 07:45:40.658 251996 DEBUG nova.objects.instance [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:40 compute-0 nova_compute[251992]: 2025-12-06 07:45:40.739 251996 DEBUG nova.storage.rbd_utils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:40 compute-0 nova_compute[251992]: 2025-12-06 07:45:40.770 251996 DEBUG nova.storage.rbd_utils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:40 compute-0 nova_compute[251992]: 2025-12-06 07:45:40.776 251996 DEBUG oslo_concurrency.lockutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "20d66fc95b00ff6a503166fb914c17bc57b06499" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:40 compute-0 nova_compute[251992]: 2025-12-06 07:45:40.777 251996 DEBUG oslo_concurrency.lockutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "20d66fc95b00ff6a503166fb914c17bc57b06499" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:40 compute-0 nova_compute[251992]: 2025-12-06 07:45:40.890 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 651 KiB/s rd, 5.1 MiB/s wr, 169 op/s
Dec 06 07:45:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000028s ======
Dec 06 07:45:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:41.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 06 07:45:41 compute-0 ceph-mon[74339]: pgmap v2716: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 651 KiB/s rd, 5.1 MiB/s wr, 169 op/s
Dec 06 07:45:41 compute-0 sudo[352241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:41 compute-0 sudo[352241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:41 compute-0 sudo[352241]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:41 compute-0 sudo[352266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:45:41 compute-0 sudo[352266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:45:41 compute-0 sudo[352266]: pam_unix(sudo:session): session closed for user root
Dec 06 07:45:41 compute-0 nova_compute[251992]: 2025-12-06 07:45:41.730 251996 DEBUG nova.virt.libvirt.imagebackend [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/5b8b504a-6057-4871-810a-63dfe9ed4af8/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/5b8b504a-6057-4871-810a-63dfe9ed4af8/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:45:41 compute-0 nova_compute[251992]: 2025-12-06 07:45:41.798 251996 DEBUG nova.virt.libvirt.imagebackend [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/5b8b504a-6057-4871-810a-63dfe9ed4af8/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 07:45:41 compute-0 nova_compute[251992]: 2025-12-06 07:45:41.799 251996 DEBUG nova.storage.rbd_utils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] cloning images/5b8b504a-6057-4871-810a-63dfe9ed4af8@snap to None/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:45:41 compute-0 nova_compute[251992]: 2025-12-06 07:45:41.904 251996 DEBUG oslo_concurrency.lockutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "20d66fc95b00ff6a503166fb914c17bc57b06499" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:41 compute-0 nova_compute[251992]: 2025-12-06 07:45:41.951 251996 DEBUG nova.objects.instance [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'migration_context' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:41 compute-0 nova_compute[251992]: 2025-12-06 07:45:41.969 251996 DEBUG nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:45:41 compute-0 nova_compute[251992]: 2025-12-06 07:45:41.972 251996 DEBUG nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Start _get_guest_xml network_info=[{"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "vif_mac": "fa:16:3e:9a:c3:cc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '5b8b504a-6057-4871-810a-63dfe9ed4af8', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:45:41 compute-0 nova_compute[251992]: 2025-12-06 07:45:41.973 251996 DEBUG nova.objects.instance [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'resources' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.002 251996 WARNING nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.016 251996 DEBUG nova.virt.libvirt.host [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.017 251996 DEBUG nova.virt.libvirt.host [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.033 251996 DEBUG nova.virt.libvirt.host [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.034 251996 DEBUG nova.virt.libvirt.host [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.035 251996 DEBUG nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.035 251996 DEBUG nova.virt.hardware [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.036 251996 DEBUG nova.virt.hardware [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.036 251996 DEBUG nova.virt.hardware [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.036 251996 DEBUG nova.virt.hardware [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.036 251996 DEBUG nova.virt.hardware [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.037 251996 DEBUG nova.virt.hardware [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.037 251996 DEBUG nova.virt.hardware [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.037 251996 DEBUG nova.virt.hardware [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.037 251996 DEBUG nova.virt.hardware [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.037 251996 DEBUG nova.virt.hardware [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.038 251996 DEBUG nova.virt.hardware [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.038 251996 DEBUG nova.objects.instance [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.041 251996 DEBUG nova.compute.manager [req-08fba43c-4fd0-4d6b-8bee-0b788fb20d5a req-5fd1d764-5e6a-4883-bc3a-67a19d4eb0fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.041 251996 DEBUG oslo_concurrency.lockutils [req-08fba43c-4fd0-4d6b-8bee-0b788fb20d5a req-5fd1d764-5e6a-4883-bc3a-67a19d4eb0fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.041 251996 DEBUG oslo_concurrency.lockutils [req-08fba43c-4fd0-4d6b-8bee-0b788fb20d5a req-5fd1d764-5e6a-4883-bc3a-67a19d4eb0fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.041 251996 DEBUG oslo_concurrency.lockutils [req-08fba43c-4fd0-4d6b-8bee-0b788fb20d5a req-5fd1d764-5e6a-4883-bc3a-67a19d4eb0fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.042 251996 DEBUG nova.compute.manager [req-08fba43c-4fd0-4d6b-8bee-0b788fb20d5a req-5fd1d764-5e6a-4883-bc3a-67a19d4eb0fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] No waiting events found dispatching network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.042 251996 WARNING nova.compute.manager [req-08fba43c-4fd0-4d6b-8bee-0b788fb20d5a req-5fd1d764-5e6a-4883-bc3a-67a19d4eb0fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received unexpected event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a for instance with vm_state active and task_state rescuing.
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.065 251996 DEBUG oslo_concurrency.processutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:42.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:45:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2137929200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.522 251996 DEBUG oslo_concurrency.processutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2137929200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:42 compute-0 nova_compute[251992]: 2025-12-06 07:45:42.566 251996 DEBUG oslo_concurrency.processutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:45:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/225474099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.014 251996 DEBUG oslo_concurrency.processutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.015 251996 DEBUG oslo_concurrency.processutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:45:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:45:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:45:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:45:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:45:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:45:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 647 KiB/s rd, 4.4 MiB/s wr, 169 op/s
Dec 06 07:45:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:43.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:45:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2239859850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.502 251996 DEBUG oslo_concurrency.processutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.505 251996 DEBUG nova.virt.libvirt.vif [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:45:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-592712215',display_name='tempest-ServerStableDeviceRescueTest-server-592712215',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-592712215',id=157,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:45:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-o0l8e5p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:45:29Z,user_data=None,user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=53b4413c-a38e-4ad9-9f1b-43babd1fe2a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "vif_mac": "fa:16:3e:9a:c3:cc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.506 251996 DEBUG nova.network.os_vif_util [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "vif_mac": "fa:16:3e:9a:c3:cc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.509 251996 DEBUG nova.network.os_vif_util [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:c3:cc,bridge_name='br-int',has_traffic_filtering=True,id=14826742-0679-403f-b2e4-28fb0f26527a,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14826742-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.512 251996 DEBUG nova.objects.instance [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'pci_devices' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.546 251996 DEBUG nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:45:43 compute-0 nova_compute[251992]:   <uuid>53b4413c-a38e-4ad9-9f1b-43babd1fe2a5</uuid>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   <name>instance-0000009d</name>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerStableDeviceRescueTest-server-592712215</nova:name>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:45:42</nova:creationTime>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <nova:user uuid="e997a5eeee174b368a43ed8cb35fa1d0">tempest-ServerStableDeviceRescueTest-1830949011-project-member</nova:user>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <nova:project uuid="f44ecb8bdc7e4692a299e29603301124">tempest-ServerStableDeviceRescueTest-1830949011</nova:project>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <nova:port uuid="14826742-0679-403f-b2e4-28fb0f26527a">
Dec 06 07:45:43 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <system>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <entry name="serial">53b4413c-a38e-4ad9-9f1b-43babd1fe2a5</entry>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <entry name="uuid">53b4413c-a38e-4ad9-9f1b-43babd1fe2a5</entry>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     </system>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   <os>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   </os>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   <features>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   </features>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk">
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       </source>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.config">
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       </source>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.rescue">
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       </source>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:45:43 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <target dev="vdb" bus="virtio"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <boot order="1"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:9a:c3:cc"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <target dev="tap14826742-06"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/console.log" append="off"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <video>
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     </video>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:45:43 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:45:43 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:45:43 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:45:43 compute-0 nova_compute[251992]: </domain>
Dec 06 07:45:43 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.557 251996 INFO nova.virt.libvirt.driver [-] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Instance destroyed successfully.
Dec 06 07:45:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/225474099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:43 compute-0 ceph-mon[74339]: pgmap v2717: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 647 KiB/s rd, 4.4 MiB/s wr, 169 op/s
Dec 06 07:45:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2239859850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.692 251996 DEBUG nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.694 251996 DEBUG nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.694 251996 DEBUG nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.694 251996 DEBUG nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] No VIF found with MAC fa:16:3e:9a:c3:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.695 251996 INFO nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Using config drive
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.726 251996 DEBUG nova.storage.rbd_utils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.755 251996 DEBUG nova.objects.instance [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:43 compute-0 nova_compute[251992]: 2025-12-06 07:45:43.793 251996 DEBUG nova.objects.instance [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'keypairs' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:44.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:45:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2902135661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:44 compute-0 nova_compute[251992]: 2025-12-06 07:45:44.642 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:44 compute-0 nova_compute[251992]: 2025-12-06 07:45:44.646 251996 INFO nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Creating config drive at /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/disk.config.rescue
Dec 06 07:45:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2902135661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:45:44 compute-0 nova_compute[251992]: 2025-12-06 07:45:44.651 251996 DEBUG oslo_concurrency.processutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn_xkoczn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:44 compute-0 nova_compute[251992]: 2025-12-06 07:45:44.791 251996 DEBUG oslo_concurrency.processutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn_xkoczn" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:44 compute-0 nova_compute[251992]: 2025-12-06 07:45:44.822 251996 DEBUG nova.storage.rbd_utils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] rbd image 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:45:44 compute-0 nova_compute[251992]: 2025-12-06 07:45:44.825 251996 DEBUG oslo_concurrency.processutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/disk.config.rescue 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:45:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 502 KiB/s rd, 3.6 MiB/s wr, 144 op/s
Dec 06 07:45:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:45.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:45 compute-0 nova_compute[251992]: 2025-12-06 07:45:45.891 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:46 compute-0 ceph-mon[74339]: pgmap v2718: 305 pgs: 305 active+clean; 659 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 502 KiB/s rd, 3.6 MiB/s wr, 144 op/s
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.105 251996 DEBUG oslo_concurrency.processutils [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/disk.config.rescue 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.106 251996 INFO nova.virt.libvirt.driver [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Deleting local config drive /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5/disk.config.rescue because it was imported into RBD.
Dec 06 07:45:46 compute-0 kernel: tap14826742-06: entered promiscuous mode
Dec 06 07:45:46 compute-0 NetworkManager[48965]: <info>  [1765007146.1812] manager: (tap14826742-06): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Dec 06 07:45:46 compute-0 ovn_controller[147168]: 2025-12-06T07:45:46Z|00577|binding|INFO|Claiming lport 14826742-0679-403f-b2e4-28fb0f26527a for this chassis.
Dec 06 07:45:46 compute-0 ovn_controller[147168]: 2025-12-06T07:45:46Z|00578|binding|INFO|14826742-0679-403f-b2e4-28fb0f26527a: Claiming fa:16:3e:9a:c3:cc 10.100.0.12
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.181 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.200 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:46 compute-0 ovn_controller[147168]: 2025-12-06T07:45:46Z|00579|binding|INFO|Setting lport 14826742-0679-403f-b2e4-28fb0f26527a ovn-installed in OVS
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.202 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:46 compute-0 systemd-udevd[352536]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:45:46 compute-0 systemd-machined[212986]: New machine qemu-74-instance-0000009d.
Dec 06 07:45:46 compute-0 NetworkManager[48965]: <info>  [1765007146.2192] device (tap14826742-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:45:46 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-0000009d.
Dec 06 07:45:46 compute-0 NetworkManager[48965]: <info>  [1765007146.2199] device (tap14826742-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:45:46 compute-0 ovn_controller[147168]: 2025-12-06T07:45:46Z|00580|binding|INFO|Setting lport 14826742-0679-403f-b2e4-28fb0f26527a up in Southbound
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.224 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:c3:cc 10.100.0.12'], port_security=['fa:16:3e:9a:c3:cc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '53b4413c-a38e-4ad9-9f1b-43babd1fe2a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=14826742-0679-403f-b2e4-28fb0f26527a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.225 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 14826742-0679-403f-b2e4-28fb0f26527a in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 bound to our chassis
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.226 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.242 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f41ef89f-3672-4bed-aa26-41b5f5835e90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.271 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2290d537-5035-42cc-9798-1938a9ef292f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.275 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[82b4e6f5-a304-43da-bd45-a0ff3537444c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:46.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.303 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[31f35cb3-47b0-4053-8958-7f417e8fd9e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.321 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f913f53f-0641-434f-8e61-dde06c2c0e63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736431, 'reachable_time': 37333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352550, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.338 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9e00cd0b-b2a6-45a5-b080-75d108d7922b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736441, 'tstamp': 736441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352551, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736444, 'tstamp': 736444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352551, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.340 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.342 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.343 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.343 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.343 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.344 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:46.344 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.769 251996 DEBUG nova.compute.manager [req-7c414383-936a-48fa-8948-851447a4d1be req-ea33484b-c611-45a0-bb15-5ae750dd6448 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.770 251996 DEBUG oslo_concurrency.lockutils [req-7c414383-936a-48fa-8948-851447a4d1be req-ea33484b-c611-45a0-bb15-5ae750dd6448 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.770 251996 DEBUG oslo_concurrency.lockutils [req-7c414383-936a-48fa-8948-851447a4d1be req-ea33484b-c611-45a0-bb15-5ae750dd6448 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.770 251996 DEBUG oslo_concurrency.lockutils [req-7c414383-936a-48fa-8948-851447a4d1be req-ea33484b-c611-45a0-bb15-5ae750dd6448 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.770 251996 DEBUG nova.compute.manager [req-7c414383-936a-48fa-8948-851447a4d1be req-ea33484b-c611-45a0-bb15-5ae750dd6448 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] No waiting events found dispatching network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.770 251996 WARNING nova.compute.manager [req-7c414383-936a-48fa-8948-851447a4d1be req-ea33484b-c611-45a0-bb15-5ae750dd6448 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received unexpected event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a for instance with vm_state active and task_state rescuing.
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.929 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.930 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007146.9290693, 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.930 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] VM Resumed (Lifecycle Event)
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.934 251996 DEBUG nova.compute.manager [None req-c0d9b159-3948-4fa6-99c3-d2ece1ffd1fc e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:46 compute-0 nova_compute[251992]: 2025-12-06 07:45:46.997 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:47 compute-0 nova_compute[251992]: 2025-12-06 07:45:47.001 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:45:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 660 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 153 KiB/s rd, 897 KiB/s wr, 69 op/s
Dec 06 07:45:47 compute-0 nova_compute[251992]: 2025-12-06 07:45:47.117 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 06 07:45:47 compute-0 nova_compute[251992]: 2025-12-06 07:45:47.118 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007146.9317572, 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:45:47 compute-0 nova_compute[251992]: 2025-12-06 07:45:47.118 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] VM Started (Lifecycle Event)
Dec 06 07:45:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:47.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:47 compute-0 nova_compute[251992]: 2025-12-06 07:45:47.151 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:47 compute-0 nova_compute[251992]: 2025-12-06 07:45:47.154 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:45:48 compute-0 ceph-mon[74339]: pgmap v2719: 305 pgs: 305 active+clean; 660 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 153 KiB/s rd, 897 KiB/s wr, 69 op/s
Dec 06 07:45:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:48.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:48 compute-0 nova_compute[251992]: 2025-12-06 07:45:48.475 251996 INFO nova.compute.manager [None req-53e08d05-866d-45b0-aa24-f2b37c94557a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Unrescuing
Dec 06 07:45:48 compute-0 nova_compute[251992]: 2025-12-06 07:45:48.475 251996 DEBUG oslo_concurrency.lockutils [None req-53e08d05-866d-45b0-aa24-f2b37c94557a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:45:48 compute-0 nova_compute[251992]: 2025-12-06 07:45:48.476 251996 DEBUG oslo_concurrency.lockutils [None req-53e08d05-866d-45b0-aa24-f2b37c94557a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquired lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:45:48 compute-0 nova_compute[251992]: 2025-12-06 07:45:48.476 251996 DEBUG nova.network.neutron [None req-53e08d05-866d-45b0-aa24-f2b37c94557a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:45:48 compute-0 nova_compute[251992]: 2025-12-06 07:45:48.960 251996 DEBUG nova.compute.manager [req-0fda20a3-ec4e-41d2-8f3e-126bec471be2 req-809797a6-b4be-407c-90e1-d4cc9a225afa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:45:48 compute-0 nova_compute[251992]: 2025-12-06 07:45:48.960 251996 DEBUG oslo_concurrency.lockutils [req-0fda20a3-ec4e-41d2-8f3e-126bec471be2 req-809797a6-b4be-407c-90e1-d4cc9a225afa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:48 compute-0 nova_compute[251992]: 2025-12-06 07:45:48.961 251996 DEBUG oslo_concurrency.lockutils [req-0fda20a3-ec4e-41d2-8f3e-126bec471be2 req-809797a6-b4be-407c-90e1-d4cc9a225afa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:48 compute-0 nova_compute[251992]: 2025-12-06 07:45:48.961 251996 DEBUG oslo_concurrency.lockutils [req-0fda20a3-ec4e-41d2-8f3e-126bec471be2 req-809797a6-b4be-407c-90e1-d4cc9a225afa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:48 compute-0 nova_compute[251992]: 2025-12-06 07:45:48.961 251996 DEBUG nova.compute.manager [req-0fda20a3-ec4e-41d2-8f3e-126bec471be2 req-809797a6-b4be-407c-90e1-d4cc9a225afa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] No waiting events found dispatching network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:45:48 compute-0 nova_compute[251992]: 2025-12-06 07:45:48.961 251996 WARNING nova.compute.manager [req-0fda20a3-ec4e-41d2-8f3e-126bec471be2 req-809797a6-b4be-407c-90e1-d4cc9a225afa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received unexpected event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a for instance with vm_state rescued and task_state unrescuing.
Dec 06 07:45:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 660 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 85 KiB/s wr, 75 op/s
Dec 06 07:45:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:49.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:49 compute-0 ceph-mon[74339]: pgmap v2720: 305 pgs: 305 active+clean; 660 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 85 KiB/s wr, 75 op/s
Dec 06 07:45:49 compute-0 nova_compute[251992]: 2025-12-06 07:45:49.646 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:50.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:50 compute-0 nova_compute[251992]: 2025-12-06 07:45:50.942 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 625 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 44 KiB/s wr, 101 op/s
Dec 06 07:45:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:51.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.069 251996 DEBUG nova.network.neutron [None req-53e08d05-866d-45b0-aa24-f2b37c94557a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Updating instance_info_cache with network_info: [{"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.089 251996 DEBUG oslo_concurrency.lockutils [None req-53e08d05-866d-45b0-aa24-f2b37c94557a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Releasing lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.091 251996 DEBUG nova.objects.instance [None req-53e08d05-866d-45b0-aa24-f2b37c94557a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'flavor' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:52 compute-0 ceph-mon[74339]: pgmap v2721: 305 pgs: 305 active+clean; 625 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 44 KiB/s wr, 101 op/s
Dec 06 07:45:52 compute-0 kernel: tap14826742-06 (unregistering): left promiscuous mode
Dec 06 07:45:52 compute-0 NetworkManager[48965]: <info>  [1765007152.1636] device (tap14826742-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:45:52 compute-0 ovn_controller[147168]: 2025-12-06T07:45:52Z|00581|binding|INFO|Releasing lport 14826742-0679-403f-b2e4-28fb0f26527a from this chassis (sb_readonly=0)
Dec 06 07:45:52 compute-0 ovn_controller[147168]: 2025-12-06T07:45:52Z|00582|binding|INFO|Setting lport 14826742-0679-403f-b2e4-28fb0f26527a down in Southbound
Dec 06 07:45:52 compute-0 ovn_controller[147168]: 2025-12-06T07:45:52Z|00583|binding|INFO|Removing iface tap14826742-06 ovn-installed in OVS
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.171 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.173 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.182 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:c3:cc 10.100.0.12'], port_security=['fa:16:3e:9a:c3:cc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '53b4413c-a38e-4ad9-9f1b-43babd1fe2a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=14826742-0679-403f-b2e4-28fb0f26527a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.183 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 14826742-0679-403f-b2e4-28fb0f26527a in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.185 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.192 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.202 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3accf3c7-8769-4b0a-ba6d-c6ae26f43c8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000009d.scope: Deactivated successfully.
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.229 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[912e0e28-fa5c-4e36-a82a-a87ac9588f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d0000009d.scope: Consumed 6.137s CPU time.
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.232 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3b89022b-97e6-4e78-a3da-852a45b8aec4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 systemd-machined[212986]: Machine qemu-74-instance-0000009d terminated.
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.258 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2adc88-d57f-40d5-a38c-58449c546440]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.274 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[66fe6de1-3796-461e-89ef-a2a39a33cc49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736431, 'reachable_time': 37333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352627, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.288 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b918a85c-63f4-4b60-9fda-560fbce03399]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736441, 'tstamp': 736441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352628, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736444, 'tstamp': 736444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352628, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.289 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.290 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:52.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.294 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.295 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.295 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.295 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.296 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:52 compute-0 kernel: tap14826742-06: entered promiscuous mode
Dec 06 07:45:52 compute-0 kernel: tap14826742-06 (unregistering): left promiscuous mode
Dec 06 07:45:52 compute-0 NetworkManager[48965]: <info>  [1765007152.3385] manager: (tap14826742-06): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.341 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 ovn_controller[147168]: 2025-12-06T07:45:52Z|00584|binding|INFO|Claiming lport 14826742-0679-403f-b2e4-28fb0f26527a for this chassis.
Dec 06 07:45:52 compute-0 ovn_controller[147168]: 2025-12-06T07:45:52Z|00585|binding|INFO|14826742-0679-403f-b2e4-28fb0f26527a: Claiming fa:16:3e:9a:c3:cc 10.100.0.12
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.356 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:c3:cc 10.100.0.12'], port_security=['fa:16:3e:9a:c3:cc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '53b4413c-a38e-4ad9-9f1b-43babd1fe2a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=14826742-0679-403f-b2e4-28fb0f26527a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.357 251996 INFO nova.virt.libvirt.driver [-] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Instance destroyed successfully.
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.357 251996 DEBUG nova.objects.instance [None req-53e08d05-866d-45b0-aa24-f2b37c94557a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'numa_topology' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.358 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 14826742-0679-403f-b2e4-28fb0f26527a in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 bound to our chassis
Dec 06 07:45:52 compute-0 ovn_controller[147168]: 2025-12-06T07:45:52Z|00586|binding|INFO|Releasing lport 14826742-0679-403f-b2e4-28fb0f26527a from this chassis (sb_readonly=0)
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.359 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.360 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.376 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[488708e5-90fd-441a-b403-8d7ae3a84be3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.392 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:c3:cc 10.100.0.12'], port_security=['fa:16:3e:9a:c3:cc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '53b4413c-a38e-4ad9-9f1b-43babd1fe2a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=14826742-0679-403f-b2e4-28fb0f26527a) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.401 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1b86bb45-6786-40f7-8cf8-c71e8a31f47d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.404 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[3386c788-4025-428a-85da-1ab5eaae912c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.431 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2926c916-5d6a-4b48-a373-fd34cd3218ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.450 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4263bf-c17f-4195-b823-666f2173d9f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736431, 'reachable_time': 37333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352639, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.465 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5d730595-31cc-43b3-a8e5-e177ca9434bc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736441, 'tstamp': 736441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352642, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736444, 'tstamp': 736444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352642, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.466 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.468 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.472 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.472 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.473 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.473 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.474 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.474 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 14826742-0679-403f-b2e4-28fb0f26527a in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.476 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.489 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[143f85f1-6500-414a-8afb-7c30a87d6782]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.522 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[539a07fc-267b-485a-9f3b-d343f8d7a325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.526 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[f8073e53-8db6-4efe-a8d1-420156e475a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 kernel: tap14826742-06: entered promiscuous mode
Dec 06 07:45:52 compute-0 NetworkManager[48965]: <info>  [1765007152.5578] manager: (tap14826742-06): new Tun device (/org/freedesktop/NetworkManager/Devices/268)
Dec 06 07:45:52 compute-0 ovn_controller[147168]: 2025-12-06T07:45:52Z|00587|binding|INFO|Claiming lport 14826742-0679-403f-b2e4-28fb0f26527a for this chassis.
Dec 06 07:45:52 compute-0 ovn_controller[147168]: 2025-12-06T07:45:52Z|00588|binding|INFO|14826742-0679-403f-b2e4-28fb0f26527a: Claiming fa:16:3e:9a:c3:cc 10.100.0.12
Dec 06 07:45:52 compute-0 systemd-udevd[352618]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.559 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.557 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[71f3f0c8-db13-4c06-99b0-859177c137ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 NetworkManager[48965]: <info>  [1765007152.5689] device (tap14826742-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.567 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:c3:cc 10.100.0.12'], port_security=['fa:16:3e:9a:c3:cc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '53b4413c-a38e-4ad9-9f1b-43babd1fe2a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=14826742-0679-403f-b2e4-28fb0f26527a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:45:52 compute-0 NetworkManager[48965]: <info>  [1765007152.5709] device (tap14826742-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:45:52 compute-0 ovn_controller[147168]: 2025-12-06T07:45:52Z|00589|binding|INFO|Setting lport 14826742-0679-403f-b2e4-28fb0f26527a ovn-installed in OVS
Dec 06 07:45:52 compute-0 ovn_controller[147168]: 2025-12-06T07:45:52Z|00590|binding|INFO|Setting lport 14826742-0679-403f-b2e4-28fb0f26527a up in Southbound
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.576 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.578 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.579 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b33714-10e9-40a1-b5da-9a424e6508e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736431, 'reachable_time': 37333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352680, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 systemd-machined[212986]: New machine qemu-75-instance-0000009d.
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.595 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[59cb25ca-7c08-4652-8203-186ace799424]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736441, 'tstamp': 736441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352685, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736444, 'tstamp': 736444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352685, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.597 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.598 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.600 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.601 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.601 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.601 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.601 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.602 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 14826742-0679-403f-b2e4-28fb0f26527a in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.604 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:45:52 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-0000009d.
Dec 06 07:45:52 compute-0 podman[352641]: 2025-12-06 07:45:52.611549267 +0000 UTC m=+0.129698951 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.619 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e64d36b9-7ab1-40c1-b52e-8b53f2562f02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.645 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0cc8d3-6045-4396-9f54-01d031caf9ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.648 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a8f042-0643-4099-a127-979af3db24a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.675 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[61e1c983-b1d6-46ce-8b42-be3e76023462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.692 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2b1737-a1b2-4545-9269-5baeaabcf034]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 17, 'rx_bytes': 616, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 17, 'rx_bytes': 616, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736431, 'reachable_time': 37333, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352699, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.709 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[95ae10ba-0fe4-4bbc-91a1-25b642f3e5be]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736441, 'tstamp': 736441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352700, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736444, 'tstamp': 736444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352700, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.710 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.712 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.714 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.715 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.716 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.716 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:45:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:45:52.717 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.977 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.978 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007152.9773188, 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:45:52 compute-0 nova_compute[251992]: 2025-12-06 07:45:52.978 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] VM Resumed (Lifecycle Event)
Dec 06 07:45:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 597 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 30 KiB/s wr, 111 op/s
Dec 06 07:45:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2982845957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:45:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:53.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:53 compute-0 nova_compute[251992]: 2025-12-06 07:45:53.186 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:53 compute-0 nova_compute[251992]: 2025-12-06 07:45:53.191 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:45:53 compute-0 nova_compute[251992]: 2025-12-06 07:45:53.253 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] During sync_power_state the instance has a pending task (unrescuing). Skip.
Dec 06 07:45:53 compute-0 nova_compute[251992]: 2025-12-06 07:45:53.253 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007152.9782631, 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:45:53 compute-0 nova_compute[251992]: 2025-12-06 07:45:53.253 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] VM Started (Lifecycle Event)
Dec 06 07:45:53 compute-0 nova_compute[251992]: 2025-12-06 07:45:53.274 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:53 compute-0 nova_compute[251992]: 2025-12-06 07:45:53.278 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:45:53 compute-0 nova_compute[251992]: 2025-12-06 07:45:53.304 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] During sync_power_state the instance has a pending task (unrescuing). Skip.
Dec 06 07:45:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:45:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888879543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:45:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3888879543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.001 251996 DEBUG nova.compute.manager [None req-53e08d05-866d-45b0-aa24-f2b37c94557a e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:45:54 compute-0 ceph-mon[74339]: pgmap v2722: 305 pgs: 305 active+clean; 597 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 30 KiB/s wr, 111 op/s
Dec 06 07:45:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3888879543' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3888879543' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:54.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.323 251996 DEBUG nova.compute.manager [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-unplugged-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.323 251996 DEBUG oslo_concurrency.lockutils [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.323 251996 DEBUG oslo_concurrency.lockutils [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.323 251996 DEBUG oslo_concurrency.lockutils [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.323 251996 DEBUG nova.compute.manager [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] No waiting events found dispatching network-vif-unplugged-14826742-0679-403f-b2e4-28fb0f26527a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.324 251996 WARNING nova.compute.manager [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received unexpected event network-vif-unplugged-14826742-0679-403f-b2e4-28fb0f26527a for instance with vm_state active and task_state None.
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.324 251996 DEBUG nova.compute.manager [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.324 251996 DEBUG oslo_concurrency.lockutils [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.324 251996 DEBUG oslo_concurrency.lockutils [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.324 251996 DEBUG oslo_concurrency.lockutils [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.324 251996 DEBUG nova.compute.manager [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] No waiting events found dispatching network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.325 251996 WARNING nova.compute.manager [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received unexpected event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a for instance with vm_state active and task_state None.
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.325 251996 DEBUG nova.compute.manager [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.325 251996 DEBUG oslo_concurrency.lockutils [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.325 251996 DEBUG oslo_concurrency.lockutils [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.325 251996 DEBUG oslo_concurrency.lockutils [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.325 251996 DEBUG nova.compute.manager [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] No waiting events found dispatching network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.326 251996 WARNING nova.compute.manager [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received unexpected event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a for instance with vm_state active and task_state None.
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.326 251996 DEBUG nova.compute.manager [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.326 251996 DEBUG oslo_concurrency.lockutils [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.326 251996 DEBUG oslo_concurrency.lockutils [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.326 251996 DEBUG oslo_concurrency.lockutils [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.326 251996 DEBUG nova.compute.manager [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] No waiting events found dispatching network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.327 251996 WARNING nova.compute.manager [req-4a300cd9-8df6-409a-a835-19f1545c0c79 req-976e4c21-de1b-4950-9b8b-177f9945ff43 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received unexpected event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a for instance with vm_state active and task_state None.
Dec 06 07:45:54 compute-0 nova_compute[251992]: 2025-12-06 07:45:54.648 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 30 KiB/s wr, 201 op/s
Dec 06 07:45:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:55.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:55 compute-0 ceph-mon[74339]: pgmap v2723: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 30 KiB/s wr, 201 op/s
Dec 06 07:45:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4244794580' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4244794580' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:55 compute-0 nova_compute[251992]: 2025-12-06 07:45:55.944 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:56.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 17 KiB/s wr, 217 op/s
Dec 06 07:45:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:57.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:57 compute-0 ceph-mon[74339]: pgmap v2724: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 17 KiB/s wr, 217 op/s
Dec 06 07:45:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:45:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:45:58.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:45:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2381368513' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2381368513' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.7 KiB/s wr, 231 op/s
Dec 06 07:45:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:45:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:45:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:45:59.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:45:59 compute-0 nova_compute[251992]: 2025-12-06 07:45:59.650 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:45:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:45:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2197277755' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:45:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2197277755' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:45:59 compute-0 ceph-mon[74339]: pgmap v2725: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.7 KiB/s wr, 231 op/s
Dec 06 07:46:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:00.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:00 compute-0 nova_compute[251992]: 2025-12-06 07:46:00.946 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.8 KiB/s wr, 215 op/s
Dec 06 07:46:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:01.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1020618160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:46:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1020618160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:46:01 compute-0 podman[352768]: 2025-12-06 07:46:01.405380261 +0000 UTC m=+0.060424170 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible)
Dec 06 07:46:01 compute-0 podman[352767]: 2025-12-06 07:46:01.406240844 +0000 UTC m=+0.061085097 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 06 07:46:01 compute-0 sudo[352804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:01 compute-0 sudo[352804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:01 compute-0 sudo[352804]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:01 compute-0 sudo[352829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:01 compute-0 sudo[352829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:01 compute-0 sudo[352829]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:02.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:02 compute-0 ceph-mon[74339]: pgmap v2726: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.8 KiB/s wr, 215 op/s
Dec 06 07:46:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.3 KiB/s wr, 192 op/s
Dec 06 07:46:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:03.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:03 compute-0 ceph-mon[74339]: pgmap v2727: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.3 KiB/s wr, 192 op/s
Dec 06 07:46:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4208224397' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:46:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4208224397' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:46:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2049582650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:46:03.853 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:46:03.854 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:46:03.855 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:04.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:04 compute-0 nova_compute[251992]: 2025-12-06 07:46:04.652 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 600 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 995 KiB/s wr, 223 op/s
Dec 06 07:46:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:05.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:05 compute-0 ceph-mon[74339]: pgmap v2728: 305 pgs: 305 active+clean; 600 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 995 KiB/s wr, 223 op/s
Dec 06 07:46:05 compute-0 nova_compute[251992]: 2025-12-06 07:46:05.948 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:06.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:06 compute-0 ovn_controller[147168]: 2025-12-06T07:46:06Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:c3:cc 10.100.0.12
Dec 06 07:46:06 compute-0 ovn_controller[147168]: 2025-12-06T07:46:06Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:c3:cc 10.100.0.12
Dec 06 07:46:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 613 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 130 op/s
Dec 06 07:46:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:07.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:07 compute-0 nova_compute[251992]: 2025-12-06 07:46:07.957 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:08.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:08 compute-0 ceph-mon[74339]: pgmap v2729: 305 pgs: 305 active+clean; 613 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 130 op/s
Dec 06 07:46:08 compute-0 ovn_controller[147168]: 2025-12-06T07:46:08Z|00591|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:46:08 compute-0 nova_compute[251992]: 2025-12-06 07:46:08.448 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:08 compute-0 ovn_controller[147168]: 2025-12-06T07:46:08Z|00592|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:46:08 compute-0 nova_compute[251992]: 2025-12-06 07:46:08.614 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:46:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1956200466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:46:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:46:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1956200466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:46:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 626 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 824 KiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 06 07:46:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:09.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/750190271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2971671356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1956200466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:46:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1956200466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:46:09 compute-0 ceph-mon[74339]: pgmap v2730: 305 pgs: 305 active+clean; 626 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 824 KiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 06 07:46:09 compute-0 nova_compute[251992]: 2025-12-06 07:46:09.654 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:10.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:10 compute-0 nova_compute[251992]: 2025-12-06 07:46:10.967 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 626 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 504 KiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 06 07:46:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:11.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:11 compute-0 ceph-mon[74339]: pgmap v2731: 305 pgs: 305 active+clean; 626 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 504 KiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec 06 07:46:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:12.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:46:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:46:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:46:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:46:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:46:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:46:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 626 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 578 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Dec 06 07:46:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:13.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:13 compute-0 ceph-mon[74339]: pgmap v2732: 305 pgs: 305 active+clean; 626 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 578 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Dec 06 07:46:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:14.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:14 compute-0 nova_compute[251992]: 2025-12-06 07:46:14.657 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Dec 06 07:46:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:15.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:15 compute-0 nova_compute[251992]: 2025-12-06 07:46:15.970 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:16 compute-0 nova_compute[251992]: 2025-12-06 07:46:16.161 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:16 compute-0 NetworkManager[48965]: <info>  [1765007176.1621] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Dec 06 07:46:16 compute-0 NetworkManager[48965]: <info>  [1765007176.1628] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Dec 06 07:46:16 compute-0 nova_compute[251992]: 2025-12-06 07:46:16.292 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:16 compute-0 ovn_controller[147168]: 2025-12-06T07:46:16Z|00593|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:46:16 compute-0 nova_compute[251992]: 2025-12-06 07:46:16.306 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:16.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:16 compute-0 ceph-mon[74339]: pgmap v2733: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Dec 06 07:46:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 864 KiB/s wr, 114 op/s
Dec 06 07:46:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:46:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:17.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:46:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:18.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:46:18
Dec 06 07:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', '.rgw.root']
Dec 06 07:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:46:18 compute-0 ceph-mon[74339]: pgmap v2734: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 864 KiB/s wr, 114 op/s
Dec 06 07:46:18 compute-0 nova_compute[251992]: 2025-12-06 07:46:18.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:18 compute-0 nova_compute[251992]: 2025-12-06 07:46:18.853 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:18 compute-0 nova_compute[251992]: 2025-12-06 07:46:18.853 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:18 compute-0 nova_compute[251992]: 2025-12-06 07:46:18.854 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:18 compute-0 nova_compute[251992]: 2025-12-06 07:46:18.855 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:46:18 compute-0 nova_compute[251992]: 2025-12-06 07:46:18.855 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:46:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 606 KiB/s wr, 121 op/s
Dec 06 07:46:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:19.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:46:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4196588490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.347 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.439 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.440 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.443 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.443 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:46:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Dec 06 07:46:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Dec 06 07:46:19 compute-0 ceph-mon[74339]: pgmap v2735: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 606 KiB/s wr, 121 op/s
Dec 06 07:46:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1600270492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4196588490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.622 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.623 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3913MB free_disk=20.78506851196289GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.623 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.623 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.662 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.711 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.711 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.712 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.712 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:46:19 compute-0 nova_compute[251992]: 2025-12-06 07:46:19.769 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:46:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:46:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1632847009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:20 compute-0 nova_compute[251992]: 2025-12-06 07:46:20.242 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:46:20 compute-0 nova_compute[251992]: 2025-12-06 07:46:20.248 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:46:20 compute-0 nova_compute[251992]: 2025-12-06 07:46:20.264 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:46:20 compute-0 nova_compute[251992]: 2025-12-06 07:46:20.267 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:46:20 compute-0 nova_compute[251992]: 2025-12-06 07:46:20.267 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:46:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:20.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:20 compute-0 ceph-mon[74339]: osdmap e341: 3 total, 3 up, 3 in
Dec 06 07:46:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1062430692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1632847009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:20 compute-0 nova_compute[251992]: 2025-12-06 07:46:20.972 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 38 KiB/s wr, 89 op/s
Dec 06 07:46:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:21.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:21 compute-0 nova_compute[251992]: 2025-12-06 07:46:21.476 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Dec 06 07:46:21 compute-0 ceph-mon[74339]: pgmap v2737: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 38 KiB/s wr, 89 op/s
Dec 06 07:46:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Dec 06 07:46:21 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Dec 06 07:46:21 compute-0 sudo[352910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:21 compute-0 sudo[352910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:21 compute-0 sudo[352910]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:21 compute-0 sudo[352935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:21 compute-0 sudo[352935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:21 compute-0 sudo[352935]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:22.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:22 compute-0 ceph-mon[74339]: osdmap e342: 3 total, 3 up, 3 in
Dec 06 07:46:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 46 op/s
Dec 06 07:46:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:23.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:23 compute-0 podman[352961]: 2025-12-06 07:46:23.454986104 +0000 UTC m=+0.111989352 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Dec 06 07:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:46:23 compute-0 ceph-mon[74339]: pgmap v2739: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 46 op/s
Dec 06 07:46:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3024054679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:46:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3024054679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:46:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:24.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:24 compute-0 nova_compute[251992]: 2025-12-06 07:46:24.430 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Dec 06 07:46:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Dec 06 07:46:24 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Dec 06 07:46:24 compute-0 nova_compute[251992]: 2025-12-06 07:46:24.664 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 649 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.5 MiB/s wr, 109 op/s
Dec 06 07:46:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:46:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:46:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:46:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:46:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:46:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:25.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Dec 06 07:46:25 compute-0 ceph-mon[74339]: osdmap e343: 3 total, 3 up, 3 in
Dec 06 07:46:25 compute-0 ceph-mon[74339]: pgmap v2741: 305 pgs: 305 active+clean; 649 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.5 MiB/s wr, 109 op/s
Dec 06 07:46:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Dec 06 07:46:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Dec 06 07:46:25 compute-0 nova_compute[251992]: 2025-12-06 07:46:25.974 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010193109645449469 of space, bias 1.0, pg target 3.0579328936348404 quantized to 32 (current 32)
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002163777510236367 of space, bias 1.0, pg target 0.642641920540201 quantized to 32 (current 32)
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005233233586916062 of space, bias 1.0, pg target 1.5542703753140705 quantized to 32 (current 32)
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:46:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Dec 06 07:46:26 compute-0 nova_compute[251992]: 2025-12-06 07:46:26.268 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:26 compute-0 nova_compute[251992]: 2025-12-06 07:46:26.269 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Dec 06 07:46:26 compute-0 ceph-mon[74339]: osdmap e344: 3 total, 3 up, 3 in
Dec 06 07:46:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 690 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.0 MiB/s wr, 178 op/s
Dec 06 07:46:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Dec 06 07:46:27 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Dec 06 07:46:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:27.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:27 compute-0 nova_compute[251992]: 2025-12-06 07:46:27.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:28 compute-0 ceph-mon[74339]: pgmap v2743: 305 pgs: 305 active+clean; 690 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.0 MiB/s wr, 178 op/s
Dec 06 07:46:28 compute-0 ceph-mon[74339]: osdmap e345: 3 total, 3 up, 3 in
Dec 06 07:46:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:28.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:28 compute-0 nova_compute[251992]: 2025-12-06 07:46:28.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:28 compute-0 nova_compute[251992]: 2025-12-06 07:46:28.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:46:28 compute-0 nova_compute[251992]: 2025-12-06 07:46:28.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:46:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 725 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 11 MiB/s wr, 278 op/s
Dec 06 07:46:29 compute-0 nova_compute[251992]: 2025-12-06 07:46:29.061 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:46:29 compute-0 nova_compute[251992]: 2025-12-06 07:46:29.062 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:46:29 compute-0 nova_compute[251992]: 2025-12-06 07:46:29.062 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:46:29 compute-0 nova_compute[251992]: 2025-12-06 07:46:29.062 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:46:29 compute-0 ceph-mon[74339]: pgmap v2745: 305 pgs: 305 active+clean; 725 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 11 MiB/s wr, 278 op/s
Dec 06 07:46:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:29.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:29 compute-0 nova_compute[251992]: 2025-12-06 07:46:29.667 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Dec 06 07:46:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Dec 06 07:46:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Dec 06 07:46:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2389771305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:30 compute-0 ceph-mon[74339]: osdmap e346: 3 total, 3 up, 3 in
Dec 06 07:46:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:30.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:30 compute-0 nova_compute[251992]: 2025-12-06 07:46:30.892 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Updating instance_info_cache with network_info: [{"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:46:30 compute-0 nova_compute[251992]: 2025-12-06 07:46:30.911 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:46:30 compute-0 nova_compute[251992]: 2025-12-06 07:46:30.911 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:46:30 compute-0 nova_compute[251992]: 2025-12-06 07:46:30.912 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:30 compute-0 nova_compute[251992]: 2025-12-06 07:46:30.912 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:30 compute-0 nova_compute[251992]: 2025-12-06 07:46:30.912 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:30 compute-0 nova_compute[251992]: 2025-12-06 07:46:30.913 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:46:30 compute-0 nova_compute[251992]: 2025-12-06 07:46:30.975 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 735 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 9.5 MiB/s wr, 245 op/s
Dec 06 07:46:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:31.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/347993965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:31 compute-0 ceph-mon[74339]: pgmap v2747: 305 pgs: 305 active+clean; 735 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 9.5 MiB/s wr, 245 op/s
Dec 06 07:46:31 compute-0 nova_compute[251992]: 2025-12-06 07:46:31.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:46:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:32.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:32 compute-0 podman[352991]: 2025-12-06 07:46:32.409934668 +0000 UTC m=+0.069085885 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 07:46:32 compute-0 podman[352992]: 2025-12-06 07:46:32.427125771 +0000 UTC m=+0.081915831 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:46:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1354848071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 740 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 7.7 MiB/s wr, 210 op/s
Dec 06 07:46:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:33.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:33 compute-0 ceph-mon[74339]: pgmap v2748: 305 pgs: 305 active+clean; 740 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 7.7 MiB/s wr, 210 op/s
Dec 06 07:46:33 compute-0 nova_compute[251992]: 2025-12-06 07:46:33.614 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:46:33.614 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:46:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:46:33.616 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:46:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:34.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Dec 06 07:46:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/462057177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Dec 06 07:46:34 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Dec 06 07:46:34 compute-0 nova_compute[251992]: 2025-12-06 07:46:34.669 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Dec 06 07:46:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Dec 06 07:46:34 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Dec 06 07:46:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 733 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 191 op/s
Dec 06 07:46:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:35.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:35 compute-0 ceph-mon[74339]: osdmap e347: 3 total, 3 up, 3 in
Dec 06 07:46:35 compute-0 ceph-mon[74339]: osdmap e348: 3 total, 3 up, 3 in
Dec 06 07:46:35 compute-0 ceph-mon[74339]: pgmap v2751: 305 pgs: 305 active+clean; 733 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 191 op/s
Dec 06 07:46:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Dec 06 07:46:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Dec 06 07:46:35 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Dec 06 07:46:35 compute-0 sudo[353031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:35 compute-0 sudo[353031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:35 compute-0 sudo[353031]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:35 compute-0 sudo[353056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:46:35 compute-0 sudo[353056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:35 compute-0 sudo[353056]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:35 compute-0 nova_compute[251992]: 2025-12-06 07:46:35.978 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:36 compute-0 sudo[353081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:36 compute-0 sudo[353081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:36 compute-0 sudo[353081]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:36 compute-0 sudo[353106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:46:36 compute-0 sudo[353106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:36.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:36 compute-0 sudo[353106]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 07:46:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:46:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:46:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:46:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:46:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:46:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:46:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:46:36 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 14a93e8d-f340-400d-bdd8-6a006eec87bc does not exist
Dec 06 07:46:36 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2354ced5-a79a-4a1e-bc2f-e98900e6c511 does not exist
Dec 06 07:46:36 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 639945d6-358e-4ac1-9596-cd8e66743d36 does not exist
Dec 06 07:46:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:46:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:46:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:46:36 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:46:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:46:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:46:36 compute-0 nova_compute[251992]: 2025-12-06 07:46:36.738 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:36 compute-0 sudo[353162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:36 compute-0 sudo[353162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:36 compute-0 sudo[353162]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:36 compute-0 sudo[353187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:46:36 compute-0 sudo[353187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:36 compute-0 sudo[353187]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Dec 06 07:46:36 compute-0 ceph-mon[74339]: osdmap e349: 3 total, 3 up, 3 in
Dec 06 07:46:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:46:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:46:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:46:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:46:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:46:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:46:36 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:46:36 compute-0 sudo[353212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:36 compute-0 sudo[353212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:36 compute-0 sudo[353212]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Dec 06 07:46:36 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Dec 06 07:46:36 compute-0 sudo[353237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:46:36 compute-0 sudo[353237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 730 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 10 MiB/s wr, 262 op/s
Dec 06 07:46:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:37.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:37 compute-0 podman[353304]: 2025-12-06 07:46:37.224115499 +0000 UTC m=+0.039875807 container create bdeebb6e1dd97d5ecb2470eec7e00f841e83262eb9bae4a4b8eabe7c330cf45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:46:37 compute-0 systemd[1]: Started libpod-conmon-bdeebb6e1dd97d5ecb2470eec7e00f841e83262eb9bae4a4b8eabe7c330cf45b.scope.
Dec 06 07:46:37 compute-0 podman[353304]: 2025-12-06 07:46:37.205921927 +0000 UTC m=+0.021682255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:46:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:46:37 compute-0 podman[353304]: 2025-12-06 07:46:37.355218986 +0000 UTC m=+0.170979314 container init bdeebb6e1dd97d5ecb2470eec7e00f841e83262eb9bae4a4b8eabe7c330cf45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 07:46:37 compute-0 podman[353304]: 2025-12-06 07:46:37.363346555 +0000 UTC m=+0.179106863 container start bdeebb6e1dd97d5ecb2470eec7e00f841e83262eb9bae4a4b8eabe7c330cf45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 07:46:37 compute-0 podman[353304]: 2025-12-06 07:46:37.368046962 +0000 UTC m=+0.183807300 container attach bdeebb6e1dd97d5ecb2470eec7e00f841e83262eb9bae4a4b8eabe7c330cf45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:46:37 compute-0 confident_taussig[353320]: 167 167
Dec 06 07:46:37 compute-0 systemd[1]: libpod-bdeebb6e1dd97d5ecb2470eec7e00f841e83262eb9bae4a4b8eabe7c330cf45b.scope: Deactivated successfully.
Dec 06 07:46:37 compute-0 podman[353304]: 2025-12-06 07:46:37.371604398 +0000 UTC m=+0.187364726 container died bdeebb6e1dd97d5ecb2470eec7e00f841e83262eb9bae4a4b8eabe7c330cf45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-612d5a05aeb1fa4c811cb19f9e9186609fa374c5bb566018798374fc767b833c-merged.mount: Deactivated successfully.
Dec 06 07:46:37 compute-0 podman[353304]: 2025-12-06 07:46:37.406732236 +0000 UTC m=+0.222492544 container remove bdeebb6e1dd97d5ecb2470eec7e00f841e83262eb9bae4a4b8eabe7c330cf45b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 07:46:37 compute-0 systemd[1]: libpod-conmon-bdeebb6e1dd97d5ecb2470eec7e00f841e83262eb9bae4a4b8eabe7c330cf45b.scope: Deactivated successfully.
Dec 06 07:46:37 compute-0 podman[353345]: 2025-12-06 07:46:37.567263947 +0000 UTC m=+0.035439507 container create d638e206447b93f2c601039ae1a9144f25c9f7bc4a12e68a5ccb6c64c456ef5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 07:46:37 compute-0 systemd[1]: Started libpod-conmon-d638e206447b93f2c601039ae1a9144f25c9f7bc4a12e68a5ccb6c64c456ef5c.scope.
Dec 06 07:46:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:46:37.618 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:46:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13523c81a03ce4cc2127cf4812cfecffbee29d00938c423380945dec30414744/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13523c81a03ce4cc2127cf4812cfecffbee29d00938c423380945dec30414744/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13523c81a03ce4cc2127cf4812cfecffbee29d00938c423380945dec30414744/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13523c81a03ce4cc2127cf4812cfecffbee29d00938c423380945dec30414744/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13523c81a03ce4cc2127cf4812cfecffbee29d00938c423380945dec30414744/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:37 compute-0 podman[353345]: 2025-12-06 07:46:37.551935104 +0000 UTC m=+0.020110674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:46:37 compute-0 podman[353345]: 2025-12-06 07:46:37.651076088 +0000 UTC m=+0.119251738 container init d638e206447b93f2c601039ae1a9144f25c9f7bc4a12e68a5ccb6c64c456ef5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:46:37 compute-0 podman[353345]: 2025-12-06 07:46:37.658666563 +0000 UTC m=+0.126842123 container start d638e206447b93f2c601039ae1a9144f25c9f7bc4a12e68a5ccb6c64c456ef5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:46:37 compute-0 podman[353345]: 2025-12-06 07:46:37.663067311 +0000 UTC m=+0.131242891 container attach d638e206447b93f2c601039ae1a9144f25c9f7bc4a12e68a5ccb6c64c456ef5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 07:46:38 compute-0 ceph-mon[74339]: osdmap e350: 3 total, 3 up, 3 in
Dec 06 07:46:38 compute-0 ceph-mon[74339]: pgmap v2754: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 730 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 10 MiB/s wr, 262 op/s
Dec 06 07:46:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/540055131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:46:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1061802510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:38.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:38 compute-0 lucid_liskov[353361]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:46:38 compute-0 lucid_liskov[353361]: --> relative data size: 1.0
Dec 06 07:46:38 compute-0 lucid_liskov[353361]: --> All data devices are unavailable
Dec 06 07:46:38 compute-0 systemd[1]: libpod-d638e206447b93f2c601039ae1a9144f25c9f7bc4a12e68a5ccb6c64c456ef5c.scope: Deactivated successfully.
Dec 06 07:46:38 compute-0 podman[353345]: 2025-12-06 07:46:38.550414153 +0000 UTC m=+1.018589713 container died d638e206447b93f2c601039ae1a9144f25c9f7bc4a12e68a5ccb6c64c456ef5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:46:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-13523c81a03ce4cc2127cf4812cfecffbee29d00938c423380945dec30414744-merged.mount: Deactivated successfully.
Dec 06 07:46:38 compute-0 podman[353345]: 2025-12-06 07:46:38.601138722 +0000 UTC m=+1.069314292 container remove d638e206447b93f2c601039ae1a9144f25c9f7bc4a12e68a5ccb6c64c456ef5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_liskov, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:46:38 compute-0 systemd[1]: libpod-conmon-d638e206447b93f2c601039ae1a9144f25c9f7bc4a12e68a5ccb6c64c456ef5c.scope: Deactivated successfully.
Dec 06 07:46:38 compute-0 sudo[353237]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:38 compute-0 sudo[353387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:38 compute-0 sudo[353387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:38 compute-0 sudo[353387]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:38 compute-0 sudo[353412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:46:38 compute-0 sudo[353412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:38 compute-0 sudo[353412]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:38 compute-0 sudo[353437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:38 compute-0 sudo[353437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:38 compute-0 sudo[353437]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:38 compute-0 sudo[353462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:46:38 compute-0 sudo[353462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 783 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 14 MiB/s wr, 289 op/s
Dec 06 07:46:39 compute-0 podman[353527]: 2025-12-06 07:46:39.193763322 +0000 UTC m=+0.046351782 container create 50131f417a5172b1f1e1e5bfede914451b8bcb80fda1eb72b34226a2c394b5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jennings, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:46:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:39.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:39 compute-0 systemd[1]: Started libpod-conmon-50131f417a5172b1f1e1e5bfede914451b8bcb80fda1eb72b34226a2c394b5f8.scope.
Dec 06 07:46:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:46:39 compute-0 podman[353527]: 2025-12-06 07:46:39.170246477 +0000 UTC m=+0.022834957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:46:39 compute-0 podman[353527]: 2025-12-06 07:46:39.267205373 +0000 UTC m=+0.119793843 container init 50131f417a5172b1f1e1e5bfede914451b8bcb80fda1eb72b34226a2c394b5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jennings, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 07:46:39 compute-0 podman[353527]: 2025-12-06 07:46:39.27378854 +0000 UTC m=+0.126377000 container start 50131f417a5172b1f1e1e5bfede914451b8bcb80fda1eb72b34226a2c394b5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jennings, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:46:39 compute-0 podman[353527]: 2025-12-06 07:46:39.277207633 +0000 UTC m=+0.129796113 container attach 50131f417a5172b1f1e1e5bfede914451b8bcb80fda1eb72b34226a2c394b5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jennings, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:46:39 compute-0 modest_jennings[353543]: 167 167
Dec 06 07:46:39 compute-0 systemd[1]: libpod-50131f417a5172b1f1e1e5bfede914451b8bcb80fda1eb72b34226a2c394b5f8.scope: Deactivated successfully.
Dec 06 07:46:39 compute-0 podman[353527]: 2025-12-06 07:46:39.278936229 +0000 UTC m=+0.131524689 container died 50131f417a5172b1f1e1e5bfede914451b8bcb80fda1eb72b34226a2c394b5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-28c64503bf5eed01a227e960f2a0ed85ab6476af5c07d65d9f6e4d640b428dbc-merged.mount: Deactivated successfully.
Dec 06 07:46:39 compute-0 podman[353527]: 2025-12-06 07:46:39.313143202 +0000 UTC m=+0.165731662 container remove 50131f417a5172b1f1e1e5bfede914451b8bcb80fda1eb72b34226a2c394b5f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:46:39 compute-0 systemd[1]: libpod-conmon-50131f417a5172b1f1e1e5bfede914451b8bcb80fda1eb72b34226a2c394b5f8.scope: Deactivated successfully.
Dec 06 07:46:39 compute-0 ceph-mon[74339]: pgmap v2755: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 783 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 14 MiB/s wr, 289 op/s
Dec 06 07:46:39 compute-0 podman[353565]: 2025-12-06 07:46:39.507273911 +0000 UTC m=+0.051851481 container create e585873817ca04f7cfb9029a1d6a1875357bff55233f8c5eb62a71aaab5b8eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_faraday, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 07:46:39 compute-0 systemd[1]: Started libpod-conmon-e585873817ca04f7cfb9029a1d6a1875357bff55233f8c5eb62a71aaab5b8eee.scope.
Dec 06 07:46:39 compute-0 podman[353565]: 2025-12-06 07:46:39.482525913 +0000 UTC m=+0.027103503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:46:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df38abaeb7de04e1969d3e6772886f6b679ff7fe5b50ab939dcefd53bac6a23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df38abaeb7de04e1969d3e6772886f6b679ff7fe5b50ab939dcefd53bac6a23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df38abaeb7de04e1969d3e6772886f6b679ff7fe5b50ab939dcefd53bac6a23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df38abaeb7de04e1969d3e6772886f6b679ff7fe5b50ab939dcefd53bac6a23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:39 compute-0 podman[353565]: 2025-12-06 07:46:39.60918325 +0000 UTC m=+0.153760820 container init e585873817ca04f7cfb9029a1d6a1875357bff55233f8c5eb62a71aaab5b8eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_faraday, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:46:39 compute-0 podman[353565]: 2025-12-06 07:46:39.616431036 +0000 UTC m=+0.161008606 container start e585873817ca04f7cfb9029a1d6a1875357bff55233f8c5eb62a71aaab5b8eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_faraday, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:46:39 compute-0 podman[353565]: 2025-12-06 07:46:39.619646152 +0000 UTC m=+0.164223742 container attach e585873817ca04f7cfb9029a1d6a1875357bff55233f8c5eb62a71aaab5b8eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:46:39 compute-0 nova_compute[251992]: 2025-12-06 07:46:39.671 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:40 compute-0 nova_compute[251992]: 2025-12-06 07:46:40.150 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:40.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4058190283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:40 compute-0 youthful_faraday[353581]: {
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:     "0": [
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:         {
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             "devices": [
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "/dev/loop3"
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             ],
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             "lv_name": "ceph_lv0",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             "lv_size": "7511998464",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             "name": "ceph_lv0",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             "tags": {
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "ceph.cluster_name": "ceph",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "ceph.crush_device_class": "",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "ceph.encrypted": "0",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "ceph.osd_id": "0",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "ceph.type": "block",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:                 "ceph.vdo": "0"
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             },
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             "type": "block",
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:             "vg_name": "ceph_vg0"
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:         }
Dec 06 07:46:40 compute-0 youthful_faraday[353581]:     ]
Dec 06 07:46:40 compute-0 youthful_faraday[353581]: }
Dec 06 07:46:40 compute-0 systemd[1]: libpod-e585873817ca04f7cfb9029a1d6a1875357bff55233f8c5eb62a71aaab5b8eee.scope: Deactivated successfully.
Dec 06 07:46:40 compute-0 podman[353565]: 2025-12-06 07:46:40.485886394 +0000 UTC m=+1.030463964 container died e585873817ca04f7cfb9029a1d6a1875357bff55233f8c5eb62a71aaab5b8eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 07:46:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9df38abaeb7de04e1969d3e6772886f6b679ff7fe5b50ab939dcefd53bac6a23-merged.mount: Deactivated successfully.
Dec 06 07:46:40 compute-0 podman[353565]: 2025-12-06 07:46:40.539053389 +0000 UTC m=+1.083630959 container remove e585873817ca04f7cfb9029a1d6a1875357bff55233f8c5eb62a71aaab5b8eee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_faraday, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:46:40 compute-0 systemd[1]: libpod-conmon-e585873817ca04f7cfb9029a1d6a1875357bff55233f8c5eb62a71aaab5b8eee.scope: Deactivated successfully.
Dec 06 07:46:40 compute-0 sudo[353462]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:40 compute-0 sudo[353601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:40 compute-0 sudo[353601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:40 compute-0 sudo[353601]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:40 compute-0 sudo[353626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:46:40 compute-0 sudo[353626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:40 compute-0 sudo[353626]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:40 compute-0 sudo[353651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:40 compute-0 sudo[353651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:40 compute-0 sudo[353651]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:40 compute-0 sudo[353676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:46:40 compute-0 sudo[353676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:40 compute-0 nova_compute[251992]: 2025-12-06 07:46:40.979 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 787 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 11 MiB/s wr, 211 op/s
Dec 06 07:46:41 compute-0 podman[353739]: 2025-12-06 07:46:41.177622468 +0000 UTC m=+0.047423141 container create a739cdbf35d7b5d4c0786bad1cb1044d802b32b1679d2254fd42e78e5ba7c653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:46:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:41.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:41 compute-0 systemd[1]: Started libpod-conmon-a739cdbf35d7b5d4c0786bad1cb1044d802b32b1679d2254fd42e78e5ba7c653.scope.
Dec 06 07:46:41 compute-0 podman[353739]: 2025-12-06 07:46:41.157409423 +0000 UTC m=+0.027210126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:46:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:46:41 compute-0 podman[353739]: 2025-12-06 07:46:41.275238852 +0000 UTC m=+0.145039555 container init a739cdbf35d7b5d4c0786bad1cb1044d802b32b1679d2254fd42e78e5ba7c653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:46:41 compute-0 podman[353739]: 2025-12-06 07:46:41.283314 +0000 UTC m=+0.153114673 container start a739cdbf35d7b5d4c0786bad1cb1044d802b32b1679d2254fd42e78e5ba7c653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 07:46:41 compute-0 podman[353739]: 2025-12-06 07:46:41.285958631 +0000 UTC m=+0.155759304 container attach a739cdbf35d7b5d4c0786bad1cb1044d802b32b1679d2254fd42e78e5ba7c653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 07:46:41 compute-0 upbeat_feistel[353755]: 167 167
Dec 06 07:46:41 compute-0 systemd[1]: libpod-a739cdbf35d7b5d4c0786bad1cb1044d802b32b1679d2254fd42e78e5ba7c653.scope: Deactivated successfully.
Dec 06 07:46:41 compute-0 podman[353739]: 2025-12-06 07:46:41.291194403 +0000 UTC m=+0.160995076 container died a739cdbf35d7b5d4c0786bad1cb1044d802b32b1679d2254fd42e78e5ba7c653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:46:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-a51701498d3b13c81a0161ce94a9423928f4ea4a6799d82ad0d29a7df53dd36f-merged.mount: Deactivated successfully.
Dec 06 07:46:41 compute-0 podman[353739]: 2025-12-06 07:46:41.32483431 +0000 UTC m=+0.194634983 container remove a739cdbf35d7b5d4c0786bad1cb1044d802b32b1679d2254fd42e78e5ba7c653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:46:41 compute-0 systemd[1]: libpod-conmon-a739cdbf35d7b5d4c0786bad1cb1044d802b32b1679d2254fd42e78e5ba7c653.scope: Deactivated successfully.
Dec 06 07:46:41 compute-0 ceph-mon[74339]: pgmap v2756: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 787 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 11 MiB/s wr, 211 op/s
Dec 06 07:46:41 compute-0 podman[353779]: 2025-12-06 07:46:41.506729348 +0000 UTC m=+0.047826302 container create ec6583df08fd01ee28680cf7083091ccfe2220114c899382242ae6185bd557c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tesla, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 07:46:41 compute-0 systemd[1]: Started libpod-conmon-ec6583df08fd01ee28680cf7083091ccfe2220114c899382242ae6185bd557c1.scope.
Dec 06 07:46:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bb9f58f8d10d9e23a7bb80cb5ae467e402d34d6c35370a70eefb82b2f44e4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bb9f58f8d10d9e23a7bb80cb5ae467e402d34d6c35370a70eefb82b2f44e4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bb9f58f8d10d9e23a7bb80cb5ae467e402d34d6c35370a70eefb82b2f44e4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bb9f58f8d10d9e23a7bb80cb5ae467e402d34d6c35370a70eefb82b2f44e4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:46:41 compute-0 podman[353779]: 2025-12-06 07:46:41.487824948 +0000 UTC m=+0.028921922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:46:41 compute-0 podman[353779]: 2025-12-06 07:46:41.59651484 +0000 UTC m=+0.137611824 container init ec6583df08fd01ee28680cf7083091ccfe2220114c899382242ae6185bd557c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:46:41 compute-0 podman[353779]: 2025-12-06 07:46:41.602643696 +0000 UTC m=+0.143740640 container start ec6583df08fd01ee28680cf7083091ccfe2220114c899382242ae6185bd557c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tesla, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 07:46:41 compute-0 podman[353779]: 2025-12-06 07:46:41.608087033 +0000 UTC m=+0.149184007 container attach ec6583df08fd01ee28680cf7083091ccfe2220114c899382242ae6185bd557c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tesla, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Dec 06 07:46:41 compute-0 sudo[353800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:41 compute-0 sudo[353800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:41 compute-0 sudo[353800]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:41 compute-0 sudo[353825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:41 compute-0 sudo[353825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:41 compute-0 sudo[353825]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:42.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1367365160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3457157827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:42 compute-0 eager_tesla[353795]: {
Dec 06 07:46:42 compute-0 eager_tesla[353795]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:46:42 compute-0 eager_tesla[353795]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:46:42 compute-0 eager_tesla[353795]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:46:42 compute-0 eager_tesla[353795]:         "osd_id": 0,
Dec 06 07:46:42 compute-0 eager_tesla[353795]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:46:42 compute-0 eager_tesla[353795]:         "type": "bluestore"
Dec 06 07:46:42 compute-0 eager_tesla[353795]:     }
Dec 06 07:46:42 compute-0 eager_tesla[353795]: }
Dec 06 07:46:42 compute-0 systemd[1]: libpod-ec6583df08fd01ee28680cf7083091ccfe2220114c899382242ae6185bd557c1.scope: Deactivated successfully.
Dec 06 07:46:42 compute-0 conmon[353795]: conmon ec6583df08fd01ee2868 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec6583df08fd01ee28680cf7083091ccfe2220114c899382242ae6185bd557c1.scope/container/memory.events
Dec 06 07:46:42 compute-0 podman[353779]: 2025-12-06 07:46:42.52828517 +0000 UTC m=+1.069382134 container died ec6583df08fd01ee28680cf7083091ccfe2220114c899382242ae6185bd557c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:46:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-73bb9f58f8d10d9e23a7bb80cb5ae467e402d34d6c35370a70eefb82b2f44e4f-merged.mount: Deactivated successfully.
Dec 06 07:46:42 compute-0 podman[353779]: 2025-12-06 07:46:42.882571179 +0000 UTC m=+1.423668123 container remove ec6583df08fd01ee28680cf7083091ccfe2220114c899382242ae6185bd557c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_tesla, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:46:42 compute-0 systemd[1]: libpod-conmon-ec6583df08fd01ee28680cf7083091ccfe2220114c899382242ae6185bd557c1.scope: Deactivated successfully.
Dec 06 07:46:42 compute-0 sudo[353676]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:46:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:46:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:46:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:46:42 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4e993e97-fe24-499a-98b1-08e8dd356b45 does not exist
Dec 06 07:46:42 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b561834a-2d8a-4e9f-9d45-2d9be5361bd0 does not exist
Dec 06 07:46:42 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1e67fcd7-d2d4-4f72-926f-07670235697a does not exist
Dec 06 07:46:42 compute-0 sudo[353880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:46:42 compute-0 sudo[353880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:42 compute-0 sudo[353880]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:46:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:46:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:46:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:46:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:46:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:46:43 compute-0 sudo[353906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:46:43 compute-0 sudo[353906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:46:43 compute-0 sudo[353906]: pam_unix(sudo:session): session closed for user root
Dec 06 07:46:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 788 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.3 MiB/s wr, 139 op/s
Dec 06 07:46:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:43.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:46:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:46:44 compute-0 ceph-mon[74339]: pgmap v2757: 305 pgs: 305 active+clean; 788 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.3 MiB/s wr, 139 op/s
Dec 06 07:46:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:44 compute-0 nova_compute[251992]: 2025-12-06 07:46:44.675 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Dec 06 07:46:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Dec 06 07:46:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.844916) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204844991, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2248, "num_deletes": 257, "total_data_size": 3785491, "memory_usage": 3850064, "flush_reason": "Manual Compaction"}
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204867414, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3714441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53122, "largest_seqno": 55369, "table_properties": {"data_size": 3704248, "index_size": 6495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21860, "raw_average_key_size": 20, "raw_value_size": 3683631, "raw_average_value_size": 3528, "num_data_blocks": 281, "num_entries": 1044, "num_filter_entries": 1044, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007010, "oldest_key_time": 1765007010, "file_creation_time": 1765007204, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 22539 microseconds, and 8858 cpu microseconds.
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.867453) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3714441 bytes OK
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.867477) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.869305) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.869319) EVENT_LOG_v1 {"time_micros": 1765007204869315, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.869335) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3776170, prev total WAL file size 3776170, number of live WAL files 2.
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.870536) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3627KB)], [116(10MB)]
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204870600, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 14486758, "oldest_snapshot_seqno": -1}
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 9105 keys, 12445545 bytes, temperature: kUnknown
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204958706, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 12445545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12385705, "index_size": 35984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 237196, "raw_average_key_size": 26, "raw_value_size": 12224787, "raw_average_value_size": 1342, "num_data_blocks": 1400, "num_entries": 9105, "num_filter_entries": 9105, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765007204, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.958977) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 12445545 bytes
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.961252) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.3 rd, 141.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.3 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 9633, records dropped: 528 output_compression: NoCompression
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.961269) EVENT_LOG_v1 {"time_micros": 1765007204961261, "job": 70, "event": "compaction_finished", "compaction_time_micros": 88196, "compaction_time_cpu_micros": 33123, "output_level": 6, "num_output_files": 1, "total_output_size": 12445545, "num_input_records": 9633, "num_output_records": 9105, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204961977, "job": 70, "event": "table_file_deletion", "file_number": 118}
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007204964239, "job": 70, "event": "table_file_deletion", "file_number": 116}
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.870463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.964363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.964371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.964374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.964377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:46:44 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:46:44.964380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:46:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 842 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 MiB/s rd, 7.1 MiB/s wr, 264 op/s
Dec 06 07:46:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:45.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:45 compute-0 ceph-mon[74339]: osdmap e351: 3 total, 3 up, 3 in
Dec 06 07:46:45 compute-0 ceph-mon[74339]: pgmap v2759: 305 pgs: 305 active+clean; 842 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 MiB/s rd, 7.1 MiB/s wr, 264 op/s
Dec 06 07:46:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Dec 06 07:46:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Dec 06 07:46:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Dec 06 07:46:45 compute-0 nova_compute[251992]: 2025-12-06 07:46:45.981 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:46.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:46 compute-0 ceph-mon[74339]: osdmap e352: 3 total, 3 up, 3 in
Dec 06 07:46:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1937280064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:46:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 835 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.1 MiB/s rd, 6.2 MiB/s wr, 274 op/s
Dec 06 07:46:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:47.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:47 compute-0 nova_compute[251992]: 2025-12-06 07:46:47.340 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:47 compute-0 ceph-mon[74339]: pgmap v2761: 305 pgs: 305 active+clean; 835 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.1 MiB/s rd, 6.2 MiB/s wr, 274 op/s
Dec 06 07:46:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:48.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 806 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.9 MiB/s rd, 5.9 MiB/s wr, 303 op/s
Dec 06 07:46:49 compute-0 ceph-mon[74339]: pgmap v2762: 305 pgs: 305 active+clean; 806 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.9 MiB/s rd, 5.9 MiB/s wr, 303 op/s
Dec 06 07:46:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:49.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:49 compute-0 nova_compute[251992]: 2025-12-06 07:46:49.677 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:50.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:50 compute-0 nova_compute[251992]: 2025-12-06 07:46:50.983 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 787 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 5.8 MiB/s wr, 349 op/s
Dec 06 07:46:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:51.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:51 compute-0 ceph-mon[74339]: pgmap v2763: 305 pgs: 305 active+clean; 787 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 5.8 MiB/s wr, 349 op/s
Dec 06 07:46:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:46:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:52.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:46:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 787 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 4.0 MiB/s wr, 294 op/s
Dec 06 07:46:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:53.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:53 compute-0 ceph-mon[74339]: pgmap v2764: 305 pgs: 305 active+clean; 787 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 4.0 MiB/s wr, 294 op/s
Dec 06 07:46:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:54.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:54 compute-0 podman[353937]: 2025-12-06 07:46:54.429408712 +0000 UTC m=+0.085743234 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:46:54 compute-0 nova_compute[251992]: 2025-12-06 07:46:54.729 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:46:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Dec 06 07:46:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Dec 06 07:46:54 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Dec 06 07:46:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 789 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.0 MiB/s wr, 181 op/s
Dec 06 07:46:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:46:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:55.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:46:55 compute-0 nova_compute[251992]: 2025-12-06 07:46:55.616 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:55 compute-0 ceph-mon[74339]: osdmap e353: 3 total, 3 up, 3 in
Dec 06 07:46:55 compute-0 ceph-mon[74339]: pgmap v2766: 305 pgs: 305 active+clean; 789 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.0 MiB/s wr, 181 op/s
Dec 06 07:46:55 compute-0 nova_compute[251992]: 2025-12-06 07:46:55.985 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:46:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:56.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 796 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 139 op/s
Dec 06 07:46:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:57.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:46:58.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:58 compute-0 ceph-mon[74339]: pgmap v2767: 305 pgs: 305 active+clean; 796 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 139 op/s
Dec 06 07:46:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 809 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 139 op/s
Dec 06 07:46:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:46:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:46:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:46:59.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:46:59 compute-0 nova_compute[251992]: 2025-12-06 07:46:59.732 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:00.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:00 compute-0 nova_compute[251992]: 2025-12-06 07:47:00.986 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 813 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.5 MiB/s wr, 99 op/s
Dec 06 07:47:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:01.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:01 compute-0 sudo[353968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:02 compute-0 sudo[353968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:02 compute-0 sudo[353968]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:02 compute-0 sudo[353993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:02 compute-0 sudo[353993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:02 compute-0 sudo[353993]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:02.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 813 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 550 KiB/s rd, 2.5 MiB/s wr, 94 op/s
Dec 06 07:47:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:03.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:03 compute-0 podman[354019]: 2025-12-06 07:47:03.400922547 +0000 UTC m=+0.054120391 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:47:03 compute-0 podman[354020]: 2025-12-06 07:47:03.40475581 +0000 UTC m=+0.056511325 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1548366512' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1075072744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:03 compute-0 ceph-mon[74339]: pgmap v2768: 305 pgs: 305 active+clean; 809 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 139 op/s
Dec 06 07:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3035636638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:03.855 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:03.856 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:03.857 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:04.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:04 compute-0 ceph-mon[74339]: pgmap v2769: 305 pgs: 305 active+clean; 813 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.5 MiB/s wr, 99 op/s
Dec 06 07:47:04 compute-0 ceph-mon[74339]: pgmap v2770: 305 pgs: 305 active+clean; 813 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 550 KiB/s rd, 2.5 MiB/s wr, 94 op/s
Dec 06 07:47:04 compute-0 nova_compute[251992]: 2025-12-06 07:47:04.734 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 817 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 2.5 MiB/s wr, 114 op/s
Dec 06 07:47:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:05.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:05 compute-0 ceph-mon[74339]: pgmap v2771: 305 pgs: 305 active+clean; 817 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 2.5 MiB/s wr, 114 op/s
Dec 06 07:47:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2972039573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:05 compute-0 sshd-session[354059]: Connection closed by 80.94.92.182 port 55520
Dec 06 07:47:05 compute-0 nova_compute[251992]: 2025-12-06 07:47:05.987 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:06.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Dec 06 07:47:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Dec 06 07:47:06 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Dec 06 07:47:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 825 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 1.5 MiB/s wr, 92 op/s
Dec 06 07:47:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:07.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Dec 06 07:47:07 compute-0 ceph-mon[74339]: osdmap e354: 3 total, 3 up, 3 in
Dec 06 07:47:07 compute-0 ceph-mon[74339]: pgmap v2773: 305 pgs: 305 active+clean; 825 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 1.5 MiB/s wr, 92 op/s
Dec 06 07:47:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Dec 06 07:47:07 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Dec 06 07:47:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:08.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Dec 06 07:47:08 compute-0 ceph-mon[74339]: osdmap e355: 3 total, 3 up, 3 in
Dec 06 07:47:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Dec 06 07:47:08 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Dec 06 07:47:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 863 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.5 MiB/s wr, 167 op/s
Dec 06 07:47:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:09.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:09 compute-0 nova_compute[251992]: 2025-12-06 07:47:09.738 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:10 compute-0 ceph-mon[74339]: osdmap e356: 3 total, 3 up, 3 in
Dec 06 07:47:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4247177994' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:47:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4247177994' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:47:10 compute-0 ceph-mon[74339]: pgmap v2776: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 863 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.5 MiB/s wr, 167 op/s
Dec 06 07:47:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:10.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:11 compute-0 nova_compute[251992]: 2025-12-06 07:47:11.019 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 911 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.5 MiB/s rd, 7.9 MiB/s wr, 314 op/s
Dec 06 07:47:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:11.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:12 compute-0 ceph-mon[74339]: pgmap v2777: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 911 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.5 MiB/s rd, 7.9 MiB/s wr, 314 op/s
Dec 06 07:47:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:12.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:47:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:47:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:47:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:47:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:47:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:47:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 404 op/s
Dec 06 07:47:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:13.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:13 compute-0 ceph-mon[74339]: pgmap v2778: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 MiB/s rd, 11 MiB/s wr, 404 op/s
Dec 06 07:47:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 07:47:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:14.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 07:47:14 compute-0 nova_compute[251992]: 2025-12-06 07:47:14.771 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 8.5 MiB/s wr, 426 op/s
Dec 06 07:47:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:15.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:16 compute-0 nova_compute[251992]: 2025-12-06 07:47:16.021 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 07:47:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:16.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 07:47:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 MiB/s rd, 7.4 MiB/s wr, 387 op/s
Dec 06 07:47:17 compute-0 nova_compute[251992]: 2025-12-06 07:47:17.267 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:17 compute-0 nova_compute[251992]: 2025-12-06 07:47:17.268 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:17.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:17 compute-0 nova_compute[251992]: 2025-12-06 07:47:17.285 251996 DEBUG nova.compute.manager [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:47:17 compute-0 nova_compute[251992]: 2025-12-06 07:47:17.376 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:17 compute-0 nova_compute[251992]: 2025-12-06 07:47:17.376 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:17 compute-0 nova_compute[251992]: 2025-12-06 07:47:17.384 251996 DEBUG nova.virt.hardware [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:47:17 compute-0 nova_compute[251992]: 2025-12-06 07:47:17.384 251996 INFO nova.compute.claims [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:47:17 compute-0 nova_compute[251992]: 2025-12-06 07:47:17.592 251996 DEBUG oslo_concurrency.processutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:47:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:47:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827093137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.028 251996 DEBUG oslo_concurrency.processutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.036 251996 DEBUG nova.compute.provider_tree [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.066 251996 DEBUG nova.scheduler.client.report [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.121 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.123 251996 DEBUG nova.compute.manager [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:47:18 compute-0 ceph-mon[74339]: pgmap v2779: 305 pgs: 305 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 8.5 MiB/s wr, 426 op/s
Dec 06 07:47:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:18.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.417 251996 DEBUG nova.compute.manager [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.417 251996 DEBUG nova.network.neutron [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.438 251996 INFO nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.461 251996 DEBUG nova.compute.manager [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:47:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:47:18
Dec 06 07:47:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:47:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:47:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'vms']
Dec 06 07:47:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.552 251996 DEBUG nova.compute.manager [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.554 251996 DEBUG nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.554 251996 INFO nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Creating image(s)
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.587 251996 DEBUG nova.storage.rbd_utils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.617 251996 DEBUG nova.storage.rbd_utils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.645 251996 DEBUG nova.storage.rbd_utils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.650 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "429af556bb67ecadb12994528488046d48f23333" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.651 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "429af556bb67ecadb12994528488046d48f23333" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.656 251996 DEBUG nova.policy [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '89d63d29c7534f70817e13d23cada716', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f093eaeb91c042dd8c85f5cd256c4394', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.686 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.686 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.687 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.687 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.687 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:47:18 compute-0 nova_compute[251992]: 2025-12-06 07:47:18.966 251996 DEBUG nova.virt.libvirt.imagebackend [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/85f1f69b-a9ee-46e0-a30b-ae7faf09ffef/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/85f1f69b-a9ee-46e0-a30b-ae7faf09ffef/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.039 251996 DEBUG nova.virt.libvirt.imagebackend [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/85f1f69b-a9ee-46e0-a30b-ae7faf09ffef/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.040 251996 DEBUG nova.storage.rbd_utils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] cloning images/85f1f69b-a9ee-46e0-a30b-ae7faf09ffef@snap to None/f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:47:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.1 MiB/s wr, 309 op/s
Dec 06 07:47:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:47:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2144854194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.131 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.219 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.219 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.223 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.223 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-0000009d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:47:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:19.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:19.372 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:47:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:19.373 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.414 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.434 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.435 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3911MB free_disk=20.6937255859375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.435 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.436 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.537 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.537 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.538 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance f09d4c2b-5734-457e-93cf-a2b4f61e3afd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.539 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.540 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.613 251996 DEBUG nova.network.neutron [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Successfully created port: ce9ef951-3233-438a-91ab-5761a277635d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.657 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:47:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/183954237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:19 compute-0 ceph-mon[74339]: pgmap v2780: 305 pgs: 305 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 MiB/s rd, 7.4 MiB/s wr, 387 op/s
Dec 06 07:47:19 compute-0 ceph-mon[74339]: from='client.39034 192.168.122.102:0/183954237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3827093137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4117359800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3221444380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:19 compute-0 ceph-mon[74339]: pgmap v2781: 305 pgs: 305 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.1 MiB/s wr, 309 op/s
Dec 06 07:47:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2144854194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:19 compute-0 nova_compute[251992]: 2025-12-06 07:47:19.774 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:47:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/153126333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.115 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.121 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:47:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.142 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.164 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.164 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.377 251996 DEBUG nova.network.neutron [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Successfully updated port: ce9ef951-3233-438a-91ab-5761a277635d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:47:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:20.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.591 251996 DEBUG nova.compute.manager [req-f4960c2c-e2d6-42c7-8304-5eb508bc5aec req-8233d6a9-f4b8-4b7e-a301-f7651ca549e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Received event network-changed-ce9ef951-3233-438a-91ab-5761a277635d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.592 251996 DEBUG nova.compute.manager [req-f4960c2c-e2d6-42c7-8304-5eb508bc5aec req-8233d6a9-f4b8-4b7e-a301-f7651ca549e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Refreshing instance network info cache due to event network-changed-ce9ef951-3233-438a-91ab-5761a277635d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.592 251996 DEBUG oslo_concurrency.lockutils [req-f4960c2c-e2d6-42c7-8304-5eb508bc5aec req-8233d6a9-f4b8-4b7e-a301-f7651ca549e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f09d4c2b-5734-457e-93cf-a2b4f61e3afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.592 251996 DEBUG oslo_concurrency.lockutils [req-f4960c2c-e2d6-42c7-8304-5eb508bc5aec req-8233d6a9-f4b8-4b7e-a301-f7651ca549e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f09d4c2b-5734-457e-93cf-a2b4f61e3afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.593 251996 DEBUG nova.network.neutron [req-f4960c2c-e2d6-42c7-8304-5eb508bc5aec req-8233d6a9-f4b8-4b7e-a301-f7651ca549e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Refreshing network info cache for port ce9ef951-3233-438a-91ab-5761a277635d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.608 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "refresh_cache-f09d4c2b-5734-457e-93cf-a2b4f61e3afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:47:20 compute-0 nova_compute[251992]: 2025-12-06 07:47:20.916 251996 DEBUG nova.network.neutron [req-f4960c2c-e2d6-42c7-8304-5eb508bc5aec req-8233d6a9-f4b8-4b7e-a301-f7651ca549e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:47:21 compute-0 nova_compute[251992]: 2025-12-06 07:47:21.022 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.0 MiB/s rd, 3.5 MiB/s wr, 278 op/s
Dec 06 07:47:21 compute-0 nova_compute[251992]: 2025-12-06 07:47:21.182 251996 DEBUG nova.network.neutron [req-f4960c2c-e2d6-42c7-8304-5eb508bc5aec req-8233d6a9-f4b8-4b7e-a301-f7651ca549e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:47:21 compute-0 nova_compute[251992]: 2025-12-06 07:47:21.198 251996 DEBUG oslo_concurrency.lockutils [req-f4960c2c-e2d6-42c7-8304-5eb508bc5aec req-8233d6a9-f4b8-4b7e-a301-f7651ca549e9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f09d4c2b-5734-457e-93cf-a2b4f61e3afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:47:21 compute-0 nova_compute[251992]: 2025-12-06 07:47:21.199 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquired lock "refresh_cache-f09d4c2b-5734-457e-93cf-a2b4f61e3afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:47:21 compute-0 nova_compute[251992]: 2025-12-06 07:47:21.200 251996 DEBUG nova.network.neutron [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:47:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:21.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:21 compute-0 nova_compute[251992]: 2025-12-06 07:47:21.365 251996 DEBUG nova.network.neutron [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.121 251996 DEBUG nova.network.neutron [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Updating instance_info_cache with network_info: [{"id": "ce9ef951-3233-438a-91ab-5761a277635d", "address": "fa:16:3e:2f:4f:5d", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce9ef951-32", "ovs_interfaceid": "ce9ef951-3233-438a-91ab-5761a277635d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.144 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Releasing lock "refresh_cache-f09d4c2b-5734-457e-93cf-a2b4f61e3afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.144 251996 DEBUG nova.compute.manager [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Instance network_info: |[{"id": "ce9ef951-3233-438a-91ab-5761a277635d", "address": "fa:16:3e:2f:4f:5d", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce9ef951-32", "ovs_interfaceid": "ce9ef951-3233-438a-91ab-5761a277635d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:47:22 compute-0 sudo[354258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:22 compute-0 sudo[354258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:22 compute-0 sudo[354258]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Dec 06 07:47:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/153126333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Dec 06 07:47:22 compute-0 sudo[354283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:22 compute-0 sudo[354283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:22 compute-0 sudo[354283]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.274 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "429af556bb67ecadb12994528488046d48f23333" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:22.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.443 251996 DEBUG nova.objects.instance [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lazy-loading 'migration_context' on Instance uuid f09d4c2b-5734-457e-93cf-a2b4f61e3afd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.463 251996 DEBUG nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.464 251996 DEBUG nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Ensure instance console log exists: /var/lib/nova/instances/f09d4c2b-5734-457e-93cf-a2b4f61e3afd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.464 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.465 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.465 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.467 251996 DEBUG nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Start _get_guest_xml network_info=[{"id": "ce9ef951-3233-438a-91ab-5761a277635d", "address": "fa:16:3e:2f:4f:5d", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce9ef951-32", "ovs_interfaceid": "ce9ef951-3233-438a-91ab-5761a277635d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:47:04Z,direct_url=<?>,disk_format='raw',id=85f1f69b-a9ee-46e0-a30b-ae7faf09ffef,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1473063084',owner='f093eaeb91c042dd8c85f5cd256c4394',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:47:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '85f1f69b-a9ee-46e0-a30b-ae7faf09ffef'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.472 251996 WARNING nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.477 251996 DEBUG nova.virt.libvirt.host [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.477 251996 DEBUG nova.virt.libvirt.host [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.482 251996 DEBUG nova.virt.libvirt.host [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.483 251996 DEBUG nova.virt.libvirt.host [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.484 251996 DEBUG nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.484 251996 DEBUG nova.virt.hardware [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:47:04Z,direct_url=<?>,disk_format='raw',id=85f1f69b-a9ee-46e0-a30b-ae7faf09ffef,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1473063084',owner='f093eaeb91c042dd8c85f5cd256c4394',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:47:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.484 251996 DEBUG nova.virt.hardware [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.485 251996 DEBUG nova.virt.hardware [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.485 251996 DEBUG nova.virt.hardware [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.485 251996 DEBUG nova.virt.hardware [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.485 251996 DEBUG nova.virt.hardware [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.485 251996 DEBUG nova.virt.hardware [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.486 251996 DEBUG nova.virt.hardware [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.486 251996 DEBUG nova.virt.hardware [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.486 251996 DEBUG nova.virt.hardware [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.486 251996 DEBUG nova.virt.hardware [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:47:22 compute-0 nova_compute[251992]: 2025-12-06 07:47:22.489 251996 DEBUG oslo_concurrency.processutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:47:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:47:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915905730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.001 251996 DEBUG oslo_concurrency.processutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.028 251996 DEBUG nova.storage.rbd_utils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.033 251996 DEBUG oslo_concurrency.processutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:47:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 14 KiB/s wr, 152 op/s
Dec 06 07:47:23 compute-0 ceph-mon[74339]: pgmap v2782: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.0 MiB/s rd, 3.5 MiB/s wr, 278 op/s
Dec 06 07:47:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/889746553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:23 compute-0 ceph-mon[74339]: osdmap e357: 3 total, 3 up, 3 in
Dec 06 07:47:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/991248695' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:47:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/991248695' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:47:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3915905730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:23 compute-0 ceph-mon[74339]: pgmap v2784: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 14 KiB/s wr, 152 op/s
Dec 06 07:47:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:23.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:47:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022723988' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.491 251996 DEBUG oslo_concurrency.processutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.493 251996 DEBUG nova.virt.libvirt.vif [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:47:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-947292371',display_name='tempest-TestSnapshotPattern-server-947292371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-947292371',id=162,image_ref='85f1f69b-a9ee-46e0-a30b-ae7faf09ffef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxDMv0Vhbgr4L65QJ5+X+b7zbDfxyD9+qYaGNf4b7W3f9yi+P//RoKkMpyvVNIPGPzRh0H8TZRtNdilAq90sFwxv4/Dk5avudO2cObIlP9Igfm6SfNSZd6YTMkk3vYjjg==',key_name='tempest-TestSnapshotPattern-467820612',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f093eaeb91c042dd8c85f5cd256c4394',ramdisk_id='',reservation_id='r-xcchimpq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='29195b63-c365-4ace-a4f5-9c2dba89c276',image_min_disk='1',image_min_ram='0',image_owner_id='f093eaeb91c042dd8c85f5cd256c4394',image_owner_project_name='tempest-TestSnapshotPattern-563672408',image_owner_user_name='tempest-TestSnapshotPattern-563672408-project-member',image_user_id='89d63d29c7534f70817e13d23cada716',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-563672408',owner_user_name='tempest-TestSnapshotPattern-563672408-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:47:18Z,user_data=None,user_id='89d63d29c7534f70817e13d23cada716',uuid=f09d4c2b-573
4-457e-93cf-a2b4f61e3afd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce9ef951-3233-438a-91ab-5761a277635d", "address": "fa:16:3e:2f:4f:5d", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce9ef951-32", "ovs_interfaceid": "ce9ef951-3233-438a-91ab-5761a277635d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.494 251996 DEBUG nova.network.os_vif_util [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Converting VIF {"id": "ce9ef951-3233-438a-91ab-5761a277635d", "address": "fa:16:3e:2f:4f:5d", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce9ef951-32", "ovs_interfaceid": "ce9ef951-3233-438a-91ab-5761a277635d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.495 251996 DEBUG nova.network.os_vif_util [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:4f:5d,bridge_name='br-int',has_traffic_filtering=True,id=ce9ef951-3233-438a-91ab-5761a277635d,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce9ef951-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.496 251996 DEBUG nova.objects.instance [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lazy-loading 'pci_devices' on Instance uuid f09d4c2b-5734-457e-93cf-a2b4f61e3afd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:47:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.908 251996 DEBUG nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:47:23 compute-0 nova_compute[251992]:   <uuid>f09d4c2b-5734-457e-93cf-a2b4f61e3afd</uuid>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   <name>instance-000000a2</name>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <nova:name>tempest-TestSnapshotPattern-server-947292371</nova:name>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:47:22</nova:creationTime>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <nova:user uuid="89d63d29c7534f70817e13d23cada716">tempest-TestSnapshotPattern-563672408-project-member</nova:user>
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <nova:project uuid="f093eaeb91c042dd8c85f5cd256c4394">tempest-TestSnapshotPattern-563672408</nova:project>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="85f1f69b-a9ee-46e0-a30b-ae7faf09ffef"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <nova:port uuid="ce9ef951-3233-438a-91ab-5761a277635d">
Dec 06 07:47:23 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <system>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <entry name="serial">f09d4c2b-5734-457e-93cf-a2b4f61e3afd</entry>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <entry name="uuid">f09d4c2b-5734-457e-93cf-a2b4f61e3afd</entry>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     </system>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   <os>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   </os>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   <features>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   </features>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk">
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       </source>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk.config">
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       </source>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:47:23 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:2f:4f:5d"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <target dev="tapce9ef951-32"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/f09d4c2b-5734-457e-93cf-a2b4f61e3afd/console.log" append="off"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <video>
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     </video>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <input type="keyboard" bus="usb"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:47:23 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:47:23 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:47:23 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:47:23 compute-0 nova_compute[251992]: </domain>
Dec 06 07:47:23 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.909 251996 DEBUG nova.compute.manager [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Preparing to wait for external event network-vif-plugged-ce9ef951-3233-438a-91ab-5761a277635d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.909 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.910 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.910 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.910 251996 DEBUG nova.virt.libvirt.vif [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:47:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-947292371',display_name='tempest-TestSnapshotPattern-server-947292371',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-947292371',id=162,image_ref='85f1f69b-a9ee-46e0-a30b-ae7faf09ffef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxDMv0Vhbgr4L65QJ5+X+b7zbDfxyD9+qYaGNf4b7W3f9yi+P//RoKkMpyvVNIPGPzRh0H8TZRtNdilAq90sFwxv4/Dk5avudO2cObIlP9Igfm6SfNSZd6YTMkk3vYjjg==',key_name='tempest-TestSnapshotPattern-467820612',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f093eaeb91c042dd8c85f5cd256c4394',ramdisk_id='',reservation_id='r-xcchimpq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='29195b63-c365-4ace-a4f5-9c2dba89c276',image_min_disk='1',image_min_ram='0',image_owner_id='f093eaeb91c042dd8c85f5cd256c4394',image_owner_project_name='tempest-TestSnapshotPattern-563672408',image_owner_user_name='tempest-TestSnapshotPattern-563672408-project-member',image_user_id='89d63d29c7534f70817e13d23cada716',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-563672408',owner_user_name='tempest-TestSnapshotPattern-563672408-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:47:18Z,user_data=None,user_id='89d63d29c7534f70817e13d23cada716',uuid=f0
9d4c2b-5734-457e-93cf-a2b4f61e3afd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ce9ef951-3233-438a-91ab-5761a277635d", "address": "fa:16:3e:2f:4f:5d", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce9ef951-32", "ovs_interfaceid": "ce9ef951-3233-438a-91ab-5761a277635d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.911 251996 DEBUG nova.network.os_vif_util [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Converting VIF {"id": "ce9ef951-3233-438a-91ab-5761a277635d", "address": "fa:16:3e:2f:4f:5d", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce9ef951-32", "ovs_interfaceid": "ce9ef951-3233-438a-91ab-5761a277635d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.911 251996 DEBUG nova.network.os_vif_util [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:4f:5d,bridge_name='br-int',has_traffic_filtering=True,id=ce9ef951-3233-438a-91ab-5761a277635d,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce9ef951-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.912 251996 DEBUG os_vif [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:4f:5d,bridge_name='br-int',has_traffic_filtering=True,id=ce9ef951-3233-438a-91ab-5761a277635d,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce9ef951-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.912 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.913 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.913 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.919 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.919 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce9ef951-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.919 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce9ef951-32, col_values=(('external_ids', {'iface-id': 'ce9ef951-3233-438a-91ab-5761a277635d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:4f:5d', 'vm-uuid': 'f09d4c2b-5734-457e-93cf-a2b4f61e3afd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:47:23 compute-0 NetworkManager[48965]: <info>  [1765007243.9224] manager: (tapce9ef951-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.923 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.927 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:23 compute-0 nova_compute[251992]: 2025-12-06 07:47:23.928 251996 INFO os_vif [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:4f:5d,bridge_name='br-int',has_traffic_filtering=True,id=ce9ef951-3233-438a-91ab-5761a277635d,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce9ef951-32')
Dec 06 07:47:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:24.375 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:47:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:24.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1022723988' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:47:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 139 op/s
Dec 06 07:47:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:47:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:47:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:47:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:47:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:25 compute-0 nova_compute[251992]: 2025-12-06 07:47:25.245 251996 DEBUG nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:47:25 compute-0 nova_compute[251992]: 2025-12-06 07:47:25.246 251996 DEBUG nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:47:25 compute-0 nova_compute[251992]: 2025-12-06 07:47:25.246 251996 DEBUG nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] No VIF found with MAC fa:16:3e:2f:4f:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:47:25 compute-0 nova_compute[251992]: 2025-12-06 07:47:25.247 251996 INFO nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Using config drive
Dec 06 07:47:25 compute-0 nova_compute[251992]: 2025-12-06 07:47:25.275 251996 DEBUG nova.storage.rbd_utils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:47:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:25.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:25 compute-0 podman[354446]: 2025-12-06 07:47:25.424799895 +0000 UTC m=+0.082781405 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 06 07:47:25 compute-0 nova_compute[251992]: 2025-12-06 07:47:25.961 251996 INFO nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Creating config drive at /var/lib/nova/instances/f09d4c2b-5734-457e-93cf-a2b4f61e3afd/disk.config
Dec 06 07:47:25 compute-0 nova_compute[251992]: 2025-12-06 07:47:25.970 251996 DEBUG oslo_concurrency.processutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f09d4c2b-5734-457e-93cf-a2b4f61e3afd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp48p3lw9e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:47:26 compute-0 nova_compute[251992]: 2025-12-06 07:47:26.024 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:26 compute-0 nova_compute[251992]: 2025-12-06 07:47:26.118 251996 DEBUG oslo_concurrency.processutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f09d4c2b-5734-457e-93cf-a2b4f61e3afd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp48p3lw9e" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014035382351572678 of space, bias 1.0, pg target 4.210614705471803 quantized to 32 (current 32)
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021748644844593336 of space, bias 1.0, pg target 0.6437598873999627 quantized to 32 (current 32)
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.009193646182300392 of space, bias 1.0, pg target 2.721319269960916 quantized to 32 (current 32)
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017099385817978784 quantized to 16 (current 16)
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002137423227247348 quantized to 32 (current 32)
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018168097431602458 quantized to 32 (current 32)
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:47:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004274846454494696 quantized to 32 (current 32)
Dec 06 07:47:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:26.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:26 compute-0 nova_compute[251992]: 2025-12-06 07:47:26.670 251996 DEBUG nova.storage.rbd_utils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] rbd image f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:47:26 compute-0 nova_compute[251992]: 2025-12-06 07:47:26.674 251996 DEBUG oslo_concurrency.processutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f09d4c2b-5734-457e-93cf-a2b4f61e3afd/disk.config f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:47:26 compute-0 nova_compute[251992]: 2025-12-06 07:47:26.704 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:26 compute-0 nova_compute[251992]: 2025-12-06 07:47:26.736 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2017072707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:47:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2017072707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:47:26 compute-0 ceph-mon[74339]: pgmap v2785: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 139 op/s
Dec 06 07:47:26 compute-0 nova_compute[251992]: 2025-12-06 07:47:26.908 251996 DEBUG oslo_concurrency.processutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f09d4c2b-5734-457e-93cf-a2b4f61e3afd/disk.config f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:47:26 compute-0 nova_compute[251992]: 2025-12-06 07:47:26.909 251996 INFO nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Deleting local config drive /var/lib/nova/instances/f09d4c2b-5734-457e-93cf-a2b4f61e3afd/disk.config because it was imported into RBD.
Dec 06 07:47:26 compute-0 kernel: tapce9ef951-32: entered promiscuous mode
Dec 06 07:47:26 compute-0 NetworkManager[48965]: <info>  [1765007246.9712] manager: (tapce9ef951-32): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Dec 06 07:47:26 compute-0 ovn_controller[147168]: 2025-12-06T07:47:26Z|00594|binding|INFO|Claiming lport ce9ef951-3233-438a-91ab-5761a277635d for this chassis.
Dec 06 07:47:26 compute-0 ovn_controller[147168]: 2025-12-06T07:47:26Z|00595|binding|INFO|ce9ef951-3233-438a-91ab-5761a277635d: Claiming fa:16:3e:2f:4f:5d 10.100.0.14
Dec 06 07:47:26 compute-0 nova_compute[251992]: 2025-12-06 07:47:26.972 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:26.985 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:4f:5d 10.100.0.14'], port_security=['fa:16:3e:2f:4f:5d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f09d4c2b-5734-457e-93cf-a2b4f61e3afd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9edf259b-6a5e-4e11-938d-d631a412648e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f093eaeb91c042dd8c85f5cd256c4394', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aaa0df08-ced0-442a-9685-6c089d405f5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90bdd78f-ae71-4d01-8170-80b57acff7fd, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=ce9ef951-3233-438a-91ab-5761a277635d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:47:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:26.987 158118 INFO neutron.agent.ovn.metadata.agent [-] Port ce9ef951-3233-438a-91ab-5761a277635d in datapath 9edf259b-6a5e-4e11-938d-d631a412648e bound to our chassis
Dec 06 07:47:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:26.989 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9edf259b-6a5e-4e11-938d-d631a412648e
Dec 06 07:47:26 compute-0 ovn_controller[147168]: 2025-12-06T07:47:26Z|00596|binding|INFO|Setting lport ce9ef951-3233-438a-91ab-5761a277635d ovn-installed in OVS
Dec 06 07:47:26 compute-0 ovn_controller[147168]: 2025-12-06T07:47:26Z|00597|binding|INFO|Setting lport ce9ef951-3233-438a-91ab-5761a277635d up in Southbound
Dec 06 07:47:26 compute-0 nova_compute[251992]: 2025-12-06 07:47:26.994 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:26 compute-0 nova_compute[251992]: 2025-12-06 07:47:26.996 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:27 compute-0 systemd-udevd[354525]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.004 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3d82887f-c9dd-4c76-ac01-b24b20ff35c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.006 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9edf259b-61 in ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.008 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9edf259b-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.008 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c519047f-e939-4160-8514-7695e1226b25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.010 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3b6a45-1cec-493a-abd8-c53e9427ebf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 systemd-machined[212986]: New machine qemu-76-instance-000000a2.
Dec 06 07:47:27 compute-0 NetworkManager[48965]: <info>  [1765007247.0170] device (tapce9ef951-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:47:27 compute-0 NetworkManager[48965]: <info>  [1765007247.0184] device (tapce9ef951-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:47:27 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-000000a2.
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.025 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3f5707-bcfe-4b55-88ab-92b99dffc543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.042 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7d068837-9718-422e-9765-764b3e6ec122]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.076 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3db0f8-7780-48a6-9a7b-88fce8130b4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 35 KiB/s wr, 182 op/s
Dec 06 07:47:27 compute-0 systemd-udevd[354530]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.082 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c243670b-8ad3-48ea-ae5d-7f46d9800e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 NetworkManager[48965]: <info>  [1765007247.0849] manager: (tap9edf259b-60): new Veth device (/org/freedesktop/NetworkManager/Devices/273)
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.117 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[4345a5e6-d554-466f-ba6f-a3a2d8d8dc6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.121 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e26199fe-7a5a-48f6-8d0a-9e0fa515f65a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 NetworkManager[48965]: <info>  [1765007247.1434] device (tap9edf259b-60): carrier: link connected
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.148 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a4beb9fc-531e-440f-ba00-ac4950c2e7ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.166 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9c905457-192f-420b-a587-b586fa93961b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9edf259b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:ac:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751973, 'reachable_time': 19974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354559, 'error': None, 'target': 'ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.185 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d896a619-1d88-4399-a5f1-c06ee8790faf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:ac24'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751973, 'tstamp': 751973}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354560, 'error': None, 'target': 'ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.203 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b626f01b-37c7-4e43-b2a7-cd8dbc2836d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9edf259b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:ac:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 181], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751973, 'reachable_time': 19974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354561, 'error': None, 'target': 'ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.254 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b16a41-2a02-4cc8-97b3-386d869cef3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:27.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.324 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[249e6bf5-ee32-4655-ba85-ab170a3ba5b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.326 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9edf259b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.327 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.327 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9edf259b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:47:27 compute-0 kernel: tap9edf259b-60: entered promiscuous mode
Dec 06 07:47:27 compute-0 NetworkManager[48965]: <info>  [1765007247.3308] manager: (tap9edf259b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.330 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.332 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9edf259b-60, col_values=(('external_ids', {'iface-id': '2622b20a-1eb7-4bb6-abbf-35b090425f31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.333 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:27 compute-0 ovn_controller[147168]: 2025-12-06T07:47:27Z|00598|binding|INFO|Releasing lport 2622b20a-1eb7-4bb6-abbf-35b090425f31 from this chassis (sb_readonly=0)
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.348 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.349 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9edf259b-6a5e-4e11-938d-d631a412648e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9edf259b-6a5e-4e11-938d-d631a412648e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.351 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[48167554-1763-48a1-9c7c-92be7c05b4e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.352 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-9edf259b-6a5e-4e11-938d-d631a412648e
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/9edf259b-6a5e-4e11-938d-d631a412648e.pid.haproxy
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 9edf259b-6a5e-4e11-938d-d631a412648e
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:47:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:47:27.353 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e', 'env', 'PROCESS_TAG=haproxy-9edf259b-6a5e-4e11-938d-d631a412648e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9edf259b-6a5e-4e11-938d-d631a412648e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.608 251996 DEBUG nova.compute.manager [req-112016d9-0963-4b34-85ab-422f62e7c989 req-2cd6c906-e0b1-435c-a671-58bcdfecceef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Received event network-vif-plugged-ce9ef951-3233-438a-91ab-5761a277635d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.609 251996 DEBUG oslo_concurrency.lockutils [req-112016d9-0963-4b34-85ab-422f62e7c989 req-2cd6c906-e0b1-435c-a671-58bcdfecceef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.609 251996 DEBUG oslo_concurrency.lockutils [req-112016d9-0963-4b34-85ab-422f62e7c989 req-2cd6c906-e0b1-435c-a671-58bcdfecceef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.609 251996 DEBUG oslo_concurrency.lockutils [req-112016d9-0963-4b34-85ab-422f62e7c989 req-2cd6c906-e0b1-435c-a671-58bcdfecceef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.609 251996 DEBUG nova.compute.manager [req-112016d9-0963-4b34-85ab-422f62e7c989 req-2cd6c906-e0b1-435c-a671-58bcdfecceef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Processing event network-vif-plugged-ce9ef951-3233-438a-91ab-5761a277635d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.678 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007247.678123, f09d4c2b-5734-457e-93cf-a2b4f61e3afd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.679 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] VM Started (Lifecycle Event)
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.681 251996 DEBUG nova.compute.manager [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.682 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.685 251996 DEBUG nova.virt.libvirt.driver [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.689 251996 INFO nova.virt.libvirt.driver [-] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Instance spawned successfully.
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.689 251996 INFO nova.compute.manager [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Took 9.14 seconds to spawn the instance on the hypervisor.
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.689 251996 DEBUG nova.compute.manager [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.768 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.773 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:47:27 compute-0 podman[354632]: 2025-12-06 07:47:27.752032956 +0000 UTC m=+0.021220104 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:47:27 compute-0 podman[354632]: 2025-12-06 07:47:27.900897443 +0000 UTC m=+0.170084571 container create 8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 07:47:27 compute-0 systemd[1]: Started libpod-conmon-8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5.scope.
Dec 06 07:47:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:47:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f136a746d3f92b857f755650ae662592222775e780ad64e11cc9ce2de055c3e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.974 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.975 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007247.6807303, f09d4c2b-5734-457e-93cf-a2b4f61e3afd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:47:27 compute-0 nova_compute[251992]: 2025-12-06 07:47:27.975 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] VM Paused (Lifecycle Event)
Dec 06 07:47:27 compute-0 podman[354632]: 2025-12-06 07:47:27.98048723 +0000 UTC m=+0.249674418 container init 8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:47:27 compute-0 podman[354632]: 2025-12-06 07:47:27.988731852 +0000 UTC m=+0.257918990 container start 8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 06 07:47:28 compute-0 nova_compute[251992]: 2025-12-06 07:47:28.014 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:47:28 compute-0 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[354647]: [NOTICE]   (354651) : New worker (354653) forked
Dec 06 07:47:28 compute-0 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[354647]: [NOTICE]   (354651) : Loading success.
Dec 06 07:47:28 compute-0 nova_compute[251992]: 2025-12-06 07:47:28.019 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007247.6839116, f09d4c2b-5734-457e-93cf-a2b4f61e3afd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:47:28 compute-0 nova_compute[251992]: 2025-12-06 07:47:28.019 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] VM Resumed (Lifecycle Event)
Dec 06 07:47:28 compute-0 nova_compute[251992]: 2025-12-06 07:47:28.033 251996 INFO nova.compute.manager [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Took 10.70 seconds to build instance.
Dec 06 07:47:28 compute-0 nova_compute[251992]: 2025-12-06 07:47:28.052 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:47:28 compute-0 nova_compute[251992]: 2025-12-06 07:47:28.056 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:47:28 compute-0 nova_compute[251992]: 2025-12-06 07:47:28.125 251996 DEBUG oslo_concurrency.lockutils [None req-4b1eed23-e82f-4a85-aa99-40f3bcce1d43 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:28 compute-0 ceph-mon[74339]: pgmap v2786: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 35 KiB/s wr, 182 op/s
Dec 06 07:47:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:28.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:28 compute-0 nova_compute[251992]: 2025-12-06 07:47:28.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:28 compute-0 nova_compute[251992]: 2025-12-06 07:47:28.921 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 35 KiB/s wr, 215 op/s
Dec 06 07:47:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:29.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:29 compute-0 ceph-mon[74339]: pgmap v2787: 305 pgs: 305 active+clean; 948 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 35 KiB/s wr, 215 op/s
Dec 06 07:47:29 compute-0 nova_compute[251992]: 2025-12-06 07:47:29.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:29 compute-0 nova_compute[251992]: 2025-12-06 07:47:29.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:47:29 compute-0 nova_compute[251992]: 2025-12-06 07:47:29.914 251996 DEBUG nova.compute.manager [req-86833bc6-a694-425a-bf92-3ce3af2a625a req-fceae701-4375-4d88-8584-e6291782379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Received event network-vif-plugged-ce9ef951-3233-438a-91ab-5761a277635d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:47:29 compute-0 nova_compute[251992]: 2025-12-06 07:47:29.915 251996 DEBUG oslo_concurrency.lockutils [req-86833bc6-a694-425a-bf92-3ce3af2a625a req-fceae701-4375-4d88-8584-e6291782379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:47:29 compute-0 nova_compute[251992]: 2025-12-06 07:47:29.915 251996 DEBUG oslo_concurrency.lockutils [req-86833bc6-a694-425a-bf92-3ce3af2a625a req-fceae701-4375-4d88-8584-e6291782379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:47:29 compute-0 nova_compute[251992]: 2025-12-06 07:47:29.916 251996 DEBUG oslo_concurrency.lockutils [req-86833bc6-a694-425a-bf92-3ce3af2a625a req-fceae701-4375-4d88-8584-e6291782379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:47:29 compute-0 nova_compute[251992]: 2025-12-06 07:47:29.916 251996 DEBUG nova.compute.manager [req-86833bc6-a694-425a-bf92-3ce3af2a625a req-fceae701-4375-4d88-8584-e6291782379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] No waiting events found dispatching network-vif-plugged-ce9ef951-3233-438a-91ab-5761a277635d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:47:29 compute-0 nova_compute[251992]: 2025-12-06 07:47:29.916 251996 WARNING nova.compute.manager [req-86833bc6-a694-425a-bf92-3ce3af2a625a req-fceae701-4375-4d88-8584-e6291782379f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Received unexpected event network-vif-plugged-ce9ef951-3233-438a-91ab-5761a277635d for instance with vm_state active and task_state None.
Dec 06 07:47:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:30 compute-0 nova_compute[251992]: 2025-12-06 07:47:30.364 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:47:30 compute-0 nova_compute[251992]: 2025-12-06 07:47:30.364 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:47:30 compute-0 nova_compute[251992]: 2025-12-06 07:47:30.364 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:47:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:30.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:31 compute-0 nova_compute[251992]: 2025-12-06 07:47:31.027 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 916 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 40 KiB/s wr, 233 op/s
Dec 06 07:47:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/855239122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:31 compute-0 ceph-mon[74339]: pgmap v2788: 305 pgs: 305 active+clean; 916 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 40 KiB/s wr, 233 op/s
Dec 06 07:47:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:31.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:32.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3830138832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 890 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 50 KiB/s wr, 256 op/s
Dec 06 07:47:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:33.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:33 compute-0 nova_compute[251992]: 2025-12-06 07:47:33.923 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:34 compute-0 ceph-mon[74339]: pgmap v2789: 305 pgs: 305 active+clean; 890 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 50 KiB/s wr, 256 op/s
Dec 06 07:47:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:34.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:34 compute-0 podman[354665]: 2025-12-06 07:47:34.428832365 +0000 UTC m=+0.082220650 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:47:34 compute-0 podman[354666]: 2025-12-06 07:47:34.439224685 +0000 UTC m=+0.092712892 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:47:34 compute-0 nova_compute[251992]: 2025-12-06 07:47:34.461 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Updating instance_info_cache with network_info: [{"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:47:34 compute-0 nova_compute[251992]: 2025-12-06 07:47:34.480 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:47:34 compute-0 nova_compute[251992]: 2025-12-06 07:47:34.480 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:47:34 compute-0 nova_compute[251992]: 2025-12-06 07:47:34.481 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:34 compute-0 nova_compute[251992]: 2025-12-06 07:47:34.481 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:34 compute-0 nova_compute[251992]: 2025-12-06 07:47:34.481 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:34 compute-0 nova_compute[251992]: 2025-12-06 07:47:34.482 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:47:34 compute-0 nova_compute[251992]: 2025-12-06 07:47:34.482 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:47:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 867 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 53 KiB/s wr, 274 op/s
Dec 06 07:47:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3956898978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:35.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:35 compute-0 nova_compute[251992]: 2025-12-06 07:47:35.695 251996 DEBUG nova.compute.manager [req-a517d93d-f8b2-407d-ad7e-c6fef19be0ff req-7d89e736-c9bb-4aa2-9a1e-cae441a83d83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Received event network-changed-ce9ef951-3233-438a-91ab-5761a277635d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:47:35 compute-0 nova_compute[251992]: 2025-12-06 07:47:35.696 251996 DEBUG nova.compute.manager [req-a517d93d-f8b2-407d-ad7e-c6fef19be0ff req-7d89e736-c9bb-4aa2-9a1e-cae441a83d83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Refreshing instance network info cache due to event network-changed-ce9ef951-3233-438a-91ab-5761a277635d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:47:35 compute-0 nova_compute[251992]: 2025-12-06 07:47:35.696 251996 DEBUG oslo_concurrency.lockutils [req-a517d93d-f8b2-407d-ad7e-c6fef19be0ff req-7d89e736-c9bb-4aa2-9a1e-cae441a83d83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f09d4c2b-5734-457e-93cf-a2b4f61e3afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:47:35 compute-0 nova_compute[251992]: 2025-12-06 07:47:35.696 251996 DEBUG oslo_concurrency.lockutils [req-a517d93d-f8b2-407d-ad7e-c6fef19be0ff req-7d89e736-c9bb-4aa2-9a1e-cae441a83d83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f09d4c2b-5734-457e-93cf-a2b4f61e3afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:47:35 compute-0 nova_compute[251992]: 2025-12-06 07:47:35.696 251996 DEBUG nova.network.neutron [req-a517d93d-f8b2-407d-ad7e-c6fef19be0ff req-7d89e736-c9bb-4aa2-9a1e-cae441a83d83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Refreshing network info cache for port ce9ef951-3233-438a-91ab-5761a277635d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:47:36 compute-0 nova_compute[251992]: 2025-12-06 07:47:36.034 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:36 compute-0 ceph-mon[74339]: pgmap v2790: 305 pgs: 305 active+clean; 867 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 53 KiB/s wr, 274 op/s
Dec 06 07:47:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:36.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:36 compute-0 nova_compute[251992]: 2025-12-06 07:47:36.789 251996 DEBUG nova.network.neutron [req-a517d93d-f8b2-407d-ad7e-c6fef19be0ff req-7d89e736-c9bb-4aa2-9a1e-cae441a83d83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Updated VIF entry in instance network info cache for port ce9ef951-3233-438a-91ab-5761a277635d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:47:36 compute-0 nova_compute[251992]: 2025-12-06 07:47:36.790 251996 DEBUG nova.network.neutron [req-a517d93d-f8b2-407d-ad7e-c6fef19be0ff req-7d89e736-c9bb-4aa2-9a1e-cae441a83d83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Updating instance_info_cache with network_info: [{"id": "ce9ef951-3233-438a-91ab-5761a277635d", "address": "fa:16:3e:2f:4f:5d", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce9ef951-32", "ovs_interfaceid": "ce9ef951-3233-438a-91ab-5761a277635d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:47:36 compute-0 nova_compute[251992]: 2025-12-06 07:47:36.811 251996 DEBUG oslo_concurrency.lockutils [req-a517d93d-f8b2-407d-ad7e-c6fef19be0ff req-7d89e736-c9bb-4aa2-9a1e-cae441a83d83 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f09d4c2b-5734-457e-93cf-a2b4f61e3afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:47:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 869 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 46 KiB/s wr, 214 op/s
Dec 06 07:47:37 compute-0 ceph-mon[74339]: pgmap v2791: 305 pgs: 305 active+clean; 869 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 46 KiB/s wr, 214 op/s
Dec 06 07:47:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:37.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:38.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:38 compute-0 ovn_controller[147168]: 2025-12-06T07:47:38Z|00599|binding|INFO|Releasing lport 2622b20a-1eb7-4bb6-abbf-35b090425f31 from this chassis (sb_readonly=0)
Dec 06 07:47:38 compute-0 ovn_controller[147168]: 2025-12-06T07:47:38Z|00600|binding|INFO|Releasing lport 6b94462b-5171-4a4e-8d60-ac645842c400 from this chassis (sb_readonly=0)
Dec 06 07:47:38 compute-0 nova_compute[251992]: 2025-12-06 07:47:38.652 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:38 compute-0 nova_compute[251992]: 2025-12-06 07:47:38.924 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:38 compute-0 sshd-session[354702]: Invalid user guest from 45.140.17.124 port 34876
Dec 06 07:47:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 869 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 109 KiB/s wr, 180 op/s
Dec 06 07:47:39 compute-0 ceph-mon[74339]: pgmap v2792: 305 pgs: 305 active+clean; 869 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 109 KiB/s wr, 180 op/s
Dec 06 07:47:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:39.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:39 compute-0 sshd-session[354702]: Connection reset by invalid user guest 45.140.17.124 port 34876 [preauth]
Dec 06 07:47:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:40.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:41 compute-0 nova_compute[251992]: 2025-12-06 07:47:41.039 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 893 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Dec 06 07:47:41 compute-0 ceph-mon[74339]: pgmap v2793: 305 pgs: 305 active+clean; 893 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Dec 06 07:47:41 compute-0 ovn_controller[147168]: 2025-12-06T07:47:41Z|00062|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.14
Dec 06 07:47:41 compute-0 ovn_controller[147168]: 2025-12-06T07:47:41Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:2f:4f:5d 10.100.0.14
Dec 06 07:47:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:41.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:42 compute-0 sshd-session[354706]: Connection reset by authenticating user root 45.140.17.124 port 34926 [preauth]
Dec 06 07:47:42 compute-0 sudo[354710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:42 compute-0 sudo[354710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:42 compute-0 sudo[354710]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:42 compute-0 sudo[354735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:42 compute-0 sudo[354735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:42 compute-0 sudo[354735]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:42.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:47:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:47:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:47:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:47:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:47:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:47:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 900 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Dec 06 07:47:43 compute-0 ceph-mon[74339]: pgmap v2794: 305 pgs: 305 active+clean; 900 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Dec 06 07:47:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:43.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:43 compute-0 sudo[354762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:43 compute-0 sudo[354762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:43 compute-0 sudo[354762]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:43 compute-0 sudo[354787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:47:43 compute-0 sudo[354787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:43 compute-0 sudo[354787]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:43 compute-0 sudo[354812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:43 compute-0 sudo[354812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:43 compute-0 sudo[354812]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:43 compute-0 sudo[354837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:47:43 compute-0 sudo[354837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:43 compute-0 nova_compute[251992]: 2025-12-06 07:47:43.926 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:44 compute-0 sudo[354837]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:47:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:47:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:47:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:47:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:47:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:47:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 61823b82-0432-4658-866c-654118cb867f does not exist
Dec 06 07:47:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c34ff3fd-3a88-407b-a6a0-d6529513a78b does not exist
Dec 06 07:47:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b18da3f3-a0ce-45a3-b9b9-1ef82affa95b does not exist
Dec 06 07:47:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:47:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:47:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:47:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:47:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:47:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:47:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:47:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:47:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:47:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:47:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:47:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:47:44 compute-0 sudo[354893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:44 compute-0 sudo[354893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:44 compute-0 sudo[354893]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:44 compute-0 sshd-session[354709]: Connection reset by authenticating user root 45.140.17.124 port 34934 [preauth]
Dec 06 07:47:44 compute-0 sudo[354918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:47:44 compute-0 sudo[354918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:44 compute-0 sudo[354918]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:44.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:44 compute-0 sudo[354943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:44 compute-0 sudo[354943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:44 compute-0 sudo[354943]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:44 compute-0 sudo[354968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:47:44 compute-0 sudo[354968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:44 compute-0 podman[355034]: 2025-12-06 07:47:44.83269455 +0000 UTC m=+0.039646640 container create 2b08445d4f13316e18df681a20820ef68e1ffc167cee3c37fc1117553995b08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:47:44 compute-0 systemd[1]: Started libpod-conmon-2b08445d4f13316e18df681a20820ef68e1ffc167cee3c37fc1117553995b08b.scope.
Dec 06 07:47:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:47:44 compute-0 podman[355034]: 2025-12-06 07:47:44.815607639 +0000 UTC m=+0.022559749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:47:44 compute-0 podman[355034]: 2025-12-06 07:47:44.917220141 +0000 UTC m=+0.124172251 container init 2b08445d4f13316e18df681a20820ef68e1ffc167cee3c37fc1117553995b08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 07:47:44 compute-0 podman[355034]: 2025-12-06 07:47:44.926673955 +0000 UTC m=+0.133626045 container start 2b08445d4f13316e18df681a20820ef68e1ffc167cee3c37fc1117553995b08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_liskov, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:47:44 compute-0 podman[355034]: 2025-12-06 07:47:44.929987555 +0000 UTC m=+0.136939665 container attach 2b08445d4f13316e18df681a20820ef68e1ffc167cee3c37fc1117553995b08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 07:47:44 compute-0 optimistic_liskov[355051]: 167 167
Dec 06 07:47:44 compute-0 systemd[1]: libpod-2b08445d4f13316e18df681a20820ef68e1ffc167cee3c37fc1117553995b08b.scope: Deactivated successfully.
Dec 06 07:47:44 compute-0 podman[355034]: 2025-12-06 07:47:44.934485526 +0000 UTC m=+0.141437616 container died 2b08445d4f13316e18df681a20820ef68e1ffc167cee3c37fc1117553995b08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 07:47:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-155f8f33e70e029f2a22c9d4bd5c7bbb94c81c2c8e86b4253155623cd9286fcb-merged.mount: Deactivated successfully.
Dec 06 07:47:44 compute-0 podman[355034]: 2025-12-06 07:47:44.9765013 +0000 UTC m=+0.183453390 container remove 2b08445d4f13316e18df681a20820ef68e1ffc167cee3c37fc1117553995b08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 07:47:44 compute-0 systemd[1]: libpod-conmon-2b08445d4f13316e18df681a20820ef68e1ffc167cee3c37fc1117553995b08b.scope: Deactivated successfully.
Dec 06 07:47:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 903 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Dec 06 07:47:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.140417) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265140467, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 807, "num_deletes": 251, "total_data_size": 1031426, "memory_usage": 1046232, "flush_reason": "Manual Compaction"}
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265147484, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 1019453, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55370, "largest_seqno": 56176, "table_properties": {"data_size": 1015316, "index_size": 1853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8795, "raw_average_key_size": 18, "raw_value_size": 1006798, "raw_average_value_size": 2071, "num_data_blocks": 82, "num_entries": 486, "num_filter_entries": 486, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007205, "oldest_key_time": 1765007205, "file_creation_time": 1765007265, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 7113 microseconds, and 3240 cpu microseconds.
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.147520) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 1019453 bytes OK
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.147543) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.148691) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.148702) EVENT_LOG_v1 {"time_micros": 1765007265148699, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.148717) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 1027374, prev total WAL file size 1027374, number of live WAL files 2.
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:47:45 compute-0 podman[355075]: 2025-12-06 07:47:45.149517108 +0000 UTC m=+0.041787168 container create 6ad4339443b7321a7d6a481f8764240456fc8fb57acc9e961776bbcde6f6b5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.149223) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353031' seq:0, type:0; will stop at (end)
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(995KB)], [119(11MB)]
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265149271, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 13464998, "oldest_snapshot_seqno": -1}
Dec 06 07:47:45 compute-0 systemd[1]: Started libpod-conmon-6ad4339443b7321a7d6a481f8764240456fc8fb57acc9e961776bbcde6f6b5a5.scope.
Dec 06 07:47:45 compute-0 podman[355075]: 2025-12-06 07:47:45.12992963 +0000 UTC m=+0.022199710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:47:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65e6ca8ddcc042b7b40af726e2db42c7d737fa5b8cb321d139dd1e04c75f55c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65e6ca8ddcc042b7b40af726e2db42c7d737fa5b8cb321d139dd1e04c75f55c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65e6ca8ddcc042b7b40af726e2db42c7d737fa5b8cb321d139dd1e04c75f55c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65e6ca8ddcc042b7b40af726e2db42c7d737fa5b8cb321d139dd1e04c75f55c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b65e6ca8ddcc042b7b40af726e2db42c7d737fa5b8cb321d139dd1e04c75f55c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 9071 keys, 12373059 bytes, temperature: kUnknown
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265231889, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 12373059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12313348, "index_size": 35919, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 238401, "raw_average_key_size": 26, "raw_value_size": 12152709, "raw_average_value_size": 1339, "num_data_blocks": 1380, "num_entries": 9071, "num_filter_entries": 9071, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765007265, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.232172) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 12373059 bytes
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.233860) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.8 rd, 149.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.9 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(25.3) write-amplify(12.1) OK, records in: 9591, records dropped: 520 output_compression: NoCompression
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.233877) EVENT_LOG_v1 {"time_micros": 1765007265233869, "job": 72, "event": "compaction_finished", "compaction_time_micros": 82709, "compaction_time_cpu_micros": 35339, "output_level": 6, "num_output_files": 1, "total_output_size": 12373059, "num_input_records": 9591, "num_output_records": 9071, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265234181, "job": 72, "event": "table_file_deletion", "file_number": 121}
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007265236501, "job": 72, "event": "table_file_deletion", "file_number": 119}
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.149149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.236581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.236586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.236588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.236589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:47:45 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:47:45.236591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:47:45 compute-0 podman[355075]: 2025-12-06 07:47:45.243943266 +0000 UTC m=+0.136213346 container init 6ad4339443b7321a7d6a481f8764240456fc8fb57acc9e961776bbcde6f6b5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 07:47:45 compute-0 podman[355075]: 2025-12-06 07:47:45.251209132 +0000 UTC m=+0.143479192 container start 6ad4339443b7321a7d6a481f8764240456fc8fb57acc9e961776bbcde6f6b5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_grothendieck, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:47:45 compute-0 podman[355075]: 2025-12-06 07:47:45.254239644 +0000 UTC m=+0.146509704 container attach 6ad4339443b7321a7d6a481f8764240456fc8fb57acc9e961776bbcde6f6b5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:47:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:45 compute-0 ceph-mon[74339]: pgmap v2795: 305 pgs: 305 active+clean; 903 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Dec 06 07:47:46 compute-0 nova_compute[251992]: 2025-12-06 07:47:46.041 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:46 compute-0 charming_grothendieck[355091]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:47:46 compute-0 charming_grothendieck[355091]: --> relative data size: 1.0
Dec 06 07:47:46 compute-0 charming_grothendieck[355091]: --> All data devices are unavailable
Dec 06 07:47:46 compute-0 systemd[1]: libpod-6ad4339443b7321a7d6a481f8764240456fc8fb57acc9e961776bbcde6f6b5a5.scope: Deactivated successfully.
Dec 06 07:47:46 compute-0 podman[355075]: 2025-12-06 07:47:46.107371792 +0000 UTC m=+0.999641862 container died 6ad4339443b7321a7d6a481f8764240456fc8fb57acc9e961776bbcde6f6b5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_grothendieck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:47:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b65e6ca8ddcc042b7b40af726e2db42c7d737fa5b8cb321d139dd1e04c75f55c-merged.mount: Deactivated successfully.
Dec 06 07:47:46 compute-0 podman[355075]: 2025-12-06 07:47:46.166384244 +0000 UTC m=+1.058654304 container remove 6ad4339443b7321a7d6a481f8764240456fc8fb57acc9e961776bbcde6f6b5a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:47:46 compute-0 systemd[1]: libpod-conmon-6ad4339443b7321a7d6a481f8764240456fc8fb57acc9e961776bbcde6f6b5a5.scope: Deactivated successfully.
Dec 06 07:47:46 compute-0 sudo[354968]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:46 compute-0 sudo[355121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:46 compute-0 sudo[355121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:46 compute-0 sudo[355121]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:46 compute-0 ovn_controller[147168]: 2025-12-06T07:47:46Z|00064|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.14
Dec 06 07:47:46 compute-0 ovn_controller[147168]: 2025-12-06T07:47:46Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:2f:4f:5d 10.100.0.14
Dec 06 07:47:46 compute-0 ovn_controller[147168]: 2025-12-06T07:47:46Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2f:4f:5d 10.100.0.14
Dec 06 07:47:46 compute-0 ovn_controller[147168]: 2025-12-06T07:47:46Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2f:4f:5d 10.100.0.14
Dec 06 07:47:46 compute-0 sudo[355146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:47:46 compute-0 sudo[355146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:46 compute-0 sudo[355146]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:46 compute-0 sudo[355171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:46 compute-0 sudo[355171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:46 compute-0 sudo[355171]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:46.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:46 compute-0 sudo[355196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:47:46 compute-0 sudo[355196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Dec 06 07:47:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Dec 06 07:47:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Dec 06 07:47:46 compute-0 podman[355261]: 2025-12-06 07:47:46.803009161 +0000 UTC m=+0.046159656 container create 25bda9cbbb1f522baeefb3afa31c31ee4de1f2633f9af59f2c243988476382ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 07:47:46 compute-0 systemd[1]: Started libpod-conmon-25bda9cbbb1f522baeefb3afa31c31ee4de1f2633f9af59f2c243988476382ac.scope.
Dec 06 07:47:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:47:46 compute-0 podman[355261]: 2025-12-06 07:47:46.780039391 +0000 UTC m=+0.023189906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:47:46 compute-0 podman[355261]: 2025-12-06 07:47:46.884161911 +0000 UTC m=+0.127312426 container init 25bda9cbbb1f522baeefb3afa31c31ee4de1f2633f9af59f2c243988476382ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:47:46 compute-0 podman[355261]: 2025-12-06 07:47:46.895350112 +0000 UTC m=+0.138500607 container start 25bda9cbbb1f522baeefb3afa31c31ee4de1f2633f9af59f2c243988476382ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:47:46 compute-0 podman[355261]: 2025-12-06 07:47:46.898655731 +0000 UTC m=+0.141806296 container attach 25bda9cbbb1f522baeefb3afa31c31ee4de1f2633f9af59f2c243988476382ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 07:47:46 compute-0 focused_albattani[355277]: 167 167
Dec 06 07:47:46 compute-0 systemd[1]: libpod-25bda9cbbb1f522baeefb3afa31c31ee4de1f2633f9af59f2c243988476382ac.scope: Deactivated successfully.
Dec 06 07:47:46 compute-0 podman[355261]: 2025-12-06 07:47:46.900978384 +0000 UTC m=+0.144128889 container died 25bda9cbbb1f522baeefb3afa31c31ee4de1f2633f9af59f2c243988476382ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_albattani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:47:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1b7f06d46592571869fe4acf3113564ef649d1be111bb6c29d79f2264278048-merged.mount: Deactivated successfully.
Dec 06 07:47:46 compute-0 podman[355261]: 2025-12-06 07:47:46.936870213 +0000 UTC m=+0.180020728 container remove 25bda9cbbb1f522baeefb3afa31c31ee4de1f2633f9af59f2c243988476382ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_albattani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:47:46 compute-0 systemd[1]: libpod-conmon-25bda9cbbb1f522baeefb3afa31c31ee4de1f2633f9af59f2c243988476382ac.scope: Deactivated successfully.
Dec 06 07:47:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 906 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 127 op/s
Dec 06 07:47:47 compute-0 podman[355302]: 2025-12-06 07:47:47.094548907 +0000 UTC m=+0.023869795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:47:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:47.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:47 compute-0 podman[355302]: 2025-12-06 07:47:47.453832091 +0000 UTC m=+0.383152929 container create c528dbee9f4c6109cdba22b673f63ae6c4b50cd2fb8e5f3fc1323fb82ebdafdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 07:47:47 compute-0 systemd[1]: Started libpod-conmon-c528dbee9f4c6109cdba22b673f63ae6c4b50cd2fb8e5f3fc1323fb82ebdafdc.scope.
Dec 06 07:47:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5335d4032226d90e3845499f757f9de2f5117bd586300de39cb1d4c0a4172e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5335d4032226d90e3845499f757f9de2f5117bd586300de39cb1d4c0a4172e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5335d4032226d90e3845499f757f9de2f5117bd586300de39cb1d4c0a4172e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5335d4032226d90e3845499f757f9de2f5117bd586300de39cb1d4c0a4172e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:47 compute-0 podman[355302]: 2025-12-06 07:47:47.560368425 +0000 UTC m=+0.489689273 container init c528dbee9f4c6109cdba22b673f63ae6c4b50cd2fb8e5f3fc1323fb82ebdafdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:47:47 compute-0 podman[355302]: 2025-12-06 07:47:47.56833677 +0000 UTC m=+0.497657598 container start c528dbee9f4c6109cdba22b673f63ae6c4b50cd2fb8e5f3fc1323fb82ebdafdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:47:47 compute-0 podman[355302]: 2025-12-06 07:47:47.571875436 +0000 UTC m=+0.501196314 container attach c528dbee9f4c6109cdba22b673f63ae6c4b50cd2fb8e5f3fc1323fb82ebdafdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 07:47:48 compute-0 sshd-session[354993]: Connection reset by authenticating user root 45.140.17.124 port 59382 [preauth]
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]: {
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:     "0": [
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:         {
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             "devices": [
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "/dev/loop3"
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             ],
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             "lv_name": "ceph_lv0",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             "lv_size": "7511998464",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             "name": "ceph_lv0",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             "tags": {
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "ceph.cluster_name": "ceph",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "ceph.crush_device_class": "",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "ceph.encrypted": "0",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "ceph.osd_id": "0",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "ceph.type": "block",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:                 "ceph.vdo": "0"
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             },
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             "type": "block",
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:             "vg_name": "ceph_vg0"
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:         }
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]:     ]
Dec 06 07:47:48 compute-0 hopeful_babbage[355318]: }
Dec 06 07:47:48 compute-0 systemd[1]: libpod-c528dbee9f4c6109cdba22b673f63ae6c4b50cd2fb8e5f3fc1323fb82ebdafdc.scope: Deactivated successfully.
Dec 06 07:47:48 compute-0 podman[355302]: 2025-12-06 07:47:48.35185381 +0000 UTC m=+1.281174648 container died c528dbee9f4c6109cdba22b673f63ae6c4b50cd2fb8e5f3fc1323fb82ebdafdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:47:48 compute-0 ceph-mon[74339]: osdmap e358: 3 total, 3 up, 3 in
Dec 06 07:47:48 compute-0 ceph-mon[74339]: pgmap v2797: 305 pgs: 305 active+clean; 906 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 127 op/s
Dec 06 07:47:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce5335d4032226d90e3845499f757f9de2f5117bd586300de39cb1d4c0a4172e-merged.mount: Deactivated successfully.
Dec 06 07:47:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:48.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:48 compute-0 podman[355302]: 2025-12-06 07:47:48.43708921 +0000 UTC m=+1.366410048 container remove c528dbee9f4c6109cdba22b673f63ae6c4b50cd2fb8e5f3fc1323fb82ebdafdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_babbage, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 07:47:48 compute-0 systemd[1]: libpod-conmon-c528dbee9f4c6109cdba22b673f63ae6c4b50cd2fb8e5f3fc1323fb82ebdafdc.scope: Deactivated successfully.
Dec 06 07:47:48 compute-0 sudo[355196]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:48 compute-0 sudo[355339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:48 compute-0 sudo[355339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:48 compute-0 sudo[355339]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:48 compute-0 sudo[355364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:47:48 compute-0 sudo[355364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:48 compute-0 sudo[355364]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:48 compute-0 sudo[355389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:48 compute-0 sudo[355389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:48 compute-0 sudo[355389]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:48 compute-0 sudo[355414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:47:48 compute-0 sudo[355414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:48 compute-0 nova_compute[251992]: 2025-12-06 07:47:48.928 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:48 compute-0 podman[355480]: 2025-12-06 07:47:48.987119931 +0000 UTC m=+0.043301750 container create 5a196d3efc7d940590d4189ff792e203feb6b78e4d6d575b3d3d4546bd6af02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_easley, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:47:49 compute-0 systemd[1]: Started libpod-conmon-5a196d3efc7d940590d4189ff792e203feb6b78e4d6d575b3d3d4546bd6af02b.scope.
Dec 06 07:47:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:47:49 compute-0 podman[355480]: 2025-12-06 07:47:48.966834394 +0000 UTC m=+0.023016243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:47:49 compute-0 podman[355480]: 2025-12-06 07:47:49.07048362 +0000 UTC m=+0.126665459 container init 5a196d3efc7d940590d4189ff792e203feb6b78e4d6d575b3d3d4546bd6af02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_easley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:47:49 compute-0 podman[355480]: 2025-12-06 07:47:49.078391543 +0000 UTC m=+0.134573362 container start 5a196d3efc7d940590d4189ff792e203feb6b78e4d6d575b3d3d4546bd6af02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 07:47:49 compute-0 podman[355480]: 2025-12-06 07:47:49.082079173 +0000 UTC m=+0.138261022 container attach 5a196d3efc7d940590d4189ff792e203feb6b78e4d6d575b3d3d4546bd6af02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_easley, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:47:49 compute-0 systemd[1]: libpod-5a196d3efc7d940590d4189ff792e203feb6b78e4d6d575b3d3d4546bd6af02b.scope: Deactivated successfully.
Dec 06 07:47:49 compute-0 condescending_easley[355498]: 167 167
Dec 06 07:47:49 compute-0 conmon[355498]: conmon 5a196d3efc7d940590d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a196d3efc7d940590d4189ff792e203feb6b78e4d6d575b3d3d4546bd6af02b.scope/container/memory.events
Dec 06 07:47:49 compute-0 podman[355480]: 2025-12-06 07:47:49.084284532 +0000 UTC m=+0.140466351 container died 5a196d3efc7d940590d4189ff792e203feb6b78e4d6d575b3d3d4546bd6af02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_easley, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:47:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 868 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 116 op/s
Dec 06 07:47:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5da0badfdf6be5d5c538cb0a74879f8156fb83d8263f843a0f39f0a4a75bd757-merged.mount: Deactivated successfully.
Dec 06 07:47:49 compute-0 podman[355480]: 2025-12-06 07:47:49.120927361 +0000 UTC m=+0.177109180 container remove 5a196d3efc7d940590d4189ff792e203feb6b78e4d6d575b3d3d4546bd6af02b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:47:49 compute-0 systemd[1]: libpod-conmon-5a196d3efc7d940590d4189ff792e203feb6b78e4d6d575b3d3d4546bd6af02b.scope: Deactivated successfully.
Dec 06 07:47:49 compute-0 podman[355522]: 2025-12-06 07:47:49.305914222 +0000 UTC m=+0.045339255 container create a09ebf9cc4bd71cb7e889bd0102cfadd67e1d4255b1f691f7883f25b5b4e8ef4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:47:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:49.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:49 compute-0 systemd[1]: Started libpod-conmon-a09ebf9cc4bd71cb7e889bd0102cfadd67e1d4255b1f691f7883f25b5b4e8ef4.scope.
Dec 06 07:47:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc95f9ca0b06c3a32b77dad1b76d79041c4237600c93eccb268285ab299e273e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc95f9ca0b06c3a32b77dad1b76d79041c4237600c93eccb268285ab299e273e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc95f9ca0b06c3a32b77dad1b76d79041c4237600c93eccb268285ab299e273e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc95f9ca0b06c3a32b77dad1b76d79041c4237600c93eccb268285ab299e273e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:47:49 compute-0 podman[355522]: 2025-12-06 07:47:49.289798597 +0000 UTC m=+0.029223660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:47:49 compute-0 podman[355522]: 2025-12-06 07:47:49.38438382 +0000 UTC m=+0.123808853 container init a09ebf9cc4bd71cb7e889bd0102cfadd67e1d4255b1f691f7883f25b5b4e8ef4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:47:49 compute-0 podman[355522]: 2025-12-06 07:47:49.391433169 +0000 UTC m=+0.130858202 container start a09ebf9cc4bd71cb7e889bd0102cfadd67e1d4255b1f691f7883f25b5b4e8ef4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mcnulty, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 07:47:49 compute-0 podman[355522]: 2025-12-06 07:47:49.394202034 +0000 UTC m=+0.133627067 container attach a09ebf9cc4bd71cb7e889bd0102cfadd67e1d4255b1f691f7883f25b5b4e8ef4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 07:47:49 compute-0 ceph-mon[74339]: pgmap v2798: 305 pgs: 305 active+clean; 868 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 116 op/s
Dec 06 07:47:50 compute-0 sshd-session[355327]: Connection reset by authenticating user root 45.140.17.124 port 59394 [preauth]
Dec 06 07:47:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:50 compute-0 suspicious_mcnulty[355538]: {
Dec 06 07:47:50 compute-0 suspicious_mcnulty[355538]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:47:50 compute-0 suspicious_mcnulty[355538]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:47:50 compute-0 suspicious_mcnulty[355538]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:47:50 compute-0 suspicious_mcnulty[355538]:         "osd_id": 0,
Dec 06 07:47:50 compute-0 suspicious_mcnulty[355538]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:47:50 compute-0 suspicious_mcnulty[355538]:         "type": "bluestore"
Dec 06 07:47:50 compute-0 suspicious_mcnulty[355538]:     }
Dec 06 07:47:50 compute-0 suspicious_mcnulty[355538]: }
Dec 06 07:47:50 compute-0 systemd[1]: libpod-a09ebf9cc4bd71cb7e889bd0102cfadd67e1d4255b1f691f7883f25b5b4e8ef4.scope: Deactivated successfully.
Dec 06 07:47:50 compute-0 podman[355522]: 2025-12-06 07:47:50.202808471 +0000 UTC m=+0.942233504 container died a09ebf9cc4bd71cb7e889bd0102cfadd67e1d4255b1f691f7883f25b5b4e8ef4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mcnulty, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:47:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc95f9ca0b06c3a32b77dad1b76d79041c4237600c93eccb268285ab299e273e-merged.mount: Deactivated successfully.
Dec 06 07:47:50 compute-0 podman[355522]: 2025-12-06 07:47:50.256748836 +0000 UTC m=+0.996173869 container remove a09ebf9cc4bd71cb7e889bd0102cfadd67e1d4255b1f691f7883f25b5b4e8ef4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mcnulty, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 07:47:50 compute-0 systemd[1]: libpod-conmon-a09ebf9cc4bd71cb7e889bd0102cfadd67e1d4255b1f691f7883f25b5b4e8ef4.scope: Deactivated successfully.
Dec 06 07:47:50 compute-0 sudo[355414]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:47:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:47:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:47:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:47:50 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d29d4d87-c0d8-4178-b279-a30a45a36d85 does not exist
Dec 06 07:47:50 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5e4461e7-8810-4f90-a584-815fff568dea does not exist
Dec 06 07:47:50 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0224ca8b-b286-4997-bae1-66a2bc3a1d54 does not exist
Dec 06 07:47:50 compute-0 sudo[355571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:47:50 compute-0 sudo[355571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:50 compute-0 sudo[355571]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:50.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:50 compute-0 sudo[355596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:47:50 compute-0 sudo[355596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:47:50 compute-0 sudo[355596]: pam_unix(sudo:session): session closed for user root
Dec 06 07:47:51 compute-0 nova_compute[251992]: 2025-12-06 07:47:51.044 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 815 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 878 KiB/s wr, 93 op/s
Dec 06 07:47:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:47:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:47:51 compute-0 ceph-mon[74339]: pgmap v2799: 305 pgs: 305 active+clean; 815 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 878 KiB/s wr, 93 op/s
Dec 06 07:47:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:51.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:52.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:47:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1957087422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1957087422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:47:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 773 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 709 KiB/s rd, 382 KiB/s wr, 83 op/s
Dec 06 07:47:53 compute-0 nova_compute[251992]: 2025-12-06 07:47:53.190 251996 DEBUG nova.compute.manager [None req-b48aa9e1-2936-4201-b563-48f84e7d881c 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:47:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:53.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:53 compute-0 nova_compute[251992]: 2025-12-06 07:47:53.373 251996 INFO nova.compute.manager [None req-b48aa9e1-2936-4201-b563-48f84e7d881c 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] instance snapshotting
Dec 06 07:47:53 compute-0 nova_compute[251992]: 2025-12-06 07:47:53.741 251996 INFO nova.virt.libvirt.driver [None req-b48aa9e1-2936-4201-b563-48f84e7d881c 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Beginning live snapshot process
Dec 06 07:47:53 compute-0 nova_compute[251992]: 2025-12-06 07:47:53.930 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:54 compute-0 nova_compute[251992]: 2025-12-06 07:47:54.189 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:54 compute-0 nova_compute[251992]: 2025-12-06 07:47:54.376 251996 DEBUG nova.storage.rbd_utils [None req-b48aa9e1-2936-4201-b563-48f84e7d881c 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] creating snapshot(646b1b868a73490ca51891a7d30346dc) on rbd image(f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:47:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:54.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Dec 06 07:47:54 compute-0 ceph-mon[74339]: pgmap v2800: 305 pgs: 305 active+clean; 773 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 709 KiB/s rd, 382 KiB/s wr, 83 op/s
Dec 06 07:47:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 753 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 633 KiB/s rd, 380 KiB/s wr, 78 op/s
Dec 06 07:47:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Dec 06 07:47:55 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Dec 06 07:47:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:47:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Dec 06 07:47:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:55.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:55 compute-0 nova_compute[251992]: 2025-12-06 07:47:55.381 251996 DEBUG nova.storage.rbd_utils [None req-b48aa9e1-2936-4201-b563-48f84e7d881c 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] cloning vms/f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk@646b1b868a73490ca51891a7d30346dc to images/5ed78aa8-6ca4-4423-a38b-01e37b179e36 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:47:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Dec 06 07:47:55 compute-0 nova_compute[251992]: 2025-12-06 07:47:55.520 251996 DEBUG nova.storage.rbd_utils [None req-b48aa9e1-2936-4201-b563-48f84e7d881c 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] flattening images/5ed78aa8-6ca4-4423-a38b-01e37b179e36 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:47:55 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Dec 06 07:47:56 compute-0 nova_compute[251992]: 2025-12-06 07:47:56.046 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3420904072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:47:56 compute-0 ceph-mon[74339]: pgmap v2801: 305 pgs: 305 active+clean; 753 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 633 KiB/s rd, 380 KiB/s wr, 78 op/s
Dec 06 07:47:56 compute-0 ceph-mon[74339]: osdmap e359: 3 total, 3 up, 3 in
Dec 06 07:47:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:47:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:56.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:47:56 compute-0 podman[355729]: 2025-12-06 07:47:56.48651994 +0000 UTC m=+0.130110792 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:47:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 753 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 399 KiB/s rd, 125 KiB/s wr, 85 op/s
Dec 06 07:47:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:57.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:57 compute-0 ceph-mon[74339]: osdmap e360: 3 total, 3 up, 3 in
Dec 06 07:47:57 compute-0 ceph-mon[74339]: pgmap v2804: 305 pgs: 305 active+clean; 753 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 399 KiB/s rd, 125 KiB/s wr, 85 op/s
Dec 06 07:47:57 compute-0 nova_compute[251992]: 2025-12-06 07:47:57.777 251996 DEBUG nova.storage.rbd_utils [None req-b48aa9e1-2936-4201-b563-48f84e7d881c 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] removing snapshot(646b1b868a73490ca51891a7d30346dc) on rbd image(f09d4c2b-5734-457e-93cf-a2b4f61e3afd_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:47:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:47:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:47:58.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:47:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Dec 06 07:47:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Dec 06 07:47:58 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Dec 06 07:47:58 compute-0 nova_compute[251992]: 2025-12-06 07:47:58.734 251996 DEBUG nova.storage.rbd_utils [None req-b48aa9e1-2936-4201-b563-48f84e7d881c 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] creating snapshot(snap) on rbd image(5ed78aa8-6ca4-4423-a38b-01e37b179e36) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:47:58 compute-0 nova_compute[251992]: 2025-12-06 07:47:58.931 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:47:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 793 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 6.4 MiB/s wr, 197 op/s
Dec 06 07:47:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:47:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:47:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:47:59.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:47:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Dec 06 07:47:59 compute-0 ceph-mon[74339]: osdmap e361: 3 total, 3 up, 3 in
Dec 06 07:47:59 compute-0 ceph-mon[74339]: pgmap v2806: 305 pgs: 305 active+clean; 793 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 6.4 MiB/s wr, 197 op/s
Dec 06 07:47:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2911447517' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:47:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2911447517' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:47:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Dec 06 07:47:59 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Dec 06 07:48:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:00.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Dec 06 07:48:00 compute-0 ceph-mon[74339]: osdmap e362: 3 total, 3 up, 3 in
Dec 06 07:48:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Dec 06 07:48:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Dec 06 07:48:01 compute-0 nova_compute[251992]: 2025-12-06 07:48:01.048 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 813 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 11 MiB/s wr, 199 op/s
Dec 06 07:48:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:01.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:01 compute-0 nova_compute[251992]: 2025-12-06 07:48:01.522 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:01 compute-0 ceph-mon[74339]: osdmap e363: 3 total, 3 up, 3 in
Dec 06 07:48:01 compute-0 ceph-mon[74339]: pgmap v2809: 305 pgs: 305 active+clean; 813 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 11 MiB/s wr, 199 op/s
Dec 06 07:48:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:02.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:02 compute-0 sudo[355792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:02 compute-0 sudo[355792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:02 compute-0 sudo[355792]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:02 compute-0 sudo[355817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:02 compute-0 sudo[355817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:02 compute-0 sudo[355817]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Dec 06 07:48:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Dec 06 07:48:03 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Dec 06 07:48:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 857 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 13 MiB/s wr, 60 op/s
Dec 06 07:48:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:03.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:03.857 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:03.859 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:03.860 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:03 compute-0 nova_compute[251992]: 2025-12-06 07:48:03.933 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:03 compute-0 nova_compute[251992]: 2025-12-06 07:48:03.948 251996 INFO nova.virt.libvirt.driver [None req-b48aa9e1-2936-4201-b563-48f84e7d881c 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Snapshot image upload complete
Dec 06 07:48:03 compute-0 nova_compute[251992]: 2025-12-06 07:48:03.949 251996 INFO nova.compute.manager [None req-b48aa9e1-2936-4201-b563-48f84e7d881c 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Took 10.57 seconds to snapshot the instance on the hypervisor.
Dec 06 07:48:04 compute-0 ceph-mon[74339]: osdmap e364: 3 total, 3 up, 3 in
Dec 06 07:48:04 compute-0 ceph-mon[74339]: pgmap v2811: 305 pgs: 6 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 297 active+clean; 857 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 13 MiB/s wr, 60 op/s
Dec 06 07:48:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:04.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 11 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 290 active+clean; 742 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 205 op/s
Dec 06 07:48:05 compute-0 ceph-mon[74339]: pgmap v2812: 305 pgs: 11 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 290 active+clean; 742 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 205 op/s
Dec 06 07:48:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:05.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:05 compute-0 podman[355845]: 2025-12-06 07:48:05.397773584 +0000 UTC m=+0.057767640 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:48:05 compute-0 podman[355844]: 2025-12-06 07:48:05.423781765 +0000 UTC m=+0.082951399 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:48:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Dec 06 07:48:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Dec 06 07:48:05 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Dec 06 07:48:05 compute-0 nova_compute[251992]: 2025-12-06 07:48:05.939 251996 DEBUG oslo_concurrency.lockutils [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:05 compute-0 nova_compute[251992]: 2025-12-06 07:48:05.940 251996 DEBUG oslo_concurrency.lockutils [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:05 compute-0 nova_compute[251992]: 2025-12-06 07:48:05.940 251996 DEBUG oslo_concurrency.lockutils [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:05 compute-0 nova_compute[251992]: 2025-12-06 07:48:05.940 251996 DEBUG oslo_concurrency.lockutils [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:05 compute-0 nova_compute[251992]: 2025-12-06 07:48:05.941 251996 DEBUG oslo_concurrency.lockutils [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:05.941 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:48:05 compute-0 nova_compute[251992]: 2025-12-06 07:48:05.942 251996 INFO nova.compute.manager [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Terminating instance
Dec 06 07:48:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:05.942 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:48:05 compute-0 nova_compute[251992]: 2025-12-06 07:48:05.942 251996 DEBUG nova.compute.manager [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:48:05 compute-0 nova_compute[251992]: 2025-12-06 07:48:05.943 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.063 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:06 compute-0 kernel: tap14826742-06 (unregistering): left promiscuous mode
Dec 06 07:48:06 compute-0 NetworkManager[48965]: <info>  [1765007286.2238] device (tap14826742-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:48:06 compute-0 ovn_controller[147168]: 2025-12-06T07:48:06Z|00601|binding|INFO|Releasing lport 14826742-0679-403f-b2e4-28fb0f26527a from this chassis (sb_readonly=1)
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.230 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:06 compute-0 ovn_controller[147168]: 2025-12-06T07:48:06Z|00602|binding|INFO|Removing iface tap14826742-06 ovn-installed in OVS
Dec 06 07:48:06 compute-0 ovn_controller[147168]: 2025-12-06T07:48:06Z|00603|if_status|INFO|Dropped 2 log messages in last 1387 seconds (most recently, 1387 seconds ago) due to excessive rate
Dec 06 07:48:06 compute-0 ovn_controller[147168]: 2025-12-06T07:48:06Z|00604|if_status|INFO|Not setting lport 14826742-0679-403f-b2e4-28fb0f26527a down as sb is readonly
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.245 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:06 compute-0 ovn_controller[147168]: 2025-12-06T07:48:06Z|00605|binding|INFO|Setting lport 14826742-0679-403f-b2e4-28fb0f26527a down in Southbound
Dec 06 07:48:06 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000009d.scope: Deactivated successfully.
Dec 06 07:48:06 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000009d.scope: Consumed 18.316s CPU time.
Dec 06 07:48:06 compute-0 systemd-machined[212986]: Machine qemu-75-instance-0000009d terminated.
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.367 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.371 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.380 251996 INFO nova.virt.libvirt.driver [-] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Instance destroyed successfully.
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.381 251996 DEBUG nova.objects.instance [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'resources' on Instance uuid 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:48:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:06.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:06 compute-0 ceph-mon[74339]: osdmap e365: 3 total, 3 up, 3 in
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.670 251996 DEBUG nova.virt.libvirt.vif [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:45:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-592712215',display_name='tempest-ServerStableDeviceRescueTest-server-592712215',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-592712215',id=157,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:45:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-o0l8e5p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:45:54Z,user_data=None,user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=53b4413c-a38e-4ad9-9f1b-43babd1fe2a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.670 251996 DEBUG nova.network.os_vif_util [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "14826742-0679-403f-b2e4-28fb0f26527a", "address": "fa:16:3e:9a:c3:cc", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14826742-06", "ovs_interfaceid": "14826742-0679-403f-b2e4-28fb0f26527a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.671 251996 DEBUG nova.network.os_vif_util [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:c3:cc,bridge_name='br-int',has_traffic_filtering=True,id=14826742-0679-403f-b2e4-28fb0f26527a,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14826742-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.672 251996 DEBUG os_vif [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:c3:cc,bridge_name='br-int',has_traffic_filtering=True,id=14826742-0679-403f-b2e4-28fb0f26527a,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14826742-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.674 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.675 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14826742-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.676 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.678 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.682 251996 INFO os_vif [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:c3:cc,bridge_name='br-int',has_traffic_filtering=True,id=14826742-0679-403f-b2e4-28fb0f26527a,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14826742-06')
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.730 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:c3:cc 10.100.0.12'], port_security=['fa:16:3e:9a:c3:cc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '53b4413c-a38e-4ad9-9f1b-43babd1fe2a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=14826742-0679-403f-b2e4-28fb0f26527a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.731 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 14826742-0679-403f-b2e4-28fb0f26527a in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.733 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.751 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1ed176-7d44-4e1e-80fc-aa447de6c89e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.785 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e50556-6d05-48e2-8fc5-bc2e6d03794b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.795 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e8bd49b4-8b1e-4fe9-a505-f6ed6bfe3ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.826 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[6c20f6d7-dd23-4ecd-9864-649cc4d273a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.845 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dec0b5a4-bbd0-4b7e-9aa3-d349337ec3a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a17d6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:a2:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 174], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736431, 'reachable_time': 29829, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355923, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.862 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[941b2f0c-5e26-416c-9d4d-8db463e95924]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736441, 'tstamp': 736441}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355924, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6d1a17d6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 736444, 'tstamp': 736444}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355924, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.865 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.867 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:06 compute-0 nova_compute[251992]: 2025-12-06 07:48:06.868 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.869 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a17d6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.869 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.870 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a17d6-50, col_values=(('external_ids', {'iface-id': '6b94462b-5171-4a4e-8d60-ac645842c400'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:06.871 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.068 251996 INFO nova.virt.libvirt.driver [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Deleting instance files /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_del
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.069 251996 INFO nova.virt.libvirt.driver [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Deletion of /var/lib/nova/instances/53b4413c-a38e-4ad9-9f1b-43babd1fe2a5_del complete
Dec 06 07:48:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 667 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 6.1 MiB/s wr, 174 op/s
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.342 251996 INFO nova.compute.manager [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Took 1.40 seconds to destroy the instance on the hypervisor.
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.343 251996 DEBUG oslo.service.loopingcall [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.343 251996 DEBUG nova.compute.manager [-] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.343 251996 DEBUG nova.network.neutron [-] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:48:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:07.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.380 251996 DEBUG oslo_concurrency.lockutils [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.381 251996 DEBUG oslo_concurrency.lockutils [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.381 251996 DEBUG oslo_concurrency.lockutils [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.381 251996 DEBUG oslo_concurrency.lockutils [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.381 251996 DEBUG oslo_concurrency.lockutils [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.383 251996 INFO nova.compute.manager [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Terminating instance
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.384 251996 DEBUG nova.compute.manager [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:48:07 compute-0 kernel: tapce9ef951-32 (unregistering): left promiscuous mode
Dec 06 07:48:07 compute-0 NetworkManager[48965]: <info>  [1765007287.5642] device (tapce9ef951-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:48:07 compute-0 ovn_controller[147168]: 2025-12-06T07:48:07Z|00606|binding|INFO|Releasing lport ce9ef951-3233-438a-91ab-5761a277635d from this chassis (sb_readonly=0)
Dec 06 07:48:07 compute-0 ovn_controller[147168]: 2025-12-06T07:48:07Z|00607|binding|INFO|Setting lport ce9ef951-3233-438a-91ab-5761a277635d down in Southbound
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.570 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:07 compute-0 ovn_controller[147168]: 2025-12-06T07:48:07Z|00608|binding|INFO|Removing iface tapce9ef951-32 ovn-installed in OVS
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.573 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.587 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.616 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:4f:5d 10.100.0.14'], port_security=['fa:16:3e:2f:4f:5d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f09d4c2b-5734-457e-93cf-a2b4f61e3afd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9edf259b-6a5e-4e11-938d-d631a412648e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f093eaeb91c042dd8c85f5cd256c4394', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aaa0df08-ced0-442a-9685-6c089d405f5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90bdd78f-ae71-4d01-8170-80b57acff7fd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=ce9ef951-3233-438a-91ab-5761a277635d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.619 158118 INFO neutron.agent.ovn.metadata.agent [-] Port ce9ef951-3233-438a-91ab-5761a277635d in datapath 9edf259b-6a5e-4e11-938d-d631a412648e unbound from our chassis
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.621 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9edf259b-6a5e-4e11-938d-d631a412648e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.622 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dc1d3fa9-84d9-4020-ba4c-2a5afc23cc74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.622 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e namespace which is not needed anymore
Dec 06 07:48:07 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a2.scope: Deactivated successfully.
Dec 06 07:48:07 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a2.scope: Consumed 16.050s CPU time.
Dec 06 07:48:07 compute-0 systemd-machined[212986]: Machine qemu-76-instance-000000a2 terminated.
Dec 06 07:48:07 compute-0 ceph-mon[74339]: pgmap v2814: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 667 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 6.1 MiB/s wr, 174 op/s
Dec 06 07:48:07 compute-0 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[354647]: [NOTICE]   (354651) : haproxy version is 2.8.14-c23fe91
Dec 06 07:48:07 compute-0 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[354647]: [NOTICE]   (354651) : path to executable is /usr/sbin/haproxy
Dec 06 07:48:07 compute-0 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[354647]: [WARNING]  (354651) : Exiting Master process...
Dec 06 07:48:07 compute-0 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[354647]: [ALERT]    (354651) : Current worker (354653) exited with code 143 (Terminated)
Dec 06 07:48:07 compute-0 neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e[354647]: [WARNING]  (354651) : All workers exited. Exiting... (0)
Dec 06 07:48:07 compute-0 systemd[1]: libpod-8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5.scope: Deactivated successfully.
Dec 06 07:48:07 compute-0 podman[355949]: 2025-12-06 07:48:07.756996808 +0000 UTC m=+0.045389165 container died 8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5-userdata-shm.mount: Deactivated successfully.
Dec 06 07:48:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f136a746d3f92b857f755650ae662592222775e780ad64e11cc9ce2de055c3e2-merged.mount: Deactivated successfully.
Dec 06 07:48:07 compute-0 podman[355949]: 2025-12-06 07:48:07.793174945 +0000 UTC m=+0.081567292 container cleanup 8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.796 251996 DEBUG nova.compute.manager [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Received event network-changed-ce9ef951-3233-438a-91ab-5761a277635d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.797 251996 DEBUG nova.compute.manager [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Refreshing instance network info cache due to event network-changed-ce9ef951-3233-438a-91ab-5761a277635d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.797 251996 DEBUG oslo_concurrency.lockutils [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f09d4c2b-5734-457e-93cf-a2b4f61e3afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.797 251996 DEBUG oslo_concurrency.lockutils [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f09d4c2b-5734-457e-93cf-a2b4f61e3afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.797 251996 DEBUG nova.network.neutron [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Refreshing network info cache for port ce9ef951-3233-438a-91ab-5761a277635d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:48:07 compute-0 systemd[1]: libpod-conmon-8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5.scope: Deactivated successfully.
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.823 251996 INFO nova.virt.libvirt.driver [-] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Instance destroyed successfully.
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.823 251996 DEBUG nova.objects.instance [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lazy-loading 'resources' on Instance uuid f09d4c2b-5734-457e-93cf-a2b4f61e3afd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.842 251996 DEBUG nova.virt.libvirt.vif [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:47:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-947292371',display_name='tempest-TestSnapshotPattern-server-947292371',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-947292371',id=162,image_ref='85f1f69b-a9ee-46e0-a30b-ae7faf09ffef',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLxDMv0Vhbgr4L65QJ5+X+b7zbDfxyD9+qYaGNf4b7W3f9yi+P//RoKkMpyvVNIPGPzRh0H8TZRtNdilAq90sFwxv4/Dk5avudO2cObIlP9Igfm6SfNSZd6YTMkk3vYjjg==',key_name='tempest-TestSnapshotPattern-467820612',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:47:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f093eaeb91c042dd8c85f5cd256c4394',ramdisk_id='',reservation_id='r-xcchimpq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='29195b63-c365-4ace-a4f5-9c2dba89c276',image_min_disk='1',image_min_ram='0',image_owner_id='f093eaeb91c042dd8c85f5cd256c4394',image_owner_project_name='tempest-TestSnapshotPattern-563672408',image_owner_user_name='tempest-TestSnapshotPattern-563672408-project-member',image_user_id='89d63d29c7534f70817e13d23cada716',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-563672408',owner_user_name='tempest-TestSnapshotPattern-563672408-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:48:04Z,user_data=None,user_id='89d63d29c7534f70817e13d23cada716',uuid=f09d4c2b-5734-457e-93cf-a2b4f61e3afd,vcpu_model=<?>
,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce9ef951-3233-438a-91ab-5761a277635d", "address": "fa:16:3e:2f:4f:5d", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce9ef951-32", "ovs_interfaceid": "ce9ef951-3233-438a-91ab-5761a277635d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.843 251996 DEBUG nova.network.os_vif_util [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Converting VIF {"id": "ce9ef951-3233-438a-91ab-5761a277635d", "address": "fa:16:3e:2f:4f:5d", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce9ef951-32", "ovs_interfaceid": "ce9ef951-3233-438a-91ab-5761a277635d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.843 251996 DEBUG nova.network.os_vif_util [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2f:4f:5d,bridge_name='br-int',has_traffic_filtering=True,id=ce9ef951-3233-438a-91ab-5761a277635d,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce9ef951-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.844 251996 DEBUG os_vif [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:4f:5d,bridge_name='br-int',has_traffic_filtering=True,id=ce9ef951-3233-438a-91ab-5761a277635d,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce9ef951-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.845 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.845 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce9ef951-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.847 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.850 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.853 251996 INFO os_vif [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:4f:5d,bridge_name='br-int',has_traffic_filtering=True,id=ce9ef951-3233-438a-91ab-5761a277635d,network=Network(9edf259b-6a5e-4e11-938d-d631a412648e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce9ef951-32')
Dec 06 07:48:07 compute-0 podman[355982]: 2025-12-06 07:48:07.873250805 +0000 UTC m=+0.052416245 container remove 8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.878 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1995c38f-5d2c-4ef6-b4c1-f912a02532b9]: (4, ('Sat Dec  6 07:48:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e (8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5)\n8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5\nSat Dec  6 07:48:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e (8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5)\n8d1438ba6ca0054c425fd42f13d813664bbd2815114b656b20c97ece1d67b8d5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.880 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1e30ccdf-27df-4c86-93bc-c0dada615e1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.881 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9edf259b-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:07 compute-0 kernel: tap9edf259b-60: left promiscuous mode
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.883 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:07 compute-0 nova_compute[251992]: 2025-12-06 07:48:07.899 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.904 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ec0baec6-7b7b-4146-b9f3-e43d00b2940d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.921 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f48b6af1-24a7-4017-8b09-1872840fb683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.923 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0adff871-d870-4152-974c-4e18ae624b60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.938 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9d5e36-b690-4449-9a83-3be2098f704b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751965, 'reachable_time': 41700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356022, 'error': None, 'target': 'ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.942 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9edf259b-6a5e-4e11-938d-d631a412648e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:48:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d9edf259b\x2d6a5e\x2d4e11\x2d938d\x2dd631a412648e.mount: Deactivated successfully.
Dec 06 07:48:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:07.943 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[e3883e97-274d-4556-babd-a85d80798df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:08.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1771189620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 609 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.7 MiB/s wr, 163 op/s
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.318 251996 DEBUG nova.compute.manager [req-742bddc2-4c95-4fca-a0e7-6952000ad132 req-213ad1a0-c24d-434b-8bc6-e7ac37c40b76 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-unplugged-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.318 251996 DEBUG oslo_concurrency.lockutils [req-742bddc2-4c95-4fca-a0e7-6952000ad132 req-213ad1a0-c24d-434b-8bc6-e7ac37c40b76 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.319 251996 DEBUG oslo_concurrency.lockutils [req-742bddc2-4c95-4fca-a0e7-6952000ad132 req-213ad1a0-c24d-434b-8bc6-e7ac37c40b76 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.319 251996 DEBUG oslo_concurrency.lockutils [req-742bddc2-4c95-4fca-a0e7-6952000ad132 req-213ad1a0-c24d-434b-8bc6-e7ac37c40b76 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.319 251996 DEBUG nova.compute.manager [req-742bddc2-4c95-4fca-a0e7-6952000ad132 req-213ad1a0-c24d-434b-8bc6-e7ac37c40b76 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] No waiting events found dispatching network-vif-unplugged-14826742-0679-403f-b2e4-28fb0f26527a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.319 251996 DEBUG nova.compute.manager [req-742bddc2-4c95-4fca-a0e7-6952000ad132 req-213ad1a0-c24d-434b-8bc6-e7ac37c40b76 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-unplugged-14826742-0679-403f-b2e4-28fb0f26527a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.321 251996 DEBUG nova.compute.manager [req-db9cc803-2c01-4eb3-95a2-d34466cc2aa3 req-c52c13ed-1756-49ad-8dfa-308653bf90e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Received event network-vif-unplugged-ce9ef951-3233-438a-91ab-5761a277635d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.321 251996 DEBUG oslo_concurrency.lockutils [req-db9cc803-2c01-4eb3-95a2-d34466cc2aa3 req-c52c13ed-1756-49ad-8dfa-308653bf90e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.321 251996 DEBUG oslo_concurrency.lockutils [req-db9cc803-2c01-4eb3-95a2-d34466cc2aa3 req-c52c13ed-1756-49ad-8dfa-308653bf90e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.321 251996 DEBUG oslo_concurrency.lockutils [req-db9cc803-2c01-4eb3-95a2-d34466cc2aa3 req-c52c13ed-1756-49ad-8dfa-308653bf90e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.322 251996 DEBUG nova.compute.manager [req-db9cc803-2c01-4eb3-95a2-d34466cc2aa3 req-c52c13ed-1756-49ad-8dfa-308653bf90e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] No waiting events found dispatching network-vif-unplugged-ce9ef951-3233-438a-91ab-5761a277635d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.322 251996 DEBUG nova.compute.manager [req-db9cc803-2c01-4eb3-95a2-d34466cc2aa3 req-c52c13ed-1756-49ad-8dfa-308653bf90e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Received event network-vif-unplugged-ce9ef951-3233-438a-91ab-5761a277635d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:48:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:09.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:09.944 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.981 251996 DEBUG nova.network.neutron [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Updated VIF entry in instance network info cache for port ce9ef951-3233-438a-91ab-5761a277635d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:48:09 compute-0 nova_compute[251992]: 2025-12-06 07:48:09.982 251996 DEBUG nova.network.neutron [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Updating instance_info_cache with network_info: [{"id": "ce9ef951-3233-438a-91ab-5761a277635d", "address": "fa:16:3e:2f:4f:5d", "network": {"id": "9edf259b-6a5e-4e11-938d-d631a412648e", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-538461317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f093eaeb91c042dd8c85f5cd256c4394", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce9ef951-32", "ovs_interfaceid": "ce9ef951-3233-438a-91ab-5761a277635d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:48:10 compute-0 nova_compute[251992]: 2025-12-06 07:48:10.215 251996 DEBUG oslo_concurrency.lockutils [req-a5f64559-bde1-423a-b9a8-8c552f119d3c req-4150437f-56e3-4aed-a30c-908c080e3663 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f09d4c2b-5734-457e-93cf-a2b4f61e3afd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:48:10 compute-0 nova_compute[251992]: 2025-12-06 07:48:10.326 251996 DEBUG nova.network.neutron [-] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:48:10 compute-0 nova_compute[251992]: 2025-12-06 07:48:10.353 251996 INFO nova.compute.manager [-] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Took 3.01 seconds to deallocate network for instance.
Dec 06 07:48:10 compute-0 ceph-mon[74339]: pgmap v2815: 305 pgs: 305 active+clean; 609 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.7 MiB/s wr, 163 op/s
Dec 06 07:48:10 compute-0 nova_compute[251992]: 2025-12-06 07:48:10.413 251996 DEBUG oslo_concurrency.lockutils [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:10 compute-0 nova_compute[251992]: 2025-12-06 07:48:10.413 251996 DEBUG oslo_concurrency.lockutils [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:10.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:10 compute-0 nova_compute[251992]: 2025-12-06 07:48:10.524 251996 DEBUG oslo_concurrency.processutils [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:48:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Dec 06 07:48:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Dec 06 07:48:11 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.065 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 571 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 118 KiB/s rd, 8.0 KiB/s wr, 170 op/s
Dec 06 07:48:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:48:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3119992932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.230 251996 DEBUG oslo_concurrency.processutils [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.236 251996 DEBUG nova.compute.provider_tree [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.256 251996 DEBUG nova.scheduler.client.report [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.282 251996 DEBUG oslo_concurrency.lockutils [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.311 251996 INFO nova.scheduler.client.report [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Deleted allocations for instance 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5
Dec 06 07:48:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:11.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.410 251996 DEBUG oslo_concurrency.lockutils [None req-75f34faf-98f7-421a-87f7-09e6600659a5 e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.477 251996 DEBUG nova.compute.manager [req-facd646d-d225-4501-90be-95c6a73fdc65 req-fc4017ca-1f7a-4300-91dc-32896c1141d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.477 251996 DEBUG oslo_concurrency.lockutils [req-facd646d-d225-4501-90be-95c6a73fdc65 req-fc4017ca-1f7a-4300-91dc-32896c1141d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.478 251996 DEBUG oslo_concurrency.lockutils [req-facd646d-d225-4501-90be-95c6a73fdc65 req-fc4017ca-1f7a-4300-91dc-32896c1141d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.478 251996 DEBUG oslo_concurrency.lockutils [req-facd646d-d225-4501-90be-95c6a73fdc65 req-fc4017ca-1f7a-4300-91dc-32896c1141d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53b4413c-a38e-4ad9-9f1b-43babd1fe2a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.478 251996 DEBUG nova.compute.manager [req-facd646d-d225-4501-90be-95c6a73fdc65 req-fc4017ca-1f7a-4300-91dc-32896c1141d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] No waiting events found dispatching network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.478 251996 WARNING nova.compute.manager [req-facd646d-d225-4501-90be-95c6a73fdc65 req-fc4017ca-1f7a-4300-91dc-32896c1141d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received unexpected event network-vif-plugged-14826742-0679-403f-b2e4-28fb0f26527a for instance with vm_state deleted and task_state None.
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.479 251996 DEBUG nova.compute.manager [req-facd646d-d225-4501-90be-95c6a73fdc65 req-fc4017ca-1f7a-4300-91dc-32896c1141d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Received event network-vif-deleted-14826742-0679-403f-b2e4-28fb0f26527a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.484 251996 DEBUG nova.compute.manager [req-ea5932cd-2858-425b-9ed2-b9e23bf26799 req-33bc4181-d7d7-40dd-944f-001520118782 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Received event network-vif-plugged-ce9ef951-3233-438a-91ab-5761a277635d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.484 251996 DEBUG oslo_concurrency.lockutils [req-ea5932cd-2858-425b-9ed2-b9e23bf26799 req-33bc4181-d7d7-40dd-944f-001520118782 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.484 251996 DEBUG oslo_concurrency.lockutils [req-ea5932cd-2858-425b-9ed2-b9e23bf26799 req-33bc4181-d7d7-40dd-944f-001520118782 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.484 251996 DEBUG oslo_concurrency.lockutils [req-ea5932cd-2858-425b-9ed2-b9e23bf26799 req-33bc4181-d7d7-40dd-944f-001520118782 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.485 251996 DEBUG nova.compute.manager [req-ea5932cd-2858-425b-9ed2-b9e23bf26799 req-33bc4181-d7d7-40dd-944f-001520118782 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] No waiting events found dispatching network-vif-plugged-ce9ef951-3233-438a-91ab-5761a277635d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.485 251996 WARNING nova.compute.manager [req-ea5932cd-2858-425b-9ed2-b9e23bf26799 req-33bc4181-d7d7-40dd-944f-001520118782 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Received unexpected event network-vif-plugged-ce9ef951-3233-438a-91ab-5761a277635d for instance with vm_state active and task_state deleting.
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.940 251996 INFO nova.virt.libvirt.driver [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Deleting instance files /var/lib/nova/instances/f09d4c2b-5734-457e-93cf-a2b4f61e3afd_del
Dec 06 07:48:11 compute-0 nova_compute[251992]: 2025-12-06 07:48:11.941 251996 INFO nova.virt.libvirt.driver [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Deletion of /var/lib/nova/instances/f09d4c2b-5734-457e-93cf-a2b4f61e3afd_del complete
Dec 06 07:48:12 compute-0 ceph-mon[74339]: osdmap e366: 3 total, 3 up, 3 in
Dec 06 07:48:12 compute-0 ceph-mon[74339]: pgmap v2817: 305 pgs: 305 active+clean; 571 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 118 KiB/s rd, 8.0 KiB/s wr, 170 op/s
Dec 06 07:48:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3119992932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:12.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:12 compute-0 nova_compute[251992]: 2025-12-06 07:48:12.848 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:48:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:48:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:48:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:48:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:48:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:48:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 541 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.0 KiB/s wr, 65 op/s
Dec 06 07:48:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Dec 06 07:48:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Dec 06 07:48:13 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Dec 06 07:48:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:13.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:48:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 168K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 45K writes, 17K syncs, 2.66 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8006 writes, 28K keys, 8006 commit groups, 1.0 writes per commit group, ingest: 28.43 MB, 0.05 MB/s
                                           Interval WAL: 8006 writes, 3292 syncs, 2.43 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 07:48:14 compute-0 ceph-mon[74339]: pgmap v2818: 305 pgs: 305 active+clean; 541 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 3.0 KiB/s wr, 65 op/s
Dec 06 07:48:14 compute-0 ceph-mon[74339]: osdmap e367: 3 total, 3 up, 3 in
Dec 06 07:48:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:14.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 522 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 5.6 KiB/s wr, 111 op/s
Dec 06 07:48:15 compute-0 ceph-mon[74339]: pgmap v2820: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 522 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 5.6 KiB/s wr, 111 op/s
Dec 06 07:48:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:15.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Dec 06 07:48:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Dec 06 07:48:15 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Dec 06 07:48:15 compute-0 nova_compute[251992]: 2025-12-06 07:48:15.974 251996 INFO nova.compute.manager [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Took 8.59 seconds to destroy the instance on the hypervisor.
Dec 06 07:48:15 compute-0 nova_compute[251992]: 2025-12-06 07:48:15.975 251996 DEBUG oslo.service.loopingcall [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:48:15 compute-0 nova_compute[251992]: 2025-12-06 07:48:15.976 251996 DEBUG nova.compute.manager [-] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:48:15 compute-0 nova_compute[251992]: 2025-12-06 07:48:15.977 251996 DEBUG nova.network.neutron [-] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.076 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.350 251996 DEBUG oslo_concurrency.lockutils [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.351 251996 DEBUG oslo_concurrency.lockutils [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.351 251996 DEBUG oslo_concurrency.lockutils [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.351 251996 DEBUG oslo_concurrency.lockutils [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.352 251996 DEBUG oslo_concurrency.lockutils [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.353 251996 INFO nova.compute.manager [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Terminating instance
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.354 251996 DEBUG nova.compute.manager [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:48:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:16.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:16 compute-0 kernel: tap450480d9-e0 (unregistering): left promiscuous mode
Dec 06 07:48:16 compute-0 NetworkManager[48965]: <info>  [1765007296.5321] device (tap450480d9-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.538 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:16 compute-0 ovn_controller[147168]: 2025-12-06T07:48:16Z|00609|binding|INFO|Releasing lport 450480d9-e0c3-414d-ba7e-8b996711a653 from this chassis (sb_readonly=0)
Dec 06 07:48:16 compute-0 ovn_controller[147168]: 2025-12-06T07:48:16Z|00610|binding|INFO|Setting lport 450480d9-e0c3-414d-ba7e-8b996711a653 down in Southbound
Dec 06 07:48:16 compute-0 ovn_controller[147168]: 2025-12-06T07:48:16Z|00611|binding|INFO|Removing iface tap450480d9-e0 ovn-installed in OVS
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.540 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:16.549 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:3b:e9 10.100.0.3'], port_security=['fa:16:3e:ed:3b:e9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '53cabacd-b2a5-4ad1-a97a-0d0710d43bf9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f44ecb8bdc7e4692a299e29603301124', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7dea2a71-d8ba-42ad-bebb-b2c31a9e3976', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef95e15f-f36a-4631-8598-89c7e0374fce, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=450480d9-e0c3-414d-ba7e-8b996711a653) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:48:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:16.550 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 450480d9-e0c3-414d-ba7e-8b996711a653 in datapath 6d1a17d6-5e44-40b7-832a-81cb86c02e71 unbound from our chassis
Dec 06 07:48:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:16.552 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1a17d6-5e44-40b7-832a-81cb86c02e71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:48:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:16.554 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5a312ed4-dbfb-41ec-a3f2-40f5ab35b4fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:16.556 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 namespace which is not needed anymore
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.557 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:16 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Dec 06 07:48:16 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000009a.scope: Consumed 21.177s CPU time.
Dec 06 07:48:16 compute-0 systemd-machined[212986]: Machine qemu-72-instance-0000009a terminated.
Dec 06 07:48:16 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350497]: [NOTICE]   (350501) : haproxy version is 2.8.14-c23fe91
Dec 06 07:48:16 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350497]: [NOTICE]   (350501) : path to executable is /usr/sbin/haproxy
Dec 06 07:48:16 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350497]: [WARNING]  (350501) : Exiting Master process...
Dec 06 07:48:16 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350497]: [ALERT]    (350501) : Current worker (350503) exited with code 143 (Terminated)
Dec 06 07:48:16 compute-0 neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71[350497]: [WARNING]  (350501) : All workers exited. Exiting... (0)
Dec 06 07:48:16 compute-0 systemd[1]: libpod-d2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3.scope: Deactivated successfully.
Dec 06 07:48:16 compute-0 podman[356074]: 2025-12-06 07:48:16.703547241 +0000 UTC m=+0.054035660 container died d2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.779 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.784 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.796 251996 INFO nova.virt.libvirt.driver [-] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Instance destroyed successfully.
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.796 251996 DEBUG nova.objects.instance [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lazy-loading 'resources' on Instance uuid 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.809 251996 DEBUG nova.compute.manager [req-7c57d604-75dd-4c9d-b7dc-898e885a356d req-73047eab-b602-4e9f-ad2b-db6d9d6aa539 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-unplugged-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.810 251996 DEBUG oslo_concurrency.lockutils [req-7c57d604-75dd-4c9d-b7dc-898e885a356d req-73047eab-b602-4e9f-ad2b-db6d9d6aa539 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.810 251996 DEBUG oslo_concurrency.lockutils [req-7c57d604-75dd-4c9d-b7dc-898e885a356d req-73047eab-b602-4e9f-ad2b-db6d9d6aa539 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.810 251996 DEBUG oslo_concurrency.lockutils [req-7c57d604-75dd-4c9d-b7dc-898e885a356d req-73047eab-b602-4e9f-ad2b-db6d9d6aa539 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.811 251996 DEBUG nova.compute.manager [req-7c57d604-75dd-4c9d-b7dc-898e885a356d req-73047eab-b602-4e9f-ad2b-db6d9d6aa539 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] No waiting events found dispatching network-vif-unplugged-450480d9-e0c3-414d-ba7e-8b996711a653 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.811 251996 DEBUG nova.compute.manager [req-7c57d604-75dd-4c9d-b7dc-898e885a356d req-73047eab-b602-4e9f-ad2b-db6d9d6aa539 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-unplugged-450480d9-e0c3-414d-ba7e-8b996711a653 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.814 251996 DEBUG nova.virt.libvirt.vif [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:43:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1644415942',display_name='tempest-ServerStableDeviceRescueTest-server-1644415942',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1644415942',id=154,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:44:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f44ecb8bdc7e4692a299e29603301124',ramdisk_id='',reservation_id='r-z5s6nndr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1830949011',owner_user_name='tempest-ServerStableDeviceRescueTest-1830949011-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:44:52Z,user_data=None,user_id='e997a5eeee174b368a43ed8cb35fa1d0',uuid=53cabacd-b2a5-4ad1-a97a-0d0710d43bf9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.815 251996 DEBUG nova.network.os_vif_util [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converting VIF {"id": "450480d9-e0c3-414d-ba7e-8b996711a653", "address": "fa:16:3e:ed:3b:e9", "network": {"id": "6d1a17d6-5e44-40b7-832a-81cb86c02e71", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1698704235-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f44ecb8bdc7e4692a299e29603301124", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap450480d9-e0", "ovs_interfaceid": "450480d9-e0c3-414d-ba7e-8b996711a653", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.815 251996 DEBUG nova.network.os_vif_util [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:3b:e9,bridge_name='br-int',has_traffic_filtering=True,id=450480d9-e0c3-414d-ba7e-8b996711a653,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap450480d9-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.816 251996 DEBUG os_vif [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:3b:e9,bridge_name='br-int',has_traffic_filtering=True,id=450480d9-e0c3-414d-ba7e-8b996711a653,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap450480d9-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.817 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.818 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap450480d9-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.821 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.823 251996 INFO os_vif [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:3b:e9,bridge_name='br-int',has_traffic_filtering=True,id=450480d9-e0c3-414d-ba7e-8b996711a653,network=Network(6d1a17d6-5e44-40b7-832a-81cb86c02e71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap450480d9-e0')
Dec 06 07:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3-userdata-shm.mount: Deactivated successfully.
Dec 06 07:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4920e51eacefa4a428811437aac51635418e747a2d1cdb2c3b3e73f60176450-merged.mount: Deactivated successfully.
Dec 06 07:48:16 compute-0 podman[356074]: 2025-12-06 07:48:16.889921229 +0000 UTC m=+0.240409648 container cleanup d2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:48:16 compute-0 systemd[1]: libpod-conmon-d2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3.scope: Deactivated successfully.
Dec 06 07:48:16 compute-0 ceph-mon[74339]: osdmap e368: 3 total, 3 up, 3 in
Dec 06 07:48:16 compute-0 podman[356130]: 2025-12-06 07:48:16.979609909 +0000 UTC m=+0.068362795 container remove d2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:48:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:16.986 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8a733be7-4d09-48d8-8984-e849d7fcd6cf]: (4, ('Sat Dec  6 07:48:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (d2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3)\nd2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3\nSat Dec  6 07:48:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 (d2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3)\nd2e42d4f8130d38681c0d496d160bd71ca3f54d81a701b767e2b0b5ddc4201b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:16.989 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6c298c3d-794f-4f42-b73d-4d40073f680c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:16.990 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a17d6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:16 compute-0 nova_compute[251992]: 2025-12-06 07:48:16.992 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:16 compute-0 kernel: tap6d1a17d6-50: left promiscuous mode
Dec 06 07:48:17 compute-0 nova_compute[251992]: 2025-12-06 07:48:17.006 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:17.009 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[12e84015-e27f-42f7-9f70-518905de474e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:17.029 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d817e5d0-549a-41ab-8ea2-de18f00d9b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:17.031 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[938d39cb-230e-4bab-8516-27669cb46774]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:17.049 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3945e289-f5f8-4358-a022-9e1c3d4ae679]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 736424, 'reachable_time': 18766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356147, 'error': None, 'target': 'ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d6d1a17d6\x2d5e44\x2d40b7\x2d832a\x2d81cb86c02e71.mount: Deactivated successfully.
Dec 06 07:48:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:17.052 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d1a17d6-5e44-40b7-832a-81cb86c02e71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:48:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:17.052 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[f6601d74-bc03-41e7-a374-016f81cd6dc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:48:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 506 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 75 KiB/s rd, 4.9 KiB/s wr, 103 op/s
Dec 06 07:48:17 compute-0 nova_compute[251992]: 2025-12-06 07:48:17.140 251996 DEBUG nova.network.neutron [-] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:48:17 compute-0 nova_compute[251992]: 2025-12-06 07:48:17.160 251996 INFO nova.compute.manager [-] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Took 1.18 seconds to deallocate network for instance.
Dec 06 07:48:17 compute-0 nova_compute[251992]: 2025-12-06 07:48:17.221 251996 DEBUG oslo_concurrency.lockutils [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:17 compute-0 nova_compute[251992]: 2025-12-06 07:48:17.221 251996 DEBUG oslo_concurrency.lockutils [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:17 compute-0 nova_compute[251992]: 2025-12-06 07:48:17.275 251996 DEBUG nova.compute.manager [req-038e6c9b-208a-420f-9d9b-5b91f3750b19 req-562631c9-2f2a-4fa6-a2ff-57f20a7e8518 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Received event network-vif-deleted-ce9ef951-3233-438a-91ab-5761a277635d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:17 compute-0 nova_compute[251992]: 2025-12-06 07:48:17.300 251996 DEBUG oslo_concurrency.processutils [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:48:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:17.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:48:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128666931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:17 compute-0 nova_compute[251992]: 2025-12-06 07:48:17.759 251996 DEBUG oslo_concurrency.processutils [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:48:17 compute-0 nova_compute[251992]: 2025-12-06 07:48:17.766 251996 DEBUG nova.compute.provider_tree [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:48:17 compute-0 nova_compute[251992]: 2025-12-06 07:48:17.853 251996 DEBUG nova.scheduler.client.report [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:48:17 compute-0 nova_compute[251992]: 2025-12-06 07:48:17.885 251996 DEBUG oslo_concurrency.lockutils [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:18 compute-0 ceph-mon[74339]: pgmap v2822: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 506 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 75 KiB/s rd, 4.9 KiB/s wr, 103 op/s
Dec 06 07:48:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2535541170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:48:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2535541170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:48:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2128666931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:18 compute-0 nova_compute[251992]: 2025-12-06 07:48:18.099 251996 INFO nova.virt.libvirt.driver [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Deleting instance files /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_del
Dec 06 07:48:18 compute-0 nova_compute[251992]: 2025-12-06 07:48:18.099 251996 INFO nova.virt.libvirt.driver [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Deletion of /var/lib/nova/instances/53cabacd-b2a5-4ad1-a97a-0d0710d43bf9_del complete
Dec 06 07:48:18 compute-0 nova_compute[251992]: 2025-12-06 07:48:18.185 251996 INFO nova.scheduler.client.report [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Deleted allocations for instance f09d4c2b-5734-457e-93cf-a2b4f61e3afd
Dec 06 07:48:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:18.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:48:18
Dec 06 07:48:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:48:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:48:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec 06 07:48:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:48:18 compute-0 nova_compute[251992]: 2025-12-06 07:48:18.648 251996 INFO nova.compute.manager [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Took 2.29 seconds to destroy the instance on the hypervisor.
Dec 06 07:48:18 compute-0 nova_compute[251992]: 2025-12-06 07:48:18.649 251996 DEBUG oslo.service.loopingcall [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:48:18 compute-0 nova_compute[251992]: 2025-12-06 07:48:18.649 251996 DEBUG nova.compute.manager [-] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:48:18 compute-0 nova_compute[251992]: 2025-12-06 07:48:18.650 251996 DEBUG nova.network.neutron [-] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:48:18 compute-0 nova_compute[251992]: 2025-12-06 07:48:18.683 251996 DEBUG oslo_concurrency.lockutils [None req-8bc7a5da-44e4-4928-8941-2a623c7f3e29 89d63d29c7534f70817e13d23cada716 f093eaeb91c042dd8c85f5cd256c4394 - - default default] Lock "f09d4c2b-5734-457e-93cf-a2b4f61e3afd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.028 251996 DEBUG nova.compute.manager [req-026146c9-dab3-44c2-ab4a-cd526c07c338 req-b2b5b183-c12d-4e3c-aacb-c9eb8ddc4fe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.029 251996 DEBUG oslo_concurrency.lockutils [req-026146c9-dab3-44c2-ab4a-cd526c07c338 req-b2b5b183-c12d-4e3c-aacb-c9eb8ddc4fe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.029 251996 DEBUG oslo_concurrency.lockutils [req-026146c9-dab3-44c2-ab4a-cd526c07c338 req-b2b5b183-c12d-4e3c-aacb-c9eb8ddc4fe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.029 251996 DEBUG oslo_concurrency.lockutils [req-026146c9-dab3-44c2-ab4a-cd526c07c338 req-b2b5b183-c12d-4e3c-aacb-c9eb8ddc4fe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.030 251996 DEBUG nova.compute.manager [req-026146c9-dab3-44c2-ab4a-cd526c07c338 req-b2b5b183-c12d-4e3c-aacb-c9eb8ddc4fe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] No waiting events found dispatching network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.030 251996 WARNING nova.compute.manager [req-026146c9-dab3-44c2-ab4a-cd526c07c338 req-b2b5b183-c12d-4e3c-aacb-c9eb8ddc4fe7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received unexpected event network-vif-plugged-450480d9-e0c3-414d-ba7e-8b996711a653 for instance with vm_state active and task_state deleting.
Dec 06 07:48:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Dec 06 07:48:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 462 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 5.9 KiB/s wr, 110 op/s
Dec 06 07:48:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Dec 06 07:48:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Dec 06 07:48:19 compute-0 ceph-mon[74339]: pgmap v2823: 305 pgs: 305 active+clean; 462 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 5.9 KiB/s wr, 110 op/s
Dec 06 07:48:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2916215043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:19 compute-0 ceph-mon[74339]: osdmap e369: 3 total, 3 up, 3 in
Dec 06 07:48:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:19.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.787 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.787 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.788 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.788 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.788 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:48:19 compute-0 nova_compute[251992]: 2025-12-06 07:48:19.827 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.137 251996 DEBUG nova.network.neutron [-] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.158 251996 INFO nova.compute.manager [-] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Took 1.51 seconds to deallocate network for instance.
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.224 251996 DEBUG oslo_concurrency.lockutils [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.225 251996 DEBUG oslo_concurrency.lockutils [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:48:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2907350927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.243 251996 DEBUG nova.compute.manager [req-16e62613-70c7-4a1c-b340-88e47b748425 req-98930ea0-5a29-4ef9-b977-233e485dbb24 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Received event network-vif-deleted-450480d9-e0c3-414d-ba7e-8b996711a653 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:48:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1058672723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.262 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.274 251996 DEBUG oslo_concurrency.processutils [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:48:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:20.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.504 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.505 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4205MB free_disk=20.868587493896484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.506 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:48:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:48:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1005909562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.784 251996 DEBUG oslo_concurrency.processutils [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.789 251996 DEBUG nova.compute.provider_tree [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.834 251996 DEBUG nova.scheduler.client.report [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.869 251996 DEBUG oslo_concurrency.lockutils [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.872 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:48:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Dec 06 07:48:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Dec 06 07:48:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.915 251996 INFO nova.scheduler.client.report [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Deleted allocations for instance 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.940 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.941 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.960 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:48:20 compute-0 nova_compute[251992]: 2025-12-06 07:48:20.996 251996 DEBUG oslo_concurrency.lockutils [None req-0c92fc43-00fe-475b-88ea-ab71ac7bcbaf e997a5eeee174b368a43ed8cb35fa1d0 f44ecb8bdc7e4692a299e29603301124 - - default default] Lock "53cabacd-b2a5-4ad1-a97a-0d0710d43bf9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:21 compute-0 nova_compute[251992]: 2025-12-06 07:48:21.078 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 393 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 3.3 KiB/s wr, 66 op/s
Dec 06 07:48:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2907350927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1005909562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:21 compute-0 ceph-mon[74339]: osdmap e370: 3 total, 3 up, 3 in
Dec 06 07:48:21 compute-0 ceph-mon[74339]: pgmap v2826: 305 pgs: 305 active+clean; 393 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 3.3 KiB/s wr, 66 op/s
Dec 06 07:48:21 compute-0 nova_compute[251992]: 2025-12-06 07:48:21.379 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007286.3775172, 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:48:21 compute-0 nova_compute[251992]: 2025-12-06 07:48:21.380 251996 INFO nova.compute.manager [-] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] VM Stopped (Lifecycle Event)
Dec 06 07:48:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:21.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:21 compute-0 nova_compute[251992]: 2025-12-06 07:48:21.408 251996 DEBUG nova.compute.manager [None req-a63d28d6-cd8f-4a28-bcba-5e3a5a504150 - - - - - -] [instance: 53b4413c-a38e-4ad9-9f1b-43babd1fe2a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:48:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:48:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/569971451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:21 compute-0 nova_compute[251992]: 2025-12-06 07:48:21.447 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:48:21 compute-0 nova_compute[251992]: 2025-12-06 07:48:21.452 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:48:21 compute-0 nova_compute[251992]: 2025-12-06 07:48:21.503 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:48:21 compute-0 nova_compute[251992]: 2025-12-06 07:48:21.532 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:48:21 compute-0 nova_compute[251992]: 2025-12-06 07:48:21.532 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:48:21 compute-0 nova_compute[251992]: 2025-12-06 07:48:21.819 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/569971451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:22.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 07:48:22 compute-0 sudo[356240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:22 compute-0 sudo[356240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:22 compute-0 sudo[356240]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:22 compute-0 sudo[356265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:22 compute-0 sudo[356265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:22 compute-0 sudo[356265]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:22 compute-0 nova_compute[251992]: 2025-12-06 07:48:22.819 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007287.81685, f09d4c2b-5734-457e-93cf-a2b4f61e3afd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:48:22 compute-0 nova_compute[251992]: 2025-12-06 07:48:22.820 251996 INFO nova.compute.manager [-] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] VM Stopped (Lifecycle Event)
Dec 06 07:48:22 compute-0 nova_compute[251992]: 2025-12-06 07:48:22.855 251996 DEBUG nova.compute.manager [None req-a25062b2-58e2-450b-b822-834117ffe1c5 - - - - - -] [instance: f09d4c2b-5734-457e-93cf-a2b4f61e3afd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:48:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Dec 06 07:48:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Dec 06 07:48:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Dec 06 07:48:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 356 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 4.8 KiB/s wr, 100 op/s
Dec 06 07:48:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:23.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:48:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:48:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:24.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:24 compute-0 ceph-mon[74339]: osdmap e371: 3 total, 3 up, 3 in
Dec 06 07:48:24 compute-0 ceph-mon[74339]: pgmap v2828: 305 pgs: 305 active+clean; 356 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 4.8 KiB/s wr, 100 op/s
Dec 06 07:48:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:48:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:48:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:48:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:48:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:48:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 232 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 5.7 KiB/s wr, 155 op/s
Dec 06 07:48:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:25.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:25 compute-0 ceph-mon[74339]: pgmap v2829: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 232 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 5.7 KiB/s wr, 155 op/s
Dec 06 07:48:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/748425604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:25 compute-0 nova_compute[251992]: 2025-12-06 07:48:25.991 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020530895216824864 of space, bias 1.0, pg target 0.6159268565047459 quantized to 32 (current 32)
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0026147083798622743 of space, bias 1.0, pg target 0.7844125139586823 quantized to 32 (current 32)
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:48:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:48:26 compute-0 nova_compute[251992]: 2025-12-06 07:48:26.357 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:26 compute-0 nova_compute[251992]: 2025-12-06 07:48:26.378 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:26.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:26 compute-0 nova_compute[251992]: 2025-12-06 07:48:26.822 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 164 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 101 KiB/s rd, 6.1 KiB/s wr, 149 op/s
Dec 06 07:48:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:27.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:27 compute-0 podman[356294]: 2025-12-06 07:48:27.418896143 +0000 UTC m=+0.079607139 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 06 07:48:27 compute-0 ceph-mon[74339]: pgmap v2830: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 164 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 101 KiB/s rd, 6.1 KiB/s wr, 149 op/s
Dec 06 07:48:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:28.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 122 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 6.6 KiB/s wr, 154 op/s
Dec 06 07:48:29 compute-0 ceph-mon[74339]: pgmap v2831: 305 pgs: 305 active+clean; 122 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 6.6 KiB/s wr, 154 op/s
Dec 06 07:48:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 07:48:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:29.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 07:48:29 compute-0 nova_compute[251992]: 2025-12-06 07:48:29.533 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:29 compute-0 nova_compute[251992]: 2025-12-06 07:48:29.534 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:29 compute-0 nova_compute[251992]: 2025-12-06 07:48:29.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:30.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:30 compute-0 nova_compute[251992]: 2025-12-06 07:48:30.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:30 compute-0 nova_compute[251992]: 2025-12-06 07:48:30.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:48:30 compute-0 nova_compute[251992]: 2025-12-06 07:48:30.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:48:30 compute-0 nova_compute[251992]: 2025-12-06 07:48:30.726 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:48:30 compute-0 nova_compute[251992]: 2025-12-06 07:48:30.726 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Dec 06 07:48:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Dec 06 07:48:30 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Dec 06 07:48:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2280994863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 5.9 KiB/s wr, 137 op/s
Dec 06 07:48:31 compute-0 nova_compute[251992]: 2025-12-06 07:48:31.361 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:31.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:31 compute-0 nova_compute[251992]: 2025-12-06 07:48:31.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:31 compute-0 nova_compute[251992]: 2025-12-06 07:48:31.795 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007296.794404, 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:48:31 compute-0 nova_compute[251992]: 2025-12-06 07:48:31.795 251996 INFO nova.compute.manager [-] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] VM Stopped (Lifecycle Event)
Dec 06 07:48:31 compute-0 nova_compute[251992]: 2025-12-06 07:48:31.823 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:31 compute-0 nova_compute[251992]: 2025-12-06 07:48:31.966 251996 DEBUG nova.compute.manager [None req-0e5b30d2-acdc-428e-b440-076ab478c726 - - - - - -] [instance: 53cabacd-b2a5-4ad1-a97a-0d0710d43bf9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:48:32 compute-0 ceph-mon[74339]: osdmap e372: 3 total, 3 up, 3 in
Dec 06 07:48:32 compute-0 ceph-mon[74339]: pgmap v2833: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 5.9 KiB/s wr, 137 op/s
Dec 06 07:48:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2733307890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:32.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 4.8 KiB/s wr, 112 op/s
Dec 06 07:48:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2858422135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:33 compute-0 sshd-session[356323]: banner exchange: Connection from 216.218.206.68 port 38814: invalid format
Dec 06 07:48:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:33.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:33 compute-0 nova_compute[251992]: 2025-12-06 07:48:33.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:33 compute-0 nova_compute[251992]: 2025-12-06 07:48:33.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:48:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:34.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:34 compute-0 ceph-mon[74339]: pgmap v2834: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 4.8 KiB/s wr, 112 op/s
Dec 06 07:48:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 2.6 KiB/s wr, 53 op/s
Dec 06 07:48:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:35.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:35 compute-0 ceph-mon[74339]: pgmap v2835: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 2.6 KiB/s wr, 53 op/s
Dec 06 07:48:35 compute-0 nova_compute[251992]: 2025-12-06 07:48:35.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:35 compute-0 nova_compute[251992]: 2025-12-06 07:48:35.659 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:35 compute-0 nova_compute[251992]: 2025-12-06 07:48:35.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:48:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:36 compute-0 nova_compute[251992]: 2025-12-06 07:48:36.240 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:48:36 compute-0 nova_compute[251992]: 2025-12-06 07:48:36.362 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:36 compute-0 podman[356325]: 2025-12-06 07:48:36.40184067 +0000 UTC m=+0.056108895 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:48:36 compute-0 podman[356326]: 2025-12-06 07:48:36.403587738 +0000 UTC m=+0.056379782 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:48:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:36.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3510655874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:48:36 compute-0 nova_compute[251992]: 2025-12-06 07:48:36.825 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 07:48:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:37.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:38.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 614 B/s wr, 14 op/s
Dec 06 07:48:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:39.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:40.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:48:41 compute-0 nova_compute[251992]: 2025-12-06 07:48:41.364 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:41.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:41 compute-0 ceph-mon[74339]: pgmap v2836: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 06 07:48:41 compute-0 nova_compute[251992]: 2025-12-06 07:48:41.826 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:42.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:42 compute-0 ceph-mon[74339]: pgmap v2837: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 614 B/s wr, 14 op/s
Dec 06 07:48:42 compute-0 ceph-mon[74339]: pgmap v2838: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:48:42 compute-0 sudo[356364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:42 compute-0 sudo[356364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:42 compute-0 sudo[356364]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:42 compute-0 sudo[356389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:42 compute-0 sudo[356389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:42 compute-0 sudo[356389]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:48:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:48:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:48:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:48:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:48:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:48:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:48:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:43.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:44 compute-0 ceph-mon[74339]: pgmap v2839: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:48:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:44.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 155 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Dec 06 07:48:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:45.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:45 compute-0 ceph-mon[74339]: pgmap v2840: 305 pgs: 305 active+clean; 155 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Dec 06 07:48:46 compute-0 nova_compute[251992]: 2025-12-06 07:48:46.366 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:46.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:46 compute-0 nova_compute[251992]: 2025-12-06 07:48:46.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:46 compute-0 nova_compute[251992]: 2025-12-06 07:48:46.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:48:46 compute-0 nova_compute[251992]: 2025-12-06 07:48:46.828 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:47 compute-0 ceph-mon[74339]: pgmap v2841: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:47.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:48.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:49 compute-0 ceph-mon[74339]: pgmap v2842: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:49.284 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:48:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:49.286 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:48:49 compute-0 nova_compute[251992]: 2025-12-06 07:48:49.313 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:49.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:50.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:50 compute-0 sudo[356419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:50 compute-0 sudo[356419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:50 compute-0 sudo[356419]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:50 compute-0 sudo[356444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:48:50 compute-0 sudo[356444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:50 compute-0 sudo[356444]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:51 compute-0 sudo[356469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:51 compute-0 sudo[356469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:51 compute-0 sudo[356469]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:51 compute-0 sudo[356495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 07:48:51 compute-0 sudo[356495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:51 compute-0 ceph-mon[74339]: pgmap v2843: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:51 compute-0 nova_compute[251992]: 2025-12-06 07:48:51.368 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:51.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:51 compute-0 podman[356592]: 2025-12-06 07:48:51.523354612 +0000 UTC m=+0.066663759 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 07:48:51 compute-0 nova_compute[251992]: 2025-12-06 07:48:51.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:48:51 compute-0 podman[356592]: 2025-12-06 07:48:51.660463002 +0000 UTC m=+0.203772149 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:48:51 compute-0 nova_compute[251992]: 2025-12-06 07:48:51.830 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:52 compute-0 podman[356745]: 2025-12-06 07:48:52.243320648 +0000 UTC m=+0.062472037 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 07:48:52 compute-0 podman[356745]: 2025-12-06 07:48:52.252388862 +0000 UTC m=+0.071540221 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 07:48:52 compute-0 podman[356810]: 2025-12-06 07:48:52.436412008 +0000 UTC m=+0.052108348 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, com.redhat.component=keepalived-container, name=keepalived, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, release=1793, io.openshift.expose-services=, version=2.2.4)
Dec 06 07:48:52 compute-0 podman[356810]: 2025-12-06 07:48:52.450440736 +0000 UTC m=+0.066137046 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, io.openshift.expose-services=, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, release=1793, architecture=x86_64, io.buildah.version=1.28.2)
Dec 06 07:48:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:52.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:52 compute-0 sudo[356495]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:48:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:48:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:48:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:48:52 compute-0 sudo[356844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:52 compute-0 sudo[356844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:52 compute-0 sudo[356844]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:52 compute-0 sudo[356869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:48:52 compute-0 sudo[356869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:52 compute-0 sudo[356869]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:52 compute-0 sudo[356894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:52 compute-0 sudo[356894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:52 compute-0 sudo[356894]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:52 compute-0 sudo[356919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:48:52 compute-0 sudo[356919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:53 compute-0 sudo[356919]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:48:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:48:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:48:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:48:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:48:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:48:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:53.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:48:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:48:53 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 54ed6733-119c-44a4-866a-a9a46cb0cc72 does not exist
Dec 06 07:48:53 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 669424e4-ff3c-4e8b-bc77-d5fb7f011c77 does not exist
Dec 06 07:48:53 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4a932ff4-d09a-4ee5-8054-0cd2657a7a6a does not exist
Dec 06 07:48:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:48:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:48:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:48:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:48:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:48:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:48:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:48:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:48:53 compute-0 ceph-mon[74339]: pgmap v2844: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:48:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:48:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:48:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:48:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:48:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:48:53 compute-0 sudo[356977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:53 compute-0 sudo[356977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:53 compute-0 sudo[356977]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:53 compute-0 sudo[357002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:48:53 compute-0 sudo[357002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:53 compute-0 sudo[357002]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:53 compute-0 sudo[357027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:53 compute-0 sudo[357027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:53 compute-0 sudo[357027]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:53 compute-0 sudo[357052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:48:53 compute-0 sudo[357052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:54 compute-0 podman[357118]: 2025-12-06 07:48:54.026468579 +0000 UTC m=+0.037221636 container create 67ab3328a7c8d45aa724be48a595fdb6e05c05c7dadcb5d1021d78066d53e312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lichterman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:48:54 compute-0 systemd[1]: Started libpod-conmon-67ab3328a7c8d45aa724be48a595fdb6e05c05c7dadcb5d1021d78066d53e312.scope.
Dec 06 07:48:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:48:54 compute-0 podman[357118]: 2025-12-06 07:48:54.009572953 +0000 UTC m=+0.020326030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:48:54 compute-0 podman[357118]: 2025-12-06 07:48:54.118390789 +0000 UTC m=+0.129143856 container init 67ab3328a7c8d45aa724be48a595fdb6e05c05c7dadcb5d1021d78066d53e312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:48:54 compute-0 podman[357118]: 2025-12-06 07:48:54.127116625 +0000 UTC m=+0.137869682 container start 67ab3328a7c8d45aa724be48a595fdb6e05c05c7dadcb5d1021d78066d53e312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:48:54 compute-0 podman[357118]: 2025-12-06 07:48:54.130957298 +0000 UTC m=+0.141710355 container attach 67ab3328a7c8d45aa724be48a595fdb6e05c05c7dadcb5d1021d78066d53e312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:48:54 compute-0 sleepy_lichterman[357134]: 167 167
Dec 06 07:48:54 compute-0 systemd[1]: libpod-67ab3328a7c8d45aa724be48a595fdb6e05c05c7dadcb5d1021d78066d53e312.scope: Deactivated successfully.
Dec 06 07:48:54 compute-0 conmon[357134]: conmon 67ab3328a7c8d45aa724 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67ab3328a7c8d45aa724be48a595fdb6e05c05c7dadcb5d1021d78066d53e312.scope/container/memory.events
Dec 06 07:48:54 compute-0 podman[357118]: 2025-12-06 07:48:54.134879534 +0000 UTC m=+0.145632591 container died 67ab3328a7c8d45aa724be48a595fdb6e05c05c7dadcb5d1021d78066d53e312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lichterman, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c2184b7dcd856fa357750bb341643562df6f28e9bf4c7f33d1973146a074c70-merged.mount: Deactivated successfully.
Dec 06 07:48:54 compute-0 podman[357118]: 2025-12-06 07:48:54.175082339 +0000 UTC m=+0.185835396 container remove 67ab3328a7c8d45aa724be48a595fdb6e05c05c7dadcb5d1021d78066d53e312 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 07:48:54 compute-0 systemd[1]: libpod-conmon-67ab3328a7c8d45aa724be48a595fdb6e05c05c7dadcb5d1021d78066d53e312.scope: Deactivated successfully.
Dec 06 07:48:54 compute-0 podman[357156]: 2025-12-06 07:48:54.326523565 +0000 UTC m=+0.039050965 container create 3f98d223f95ce455372c944c7b3d4dbe402ba4b3ab60527b1c97dc424351c753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:48:54 compute-0 systemd[1]: Started libpod-conmon-3f98d223f95ce455372c944c7b3d4dbe402ba4b3ab60527b1c97dc424351c753.scope.
Dec 06 07:48:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e46d3de3b4cefb94ddf9a503322988853b011b72e4f1e00d1bd945f26b48ac8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e46d3de3b4cefb94ddf9a503322988853b011b72e4f1e00d1bd945f26b48ac8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e46d3de3b4cefb94ddf9a503322988853b011b72e4f1e00d1bd945f26b48ac8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e46d3de3b4cefb94ddf9a503322988853b011b72e4f1e00d1bd945f26b48ac8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e46d3de3b4cefb94ddf9a503322988853b011b72e4f1e00d1bd945f26b48ac8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:48:54 compute-0 podman[357156]: 2025-12-06 07:48:54.309435734 +0000 UTC m=+0.021963154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:48:54 compute-0 podman[357156]: 2025-12-06 07:48:54.408160917 +0000 UTC m=+0.120688337 container init 3f98d223f95ce455372c944c7b3d4dbe402ba4b3ab60527b1c97dc424351c753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_archimedes, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:48:54 compute-0 podman[357156]: 2025-12-06 07:48:54.416184494 +0000 UTC m=+0.128711894 container start 3f98d223f95ce455372c944c7b3d4dbe402ba4b3ab60527b1c97dc424351c753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_archimedes, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:48:54 compute-0 podman[357156]: 2025-12-06 07:48:54.420581522 +0000 UTC m=+0.133108932 container attach 3f98d223f95ce455372c944c7b3d4dbe402ba4b3ab60527b1c97dc424351c753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 07:48:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:54.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/994656536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:48:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2368498693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:48:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:55 compute-0 reverent_archimedes[357172]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:48:55 compute-0 reverent_archimedes[357172]: --> relative data size: 1.0
Dec 06 07:48:55 compute-0 reverent_archimedes[357172]: --> All data devices are unavailable
Dec 06 07:48:55 compute-0 systemd[1]: libpod-3f98d223f95ce455372c944c7b3d4dbe402ba4b3ab60527b1c97dc424351c753.scope: Deactivated successfully.
Dec 06 07:48:55 compute-0 podman[357156]: 2025-12-06 07:48:55.238042898 +0000 UTC m=+0.950570308 container died 3f98d223f95ce455372c944c7b3d4dbe402ba4b3ab60527b1c97dc424351c753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_archimedes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e46d3de3b4cefb94ddf9a503322988853b011b72e4f1e00d1bd945f26b48ac8-merged.mount: Deactivated successfully.
Dec 06 07:48:55 compute-0 podman[357156]: 2025-12-06 07:48:55.288089728 +0000 UTC m=+1.000617128 container remove 3f98d223f95ce455372c944c7b3d4dbe402ba4b3ab60527b1c97dc424351c753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_archimedes, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:48:55 compute-0 systemd[1]: libpod-conmon-3f98d223f95ce455372c944c7b3d4dbe402ba4b3ab60527b1c97dc424351c753.scope: Deactivated successfully.
Dec 06 07:48:55 compute-0 sudo[357052]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:55 compute-0 sudo[357201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:55 compute-0 sudo[357201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:55 compute-0 sudo[357201]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:55.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:55 compute-0 sudo[357226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:48:55 compute-0 sudo[357226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:55 compute-0 sudo[357226]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:55 compute-0 sudo[357251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:55 compute-0 sudo[357251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:55 compute-0 sudo[357251]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:55 compute-0 sudo[357276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:48:55 compute-0 sudo[357276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:55 compute-0 ceph-mon[74339]: pgmap v2845: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:48:55 compute-0 podman[357342]: 2025-12-06 07:48:55.859682001 +0000 UTC m=+0.038345546 container create a3f7e3e7648696acb4b4289732620d2b9910535ae6f7c51dc4951208e8ad7f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 07:48:55 compute-0 systemd[1]: Started libpod-conmon-a3f7e3e7648696acb4b4289732620d2b9910535ae6f7c51dc4951208e8ad7f47.scope.
Dec 06 07:48:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:48:55 compute-0 podman[357342]: 2025-12-06 07:48:55.923374769 +0000 UTC m=+0.102038364 container init a3f7e3e7648696acb4b4289732620d2b9910535ae6f7c51dc4951208e8ad7f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:48:55 compute-0 podman[357342]: 2025-12-06 07:48:55.929784062 +0000 UTC m=+0.108447607 container start a3f7e3e7648696acb4b4289732620d2b9910535ae6f7c51dc4951208e8ad7f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:48:55 compute-0 podman[357342]: 2025-12-06 07:48:55.933119582 +0000 UTC m=+0.111783177 container attach a3f7e3e7648696acb4b4289732620d2b9910535ae6f7c51dc4951208e8ad7f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 07:48:55 compute-0 quirky_jennings[357358]: 167 167
Dec 06 07:48:55 compute-0 systemd[1]: libpod-a3f7e3e7648696acb4b4289732620d2b9910535ae6f7c51dc4951208e8ad7f47.scope: Deactivated successfully.
Dec 06 07:48:55 compute-0 podman[357342]: 2025-12-06 07:48:55.935313261 +0000 UTC m=+0.113976806 container died a3f7e3e7648696acb4b4289732620d2b9910535ae6f7c51dc4951208e8ad7f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:48:55 compute-0 podman[357342]: 2025-12-06 07:48:55.843170445 +0000 UTC m=+0.021834010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6aa7bf5e60bf513bbcf694c6fc9430bf6be500dcf8ecf154669b7dc0e46d9ae-merged.mount: Deactivated successfully.
Dec 06 07:48:55 compute-0 podman[357342]: 2025-12-06 07:48:55.972943296 +0000 UTC m=+0.151606851 container remove a3f7e3e7648696acb4b4289732620d2b9910535ae6f7c51dc4951208e8ad7f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 07:48:55 compute-0 systemd[1]: libpod-conmon-a3f7e3e7648696acb4b4289732620d2b9910535ae6f7c51dc4951208e8ad7f47.scope: Deactivated successfully.
Dec 06 07:48:56 compute-0 podman[357381]: 2025-12-06 07:48:56.124263619 +0000 UTC m=+0.037325308 container create 223909d1a2d7e492e83dfc2fff80d006efe33d060da1329f4477d0289b4dcafc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:48:56 compute-0 systemd[1]: Started libpod-conmon-223909d1a2d7e492e83dfc2fff80d006efe33d060da1329f4477d0289b4dcafc.scope.
Dec 06 07:48:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae39c5b6838714b882fbd63e1028ddc2b0eea3f5df0378bbe115b3f35e7e7d19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae39c5b6838714b882fbd63e1028ddc2b0eea3f5df0378bbe115b3f35e7e7d19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae39c5b6838714b882fbd63e1028ddc2b0eea3f5df0378bbe115b3f35e7e7d19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae39c5b6838714b882fbd63e1028ddc2b0eea3f5df0378bbe115b3f35e7e7d19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:48:56 compute-0 podman[357381]: 2025-12-06 07:48:56.203763834 +0000 UTC m=+0.116825523 container init 223909d1a2d7e492e83dfc2fff80d006efe33d060da1329f4477d0289b4dcafc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:48:56 compute-0 podman[357381]: 2025-12-06 07:48:56.110062906 +0000 UTC m=+0.023124615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:48:56 compute-0 podman[357381]: 2025-12-06 07:48:56.210545817 +0000 UTC m=+0.123607506 container start 223909d1a2d7e492e83dfc2fff80d006efe33d060da1329f4477d0289b4dcafc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:48:56 compute-0 podman[357381]: 2025-12-06 07:48:56.213526627 +0000 UTC m=+0.126588336 container attach 223909d1a2d7e492e83dfc2fff80d006efe33d060da1329f4477d0289b4dcafc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sammet, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 07:48:56 compute-0 nova_compute[251992]: 2025-12-06 07:48:56.368 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:48:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:56.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:56 compute-0 nova_compute[251992]: 2025-12-06 07:48:56.831 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]: {
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:     "0": [
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:         {
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             "devices": [
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "/dev/loop3"
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             ],
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             "lv_name": "ceph_lv0",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             "lv_size": "7511998464",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             "name": "ceph_lv0",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             "tags": {
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "ceph.cluster_name": "ceph",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "ceph.crush_device_class": "",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "ceph.encrypted": "0",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "ceph.osd_id": "0",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "ceph.type": "block",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:                 "ceph.vdo": "0"
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             },
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             "type": "block",
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:             "vg_name": "ceph_vg0"
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:         }
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]:     ]
Dec 06 07:48:56 compute-0 vigilant_sammet[357397]: }
Dec 06 07:48:56 compute-0 systemd[1]: libpod-223909d1a2d7e492e83dfc2fff80d006efe33d060da1329f4477d0289b4dcafc.scope: Deactivated successfully.
Dec 06 07:48:56 compute-0 podman[357381]: 2025-12-06 07:48:56.986923664 +0000 UTC m=+0.899985343 container died 223909d1a2d7e492e83dfc2fff80d006efe33d060da1329f4477d0289b4dcafc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sammet, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:48:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 564 KiB/s wr, 11 op/s
Dec 06 07:48:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:48:57.288 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:48:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:57.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:48:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:48:58.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:48:58 compute-0 ceph-mon[74339]: pgmap v2846: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 564 KiB/s wr, 11 op/s
Dec 06 07:48:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Dec 06 07:48:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae39c5b6838714b882fbd63e1028ddc2b0eea3f5df0378bbe115b3f35e7e7d19-merged.mount: Deactivated successfully.
Dec 06 07:48:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:48:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:48:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:48:59.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:48:59 compute-0 podman[357381]: 2025-12-06 07:48:59.468665734 +0000 UTC m=+3.381727423 container remove 223909d1a2d7e492e83dfc2fff80d006efe33d060da1329f4477d0289b4dcafc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 07:48:59 compute-0 sudo[357276]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:59 compute-0 sudo[357431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:59 compute-0 systemd[1]: libpod-conmon-223909d1a2d7e492e83dfc2fff80d006efe33d060da1329f4477d0289b4dcafc.scope: Deactivated successfully.
Dec 06 07:48:59 compute-0 sudo[357431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:59 compute-0 sudo[357431]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:59 compute-0 podman[357419]: 2025-12-06 07:48:59.57895117 +0000 UTC m=+1.235186128 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:48:59 compute-0 sudo[357473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:48:59 compute-0 sudo[357473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:59 compute-0 sudo[357473]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:59 compute-0 sudo[357499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:48:59 compute-0 sudo[357499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:48:59 compute-0 sudo[357499]: pam_unix(sudo:session): session closed for user root
Dec 06 07:48:59 compute-0 sudo[357524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:48:59 compute-0 sudo[357524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:49:00 compute-0 podman[357589]: 2025-12-06 07:49:00.067527402 +0000 UTC m=+0.045804816 container create 965cfab8821127fd3b825aa036f7ebb74d78e840cc859755afb32d1b718c25c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_heyrovsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:49:00 compute-0 podman[357589]: 2025-12-06 07:49:00.044327077 +0000 UTC m=+0.022604511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:49:00 compute-0 systemd[1]: Started libpod-conmon-965cfab8821127fd3b825aa036f7ebb74d78e840cc859755afb32d1b718c25c3.scope.
Dec 06 07:49:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:49:00 compute-0 ceph-mon[74339]: pgmap v2847: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Dec 06 07:49:00 compute-0 podman[357589]: 2025-12-06 07:49:00.347705612 +0000 UTC m=+0.325983056 container init 965cfab8821127fd3b825aa036f7ebb74d78e840cc859755afb32d1b718c25c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_heyrovsky, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 07:49:00 compute-0 podman[357589]: 2025-12-06 07:49:00.356367326 +0000 UTC m=+0.334644740 container start 965cfab8821127fd3b825aa036f7ebb74d78e840cc859755afb32d1b718c25c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:49:00 compute-0 clever_heyrovsky[357605]: 167 167
Dec 06 07:49:00 compute-0 systemd[1]: libpod-965cfab8821127fd3b825aa036f7ebb74d78e840cc859755afb32d1b718c25c3.scope: Deactivated successfully.
Dec 06 07:49:00 compute-0 podman[357589]: 2025-12-06 07:49:00.369540962 +0000 UTC m=+0.347818386 container attach 965cfab8821127fd3b825aa036f7ebb74d78e840cc859755afb32d1b718c25c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_heyrovsky, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:49:00 compute-0 podman[357589]: 2025-12-06 07:49:00.370277921 +0000 UTC m=+0.348555355 container died 965cfab8821127fd3b825aa036f7ebb74d78e840cc859755afb32d1b718c25c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_heyrovsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:49:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d0f550c5f52947b14a545ab5e35d82d1c8d73c5de9365e3a74ea973ce4417ed-merged.mount: Deactivated successfully.
Dec 06 07:49:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:00.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:00 compute-0 podman[357589]: 2025-12-06 07:49:00.51924341 +0000 UTC m=+0.497520834 container remove 965cfab8821127fd3b825aa036f7ebb74d78e840cc859755afb32d1b718c25c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_heyrovsky, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 07:49:00 compute-0 systemd[1]: libpod-conmon-965cfab8821127fd3b825aa036f7ebb74d78e840cc859755afb32d1b718c25c3.scope: Deactivated successfully.
Dec 06 07:49:00 compute-0 podman[357631]: 2025-12-06 07:49:00.710742858 +0000 UTC m=+0.066136036 container create 4ef54010706cd2b83c8ca2c5047f9efa36605a369cde79a187c2ad1952e4e72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 07:49:00 compute-0 systemd[1]: Started libpod-conmon-4ef54010706cd2b83c8ca2c5047f9efa36605a369cde79a187c2ad1952e4e72e.scope.
Dec 06 07:49:00 compute-0 podman[357631]: 2025-12-06 07:49:00.670415659 +0000 UTC m=+0.025808857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:49:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445e7c61d0a32a8d91d98310bea47b3dab55fb530b694abc1db054a3ffe2cd33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445e7c61d0a32a8d91d98310bea47b3dab55fb530b694abc1db054a3ffe2cd33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445e7c61d0a32a8d91d98310bea47b3dab55fb530b694abc1db054a3ffe2cd33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:49:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445e7c61d0a32a8d91d98310bea47b3dab55fb530b694abc1db054a3ffe2cd33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:49:00 compute-0 podman[357631]: 2025-12-06 07:49:00.803221873 +0000 UTC m=+0.158615051 container init 4ef54010706cd2b83c8ca2c5047f9efa36605a369cde79a187c2ad1952e4e72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:49:00 compute-0 podman[357631]: 2025-12-06 07:49:00.810194191 +0000 UTC m=+0.165587379 container start 4ef54010706cd2b83c8ca2c5047f9efa36605a369cde79a187c2ad1952e4e72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_antonelli, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:49:00 compute-0 podman[357631]: 2025-12-06 07:49:00.814009844 +0000 UTC m=+0.169403022 container attach 4ef54010706cd2b83c8ca2c5047f9efa36605a369cde79a187c2ad1952e4e72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_antonelli, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:49:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 12 KiB/s wr, 22 op/s
Dec 06 07:49:01 compute-0 nova_compute[251992]: 2025-12-06 07:49:01.370 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:01 compute-0 ceph-mon[74339]: pgmap v2848: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 12 KiB/s wr, 22 op/s
Dec 06 07:49:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:49:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:01.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:49:01 compute-0 optimistic_antonelli[357647]: {
Dec 06 07:49:01 compute-0 optimistic_antonelli[357647]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:49:01 compute-0 optimistic_antonelli[357647]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:49:01 compute-0 optimistic_antonelli[357647]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:49:01 compute-0 optimistic_antonelli[357647]:         "osd_id": 0,
Dec 06 07:49:01 compute-0 optimistic_antonelli[357647]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:49:01 compute-0 optimistic_antonelli[357647]:         "type": "bluestore"
Dec 06 07:49:01 compute-0 optimistic_antonelli[357647]:     }
Dec 06 07:49:01 compute-0 optimistic_antonelli[357647]: }
Dec 06 07:49:01 compute-0 systemd[1]: libpod-4ef54010706cd2b83c8ca2c5047f9efa36605a369cde79a187c2ad1952e4e72e.scope: Deactivated successfully.
Dec 06 07:49:01 compute-0 podman[357631]: 2025-12-06 07:49:01.696502395 +0000 UTC m=+1.051895573 container died 4ef54010706cd2b83c8ca2c5047f9efa36605a369cde79a187c2ad1952e4e72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_antonelli, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:49:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-445e7c61d0a32a8d91d98310bea47b3dab55fb530b694abc1db054a3ffe2cd33-merged.mount: Deactivated successfully.
Dec 06 07:49:01 compute-0 podman[357631]: 2025-12-06 07:49:01.751985461 +0000 UTC m=+1.107378659 container remove 4ef54010706cd2b83c8ca2c5047f9efa36605a369cde79a187c2ad1952e4e72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_antonelli, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:49:01 compute-0 systemd[1]: libpod-conmon-4ef54010706cd2b83c8ca2c5047f9efa36605a369cde79a187c2ad1952e4e72e.scope: Deactivated successfully.
Dec 06 07:49:01 compute-0 sudo[357524]: pam_unix(sudo:session): session closed for user root
Dec 06 07:49:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:49:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:49:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:49:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:49:01 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 40917910-1f10-482d-b744-629f17cd40c4 does not exist
Dec 06 07:49:01 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 49a1c784-0310-4e2f-ba42-6f8929630cce does not exist
Dec 06 07:49:01 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 07d2b033-1b98-46a7-8df4-578962ca639e does not exist
Dec 06 07:49:01 compute-0 nova_compute[251992]: 2025-12-06 07:49:01.833 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:01 compute-0 sudo[357680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:49:01 compute-0 sudo[357680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:49:01 compute-0 sudo[357680]: pam_unix(sudo:session): session closed for user root
Dec 06 07:49:01 compute-0 sudo[357705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:49:01 compute-0 sudo[357705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:49:01 compute-0 sudo[357705]: pam_unix(sudo:session): session closed for user root
Dec 06 07:49:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:02.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:02 compute-0 sudo[357730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:49:02 compute-0 sudo[357730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:49:02 compute-0 sudo[357730]: pam_unix(sudo:session): session closed for user root
Dec 06 07:49:02 compute-0 sudo[357755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:49:02 compute-0 sudo[357755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:49:02 compute-0 sudo[357755]: pam_unix(sudo:session): session closed for user root
Dec 06 07:49:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 44 op/s
Dec 06 07:49:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:03.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:49:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:49:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:49:03.858 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:49:03.859 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:49:03.859 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:49:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:49:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:04.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:49:04 compute-0 ceph-mon[74339]: pgmap v2849: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 44 op/s
Dec 06 07:49:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 07:49:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:05.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:06 compute-0 ceph-mon[74339]: pgmap v2850: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 07:49:06 compute-0 nova_compute[251992]: 2025-12-06 07:49:06.371 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:06.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:06 compute-0 nova_compute[251992]: 2025-12-06 07:49:06.835 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 07:49:07 compute-0 podman[357783]: 2025-12-06 07:49:07.396207378 +0000 UTC m=+0.057141593 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 07:49:07 compute-0 podman[357784]: 2025-12-06 07:49:07.401912362 +0000 UTC m=+0.055706464 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:49:07 compute-0 ceph-mon[74339]: pgmap v2851: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 07:49:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:07.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:08.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 07:49:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:09.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:09 compute-0 ceph-mon[74339]: pgmap v2852: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 07:49:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:49:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:10.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:49:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Dec 06 07:49:11 compute-0 nova_compute[251992]: 2025-12-06 07:49:11.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:49:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:11.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:49:11 compute-0 ceph-mon[74339]: pgmap v2853: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Dec 06 07:49:11 compute-0 nova_compute[251992]: 2025-12-06 07:49:11.837 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:12.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:49:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:49:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:49:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:49:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:49:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:49:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 172 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 587 KiB/s wr, 60 op/s
Dec 06 07:49:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1194372675' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:49:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1194372675' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:49:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:13.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:14 compute-0 ceph-mon[74339]: pgmap v2854: 305 pgs: 305 active+clean; 172 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 587 KiB/s wr, 60 op/s
Dec 06 07:49:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:14.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 195 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 06 07:49:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:15.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:15 compute-0 ceph-mon[74339]: pgmap v2855: 305 pgs: 305 active+clean; 195 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 06 07:49:16 compute-0 nova_compute[251992]: 2025-12-06 07:49:16.376 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:49:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:16.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:49:16 compute-0 nova_compute[251992]: 2025-12-06 07:49:16.840 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 198 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:49:17 compute-0 ceph-mon[74339]: pgmap v2856: 305 pgs: 305 active+clean; 198 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:49:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:17.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:18.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:49:18
Dec 06 07:49:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:49:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:49:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta']
Dec 06 07:49:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:49:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 07:49:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:49:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:19.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:49:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:20.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:20 compute-0 nova_compute[251992]: 2025-12-06 07:49:20.713 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:20 compute-0 nova_compute[251992]: 2025-12-06 07:49:20.777 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:20 compute-0 nova_compute[251992]: 2025-12-06 07:49:20.778 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:20 compute-0 nova_compute[251992]: 2025-12-06 07:49:20.778 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:49:20 compute-0 nova_compute[251992]: 2025-12-06 07:49:20.778 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:49:20 compute-0 nova_compute[251992]: 2025-12-06 07:49:20.779 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:49:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 07:49:21 compute-0 nova_compute[251992]: 2025-12-06 07:49:21.377 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:21.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:21 compute-0 nova_compute[251992]: 2025-12-06 07:49:21.842 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:22 compute-0 ceph-mon[74339]: pgmap v2857: 305 pgs: 305 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 07:49:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:49:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366501616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:22 compute-0 nova_compute[251992]: 2025-12-06 07:49:22.299 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:49:22 compute-0 nova_compute[251992]: 2025-12-06 07:49:22.447 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:49:22 compute-0 nova_compute[251992]: 2025-12-06 07:49:22.448 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4256MB free_disk=20.942806243896484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:49:22 compute-0 nova_compute[251992]: 2025-12-06 07:49:22.448 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:22 compute-0 nova_compute[251992]: 2025-12-06 07:49:22.449 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:22.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:22 compute-0 ovn_controller[147168]: 2025-12-06T07:49:22Z|00612|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 06 07:49:22 compute-0 sudo[357852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:49:22 compute-0 sudo[357852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:49:23 compute-0 sudo[357852]: pam_unix(sudo:session): session closed for user root
Dec 06 07:49:23 compute-0 sudo[357877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:49:23 compute-0 sudo[357877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:49:23 compute-0 sudo[357877]: pam_unix(sudo:session): session closed for user root
Dec 06 07:49:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:49:23 compute-0 ceph-mon[74339]: pgmap v2858: 305 pgs: 305 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 07:49:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2417538724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/366501616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:23.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:49:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:49:23 compute-0 nova_compute[251992]: 2025-12-06 07:49:23.627 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:49:23 compute-0 nova_compute[251992]: 2025-12-06 07:49:23.628 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:49:23 compute-0 nova_compute[251992]: 2025-12-06 07:49:23.658 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:49:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:49:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2798437822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:24 compute-0 nova_compute[251992]: 2025-12-06 07:49:24.100 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:49:24 compute-0 nova_compute[251992]: 2025-12-06 07:49:24.106 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:49:24 compute-0 nova_compute[251992]: 2025-12-06 07:49:24.275 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:49:24 compute-0 nova_compute[251992]: 2025-12-06 07:49:24.278 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:49:24 compute-0 nova_compute[251992]: 2025-12-06 07:49:24.278 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:49:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:24.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:24 compute-0 ceph-mon[74339]: pgmap v2859: 305 pgs: 305 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:49:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2798437822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:49:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:49:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:49:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:49:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:49:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 157 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Dec 06 07:49:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:25.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/729183953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:25 compute-0 ceph-mon[74339]: pgmap v2860: 305 pgs: 305 active+clean; 157 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011952121719709659 of space, bias 1.0, pg target 0.3585636515912898 quantized to 32 (current 32)
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:49:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:49:26 compute-0 nova_compute[251992]: 2025-12-06 07:49:26.379 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:26.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:26 compute-0 nova_compute[251992]: 2025-12-06 07:49:26.844 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 138 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 114 KiB/s rd, 63 KiB/s wr, 36 op/s
Dec 06 07:49:27 compute-0 ceph-mon[74339]: pgmap v2861: 305 pgs: 305 active+clean; 138 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 114 KiB/s rd, 63 KiB/s wr, 36 op/s
Dec 06 07:49:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:27.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:28.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 16 KiB/s wr, 28 op/s
Dec 06 07:49:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:29.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:29 compute-0 ceph-mon[74339]: pgmap v2862: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 16 KiB/s wr, 28 op/s
Dec 06 07:49:30 compute-0 podman[357928]: 2025-12-06 07:49:30.455469825 +0000 UTC m=+0.102420335 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:49:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:30.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 07:49:31 compute-0 nova_compute[251992]: 2025-12-06 07:49:31.380 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:31.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:31 compute-0 nova_compute[251992]: 2025-12-06 07:49:31.899 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:32 compute-0 nova_compute[251992]: 2025-12-06 07:49:32.216 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:32 compute-0 nova_compute[251992]: 2025-12-06 07:49:32.216 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:32.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:32 compute-0 nova_compute[251992]: 2025-12-06 07:49:32.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:32 compute-0 nova_compute[251992]: 2025-12-06 07:49:32.650 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:49:32 compute-0 nova_compute[251992]: 2025-12-06 07:49:32.651 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:49:32 compute-0 nova_compute[251992]: 2025-12-06 07:49:32.712 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:49:32 compute-0 nova_compute[251992]: 2025-12-06 07:49:32.713 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:32 compute-0 nova_compute[251992]: 2025-12-06 07:49:32.713 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:32 compute-0 nova_compute[251992]: 2025-12-06 07:49:32.713 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:32 compute-0 ceph-mon[74339]: pgmap v2863: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 07:49:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Dec 06 07:49:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:49:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:33.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:49:33 compute-0 nova_compute[251992]: 2025-12-06 07:49:33.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1050604975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:33 compute-0 ceph-mon[74339]: pgmap v2864: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Dec 06 07:49:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3221781244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:49:34.158 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:49:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:49:34.158 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:49:34 compute-0 nova_compute[251992]: 2025-12-06 07:49:34.176 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:34.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4007388396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Dec 06 07:49:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:35.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:35 compute-0 nova_compute[251992]: 2025-12-06 07:49:35.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:35 compute-0 nova_compute[251992]: 2025-12-06 07:49:35.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:49:35 compute-0 nova_compute[251992]: 2025-12-06 07:49:35.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:49:36 compute-0 ceph-mon[74339]: pgmap v2865: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Dec 06 07:49:36 compute-0 nova_compute[251992]: 2025-12-06 07:49:36.427 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:36.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:36 compute-0 nova_compute[251992]: 2025-12-06 07:49:36.901 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 KiB/s rd, 1.2 KiB/s wr, 9 op/s
Dec 06 07:49:37 compute-0 ceph-mon[74339]: pgmap v2866: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 KiB/s rd, 1.2 KiB/s wr, 9 op/s
Dec 06 07:49:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:49:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:37.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:49:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:49:38.160 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:49:38 compute-0 podman[357961]: 2025-12-06 07:49:38.390295094 +0000 UTC m=+0.053430763 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:49:38 compute-0 podman[357962]: 2025-12-06 07:49:38.39495837 +0000 UTC m=+0.055677493 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 06 07:49:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:38.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 5 op/s
Dec 06 07:49:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:39.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:39 compute-0 ceph-mon[74339]: pgmap v2867: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 5 op/s
Dec 06 07:49:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:40.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:41 compute-0 ceph-mon[74339]: pgmap v2868: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:41 compute-0 nova_compute[251992]: 2025-12-06 07:49:41.429 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:41.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:41 compute-0 nova_compute[251992]: 2025-12-06 07:49:41.903 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:42.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:49:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:49:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:49:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:49:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:49:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:49:43 compute-0 sudo[358000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:49:43 compute-0 sudo[358000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:49:43 compute-0 sudo[358000]: pam_unix(sudo:session): session closed for user root
Dec 06 07:49:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:43 compute-0 sudo[358025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:49:43 compute-0 sudo[358025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:49:43 compute-0 sudo[358025]: pam_unix(sudo:session): session closed for user root
Dec 06 07:49:43 compute-0 ceph-mon[74339]: pgmap v2869: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:43.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:49:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:44.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:49:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:45 compute-0 ceph-mon[74339]: pgmap v2870: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:45.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:46 compute-0 nova_compute[251992]: 2025-12-06 07:49:46.430 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:46.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:46 compute-0 nova_compute[251992]: 2025-12-06 07:49:46.905 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:47 compute-0 ceph-mon[74339]: pgmap v2871: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:47.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:48.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:49 compute-0 ceph-mon[74339]: pgmap v2872: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:49.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:50.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:51 compute-0 ceph-mon[74339]: pgmap v2873: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:51 compute-0 nova_compute[251992]: 2025-12-06 07:49:51.430 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:51.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:51 compute-0 nova_compute[251992]: 2025-12-06 07:49:51.907 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4162480082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:52.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:53 compute-0 ceph-mon[74339]: pgmap v2874: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:49:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:53.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:54.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 148 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Dec 06 07:49:55 compute-0 ceph-mon[74339]: pgmap v2875: 305 pgs: 305 active+clean; 148 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.1 MiB/s wr, 14 op/s
Dec 06 07:49:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:55.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:49:55 compute-0 nova_compute[251992]: 2025-12-06 07:49:55.548 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "6e187078-1e6f-4c22-9510-ed8116b14ae5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:55 compute-0 nova_compute[251992]: 2025-12-06 07:49:55.548 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:55 compute-0 nova_compute[251992]: 2025-12-06 07:49:55.606 251996 DEBUG nova.compute.manager [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:49:55 compute-0 nova_compute[251992]: 2025-12-06 07:49:55.896 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:55 compute-0 nova_compute[251992]: 2025-12-06 07:49:55.896 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:55 compute-0 nova_compute[251992]: 2025-12-06 07:49:55.902 251996 DEBUG nova.virt.hardware [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:49:55 compute-0 nova_compute[251992]: 2025-12-06 07:49:55.903 251996 INFO nova.compute.claims [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.191 251996 DEBUG nova.scheduler.client.report [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.266 251996 DEBUG nova.scheduler.client.report [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.266 251996 DEBUG nova.compute.provider_tree [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.291 251996 DEBUG nova.scheduler.client.report [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.315 251996 DEBUG nova.scheduler.client.report [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.390 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.432 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:56.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:49:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2081352726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.822 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.829 251996 DEBUG nova.compute.provider_tree [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:49:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2081352726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.888 251996 DEBUG nova.scheduler.client.report [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.908 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.974 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:49:56 compute-0 nova_compute[251992]: 2025-12-06 07:49:56.974 251996 DEBUG nova.compute.manager [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:49:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.127 251996 DEBUG nova.compute.manager [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.128 251996 DEBUG nova.network.neutron [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:49:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.166 251996 INFO nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.207 251996 DEBUG nova.compute.manager [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:49:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.578 251996 DEBUG nova.compute.manager [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.579 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.580 251996 INFO nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Creating image(s)
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.613 251996 DEBUG nova.storage.rbd_utils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.638 251996 DEBUG nova.storage.rbd_utils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.662 251996 DEBUG nova.storage.rbd_utils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.666 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.739 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.740 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.741 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.741 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.774 251996 DEBUG nova.storage.rbd_utils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.778 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:49:57 compute-0 nova_compute[251992]: 2025-12-06 07:49:57.812 251996 DEBUG nova.policy [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f2335740042045fba7f544ee5140eb87', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:49:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:49:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:49:58.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:49:59 compute-0 ceph-mon[74339]: pgmap v2876: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Dec 06 07:49:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:49:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:49:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:49:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:49:59.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 07:50:00 compute-0 ceph-mon[74339]: pgmap v2877: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 07:50:00 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 07:50:00 compute-0 nova_compute[251992]: 2025-12-06 07:50:00.183 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:00 compute-0 nova_compute[251992]: 2025-12-06 07:50:00.257 251996 DEBUG nova.storage.rbd_utils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] resizing rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:50:00 compute-0 nova_compute[251992]: 2025-12-06 07:50:00.431 251996 DEBUG nova.objects.instance [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'migration_context' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:00 compute-0 nova_compute[251992]: 2025-12-06 07:50:00.452 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:50:00 compute-0 nova_compute[251992]: 2025-12-06 07:50:00.453 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Ensure instance console log exists: /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:50:00 compute-0 nova_compute[251992]: 2025-12-06 07:50:00.453 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:00 compute-0 nova_compute[251992]: 2025-12-06 07:50:00.454 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:00 compute-0 nova_compute[251992]: 2025-12-06 07:50:00.454 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:00.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:00 compute-0 nova_compute[251992]: 2025-12-06 07:50:00.797 251996 DEBUG nova.network.neutron [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Successfully created port: 6ed32036-14e7-4ab4-a9dd-38196e9a6469 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:50:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 189 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.4 MiB/s wr, 31 op/s
Dec 06 07:50:01 compute-0 nova_compute[251992]: 2025-12-06 07:50:01.433 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:01 compute-0 podman[358247]: 2025-12-06 07:50:01.450087558 +0000 UTC m=+0.105899529 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 07:50:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:01.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:01 compute-0 nova_compute[251992]: 2025-12-06 07:50:01.910 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:02 compute-0 sudo[358273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:02 compute-0 sudo[358273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:02 compute-0 sudo[358273]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:02 compute-0 sudo[358298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:50:02 compute-0 sudo[358298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:02 compute-0 sudo[358298]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:02 compute-0 ceph-mon[74339]: pgmap v2878: 305 pgs: 305 active+clean; 189 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.4 MiB/s wr, 31 op/s
Dec 06 07:50:02 compute-0 sudo[358323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:02 compute-0 sudo[358323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:02 compute-0 sudo[358323]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:02 compute-0 sudo[358348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 07:50:02 compute-0 sudo[358348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:02.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:02 compute-0 sudo[358348]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:50:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:50:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:03 compute-0 sudo[358393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:03 compute-0 sudo[358393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:03 compute-0 sudo[358393]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:03 compute-0 sudo[358419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:50:03 compute-0 sudo[358419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:03 compute-0 sudo[358419]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 189 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.4 MiB/s wr, 31 op/s
Dec 06 07:50:03 compute-0 sudo[358444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:03 compute-0 sudo[358444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:03 compute-0 sudo[358444]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:03 compute-0 sudo[358469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:50:03 compute-0 sudo[358469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:03 compute-0 sudo[358492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:03 compute-0 sudo[358492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:03 compute-0 sudo[358492]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:03 compute-0 sudo[358519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:03 compute-0 sudo[358519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:03 compute-0 sudo[358519]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2406501570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:03 compute-0 ceph-mon[74339]: pgmap v2879: 305 pgs: 305 active+clean; 189 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.4 MiB/s wr, 31 op/s
Dec 06 07:50:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:03.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:03 compute-0 sudo[358469]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:50:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:03.859 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:03.860 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:03.860 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:50:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:04.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1209685693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:04 compute-0 nova_compute[251992]: 2025-12-06 07:50:04.887 251996 DEBUG nova.network.neutron [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Successfully updated port: 6ed32036-14e7-4ab4-a9dd-38196e9a6469 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:50:04 compute-0 nova_compute[251992]: 2025-12-06 07:50:04.912 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:50:04 compute-0 nova_compute[251992]: 2025-12-06 07:50:04.912 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquired lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:50:04 compute-0 nova_compute[251992]: 2025-12-06 07:50:04.913 251996 DEBUG nova.network.neutron [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:50:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 4.2 MiB/s wr, 65 op/s
Dec 06 07:50:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:50:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:50:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:50:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:50:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:50:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:05 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev bbdc53d6-d452-44e6-972e-bb0338aa505e does not exist
Dec 06 07:50:05 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 537de84d-908b-4e0c-97e4-6344a25a5086 does not exist
Dec 06 07:50:05 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 68b3c457-1932-4f78-87dd-49916595bd85 does not exist
Dec 06 07:50:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:50:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:50:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:50:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:50:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:50:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:50:05 compute-0 sudo[358575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:05 compute-0 sudo[358575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:05 compute-0 sudo[358575]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:05 compute-0 sudo[358600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:50:05 compute-0 sudo[358600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:05 compute-0 sudo[358600]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:05 compute-0 nova_compute[251992]: 2025-12-06 07:50:05.388 251996 DEBUG nova.compute.manager [req-6bab2acb-1b17-4210-b62f-33d94a9ff60b req-e4ec33a3-def6-420a-8d1c-93efe1fed0bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received event network-changed-6ed32036-14e7-4ab4-a9dd-38196e9a6469 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:05 compute-0 nova_compute[251992]: 2025-12-06 07:50:05.388 251996 DEBUG nova.compute.manager [req-6bab2acb-1b17-4210-b62f-33d94a9ff60b req-e4ec33a3-def6-420a-8d1c-93efe1fed0bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Refreshing instance network info cache due to event network-changed-6ed32036-14e7-4ab4-a9dd-38196e9a6469. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:50:05 compute-0 nova_compute[251992]: 2025-12-06 07:50:05.388 251996 DEBUG oslo_concurrency.lockutils [req-6bab2acb-1b17-4210-b62f-33d94a9ff60b req-e4ec33a3-def6-420a-8d1c-93efe1fed0bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:50:05 compute-0 sudo[358625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:05 compute-0 sudo[358625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:05 compute-0 sudo[358625]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:05 compute-0 sudo[358650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:50:05 compute-0 sudo[358650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:05.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Dec 06 07:50:05 compute-0 ceph-mon[74339]: pgmap v2880: 305 pgs: 305 active+clean; 221 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 4.2 MiB/s wr, 65 op/s
Dec 06 07:50:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:50:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:50:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:50:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:50:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:50:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Dec 06 07:50:05 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.590530) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405590630, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1703, "num_deletes": 260, "total_data_size": 2829956, "memory_usage": 2873248, "flush_reason": "Manual Compaction"}
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405606706, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 2760768, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56177, "largest_seqno": 57879, "table_properties": {"data_size": 2752918, "index_size": 4728, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16981, "raw_average_key_size": 20, "raw_value_size": 2737003, "raw_average_value_size": 3354, "num_data_blocks": 206, "num_entries": 816, "num_filter_entries": 816, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007265, "oldest_key_time": 1765007265, "file_creation_time": 1765007405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 16252 microseconds, and 7302 cpu microseconds.
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.606781) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 2760768 bytes OK
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.606813) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.608253) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.608268) EVENT_LOG_v1 {"time_micros": 1765007405608263, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.608287) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2822683, prev total WAL file size 2822683, number of live WAL files 2.
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.609199) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(2696KB)], [122(11MB)]
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405609292, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 15133827, "oldest_snapshot_seqno": -1}
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 9353 keys, 13186191 bytes, temperature: kUnknown
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405708008, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 13186191, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13123794, "index_size": 37901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 245093, "raw_average_key_size": 26, "raw_value_size": 12957404, "raw_average_value_size": 1385, "num_data_blocks": 1459, "num_entries": 9353, "num_filter_entries": 9353, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765007405, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:50:05 compute-0 nova_compute[251992]: 2025-12-06 07:50:05.708 251996 DEBUG nova.network.neutron [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.708264) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 13186191 bytes
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.712710) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.2 rd, 133.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.8 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(10.3) write-amplify(4.8) OK, records in: 9887, records dropped: 534 output_compression: NoCompression
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.712741) EVENT_LOG_v1 {"time_micros": 1765007405712728, "job": 74, "event": "compaction_finished", "compaction_time_micros": 98786, "compaction_time_cpu_micros": 36671, "output_level": 6, "num_output_files": 1, "total_output_size": 13186191, "num_input_records": 9887, "num_output_records": 9353, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405713264, "job": 74, "event": "table_file_deletion", "file_number": 124}
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007405715416, "job": 74, "event": "table_file_deletion", "file_number": 122}
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.609060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.715476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.715481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.715483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.715484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:05 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:05.715486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:05 compute-0 podman[358715]: 2025-12-06 07:50:05.815361098 +0000 UTC m=+0.050904534 container create 32926b07ad6ec9dd5cb5f057b0d5aee6806236de372f2b2b728bf592d46bc256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hamilton, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:50:05 compute-0 systemd[1]: Started libpod-conmon-32926b07ad6ec9dd5cb5f057b0d5aee6806236de372f2b2b728bf592d46bc256.scope.
Dec 06 07:50:05 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:50:05 compute-0 podman[358715]: 2025-12-06 07:50:05.878538852 +0000 UTC m=+0.114082318 container init 32926b07ad6ec9dd5cb5f057b0d5aee6806236de372f2b2b728bf592d46bc256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 07:50:05 compute-0 podman[358715]: 2025-12-06 07:50:05.885035758 +0000 UTC m=+0.120579184 container start 32926b07ad6ec9dd5cb5f057b0d5aee6806236de372f2b2b728bf592d46bc256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hamilton, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:50:05 compute-0 podman[358715]: 2025-12-06 07:50:05.888691766 +0000 UTC m=+0.124235202 container attach 32926b07ad6ec9dd5cb5f057b0d5aee6806236de372f2b2b728bf592d46bc256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:50:05 compute-0 kind_hamilton[358731]: 167 167
Dec 06 07:50:05 compute-0 systemd[1]: libpod-32926b07ad6ec9dd5cb5f057b0d5aee6806236de372f2b2b728bf592d46bc256.scope: Deactivated successfully.
Dec 06 07:50:05 compute-0 conmon[358731]: conmon 32926b07ad6ec9dd5cb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32926b07ad6ec9dd5cb5f057b0d5aee6806236de372f2b2b728bf592d46bc256.scope/container/memory.events
Dec 06 07:50:05 compute-0 podman[358715]: 2025-12-06 07:50:05.796499919 +0000 UTC m=+0.032043385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:50:05 compute-0 podman[358715]: 2025-12-06 07:50:05.89219033 +0000 UTC m=+0.127733776 container died 32926b07ad6ec9dd5cb5f057b0d5aee6806236de372f2b2b728bf592d46bc256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hamilton, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:50:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-05c462164f224777f7d0a14d8ed945f3be25014ee3259b640da4d96db185b10d-merged.mount: Deactivated successfully.
Dec 06 07:50:05 compute-0 podman[358715]: 2025-12-06 07:50:05.93370016 +0000 UTC m=+0.169243596 container remove 32926b07ad6ec9dd5cb5f057b0d5aee6806236de372f2b2b728bf592d46bc256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_hamilton, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:50:05 compute-0 systemd[1]: libpod-conmon-32926b07ad6ec9dd5cb5f057b0d5aee6806236de372f2b2b728bf592d46bc256.scope: Deactivated successfully.
Dec 06 07:50:06 compute-0 podman[358755]: 2025-12-06 07:50:06.082652699 +0000 UTC m=+0.038970762 container create a7a147f9317e197789aa19876e06bf46bd444d4b686dfd48b2478cc0fafaff82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:50:06 compute-0 systemd[1]: Started libpod-conmon-a7a147f9317e197789aa19876e06bf46bd444d4b686dfd48b2478cc0fafaff82.scope.
Dec 06 07:50:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79417f5866de722f3ffd77b0c532ca73021837a668f846e9059a4f564aac9827/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79417f5866de722f3ffd77b0c532ca73021837a668f846e9059a4f564aac9827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79417f5866de722f3ffd77b0c532ca73021837a668f846e9059a4f564aac9827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79417f5866de722f3ffd77b0c532ca73021837a668f846e9059a4f564aac9827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79417f5866de722f3ffd77b0c532ca73021837a668f846e9059a4f564aac9827/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:06 compute-0 podman[358755]: 2025-12-06 07:50:06.065517267 +0000 UTC m=+0.021835360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:50:06 compute-0 podman[358755]: 2025-12-06 07:50:06.162655238 +0000 UTC m=+0.118973321 container init a7a147f9317e197789aa19876e06bf46bd444d4b686dfd48b2478cc0fafaff82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Dec 06 07:50:06 compute-0 podman[358755]: 2025-12-06 07:50:06.171685301 +0000 UTC m=+0.128003364 container start a7a147f9317e197789aa19876e06bf46bd444d4b686dfd48b2478cc0fafaff82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:50:06 compute-0 podman[358755]: 2025-12-06 07:50:06.175941847 +0000 UTC m=+0.132259940 container attach a7a147f9317e197789aa19876e06bf46bd444d4b686dfd48b2478cc0fafaff82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:50:06 compute-0 nova_compute[251992]: 2025-12-06 07:50:06.436 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:06.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:06 compute-0 ceph-mon[74339]: osdmap e373: 3 total, 3 up, 3 in
Dec 06 07:50:06 compute-0 nova_compute[251992]: 2025-12-06 07:50:06.912 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:06 compute-0 strange_noyce[358772]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:50:06 compute-0 strange_noyce[358772]: --> relative data size: 1.0
Dec 06 07:50:06 compute-0 strange_noyce[358772]: --> All data devices are unavailable
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:06.991343) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007406991376, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 269, "num_deletes": 256, "total_data_size": 14516, "memory_usage": 20296, "flush_reason": "Manual Compaction"}
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007406993386, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 14659, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57880, "largest_seqno": 58148, "table_properties": {"data_size": 12832, "index_size": 60, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4624, "raw_average_key_size": 17, "raw_value_size": 9250, "raw_average_value_size": 34, "num_data_blocks": 3, "num_entries": 266, "num_filter_entries": 266, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007406, "oldest_key_time": 1765007406, "file_creation_time": 1765007406, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 2068 microseconds, and 591 cpu microseconds.
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:06.993414) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 14659 bytes OK
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:06.993426) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:06.994379) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:06.994393) EVENT_LOG_v1 {"time_micros": 1765007406994388, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:06.994409) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 12437, prev total WAL file size 12437, number of live WAL files 2.
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:06.994808) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303233' seq:72057594037927935, type:22 .. '6C6F676D0032323735' seq:0, type:0; will stop at (end)
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(14KB)], [125(12MB)]
Dec 06 07:50:06 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007406994873, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 13200850, "oldest_snapshot_seqno": -1}
Dec 06 07:50:07 compute-0 systemd[1]: libpod-a7a147f9317e197789aa19876e06bf46bd444d4b686dfd48b2478cc0fafaff82.scope: Deactivated successfully.
Dec 06 07:50:07 compute-0 podman[358755]: 2025-12-06 07:50:07.014664516 +0000 UTC m=+0.970982579 container died a7a147f9317e197789aa19876e06bf46bd444d4b686dfd48b2478cc0fafaff82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 9101 keys, 13036202 bytes, temperature: kUnknown
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007407086015, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 13036202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12975192, "index_size": 37153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 240798, "raw_average_key_size": 26, "raw_value_size": 12812849, "raw_average_value_size": 1407, "num_data_blocks": 1423, "num_entries": 9101, "num_filter_entries": 9101, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765007406, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:07.086286) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 13036202 bytes
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:07.093974) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.0 rd, 143.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 12.6 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(1789.8) write-amplify(889.3) OK, records in: 9619, records dropped: 518 output_compression: NoCompression
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:07.093990) EVENT_LOG_v1 {"time_micros": 1765007407093983, "job": 76, "event": "compaction_finished", "compaction_time_micros": 91055, "compaction_time_cpu_micros": 34822, "output_level": 6, "num_output_files": 1, "total_output_size": 13036202, "num_input_records": 9619, "num_output_records": 9101, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007407094087, "job": 76, "event": "table_file_deletion", "file_number": 127}
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007407095980, "job": 76, "event": "table_file_deletion", "file_number": 125}
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:06.994665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:07.096161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:07.096169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:07.096171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:07.096173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:07 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:50:07.096175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-79417f5866de722f3ffd77b0c532ca73021837a668f846e9059a4f564aac9827-merged.mount: Deactivated successfully.
Dec 06 07:50:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 4.2 MiB/s wr, 46 op/s
Dec 06 07:50:07 compute-0 podman[358755]: 2025-12-06 07:50:07.171764005 +0000 UTC m=+1.128082068 container remove a7a147f9317e197789aa19876e06bf46bd444d4b686dfd48b2478cc0fafaff82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:50:07 compute-0 systemd[1]: libpod-conmon-a7a147f9317e197789aa19876e06bf46bd444d4b686dfd48b2478cc0fafaff82.scope: Deactivated successfully.
Dec 06 07:50:07 compute-0 sudo[358650]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:07 compute-0 sudo[358800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:07 compute-0 sudo[358800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:07 compute-0 sudo[358800]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:07 compute-0 sudo[358825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:50:07 compute-0 sudo[358825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:07 compute-0 sudo[358825]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:07 compute-0 sudo[358850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:07 compute-0 sudo[358850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:07 compute-0 sudo[358850]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:07 compute-0 sudo[358875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:50:07 compute-0 sudo[358875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:07.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:07 compute-0 podman[358941]: 2025-12-06 07:50:07.822724718 +0000 UTC m=+0.055210250 container create 4ed92b3e3a38bda39f99595b0436605acdd33641d9bf023dfaee51898c818cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:50:07 compute-0 systemd[1]: Started libpod-conmon-4ed92b3e3a38bda39f99595b0436605acdd33641d9bf023dfaee51898c818cb4.scope.
Dec 06 07:50:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:50:07 compute-0 podman[358941]: 2025-12-06 07:50:07.806892321 +0000 UTC m=+0.039377863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:50:07 compute-0 podman[358941]: 2025-12-06 07:50:07.910091626 +0000 UTC m=+0.142577178 container init 4ed92b3e3a38bda39f99595b0436605acdd33641d9bf023dfaee51898c818cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sanderson, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:50:07 compute-0 podman[358941]: 2025-12-06 07:50:07.916634092 +0000 UTC m=+0.149119624 container start 4ed92b3e3a38bda39f99595b0436605acdd33641d9bf023dfaee51898c818cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sanderson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 07:50:07 compute-0 elastic_sanderson[358957]: 167 167
Dec 06 07:50:07 compute-0 podman[358941]: 2025-12-06 07:50:07.921548465 +0000 UTC m=+0.154034077 container attach 4ed92b3e3a38bda39f99595b0436605acdd33641d9bf023dfaee51898c818cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sanderson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 07:50:07 compute-0 systemd[1]: libpod-4ed92b3e3a38bda39f99595b0436605acdd33641d9bf023dfaee51898c818cb4.scope: Deactivated successfully.
Dec 06 07:50:07 compute-0 podman[358941]: 2025-12-06 07:50:07.922998914 +0000 UTC m=+0.155484446 container died 4ed92b3e3a38bda39f99595b0436605acdd33641d9bf023dfaee51898c818cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sanderson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 07:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-63029976e63d8a234e6c5600efea8759f48fa028d3a9fecad5e1a069c213bc31-merged.mount: Deactivated successfully.
Dec 06 07:50:07 compute-0 podman[358941]: 2025-12-06 07:50:07.970067244 +0000 UTC m=+0.202552786 container remove 4ed92b3e3a38bda39f99595b0436605acdd33641d9bf023dfaee51898c818cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_sanderson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:50:07 compute-0 systemd[1]: libpod-conmon-4ed92b3e3a38bda39f99595b0436605acdd33641d9bf023dfaee51898c818cb4.scope: Deactivated successfully.
Dec 06 07:50:08 compute-0 ceph-mon[74339]: pgmap v2882: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 4.2 MiB/s wr, 46 op/s
Dec 06 07:50:08 compute-0 podman[358978]: 2025-12-06 07:50:08.153125202 +0000 UTC m=+0.043995547 container create c169fb1699779b95700e14d77b810bed94ab38e023f8b334edfde1aaad67502f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 07:50:08 compute-0 systemd[1]: Started libpod-conmon-c169fb1699779b95700e14d77b810bed94ab38e023f8b334edfde1aaad67502f.scope.
Dec 06 07:50:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ee8fa89d0cde42f84480f0e36402d2826e1039f70d1fb56aa30f1ade398e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ee8fa89d0cde42f84480f0e36402d2826e1039f70d1fb56aa30f1ade398e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ee8fa89d0cde42f84480f0e36402d2826e1039f70d1fb56aa30f1ade398e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef6ee8fa89d0cde42f84480f0e36402d2826e1039f70d1fb56aa30f1ade398e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:08 compute-0 podman[358978]: 2025-12-06 07:50:08.137279505 +0000 UTC m=+0.028149860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:50:08 compute-0 podman[358978]: 2025-12-06 07:50:08.240747927 +0000 UTC m=+0.131618262 container init c169fb1699779b95700e14d77b810bed94ab38e023f8b334edfde1aaad67502f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 07:50:08 compute-0 podman[358978]: 2025-12-06 07:50:08.248406923 +0000 UTC m=+0.139277268 container start c169fb1699779b95700e14d77b810bed94ab38e023f8b334edfde1aaad67502f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:50:08 compute-0 podman[358978]: 2025-12-06 07:50:08.251781445 +0000 UTC m=+0.142651780 container attach c169fb1699779b95700e14d77b810bed94ab38e023f8b334edfde1aaad67502f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 07:50:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:08.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:08 compute-0 nova_compute[251992]: 2025-12-06 07:50:08.944 251996 DEBUG nova.network.neutron [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Updating instance_info_cache with network_info: [{"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]: {
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:     "0": [
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:         {
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             "devices": [
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "/dev/loop3"
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             ],
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             "lv_name": "ceph_lv0",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             "lv_size": "7511998464",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             "name": "ceph_lv0",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             "tags": {
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "ceph.cluster_name": "ceph",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "ceph.crush_device_class": "",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "ceph.encrypted": "0",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "ceph.osd_id": "0",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "ceph.type": "block",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:                 "ceph.vdo": "0"
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             },
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             "type": "block",
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:             "vg_name": "ceph_vg0"
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:         }
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]:     ]
Dec 06 07:50:09 compute-0 great_proskuriakova[358995]: }
Dec 06 07:50:09 compute-0 systemd[1]: libpod-c169fb1699779b95700e14d77b810bed94ab38e023f8b334edfde1aaad67502f.scope: Deactivated successfully.
Dec 06 07:50:09 compute-0 podman[358978]: 2025-12-06 07:50:09.034788161 +0000 UTC m=+0.925658496 container died c169fb1699779b95700e14d77b810bed94ab38e023f8b334edfde1aaad67502f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:50:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 4.2 MiB/s wr, 47 op/s
Dec 06 07:50:09 compute-0 ceph-mon[74339]: pgmap v2883: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 4.2 MiB/s wr, 47 op/s
Dec 06 07:50:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef6ee8fa89d0cde42f84480f0e36402d2826e1039f70d1fb56aa30f1ade398e7-merged.mount: Deactivated successfully.
Dec 06 07:50:09 compute-0 podman[358978]: 2025-12-06 07:50:09.395336859 +0000 UTC m=+1.286207194 container remove c169fb1699779b95700e14d77b810bed94ab38e023f8b334edfde1aaad67502f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:50:09 compute-0 systemd[1]: libpod-conmon-c169fb1699779b95700e14d77b810bed94ab38e023f8b334edfde1aaad67502f.scope: Deactivated successfully.
Dec 06 07:50:09 compute-0 sudo[358875]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:09 compute-0 podman[359005]: 2025-12-06 07:50:09.441909635 +0000 UTC m=+0.381767611 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:50:09 compute-0 podman[359008]: 2025-12-06 07:50:09.44988582 +0000 UTC m=+0.389743106 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:50:09 compute-0 sudo[359054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:09 compute-0 sudo[359054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:09 compute-0 sudo[359054]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:09 compute-0 sudo[359080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:50:09 compute-0 sudo[359080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:09 compute-0 sudo[359080]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:09.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:09 compute-0 sudo[359105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:09 compute-0 sudo[359105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:09 compute-0 sudo[359105]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:09 compute-0 sudo[359130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:50:09 compute-0 sudo[359130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:09 compute-0 podman[359196]: 2025-12-06 07:50:09.972202713 +0000 UTC m=+0.040948526 container create 2f40a724537341808ec9f09e629f4e441bf4546a80d7724d6568cd80aaf3785a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_black, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:50:10 compute-0 systemd[1]: Started libpod-conmon-2f40a724537341808ec9f09e629f4e441bf4546a80d7724d6568cd80aaf3785a.scope.
Dec 06 07:50:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:50:10 compute-0 podman[359196]: 2025-12-06 07:50:10.041881493 +0000 UTC m=+0.110627306 container init 2f40a724537341808ec9f09e629f4e441bf4546a80d7724d6568cd80aaf3785a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_black, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:50:10 compute-0 podman[359196]: 2025-12-06 07:50:10.048898262 +0000 UTC m=+0.117644075 container start 2f40a724537341808ec9f09e629f4e441bf4546a80d7724d6568cd80aaf3785a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_black, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 07:50:10 compute-0 podman[359196]: 2025-12-06 07:50:09.95281689 +0000 UTC m=+0.021562723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:50:10 compute-0 upbeat_black[359212]: 167 167
Dec 06 07:50:10 compute-0 systemd[1]: libpod-2f40a724537341808ec9f09e629f4e441bf4546a80d7724d6568cd80aaf3785a.scope: Deactivated successfully.
Dec 06 07:50:10 compute-0 podman[359196]: 2025-12-06 07:50:10.053462126 +0000 UTC m=+0.122207969 container attach 2f40a724537341808ec9f09e629f4e441bf4546a80d7724d6568cd80aaf3785a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_black, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Dec 06 07:50:10 compute-0 podman[359196]: 2025-12-06 07:50:10.053764874 +0000 UTC m=+0.122510697 container died 2f40a724537341808ec9f09e629f4e441bf4546a80d7724d6568cd80aaf3785a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 07:50:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a7fbc5b8f8125af2e51279444a26fd19ab556743f18ee4bb45689742c9e65a6-merged.mount: Deactivated successfully.
Dec 06 07:50:10 compute-0 podman[359196]: 2025-12-06 07:50:10.090519695 +0000 UTC m=+0.159265518 container remove 2f40a724537341808ec9f09e629f4e441bf4546a80d7724d6568cd80aaf3785a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_black, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:50:10 compute-0 systemd[1]: libpod-conmon-2f40a724537341808ec9f09e629f4e441bf4546a80d7724d6568cd80aaf3785a.scope: Deactivated successfully.
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.233 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Releasing lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.235 251996 DEBUG nova.compute.manager [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Instance network_info: |[{"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.236 251996 DEBUG oslo_concurrency.lockutils [req-6bab2acb-1b17-4210-b62f-33d94a9ff60b req-e4ec33a3-def6-420a-8d1c-93efe1fed0bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.236 251996 DEBUG nova.network.neutron [req-6bab2acb-1b17-4210-b62f-33d94a9ff60b req-e4ec33a3-def6-420a-8d1c-93efe1fed0bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Refreshing network info cache for port 6ed32036-14e7-4ab4-a9dd-38196e9a6469 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.242 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Start _get_guest_xml network_info=[{"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.250 251996 WARNING nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.260 251996 DEBUG nova.virt.libvirt.host [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.261 251996 DEBUG nova.virt.libvirt.host [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.266 251996 DEBUG nova.virt.libvirt.host [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.267 251996 DEBUG nova.virt.libvirt.host [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:50:10 compute-0 podman[359237]: 2025-12-06 07:50:10.269685869 +0000 UTC m=+0.055980311 container create adad14cbf45baa9da92fbe09e8756f604bc30fd3ca7a4cecc8a22780504c9732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.270 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.270 251996 DEBUG nova.virt.hardware [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.271 251996 DEBUG nova.virt.hardware [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.272 251996 DEBUG nova.virt.hardware [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.273 251996 DEBUG nova.virt.hardware [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.273 251996 DEBUG nova.virt.hardware [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.274 251996 DEBUG nova.virt.hardware [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.274 251996 DEBUG nova.virt.hardware [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.275 251996 DEBUG nova.virt.hardware [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.275 251996 DEBUG nova.virt.hardware [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.276 251996 DEBUG nova.virt.hardware [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.276 251996 DEBUG nova.virt.hardware [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.284 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:10 compute-0 systemd[1]: Started libpod-conmon-adad14cbf45baa9da92fbe09e8756f604bc30fd3ca7a4cecc8a22780504c9732.scope.
Dec 06 07:50:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5513320ebcb57779b127b780c5439dcb0eb1b45b1fcdbb406cd44511c520b8b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5513320ebcb57779b127b780c5439dcb0eb1b45b1fcdbb406cd44511c520b8b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5513320ebcb57779b127b780c5439dcb0eb1b45b1fcdbb406cd44511c520b8b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:10 compute-0 podman[359237]: 2025-12-06 07:50:10.246862874 +0000 UTC m=+0.033157336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:50:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5513320ebcb57779b127b780c5439dcb0eb1b45b1fcdbb406cd44511c520b8b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:10 compute-0 podman[359237]: 2025-12-06 07:50:10.357700694 +0000 UTC m=+0.143995166 container init adad14cbf45baa9da92fbe09e8756f604bc30fd3ca7a4cecc8a22780504c9732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:50:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2605400865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:50:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2605400865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:50:10 compute-0 podman[359237]: 2025-12-06 07:50:10.365186056 +0000 UTC m=+0.151480498 container start adad14cbf45baa9da92fbe09e8756f604bc30fd3ca7a4cecc8a22780504c9732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 07:50:10 compute-0 podman[359237]: 2025-12-06 07:50:10.371484066 +0000 UTC m=+0.157778498 container attach adad14cbf45baa9da92fbe09e8756f604bc30fd3ca7a4cecc8a22780504c9732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:50:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:10.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:50:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1083001606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.795 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.826 251996 DEBUG nova.storage.rbd_utils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:10 compute-0 nova_compute[251992]: 2025-12-06 07:50:10.830 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 3.4 MiB/s wr, 61 op/s
Dec 06 07:50:11 compute-0 charming_babbage[359254]: {
Dec 06 07:50:11 compute-0 charming_babbage[359254]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:50:11 compute-0 charming_babbage[359254]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:50:11 compute-0 charming_babbage[359254]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:50:11 compute-0 charming_babbage[359254]:         "osd_id": 0,
Dec 06 07:50:11 compute-0 charming_babbage[359254]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:50:11 compute-0 charming_babbage[359254]:         "type": "bluestore"
Dec 06 07:50:11 compute-0 charming_babbage[359254]:     }
Dec 06 07:50:11 compute-0 charming_babbage[359254]: }
Dec 06 07:50:11 compute-0 systemd[1]: libpod-adad14cbf45baa9da92fbe09e8756f604bc30fd3ca7a4cecc8a22780504c9732.scope: Deactivated successfully.
Dec 06 07:50:11 compute-0 conmon[359254]: conmon adad14cbf45baa9da92f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-adad14cbf45baa9da92fbe09e8756f604bc30fd3ca7a4cecc8a22780504c9732.scope/container/memory.events
Dec 06 07:50:11 compute-0 podman[359237]: 2025-12-06 07:50:11.267227474 +0000 UTC m=+1.053521926 container died adad14cbf45baa9da92fbe09e8756f604bc30fd3ca7a4cecc8a22780504c9732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 07:50:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5513320ebcb57779b127b780c5439dcb0eb1b45b1fcdbb406cd44511c520b8b6-merged.mount: Deactivated successfully.
Dec 06 07:50:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:50:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316813484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.330 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.332 251996 DEBUG nova.virt.libvirt.vif [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-961523943',display_name='tempest-ServerRescueNegativeTestJSON-server-961523943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-961523943',id=165,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4842ecff6dce4ccc981a6b65a14ea406',ramdisk_id='',reservation_id='r-g79ucyw0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1304226499',owner_user_name='tempest-ServerRescueNegativeTestJSON-1304226499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:49:57Z,user_data=None,user_id='f2335740042045fba7f544ee5140eb87',uuid=6e187078-1e6f-4c22-9510-ed8116b14ae5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.332 251996 DEBUG nova.network.os_vif_util [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converting VIF {"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.333 251996 DEBUG nova.network.os_vif_util [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:7f:63,bridge_name='br-int',has_traffic_filtering=True,id=6ed32036-14e7-4ab4-a9dd-38196e9a6469,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed32036-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:50:11 compute-0 podman[359237]: 2025-12-06 07:50:11.334828728 +0000 UTC m=+1.121123170 container remove adad14cbf45baa9da92fbe09e8756f604bc30fd3ca7a4cecc8a22780504c9732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.335 251996 DEBUG nova.objects.instance [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:11 compute-0 systemd[1]: libpod-conmon-adad14cbf45baa9da92fbe09e8756f604bc30fd3ca7a4cecc8a22780504c9732.scope: Deactivated successfully.
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.359 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:50:11 compute-0 nova_compute[251992]:   <uuid>6e187078-1e6f-4c22-9510-ed8116b14ae5</uuid>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   <name>instance-000000a5</name>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-961523943</nova:name>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:50:10</nova:creationTime>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <nova:user uuid="f2335740042045fba7f544ee5140eb87">tempest-ServerRescueNegativeTestJSON-1304226499-project-member</nova:user>
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <nova:project uuid="4842ecff6dce4ccc981a6b65a14ea406">tempest-ServerRescueNegativeTestJSON-1304226499</nova:project>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <nova:port uuid="6ed32036-14e7-4ab4-a9dd-38196e9a6469">
Dec 06 07:50:11 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <system>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <entry name="serial">6e187078-1e6f-4c22-9510-ed8116b14ae5</entry>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <entry name="uuid">6e187078-1e6f-4c22-9510-ed8116b14ae5</entry>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     </system>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   <os>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   </os>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   <features>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   </features>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/6e187078-1e6f-4c22-9510-ed8116b14ae5_disk">
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       </source>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.config">
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       </source>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:50:11 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:77:7f:63"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <target dev="tap6ed32036-14"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/console.log" append="off"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <video>
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     </video>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:50:11 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:50:11 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:50:11 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:50:11 compute-0 nova_compute[251992]: </domain>
Dec 06 07:50:11 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.359 251996 DEBUG nova.compute.manager [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Preparing to wait for external event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.359 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.360 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.360 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.361 251996 DEBUG nova.virt.libvirt.vif [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-961523943',display_name='tempest-ServerRescueNegativeTestJSON-server-961523943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-961523943',id=165,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4842ecff6dce4ccc981a6b65a14ea406',ramdisk_id='',reservation_id='r-g79ucyw0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-1304226499',owner_user_name='tempest-ServerRescueNegativeTestJSON-1304226499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:49:57Z,user_data=None,user_id='f2335740042045fba7f544ee5140eb87',uuid=6e187078-1e6f-4c22-9510-ed8116b14ae5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.361 251996 DEBUG nova.network.os_vif_util [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converting VIF {"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.362 251996 DEBUG nova.network.os_vif_util [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:7f:63,bridge_name='br-int',has_traffic_filtering=True,id=6ed32036-14e7-4ab4-a9dd-38196e9a6469,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed32036-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.362 251996 DEBUG os_vif [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:7f:63,bridge_name='br-int',has_traffic_filtering=True,id=6ed32036-14e7-4ab4-a9dd-38196e9a6469,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed32036-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.363 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.364 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.364 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:50:11 compute-0 sudo[359130]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1083001606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:11 compute-0 ceph-mon[74339]: pgmap v2884: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 3.4 MiB/s wr, 61 op/s
Dec 06 07:50:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1316813484' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.370 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.370 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ed32036-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.371 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6ed32036-14, col_values=(('external_ids', {'iface-id': '6ed32036-14e7-4ab4-a9dd-38196e9a6469', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:7f:63', 'vm-uuid': '6e187078-1e6f-4c22-9510-ed8116b14ae5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.372 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:50:11 compute-0 NetworkManager[48965]: <info>  [1765007411.3745] manager: (tap6ed32036-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.381 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.382 251996 INFO os_vif [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:7f:63,bridge_name='br-int',has_traffic_filtering=True,id=6ed32036-14e7-4ab4-a9dd-38196e9a6469,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed32036-14')
Dec 06 07:50:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:50:11 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5b39c044-86d6-4d89-843f-b53cc5f350ff does not exist
Dec 06 07:50:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9f92685a-b175-4c26-9962-aa6d9a64dfe2 does not exist
Dec 06 07:50:11 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 79721fa2-9d27-4634-9a2c-a31801d04f92 does not exist
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.437 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.456 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.456 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.456 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No VIF found with MAC fa:16:3e:77:7f:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.457 251996 INFO nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Using config drive
Dec 06 07:50:11 compute-0 sudo[359352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:11 compute-0 sudo[359352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:11 compute-0 sudo[359352]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:11 compute-0 nova_compute[251992]: 2025-12-06 07:50:11.486 251996 DEBUG nova.storage.rbd_utils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:11 compute-0 sudo[359392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:50:11 compute-0 sudo[359392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:11 compute-0 sudo[359392]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:11.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:12 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:50:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:12.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:50:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:50:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:50:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:50:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:50:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:50:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 3.4 MiB/s wr, 61 op/s
Dec 06 07:50:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:13.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:13 compute-0 ceph-mon[74339]: pgmap v2885: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 3.4 MiB/s wr, 61 op/s
Dec 06 07:50:13 compute-0 nova_compute[251992]: 2025-12-06 07:50:13.646 251996 INFO nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Creating config drive at /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/disk.config
Dec 06 07:50:13 compute-0 nova_compute[251992]: 2025-12-06 07:50:13.650 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp29nl_slk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:13 compute-0 nova_compute[251992]: 2025-12-06 07:50:13.788 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp29nl_slk" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:13 compute-0 nova_compute[251992]: 2025-12-06 07:50:13.824 251996 DEBUG nova.storage.rbd_utils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:13 compute-0 nova_compute[251992]: 2025-12-06 07:50:13.829 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/disk.config 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:14 compute-0 nova_compute[251992]: 2025-12-06 07:50:14.474 251996 DEBUG nova.network.neutron [req-6bab2acb-1b17-4210-b62f-33d94a9ff60b req-e4ec33a3-def6-420a-8d1c-93efe1fed0bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Updated VIF entry in instance network info cache for port 6ed32036-14e7-4ab4-a9dd-38196e9a6469. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:50:14 compute-0 nova_compute[251992]: 2025-12-06 07:50:14.475 251996 DEBUG nova.network.neutron [req-6bab2acb-1b17-4210-b62f-33d94a9ff60b req-e4ec33a3-def6-420a-8d1c-93efe1fed0bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Updating instance_info_cache with network_info: [{"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:50:14 compute-0 nova_compute[251992]: 2025-12-06 07:50:14.514 251996 DEBUG oslo_concurrency.lockutils [req-6bab2acb-1b17-4210-b62f-33d94a9ff60b req-e4ec33a3-def6-420a-8d1c-93efe1fed0bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:50:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:14.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:14 compute-0 nova_compute[251992]: 2025-12-06 07:50:14.882 251996 DEBUG oslo_concurrency.processutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/disk.config 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:14 compute-0 nova_compute[251992]: 2025-12-06 07:50:14.882 251996 INFO nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Deleting local config drive /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/disk.config because it was imported into RBD.
Dec 06 07:50:14 compute-0 kernel: tap6ed32036-14: entered promiscuous mode
Dec 06 07:50:14 compute-0 NetworkManager[48965]: <info>  [1765007414.9352] manager: (tap6ed32036-14): new Tun device (/org/freedesktop/NetworkManager/Devices/276)
Dec 06 07:50:14 compute-0 nova_compute[251992]: 2025-12-06 07:50:14.936 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:14 compute-0 ovn_controller[147168]: 2025-12-06T07:50:14Z|00613|binding|INFO|Claiming lport 6ed32036-14e7-4ab4-a9dd-38196e9a6469 for this chassis.
Dec 06 07:50:14 compute-0 ovn_controller[147168]: 2025-12-06T07:50:14Z|00614|binding|INFO|6ed32036-14e7-4ab4-a9dd-38196e9a6469: Claiming fa:16:3e:77:7f:63 10.100.0.7
Dec 06 07:50:14 compute-0 nova_compute[251992]: 2025-12-06 07:50:14.941 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:14 compute-0 systemd-udevd[359474]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:50:14 compute-0 NetworkManager[48965]: <info>  [1765007414.9757] device (tap6ed32036-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:50:14 compute-0 NetworkManager[48965]: <info>  [1765007414.9770] device (tap6ed32036-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:50:14 compute-0 systemd-machined[212986]: New machine qemu-77-instance-000000a5.
Dec 06 07:50:14 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-000000a5.
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.001 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:15 compute-0 ovn_controller[147168]: 2025-12-06T07:50:15Z|00615|binding|INFO|Setting lport 6ed32036-14e7-4ab4-a9dd-38196e9a6469 ovn-installed in OVS
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.009 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:15 compute-0 ovn_controller[147168]: 2025-12-06T07:50:15Z|00616|binding|INFO|Setting lport 6ed32036-14e7-4ab4-a9dd-38196e9a6469 up in Southbound
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.018 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:7f:63 10.100.0.7'], port_security=['fa:16:3e:77:7f:63 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6e187078-1e6f-4c22-9510-ed8116b14ae5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '2', 'neutron:security_group_ids': '19b7817b-5f7f-47d5-9095-54d9f4ab28e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=6ed32036-14e7-4ab4-a9dd-38196e9a6469) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.019 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 6ed32036-14e7-4ab4-a9dd-38196e9a6469 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 bound to our chassis
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.021 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.035 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a642fe0d-3385-496b-8293-9de4704091c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.036 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d151181-01 in ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.040 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d151181-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.040 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6d611480-8648-4d34-8982-bc35d84032fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.041 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[87da315c-81db-4d0b-a0cc-113f663ca317]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.057 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[01732b68-f0e6-42e2-a20a-3c204535525d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.070 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ecb7b6-92ce-418f-860d-4895df462a28]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.102 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[16a7943f-6095-4c95-9a11-b1b1984694bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.109 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b34600-e5b1-4ba5-b9dd-09a9e2824481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 systemd-udevd[359477]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:50:15 compute-0 NetworkManager[48965]: <info>  [1765007415.1103] manager: (tap3d151181-00): new Veth device (/org/freedesktop/NetworkManager/Devices/277)
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.141 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[250b2ac9-8cf2-46e8-9891-0ab93fc3469d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.144 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[f73e21c8-a5fc-43f3-8dcf-09c860c4f346]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.3 MiB/s wr, 73 op/s
Dec 06 07:50:15 compute-0 NetworkManager[48965]: <info>  [1765007415.1658] device (tap3d151181-00): carrier: link connected
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.169 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ea4978-ba06-4170-bfd7-4fc7a39a1ecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.188 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c0380ae2-d23a-48f8-882d-f53caf79a57f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768775, 'reachable_time': 28191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359509, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.201 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[28f7bdb4-025e-43f2-a82d-c7a07ed04ce1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:130b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 768775, 'tstamp': 768775}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359510, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.221 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a60f4fbe-54f5-4f45-b1a7-1c71704b710e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768775, 'reachable_time': 28191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359511, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.250 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[320eb326-fa00-47be-808a-aff3d9ccb12d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.300 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9d44a47f-180d-4954-a6f3-50fad4fff1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.302 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.302 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.302 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d151181-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.304 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:15 compute-0 NetworkManager[48965]: <info>  [1765007415.3048] manager: (tap3d151181-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Dec 06 07:50:15 compute-0 kernel: tap3d151181-00: entered promiscuous mode
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.306 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.309 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d151181-00, col_values=(('external_ids', {'iface-id': '7c0488e1-35c2-4c92-b43c-271fbeecd9ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.311 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:15 compute-0 ovn_controller[147168]: 2025-12-06T07:50:15Z|00617|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.312 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.312 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.313 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e702786e-d8b5-42d9-9b03-54d31a281e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.314 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:50:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:15.314 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'env', 'PROCESS_TAG=haproxy-3d151181-0dfe-43ab-b47e-15b53add33a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d151181-0dfe-43ab-b47e-15b53add33a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.324 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:15.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:15 compute-0 podman[359561]: 2025-12-06 07:50:15.669945713 +0000 UTC m=+0.048230293 container create 751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 07:50:15 compute-0 systemd[1]: Started libpod-conmon-751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9.scope.
Dec 06 07:50:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:50:15 compute-0 podman[359561]: 2025-12-06 07:50:15.644159617 +0000 UTC m=+0.022444217 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:50:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/268d736ce3a3ce224d5794e515fd32beb9b0e0809fc8d4484285c2326adb4ecc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:15 compute-0 podman[359561]: 2025-12-06 07:50:15.758514062 +0000 UTC m=+0.136798662 container init 751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:50:15 compute-0 podman[359561]: 2025-12-06 07:50:15.764209176 +0000 UTC m=+0.142493756 container start 751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.769 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007415.7692204, 6e187078-1e6f-4c22-9510-ed8116b14ae5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.770 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] VM Started (Lifecycle Event)
Dec 06 07:50:15 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[359599]: [NOTICE]   (359604) : New worker (359606) forked
Dec 06 07:50:15 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[359599]: [NOTICE]   (359604) : Loading success.
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.860 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.864 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007415.7695127, 6e187078-1e6f-4c22-9510-ed8116b14ae5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.865 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] VM Paused (Lifecycle Event)
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.906 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.910 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:50:15 compute-0 nova_compute[251992]: 2025-12-06 07:50:15.954 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.053 251996 DEBUG nova.compute.manager [req-add610fc-ae1b-4c2f-98f9-c34b84db6d1f req-01832523-8dc2-4f9f-a27d-d83012a3143b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.054 251996 DEBUG oslo_concurrency.lockutils [req-add610fc-ae1b-4c2f-98f9-c34b84db6d1f req-01832523-8dc2-4f9f-a27d-d83012a3143b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.055 251996 DEBUG oslo_concurrency.lockutils [req-add610fc-ae1b-4c2f-98f9-c34b84db6d1f req-01832523-8dc2-4f9f-a27d-d83012a3143b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.055 251996 DEBUG oslo_concurrency.lockutils [req-add610fc-ae1b-4c2f-98f9-c34b84db6d1f req-01832523-8dc2-4f9f-a27d-d83012a3143b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.055 251996 DEBUG nova.compute.manager [req-add610fc-ae1b-4c2f-98f9-c34b84db6d1f req-01832523-8dc2-4f9f-a27d-d83012a3143b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Processing event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.056 251996 DEBUG nova.compute.manager [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.060 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007416.0597174, 6e187078-1e6f-4c22-9510-ed8116b14ae5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.060 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] VM Resumed (Lifecycle Event)
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.062 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.065 251996 INFO nova.virt.libvirt.driver [-] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Instance spawned successfully.
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.066 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.101 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.109 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.116 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.117 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.118 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.119 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.119 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.119 251996 DEBUG nova.virt.libvirt.driver [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.133 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:50:16 compute-0 ceph-mon[74339]: pgmap v2886: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.3 MiB/s wr, 73 op/s
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.189 251996 INFO nova.compute.manager [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Took 18.61 seconds to spawn the instance on the hypervisor.
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.189 251996 DEBUG nova.compute.manager [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.261 251996 INFO nova.compute.manager [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Took 20.47 seconds to build instance.
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.280 251996 DEBUG oslo_concurrency.lockutils [None req-98c76cae-8f09-409e-92c4-17281af58d61 f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:16 compute-0 nova_compute[251992]: 2025-12-06 07:50:16.439 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:16.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:16 compute-0 sshd-session[359421]: error: kex_exchange_identification: read: Connection reset by peer
Dec 06 07:50:16 compute-0 sshd-session[359421]: Connection reset by 101.201.38.226 port 57780
Dec 06 07:50:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 85 op/s
Dec 06 07:50:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:17.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:18 compute-0 sshd-session[359615]: Invalid user  from 101.201.38.226 port 30470
Dec 06 07:50:18 compute-0 nova_compute[251992]: 2025-12-06 07:50:18.238 251996 DEBUG nova.compute.manager [req-c9e17bb2-5707-4ae9-93c9-bb7fbf91e229 req-a56e58d7-3693-4f4f-b863-01d4f34915c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:18 compute-0 nova_compute[251992]: 2025-12-06 07:50:18.239 251996 DEBUG oslo_concurrency.lockutils [req-c9e17bb2-5707-4ae9-93c9-bb7fbf91e229 req-a56e58d7-3693-4f4f-b863-01d4f34915c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:18 compute-0 nova_compute[251992]: 2025-12-06 07:50:18.239 251996 DEBUG oslo_concurrency.lockutils [req-c9e17bb2-5707-4ae9-93c9-bb7fbf91e229 req-a56e58d7-3693-4f4f-b863-01d4f34915c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:18 compute-0 nova_compute[251992]: 2025-12-06 07:50:18.239 251996 DEBUG oslo_concurrency.lockutils [req-c9e17bb2-5707-4ae9-93c9-bb7fbf91e229 req-a56e58d7-3693-4f4f-b863-01d4f34915c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:18 compute-0 nova_compute[251992]: 2025-12-06 07:50:18.239 251996 DEBUG nova.compute.manager [req-c9e17bb2-5707-4ae9-93c9-bb7fbf91e229 req-a56e58d7-3693-4f4f-b863-01d4f34915c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] No waiting events found dispatching network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:50:18 compute-0 nova_compute[251992]: 2025-12-06 07:50:18.240 251996 WARNING nova.compute.manager [req-c9e17bb2-5707-4ae9-93c9-bb7fbf91e229 req-a56e58d7-3693-4f4f-b863-01d4f34915c4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received unexpected event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 for instance with vm_state active and task_state None.
Dec 06 07:50:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/272318413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:18 compute-0 sshd-session[359615]: Connection closed by invalid user  101.201.38.226 port 30470 [preauth]
Dec 06 07:50:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:50:18
Dec 06 07:50:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:50:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:50:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'images', 'vms', '.mgr']
Dec 06 07:50:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:50:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:18.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 26 KiB/s wr, 121 op/s
Dec 06 07:50:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:19.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:19 compute-0 ceph-mon[74339]: pgmap v2887: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 85 op/s
Dec 06 07:50:19 compute-0 ceph-mon[74339]: pgmap v2888: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 26 KiB/s wr, 121 op/s
Dec 06 07:50:20 compute-0 nova_compute[251992]: 2025-12-06 07:50:20.205 251996 INFO nova.compute.manager [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Rescuing
Dec 06 07:50:20 compute-0 nova_compute[251992]: 2025-12-06 07:50:20.206 251996 DEBUG oslo_concurrency.lockutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:50:20 compute-0 nova_compute[251992]: 2025-12-06 07:50:20.207 251996 DEBUG oslo_concurrency.lockutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquired lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:50:20 compute-0 nova_compute[251992]: 2025-12-06 07:50:20.207 251996 DEBUG nova.network.neutron [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:50:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:20.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Dec 06 07:50:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/341899823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:21 compute-0 nova_compute[251992]: 2025-12-06 07:50:21.375 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:21 compute-0 nova_compute[251992]: 2025-12-06 07:50:21.441 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:21.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:21.712 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:50:21 compute-0 nova_compute[251992]: 2025-12-06 07:50:21.712 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:21.713 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:50:21 compute-0 nova_compute[251992]: 2025-12-06 07:50:21.866 251996 DEBUG nova.network.neutron [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Updating instance_info_cache with network_info: [{"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:50:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:22 compute-0 nova_compute[251992]: 2025-12-06 07:50:22.377 251996 DEBUG oslo_concurrency.lockutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Releasing lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:50:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:22.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:22 compute-0 nova_compute[251992]: 2025-12-06 07:50:22.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:22 compute-0 nova_compute[251992]: 2025-12-06 07:50:22.978 251996 DEBUG nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:50:22 compute-0 nova_compute[251992]: 2025-12-06 07:50:22.980 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:22 compute-0 nova_compute[251992]: 2025-12-06 07:50:22.981 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:22 compute-0 nova_compute[251992]: 2025-12-06 07:50:22.981 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:22 compute-0 nova_compute[251992]: 2025-12-06 07:50:22.981 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:50:22 compute-0 nova_compute[251992]: 2025-12-06 07:50:22.982 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 134 op/s
Dec 06 07:50:23 compute-0 ceph-mon[74339]: pgmap v2889: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Dec 06 07:50:23 compute-0 sudo[359641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:23 compute-0 sudo[359641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:23 compute-0 sudo[359641]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:23 compute-0 sudo[359666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:23 compute-0 sudo[359666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:23 compute-0 sudo[359666]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:23.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:50:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:50:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:50:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2055956596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:23 compute-0 nova_compute[251992]: 2025-12-06 07:50:23.652 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:23 compute-0 nova_compute[251992]: 2025-12-06 07:50:23.753 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:50:23 compute-0 nova_compute[251992]: 2025-12-06 07:50:23.753 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:50:23 compute-0 nova_compute[251992]: 2025-12-06 07:50:23.909 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:50:23 compute-0 nova_compute[251992]: 2025-12-06 07:50:23.910 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4076MB free_disk=20.946483612060547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:50:23 compute-0 nova_compute[251992]: 2025-12-06 07:50:23.911 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:23 compute-0 nova_compute[251992]: 2025-12-06 07:50:23.911 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/347522823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:24 compute-0 ceph-mon[74339]: pgmap v2890: 305 pgs: 305 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 134 op/s
Dec 06 07:50:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2055956596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:24 compute-0 nova_compute[251992]: 2025-12-06 07:50:24.251 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 6e187078-1e6f-4c22-9510-ed8116b14ae5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:50:24 compute-0 nova_compute[251992]: 2025-12-06 07:50:24.251 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:50:24 compute-0 nova_compute[251992]: 2025-12-06 07:50:24.252 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:50:24 compute-0 nova_compute[251992]: 2025-12-06 07:50:24.296 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:24.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:50:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565456485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:24 compute-0 nova_compute[251992]: 2025-12-06 07:50:24.773 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:24 compute-0 nova_compute[251992]: 2025-12-06 07:50:24.780 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:50:24 compute-0 nova_compute[251992]: 2025-12-06 07:50:24.799 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:50:24 compute-0 nova_compute[251992]: 2025-12-06 07:50:24.825 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:50:24 compute-0 nova_compute[251992]: 2025-12-06 07:50:24.826 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:50:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:50:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:50:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:50:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:50:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 262 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.5 MiB/s wr, 176 op/s
Dec 06 07:50:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:25.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002793917504187605 of space, bias 1.0, pg target 0.8381752512562815 quantized to 32 (current 32)
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162323480830076 of space, bias 1.0, pg target 0.6486970442490229 quantized to 32 (current 32)
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:50:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:50:26 compute-0 nova_compute[251992]: 2025-12-06 07:50:26.378 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:26 compute-0 nova_compute[251992]: 2025-12-06 07:50:26.443 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:26.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 273 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.3 MiB/s wr, 145 op/s
Dec 06 07:50:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3565456485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:27 compute-0 ceph-mon[74339]: pgmap v2891: 305 pgs: 305 active+clean; 262 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.5 MiB/s wr, 176 op/s
Dec 06 07:50:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:27.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:28.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/11715429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/281735642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:28 compute-0 ceph-mon[74339]: pgmap v2892: 305 pgs: 305 active+clean; 273 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.3 MiB/s wr, 145 op/s
Dec 06 07:50:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/519535519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 308 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.5 MiB/s wr, 145 op/s
Dec 06 07:50:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:29.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:30 compute-0 ovn_controller[147168]: 2025-12-06T07:50:30Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:7f:63 10.100.0.7
Dec 06 07:50:30 compute-0 ovn_controller[147168]: 2025-12-06T07:50:30Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:7f:63 10.100.0.7
Dec 06 07:50:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:30.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:30 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:30.716 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:30 compute-0 ceph-mon[74339]: pgmap v2893: 305 pgs: 305 active+clean; 308 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.5 MiB/s wr, 145 op/s
Dec 06 07:50:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 366 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.2 MiB/s wr, 219 op/s
Dec 06 07:50:31 compute-0 nova_compute[251992]: 2025-12-06 07:50:31.381 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:31 compute-0 nova_compute[251992]: 2025-12-06 07:50:31.444 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:31.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:32 compute-0 ceph-mon[74339]: pgmap v2894: 305 pgs: 305 active+clean; 366 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.2 MiB/s wr, 219 op/s
Dec 06 07:50:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:32 compute-0 podman[359721]: 2025-12-06 07:50:32.524512594 +0000 UTC m=+0.177741286 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:50:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:32.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:32 compute-0 nova_compute[251992]: 2025-12-06 07:50:32.820 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:32 compute-0 nova_compute[251992]: 2025-12-06 07:50:32.821 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:32 compute-0 nova_compute[251992]: 2025-12-06 07:50:32.821 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:50:32 compute-0 nova_compute[251992]: 2025-12-06 07:50:32.821 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:50:32 compute-0 nova_compute[251992]: 2025-12-06 07:50:32.840 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:50:32 compute-0 nova_compute[251992]: 2025-12-06 07:50:32.840 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:50:32 compute-0 nova_compute[251992]: 2025-12-06 07:50:32.840 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:50:32 compute-0 nova_compute[251992]: 2025-12-06 07:50:32.840 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:33 compute-0 nova_compute[251992]: 2025-12-06 07:50:33.051 251996 DEBUG nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 06 07:50:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 366 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 7.2 MiB/s wr, 188 op/s
Dec 06 07:50:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:50:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4260146210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:33.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:34.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.8 MiB/s wr, 260 op/s
Dec 06 07:50:35 compute-0 nova_compute[251992]: 2025-12-06 07:50:35.488 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Updating instance_info_cache with network_info: [{"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:50:35 compute-0 nova_compute[251992]: 2025-12-06 07:50:35.516 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:50:35 compute-0 nova_compute[251992]: 2025-12-06 07:50:35.516 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:50:35 compute-0 nova_compute[251992]: 2025-12-06 07:50:35.516 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:35 compute-0 nova_compute[251992]: 2025-12-06 07:50:35.517 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:35 compute-0 nova_compute[251992]: 2025-12-06 07:50:35.517 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:35.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:35 compute-0 nova_compute[251992]: 2025-12-06 07:50:35.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:35 compute-0 nova_compute[251992]: 2025-12-06 07:50:35.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3325595356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:35 compute-0 ceph-mon[74339]: pgmap v2895: 305 pgs: 305 active+clean; 366 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 7.2 MiB/s wr, 188 op/s
Dec 06 07:50:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4260146210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:36 compute-0 nova_compute[251992]: 2025-12-06 07:50:36.384 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:36 compute-0 nova_compute[251992]: 2025-12-06 07:50:36.446 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:36.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3917741292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:37 compute-0 ceph-mon[74339]: pgmap v2896: 305 pgs: 305 active+clean; 390 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.8 MiB/s wr, 260 op/s
Dec 06 07:50:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2432893103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 392 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.4 MiB/s wr, 222 op/s
Dec 06 07:50:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:37.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:37 compute-0 nova_compute[251992]: 2025-12-06 07:50:37.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:50:37 compute-0 nova_compute[251992]: 2025-12-06 07:50:37.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:50:38 compute-0 ceph-mon[74339]: pgmap v2897: 305 pgs: 305 active+clean; 392 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.4 MiB/s wr, 222 op/s
Dec 06 07:50:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:38.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:38 compute-0 kernel: tap6ed32036-14 (unregistering): left promiscuous mode
Dec 06 07:50:38 compute-0 NetworkManager[48965]: <info>  [1765007438.6101] device (tap6ed32036-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:50:38 compute-0 ovn_controller[147168]: 2025-12-06T07:50:38Z|00618|binding|INFO|Releasing lport 6ed32036-14e7-4ab4-a9dd-38196e9a6469 from this chassis (sb_readonly=0)
Dec 06 07:50:38 compute-0 ovn_controller[147168]: 2025-12-06T07:50:38Z|00619|binding|INFO|Setting lport 6ed32036-14e7-4ab4-a9dd-38196e9a6469 down in Southbound
Dec 06 07:50:38 compute-0 ovn_controller[147168]: 2025-12-06T07:50:38Z|00620|binding|INFO|Removing iface tap6ed32036-14 ovn-installed in OVS
Dec 06 07:50:38 compute-0 nova_compute[251992]: 2025-12-06 07:50:38.618 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:38 compute-0 nova_compute[251992]: 2025-12-06 07:50:38.620 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:38 compute-0 nova_compute[251992]: 2025-12-06 07:50:38.639 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:38 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Dec 06 07:50:38 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000a5.scope: Consumed 13.973s CPU time.
Dec 06 07:50:38 compute-0 systemd-machined[212986]: Machine qemu-77-instance-000000a5 terminated.
Dec 06 07:50:38 compute-0 nova_compute[251992]: 2025-12-06 07:50:38.848 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:38 compute-0 nova_compute[251992]: 2025-12-06 07:50:38.856 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:38.933 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:7f:63 10.100.0.7'], port_security=['fa:16:3e:77:7f:63 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6e187078-1e6f-4c22-9510-ed8116b14ae5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '4', 'neutron:security_group_ids': '19b7817b-5f7f-47d5-9095-54d9f4ab28e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=6ed32036-14e7-4ab4-a9dd-38196e9a6469) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:50:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:38.938 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 6ed32036-14e7-4ab4-a9dd-38196e9a6469 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 unbound from our chassis
Dec 06 07:50:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:38.941 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d151181-0dfe-43ab-b47e-15b53add33a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:50:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:38.945 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[51de38f0-35d0-4030-984c-6472032495a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:38.946 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace which is not needed anymore
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.076 251996 INFO nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Instance shutdown successfully after 16 seconds.
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.082 251996 INFO nova.virt.libvirt.driver [-] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Instance destroyed successfully.
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.082 251996 DEBUG nova.objects.instance [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:39 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[359599]: [NOTICE]   (359604) : haproxy version is 2.8.14-c23fe91
Dec 06 07:50:39 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[359599]: [NOTICE]   (359604) : path to executable is /usr/sbin/haproxy
Dec 06 07:50:39 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[359599]: [WARNING]  (359604) : Exiting Master process...
Dec 06 07:50:39 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[359599]: [ALERT]    (359604) : Current worker (359606) exited with code 143 (Terminated)
Dec 06 07:50:39 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[359599]: [WARNING]  (359604) : All workers exited. Exiting... (0)
Dec 06 07:50:39 compute-0 systemd[1]: libpod-751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9.scope: Deactivated successfully.
Dec 06 07:50:39 compute-0 podman[359785]: 2025-12-06 07:50:39.100416879 +0000 UTC m=+0.050839133 container died 751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 07:50:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9-userdata-shm.mount: Deactivated successfully.
Dec 06 07:50:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-268d736ce3a3ce224d5794e515fd32beb9b0e0809fc8d4484285c2326adb4ecc-merged.mount: Deactivated successfully.
Dec 06 07:50:39 compute-0 podman[359785]: 2025-12-06 07:50:39.144316683 +0000 UTC m=+0.094738937 container cleanup 751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:50:39 compute-0 systemd[1]: libpod-conmon-751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9.scope: Deactivated successfully.
Dec 06 07:50:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 392 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.5 MiB/s wr, 215 op/s
Dec 06 07:50:39 compute-0 podman[359817]: 2025-12-06 07:50:39.219289476 +0000 UTC m=+0.052761824 container remove 751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:50:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:39.229 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[210c86e7-b936-4ca6-b0b8-28257b32e0ca]: (4, ('Sat Dec  6 07:50:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9)\n751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9\nSat Dec  6 07:50:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9)\n751047f21db1a3fa7365b97bca8c41d1ea5d8a2e3cd4908e0a4d757c5bdd7ac9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:39.231 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7f93fe-40b2-48b7-8082-17aa16c1cbcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:39.233 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.235 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:39 compute-0 kernel: tap3d151181-00: left promiscuous mode
Dec 06 07:50:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/754643084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/375235154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.260 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:39.263 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[222600c1-5a56-43ad-8696-e8c897c4f345]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:39.278 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[58354d07-89d9-4f76-ad77-a6ab596dcbd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:39.280 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6be9b1cc-b69f-4a9c-9679-44d3478faae7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:39.298 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a11e4a0c-5bcc-484e-bcdf-bc9d851b4dcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768768, 'reachable_time': 33198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359835, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d3d151181\x2d0dfe\x2d43ab\x2db47e\x2d15b53add33a6.mount: Deactivated successfully.
Dec 06 07:50:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:39.306 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:50:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:39.307 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3e21b7-8af4-4ae6-ba25-af8bb9785a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.313 251996 INFO nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Attempting rescue
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.314 251996 DEBUG nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.320 251996 DEBUG nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.320 251996 INFO nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Creating image(s)
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.360 251996 DEBUG nova.storage.rbd_utils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.366 251996 DEBUG nova.objects.instance [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.514 251996 DEBUG nova.storage.rbd_utils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.557 251996 DEBUG nova.storage.rbd_utils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.563 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:39.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.632 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.633 251996 DEBUG oslo_concurrency.lockutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.633 251996 DEBUG oslo_concurrency.lockutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.634 251996 DEBUG oslo_concurrency.lockutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.661 251996 DEBUG nova.storage.rbd_utils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:39 compute-0 nova_compute[251992]: 2025-12-06 07:50:39.666 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:40 compute-0 podman[359930]: 2025-12-06 07:50:40.399860449 +0000 UTC m=+0.055153838 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:50:40 compute-0 podman[359931]: 2025-12-06 07:50:40.403333523 +0000 UTC m=+0.058803587 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 07:50:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:40.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:40 compute-0 ceph-mon[74339]: pgmap v2898: 305 pgs: 305 active+clean; 392 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.5 MiB/s wr, 215 op/s
Dec 06 07:50:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 422 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.5 MiB/s wr, 212 op/s
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.388 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.449 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.544 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.878s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.545 251996 DEBUG nova.objects.instance [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'migration_context' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.586 251996 DEBUG nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.587 251996 DEBUG nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Start _get_guest_xml network_info=[{"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "vif_mac": "fa:16:3e:77:7f:63"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.588 251996 DEBUG nova.objects.instance [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'resources' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:41.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.624 251996 WARNING nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.631 251996 DEBUG nova.virt.libvirt.host [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.632 251996 DEBUG nova.virt.libvirt.host [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.642 251996 DEBUG nova.virt.libvirt.host [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.642 251996 DEBUG nova.virt.libvirt.host [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.644 251996 DEBUG nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.644 251996 DEBUG nova.virt.hardware [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.644 251996 DEBUG nova.virt.hardware [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.644 251996 DEBUG nova.virt.hardware [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.645 251996 DEBUG nova.virt.hardware [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.645 251996 DEBUG nova.virt.hardware [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.645 251996 DEBUG nova.virt.hardware [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.645 251996 DEBUG nova.virt.hardware [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.645 251996 DEBUG nova.virt.hardware [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.645 251996 DEBUG nova.virt.hardware [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.646 251996 DEBUG nova.virt.hardware [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.646 251996 DEBUG nova.virt.hardware [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.646 251996 DEBUG nova.objects.instance [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.671 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Dec 06 07:50:41 compute-0 ceph-mon[74339]: pgmap v2899: 305 pgs: 305 active+clean; 422 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.5 MiB/s wr, 212 op/s
Dec 06 07:50:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Dec 06 07:50:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.941 251996 DEBUG nova.compute.manager [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received event network-vif-unplugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.942 251996 DEBUG oslo_concurrency.lockutils [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.942 251996 DEBUG oslo_concurrency.lockutils [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.942 251996 DEBUG oslo_concurrency.lockutils [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.942 251996 DEBUG nova.compute.manager [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] No waiting events found dispatching network-vif-unplugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.943 251996 WARNING nova.compute.manager [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received unexpected event network-vif-unplugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 for instance with vm_state active and task_state rescuing.
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.943 251996 DEBUG nova.compute.manager [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.943 251996 DEBUG oslo_concurrency.lockutils [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.943 251996 DEBUG oslo_concurrency.lockutils [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.944 251996 DEBUG oslo_concurrency.lockutils [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.944 251996 DEBUG nova.compute.manager [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] No waiting events found dispatching network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:50:41 compute-0 nova_compute[251992]: 2025-12-06 07:50:41.944 251996 WARNING nova.compute.manager [req-b035783f-12e1-4345-9e3a-92165ac51d46 req-70353982-60a9-4a65-8b09-5ca08e6b7d90 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received unexpected event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 for instance with vm_state active and task_state rescuing.
Dec 06 07:50:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:50:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3577384955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:42 compute-0 nova_compute[251992]: 2025-12-06 07:50:42.134 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:42 compute-0 nova_compute[251992]: 2025-12-06 07:50:42.135 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:50:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1467634657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:42 compute-0 nova_compute[251992]: 2025-12-06 07:50:42.578 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:42 compute-0 nova_compute[251992]: 2025-12-06 07:50:42.579 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:42.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:42 compute-0 ceph-mon[74339]: osdmap e374: 3 total, 3 up, 3 in
Dec 06 07:50:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3577384955' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1467634657' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:50:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2511236024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:50:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:50:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:50:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:50:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:50:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:50:43 compute-0 nova_compute[251992]: 2025-12-06 07:50:43.046 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:43 compute-0 nova_compute[251992]: 2025-12-06 07:50:43.048 251996 DEBUG nova.virt.libvirt.vif [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-961523943',display_name='tempest-ServerRescueNegativeTestJSON-server-961523943',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-961523943',id=165,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:50:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4842ecff6dce4ccc981a6b65a14ea406',ramdisk_id='',reservation_id='r-g79ucyw0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1304226499',owner_user_name='tempest-ServerRescueNegativeTestJSON-1304226499-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:50:16Z,user_data=None,user_id='f2335740042045fba7f544ee5140eb87',uuid=6e187078-1e6f-4c22-9510-ed8116b14ae5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "vif_mac": "fa:16:3e:77:7f:63"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:50:43 compute-0 nova_compute[251992]: 2025-12-06 07:50:43.048 251996 DEBUG nova.network.os_vif_util [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converting VIF {"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "vif_mac": "fa:16:3e:77:7f:63"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:50:43 compute-0 nova_compute[251992]: 2025-12-06 07:50:43.049 251996 DEBUG nova.network.os_vif_util [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:7f:63,bridge_name='br-int',has_traffic_filtering=True,id=6ed32036-14e7-4ab4-a9dd-38196e9a6469,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed32036-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:50:43 compute-0 nova_compute[251992]: 2025-12-06 07:50:43.050 251996 DEBUG nova.objects.instance [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 422 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Dec 06 07:50:43 compute-0 nova_compute[251992]: 2025-12-06 07:50:43.372 251996 DEBUG nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:50:43 compute-0 nova_compute[251992]:   <uuid>6e187078-1e6f-4c22-9510-ed8116b14ae5</uuid>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   <name>instance-000000a5</name>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-961523943</nova:name>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:50:41</nova:creationTime>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <nova:user uuid="f2335740042045fba7f544ee5140eb87">tempest-ServerRescueNegativeTestJSON-1304226499-project-member</nova:user>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <nova:project uuid="4842ecff6dce4ccc981a6b65a14ea406">tempest-ServerRescueNegativeTestJSON-1304226499</nova:project>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <nova:port uuid="6ed32036-14e7-4ab4-a9dd-38196e9a6469">
Dec 06 07:50:43 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <system>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <entry name="serial">6e187078-1e6f-4c22-9510-ed8116b14ae5</entry>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <entry name="uuid">6e187078-1e6f-4c22-9510-ed8116b14ae5</entry>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     </system>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   <os>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   </os>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   <features>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   </features>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.rescue">
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       </source>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/6e187078-1e6f-4c22-9510-ed8116b14ae5_disk">
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       </source>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <target dev="vdb" bus="virtio"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.config.rescue">
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       </source>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:50:43 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:77:7f:63"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <target dev="tap6ed32036-14"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/console.log" append="off"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <video>
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     </video>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:50:43 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:50:43 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:50:43 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:50:43 compute-0 nova_compute[251992]: </domain>
Dec 06 07:50:43 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:50:43 compute-0 nova_compute[251992]: 2025-12-06 07:50:43.382 251996 INFO nova.virt.libvirt.driver [-] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Instance destroyed successfully.
Dec 06 07:50:43 compute-0 sudo[360038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:43 compute-0 sudo[360038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:43 compute-0 sudo[360038]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:43 compute-0 sudo[360063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:50:43 compute-0 sudo[360063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:50:43 compute-0 sudo[360063]: pam_unix(sudo:session): session closed for user root
Dec 06 07:50:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:43.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2511236024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:43 compute-0 ceph-mon[74339]: pgmap v2901: 305 pgs: 305 active+clean; 422 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Dec 06 07:50:44 compute-0 nova_compute[251992]: 2025-12-06 07:50:44.085 251996 DEBUG nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:50:44 compute-0 nova_compute[251992]: 2025-12-06 07:50:44.085 251996 DEBUG nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:50:44 compute-0 nova_compute[251992]: 2025-12-06 07:50:44.086 251996 DEBUG nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:50:44 compute-0 nova_compute[251992]: 2025-12-06 07:50:44.086 251996 DEBUG nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] No VIF found with MAC fa:16:3e:77:7f:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:50:44 compute-0 nova_compute[251992]: 2025-12-06 07:50:44.086 251996 INFO nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Using config drive
Dec 06 07:50:44 compute-0 nova_compute[251992]: 2025-12-06 07:50:44.112 251996 DEBUG nova.storage.rbd_utils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:44 compute-0 nova_compute[251992]: 2025-12-06 07:50:44.355 251996 DEBUG nova.objects.instance [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:44.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:44 compute-0 nova_compute[251992]: 2025-12-06 07:50:44.625 251996 DEBUG nova.objects.instance [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'keypairs' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:50:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 509 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.3 MiB/s wr, 197 op/s
Dec 06 07:50:45 compute-0 ceph-mon[74339]: pgmap v2902: 305 pgs: 305 active+clean; 509 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.3 MiB/s wr, 197 op/s
Dec 06 07:50:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:45.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:46 compute-0 nova_compute[251992]: 2025-12-06 07:50:46.392 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:46 compute-0 nova_compute[251992]: 2025-12-06 07:50:46.451 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:46 compute-0 nova_compute[251992]: 2025-12-06 07:50:46.646 251996 INFO nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Creating config drive at /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/disk.config.rescue
Dec 06 07:50:46 compute-0 nova_compute[251992]: 2025-12-06 07:50:46.651 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ahba2yv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2810597353' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:46 compute-0 nova_compute[251992]: 2025-12-06 07:50:46.792 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ahba2yv" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:46 compute-0 nova_compute[251992]: 2025-12-06 07:50:46.821 251996 DEBUG nova.storage.rbd_utils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] rbd image 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:50:46 compute-0 nova_compute[251992]: 2025-12-06 07:50:46.825 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/disk.config.rescue 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.020 251996 DEBUG oslo_concurrency.processutils [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/disk.config.rescue 6e187078-1e6f-4c22-9510-ed8116b14ae5_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.022 251996 INFO nova.virt.libvirt.driver [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Deleting local config drive /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5/disk.config.rescue because it was imported into RBD.
Dec 06 07:50:47 compute-0 kernel: tap6ed32036-14: entered promiscuous mode
Dec 06 07:50:47 compute-0 NetworkManager[48965]: <info>  [1765007447.0907] manager: (tap6ed32036-14): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.091 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:47 compute-0 ovn_controller[147168]: 2025-12-06T07:50:47Z|00621|binding|INFO|Claiming lport 6ed32036-14e7-4ab4-a9dd-38196e9a6469 for this chassis.
Dec 06 07:50:47 compute-0 ovn_controller[147168]: 2025-12-06T07:50:47Z|00622|binding|INFO|6ed32036-14e7-4ab4-a9dd-38196e9a6469: Claiming fa:16:3e:77:7f:63 10.100.0.7
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.107 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:47 compute-0 ovn_controller[147168]: 2025-12-06T07:50:47Z|00623|binding|INFO|Setting lport 6ed32036-14e7-4ab4-a9dd-38196e9a6469 ovn-installed in OVS
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.109 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.111 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:47 compute-0 systemd-udevd[360161]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:50:47 compute-0 systemd-machined[212986]: New machine qemu-78-instance-000000a5.
Dec 06 07:50:47 compute-0 NetworkManager[48965]: <info>  [1765007447.1469] device (tap6ed32036-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:50:47 compute-0 ovn_controller[147168]: 2025-12-06T07:50:47Z|00624|binding|INFO|Setting lport 6ed32036-14e7-4ab4-a9dd-38196e9a6469 up in Southbound
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.146 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:7f:63 10.100.0.7'], port_security=['fa:16:3e:77:7f:63 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6e187078-1e6f-4c22-9510-ed8116b14ae5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '5', 'neutron:security_group_ids': '19b7817b-5f7f-47d5-9095-54d9f4ab28e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=6ed32036-14e7-4ab4-a9dd-38196e9a6469) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.148 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 6ed32036-14e7-4ab4-a9dd-38196e9a6469 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 bound to our chassis
Dec 06 07:50:47 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-000000a5.
Dec 06 07:50:47 compute-0 NetworkManager[48965]: <info>  [1765007447.1494] device (tap6ed32036-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.150 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.164 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[af27075d-84b7-4bed-ab38-55cf02070ee0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.165 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d151181-01 in ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.168 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d151181-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.168 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a33e38a9-5d66-4fd0-98a3-fc915af78890]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.169 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b581cf44-4aaf-44fa-91d8-266ac9dc2cc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 515 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 234 op/s
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.183 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[a294f854-4697-4a3d-b4ac-af933cdd7c97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.197 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b50e3449-bbb9-4d00-97ad-f431077a7560]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.229 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ece56135-70ac-49fb-a77f-38ff3fb20f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 systemd-udevd[360163]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.236 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb2b488-8406-45ad-b15d-fe19a15f8185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 NetworkManager[48965]: <info>  [1765007447.2378] manager: (tap3d151181-00): new Veth device (/org/freedesktop/NetworkManager/Devices/280)
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.276 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[43936705-e928-483a-8dae-3050d89a9e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.280 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a03a27d7-6ad8-42a3-a042-87fb3862161d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 NetworkManager[48965]: <info>  [1765007447.3009] device (tap3d151181-00): carrier: link connected
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.305 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[db30ca98-0812-4851-b76a-64cfe7535bb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.321 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a29ff0f6-7941-440f-9e49-8feb0a4c7d2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771988, 'reachable_time': 41850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360194, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.335 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[70c2426b-f6b9-4d41-8afd-0e13e9b47976]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:130b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 771988, 'tstamp': 771988}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360195, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.351 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3c120f-6c9e-47f0-b2eb-5ce685af4826]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d151181-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:13:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 189], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771988, 'reachable_time': 41850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360196, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.388 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c13a8f77-a0f3-4b66-a2bd-8d958f7a8210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.461 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[06b563fc-1ccd-46fc-a54b-db7fe28c9e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.463 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.463 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.464 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d151181-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.465 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:47 compute-0 NetworkManager[48965]: <info>  [1765007447.4667] manager: (tap3d151181-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Dec 06 07:50:47 compute-0 kernel: tap3d151181-00: entered promiscuous mode
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.468 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d151181-00, col_values=(('external_ids', {'iface-id': '7c0488e1-35c2-4c92-b43c-271fbeecd9ea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.469 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:47 compute-0 ovn_controller[147168]: 2025-12-06T07:50:47Z|00625|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.483 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.484 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.485 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc8b0dc-f980-42b7-8488-496f06abc904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.486 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/3d151181-0dfe-43ab-b47e-15b53add33a6.pid.haproxy
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 3d151181-0dfe-43ab-b47e-15b53add33a6
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:50:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:50:47.487 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'env', 'PROCESS_TAG=haproxy-3d151181-0dfe-43ab-b47e-15b53add33a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d151181-0dfe-43ab-b47e-15b53add33a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:50:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.593 251996 DEBUG nova.virt.libvirt.host [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Removed pending event for 6e187078-1e6f-4c22-9510-ed8116b14ae5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.594 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007447.5934105, 6e187078-1e6f-4c22-9510-ed8116b14ae5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.594 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] VM Resumed (Lifecycle Event)
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.599 251996 DEBUG nova.compute.manager [None req-8df03c5b-26e4-40ac-b30f-e68e83293d4b f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:47.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.635 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.639 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.667 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] During sync_power_state the instance has a pending task (rescuing). Skip.
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.668 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007447.5935698, 6e187078-1e6f-4c22-9510-ed8116b14ae5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.668 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] VM Started (Lifecycle Event)
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.692 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:50:47 compute-0 nova_compute[251992]: 2025-12-06 07:50:47.695 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:50:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/588140338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:50:47 compute-0 ceph-mon[74339]: pgmap v2903: 305 pgs: 305 active+clean; 515 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 234 op/s
Dec 06 07:50:47 compute-0 podman[360289]: 2025-12-06 07:50:47.9025398 +0000 UTC m=+0.058544611 container create e8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:50:47 compute-0 systemd[1]: Started libpod-conmon-e8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976.scope.
Dec 06 07:50:47 compute-0 podman[360289]: 2025-12-06 07:50:47.865825599 +0000 UTC m=+0.021830400 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:50:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:50:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c2fef77d966cf4f6c0269bb58be6378d325f699b30ebfbfa1e8ad878b74b55/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:50:48 compute-0 podman[360289]: 2025-12-06 07:50:48.007418059 +0000 UTC m=+0.163422860 container init e8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:50:48 compute-0 podman[360289]: 2025-12-06 07:50:48.016000711 +0000 UTC m=+0.172005482 container start e8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 07:50:48 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[360306]: [NOTICE]   (360310) : New worker (360312) forked
Dec 06 07:50:48 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[360306]: [NOTICE]   (360310) : Loading success.
Dec 06 07:50:48 compute-0 nova_compute[251992]: 2025-12-06 07:50:48.240 251996 DEBUG nova.compute.manager [req-82c821f7-14df-4f68-a53e-9a41ed1b403b req-62f7c239-174c-418e-9fd7-5d43d847e68a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:48 compute-0 nova_compute[251992]: 2025-12-06 07:50:48.241 251996 DEBUG oslo_concurrency.lockutils [req-82c821f7-14df-4f68-a53e-9a41ed1b403b req-62f7c239-174c-418e-9fd7-5d43d847e68a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:48 compute-0 nova_compute[251992]: 2025-12-06 07:50:48.241 251996 DEBUG oslo_concurrency.lockutils [req-82c821f7-14df-4f68-a53e-9a41ed1b403b req-62f7c239-174c-418e-9fd7-5d43d847e68a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:48 compute-0 nova_compute[251992]: 2025-12-06 07:50:48.241 251996 DEBUG oslo_concurrency.lockutils [req-82c821f7-14df-4f68-a53e-9a41ed1b403b req-62f7c239-174c-418e-9fd7-5d43d847e68a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:48 compute-0 nova_compute[251992]: 2025-12-06 07:50:48.242 251996 DEBUG nova.compute.manager [req-82c821f7-14df-4f68-a53e-9a41ed1b403b req-62f7c239-174c-418e-9fd7-5d43d847e68a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] No waiting events found dispatching network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:50:48 compute-0 nova_compute[251992]: 2025-12-06 07:50:48.242 251996 WARNING nova.compute.manager [req-82c821f7-14df-4f68-a53e-9a41ed1b403b req-62f7c239-174c-418e-9fd7-5d43d847e68a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received unexpected event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 for instance with vm_state rescued and task_state None.
Dec 06 07:50:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:48.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 518 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.8 MiB/s wr, 233 op/s
Dec 06 07:50:49 compute-0 ceph-mon[74339]: pgmap v2904: 305 pgs: 305 active+clean; 518 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.8 MiB/s wr, 233 op/s
Dec 06 07:50:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:49.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:50 compute-0 nova_compute[251992]: 2025-12-06 07:50:50.409 251996 DEBUG nova.compute.manager [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:50:50 compute-0 nova_compute[251992]: 2025-12-06 07:50:50.409 251996 DEBUG oslo_concurrency.lockutils [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:50:50 compute-0 nova_compute[251992]: 2025-12-06 07:50:50.410 251996 DEBUG oslo_concurrency.lockutils [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:50:50 compute-0 nova_compute[251992]: 2025-12-06 07:50:50.410 251996 DEBUG oslo_concurrency.lockutils [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:50:50 compute-0 nova_compute[251992]: 2025-12-06 07:50:50.410 251996 DEBUG nova.compute.manager [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] No waiting events found dispatching network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:50:50 compute-0 nova_compute[251992]: 2025-12-06 07:50:50.410 251996 WARNING nova.compute.manager [req-25f5af4a-4c4b-42bd-a70d-30872ec9dc86 req-7ff266a5-47d2-4f93-a7a9-54d2fb118870 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received unexpected event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 for instance with vm_state rescued and task_state None.
Dec 06 07:50:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:50.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 518 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.4 MiB/s wr, 293 op/s
Dec 06 07:50:51 compute-0 nova_compute[251992]: 2025-12-06 07:50:51.396 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:51 compute-0 nova_compute[251992]: 2025-12-06 07:50:51.574 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:51.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:52.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:52 compute-0 ceph-mon[74339]: pgmap v2905: 305 pgs: 305 active+clean; 518 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.4 MiB/s wr, 293 op/s
Dec 06 07:50:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 518 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.8 MiB/s wr, 260 op/s
Dec 06 07:50:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:53.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:54 compute-0 ceph-mon[74339]: pgmap v2906: 305 pgs: 305 active+clean; 518 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.8 MiB/s wr, 260 op/s
Dec 06 07:50:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:54.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 527 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.3 MiB/s wr, 319 op/s
Dec 06 07:50:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:55.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:56 compute-0 ceph-mon[74339]: pgmap v2907: 305 pgs: 305 active+clean; 527 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.3 MiB/s wr, 319 op/s
Dec 06 07:50:56 compute-0 nova_compute[251992]: 2025-12-06 07:50:56.400 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:56 compute-0 nova_compute[251992]: 2025-12-06 07:50:56.456 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:50:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:56.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 536 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.9 MiB/s wr, 223 op/s
Dec 06 07:50:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:50:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:50:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:57.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:50:57 compute-0 ceph-mon[74339]: pgmap v2908: 305 pgs: 305 active+clean; 536 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.9 MiB/s wr, 223 op/s
Dec 06 07:50:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:50:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:50:58.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:50:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 548 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 195 op/s
Dec 06 07:50:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:50:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:50:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:50:59.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:50:59 compute-0 ceph-mon[74339]: pgmap v2909: 305 pgs: 305 active+clean; 548 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 195 op/s
Dec 06 07:51:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:00.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 551 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 225 op/s
Dec 06 07:51:01 compute-0 nova_compute[251992]: 2025-12-06 07:51:01.401 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:01 compute-0 nova_compute[251992]: 2025-12-06 07:51:01.457 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:01.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:02.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:02 compute-0 ceph-mon[74339]: pgmap v2910: 305 pgs: 305 active+clean; 551 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 225 op/s
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.774941) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462775020, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 742, "num_deletes": 250, "total_data_size": 1013533, "memory_usage": 1027936, "flush_reason": "Manual Compaction"}
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462781749, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 665306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58149, "largest_seqno": 58890, "table_properties": {"data_size": 662026, "index_size": 1123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8960, "raw_average_key_size": 20, "raw_value_size": 654996, "raw_average_value_size": 1523, "num_data_blocks": 49, "num_entries": 430, "num_filter_entries": 430, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007407, "oldest_key_time": 1765007407, "file_creation_time": 1765007462, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 6855 microseconds, and 3313 cpu microseconds.
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.781791) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 665306 bytes OK
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.781822) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.783206) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.783219) EVENT_LOG_v1 {"time_micros": 1765007462783215, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.783235) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 1009793, prev total WAL file size 1009793, number of live WAL files 2.
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.783722) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303236' seq:72057594037927935, type:22 .. '6D6772737461740032323737' seq:0, type:0; will stop at (end)
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(649KB)], [128(12MB)]
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462783779, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 13701508, "oldest_snapshot_seqno": -1}
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 9037 keys, 10192169 bytes, temperature: kUnknown
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462868832, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10192169, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10135643, "index_size": 32797, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22661, "raw_key_size": 239713, "raw_average_key_size": 26, "raw_value_size": 9978533, "raw_average_value_size": 1104, "num_data_blocks": 1242, "num_entries": 9037, "num_filter_entries": 9037, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765007462, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.869065) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10192169 bytes
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.870162) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.0 rd, 119.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(35.9) write-amplify(15.3) OK, records in: 9531, records dropped: 494 output_compression: NoCompression
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.870177) EVENT_LOG_v1 {"time_micros": 1765007462870170, "job": 78, "event": "compaction_finished", "compaction_time_micros": 85116, "compaction_time_cpu_micros": 30922, "output_level": 6, "num_output_files": 1, "total_output_size": 10192169, "num_input_records": 9531, "num_output_records": 9037, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462870375, "job": 78, "event": "table_file_deletion", "file_number": 130}
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007462872243, "job": 78, "event": "table_file_deletion", "file_number": 128}
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.783639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.872279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.872282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.872283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.872285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:51:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:51:02.872286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:51:02 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Dec 06 07:51:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 551 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Dec 06 07:51:03 compute-0 podman[360329]: 2025-12-06 07:51:03.451477195 +0000 UTC m=+0.090934604 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 06 07:51:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:03.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:03 compute-0 sudo[360356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:03 compute-0 sudo[360356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:03 compute-0 sudo[360356]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:03 compute-0 sudo[360381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:03 compute-0 sudo[360381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:03 compute-0 sudo[360381]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Dec 06 07:51:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:03.860 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:03.862 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:03.862 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:03 compute-0 ovn_controller[147168]: 2025-12-06T07:51:03Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:7f:63 10.100.0.7
Dec 06 07:51:03 compute-0 ovn_controller[147168]: 2025-12-06T07:51:03Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:7f:63 10.100.0.7
Dec 06 07:51:03 compute-0 ceph-mon[74339]: pgmap v2911: 305 pgs: 305 active+clean; 551 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Dec 06 07:51:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Dec 06 07:51:03 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Dec 06 07:51:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:04.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:05 compute-0 ceph-mon[74339]: osdmap e375: 3 total, 3 up, 3 in
Dec 06 07:51:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 575 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Dec 06 07:51:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:05.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:06 compute-0 ceph-mon[74339]: pgmap v2913: 305 pgs: 305 active+clean; 575 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Dec 06 07:51:06 compute-0 nova_compute[251992]: 2025-12-06 07:51:06.405 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:06 compute-0 nova_compute[251992]: 2025-12-06 07:51:06.458 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:06.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 580 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 180 op/s
Dec 06 07:51:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:51:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:07.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:51:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:08.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 582 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 193 op/s
Dec 06 07:51:09 compute-0 ceph-mon[74339]: pgmap v2914: 305 pgs: 305 active+clean; 580 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.4 MiB/s wr, 180 op/s
Dec 06 07:51:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:09.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:10.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:10 compute-0 ceph-mon[74339]: pgmap v2915: 305 pgs: 305 active+clean; 582 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 193 op/s
Dec 06 07:51:10 compute-0 ovn_controller[147168]: 2025-12-06T07:51:10Z|00626|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 07:51:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 586 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 169 op/s
Dec 06 07:51:11 compute-0 podman[360411]: 2025-12-06 07:51:11.404350591 +0000 UTC m=+0.054525642 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:51:11 compute-0 nova_compute[251992]: 2025-12-06 07:51:11.408 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:11 compute-0 podman[360412]: 2025-12-06 07:51:11.433442216 +0000 UTC m=+0.071745046 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:51:11 compute-0 nova_compute[251992]: 2025-12-06 07:51:11.460 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:51:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:11.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:51:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4194304661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:51:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4194304661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:51:11 compute-0 ceph-mon[74339]: pgmap v2916: 305 pgs: 305 active+clean; 586 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 169 op/s
Dec 06 07:51:11 compute-0 sudo[360450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:11 compute-0 sudo[360450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:11 compute-0 sudo[360450]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:11 compute-0 sudo[360475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:51:11 compute-0 sudo[360475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:11 compute-0 sudo[360475]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:11 compute-0 sudo[360500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:11 compute-0 sudo[360500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:11 compute-0 sudo[360500]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:12 compute-0 sudo[360525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:51:12 compute-0 sudo[360525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:12 compute-0 sudo[360525]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Dec 06 07:51:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Dec 06 07:51:12 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Dec 06 07:51:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:12.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:51:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:51:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:51:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:51:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:51:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:51:12 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 07d2c8b0-1ee3-41fc-a693-28bc86880fe1 does not exist
Dec 06 07:51:12 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 81abafc5-8bd9-4077-8e30-0be7d1c51485 does not exist
Dec 06 07:51:12 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ded378b7-a464-4fb5-a19b-4c1e41d3de10 does not exist
Dec 06 07:51:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:51:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:51:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:51:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:51:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:51:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:51:12 compute-0 sudo[360583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:12 compute-0 sudo[360583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:12 compute-0 sudo[360583]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:12 compute-0 sudo[360608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:51:12 compute-0 sudo[360608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:12 compute-0 sudo[360608]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:12 compute-0 sudo[360633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:12 compute-0 sudo[360633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:12 compute-0 sudo[360633]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:13 compute-0 sudo[360658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:51:13 compute-0 sudo[360658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:51:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:51:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:51:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:51:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:51:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:51:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 586 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 904 KiB/s rd, 2.5 MiB/s wr, 145 op/s
Dec 06 07:51:13 compute-0 podman[360727]: 2025-12-06 07:51:13.382898844 +0000 UTC m=+0.028015237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:51:13 compute-0 podman[360727]: 2025-12-06 07:51:13.569024106 +0000 UTC m=+0.214140459 container create e7af0bdf04934b4f45630bed451edb24dbee7392d9c37fd79b9d528cd64c6f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_fermi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:51:13 compute-0 systemd[1]: Started libpod-conmon-e7af0bdf04934b4f45630bed451edb24dbee7392d9c37fd79b9d528cd64c6f3b.scope.
Dec 06 07:51:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:51:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:51:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:13.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:51:13 compute-0 podman[360727]: 2025-12-06 07:51:13.670701919 +0000 UTC m=+0.315818282 container init e7af0bdf04934b4f45630bed451edb24dbee7392d9c37fd79b9d528cd64c6f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:51:13 compute-0 podman[360727]: 2025-12-06 07:51:13.678071499 +0000 UTC m=+0.323187842 container start e7af0bdf04934b4f45630bed451edb24dbee7392d9c37fd79b9d528cd64c6f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:51:13 compute-0 podman[360727]: 2025-12-06 07:51:13.694310736 +0000 UTC m=+0.339427139 container attach e7af0bdf04934b4f45630bed451edb24dbee7392d9c37fd79b9d528cd64c6f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:51:13 compute-0 vigorous_fermi[360743]: 167 167
Dec 06 07:51:13 compute-0 systemd[1]: libpod-e7af0bdf04934b4f45630bed451edb24dbee7392d9c37fd79b9d528cd64c6f3b.scope: Deactivated successfully.
Dec 06 07:51:13 compute-0 podman[360727]: 2025-12-06 07:51:13.698248803 +0000 UTC m=+0.343365156 container died e7af0bdf04934b4f45630bed451edb24dbee7392d9c37fd79b9d528cd64c6f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 07:51:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0994dcf5545f8c275a221d5d4a4a9ce11e4e04410c03b4fc82a2ce6a23b19ece-merged.mount: Deactivated successfully.
Dec 06 07:51:13 compute-0 podman[360727]: 2025-12-06 07:51:13.739677201 +0000 UTC m=+0.384793544 container remove e7af0bdf04934b4f45630bed451edb24dbee7392d9c37fd79b9d528cd64c6f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Dec 06 07:51:13 compute-0 systemd[1]: libpod-conmon-e7af0bdf04934b4f45630bed451edb24dbee7392d9c37fd79b9d528cd64c6f3b.scope: Deactivated successfully.
Dec 06 07:51:13 compute-0 ceph-mon[74339]: osdmap e376: 3 total, 3 up, 3 in
Dec 06 07:51:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:51:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:51:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:51:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:51:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:51:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:51:13 compute-0 ceph-mon[74339]: pgmap v2918: 305 pgs: 305 active+clean; 586 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 904 KiB/s rd, 2.5 MiB/s wr, 145 op/s
Dec 06 07:51:13 compute-0 podman[360765]: 2025-12-06 07:51:13.909731388 +0000 UTC m=+0.044506792 container create abdc58c9c585d761726c409500b7885f5dccc59b995584f617193303a5e31577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:51:13 compute-0 systemd[1]: Started libpod-conmon-abdc58c9c585d761726c409500b7885f5dccc59b995584f617193303a5e31577.scope.
Dec 06 07:51:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97133d42c7245c3a218abf1ab0093337bb017b4e252f8baee493890050b73a3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97133d42c7245c3a218abf1ab0093337bb017b4e252f8baee493890050b73a3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97133d42c7245c3a218abf1ab0093337bb017b4e252f8baee493890050b73a3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97133d42c7245c3a218abf1ab0093337bb017b4e252f8baee493890050b73a3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97133d42c7245c3a218abf1ab0093337bb017b4e252f8baee493890050b73a3d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:13 compute-0 podman[360765]: 2025-12-06 07:51:13.893385977 +0000 UTC m=+0.028161411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:51:13 compute-0 podman[360765]: 2025-12-06 07:51:13.992191464 +0000 UTC m=+0.126966868 container init abdc58c9c585d761726c409500b7885f5dccc59b995584f617193303a5e31577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Dec 06 07:51:13 compute-0 podman[360765]: 2025-12-06 07:51:13.99872533 +0000 UTC m=+0.133500754 container start abdc58c9c585d761726c409500b7885f5dccc59b995584f617193303a5e31577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:51:14 compute-0 podman[360765]: 2025-12-06 07:51:14.00247478 +0000 UTC m=+0.137250194 container attach abdc58c9c585d761726c409500b7885f5dccc59b995584f617193303a5e31577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_heyrovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 07:51:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:51:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:14.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:51:14 compute-0 romantic_heyrovsky[360782]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:51:14 compute-0 romantic_heyrovsky[360782]: --> relative data size: 1.0
Dec 06 07:51:14 compute-0 romantic_heyrovsky[360782]: --> All data devices are unavailable
Dec 06 07:51:14 compute-0 systemd[1]: libpod-abdc58c9c585d761726c409500b7885f5dccc59b995584f617193303a5e31577.scope: Deactivated successfully.
Dec 06 07:51:14 compute-0 podman[360797]: 2025-12-06 07:51:14.914074717 +0000 UTC m=+0.021209744 container died abdc58c9c585d761726c409500b7885f5dccc59b995584f617193303a5e31577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_heyrovsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-97133d42c7245c3a218abf1ab0093337bb017b4e252f8baee493890050b73a3d-merged.mount: Deactivated successfully.
Dec 06 07:51:14 compute-0 podman[360797]: 2025-12-06 07:51:14.967821897 +0000 UTC m=+0.074956904 container remove abdc58c9c585d761726c409500b7885f5dccc59b995584f617193303a5e31577 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_heyrovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:51:14 compute-0 systemd[1]: libpod-conmon-abdc58c9c585d761726c409500b7885f5dccc59b995584f617193303a5e31577.scope: Deactivated successfully.
Dec 06 07:51:15 compute-0 sudo[360658]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:15 compute-0 sudo[360812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:15 compute-0 sudo[360812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:15 compute-0 sudo[360812]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:15 compute-0 sudo[360838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:51:15 compute-0 sudo[360838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:15 compute-0 sudo[360838]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 586 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 473 KiB/s rd, 692 KiB/s wr, 128 op/s
Dec 06 07:51:15 compute-0 sudo[360863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:15 compute-0 sudo[360863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:15 compute-0 sudo[360863]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:15 compute-0 sudo[360888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:51:15 compute-0 sudo[360888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:15 compute-0 ceph-mon[74339]: pgmap v2919: 305 pgs: 305 active+clean; 586 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 473 KiB/s rd, 692 KiB/s wr, 128 op/s
Dec 06 07:51:15 compute-0 podman[360955]: 2025-12-06 07:51:15.58428673 +0000 UTC m=+0.040576567 container create 88f43e765fcc4d35409cfcd8f5e924cf9e80cb31649ca11597be6d1101aa56c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:51:15 compute-0 systemd[1]: Started libpod-conmon-88f43e765fcc4d35409cfcd8f5e924cf9e80cb31649ca11597be6d1101aa56c6.scope.
Dec 06 07:51:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:51:15 compute-0 podman[360955]: 2025-12-06 07:51:15.659384656 +0000 UTC m=+0.115674503 container init 88f43e765fcc4d35409cfcd8f5e924cf9e80cb31649ca11597be6d1101aa56c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_stonebraker, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Dec 06 07:51:15 compute-0 podman[360955]: 2025-12-06 07:51:15.568423361 +0000 UTC m=+0.024713218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:51:15 compute-0 podman[360955]: 2025-12-06 07:51:15.666945359 +0000 UTC m=+0.123235196 container start 88f43e765fcc4d35409cfcd8f5e924cf9e80cb31649ca11597be6d1101aa56c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:51:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:15.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:15 compute-0 podman[360955]: 2025-12-06 07:51:15.670537377 +0000 UTC m=+0.126827234 container attach 88f43e765fcc4d35409cfcd8f5e924cf9e80cb31649ca11597be6d1101aa56c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_stonebraker, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:51:15 compute-0 musing_stonebraker[360971]: 167 167
Dec 06 07:51:15 compute-0 systemd[1]: libpod-88f43e765fcc4d35409cfcd8f5e924cf9e80cb31649ca11597be6d1101aa56c6.scope: Deactivated successfully.
Dec 06 07:51:15 compute-0 podman[360955]: 2025-12-06 07:51:15.674575095 +0000 UTC m=+0.130864952 container died 88f43e765fcc4d35409cfcd8f5e924cf9e80cb31649ca11597be6d1101aa56c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:51:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1b1fff152106b08675ccd7902561db3bfa2fdecba836afb29527af2ff75f952-merged.mount: Deactivated successfully.
Dec 06 07:51:15 compute-0 podman[360955]: 2025-12-06 07:51:15.716142387 +0000 UTC m=+0.172432224 container remove 88f43e765fcc4d35409cfcd8f5e924cf9e80cb31649ca11597be6d1101aa56c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 07:51:15 compute-0 systemd[1]: libpod-conmon-88f43e765fcc4d35409cfcd8f5e924cf9e80cb31649ca11597be6d1101aa56c6.scope: Deactivated successfully.
Dec 06 07:51:15 compute-0 podman[360997]: 2025-12-06 07:51:15.89268563 +0000 UTC m=+0.043366330 container create a3da01049e31684a46d6fca7ca538c940bb4ba6f9ec02f7d3635ff1aa3aad527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:51:15 compute-0 systemd[1]: Started libpod-conmon-a3da01049e31684a46d6fca7ca538c940bb4ba6f9ec02f7d3635ff1aa3aad527.scope.
Dec 06 07:51:15 compute-0 podman[360997]: 2025-12-06 07:51:15.872056134 +0000 UTC m=+0.022736854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:51:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6b52ef0735c132825abcce18575f72f220bbfa13510585b6773ba02cb4e44e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6b52ef0735c132825abcce18575f72f220bbfa13510585b6773ba02cb4e44e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6b52ef0735c132825abcce18575f72f220bbfa13510585b6773ba02cb4e44e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be6b52ef0735c132825abcce18575f72f220bbfa13510585b6773ba02cb4e44e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:15 compute-0 podman[360997]: 2025-12-06 07:51:15.991727872 +0000 UTC m=+0.142408592 container init a3da01049e31684a46d6fca7ca538c940bb4ba6f9ec02f7d3635ff1aa3aad527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 07:51:15 compute-0 podman[360997]: 2025-12-06 07:51:15.99790385 +0000 UTC m=+0.148584550 container start a3da01049e31684a46d6fca7ca538c940bb4ba6f9ec02f7d3635ff1aa3aad527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_knuth, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:51:16 compute-0 podman[360997]: 2025-12-06 07:51:16.000721956 +0000 UTC m=+0.151402656 container attach a3da01049e31684a46d6fca7ca538c940bb4ba6f9ec02f7d3635ff1aa3aad527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 07:51:16 compute-0 nova_compute[251992]: 2025-12-06 07:51:16.412 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:16 compute-0 nova_compute[251992]: 2025-12-06 07:51:16.463 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:16.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:16 compute-0 naughty_knuth[361014]: {
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:     "0": [
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:         {
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             "devices": [
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "/dev/loop3"
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             ],
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             "lv_name": "ceph_lv0",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             "lv_size": "7511998464",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             "name": "ceph_lv0",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             "tags": {
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "ceph.cluster_name": "ceph",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "ceph.crush_device_class": "",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "ceph.encrypted": "0",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "ceph.osd_id": "0",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "ceph.type": "block",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:                 "ceph.vdo": "0"
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             },
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             "type": "block",
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:             "vg_name": "ceph_vg0"
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:         }
Dec 06 07:51:16 compute-0 naughty_knuth[361014]:     ]
Dec 06 07:51:16 compute-0 naughty_knuth[361014]: }
Dec 06 07:51:16 compute-0 systemd[1]: libpod-a3da01049e31684a46d6fca7ca538c940bb4ba6f9ec02f7d3635ff1aa3aad527.scope: Deactivated successfully.
Dec 06 07:51:16 compute-0 podman[360997]: 2025-12-06 07:51:16.78740478 +0000 UTC m=+0.938085500 container died a3da01049e31684a46d6fca7ca538c940bb4ba6f9ec02f7d3635ff1aa3aad527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_knuth, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 07:51:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-be6b52ef0735c132825abcce18575f72f220bbfa13510585b6773ba02cb4e44e-merged.mount: Deactivated successfully.
Dec 06 07:51:16 compute-0 podman[360997]: 2025-12-06 07:51:16.843919436 +0000 UTC m=+0.994600136 container remove a3da01049e31684a46d6fca7ca538c940bb4ba6f9ec02f7d3635ff1aa3aad527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:51:16 compute-0 systemd[1]: libpod-conmon-a3da01049e31684a46d6fca7ca538c940bb4ba6f9ec02f7d3635ff1aa3aad527.scope: Deactivated successfully.
Dec 06 07:51:16 compute-0 sudo[360888]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:16 compute-0 sudo[361034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:16 compute-0 sudo[361034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:16 compute-0 sudo[361034]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:17 compute-0 sudo[361059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:51:17 compute-0 sudo[361059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:17 compute-0 sudo[361059]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:17 compute-0 sudo[361085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:17 compute-0 sudo[361085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:17 compute-0 sudo[361085]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:17 compute-0 sudo[361110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:51:17 compute-0 sudo[361110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 586 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 212 KiB/s rd, 133 KiB/s wr, 137 op/s
Dec 06 07:51:17 compute-0 ceph-mon[74339]: pgmap v2920: 305 pgs: 305 active+clean; 586 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 212 KiB/s rd, 133 KiB/s wr, 137 op/s
Dec 06 07:51:17 compute-0 podman[361175]: 2025-12-06 07:51:17.488869977 +0000 UTC m=+0.043515106 container create 91e3b6e058318d454c069a514f5fb57a36a8d688c527fc41b37424172a06a07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:51:17 compute-0 systemd[1]: Started libpod-conmon-91e3b6e058318d454c069a514f5fb57a36a8d688c527fc41b37424172a06a07d.scope.
Dec 06 07:51:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:51:17 compute-0 podman[361175]: 2025-12-06 07:51:17.469156435 +0000 UTC m=+0.023801594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:51:17 compute-0 podman[361175]: 2025-12-06 07:51:17.573090319 +0000 UTC m=+0.127735448 container init 91e3b6e058318d454c069a514f5fb57a36a8d688c527fc41b37424172a06a07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hellman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 07:51:17 compute-0 podman[361175]: 2025-12-06 07:51:17.579521302 +0000 UTC m=+0.134166431 container start 91e3b6e058318d454c069a514f5fb57a36a8d688c527fc41b37424172a06a07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hellman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:51:17 compute-0 peaceful_hellman[361191]: 167 167
Dec 06 07:51:17 compute-0 systemd[1]: libpod-91e3b6e058318d454c069a514f5fb57a36a8d688c527fc41b37424172a06a07d.scope: Deactivated successfully.
Dec 06 07:51:17 compute-0 podman[361175]: 2025-12-06 07:51:17.58461597 +0000 UTC m=+0.139261119 container attach 91e3b6e058318d454c069a514f5fb57a36a8d688c527fc41b37424172a06a07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:51:17 compute-0 podman[361175]: 2025-12-06 07:51:17.58497674 +0000 UTC m=+0.139621859 container died 91e3b6e058318d454c069a514f5fb57a36a8d688c527fc41b37424172a06a07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hellman, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 07:51:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1250b94e70d3c8dcfca1974e6f68d8421aaccb67f628b87a3cfc67eb8b01de3-merged.mount: Deactivated successfully.
Dec 06 07:51:17 compute-0 podman[361175]: 2025-12-06 07:51:17.631601048 +0000 UTC m=+0.186246207 container remove 91e3b6e058318d454c069a514f5fb57a36a8d688c527fc41b37424172a06a07d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:51:17 compute-0 systemd[1]: libpod-conmon-91e3b6e058318d454c069a514f5fb57a36a8d688c527fc41b37424172a06a07d.scope: Deactivated successfully.
Dec 06 07:51:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:17.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:17 compute-0 podman[361215]: 2025-12-06 07:51:17.805071549 +0000 UTC m=+0.034783421 container create ba96bac86676e82a56d705cf5e7dae85748e3e4cb263e66b5b4a5ca474613bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 07:51:17 compute-0 systemd[1]: Started libpod-conmon-ba96bac86676e82a56d705cf5e7dae85748e3e4cb263e66b5b4a5ca474613bb4.scope.
Dec 06 07:51:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6b86ea75bad7913098e4d266ea7d056f31ea7405d6a3b9faa0c4831787006f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6b86ea75bad7913098e4d266ea7d056f31ea7405d6a3b9faa0c4831787006f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6b86ea75bad7913098e4d266ea7d056f31ea7405d6a3b9faa0c4831787006f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6b86ea75bad7913098e4d266ea7d056f31ea7405d6a3b9faa0c4831787006f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:17 compute-0 podman[361215]: 2025-12-06 07:51:17.886943607 +0000 UTC m=+0.116655499 container init ba96bac86676e82a56d705cf5e7dae85748e3e4cb263e66b5b4a5ca474613bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:51:17 compute-0 podman[361215]: 2025-12-06 07:51:17.790240688 +0000 UTC m=+0.019952570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:51:17 compute-0 podman[361215]: 2025-12-06 07:51:17.894577014 +0000 UTC m=+0.124288886 container start ba96bac86676e82a56d705cf5e7dae85748e3e4cb263e66b5b4a5ca474613bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:51:17 compute-0 podman[361215]: 2025-12-06 07:51:17.898142739 +0000 UTC m=+0.127854631 container attach ba96bac86676e82a56d705cf5e7dae85748e3e4cb263e66b5b4a5ca474613bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 07:51:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:51:18
Dec 06 07:51:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:51:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:51:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'backups']
Dec 06 07:51:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:51:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:18.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:18 compute-0 blissful_shaw[361231]: {
Dec 06 07:51:18 compute-0 blissful_shaw[361231]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:51:18 compute-0 blissful_shaw[361231]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:51:18 compute-0 blissful_shaw[361231]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:51:18 compute-0 blissful_shaw[361231]:         "osd_id": 0,
Dec 06 07:51:18 compute-0 blissful_shaw[361231]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:51:18 compute-0 blissful_shaw[361231]:         "type": "bluestore"
Dec 06 07:51:18 compute-0 blissful_shaw[361231]:     }
Dec 06 07:51:18 compute-0 blissful_shaw[361231]: }
Dec 06 07:51:18 compute-0 systemd[1]: libpod-ba96bac86676e82a56d705cf5e7dae85748e3e4cb263e66b5b4a5ca474613bb4.scope: Deactivated successfully.
Dec 06 07:51:18 compute-0 podman[361215]: 2025-12-06 07:51:18.720172408 +0000 UTC m=+0.949884280 container died ba96bac86676e82a56d705cf5e7dae85748e3e4cb263e66b5b4a5ca474613bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:51:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed6b86ea75bad7913098e4d266ea7d056f31ea7405d6a3b9faa0c4831787006f-merged.mount: Deactivated successfully.
Dec 06 07:51:18 compute-0 podman[361215]: 2025-12-06 07:51:18.77137562 +0000 UTC m=+1.001087492 container remove ba96bac86676e82a56d705cf5e7dae85748e3e4cb263e66b5b4a5ca474613bb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_shaw, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 07:51:18 compute-0 systemd[1]: libpod-conmon-ba96bac86676e82a56d705cf5e7dae85748e3e4cb263e66b5b4a5ca474613bb4.scope: Deactivated successfully.
Dec 06 07:51:18 compute-0 sudo[361110]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:51:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 604 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 150 KiB/s rd, 1009 KiB/s wr, 166 op/s
Dec 06 07:51:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:51:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:51:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:19.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:51:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 349a0bda-555d-4f38-a687-90aae6849198 does not exist
Dec 06 07:51:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7f7051a1-aa24-4d12-9a07-959a651bfb96 does not exist
Dec 06 07:51:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 84c0471b-69a6-4883-a966-d1e2fb2c6aef does not exist
Dec 06 07:51:19 compute-0 sudo[361266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:19 compute-0 sudo[361266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:19 compute-0 sudo[361266]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:19 compute-0 sudo[361291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:51:19 compute-0 sudo[361291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:19 compute-0 sudo[361291]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:20 compute-0 ceph-mon[74339]: pgmap v2921: 305 pgs: 305 active+clean; 604 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 150 KiB/s rd, 1009 KiB/s wr, 166 op/s
Dec 06 07:51:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:51:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:51:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:20.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:51:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 632 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 135 KiB/s rd, 2.2 MiB/s wr, 228 op/s
Dec 06 07:51:21 compute-0 nova_compute[251992]: 2025-12-06 07:51:21.464 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:21 compute-0 nova_compute[251992]: 2025-12-06 07:51:21.466 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:21.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:51:22 compute-0 ceph-mon[74339]: pgmap v2922: 305 pgs: 305 active+clean; 632 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 135 KiB/s rd, 2.2 MiB/s wr, 228 op/s
Dec 06 07:51:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:22.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 632 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 127 KiB/s rd, 2.0 MiB/s wr, 215 op/s
Dec 06 07:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:51:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:51:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:23.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:23 compute-0 sudo[361318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:23 compute-0 sudo[361318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:23 compute-0 sudo[361318]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:23 compute-0 sudo[361343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:23 compute-0 sudo[361343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:23 compute-0 sudo[361343]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:24 compute-0 ceph-mon[74339]: pgmap v2923: 305 pgs: 305 active+clean; 632 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 127 KiB/s rd, 2.0 MiB/s wr, 215 op/s
Dec 06 07:51:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/255879142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3214352129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1178322315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3788578080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:24.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:24 compute-0 nova_compute[251992]: 2025-12-06 07:51:24.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:25 compute-0 nova_compute[251992]: 2025-12-06 07:51:25.048 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:25 compute-0 nova_compute[251992]: 2025-12-06 07:51:25.049 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:25 compute-0 nova_compute[251992]: 2025-12-06 07:51:25.050 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:25 compute-0 nova_compute[251992]: 2025-12-06 07:51:25.050 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:51:25 compute-0 nova_compute[251992]: 2025-12-06 07:51:25.051 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:51:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:51:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:51:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:51:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:51:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 632 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 115 KiB/s rd, 1.8 MiB/s wr, 194 op/s
Dec 06 07:51:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:51:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3719497911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:25 compute-0 nova_compute[251992]: 2025-12-06 07:51:25.526 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3910783313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:25.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.012839261411222781 of space, bias 1.0, pg target 3.8517784233668344 quantized to 32 (current 32)
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002163414002884794 of space, bias 1.0, pg target 0.6425339588567839 quantized to 32 (current 32)
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8478230998743718 quantized to 32 (current 32)
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:51:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Dec 06 07:51:26 compute-0 nova_compute[251992]: 2025-12-06 07:51:26.467 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:26 compute-0 nova_compute[251992]: 2025-12-06 07:51:26.511 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:51:26 compute-0 nova_compute[251992]: 2025-12-06 07:51:26.511 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:51:26 compute-0 nova_compute[251992]: 2025-12-06 07:51:26.512 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:51:26 compute-0 ceph-mon[74339]: pgmap v2924: 305 pgs: 305 active+clean; 632 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 115 KiB/s rd, 1.8 MiB/s wr, 194 op/s
Dec 06 07:51:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3719497911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/149922795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:26.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:26 compute-0 nova_compute[251992]: 2025-12-06 07:51:26.660 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:51:26 compute-0 nova_compute[251992]: 2025-12-06 07:51:26.661 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3983MB free_disk=20.71881866455078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:51:26 compute-0 nova_compute[251992]: 2025-12-06 07:51:26.661 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:26 compute-0 nova_compute[251992]: 2025-12-06 07:51:26.662 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 632 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 87 KiB/s rd, 1.8 MiB/s wr, 147 op/s
Dec 06 07:51:27 compute-0 nova_compute[251992]: 2025-12-06 07:51:27.199 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 6e187078-1e6f-4c22-9510-ed8116b14ae5 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:51:27 compute-0 nova_compute[251992]: 2025-12-06 07:51:27.199 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:51:27 compute-0 nova_compute[251992]: 2025-12-06 07:51:27.200 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:51:27 compute-0 nova_compute[251992]: 2025-12-06 07:51:27.341 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/600074495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:27.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:51:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/156477243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:27.777 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:51:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:27.778 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:51:27 compute-0 nova_compute[251992]: 2025-12-06 07:51:27.779 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:27 compute-0 nova_compute[251992]: 2025-12-06 07:51:27.784 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:27 compute-0 nova_compute[251992]: 2025-12-06 07:51:27.790 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:51:27 compute-0 nova_compute[251992]: 2025-12-06 07:51:27.809 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:51:27 compute-0 nova_compute[251992]: 2025-12-06 07:51:27.838 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:51:27 compute-0 nova_compute[251992]: 2025-12-06 07:51:27.838 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:28.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:28 compute-0 ceph-mon[74339]: pgmap v2925: 305 pgs: 305 active+clean; 632 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 87 KiB/s rd, 1.8 MiB/s wr, 147 op/s
Dec 06 07:51:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1438578257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/156477243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 634 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 127 KiB/s rd, 2.0 MiB/s wr, 122 op/s
Dec 06 07:51:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:29.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:30.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:30 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:30.779 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Dec 06 07:51:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 634 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.3 MiB/s wr, 184 op/s
Dec 06 07:51:31 compute-0 ceph-mon[74339]: pgmap v2926: 305 pgs: 305 active+clean; 634 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 127 KiB/s rd, 2.0 MiB/s wr, 122 op/s
Dec 06 07:51:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Dec 06 07:51:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Dec 06 07:51:31 compute-0 nova_compute[251992]: 2025-12-06 07:51:31.557 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:31.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Dec 06 07:51:32 compute-0 ceph-mon[74339]: pgmap v2927: 305 pgs: 305 active+clean; 634 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.3 MiB/s wr, 184 op/s
Dec 06 07:51:32 compute-0 ceph-mon[74339]: osdmap e377: 3 total, 3 up, 3 in
Dec 06 07:51:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:32.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Dec 06 07:51:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Dec 06 07:51:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 634 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 333 KiB/s wr, 170 op/s
Dec 06 07:51:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Dec 06 07:51:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Dec 06 07:51:33 compute-0 ceph-mon[74339]: osdmap e378: 3 total, 3 up, 3 in
Dec 06 07:51:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/415410419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:33 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Dec 06 07:51:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:33.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:33 compute-0 nova_compute[251992]: 2025-12-06 07:51:33.831 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:33 compute-0 nova_compute[251992]: 2025-12-06 07:51:33.832 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:33 compute-0 nova_compute[251992]: 2025-12-06 07:51:33.850 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:33 compute-0 nova_compute[251992]: 2025-12-06 07:51:33.850 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:51:33 compute-0 nova_compute[251992]: 2025-12-06 07:51:33.850 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:51:34 compute-0 nova_compute[251992]: 2025-12-06 07:51:34.048 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:51:34 compute-0 nova_compute[251992]: 2025-12-06 07:51:34.048 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:51:34 compute-0 nova_compute[251992]: 2025-12-06 07:51:34.049 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:51:34 compute-0 nova_compute[251992]: 2025-12-06 07:51:34.049 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:34 compute-0 podman[361419]: 2025-12-06 07:51:34.426096627 +0000 UTC m=+0.083932615 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:51:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:34.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Dec 06 07:51:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Dec 06 07:51:34 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Dec 06 07:51:34 compute-0 ceph-mon[74339]: pgmap v2930: 305 pgs: 305 active+clean; 634 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 333 KiB/s wr, 170 op/s
Dec 06 07:51:34 compute-0 ceph-mon[74339]: osdmap e379: 3 total, 3 up, 3 in
Dec 06 07:51:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1423580668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 665 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 MiB/s rd, 6.5 MiB/s wr, 503 op/s
Dec 06 07:51:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:51:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:35.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:51:36 compute-0 ceph-mon[74339]: osdmap e380: 3 total, 3 up, 3 in
Dec 06 07:51:36 compute-0 nova_compute[251992]: 2025-12-06 07:51:36.559 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:51:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:36.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:37 compute-0 nova_compute[251992]: 2025-12-06 07:51:37.034 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Updating instance_info_cache with network_info: [{"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:51:37 compute-0 ceph-mon[74339]: pgmap v2933: 305 pgs: 305 active+clean; 665 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 MiB/s rd, 6.5 MiB/s wr, 503 op/s
Dec 06 07:51:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 685 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 MiB/s rd, 8.0 MiB/s wr, 454 op/s
Dec 06 07:51:37 compute-0 nova_compute[251992]: 2025-12-06 07:51:37.225 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:51:37 compute-0 nova_compute[251992]: 2025-12-06 07:51:37.226 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:51:37 compute-0 nova_compute[251992]: 2025-12-06 07:51:37.226 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:37 compute-0 nova_compute[251992]: 2025-12-06 07:51:37.226 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:37 compute-0 nova_compute[251992]: 2025-12-06 07:51:37.226 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:37 compute-0 nova_compute[251992]: 2025-12-06 07:51:37.227 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:37 compute-0 nova_compute[251992]: 2025-12-06 07:51:37.227 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:37.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Dec 06 07:51:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Dec 06 07:51:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Dec 06 07:51:38 compute-0 ceph-mon[74339]: pgmap v2934: 305 pgs: 305 active+clean; 685 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 MiB/s rd, 8.0 MiB/s wr, 454 op/s
Dec 06 07:51:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:38.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 692 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 MiB/s rd, 10 MiB/s wr, 536 op/s
Dec 06 07:51:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Dec 06 07:51:39 compute-0 ceph-mon[74339]: osdmap e381: 3 total, 3 up, 3 in
Dec 06 07:51:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Dec 06 07:51:39 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Dec 06 07:51:39 compute-0 nova_compute[251992]: 2025-12-06 07:51:39.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:51:39 compute-0 nova_compute[251992]: 2025-12-06 07:51:39.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:51:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:39.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:40 compute-0 ceph-mon[74339]: pgmap v2936: 305 pgs: 305 active+clean; 692 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 MiB/s rd, 10 MiB/s wr, 536 op/s
Dec 06 07:51:40 compute-0 ceph-mon[74339]: osdmap e382: 3 total, 3 up, 3 in
Dec 06 07:51:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/520152717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:40.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 MiB/s rd, 12 MiB/s wr, 504 op/s
Dec 06 07:51:41 compute-0 nova_compute[251992]: 2025-12-06 07:51:41.560 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:41.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:42 compute-0 podman[361450]: 2025-12-06 07:51:42.393960389 +0000 UTC m=+0.048928101 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:51:42 compute-0 podman[361451]: 2025-12-06 07:51:42.408215563 +0000 UTC m=+0.059560488 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Dec 06 07:51:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Dec 06 07:51:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:51:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:42.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:51:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Dec 06 07:51:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Dec 06 07:51:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:51:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:51:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:51:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:51:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:51:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:51:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 MiB/s rd, 7.8 MiB/s wr, 240 op/s
Dec 06 07:51:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:51:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:43.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:51:43 compute-0 ceph-mon[74339]: pgmap v2938: 305 pgs: 305 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 MiB/s rd, 12 MiB/s wr, 504 op/s
Dec 06 07:51:43 compute-0 ceph-mon[74339]: osdmap e383: 3 total, 3 up, 3 in
Dec 06 07:51:43 compute-0 sudo[361491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:43 compute-0 sudo[361491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:43 compute-0 sudo[361491]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:44 compute-0 sudo[361516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:51:44 compute-0 sudo[361516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:51:44 compute-0 sudo[361516]: pam_unix(sudo:session): session closed for user root
Dec 06 07:51:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:44.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:44 compute-0 ceph-mon[74339]: pgmap v2940: 305 pgs: 305 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 MiB/s rd, 7.8 MiB/s wr, 240 op/s
Dec 06 07:51:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3778624018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:45 compute-0 nova_compute[251992]: 2025-12-06 07:51:45.184 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:45 compute-0 nova_compute[251992]: 2025-12-06 07:51:45.185 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:45 compute-0 nova_compute[251992]: 2025-12-06 07:51:45.185 251996 INFO nova.compute.manager [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Unshelving
Dec 06 07:51:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 688 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 MiB/s rd, 6.5 MiB/s wr, 274 op/s
Dec 06 07:51:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:45.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:45 compute-0 nova_compute[251992]: 2025-12-06 07:51:45.831 251996 INFO nova.virt.block_device [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Booting with volume 47d2f91f-77e4-4f73-968f-583938f7d1cb at /dev/vdc
Dec 06 07:51:45 compute-0 nova_compute[251992]: 2025-12-06 07:51:45.995 251996 DEBUG os_brick.utils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:51:45 compute-0 nova_compute[251992]: 2025-12-06 07:51:45.997 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.009 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.010 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4ae7d2-fed5-4065-960c-65fd7d9e8872]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.011 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.020 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.021 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[8a4d844e-0c89-4774-b35e-e4475158d970]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.022 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.031 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.031 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[0848afc0-d738-494c-823c-4e88067ca220]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.033 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e7654a-1938-41eb-ae01-89e9f98973d8]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.034 251996 DEBUG oslo_concurrency.processutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.067 251996 DEBUG oslo_concurrency.processutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.070 251996 DEBUG os_brick.initiator.connectors.lightos [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.070 251996 DEBUG os_brick.initiator.connectors.lightos [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.071 251996 DEBUG os_brick.initiator.connectors.lightos [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.071 251996 DEBUG os_brick.utils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] <== get_connector_properties: return (74ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.072 251996 DEBUG nova.virt.block_device [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updating existing volume attachment record: 9191512e-a554-40c3-868b-8dce94d4d8b3 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.402 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.403 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.425 251996 DEBUG nova.compute.manager [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.519 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.520 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.528 251996 DEBUG nova.virt.hardware [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.528 251996 INFO nova.compute.claims [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.561 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:51:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:46.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:51:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2585121310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:46 compute-0 nova_compute[251992]: 2025-12-06 07:51:46.708 251996 DEBUG oslo_concurrency.processutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:47 compute-0 ceph-mon[74339]: pgmap v2941: 305 pgs: 305 active+clean; 688 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 MiB/s rd, 6.5 MiB/s wr, 274 op/s
Dec 06 07:51:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2585121310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.038 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:51:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3250387133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 696 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.8 MiB/s wr, 286 op/s
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.212 251996 DEBUG oslo_concurrency.processutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.218 251996 DEBUG nova.compute.provider_tree [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.264 251996 DEBUG nova.scheduler.client.report [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.285 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.286 251996 DEBUG nova.compute.manager [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.289 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.294 251996 DEBUG nova.objects.instance [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'pci_requests' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.320 251996 DEBUG nova.objects.instance [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'numa_topology' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.345 251996 DEBUG nova.virt.hardware [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.346 251996 INFO nova.compute.claims [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.351 251996 DEBUG nova.compute.manager [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.351 251996 DEBUG nova.network.neutron [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.389 251996 INFO nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.416 251996 DEBUG nova.compute.manager [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.539 251996 DEBUG nova.compute.manager [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.541 251996 DEBUG nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.541 251996 INFO nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Creating image(s)
Dec 06 07:51:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.574 251996 DEBUG nova.storage.rbd_utils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image 822fc37e-13a4-4b1b-983f-6cc928c1dfa3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.608 251996 DEBUG nova.storage.rbd_utils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image 822fc37e-13a4-4b1b-983f-6cc928c1dfa3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.641 251996 DEBUG nova.storage.rbd_utils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image 822fc37e-13a4-4b1b-983f-6cc928c1dfa3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.645 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "2fa28bf165f03865dbe1b1983e86eb35688bfe26" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.646 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "2fa28bf165f03865dbe1b1983e86eb35688bfe26" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.649 251996 DEBUG oslo_concurrency.processutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.698 251996 DEBUG nova.policy [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4962bc7b172346e19d127b46ea2d7a11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4cf19b89a6d46bca307e65731a9dd21', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:51:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Dec 06 07:51:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:51:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:47.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:51:47 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Dec 06 07:51:47 compute-0 nova_compute[251992]: 2025-12-06 07:51:47.947 251996 DEBUG nova.virt.libvirt.imagebackend [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/83fea89a-3a0d-4881-b429-13684080bb6c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/83fea89a-3a0d-4881-b429-13684080bb6c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.007 251996 DEBUG nova.virt.libvirt.imagebackend [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/83fea89a-3a0d-4881-b429-13684080bb6c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.008 251996 DEBUG nova.storage.rbd_utils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] cloning images/83fea89a-3a0d-4881-b429-13684080bb6c@snap to None/822fc37e-13a4-4b1b-983f-6cc928c1dfa3_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:51:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3250387133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:48 compute-0 ceph-mon[74339]: osdmap e384: 3 total, 3 up, 3 in
Dec 06 07:51:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:51:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/831807183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.103 251996 DEBUG oslo_concurrency.processutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.109 251996 DEBUG nova.compute.provider_tree [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.127 251996 DEBUG nova.scheduler.client.report [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.150 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.278 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "2fa28bf165f03865dbe1b1983e86eb35688bfe26" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.422 251996 DEBUG nova.objects.instance [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lazy-loading 'migration_context' on Instance uuid 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.498 251996 DEBUG nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.498 251996 DEBUG nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Ensure instance console log exists: /var/lib/nova/instances/822fc37e-13a4-4b1b-983f-6cc928c1dfa3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.499 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.499 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.499 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:48 compute-0 nova_compute[251992]: 2025-12-06 07:51:48.560 251996 INFO nova.network.neutron [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updating port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 06 07:51:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:48.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:49 compute-0 ceph-mon[74339]: pgmap v2942: 305 pgs: 305 active+clean; 696 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.8 MiB/s wr, 286 op/s
Dec 06 07:51:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/831807183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:51:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 713 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 194 op/s
Dec 06 07:51:49 compute-0 nova_compute[251992]: 2025-12-06 07:51:49.240 251996 DEBUG nova.network.neutron [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Successfully created port: 44c14266-b77c-4585-b6af-08f5afc76ad9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:51:49 compute-0 nova_compute[251992]: 2025-12-06 07:51:49.444 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:51:49 compute-0 nova_compute[251992]: 2025-12-06 07:51:49.444 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquired lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:51:49 compute-0 nova_compute[251992]: 2025-12-06 07:51:49.444 251996 DEBUG nova.network.neutron [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:51:49 compute-0 nova_compute[251992]: 2025-12-06 07:51:49.607 251996 DEBUG nova.compute.manager [req-6828ec99-6668-4f34-9555-e9a090516341 req-e5dd2ce2-142a-4d55-b8c3-9c60fec9d3a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-changed-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:49 compute-0 nova_compute[251992]: 2025-12-06 07:51:49.608 251996 DEBUG nova.compute.manager [req-6828ec99-6668-4f34-9555-e9a090516341 req-e5dd2ce2-142a-4d55-b8c3-9c60fec9d3a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Refreshing instance network info cache due to event network-changed-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:51:49 compute-0 nova_compute[251992]: 2025-12-06 07:51:49.608 251996 DEBUG oslo_concurrency.lockutils [req-6828ec99-6668-4f34-9555-e9a090516341 req-e5dd2ce2-142a-4d55-b8c3-9c60fec9d3a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:51:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:49.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1411954260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2754697469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:50.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:50 compute-0 nova_compute[251992]: 2025-12-06 07:51:50.680 251996 DEBUG nova.network.neutron [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Successfully updated port: 44c14266-b77c-4585-b6af-08f5afc76ad9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:51:50 compute-0 nova_compute[251992]: 2025-12-06 07:51:50.743 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "refresh_cache-822fc37e-13a4-4b1b-983f-6cc928c1dfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:51:50 compute-0 nova_compute[251992]: 2025-12-06 07:51:50.744 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquired lock "refresh_cache-822fc37e-13a4-4b1b-983f-6cc928c1dfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:51:50 compute-0 nova_compute[251992]: 2025-12-06 07:51:50.744 251996 DEBUG nova.network.neutron [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:51:50 compute-0 nova_compute[251992]: 2025-12-06 07:51:50.847 251996 DEBUG nova.compute.manager [req-817df227-302e-4564-9ca4-e738321af5c6 req-4446a894-717f-4323-9cd6-73aeb407c44a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Received event network-changed-44c14266-b77c-4585-b6af-08f5afc76ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:50 compute-0 nova_compute[251992]: 2025-12-06 07:51:50.848 251996 DEBUG nova.compute.manager [req-817df227-302e-4564-9ca4-e738321af5c6 req-4446a894-717f-4323-9cd6-73aeb407c44a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Refreshing instance network info cache due to event network-changed-44c14266-b77c-4585-b6af-08f5afc76ad9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:51:50 compute-0 nova_compute[251992]: 2025-12-06 07:51:50.848 251996 DEBUG oslo_concurrency.lockutils [req-817df227-302e-4564-9ca4-e738321af5c6 req-4446a894-717f-4323-9cd6-73aeb407c44a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-822fc37e-13a4-4b1b-983f-6cc928c1dfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:51:50 compute-0 nova_compute[251992]: 2025-12-06 07:51:50.933 251996 DEBUG nova.network.neutron [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.059 251996 DEBUG nova.network.neutron [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updating instance_info_cache with network_info: [{"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.079 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Releasing lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.081 251996 DEBUG nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.081 251996 INFO nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Creating image(s)
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.107 251996 DEBUG nova.storage.rbd_utils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.111 251996 DEBUG nova.objects.instance [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.112 251996 DEBUG oslo_concurrency.lockutils [req-6828ec99-6668-4f34-9555-e9a090516341 req-e5dd2ce2-142a-4d55-b8c3-9c60fec9d3a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.113 251996 DEBUG nova.network.neutron [req-6828ec99-6668-4f34-9555-e9a090516341 req-e5dd2ce2-142a-4d55-b8c3-9c60fec9d3a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Refreshing network info cache for port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.155 251996 DEBUG nova.storage.rbd_utils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.188 251996 DEBUG nova.storage.rbd_utils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.192 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "1cbdaba7df7eaa13e632f09881bbc967eee1da4d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.193 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "1cbdaba7df7eaa13e632f09881bbc967eee1da4d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 715 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.5 MiB/s wr, 224 op/s
Dec 06 07:51:51 compute-0 ceph-mon[74339]: pgmap v2944: 305 pgs: 305 active+clean; 713 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 194 op/s
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.399 251996 DEBUG nova.virt.libvirt.imagebackend [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Image locations are: [{'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.501 251996 DEBUG nova.virt.libvirt.imagebackend [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Selected location: {'url': 'rbd://40a1bae4-cf76-5610-8dab-c75116dfe0bb/images/2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.502 251996 DEBUG nova.storage.rbd_utils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] cloning images/2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e@snap to None/0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.563 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.608 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "1cbdaba7df7eaa13e632f09881bbc967eee1da4d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:51.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.750 251996 DEBUG nova.objects.instance [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'migration_context' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:51 compute-0 nova_compute[251992]: 2025-12-06 07:51:51.818 251996 DEBUG nova.storage.rbd_utils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] flattening vms/0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.353 251996 DEBUG nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Image rbd:vms/0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.353 251996 DEBUG nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.354 251996 DEBUG nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Ensure instance console log exists: /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.355 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.355 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.355 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.358 251996 DEBUG nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Start _get_guest_xml network_info=[{"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:51:25Z,direct_url=<?>,disk_format='raw',id=2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-676074581-shelved',owner='cfa713d92cc94fa1b94404ed58b0563f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:51:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-47d2f91f-77e4-4f73-968f-583938f7d1cb', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '47d2f91f-77e4-4f73-968f-583938f7d1cb', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attached', 'instance': '0b9681c0-c0e7-4bd8-9040-865c1bff517b', 'attached_at': '', 'detached_at': '', 'volume_id': '47d2f91f-77e4-4f73-968f-583938f7d1cb', 'serial': '47d2f91f-77e4-4f73-968f-583938f7d1cb'}, 'attachment_id': '9191512e-a554-40c3-868b-8dce94d4d8b3', 'guest_format': None, 'delete_on_termination': False, 'disk_bus': 'virtio', 'boot_index': None, 'device_type': 'disk', 'mount_device': '/dev/vdc', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.362 251996 WARNING nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.366 251996 DEBUG nova.virt.libvirt.host [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.366 251996 DEBUG nova.virt.libvirt.host [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.369 251996 DEBUG nova.virt.libvirt.host [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.370 251996 DEBUG nova.virt.libvirt.host [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.371 251996 DEBUG nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.371 251996 DEBUG nova.virt.hardware [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:51:25Z,direct_url=<?>,disk_format='raw',id=2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-676074581-shelved',owner='cfa713d92cc94fa1b94404ed58b0563f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:51:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.372 251996 DEBUG nova.virt.hardware [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.372 251996 DEBUG nova.virt.hardware [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.372 251996 DEBUG nova.virt.hardware [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.373 251996 DEBUG nova.virt.hardware [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.373 251996 DEBUG nova.virt.hardware [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.373 251996 DEBUG nova.virt.hardware [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.373 251996 DEBUG nova.virt.hardware [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.373 251996 DEBUG nova.virt.hardware [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.374 251996 DEBUG nova.virt.hardware [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.374 251996 DEBUG nova.virt.hardware [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.374 251996 DEBUG nova.objects.instance [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.392 251996 DEBUG oslo_concurrency.processutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:52 compute-0 ceph-mon[74339]: pgmap v2945: 305 pgs: 305 active+clean; 715 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.5 MiB/s wr, 224 op/s
Dec 06 07:51:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.005000133s ======
Dec 06 07:51:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:52.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000133s
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.731 251996 DEBUG nova.network.neutron [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Updating instance_info_cache with network_info: [{"id": "44c14266-b77c-4585-b6af-08f5afc76ad9", "address": "fa:16:3e:93:4f:f3", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44c14266-b7", "ovs_interfaceid": "44c14266-b77c-4585-b6af-08f5afc76ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.805 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Releasing lock "refresh_cache-822fc37e-13a4-4b1b-983f-6cc928c1dfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.806 251996 DEBUG nova.compute.manager [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Instance network_info: |[{"id": "44c14266-b77c-4585-b6af-08f5afc76ad9", "address": "fa:16:3e:93:4f:f3", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44c14266-b7", "ovs_interfaceid": "44c14266-b77c-4585-b6af-08f5afc76ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.806 251996 DEBUG oslo_concurrency.lockutils [req-817df227-302e-4564-9ca4-e738321af5c6 req-4446a894-717f-4323-9cd6-73aeb407c44a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-822fc37e-13a4-4b1b-983f-6cc928c1dfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.807 251996 DEBUG nova.network.neutron [req-817df227-302e-4564-9ca4-e738321af5c6 req-4446a894-717f-4323-9cd6-73aeb407c44a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Refreshing network info cache for port 44c14266-b77c-4585-b6af-08f5afc76ad9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.812 251996 DEBUG nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Start _get_guest_xml network_info=[{"id": "44c14266-b77c-4585-b6af-08f5afc76ad9", "address": "fa:16:3e:93:4f:f3", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44c14266-b7", "ovs_interfaceid": "44c14266-b77c-4585-b6af-08f5afc76ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:51:32Z,direct_url=<?>,disk_format='raw',id=83fea89a-3a0d-4881-b429-13684080bb6c,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-646723643',owner='c4cf19b89a6d46bca307e65731a9dd21',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:51:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '83fea89a-3a0d-4881-b429-13684080bb6c'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.818 251996 WARNING nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.823 251996 DEBUG nova.virt.libvirt.host [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.824 251996 DEBUG nova.virt.libvirt.host [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.832 251996 DEBUG nova.virt.libvirt.host [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.832 251996 DEBUG nova.virt.libvirt.host [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.834 251996 DEBUG nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.834 251996 DEBUG nova.virt.hardware [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T07:51:32Z,direct_url=<?>,disk_format='raw',id=83fea89a-3a0d-4881-b429-13684080bb6c,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-646723643',owner='c4cf19b89a6d46bca307e65731a9dd21',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T07:51:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.835 251996 DEBUG nova.virt.hardware [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.835 251996 DEBUG nova.virt.hardware [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.836 251996 DEBUG nova.virt.hardware [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.837 251996 DEBUG nova.virt.hardware [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.837 251996 DEBUG nova.virt.hardware [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.838 251996 DEBUG nova.virt.hardware [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.838 251996 DEBUG nova.virt.hardware [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.838 251996 DEBUG nova.virt.hardware [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.838 251996 DEBUG nova.virt.hardware [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.839 251996 DEBUG nova.virt.hardware [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.844 251996 DEBUG oslo_concurrency.processutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:51:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1435402962' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.877 251996 DEBUG oslo_concurrency.processutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.901 251996 DEBUG nova.storage.rbd_utils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:52 compute-0 nova_compute[251992]: 2025-12-06 07:51:52.907 251996 DEBUG oslo_concurrency.processutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.127 251996 DEBUG nova.network.neutron [req-6828ec99-6668-4f34-9555-e9a090516341 req-e5dd2ce2-142a-4d55-b8c3-9c60fec9d3a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updated VIF entry in instance network info cache for port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.128 251996 DEBUG nova.network.neutron [req-6828ec99-6668-4f34-9555-e9a090516341 req-e5dd2ce2-142a-4d55-b8c3-9c60fec9d3a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updating instance_info_cache with network_info: [{"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.145 251996 DEBUG oslo_concurrency.lockutils [req-6828ec99-6668-4f34-9555-e9a090516341 req-e5dd2ce2-142a-4d55-b8c3-9c60fec9d3a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-0b9681c0-c0e7-4bd8-9040-865c1bff517b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:51:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 715 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 191 op/s
Dec 06 07:51:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:51:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1424748871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.299 251996 DEBUG oslo_concurrency.processutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.326 251996 DEBUG nova.storage.rbd_utils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image 822fc37e-13a4-4b1b-983f-6cc928c1dfa3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.332 251996 DEBUG oslo_concurrency.processutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:51:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763807753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.360 251996 DEBUG oslo_concurrency.processutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.390 251996 DEBUG nova.virt.libvirt.vif [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-676074581',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-676074581',id=168,image_ref='2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-12606996',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:50:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='cfa713d92cc94fa1b94404ed58b0563f',ramdisk_id='',reservation_id='r-qd0qnxrq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1510980811',owner_user_name='tempest-AttachVolumeShelveTestJSON-1510980811-project-member',shelved_at='2025-12-06T07:51:35.272180',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:51:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9de6e67724c898a8e23b05fbf14da',uuid=0b9681c0-c0e7-4bd8-9040-865c1bff517b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.391 251996 DEBUG nova.network.os_vif_util [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converting VIF {"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.392 251996 DEBUG nova.network.os_vif_util [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.393 251996 DEBUG nova.objects.instance [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.409 251996 DEBUG nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <uuid>0b9681c0-c0e7-4bd8-9040-865c1bff517b</uuid>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <name>instance-000000a8</name>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:name>tempest-AttachVolumeShelveTestJSON-server-676074581</nova:name>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:51:52</nova:creationTime>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:user uuid="90c9de6e67724c898a8e23b05fbf14da">tempest-AttachVolumeShelveTestJSON-1510980811-project-member</nova:user>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:project uuid="cfa713d92cc94fa1b94404ed58b0563f">tempest-AttachVolumeShelveTestJSON-1510980811</nova:project>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:port uuid="1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc">
Dec 06 07:51:53 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <system>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <entry name="serial">0b9681c0-c0e7-4bd8-9040-865c1bff517b</entry>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <entry name="uuid">0b9681c0-c0e7-4bd8-9040-865c1bff517b</entry>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </system>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <os>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </os>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <features>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </features>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk">
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </source>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk.config">
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </source>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <source protocol="rbd" name="volumes/volume-47d2f91f-77e4-4f73-968f-583938f7d1cb">
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </source>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <target dev="vdc" bus="virtio"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <serial>47d2f91f-77e4-4f73-968f-583938f7d1cb</serial>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:b7:01:5b"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <target dev="tap1d320b87-e6"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/console.log" append="off"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <video>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </video>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <input type="keyboard" bus="usb"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:51:53 compute-0 nova_compute[251992]: </domain>
Dec 06 07:51:53 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.410 251996 DEBUG nova.compute.manager [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Preparing to wait for external event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.411 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.411 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.411 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.412 251996 DEBUG nova.virt.libvirt.vif [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-676074581',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-676074581',id=168,image_ref='2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-12606996',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:50:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='cfa713d92cc94fa1b94404ed58b0563f',ramdisk_id='',reservation_id='r-qd0qnxrq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_
hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1510980811',owner_user_name='tempest-AttachVolumeShelveTestJSON-1510980811-project-member',shelved_at='2025-12-06T07:51:35.272180',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='2bf7c3bd-26e5-44a8-90a5-c1b8c9d58e1e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:51:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9de6e67724c898a8e23b05fbf14da',uuid=0b9681c0-c0e7-4bd8-9040-865c1bff517b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.412 251996 DEBUG nova.network.os_vif_util [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converting VIF {"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.413 251996 DEBUG nova.network.os_vif_util [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.413 251996 DEBUG os_vif [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.414 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.414 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.415 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.421 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.421 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d320b87-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.422 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1d320b87-e6, col_values=(('external_ids', {'iface-id': '1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:01:5b', 'vm-uuid': '0b9681c0-c0e7-4bd8-9040-865c1bff517b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.423 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:53 compute-0 NetworkManager[48965]: <info>  [1765007513.4251] manager: (tap1d320b87-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.426 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.431 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.432 251996 INFO os_vif [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6')
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.521 251996 DEBUG nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.522 251996 DEBUG nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.522 251996 DEBUG nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.523 251996 DEBUG nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No VIF found with MAC fa:16:3e:b7:01:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.524 251996 INFO nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Using config drive
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.551 251996 DEBUG nova.storage.rbd_utils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.630 251996 DEBUG nova.objects.instance [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.675 251996 DEBUG nova.objects.instance [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'keypairs' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:53.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:51:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2929533344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.819 251996 DEBUG oslo_concurrency.processutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.821 251996 DEBUG nova.virt.libvirt.vif [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:51:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1142578868',display_name='tempest-TestStampPattern-server-1142578868',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1142578868',id=170,image_ref='83fea89a-3a0d-4881-b429-13684080bb6c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM8KhRxaTrkKNzMUybnifFqVhR7VOW5ilrhcPN+BlOV2c9vQAH2tT4hPBYJpZ93aPVMmrWQGW35OWGQh34F5+BdF2On//RqgE6BOka+CpM6HEuYW/HMTwic5wOTQHp91yg==',key_name='tempest-TestStampPattern-1707395411',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4cf19b89a6d46bca307e65731a9dd21',ramdisk_id='',reservation_id='r-5gszegm5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e7d5d854-2a1f-485b-931a-4ec90cf7ba04',image_min_disk='1',image_min_ram='0',image_owner_id='c4cf19b89a6d46bca307e65731a9dd21',image_owner_project_name='tempest-TestStampPattern-1318067975',image_owner_user_name='tempest-TestStampPattern-1318067975-project-member',image_user_id='4962bc7b172346e19d127b46ea2d7a11',network_allocated='True',owner_project_name='tempest-TestStampPattern-1318067975',owner_user_name='tempest-TestStampPattern-1318067975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:51:47Z,user_data=None,user_id='4962bc7b172346e19d127b46ea2d7a11',uuid=822fc37e-13a4-4b1b-983f-6cc928c1dfa3,vcpu_
model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44c14266-b77c-4585-b6af-08f5afc76ad9", "address": "fa:16:3e:93:4f:f3", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44c14266-b7", "ovs_interfaceid": "44c14266-b77c-4585-b6af-08f5afc76ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.821 251996 DEBUG nova.network.os_vif_util [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Converting VIF {"id": "44c14266-b77c-4585-b6af-08f5afc76ad9", "address": "fa:16:3e:93:4f:f3", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44c14266-b7", "ovs_interfaceid": "44c14266-b77c-4585-b6af-08f5afc76ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.822 251996 DEBUG nova.network.os_vif_util [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:4f:f3,bridge_name='br-int',has_traffic_filtering=True,id=44c14266-b77c-4585-b6af-08f5afc76ad9,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44c14266-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.822 251996 DEBUG nova.objects.instance [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lazy-loading 'pci_devices' on Instance uuid 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:51:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1435402962' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1424748871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2763807753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.861 251996 DEBUG nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <uuid>822fc37e-13a4-4b1b-983f-6cc928c1dfa3</uuid>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <name>instance-000000aa</name>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:name>tempest-TestStampPattern-server-1142578868</nova:name>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:51:52</nova:creationTime>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:user uuid="4962bc7b172346e19d127b46ea2d7a11">tempest-TestStampPattern-1318067975-project-member</nova:user>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:project uuid="c4cf19b89a6d46bca307e65731a9dd21">tempest-TestStampPattern-1318067975</nova:project>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="83fea89a-3a0d-4881-b429-13684080bb6c"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <nova:port uuid="44c14266-b77c-4585-b6af-08f5afc76ad9">
Dec 06 07:51:53 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <system>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <entry name="serial">822fc37e-13a4-4b1b-983f-6cc928c1dfa3</entry>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <entry name="uuid">822fc37e-13a4-4b1b-983f-6cc928c1dfa3</entry>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </system>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <os>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </os>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <features>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </features>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/822fc37e-13a4-4b1b-983f-6cc928c1dfa3_disk">
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </source>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/822fc37e-13a4-4b1b-983f-6cc928c1dfa3_disk.config">
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </source>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:51:53 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:93:4f:f3"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <target dev="tap44c14266-b7"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/822fc37e-13a4-4b1b-983f-6cc928c1dfa3/console.log" append="off"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <video>
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </video>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <input type="keyboard" bus="usb"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:51:53 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:51:53 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:51:53 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:51:53 compute-0 nova_compute[251992]: </domain>
Dec 06 07:51:53 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.862 251996 DEBUG nova.compute.manager [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Preparing to wait for external event network-vif-plugged-44c14266-b77c-4585-b6af-08f5afc76ad9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.862 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.862 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.862 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.863 251996 DEBUG nova.virt.libvirt.vif [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:51:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1142578868',display_name='tempest-TestStampPattern-server-1142578868',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1142578868',id=170,image_ref='83fea89a-3a0d-4881-b429-13684080bb6c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM8KhRxaTrkKNzMUybnifFqVhR7VOW5ilrhcPN+BlOV2c9vQAH2tT4hPBYJpZ93aPVMmrWQGW35OWGQh34F5+BdF2On//RqgE6BOka+CpM6HEuYW/HMTwic5wOTQHp91yg==',key_name='tempest-TestStampPattern-1707395411',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4cf19b89a6d46bca307e65731a9dd21',ramdisk_id='',reservation_id='r-5gszegm5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e7d5d854-2a1f-485b-931a-4ec90cf7ba04',image_min_disk='1',image_min_ram='0',image_owner_id='c4cf19b89a6d46bca307e65731a9dd21',image_owner_project_name='tempest-TestStampPattern-1318067975',image_owner_user_name='tempest-TestStampPattern-1318067975-project-member',image_user_id='4962bc7b172346e19d127b46ea2d7a11',network_allocated='True',owner_project_name='tempest-TestStampPattern-1318067975',owner_user_name='tempest-TestStampPattern-1318067975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:51:47Z,user_data=None,user_id='4962bc7b172346e19d127b46ea2d7a11',uuid=822fc37e-13a4-4b1b-983f-6cc928c1dfa3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44c14266-b77c-4585-b6af-08f5afc76ad9", "address": "fa:16:3e:93:4f:f3", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44c14266-b7", "ovs_interfaceid": "44c14266-b77c-4585-b6af-08f5afc76ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.863 251996 DEBUG nova.network.os_vif_util [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Converting VIF {"id": "44c14266-b77c-4585-b6af-08f5afc76ad9", "address": "fa:16:3e:93:4f:f3", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44c14266-b7", "ovs_interfaceid": "44c14266-b77c-4585-b6af-08f5afc76ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.864 251996 DEBUG nova.network.os_vif_util [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:4f:f3,bridge_name='br-int',has_traffic_filtering=True,id=44c14266-b77c-4585-b6af-08f5afc76ad9,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44c14266-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.864 251996 DEBUG os_vif [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:4f:f3,bridge_name='br-int',has_traffic_filtering=True,id=44c14266-b77c-4585-b6af-08f5afc76ad9,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44c14266-b7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.864 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.865 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.865 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.868 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.868 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44c14266-b7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.869 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44c14266-b7, col_values=(('external_ids', {'iface-id': '44c14266-b77c-4585-b6af-08f5afc76ad9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:4f:f3', 'vm-uuid': '822fc37e-13a4-4b1b-983f-6cc928c1dfa3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.870 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:53 compute-0 NetworkManager[48965]: <info>  [1765007513.8714] manager: (tap44c14266-b7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.872 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.878 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.879 251996 INFO os_vif [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:4f:f3,bridge_name='br-int',has_traffic_filtering=True,id=44c14266-b77c-4585-b6af-08f5afc76ad9,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44c14266-b7')
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.946 251996 DEBUG nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.946 251996 DEBUG nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.947 251996 DEBUG nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No VIF found with MAC fa:16:3e:93:4f:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:51:53 compute-0 nova_compute[251992]: 2025-12-06 07:51:53.948 251996 INFO nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Using config drive
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.145 251996 DEBUG nova.storage.rbd_utils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image 822fc37e-13a4-4b1b-983f-6cc928c1dfa3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.155 251996 INFO nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Creating config drive at /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/disk.config
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.160 251996 DEBUG oslo_concurrency.processutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsu24ulle execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.306 251996 DEBUG oslo_concurrency.processutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsu24ulle" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.334 251996 DEBUG nova.storage.rbd_utils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.337 251996 DEBUG oslo_concurrency.processutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/disk.config 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.505 251996 INFO nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Creating config drive at /var/lib/nova/instances/822fc37e-13a4-4b1b-983f-6cc928c1dfa3/disk.config
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.510 251996 DEBUG oslo_concurrency.processutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/822fc37e-13a4-4b1b-983f-6cc928c1dfa3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp80qp14qc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.540 251996 DEBUG nova.network.neutron [req-817df227-302e-4564-9ca4-e738321af5c6 req-4446a894-717f-4323-9cd6-73aeb407c44a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Updated VIF entry in instance network info cache for port 44c14266-b77c-4585-b6af-08f5afc76ad9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.541 251996 DEBUG nova.network.neutron [req-817df227-302e-4564-9ca4-e738321af5c6 req-4446a894-717f-4323-9cd6-73aeb407c44a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Updating instance_info_cache with network_info: [{"id": "44c14266-b77c-4585-b6af-08f5afc76ad9", "address": "fa:16:3e:93:4f:f3", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44c14266-b7", "ovs_interfaceid": "44c14266-b77c-4585-b6af-08f5afc76ad9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.577 251996 DEBUG oslo_concurrency.lockutils [req-817df227-302e-4564-9ca4-e738321af5c6 req-4446a894-717f-4323-9cd6-73aeb407c44a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-822fc37e-13a4-4b1b-983f-6cc928c1dfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.647 251996 DEBUG oslo_concurrency.processutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/822fc37e-13a4-4b1b-983f-6cc928c1dfa3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp80qp14qc" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:54.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.681 251996 DEBUG nova.storage.rbd_utils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] rbd image 822fc37e-13a4-4b1b-983f-6cc928c1dfa3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.685 251996 DEBUG oslo_concurrency.processutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/822fc37e-13a4-4b1b-983f-6cc928c1dfa3/disk.config 822fc37e-13a4-4b1b-983f-6cc928c1dfa3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.716 251996 DEBUG oslo_concurrency.processutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/disk.config 0b9681c0-c0e7-4bd8-9040-865c1bff517b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.716 251996 INFO nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Deleting local config drive /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b/disk.config because it was imported into RBD.
Dec 06 07:51:54 compute-0 kernel: tap1d320b87-e6: entered promiscuous mode
Dec 06 07:51:54 compute-0 NetworkManager[48965]: <info>  [1765007514.7664] manager: (tap1d320b87-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.769 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:54 compute-0 ovn_controller[147168]: 2025-12-06T07:51:54Z|00627|binding|INFO|Claiming lport 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc for this chassis.
Dec 06 07:51:54 compute-0 ovn_controller[147168]: 2025-12-06T07:51:54Z|00628|binding|INFO|1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc: Claiming fa:16:3e:b7:01:5b 10.100.0.14
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.784 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:54 compute-0 systemd-machined[212986]: New machine qemu-79-instance-000000a8.
Dec 06 07:51:54 compute-0 systemd-udevd[362244]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.807 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:54 compute-0 NetworkManager[48965]: <info>  [1765007514.8074] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Dec 06 07:51:54 compute-0 NetworkManager[48965]: <info>  [1765007514.8082] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Dec 06 07:51:54 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-000000a8.
Dec 06 07:51:54 compute-0 NetworkManager[48965]: <info>  [1765007514.8154] device (tap1d320b87-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.812 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:01:5b 10.100.0.14'], port_security=['fa:16:3e:b7:01:5b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0b9681c0-c0e7-4bd8-9040-865c1bff517b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfa713d92cc94fa1b94404ed58b0563f', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'a26fe1ae-b98b-40c8-b5a2-fa6264313a90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5411afcf-f935-4976-affc-7b12214f8e50, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:51:54 compute-0 NetworkManager[48965]: <info>  [1765007514.8161] device (tap1d320b87-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.816 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc in datapath 45904a2f-a5c2-4047-9c19-a87d36354c1b bound to our chassis
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.819 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45904a2f-a5c2-4047-9c19-a87d36354c1b
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.832 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[844abaf2-8edb-4bd6-b977-1e520d06a46f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.835 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45904a2f-a1 in ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.838 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45904a2f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.838 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[105d5703-6e02-4ac1-9c26-90ea0bad26b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.839 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8d68f389-024e-41b1-8c4f-561fd485277c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.854 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[86b4d610-4596-4980-b3ca-e4f111d06e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.883 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[400d3fcf-db0c-404f-ba39-704d4e0bf09f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:54 compute-0 ceph-mon[74339]: pgmap v2946: 305 pgs: 305 active+clean; 715 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 191 op/s
Dec 06 07:51:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2929533344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.912 251996 DEBUG oslo_concurrency.processutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/822fc37e-13a4-4b1b-983f-6cc928c1dfa3/disk.config 822fc37e-13a4-4b1b-983f-6cc928c1dfa3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:51:54 compute-0 nova_compute[251992]: 2025-12-06 07:51:54.913 251996 INFO nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Deleting local config drive /var/lib/nova/instances/822fc37e-13a4-4b1b-983f-6cc928c1dfa3/disk.config because it was imported into RBD.
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.921 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[9948be58-618c-4616-b06f-426b6d789263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:54 compute-0 NetworkManager[48965]: <info>  [1765007514.9303] manager: (tap45904a2f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/287)
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.929 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b95aa8-8d53-4e76-aa9d-8bed1914b5eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.964 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[f04113bb-8692-4b48-8102-6588b4780841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.966 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c0997bc1-1119-4e3e-9ff1-e2ea8e9763c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:54 compute-0 NetworkManager[48965]: <info>  [1765007514.9702] manager: (tap44c14266-b7): new Tun device (/org/freedesktop/NetworkManager/Devices/288)
Dec 06 07:51:54 compute-0 systemd-udevd[362279]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:51:54 compute-0 NetworkManager[48965]: <info>  [1765007514.9898] device (tap45904a2f-a0): carrier: link connected
Dec 06 07:51:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:54.996 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[24740326-2b7c-45d2-9e82-51f43d79c4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:55 compute-0 systemd-machined[212986]: New machine qemu-80-instance-000000aa.
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.013 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5fa2fb48-109a-4dfe-a20c-033cb4932948]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45904a2f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:67:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 191], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778757, 'reachable_time': 25946, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362292, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:55 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-000000aa.
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.035 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e60d824f-8dd9-4e2c-920d-b73b045c0b59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:67fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 778757, 'tstamp': 778757}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362295, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.054 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4a223a12-2c34-4340-a53f-721800b5d0be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45904a2f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:67:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 191], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778757, 'reachable_time': 25946, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362297, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.087 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[85815553-ee0d-496c-8f74-9bbfcf9e4abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:55 compute-0 kernel: tap44c14266-b7: entered promiscuous mode
Dec 06 07:51:55 compute-0 NetworkManager[48965]: <info>  [1765007515.1035] device (tap44c14266-b7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:51:55 compute-0 NetworkManager[48965]: <info>  [1765007515.1050] device (tap44c14266-b7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.157 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:55 compute-0 ovn_controller[147168]: 2025-12-06T07:51:55Z|00629|memory|INFO|peak resident set size grew 51% in last 4266.8 seconds, from 16512 kB to 24892 kB
Dec 06 07:51:55 compute-0 ovn_controller[147168]: 2025-12-06T07:51:55Z|00630|memory|INFO|idl-cells-OVN_Southbound:10819 idl-cells-Open_vSwitch:984 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:353 lflow-cache-entries-cache-matches:282 lflow-cache-size-KB:1432 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:612 ofctrl_installed_flow_usage-KB:447 ofctrl_rconn_packet_counter-KB:162 ofctrl_sb_flow_ref_usage-KB:229 oflow_update_usage-KB:1
Dec 06 07:51:55 compute-0 ovn_controller[147168]: 2025-12-06T07:51:55Z|00631|binding|INFO|Claiming lport 44c14266-b77c-4585-b6af-08f5afc76ad9 for this chassis.
Dec 06 07:51:55 compute-0 ovn_controller[147168]: 2025-12-06T07:51:55Z|00632|binding|INFO|44c14266-b77c-4585-b6af-08f5afc76ad9: Claiming fa:16:3e:93:4f:f3 10.100.0.8
Dec 06 07:51:55 compute-0 ovn_controller[147168]: 2025-12-06T07:51:55Z|00633|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec 06 07:51:55 compute-0 ovn_controller[147168]: 2025-12-06T07:51:55Z|00634|binding|INFO|Setting lport 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc ovn-installed in OVS
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.190 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:55 compute-0 ovn_controller[147168]: 2025-12-06T07:51:55Z|00635|binding|INFO|Setting lport 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc up in Southbound
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.194 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:4f:f3 10.100.0.8'], port_security=['fa:16:3e:93:4f:f3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '822fc37e-13a4-4b1b-983f-6cc928c1dfa3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4cf19b89a6d46bca307e65731a9dd21', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a8dd9f4b-9afe-430e-a0a0-846e8785c631', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1ea0b24-813d-4f2d-b582-53b5b07aa43a, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=44c14266-b77c-4585-b6af-08f5afc76ad9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:51:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 780 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.9 MiB/s wr, 266 op/s
Dec 06 07:51:55 compute-0 ovn_controller[147168]: 2025-12-06T07:51:55Z|00636|binding|INFO|Setting lport 44c14266-b77c-4585-b6af-08f5afc76ad9 ovn-installed in OVS
Dec 06 07:51:55 compute-0 ovn_controller[147168]: 2025-12-06T07:51:55Z|00637|binding|INFO|Setting lport 44c14266-b77c-4585-b6af-08f5afc76ad9 up in Southbound
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.218 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.222 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[14116016-7e59-4292-808c-ddc5de017c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.224 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45904a2f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.224 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.224 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45904a2f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.225 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:55 compute-0 NetworkManager[48965]: <info>  [1765007515.2266] manager: (tap45904a2f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Dec 06 07:51:55 compute-0 kernel: tap45904a2f-a0: entered promiscuous mode
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.228 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.230 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45904a2f-a0, col_values=(('external_ids', {'iface-id': 'e43e784e-bee5-49c8-8bc7-c45a17996abf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.231 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:55 compute-0 ovn_controller[147168]: 2025-12-06T07:51:55Z|00638|binding|INFO|Releasing lport e43e784e-bee5-49c8-8bc7-c45a17996abf from this chassis (sb_readonly=0)
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.232 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.234 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45904a2f-a5c2-4047-9c19-a87d36354c1b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45904a2f-a5c2-4047-9c19-a87d36354c1b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.235 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9e4362-b058-4792-8227-36829dfcf1e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.235 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-45904a2f-a5c2-4047-9c19-a87d36354c1b
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/45904a2f-a5c2-4047-9c19-a87d36354c1b.pid.haproxy
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 45904a2f-a5c2-4047-9c19-a87d36354c1b
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:51:55 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:55.236 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'env', 'PROCESS_TAG=haproxy-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45904a2f-a5c2-4047-9c19-a87d36354c1b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.247 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.549 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007515.5492985, 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.550 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] VM Started (Lifecycle Event)
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.572 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.579 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007515.5502687, 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.579 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] VM Paused (Lifecycle Event)
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.610 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.613 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.631 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.631 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007515.617389, 0b9681c0-c0e7-4bd8-9040-865c1bff517b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.631 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] VM Started (Lifecycle Event)
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.660 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.663 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007515.6174562, 0b9681c0-c0e7-4bd8-9040-865c1bff517b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.663 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] VM Paused (Lifecycle Event)
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.682 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.685 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:51:55 compute-0 nova_compute[251992]: 2025-12-06 07:51:55.711 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:51:55 compute-0 podman[362434]: 2025-12-06 07:51:55.630694038 +0000 UTC m=+0.026650499 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:51:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:55.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.114 251996 DEBUG nova.compute.manager [req-761ba4fc-5eb8-4b04-90a3-f9df78a9847e req-4500b22c-c865-4570-a752-df01274145c9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.115 251996 DEBUG oslo_concurrency.lockutils [req-761ba4fc-5eb8-4b04-90a3-f9df78a9847e req-4500b22c-c865-4570-a752-df01274145c9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.115 251996 DEBUG oslo_concurrency.lockutils [req-761ba4fc-5eb8-4b04-90a3-f9df78a9847e req-4500b22c-c865-4570-a752-df01274145c9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.115 251996 DEBUG oslo_concurrency.lockutils [req-761ba4fc-5eb8-4b04-90a3-f9df78a9847e req-4500b22c-c865-4570-a752-df01274145c9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.116 251996 DEBUG nova.compute.manager [req-761ba4fc-5eb8-4b04-90a3-f9df78a9847e req-4500b22c-c865-4570-a752-df01274145c9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Processing event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.116 251996 DEBUG nova.compute.manager [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.120 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007516.1205857, 0b9681c0-c0e7-4bd8-9040-865c1bff517b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.120 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] VM Resumed (Lifecycle Event)
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.122 251996 DEBUG nova.virt.libvirt.driver [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.125 251996 INFO nova.virt.libvirt.driver [-] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Instance spawned successfully.
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.138 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.141 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.159 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:51:56 compute-0 podman[362434]: 2025-12-06 07:51:56.245063996 +0000 UTC m=+0.641020427 container create d71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:51:56 compute-0 systemd[1]: Started libpod-conmon-d71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10.scope.
Dec 06 07:51:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb26d145b5b247fa3e4586221b30f95235183fc8b21d5f967d648ad7eee627ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:56 compute-0 nova_compute[251992]: 2025-12-06 07:51:56.567 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:56 compute-0 podman[362434]: 2025-12-06 07:51:56.62772525 +0000 UTC m=+1.023681711 container init d71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:51:56 compute-0 podman[362434]: 2025-12-06 07:51:56.635440268 +0000 UTC m=+1.031396699 container start d71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:51:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:56.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:56 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[362449]: [NOTICE]   (362453) : New worker (362455) forked
Dec 06 07:51:56 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[362449]: [NOTICE]   (362453) : Loading success.
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.807 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 44c14266-b77c-4585-b6af-08f5afc76ad9 in datapath 9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 unbound from our chassis
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.809 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.820 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d24315d8-8b03-48b6-b8e4-f92de5477d53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.821 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9e0e5f36-41 in ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.822 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9e0e5f36-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.822 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0e9e56-1e5d-44d6-a740-e56948b48255]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.823 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3cbc20-4e6d-408b-98bc-5f8fe496b59e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.834 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[39a95f49-4f3e-492a-bfc6-e5e8431d70a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.846 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[189d23fa-3f09-47e8-b334-97da5231fdb8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.876 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d9812892-c498-455f-a08d-a6053a3aad90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.882 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[52507486-5097-457d-8b6a-af1113b2e360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:56 compute-0 NetworkManager[48965]: <info>  [1765007516.8833] manager: (tap9e0e5f36-40): new Veth device (/org/freedesktop/NetworkManager/Devices/290)
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.914 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[05725d3b-7837-4261-96f9-8677d71f45b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.920 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[085427ad-1498-4442-a19d-3988e32125cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:56 compute-0 NetworkManager[48965]: <info>  [1765007516.9439] device (tap9e0e5f36-40): carrier: link connected
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.948 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc09d6d-69dc-4fe6-aea4-bd1e8c13669b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.966 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9e2103-f562-4024-8edd-e32c83f66423]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e0e5f36-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e5:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778953, 'reachable_time': 21370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362474, 'error': None, 'target': 'ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.983 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d1768c-9c14-46ef-80ba-035163a8da1c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:e5d7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 778953, 'tstamp': 778953}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362475, 'error': None, 'target': 'ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:56.999 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[429af7e6-49ab-41aa-8656-b694c662eac0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9e0e5f36-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:e5:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778953, 'reachable_time': 21370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362476, 'error': None, 'target': 'ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:57.029 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c8662d-c7f0-48cd-9bb1-e87f050397df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:57.092 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[05b1baa2-51d4-4099-925d-589a8a158843]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:57.093 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e0e5f36-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:57.093 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:57.094 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e0e5f36-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:57 compute-0 nova_compute[251992]: 2025-12-06 07:51:57.095 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:57 compute-0 NetworkManager[48965]: <info>  [1765007517.0964] manager: (tap9e0e5f36-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Dec 06 07:51:57 compute-0 kernel: tap9e0e5f36-40: entered promiscuous mode
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:57.099 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9e0e5f36-40, col_values=(('external_ids', {'iface-id': 'b8d91b14-ad14-4c15-a901-b1a5c72f0e0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:51:57 compute-0 nova_compute[251992]: 2025-12-06 07:51:57.100 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:57 compute-0 ovn_controller[147168]: 2025-12-06T07:51:57Z|00639|binding|INFO|Releasing lport b8d91b14-ad14-4c15-a901-b1a5c72f0e0f from this chassis (sb_readonly=0)
Dec 06 07:51:57 compute-0 nova_compute[251992]: 2025-12-06 07:51:57.118 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:57.119 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:57.119 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[107fb78d-7f6a-4caa-bb94-ddb24bc67ba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:57.120 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7.pid.haproxy
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:51:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:51:57.121 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'env', 'PROCESS_TAG=haproxy-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:51:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Dec 06 07:51:57 compute-0 ceph-mon[74339]: pgmap v2947: 305 pgs: 305 active+clean; 780 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.9 MiB/s wr, 266 op/s
Dec 06 07:51:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Dec 06 07:51:57 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Dec 06 07:51:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 796 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 5.8 MiB/s wr, 258 op/s
Dec 06 07:51:57 compute-0 podman[362510]: 2025-12-06 07:51:57.524017673 +0000 UTC m=+0.050310589 container create 3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 07:51:57 compute-0 systemd[1]: Started libpod-conmon-3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80.scope.
Dec 06 07:51:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:51:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eadd1968043f135b0cfdd8e8a4a8acfb1dbaefdc42d3ed88b0ba580b2356e4e8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:51:57 compute-0 podman[362510]: 2025-12-06 07:51:57.497794615 +0000 UTC m=+0.024087551 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:51:57 compute-0 podman[362510]: 2025-12-06 07:51:57.602469029 +0000 UTC m=+0.128761975 container init 3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:51:57 compute-0 podman[362510]: 2025-12-06 07:51:57.608679557 +0000 UTC m=+0.134972473 container start 3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 07:51:57 compute-0 nova_compute[251992]: 2025-12-06 07:51:57.630 251996 DEBUG nova.compute.manager [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:57 compute-0 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[362525]: [NOTICE]   (362529) : New worker (362531) forked
Dec 06 07:51:57 compute-0 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[362525]: [NOTICE]   (362529) : Loading success.
Dec 06 07:51:57 compute-0 nova_compute[251992]: 2025-12-06 07:51:57.724 251996 DEBUG oslo_concurrency.lockutils [None req-5a388242-cd39-475e-b07e-d4d007915840 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:57.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.028 251996 DEBUG nova.compute.manager [req-685a2558-ded8-45a9-94c6-8f48568f529d req-698e7fd1-a208-4bf3-9410-7c796e07c566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Received event network-vif-plugged-44c14266-b77c-4585-b6af-08f5afc76ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.028 251996 DEBUG oslo_concurrency.lockutils [req-685a2558-ded8-45a9-94c6-8f48568f529d req-698e7fd1-a208-4bf3-9410-7c796e07c566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.028 251996 DEBUG oslo_concurrency.lockutils [req-685a2558-ded8-45a9-94c6-8f48568f529d req-698e7fd1-a208-4bf3-9410-7c796e07c566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.028 251996 DEBUG oslo_concurrency.lockutils [req-685a2558-ded8-45a9-94c6-8f48568f529d req-698e7fd1-a208-4bf3-9410-7c796e07c566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.028 251996 DEBUG nova.compute.manager [req-685a2558-ded8-45a9-94c6-8f48568f529d req-698e7fd1-a208-4bf3-9410-7c796e07c566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Processing event network-vif-plugged-44c14266-b77c-4585-b6af-08f5afc76ad9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.029 251996 DEBUG nova.compute.manager [req-685a2558-ded8-45a9-94c6-8f48568f529d req-698e7fd1-a208-4bf3-9410-7c796e07c566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Received event network-vif-plugged-44c14266-b77c-4585-b6af-08f5afc76ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.029 251996 DEBUG oslo_concurrency.lockutils [req-685a2558-ded8-45a9-94c6-8f48568f529d req-698e7fd1-a208-4bf3-9410-7c796e07c566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.029 251996 DEBUG oslo_concurrency.lockutils [req-685a2558-ded8-45a9-94c6-8f48568f529d req-698e7fd1-a208-4bf3-9410-7c796e07c566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.029 251996 DEBUG oslo_concurrency.lockutils [req-685a2558-ded8-45a9-94c6-8f48568f529d req-698e7fd1-a208-4bf3-9410-7c796e07c566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.029 251996 DEBUG nova.compute.manager [req-685a2558-ded8-45a9-94c6-8f48568f529d req-698e7fd1-a208-4bf3-9410-7c796e07c566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] No waiting events found dispatching network-vif-plugged-44c14266-b77c-4585-b6af-08f5afc76ad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.030 251996 WARNING nova.compute.manager [req-685a2558-ded8-45a9-94c6-8f48568f529d req-698e7fd1-a208-4bf3-9410-7c796e07c566 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Received unexpected event network-vif-plugged-44c14266-b77c-4585-b6af-08f5afc76ad9 for instance with vm_state building and task_state spawning.
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.030 251996 DEBUG nova.compute.manager [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.033 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007518.0335393, 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.033 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] VM Resumed (Lifecycle Event)
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.035 251996 DEBUG nova.virt.libvirt.driver [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.038 251996 INFO nova.virt.libvirt.driver [-] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Instance spawned successfully.
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.038 251996 INFO nova.compute.manager [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Took 10.50 seconds to spawn the instance on the hypervisor.
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.039 251996 DEBUG nova.compute.manager [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.052 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.055 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.076 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.123 251996 INFO nova.compute.manager [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Took 11.64 seconds to build instance.
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.148 251996 DEBUG oslo_concurrency.lockutils [None req-fa9523cb-95c9-4fcb-ae42-8f62ddb7ee71 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:58 compute-0 ceph-mon[74339]: osdmap e385: 3 total, 3 up, 3 in
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.209 251996 DEBUG nova.compute.manager [req-9b882602-c78f-4396-aab4-90160066c6e7 req-b817f9f4-4271-499e-bb24-e97d0b6c5926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.209 251996 DEBUG oslo_concurrency.lockutils [req-9b882602-c78f-4396-aab4-90160066c6e7 req-b817f9f4-4271-499e-bb24-e97d0b6c5926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.209 251996 DEBUG oslo_concurrency.lockutils [req-9b882602-c78f-4396-aab4-90160066c6e7 req-b817f9f4-4271-499e-bb24-e97d0b6c5926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.210 251996 DEBUG oslo_concurrency.lockutils [req-9b882602-c78f-4396-aab4-90160066c6e7 req-b817f9f4-4271-499e-bb24-e97d0b6c5926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.210 251996 DEBUG nova.compute.manager [req-9b882602-c78f-4396-aab4-90160066c6e7 req-b817f9f4-4271-499e-bb24-e97d0b6c5926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] No waiting events found dispatching network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.210 251996 WARNING nova.compute.manager [req-9b882602-c78f-4396-aab4-90160066c6e7 req-b817f9f4-4271-499e-bb24-e97d0b6c5926 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received unexpected event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc for instance with vm_state active and task_state None.
Dec 06 07:51:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:51:58.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:58 compute-0 nova_compute[251992]: 2025-12-06 07:51:58.871 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:51:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 768 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 4.7 MiB/s wr, 288 op/s
Dec 06 07:51:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:51:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:51:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:51:59.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:51:59 compute-0 ceph-mon[74339]: pgmap v2949: 305 pgs: 305 active+clean; 796 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 5.8 MiB/s wr, 258 op/s
Dec 06 07:52:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:00.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:00 compute-0 ceph-mon[74339]: pgmap v2950: 305 pgs: 305 active+clean; 768 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 4.7 MiB/s wr, 288 op/s
Dec 06 07:52:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 717 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 4.7 MiB/s wr, 381 op/s
Dec 06 07:52:01 compute-0 nova_compute[251992]: 2025-12-06 07:52:01.569 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:01.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Dec 06 07:52:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:02.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Dec 06 07:52:02 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Dec 06 07:52:03 compute-0 ceph-mon[74339]: pgmap v2951: 305 pgs: 305 active+clean; 717 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 4.7 MiB/s wr, 381 op/s
Dec 06 07:52:03 compute-0 ceph-mon[74339]: osdmap e386: 3 total, 3 up, 3 in
Dec 06 07:52:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 717 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 1.1 MiB/s wr, 260 op/s
Dec 06 07:52:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:03.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:03.862 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:03.863 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:03.863 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:03 compute-0 nova_compute[251992]: 2025-12-06 07:52:03.874 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:04 compute-0 sudo[362543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:04 compute-0 sudo[362543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:04 compute-0 sudo[362543]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:04 compute-0 sudo[362568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:04 compute-0 sudo[362568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:04 compute-0 sudo[362568]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:04 compute-0 ceph-mon[74339]: pgmap v2953: 305 pgs: 305 active+clean; 717 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 1.1 MiB/s wr, 260 op/s
Dec 06 07:52:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:04.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:05 compute-0 nova_compute[251992]: 2025-12-06 07:52:05.068 251996 DEBUG nova.compute.manager [req-28dd01c7-94e9-4133-bea4-0d0b44e426ed req-65939aa7-aee0-4a30-80e3-d93f5f8eb47e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Received event network-changed-44c14266-b77c-4585-b6af-08f5afc76ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:52:05 compute-0 nova_compute[251992]: 2025-12-06 07:52:05.068 251996 DEBUG nova.compute.manager [req-28dd01c7-94e9-4133-bea4-0d0b44e426ed req-65939aa7-aee0-4a30-80e3-d93f5f8eb47e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Refreshing instance network info cache due to event network-changed-44c14266-b77c-4585-b6af-08f5afc76ad9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:52:05 compute-0 nova_compute[251992]: 2025-12-06 07:52:05.069 251996 DEBUG oslo_concurrency.lockutils [req-28dd01c7-94e9-4133-bea4-0d0b44e426ed req-65939aa7-aee0-4a30-80e3-d93f5f8eb47e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-822fc37e-13a4-4b1b-983f-6cc928c1dfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:52:05 compute-0 nova_compute[251992]: 2025-12-06 07:52:05.069 251996 DEBUG oslo_concurrency.lockutils [req-28dd01c7-94e9-4133-bea4-0d0b44e426ed req-65939aa7-aee0-4a30-80e3-d93f5f8eb47e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-822fc37e-13a4-4b1b-983f-6cc928c1dfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:52:05 compute-0 nova_compute[251992]: 2025-12-06 07:52:05.069 251996 DEBUG nova.network.neutron [req-28dd01c7-94e9-4133-bea4-0d0b44e426ed req-65939aa7-aee0-4a30-80e3-d93f5f8eb47e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Refreshing network info cache for port 44c14266-b77c-4585-b6af-08f5afc76ad9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:52:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 722 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.5 MiB/s rd, 917 KiB/s wr, 285 op/s
Dec 06 07:52:05 compute-0 podman[362594]: 2025-12-06 07:52:05.439775177 +0000 UTC m=+0.090273907 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 06 07:52:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:05.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:06 compute-0 nova_compute[251992]: 2025-12-06 07:52:06.570 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:06 compute-0 ceph-mon[74339]: pgmap v2954: 305 pgs: 305 active+clean; 722 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.5 MiB/s rd, 917 KiB/s wr, 285 op/s
Dec 06 07:52:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:06.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:07 compute-0 nova_compute[251992]: 2025-12-06 07:52:07.022 251996 DEBUG nova.network.neutron [req-28dd01c7-94e9-4133-bea4-0d0b44e426ed req-65939aa7-aee0-4a30-80e3-d93f5f8eb47e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Updated VIF entry in instance network info cache for port 44c14266-b77c-4585-b6af-08f5afc76ad9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:52:07 compute-0 nova_compute[251992]: 2025-12-06 07:52:07.023 251996 DEBUG nova.network.neutron [req-28dd01c7-94e9-4133-bea4-0d0b44e426ed req-65939aa7-aee0-4a30-80e3-d93f5f8eb47e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Updating instance_info_cache with network_info: [{"id": "44c14266-b77c-4585-b6af-08f5afc76ad9", "address": "fa:16:3e:93:4f:f3", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44c14266-b7", "ovs_interfaceid": "44c14266-b77c-4585-b6af-08f5afc76ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:52:07 compute-0 nova_compute[251992]: 2025-12-06 07:52:07.042 251996 DEBUG oslo_concurrency.lockutils [req-28dd01c7-94e9-4133-bea4-0d0b44e426ed req-65939aa7-aee0-4a30-80e3-d93f5f8eb47e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-822fc37e-13a4-4b1b-983f-6cc928c1dfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:52:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 725 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.2 MiB/s wr, 250 op/s
Dec 06 07:52:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:07.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:08.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:08 compute-0 nova_compute[251992]: 2025-12-06 07:52:08.877 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:09 compute-0 ceph-mon[74339]: pgmap v2955: 305 pgs: 305 active+clean; 725 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.2 MiB/s wr, 250 op/s
Dec 06 07:52:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2310510304' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:52:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2310510304' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:52:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 744 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.5 MiB/s wr, 212 op/s
Dec 06 07:52:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:09.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:10.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:10 compute-0 ceph-mon[74339]: pgmap v2956: 305 pgs: 305 active+clean; 744 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.5 MiB/s wr, 212 op/s
Dec 06 07:52:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 750 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 153 op/s
Dec 06 07:52:11 compute-0 ovn_controller[147168]: 2025-12-06T07:52:11Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:01:5b 10.100.0.14
Dec 06 07:52:11 compute-0 nova_compute[251992]: 2025-12-06 07:52:11.573 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:11.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:12.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:12 compute-0 ovn_controller[147168]: 2025-12-06T07:52:12Z|00073|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.8
Dec 06 07:52:12 compute-0 ovn_controller[147168]: 2025-12-06T07:52:12Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:93:4f:f3 10.100.0.8
Dec 06 07:52:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:52:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:52:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:52:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:52:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:52:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:52:13 compute-0 ceph-mon[74339]: pgmap v2957: 305 pgs: 305 active+clean; 750 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 153 op/s
Dec 06 07:52:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 750 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 147 op/s
Dec 06 07:52:13 compute-0 podman[362626]: 2025-12-06 07:52:13.408145821 +0000 UTC m=+0.058847259 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 07:52:13 compute-0 podman[362627]: 2025-12-06 07:52:13.433044923 +0000 UTC m=+0.079107776 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:52:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:13.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:13 compute-0 nova_compute[251992]: 2025-12-06 07:52:13.878 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:14.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 763 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 186 op/s
Dec 06 07:52:15 compute-0 ceph-mon[74339]: pgmap v2958: 305 pgs: 305 active+clean; 750 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 147 op/s
Dec 06 07:52:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:15.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:52:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1736642058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:52:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:52:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1736642058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:52:16 compute-0 nova_compute[251992]: 2025-12-06 07:52:16.575 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:16.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:16 compute-0 ceph-mon[74339]: pgmap v2959: 305 pgs: 305 active+clean; 763 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 186 op/s
Dec 06 07:52:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1736642058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:52:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1736642058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:52:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 764 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 165 op/s
Dec 06 07:52:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:17 compute-0 ovn_controller[147168]: 2025-12-06T07:52:17Z|00075|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.8
Dec 06 07:52:17 compute-0 ovn_controller[147168]: 2025-12-06T07:52:17Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:93:4f:f3 10.100.0.8
Dec 06 07:52:17 compute-0 ovn_controller[147168]: 2025-12-06T07:52:17Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:4f:f3 10.100.0.8
Dec 06 07:52:17 compute-0 ovn_controller[147168]: 2025-12-06T07:52:17Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:4f:f3 10.100.0.8
Dec 06 07:52:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:17.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:52:18
Dec 06 07:52:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:52:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:52:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms']
Dec 06 07:52:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:52:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:18.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:18 compute-0 nova_compute[251992]: 2025-12-06 07:52:18.880 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 737 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 153 op/s
Dec 06 07:52:19 compute-0 ceph-mon[74339]: pgmap v2960: 305 pgs: 305 active+clean; 764 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 165 op/s
Dec 06 07:52:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:19.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:20 compute-0 sudo[362668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:20 compute-0 sudo[362668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:20 compute-0 sudo[362668]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:20 compute-0 sudo[362693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:52:20 compute-0 sudo[362693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:20 compute-0 sudo[362693]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:20 compute-0 sudo[362718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:20 compute-0 sudo[362718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:20 compute-0 sudo[362718]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:20 compute-0 sudo[362743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:52:20 compute-0 sudo[362743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:20.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:52:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:52:20 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:21 compute-0 sudo[362743]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:21 compute-0 ceph-mon[74339]: pgmap v2961: 305 pgs: 305 active+clean; 737 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 153 op/s
Dec 06 07:52:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3857071513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:21 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1192855359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 689 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 630 KiB/s wr, 162 op/s
Dec 06 07:52:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:52:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:52:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:52:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:52:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:52:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 064adfea-8250-4258-b8cc-d5f09fb35b0d does not exist
Dec 06 07:52:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 36e86d71-6d69-43de-a15b-87702dc408d4 does not exist
Dec 06 07:52:21 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ac239335-d085-49c5-95a8-976d3286bec8 does not exist
Dec 06 07:52:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:52:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:52:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:52:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:52:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:52:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:52:21 compute-0 sudo[362801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:21 compute-0 nova_compute[251992]: 2025-12-06 07:52:21.577 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:21 compute-0 sudo[362801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:21 compute-0 sudo[362801]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:21 compute-0 sudo[362826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:52:21 compute-0 sudo[362826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:21 compute-0 sudo[362826]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:21 compute-0 sudo[362851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:21 compute-0 sudo[362851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:21 compute-0 sudo[362851]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:21 compute-0 sudo[362876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:52:21 compute-0 sudo[362876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:21.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:22 compute-0 podman[362941]: 2025-12-06 07:52:22.075515433 +0000 UTC m=+0.040111713 container create 0097051730dfa7919a379b158350d8515cb80c294f39853b9c075299004335b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_goodall, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:52:22 compute-0 systemd[1]: Started libpod-conmon-0097051730dfa7919a379b158350d8515cb80c294f39853b9c075299004335b7.scope.
Dec 06 07:52:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:52:22 compute-0 podman[362941]: 2025-12-06 07:52:22.15101074 +0000 UTC m=+0.115607020 container init 0097051730dfa7919a379b158350d8515cb80c294f39853b9c075299004335b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_goodall, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 07:52:22 compute-0 podman[362941]: 2025-12-06 07:52:22.058618237 +0000 UTC m=+0.023214537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:52:22 compute-0 podman[362941]: 2025-12-06 07:52:22.158742569 +0000 UTC m=+0.123338849 container start 0097051730dfa7919a379b158350d8515cb80c294f39853b9c075299004335b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_goodall, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:52:22 compute-0 podman[362941]: 2025-12-06 07:52:22.161976626 +0000 UTC m=+0.126572926 container attach 0097051730dfa7919a379b158350d8515cb80c294f39853b9c075299004335b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_goodall, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:52:22 compute-0 jolly_goodall[362958]: 167 167
Dec 06 07:52:22 compute-0 systemd[1]: libpod-0097051730dfa7919a379b158350d8515cb80c294f39853b9c075299004335b7.scope: Deactivated successfully.
Dec 06 07:52:22 compute-0 podman[362941]: 2025-12-06 07:52:22.165985544 +0000 UTC m=+0.130581824 container died 0097051730dfa7919a379b158350d8515cb80c294f39853b9c075299004335b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:52:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:52:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:52:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:52:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:52:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2429076d3538e0283be7262ce6e44b9b462b87002635e1cddde757e32b2955d5-merged.mount: Deactivated successfully.
Dec 06 07:52:22 compute-0 podman[362941]: 2025-12-06 07:52:22.214898064 +0000 UTC m=+0.179494344 container remove 0097051730dfa7919a379b158350d8515cb80c294f39853b9c075299004335b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_goodall, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:52:22 compute-0 systemd[1]: libpod-conmon-0097051730dfa7919a379b158350d8515cb80c294f39853b9c075299004335b7.scope: Deactivated successfully.
Dec 06 07:52:22 compute-0 podman[362983]: 2025-12-06 07:52:22.389072113 +0000 UTC m=+0.039825725 container create 4624974003edc9783a588c45e9ecec2fd17afa74fd0b1bd149eb0950158f6f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:52:22 compute-0 systemd[1]: Started libpod-conmon-4624974003edc9783a588c45e9ecec2fd17afa74fd0b1bd149eb0950158f6f1c.scope.
Dec 06 07:52:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d829e22aaf29f4d2b85cd6d8568c91d1845a61d7514e64916ce2d11f35441111/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:22 compute-0 podman[362983]: 2025-12-06 07:52:22.371346465 +0000 UTC m=+0.022100097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d829e22aaf29f4d2b85cd6d8568c91d1845a61d7514e64916ce2d11f35441111/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d829e22aaf29f4d2b85cd6d8568c91d1845a61d7514e64916ce2d11f35441111/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d829e22aaf29f4d2b85cd6d8568c91d1845a61d7514e64916ce2d11f35441111/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d829e22aaf29f4d2b85cd6d8568c91d1845a61d7514e64916ce2d11f35441111/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:22 compute-0 podman[362983]: 2025-12-06 07:52:22.479225295 +0000 UTC m=+0.129978937 container init 4624974003edc9783a588c45e9ecec2fd17afa74fd0b1bd149eb0950158f6f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 07:52:22 compute-0 podman[362983]: 2025-12-06 07:52:22.489672207 +0000 UTC m=+0.140425859 container start 4624974003edc9783a588c45e9ecec2fd17afa74fd0b1bd149eb0950158f6f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:52:22 compute-0 podman[362983]: 2025-12-06 07:52:22.493556322 +0000 UTC m=+0.144309934 container attach 4624974003edc9783a588c45e9ecec2fd17afa74fd0b1bd149eb0950158f6f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 07:52:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:22.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Dec 06 07:52:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 689 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 880 KiB/s rd, 589 KiB/s wr, 103 op/s
Dec 06 07:52:23 compute-0 beautiful_keldysh[362999]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:52:23 compute-0 beautiful_keldysh[362999]: --> relative data size: 1.0
Dec 06 07:52:23 compute-0 beautiful_keldysh[362999]: --> All data devices are unavailable
Dec 06 07:52:23 compute-0 systemd[1]: libpod-4624974003edc9783a588c45e9ecec2fd17afa74fd0b1bd149eb0950158f6f1c.scope: Deactivated successfully.
Dec 06 07:52:23 compute-0 podman[362983]: 2025-12-06 07:52:23.338847159 +0000 UTC m=+0.989600771 container died 4624974003edc9783a588c45e9ecec2fd17afa74fd0b1bd149eb0950158f6f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:52:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:52:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Dec 06 07:52:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:23.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:23 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Dec 06 07:52:23 compute-0 nova_compute[251992]: 2025-12-06 07:52:23.917 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:24 compute-0 sudo[363026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:24 compute-0 sudo[363026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:24 compute-0 sudo[363026]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:24 compute-0 sudo[363051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:24 compute-0 sudo[363051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:24 compute-0 sudo[363051]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:24 compute-0 nova_compute[251992]: 2025-12-06 07:52:24.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:24 compute-0 nova_compute[251992]: 2025-12-06 07:52:24.686 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:24 compute-0 nova_compute[251992]: 2025-12-06 07:52:24.687 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:24 compute-0 nova_compute[251992]: 2025-12-06 07:52:24.687 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:24 compute-0 nova_compute[251992]: 2025-12-06 07:52:24.687 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:52:24 compute-0 nova_compute[251992]: 2025-12-06 07:52:24.688 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:24.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:52:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:52:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:52:25 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:52:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 688 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 85 KiB/s wr, 70 op/s
Dec 06 07:52:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:25.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.012158412141727155 of space, bias 1.0, pg target 3.6475236425181463 quantized to 32 (current 32)
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002257562406942118 of space, bias 1.0, pg target 0.670496034861809 quantized to 32 (current 32)
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005022217569328122 of space, bias 1.0, pg target 1.4915986180904521 quantized to 32 (current 32)
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:52:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Dec 06 07:52:26 compute-0 nova_compute[251992]: 2025-12-06 07:52:26.580 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:52:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:52:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 688 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 82 KiB/s wr, 52 op/s
Dec 06 07:52:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:52:27 compute-0 ceph-mon[74339]: pgmap v2962: 305 pgs: 305 active+clean; 689 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 630 KiB/s wr, 162 op/s
Dec 06 07:52:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2314624498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d829e22aaf29f4d2b85cd6d8568c91d1845a61d7514e64916ce2d11f35441111-merged.mount: Deactivated successfully.
Dec 06 07:52:27 compute-0 podman[362983]: 2025-12-06 07:52:27.468578473 +0000 UTC m=+5.119332085 container remove 4624974003edc9783a588c45e9ecec2fd17afa74fd0b1bd149eb0950158f6f1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:52:27 compute-0 sudo[362876]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:27 compute-0 systemd[1]: libpod-conmon-4624974003edc9783a588c45e9ecec2fd17afa74fd0b1bd149eb0950158f6f1c.scope: Deactivated successfully.
Dec 06 07:52:27 compute-0 sudo[363100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:27 compute-0 sudo[363100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:27 compute-0 sudo[363100]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:27 compute-0 sudo[363127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:52:27 compute-0 sudo[363127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:27 compute-0 sudo[363127]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:52:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/491815996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:27 compute-0 nova_compute[251992]: 2025-12-06 07:52:27.649 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.961s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:27 compute-0 sudo[363152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:27 compute-0 sudo[363152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:27 compute-0 sudo[363152]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:27 compute-0 sudo[363180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:52:27 compute-0 sudo[363180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:27 compute-0 nova_compute[251992]: 2025-12-06 07:52:27.907 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:52:27 compute-0 nova_compute[251992]: 2025-12-06 07:52:27.908 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000aa as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:52:27 compute-0 nova_compute[251992]: 2025-12-06 07:52:27.913 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:52:27 compute-0 nova_compute[251992]: 2025-12-06 07:52:27.913 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:52:27 compute-0 nova_compute[251992]: 2025-12-06 07:52:27.913 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:52:27 compute-0 nova_compute[251992]: 2025-12-06 07:52:27.916 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:52:27 compute-0 nova_compute[251992]: 2025-12-06 07:52:27.916 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:52:27 compute-0 nova_compute[251992]: 2025-12-06 07:52:27.917 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:52:28 compute-0 nova_compute[251992]: 2025-12-06 07:52:28.114 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:52:28 compute-0 nova_compute[251992]: 2025-12-06 07:52:28.116 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3578MB free_disk=20.73296356201172GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:52:28 compute-0 nova_compute[251992]: 2025-12-06 07:52:28.116 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:28 compute-0 nova_compute[251992]: 2025-12-06 07:52:28.116 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:28 compute-0 podman[363245]: 2025-12-06 07:52:28.052896079 +0000 UTC m=+0.023199166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:52:28 compute-0 nova_compute[251992]: 2025-12-06 07:52:28.223 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 6e187078-1e6f-4c22-9510-ed8116b14ae5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:52:28 compute-0 nova_compute[251992]: 2025-12-06 07:52:28.223 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 0b9681c0-c0e7-4bd8-9040-865c1bff517b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:52:28 compute-0 nova_compute[251992]: 2025-12-06 07:52:28.224 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:52:28 compute-0 nova_compute[251992]: 2025-12-06 07:52:28.224 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:52:28 compute-0 nova_compute[251992]: 2025-12-06 07:52:28.224 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:52:28 compute-0 nova_compute[251992]: 2025-12-06 07:52:28.295 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:28.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:28 compute-0 nova_compute[251992]: 2025-12-06 07:52:28.921 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:29 compute-0 podman[363245]: 2025-12-06 07:52:29.137053441 +0000 UTC m=+1.107356518 container create f3bddca81545d2f2a9c35e2b87690c2ff2d420e0eaf30853630b23466b50a20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:52:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 688 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 69 KiB/s rd, 80 KiB/s wr, 48 op/s
Dec 06 07:52:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:52:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441895647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:29 compute-0 nova_compute[251992]: 2025-12-06 07:52:29.323 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:29 compute-0 nova_compute[251992]: 2025-12-06 07:52:29.329 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:52:29 compute-0 nova_compute[251992]: 2025-12-06 07:52:29.364 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:52:29 compute-0 nova_compute[251992]: 2025-12-06 07:52:29.424 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:52:29 compute-0 nova_compute[251992]: 2025-12-06 07:52:29.426 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:29.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:29 compute-0 ceph-mon[74339]: pgmap v2963: 305 pgs: 305 active+clean; 689 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 880 KiB/s rd, 589 KiB/s wr, 103 op/s
Dec 06 07:52:29 compute-0 ceph-mon[74339]: osdmap e387: 3 total, 3 up, 3 in
Dec 06 07:52:29 compute-0 ceph-mon[74339]: pgmap v2965: 305 pgs: 305 active+clean; 688 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 85 KiB/s wr, 70 op/s
Dec 06 07:52:29 compute-0 ceph-mon[74339]: pgmap v2966: 305 pgs: 305 active+clean; 688 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 82 KiB/s wr, 52 op/s
Dec 06 07:52:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/491815996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:29 compute-0 systemd[1]: Started libpod-conmon-f3bddca81545d2f2a9c35e2b87690c2ff2d420e0eaf30853630b23466b50a20a.scope.
Dec 06 07:52:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:52:29 compute-0 podman[363245]: 2025-12-06 07:52:29.93389146 +0000 UTC m=+1.904194557 container init f3bddca81545d2f2a9c35e2b87690c2ff2d420e0eaf30853630b23466b50a20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:52:29 compute-0 podman[363245]: 2025-12-06 07:52:29.941336861 +0000 UTC m=+1.911639928 container start f3bddca81545d2f2a9c35e2b87690c2ff2d420e0eaf30853630b23466b50a20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cerf, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:52:29 compute-0 podman[363245]: 2025-12-06 07:52:29.946766968 +0000 UTC m=+1.917070045 container attach f3bddca81545d2f2a9c35e2b87690c2ff2d420e0eaf30853630b23466b50a20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:52:29 compute-0 keen_cerf[363285]: 167 167
Dec 06 07:52:29 compute-0 systemd[1]: libpod-f3bddca81545d2f2a9c35e2b87690c2ff2d420e0eaf30853630b23466b50a20a.scope: Deactivated successfully.
Dec 06 07:52:29 compute-0 podman[363245]: 2025-12-06 07:52:29.949693177 +0000 UTC m=+1.919996244 container died f3bddca81545d2f2a9c35e2b87690c2ff2d420e0eaf30853630b23466b50a20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cerf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 07:52:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c36ca706d7325a38c17f955da618db0685f5728964bc48b0080bc4ef7c470d5-merged.mount: Deactivated successfully.
Dec 06 07:52:30 compute-0 podman[363245]: 2025-12-06 07:52:30.089975181 +0000 UTC m=+2.060278248 container remove f3bddca81545d2f2a9c35e2b87690c2ff2d420e0eaf30853630b23466b50a20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:52:30 compute-0 systemd[1]: libpod-conmon-f3bddca81545d2f2a9c35e2b87690c2ff2d420e0eaf30853630b23466b50a20a.scope: Deactivated successfully.
Dec 06 07:52:30 compute-0 podman[363309]: 2025-12-06 07:52:30.284333475 +0000 UTC m=+0.058271593 container create bab72ce52e27b5319565cdafd617ee6fbdd0a42dfe2b98aa74d410711ad5574e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:52:30 compute-0 podman[363309]: 2025-12-06 07:52:30.248320074 +0000 UTC m=+0.022258222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:52:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:30.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Dec 06 07:52:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Dec 06 07:52:31 compute-0 ceph-mon[74339]: pgmap v2967: 305 pgs: 305 active+clean; 688 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 69 KiB/s rd, 80 KiB/s wr, 48 op/s
Dec 06 07:52:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3441895647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1971704627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:31 compute-0 systemd[1]: Started libpod-conmon-bab72ce52e27b5319565cdafd617ee6fbdd0a42dfe2b98aa74d410711ad5574e.scope.
Dec 06 07:52:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Dec 06 07:52:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87924ae08d461e1950d97f65e75af98c3e795579169603da6898fa92ba832c9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87924ae08d461e1950d97f65e75af98c3e795579169603da6898fa92ba832c9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87924ae08d461e1950d97f65e75af98c3e795579169603da6898fa92ba832c9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87924ae08d461e1950d97f65e75af98c3e795579169603da6898fa92ba832c9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 676 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 1.0 MiB/s wr, 46 op/s
Dec 06 07:52:31 compute-0 podman[363309]: 2025-12-06 07:52:31.227303197 +0000 UTC m=+1.001241345 container init bab72ce52e27b5319565cdafd617ee6fbdd0a42dfe2b98aa74d410711ad5574e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:52:31 compute-0 podman[363309]: 2025-12-06 07:52:31.234281915 +0000 UTC m=+1.008220043 container start bab72ce52e27b5319565cdafd617ee6fbdd0a42dfe2b98aa74d410711ad5574e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_noether, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:52:31 compute-0 podman[363309]: 2025-12-06 07:52:31.248201271 +0000 UTC m=+1.022139429 container attach bab72ce52e27b5319565cdafd617ee6fbdd0a42dfe2b98aa74d410711ad5574e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:52:31 compute-0 nova_compute[251992]: 2025-12-06 07:52:31.582 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:31.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:32 compute-0 agitated_noether[363326]: {
Dec 06 07:52:32 compute-0 agitated_noether[363326]:     "0": [
Dec 06 07:52:32 compute-0 agitated_noether[363326]:         {
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             "devices": [
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "/dev/loop3"
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             ],
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             "lv_name": "ceph_lv0",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             "lv_size": "7511998464",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             "name": "ceph_lv0",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             "tags": {
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "ceph.cluster_name": "ceph",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "ceph.crush_device_class": "",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "ceph.encrypted": "0",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "ceph.osd_id": "0",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "ceph.type": "block",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:                 "ceph.vdo": "0"
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             },
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             "type": "block",
Dec 06 07:52:32 compute-0 agitated_noether[363326]:             "vg_name": "ceph_vg0"
Dec 06 07:52:32 compute-0 agitated_noether[363326]:         }
Dec 06 07:52:32 compute-0 agitated_noether[363326]:     ]
Dec 06 07:52:32 compute-0 agitated_noether[363326]: }
Dec 06 07:52:32 compute-0 systemd[1]: libpod-bab72ce52e27b5319565cdafd617ee6fbdd0a42dfe2b98aa74d410711ad5574e.scope: Deactivated successfully.
Dec 06 07:52:32 compute-0 podman[363309]: 2025-12-06 07:52:32.032804971 +0000 UTC m=+1.806743099 container died bab72ce52e27b5319565cdafd617ee6fbdd0a42dfe2b98aa74d410711ad5574e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_noether, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 07:52:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-87924ae08d461e1950d97f65e75af98c3e795579169603da6898fa92ba832c9a-merged.mount: Deactivated successfully.
Dec 06 07:52:32 compute-0 podman[363309]: 2025-12-06 07:52:32.201305547 +0000 UTC m=+1.975243705 container remove bab72ce52e27b5319565cdafd617ee6fbdd0a42dfe2b98aa74d410711ad5574e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_noether, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:52:32 compute-0 systemd[1]: libpod-conmon-bab72ce52e27b5319565cdafd617ee6fbdd0a42dfe2b98aa74d410711ad5574e.scope: Deactivated successfully.
Dec 06 07:52:32 compute-0 sudo[363180]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:32 compute-0 sudo[363348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:32 compute-0 sudo[363348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:32 compute-0 sudo[363348]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:32 compute-0 ceph-mon[74339]: osdmap e388: 3 total, 3 up, 3 in
Dec 06 07:52:32 compute-0 sudo[363373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:52:32 compute-0 sudo[363373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:32 compute-0 sudo[363373]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:32 compute-0 sudo[363398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:32 compute-0 sudo[363398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:32 compute-0 sudo[363398]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:32 compute-0 nova_compute[251992]: 2025-12-06 07:52:32.446 251996 DEBUG oslo_concurrency.lockutils [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:32 compute-0 nova_compute[251992]: 2025-12-06 07:52:32.447 251996 DEBUG oslo_concurrency.lockutils [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:32 compute-0 nova_compute[251992]: 2025-12-06 07:52:32.465 251996 INFO nova.compute.manager [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Detaching volume 47d2f91f-77e4-4f73-968f-583938f7d1cb
Dec 06 07:52:32 compute-0 sudo[363423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:52:32 compute-0 sudo[363423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Dec 06 07:52:32 compute-0 nova_compute[251992]: 2025-12-06 07:52:32.666 251996 INFO nova.virt.block_device [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Attempting to driver detach volume 47d2f91f-77e4-4f73-968f-583938f7d1cb from mountpoint /dev/vdc
Dec 06 07:52:32 compute-0 nova_compute[251992]: 2025-12-06 07:52:32.679 251996 DEBUG nova.virt.libvirt.driver [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Attempting to detach device vdc from instance 0b9681c0-c0e7-4bd8-9040-865c1bff517b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:52:32 compute-0 nova_compute[251992]: 2025-12-06 07:52:32.679 251996 DEBUG nova.virt.libvirt.guest [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:52:32 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:52:32 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-47d2f91f-77e4-4f73-968f-583938f7d1cb">
Dec 06 07:52:32 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:52:32 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:52:32 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:52:32 compute-0 nova_compute[251992]:   </source>
Dec 06 07:52:32 compute-0 nova_compute[251992]:   <target dev="vdc" bus="virtio"/>
Dec 06 07:52:32 compute-0 nova_compute[251992]:   <serial>47d2f91f-77e4-4f73-968f-583938f7d1cb</serial>
Dec 06 07:52:32 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Dec 06 07:52:32 compute-0 nova_compute[251992]: </disk>
Dec 06 07:52:32 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:52:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:52:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:32.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:52:32 compute-0 nova_compute[251992]: 2025-12-06 07:52:32.736 251996 INFO nova.virt.libvirt.driver [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Successfully detached device vdc from instance 0b9681c0-c0e7-4bd8-9040-865c1bff517b from the persistent domain config.
Dec 06 07:52:32 compute-0 nova_compute[251992]: 2025-12-06 07:52:32.737 251996 DEBUG nova.virt.libvirt.driver [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 0b9681c0-c0e7-4bd8-9040-865c1bff517b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:52:32 compute-0 nova_compute[251992]: 2025-12-06 07:52:32.737 251996 DEBUG nova.virt.libvirt.guest [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:52:32 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:52:32 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-47d2f91f-77e4-4f73-968f-583938f7d1cb">
Dec 06 07:52:32 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:52:32 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:52:32 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:52:32 compute-0 nova_compute[251992]:   </source>
Dec 06 07:52:32 compute-0 nova_compute[251992]:   <target dev="vdc" bus="virtio"/>
Dec 06 07:52:32 compute-0 nova_compute[251992]:   <serial>47d2f91f-77e4-4f73-968f-583938f7d1cb</serial>
Dec 06 07:52:32 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Dec 06 07:52:32 compute-0 nova_compute[251992]: </disk>
Dec 06 07:52:32 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:52:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Dec 06 07:52:32 compute-0 podman[363487]: 2025-12-06 07:52:32.792784225 +0000 UTC m=+0.021925602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:52:32 compute-0 podman[363487]: 2025-12-06 07:52:32.958991759 +0000 UTC m=+0.188133116 container create c481cf4a31bef9c1c19f6db04688973c82826888a316b9a8ff8353f1b1fcad26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 07:52:32 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Dec 06 07:52:33 compute-0 nova_compute[251992]: 2025-12-06 07:52:33.021 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Received event <DeviceRemovedEvent: 1765007553.0204906, 0b9681c0-c0e7-4bd8-9040-865c1bff517b => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:52:33 compute-0 nova_compute[251992]: 2025-12-06 07:52:33.024 251996 DEBUG nova.virt.libvirt.driver [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 0b9681c0-c0e7-4bd8-9040-865c1bff517b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:52:33 compute-0 nova_compute[251992]: 2025-12-06 07:52:33.026 251996 INFO nova.virt.libvirt.driver [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Successfully detached device vdc from instance 0b9681c0-c0e7-4bd8-9040-865c1bff517b from the live domain config.
Dec 06 07:52:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 676 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.0 MiB/s wr, 26 op/s
Dec 06 07:52:33 compute-0 nova_compute[251992]: 2025-12-06 07:52:33.450 251996 DEBUG nova.objects.instance [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'flavor' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:52:33 compute-0 nova_compute[251992]: 2025-12-06 07:52:33.531 251996 DEBUG oslo_concurrency.lockutils [None req-4c20dc96-3db3-4058-82d3-b94a4bec0a02 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:33 compute-0 ceph-mon[74339]: pgmap v2969: 305 pgs: 305 active+clean; 676 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 1.0 MiB/s wr, 46 op/s
Dec 06 07:52:33 compute-0 ceph-mon[74339]: osdmap e389: 3 total, 3 up, 3 in
Dec 06 07:52:33 compute-0 systemd[1]: Started libpod-conmon-c481cf4a31bef9c1c19f6db04688973c82826888a316b9a8ff8353f1b1fcad26.scope.
Dec 06 07:52:33 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:52:33 compute-0 podman[363487]: 2025-12-06 07:52:33.731541664 +0000 UTC m=+0.960683041 container init c481cf4a31bef9c1c19f6db04688973c82826888a316b9a8ff8353f1b1fcad26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_cannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 07:52:33 compute-0 podman[363487]: 2025-12-06 07:52:33.73809349 +0000 UTC m=+0.967234847 container start c481cf4a31bef9c1c19f6db04688973c82826888a316b9a8ff8353f1b1fcad26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:52:33 compute-0 determined_cannon[363507]: 167 167
Dec 06 07:52:33 compute-0 systemd[1]: libpod-c481cf4a31bef9c1c19f6db04688973c82826888a316b9a8ff8353f1b1fcad26.scope: Deactivated successfully.
Dec 06 07:52:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:33.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:33 compute-0 nova_compute[251992]: 2025-12-06 07:52:33.970 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.019 251996 DEBUG oslo_concurrency.lockutils [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.020 251996 DEBUG oslo_concurrency.lockutils [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.020 251996 DEBUG oslo_concurrency.lockutils [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.020 251996 DEBUG oslo_concurrency.lockutils [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.021 251996 DEBUG oslo_concurrency.lockutils [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.022 251996 INFO nova.compute.manager [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Terminating instance
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.023 251996 DEBUG nova.compute.manager [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:52:34 compute-0 kernel: tap1d320b87-e6 (unregistering): left promiscuous mode
Dec 06 07:52:34 compute-0 NetworkManager[48965]: <info>  [1765007554.0708] device (tap1d320b87-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:52:34 compute-0 ovn_controller[147168]: 2025-12-06T07:52:34Z|00640|binding|INFO|Releasing lport 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc from this chassis (sb_readonly=0)
Dec 06 07:52:34 compute-0 ovn_controller[147168]: 2025-12-06T07:52:34Z|00641|binding|INFO|Setting lport 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc down in Southbound
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.078 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-0 ovn_controller[147168]: 2025-12-06T07:52:34Z|00642|binding|INFO|Removing iface tap1d320b87-e6 ovn-installed in OVS
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.085 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:34.155 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:01:5b 10.100.0.14'], port_security=['fa:16:3e:b7:01:5b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0b9681c0-c0e7-4bd8-9040-865c1bff517b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfa713d92cc94fa1b94404ed58b0563f', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'a26fe1ae-b98b-40c8-b5a2-fa6264313a90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.189', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5411afcf-f935-4976-affc-7b12214f8e50, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:52:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:34.158 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc in datapath 45904a2f-a5c2-4047-9c19-a87d36354c1b unbound from our chassis
Dec 06 07:52:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:34.160 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45904a2f-a5c2-4047-9c19-a87d36354c1b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.160 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:34.162 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[44ea4990-f0f8-469d-b049-84b5134c55b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:34.163 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b namespace which is not needed anymore
Dec 06 07:52:34 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a8.scope: Deactivated successfully.
Dec 06 07:52:34 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a8.scope: Consumed 16.520s CPU time.
Dec 06 07:52:34 compute-0 systemd-machined[212986]: Machine qemu-79-instance-000000a8 terminated.
Dec 06 07:52:34 compute-0 podman[363487]: 2025-12-06 07:52:34.223020625 +0000 UTC m=+1.452161982 container attach c481cf4a31bef9c1c19f6db04688973c82826888a316b9a8ff8353f1b1fcad26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_cannon, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:52:34 compute-0 podman[363487]: 2025-12-06 07:52:34.224980108 +0000 UTC m=+1.454121465 container died c481cf4a31bef9c1c19f6db04688973c82826888a316b9a8ff8353f1b1fcad26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_cannon, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.252 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.259 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.266 251996 INFO nova.virt.libvirt.driver [-] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Instance destroyed successfully.
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.267 251996 DEBUG nova.objects.instance [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'resources' on Instance uuid 0b9681c0-c0e7-4bd8-9040-865c1bff517b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:52:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dc013e45adaf8966eb5260c68740c9e06a28e41fa3fce0ca2a544bb5628c518-merged.mount: Deactivated successfully.
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.286 251996 DEBUG nova.virt.libvirt.vif [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-06T07:50:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-676074581',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-676074581',id=168,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNKYB7DoHCE/Zq13G9bNkMnqez+ah/cvrfQVcJ98DATDISc+hKRWhcY3n96hXJBgGVFk2F3L+nAWB+E3c8HLOV2K86PN0fFtoRWNhKFWMW6mo0EoTV5X0kp2CVI8eUnC1g==',key_name='tempest-keypair-12606996',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:51:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cfa713d92cc94fa1b94404ed58b0563f',ramdisk_id='',reservation_id='r-qd0qnxrq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1510980811',owner_user_name='tempest-AttachVolumeShelveTestJSON-1510980811-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:51:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9de6e67724c898a8e23b05fbf14da',uuid=0b9681c0-c0e7-4bd8-9040-865c1bff517b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.287 251996 DEBUG nova.network.os_vif_util [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converting VIF {"id": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "address": "fa:16:3e:b7:01:5b", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1d320b87-e6", "ovs_interfaceid": "1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.288 251996 DEBUG nova.network.os_vif_util [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.289 251996 DEBUG os_vif [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.291 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.292 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d320b87-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.294 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.296 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.301 251996 INFO os_vif [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:01:5b,bridge_name='br-int',has_traffic_filtering=True,id=1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1d320b87-e6')
Dec 06 07:52:34 compute-0 podman[363487]: 2025-12-06 07:52:34.339826976 +0000 UTC m=+1.568968333 container remove c481cf4a31bef9c1c19f6db04688973c82826888a316b9a8ff8353f1b1fcad26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.346 251996 DEBUG nova.compute.manager [req-f35aa0bc-9e6c-49f4-80ec-83297e97103e req-04ded87a-da63-4eed-b61d-0bd5d8c1bf98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-vif-unplugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.347 251996 DEBUG oslo_concurrency.lockutils [req-f35aa0bc-9e6c-49f4-80ec-83297e97103e req-04ded87a-da63-4eed-b61d-0bd5d8c1bf98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.347 251996 DEBUG oslo_concurrency.lockutils [req-f35aa0bc-9e6c-49f4-80ec-83297e97103e req-04ded87a-da63-4eed-b61d-0bd5d8c1bf98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.347 251996 DEBUG oslo_concurrency.lockutils [req-f35aa0bc-9e6c-49f4-80ec-83297e97103e req-04ded87a-da63-4eed-b61d-0bd5d8c1bf98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.347 251996 DEBUG nova.compute.manager [req-f35aa0bc-9e6c-49f4-80ec-83297e97103e req-04ded87a-da63-4eed-b61d-0bd5d8c1bf98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] No waiting events found dispatching network-vif-unplugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.347 251996 DEBUG nova.compute.manager [req-f35aa0bc-9e6c-49f4-80ec-83297e97103e req-04ded87a-da63-4eed-b61d-0bd5d8c1bf98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-vif-unplugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:52:34 compute-0 systemd[1]: libpod-conmon-c481cf4a31bef9c1c19f6db04688973c82826888a316b9a8ff8353f1b1fcad26.scope: Deactivated successfully.
Dec 06 07:52:34 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[362449]: [NOTICE]   (362453) : haproxy version is 2.8.14-c23fe91
Dec 06 07:52:34 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[362449]: [NOTICE]   (362453) : path to executable is /usr/sbin/haproxy
Dec 06 07:52:34 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[362449]: [WARNING]  (362453) : Exiting Master process...
Dec 06 07:52:34 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[362449]: [ALERT]    (362453) : Current worker (362455) exited with code 143 (Terminated)
Dec 06 07:52:34 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[362449]: [WARNING]  (362453) : All workers exited. Exiting... (0)
Dec 06 07:52:34 compute-0 systemd[1]: libpod-d71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10.scope: Deactivated successfully.
Dec 06 07:52:34 compute-0 podman[363579]: 2025-12-06 07:52:34.472768764 +0000 UTC m=+0.077622256 container died d71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:52:34 compute-0 ceph-mon[74339]: pgmap v2971: 305 pgs: 305 active+clean; 676 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.0 MiB/s wr, 26 op/s
Dec 06 07:52:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/853213052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:34.637 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.637 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10-userdata-shm.mount: Deactivated successfully.
Dec 06 07:52:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb26d145b5b247fa3e4586221b30f95235183fc8b21d5f967d648ad7eee627ea-merged.mount: Deactivated successfully.
Dec 06 07:52:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:34.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:34 compute-0 podman[363601]: 2025-12-06 07:52:34.769198521 +0000 UTC m=+0.275631888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:52:34 compute-0 podman[363579]: 2025-12-06 07:52:34.798983045 +0000 UTC m=+0.403836537 container cleanup d71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:52:34 compute-0 systemd[1]: libpod-conmon-d71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10.scope: Deactivated successfully.
Dec 06 07:52:34 compute-0 podman[363601]: 2025-12-06 07:52:34.862278453 +0000 UTC m=+0.368711800 container create 4afeadc1926e745eff8fcd4991f00ef1a957457451a4a542a3b743d4e608b47f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:52:34 compute-0 systemd[1]: Started libpod-conmon-4afeadc1926e745eff8fcd4991f00ef1a957457451a4a542a3b743d4e608b47f.scope.
Dec 06 07:52:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:52:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c262330062acceca9e2c452b0d3bbd1d267f601c901d384488fb3a522f5a848/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c262330062acceca9e2c452b0d3bbd1d267f601c901d384488fb3a522f5a848/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c262330062acceca9e2c452b0d3bbd1d267f601c901d384488fb3a522f5a848/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c262330062acceca9e2c452b0d3bbd1d267f601c901d384488fb3a522f5a848/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:52:34 compute-0 podman[363628]: 2025-12-06 07:52:34.957452851 +0000 UTC m=+0.134088609 container remove d71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 07:52:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:34.967 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0873cc-7e7c-416f-840a-230dab6dceed]: (4, ('Sat Dec  6 07:52:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b (d71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10)\nd71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10\nSat Dec  6 07:52:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b (d71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10)\nd71749f2a2c05f140f29895c19cac516af7a1826d6b7da1355b9eda06d18dc10\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:34.971 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2b693a06-c2af-4277-b9d4-3a45ea47f9f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:34.972 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45904a2f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.974 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-0 kernel: tap45904a2f-a0: left promiscuous mode
Dec 06 07:52:34 compute-0 podman[363601]: 2025-12-06 07:52:34.981072088 +0000 UTC m=+0.487505465 container init 4afeadc1926e745eff8fcd4991f00ef1a957457451a4a542a3b743d4e608b47f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cori, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Dec 06 07:52:34 compute-0 nova_compute[251992]: 2025-12-06 07:52:34.989 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:34 compute-0 podman[363601]: 2025-12-06 07:52:34.991699225 +0000 UTC m=+0.498132572 container start 4afeadc1926e745eff8fcd4991f00ef1a957457451a4a542a3b743d4e608b47f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cori, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 07:52:34 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:34.993 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b330ed86-599c-46a4-9201-a26812a2862e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:35 compute-0 podman[363601]: 2025-12-06 07:52:35.001284583 +0000 UTC m=+0.507717950 container attach 4afeadc1926e745eff8fcd4991f00ef1a957457451a4a542a3b743d4e608b47f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:52:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:35.007 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1149d8f2-e828-48e0-9d64-17fd00155984]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:35.009 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[81da1baf-24fb-4a5b-bc72-69090e5f7b3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.015 251996 INFO nova.virt.libvirt.driver [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Deleting instance files /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b_del
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.016 251996 INFO nova.virt.libvirt.driver [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Deletion of /var/lib/nova/instances/0b9681c0-c0e7-4bd8-9040-865c1bff517b_del complete
Dec 06 07:52:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:35.029 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[16daf7a2-dd37-44f7-9f68-d7a78457993c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778750, 'reachable_time': 35447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363651, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:35.034 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:52:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:35.035 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[08508a47-2168-421e-a540-8afa0096fa89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d45904a2f\x2da5c2\x2d4047\x2d9c19\x2da87d36354c1b.mount: Deactivated successfully.
Dec 06 07:52:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:35.036 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.085 251996 INFO nova.compute.manager [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Took 1.06 seconds to destroy the instance on the hypervisor.
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.086 251996 DEBUG oslo.service.loopingcall [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.086 251996 DEBUG nova.compute.manager [-] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.086 251996 DEBUG nova.network.neutron [-] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:52:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 653 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 60 KiB/s rd, 2.6 MiB/s wr, 84 op/s
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.420 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.421 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.421 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.422 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.455 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.722 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.722 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.722 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:52:35 compute-0 nova_compute[251992]: 2025-12-06 07:52:35.722 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:52:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:35.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:35 compute-0 practical_cori[363644]: {
Dec 06 07:52:35 compute-0 practical_cori[363644]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:52:35 compute-0 practical_cori[363644]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:52:35 compute-0 practical_cori[363644]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:52:35 compute-0 practical_cori[363644]:         "osd_id": 0,
Dec 06 07:52:35 compute-0 practical_cori[363644]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:52:35 compute-0 practical_cori[363644]:         "type": "bluestore"
Dec 06 07:52:35 compute-0 practical_cori[363644]:     }
Dec 06 07:52:35 compute-0 practical_cori[363644]: }
Dec 06 07:52:35 compute-0 systemd[1]: libpod-4afeadc1926e745eff8fcd4991f00ef1a957457451a4a542a3b743d4e608b47f.scope: Deactivated successfully.
Dec 06 07:52:35 compute-0 podman[363601]: 2025-12-06 07:52:35.841053432 +0000 UTC m=+1.347486799 container died 4afeadc1926e745eff8fcd4991f00ef1a957457451a4a542a3b743d4e608b47f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 07:52:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:36.038 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.436 251996 DEBUG nova.compute.manager [req-b4438e9d-dee3-4600-a2bd-01818c632c67 req-c8c2c276-5f83-4984-b5d2-86393010c47d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.437 251996 DEBUG oslo_concurrency.lockutils [req-b4438e9d-dee3-4600-a2bd-01818c632c67 req-c8c2c276-5f83-4984-b5d2-86393010c47d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.437 251996 DEBUG oslo_concurrency.lockutils [req-b4438e9d-dee3-4600-a2bd-01818c632c67 req-c8c2c276-5f83-4984-b5d2-86393010c47d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.437 251996 DEBUG oslo_concurrency.lockutils [req-b4438e9d-dee3-4600-a2bd-01818c632c67 req-c8c2c276-5f83-4984-b5d2-86393010c47d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.437 251996 DEBUG nova.compute.manager [req-b4438e9d-dee3-4600-a2bd-01818c632c67 req-c8c2c276-5f83-4984-b5d2-86393010c47d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] No waiting events found dispatching network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.437 251996 WARNING nova.compute.manager [req-b4438e9d-dee3-4600-a2bd-01818c632c67 req-c8c2c276-5f83-4984-b5d2-86393010c47d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received unexpected event network-vif-plugged-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc for instance with vm_state active and task_state deleting.
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.577 251996 DEBUG nova.network.neutron [-] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.584 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.605 251996 INFO nova.compute.manager [-] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Took 1.52 seconds to deallocate network for instance.
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.657 251996 DEBUG nova.compute.manager [req-48e88125-765d-41ca-8842-ea7527b18942 req-a02293c8-02e0-475f-8ebe-9cb81a6f5594 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Received event network-vif-deleted-1d320b87-e6ec-40ef-b2ce-50bd50b6f5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.659 251996 DEBUG oslo_concurrency.lockutils [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.659 251996 DEBUG oslo_concurrency.lockutils [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:36.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:36 compute-0 nova_compute[251992]: 2025-12-06 07:52:36.747 251996 DEBUG oslo_concurrency.processutils [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c262330062acceca9e2c452b0d3bbd1d267f601c901d384488fb3a522f5a848-merged.mount: Deactivated successfully.
Dec 06 07:52:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/927376835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2536026822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:37 compute-0 podman[363601]: 2025-12-06 07:52:37.12744967 +0000 UTC m=+2.633883017 container remove 4afeadc1926e745eff8fcd4991f00ef1a957457451a4a542a3b743d4e608b47f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cori, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 07:52:37 compute-0 sudo[363423]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:52:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 637 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 62 KiB/s rd, 2.6 MiB/s wr, 83 op/s
Dec 06 07:52:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:52:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/171804119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:37 compute-0 systemd[1]: libpod-conmon-4afeadc1926e745eff8fcd4991f00ef1a957457451a4a542a3b743d4e608b47f.scope: Deactivated successfully.
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.252 251996 DEBUG oslo_concurrency.processutils [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.259 251996 DEBUG nova.compute.provider_tree [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.281 251996 DEBUG nova.scheduler.client.report [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:52:37 compute-0 podman[363669]: 2025-12-06 07:52:37.320052047 +0000 UTC m=+1.445767700 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.321 251996 DEBUG oslo_concurrency.lockutils [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.384 251996 INFO nova.scheduler.client.report [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Deleted allocations for instance 0b9681c0-c0e7-4bd8-9040-865c1bff517b
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.490 251996 DEBUG oslo_concurrency.lockutils [None req-e5bc2869-8096-4d52-b683-4dcd6113872f 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "0b9681c0-c0e7-4bd8-9040-865c1bff517b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:52:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:37 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2a2bdcaf-508e-4e5c-a4a6-8404aaf63e6b does not exist
Dec 06 07:52:37 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 73c837a2-bfc6-48df-aa3f-2a183285aadf does not exist
Dec 06 07:52:37 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e67cb31b-59b5-4d85-8fa1-b11ab92cace9 does not exist
Dec 06 07:52:37 compute-0 sudo[363729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:37 compute-0 sudo[363729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:37 compute-0 sudo[363729]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:37 compute-0 sudo[363754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:52:37 compute-0 sudo[363754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:37 compute-0 sudo[363754]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.670 251996 DEBUG oslo_concurrency.lockutils [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.671 251996 DEBUG oslo_concurrency.lockutils [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.691 251996 DEBUG nova.objects.instance [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lazy-loading 'flavor' on Instance uuid 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:52:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.744 251996 DEBUG oslo_concurrency.lockutils [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.796 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Updating instance_info_cache with network_info: [{"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:52:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:37.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.811 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-6e187078-1e6f-4c22-9510-ed8116b14ae5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.811 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.812 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.812 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.812 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.812 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.813 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.813 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.995 251996 DEBUG oslo_concurrency.lockutils [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.996 251996 DEBUG oslo_concurrency.lockutils [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:37 compute-0 nova_compute[251992]: 2025-12-06 07:52:37.996 251996 INFO nova.compute.manager [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Attaching volume 5cd6cf93-6842-435d-92c9-f0e7f8a0e1f7 to /dev/vdb
Dec 06 07:52:38 compute-0 ceph-mon[74339]: pgmap v2972: 305 pgs: 305 active+clean; 653 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 60 KiB/s rd, 2.6 MiB/s wr, 84 op/s
Dec 06 07:52:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/171804119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.401 251996 DEBUG os_brick.utils [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.402 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.434 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.434 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[979745f3-1d40-42c2-b29c-7fb93c8d88ee]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.435 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.445 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.446 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5de4f3-f8c8-4939-a16b-6adc2c420735]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.447 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.457 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.457 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[ef1ad16f-b50e-436c-aa4c-b464e94d172f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.458 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[56361d60-edab-4a1b-b6c5-a3e274593d14]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.459 251996 DEBUG oslo_concurrency.processutils [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.494 251996 DEBUG oslo_concurrency.processutils [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.497 251996 DEBUG os_brick.initiator.connectors.lightos [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.497 251996 DEBUG os_brick.initiator.connectors.lightos [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.497 251996 DEBUG os_brick.initiator.connectors.lightos [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.498 251996 DEBUG os_brick.utils [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] <== get_connector_properties: return (96ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:52:38 compute-0 nova_compute[251992]: 2025-12-06 07:52:38.498 251996 DEBUG nova.virt.block_device [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Updating existing volume attachment record: 0853f26e-9475-461f-837c-4ce0b71081e7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:52:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:38.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:39 compute-0 ceph-mon[74339]: pgmap v2973: 305 pgs: 305 active+clean; 637 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 62 KiB/s rd, 2.6 MiB/s wr, 83 op/s
Dec 06 07:52:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2547543116' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 611 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 81 op/s
Dec 06 07:52:39 compute-0 nova_compute[251992]: 2025-12-06 07:52:39.295 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:39 compute-0 nova_compute[251992]: 2025-12-06 07:52:39.299 251996 DEBUG nova.objects.instance [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lazy-loading 'flavor' on Instance uuid 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:52:39 compute-0 nova_compute[251992]: 2025-12-06 07:52:39.411 251996 DEBUG nova.virt.libvirt.driver [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Attempting to attach volume 5cd6cf93-6842-435d-92c9-f0e7f8a0e1f7 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:52:39 compute-0 nova_compute[251992]: 2025-12-06 07:52:39.415 251996 DEBUG nova.virt.libvirt.guest [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:52:39 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:52:39 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-5cd6cf93-6842-435d-92c9-f0e7f8a0e1f7">
Dec 06 07:52:39 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:52:39 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:52:39 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:52:39 compute-0 nova_compute[251992]:   </source>
Dec 06 07:52:39 compute-0 nova_compute[251992]:   <auth username="openstack">
Dec 06 07:52:39 compute-0 nova_compute[251992]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:52:39 compute-0 nova_compute[251992]:   </auth>
Dec 06 07:52:39 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:52:39 compute-0 nova_compute[251992]:   <serial>5cd6cf93-6842-435d-92c9-f0e7f8a0e1f7</serial>
Dec 06 07:52:39 compute-0 nova_compute[251992]: </disk>
Dec 06 07:52:39 compute-0 nova_compute[251992]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:52:39 compute-0 nova_compute[251992]: 2025-12-06 07:52:39.555 251996 DEBUG nova.virt.libvirt.driver [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:52:39 compute-0 nova_compute[251992]: 2025-12-06 07:52:39.556 251996 DEBUG nova.virt.libvirt.driver [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:52:39 compute-0 nova_compute[251992]: 2025-12-06 07:52:39.557 251996 DEBUG nova.virt.libvirt.driver [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:52:39 compute-0 nova_compute[251992]: 2025-12-06 07:52:39.557 251996 DEBUG nova.virt.libvirt.driver [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] No VIF found with MAC fa:16:3e:93:4f:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:52:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:39.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:40 compute-0 nova_compute[251992]: 2025-12-06 07:52:40.426 251996 DEBUG oslo_concurrency.lockutils [None req-c8625d23-adcb-4d60-bb5a-9be332edb3f5 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1634490562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2287609545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4213075360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:40.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 682 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 123 op/s
Dec 06 07:52:41 compute-0 nova_compute[251992]: 2025-12-06 07:52:41.586 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:41 compute-0 nova_compute[251992]: 2025-12-06 07:52:41.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:52:41 compute-0 nova_compute[251992]: 2025-12-06 07:52:41.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:52:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:41.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:42.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:52:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:52:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:52:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:52:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:52:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:52:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 682 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.6 MiB/s wr, 120 op/s
Dec 06 07:52:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:43.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:44 compute-0 nova_compute[251992]: 2025-12-06 07:52:44.299 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:44 compute-0 podman[363810]: 2025-12-06 07:52:44.401044394 +0000 UTC m=+0.059884287 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 07:52:44 compute-0 podman[363809]: 2025-12-06 07:52:44.401601259 +0000 UTC m=+0.060174845 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 06 07:52:44 compute-0 sudo[363846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:44 compute-0 sudo[363846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:44 compute-0 sudo[363846]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:44 compute-0 sudo[363871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:52:44 compute-0 sudo[363871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:52:44 compute-0 sudo[363871]: pam_unix(sudo:session): session closed for user root
Dec 06 07:52:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:44.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:44 compute-0 ceph-mon[74339]: pgmap v2974: 305 pgs: 305 active+clean; 611 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.0 MiB/s wr, 81 op/s
Dec 06 07:52:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 702 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 113 op/s
Dec 06 07:52:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:45.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1235752040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:45 compute-0 ceph-mon[74339]: pgmap v2975: 305 pgs: 305 active+clean; 682 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.7 MiB/s wr, 123 op/s
Dec 06 07:52:45 compute-0 ceph-mon[74339]: pgmap v2976: 305 pgs: 305 active+clean; 682 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.6 MiB/s wr, 120 op/s
Dec 06 07:52:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/888545578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:45 compute-0 nova_compute[251992]: 2025-12-06 07:52:45.988 251996 DEBUG oslo_concurrency.lockutils [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:45 compute-0 nova_compute[251992]: 2025-12-06 07:52:45.988 251996 DEBUG oslo_concurrency.lockutils [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.008 251996 INFO nova.compute.manager [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Detaching volume 5cd6cf93-6842-435d-92c9-f0e7f8a0e1f7
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.180 251996 INFO nova.virt.block_device [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Attempting to driver detach volume 5cd6cf93-6842-435d-92c9-f0e7f8a0e1f7 from mountpoint /dev/vdb
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.187 251996 DEBUG nova.virt.libvirt.driver [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Attempting to detach device vdb from instance 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.188 251996 DEBUG nova.virt.libvirt.guest [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:52:46 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:52:46 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-5cd6cf93-6842-435d-92c9-f0e7f8a0e1f7">
Dec 06 07:52:46 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:52:46 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:52:46 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:52:46 compute-0 nova_compute[251992]:   </source>
Dec 06 07:52:46 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:52:46 compute-0 nova_compute[251992]:   <serial>5cd6cf93-6842-435d-92c9-f0e7f8a0e1f7</serial>
Dec 06 07:52:46 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:52:46 compute-0 nova_compute[251992]: </disk>
Dec 06 07:52:46 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.199 251996 INFO nova.virt.libvirt.driver [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Successfully detached device vdb from instance 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 from the persistent domain config.
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.199 251996 DEBUG nova.virt.libvirt.driver [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.200 251996 DEBUG nova.virt.libvirt.guest [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:52:46 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:52:46 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-5cd6cf93-6842-435d-92c9-f0e7f8a0e1f7">
Dec 06 07:52:46 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:52:46 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:52:46 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:52:46 compute-0 nova_compute[251992]:   </source>
Dec 06 07:52:46 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:52:46 compute-0 nova_compute[251992]:   <serial>5cd6cf93-6842-435d-92c9-f0e7f8a0e1f7</serial>
Dec 06 07:52:46 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:52:46 compute-0 nova_compute[251992]: </disk>
Dec 06 07:52:46 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.434 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Received event <DeviceRemovedEvent: 1765007566.4344923, 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.437 251996 DEBUG nova.virt.libvirt.driver [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.440 251996 INFO nova.virt.libvirt.driver [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Successfully detached device vdb from instance 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 from the live domain config.
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.588 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.664 251996 DEBUG nova.objects.instance [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lazy-loading 'flavor' on Instance uuid 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:52:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:46.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:46 compute-0 nova_compute[251992]: 2025-12-06 07:52:46.721 251996 DEBUG oslo_concurrency.lockutils [None req-21dc42a3-63f2-4973-8400-d02e2ef5a4ec 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.094 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.095 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.112 251996 DEBUG nova.compute.manager [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.185 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.185 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.192 251996 DEBUG nova.virt.hardware [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.193 251996 INFO nova.compute.claims [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:52:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 702 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 79 op/s
Dec 06 07:52:47 compute-0 ceph-mon[74339]: pgmap v2977: 305 pgs: 305 active+clean; 702 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 113 op/s
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.344 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:52:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3784843722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.776 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.783 251996 DEBUG nova.compute.provider_tree [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.802 251996 DEBUG nova.scheduler.client.report [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:52:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:47.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.826 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.827 251996 DEBUG nova.compute.manager [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.888 251996 DEBUG nova.compute.manager [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:52:47 compute-0 nova_compute[251992]: 2025-12-06 07:52:47.888 251996 DEBUG nova.network.neutron [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:52:48 compute-0 ceph-mon[74339]: pgmap v2978: 305 pgs: 305 active+clean; 702 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 79 op/s
Dec 06 07:52:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3784843722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:52:48 compute-0 nova_compute[251992]: 2025-12-06 07:52:48.624 251996 INFO nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:52:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:48.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.093 251996 DEBUG nova.compute.manager [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:52:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 706 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 100 op/s
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.264 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007554.2634242, 0b9681c0-c0e7-4bd8-9040-865c1bff517b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.265 251996 INFO nova.compute.manager [-] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] VM Stopped (Lifecycle Event)
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.302 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.305 251996 DEBUG nova.compute.manager [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.307 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.308 251996 INFO nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Creating image(s)
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.342 251996 DEBUG nova.storage.rbd_utils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.371 251996 DEBUG nova.storage.rbd_utils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.404 251996 DEBUG nova.storage.rbd_utils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.409 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.443 251996 DEBUG nova.policy [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '90c9de6e67724c898a8e23b05fbf14da', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cfa713d92cc94fa1b94404ed58b0563f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.448 251996 DEBUG nova.compute.manager [None req-b2b210fc-80f5-4f99-9578-5e53458037d3 - - - - - -] [instance: 0b9681c0-c0e7-4bd8-9040-865c1bff517b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.479 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.479 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.480 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.481 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.512 251996 DEBUG nova.storage.rbd_utils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:52:49 compute-0 nova_compute[251992]: 2025-12-06 07:52:49.517 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:49.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:50 compute-0 nova_compute[251992]: 2025-12-06 07:52:50.254 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.737s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:50 compute-0 nova_compute[251992]: 2025-12-06 07:52:50.323 251996 DEBUG nova.storage.rbd_utils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] resizing rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:52:50 compute-0 nova_compute[251992]: 2025-12-06 07:52:50.426 251996 DEBUG nova.objects.instance [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'migration_context' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:52:50 compute-0 nova_compute[251992]: 2025-12-06 07:52:50.445 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:52:50 compute-0 nova_compute[251992]: 2025-12-06 07:52:50.446 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Ensure instance console log exists: /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:52:50 compute-0 nova_compute[251992]: 2025-12-06 07:52:50.446 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:50 compute-0 nova_compute[251992]: 2025-12-06 07:52:50.447 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:50 compute-0 nova_compute[251992]: 2025-12-06 07:52:50.447 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:50 compute-0 ceph-mon[74339]: pgmap v2979: 305 pgs: 305 active+clean; 706 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 100 op/s
Dec 06 07:52:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:50.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 714 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 172 op/s
Dec 06 07:52:51 compute-0 nova_compute[251992]: 2025-12-06 07:52:51.630 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:51.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.053 251996 DEBUG nova.compute.manager [req-d6e0fdbe-cb42-469b-a8be-bfdf76d8ac19 req-ec9dde82-2c91-4451-8f4f-60f7548d5f38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Received event network-changed-44c14266-b77c-4585-b6af-08f5afc76ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.054 251996 DEBUG nova.compute.manager [req-d6e0fdbe-cb42-469b-a8be-bfdf76d8ac19 req-ec9dde82-2c91-4451-8f4f-60f7548d5f38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Refreshing instance network info cache due to event network-changed-44c14266-b77c-4585-b6af-08f5afc76ad9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.054 251996 DEBUG oslo_concurrency.lockutils [req-d6e0fdbe-cb42-469b-a8be-bfdf76d8ac19 req-ec9dde82-2c91-4451-8f4f-60f7548d5f38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-822fc37e-13a4-4b1b-983f-6cc928c1dfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.054 251996 DEBUG oslo_concurrency.lockutils [req-d6e0fdbe-cb42-469b-a8be-bfdf76d8ac19 req-ec9dde82-2c91-4451-8f4f-60f7548d5f38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-822fc37e-13a4-4b1b-983f-6cc928c1dfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.054 251996 DEBUG nova.network.neutron [req-d6e0fdbe-cb42-469b-a8be-bfdf76d8ac19 req-ec9dde82-2c91-4451-8f4f-60f7548d5f38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Refreshing network info cache for port 44c14266-b77c-4585-b6af-08f5afc76ad9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.589 251996 DEBUG oslo_concurrency.lockutils [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.590 251996 DEBUG oslo_concurrency.lockutils [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.590 251996 DEBUG oslo_concurrency.lockutils [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.590 251996 DEBUG oslo_concurrency.lockutils [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.591 251996 DEBUG oslo_concurrency.lockutils [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.592 251996 INFO nova.compute.manager [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Terminating instance
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.593 251996 DEBUG nova.compute.manager [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:52:52 compute-0 kernel: tap44c14266-b7 (unregistering): left promiscuous mode
Dec 06 07:52:52 compute-0 NetworkManager[48965]: <info>  [1765007572.6539] device (tap44c14266-b7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:52:52 compute-0 ceph-mon[74339]: pgmap v2980: 305 pgs: 305 active+clean; 714 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 172 op/s
Dec 06 07:52:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:52 compute-0 ovn_controller[147168]: 2025-12-06T07:52:52Z|00643|binding|INFO|Releasing lport 44c14266-b77c-4585-b6af-08f5afc76ad9 from this chassis (sb_readonly=0)
Dec 06 07:52:52 compute-0 ovn_controller[147168]: 2025-12-06T07:52:52Z|00644|binding|INFO|Setting lport 44c14266-b77c-4585-b6af-08f5afc76ad9 down in Southbound
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.705 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:52 compute-0 ovn_controller[147168]: 2025-12-06T07:52:52Z|00645|binding|INFO|Removing iface tap44c14266-b7 ovn-installed in OVS
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.708 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.722 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:52.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:52.734 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:4f:f3 10.100.0.8'], port_security=['fa:16:3e:93:4f:f3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '822fc37e-13a4-4b1b-983f-6cc928c1dfa3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4cf19b89a6d46bca307e65731a9dd21', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8dd9f4b-9afe-430e-a0a0-846e8785c631', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1ea0b24-813d-4f2d-b582-53b5b07aa43a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=44c14266-b77c-4585-b6af-08f5afc76ad9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:52:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:52.735 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 44c14266-b77c-4585-b6af-08f5afc76ad9 in datapath 9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 unbound from our chassis
Dec 06 07:52:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:52.737 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:52:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:52.739 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9c25b303-0b95-41d8-873e-b45ba68481b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:52.740 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 namespace which is not needed anymore
Dec 06 07:52:52 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000aa.scope: Deactivated successfully.
Dec 06 07:52:52 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000aa.scope: Consumed 17.620s CPU time.
Dec 06 07:52:52 compute-0 systemd-machined[212986]: Machine qemu-80-instance-000000aa terminated.
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.814 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.819 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.824 251996 INFO nova.virt.libvirt.driver [-] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Instance destroyed successfully.
Dec 06 07:52:52 compute-0 nova_compute[251992]: 2025-12-06 07:52:52.824 251996 DEBUG nova.objects.instance [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lazy-loading 'resources' on Instance uuid 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:52:52 compute-0 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[362525]: [NOTICE]   (362529) : haproxy version is 2.8.14-c23fe91
Dec 06 07:52:52 compute-0 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[362525]: [NOTICE]   (362529) : path to executable is /usr/sbin/haproxy
Dec 06 07:52:52 compute-0 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[362525]: [WARNING]  (362529) : Exiting Master process...
Dec 06 07:52:52 compute-0 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[362525]: [WARNING]  (362529) : Exiting Master process...
Dec 06 07:52:52 compute-0 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[362525]: [ALERT]    (362529) : Current worker (362531) exited with code 143 (Terminated)
Dec 06 07:52:52 compute-0 neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7[362525]: [WARNING]  (362529) : All workers exited. Exiting... (0)
Dec 06 07:52:52 compute-0 systemd[1]: libpod-3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80.scope: Deactivated successfully.
Dec 06 07:52:52 compute-0 podman[364116]: 2025-12-06 07:52:52.883542081 +0000 UTC m=+0.046523676 container died 3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-eadd1968043f135b0cfdd8e8a4a8acfb1dbaefdc42d3ed88b0ba580b2356e4e8-merged.mount: Deactivated successfully.
Dec 06 07:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80-userdata-shm.mount: Deactivated successfully.
Dec 06 07:52:52 compute-0 podman[364116]: 2025-12-06 07:52:52.923965271 +0000 UTC m=+0.086946866 container cleanup 3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:52:52 compute-0 systemd[1]: libpod-conmon-3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80.scope: Deactivated successfully.
Dec 06 07:52:52 compute-0 podman[364151]: 2025-12-06 07:52:52.989397887 +0000 UTC m=+0.040924535 container remove 3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:52:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:52.995 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[56141f55-00eb-4a6f-a80c-02f1c11014a3]: (4, ('Sat Dec  6 07:52:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 (3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80)\n3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80\nSat Dec  6 07:52:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 (3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80)\n3d9be8f86e294116a3405eee022e12b3bc5f170cf6a8d261582cef4e96697f80\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:52.998 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d85b907c-5dd2-43ce-9b38-73cd16f66c15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:52.999 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e0e5f36-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:53 compute-0 nova_compute[251992]: 2025-12-06 07:52:53.001 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:53 compute-0 kernel: tap9e0e5f36-40: left promiscuous mode
Dec 06 07:52:53 compute-0 nova_compute[251992]: 2025-12-06 07:52:53.019 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:53 compute-0 nova_compute[251992]: 2025-12-06 07:52:53.020 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:53.022 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7cadf1aa-4423-4600-8481-ab3cd795095a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:53.044 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[390c5d4b-3c94-4582-8391-f8f18c519f5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:53.046 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d61257bb-c5aa-4ac1-85d6-80ae48dcce5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:53.063 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[df88809f-72d1-4e67-ad1b-2447bc1f6cd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778945, 'reachable_time': 35121, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364171, 'error': None, 'target': 'ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:53.065 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:52:53 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:52:53.065 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b7e4d0-2110-4fcc-a494-f4fd2b1f8bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:52:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d9e0e5f36\x2d40fa\x2d4d3b\x2db8ee\x2d8071f7ac21d7.mount: Deactivated successfully.
Dec 06 07:52:53 compute-0 nova_compute[251992]: 2025-12-06 07:52:53.113 251996 DEBUG nova.virt.libvirt.vif [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:51:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1142578868',display_name='tempest-TestStampPattern-server-1142578868',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1142578868',id=170,image_ref='83fea89a-3a0d-4881-b429-13684080bb6c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM8KhRxaTrkKNzMUybnifFqVhR7VOW5ilrhcPN+BlOV2c9vQAH2tT4hPBYJpZ93aPVMmrWQGW35OWGQh34F5+BdF2On//RqgE6BOka+CpM6HEuYW/HMTwic5wOTQHp91yg==',key_name='tempest-TestStampPattern-1707395411',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:51:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4cf19b89a6d46bca307e65731a9dd21',ramdisk_id='',reservation_id='r-5gszegm5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='e7d5d854-2a1f-485b-931a-4ec90cf7ba04',image_min_disk='1',image_min_ram='0',image_owner_id='c4cf19b89a6d46bca307e65731a9dd21',image_owner_project_name='tempest-TestStampPattern-1318067975',image_owner_user_name='tempest-TestStampPattern-1318067975-project-member',image_user_id='4962bc7b172346e19d127b46ea2d7a11',owner_project_name='tempest-TestStampPattern-1318067975',owner_user_name='tempest-TestStampPattern-1318067975-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:51:58Z,user_data=None,user_id='4962bc7b172346e19d127b46ea2d7a11',uuid=822fc37e-13a4-4b1b-983f-6cc928c1dfa3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state
='active') vif={"id": "44c14266-b77c-4585-b6af-08f5afc76ad9", "address": "fa:16:3e:93:4f:f3", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44c14266-b7", "ovs_interfaceid": "44c14266-b77c-4585-b6af-08f5afc76ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:52:53 compute-0 nova_compute[251992]: 2025-12-06 07:52:53.114 251996 DEBUG nova.network.os_vif_util [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Converting VIF {"id": "44c14266-b77c-4585-b6af-08f5afc76ad9", "address": "fa:16:3e:93:4f:f3", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44c14266-b7", "ovs_interfaceid": "44c14266-b77c-4585-b6af-08f5afc76ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:52:53 compute-0 nova_compute[251992]: 2025-12-06 07:52:53.114 251996 DEBUG nova.network.os_vif_util [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:4f:f3,bridge_name='br-int',has_traffic_filtering=True,id=44c14266-b77c-4585-b6af-08f5afc76ad9,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44c14266-b7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:52:53 compute-0 nova_compute[251992]: 2025-12-06 07:52:53.115 251996 DEBUG os_vif [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:4f:f3,bridge_name='br-int',has_traffic_filtering=True,id=44c14266-b77c-4585-b6af-08f5afc76ad9,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44c14266-b7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:52:53 compute-0 nova_compute[251992]: 2025-12-06 07:52:53.117 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:53 compute-0 nova_compute[251992]: 2025-12-06 07:52:53.117 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44c14266-b7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:53 compute-0 nova_compute[251992]: 2025-12-06 07:52:53.118 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:53 compute-0 nova_compute[251992]: 2025-12-06 07:52:53.119 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:53 compute-0 nova_compute[251992]: 2025-12-06 07:52:53.122 251996 INFO os_vif [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:4f:f3,bridge_name='br-int',has_traffic_filtering=True,id=44c14266-b77c-4585-b6af-08f5afc76ad9,network=Network(9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44c14266-b7')
Dec 06 07:52:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 714 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.2 MiB/s wr, 123 op/s
Dec 06 07:52:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:53.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:54 compute-0 nova_compute[251992]: 2025-12-06 07:52:54.206 251996 DEBUG nova.network.neutron [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Successfully created port: ad09ca6a-7b57-4547-9e95-4976d37ac5f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:52:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:54.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:54 compute-0 nova_compute[251992]: 2025-12-06 07:52:54.914 251996 DEBUG nova.compute.manager [req-3bb22757-a712-41d8-a843-7715c6737953 req-96c07873-ab08-4c86-a8e2-2df30ec824bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Received event network-vif-unplugged-44c14266-b77c-4585-b6af-08f5afc76ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:52:54 compute-0 nova_compute[251992]: 2025-12-06 07:52:54.914 251996 DEBUG oslo_concurrency.lockutils [req-3bb22757-a712-41d8-a843-7715c6737953 req-96c07873-ab08-4c86-a8e2-2df30ec824bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:54 compute-0 nova_compute[251992]: 2025-12-06 07:52:54.915 251996 DEBUG oslo_concurrency.lockutils [req-3bb22757-a712-41d8-a843-7715c6737953 req-96c07873-ab08-4c86-a8e2-2df30ec824bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:54 compute-0 nova_compute[251992]: 2025-12-06 07:52:54.915 251996 DEBUG oslo_concurrency.lockutils [req-3bb22757-a712-41d8-a843-7715c6737953 req-96c07873-ab08-4c86-a8e2-2df30ec824bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:54 compute-0 nova_compute[251992]: 2025-12-06 07:52:54.915 251996 DEBUG nova.compute.manager [req-3bb22757-a712-41d8-a843-7715c6737953 req-96c07873-ab08-4c86-a8e2-2df30ec824bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] No waiting events found dispatching network-vif-unplugged-44c14266-b77c-4585-b6af-08f5afc76ad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:52:54 compute-0 nova_compute[251992]: 2025-12-06 07:52:54.915 251996 DEBUG nova.compute.manager [req-3bb22757-a712-41d8-a843-7715c6737953 req-96c07873-ab08-4c86-a8e2-2df30ec824bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Received event network-vif-unplugged-44c14266-b77c-4585-b6af-08f5afc76ad9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:52:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 752 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.8 MiB/s wr, 195 op/s
Dec 06 07:52:55 compute-0 ceph-mon[74339]: pgmap v2981: 305 pgs: 305 active+clean; 714 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.2 MiB/s wr, 123 op/s
Dec 06 07:52:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:55.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:56 compute-0 nova_compute[251992]: 2025-12-06 07:52:56.297 251996 DEBUG nova.network.neutron [req-d6e0fdbe-cb42-469b-a8be-bfdf76d8ac19 req-ec9dde82-2c91-4451-8f4f-60f7548d5f38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Updated VIF entry in instance network info cache for port 44c14266-b77c-4585-b6af-08f5afc76ad9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:52:56 compute-0 nova_compute[251992]: 2025-12-06 07:52:56.298 251996 DEBUG nova.network.neutron [req-d6e0fdbe-cb42-469b-a8be-bfdf76d8ac19 req-ec9dde82-2c91-4451-8f4f-60f7548d5f38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Updating instance_info_cache with network_info: [{"id": "44c14266-b77c-4585-b6af-08f5afc76ad9", "address": "fa:16:3e:93:4f:f3", "network": {"id": "9e0e5f36-40fa-4d3b-b8ee-8071f7ac21d7", "bridge": "br-int", "label": "tempest-TestStampPattern-1578740976-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4cf19b89a6d46bca307e65731a9dd21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44c14266-b7", "ovs_interfaceid": "44c14266-b77c-4585-b6af-08f5afc76ad9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:52:56 compute-0 nova_compute[251992]: 2025-12-06 07:52:56.325 251996 DEBUG oslo_concurrency.lockutils [req-d6e0fdbe-cb42-469b-a8be-bfdf76d8ac19 req-ec9dde82-2c91-4451-8f4f-60f7548d5f38 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-822fc37e-13a4-4b1b-983f-6cc928c1dfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:52:56 compute-0 nova_compute[251992]: 2025-12-06 07:52:56.631 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:56.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:56 compute-0 ceph-mon[74339]: pgmap v2982: 305 pgs: 305 active+clean; 752 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.8 MiB/s wr, 195 op/s
Dec 06 07:52:56 compute-0 nova_compute[251992]: 2025-12-06 07:52:56.873 251996 DEBUG nova.network.neutron [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Successfully updated port: ad09ca6a-7b57-4547-9e95-4976d37ac5f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:52:56 compute-0 nova_compute[251992]: 2025-12-06 07:52:56.890 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:52:56 compute-0 nova_compute[251992]: 2025-12-06 07:52:56.890 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquired lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:52:56 compute-0 nova_compute[251992]: 2025-12-06 07:52:56.890 251996 DEBUG nova.network.neutron [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:52:57 compute-0 nova_compute[251992]: 2025-12-06 07:52:57.041 251996 DEBUG nova.network.neutron [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:52:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 752 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.0 MiB/s wr, 184 op/s
Dec 06 07:52:57 compute-0 nova_compute[251992]: 2025-12-06 07:52:57.554 251996 DEBUG nova.compute.manager [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Received event network-vif-plugged-44c14266-b77c-4585-b6af-08f5afc76ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:52:57 compute-0 nova_compute[251992]: 2025-12-06 07:52:57.554 251996 DEBUG oslo_concurrency.lockutils [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:57 compute-0 nova_compute[251992]: 2025-12-06 07:52:57.555 251996 DEBUG oslo_concurrency.lockutils [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:57 compute-0 nova_compute[251992]: 2025-12-06 07:52:57.556 251996 DEBUG oslo_concurrency.lockutils [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:57 compute-0 nova_compute[251992]: 2025-12-06 07:52:57.556 251996 DEBUG nova.compute.manager [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] No waiting events found dispatching network-vif-plugged-44c14266-b77c-4585-b6af-08f5afc76ad9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:52:57 compute-0 nova_compute[251992]: 2025-12-06 07:52:57.556 251996 WARNING nova.compute.manager [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Received unexpected event network-vif-plugged-44c14266-b77c-4585-b6af-08f5afc76ad9 for instance with vm_state active and task_state deleting.
Dec 06 07:52:57 compute-0 nova_compute[251992]: 2025-12-06 07:52:57.557 251996 DEBUG nova.compute.manager [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-changed-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:52:57 compute-0 nova_compute[251992]: 2025-12-06 07:52:57.557 251996 DEBUG nova.compute.manager [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Refreshing instance network info cache due to event network-changed-ad09ca6a-7b57-4547-9e95-4976d37ac5f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:52:57 compute-0 nova_compute[251992]: 2025-12-06 07:52:57.557 251996 DEBUG oslo_concurrency.lockutils [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:52:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:52:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:57.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.119 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.633 251996 DEBUG nova.network.neutron [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updating instance_info_cache with network_info: [{"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.667 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Releasing lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.668 251996 DEBUG nova.compute.manager [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Instance network_info: |[{"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.668 251996 DEBUG oslo_concurrency.lockutils [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.668 251996 DEBUG nova.network.neutron [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Refreshing network info cache for port ad09ca6a-7b57-4547-9e95-4976d37ac5f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.671 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Start _get_guest_xml network_info=[{"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.676 251996 WARNING nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.682 251996 DEBUG nova.virt.libvirt.host [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.683 251996 DEBUG nova.virt.libvirt.host [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.690 251996 DEBUG nova.virt.libvirt.host [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.691 251996 DEBUG nova.virt.libvirt.host [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.692 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.693 251996 DEBUG nova.virt.hardware [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.693 251996 DEBUG nova.virt.hardware [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.693 251996 DEBUG nova.virt.hardware [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.694 251996 DEBUG nova.virt.hardware [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.694 251996 DEBUG nova.virt.hardware [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.694 251996 DEBUG nova.virt.hardware [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.694 251996 DEBUG nova.virt.hardware [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.694 251996 DEBUG nova.virt.hardware [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.695 251996 DEBUG nova.virt.hardware [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.695 251996 DEBUG nova.virt.hardware [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.695 251996 DEBUG nova.virt.hardware [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:52:58 compute-0 nova_compute[251992]: 2025-12-06 07:52:58.698 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:52:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:52:58.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:52:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:52:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/901956373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:59 compute-0 ceph-mon[74339]: pgmap v2983: 305 pgs: 305 active+clean; 752 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.0 MiB/s wr, 184 op/s
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.184 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.213 251996 DEBUG nova.storage.rbd_utils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.218 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:52:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 744 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.0 MiB/s wr, 194 op/s
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.685 251996 INFO nova.virt.libvirt.driver [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Deleting instance files /var/lib/nova/instances/822fc37e-13a4-4b1b-983f-6cc928c1dfa3_del
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.687 251996 INFO nova.virt.libvirt.driver [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Deletion of /var/lib/nova/instances/822fc37e-13a4-4b1b-983f-6cc928c1dfa3_del complete
Dec 06 07:52:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:52:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3864168267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.728 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.730 251996 DEBUG nova.virt.libvirt.vif [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:52:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1180268370',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1180268370',id=172,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHi/zU/wIK9gDsaOwcMdl3RsHvsLUGXMCp6e+v7Vsr1tSU1UeVN9QkmLR8bRL7zUBTSmDE2iL72n56YVoqmlRT/okHDeuUDKoH9btDdrzNPAdyAh2Xwe7f5FUrx+EbYFg==',key_name='tempest-keypair-763555621',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfa713d92cc94fa1b94404ed58b0563f',ramdisk_id='',reservation_id='r-tqsa30w8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1510980811',owner_user_name='tempest-AttachVolumeShelveTestJSON-1510980811-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:52:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9de6e67724c898a8e23b05fbf14da',uuid=91b85b86-0d07-4df4-80d5-48fa343c00b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.730 251996 DEBUG nova.network.os_vif_util [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converting VIF {"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.731 251996 DEBUG nova.network.os_vif_util [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.734 251996 DEBUG nova.objects.instance [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'pci_devices' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.801 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:52:59 compute-0 nova_compute[251992]:   <uuid>91b85b86-0d07-4df4-80d5-48fa343c00b8</uuid>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   <name>instance-000000ac</name>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <nova:name>tempest-AttachVolumeShelveTestJSON-server-1180268370</nova:name>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:52:58</nova:creationTime>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <nova:user uuid="90c9de6e67724c898a8e23b05fbf14da">tempest-AttachVolumeShelveTestJSON-1510980811-project-member</nova:user>
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <nova:project uuid="cfa713d92cc94fa1b94404ed58b0563f">tempest-AttachVolumeShelveTestJSON-1510980811</nova:project>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <nova:port uuid="ad09ca6a-7b57-4547-9e95-4976d37ac5f9">
Dec 06 07:52:59 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <system>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <entry name="serial">91b85b86-0d07-4df4-80d5-48fa343c00b8</entry>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <entry name="uuid">91b85b86-0d07-4df4-80d5-48fa343c00b8</entry>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     </system>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   <os>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   </os>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   <features>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   </features>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/91b85b86-0d07-4df4-80d5-48fa343c00b8_disk">
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       </source>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/91b85b86-0d07-4df4-80d5-48fa343c00b8_disk.config">
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       </source>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:52:59 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:5d:9b:dc"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <target dev="tapad09ca6a-7b"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/console.log" append="off"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <video>
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     </video>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:52:59 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:52:59 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:52:59 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:52:59 compute-0 nova_compute[251992]: </domain>
Dec 06 07:52:59 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.802 251996 DEBUG nova.compute.manager [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Preparing to wait for external event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.802 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.803 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.803 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.804 251996 DEBUG nova.virt.libvirt.vif [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:52:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1180268370',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1180268370',id=172,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHi/zU/wIK9gDsaOwcMdl3RsHvsLUGXMCp6e+v7Vsr1tSU1UeVN9QkmLR8bRL7zUBTSmDE2iL72n56YVoqmlRT/okHDeuUDKoH9btDdrzNPAdyAh2Xwe7f5FUrx+EbYFg==',key_name='tempest-keypair-763555621',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cfa713d92cc94fa1b94404ed58b0563f',ramdisk_id='',reservation_id='r-tqsa30w8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1510980811',owner_user_name='tempest-AttachVolumeShelveTestJSON-1510980811-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:52:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9de6e67724c898a8e23b05fbf14da',uuid=91b85b86-0d07-4df4-80d5-48fa343c00b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.804 251996 DEBUG nova.network.os_vif_util [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converting VIF {"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.805 251996 DEBUG nova.network.os_vif_util [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.806 251996 DEBUG os_vif [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.806 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.807 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.807 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.811 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.811 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad09ca6a-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.812 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapad09ca6a-7b, col_values=(('external_ids', {'iface-id': 'ad09ca6a-7b57-4547-9e95-4976d37ac5f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:9b:dc', 'vm-uuid': '91b85b86-0d07-4df4-80d5-48fa343c00b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.814 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:59 compute-0 NetworkManager[48965]: <info>  [1765007579.8150] manager: (tapad09ca6a-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.816 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.820 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.821 251996 INFO os_vif [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b')
Dec 06 07:52:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:52:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:52:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:52:59.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.893 251996 INFO nova.compute.manager [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Took 7.30 seconds to destroy the instance on the hypervisor.
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.894 251996 DEBUG oslo.service.loopingcall [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.894 251996 DEBUG nova.compute.manager [-] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.894 251996 DEBUG nova.network.neutron [-] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.923 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.923 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.924 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No VIF found with MAC fa:16:3e:5d:9b:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.925 251996 INFO nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Using config drive
Dec 06 07:52:59 compute-0 nova_compute[251992]: 2025-12-06 07:52:59.951 251996 DEBUG nova.storage.rbd_utils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:53:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/901956373' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:53:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3864168267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.421 251996 INFO nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Creating config drive at /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/disk.config
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.427 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz83_5ic4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.570 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz83_5ic4" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.596 251996 DEBUG nova.storage.rbd_utils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] rbd image 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.599 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/disk.config 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:00.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.830 251996 DEBUG nova.network.neutron [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updated VIF entry in instance network info cache for port ad09ca6a-7b57-4547-9e95-4976d37ac5f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.831 251996 DEBUG nova.network.neutron [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updating instance_info_cache with network_info: [{"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.869 251996 DEBUG oslo_concurrency.lockutils [req-b5d567ec-28b7-4e26-859f-90e465a5dffb req-34c9fdf6-3c83-440d-9017-a38d15497ca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.899 251996 DEBUG nova.network.neutron [-] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.917 251996 INFO nova.compute.manager [-] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Took 1.02 seconds to deallocate network for instance.
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.928 251996 DEBUG nova.compute.manager [req-26018a1a-70da-4022-ba32-9cc8cd142b14 req-738ba2b1-77d0-4f35-a627-66d620e0346f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Received event network-vif-deleted-44c14266-b77c-4585-b6af-08f5afc76ad9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.958 251996 DEBUG oslo_concurrency.lockutils [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:00 compute-0 nova_compute[251992]: 2025-12-06 07:53:00.959 251996 DEBUG oslo_concurrency.lockutils [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.043 251996 DEBUG oslo_concurrency.processutils [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 720 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Dec 06 07:53:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:53:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1978308452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:53:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:53:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3308678200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.519 251996 DEBUG oslo_concurrency.processutils [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.525 251996 DEBUG nova.compute.provider_tree [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:53:01 compute-0 ceph-mon[74339]: pgmap v2984: 305 pgs: 305 active+clean; 744 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.0 MiB/s wr, 194 op/s
Dec 06 07:53:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1978308452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.544 251996 DEBUG nova.scheduler.client.report [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.568 251996 DEBUG oslo_concurrency.lockutils [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.600 251996 INFO nova.scheduler.client.report [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Deleted allocations for instance 822fc37e-13a4-4b1b-983f-6cc928c1dfa3
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.630 251996 DEBUG oslo_concurrency.processutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/disk.config 91b85b86-0d07-4df4-80d5-48fa343c00b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.631 251996 INFO nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Deleting local config drive /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8/disk.config because it was imported into RBD.
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.633 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:01 compute-0 kernel: tapad09ca6a-7b: entered promiscuous mode
Dec 06 07:53:01 compute-0 NetworkManager[48965]: <info>  [1765007581.6833] manager: (tapad09ca6a-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/293)
Dec 06 07:53:01 compute-0 ovn_controller[147168]: 2025-12-06T07:53:01Z|00646|binding|INFO|Claiming lport ad09ca6a-7b57-4547-9e95-4976d37ac5f9 for this chassis.
Dec 06 07:53:01 compute-0 ovn_controller[147168]: 2025-12-06T07:53:01Z|00647|binding|INFO|ad09ca6a-7b57-4547-9e95-4976d37ac5f9: Claiming fa:16:3e:5d:9b:dc 10.100.0.9
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.684 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.692 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:9b:dc 10.100.0.9'], port_security=['fa:16:3e:5d:9b:dc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '91b85b86-0d07-4df4-80d5-48fa343c00b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfa713d92cc94fa1b94404ed58b0563f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1e15bf02-5e56-4488-babc-bd5f6809e0ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5411afcf-f935-4976-affc-7b12214f8e50, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=ad09ca6a-7b57-4547-9e95-4976d37ac5f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.693 158118 INFO neutron.agent.ovn.metadata.agent [-] Port ad09ca6a-7b57-4547-9e95-4976d37ac5f9 in datapath 45904a2f-a5c2-4047-9c19-a87d36354c1b bound to our chassis
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.695 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45904a2f-a5c2-4047-9c19-a87d36354c1b
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.701 251996 DEBUG oslo_concurrency.lockutils [None req-b0c0281e-feb0-4935-b0b8-1612d8b71d0f 4962bc7b172346e19d127b46ea2d7a11 c4cf19b89a6d46bca307e65731a9dd21 - - default default] Lock "822fc37e-13a4-4b1b-983f-6cc928c1dfa3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.703 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:01 compute-0 ovn_controller[147168]: 2025-12-06T07:53:01Z|00648|binding|INFO|Setting lport ad09ca6a-7b57-4547-9e95-4976d37ac5f9 ovn-installed in OVS
Dec 06 07:53:01 compute-0 ovn_controller[147168]: 2025-12-06T07:53:01Z|00649|binding|INFO|Setting lport ad09ca6a-7b57-4547-9e95-4976d37ac5f9 up in Southbound
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.707 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.707 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2e58c55a-f34d-4f2c-b8b4-4d2b4d096942]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.708 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45904a2f-a1 in ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.709 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45904a2f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.710 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a09e32b1-f3cd-4a77-9bd5-e3b3cbb8f9d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.711 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e9113ee6-7988-4c48-bca2-d91f625dbb82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 systemd-udevd[364353]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.724 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[ee50d932-c492-4c30-a649-4d7a8b486432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 NetworkManager[48965]: <info>  [1765007581.7274] device (tapad09ca6a-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:53:01 compute-0 systemd-machined[212986]: New machine qemu-81-instance-000000ac.
Dec 06 07:53:01 compute-0 NetworkManager[48965]: <info>  [1765007581.7288] device (tapad09ca6a-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.740 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[705ad18b-9e47-4346-9300-f0b31547a37e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-000000ac.
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.775 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[5db99431-0532-49fd-9902-0985bf1a0883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 NetworkManager[48965]: <info>  [1765007581.7823] manager: (tap45904a2f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/294)
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.781 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1aedc3e2-4452-44d0-a93c-8d2b625d53b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.818 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ad16a76a-dbab-4994-8e2f-74a9d8ad81fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.821 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcb4d1a-2698-41a0-92e6-ee54151acdde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:01.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:01 compute-0 NetworkManager[48965]: <info>  [1765007581.8499] device (tap45904a2f-a0): carrier: link connected
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.856 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d6737f41-260d-418d-b688-c5a01a87cd47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.873 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e33a2e86-292a-4e8f-b8b9-58149934fa4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45904a2f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:67:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785443, 'reachable_time': 31858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364386, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.891 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9c8388-02e7-4bb4-b82d-958bde638719]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:67fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785443, 'tstamp': 785443}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364387, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.908 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2e79358b-a807-4fbe-b354-da4681c0460b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45904a2f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:67:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785443, 'reachable_time': 31858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364388, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.939 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[75b30286-ad16-463f-9e56-a3daf30243bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.992 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9065ff-1ae5-4228-97a6-62f6d7ccf466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.993 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45904a2f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.994 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:53:01 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:01.994 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45904a2f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:01 compute-0 nova_compute[251992]: 2025-12-06 07:53:01.996 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:01 compute-0 kernel: tap45904a2f-a0: entered promiscuous mode
Dec 06 07:53:01 compute-0 NetworkManager[48965]: <info>  [1765007581.9968] manager: (tap45904a2f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:02.001 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45904a2f-a0, col_values=(('external_ids', {'iface-id': 'e43e784e-bee5-49c8-8bc7-c45a17996abf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:02 compute-0 ovn_controller[147168]: 2025-12-06T07:53:02Z|00650|binding|INFO|Releasing lport e43e784e-bee5-49c8-8bc7-c45a17996abf from this chassis (sb_readonly=0)
Dec 06 07:53:02 compute-0 nova_compute[251992]: 2025-12-06 07:53:02.002 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:02 compute-0 nova_compute[251992]: 2025-12-06 07:53:02.003 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:02.005 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45904a2f-a5c2-4047-9c19-a87d36354c1b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45904a2f-a5c2-4047-9c19-a87d36354c1b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:02.010 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7544c936-516c-437c-8cae-a9914232e44c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:02.011 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-45904a2f-a5c2-4047-9c19-a87d36354c1b
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/45904a2f-a5c2-4047-9c19-a87d36354c1b.pid.haproxy
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 45904a2f-a5c2-4047-9c19-a87d36354c1b
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:53:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:02.012 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'env', 'PROCESS_TAG=haproxy-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45904a2f-a5c2-4047-9c19-a87d36354c1b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:53:02 compute-0 nova_compute[251992]: 2025-12-06 07:53:02.018 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:02 compute-0 podman[364427]: 2025-12-06 07:53:02.381968167 +0000 UTC m=+0.055820677 container create 47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 06 07:53:02 compute-0 systemd[1]: Started libpod-conmon-47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8.scope.
Dec 06 07:53:02 compute-0 podman[364427]: 2025-12-06 07:53:02.350436437 +0000 UTC m=+0.024288977 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:53:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fa28f91d4d8338c18590dae331aaac5c392a03fdbda5ee1403e4f89d563b276/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:02 compute-0 podman[364427]: 2025-12-06 07:53:02.47360123 +0000 UTC m=+0.147453760 container init 47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 07:53:02 compute-0 podman[364427]: 2025-12-06 07:53:02.479697854 +0000 UTC m=+0.153550364 container start 47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:53:02 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[364471]: [NOTICE]   (364480) : New worker (364482) forked
Dec 06 07:53:02 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[364471]: [NOTICE]   (364480) : Loading success.
Dec 06 07:53:02 compute-0 ceph-mon[74339]: pgmap v2985: 305 pgs: 305 active+clean; 720 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Dec 06 07:53:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3308678200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:02 compute-0 nova_compute[251992]: 2025-12-06 07:53:02.555 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007582.554705, 91b85b86-0d07-4df4-80d5-48fa343c00b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:53:02 compute-0 nova_compute[251992]: 2025-12-06 07:53:02.556 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] VM Started (Lifecycle Event)
Dec 06 07:53:02 compute-0 nova_compute[251992]: 2025-12-06 07:53:02.588 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:02 compute-0 nova_compute[251992]: 2025-12-06 07:53:02.592 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007582.5553813, 91b85b86-0d07-4df4-80d5-48fa343c00b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:53:02 compute-0 nova_compute[251992]: 2025-12-06 07:53:02.592 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] VM Paused (Lifecycle Event)
Dec 06 07:53:02 compute-0 nova_compute[251992]: 2025-12-06 07:53:02.614 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:02 compute-0 nova_compute[251992]: 2025-12-06 07:53:02.617 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:53:02 compute-0 nova_compute[251992]: 2025-12-06 07:53:02.638 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:53:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:02.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.072 251996 DEBUG nova.compute.manager [req-b11e3ecf-c4e4-46e5-8454-18405ee9cb62 req-3e39ab6d-1043-4a85-81be-71b14fe95f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.072 251996 DEBUG oslo_concurrency.lockutils [req-b11e3ecf-c4e4-46e5-8454-18405ee9cb62 req-3e39ab6d-1043-4a85-81be-71b14fe95f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.073 251996 DEBUG oslo_concurrency.lockutils [req-b11e3ecf-c4e4-46e5-8454-18405ee9cb62 req-3e39ab6d-1043-4a85-81be-71b14fe95f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.073 251996 DEBUG oslo_concurrency.lockutils [req-b11e3ecf-c4e4-46e5-8454-18405ee9cb62 req-3e39ab6d-1043-4a85-81be-71b14fe95f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.073 251996 DEBUG nova.compute.manager [req-b11e3ecf-c4e4-46e5-8454-18405ee9cb62 req-3e39ab6d-1043-4a85-81be-71b14fe95f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Processing event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.073 251996 DEBUG nova.compute.manager [req-b11e3ecf-c4e4-46e5-8454-18405ee9cb62 req-3e39ab6d-1043-4a85-81be-71b14fe95f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.073 251996 DEBUG oslo_concurrency.lockutils [req-b11e3ecf-c4e4-46e5-8454-18405ee9cb62 req-3e39ab6d-1043-4a85-81be-71b14fe95f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.074 251996 DEBUG oslo_concurrency.lockutils [req-b11e3ecf-c4e4-46e5-8454-18405ee9cb62 req-3e39ab6d-1043-4a85-81be-71b14fe95f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.074 251996 DEBUG oslo_concurrency.lockutils [req-b11e3ecf-c4e4-46e5-8454-18405ee9cb62 req-3e39ab6d-1043-4a85-81be-71b14fe95f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.074 251996 DEBUG nova.compute.manager [req-b11e3ecf-c4e4-46e5-8454-18405ee9cb62 req-3e39ab6d-1043-4a85-81be-71b14fe95f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] No waiting events found dispatching network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.074 251996 WARNING nova.compute.manager [req-b11e3ecf-c4e4-46e5-8454-18405ee9cb62 req-3e39ab6d-1043-4a85-81be-71b14fe95f8f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received unexpected event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 for instance with vm_state building and task_state spawning.
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.075 251996 DEBUG nova.compute.manager [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.078 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007583.0781431, 91b85b86-0d07-4df4-80d5-48fa343c00b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.078 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] VM Resumed (Lifecycle Event)
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.081 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.084 251996 INFO nova.virt.libvirt.driver [-] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Instance spawned successfully.
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.084 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.141 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.146 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.146 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.147 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.147 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.149 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.150 251996 DEBUG nova.virt.libvirt.driver [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.158 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.193 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:53:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 720 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.0 MiB/s wr, 161 op/s
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.470 251996 INFO nova.compute.manager [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Took 14.16 seconds to spawn the instance on the hypervisor.
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.471 251996 DEBUG nova.compute.manager [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.580 251996 INFO nova.compute.manager [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Took 16.42 seconds to build instance.
Dec 06 07:53:03 compute-0 nova_compute[251992]: 2025-12-06 07:53:03.690 251996 DEBUG oslo_concurrency.lockutils [None req-813cc1c3-57ad-4054-a7ec-a359f49763b9 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:03.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:03.863 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:03.863 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:03.864 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:04 compute-0 sudo[364493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:04 compute-0 sudo[364493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:04 compute-0 sudo[364493]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Dec 06 07:53:04 compute-0 sudo[364518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:04 compute-0 sudo[364518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:04 compute-0 sudo[364518]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:04.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:04 compute-0 ceph-mon[74339]: pgmap v2986: 305 pgs: 305 active+clean; 720 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.0 MiB/s wr, 161 op/s
Dec 06 07:53:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2434060058' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2434060058' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:04 compute-0 nova_compute[251992]: 2025-12-06 07:53:04.814 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Dec 06 07:53:05 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Dec 06 07:53:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 710 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 267 op/s
Dec 06 07:53:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:05.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:06 compute-0 ceph-mon[74339]: osdmap e390: 3 total, 3 up, 3 in
Dec 06 07:53:06 compute-0 nova_compute[251992]: 2025-12-06 07:53:06.635 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:06.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Dec 06 07:53:07 compute-0 ceph-mon[74339]: pgmap v2988: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 710 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 267 op/s
Dec 06 07:53:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.5 MiB/s wr, 325 op/s
Dec 06 07:53:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Dec 06 07:53:07 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Dec 06 07:53:07 compute-0 nova_compute[251992]: 2025-12-06 07:53:07.440 251996 DEBUG nova.compute.manager [req-8641c32e-adab-4414-8540-ed69fb0de882 req-222df9b7-b41b-42dc-b37e-f49c15e69751 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-changed-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:07 compute-0 nova_compute[251992]: 2025-12-06 07:53:07.441 251996 DEBUG nova.compute.manager [req-8641c32e-adab-4414-8540-ed69fb0de882 req-222df9b7-b41b-42dc-b37e-f49c15e69751 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Refreshing instance network info cache due to event network-changed-ad09ca6a-7b57-4547-9e95-4976d37ac5f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:53:07 compute-0 nova_compute[251992]: 2025-12-06 07:53:07.441 251996 DEBUG oslo_concurrency.lockutils [req-8641c32e-adab-4414-8540-ed69fb0de882 req-222df9b7-b41b-42dc-b37e-f49c15e69751 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:53:07 compute-0 nova_compute[251992]: 2025-12-06 07:53:07.441 251996 DEBUG oslo_concurrency.lockutils [req-8641c32e-adab-4414-8540-ed69fb0de882 req-222df9b7-b41b-42dc-b37e-f49c15e69751 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:53:07 compute-0 nova_compute[251992]: 2025-12-06 07:53:07.441 251996 DEBUG nova.network.neutron [req-8641c32e-adab-4414-8540-ed69fb0de882 req-222df9b7-b41b-42dc-b37e-f49c15e69751 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Refreshing network info cache for port ad09ca6a-7b57-4547-9e95-4976d37ac5f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:53:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:07 compute-0 nova_compute[251992]: 2025-12-06 07:53:07.823 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007572.8221533, 822fc37e-13a4-4b1b-983f-6cc928c1dfa3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:53:07 compute-0 nova_compute[251992]: 2025-12-06 07:53:07.823 251996 INFO nova.compute.manager [-] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] VM Stopped (Lifecycle Event)
Dec 06 07:53:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:07.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:07 compute-0 nova_compute[251992]: 2025-12-06 07:53:07.849 251996 DEBUG nova.compute.manager [None req-51d5079f-864a-4ae2-b657-67b919f9d19b - - - - - -] [instance: 822fc37e-13a4-4b1b-983f-6cc928c1dfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:08 compute-0 podman[364545]: 2025-12-06 07:53:08.435914548 +0000 UTC m=+0.087670446 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:53:08 compute-0 ceph-mon[74339]: pgmap v2989: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.5 MiB/s wr, 325 op/s
Dec 06 07:53:08 compute-0 ceph-mon[74339]: osdmap e391: 3 total, 3 up, 3 in
Dec 06 07:53:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:08.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:53:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2874821777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:53:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2874821777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 654 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.8 MiB/s wr, 319 op/s
Dec 06 07:53:09 compute-0 nova_compute[251992]: 2025-12-06 07:53:09.816 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:09.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:10.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2874821777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2874821777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:10 compute-0 nova_compute[251992]: 2025-12-06 07:53:10.874 251996 DEBUG nova.network.neutron [req-8641c32e-adab-4414-8540-ed69fb0de882 req-222df9b7-b41b-42dc-b37e-f49c15e69751 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updated VIF entry in instance network info cache for port ad09ca6a-7b57-4547-9e95-4976d37ac5f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:53:10 compute-0 nova_compute[251992]: 2025-12-06 07:53:10.875 251996 DEBUG nova.network.neutron [req-8641c32e-adab-4414-8540-ed69fb0de882 req-222df9b7-b41b-42dc-b37e-f49c15e69751 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updating instance_info_cache with network_info: [{"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:10 compute-0 nova_compute[251992]: 2025-12-06 07:53:10.900 251996 DEBUG oslo_concurrency.lockutils [req-8641c32e-adab-4414-8540-ed69fb0de882 req-222df9b7-b41b-42dc-b37e-f49c15e69751 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:53:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 636 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.8 MiB/s wr, 349 op/s
Dec 06 07:53:11 compute-0 nova_compute[251992]: 2025-12-06 07:53:11.637 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:11.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:12.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:53:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:53:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:53:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:53:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:53:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:53:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 636 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 252 op/s
Dec 06 07:53:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:13.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:14 compute-0 ceph-mon[74339]: pgmap v2991: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 654 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.8 MiB/s wr, 319 op/s
Dec 06 07:53:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:14.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:14 compute-0 nova_compute[251992]: 2025-12-06 07:53:14.818 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:15 compute-0 ceph-mon[74339]: pgmap v2992: 305 pgs: 305 active+clean; 636 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.8 MiB/s wr, 349 op/s
Dec 06 07:53:15 compute-0 ceph-mon[74339]: pgmap v2993: 305 pgs: 305 active+clean; 636 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 252 op/s
Dec 06 07:53:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 636 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 368 KiB/s wr, 159 op/s
Dec 06 07:53:15 compute-0 podman[364575]: 2025-12-06 07:53:15.411972229 +0000 UTC m=+0.061651404 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:53:15 compute-0 podman[364576]: 2025-12-06 07:53:15.412737841 +0000 UTC m=+0.060394791 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Dec 06 07:53:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:15.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:16 compute-0 ceph-mon[74339]: pgmap v2994: 305 pgs: 305 active+clean; 636 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 368 KiB/s wr, 159 op/s
Dec 06 07:53:16 compute-0 nova_compute[251992]: 2025-12-06 07:53:16.640 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:16.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 612 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 515 KiB/s wr, 113 op/s
Dec 06 07:53:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:17.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:17 compute-0 ovn_controller[147168]: 2025-12-06T07:53:17Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:9b:dc 10.100.0.9
Dec 06 07:53:17 compute-0 ovn_controller[147168]: 2025-12-06T07:53:17Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:9b:dc 10.100.0.9
Dec 06 07:53:18 compute-0 ceph-mon[74339]: pgmap v2995: 305 pgs: 305 active+clean; 612 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 515 KiB/s wr, 113 op/s
Dec 06 07:53:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:53:18
Dec 06 07:53:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:53:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:53:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.log', 'images', 'backups', '.mgr', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta']
Dec 06 07:53:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:53:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Dec 06 07:53:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Dec 06 07:53:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Dec 06 07:53:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:18.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 605 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 861 KiB/s rd, 1.7 MiB/s wr, 139 op/s
Dec 06 07:53:19 compute-0 ceph-mon[74339]: osdmap e392: 3 total, 3 up, 3 in
Dec 06 07:53:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3201494518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:19 compute-0 nova_compute[251992]: 2025-12-06 07:53:19.821 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:19.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:20.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:20 compute-0 ceph-mon[74339]: pgmap v2997: 305 pgs: 305 active+clean; 605 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 861 KiB/s rd, 1.7 MiB/s wr, 139 op/s
Dec 06 07:53:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:53:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1779789501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:53:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1779789501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 592 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 915 KiB/s rd, 2.6 MiB/s wr, 157 op/s
Dec 06 07:53:21 compute-0 nova_compute[251992]: 2025-12-06 07:53:21.643 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1779789501' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1779789501' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:21.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:22.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:22 compute-0 ceph-mon[74339]: pgmap v2998: 305 pgs: 305 active+clean; 592 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 915 KiB/s rd, 2.6 MiB/s wr, 157 op/s
Dec 06 07:53:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1020941421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 592 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 915 KiB/s rd, 2.6 MiB/s wr, 157 op/s
Dec 06 07:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:53:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:53:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1210000473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:23.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:24 compute-0 nova_compute[251992]: 2025-12-06 07:53:24.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:24 compute-0 nova_compute[251992]: 2025-12-06 07:53:24.680 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:24 compute-0 nova_compute[251992]: 2025-12-06 07:53:24.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:24 compute-0 nova_compute[251992]: 2025-12-06 07:53:24.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:24 compute-0 nova_compute[251992]: 2025-12-06 07:53:24.682 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:53:24 compute-0 nova_compute[251992]: 2025-12-06 07:53:24.682 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:24.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:24 compute-0 sudo[364618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:24 compute-0 sudo[364618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:24 compute-0 sudo[364618]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:24 compute-0 sudo[364643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:24 compute-0 sudo[364643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:24 compute-0 sudo[364643]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:24 compute-0 nova_compute[251992]: 2025-12-06 07:53:24.825 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:53:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/199665366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.151 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.221 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.222 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.222 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.225 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.226 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:53:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 520 KiB/s rd, 2.6 MiB/s wr, 137 op/s
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.400 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.401 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3750MB free_disk=20.739639282226562GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.401 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.402 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:25 compute-0 ceph-mon[74339]: pgmap v2999: 305 pgs: 305 active+clean; 592 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 915 KiB/s rd, 2.6 MiB/s wr, 157 op/s
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.501 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 6e187078-1e6f-4c22-9510-ed8116b14ae5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.502 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 91b85b86-0d07-4df4-80d5-48fa343c00b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.502 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.502 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:53:25 compute-0 nova_compute[251992]: 2025-12-06 07:53:25.555 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:25.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:53:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2736518958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:26 compute-0 nova_compute[251992]: 2025-12-06 07:53:26.048 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:26 compute-0 nova_compute[251992]: 2025-12-06 07:53:26.053 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:53:26 compute-0 nova_compute[251992]: 2025-12-06 07:53:26.112 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:53:26 compute-0 nova_compute[251992]: 2025-12-06 07:53:26.142 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:53:26 compute-0 nova_compute[251992]: 2025-12-06 07:53:26.142 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011857246300949191 of space, bias 1.0, pg target 3.557173890284757 quantized to 32 (current 32)
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6425879396984925 quantized to 32 (current 32)
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8478230998743718 quantized to 32 (current 32)
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:53:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Dec 06 07:53:26 compute-0 nova_compute[251992]: 2025-12-06 07:53:26.645 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:26.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:26 compute-0 ovn_controller[147168]: 2025-12-06T07:53:26Z|00651|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec 06 07:53:26 compute-0 ovn_controller[147168]: 2025-12-06T07:53:26Z|00652|binding|INFO|Releasing lport e43e784e-bee5-49c8-8bc7-c45a17996abf from this chassis (sb_readonly=0)
Dec 06 07:53:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/199665366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:26 compute-0 ceph-mon[74339]: pgmap v3000: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 520 KiB/s rd, 2.6 MiB/s wr, 137 op/s
Dec 06 07:53:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2736518958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:26 compute-0 nova_compute[251992]: 2025-12-06 07:53:26.886 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:53:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:53:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:53:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:53:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:53:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 433 KiB/s rd, 2.2 MiB/s wr, 128 op/s
Dec 06 07:53:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:27.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:28.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:29 compute-0 ceph-mon[74339]: pgmap v3001: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 433 KiB/s rd, 2.2 MiB/s wr, 128 op/s
Dec 06 07:53:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 842 KiB/s rd, 963 KiB/s wr, 84 op/s
Dec 06 07:53:29 compute-0 nova_compute[251992]: 2025-12-06 07:53:29.829 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:29.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:30 compute-0 ceph-mon[74339]: pgmap v3002: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 842 KiB/s rd, 963 KiB/s wr, 84 op/s
Dec 06 07:53:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:30.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 843 KiB/s wr, 125 op/s
Dec 06 07:53:31 compute-0 nova_compute[251992]: 2025-12-06 07:53:31.647 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:31.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:32 compute-0 nova_compute[251992]: 2025-12-06 07:53:32.136 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:32 compute-0 ceph-mon[74339]: pgmap v3003: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 843 KiB/s wr, 125 op/s
Dec 06 07:53:32 compute-0 nova_compute[251992]: 2025-12-06 07:53:32.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.699638) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007612699755, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1752, "num_deletes": 258, "total_data_size": 2894464, "memory_usage": 2936880, "flush_reason": "Manual Compaction"}
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007612720789, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 2846580, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58891, "largest_seqno": 60642, "table_properties": {"data_size": 2838345, "index_size": 4984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17926, "raw_average_key_size": 21, "raw_value_size": 2821728, "raw_average_value_size": 3311, "num_data_blocks": 216, "num_entries": 852, "num_filter_entries": 852, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007463, "oldest_key_time": 1765007463, "file_creation_time": 1765007612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 21219 microseconds, and 8638 cpu microseconds.
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.720845) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 2846580 bytes OK
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.720917) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.723055) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.723072) EVENT_LOG_v1 {"time_micros": 1765007612723066, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.723091) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 2886922, prev total WAL file size 2886922, number of live WAL files 2.
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.724178) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(2779KB)], [131(9953KB)]
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007612724300, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 13038749, "oldest_snapshot_seqno": -1}
Dec 06 07:53:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:32.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 9358 keys, 11087920 bytes, temperature: kUnknown
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007612826246, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 11087920, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11028407, "index_size": 35016, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 247491, "raw_average_key_size": 26, "raw_value_size": 10864789, "raw_average_value_size": 1161, "num_data_blocks": 1329, "num_entries": 9358, "num_filter_entries": 9358, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765007612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.826488) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 11087920 bytes
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.828117) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.8 rd, 108.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 9.7 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(8.5) write-amplify(3.9) OK, records in: 9889, records dropped: 531 output_compression: NoCompression
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.828136) EVENT_LOG_v1 {"time_micros": 1765007612828127, "job": 80, "event": "compaction_finished", "compaction_time_micros": 102011, "compaction_time_cpu_micros": 46728, "output_level": 6, "num_output_files": 1, "total_output_size": 11087920, "num_input_records": 9889, "num_output_records": 9358, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007612828680, "job": 80, "event": "table_file_deletion", "file_number": 133}
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007612830806, "job": 80, "event": "table_file_deletion", "file_number": 131}
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.724018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.830919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.830925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.830926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.830928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:53:32 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:53:32.830930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:53:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 36 KiB/s wr, 90 op/s
Dec 06 07:53:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:33.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:34 compute-0 nova_compute[251992]: 2025-12-06 07:53:34.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:34 compute-0 nova_compute[251992]: 2025-12-06 07:53:34.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:53:34 compute-0 nova_compute[251992]: 2025-12-06 07:53:34.694 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:53:34 compute-0 nova_compute[251992]: 2025-12-06 07:53:34.694 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:34.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:34 compute-0 nova_compute[251992]: 2025-12-06 07:53:34.834 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:34 compute-0 ceph-mon[74339]: pgmap v3004: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 36 KiB/s wr, 90 op/s
Dec 06 07:53:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:35.090 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:53:35 compute-0 nova_compute[251992]: 2025-12-06 07:53:35.090 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:35.091 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:53:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 44 KiB/s wr, 91 op/s
Dec 06 07:53:35 compute-0 ovn_controller[147168]: 2025-12-06T07:53:35Z|00653|binding|INFO|Releasing lport 7c0488e1-35c2-4c92-b43c-271fbeecd9ea from this chassis (sb_readonly=0)
Dec 06 07:53:35 compute-0 ovn_controller[147168]: 2025-12-06T07:53:35Z|00654|binding|INFO|Releasing lport e43e784e-bee5-49c8-8bc7-c45a17996abf from this chassis (sb_readonly=0)
Dec 06 07:53:35 compute-0 nova_compute[251992]: 2025-12-06 07:53:35.343 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:35 compute-0 nova_compute[251992]: 2025-12-06 07:53:35.688 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:35.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:36 compute-0 nova_compute[251992]: 2025-12-06 07:53:36.649 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:36 compute-0 nova_compute[251992]: 2025-12-06 07:53:36.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:36.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3540705987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 75 op/s
Dec 06 07:53:37 compute-0 nova_compute[251992]: 2025-12-06 07:53:37.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:37.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:37 compute-0 sudo[364720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:37 compute-0 sudo[364720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:37 compute-0 sudo[364720]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:38 compute-0 sudo[364745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:53:38 compute-0 sudo[364745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:38 compute-0 sudo[364745]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:38 compute-0 sudo[364770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:38 compute-0 sudo[364770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:38 compute-0 sudo[364770]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:38 compute-0 sudo[364795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:53:38 compute-0 sudo[364795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:53:38 compute-0 sudo[364795]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 07:53:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:53:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 07:53:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:53:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:38.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:38 compute-0 ceph-mon[74339]: pgmap v3005: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 44 KiB/s wr, 91 op/s
Dec 06 07:53:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3684386716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.7 KiB/s wr, 72 op/s
Dec 06 07:53:39 compute-0 podman[364851]: 2025-12-06 07:53:39.454761409 +0000 UTC m=+0.087400819 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:53:39 compute-0 nova_compute[251992]: 2025-12-06 07:53:39.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:39 compute-0 nova_compute[251992]: 2025-12-06 07:53:39.836 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:39.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:40.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:53:40 compute-0 ceph-mon[74339]: pgmap v3006: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 75 op/s
Dec 06 07:53:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 07:53:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 07:53:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:41.094 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 547 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 76 op/s
Dec 06 07:53:41 compute-0 nova_compute[251992]: 2025-12-06 07:53:41.652 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:41 compute-0 ceph-mon[74339]: pgmap v3007: 305 pgs: 305 active+clean; 590 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.7 KiB/s wr, 72 op/s
Dec 06 07:53:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:41.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:42 compute-0 nova_compute[251992]: 2025-12-06 07:53:42.145 251996 DEBUG oslo_concurrency.lockutils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:42 compute-0 nova_compute[251992]: 2025-12-06 07:53:42.145 251996 DEBUG oslo_concurrency.lockutils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:42 compute-0 nova_compute[251992]: 2025-12-06 07:53:42.146 251996 INFO nova.compute.manager [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Shelving
Dec 06 07:53:42 compute-0 nova_compute[251992]: 2025-12-06 07:53:42.171 251996 DEBUG nova.virt.libvirt.driver [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 07:53:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 07:53:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 07:53:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:42 compute-0 nova_compute[251992]: 2025-12-06 07:53:42.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:42 compute-0 nova_compute[251992]: 2025-12-06 07:53:42.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:53:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:42.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:53:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:53:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:53:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:53:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:53:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:53:43 compute-0 ceph-mon[74339]: pgmap v3008: 305 pgs: 305 active+clean; 547 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 76 op/s
Dec 06 07:53:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 547 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 111 KiB/s rd, 20 KiB/s wr, 24 op/s
Dec 06 07:53:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:53:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:53:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:53:43 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:53:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:53:43 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:43 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev aea0f8e4-3356-43ca-acf3-993c6d3440c6 does not exist
Dec 06 07:53:43 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7c90823e-e835-401d-ba07-c1ab62406d37 does not exist
Dec 06 07:53:43 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a10b5a22-5efc-427c-9073-b2b2f6defc2c does not exist
Dec 06 07:53:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:53:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:53:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:53:43 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:53:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:53:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:53:43 compute-0 sudo[364880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:43 compute-0 sudo[364880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:43 compute-0 sudo[364880]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:43 compute-0 sudo[364905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:53:43 compute-0 sudo[364905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:43 compute-0 sudo[364905]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:43 compute-0 sudo[364930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:43 compute-0 sudo[364930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:43 compute-0 sudo[364930]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:43 compute-0 sudo[364955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:53:43 compute-0 sudo[364955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:43.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:43 compute-0 podman[365021]: 2025-12-06 07:53:43.902506185 +0000 UTC m=+0.050828464 container create 5a68a2b33d1fae02467bc485fe7da441bf391a41d298c8ab2b16c57b1cb5bf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:53:43 compute-0 systemd[1]: Started libpod-conmon-5a68a2b33d1fae02467bc485fe7da441bf391a41d298c8ab2b16c57b1cb5bf9f.scope.
Dec 06 07:53:43 compute-0 podman[365021]: 2025-12-06 07:53:43.880641154 +0000 UTC m=+0.028963483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:53:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:53:44 compute-0 podman[365021]: 2025-12-06 07:53:44.010937559 +0000 UTC m=+0.159259858 container init 5a68a2b33d1fae02467bc485fe7da441bf391a41d298c8ab2b16c57b1cb5bf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ramanujan, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:53:44 compute-0 podman[365021]: 2025-12-06 07:53:44.024832154 +0000 UTC m=+0.173154433 container start 5a68a2b33d1fae02467bc485fe7da441bf391a41d298c8ab2b16c57b1cb5bf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ramanujan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 07:53:44 compute-0 podman[365021]: 2025-12-06 07:53:44.027881727 +0000 UTC m=+0.176204006 container attach 5a68a2b33d1fae02467bc485fe7da441bf391a41d298c8ab2b16c57b1cb5bf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 07:53:44 compute-0 eloquent_ramanujan[365037]: 167 167
Dec 06 07:53:44 compute-0 systemd[1]: libpod-5a68a2b33d1fae02467bc485fe7da441bf391a41d298c8ab2b16c57b1cb5bf9f.scope: Deactivated successfully.
Dec 06 07:53:44 compute-0 conmon[365037]: conmon 5a68a2b33d1fae02467b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a68a2b33d1fae02467bc485fe7da441bf391a41d298c8ab2b16c57b1cb5bf9f.scope/container/memory.events
Dec 06 07:53:44 compute-0 podman[365021]: 2025-12-06 07:53:44.037095186 +0000 UTC m=+0.185417475 container died 5a68a2b33d1fae02467bc485fe7da441bf391a41d298c8ab2b16c57b1cb5bf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ramanujan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 07:53:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/194808883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:53:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:53:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:53:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:53:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:53:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a16ae01fdcb28e0a941c98ea65f7baa6e5f36c1ed0b6ffbc92f63a7630ee553b-merged.mount: Deactivated successfully.
Dec 06 07:53:44 compute-0 podman[365021]: 2025-12-06 07:53:44.086395345 +0000 UTC m=+0.234717634 container remove 5a68a2b33d1fae02467bc485fe7da441bf391a41d298c8ab2b16c57b1cb5bf9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ramanujan, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:53:44 compute-0 systemd[1]: libpod-conmon-5a68a2b33d1fae02467bc485fe7da441bf391a41d298c8ab2b16c57b1cb5bf9f.scope: Deactivated successfully.
Dec 06 07:53:44 compute-0 podman[365060]: 2025-12-06 07:53:44.286712751 +0000 UTC m=+0.045523870 container create 5600bb2f3b972316040d9d57c73c95d0ff51bdc4c67d157b9c52b3cd567b962d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 07:53:44 compute-0 systemd[1]: Started libpod-conmon-5600bb2f3b972316040d9d57c73c95d0ff51bdc4c67d157b9c52b3cd567b962d.scope.
Dec 06 07:53:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecae7502c6b10dad7183813f1167cb0bd725146fd0d8fe8a9296fc56ffe3d995/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecae7502c6b10dad7183813f1167cb0bd725146fd0d8fe8a9296fc56ffe3d995/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecae7502c6b10dad7183813f1167cb0bd725146fd0d8fe8a9296fc56ffe3d995/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecae7502c6b10dad7183813f1167cb0bd725146fd0d8fe8a9296fc56ffe3d995/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecae7502c6b10dad7183813f1167cb0bd725146fd0d8fe8a9296fc56ffe3d995/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:44 compute-0 podman[365060]: 2025-12-06 07:53:44.35934354 +0000 UTC m=+0.118154679 container init 5600bb2f3b972316040d9d57c73c95d0ff51bdc4c67d157b9c52b3cd567b962d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:53:44 compute-0 podman[365060]: 2025-12-06 07:53:44.268474208 +0000 UTC m=+0.027285357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:53:44 compute-0 podman[365060]: 2025-12-06 07:53:44.370276755 +0000 UTC m=+0.129087894 container start 5600bb2f3b972316040d9d57c73c95d0ff51bdc4c67d157b9c52b3cd567b962d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:53:44 compute-0 podman[365060]: 2025-12-06 07:53:44.373316317 +0000 UTC m=+0.132127436 container attach 5600bb2f3b972316040d9d57c73c95d0ff51bdc4c67d157b9c52b3cd567b962d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:53:44 compute-0 kernel: tapad09ca6a-7b (unregistering): left promiscuous mode
Dec 06 07:53:44 compute-0 NetworkManager[48965]: <info>  [1765007624.5441] device (tapad09ca6a-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:53:44 compute-0 nova_compute[251992]: 2025-12-06 07:53:44.553 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:44 compute-0 ovn_controller[147168]: 2025-12-06T07:53:44Z|00655|binding|INFO|Releasing lport ad09ca6a-7b57-4547-9e95-4976d37ac5f9 from this chassis (sb_readonly=0)
Dec 06 07:53:44 compute-0 ovn_controller[147168]: 2025-12-06T07:53:44Z|00656|binding|INFO|Setting lport ad09ca6a-7b57-4547-9e95-4976d37ac5f9 down in Southbound
Dec 06 07:53:44 compute-0 ovn_controller[147168]: 2025-12-06T07:53:44Z|00657|binding|INFO|Removing iface tapad09ca6a-7b ovn-installed in OVS
Dec 06 07:53:44 compute-0 nova_compute[251992]: 2025-12-06 07:53:44.558 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:44 compute-0 nova_compute[251992]: 2025-12-06 07:53:44.574 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.614 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:9b:dc 10.100.0.9'], port_security=['fa:16:3e:5d:9b:dc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '91b85b86-0d07-4df4-80d5-48fa343c00b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cfa713d92cc94fa1b94404ed58b0563f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e15bf02-5e56-4488-babc-bd5f6809e0ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5411afcf-f935-4976-affc-7b12214f8e50, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=ad09ca6a-7b57-4547-9e95-4976d37ac5f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.617 158118 INFO neutron.agent.ovn.metadata.agent [-] Port ad09ca6a-7b57-4547-9e95-4976d37ac5f9 in datapath 45904a2f-a5c2-4047-9c19-a87d36354c1b unbound from our chassis
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.619 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45904a2f-a5c2-4047-9c19-a87d36354c1b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:53:44 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000ac.scope: Deactivated successfully.
Dec 06 07:53:44 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000ac.scope: Consumed 15.686s CPU time.
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.622 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[13bd9a94-da27-428b-9c83-33d2e7c5166b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.623 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b namespace which is not needed anymore
Dec 06 07:53:44 compute-0 systemd-machined[212986]: Machine qemu-81-instance-000000ac terminated.
Dec 06 07:53:44 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[364471]: [NOTICE]   (364480) : haproxy version is 2.8.14-c23fe91
Dec 06 07:53:44 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[364471]: [NOTICE]   (364480) : path to executable is /usr/sbin/haproxy
Dec 06 07:53:44 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[364471]: [WARNING]  (364480) : Exiting Master process...
Dec 06 07:53:44 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[364471]: [ALERT]    (364480) : Current worker (364482) exited with code 143 (Terminated)
Dec 06 07:53:44 compute-0 neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b[364471]: [WARNING]  (364480) : All workers exited. Exiting... (0)
Dec 06 07:53:44 compute-0 systemd[1]: libpod-47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8.scope: Deactivated successfully.
Dec 06 07:53:44 compute-0 podman[365106]: 2025-12-06 07:53:44.760711819 +0000 UTC m=+0.047426060 container died 47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:53:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:44.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8-userdata-shm.mount: Deactivated successfully.
Dec 06 07:53:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fa28f91d4d8338c18590dae331aaac5c392a03fdbda5ee1403e4f89d563b276-merged.mount: Deactivated successfully.
Dec 06 07:53:44 compute-0 podman[365106]: 2025-12-06 07:53:44.818047816 +0000 UTC m=+0.104762047 container cleanup 47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 06 07:53:44 compute-0 systemd[1]: libpod-conmon-47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8.scope: Deactivated successfully.
Dec 06 07:53:44 compute-0 nova_compute[251992]: 2025-12-06 07:53:44.838 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:44 compute-0 podman[365147]: 2025-12-06 07:53:44.884341635 +0000 UTC m=+0.042608801 container remove 47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 07:53:44 compute-0 sudo[365148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.889 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bab553db-290d-4bff-ae0d-cf7ac7f3349b]: (4, ('Sat Dec  6 07:53:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b (47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8)\n47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8\nSat Dec  6 07:53:44 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b (47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8)\n47d1b2536f3e010c53350babf41f28fb70eb433e4c2766109fde36d3a3a688b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:44 compute-0 sudo[365148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.893 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6dc99d-9b72-459f-b801-921a185ee91b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.894 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45904a2f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:44 compute-0 kernel: tap45904a2f-a0: left promiscuous mode
Dec 06 07:53:44 compute-0 nova_compute[251992]: 2025-12-06 07:53:44.897 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:44 compute-0 sudo[365148]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:44 compute-0 nova_compute[251992]: 2025-12-06 07:53:44.921 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.926 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[090cf3be-6df3-40aa-a3cd-adb72c2042be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.938 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[63c596ce-9c16-43f9-98cb-59710c4f7b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.941 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4cd30572-1ecf-4042-be9c-0ef1937cfae1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.960 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d2984a-5755-4506-af78-f700e489a73d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785435, 'reachable_time': 28111, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365208, 'error': None, 'target': 'ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.965 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45904a2f-a5c2-4047-9c19-a87d36354c1b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:53:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d45904a2f\x2da5c2\x2d4047\x2d9c19\x2da87d36354c1b.mount: Deactivated successfully.
Dec 06 07:53:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:44.966 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[3b48811d-ccbc-47f1-bcf6-a636071d8f96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:44 compute-0 sudo[365186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:44 compute-0 nova_compute[251992]: 2025-12-06 07:53:44.974 251996 DEBUG nova.compute.manager [req-05232392-9a9e-4ca3-a877-6e0b47a11359 req-3b63fa23-af8e-49ad-b671-f0cccf9ddc1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-vif-unplugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:44 compute-0 nova_compute[251992]: 2025-12-06 07:53:44.975 251996 DEBUG oslo_concurrency.lockutils [req-05232392-9a9e-4ca3-a877-6e0b47a11359 req-3b63fa23-af8e-49ad-b671-f0cccf9ddc1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:44 compute-0 nova_compute[251992]: 2025-12-06 07:53:44.975 251996 DEBUG oslo_concurrency.lockutils [req-05232392-9a9e-4ca3-a877-6e0b47a11359 req-3b63fa23-af8e-49ad-b671-f0cccf9ddc1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:44 compute-0 sudo[365186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:44 compute-0 nova_compute[251992]: 2025-12-06 07:53:44.976 251996 DEBUG oslo_concurrency.lockutils [req-05232392-9a9e-4ca3-a877-6e0b47a11359 req-3b63fa23-af8e-49ad-b671-f0cccf9ddc1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:44 compute-0 nova_compute[251992]: 2025-12-06 07:53:44.977 251996 DEBUG nova.compute.manager [req-05232392-9a9e-4ca3-a877-6e0b47a11359 req-3b63fa23-af8e-49ad-b671-f0cccf9ddc1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] No waiting events found dispatching network-vif-unplugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:44 compute-0 nova_compute[251992]: 2025-12-06 07:53:44.978 251996 WARNING nova.compute.manager [req-05232392-9a9e-4ca3-a877-6e0b47a11359 req-3b63fa23-af8e-49ad-b671-f0cccf9ddc1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received unexpected event network-vif-unplugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 for instance with vm_state active and task_state shelving.
Dec 06 07:53:44 compute-0 sudo[365186]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:45 compute-0 ceph-mon[74339]: pgmap v3009: 305 pgs: 305 active+clean; 547 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 111 KiB/s rd, 20 KiB/s wr, 24 op/s
Dec 06 07:53:45 compute-0 nova_compute[251992]: 2025-12-06 07:53:45.194 251996 INFO nova.virt.libvirt.driver [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Instance shutdown successfully after 3 seconds.
Dec 06 07:53:45 compute-0 nova_compute[251992]: 2025-12-06 07:53:45.200 251996 INFO nova.virt.libvirt.driver [-] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Instance destroyed successfully.
Dec 06 07:53:45 compute-0 nova_compute[251992]: 2025-12-06 07:53:45.201 251996 DEBUG nova.objects.instance [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'numa_topology' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:53:45 compute-0 pensive_gould[365076]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:53:45 compute-0 pensive_gould[365076]: --> relative data size: 1.0
Dec 06 07:53:45 compute-0 pensive_gould[365076]: --> All data devices are unavailable
Dec 06 07:53:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 508 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 484 KiB/s rd, 40 KiB/s wr, 73 op/s
Dec 06 07:53:45 compute-0 systemd[1]: libpod-5600bb2f3b972316040d9d57c73c95d0ff51bdc4c67d157b9c52b3cd567b962d.scope: Deactivated successfully.
Dec 06 07:53:45 compute-0 podman[365060]: 2025-12-06 07:53:45.262176289 +0000 UTC m=+1.020987428 container died 5600bb2f3b972316040d9d57c73c95d0ff51bdc4c67d157b9c52b3cd567b962d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 07:53:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecae7502c6b10dad7183813f1167cb0bd725146fd0d8fe8a9296fc56ffe3d995-merged.mount: Deactivated successfully.
Dec 06 07:53:45 compute-0 podman[365060]: 2025-12-06 07:53:45.343509924 +0000 UTC m=+1.102321053 container remove 5600bb2f3b972316040d9d57c73c95d0ff51bdc4c67d157b9c52b3cd567b962d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:53:45 compute-0 systemd[1]: libpod-conmon-5600bb2f3b972316040d9d57c73c95d0ff51bdc4c67d157b9c52b3cd567b962d.scope: Deactivated successfully.
Dec 06 07:53:45 compute-0 sudo[364955]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:45 compute-0 nova_compute[251992]: 2025-12-06 07:53:45.424 251996 INFO nova.virt.libvirt.driver [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Beginning cold snapshot process
Dec 06 07:53:45 compute-0 sudo[365241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:45 compute-0 sudo[365241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:45 compute-0 sudo[365241]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:45 compute-0 sudo[365278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:53:45 compute-0 sudo[365278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:45 compute-0 sudo[365278]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:45 compute-0 podman[365265]: 2025-12-06 07:53:45.542963975 +0000 UTC m=+0.055275573 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:53:45 compute-0 podman[365266]: 2025-12-06 07:53:45.551254359 +0000 UTC m=+0.063565947 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 07:53:45 compute-0 nova_compute[251992]: 2025-12-06 07:53:45.591 251996 DEBUG nova.virt.libvirt.imagebackend [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] No parent info for 6efab05d-c7cf-4770-a5c3-c806a2739063; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 06 07:53:45 compute-0 sudo[365345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:45 compute-0 sudo[365345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:45 compute-0 sudo[365345]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:45 compute-0 sudo[365389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:53:45 compute-0 sudo[365389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:45 compute-0 nova_compute[251992]: 2025-12-06 07:53:45.816 251996 DEBUG nova.storage.rbd_utils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] creating snapshot(35e67546df10435683349a9ff22d2da9) on rbd image(91b85b86-0d07-4df4-80d5-48fa343c00b8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:53:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:45.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:46 compute-0 podman[365470]: 2025-12-06 07:53:46.023676166 +0000 UTC m=+0.038153621 container create 84b0d5b04a61841cc05ce40307ee168331803446712bcab1b242bf7a8217a8c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:53:46 compute-0 systemd[1]: Started libpod-conmon-84b0d5b04a61841cc05ce40307ee168331803446712bcab1b242bf7a8217a8c5.scope.
Dec 06 07:53:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Dec 06 07:53:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:53:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Dec 06 07:53:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Dec 06 07:53:46 compute-0 podman[365470]: 2025-12-06 07:53:46.006927954 +0000 UTC m=+0.021405439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:53:46 compute-0 podman[365470]: 2025-12-06 07:53:46.106595602 +0000 UTC m=+0.121073057 container init 84b0d5b04a61841cc05ce40307ee168331803446712bcab1b242bf7a8217a8c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:53:46 compute-0 podman[365470]: 2025-12-06 07:53:46.113261233 +0000 UTC m=+0.127738688 container start 84b0d5b04a61841cc05ce40307ee168331803446712bcab1b242bf7a8217a8c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 07:53:46 compute-0 podman[365470]: 2025-12-06 07:53:46.117005843 +0000 UTC m=+0.131483298 container attach 84b0d5b04a61841cc05ce40307ee168331803446712bcab1b242bf7a8217a8c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:53:46 compute-0 cranky_saha[365487]: 167 167
Dec 06 07:53:46 compute-0 systemd[1]: libpod-84b0d5b04a61841cc05ce40307ee168331803446712bcab1b242bf7a8217a8c5.scope: Deactivated successfully.
Dec 06 07:53:46 compute-0 podman[365470]: 2025-12-06 07:53:46.118273768 +0000 UTC m=+0.132751223 container died 84b0d5b04a61841cc05ce40307ee168331803446712bcab1b242bf7a8217a8c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:53:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-820ceb39897a6754fd01897c42bf78fe8d36698c157531f705341e18495c3257-merged.mount: Deactivated successfully.
Dec 06 07:53:46 compute-0 nova_compute[251992]: 2025-12-06 07:53:46.144 251996 DEBUG nova.storage.rbd_utils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] cloning vms/91b85b86-0d07-4df4-80d5-48fa343c00b8_disk@35e67546df10435683349a9ff22d2da9 to images/f1264656-a902-4225-be11-839d8f664fd7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 07:53:46 compute-0 podman[365470]: 2025-12-06 07:53:46.153890928 +0000 UTC m=+0.168368383 container remove 84b0d5b04a61841cc05ce40307ee168331803446712bcab1b242bf7a8217a8c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Dec 06 07:53:46 compute-0 systemd[1]: libpod-conmon-84b0d5b04a61841cc05ce40307ee168331803446712bcab1b242bf7a8217a8c5.scope: Deactivated successfully.
Dec 06 07:53:46 compute-0 nova_compute[251992]: 2025-12-06 07:53:46.277 251996 DEBUG nova.storage.rbd_utils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] flattening images/f1264656-a902-4225-be11-839d8f664fd7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 07:53:46 compute-0 podman[365546]: 2025-12-06 07:53:46.320576806 +0000 UTC m=+0.040235087 container create 77df08925277d468ab387febe702e7c2c5f1767e99b2d1baa163f8072fd7a775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 07:53:46 compute-0 systemd[1]: Started libpod-conmon-77df08925277d468ab387febe702e7c2c5f1767e99b2d1baa163f8072fd7a775.scope.
Dec 06 07:53:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fcd52539a75ad92f88d8d4855799156aa6aea0e778bb0367c10679d0bccf09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fcd52539a75ad92f88d8d4855799156aa6aea0e778bb0367c10679d0bccf09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fcd52539a75ad92f88d8d4855799156aa6aea0e778bb0367c10679d0bccf09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fcd52539a75ad92f88d8d4855799156aa6aea0e778bb0367c10679d0bccf09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:46 compute-0 podman[365546]: 2025-12-06 07:53:46.397674456 +0000 UTC m=+0.117332737 container init 77df08925277d468ab387febe702e7c2c5f1767e99b2d1baa163f8072fd7a775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 07:53:46 compute-0 podman[365546]: 2025-12-06 07:53:46.303978158 +0000 UTC m=+0.023636459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:53:46 compute-0 podman[365546]: 2025-12-06 07:53:46.403978917 +0000 UTC m=+0.123637208 container start 77df08925277d468ab387febe702e7c2c5f1767e99b2d1baa163f8072fd7a775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:53:46 compute-0 podman[365546]: 2025-12-06 07:53:46.408097607 +0000 UTC m=+0.127755908 container attach 77df08925277d468ab387febe702e7c2c5f1767e99b2d1baa163f8072fd7a775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 07:53:46 compute-0 nova_compute[251992]: 2025-12-06 07:53:46.655 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:46 compute-0 nova_compute[251992]: 2025-12-06 07:53:46.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:46 compute-0 nova_compute[251992]: 2025-12-06 07:53:46.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:53:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:46.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:47 compute-0 nova_compute[251992]: 2025-12-06 07:53:47.057 251996 DEBUG nova.compute.manager [req-b71e3aee-7772-49ef-8309-23f21890b6b0 req-2cf7d092-cd8a-4310-a047-d1404e7f7055 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:47 compute-0 nova_compute[251992]: 2025-12-06 07:53:47.058 251996 DEBUG oslo_concurrency.lockutils [req-b71e3aee-7772-49ef-8309-23f21890b6b0 req-2cf7d092-cd8a-4310-a047-d1404e7f7055 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:47 compute-0 nova_compute[251992]: 2025-12-06 07:53:47.058 251996 DEBUG oslo_concurrency.lockutils [req-b71e3aee-7772-49ef-8309-23f21890b6b0 req-2cf7d092-cd8a-4310-a047-d1404e7f7055 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:47 compute-0 nova_compute[251992]: 2025-12-06 07:53:47.059 251996 DEBUG oslo_concurrency.lockutils [req-b71e3aee-7772-49ef-8309-23f21890b6b0 req-2cf7d092-cd8a-4310-a047-d1404e7f7055 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:47 compute-0 nova_compute[251992]: 2025-12-06 07:53:47.060 251996 DEBUG nova.compute.manager [req-b71e3aee-7772-49ef-8309-23f21890b6b0 req-2cf7d092-cd8a-4310-a047-d1404e7f7055 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] No waiting events found dispatching network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:47 compute-0 nova_compute[251992]: 2025-12-06 07:53:47.060 251996 WARNING nova.compute.manager [req-b71e3aee-7772-49ef-8309-23f21890b6b0 req-2cf7d092-cd8a-4310-a047-d1404e7f7055 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received unexpected event network-vif-plugged-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 for instance with vm_state active and task_state shelving_image_uploading.
Dec 06 07:53:47 compute-0 ceph-mon[74339]: pgmap v3010: 305 pgs: 305 active+clean; 508 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 484 KiB/s rd, 40 KiB/s wr, 73 op/s
Dec 06 07:53:47 compute-0 ceph-mon[74339]: osdmap e393: 3 total, 3 up, 3 in
Dec 06 07:53:47 compute-0 nova_compute[251992]: 2025-12-06 07:53:47.129 251996 DEBUG nova.storage.rbd_utils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] removing snapshot(35e67546df10435683349a9ff22d2da9) on rbd image(91b85b86-0d07-4df4-80d5-48fa343c00b8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 07:53:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:53:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/610152004' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:53:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/610152004' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]: {
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:     "0": [
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:         {
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             "devices": [
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "/dev/loop3"
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             ],
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             "lv_name": "ceph_lv0",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             "lv_size": "7511998464",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             "name": "ceph_lv0",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             "tags": {
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "ceph.cluster_name": "ceph",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "ceph.crush_device_class": "",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "ceph.encrypted": "0",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "ceph.osd_id": "0",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "ceph.type": "block",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:                 "ceph.vdo": "0"
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             },
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             "type": "block",
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:             "vg_name": "ceph_vg0"
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:         }
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]:     ]
Dec 06 07:53:47 compute-0 vibrant_goldwasser[365580]: }
Dec 06 07:53:47 compute-0 systemd[1]: libpod-77df08925277d468ab387febe702e7c2c5f1767e99b2d1baa163f8072fd7a775.scope: Deactivated successfully.
Dec 06 07:53:47 compute-0 conmon[365580]: conmon 77df08925277d468ab38 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77df08925277d468ab387febe702e7c2c5f1767e99b2d1baa163f8072fd7a775.scope/container/memory.events
Dec 06 07:53:47 compute-0 podman[365546]: 2025-12-06 07:53:47.195645116 +0000 UTC m=+0.915303437 container died 77df08925277d468ab387febe702e7c2c5f1767e99b2d1baa163f8072fd7a775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:53:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-77fcd52539a75ad92f88d8d4855799156aa6aea0e778bb0367c10679d0bccf09-merged.mount: Deactivated successfully.
Dec 06 07:53:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 508 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 39 KiB/s wr, 91 op/s
Dec 06 07:53:47 compute-0 podman[365546]: 2025-12-06 07:53:47.263103406 +0000 UTC m=+0.982761687 container remove 77df08925277d468ab387febe702e7c2c5f1767e99b2d1baa163f8072fd7a775 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Dec 06 07:53:47 compute-0 systemd[1]: libpod-conmon-77df08925277d468ab387febe702e7c2c5f1767e99b2d1baa163f8072fd7a775.scope: Deactivated successfully.
Dec 06 07:53:47 compute-0 sudo[365389]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:47 compute-0 sudo[365620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:47 compute-0 sudo[365620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:47 compute-0 sudo[365620]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:47 compute-0 sudo[365645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:53:47 compute-0 sudo[365645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:47 compute-0 sudo[365645]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:47 compute-0 sudo[365670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:47 compute-0 sudo[365670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:47 compute-0 sudo[365670]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:47 compute-0 sudo[365695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:53:47 compute-0 sudo[365695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:47.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:47 compute-0 podman[365758]: 2025-12-06 07:53:47.905303573 +0000 UTC m=+0.039323352 container create 20e915270dbcb6b3fe91c1922174f9a60d8c16f2262caa97684ba4e72c96c81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_euler, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 07:53:47 compute-0 systemd[1]: Started libpod-conmon-20e915270dbcb6b3fe91c1922174f9a60d8c16f2262caa97684ba4e72c96c81b.scope.
Dec 06 07:53:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:53:47 compute-0 podman[365758]: 2025-12-06 07:53:47.887233386 +0000 UTC m=+0.021253185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:53:47 compute-0 podman[365758]: 2025-12-06 07:53:47.99116721 +0000 UTC m=+0.125187009 container init 20e915270dbcb6b3fe91c1922174f9a60d8c16f2262caa97684ba4e72c96c81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:53:47 compute-0 podman[365758]: 2025-12-06 07:53:47.997384837 +0000 UTC m=+0.131404616 container start 20e915270dbcb6b3fe91c1922174f9a60d8c16f2262caa97684ba4e72c96c81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_euler, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Dec 06 07:53:48 compute-0 stoic_euler[365774]: 167 167
Dec 06 07:53:48 compute-0 podman[365758]: 2025-12-06 07:53:48.001242522 +0000 UTC m=+0.135262321 container attach 20e915270dbcb6b3fe91c1922174f9a60d8c16f2262caa97684ba4e72c96c81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:53:48 compute-0 systemd[1]: libpod-20e915270dbcb6b3fe91c1922174f9a60d8c16f2262caa97684ba4e72c96c81b.scope: Deactivated successfully.
Dec 06 07:53:48 compute-0 podman[365758]: 2025-12-06 07:53:48.00303461 +0000 UTC m=+0.137054389 container died 20e915270dbcb6b3fe91c1922174f9a60d8c16f2262caa97684ba4e72c96c81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_euler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:53:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c68e0f8aa731666513512889ce301a1c756ca9f11eec0ab078c7104d31e13212-merged.mount: Deactivated successfully.
Dec 06 07:53:48 compute-0 podman[365758]: 2025-12-06 07:53:48.041179649 +0000 UTC m=+0.175199428 container remove 20e915270dbcb6b3fe91c1922174f9a60d8c16f2262caa97684ba4e72c96c81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_euler, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:53:48 compute-0 systemd[1]: libpod-conmon-20e915270dbcb6b3fe91c1922174f9a60d8c16f2262caa97684ba4e72c96c81b.scope: Deactivated successfully.
Dec 06 07:53:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Dec 06 07:53:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/610152004' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/610152004' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Dec 06 07:53:48 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Dec 06 07:53:48 compute-0 nova_compute[251992]: 2025-12-06 07:53:48.149 251996 DEBUG nova.storage.rbd_utils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] creating snapshot(snap) on rbd image(f1264656-a902-4225-be11-839d8f664fd7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 07:53:48 compute-0 podman[365811]: 2025-12-06 07:53:48.219232394 +0000 UTC m=+0.045783527 container create 1d55cae125f190cec9e1007a0c1d6242761db18f58cf95980e6d4146dd816856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mendel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:53:48 compute-0 systemd[1]: Started libpod-conmon-1d55cae125f190cec9e1007a0c1d6242761db18f58cf95980e6d4146dd816856.scope.
Dec 06 07:53:48 compute-0 podman[365811]: 2025-12-06 07:53:48.19836912 +0000 UTC m=+0.024920303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:53:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad9b7e1a1a3a91230ecfa8bf4d275e5fd10873b441c65ad7f9b60cdffea93de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad9b7e1a1a3a91230ecfa8bf4d275e5fd10873b441c65ad7f9b60cdffea93de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad9b7e1a1a3a91230ecfa8bf4d275e5fd10873b441c65ad7f9b60cdffea93de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad9b7e1a1a3a91230ecfa8bf4d275e5fd10873b441c65ad7f9b60cdffea93de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:53:48 compute-0 podman[365811]: 2025-12-06 07:53:48.305774938 +0000 UTC m=+0.132326091 container init 1d55cae125f190cec9e1007a0c1d6242761db18f58cf95980e6d4146dd816856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:53:48 compute-0 podman[365811]: 2025-12-06 07:53:48.31401588 +0000 UTC m=+0.140567013 container start 1d55cae125f190cec9e1007a0c1d6242761db18f58cf95980e6d4146dd816856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 07:53:48 compute-0 podman[365811]: 2025-12-06 07:53:48.317341811 +0000 UTC m=+0.143892944 container attach 1d55cae125f190cec9e1007a0c1d6242761db18f58cf95980e6d4146dd816856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mendel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 07:53:48 compute-0 nova_compute[251992]: 2025-12-06 07:53:48.680 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:48 compute-0 nova_compute[251992]: 2025-12-06 07:53:48.681 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:53:48 compute-0 nova_compute[251992]: 2025-12-06 07:53:48.696 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:53:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:48.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Dec 06 07:53:49 compute-0 ceph-mon[74339]: pgmap v3012: 305 pgs: 305 active+clean; 508 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 39 KiB/s wr, 91 op/s
Dec 06 07:53:49 compute-0 ceph-mon[74339]: osdmap e394: 3 total, 3 up, 3 in
Dec 06 07:53:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/65824309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:53:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/65824309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:53:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Dec 06 07:53:49 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Dec 06 07:53:49 compute-0 modest_mendel[365832]: {
Dec 06 07:53:49 compute-0 modest_mendel[365832]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:53:49 compute-0 modest_mendel[365832]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:53:49 compute-0 modest_mendel[365832]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:53:49 compute-0 modest_mendel[365832]:         "osd_id": 0,
Dec 06 07:53:49 compute-0 modest_mendel[365832]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:53:49 compute-0 modest_mendel[365832]:         "type": "bluestore"
Dec 06 07:53:49 compute-0 modest_mendel[365832]:     }
Dec 06 07:53:49 compute-0 modest_mendel[365832]: }
Dec 06 07:53:49 compute-0 systemd[1]: libpod-1d55cae125f190cec9e1007a0c1d6242761db18f58cf95980e6d4146dd816856.scope: Deactivated successfully.
Dec 06 07:53:49 compute-0 podman[365811]: 2025-12-06 07:53:49.195496375 +0000 UTC m=+1.022047508 container died 1d55cae125f190cec9e1007a0c1d6242761db18f58cf95980e6d4146dd816856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mendel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ad9b7e1a1a3a91230ecfa8bf4d275e5fd10873b441c65ad7f9b60cdffea93de-merged.mount: Deactivated successfully.
Dec 06 07:53:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 529 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.7 MiB/s wr, 164 op/s
Dec 06 07:53:49 compute-0 podman[365811]: 2025-12-06 07:53:49.25537733 +0000 UTC m=+1.081928463 container remove 1d55cae125f190cec9e1007a0c1d6242761db18f58cf95980e6d4146dd816856 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mendel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:53:49 compute-0 systemd[1]: libpod-conmon-1d55cae125f190cec9e1007a0c1d6242761db18f58cf95980e6d4146dd816856.scope: Deactivated successfully.
Dec 06 07:53:49 compute-0 sudo[365695]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:53:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:53:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 078788f4-445a-46a7-acd2-ba665ced8ae1 does not exist
Dec 06 07:53:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e9643418-3011-4638-b8ab-9dc801e5b931 does not exist
Dec 06 07:53:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0c7b8be1-b90a-42d6-889e-0b03b6db09e4 does not exist
Dec 06 07:53:49 compute-0 sudo[365866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:53:49 compute-0 sudo[365866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:49 compute-0 sudo[365866]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:49 compute-0 sudo[365891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:53:49 compute-0 sudo[365891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:53:49 compute-0 sudo[365891]: pam_unix(sudo:session): session closed for user root
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.452 251996 DEBUG oslo_concurrency.lockutils [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "6e187078-1e6f-4c22-9510-ed8116b14ae5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.453 251996 DEBUG oslo_concurrency.lockutils [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.453 251996 DEBUG oslo_concurrency.lockutils [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.453 251996 DEBUG oslo_concurrency.lockutils [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.453 251996 DEBUG oslo_concurrency.lockutils [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.454 251996 INFO nova.compute.manager [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Terminating instance
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.455 251996 DEBUG nova.compute.manager [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:53:49 compute-0 kernel: tap6ed32036-14 (unregistering): left promiscuous mode
Dec 06 07:53:49 compute-0 NetworkManager[48965]: <info>  [1765007629.6036] device (tap6ed32036-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.604 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.613 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:49 compute-0 ovn_controller[147168]: 2025-12-06T07:53:49Z|00658|binding|INFO|Releasing lport 6ed32036-14e7-4ab4-a9dd-38196e9a6469 from this chassis (sb_readonly=0)
Dec 06 07:53:49 compute-0 ovn_controller[147168]: 2025-12-06T07:53:49Z|00659|binding|INFO|Setting lport 6ed32036-14e7-4ab4-a9dd-38196e9a6469 down in Southbound
Dec 06 07:53:49 compute-0 ovn_controller[147168]: 2025-12-06T07:53:49Z|00660|binding|INFO|Removing iface tap6ed32036-14 ovn-installed in OVS
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.615 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.622 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:7f:63 10.100.0.7'], port_security=['fa:16:3e:77:7f:63 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6e187078-1e6f-4c22-9510-ed8116b14ae5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d151181-0dfe-43ab-b47e-15b53add33a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4842ecff6dce4ccc981a6b65a14ea406', 'neutron:revision_number': '6', 'neutron:security_group_ids': '19b7817b-5f7f-47d5-9095-54d9f4ab28e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=328c1a1e-05c1-492e-8ea7-52ea97c29304, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=6ed32036-14e7-4ab4-a9dd-38196e9a6469) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.623 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 6ed32036-14e7-4ab4-a9dd-38196e9a6469 in datapath 3d151181-0dfe-43ab-b47e-15b53add33a6 unbound from our chassis
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.625 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d151181-0dfe-43ab-b47e-15b53add33a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.627 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7f7e8381-6cb2-408f-8c83-2f3b29db5b5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.627 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 namespace which is not needed anymore
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.640 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:49 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Dec 06 07:53:49 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a5.scope: Consumed 20.716s CPU time.
Dec 06 07:53:49 compute-0 systemd-machined[212986]: Machine qemu-78-instance-000000a5 terminated.
Dec 06 07:53:49 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[360306]: [NOTICE]   (360310) : haproxy version is 2.8.14-c23fe91
Dec 06 07:53:49 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[360306]: [NOTICE]   (360310) : path to executable is /usr/sbin/haproxy
Dec 06 07:53:49 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[360306]: [WARNING]  (360310) : Exiting Master process...
Dec 06 07:53:49 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[360306]: [ALERT]    (360310) : Current worker (360312) exited with code 143 (Terminated)
Dec 06 07:53:49 compute-0 neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6[360306]: [WARNING]  (360310) : All workers exited. Exiting... (0)
Dec 06 07:53:49 compute-0 systemd[1]: libpod-e8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976.scope: Deactivated successfully.
Dec 06 07:53:49 compute-0 podman[365941]: 2025-12-06 07:53:49.767352383 +0000 UTC m=+0.043123873 container died e8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 07:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976-userdata-shm.mount: Deactivated successfully.
Dec 06 07:53:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-17c2fef77d966cf4f6c0269bb58be6378d325f699b30ebfbfa1e8ad878b74b55-merged.mount: Deactivated successfully.
Dec 06 07:53:49 compute-0 podman[365941]: 2025-12-06 07:53:49.808655608 +0000 UTC m=+0.084427088 container cleanup e8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:53:49 compute-0 systemd[1]: libpod-conmon-e8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976.scope: Deactivated successfully.
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.841 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:49 compute-0 podman[365972]: 2025-12-06 07:53:49.867997449 +0000 UTC m=+0.040390230 container remove e8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.874 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7565cb08-29a4-4571-9f2f-ecf13521e21e]: (4, ('Sat Dec  6 07:53:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (e8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976)\ne8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976\nSat Dec  6 07:53:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 (e8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976)\ne8b3b75341568f6d5d51a901c03a7c298e83c559451fdc8a1807c4abd9250976\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.876 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[92a528fd-9fc8-4cd1-9dc5-9629d36e3a68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.877 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d151181-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.880 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.892 251996 INFO nova.virt.libvirt.driver [-] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Instance destroyed successfully.
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.893 251996 DEBUG nova.objects.instance [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lazy-loading 'resources' on Instance uuid 6e187078-1e6f-4c22-9510-ed8116b14ae5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:53:49 compute-0 kernel: tap3d151181-00: left promiscuous mode
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.899 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.907 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.909 251996 DEBUG nova.virt.libvirt.vif [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-961523943',display_name='tempest-ServerRescueNegativeTestJSON-server-961523943',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-961523943',id=165,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:50:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4842ecff6dce4ccc981a6b65a14ea406',ramdisk_id='',reservation_id='r-g79ucyw0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-1304226499',owner_user_name='tempest-ServerRescueNegativeTestJSON-1304226499-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:50:47Z,user_data=None,user_id='f2335740042045fba7f544ee5140eb87',uuid=6e187078-1e6f-4c22-9510-ed8116b14ae5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:53:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:53:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:49.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.909 251996 DEBUG nova.network.os_vif_util [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converting VIF {"id": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "address": "fa:16:3e:77:7f:63", "network": {"id": "3d151181-0dfe-43ab-b47e-15b53add33a6", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-534312753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4842ecff6dce4ccc981a6b65a14ea406", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed32036-14", "ovs_interfaceid": "6ed32036-14e7-4ab4-a9dd-38196e9a6469", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.910 251996 DEBUG nova.network.os_vif_util [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:7f:63,bridge_name='br-int',has_traffic_filtering=True,id=6ed32036-14e7-4ab4-a9dd-38196e9a6469,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed32036-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.910 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa3e30d-0121-4108-9add-aaf0cd3d1f11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.911 251996 DEBUG os_vif [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:7f:63,bridge_name='br-int',has_traffic_filtering=True,id=6ed32036-14e7-4ab4-a9dd-38196e9a6469,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed32036-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.912 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.913 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ed32036-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.914 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.916 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.920 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee22a61-b6cd-469e-a052-8a6be437bcf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.922 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1770b386-ac14-4e1e-97a7-e0b9baf4327c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.924 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.928 251996 INFO os_vif [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:7f:63,bridge_name='br-int',has_traffic_filtering=True,id=6ed32036-14e7-4ab4-a9dd-38196e9a6469,network=Network(3d151181-0dfe-43ab-b47e-15b53add33a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed32036-14')
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.936 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[992cf49d-3fbe-4e15-a8b3-fa80d73c382b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771981, 'reachable_time': 39575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366004, 'error': None, 'target': 'ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.939 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d151181-0dfe-43ab-b47e-15b53add33a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:53:49 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:53:49.939 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b03026-fe3d-4e68-806a-f6d202206742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:53:49 compute-0 systemd[1]: run-netns-ovnmeta\x2d3d151181\x2d0dfe\x2d43ab\x2db47e\x2d15b53add33a6.mount: Deactivated successfully.
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.989 251996 DEBUG nova.compute.manager [req-84045b59-fbd6-4e33-be3d-ecf9a78e5791 req-6a2cb116-8eb9-48e7-b202-cc42b7370e25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received event network-vif-unplugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.990 251996 DEBUG oslo_concurrency.lockutils [req-84045b59-fbd6-4e33-be3d-ecf9a78e5791 req-6a2cb116-8eb9-48e7-b202-cc42b7370e25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.990 251996 DEBUG oslo_concurrency.lockutils [req-84045b59-fbd6-4e33-be3d-ecf9a78e5791 req-6a2cb116-8eb9-48e7-b202-cc42b7370e25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.990 251996 DEBUG oslo_concurrency.lockutils [req-84045b59-fbd6-4e33-be3d-ecf9a78e5791 req-6a2cb116-8eb9-48e7-b202-cc42b7370e25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.990 251996 DEBUG nova.compute.manager [req-84045b59-fbd6-4e33-be3d-ecf9a78e5791 req-6a2cb116-8eb9-48e7-b202-cc42b7370e25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] No waiting events found dispatching network-vif-unplugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:49 compute-0 nova_compute[251992]: 2025-12-06 07:53:49.990 251996 DEBUG nova.compute.manager [req-84045b59-fbd6-4e33-be3d-ecf9a78e5791 req-6a2cb116-8eb9-48e7-b202-cc42b7370e25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received event network-vif-unplugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:53:50 compute-0 ceph-mon[74339]: osdmap e395: 3 total, 3 up, 3 in
Dec 06 07:53:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:50 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.655 251996 INFO nova.virt.libvirt.driver [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Snapshot image upload complete
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.655 251996 DEBUG nova.compute.manager [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.697 251996 INFO nova.virt.libvirt.driver [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Deleting instance files /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5_del
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.698 251996 INFO nova.virt.libvirt.driver [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Deletion of /var/lib/nova/instances/6e187078-1e6f-4c22-9510-ed8116b14ae5_del complete
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.710 251996 INFO nova.compute.manager [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Shelve offloading
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.717 251996 INFO nova.virt.libvirt.driver [-] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Instance destroyed successfully.
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.717 251996 DEBUG nova.compute.manager [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.720 251996 DEBUG oslo_concurrency.lockutils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.720 251996 DEBUG oslo_concurrency.lockutils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquired lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.721 251996 DEBUG nova.network.neutron [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.752 251996 INFO nova.compute.manager [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Took 1.30 seconds to destroy the instance on the hypervisor.
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.752 251996 DEBUG oslo.service.loopingcall [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.753 251996 DEBUG nova.compute.manager [-] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:53:50 compute-0 nova_compute[251992]: 2025-12-06 07:53:50.753 251996 DEBUG nova.network.neutron [-] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:53:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:50.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:51 compute-0 ceph-mon[74339]: pgmap v3015: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 529 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.7 MiB/s wr, 164 op/s
Dec 06 07:53:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 541 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 MiB/s rd, 7.8 MiB/s wr, 230 op/s
Dec 06 07:53:51 compute-0 nova_compute[251992]: 2025-12-06 07:53:51.657 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:51.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:53:52 compute-0 nova_compute[251992]: 2025-12-06 07:53:52.089 251996 DEBUG nova.compute.manager [req-b8fd8eeb-a7b2-4706-bdd6-fc80bf1c4ee2 req-012d4e0d-1930-4414-9f62-65c8486252e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:52 compute-0 nova_compute[251992]: 2025-12-06 07:53:52.089 251996 DEBUG oslo_concurrency.lockutils [req-b8fd8eeb-a7b2-4706-bdd6-fc80bf1c4ee2 req-012d4e0d-1930-4414-9f62-65c8486252e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:52 compute-0 nova_compute[251992]: 2025-12-06 07:53:52.090 251996 DEBUG oslo_concurrency.lockutils [req-b8fd8eeb-a7b2-4706-bdd6-fc80bf1c4ee2 req-012d4e0d-1930-4414-9f62-65c8486252e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:52 compute-0 nova_compute[251992]: 2025-12-06 07:53:52.090 251996 DEBUG oslo_concurrency.lockutils [req-b8fd8eeb-a7b2-4706-bdd6-fc80bf1c4ee2 req-012d4e0d-1930-4414-9f62-65c8486252e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:52 compute-0 nova_compute[251992]: 2025-12-06 07:53:52.090 251996 DEBUG nova.compute.manager [req-b8fd8eeb-a7b2-4706-bdd6-fc80bf1c4ee2 req-012d4e0d-1930-4414-9f62-65c8486252e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] No waiting events found dispatching network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:53:52 compute-0 nova_compute[251992]: 2025-12-06 07:53:52.090 251996 WARNING nova.compute.manager [req-b8fd8eeb-a7b2-4706-bdd6-fc80bf1c4ee2 req-012d4e0d-1930-4414-9f62-65c8486252e4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received unexpected event network-vif-plugged-6ed32036-14e7-4ab4-a9dd-38196e9a6469 for instance with vm_state rescued and task_state deleting.
Dec 06 07:53:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:52.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:53 compute-0 ceph-mon[74339]: pgmap v3016: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 541 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 MiB/s rd, 7.8 MiB/s wr, 230 op/s
Dec 06 07:53:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 541 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 MiB/s rd, 6.5 MiB/s wr, 193 op/s
Dec 06 07:53:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:53 compute-0 nova_compute[251992]: 2025-12-06 07:53:53.883 251996 DEBUG nova.network.neutron [-] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:53 compute-0 nova_compute[251992]: 2025-12-06 07:53:53.901 251996 INFO nova.compute.manager [-] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Took 3.15 seconds to deallocate network for instance.
Dec 06 07:53:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:53.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:53 compute-0 nova_compute[251992]: 2025-12-06 07:53:53.943 251996 DEBUG oslo_concurrency.lockutils [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:53 compute-0 nova_compute[251992]: 2025-12-06 07:53:53.944 251996 DEBUG oslo_concurrency.lockutils [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:54 compute-0 nova_compute[251992]: 2025-12-06 07:53:54.016 251996 DEBUG nova.compute.manager [req-70753bbf-777b-4aea-9fb0-a93d3af2ddde req-ab45438a-205c-464a-b9fc-1db4b5d61250 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Received event network-vif-deleted-6ed32036-14e7-4ab4-a9dd-38196e9a6469 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:54 compute-0 nova_compute[251992]: 2025-12-06 07:53:54.048 251996 DEBUG oslo_concurrency.processutils [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:53:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/433557300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:54 compute-0 nova_compute[251992]: 2025-12-06 07:53:54.500 251996 DEBUG oslo_concurrency.processutils [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:54 compute-0 nova_compute[251992]: 2025-12-06 07:53:54.507 251996 DEBUG nova.compute.provider_tree [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:53:54 compute-0 nova_compute[251992]: 2025-12-06 07:53:54.587 251996 DEBUG nova.scheduler.client.report [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:53:54 compute-0 nova_compute[251992]: 2025-12-06 07:53:54.654 251996 DEBUG nova.network.neutron [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updating instance_info_cache with network_info: [{"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:54 compute-0 nova_compute[251992]: 2025-12-06 07:53:54.714 251996 DEBUG oslo_concurrency.lockutils [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:54.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:54 compute-0 nova_compute[251992]: 2025-12-06 07:53:54.826 251996 INFO nova.scheduler.client.report [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Deleted allocations for instance 6e187078-1e6f-4c22-9510-ed8116b14ae5
Dec 06 07:53:54 compute-0 nova_compute[251992]: 2025-12-06 07:53:54.915 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:54 compute-0 nova_compute[251992]: 2025-12-06 07:53:54.966 251996 DEBUG oslo_concurrency.lockutils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Releasing lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:53:55 compute-0 nova_compute[251992]: 2025-12-06 07:53:55.077 251996 DEBUG oslo_concurrency.lockutils [None req-4d369b43-f2a7-4b24-b526-f3fbcf7e317d f2335740042045fba7f544ee5140eb87 4842ecff6dce4ccc981a6b65a14ea406 - - default default] Lock "6e187078-1e6f-4c22-9510-ed8116b14ae5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 5.9 MiB/s wr, 251 op/s
Dec 06 07:53:55 compute-0 ceph-mon[74339]: pgmap v3017: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 541 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 MiB/s rd, 6.5 MiB/s wr, 193 op/s
Dec 06 07:53:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/433557300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:55.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:56 compute-0 sshd-session[366049]: Invalid user sol from 80.94.92.182 port 58244
Dec 06 07:53:56 compute-0 ceph-mon[74339]: pgmap v3018: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 5.9 MiB/s wr, 251 op/s
Dec 06 07:53:56 compute-0 sshd-session[366049]: Connection closed by invalid user sol 80.94.92.182 port 58244 [preauth]
Dec 06 07:53:56 compute-0 nova_compute[251992]: 2025-12-06 07:53:56.659 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:56.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 430 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.1 MiB/s wr, 225 op/s
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.337 251996 INFO nova.virt.libvirt.driver [-] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Instance destroyed successfully.
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.337 251996 DEBUG nova.objects.instance [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lazy-loading 'resources' on Instance uuid 91b85b86-0d07-4df4-80d5-48fa343c00b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.355 251996 DEBUG nova.virt.libvirt.vif [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:52:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1180268370',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1180268370',id=172,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHi/zU/wIK9gDsaOwcMdl3RsHvsLUGXMCp6e+v7Vsr1tSU1UeVN9QkmLR8bRL7zUBTSmDE2iL72n56YVoqmlRT/okHDeuUDKoH9btDdrzNPAdyAh2Xwe7f5FUrx+EbYFg==',key_name='tempest-keypair-763555621',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:53:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='cfa713d92cc94fa1b94404ed58b0563f',ramdisk_id='',reservation_id='r-tqsa30w8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1510980811',owner_user_name='tempest-AttachVolumeShelveTestJSON-1510980811-project-member',shelved_at='2025-12-06T07:53:50.655408',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='f1264656-a902-4225-be11-839d8f664fd7'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:53:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90c9de6e67724c898a8e23b05fbf14da',uuid=91b85b86-0d07-4df4-80d5-48fa343c00b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": 
"45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.356 251996 DEBUG nova.network.os_vif_util [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converting VIF {"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.357 251996 DEBUG nova.network.os_vif_util [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.357 251996 DEBUG os_vif [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.359 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.360 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad09ca6a-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.361 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.363 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.366 251996 INFO os_vif [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:9b:dc,bridge_name='br-int',has_traffic_filtering=True,id=ad09ca6a-7b57-4547-9e95-4976d37ac5f9,network=Network(45904a2f-a5c2-4047-9c19-a87d36354c1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad09ca6a-7b')
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.447 251996 DEBUG nova.compute.manager [req-e956df31-a538-4931-a33d-4828a0fe7218 req-3943424c-6944-4f38-a2bd-2b415dbbe3bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Received event network-changed-ad09ca6a-7b57-4547-9e95-4976d37ac5f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.447 251996 DEBUG nova.compute.manager [req-e956df31-a538-4931-a33d-4828a0fe7218 req-3943424c-6944-4f38-a2bd-2b415dbbe3bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Refreshing instance network info cache due to event network-changed-ad09ca6a-7b57-4547-9e95-4976d37ac5f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.448 251996 DEBUG oslo_concurrency.lockutils [req-e956df31-a538-4931-a33d-4828a0fe7218 req-3943424c-6944-4f38-a2bd-2b415dbbe3bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.448 251996 DEBUG oslo_concurrency.lockutils [req-e956df31-a538-4931-a33d-4828a0fe7218 req-3943424c-6944-4f38-a2bd-2b415dbbe3bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:53:57 compute-0 nova_compute[251992]: 2025-12-06 07:53:57.448 251996 DEBUG nova.network.neutron [req-e956df31-a538-4931-a33d-4828a0fe7218 req-3943424c-6944-4f38-a2bd-2b415dbbe3bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Refreshing network info cache for port ad09ca6a-7b57-4547-9e95-4976d37ac5f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:53:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:57.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.182 251996 INFO nova.virt.libvirt.driver [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Deleting instance files /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8_del
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.183 251996 INFO nova.virt.libvirt.driver [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Deletion of /var/lib/nova/instances/91b85b86-0d07-4df4-80d5-48fa343c00b8_del complete
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.266 251996 INFO nova.scheduler.client.report [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Deleted allocations for instance 91b85b86-0d07-4df4-80d5-48fa343c00b8
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.308 251996 DEBUG oslo_concurrency.lockutils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.308 251996 DEBUG oslo_concurrency.lockutils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.335 251996 DEBUG oslo_concurrency.processutils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:53:58 compute-0 ceph-mon[74339]: pgmap v3019: 305 pgs: 305 active+clean; 430 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.1 MiB/s wr, 225 op/s
Dec 06 07:53:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4115530818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.612 251996 DEBUG nova.network.neutron [req-e956df31-a538-4931-a33d-4828a0fe7218 req-3943424c-6944-4f38-a2bd-2b415dbbe3bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updated VIF entry in instance network info cache for port ad09ca6a-7b57-4547-9e95-4976d37ac5f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.613 251996 DEBUG nova.network.neutron [req-e956df31-a538-4931-a33d-4828a0fe7218 req-3943424c-6944-4f38-a2bd-2b415dbbe3bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Updating instance_info_cache with network_info: [{"id": "ad09ca6a-7b57-4547-9e95-4976d37ac5f9", "address": "fa:16:3e:5d:9b:dc", "network": {"id": "45904a2f-a5c2-4047-9c19-a87d36354c1b", "bridge": null, "label": "tempest-AttachVolumeShelveTestJSON-1547381509-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cfa713d92cc94fa1b94404ed58b0563f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapad09ca6a-7b", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.635 251996 DEBUG oslo_concurrency.lockutils [req-e956df31-a538-4931-a33d-4828a0fe7218 req-3943424c-6944-4f38-a2bd-2b415dbbe3bc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-91b85b86-0d07-4df4-80d5-48fa343c00b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:53:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:53:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Dec 06 07:53:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Dec 06 07:53:58 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Dec 06 07:53:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:53:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/808322903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.776 251996 DEBUG oslo_concurrency.processutils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.782 251996 DEBUG nova.compute.provider_tree [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:53:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:53:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:53:58.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.823 251996 DEBUG nova.scheduler.client.report [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.851 251996 DEBUG oslo_concurrency.lockutils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:58 compute-0 nova_compute[251992]: 2025-12-06 07:53:58.896 251996 DEBUG oslo_concurrency.lockutils [None req-fccc3e0a-3c61-4c64-8cbc-715e2aa7d7c7 90c9de6e67724c898a8e23b05fbf14da cfa713d92cc94fa1b94404ed58b0563f - - default default] Lock "91b85b86-0d07-4df4-80d5-48fa343c00b8" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 16.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:53:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 374 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 190 op/s
Dec 06 07:53:59 compute-0 ceph-mon[74339]: osdmap e396: 3 total, 3 up, 3 in
Dec 06 07:53:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/808322903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:53:59 compute-0 nova_compute[251992]: 2025-12-06 07:53:59.790 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007624.7896678, 91b85b86-0d07-4df4-80d5-48fa343c00b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:53:59 compute-0 nova_compute[251992]: 2025-12-06 07:53:59.791 251996 INFO nova.compute.manager [-] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] VM Stopped (Lifecycle Event)
Dec 06 07:53:59 compute-0 nova_compute[251992]: 2025-12-06 07:53:59.842 251996 DEBUG nova.compute.manager [None req-4d6b7654-fcb9-4345-8872-f489f2e91a7c - - - - - -] [instance: 91b85b86-0d07-4df4-80d5-48fa343c00b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:53:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:53:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:53:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:53:59.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:54:00 compute-0 ceph-mon[74339]: pgmap v3021: 305 pgs: 305 active+clean; 374 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 190 op/s
Dec 06 07:54:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:54:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:00.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:54:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 17 KiB/s wr, 135 op/s
Dec 06 07:54:01 compute-0 nova_compute[251992]: 2025-12-06 07:54:01.661 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:01.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:02 compute-0 nova_compute[251992]: 2025-12-06 07:54:02.362 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:02 compute-0 ceph-mon[74339]: pgmap v3022: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 17 KiB/s wr, 135 op/s
Dec 06 07:54:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:02.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 17 KiB/s wr, 135 op/s
Dec 06 07:54:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:54:03.864 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:54:03.865 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:54:03.865 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:54:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:03.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:54:04 compute-0 nova_compute[251992]: 2025-12-06 07:54:04.544 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:04 compute-0 nova_compute[251992]: 2025-12-06 07:54:04.759 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:04.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:04 compute-0 nova_compute[251992]: 2025-12-06 07:54:04.889 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007629.8879297, 6e187078-1e6f-4c22-9510-ed8116b14ae5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:54:04 compute-0 nova_compute[251992]: 2025-12-06 07:54:04.890 251996 INFO nova.compute.manager [-] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] VM Stopped (Lifecycle Event)
Dec 06 07:54:04 compute-0 nova_compute[251992]: 2025-12-06 07:54:04.926 251996 DEBUG nova.compute.manager [None req-4becffbe-6044-4985-b262-34a5a0e545d1 - - - - - -] [instance: 6e187078-1e6f-4c22-9510-ed8116b14ae5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:54:04 compute-0 ceph-mon[74339]: pgmap v3023: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 17 KiB/s wr, 135 op/s
Dec 06 07:54:05 compute-0 sudo[366097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:05 compute-0 sudo[366097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:05 compute-0 sudo[366097]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:05 compute-0 sudo[366123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:05 compute-0 sudo[366123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:05 compute-0 sudo[366123]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 13 KiB/s wr, 68 op/s
Dec 06 07:54:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:05.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1188045838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:06 compute-0 nova_compute[251992]: 2025-12-06 07:54:06.664 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:06.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:06 compute-0 ceph-mon[74339]: pgmap v3024: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 13 KiB/s wr, 68 op/s
Dec 06 07:54:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 13 KiB/s wr, 63 op/s
Dec 06 07:54:07 compute-0 nova_compute[251992]: 2025-12-06 07:54:07.364 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:54:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:07.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:54:08 compute-0 ceph-mon[74339]: pgmap v3025: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 13 KiB/s wr, 63 op/s
Dec 06 07:54:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:08.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:54:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3822958825' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:54:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:54:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3822958825' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:54:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 12 KiB/s wr, 41 op/s
Dec 06 07:54:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3822958825' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:54:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3822958825' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:54:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:09.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:10 compute-0 podman[366150]: 2025-12-06 07:54:10.425817221 +0000 UTC m=+0.083054523 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 06 07:54:10 compute-0 ceph-mon[74339]: pgmap v3026: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 12 KiB/s wr, 41 op/s
Dec 06 07:54:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:10.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 10 KiB/s wr, 36 op/s
Dec 06 07:54:11 compute-0 nova_compute[251992]: 2025-12-06 07:54:11.666 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:11.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:12 compute-0 nova_compute[251992]: 2025-12-06 07:54:12.414 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:12.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:12 compute-0 ceph-mon[74339]: pgmap v3027: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 10 KiB/s wr, 36 op/s
Dec 06 07:54:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/917406840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:54:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:54:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:54:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:54:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:54:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:54:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:54:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 3.0 KiB/s wr, 0 op/s
Dec 06 07:54:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2975706700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:54:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:13.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:54:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:14.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:54:15 compute-0 ceph-mon[74339]: pgmap v3028: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s rd, 3.0 KiB/s wr, 0 op/s
Dec 06 07:54:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3823144344' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:54:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3823144344' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:54:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 368 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 79 op/s
Dec 06 07:54:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:15.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:16 compute-0 podman[366180]: 2025-12-06 07:54:16.408635983 +0000 UTC m=+0.059965529 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:54:16 compute-0 podman[366179]: 2025-12-06 07:54:16.423996998 +0000 UTC m=+0.075533169 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:54:16 compute-0 nova_compute[251992]: 2025-12-06 07:54:16.669 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:16.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Dec 06 07:54:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 369 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 108 op/s
Dec 06 07:54:17 compute-0 nova_compute[251992]: 2025-12-06 07:54:17.417 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:17 compute-0 ceph-mon[74339]: pgmap v3029: 305 pgs: 305 active+clean; 368 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 79 op/s
Dec 06 07:54:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Dec 06 07:54:17 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Dec 06 07:54:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:17.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:54:18
Dec 06 07:54:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:54:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:54:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'volumes', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr']
Dec 06 07:54:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:54:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:18.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:18 compute-0 nova_compute[251992]: 2025-12-06 07:54:18.838 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:18 compute-0 ceph-mon[74339]: pgmap v3030: 305 pgs: 305 active+clean; 369 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 108 op/s
Dec 06 07:54:18 compute-0 ceph-mon[74339]: osdmap e397: 3 total, 3 up, 3 in
Dec 06 07:54:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/51705472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 321 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 4.7 MiB/s wr, 193 op/s
Dec 06 07:54:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:19.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:20.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Dec 06 07:54:20 compute-0 ceph-mon[74339]: pgmap v3032: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 321 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 4.7 MiB/s wr, 193 op/s
Dec 06 07:54:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Dec 06 07:54:20 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Dec 06 07:54:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 212 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 323 op/s
Dec 06 07:54:21 compute-0 nova_compute[251992]: 2025-12-06 07:54:21.721 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:21 compute-0 ceph-mon[74339]: osdmap e398: 3 total, 3 up, 3 in
Dec 06 07:54:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:54:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:21.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:54:22 compute-0 nova_compute[251992]: 2025-12-06 07:54:22.420 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:22.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:23 compute-0 ceph-mon[74339]: pgmap v3034: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 212 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 323 op/s
Dec 06 07:54:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2227001455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/122337862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 212 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 642 KiB/s wr, 204 op/s
Dec 06 07:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:54:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:54:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:23.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:54:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:24.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:54:25 compute-0 ceph-mon[74339]: pgmap v3035: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 212 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 642 KiB/s wr, 204 op/s
Dec 06 07:54:25 compute-0 sudo[366221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:25 compute-0 sudo[366221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:25 compute-0 sudo[366221]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 191 op/s
Dec 06 07:54:25 compute-0 sudo[366246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:25 compute-0 sudo[366246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:25 compute-0 sudo[366246]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:25.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021745009771077612 of space, bias 1.0, pg target 0.6523502931323284 quantized to 32 (current 32)
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021628687418574354 of space, bias 1.0, pg target 0.6488606225572306 quantized to 32 (current 32)
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:54:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:54:26 compute-0 nova_compute[251992]: 2025-12-06 07:54:26.659 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:26 compute-0 nova_compute[251992]: 2025-12-06 07:54:26.687 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:26 compute-0 nova_compute[251992]: 2025-12-06 07:54:26.688 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:26 compute-0 nova_compute[251992]: 2025-12-06 07:54:26.688 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:26 compute-0 nova_compute[251992]: 2025-12-06 07:54:26.688 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:54:26 compute-0 nova_compute[251992]: 2025-12-06 07:54:26.688 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:54:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:54:26.712 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:54:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:54:26.713 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:54:26 compute-0 nova_compute[251992]: 2025-12-06 07:54:26.719 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:26 compute-0 nova_compute[251992]: 2025-12-06 07:54:26.721 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:26.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:54:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/230897178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.131 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:54:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:54:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:54:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:54:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:54:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:54:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 162 op/s
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.286 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.287 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4189MB free_disk=20.942642211914062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.287 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.288 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.376 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.376 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.395 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.422 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:27 compute-0 ceph-mon[74339]: pgmap v3036: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 191 op/s
Dec 06 07:54:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/230897178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:54:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688298799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.812 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.818 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.835 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.862 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:54:27 compute-0 nova_compute[251992]: 2025-12-06 07:54:27.862 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:54:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:27.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:28 compute-0 ceph-mon[74339]: pgmap v3037: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 162 op/s
Dec 06 07:54:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2688298799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Dec 06 07:54:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Dec 06 07:54:28 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Dec 06 07:54:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:28.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 503 KiB/s rd, 3.6 KiB/s wr, 57 op/s
Dec 06 07:54:29 compute-0 ceph-mon[74339]: osdmap e399: 3 total, 3 up, 3 in
Dec 06 07:54:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:29.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:30.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:30 compute-0 ceph-mon[74339]: pgmap v3039: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 503 KiB/s rd, 3.6 KiB/s wr, 57 op/s
Dec 06 07:54:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 513 KiB/s rd, 16 KiB/s wr, 62 op/s
Dec 06 07:54:31 compute-0 nova_compute[251992]: 2025-12-06 07:54:31.723 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:31 compute-0 nova_compute[251992]: 2025-12-06 07:54:31.853 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:31 compute-0 nova_compute[251992]: 2025-12-06 07:54:31.854 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:31.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:32 compute-0 nova_compute[251992]: 2025-12-06 07:54:32.425 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:32.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:32 compute-0 ceph-mon[74339]: pgmap v3040: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 513 KiB/s rd, 16 KiB/s wr, 62 op/s
Dec 06 07:54:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 513 KiB/s rd, 16 KiB/s wr, 62 op/s
Dec 06 07:54:33 compute-0 nova_compute[251992]: 2025-12-06 07:54:33.676 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:33.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:34.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:35 compute-0 ceph-mon[74339]: pgmap v3041: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 513 KiB/s rd, 16 KiB/s wr, 62 op/s
Dec 06 07:54:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 638 KiB/s rd, 28 KiB/s wr, 53 op/s
Dec 06 07:54:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:35.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1322277353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:36 compute-0 nova_compute[251992]: 2025-12-06 07:54:36.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:36 compute-0 nova_compute[251992]: 2025-12-06 07:54:36.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:54:36 compute-0 nova_compute[251992]: 2025-12-06 07:54:36.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:54:36 compute-0 nova_compute[251992]: 2025-12-06 07:54:36.682 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:54:36 compute-0 nova_compute[251992]: 2025-12-06 07:54:36.683 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:54:36.715 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:54:36 compute-0 nova_compute[251992]: 2025-12-06 07:54:36.724 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:36.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:37 compute-0 ceph-mon[74339]: pgmap v3042: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 638 KiB/s rd, 28 KiB/s wr, 53 op/s
Dec 06 07:54:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3167987060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 638 KiB/s rd, 28 KiB/s wr, 53 op/s
Dec 06 07:54:37 compute-0 nova_compute[251992]: 2025-12-06 07:54:37.429 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:37 compute-0 nova_compute[251992]: 2025-12-06 07:54:37.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:37 compute-0 nova_compute[251992]: 2025-12-06 07:54:37.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:37.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:38 compute-0 ceph-mon[74339]: pgmap v3043: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 638 KiB/s rd, 28 KiB/s wr, 53 op/s
Dec 06 07:54:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:54:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:38.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:54:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 549 KiB/s rd, 26 KiB/s wr, 47 op/s
Dec 06 07:54:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:39.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:40 compute-0 ceph-mon[74339]: pgmap v3044: 305 pgs: 305 active+clean; 202 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 549 KiB/s rd, 26 KiB/s wr, 47 op/s
Dec 06 07:54:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:40.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 145 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 490 KiB/s rd, 22 KiB/s wr, 52 op/s
Dec 06 07:54:41 compute-0 podman[366324]: 2025-12-06 07:54:41.488042161 +0000 UTC m=+0.132154996 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 07:54:41 compute-0 nova_compute[251992]: 2025-12-06 07:54:41.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3558779861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:41 compute-0 nova_compute[251992]: 2025-12-06 07:54:41.726 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:54:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:41.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:54:42 compute-0 nova_compute[251992]: 2025-12-06 07:54:42.431 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:42.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:42 compute-0 ceph-mon[74339]: pgmap v3045: 305 pgs: 305 active+clean; 145 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 490 KiB/s rd, 22 KiB/s wr, 52 op/s
Dec 06 07:54:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:54:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:54:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:54:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:54:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:54:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:54:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 145 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 129 KiB/s rd, 11 KiB/s wr, 24 op/s
Dec 06 07:54:43 compute-0 nova_compute[251992]: 2025-12-06 07:54:43.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:54:43 compute-0 nova_compute[251992]: 2025-12-06 07:54:43.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:54:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:43.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2838096353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:54:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:44.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:45 compute-0 ceph-mon[74339]: pgmap v3046: 305 pgs: 305 active+clean; 145 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 129 KiB/s rd, 11 KiB/s wr, 24 op/s
Dec 06 07:54:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 138 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 392 KiB/s wr, 48 op/s
Dec 06 07:54:45 compute-0 sudo[366352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:45 compute-0 sudo[366352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:45 compute-0 sudo[366352]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:45 compute-0 sudo[366377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:45 compute-0 sudo[366377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:45 compute-0 sudo[366377]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:45.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1223599382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:54:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1223599382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:54:46 compute-0 nova_compute[251992]: 2025-12-06 07:54:46.727 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:46.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:47 compute-0 ceph-mon[74339]: pgmap v3047: 305 pgs: 305 active+clean; 138 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 392 KiB/s wr, 48 op/s
Dec 06 07:54:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 159 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 1.4 MiB/s wr, 59 op/s
Dec 06 07:54:47 compute-0 podman[366403]: 2025-12-06 07:54:47.414223305 +0000 UTC m=+0.073811172 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:54:47 compute-0 nova_compute[251992]: 2025-12-06 07:54:47.433 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:47 compute-0 podman[366404]: 2025-12-06 07:54:47.444169703 +0000 UTC m=+0.103743720 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:54:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:47.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2161790072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:54:48 compute-0 ceph-mon[74339]: pgmap v3048: 305 pgs: 305 active+clean; 159 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 1.4 MiB/s wr, 59 op/s
Dec 06 07:54:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1092582499' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:54:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1092582499' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:54:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3787459085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:54:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:48.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Dec 06 07:54:49 compute-0 sudo[366444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:49 compute-0 sudo[366444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:49 compute-0 sudo[366444]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:49 compute-0 sudo[366469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:54:49 compute-0 sudo[366469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:49 compute-0 sudo[366469]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:49 compute-0 sudo[366494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:49 compute-0 sudo[366494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:49 compute-0 sudo[366494]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:49 compute-0 sudo[366519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:54:49 compute-0 sudo[366519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:50.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:50 compute-0 sudo[366519]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:50 compute-0 ceph-mon[74339]: pgmap v3049: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Dec 06 07:54:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:54:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:54:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:54:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:54:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:54:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:54:50 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 45d01b62-9059-422e-b203-628f4088a6bd does not exist
Dec 06 07:54:50 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev fe1d5fde-19f5-4d2b-8b24-d3cdfc121250 does not exist
Dec 06 07:54:50 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 022dfa31-3152-4e96-84f1-8e0be680f902 does not exist
Dec 06 07:54:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:54:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:54:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:54:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:54:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:54:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:54:50 compute-0 sudo[366575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:50 compute-0 sudo[366575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:50 compute-0 sudo[366575]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:50 compute-0 sudo[366600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:54:50 compute-0 sudo[366600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:50 compute-0 sudo[366600]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:50 compute-0 sudo[366625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:50 compute-0 sudo[366625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:50 compute-0 sudo[366625]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:50 compute-0 sudo[366650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:54:50 compute-0 sudo[366650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:50.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:51 compute-0 podman[366714]: 2025-12-06 07:54:51.018342374 +0000 UTC m=+0.037179415 container create b84144e63b7d0adc6c277d9fd3edb659efa00a4e7c984997866ac1e8f7842511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:54:51 compute-0 systemd[1]: Started libpod-conmon-b84144e63b7d0adc6c277d9fd3edb659efa00a4e7c984997866ac1e8f7842511.scope.
Dec 06 07:54:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:54:51 compute-0 podman[366714]: 2025-12-06 07:54:51.002160067 +0000 UTC m=+0.020997128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:54:51 compute-0 podman[366714]: 2025-12-06 07:54:51.10528505 +0000 UTC m=+0.124122121 container init b84144e63b7d0adc6c277d9fd3edb659efa00a4e7c984997866ac1e8f7842511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:54:51 compute-0 podman[366714]: 2025-12-06 07:54:51.117447368 +0000 UTC m=+0.136284409 container start b84144e63b7d0adc6c277d9fd3edb659efa00a4e7c984997866ac1e8f7842511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:54:51 compute-0 podman[366714]: 2025-12-06 07:54:51.121148587 +0000 UTC m=+0.139985628 container attach b84144e63b7d0adc6c277d9fd3edb659efa00a4e7c984997866ac1e8f7842511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 07:54:51 compute-0 flamboyant_shamir[366731]: 167 167
Dec 06 07:54:51 compute-0 systemd[1]: libpod-b84144e63b7d0adc6c277d9fd3edb659efa00a4e7c984997866ac1e8f7842511.scope: Deactivated successfully.
Dec 06 07:54:51 compute-0 podman[366714]: 2025-12-06 07:54:51.124751075 +0000 UTC m=+0.143588126 container died b84144e63b7d0adc6c277d9fd3edb659efa00a4e7c984997866ac1e8f7842511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 07:54:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6e6df575cd86d134142f6a12e4f5f23948893d90ad577fec267480f0065fdc5-merged.mount: Deactivated successfully.
Dec 06 07:54:51 compute-0 podman[366714]: 2025-12-06 07:54:51.165896485 +0000 UTC m=+0.184733526 container remove b84144e63b7d0adc6c277d9fd3edb659efa00a4e7c984997866ac1e8f7842511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:54:51 compute-0 systemd[1]: libpod-conmon-b84144e63b7d0adc6c277d9fd3edb659efa00a4e7c984997866ac1e8f7842511.scope: Deactivated successfully.
Dec 06 07:54:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Dec 06 07:54:51 compute-0 podman[366754]: 2025-12-06 07:54:51.314594827 +0000 UTC m=+0.037883413 container create fbc710e07b073ffae02d8f3101b09aa6c0d38592cc8229b9ecd0830981ebc660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:54:51 compute-0 systemd[1]: Started libpod-conmon-fbc710e07b073ffae02d8f3101b09aa6c0d38592cc8229b9ecd0830981ebc660.scope.
Dec 06 07:54:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1643f02e98971515a8eff2c332b3d4fceab9468d72c96a07b8ce11a7eaa553e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1643f02e98971515a8eff2c332b3d4fceab9468d72c96a07b8ce11a7eaa553e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1643f02e98971515a8eff2c332b3d4fceab9468d72c96a07b8ce11a7eaa553e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1643f02e98971515a8eff2c332b3d4fceab9468d72c96a07b8ce11a7eaa553e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1643f02e98971515a8eff2c332b3d4fceab9468d72c96a07b8ce11a7eaa553e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:51 compute-0 podman[366754]: 2025-12-06 07:54:51.389340054 +0000 UTC m=+0.112628630 container init fbc710e07b073ffae02d8f3101b09aa6c0d38592cc8229b9ecd0830981ebc660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 07:54:51 compute-0 podman[366754]: 2025-12-06 07:54:51.299368087 +0000 UTC m=+0.022656693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:54:51 compute-0 podman[366754]: 2025-12-06 07:54:51.395421198 +0000 UTC m=+0.118709784 container start fbc710e07b073ffae02d8f3101b09aa6c0d38592cc8229b9ecd0830981ebc660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 07:54:51 compute-0 podman[366754]: 2025-12-06 07:54:51.398387458 +0000 UTC m=+0.121676044 container attach fbc710e07b073ffae02d8f3101b09aa6c0d38592cc8229b9ecd0830981ebc660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 07:54:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:54:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:54:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:54:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:54:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:54:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:54:51 compute-0 nova_compute[251992]: 2025-12-06 07:54:51.729 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:52.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:52 compute-0 crazy_heisenberg[366770]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:54:52 compute-0 crazy_heisenberg[366770]: --> relative data size: 1.0
Dec 06 07:54:52 compute-0 crazy_heisenberg[366770]: --> All data devices are unavailable
Dec 06 07:54:52 compute-0 systemd[1]: libpod-fbc710e07b073ffae02d8f3101b09aa6c0d38592cc8229b9ecd0830981ebc660.scope: Deactivated successfully.
Dec 06 07:54:52 compute-0 podman[366754]: 2025-12-06 07:54:52.199977028 +0000 UTC m=+0.923265604 container died fbc710e07b073ffae02d8f3101b09aa6c0d38592cc8229b9ecd0830981ebc660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 07:54:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1643f02e98971515a8eff2c332b3d4fceab9468d72c96a07b8ce11a7eaa553e1-merged.mount: Deactivated successfully.
Dec 06 07:54:52 compute-0 podman[366754]: 2025-12-06 07:54:52.257193932 +0000 UTC m=+0.980482518 container remove fbc710e07b073ffae02d8f3101b09aa6c0d38592cc8229b9ecd0830981ebc660 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:54:52 compute-0 systemd[1]: libpod-conmon-fbc710e07b073ffae02d8f3101b09aa6c0d38592cc8229b9ecd0830981ebc660.scope: Deactivated successfully.
Dec 06 07:54:52 compute-0 sudo[366650]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:52 compute-0 sudo[366799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:52 compute-0 sudo[366799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:52 compute-0 sudo[366799]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:52 compute-0 sudo[366824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:54:52 compute-0 sudo[366824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:52 compute-0 sudo[366824]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:52 compute-0 nova_compute[251992]: 2025-12-06 07:54:52.435 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:52 compute-0 sudo[366849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:52 compute-0 sudo[366849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:52 compute-0 sudo[366849]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:52 compute-0 ceph-mon[74339]: pgmap v3050: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Dec 06 07:54:52 compute-0 sudo[366874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:54:52 compute-0 sudo[366874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:52.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:52 compute-0 podman[366938]: 2025-12-06 07:54:52.887057189 +0000 UTC m=+0.046197158 container create dc902c1a7d92d717f962037613410d28f0aa4bef0241663dead8738795925df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 07:54:52 compute-0 systemd[1]: Started libpod-conmon-dc902c1a7d92d717f962037613410d28f0aa4bef0241663dead8738795925df8.scope.
Dec 06 07:54:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:54:52 compute-0 podman[366938]: 2025-12-06 07:54:52.954717204 +0000 UTC m=+0.113857173 container init dc902c1a7d92d717f962037613410d28f0aa4bef0241663dead8738795925df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_johnson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 07:54:52 compute-0 podman[366938]: 2025-12-06 07:54:52.963986274 +0000 UTC m=+0.123126233 container start dc902c1a7d92d717f962037613410d28f0aa4bef0241663dead8738795925df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 07:54:52 compute-0 podman[366938]: 2025-12-06 07:54:52.868283602 +0000 UTC m=+0.027423591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:54:52 compute-0 condescending_johnson[366955]: 167 167
Dec 06 07:54:52 compute-0 podman[366938]: 2025-12-06 07:54:52.967310004 +0000 UTC m=+0.126449993 container attach dc902c1a7d92d717f962037613410d28f0aa4bef0241663dead8738795925df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:54:52 compute-0 systemd[1]: libpod-dc902c1a7d92d717f962037613410d28f0aa4bef0241663dead8738795925df8.scope: Deactivated successfully.
Dec 06 07:54:52 compute-0 podman[366938]: 2025-12-06 07:54:52.9682581 +0000 UTC m=+0.127398059 container died dc902c1a7d92d717f962037613410d28f0aa4bef0241663dead8738795925df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_johnson, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:54:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fe4542a47ae0e87000a093f2682c09a7e36e81ac856977c4467b6cf5640c5e1-merged.mount: Deactivated successfully.
Dec 06 07:54:53 compute-0 podman[366938]: 2025-12-06 07:54:53.002063472 +0000 UTC m=+0.161203431 container remove dc902c1a7d92d717f962037613410d28f0aa4bef0241663dead8738795925df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_johnson, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 07:54:53 compute-0 systemd[1]: libpod-conmon-dc902c1a7d92d717f962037613410d28f0aa4bef0241663dead8738795925df8.scope: Deactivated successfully.
Dec 06 07:54:53 compute-0 podman[366978]: 2025-12-06 07:54:53.175172313 +0000 UTC m=+0.043791253 container create 9daf0eaace33bf0d913d3a0920f5d90e2f1ab691966587ac1350148ef5cdbe83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 07:54:53 compute-0 systemd[1]: Started libpod-conmon-9daf0eaace33bf0d913d3a0920f5d90e2f1ab691966587ac1350148ef5cdbe83.scope.
Dec 06 07:54:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d20a7d44a303c7616a6c19decda0fe31003fd8cefc22ce09aac62ed68c0e271c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d20a7d44a303c7616a6c19decda0fe31003fd8cefc22ce09aac62ed68c0e271c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d20a7d44a303c7616a6c19decda0fe31003fd8cefc22ce09aac62ed68c0e271c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d20a7d44a303c7616a6c19decda0fe31003fd8cefc22ce09aac62ed68c0e271c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:53 compute-0 podman[366978]: 2025-12-06 07:54:53.152437469 +0000 UTC m=+0.021056429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:54:53 compute-0 podman[366978]: 2025-12-06 07:54:53.258310166 +0000 UTC m=+0.126929126 container init 9daf0eaace33bf0d913d3a0920f5d90e2f1ab691966587ac1350148ef5cdbe83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bose, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Dec 06 07:54:53 compute-0 podman[366978]: 2025-12-06 07:54:53.264148833 +0000 UTC m=+0.132767773 container start 9daf0eaace33bf0d913d3a0920f5d90e2f1ab691966587ac1350148ef5cdbe83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec 06 07:54:53 compute-0 podman[366978]: 2025-12-06 07:54:53.269225371 +0000 UTC m=+0.137844331 container attach 9daf0eaace33bf0d913d3a0920f5d90e2f1ab691966587ac1350148ef5cdbe83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 07:54:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1019 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Dec 06 07:54:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:53 compute-0 elated_bose[366995]: {
Dec 06 07:54:53 compute-0 elated_bose[366995]:     "0": [
Dec 06 07:54:53 compute-0 elated_bose[366995]:         {
Dec 06 07:54:53 compute-0 elated_bose[366995]:             "devices": [
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "/dev/loop3"
Dec 06 07:54:53 compute-0 elated_bose[366995]:             ],
Dec 06 07:54:53 compute-0 elated_bose[366995]:             "lv_name": "ceph_lv0",
Dec 06 07:54:53 compute-0 elated_bose[366995]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:54:53 compute-0 elated_bose[366995]:             "lv_size": "7511998464",
Dec 06 07:54:53 compute-0 elated_bose[366995]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:54:53 compute-0 elated_bose[366995]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:54:53 compute-0 elated_bose[366995]:             "name": "ceph_lv0",
Dec 06 07:54:53 compute-0 elated_bose[366995]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:54:53 compute-0 elated_bose[366995]:             "tags": {
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "ceph.cluster_name": "ceph",
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "ceph.crush_device_class": "",
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "ceph.encrypted": "0",
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "ceph.osd_id": "0",
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "ceph.type": "block",
Dec 06 07:54:53 compute-0 elated_bose[366995]:                 "ceph.vdo": "0"
Dec 06 07:54:53 compute-0 elated_bose[366995]:             },
Dec 06 07:54:53 compute-0 elated_bose[366995]:             "type": "block",
Dec 06 07:54:53 compute-0 elated_bose[366995]:             "vg_name": "ceph_vg0"
Dec 06 07:54:53 compute-0 elated_bose[366995]:         }
Dec 06 07:54:53 compute-0 elated_bose[366995]:     ]
Dec 06 07:54:53 compute-0 elated_bose[366995]: }
Dec 06 07:54:54 compute-0 systemd[1]: libpod-9daf0eaace33bf0d913d3a0920f5d90e2f1ab691966587ac1350148ef5cdbe83.scope: Deactivated successfully.
Dec 06 07:54:54 compute-0 podman[366978]: 2025-12-06 07:54:54.009130046 +0000 UTC m=+0.877748996 container died 9daf0eaace33bf0d913d3a0920f5d90e2f1ab691966587ac1350148ef5cdbe83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bose, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:54:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:54.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d20a7d44a303c7616a6c19decda0fe31003fd8cefc22ce09aac62ed68c0e271c-merged.mount: Deactivated successfully.
Dec 06 07:54:54 compute-0 podman[366978]: 2025-12-06 07:54:54.09858324 +0000 UTC m=+0.967202200 container remove 9daf0eaace33bf0d913d3a0920f5d90e2f1ab691966587ac1350148ef5cdbe83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:54:54 compute-0 systemd[1]: libpod-conmon-9daf0eaace33bf0d913d3a0920f5d90e2f1ab691966587ac1350148ef5cdbe83.scope: Deactivated successfully.
Dec 06 07:54:54 compute-0 sudo[366874]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:54 compute-0 sudo[367018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:54 compute-0 sudo[367018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:54 compute-0 sudo[367018]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:54 compute-0 sudo[367043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:54:54 compute-0 sudo[367043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:54 compute-0 sudo[367043]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:54 compute-0 sudo[367068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:54 compute-0 sudo[367068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:54 compute-0 sudo[367068]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:54 compute-0 sudo[367093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:54:54 compute-0 sudo[367093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:54 compute-0 ceph-mon[74339]: pgmap v3051: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1019 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Dec 06 07:54:54 compute-0 podman[367158]: 2025-12-06 07:54:54.737077999 +0000 UTC m=+0.041083610 container create 3f3cef0e46c73cf3356ff56563d51036a482b38e9a81dfa0e0add66016f242ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:54:54 compute-0 systemd[1]: Started libpod-conmon-3f3cef0e46c73cf3356ff56563d51036a482b38e9a81dfa0e0add66016f242ba.scope.
Dec 06 07:54:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:54:54 compute-0 podman[367158]: 2025-12-06 07:54:54.717884611 +0000 UTC m=+0.021890232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:54:54 compute-0 podman[367158]: 2025-12-06 07:54:54.832590276 +0000 UTC m=+0.136595977 container init 3f3cef0e46c73cf3356ff56563d51036a482b38e9a81dfa0e0add66016f242ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 07:54:54 compute-0 podman[367158]: 2025-12-06 07:54:54.8401449 +0000 UTC m=+0.144150541 container start 3f3cef0e46c73cf3356ff56563d51036a482b38e9a81dfa0e0add66016f242ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:54:54 compute-0 podman[367158]: 2025-12-06 07:54:54.845264708 +0000 UTC m=+0.149270369 container attach 3f3cef0e46c73cf3356ff56563d51036a482b38e9a81dfa0e0add66016f242ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec 06 07:54:54 compute-0 pensive_pike[367175]: 167 167
Dec 06 07:54:54 compute-0 systemd[1]: libpod-3f3cef0e46c73cf3356ff56563d51036a482b38e9a81dfa0e0add66016f242ba.scope: Deactivated successfully.
Dec 06 07:54:54 compute-0 podman[367158]: 2025-12-06 07:54:54.848151676 +0000 UTC m=+0.152157317 container died 3f3cef0e46c73cf3356ff56563d51036a482b38e9a81dfa0e0add66016f242ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 07:54:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:54.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6034fb056d58074d2c85e5c516cef2196d25e76c309ee55ebe0b3e6d2350696c-merged.mount: Deactivated successfully.
Dec 06 07:54:54 compute-0 podman[367158]: 2025-12-06 07:54:54.896743987 +0000 UTC m=+0.200749618 container remove 3f3cef0e46c73cf3356ff56563d51036a482b38e9a81dfa0e0add66016f242ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:54:54 compute-0 systemd[1]: libpod-conmon-3f3cef0e46c73cf3356ff56563d51036a482b38e9a81dfa0e0add66016f242ba.scope: Deactivated successfully.
Dec 06 07:54:55 compute-0 podman[367200]: 2025-12-06 07:54:55.105280155 +0000 UTC m=+0.059777475 container create a4c5664e399f9330096e389f2d85758f77aae18e5a3e0e5342e671b6f70ee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_yalow, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:54:55 compute-0 systemd[1]: Started libpod-conmon-a4c5664e399f9330096e389f2d85758f77aae18e5a3e0e5342e671b6f70ee78e.scope.
Dec 06 07:54:55 compute-0 podman[367200]: 2025-12-06 07:54:55.073596339 +0000 UTC m=+0.028093709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:54:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e45a84cc60bd532dda5e060cf301be7c4ee4e149355b263f267daab2fe317/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e45a84cc60bd532dda5e060cf301be7c4ee4e149355b263f267daab2fe317/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e45a84cc60bd532dda5e060cf301be7c4ee4e149355b263f267daab2fe317/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e45a84cc60bd532dda5e060cf301be7c4ee4e149355b263f267daab2fe317/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:54:55 compute-0 podman[367200]: 2025-12-06 07:54:55.204176203 +0000 UTC m=+0.158673503 container init a4c5664e399f9330096e389f2d85758f77aae18e5a3e0e5342e671b6f70ee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_yalow, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Dec 06 07:54:55 compute-0 podman[367200]: 2025-12-06 07:54:55.211154121 +0000 UTC m=+0.165651411 container start a4c5664e399f9330096e389f2d85758f77aae18e5a3e0e5342e671b6f70ee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 07:54:55 compute-0 podman[367200]: 2025-12-06 07:54:55.228399987 +0000 UTC m=+0.182897267 container attach a4c5664e399f9330096e389f2d85758f77aae18e5a3e0e5342e671b6f70ee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:54:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Dec 06 07:54:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:56.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:56 compute-0 intelligent_yalow[367217]: {
Dec 06 07:54:56 compute-0 intelligent_yalow[367217]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:54:56 compute-0 intelligent_yalow[367217]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:54:56 compute-0 intelligent_yalow[367217]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:54:56 compute-0 intelligent_yalow[367217]:         "osd_id": 0,
Dec 06 07:54:56 compute-0 intelligent_yalow[367217]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:54:56 compute-0 intelligent_yalow[367217]:         "type": "bluestore"
Dec 06 07:54:56 compute-0 intelligent_yalow[367217]:     }
Dec 06 07:54:56 compute-0 intelligent_yalow[367217]: }
Dec 06 07:54:56 compute-0 systemd[1]: libpod-a4c5664e399f9330096e389f2d85758f77aae18e5a3e0e5342e671b6f70ee78e.scope: Deactivated successfully.
Dec 06 07:54:56 compute-0 podman[367200]: 2025-12-06 07:54:56.058884116 +0000 UTC m=+1.013381396 container died a4c5664e399f9330096e389f2d85758f77aae18e5a3e0e5342e671b6f70ee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_yalow, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:54:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-272e45a84cc60bd532dda5e060cf301be7c4ee4e149355b263f267daab2fe317-merged.mount: Deactivated successfully.
Dec 06 07:54:56 compute-0 podman[367200]: 2025-12-06 07:54:56.144229249 +0000 UTC m=+1.098726529 container remove a4c5664e399f9330096e389f2d85758f77aae18e5a3e0e5342e671b6f70ee78e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_yalow, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:54:56 compute-0 systemd[1]: libpod-conmon-a4c5664e399f9330096e389f2d85758f77aae18e5a3e0e5342e671b6f70ee78e.scope: Deactivated successfully.
Dec 06 07:54:56 compute-0 sudo[367093]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:54:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:54:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:54:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:54:56 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 70282e48-ff97-4e2d-8a2a-042b7fe85c5c does not exist
Dec 06 07:54:56 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev eaec8141-e7a9-44af-b953-4b4ea6fbf8d5 does not exist
Dec 06 07:54:56 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev cbd27ee1-fc07-4a93-b3b4-ed983f6d2b83 does not exist
Dec 06 07:54:56 compute-0 sudo[367252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:54:56 compute-0 sudo[367252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:56 compute-0 sudo[367252]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:56 compute-0 sudo[367277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:54:56 compute-0 sudo[367277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:54:56 compute-0 sudo[367277]: pam_unix(sudo:session): session closed for user root
Dec 06 07:54:56 compute-0 nova_compute[251992]: 2025-12-06 07:54:56.731 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:56.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:57 compute-0 ceph-mon[74339]: pgmap v3052: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Dec 06 07:54:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:54:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:54:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 119 op/s
Dec 06 07:54:57 compute-0 nova_compute[251992]: 2025-12-06 07:54:57.438 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:54:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:54:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:54:58.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:54:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:54:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:54:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:54:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:54:58.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:54:59 compute-0 ceph-mon[74339]: pgmap v3053: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 119 op/s
Dec 06 07:54:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 409 KiB/s wr, 96 op/s
Dec 06 07:55:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:00.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:00.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:01 compute-0 ceph-mon[74339]: pgmap v3054: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 409 KiB/s wr, 96 op/s
Dec 06 07:55:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 91 op/s
Dec 06 07:55:01 compute-0 nova_compute[251992]: 2025-12-06 07:55:01.732 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:02.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:02 compute-0 ceph-mon[74339]: pgmap v3055: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 91 op/s
Dec 06 07:55:02 compute-0 nova_compute[251992]: 2025-12-06 07:55:02.441 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:02.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 988 KiB/s rd, 85 B/s wr, 38 op/s
Dec 06 07:55:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:03.865 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:03.865 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:03.865 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:04.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:04 compute-0 ceph-mon[74339]: pgmap v3056: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 988 KiB/s rd, 85 B/s wr, 38 op/s
Dec 06 07:55:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:04.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 186 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 70 op/s
Dec 06 07:55:05 compute-0 sudo[367307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:55:05 compute-0 sudo[367307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:05 compute-0 sudo[367307]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:05 compute-0 sudo[367332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:55:05 compute-0 sudo[367332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:05 compute-0 sudo[367332]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:06.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:06 compute-0 ceph-mon[74339]: pgmap v3057: 305 pgs: 305 active+clean; 186 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 70 op/s
Dec 06 07:55:06 compute-0 nova_compute[251992]: 2025-12-06 07:55:06.734 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:06.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:07.049 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:55:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:07.050 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:55:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:07.051 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:07 compute-0 nova_compute[251992]: 2025-12-06 07:55:07.051 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 442 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 06 07:55:07 compute-0 nova_compute[251992]: 2025-12-06 07:55:07.442 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:08.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:08.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:09 compute-0 ceph-mon[74339]: pgmap v3058: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 442 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 06 07:55:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:55:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:55:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:10.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:55:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/548726769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:55:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/548726769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:55:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:10.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 07:55:11 compute-0 ceph-mon[74339]: pgmap v3059: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 07:55:11 compute-0 nova_compute[251992]: 2025-12-06 07:55:11.737 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:12.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:12 compute-0 nova_compute[251992]: 2025-12-06 07:55:12.444 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:12 compute-0 podman[367360]: 2025-12-06 07:55:12.535191392 +0000 UTC m=+0.168189610 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:55:12 compute-0 ceph-mon[74339]: pgmap v3060: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 07:55:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/264023895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:12.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:55:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:55:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:55:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:55:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:55:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:55:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 07:55:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:55:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3405073010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3405073010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:14.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:14.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:14 compute-0 ceph-mon[74339]: pgmap v3061: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 07:55:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 231 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 3.5 MiB/s wr, 81 op/s
Dec 06 07:55:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:16.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:16 compute-0 nova_compute[251992]: 2025-12-06 07:55:16.740 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:16.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:16 compute-0 ceph-mon[74339]: pgmap v3062: 305 pgs: 305 active+clean; 231 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 3.5 MiB/s wr, 81 op/s
Dec 06 07:55:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 128 KiB/s rd, 2.6 MiB/s wr, 60 op/s
Dec 06 07:55:17 compute-0 nova_compute[251992]: 2025-12-06 07:55:17.446 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1464405074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2755240412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:18.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:18 compute-0 podman[367390]: 2025-12-06 07:55:18.403156563 +0000 UTC m=+0.062693243 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 07:55:18 compute-0 podman[367389]: 2025-12-06 07:55:18.403145202 +0000 UTC m=+0.063676099 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 07:55:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:55:18
Dec 06 07:55:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:55:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:55:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'vms', '.mgr']
Dec 06 07:55:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:55:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:18.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:19 compute-0 ceph-mon[74339]: pgmap v3063: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 128 KiB/s rd, 2.6 MiB/s wr, 60 op/s
Dec 06 07:55:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 07:55:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:20.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:55:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:20.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:55:21 compute-0 ceph-mon[74339]: pgmap v3064: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 07:55:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Dec 06 07:55:21 compute-0 nova_compute[251992]: 2025-12-06 07:55:21.741 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:22.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:22 compute-0 nova_compute[251992]: 2025-12-06 07:55:22.449 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:55:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1697 writes, 7577 keys, 1696 commit groups, 1.0 writes per commit group, ingest: 11.09 MB, 0.02 MB/s
                                           Interval WAL: 1697 writes, 1696 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     36.7      2.20              0.27        40    0.055       0      0       0.0       0.0
                                             L6      1/0   10.57 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.9     94.2     80.0      4.94              1.23        39    0.127    282K    21K       0.0       0.0
                                            Sum      1/0   10.57 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.9     65.2     66.7      7.14              1.50        79    0.090    282K    21K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6    126.9    127.4      0.62              0.25        12    0.052     58K   3125       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     94.2     80.0      4.94              1.23        39    0.127    282K    21K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     36.8      2.20              0.27        39    0.056       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.079, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.46 GB write, 0.09 MB/s write, 0.45 GB read, 0.09 MB/s read, 7.1 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 51.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000438 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2968,49.56 MB,16.3036%) FilterBlock(80,785.23 KB,0.252247%) IndexBlock(80,1.29 MB,0.42298%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 07:55:22 compute-0 ovn_controller[147168]: 2025-12-06T07:55:22Z|00661|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 07:55:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:22.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:23 compute-0 ceph-mon[74339]: pgmap v3065: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Dec 06 07:55:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1818746968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Dec 06 07:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:55:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:55:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:55:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:24.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:55:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1116877406' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2797911486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3794574650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:24.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:25 compute-0 ceph-mon[74339]: pgmap v3066: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Dec 06 07:55:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 254 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.5 MiB/s wr, 98 op/s
Dec 06 07:55:25 compute-0 sudo[367433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:55:25 compute-0 sudo[367433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:25 compute-0 sudo[367433]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:25 compute-0 sudo[367458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:55:25 compute-0 sudo[367458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:25 compute-0 sudo[367458]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:55:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:26.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:55:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031712381351200448 of space, bias 1.0, pg target 0.9513714405360134 quantized to 32 (current 32)
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021625052345058625 of space, bias 1.0, pg target 0.6487515703517588 quantized to 32 (current 32)
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002275737774520752 of space, bias 1.0, pg target 0.6827213323562257 quantized to 32 (current 32)
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:55:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:55:26 compute-0 ceph-mon[74339]: pgmap v3067: 305 pgs: 305 active+clean; 254 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.5 MiB/s wr, 98 op/s
Dec 06 07:55:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Dec 06 07:55:26 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Dec 06 07:55:26 compute-0 nova_compute[251992]: 2025-12-06 07:55:26.779 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:26.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:55:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:55:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:55:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:55:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:55:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 254 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 857 KiB/s wr, 113 op/s
Dec 06 07:55:27 compute-0 nova_compute[251992]: 2025-12-06 07:55:27.452 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:27 compute-0 ceph-mon[74339]: osdmap e400: 3 total, 3 up, 3 in
Dec 06 07:55:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:28.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:28 compute-0 nova_compute[251992]: 2025-12-06 07:55:28.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:28 compute-0 nova_compute[251992]: 2025-12-06 07:55:28.686 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:28 compute-0 nova_compute[251992]: 2025-12-06 07:55:28.686 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:28 compute-0 nova_compute[251992]: 2025-12-06 07:55:28.686 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:28 compute-0 nova_compute[251992]: 2025-12-06 07:55:28.687 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:55:28 compute-0 nova_compute[251992]: 2025-12-06 07:55:28.687 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:28.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:55:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/791543045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.114 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.270 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.271 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4188MB free_disk=20.921722412109375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.271 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.272 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.326 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.326 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.344 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 07:55:29 compute-0 ceph-mon[74339]: pgmap v3069: 305 pgs: 305 active+clean; 254 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 857 KiB/s wr, 113 op/s
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.366 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.367 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.397 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.423 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.439 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:55:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3403846061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.872 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.878 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.905 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.907 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:55:29 compute-0 nova_compute[251992]: 2025-12-06 07:55:29.907 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:30.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/791543045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:30 compute-0 ceph-mon[74339]: pgmap v3070: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Dec 06 07:55:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3403846061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:30.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.0 MiB/s wr, 189 op/s
Dec 06 07:55:31 compute-0 nova_compute[251992]: 2025-12-06 07:55:31.783 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:55:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:32.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:55:32 compute-0 ceph-mon[74339]: pgmap v3071: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.0 MiB/s wr, 189 op/s
Dec 06 07:55:32 compute-0 nova_compute[251992]: 2025-12-06 07:55:32.454 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:32.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.1 MiB/s wr, 209 op/s
Dec 06 07:55:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:33 compute-0 nova_compute[251992]: 2025-12-06 07:55:33.902 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:34.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:34 compute-0 ceph-mon[74339]: pgmap v3072: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.1 MiB/s wr, 209 op/s
Dec 06 07:55:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:34.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 171 op/s
Dec 06 07:55:35 compute-0 nova_compute[251992]: 2025-12-06 07:55:35.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:36.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:36 compute-0 ceph-mon[74339]: pgmap v3073: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 171 op/s
Dec 06 07:55:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1621257542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:36 compute-0 nova_compute[251992]: 2025-12-06 07:55:36.824 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:36.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 298 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.5 MiB/s wr, 178 op/s
Dec 06 07:55:37 compute-0 nova_compute[251992]: 2025-12-06 07:55:37.456 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:37 compute-0 nova_compute[251992]: 2025-12-06 07:55:37.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:37 compute-0 nova_compute[251992]: 2025-12-06 07:55:37.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:55:37 compute-0 nova_compute[251992]: 2025-12-06 07:55:37.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:55:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3361064064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:38 compute-0 nova_compute[251992]: 2025-12-06 07:55:38.008 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:55:38 compute-0 nova_compute[251992]: 2025-12-06 07:55:38.009 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:38.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:38 compute-0 nova_compute[251992]: 2025-12-06 07:55:38.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:38 compute-0 ceph-mon[74339]: pgmap v3074: 305 pgs: 305 active+clean; 298 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.5 MiB/s wr, 178 op/s
Dec 06 07:55:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:38.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 318 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 4.2 MiB/s wr, 182 op/s
Dec 06 07:55:39 compute-0 nova_compute[251992]: 2025-12-06 07:55:39.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:40.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:40 compute-0 nova_compute[251992]: 2025-12-06 07:55:40.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:40 compute-0 ceph-mon[74339]: pgmap v3075: 305 pgs: 305 active+clean; 318 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 4.2 MiB/s wr, 182 op/s
Dec 06 07:55:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:40.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 411 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 6.6 MiB/s wr, 278 op/s
Dec 06 07:55:41 compute-0 nova_compute[251992]: 2025-12-06 07:55:41.825 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:42.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:42 compute-0 nova_compute[251992]: 2025-12-06 07:55:42.357 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:42 compute-0 nova_compute[251992]: 2025-12-06 07:55:42.357 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:42 compute-0 nova_compute[251992]: 2025-12-06 07:55:42.372 251996 DEBUG nova.compute.manager [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:55:42 compute-0 nova_compute[251992]: 2025-12-06 07:55:42.450 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:42 compute-0 nova_compute[251992]: 2025-12-06 07:55:42.451 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:42 compute-0 nova_compute[251992]: 2025-12-06 07:55:42.458 251996 DEBUG nova.virt.hardware [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:55:42 compute-0 nova_compute[251992]: 2025-12-06 07:55:42.458 251996 INFO nova.compute.claims [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:55:42 compute-0 nova_compute[251992]: 2025-12-06 07:55:42.463 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:42 compute-0 nova_compute[251992]: 2025-12-06 07:55:42.567 251996 DEBUG oslo_concurrency.processutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:42 compute-0 ceph-mon[74339]: pgmap v3076: 305 pgs: 305 active+clean; 411 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 6.6 MiB/s wr, 278 op/s
Dec 06 07:55:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:42.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:55:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1319919641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.020 251996 DEBUG oslo_concurrency.processutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.026 251996 DEBUG nova.compute.provider_tree [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:55:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:55:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:55:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:55:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:55:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:55:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.050 251996 DEBUG nova.scheduler.client.report [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:55:43 compute-0 podman[367555]: 2025-12-06 07:55:43.053918677 +0000 UTC m=+0.105141259 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.088 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.088 251996 DEBUG nova.compute.manager [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.139 251996 DEBUG nova.compute.manager [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.140 251996 DEBUG nova.network.neutron [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.161 251996 INFO nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.179 251996 DEBUG nova.compute.manager [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.225 251996 INFO nova.virt.block_device [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Booting with volume 10679127-bb23-4af1-8eeb-3ae98e77d7db at /dev/vda
Dec 06 07:55:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 440 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.5 MiB/s wr, 235 op/s
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.675 251996 DEBUG os_brick.utils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.678 251996 DEBUG nova.policy [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e685a049c8a74aa8aea831fbdaf2acf8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6164fee998c94b71a37886fe42b4c56c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.677 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.688 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.688 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5029d7-3710-476e-a4d8-b7cb60ed622c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.689 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.696 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.696 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3523c7-1862-4ff2-a2f0-1886db03848a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.698 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.704 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.704 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[a815b3a5-34e7-48b7-b6b4-2ea847fe5c9f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.705 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[8caa914b-52c7-4906-a64a-0ce5cf8eebc6]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.706 251996 DEBUG oslo_concurrency.processutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.740 251996 DEBUG oslo_concurrency.processutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.743 251996 DEBUG os_brick.initiator.connectors.lightos [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.743 251996 DEBUG os_brick.initiator.connectors.lightos [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.743 251996 DEBUG os_brick.initiator.connectors.lightos [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.743 251996 DEBUG os_brick.utils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:55:43 compute-0 nova_compute[251992]: 2025-12-06 07:55:43.744 251996 DEBUG nova.virt.block_device [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating existing volume attachment record: 65097bc2-7207-442f-b133-04ee2fd4c64d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:55:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:55:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:44.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:55:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1319919641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:44 compute-0 nova_compute[251992]: 2025-12-06 07:55:44.433 251996 DEBUG nova.network.neutron [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Successfully created port: 7b695d2a-7c72-4125-a16a-a2d8b4342195 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:55:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:55:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/272212823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:44 compute-0 nova_compute[251992]: 2025-12-06 07:55:44.704 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:44.705 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:55:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:44.706 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:55:44 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:44.707 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:44 compute-0 nova_compute[251992]: 2025-12-06 07:55:44.920 251996 DEBUG nova.compute.manager [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:55:44 compute-0 nova_compute[251992]: 2025-12-06 07:55:44.922 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:55:44 compute-0 nova_compute[251992]: 2025-12-06 07:55:44.922 251996 INFO nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Creating image(s)
Dec 06 07:55:44 compute-0 nova_compute[251992]: 2025-12-06 07:55:44.922 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 07:55:44 compute-0 nova_compute[251992]: 2025-12-06 07:55:44.923 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Ensure instance console log exists: /var/lib/nova/instances/d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:55:44 compute-0 nova_compute[251992]: 2025-12-06 07:55:44.923 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:44 compute-0 nova_compute[251992]: 2025-12-06 07:55:44.923 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:44 compute-0 nova_compute[251992]: 2025-12-06 07:55:44.924 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:44.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:45 compute-0 ceph-mon[74339]: pgmap v3077: 305 pgs: 305 active+clean; 440 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.5 MiB/s wr, 235 op/s
Dec 06 07:55:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/272212823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 409 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.6 MiB/s wr, 234 op/s
Dec 06 07:55:45 compute-0 nova_compute[251992]: 2025-12-06 07:55:45.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:55:45 compute-0 nova_compute[251992]: 2025-12-06 07:55:45.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:55:45 compute-0 sudo[367593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:55:45 compute-0 sudo[367593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:45 compute-0 nova_compute[251992]: 2025-12-06 07:55:45.780 251996 DEBUG nova.network.neutron [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Successfully updated port: 7b695d2a-7c72-4125-a16a-a2d8b4342195 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:55:45 compute-0 sudo[367593]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:45 compute-0 sudo[367618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:55:45 compute-0 sudo[367618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:45 compute-0 sudo[367618]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:45 compute-0 nova_compute[251992]: 2025-12-06 07:55:45.906 251996 DEBUG nova.compute.manager [req-a31ee41a-671f-4b02-af1f-c63aff3e17eb req-ad0ac82b-6f81-494d-9d6c-bf42ce383fe6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:55:45 compute-0 nova_compute[251992]: 2025-12-06 07:55:45.906 251996 DEBUG nova.compute.manager [req-a31ee41a-671f-4b02-af1f-c63aff3e17eb req-ad0ac82b-6f81-494d-9d6c-bf42ce383fe6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing instance network info cache due to event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:55:45 compute-0 nova_compute[251992]: 2025-12-06 07:55:45.906 251996 DEBUG oslo_concurrency.lockutils [req-a31ee41a-671f-4b02-af1f-c63aff3e17eb req-ad0ac82b-6f81-494d-9d6c-bf42ce383fe6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:55:45 compute-0 nova_compute[251992]: 2025-12-06 07:55:45.906 251996 DEBUG oslo_concurrency.lockutils [req-a31ee41a-671f-4b02-af1f-c63aff3e17eb req-ad0ac82b-6f81-494d-9d6c-bf42ce383fe6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:55:45 compute-0 nova_compute[251992]: 2025-12-06 07:55:45.907 251996 DEBUG nova.network.neutron [req-a31ee41a-671f-4b02-af1f-c63aff3e17eb req-ad0ac82b-6f81-494d-9d6c-bf42ce383fe6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:55:45 compute-0 nova_compute[251992]: 2025-12-06 07:55:45.909 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:55:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:46.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:46 compute-0 nova_compute[251992]: 2025-12-06 07:55:46.382 251996 DEBUG nova.network.neutron [req-a31ee41a-671f-4b02-af1f-c63aff3e17eb req-ad0ac82b-6f81-494d-9d6c-bf42ce383fe6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:55:46 compute-0 ceph-mon[74339]: pgmap v3078: 305 pgs: 305 active+clean; 409 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.6 MiB/s wr, 234 op/s
Dec 06 07:55:46 compute-0 nova_compute[251992]: 2025-12-06 07:55:46.622 251996 DEBUG nova.network.neutron [req-a31ee41a-671f-4b02-af1f-c63aff3e17eb req-ad0ac82b-6f81-494d-9d6c-bf42ce383fe6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:55:46 compute-0 nova_compute[251992]: 2025-12-06 07:55:46.685 251996 DEBUG oslo_concurrency.lockutils [req-a31ee41a-671f-4b02-af1f-c63aff3e17eb req-ad0ac82b-6f81-494d-9d6c-bf42ce383fe6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:55:46 compute-0 nova_compute[251992]: 2025-12-06 07:55:46.686 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquired lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:55:46 compute-0 nova_compute[251992]: 2025-12-06 07:55:46.686 251996 DEBUG nova.network.neutron [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:55:46 compute-0 nova_compute[251992]: 2025-12-06 07:55:46.828 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:46.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:46 compute-0 nova_compute[251992]: 2025-12-06 07:55:46.939 251996 DEBUG nova.network.neutron [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:55:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 389 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 212 op/s
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.465 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/80759059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1452116217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.878 251996 DEBUG nova.network.neutron [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating instance_info_cache with network_info: [{"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.953 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Releasing lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.954 251996 DEBUG nova.compute.manager [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Instance network_info: |[{"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.959 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Start _get_guest_xml network_info=[{"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-10679127-bb23-4af1-8eeb-3ae98e77d7db', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '10679127-bb23-4af1-8eeb-3ae98e77d7db', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'd7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2', 'attached_at': '', 'detached_at': '', 'volume_id': '10679127-bb23-4af1-8eeb-3ae98e77d7db', 'serial': '10679127-bb23-4af1-8eeb-3ae98e77d7db'}, 'attachment_id': '65097bc2-7207-442f-b133-04ee2fd4c64d', 'guest_format': None, 'delete_on_termination': False, 'disk_bus': 'virtio', 'boot_index': 0, 'device_type': 'disk', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.964 251996 WARNING nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.969 251996 DEBUG nova.virt.libvirt.host [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.969 251996 DEBUG nova.virt.libvirt.host [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.974 251996 DEBUG nova.virt.libvirt.host [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.975 251996 DEBUG nova.virt.libvirt.host [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.977 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.978 251996 DEBUG nova.virt.hardware [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.979 251996 DEBUG nova.virt.hardware [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.979 251996 DEBUG nova.virt.hardware [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.980 251996 DEBUG nova.virt.hardware [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.980 251996 DEBUG nova.virt.hardware [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.980 251996 DEBUG nova.virt.hardware [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.981 251996 DEBUG nova.virt.hardware [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.981 251996 DEBUG nova.virt.hardware [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.982 251996 DEBUG nova.virt.hardware [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.982 251996 DEBUG nova.virt.hardware [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:55:47 compute-0 nova_compute[251992]: 2025-12-06 07:55:47.983 251996 DEBUG nova.virt.hardware [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.028 251996 DEBUG nova.storage.rbd_utils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] rbd image d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.032 251996 DEBUG oslo_concurrency.processutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:48.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:55:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/709697698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.475 251996 DEBUG oslo_concurrency.processutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.503 251996 DEBUG nova.virt.libvirt.vif [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:55:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1835922517',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1835922517',id=175,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMlCJgA181fL+hWV96XYAuaaRjR/DFcxIrENEwwuUSLNLg2Wo/zP2WcPtpxKQuFaV64lRGeBPzRnqkTHdlSql81bpyaGplyAnqRHnVLqVTwCxa7e5Tmw+I0TD65PH3Dpw==',key_name='tempest-TestInstancesWithCinderVolumes-1103529456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6164fee998c94b71a37886fe42b4c56c',ramdisk_id='',reservation_id='r-qq1sqzj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_proj
ect_name='tempest-TestInstancesWithCinderVolumes-1429596635',owner_user_name='tempest-TestInstancesWithCinderVolumes-1429596635-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:55:43Z,user_data=None,user_id='e685a049c8a74aa8aea831fbdaf2acf8',uuid=d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.504 251996 DEBUG nova.network.os_vif_util [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converting VIF {"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.505 251996 DEBUG nova.network.os_vif_util [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:c3:e9,bridge_name='br-int',has_traffic_filtering=True,id=7b695d2a-7c72-4125-a16a-a2d8b4342195,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b695d2a-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.506 251996 DEBUG nova.objects.instance [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'pci_devices' on Instance uuid d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.525 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:55:48 compute-0 nova_compute[251992]:   <uuid>d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2</uuid>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   <name>instance-000000af</name>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <nova:name>tempest-TestInstancesWithCinderVolumes-server-1835922517</nova:name>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:55:47</nova:creationTime>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <nova:user uuid="e685a049c8a74aa8aea831fbdaf2acf8">tempest-TestInstancesWithCinderVolumes-1429596635-project-member</nova:user>
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <nova:project uuid="6164fee998c94b71a37886fe42b4c56c">tempest-TestInstancesWithCinderVolumes-1429596635</nova:project>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <nova:port uuid="7b695d2a-7c72-4125-a16a-a2d8b4342195">
Dec 06 07:55:48 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <system>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <entry name="serial">d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2</entry>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <entry name="uuid">d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2</entry>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     </system>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   <os>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   </os>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   <features>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   </features>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2_disk.config">
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       </source>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <source protocol="rbd" name="volumes/volume-10679127-bb23-4af1-8eeb-3ae98e77d7db">
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       </source>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:55:48 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <serial>10679127-bb23-4af1-8eeb-3ae98e77d7db</serial>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:5c:c3:e9"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <target dev="tap7b695d2a-7c"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2/console.log" append="off"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <video>
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     </video>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:55:48 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:55:48 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:55:48 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:55:48 compute-0 nova_compute[251992]: </domain>
Dec 06 07:55:48 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.526 251996 DEBUG nova.compute.manager [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Preparing to wait for external event network-vif-plugged-7b695d2a-7c72-4125-a16a-a2d8b4342195 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.526 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.526 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.527 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.527 251996 DEBUG nova.virt.libvirt.vif [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:55:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1835922517',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1835922517',id=175,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMlCJgA181fL+hWV96XYAuaaRjR/DFcxIrENEwwuUSLNLg2Wo/zP2WcPtpxKQuFaV64lRGeBPzRnqkTHdlSql81bpyaGplyAnqRHnVLqVTwCxa7e5Tmw+I0TD65PH3Dpw==',key_name='tempest-TestInstancesWithCinderVolumes-1103529456',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6164fee998c94b71a37886fe42b4c56c',ramdisk_id='',reservation_id='r-qq1sqzj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',
owner_project_name='tempest-TestInstancesWithCinderVolumes-1429596635',owner_user_name='tempest-TestInstancesWithCinderVolumes-1429596635-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:55:43Z,user_data=None,user_id='e685a049c8a74aa8aea831fbdaf2acf8',uuid=d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.527 251996 DEBUG nova.network.os_vif_util [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converting VIF {"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.528 251996 DEBUG nova.network.os_vif_util [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:c3:e9,bridge_name='br-int',has_traffic_filtering=True,id=7b695d2a-7c72-4125-a16a-a2d8b4342195,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b695d2a-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.528 251996 DEBUG os_vif [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:c3:e9,bridge_name='br-int',has_traffic_filtering=True,id=7b695d2a-7c72-4125-a16a-a2d8b4342195,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b695d2a-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.529 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.529 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.529 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.533 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.533 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b695d2a-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.534 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b695d2a-7c, col_values=(('external_ids', {'iface-id': '7b695d2a-7c72-4125-a16a-a2d8b4342195', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:c3:e9', 'vm-uuid': 'd7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.535 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:48 compute-0 NetworkManager[48965]: <info>  [1765007748.5366] manager: (tap7b695d2a-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/296)
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.538 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.541 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.542 251996 INFO os_vif [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:c3:e9,bridge_name='br-int',has_traffic_filtering=True,id=7b695d2a-7c72-4125-a16a-a2d8b4342195,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b695d2a-7c')
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.596 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.597 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.597 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No VIF found with MAC fa:16:3e:5c:c3:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.597 251996 INFO nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Using config drive
Dec 06 07:55:48 compute-0 ceph-mon[74339]: pgmap v3079: 305 pgs: 305 active+clean; 389 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 212 op/s
Dec 06 07:55:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2459220151' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/709697698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:48 compute-0 nova_compute[251992]: 2025-12-06 07:55:48.625 251996 DEBUG nova.storage.rbd_utils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] rbd image d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:55:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:48.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.4 MiB/s wr, 186 op/s
Dec 06 07:55:49 compute-0 podman[367705]: 2025-12-06 07:55:49.39897769 +0000 UTC m=+0.053157355 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack 
Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:55:49 compute-0 podman[367706]: 2025-12-06 07:55:49.416898544 +0000 UTC m=+0.062877658 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 07:55:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/523815383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:55:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:50.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:50 compute-0 nova_compute[251992]: 2025-12-06 07:55:50.241 251996 INFO nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Creating config drive at /var/lib/nova/instances/d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2/disk.config
Dec 06 07:55:50 compute-0 nova_compute[251992]: 2025-12-06 07:55:50.247 251996 DEBUG oslo_concurrency.processutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx7xz23ow execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:50 compute-0 nova_compute[251992]: 2025-12-06 07:55:50.386 251996 DEBUG oslo_concurrency.processutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx7xz23ow" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:50 compute-0 nova_compute[251992]: 2025-12-06 07:55:50.417 251996 DEBUG nova.storage.rbd_utils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] rbd image d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:55:50 compute-0 nova_compute[251992]: 2025-12-06 07:55:50.421 251996 DEBUG oslo_concurrency.processutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2/disk.config d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:50 compute-0 nova_compute[251992]: 2025-12-06 07:55:50.575 251996 DEBUG oslo_concurrency.processutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2/disk.config d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:50 compute-0 nova_compute[251992]: 2025-12-06 07:55:50.576 251996 INFO nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Deleting local config drive /var/lib/nova/instances/d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2/disk.config because it was imported into RBD.
Dec 06 07:55:50 compute-0 kernel: tap7b695d2a-7c: entered promiscuous mode
Dec 06 07:55:50 compute-0 ceph-mon[74339]: pgmap v3080: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.4 MiB/s wr, 186 op/s
Dec 06 07:55:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1446130076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:50 compute-0 NetworkManager[48965]: <info>  [1765007750.6417] manager: (tap7b695d2a-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/297)
Dec 06 07:55:50 compute-0 ovn_controller[147168]: 2025-12-06T07:55:50Z|00662|binding|INFO|Claiming lport 7b695d2a-7c72-4125-a16a-a2d8b4342195 for this chassis.
Dec 06 07:55:50 compute-0 ovn_controller[147168]: 2025-12-06T07:55:50Z|00663|binding|INFO|7b695d2a-7c72-4125-a16a-a2d8b4342195: Claiming fa:16:3e:5c:c3:e9 10.100.0.7
Dec 06 07:55:50 compute-0 nova_compute[251992]: 2025-12-06 07:55:50.691 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:50 compute-0 nova_compute[251992]: 2025-12-06 07:55:50.698 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.707 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:c3:e9 10.100.0.7'], port_security=['fa:16:3e:5c:c3:e9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3764201-4b86-4407-84d2-684bd05a44b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6164fee998c94b71a37886fe42b4c56c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2bb7af25-e3c4-4687-888a-3caf6297e5c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a293aea-136f-4ea2-8198-6213071653ca, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=7b695d2a-7c72-4125-a16a-a2d8b4342195) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.710 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 7b695d2a-7c72-4125-a16a-a2d8b4342195 in datapath a3764201-4b86-4407-84d2-684bd05a44b3 bound to our chassis
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.713 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3764201-4b86-4407-84d2-684bd05a44b3
Dec 06 07:55:50 compute-0 systemd-machined[212986]: New machine qemu-82-instance-000000af.
Dec 06 07:55:50 compute-0 systemd-udevd[367797]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:55:50 compute-0 NetworkManager[48965]: <info>  [1765007750.7272] device (tap7b695d2a-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:55:50 compute-0 NetworkManager[48965]: <info>  [1765007750.7284] device (tap7b695d2a-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.726 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[60e08484-c9cf-4638-bffa-419e83524bf2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.731 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3764201-41 in ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:55:50 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-000000af.
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.734 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3764201-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.734 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[01110e06-d39c-48d8-9a9d-32c83c1ab3c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.735 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2c431b43-7ba5-4b24-ba3a-293ef9494a25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.750 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[5708764a-8ee7-4a0a-bcb3-0242012d8121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 ovn_controller[147168]: 2025-12-06T07:55:50Z|00664|binding|INFO|Setting lport 7b695d2a-7c72-4125-a16a-a2d8b4342195 ovn-installed in OVS
Dec 06 07:55:50 compute-0 ovn_controller[147168]: 2025-12-06T07:55:50Z|00665|binding|INFO|Setting lport 7b695d2a-7c72-4125-a16a-a2d8b4342195 up in Southbound
Dec 06 07:55:50 compute-0 nova_compute[251992]: 2025-12-06 07:55:50.761 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.774 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[28685e25-dba3-4cc5-b51a-0d760f356604]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.804 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[9bee2b5f-950f-43d7-a847-4a90f5deeabf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.810 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1b1000-86d7-4f54-83d2-ca3bf74e8f9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 NetworkManager[48965]: <info>  [1765007750.8124] manager: (tapa3764201-40): new Veth device (/org/freedesktop/NetworkManager/Devices/298)
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.840 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1250b433-7e7d-455d-98a5-e8af75feec5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.843 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d0001ca0-68ad-4926-9a3b-3153a9462716]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 NetworkManager[48965]: <info>  [1765007750.8634] device (tapa3764201-40): carrier: link connected
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.868 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e21b701a-aea3-4f86-adb5-ae938e274ae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.889 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4b023000-d556-401d-8ac6-3eb97ca5349a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3764201-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:90:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802345, 'reachable_time': 41113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367830, 'error': None, 'target': 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.905 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[86c67287-e1fa-4edf-89e5-00bf4c4424de]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:90e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 802345, 'tstamp': 802345}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367831, 'error': None, 'target': 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.923 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f0474b63-dc7b-454e-913a-50e84c599669]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3764201-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:90:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802345, 'reachable_time': 41113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367832, 'error': None, 'target': 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:50.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:50.955 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc32098-f33f-4ffb-a373-fb21dee281ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:51.008 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[17870018-d1d8-4ad0-b36f-f231395120da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:51.009 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3764201-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:51.010 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:51.010 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3764201-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:51 compute-0 NetworkManager[48965]: <info>  [1765007751.0130] manager: (tapa3764201-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.012 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:51 compute-0 kernel: tapa3764201-40: entered promiscuous mode
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:51.016 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3764201-40, col_values=(('external_ids', {'iface-id': '901b0fd3-1832-4628-bbf4-0a14b30cd979'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.015 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:51 compute-0 ovn_controller[147168]: 2025-12-06T07:55:51Z|00666|binding|INFO|Releasing lport 901b0fd3-1832-4628-bbf4-0a14b30cd979 from this chassis (sb_readonly=0)
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.018 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.036 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:51.037 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3764201-4b86-4407-84d2-684bd05a44b3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3764201-4b86-4407-84d2-684bd05a44b3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:51.038 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3ebf96-e450-4ae6-95aa-9e44329441e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:51.039 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-a3764201-4b86-4407-84d2-684bd05a44b3
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/a3764201-4b86-4407-84d2-684bd05a44b3.pid.haproxy
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID a3764201-4b86-4407-84d2-684bd05a44b3
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:55:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:55:51.041 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'env', 'PROCESS_TAG=haproxy-a3764201-4b86-4407-84d2-684bd05a44b3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3764201-4b86-4407-84d2-684bd05a44b3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.056 251996 DEBUG nova.compute.manager [req-72acd355-cee3-4ccf-8786-272c83a75f1b req-22703ad3-0080-45d5-83be-3e906e180cea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-vif-plugged-7b695d2a-7c72-4125-a16a-a2d8b4342195 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.056 251996 DEBUG oslo_concurrency.lockutils [req-72acd355-cee3-4ccf-8786-272c83a75f1b req-22703ad3-0080-45d5-83be-3e906e180cea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.057 251996 DEBUG oslo_concurrency.lockutils [req-72acd355-cee3-4ccf-8786-272c83a75f1b req-22703ad3-0080-45d5-83be-3e906e180cea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.057 251996 DEBUG oslo_concurrency.lockutils [req-72acd355-cee3-4ccf-8786-272c83a75f1b req-22703ad3-0080-45d5-83be-3e906e180cea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.057 251996 DEBUG nova.compute.manager [req-72acd355-cee3-4ccf-8786-272c83a75f1b req-22703ad3-0080-45d5-83be-3e906e180cea 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Processing event network-vif-plugged-7b695d2a-7c72-4125-a16a-a2d8b4342195 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:55:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 168 op/s
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.364 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007751.3638308, d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.365 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] VM Started (Lifecycle Event)
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.367 251996 DEBUG nova.compute.manager [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.371 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.374 251996 INFO nova.virt.libvirt.driver [-] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Instance spawned successfully.
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.374 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.396 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.402 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.409 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.409 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.410 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.410 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.411 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.411 251996 DEBUG nova.virt.libvirt.driver [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:55:51 compute-0 podman[367908]: 2025-12-06 07:55:51.412233605 +0000 UTC m=+0.055989372 container create 7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.448 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.449 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007751.3641202, d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.449 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] VM Paused (Lifecycle Event)
Dec 06 07:55:51 compute-0 systemd[1]: Started libpod-conmon-7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9.scope.
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.478 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:51 compute-0 podman[367908]: 2025-12-06 07:55:51.386295555 +0000 UTC m=+0.030051322 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.481 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007751.371089, d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.481 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] VM Resumed (Lifecycle Event)
Dec 06 07:55:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d903600859cfcfe897f814bef42a336ee5f9248229a4e5aa7dcc90979243f094/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.488 251996 INFO nova.compute.manager [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Took 6.57 seconds to spawn the instance on the hypervisor.
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.489 251996 DEBUG nova.compute.manager [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:51 compute-0 podman[367908]: 2025-12-06 07:55:51.498946365 +0000 UTC m=+0.142702152 container init 7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:55:51 compute-0 podman[367908]: 2025-12-06 07:55:51.506123929 +0000 UTC m=+0.149879696 container start 7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.520 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.525 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:55:51 compute-0 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[367923]: [NOTICE]   (367927) : New worker (367929) forked
Dec 06 07:55:51 compute-0 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[367923]: [NOTICE]   (367927) : Loading success.
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.561 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.564 251996 INFO nova.compute.manager [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Took 9.14 seconds to build instance.
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.592 251996 DEBUG oslo_concurrency.lockutils [None req-51652244-d1b7-4700-95e5-4953909bec92 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:51 compute-0 nova_compute[251992]: 2025-12-06 07:55:51.880 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:52.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:52 compute-0 ceph-mon[74339]: pgmap v3081: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 168 op/s
Dec 06 07:55:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3864288182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:52.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:53 compute-0 nova_compute[251992]: 2025-12-06 07:55:53.158 251996 DEBUG nova.compute.manager [req-29cc4eac-dc39-48fb-9c38-c6f45ee1cc8e req-fe206c88-b5c4-4c4b-899a-1df2c7b7f0da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-vif-plugged-7b695d2a-7c72-4125-a16a-a2d8b4342195 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:55:53 compute-0 nova_compute[251992]: 2025-12-06 07:55:53.158 251996 DEBUG oslo_concurrency.lockutils [req-29cc4eac-dc39-48fb-9c38-c6f45ee1cc8e req-fe206c88-b5c4-4c4b-899a-1df2c7b7f0da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:53 compute-0 nova_compute[251992]: 2025-12-06 07:55:53.158 251996 DEBUG oslo_concurrency.lockutils [req-29cc4eac-dc39-48fb-9c38-c6f45ee1cc8e req-fe206c88-b5c4-4c4b-899a-1df2c7b7f0da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:53 compute-0 nova_compute[251992]: 2025-12-06 07:55:53.159 251996 DEBUG oslo_concurrency.lockutils [req-29cc4eac-dc39-48fb-9c38-c6f45ee1cc8e req-fe206c88-b5c4-4c4b-899a-1df2c7b7f0da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:53 compute-0 nova_compute[251992]: 2025-12-06 07:55:53.159 251996 DEBUG nova.compute.manager [req-29cc4eac-dc39-48fb-9c38-c6f45ee1cc8e req-fe206c88-b5c4-4c4b-899a-1df2c7b7f0da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] No waiting events found dispatching network-vif-plugged-7b695d2a-7c72-4125-a16a-a2d8b4342195 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:55:53 compute-0 nova_compute[251992]: 2025-12-06 07:55:53.159 251996 WARNING nova.compute.manager [req-29cc4eac-dc39-48fb-9c38-c6f45ee1cc8e req-fe206c88-b5c4-4c4b-899a-1df2c7b7f0da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received unexpected event network-vif-plugged-7b695d2a-7c72-4125-a16a-a2d8b4342195 for instance with vm_state active and task_state None.
Dec 06 07:55:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 800 KiB/s rd, 888 KiB/s wr, 78 op/s
Dec 06 07:55:53 compute-0 nova_compute[251992]: 2025-12-06 07:55:53.538 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:54.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:54 compute-0 nova_compute[251992]: 2025-12-06 07:55:54.824 251996 DEBUG oslo_concurrency.lockutils [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:54 compute-0 nova_compute[251992]: 2025-12-06 07:55:54.824 251996 DEBUG oslo_concurrency.lockutils [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:54 compute-0 nova_compute[251992]: 2025-12-06 07:55:54.847 251996 DEBUG nova.objects.instance [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:54 compute-0 ceph-mon[74339]: pgmap v3082: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 800 KiB/s rd, 888 KiB/s wr, 78 op/s
Dec 06 07:55:54 compute-0 nova_compute[251992]: 2025-12-06 07:55:54.898 251996 DEBUG oslo_concurrency.lockutils [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:54.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.161 251996 DEBUG oslo_concurrency.lockutils [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.162 251996 DEBUG oslo_concurrency.lockutils [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.163 251996 INFO nova.compute.manager [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Attaching volume 7e4b734a-50a7-4ff3-bbbd-b8eb0e71528f to /dev/vdb
Dec 06 07:55:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 28 KiB/s wr, 80 op/s
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.336 251996 DEBUG os_brick.utils [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.337 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.356 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.357 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[9d355ac4-acc2-43d6-b035-715a21a7163e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.358 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.365 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.366 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[127bd42d-b6de-48ad-9599-f03cfbf4fdf7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.367 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.377 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.377 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[36b53107-e477-46b1-a7f9-db1b59501b38]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.379 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[8b856ed6-7e39-4cd3-86a9-469dec4ca7ab]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.379 251996 DEBUG oslo_concurrency.processutils [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.407 251996 DEBUG oslo_concurrency.processutils [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.410 251996 DEBUG os_brick.initiator.connectors.lightos [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.410 251996 DEBUG os_brick.initiator.connectors.lightos [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.410 251996 DEBUG os_brick.initiator.connectors.lightos [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.411 251996 DEBUG os_brick.utils [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:55:55 compute-0 nova_compute[251992]: 2025-12-06 07:55:55.411 251996 DEBUG nova.virt.block_device [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating existing volume attachment record: 8c9d307e-4065-46a7-b824-0c2533f64d2c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:55:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/574396443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:56.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:55:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3251479305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:56 compute-0 nova_compute[251992]: 2025-12-06 07:55:56.603 251996 DEBUG nova.objects.instance [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:56 compute-0 sudo[367947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:55:56 compute-0 sudo[367947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:56 compute-0 sudo[367947]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:56 compute-0 sudo[367972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:55:56 compute-0 sudo[367972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:56 compute-0 sudo[367972]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:56 compute-0 nova_compute[251992]: 2025-12-06 07:55:56.699 251996 DEBUG nova.virt.libvirt.driver [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Attempting to attach volume 7e4b734a-50a7-4ff3-bbbd-b8eb0e71528f with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:55:56 compute-0 nova_compute[251992]: 2025-12-06 07:55:56.703 251996 DEBUG nova.virt.libvirt.guest [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:55:56 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:55:56 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-7e4b734a-50a7-4ff3-bbbd-b8eb0e71528f">
Dec 06 07:55:56 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:55:56 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:55:56 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:55:56 compute-0 nova_compute[251992]:   </source>
Dec 06 07:55:56 compute-0 nova_compute[251992]:   <auth username="openstack">
Dec 06 07:55:56 compute-0 nova_compute[251992]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:55:56 compute-0 nova_compute[251992]:   </auth>
Dec 06 07:55:56 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:55:56 compute-0 nova_compute[251992]:   <serial>7e4b734a-50a7-4ff3-bbbd-b8eb0e71528f</serial>
Dec 06 07:55:56 compute-0 nova_compute[251992]: </disk>
Dec 06 07:55:56 compute-0 nova_compute[251992]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:55:56 compute-0 sudo[367997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:55:56 compute-0 sudo[367997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:56 compute-0 sudo[367997]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:56 compute-0 sudo[368034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:55:56 compute-0 sudo[368034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:56 compute-0 nova_compute[251992]: 2025-12-06 07:55:56.882 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:56.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 31 KiB/s wr, 118 op/s
Dec 06 07:55:57 compute-0 ceph-mon[74339]: pgmap v3083: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 28 KiB/s wr, 80 op/s
Dec 06 07:55:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3251479305' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:55:57 compute-0 sudo[368034]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:57 compute-0 nova_compute[251992]: 2025-12-06 07:55:57.451 251996 DEBUG nova.virt.libvirt.driver [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:55:57 compute-0 nova_compute[251992]: 2025-12-06 07:55:57.451 251996 DEBUG nova.virt.libvirt.driver [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:55:57 compute-0 nova_compute[251992]: 2025-12-06 07:55:57.452 251996 DEBUG nova.virt.libvirt.driver [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:55:57 compute-0 nova_compute[251992]: 2025-12-06 07:55:57.452 251996 DEBUG nova.virt.libvirt.driver [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No VIF found with MAC fa:16:3e:5c:c3:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:55:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:55:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:55:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:55:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:55:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:55:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:55:57 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7df85bc3-33be-4657-ac23-a62cb8da04f2 does not exist
Dec 06 07:55:57 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 80b19ed3-abbf-48ba-8fd7-5a28b467c0a3 does not exist
Dec 06 07:55:57 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 28581975-78ff-4539-8236-f57f796d6227 does not exist
Dec 06 07:55:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:55:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:55:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:55:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:55:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:55:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:55:57 compute-0 sudo[368101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:55:57 compute-0 sudo[368101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:57 compute-0 sudo[368101]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:58 compute-0 sudo[368126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:55:58 compute-0 sudo[368126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:58 compute-0 sudo[368126]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:58 compute-0 sudo[368151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:55:58 compute-0 sudo[368151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:58 compute-0 sudo[368151]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:55:58.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:58 compute-0 sudo[368176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:55:58 compute-0 sudo[368176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:55:58 compute-0 podman[368241]: 2025-12-06 07:55:58.477843712 +0000 UTC m=+0.061508551 container create 6b9b06c9462860a1ec1d4bf43c1ec927d739763d2843a26dfbb86011d4449a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:55:58 compute-0 podman[368241]: 2025-12-06 07:55:58.441596915 +0000 UTC m=+0.025261804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:55:58 compute-0 nova_compute[251992]: 2025-12-06 07:55:58.540 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:55:58 compute-0 systemd[1]: Started libpod-conmon-6b9b06c9462860a1ec1d4bf43c1ec927d739763d2843a26dfbb86011d4449a1e.scope.
Dec 06 07:55:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:55:58 compute-0 podman[368241]: 2025-12-06 07:55:58.661056987 +0000 UTC m=+0.244721846 container init 6b9b06c9462860a1ec1d4bf43c1ec927d739763d2843a26dfbb86011d4449a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 07:55:58 compute-0 ceph-mon[74339]: pgmap v3084: 305 pgs: 305 active+clean; 361 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 31 KiB/s wr, 118 op/s
Dec 06 07:55:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:55:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:55:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:55:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:55:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:55:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:55:58 compute-0 podman[368241]: 2025-12-06 07:55:58.67008688 +0000 UTC m=+0.253751719 container start 6b9b06c9462860a1ec1d4bf43c1ec927d739763d2843a26dfbb86011d4449a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:55:58 compute-0 clever_easley[368257]: 167 167
Dec 06 07:55:58 compute-0 systemd[1]: libpod-6b9b06c9462860a1ec1d4bf43c1ec927d739763d2843a26dfbb86011d4449a1e.scope: Deactivated successfully.
Dec 06 07:55:58 compute-0 podman[368241]: 2025-12-06 07:55:58.680785229 +0000 UTC m=+0.264450088 container attach 6b9b06c9462860a1ec1d4bf43c1ec927d739763d2843a26dfbb86011d4449a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:55:58 compute-0 podman[368241]: 2025-12-06 07:55:58.681152669 +0000 UTC m=+0.264817508 container died 6b9b06c9462860a1ec1d4bf43c1ec927d739763d2843a26dfbb86011d4449a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:55:58 compute-0 nova_compute[251992]: 2025-12-06 07:55:58.696 251996 DEBUG oslo_concurrency.lockutils [None req-1980e3b2-63ce-4afb-88b4-c305f32db5e1 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff007343447d31f85b4aa37f4663ccc8ae179f11b1d9dc9544d13ce3e69f7a9b-merged.mount: Deactivated successfully.
Dec 06 07:55:58 compute-0 podman[368241]: 2025-12-06 07:55:58.720344916 +0000 UTC m=+0.304009755 container remove 6b9b06c9462860a1ec1d4bf43c1ec927d739763d2843a26dfbb86011d4449a1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 07:55:58 compute-0 systemd[1]: libpod-conmon-6b9b06c9462860a1ec1d4bf43c1ec927d739763d2843a26dfbb86011d4449a1e.scope: Deactivated successfully.
Dec 06 07:55:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:55:58 compute-0 podman[368282]: 2025-12-06 07:55:58.905204234 +0000 UTC m=+0.046464135 container create 1c64a78cac5f7b45d4f7fd7961020d3e2c98c1c46c0bfe604c1df215a18976ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jemison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:55:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:55:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:55:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:55:58.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:55:58 compute-0 systemd[1]: Started libpod-conmon-1c64a78cac5f7b45d4f7fd7961020d3e2c98c1c46c0bfe604c1df215a18976ea.scope.
Dec 06 07:55:58 compute-0 podman[368282]: 2025-12-06 07:55:58.884614359 +0000 UTC m=+0.025874270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:55:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:55:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec98bd52bfba5e1196cd93d1a8ddf04d592996bdac863d35b2346eb2792bd8cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec98bd52bfba5e1196cd93d1a8ddf04d592996bdac863d35b2346eb2792bd8cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec98bd52bfba5e1196cd93d1a8ddf04d592996bdac863d35b2346eb2792bd8cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec98bd52bfba5e1196cd93d1a8ddf04d592996bdac863d35b2346eb2792bd8cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:55:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec98bd52bfba5e1196cd93d1a8ddf04d592996bdac863d35b2346eb2792bd8cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:55:59 compute-0 podman[368282]: 2025-12-06 07:55:59.034416871 +0000 UTC m=+0.175676812 container init 1c64a78cac5f7b45d4f7fd7961020d3e2c98c1c46c0bfe604c1df215a18976ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 07:55:59 compute-0 podman[368282]: 2025-12-06 07:55:59.041729279 +0000 UTC m=+0.182989170 container start 1c64a78cac5f7b45d4f7fd7961020d3e2c98c1c46c0bfe604c1df215a18976ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 07:55:59 compute-0 podman[368282]: 2025-12-06 07:55:59.045382287 +0000 UTC m=+0.186642178 container attach 1c64a78cac5f7b45d4f7fd7961020d3e2c98c1c46c0bfe604c1df215a18976ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jemison, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 07:55:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 46 KiB/s wr, 135 op/s
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.503 251996 DEBUG oslo_concurrency.lockutils [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.504 251996 DEBUG oslo_concurrency.lockutils [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.522 251996 DEBUG nova.objects.instance [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.581 251996 DEBUG oslo_concurrency.lockutils [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.795 251996 DEBUG oslo_concurrency.lockutils [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.795 251996 DEBUG oslo_concurrency.lockutils [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.796 251996 INFO nova.compute.manager [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Attaching volume 1b7ce5d9-6944-4992-97d1-2fb28bc7b126 to /dev/vdc
Dec 06 07:55:59 compute-0 stoic_jemison[368299]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:55:59 compute-0 stoic_jemison[368299]: --> relative data size: 1.0
Dec 06 07:55:59 compute-0 stoic_jemison[368299]: --> All data devices are unavailable
Dec 06 07:55:59 compute-0 systemd[1]: libpod-1c64a78cac5f7b45d4f7fd7961020d3e2c98c1c46c0bfe604c1df215a18976ea.scope: Deactivated successfully.
Dec 06 07:55:59 compute-0 podman[368282]: 2025-12-06 07:55:59.858033065 +0000 UTC m=+0.999292966 container died 1c64a78cac5f7b45d4f7fd7961020d3e2c98c1c46c0bfe604c1df215a18976ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jemison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:55:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec98bd52bfba5e1196cd93d1a8ddf04d592996bdac863d35b2346eb2792bd8cc-merged.mount: Deactivated successfully.
Dec 06 07:55:59 compute-0 podman[368282]: 2025-12-06 07:55:59.916448692 +0000 UTC m=+1.057708583 container remove 1c64a78cac5f7b45d4f7fd7961020d3e2c98c1c46c0bfe604c1df215a18976ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.936 251996 DEBUG os_brick.utils [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.939 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:59 compute-0 sudo[368176]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:59 compute-0 systemd[1]: libpod-conmon-1c64a78cac5f7b45d4f7fd7961020d3e2c98c1c46c0bfe604c1df215a18976ea.scope: Deactivated successfully.
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.954 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.954 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[6edff402-b063-4f32-9cb5-1e56e5c45e2c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.956 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.968 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.968 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[93d7e725-f5ba-4561-b574-41f4e874a23b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.970 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.983 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.984 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[0915e17b-9cb1-452e-96a7-7a1ae6d570c4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.986 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d17bc2-a1bc-448a-860d-7d83549a531b]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:55:59 compute-0 nova_compute[251992]: 2025-12-06 07:55:59.986 251996 DEBUG oslo_concurrency.processutils [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:00 compute-0 sudo[368330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:56:00 compute-0 sudo[368330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:00 compute-0 sudo[368330]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.019 251996 DEBUG oslo_concurrency.processutils [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.022 251996 DEBUG os_brick.initiator.connectors.lightos [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.023 251996 DEBUG os_brick.initiator.connectors.lightos [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.023 251996 DEBUG os_brick.initiator.connectors.lightos [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.024 251996 DEBUG os_brick.utils [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] <== get_connector_properties: return (85ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.024 251996 DEBUG nova.virt.block_device [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating existing volume attachment record: 7ac5e72a-7ff5-439f-b48f-76d979f4ab9d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:56:00 compute-0 sudo[368360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:56:00 compute-0 sudo[368360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:00 compute-0 sudo[368360]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:00.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:00 compute-0 sudo[368385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:56:00 compute-0 sudo[368385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:00 compute-0 sudo[368385]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:00 compute-0 sudo[368410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:56:00 compute-0 sudo[368410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:00 compute-0 podman[368474]: 2025-12-06 07:56:00.506481383 +0000 UTC m=+0.035618452 container create 6c83dea8db5ef824e8c00954bf715188f9d0219542a618e941c7fb573cd5a455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:56:00 compute-0 systemd[1]: Started libpod-conmon-6c83dea8db5ef824e8c00954bf715188f9d0219542a618e941c7fb573cd5a455.scope.
Dec 06 07:56:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:56:00 compute-0 podman[368474]: 2025-12-06 07:56:00.580054348 +0000 UTC m=+0.109191447 container init 6c83dea8db5ef824e8c00954bf715188f9d0219542a618e941c7fb573cd5a455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:56:00 compute-0 podman[368474]: 2025-12-06 07:56:00.490985765 +0000 UTC m=+0.020122854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:56:00 compute-0 podman[368474]: 2025-12-06 07:56:00.589184585 +0000 UTC m=+0.118321654 container start 6c83dea8db5ef824e8c00954bf715188f9d0219542a618e941c7fb573cd5a455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 07:56:00 compute-0 podman[368474]: 2025-12-06 07:56:00.593520452 +0000 UTC m=+0.122657551 container attach 6c83dea8db5ef824e8c00954bf715188f9d0219542a618e941c7fb573cd5a455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:56:00 compute-0 elegant_beaver[368491]: 167 167
Dec 06 07:56:00 compute-0 systemd[1]: libpod-6c83dea8db5ef824e8c00954bf715188f9d0219542a618e941c7fb573cd5a455.scope: Deactivated successfully.
Dec 06 07:56:00 compute-0 podman[368474]: 2025-12-06 07:56:00.598169917 +0000 UTC m=+0.127307006 container died 6c83dea8db5ef824e8c00954bf715188f9d0219542a618e941c7fb573cd5a455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:56:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-581de88416178c98db70ecda46bc3d7fc744d43e31001e990d1b0cfb508f575e-merged.mount: Deactivated successfully.
Dec 06 07:56:00 compute-0 podman[368474]: 2025-12-06 07:56:00.638844765 +0000 UTC m=+0.167981834 container remove 6c83dea8db5ef824e8c00954bf715188f9d0219542a618e941c7fb573cd5a455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 07:56:00 compute-0 systemd[1]: libpod-conmon-6c83dea8db5ef824e8c00954bf715188f9d0219542a618e941c7fb573cd5a455.scope: Deactivated successfully.
Dec 06 07:56:00 compute-0 ceph-mon[74339]: pgmap v3085: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 46 KiB/s wr, 135 op/s
Dec 06 07:56:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1213676908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.769 251996 DEBUG nova.objects.instance [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.797 251996 DEBUG nova.virt.libvirt.driver [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Attempting to attach volume 1b7ce5d9-6944-4992-97d1-2fb28bc7b126 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.800 251996 DEBUG nova.virt.libvirt.guest [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:56:00 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:56:00 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-1b7ce5d9-6944-4992-97d1-2fb28bc7b126">
Dec 06 07:56:00 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:56:00 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:56:00 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:56:00 compute-0 nova_compute[251992]:   </source>
Dec 06 07:56:00 compute-0 nova_compute[251992]:   <auth username="openstack">
Dec 06 07:56:00 compute-0 nova_compute[251992]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:56:00 compute-0 nova_compute[251992]:   </auth>
Dec 06 07:56:00 compute-0 nova_compute[251992]:   <target dev="vdc" bus="virtio"/>
Dec 06 07:56:00 compute-0 nova_compute[251992]:   <serial>1b7ce5d9-6944-4992-97d1-2fb28bc7b126</serial>
Dec 06 07:56:00 compute-0 nova_compute[251992]: </disk>
Dec 06 07:56:00 compute-0 nova_compute[251992]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:56:00 compute-0 podman[368514]: 2025-12-06 07:56:00.833717153 +0000 UTC m=+0.047300947 container create e20c509c57fcd12f12efe5a833f2e8967fe68b5e81209eda02486ee98a1f5296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:56:00 compute-0 systemd[1]: Started libpod-conmon-e20c509c57fcd12f12efe5a833f2e8967fe68b5e81209eda02486ee98a1f5296.scope.
Dec 06 07:56:00 compute-0 podman[368514]: 2025-12-06 07:56:00.812047059 +0000 UTC m=+0.025630873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:56:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17bbc31fe1f00d7d546b84d5f3177aba18da15aa57c917e2dc7843d1473382c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17bbc31fe1f00d7d546b84d5f3177aba18da15aa57c917e2dc7843d1473382c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17bbc31fe1f00d7d546b84d5f3177aba18da15aa57c917e2dc7843d1473382c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:56:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d17bbc31fe1f00d7d546b84d5f3177aba18da15aa57c917e2dc7843d1473382c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:56:00 compute-0 podman[368514]: 2025-12-06 07:56:00.929091946 +0000 UTC m=+0.142675770 container init e20c509c57fcd12f12efe5a833f2e8967fe68b5e81209eda02486ee98a1f5296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_knuth, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:56:00 compute-0 podman[368514]: 2025-12-06 07:56:00.937481903 +0000 UTC m=+0.151065707 container start e20c509c57fcd12f12efe5a833f2e8967fe68b5e81209eda02486ee98a1f5296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_knuth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 07:56:00 compute-0 podman[368514]: 2025-12-06 07:56:00.940969988 +0000 UTC m=+0.154553802 container attach e20c509c57fcd12f12efe5a833f2e8967fe68b5e81209eda02486ee98a1f5296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.940 251996 DEBUG nova.virt.libvirt.driver [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.941 251996 DEBUG nova.virt.libvirt.driver [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.942 251996 DEBUG nova.virt.libvirt.driver [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.942 251996 DEBUG nova.virt.libvirt.driver [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:56:00 compute-0 nova_compute[251992]: 2025-12-06 07:56:00.942 251996 DEBUG nova.virt.libvirt.driver [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] No VIF found with MAC fa:16:3e:5c:c3:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:56:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:00.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:01 compute-0 nova_compute[251992]: 2025-12-06 07:56:01.114 251996 DEBUG oslo_concurrency.lockutils [None req-c440a3b6-08a2-459b-9484-ce8e13a9793e e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 49 KiB/s wr, 191 op/s
Dec 06 07:56:01 compute-0 silly_knuth[368550]: {
Dec 06 07:56:01 compute-0 silly_knuth[368550]:     "0": [
Dec 06 07:56:01 compute-0 silly_knuth[368550]:         {
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             "devices": [
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "/dev/loop3"
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             ],
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             "lv_name": "ceph_lv0",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             "lv_size": "7511998464",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             "name": "ceph_lv0",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             "tags": {
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "ceph.cluster_name": "ceph",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "ceph.crush_device_class": "",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "ceph.encrypted": "0",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "ceph.osd_id": "0",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "ceph.type": "block",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:                 "ceph.vdo": "0"
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             },
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             "type": "block",
Dec 06 07:56:01 compute-0 silly_knuth[368550]:             "vg_name": "ceph_vg0"
Dec 06 07:56:01 compute-0 silly_knuth[368550]:         }
Dec 06 07:56:01 compute-0 silly_knuth[368550]:     ]
Dec 06 07:56:01 compute-0 silly_knuth[368550]: }
Dec 06 07:56:01 compute-0 systemd[1]: libpod-e20c509c57fcd12f12efe5a833f2e8967fe68b5e81209eda02486ee98a1f5296.scope: Deactivated successfully.
Dec 06 07:56:01 compute-0 podman[368514]: 2025-12-06 07:56:01.71708829 +0000 UTC m=+0.930672094 container died e20c509c57fcd12f12efe5a833f2e8967fe68b5e81209eda02486ee98a1f5296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_knuth, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:56:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d17bbc31fe1f00d7d546b84d5f3177aba18da15aa57c917e2dc7843d1473382c-merged.mount: Deactivated successfully.
Dec 06 07:56:01 compute-0 podman[368514]: 2025-12-06 07:56:01.769273218 +0000 UTC m=+0.982857012 container remove e20c509c57fcd12f12efe5a833f2e8967fe68b5e81209eda02486ee98a1f5296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:56:01 compute-0 systemd[1]: libpod-conmon-e20c509c57fcd12f12efe5a833f2e8967fe68b5e81209eda02486ee98a1f5296.scope: Deactivated successfully.
Dec 06 07:56:01 compute-0 sudo[368410]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:01 compute-0 sudo[368571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:56:01 compute-0 sudo[368571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:01 compute-0 sudo[368571]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:01 compute-0 nova_compute[251992]: 2025-12-06 07:56:01.887 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:01 compute-0 sudo[368596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:56:01 compute-0 sudo[368596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:01 compute-0 sudo[368596]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:01 compute-0 sudo[368621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:56:01 compute-0 sudo[368621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:01 compute-0 sudo[368621]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:02 compute-0 sudo[368646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:56:02 compute-0 sudo[368646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:02.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:02 compute-0 podman[368712]: 2025-12-06 07:56:02.348752494 +0000 UTC m=+0.047165113 container create a2736a6b5aa8059452700de96f716cf151388b923bf2ef375fcbda2ceea70ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:56:02 compute-0 systemd[1]: Started libpod-conmon-a2736a6b5aa8059452700de96f716cf151388b923bf2ef375fcbda2ceea70ada.scope.
Dec 06 07:56:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:56:02 compute-0 podman[368712]: 2025-12-06 07:56:02.324929062 +0000 UTC m=+0.023341751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:56:02 compute-0 podman[368712]: 2025-12-06 07:56:02.428474926 +0000 UTC m=+0.126887555 container init a2736a6b5aa8059452700de96f716cf151388b923bf2ef375fcbda2ceea70ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:56:02 compute-0 podman[368712]: 2025-12-06 07:56:02.43603759 +0000 UTC m=+0.134450209 container start a2736a6b5aa8059452700de96f716cf151388b923bf2ef375fcbda2ceea70ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 07:56:02 compute-0 podman[368712]: 2025-12-06 07:56:02.439664158 +0000 UTC m=+0.138076787 container attach a2736a6b5aa8059452700de96f716cf151388b923bf2ef375fcbda2ceea70ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 07:56:02 compute-0 stoic_margulis[368728]: 167 167
Dec 06 07:56:02 compute-0 systemd[1]: libpod-a2736a6b5aa8059452700de96f716cf151388b923bf2ef375fcbda2ceea70ada.scope: Deactivated successfully.
Dec 06 07:56:02 compute-0 podman[368712]: 2025-12-06 07:56:02.442844234 +0000 UTC m=+0.141256843 container died a2736a6b5aa8059452700de96f716cf151388b923bf2ef375fcbda2ceea70ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 07:56:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-afaac11e3e553cf1076730e7a3498fefed5800c2cbecc46162ab8bd87dceb357-merged.mount: Deactivated successfully.
Dec 06 07:56:02 compute-0 podman[368712]: 2025-12-06 07:56:02.486760778 +0000 UTC m=+0.185173387 container remove a2736a6b5aa8059452700de96f716cf151388b923bf2ef375fcbda2ceea70ada (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_margulis, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:56:02 compute-0 systemd[1]: libpod-conmon-a2736a6b5aa8059452700de96f716cf151388b923bf2ef375fcbda2ceea70ada.scope: Deactivated successfully.
Dec 06 07:56:02 compute-0 podman[368752]: 2025-12-06 07:56:02.650156648 +0000 UTC m=+0.036725512 container create 4f63f9db38e7fe346bee84ddec5d8a92ea57ca51ab2b3e56a673956afdc4015e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:56:02 compute-0 systemd[1]: Started libpod-conmon-4f63f9db38e7fe346bee84ddec5d8a92ea57ca51ab2b3e56a673956afdc4015e.scope.
Dec 06 07:56:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc7a41fba7c48dd3408761dab59a5f8739f37bc73ae0d2ad8d1b3979712ac88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc7a41fba7c48dd3408761dab59a5f8739f37bc73ae0d2ad8d1b3979712ac88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc7a41fba7c48dd3408761dab59a5f8739f37bc73ae0d2ad8d1b3979712ac88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:56:02 compute-0 podman[368752]: 2025-12-06 07:56:02.634842244 +0000 UTC m=+0.021411138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:56:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc7a41fba7c48dd3408761dab59a5f8739f37bc73ae0d2ad8d1b3979712ac88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:56:02 compute-0 podman[368752]: 2025-12-06 07:56:02.740745072 +0000 UTC m=+0.127313936 container init 4f63f9db38e7fe346bee84ddec5d8a92ea57ca51ab2b3e56a673956afdc4015e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_antonelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 07:56:02 compute-0 podman[368752]: 2025-12-06 07:56:02.749386405 +0000 UTC m=+0.135955269 container start 4f63f9db38e7fe346bee84ddec5d8a92ea57ca51ab2b3e56a673956afdc4015e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 07:56:02 compute-0 podman[368752]: 2025-12-06 07:56:02.752467178 +0000 UTC m=+0.139036042 container attach 4f63f9db38e7fe346bee84ddec5d8a92ea57ca51ab2b3e56a673956afdc4015e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_antonelli, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:56:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:02.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:03 compute-0 ceph-mon[74339]: pgmap v3086: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 49 KiB/s wr, 191 op/s
Dec 06 07:56:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 38 KiB/s wr, 218 op/s
Dec 06 07:56:03 compute-0 nova_compute[251992]: 2025-12-06 07:56:03.563 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:03 compute-0 recursing_antonelli[368769]: {
Dec 06 07:56:03 compute-0 recursing_antonelli[368769]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:56:03 compute-0 recursing_antonelli[368769]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:56:03 compute-0 recursing_antonelli[368769]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:56:03 compute-0 recursing_antonelli[368769]:         "osd_id": 0,
Dec 06 07:56:03 compute-0 recursing_antonelli[368769]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:56:03 compute-0 recursing_antonelli[368769]:         "type": "bluestore"
Dec 06 07:56:03 compute-0 recursing_antonelli[368769]:     }
Dec 06 07:56:03 compute-0 recursing_antonelli[368769]: }
Dec 06 07:56:03 compute-0 systemd[1]: libpod-4f63f9db38e7fe346bee84ddec5d8a92ea57ca51ab2b3e56a673956afdc4015e.scope: Deactivated successfully.
Dec 06 07:56:03 compute-0 podman[368791]: 2025-12-06 07:56:03.650609054 +0000 UTC m=+0.024905924 container died 4f63f9db38e7fe346bee84ddec5d8a92ea57ca51ab2b3e56a673956afdc4015e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_antonelli, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:56:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bc7a41fba7c48dd3408761dab59a5f8739f37bc73ae0d2ad8d1b3979712ac88-merged.mount: Deactivated successfully.
Dec 06 07:56:03 compute-0 podman[368791]: 2025-12-06 07:56:03.6994088 +0000 UTC m=+0.073705660 container remove 4f63f9db38e7fe346bee84ddec5d8a92ea57ca51ab2b3e56a673956afdc4015e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 07:56:03 compute-0 systemd[1]: libpod-conmon-4f63f9db38e7fe346bee84ddec5d8a92ea57ca51ab2b3e56a673956afdc4015e.scope: Deactivated successfully.
Dec 06 07:56:03 compute-0 sudo[368646]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:56:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:56:03.866 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:56:03.869 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:56:03.869 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:04 compute-0 nova_compute[251992]: 2025-12-06 07:56:04.078 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:04 compute-0 NetworkManager[48965]: <info>  [1765007764.0790] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Dec 06 07:56:04 compute-0 NetworkManager[48965]: <info>  [1765007764.0805] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Dec 06 07:56:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:56:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:56:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:04.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:04 compute-0 nova_compute[251992]: 2025-12-06 07:56:04.200 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:04 compute-0 ovn_controller[147168]: 2025-12-06T07:56:04Z|00667|binding|INFO|Releasing lport 901b0fd3-1832-4628-bbf4-0a14b30cd979 from this chassis (sb_readonly=0)
Dec 06 07:56:04 compute-0 nova_compute[251992]: 2025-12-06 07:56:04.217 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:56:04 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8b6bff95-9fa3-4eff-a09c-2dea5d05f5ce does not exist
Dec 06 07:56:04 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a2c6005d-85d3-438a-92e7-fc12b47abf4d does not exist
Dec 06 07:56:04 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d7466d68-a634-4c93-9d59-54d54991f349 does not exist
Dec 06 07:56:04 compute-0 sudo[368806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:56:04 compute-0 sudo[368806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:04 compute-0 sudo[368806]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:04 compute-0 sudo[368831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:56:04 compute-0 sudo[368831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:04 compute-0 sudo[368831]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:04.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:05 compute-0 ceph-mon[74339]: pgmap v3087: 305 pgs: 305 active+clean; 362 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 38 KiB/s wr, 218 op/s
Dec 06 07:56:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:56:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:56:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 367 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 688 KiB/s wr, 206 op/s
Dec 06 07:56:05 compute-0 nova_compute[251992]: 2025-12-06 07:56:05.387 251996 DEBUG nova.compute.manager [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:05 compute-0 nova_compute[251992]: 2025-12-06 07:56:05.387 251996 DEBUG nova.compute.manager [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing instance network info cache due to event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:56:05 compute-0 nova_compute[251992]: 2025-12-06 07:56:05.387 251996 DEBUG oslo_concurrency.lockutils [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:56:05 compute-0 nova_compute[251992]: 2025-12-06 07:56:05.387 251996 DEBUG oslo_concurrency.lockutils [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:56:05 compute-0 nova_compute[251992]: 2025-12-06 07:56:05.388 251996 DEBUG nova.network.neutron [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:56:05 compute-0 sudo[368857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:56:05 compute-0 sudo[368857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:05 compute-0 sudo[368857]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:05 compute-0 sudo[368882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:56:05 compute-0 sudo[368882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:05 compute-0 sudo[368882]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:06.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:06 compute-0 nova_compute[251992]: 2025-12-06 07:56:06.887 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:06.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:07 compute-0 nova_compute[251992]: 2025-12-06 07:56:07.230 251996 DEBUG nova.network.neutron [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updated VIF entry in instance network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:56:07 compute-0 nova_compute[251992]: 2025-12-06 07:56:07.231 251996 DEBUG nova.network.neutron [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating instance_info_cache with network_info: [{"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:56:07 compute-0 nova_compute[251992]: 2025-12-06 07:56:07.268 251996 DEBUG oslo_concurrency.lockutils [req-c314521c-2d87-46a5-807b-18ceaa6de4a9 req-03948823-413d-4d71-a6bb-11bbf1f24ef1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:56:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 380 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.4 MiB/s wr, 218 op/s
Dec 06 07:56:07 compute-0 ceph-mon[74339]: pgmap v3088: 305 pgs: 305 active+clean; 367 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 688 KiB/s wr, 206 op/s
Dec 06 07:56:07 compute-0 nova_compute[251992]: 2025-12-06 07:56:07.580 251996 DEBUG nova.compute.manager [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:07 compute-0 nova_compute[251992]: 2025-12-06 07:56:07.580 251996 DEBUG nova.compute.manager [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing instance network info cache due to event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:56:07 compute-0 nova_compute[251992]: 2025-12-06 07:56:07.581 251996 DEBUG oslo_concurrency.lockutils [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:56:07 compute-0 nova_compute[251992]: 2025-12-06 07:56:07.581 251996 DEBUG oslo_concurrency.lockutils [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:56:07 compute-0 nova_compute[251992]: 2025-12-06 07:56:07.581 251996 DEBUG nova.network.neutron [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:56:07 compute-0 ovn_controller[147168]: 2025-12-06T07:56:07Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5c:c3:e9 10.100.0.7
Dec 06 07:56:07 compute-0 ovn_controller[147168]: 2025-12-06T07:56:07Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5c:c3:e9 10.100.0.7
Dec 06 07:56:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:08.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:08 compute-0 ceph-mon[74339]: pgmap v3089: 305 pgs: 305 active+clean; 380 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.4 MiB/s wr, 218 op/s
Dec 06 07:56:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3175340184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/582007015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:08 compute-0 nova_compute[251992]: 2025-12-06 07:56:08.567 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:08.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 394 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 188 op/s
Dec 06 07:56:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2025625938' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:56:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2025625938' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:56:09 compute-0 nova_compute[251992]: 2025-12-06 07:56:09.449 251996 DEBUG nova.network.neutron [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updated VIF entry in instance network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:56:09 compute-0 nova_compute[251992]: 2025-12-06 07:56:09.449 251996 DEBUG nova.network.neutron [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating instance_info_cache with network_info: [{"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:56:09 compute-0 nova_compute[251992]: 2025-12-06 07:56:09.664 251996 DEBUG oslo_concurrency.lockutils [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:56:09 compute-0 nova_compute[251992]: 2025-12-06 07:56:09.664 251996 DEBUG nova.compute.manager [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:09 compute-0 nova_compute[251992]: 2025-12-06 07:56:09.664 251996 DEBUG nova.compute.manager [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing instance network info cache due to event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:56:09 compute-0 nova_compute[251992]: 2025-12-06 07:56:09.665 251996 DEBUG oslo_concurrency.lockutils [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:56:09 compute-0 nova_compute[251992]: 2025-12-06 07:56:09.665 251996 DEBUG oslo_concurrency.lockutils [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:56:09 compute-0 nova_compute[251992]: 2025-12-06 07:56:09.665 251996 DEBUG nova.network.neutron [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:56:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:10.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:10 compute-0 ceph-mon[74339]: pgmap v3090: 305 pgs: 305 active+clean; 394 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 188 op/s
Dec 06 07:56:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 07:56:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:10.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 07:56:11 compute-0 nova_compute[251992]: 2025-12-06 07:56:11.093 251996 DEBUG nova.network.neutron [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updated VIF entry in instance network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:56:11 compute-0 nova_compute[251992]: 2025-12-06 07:56:11.094 251996 DEBUG nova.network.neutron [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating instance_info_cache with network_info: [{"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:56:11 compute-0 nova_compute[251992]: 2025-12-06 07:56:11.166 251996 DEBUG oslo_concurrency.lockutils [req-5f5fe72b-dbba-4ff5-a5c5-af64aa64a490 req-250a27cc-378e-4e0f-861a-8e4ea9d92093 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:56:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.3 MiB/s wr, 289 op/s
Dec 06 07:56:11 compute-0 nova_compute[251992]: 2025-12-06 07:56:11.937 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:12.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:12 compute-0 nova_compute[251992]: 2025-12-06 07:56:12.586 251996 DEBUG nova.compute.manager [req-fb155a34-1645-4438-93b9-731d16533fa3 req-f788e83f-8f8d-443e-bf68-48232fc7a111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:12 compute-0 nova_compute[251992]: 2025-12-06 07:56:12.586 251996 DEBUG nova.compute.manager [req-fb155a34-1645-4438-93b9-731d16533fa3 req-f788e83f-8f8d-443e-bf68-48232fc7a111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing instance network info cache due to event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:56:12 compute-0 nova_compute[251992]: 2025-12-06 07:56:12.587 251996 DEBUG oslo_concurrency.lockutils [req-fb155a34-1645-4438-93b9-731d16533fa3 req-f788e83f-8f8d-443e-bf68-48232fc7a111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:56:12 compute-0 nova_compute[251992]: 2025-12-06 07:56:12.587 251996 DEBUG oslo_concurrency.lockutils [req-fb155a34-1645-4438-93b9-731d16533fa3 req-f788e83f-8f8d-443e-bf68-48232fc7a111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:56:12 compute-0 nova_compute[251992]: 2025-12-06 07:56:12.587 251996 DEBUG nova.network.neutron [req-fb155a34-1645-4438-93b9-731d16533fa3 req-f788e83f-8f8d-443e-bf68-48232fc7a111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:56:12 compute-0 ceph-mon[74339]: pgmap v3091: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.3 MiB/s wr, 289 op/s
Dec 06 07:56:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:12.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:56:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:56:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:56:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:56:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:56:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:56:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 439 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.2 MiB/s wr, 259 op/s
Dec 06 07:56:13 compute-0 podman[368912]: 2025-12-06 07:56:13.452639459 +0000 UTC m=+0.106488755 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Dec 06 07:56:13 compute-0 nova_compute[251992]: 2025-12-06 07:56:13.569 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:14.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:14 compute-0 ceph-mon[74339]: pgmap v3092: 305 pgs: 305 active+clean; 439 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.2 MiB/s wr, 259 op/s
Dec 06 07:56:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:14.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.049 251996 DEBUG nova.network.neutron [req-fb155a34-1645-4438-93b9-731d16533fa3 req-f788e83f-8f8d-443e-bf68-48232fc7a111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updated VIF entry in instance network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.050 251996 DEBUG nova.network.neutron [req-fb155a34-1645-4438-93b9-731d16533fa3 req-f788e83f-8f8d-443e-bf68-48232fc7a111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating instance_info_cache with network_info: [{"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.169 251996 DEBUG oslo_concurrency.lockutils [req-fb155a34-1645-4438-93b9-731d16533fa3 req-f788e83f-8f8d-443e-bf68-48232fc7a111 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:56:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 443 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.8 MiB/s wr, 246 op/s
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.440 251996 DEBUG oslo_concurrency.lockutils [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.440 251996 DEBUG oslo_concurrency.lockutils [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.459 251996 INFO nova.compute.manager [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Detaching volume 7e4b734a-50a7-4ff3-bbbd-b8eb0e71528f
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.604 251996 INFO nova.virt.block_device [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Attempting to driver detach volume 7e4b734a-50a7-4ff3-bbbd-b8eb0e71528f from mountpoint /dev/vdb
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.615 251996 DEBUG nova.virt.libvirt.driver [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Attempting to detach device vdb from instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.616 251996 DEBUG nova.virt.libvirt.guest [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:56:15 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:56:15 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-7e4b734a-50a7-4ff3-bbbd-b8eb0e71528f">
Dec 06 07:56:15 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:56:15 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:56:15 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:56:15 compute-0 nova_compute[251992]:   </source>
Dec 06 07:56:15 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:56:15 compute-0 nova_compute[251992]:   <serial>7e4b734a-50a7-4ff3-bbbd-b8eb0e71528f</serial>
Dec 06 07:56:15 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:56:15 compute-0 nova_compute[251992]: </disk>
Dec 06 07:56:15 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.645 251996 INFO nova.virt.libvirt.driver [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully detached device vdb from instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 from the persistent domain config.
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.646 251996 DEBUG nova.virt.libvirt.driver [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.646 251996 DEBUG nova.virt.libvirt.guest [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:56:15 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:56:15 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-7e4b734a-50a7-4ff3-bbbd-b8eb0e71528f">
Dec 06 07:56:15 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:56:15 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:56:15 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:56:15 compute-0 nova_compute[251992]:   </source>
Dec 06 07:56:15 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:56:15 compute-0 nova_compute[251992]:   <serial>7e4b734a-50a7-4ff3-bbbd-b8eb0e71528f</serial>
Dec 06 07:56:15 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:56:15 compute-0 nova_compute[251992]: </disk>
Dec 06 07:56:15 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.870 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Received event <DeviceRemovedEvent: 1765007775.8696427, d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.874 251996 DEBUG nova.virt.libvirt.driver [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:56:15 compute-0 nova_compute[251992]: 2025-12-06 07:56:15.877 251996 INFO nova.virt.libvirt.driver [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully detached device vdb from instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 from the live domain config.
Dec 06 07:56:16 compute-0 nova_compute[251992]: 2025-12-06 07:56:16.036 251996 DEBUG nova.objects.instance [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:16 compute-0 nova_compute[251992]: 2025-12-06 07:56:16.072 251996 DEBUG oslo_concurrency.lockutils [None req-d89dd98f-0eee-418d-b60f-e383a6a1ef32 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:16.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:16 compute-0 nova_compute[251992]: 2025-12-06 07:56:16.939 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:16.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:16 compute-0 ceph-mon[74339]: pgmap v3093: 305 pgs: 305 active+clean; 443 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.8 MiB/s wr, 246 op/s
Dec 06 07:56:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 454 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.9 MiB/s wr, 254 op/s
Dec 06 07:56:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:18.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:18 compute-0 nova_compute[251992]: 2025-12-06 07:56:18.492 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:56:18
Dec 06 07:56:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:56:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:56:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', '.mgr', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta']
Dec 06 07:56:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:56:18 compute-0 nova_compute[251992]: 2025-12-06 07:56:18.570 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:18.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:19 compute-0 ceph-mon[74339]: pgmap v3094: 305 pgs: 305 active+clean; 454 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.9 MiB/s wr, 254 op/s
Dec 06 07:56:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 461 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.2 MiB/s wr, 239 op/s
Dec 06 07:56:19 compute-0 nova_compute[251992]: 2025-12-06 07:56:19.400 251996 DEBUG oslo_concurrency.lockutils [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:19 compute-0 nova_compute[251992]: 2025-12-06 07:56:19.401 251996 DEBUG oslo_concurrency.lockutils [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:19 compute-0 nova_compute[251992]: 2025-12-06 07:56:19.935 251996 INFO nova.compute.manager [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Detaching volume 1b7ce5d9-6944-4992-97d1-2fb28bc7b126
Dec 06 07:56:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:56:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3084393564' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:56:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:56:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3084393564' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:56:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3084393564' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:56:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3084393564' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:56:20 compute-0 nova_compute[251992]: 2025-12-06 07:56:20.080 251996 INFO nova.virt.block_device [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Attempting to driver detach volume 1b7ce5d9-6944-4992-97d1-2fb28bc7b126 from mountpoint /dev/vdc
Dec 06 07:56:20 compute-0 nova_compute[251992]: 2025-12-06 07:56:20.088 251996 DEBUG nova.virt.libvirt.driver [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Attempting to detach device vdc from instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:56:20 compute-0 nova_compute[251992]: 2025-12-06 07:56:20.089 251996 DEBUG nova.virt.libvirt.guest [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:56:20 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:56:20 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-1b7ce5d9-6944-4992-97d1-2fb28bc7b126">
Dec 06 07:56:20 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:56:20 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:56:20 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:56:20 compute-0 nova_compute[251992]:   </source>
Dec 06 07:56:20 compute-0 nova_compute[251992]:   <target dev="vdc" bus="virtio"/>
Dec 06 07:56:20 compute-0 nova_compute[251992]:   <serial>1b7ce5d9-6944-4992-97d1-2fb28bc7b126</serial>
Dec 06 07:56:20 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Dec 06 07:56:20 compute-0 nova_compute[251992]: </disk>
Dec 06 07:56:20 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:56:20 compute-0 nova_compute[251992]: 2025-12-06 07:56:20.096 251996 INFO nova.virt.libvirt.driver [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully detached device vdc from instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 from the persistent domain config.
Dec 06 07:56:20 compute-0 nova_compute[251992]: 2025-12-06 07:56:20.096 251996 DEBUG nova.virt.libvirt.driver [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:56:20 compute-0 nova_compute[251992]: 2025-12-06 07:56:20.097 251996 DEBUG nova.virt.libvirt.guest [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:56:20 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:56:20 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-1b7ce5d9-6944-4992-97d1-2fb28bc7b126">
Dec 06 07:56:20 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:56:20 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:56:20 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:56:20 compute-0 nova_compute[251992]:   </source>
Dec 06 07:56:20 compute-0 nova_compute[251992]:   <target dev="vdc" bus="virtio"/>
Dec 06 07:56:20 compute-0 nova_compute[251992]:   <serial>1b7ce5d9-6944-4992-97d1-2fb28bc7b126</serial>
Dec 06 07:56:20 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Dec 06 07:56:20 compute-0 nova_compute[251992]: </disk>
Dec 06 07:56:20 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:56:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:20.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:20 compute-0 podman[368944]: 2025-12-06 07:56:20.385273969 +0000 UTC m=+0.044790979 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 07:56:20 compute-0 podman[368945]: 2025-12-06 07:56:20.394837357 +0000 UTC m=+0.053279279 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 07:56:20 compute-0 nova_compute[251992]: 2025-12-06 07:56:20.583 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Received event <DeviceRemovedEvent: 1765007780.582869, d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:56:20 compute-0 nova_compute[251992]: 2025-12-06 07:56:20.586 251996 DEBUG nova.virt.libvirt.driver [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:56:20 compute-0 nova_compute[251992]: 2025-12-06 07:56:20.588 251996 INFO nova.virt.libvirt.driver [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully detached device vdc from instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 from the live domain config.
Dec 06 07:56:20 compute-0 nova_compute[251992]: 2025-12-06 07:56:20.757 251996 DEBUG nova.objects.instance [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'flavor' on Instance uuid d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:20 compute-0 nova_compute[251992]: 2025-12-06 07:56:20.801 251996 DEBUG oslo_concurrency.lockutils [None req-28076478-bfb6-46d5-ab6e-375a5c6612e8 e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:20.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:21 compute-0 ceph-mon[74339]: pgmap v3095: 305 pgs: 305 active+clean; 461 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.2 MiB/s wr, 239 op/s
Dec 06 07:56:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 244 op/s
Dec 06 07:56:21 compute-0 nova_compute[251992]: 2025-12-06 07:56:21.942 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:22.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:22 compute-0 ceph-mon[74339]: pgmap v3096: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 244 op/s
Dec 06 07:56:22 compute-0 nova_compute[251992]: 2025-12-06 07:56:22.598 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:22 compute-0 nova_compute[251992]: 2025-12-06 07:56:22.950 251996 DEBUG nova.compute.manager [req-20086b84-1e6a-4e51-a83a-e59558218091 req-afbe5444-c634-4391-a721-f6a4430f449c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:22 compute-0 nova_compute[251992]: 2025-12-06 07:56:22.950 251996 DEBUG nova.compute.manager [req-20086b84-1e6a-4e51-a83a-e59558218091 req-afbe5444-c634-4391-a721-f6a4430f449c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing instance network info cache due to event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:56:22 compute-0 nova_compute[251992]: 2025-12-06 07:56:22.951 251996 DEBUG oslo_concurrency.lockutils [req-20086b84-1e6a-4e51-a83a-e59558218091 req-afbe5444-c634-4391-a721-f6a4430f449c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:56:22 compute-0 nova_compute[251992]: 2025-12-06 07:56:22.951 251996 DEBUG oslo_concurrency.lockutils [req-20086b84-1e6a-4e51-a83a-e59558218091 req-afbe5444-c634-4391-a721-f6a4430f449c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:56:22 compute-0 nova_compute[251992]: 2025-12-06 07:56:22.951 251996 DEBUG nova.network.neutron [req-20086b84-1e6a-4e51-a83a-e59558218091 req-afbe5444-c634-4391-a721-f6a4430f449c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:56:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:22.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 143 op/s
Dec 06 07:56:23 compute-0 nova_compute[251992]: 2025-12-06 07:56:23.573 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:56:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:56:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:24.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:24 compute-0 nova_compute[251992]: 2025-12-06 07:56:24.763 251996 DEBUG nova.network.neutron [req-20086b84-1e6a-4e51-a83a-e59558218091 req-afbe5444-c634-4391-a721-f6a4430f449c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updated VIF entry in instance network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:56:24 compute-0 nova_compute[251992]: 2025-12-06 07:56:24.764 251996 DEBUG nova.network.neutron [req-20086b84-1e6a-4e51-a83a-e59558218091 req-afbe5444-c634-4391-a721-f6a4430f449c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating instance_info_cache with network_info: [{"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:56:24 compute-0 nova_compute[251992]: 2025-12-06 07:56:24.781 251996 DEBUG oslo_concurrency.lockutils [req-20086b84-1e6a-4e51-a83a-e59558218091 req-afbe5444-c634-4391-a721-f6a4430f449c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:56:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:24.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:25 compute-0 ceph-mon[74339]: pgmap v3097: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.5 MiB/s wr, 143 op/s
Dec 06 07:56:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 728 KiB/s rd, 1.6 MiB/s wr, 118 op/s
Dec 06 07:56:26 compute-0 sudo[368984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:56:26 compute-0 sudo[368984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:26 compute-0 sudo[368984]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:26 compute-0 sudo[369009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:56:26 compute-0 sudo[369009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:26 compute-0 sudo[369009]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:26.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021974019402568397 of space, bias 1.0, pg target 0.6592205820770519 quantized to 32 (current 32)
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.008751075981760656 of space, bias 1.0, pg target 2.6253227945281967 quantized to 32 (current 32)
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:56:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Dec 06 07:56:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3802332635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:26 compute-0 ceph-mon[74339]: pgmap v3098: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 728 KiB/s rd, 1.6 MiB/s wr, 118 op/s
Dec 06 07:56:26 compute-0 nova_compute[251992]: 2025-12-06 07:56:26.944 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:26.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:56:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:56:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:56:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:56:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:56:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 700 KiB/s rd, 1.0 MiB/s wr, 105 op/s
Dec 06 07:56:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:28.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:28 compute-0 nova_compute[251992]: 2025-12-06 07:56:28.300 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3501000067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2046333412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:56:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2046333412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:56:28 compute-0 nova_compute[251992]: 2025-12-06 07:56:28.575 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:28.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 579 KiB/s rd, 263 KiB/s wr, 89 op/s
Dec 06 07:56:29 compute-0 nova_compute[251992]: 2025-12-06 07:56:29.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:29 compute-0 nova_compute[251992]: 2025-12-06 07:56:29.680 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:29 compute-0 nova_compute[251992]: 2025-12-06 07:56:29.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:29 compute-0 nova_compute[251992]: 2025-12-06 07:56:29.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:29 compute-0 nova_compute[251992]: 2025-12-06 07:56:29.681 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:56:29 compute-0 nova_compute[251992]: 2025-12-06 07:56:29.682 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:29 compute-0 ceph-mon[74339]: pgmap v3099: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 700 KiB/s rd, 1.0 MiB/s wr, 105 op/s
Dec 06 07:56:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:56:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3455582906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:30 compute-0 nova_compute[251992]: 2025-12-06 07:56:30.147 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:30.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:30 compute-0 nova_compute[251992]: 2025-12-06 07:56:30.217 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:56:30 compute-0 nova_compute[251992]: 2025-12-06 07:56:30.218 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:56:30 compute-0 nova_compute[251992]: 2025-12-06 07:56:30.352 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:56:30 compute-0 nova_compute[251992]: 2025-12-06 07:56:30.353 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3968MB free_disk=20.942161560058594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:56:30 compute-0 nova_compute[251992]: 2025-12-06 07:56:30.353 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:30 compute-0 nova_compute[251992]: 2025-12-06 07:56:30.354 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:30 compute-0 nova_compute[251992]: 2025-12-06 07:56:30.502 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:56:30 compute-0 nova_compute[251992]: 2025-12-06 07:56:30.502 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:56:30 compute-0 nova_compute[251992]: 2025-12-06 07:56:30.502 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:56:30 compute-0 nova_compute[251992]: 2025-12-06 07:56:30.637 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:30.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:56:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791126325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:31 compute-0 nova_compute[251992]: 2025-12-06 07:56:31.065 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:31 compute-0 nova_compute[251992]: 2025-12-06 07:56:31.070 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:56:31 compute-0 nova_compute[251992]: 2025-12-06 07:56:31.086 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:56:31 compute-0 nova_compute[251992]: 2025-12-06 07:56:31.110 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:56:31 compute-0 nova_compute[251992]: 2025-12-06 07:56:31.111 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 567 KiB/s rd, 217 KiB/s wr, 80 op/s
Dec 06 07:56:31 compute-0 nova_compute[251992]: 2025-12-06 07:56:31.322 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:31 compute-0 nova_compute[251992]: 2025-12-06 07:56:31.946 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3374446509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:31 compute-0 ceph-mon[74339]: pgmap v3100: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 579 KiB/s rd, 263 KiB/s wr, 89 op/s
Dec 06 07:56:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3455582906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:32.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:32.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1791126325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:33 compute-0 ceph-mon[74339]: pgmap v3101: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 567 KiB/s rd, 217 KiB/s wr, 80 op/s
Dec 06 07:56:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2624542154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 410 KiB/s rd, 33 KiB/s wr, 51 op/s
Dec 06 07:56:33 compute-0 nova_compute[251992]: 2025-12-06 07:56:33.578 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:34.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:34.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:35 compute-0 nova_compute[251992]: 2025-12-06 07:56:35.105 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:35 compute-0 ceph-mon[74339]: pgmap v3102: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 410 KiB/s rd, 33 KiB/s wr, 51 op/s
Dec 06 07:56:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 255 KiB/s rd, 21 KiB/s wr, 31 op/s
Dec 06 07:56:35 compute-0 nova_compute[251992]: 2025-12-06 07:56:35.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:36.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:36 compute-0 ceph-mon[74339]: pgmap v3103: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 255 KiB/s rd, 21 KiB/s wr, 31 op/s
Dec 06 07:56:36 compute-0 nova_compute[251992]: 2025-12-06 07:56:36.948 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:36.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 18 KiB/s wr, 18 op/s
Dec 06 07:56:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3087927977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:38.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:38 compute-0 nova_compute[251992]: 2025-12-06 07:56:38.579 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:38 compute-0 ceph-mon[74339]: pgmap v3104: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 18 KiB/s wr, 18 op/s
Dec 06 07:56:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4063988645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:38.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 16 KiB/s wr, 13 op/s
Dec 06 07:56:39 compute-0 nova_compute[251992]: 2025-12-06 07:56:39.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:39 compute-0 nova_compute[251992]: 2025-12-06 07:56:39.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:56:39 compute-0 nova_compute[251992]: 2025-12-06 07:56:39.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:56:39 compute-0 nova_compute[251992]: 2025-12-06 07:56:39.957 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:56:39 compute-0 nova_compute[251992]: 2025-12-06 07:56:39.957 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:56:39 compute-0 nova_compute[251992]: 2025-12-06 07:56:39.957 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:56:39 compute-0 nova_compute[251992]: 2025-12-06 07:56:39.958 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:40.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:40 compute-0 ceph-mon[74339]: pgmap v3105: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 16 KiB/s wr, 13 op/s
Dec 06 07:56:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:40.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 KiB/s rd, 19 KiB/s wr, 7 op/s
Dec 06 07:56:42 compute-0 nova_compute[251992]: 2025-12-06 07:56:42.007 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating instance_info_cache with network_info: [{"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:56:42 compute-0 nova_compute[251992]: 2025-12-06 07:56:42.014 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:42 compute-0 nova_compute[251992]: 2025-12-06 07:56:42.028 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:56:42 compute-0 nova_compute[251992]: 2025-12-06 07:56:42.029 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:56:42 compute-0 nova_compute[251992]: 2025-12-06 07:56:42.029 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:42 compute-0 nova_compute[251992]: 2025-12-06 07:56:42.029 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:42 compute-0 nova_compute[251992]: 2025-12-06 07:56:42.030 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2467485622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:42.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:42.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:56:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:56:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:56:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:56:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:56:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:56:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 469 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 192 KiB/s wr, 5 op/s
Dec 06 07:56:43 compute-0 nova_compute[251992]: 2025-12-06 07:56:43.582 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:43 compute-0 ceph-mon[74339]: pgmap v3106: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 KiB/s rd, 19 KiB/s wr, 7 op/s
Dec 06 07:56:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:44.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:44 compute-0 podman[369088]: 2025-12-06 07:56:44.446446885 +0000 UTC m=+0.087861163 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Dec 06 07:56:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:44.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:45 compute-0 ceph-mon[74339]: pgmap v3107: 305 pgs: 305 active+clean; 469 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 192 KiB/s wr, 5 op/s
Dec 06 07:56:45 compute-0 nova_compute[251992]: 2025-12-06 07:56:45.236 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:56:45.237 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:56:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:56:45.238 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:56:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 895 KiB/s wr, 25 op/s
Dec 06 07:56:45 compute-0 nova_compute[251992]: 2025-12-06 07:56:45.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:46 compute-0 sudo[369115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:56:46 compute-0 sudo[369115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:46 compute-0 sudo[369115]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:46.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:46 compute-0 sudo[369140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:56:46 compute-0 sudo[369140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:56:46 compute-0 sudo[369140]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1981897599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:46 compute-0 nova_compute[251992]: 2025-12-06 07:56:46.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:56:46 compute-0 nova_compute[251992]: 2025-12-06 07:56:46.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:56:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:46.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:47 compute-0 nova_compute[251992]: 2025-12-06 07:56:47.017 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 444 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Dec 06 07:56:47 compute-0 ceph-mon[74339]: pgmap v3108: 305 pgs: 305 active+clean; 462 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 895 KiB/s wr, 25 op/s
Dec 06 07:56:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3066351593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2783866018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:56:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:48.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:48 compute-0 nova_compute[251992]: 2025-12-06 07:56:48.584 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:48.950993) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007808951059, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2135, "num_deletes": 262, "total_data_size": 3644362, "memory_usage": 3691632, "flush_reason": "Manual Compaction"}
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007808971687, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 3565501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60643, "largest_seqno": 62777, "table_properties": {"data_size": 3556074, "index_size": 5856, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20437, "raw_average_key_size": 20, "raw_value_size": 3536743, "raw_average_value_size": 3543, "num_data_blocks": 254, "num_entries": 998, "num_filter_entries": 998, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007613, "oldest_key_time": 1765007613, "file_creation_time": 1765007808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 20746 microseconds, and 8056 cpu microseconds.
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:48.971728) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 3565501 bytes OK
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:48.971757) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:48.973061) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:48.973073) EVENT_LOG_v1 {"time_micros": 1765007808973069, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:48.973089) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3635551, prev total WAL file size 3636262, number of live WAL files 2.
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:48.974127) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323734' seq:72057594037927935, type:22 .. '6C6F676D0032353238' seq:0, type:0; will stop at (end)
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(3481KB)], [134(10MB)]
Dec 06 07:56:48 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007808974195, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 14653421, "oldest_snapshot_seqno": -1}
Dec 06 07:56:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:49.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 9814 keys, 14481575 bytes, temperature: kUnknown
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007809063310, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 14481575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14415713, "index_size": 40213, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 258136, "raw_average_key_size": 26, "raw_value_size": 14240944, "raw_average_value_size": 1451, "num_data_blocks": 1547, "num_entries": 9814, "num_filter_entries": 9814, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765007808, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:49.063609) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 14481575 bytes
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:49.065021) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.3 rd, 162.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 10.6 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(8.2) write-amplify(4.1) OK, records in: 10356, records dropped: 542 output_compression: NoCompression
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:49.065044) EVENT_LOG_v1 {"time_micros": 1765007809065034, "job": 82, "event": "compaction_finished", "compaction_time_micros": 89204, "compaction_time_cpu_micros": 36125, "output_level": 6, "num_output_files": 1, "total_output_size": 14481575, "num_input_records": 10356, "num_output_records": 9814, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007809065794, "job": 82, "event": "table_file_deletion", "file_number": 136}
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007809068343, "job": 82, "event": "table_file_deletion", "file_number": 134}
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:48.973996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:49.068415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:49.068421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:49.068423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:49.068425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:49 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:49.068427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:49 compute-0 ceph-mon[74339]: pgmap v3109: 305 pgs: 305 active+clean; 444 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Dec 06 07:56:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.0 MiB/s wr, 64 op/s
Dec 06 07:56:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:50.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:50 compute-0 ceph-mon[74339]: pgmap v3110: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.0 MiB/s wr, 64 op/s
Dec 06 07:56:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Dec 06 07:56:51 compute-0 podman[369168]: 2025-12-06 07:56:51.43008132 +0000 UTC m=+0.072655092 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 07:56:51 compute-0 podman[369169]: 2025-12-06 07:56:51.430937023 +0000 UTC m=+0.075232162 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec 06 07:56:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/826770176' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:56:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/826770176' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:56:52 compute-0 nova_compute[251992]: 2025-12-06 07:56:52.020 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:52.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:52 compute-0 ceph-mon[74339]: pgmap v3111: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Dec 06 07:56:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:53.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Dec 06 07:56:53 compute-0 nova_compute[251992]: 2025-12-06 07:56:53.586 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.701053) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813701222, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 307, "num_deletes": 251, "total_data_size": 112612, "memory_usage": 118744, "flush_reason": "Manual Compaction"}
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813704421, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 111806, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62778, "largest_seqno": 63084, "table_properties": {"data_size": 109831, "index_size": 202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5149, "raw_average_key_size": 18, "raw_value_size": 105890, "raw_average_value_size": 379, "num_data_blocks": 9, "num_entries": 279, "num_filter_entries": 279, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007808, "oldest_key_time": 1765007808, "file_creation_time": 1765007813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 3401 microseconds, and 1185 cpu microseconds.
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.704465) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 111806 bytes OK
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.704481) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.705814) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.705827) EVENT_LOG_v1 {"time_micros": 1765007813705823, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.705841) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 110416, prev total WAL file size 110416, number of live WAL files 2.
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.706351) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(109KB)], [137(13MB)]
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813706420, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 14593381, "oldest_snapshot_seqno": -1}
Dec 06 07:56:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 9583 keys, 12657326 bytes, temperature: kUnknown
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813807655, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 12657326, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12594739, "index_size": 37514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24005, "raw_key_size": 254031, "raw_average_key_size": 26, "raw_value_size": 12425703, "raw_average_value_size": 1296, "num_data_blocks": 1425, "num_entries": 9583, "num_filter_entries": 9583, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765007813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.809019) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 12657326 bytes
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.811052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.0 rd, 124.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.8 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(243.7) write-amplify(113.2) OK, records in: 10093, records dropped: 510 output_compression: NoCompression
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.811069) EVENT_LOG_v1 {"time_micros": 1765007813811062, "job": 84, "event": "compaction_finished", "compaction_time_micros": 101329, "compaction_time_cpu_micros": 44112, "output_level": 6, "num_output_files": 1, "total_output_size": 12657326, "num_input_records": 10093, "num_output_records": 9583, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813811205, "job": 84, "event": "table_file_deletion", "file_number": 139}
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007813814296, "job": 84, "event": "table_file_deletion", "file_number": 137}
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.706162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.814391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.814398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.814400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.814402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:56:53.814404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:56:53 compute-0 nova_compute[251992]: 2025-12-06 07:56:53.877 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:53 compute-0 nova_compute[251992]: 2025-12-06 07:56:53.877 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:53 compute-0 nova_compute[251992]: 2025-12-06 07:56:53.904 251996 DEBUG nova.compute.manager [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.003 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.004 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.015 251996 DEBUG nova.virt.hardware [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.015 251996 INFO nova.compute.claims [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.190 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:54.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:56:54.243 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:56:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:56:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/805143102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.637 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.646 251996 DEBUG nova.compute.provider_tree [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.664 251996 DEBUG nova.scheduler.client.report [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.692 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.693 251996 DEBUG nova.compute.manager [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.750 251996 DEBUG nova.compute.manager [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.750 251996 DEBUG nova.network.neutron [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.772 251996 INFO nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.790 251996 DEBUG nova.compute.manager [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.876 251996 DEBUG nova.compute.manager [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.877 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.878 251996 INFO nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Creating image(s)
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.904 251996 DEBUG nova.storage.rbd_utils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 3bd60b1c-0294-4922-a147-cf09dadee874_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.930 251996 DEBUG nova.storage.rbd_utils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 3bd60b1c-0294-4922-a147-cf09dadee874_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.956 251996 DEBUG nova.storage.rbd_utils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 3bd60b1c-0294-4922-a147-cf09dadee874_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:56:54 compute-0 nova_compute[251992]: 2025-12-06 07:56:54.960 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:56:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:56:55 compute-0 nova_compute[251992]: 2025-12-06 07:56:55.025 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:55 compute-0 nova_compute[251992]: 2025-12-06 07:56:55.026 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:55 compute-0 nova_compute[251992]: 2025-12-06 07:56:55.027 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:55 compute-0 nova_compute[251992]: 2025-12-06 07:56:55.027 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:55 compute-0 nova_compute[251992]: 2025-12-06 07:56:55.057 251996 DEBUG nova.storage.rbd_utils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 3bd60b1c-0294-4922-a147-cf09dadee874_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:56:55 compute-0 nova_compute[251992]: 2025-12-06 07:56:55.061 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 3bd60b1c-0294-4922-a147-cf09dadee874_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:56:55 compute-0 nova_compute[251992]: 2025-12-06 07:56:55.088 251996 DEBUG nova.policy [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0ce6d0a8def6432aa60891ea00ef9d8b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63df107b8bd14504974c75ba92ae469b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:56:55 compute-0 ceph-mon[74339]: pgmap v3112: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Dec 06 07:56:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1180554779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:56:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1180554779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:56:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/805143102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:56:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 159 op/s
Dec 06 07:56:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:56.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:56 compute-0 ceph-mon[74339]: pgmap v3113: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 159 op/s
Dec 06 07:56:56 compute-0 nova_compute[251992]: 2025-12-06 07:56:56.867 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 3bd60b1c-0294-4922-a147-cf09dadee874_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.806s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:56:56 compute-0 nova_compute[251992]: 2025-12-06 07:56:56.969 251996 DEBUG nova.storage.rbd_utils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] resizing rbd image 3bd60b1c-0294-4922-a147-cf09dadee874_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:56:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:57.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:57 compute-0 nova_compute[251992]: 2025-12-06 07:56:57.021 251996 DEBUG nova.network.neutron [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Successfully created port: 329a0a78-e637-4b44-989e-4bdd16f6ed49 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:56:57 compute-0 nova_compute[251992]: 2025-12-06 07:56:57.025 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:57 compute-0 nova_compute[251992]: 2025-12-06 07:56:57.100 251996 DEBUG nova.objects.instance [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'migration_context' on Instance uuid 3bd60b1c-0294-4922-a147-cf09dadee874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:56:57 compute-0 nova_compute[251992]: 2025-12-06 07:56:57.170 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:56:57 compute-0 nova_compute[251992]: 2025-12-06 07:56:57.170 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Ensure instance console log exists: /var/lib/nova/instances/3bd60b1c-0294-4922-a147-cf09dadee874/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:56:57 compute-0 nova_compute[251992]: 2025-12-06 07:56:57.171 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:56:57 compute-0 nova_compute[251992]: 2025-12-06 07:56:57.171 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:56:57 compute-0 nova_compute[251992]: 2025-12-06 07:56:57.172 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:56:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 152 op/s
Dec 06 07:56:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:56:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:56:58.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:56:58 compute-0 nova_compute[251992]: 2025-12-06 07:56:58.589 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:56:58 compute-0 nova_compute[251992]: 2025-12-06 07:56:58.745 251996 DEBUG nova.network.neutron [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Successfully updated port: 329a0a78-e637-4b44-989e-4bdd16f6ed49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:56:58 compute-0 nova_compute[251992]: 2025-12-06 07:56:58.791 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "refresh_cache-3bd60b1c-0294-4922-a147-cf09dadee874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:56:58 compute-0 nova_compute[251992]: 2025-12-06 07:56:58.791 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquired lock "refresh_cache-3bd60b1c-0294-4922-a147-cf09dadee874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:56:58 compute-0 nova_compute[251992]: 2025-12-06 07:56:58.791 251996 DEBUG nova.network.neutron [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:56:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:56:58 compute-0 nova_compute[251992]: 2025-12-06 07:56:58.826 251996 DEBUG nova.compute.manager [req-16f78f19-cd2a-413f-9386-f8d570f9379f req-b9f39614-f866-4476-830e-d8f7b2cab9ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Received event network-changed-329a0a78-e637-4b44-989e-4bdd16f6ed49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:56:58 compute-0 nova_compute[251992]: 2025-12-06 07:56:58.827 251996 DEBUG nova.compute.manager [req-16f78f19-cd2a-413f-9386-f8d570f9379f req-b9f39614-f866-4476-830e-d8f7b2cab9ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Refreshing instance network info cache due to event network-changed-329a0a78-e637-4b44-989e-4bdd16f6ed49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:56:58 compute-0 nova_compute[251992]: 2025-12-06 07:56:58.827 251996 DEBUG oslo_concurrency.lockutils [req-16f78f19-cd2a-413f-9386-f8d570f9379f req-b9f39614-f866-4476-830e-d8f7b2cab9ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-3bd60b1c-0294-4922-a147-cf09dadee874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:56:58 compute-0 ceph-mon[74339]: pgmap v3114: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 152 op/s
Dec 06 07:56:58 compute-0 nova_compute[251992]: 2025-12-06 07:56:58.931 251996 DEBUG nova.network.neutron [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:56:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:56:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:56:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:56:59.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:56:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 439 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 965 KiB/s wr, 137 op/s
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.012 251996 DEBUG nova.network.neutron [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Updating instance_info_cache with network_info: [{"id": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "address": "fa:16:3e:dc:25:5c", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329a0a78-e6", "ovs_interfaceid": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.043 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Releasing lock "refresh_cache-3bd60b1c-0294-4922-a147-cf09dadee874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.044 251996 DEBUG nova.compute.manager [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Instance network_info: |[{"id": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "address": "fa:16:3e:dc:25:5c", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329a0a78-e6", "ovs_interfaceid": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.044 251996 DEBUG oslo_concurrency.lockutils [req-16f78f19-cd2a-413f-9386-f8d570f9379f req-b9f39614-f866-4476-830e-d8f7b2cab9ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-3bd60b1c-0294-4922-a147-cf09dadee874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.044 251996 DEBUG nova.network.neutron [req-16f78f19-cd2a-413f-9386-f8d570f9379f req-b9f39614-f866-4476-830e-d8f7b2cab9ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Refreshing network info cache for port 329a0a78-e637-4b44-989e-4bdd16f6ed49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.047 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Start _get_guest_xml network_info=[{"id": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "address": "fa:16:3e:dc:25:5c", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329a0a78-e6", "ovs_interfaceid": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.052 251996 WARNING nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.057 251996 DEBUG nova.virt.libvirt.host [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.058 251996 DEBUG nova.virt.libvirt.host [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.064 251996 DEBUG nova.virt.libvirt.host [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.065 251996 DEBUG nova.virt.libvirt.host [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.066 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.066 251996 DEBUG nova.virt.hardware [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.067 251996 DEBUG nova.virt.hardware [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.067 251996 DEBUG nova.virt.hardware [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.067 251996 DEBUG nova.virt.hardware [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.067 251996 DEBUG nova.virt.hardware [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.067 251996 DEBUG nova.virt.hardware [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.068 251996 DEBUG nova.virt.hardware [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.068 251996 DEBUG nova.virt.hardware [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.068 251996 DEBUG nova.virt.hardware [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.068 251996 DEBUG nova.virt.hardware [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.069 251996 DEBUG nova.virt.hardware [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.072 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:00.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:57:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2066413510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.506 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.545 251996 DEBUG nova.storage.rbd_utils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 3bd60b1c-0294-4922-a147-cf09dadee874_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:57:00 compute-0 nova_compute[251992]: 2025-12-06 07:57:00.551 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:01.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:57:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2862865191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.091 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.093 251996 DEBUG nova.virt.libvirt.vif [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:56:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-768689492',display_name='tempest-AttachVolumeTestJSON-server-768689492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-768689492',id=179,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIJHeet4uvwdibuA5GRHZPmpIh4XCBgdCXAm7X7BkTb0rRuySFdQbhvNZDJ8IsfUOC1nBB4/Mjg31cISQt/m+PbsNHVcX+U/71BUHefGJy1lvnsWPTUZWre4hlUR2ABa6g==',key_name='tempest-keypair-493356033',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63df107b8bd14504974c75ba92ae469b',ramdisk_id='',reservation_id='r-g6enkhu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-950214889',owner_user_name='tempest-AttachVolumeTestJSON-950214889-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:56:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ce6d0a8def6432aa60891ea00ef9d8b',uuid=3bd60b1c-0294-4922-a147-cf09dadee874,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "address": "fa:16:3e:dc:25:5c", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329a0a78-e6", "ovs_interfaceid": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.094 251996 DEBUG nova.network.os_vif_util [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converting VIF {"id": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "address": "fa:16:3e:dc:25:5c", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329a0a78-e6", "ovs_interfaceid": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.095 251996 DEBUG nova.network.os_vif_util [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:25:5c,bridge_name='br-int',has_traffic_filtering=True,id=329a0a78-e637-4b44-989e-4bdd16f6ed49,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329a0a78-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.096 251996 DEBUG nova.objects.instance [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'pci_devices' on Instance uuid 3bd60b1c-0294-4922-a147-cf09dadee874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.115 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:57:01 compute-0 nova_compute[251992]:   <uuid>3bd60b1c-0294-4922-a147-cf09dadee874</uuid>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   <name>instance-000000b3</name>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <nova:name>tempest-AttachVolumeTestJSON-server-768689492</nova:name>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:57:00</nova:creationTime>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <nova:user uuid="0ce6d0a8def6432aa60891ea00ef9d8b">tempest-AttachVolumeTestJSON-950214889-project-member</nova:user>
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <nova:project uuid="63df107b8bd14504974c75ba92ae469b">tempest-AttachVolumeTestJSON-950214889</nova:project>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <nova:port uuid="329a0a78-e637-4b44-989e-4bdd16f6ed49">
Dec 06 07:57:01 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <system>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <entry name="serial">3bd60b1c-0294-4922-a147-cf09dadee874</entry>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <entry name="uuid">3bd60b1c-0294-4922-a147-cf09dadee874</entry>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     </system>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   <os>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   </os>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   <features>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   </features>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/3bd60b1c-0294-4922-a147-cf09dadee874_disk">
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       </source>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/3bd60b1c-0294-4922-a147-cf09dadee874_disk.config">
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       </source>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:57:01 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:dc:25:5c"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <target dev="tap329a0a78-e6"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/3bd60b1c-0294-4922-a147-cf09dadee874/console.log" append="off"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <video>
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     </video>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:57:01 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:57:01 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:57:01 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:57:01 compute-0 nova_compute[251992]: </domain>
Dec 06 07:57:01 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.116 251996 DEBUG nova.compute.manager [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Preparing to wait for external event network-vif-plugged-329a0a78-e637-4b44-989e-4bdd16f6ed49 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.117 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.117 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.117 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.118 251996 DEBUG nova.virt.libvirt.vif [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:56:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-768689492',display_name='tempest-AttachVolumeTestJSON-server-768689492',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-768689492',id=179,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIJHeet4uvwdibuA5GRHZPmpIh4XCBgdCXAm7X7BkTb0rRuySFdQbhvNZDJ8IsfUOC1nBB4/Mjg31cISQt/m+PbsNHVcX+U/71BUHefGJy1lvnsWPTUZWre4hlUR2ABa6g==',key_name='tempest-keypair-493356033',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63df107b8bd14504974c75ba92ae469b',ramdisk_id='',reservation_id='r-g6enkhu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeTestJSON-950214889',owner_user_name='tempest-AttachVolumeTestJSON-950214889-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:56:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ce6d0a8def6432aa60891ea00ef9d8b',uuid=3bd60b1c-0294-4922-a147-cf09dadee874,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "address": "fa:16:3e:dc:25:5c", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329a0a78-e6", "ovs_interfaceid": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.118 251996 DEBUG nova.network.os_vif_util [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converting VIF {"id": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "address": "fa:16:3e:dc:25:5c", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329a0a78-e6", "ovs_interfaceid": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.119 251996 DEBUG nova.network.os_vif_util [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:25:5c,bridge_name='br-int',has_traffic_filtering=True,id=329a0a78-e637-4b44-989e-4bdd16f6ed49,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329a0a78-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.119 251996 DEBUG os_vif [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:25:5c,bridge_name='br-int',has_traffic_filtering=True,id=329a0a78-e637-4b44-989e-4bdd16f6ed49,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329a0a78-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.120 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.120 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.121 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.126 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.126 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap329a0a78-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.126 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap329a0a78-e6, col_values=(('external_ids', {'iface-id': '329a0a78-e637-4b44-989e-4bdd16f6ed49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:25:5c', 'vm-uuid': '3bd60b1c-0294-4922-a147-cf09dadee874'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.128 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:01 compute-0 NetworkManager[48965]: <info>  [1765007821.1296] manager: (tap329a0a78-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.129 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.134 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.135 251996 INFO os_vif [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:25:5c,bridge_name='br-int',has_traffic_filtering=True,id=329a0a78-e637-4b44-989e-4bdd16f6ed49,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329a0a78-e6')
Dec 06 07:57:01 compute-0 ceph-mon[74339]: pgmap v3115: 305 pgs: 305 active+clean; 439 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 965 KiB/s wr, 137 op/s
Dec 06 07:57:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2066413510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2001879348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2862865191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.200 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.200 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.200 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No VIF found with MAC fa:16:3e:dc:25:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.200 251996 INFO nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Using config drive
Dec 06 07:57:01 compute-0 nova_compute[251992]: 2025-12-06 07:57:01.223 251996 DEBUG nova.storage.rbd_utils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 3bd60b1c-0294-4922-a147-cf09dadee874_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:57:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 472 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 139 op/s
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.019 251996 DEBUG nova.network.neutron [req-16f78f19-cd2a-413f-9386-f8d570f9379f req-b9f39614-f866-4476-830e-d8f7b2cab9ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Updated VIF entry in instance network info cache for port 329a0a78-e637-4b44-989e-4bdd16f6ed49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.020 251996 DEBUG nova.network.neutron [req-16f78f19-cd2a-413f-9386-f8d570f9379f req-b9f39614-f866-4476-830e-d8f7b2cab9ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Updating instance_info_cache with network_info: [{"id": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "address": "fa:16:3e:dc:25:5c", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329a0a78-e6", "ovs_interfaceid": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.024 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.039 251996 DEBUG oslo_concurrency.lockutils [req-16f78f19-cd2a-413f-9386-f8d570f9379f req-b9f39614-f866-4476-830e-d8f7b2cab9ab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-3bd60b1c-0294-4922-a147-cf09dadee874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.096 251996 INFO nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Creating config drive at /var/lib/nova/instances/3bd60b1c-0294-4922-a147-cf09dadee874/disk.config
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.103 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3bd60b1c-0294-4922-a147-cf09dadee874/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0s6pt69_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:57:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:02.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.269 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3bd60b1c-0294-4922-a147-cf09dadee874/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0s6pt69_" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.296 251996 DEBUG nova.storage.rbd_utils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] rbd image 3bd60b1c-0294-4922-a147-cf09dadee874_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.300 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3bd60b1c-0294-4922-a147-cf09dadee874/disk.config 3bd60b1c-0294-4922-a147-cf09dadee874_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:02 compute-0 ceph-mon[74339]: pgmap v3116: 305 pgs: 305 active+clean; 472 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 139 op/s
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.841 251996 DEBUG oslo_concurrency.processutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3bd60b1c-0294-4922-a147-cf09dadee874/disk.config 3bd60b1c-0294-4922-a147-cf09dadee874_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.842 251996 INFO nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Deleting local config drive /var/lib/nova/instances/3bd60b1c-0294-4922-a147-cf09dadee874/disk.config because it was imported into RBD.
Dec 06 07:57:02 compute-0 kernel: tap329a0a78-e6: entered promiscuous mode
Dec 06 07:57:02 compute-0 NetworkManager[48965]: <info>  [1765007822.8905] manager: (tap329a0a78-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/303)
Dec 06 07:57:02 compute-0 systemd-udevd[369530]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:57:02 compute-0 ovn_controller[147168]: 2025-12-06T07:57:02Z|00668|binding|INFO|Claiming lport 329a0a78-e637-4b44-989e-4bdd16f6ed49 for this chassis.
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.925 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:02 compute-0 ovn_controller[147168]: 2025-12-06T07:57:02Z|00669|binding|INFO|329a0a78-e637-4b44-989e-4bdd16f6ed49: Claiming fa:16:3e:dc:25:5c 10.100.0.7
Dec 06 07:57:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:02.933 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:25:5c 10.100.0.7'], port_security=['fa:16:3e:dc:25:5c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3bd60b1c-0294-4922-a147-cf09dadee874', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-997afd36-d3a2-430f-ba34-f342135a9bb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63df107b8bd14504974c75ba92ae469b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5014de6a-a224-455c-b4ae-38ca6350a5a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2999ae76-b414-45fb-8813-4039468da309, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=329a0a78-e637-4b44-989e-4bdd16f6ed49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:57:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:02.936 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 329a0a78-e637-4b44-989e-4bdd16f6ed49 in datapath 997afd36-d3a2-430f-ba34-f342135a9bb6 bound to our chassis
Dec 06 07:57:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:02.938 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:57:02 compute-0 NetworkManager[48965]: <info>  [1765007822.9430] device (tap329a0a78-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:57:02 compute-0 NetworkManager[48965]: <info>  [1765007822.9452] device (tap329a0a78-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:57:02 compute-0 ovn_controller[147168]: 2025-12-06T07:57:02Z|00670|binding|INFO|Setting lport 329a0a78-e637-4b44-989e-4bdd16f6ed49 ovn-installed in OVS
Dec 06 07:57:02 compute-0 ovn_controller[147168]: 2025-12-06T07:57:02Z|00671|binding|INFO|Setting lport 329a0a78-e637-4b44-989e-4bdd16f6ed49 up in Southbound
Dec 06 07:57:02 compute-0 nova_compute[251992]: 2025-12-06 07:57:02.947 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:02 compute-0 systemd-machined[212986]: New machine qemu-83-instance-000000b3.
Dec 06 07:57:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:02.956 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6cebc1c5-9039-48fd-8880-02dba21d0abc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:02.957 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap997afd36-d1 in ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:57:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:02.960 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap997afd36-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:57:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:02.960 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bfeb6b45-2dc8-4836-ba23-5b2fc536f61e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:02.961 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[19512423-4f33-4c80-8387-ab6205a46ac0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:02 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-000000b3.
Dec 06 07:57:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:02.977 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[715809ae-ce3c-40c3-ba32-a823f142b529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:02.991 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[755b21db-72e1-4c77-82d4-34cd70db4698]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:03.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.027 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[b67fad8a-ca1d-4f9f-8f77-b57270cec44a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.032 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ea9428-4efd-4ecc-90bf-cb393339c166]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:03 compute-0 NetworkManager[48965]: <info>  [1765007823.0340] manager: (tap997afd36-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/304)
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.064 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a58a37ee-8059-4a02-9f5a-668940f128b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.066 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[5e56f47d-4d0b-448d-992f-1a7a26ea2aaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:03 compute-0 NetworkManager[48965]: <info>  [1765007823.0897] device (tap997afd36-d0): carrier: link connected
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.097 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[96b471f3-0514-47f1-8b1a-7c4784e0cb5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.112 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[014d6e0e-373f-42e1-a0e4-a26dd9b2f1df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap997afd36-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:8b:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809567, 'reachable_time': 41839, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369567, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.129 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[15777aed-ffd0-4426-bb88-989691e179f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:8b72'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809567, 'tstamp': 809567}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369568, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.143 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[14456300-907f-4302-bb1a-2cadff022646]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap997afd36-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:8b:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809567, 'reachable_time': 41839, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369569, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.172 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab8c8d7-9797-4de0-a693-a45bab381127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.227 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc5cda1-ab0e-4163-a20e-034e8cef55dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.229 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap997afd36-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.229 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.230 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap997afd36-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:03 compute-0 nova_compute[251992]: 2025-12-06 07:57:03.231 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:03 compute-0 NetworkManager[48965]: <info>  [1765007823.2327] manager: (tap997afd36-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Dec 06 07:57:03 compute-0 kernel: tap997afd36-d0: entered promiscuous mode
Dec 06 07:57:03 compute-0 nova_compute[251992]: 2025-12-06 07:57:03.234 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.235 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap997afd36-d0, col_values=(('external_ids', {'iface-id': '904065d3-3080-49e2-8707-2794a4ba4e6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:03 compute-0 nova_compute[251992]: 2025-12-06 07:57:03.236 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:03 compute-0 ovn_controller[147168]: 2025-12-06T07:57:03Z|00672|binding|INFO|Releasing lport 904065d3-3080-49e2-8707-2794a4ba4e6e from this chassis (sb_readonly=0)
Dec 06 07:57:03 compute-0 nova_compute[251992]: 2025-12-06 07:57:03.249 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.251 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.252 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b950cb79-56e4-4573-aba0-929d37daa6c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.252 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.253 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'env', 'PROCESS_TAG=haproxy-997afd36-d3a2-430f-ba34-f342135a9bb6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/997afd36-d3a2-430f-ba34-f342135a9bb6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:57:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 479 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 696 KiB/s rd, 2.4 MiB/s wr, 86 op/s
Dec 06 07:57:03 compute-0 podman[369637]: 2025-12-06 07:57:03.613097195 +0000 UTC m=+0.045759646 container create 30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:57:03 compute-0 systemd[1]: Started libpod-conmon-30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26.scope.
Dec 06 07:57:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:57:03 compute-0 podman[369637]: 2025-12-06 07:57:03.588220744 +0000 UTC m=+0.020883215 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:57:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/532c1da0058e670efc2a752ed53c346f751849cdcaa93b91067952a6794214bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:03 compute-0 podman[369637]: 2025-12-06 07:57:03.709583839 +0000 UTC m=+0.142246300 container init 30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:57:03 compute-0 podman[369637]: 2025-12-06 07:57:03.714685836 +0000 UTC m=+0.147348287 container start 30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:57:03 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[369652]: [NOTICE]   (369656) : New worker (369658) forked
Dec 06 07:57:03 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[369652]: [NOTICE]   (369656) : Loading success.
Dec 06 07:57:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.868 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.869 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:03.870 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:03 compute-0 nova_compute[251992]: 2025-12-06 07:57:03.991 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007823.9904916, 3bd60b1c-0294-4922-a147-cf09dadee874 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:57:03 compute-0 nova_compute[251992]: 2025-12-06 07:57:03.992 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] VM Started (Lifecycle Event)
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.019 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.025 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007823.9906301, 3bd60b1c-0294-4922-a147-cf09dadee874 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.026 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] VM Paused (Lifecycle Event)
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.195 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.200 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.226 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:57:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:04.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.970 251996 DEBUG nova.compute.manager [req-3f67ccd2-ab0a-445c-bf42-e0260dc3caf5 req-fe20b943-2bae-47b3-8c63-d75e892ed280 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Received event network-vif-plugged-329a0a78-e637-4b44-989e-4bdd16f6ed49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.971 251996 DEBUG oslo_concurrency.lockutils [req-3f67ccd2-ab0a-445c-bf42-e0260dc3caf5 req-fe20b943-2bae-47b3-8c63-d75e892ed280 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.971 251996 DEBUG oslo_concurrency.lockutils [req-3f67ccd2-ab0a-445c-bf42-e0260dc3caf5 req-fe20b943-2bae-47b3-8c63-d75e892ed280 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.972 251996 DEBUG oslo_concurrency.lockutils [req-3f67ccd2-ab0a-445c-bf42-e0260dc3caf5 req-fe20b943-2bae-47b3-8c63-d75e892ed280 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.972 251996 DEBUG nova.compute.manager [req-3f67ccd2-ab0a-445c-bf42-e0260dc3caf5 req-fe20b943-2bae-47b3-8c63-d75e892ed280 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Processing event network-vif-plugged-329a0a78-e637-4b44-989e-4bdd16f6ed49 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.973 251996 DEBUG nova.compute.manager [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.976 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007824.9767427, 3bd60b1c-0294-4922-a147-cf09dadee874 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.977 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] VM Resumed (Lifecycle Event)
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.979 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.983 251996 INFO nova.virt.libvirt.driver [-] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Instance spawned successfully.
Dec 06 07:57:04 compute-0 nova_compute[251992]: 2025-12-06 07:57:04.983 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:57:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:05.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:05 compute-0 ceph-mon[74339]: pgmap v3117: 305 pgs: 305 active+clean; 479 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 696 KiB/s rd, 2.4 MiB/s wr, 86 op/s
Dec 06 07:57:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3478441018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:05 compute-0 sudo[369673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:05 compute-0 sudo[369673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:05 compute-0 sudo[369673]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:05 compute-0 sudo[369699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:57:05 compute-0 sudo[369699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:05 compute-0 sudo[369699]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:05 compute-0 sudo[369724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:05 compute-0 sudo[369724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:05 compute-0 sudo[369724]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:05 compute-0 sudo[369749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:57:05 compute-0 sudo[369749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 501 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 622 KiB/s rd, 3.6 MiB/s wr, 110 op/s
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.523 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.535 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.536 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.537 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.537 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.538 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.539 251996 DEBUG nova.virt.libvirt.driver [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.546 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.728 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.784 251996 INFO nova.compute.manager [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Took 10.91 seconds to spawn the instance on the hypervisor.
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.785 251996 DEBUG nova.compute.manager [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.854 251996 INFO nova.compute.manager [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Took 11.87 seconds to build instance.
Dec 06 07:57:05 compute-0 sudo[369749]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:05 compute-0 nova_compute[251992]: 2025-12-06 07:57:05.878 251996 DEBUG oslo_concurrency.lockutils [None req-9fc56efa-0cd3-4a4c-a269-82926828b563 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 07:57:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:57:06 compute-0 sudo[369804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:06 compute-0 sudo[369804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:06 compute-0 sudo[369804]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:06 compute-0 sudo[369829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:57:06 compute-0 sudo[369829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:06 compute-0 sudo[369829]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:06 compute-0 nova_compute[251992]: 2025-12-06 07:57:06.129 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:06 compute-0 sudo[369854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:06 compute-0 sudo[369854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:06 compute-0 sudo[369854]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:06 compute-0 sudo[369879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 06 07:57:06 compute-0 sudo[369879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:06.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:06 compute-0 sudo[369904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:06 compute-0 sudo[369904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:06 compute-0 sudo[369904]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:06 compute-0 sudo[369929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:06 compute-0 sudo[369929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:06 compute-0 sudo[369929]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:06 compute-0 sudo[369879]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:57:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 07:57:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:57:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:07.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:57:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:57:07 compute-0 nova_compute[251992]: 2025-12-06 07:57:07.026 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:07 compute-0 nova_compute[251992]: 2025-12-06 07:57:07.078 251996 DEBUG nova.compute.manager [req-64ce812e-e581-4ab5-880f-55a5c43410f3 req-3c130fa7-87c2-4ead-9814-86dd1daf7de3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Received event network-vif-plugged-329a0a78-e637-4b44-989e-4bdd16f6ed49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:07 compute-0 nova_compute[251992]: 2025-12-06 07:57:07.079 251996 DEBUG oslo_concurrency.lockutils [req-64ce812e-e581-4ab5-880f-55a5c43410f3 req-3c130fa7-87c2-4ead-9814-86dd1daf7de3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:07 compute-0 nova_compute[251992]: 2025-12-06 07:57:07.079 251996 DEBUG oslo_concurrency.lockutils [req-64ce812e-e581-4ab5-880f-55a5c43410f3 req-3c130fa7-87c2-4ead-9814-86dd1daf7de3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:07 compute-0 nova_compute[251992]: 2025-12-06 07:57:07.080 251996 DEBUG oslo_concurrency.lockutils [req-64ce812e-e581-4ab5-880f-55a5c43410f3 req-3c130fa7-87c2-4ead-9814-86dd1daf7de3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:07 compute-0 nova_compute[251992]: 2025-12-06 07:57:07.080 251996 DEBUG nova.compute.manager [req-64ce812e-e581-4ab5-880f-55a5c43410f3 req-3c130fa7-87c2-4ead-9814-86dd1daf7de3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] No waiting events found dispatching network-vif-plugged-329a0a78-e637-4b44-989e-4bdd16f6ed49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:57:07 compute-0 nova_compute[251992]: 2025-12-06 07:57:07.080 251996 WARNING nova.compute.manager [req-64ce812e-e581-4ab5-880f-55a5c43410f3 req-3c130fa7-87c2-4ead-9814-86dd1daf7de3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Received unexpected event network-vif-plugged-329a0a78-e637-4b44-989e-4bdd16f6ed49 for instance with vm_state active and task_state None.
Dec 06 07:57:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 385 KiB/s rd, 3.9 MiB/s wr, 115 op/s
Dec 06 07:57:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:57:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:57:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:57:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:57:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:57:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c8a833a9-9548-4a04-9990-8040d437860e does not exist
Dec 06 07:57:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3f183f42-0b1f-4e48-8009-67ec2cbb8ccd does not exist
Dec 06 07:57:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ade66689-5d37-4c26-b16d-12592cafb3c7 does not exist
Dec 06 07:57:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:57:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:57:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:57:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:57:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:57:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:57:07 compute-0 sudo[369974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:07 compute-0 sudo[369974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:07 compute-0 sudo[369974]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:07 compute-0 sudo[369999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:57:07 compute-0 sudo[369999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:07 compute-0 sudo[369999]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:07 compute-0 sudo[370024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:07 compute-0 sudo[370024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:07 compute-0 sudo[370024]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:07 compute-0 sudo[370049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:57:07 compute-0 sudo[370049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:08 compute-0 podman[370113]: 2025-12-06 07:57:08.06186871 +0000 UTC m=+0.071956513 container create cf0f317f66f0e0af6757563024209b77b87fcbda96d607ab8ecf3733830aae3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 07:57:08 compute-0 systemd[1]: Started libpod-conmon-cf0f317f66f0e0af6757563024209b77b87fcbda96d607ab8ecf3733830aae3e.scope.
Dec 06 07:57:08 compute-0 podman[370113]: 2025-12-06 07:57:08.010467962 +0000 UTC m=+0.020555785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:57:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:57:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:08.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:08 compute-0 podman[370113]: 2025-12-06 07:57:08.377619029 +0000 UTC m=+0.387706932 container init cf0f317f66f0e0af6757563024209b77b87fcbda96d607ab8ecf3733830aae3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hugle, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:57:08 compute-0 podman[370113]: 2025-12-06 07:57:08.385878122 +0000 UTC m=+0.395965935 container start cf0f317f66f0e0af6757563024209b77b87fcbda96d607ab8ecf3733830aae3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:57:08 compute-0 podman[370113]: 2025-12-06 07:57:08.390288152 +0000 UTC m=+0.400375955 container attach cf0f317f66f0e0af6757563024209b77b87fcbda96d607ab8ecf3733830aae3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hugle, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 07:57:08 compute-0 amazing_hugle[370130]: 167 167
Dec 06 07:57:08 compute-0 systemd[1]: libpod-cf0f317f66f0e0af6757563024209b77b87fcbda96d607ab8ecf3733830aae3e.scope: Deactivated successfully.
Dec 06 07:57:08 compute-0 podman[370113]: 2025-12-06 07:57:08.396724725 +0000 UTC m=+0.406812528 container died cf0f317f66f0e0af6757563024209b77b87fcbda96d607ab8ecf3733830aae3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:57:08 compute-0 ceph-mon[74339]: pgmap v3118: 305 pgs: 305 active+clean; 501 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 622 KiB/s rd, 3.6 MiB/s wr, 110 op/s
Dec 06 07:57:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:57:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:57:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:57:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:57:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:57:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c34fc10802a1ff5bc91d04a8ca0ef1eb9b95e71e6de94c511e0f14aed6c098c6-merged.mount: Deactivated successfully.
Dec 06 07:57:08 compute-0 podman[370113]: 2025-12-06 07:57:08.802934026 +0000 UTC m=+0.813021829 container remove cf0f317f66f0e0af6757563024209b77b87fcbda96d607ab8ecf3733830aae3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hugle, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 07:57:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:08 compute-0 systemd[1]: libpod-conmon-cf0f317f66f0e0af6757563024209b77b87fcbda96d607ab8ecf3733830aae3e.scope: Deactivated successfully.
Dec 06 07:57:08 compute-0 podman[370153]: 2025-12-06 07:57:08.983268953 +0000 UTC m=+0.039442866 container create 5e2684d65ba0be29a12d8bf72ea8a794e07e6f82807f8085e45bcba6fb7c5feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:57:09 compute-0 systemd[1]: Started libpod-conmon-5e2684d65ba0be29a12d8bf72ea8a794e07e6f82807f8085e45bcba6fb7c5feb.scope.
Dec 06 07:57:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:09.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/235443d5f3235e96ced98a7f2744a4b45dc66c277a2341a128c74911d352b5d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/235443d5f3235e96ced98a7f2744a4b45dc66c277a2341a128c74911d352b5d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/235443d5f3235e96ced98a7f2744a4b45dc66c277a2341a128c74911d352b5d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/235443d5f3235e96ced98a7f2744a4b45dc66c277a2341a128c74911d352b5d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/235443d5f3235e96ced98a7f2744a4b45dc66c277a2341a128c74911d352b5d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:09 compute-0 podman[370153]: 2025-12-06 07:57:08.965956795 +0000 UTC m=+0.022130728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:57:09 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:57:09 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:57:09 compute-0 podman[370153]: 2025-12-06 07:57:09.078848572 +0000 UTC m=+0.135022505 container init 5e2684d65ba0be29a12d8bf72ea8a794e07e6f82807f8085e45bcba6fb7c5feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:57:09 compute-0 podman[370153]: 2025-12-06 07:57:09.085579163 +0000 UTC m=+0.141753076 container start 5e2684d65ba0be29a12d8bf72ea8a794e07e6f82807f8085e45bcba6fb7c5feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:57:09 compute-0 podman[370153]: 2025-12-06 07:57:09.088699657 +0000 UTC m=+0.144873590 container attach 5e2684d65ba0be29a12d8bf72ea8a794e07e6f82807f8085e45bcba6fb7c5feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 07:57:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 131 op/s
Dec 06 07:57:09 compute-0 nova_compute[251992]: 2025-12-06 07:57:09.458 251996 DEBUG nova.compute.manager [req-f3037b5b-4dbf-4cfd-8ab6-c5f8b1136402 req-d0d3f641-31b4-4aad-8dd5-7c68a168b003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Received event network-changed-329a0a78-e637-4b44-989e-4bdd16f6ed49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:09 compute-0 nova_compute[251992]: 2025-12-06 07:57:09.460 251996 DEBUG nova.compute.manager [req-f3037b5b-4dbf-4cfd-8ab6-c5f8b1136402 req-d0d3f641-31b4-4aad-8dd5-7c68a168b003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Refreshing instance network info cache due to event network-changed-329a0a78-e637-4b44-989e-4bdd16f6ed49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:57:09 compute-0 nova_compute[251992]: 2025-12-06 07:57:09.460 251996 DEBUG oslo_concurrency.lockutils [req-f3037b5b-4dbf-4cfd-8ab6-c5f8b1136402 req-d0d3f641-31b4-4aad-8dd5-7c68a168b003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-3bd60b1c-0294-4922-a147-cf09dadee874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:57:09 compute-0 nova_compute[251992]: 2025-12-06 07:57:09.460 251996 DEBUG oslo_concurrency.lockutils [req-f3037b5b-4dbf-4cfd-8ab6-c5f8b1136402 req-d0d3f641-31b4-4aad-8dd5-7c68a168b003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-3bd60b1c-0294-4922-a147-cf09dadee874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:09 compute-0 nova_compute[251992]: 2025-12-06 07:57:09.461 251996 DEBUG nova.network.neutron [req-f3037b5b-4dbf-4cfd-8ab6-c5f8b1136402 req-d0d3f641-31b4-4aad-8dd5-7c68a168b003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Refreshing network info cache for port 329a0a78-e637-4b44-989e-4bdd16f6ed49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:57:09 compute-0 ceph-mon[74339]: pgmap v3119: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 385 KiB/s rd, 3.9 MiB/s wr, 115 op/s
Dec 06 07:57:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2468210167' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:57:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2468210167' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:57:09 compute-0 gracious_feistel[370169]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:57:09 compute-0 gracious_feistel[370169]: --> relative data size: 1.0
Dec 06 07:57:09 compute-0 gracious_feistel[370169]: --> All data devices are unavailable
Dec 06 07:57:09 compute-0 systemd[1]: libpod-5e2684d65ba0be29a12d8bf72ea8a794e07e6f82807f8085e45bcba6fb7c5feb.scope: Deactivated successfully.
Dec 06 07:57:09 compute-0 podman[370153]: 2025-12-06 07:57:09.91666419 +0000 UTC m=+0.972838103 container died 5e2684d65ba0be29a12d8bf72ea8a794e07e6f82807f8085e45bcba6fb7c5feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:57:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-235443d5f3235e96ced98a7f2744a4b45dc66c277a2341a128c74911d352b5d9-merged.mount: Deactivated successfully.
Dec 06 07:57:09 compute-0 podman[370153]: 2025-12-06 07:57:09.969483315 +0000 UTC m=+1.025657228 container remove 5e2684d65ba0be29a12d8bf72ea8a794e07e6f82807f8085e45bcba6fb7c5feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:57:09 compute-0 systemd[1]: libpod-conmon-5e2684d65ba0be29a12d8bf72ea8a794e07e6f82807f8085e45bcba6fb7c5feb.scope: Deactivated successfully.
Dec 06 07:57:10 compute-0 sudo[370049]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:10 compute-0 sudo[370199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:10 compute-0 sudo[370199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:10 compute-0 sudo[370199]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:10 compute-0 sudo[370224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:57:10 compute-0 sudo[370224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:10 compute-0 sudo[370224]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:10 compute-0 sudo[370249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:10 compute-0 sudo[370249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:10 compute-0 sudo[370249]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:57:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:10.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:57:10 compute-0 sudo[370274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:57:10 compute-0 sudo[370274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:10 compute-0 podman[370339]: 2025-12-06 07:57:10.577560633 +0000 UTC m=+0.052742234 container create b81e796cb23c63885281532f481c02f58abd65a71c8ad24f0c51aba899e46146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_poitras, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:57:10 compute-0 podman[370339]: 2025-12-06 07:57:10.544366768 +0000 UTC m=+0.019548389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:57:10 compute-0 systemd[1]: Started libpod-conmon-b81e796cb23c63885281532f481c02f58abd65a71c8ad24f0c51aba899e46146.scope.
Dec 06 07:57:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:57:10 compute-0 podman[370339]: 2025-12-06 07:57:10.762091362 +0000 UTC m=+0.237272983 container init b81e796cb23c63885281532f481c02f58abd65a71c8ad24f0c51aba899e46146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:57:10 compute-0 podman[370339]: 2025-12-06 07:57:10.771027634 +0000 UTC m=+0.246209235 container start b81e796cb23c63885281532f481c02f58abd65a71c8ad24f0c51aba899e46146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_poitras, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:57:10 compute-0 practical_poitras[370355]: 167 167
Dec 06 07:57:10 compute-0 systemd[1]: libpod-b81e796cb23c63885281532f481c02f58abd65a71c8ad24f0c51aba899e46146.scope: Deactivated successfully.
Dec 06 07:57:10 compute-0 podman[370339]: 2025-12-06 07:57:10.830967711 +0000 UTC m=+0.306149372 container attach b81e796cb23c63885281532f481c02f58abd65a71c8ad24f0c51aba899e46146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 07:57:10 compute-0 podman[370339]: 2025-12-06 07:57:10.83168496 +0000 UTC m=+0.306866601 container died b81e796cb23c63885281532f481c02f58abd65a71c8ad24f0c51aba899e46146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 07:57:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-13148b6a3e77f525ca7237961a5b1b3b3b1e3339b4b3f1a9d374d90b1b656268-merged.mount: Deactivated successfully.
Dec 06 07:57:10 compute-0 podman[370339]: 2025-12-06 07:57:10.906376476 +0000 UTC m=+0.381558087 container remove b81e796cb23c63885281532f481c02f58abd65a71c8ad24f0c51aba899e46146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_poitras, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 07:57:10 compute-0 systemd[1]: libpod-conmon-b81e796cb23c63885281532f481c02f58abd65a71c8ad24f0c51aba899e46146.scope: Deactivated successfully.
Dec 06 07:57:10 compute-0 nova_compute[251992]: 2025-12-06 07:57:10.995 251996 DEBUG nova.network.neutron [req-f3037b5b-4dbf-4cfd-8ab6-c5f8b1136402 req-d0d3f641-31b4-4aad-8dd5-7c68a168b003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Updated VIF entry in instance network info cache for port 329a0a78-e637-4b44-989e-4bdd16f6ed49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:57:10 compute-0 nova_compute[251992]: 2025-12-06 07:57:10.996 251996 DEBUG nova.network.neutron [req-f3037b5b-4dbf-4cfd-8ab6-c5f8b1136402 req-d0d3f641-31b4-4aad-8dd5-7c68a168b003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Updating instance_info_cache with network_info: [{"id": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "address": "fa:16:3e:dc:25:5c", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329a0a78-e6", "ovs_interfaceid": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:11 compute-0 ceph-mon[74339]: pgmap v3120: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 131 op/s
Dec 06 07:57:11 compute-0 nova_compute[251992]: 2025-12-06 07:57:11.016 251996 DEBUG oslo_concurrency.lockutils [req-f3037b5b-4dbf-4cfd-8ab6-c5f8b1136402 req-d0d3f641-31b4-4aad-8dd5-7c68a168b003 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-3bd60b1c-0294-4922-a147-cf09dadee874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:11.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:11 compute-0 podman[370383]: 2025-12-06 07:57:11.116386313 +0000 UTC m=+0.070885254 container create 5a68b78e9a79e82f48fa9797257d36b3d076a018425b96b785d2bdf76d6ed025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:57:11 compute-0 nova_compute[251992]: 2025-12-06 07:57:11.131 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:11 compute-0 podman[370383]: 2025-12-06 07:57:11.068456629 +0000 UTC m=+0.022955590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:57:11 compute-0 systemd[1]: Started libpod-conmon-5a68b78e9a79e82f48fa9797257d36b3d076a018425b96b785d2bdf76d6ed025.scope.
Dec 06 07:57:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb14a2ef8baa8d9866d42172ee3e40d35b44d385e6858595261fb77b3c750072/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb14a2ef8baa8d9866d42172ee3e40d35b44d385e6858595261fb77b3c750072/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb14a2ef8baa8d9866d42172ee3e40d35b44d385e6858595261fb77b3c750072/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb14a2ef8baa8d9866d42172ee3e40d35b44d385e6858595261fb77b3c750072/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:11 compute-0 podman[370383]: 2025-12-06 07:57:11.219379482 +0000 UTC m=+0.173878453 container init 5a68b78e9a79e82f48fa9797257d36b3d076a018425b96b785d2bdf76d6ed025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 07:57:11 compute-0 podman[370383]: 2025-12-06 07:57:11.228844737 +0000 UTC m=+0.183343678 container start 5a68b78e9a79e82f48fa9797257d36b3d076a018425b96b785d2bdf76d6ed025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_aryabhata, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:57:11 compute-0 podman[370383]: 2025-12-06 07:57:11.232230518 +0000 UTC m=+0.186729489 container attach 5a68b78e9a79e82f48fa9797257d36b3d076a018425b96b785d2bdf76d6ed025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:57:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 153 op/s
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]: {
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:     "0": [
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:         {
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             "devices": [
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "/dev/loop3"
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             ],
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             "lv_name": "ceph_lv0",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             "lv_size": "7511998464",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             "name": "ceph_lv0",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             "tags": {
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "ceph.cluster_name": "ceph",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "ceph.crush_device_class": "",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "ceph.encrypted": "0",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "ceph.osd_id": "0",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "ceph.type": "block",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:                 "ceph.vdo": "0"
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             },
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             "type": "block",
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:             "vg_name": "ceph_vg0"
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:         }
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]:     ]
Dec 06 07:57:11 compute-0 pensive_aryabhata[370401]: }
Dec 06 07:57:11 compute-0 systemd[1]: libpod-5a68b78e9a79e82f48fa9797257d36b3d076a018425b96b785d2bdf76d6ed025.scope: Deactivated successfully.
Dec 06 07:57:11 compute-0 podman[370383]: 2025-12-06 07:57:11.980988603 +0000 UTC m=+0.935487534 container died 5a68b78e9a79e82f48fa9797257d36b3d076a018425b96b785d2bdf76d6ed025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 07:57:12 compute-0 nova_compute[251992]: 2025-12-06 07:57:12.087 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:12.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb14a2ef8baa8d9866d42172ee3e40d35b44d385e6858595261fb77b3c750072-merged.mount: Deactivated successfully.
Dec 06 07:57:12 compute-0 sshd-session[370376]: Connection reset by authenticating user root 45.135.232.92 port 48804 [preauth]
Dec 06 07:57:12 compute-0 podman[370383]: 2025-12-06 07:57:12.96776939 +0000 UTC m=+1.922268331 container remove 5a68b78e9a79e82f48fa9797257d36b3d076a018425b96b785d2bdf76d6ed025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 07:57:13 compute-0 sudo[370274]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:57:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:13.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:57:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:57:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:57:13 compute-0 systemd[1]: libpod-conmon-5a68b78e9a79e82f48fa9797257d36b3d076a018425b96b785d2bdf76d6ed025.scope: Deactivated successfully.
Dec 06 07:57:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:57:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:57:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:57:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:57:13 compute-0 sudo[370424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:13 compute-0 sudo[370424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:13 compute-0 sudo[370424]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:13 compute-0 sudo[370451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:57:13 compute-0 sudo[370451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:13 compute-0 sudo[370451]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:13 compute-0 sudo[370476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:13 compute-0 sudo[370476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:13 compute-0 sudo[370476]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:13 compute-0 sudo[370501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:57:13 compute-0 sudo[370501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 06 07:57:13 compute-0 podman[370568]: 2025-12-06 07:57:13.58739559 +0000 UTC m=+0.036443384 container create 15291ecebe09344cd38598c67a24d717d9fb20ffdf604fd2630c177c92e1a415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_rubin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:57:13 compute-0 systemd[1]: Started libpod-conmon-15291ecebe09344cd38598c67a24d717d9fb20ffdf604fd2630c177c92e1a415.scope.
Dec 06 07:57:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:57:13 compute-0 podman[370568]: 2025-12-06 07:57:13.663666938 +0000 UTC m=+0.112714742 container init 15291ecebe09344cd38598c67a24d717d9fb20ffdf604fd2630c177c92e1a415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_rubin, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:57:13 compute-0 podman[370568]: 2025-12-06 07:57:13.569573059 +0000 UTC m=+0.018620883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:57:13 compute-0 podman[370568]: 2025-12-06 07:57:13.670868513 +0000 UTC m=+0.119916307 container start 15291ecebe09344cd38598c67a24d717d9fb20ffdf604fd2630c177c92e1a415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 07:57:13 compute-0 podman[370568]: 2025-12-06 07:57:13.674682945 +0000 UTC m=+0.123730739 container attach 15291ecebe09344cd38598c67a24d717d9fb20ffdf604fd2630c177c92e1a415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 07:57:13 compute-0 loving_rubin[370585]: 167 167
Dec 06 07:57:13 compute-0 systemd[1]: libpod-15291ecebe09344cd38598c67a24d717d9fb20ffdf604fd2630c177c92e1a415.scope: Deactivated successfully.
Dec 06 07:57:13 compute-0 conmon[370585]: conmon 15291ecebe09344cd385 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15291ecebe09344cd38598c67a24d717d9fb20ffdf604fd2630c177c92e1a415.scope/container/memory.events
Dec 06 07:57:13 compute-0 podman[370568]: 2025-12-06 07:57:13.679027053 +0000 UTC m=+0.128074847 container died 15291ecebe09344cd38598c67a24d717d9fb20ffdf604fd2630c177c92e1a415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_rubin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:57:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d848909fbdf92e4906791d08a5ddc42251c7965075cc5f13cee4e8df36bf131-merged.mount: Deactivated successfully.
Dec 06 07:57:13 compute-0 podman[370568]: 2025-12-06 07:57:13.714319615 +0000 UTC m=+0.163367409 container remove 15291ecebe09344cd38598c67a24d717d9fb20ffdf604fd2630c177c92e1a415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 07:57:13 compute-0 systemd[1]: libpod-conmon-15291ecebe09344cd38598c67a24d717d9fb20ffdf604fd2630c177c92e1a415.scope: Deactivated successfully.
Dec 06 07:57:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:14 compute-0 podman[370607]: 2025-12-06 07:57:13.937038145 +0000 UTC m=+0.025177071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:57:14 compute-0 podman[370607]: 2025-12-06 07:57:14.102338765 +0000 UTC m=+0.190477661 container create 462d04793a3ac44792dfa9b3539eb2f22d91f49df24b272bb6a4ad02bee6287c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 07:57:14 compute-0 ceph-mon[74339]: pgmap v3121: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 153 op/s
Dec 06 07:57:14 compute-0 systemd[1]: Started libpod-conmon-462d04793a3ac44792dfa9b3539eb2f22d91f49df24b272bb6a4ad02bee6287c.scope.
Dec 06 07:57:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9865c266a4aad080d234d7e2fd2f3aee1df4deb44941ae4fff5954578a972cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9865c266a4aad080d234d7e2fd2f3aee1df4deb44941ae4fff5954578a972cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9865c266a4aad080d234d7e2fd2f3aee1df4deb44941ae4fff5954578a972cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9865c266a4aad080d234d7e2fd2f3aee1df4deb44941ae4fff5954578a972cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:14.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:14 compute-0 podman[370607]: 2025-12-06 07:57:14.38490101 +0000 UTC m=+0.473039936 container init 462d04793a3ac44792dfa9b3539eb2f22d91f49df24b272bb6a4ad02bee6287c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_noyce, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 07:57:14 compute-0 podman[370607]: 2025-12-06 07:57:14.395597589 +0000 UTC m=+0.483736505 container start 462d04793a3ac44792dfa9b3539eb2f22d91f49df24b272bb6a4ad02bee6287c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_noyce, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:57:14 compute-0 podman[370607]: 2025-12-06 07:57:14.603841498 +0000 UTC m=+0.691980414 container attach 462d04793a3ac44792dfa9b3539eb2f22d91f49df24b272bb6a4ad02bee6287c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_noyce, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:57:14 compute-0 sshd-session[370450]: Connection reset by authenticating user root 45.135.232.92 port 48820 [preauth]
Dec 06 07:57:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:57:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:15.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:57:15 compute-0 tender_noyce[370623]: {
Dec 06 07:57:15 compute-0 tender_noyce[370623]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:57:15 compute-0 tender_noyce[370623]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:57:15 compute-0 tender_noyce[370623]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:57:15 compute-0 tender_noyce[370623]:         "osd_id": 0,
Dec 06 07:57:15 compute-0 tender_noyce[370623]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:57:15 compute-0 tender_noyce[370623]:         "type": "bluestore"
Dec 06 07:57:15 compute-0 tender_noyce[370623]:     }
Dec 06 07:57:15 compute-0 tender_noyce[370623]: }
Dec 06 07:57:15 compute-0 systemd[1]: libpod-462d04793a3ac44792dfa9b3539eb2f22d91f49df24b272bb6a4ad02bee6287c.scope: Deactivated successfully.
Dec 06 07:57:15 compute-0 podman[370646]: 2025-12-06 07:57:15.279119499 +0000 UTC m=+0.031646894 container died 462d04793a3ac44792dfa9b3539eb2f22d91f49df24b272bb6a4ad02bee6287c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_noyce, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:57:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9865c266a4aad080d234d7e2fd2f3aee1df4deb44941ae4fff5954578a972cd-merged.mount: Deactivated successfully.
Dec 06 07:57:15 compute-0 podman[370646]: 2025-12-06 07:57:15.328279535 +0000 UTC m=+0.080806880 container remove 462d04793a3ac44792dfa9b3539eb2f22d91f49df24b272bb6a4ad02bee6287c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 07:57:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 123 op/s
Dec 06 07:57:15 compute-0 systemd[1]: libpod-conmon-462d04793a3ac44792dfa9b3539eb2f22d91f49df24b272bb6a4ad02bee6287c.scope: Deactivated successfully.
Dec 06 07:57:15 compute-0 sudo[370501]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:57:15 compute-0 podman[370647]: 2025-12-06 07:57:15.393350672 +0000 UTC m=+0.123061532 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:57:16 compute-0 nova_compute[251992]: 2025-12-06 07:57:16.134 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:16 compute-0 ceph-mon[74339]: pgmap v3122: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 06 07:57:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:16.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:16 compute-0 sshd-session[370628]: Invalid user user from 45.135.232.92 port 23954
Dec 06 07:57:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:57:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:16 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 784a1fb9-02a9-4030-ab87-552f03c551c2 does not exist
Dec 06 07:57:16 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev bd0a2d2f-0c87-4e2d-be21-135d3af02dfc does not exist
Dec 06 07:57:16 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 05fcad26-73f6-4884-8382-0a201c8529c7 does not exist
Dec 06 07:57:16 compute-0 sudo[370687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:16 compute-0 sudo[370687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:16 compute-0 sudo[370687]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:16 compute-0 sudo[370712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:57:16 compute-0 sudo[370712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:16 compute-0 sudo[370712]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:17 compute-0 sshd-session[370628]: Connection reset by invalid user user 45.135.232.92 port 23954 [preauth]
Dec 06 07:57:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:57:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:17.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:57:17 compute-0 nova_compute[251992]: 2025-12-06 07:57:17.089 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 526 KiB/s wr, 91 op/s
Dec 06 07:57:17 compute-0 ceph-mon[74339]: pgmap v3123: 305 pgs: 305 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 123 op/s
Dec 06 07:57:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:57:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:18.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:57:18
Dec 06 07:57:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:57:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:57:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', '.rgw.root', 'images', 'backups', 'default.rgw.log', 'vms', 'default.rgw.control', '.mgr']
Dec 06 07:57:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:57:18 compute-0 sshd-session[370738]: Invalid user admin from 45.135.232.92 port 23984
Dec 06 07:57:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:18 compute-0 sshd-session[370738]: Connection reset by invalid user admin 45.135.232.92 port 23984 [preauth]
Dec 06 07:57:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:19 compute-0 ceph-mon[74339]: pgmap v3124: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 526 KiB/s wr, 91 op/s
Dec 06 07:57:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 513 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 749 KiB/s wr, 77 op/s
Dec 06 07:57:20 compute-0 ovn_controller[147168]: 2025-12-06T07:57:20Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:25:5c 10.100.0.7
Dec 06 07:57:20 compute-0 ovn_controller[147168]: 2025-12-06T07:57:20Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:25:5c 10.100.0.7
Dec 06 07:57:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:20.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:20 compute-0 sshd-session[370741]: Connection reset by authenticating user root 45.135.232.92 port 23990 [preauth]
Dec 06 07:57:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:57:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:21.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:57:21 compute-0 nova_compute[251992]: 2025-12-06 07:57:21.138 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 526 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 97 op/s
Dec 06 07:57:22 compute-0 nova_compute[251992]: 2025-12-06 07:57:22.132 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:22.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:22 compute-0 podman[370744]: 2025-12-06 07:57:22.40607397 +0000 UTC m=+0.057584335 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:57:22 compute-0 podman[370745]: 2025-12-06 07:57:22.410021686 +0000 UTC m=+0.063437962 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 07:57:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:23.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:23 compute-0 ceph-mon[74339]: pgmap v3125: 305 pgs: 305 active+clean; 513 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 749 KiB/s wr, 77 op/s
Dec 06 07:57:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/672821242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:57:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/672821242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:57:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 526 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 265 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Dec 06 07:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:57:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:57:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:24.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:24 compute-0 ceph-mon[74339]: pgmap v3126: 305 pgs: 305 active+clean; 526 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 97 op/s
Dec 06 07:57:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:25.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 409 KiB/s rd, 2.5 MiB/s wr, 97 op/s
Dec 06 07:57:25 compute-0 ceph-mon[74339]: pgmap v3127: 305 pgs: 305 active+clean; 526 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 265 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Dec 06 07:57:26 compute-0 nova_compute[251992]: 2025-12-06 07:57:26.141 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:26.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004357907884329053 of space, bias 1.0, pg target 1.307372365298716 quantized to 32 (current 32)
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00876525276847199 of space, bias 1.0, pg target 2.620810577773125 quantized to 32 (current 32)
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8478230998743718 quantized to 32 (current 32)
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:57:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Dec 06 07:57:26 compute-0 sudo[370784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:26 compute-0 sudo[370784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:26 compute-0 sudo[370784]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:26 compute-0 sudo[370809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:26 compute-0 sudo[370809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:26 compute-0 sudo[370809]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:26 compute-0 ceph-mon[74339]: pgmap v3128: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 409 KiB/s rd, 2.5 MiB/s wr, 97 op/s
Dec 06 07:57:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:57:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:27.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:57:27 compute-0 nova_compute[251992]: 2025-12-06 07:57:27.135 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:57:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:57:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:57:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:57:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:57:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 411 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Dec 06 07:57:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:28.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/668820867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/167430239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:29.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:57:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/597368563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:57:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:57:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/597368563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:57:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 414 KiB/s rd, 2.3 MiB/s wr, 100 op/s
Dec 06 07:57:29 compute-0 nova_compute[251992]: 2025-12-06 07:57:29.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:29 compute-0 nova_compute[251992]: 2025-12-06 07:57:29.686 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:29 compute-0 nova_compute[251992]: 2025-12-06 07:57:29.687 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:29 compute-0 nova_compute[251992]: 2025-12-06 07:57:29.687 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:29 compute-0 nova_compute[251992]: 2025-12-06 07:57:29.687 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:57:29 compute-0 nova_compute[251992]: 2025-12-06 07:57:29.687 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:29 compute-0 ceph-mon[74339]: pgmap v3129: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 411 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Dec 06 07:57:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/597368563' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:57:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/597368563' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:57:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:57:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3812412658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.147 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.242 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.242 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.246 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.247 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:57:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:30.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.421 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.422 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3793MB free_disk=20.89681625366211GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.422 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.423 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.511 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.512 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 3bd60b1c-0294-4922-a147-cf09dadee874 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.512 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.512 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:57:30 compute-0 nova_compute[251992]: 2025-12-06 07:57:30.577 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Dec 06 07:57:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:57:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1802422466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:31.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:31 compute-0 nova_compute[251992]: 2025-12-06 07:57:31.051 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:31 compute-0 nova_compute[251992]: 2025-12-06 07:57:31.057 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:57:31 compute-0 nova_compute[251992]: 2025-12-06 07:57:31.075 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:57:31 compute-0 nova_compute[251992]: 2025-12-06 07:57:31.104 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:57:31 compute-0 nova_compute[251992]: 2025-12-06 07:57:31.104 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:31 compute-0 nova_compute[251992]: 2025-12-06 07:57:31.142 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 538 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 409 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Dec 06 07:57:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Dec 06 07:57:31 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Dec 06 07:57:32 compute-0 nova_compute[251992]: 2025-12-06 07:57:32.181 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:32.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:32 compute-0 ceph-mon[74339]: pgmap v3130: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 414 KiB/s rd, 2.3 MiB/s wr, 100 op/s
Dec 06 07:57:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3812412658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:33.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 538 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 669 KiB/s wr, 58 op/s
Dec 06 07:57:33 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Dec 06 07:57:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1802422466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:34 compute-0 ceph-mon[74339]: pgmap v3131: 305 pgs: 305 active+clean; 538 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 409 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Dec 06 07:57:34 compute-0 ceph-mon[74339]: osdmap e401: 3 total, 3 up, 3 in
Dec 06 07:57:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:34.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:35.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Dec 06 07:57:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 589 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 94 op/s
Dec 06 07:57:35 compute-0 ceph-mon[74339]: pgmap v3133: 305 pgs: 305 active+clean; 538 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 669 KiB/s wr, 58 op/s
Dec 06 07:57:36 compute-0 nova_compute[251992]: 2025-12-06 07:57:36.098 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:36 compute-0 nova_compute[251992]: 2025-12-06 07:57:36.145 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:57:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:36.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:57:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Dec 06 07:57:36 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Dec 06 07:57:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:37.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:37 compute-0 nova_compute[251992]: 2025-12-06 07:57:37.182 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 119 op/s
Dec 06 07:57:37 compute-0 ceph-mon[74339]: pgmap v3134: 305 pgs: 305 active+clean; 589 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.9 MiB/s wr, 94 op/s
Dec 06 07:57:37 compute-0 nova_compute[251992]: 2025-12-06 07:57:37.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:38.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:38 compute-0 ceph-mon[74339]: osdmap e402: 3 total, 3 up, 3 in
Dec 06 07:57:38 compute-0 ceph-mon[74339]: pgmap v3136: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 119 op/s
Dec 06 07:57:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1035152140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Dec 06 07:57:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Dec 06 07:57:38 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Dec 06 07:57:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:39.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 118 op/s
Dec 06 07:57:39 compute-0 ceph-mon[74339]: osdmap e403: 3 total, 3 up, 3 in
Dec 06 07:57:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2114481433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:40 compute-0 nova_compute[251992]: 2025-12-06 07:57:40.188 251996 DEBUG nova.compute.manager [req-40a2cb4e-3e8c-4b9e-a6ab-e83fef572973 req-9aedd739-c570-4881-9eb5-39b4e7e438df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:40 compute-0 nova_compute[251992]: 2025-12-06 07:57:40.189 251996 DEBUG nova.compute.manager [req-40a2cb4e-3e8c-4b9e-a6ab-e83fef572973 req-9aedd739-c570-4881-9eb5-39b4e7e438df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing instance network info cache due to event network-changed-7b695d2a-7c72-4125-a16a-a2d8b4342195. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:57:40 compute-0 nova_compute[251992]: 2025-12-06 07:57:40.189 251996 DEBUG oslo_concurrency.lockutils [req-40a2cb4e-3e8c-4b9e-a6ab-e83fef572973 req-9aedd739-c570-4881-9eb5-39b4e7e438df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:57:40 compute-0 nova_compute[251992]: 2025-12-06 07:57:40.190 251996 DEBUG oslo_concurrency.lockutils [req-40a2cb4e-3e8c-4b9e-a6ab-e83fef572973 req-9aedd739-c570-4881-9eb5-39b4e7e438df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:40 compute-0 nova_compute[251992]: 2025-12-06 07:57:40.190 251996 DEBUG nova.network.neutron [req-40a2cb4e-3e8c-4b9e-a6ab-e83fef572973 req-9aedd739-c570-4881-9eb5-39b4e7e438df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Refreshing network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:57:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:40.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:40 compute-0 nova_compute[251992]: 2025-12-06 07:57:40.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:40 compute-0 nova_compute[251992]: 2025-12-06 07:57:40.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:57:40 compute-0 nova_compute[251992]: 2025-12-06 07:57:40.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:57:40 compute-0 nova_compute[251992]: 2025-12-06 07:57:40.899 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:57:41 compute-0 ceph-mon[74339]: pgmap v3138: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 118 op/s
Dec 06 07:57:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:41.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:41 compute-0 nova_compute[251992]: 2025-12-06 07:57:41.148 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 121 op/s
Dec 06 07:57:41 compute-0 nova_compute[251992]: 2025-12-06 07:57:41.992 251996 DEBUG nova.network.neutron [req-40a2cb4e-3e8c-4b9e-a6ab-e83fef572973 req-9aedd739-c570-4881-9eb5-39b4e7e438df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updated VIF entry in instance network info cache for port 7b695d2a-7c72-4125-a16a-a2d8b4342195. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:57:41 compute-0 nova_compute[251992]: 2025-12-06 07:57:41.993 251996 DEBUG nova.network.neutron [req-40a2cb4e-3e8c-4b9e-a6ab-e83fef572973 req-9aedd739-c570-4881-9eb5-39b4e7e438df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating instance_info_cache with network_info: [{"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:42 compute-0 nova_compute[251992]: 2025-12-06 07:57:42.015 251996 DEBUG oslo_concurrency.lockutils [req-40a2cb4e-3e8c-4b9e-a6ab-e83fef572973 req-9aedd739-c570-4881-9eb5-39b4e7e438df 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:42 compute-0 nova_compute[251992]: 2025-12-06 07:57:42.015 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:57:42 compute-0 nova_compute[251992]: 2025-12-06 07:57:42.015 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:57:42 compute-0 nova_compute[251992]: 2025-12-06 07:57:42.016 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:42 compute-0 nova_compute[251992]: 2025-12-06 07:57:42.222 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:42.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:42 compute-0 ceph-mon[74339]: pgmap v3139: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 121 op/s
Dec 06 07:57:42 compute-0 nova_compute[251992]: 2025-12-06 07:57:42.918 251996 DEBUG oslo_concurrency.lockutils [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:42 compute-0 nova_compute[251992]: 2025-12-06 07:57:42.918 251996 DEBUG oslo_concurrency.lockutils [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.001 251996 DEBUG nova.objects.instance [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 3bd60b1c-0294-4922-a147-cf09dadee874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:57:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:57:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:57:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:57:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:57:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:57:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:43.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.076 251996 DEBUG oslo_concurrency.lockutils [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.209 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating instance_info_cache with network_info: [{"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.264 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.265 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.265 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.265 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.265 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 22 op/s
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.559 251996 DEBUG oslo_concurrency.lockutils [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.559 251996 DEBUG oslo_concurrency.lockutils [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.560 251996 INFO nova.compute.manager [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Attaching volume b338aabc-085b-43e3-adbc-b55bb8505744 to /dev/vdb
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.763 251996 DEBUG os_brick.utils [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.765 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.777 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.778 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[50532ada-9b58-41dd-90eb-555cf4f56f78]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.779 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.786 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.787 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[27533c5f-902b-4634-92b3-7ecf1f585836]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.788 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.797 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.797 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[9e23eb31-e745-4bb5-94bd-7c4cf8ba7a39]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.798 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[85832f19-0d86-40bc-bdea-ebcf7c5359d9]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.798 251996 DEBUG oslo_concurrency.processutils [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.825 251996 DEBUG oslo_concurrency.processutils [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.829 251996 DEBUG os_brick.initiator.connectors.lightos [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.829 251996 DEBUG os_brick.initiator.connectors.lightos [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.829 251996 DEBUG os_brick.initiator.connectors.lightos [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.829 251996 DEBUG os_brick.utils [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:57:43 compute-0 nova_compute[251992]: 2025-12-06 07:57:43.830 251996 DEBUG nova.virt.block_device [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Updating existing volume attachment record: 07facc5c-dc70-489d-99ff-b91176773f2a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:57:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Dec 06 07:57:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:44.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:44 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Dec 06 07:57:44 compute-0 ceph-mon[74339]: pgmap v3140: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 22 op/s
Dec 06 07:57:44 compute-0 ceph-mon[74339]: osdmap e404: 3 total, 3 up, 3 in
Dec 06 07:57:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2598950693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:44 compute-0 nova_compute[251992]: 2025-12-06 07:57:44.649 251996 DEBUG nova.objects.instance [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 3bd60b1c-0294-4922-a147-cf09dadee874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:44 compute-0 nova_compute[251992]: 2025-12-06 07:57:44.672 251996 DEBUG nova.virt.libvirt.driver [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Attempting to attach volume b338aabc-085b-43e3-adbc-b55bb8505744 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:57:44 compute-0 nova_compute[251992]: 2025-12-06 07:57:44.675 251996 DEBUG nova.virt.libvirt.guest [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:57:44 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:57:44 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-b338aabc-085b-43e3-adbc-b55bb8505744">
Dec 06 07:57:44 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:44 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:44 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:44 compute-0 nova_compute[251992]:   </source>
Dec 06 07:57:44 compute-0 nova_compute[251992]:   <auth username="openstack">
Dec 06 07:57:44 compute-0 nova_compute[251992]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:57:44 compute-0 nova_compute[251992]:   </auth>
Dec 06 07:57:44 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:57:44 compute-0 nova_compute[251992]:   <serial>b338aabc-085b-43e3-adbc-b55bb8505744</serial>
Dec 06 07:57:44 compute-0 nova_compute[251992]: </disk>
Dec 06 07:57:44 compute-0 nova_compute[251992]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:57:44 compute-0 nova_compute[251992]: 2025-12-06 07:57:44.805 251996 DEBUG nova.virt.libvirt.driver [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:44 compute-0 nova_compute[251992]: 2025-12-06 07:57:44.805 251996 DEBUG nova.virt.libvirt.driver [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:44 compute-0 nova_compute[251992]: 2025-12-06 07:57:44.805 251996 DEBUG nova.virt.libvirt.driver [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:44 compute-0 nova_compute[251992]: 2025-12-06 07:57:44.806 251996 DEBUG nova.virt.libvirt.driver [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No VIF found with MAC fa:16:3e:dc:25:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:57:44 compute-0 nova_compute[251992]: 2025-12-06 07:57:44.984 251996 DEBUG oslo_concurrency.lockutils [None req-cf41534c-cf66-44ce-908f-9b86eb636c7d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:45.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 573 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 2.9 KiB/s wr, 58 op/s
Dec 06 07:57:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1237685915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:46.149 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:57:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:46.150 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:57:46 compute-0 nova_compute[251992]: 2025-12-06 07:57:46.149 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:46 compute-0 nova_compute[251992]: 2025-12-06 07:57:46.259 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:46.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:46 compute-0 podman[370916]: 2025-12-06 07:57:46.448005815 +0000 UTC m=+0.102704162 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:57:46 compute-0 sudo[370942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:46 compute-0 sudo[370942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:46 compute-0 sudo[370942]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:46 compute-0 sudo[370967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:57:46 compute-0 sudo[370967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:57:46 compute-0 sudo[370967]: pam_unix(sudo:session): session closed for user root
Dec 06 07:57:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:47.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:47 compute-0 ceph-mon[74339]: pgmap v3142: 305 pgs: 305 active+clean; 573 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 2.9 KiB/s wr, 58 op/s
Dec 06 07:57:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3400634989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:47 compute-0 nova_compute[251992]: 2025-12-06 07:57:47.224 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 553 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 50 KiB/s rd, 3.1 KiB/s wr, 66 op/s
Dec 06 07:57:47 compute-0 nova_compute[251992]: 2025-12-06 07:57:47.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:47 compute-0 nova_compute[251992]: 2025-12-06 07:57:47.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:57:47 compute-0 nova_compute[251992]: 2025-12-06 07:57:47.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:57:47 compute-0 nova_compute[251992]: 2025-12-06 07:57:47.957 251996 DEBUG oslo_concurrency.lockutils [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:47 compute-0 nova_compute[251992]: 2025-12-06 07:57:47.957 251996 DEBUG oslo_concurrency.lockutils [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:47 compute-0 nova_compute[251992]: 2025-12-06 07:57:47.992 251996 DEBUG nova.objects.instance [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 3bd60b1c-0294-4922-a147-cf09dadee874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.144 251996 DEBUG oslo_concurrency.lockutils [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:57:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:48.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.585 251996 DEBUG oslo_concurrency.lockutils [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.586 251996 DEBUG oslo_concurrency.lockutils [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.586 251996 INFO nova.compute.manager [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Attaching volume eb0607fb-3d18-4469-8db9-a7f938c59ae5 to /dev/vdc
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.761 251996 DEBUG os_brick.utils [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.763 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.776 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.777 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[0186571a-7659-49f7-880a-4b3c83470cb3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.778 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.787 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.788 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ef40cd-d1a8-4ee1-9ad5-d43e206a3261]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.789 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.805 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.805 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[0352feb3-08d5-4905-8459-fc26ecf5f329]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.807 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[53923699-b112-470d-a985-2a2ad0a6b55c]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.807 251996 DEBUG oslo_concurrency.processutils [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.851 251996 DEBUG oslo_concurrency.processutils [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "nvme version" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.854 251996 DEBUG os_brick.initiator.connectors.lightos [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.854 251996 DEBUG os_brick.initiator.connectors.lightos [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.854 251996 DEBUG os_brick.initiator.connectors.lightos [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.855 251996 DEBUG os_brick.utils [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] <== get_connector_properties: return (92ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 07:57:48 compute-0 nova_compute[251992]: 2025-12-06 07:57:48.855 251996 DEBUG nova.virt.block_device [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Updating existing volume attachment record: 1648adba-2716-478f-9583-31d19f3d0ba2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 07:57:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:49.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:49 compute-0 ceph-mon[74339]: pgmap v3143: 305 pgs: 305 active+clean; 553 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 50 KiB/s rd, 3.1 KiB/s wr, 66 op/s
Dec 06 07:57:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 4.8 KiB/s wr, 73 op/s
Dec 06 07:57:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:57:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1315147020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:49 compute-0 nova_compute[251992]: 2025-12-06 07:57:49.998 251996 DEBUG nova.objects.instance [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 3bd60b1c-0294-4922-a147-cf09dadee874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:50 compute-0 nova_compute[251992]: 2025-12-06 07:57:50.157 251996 DEBUG nova.virt.libvirt.driver [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Attempting to attach volume eb0607fb-3d18-4469-8db9-a7f938c59ae5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 07:57:50 compute-0 nova_compute[251992]: 2025-12-06 07:57:50.159 251996 DEBUG nova.virt.libvirt.guest [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 07:57:50 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:57:50 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-eb0607fb-3d18-4469-8db9-a7f938c59ae5">
Dec 06 07:57:50 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:50 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:50 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:50 compute-0 nova_compute[251992]:   </source>
Dec 06 07:57:50 compute-0 nova_compute[251992]:   <auth username="openstack">
Dec 06 07:57:50 compute-0 nova_compute[251992]:     <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:57:50 compute-0 nova_compute[251992]:   </auth>
Dec 06 07:57:50 compute-0 nova_compute[251992]:   <target dev="vdc" bus="virtio"/>
Dec 06 07:57:50 compute-0 nova_compute[251992]:   <serial>eb0607fb-3d18-4469-8db9-a7f938c59ae5</serial>
Dec 06 07:57:50 compute-0 nova_compute[251992]: </disk>
Dec 06 07:57:50 compute-0 nova_compute[251992]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 07:57:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:57:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:50.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:57:50 compute-0 nova_compute[251992]: 2025-12-06 07:57:50.478 251996 DEBUG nova.virt.libvirt.driver [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:50 compute-0 nova_compute[251992]: 2025-12-06 07:57:50.479 251996 DEBUG nova.virt.libvirt.driver [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:50 compute-0 nova_compute[251992]: 2025-12-06 07:57:50.479 251996 DEBUG nova.virt.libvirt.driver [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:50 compute-0 nova_compute[251992]: 2025-12-06 07:57:50.479 251996 DEBUG nova.virt.libvirt.driver [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:57:50 compute-0 nova_compute[251992]: 2025-12-06 07:57:50.480 251996 DEBUG nova.virt.libvirt.driver [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] No VIF found with MAC fa:16:3e:dc:25:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:57:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1315147020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3680227484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.023 251996 DEBUG oslo_concurrency.lockutils [None req-3295481c-634e-401d-88cf-f7b56ad850c3 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:51.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.094 251996 DEBUG oslo_concurrency.lockutils [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.095 251996 DEBUG oslo_concurrency.lockutils [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.095 251996 DEBUG oslo_concurrency.lockutils [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.095 251996 DEBUG oslo_concurrency.lockutils [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.096 251996 DEBUG oslo_concurrency.lockutils [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.097 251996 INFO nova.compute.manager [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Terminating instance
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.098 251996 DEBUG nova.compute.manager [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.153 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 6.0 KiB/s wr, 84 op/s
Dec 06 07:57:51 compute-0 kernel: tap7b695d2a-7c (unregistering): left promiscuous mode
Dec 06 07:57:51 compute-0 NetworkManager[48965]: <info>  [1765007871.4483] device (tap7b695d2a-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:57:51 compute-0 ovn_controller[147168]: 2025-12-06T07:57:51Z|00673|binding|INFO|Releasing lport 7b695d2a-7c72-4125-a16a-a2d8b4342195 from this chassis (sb_readonly=0)
Dec 06 07:57:51 compute-0 ovn_controller[147168]: 2025-12-06T07:57:51Z|00674|binding|INFO|Setting lport 7b695d2a-7c72-4125-a16a-a2d8b4342195 down in Southbound
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.456 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:51 compute-0 ovn_controller[147168]: 2025-12-06T07:57:51Z|00675|binding|INFO|Removing iface tap7b695d2a-7c ovn-installed in OVS
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.458 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.471 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:51 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000af.scope: Deactivated successfully.
Dec 06 07:57:51 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000af.scope: Consumed 19.533s CPU time.
Dec 06 07:57:51 compute-0 systemd-machined[212986]: Machine qemu-82-instance-000000af terminated.
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.579 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:c3:e9 10.100.0.7'], port_security=['fa:16:3e:5c:c3:e9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3764201-4b86-4407-84d2-684bd05a44b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6164fee998c94b71a37886fe42b4c56c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2bb7af25-e3c4-4687-888a-3caf6297e5c6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a293aea-136f-4ea2-8198-6213071653ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=7b695d2a-7c72-4125-a16a-a2d8b4342195) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.581 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 7b695d2a-7c72-4125-a16a-a2d8b4342195 in datapath a3764201-4b86-4407-84d2-684bd05a44b3 unbound from our chassis
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.584 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3764201-4b86-4407-84d2-684bd05a44b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.586 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[af4bd13c-168a-4a09-95f3-48da730deaa6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.587 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 namespace which is not needed anymore
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.722 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.730 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.738 251996 INFO nova.virt.libvirt.driver [-] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Instance destroyed successfully.
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.738 251996 DEBUG nova.objects.instance [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lazy-loading 'resources' on Instance uuid d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:51 compute-0 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[367923]: [NOTICE]   (367927) : haproxy version is 2.8.14-c23fe91
Dec 06 07:57:51 compute-0 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[367923]: [NOTICE]   (367927) : path to executable is /usr/sbin/haproxy
Dec 06 07:57:51 compute-0 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[367923]: [WARNING]  (367927) : Exiting Master process...
Dec 06 07:57:51 compute-0 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[367923]: [WARNING]  (367927) : Exiting Master process...
Dec 06 07:57:51 compute-0 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[367923]: [ALERT]    (367927) : Current worker (367929) exited with code 143 (Terminated)
Dec 06 07:57:51 compute-0 neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3[367923]: [WARNING]  (367927) : All workers exited. Exiting... (0)
Dec 06 07:57:51 compute-0 systemd[1]: libpod-7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9.scope: Deactivated successfully.
Dec 06 07:57:51 compute-0 conmon[367923]: conmon 7eef8bab9c080fbd43ae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9.scope/container/memory.events
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.751 251996 DEBUG nova.virt.libvirt.vif [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:55:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1835922517',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1835922517',id=175,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMlCJgA181fL+hWV96XYAuaaRjR/DFcxIrENEwwuUSLNLg2Wo/zP2WcPtpxKQuFaV64lRGeBPzRnqkTHdlSql81bpyaGplyAnqRHnVLqVTwCxa7e5Tmw+I0TD65PH3Dpw==',key_name='tempest-TestInstancesWithCinderVolumes-1103529456',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:55:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6164fee998c94b71a37886fe42b4c56c',ramdisk_id='',reservation_id='r-qq1sqzj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',
image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-1429596635',owner_user_name='tempest-TestInstancesWithCinderVolumes-1429596635-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:55:51Z,user_data=None,user_id='e685a049c8a74aa8aea831fbdaf2acf8',uuid=d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.752 251996 DEBUG nova.network.os_vif_util [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converting VIF {"id": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "address": "fa:16:3e:5c:c3:e9", "network": {"id": "a3764201-4b86-4407-84d2-684bd05a44b3", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-2060653314-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6164fee998c94b71a37886fe42b4c56c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b695d2a-7c", "ovs_interfaceid": "7b695d2a-7c72-4125-a16a-a2d8b4342195", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:57:51 compute-0 podman[371046]: 2025-12-06 07:57:51.752687178 +0000 UTC m=+0.047124173 container died 7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.754 251996 DEBUG nova.network.os_vif_util [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5c:c3:e9,bridge_name='br-int',has_traffic_filtering=True,id=7b695d2a-7c72-4125-a16a-a2d8b4342195,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b695d2a-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.754 251996 DEBUG os_vif [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:c3:e9,bridge_name='br-int',has_traffic_filtering=True,id=7b695d2a-7c72-4125-a16a-a2d8b4342195,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b695d2a-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.757 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.757 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b695d2a-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.759 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.761 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.764 251996 INFO os_vif [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:c3:e9,bridge_name='br-int',has_traffic_filtering=True,id=7b695d2a-7c72-4125-a16a-a2d8b4342195,network=Network(a3764201-4b86-4407-84d2-684bd05a44b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b695d2a-7c')
Dec 06 07:57:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9-userdata-shm.mount: Deactivated successfully.
Dec 06 07:57:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d903600859cfcfe897f814bef42a336ee5f9248229a4e5aa7dcc90979243f094-merged.mount: Deactivated successfully.
Dec 06 07:57:51 compute-0 podman[371046]: 2025-12-06 07:57:51.794486926 +0000 UTC m=+0.088923921 container cleanup 7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 07:57:51 compute-0 systemd[1]: libpod-conmon-7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9.scope: Deactivated successfully.
Dec 06 07:57:51 compute-0 podman[371102]: 2025-12-06 07:57:51.850090536 +0000 UTC m=+0.036108685 container remove 7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 07:57:51 compute-0 ceph-mon[74339]: pgmap v3144: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 4.8 KiB/s wr, 73 op/s
Dec 06 07:57:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3980298066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.855 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a8e8a2-f317-480e-af4c-92c93f44a430]: (4, ('Sat Dec  6 07:57:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 (7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9)\n7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9\nSat Dec  6 07:57:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 (7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9)\n7eef8bab9c080fbd43ae0c2163735cff54dd1f3df761ce98efec7b6f2a93fbf9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.856 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3366bc-286c-414a-81f3-28dfa36ab9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.857 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3764201-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:51 compute-0 kernel: tapa3764201-40: left promiscuous mode
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.859 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:51 compute-0 nova_compute[251992]: 2025-12-06 07:57:51.873 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.876 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7d44d6b0-c351-439b-9ef2-05563b138125]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.893 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[29ef23be-1a0c-413c-9435-ac0423ca3c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.895 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[93134348-5757-423d-81a1-bd4bbcf9d56f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.910 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[39332a68-07c3-4267-9d8b-105affdba80c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802338, 'reachable_time': 44266, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371120, 'error': None, 'target': 'ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:51 compute-0 systemd[1]: run-netns-ovnmeta\x2da3764201\x2d4b86\x2d4407\x2d84d2\x2d684bd05a44b3.mount: Deactivated successfully.
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.913 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3764201-4b86-4407-84d2-684bd05a44b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:57:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:51.913 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[078853a1-fe5e-4990-a604-4809f473e1a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:52.153 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.185 251996 DEBUG nova.compute.manager [req-a0dd0117-7149-4ef8-a51f-f4606ad6327c req-2f32465f-5feb-4ce5-aad2-46a182ff839a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-vif-unplugged-7b695d2a-7c72-4125-a16a-a2d8b4342195 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.185 251996 DEBUG oslo_concurrency.lockutils [req-a0dd0117-7149-4ef8-a51f-f4606ad6327c req-2f32465f-5feb-4ce5-aad2-46a182ff839a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.185 251996 DEBUG oslo_concurrency.lockutils [req-a0dd0117-7149-4ef8-a51f-f4606ad6327c req-2f32465f-5feb-4ce5-aad2-46a182ff839a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.186 251996 DEBUG oslo_concurrency.lockutils [req-a0dd0117-7149-4ef8-a51f-f4606ad6327c req-2f32465f-5feb-4ce5-aad2-46a182ff839a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.186 251996 DEBUG nova.compute.manager [req-a0dd0117-7149-4ef8-a51f-f4606ad6327c req-2f32465f-5feb-4ce5-aad2-46a182ff839a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] No waiting events found dispatching network-vif-unplugged-7b695d2a-7c72-4125-a16a-a2d8b4342195 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.186 251996 DEBUG nova.compute.manager [req-a0dd0117-7149-4ef8-a51f-f4606ad6327c req-2f32465f-5feb-4ce5-aad2-46a182ff839a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-vif-unplugged-7b695d2a-7c72-4125-a16a-a2d8b4342195 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.227 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.294 251996 INFO nova.virt.libvirt.driver [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Deleting instance files /var/lib/nova/instances/d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2_del
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.295 251996 INFO nova.virt.libvirt.driver [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Deletion of /var/lib/nova/instances/d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2_del complete
Dec 06 07:57:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:57:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:52.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.375 251996 INFO nova.compute.manager [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Took 1.28 seconds to destroy the instance on the hypervisor.
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.375 251996 DEBUG oslo.service.loopingcall [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.376 251996 DEBUG nova.compute.manager [-] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.376 251996 DEBUG nova.network.neutron [-] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.971 251996 DEBUG oslo_concurrency.lockutils [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.972 251996 DEBUG oslo_concurrency.lockutils [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:52 compute-0 nova_compute[251992]: 2025-12-06 07:57:52.989 251996 INFO nova.compute.manager [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Detaching volume b338aabc-085b-43e3-adbc-b55bb8505744
Dec 06 07:57:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:53.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:53 compute-0 ceph-mon[74339]: pgmap v3145: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 6.0 KiB/s wr, 84 op/s
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.152 251996 INFO nova.virt.block_device [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Attempting to driver detach volume b338aabc-085b-43e3-adbc-b55bb8505744 from mountpoint /dev/vdb
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.160 251996 DEBUG nova.virt.libvirt.driver [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Attempting to detach device vdb from instance 3bd60b1c-0294-4922-a147-cf09dadee874 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.161 251996 DEBUG nova.virt.libvirt.guest [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:57:53 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:57:53 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-b338aabc-085b-43e3-adbc-b55bb8505744">
Dec 06 07:57:53 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:53 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:53 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:53 compute-0 nova_compute[251992]:   </source>
Dec 06 07:57:53 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:57:53 compute-0 nova_compute[251992]:   <serial>b338aabc-085b-43e3-adbc-b55bb8505744</serial>
Dec 06 07:57:53 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:57:53 compute-0 nova_compute[251992]: </disk>
Dec 06 07:57:53 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.169 251996 INFO nova.virt.libvirt.driver [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully detached device vdb from instance 3bd60b1c-0294-4922-a147-cf09dadee874 from the persistent domain config.
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.169 251996 DEBUG nova.virt.libvirt.driver [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3bd60b1c-0294-4922-a147-cf09dadee874 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.170 251996 DEBUG nova.virt.libvirt.guest [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:57:53 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:57:53 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-b338aabc-085b-43e3-adbc-b55bb8505744">
Dec 06 07:57:53 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:53 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:53 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:53 compute-0 nova_compute[251992]:   </source>
Dec 06 07:57:53 compute-0 nova_compute[251992]:   <target dev="vdb" bus="virtio"/>
Dec 06 07:57:53 compute-0 nova_compute[251992]:   <serial>b338aabc-085b-43e3-adbc-b55bb8505744</serial>
Dec 06 07:57:53 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 07:57:53 compute-0 nova_compute[251992]: </disk>
Dec 06 07:57:53 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.334 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Received event <DeviceRemovedEvent: 1765007873.3337355, 3bd60b1c-0294-4922-a147-cf09dadee874 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.335 251996 DEBUG nova.virt.libvirt.driver [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3bd60b1c-0294-4922-a147-cf09dadee874 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.337 251996 INFO nova.virt.libvirt.driver [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully detached device vdb from instance 3bd60b1c-0294-4922-a147-cf09dadee874 from the live domain config.
Dec 06 07:57:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 6.0 KiB/s wr, 84 op/s
Dec 06 07:57:53 compute-0 podman[371126]: 2025-12-06 07:57:53.396458904 +0000 UTC m=+0.050724561 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 07:57:53 compute-0 podman[371123]: 2025-12-06 07:57:53.416844354 +0000 UTC m=+0.074095761 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.761 251996 DEBUG nova.objects.instance [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 3bd60b1c-0294-4922-a147-cf09dadee874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.879 251996 DEBUG oslo_concurrency.lockutils [None req-6bfa13e4-804b-4042-a998-4976ff345b82 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.881 251996 DEBUG nova.network.neutron [-] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.905 251996 INFO nova.compute.manager [-] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Took 1.53 seconds to deallocate network for instance.
Dec 06 07:57:53 compute-0 nova_compute[251992]: 2025-12-06 07:57:53.999 251996 DEBUG nova.compute.manager [req-d9d01955-6949-41c5-a6bf-10cccf2f52e2 req-c699201c-1174-49a0-a8f5-15c2256f3da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-vif-deleted-7b695d2a-7c72-4125-a16a-a2d8b4342195 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.311 251996 INFO nova.compute.manager [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Took 0.41 seconds to detach 1 volumes for instance.
Dec 06 07:57:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:57:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:54.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.389 251996 DEBUG nova.compute.manager [req-55162e5d-e11d-451f-af44-a987f916574a req-120bf2e3-93fa-4377-87b2-a2c038b2e965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received event network-vif-plugged-7b695d2a-7c72-4125-a16a-a2d8b4342195 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.390 251996 DEBUG oslo_concurrency.lockutils [req-55162e5d-e11d-451f-af44-a987f916574a req-120bf2e3-93fa-4377-87b2-a2c038b2e965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.390 251996 DEBUG oslo_concurrency.lockutils [req-55162e5d-e11d-451f-af44-a987f916574a req-120bf2e3-93fa-4377-87b2-a2c038b2e965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.390 251996 DEBUG oslo_concurrency.lockutils [req-55162e5d-e11d-451f-af44-a987f916574a req-120bf2e3-93fa-4377-87b2-a2c038b2e965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.390 251996 DEBUG nova.compute.manager [req-55162e5d-e11d-451f-af44-a987f916574a req-120bf2e3-93fa-4377-87b2-a2c038b2e965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] No waiting events found dispatching network-vif-plugged-7b695d2a-7c72-4125-a16a-a2d8b4342195 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.391 251996 WARNING nova.compute.manager [req-55162e5d-e11d-451f-af44-a987f916574a req-120bf2e3-93fa-4377-87b2-a2c038b2e965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Received unexpected event network-vif-plugged-7b695d2a-7c72-4125-a16a-a2d8b4342195 for instance with vm_state active and task_state deleting.
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.534 251996 DEBUG oslo_concurrency.lockutils [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.535 251996 DEBUG oslo_concurrency.lockutils [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.617 251996 DEBUG oslo_concurrency.processutils [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.937 251996 DEBUG oslo_concurrency.lockutils [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.938 251996 DEBUG oslo_concurrency.lockutils [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:54 compute-0 nova_compute[251992]: 2025-12-06 07:57:54.956 251996 INFO nova.compute.manager [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Detaching volume eb0607fb-3d18-4469-8db9-a7f938c59ae5
Dec 06 07:57:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:57:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490756509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:55.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.057 251996 DEBUG oslo_concurrency.processutils [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.062 251996 DEBUG nova.compute.provider_tree [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.078 251996 DEBUG nova.scheduler.client.report [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.104 251996 DEBUG oslo_concurrency.lockutils [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:55 compute-0 ceph-mon[74339]: pgmap v3146: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 6.0 KiB/s wr, 84 op/s
Dec 06 07:57:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/490756509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.133 251996 INFO nova.scheduler.client.report [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Deleted allocations for instance d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.135 251996 INFO nova.virt.block_device [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Attempting to driver detach volume eb0607fb-3d18-4469-8db9-a7f938c59ae5 from mountpoint /dev/vdc
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.143 251996 DEBUG nova.virt.libvirt.driver [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Attempting to detach device vdc from instance 3bd60b1c-0294-4922-a147-cf09dadee874 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.143 251996 DEBUG nova.virt.libvirt.guest [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:57:55 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:57:55 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-eb0607fb-3d18-4469-8db9-a7f938c59ae5">
Dec 06 07:57:55 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:55 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:55 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:55 compute-0 nova_compute[251992]:   </source>
Dec 06 07:57:55 compute-0 nova_compute[251992]:   <target dev="vdc" bus="virtio"/>
Dec 06 07:57:55 compute-0 nova_compute[251992]:   <serial>eb0607fb-3d18-4469-8db9-a7f938c59ae5</serial>
Dec 06 07:57:55 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Dec 06 07:57:55 compute-0 nova_compute[251992]: </disk>
Dec 06 07:57:55 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.149 251996 INFO nova.virt.libvirt.driver [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully detached device vdc from instance 3bd60b1c-0294-4922-a147-cf09dadee874 from the persistent domain config.
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.149 251996 DEBUG nova.virt.libvirt.driver [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 3bd60b1c-0294-4922-a147-cf09dadee874 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.149 251996 DEBUG nova.virt.libvirt.guest [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 07:57:55 compute-0 nova_compute[251992]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 07:57:55 compute-0 nova_compute[251992]:   <source protocol="rbd" name="volumes/volume-eb0607fb-3d18-4469-8db9-a7f938c59ae5">
Dec 06 07:57:55 compute-0 nova_compute[251992]:     <host name="192.168.122.100" port="6789"/>
Dec 06 07:57:55 compute-0 nova_compute[251992]:     <host name="192.168.122.102" port="6789"/>
Dec 06 07:57:55 compute-0 nova_compute[251992]:     <host name="192.168.122.101" port="6789"/>
Dec 06 07:57:55 compute-0 nova_compute[251992]:   </source>
Dec 06 07:57:55 compute-0 nova_compute[251992]:   <target dev="vdc" bus="virtio"/>
Dec 06 07:57:55 compute-0 nova_compute[251992]:   <serial>eb0607fb-3d18-4469-8db9-a7f938c59ae5</serial>
Dec 06 07:57:55 compute-0 nova_compute[251992]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Dec 06 07:57:55 compute-0 nova_compute[251992]: </disk>
Dec 06 07:57:55 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.197 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Received event <DeviceRemovedEvent: 1765007875.1969006, 3bd60b1c-0294-4922-a147-cf09dadee874 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.198 251996 DEBUG nova.virt.libvirt.driver [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 3bd60b1c-0294-4922-a147-cf09dadee874 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.200 251996 INFO nova.virt.libvirt.driver [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully detached device vdc from instance 3bd60b1c-0294-4922-a147-cf09dadee874 from the live domain config.
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.236 251996 DEBUG oslo_concurrency.lockutils [None req-992a334e-303a-4185-b9cc-0d251786faed e685a049c8a74aa8aea831fbdaf2acf8 6164fee998c94b71a37886fe42b4c56c - - default default] Lock "d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 7.1 KiB/s wr, 70 op/s
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.495 251996 DEBUG nova.objects.instance [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'flavor' on Instance uuid 3bd60b1c-0294-4922-a147-cf09dadee874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:55 compute-0 nova_compute[251992]: 2025-12-06 07:57:55.542 251996 DEBUG oslo_concurrency.lockutils [None req-30741e9f-862c-4496-810c-98131dd7e917 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:56.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:56 compute-0 nova_compute[251992]: 2025-12-06 07:57:56.761 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:57:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:57.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.229 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:57 compute-0 ceph-mon[74339]: pgmap v3147: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 7.1 KiB/s wr, 70 op/s
Dec 06 07:57:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 7.2 KiB/s wr, 53 op/s
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.635 251996 DEBUG oslo_concurrency.lockutils [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.635 251996 DEBUG oslo_concurrency.lockutils [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.635 251996 DEBUG oslo_concurrency.lockutils [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.636 251996 DEBUG oslo_concurrency.lockutils [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.636 251996 DEBUG oslo_concurrency.lockutils [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.637 251996 INFO nova.compute.manager [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Terminating instance
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.639 251996 DEBUG nova.compute.manager [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 07:57:57 compute-0 kernel: tap329a0a78-e6 (unregistering): left promiscuous mode
Dec 06 07:57:57 compute-0 NetworkManager[48965]: <info>  [1765007877.6999] device (tap329a0a78-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.706 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:57 compute-0 ovn_controller[147168]: 2025-12-06T07:57:57Z|00676|binding|INFO|Releasing lport 329a0a78-e637-4b44-989e-4bdd16f6ed49 from this chassis (sb_readonly=0)
Dec 06 07:57:57 compute-0 ovn_controller[147168]: 2025-12-06T07:57:57Z|00677|binding|INFO|Setting lport 329a0a78-e637-4b44-989e-4bdd16f6ed49 down in Southbound
Dec 06 07:57:57 compute-0 ovn_controller[147168]: 2025-12-06T07:57:57Z|00678|binding|INFO|Removing iface tap329a0a78-e6 ovn-installed in OVS
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.728 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:57 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b3.scope: Deactivated successfully.
Dec 06 07:57:57 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b3.scope: Consumed 15.788s CPU time.
Dec 06 07:57:57 compute-0 systemd-machined[212986]: Machine qemu-83-instance-000000b3 terminated.
Dec 06 07:57:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:57.829 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:25:5c 10.100.0.7'], port_security=['fa:16:3e:dc:25:5c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3bd60b1c-0294-4922-a147-cf09dadee874', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-997afd36-d3a2-430f-ba34-f342135a9bb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63df107b8bd14504974c75ba92ae469b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5014de6a-a224-455c-b4ae-38ca6350a5a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2999ae76-b414-45fb-8813-4039468da309, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=329a0a78-e637-4b44-989e-4bdd16f6ed49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:57:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:57.830 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 329a0a78-e637-4b44-989e-4bdd16f6ed49 in datapath 997afd36-d3a2-430f-ba34-f342135a9bb6 unbound from our chassis
Dec 06 07:57:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:57.832 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 997afd36-d3a2-430f-ba34-f342135a9bb6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:57:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:57.833 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[737fdbe3-5c3a-4fbc-b826-b3a8dbae3f22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:57.834 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 namespace which is not needed anymore
Dec 06 07:57:57 compute-0 kernel: tap329a0a78-e6: entered promiscuous mode
Dec 06 07:57:57 compute-0 kernel: tap329a0a78-e6 (unregistering): left promiscuous mode
Dec 06 07:57:57 compute-0 NetworkManager[48965]: <info>  [1765007877.8653] manager: (tap329a0a78-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/306)
Dec 06 07:57:57 compute-0 ovn_controller[147168]: 2025-12-06T07:57:57Z|00679|binding|INFO|Claiming lport 329a0a78-e637-4b44-989e-4bdd16f6ed49 for this chassis.
Dec 06 07:57:57 compute-0 ovn_controller[147168]: 2025-12-06T07:57:57Z|00680|binding|INFO|329a0a78-e637-4b44-989e-4bdd16f6ed49: Claiming fa:16:3e:dc:25:5c 10.100.0.7
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.866 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.878 251996 INFO nova.virt.libvirt.driver [-] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Instance destroyed successfully.
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.879 251996 DEBUG nova.objects.instance [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lazy-loading 'resources' on Instance uuid 3bd60b1c-0294-4922-a147-cf09dadee874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:57:57 compute-0 ovn_controller[147168]: 2025-12-06T07:57:57Z|00681|if_status|INFO|Dropped 4 log messages in last 592 seconds (most recently, 592 seconds ago) due to excessive rate
Dec 06 07:57:57 compute-0 ovn_controller[147168]: 2025-12-06T07:57:57Z|00682|if_status|INFO|Not setting lport 329a0a78-e637-4b44-989e-4bdd16f6ed49 down as sb is readonly
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.882 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.884 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:57 compute-0 ovn_controller[147168]: 2025-12-06T07:57:57Z|00683|binding|INFO|Releasing lport 329a0a78-e637-4b44-989e-4bdd16f6ed49 from this chassis (sb_readonly=0)
Dec 06 07:57:57 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:57.964 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:25:5c 10.100.0.7'], port_security=['fa:16:3e:dc:25:5c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3bd60b1c-0294-4922-a147-cf09dadee874', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-997afd36-d3a2-430f-ba34-f342135a9bb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63df107b8bd14504974c75ba92ae469b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5014de6a-a224-455c-b4ae-38ca6350a5a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2999ae76-b414-45fb-8813-4039468da309, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=329a0a78-e637-4b44-989e-4bdd16f6ed49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:57:57 compute-0 nova_compute[251992]: 2025-12-06 07:57:57.977 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:57 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[369652]: [NOTICE]   (369656) : haproxy version is 2.8.14-c23fe91
Dec 06 07:57:57 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[369652]: [NOTICE]   (369656) : path to executable is /usr/sbin/haproxy
Dec 06 07:57:57 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[369652]: [WARNING]  (369656) : Exiting Master process...
Dec 06 07:57:57 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[369652]: [ALERT]    (369656) : Current worker (369658) exited with code 143 (Terminated)
Dec 06 07:57:57 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[369652]: [WARNING]  (369656) : All workers exited. Exiting... (0)
Dec 06 07:57:57 compute-0 systemd[1]: libpod-30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26.scope: Deactivated successfully.
Dec 06 07:57:58 compute-0 podman[371223]: 2025-12-06 07:57:58.002446721 +0000 UTC m=+0.052724174 container died 30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 07:57:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26-userdata-shm.mount: Deactivated successfully.
Dec 06 07:57:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-532c1da0058e670efc2a752ed53c346f751849cdcaa93b91067952a6794214bb-merged.mount: Deactivated successfully.
Dec 06 07:57:58 compute-0 podman[371223]: 2025-12-06 07:57:58.044068244 +0000 UTC m=+0.094345707 container cleanup 30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 07:57:58 compute-0 systemd[1]: libpod-conmon-30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26.scope: Deactivated successfully.
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.090 251996 DEBUG nova.virt.libvirt.vif [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:56:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeTestJSON-server-768689492',display_name='tempest-AttachVolumeTestJSON-server-768689492',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumetestjson-server-768689492',id=179,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIJHeet4uvwdibuA5GRHZPmpIh4XCBgdCXAm7X7BkTb0rRuySFdQbhvNZDJ8IsfUOC1nBB4/Mjg31cISQt/m+PbsNHVcX+U/71BUHefGJy1lvnsWPTUZWre4hlUR2ABa6g==',key_name='tempest-keypair-493356033',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:57:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63df107b8bd14504974c75ba92ae469b',ramdisk_id='',reservation_id='r-g6enkhu0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeTestJSON-950214889',owner_user_name='tempest-AttachVolumeTestJSON-950214889-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:57:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0ce6d0a8def6432aa60891ea00ef9d8b',uuid=3bd60b1c-0294-4922-a147-cf09dadee874,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "address": "fa:16:3e:dc:25:5c", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329a0a78-e6", "ovs_interfaceid": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.091 251996 DEBUG nova.network.os_vif_util [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converting VIF {"id": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "address": "fa:16:3e:dc:25:5c", "network": {"id": "997afd36-d3a2-430f-ba34-f342135a9bb6", "bridge": "br-int", "label": "tempest-AttachVolumeTestJSON-1971011215-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63df107b8bd14504974c75ba92ae469b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap329a0a78-e6", "ovs_interfaceid": "329a0a78-e637-4b44-989e-4bdd16f6ed49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.092 251996 DEBUG nova.network.os_vif_util [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:25:5c,bridge_name='br-int',has_traffic_filtering=True,id=329a0a78-e637-4b44-989e-4bdd16f6ed49,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329a0a78-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.093 251996 DEBUG os_vif [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:25:5c,bridge_name='br-int',has_traffic_filtering=True,id=329a0a78-e637-4b44-989e-4bdd16f6ed49,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329a0a78-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.097 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.097 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap329a0a78-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.100 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.103 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:57:58 compute-0 podman[371250]: 2025-12-06 07:57:58.105664676 +0000 UTC m=+0.042338504 container remove 30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.106 251996 INFO os_vif [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:25:5c,bridge_name='br-int',has_traffic_filtering=True,id=329a0a78-e637-4b44-989e-4bdd16f6ed49,network=Network(997afd36-d3a2-430f-ba34-f342135a9bb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap329a0a78-e6')
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.111 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb65698-df28-4f8b-8298-e8cdc3dd0f80]: (4, ('Sat Dec  6 07:57:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 (30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26)\n30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26\nSat Dec  6 07:57:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 (30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26)\n30185eb3c2b4b8a686136cdf5464cc4c94dfb85c0c5c94ebd0ac5981fef62d26\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.112 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3636c572-0875-4fd7-9236-2935267e38a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.113 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap997afd36-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:58 compute-0 kernel: tap997afd36-d0: left promiscuous mode
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.121 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[146224c2-1463-48e9-93fc-544b172be0df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.130 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.142 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[85270827-20dc-49cc-bf70-bfa361f1b49e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.143 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bbba8dd3-b3d2-4496-9b30-dfb56811ad5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.161 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[70c4c91e-01d6-4de0-8214-a4d486eb2162]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809561, 'reachable_time': 44769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371281, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.163 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.164 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8fd3fa-a6d6-4c14-af5f-603044be0f47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.164 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 329a0a78-e637-4b44-989e-4bdd16f6ed49 in datapath 997afd36-d3a2-430f-ba34-f342135a9bb6 bound to our chassis
Dec 06 07:57:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d997afd36\x2dd3a2\x2d430f\x2dba34\x2df342135a9bb6.mount: Deactivated successfully.
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.166 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.175 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:25:5c 10.100.0.7'], port_security=['fa:16:3e:dc:25:5c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3bd60b1c-0294-4922-a147-cf09dadee874', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-997afd36-d3a2-430f-ba34-f342135a9bb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63df107b8bd14504974c75ba92ae469b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5014de6a-a224-455c-b4ae-38ca6350a5a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2999ae76-b414-45fb-8813-4039468da309, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=329a0a78-e637-4b44-989e-4bdd16f6ed49) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.177 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7db79cde-2d1f-4d2f-a437-32ef4e4d46f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.178 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap997afd36-d1 in ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.180 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap997afd36-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.180 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f0b6d9-31db-4228-8fdb-94364da31c93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.180 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a359a9d1-a1c3-433e-a802-d2d9218f0171]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.193 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[679f40a0-347e-4553-ac11-6f10159cb3a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.206 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a9243069-9469-4dbf-b86f-f34e44b99dd9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.240 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[f67bba83-18a4-4ad3-965b-97d331e8fc93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1565005430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1758045960' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:57:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1758045960' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:57:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/998417130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.253 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b0f164-d5ad-45ff-8c34-0faa5cfa0fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 NetworkManager[48965]: <info>  [1765007878.2545] manager: (tap997afd36-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/307)
Dec 06 07:57:58 compute-0 systemd-udevd[371195]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.305 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[013ec9d6-8747-4938-9484-31d75aefbdd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.308 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d461ba82-ea2a-4145-8021-c24080976e9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 NetworkManager[48965]: <info>  [1765007878.3324] device (tap997afd36-d0): carrier: link connected
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.341 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[30bd5ddc-3269-44ee-b790-0849ace67806]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:57:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:57:58.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.362 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd34b98-978f-4622-b5a7-b7cfe247c7d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap997afd36-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:8b:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815092, 'reachable_time': 26381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371311, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.381 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[813cefde-d50a-4a9f-a9bf-6f1558e0955d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:8b72'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 815092, 'tstamp': 815092}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371312, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.400 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7b207298-87f7-4592-bb0a-e635ab89a71e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap997afd36-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:8b:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815092, 'reachable_time': 26381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371313, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.434 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5f75a5c9-5050-48d1-98d8-47826faca072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.479 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1137f3de-4ae6-47bb-b813-ac667883c8ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.480 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap997afd36-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.481 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.481 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap997afd36-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.483 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:58 compute-0 kernel: tap997afd36-d0: entered promiscuous mode
Dec 06 07:57:58 compute-0 NetworkManager[48965]: <info>  [1765007878.4837] manager: (tap997afd36-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.486 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap997afd36-d0, col_values=(('external_ids', {'iface-id': '904065d3-3080-49e2-8707-2794a4ba4e6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:58 compute-0 ovn_controller[147168]: 2025-12-06T07:57:58Z|00684|binding|INFO|Releasing lport 904065d3-3080-49e2-8707-2794a4ba4e6e from this chassis (sb_readonly=0)
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.503 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.504 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.504 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[184978c1-ffcd-4a4b-b9b1-951d936d0a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.505 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/997afd36-d3a2-430f-ba34-f342135a9bb6.pid.haproxy
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 997afd36-d3a2-430f-ba34-f342135a9bb6
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.506 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'env', 'PROCESS_TAG=haproxy-997afd36-d3a2-430f-ba34-f342135a9bb6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/997afd36-d3a2-430f-ba34-f342135a9bb6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.593 251996 DEBUG nova.compute.manager [req-a4802044-b770-4468-93f6-519949db84e6 req-b6518b36-6d71-4bf8-bb2b-65b435dc3b7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Received event network-vif-unplugged-329a0a78-e637-4b44-989e-4bdd16f6ed49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.593 251996 DEBUG oslo_concurrency.lockutils [req-a4802044-b770-4468-93f6-519949db84e6 req-b6518b36-6d71-4bf8-bb2b-65b435dc3b7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.594 251996 DEBUG oslo_concurrency.lockutils [req-a4802044-b770-4468-93f6-519949db84e6 req-b6518b36-6d71-4bf8-bb2b-65b435dc3b7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.594 251996 DEBUG oslo_concurrency.lockutils [req-a4802044-b770-4468-93f6-519949db84e6 req-b6518b36-6d71-4bf8-bb2b-65b435dc3b7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.594 251996 DEBUG nova.compute.manager [req-a4802044-b770-4468-93f6-519949db84e6 req-b6518b36-6d71-4bf8-bb2b-65b435dc3b7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] No waiting events found dispatching network-vif-unplugged-329a0a78-e637-4b44-989e-4bdd16f6ed49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:57:58 compute-0 nova_compute[251992]: 2025-12-06 07:57:58.595 251996 DEBUG nova.compute.manager [req-a4802044-b770-4468-93f6-519949db84e6 req-b6518b36-6d71-4bf8-bb2b-65b435dc3b7f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Received event network-vif-unplugged-329a0a78-e637-4b44-989e-4bdd16f6ed49 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 07:57:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:57:58 compute-0 podman[371343]: 2025-12-06 07:57:58.846508366 +0000 UTC m=+0.054050229 container create dff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:57:58 compute-0 systemd[1]: Started libpod-conmon-dff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9.scope.
Dec 06 07:57:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:57:58 compute-0 podman[371343]: 2025-12-06 07:57:58.821188013 +0000 UTC m=+0.028729876 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886305be0b2a10c9c2d80f27a6074b5d2ebc4c8ffcbc06d0b0032403e84b4f3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:57:58 compute-0 podman[371343]: 2025-12-06 07:57:58.931287684 +0000 UTC m=+0.138829537 container init dff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:57:58 compute-0 podman[371343]: 2025-12-06 07:57:58.94077293 +0000 UTC m=+0.148314773 container start dff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:57:58 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[371359]: [NOTICE]   (371363) : New worker (371365) forked
Dec 06 07:57:58 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[371359]: [NOTICE]   (371363) : Loading success.
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.990 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 329a0a78-e637-4b44-989e-4bdd16f6ed49 in datapath 997afd36-d3a2-430f-ba34-f342135a9bb6 unbound from our chassis
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.994 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 997afd36-d3a2-430f-ba34-f342135a9bb6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.995 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[42b5a5ce-7444-412f-898b-e71e09886431]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:58.996 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 namespace which is not needed anymore
Dec 06 07:57:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:57:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:57:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:57:59.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:57:59 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[371359]: [NOTICE]   (371363) : haproxy version is 2.8.14-c23fe91
Dec 06 07:57:59 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[371359]: [NOTICE]   (371363) : path to executable is /usr/sbin/haproxy
Dec 06 07:57:59 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[371359]: [WARNING]  (371363) : Exiting Master process...
Dec 06 07:57:59 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[371359]: [ALERT]    (371363) : Current worker (371365) exited with code 143 (Terminated)
Dec 06 07:57:59 compute-0 neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6[371359]: [WARNING]  (371363) : All workers exited. Exiting... (0)
Dec 06 07:57:59 compute-0 systemd[1]: libpod-dff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9.scope: Deactivated successfully.
Dec 06 07:57:59 compute-0 podman[371393]: 2025-12-06 07:57:59.166587563 +0000 UTC m=+0.048733035 container died dff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 07:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9-userdata-shm.mount: Deactivated successfully.
Dec 06 07:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-886305be0b2a10c9c2d80f27a6074b5d2ebc4c8ffcbc06d0b0032403e84b4f3a-merged.mount: Deactivated successfully.
Dec 06 07:57:59 compute-0 podman[371393]: 2025-12-06 07:57:59.205057591 +0000 UTC m=+0.087203043 container cleanup dff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 07:57:59 compute-0 systemd[1]: libpod-conmon-dff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9.scope: Deactivated successfully.
Dec 06 07:57:59 compute-0 podman[371424]: 2025-12-06 07:57:59.264259389 +0000 UTC m=+0.036296721 container remove dff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 07:57:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:59.270 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5a358e42-b8f5-4763-87b8-076d30b4cce0]: (4, ('Sat Dec  6 07:57:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 (dff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9)\ndff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9\nSat Dec  6 07:57:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 (dff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9)\ndff6a8bf01dc2e7b081f92d0e571571810fc167ebf6560898bf7a7d081e654e9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:59.272 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d86def-98cb-4b39-9d31-64e50b5c2cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:59.274 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap997afd36-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:57:59 compute-0 nova_compute[251992]: 2025-12-06 07:57:59.276 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:59 compute-0 kernel: tap997afd36-d0: left promiscuous mode
Dec 06 07:57:59 compute-0 nova_compute[251992]: 2025-12-06 07:57:59.288 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:57:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:59.291 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a953555c-f3b6-4cba-bb6b-6a120b5d4977]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:59.306 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[24be4815-8274-458f-a9d3-91f3b67d67ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:59.308 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1711a8-9eda-47ea-bd78-0c287ba10096]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:59.328 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f20c70a2-2812-4d6e-9ab2-b4960a35adba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815082, 'reachable_time': 20321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371439, 'error': None, 'target': 'ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:59.330 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-997afd36-d3a2-430f-ba34-f342135a9bb6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 07:57:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:57:59.331 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[656458ac-b750-4b0b-91db-353569fd608e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:57:59 compute-0 systemd[1]: run-netns-ovnmeta\x2d997afd36\x2dd3a2\x2d430f\x2dba34\x2df342135a9bb6.mount: Deactivated successfully.
Dec 06 07:57:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 528 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.4 MiB/s wr, 103 op/s
Dec 06 07:57:59 compute-0 ceph-mon[74339]: pgmap v3148: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 7.2 KiB/s wr, 53 op/s
Dec 06 07:58:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:58:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:00.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:58:00 compute-0 nova_compute[251992]: 2025-12-06 07:58:00.794 251996 DEBUG nova.compute.manager [req-38b64351-6210-4cff-a0ce-a4366c9a6ec4 req-49127b48-a51f-4f4c-af8b-95fbae3c243e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Received event network-vif-plugged-329a0a78-e637-4b44-989e-4bdd16f6ed49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:58:00 compute-0 nova_compute[251992]: 2025-12-06 07:58:00.795 251996 DEBUG oslo_concurrency.lockutils [req-38b64351-6210-4cff-a0ce-a4366c9a6ec4 req-49127b48-a51f-4f4c-af8b-95fbae3c243e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:00 compute-0 nova_compute[251992]: 2025-12-06 07:58:00.795 251996 DEBUG oslo_concurrency.lockutils [req-38b64351-6210-4cff-a0ce-a4366c9a6ec4 req-49127b48-a51f-4f4c-af8b-95fbae3c243e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:00 compute-0 nova_compute[251992]: 2025-12-06 07:58:00.795 251996 DEBUG oslo_concurrency.lockutils [req-38b64351-6210-4cff-a0ce-a4366c9a6ec4 req-49127b48-a51f-4f4c-af8b-95fbae3c243e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:00 compute-0 nova_compute[251992]: 2025-12-06 07:58:00.796 251996 DEBUG nova.compute.manager [req-38b64351-6210-4cff-a0ce-a4366c9a6ec4 req-49127b48-a51f-4f4c-af8b-95fbae3c243e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] No waiting events found dispatching network-vif-plugged-329a0a78-e637-4b44-989e-4bdd16f6ed49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:58:00 compute-0 nova_compute[251992]: 2025-12-06 07:58:00.796 251996 WARNING nova.compute.manager [req-38b64351-6210-4cff-a0ce-a4366c9a6ec4 req-49127b48-a51f-4f4c-af8b-95fbae3c243e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Received unexpected event network-vif-plugged-329a0a78-e637-4b44-989e-4bdd16f6ed49 for instance with vm_state active and task_state deleting.
Dec 06 07:58:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:01.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:01 compute-0 ceph-mon[74339]: pgmap v3149: 305 pgs: 305 active+clean; 528 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.4 MiB/s wr, 103 op/s
Dec 06 07:58:01 compute-0 nova_compute[251992]: 2025-12-06 07:58:01.259 251996 INFO nova.virt.libvirt.driver [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Deleting instance files /var/lib/nova/instances/3bd60b1c-0294-4922-a147-cf09dadee874_del
Dec 06 07:58:01 compute-0 nova_compute[251992]: 2025-12-06 07:58:01.260 251996 INFO nova.virt.libvirt.driver [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Deletion of /var/lib/nova/instances/3bd60b1c-0294-4922-a147-cf09dadee874_del complete
Dec 06 07:58:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 461 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 141 op/s
Dec 06 07:58:02 compute-0 nova_compute[251992]: 2025-12-06 07:58:02.159 251996 INFO nova.compute.manager [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Took 4.52 seconds to destroy the instance on the hypervisor.
Dec 06 07:58:02 compute-0 nova_compute[251992]: 2025-12-06 07:58:02.160 251996 DEBUG oslo.service.loopingcall [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 07:58:02 compute-0 nova_compute[251992]: 2025-12-06 07:58:02.160 251996 DEBUG nova.compute.manager [-] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 07:58:02 compute-0 nova_compute[251992]: 2025-12-06 07:58:02.160 251996 DEBUG nova.network.neutron [-] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 07:58:02 compute-0 nova_compute[251992]: 2025-12-06 07:58:02.235 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:58:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:02.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:58:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:58:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2165652783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:58:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2165652783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:58:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:03.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:58:03 compute-0 nova_compute[251992]: 2025-12-06 07:58:03.100 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 461 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 128 op/s
Dec 06 07:58:03 compute-0 ceph-mon[74339]: pgmap v3150: 305 pgs: 305 active+clean; 461 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 141 op/s
Dec 06 07:58:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2165652783' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2165652783' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:58:03.869 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:58:03.870 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:58:03.870 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:04 compute-0 nova_compute[251992]: 2025-12-06 07:58:04.311 251996 DEBUG nova.network.neutron [-] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:58:04 compute-0 nova_compute[251992]: 2025-12-06 07:58:04.325 251996 DEBUG nova.compute.manager [req-3dfcb5a5-63be-4adb-a2bb-ae815265c459 req-9d792024-36b5-4296-935b-d68d5206ea98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Received event network-vif-deleted-329a0a78-e637-4b44-989e-4bdd16f6ed49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:58:04 compute-0 nova_compute[251992]: 2025-12-06 07:58:04.325 251996 INFO nova.compute.manager [req-3dfcb5a5-63be-4adb-a2bb-ae815265c459 req-9d792024-36b5-4296-935b-d68d5206ea98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Neutron deleted interface 329a0a78-e637-4b44-989e-4bdd16f6ed49; detaching it from the instance and deleting it from the info cache
Dec 06 07:58:04 compute-0 nova_compute[251992]: 2025-12-06 07:58:04.326 251996 DEBUG nova.network.neutron [req-3dfcb5a5-63be-4adb-a2bb-ae815265c459 req-9d792024-36b5-4296-935b-d68d5206ea98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:58:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 07:58:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:04.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 07:58:04 compute-0 nova_compute[251992]: 2025-12-06 07:58:04.471 251996 INFO nova.compute.manager [-] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Took 2.31 seconds to deallocate network for instance.
Dec 06 07:58:04 compute-0 nova_compute[251992]: 2025-12-06 07:58:04.478 251996 DEBUG nova.compute.manager [req-3dfcb5a5-63be-4adb-a2bb-ae815265c459 req-9d792024-36b5-4296-935b-d68d5206ea98 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Detach interface failed, port_id=329a0a78-e637-4b44-989e-4bdd16f6ed49, reason: Instance 3bd60b1c-0294-4922-a147-cf09dadee874 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 06 07:58:04 compute-0 nova_compute[251992]: 2025-12-06 07:58:04.607 251996 DEBUG oslo_concurrency.lockutils [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:04 compute-0 nova_compute[251992]: 2025-12-06 07:58:04.608 251996 DEBUG oslo_concurrency.lockutils [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:04 compute-0 nova_compute[251992]: 2025-12-06 07:58:04.657 251996 DEBUG oslo_concurrency.processutils [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:58:04 compute-0 ceph-mon[74339]: pgmap v3151: 305 pgs: 305 active+clean; 461 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 128 op/s
Dec 06 07:58:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:05.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:58:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239946977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:05 compute-0 nova_compute[251992]: 2025-12-06 07:58:05.158 251996 DEBUG oslo_concurrency.processutils [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:58:05 compute-0 nova_compute[251992]: 2025-12-06 07:58:05.166 251996 DEBUG nova.compute.provider_tree [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:58:05 compute-0 nova_compute[251992]: 2025-12-06 07:58:05.348 251996 DEBUG nova.scheduler.client.report [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:58:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 378 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 198 op/s
Dec 06 07:58:05 compute-0 nova_compute[251992]: 2025-12-06 07:58:05.404 251996 DEBUG oslo_concurrency.lockutils [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:05 compute-0 nova_compute[251992]: 2025-12-06 07:58:05.506 251996 INFO nova.scheduler.client.report [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Deleted allocations for instance 3bd60b1c-0294-4922-a147-cf09dadee874
Dec 06 07:58:05 compute-0 nova_compute[251992]: 2025-12-06 07:58:05.756 251996 DEBUG oslo_concurrency.lockutils [None req-fcb06dcd-edeb-41d9-b4e1-444a9a45841d 0ce6d0a8def6432aa60891ea00ef9d8b 63df107b8bd14504974c75ba92ae469b - - default default] Lock "3bd60b1c-0294-4922-a147-cf09dadee874" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Dec 06 07:58:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:06.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:06 compute-0 nova_compute[251992]: 2025-12-06 07:58:06.737 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007871.7357693, d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:58:06 compute-0 nova_compute[251992]: 2025-12-06 07:58:06.738 251996 INFO nova.compute.manager [-] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] VM Stopped (Lifecycle Event)
Dec 06 07:58:06 compute-0 sudo[371467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:06 compute-0 sudo[371467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:06 compute-0 sudo[371467]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:06 compute-0 nova_compute[251992]: 2025-12-06 07:58:06.776 251996 DEBUG nova.compute.manager [None req-3df989d5-2343-4b70-82ae-4665cfb8990c - - - - - -] [instance: d7cce8d1-30dc-4c3c-8e97-c0e0cf1aaad2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:58:06 compute-0 sudo[371492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:06 compute-0 sudo[371492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:06 compute-0 sudo[371492]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:07.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:07 compute-0 nova_compute[251992]: 2025-12-06 07:58:07.238 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 378 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Dec 06 07:58:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/239946977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:08 compute-0 nova_compute[251992]: 2025-12-06 07:58:08.153 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:08.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Dec 06 07:58:08 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Dec 06 07:58:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:09.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:09 compute-0 ceph-mon[74339]: pgmap v3152: 305 pgs: 305 active+clean; 378 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 198 op/s
Dec 06 07:58:09 compute-0 ceph-mon[74339]: pgmap v3153: 305 pgs: 305 active+clean; 378 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Dec 06 07:58:09 compute-0 ceph-mon[74339]: osdmap e405: 3 total, 3 up, 3 in
Dec 06 07:58:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 378 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.0 MiB/s wr, 203 op/s
Dec 06 07:58:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:10.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:11.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 330 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 17 KiB/s wr, 146 op/s
Dec 06 07:58:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/362523394' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/362523394' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:12 compute-0 ceph-mon[74339]: pgmap v3155: 305 pgs: 305 active+clean; 378 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.0 MiB/s wr, 203 op/s
Dec 06 07:58:12 compute-0 nova_compute[251992]: 2025-12-06 07:58:12.241 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:12.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:12 compute-0 nova_compute[251992]: 2025-12-06 07:58:12.878 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765007877.8766124, 3bd60b1c-0294-4922-a147-cf09dadee874 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:58:12 compute-0 nova_compute[251992]: 2025-12-06 07:58:12.878 251996 INFO nova.compute.manager [-] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] VM Stopped (Lifecycle Event)
Dec 06 07:58:12 compute-0 nova_compute[251992]: 2025-12-06 07:58:12.905 251996 DEBUG nova.compute.manager [None req-1ae80369-2f7d-4573-9499-5b39a488eb7d - - - - - -] [instance: 3bd60b1c-0294-4922-a147-cf09dadee874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:58:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:58:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:58:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:58:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:58:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:58:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:58:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:13.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:13 compute-0 nova_compute[251992]: 2025-12-06 07:58:13.155 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 330 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 17 KiB/s wr, 146 op/s
Dec 06 07:58:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 07:58:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 52K writes, 191K keys, 52K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 52K writes, 19K syncs, 2.64 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6236 writes, 23K keys, 6236 commit groups, 1.0 writes per commit group, ingest: 27.26 MB, 0.05 MB/s
                                           Interval WAL: 6236 writes, 2474 syncs, 2.52 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 07:58:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:14.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:14 compute-0 ceph-mon[74339]: pgmap v3156: 305 pgs: 305 active+clean; 330 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 17 KiB/s wr, 146 op/s
Dec 06 07:58:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:15.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 299 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 KiB/s wr, 86 op/s
Dec 06 07:58:15 compute-0 ceph-mon[74339]: pgmap v3157: 305 pgs: 305 active+clean; 330 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 17 KiB/s wr, 146 op/s
Dec 06 07:58:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3384756515' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3384756515' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:16.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:17 compute-0 sudo[371522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:17.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:17 compute-0 sudo[371522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:17 compute-0 sudo[371522]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:17 compute-0 sudo[371554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:58:17 compute-0 sudo[371554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:17 compute-0 sudo[371554]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:17 compute-0 podman[371547]: 2025-12-06 07:58:17.238843223 +0000 UTC m=+0.149467224 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 06 07:58:17 compute-0 nova_compute[251992]: 2025-12-06 07:58:17.242 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:17 compute-0 sudo[371596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:17 compute-0 sudo[371596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:17 compute-0 sudo[371596]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:17 compute-0 sudo[371623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:58:17 compute-0 sudo[371623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 272 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 791 KiB/s rd, 16 KiB/s wr, 80 op/s
Dec 06 07:58:17 compute-0 sudo[371623]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:18 compute-0 nova_compute[251992]: 2025-12-06 07:58:18.156 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:18 compute-0 ceph-mon[74339]: pgmap v3158: 305 pgs: 305 active+clean; 299 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 KiB/s wr, 86 op/s
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 07:58:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2071960946' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 07:58:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2071960946' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:58:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:18.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 07:58:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 07:58:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:58:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:58:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:58:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:18 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 796190d1-714b-4318-b24d-7d64054fcb91 does not exist
Dec 06 07:58:18 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 21f86775-0c97-4b0c-9c3c-52f4f86392f5 does not exist
Dec 06 07:58:18 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1a5510f5-acc3-4235-9687-db3a60660e1b does not exist
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:58:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:58:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:58:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:58:18 compute-0 sudo[371679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:18 compute-0 sudo[371679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:18 compute-0 sudo[371679]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:58:18
Dec 06 07:58:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:58:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:58:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'backups']
Dec 06 07:58:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:58:18 compute-0 sudo[371704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:58:18 compute-0 sudo[371704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:18 compute-0 sudo[371704]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:18 compute-0 sudo[371729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:18 compute-0 sudo[371729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:18 compute-0 sudo[371729]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:18 compute-0 sudo[371754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:58:18 compute-0 sudo[371754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Dec 06 07:58:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Dec 06 07:58:18 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Dec 06 07:58:18 compute-0 podman[371818]: 2025-12-06 07:58:18.967921642 +0000 UTC m=+0.035488199 container create fc0d733ba04477aa40c6bc82c947c5feb4fc5fd776c3729eb838b506a511be25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatelet, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:58:18 compute-0 systemd[1]: Started libpod-conmon-fc0d733ba04477aa40c6bc82c947c5feb4fc5fd776c3729eb838b506a511be25.scope.
Dec 06 07:58:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:58:19 compute-0 podman[371818]: 2025-12-06 07:58:19.026724969 +0000 UTC m=+0.094291546 container init fc0d733ba04477aa40c6bc82c947c5feb4fc5fd776c3729eb838b506a511be25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 07:58:19 compute-0 podman[371818]: 2025-12-06 07:58:19.033216023 +0000 UTC m=+0.100782580 container start fc0d733ba04477aa40c6bc82c947c5feb4fc5fd776c3729eb838b506a511be25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 07:58:19 compute-0 podman[371818]: 2025-12-06 07:58:19.035927677 +0000 UTC m=+0.103494254 container attach fc0d733ba04477aa40c6bc82c947c5feb4fc5fd776c3729eb838b506a511be25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatelet, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:58:19 compute-0 charming_chatelet[371834]: 167 167
Dec 06 07:58:19 compute-0 systemd[1]: libpod-fc0d733ba04477aa40c6bc82c947c5feb4fc5fd776c3729eb838b506a511be25.scope: Deactivated successfully.
Dec 06 07:58:19 compute-0 podman[371818]: 2025-12-06 07:58:19.040557542 +0000 UTC m=+0.108124109 container died fc0d733ba04477aa40c6bc82c947c5feb4fc5fd776c3729eb838b506a511be25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:58:19 compute-0 podman[371818]: 2025-12-06 07:58:18.95304629 +0000 UTC m=+0.020612877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:58:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-954305b829d8f6192f401e2e1f1240430e40d7532e00fef432a3b93b7081d645-merged.mount: Deactivated successfully.
Dec 06 07:58:19 compute-0 podman[371818]: 2025-12-06 07:58:19.076522541 +0000 UTC m=+0.144089098 container remove fc0d733ba04477aa40c6bc82c947c5feb4fc5fd776c3729eb838b506a511be25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:58:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:19.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:19 compute-0 systemd[1]: libpod-conmon-fc0d733ba04477aa40c6bc82c947c5feb4fc5fd776c3729eb838b506a511be25.scope: Deactivated successfully.
Dec 06 07:58:19 compute-0 podman[371859]: 2025-12-06 07:58:19.216764367 +0000 UTC m=+0.037567676 container create 20f4b09b94adf4633af53da782f0f44c9266dc7e6550eb8146661a859782b908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:58:19 compute-0 systemd[1]: Started libpod-conmon-20f4b09b94adf4633af53da782f0f44c9266dc7e6550eb8146661a859782b908.scope.
Dec 06 07:58:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:58:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b22ab1cf5fdcf3a4fbf63f2978c560e7658e2c81e45d54f6470078bbba9d45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b22ab1cf5fdcf3a4fbf63f2978c560e7658e2c81e45d54f6470078bbba9d45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b22ab1cf5fdcf3a4fbf63f2978c560e7658e2c81e45d54f6470078bbba9d45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b22ab1cf5fdcf3a4fbf63f2978c560e7658e2c81e45d54f6470078bbba9d45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b22ab1cf5fdcf3a4fbf63f2978c560e7658e2c81e45d54f6470078bbba9d45/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:19 compute-0 podman[371859]: 2025-12-06 07:58:19.281423141 +0000 UTC m=+0.102226440 container init 20f4b09b94adf4633af53da782f0f44c9266dc7e6550eb8146661a859782b908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wing, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:58:19 compute-0 podman[371859]: 2025-12-06 07:58:19.287593358 +0000 UTC m=+0.108396667 container start 20f4b09b94adf4633af53da782f0f44c9266dc7e6550eb8146661a859782b908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:58:19 compute-0 podman[371859]: 2025-12-06 07:58:19.290804854 +0000 UTC m=+0.111608173 container attach 20f4b09b94adf4633af53da782f0f44c9266dc7e6550eb8146661a859782b908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec 06 07:58:19 compute-0 podman[371859]: 2025-12-06 07:58:19.200631051 +0000 UTC m=+0.021434370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:58:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 236 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 16 KiB/s wr, 71 op/s
Dec 06 07:58:20 compute-0 infallible_wing[371876]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:58:20 compute-0 infallible_wing[371876]: --> relative data size: 1.0
Dec 06 07:58:20 compute-0 infallible_wing[371876]: --> All data devices are unavailable
Dec 06 07:58:20 compute-0 systemd[1]: libpod-20f4b09b94adf4633af53da782f0f44c9266dc7e6550eb8146661a859782b908.scope: Deactivated successfully.
Dec 06 07:58:20 compute-0 podman[371859]: 2025-12-06 07:58:20.126832463 +0000 UTC m=+0.947635762 container died 20f4b09b94adf4633af53da782f0f44c9266dc7e6550eb8146661a859782b908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 06 07:58:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-17b22ab1cf5fdcf3a4fbf63f2978c560e7658e2c81e45d54f6470078bbba9d45-merged.mount: Deactivated successfully.
Dec 06 07:58:20 compute-0 podman[371859]: 2025-12-06 07:58:20.179208487 +0000 UTC m=+1.000011786 container remove 20f4b09b94adf4633af53da782f0f44c9266dc7e6550eb8146661a859782b908 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wing, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:58:20 compute-0 systemd[1]: libpod-conmon-20f4b09b94adf4633af53da782f0f44c9266dc7e6550eb8146661a859782b908.scope: Deactivated successfully.
Dec 06 07:58:20 compute-0 sudo[371754]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:20 compute-0 sudo[371904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:20 compute-0 sudo[371904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:20 compute-0 sudo[371904]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:20 compute-0 sudo[371929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:58:20 compute-0 sudo[371929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:20 compute-0 sudo[371929]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:20 compute-0 ceph-mon[74339]: pgmap v3159: 305 pgs: 305 active+clean; 272 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 791 KiB/s rd, 16 KiB/s wr, 80 op/s
Dec 06 07:58:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2071960946' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2071960946' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:58:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:58:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:58:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:58:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:58:20 compute-0 ceph-mon[74339]: osdmap e406: 3 total, 3 up, 3 in
Dec 06 07:58:20 compute-0 sudo[371954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:20 compute-0 sudo[371954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:20 compute-0 sudo[371954]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:20.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:20 compute-0 sudo[371979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:58:20 compute-0 sudo[371979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:20 compute-0 podman[372042]: 2025-12-06 07:58:20.784216752 +0000 UTC m=+0.055560160 container create 624dc4919daddcfbff0cfec63960e224c01cf4df74f40c1719f59466e4f073a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bardeen, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 07:58:20 compute-0 systemd[1]: Started libpod-conmon-624dc4919daddcfbff0cfec63960e224c01cf4df74f40c1719f59466e4f073a5.scope.
Dec 06 07:58:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:58:20 compute-0 podman[372042]: 2025-12-06 07:58:20.755900958 +0000 UTC m=+0.027244446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:58:20 compute-0 podman[372042]: 2025-12-06 07:58:20.858320211 +0000 UTC m=+0.129663619 container init 624dc4919daddcfbff0cfec63960e224c01cf4df74f40c1719f59466e4f073a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bardeen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:58:20 compute-0 podman[372042]: 2025-12-06 07:58:20.863867721 +0000 UTC m=+0.135211119 container start 624dc4919daddcfbff0cfec63960e224c01cf4df74f40c1719f59466e4f073a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:58:20 compute-0 podman[372042]: 2025-12-06 07:58:20.867087409 +0000 UTC m=+0.138430837 container attach 624dc4919daddcfbff0cfec63960e224c01cf4df74f40c1719f59466e4f073a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bardeen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:58:20 compute-0 objective_bardeen[372058]: 167 167
Dec 06 07:58:20 compute-0 systemd[1]: libpod-624dc4919daddcfbff0cfec63960e224c01cf4df74f40c1719f59466e4f073a5.scope: Deactivated successfully.
Dec 06 07:58:20 compute-0 podman[372042]: 2025-12-06 07:58:20.868915937 +0000 UTC m=+0.140259365 container died 624dc4919daddcfbff0cfec63960e224c01cf4df74f40c1719f59466e4f073a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:58:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-79392abe4ec4315c3ac99c63136e1bb2b2bd1f55091942b46546d5a24144a6b5-merged.mount: Deactivated successfully.
Dec 06 07:58:20 compute-0 podman[372042]: 2025-12-06 07:58:20.904806406 +0000 UTC m=+0.176149804 container remove 624dc4919daddcfbff0cfec63960e224c01cf4df74f40c1719f59466e4f073a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bardeen, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:58:20 compute-0 systemd[1]: libpod-conmon-624dc4919daddcfbff0cfec63960e224c01cf4df74f40c1719f59466e4f073a5.scope: Deactivated successfully.
Dec 06 07:58:21 compute-0 podman[372083]: 2025-12-06 07:58:21.06912583 +0000 UTC m=+0.041771898 container create 8debac96b232a372d27c860853e973d6b733ee67e2dde98d5fac2a6e31854ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 07:58:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:21.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:21 compute-0 systemd[1]: Started libpod-conmon-8debac96b232a372d27c860853e973d6b733ee67e2dde98d5fac2a6e31854ce8.scope.
Dec 06 07:58:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b759efb046625c1a552f3c03c79c66abde20d3af9e99e39209c13410556a96f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b759efb046625c1a552f3c03c79c66abde20d3af9e99e39209c13410556a96f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b759efb046625c1a552f3c03c79c66abde20d3af9e99e39209c13410556a96f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b759efb046625c1a552f3c03c79c66abde20d3af9e99e39209c13410556a96f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:21 compute-0 podman[372083]: 2025-12-06 07:58:21.052315877 +0000 UTC m=+0.024961965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:58:21 compute-0 podman[372083]: 2025-12-06 07:58:21.147953527 +0000 UTC m=+0.120599595 container init 8debac96b232a372d27c860853e973d6b733ee67e2dde98d5fac2a6e31854ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Dec 06 07:58:21 compute-0 podman[372083]: 2025-12-06 07:58:21.155062198 +0000 UTC m=+0.127708266 container start 8debac96b232a372d27c860853e973d6b733ee67e2dde98d5fac2a6e31854ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 07:58:21 compute-0 podman[372083]: 2025-12-06 07:58:21.158220635 +0000 UTC m=+0.130866723 container attach 8debac96b232a372d27c860853e973d6b733ee67e2dde98d5fac2a6e31854ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:58:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 220 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 688 KiB/s rd, 18 KiB/s wr, 116 op/s
Dec 06 07:58:21 compute-0 ceph-mon[74339]: pgmap v3161: 305 pgs: 305 active+clean; 236 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 16 KiB/s wr, 71 op/s
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]: {
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:     "0": [
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:         {
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             "devices": [
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "/dev/loop3"
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             ],
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             "lv_name": "ceph_lv0",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             "lv_size": "7511998464",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             "name": "ceph_lv0",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             "tags": {
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "ceph.cluster_name": "ceph",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "ceph.crush_device_class": "",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "ceph.encrypted": "0",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "ceph.osd_id": "0",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "ceph.type": "block",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:                 "ceph.vdo": "0"
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             },
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             "type": "block",
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:             "vg_name": "ceph_vg0"
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:         }
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]:     ]
Dec 06 07:58:21 compute-0 dreamy_darwin[372100]: }
Dec 06 07:58:21 compute-0 systemd[1]: libpod-8debac96b232a372d27c860853e973d6b733ee67e2dde98d5fac2a6e31854ce8.scope: Deactivated successfully.
Dec 06 07:58:21 compute-0 podman[372083]: 2025-12-06 07:58:21.860227927 +0000 UTC m=+0.832873995 container died 8debac96b232a372d27c860853e973d6b733ee67e2dde98d5fac2a6e31854ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 07:58:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b759efb046625c1a552f3c03c79c66abde20d3af9e99e39209c13410556a96f6-merged.mount: Deactivated successfully.
Dec 06 07:58:21 compute-0 podman[372083]: 2025-12-06 07:58:21.925053836 +0000 UTC m=+0.897699904 container remove 8debac96b232a372d27c860853e973d6b733ee67e2dde98d5fac2a6e31854ce8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:58:21 compute-0 systemd[1]: libpod-conmon-8debac96b232a372d27c860853e973d6b733ee67e2dde98d5fac2a6e31854ce8.scope: Deactivated successfully.
Dec 06 07:58:21 compute-0 sudo[371979]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:22 compute-0 sudo[372124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:22 compute-0 sudo[372124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:22 compute-0 sudo[372124]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:22 compute-0 sudo[372149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:58:22 compute-0 sudo[372149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:22 compute-0 sudo[372149]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:22 compute-0 sudo[372174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:22 compute-0 sudo[372174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:22 compute-0 sudo[372174]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:22 compute-0 sudo[372199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:58:22 compute-0 sudo[372199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:22 compute-0 nova_compute[251992]: 2025-12-06 07:58:22.245 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:58:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:22.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:58:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 07:58:22 compute-0 podman[372265]: 2025-12-06 07:58:22.509892607 +0000 UTC m=+0.035855948 container create 8850eb4e8060a063e1f8adad27db83e5a33e33413a0b5e1722e9e96a98b20d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 07:58:22 compute-0 systemd[1]: Started libpod-conmon-8850eb4e8060a063e1f8adad27db83e5a33e33413a0b5e1722e9e96a98b20d75.scope.
Dec 06 07:58:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:58:22 compute-0 podman[372265]: 2025-12-06 07:58:22.581018866 +0000 UTC m=+0.106982227 container init 8850eb4e8060a063e1f8adad27db83e5a33e33413a0b5e1722e9e96a98b20d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaum, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 07:58:22 compute-0 podman[372265]: 2025-12-06 07:58:22.586217697 +0000 UTC m=+0.112181038 container start 8850eb4e8060a063e1f8adad27db83e5a33e33413a0b5e1722e9e96a98b20d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaum, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 07:58:22 compute-0 podman[372265]: 2025-12-06 07:58:22.494336357 +0000 UTC m=+0.020299718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:58:22 compute-0 podman[372265]: 2025-12-06 07:58:22.590061041 +0000 UTC m=+0.116024482 container attach 8850eb4e8060a063e1f8adad27db83e5a33e33413a0b5e1722e9e96a98b20d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaum, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 07:58:22 compute-0 agitated_chaum[372281]: 167 167
Dec 06 07:58:22 compute-0 systemd[1]: libpod-8850eb4e8060a063e1f8adad27db83e5a33e33413a0b5e1722e9e96a98b20d75.scope: Deactivated successfully.
Dec 06 07:58:22 compute-0 podman[372265]: 2025-12-06 07:58:22.594544941 +0000 UTC m=+0.120508282 container died 8850eb4e8060a063e1f8adad27db83e5a33e33413a0b5e1722e9e96a98b20d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:58:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-94e3b57a95a069aad9b2568cee3d89d6dfe2f8ca974314bd83fe563aed7dfae5-merged.mount: Deactivated successfully.
Dec 06 07:58:22 compute-0 podman[372265]: 2025-12-06 07:58:22.628094317 +0000 UTC m=+0.154057658 container remove 8850eb4e8060a063e1f8adad27db83e5a33e33413a0b5e1722e9e96a98b20d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 07:58:22 compute-0 systemd[1]: libpod-conmon-8850eb4e8060a063e1f8adad27db83e5a33e33413a0b5e1722e9e96a98b20d75.scope: Deactivated successfully.
Dec 06 07:58:22 compute-0 ceph-mon[74339]: pgmap v3162: 305 pgs: 305 active+clean; 220 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 688 KiB/s rd, 18 KiB/s wr, 116 op/s
Dec 06 07:58:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1654241962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1654241962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:22 compute-0 podman[372305]: 2025-12-06 07:58:22.813489699 +0000 UTC m=+0.074585683 container create 933ca222a372c887099cd3ecc6f787f130602379e3f3613387edd64d2d130329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 07:58:22 compute-0 systemd[1]: Started libpod-conmon-933ca222a372c887099cd3ecc6f787f130602379e3f3613387edd64d2d130329.scope.
Dec 06 07:58:22 compute-0 podman[372305]: 2025-12-06 07:58:22.759231225 +0000 UTC m=+0.020327229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:58:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3498760c81522c81ff4d8fd27af33ddd3c95137aeb23850e07819a331b2e205/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3498760c81522c81ff4d8fd27af33ddd3c95137aeb23850e07819a331b2e205/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3498760c81522c81ff4d8fd27af33ddd3c95137aeb23850e07819a331b2e205/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3498760c81522c81ff4d8fd27af33ddd3c95137aeb23850e07819a331b2e205/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:58:22 compute-0 podman[372305]: 2025-12-06 07:58:22.887537418 +0000 UTC m=+0.148633422 container init 933ca222a372c887099cd3ecc6f787f130602379e3f3613387edd64d2d130329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:58:22 compute-0 podman[372305]: 2025-12-06 07:58:22.897792725 +0000 UTC m=+0.158888709 container start 933ca222a372c887099cd3ecc6f787f130602379e3f3613387edd64d2d130329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:58:22 compute-0 podman[372305]: 2025-12-06 07:58:22.901030232 +0000 UTC m=+0.162126216 container attach 933ca222a372c887099cd3ecc6f787f130602379e3f3613387edd64d2d130329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 07:58:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:23.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:23 compute-0 nova_compute[251992]: 2025-12-06 07:58:23.158 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 220 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 688 KiB/s rd, 18 KiB/s wr, 116 op/s
Dec 06 07:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:58:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:58:23 compute-0 vibrant_hamilton[372322]: {
Dec 06 07:58:23 compute-0 vibrant_hamilton[372322]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:58:23 compute-0 vibrant_hamilton[372322]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:58:23 compute-0 vibrant_hamilton[372322]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:58:23 compute-0 vibrant_hamilton[372322]:         "osd_id": 0,
Dec 06 07:58:23 compute-0 vibrant_hamilton[372322]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:58:23 compute-0 vibrant_hamilton[372322]:         "type": "bluestore"
Dec 06 07:58:23 compute-0 vibrant_hamilton[372322]:     }
Dec 06 07:58:23 compute-0 vibrant_hamilton[372322]: }
Dec 06 07:58:23 compute-0 systemd[1]: libpod-933ca222a372c887099cd3ecc6f787f130602379e3f3613387edd64d2d130329.scope: Deactivated successfully.
Dec 06 07:58:23 compute-0 podman[372345]: 2025-12-06 07:58:23.829901576 +0000 UTC m=+0.024824601 container died 933ca222a372c887099cd3ecc6f787f130602379e3f3613387edd64d2d130329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:58:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3498760c81522c81ff4d8fd27af33ddd3c95137aeb23850e07819a331b2e205-merged.mount: Deactivated successfully.
Dec 06 07:58:23 compute-0 podman[372345]: 2025-12-06 07:58:23.878545199 +0000 UTC m=+0.073468194 container remove 933ca222a372c887099cd3ecc6f787f130602379e3f3613387edd64d2d130329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hamilton, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:58:23 compute-0 systemd[1]: libpod-conmon-933ca222a372c887099cd3ecc6f787f130602379e3f3613387edd64d2d130329.scope: Deactivated successfully.
Dec 06 07:58:23 compute-0 podman[372344]: 2025-12-06 07:58:23.889872215 +0000 UTC m=+0.071324236 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 07:58:23 compute-0 podman[372351]: 2025-12-06 07:58:23.90490848 +0000 UTC m=+0.084927532 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:58:23 compute-0 sudo[372199]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:58:23 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:58:23 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:23 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0200b730-10dd-4b6c-b0d6-b3556d630e2c does not exist
Dec 06 07:58:23 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 64aaa78a-7f06-41de-aed6-2319252956be does not exist
Dec 06 07:58:23 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ff1b90fe-d921-4e95-a31a-d7cfdb0a534c does not exist
Dec 06 07:58:24 compute-0 sudo[372400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:24 compute-0 sudo[372400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:24 compute-0 sudo[372400]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:24 compute-0 sudo[372425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:58:24 compute-0 sudo[372425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:24 compute-0 sudo[372425]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:24.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:24 compute-0 ceph-mon[74339]: pgmap v3163: 305 pgs: 305 active+clean; 220 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 688 KiB/s rd, 18 KiB/s wr, 116 op/s
Dec 06 07:58:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:58:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/155028513' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:58:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/155028513' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:58:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:58:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:25.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:58:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 946 KiB/s rd, 31 KiB/s wr, 144 op/s
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011670403522240833 of space, bias 1.0, pg target 0.350112105667225 quantized to 32 (current 32)
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021625052345058625 of space, bias 1.0, pg target 0.6487515703517588 quantized to 32 (current 32)
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:58:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 07:58:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:26 compute-0 sudo[372451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:26 compute-0 sudo[372451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:26 compute-0 sudo[372451]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:26 compute-0 sudo[372476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:26 compute-0 sudo[372476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:26 compute-0 sudo[372476]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:26 compute-0 nova_compute[251992]: 2025-12-06 07:58:26.969 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:58:26.969 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:58:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:58:26.971 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:58:26 compute-0 ceph-mon[74339]: pgmap v3164: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 946 KiB/s rd, 31 KiB/s wr, 144 op/s
Dec 06 07:58:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:27.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:27 compute-0 nova_compute[251992]: 2025-12-06 07:58:27.239 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:58:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:58:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:58:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:58:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:58:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 150 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 829 KiB/s rd, 17 KiB/s wr, 130 op/s
Dec 06 07:58:27 compute-0 nova_compute[251992]: 2025-12-06 07:58:27.498 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:27 compute-0 nova_compute[251992]: 2025-12-06 07:58:27.510 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2677149162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:28 compute-0 nova_compute[251992]: 2025-12-06 07:58:28.196 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:28.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Dec 06 07:58:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:58:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:29.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:58:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Dec 06 07:58:29 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Dec 06 07:58:29 compute-0 ceph-mon[74339]: pgmap v3165: 305 pgs: 305 active+clean; 150 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 829 KiB/s rd, 17 KiB/s wr, 130 op/s
Dec 06 07:58:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 705 KiB/s rd, 17 KiB/s wr, 126 op/s
Dec 06 07:58:29 compute-0 nova_compute[251992]: 2025-12-06 07:58:29.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:29 compute-0 nova_compute[251992]: 2025-12-06 07:58:29.726 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:29 compute-0 nova_compute[251992]: 2025-12-06 07:58:29.727 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:29 compute-0 nova_compute[251992]: 2025-12-06 07:58:29.727 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:29 compute-0 nova_compute[251992]: 2025-12-06 07:58:29.727 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:58:29 compute-0 nova_compute[251992]: 2025-12-06 07:58:29.727 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:58:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/425738491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:30 compute-0 ceph-mon[74339]: osdmap e407: 3 total, 3 up, 3 in
Dec 06 07:58:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/957644364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:58:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/564199662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:30 compute-0 nova_compute[251992]: 2025-12-06 07:58:30.214 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:58:30 compute-0 nova_compute[251992]: 2025-12-06 07:58:30.382 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:58:30 compute-0 nova_compute[251992]: 2025-12-06 07:58:30.384 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4142MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:58:30 compute-0 nova_compute[251992]: 2025-12-06 07:58:30.384 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:58:30 compute-0 nova_compute[251992]: 2025-12-06 07:58:30.385 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:58:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:30.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:30 compute-0 nova_compute[251992]: 2025-12-06 07:58:30.788 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:58:30 compute-0 nova_compute[251992]: 2025-12-06 07:58:30.789 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:58:30 compute-0 nova_compute[251992]: 2025-12-06 07:58:30.806 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:58:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:31.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:58:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233188705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:31 compute-0 nova_compute[251992]: 2025-12-06 07:58:31.256 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:58:31 compute-0 nova_compute[251992]: 2025-12-06 07:58:31.261 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:58:31 compute-0 nova_compute[251992]: 2025-12-06 07:58:31.308 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:58:31 compute-0 nova_compute[251992]: 2025-12-06 07:58:31.341 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:58:31 compute-0 nova_compute[251992]: 2025-12-06 07:58:31.341 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:58:31 compute-0 ceph-mon[74339]: pgmap v3167: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 705 KiB/s rd, 17 KiB/s wr, 126 op/s
Dec 06 07:58:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/564199662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 303 KiB/s rd, 15 KiB/s wr, 76 op/s
Dec 06 07:58:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:32.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2233188705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:32 compute-0 ceph-mon[74339]: pgmap v3168: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 303 KiB/s rd, 15 KiB/s wr, 76 op/s
Dec 06 07:58:32 compute-0 nova_compute[251992]: 2025-12-06 07:58:32.502 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:58:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:33.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:58:33 compute-0 nova_compute[251992]: 2025-12-06 07:58:33.198 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 303 KiB/s rd, 15 KiB/s wr, 76 op/s
Dec 06 07:58:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:34.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:34 compute-0 ceph-mon[74339]: pgmap v3169: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 303 KiB/s rd, 15 KiB/s wr, 76 op/s
Dec 06 07:58:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:58:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:35.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:58:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Dec 06 07:58:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:58:35.973 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:58:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:36.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:36 compute-0 ceph-mon[74339]: pgmap v3170: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Dec 06 07:58:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:58:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:37.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:58:37 compute-0 nova_compute[251992]: 2025-12-06 07:58:37.334 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.3 KiB/s wr, 21 op/s
Dec 06 07:58:37 compute-0 nova_compute[251992]: 2025-12-06 07:58:37.503 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:37 compute-0 nova_compute[251992]: 2025-12-06 07:58:37.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:38 compute-0 nova_compute[251992]: 2025-12-06 07:58:38.200 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:38.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:38 compute-0 ceph-mon[74339]: pgmap v3171: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.3 KiB/s wr, 21 op/s
Dec 06 07:58:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:39.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:58:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:40.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:40 compute-0 ceph-mon[74339]: pgmap v3172: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 07:58:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/570637393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:41.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Dec 06 07:58:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/319071054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:42.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:42 compute-0 nova_compute[251992]: 2025-12-06 07:58:42.506 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:42 compute-0 nova_compute[251992]: 2025-12-06 07:58:42.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:42 compute-0 nova_compute[251992]: 2025-12-06 07:58:42.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:58:42 compute-0 ceph-mon[74339]: pgmap v3173: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Dec 06 07:58:42 compute-0 nova_compute[251992]: 2025-12-06 07:58:42.911 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 07:58:42 compute-0 nova_compute[251992]: 2025-12-06 07:58:42.911 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:42 compute-0 nova_compute[251992]: 2025-12-06 07:58:42.911 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:42 compute-0 nova_compute[251992]: 2025-12-06 07:58:42.912 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:58:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:58:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:58:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:58:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:58:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:58:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:43.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:43 compute-0 nova_compute[251992]: 2025-12-06 07:58:43.201 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Dec 06 07:58:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:44.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:44 compute-0 ceph-mon[74339]: pgmap v3174: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Dec 06 07:58:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:45.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 6 op/s
Dec 06 07:58:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:46.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:46 compute-0 ceph-mon[74339]: pgmap v3175: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 6 op/s
Dec 06 07:58:47 compute-0 sudo[372557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:47 compute-0 sudo[372557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:47 compute-0 sudo[372557]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:47 compute-0 sudo[372582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:58:47 compute-0 sudo[372582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:58:47 compute-0 sudo[372582]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:47.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Dec 06 07:58:47 compute-0 podman[372608]: 2025-12-06 07:58:47.424736544 +0000 UTC m=+0.080763701 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 06 07:58:47 compute-0 nova_compute[251992]: 2025-12-06 07:58:47.507 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:47 compute-0 nova_compute[251992]: 2025-12-06 07:58:47.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:48 compute-0 nova_compute[251992]: 2025-12-06 07:58:48.203 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:48.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:58:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:49.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:58:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 132 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 437 KiB/s wr, 18 op/s
Dec 06 07:58:49 compute-0 ceph-mon[74339]: pgmap v3176: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 170 B/s wr, 7 op/s
Dec 06 07:58:49 compute-0 nova_compute[251992]: 2025-12-06 07:58:49.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:49 compute-0 nova_compute[251992]: 2025-12-06 07:58:49.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:58:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:50.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:50 compute-0 ceph-mon[74339]: pgmap v3177: 305 pgs: 305 active+clean; 132 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 437 KiB/s wr, 18 op/s
Dec 06 07:58:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:58:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:51.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:58:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 07:58:51 compute-0 nova_compute[251992]: 2025-12-06 07:58:51.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:51 compute-0 nova_compute[251992]: 2025-12-06 07:58:51.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 07:58:52 compute-0 ceph-mon[74339]: pgmap v3178: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 07:58:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:52.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:52 compute-0 nova_compute[251992]: 2025-12-06 07:58:52.508 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:52 compute-0 nova_compute[251992]: 2025-12-06 07:58:52.671 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:58:52 compute-0 nova_compute[251992]: 2025-12-06 07:58:52.671 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 07:58:52 compute-0 nova_compute[251992]: 2025-12-06 07:58:52.695 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 07:58:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:53.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:53 compute-0 nova_compute[251992]: 2025-12-06 07:58:53.206 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1376359152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:58:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 07:58:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:58:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2637314742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:58:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:54 compute-0 ceph-mon[74339]: pgmap v3179: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 07:58:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2637314742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:58:54 compute-0 podman[372640]: 2025-12-06 07:58:54.411539504 +0000 UTC m=+0.062444126 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 07:58:54 compute-0 podman[372639]: 2025-12-06 07:58:54.411944585 +0000 UTC m=+0.062459756 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 07:58:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:54.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:55.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 07:58:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:56.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:56 compute-0 ceph-mon[74339]: pgmap v3180: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 07:58:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:57.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 07:58:57 compute-0 nova_compute[251992]: 2025-12-06 07:58:57.510 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:58 compute-0 nova_compute[251992]: 2025-12-06 07:58:58.247 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:58:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:58:58.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:58:58 compute-0 ceph-mon[74339]: pgmap v3181: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 07:58:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3401036419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:58:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:58:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:58:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:58:59.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:58:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 07:59:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:00.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:00 compute-0 nova_compute[251992]: 2025-12-06 07:59:00.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:00 compute-0 ceph-mon[74339]: pgmap v3182: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.100 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.100 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:01.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.124 251996 DEBUG nova.compute.manager [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.202 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.202 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.208 251996 DEBUG nova.virt.hardware [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.208 251996 INFO nova.compute.claims [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Claim successful on node compute-0.ctlplane.example.com
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.348 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Dec 06 07:59:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:59:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125294339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.810 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.818 251996 DEBUG nova.compute.provider_tree [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.931 251996 DEBUG nova.scheduler.client.report [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.960 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:01 compute-0 nova_compute[251992]: 2025-12-06 07:59:01.960 251996 DEBUG nova.compute.manager [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 07:59:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3125294339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.031 251996 DEBUG nova.compute.manager [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.032 251996 DEBUG nova.network.neutron [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.233 251996 INFO nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.301 251996 DEBUG nova.compute.manager [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 07:59:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:59:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:02.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.510 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.612 251996 DEBUG nova.compute.manager [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.614 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.615 251996 INFO nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Creating image(s)
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.647 251996 DEBUG nova.storage.rbd_utils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.681 251996 DEBUG nova.storage.rbd_utils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.719 251996 DEBUG nova.storage.rbd_utils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.723 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.796 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.798 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.798 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.799 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.828 251996 DEBUG nova.storage.rbd_utils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:02 compute-0 nova_compute[251992]: 2025-12-06 07:59:02.832 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:03.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:03 compute-0 nova_compute[251992]: 2025-12-06 07:59:03.175 251996 DEBUG nova.policy [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ed2d17026504d70b893923a85cece4d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 07:59:03 compute-0 nova_compute[251992]: 2025-12-06 07:59:03.248 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 12 KiB/s wr, 3 op/s
Dec 06 07:59:03 compute-0 ceph-mon[74339]: pgmap v3183: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Dec 06 07:59:03 compute-0 nova_compute[251992]: 2025-12-06 07:59:03.497 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:03 compute-0 nova_compute[251992]: 2025-12-06 07:59:03.597 251996 DEBUG nova.storage.rbd_utils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] resizing rbd image d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 07:59:03 compute-0 nova_compute[251992]: 2025-12-06 07:59:03.730 251996 DEBUG nova.objects.instance [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:59:03 compute-0 nova_compute[251992]: 2025-12-06 07:59:03.787 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 07:59:03 compute-0 nova_compute[251992]: 2025-12-06 07:59:03.788 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Ensure instance console log exists: /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 07:59:03 compute-0 nova_compute[251992]: 2025-12-06 07:59:03.789 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:03 compute-0 nova_compute[251992]: 2025-12-06 07:59:03.789 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:03 compute-0 nova_compute[251992]: 2025-12-06 07:59:03.789 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:03.870 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:03.871 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:03.872 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:04.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:04 compute-0 ceph-mon[74339]: pgmap v3184: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 12 KiB/s wr, 3 op/s
Dec 06 07:59:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:05.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 196 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 287 KiB/s rd, 1.2 MiB/s wr, 43 op/s
Dec 06 07:59:06 compute-0 nova_compute[251992]: 2025-12-06 07:59:06.148 251996 DEBUG nova.network.neutron [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Successfully created port: 05c9980c-d230-4d61-9d98-4586e200fac5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 07:59:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:06.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:07 compute-0 ceph-mon[74339]: pgmap v3185: 305 pgs: 305 active+clean; 196 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 287 KiB/s rd, 1.2 MiB/s wr, 43 op/s
Dec 06 07:59:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:59:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:07.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:59:07 compute-0 sudo[372871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:07 compute-0 sudo[372871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:07 compute-0 sudo[372871]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:07 compute-0 sudo[372896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:07 compute-0 sudo[372896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:07 compute-0 sudo[372896]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Dec 06 07:59:07 compute-0 nova_compute[251992]: 2025-12-06 07:59:07.512 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:08 compute-0 nova_compute[251992]: 2025-12-06 07:59:08.250 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:08.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:08.774 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:59:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:08.775 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:59:08 compute-0 nova_compute[251992]: 2025-12-06 07:59:08.777 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:59:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:09.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:59:09 compute-0 ceph-mon[74339]: pgmap v3186: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Dec 06 07:59:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/137019635' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 07:59:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/137019635' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 07:59:09 compute-0 nova_compute[251992]: 2025-12-06 07:59:09.272 251996 DEBUG nova.network.neutron [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Successfully updated port: 05c9980c-d230-4d61-9d98-4586e200fac5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 07:59:09 compute-0 nova_compute[251992]: 2025-12-06 07:59:09.326 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:59:09 compute-0 nova_compute[251992]: 2025-12-06 07:59:09.326 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:59:09 compute-0 nova_compute[251992]: 2025-12-06 07:59:09.327 251996 DEBUG nova.network.neutron [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 07:59:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 07:59:09 compute-0 nova_compute[251992]: 2025-12-06 07:59:09.562 251996 DEBUG nova.compute.manager [req-f1e022d2-bc3c-49cb-bfcf-081aab992d77 req-82ee8c65-9c1d-4fcb-a26c-ec6ad87c5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-changed-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:59:09 compute-0 nova_compute[251992]: 2025-12-06 07:59:09.563 251996 DEBUG nova.compute.manager [req-f1e022d2-bc3c-49cb-bfcf-081aab992d77 req-82ee8c65-9c1d-4fcb-a26c-ec6ad87c5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Refreshing instance network info cache due to event network-changed-05c9980c-d230-4d61-9d98-4586e200fac5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:59:09 compute-0 nova_compute[251992]: 2025-12-06 07:59:09.563 251996 DEBUG oslo_concurrency.lockutils [req-f1e022d2-bc3c-49cb-bfcf-081aab992d77 req-82ee8c65-9c1d-4fcb-a26c-ec6ad87c5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:59:10 compute-0 nova_compute[251992]: 2025-12-06 07:59:10.151 251996 DEBUG nova.network.neutron [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 07:59:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3664579802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:10.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:11.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 07:59:11 compute-0 ceph-mon[74339]: pgmap v3187: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 07:59:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:59:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:12.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:59:12 compute-0 nova_compute[251992]: 2025-12-06 07:59:12.514 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:12 compute-0 ceph-mon[74339]: pgmap v3188: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 07:59:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:59:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:59:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:59:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:59:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:59:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:59:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:13.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:13 compute-0 nova_compute[251992]: 2025-12-06 07:59:13.281 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Dec 06 07:59:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:59:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:14.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:59:14 compute-0 nova_compute[251992]: 2025-12-06 07:59:14.845 251996 DEBUG nova.network.neutron [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating instance_info_cache with network_info: [{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.112 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.112 251996 DEBUG nova.compute.manager [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Instance network_info: |[{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.113 251996 DEBUG oslo_concurrency.lockutils [req-f1e022d2-bc3c-49cb-bfcf-081aab992d77 req-82ee8c65-9c1d-4fcb-a26c-ec6ad87c5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.113 251996 DEBUG nova.network.neutron [req-f1e022d2-bc3c-49cb-bfcf-081aab992d77 req-82ee8c65-9c1d-4fcb-a26c-ec6ad87c5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Refreshing network info cache for port 05c9980c-d230-4d61-9d98-4586e200fac5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.116 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Start _get_guest_xml network_info=[{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.121 251996 WARNING nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.127 251996 DEBUG nova.virt.libvirt.host [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.129 251996 DEBUG nova.virt.libvirt.host [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 07:59:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:15.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.135 251996 DEBUG nova.virt.libvirt.host [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.136 251996 DEBUG nova.virt.libvirt.host [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.138 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.139 251996 DEBUG nova.virt.hardware [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.140 251996 DEBUG nova.virt.hardware [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.141 251996 DEBUG nova.virt.hardware [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.141 251996 DEBUG nova.virt.hardware [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.142 251996 DEBUG nova.virt.hardware [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.142 251996 DEBUG nova.virt.hardware [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.143 251996 DEBUG nova.virt.hardware [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.143 251996 DEBUG nova.virt.hardware [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.144 251996 DEBUG nova.virt.hardware [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.144 251996 DEBUG nova.virt.hardware [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.145 251996 DEBUG nova.virt.hardware [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.151 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:15 compute-0 ceph-mon[74339]: pgmap v3189: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Dec 06 07:59:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 236 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 121 op/s
Dec 06 07:59:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:59:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1601421389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.587 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.617 251996 DEBUG nova.storage.rbd_utils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:15 compute-0 nova_compute[251992]: 2025-12-06 07:59:15.621 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:59:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3820747644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.110 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.112 251996 DEBUG nova.virt.libvirt.vif [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:58:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1262973925',display_name='tempest-TestNetworkAdvancedServerOps-server-1262973925',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1262973925',id=181,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHajJo6sYKmo7BjEFfbTegHFFaysH3CPUR6yuP2Rayw3S9ts1Wd6TY6anx2QtLxK6yp4z4nQqn7Ss4CGPtBiZQsZd5U8dFeDqjYG81KqlV6e9SPXI48qB0u9ty6SGnMpqw==',key_name='tempest-TestNetworkAdvancedServerOps-720860597',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-1koojzsy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:59:02Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=d1a8ef9c-0ce7-4841-9523-7f11435a1884,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.112 251996 DEBUG nova.network.os_vif_util [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.113 251996 DEBUG nova.network.os_vif_util [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.114 251996 DEBUG nova.objects.instance [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.133 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] End _get_guest_xml xml=<domain type="kvm">
Dec 06 07:59:16 compute-0 nova_compute[251992]:   <uuid>d1a8ef9c-0ce7-4841-9523-7f11435a1884</uuid>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   <name>instance-000000b5</name>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   <metadata>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1262973925</nova:name>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 07:59:15</nova:creationTime>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <nova:port uuid="05c9980c-d230-4d61-9d98-4586e200fac5">
Dec 06 07:59:16 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   </metadata>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <system>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <entry name="serial">d1a8ef9c-0ce7-4841-9523-7f11435a1884</entry>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <entry name="uuid">d1a8ef9c-0ce7-4841-9523-7f11435a1884</entry>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     </system>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   <os>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   </os>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   <features>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <apic/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   </features>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   </clock>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   </cpu>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   <devices>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk">
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       </source>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk.config">
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       </source>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 07:59:16 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       </auth>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     </disk>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:28:35:99"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <target dev="tap05c9980c-d2"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     </interface>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/console.log" append="off"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     </serial>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <video>
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     </video>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     </rng>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 07:59:16 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 07:59:16 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 07:59:16 compute-0 nova_compute[251992]:   </devices>
Dec 06 07:59:16 compute-0 nova_compute[251992]: </domain>
Dec 06 07:59:16 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.134 251996 DEBUG nova.compute.manager [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Preparing to wait for external event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.134 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.134 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.134 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.135 251996 DEBUG nova.virt.libvirt.vif [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T07:58:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1262973925',display_name='tempest-TestNetworkAdvancedServerOps-server-1262973925',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1262973925',id=181,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHajJo6sYKmo7BjEFfbTegHFFaysH3CPUR6yuP2Rayw3S9ts1Wd6TY6anx2QtLxK6yp4z4nQqn7Ss4CGPtBiZQsZd5U8dFeDqjYG81KqlV6e9SPXI48qB0u9ty6SGnMpqw==',key_name='tempest-TestNetworkAdvancedServerOps-720860597',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-1koojzsy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T07:59:02Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=d1a8ef9c-0ce7-4841-9523-7f11435a1884,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.135 251996 DEBUG nova.network.os_vif_util [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.136 251996 DEBUG nova.network.os_vif_util [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.136 251996 DEBUG os_vif [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.137 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.137 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.138 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.142 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.143 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05c9980c-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.143 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05c9980c-d2, col_values=(('external_ids', {'iface-id': '05c9980c-d230-4d61-9d98-4586e200fac5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:35:99', 'vm-uuid': 'd1a8ef9c-0ce7-4841-9523-7f11435a1884'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.145 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:16 compute-0 NetworkManager[48965]: <info>  [1765007956.1465] manager: (tap05c9980c-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.147 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.151 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.152 251996 INFO os_vif [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2')
Dec 06 07:59:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1601421389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3820747644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.218 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.218 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.218 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No VIF found with MAC fa:16:3e:28:35:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.219 251996 INFO nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Using config drive
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.251 251996 DEBUG nova.storage.rbd_utils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:16.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.781 251996 INFO nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Creating config drive at /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/disk.config
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.786 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0rd6ny6y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.927 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0rd6ny6y" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.955 251996 DEBUG nova.storage.rbd_utils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 07:59:16 compute-0 nova_compute[251992]: 2025-12-06 07:59:16.958 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/disk.config d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:17 compute-0 nova_compute[251992]: 2025-12-06 07:59:17.027 251996 DEBUG nova.network.neutron [req-f1e022d2-bc3c-49cb-bfcf-081aab992d77 req-82ee8c65-9c1d-4fcb-a26c-ec6ad87c5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updated VIF entry in instance network info cache for port 05c9980c-d230-4d61-9d98-4586e200fac5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:59:17 compute-0 nova_compute[251992]: 2025-12-06 07:59:17.028 251996 DEBUG nova.network.neutron [req-f1e022d2-bc3c-49cb-bfcf-081aab992d77 req-82ee8c65-9c1d-4fcb-a26c-ec6ad87c5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating instance_info_cache with network_info: [{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:59:17 compute-0 nova_compute[251992]: 2025-12-06 07:59:17.050 251996 DEBUG oslo_concurrency.lockutils [req-f1e022d2-bc3c-49cb-bfcf-081aab992d77 req-82ee8c65-9c1d-4fcb-a26c-ec6ad87c5264 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:59:17 compute-0 nova_compute[251992]: 2025-12-06 07:59:17.111 251996 DEBUG oslo_concurrency.processutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/disk.config d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:17 compute-0 nova_compute[251992]: 2025-12-06 07:59:17.112 251996 INFO nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Deleting local config drive /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/disk.config because it was imported into RBD.
Dec 06 07:59:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:17.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:17 compute-0 kernel: tap05c9980c-d2: entered promiscuous mode
Dec 06 07:59:17 compute-0 ovn_controller[147168]: 2025-12-06T07:59:17Z|00685|binding|INFO|Claiming lport 05c9980c-d230-4d61-9d98-4586e200fac5 for this chassis.
Dec 06 07:59:17 compute-0 ovn_controller[147168]: 2025-12-06T07:59:17Z|00686|binding|INFO|05c9980c-d230-4d61-9d98-4586e200fac5: Claiming fa:16:3e:28:35:99 10.100.0.14
Dec 06 07:59:17 compute-0 nova_compute[251992]: 2025-12-06 07:59:17.172 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:17 compute-0 NetworkManager[48965]: <info>  [1765007957.1731] manager: (tap05c9980c-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Dec 06 07:59:17 compute-0 nova_compute[251992]: 2025-12-06 07:59:17.175 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:17 compute-0 nova_compute[251992]: 2025-12-06 07:59:17.179 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.192 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:35:99 10.100.0.14'], port_security=['fa:16:3e:28:35:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd1a8ef9c-0ce7-4841-9523-7f11435a1884', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6d104e6f-7c98-46ec-bf5b-e2d926211253', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c0a0ddb-0305-44e4-8a0a-e612a87c4904, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=05c9980c-d230-4d61-9d98-4586e200fac5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.193 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 05c9980c-d230-4d61-9d98-4586e200fac5 in datapath ecf01de9-e04e-423a-b106-dcf22b107dc4 bound to our chassis
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.194 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ecf01de9-e04e-423a-b106-dcf22b107dc4
Dec 06 07:59:17 compute-0 systemd-udevd[373061]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:59:17 compute-0 systemd-machined[212986]: New machine qemu-84-instance-000000b5.
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.214 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9acf593c-7df3-46a5-b146-c1afe33567f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.216 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapecf01de9-e1 in ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.219 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapecf01de9-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.220 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[114ca430-07c5-4eef-acef-be52d101248e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.221 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3e085119-b610-46eb-a4c2-59db89773a20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 NetworkManager[48965]: <info>  [1765007957.2276] device (tap05c9980c-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 07:59:17 compute-0 NetworkManager[48965]: <info>  [1765007957.2289] device (tap05c9980c-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.236 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6518c0-8b36-4f2f-8356-4c84871bcdb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-000000b5.
Dec 06 07:59:17 compute-0 nova_compute[251992]: 2025-12-06 07:59:17.254 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:17 compute-0 ovn_controller[147168]: 2025-12-06T07:59:17Z|00687|binding|INFO|Setting lport 05c9980c-d230-4d61-9d98-4586e200fac5 ovn-installed in OVS
Dec 06 07:59:17 compute-0 ovn_controller[147168]: 2025-12-06T07:59:17Z|00688|binding|INFO|Setting lport 05c9980c-d230-4d61-9d98-4586e200fac5 up in Southbound
Dec 06 07:59:17 compute-0 nova_compute[251992]: 2025-12-06 07:59:17.260 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.261 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[981ee055-a3dc-45c3-a0a0-7d5ea6a41c02]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.291 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[213e83c6-e32e-4df0-9874-8a1ba19839fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 systemd-udevd[373065]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:59:17 compute-0 NetworkManager[48965]: <info>  [1765007957.3000] manager: (tapecf01de9-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/311)
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.298 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1a337e-ccde-4e48-a238-a98329b846a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.332 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[f38d0eae-803d-41a6-863d-fe40d7679019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.335 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[141dab8f-fc2e-4431-ac57-fd136770c2a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 NetworkManager[48965]: <info>  [1765007957.3642] device (tapecf01de9-e0): carrier: link connected
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.370 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[b17e8033-bd4f-4ec0-a012-6c4c5e0f3534]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.387 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9f98a846-c774-4589-9259-c20720760d53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapecf01de9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:13:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822995, 'reachable_time': 25877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373094, 'error': None, 'target': 'ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.403 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c872d388-382c-442a-997f-fb30a4ceb5ec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:1385'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 822995, 'tstamp': 822995}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373095, 'error': None, 'target': 'ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.421 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d67be168-4b3d-41ea-9e0e-6547737ab539]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapecf01de9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:13:85'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822995, 'reachable_time': 25877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373096, 'error': None, 'target': 'ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 266 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 99 op/s
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.451 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[971b34df-51ed-4252-b1ba-fe453a0528dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.505 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d7517e4c-6496-414e-a198-05e51f6dc768]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.507 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapecf01de9-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.507 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.508 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapecf01de9-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:17 compute-0 NetworkManager[48965]: <info>  [1765007957.5107] manager: (tapecf01de9-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Dec 06 07:59:17 compute-0 kernel: tapecf01de9-e0: entered promiscuous mode
Dec 06 07:59:17 compute-0 nova_compute[251992]: 2025-12-06 07:59:17.510 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.512 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapecf01de9-e0, col_values=(('external_ids', {'iface-id': '99dcb7eb-e90a-4aef-bab3-916110abd3fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:17 compute-0 ovn_controller[147168]: 2025-12-06T07:59:17Z|00689|binding|INFO|Releasing lport 99dcb7eb-e90a-4aef-bab3-916110abd3fa from this chassis (sb_readonly=0)
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.516 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ecf01de9-e04e-423a-b106-dcf22b107dc4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ecf01de9-e04e-423a-b106-dcf22b107dc4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.517 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f03d54-5ac6-4ccc-ab57-43fa90392c58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.518 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: global
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-ecf01de9-e04e-423a-b106-dcf22b107dc4
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/ecf01de9-e04e-423a-b106-dcf22b107dc4.pid.haproxy
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID ecf01de9-e04e-423a-b106-dcf22b107dc4
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 07:59:17 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:17.520 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'env', 'PROCESS_TAG=haproxy-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ecf01de9-e04e-423a-b106-dcf22b107dc4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 07:59:17 compute-0 nova_compute[251992]: 2025-12-06 07:59:17.529 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:17 compute-0 ceph-mon[74339]: pgmap v3190: 305 pgs: 305 active+clean; 236 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 121 op/s
Dec 06 07:59:17 compute-0 podman[373128]: 2025-12-06 07:59:17.915327079 +0000 UTC m=+0.053696740 container create 7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 07:59:17 compute-0 systemd[1]: Started libpod-conmon-7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19.scope.
Dec 06 07:59:17 compute-0 podman[373128]: 2025-12-06 07:59:17.884505217 +0000 UTC m=+0.022874908 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 07:59:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:59:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f0be4bcd37072b072347183e2eb36ffa76b65153065604abc73149306e3741/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:18 compute-0 podman[373128]: 2025-12-06 07:59:18.003895459 +0000 UTC m=+0.142265140 container init 7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 07:59:18 compute-0 podman[373128]: 2025-12-06 07:59:18.009615274 +0000 UTC m=+0.147984935 container start 7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 07:59:18 compute-0 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[373149]: [NOTICE]   (373186) : New worker (373194) forked
Dec 06 07:59:18 compute-0 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[373149]: [NOTICE]   (373186) : Loading success.
Dec 06 07:59:18 compute-0 podman[373140]: 2025-12-06 07:59:18.039652974 +0000 UTC m=+0.086862095 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 06 07:59:18 compute-0 nova_compute[251992]: 2025-12-06 07:59:18.193 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007958.1928895, d1a8ef9c-0ce7-4841-9523-7f11435a1884 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:59:18 compute-0 nova_compute[251992]: 2025-12-06 07:59:18.194 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] VM Started (Lifecycle Event)
Dec 06 07:59:18 compute-0 nova_compute[251992]: 2025-12-06 07:59:18.215 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:59:18 compute-0 nova_compute[251992]: 2025-12-06 07:59:18.219 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007958.1930761, d1a8ef9c-0ce7-4841-9523-7f11435a1884 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:59:18 compute-0 nova_compute[251992]: 2025-12-06 07:59:18.220 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] VM Paused (Lifecycle Event)
Dec 06 07:59:18 compute-0 nova_compute[251992]: 2025-12-06 07:59:18.238 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:59:18 compute-0 nova_compute[251992]: 2025-12-06 07:59:18.241 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:59:18 compute-0 nova_compute[251992]: 2025-12-06 07:59:18.261 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:59:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:18 compute-0 ceph-mon[74339]: pgmap v3191: 305 pgs: 305 active+clean; 266 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 99 op/s
Dec 06 07:59:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_07:59:18
Dec 06 07:59:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 07:59:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 07:59:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.control', '.mgr']
Dec 06 07:59:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 07:59:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:18.777 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 07:59:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:19.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.139 251996 DEBUG nova.compute.manager [req-0a7fd691-5c95-4c2f-a890-8e5e8f3979c4 req-aabedc0b-66b8-4ddd-8a70-2ae923b35d48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.140 251996 DEBUG oslo_concurrency.lockutils [req-0a7fd691-5c95-4c2f-a890-8e5e8f3979c4 req-aabedc0b-66b8-4ddd-8a70-2ae923b35d48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.140 251996 DEBUG oslo_concurrency.lockutils [req-0a7fd691-5c95-4c2f-a890-8e5e8f3979c4 req-aabedc0b-66b8-4ddd-8a70-2ae923b35d48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.140 251996 DEBUG oslo_concurrency.lockutils [req-0a7fd691-5c95-4c2f-a890-8e5e8f3979c4 req-aabedc0b-66b8-4ddd-8a70-2ae923b35d48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.141 251996 DEBUG nova.compute.manager [req-0a7fd691-5c95-4c2f-a890-8e5e8f3979c4 req-aabedc0b-66b8-4ddd-8a70-2ae923b35d48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Processing event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.141 251996 DEBUG nova.compute.manager [req-0a7fd691-5c95-4c2f-a890-8e5e8f3979c4 req-aabedc0b-66b8-4ddd-8a70-2ae923b35d48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.141 251996 DEBUG oslo_concurrency.lockutils [req-0a7fd691-5c95-4c2f-a890-8e5e8f3979c4 req-aabedc0b-66b8-4ddd-8a70-2ae923b35d48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.141 251996 DEBUG oslo_concurrency.lockutils [req-0a7fd691-5c95-4c2f-a890-8e5e8f3979c4 req-aabedc0b-66b8-4ddd-8a70-2ae923b35d48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.142 251996 DEBUG oslo_concurrency.lockutils [req-0a7fd691-5c95-4c2f-a890-8e5e8f3979c4 req-aabedc0b-66b8-4ddd-8a70-2ae923b35d48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.142 251996 DEBUG nova.compute.manager [req-0a7fd691-5c95-4c2f-a890-8e5e8f3979c4 req-aabedc0b-66b8-4ddd-8a70-2ae923b35d48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] No waiting events found dispatching network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.142 251996 WARNING nova.compute.manager [req-0a7fd691-5c95-4c2f-a890-8e5e8f3979c4 req-aabedc0b-66b8-4ddd-8a70-2ae923b35d48 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received unexpected event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 for instance with vm_state building and task_state spawning.
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.143 251996 DEBUG nova.compute.manager [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.147 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765007959.1467025, d1a8ef9c-0ce7-4841-9523-7f11435a1884 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.147 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] VM Resumed (Lifecycle Event)
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.149 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.152 251996 INFO nova.virt.libvirt.driver [-] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Instance spawned successfully.
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.153 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.175 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.179 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.180 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.181 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.181 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.181 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.182 251996 DEBUG nova.virt.libvirt.driver [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.185 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.226 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.292 251996 INFO nova.compute.manager [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Took 16.68 seconds to spawn the instance on the hypervisor.
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.293 251996 DEBUG nova.compute.manager [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.403 251996 INFO nova.compute.manager [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Took 18.23 seconds to build instance.
Dec 06 07:59:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 3.5 MiB/s wr, 77 op/s
Dec 06 07:59:19 compute-0 nova_compute[251992]: 2025-12-06 07:59:19.476 251996 DEBUG oslo_concurrency.lockutils [None req-0e2cb2bb-c985-416b-a3a6-667fb9882381 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1432958510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/29171320' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:20 compute-0 ceph-mon[74339]: pgmap v3192: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 3.5 MiB/s wr, 77 op/s
Dec 06 07:59:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:59:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:21.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:59:21 compute-0 nova_compute[251992]: 2025-12-06 07:59:21.145 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Dec 06 07:59:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:22.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:22 compute-0 nova_compute[251992]: 2025-12-06 07:59:22.517 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:23 compute-0 ceph-mon[74339]: pgmap v3193: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Dec 06 07:59:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:23.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Dec 06 07:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 07:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:59:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:59:23 compute-0 nova_compute[251992]: 2025-12-06 07:59:23.780 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:23 compute-0 NetworkManager[48965]: <info>  [1765007963.7808] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Dec 06 07:59:23 compute-0 NetworkManager[48965]: <info>  [1765007963.7817] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Dec 06 07:59:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:23 compute-0 nova_compute[251992]: 2025-12-06 07:59:23.959 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:23 compute-0 ovn_controller[147168]: 2025-12-06T07:59:23Z|00690|binding|INFO|Releasing lport 99dcb7eb-e90a-4aef-bab3-916110abd3fa from this chassis (sb_readonly=0)
Dec 06 07:59:23 compute-0 nova_compute[251992]: 2025-12-06 07:59:23.983 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:24 compute-0 sudo[373231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:24 compute-0 sudo[373231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:24 compute-0 sudo[373231]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:24.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:24 compute-0 podman[373255]: 2025-12-06 07:59:24.519364871 +0000 UTC m=+0.047248166 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 07:59:24 compute-0 sudo[373268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:59:24 compute-0 sudo[373268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:24 compute-0 sudo[373268]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:24 compute-0 podman[373256]: 2025-12-06 07:59:24.55489974 +0000 UTC m=+0.080234846 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 07:59:24 compute-0 sudo[373316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:24 compute-0 sudo[373316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:24 compute-0 sudo[373316]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:24 compute-0 sudo[373341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 07:59:24 compute-0 sudo[373341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:25 compute-0 ceph-mon[74339]: pgmap v3194: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Dec 06 07:59:25 compute-0 podman[373438]: 2025-12-06 07:59:25.116842874 +0000 UTC m=+0.073334701 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 07:59:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:25.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:25 compute-0 podman[373438]: 2025-12-06 07:59:25.209831723 +0000 UTC m=+0.166323520 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:59:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 199 op/s
Dec 06 07:59:25 compute-0 podman[373594]: 2025-12-06 07:59:25.790918502 +0000 UTC m=+0.058941371 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 07:59:25 compute-0 podman[373594]: 2025-12-06 07:59:25.802408312 +0000 UTC m=+0.070431141 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 07:59:25 compute-0 nova_compute[251992]: 2025-12-06 07:59:25.852 251996 DEBUG nova.compute.manager [req-82c4fd1e-4c66-447f-befd-03b19f2a631f req-5b1684ee-549d-4d8e-b6f3-1e691ca1aca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-changed-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 07:59:25 compute-0 nova_compute[251992]: 2025-12-06 07:59:25.853 251996 DEBUG nova.compute.manager [req-82c4fd1e-4c66-447f-befd-03b19f2a631f req-5b1684ee-549d-4d8e-b6f3-1e691ca1aca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Refreshing instance network info cache due to event network-changed-05c9980c-d230-4d61-9d98-4586e200fac5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 07:59:25 compute-0 nova_compute[251992]: 2025-12-06 07:59:25.854 251996 DEBUG oslo_concurrency.lockutils [req-82c4fd1e-4c66-447f-befd-03b19f2a631f req-5b1684ee-549d-4d8e-b6f3-1e691ca1aca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:59:25 compute-0 nova_compute[251992]: 2025-12-06 07:59:25.854 251996 DEBUG oslo_concurrency.lockutils [req-82c4fd1e-4c66-447f-befd-03b19f2a631f req-5b1684ee-549d-4d8e-b6f3-1e691ca1aca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:59:25 compute-0 nova_compute[251992]: 2025-12-06 07:59:25.855 251996 DEBUG nova.network.neutron [req-82c4fd1e-4c66-447f-befd-03b19f2a631f req-5b1684ee-549d-4d8e-b6f3-1e691ca1aca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Refreshing network info cache for port 05c9980c-d230-4d61-9d98-4586e200fac5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 07:59:26 compute-0 podman[373658]: 2025-12-06 07:59:26.001305899 +0000 UTC m=+0.054772988 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, build-date=2023-02-22T09:23:20, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, release=1793, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, architecture=x86_64, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 06 07:59:26 compute-0 podman[373658]: 2025-12-06 07:59:26.017432765 +0000 UTC m=+0.070899824 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, release=1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., vcs-type=git, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec 06 07:59:26 compute-0 sudo[373341]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:59:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:59:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:26 compute-0 sudo[373688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:26 compute-0 nova_compute[251992]: 2025-12-06 07:59:26.147 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:26 compute-0 sudo[373688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:26 compute-0 sudo[373688]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:26 compute-0 sudo[373713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:59:26 compute-0 sudo[373713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:26 compute-0 sudo[373713]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:26 compute-0 sudo[373738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:26 compute-0 sudo[373738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:26 compute-0 sudo[373738]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:26 compute-0 sudo[373763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 07:59:26 compute-0 sudo[373763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019983816652707985 of space, bias 1.0, pg target 0.5995144995812396 quantized to 32 (current 32)
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004323374685929649 of space, bias 1.0, pg target 1.2970124057788945 quantized to 32 (current 32)
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 07:59:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:26.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:26 compute-0 sudo[373763]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:59:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:59:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 07:59:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:59:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 07:59:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev bc97a46f-054d-43fa-8a08-8fbf6d554ae8 does not exist
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9a2dcda4-e48e-4c6c-bf43-156cf2aac845 does not exist
Dec 06 07:59:26 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev dc9f12f5-d87d-493b-8c4f-365fe2540cdd does not exist
Dec 06 07:59:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 07:59:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:59:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 07:59:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:59:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 07:59:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:59:26 compute-0 sudo[373819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:26 compute-0 sudo[373819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:26 compute-0 sudo[373819]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:27 compute-0 sudo[373844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:59:27 compute-0 sudo[373844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:27 compute-0 sudo[373844]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:27 compute-0 sudo[373869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:27 compute-0 sudo[373869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:27 compute-0 sudo[373869]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:27 compute-0 sudo[373895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 07:59:27 compute-0 sudo[373895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:27.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 07:59:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 07:59:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 07:59:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 07:59:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 07:59:27 compute-0 ceph-mon[74339]: pgmap v3195: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 199 op/s
Dec 06 07:59:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:59:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 07:59:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 07:59:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 07:59:27 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 07:59:27 compute-0 sudo[373932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:27 compute-0 sudo[373932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:27 compute-0 sudo[373932]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:27 compute-0 sudo[373968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:27 compute-0 sudo[373968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:27 compute-0 sudo[373968]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 214 op/s
Dec 06 07:59:27 compute-0 nova_compute[251992]: 2025-12-06 07:59:27.518 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:27 compute-0 podman[374007]: 2025-12-06 07:59:27.523872035 +0000 UTC m=+0.079117466 container create 918e9fae7a85fa962ec11ec9330d4a87dbc0ca1c5010dc5f59b679e913852a7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:59:27 compute-0 podman[374007]: 2025-12-06 07:59:27.466381463 +0000 UTC m=+0.021626914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:59:27 compute-0 systemd[1]: Started libpod-conmon-918e9fae7a85fa962ec11ec9330d4a87dbc0ca1c5010dc5f59b679e913852a7a.scope.
Dec 06 07:59:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:59:27 compute-0 podman[374007]: 2025-12-06 07:59:27.875264186 +0000 UTC m=+0.430509637 container init 918e9fae7a85fa962ec11ec9330d4a87dbc0ca1c5010dc5f59b679e913852a7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 07:59:27 compute-0 podman[374007]: 2025-12-06 07:59:27.881398071 +0000 UTC m=+0.436643502 container start 918e9fae7a85fa962ec11ec9330d4a87dbc0ca1c5010dc5f59b679e913852a7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:59:27 compute-0 bold_mendel[374024]: 167 167
Dec 06 07:59:27 compute-0 systemd[1]: libpod-918e9fae7a85fa962ec11ec9330d4a87dbc0ca1c5010dc5f59b679e913852a7a.scope: Deactivated successfully.
Dec 06 07:59:27 compute-0 podman[374007]: 2025-12-06 07:59:27.932774938 +0000 UTC m=+0.488020389 container attach 918e9fae7a85fa962ec11ec9330d4a87dbc0ca1c5010dc5f59b679e913852a7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 07:59:27 compute-0 podman[374007]: 2025-12-06 07:59:27.933873858 +0000 UTC m=+0.489119299 container died 918e9fae7a85fa962ec11ec9330d4a87dbc0ca1c5010dc5f59b679e913852a7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Dec 06 07:59:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aa9c5f4dc8af7c949b701c22c48b3fa5e109026d8782b3c515afd3e417f0622-merged.mount: Deactivated successfully.
Dec 06 07:59:27 compute-0 podman[374007]: 2025-12-06 07:59:27.987405262 +0000 UTC m=+0.542650693 container remove 918e9fae7a85fa962ec11ec9330d4a87dbc0ca1c5010dc5f59b679e913852a7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 07:59:28 compute-0 systemd[1]: libpod-conmon-918e9fae7a85fa962ec11ec9330d4a87dbc0ca1c5010dc5f59b679e913852a7a.scope: Deactivated successfully.
Dec 06 07:59:28 compute-0 podman[374050]: 2025-12-06 07:59:28.15888928 +0000 UTC m=+0.043197727 container create 31dd426d4b8e7deb38cf858da573588e0f6823be22609c3b3f80d00b2babbad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_colden, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:59:28 compute-0 systemd[1]: Started libpod-conmon-31dd426d4b8e7deb38cf858da573588e0f6823be22609c3b3f80d00b2babbad2.scope.
Dec 06 07:59:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7f9c2dbbf3c1eb66c051b6ff98bd95acf923b8ac206d21a146d163dd6913d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7f9c2dbbf3c1eb66c051b6ff98bd95acf923b8ac206d21a146d163dd6913d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7f9c2dbbf3c1eb66c051b6ff98bd95acf923b8ac206d21a146d163dd6913d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7f9c2dbbf3c1eb66c051b6ff98bd95acf923b8ac206d21a146d163dd6913d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7f9c2dbbf3c1eb66c051b6ff98bd95acf923b8ac206d21a146d163dd6913d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:28 compute-0 podman[374050]: 2025-12-06 07:59:28.139010533 +0000 UTC m=+0.023319000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:59:28 compute-0 podman[374050]: 2025-12-06 07:59:28.241975121 +0000 UTC m=+0.126283588 container init 31dd426d4b8e7deb38cf858da573588e0f6823be22609c3b3f80d00b2babbad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 07:59:28 compute-0 podman[374050]: 2025-12-06 07:59:28.250005888 +0000 UTC m=+0.134314335 container start 31dd426d4b8e7deb38cf858da573588e0f6823be22609c3b3f80d00b2babbad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 07:59:28 compute-0 podman[374050]: 2025-12-06 07:59:28.25487318 +0000 UTC m=+0.139181657 container attach 31dd426d4b8e7deb38cf858da573588e0f6823be22609c3b3f80d00b2babbad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_colden, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 07:59:28 compute-0 ceph-mon[74339]: pgmap v3196: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 214 op/s
Dec 06 07:59:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:28.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:29 compute-0 sad_colden[374066]: --> passed data devices: 0 physical, 1 LVM
Dec 06 07:59:29 compute-0 sad_colden[374066]: --> relative data size: 1.0
Dec 06 07:59:29 compute-0 sad_colden[374066]: --> All data devices are unavailable
Dec 06 07:59:29 compute-0 systemd[1]: libpod-31dd426d4b8e7deb38cf858da573588e0f6823be22609c3b3f80d00b2babbad2.scope: Deactivated successfully.
Dec 06 07:59:29 compute-0 podman[374050]: 2025-12-06 07:59:29.077056865 +0000 UTC m=+0.961365312 container died 31dd426d4b8e7deb38cf858da573588e0f6823be22609c3b3f80d00b2babbad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_colden, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 07:59:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd7f9c2dbbf3c1eb66c051b6ff98bd95acf923b8ac206d21a146d163dd6913d1-merged.mount: Deactivated successfully.
Dec 06 07:59:29 compute-0 podman[374050]: 2025-12-06 07:59:29.132286506 +0000 UTC m=+1.016594953 container remove 31dd426d4b8e7deb38cf858da573588e0f6823be22609c3b3f80d00b2babbad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 07:59:29 compute-0 systemd[1]: libpod-conmon-31dd426d4b8e7deb38cf858da573588e0f6823be22609c3b3f80d00b2babbad2.scope: Deactivated successfully.
Dec 06 07:59:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:29.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:29 compute-0 sudo[373895]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:29 compute-0 sudo[374094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:29 compute-0 sudo[374094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:29 compute-0 sudo[374094]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:29 compute-0 sudo[374119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:59:29 compute-0 sudo[374119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:29 compute-0 sudo[374119]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:29 compute-0 sudo[374144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:29 compute-0 sudo[374144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:29 compute-0 sudo[374144]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:29 compute-0 sudo[374169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 07:59:29 compute-0 sudo[374169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 196 op/s
Dec 06 07:59:29 compute-0 podman[374232]: 2025-12-06 07:59:29.722193324 +0000 UTC m=+0.048434339 container create c3f0e393f039e083475c4d96c2663310dc603a864c37fcd4d7c343f0cd1f92eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 07:59:29 compute-0 systemd[1]: Started libpod-conmon-c3f0e393f039e083475c4d96c2663310dc603a864c37fcd4d7c343f0cd1f92eb.scope.
Dec 06 07:59:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:59:29 compute-0 podman[374232]: 2025-12-06 07:59:29.790488677 +0000 UTC m=+0.116729692 container init c3f0e393f039e083475c4d96c2663310dc603a864c37fcd4d7c343f0cd1f92eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:59:29 compute-0 podman[374232]: 2025-12-06 07:59:29.796236112 +0000 UTC m=+0.122477107 container start c3f0e393f039e083475c4d96c2663310dc603a864c37fcd4d7c343f0cd1f92eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 07:59:29 compute-0 podman[374232]: 2025-12-06 07:59:29.700877348 +0000 UTC m=+0.027118363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:59:29 compute-0 naughty_kare[374248]: 167 167
Dec 06 07:59:29 compute-0 systemd[1]: libpod-c3f0e393f039e083475c4d96c2663310dc603a864c37fcd4d7c343f0cd1f92eb.scope: Deactivated successfully.
Dec 06 07:59:29 compute-0 podman[374232]: 2025-12-06 07:59:29.80324393 +0000 UTC m=+0.129484955 container attach c3f0e393f039e083475c4d96c2663310dc603a864c37fcd4d7c343f0cd1f92eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 07:59:29 compute-0 podman[374232]: 2025-12-06 07:59:29.803627881 +0000 UTC m=+0.129868886 container died c3f0e393f039e083475c4d96c2663310dc603a864c37fcd4d7c343f0cd1f92eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:59:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f8f82603974c75d0bb4fbf31aa5499cedffc7b5d020cf4277d56991bc865331-merged.mount: Deactivated successfully.
Dec 06 07:59:29 compute-0 podman[374232]: 2025-12-06 07:59:29.841733659 +0000 UTC m=+0.167974654 container remove c3f0e393f039e083475c4d96c2663310dc603a864c37fcd4d7c343f0cd1f92eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 07:59:29 compute-0 systemd[1]: libpod-conmon-c3f0e393f039e083475c4d96c2663310dc603a864c37fcd4d7c343f0cd1f92eb.scope: Deactivated successfully.
Dec 06 07:59:30 compute-0 podman[374273]: 2025-12-06 07:59:30.026599468 +0000 UTC m=+0.067309558 container create abfd598f410dd08f54b60738e4cd32b1fa15184ec0fb0910cdc5fd5828148768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 07:59:30 compute-0 systemd[1]: Started libpod-conmon-abfd598f410dd08f54b60738e4cd32b1fa15184ec0fb0910cdc5fd5828148768.scope.
Dec 06 07:59:30 compute-0 podman[374273]: 2025-12-06 07:59:29.989191748 +0000 UTC m=+0.029901868 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:59:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a92edf018982649a116bc1995af0d7b0e84647c5204894d08a695f551d772b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a92edf018982649a116bc1995af0d7b0e84647c5204894d08a695f551d772b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a92edf018982649a116bc1995af0d7b0e84647c5204894d08a695f551d772b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6a92edf018982649a116bc1995af0d7b0e84647c5204894d08a695f551d772b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:30 compute-0 podman[374273]: 2025-12-06 07:59:30.163701217 +0000 UTC m=+0.204411307 container init abfd598f410dd08f54b60738e4cd32b1fa15184ec0fb0910cdc5fd5828148768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Dec 06 07:59:30 compute-0 podman[374273]: 2025-12-06 07:59:30.169650668 +0000 UTC m=+0.210360758 container start abfd598f410dd08f54b60738e4cd32b1fa15184ec0fb0910cdc5fd5828148768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:59:30 compute-0 podman[374273]: 2025-12-06 07:59:30.173363778 +0000 UTC m=+0.214073868 container attach abfd598f410dd08f54b60738e4cd32b1fa15184ec0fb0910cdc5fd5828148768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 07:59:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:30.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:30 compute-0 ceph-mon[74339]: pgmap v3197: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 196 op/s
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]: {
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:     "0": [
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:         {
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             "devices": [
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "/dev/loop3"
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             ],
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             "lv_name": "ceph_lv0",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             "lv_size": "7511998464",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             "name": "ceph_lv0",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             "tags": {
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "ceph.cluster_name": "ceph",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "ceph.crush_device_class": "",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "ceph.encrypted": "0",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "ceph.osd_id": "0",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "ceph.type": "block",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:                 "ceph.vdo": "0"
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             },
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             "type": "block",
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:             "vg_name": "ceph_vg0"
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:         }
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]:     ]
Dec 06 07:59:30 compute-0 sweet_rosalind[374289]: }
Dec 06 07:59:30 compute-0 systemd[1]: libpod-abfd598f410dd08f54b60738e4cd32b1fa15184ec0fb0910cdc5fd5828148768.scope: Deactivated successfully.
Dec 06 07:59:30 compute-0 podman[374273]: 2025-12-06 07:59:30.935688119 +0000 UTC m=+0.976398199 container died abfd598f410dd08f54b60738e4cd32b1fa15184ec0fb0910cdc5fd5828148768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:59:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6a92edf018982649a116bc1995af0d7b0e84647c5204894d08a695f551d772b-merged.mount: Deactivated successfully.
Dec 06 07:59:30 compute-0 podman[374273]: 2025-12-06 07:59:30.999410077 +0000 UTC m=+1.040120167 container remove abfd598f410dd08f54b60738e4cd32b1fa15184ec0fb0910cdc5fd5828148768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rosalind, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 07:59:31 compute-0 systemd[1]: libpod-conmon-abfd598f410dd08f54b60738e4cd32b1fa15184ec0fb0910cdc5fd5828148768.scope: Deactivated successfully.
Dec 06 07:59:31 compute-0 sudo[374169]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:31 compute-0 sudo[374314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:31 compute-0 sudo[374314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:31 compute-0 sudo[374314]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:31.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:31 compute-0 nova_compute[251992]: 2025-12-06 07:59:31.149 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:31 compute-0 sudo[374339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:59:31 compute-0 sudo[374339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:31 compute-0 sudo[374339]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:31 compute-0 sudo[374364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:31 compute-0 sudo[374364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:31 compute-0 sudo[374364]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:31 compute-0 sudo[374389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 07:59:31 compute-0 sudo[374389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 495 KiB/s wr, 181 op/s
Dec 06 07:59:31 compute-0 nova_compute[251992]: 2025-12-06 07:59:31.542 251996 DEBUG nova.network.neutron [req-82c4fd1e-4c66-447f-befd-03b19f2a631f req-5b1684ee-549d-4d8e-b6f3-1e691ca1aca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updated VIF entry in instance network info cache for port 05c9980c-d230-4d61-9d98-4586e200fac5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 07:59:31 compute-0 nova_compute[251992]: 2025-12-06 07:59:31.543 251996 DEBUG nova.network.neutron [req-82c4fd1e-4c66-447f-befd-03b19f2a631f req-5b1684ee-549d-4d8e-b6f3-1e691ca1aca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating instance_info_cache with network_info: [{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:59:31 compute-0 nova_compute[251992]: 2025-12-06 07:59:31.578 251996 DEBUG oslo_concurrency.lockutils [req-82c4fd1e-4c66-447f-befd-03b19f2a631f req-5b1684ee-549d-4d8e-b6f3-1e691ca1aca2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:59:31 compute-0 podman[374454]: 2025-12-06 07:59:31.676662742 +0000 UTC m=+0.055407976 container create 9a4c4f73049e3d6d588896799ead9085d5dff7b2758a33582c691e08aad319ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 07:59:31 compute-0 nova_compute[251992]: 2025-12-06 07:59:31.683 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:31 compute-0 nova_compute[251992]: 2025-12-06 07:59:31.711 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:31 compute-0 nova_compute[251992]: 2025-12-06 07:59:31.712 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:31 compute-0 nova_compute[251992]: 2025-12-06 07:59:31.712 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:31 compute-0 nova_compute[251992]: 2025-12-06 07:59:31.712 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 07:59:31 compute-0 nova_compute[251992]: 2025-12-06 07:59:31.713 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:31 compute-0 systemd[1]: Started libpod-conmon-9a4c4f73049e3d6d588896799ead9085d5dff7b2758a33582c691e08aad319ff.scope.
Dec 06 07:59:31 compute-0 podman[374454]: 2025-12-06 07:59:31.656415186 +0000 UTC m=+0.035160440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:59:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:59:31 compute-0 podman[374454]: 2025-12-06 07:59:31.772591941 +0000 UTC m=+0.151337175 container init 9a4c4f73049e3d6d588896799ead9085d5dff7b2758a33582c691e08aad319ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 07:59:31 compute-0 podman[374454]: 2025-12-06 07:59:31.778859621 +0000 UTC m=+0.157604855 container start 9a4c4f73049e3d6d588896799ead9085d5dff7b2758a33582c691e08aad319ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:59:31 compute-0 podman[374454]: 2025-12-06 07:59:31.782084107 +0000 UTC m=+0.160829351 container attach 9a4c4f73049e3d6d588896799ead9085d5dff7b2758a33582c691e08aad319ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chebyshev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:59:31 compute-0 quizzical_chebyshev[374470]: 167 167
Dec 06 07:59:31 compute-0 systemd[1]: libpod-9a4c4f73049e3d6d588896799ead9085d5dff7b2758a33582c691e08aad319ff.scope: Deactivated successfully.
Dec 06 07:59:31 compute-0 conmon[374470]: conmon 9a4c4f73049e3d6d5888 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a4c4f73049e3d6d588896799ead9085d5dff7b2758a33582c691e08aad319ff.scope/container/memory.events
Dec 06 07:59:31 compute-0 podman[374454]: 2025-12-06 07:59:31.787310839 +0000 UTC m=+0.166056073 container died 9a4c4f73049e3d6d588896799ead9085d5dff7b2758a33582c691e08aad319ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 07:59:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dfd01f2af6afd9b93f72ba34a56711da442f424159dbaa2d36cb629dddcd86f-merged.mount: Deactivated successfully.
Dec 06 07:59:31 compute-0 podman[374454]: 2025-12-06 07:59:31.823341891 +0000 UTC m=+0.202087115 container remove 9a4c4f73049e3d6d588896799ead9085d5dff7b2758a33582c691e08aad319ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_chebyshev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 07:59:31 compute-0 systemd[1]: libpod-conmon-9a4c4f73049e3d6d588896799ead9085d5dff7b2758a33582c691e08aad319ff.scope: Deactivated successfully.
Dec 06 07:59:31 compute-0 podman[374513]: 2025-12-06 07:59:31.999831373 +0000 UTC m=+0.043286749 container create 2ed11174f65d4c4f09bb3261d5e78af7ea57ca6f426d3af837bc143f13b10a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 07:59:32 compute-0 systemd[1]: Started libpod-conmon-2ed11174f65d4c4f09bb3261d5e78af7ea57ca6f426d3af837bc143f13b10a30.scope.
Dec 06 07:59:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 07:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa33df917f9905a3a29ded6f533aeb48a5093879c32747e25d72ab2b97192889/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa33df917f9905a3a29ded6f533aeb48a5093879c32747e25d72ab2b97192889/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa33df917f9905a3a29ded6f533aeb48a5093879c32747e25d72ab2b97192889/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa33df917f9905a3a29ded6f533aeb48a5093879c32747e25d72ab2b97192889/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:32 compute-0 podman[374513]: 2025-12-06 07:59:31.980155322 +0000 UTC m=+0.023610728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 07:59:32 compute-0 podman[374513]: 2025-12-06 07:59:32.080657214 +0000 UTC m=+0.124112600 container init 2ed11174f65d4c4f09bb3261d5e78af7ea57ca6f426d3af837bc143f13b10a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 07:59:32 compute-0 podman[374513]: 2025-12-06 07:59:32.087375745 +0000 UTC m=+0.130831131 container start 2ed11174f65d4c4f09bb3261d5e78af7ea57ca6f426d3af837bc143f13b10a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 07:59:32 compute-0 podman[374513]: 2025-12-06 07:59:32.090491669 +0000 UTC m=+0.133947065 container attach 2ed11174f65d4c4f09bb3261d5e78af7ea57ca6f426d3af837bc143f13b10a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 07:59:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:59:32 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2834288287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:32 compute-0 nova_compute[251992]: 2025-12-06 07:59:32.188 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:32 compute-0 nova_compute[251992]: 2025-12-06 07:59:32.265 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:59:32 compute-0 nova_compute[251992]: 2025-12-06 07:59:32.266 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 07:59:32 compute-0 nova_compute[251992]: 2025-12-06 07:59:32.411 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 07:59:32 compute-0 nova_compute[251992]: 2025-12-06 07:59:32.412 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3897MB free_disk=20.946338653564453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 07:59:32 compute-0 nova_compute[251992]: 2025-12-06 07:59:32.412 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 07:59:32 compute-0 nova_compute[251992]: 2025-12-06 07:59:32.413 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 07:59:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:32.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:32 compute-0 nova_compute[251992]: 2025-12-06 07:59:32.508 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance d1a8ef9c-0ce7-4841-9523-7f11435a1884 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 07:59:32 compute-0 nova_compute[251992]: 2025-12-06 07:59:32.509 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 07:59:32 compute-0 nova_compute[251992]: 2025-12-06 07:59:32.509 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 07:59:32 compute-0 ceph-mon[74339]: pgmap v3198: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 495 KiB/s wr, 181 op/s
Dec 06 07:59:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2834288287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:32 compute-0 nova_compute[251992]: 2025-12-06 07:59:32.521 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:32 compute-0 nova_compute[251992]: 2025-12-06 07:59:32.548 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 07:59:33 compute-0 strange_jennings[374529]: {
Dec 06 07:59:33 compute-0 strange_jennings[374529]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 07:59:33 compute-0 strange_jennings[374529]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 07:59:33 compute-0 strange_jennings[374529]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 07:59:33 compute-0 strange_jennings[374529]:         "osd_id": 0,
Dec 06 07:59:33 compute-0 strange_jennings[374529]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 07:59:33 compute-0 strange_jennings[374529]:         "type": "bluestore"
Dec 06 07:59:33 compute-0 strange_jennings[374529]:     }
Dec 06 07:59:33 compute-0 strange_jennings[374529]: }
Dec 06 07:59:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 07:59:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285353139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:33 compute-0 systemd[1]: libpod-2ed11174f65d4c4f09bb3261d5e78af7ea57ca6f426d3af837bc143f13b10a30.scope: Deactivated successfully.
Dec 06 07:59:33 compute-0 nova_compute[251992]: 2025-12-06 07:59:33.083 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 07:59:33 compute-0 nova_compute[251992]: 2025-12-06 07:59:33.089 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 07:59:33 compute-0 podman[374575]: 2025-12-06 07:59:33.10001072 +0000 UTC m=+0.029996360 container died 2ed11174f65d4c4f09bb3261d5e78af7ea57ca6f426d3af837bc143f13b10a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 07:59:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa33df917f9905a3a29ded6f533aeb48a5093879c32747e25d72ab2b97192889-merged.mount: Deactivated successfully.
Dec 06 07:59:33 compute-0 nova_compute[251992]: 2025-12-06 07:59:33.120 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 07:59:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:33.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:33 compute-0 podman[374575]: 2025-12-06 07:59:33.155079846 +0000 UTC m=+0.085065476 container remove 2ed11174f65d4c4f09bb3261d5e78af7ea57ca6f426d3af837bc143f13b10a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_jennings, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 07:59:33 compute-0 systemd[1]: libpod-conmon-2ed11174f65d4c4f09bb3261d5e78af7ea57ca6f426d3af837bc143f13b10a30.scope: Deactivated successfully.
Dec 06 07:59:33 compute-0 nova_compute[251992]: 2025-12-06 07:59:33.163 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 07:59:33 compute-0 nova_compute[251992]: 2025-12-06 07:59:33.163 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 07:59:33 compute-0 sudo[374389]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 07:59:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 07:59:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:33 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7636f089-8f6c-4dcc-8cae-e45e68bca7cc does not exist
Dec 06 07:59:33 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 007a09ff-e1b7-4081-a57e-1cc7e6d2de61 does not exist
Dec 06 07:59:33 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 61b2960b-a44d-4a22-b55b-732364ba1d44 does not exist
Dec 06 07:59:33 compute-0 sudo[374589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:33 compute-0 sudo[374589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:33 compute-0 sudo[374589]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:33 compute-0 sudo[374614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:59:33 compute-0 sudo[374614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:33 compute-0 sudo[374614]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 37 KiB/s wr, 89 op/s
Dec 06 07:59:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2230960405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/285353139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 07:59:33 compute-0 ovn_controller[147168]: 2025-12-06T07:59:33Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:35:99 10.100.0.14
Dec 06 07:59:33 compute-0 ovn_controller[147168]: 2025-12-06T07:59:33Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:35:99 10.100.0.14
Dec 06 07:59:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:33.912322) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007973912610, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1754, "num_deletes": 253, "total_data_size": 3003767, "memory_usage": 3054376, "flush_reason": "Manual Compaction"}
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007973924744, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 1835587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63085, "largest_seqno": 64838, "table_properties": {"data_size": 1829327, "index_size": 3205, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16842, "raw_average_key_size": 21, "raw_value_size": 1815433, "raw_average_value_size": 2318, "num_data_blocks": 142, "num_entries": 783, "num_filter_entries": 783, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007814, "oldest_key_time": 1765007814, "file_creation_time": 1765007973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 12476 microseconds, and 6228 cpu microseconds.
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:33.924803) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 1835587 bytes OK
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:33.924822) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:33.926221) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:33.926298) EVENT_LOG_v1 {"time_micros": 1765007973926285, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:33.926323) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 2996300, prev total WAL file size 2996300, number of live WAL files 2.
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:33.927870) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323736' seq:72057594037927935, type:22 .. '6D6772737461740032353237' seq:0, type:0; will stop at (end)
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(1792KB)], [140(12MB)]
Dec 06 07:59:33 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007973927970, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 14492913, "oldest_snapshot_seqno": -1}
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 9913 keys, 11730226 bytes, temperature: kUnknown
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007974004826, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 11730226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11667877, "index_size": 36444, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24837, "raw_key_size": 261414, "raw_average_key_size": 26, "raw_value_size": 11495614, "raw_average_value_size": 1159, "num_data_blocks": 1384, "num_entries": 9913, "num_filter_entries": 9913, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765007973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:34.005236) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 11730226 bytes
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:34.006678) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.3 rd, 152.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.1 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(14.3) write-amplify(6.4) OK, records in: 10366, records dropped: 453 output_compression: NoCompression
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:34.006716) EVENT_LOG_v1 {"time_micros": 1765007974006702, "job": 86, "event": "compaction_finished", "compaction_time_micros": 76967, "compaction_time_cpu_micros": 31774, "output_level": 6, "num_output_files": 1, "total_output_size": 11730226, "num_input_records": 10366, "num_output_records": 9913, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007974007188, "job": 86, "event": "table_file_deletion", "file_number": 142}
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765007974009331, "job": 86, "event": "table_file_deletion", "file_number": 140}
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:33.927653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:34.009411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:34.009415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:34.009417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:34.009418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:59:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-07:59:34.009420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 07:59:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:34.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:34 compute-0 ceph-mon[74339]: pgmap v3199: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 37 KiB/s wr, 89 op/s
Dec 06 07:59:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1606500053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:35.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 344 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.9 MiB/s wr, 183 op/s
Dec 06 07:59:36 compute-0 nova_compute[251992]: 2025-12-06 07:59:36.152 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:36.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:36 compute-0 ceph-mon[74339]: pgmap v3200: 305 pgs: 305 active+clean; 344 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.9 MiB/s wr, 183 op/s
Dec 06 07:59:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:37.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 172 op/s
Dec 06 07:59:37 compute-0 nova_compute[251992]: 2025-12-06 07:59:37.522 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:38 compute-0 nova_compute[251992]: 2025-12-06 07:59:38.130 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:59:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:38.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:59:38 compute-0 nova_compute[251992]: 2025-12-06 07:59:38.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:39.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:39 compute-0 ceph-mon[74339]: pgmap v3201: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 172 op/s
Dec 06 07:59:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 4.3 MiB/s wr, 131 op/s
Dec 06 07:59:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:40.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:40 compute-0 nova_compute[251992]: 2025-12-06 07:59:40.817 251996 INFO nova.compute.manager [None req-dc26eacb-ff3f-474a-ab63-e998ca78d0d4 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Get console output
Dec 06 07:59:40 compute-0 nova_compute[251992]: 2025-12-06 07:59:40.847 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 07:59:41 compute-0 nova_compute[251992]: 2025-12-06 07:59:41.155 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:41.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:41 compute-0 ceph-mon[74339]: pgmap v3202: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 4.3 MiB/s wr, 131 op/s
Dec 06 07:59:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2980456092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Dec 06 07:59:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3364756026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:42.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:42 compute-0 nova_compute[251992]: 2025-12-06 07:59:42.525 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:59:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:59:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:59:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:59:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 07:59:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 07:59:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:43.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Dec 06 07:59:43 compute-0 nova_compute[251992]: 2025-12-06 07:59:43.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:43 compute-0 nova_compute[251992]: 2025-12-06 07:59:43.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:43 compute-0 nova_compute[251992]: 2025-12-06 07:59:43.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:44.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:44 compute-0 nova_compute[251992]: 2025-12-06 07:59:44.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:45.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:45 compute-0 nova_compute[251992]: 2025-12-06 07:59:45.364 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:45 compute-0 nova_compute[251992]: 2025-12-06 07:59:45.365 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 07:59:45 compute-0 nova_compute[251992]: 2025-12-06 07:59:45.365 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 07:59:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 663 KiB/s rd, 4.3 MiB/s wr, 143 op/s
Dec 06 07:59:46 compute-0 nova_compute[251992]: 2025-12-06 07:59:46.157 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:46 compute-0 nova_compute[251992]: 2025-12-06 07:59:46.176 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 07:59:46 compute-0 nova_compute[251992]: 2025-12-06 07:59:46.177 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 07:59:46 compute-0 nova_compute[251992]: 2025-12-06 07:59:46.177 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 07:59:46 compute-0 nova_compute[251992]: 2025-12-06 07:59:46.177 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 07:59:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:46.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:46 compute-0 nova_compute[251992]: 2025-12-06 07:59:46.815 251996 INFO nova.compute.manager [None req-ce7d7ee0-3327-4b1f-9636-d0375acc83e9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Get console output
Dec 06 07:59:46 compute-0 nova_compute[251992]: 2025-12-06 07:59:46.821 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 07:59:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:47.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 348 KiB/s rd, 1.4 MiB/s wr, 50 op/s
Dec 06 07:59:47 compute-0 sudo[374646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:47 compute-0 sudo[374646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:47 compute-0 sudo[374646]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:47 compute-0 nova_compute[251992]: 2025-12-06 07:59:47.526 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:47 compute-0 sudo[374671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 07:59:47 compute-0 sudo[374671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 07:59:47 compute-0 sudo[374671]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:47 compute-0 ceph-mon[74339]: pgmap v3203: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Dec 06 07:59:48 compute-0 podman[374696]: 2025-12-06 07:59:48.425760719 +0000 UTC m=+0.083631818 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 07:59:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:48.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:48 compute-0 ceph-mon[74339]: pgmap v3204: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 654 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Dec 06 07:59:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1719363118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 07:59:48 compute-0 ceph-mon[74339]: pgmap v3205: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 663 KiB/s rd, 4.3 MiB/s wr, 143 op/s
Dec 06 07:59:48 compute-0 ceph-mon[74339]: pgmap v3206: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 348 KiB/s rd, 1.4 MiB/s wr, 50 op/s
Dec 06 07:59:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:49.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 25 KiB/s wr, 14 op/s
Dec 06 07:59:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:50.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:51 compute-0 nova_compute[251992]: 2025-12-06 07:59:51.159 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:51.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 31 KiB/s wr, 14 op/s
Dec 06 07:59:51 compute-0 nova_compute[251992]: 2025-12-06 07:59:51.613 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating instance_info_cache with network_info: [{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 07:59:51 compute-0 nova_compute[251992]: 2025-12-06 07:59:51.651 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 07:59:51 compute-0 nova_compute[251992]: 2025-12-06 07:59:51.652 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 07:59:51 compute-0 nova_compute[251992]: 2025-12-06 07:59:51.652 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:51 compute-0 nova_compute[251992]: 2025-12-06 07:59:51.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 07:59:51 compute-0 nova_compute[251992]: 2025-12-06 07:59:51.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 07:59:51 compute-0 ceph-mon[74339]: pgmap v3207: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 25 KiB/s wr, 14 op/s
Dec 06 07:59:52 compute-0 sshd-session[374724]: Invalid user solana from 80.94.92.182 port 60590
Dec 06 07:59:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:52.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:52 compute-0 nova_compute[251992]: 2025-12-06 07:59:52.527 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:52 compute-0 sshd-session[374724]: Connection closed by invalid user solana 80.94.92.182 port 60590 [preauth]
Dec 06 07:59:52 compute-0 ceph-mon[74339]: pgmap v3208: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 31 KiB/s wr, 14 op/s
Dec 06 07:59:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:53.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 6.2 KiB/s wr, 13 op/s
Dec 06 07:59:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:54.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:55 compute-0 ceph-mon[74339]: pgmap v3209: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 6.2 KiB/s wr, 13 op/s
Dec 06 07:59:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:59:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:55.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:59:55 compute-0 podman[374730]: 2025-12-06 07:59:55.407129654 +0000 UTC m=+0.059571738 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 06 07:59:55 compute-0 podman[374729]: 2025-12-06 07:59:55.414617997 +0000 UTC m=+0.070817352 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 07:59:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 6.2 KiB/s wr, 13 op/s
Dec 06 07:59:56 compute-0 nova_compute[251992]: 2025-12-06 07:59:56.162 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 07:59:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:56.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 07:59:57 compute-0 ceph-mon[74339]: pgmap v3210: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 6.2 KiB/s wr, 13 op/s
Dec 06 07:59:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:59:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:57.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:59:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 07:59:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2052642224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 7.7 KiB/s wr, 1 op/s
Dec 06 07:59:57 compute-0 nova_compute[251992]: 2025-12-06 07:59:57.528 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2052642224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 07:59:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 07:59:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:07:59:58.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 07:59:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 07:59:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 07:59:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 07:59:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:07:59:59.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 07:59:59 compute-0 ceph-mon[74339]: pgmap v3211: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 7.7 KiB/s wr, 1 op/s
Dec 06 07:59:59 compute-0 nova_compute[251992]: 2025-12-06 07:59:59.341 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 07:59:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:59.342 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 07:59:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 07:59:59.344 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 07:59:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 06 08:00:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 08:00:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1256096928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:00 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 08:00:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:00.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:00 compute-0 nova_compute[251992]: 2025-12-06 08:00:00.767 251996 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:00:00 compute-0 nova_compute[251992]: 2025-12-06 08:00:00.767 251996 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquired lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:00:00 compute-0 nova_compute[251992]: 2025-12-06 08:00:00.767 251996 DEBUG nova.network.neutron [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:00:01 compute-0 nova_compute[251992]: 2025-12-06 08:00:01.164 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:01.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:01 compute-0 ceph-mon[74339]: pgmap v3212: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 06 08:00:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 06 08:00:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3876234082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:02 compute-0 ceph-mon[74339]: pgmap v3213: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec 06 08:00:02 compute-0 nova_compute[251992]: 2025-12-06 08:00:02.530 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:02.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:03.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 06 08:00:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:03.871 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:03.872 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:03.873 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:04 compute-0 ceph-mon[74339]: pgmap v3214: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec 06 08:00:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:04.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:05.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 3.1 KiB/s wr, 0 op/s
Dec 06 08:00:06 compute-0 nova_compute[251992]: 2025-12-06 08:00:06.144 251996 DEBUG nova.network.neutron [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating instance_info_cache with network_info: [{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:00:06 compute-0 nova_compute[251992]: 2025-12-06 08:00:06.161 251996 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Releasing lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:00:06 compute-0 nova_compute[251992]: 2025-12-06 08:00:06.165 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:06 compute-0 nova_compute[251992]: 2025-12-06 08:00:06.264 251996 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Dec 06 08:00:06 compute-0 nova_compute[251992]: 2025-12-06 08:00:06.265 251996 DEBUG nova.virt.libvirt.volume.remotefs [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Creating file /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/70ea66e3afe44cf9b852b8ab8f6f3eb2.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Dec 06 08:00:06 compute-0 nova_compute[251992]: 2025-12-06 08:00:06.265 251996 DEBUG oslo_concurrency.processutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/70ea66e3afe44cf9b852b8ab8f6f3eb2.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:06.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:06 compute-0 ceph-mon[74339]: pgmap v3215: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 3.1 KiB/s wr, 0 op/s
Dec 06 08:00:06 compute-0 nova_compute[251992]: 2025-12-06 08:00:06.651 251996 DEBUG oslo_concurrency.processutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/70ea66e3afe44cf9b852b8ab8f6f3eb2.tmp" returned: 1 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:06 compute-0 nova_compute[251992]: 2025-12-06 08:00:06.652 251996 DEBUG oslo_concurrency.processutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884/70ea66e3afe44cf9b852b8ab8f6f3eb2.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 08:00:06 compute-0 nova_compute[251992]: 2025-12-06 08:00:06.652 251996 DEBUG nova.virt.libvirt.volume.remotefs [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Creating directory /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Dec 06 08:00:06 compute-0 nova_compute[251992]: 2025-12-06 08:00:06.652 251996 DEBUG oslo_concurrency.processutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:06 compute-0 nova_compute[251992]: 2025-12-06 08:00:06.889 251996 DEBUG oslo_concurrency.processutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/d1a8ef9c-0ce7-4841-9523-7f11435a1884" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:06 compute-0 nova_compute[251992]: 2025-12-06 08:00:06.897 251996 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 06 08:00:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:07.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 6.4 KiB/s wr, 1 op/s
Dec 06 08:00:07 compute-0 nova_compute[251992]: 2025-12-06 08:00:07.533 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:07 compute-0 sudo[374775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:07 compute-0 sudo[374775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:07 compute-0 sudo[374775]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:07 compute-0 sudo[374800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:07 compute-0 sudo[374800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:07 compute-0 sudo[374800]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:08.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:08 compute-0 ceph-mon[74339]: pgmap v3216: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 6.4 KiB/s wr, 1 op/s
Dec 06 08:00:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:09.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:09.347 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.4 KiB/s wr, 1 op/s
Dec 06 08:00:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3263341091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:00:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3263341091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:00:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1989904733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:00:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:10.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:10 compute-0 ceph-mon[74339]: pgmap v3217: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.4 KiB/s wr, 1 op/s
Dec 06 08:00:10 compute-0 kernel: tap05c9980c-d2 (unregistering): left promiscuous mode
Dec 06 08:00:10 compute-0 NetworkManager[48965]: <info>  [1765008010.8562] device (tap05c9980c-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:00:10 compute-0 ovn_controller[147168]: 2025-12-06T08:00:10Z|00691|binding|INFO|Releasing lport 05c9980c-d230-4d61-9d98-4586e200fac5 from this chassis (sb_readonly=0)
Dec 06 08:00:10 compute-0 ovn_controller[147168]: 2025-12-06T08:00:10Z|00692|binding|INFO|Setting lport 05c9980c-d230-4d61-9d98-4586e200fac5 down in Southbound
Dec 06 08:00:10 compute-0 nova_compute[251992]: 2025-12-06 08:00:10.864 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:10 compute-0 ovn_controller[147168]: 2025-12-06T08:00:10Z|00693|binding|INFO|Removing iface tap05c9980c-d2 ovn-installed in OVS
Dec 06 08:00:10 compute-0 nova_compute[251992]: 2025-12-06 08:00:10.867 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:10 compute-0 nova_compute[251992]: 2025-12-06 08:00:10.889 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:10 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b5.scope: Deactivated successfully.
Dec 06 08:00:10 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b5.scope: Consumed 16.034s CPU time.
Dec 06 08:00:10 compute-0 systemd-machined[212986]: Machine qemu-84-instance-000000b5 terminated.
Dec 06 08:00:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:10.973 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:35:99 10.100.0.14'], port_security=['fa:16:3e:28:35:99 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd1a8ef9c-0ce7-4841-9523-7f11435a1884', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6d104e6f-7c98-46ec-bf5b-e2d926211253', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c0a0ddb-0305-44e4-8a0a-e612a87c4904, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=05c9980c-d230-4d61-9d98-4586e200fac5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:00:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:10.975 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 05c9980c-d230-4d61-9d98-4586e200fac5 in datapath ecf01de9-e04e-423a-b106-dcf22b107dc4 unbound from our chassis
Dec 06 08:00:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:10.980 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ecf01de9-e04e-423a-b106-dcf22b107dc4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:00:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:10.984 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ea060b-2866-4fa9-9144-400b8976ed33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:10 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:10.985 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4 namespace which is not needed anymore
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.099 251996 INFO nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Instance shutdown successfully after 4 seconds.
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.107 251996 INFO nova.virt.libvirt.driver [-] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Instance destroyed successfully.
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.108 251996 DEBUG nova.virt.libvirt.vif [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:58:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1262973925',display_name='tempest-TestNetworkAdvancedServerOps-server-1262973925',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1262973925',id=181,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHajJo6sYKmo7BjEFfbTegHFFaysH3CPUR6yuP2Rayw3S9ts1Wd6TY6anx2QtLxK6yp4z4nQqn7Ss4CGPtBiZQsZd5U8dFeDqjYG81KqlV6e9SPXI48qB0u9ty6SGnMpqw==',key_name='tempest-TestNetworkAdvancedServerOps-720860597',keypairs=<?>,launch_index=0,launched_at=2025-12-06T07:59:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-1koojzsy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T07:59:56Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=d1a8ef9c-0ce7-4841-9523-7f11435a1884,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--425482001", "vif_mac": "fa:16:3e:28:35:99"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.109 251996 DEBUG nova.network.os_vif_util [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converting VIF {"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--425482001", "vif_mac": "fa:16:3e:28:35:99"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.110 251996 DEBUG nova.network.os_vif_util [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.111 251996 DEBUG os_vif [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.113 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.113 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05c9980c-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.115 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.117 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.124 251996 INFO os_vif [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2')
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.129 251996 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.129 251996 DEBUG nova.virt.libvirt.driver [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:00:11 compute-0 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[373149]: [NOTICE]   (373186) : haproxy version is 2.8.14-c23fe91
Dec 06 08:00:11 compute-0 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[373149]: [NOTICE]   (373186) : path to executable is /usr/sbin/haproxy
Dec 06 08:00:11 compute-0 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[373149]: [WARNING]  (373186) : Exiting Master process...
Dec 06 08:00:11 compute-0 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[373149]: [ALERT]    (373186) : Current worker (373194) exited with code 143 (Terminated)
Dec 06 08:00:11 compute-0 neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4[373149]: [WARNING]  (373186) : All workers exited. Exiting... (0)
Dec 06 08:00:11 compute-0 systemd[1]: libpod-7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19.scope: Deactivated successfully.
Dec 06 08:00:11 compute-0 podman[374851]: 2025-12-06 08:00:11.156473812 +0000 UTC m=+0.065266622 container died 7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:00:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19-userdata-shm.mount: Deactivated successfully.
Dec 06 08:00:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-11f0be4bcd37072b072347183e2eb36ffa76b65153065604abc73149306e3741-merged.mount: Deactivated successfully.
Dec 06 08:00:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:11.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:11 compute-0 podman[374851]: 2025-12-06 08:00:11.202521195 +0000 UTC m=+0.111313965 container cleanup 7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:00:11 compute-0 systemd[1]: libpod-conmon-7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19.scope: Deactivated successfully.
Dec 06 08:00:11 compute-0 podman[374890]: 2025-12-06 08:00:11.268437693 +0000 UTC m=+0.043880754 container remove 7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:00:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:11.274 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[203e6058-709e-4734-958c-b3753c34609f]: (4, ('Sat Dec  6 08:00:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4 (7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19)\n7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19\nSat Dec  6 08:00:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4 (7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19)\n7273d8ed719420a93d23ee4301d7b18669ad7738f895a8e8ef8e04a207387c19\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:11.276 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e35ce841-16af-4e31-a768-deac5b6c553e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:11.278 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapecf01de9-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.280 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:11 compute-0 kernel: tapecf01de9-e0: left promiscuous mode
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.283 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:11.285 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[68f46ada-e328-4ce4-9956-55ed597201ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.296 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:11.302 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9f483bc8-4e82-403a-9d12-28fdbc7a4b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:11.303 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce8f8d6-0c1b-40bc-8aef-7b1eb215a321]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.308 251996 DEBUG neutronclient.v2_0.client [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 05c9980c-d230-4d61-9d98-4586e200fac5 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Dec 06 08:00:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:11.320 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5e646de4-0be0-45ab-94d8-e38ca899cadf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822987, 'reachable_time': 21495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374905, 'error': None, 'target': 'ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:11 compute-0 systemd[1]: run-netns-ovnmeta\x2decf01de9\x2de04e\x2d423a\x2db106\x2ddcf22b107dc4.mount: Deactivated successfully.
Dec 06 08:00:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:11.325 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ecf01de9-e04e-423a-b106-dcf22b107dc4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:00:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:11.326 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[97c281aa-38c4-4cf0-8085-14f3343fa6f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.437 251996 DEBUG nova.compute.manager [req-1db7003a-7e7e-4efc-ac76-208674fb26aa req-44de25ee-4b2f-40bb-9296-46d1d6d9e8f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-unplugged-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.438 251996 DEBUG oslo_concurrency.lockutils [req-1db7003a-7e7e-4efc-ac76-208674fb26aa req-44de25ee-4b2f-40bb-9296-46d1d6d9e8f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.438 251996 DEBUG oslo_concurrency.lockutils [req-1db7003a-7e7e-4efc-ac76-208674fb26aa req-44de25ee-4b2f-40bb-9296-46d1d6d9e8f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.438 251996 DEBUG oslo_concurrency.lockutils [req-1db7003a-7e7e-4efc-ac76-208674fb26aa req-44de25ee-4b2f-40bb-9296-46d1d6d9e8f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.439 251996 DEBUG nova.compute.manager [req-1db7003a-7e7e-4efc-ac76-208674fb26aa req-44de25ee-4b2f-40bb-9296-46d1d6d9e8f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] No waiting events found dispatching network-vif-unplugged-05c9980c-d230-4d61-9d98-4586e200fac5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.439 251996 WARNING nova.compute.manager [req-1db7003a-7e7e-4efc-ac76-208674fb26aa req-44de25ee-4b2f-40bb-9296-46d1d6d9e8f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received unexpected event network-vif-unplugged-05c9980c-d230-4d61-9d98-4586e200fac5 for instance with vm_state active and task_state resize_migrating.
Dec 06 08:00:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 32 KiB/s wr, 4 op/s
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.459 251996 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.460 251996 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:11 compute-0 nova_compute[251992]: 2025-12-06 08:00:11.460 251996 DEBUG oslo_concurrency.lockutils [None req-c0f0095b-d508-4403-a689-5b9203472ba0 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:12 compute-0 nova_compute[251992]: 2025-12-06 08:00:12.536 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:12.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:12 compute-0 ceph-mon[74339]: pgmap v3218: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 32 KiB/s wr, 4 op/s
Dec 06 08:00:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:00:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:00:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:00:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:00:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:00:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:00:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:13.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 32 KiB/s wr, 4 op/s
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.610 251996 DEBUG nova.compute.manager [req-ca50f585-bfbf-4c8d-a0e6-f58c7b5842ec req-7a6bbeb9-4e25-495d-8840-22a8c00e3b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.610 251996 DEBUG oslo_concurrency.lockutils [req-ca50f585-bfbf-4c8d-a0e6-f58c7b5842ec req-7a6bbeb9-4e25-495d-8840-22a8c00e3b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.611 251996 DEBUG oslo_concurrency.lockutils [req-ca50f585-bfbf-4c8d-a0e6-f58c7b5842ec req-7a6bbeb9-4e25-495d-8840-22a8c00e3b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.611 251996 DEBUG oslo_concurrency.lockutils [req-ca50f585-bfbf-4c8d-a0e6-f58c7b5842ec req-7a6bbeb9-4e25-495d-8840-22a8c00e3b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.611 251996 DEBUG nova.compute.manager [req-ca50f585-bfbf-4c8d-a0e6-f58c7b5842ec req-7a6bbeb9-4e25-495d-8840-22a8c00e3b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] No waiting events found dispatching network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.611 251996 WARNING nova.compute.manager [req-ca50f585-bfbf-4c8d-a0e6-f58c7b5842ec req-7a6bbeb9-4e25-495d-8840-22a8c00e3b53 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received unexpected event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 for instance with vm_state active and task_state resize_migrated.
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.686 251996 DEBUG nova.compute.manager [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-changed-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.687 251996 DEBUG nova.compute.manager [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Refreshing instance network info cache due to event network-changed-05c9980c-d230-4d61-9d98-4586e200fac5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.687 251996 DEBUG oslo_concurrency.lockutils [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.687 251996 DEBUG oslo_concurrency.lockutils [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.688 251996 DEBUG nova.network.neutron [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Refreshing network info cache for port 05c9980c-d230-4d61-9d98-4586e200fac5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:00:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.927 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.928 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:13 compute-0 nova_compute[251992]: 2025-12-06 08:00:13.945 251996 DEBUG nova.compute.manager [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.040 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.040 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.050 251996 DEBUG nova.virt.hardware [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.050 251996 INFO nova.compute.claims [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.195 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:00:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:14.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:00:14 compute-0 ceph-mon[74339]: pgmap v3219: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 32 KiB/s wr, 4 op/s
Dec 06 08:00:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:00:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/903707780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.644 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.649 251996 DEBUG nova.compute.provider_tree [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.667 251996 DEBUG nova.scheduler.client.report [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.691 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.692 251996 DEBUG nova.compute.manager [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.736 251996 DEBUG nova.compute.manager [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.737 251996 DEBUG nova.network.neutron [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.769 251996 INFO nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.796 251996 DEBUG nova.compute.manager [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.881 251996 DEBUG nova.compute.manager [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.882 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.883 251996 INFO nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Creating image(s)
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.914 251996 DEBUG nova.storage.rbd_utils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.942 251996 DEBUG nova.storage.rbd_utils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.967 251996 DEBUG nova.storage.rbd_utils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:00:14 compute-0 nova_compute[251992]: 2025-12-06 08:00:14.970 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.037 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.038 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.038 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.039 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.065 251996 DEBUG nova.storage.rbd_utils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.068 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:15.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.215 251996 DEBUG nova.policy [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.317 251996 DEBUG nova.network.neutron [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updated VIF entry in instance network info cache for port 05c9980c-d230-4d61-9d98-4586e200fac5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.318 251996 DEBUG nova.network.neutron [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating instance_info_cache with network_info: [{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.417 251996 DEBUG oslo_concurrency.lockutils [req-b1a1fa4a-84bc-4138-8022-fbbc424b4ec9 req-2d74281c-c30e-444f-bab0-3c8c1aac4a9f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.446 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 363 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 779 KiB/s rd, 50 KiB/s wr, 43 op/s
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.530 251996 DEBUG nova.storage.rbd_utils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image 8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:00:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/903707780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.662 251996 DEBUG nova.objects.instance [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid 8f7f3d80-9c81-41ab-9009-09c77ea059c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.698 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.699 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Ensure instance console log exists: /var/lib/nova/instances/8f7f3d80-9c81-41ab-9009-09c77ea059c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.700 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.700 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:15 compute-0 nova_compute[251992]: 2025-12-06 08:00:15.700 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:16 compute-0 nova_compute[251992]: 2025-12-06 08:00:16.115 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:00:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:16.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:00:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Dec 06 08:00:16 compute-0 ceph-mon[74339]: pgmap v3220: 305 pgs: 305 active+clean; 363 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 779 KiB/s rd, 50 KiB/s wr, 43 op/s
Dec 06 08:00:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Dec 06 08:00:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Dec 06 08:00:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:17.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:17 compute-0 nova_compute[251992]: 2025-12-06 08:00:17.343 251996 DEBUG nova.network.neutron [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Successfully created port: 9781e762-7ca0-4640-b89b-3924c5259021 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:00:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 368 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 622 KiB/s wr, 98 op/s
Dec 06 08:00:17 compute-0 nova_compute[251992]: 2025-12-06 08:00:17.538 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:17 compute-0 ceph-mon[74339]: osdmap e408: 3 total, 3 up, 3 in
Dec 06 08:00:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3011078489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.356 251996 DEBUG nova.compute.manager [req-7fbf0448-d48f-4d44-a668-2e53116cba77 req-1e779d7d-c7cd-4321-afcf-de5d04cc5f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.357 251996 DEBUG oslo_concurrency.lockutils [req-7fbf0448-d48f-4d44-a668-2e53116cba77 req-1e779d7d-c7cd-4321-afcf-de5d04cc5f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.357 251996 DEBUG oslo_concurrency.lockutils [req-7fbf0448-d48f-4d44-a668-2e53116cba77 req-1e779d7d-c7cd-4321-afcf-de5d04cc5f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.357 251996 DEBUG oslo_concurrency.lockutils [req-7fbf0448-d48f-4d44-a668-2e53116cba77 req-1e779d7d-c7cd-4321-afcf-de5d04cc5f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.357 251996 DEBUG nova.compute.manager [req-7fbf0448-d48f-4d44-a668-2e53116cba77 req-1e779d7d-c7cd-4321-afcf-de5d04cc5f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] No waiting events found dispatching network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.357 251996 WARNING nova.compute.manager [req-7fbf0448-d48f-4d44-a668-2e53116cba77 req-1e779d7d-c7cd-4321-afcf-de5d04cc5f42 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received unexpected event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 for instance with vm_state active and task_state resize_finish.
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.441 251996 DEBUG nova.network.neutron [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Successfully updated port: 9781e762-7ca0-4640-b89b-3924c5259021 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.463 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-8f7f3d80-9c81-41ab-9009-09c77ea059c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.464 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-8f7f3d80-9c81-41ab-9009-09c77ea059c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.464 251996 DEBUG nova.network.neutron [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.542 251996 DEBUG nova.compute.manager [req-5ed9e0b9-64a6-439d-b31d-db457dd7f5fc req-ceab9dea-75b6-4092-ac2c-59b82d29f894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Received event network-changed-9781e762-7ca0-4640-b89b-3924c5259021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.542 251996 DEBUG nova.compute.manager [req-5ed9e0b9-64a6-439d-b31d-db457dd7f5fc req-ceab9dea-75b6-4092-ac2c-59b82d29f894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Refreshing instance network info cache due to event network-changed-9781e762-7ca0-4640-b89b-3924c5259021. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.543 251996 DEBUG oslo_concurrency.lockutils [req-5ed9e0b9-64a6-439d-b31d-db457dd7f5fc req-ceab9dea-75b6-4092-ac2c-59b82d29f894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-8f7f3d80-9c81-41ab-9009-09c77ea059c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:00:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:18.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:00:18
Dec 06 08:00:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:00:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:00:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['backups', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'images', 'volumes']
Dec 06 08:00:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:00:18 compute-0 nova_compute[251992]: 2025-12-06 08:00:18.645 251996 DEBUG nova.network.neutron [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:00:18 compute-0 ceph-mon[74339]: pgmap v3222: 305 pgs: 305 active+clean; 368 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 622 KiB/s wr, 98 op/s
Dec 06 08:00:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/188999392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:00:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:19.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:19 compute-0 podman[375098]: 2025-12-06 08:00:19.445131213 +0000 UTC m=+0.108647413 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:00:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 368 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 622 KiB/s wr, 98 op/s
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.183 251996 DEBUG nova.network.neutron [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Updating instance_info_cache with network_info: [{"id": "9781e762-7ca0-4640-b89b-3924c5259021", "address": "fa:16:3e:26:e9:ae", "network": {"id": "629211d9-a797-44b4-bd7d-576fb48d8f81", "bridge": "br-int", "label": "tempest-network-smoke--1544884874", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9781e762-7c", "ovs_interfaceid": "9781e762-7ca0-4640-b89b-3924c5259021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.206 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-8f7f3d80-9c81-41ab-9009-09c77ea059c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.206 251996 DEBUG nova.compute.manager [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Instance network_info: |[{"id": "9781e762-7ca0-4640-b89b-3924c5259021", "address": "fa:16:3e:26:e9:ae", "network": {"id": "629211d9-a797-44b4-bd7d-576fb48d8f81", "bridge": "br-int", "label": "tempest-network-smoke--1544884874", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9781e762-7c", "ovs_interfaceid": "9781e762-7ca0-4640-b89b-3924c5259021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.206 251996 DEBUG oslo_concurrency.lockutils [req-5ed9e0b9-64a6-439d-b31d-db457dd7f5fc req-ceab9dea-75b6-4092-ac2c-59b82d29f894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-8f7f3d80-9c81-41ab-9009-09c77ea059c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.207 251996 DEBUG nova.network.neutron [req-5ed9e0b9-64a6-439d-b31d-db457dd7f5fc req-ceab9dea-75b6-4092-ac2c-59b82d29f894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Refreshing network info cache for port 9781e762-7ca0-4640-b89b-3924c5259021 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.209 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Start _get_guest_xml network_info=[{"id": "9781e762-7ca0-4640-b89b-3924c5259021", "address": "fa:16:3e:26:e9:ae", "network": {"id": "629211d9-a797-44b4-bd7d-576fb48d8f81", "bridge": "br-int", "label": "tempest-network-smoke--1544884874", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9781e762-7c", "ovs_interfaceid": "9781e762-7ca0-4640-b89b-3924c5259021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.213 251996 WARNING nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.219 251996 DEBUG nova.virt.libvirt.host [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.220 251996 DEBUG nova.virt.libvirt.host [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.225 251996 DEBUG nova.virt.libvirt.host [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.226 251996 DEBUG nova.virt.libvirt.host [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.227 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.227 251996 DEBUG nova.virt.hardware [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.227 251996 DEBUG nova.virt.hardware [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.228 251996 DEBUG nova.virt.hardware [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.228 251996 DEBUG nova.virt.hardware [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.228 251996 DEBUG nova.virt.hardware [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.228 251996 DEBUG nova.virt.hardware [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.228 251996 DEBUG nova.virt.hardware [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.229 251996 DEBUG nova.virt.hardware [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.229 251996 DEBUG nova.virt.hardware [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.229 251996 DEBUG nova.virt.hardware [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.229 251996 DEBUG nova.virt.hardware [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.232 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:20.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:00:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3158161411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.671 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.697 251996 DEBUG nova.storage.rbd_utils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:00:20 compute-0 nova_compute[251992]: 2025-12-06 08:00:20.701 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:20 compute-0 ceph-mon[74339]: pgmap v3223: 305 pgs: 305 active+clean; 368 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 622 KiB/s wr, 98 op/s
Dec 06 08:00:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3158161411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.078 251996 DEBUG oslo_concurrency.lockutils [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.079 251996 DEBUG oslo_concurrency.lockutils [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.079 251996 DEBUG nova.compute.manager [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Going to confirm migration 21 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.118 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:00:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327561922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.151 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.152 251996 DEBUG nova.virt.libvirt.vif [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:00:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-149809131',display_name='tempest-TestNetworkBasicOps-server-149809131',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-149809131',id=183,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxHD2OHcAAQL7D+iLxr5yYZ9dfEaGJ0zjEOCDS2Q+B9dwZiHwtpuC0rSUGm3GJzoEFOe9pvrqVLrq6wJkm52PCY7VS0xdWT/UpZmuYCHLG2CAhDqxmLJ9dSGglmTzeTKw==',key_name='tempest-TestNetworkBasicOps-1299118840',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-axkif81i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:00:14Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=8f7f3d80-9c81-41ab-9009-09c77ea059c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9781e762-7ca0-4640-b89b-3924c5259021", "address": "fa:16:3e:26:e9:ae", "network": {"id": "629211d9-a797-44b4-bd7d-576fb48d8f81", "bridge": "br-int", "label": "tempest-network-smoke--1544884874", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9781e762-7c", "ovs_interfaceid": "9781e762-7ca0-4640-b89b-3924c5259021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.153 251996 DEBUG nova.network.os_vif_util [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "9781e762-7ca0-4640-b89b-3924c5259021", "address": "fa:16:3e:26:e9:ae", "network": {"id": "629211d9-a797-44b4-bd7d-576fb48d8f81", "bridge": "br-int", "label": "tempest-network-smoke--1544884874", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9781e762-7c", "ovs_interfaceid": "9781e762-7ca0-4640-b89b-3924c5259021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.154 251996 DEBUG nova.network.os_vif_util [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:e9:ae,bridge_name='br-int',has_traffic_filtering=True,id=9781e762-7ca0-4640-b89b-3924c5259021,network=Network(629211d9-a797-44b4-bd7d-576fb48d8f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9781e762-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.155 251996 DEBUG nova.objects.instance [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid 8f7f3d80-9c81-41ab-9009-09c77ea059c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:00:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:00:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:21.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:00:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.677 251996 DEBUG nova.compute.manager [req-2f29faa5-8cd1-48a0-991c-4bb324ed9a14 req-6dfcf0c5-65c7-4399-82b0-aafd36cff95a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.677 251996 DEBUG oslo_concurrency.lockutils [req-2f29faa5-8cd1-48a0-991c-4bb324ed9a14 req-6dfcf0c5-65c7-4399-82b0-aafd36cff95a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.678 251996 DEBUG oslo_concurrency.lockutils [req-2f29faa5-8cd1-48a0-991c-4bb324ed9a14 req-6dfcf0c5-65c7-4399-82b0-aafd36cff95a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.679 251996 DEBUG oslo_concurrency.lockutils [req-2f29faa5-8cd1-48a0-991c-4bb324ed9a14 req-6dfcf0c5-65c7-4399-82b0-aafd36cff95a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.679 251996 DEBUG nova.compute.manager [req-2f29faa5-8cd1-48a0-991c-4bb324ed9a14 req-6dfcf0c5-65c7-4399-82b0-aafd36cff95a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] No waiting events found dispatching network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.679 251996 WARNING nova.compute.manager [req-2f29faa5-8cd1-48a0-991c-4bb324ed9a14 req-6dfcf0c5-65c7-4399-82b0-aafd36cff95a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Received unexpected event network-vif-plugged-05c9980c-d230-4d61-9d98-4586e200fac5 for instance with vm_state resized and task_state None.
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.712 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:00:21 compute-0 nova_compute[251992]:   <uuid>8f7f3d80-9c81-41ab-9009-09c77ea059c3</uuid>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   <name>instance-000000b7</name>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <nova:name>tempest-TestNetworkBasicOps-server-149809131</nova:name>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:00:20</nova:creationTime>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <nova:port uuid="9781e762-7ca0-4640-b89b-3924c5259021">
Dec 06 08:00:21 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.26" ipVersion="4"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <system>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <entry name="serial">8f7f3d80-9c81-41ab-9009-09c77ea059c3</entry>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <entry name="uuid">8f7f3d80-9c81-41ab-9009-09c77ea059c3</entry>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     </system>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   <os>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   </os>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   <features>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   </features>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk">
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       </source>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk.config">
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       </source>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:00:21 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:26:e9:ae"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <target dev="tap9781e762-7c"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/8f7f3d80-9c81-41ab-9009-09c77ea059c3/console.log" append="off"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <video>
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     </video>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:00:21 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:00:21 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:00:21 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:00:21 compute-0 nova_compute[251992]: </domain>
Dec 06 08:00:21 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.714 251996 DEBUG nova.compute.manager [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Preparing to wait for external event network-vif-plugged-9781e762-7ca0-4640-b89b-3924c5259021 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.715 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.715 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.715 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.717 251996 DEBUG nova.virt.libvirt.vif [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:00:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-149809131',display_name='tempest-TestNetworkBasicOps-server-149809131',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-149809131',id=183,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxHD2OHcAAQL7D+iLxr5yYZ9dfEaGJ0zjEOCDS2Q+B9dwZiHwtpuC0rSUGm3GJzoEFOe9pvrqVLrq6wJkm52PCY7VS0xdWT/UpZmuYCHLG2CAhDqxmLJ9dSGglmTzeTKw==',key_name='tempest-TestNetworkBasicOps-1299118840',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-axkif81i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:00:14Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=8f7f3d80-9c81-41ab-9009-09c77ea059c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9781e762-7ca0-4640-b89b-3924c5259021", "address": "fa:16:3e:26:e9:ae", "network": {"id": "629211d9-a797-44b4-bd7d-576fb48d8f81", "bridge": "br-int", "label": "tempest-network-smoke--1544884874", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9781e762-7c", "ovs_interfaceid": "9781e762-7ca0-4640-b89b-3924c5259021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.717 251996 DEBUG nova.network.os_vif_util [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "9781e762-7ca0-4640-b89b-3924c5259021", "address": "fa:16:3e:26:e9:ae", "network": {"id": "629211d9-a797-44b4-bd7d-576fb48d8f81", "bridge": "br-int", "label": "tempest-network-smoke--1544884874", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9781e762-7c", "ovs_interfaceid": "9781e762-7ca0-4640-b89b-3924c5259021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.719 251996 DEBUG nova.network.os_vif_util [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:e9:ae,bridge_name='br-int',has_traffic_filtering=True,id=9781e762-7ca0-4640-b89b-3924c5259021,network=Network(629211d9-a797-44b4-bd7d-576fb48d8f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9781e762-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.720 251996 DEBUG os_vif [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:e9:ae,bridge_name='br-int',has_traffic_filtering=True,id=9781e762-7ca0-4640-b89b-3924c5259021,network=Network(629211d9-a797-44b4-bd7d-576fb48d8f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9781e762-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.724 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.725 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.726 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.732 251996 DEBUG nova.network.neutron [req-5ed9e0b9-64a6-439d-b31d-db457dd7f5fc req-ceab9dea-75b6-4092-ac2c-59b82d29f894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Updated VIF entry in instance network info cache for port 9781e762-7ca0-4640-b89b-3924c5259021. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.733 251996 DEBUG nova.network.neutron [req-5ed9e0b9-64a6-439d-b31d-db457dd7f5fc req-ceab9dea-75b6-4092-ac2c-59b82d29f894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Updating instance_info_cache with network_info: [{"id": "9781e762-7ca0-4640-b89b-3924c5259021", "address": "fa:16:3e:26:e9:ae", "network": {"id": "629211d9-a797-44b4-bd7d-576fb48d8f81", "bridge": "br-int", "label": "tempest-network-smoke--1544884874", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9781e762-7c", "ovs_interfaceid": "9781e762-7ca0-4640-b89b-3924c5259021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:00:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1327561922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.738 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.739 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9781e762-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.740 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9781e762-7c, col_values=(('external_ids', {'iface-id': '9781e762-7ca0-4640-b89b-3924c5259021', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:e9:ae', 'vm-uuid': '8f7f3d80-9c81-41ab-9009-09c77ea059c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:21 compute-0 NetworkManager[48965]: <info>  [1765008021.7462] manager: (tap9781e762-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.746 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.750 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.753 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.755 251996 INFO os_vif [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:e9:ae,bridge_name='br-int',has_traffic_filtering=True,id=9781e762-7ca0-4640-b89b-3924c5259021,network=Network(629211d9-a797-44b4-bd7d-576fb48d8f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9781e762-7c')
Dec 06 08:00:21 compute-0 nova_compute[251992]: 2025-12-06 08:00:21.831 251996 DEBUG oslo_concurrency.lockutils [req-5ed9e0b9-64a6-439d-b31d-db457dd7f5fc req-ceab9dea-75b6-4092-ac2c-59b82d29f894 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-8f7f3d80-9c81-41ab-9009-09c77ea059c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:00:22 compute-0 nova_compute[251992]: 2025-12-06 08:00:22.105 251996 DEBUG neutronclient.v2_0.client [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 05c9980c-d230-4d61-9d98-4586e200fac5 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Dec 06 08:00:22 compute-0 nova_compute[251992]: 2025-12-06 08:00:22.105 251996 DEBUG oslo_concurrency.lockutils [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:00:22 compute-0 nova_compute[251992]: 2025-12-06 08:00:22.106 251996 DEBUG oslo_concurrency.lockutils [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:00:22 compute-0 nova_compute[251992]: 2025-12-06 08:00:22.106 251996 DEBUG nova.network.neutron [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:00:22 compute-0 nova_compute[251992]: 2025-12-06 08:00:22.107 251996 DEBUG nova.objects.instance [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'info_cache' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:00:22 compute-0 nova_compute[251992]: 2025-12-06 08:00:22.142 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:00:22 compute-0 nova_compute[251992]: 2025-12-06 08:00:22.142 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:00:22 compute-0 nova_compute[251992]: 2025-12-06 08:00:22.142 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:26:e9:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:00:22 compute-0 nova_compute[251992]: 2025-12-06 08:00:22.143 251996 INFO nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Using config drive
Dec 06 08:00:22 compute-0 nova_compute[251992]: 2025-12-06 08:00:22.172 251996 DEBUG nova.storage.rbd_utils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:00:22 compute-0 nova_compute[251992]: 2025-12-06 08:00:22.540 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:22.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:22 compute-0 ceph-mon[74339]: pgmap v3224: 305 pgs: 305 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Dec 06 08:00:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:00:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:23.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:00:23 compute-0 nova_compute[251992]: 2025-12-06 08:00:23.314 251996 INFO nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Creating config drive at /var/lib/nova/instances/8f7f3d80-9c81-41ab-9009-09c77ea059c3/disk.config
Dec 06 08:00:23 compute-0 nova_compute[251992]: 2025-12-06 08:00:23.320 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8f7f3d80-9c81-41ab-9009-09c77ea059c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1bhnspsm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Dec 06 08:00:23 compute-0 nova_compute[251992]: 2025-12-06 08:00:23.465 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8f7f3d80-9c81-41ab-9009-09c77ea059c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1bhnspsm" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:23 compute-0 nova_compute[251992]: 2025-12-06 08:00:23.504 251996 DEBUG nova.storage.rbd_utils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:00:23 compute-0 nova_compute[251992]: 2025-12-06 08:00:23.508 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8f7f3d80-9c81-41ab-9009-09c77ea059c3/disk.config 8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:00:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:00:23 compute-0 nova_compute[251992]: 2025-12-06 08:00:23.707 251996 DEBUG oslo_concurrency.processutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8f7f3d80-9c81-41ab-9009-09c77ea059c3/disk.config 8f7f3d80-9c81-41ab-9009-09c77ea059c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:23 compute-0 nova_compute[251992]: 2025-12-06 08:00:23.708 251996 INFO nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Deleting local config drive /var/lib/nova/instances/8f7f3d80-9c81-41ab-9009-09c77ea059c3/disk.config because it was imported into RBD.
Dec 06 08:00:23 compute-0 kernel: tap9781e762-7c: entered promiscuous mode
Dec 06 08:00:23 compute-0 NetworkManager[48965]: <info>  [1765008023.7668] manager: (tap9781e762-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/316)
Dec 06 08:00:23 compute-0 nova_compute[251992]: 2025-12-06 08:00:23.768 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:23 compute-0 ovn_controller[147168]: 2025-12-06T08:00:23Z|00694|binding|INFO|Claiming lport 9781e762-7ca0-4640-b89b-3924c5259021 for this chassis.
Dec 06 08:00:23 compute-0 ovn_controller[147168]: 2025-12-06T08:00:23Z|00695|binding|INFO|9781e762-7ca0-4640-b89b-3924c5259021: Claiming fa:16:3e:26:e9:ae 10.100.0.26
Dec 06 08:00:23 compute-0 systemd-machined[212986]: New machine qemu-85-instance-000000b7.
Dec 06 08:00:23 compute-0 nova_compute[251992]: 2025-12-06 08:00:23.832 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:23 compute-0 nova_compute[251992]: 2025-12-06 08:00:23.839 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:23 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-000000b7.
Dec 06 08:00:23 compute-0 ovn_controller[147168]: 2025-12-06T08:00:23Z|00696|binding|INFO|Setting lport 9781e762-7ca0-4640-b89b-3924c5259021 ovn-installed in OVS
Dec 06 08:00:23 compute-0 nova_compute[251992]: 2025-12-06 08:00:23.859 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:23 compute-0 systemd-udevd[375265]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:00:23 compute-0 NetworkManager[48965]: <info>  [1765008023.8822] device (tap9781e762-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:00:23 compute-0 NetworkManager[48965]: <info>  [1765008023.8833] device (tap9781e762-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:00:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.157 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:e9:ae 10.100.0.26'], port_security=['fa:16:3e:26:e9:ae 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '8f7f3d80-9c81-41ab-9009-09c77ea059c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-629211d9-a797-44b4-bd7d-576fb48d8f81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b21ed5f8-7b1b-49be-b944-dd2a5d277014', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f8dae15-0d84-4d6f-8a18-38e3eff898be, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=9781e762-7ca0-4640-b89b-3924c5259021) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:00:24 compute-0 ovn_controller[147168]: 2025-12-06T08:00:24Z|00697|binding|INFO|Setting lport 9781e762-7ca0-4640-b89b-3924c5259021 up in Southbound
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.159 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 9781e762-7ca0-4640-b89b-3924c5259021 in datapath 629211d9-a797-44b4-bd7d-576fb48d8f81 bound to our chassis
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.161 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 629211d9-a797-44b4-bd7d-576fb48d8f81
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.177 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9f3276-c1a1-4d65-b20b-26c28f860182]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.178 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap629211d9-a1 in ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.181 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap629211d9-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.181 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[976db396-b5de-4506-b816-cbf1a714e20a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.182 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac633b8-9817-403f-a488-c5e3f6748171]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.201 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[da1d2eb1-8fd0-485d-b5e2-e046a84ab37c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.228 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8f5d90a0-6bf9-43b5-9347-31e84708cc67]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.266 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[38c71e55-015d-4db1-9d86-de2c855b9efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.272 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d0847e01-630b-44d6-a32b-2cb67f667c43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 NetworkManager[48965]: <info>  [1765008024.2738] manager: (tap629211d9-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/317)
Dec 06 08:00:24 compute-0 systemd-udevd[375267]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.302 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[17e5bcfd-87b8-473e-8ab6-28f94a539a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.306 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[03622c3b-433d-4875-8991-395cadb51eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 NetworkManager[48965]: <info>  [1765008024.3385] device (tap629211d9-a0): carrier: link connected
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.346 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1edecbe8-bfb0-41f5-8a57-583327c11bff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.372 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3d744851-15d4-462d-bf1b-aa707d162f98]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap629211d9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:88:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829692, 'reachable_time': 31141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375298, 'error': None, 'target': 'ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.395 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[306cf4fb-6ac9-4bdf-a189-5fe22d93788d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:88f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 829692, 'tstamp': 829692}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375299, 'error': None, 'target': 'ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.415 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2dcec4ee-94ab-4be0-8073-135232edd046]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap629211d9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:88:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829692, 'reachable_time': 31141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375300, 'error': None, 'target': 'ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.449 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[49e2c1e9-3d1b-432a-859e-b463a40ed484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.523 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[47a5e4be-8c93-4fdb-8b57-57915988d643]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.525 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap629211d9-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.525 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.526 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap629211d9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:24 compute-0 nova_compute[251992]: 2025-12-06 08:00:24.527 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:24 compute-0 kernel: tap629211d9-a0: entered promiscuous mode
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.529 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap629211d9-a0, col_values=(('external_ids', {'iface-id': '11eb41f3-8432-4a72-8981-c494bdd1804a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:24 compute-0 NetworkManager[48965]: <info>  [1765008024.5305] manager: (tap629211d9-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Dec 06 08:00:24 compute-0 ovn_controller[147168]: 2025-12-06T08:00:24Z|00698|binding|INFO|Releasing lport 11eb41f3-8432-4a72-8981-c494bdd1804a from this chassis (sb_readonly=0)
Dec 06 08:00:24 compute-0 nova_compute[251992]: 2025-12-06 08:00:24.530 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:24 compute-0 nova_compute[251992]: 2025-12-06 08:00:24.545 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.546 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/629211d9-a797-44b4-bd7d-576fb48d8f81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/629211d9-a797-44b4-bd7d-576fb48d8f81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.547 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a49f04ac-c268-4f6b-8b71-d253a42dc14c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.548 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-629211d9-a797-44b4-bd7d-576fb48d8f81
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/629211d9-a797-44b4-bd7d-576fb48d8f81.pid.haproxy
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 629211d9-a797-44b4-bd7d-576fb48d8f81
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:00:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:24.549 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81', 'env', 'PROCESS_TAG=haproxy-629211d9-a797-44b4-bd7d-576fb48d8f81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/629211d9-a797-44b4-bd7d-576fb48d8f81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:00:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:24.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:24 compute-0 ceph-mon[74339]: pgmap v3225: 305 pgs: 305 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Dec 06 08:00:24 compute-0 nova_compute[251992]: 2025-12-06 08:00:24.946 251996 DEBUG nova.compute.manager [req-2b3f1ffd-2cab-4437-870f-27014b1f830a req-a9795fc1-cdd0-4d7a-b7ca-ab7e77419965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Received event network-vif-plugged-9781e762-7ca0-4640-b89b-3924c5259021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:00:24 compute-0 nova_compute[251992]: 2025-12-06 08:00:24.947 251996 DEBUG oslo_concurrency.lockutils [req-2b3f1ffd-2cab-4437-870f-27014b1f830a req-a9795fc1-cdd0-4d7a-b7ca-ab7e77419965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:24 compute-0 nova_compute[251992]: 2025-12-06 08:00:24.948 251996 DEBUG oslo_concurrency.lockutils [req-2b3f1ffd-2cab-4437-870f-27014b1f830a req-a9795fc1-cdd0-4d7a-b7ca-ab7e77419965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:24 compute-0 nova_compute[251992]: 2025-12-06 08:00:24.948 251996 DEBUG oslo_concurrency.lockutils [req-2b3f1ffd-2cab-4437-870f-27014b1f830a req-a9795fc1-cdd0-4d7a-b7ca-ab7e77419965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:24 compute-0 nova_compute[251992]: 2025-12-06 08:00:24.948 251996 DEBUG nova.compute.manager [req-2b3f1ffd-2cab-4437-870f-27014b1f830a req-a9795fc1-cdd0-4d7a-b7ca-ab7e77419965 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Processing event network-vif-plugged-9781e762-7ca0-4640-b89b-3924c5259021 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:00:24 compute-0 podman[375364]: 2025-12-06 08:00:24.95161765 +0000 UTC m=+0.059315523 container create 35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:00:24 compute-0 systemd[1]: Started libpod-conmon-35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071.scope.
Dec 06 08:00:25 compute-0 podman[375364]: 2025-12-06 08:00:24.919133073 +0000 UTC m=+0.026830986 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:00:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a86dba3331c5808dc7cb60c2858a12ce787bec2fce5aa4806fd191407eeaad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:25 compute-0 podman[375364]: 2025-12-06 08:00:25.04355774 +0000 UTC m=+0.151255663 container init 35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 06 08:00:25 compute-0 podman[375364]: 2025-12-06 08:00:25.050287582 +0000 UTC m=+0.157985445 container start 35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.051 251996 DEBUG nova.compute.manager [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.052 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008025.0509567, 8f7f3d80-9c81-41ab-9009-09c77ea059c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.053 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] VM Started (Lifecycle Event)
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.056 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.061 251996 INFO nova.virt.libvirt.driver [-] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Instance spawned successfully.
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.061 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.074 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:00:25 compute-0 neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81[375384]: [NOTICE]   (375391) : New worker (375393) forked
Dec 06 08:00:25 compute-0 neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81[375384]: [NOTICE]   (375391) : Loading success.
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.078 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.090 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.091 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.091 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.091 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.092 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.092 251996 DEBUG nova.virt.libvirt.driver [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.100 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.101 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008025.0533755, 8f7f3d80-9c81-41ab-9009-09c77ea059c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.101 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] VM Paused (Lifecycle Event)
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.139 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.142 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008025.0554836, 8f7f3d80-9c81-41ab-9009-09c77ea059c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.143 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] VM Resumed (Lifecycle Event)
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.167 251996 INFO nova.compute.manager [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Took 10.29 seconds to spawn the instance on the hypervisor.
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.168 251996 DEBUG nova.compute.manager [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.169 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.177 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.192 251996 DEBUG nova.network.neutron [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Updating instance_info_cache with network_info: [{"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:00:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:25.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.224 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.228 251996 DEBUG oslo_concurrency.lockutils [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-d1a8ef9c-0ce7-4841-9523-7f11435a1884" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.228 251996 DEBUG nova.objects.instance [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid d1a8ef9c-0ce7-4841-9523-7f11435a1884 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.334 251996 INFO nova.compute.manager [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Took 11.33 seconds to build instance.
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.391 251996 DEBUG nova.storage.rbd_utils [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] removing snapshot(nova-resize) on rbd image(d1a8ef9c-0ce7-4841-9523-7f11435a1884_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 06 08:00:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 210 op/s
Dec 06 08:00:25 compute-0 nova_compute[251992]: 2025-12-06 08:00:25.734 251996 DEBUG oslo_concurrency.lockutils [None req-c31dcb7d-4cf5-432e-8e7d-55ffa602364a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Dec 06 08:00:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Dec 06 08:00:25 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:25.941663) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008025941774, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 698, "num_deletes": 251, "total_data_size": 904596, "memory_usage": 918936, "flush_reason": "Manual Compaction"}
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008025950601, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 894187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64839, "largest_seqno": 65536, "table_properties": {"data_size": 890607, "index_size": 1423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8317, "raw_average_key_size": 19, "raw_value_size": 883367, "raw_average_value_size": 2063, "num_data_blocks": 63, "num_entries": 428, "num_filter_entries": 428, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765007974, "oldest_key_time": 1765007974, "file_creation_time": 1765008025, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 9023 microseconds, and 5337 cpu microseconds.
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:25.950682) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 894187 bytes OK
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:25.950723) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:25.952403) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:25.952428) EVENT_LOG_v1 {"time_micros": 1765008025952420, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:25.952450) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 901043, prev total WAL file size 901043, number of live WAL files 2.
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:25.953273) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(873KB)], [143(11MB)]
Dec 06 08:00:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008025953347, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 12624413, "oldest_snapshot_seqno": -1}
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.012 251996 DEBUG nova.virt.libvirt.vif [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T07:58:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1262973925',display_name='tempest-TestNetworkAdvancedServerOps-server-1262973925',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1262973925',id=181,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHajJo6sYKmo7BjEFfbTegHFFaysH3CPUR6yuP2Rayw3S9ts1Wd6TY6anx2QtLxK6yp4z4nQqn7Ss4CGPtBiZQsZd5U8dFeDqjYG81KqlV6e9SPXI48qB0u9ty6SGnMpqw==',key_name='tempest-TestNetworkAdvancedServerOps-720860597',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:00:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-1koojzsy',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:00:19Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=d1a8ef9c-0ce7-4841-9523-7f11435a1884,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.012 251996 DEBUG nova.network.os_vif_util [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "05c9980c-d230-4d61-9d98-4586e200fac5", "address": "fa:16:3e:28:35:99", "network": {"id": "ecf01de9-e04e-423a-b106-dcf22b107dc4", "bridge": "br-int", "label": "tempest-network-smoke--425482001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c9980c-d2", "ovs_interfaceid": "05c9980c-d230-4d61-9d98-4586e200fac5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.014 251996 DEBUG nova.network.os_vif_util [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.014 251996 DEBUG os_vif [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.016 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.017 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05c9980c-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.017 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.019 251996 INFO os_vif [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:35:99,bridge_name='br-int',has_traffic_filtering=True,id=05c9980c-d230-4d61-9d98-4586e200fac5,network=Network(ecf01de9-e04e-423a-b106-dcf22b107dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c9980c-d2')
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.020 251996 DEBUG oslo_concurrency.lockutils [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.020 251996 DEBUG oslo_concurrency.lockutils [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 9827 keys, 10656587 bytes, temperature: kUnknown
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008026052373, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10656587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10595803, "index_size": 35135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 260369, "raw_average_key_size": 26, "raw_value_size": 10425951, "raw_average_value_size": 1060, "num_data_blocks": 1322, "num_entries": 9827, "num_filter_entries": 9827, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765008025, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:26.052647) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10656587 bytes
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:26.053709) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.4 rd, 107.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(26.0) write-amplify(11.9) OK, records in: 10341, records dropped: 514 output_compression: NoCompression
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:26.053725) EVENT_LOG_v1 {"time_micros": 1765008026053718, "job": 88, "event": "compaction_finished", "compaction_time_micros": 99119, "compaction_time_cpu_micros": 45322, "output_level": 6, "num_output_files": 1, "total_output_size": 10656587, "num_input_records": 10341, "num_output_records": 9827, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008026053962, "job": 88, "event": "table_file_deletion", "file_number": 145}
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008026055938, "job": 88, "event": "table_file_deletion", "file_number": 143}
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:25.953165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:26.056013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:26.056018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:26.056020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:26.056022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:00:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:00:26.056024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.099 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008011.098994, d1a8ef9c-0ce7-4841-9523-7f11435a1884 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.100 251996 INFO nova.compute.manager [-] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] VM Stopped (Lifecycle Event)
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.187 251996 DEBUG oslo_concurrency.processutils [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.244 251996 DEBUG nova.compute.manager [None req-ff2220b6-008b-4670-bfb2-b1d78056343d - - - - - -] [instance: d1a8ef9c-0ce7-4841-9523-7f11435a1884] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00534155877768472 of space, bias 1.0, pg target 1.602467633305416 quantized to 32 (current 32)
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004331917108691606 of space, bias 1.0, pg target 1.2952432154987903 quantized to 32 (current 32)
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:00:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Dec 06 08:00:26 compute-0 podman[375443]: 2025-12-06 08:00:26.412947761 +0000 UTC m=+0.072079555 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec 06 08:00:26 compute-0 podman[375457]: 2025-12-06 08:00:26.417572657 +0000 UTC m=+0.071179913 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 08:00:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:26.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.711 251996 DEBUG oslo_concurrency.processutils [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.719 251996 DEBUG nova.compute.provider_tree [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.746 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:26 compute-0 nova_compute[251992]: 2025-12-06 08:00:26.915 251996 DEBUG nova.scheduler.client.report [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:00:27 compute-0 ceph-mon[74339]: pgmap v3226: 305 pgs: 305 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 210 op/s
Dec 06 08:00:27 compute-0 ceph-mon[74339]: osdmap e409: 3 total, 3 up, 3 in
Dec 06 08:00:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/404955260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:27 compute-0 nova_compute[251992]: 2025-12-06 08:00:27.052 251996 DEBUG oslo_concurrency.lockutils [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:27 compute-0 nova_compute[251992]: 2025-12-06 08:00:27.058 251996 DEBUG nova.compute.manager [req-9ed4ba14-e670-4a38-af2c-724ad193948d req-036be711-6ec8-4f10-a918-ae9023f8b7fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Received event network-vif-plugged-9781e762-7ca0-4640-b89b-3924c5259021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:00:27 compute-0 nova_compute[251992]: 2025-12-06 08:00:27.059 251996 DEBUG oslo_concurrency.lockutils [req-9ed4ba14-e670-4a38-af2c-724ad193948d req-036be711-6ec8-4f10-a918-ae9023f8b7fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:27 compute-0 nova_compute[251992]: 2025-12-06 08:00:27.059 251996 DEBUG oslo_concurrency.lockutils [req-9ed4ba14-e670-4a38-af2c-724ad193948d req-036be711-6ec8-4f10-a918-ae9023f8b7fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:27 compute-0 nova_compute[251992]: 2025-12-06 08:00:27.059 251996 DEBUG oslo_concurrency.lockutils [req-9ed4ba14-e670-4a38-af2c-724ad193948d req-036be711-6ec8-4f10-a918-ae9023f8b7fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:27 compute-0 nova_compute[251992]: 2025-12-06 08:00:27.059 251996 DEBUG nova.compute.manager [req-9ed4ba14-e670-4a38-af2c-724ad193948d req-036be711-6ec8-4f10-a918-ae9023f8b7fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] No waiting events found dispatching network-vif-plugged-9781e762-7ca0-4640-b89b-3924c5259021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:00:27 compute-0 nova_compute[251992]: 2025-12-06 08:00:27.060 251996 WARNING nova.compute.manager [req-9ed4ba14-e670-4a38-af2c-724ad193948d req-036be711-6ec8-4f10-a918-ae9023f8b7fd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Received unexpected event network-vif-plugged-9781e762-7ca0-4640-b89b-3924c5259021 for instance with vm_state active and task_state None.
Dec 06 08:00:27 compute-0 nova_compute[251992]: 2025-12-06 08:00:27.145 251996 INFO nova.scheduler.client.report [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Deleted allocation for migration 3edf7ec4-1fd2-4a82-b2cd-ff838e7ab052
Dec 06 08:00:27 compute-0 nova_compute[251992]: 2025-12-06 08:00:27.198 251996 DEBUG oslo_concurrency.lockutils [None req-716e2d0b-e6d1-4150-a7b2-44986c68579a 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "d1a8ef9c-0ce7-4841-9523-7f11435a1884" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 6.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:00:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:27.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:00:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:00:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:00:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:00:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:00:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:00:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 204 op/s
Dec 06 08:00:27 compute-0 nova_compute[251992]: 2025-12-06 08:00:27.583 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:27 compute-0 sudo[375496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:27 compute-0 sudo[375496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:27 compute-0 sudo[375496]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:27 compute-0 sudo[375521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:27 compute-0 sudo[375521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:27 compute-0 sudo[375521]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:28.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:29 compute-0 ceph-mon[74339]: pgmap v3228: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 204 op/s
Dec 06 08:00:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:00:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:29.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:00:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 204 op/s
Dec 06 08:00:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:30.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:31 compute-0 ceph-mon[74339]: pgmap v3229: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 204 op/s
Dec 06 08:00:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:31.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 33 KiB/s wr, 164 op/s
Dec 06 08:00:31 compute-0 nova_compute[251992]: 2025-12-06 08:00:31.750 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:32.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:32 compute-0 nova_compute[251992]: 2025-12-06 08:00:32.609 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:32 compute-0 nova_compute[251992]: 2025-12-06 08:00:32.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:32 compute-0 nova_compute[251992]: 2025-12-06 08:00:32.712 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:32 compute-0 nova_compute[251992]: 2025-12-06 08:00:32.713 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:32 compute-0 nova_compute[251992]: 2025-12-06 08:00:32.713 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:32 compute-0 nova_compute[251992]: 2025-12-06 08:00:32.713 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:00:32 compute-0 nova_compute[251992]: 2025-12-06 08:00:32.713 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:00:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2998638154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.148 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:33.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.235 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.236 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000b7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:00:33 compute-0 ceph-mon[74339]: pgmap v3230: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 33 KiB/s wr, 164 op/s
Dec 06 08:00:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2998638154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.403 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.404 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3985MB free_disk=20.87603759765625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.405 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.405 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 33 KiB/s wr, 164 op/s
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.496 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 8f7f3d80-9c81-41ab-9009-09c77ea059c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.496 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.497 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.516 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.541 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.542 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.580 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:00:33 compute-0 sudo[375572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:33 compute-0 sudo[375572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:33 compute-0 sudo[375572]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.620 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:00:33 compute-0 sudo[375597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:33 compute-0 sudo[375597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:33 compute-0 sudo[375597]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:33 compute-0 nova_compute[251992]: 2025-12-06 08:00:33.677 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:33 compute-0 sudo[375622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:33 compute-0 sudo[375622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:33 compute-0 sudo[375622]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:33 compute-0 sudo[375648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 08:00:33 compute-0 sudo[375648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Dec 06 08:00:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Dec 06 08:00:33 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Dec 06 08:00:34 compute-0 sudo[375648]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:00:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:00:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-0 sudo[375711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:34 compute-0 sudo[375711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:34 compute-0 sudo[375711]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:00:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847114911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:34 compute-0 sudo[375736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:34 compute-0 sudo[375736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:34 compute-0 sudo[375736]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:34 compute-0 nova_compute[251992]: 2025-12-06 08:00:34.173 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:34 compute-0 nova_compute[251992]: 2025-12-06 08:00:34.180 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:00:34 compute-0 nova_compute[251992]: 2025-12-06 08:00:34.195 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:00:34 compute-0 sudo[375763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:34 compute-0 sudo[375763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:34 compute-0 sudo[375763]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:34 compute-0 nova_compute[251992]: 2025-12-06 08:00:34.222 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:00:34 compute-0 nova_compute[251992]: 2025-12-06 08:00:34.222 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:34 compute-0 sudo[375788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:00:34 compute-0 sudo[375788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:34.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:00:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:00:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-0 sudo[375788]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:00:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:00:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-0 sudo[375843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:34 compute-0 sudo[375843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:34 compute-0 sudo[375843]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:34 compute-0 sudo[375868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:34 compute-0 sudo[375868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:34 compute-0 sudo[375868]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:34 compute-0 ceph-mon[74339]: pgmap v3231: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 33 KiB/s wr, 164 op/s
Dec 06 08:00:34 compute-0 ceph-mon[74339]: osdmap e410: 3 total, 3 up, 3 in
Dec 06 08:00:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/847114911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2233248341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:35 compute-0 sudo[375893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:35 compute-0 sudo[375893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:35 compute-0 sudo[375893]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:35 compute-0 sudo[375918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty --filter-for-batch
Dec 06 08:00:35 compute-0 sudo[375918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:35.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 55 KiB/s wr, 168 op/s
Dec 06 08:00:35 compute-0 podman[375985]: 2025-12-06 08:00:35.381738084 +0000 UTC m=+0.023469135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:00:35 compute-0 podman[375985]: 2025-12-06 08:00:35.530488738 +0000 UTC m=+0.172219769 container create fd2ea22f68b69d963e46ffd0897abbede19548add59eddcd54e21abc5cf793cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendeleev, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 08:00:35 compute-0 systemd[1]: Started libpod-conmon-fd2ea22f68b69d963e46ffd0897abbede19548add59eddcd54e21abc5cf793cf.scope.
Dec 06 08:00:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:00:35 compute-0 podman[375985]: 2025-12-06 08:00:35.681954845 +0000 UTC m=+0.323685926 container init fd2ea22f68b69d963e46ffd0897abbede19548add59eddcd54e21abc5cf793cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendeleev, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 08:00:35 compute-0 podman[375985]: 2025-12-06 08:00:35.6910488 +0000 UTC m=+0.332779861 container start fd2ea22f68b69d963e46ffd0897abbede19548add59eddcd54e21abc5cf793cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 08:00:35 compute-0 jovial_mendeleev[376001]: 167 167
Dec 06 08:00:35 compute-0 systemd[1]: libpod-fd2ea22f68b69d963e46ffd0897abbede19548add59eddcd54e21abc5cf793cf.scope: Deactivated successfully.
Dec 06 08:00:35 compute-0 podman[375985]: 2025-12-06 08:00:35.710394652 +0000 UTC m=+0.352125713 container attach fd2ea22f68b69d963e46ffd0897abbede19548add59eddcd54e21abc5cf793cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendeleev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:00:35 compute-0 podman[375985]: 2025-12-06 08:00:35.712089828 +0000 UTC m=+0.353820890 container died fd2ea22f68b69d963e46ffd0897abbede19548add59eddcd54e21abc5cf793cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendeleev, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-80e9909c6d7ef6bc40488b3b7d7c2e78b269c509e3f7e39316f0ef633d7af2e6-merged.mount: Deactivated successfully.
Dec 06 08:00:35 compute-0 podman[375985]: 2025-12-06 08:00:35.753591169 +0000 UTC m=+0.395322200 container remove fd2ea22f68b69d963e46ffd0897abbede19548add59eddcd54e21abc5cf793cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:00:35 compute-0 systemd[1]: libpod-conmon-fd2ea22f68b69d963e46ffd0897abbede19548add59eddcd54e21abc5cf793cf.scope: Deactivated successfully.
Dec 06 08:00:35 compute-0 podman[376025]: 2025-12-06 08:00:35.915524937 +0000 UTC m=+0.040889233 container create b4c16431dc2753cfa2d478d537e5643b085e59dd179f4d16d21f81b5ac8856f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:00:35 compute-0 systemd[1]: Started libpod-conmon-b4c16431dc2753cfa2d478d537e5643b085e59dd179f4d16d21f81b5ac8856f9.scope.
Dec 06 08:00:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa15d344b514b83f7d0aab095f2c3d323cc44740b58b42fb42d62cb40d7d6bba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa15d344b514b83f7d0aab095f2c3d323cc44740b58b42fb42d62cb40d7d6bba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa15d344b514b83f7d0aab095f2c3d323cc44740b58b42fb42d62cb40d7d6bba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa15d344b514b83f7d0aab095f2c3d323cc44740b58b42fb42d62cb40d7d6bba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:35 compute-0 podman[376025]: 2025-12-06 08:00:35.993696877 +0000 UTC m=+0.119061193 container init b4c16431dc2753cfa2d478d537e5643b085e59dd179f4d16d21f81b5ac8856f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_villani, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:00:35 compute-0 podman[376025]: 2025-12-06 08:00:35.896906206 +0000 UTC m=+0.022270522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:00:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3524168111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:36 compute-0 podman[376025]: 2025-12-06 08:00:36.001713003 +0000 UTC m=+0.127077299 container start b4c16431dc2753cfa2d478d537e5643b085e59dd179f4d16d21f81b5ac8856f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_villani, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:00:36 compute-0 podman[376025]: 2025-12-06 08:00:36.014462828 +0000 UTC m=+0.139827124 container attach b4c16431dc2753cfa2d478d537e5643b085e59dd179f4d16d21f81b5ac8856f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_villani, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 08:00:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:36.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:36 compute-0 nova_compute[251992]: 2025-12-06 08:00:36.753 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:37 compute-0 ceph-mon[74339]: pgmap v3233: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 55 KiB/s wr, 168 op/s
Dec 06 08:00:37 compute-0 silly_villani[376041]: [
Dec 06 08:00:37 compute-0 silly_villani[376041]:     {
Dec 06 08:00:37 compute-0 silly_villani[376041]:         "available": false,
Dec 06 08:00:37 compute-0 silly_villani[376041]:         "ceph_device": false,
Dec 06 08:00:37 compute-0 silly_villani[376041]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 08:00:37 compute-0 silly_villani[376041]:         "lsm_data": {},
Dec 06 08:00:37 compute-0 silly_villani[376041]:         "lvs": [],
Dec 06 08:00:37 compute-0 silly_villani[376041]:         "path": "/dev/sr0",
Dec 06 08:00:37 compute-0 silly_villani[376041]:         "rejected_reasons": [
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "Insufficient space (<5GB)",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "Has a FileSystem"
Dec 06 08:00:37 compute-0 silly_villani[376041]:         ],
Dec 06 08:00:37 compute-0 silly_villani[376041]:         "sys_api": {
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "actuators": null,
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "device_nodes": "sr0",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "devname": "sr0",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "human_readable_size": "482.00 KB",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "id_bus": "ata",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "model": "QEMU DVD-ROM",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "nr_requests": "2",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "parent": "/dev/sr0",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "partitions": {},
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "path": "/dev/sr0",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "removable": "1",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "rev": "2.5+",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "ro": "0",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "rotational": "1",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "sas_address": "",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "sas_device_handle": "",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "scheduler_mode": "mq-deadline",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "sectors": 0,
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "sectorsize": "2048",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "size": 493568.0,
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "support_discard": "2048",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "type": "disk",
Dec 06 08:00:37 compute-0 silly_villani[376041]:             "vendor": "QEMU"
Dec 06 08:00:37 compute-0 silly_villani[376041]:         }
Dec 06 08:00:37 compute-0 silly_villani[376041]:     }
Dec 06 08:00:37 compute-0 silly_villani[376041]: ]
Dec 06 08:00:37 compute-0 systemd[1]: libpod-b4c16431dc2753cfa2d478d537e5643b085e59dd179f4d16d21f81b5ac8856f9.scope: Deactivated successfully.
Dec 06 08:00:37 compute-0 podman[376025]: 2025-12-06 08:00:37.202863335 +0000 UTC m=+1.328227651 container died b4c16431dc2753cfa2d478d537e5643b085e59dd179f4d16d21f81b5ac8856f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_villani, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:00:37 compute-0 systemd[1]: libpod-b4c16431dc2753cfa2d478d537e5643b085e59dd179f4d16d21f81b5ac8856f9.scope: Consumed 1.217s CPU time.
Dec 06 08:00:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:37.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa15d344b514b83f7d0aab095f2c3d323cc44740b58b42fb42d62cb40d7d6bba-merged.mount: Deactivated successfully.
Dec 06 08:00:37 compute-0 podman[376025]: 2025-12-06 08:00:37.43246123 +0000 UTC m=+1.557825536 container remove b4c16431dc2753cfa2d478d537e5643b085e59dd179f4d16d21f81b5ac8856f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:00:37 compute-0 systemd[1]: libpod-conmon-b4c16431dc2753cfa2d478d537e5643b085e59dd179f4d16d21f81b5ac8856f9.scope: Deactivated successfully.
Dec 06 08:00:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 29 KiB/s wr, 136 op/s
Dec 06 08:00:37 compute-0 sudo[375918]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:00:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:00:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:37 compute-0 nova_compute[251992]: 2025-12-06 08:00:37.612 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:00:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:00:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:00:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:00:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:00:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:00:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:00:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 19a0e5eb-8ebe-49b2-a037-4480be8a3b51 does not exist
Dec 06 08:00:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 18d39302-1764-4d5f-9a09-604be5e21988 does not exist
Dec 06 08:00:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5363d5ea-9500-43d4-a81d-db03de008b2c does not exist
Dec 06 08:00:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:00:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:00:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:00:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:00:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:00:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:00:38 compute-0 sudo[377326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:38 compute-0 sudo[377326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:38 compute-0 sudo[377326]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:38 compute-0 nova_compute[251992]: 2025-12-06 08:00:38.217 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:38 compute-0 sudo[377351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:38 compute-0 sudo[377351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:38 compute-0 sudo[377351]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:38 compute-0 sudo[377376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:38 compute-0 sudo[377376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:38 compute-0 sudo[377376]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:38 compute-0 sudo[377401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:00:38 compute-0 sudo[377401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:38.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:38 compute-0 ceph-mon[74339]: pgmap v3234: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 29 KiB/s wr, 136 op/s
Dec 06 08:00:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:00:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:00:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:00:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:00:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:00:38 compute-0 podman[377466]: 2025-12-06 08:00:38.741859973 +0000 UTC m=+0.094255194 container create c32871f8d90d5508209d6515780f1cdec34c2802821710b70150ba1686381d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_payne, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:00:38 compute-0 podman[377466]: 2025-12-06 08:00:38.669099179 +0000 UTC m=+0.021494420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:00:38 compute-0 systemd[1]: Started libpod-conmon-c32871f8d90d5508209d6515780f1cdec34c2802821710b70150ba1686381d32.scope.
Dec 06 08:00:38 compute-0 ovn_controller[147168]: 2025-12-06T08:00:38Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:e9:ae 10.100.0.26
Dec 06 08:00:38 compute-0 ovn_controller[147168]: 2025-12-06T08:00:38Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:e9:ae 10.100.0.26
Dec 06 08:00:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:00:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:39 compute-0 podman[377466]: 2025-12-06 08:00:39.213709335 +0000 UTC m=+0.566104576 container init c32871f8d90d5508209d6515780f1cdec34c2802821710b70150ba1686381d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_payne, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:00:39 compute-0 podman[377466]: 2025-12-06 08:00:39.219745408 +0000 UTC m=+0.572140629 container start c32871f8d90d5508209d6515780f1cdec34c2802821710b70150ba1686381d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:00:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:00:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:39.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:00:39 compute-0 magical_payne[377482]: 167 167
Dec 06 08:00:39 compute-0 systemd[1]: libpod-c32871f8d90d5508209d6515780f1cdec34c2802821710b70150ba1686381d32.scope: Deactivated successfully.
Dec 06 08:00:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 29 KiB/s wr, 136 op/s
Dec 06 08:00:39 compute-0 podman[377466]: 2025-12-06 08:00:39.670029859 +0000 UTC m=+1.022425160 container attach c32871f8d90d5508209d6515780f1cdec34c2802821710b70150ba1686381d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_payne, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:00:39 compute-0 podman[377466]: 2025-12-06 08:00:39.671379425 +0000 UTC m=+1.023774686 container died c32871f8d90d5508209d6515780f1cdec34c2802821710b70150ba1686381d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:00:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb6e5966765792c096d753fc11934cde326fc9edc81962d6d56bd73833b63528-merged.mount: Deactivated successfully.
Dec 06 08:00:39 compute-0 podman[377466]: 2025-12-06 08:00:39.733408489 +0000 UTC m=+1.085803710 container remove c32871f8d90d5508209d6515780f1cdec34c2802821710b70150ba1686381d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec 06 08:00:39 compute-0 systemd[1]: libpod-conmon-c32871f8d90d5508209d6515780f1cdec34c2802821710b70150ba1686381d32.scope: Deactivated successfully.
Dec 06 08:00:39 compute-0 podman[377508]: 2025-12-06 08:00:39.891952597 +0000 UTC m=+0.037170324 container create 5c3a6ec530f255a5cf0335870d64f7c6338c50763ceb97213ad32b0664644793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 08:00:39 compute-0 systemd[1]: Started libpod-conmon-5c3a6ec530f255a5cf0335870d64f7c6338c50763ceb97213ad32b0664644793.scope.
Dec 06 08:00:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801fb3667e7308c465dfcd0acf6b6b8fad293123c4730eb815d80f2d0fb1fc32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801fb3667e7308c465dfcd0acf6b6b8fad293123c4730eb815d80f2d0fb1fc32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801fb3667e7308c465dfcd0acf6b6b8fad293123c4730eb815d80f2d0fb1fc32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801fb3667e7308c465dfcd0acf6b6b8fad293123c4730eb815d80f2d0fb1fc32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/801fb3667e7308c465dfcd0acf6b6b8fad293123c4730eb815d80f2d0fb1fc32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:39 compute-0 podman[377508]: 2025-12-06 08:00:39.876044277 +0000 UTC m=+0.021262044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:00:39 compute-0 podman[377508]: 2025-12-06 08:00:39.977992889 +0000 UTC m=+0.123210646 container init 5c3a6ec530f255a5cf0335870d64f7c6338c50763ceb97213ad32b0664644793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_poincare, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:00:39 compute-0 podman[377508]: 2025-12-06 08:00:39.993848376 +0000 UTC m=+0.139066113 container start 5c3a6ec530f255a5cf0335870d64f7c6338c50763ceb97213ad32b0664644793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 08:00:39 compute-0 podman[377508]: 2025-12-06 08:00:39.997634319 +0000 UTC m=+0.142852116 container attach 5c3a6ec530f255a5cf0335870d64f7c6338c50763ceb97213ad32b0664644793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 08:00:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:40.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:40 compute-0 nova_compute[251992]: 2025-12-06 08:00:40.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:40 compute-0 beautiful_poincare[377524]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:00:40 compute-0 beautiful_poincare[377524]: --> relative data size: 1.0
Dec 06 08:00:40 compute-0 beautiful_poincare[377524]: --> All data devices are unavailable
Dec 06 08:00:40 compute-0 systemd[1]: libpod-5c3a6ec530f255a5cf0335870d64f7c6338c50763ceb97213ad32b0664644793.scope: Deactivated successfully.
Dec 06 08:00:40 compute-0 podman[377508]: 2025-12-06 08:00:40.791001527 +0000 UTC m=+0.936219274 container died 5c3a6ec530f255a5cf0335870d64f7c6338c50763ceb97213ad32b0664644793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_poincare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:00:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-801fb3667e7308c465dfcd0acf6b6b8fad293123c4730eb815d80f2d0fb1fc32-merged.mount: Deactivated successfully.
Dec 06 08:00:40 compute-0 podman[377508]: 2025-12-06 08:00:40.880005909 +0000 UTC m=+1.025223646 container remove 5c3a6ec530f255a5cf0335870d64f7c6338c50763ceb97213ad32b0664644793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:00:40 compute-0 ceph-mon[74339]: pgmap v3235: 305 pgs: 305 active+clean; 407 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 29 KiB/s wr, 136 op/s
Dec 06 08:00:40 compute-0 sudo[377401]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:40 compute-0 systemd[1]: libpod-conmon-5c3a6ec530f255a5cf0335870d64f7c6338c50763ceb97213ad32b0664644793.scope: Deactivated successfully.
Dec 06 08:00:40 compute-0 sudo[377550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:40 compute-0 sudo[377550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:40 compute-0 sudo[377550]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:41 compute-0 sudo[377575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:41 compute-0 sudo[377575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:41 compute-0 sudo[377575]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:41 compute-0 sudo[377600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:41 compute-0 sudo[377600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:41 compute-0 sudo[377600]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:41 compute-0 sudo[377626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:00:41 compute-0 sudo[377626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:41 compute-0 nova_compute[251992]: 2025-12-06 08:00:41.152 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:41.152 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:00:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:41.153 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:00:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:41.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 441 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 973 KiB/s rd, 2.6 MiB/s wr, 129 op/s
Dec 06 08:00:41 compute-0 podman[377690]: 2025-12-06 08:00:41.444635184 +0000 UTC m=+0.021158541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:00:41 compute-0 podman[377690]: 2025-12-06 08:00:41.737937209 +0000 UTC m=+0.314460586 container create eb8f0e84f7a84401ddb1c30fbb4faf5954064651d6efc46ddad4fe24e87dc0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_margulis, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:00:41 compute-0 nova_compute[251992]: 2025-12-06 08:00:41.758 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:41 compute-0 systemd[1]: Started libpod-conmon-eb8f0e84f7a84401ddb1c30fbb4faf5954064651d6efc46ddad4fe24e87dc0c4.scope.
Dec 06 08:00:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:00:41 compute-0 podman[377690]: 2025-12-06 08:00:41.835842601 +0000 UTC m=+0.412365968 container init eb8f0e84f7a84401ddb1c30fbb4faf5954064651d6efc46ddad4fe24e87dc0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:00:41 compute-0 podman[377690]: 2025-12-06 08:00:41.841925375 +0000 UTC m=+0.418448712 container start eb8f0e84f7a84401ddb1c30fbb4faf5954064651d6efc46ddad4fe24e87dc0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:00:41 compute-0 podman[377690]: 2025-12-06 08:00:41.846509469 +0000 UTC m=+0.423032826 container attach eb8f0e84f7a84401ddb1c30fbb4faf5954064651d6efc46ddad4fe24e87dc0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:00:41 compute-0 goofy_margulis[377706]: 167 167
Dec 06 08:00:41 compute-0 systemd[1]: libpod-eb8f0e84f7a84401ddb1c30fbb4faf5954064651d6efc46ddad4fe24e87dc0c4.scope: Deactivated successfully.
Dec 06 08:00:41 compute-0 podman[377690]: 2025-12-06 08:00:41.848163203 +0000 UTC m=+0.424686560 container died eb8f0e84f7a84401ddb1c30fbb4faf5954064651d6efc46ddad4fe24e87dc0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_margulis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:00:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-49617365c11a08154b0974e4ff51713ce1fa5131c985d02de98d5ec06dba6b6b-merged.mount: Deactivated successfully.
Dec 06 08:00:41 compute-0 podman[377690]: 2025-12-06 08:00:41.885269464 +0000 UTC m=+0.461792801 container remove eb8f0e84f7a84401ddb1c30fbb4faf5954064651d6efc46ddad4fe24e87dc0c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 08:00:41 compute-0 systemd[1]: libpod-conmon-eb8f0e84f7a84401ddb1c30fbb4faf5954064651d6efc46ddad4fe24e87dc0c4.scope: Deactivated successfully.
Dec 06 08:00:42 compute-0 podman[377730]: 2025-12-06 08:00:42.08114738 +0000 UTC m=+0.043742141 container create 0510136f1fce712aadc78f5d5167741a3635437cf0ac51581baa020246b7c8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:00:42 compute-0 systemd[1]: Started libpod-conmon-0510136f1fce712aadc78f5d5167741a3635437cf0ac51581baa020246b7c8f5.scope.
Dec 06 08:00:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:00:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/976e4545c6220fdb0e5b9cf1ceba1c64b1f1f6eea4c2977babd14b224bcb3a90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/976e4545c6220fdb0e5b9cf1ceba1c64b1f1f6eea4c2977babd14b224bcb3a90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/976e4545c6220fdb0e5b9cf1ceba1c64b1f1f6eea4c2977babd14b224bcb3a90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/976e4545c6220fdb0e5b9cf1ceba1c64b1f1f6eea4c2977babd14b224bcb3a90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:42 compute-0 podman[377730]: 2025-12-06 08:00:42.155451855 +0000 UTC m=+0.118046636 container init 0510136f1fce712aadc78f5d5167741a3635437cf0ac51581baa020246b7c8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:00:42 compute-0 podman[377730]: 2025-12-06 08:00:42.063176545 +0000 UTC m=+0.025771356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:00:42 compute-0 podman[377730]: 2025-12-06 08:00:42.162280179 +0000 UTC m=+0.124874940 container start 0510136f1fce712aadc78f5d5167741a3635437cf0ac51581baa020246b7c8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:00:42 compute-0 podman[377730]: 2025-12-06 08:00:42.165596729 +0000 UTC m=+0.128191560 container attach 0510136f1fce712aadc78f5d5167741a3635437cf0ac51581baa020246b7c8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:00:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:42.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:42 compute-0 nova_compute[251992]: 2025-12-06 08:00:42.614 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:42 compute-0 pensive_tesla[377746]: {
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:     "0": [
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:         {
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             "devices": [
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "/dev/loop3"
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             ],
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             "lv_name": "ceph_lv0",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             "lv_size": "7511998464",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             "name": "ceph_lv0",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             "tags": {
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "ceph.cluster_name": "ceph",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "ceph.crush_device_class": "",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "ceph.encrypted": "0",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "ceph.osd_id": "0",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "ceph.type": "block",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:                 "ceph.vdo": "0"
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             },
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             "type": "block",
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:             "vg_name": "ceph_vg0"
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:         }
Dec 06 08:00:42 compute-0 pensive_tesla[377746]:     ]
Dec 06 08:00:42 compute-0 pensive_tesla[377746]: }
Dec 06 08:00:42 compute-0 systemd[1]: libpod-0510136f1fce712aadc78f5d5167741a3635437cf0ac51581baa020246b7c8f5.scope: Deactivated successfully.
Dec 06 08:00:42 compute-0 podman[377730]: 2025-12-06 08:00:42.927254562 +0000 UTC m=+0.889849323 container died 0510136f1fce712aadc78f5d5167741a3635437cf0ac51581baa020246b7c8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 08:00:42 compute-0 ceph-mon[74339]: pgmap v3236: 305 pgs: 305 active+clean; 441 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 973 KiB/s rd, 2.6 MiB/s wr, 129 op/s
Dec 06 08:00:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/744438241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-976e4545c6220fdb0e5b9cf1ceba1c64b1f1f6eea4c2977babd14b224bcb3a90-merged.mount: Deactivated successfully.
Dec 06 08:00:43 compute-0 podman[377730]: 2025-12-06 08:00:43.020370455 +0000 UTC m=+0.982965216 container remove 0510136f1fce712aadc78f5d5167741a3635437cf0ac51581baa020246b7c8f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_tesla, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:00:43 compute-0 systemd[1]: libpod-conmon-0510136f1fce712aadc78f5d5167741a3635437cf0ac51581baa020246b7c8f5.scope: Deactivated successfully.
Dec 06 08:00:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:00:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:00:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:00:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:00:43 compute-0 sudo[377626]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:00:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:00:43 compute-0 sudo[377769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:43 compute-0 sudo[377769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:43 compute-0 sudo[377769]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:43 compute-0 sudo[377794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:43 compute-0 sudo[377794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:43 compute-0 sudo[377794]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:43 compute-0 sudo[377819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:43 compute-0 sudo[377819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:43 compute-0 sudo[377819]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:43.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:43 compute-0 sudo[377844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:00:43 compute-0 sudo[377844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 441 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 973 KiB/s rd, 2.6 MiB/s wr, 129 op/s
Dec 06 08:00:43 compute-0 podman[377909]: 2025-12-06 08:00:43.560718655 +0000 UTC m=+0.036394733 container create 8cc194ba9cc077ffdd124b7743b9a982772948dfbf3fa2065b3a972db3abf477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:00:43 compute-0 systemd[1]: Started libpod-conmon-8cc194ba9cc077ffdd124b7743b9a982772948dfbf3fa2065b3a972db3abf477.scope.
Dec 06 08:00:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:00:43 compute-0 podman[377909]: 2025-12-06 08:00:43.546235824 +0000 UTC m=+0.021911922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:00:43 compute-0 podman[377909]: 2025-12-06 08:00:43.643223901 +0000 UTC m=+0.118899999 container init 8cc194ba9cc077ffdd124b7743b9a982772948dfbf3fa2065b3a972db3abf477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_stonebraker, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 08:00:43 compute-0 podman[377909]: 2025-12-06 08:00:43.649516311 +0000 UTC m=+0.125192389 container start 8cc194ba9cc077ffdd124b7743b9a982772948dfbf3fa2065b3a972db3abf477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_stonebraker, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:00:43 compute-0 podman[377909]: 2025-12-06 08:00:43.653776466 +0000 UTC m=+0.129452574 container attach 8cc194ba9cc077ffdd124b7743b9a982772948dfbf3fa2065b3a972db3abf477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 08:00:43 compute-0 thirsty_stonebraker[377926]: 167 167
Dec 06 08:00:43 compute-0 systemd[1]: libpod-8cc194ba9cc077ffdd124b7743b9a982772948dfbf3fa2065b3a972db3abf477.scope: Deactivated successfully.
Dec 06 08:00:43 compute-0 podman[377909]: 2025-12-06 08:00:43.655639016 +0000 UTC m=+0.131315094 container died 8cc194ba9cc077ffdd124b7743b9a982772948dfbf3fa2065b3a972db3abf477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_stonebraker, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 08:00:43 compute-0 nova_compute[251992]: 2025-12-06 08:00:43.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:43 compute-0 nova_compute[251992]: 2025-12-06 08:00:43.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f1b1586650313e4da3f07f152a15def7256f572f0c1a57c9d90f1330a9cca0c-merged.mount: Deactivated successfully.
Dec 06 08:00:43 compute-0 podman[377909]: 2025-12-06 08:00:43.694243818 +0000 UTC m=+0.169919906 container remove 8cc194ba9cc077ffdd124b7743b9a982772948dfbf3fa2065b3a972db3abf477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_stonebraker, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:00:43 compute-0 systemd[1]: libpod-conmon-8cc194ba9cc077ffdd124b7743b9a982772948dfbf3fa2065b3a972db3abf477.scope: Deactivated successfully.
Dec 06 08:00:43 compute-0 podman[377949]: 2025-12-06 08:00:43.85998112 +0000 UTC m=+0.046134355 container create d3c201592b15f646450c7e6f1bd80861033ad66016d7a5525b84e93fd3475c83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 08:00:43 compute-0 systemd[1]: Started libpod-conmon-d3c201592b15f646450c7e6f1bd80861033ad66016d7a5525b84e93fd3475c83.scope.
Dec 06 08:00:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b7e07ee39ee4195d41eb1692095b5d094d02c716f80326bdc67bca5abbe0b04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:43 compute-0 podman[377949]: 2025-12-06 08:00:43.840396502 +0000 UTC m=+0.026549787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b7e07ee39ee4195d41eb1692095b5d094d02c716f80326bdc67bca5abbe0b04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b7e07ee39ee4195d41eb1692095b5d094d02c716f80326bdc67bca5abbe0b04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b7e07ee39ee4195d41eb1692095b5d094d02c716f80326bdc67bca5abbe0b04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:43 compute-0 podman[377949]: 2025-12-06 08:00:43.9489257 +0000 UTC m=+0.135078955 container init d3c201592b15f646450c7e6f1bd80861033ad66016d7a5525b84e93fd3475c83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:00:43 compute-0 podman[377949]: 2025-12-06 08:00:43.955704983 +0000 UTC m=+0.141858218 container start d3c201592b15f646450c7e6f1bd80861033ad66016d7a5525b84e93fd3475c83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 08:00:43 compute-0 podman[377949]: 2025-12-06 08:00:43.958423346 +0000 UTC m=+0.144576601 container attach d3c201592b15f646450c7e6f1bd80861033ad66016d7a5525b84e93fd3475c83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:00:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1692009206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:44.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:44 compute-0 nova_compute[251992]: 2025-12-06 08:00:44.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:44 compute-0 nova_compute[251992]: 2025-12-06 08:00:44.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:00:44 compute-0 nova_compute[251992]: 2025-12-06 08:00:44.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:00:44 compute-0 xenodochial_bardeen[377966]: {
Dec 06 08:00:44 compute-0 xenodochial_bardeen[377966]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:00:44 compute-0 xenodochial_bardeen[377966]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:00:44 compute-0 xenodochial_bardeen[377966]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:00:44 compute-0 xenodochial_bardeen[377966]:         "osd_id": 0,
Dec 06 08:00:44 compute-0 xenodochial_bardeen[377966]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:00:44 compute-0 xenodochial_bardeen[377966]:         "type": "bluestore"
Dec 06 08:00:44 compute-0 xenodochial_bardeen[377966]:     }
Dec 06 08:00:44 compute-0 xenodochial_bardeen[377966]: }
Dec 06 08:00:44 compute-0 systemd[1]: libpod-d3c201592b15f646450c7e6f1bd80861033ad66016d7a5525b84e93fd3475c83.scope: Deactivated successfully.
Dec 06 08:00:44 compute-0 podman[377949]: 2025-12-06 08:00:44.831478825 +0000 UTC m=+1.017632070 container died d3c201592b15f646450c7e6f1bd80861033ad66016d7a5525b84e93fd3475c83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:00:44 compute-0 nova_compute[251992]: 2025-12-06 08:00:44.833 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-8f7f3d80-9c81-41ab-9009-09c77ea059c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:00:44 compute-0 nova_compute[251992]: 2025-12-06 08:00:44.834 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-8f7f3d80-9c81-41ab-9009-09c77ea059c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:00:44 compute-0 nova_compute[251992]: 2025-12-06 08:00:44.834 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:00:44 compute-0 nova_compute[251992]: 2025-12-06 08:00:44.834 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8f7f3d80-9c81-41ab-9009-09c77ea059c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:00:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b7e07ee39ee4195d41eb1692095b5d094d02c716f80326bdc67bca5abbe0b04-merged.mount: Deactivated successfully.
Dec 06 08:00:44 compute-0 podman[377949]: 2025-12-06 08:00:44.888934496 +0000 UTC m=+1.075087731 container remove d3c201592b15f646450c7e6f1bd80861033ad66016d7a5525b84e93fd3475c83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 08:00:44 compute-0 systemd[1]: libpod-conmon-d3c201592b15f646450c7e6f1bd80861033ad66016d7a5525b84e93fd3475c83.scope: Deactivated successfully.
Dec 06 08:00:44 compute-0 sudo[377844]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:00:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:00:45 compute-0 ceph-mon[74339]: pgmap v3237: 305 pgs: 305 active+clean; 441 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 973 KiB/s rd, 2.6 MiB/s wr, 129 op/s
Dec 06 08:00:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:45 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev eb1bd90c-4101-46e8-aa25-d34620651ed1 does not exist
Dec 06 08:00:45 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8a7f8a45-0e0a-4991-a863-a7640bb47f80 does not exist
Dec 06 08:00:45 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev dc52a98e-63a4-4229-bcad-1eccb3510dfa does not exist
Dec 06 08:00:45 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:45.156 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:45 compute-0 sudo[377999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:45 compute-0 sudo[377999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:45 compute-0 sudo[377999]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:45.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:45 compute-0 sudo[378024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:00:45 compute-0 sudo[378024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:45 compute-0 sudo[378024]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 380 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 868 KiB/s rd, 2.3 MiB/s wr, 141 op/s
Dec 06 08:00:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:00:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:46.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:46 compute-0 nova_compute[251992]: 2025-12-06 08:00:46.761 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:46 compute-0 nova_compute[251992]: 2025-12-06 08:00:46.965 251996 DEBUG oslo_concurrency.lockutils [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:46 compute-0 nova_compute[251992]: 2025-12-06 08:00:46.965 251996 DEBUG oslo_concurrency.lockutils [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:46 compute-0 nova_compute[251992]: 2025-12-06 08:00:46.965 251996 DEBUG oslo_concurrency.lockutils [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:46 compute-0 nova_compute[251992]: 2025-12-06 08:00:46.966 251996 DEBUG oslo_concurrency.lockutils [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:46 compute-0 nova_compute[251992]: 2025-12-06 08:00:46.966 251996 DEBUG oslo_concurrency.lockutils [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:46 compute-0 nova_compute[251992]: 2025-12-06 08:00:46.967 251996 INFO nova.compute.manager [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Terminating instance
Dec 06 08:00:46 compute-0 nova_compute[251992]: 2025-12-06 08:00:46.967 251996 DEBUG nova.compute.manager [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:00:47 compute-0 kernel: tap9781e762-7c (unregistering): left promiscuous mode
Dec 06 08:00:47 compute-0 NetworkManager[48965]: <info>  [1765008047.0271] device (tap9781e762-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:00:47 compute-0 ovn_controller[147168]: 2025-12-06T08:00:47Z|00699|binding|INFO|Releasing lport 9781e762-7ca0-4640-b89b-3924c5259021 from this chassis (sb_readonly=0)
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.044 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:47 compute-0 ovn_controller[147168]: 2025-12-06T08:00:47Z|00700|binding|INFO|Setting lport 9781e762-7ca0-4640-b89b-3924c5259021 down in Southbound
Dec 06 08:00:47 compute-0 ovn_controller[147168]: 2025-12-06T08:00:47Z|00701|binding|INFO|Removing iface tap9781e762-7c ovn-installed in OVS
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.052 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:e9:ae 10.100.0.26'], port_security=['fa:16:3e:26:e9:ae 10.100.0.26'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.26/28', 'neutron:device_id': '8f7f3d80-9c81-41ab-9009-09c77ea059c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-629211d9-a797-44b4-bd7d-576fb48d8f81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b21ed5f8-7b1b-49be-b944-dd2a5d277014', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f8dae15-0d84-4d6f-8a18-38e3eff898be, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=9781e762-7ca0-4640-b89b-3924c5259021) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.054 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 9781e762-7ca0-4640-b89b-3924c5259021 in datapath 629211d9-a797-44b4-bd7d-576fb48d8f81 unbound from our chassis
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.055 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 629211d9-a797-44b4-bd7d-576fb48d8f81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.058 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[290f7872-db92-4991-8677-323635c8ca95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.059 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81 namespace which is not needed anymore
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.059 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.064 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Updating instance_info_cache with network_info: [{"id": "9781e762-7ca0-4640-b89b-3924c5259021", "address": "fa:16:3e:26:e9:ae", "network": {"id": "629211d9-a797-44b4-bd7d-576fb48d8f81", "bridge": "br-int", "label": "tempest-network-smoke--1544884874", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9781e762-7c", "ovs_interfaceid": "9781e762-7ca0-4640-b89b-3924c5259021", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.091 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-8f7f3d80-9c81-41ab-9009-09c77ea059c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.092 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.092 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:47 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b7.scope: Deactivated successfully.
Dec 06 08:00:47 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b7.scope: Consumed 14.777s CPU time.
Dec 06 08:00:47 compute-0 systemd-machined[212986]: Machine qemu-85-instance-000000b7 terminated.
Dec 06 08:00:47 compute-0 ceph-mon[74339]: pgmap v3238: 305 pgs: 305 active+clean; 380 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 868 KiB/s rd, 2.3 MiB/s wr, 141 op/s
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.207 251996 INFO nova.virt.libvirt.driver [-] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Instance destroyed successfully.
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.208 251996 DEBUG nova.objects.instance [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid 8f7f3d80-9c81-41ab-9009-09c77ea059c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.227 251996 DEBUG nova.virt.libvirt.vif [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:00:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-149809131',display_name='tempest-TestNetworkBasicOps-server-149809131',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-149809131',id=183,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJxHD2OHcAAQL7D+iLxr5yYZ9dfEaGJ0zjEOCDS2Q+B9dwZiHwtpuC0rSUGm3GJzoEFOe9pvrqVLrq6wJkm52PCY7VS0xdWT/UpZmuYCHLG2CAhDqxmLJ9dSGglmTzeTKw==',key_name='tempest-TestNetworkBasicOps-1299118840',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:00:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-axkif81i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:00:25Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=8f7f3d80-9c81-41ab-9009-09c77ea059c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9781e762-7ca0-4640-b89b-3924c5259021", "address": "fa:16:3e:26:e9:ae", "network": {"id": "629211d9-a797-44b4-bd7d-576fb48d8f81", "bridge": "br-int", "label": "tempest-network-smoke--1544884874", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9781e762-7c", "ovs_interfaceid": "9781e762-7ca0-4640-b89b-3924c5259021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.228 251996 DEBUG nova.network.os_vif_util [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "9781e762-7ca0-4640-b89b-3924c5259021", "address": "fa:16:3e:26:e9:ae", "network": {"id": "629211d9-a797-44b4-bd7d-576fb48d8f81", "bridge": "br-int", "label": "tempest-network-smoke--1544884874", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.26", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9781e762-7c", "ovs_interfaceid": "9781e762-7ca0-4640-b89b-3924c5259021", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:00:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:47.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.229 251996 DEBUG nova.network.os_vif_util [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:e9:ae,bridge_name='br-int',has_traffic_filtering=True,id=9781e762-7ca0-4640-b89b-3924c5259021,network=Network(629211d9-a797-44b4-bd7d-576fb48d8f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9781e762-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.229 251996 DEBUG os_vif [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:e9:ae,bridge_name='br-int',has_traffic_filtering=True,id=9781e762-7ca0-4640-b89b-3924c5259021,network=Network(629211d9-a797-44b4-bd7d-576fb48d8f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9781e762-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.232 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.233 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9781e762-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:47 compute-0 neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81[375384]: [NOTICE]   (375391) : haproxy version is 2.8.14-c23fe91
Dec 06 08:00:47 compute-0 neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81[375384]: [NOTICE]   (375391) : path to executable is /usr/sbin/haproxy
Dec 06 08:00:47 compute-0 neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81[375384]: [WARNING]  (375391) : Exiting Master process...
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.264 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:47 compute-0 neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81[375384]: [ALERT]    (375391) : Current worker (375393) exited with code 143 (Terminated)
Dec 06 08:00:47 compute-0 neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81[375384]: [WARNING]  (375391) : All workers exited. Exiting... (0)
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.265 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:47 compute-0 systemd[1]: libpod-35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071.scope: Deactivated successfully.
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.270 251996 INFO os_vif [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:e9:ae,bridge_name='br-int',has_traffic_filtering=True,id=9781e762-7ca0-4640-b89b-3924c5259021,network=Network(629211d9-a797-44b4-bd7d-576fb48d8f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9781e762-7c')
Dec 06 08:00:47 compute-0 podman[378074]: 2025-12-06 08:00:47.276555174 +0000 UTC m=+0.086756723 container died 35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:00:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071-userdata-shm.mount: Deactivated successfully.
Dec 06 08:00:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5a86dba3331c5808dc7cb60c2858a12ce787bec2fce5aa4806fd191407eeaad-merged.mount: Deactivated successfully.
Dec 06 08:00:47 compute-0 podman[378074]: 2025-12-06 08:00:47.330321434 +0000 UTC m=+0.140522983 container cleanup 35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 08:00:47 compute-0 systemd[1]: libpod-conmon-35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071.scope: Deactivated successfully.
Dec 06 08:00:47 compute-0 podman[378135]: 2025-12-06 08:00:47.394526837 +0000 UTC m=+0.038206932 container remove 35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.402 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3d46c9f6-f5ce-4688-87e8-0efdc1249409]: (4, ('Sat Dec  6 08:00:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81 (35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071)\n35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071\nSat Dec  6 08:00:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81 (35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071)\n35de596e92819f3d4fa14b0aac47233ed7ffd803862b7d6134ee5f46c3fc2071\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.404 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bcab1167-f0b2-408f-a8f2-2a93b6d6fb98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.405 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap629211d9-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.407 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:47 compute-0 kernel: tap629211d9-a0: left promiscuous mode
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.433 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.433 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.437 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3fba7c42-1198-4bbd-b9c9-e14dc77311f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.451 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6516b6a5-fbd9-4bdd-a662-9c2651fd97bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.452 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5776b326-2667-4aef-a593-4ba2813b9f6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 589 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.471 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9c2737-7c04-4013-9e8a-31703aefc780]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 829685, 'reachable_time': 37888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378150, 'error': None, 'target': 'ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d629211d9\x2da797\x2d44b4\x2dbd7d\x2d576fb48d8f81.mount: Deactivated successfully.
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.477 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-629211d9-a797-44b4-bd7d-576fb48d8f81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:00:47 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:00:47.478 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[fc3e4182-6724-42b1-9ce6-37dbf39cc8f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.615 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.749 251996 INFO nova.virt.libvirt.driver [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Deleting instance files /var/lib/nova/instances/8f7f3d80-9c81-41ab-9009-09c77ea059c3_del
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.750 251996 INFO nova.virt.libvirt.driver [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Deletion of /var/lib/nova/instances/8f7f3d80-9c81-41ab-9009-09c77ea059c3_del complete
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.839 251996 INFO nova.compute.manager [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Took 0.87 seconds to destroy the instance on the hypervisor.
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.840 251996 DEBUG oslo.service.loopingcall [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.840 251996 DEBUG nova.compute.manager [-] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:00:47 compute-0 nova_compute[251992]: 2025-12-06 08:00:47.840 251996 DEBUG nova.network.neutron [-] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:00:47 compute-0 sudo[378152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:47 compute-0 sudo[378152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:47 compute-0 sudo[378152]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:48 compute-0 sudo[378177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:00:48 compute-0 sudo[378177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:00:48 compute-0 sudo[378177]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:48 compute-0 nova_compute[251992]: 2025-12-06 08:00:48.092 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:48 compute-0 nova_compute[251992]: 2025-12-06 08:00:48.363 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:48.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:48 compute-0 nova_compute[251992]: 2025-12-06 08:00:48.742 251996 DEBUG nova.network.neutron [-] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:00:48 compute-0 nova_compute[251992]: 2025-12-06 08:00:48.773 251996 INFO nova.compute.manager [-] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Took 0.93 seconds to deallocate network for instance.
Dec 06 08:00:48 compute-0 nova_compute[251992]: 2025-12-06 08:00:48.846 251996 DEBUG oslo_concurrency.lockutils [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:48 compute-0 nova_compute[251992]: 2025-12-06 08:00:48.847 251996 DEBUG oslo_concurrency.lockutils [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:48 compute-0 nova_compute[251992]: 2025-12-06 08:00:48.901 251996 DEBUG oslo_concurrency.processutils [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:00:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.097 251996 DEBUG nova.compute.manager [req-f1013848-00c6-454d-8e4a-5dd7cde7b31b req-b3214493-619f-4ba4-991d-156a5fd21928 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Received event network-vif-deleted-9781e762-7ca0-4640-b89b-3924c5259021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:00:49 compute-0 ceph-mon[74339]: pgmap v3239: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 589 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Dec 06 08:00:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:49.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.276 251996 DEBUG nova.compute.manager [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Received event network-vif-unplugged-9781e762-7ca0-4640-b89b-3924c5259021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.277 251996 DEBUG oslo_concurrency.lockutils [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.277 251996 DEBUG oslo_concurrency.lockutils [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.277 251996 DEBUG oslo_concurrency.lockutils [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.277 251996 DEBUG nova.compute.manager [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] No waiting events found dispatching network-vif-unplugged-9781e762-7ca0-4640-b89b-3924c5259021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.278 251996 WARNING nova.compute.manager [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Received unexpected event network-vif-unplugged-9781e762-7ca0-4640-b89b-3924c5259021 for instance with vm_state deleted and task_state None.
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.278 251996 DEBUG nova.compute.manager [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Received event network-vif-plugged-9781e762-7ca0-4640-b89b-3924c5259021 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.278 251996 DEBUG oslo_concurrency.lockutils [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.278 251996 DEBUG oslo_concurrency.lockutils [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.278 251996 DEBUG oslo_concurrency.lockutils [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.279 251996 DEBUG nova.compute.manager [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] No waiting events found dispatching network-vif-plugged-9781e762-7ca0-4640-b89b-3924c5259021 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.279 251996 WARNING nova.compute.manager [req-879ee0c6-8067-4be6-871f-0587b3bcd73e req-3c9b605a-6fea-464a-9b5d-ec3cb9d56fc1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Received unexpected event network-vif-plugged-9781e762-7ca0-4640-b89b-3924c5259021 for instance with vm_state deleted and task_state None.
Dec 06 08:00:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:00:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1536474158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.332 251996 DEBUG oslo_concurrency.processutils [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.338 251996 DEBUG nova.compute.provider_tree [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.398 251996 DEBUG nova.scheduler.client.report [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:00:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.603 251996 DEBUG oslo_concurrency.lockutils [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:49 compute-0 nova_compute[251992]: 2025-12-06 08:00:49.668 251996 INFO nova.scheduler.client.report [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance 8f7f3d80-9c81-41ab-9009-09c77ea059c3
Dec 06 08:00:50 compute-0 nova_compute[251992]: 2025-12-06 08:00:50.075 251996 DEBUG oslo_concurrency.lockutils [None req-a2ed1b7e-cb36-480c-ac7d-34fdc59d11c7 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "8f7f3d80-9c81-41ab-9009-09c77ea059c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:00:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1536474158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:50 compute-0 podman[378225]: 2025-12-06 08:00:50.422933894 +0000 UTC m=+0.080582806 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:00:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:50.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:50 compute-0 nova_compute[251992]: 2025-12-06 08:00:50.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:51.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:51 compute-0 ceph-mon[74339]: pgmap v3240: 305 pgs: 305 active+clean; 360 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Dec 06 08:00:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2562147690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:00:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3781078514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:00:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3781078514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:00:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 245 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 548 KiB/s rd, 2.2 MiB/s wr, 147 op/s
Dec 06 08:00:52 compute-0 nova_compute[251992]: 2025-12-06 08:00:52.265 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:52 compute-0 ceph-mon[74339]: pgmap v3241: 305 pgs: 305 active+clean; 245 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 548 KiB/s rd, 2.2 MiB/s wr, 147 op/s
Dec 06 08:00:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:52.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:52 compute-0 nova_compute[251992]: 2025-12-06 08:00:52.617 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:53.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 245 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 20 KiB/s wr, 86 op/s
Dec 06 08:00:53 compute-0 nova_compute[251992]: 2025-12-06 08:00:53.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:00:53 compute-0 nova_compute[251992]: 2025-12-06 08:00:53.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:00:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:54 compute-0 ceph-mon[74339]: pgmap v3242: 305 pgs: 305 active+clean; 245 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 20 KiB/s wr, 86 op/s
Dec 06 08:00:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:54.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:55.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 270 KiB/s rd, 20 KiB/s wr, 92 op/s
Dec 06 08:00:56 compute-0 ceph-mon[74339]: pgmap v3243: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 270 KiB/s rd, 20 KiB/s wr, 92 op/s
Dec 06 08:00:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:56.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:00:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:57.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:00:57 compute-0 nova_compute[251992]: 2025-12-06 08:00:57.268 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:57 compute-0 podman[378255]: 2025-12-06 08:00:57.394921794 +0000 UTC m=+0.058308195 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:00:57 compute-0 podman[378256]: 2025-12-06 08:00:57.405788647 +0000 UTC m=+0.063983387 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:00:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 252 KiB/s rd, 5.3 KiB/s wr, 64 op/s
Dec 06 08:00:57 compute-0 nova_compute[251992]: 2025-12-06 08:00:57.620 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:00:58 compute-0 ceph-mon[74339]: pgmap v3244: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 252 KiB/s rd, 5.3 KiB/s wr, 64 op/s
Dec 06 08:00:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:00:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:00:58.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:00:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:00:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:00:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:00:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:00:59.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:00:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 4.3 KiB/s wr, 62 op/s
Dec 06 08:01:00 compute-0 ceph-mon[74339]: pgmap v3245: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 4.3 KiB/s wr, 62 op/s
Dec 06 08:01:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:00.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:01:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:01.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:01:01 compute-0 CROND[378299]: (root) CMD (run-parts /etc/cron.hourly)
Dec 06 08:01:01 compute-0 run-parts[378302]: (/etc/cron.hourly) starting 0anacron
Dec 06 08:01:01 compute-0 run-parts[378308]: (/etc/cron.hourly) finished 0anacron
Dec 06 08:01:01 compute-0 CROND[378298]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 06 08:01:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 4.3 KiB/s wr, 62 op/s
Dec 06 08:01:02 compute-0 nova_compute[251992]: 2025-12-06 08:01:02.204 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008047.203153, 8f7f3d80-9c81-41ab-9009-09c77ea059c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:01:02 compute-0 nova_compute[251992]: 2025-12-06 08:01:02.205 251996 INFO nova.compute.manager [-] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] VM Stopped (Lifecycle Event)
Dec 06 08:01:02 compute-0 nova_compute[251992]: 2025-12-06 08:01:02.271 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:02.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:02 compute-0 ceph-mon[74339]: pgmap v3246: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 199 KiB/s rd, 4.3 KiB/s wr, 62 op/s
Dec 06 08:01:02 compute-0 nova_compute[251992]: 2025-12-06 08:01:02.656 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:03 compute-0 nova_compute[251992]: 2025-12-06 08:01:03.166 251996 DEBUG nova.compute.manager [None req-da1c2fd0-8bf8-4bd8-a443-a0e2c99a0f81 - - - - - -] [instance: 8f7f3d80-9c81-41ab-9009-09c77ea059c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:01:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:03.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 KiB/s rd, 255 B/s wr, 5 op/s
Dec 06 08:01:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:01:03.873 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:01:03.874 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:01:03.874 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:01:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:01:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:04.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:01:04 compute-0 ceph-mon[74339]: pgmap v3247: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 KiB/s rd, 255 B/s wr, 5 op/s
Dec 06 08:01:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:01:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:05.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:01:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 KiB/s rd, 597 B/s wr, 6 op/s
Dec 06 08:01:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:01:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:06.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:01:06 compute-0 ceph-mon[74339]: pgmap v3248: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 KiB/s rd, 597 B/s wr, 6 op/s
Dec 06 08:01:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:07.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:07 compute-0 nova_compute[251992]: 2025-12-06 08:01:07.317 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 165 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 3 op/s
Dec 06 08:01:07 compute-0 nova_compute[251992]: 2025-12-06 08:01:07.658 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:08 compute-0 sudo[378312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:08 compute-0 sudo[378312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:08 compute-0 sudo[378312]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:08 compute-0 sudo[378337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:08 compute-0 sudo[378337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:08 compute-0 sudo[378337]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:08.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:08 compute-0 ceph-mon[74339]: pgmap v3249: 305 pgs: 305 active+clean; 165 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 3 op/s
Dec 06 08:01:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:01:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:09.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:01:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 165 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 3 op/s
Dec 06 08:01:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2492647070' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:01:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2492647070' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:01:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:10.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:10 compute-0 ceph-mon[74339]: pgmap v3250: 305 pgs: 305 active+clean; 165 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 3 op/s
Dec 06 08:01:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:11.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 1.5 KiB/s wr, 75 op/s
Dec 06 08:01:12 compute-0 nova_compute[251992]: 2025-12-06 08:01:12.320 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:12.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:12 compute-0 nova_compute[251992]: 2025-12-06 08:01:12.659 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:12 compute-0 ceph-mon[74339]: pgmap v3251: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 1.5 KiB/s wr, 75 op/s
Dec 06 08:01:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:01:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:01:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:01:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:01:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:01:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:01:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:13.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 1.5 KiB/s wr, 75 op/s
Dec 06 08:01:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:14.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:14 compute-0 ceph-mon[74339]: pgmap v3252: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 1.5 KiB/s wr, 75 op/s
Dec 06 08:01:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3347912175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:15.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 1.5 KiB/s wr, 96 op/s
Dec 06 08:01:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:16.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:17 compute-0 ceph-mon[74339]: pgmap v3253: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 1.5 KiB/s wr, 96 op/s
Dec 06 08:01:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:17.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:17 compute-0 nova_compute[251992]: 2025-12-06 08:01:17.321 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 1.2 KiB/s wr, 104 op/s
Dec 06 08:01:17 compute-0 nova_compute[251992]: 2025-12-06 08:01:17.712 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:01:18
Dec 06 08:01:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:01:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:01:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'images']
Dec 06 08:01:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:01:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:18.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:19 compute-0 ceph-mon[74339]: pgmap v3254: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 1.2 KiB/s wr, 104 op/s
Dec 06 08:01:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:19.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 1023 B/s wr, 101 op/s
Dec 06 08:01:20 compute-0 ceph-mon[74339]: pgmap v3255: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 1023 B/s wr, 101 op/s
Dec 06 08:01:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:20.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:21.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 122 KiB/s rd, 1023 B/s wr, 198 op/s
Dec 06 08:01:21 compute-0 podman[378369]: 2025-12-06 08:01:21.505014757 +0000 UTC m=+0.158663101 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 06 08:01:22 compute-0 nova_compute[251992]: 2025-12-06 08:01:22.324 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:22 compute-0 ceph-mon[74339]: pgmap v3256: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 122 KiB/s rd, 1023 B/s wr, 198 op/s
Dec 06 08:01:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:22.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:22 compute-0 nova_compute[251992]: 2025-12-06 08:01:22.750 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:23.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 0 B/s wr, 126 op/s
Dec 06 08:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:01:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:01:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:24 compute-0 ceph-mon[74339]: pgmap v3257: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 0 B/s wr, 126 op/s
Dec 06 08:01:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:24.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:25.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 0 B/s wr, 131 op/s
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:01:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:01:26 compute-0 ceph-mon[74339]: pgmap v3258: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 0 B/s wr, 131 op/s
Dec 06 08:01:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:26.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:01:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:27.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:01:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:01:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:01:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:01:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:01:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:01:27 compute-0 nova_compute[251992]: 2025-12-06 08:01:27.327 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 0 B/s wr, 109 op/s
Dec 06 08:01:27 compute-0 nova_compute[251992]: 2025-12-06 08:01:27.753 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:28 compute-0 sudo[378399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:28 compute-0 sudo[378399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:28 compute-0 sudo[378399]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:28 compute-0 sudo[378436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:28 compute-0 sudo[378436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:28 compute-0 podman[378423]: 2025-12-06 08:01:28.300352572 +0000 UTC m=+0.052323803 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 08:01:28 compute-0 podman[378424]: 2025-12-06 08:01:28.3006379 +0000 UTC m=+0.050406691 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 06 08:01:28 compute-0 sudo[378436]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:28 compute-0 ceph-mon[74339]: pgmap v3259: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 0 B/s wr, 109 op/s
Dec 06 08:01:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:28.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:01:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:29.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:01:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 0 B/s wr, 101 op/s
Dec 06 08:01:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:30.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:31.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:31 compute-0 ceph-mon[74339]: pgmap v3260: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 0 B/s wr, 101 op/s
Dec 06 08:01:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 0 B/s wr, 101 op/s
Dec 06 08:01:32 compute-0 nova_compute[251992]: 2025-12-06 08:01:32.330 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:32.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:32 compute-0 nova_compute[251992]: 2025-12-06 08:01:32.755 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:32 compute-0 ceph-mon[74339]: pgmap v3261: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 0 B/s wr, 101 op/s
Dec 06 08:01:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:33.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 08:01:33 compute-0 nova_compute[251992]: 2025-12-06 08:01:33.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1832219241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:34.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:34 compute-0 nova_compute[251992]: 2025-12-06 08:01:34.730 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:34 compute-0 nova_compute[251992]: 2025-12-06 08:01:34.730 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:34 compute-0 nova_compute[251992]: 2025-12-06 08:01:34.730 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:01:34 compute-0 nova_compute[251992]: 2025-12-06 08:01:34.731 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:01:34 compute-0 nova_compute[251992]: 2025-12-06 08:01:34.731 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:01:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:01:35 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2059535779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:35 compute-0 nova_compute[251992]: 2025-12-06 08:01:35.173 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:01:35 compute-0 ceph-mon[74339]: pgmap v3262: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 08:01:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:35.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:35 compute-0 nova_compute[251992]: 2025-12-06 08:01:35.352 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:01:35 compute-0 nova_compute[251992]: 2025-12-06 08:01:35.353 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4189MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:01:35 compute-0 nova_compute[251992]: 2025-12-06 08:01:35.354 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:01:35 compute-0 nova_compute[251992]: 2025-12-06 08:01:35.354 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:01:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 08:01:35 compute-0 nova_compute[251992]: 2025-12-06 08:01:35.516 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:01:35 compute-0 nova_compute[251992]: 2025-12-06 08:01:35.516 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:01:35 compute-0 nova_compute[251992]: 2025-12-06 08:01:35.605 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:01:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:01:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/785130296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:36 compute-0 nova_compute[251992]: 2025-12-06 08:01:36.064 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:01:36 compute-0 nova_compute[251992]: 2025-12-06 08:01:36.070 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:01:36 compute-0 nova_compute[251992]: 2025-12-06 08:01:36.087 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:01:36 compute-0 nova_compute[251992]: 2025-12-06 08:01:36.171 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:01:36 compute-0 nova_compute[251992]: 2025-12-06 08:01:36.171 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:01:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2059535779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2160125574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/785130296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:36.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:01:37.112 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:01:37 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:01:37.113 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:01:37 compute-0 nova_compute[251992]: 2025-12-06 08:01:37.114 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:37.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:37 compute-0 nova_compute[251992]: 2025-12-06 08:01:37.331 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:37 compute-0 nova_compute[251992]: 2025-12-06 08:01:37.758 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:37 compute-0 ceph-mon[74339]: pgmap v3263: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 08:01:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:38.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:39 compute-0 ceph-mon[74339]: pgmap v3264: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:01:39.115 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:01:39 compute-0 nova_compute[251992]: 2025-12-06 08:01:39.165 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:39.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:40 compute-0 ceph-mon[74339]: pgmap v3265: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:40 compute-0 nova_compute[251992]: 2025-12-06 08:01:40.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:40.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:01:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:41.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:01:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:42 compute-0 nova_compute[251992]: 2025-12-06 08:01:42.334 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:42 compute-0 ceph-mon[74339]: pgmap v3266: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:42.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:42 compute-0 nova_compute[251992]: 2025-12-06 08:01:42.759 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:01:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:01:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:01:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:01:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:01:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:01:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:43.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1929047662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:43 compute-0 nova_compute[251992]: 2025-12-06 08:01:43.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:44 compute-0 nova_compute[251992]: 2025-12-06 08:01:44.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:44 compute-0 nova_compute[251992]: 2025-12-06 08:01:44.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:01:44 compute-0 nova_compute[251992]: 2025-12-06 08:01:44.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:01:44 compute-0 nova_compute[251992]: 2025-12-06 08:01:44.675 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:01:44 compute-0 nova_compute[251992]: 2025-12-06 08:01:44.675 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:44.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:45 compute-0 ceph-mon[74339]: pgmap v3267: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3042858999' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:45.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:45 compute-0 sudo[378540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:45 compute-0 sudo[378540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:45 compute-0 sudo[378540]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:45 compute-0 sudo[378565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:01:45 compute-0 sudo[378565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:45 compute-0 sudo[378565]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:45 compute-0 sudo[378590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:45 compute-0 sudo[378590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:45 compute-0 sudo[378590]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:45 compute-0 sudo[378615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:01:45 compute-0 sudo[378615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:46 compute-0 sudo[378615]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:01:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:01:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:01:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:01:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:01:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:01:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 80ec5b5f-123b-4ec7-9789-32cd1fae7448 does not exist
Dec 06 08:01:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f3abcf55-c820-41f1-b74c-7571ca287960 does not exist
Dec 06 08:01:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 87c76696-fa8b-4316-810b-c51930e8f33c does not exist
Dec 06 08:01:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:01:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:01:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:01:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:01:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:01:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:01:46 compute-0 sudo[378670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:46 compute-0 sudo[378670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:46 compute-0 sudo[378670]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:46 compute-0 sudo[378695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:01:46 compute-0 sudo[378695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:46 compute-0 sudo[378695]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:46 compute-0 sudo[378720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:46 compute-0 sudo[378720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:46 compute-0 sudo[378720]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:46 compute-0 sudo[378745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:01:46 compute-0 sudo[378745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:46 compute-0 nova_compute[251992]: 2025-12-06 08:01:46.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:46.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:46 compute-0 podman[378812]: 2025-12-06 08:01:46.939418455 +0000 UTC m=+0.042514248 container create 3720f31cd92f1cade264e6be402ca28b8c82154e0d2ff815aeb5f3dfc7d1882d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 08:01:46 compute-0 systemd[1]: Started libpod-conmon-3720f31cd92f1cade264e6be402ca28b8c82154e0d2ff815aeb5f3dfc7d1882d.scope.
Dec 06 08:01:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:01:47 compute-0 podman[378812]: 2025-12-06 08:01:46.919892649 +0000 UTC m=+0.022988452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:01:47 compute-0 podman[378812]: 2025-12-06 08:01:47.021352197 +0000 UTC m=+0.124447990 container init 3720f31cd92f1cade264e6be402ca28b8c82154e0d2ff815aeb5f3dfc7d1882d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermat, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 08:01:47 compute-0 podman[378812]: 2025-12-06 08:01:47.031886842 +0000 UTC m=+0.134982615 container start 3720f31cd92f1cade264e6be402ca28b8c82154e0d2ff815aeb5f3dfc7d1882d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermat, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 08:01:47 compute-0 podman[378812]: 2025-12-06 08:01:47.036040353 +0000 UTC m=+0.139136226 container attach 3720f31cd92f1cade264e6be402ca28b8c82154e0d2ff815aeb5f3dfc7d1882d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:01:47 compute-0 musing_fermat[378828]: 167 167
Dec 06 08:01:47 compute-0 systemd[1]: libpod-3720f31cd92f1cade264e6be402ca28b8c82154e0d2ff815aeb5f3dfc7d1882d.scope: Deactivated successfully.
Dec 06 08:01:47 compute-0 conmon[378828]: conmon 3720f31cd92f1cade264 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3720f31cd92f1cade264e6be402ca28b8c82154e0d2ff815aeb5f3dfc7d1882d.scope/container/memory.events
Dec 06 08:01:47 compute-0 podman[378812]: 2025-12-06 08:01:47.039767323 +0000 UTC m=+0.142863166 container died 3720f31cd92f1cade264e6be402ca28b8c82154e0d2ff815aeb5f3dfc7d1882d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 08:01:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-21df623b1ae88cae68a5fdebea397abe076878c6e00aec09e7d2acc72d5b1c03-merged.mount: Deactivated successfully.
Dec 06 08:01:47 compute-0 podman[378812]: 2025-12-06 08:01:47.094396568 +0000 UTC m=+0.197492341 container remove 3720f31cd92f1cade264e6be402ca28b8c82154e0d2ff815aeb5f3dfc7d1882d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:01:47 compute-0 systemd[1]: libpod-conmon-3720f31cd92f1cade264e6be402ca28b8c82154e0d2ff815aeb5f3dfc7d1882d.scope: Deactivated successfully.
Dec 06 08:01:47 compute-0 ceph-mon[74339]: pgmap v3268: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:01:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:01:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:01:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:01:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:01:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:01:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:47.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:47 compute-0 podman[378854]: 2025-12-06 08:01:47.334521528 +0000 UTC m=+0.069637170 container create 8b2a3067a27bd6b49f9dbb8df6161995efac0692c8b0ae1e24e19c6a18294981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 06 08:01:47 compute-0 nova_compute[251992]: 2025-12-06 08:01:47.338 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:47 compute-0 systemd[1]: Started libpod-conmon-8b2a3067a27bd6b49f9dbb8df6161995efac0692c8b0ae1e24e19c6a18294981.scope.
Dec 06 08:01:47 compute-0 podman[378854]: 2025-12-06 08:01:47.308268689 +0000 UTC m=+0.043384341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:01:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e54bfdcb1d328a630e0d4ee8e5874aa2f925d298877adbb300a586277efea19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e54bfdcb1d328a630e0d4ee8e5874aa2f925d298877adbb300a586277efea19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e54bfdcb1d328a630e0d4ee8e5874aa2f925d298877adbb300a586277efea19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e54bfdcb1d328a630e0d4ee8e5874aa2f925d298877adbb300a586277efea19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e54bfdcb1d328a630e0d4ee8e5874aa2f925d298877adbb300a586277efea19/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:47 compute-0 podman[378854]: 2025-12-06 08:01:47.436592172 +0000 UTC m=+0.171707864 container init 8b2a3067a27bd6b49f9dbb8df6161995efac0692c8b0ae1e24e19c6a18294981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:01:47 compute-0 podman[378854]: 2025-12-06 08:01:47.450762254 +0000 UTC m=+0.185877886 container start 8b2a3067a27bd6b49f9dbb8df6161995efac0692c8b0ae1e24e19c6a18294981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 08:01:47 compute-0 podman[378854]: 2025-12-06 08:01:47.454829953 +0000 UTC m=+0.189945575 container attach 8b2a3067a27bd6b49f9dbb8df6161995efac0692c8b0ae1e24e19c6a18294981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:01:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:47 compute-0 nova_compute[251992]: 2025-12-06 08:01:47.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:47 compute-0 nova_compute[251992]: 2025-12-06 08:01:47.761 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:48 compute-0 lucid_hodgkin[378870]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:01:48 compute-0 lucid_hodgkin[378870]: --> relative data size: 1.0
Dec 06 08:01:48 compute-0 lucid_hodgkin[378870]: --> All data devices are unavailable
Dec 06 08:01:48 compute-0 systemd[1]: libpod-8b2a3067a27bd6b49f9dbb8df6161995efac0692c8b0ae1e24e19c6a18294981.scope: Deactivated successfully.
Dec 06 08:01:48 compute-0 podman[378885]: 2025-12-06 08:01:48.312906658 +0000 UTC m=+0.025571371 container died 8b2a3067a27bd6b49f9dbb8df6161995efac0692c8b0ae1e24e19c6a18294981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:01:48 compute-0 sudo[378896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:48 compute-0 sudo[378896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:48 compute-0 sudo[378896]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:48 compute-0 sudo[378921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:48 compute-0 sudo[378921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:48 compute-0 sudo[378921]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e54bfdcb1d328a630e0d4ee8e5874aa2f925d298877adbb300a586277efea19-merged.mount: Deactivated successfully.
Dec 06 08:01:48 compute-0 podman[378885]: 2025-12-06 08:01:48.550994023 +0000 UTC m=+0.263658716 container remove 8b2a3067a27bd6b49f9dbb8df6161995efac0692c8b0ae1e24e19c6a18294981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 08:01:48 compute-0 systemd[1]: libpod-conmon-8b2a3067a27bd6b49f9dbb8df6161995efac0692c8b0ae1e24e19c6a18294981.scope: Deactivated successfully.
Dec 06 08:01:48 compute-0 sudo[378745]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:48 compute-0 sudo[378948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:48 compute-0 sudo[378948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:48 compute-0 sudo[378948]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:48.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:48 compute-0 sudo[378973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:01:48 compute-0 sudo[378973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:48 compute-0 sudo[378973]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:48 compute-0 sudo[378998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:48 compute-0 sudo[378998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:48 compute-0 sudo[378998]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:48 compute-0 sudo[379023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:01:48 compute-0 sudo[379023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:49 compute-0 podman[379090]: 2025-12-06 08:01:49.144008324 +0000 UTC m=+0.041811439 container create d74c11f2c74b7925aa88093777e32ea1eda511741976d280d65323320fd70fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 08:01:49 compute-0 systemd[1]: Started libpod-conmon-d74c11f2c74b7925aa88093777e32ea1eda511741976d280d65323320fd70fb2.scope.
Dec 06 08:01:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:01:49 compute-0 podman[379090]: 2025-12-06 08:01:49.12605101 +0000 UTC m=+0.023854155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:01:49 compute-0 podman[379090]: 2025-12-06 08:01:49.225935255 +0000 UTC m=+0.123738370 container init d74c11f2c74b7925aa88093777e32ea1eda511741976d280d65323320fd70fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_banzai, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:01:49 compute-0 podman[379090]: 2025-12-06 08:01:49.2316587 +0000 UTC m=+0.129461815 container start d74c11f2c74b7925aa88093777e32ea1eda511741976d280d65323320fd70fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_banzai, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 08:01:49 compute-0 podman[379090]: 2025-12-06 08:01:49.234470036 +0000 UTC m=+0.132273161 container attach d74c11f2c74b7925aa88093777e32ea1eda511741976d280d65323320fd70fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:01:49 compute-0 recursing_banzai[379106]: 167 167
Dec 06 08:01:49 compute-0 systemd[1]: libpod-d74c11f2c74b7925aa88093777e32ea1eda511741976d280d65323320fd70fb2.scope: Deactivated successfully.
Dec 06 08:01:49 compute-0 podman[379090]: 2025-12-06 08:01:49.236762368 +0000 UTC m=+0.134565483 container died d74c11f2c74b7925aa88093777e32ea1eda511741976d280d65323320fd70fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 08:01:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a57431700e299a9795d0dc89e2355c9808643282bb35aa8798487926a03acff5-merged.mount: Deactivated successfully.
Dec 06 08:01:49 compute-0 podman[379090]: 2025-12-06 08:01:49.27428682 +0000 UTC m=+0.172089935 container remove d74c11f2c74b7925aa88093777e32ea1eda511741976d280d65323320fd70fb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_banzai, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:01:49 compute-0 systemd[1]: libpod-conmon-d74c11f2c74b7925aa88093777e32ea1eda511741976d280d65323320fd70fb2.scope: Deactivated successfully.
Dec 06 08:01:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:49.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:49 compute-0 ceph-mon[74339]: pgmap v3269: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:49 compute-0 podman[379132]: 2025-12-06 08:01:49.426074755 +0000 UTC m=+0.040774890 container create 44ee0208faa6798f8eaf96e40771435e0b81e8e4b130594117e40adfb0666e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:01:49 compute-0 systemd[1]: Started libpod-conmon-44ee0208faa6798f8eaf96e40771435e0b81e8e4b130594117e40adfb0666e93.scope.
Dec 06 08:01:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5eb8e590781fc4ce0d543d62cf8232520b023912c82086d42326cbd5fb4ed37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5eb8e590781fc4ce0d543d62cf8232520b023912c82086d42326cbd5fb4ed37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5eb8e590781fc4ce0d543d62cf8232520b023912c82086d42326cbd5fb4ed37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5eb8e590781fc4ce0d543d62cf8232520b023912c82086d42326cbd5fb4ed37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:49 compute-0 podman[379132]: 2025-12-06 08:01:49.405558482 +0000 UTC m=+0.020258637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:01:49 compute-0 podman[379132]: 2025-12-06 08:01:49.531246864 +0000 UTC m=+0.145946999 container init 44ee0208faa6798f8eaf96e40771435e0b81e8e4b130594117e40adfb0666e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hodgkin, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:01:49 compute-0 podman[379132]: 2025-12-06 08:01:49.537950994 +0000 UTC m=+0.152651149 container start 44ee0208faa6798f8eaf96e40771435e0b81e8e4b130594117e40adfb0666e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hodgkin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 08:01:49 compute-0 podman[379132]: 2025-12-06 08:01:49.541839059 +0000 UTC m=+0.156539224 container attach 44ee0208faa6798f8eaf96e40771435e0b81e8e4b130594117e40adfb0666e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]: {
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:     "0": [
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:         {
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             "devices": [
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "/dev/loop3"
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             ],
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             "lv_name": "ceph_lv0",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             "lv_size": "7511998464",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             "name": "ceph_lv0",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             "tags": {
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "ceph.cluster_name": "ceph",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "ceph.crush_device_class": "",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "ceph.encrypted": "0",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "ceph.osd_id": "0",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "ceph.type": "block",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:                 "ceph.vdo": "0"
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             },
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             "type": "block",
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:             "vg_name": "ceph_vg0"
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:         }
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]:     ]
Dec 06 08:01:50 compute-0 flamboyant_hodgkin[379148]: }
Dec 06 08:01:50 compute-0 systemd[1]: libpod-44ee0208faa6798f8eaf96e40771435e0b81e8e4b130594117e40adfb0666e93.scope: Deactivated successfully.
Dec 06 08:01:50 compute-0 podman[379132]: 2025-12-06 08:01:50.308276291 +0000 UTC m=+0.922976426 container died 44ee0208faa6798f8eaf96e40771435e0b81e8e4b130594117e40adfb0666e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:01:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5eb8e590781fc4ce0d543d62cf8232520b023912c82086d42326cbd5fb4ed37-merged.mount: Deactivated successfully.
Dec 06 08:01:50 compute-0 podman[379132]: 2025-12-06 08:01:50.366132792 +0000 UTC m=+0.980832927 container remove 44ee0208faa6798f8eaf96e40771435e0b81e8e4b130594117e40adfb0666e93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hodgkin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:01:50 compute-0 systemd[1]: libpod-conmon-44ee0208faa6798f8eaf96e40771435e0b81e8e4b130594117e40adfb0666e93.scope: Deactivated successfully.
Dec 06 08:01:50 compute-0 sudo[379023]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:50 compute-0 sudo[379173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:50 compute-0 sudo[379173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:50 compute-0 sudo[379173]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:50 compute-0 sudo[379198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:01:50 compute-0 sudo[379198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:50 compute-0 sudo[379198]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:50 compute-0 sudo[379223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:50 compute-0 sudo[379223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:50 compute-0 sudo[379223]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:50 compute-0 sudo[379248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:01:50 compute-0 sudo[379248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:50.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:50 compute-0 ceph-mon[74339]: pgmap v3270: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:50 compute-0 podman[379314]: 2025-12-06 08:01:50.935253189 +0000 UTC m=+0.037257286 container create 34f3a872a21352d284f8fc8c43edcd28fcd6cc3f7dd8ea0da4426d3f05283a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:01:50 compute-0 systemd[1]: Started libpod-conmon-34f3a872a21352d284f8fc8c43edcd28fcd6cc3f7dd8ea0da4426d3f05283a4b.scope.
Dec 06 08:01:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:01:51 compute-0 podman[379314]: 2025-12-06 08:01:50.916928024 +0000 UTC m=+0.018932131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:01:51 compute-0 podman[379314]: 2025-12-06 08:01:51.019194224 +0000 UTC m=+0.121198321 container init 34f3a872a21352d284f8fc8c43edcd28fcd6cc3f7dd8ea0da4426d3f05283a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 08:01:51 compute-0 podman[379314]: 2025-12-06 08:01:51.024890698 +0000 UTC m=+0.126894775 container start 34f3a872a21352d284f8fc8c43edcd28fcd6cc3f7dd8ea0da4426d3f05283a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 08:01:51 compute-0 podman[379314]: 2025-12-06 08:01:51.028001302 +0000 UTC m=+0.130005379 container attach 34f3a872a21352d284f8fc8c43edcd28fcd6cc3f7dd8ea0da4426d3f05283a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:01:51 compute-0 epic_buck[379330]: 167 167
Dec 06 08:01:51 compute-0 systemd[1]: libpod-34f3a872a21352d284f8fc8c43edcd28fcd6cc3f7dd8ea0da4426d3f05283a4b.scope: Deactivated successfully.
Dec 06 08:01:51 compute-0 podman[379314]: 2025-12-06 08:01:51.031161017 +0000 UTC m=+0.133165104 container died 34f3a872a21352d284f8fc8c43edcd28fcd6cc3f7dd8ea0da4426d3f05283a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 08:01:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c72470f37786c9be187918fddde33c21f4c61dfbacdd64b67ab733cc3260dde-merged.mount: Deactivated successfully.
Dec 06 08:01:51 compute-0 podman[379314]: 2025-12-06 08:01:51.084130866 +0000 UTC m=+0.186134943 container remove 34f3a872a21352d284f8fc8c43edcd28fcd6cc3f7dd8ea0da4426d3f05283a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_buck, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 08:01:51 compute-0 systemd[1]: libpod-conmon-34f3a872a21352d284f8fc8c43edcd28fcd6cc3f7dd8ea0da4426d3f05283a4b.scope: Deactivated successfully.
Dec 06 08:01:51 compute-0 podman[379354]: 2025-12-06 08:01:51.230207388 +0000 UTC m=+0.037191665 container create 062b2749971eb29ad76b2d04a813264b42503163feb706ea67b3b91303189c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:01:51 compute-0 systemd[1]: Started libpod-conmon-062b2749971eb29ad76b2d04a813264b42503163feb706ea67b3b91303189c55.scope.
Dec 06 08:01:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1933099b476fb11eab848a3f9b87f6af41a8b223a7ab040b260dfe61fbb2a3ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1933099b476fb11eab848a3f9b87f6af41a8b223a7ab040b260dfe61fbb2a3ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1933099b476fb11eab848a3f9b87f6af41a8b223a7ab040b260dfe61fbb2a3ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1933099b476fb11eab848a3f9b87f6af41a8b223a7ab040b260dfe61fbb2a3ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:01:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:51.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:51 compute-0 podman[379354]: 2025-12-06 08:01:51.213473306 +0000 UTC m=+0.020457603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:01:51 compute-0 podman[379354]: 2025-12-06 08:01:51.312238261 +0000 UTC m=+0.119222588 container init 062b2749971eb29ad76b2d04a813264b42503163feb706ea67b3b91303189c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:01:51 compute-0 podman[379354]: 2025-12-06 08:01:51.318707626 +0000 UTC m=+0.125691903 container start 062b2749971eb29ad76b2d04a813264b42503163feb706ea67b3b91303189c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 08:01:51 compute-0 podman[379354]: 2025-12-06 08:01:51.325389546 +0000 UTC m=+0.132373843 container attach 062b2749971eb29ad76b2d04a813264b42503163feb706ea67b3b91303189c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 08:01:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:51 compute-0 nova_compute[251992]: 2025-12-06 08:01:51.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:52 compute-0 funny_ritchie[379370]: {
Dec 06 08:01:52 compute-0 funny_ritchie[379370]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:01:52 compute-0 funny_ritchie[379370]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:01:52 compute-0 funny_ritchie[379370]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:01:52 compute-0 funny_ritchie[379370]:         "osd_id": 0,
Dec 06 08:01:52 compute-0 funny_ritchie[379370]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:01:52 compute-0 funny_ritchie[379370]:         "type": "bluestore"
Dec 06 08:01:52 compute-0 funny_ritchie[379370]:     }
Dec 06 08:01:52 compute-0 funny_ritchie[379370]: }
Dec 06 08:01:52 compute-0 systemd[1]: libpod-062b2749971eb29ad76b2d04a813264b42503163feb706ea67b3b91303189c55.scope: Deactivated successfully.
Dec 06 08:01:52 compute-0 podman[379354]: 2025-12-06 08:01:52.152943347 +0000 UTC m=+0.959927624 container died 062b2749971eb29ad76b2d04a813264b42503163feb706ea67b3b91303189c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:01:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-1933099b476fb11eab848a3f9b87f6af41a8b223a7ab040b260dfe61fbb2a3ac-merged.mount: Deactivated successfully.
Dec 06 08:01:52 compute-0 podman[379354]: 2025-12-06 08:01:52.217350655 +0000 UTC m=+1.024334932 container remove 062b2749971eb29ad76b2d04a813264b42503163feb706ea67b3b91303189c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:01:52 compute-0 systemd[1]: libpod-conmon-062b2749971eb29ad76b2d04a813264b42503163feb706ea67b3b91303189c55.scope: Deactivated successfully.
Dec 06 08:01:52 compute-0 sudo[379248]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:01:52 compute-0 podman[379391]: 2025-12-06 08:01:52.289030459 +0000 UTC m=+0.103533725 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 06 08:01:52 compute-0 nova_compute[251992]: 2025-12-06 08:01:52.341 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:52.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:52 compute-0 nova_compute[251992]: 2025-12-06 08:01:52.764 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.004000107s ======
Dec 06 08:01:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:53.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000107s
Dec 06 08:01:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:53 compute-0 nova_compute[251992]: 2025-12-06 08:01:53.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:01:53 compute-0 nova_compute[251992]: 2025-12-06 08:01:53.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:01:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:54 compute-0 ceph-mon[74339]: pgmap v3271: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:01:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:01:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:01:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:54.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:01:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:01:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e34f1572-d911-4083-9606-f837b2ce0c50 does not exist
Dec 06 08:01:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6e061d62-d3d7-4f77-b825-4869f25ee359 does not exist
Dec 06 08:01:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 149eb563-5615-416d-b7b2-6dab0030b1d1 does not exist
Dec 06 08:01:54 compute-0 sudo[379430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:01:54 compute-0 sudo[379430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:54 compute-0 sudo[379430]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:55 compute-0 sudo[379455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:01:55 compute-0 sudo[379455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:01:55 compute-0 sudo[379455]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:55 compute-0 ceph-mon[74339]: pgmap v3272: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:01:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:01:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2554890083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:01:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:01:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:55.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:01:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:56.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:57.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:57 compute-0 nova_compute[251992]: 2025-12-06 08:01:57.344 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:57 compute-0 nova_compute[251992]: 2025-12-06 08:01:57.766 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:01:58 compute-0 podman[379482]: 2025-12-06 08:01:58.391193989 +0000 UTC m=+0.053041712 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 06 08:01:58 compute-0 podman[379483]: 2025-12-06 08:01:58.396158023 +0000 UTC m=+0.057762209 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 08:01:58 compute-0 ceph-mon[74339]: pgmap v3273: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:01:58.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:01:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:01:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:01:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:01:59.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:01:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:01:59 compute-0 ceph-mon[74339]: pgmap v3274: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:02:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:00.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:01.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:01 compute-0 ceph-mon[74339]: pgmap v3275: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:02:02 compute-0 nova_compute[251992]: 2025-12-06 08:02:02.348 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:02.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:02 compute-0 nova_compute[251992]: 2025-12-06 08:02:02.767 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:03 compute-0 ceph-mon[74339]: pgmap v3276: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:03.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:03.874 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:03.875 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:03.875 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:04.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:05.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:06 compute-0 ceph-mon[74339]: pgmap v3277: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3338388719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:06.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:07.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:07 compute-0 nova_compute[251992]: 2025-12-06 08:02:07.351 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/513943934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:07 compute-0 ceph-mon[74339]: pgmap v3278: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:07 compute-0 nova_compute[251992]: 2025-12-06 08:02:07.770 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:08 compute-0 sudo[379526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:08 compute-0 sudo[379526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:08 compute-0 sudo[379526]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:08 compute-0 sudo[379551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:08 compute-0 sudo[379551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:08 compute-0 sudo[379551]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:08.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:09.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:09 compute-0 ceph-mon[74339]: pgmap v3279: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2644919661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:02:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2644919661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:02:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:10.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:10 compute-0 ceph-mon[74339]: pgmap v3280: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:02:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:11.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 785 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 06 08:02:12 compute-0 ovn_controller[147168]: 2025-12-06T08:02:12Z|00702|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec 06 08:02:12 compute-0 nova_compute[251992]: 2025-12-06 08:02:12.354 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:12 compute-0 ceph-mon[74339]: pgmap v3281: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 785 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 06 08:02:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:12.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:12 compute-0 nova_compute[251992]: 2025-12-06 08:02:12.772 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:02:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:02:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:02:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:02:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:02:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:02:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:02:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:13.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:02:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 768 KiB/s rd, 12 KiB/s wr, 35 op/s
Dec 06 08:02:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:14.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:14 compute-0 ceph-mon[74339]: pgmap v3282: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 768 KiB/s rd, 12 KiB/s wr, 35 op/s
Dec 06 08:02:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:15.036 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:02:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:15.036 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:02:15 compute-0 nova_compute[251992]: 2025-12-06 08:02:15.037 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:02:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:15.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:02:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:02:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000054s ======
Dec 06 08:02:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:16.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Dec 06 08:02:16 compute-0 ceph-mon[74339]: pgmap v3283: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:02:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1528343113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:17.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:17 compute-0 nova_compute[251992]: 2025-12-06 08:02:17.357 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:02:17 compute-0 nova_compute[251992]: 2025-12-06 08:02:17.772 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:02:18
Dec 06 08:02:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:02:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:02:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', 'default.rgw.log', 'backups', 'default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.mgr']
Dec 06 08:02:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:02:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:18.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:18 compute-0 ceph-mon[74339]: pgmap v3284: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:02:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:19.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:02:20 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:20.038 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:02:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:20.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:02:21 compute-0 ceph-mon[74339]: pgmap v3285: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:02:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:21.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 219 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Dec 06 08:02:22 compute-0 nova_compute[251992]: 2025-12-06 08:02:22.360 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:22 compute-0 podman[379583]: 2025-12-06 08:02:22.40740141 +0000 UTC m=+0.070746240 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:02:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:22.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:22 compute-0 nova_compute[251992]: 2025-12-06 08:02:22.774 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:23 compute-0 ceph-mon[74339]: pgmap v3286: 305 pgs: 305 active+clean; 219 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Dec 06 08:02:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:02:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:23.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:02:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 219 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 72 op/s
Dec 06 08:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:02:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:02:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2139978182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:24.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:25 compute-0 ceph-mon[74339]: pgmap v3287: 305 pgs: 305 active+clean; 219 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 72 op/s
Dec 06 08:02:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3856105960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:25.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 244 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 115 op/s
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031481554182951794 of space, bias 1.0, pg target 0.9444466254885538 quantized to 32 (current 32)
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:02:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:02:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:26.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:02:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:02:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:02:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:02:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:02:27 compute-0 ceph-mon[74339]: pgmap v3288: 305 pgs: 305 active+clean; 244 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 115 op/s
Dec 06 08:02:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:27.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:27 compute-0 nova_compute[251992]: 2025-12-06 08:02:27.364 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 06 08:02:27 compute-0 nova_compute[251992]: 2025-12-06 08:02:27.777 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:28 compute-0 sudo[379613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:28 compute-0 sudo[379613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:28 compute-0 sudo[379613]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:28 compute-0 podman[379637]: 2025-12-06 08:02:28.747285854 +0000 UTC m=+0.051661395 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:02:28 compute-0 sudo[379650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:28 compute-0 sudo[379650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:28 compute-0 sudo[379650]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:28 compute-0 podman[379638]: 2025-12-06 08:02:28.75417282 +0000 UTC m=+0.054208243 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 08:02:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:28.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:29 compute-0 ceph-mon[74339]: pgmap v3289: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 06 08:02:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:02:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:29.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:02:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 06 08:02:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:30.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:31 compute-0 ceph-mon[74339]: pgmap v3290: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 06 08:02:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:02:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:31.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:02:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Dec 06 08:02:32 compute-0 nova_compute[251992]: 2025-12-06 08:02:32.366 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3778702547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:02:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:32.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:02:32 compute-0 nova_compute[251992]: 2025-12-06 08:02:32.778 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:33.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec 06 08:02:33 compute-0 ceph-mon[74339]: pgmap v3291: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Dec 06 08:02:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2859200518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:34.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:34 compute-0 ceph-mon[74339]: pgmap v3292: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec 06 08:02:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:35.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec 06 08:02:35 compute-0 nova_compute[251992]: 2025-12-06 08:02:35.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:35 compute-0 nova_compute[251992]: 2025-12-06 08:02:35.693 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:35 compute-0 nova_compute[251992]: 2025-12-06 08:02:35.693 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:35 compute-0 nova_compute[251992]: 2025-12-06 08:02:35.693 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:35 compute-0 nova_compute[251992]: 2025-12-06 08:02:35.693 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:02:35 compute-0 nova_compute[251992]: 2025-12-06 08:02:35.694 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:02:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2405441602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.115 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.271 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.273 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4205MB free_disk=20.921833038330078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.273 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.273 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.361 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.362 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.519 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:36.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:36 compute-0 ceph-mon[74339]: pgmap v3293: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec 06 08:02:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2405441602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:02:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2406994185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.951 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.958 251996 DEBUG nova.compute.manager [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.963 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.984 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.985 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:02:36 compute-0 nova_compute[251992]: 2025-12-06 08:02:36.985 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.074 251996 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.075 251996 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.101 251996 DEBUG nova.objects.instance [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'pci_requests' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.117 251996 DEBUG nova.virt.hardware [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.118 251996 INFO nova.compute.claims [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.118 251996 DEBUG nova.objects.instance [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'resources' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.136 251996 DEBUG nova.objects.instance [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'numa_topology' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.155 251996 DEBUG nova.objects.instance [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'pci_devices' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.266 251996 INFO nova.compute.resource_tracker [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating resource usage from migration c6765a8d-aa3f-4525-b048-7289f8cbbc8a
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.267 251996 DEBUG nova.compute.resource_tracker [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Starting to track incoming migration c6765a8d-aa3f-4525-b048-7289f8cbbc8a with flavor 25848a18-11d9-4f11-80b5-5d005675c76d _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.342 251996 DEBUG oslo_concurrency.processutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:37.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.370 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 73 KiB/s wr, 87 op/s
Dec 06 08:02:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:02:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1008802938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.763 251996 DEBUG oslo_concurrency.processutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.768 251996 DEBUG nova.compute.provider_tree [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.780 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.788 251996 DEBUG nova.scheduler.client.report [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.815 251996 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:37 compute-0 nova_compute[251992]: 2025-12-06 08:02:37.816 251996 INFO nova.compute.manager [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Migrating
Dec 06 08:02:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2406994185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1008802938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:38.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:38 compute-0 nova_compute[251992]: 2025-12-06 08:02:38.979 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:39 compute-0 ceph-mon[74339]: pgmap v3294: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 73 KiB/s wr, 87 op/s
Dec 06 08:02:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:39.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Dec 06 08:02:40 compute-0 nova_compute[251992]: 2025-12-06 08:02:40.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:40.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:40 compute-0 ceph-mon[74339]: pgmap v3295: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Dec 06 08:02:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:41.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 272 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Dec 06 08:02:42 compute-0 nova_compute[251992]: 2025-12-06 08:02:42.393 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:42.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:42 compute-0 nova_compute[251992]: 2025-12-06 08:02:42.782 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:02:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:02:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:02:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:02:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:02:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:02:43 compute-0 sshd-session[379775]: Accepted publickey for nova from 192.168.122.101 port 49474 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 08:02:43 compute-0 systemd[1]: Created slice User Slice of UID 42436.
Dec 06 08:02:43 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec 06 08:02:43 compute-0 systemd-logind[798]: New session 62 of user nova.
Dec 06 08:02:43 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec 06 08:02:43 compute-0 systemd[1]: Starting User Manager for UID 42436...
Dec 06 08:02:43 compute-0 systemd[379780]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 08:02:43 compute-0 ceph-mon[74339]: pgmap v3296: 305 pgs: 305 active+clean; 272 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Dec 06 08:02:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2170586488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:43 compute-0 systemd[379780]: Queued start job for default target Main User Target.
Dec 06 08:02:43 compute-0 systemd[379780]: Created slice User Application Slice.
Dec 06 08:02:43 compute-0 systemd[379780]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 08:02:43 compute-0 systemd[379780]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 08:02:43 compute-0 systemd[379780]: Reached target Paths.
Dec 06 08:02:43 compute-0 systemd[379780]: Reached target Timers.
Dec 06 08:02:43 compute-0 systemd[379780]: Starting D-Bus User Message Bus Socket...
Dec 06 08:02:43 compute-0 systemd[379780]: Starting Create User's Volatile Files and Directories...
Dec 06 08:02:43 compute-0 systemd[379780]: Listening on D-Bus User Message Bus Socket.
Dec 06 08:02:43 compute-0 systemd[379780]: Reached target Sockets.
Dec 06 08:02:43 compute-0 systemd[379780]: Finished Create User's Volatile Files and Directories.
Dec 06 08:02:43 compute-0 systemd[379780]: Reached target Basic System.
Dec 06 08:02:43 compute-0 systemd[379780]: Reached target Main User Target.
Dec 06 08:02:43 compute-0 systemd[379780]: Startup finished in 165ms.
Dec 06 08:02:43 compute-0 systemd[1]: Started User Manager for UID 42436.
Dec 06 08:02:43 compute-0 systemd[1]: Started Session 62 of User nova.
Dec 06 08:02:43 compute-0 sshd-session[379775]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 08:02:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:43.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:43 compute-0 sshd-session[379795]: Received disconnect from 192.168.122.101 port 49474:11: disconnected by user
Dec 06 08:02:43 compute-0 sshd-session[379795]: Disconnected from user nova 192.168.122.101 port 49474
Dec 06 08:02:43 compute-0 sshd-session[379775]: pam_unix(sshd:session): session closed for user nova
Dec 06 08:02:43 compute-0 systemd[1]: session-62.scope: Deactivated successfully.
Dec 06 08:02:43 compute-0 systemd-logind[798]: Session 62 logged out. Waiting for processes to exit.
Dec 06 08:02:43 compute-0 systemd-logind[798]: Removed session 62.
Dec 06 08:02:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 272 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 240 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Dec 06 08:02:43 compute-0 sshd-session[379797]: Accepted publickey for nova from 192.168.122.101 port 42236 ssh2: ECDSA SHA256:5h97iTzAu3mBuYSMbk8G6sKxagpkfKREMv90u9x0+T0
Dec 06 08:02:43 compute-0 systemd-logind[798]: New session 64 of user nova.
Dec 06 08:02:43 compute-0 systemd[1]: Started Session 64 of User nova.
Dec 06 08:02:43 compute-0 sshd-session[379797]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Dec 06 08:02:43 compute-0 sshd-session[379800]: Received disconnect from 192.168.122.101 port 42236:11: disconnected by user
Dec 06 08:02:43 compute-0 sshd-session[379800]: Disconnected from user nova 192.168.122.101 port 42236
Dec 06 08:02:43 compute-0 sshd-session[379797]: pam_unix(sshd:session): session closed for user nova
Dec 06 08:02:43 compute-0 systemd[1]: session-64.scope: Deactivated successfully.
Dec 06 08:02:43 compute-0 systemd-logind[798]: Session 64 logged out. Waiting for processes to exit.
Dec 06 08:02:43 compute-0 systemd-logind[798]: Removed session 64.
Dec 06 08:02:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/702947871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:02:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:44.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:45.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 277 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 296 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Dec 06 08:02:45 compute-0 nova_compute[251992]: 2025-12-06 08:02:45.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:45 compute-0 nova_compute[251992]: 2025-12-06 08:02:45.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:02:45 compute-0 nova_compute[251992]: 2025-12-06 08:02:45.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:02:45 compute-0 nova_compute[251992]: 2025-12-06 08:02:45.676 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:02:45 compute-0 nova_compute[251992]: 2025-12-06 08:02:45.676 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:46 compute-0 ceph-mon[74339]: pgmap v3297: 305 pgs: 305 active+clean; 272 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 240 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Dec 06 08:02:46 compute-0 nova_compute[251992]: 2025-12-06 08:02:46.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:46 compute-0 nova_compute[251992]: 2025-12-06 08:02:46.694 251996 DEBUG nova.compute.manager [req-f857d680-8365-49cf-a9ef-71e5a516ddd6 req-b2243d07-230b-4309-adb4-819697e66749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-unplugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:02:46 compute-0 nova_compute[251992]: 2025-12-06 08:02:46.695 251996 DEBUG oslo_concurrency.lockutils [req-f857d680-8365-49cf-a9ef-71e5a516ddd6 req-b2243d07-230b-4309-adb4-819697e66749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:46 compute-0 nova_compute[251992]: 2025-12-06 08:02:46.695 251996 DEBUG oslo_concurrency.lockutils [req-f857d680-8365-49cf-a9ef-71e5a516ddd6 req-b2243d07-230b-4309-adb4-819697e66749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:46 compute-0 nova_compute[251992]: 2025-12-06 08:02:46.696 251996 DEBUG oslo_concurrency.lockutils [req-f857d680-8365-49cf-a9ef-71e5a516ddd6 req-b2243d07-230b-4309-adb4-819697e66749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:46 compute-0 nova_compute[251992]: 2025-12-06 08:02:46.696 251996 DEBUG nova.compute.manager [req-f857d680-8365-49cf-a9ef-71e5a516ddd6 req-b2243d07-230b-4309-adb4-819697e66749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-unplugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:02:46 compute-0 nova_compute[251992]: 2025-12-06 08:02:46.696 251996 WARNING nova.compute.manager [req-f857d680-8365-49cf-a9ef-71e5a516ddd6 req-b2243d07-230b-4309-adb4-819697e66749 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-unplugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state active and task_state resize_migrating.
Dec 06 08:02:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:46.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:47.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:47 compute-0 nova_compute[251992]: 2025-12-06 08:02:47.397 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:02:47 compute-0 nova_compute[251992]: 2025-12-06 08:02:47.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:47 compute-0 nova_compute[251992]: 2025-12-06 08:02:47.816 251996 INFO nova.network.neutron [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating port 7be900e8-79cd-473a-8f1d-df5029d9e773 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Dec 06 08:02:47 compute-0 nova_compute[251992]: 2025-12-06 08:02:47.820 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:47 compute-0 ceph-mon[74339]: pgmap v3298: 305 pgs: 305 active+clean; 277 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 296 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Dec 06 08:02:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:48.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:48 compute-0 sudo[379804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:48 compute-0 sudo[379804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:48 compute-0 sudo[379804]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:48 compute-0 sudo[379829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:48 compute-0 sudo[379829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:48 compute-0 sudo[379829]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:48 compute-0 ceph-mon[74339]: pgmap v3299: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:02:48 compute-0 nova_compute[251992]: 2025-12-06 08:02:48.949 251996 DEBUG nova.compute.manager [req-d5906808-cd1c-4475-86ba-9093020b1480 req-5f8d2e00-90f7-462f-9d27-b3bf6bcf13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:02:48 compute-0 nova_compute[251992]: 2025-12-06 08:02:48.950 251996 DEBUG oslo_concurrency.lockutils [req-d5906808-cd1c-4475-86ba-9093020b1480 req-5f8d2e00-90f7-462f-9d27-b3bf6bcf13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:48 compute-0 nova_compute[251992]: 2025-12-06 08:02:48.950 251996 DEBUG oslo_concurrency.lockutils [req-d5906808-cd1c-4475-86ba-9093020b1480 req-5f8d2e00-90f7-462f-9d27-b3bf6bcf13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:48 compute-0 nova_compute[251992]: 2025-12-06 08:02:48.950 251996 DEBUG oslo_concurrency.lockutils [req-d5906808-cd1c-4475-86ba-9093020b1480 req-5f8d2e00-90f7-462f-9d27-b3bf6bcf13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:48 compute-0 nova_compute[251992]: 2025-12-06 08:02:48.951 251996 DEBUG nova.compute.manager [req-d5906808-cd1c-4475-86ba-9093020b1480 req-5f8d2e00-90f7-462f-9d27-b3bf6bcf13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:02:48 compute-0 nova_compute[251992]: 2025-12-06 08:02:48.951 251996 WARNING nova.compute.manager [req-d5906808-cd1c-4475-86ba-9093020b1480 req-5f8d2e00-90f7-462f-9d27-b3bf6bcf13a1 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state active and task_state resize_migrated.
Dec 06 08:02:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:49 compute-0 nova_compute[251992]: 2025-12-06 08:02:49.299 251996 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:02:49 compute-0 nova_compute[251992]: 2025-12-06 08:02:49.299 251996 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:02:49 compute-0 nova_compute[251992]: 2025-12-06 08:02:49.300 251996 DEBUG nova.network.neutron [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:02:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:02:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:49.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:02:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:02:49 compute-0 nova_compute[251992]: 2025-12-06 08:02:49.524 251996 DEBUG nova.compute.manager [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:02:49 compute-0 nova_compute[251992]: 2025-12-06 08:02:49.525 251996 DEBUG nova.compute.manager [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing instance network info cache due to event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:02:49 compute-0 nova_compute[251992]: 2025-12-06 08:02:49.526 251996 DEBUG oslo_concurrency.lockutils [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:02:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:50.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:50 compute-0 ceph-mon[74339]: pgmap v3300: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:02:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:51.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Dec 06 08:02:52 compute-0 nova_compute[251992]: 2025-12-06 08:02:52.444 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:52.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:52 compute-0 nova_compute[251992]: 2025-12-06 08:02:52.820 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:53 compute-0 ceph-mon[74339]: pgmap v3301: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Dec 06 08:02:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:53.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:53 compute-0 podman[379857]: 2025-12-06 08:02:53.434038829 +0000 UTC m=+0.091557072 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:02:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 102 KiB/s wr, 17 op/s
Dec 06 08:02:53 compute-0 nova_compute[251992]: 2025-12-06 08:02:53.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:53 compute-0 systemd[1]: Stopping User Manager for UID 42436...
Dec 06 08:02:53 compute-0 systemd[379780]: Activating special unit Exit the Session...
Dec 06 08:02:53 compute-0 systemd[379780]: Stopped target Main User Target.
Dec 06 08:02:53 compute-0 systemd[379780]: Stopped target Basic System.
Dec 06 08:02:53 compute-0 systemd[379780]: Stopped target Paths.
Dec 06 08:02:53 compute-0 systemd[379780]: Stopped target Sockets.
Dec 06 08:02:53 compute-0 systemd[379780]: Stopped target Timers.
Dec 06 08:02:53 compute-0 systemd[379780]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 08:02:53 compute-0 systemd[379780]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 08:02:53 compute-0 systemd[379780]: Closed D-Bus User Message Bus Socket.
Dec 06 08:02:53 compute-0 systemd[379780]: Stopped Create User's Volatile Files and Directories.
Dec 06 08:02:53 compute-0 systemd[379780]: Removed slice User Application Slice.
Dec 06 08:02:53 compute-0 systemd[379780]: Reached target Shutdown.
Dec 06 08:02:53 compute-0 systemd[379780]: Finished Exit the Session.
Dec 06 08:02:53 compute-0 systemd[379780]: Reached target Exit the Session.
Dec 06 08:02:53 compute-0 systemd[1]: user@42436.service: Deactivated successfully.
Dec 06 08:02:53 compute-0 systemd[1]: Stopped User Manager for UID 42436.
Dec 06 08:02:53 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec 06 08:02:53 compute-0 systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec 06 08:02:53 compute-0 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec 06 08:02:53 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec 06 08:02:53 compute-0 systemd[1]: Removed slice User Slice of UID 42436.
Dec 06 08:02:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:02:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:54.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:02:55 compute-0 nova_compute[251992]: 2025-12-06 08:02:55.270 251996 DEBUG nova.network.neutron [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:02:55 compute-0 nova_compute[251992]: 2025-12-06 08:02:55.317 251996 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:02:55 compute-0 nova_compute[251992]: 2025-12-06 08:02:55.322 251996 DEBUG oslo_concurrency.lockutils [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:02:55 compute-0 nova_compute[251992]: 2025-12-06 08:02:55.323 251996 DEBUG nova.network.neutron [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:02:55 compute-0 sudo[379885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:55 compute-0 sudo[379885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:55 compute-0 sudo[379885]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:55 compute-0 ceph-mon[74339]: pgmap v3302: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 102 KiB/s wr, 17 op/s
Dec 06 08:02:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:55.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:55 compute-0 sudo[379910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:02:55 compute-0 sudo[379910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:55 compute-0 sudo[379910]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:55 compute-0 sudo[379935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:55 compute-0 sudo[379935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:55 compute-0 sudo[379935]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:55 compute-0 sudo[379960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:02:55 compute-0 sudo[379960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 102 KiB/s wr, 17 op/s
Dec 06 08:02:55 compute-0 nova_compute[251992]: 2025-12-06 08:02:55.552 251996 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Dec 06 08:02:55 compute-0 nova_compute[251992]: 2025-12-06 08:02:55.554 251996 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Dec 06 08:02:55 compute-0 nova_compute[251992]: 2025-12-06 08:02:55.555 251996 INFO nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Creating image(s)
Dec 06 08:02:55 compute-0 nova_compute[251992]: 2025-12-06 08:02:55.602 251996 DEBUG nova.storage.rbd_utils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] creating snapshot(nova-resize) on rbd image(a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 06 08:02:55 compute-0 nova_compute[251992]: 2025-12-06 08:02:55.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:02:55 compute-0 nova_compute[251992]: 2025-12-06 08:02:55.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:02:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:02:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:02:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:02:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:02:56 compute-0 sudo[379960]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Dec 06 08:02:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Dec 06 08:02:56 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Dec 06 08:02:56 compute-0 nova_compute[251992]: 2025-12-06 08:02:56.428 251996 DEBUG nova.objects.instance [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:02:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:02:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:56.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:02:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:02:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:02:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:02:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:02:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:02:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:02:56 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e824448c-1314-464b-8f6b-a66552de87fc does not exist
Dec 06 08:02:56 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 33ab2e3e-fbf7-477e-85e9-53fbb94f7638 does not exist
Dec 06 08:02:56 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f8d748f6-af57-4ef9-bf8f-f3d7eeae9684 does not exist
Dec 06 08:02:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:02:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:02:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:02:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:02:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:02:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:02:56 compute-0 sudo[380089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:56 compute-0 sudo[380089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:56 compute-0 sudo[380089]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:56 compute-0 sudo[380114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:02:56 compute-0 sudo[380114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:56 compute-0 sudo[380114]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:57 compute-0 sudo[380139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:57 compute-0 sudo[380139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:57 compute-0 sudo[380139]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.070 251996 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.070 251996 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Ensure instance console log exists: /var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.071 251996 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.071 251996 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.072 251996 DEBUG oslo_concurrency.lockutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.074 251996 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Start _get_guest_xml network_info=[{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1508194701", "vif_mac": "fa:16:3e:eb:70:ef"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.081 251996 WARNING nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.086 251996 DEBUG nova.virt.libvirt.host [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.087 251996 DEBUG nova.virt.libvirt.host [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.090 251996 DEBUG nova.virt.libvirt.host [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.090 251996 DEBUG nova.virt.libvirt.host [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.092 251996 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:02:57 compute-0 sudo[380165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.092 251996 DEBUG nova.virt.hardware [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.093 251996 DEBUG nova.virt.hardware [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.094 251996 DEBUG nova.virt.hardware [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:02:57 compute-0 sudo[380165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.094 251996 DEBUG nova.virt.hardware [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.095 251996 DEBUG nova.virt.hardware [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.095 251996 DEBUG nova.virt.hardware [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.096 251996 DEBUG nova.virt.hardware [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.096 251996 DEBUG nova.virt.hardware [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.096 251996 DEBUG nova.virt.hardware [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.097 251996 DEBUG nova.virt.hardware [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.097 251996 DEBUG nova.virt.hardware [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.098 251996 DEBUG nova.objects.instance [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.254 251996 DEBUG oslo_concurrency.processutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:57 compute-0 ceph-mon[74339]: pgmap v3303: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 102 KiB/s wr, 17 op/s
Dec 06 08:02:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:02:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:02:57 compute-0 ceph-mon[74339]: osdmap e411: 3 total, 3 up, 3 in
Dec 06 08:02:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:02:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:02:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:02:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:02:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:02:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:02:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:57.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.448 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 204 B/s rd, 15 KiB/s wr, 0 op/s
Dec 06 08:02:57 compute-0 podman[380240]: 2025-12-06 08:02:57.439475511 +0000 UTC m=+0.028498481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:02:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:02:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200200904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.780 251996 DEBUG oslo_concurrency.processutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.826 251996 DEBUG oslo_concurrency.processutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:02:57 compute-0 nova_compute[251992]: 2025-12-06 08:02:57.852 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:57 compute-0 podman[380240]: 2025-12-06 08:02:57.975252688 +0000 UTC m=+0.564275618 container create 3b757eb0587d782e9ca5c45ea5c99f41f008c29f978a711cb9992fc6249e62df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feynman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 08:02:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:02:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16996870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.239 251996 DEBUG oslo_concurrency.processutils [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.243 251996 DEBUG nova.virt.libvirt.vif [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:01:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1312966031',display_name='tempest-TestNetworkAdvancedServerOps-server-1312966031',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1312966031',id=184,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJW2ZsdJB9rXQtXWcXFfgPIxPNFvSSCKBWIMfh01jJc1P8HsIFfpY6rfx3BE/xpkRnGjfmas+KX3ri+dmiYMlm6kvXvL38+thL7RipAL0y0ulvcYl+qu5q/CwooSB+nYg==',key_name='tempest-TestNetworkAdvancedServerOps-1347378077',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-9vk08kgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:02:47Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=a00396fd-1a78-4cad-9c38-7b0905ab5b9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1508194701", "vif_mac": "fa:16:3e:eb:70:ef"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.244 251996 DEBUG nova.network.os_vif_util [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converting VIF {"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1508194701", "vif_mac": "fa:16:3e:eb:70:ef"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.246 251996 DEBUG nova.network.os_vif_util [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.252 251996 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:02:58 compute-0 nova_compute[251992]:   <uuid>a00396fd-1a78-4cad-9c38-7b0905ab5b9f</uuid>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   <name>instance-000000b8</name>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1312966031</nova:name>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:02:57</nova:creationTime>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <nova:port uuid="7be900e8-79cd-473a-8f1d-df5029d9e773">
Dec 06 08:02:58 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <system>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <entry name="serial">a00396fd-1a78-4cad-9c38-7b0905ab5b9f</entry>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <entry name="uuid">a00396fd-1a78-4cad-9c38-7b0905ab5b9f</entry>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     </system>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   <os>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   </os>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   <features>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   </features>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk">
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       </source>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/a00396fd-1a78-4cad-9c38-7b0905ab5b9f_disk.config">
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       </source>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:02:58 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:eb:70:ef"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <target dev="tap7be900e8-79"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/a00396fd-1a78-4cad-9c38-7b0905ab5b9f/console.log" append="off"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <video>
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     </video>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:02:58 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:02:58 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:02:58 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:02:58 compute-0 nova_compute[251992]: </domain>
Dec 06 08:02:58 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.256 251996 DEBUG nova.virt.libvirt.vif [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:01:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1312966031',display_name='tempest-TestNetworkAdvancedServerOps-server-1312966031',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1312966031',id=184,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJW2ZsdJB9rXQtXWcXFfgPIxPNFvSSCKBWIMfh01jJc1P8HsIFfpY6rfx3BE/xpkRnGjfmas+KX3ri+dmiYMlm6kvXvL38+thL7RipAL0y0ulvcYl+qu5q/CwooSB+nYg==',key_name='tempest-TestNetworkAdvancedServerOps-1347378077',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-9vk08kgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:02:47Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=a00396fd-1a78-4cad-9c38-7b0905ab5b9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1508194701", "vif_mac": "fa:16:3e:eb:70:ef"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.257 251996 DEBUG nova.network.os_vif_util [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converting VIF {"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1508194701", "vif_mac": "fa:16:3e:eb:70:ef"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.258 251996 DEBUG nova.network.os_vif_util [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.259 251996 DEBUG os_vif [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.261 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.261 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.263 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.271 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.272 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7be900e8-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.273 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7be900e8-79, col_values=(('external_ids', {'iface-id': '7be900e8-79cd-473a-8f1d-df5029d9e773', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:70:ef', 'vm-uuid': 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.275 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 NetworkManager[48965]: <info>  [1765008178.2774] manager: (tap7be900e8-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.279 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.284 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.285 251996 INFO os_vif [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79')
Dec 06 08:02:58 compute-0 systemd[1]: Started libpod-conmon-3b757eb0587d782e9ca5c45ea5c99f41f008c29f978a711cb9992fc6249e62df.scope.
Dec 06 08:02:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/200200904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/16996870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:02:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:02:58 compute-0 podman[380240]: 2025-12-06 08:02:58.332909309 +0000 UTC m=+0.921932259 container init 3b757eb0587d782e9ca5c45ea5c99f41f008c29f978a711cb9992fc6249e62df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feynman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:02:58 compute-0 podman[380240]: 2025-12-06 08:02:58.344624965 +0000 UTC m=+0.933647885 container start 3b757eb0587d782e9ca5c45ea5c99f41f008c29f978a711cb9992fc6249e62df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feynman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:02:58 compute-0 podman[380240]: 2025-12-06 08:02:58.349208018 +0000 UTC m=+0.938230948 container attach 3b757eb0587d782e9ca5c45ea5c99f41f008c29f978a711cb9992fc6249e62df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 08:02:58 compute-0 peaceful_feynman[380310]: 167 167
Dec 06 08:02:58 compute-0 systemd[1]: libpod-3b757eb0587d782e9ca5c45ea5c99f41f008c29f978a711cb9992fc6249e62df.scope: Deactivated successfully.
Dec 06 08:02:58 compute-0 podman[380240]: 2025-12-06 08:02:58.350845113 +0000 UTC m=+0.939868053 container died 3b757eb0587d782e9ca5c45ea5c99f41f008c29f978a711cb9992fc6249e62df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:02:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c69ac4b3770ac15a3578292314269299c7fba86a898b101713cb80b8b5c49500-merged.mount: Deactivated successfully.
Dec 06 08:02:58 compute-0 podman[380240]: 2025-12-06 08:02:58.392285661 +0000 UTC m=+0.981308591 container remove 3b757eb0587d782e9ca5c45ea5c99f41f008c29f978a711cb9992fc6249e62df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feynman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.393 251996 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.393 251996 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.394 251996 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] No VIF found with MAC fa:16:3e:eb:70:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.394 251996 INFO nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Using config drive
Dec 06 08:02:58 compute-0 systemd[1]: libpod-conmon-3b757eb0587d782e9ca5c45ea5c99f41f008c29f978a711cb9992fc6249e62df.scope: Deactivated successfully.
Dec 06 08:02:58 compute-0 kernel: tap7be900e8-79: entered promiscuous mode
Dec 06 08:02:58 compute-0 NetworkManager[48965]: <info>  [1765008178.4997] manager: (tap7be900e8-79): new Tun device (/org/freedesktop/NetworkManager/Devices/320)
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.500 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 ovn_controller[147168]: 2025-12-06T08:02:58Z|00703|binding|INFO|Claiming lport 7be900e8-79cd-473a-8f1d-df5029d9e773 for this chassis.
Dec 06 08:02:58 compute-0 ovn_controller[147168]: 2025-12-06T08:02:58Z|00704|binding|INFO|7be900e8-79cd-473a-8f1d-df5029d9e773: Claiming fa:16:3e:eb:70:ef 10.100.0.4
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.513 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.517 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 NetworkManager[48965]: <info>  [1765008178.5198] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Dec 06 08:02:58 compute-0 NetworkManager[48965]: <info>  [1765008178.5203] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.528 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:70:ef 10.100.0.4'], port_security=['fa:16:3e:eb:70:ef 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '89003807-49b2-48b6-9510-52c4e2235abf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64cba72f-9df1-4bbf-91d5-2ef412c84dfa, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=7be900e8-79cd-473a-8f1d-df5029d9e773) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.530 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 7be900e8-79cd-473a-8f1d-df5029d9e773 in datapath 26d75c28-bf40-4c60-9e29-1a7b2fb696a0 bound to our chassis
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.531 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26d75c28-bf40-4c60-9e29-1a7b2fb696a0
Dec 06 08:02:58 compute-0 systemd-udevd[380373]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:02:58 compute-0 NetworkManager[48965]: <info>  [1765008178.5464] device (tap7be900e8-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:02:58 compute-0 NetworkManager[48965]: <info>  [1765008178.5471] device (tap7be900e8-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.546 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[11677adb-e104-4ae0-801f-8e219458f23f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.548 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26d75c28-b1 in ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:02:58 compute-0 systemd-machined[212986]: New machine qemu-86-instance-000000b8.
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.550 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26d75c28-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.551 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[26c37d0a-9d72-414d-9a56-dab7f0eb864f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.552 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a91c73c8-93a7-466c-a263-04aee20da4c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 podman[380360]: 2025-12-06 08:02:58.562114983 +0000 UTC m=+0.049519966 container create 2b7561b8c86aa928d7c53f9e4a1e4a7a8aeacfd17c6e79f53093946416daa790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chatelet, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.566 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[c3568003-d31b-4a76-8bfb-f4acbc510db8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-000000b8.
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.592 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[79480a65-49d4-4104-ab6c-0019ee042be2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 systemd[1]: Started libpod-conmon-2b7561b8c86aa928d7c53f9e4a1e4a7a8aeacfd17c6e79f53093946416daa790.scope.
Dec 06 08:02:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aeea54c889495b43074aa68469fb8b50dc7ef80b7bae381b805133b40d4cb15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aeea54c889495b43074aa68469fb8b50dc7ef80b7bae381b805133b40d4cb15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.635 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[00b51017-10a2-4022-8c22-8895e521a72b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aeea54c889495b43074aa68469fb8b50dc7ef80b7bae381b805133b40d4cb15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aeea54c889495b43074aa68469fb8b50dc7ef80b7bae381b805133b40d4cb15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:02:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aeea54c889495b43074aa68469fb8b50dc7ef80b7bae381b805133b40d4cb15/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:02:58 compute-0 podman[380360]: 2025-12-06 08:02:58.540206153 +0000 UTC m=+0.027611156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.639 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 podman[380360]: 2025-12-06 08:02:58.654845485 +0000 UTC m=+0.142250488 container init 2b7561b8c86aa928d7c53f9e4a1e4a7a8aeacfd17c6e79f53093946416daa790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 08:02:58 compute-0 NetworkManager[48965]: <info>  [1765008178.6556] manager: (tap26d75c28-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/323)
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.660 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.654 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5fe06d-3163-4a3b-8198-07f1e39aeca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 ovn_controller[147168]: 2025-12-06T08:02:58Z|00705|binding|INFO|Setting lport 7be900e8-79cd-473a-8f1d-df5029d9e773 ovn-installed in OVS
Dec 06 08:02:58 compute-0 ovn_controller[147168]: 2025-12-06T08:02:58Z|00706|binding|INFO|Setting lport 7be900e8-79cd-473a-8f1d-df5029d9e773 up in Southbound
Dec 06 08:02:58 compute-0 podman[380360]: 2025-12-06 08:02:58.677284901 +0000 UTC m=+0.164689874 container start 2b7561b8c86aa928d7c53f9e4a1e4a7a8aeacfd17c6e79f53093946416daa790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chatelet, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.677 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 podman[380360]: 2025-12-06 08:02:58.681411463 +0000 UTC m=+0.168816446 container attach 2b7561b8c86aa928d7c53f9e4a1e4a7a8aeacfd17c6e79f53093946416daa790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chatelet, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.698 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[b734cede-0b3b-455a-9c5b-de5913b3231b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.702 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[b9377387-dff9-44fc-a7ea-b2f17e2f56a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 NetworkManager[48965]: <info>  [1765008178.7306] device (tap26d75c28-b0): carrier: link connected
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.737 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcbeb54-c3a4-478f-a47f-4010b8cbbd01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.758 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8d753ff1-dd1f-45b4-a546-e0c1b309c8a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d75c28-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:c0:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845131, 'reachable_time': 23767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380418, 'error': None, 'target': 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.775 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[14ea1012-e0f8-4763-80a6-5fc1b0f4a251]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1e:c04e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845131, 'tstamp': 845131}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380419, 'error': None, 'target': 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.790 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[edac25ec-4504-473f-8578-e378869f3c03]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26d75c28-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1e:c0:4e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845131, 'reachable_time': 23767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380420, 'error': None, 'target': 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:02:58.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.816 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7bc95e-7787-4bda-a241-43b56d665fc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.866 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d192feeb-0c79-4e21-ade0-34fffbdd215e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.867 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d75c28-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.867 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.868 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26d75c28-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.869 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 NetworkManager[48965]: <info>  [1765008178.8704] manager: (tap26d75c28-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Dec 06 08:02:58 compute-0 kernel: tap26d75c28-b0: entered promiscuous mode
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.875 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26d75c28-b0, col_values=(('external_ids', {'iface-id': '3e000649-26bb-4d0b-8171-d46f3570081b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:02:58 compute-0 ovn_controller[147168]: 2025-12-06T08:02:58Z|00707|binding|INFO|Releasing lport 3e000649-26bb-4d0b-8171-d46f3570081b from this chassis (sb_readonly=0)
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.876 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.878 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.879 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c8654a60-f114-406d-a24c-6d9f81de52fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.880 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-26d75c28-bf40-4c60-9e29-1a7b2fb696a0
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.pid.haproxy
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 26d75c28-bf40-4c60-9e29-1a7b2fb696a0
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:02:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:02:58.881 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'env', 'PROCESS_TAG=haproxy-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26d75c28-bf40-4c60-9e29-1a7b2fb696a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:02:58 compute-0 nova_compute[251992]: 2025-12-06 08:02:58.888 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:02:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:02:59 compute-0 podman[380453]: 2025-12-06 08:02:59.232643757 +0000 UTC m=+0.051626804 container create 09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:02:59 compute-0 systemd[1]: Started libpod-conmon-09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1.scope.
Dec 06 08:02:59 compute-0 podman[380453]: 2025-12-06 08:02:59.204349434 +0000 UTC m=+0.023332491 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:02:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:02:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5e75348f107097bbbaf954780641a895f581e9f996b23ea260fd44b0d50392a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:02:59 compute-0 podman[380453]: 2025-12-06 08:02:59.334626569 +0000 UTC m=+0.153609606 container init 09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:02:59 compute-0 podman[380466]: 2025-12-06 08:02:59.334777503 +0000 UTC m=+0.064688287 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:02:59 compute-0 podman[380469]: 2025-12-06 08:02:59.339179792 +0000 UTC m=+0.066605228 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:02:59 compute-0 podman[380453]: 2025-12-06 08:02:59.341987698 +0000 UTC m=+0.160970725 container start 09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:02:59 compute-0 ceph-mon[74339]: pgmap v3305: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 204 B/s rd, 15 KiB/s wr, 0 op/s
Dec 06 08:02:59 compute-0 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[380489]: [NOTICE]   (380508) : New worker (380511) forked
Dec 06 08:02:59 compute-0 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[380489]: [NOTICE]   (380508) : Loading success.
Dec 06 08:02:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:02:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:02:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:02:59.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:02:59 compute-0 hungry_chatelet[380389]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:02:59 compute-0 hungry_chatelet[380389]: --> relative data size: 1.0
Dec 06 08:02:59 compute-0 hungry_chatelet[380389]: --> All data devices are unavailable
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.498 251996 DEBUG nova.network.neutron [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updated VIF entry in instance network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.499 251996 DEBUG nova.network.neutron [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:02:59 compute-0 systemd[1]: libpod-2b7561b8c86aa928d7c53f9e4a1e4a7a8aeacfd17c6e79f53093946416daa790.scope: Deactivated successfully.
Dec 06 08:02:59 compute-0 conmon[380389]: conmon 2b7561b8c86aa928d7c5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b7561b8c86aa928d7c53f9e4a1e4a7a8aeacfd17c6e79f53093946416daa790.scope/container/memory.events
Dec 06 08:02:59 compute-0 podman[380360]: 2025-12-06 08:02:59.505699565 +0000 UTC m=+0.993104568 container died 2b7561b8c86aa928d7c53f9e4a1e4a7a8aeacfd17c6e79f53093946416daa790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:02:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 204 B/s rd, 15 KiB/s wr, 0 op/s
Dec 06 08:02:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aeea54c889495b43074aa68469fb8b50dc7ef80b7bae381b805133b40d4cb15-merged.mount: Deactivated successfully.
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.543 251996 DEBUG oslo_concurrency.lockutils [req-2fca31e6-113e-4d8b-820f-fa20a57a1fd8 req-d95829ab-b244-457a-a2ca-4f216a46fea5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:02:59 compute-0 podman[380360]: 2025-12-06 08:02:59.559186058 +0000 UTC m=+1.046591041 container remove 2b7561b8c86aa928d7c53f9e4a1e4a7a8aeacfd17c6e79f53093946416daa790 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_chatelet, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:02:59 compute-0 systemd[1]: libpod-conmon-2b7561b8c86aa928d7c53f9e4a1e4a7a8aeacfd17c6e79f53093946416daa790.scope: Deactivated successfully.
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.581 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008179.5810363, a00396fd-1a78-4cad-9c38-7b0905ab5b9f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.582 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] VM Resumed (Lifecycle Event)
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.583 251996 DEBUG nova.compute.manager [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.586 251996 INFO nova.virt.libvirt.driver [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance running successfully.
Dec 06 08:02:59 compute-0 virtqemud[251613]: argument unsupported: QEMU guest agent is not configured
Dec 06 08:02:59 compute-0 sudo[380165]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.590 251996 DEBUG nova.virt.libvirt.guest [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.590 251996 DEBUG nova.virt.libvirt.driver [None req-6daf4208-fa7e-48a9-baaf-b5691e1e7b89 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.633 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.637 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:02:59 compute-0 sudo[380584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:59 compute-0 sudo[380584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.647 251996 DEBUG nova.compute.manager [req-1f78927a-34ea-4751-85d1-2ded3ae70331 req-301afaa5-a1fb-4ce1-8e7b-81dd3e11d736 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.648 251996 DEBUG oslo_concurrency.lockutils [req-1f78927a-34ea-4751-85d1-2ded3ae70331 req-301afaa5-a1fb-4ce1-8e7b-81dd3e11d736 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.648 251996 DEBUG oslo_concurrency.lockutils [req-1f78927a-34ea-4751-85d1-2ded3ae70331 req-301afaa5-a1fb-4ce1-8e7b-81dd3e11d736 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.648 251996 DEBUG oslo_concurrency.lockutils [req-1f78927a-34ea-4751-85d1-2ded3ae70331 req-301afaa5-a1fb-4ce1-8e7b-81dd3e11d736 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.648 251996 DEBUG nova.compute.manager [req-1f78927a-34ea-4751-85d1-2ded3ae70331 req-301afaa5-a1fb-4ce1-8e7b-81dd3e11d736 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.648 251996 WARNING nova.compute.manager [req-1f78927a-34ea-4751-85d1-2ded3ae70331 req-301afaa5-a1fb-4ce1-8e7b-81dd3e11d736 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state active and task_state resize_finish.
Dec 06 08:02:59 compute-0 sudo[380584]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.713 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.713 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008179.581951, a00396fd-1a78-4cad-9c38-7b0905ab5b9f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.713 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] VM Started (Lifecycle Event)
Dec 06 08:02:59 compute-0 sudo[380609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:02:59 compute-0 sudo[380609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:59 compute-0 sudo[380609]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.781 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.784 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:02:59 compute-0 sudo[380634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:02:59 compute-0 sudo[380634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:02:59 compute-0 sudo[380634]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:59 compute-0 nova_compute[251992]: 2025-12-06 08:02:59.813 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] During sync_power_state the instance has a pending task (resize_finish). Skip.
Dec 06 08:02:59 compute-0 sudo[380659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:02:59 compute-0 sudo[380659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:00 compute-0 podman[380724]: 2025-12-06 08:03:00.155589452 +0000 UTC m=+0.041435020 container create 87b4b2ab6a6cacd60c01ad4a1b67831752d226d5d864cff71efd9f04a664935e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 08:03:00 compute-0 systemd[1]: Started libpod-conmon-87b4b2ab6a6cacd60c01ad4a1b67831752d226d5d864cff71efd9f04a664935e.scope.
Dec 06 08:03:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:03:00 compute-0 podman[380724]: 2025-12-06 08:03:00.138917062 +0000 UTC m=+0.024762650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:03:00 compute-0 podman[380724]: 2025-12-06 08:03:00.238482259 +0000 UTC m=+0.124327837 container init 87b4b2ab6a6cacd60c01ad4a1b67831752d226d5d864cff71efd9f04a664935e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Dec 06 08:03:00 compute-0 podman[380724]: 2025-12-06 08:03:00.245717863 +0000 UTC m=+0.131563431 container start 87b4b2ab6a6cacd60c01ad4a1b67831752d226d5d864cff71efd9f04a664935e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:03:00 compute-0 busy_bartik[380740]: 167 167
Dec 06 08:03:00 compute-0 systemd[1]: libpod-87b4b2ab6a6cacd60c01ad4a1b67831752d226d5d864cff71efd9f04a664935e.scope: Deactivated successfully.
Dec 06 08:03:00 compute-0 conmon[380740]: conmon 87b4b2ab6a6cacd60c01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-87b4b2ab6a6cacd60c01ad4a1b67831752d226d5d864cff71efd9f04a664935e.scope/container/memory.events
Dec 06 08:03:00 compute-0 podman[380724]: 2025-12-06 08:03:00.257188883 +0000 UTC m=+0.143034471 container attach 87b4b2ab6a6cacd60c01ad4a1b67831752d226d5d864cff71efd9f04a664935e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:03:00 compute-0 podman[380724]: 2025-12-06 08:03:00.257604925 +0000 UTC m=+0.143450483 container died 87b4b2ab6a6cacd60c01ad4a1b67831752d226d5d864cff71efd9f04a664935e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 08:03:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a21adf68dd0ff79c8f24ad55febe7b972482a849123e8f6c8c94539b760e5e3-merged.mount: Deactivated successfully.
Dec 06 08:03:00 compute-0 podman[380724]: 2025-12-06 08:03:00.317014858 +0000 UTC m=+0.202860426 container remove 87b4b2ab6a6cacd60c01ad4a1b67831752d226d5d864cff71efd9f04a664935e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:03:00 compute-0 systemd[1]: libpod-conmon-87b4b2ab6a6cacd60c01ad4a1b67831752d226d5d864cff71efd9f04a664935e.scope: Deactivated successfully.
Dec 06 08:03:00 compute-0 ceph-mon[74339]: pgmap v3306: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 204 B/s rd, 15 KiB/s wr, 0 op/s
Dec 06 08:03:00 compute-0 podman[380764]: 2025-12-06 08:03:00.482952315 +0000 UTC m=+0.043218877 container create 84f668eb4d0963d17288edd438423ee455fdbdd30408ad3f1d143db3530392ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_euclid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 08:03:00 compute-0 systemd[1]: Started libpod-conmon-84f668eb4d0963d17288edd438423ee455fdbdd30408ad3f1d143db3530392ce.scope.
Dec 06 08:03:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:03:00 compute-0 podman[380764]: 2025-12-06 08:03:00.462489273 +0000 UTC m=+0.022755875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f946b1e36f0e0a8ad75aec079381e6b7b9691309e2e00995360fd8a222254f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f946b1e36f0e0a8ad75aec079381e6b7b9691309e2e00995360fd8a222254f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f946b1e36f0e0a8ad75aec079381e6b7b9691309e2e00995360fd8a222254f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f946b1e36f0e0a8ad75aec079381e6b7b9691309e2e00995360fd8a222254f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:03:00 compute-0 podman[380764]: 2025-12-06 08:03:00.583448807 +0000 UTC m=+0.143715379 container init 84f668eb4d0963d17288edd438423ee455fdbdd30408ad3f1d143db3530392ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:03:00 compute-0 podman[380764]: 2025-12-06 08:03:00.591768872 +0000 UTC m=+0.152035444 container start 84f668eb4d0963d17288edd438423ee455fdbdd30408ad3f1d143db3530392ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_euclid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 08:03:00 compute-0 podman[380764]: 2025-12-06 08:03:00.596372576 +0000 UTC m=+0.156639148 container attach 84f668eb4d0963d17288edd438423ee455fdbdd30408ad3f1d143db3530392ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 08:03:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:00.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]: {
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:     "0": [
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:         {
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             "devices": [
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "/dev/loop3"
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             ],
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             "lv_name": "ceph_lv0",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             "lv_size": "7511998464",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             "name": "ceph_lv0",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             "tags": {
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "ceph.cluster_name": "ceph",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "ceph.crush_device_class": "",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "ceph.encrypted": "0",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "ceph.osd_id": "0",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "ceph.type": "block",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:                 "ceph.vdo": "0"
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             },
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             "type": "block",
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:             "vg_name": "ceph_vg0"
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:         }
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]:     ]
Dec 06 08:03:01 compute-0 wonderful_euclid[380781]: }
Dec 06 08:03:01 compute-0 systemd[1]: libpod-84f668eb4d0963d17288edd438423ee455fdbdd30408ad3f1d143db3530392ce.scope: Deactivated successfully.
Dec 06 08:03:01 compute-0 podman[380764]: 2025-12-06 08:03:01.365478259 +0000 UTC m=+0.925744831 container died 84f668eb4d0963d17288edd438423ee455fdbdd30408ad3f1d143db3530392ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:03:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:01.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f946b1e36f0e0a8ad75aec079381e6b7b9691309e2e00995360fd8a222254f1-merged.mount: Deactivated successfully.
Dec 06 08:03:01 compute-0 podman[380764]: 2025-12-06 08:03:01.421608984 +0000 UTC m=+0.981875566 container remove 84f668eb4d0963d17288edd438423ee455fdbdd30408ad3f1d143db3530392ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_euclid, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:03:01 compute-0 systemd[1]: libpod-conmon-84f668eb4d0963d17288edd438423ee455fdbdd30408ad3f1d143db3530392ce.scope: Deactivated successfully.
Dec 06 08:03:01 compute-0 sudo[380659]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:01 compute-0 sudo[380802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:03:01 compute-0 sudo[380802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:01 compute-0 sudo[380802]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 689 KiB/s rd, 1.7 KiB/s wr, 49 op/s
Dec 06 08:03:01 compute-0 sudo[380827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:03:01 compute-0 sudo[380827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:01 compute-0 sudo[380827]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:01 compute-0 sudo[380852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:03:01 compute-0 sudo[380852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:01 compute-0 sudo[380852]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:01 compute-0 sudo[380877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:03:01 compute-0 sudo[380877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:02 compute-0 podman[380940]: 2025-12-06 08:03:02.004996435 +0000 UTC m=+0.038018326 container create 1f4531506a30b74c640b11c3a83c633f5c048dfbcd57ff96a3bd994f6baaac93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_satoshi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 08:03:02 compute-0 systemd[1]: Started libpod-conmon-1f4531506a30b74c640b11c3a83c633f5c048dfbcd57ff96a3bd994f6baaac93.scope.
Dec 06 08:03:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:03:02 compute-0 podman[380940]: 2025-12-06 08:03:02.080268397 +0000 UTC m=+0.113290308 container init 1f4531506a30b74c640b11c3a83c633f5c048dfbcd57ff96a3bd994f6baaac93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_satoshi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 08:03:02 compute-0 podman[380940]: 2025-12-06 08:03:01.988688076 +0000 UTC m=+0.021709987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:03:02 compute-0 podman[380940]: 2025-12-06 08:03:02.086511875 +0000 UTC m=+0.119533756 container start 1f4531506a30b74c640b11c3a83c633f5c048dfbcd57ff96a3bd994f6baaac93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_satoshi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:03:02 compute-0 podman[380940]: 2025-12-06 08:03:02.089123926 +0000 UTC m=+0.122145837 container attach 1f4531506a30b74c640b11c3a83c633f5c048dfbcd57ff96a3bd994f6baaac93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:03:02 compute-0 magical_satoshi[380956]: 167 167
Dec 06 08:03:02 compute-0 systemd[1]: libpod-1f4531506a30b74c640b11c3a83c633f5c048dfbcd57ff96a3bd994f6baaac93.scope: Deactivated successfully.
Dec 06 08:03:02 compute-0 podman[380940]: 2025-12-06 08:03:02.093026181 +0000 UTC m=+0.126048062 container died 1f4531506a30b74c640b11c3a83c633f5c048dfbcd57ff96a3bd994f6baaac93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 08:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-19204d20575876a03eeca2d40e87579f22a8e1ddd9e995ce7909d38124f6e899-merged.mount: Deactivated successfully.
Dec 06 08:03:02 compute-0 podman[380940]: 2025-12-06 08:03:02.136511814 +0000 UTC m=+0.169533705 container remove 1f4531506a30b74c640b11c3a83c633f5c048dfbcd57ff96a3bd994f6baaac93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:03:02 compute-0 systemd[1]: libpod-conmon-1f4531506a30b74c640b11c3a83c633f5c048dfbcd57ff96a3bd994f6baaac93.scope: Deactivated successfully.
Dec 06 08:03:02 compute-0 podman[380979]: 2025-12-06 08:03:02.3594507 +0000 UTC m=+0.092746053 container create 7c95b47b88a2a72b039eecdaccd83753d1399df9630b92c6a2255bb4d458769c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 08:03:02 compute-0 systemd[1]: Started libpod-conmon-7c95b47b88a2a72b039eecdaccd83753d1399df9630b92c6a2255bb4d458769c.scope.
Dec 06 08:03:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:03:02 compute-0 podman[380979]: 2025-12-06 08:03:02.324853327 +0000 UTC m=+0.058148700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d322512a8595920e62642549f1a1b033aaddbac364e5a0f5f9a96b39cccfe9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d322512a8595920e62642549f1a1b033aaddbac364e5a0f5f9a96b39cccfe9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d322512a8595920e62642549f1a1b033aaddbac364e5a0f5f9a96b39cccfe9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d322512a8595920e62642549f1a1b033aaddbac364e5a0f5f9a96b39cccfe9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:03:02 compute-0 podman[380979]: 2025-12-06 08:03:02.447503556 +0000 UTC m=+0.180798919 container init 7c95b47b88a2a72b039eecdaccd83753d1399df9630b92c6a2255bb4d458769c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hellman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Dec 06 08:03:02 compute-0 podman[380979]: 2025-12-06 08:03:02.454871845 +0000 UTC m=+0.188167168 container start 7c95b47b88a2a72b039eecdaccd83753d1399df9630b92c6a2255bb4d458769c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hellman, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 08:03:02 compute-0 podman[380979]: 2025-12-06 08:03:02.458352649 +0000 UTC m=+0.191648082 container attach 7c95b47b88a2a72b039eecdaccd83753d1399df9630b92c6a2255bb4d458769c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 08:03:02 compute-0 ceph-mon[74339]: pgmap v3307: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 689 KiB/s rd, 1.7 KiB/s wr, 49 op/s
Dec 06 08:03:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:02.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:02 compute-0 nova_compute[251992]: 2025-12-06 08:03:02.824 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:03 compute-0 nova_compute[251992]: 2025-12-06 08:03:03.275 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:03 compute-0 wizardly_hellman[380995]: {
Dec 06 08:03:03 compute-0 wizardly_hellman[380995]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:03:03 compute-0 wizardly_hellman[380995]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:03:03 compute-0 wizardly_hellman[380995]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:03:03 compute-0 wizardly_hellman[380995]:         "osd_id": 0,
Dec 06 08:03:03 compute-0 wizardly_hellman[380995]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:03:03 compute-0 wizardly_hellman[380995]:         "type": "bluestore"
Dec 06 08:03:03 compute-0 wizardly_hellman[380995]:     }
Dec 06 08:03:03 compute-0 wizardly_hellman[380995]: }
Dec 06 08:03:03 compute-0 systemd[1]: libpod-7c95b47b88a2a72b039eecdaccd83753d1399df9630b92c6a2255bb4d458769c.scope: Deactivated successfully.
Dec 06 08:03:03 compute-0 podman[380979]: 2025-12-06 08:03:03.331188972 +0000 UTC m=+1.064484315 container died 7c95b47b88a2a72b039eecdaccd83753d1399df9630b92c6a2255bb4d458769c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 08:03:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d322512a8595920e62642549f1a1b033aaddbac364e5a0f5f9a96b39cccfe9b-merged.mount: Deactivated successfully.
Dec 06 08:03:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:03.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:03 compute-0 podman[380979]: 2025-12-06 08:03:03.382998809 +0000 UTC m=+1.116294132 container remove 7c95b47b88a2a72b039eecdaccd83753d1399df9630b92c6a2255bb4d458769c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hellman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 08:03:03 compute-0 systemd[1]: libpod-conmon-7c95b47b88a2a72b039eecdaccd83753d1399df9630b92c6a2255bb4d458769c.scope: Deactivated successfully.
Dec 06 08:03:03 compute-0 sudo[380877]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:03:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:03:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:03:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:03:03 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ed8d7b18-d89f-4aa6-b637-6c96a2b33615 does not exist
Dec 06 08:03:03 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e57f9e2f-c290-4b75-9058-ca87421961f0 does not exist
Dec 06 08:03:03 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev edeae24c-c233-480c-8db0-e34a5a4f28a3 does not exist
Dec 06 08:03:03 compute-0 sudo[381029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:03:03 compute-0 sudo[381029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:03 compute-0 sudo[381029]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 689 KiB/s rd, 1.7 KiB/s wr, 49 op/s
Dec 06 08:03:03 compute-0 sudo[381054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:03:03 compute-0 sudo[381054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:03 compute-0 sudo[381054]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:03 compute-0 nova_compute[251992]: 2025-12-06 08:03:03.575 251996 DEBUG nova.compute.manager [req-c8624d72-0856-4e5d-9f15-d5b60a4a11af req-537ca85b-778d-4f34-af73-89c40fe7cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:03 compute-0 nova_compute[251992]: 2025-12-06 08:03:03.577 251996 DEBUG oslo_concurrency.lockutils [req-c8624d72-0856-4e5d-9f15-d5b60a4a11af req-537ca85b-778d-4f34-af73-89c40fe7cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:03 compute-0 nova_compute[251992]: 2025-12-06 08:03:03.577 251996 DEBUG oslo_concurrency.lockutils [req-c8624d72-0856-4e5d-9f15-d5b60a4a11af req-537ca85b-778d-4f34-af73-89c40fe7cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:03 compute-0 nova_compute[251992]: 2025-12-06 08:03:03.577 251996 DEBUG oslo_concurrency.lockutils [req-c8624d72-0856-4e5d-9f15-d5b60a4a11af req-537ca85b-778d-4f34-af73-89c40fe7cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:03 compute-0 nova_compute[251992]: 2025-12-06 08:03:03.577 251996 DEBUG nova.compute.manager [req-c8624d72-0856-4e5d-9f15-d5b60a4a11af req-537ca85b-778d-4f34-af73-89c40fe7cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:03:03 compute-0 nova_compute[251992]: 2025-12-06 08:03:03.577 251996 WARNING nova.compute.manager [req-c8624d72-0856-4e5d-9f15-d5b60a4a11af req-537ca85b-778d-4f34-af73-89c40fe7cac0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state resized and task_state None.
Dec 06 08:03:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:03.876 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:03.877 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:03.878 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:03:04 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:03:04 compute-0 ceph-mon[74339]: pgmap v3308: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 689 KiB/s rd, 1.7 KiB/s wr, 49 op/s
Dec 06 08:03:04 compute-0 nova_compute[251992]: 2025-12-06 08:03:04.683 251996 DEBUG nova.network.neutron [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Port 7be900e8-79cd-473a-8f1d-df5029d9e773 binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Dec 06 08:03:04 compute-0 nova_compute[251992]: 2025-12-06 08:03:04.684 251996 DEBUG oslo_concurrency.lockutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:03:04 compute-0 nova_compute[251992]: 2025-12-06 08:03:04.684 251996 DEBUG oslo_concurrency.lockutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:03:04 compute-0 nova_compute[251992]: 2025-12-06 08:03:04.684 251996 DEBUG nova.network.neutron [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:03:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:04.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 KiB/s wr, 104 op/s
Dec 06 08:03:06 compute-0 ceph-mon[74339]: pgmap v3309: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 KiB/s wr, 104 op/s
Dec 06 08:03:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:06.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:07.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 KiB/s wr, 93 op/s
Dec 06 08:03:07 compute-0 nova_compute[251992]: 2025-12-06 08:03:07.727 251996 DEBUG nova.network.neutron [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:03:07 compute-0 nova_compute[251992]: 2025-12-06 08:03:07.826 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:07 compute-0 nova_compute[251992]: 2025-12-06 08:03:07.959 251996 DEBUG oslo_concurrency.lockutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:03:08 compute-0 kernel: tap7be900e8-79 (unregistering): left promiscuous mode
Dec 06 08:03:08 compute-0 NetworkManager[48965]: <info>  [1765008188.1554] device (tap7be900e8-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.161 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:08 compute-0 ovn_controller[147168]: 2025-12-06T08:03:08Z|00708|binding|INFO|Releasing lport 7be900e8-79cd-473a-8f1d-df5029d9e773 from this chassis (sb_readonly=0)
Dec 06 08:03:08 compute-0 ovn_controller[147168]: 2025-12-06T08:03:08Z|00709|binding|INFO|Setting lport 7be900e8-79cd-473a-8f1d-df5029d9e773 down in Southbound
Dec 06 08:03:08 compute-0 ovn_controller[147168]: 2025-12-06T08:03:08Z|00710|binding|INFO|Removing iface tap7be900e8-79 ovn-installed in OVS
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.164 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.182 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.184 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:70:ef 10.100.0.4'], port_security=['fa:16:3e:eb:70:ef 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a00396fd-1a78-4cad-9c38-7b0905ab5b9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '89003807-49b2-48b6-9510-52c4e2235abf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.245', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64cba72f-9df1-4bbf-91d5-2ef412c84dfa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=7be900e8-79cd-473a-8f1d-df5029d9e773) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.185 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 7be900e8-79cd-473a-8f1d-df5029d9e773 in datapath 26d75c28-bf40-4c60-9e29-1a7b2fb696a0 unbound from our chassis
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.186 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26d75c28-bf40-4c60-9e29-1a7b2fb696a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.188 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a7ec76-d394-476b-907f-2456dbb3549a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.189 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 namespace which is not needed anymore
Dec 06 08:03:08 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b8.scope: Deactivated successfully.
Dec 06 08:03:08 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b8.scope: Consumed 9.724s CPU time.
Dec 06 08:03:08 compute-0 systemd-machined[212986]: Machine qemu-86-instance-000000b8 terminated.
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.276 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:08 compute-0 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[380489]: [NOTICE]   (380508) : haproxy version is 2.8.14-c23fe91
Dec 06 08:03:08 compute-0 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[380489]: [NOTICE]   (380508) : path to executable is /usr/sbin/haproxy
Dec 06 08:03:08 compute-0 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[380489]: [WARNING]  (380508) : Exiting Master process...
Dec 06 08:03:08 compute-0 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[380489]: [ALERT]    (380508) : Current worker (380511) exited with code 143 (Terminated)
Dec 06 08:03:08 compute-0 neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0[380489]: [WARNING]  (380508) : All workers exited. Exiting... (0)
Dec 06 08:03:08 compute-0 systemd[1]: libpod-09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1.scope: Deactivated successfully.
Dec 06 08:03:08 compute-0 podman[381105]: 2025-12-06 08:03:08.315523248 +0000 UTC m=+0.041526322 container died 09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.334 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.338 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.349 251996 INFO nova.virt.libvirt.driver [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Instance destroyed successfully.
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.349 251996 DEBUG nova.objects.instance [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'resources' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:03:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1-userdata-shm.mount: Deactivated successfully.
Dec 06 08:03:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5e75348f107097bbbaf954780641a895f581e9f996b23ea260fd44b0d50392a-merged.mount: Deactivated successfully.
Dec 06 08:03:08 compute-0 podman[381105]: 2025-12-06 08:03:08.37452145 +0000 UTC m=+0.100524544 container cleanup 09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 08:03:08 compute-0 systemd[1]: libpod-conmon-09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1.scope: Deactivated successfully.
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.397 251996 DEBUG nova.virt.libvirt.vif [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:01:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1312966031',display_name='tempest-TestNetworkAdvancedServerOps-server-1312966031',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1312966031',id=184,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJW2ZsdJB9rXQtXWcXFfgPIxPNFvSSCKBWIMfh01jJc1P8HsIFfpY6rfx3BE/xpkRnGjfmas+KX3ri+dmiYMlm6kvXvL38+thL7RipAL0y0ulvcYl+qu5q/CwooSB+nYg==',key_name='tempest-TestNetworkAdvancedServerOps-1347378077',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:02:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-9vk08kgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:02:59Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=a00396fd-1a78-4cad-9c38-7b0905ab5b9f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.398 251996 DEBUG nova.network.os_vif_util [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.399 251996 DEBUG nova.network.os_vif_util [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.399 251996 DEBUG os_vif [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.401 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.401 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7be900e8-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.403 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.405 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.408 251996 INFO os_vif [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:70:ef,bridge_name='br-int',has_traffic_filtering=True,id=7be900e8-79cd-473a-8f1d-df5029d9e773,network=Network(26d75c28-bf40-4c60-9e29-1a7b2fb696a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7be900e8-79')
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.411 251996 DEBUG oslo_concurrency.lockutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.412 251996 DEBUG oslo_concurrency.lockutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.438 251996 DEBUG nova.objects.instance [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid a00396fd-1a78-4cad-9c38-7b0905ab5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:03:08 compute-0 podman[381147]: 2025-12-06 08:03:08.443775329 +0000 UTC m=+0.049165098 container remove 09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.449 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c8fb2be7-e42c-4f51-90d2-7ad4efa096d2]: (4, ('Sat Dec  6 08:03:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 (09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1)\n09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1\nSat Dec  6 08:03:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 (09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1)\n09c7910853eb806ee375bac8d6574bb74257f3f7da6d4c37be6cfc39000382d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.451 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[788ee07a-6f5b-41f2-b2a5-468c399db1e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.452 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d75c28-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.453 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:08 compute-0 kernel: tap26d75c28-b0: left promiscuous mode
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.468 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.470 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cffa08f1-e90a-40f9-b81b-c16daac521e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.483 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6dd8fb-4f7b-4c69-9836-efdc87cf99ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.484 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9766841e-0ce5-478a-84c2-9d25e6708c83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.499 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b3aeadfd-fa3b-4b45-b064-ecac4a98a7b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845121, 'reachable_time': 27569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381162, 'error': None, 'target': 'ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d26d75c28\x2dbf40\x2d4c60\x2d9e29\x2d1a7b2fb696a0.mount: Deactivated successfully.
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.502 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26d75c28-bf40-4c60-9e29-1a7b2fb696a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:03:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:08.503 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3745fd-b1cc-4c54-88be-7ce5f6c2d2c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.530 251996 DEBUG oslo_concurrency.processutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.695 251996 DEBUG nova.compute.manager [req-05275dd8-c06f-4242-bb89-d05dfae0d49e req-e06e161e-05d1-4db5-8ddc-b6f06cf8c62d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-unplugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.696 251996 DEBUG oslo_concurrency.lockutils [req-05275dd8-c06f-4242-bb89-d05dfae0d49e req-e06e161e-05d1-4db5-8ddc-b6f06cf8c62d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.696 251996 DEBUG oslo_concurrency.lockutils [req-05275dd8-c06f-4242-bb89-d05dfae0d49e req-e06e161e-05d1-4db5-8ddc-b6f06cf8c62d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.697 251996 DEBUG oslo_concurrency.lockutils [req-05275dd8-c06f-4242-bb89-d05dfae0d49e req-e06e161e-05d1-4db5-8ddc-b6f06cf8c62d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.697 251996 DEBUG nova.compute.manager [req-05275dd8-c06f-4242-bb89-d05dfae0d49e req-e06e161e-05d1-4db5-8ddc-b6f06cf8c62d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-unplugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.697 251996 WARNING nova.compute.manager [req-05275dd8-c06f-4242-bb89-d05dfae0d49e req-e06e161e-05d1-4db5-8ddc-b6f06cf8c62d 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-unplugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state resized and task_state resize_reverting.
Dec 06 08:03:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:08.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:08 compute-0 ceph-mon[74339]: pgmap v3310: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 KiB/s wr, 93 op/s
Dec 06 08:03:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:03:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1554085838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.986 251996 DEBUG oslo_concurrency.processutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:03:08 compute-0 nova_compute[251992]: 2025-12-06 08:03:08.994 251996 DEBUG nova.compute.provider_tree [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:03:09 compute-0 sudo[381183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:03:09 compute-0 sudo[381183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:09 compute-0 sudo[381183]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:09 compute-0 nova_compute[251992]: 2025-12-06 08:03:09.057 251996 DEBUG nova.scheduler.client.report [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:03:09 compute-0 sudo[381210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:03:09 compute-0 sudo[381210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:09 compute-0 sudo[381210]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:09 compute-0 nova_compute[251992]: 2025-12-06 08:03:09.136 251996 DEBUG oslo_concurrency.lockutils [None req-1f64614c-540f-4e22-8455-4d6c7bcdab73 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:03:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:03:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 86 op/s
Dec 06 08:03:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1554085838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3763747548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:03:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3763747548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:03:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:10.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:10 compute-0 nova_compute[251992]: 2025-12-06 08:03:10.963 251996 DEBUG nova.compute.manager [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:10 compute-0 nova_compute[251992]: 2025-12-06 08:03:10.964 251996 DEBUG oslo_concurrency.lockutils [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:10 compute-0 nova_compute[251992]: 2025-12-06 08:03:10.964 251996 DEBUG oslo_concurrency.lockutils [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:10 compute-0 nova_compute[251992]: 2025-12-06 08:03:10.964 251996 DEBUG oslo_concurrency.lockutils [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:10 compute-0 nova_compute[251992]: 2025-12-06 08:03:10.965 251996 DEBUG nova.compute.manager [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:03:10 compute-0 nova_compute[251992]: 2025-12-06 08:03:10.965 251996 WARNING nova.compute.manager [req-482411ff-1634-409a-8fb8-2500fc3c1671 req-37f0df25-fffc-4d8a-898b-a6f25f6002f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state resized and task_state resize_reverting.
Dec 06 08:03:11 compute-0 ceph-mon[74339]: pgmap v3311: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 86 op/s
Dec 06 08:03:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:11.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 86 op/s
Dec 06 08:03:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:03:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:12.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:03:12 compute-0 nova_compute[251992]: 2025-12-06 08:03:12.828 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:03:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:03:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:03:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:03:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:03:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:03:13 compute-0 ceph-mon[74339]: pgmap v3312: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 86 op/s
Dec 06 08:03:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:13.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:13 compute-0 nova_compute[251992]: 2025-12-06 08:03:13.403 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 45 op/s
Dec 06 08:03:13 compute-0 nova_compute[251992]: 2025-12-06 08:03:13.688 251996 DEBUG nova.compute.manager [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:13 compute-0 nova_compute[251992]: 2025-12-06 08:03:13.689 251996 DEBUG nova.compute.manager [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing instance network info cache due to event network-changed-7be900e8-79cd-473a-8f1d-df5029d9e773. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:03:13 compute-0 nova_compute[251992]: 2025-12-06 08:03:13.689 251996 DEBUG oslo_concurrency.lockutils [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:03:13 compute-0 nova_compute[251992]: 2025-12-06 08:03:13.690 251996 DEBUG oslo_concurrency.lockutils [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:03:13 compute-0 nova_compute[251992]: 2025-12-06 08:03:13.690 251996 DEBUG nova.network.neutron [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Refreshing network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:03:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:14.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:15 compute-0 ceph-mon[74339]: pgmap v3313: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 45 op/s
Dec 06 08:03:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:15.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 7.3 KiB/s wr, 48 op/s
Dec 06 08:03:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:16.182 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:03:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:16.183 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:03:16 compute-0 nova_compute[251992]: 2025-12-06 08:03:16.183 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Dec 06 08:03:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Dec 06 08:03:16 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Dec 06 08:03:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:16.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:17 compute-0 ceph-mon[74339]: pgmap v3314: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 7.3 KiB/s wr, 48 op/s
Dec 06 08:03:17 compute-0 ceph-mon[74339]: osdmap e412: 3 total, 3 up, 3 in
Dec 06 08:03:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1183410053' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:03:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:03:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:17.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:03:17 compute-0 nova_compute[251992]: 2025-12-06 08:03:17.490 251996 DEBUG nova.network.neutron [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updated VIF entry in instance network info cache for port 7be900e8-79cd-473a-8f1d-df5029d9e773. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:03:17 compute-0 nova_compute[251992]: 2025-12-06 08:03:17.491 251996 DEBUG nova.network.neutron [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Updating instance_info_cache with network_info: [{"id": "7be900e8-79cd-473a-8f1d-df5029d9e773", "address": "fa:16:3e:eb:70:ef", "network": {"id": "26d75c28-bf40-4c60-9e29-1a7b2fb696a0", "bridge": "br-int", "label": "tempest-network-smoke--1508194701", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7be900e8-79", "ovs_interfaceid": "7be900e8-79cd-473a-8f1d-df5029d9e773", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:03:17 compute-0 nova_compute[251992]: 2025-12-06 08:03:17.508 251996 DEBUG oslo_concurrency.lockutils [req-74c79a2c-c789-4548-a459-676a36e50df2 req-4798739c-064a-4eda-addf-0e06b907c625 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-a00396fd-1a78-4cad-9c38-7b0905ab5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:03:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 8.8 KiB/s wr, 2 op/s
Dec 06 08:03:17 compute-0 nova_compute[251992]: 2025-12-06 08:03:17.830 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:18 compute-0 nova_compute[251992]: 2025-12-06 08:03:18.034 251996 DEBUG nova.compute.manager [req-e8c12e66-397b-4b3b-ac89-c5caef36375d req-9e87bd3b-d84b-4c52-a73d-62140a6a65e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:03:18 compute-0 nova_compute[251992]: 2025-12-06 08:03:18.034 251996 DEBUG oslo_concurrency.lockutils [req-e8c12e66-397b-4b3b-ac89-c5caef36375d req-9e87bd3b-d84b-4c52-a73d-62140a6a65e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:18 compute-0 nova_compute[251992]: 2025-12-06 08:03:18.035 251996 DEBUG oslo_concurrency.lockutils [req-e8c12e66-397b-4b3b-ac89-c5caef36375d req-9e87bd3b-d84b-4c52-a73d-62140a6a65e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:18 compute-0 nova_compute[251992]: 2025-12-06 08:03:18.035 251996 DEBUG oslo_concurrency.lockutils [req-e8c12e66-397b-4b3b-ac89-c5caef36375d req-9e87bd3b-d84b-4c52-a73d-62140a6a65e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "a00396fd-1a78-4cad-9c38-7b0905ab5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:18 compute-0 nova_compute[251992]: 2025-12-06 08:03:18.035 251996 DEBUG nova.compute.manager [req-e8c12e66-397b-4b3b-ac89-c5caef36375d req-9e87bd3b-d84b-4c52-a73d-62140a6a65e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] No waiting events found dispatching network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:03:18 compute-0 nova_compute[251992]: 2025-12-06 08:03:18.035 251996 WARNING nova.compute.manager [req-e8c12e66-397b-4b3b-ac89-c5caef36375d req-9e87bd3b-d84b-4c52-a73d-62140a6a65e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Received unexpected event network-vif-plugged-7be900e8-79cd-473a-8f1d-df5029d9e773 for instance with vm_state resized and task_state resize_reverting.
Dec 06 08:03:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3691660085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:03:18 compute-0 nova_compute[251992]: 2025-12-06 08:03:18.405 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:03:18
Dec 06 08:03:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:03:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:03:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'backups', 'volumes', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root']
Dec 06 08:03:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:03:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:03:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:18.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:03:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:19 compute-0 ceph-mon[74339]: pgmap v3316: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 8.8 KiB/s wr, 2 op/s
Dec 06 08:03:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:19.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 8.8 KiB/s wr, 2 op/s
Dec 06 08:03:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:20.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:21 compute-0 ceph-mon[74339]: pgmap v3317: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 8.8 KiB/s wr, 2 op/s
Dec 06 08:03:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:21.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 11 KiB/s wr, 135 op/s
Dec 06 08:03:22 compute-0 nova_compute[251992]: 2025-12-06 08:03:22.832 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:22.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:23 compute-0 nova_compute[251992]: 2025-12-06 08:03:23.346 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008188.3457785, a00396fd-1a78-4cad-9c38-7b0905ab5b9f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:03:23 compute-0 nova_compute[251992]: 2025-12-06 08:03:23.347 251996 INFO nova.compute.manager [-] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] VM Stopped (Lifecycle Event)
Dec 06 08:03:23 compute-0 nova_compute[251992]: 2025-12-06 08:03:23.382 251996 DEBUG nova.compute.manager [None req-bf3a3cdf-fa81-4cb6-b04e-007044138fb8 - - - - - -] [instance: a00396fd-1a78-4cad-9c38-7b0905ab5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:03:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:23.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:23 compute-0 nova_compute[251992]: 2025-12-06 08:03:23.407 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 11 KiB/s wr, 135 op/s
Dec 06 08:03:23 compute-0 ceph-mon[74339]: pgmap v3318: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 11 KiB/s wr, 135 op/s
Dec 06 08:03:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3120929876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:03:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:03:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Dec 06 08:03:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Dec 06 08:03:23 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Dec 06 08:03:24 compute-0 podman[381244]: 2025-12-06 08:03:24.422130099 +0000 UTC m=+0.081295325 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 08:03:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:24.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:24 compute-0 ceph-mon[74339]: pgmap v3319: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 11 KiB/s wr, 135 op/s
Dec 06 08:03:24 compute-0 ceph-mon[74339]: osdmap e413: 3 total, 3 up, 3 in
Dec 06 08:03:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:25.185 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:25.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 KiB/s wr, 143 op/s
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002173592208728829 of space, bias 1.0, pg target 0.6520776626186487 quantized to 32 (current 32)
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:03:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:03:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:26.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:27 compute-0 ceph-mon[74339]: pgmap v3321: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 KiB/s wr, 143 op/s
Dec 06 08:03:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:03:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:03:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:03:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:03:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:03:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:27.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 KiB/s wr, 133 op/s
Dec 06 08:03:27 compute-0 nova_compute[251992]: 2025-12-06 08:03:27.834 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:28 compute-0 nova_compute[251992]: 2025-12-06 08:03:28.409 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:28.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:29 compute-0 ceph-mon[74339]: pgmap v3322: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 KiB/s wr, 133 op/s
Dec 06 08:03:29 compute-0 sudo[381274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:03:29 compute-0 sudo[381274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:29 compute-0 sudo[381274]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:29 compute-0 sudo[381299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:03:29 compute-0 sudo[381299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:29 compute-0 sudo[381299]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:03:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:29.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:03:29 compute-0 nova_compute[251992]: 2025-12-06 08:03:29.477 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 KiB/s wr, 133 op/s
Dec 06 08:03:29 compute-0 nova_compute[251992]: 2025-12-06 08:03:29.595 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:30 compute-0 podman[381325]: 2025-12-06 08:03:30.398451012 +0000 UTC m=+0.051750158 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:03:30 compute-0 podman[381326]: 2025-12-06 08:03:30.410734664 +0000 UTC m=+0.059991690 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 08:03:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:30.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:31 compute-0 ceph-mon[74339]: pgmap v3323: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 KiB/s wr, 133 op/s
Dec 06 08:03:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:31.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 14 KiB/s wr, 30 op/s
Dec 06 08:03:32 compute-0 nova_compute[251992]: 2025-12-06 08:03:32.836 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:32.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:33.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:33 compute-0 nova_compute[251992]: 2025-12-06 08:03:33.410 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 14 KiB/s wr, 30 op/s
Dec 06 08:03:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:34.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:34 compute-0 ceph-mon[74339]: pgmap v3324: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 14 KiB/s wr, 30 op/s
Dec 06 08:03:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:03:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:35.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:03:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 549 KiB/s rd, 13 KiB/s wr, 45 op/s
Dec 06 08:03:35 compute-0 ceph-mon[74339]: pgmap v3325: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 14 KiB/s wr, 30 op/s
Dec 06 08:03:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1238206177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/969753334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:36 compute-0 nova_compute[251992]: 2025-12-06 08:03:36.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:36 compute-0 nova_compute[251992]: 2025-12-06 08:03:36.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:36 compute-0 nova_compute[251992]: 2025-12-06 08:03:36.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:36 compute-0 nova_compute[251992]: 2025-12-06 08:03:36.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:36 compute-0 nova_compute[251992]: 2025-12-06 08:03:36.683 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:03:36 compute-0 nova_compute[251992]: 2025-12-06 08:03:36.683 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:03:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:03:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:36.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:03:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:03:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2015708727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.135 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:03:37 compute-0 ceph-mon[74339]: pgmap v3326: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 549 KiB/s rd, 13 KiB/s wr, 45 op/s
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.299 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.300 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4182MB free_disk=20.94266128540039GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.301 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.301 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.384 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.385 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:03:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:37.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.423 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:03:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 14 KiB/s wr, 43 op/s
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.838 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:03:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1661303541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.900 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.907 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.934 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.936 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:03:37 compute-0 nova_compute[251992]: 2025-12-06 08:03:37.936 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:03:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2015708727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1661303541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:38 compute-0 nova_compute[251992]: 2025-12-06 08:03:38.411 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:03:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:38.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:03:38 compute-0 nova_compute[251992]: 2025-12-06 08:03:38.930 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:39.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:39 compute-0 ceph-mon[74339]: pgmap v3327: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 14 KiB/s wr, 43 op/s
Dec 06 08:03:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 14 KiB/s wr, 43 op/s
Dec 06 08:03:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:40.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:40 compute-0 ceph-mon[74339]: pgmap v3328: 305 pgs: 305 active+clean; 202 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 14 KiB/s wr, 43 op/s
Dec 06 08:03:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:41.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 23 KiB/s wr, 58 op/s
Dec 06 08:03:41 compute-0 nova_compute[251992]: 2025-12-06 08:03:41.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:42 compute-0 nova_compute[251992]: 2025-12-06 08:03:42.840 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:42.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:03:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:03:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:03:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:03:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:03:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:03:43 compute-0 ceph-mon[74339]: pgmap v3329: 305 pgs: 305 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 23 KiB/s wr, 58 op/s
Dec 06 08:03:43 compute-0 nova_compute[251992]: 2025-12-06 08:03:43.412 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:43.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 11 KiB/s wr, 33 op/s
Dec 06 08:03:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:44.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:45.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 121 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 11 KiB/s wr, 47 op/s
Dec 06 08:03:45 compute-0 nova_compute[251992]: 2025-12-06 08:03:45.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:45 compute-0 nova_compute[251992]: 2025-12-06 08:03:45.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:03:45 compute-0 nova_compute[251992]: 2025-12-06 08:03:45.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:03:45 compute-0 nova_compute[251992]: 2025-12-06 08:03:45.765 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:03:46 compute-0 ceph-mon[74339]: pgmap v3330: 305 pgs: 305 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 11 KiB/s wr, 33 op/s
Dec 06 08:03:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1425635196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:46.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:47.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Dec 06 08:03:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1345692704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:47 compute-0 ceph-mon[74339]: pgmap v3331: 305 pgs: 305 active+clean; 121 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 11 KiB/s wr, 47 op/s
Dec 06 08:03:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3888401866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:47 compute-0 nova_compute[251992]: 2025-12-06 08:03:47.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:47 compute-0 nova_compute[251992]: 2025-12-06 08:03:47.841 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:48 compute-0 nova_compute[251992]: 2025-12-06 08:03:48.414 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:48 compute-0 nova_compute[251992]: 2025-12-06 08:03:48.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:48 compute-0 nova_compute[251992]: 2025-12-06 08:03:48.667 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:48 compute-0 nova_compute[251992]: 2025-12-06 08:03:48.668 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:48 compute-0 ceph-mon[74339]: pgmap v3332: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Dec 06 08:03:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:48.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:49 compute-0 sudo[381416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:03:49 compute-0 sudo[381416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:49 compute-0 sudo[381416]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:49 compute-0 sudo[381441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:03:49 compute-0 sudo[381441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:03:49 compute-0 sudo[381441]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:49.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Dec 06 08:03:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:03:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:50.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:03:50 compute-0 ceph-mon[74339]: pgmap v3333: 305 pgs: 305 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Dec 06 08:03:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2935238790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:03:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:51.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 126 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 185 KiB/s wr, 29 op/s
Dec 06 08:03:51 compute-0 nova_compute[251992]: 2025-12-06 08:03:51.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:51 compute-0 nova_compute[251992]: 2025-12-06 08:03:51.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:03:52 compute-0 nova_compute[251992]: 2025-12-06 08:03:52.845 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:52.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:52 compute-0 ceph-mon[74339]: pgmap v3334: 305 pgs: 305 active+clean; 126 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 185 KiB/s wr, 29 op/s
Dec 06 08:03:53 compute-0 nova_compute[251992]: 2025-12-06 08:03:53.415 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:53.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 126 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 176 KiB/s wr, 14 op/s
Dec 06 08:03:53 compute-0 nova_compute[251992]: 2025-12-06 08:03:53.789 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:53 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Dec 06 08:03:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:53.985155) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:03:53 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Dec 06 08:03:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008233985230, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2135, "num_deletes": 257, "total_data_size": 3927179, "memory_usage": 3983960, "flush_reason": "Manual Compaction"}
Dec 06 08:03:53 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234006942, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 3819360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65537, "largest_seqno": 67671, "table_properties": {"data_size": 3809665, "index_size": 6124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19924, "raw_average_key_size": 20, "raw_value_size": 3790269, "raw_average_value_size": 3867, "num_data_blocks": 267, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008026, "oldest_key_time": 1765008026, "file_creation_time": 1765008233, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 21836 microseconds, and 9213 cpu microseconds.
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.006993) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 3819360 bytes OK
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.007014) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.008798) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.008812) EVENT_LOG_v1 {"time_micros": 1765008234008808, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.008831) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 3918481, prev total WAL file size 3918481, number of live WAL files 2.
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.010204) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353237' seq:72057594037927935, type:22 .. '6C6F676D0032373738' seq:0, type:0; will stop at (end)
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(3729KB)], [146(10MB)]
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234010259, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 14475947, "oldest_snapshot_seqno": -1}
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 10272 keys, 14316278 bytes, temperature: kUnknown
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234094058, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 14316278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14248883, "index_size": 40594, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25733, "raw_key_size": 270636, "raw_average_key_size": 26, "raw_value_size": 14067782, "raw_average_value_size": 1369, "num_data_blocks": 1554, "num_entries": 10272, "num_filter_entries": 10272, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765008234, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.094425) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 14316278 bytes
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.095974) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.4 rd, 170.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.2 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.5) write-amplify(3.7) OK, records in: 10807, records dropped: 535 output_compression: NoCompression
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.095991) EVENT_LOG_v1 {"time_micros": 1765008234095983, "job": 90, "event": "compaction_finished", "compaction_time_micros": 83951, "compaction_time_cpu_micros": 38731, "output_level": 6, "num_output_files": 1, "total_output_size": 14316278, "num_input_records": 10807, "num_output_records": 10272, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234097056, "job": 90, "event": "table_file_deletion", "file_number": 148}
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008234099041, "job": 90, "event": "table_file_deletion", "file_number": 146}
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.009981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.099223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.099231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.099234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.099236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:54.099239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:54.758 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:03:54 compute-0 nova_compute[251992]: 2025-12-06 08:03:54.758 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:54.759 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:03:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:54.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:54 compute-0 ceph-mon[74339]: pgmap v3335: 305 pgs: 305 active+clean; 126 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 176 KiB/s wr, 14 op/s
Dec 06 08:03:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:03:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:55.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:03:55 compute-0 podman[381469]: 2025-12-06 08:03:55.44044845 +0000 UTC m=+0.087419310 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:03:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Dec 06 08:03:55 compute-0 nova_compute[251992]: 2025-12-06 08:03:55.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:03:55 compute-0 nova_compute[251992]: 2025-12-06 08:03:55.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.012724) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236012770, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 270, "num_deletes": 251, "total_data_size": 45453, "memory_usage": 50856, "flush_reason": "Manual Compaction"}
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236016010, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 45381, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67672, "largest_seqno": 67941, "table_properties": {"data_size": 43497, "index_size": 112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4821, "raw_average_key_size": 18, "raw_value_size": 39888, "raw_average_value_size": 151, "num_data_blocks": 5, "num_entries": 264, "num_filter_entries": 264, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008234, "oldest_key_time": 1765008234, "file_creation_time": 1765008236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 3407 microseconds, and 1380 cpu microseconds.
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.016083) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 45381 bytes OK
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.016151) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.018031) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.018055) EVENT_LOG_v1 {"time_micros": 1765008236018048, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.018075) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 43384, prev total WAL file size 43384, number of live WAL files 2.
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.018573) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(44KB)], [149(13MB)]
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236018623, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 14361659, "oldest_snapshot_seqno": -1}
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 10027 keys, 12478890 bytes, temperature: kUnknown
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236102087, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 12478890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12414625, "index_size": 38104, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25093, "raw_key_size": 266238, "raw_average_key_size": 26, "raw_value_size": 12239190, "raw_average_value_size": 1220, "num_data_blocks": 1442, "num_entries": 10027, "num_filter_entries": 10027, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765008236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.102381) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 12478890 bytes
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.104873) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.8 rd, 149.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.7 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(591.4) write-amplify(275.0) OK, records in: 10536, records dropped: 509 output_compression: NoCompression
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.104889) EVENT_LOG_v1 {"time_micros": 1765008236104881, "job": 92, "event": "compaction_finished", "compaction_time_micros": 83596, "compaction_time_cpu_micros": 53584, "output_level": 6, "num_output_files": 1, "total_output_size": 12478890, "num_input_records": 10536, "num_output_records": 10027, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236105148, "job": 92, "event": "table_file_deletion", "file_number": 151}
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008236107489, "job": 92, "event": "table_file_deletion", "file_number": 149}
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.018494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.107586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.107590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.107592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.107593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:56 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:03:56.107595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:03:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:03:56.761 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:03:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:56.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:57 compute-0 ceph-mon[74339]: pgmap v3336: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Dec 06 08:03:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:57.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:03:57 compute-0 nova_compute[251992]: 2025-12-06 08:03:57.846 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2113418163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:03:58 compute-0 nova_compute[251992]: 2025-12-06 08:03:58.417 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:03:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:03:58.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:03:59 compute-0 ceph-mon[74339]: pgmap v3337: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:03:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3306292249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:03:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:03:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:03:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:03:59.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:03:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:04:00 compute-0 nova_compute[251992]: 2025-12-06 08:04:00.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:00.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:01 compute-0 ceph-mon[74339]: pgmap v3338: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:04:01 compute-0 podman[381501]: 2025-12-06 08:04:01.389507069 +0000 UTC m=+0.050224487 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 06 08:04:01 compute-0 podman[381500]: 2025-12-06 08:04:01.389980171 +0000 UTC m=+0.055393986 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:04:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:01.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 06 08:04:01 compute-0 nova_compute[251992]: 2025-12-06 08:04:01.684 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:01 compute-0 nova_compute[251992]: 2025-12-06 08:04:01.685 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:04:01 compute-0 nova_compute[251992]: 2025-12-06 08:04:01.702 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:04:02 compute-0 nova_compute[251992]: 2025-12-06 08:04:02.847 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:02.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:03 compute-0 ceph-mon[74339]: pgmap v3339: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec 06 08:04:03 compute-0 nova_compute[251992]: 2025-12-06 08:04:03.418 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:03.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.6 MiB/s wr, 36 op/s
Dec 06 08:04:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:03.877 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:03.877 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:03.877 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:03 compute-0 sudo[381540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:03 compute-0 sudo[381540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:03 compute-0 sudo[381540]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:03 compute-0 sudo[381565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:04:03 compute-0 sudo[381565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:03 compute-0 sudo[381565]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:04 compute-0 sudo[381590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:04 compute-0 sudo[381590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:04 compute-0 sudo[381590]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:04 compute-0 sudo[381615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:04:04 compute-0 sudo[381615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:04:04 compute-0 sudo[381615]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 08:04:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:04:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 08:04:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:04:04 compute-0 ceph-mon[74339]: pgmap v3340: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.6 MiB/s wr, 36 op/s
Dec 06 08:04:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:04:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:04.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:05.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 94 op/s
Dec 06 08:04:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:04:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:04:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:04:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:04:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:06 compute-0 ceph-mon[74339]: pgmap v3341: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 94 op/s
Dec 06 08:04:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:06.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:07.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:04:07 compute-0 nova_compute[251992]: 2025-12-06 08:04:07.849 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:08 compute-0 nova_compute[251992]: 2025-12-06 08:04:08.420 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:08.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:09 compute-0 sudo[381674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:09 compute-0 sudo[381674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:09 compute-0 sudo[381674]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:09.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:09 compute-0 sudo[381699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:09 compute-0 sudo[381699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:09 compute-0 sudo[381699]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:09 compute-0 ceph-mon[74339]: pgmap v3342: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:04:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:04:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:04:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:04:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2412057235' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:04:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:04:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:04:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2412057235' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:04:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:04:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:04:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:04:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:04:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:04:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:10 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev afd23ce8-4efa-4b25-bf77-7c25b39ec71e does not exist
Dec 06 08:04:10 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 383751cf-6841-4fb4-bf73-3a4b60199c2e does not exist
Dec 06 08:04:10 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5bc7760b-2c97-4280-83a6-2a7291adbc7b does not exist
Dec 06 08:04:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:04:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:04:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:04:10 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:04:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:04:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:04:10 compute-0 sudo[381724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:10 compute-0 sudo[381724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:10 compute-0 sudo[381724]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:10 compute-0 sudo[381749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:04:10 compute-0 sudo[381749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:10 compute-0 sudo[381749]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:10 compute-0 sudo[381774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:10 compute-0 sudo[381774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:10 compute-0 sudo[381774]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:10 compute-0 sudo[381799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:04:10 compute-0 sudo[381799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:10 compute-0 podman[381866]: 2025-12-06 08:04:10.674946346 +0000 UTC m=+0.045430867 container create 117aed320749fa17d41f819addaddd5d23acdf1fb39e76c71d2947b92762d149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 08:04:10 compute-0 systemd[1]: Started libpod-conmon-117aed320749fa17d41f819addaddd5d23acdf1fb39e76c71d2947b92762d149.scope.
Dec 06 08:04:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:04:10 compute-0 podman[381866]: 2025-12-06 08:04:10.746381164 +0000 UTC m=+0.116865695 container init 117aed320749fa17d41f819addaddd5d23acdf1fb39e76c71d2947b92762d149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 08:04:10 compute-0 podman[381866]: 2025-12-06 08:04:10.655884481 +0000 UTC m=+0.026369012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:04:10 compute-0 podman[381866]: 2025-12-06 08:04:10.753962738 +0000 UTC m=+0.124447249 container start 117aed320749fa17d41f819addaddd5d23acdf1fb39e76c71d2947b92762d149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wing, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 08:04:10 compute-0 podman[381866]: 2025-12-06 08:04:10.758010738 +0000 UTC m=+0.128495269 container attach 117aed320749fa17d41f819addaddd5d23acdf1fb39e76c71d2947b92762d149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:04:10 compute-0 reverent_wing[381882]: 167 167
Dec 06 08:04:10 compute-0 systemd[1]: libpod-117aed320749fa17d41f819addaddd5d23acdf1fb39e76c71d2947b92762d149.scope: Deactivated successfully.
Dec 06 08:04:10 compute-0 podman[381866]: 2025-12-06 08:04:10.761228214 +0000 UTC m=+0.131712715 container died 117aed320749fa17d41f819addaddd5d23acdf1fb39e76c71d2947b92762d149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wing, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 08:04:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a59b6344b97f5dedd09de31b9de67fb1c782346b7b84d55966c971b481c28c5a-merged.mount: Deactivated successfully.
Dec 06 08:04:10 compute-0 podman[381866]: 2025-12-06 08:04:10.799423345 +0000 UTC m=+0.169907856 container remove 117aed320749fa17d41f819addaddd5d23acdf1fb39e76c71d2947b92762d149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:04:10 compute-0 ceph-mon[74339]: pgmap v3343: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:04:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2412057235' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:04:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2412057235' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:04:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:04:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:04:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:04:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:04:10 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:04:10 compute-0 systemd[1]: libpod-conmon-117aed320749fa17d41f819addaddd5d23acdf1fb39e76c71d2947b92762d149.scope: Deactivated successfully.
Dec 06 08:04:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:10.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:11 compute-0 podman[381905]: 2025-12-06 08:04:10.937948103 +0000 UTC m=+0.021423490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:04:11 compute-0 podman[381905]: 2025-12-06 08:04:11.306475908 +0000 UTC m=+0.389951265 container create 66f9ae91270e7be4b0325284fa5580c2b585c8328002f4ae4077d85574e0369e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bose, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:04:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:11.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:04:11 compute-0 systemd[1]: Started libpod-conmon-66f9ae91270e7be4b0325284fa5580c2b585c8328002f4ae4077d85574e0369e.scope.
Dec 06 08:04:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c2d7653dc84e3942d46ea665f1ef2778c1fce137060ed324175172332d3171/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c2d7653dc84e3942d46ea665f1ef2778c1fce137060ed324175172332d3171/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c2d7653dc84e3942d46ea665f1ef2778c1fce137060ed324175172332d3171/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c2d7653dc84e3942d46ea665f1ef2778c1fce137060ed324175172332d3171/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8c2d7653dc84e3942d46ea665f1ef2778c1fce137060ed324175172332d3171/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:11 compute-0 podman[381905]: 2025-12-06 08:04:11.88433999 +0000 UTC m=+0.967815377 container init 66f9ae91270e7be4b0325284fa5580c2b585c8328002f4ae4077d85574e0369e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bose, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:04:11 compute-0 podman[381905]: 2025-12-06 08:04:11.89171739 +0000 UTC m=+0.975192757 container start 66f9ae91270e7be4b0325284fa5580c2b585c8328002f4ae4077d85574e0369e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bose, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 08:04:12 compute-0 podman[381905]: 2025-12-06 08:04:12.096832804 +0000 UTC m=+1.180308171 container attach 66f9ae91270e7be4b0325284fa5580c2b585c8328002f4ae4077d85574e0369e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bose, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 08:04:12 compute-0 thirsty_bose[381923]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:04:12 compute-0 thirsty_bose[381923]: --> relative data size: 1.0
Dec 06 08:04:12 compute-0 thirsty_bose[381923]: --> All data devices are unavailable
Dec 06 08:04:12 compute-0 systemd[1]: libpod-66f9ae91270e7be4b0325284fa5580c2b585c8328002f4ae4077d85574e0369e.scope: Deactivated successfully.
Dec 06 08:04:12 compute-0 podman[381905]: 2025-12-06 08:04:12.741697785 +0000 UTC m=+1.825173142 container died 66f9ae91270e7be4b0325284fa5580c2b585c8328002f4ae4077d85574e0369e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:04:12 compute-0 nova_compute[251992]: 2025-12-06 08:04:12.851 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8c2d7653dc84e3942d46ea665f1ef2778c1fce137060ed324175172332d3171-merged.mount: Deactivated successfully.
Dec 06 08:04:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:12.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:04:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:04:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:04:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:04:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:04:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:04:13 compute-0 podman[381905]: 2025-12-06 08:04:13.200053444 +0000 UTC m=+2.283528811 container remove 66f9ae91270e7be4b0325284fa5580c2b585c8328002f4ae4077d85574e0369e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bose, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 08:04:13 compute-0 ceph-mon[74339]: pgmap v3344: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:04:13 compute-0 systemd[1]: libpod-conmon-66f9ae91270e7be4b0325284fa5580c2b585c8328002f4ae4077d85574e0369e.scope: Deactivated successfully.
Dec 06 08:04:13 compute-0 sudo[381799]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:13 compute-0 nova_compute[251992]: 2025-12-06 08:04:13.323 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:13 compute-0 nova_compute[251992]: 2025-12-06 08:04:13.323 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:13 compute-0 sudo[381952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:13 compute-0 sudo[381952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:13 compute-0 sudo[381952]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:13 compute-0 nova_compute[251992]: 2025-12-06 08:04:13.340 251996 DEBUG nova.compute.manager [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:04:13 compute-0 sudo[381977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:04:13 compute-0 sudo[381977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:13 compute-0 sudo[381977]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:13 compute-0 nova_compute[251992]: 2025-12-06 08:04:13.422 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:13 compute-0 sudo[382002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:13 compute-0 sudo[382002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:13 compute-0 sudo[382002]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:13 compute-0 nova_compute[251992]: 2025-12-06 08:04:13.449 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:13 compute-0 nova_compute[251992]: 2025-12-06 08:04:13.449 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:13.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:13 compute-0 nova_compute[251992]: 2025-12-06 08:04:13.456 251996 DEBUG nova.virt.hardware [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:04:13 compute-0 nova_compute[251992]: 2025-12-06 08:04:13.457 251996 INFO nova.compute.claims [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:04:13 compute-0 sudo[382027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:04:13 compute-0 sudo[382027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:13 compute-0 nova_compute[251992]: 2025-12-06 08:04:13.541 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:04:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 64 op/s
Dec 06 08:04:13 compute-0 podman[382111]: 2025-12-06 08:04:13.782766687 +0000 UTC m=+0.018529541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:04:13 compute-0 podman[382111]: 2025-12-06 08:04:13.881975294 +0000 UTC m=+0.117738128 container create 9fd004662e09808a69fbc69bb63602b2333ffba6886e522a584d5a7b50a31da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 08:04:13 compute-0 systemd[1]: Started libpod-conmon-9fd004662e09808a69fbc69bb63602b2333ffba6886e522a584d5a7b50a31da5.scope.
Dec 06 08:04:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:04:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174871040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:13 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:04:13 compute-0 nova_compute[251992]: 2025-12-06 08:04:13.980 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:04:13 compute-0 nova_compute[251992]: 2025-12-06 08:04:13.985 251996 DEBUG nova.compute.provider_tree [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:04:13 compute-0 podman[382111]: 2025-12-06 08:04:13.99003749 +0000 UTC m=+0.225800334 container init 9fd004662e09808a69fbc69bb63602b2333ffba6886e522a584d5a7b50a31da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 08:04:13 compute-0 podman[382111]: 2025-12-06 08:04:13.996273128 +0000 UTC m=+0.232035962 container start 9fd004662e09808a69fbc69bb63602b2333ffba6886e522a584d5a7b50a31da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:04:14 compute-0 musing_shannon[382127]: 167 167
Dec 06 08:04:14 compute-0 systemd[1]: libpod-9fd004662e09808a69fbc69bb63602b2333ffba6886e522a584d5a7b50a31da5.scope: Deactivated successfully.
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.004 251996 DEBUG nova.scheduler.client.report [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:04:14 compute-0 podman[382111]: 2025-12-06 08:04:14.005858426 +0000 UTC m=+0.241621260 container attach 9fd004662e09808a69fbc69bb63602b2333ffba6886e522a584d5a7b50a31da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 08:04:14 compute-0 podman[382111]: 2025-12-06 08:04:14.006342499 +0000 UTC m=+0.242105343 container died 9fd004662e09808a69fbc69bb63602b2333ffba6886e522a584d5a7b50a31da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Dec 06 08:04:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2899098699f490219b0c28575a773b54f761d7bff631834ce50f031cabdb3a28-merged.mount: Deactivated successfully.
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.036 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.037 251996 DEBUG nova.compute.manager [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:04:14 compute-0 podman[382111]: 2025-12-06 08:04:14.047605793 +0000 UTC m=+0.283368627 container remove 9fd004662e09808a69fbc69bb63602b2333ffba6886e522a584d5a7b50a31da5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:04:14 compute-0 systemd[1]: libpod-conmon-9fd004662e09808a69fbc69bb63602b2333ffba6886e522a584d5a7b50a31da5.scope: Deactivated successfully.
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.084 251996 DEBUG nova.compute.manager [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.085 251996 DEBUG nova.network.neutron [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.124 251996 INFO nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.146 251996 DEBUG nova.compute.manager [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:04:14 compute-0 podman[382153]: 2025-12-06 08:04:14.19944013 +0000 UTC m=+0.039291691 container create 25115a3b663c0b0cbb871017c3dc1b440475776ce008a7f99002927df3f3ce2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendel, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.219 251996 DEBUG nova.compute.manager [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.221 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.221 251996 INFO nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Creating image(s)
Dec 06 08:04:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4174871040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:14 compute-0 systemd[1]: Started libpod-conmon-25115a3b663c0b0cbb871017c3dc1b440475776ce008a7f99002927df3f3ce2d.scope.
Dec 06 08:04:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.254 251996 DEBUG nova.storage.rbd_utils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 2ce29812-b64c-4801-a37b-68c55429b70c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:04:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8922ad45bf82fcab58e5d73c7d3bc2b720debad990462045de1ba4476d2be9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8922ad45bf82fcab58e5d73c7d3bc2b720debad990462045de1ba4476d2be9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8922ad45bf82fcab58e5d73c7d3bc2b720debad990462045de1ba4476d2be9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b8922ad45bf82fcab58e5d73c7d3bc2b720debad990462045de1ba4476d2be9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:14 compute-0 podman[382153]: 2025-12-06 08:04:14.274512826 +0000 UTC m=+0.114364407 container init 25115a3b663c0b0cbb871017c3dc1b440475776ce008a7f99002927df3f3ce2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:04:14 compute-0 podman[382153]: 2025-12-06 08:04:14.183761278 +0000 UTC m=+0.023612859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:04:14 compute-0 podman[382153]: 2025-12-06 08:04:14.283973812 +0000 UTC m=+0.123825373 container start 25115a3b663c0b0cbb871017c3dc1b440475776ce008a7f99002927df3f3ce2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendel, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.293 251996 DEBUG nova.storage.rbd_utils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 2ce29812-b64c-4801-a37b-68c55429b70c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:04:14 compute-0 podman[382153]: 2025-12-06 08:04:14.293809447 +0000 UTC m=+0.133661008 container attach 25115a3b663c0b0cbb871017c3dc1b440475776ce008a7f99002927df3f3ce2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.319 251996 DEBUG nova.storage.rbd_utils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 2ce29812-b64c-4801-a37b-68c55429b70c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.323 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.366 251996 DEBUG nova.policy [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ed2d17026504d70b893923a85cece4d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.391 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.392 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.393 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.393 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.419 251996 DEBUG nova.storage.rbd_utils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 2ce29812-b64c-4801-a37b-68c55429b70c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.426 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2ce29812-b64c-4801-a37b-68c55429b70c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.830 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 2ce29812-b64c-4801-a37b-68c55429b70c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.896 251996 DEBUG nova.storage.rbd_utils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] resizing rbd image 2ce29812-b64c-4801-a37b-68c55429b70c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:04:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:14.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.989 251996 DEBUG nova.network.neutron [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Successfully created port: 42776015-1d70-4b92-9890-d31aaa444637 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:04:14 compute-0 nova_compute[251992]: 2025-12-06 08:04:14.998 251996 DEBUG nova.objects.instance [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'migration_context' on Instance uuid 2ce29812-b64c-4801-a37b-68c55429b70c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:04:15 compute-0 nova_compute[251992]: 2025-12-06 08:04:15.012 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:04:15 compute-0 nova_compute[251992]: 2025-12-06 08:04:15.012 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Ensure instance console log exists: /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:04:15 compute-0 nova_compute[251992]: 2025-12-06 08:04:15.013 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:15 compute-0 nova_compute[251992]: 2025-12-06 08:04:15.013 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:15 compute-0 nova_compute[251992]: 2025-12-06 08:04:15.013 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]: {
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:     "0": [
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:         {
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             "devices": [
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "/dev/loop3"
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             ],
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             "lv_name": "ceph_lv0",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             "lv_size": "7511998464",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             "name": "ceph_lv0",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             "tags": {
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "ceph.cluster_name": "ceph",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "ceph.crush_device_class": "",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "ceph.encrypted": "0",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "ceph.osd_id": "0",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "ceph.type": "block",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:                 "ceph.vdo": "0"
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             },
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             "type": "block",
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:             "vg_name": "ceph_vg0"
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:         }
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]:     ]
Dec 06 08:04:15 compute-0 mystifying_mendel[382184]: }
Dec 06 08:04:15 compute-0 systemd[1]: libpod-25115a3b663c0b0cbb871017c3dc1b440475776ce008a7f99002927df3f3ce2d.scope: Deactivated successfully.
Dec 06 08:04:15 compute-0 podman[382153]: 2025-12-06 08:04:15.060383632 +0000 UTC m=+0.900235193 container died 25115a3b663c0b0cbb871017c3dc1b440475776ce008a7f99002927df3f3ce2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:04:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b8922ad45bf82fcab58e5d73c7d3bc2b720debad990462045de1ba4476d2be9-merged.mount: Deactivated successfully.
Dec 06 08:04:15 compute-0 podman[382153]: 2025-12-06 08:04:15.117710699 +0000 UTC m=+0.957562260 container remove 25115a3b663c0b0cbb871017c3dc1b440475776ce008a7f99002927df3f3ce2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mendel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:04:15 compute-0 systemd[1]: libpod-conmon-25115a3b663c0b0cbb871017c3dc1b440475776ce008a7f99002927df3f3ce2d.scope: Deactivated successfully.
Dec 06 08:04:15 compute-0 sudo[382027]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:15 compute-0 sudo[382355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:15 compute-0 sudo[382355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:15 compute-0 sudo[382355]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:15 compute-0 ceph-mon[74339]: pgmap v3345: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 64 op/s
Dec 06 08:04:15 compute-0 sudo[382380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:04:15 compute-0 sudo[382380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:15 compute-0 sudo[382380]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:15 compute-0 sudo[382405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:15 compute-0 sudo[382405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:15 compute-0 sudo[382405]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:15 compute-0 sudo[382430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:04:15 compute-0 sudo[382430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:15.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 211 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 133 op/s
Dec 06 08:04:15 compute-0 podman[382495]: 2025-12-06 08:04:15.757629247 +0000 UTC m=+0.038186672 container create fde8d927d27bede1df1db422ac1fe19ce7ca2e0d8d6c374269ef994af561eee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:04:15 compute-0 systemd[1]: Started libpod-conmon-fde8d927d27bede1df1db422ac1fe19ce7ca2e0d8d6c374269ef994af561eee9.scope.
Dec 06 08:04:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:04:15 compute-0 podman[382495]: 2025-12-06 08:04:15.740452392 +0000 UTC m=+0.021009837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:04:15 compute-0 podman[382495]: 2025-12-06 08:04:15.841577802 +0000 UTC m=+0.122135247 container init fde8d927d27bede1df1db422ac1fe19ce7ca2e0d8d6c374269ef994af561eee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 06 08:04:15 compute-0 podman[382495]: 2025-12-06 08:04:15.846959746 +0000 UTC m=+0.127517171 container start fde8d927d27bede1df1db422ac1fe19ce7ca2e0d8d6c374269ef994af561eee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 08:04:15 compute-0 crazy_driscoll[382512]: 167 167
Dec 06 08:04:15 compute-0 podman[382495]: 2025-12-06 08:04:15.852377362 +0000 UTC m=+0.132934807 container attach fde8d927d27bede1df1db422ac1fe19ce7ca2e0d8d6c374269ef994af561eee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:04:15 compute-0 systemd[1]: libpod-fde8d927d27bede1df1db422ac1fe19ce7ca2e0d8d6c374269ef994af561eee9.scope: Deactivated successfully.
Dec 06 08:04:15 compute-0 conmon[382512]: conmon fde8d927d27bede1df1d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fde8d927d27bede1df1db422ac1fe19ce7ca2e0d8d6c374269ef994af561eee9.scope/container/memory.events
Dec 06 08:04:15 compute-0 podman[382495]: 2025-12-06 08:04:15.853978666 +0000 UTC m=+0.134536081 container died fde8d927d27bede1df1db422ac1fe19ce7ca2e0d8d6c374269ef994af561eee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 08:04:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6a7cf83d662e8db39594a3c62e5913bb8efc62987a745d9335f8c3b29ff1e30-merged.mount: Deactivated successfully.
Dec 06 08:04:15 compute-0 podman[382495]: 2025-12-06 08:04:15.893311347 +0000 UTC m=+0.173868772 container remove fde8d927d27bede1df1db422ac1fe19ce7ca2e0d8d6c374269ef994af561eee9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:04:15 compute-0 systemd[1]: libpod-conmon-fde8d927d27bede1df1db422ac1fe19ce7ca2e0d8d6c374269ef994af561eee9.scope: Deactivated successfully.
Dec 06 08:04:15 compute-0 nova_compute[251992]: 2025-12-06 08:04:15.936 251996 DEBUG nova.network.neutron [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Successfully updated port: 42776015-1d70-4b92-9890-d31aaa444637 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:04:15 compute-0 nova_compute[251992]: 2025-12-06 08:04:15.952 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:04:15 compute-0 nova_compute[251992]: 2025-12-06 08:04:15.952 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquired lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:04:15 compute-0 nova_compute[251992]: 2025-12-06 08:04:15.952 251996 DEBUG nova.network.neutron [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:04:16 compute-0 nova_compute[251992]: 2025-12-06 08:04:16.018 251996 DEBUG nova.compute.manager [req-2dd15bc8-c319-43d8-85d1-06dbc80a4db8 req-7128c67e-3568-452b-a1ea-5e2a2b5a96ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-changed-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:04:16 compute-0 nova_compute[251992]: 2025-12-06 08:04:16.019 251996 DEBUG nova.compute.manager [req-2dd15bc8-c319-43d8-85d1-06dbc80a4db8 req-7128c67e-3568-452b-a1ea-5e2a2b5a96ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Refreshing instance network info cache due to event network-changed-42776015-1d70-4b92-9890-d31aaa444637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:04:16 compute-0 nova_compute[251992]: 2025-12-06 08:04:16.019 251996 DEBUG oslo_concurrency.lockutils [req-2dd15bc8-c319-43d8-85d1-06dbc80a4db8 req-7128c67e-3568-452b-a1ea-5e2a2b5a96ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:04:16 compute-0 podman[382535]: 2025-12-06 08:04:16.046134522 +0000 UTC m=+0.038403118 container create 912c955d36f4271b246342c201fdf30b4c53b04725698e2baf1cc47143841e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:04:16 compute-0 nova_compute[251992]: 2025-12-06 08:04:16.069 251996 DEBUG nova.network.neutron [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:04:16 compute-0 systemd[1]: Started libpod-conmon-912c955d36f4271b246342c201fdf30b4c53b04725698e2baf1cc47143841e44.scope.
Dec 06 08:04:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70c1a1db8aed28801397e68b57e86aa2c57e64c8f23de6f24a7ec2407e3e23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70c1a1db8aed28801397e68b57e86aa2c57e64c8f23de6f24a7ec2407e3e23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70c1a1db8aed28801397e68b57e86aa2c57e64c8f23de6f24a7ec2407e3e23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70c1a1db8aed28801397e68b57e86aa2c57e64c8f23de6f24a7ec2407e3e23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:16 compute-0 podman[382535]: 2025-12-06 08:04:16.124893057 +0000 UTC m=+0.117161663 container init 912c955d36f4271b246342c201fdf30b4c53b04725698e2baf1cc47143841e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_black, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:04:16 compute-0 podman[382535]: 2025-12-06 08:04:16.030004286 +0000 UTC m=+0.022272902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:04:16 compute-0 podman[382535]: 2025-12-06 08:04:16.132420189 +0000 UTC m=+0.124688785 container start 912c955d36f4271b246342c201fdf30b4c53b04725698e2baf1cc47143841e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_black, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 08:04:16 compute-0 podman[382535]: 2025-12-06 08:04:16.134950298 +0000 UTC m=+0.127218894 container attach 912c955d36f4271b246342c201fdf30b4c53b04725698e2baf1cc47143841e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:04:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:16.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:16 compute-0 kind_black[382551]: {
Dec 06 08:04:16 compute-0 kind_black[382551]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:04:16 compute-0 kind_black[382551]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:04:16 compute-0 kind_black[382551]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:04:16 compute-0 kind_black[382551]:         "osd_id": 0,
Dec 06 08:04:16 compute-0 kind_black[382551]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:04:16 compute-0 kind_black[382551]:         "type": "bluestore"
Dec 06 08:04:16 compute-0 kind_black[382551]:     }
Dec 06 08:04:16 compute-0 kind_black[382551]: }
Dec 06 08:04:16 compute-0 systemd[1]: libpod-912c955d36f4271b246342c201fdf30b4c53b04725698e2baf1cc47143841e44.scope: Deactivated successfully.
Dec 06 08:04:16 compute-0 podman[382535]: 2025-12-06 08:04:16.957163944 +0000 UTC m=+0.949432550 container died 912c955d36f4271b246342c201fdf30b4c53b04725698e2baf1cc47143841e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_black, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 08:04:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c70c1a1db8aed28801397e68b57e86aa2c57e64c8f23de6f24a7ec2407e3e23-merged.mount: Deactivated successfully.
Dec 06 08:04:17 compute-0 podman[382535]: 2025-12-06 08:04:17.009808805 +0000 UTC m=+1.002077401 container remove 912c955d36f4271b246342c201fdf30b4c53b04725698e2baf1cc47143841e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:04:17 compute-0 systemd[1]: libpod-conmon-912c955d36f4271b246342c201fdf30b4c53b04725698e2baf1cc47143841e44.scope: Deactivated successfully.
Dec 06 08:04:17 compute-0 sudo[382430]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:04:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:04:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d963e4d6-ca53-4733-8622-6ea3cbde2867 does not exist
Dec 06 08:04:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b8647a1d-89b2-4dc5-b820-56887f744094 does not exist
Dec 06 08:04:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 70789e14-88af-4043-b3c4-8776a8ea9f33 does not exist
Dec 06 08:04:17 compute-0 sudo[382587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:17 compute-0 sudo[382587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:17 compute-0 sudo[382587]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:17 compute-0 sudo[382612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:04:17 compute-0 sudo[382612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:17 compute-0 sudo[382612]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:17 compute-0 ceph-mon[74339]: pgmap v3346: 305 pgs: 305 active+clean; 211 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 133 op/s
Dec 06 08:04:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:04:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:17.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 243 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 527 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Dec 06 08:04:17 compute-0 nova_compute[251992]: 2025-12-06 08:04:17.853 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.424 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.537 251996 DEBUG nova.network.neutron [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updating instance_info_cache with network_info: [{"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.555 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Releasing lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.555 251996 DEBUG nova.compute.manager [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Instance network_info: |[{"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.556 251996 DEBUG oslo_concurrency.lockutils [req-2dd15bc8-c319-43d8-85d1-06dbc80a4db8 req-7128c67e-3568-452b-a1ea-5e2a2b5a96ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.556 251996 DEBUG nova.network.neutron [req-2dd15bc8-c319-43d8-85d1-06dbc80a4db8 req-7128c67e-3568-452b-a1ea-5e2a2b5a96ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Refreshing network info cache for port 42776015-1d70-4b92-9890-d31aaa444637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.558 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Start _get_guest_xml network_info=[{"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.564 251996 WARNING nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.569 251996 DEBUG nova.virt.libvirt.host [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.570 251996 DEBUG nova.virt.libvirt.host [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.572 251996 DEBUG nova.virt.libvirt.host [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.573 251996 DEBUG nova.virt.libvirt.host [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.574 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.574 251996 DEBUG nova.virt.hardware [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.574 251996 DEBUG nova.virt.hardware [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.575 251996 DEBUG nova.virt.hardware [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.575 251996 DEBUG nova.virt.hardware [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.575 251996 DEBUG nova.virt.hardware [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.575 251996 DEBUG nova.virt.hardware [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.575 251996 DEBUG nova.virt.hardware [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.576 251996 DEBUG nova.virt.hardware [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.576 251996 DEBUG nova.virt.hardware [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.576 251996 DEBUG nova.virt.hardware [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.576 251996 DEBUG nova.virt.hardware [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:04:18 compute-0 nova_compute[251992]: 2025-12-06 08:04:18.580 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:04:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:04:18
Dec 06 08:04:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:04:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:04:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'vms', 'images', '.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'volumes']
Dec 06 08:04:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:04:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:18.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:04:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/854582321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.041 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.065 251996 DEBUG nova.storage.rbd_utils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 2ce29812-b64c-4801-a37b-68c55429b70c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.068 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:04:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:19 compute-0 ceph-mon[74339]: pgmap v3347: 305 pgs: 305 active+clean; 243 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 527 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Dec 06 08:04:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/854582321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:04:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:19.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:04:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257155745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.496 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.498 251996 DEBUG nova.virt.libvirt.vif [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:04:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1543702869',display_name='tempest-TestNetworkAdvancedServerOps-server-1543702869',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1543702869',id=187,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBInGUhg7nhCoc2vx+Ix7gLWIjZpxEjyqveZeyfMP/1wxX8FSrtE3tQA2JbvpPn3Vva7vIRTnPCXD+7DHbX9YJlXkUS+5x8l7M/agABi3TQb6p6z9n1aAcCS+pz1vzZhCpQ==',key_name='tempest-TestNetworkAdvancedServerOps-1853257958',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-q1zcoiif',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:04:14Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=2ce29812-b64c-4801-a37b-68c55429b70c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.499 251996 DEBUG nova.network.os_vif_util [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.500 251996 DEBUG nova.network.os_vif_util [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.502 251996 DEBUG nova.objects.instance [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ce29812-b64c-4801-a37b-68c55429b70c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.523 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:04:19 compute-0 nova_compute[251992]:   <uuid>2ce29812-b64c-4801-a37b-68c55429b70c</uuid>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   <name>instance-000000bb</name>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1543702869</nova:name>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:04:18</nova:creationTime>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <nova:user uuid="2ed2d17026504d70b893923a85cece4d">tempest-TestNetworkAdvancedServerOps-1171852383-project-member</nova:user>
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <nova:project uuid="fd8e24e430c64364ace789d88a68ba5f">tempest-TestNetworkAdvancedServerOps-1171852383</nova:project>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <nova:port uuid="42776015-1d70-4b92-9890-d31aaa444637">
Dec 06 08:04:19 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <system>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <entry name="serial">2ce29812-b64c-4801-a37b-68c55429b70c</entry>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <entry name="uuid">2ce29812-b64c-4801-a37b-68c55429b70c</entry>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     </system>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   <os>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   </os>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   <features>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   </features>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/2ce29812-b64c-4801-a37b-68c55429b70c_disk">
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       </source>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/2ce29812-b64c-4801-a37b-68c55429b70c_disk.config">
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       </source>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:04:19 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:ae:d9:3a"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <target dev="tap42776015-1d"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c/console.log" append="off"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <video>
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     </video>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:04:19 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:04:19 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:04:19 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:04:19 compute-0 nova_compute[251992]: </domain>
Dec 06 08:04:19 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.524 251996 DEBUG nova.compute.manager [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Preparing to wait for external event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.525 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.525 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.525 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.526 251996 DEBUG nova.virt.libvirt.vif [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:04:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1543702869',display_name='tempest-TestNetworkAdvancedServerOps-server-1543702869',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1543702869',id=187,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBInGUhg7nhCoc2vx+Ix7gLWIjZpxEjyqveZeyfMP/1wxX8FSrtE3tQA2JbvpPn3Vva7vIRTnPCXD+7DHbX9YJlXkUS+5x8l7M/agABi3TQb6p6z9n1aAcCS+pz1vzZhCpQ==',key_name='tempest-TestNetworkAdvancedServerOps-1853257958',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-q1zcoiif',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:04:14Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=2ce29812-b64c-4801-a37b-68c55429b70c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.526 251996 DEBUG nova.network.os_vif_util [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converting VIF {"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.527 251996 DEBUG nova.network.os_vif_util [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.527 251996 DEBUG os_vif [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.528 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.529 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.529 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.534 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.534 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap42776015-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.535 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap42776015-1d, col_values=(('external_ids', {'iface-id': '42776015-1d70-4b92-9890-d31aaa444637', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:d9:3a', 'vm-uuid': '2ce29812-b64c-4801-a37b-68c55429b70c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.536 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:19 compute-0 NetworkManager[48965]: <info>  [1765008259.5384] manager: (tap42776015-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.538 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:04:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 243 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.547 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:19 compute-0 nova_compute[251992]: 2025-12-06 08:04:19.548 251996 INFO os_vif [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d')
Dec 06 08:04:20 compute-0 nova_compute[251992]: 2025-12-06 08:04:20.031 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:04:20 compute-0 nova_compute[251992]: 2025-12-06 08:04:20.032 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:04:20 compute-0 nova_compute[251992]: 2025-12-06 08:04:20.032 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] No VIF found with MAC fa:16:3e:ae:d9:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:04:20 compute-0 nova_compute[251992]: 2025-12-06 08:04:20.033 251996 INFO nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Using config drive
Dec 06 08:04:20 compute-0 nova_compute[251992]: 2025-12-06 08:04:20.061 251996 DEBUG nova.storage.rbd_utils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 2ce29812-b64c-4801-a37b-68c55429b70c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:04:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3257155745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:04:20 compute-0 ceph-mon[74339]: pgmap v3348: 305 pgs: 305 active+clean; 243 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Dec 06 08:04:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:20.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:21.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:21 compute-0 nova_compute[251992]: 2025-12-06 08:04:21.546 251996 INFO nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Creating config drive at /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c/disk.config
Dec 06 08:04:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 06 08:04:21 compute-0 nova_compute[251992]: 2025-12-06 08:04:21.551 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0o54_k6u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:04:21 compute-0 nova_compute[251992]: 2025-12-06 08:04:21.611 251996 DEBUG nova.network.neutron [req-2dd15bc8-c319-43d8-85d1-06dbc80a4db8 req-7128c67e-3568-452b-a1ea-5e2a2b5a96ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updated VIF entry in instance network info cache for port 42776015-1d70-4b92-9890-d31aaa444637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:04:21 compute-0 nova_compute[251992]: 2025-12-06 08:04:21.612 251996 DEBUG nova.network.neutron [req-2dd15bc8-c319-43d8-85d1-06dbc80a4db8 req-7128c67e-3568-452b-a1ea-5e2a2b5a96ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updating instance_info_cache with network_info: [{"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:04:21 compute-0 nova_compute[251992]: 2025-12-06 08:04:21.658 251996 DEBUG oslo_concurrency.lockutils [req-2dd15bc8-c319-43d8-85d1-06dbc80a4db8 req-7128c67e-3568-452b-a1ea-5e2a2b5a96ac 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:04:21 compute-0 nova_compute[251992]: 2025-12-06 08:04:21.693 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0o54_k6u" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:04:21 compute-0 nova_compute[251992]: 2025-12-06 08:04:21.723 251996 DEBUG nova.storage.rbd_utils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] rbd image 2ce29812-b64c-4801-a37b-68c55429b70c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:04:21 compute-0 nova_compute[251992]: 2025-12-06 08:04:21.728 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c/disk.config 2ce29812-b64c-4801-a37b-68c55429b70c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.523 251996 DEBUG oslo_concurrency.processutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c/disk.config 2ce29812-b64c-4801-a37b-68c55429b70c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.796s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.525 251996 INFO nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Deleting local config drive /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c/disk.config because it was imported into RBD.
Dec 06 08:04:22 compute-0 kernel: tap42776015-1d: entered promiscuous mode
Dec 06 08:04:22 compute-0 NetworkManager[48965]: <info>  [1765008262.6035] manager: (tap42776015-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Dec 06 08:04:22 compute-0 ovn_controller[147168]: 2025-12-06T08:04:22Z|00711|binding|INFO|Claiming lport 42776015-1d70-4b92-9890-d31aaa444637 for this chassis.
Dec 06 08:04:22 compute-0 ovn_controller[147168]: 2025-12-06T08:04:22Z|00712|binding|INFO|42776015-1d70-4b92-9890-d31aaa444637: Claiming fa:16:3e:ae:d9:3a 10.100.0.9
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.604 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.606 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.617 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:d9:3a 10.100.0.9'], port_security=['fa:16:3e:ae:d9:3a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2ce29812-b64c-4801-a37b-68c55429b70c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2b4f8dab-73cc-4482-899b-78d869a3817d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81064306-0257-4e49-ba6e-9b830b20be21, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=42776015-1d70-4b92-9890-d31aaa444637) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.631 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 42776015-1d70-4b92-9890-d31aaa444637 in datapath 7ab4eeff-e26c-426f-afdf-0ed982f0262e bound to our chassis
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.634 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ab4eeff-e26c-426f-afdf-0ed982f0262e
Dec 06 08:04:22 compute-0 systemd-udevd[382774]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.648 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[77f0a81e-62f2-4e1c-9a5a-52d1982b9c8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.650 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ab4eeff-e1 in ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:04:22 compute-0 systemd-machined[212986]: New machine qemu-87-instance-000000bb.
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.652 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ab4eeff-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.652 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[797f2215-ae7e-45fa-8c33-fc738263ccf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.653 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[35ce1c65-af8d-460b-a7a6-50aef6347b98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 NetworkManager[48965]: <info>  [1765008262.6646] device (tap42776015-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:04:22 compute-0 NetworkManager[48965]: <info>  [1765008262.6660] device (tap42776015-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:04:22 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-000000bb.
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.673 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[1a8ae694-d976-4394-b73f-db2cffbe4646]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.682 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:22 compute-0 ovn_controller[147168]: 2025-12-06T08:04:22Z|00713|binding|INFO|Setting lport 42776015-1d70-4b92-9890-d31aaa444637 ovn-installed in OVS
Dec 06 08:04:22 compute-0 ovn_controller[147168]: 2025-12-06T08:04:22Z|00714|binding|INFO|Setting lport 42776015-1d70-4b92-9890-d31aaa444637 up in Southbound
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.688 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.688 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3197824f-14e5-49e5-9124-41bc961d3258]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.719 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[05707a6d-beea-4458-b484-e2e39fdad4e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 NetworkManager[48965]: <info>  [1765008262.7276] manager: (tap7ab4eeff-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/327)
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.727 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6d321642-eba3-4738-a0ed-d069347598ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.754 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab97698-6599-4829-9914-ccd17cbd3bae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.757 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e11f841a-b512-47ca-af02-c5159c3988c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 NetworkManager[48965]: <info>  [1765008262.7804] device (tap7ab4eeff-e0): carrier: link connected
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.787 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1de21df0-caf9-4e76-a3fb-0d855ba64839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.803 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[14b1dedb-2c2e-4914-af4b-0fa2ffabf487]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ab4eeff-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:5d:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853536, 'reachable_time': 22571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382807, 'error': None, 'target': 'ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.821 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf7a69f-9e87-45c5-b16a-07e452a1c3ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe89:5dda'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 853536, 'tstamp': 853536}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382808, 'error': None, 'target': 'ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.836 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2367e004-5474-4103-8930-c71d136b0a3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ab4eeff-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:5d:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853536, 'reachable_time': 22571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382809, 'error': None, 'target': 'ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.854 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.867 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[19a7aed2-ea65-47fa-8b88-0e7ed538345e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.919 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[eaedd589-bf17-4079-b0d2-432867ff0feb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.921 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ab4eeff-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.921 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.922 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ab4eeff-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.923 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:22 compute-0 kernel: tap7ab4eeff-e0: entered promiscuous mode
Dec 06 08:04:22 compute-0 NetworkManager[48965]: <info>  [1765008262.9261] manager: (tap7ab4eeff-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.926 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ab4eeff-e0, col_values=(('external_ids', {'iface-id': '341c5909-2539-4d67-99cf-ac110e415c92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.927 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:22 compute-0 ovn_controller[147168]: 2025-12-06T08:04:22Z|00715|binding|INFO|Releasing lport 341c5909-2539-4d67-99cf-ac110e415c92 from this chassis (sb_readonly=0)
Dec 06 08:04:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:22.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.934 251996 DEBUG nova.compute.manager [req-d0081959-1bc6-43f8-a6f2-8f8d1e1a0a15 req-7795c7c8-7cd8-43d5-9766-e5ccc5155334 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.934 251996 DEBUG oslo_concurrency.lockutils [req-d0081959-1bc6-43f8-a6f2-8f8d1e1a0a15 req-7795c7c8-7cd8-43d5-9766-e5ccc5155334 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.935 251996 DEBUG oslo_concurrency.lockutils [req-d0081959-1bc6-43f8-a6f2-8f8d1e1a0a15 req-7795c7c8-7cd8-43d5-9766-e5ccc5155334 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.935 251996 DEBUG oslo_concurrency.lockutils [req-d0081959-1bc6-43f8-a6f2-8f8d1e1a0a15 req-7795c7c8-7cd8-43d5-9766-e5ccc5155334 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.935 251996 DEBUG nova.compute.manager [req-d0081959-1bc6-43f8-a6f2-8f8d1e1a0a15 req-7795c7c8-7cd8-43d5-9766-e5ccc5155334 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Processing event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:04:22 compute-0 nova_compute[251992]: 2025-12-06 08:04:22.944 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.946 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ab4eeff-e26c-426f-afdf-0ed982f0262e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ab4eeff-e26c-426f-afdf-0ed982f0262e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.947 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0d3d61-f62e-48c1-a384-c99e429758f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.948 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-7ab4eeff-e26c-426f-afdf-0ed982f0262e
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/7ab4eeff-e26c-426f-afdf-0ed982f0262e.pid.haproxy
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 7ab4eeff-e26c-426f-afdf-0ed982f0262e
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:04:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:22.950 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'env', 'PROCESS_TAG=haproxy-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ab4eeff-e26c-426f-afdf-0ed982f0262e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:04:23 compute-0 podman[382842]: 2025-12-06 08:04:23.278840319 +0000 UTC m=+0.024267495 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:04:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:23.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.525 251996 DEBUG nova.compute.manager [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.526 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008263.5246542, 2ce29812-b64c-4801-a37b-68c55429b70c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.527 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] VM Started (Lifecycle Event)
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.529 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.533 251996 INFO nova.virt.libvirt.driver [-] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Instance spawned successfully.
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.533 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:04:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.551 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.553 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.560 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.560 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.560 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.561 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.561 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.561 251996 DEBUG nova.virt.libvirt.driver [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.600 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.600 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008263.5256383, 2ce29812-b64c-4801-a37b-68c55429b70c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.600 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] VM Paused (Lifecycle Event)
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.628 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.632 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008263.52913, 2ce29812-b64c-4801-a37b-68c55429b70c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.632 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] VM Resumed (Lifecycle Event)
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.638 251996 INFO nova.compute.manager [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Took 9.42 seconds to spawn the instance on the hypervisor.
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.638 251996 DEBUG nova.compute.manager [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:04:23 compute-0 ceph-mon[74339]: pgmap v3349: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.673 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.677 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:04:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.706 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.718 251996 INFO nova.compute.manager [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Took 10.30 seconds to build instance.
Dec 06 08:04:23 compute-0 nova_compute[251992]: 2025-12-06 08:04:23.738 251996 DEBUG oslo_concurrency.lockutils [None req-b47f7ded-5c97-4e29-966e-28ea2142f0a9 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:23 compute-0 podman[382842]: 2025-12-06 08:04:23.740958878 +0000 UTC m=+0.486386054 container create 785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:04:23 compute-0 systemd[1]: Started libpod-conmon-785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f.scope.
Dec 06 08:04:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626ab94b8ca9f5475e83ece6657c98d84b40fa537e36e13c8387840baddd96a3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:04:23 compute-0 podman[382842]: 2025-12-06 08:04:23.858915811 +0000 UTC m=+0.604342987 container init 785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 08:04:23 compute-0 podman[382842]: 2025-12-06 08:04:23.865456258 +0000 UTC m=+0.610883414 container start 785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:04:23 compute-0 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[382899]: [NOTICE]   (382904) : New worker (382906) forked
Dec 06 08:04:23 compute-0 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[382899]: [NOTICE]   (382904) : Loading success.
Dec 06 08:04:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:24 compute-0 nova_compute[251992]: 2025-12-06 08:04:24.538 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:24 compute-0 ceph-mon[74339]: pgmap v3350: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Dec 06 08:04:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:24.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:25 compute-0 nova_compute[251992]: 2025-12-06 08:04:25.080 251996 DEBUG nova.compute.manager [req-0f5b8290-e958-4828-b6d9-41fb0b9afd2c req-4d79d8f2-4b20-41fd-a239-1ca495095253 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:04:25 compute-0 nova_compute[251992]: 2025-12-06 08:04:25.081 251996 DEBUG oslo_concurrency.lockutils [req-0f5b8290-e958-4828-b6d9-41fb0b9afd2c req-4d79d8f2-4b20-41fd-a239-1ca495095253 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:25 compute-0 nova_compute[251992]: 2025-12-06 08:04:25.081 251996 DEBUG oslo_concurrency.lockutils [req-0f5b8290-e958-4828-b6d9-41fb0b9afd2c req-4d79d8f2-4b20-41fd-a239-1ca495095253 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:25 compute-0 nova_compute[251992]: 2025-12-06 08:04:25.081 251996 DEBUG oslo_concurrency.lockutils [req-0f5b8290-e958-4828-b6d9-41fb0b9afd2c req-4d79d8f2-4b20-41fd-a239-1ca495095253 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:25 compute-0 nova_compute[251992]: 2025-12-06 08:04:25.081 251996 DEBUG nova.compute.manager [req-0f5b8290-e958-4828-b6d9-41fb0b9afd2c req-4d79d8f2-4b20-41fd-a239-1ca495095253 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] No waiting events found dispatching network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:04:25 compute-0 nova_compute[251992]: 2025-12-06 08:04:25.082 251996 WARNING nova.compute.manager [req-0f5b8290-e958-4828-b6d9-41fb0b9afd2c req-4d79d8f2-4b20-41fd-a239-1ca495095253 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received unexpected event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 for instance with vm_state active and task_state None.
Dec 06 08:04:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:25.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 222 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002542188663223525 of space, bias 1.0, pg target 0.7626565989670575 quantized to 32 (current 32)
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:04:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:04:26 compute-0 podman[382916]: 2025-12-06 08:04:26.434057899 +0000 UTC m=+0.093247028 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 06 08:04:26 compute-0 ceph-mon[74339]: pgmap v3351: 305 pgs: 305 active+clean; 222 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Dec 06 08:04:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:26.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:04:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:04:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:04:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:04:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:04:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:27.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 201 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Dec 06 08:04:27 compute-0 NetworkManager[48965]: <info>  [1765008267.7569] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Dec 06 08:04:27 compute-0 NetworkManager[48965]: <info>  [1765008267.7575] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Dec 06 08:04:27 compute-0 nova_compute[251992]: 2025-12-06 08:04:27.756 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/848864160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:27 compute-0 nova_compute[251992]: 2025-12-06 08:04:27.842 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:27 compute-0 ovn_controller[147168]: 2025-12-06T08:04:27Z|00716|binding|INFO|Releasing lport 341c5909-2539-4d67-99cf-ac110e415c92 from this chassis (sb_readonly=0)
Dec 06 08:04:27 compute-0 nova_compute[251992]: 2025-12-06 08:04:27.850 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:27 compute-0 nova_compute[251992]: 2025-12-06 08:04:27.855 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:28 compute-0 nova_compute[251992]: 2025-12-06 08:04:28.786 251996 DEBUG nova.compute.manager [req-c4c561f0-dd4a-4f88-b9ed-f851522b56df req-c37ad2bf-15d8-4a61-a0fc-511c9b70f697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-changed-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:04:28 compute-0 nova_compute[251992]: 2025-12-06 08:04:28.786 251996 DEBUG nova.compute.manager [req-c4c561f0-dd4a-4f88-b9ed-f851522b56df req-c37ad2bf-15d8-4a61-a0fc-511c9b70f697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Refreshing instance network info cache due to event network-changed-42776015-1d70-4b92-9890-d31aaa444637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:04:28 compute-0 nova_compute[251992]: 2025-12-06 08:04:28.787 251996 DEBUG oslo_concurrency.lockutils [req-c4c561f0-dd4a-4f88-b9ed-f851522b56df req-c37ad2bf-15d8-4a61-a0fc-511c9b70f697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:04:28 compute-0 nova_compute[251992]: 2025-12-06 08:04:28.787 251996 DEBUG oslo_concurrency.lockutils [req-c4c561f0-dd4a-4f88-b9ed-f851522b56df req-c37ad2bf-15d8-4a61-a0fc-511c9b70f697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:04:28 compute-0 nova_compute[251992]: 2025-12-06 08:04:28.787 251996 DEBUG nova.network.neutron [req-c4c561f0-dd4a-4f88-b9ed-f851522b56df req-c37ad2bf-15d8-4a61-a0fc-511c9b70f697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Refreshing network info cache for port 42776015-1d70-4b92-9890-d31aaa444637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:04:28 compute-0 ceph-mon[74339]: pgmap v3352: 305 pgs: 305 active+clean; 201 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Dec 06 08:04:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:28.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:29.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:29 compute-0 nova_compute[251992]: 2025-12-06 08:04:29.540 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 201 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 28 KiB/s wr, 95 op/s
Dec 06 08:04:29 compute-0 sudo[382945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:29 compute-0 sudo[382945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:29 compute-0 sudo[382945]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:29 compute-0 sudo[382970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:29 compute-0 sudo[382970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:29 compute-0 sudo[382970]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:30 compute-0 nova_compute[251992]: 2025-12-06 08:04:30.199 251996 DEBUG nova.network.neutron [req-c4c561f0-dd4a-4f88-b9ed-f851522b56df req-c37ad2bf-15d8-4a61-a0fc-511c9b70f697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updated VIF entry in instance network info cache for port 42776015-1d70-4b92-9890-d31aaa444637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:04:30 compute-0 nova_compute[251992]: 2025-12-06 08:04:30.202 251996 DEBUG nova.network.neutron [req-c4c561f0-dd4a-4f88-b9ed-f851522b56df req-c37ad2bf-15d8-4a61-a0fc-511c9b70f697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updating instance_info_cache with network_info: [{"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:04:30 compute-0 nova_compute[251992]: 2025-12-06 08:04:30.486 251996 DEBUG oslo_concurrency.lockutils [req-c4c561f0-dd4a-4f88-b9ed-f851522b56df req-c37ad2bf-15d8-4a61-a0fc-511c9b70f697 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:04:30 compute-0 ceph-mon[74339]: pgmap v3353: 305 pgs: 305 active+clean; 201 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 28 KiB/s wr, 95 op/s
Dec 06 08:04:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:30.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:31 compute-0 ovn_controller[147168]: 2025-12-06T08:04:31Z|00717|binding|INFO|Releasing lport 341c5909-2539-4d67-99cf-ac110e415c92 from this chassis (sb_readonly=0)
Dec 06 08:04:31 compute-0 nova_compute[251992]: 2025-12-06 08:04:31.149 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:31.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 109 op/s
Dec 06 08:04:32 compute-0 podman[382996]: 2025-12-06 08:04:32.401842054 +0000 UTC m=+0.063494894 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:04:32 compute-0 podman[382997]: 2025-12-06 08:04:32.463978501 +0000 UTC m=+0.117584524 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:04:32 compute-0 ceph-mon[74339]: pgmap v3354: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 109 op/s
Dec 06 08:04:32 compute-0 nova_compute[251992]: 2025-12-06 08:04:32.856 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:32.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:33.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Dec 06 08:04:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2893058911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1055536290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:34 compute-0 nova_compute[251992]: 2025-12-06 08:04:34.185 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:34 compute-0 nova_compute[251992]: 2025-12-06 08:04:34.542 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:34 compute-0 ceph-mon[74339]: pgmap v3355: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Dec 06 08:04:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:34.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:35.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Dec 06 08:04:36 compute-0 nova_compute[251992]: 2025-12-06 08:04:36.674 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:36 compute-0 nova_compute[251992]: 2025-12-06 08:04:36.700 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:36 compute-0 nova_compute[251992]: 2025-12-06 08:04:36.701 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:36 compute-0 nova_compute[251992]: 2025-12-06 08:04:36.701 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:36 compute-0 nova_compute[251992]: 2025-12-06 08:04:36.702 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:04:36 compute-0 nova_compute[251992]: 2025-12-06 08:04:36.702 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:04:36 compute-0 ceph-mon[74339]: pgmap v3356: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Dec 06 08:04:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:36.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:04:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1616088116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.192 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.212 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.264 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.264 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:04:37 compute-0 ovn_controller[147168]: 2025-12-06T08:04:37Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:d9:3a 10.100.0.9
Dec 06 08:04:37 compute-0 ovn_controller[147168]: 2025-12-06T08:04:37Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:d9:3a 10.100.0.9
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.413 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.414 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3950MB free_disk=20.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.414 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.414 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:37.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.480 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 2ce29812-b64c-4801-a37b-68c55429b70c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.480 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.480 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.529 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:04:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 178 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 877 KiB/s wr, 53 op/s
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.858 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1616088116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:04:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876630916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.957 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.962 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:04:37 compute-0 nova_compute[251992]: 2025-12-06 08:04:37.977 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:04:38 compute-0 nova_compute[251992]: 2025-12-06 08:04:38.004 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:04:38 compute-0 nova_compute[251992]: 2025-12-06 08:04:38.004 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:38 compute-0 ceph-mon[74339]: pgmap v3357: 305 pgs: 305 active+clean; 178 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 877 KiB/s wr, 53 op/s
Dec 06 08:04:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2876630916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:38.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:38 compute-0 nova_compute[251992]: 2025-12-06 08:04:38.981 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:39.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:39 compute-0 nova_compute[251992]: 2025-12-06 08:04:39.546 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 178 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 441 KiB/s rd, 877 KiB/s wr, 28 op/s
Dec 06 08:04:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:40.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:40 compute-0 ceph-mon[74339]: pgmap v3358: 305 pgs: 305 active+clean; 178 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 441 KiB/s rd, 877 KiB/s wr, 28 op/s
Dec 06 08:04:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:41.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Dec 06 08:04:42 compute-0 nova_compute[251992]: 2025-12-06 08:04:42.860 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:42.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:04:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:04:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:04:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:04:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:04:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:04:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:43.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:04:43 compute-0 nova_compute[251992]: 2025-12-06 08:04:43.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:44 compute-0 ceph-mon[74339]: pgmap v3359: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Dec 06 08:04:44 compute-0 nova_compute[251992]: 2025-12-06 08:04:44.105 251996 INFO nova.compute.manager [None req-4d62d999-2428-4e01-ac72-4666fc7d7164 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Get console output
Dec 06 08:04:44 compute-0 nova_compute[251992]: 2025-12-06 08:04:44.115 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:04:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:44 compute-0 nova_compute[251992]: 2025-12-06 08:04:44.548 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:44.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:45 compute-0 ceph-mon[74339]: pgmap v3360: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:04:45 compute-0 nova_compute[251992]: 2025-12-06 08:04:45.212 251996 INFO nova.compute.manager [None req-44423852-ff89-46f3-8f22-5b07356ba2cd 2ed2d17026504d70b893923a85cece4d fd8e24e430c64364ace789d88a68ba5f - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Get console output
Dec 06 08:04:45 compute-0 nova_compute[251992]: 2025-12-06 08:04:45.220 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:04:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:45.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:04:45 compute-0 nova_compute[251992]: 2025-12-06 08:04:45.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:45 compute-0 nova_compute[251992]: 2025-12-06 08:04:45.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:04:45 compute-0 nova_compute[251992]: 2025-12-06 08:04:45.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:04:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/826595747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:46 compute-0 nova_compute[251992]: 2025-12-06 08:04:46.157 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:04:46 compute-0 nova_compute[251992]: 2025-12-06 08:04:46.158 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:04:46 compute-0 nova_compute[251992]: 2025-12-06 08:04:46.158 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:04:46 compute-0 nova_compute[251992]: 2025-12-06 08:04:46.159 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2ce29812-b64c-4801-a37b-68c55429b70c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:04:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:46.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:47 compute-0 ceph-mon[74339]: pgmap v3361: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:04:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2339609487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:47.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:04:47 compute-0 nova_compute[251992]: 2025-12-06 08:04:47.861 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:48 compute-0 nova_compute[251992]: 2025-12-06 08:04:48.656 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updating instance_info_cache with network_info: [{"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:04:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:48.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:49 compute-0 nova_compute[251992]: 2025-12-06 08:04:49.011 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:04:49 compute-0 nova_compute[251992]: 2025-12-06 08:04:49.012 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:04:49 compute-0 nova_compute[251992]: 2025-12-06 08:04:49.013 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:49 compute-0 ceph-mon[74339]: pgmap v3362: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:04:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:49.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:49 compute-0 nova_compute[251992]: 2025-12-06 08:04:49.550 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 241 KiB/s rd, 1.3 MiB/s wr, 48 op/s
Dec 06 08:04:49 compute-0 nova_compute[251992]: 2025-12-06 08:04:49.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:49 compute-0 sudo[383087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:49 compute-0 sudo[383087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:49 compute-0 sudo[383087]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:49 compute-0 nova_compute[251992]: 2025-12-06 08:04:49.797 251996 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Check if temp file /var/lib/nova/instances/tmpddofkvg0 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Dec 06 08:04:49 compute-0 nova_compute[251992]: 2025-12-06 08:04:49.798 251996 DEBUG nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpddofkvg0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2ce29812-b64c-4801-a37b-68c55429b70c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Dec 06 08:04:49 compute-0 sudo[383112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:04:49 compute-0 sudo[383112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:04:49 compute-0 sudo[383112]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4333110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:04:50 compute-0 nova_compute[251992]: 2025-12-06 08:04:50.659 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:50.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:51 compute-0 ceph-mon[74339]: pgmap v3363: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 241 KiB/s rd, 1.3 MiB/s wr, 48 op/s
Dec 06 08:04:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:51.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 207 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 247 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Dec 06 08:04:52 compute-0 nova_compute[251992]: 2025-12-06 08:04:52.863 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:52.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:53 compute-0 ceph-mon[74339]: pgmap v3364: 305 pgs: 305 active+clean; 207 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 247 KiB/s rd, 1.6 MiB/s wr, 60 op/s
Dec 06 08:04:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:53.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 207 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 KiB/s rd, 344 KiB/s wr, 12 op/s
Dec 06 08:04:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:54 compute-0 nova_compute[251992]: 2025-12-06 08:04:54.552 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:54.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:55 compute-0 ceph-mon[74339]: pgmap v3365: 305 pgs: 305 active+clean; 207 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 KiB/s rd, 344 KiB/s wr, 12 op/s
Dec 06 08:04:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:04:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:55.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:04:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:04:55 compute-0 nova_compute[251992]: 2025-12-06 08:04:55.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:55 compute-0 nova_compute[251992]: 2025-12-06 08:04:55.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:55 compute-0 nova_compute[251992]: 2025-12-06 08:04:55.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:04:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:56.165 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.165 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:56.166 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:04:56 compute-0 ceph-mon[74339]: pgmap v3366: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.618 251996 INFO nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Took 4.72 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.619 251996 DEBUG nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.638 251996 DEBUG nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpddofkvg0',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2ce29812-b64c-4801-a37b-68c55429b70c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(06675f83-919b-4ae3-a34e-52208b739c5e),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.642 251996 DEBUG nova.objects.instance [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lazy-loading 'migration_context' on Instance uuid 2ce29812-b64c-4801-a37b-68c55429b70c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.643 251996 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.645 251996 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.645 251996 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.660 251996 DEBUG nova.virt.libvirt.vif [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:04:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1543702869',display_name='tempest-TestNetworkAdvancedServerOps-server-1543702869',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1543702869',id=187,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBInGUhg7nhCoc2vx+Ix7gLWIjZpxEjyqveZeyfMP/1wxX8FSrtE3tQA2JbvpPn3Vva7vIRTnPCXD+7DHbX9YJlXkUS+5x8l7M/agABi3TQb6p6z9n1aAcCS+pz1vzZhCpQ==',key_name='tempest-TestNetworkAdvancedServerOps-1853257958',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:04:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-q1zcoiif',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:04:23Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=2ce29812-b64c-4801-a37b-68c55429b70c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.661 251996 DEBUG nova.network.os_vif_util [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converting VIF {"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.661 251996 DEBUG nova.network.os_vif_util [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.662 251996 DEBUG nova.virt.libvirt.migration [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updating guest XML with vif config: <interface type="ethernet">
Dec 06 08:04:56 compute-0 nova_compute[251992]:   <mac address="fa:16:3e:ae:d9:3a"/>
Dec 06 08:04:56 compute-0 nova_compute[251992]:   <model type="virtio"/>
Dec 06 08:04:56 compute-0 nova_compute[251992]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:04:56 compute-0 nova_compute[251992]:   <mtu size="1442"/>
Dec 06 08:04:56 compute-0 nova_compute[251992]:   <target dev="tap42776015-1d"/>
Dec 06 08:04:56 compute-0 nova_compute[251992]: </interface>
Dec 06 08:04:56 compute-0 nova_compute[251992]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.662 251996 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.665 251996 DEBUG nova.compute.manager [req-f8323299-1584-46f6-99dd-bf69c4e67865 req-8ed29b65-fca0-4ac2-834f-1ca53bb71691 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-unplugged-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.665 251996 DEBUG oslo_concurrency.lockutils [req-f8323299-1584-46f6-99dd-bf69c4e67865 req-8ed29b65-fca0-4ac2-834f-1ca53bb71691 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.666 251996 DEBUG oslo_concurrency.lockutils [req-f8323299-1584-46f6-99dd-bf69c4e67865 req-8ed29b65-fca0-4ac2-834f-1ca53bb71691 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.666 251996 DEBUG oslo_concurrency.lockutils [req-f8323299-1584-46f6-99dd-bf69c4e67865 req-8ed29b65-fca0-4ac2-834f-1ca53bb71691 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.666 251996 DEBUG nova.compute.manager [req-f8323299-1584-46f6-99dd-bf69c4e67865 req-8ed29b65-fca0-4ac2-834f-1ca53bb71691 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] No waiting events found dispatching network-vif-unplugged-42776015-1d70-4b92-9890-d31aaa444637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.666 251996 DEBUG nova.compute.manager [req-f8323299-1584-46f6-99dd-bf69c4e67865 req-8ed29b65-fca0-4ac2-834f-1ca53bb71691 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-unplugged-42776015-1d70-4b92-9890-d31aaa444637 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.667 251996 DEBUG nova.compute.manager [req-f8323299-1584-46f6-99dd-bf69c4e67865 req-8ed29b65-fca0-4ac2-834f-1ca53bb71691 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.667 251996 DEBUG oslo_concurrency.lockutils [req-f8323299-1584-46f6-99dd-bf69c4e67865 req-8ed29b65-fca0-4ac2-834f-1ca53bb71691 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.667 251996 DEBUG oslo_concurrency.lockutils [req-f8323299-1584-46f6-99dd-bf69c4e67865 req-8ed29b65-fca0-4ac2-834f-1ca53bb71691 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.667 251996 DEBUG oslo_concurrency.lockutils [req-f8323299-1584-46f6-99dd-bf69c4e67865 req-8ed29b65-fca0-4ac2-834f-1ca53bb71691 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.668 251996 DEBUG nova.compute.manager [req-f8323299-1584-46f6-99dd-bf69c4e67865 req-8ed29b65-fca0-4ac2-834f-1ca53bb71691 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] No waiting events found dispatching network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:04:56 compute-0 nova_compute[251992]: 2025-12-06 08:04:56.668 251996 WARNING nova.compute.manager [req-f8323299-1584-46f6-99dd-bf69c4e67865 req-8ed29b65-fca0-4ac2-834f-1ca53bb71691 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received unexpected event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 for instance with vm_state active and task_state migrating.
Dec 06 08:04:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:04:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:56.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.148 251996 DEBUG nova.virt.libvirt.migration [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.149 251996 INFO nova.virt.libvirt.migration [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Increasing downtime to 50 ms after 0 sec elapsed time
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.222 251996 INFO nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Dec 06 08:04:57 compute-0 podman[383141]: 2025-12-06 08:04:57.419948462 +0000 UTC m=+0.077938534 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:04:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3888507031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:04:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4063916969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:04:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:57.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.725 251996 DEBUG nova.virt.libvirt.migration [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.726 251996 DEBUG nova.virt.libvirt.migration [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.788 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.810 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Triggering sync for uuid 2ce29812-b64c-4801-a37b-68c55429b70c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.810 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.811 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "2ce29812-b64c-4801-a37b-68c55429b70c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.811 251996 INFO nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] During sync_power_state the instance has a pending task (migrating). Skip.
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.811 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "2ce29812-b64c-4801-a37b-68c55429b70c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.838 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008297.8385541, 2ce29812-b64c-4801-a37b-68c55429b70c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.839 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] VM Paused (Lifecycle Event)
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.857 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.860 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.864 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:57 compute-0 nova_compute[251992]: 2025-12-06 08:04:57.882 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] During sync_power_state the instance has a pending task (migrating). Skip.
Dec 06 08:04:58 compute-0 kernel: tap42776015-1d (unregistering): left promiscuous mode
Dec 06 08:04:58 compute-0 NetworkManager[48965]: <info>  [1765008298.0087] device (tap42776015-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.015 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:58 compute-0 ovn_controller[147168]: 2025-12-06T08:04:58Z|00718|binding|INFO|Releasing lport 42776015-1d70-4b92-9890-d31aaa444637 from this chassis (sb_readonly=0)
Dec 06 08:04:58 compute-0 ovn_controller[147168]: 2025-12-06T08:04:58Z|00719|binding|INFO|Setting lport 42776015-1d70-4b92-9890-d31aaa444637 down in Southbound
Dec 06 08:04:58 compute-0 ovn_controller[147168]: 2025-12-06T08:04:58Z|00720|binding|INFO|Removing iface tap42776015-1d ovn-installed in OVS
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.018 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.023 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:d9:3a 10.100.0.9'], port_security=['fa:16:3e:ae:d9:3a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '9f96b960-b4f2-40bd-ae99-08121f5e8b78'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2ce29812-b64c-4801-a37b-68c55429b70c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd8e24e430c64364ace789d88a68ba5f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2b4f8dab-73cc-4482-899b-78d869a3817d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81064306-0257-4e49-ba6e-9b830b20be21, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=42776015-1d70-4b92-9890-d31aaa444637) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.024 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 42776015-1d70-4b92-9890-d31aaa444637 in datapath 7ab4eeff-e26c-426f-afdf-0ed982f0262e unbound from our chassis
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.025 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ab4eeff-e26c-426f-afdf-0ed982f0262e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.028 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[424b8191-1d37-4525-be46-405f40dd6147]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.028 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e namespace which is not needed anymore
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.033 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:58 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000bb.scope: Deactivated successfully.
Dec 06 08:04:58 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000bb.scope: Consumed 15.671s CPU time.
Dec 06 08:04:58 compute-0 systemd-machined[212986]: Machine qemu-87-instance-000000bb terminated.
Dec 06 08:04:58 compute-0 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[382899]: [NOTICE]   (382904) : haproxy version is 2.8.14-c23fe91
Dec 06 08:04:58 compute-0 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[382899]: [NOTICE]   (382904) : path to executable is /usr/sbin/haproxy
Dec 06 08:04:58 compute-0 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[382899]: [WARNING]  (382904) : Exiting Master process...
Dec 06 08:04:58 compute-0 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[382899]: [ALERT]    (382904) : Current worker (382906) exited with code 143 (Terminated)
Dec 06 08:04:58 compute-0 neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e[382899]: [WARNING]  (382904) : All workers exited. Exiting... (0)
Dec 06 08:04:58 compute-0 systemd[1]: libpod-785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f.scope: Deactivated successfully.
Dec 06 08:04:58 compute-0 podman[383194]: 2025-12-06 08:04:58.162598542 +0000 UTC m=+0.043417203 container died 785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:04:58 compute-0 virtqemud[251613]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/2ce29812-b64c-4801-a37b-68c55429b70c_disk: No such file or directory
Dec 06 08:04:58 compute-0 virtqemud[251613]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/2ce29812-b64c-4801-a37b-68c55429b70c_disk: No such file or directory
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.168 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:04:58 compute-0 kernel: tap42776015-1d: entered promiscuous mode
Dec 06 08:04:58 compute-0 NetworkManager[48965]: <info>  [1765008298.1790] manager: (tap42776015-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/331)
Dec 06 08:04:58 compute-0 kernel: tap42776015-1d (unregistering): left promiscuous mode
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.187 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f-userdata-shm.mount: Deactivated successfully.
Dec 06 08:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-626ab94b8ca9f5475e83ece6657c98d84b40fa537e36e13c8387840baddd96a3-merged.mount: Deactivated successfully.
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.205 251996 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.206 251996 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.206 251996 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Dec 06 08:04:58 compute-0 podman[383194]: 2025-12-06 08:04:58.208889421 +0000 UTC m=+0.089708082 container cleanup 785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:04:58 compute-0 systemd[1]: libpod-conmon-785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f.scope: Deactivated successfully.
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.228 251996 DEBUG nova.virt.libvirt.guest [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '2ce29812-b64c-4801-a37b-68c55429b70c' (instance-000000bb) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.228 251996 INFO nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Migration operation has completed
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.228 251996 INFO nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] _post_live_migration() is started..
Dec 06 08:04:58 compute-0 podman[383226]: 2025-12-06 08:04:58.273652188 +0000 UTC m=+0.040681618 container remove 785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.279 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8481362d-78bb-43df-9f29-b015c280bc9c]: (4, ('Sat Dec  6 08:04:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e (785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f)\n785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f\nSat Dec  6 08:04:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e (785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f)\n785185a55d2e4816ca216e15c644cc552ba9e9c0a511167603ad39e5e2b8fe5f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.280 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c9eb28-09c2-4e0a-90a1-9b16038a549a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.281 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ab4eeff-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.283 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:58 compute-0 kernel: tap7ab4eeff-e0: left promiscuous mode
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.299 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.302 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[062d4725-632a-40af-aee9-ccf209fb7af4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.318 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[46bf3e16-4590-43e7-913c-4fe63c85e8ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.320 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f0ce50-e952-4c3a-b803-d17ad0d647f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.334 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6a400d-c7a5-4a70-9dc6-b5bc6c6ced7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853530, 'reachable_time': 43607, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383246, 'error': None, 'target': 'ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.336 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ab4eeff-e26c-426f-afdf-0ed982f0262e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:04:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:04:58.336 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[f0454366-7a42-482e-9082-8d6693fd65b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:04:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d7ab4eeff\x2de26c\x2d426f\x2dafdf\x2d0ed982f0262e.mount: Deactivated successfully.
Dec 06 08:04:58 compute-0 ceph-mon[74339]: pgmap v3367: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.825 251996 DEBUG nova.compute.manager [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-changed-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.826 251996 DEBUG nova.compute.manager [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Refreshing instance network info cache due to event network-changed-42776015-1d70-4b92-9890-d31aaa444637. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.827 251996 DEBUG oslo_concurrency.lockutils [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.827 251996 DEBUG oslo_concurrency.lockutils [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:04:58 compute-0 nova_compute[251992]: 2025-12-06 08:04:58.827 251996 DEBUG nova.network.neutron [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Refreshing network info cache for port 42776015-1d70-4b92-9890-d31aaa444637 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:04:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:04:58.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:04:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:04:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:04:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:04:59.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:04:59 compute-0 nova_compute[251992]: 2025-12-06 08:04:59.555 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:04:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.543 251996 DEBUG nova.network.neutron [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Activated binding for port 42776015-1d70-4b92-9890-d31aaa444637 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.544 251996 DEBUG nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.547 251996 DEBUG nova.virt.libvirt.vif [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:04:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1543702869',display_name='tempest-TestNetworkAdvancedServerOps-server-1543702869',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1543702869',id=187,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBInGUhg7nhCoc2vx+Ix7gLWIjZpxEjyqveZeyfMP/1wxX8FSrtE3tQA2JbvpPn3Vva7vIRTnPCXD+7DHbX9YJlXkUS+5x8l7M/agABi3TQb6p6z9n1aAcCS+pz1vzZhCpQ==',key_name='tempest-TestNetworkAdvancedServerOps-1853257958',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:04:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fd8e24e430c64364ace789d88a68ba5f',ramdisk_id='',reservation_id='r-q1zcoiif',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1171852383',owner_user_name='tempest-TestNetworkAdvancedServerOps-1171852383-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:04:46Z,user_data=None,user_id='2ed2d17026504d70b893923a85cece4d',uuid=2ce29812-b64c-4801-a37b-68c55429b70c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.548 251996 DEBUG nova.network.os_vif_util [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converting VIF {"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.549 251996 DEBUG nova.network.os_vif_util [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.550 251996 DEBUG os_vif [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.553 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.553 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42776015-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.555 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.556 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.560 251996 INFO os_vif [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:d9:3a,bridge_name='br-int',has_traffic_filtering=True,id=42776015-1d70-4b92-9890-d31aaa444637,network=Network(7ab4eeff-e26c-426f-afdf-0ed982f0262e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap42776015-1d')
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.561 251996 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.561 251996 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.561 251996 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.562 251996 DEBUG nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.562 251996 INFO nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Deleting instance files /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c_del
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.562 251996 INFO nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Deletion of /var/lib/nova/instances/2ce29812-b64c-4801-a37b-68c55429b70c_del complete
Dec 06 08:05:00 compute-0 ceph-mon[74339]: pgmap v3368: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.939 251996 DEBUG nova.compute.manager [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.939 251996 DEBUG oslo_concurrency.lockutils [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.939 251996 DEBUG oslo_concurrency.lockutils [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.940 251996 DEBUG oslo_concurrency.lockutils [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.940 251996 DEBUG nova.compute.manager [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] No waiting events found dispatching network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.940 251996 WARNING nova.compute.manager [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received unexpected event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 for instance with vm_state active and task_state migrating.
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.941 251996 DEBUG nova.compute.manager [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.941 251996 DEBUG oslo_concurrency.lockutils [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.941 251996 DEBUG oslo_concurrency.lockutils [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.941 251996 DEBUG oslo_concurrency.lockutils [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.942 251996 DEBUG nova.compute.manager [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] No waiting events found dispatching network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.942 251996 WARNING nova.compute.manager [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received unexpected event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 for instance with vm_state active and task_state migrating.
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.942 251996 DEBUG nova.compute.manager [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.942 251996 DEBUG oslo_concurrency.lockutils [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.943 251996 DEBUG oslo_concurrency.lockutils [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.943 251996 DEBUG oslo_concurrency.lockutils [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.943 251996 DEBUG nova.compute.manager [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] No waiting events found dispatching network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:05:00 compute-0 nova_compute[251992]: 2025-12-06 08:05:00.943 251996 WARNING nova.compute.manager [req-e40a8eec-80bd-4b7f-add2-797ed5885434 req-6fafe99b-c3dc-4bb0-8c1c-bd563b0bbb1b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received unexpected event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 for instance with vm_state active and task_state migrating.
Dec 06 08:05:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:00.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:01 compute-0 nova_compute[251992]: 2025-12-06 08:05:01.226 251996 DEBUG nova.compute.manager [req-b41bb6dd-d27a-48c0-88ea-a0e70effc023 req-25cbe009-44d0-47d3-ae63-f963432f5e55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-unplugged-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:01 compute-0 nova_compute[251992]: 2025-12-06 08:05:01.226 251996 DEBUG oslo_concurrency.lockutils [req-b41bb6dd-d27a-48c0-88ea-a0e70effc023 req-25cbe009-44d0-47d3-ae63-f963432f5e55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:01 compute-0 nova_compute[251992]: 2025-12-06 08:05:01.226 251996 DEBUG oslo_concurrency.lockutils [req-b41bb6dd-d27a-48c0-88ea-a0e70effc023 req-25cbe009-44d0-47d3-ae63-f963432f5e55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:01 compute-0 nova_compute[251992]: 2025-12-06 08:05:01.227 251996 DEBUG oslo_concurrency.lockutils [req-b41bb6dd-d27a-48c0-88ea-a0e70effc023 req-25cbe009-44d0-47d3-ae63-f963432f5e55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:01 compute-0 nova_compute[251992]: 2025-12-06 08:05:01.227 251996 DEBUG nova.compute.manager [req-b41bb6dd-d27a-48c0-88ea-a0e70effc023 req-25cbe009-44d0-47d3-ae63-f963432f5e55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] No waiting events found dispatching network-vif-unplugged-42776015-1d70-4b92-9890-d31aaa444637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:05:01 compute-0 nova_compute[251992]: 2025-12-06 08:05:01.227 251996 DEBUG nova.compute.manager [req-b41bb6dd-d27a-48c0-88ea-a0e70effc023 req-25cbe009-44d0-47d3-ae63-f963432f5e55 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-unplugged-42776015-1d70-4b92-9890-d31aaa444637 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:05:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:01.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 607 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Dec 06 08:05:01 compute-0 nova_compute[251992]: 2025-12-06 08:05:01.977 251996 DEBUG nova.network.neutron [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updated VIF entry in instance network info cache for port 42776015-1d70-4b92-9890-d31aaa444637. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:05:01 compute-0 nova_compute[251992]: 2025-12-06 08:05:01.978 251996 DEBUG nova.network.neutron [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Updating instance_info_cache with network_info: [{"id": "42776015-1d70-4b92-9890-d31aaa444637", "address": "fa:16:3e:ae:d9:3a", "network": {"id": "7ab4eeff-e26c-426f-afdf-0ed982f0262e", "bridge": "br-int", "label": "tempest-network-smoke--49053478", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd8e24e430c64364ace789d88a68ba5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap42776015-1d", "ovs_interfaceid": "42776015-1d70-4b92-9890-d31aaa444637", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:05:02 compute-0 nova_compute[251992]: 2025-12-06 08:05:02.009 251996 DEBUG oslo_concurrency.lockutils [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-2ce29812-b64c-4801-a37b-68c55429b70c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:05:02 compute-0 nova_compute[251992]: 2025-12-06 08:05:02.009 251996 DEBUG nova.compute.manager [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-unplugged-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:02 compute-0 nova_compute[251992]: 2025-12-06 08:05:02.009 251996 DEBUG oslo_concurrency.lockutils [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:02 compute-0 nova_compute[251992]: 2025-12-06 08:05:02.010 251996 DEBUG oslo_concurrency.lockutils [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:02 compute-0 nova_compute[251992]: 2025-12-06 08:05:02.010 251996 DEBUG oslo_concurrency.lockutils [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:02 compute-0 nova_compute[251992]: 2025-12-06 08:05:02.010 251996 DEBUG nova.compute.manager [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] No waiting events found dispatching network-vif-unplugged-42776015-1d70-4b92-9890-d31aaa444637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:05:02 compute-0 nova_compute[251992]: 2025-12-06 08:05:02.010 251996 DEBUG nova.compute.manager [req-858452fd-fb61-46f3-b3a0-1200e647d52a req-6a5870c6-aba0-4fee-935f-b59cd71de0ed 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-unplugged-42776015-1d70-4b92-9890-d31aaa444637 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:05:02 compute-0 ceph-mon[74339]: pgmap v3369: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 607 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Dec 06 08:05:02 compute-0 nova_compute[251992]: 2025-12-06 08:05:02.867 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:02.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:03 compute-0 nova_compute[251992]: 2025-12-06 08:05:03.030 251996 DEBUG nova.compute.manager [req-2c401851-40c1-4cdc-9c18-47c9217e38e0 req-095e35ba-2746-4904-bc32-411fcea2974b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:03 compute-0 nova_compute[251992]: 2025-12-06 08:05:03.030 251996 DEBUG oslo_concurrency.lockutils [req-2c401851-40c1-4cdc-9c18-47c9217e38e0 req-095e35ba-2746-4904-bc32-411fcea2974b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:03 compute-0 nova_compute[251992]: 2025-12-06 08:05:03.031 251996 DEBUG oslo_concurrency.lockutils [req-2c401851-40c1-4cdc-9c18-47c9217e38e0 req-095e35ba-2746-4904-bc32-411fcea2974b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:03 compute-0 nova_compute[251992]: 2025-12-06 08:05:03.031 251996 DEBUG oslo_concurrency.lockutils [req-2c401851-40c1-4cdc-9c18-47c9217e38e0 req-095e35ba-2746-4904-bc32-411fcea2974b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:03 compute-0 nova_compute[251992]: 2025-12-06 08:05:03.031 251996 DEBUG nova.compute.manager [req-2c401851-40c1-4cdc-9c18-47c9217e38e0 req-095e35ba-2746-4904-bc32-411fcea2974b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] No waiting events found dispatching network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:05:03 compute-0 nova_compute[251992]: 2025-12-06 08:05:03.031 251996 WARNING nova.compute.manager [req-2c401851-40c1-4cdc-9c18-47c9217e38e0 req-095e35ba-2746-4904-bc32-411fcea2974b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Received unexpected event network-vif-plugged-42776015-1d70-4b92-9890-d31aaa444637 for instance with vm_state active and task_state migrating.
Dec 06 08:05:03 compute-0 podman[383251]: 2025-12-06 08:05:03.439561523 +0000 UTC m=+0.085815586 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 08:05:03 compute-0 podman[383250]: 2025-12-06 08:05:03.45388173 +0000 UTC m=+0.103379931 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:05:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:03.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 601 KiB/s rd, 1.5 MiB/s wr, 49 op/s
Dec 06 08:05:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:03.878 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:03.879 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:03.879 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:04 compute-0 ceph-mon[74339]: pgmap v3370: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 601 KiB/s rd, 1.5 MiB/s wr, 49 op/s
Dec 06 08:05:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:04.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:05.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:05 compute-0 nova_compute[251992]: 2025-12-06 08:05:05.557 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 94 op/s
Dec 06 08:05:06 compute-0 nova_compute[251992]: 2025-12-06 08:05:06.944 251996 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:06 compute-0 nova_compute[251992]: 2025-12-06 08:05:06.945 251996 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:06 compute-0 nova_compute[251992]: 2025-12-06 08:05:06.945 251996 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "2ce29812-b64c-4801-a37b-68c55429b70c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:06 compute-0 nova_compute[251992]: 2025-12-06 08:05:06.967 251996 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:06 compute-0 nova_compute[251992]: 2025-12-06 08:05:06.968 251996 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:06 compute-0 nova_compute[251992]: 2025-12-06 08:05:06.968 251996 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:06 compute-0 nova_compute[251992]: 2025-12-06 08:05:06.968 251996 DEBUG nova.compute.resource_tracker [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:05:06 compute-0 nova_compute[251992]: 2025-12-06 08:05:06.968 251996 DEBUG oslo_concurrency.processutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:06.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:07 compute-0 ceph-mon[74339]: pgmap v3371: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 94 op/s
Dec 06 08:05:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:05:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4284473718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:07 compute-0 nova_compute[251992]: 2025-12-06 08:05:07.407 251996 DEBUG oslo_concurrency.processutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:07.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Dec 06 08:05:07 compute-0 nova_compute[251992]: 2025-12-06 08:05:07.582 251996 WARNING nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:05:07 compute-0 nova_compute[251992]: 2025-12-06 08:05:07.585 251996 DEBUG nova.compute.resource_tracker [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4113MB free_disk=20.921833038330078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:05:07 compute-0 nova_compute[251992]: 2025-12-06 08:05:07.585 251996 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:07 compute-0 nova_compute[251992]: 2025-12-06 08:05:07.586 251996 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:07 compute-0 nova_compute[251992]: 2025-12-06 08:05:07.869 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4284473718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:08 compute-0 nova_compute[251992]: 2025-12-06 08:05:08.317 251996 DEBUG nova.compute.resource_tracker [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Migration for instance 2ce29812-b64c-4801-a37b-68c55429b70c refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 06 08:05:08 compute-0 nova_compute[251992]: 2025-12-06 08:05:08.690 251996 DEBUG nova.compute.resource_tracker [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Dec 06 08:05:08 compute-0 nova_compute[251992]: 2025-12-06 08:05:08.727 251996 DEBUG nova.compute.resource_tracker [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Migration 06675f83-919b-4ae3-a34e-52208b739c5e is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 08:05:08 compute-0 nova_compute[251992]: 2025-12-06 08:05:08.729 251996 DEBUG nova.compute.resource_tracker [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:05:08 compute-0 nova_compute[251992]: 2025-12-06 08:05:08.729 251996 DEBUG nova.compute.resource_tracker [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:05:08 compute-0 nova_compute[251992]: 2025-12-06 08:05:08.807 251996 DEBUG oslo_concurrency.processutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:08.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:05:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2107243988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:05:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:05:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2107243988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:05:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:05:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1560372794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:09 compute-0 ceph-mon[74339]: pgmap v3372: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Dec 06 08:05:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2107243988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:05:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2107243988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:05:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1560372794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:09 compute-0 nova_compute[251992]: 2025-12-06 08:05:09.268 251996 DEBUG oslo_concurrency.processutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:09 compute-0 nova_compute[251992]: 2025-12-06 08:05:09.274 251996 DEBUG nova.compute.provider_tree [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:05:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:09.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:09 compute-0 nova_compute[251992]: 2025-12-06 08:05:09.527 251996 DEBUG nova.scheduler.client.report [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:05:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Dec 06 08:05:09 compute-0 nova_compute[251992]: 2025-12-06 08:05:09.863 251996 DEBUG nova.compute.resource_tracker [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:05:09 compute-0 nova_compute[251992]: 2025-12-06 08:05:09.863 251996 DEBUG oslo_concurrency.lockutils [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:09 compute-0 nova_compute[251992]: 2025-12-06 08:05:09.869 251996 INFO nova.compute.manager [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Migrating instance to compute-2.ctlplane.example.com finished successfully.
Dec 06 08:05:09 compute-0 sudo[383334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:09 compute-0 sudo[383334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:09 compute-0 sudo[383334]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:09 compute-0 sudo[383359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:09 compute-0 sudo[383359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:09 compute-0 sudo[383359]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:10 compute-0 nova_compute[251992]: 2025-12-06 08:05:10.181 251996 INFO nova.scheduler.client.report [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] Deleted allocation for migration 06675f83-919b-4ae3-a34e-52208b739c5e
Dec 06 08:05:10 compute-0 nova_compute[251992]: 2025-12-06 08:05:10.182 251996 DEBUG nova.virt.libvirt.driver [None req-214681bc-aa39-4a6d-a65c-40bda5711e05 1bdbfd9a9c034d4baf0368c23697a002 0280d2f586294ccf97547f8bc41590f8 - - default default] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Dec 06 08:05:10 compute-0 nova_compute[251992]: 2025-12-06 08:05:10.561 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:10.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:11 compute-0 ceph-mon[74339]: pgmap v3373: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Dec 06 08:05:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:11.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 80 op/s
Dec 06 08:05:12 compute-0 nova_compute[251992]: 2025-12-06 08:05:12.871 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:13.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:05:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:05:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:05:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:05:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:05:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:05:13 compute-0 nova_compute[251992]: 2025-12-06 08:05:13.204 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008298.2030845, 2ce29812-b64c-4801-a37b-68c55429b70c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:05:13 compute-0 nova_compute[251992]: 2025-12-06 08:05:13.205 251996 INFO nova.compute.manager [-] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] VM Stopped (Lifecycle Event)
Dec 06 08:05:13 compute-0 ceph-mon[74339]: pgmap v3374: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 80 op/s
Dec 06 08:05:13 compute-0 nova_compute[251992]: 2025-12-06 08:05:13.246 251996 DEBUG nova.compute.manager [None req-bc1330e7-5481-4745-a7f1-034123a33558 - - - - - -] [instance: 2ce29812-b64c-4801-a37b-68c55429b70c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:05:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:13.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 46 op/s
Dec 06 08:05:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:15.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:15 compute-0 ceph-mon[74339]: pgmap v3375: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 46 op/s
Dec 06 08:05:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:15.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:15 compute-0 nova_compute[251992]: 2025-12-06 08:05:15.565 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 228 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 114 op/s
Dec 06 08:05:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:17.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:17 compute-0 ceph-mon[74339]: pgmap v3376: 305 pgs: 305 active+clean; 228 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 114 op/s
Dec 06 08:05:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:17.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:17 compute-0 sudo[383388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:17 compute-0 sudo[383388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:17 compute-0 sudo[383388]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 06 08:05:17 compute-0 sudo[383413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:05:17 compute-0 sudo[383413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:17 compute-0 sudo[383413]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:17 compute-0 sudo[383438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:17 compute-0 sudo[383438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:17 compute-0 sudo[383438]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:17 compute-0 sudo[383463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:05:17 compute-0 sudo[383463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:17 compute-0 nova_compute[251992]: 2025-12-06 08:05:17.873 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:18 compute-0 sudo[383463]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:05:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:05:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:05:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:05:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:05:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:05:18 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2c90a888-b036-490f-8485-474955ec83d4 does not exist
Dec 06 08:05:18 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev aa013d66-bad6-4a33-8944-d24a2e504e32 does not exist
Dec 06 08:05:18 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 08b0e9c2-e79b-4ffa-8904-689e84aae345 does not exist
Dec 06 08:05:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:05:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:05:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:05:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:05:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:05:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:05:18 compute-0 sudo[383517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:18 compute-0 sudo[383517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:18 compute-0 sudo[383517]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:18 compute-0 sudo[383542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:05:18 compute-0 sudo[383542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:18 compute-0 sudo[383542]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:18 compute-0 sudo[383567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:18 compute-0 sudo[383567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:18 compute-0 sudo[383567]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:05:18
Dec 06 08:05:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:05:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:05:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'backups', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'images']
Dec 06 08:05:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:05:18 compute-0 sudo[383592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:05:18 compute-0 sudo[383592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:18 compute-0 podman[383657]: 2025-12-06 08:05:18.934366414 +0000 UTC m=+0.043781693 container create d126858eaccb84fc009ede8d0312ecb5c3178f5b2061d43a09b9b654f5b591ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:05:18 compute-0 systemd[1]: Started libpod-conmon-d126858eaccb84fc009ede8d0312ecb5c3178f5b2061d43a09b9b654f5b591ab.scope.
Dec 06 08:05:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:05:19 compute-0 podman[383657]: 2025-12-06 08:05:19.008517994 +0000 UTC m=+0.117933303 container init d126858eaccb84fc009ede8d0312ecb5c3178f5b2061d43a09b9b654f5b591ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:05:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:19 compute-0 podman[383657]: 2025-12-06 08:05:18.917056686 +0000 UTC m=+0.026471985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:05:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:19.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:19 compute-0 podman[383657]: 2025-12-06 08:05:19.015791681 +0000 UTC m=+0.125206960 container start d126858eaccb84fc009ede8d0312ecb5c3178f5b2061d43a09b9b654f5b591ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 08:05:19 compute-0 podman[383657]: 2025-12-06 08:05:19.020995971 +0000 UTC m=+0.130411250 container attach d126858eaccb84fc009ede8d0312ecb5c3178f5b2061d43a09b9b654f5b591ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 08:05:19 compute-0 suspicious_lewin[383673]: 167 167
Dec 06 08:05:19 compute-0 systemd[1]: libpod-d126858eaccb84fc009ede8d0312ecb5c3178f5b2061d43a09b9b654f5b591ab.scope: Deactivated successfully.
Dec 06 08:05:19 compute-0 podman[383657]: 2025-12-06 08:05:19.024066294 +0000 UTC m=+0.133481573 container died d126858eaccb84fc009ede8d0312ecb5c3178f5b2061d43a09b9b654f5b591ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 08:05:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-e03f11769d13a9ca8fcc28c8da44f95930e446b57139de4f61e1f5029849d7bc-merged.mount: Deactivated successfully.
Dec 06 08:05:19 compute-0 podman[383657]: 2025-12-06 08:05:19.062032468 +0000 UTC m=+0.171447747 container remove d126858eaccb84fc009ede8d0312ecb5c3178f5b2061d43a09b9b654f5b591ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_lewin, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 08:05:19 compute-0 systemd[1]: libpod-conmon-d126858eaccb84fc009ede8d0312ecb5c3178f5b2061d43a09b9b654f5b591ab.scope: Deactivated successfully.
Dec 06 08:05:19 compute-0 podman[383696]: 2025-12-06 08:05:19.212268162 +0000 UTC m=+0.038022967 container create a679812610c0ec656a3b264fd79987ce020b9f117886b84ace81c2380af015d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_feynman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 08:05:19 compute-0 systemd[1]: Started libpod-conmon-a679812610c0ec656a3b264fd79987ce020b9f117886b84ace81c2380af015d1.scope.
Dec 06 08:05:19 compute-0 nova_compute[251992]: 2025-12-06 08:05:19.263 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:19 compute-0 podman[383696]: 2025-12-06 08:05:19.195590392 +0000 UTC m=+0.021345207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:05:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c535ca3fd0aac79018d9e1fd1d08ff255a733e9b6c8a6975ac6eb13ebce3f8f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c535ca3fd0aac79018d9e1fd1d08ff255a733e9b6c8a6975ac6eb13ebce3f8f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c535ca3fd0aac79018d9e1fd1d08ff255a733e9b6c8a6975ac6eb13ebce3f8f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c535ca3fd0aac79018d9e1fd1d08ff255a733e9b6c8a6975ac6eb13ebce3f8f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c535ca3fd0aac79018d9e1fd1d08ff255a733e9b6c8a6975ac6eb13ebce3f8f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:19 compute-0 podman[383696]: 2025-12-06 08:05:19.313602447 +0000 UTC m=+0.139357262 container init a679812610c0ec656a3b264fd79987ce020b9f117886b84ace81c2380af015d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:05:19 compute-0 podman[383696]: 2025-12-06 08:05:19.320443501 +0000 UTC m=+0.146198306 container start a679812610c0ec656a3b264fd79987ce020b9f117886b84ace81c2380af015d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 08:05:19 compute-0 podman[383696]: 2025-12-06 08:05:19.324386948 +0000 UTC m=+0.150141753 container attach a679812610c0ec656a3b264fd79987ce020b9f117886b84ace81c2380af015d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_feynman, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:05:19 compute-0 nova_compute[251992]: 2025-12-06 08:05:19.388 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:19.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:19 compute-0 ceph-mon[74339]: pgmap v3377: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 06 08:05:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:05:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:05:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:05:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:05:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:05:19 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:05:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 06 08:05:20 compute-0 compassionate_feynman[383712]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:05:20 compute-0 compassionate_feynman[383712]: --> relative data size: 1.0
Dec 06 08:05:20 compute-0 compassionate_feynman[383712]: --> All data devices are unavailable
Dec 06 08:05:20 compute-0 systemd[1]: libpod-a679812610c0ec656a3b264fd79987ce020b9f117886b84ace81c2380af015d1.scope: Deactivated successfully.
Dec 06 08:05:20 compute-0 podman[383696]: 2025-12-06 08:05:20.247666551 +0000 UTC m=+1.073421356 container died a679812610c0ec656a3b264fd79987ce020b9f117886b84ace81c2380af015d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 08:05:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c535ca3fd0aac79018d9e1fd1d08ff255a733e9b6c8a6975ac6eb13ebce3f8f9-merged.mount: Deactivated successfully.
Dec 06 08:05:20 compute-0 podman[383696]: 2025-12-06 08:05:20.306004055 +0000 UTC m=+1.131758860 container remove a679812610c0ec656a3b264fd79987ce020b9f117886b84ace81c2380af015d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_feynman, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 08:05:20 compute-0 systemd[1]: libpod-conmon-a679812610c0ec656a3b264fd79987ce020b9f117886b84ace81c2380af015d1.scope: Deactivated successfully.
Dec 06 08:05:20 compute-0 sudo[383592]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:20 compute-0 sudo[383742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:20 compute-0 sudo[383742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:20 compute-0 sudo[383742]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:20 compute-0 sudo[383767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:05:20 compute-0 sudo[383767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:20 compute-0 sudo[383767]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:20 compute-0 sudo[383792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:20 compute-0 sudo[383792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:20 compute-0 sudo[383792]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:20 compute-0 ceph-mon[74339]: pgmap v3378: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 06 08:05:20 compute-0 sudo[383817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:05:20 compute-0 nova_compute[251992]: 2025-12-06 08:05:20.566 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:20 compute-0 sudo[383817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:20 compute-0 podman[383881]: 2025-12-06 08:05:20.878363259 +0000 UTC m=+0.035580400 container create da8349ecde00d9dd50441c5398f4baca3d528c8697f60405c3853e79afd2ac77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:05:20 compute-0 systemd[1]: Started libpod-conmon-da8349ecde00d9dd50441c5398f4baca3d528c8697f60405c3853e79afd2ac77.scope.
Dec 06 08:05:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:05:20 compute-0 podman[383881]: 2025-12-06 08:05:20.950716722 +0000 UTC m=+0.107933863 container init da8349ecde00d9dd50441c5398f4baca3d528c8697f60405c3853e79afd2ac77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 06 08:05:20 compute-0 podman[383881]: 2025-12-06 08:05:20.9587964 +0000 UTC m=+0.116013541 container start da8349ecde00d9dd50441c5398f4baca3d528c8697f60405c3853e79afd2ac77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:05:20 compute-0 podman[383881]: 2025-12-06 08:05:20.86243656 +0000 UTC m=+0.019653721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:05:20 compute-0 hardcore_lamport[383898]: 167 167
Dec 06 08:05:20 compute-0 podman[383881]: 2025-12-06 08:05:20.964010881 +0000 UTC m=+0.121228042 container attach da8349ecde00d9dd50441c5398f4baca3d528c8697f60405c3853e79afd2ac77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 06 08:05:20 compute-0 systemd[1]: libpod-da8349ecde00d9dd50441c5398f4baca3d528c8697f60405c3853e79afd2ac77.scope: Deactivated successfully.
Dec 06 08:05:20 compute-0 podman[383881]: 2025-12-06 08:05:20.964788162 +0000 UTC m=+0.122005303 container died da8349ecde00d9dd50441c5398f4baca3d528c8697f60405c3853e79afd2ac77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 08:05:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a41616048994884501d0e1865d662a1459105771df145334a3fe67119ed26296-merged.mount: Deactivated successfully.
Dec 06 08:05:21 compute-0 podman[383881]: 2025-12-06 08:05:21.005316616 +0000 UTC m=+0.162533757 container remove da8349ecde00d9dd50441c5398f4baca3d528c8697f60405c3853e79afd2ac77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:05:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:21.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:21 compute-0 systemd[1]: libpod-conmon-da8349ecde00d9dd50441c5398f4baca3d528c8697f60405c3853e79afd2ac77.scope: Deactivated successfully.
Dec 06 08:05:21 compute-0 podman[383922]: 2025-12-06 08:05:21.177948293 +0000 UTC m=+0.050727109 container create 9350814369d33f7b654739b5f14b4632b0af23f91cc8c48e7a2b2a59fcc86a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wescoff, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 08:05:21 compute-0 podman[383922]: 2025-12-06 08:05:21.147325467 +0000 UTC m=+0.020104303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:05:21 compute-0 systemd[1]: Started libpod-conmon-9350814369d33f7b654739b5f14b4632b0af23f91cc8c48e7a2b2a59fcc86a23.scope.
Dec 06 08:05:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7676e4ce8a8eb02130f3b8305096e974d8d20601adb5f60aababb2c778ede47e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7676e4ce8a8eb02130f3b8305096e974d8d20601adb5f60aababb2c778ede47e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7676e4ce8a8eb02130f3b8305096e974d8d20601adb5f60aababb2c778ede47e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7676e4ce8a8eb02130f3b8305096e974d8d20601adb5f60aababb2c778ede47e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:21 compute-0 podman[383922]: 2025-12-06 08:05:21.314524649 +0000 UTC m=+0.187303485 container init 9350814369d33f7b654739b5f14b4632b0af23f91cc8c48e7a2b2a59fcc86a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:05:21 compute-0 podman[383922]: 2025-12-06 08:05:21.320758067 +0000 UTC m=+0.193536883 container start 9350814369d33f7b654739b5f14b4632b0af23f91cc8c48e7a2b2a59fcc86a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:05:21 compute-0 podman[383922]: 2025-12-06 08:05:21.325332301 +0000 UTC m=+0.198111137 container attach 9350814369d33f7b654739b5f14b4632b0af23f91cc8c48e7a2b2a59fcc86a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wescoff, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:05:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:21.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]: {
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:     "0": [
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:         {
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             "devices": [
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "/dev/loop3"
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             ],
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             "lv_name": "ceph_lv0",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             "lv_size": "7511998464",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             "name": "ceph_lv0",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             "tags": {
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "ceph.cluster_name": "ceph",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "ceph.crush_device_class": "",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "ceph.encrypted": "0",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "ceph.osd_id": "0",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "ceph.type": "block",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:                 "ceph.vdo": "0"
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             },
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             "type": "block",
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:             "vg_name": "ceph_vg0"
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:         }
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]:     ]
Dec 06 08:05:22 compute-0 vigorous_wescoff[383939]: }
Dec 06 08:05:22 compute-0 systemd[1]: libpod-9350814369d33f7b654739b5f14b4632b0af23f91cc8c48e7a2b2a59fcc86a23.scope: Deactivated successfully.
Dec 06 08:05:22 compute-0 podman[383922]: 2025-12-06 08:05:22.14281454 +0000 UTC m=+1.015593366 container died 9350814369d33f7b654739b5f14b4632b0af23f91cc8c48e7a2b2a59fcc86a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:05:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7676e4ce8a8eb02130f3b8305096e974d8d20601adb5f60aababb2c778ede47e-merged.mount: Deactivated successfully.
Dec 06 08:05:22 compute-0 podman[383922]: 2025-12-06 08:05:22.19436722 +0000 UTC m=+1.067146036 container remove 9350814369d33f7b654739b5f14b4632b0af23f91cc8c48e7a2b2a59fcc86a23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 08:05:22 compute-0 systemd[1]: libpod-conmon-9350814369d33f7b654739b5f14b4632b0af23f91cc8c48e7a2b2a59fcc86a23.scope: Deactivated successfully.
Dec 06 08:05:22 compute-0 sudo[383817]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:22 compute-0 sudo[383960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:22 compute-0 sudo[383960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:22 compute-0 sudo[383960]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:22 compute-0 sudo[383985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:05:22 compute-0 sudo[383985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:22 compute-0 sudo[383985]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:22 compute-0 sudo[384010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:22 compute-0 sudo[384010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:22 compute-0 sudo[384010]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:22 compute-0 sudo[384035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:05:22 compute-0 sudo[384035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:05:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 15K writes, 68K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1571 writes, 6993 keys, 1571 commit groups, 1.0 writes per commit group, ingest: 10.60 MB, 0.02 MB/s
                                           Interval WAL: 1571 writes, 1571 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     39.9      2.27              0.30        46    0.049       0      0       0.0       0.0
                                             L6      1/0   11.90 MB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   5.2     99.9     85.5      5.47              1.48        45    0.122    345K    24K       0.0       0.0
                                            Sum      1/0   11.90 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.2     70.6     72.1      7.74              1.79        91    0.085    345K    24K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4    134.3    136.5      0.61              0.28        12    0.050     62K   3063       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   0.0     99.9     85.5      5.47              1.48        45    0.122    345K    24K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     39.9      2.27              0.30        45    0.050       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.088, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.55 GB write, 0.09 MB/s write, 0.53 GB read, 0.09 MB/s read, 7.7 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 59.88 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000477 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3445,57.42 MB,18.8882%) FilterBlock(92,949.23 KB,0.30493%) IndexBlock(92,1.54 MB,0.505859%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 08:05:22 compute-0 ceph-mon[74339]: pgmap v3379: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Dec 06 08:05:22 compute-0 podman[384096]: 2025-12-06 08:05:22.767067385 +0000 UTC m=+0.038739627 container create ea383a2251c027a45d0a4dffaf13c0630dee780a60e6778d8d50269c737aaa83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:05:22 compute-0 systemd[1]: Started libpod-conmon-ea383a2251c027a45d0a4dffaf13c0630dee780a60e6778d8d50269c737aaa83.scope.
Dec 06 08:05:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:05:22 compute-0 podman[384096]: 2025-12-06 08:05:22.748991736 +0000 UTC m=+0.020663998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:05:22 compute-0 nova_compute[251992]: 2025-12-06 08:05:22.876 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:22 compute-0 podman[384096]: 2025-12-06 08:05:22.883978319 +0000 UTC m=+0.155650571 container init ea383a2251c027a45d0a4dffaf13c0630dee780a60e6778d8d50269c737aaa83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tu, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 08:05:22 compute-0 podman[384096]: 2025-12-06 08:05:22.891669746 +0000 UTC m=+0.163341978 container start ea383a2251c027a45d0a4dffaf13c0630dee780a60e6778d8d50269c737aaa83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tu, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:05:22 compute-0 hardcore_tu[384112]: 167 167
Dec 06 08:05:22 compute-0 systemd[1]: libpod-ea383a2251c027a45d0a4dffaf13c0630dee780a60e6778d8d50269c737aaa83.scope: Deactivated successfully.
Dec 06 08:05:22 compute-0 podman[384096]: 2025-12-06 08:05:22.900353861 +0000 UTC m=+0.172026123 container attach ea383a2251c027a45d0a4dffaf13c0630dee780a60e6778d8d50269c737aaa83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:05:22 compute-0 podman[384096]: 2025-12-06 08:05:22.900786022 +0000 UTC m=+0.172458264 container died ea383a2251c027a45d0a4dffaf13c0630dee780a60e6778d8d50269c737aaa83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:05:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0bab29dce2f8c37d68f9b16860c6d5d1b9165b7948e64edddfa198a9c6af82e-merged.mount: Deactivated successfully.
Dec 06 08:05:22 compute-0 podman[384096]: 2025-12-06 08:05:22.990936025 +0000 UTC m=+0.262608267 container remove ea383a2251c027a45d0a4dffaf13c0630dee780a60e6778d8d50269c737aaa83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_tu, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:05:23 compute-0 systemd[1]: libpod-conmon-ea383a2251c027a45d0a4dffaf13c0630dee780a60e6778d8d50269c737aaa83.scope: Deactivated successfully.
Dec 06 08:05:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:23.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:23 compute-0 podman[384136]: 2025-12-06 08:05:23.154777656 +0000 UTC m=+0.053427002 container create af42f6ae4ba1e5e34e0188f516bd1ace911e7e2e76c55e3ef108763d404f1370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:05:23 compute-0 systemd[1]: Started libpod-conmon-af42f6ae4ba1e5e34e0188f516bd1ace911e7e2e76c55e3ef108763d404f1370.scope.
Dec 06 08:05:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1462f9cc93d3a82b89c1e85a246941ec34a3b7e271bf61e8e4b25c638d7d6b4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1462f9cc93d3a82b89c1e85a246941ec34a3b7e271bf61e8e4b25c638d7d6b4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1462f9cc93d3a82b89c1e85a246941ec34a3b7e271bf61e8e4b25c638d7d6b4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1462f9cc93d3a82b89c1e85a246941ec34a3b7e271bf61e8e4b25c638d7d6b4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:23 compute-0 podman[384136]: 2025-12-06 08:05:23.121641682 +0000 UTC m=+0.020291048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:05:23 compute-0 podman[384136]: 2025-12-06 08:05:23.235897855 +0000 UTC m=+0.134547201 container init af42f6ae4ba1e5e34e0188f516bd1ace911e7e2e76c55e3ef108763d404f1370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 08:05:23 compute-0 podman[384136]: 2025-12-06 08:05:23.244488077 +0000 UTC m=+0.143137423 container start af42f6ae4ba1e5e34e0188f516bd1ace911e7e2e76c55e3ef108763d404f1370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shockley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:05:23 compute-0 podman[384136]: 2025-12-06 08:05:23.294492876 +0000 UTC m=+0.193142222 container attach af42f6ae4ba1e5e34e0188f516bd1ace911e7e2e76c55e3ef108763d404f1370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shockley, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:05:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:23.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 06 08:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:05:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:05:24 compute-0 hungry_shockley[384152]: {
Dec 06 08:05:24 compute-0 hungry_shockley[384152]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:05:24 compute-0 hungry_shockley[384152]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:05:24 compute-0 hungry_shockley[384152]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:05:24 compute-0 hungry_shockley[384152]:         "osd_id": 0,
Dec 06 08:05:24 compute-0 hungry_shockley[384152]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:05:24 compute-0 hungry_shockley[384152]:         "type": "bluestore"
Dec 06 08:05:24 compute-0 hungry_shockley[384152]:     }
Dec 06 08:05:24 compute-0 hungry_shockley[384152]: }
Dec 06 08:05:24 compute-0 systemd[1]: libpod-af42f6ae4ba1e5e34e0188f516bd1ace911e7e2e76c55e3ef108763d404f1370.scope: Deactivated successfully.
Dec 06 08:05:24 compute-0 podman[384136]: 2025-12-06 08:05:24.11328883 +0000 UTC m=+1.011938196 container died af42f6ae4ba1e5e34e0188f516bd1ace911e7e2e76c55e3ef108763d404f1370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 08:05:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1462f9cc93d3a82b89c1e85a246941ec34a3b7e271bf61e8e4b25c638d7d6b4d-merged.mount: Deactivated successfully.
Dec 06 08:05:24 compute-0 podman[384136]: 2025-12-06 08:05:24.197064291 +0000 UTC m=+1.095713637 container remove af42f6ae4ba1e5e34e0188f516bd1ace911e7e2e76c55e3ef108763d404f1370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shockley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:05:24 compute-0 systemd[1]: libpod-conmon-af42f6ae4ba1e5e34e0188f516bd1ace911e7e2e76c55e3ef108763d404f1370.scope: Deactivated successfully.
Dec 06 08:05:24 compute-0 sudo[384035]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:05:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:05:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:05:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:05:24 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a389485b-6115-4259-9080-9b93b5e1b7c7 does not exist
Dec 06 08:05:24 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev eab2da1d-60ad-407a-b8b3-17018423de40 does not exist
Dec 06 08:05:24 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2e5c1f33-b244-4ea0-8c4b-1b0db29aa003 does not exist
Dec 06 08:05:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:24 compute-0 sudo[384188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:24 compute-0 sudo[384188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:24 compute-0 sudo[384188]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:24 compute-0 sudo[384213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:05:24 compute-0 sudo[384213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:24 compute-0 sudo[384213]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:25.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:25 compute-0 ceph-mon[74339]: pgmap v3380: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 06 08:05:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:05:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:05:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:25.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:25 compute-0 nova_compute[251992]: 2025-12-06 08:05:25.570 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021692301205099573 of space, bias 1.0, pg target 0.6507690361529872 quantized to 32 (current 32)
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:05:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:05:26 compute-0 ceph-mon[74339]: pgmap v3381: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Dec 06 08:05:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:27.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:05:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:05:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:05:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:05:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:05:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:27.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 123 KiB/s rd, 257 KiB/s wr, 26 op/s
Dec 06 08:05:27 compute-0 nova_compute[251992]: 2025-12-06 08:05:27.876 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:28 compute-0 podman[384240]: 2025-12-06 08:05:28.432144469 +0000 UTC m=+0.090171254 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 08:05:28 compute-0 ceph-mon[74339]: pgmap v3382: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 123 KiB/s rd, 257 KiB/s wr, 26 op/s
Dec 06 08:05:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:29.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:29.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 13 KiB/s wr, 1 op/s
Dec 06 08:05:30 compute-0 sudo[384270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:30 compute-0 sudo[384270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:30 compute-0 sudo[384270]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:30 compute-0 sudo[384295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:30 compute-0 sudo[384295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:30 compute-0 sudo[384295]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:30 compute-0 nova_compute[251992]: 2025-12-06 08:05:30.573 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:31.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:31 compute-0 ceph-mon[74339]: pgmap v3383: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 13 KiB/s wr, 1 op/s
Dec 06 08:05:31 compute-0 nova_compute[251992]: 2025-12-06 08:05:31.353 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:31 compute-0 nova_compute[251992]: 2025-12-06 08:05:31.354 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:31 compute-0 nova_compute[251992]: 2025-12-06 08:05:31.408 251996 DEBUG nova.compute.manager [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:05:31 compute-0 nova_compute[251992]: 2025-12-06 08:05:31.483 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:31 compute-0 nova_compute[251992]: 2025-12-06 08:05:31.483 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:31 compute-0 nova_compute[251992]: 2025-12-06 08:05:31.493 251996 DEBUG nova.virt.hardware [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:05:31 compute-0 nova_compute[251992]: 2025-12-06 08:05:31.494 251996 INFO nova.compute.claims [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:05:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:31.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 06 08:05:31 compute-0 nova_compute[251992]: 2025-12-06 08:05:31.589 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:05:32 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2427222586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.022 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.030 251996 DEBUG nova.compute.provider_tree [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.046 251996 DEBUG nova.scheduler.client.report [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.064 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.065 251996 DEBUG nova.compute.manager [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.121 251996 DEBUG nova.compute.manager [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.121 251996 DEBUG nova.network.neutron [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.138 251996 INFO nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.156 251996 DEBUG nova.compute.manager [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:05:32 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2427222586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.238 251996 DEBUG nova.compute.manager [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.240 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.240 251996 INFO nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Creating image(s)
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.271 251996 DEBUG nova.storage.rbd_utils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:32 compute-0 sshd-session[384320]: Connection closed by authenticating user root 47.237.163.130 port 41542 [preauth]
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.301 251996 DEBUG nova.storage.rbd_utils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.332 251996 DEBUG nova.storage.rbd_utils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.335 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.363 251996 DEBUG nova.policy [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.403 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.404 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.405 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.405 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.434 251996 DEBUG nova.storage.rbd_utils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.438 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.742 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.819 251996 DEBUG nova.storage.rbd_utils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image 388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.878 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:32 compute-0 nova_compute[251992]: 2025-12-06 08:05:32.942 251996 DEBUG nova.objects.instance [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid 388e62ad-25a3-4c35-9824-d2a225ce2f4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:05:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:33.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:33 compute-0 nova_compute[251992]: 2025-12-06 08:05:33.124 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:05:33 compute-0 nova_compute[251992]: 2025-12-06 08:05:33.125 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Ensure instance console log exists: /var/lib/nova/instances/388e62ad-25a3-4c35-9824-d2a225ce2f4e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:05:33 compute-0 nova_compute[251992]: 2025-12-06 08:05:33.125 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:33 compute-0 nova_compute[251992]: 2025-12-06 08:05:33.126 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:33 compute-0 nova_compute[251992]: 2025-12-06 08:05:33.126 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:33 compute-0 ceph-mon[74339]: pgmap v3384: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 06 08:05:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4130761412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:33.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 06 08:05:34 compute-0 nova_compute[251992]: 2025-12-06 08:05:34.195 251996 DEBUG nova.network.neutron [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Successfully created port: ee8fd6ad-ff43-405a-a189-212b9e919084 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:05:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/310815550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:34 compute-0 podman[384512]: 2025-12-06 08:05:34.391145077 +0000 UTC m=+0.051504290 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 08:05:34 compute-0 podman[384513]: 2025-12-06 08:05:34.423543532 +0000 UTC m=+0.084000348 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 08:05:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:35.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:35 compute-0 ceph-mon[74339]: pgmap v3385: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec 06 08:05:35 compute-0 nova_compute[251992]: 2025-12-06 08:05:35.250 251996 DEBUG nova.network.neutron [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Successfully updated port: ee8fd6ad-ff43-405a-a189-212b9e919084 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:05:35 compute-0 nova_compute[251992]: 2025-12-06 08:05:35.268 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-388e62ad-25a3-4c35-9824-d2a225ce2f4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:05:35 compute-0 nova_compute[251992]: 2025-12-06 08:05:35.268 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-388e62ad-25a3-4c35-9824-d2a225ce2f4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:05:35 compute-0 nova_compute[251992]: 2025-12-06 08:05:35.268 251996 DEBUG nova.network.neutron [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:05:35 compute-0 nova_compute[251992]: 2025-12-06 08:05:35.352 251996 DEBUG nova.compute.manager [req-2d230c63-cd49-46b0-a24e-a718ec112c62 req-d2bbc1b4-db8c-4b86-ba62-8738826c56dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Received event network-changed-ee8fd6ad-ff43-405a-a189-212b9e919084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:35 compute-0 nova_compute[251992]: 2025-12-06 08:05:35.352 251996 DEBUG nova.compute.manager [req-2d230c63-cd49-46b0-a24e-a718ec112c62 req-d2bbc1b4-db8c-4b86-ba62-8738826c56dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Refreshing instance network info cache due to event network-changed-ee8fd6ad-ff43-405a-a189-212b9e919084. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:05:35 compute-0 nova_compute[251992]: 2025-12-06 08:05:35.352 251996 DEBUG oslo_concurrency.lockutils [req-2d230c63-cd49-46b0-a24e-a718ec112c62 req-d2bbc1b4-db8c-4b86-ba62-8738826c56dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-388e62ad-25a3-4c35-9824-d2a225ce2f4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:05:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:35.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:35 compute-0 nova_compute[251992]: 2025-12-06 08:05:35.577 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 224 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 733 KiB/s wr, 16 op/s
Dec 06 08:05:35 compute-0 nova_compute[251992]: 2025-12-06 08:05:35.642 251996 DEBUG nova.network.neutron [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:05:36 compute-0 nova_compute[251992]: 2025-12-06 08:05:36.972 251996 DEBUG nova.network.neutron [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Updating instance_info_cache with network_info: [{"id": "ee8fd6ad-ff43-405a-a189-212b9e919084", "address": "fa:16:3e:f2:4c:52", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee8fd6ad-ff", "ovs_interfaceid": "ee8fd6ad-ff43-405a-a189-212b9e919084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.000 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-388e62ad-25a3-4c35-9824-d2a225ce2f4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.000 251996 DEBUG nova.compute.manager [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Instance network_info: |[{"id": "ee8fd6ad-ff43-405a-a189-212b9e919084", "address": "fa:16:3e:f2:4c:52", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee8fd6ad-ff", "ovs_interfaceid": "ee8fd6ad-ff43-405a-a189-212b9e919084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.001 251996 DEBUG oslo_concurrency.lockutils [req-2d230c63-cd49-46b0-a24e-a718ec112c62 req-d2bbc1b4-db8c-4b86-ba62-8738826c56dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-388e62ad-25a3-4c35-9824-d2a225ce2f4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.001 251996 DEBUG nova.network.neutron [req-2d230c63-cd49-46b0-a24e-a718ec112c62 req-d2bbc1b4-db8c-4b86-ba62-8738826c56dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Refreshing network info cache for port ee8fd6ad-ff43-405a-a189-212b9e919084 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.005 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Start _get_guest_xml network_info=[{"id": "ee8fd6ad-ff43-405a-a189-212b9e919084", "address": "fa:16:3e:f2:4c:52", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee8fd6ad-ff", "ovs_interfaceid": "ee8fd6ad-ff43-405a-a189-212b9e919084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.011 251996 WARNING nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.019 251996 DEBUG nova.virt.libvirt.host [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.020 251996 DEBUG nova.virt.libvirt.host [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.026 251996 DEBUG nova.virt.libvirt.host [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.027 251996 DEBUG nova.virt.libvirt.host [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.028 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.028 251996 DEBUG nova.virt.hardware [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.029 251996 DEBUG nova.virt.hardware [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.029 251996 DEBUG nova.virt.hardware [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.029 251996 DEBUG nova.virt.hardware [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.030 251996 DEBUG nova.virt.hardware [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.030 251996 DEBUG nova.virt.hardware [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.030 251996 DEBUG nova.virt.hardware [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.030 251996 DEBUG nova.virt.hardware [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.031 251996 DEBUG nova.virt.hardware [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.031 251996 DEBUG nova.virt.hardware [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.031 251996 DEBUG nova.virt.hardware [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.034 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:37.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:37 compute-0 ceph-mon[74339]: pgmap v3386: 305 pgs: 305 active+clean; 224 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 733 KiB/s wr, 16 op/s
Dec 06 08:05:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:05:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552322534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.456 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.486 251996 DEBUG nova.storage.rbd_utils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.491 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:37.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.682 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.683 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.879 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:05:37 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/107381487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.925 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.927 251996 DEBUG nova.virt.libvirt.vif [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-739748842',display_name='tempest-TestNetworkBasicOps-server-739748842',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-739748842',id=189,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOsnqpXoJup18anUJGZVcYpsgFpg7Y2ayu9Iu1GPQ5DicaaqFPAb0cP8S5poGYhObFQhTByxwkNMTuaxSUBroVnud6l5myyLrNErWs6f9UUTKdMMVghHCXtDMKVjkOpnA==',key_name='tempest-TestNetworkBasicOps-1772659890',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-at2ipv7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:05:32Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=388e62ad-25a3-4c35-9824-d2a225ce2f4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee8fd6ad-ff43-405a-a189-212b9e919084", "address": "fa:16:3e:f2:4c:52", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee8fd6ad-ff", "ovs_interfaceid": "ee8fd6ad-ff43-405a-a189-212b9e919084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.927 251996 DEBUG nova.network.os_vif_util [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "ee8fd6ad-ff43-405a-a189-212b9e919084", "address": "fa:16:3e:f2:4c:52", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee8fd6ad-ff", "ovs_interfaceid": "ee8fd6ad-ff43-405a-a189-212b9e919084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.928 251996 DEBUG nova.network.os_vif_util [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:4c:52,bridge_name='br-int',has_traffic_filtering=True,id=ee8fd6ad-ff43-405a-a189-212b9e919084,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee8fd6ad-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.930 251996 DEBUG nova.objects.instance [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid 388e62ad-25a3-4c35-9824-d2a225ce2f4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.948 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:05:37 compute-0 nova_compute[251992]:   <uuid>388e62ad-25a3-4c35-9824-d2a225ce2f4e</uuid>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   <name>instance-000000bd</name>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <nova:name>tempest-TestNetworkBasicOps-server-739748842</nova:name>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:05:37</nova:creationTime>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <nova:port uuid="ee8fd6ad-ff43-405a-a189-212b9e919084">
Dec 06 08:05:37 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <system>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <entry name="serial">388e62ad-25a3-4c35-9824-d2a225ce2f4e</entry>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <entry name="uuid">388e62ad-25a3-4c35-9824-d2a225ce2f4e</entry>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     </system>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   <os>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   </os>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   <features>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   </features>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk">
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       </source>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk.config">
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       </source>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:05:37 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:f2:4c:52"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <target dev="tapee8fd6ad-ff"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/388e62ad-25a3-4c35-9824-d2a225ce2f4e/console.log" append="off"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <video>
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     </video>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:05:37 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:05:37 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:05:37 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:05:37 compute-0 nova_compute[251992]: </domain>
Dec 06 08:05:37 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.954 251996 DEBUG nova.compute.manager [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Preparing to wait for external event network-vif-plugged-ee8fd6ad-ff43-405a-a189-212b9e919084 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.954 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.955 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.955 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.956 251996 DEBUG nova.virt.libvirt.vif [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-739748842',display_name='tempest-TestNetworkBasicOps-server-739748842',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-739748842',id=189,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOsnqpXoJup18anUJGZVcYpsgFpg7Y2ayu9Iu1GPQ5DicaaqFPAb0cP8S5poGYhObFQhTByxwkNMTuaxSUBroVnud6l5myyLrNErWs6f9UUTKdMMVghHCXtDMKVjkOpnA==',key_name='tempest-TestNetworkBasicOps-1772659890',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-at2ipv7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:05:32Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=388e62ad-25a3-4c35-9824-d2a225ce2f4e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee8fd6ad-ff43-405a-a189-212b9e919084", "address": "fa:16:3e:f2:4c:52", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee8fd6ad-ff", "ovs_interfaceid": "ee8fd6ad-ff43-405a-a189-212b9e919084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.956 251996 DEBUG nova.network.os_vif_util [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "ee8fd6ad-ff43-405a-a189-212b9e919084", "address": "fa:16:3e:f2:4c:52", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee8fd6ad-ff", "ovs_interfaceid": "ee8fd6ad-ff43-405a-a189-212b9e919084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.957 251996 DEBUG nova.network.os_vif_util [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:4c:52,bridge_name='br-int',has_traffic_filtering=True,id=ee8fd6ad-ff43-405a-a189-212b9e919084,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee8fd6ad-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.957 251996 DEBUG os_vif [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:4c:52,bridge_name='br-int',has_traffic_filtering=True,id=ee8fd6ad-ff43-405a-a189-212b9e919084,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee8fd6ad-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.958 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.958 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.959 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.962 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.963 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee8fd6ad-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.963 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee8fd6ad-ff, col_values=(('external_ids', {'iface-id': 'ee8fd6ad-ff43-405a-a189-212b9e919084', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:4c:52', 'vm-uuid': '388e62ad-25a3-4c35-9824-d2a225ce2f4e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.965 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:37 compute-0 NetworkManager[48965]: <info>  [1765008337.9662] manager: (tapee8fd6ad-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.967 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.970 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:37 compute-0 nova_compute[251992]: 2025-12-06 08:05:37.971 251996 INFO os_vif [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:4c:52,bridge_name='br-int',has_traffic_filtering=True,id=ee8fd6ad-ff43-405a-a189-212b9e919084,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee8fd6ad-ff')
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.016 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.017 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.017 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:f2:4c:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.018 251996 INFO nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Using config drive
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.040 251996 DEBUG nova.storage.rbd_utils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:05:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3964016892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.114 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.203 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.203 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.339 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.340 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4100MB free_disk=20.934524536132812GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.341 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.341 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.392 251996 DEBUG nova.network.neutron [req-2d230c63-cd49-46b0-a24e-a718ec112c62 req-d2bbc1b4-db8c-4b86-ba62-8738826c56dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Updated VIF entry in instance network info cache for port ee8fd6ad-ff43-405a-a189-212b9e919084. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.393 251996 DEBUG nova.network.neutron [req-2d230c63-cd49-46b0-a24e-a718ec112c62 req-d2bbc1b4-db8c-4b86-ba62-8738826c56dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Updating instance_info_cache with network_info: [{"id": "ee8fd6ad-ff43-405a-a189-212b9e919084", "address": "fa:16:3e:f2:4c:52", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee8fd6ad-ff", "ovs_interfaceid": "ee8fd6ad-ff43-405a-a189-212b9e919084", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.411 251996 DEBUG oslo_concurrency.lockutils [req-2d230c63-cd49-46b0-a24e-a718ec112c62 req-d2bbc1b4-db8c-4b86-ba62-8738826c56dc 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-388e62ad-25a3-4c35-9824-d2a225ce2f4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.458 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 388e62ad-25a3-4c35-9824-d2a225ce2f4e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.458 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.458 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.479 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.497 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.497 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:05:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3552322534' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:05:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/107381487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:05:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3336857505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3964016892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.513 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.546 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.572 251996 INFO nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Creating config drive at /var/lib/nova/instances/388e62ad-25a3-4c35-9824-d2a225ce2f4e/disk.config
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.576 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/388e62ad-25a3-4c35-9824-d2a225ce2f4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp222f2c68 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.608 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.716 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/388e62ad-25a3-4c35-9824-d2a225ce2f4e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp222f2c68" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.914 251996 DEBUG nova.storage.rbd_utils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:05:38 compute-0 nova_compute[251992]: 2025-12-06 08:05:38.917 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/388e62ad-25a3-4c35-9824-d2a225ce2f4e/disk.config 388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:05:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:05:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3101918147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:39.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.043 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.048 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.068 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.077 251996 DEBUG oslo_concurrency.processutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/388e62ad-25a3-4c35-9824-d2a225ce2f4e/disk.config 388e62ad-25a3-4c35-9824-d2a225ce2f4e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.077 251996 INFO nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Deleting local config drive /var/lib/nova/instances/388e62ad-25a3-4c35-9824-d2a225ce2f4e/disk.config because it was imported into RBD.
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.096 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.097 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:39 compute-0 kernel: tapee8fd6ad-ff: entered promiscuous mode
Dec 06 08:05:39 compute-0 NetworkManager[48965]: <info>  [1765008339.1299] manager: (tapee8fd6ad-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/333)
Dec 06 08:05:39 compute-0 ovn_controller[147168]: 2025-12-06T08:05:39Z|00721|binding|INFO|Claiming lport ee8fd6ad-ff43-405a-a189-212b9e919084 for this chassis.
Dec 06 08:05:39 compute-0 ovn_controller[147168]: 2025-12-06T08:05:39Z|00722|binding|INFO|ee8fd6ad-ff43-405a-a189-212b9e919084: Claiming fa:16:3e:f2:4c:52 10.100.0.8
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.135 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.147 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:4c:52 10.100.0.8'], port_security=['fa:16:3e:f2:4c:52 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '388e62ad-25a3-4c35-9824-d2a225ce2f4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '53428806-158a-4192-bf7e-ff703d59f7c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d84d8f55-4938-4502-958a-437fbc252df8, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=ee8fd6ad-ff43-405a-a189-212b9e919084) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.151 158118 INFO neutron.agent.ovn.metadata.agent [-] Port ee8fd6ad-ff43-405a-a189-212b9e919084 in datapath 9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 bound to our chassis
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.153 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9482cb7a-b1a1-4dca-80a7-c7782ee5fe71
Dec 06 08:05:39 compute-0 systemd-udevd[384733]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:05:39 compute-0 systemd-machined[212986]: New machine qemu-88-instance-000000bd.
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.166 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ee708e-4e2b-4647-8111-d35a01c8dce9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.167 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9482cb7a-b1 in ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:05:39 compute-0 NetworkManager[48965]: <info>  [1765008339.1690] device (tapee8fd6ad-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:05:39 compute-0 NetworkManager[48965]: <info>  [1765008339.1699] device (tapee8fd6ad-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.170 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9482cb7a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.170 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[13c049f7-cc77-4217-b1b6-ddf56ca0b58c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.171 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[81001dd8-b15e-48ec-bb29-1d0d9b8e6c34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.186 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[ece85a4e-618a-4cae-a848-53fa651c7248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.203 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:39 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-000000bd.
Dec 06 08:05:39 compute-0 ovn_controller[147168]: 2025-12-06T08:05:39Z|00723|binding|INFO|Setting lport ee8fd6ad-ff43-405a-a189-212b9e919084 ovn-installed in OVS
Dec 06 08:05:39 compute-0 ovn_controller[147168]: 2025-12-06T08:05:39Z|00724|binding|INFO|Setting lport ee8fd6ad-ff43-405a-a189-212b9e919084 up in Southbound
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.210 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.211 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b37c28-0714-47f6-869b-bab9ecec3c99]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.239 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5ad39f-a205-44d2-8b95-0b77699815c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.246 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[945746fc-d77a-489e-ba31-e8011f2fba80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 NetworkManager[48965]: <info>  [1765008339.2469] manager: (tap9482cb7a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/334)
Dec 06 08:05:39 compute-0 systemd-udevd[384737]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:05:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.288 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a24737e9-84e2-43dc-9a07-9a82453fd88f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.291 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[717a83b5-44cd-4668-97af-e7e627a3c9e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 NetworkManager[48965]: <info>  [1765008339.3214] device (tap9482cb7a-b0): carrier: link connected
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.330 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[60810a49-34b1-4912-931e-a1f48308bea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.346 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8d7a3cc8-0ca7-4f72-84a2-e221e191e204]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9482cb7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:30:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861190, 'reachable_time': 23894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384767, 'error': None, 'target': 'ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.359 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[89a39855-b9c7-4d8f-89c4-6bd2925078e2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febe:303c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 861190, 'tstamp': 861190}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384768, 'error': None, 'target': 'ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.375 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5a121b-7d96-460b-b5bb-825ee3ca3479]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9482cb7a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:30:3c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 220], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861190, 'reachable_time': 23894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384769, 'error': None, 'target': 'ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.399 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c17758dd-e636-4ab7-b3e8-87ffc17b0123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.443 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[183fe72f-373e-40b2-ac75-449feb60adfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.445 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9482cb7a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.445 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.446 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9482cb7a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:39 compute-0 NetworkManager[48965]: <info>  [1765008339.4487] manager: (tap9482cb7a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.448 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:39 compute-0 kernel: tap9482cb7a-b0: entered promiscuous mode
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.451 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.451 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9482cb7a-b0, col_values=(('external_ids', {'iface-id': '7a26b0d3-5a0f-46fe-987b-780e7076a0fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:39 compute-0 ovn_controller[147168]: 2025-12-06T08:05:39Z|00725|binding|INFO|Releasing lport 7a26b0d3-5a0f-46fe-987b-780e7076a0fa from this chassis (sb_readonly=0)
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.455 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.456 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9482cb7a-b1a1-4dca-80a7-c7782ee5fe71.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9482cb7a-b1a1-4dca-80a7-c7782ee5fe71.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.456 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c40f1373-4758-4067-bff5-b7429f2e74f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.457 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/9482cb7a-b1a1-4dca-80a7-c7782ee5fe71.pid.haproxy
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 9482cb7a-b1a1-4dca-80a7-c7782ee5fe71
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:05:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:39.458 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'env', 'PROCESS_TAG=haproxy-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9482cb7a-b1a1-4dca-80a7-c7782ee5fe71.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:05:39 compute-0 nova_compute[251992]: 2025-12-06 08:05:39.467 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:39.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:05:39 compute-0 ceph-mon[74339]: pgmap v3387: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:05:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3101918147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:39 compute-0 podman[384801]: 2025-12-06 08:05:39.802497575 +0000 UTC m=+0.027253156 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:05:39 compute-0 podman[384801]: 2025-12-06 08:05:39.94755488 +0000 UTC m=+0.172310441 container create 8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 08:05:39 compute-0 systemd[1]: Started libpod-conmon-8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d.scope.
Dec 06 08:05:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:05:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b2a6896dcd19746e41167c356ecc0e6f6ef9aa5df0b6faafddab2206c80b1a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:05:40 compute-0 podman[384801]: 2025-12-06 08:05:40.026316465 +0000 UTC m=+0.251072076 container init 8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:05:40 compute-0 podman[384801]: 2025-12-06 08:05:40.032828401 +0000 UTC m=+0.257583962 container start 8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 08:05:40 compute-0 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[384853]: [NOTICE]   (384861) : New worker (384864) forked
Dec 06 08:05:40 compute-0 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[384853]: [NOTICE]   (384861) : Loading success.
Dec 06 08:05:40 compute-0 nova_compute[251992]: 2025-12-06 08:05:40.071 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008340.0708776, 388e62ad-25a3-4c35-9824-d2a225ce2f4e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:05:40 compute-0 nova_compute[251992]: 2025-12-06 08:05:40.072 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] VM Started (Lifecycle Event)
Dec 06 08:05:40 compute-0 nova_compute[251992]: 2025-12-06 08:05:40.101 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:05:40 compute-0 nova_compute[251992]: 2025-12-06 08:05:40.105 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008340.0718477, 388e62ad-25a3-4c35-9824-d2a225ce2f4e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:05:40 compute-0 nova_compute[251992]: 2025-12-06 08:05:40.105 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] VM Paused (Lifecycle Event)
Dec 06 08:05:40 compute-0 nova_compute[251992]: 2025-12-06 08:05:40.127 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:05:40 compute-0 nova_compute[251992]: 2025-12-06 08:05:40.130 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:05:40 compute-0 nova_compute[251992]: 2025-12-06 08:05:40.213 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:05:40 compute-0 ceph-mon[74339]: pgmap v3388: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:05:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:40.730 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:05:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:40.732 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:05:40 compute-0 nova_compute[251992]: 2025-12-06 08:05:40.775 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:41.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.091 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.223 251996 DEBUG nova.compute.manager [req-b6593481-f49a-4558-ae6e-e27b88d433e5 req-f8ad8c48-c069-49c1-b921-99970b5cdd23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Received event network-vif-plugged-ee8fd6ad-ff43-405a-a189-212b9e919084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.224 251996 DEBUG oslo_concurrency.lockutils [req-b6593481-f49a-4558-ae6e-e27b88d433e5 req-f8ad8c48-c069-49c1-b921-99970b5cdd23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.224 251996 DEBUG oslo_concurrency.lockutils [req-b6593481-f49a-4558-ae6e-e27b88d433e5 req-f8ad8c48-c069-49c1-b921-99970b5cdd23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.224 251996 DEBUG oslo_concurrency.lockutils [req-b6593481-f49a-4558-ae6e-e27b88d433e5 req-f8ad8c48-c069-49c1-b921-99970b5cdd23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.225 251996 DEBUG nova.compute.manager [req-b6593481-f49a-4558-ae6e-e27b88d433e5 req-f8ad8c48-c069-49c1-b921-99970b5cdd23 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Processing event network-vif-plugged-ee8fd6ad-ff43-405a-a189-212b9e919084 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.225 251996 DEBUG nova.compute.manager [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.229 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008341.229183, 388e62ad-25a3-4c35-9824-d2a225ce2f4e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.229 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] VM Resumed (Lifecycle Event)
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.231 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.233 251996 INFO nova.virt.libvirt.driver [-] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Instance spawned successfully.
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.234 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.256 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.257 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.258 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.258 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.258 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.259 251996 DEBUG nova.virt.libvirt.driver [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.263 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.266 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.295 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.337 251996 INFO nova.compute.manager [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Took 9.10 seconds to spawn the instance on the hypervisor.
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.337 251996 DEBUG nova.compute.manager [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.472 251996 INFO nova.compute.manager [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Took 10.02 seconds to build instance.
Dec 06 08:05:41 compute-0 nova_compute[251992]: 2025-12-06 08:05:41.492 251996 DEBUG oslo_concurrency.lockutils [None req-686b90c0-ba69-442b-9934-653beaad6bec d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:41.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Dec 06 08:05:42 compute-0 nova_compute[251992]: 2025-12-06 08:05:42.889 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:42 compute-0 ceph-mon[74339]: pgmap v3389: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Dec 06 08:05:42 compute-0 nova_compute[251992]: 2025-12-06 08:05:42.965 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:43.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:05:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:05:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:05:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:05:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:05:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:05:43 compute-0 nova_compute[251992]: 2025-12-06 08:05:43.337 251996 DEBUG nova.compute.manager [req-0c8574e5-81c9-4ff6-a5e8-fde6c939ad10 req-8057e79a-4ad8-4615-a1cb-d0b36e657ee9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Received event network-vif-plugged-ee8fd6ad-ff43-405a-a189-212b9e919084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:43 compute-0 nova_compute[251992]: 2025-12-06 08:05:43.338 251996 DEBUG oslo_concurrency.lockutils [req-0c8574e5-81c9-4ff6-a5e8-fde6c939ad10 req-8057e79a-4ad8-4615-a1cb-d0b36e657ee9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:05:43 compute-0 nova_compute[251992]: 2025-12-06 08:05:43.338 251996 DEBUG oslo_concurrency.lockutils [req-0c8574e5-81c9-4ff6-a5e8-fde6c939ad10 req-8057e79a-4ad8-4615-a1cb-d0b36e657ee9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:05:43 compute-0 nova_compute[251992]: 2025-12-06 08:05:43.338 251996 DEBUG oslo_concurrency.lockutils [req-0c8574e5-81c9-4ff6-a5e8-fde6c939ad10 req-8057e79a-4ad8-4615-a1cb-d0b36e657ee9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:05:43 compute-0 nova_compute[251992]: 2025-12-06 08:05:43.338 251996 DEBUG nova.compute.manager [req-0c8574e5-81c9-4ff6-a5e8-fde6c939ad10 req-8057e79a-4ad8-4615-a1cb-d0b36e657ee9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] No waiting events found dispatching network-vif-plugged-ee8fd6ad-ff43-405a-a189-212b9e919084 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:05:43 compute-0 nova_compute[251992]: 2025-12-06 08:05:43.339 251996 WARNING nova.compute.manager [req-0c8574e5-81c9-4ff6-a5e8-fde6c939ad10 req-8057e79a-4ad8-4615-a1cb-d0b36e657ee9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Received unexpected event network-vif-plugged-ee8fd6ad-ff43-405a-a189-212b9e919084 for instance with vm_state active and task_state None.
Dec 06 08:05:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:43.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Dec 06 08:05:43 compute-0 nova_compute[251992]: 2025-12-06 08:05:43.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:45.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:45 compute-0 NetworkManager[48965]: <info>  [1765008345.3093] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Dec 06 08:05:45 compute-0 NetworkManager[48965]: <info>  [1765008345.3101] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Dec 06 08:05:45 compute-0 nova_compute[251992]: 2025-12-06 08:05:45.312 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:45 compute-0 ceph-mon[74339]: pgmap v3390: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Dec 06 08:05:45 compute-0 nova_compute[251992]: 2025-12-06 08:05:45.385 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:45 compute-0 ovn_controller[147168]: 2025-12-06T08:05:45Z|00726|binding|INFO|Releasing lport 7a26b0d3-5a0f-46fe-987b-780e7076a0fa from this chassis (sb_readonly=0)
Dec 06 08:05:45 compute-0 nova_compute[251992]: 2025-12-06 08:05:45.395 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:45.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Dec 06 08:05:45 compute-0 nova_compute[251992]: 2025-12-06 08:05:45.633 251996 DEBUG nova.compute.manager [req-5ef9e0d6-a8ab-4cf4-8695-e6a755eb240b req-305836f2-5717-4acf-a151-44fb70075b2a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Received event network-changed-ee8fd6ad-ff43-405a-a189-212b9e919084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:05:45 compute-0 nova_compute[251992]: 2025-12-06 08:05:45.635 251996 DEBUG nova.compute.manager [req-5ef9e0d6-a8ab-4cf4-8695-e6a755eb240b req-305836f2-5717-4acf-a151-44fb70075b2a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Refreshing instance network info cache due to event network-changed-ee8fd6ad-ff43-405a-a189-212b9e919084. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:05:45 compute-0 nova_compute[251992]: 2025-12-06 08:05:45.635 251996 DEBUG oslo_concurrency.lockutils [req-5ef9e0d6-a8ab-4cf4-8695-e6a755eb240b req-305836f2-5717-4acf-a151-44fb70075b2a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-388e62ad-25a3-4c35-9824-d2a225ce2f4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:05:45 compute-0 nova_compute[251992]: 2025-12-06 08:05:45.635 251996 DEBUG oslo_concurrency.lockutils [req-5ef9e0d6-a8ab-4cf4-8695-e6a755eb240b req-305836f2-5717-4acf-a151-44fb70075b2a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-388e62ad-25a3-4c35-9824-d2a225ce2f4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:05:45 compute-0 nova_compute[251992]: 2025-12-06 08:05:45.636 251996 DEBUG nova.network.neutron [req-5ef9e0d6-a8ab-4cf4-8695-e6a755eb240b req-305836f2-5717-4acf-a151-44fb70075b2a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Refreshing network info cache for port ee8fd6ad-ff43-405a-a189-212b9e919084 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:05:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/825215880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:05:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2786519022' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:05:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:47.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:47.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 113 op/s
Dec 06 08:05:47 compute-0 nova_compute[251992]: 2025-12-06 08:05:47.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:47 compute-0 nova_compute[251992]: 2025-12-06 08:05:47.660 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:05:47 compute-0 nova_compute[251992]: 2025-12-06 08:05:47.660 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:05:47 compute-0 nova_compute[251992]: 2025-12-06 08:05:47.687 251996 DEBUG nova.network.neutron [req-5ef9e0d6-a8ab-4cf4-8695-e6a755eb240b req-305836f2-5717-4acf-a151-44fb70075b2a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Updated VIF entry in instance network info cache for port ee8fd6ad-ff43-405a-a189-212b9e919084. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:05:47 compute-0 nova_compute[251992]: 2025-12-06 08:05:47.687 251996 DEBUG nova.network.neutron [req-5ef9e0d6-a8ab-4cf4-8695-e6a755eb240b req-305836f2-5717-4acf-a151-44fb70075b2a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Updating instance_info_cache with network_info: [{"id": "ee8fd6ad-ff43-405a-a189-212b9e919084", "address": "fa:16:3e:f2:4c:52", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee8fd6ad-ff", "ovs_interfaceid": "ee8fd6ad-ff43-405a-a189-212b9e919084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:05:47 compute-0 nova_compute[251992]: 2025-12-06 08:05:47.713 251996 DEBUG oslo_concurrency.lockutils [req-5ef9e0d6-a8ab-4cf4-8695-e6a755eb240b req-305836f2-5717-4acf-a151-44fb70075b2a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-388e62ad-25a3-4c35-9824-d2a225ce2f4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:05:47 compute-0 ceph-mon[74339]: pgmap v3391: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Dec 06 08:05:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/641287599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:47 compute-0 nova_compute[251992]: 2025-12-06 08:05:47.818 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-388e62ad-25a3-4c35-9824-d2a225ce2f4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:05:47 compute-0 nova_compute[251992]: 2025-12-06 08:05:47.819 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-388e62ad-25a3-4c35-9824-d2a225ce2f4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:05:47 compute-0 nova_compute[251992]: 2025-12-06 08:05:47.819 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:05:47 compute-0 nova_compute[251992]: 2025-12-06 08:05:47.819 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 388e62ad-25a3-4c35-9824-d2a225ce2f4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:05:47 compute-0 nova_compute[251992]: 2025-12-06 08:05:47.890 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:47 compute-0 nova_compute[251992]: 2025-12-06 08:05:47.967 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:49.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:49 compute-0 ceph-mon[74339]: pgmap v3392: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 113 op/s
Dec 06 08:05:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/795117279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:05:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:49.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 08:05:50 compute-0 sudo[384881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:50 compute-0 sudo[384881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:50 compute-0 sudo[384881]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:50 compute-0 sudo[384906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:05:50 compute-0 sudo[384906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:05:50 compute-0 sudo[384906]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:05:50.734 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:05:50 compute-0 nova_compute[251992]: 2025-12-06 08:05:50.828 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Updating instance_info_cache with network_info: [{"id": "ee8fd6ad-ff43-405a-a189-212b9e919084", "address": "fa:16:3e:f2:4c:52", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee8fd6ad-ff", "ovs_interfaceid": "ee8fd6ad-ff43-405a-a189-212b9e919084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:05:50 compute-0 nova_compute[251992]: 2025-12-06 08:05:50.895 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-388e62ad-25a3-4c35-9824-d2a225ce2f4e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:05:50 compute-0 nova_compute[251992]: 2025-12-06 08:05:50.896 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:05:50 compute-0 nova_compute[251992]: 2025-12-06 08:05:50.896 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:50 compute-0 nova_compute[251992]: 2025-12-06 08:05:50.896 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:51.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:51 compute-0 ceph-mon[74339]: pgmap v3393: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 08:05:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:51.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Dec 06 08:05:51 compute-0 nova_compute[251992]: 2025-12-06 08:05:51.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:51 compute-0 nova_compute[251992]: 2025-12-06 08:05:51.678 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:52 compute-0 nova_compute[251992]: 2025-12-06 08:05:52.892 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:52 compute-0 nova_compute[251992]: 2025-12-06 08:05:52.969 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:53.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:53 compute-0 sshd-session[384877]: Connection closed by 80.94.92.182 port 34702 [preauth]
Dec 06 08:05:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:53.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 80 op/s
Dec 06 08:05:53 compute-0 ceph-mon[74339]: pgmap v3394: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Dec 06 08:05:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:05:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:55.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:05:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:55.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 305 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 154 op/s
Dec 06 08:05:55 compute-0 ceph-mon[74339]: pgmap v3395: 305 pgs: 305 active+clean; 293 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 80 op/s
Dec 06 08:05:56 compute-0 nova_compute[251992]: 2025-12-06 08:05:56.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:56 compute-0 nova_compute[251992]: 2025-12-06 08:05:56.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:05:56 compute-0 nova_compute[251992]: 2025-12-06 08:05:56.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:05:56 compute-0 ovn_controller[147168]: 2025-12-06T08:05:56Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:4c:52 10.100.0.8
Dec 06 08:05:56 compute-0 ovn_controller[147168]: 2025-12-06T08:05:56Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:4c:52 10.100.0.8
Dec 06 08:05:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:57.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:57 compute-0 ceph-mon[74339]: pgmap v3396: 305 pgs: 305 active+clean; 305 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 154 op/s
Dec 06 08:05:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:05:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:57.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:05:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 305 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.4 MiB/s wr, 131 op/s
Dec 06 08:05:57 compute-0 nova_compute[251992]: 2025-12-06 08:05:57.893 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:57 compute-0 nova_compute[251992]: 2025-12-06 08:05:57.970 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:05:58 compute-0 ceph-mon[74339]: pgmap v3397: 305 pgs: 305 active+clean; 305 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.4 MiB/s wr, 131 op/s
Dec 06 08:05:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:05:59.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:05:59 compute-0 podman[384936]: 2025-12-06 08:05:59.427533955 +0000 UTC m=+0.078587492 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:05:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:05:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:05:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:05:59.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:05:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 305 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 97 op/s
Dec 06 08:06:00 compute-0 ceph-mon[74339]: pgmap v3398: 305 pgs: 305 active+clean; 305 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 97 op/s
Dec 06 08:06:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:01.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:01.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Dec 06 08:06:02 compute-0 nova_compute[251992]: 2025-12-06 08:06:02.897 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:02 compute-0 nova_compute[251992]: 2025-12-06 08:06:02.972 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:03.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:03 compute-0 ceph-mon[74339]: pgmap v3399: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Dec 06 08:06:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:03.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Dec 06 08:06:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:03.880 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:03.880 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:03.881 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:04 compute-0 nova_compute[251992]: 2025-12-06 08:06:04.680 251996 INFO nova.compute.manager [None req-a616de2e-415a-4bdd-94b9-a413f6a46d83 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Get console output
Dec 06 08:06:04 compute-0 nova_compute[251992]: 2025-12-06 08:06:04.687 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:06:04 compute-0 nova_compute[251992]: 2025-12-06 08:06:04.947 251996 DEBUG oslo_concurrency.lockutils [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:04 compute-0 nova_compute[251992]: 2025-12-06 08:06:04.947 251996 DEBUG oslo_concurrency.lockutils [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:04 compute-0 nova_compute[251992]: 2025-12-06 08:06:04.947 251996 DEBUG oslo_concurrency.lockutils [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:04 compute-0 nova_compute[251992]: 2025-12-06 08:06:04.948 251996 DEBUG oslo_concurrency.lockutils [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:04 compute-0 nova_compute[251992]: 2025-12-06 08:06:04.948 251996 DEBUG oslo_concurrency.lockutils [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:04 compute-0 nova_compute[251992]: 2025-12-06 08:06:04.949 251996 INFO nova.compute.manager [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Terminating instance
Dec 06 08:06:04 compute-0 nova_compute[251992]: 2025-12-06 08:06:04.949 251996 DEBUG nova.compute.manager [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:06:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:05.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:05 compute-0 podman[384967]: 2025-12-06 08:06:05.39605199 +0000 UTC m=+0.052707173 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 08:06:05 compute-0 podman[384968]: 2025-12-06 08:06:05.398279231 +0000 UTC m=+0.052197020 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Dec 06 08:06:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:05.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 351 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Dec 06 08:06:05 compute-0 ceph-mon[74339]: pgmap v3400: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Dec 06 08:06:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:07.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:07 compute-0 kernel: tapee8fd6ad-ff (unregistering): left promiscuous mode
Dec 06 08:06:07 compute-0 NetworkManager[48965]: <info>  [1765008367.3289] device (tapee8fd6ad-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.337 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:07 compute-0 ovn_controller[147168]: 2025-12-06T08:06:07Z|00727|binding|INFO|Releasing lport ee8fd6ad-ff43-405a-a189-212b9e919084 from this chassis (sb_readonly=0)
Dec 06 08:06:07 compute-0 ovn_controller[147168]: 2025-12-06T08:06:07Z|00728|binding|INFO|Setting lport ee8fd6ad-ff43-405a-a189-212b9e919084 down in Southbound
Dec 06 08:06:07 compute-0 ceph-mon[74339]: pgmap v3401: 305 pgs: 305 active+clean; 351 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Dec 06 08:06:07 compute-0 ovn_controller[147168]: 2025-12-06T08:06:07Z|00729|binding|INFO|Removing iface tapee8fd6ad-ff ovn-installed in OVS
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.341 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.361 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:07 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000bd.scope: Deactivated successfully.
Dec 06 08:06:07 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000bd.scope: Consumed 14.739s CPU time.
Dec 06 08:06:07 compute-0 systemd-machined[212986]: Machine qemu-88-instance-000000bd terminated.
Dec 06 08:06:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:07.411 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:4c:52 10.100.0.8'], port_security=['fa:16:3e:f2:4c:52 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '388e62ad-25a3-4c35-9824-d2a225ce2f4e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '53428806-158a-4192-bf7e-ff703d59f7c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d84d8f55-4938-4502-958a-437fbc252df8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=ee8fd6ad-ff43-405a-a189-212b9e919084) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:06:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:07.413 158118 INFO neutron.agent.ovn.metadata.agent [-] Port ee8fd6ad-ff43-405a-a189-212b9e919084 in datapath 9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 unbound from our chassis
Dec 06 08:06:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:07.414 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:06:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:07.415 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1c237d15-2406-4c44-99f7-625b174f90e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:07.416 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 namespace which is not needed anymore
Dec 06 08:06:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:07.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.589 251996 INFO nova.virt.libvirt.driver [-] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Instance destroyed successfully.
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.590 251996 DEBUG nova.objects.instance [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid 388e62ad-25a3-4c35-9824-d2a225ce2f4e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:06:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 351 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 621 KiB/s rd, 3.2 MiB/s wr, 102 op/s
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.669 251996 DEBUG nova.virt.libvirt.vif [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:05:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-739748842',display_name='tempest-TestNetworkBasicOps-server-739748842',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-739748842',id=189,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFOsnqpXoJup18anUJGZVcYpsgFpg7Y2ayu9Iu1GPQ5DicaaqFPAb0cP8S5poGYhObFQhTByxwkNMTuaxSUBroVnud6l5myyLrNErWs6f9UUTKdMMVghHCXtDMKVjkOpnA==',key_name='tempest-TestNetworkBasicOps-1772659890',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:05:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-at2ipv7b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:05:41Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=388e62ad-25a3-4c35-9824-d2a225ce2f4e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee8fd6ad-ff43-405a-a189-212b9e919084", "address": "fa:16:3e:f2:4c:52", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee8fd6ad-ff", "ovs_interfaceid": "ee8fd6ad-ff43-405a-a189-212b9e919084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.670 251996 DEBUG nova.network.os_vif_util [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "ee8fd6ad-ff43-405a-a189-212b9e919084", "address": "fa:16:3e:f2:4c:52", "network": {"id": "9482cb7a-b1a1-4dca-80a7-c7782ee5fe71", "bridge": "br-int", "label": "tempest-network-smoke--1672901811", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee8fd6ad-ff", "ovs_interfaceid": "ee8fd6ad-ff43-405a-a189-212b9e919084", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.671 251996 DEBUG nova.network.os_vif_util [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:4c:52,bridge_name='br-int',has_traffic_filtering=True,id=ee8fd6ad-ff43-405a-a189-212b9e919084,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee8fd6ad-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.671 251996 DEBUG os_vif [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:4c:52,bridge_name='br-int',has_traffic_filtering=True,id=ee8fd6ad-ff43-405a-a189-212b9e919084,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee8fd6ad-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.673 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.673 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee8fd6ad-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.675 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.677 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.679 251996 INFO os_vif [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:4c:52,bridge_name='br-int',has_traffic_filtering=True,id=ee8fd6ad-ff43-405a-a189-212b9e919084,network=Network(9482cb7a-b1a1-4dca-80a7-c7782ee5fe71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee8fd6ad-ff')
Dec 06 08:06:07 compute-0 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[384853]: [NOTICE]   (384861) : haproxy version is 2.8.14-c23fe91
Dec 06 08:06:07 compute-0 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[384853]: [NOTICE]   (384861) : path to executable is /usr/sbin/haproxy
Dec 06 08:06:07 compute-0 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[384853]: [WARNING]  (384861) : Exiting Master process...
Dec 06 08:06:07 compute-0 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[384853]: [ALERT]    (384861) : Current worker (384864) exited with code 143 (Terminated)
Dec 06 08:06:07 compute-0 neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71[384853]: [WARNING]  (384861) : All workers exited. Exiting... (0)
Dec 06 08:06:07 compute-0 systemd[1]: libpod-8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d.scope: Deactivated successfully.
Dec 06 08:06:07 compute-0 podman[385031]: 2025-12-06 08:06:07.769919956 +0000 UTC m=+0.259790702 container died 8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:06:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d-userdata-shm.mount: Deactivated successfully.
Dec 06 08:06:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b2a6896dcd19746e41167c356ecc0e6f6ef9aa5df0b6faafddab2206c80b1a0-merged.mount: Deactivated successfully.
Dec 06 08:06:07 compute-0 podman[385031]: 2025-12-06 08:06:07.828890307 +0000 UTC m=+0.318761053 container cleanup 8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 08:06:07 compute-0 systemd[1]: libpod-conmon-8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d.scope: Deactivated successfully.
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.898 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.947 251996 DEBUG nova.compute.manager [req-c09ad33a-7eab-4f30-a7c7-02f84e601651 req-746bcbbd-98c6-4c0a-b134-1ad0e80818d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Received event network-vif-unplugged-ee8fd6ad-ff43-405a-a189-212b9e919084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.948 251996 DEBUG oslo_concurrency.lockutils [req-c09ad33a-7eab-4f30-a7c7-02f84e601651 req-746bcbbd-98c6-4c0a-b134-1ad0e80818d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.949 251996 DEBUG oslo_concurrency.lockutils [req-c09ad33a-7eab-4f30-a7c7-02f84e601651 req-746bcbbd-98c6-4c0a-b134-1ad0e80818d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.950 251996 DEBUG oslo_concurrency.lockutils [req-c09ad33a-7eab-4f30-a7c7-02f84e601651 req-746bcbbd-98c6-4c0a-b134-1ad0e80818d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.950 251996 DEBUG nova.compute.manager [req-c09ad33a-7eab-4f30-a7c7-02f84e601651 req-746bcbbd-98c6-4c0a-b134-1ad0e80818d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] No waiting events found dispatching network-vif-unplugged-ee8fd6ad-ff43-405a-a189-212b9e919084 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:06:07 compute-0 nova_compute[251992]: 2025-12-06 08:06:07.950 251996 DEBUG nova.compute.manager [req-c09ad33a-7eab-4f30-a7c7-02f84e601651 req-746bcbbd-98c6-4c0a-b134-1ad0e80818d9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Received event network-vif-unplugged-ee8fd6ad-ff43-405a-a189-212b9e919084 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:06:08 compute-0 podman[385092]: 2025-12-06 08:06:08.10248246 +0000 UTC m=+0.252231517 container remove 8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:06:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:08.110 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7d637118-5fc7-4045-b3da-e71050cb1cf0]: (4, ('Sat Dec  6 08:06:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 (8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d)\n8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d\nSat Dec  6 08:06:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 (8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d)\n8858a994ef6d679be91100e2a762c8a2bcdec59fc1d6230a21ed638c75be961d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:08.112 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f65b8ca8-8be9-443f-a5d3-0f226fbe6a49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:08.113 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9482cb7a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:08 compute-0 nova_compute[251992]: 2025-12-06 08:06:08.161 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:08 compute-0 kernel: tap9482cb7a-b0: left promiscuous mode
Dec 06 08:06:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:08.168 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4df94c75-ec6d-494b-94a7-b2730f2cfc9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:08 compute-0 nova_compute[251992]: 2025-12-06 08:06:08.178 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:08.189 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cd62c677-5bc6-4433-9776-f80308798870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:08.190 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[956b871b-e429-4d64-be7a-8db18c90642a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:08.206 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a20bf9-e59b-41c4-891e-4c5ba5d64e97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 861182, 'reachable_time': 17005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385107, 'error': None, 'target': 'ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d9482cb7a\x2db1a1\x2d4dca\x2d80a7\x2dc7782ee5fe71.mount: Deactivated successfully.
Dec 06 08:06:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:08.209 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9482cb7a-b1a1-4dca-80a7-c7782ee5fe71 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:06:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:08.209 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[200361df-c562-49e4-b1f3-59845f267372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:06:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3387559026' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:06:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:06:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3387559026' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:06:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:09.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:09 compute-0 ceph-mon[74339]: pgmap v3402: 305 pgs: 305 active+clean; 351 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 621 KiB/s rd, 3.2 MiB/s wr, 102 op/s
Dec 06 08:06:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:09.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 351 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 2.8 MiB/s wr, 95 op/s
Dec 06 08:06:10 compute-0 nova_compute[251992]: 2025-12-06 08:06:10.044 251996 DEBUG nova.compute.manager [req-6d82925a-8a0d-4e9b-9aea-748ad3943dc1 req-dcc9eab6-b8b2-42c1-a841-e1ec2b3099bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Received event network-vif-plugged-ee8fd6ad-ff43-405a-a189-212b9e919084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:10 compute-0 nova_compute[251992]: 2025-12-06 08:06:10.045 251996 DEBUG oslo_concurrency.lockutils [req-6d82925a-8a0d-4e9b-9aea-748ad3943dc1 req-dcc9eab6-b8b2-42c1-a841-e1ec2b3099bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:10 compute-0 nova_compute[251992]: 2025-12-06 08:06:10.045 251996 DEBUG oslo_concurrency.lockutils [req-6d82925a-8a0d-4e9b-9aea-748ad3943dc1 req-dcc9eab6-b8b2-42c1-a841-e1ec2b3099bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:10 compute-0 nova_compute[251992]: 2025-12-06 08:06:10.045 251996 DEBUG oslo_concurrency.lockutils [req-6d82925a-8a0d-4e9b-9aea-748ad3943dc1 req-dcc9eab6-b8b2-42c1-a841-e1ec2b3099bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:10 compute-0 nova_compute[251992]: 2025-12-06 08:06:10.045 251996 DEBUG nova.compute.manager [req-6d82925a-8a0d-4e9b-9aea-748ad3943dc1 req-dcc9eab6-b8b2-42c1-a841-e1ec2b3099bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] No waiting events found dispatching network-vif-plugged-ee8fd6ad-ff43-405a-a189-212b9e919084 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:06:10 compute-0 nova_compute[251992]: 2025-12-06 08:06:10.045 251996 WARNING nova.compute.manager [req-6d82925a-8a0d-4e9b-9aea-748ad3943dc1 req-dcc9eab6-b8b2-42c1-a841-e1ec2b3099bd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Received unexpected event network-vif-plugged-ee8fd6ad-ff43-405a-a189-212b9e919084 for instance with vm_state active and task_state deleting.
Dec 06 08:06:10 compute-0 sudo[385110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:10 compute-0 sudo[385110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:10 compute-0 sudo[385110]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:10 compute-0 sudo[385135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:10 compute-0 sudo[385135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:10 compute-0 sudo[385135]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:10 compute-0 nova_compute[251992]: 2025-12-06 08:06:10.381 251996 INFO nova.virt.libvirt.driver [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Deleting instance files /var/lib/nova/instances/388e62ad-25a3-4c35-9824-d2a225ce2f4e_del
Dec 06 08:06:10 compute-0 nova_compute[251992]: 2025-12-06 08:06:10.381 251996 INFO nova.virt.libvirt.driver [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Deletion of /var/lib/nova/instances/388e62ad-25a3-4c35-9824-d2a225ce2f4e_del complete
Dec 06 08:06:10 compute-0 nova_compute[251992]: 2025-12-06 08:06:10.431 251996 INFO nova.compute.manager [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Took 5.48 seconds to destroy the instance on the hypervisor.
Dec 06 08:06:10 compute-0 nova_compute[251992]: 2025-12-06 08:06:10.431 251996 DEBUG oslo.service.loopingcall [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:06:10 compute-0 nova_compute[251992]: 2025-12-06 08:06:10.431 251996 DEBUG nova.compute.manager [-] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:06:10 compute-0 nova_compute[251992]: 2025-12-06 08:06:10.432 251996 DEBUG nova.network.neutron [-] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:06:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3387559026' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:06:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3387559026' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:06:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:11.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:11 compute-0 nova_compute[251992]: 2025-12-06 08:06:11.531 251996 DEBUG nova.network.neutron [-] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:06:11 compute-0 nova_compute[251992]: 2025-12-06 08:06:11.555 251996 INFO nova.compute.manager [-] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Took 1.12 seconds to deallocate network for instance.
Dec 06 08:06:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:11.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 643 KiB/s rd, 2.9 MiB/s wr, 132 op/s
Dec 06 08:06:11 compute-0 nova_compute[251992]: 2025-12-06 08:06:11.608 251996 DEBUG oslo_concurrency.lockutils [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:11 compute-0 nova_compute[251992]: 2025-12-06 08:06:11.608 251996 DEBUG oslo_concurrency.lockutils [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:11 compute-0 nova_compute[251992]: 2025-12-06 08:06:11.647 251996 DEBUG nova.compute.manager [req-12bc53c6-53ff-4fc0-98d4-f7d0fda24ebc req-3a1b3567-8a97-4620-bf7c-10797522f1ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Received event network-vif-deleted-ee8fd6ad-ff43-405a-a189-212b9e919084 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:11 compute-0 ceph-mon[74339]: pgmap v3403: 305 pgs: 305 active+clean; 351 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 2.8 MiB/s wr, 95 op/s
Dec 06 08:06:11 compute-0 nova_compute[251992]: 2025-12-06 08:06:11.673 251996 DEBUG oslo_concurrency.processutils [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:06:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1533816120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:12 compute-0 nova_compute[251992]: 2025-12-06 08:06:12.141 251996 DEBUG oslo_concurrency.processutils [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:12 compute-0 nova_compute[251992]: 2025-12-06 08:06:12.149 251996 DEBUG nova.compute.provider_tree [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:06:12 compute-0 nova_compute[251992]: 2025-12-06 08:06:12.198 251996 DEBUG nova.scheduler.client.report [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:06:12 compute-0 nova_compute[251992]: 2025-12-06 08:06:12.220 251996 DEBUG oslo_concurrency.lockutils [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:12 compute-0 nova_compute[251992]: 2025-12-06 08:06:12.263 251996 INFO nova.scheduler.client.report [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance 388e62ad-25a3-4c35-9824-d2a225ce2f4e
Dec 06 08:06:12 compute-0 nova_compute[251992]: 2025-12-06 08:06:12.355 251996 DEBUG oslo_concurrency.lockutils [None req-26c2a1d9-370d-4653-a031-eb59ccf979ab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "388e62ad-25a3-4c35-9824-d2a225ce2f4e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:12 compute-0 nova_compute[251992]: 2025-12-06 08:06:12.676 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:12 compute-0 nova_compute[251992]: 2025-12-06 08:06:12.899 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:06:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:06:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:06:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:06:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:06:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:06:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:13.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:13 compute-0 ceph-mon[74339]: pgmap v3404: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 643 KiB/s rd, 2.9 MiB/s wr, 132 op/s
Dec 06 08:06:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1533816120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:13.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Dec 06 08:06:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:15.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:15 compute-0 ceph-mon[74339]: pgmap v3405: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Dec 06 08:06:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:15.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 209 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 116 op/s
Dec 06 08:06:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1944957738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:17.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:17 compute-0 ceph-mon[74339]: pgmap v3406: 305 pgs: 305 active+clean; 209 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 116 op/s
Dec 06 08:06:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:17.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 274 KiB/s wr, 75 op/s
Dec 06 08:06:17 compute-0 nova_compute[251992]: 2025-12-06 08:06:17.681 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:17 compute-0 nova_compute[251992]: 2025-12-06 08:06:17.900 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:06:18
Dec 06 08:06:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:06:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:06:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'backups', 'volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta']
Dec 06 08:06:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:06:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:19.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:19 compute-0 ceph-mon[74339]: pgmap v3407: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 274 KiB/s wr, 75 op/s
Dec 06 08:06:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:19.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 62 KiB/s wr, 68 op/s
Dec 06 08:06:20 compute-0 ceph-mon[74339]: pgmap v3408: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 62 KiB/s wr, 68 op/s
Dec 06 08:06:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:21.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:21 compute-0 nova_compute[251992]: 2025-12-06 08:06:21.359 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:21 compute-0 nova_compute[251992]: 2025-12-06 08:06:21.486 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:21.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 75 KiB/s wr, 95 op/s
Dec 06 08:06:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/98723092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:22 compute-0 nova_compute[251992]: 2025-12-06 08:06:22.588 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008367.5875387, 388e62ad-25a3-4c35-9824-d2a225ce2f4e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:06:22 compute-0 nova_compute[251992]: 2025-12-06 08:06:22.589 251996 INFO nova.compute.manager [-] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] VM Stopped (Lifecycle Event)
Dec 06 08:06:22 compute-0 nova_compute[251992]: 2025-12-06 08:06:22.610 251996 DEBUG nova.compute.manager [None req-24320c39-7fb0-4985-94de-a160807cd67f - - - - - -] [instance: 388e62ad-25a3-4c35-9824-d2a225ce2f4e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:22 compute-0 ceph-mon[74339]: pgmap v3409: 305 pgs: 305 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 75 KiB/s wr, 95 op/s
Dec 06 08:06:22 compute-0 nova_compute[251992]: 2025-12-06 08:06:22.683 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:22 compute-0 nova_compute[251992]: 2025-12-06 08:06:22.901 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:23.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:23.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 57 op/s
Dec 06 08:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:06:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:06:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:24 compute-0 ceph-mon[74339]: pgmap v3410: 305 pgs: 305 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 57 op/s
Dec 06 08:06:24 compute-0 sudo[385190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:24 compute-0 sudo[385190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:24 compute-0 sudo[385190]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:24 compute-0 sudo[385215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:06:24 compute-0 sudo[385215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:24 compute-0 sudo[385215]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:24 compute-0 sudo[385240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:24 compute-0 sudo[385240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:24 compute-0 sudo[385240]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:24 compute-0 sudo[385265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:06:24 compute-0 sudo[385265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:25.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:25 compute-0 sudo[385265]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:06:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:06:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:06:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:06:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:06:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:06:25 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev db21de0e-3d6c-4c1c-8147-4fbc2052c06d does not exist
Dec 06 08:06:25 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 80c3e479-cfb9-42f5-9479-9a3be6708fb6 does not exist
Dec 06 08:06:25 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c50a40f2-3d52-428b-88c7-b8465aea4611 does not exist
Dec 06 08:06:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:06:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:06:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:06:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:06:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:06:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:06:25 compute-0 sudo[385320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:25 compute-0 sudo[385320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:25 compute-0 sudo[385320]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:25 compute-0 sudo[385345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:06:25 compute-0 sudo[385345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:25 compute-0 sudo[385345]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:25.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 15 KiB/s wr, 59 op/s
Dec 06 08:06:25 compute-0 sudo[385370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:25 compute-0 sudo[385370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:25 compute-0 sudo[385370]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:06:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:06:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:06:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:06:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:06:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:06:25 compute-0 sudo[385395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:06:25 compute-0 sudo[385395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:26 compute-0 podman[385460]: 2025-12-06 08:06:26.003494481 +0000 UTC m=+0.039044224 container create c9348ef1b3a4e1307dd780d36fd364173ff650c55d3e5782c1c57f4c9e1e9cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mclean, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:06:26 compute-0 systemd[1]: Started libpod-conmon-c9348ef1b3a4e1307dd780d36fd364173ff650c55d3e5782c1c57f4c9e1e9cfb.scope.
Dec 06 08:06:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:06:26 compute-0 podman[385460]: 2025-12-06 08:06:25.985831605 +0000 UTC m=+0.021381378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:06:26 compute-0 podman[385460]: 2025-12-06 08:06:26.088735561 +0000 UTC m=+0.124285324 container init c9348ef1b3a4e1307dd780d36fd364173ff650c55d3e5782c1c57f4c9e1e9cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mclean, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:06:26 compute-0 podman[385460]: 2025-12-06 08:06:26.095184276 +0000 UTC m=+0.130734009 container start c9348ef1b3a4e1307dd780d36fd364173ff650c55d3e5782c1c57f4c9e1e9cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:06:26 compute-0 podman[385460]: 2025-12-06 08:06:26.098183186 +0000 UTC m=+0.133732949 container attach c9348ef1b3a4e1307dd780d36fd364173ff650c55d3e5782c1c57f4c9e1e9cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mclean, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 08:06:26 compute-0 nervous_mclean[385477]: 167 167
Dec 06 08:06:26 compute-0 systemd[1]: libpod-c9348ef1b3a4e1307dd780d36fd364173ff650c55d3e5782c1c57f4c9e1e9cfb.scope: Deactivated successfully.
Dec 06 08:06:26 compute-0 podman[385460]: 2025-12-06 08:06:26.101744032 +0000 UTC m=+0.137293775 container died c9348ef1b3a4e1307dd780d36fd364173ff650c55d3e5782c1c57f4c9e1e9cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:06:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a03499a6425db63b1641ac4e12a839ee1e42812cbe5ccc497c56f344b08f30ea-merged.mount: Deactivated successfully.
Dec 06 08:06:26 compute-0 podman[385460]: 2025-12-06 08:06:26.32142839 +0000 UTC m=+0.356978163 container remove c9348ef1b3a4e1307dd780d36fd364173ff650c55d3e5782c1c57f4c9e1e9cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mclean, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 08:06:26 compute-0 systemd[1]: libpod-conmon-c9348ef1b3a4e1307dd780d36fd364173ff650c55d3e5782c1c57f4c9e1e9cfb.scope: Deactivated successfully.
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:06:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:06:26 compute-0 podman[385501]: 2025-12-06 08:06:26.564218411 +0000 UTC m=+0.101240412 container create 3dc5293edcbc67cca8ba7a877b06924a5501419c4523c5c3616ca802839f7dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:06:26 compute-0 podman[385501]: 2025-12-06 08:06:26.487389289 +0000 UTC m=+0.024411250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:06:26 compute-0 ceph-mon[74339]: pgmap v3411: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 15 KiB/s wr, 59 op/s
Dec 06 08:06:26 compute-0 systemd[1]: Started libpod-conmon-3dc5293edcbc67cca8ba7a877b06924a5501419c4523c5c3616ca802839f7dfd.scope.
Dec 06 08:06:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a5cb0ae4e56273dff7a7801fa0ad5511ed5be1e016d5d788906e05e56c077b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a5cb0ae4e56273dff7a7801fa0ad5511ed5be1e016d5d788906e05e56c077b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a5cb0ae4e56273dff7a7801fa0ad5511ed5be1e016d5d788906e05e56c077b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a5cb0ae4e56273dff7a7801fa0ad5511ed5be1e016d5d788906e05e56c077b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a5cb0ae4e56273dff7a7801fa0ad5511ed5be1e016d5d788906e05e56c077b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:26 compute-0 podman[385501]: 2025-12-06 08:06:26.878917253 +0000 UTC m=+0.415939234 container init 3dc5293edcbc67cca8ba7a877b06924a5501419c4523c5c3616ca802839f7dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:06:26 compute-0 podman[385501]: 2025-12-06 08:06:26.887407102 +0000 UTC m=+0.424429053 container start 3dc5293edcbc67cca8ba7a877b06924a5501419c4523c5c3616ca802839f7dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec 06 08:06:27 compute-0 podman[385501]: 2025-12-06 08:06:27.031057369 +0000 UTC m=+0.568079350 container attach 3dc5293edcbc67cca8ba7a877b06924a5501419c4523c5c3616ca802839f7dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:06:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:27.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:06:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:06:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:06:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:06:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:06:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:27.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 31 op/s
Dec 06 08:06:27 compute-0 objective_lovelace[385518]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:06:27 compute-0 objective_lovelace[385518]: --> relative data size: 1.0
Dec 06 08:06:27 compute-0 objective_lovelace[385518]: --> All data devices are unavailable
Dec 06 08:06:27 compute-0 nova_compute[251992]: 2025-12-06 08:06:27.686 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:27 compute-0 systemd[1]: libpod-3dc5293edcbc67cca8ba7a877b06924a5501419c4523c5c3616ca802839f7dfd.scope: Deactivated successfully.
Dec 06 08:06:27 compute-0 podman[385501]: 2025-12-06 08:06:27.719345011 +0000 UTC m=+1.256366972 container died 3dc5293edcbc67cca8ba7a877b06924a5501419c4523c5c3616ca802839f7dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:06:27 compute-0 nova_compute[251992]: 2025-12-06 08:06:27.903 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8a5cb0ae4e56273dff7a7801fa0ad5511ed5be1e016d5d788906e05e56c077b-merged.mount: Deactivated successfully.
Dec 06 08:06:28 compute-0 podman[385501]: 2025-12-06 08:06:28.265658202 +0000 UTC m=+1.802680163 container remove 3dc5293edcbc67cca8ba7a877b06924a5501419c4523c5c3616ca802839f7dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 08:06:28 compute-0 systemd[1]: libpod-conmon-3dc5293edcbc67cca8ba7a877b06924a5501419c4523c5c3616ca802839f7dfd.scope: Deactivated successfully.
Dec 06 08:06:28 compute-0 sudo[385395]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:28 compute-0 sudo[385548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:28 compute-0 sudo[385548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:28 compute-0 sudo[385548]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:28 compute-0 sudo[385573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:06:28 compute-0 sudo[385573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:28 compute-0 sudo[385573]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:28 compute-0 sudo[385598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:28 compute-0 sudo[385598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:28 compute-0 sudo[385598]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:28 compute-0 sudo[385623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:06:28 compute-0 sudo[385623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:28 compute-0 podman[385689]: 2025-12-06 08:06:28.844569484 +0000 UTC m=+0.036145277 container create e16c9822df6f4b3f872f14fd81d706ccf4e0c84bb0f5f76531b8cf521d5bac7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:06:28 compute-0 systemd[1]: Started libpod-conmon-e16c9822df6f4b3f872f14fd81d706ccf4e0c84bb0f5f76531b8cf521d5bac7b.scope.
Dec 06 08:06:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:06:28 compute-0 podman[385689]: 2025-12-06 08:06:28.828704315 +0000 UTC m=+0.020280128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:06:28 compute-0 podman[385689]: 2025-12-06 08:06:28.934427608 +0000 UTC m=+0.126003421 container init e16c9822df6f4b3f872f14fd81d706ccf4e0c84bb0f5f76531b8cf521d5bac7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:06:28 compute-0 podman[385689]: 2025-12-06 08:06:28.940627805 +0000 UTC m=+0.132203588 container start e16c9822df6f4b3f872f14fd81d706ccf4e0c84bb0f5f76531b8cf521d5bac7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:06:28 compute-0 objective_blackburn[385704]: 167 167
Dec 06 08:06:28 compute-0 systemd[1]: libpod-e16c9822df6f4b3f872f14fd81d706ccf4e0c84bb0f5f76531b8cf521d5bac7b.scope: Deactivated successfully.
Dec 06 08:06:28 compute-0 podman[385689]: 2025-12-06 08:06:28.946814693 +0000 UTC m=+0.138390486 container attach e16c9822df6f4b3f872f14fd81d706ccf4e0c84bb0f5f76531b8cf521d5bac7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:06:28 compute-0 podman[385689]: 2025-12-06 08:06:28.947171343 +0000 UTC m=+0.138747126 container died e16c9822df6f4b3f872f14fd81d706ccf4e0c84bb0f5f76531b8cf521d5bac7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 08:06:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bf005744bde8198d6868911ce88acf54263d8f4eb69f74a71b1fef37da60563-merged.mount: Deactivated successfully.
Dec 06 08:06:28 compute-0 podman[385689]: 2025-12-06 08:06:28.992877476 +0000 UTC m=+0.184453269 container remove e16c9822df6f4b3f872f14fd81d706ccf4e0c84bb0f5f76531b8cf521d5bac7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 08:06:28 compute-0 systemd[1]: libpod-conmon-e16c9822df6f4b3f872f14fd81d706ccf4e0c84bb0f5f76531b8cf521d5bac7b.scope: Deactivated successfully.
Dec 06 08:06:29 compute-0 ceph-mon[74339]: pgmap v3412: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 31 op/s
Dec 06 08:06:29 compute-0 podman[385732]: 2025-12-06 08:06:29.145394072 +0000 UTC m=+0.041989554 container create c0f04e78ea44b342db8a3c53408da2f3f0730b16e9ab46448bb9c69ea683edad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_darwin, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 08:06:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:29.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:29 compute-0 systemd[1]: Started libpod-conmon-c0f04e78ea44b342db8a3c53408da2f3f0730b16e9ab46448bb9c69ea683edad.scope.
Dec 06 08:06:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b450ede0b79cd0497fe5de1c7b23921a1a83737add6cd17db5e48ced6013624c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b450ede0b79cd0497fe5de1c7b23921a1a83737add6cd17db5e48ced6013624c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b450ede0b79cd0497fe5de1c7b23921a1a83737add6cd17db5e48ced6013624c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b450ede0b79cd0497fe5de1c7b23921a1a83737add6cd17db5e48ced6013624c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:29 compute-0 podman[385732]: 2025-12-06 08:06:29.128827435 +0000 UTC m=+0.025422937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:06:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:29 compute-0 podman[385732]: 2025-12-06 08:06:29.387966548 +0000 UTC m=+0.284562080 container init c0f04e78ea44b342db8a3c53408da2f3f0730b16e9ab46448bb9c69ea683edad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_darwin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Dec 06 08:06:29 compute-0 podman[385732]: 2025-12-06 08:06:29.398219744 +0000 UTC m=+0.294815236 container start c0f04e78ea44b342db8a3c53408da2f3f0730b16e9ab46448bb9c69ea683edad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_darwin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:06:29 compute-0 podman[385732]: 2025-12-06 08:06:29.405318195 +0000 UTC m=+0.301913747 container attach c0f04e78ea44b342db8a3c53408da2f3f0730b16e9ab46448bb9c69ea683edad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_darwin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:06:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:29.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 08:06:29 compute-0 nova_compute[251992]: 2025-12-06 08:06:29.643 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:29.642 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:06:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:29.645 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:06:30 compute-0 goofy_darwin[385747]: {
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:     "0": [
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:         {
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             "devices": [
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "/dev/loop3"
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             ],
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             "lv_name": "ceph_lv0",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             "lv_size": "7511998464",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             "name": "ceph_lv0",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             "tags": {
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "ceph.cluster_name": "ceph",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "ceph.crush_device_class": "",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "ceph.encrypted": "0",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "ceph.osd_id": "0",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "ceph.type": "block",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:                 "ceph.vdo": "0"
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             },
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             "type": "block",
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:             "vg_name": "ceph_vg0"
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:         }
Dec 06 08:06:30 compute-0 goofy_darwin[385747]:     ]
Dec 06 08:06:30 compute-0 goofy_darwin[385747]: }
Dec 06 08:06:30 compute-0 systemd[1]: libpod-c0f04e78ea44b342db8a3c53408da2f3f0730b16e9ab46448bb9c69ea683edad.scope: Deactivated successfully.
Dec 06 08:06:30 compute-0 podman[385732]: 2025-12-06 08:06:30.152482487 +0000 UTC m=+1.049077969 container died c0f04e78ea44b342db8a3c53408da2f3f0730b16e9ab46448bb9c69ea683edad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 08:06:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b450ede0b79cd0497fe5de1c7b23921a1a83737add6cd17db5e48ced6013624c-merged.mount: Deactivated successfully.
Dec 06 08:06:30 compute-0 podman[385732]: 2025-12-06 08:06:30.217537402 +0000 UTC m=+1.114132894 container remove c0f04e78ea44b342db8a3c53408da2f3f0730b16e9ab46448bb9c69ea683edad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_darwin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:06:30 compute-0 systemd[1]: libpod-conmon-c0f04e78ea44b342db8a3c53408da2f3f0730b16e9ab46448bb9c69ea683edad.scope: Deactivated successfully.
Dec 06 08:06:30 compute-0 sudo[385623]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:30 compute-0 podman[385756]: 2025-12-06 08:06:30.281835827 +0000 UTC m=+0.095268372 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 06 08:06:30 compute-0 sudo[385792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:30 compute-0 sudo[385792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:30 compute-0 sudo[385792]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:30 compute-0 sudo[385820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:06:30 compute-0 sudo[385820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:30 compute-0 sudo[385820]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:30 compute-0 sudo[385845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:30 compute-0 sudo[385845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:30 compute-0 sudo[385848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:30 compute-0 sudo[385845]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:30 compute-0 sudo[385848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:30 compute-0 sudo[385848]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:30 compute-0 sudo[385896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:30 compute-0 sudo[385895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:06:30 compute-0 sudo[385896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:30 compute-0 sudo[385895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:30 compute-0 sudo[385896]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:30 compute-0 podman[385987]: 2025-12-06 08:06:30.803868224 +0000 UTC m=+0.038504380 container create 5c924689f2daa120401dba5e93a8508552d7eb15eefaa4acb8cfe45df9842794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_villani, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:06:30 compute-0 systemd[1]: Started libpod-conmon-5c924689f2daa120401dba5e93a8508552d7eb15eefaa4acb8cfe45df9842794.scope.
Dec 06 08:06:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:06:30 compute-0 podman[385987]: 2025-12-06 08:06:30.880581104 +0000 UTC m=+0.115217290 container init 5c924689f2daa120401dba5e93a8508552d7eb15eefaa4acb8cfe45df9842794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_villani, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 08:06:30 compute-0 podman[385987]: 2025-12-06 08:06:30.786590537 +0000 UTC m=+0.021226693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:06:30 compute-0 podman[385987]: 2025-12-06 08:06:30.888371524 +0000 UTC m=+0.123007660 container start 5c924689f2daa120401dba5e93a8508552d7eb15eefaa4acb8cfe45df9842794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_villani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 08:06:30 compute-0 podman[385987]: 2025-12-06 08:06:30.892027212 +0000 UTC m=+0.126663348 container attach 5c924689f2daa120401dba5e93a8508552d7eb15eefaa4acb8cfe45df9842794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 08:06:30 compute-0 xenodochial_villani[386004]: 167 167
Dec 06 08:06:30 compute-0 systemd[1]: libpod-5c924689f2daa120401dba5e93a8508552d7eb15eefaa4acb8cfe45df9842794.scope: Deactivated successfully.
Dec 06 08:06:30 compute-0 podman[385987]: 2025-12-06 08:06:30.894349525 +0000 UTC m=+0.128985661 container died 5c924689f2daa120401dba5e93a8508552d7eb15eefaa4acb8cfe45df9842794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_villani, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:06:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-22240314b0c9195dfcfb4a3ead323ad7eed16b2e35f18f3c627434d8bc3bbe95-merged.mount: Deactivated successfully.
Dec 06 08:06:30 compute-0 podman[385987]: 2025-12-06 08:06:30.936894353 +0000 UTC m=+0.171530489 container remove 5c924689f2daa120401dba5e93a8508552d7eb15eefaa4acb8cfe45df9842794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_villani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:06:30 compute-0 systemd[1]: libpod-conmon-5c924689f2daa120401dba5e93a8508552d7eb15eefaa4acb8cfe45df9842794.scope: Deactivated successfully.
Dec 06 08:06:31 compute-0 podman[386030]: 2025-12-06 08:06:31.090343254 +0000 UTC m=+0.043059974 container create 83b42386dc0d905aa0ad4a2710e90d2f64b01c82bff6edcb6c1fbed4e2d0b20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 08:06:31 compute-0 ceph-mon[74339]: pgmap v3413: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 08:06:31 compute-0 systemd[1]: Started libpod-conmon-83b42386dc0d905aa0ad4a2710e90d2f64b01c82bff6edcb6c1fbed4e2d0b20a.scope.
Dec 06 08:06:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:31.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be66497d83287aeb0ac273fbf4185e0064b2eff364ca0b5847d59879fcb2e76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be66497d83287aeb0ac273fbf4185e0064b2eff364ca0b5847d59879fcb2e76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be66497d83287aeb0ac273fbf4185e0064b2eff364ca0b5847d59879fcb2e76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be66497d83287aeb0ac273fbf4185e0064b2eff364ca0b5847d59879fcb2e76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:31 compute-0 podman[386030]: 2025-12-06 08:06:31.07093302 +0000 UTC m=+0.023649780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:06:31 compute-0 podman[386030]: 2025-12-06 08:06:31.173990911 +0000 UTC m=+0.126707701 container init 83b42386dc0d905aa0ad4a2710e90d2f64b01c82bff6edcb6c1fbed4e2d0b20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 08:06:31 compute-0 podman[386030]: 2025-12-06 08:06:31.180751443 +0000 UTC m=+0.133468183 container start 83b42386dc0d905aa0ad4a2710e90d2f64b01c82bff6edcb6c1fbed4e2d0b20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 08:06:31 compute-0 podman[386030]: 2025-12-06 08:06:31.186157459 +0000 UTC m=+0.138874209 container attach 83b42386dc0d905aa0ad4a2710e90d2f64b01c82bff6edcb6c1fbed4e2d0b20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:06:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:31.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 08:06:32 compute-0 bold_engelbart[386047]: {
Dec 06 08:06:32 compute-0 bold_engelbart[386047]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:06:32 compute-0 bold_engelbart[386047]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:06:32 compute-0 bold_engelbart[386047]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:06:32 compute-0 bold_engelbart[386047]:         "osd_id": 0,
Dec 06 08:06:32 compute-0 bold_engelbart[386047]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:06:32 compute-0 bold_engelbart[386047]:         "type": "bluestore"
Dec 06 08:06:32 compute-0 bold_engelbart[386047]:     }
Dec 06 08:06:32 compute-0 bold_engelbart[386047]: }
Dec 06 08:06:32 compute-0 systemd[1]: libpod-83b42386dc0d905aa0ad4a2710e90d2f64b01c82bff6edcb6c1fbed4e2d0b20a.scope: Deactivated successfully.
Dec 06 08:06:32 compute-0 podman[386030]: 2025-12-06 08:06:32.035153218 +0000 UTC m=+0.987869938 container died 83b42386dc0d905aa0ad4a2710e90d2f64b01c82bff6edcb6c1fbed4e2d0b20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 08:06:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2be66497d83287aeb0ac273fbf4185e0064b2eff364ca0b5847d59879fcb2e76-merged.mount: Deactivated successfully.
Dec 06 08:06:32 compute-0 podman[386030]: 2025-12-06 08:06:32.088055876 +0000 UTC m=+1.040772596 container remove 83b42386dc0d905aa0ad4a2710e90d2f64b01c82bff6edcb6c1fbed4e2d0b20a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 08:06:32 compute-0 systemd[1]: libpod-conmon-83b42386dc0d905aa0ad4a2710e90d2f64b01c82bff6edcb6c1fbed4e2d0b20a.scope: Deactivated successfully.
Dec 06 08:06:32 compute-0 sudo[385895]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:06:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:06:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:06:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:06:32 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 488c48f1-fa46-4a69-a9e9-826e8f3c75a3 does not exist
Dec 06 08:06:32 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8f763cb7-7335-4ee8-81e4-76b2c0189901 does not exist
Dec 06 08:06:32 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 527bd0ca-8c0c-4fbc-b6ae-780c5ff41416 does not exist
Dec 06 08:06:32 compute-0 sudo[386083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:32 compute-0 sudo[386083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:32 compute-0 sudo[386083]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:32 compute-0 sudo[386108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:06:32 compute-0 sudo[386108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:32 compute-0 sudo[386108]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:32 compute-0 nova_compute[251992]: 2025-12-06 08:06:32.691 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:32 compute-0 nova_compute[251992]: 2025-12-06 08:06:32.905 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:33 compute-0 ceph-mon[74339]: pgmap v3414: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 08:06:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:06:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:06:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2064629927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:33.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Dec 06 08:06:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:33.648 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4058813300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:35.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:35 compute-0 ceph-mon[74339]: pgmap v3415: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Dec 06 08:06:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:35.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Dec 06 08:06:36 compute-0 podman[386135]: 2025-12-06 08:06:36.399354441 +0000 UTC m=+0.058919011 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 08:06:36 compute-0 podman[386136]: 2025-12-06 08:06:36.410825031 +0000 UTC m=+0.069569869 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:06:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:37.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:37.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:06:37 compute-0 nova_compute[251992]: 2025-12-06 08:06:37.696 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:37 compute-0 nova_compute[251992]: 2025-12-06 08:06:37.907 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:38 compute-0 ceph-mon[74339]: pgmap v3416: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 341 B/s wr, 1 op/s
Dec 06 08:06:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:39.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:39 compute-0 ceph-mon[74339]: pgmap v3417: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:06:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:39.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:06:39 compute-0 nova_compute[251992]: 2025-12-06 08:06:39.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:39 compute-0 nova_compute[251992]: 2025-12-06 08:06:39.655 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:39 compute-0 nova_compute[251992]: 2025-12-06 08:06:39.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:39 compute-0 nova_compute[251992]: 2025-12-06 08:06:39.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:39 compute-0 nova_compute[251992]: 2025-12-06 08:06:39.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:39 compute-0 nova_compute[251992]: 2025-12-06 08:06:39.683 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:06:39 compute-0 nova_compute[251992]: 2025-12-06 08:06:39.683 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:06:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2401945160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:40 compute-0 nova_compute[251992]: 2025-12-06 08:06:40.126 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:40 compute-0 nova_compute[251992]: 2025-12-06 08:06:40.292 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:06:40 compute-0 nova_compute[251992]: 2025-12-06 08:06:40.293 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4144MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:06:40 compute-0 nova_compute[251992]: 2025-12-06 08:06:40.294 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:40 compute-0 nova_compute[251992]: 2025-12-06 08:06:40.294 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:40 compute-0 nova_compute[251992]: 2025-12-06 08:06:40.472 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:06:40 compute-0 nova_compute[251992]: 2025-12-06 08:06:40.472 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:06:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2401945160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:40 compute-0 nova_compute[251992]: 2025-12-06 08:06:40.490 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:06:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/378523446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:40 compute-0 nova_compute[251992]: 2025-12-06 08:06:40.886 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:40 compute-0 nova_compute[251992]: 2025-12-06 08:06:40.892 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:06:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:41.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:41 compute-0 ceph-mon[74339]: pgmap v3418: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:06:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/378523446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:06:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:41.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:42 compute-0 nova_compute[251992]: 2025-12-06 08:06:42.151 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:06:42 compute-0 nova_compute[251992]: 2025-12-06 08:06:42.178 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:06:42 compute-0 nova_compute[251992]: 2025-12-06 08:06:42.178 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:42 compute-0 nova_compute[251992]: 2025-12-06 08:06:42.417 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "131d5537-9b5a-407d-97af-efc5bd314951" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:42 compute-0 nova_compute[251992]: 2025-12-06 08:06:42.418 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:42 compute-0 ceph-mon[74339]: pgmap v3419: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:06:42 compute-0 nova_compute[251992]: 2025-12-06 08:06:42.698 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:42 compute-0 nova_compute[251992]: 2025-12-06 08:06:42.811 251996 DEBUG nova.compute.manager [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:06:42 compute-0 nova_compute[251992]: 2025-12-06 08:06:42.948 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:42 compute-0 nova_compute[251992]: 2025-12-06 08:06:42.949 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:42 compute-0 nova_compute[251992]: 2025-12-06 08:06:42.957 251996 DEBUG nova.virt.hardware [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:06:42 compute-0 nova_compute[251992]: 2025-12-06 08:06:42.957 251996 INFO nova.compute.claims [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:06:42 compute-0 nova_compute[251992]: 2025-12-06 08:06:42.969 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:06:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:06:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:06:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:06:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:06:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:06:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:43.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:43 compute-0 nova_compute[251992]: 2025-12-06 08:06:43.535 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:06:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:06:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:43.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:06:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:06:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2229911353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:43 compute-0 nova_compute[251992]: 2025-12-06 08:06:43.970 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:43 compute-0 nova_compute[251992]: 2025-12-06 08:06:43.976 251996 DEBUG nova.compute.provider_tree [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:06:43 compute-0 nova_compute[251992]: 2025-12-06 08:06:43.992 251996 DEBUG nova.scheduler.client.report [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.015 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.016 251996 DEBUG nova.compute.manager [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.077 251996 DEBUG nova.compute.manager [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.078 251996 DEBUG nova.network.neutron [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.105 251996 INFO nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.125 251996 DEBUG nova.compute.manager [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:06:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2229911353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.241 251996 DEBUG nova.compute.manager [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.243 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.244 251996 INFO nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Creating image(s)
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.280 251996 DEBUG nova.storage.rbd_utils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 131d5537-9b5a-407d-97af-efc5bd314951_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.312 251996 DEBUG nova.storage.rbd_utils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 131d5537-9b5a-407d-97af-efc5bd314951_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.338 251996 DEBUG nova.storage.rbd_utils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 131d5537-9b5a-407d-97af-efc5bd314951_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.342 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.382 251996 DEBUG nova.policy [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.434 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.435 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.436 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.436 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.466 251996 DEBUG nova.storage.rbd_utils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 131d5537-9b5a-407d-97af-efc5bd314951_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.470 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 131d5537-9b5a-407d-97af-efc5bd314951_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.763 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 131d5537-9b5a-407d-97af-efc5bd314951_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.845 251996 DEBUG nova.storage.rbd_utils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image 131d5537-9b5a-407d-97af-efc5bd314951_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.950 251996 DEBUG nova.objects.instance [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid 131d5537-9b5a-407d-97af-efc5bd314951 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.975 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.975 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Ensure instance console log exists: /var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.976 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.976 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:44 compute-0 nova_compute[251992]: 2025-12-06 08:06:44.976 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:45.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:45 compute-0 ceph-mon[74339]: pgmap v3420: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:06:45 compute-0 nova_compute[251992]: 2025-12-06 08:06:45.583 251996 DEBUG nova.network.neutron [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Successfully created port: cbe16ad6-d576-4461-9682-554b48a77542 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:06:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 154 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 893 KiB/s wr, 24 op/s
Dec 06 08:06:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:45.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:46 compute-0 nova_compute[251992]: 2025-12-06 08:06:46.599 251996 DEBUG nova.network.neutron [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Successfully updated port: cbe16ad6-d576-4461-9682-554b48a77542 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:06:46 compute-0 nova_compute[251992]: 2025-12-06 08:06:46.616 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:06:46 compute-0 nova_compute[251992]: 2025-12-06 08:06:46.616 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:06:46 compute-0 nova_compute[251992]: 2025-12-06 08:06:46.617 251996 DEBUG nova.network.neutron [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:06:46 compute-0 nova_compute[251992]: 2025-12-06 08:06:46.700 251996 DEBUG nova.compute.manager [req-01904c78-96a6-4961-990f-58d747a9f27f req-5f7111d2-b87b-4067-ac8b-6762aaf6a986 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-changed-cbe16ad6-d576-4461-9682-554b48a77542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:46 compute-0 nova_compute[251992]: 2025-12-06 08:06:46.700 251996 DEBUG nova.compute.manager [req-01904c78-96a6-4961-990f-58d747a9f27f req-5f7111d2-b87b-4067-ac8b-6762aaf6a986 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Refreshing instance network info cache due to event network-changed-cbe16ad6-d576-4461-9682-554b48a77542. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:06:46 compute-0 nova_compute[251992]: 2025-12-06 08:06:46.700 251996 DEBUG oslo_concurrency.lockutils [req-01904c78-96a6-4961-990f-58d747a9f27f req-5f7111d2-b87b-4067-ac8b-6762aaf6a986 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:06:46 compute-0 nova_compute[251992]: 2025-12-06 08:06:46.761 251996 DEBUG nova.network.neutron [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:06:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:47.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:47 compute-0 nova_compute[251992]: 2025-12-06 08:06:47.180 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:47 compute-0 ceph-mon[74339]: pgmap v3421: 305 pgs: 305 active+clean; 154 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 893 KiB/s wr, 24 op/s
Dec 06 08:06:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Dec 06 08:06:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:47.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:47 compute-0 nova_compute[251992]: 2025-12-06 08:06:47.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:47 compute-0 nova_compute[251992]: 2025-12-06 08:06:47.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:06:47 compute-0 nova_compute[251992]: 2025-12-06 08:06:47.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:06:47 compute-0 nova_compute[251992]: 2025-12-06 08:06:47.684 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 08:06:47 compute-0 nova_compute[251992]: 2025-12-06 08:06:47.684 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:06:47 compute-0 nova_compute[251992]: 2025-12-06 08:06:47.702 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:47 compute-0 nova_compute[251992]: 2025-12-06 08:06:47.971 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1111685073' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.135 251996 DEBUG nova.network.neutron [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updating instance_info_cache with network_info: [{"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.154 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.155 251996 DEBUG nova.compute.manager [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Instance network_info: |[{"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.155 251996 DEBUG oslo_concurrency.lockutils [req-01904c78-96a6-4961-990f-58d747a9f27f req-5f7111d2-b87b-4067-ac8b-6762aaf6a986 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.155 251996 DEBUG nova.network.neutron [req-01904c78-96a6-4961-990f-58d747a9f27f req-5f7111d2-b87b-4067-ac8b-6762aaf6a986 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Refreshing network info cache for port cbe16ad6-d576-4461-9682-554b48a77542 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.158 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Start _get_guest_xml network_info=[{"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.162 251996 WARNING nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.171 251996 DEBUG nova.virt.libvirt.host [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.171 251996 DEBUG nova.virt.libvirt.host [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.174 251996 DEBUG nova.virt.libvirt.host [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.174 251996 DEBUG nova.virt.libvirt.host [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.175 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.176 251996 DEBUG nova.virt.hardware [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.176 251996 DEBUG nova.virt.hardware [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.176 251996 DEBUG nova.virt.hardware [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.176 251996 DEBUG nova.virt.hardware [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.177 251996 DEBUG nova.virt.hardware [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.177 251996 DEBUG nova.virt.hardware [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.177 251996 DEBUG nova.virt.hardware [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.177 251996 DEBUG nova.virt.hardware [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.178 251996 DEBUG nova.virt.hardware [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.178 251996 DEBUG nova.virt.hardware [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.178 251996 DEBUG nova.virt.hardware [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:06:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:49.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.181 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:49 compute-0 ceph-mon[74339]: pgmap v3422: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Dec 06 08:06:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/27461616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Dec 06 08:06:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:06:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/756859241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:06:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:49.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.627 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.653 251996 DEBUG nova.storage.rbd_utils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 131d5537-9b5a-407d-97af-efc5bd314951_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:49 compute-0 nova_compute[251992]: 2025-12-06 08:06:49.658 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:06:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3203557611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.099 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.101 251996 DEBUG nova.virt.libvirt.vif [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416534484',display_name='tempest-TestNetworkBasicOps-server-416534484',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416534484',id=191,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcK72NHJQZj1SOlzPXNIyj/EahKYKvzGuI9wWjXw21tyomau5BzaHrS65HPkCW6d+F/TpM4Nf1hp15CwV3oJtnsWhkxf1U/DQj7i/qxu5KN1mZmgoDo9AVVhX47DkyW5Q==',key_name='tempest-TestNetworkBasicOps-1452587742',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-0jy5h723',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:06:44Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=131d5537-9b5a-407d-97af-efc5bd314951,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.102 251996 DEBUG nova.network.os_vif_util [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.103 251996 DEBUG nova.network.os_vif_util [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:9a:61,bridge_name='br-int',has_traffic_filtering=True,id=cbe16ad6-d576-4461-9682-554b48a77542,network=Network(1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbe16ad6-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.105 251996 DEBUG nova.objects.instance [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid 131d5537-9b5a-407d-97af-efc5bd314951 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.135 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:06:50 compute-0 nova_compute[251992]:   <uuid>131d5537-9b5a-407d-97af-efc5bd314951</uuid>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   <name>instance-000000bf</name>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <nova:name>tempest-TestNetworkBasicOps-server-416534484</nova:name>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:06:49</nova:creationTime>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <nova:port uuid="cbe16ad6-d576-4461-9682-554b48a77542">
Dec 06 08:06:50 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <system>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <entry name="serial">131d5537-9b5a-407d-97af-efc5bd314951</entry>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <entry name="uuid">131d5537-9b5a-407d-97af-efc5bd314951</entry>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     </system>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   <os>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   </os>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   <features>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   </features>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/131d5537-9b5a-407d-97af-efc5bd314951_disk">
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       </source>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/131d5537-9b5a-407d-97af-efc5bd314951_disk.config">
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       </source>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:06:50 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:e3:9a:61"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <target dev="tapcbe16ad6-d5"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/console.log" append="off"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <video>
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     </video>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:06:50 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:06:50 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:06:50 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:06:50 compute-0 nova_compute[251992]: </domain>
Dec 06 08:06:50 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.137 251996 DEBUG nova.compute.manager [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Preparing to wait for external event network-vif-plugged-cbe16ad6-d576-4461-9682-554b48a77542 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.137 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "131d5537-9b5a-407d-97af-efc5bd314951-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.137 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.138 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.138 251996 DEBUG nova.virt.libvirt.vif [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416534484',display_name='tempest-TestNetworkBasicOps-server-416534484',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416534484',id=191,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcK72NHJQZj1SOlzPXNIyj/EahKYKvzGuI9wWjXw21tyomau5BzaHrS65HPkCW6d+F/TpM4Nf1hp15CwV3oJtnsWhkxf1U/DQj7i/qxu5KN1mZmgoDo9AVVhX47DkyW5Q==',key_name='tempest-TestNetworkBasicOps-1452587742',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-0jy5h723',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:06:44Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=131d5537-9b5a-407d-97af-efc5bd314951,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.139 251996 DEBUG nova.network.os_vif_util [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.140 251996 DEBUG nova.network.os_vif_util [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:9a:61,bridge_name='br-int',has_traffic_filtering=True,id=cbe16ad6-d576-4461-9682-554b48a77542,network=Network(1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbe16ad6-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.140 251996 DEBUG os_vif [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:9a:61,bridge_name='br-int',has_traffic_filtering=True,id=cbe16ad6-d576-4461-9682-554b48a77542,network=Network(1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbe16ad6-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.141 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.142 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.142 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.147 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.148 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbe16ad6-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.149 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcbe16ad6-d5, col_values=(('external_ids', {'iface-id': 'cbe16ad6-d576-4461-9682-554b48a77542', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:9a:61', 'vm-uuid': '131d5537-9b5a-407d-97af-efc5bd314951'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.150 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:50 compute-0 NetworkManager[48965]: <info>  [1765008410.1519] manager: (tapcbe16ad6-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.152 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.157 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.158 251996 INFO os_vif [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:9a:61,bridge_name='br-int',has_traffic_filtering=True,id=cbe16ad6-d576-4461-9682-554b48a77542,network=Network(1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbe16ad6-d5')
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.219 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.220 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.220 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:e3:9a:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.221 251996 INFO nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Using config drive
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.246 251996 DEBUG nova.storage.rbd_utils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 131d5537-9b5a-407d-97af-efc5bd314951_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1249092426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:06:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/756859241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:06:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3203557611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:06:50 compute-0 sudo[386495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:50 compute-0 sudo[386495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:50 compute-0 sudo[386495]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:50 compute-0 sudo[386520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:06:50 compute-0 sudo[386520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:06:50 compute-0 sudo[386520]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.851 251996 INFO nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Creating config drive at /var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/disk.config
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.856 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3y0ik1ox execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:50 compute-0 nova_compute[251992]: 2025-12-06 08:06:50.993 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3y0ik1ox" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.019 251996 DEBUG nova.storage.rbd_utils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 131d5537-9b5a-407d-97af-efc5bd314951_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.022 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/disk.config 131d5537-9b5a-407d-97af-efc5bd314951_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.165 251996 DEBUG oslo_concurrency.processutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/disk.config 131d5537-9b5a-407d-97af-efc5bd314951_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.166 251996 INFO nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Deleting local config drive /var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/disk.config because it was imported into RBD.
Dec 06 08:06:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:51.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:51 compute-0 kernel: tapcbe16ad6-d5: entered promiscuous mode
Dec 06 08:06:51 compute-0 NetworkManager[48965]: <info>  [1765008411.2139] manager: (tapcbe16ad6-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.214 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:51 compute-0 ovn_controller[147168]: 2025-12-06T08:06:51Z|00730|binding|INFO|Claiming lport cbe16ad6-d576-4461-9682-554b48a77542 for this chassis.
Dec 06 08:06:51 compute-0 ovn_controller[147168]: 2025-12-06T08:06:51Z|00731|binding|INFO|cbe16ad6-d576-4461-9682-554b48a77542: Claiming fa:16:3e:e3:9a:61 10.100.0.10
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.223 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.230 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:9a:61 10.100.0.10'], port_security=['fa:16:3e:e3:9a:61 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '131d5537-9b5a-407d-97af-efc5bd314951', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '96bfcbe0-efa0-4be3-8868-9b784c4417a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e1d5a37-aa54-4385-9ff9-4f158377e4c1, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=cbe16ad6-d576-4461-9682-554b48a77542) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.233 158118 INFO neutron.agent.ovn.metadata.agent [-] Port cbe16ad6-d576-4461-9682-554b48a77542 in datapath 1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0 bound to our chassis
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.234 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0
Dec 06 08:06:51 compute-0 systemd-udevd[386600]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.246 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8722db-c256-44f6-b4a6-b2ee4a7cde95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.247 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1d56bd3d-51 in ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:06:51 compute-0 systemd-machined[212986]: New machine qemu-89-instance-000000bf.
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.249 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1d56bd3d-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.249 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3a911fa2-4346-4b3c-8be6-05dcfc7e7360]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.250 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2afc2b-905c-4715-9acf-3a67879fca66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 NetworkManager[48965]: <info>  [1765008411.2571] device (tapcbe16ad6-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:06:51 compute-0 NetworkManager[48965]: <info>  [1765008411.2580] device (tapcbe16ad6-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.262 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[a96c4b61-b4e0-43ff-b654-a471fb135877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-000000bf.
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.284 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.284 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[de6bc3a3-c5e7-4981-a32b-64d5f793b8f9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 ovn_controller[147168]: 2025-12-06T08:06:51Z|00732|binding|INFO|Setting lport cbe16ad6-d576-4461-9682-554b48a77542 ovn-installed in OVS
Dec 06 08:06:51 compute-0 ovn_controller[147168]: 2025-12-06T08:06:51Z|00733|binding|INFO|Setting lport cbe16ad6-d576-4461-9682-554b48a77542 up in Southbound
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.290 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.313 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[bea54964-f220-4b51-8776-75bf1dc6f4c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 systemd-udevd[386603]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:06:51 compute-0 NetworkManager[48965]: <info>  [1765008411.3191] manager: (tap1d56bd3d-50): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.319 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e01a24d1-7d73-4500-95b5-5d541e5879d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 ceph-mon[74339]: pgmap v3423: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.350 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a8707339-8305-4d75-b793-1289fa86e6c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.352 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2de953ef-e78d-4f3d-abdd-2d54f1a085ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 NetworkManager[48965]: <info>  [1765008411.3710] device (tap1d56bd3d-50): carrier: link connected
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.374 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[9db53a5f-ccd5-47a7-ab9f-8dd911e9e524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.393 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4e03bdd2-95e8-44b2-96cd-4420cb5c108d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d56bd3d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:30:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 223], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 868395, 'reachable_time': 17938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386632, 'error': None, 'target': 'ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.409 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb55e8e-5a2d-4cdd-89b3-03d3de76d1a0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:3026'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 868395, 'tstamp': 868395}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386633, 'error': None, 'target': 'ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.423 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef82bf9-90b7-40af-9f4f-faf295cc813d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1d56bd3d-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:30:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 223], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 868395, 'reachable_time': 17938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386634, 'error': None, 'target': 'ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.452 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9bbe7992-8d40-4b6b-bad7-f57baa2497b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.502 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea46b6b-5aae-4d1f-a71e-8850c31b88a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.504 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d56bd3d-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.504 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.504 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1d56bd3d-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:51 compute-0 NetworkManager[48965]: <info>  [1765008411.5068] manager: (tap1d56bd3d-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Dec 06 08:06:51 compute-0 kernel: tap1d56bd3d-50: entered promiscuous mode
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.506 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.507 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.511 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1d56bd3d-50, col_values=(('external_ids', {'iface-id': 'bc641a00-a4a7-4082-9aee-a2ad8d4616c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.512 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.513 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:51 compute-0 ovn_controller[147168]: 2025-12-06T08:06:51Z|00734|binding|INFO|Releasing lport bc641a00-a4a7-4082-9aee-a2ad8d4616c8 from this chassis (sb_readonly=0)
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.515 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.516 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[702930f2-f320-4b91-805e-2987e4e359ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.517 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0.pid.haproxy
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:06:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:06:51.518 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0', 'env', 'PROCESS_TAG=haproxy-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.526 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 56 op/s
Dec 06 08:06:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:51.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.740 251996 DEBUG nova.network.neutron [req-01904c78-96a6-4961-990f-58d747a9f27f req-5f7111d2-b87b-4067-ac8b-6762aaf6a986 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updated VIF entry in instance network info cache for port cbe16ad6-d576-4461-9682-554b48a77542. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.740 251996 DEBUG nova.network.neutron [req-01904c78-96a6-4961-990f-58d747a9f27f req-5f7111d2-b87b-4067-ac8b-6762aaf6a986 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updating instance_info_cache with network_info: [{"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:06:51 compute-0 podman[386704]: 2025-12-06 08:06:51.859122828 +0000 UTC m=+0.045718626 container create b7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.888 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008411.8874671, 131d5537-9b5a-407d-97af-efc5bd314951 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.888 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] VM Started (Lifecycle Event)
Dec 06 08:06:51 compute-0 systemd[1]: Started libpod-conmon-b7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046.scope.
Dec 06 08:06:51 compute-0 podman[386704]: 2025-12-06 08:06:51.833853406 +0000 UTC m=+0.020449214 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:06:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6892a1eb0f50d2161494fc9ca0a3586016d47a5fb5b76871cb9a24e4ea9455ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:06:51 compute-0 podman[386704]: 2025-12-06 08:06:51.959889587 +0000 UTC m=+0.146485395 container init b7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:06:51 compute-0 podman[386704]: 2025-12-06 08:06:51.964439989 +0000 UTC m=+0.151035777 container start b7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:06:51 compute-0 nova_compute[251992]: 2025-12-06 08:06:51.968 251996 DEBUG oslo_concurrency.lockutils [req-01904c78-96a6-4961-990f-58d747a9f27f req-5f7111d2-b87b-4067-ac8b-6762aaf6a986 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:06:51 compute-0 neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0[386724]: [NOTICE]   (386728) : New worker (386730) forked
Dec 06 08:06:51 compute-0 neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0[386724]: [NOTICE]   (386728) : Loading success.
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.101 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.107 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008411.8877106, 131d5537-9b5a-407d-97af-efc5bd314951 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.108 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] VM Paused (Lifecycle Event)
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.143 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.147 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.174 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.820 251996 DEBUG nova.compute.manager [req-49e1aa9c-ed6d-43b6-b077-1ad2f13bc0ac req-cb26eb32-98e0-4603-9180-c0fc35ddb35c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-vif-plugged-cbe16ad6-d576-4461-9682-554b48a77542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.821 251996 DEBUG oslo_concurrency.lockutils [req-49e1aa9c-ed6d-43b6-b077-1ad2f13bc0ac req-cb26eb32-98e0-4603-9180-c0fc35ddb35c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "131d5537-9b5a-407d-97af-efc5bd314951-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.821 251996 DEBUG oslo_concurrency.lockutils [req-49e1aa9c-ed6d-43b6-b077-1ad2f13bc0ac req-cb26eb32-98e0-4603-9180-c0fc35ddb35c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.821 251996 DEBUG oslo_concurrency.lockutils [req-49e1aa9c-ed6d-43b6-b077-1ad2f13bc0ac req-cb26eb32-98e0-4603-9180-c0fc35ddb35c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.821 251996 DEBUG nova.compute.manager [req-49e1aa9c-ed6d-43b6-b077-1ad2f13bc0ac req-cb26eb32-98e0-4603-9180-c0fc35ddb35c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Processing event network-vif-plugged-cbe16ad6-d576-4461-9682-554b48a77542 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.822 251996 DEBUG nova.compute.manager [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.826 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008412.8259418, 131d5537-9b5a-407d-97af-efc5bd314951 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.826 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] VM Resumed (Lifecycle Event)
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.828 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.831 251996 INFO nova.virt.libvirt.driver [-] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Instance spawned successfully.
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.831 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.979 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.984 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.984 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.985 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.985 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.986 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.986 251996 DEBUG nova.virt.libvirt.driver [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:06:52 compute-0 nova_compute[251992]: 2025-12-06 08:06:52.992 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:06:53 compute-0 nova_compute[251992]: 2025-12-06 08:06:53.012 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:53 compute-0 nova_compute[251992]: 2025-12-06 08:06:53.040 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:06:53 compute-0 nova_compute[251992]: 2025-12-06 08:06:53.068 251996 INFO nova.compute.manager [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Took 8.83 seconds to spawn the instance on the hypervisor.
Dec 06 08:06:53 compute-0 nova_compute[251992]: 2025-12-06 08:06:53.069 251996 DEBUG nova.compute.manager [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:06:53 compute-0 nova_compute[251992]: 2025-12-06 08:06:53.146 251996 INFO nova.compute.manager [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Took 10.27 seconds to build instance.
Dec 06 08:06:53 compute-0 nova_compute[251992]: 2025-12-06 08:06:53.162 251996 DEBUG oslo_concurrency.lockutils [None req-38d87644-c2ce-4f4e-9fbb-456b4ca0c041 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:53.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:53 compute-0 ceph-mon[74339]: pgmap v3424: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 56 op/s
Dec 06 08:06:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 56 op/s
Dec 06 08:06:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:06:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:53.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:06:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/614127371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:06:54 compute-0 nova_compute[251992]: 2025-12-06 08:06:54.902 251996 DEBUG nova.compute.manager [req-26859f5c-4274-473f-b5ad-980327ac0500 req-e7e87083-f9b9-4555-96a5-eec5cde027cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-vif-plugged-cbe16ad6-d576-4461-9682-554b48a77542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:54 compute-0 nova_compute[251992]: 2025-12-06 08:06:54.903 251996 DEBUG oslo_concurrency.lockutils [req-26859f5c-4274-473f-b5ad-980327ac0500 req-e7e87083-f9b9-4555-96a5-eec5cde027cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "131d5537-9b5a-407d-97af-efc5bd314951-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:06:54 compute-0 nova_compute[251992]: 2025-12-06 08:06:54.904 251996 DEBUG oslo_concurrency.lockutils [req-26859f5c-4274-473f-b5ad-980327ac0500 req-e7e87083-f9b9-4555-96a5-eec5cde027cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:06:54 compute-0 nova_compute[251992]: 2025-12-06 08:06:54.904 251996 DEBUG oslo_concurrency.lockutils [req-26859f5c-4274-473f-b5ad-980327ac0500 req-e7e87083-f9b9-4555-96a5-eec5cde027cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:06:54 compute-0 nova_compute[251992]: 2025-12-06 08:06:54.904 251996 DEBUG nova.compute.manager [req-26859f5c-4274-473f-b5ad-980327ac0500 req-e7e87083-f9b9-4555-96a5-eec5cde027cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] No waiting events found dispatching network-vif-plugged-cbe16ad6-d576-4461-9682-554b48a77542 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:06:54 compute-0 nova_compute[251992]: 2025-12-06 08:06:54.904 251996 WARNING nova.compute.manager [req-26859f5c-4274-473f-b5ad-980327ac0500 req-e7e87083-f9b9-4555-96a5-eec5cde027cd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received unexpected event network-vif-plugged-cbe16ad6-d576-4461-9682-554b48a77542 for instance with vm_state active and task_state None.
Dec 06 08:06:55 compute-0 nova_compute[251992]: 2025-12-06 08:06:55.152 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:55.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:55 compute-0 ceph-mon[74339]: pgmap v3425: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 56 op/s
Dec 06 08:06:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1753148160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:06:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 948 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Dec 06 08:06:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:55.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:56 compute-0 ceph-mon[74339]: pgmap v3426: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 948 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Dec 06 08:06:56 compute-0 nova_compute[251992]: 2025-12-06 08:06:56.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:56 compute-0 nova_compute[251992]: 2025-12-06 08:06:56.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:06:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:57.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 104 op/s
Dec 06 08:06:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:57.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:57 compute-0 nova_compute[251992]: 2025-12-06 08:06:57.737 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:57 compute-0 NetworkManager[48965]: <info>  [1765008417.7408] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Dec 06 08:06:57 compute-0 NetworkManager[48965]: <info>  [1765008417.7417] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Dec 06 08:06:57 compute-0 nova_compute[251992]: 2025-12-06 08:06:57.787 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:57 compute-0 ovn_controller[147168]: 2025-12-06T08:06:57Z|00735|binding|INFO|Releasing lport bc641a00-a4a7-4082-9aee-a2ad8d4616c8 from this chassis (sb_readonly=0)
Dec 06 08:06:57 compute-0 nova_compute[251992]: 2025-12-06 08:06:57.797 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:58 compute-0 nova_compute[251992]: 2025-12-06 08:06:58.013 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:06:58 compute-0 nova_compute[251992]: 2025-12-06 08:06:58.238 251996 DEBUG nova.compute.manager [req-1a82e54a-2e00-4768-af6c-6e40d3467bbf req-6bfc6d1f-f73e-4b81-94ed-2621f26439ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-changed-cbe16ad6-d576-4461-9682-554b48a77542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:06:58 compute-0 nova_compute[251992]: 2025-12-06 08:06:58.238 251996 DEBUG nova.compute.manager [req-1a82e54a-2e00-4768-af6c-6e40d3467bbf req-6bfc6d1f-f73e-4b81-94ed-2621f26439ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Refreshing instance network info cache due to event network-changed-cbe16ad6-d576-4461-9682-554b48a77542. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:06:58 compute-0 nova_compute[251992]: 2025-12-06 08:06:58.239 251996 DEBUG oslo_concurrency.lockutils [req-1a82e54a-2e00-4768-af6c-6e40d3467bbf req-6bfc6d1f-f73e-4b81-94ed-2621f26439ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:06:58 compute-0 nova_compute[251992]: 2025-12-06 08:06:58.239 251996 DEBUG oslo_concurrency.lockutils [req-1a82e54a-2e00-4768-af6c-6e40d3467bbf req-6bfc6d1f-f73e-4b81-94ed-2621f26439ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:06:58 compute-0 nova_compute[251992]: 2025-12-06 08:06:58.239 251996 DEBUG nova.network.neutron [req-1a82e54a-2e00-4768-af6c-6e40d3467bbf req-6bfc6d1f-f73e-4b81-94ed-2621f26439ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Refreshing network info cache for port cbe16ad6-d576-4461-9682-554b48a77542 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:06:58 compute-0 nova_compute[251992]: 2025-12-06 08:06:58.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:06:58 compute-0 ceph-mon[74339]: pgmap v3427: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 104 op/s
Dec 06 08:06:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:06:59.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:06:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Dec 06 08:06:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:06:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:06:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:06:59.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:06:59 compute-0 nova_compute[251992]: 2025-12-06 08:06:59.944 251996 DEBUG nova.network.neutron [req-1a82e54a-2e00-4768-af6c-6e40d3467bbf req-6bfc6d1f-f73e-4b81-94ed-2621f26439ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updated VIF entry in instance network info cache for port cbe16ad6-d576-4461-9682-554b48a77542. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:06:59 compute-0 nova_compute[251992]: 2025-12-06 08:06:59.945 251996 DEBUG nova.network.neutron [req-1a82e54a-2e00-4768-af6c-6e40d3467bbf req-6bfc6d1f-f73e-4b81-94ed-2621f26439ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updating instance_info_cache with network_info: [{"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:06:59 compute-0 nova_compute[251992]: 2025-12-06 08:06:59.970 251996 DEBUG oslo_concurrency.lockutils [req-1a82e54a-2e00-4768-af6c-6e40d3467bbf req-6bfc6d1f-f73e-4b81-94ed-2621f26439ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:07:00 compute-0 nova_compute[251992]: 2025-12-06 08:07:00.155 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:00 compute-0 podman[386744]: 2025-12-06 08:07:00.443907408 +0000 UTC m=+0.095266541 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:07:00 compute-0 ceph-mon[74339]: pgmap v3428: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Dec 06 08:07:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:01.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Dec 06 08:07:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:01.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:02 compute-0 ceph-mon[74339]: pgmap v3429: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Dec 06 08:07:03 compute-0 nova_compute[251992]: 2025-12-06 08:07:03.014 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:03.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 144 op/s
Dec 06 08:07:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.003000080s ======
Dec 06 08:07:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:03.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec 06 08:07:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:03.882 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:03.883 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:03.883 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:04 compute-0 ceph-mon[74339]: pgmap v3430: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 144 op/s
Dec 06 08:07:05 compute-0 nova_compute[251992]: 2025-12-06 08:07:05.159 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:07:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:05.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:07:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 219 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 653 KiB/s wr, 161 op/s
Dec 06 08:07:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:05.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:06 compute-0 ovn_controller[147168]: 2025-12-06T08:07:06Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:9a:61 10.100.0.10
Dec 06 08:07:06 compute-0 ovn_controller[147168]: 2025-12-06T08:07:06Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:9a:61 10.100.0.10
Dec 06 08:07:06 compute-0 ceph-mon[74339]: pgmap v3431: 305 pgs: 305 active+clean; 219 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 653 KiB/s wr, 161 op/s
Dec 06 08:07:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:07.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:07 compute-0 podman[386774]: 2025-12-06 08:07:07.390240818 +0000 UTC m=+0.051567143 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:07:07 compute-0 podman[386775]: 2025-12-06 08:07:07.396564618 +0000 UTC m=+0.056230758 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 08:07:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 239 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 147 op/s
Dec 06 08:07:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:07.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:08 compute-0 nova_compute[251992]: 2025-12-06 08:07:08.017 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:09.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:09 compute-0 ceph-mon[74339]: pgmap v3432: 305 pgs: 305 active+clean; 239 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 147 op/s
Dec 06 08:07:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 239 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 111 op/s
Dec 06 08:07:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:09.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:10 compute-0 nova_compute[251992]: 2025-12-06 08:07:10.161 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2782371719' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:07:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2782371719' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:07:10 compute-0 ceph-mon[74339]: pgmap v3433: 305 pgs: 305 active+clean; 239 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 111 op/s
Dec 06 08:07:10 compute-0 sudo[386811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:10 compute-0 sudo[386811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:10 compute-0 sudo[386811]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:10 compute-0 sudo[386836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:10 compute-0 sudo[386836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:10 compute-0 sudo[386836]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:11.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 272 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 181 op/s
Dec 06 08:07:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:11.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:12 compute-0 ceph-mon[74339]: pgmap v3434: 305 pgs: 305 active+clean; 272 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 181 op/s
Dec 06 08:07:13 compute-0 nova_compute[251992]: 2025-12-06 08:07:13.019 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:07:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:07:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:07:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:07:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:07:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:07:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:13.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:13 compute-0 nova_compute[251992]: 2025-12-06 08:07:13.557 251996 INFO nova.compute.manager [None req-5a1a3afe-2b3e-4213-b482-dd3595f9d26c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Get console output
Dec 06 08:07:13 compute-0 nova_compute[251992]: 2025-12-06 08:07:13.565 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:07:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 272 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 508 KiB/s rd, 4.1 MiB/s wr, 108 op/s
Dec 06 08:07:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:13.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:14 compute-0 ceph-mon[74339]: pgmap v3435: 305 pgs: 305 active+clean; 272 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 508 KiB/s rd, 4.1 MiB/s wr, 108 op/s
Dec 06 08:07:15 compute-0 nova_compute[251992]: 2025-12-06 08:07:15.165 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:07:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:15.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:07:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Dec 06 08:07:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:15.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:16 compute-0 ceph-mon[74339]: pgmap v3436: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Dec 06 08:07:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:07:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:17.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:07:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 3.6 MiB/s wr, 110 op/s
Dec 06 08:07:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:17.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:17 compute-0 nova_compute[251992]: 2025-12-06 08:07:17.846 251996 DEBUG oslo_concurrency.lockutils [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "interface-131d5537-9b5a-407d-97af-efc5bd314951-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:17 compute-0 nova_compute[251992]: 2025-12-06 08:07:17.847 251996 DEBUG oslo_concurrency.lockutils [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "interface-131d5537-9b5a-407d-97af-efc5bd314951-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:17 compute-0 nova_compute[251992]: 2025-12-06 08:07:17.847 251996 DEBUG nova.objects.instance [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'flavor' on Instance uuid 131d5537-9b5a-407d-97af-efc5bd314951 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:07:18 compute-0 nova_compute[251992]: 2025-12-06 08:07:18.020 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:07:18
Dec 06 08:07:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:07:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:07:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.meta', '.rgw.root', 'images', '.mgr']
Dec 06 08:07:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:07:18 compute-0 ceph-mon[74339]: pgmap v3437: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 3.6 MiB/s wr, 110 op/s
Dec 06 08:07:18 compute-0 nova_compute[251992]: 2025-12-06 08:07:18.905 251996 DEBUG nova.objects.instance [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_requests' on Instance uuid 131d5537-9b5a-407d-97af-efc5bd314951 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:07:18 compute-0 nova_compute[251992]: 2025-12-06 08:07:18.932 251996 DEBUG nova.network.neutron [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:07:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:19.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 477 KiB/s rd, 2.7 MiB/s wr, 88 op/s
Dec 06 08:07:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:19.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:19 compute-0 nova_compute[251992]: 2025-12-06 08:07:19.704 251996 DEBUG nova.policy [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:07:20 compute-0 nova_compute[251992]: 2025-12-06 08:07:20.199 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:20 compute-0 nova_compute[251992]: 2025-12-06 08:07:20.560 251996 DEBUG nova.network.neutron [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Successfully created port: e01db4de-5597-4913-b15d-568789f0cf17 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:07:20 compute-0 ceph-mon[74339]: pgmap v3438: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 477 KiB/s rd, 2.7 MiB/s wr, 88 op/s
Dec 06 08:07:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:21.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 478 KiB/s rd, 2.8 MiB/s wr, 91 op/s
Dec 06 08:07:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:21.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:22 compute-0 nova_compute[251992]: 2025-12-06 08:07:22.007 251996 DEBUG nova.network.neutron [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Successfully updated port: e01db4de-5597-4913-b15d-568789f0cf17 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:07:22 compute-0 nova_compute[251992]: 2025-12-06 08:07:22.030 251996 DEBUG oslo_concurrency.lockutils [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:07:22 compute-0 nova_compute[251992]: 2025-12-06 08:07:22.030 251996 DEBUG oslo_concurrency.lockutils [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:07:22 compute-0 nova_compute[251992]: 2025-12-06 08:07:22.030 251996 DEBUG nova.network.neutron [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:07:22 compute-0 nova_compute[251992]: 2025-12-06 08:07:22.123 251996 DEBUG nova.compute.manager [req-a7831282-6295-490b-99b9-ee42e612b911 req-350efcb4-571a-48ea-8fa8-ea83deb5fc6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-changed-e01db4de-5597-4913-b15d-568789f0cf17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:07:22 compute-0 nova_compute[251992]: 2025-12-06 08:07:22.124 251996 DEBUG nova.compute.manager [req-a7831282-6295-490b-99b9-ee42e612b911 req-350efcb4-571a-48ea-8fa8-ea83deb5fc6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Refreshing instance network info cache due to event network-changed-e01db4de-5597-4913-b15d-568789f0cf17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:07:22 compute-0 nova_compute[251992]: 2025-12-06 08:07:22.124 251996 DEBUG oslo_concurrency.lockutils [req-a7831282-6295-490b-99b9-ee42e612b911 req-350efcb4-571a-48ea-8fa8-ea83deb5fc6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:07:22 compute-0 ceph-mon[74339]: pgmap v3439: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 478 KiB/s rd, 2.8 MiB/s wr, 91 op/s
Dec 06 08:07:23 compute-0 nova_compute[251992]: 2025-12-06 08:07:23.022 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:07:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:23.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:07:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 150 KiB/s rd, 192 KiB/s wr, 20 op/s
Dec 06 08:07:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:23.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:07:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:07:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:24 compute-0 ceph-mon[74339]: pgmap v3440: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 150 KiB/s rd, 192 KiB/s wr, 20 op/s
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.168 251996 DEBUG nova.network.neutron [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updating instance_info_cache with network_info: [{"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.202 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:25.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.431 251996 DEBUG oslo_concurrency.lockutils [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.432 251996 DEBUG oslo_concurrency.lockutils [req-a7831282-6295-490b-99b9-ee42e612b911 req-350efcb4-571a-48ea-8fa8-ea83deb5fc6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.432 251996 DEBUG nova.network.neutron [req-a7831282-6295-490b-99b9-ee42e612b911 req-350efcb4-571a-48ea-8fa8-ea83deb5fc6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Refreshing network info cache for port e01db4de-5597-4913-b15d-568789f0cf17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.436 251996 DEBUG nova.virt.libvirt.vif [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416534484',display_name='tempest-TestNetworkBasicOps-server-416534484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416534484',id=191,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcK72NHJQZj1SOlzPXNIyj/EahKYKvzGuI9wWjXw21tyomau5BzaHrS65HPkCW6d+F/TpM4Nf1hp15CwV3oJtnsWhkxf1U/DQj7i/qxu5KN1mZmgoDo9AVVhX47DkyW5Q==',key_name='tempest-TestNetworkBasicOps-1452587742',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:06:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-0jy5h723',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:06:53Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=131d5537-9b5a-407d-97af-efc5bd314951,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.436 251996 DEBUG nova.network.os_vif_util [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.437 251996 DEBUG nova.network.os_vif_util [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=e01db4de-5597-4913-b15d-568789f0cf17,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape01db4de-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.438 251996 DEBUG os_vif [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=e01db4de-5597-4913-b15d-568789f0cf17,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape01db4de-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.439 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.439 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.440 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.444 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.445 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape01db4de-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.445 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape01db4de-55, col_values=(('external_ids', {'iface-id': 'e01db4de-5597-4913-b15d-568789f0cf17', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:ec:42', 'vm-uuid': '131d5537-9b5a-407d-97af-efc5bd314951'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.447 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:25 compute-0 NetworkManager[48965]: <info>  [1765008445.4495] manager: (tape01db4de-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.450 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.459 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.461 251996 INFO os_vif [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=e01db4de-5597-4913-b15d-568789f0cf17,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape01db4de-55')
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.462 251996 DEBUG nova.virt.libvirt.vif [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416534484',display_name='tempest-TestNetworkBasicOps-server-416534484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416534484',id=191,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcK72NHJQZj1SOlzPXNIyj/EahKYKvzGuI9wWjXw21tyomau5BzaHrS65HPkCW6d+F/TpM4Nf1hp15CwV3oJtnsWhkxf1U/DQj7i/qxu5KN1mZmgoDo9AVVhX47DkyW5Q==',key_name='tempest-TestNetworkBasicOps-1452587742',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:06:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-0jy5h723',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:06:53Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=131d5537-9b5a-407d-97af-efc5bd314951,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.463 251996 DEBUG nova.network.os_vif_util [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.463 251996 DEBUG nova.network.os_vif_util [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=e01db4de-5597-4913-b15d-568789f0cf17,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape01db4de-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.471 251996 DEBUG nova.virt.libvirt.guest [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] attach device xml: <interface type="ethernet">
Dec 06 08:07:25 compute-0 nova_compute[251992]:   <mac address="fa:16:3e:70:ec:42"/>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   <model type="virtio"/>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   <mtu size="1442"/>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   <target dev="tape01db4de-55"/>
Dec 06 08:07:25 compute-0 nova_compute[251992]: </interface>
Dec 06 08:07:25 compute-0 nova_compute[251992]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 08:07:25 compute-0 NetworkManager[48965]: <info>  [1765008445.4888] manager: (tape01db4de-55): new Tun device (/org/freedesktop/NetworkManager/Devices/345)
Dec 06 08:07:25 compute-0 kernel: tape01db4de-55: entered promiscuous mode
Dec 06 08:07:25 compute-0 ovn_controller[147168]: 2025-12-06T08:07:25Z|00736|binding|INFO|Claiming lport e01db4de-5597-4913-b15d-568789f0cf17 for this chassis.
Dec 06 08:07:25 compute-0 ovn_controller[147168]: 2025-12-06T08:07:25Z|00737|binding|INFO|e01db4de-5597-4913-b15d-568789f0cf17: Claiming fa:16:3e:70:ec:42 10.100.0.29
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.492 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.507 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:ec:42 10.100.0.29'], port_security=['fa:16:3e:70:ec:42 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '131d5537-9b5a-407d-97af-efc5bd314951', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7c0486dd-d5db-4e6a-a51e-94ac8eaed290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0019305c-8ebd-4d1b-ac3e-eb77d507b742, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=e01db4de-5597-4913-b15d-568789f0cf17) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.508 158118 INFO neutron.agent.ovn.metadata.agent [-] Port e01db4de-5597-4913-b15d-568789f0cf17 in datapath b39fdb1c-6386-42dc-9c1d-e70684ee69f2 bound to our chassis
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.509 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b39fdb1c-6386-42dc-9c1d-e70684ee69f2
Dec 06 08:07:25 compute-0 systemd-udevd[386876]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.526 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fd83c4f4-87a3-49e4-a66e-8707c829cc53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.527 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb39fdb1c-61 in ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.528 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb39fdb1c-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.528 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1d34d7-8bd2-4757-9140-d278708245c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.529 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d364d203-a28c-4e77-a450-35719b950166]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.537 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:25 compute-0 ovn_controller[147168]: 2025-12-06T08:07:25Z|00738|binding|INFO|Setting lport e01db4de-5597-4913-b15d-568789f0cf17 ovn-installed in OVS
Dec 06 08:07:25 compute-0 NetworkManager[48965]: <info>  [1765008445.5409] device (tape01db4de-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:07:25 compute-0 ovn_controller[147168]: 2025-12-06T08:07:25Z|00739|binding|INFO|Setting lport e01db4de-5597-4913-b15d-568789f0cf17 up in Southbound
Dec 06 08:07:25 compute-0 NetworkManager[48965]: <info>  [1765008445.5425] device (tape01db4de-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.542 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.562 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5fa67d-b6e1-4f08-b8e6-145fc50ed51c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.577 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[798ca171-3e2c-4d72-9b1f-0a7d00aa7d1c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.610 251996 DEBUG nova.virt.libvirt.driver [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.611 251996 DEBUG nova.virt.libvirt.driver [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.611 251996 DEBUG nova.virt.libvirt.driver [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:e3:9a:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.611 251996 DEBUG nova.virt.libvirt.driver [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:70:ec:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.611 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[639efd78-7c91-483a-b3f7-dec04dbe88b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 systemd-udevd[386879]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:07:25 compute-0 NetworkManager[48965]: <info>  [1765008445.6209] manager: (tapb39fdb1c-60): new Veth device (/org/freedesktop/NetworkManager/Devices/346)
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.620 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[de1a3d25-d55b-49ee-9f1a-a2525d5c6e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 438 KiB/s rd, 200 KiB/s wr, 35 op/s
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.640 251996 DEBUG nova.virt.libvirt.guest [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:07:25 compute-0 nova_compute[251992]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   <nova:name>tempest-TestNetworkBasicOps-server-416534484</nova:name>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   <nova:creationTime>2025-12-06 08:07:25</nova:creationTime>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   <nova:flavor name="m1.nano">
Dec 06 08:07:25 compute-0 nova_compute[251992]:     <nova:memory>128</nova:memory>
Dec 06 08:07:25 compute-0 nova_compute[251992]:     <nova:disk>1</nova:disk>
Dec 06 08:07:25 compute-0 nova_compute[251992]:     <nova:swap>0</nova:swap>
Dec 06 08:07:25 compute-0 nova_compute[251992]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:07:25 compute-0 nova_compute[251992]:     <nova:vcpus>1</nova:vcpus>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   </nova:flavor>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   <nova:owner>
Dec 06 08:07:25 compute-0 nova_compute[251992]:     <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:07:25 compute-0 nova_compute[251992]:     <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   </nova:owner>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   <nova:ports>
Dec 06 08:07:25 compute-0 nova_compute[251992]:     <nova:port uuid="cbe16ad6-d576-4461-9682-554b48a77542">
Dec 06 08:07:25 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 08:07:25 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 08:07:25 compute-0 nova_compute[251992]:     <nova:port uuid="e01db4de-5597-4913-b15d-568789f0cf17">
Dec 06 08:07:25 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Dec 06 08:07:25 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 08:07:25 compute-0 nova_compute[251992]:   </nova:ports>
Dec 06 08:07:25 compute-0 nova_compute[251992]: </nova:instance>
Dec 06 08:07:25 compute-0 nova_compute[251992]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 08:07:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.651 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c6184464-57a1-43be-a708-d9dfe116eaf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:07:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:25.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.654 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[dab0c85c-8edc-4ddf-b8d9-5134a5fd9b6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 NetworkManager[48965]: <info>  [1765008445.6786] device (tapb39fdb1c-60): carrier: link connected
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.683 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[edfaa84b-05bd-4412-ab08-61a9096aa03d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.684 251996 DEBUG oslo_concurrency.lockutils [None req-3a2049f7-84b2-489b-8451-b7c9b9d8ef2c d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "interface-131d5537-9b5a-407d-97af-efc5bd314951-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.709 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[80df41aa-23b7-4858-87a7-6fc0674d78bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb39fdb1c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:c2:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871826, 'reachable_time': 19701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386902, 'error': None, 'target': 'ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.726 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc66557-9fa7-41ea-a88e-cd701477f50c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:c284'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 871826, 'tstamp': 871826}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386903, 'error': None, 'target': 'ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.740 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5c331e82-13fc-4550-8be2-b1c6a36e352b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb39fdb1c-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:c2:84'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871826, 'reachable_time': 19701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386904, 'error': None, 'target': 'ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.770 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[eed7d226-21b0-4a99-903b-a84e60e41607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.808 251996 DEBUG nova.compute.manager [req-43202ac5-a4ab-4714-8c96-b3dc221c6766 req-b018b865-b89d-496b-95c1-5d7d8c12a427 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-vif-plugged-e01db4de-5597-4913-b15d-568789f0cf17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.808 251996 DEBUG oslo_concurrency.lockutils [req-43202ac5-a4ab-4714-8c96-b3dc221c6766 req-b018b865-b89d-496b-95c1-5d7d8c12a427 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "131d5537-9b5a-407d-97af-efc5bd314951-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.809 251996 DEBUG oslo_concurrency.lockutils [req-43202ac5-a4ab-4714-8c96-b3dc221c6766 req-b018b865-b89d-496b-95c1-5d7d8c12a427 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.809 251996 DEBUG oslo_concurrency.lockutils [req-43202ac5-a4ab-4714-8c96-b3dc221c6766 req-b018b865-b89d-496b-95c1-5d7d8c12a427 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.809 251996 DEBUG nova.compute.manager [req-43202ac5-a4ab-4714-8c96-b3dc221c6766 req-b018b865-b89d-496b-95c1-5d7d8c12a427 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] No waiting events found dispatching network-vif-plugged-e01db4de-5597-4913-b15d-568789f0cf17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.809 251996 WARNING nova.compute.manager [req-43202ac5-a4ab-4714-8c96-b3dc221c6766 req-b018b865-b89d-496b-95c1-5d7d8c12a427 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received unexpected event network-vif-plugged-e01db4de-5597-4913-b15d-568789f0cf17 for instance with vm_state active and task_state None.
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.831 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7032838f-f844-48f0-a37a-d3b06877ef0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.833 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb39fdb1c-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.833 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.833 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb39fdb1c-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:25 compute-0 NetworkManager[48965]: <info>  [1765008445.8361] manager: (tapb39fdb1c-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.836 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:25 compute-0 kernel: tapb39fdb1c-60: entered promiscuous mode
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.838 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb39fdb1c-60, col_values=(('external_ids', {'iface-id': 'dd1c1cd7-6c16-4899-898c-492393f63b35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.839 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:25 compute-0 ovn_controller[147168]: 2025-12-06T08:07:25Z|00740|binding|INFO|Releasing lport dd1c1cd7-6c16-4899-898c-492393f63b35 from this chassis (sb_readonly=0)
Dec 06 08:07:25 compute-0 nova_compute[251992]: 2025-12-06 08:07:25.853 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.855 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b39fdb1c-6386-42dc-9c1d-e70684ee69f2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b39fdb1c-6386-42dc-9c1d-e70684ee69f2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.856 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a8701af3-3d0d-4ed6-be1c-863da63b4241]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.857 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-b39fdb1c-6386-42dc-9c1d-e70684ee69f2
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/b39fdb1c-6386-42dc-9c1d-e70684ee69f2.pid.haproxy
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID b39fdb1c-6386-42dc-9c1d-e70684ee69f2
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:07:25 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:25.859 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'env', 'PROCESS_TAG=haproxy-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b39fdb1c-6386-42dc-9c1d-e70684ee69f2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:07:26 compute-0 podman[386936]: 2025-12-06 08:07:26.225838004 +0000 UTC m=+0.046949977 container create 288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:07:26 compute-0 systemd[1]: Started libpod-conmon-288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7.scope.
Dec 06 08:07:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34bd730cbfa03c301053a09204466f076634cc7900e911fcfb53c416be6d3f40/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:26 compute-0 podman[386936]: 2025-12-06 08:07:26.201855147 +0000 UTC m=+0.022967140 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:07:26 compute-0 podman[386936]: 2025-12-06 08:07:26.309644195 +0000 UTC m=+0.130756178 container init 288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 06 08:07:26 compute-0 podman[386936]: 2025-12-06 08:07:26.319657186 +0000 UTC m=+0.140769159 container start 288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:07:26 compute-0 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[386952]: [NOTICE]   (386956) : New worker (386958) forked
Dec 06 08:07:26 compute-0 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[386952]: [NOTICE]   (386956) : Loading success.
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004343367590266146 of space, bias 1.0, pg target 1.3030102770798437 quantized to 32 (current 32)
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.6464803764191328 quantized to 32 (current 32)
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:07:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 08:07:26 compute-0 ceph-mon[74339]: pgmap v3441: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 438 KiB/s rd, 200 KiB/s wr, 35 op/s
Dec 06 08:07:27 compute-0 nova_compute[251992]: 2025-12-06 08:07:27.162 251996 DEBUG nova.network.neutron [req-a7831282-6295-490b-99b9-ee42e612b911 req-350efcb4-571a-48ea-8fa8-ea83deb5fc6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updated VIF entry in instance network info cache for port e01db4de-5597-4913-b15d-568789f0cf17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:07:27 compute-0 nova_compute[251992]: 2025-12-06 08:07:27.163 251996 DEBUG nova.network.neutron [req-a7831282-6295-490b-99b9-ee42e612b911 req-350efcb4-571a-48ea-8fa8-ea83deb5fc6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updating instance_info_cache with network_info: [{"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:07:27 compute-0 nova_compute[251992]: 2025-12-06 08:07:27.181 251996 DEBUG oslo_concurrency.lockutils [req-a7831282-6295-490b-99b9-ee42e612b911 req-350efcb4-571a-48ea-8fa8-ea83deb5fc6c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:07:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:27.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:07:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:07:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:07:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:07:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:07:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:27.393 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:07:27 compute-0 nova_compute[251992]: 2025-12-06 08:07:27.395 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:27.395 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:07:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 848 KiB/s rd, 34 KiB/s wr, 36 op/s
Dec 06 08:07:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:27.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.831172) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447831238, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2116, "num_deletes": 251, "total_data_size": 3813170, "memory_usage": 3890648, "flush_reason": "Manual Compaction"}
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447851861, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 3722802, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67943, "largest_seqno": 70057, "table_properties": {"data_size": 3713404, "index_size": 5891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19496, "raw_average_key_size": 20, "raw_value_size": 3694645, "raw_average_value_size": 3848, "num_data_blocks": 258, "num_entries": 960, "num_filter_entries": 960, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008236, "oldest_key_time": 1765008236, "file_creation_time": 1765008447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 20740 microseconds, and 8461 cpu microseconds.
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.851912) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 3722802 bytes OK
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.851937) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.853427) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.853440) EVENT_LOG_v1 {"time_micros": 1765008447853436, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.853456) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 3804635, prev total WAL file size 3804635, number of live WAL files 2.
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.854557) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(3635KB)], [152(11MB)]
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447854654, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 16201692, "oldest_snapshot_seqno": -1}
Dec 06 08:07:27 compute-0 nova_compute[251992]: 2025-12-06 08:07:27.901 251996 DEBUG nova.compute.manager [req-7fb85750-f7c9-4c12-b6df-d26f191bd579 req-ffe770f2-8bda-4d7f-adf1-d2077f290f50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-vif-plugged-e01db4de-5597-4913-b15d-568789f0cf17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:07:27 compute-0 nova_compute[251992]: 2025-12-06 08:07:27.901 251996 DEBUG oslo_concurrency.lockutils [req-7fb85750-f7c9-4c12-b6df-d26f191bd579 req-ffe770f2-8bda-4d7f-adf1-d2077f290f50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "131d5537-9b5a-407d-97af-efc5bd314951-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:27 compute-0 nova_compute[251992]: 2025-12-06 08:07:27.901 251996 DEBUG oslo_concurrency.lockutils [req-7fb85750-f7c9-4c12-b6df-d26f191bd579 req-ffe770f2-8bda-4d7f-adf1-d2077f290f50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:27 compute-0 nova_compute[251992]: 2025-12-06 08:07:27.902 251996 DEBUG oslo_concurrency.lockutils [req-7fb85750-f7c9-4c12-b6df-d26f191bd579 req-ffe770f2-8bda-4d7f-adf1-d2077f290f50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:27 compute-0 nova_compute[251992]: 2025-12-06 08:07:27.902 251996 DEBUG nova.compute.manager [req-7fb85750-f7c9-4c12-b6df-d26f191bd579 req-ffe770f2-8bda-4d7f-adf1-d2077f290f50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] No waiting events found dispatching network-vif-plugged-e01db4de-5597-4913-b15d-568789f0cf17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:07:27 compute-0 nova_compute[251992]: 2025-12-06 08:07:27.902 251996 WARNING nova.compute.manager [req-7fb85750-f7c9-4c12-b6df-d26f191bd579 req-ffe770f2-8bda-4d7f-adf1-d2077f290f50 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received unexpected event network-vif-plugged-e01db4de-5597-4913-b15d-568789f0cf17 for instance with vm_state active and task_state None.
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 10470 keys, 14220212 bytes, temperature: kUnknown
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447956594, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 14220212, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14151711, "index_size": 41252, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26181, "raw_key_size": 276164, "raw_average_key_size": 26, "raw_value_size": 13967336, "raw_average_value_size": 1334, "num_data_blocks": 1573, "num_entries": 10470, "num_filter_entries": 10470, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765008447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.956846) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 14220212 bytes
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.958435) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.8 rd, 139.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 11.9 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 10987, records dropped: 517 output_compression: NoCompression
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.958455) EVENT_LOG_v1 {"time_micros": 1765008447958445, "job": 94, "event": "compaction_finished", "compaction_time_micros": 102032, "compaction_time_cpu_micros": 65328, "output_level": 6, "num_output_files": 1, "total_output_size": 14220212, "num_input_records": 10987, "num_output_records": 10470, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447959138, "job": 94, "event": "table_file_deletion", "file_number": 154}
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008447961171, "job": 94, "event": "table_file_deletion", "file_number": 152}
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.854367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.961337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.961343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.961345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.961347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:27 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:27.961349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:28 compute-0 nova_compute[251992]: 2025-12-06 08:07:28.024 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:28 compute-0 ovn_controller[147168]: 2025-12-06T08:07:28Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:ec:42 10.100.0.29
Dec 06 08:07:28 compute-0 ovn_controller[147168]: 2025-12-06T08:07:28Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:ec:42 10.100.0.29
Dec 06 08:07:28 compute-0 ceph-mon[74339]: pgmap v3442: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 848 KiB/s rd, 34 KiB/s wr, 36 op/s
Dec 06 08:07:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:07:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:29.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:07:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 847 KiB/s rd, 34 KiB/s wr, 36 op/s
Dec 06 08:07:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:29.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:30 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:07:30.397 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:07:30 compute-0 nova_compute[251992]: 2025-12-06 08:07:30.449 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:30 compute-0 ceph-mon[74339]: pgmap v3443: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 847 KiB/s rd, 34 KiB/s wr, 36 op/s
Dec 06 08:07:30 compute-0 sudo[386969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:30 compute-0 sudo[386969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:30 compute-0 sudo[386969]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:30 compute-0 sudo[387000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:30 compute-0 sudo[387000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:30 compute-0 sudo[387000]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:30 compute-0 podman[386993]: 2025-12-06 08:07:30.980879043 +0000 UTC m=+0.100184284 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:07:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:31.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 73 op/s
Dec 06 08:07:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:07:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:31.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:07:32 compute-0 sudo[387047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:32 compute-0 sudo[387047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:32 compute-0 sudo[387047]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:32 compute-0 sudo[387072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:07:32 compute-0 sudo[387072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:32 compute-0 sudo[387072]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:32 compute-0 sudo[387097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:32 compute-0 sudo[387097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:32 compute-0 sudo[387097]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:32 compute-0 sudo[387122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:07:32 compute-0 sudo[387122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:32 compute-0 ceph-mon[74339]: pgmap v3444: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 73 op/s
Dec 06 08:07:33 compute-0 nova_compute[251992]: 2025-12-06 08:07:33.027 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:33 compute-0 sudo[387122]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:33.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 08:07:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 08:07:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:07:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:07:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:07:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:07:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:07:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:07:33 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b3ee02da-950e-4618-8890-34af03598acb does not exist
Dec 06 08:07:33 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4ebe7eb4-7532-47ac-a261-30b559709a16 does not exist
Dec 06 08:07:33 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 91bafb54-91a6-4aa1-b163-6c9d7d39acaa does not exist
Dec 06 08:07:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:07:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:07:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:07:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:07:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:07:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:07:33 compute-0 sudo[387179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:33 compute-0 sudo[387179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:33 compute-0 sudo[387179]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:33 compute-0 sudo[387204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:07:33 compute-0 sudo[387204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:33 compute-0 sudo[387204]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:33 compute-0 sudo[387229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:33 compute-0 sudo[387229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:33 compute-0 sudo[387229]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.0 KiB/s wr, 70 op/s
Dec 06 08:07:33 compute-0 sudo[387254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:07:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:33 compute-0 sudo[387254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:33.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1761730012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 08:07:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:07:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:07:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:07:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:07:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:07:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:07:33 compute-0 podman[387320]: 2025-12-06 08:07:33.970500805 +0000 UTC m=+0.042466417 container create cc74d89187a2cb32f151d30af7d94e4c74ca53219f518dc25c1e7f11c1889e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Dec 06 08:07:34 compute-0 systemd[1]: Started libpod-conmon-cc74d89187a2cb32f151d30af7d94e4c74ca53219f518dc25c1e7f11c1889e30.scope.
Dec 06 08:07:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:07:34 compute-0 podman[387320]: 2025-12-06 08:07:34.042771555 +0000 UTC m=+0.114737187 container init cc74d89187a2cb32f151d30af7d94e4c74ca53219f518dc25c1e7f11c1889e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meitner, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 08:07:34 compute-0 podman[387320]: 2025-12-06 08:07:33.951777949 +0000 UTC m=+0.023743581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:07:34 compute-0 podman[387320]: 2025-12-06 08:07:34.04852223 +0000 UTC m=+0.120487832 container start cc74d89187a2cb32f151d30af7d94e4c74ca53219f518dc25c1e7f11c1889e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meitner, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:07:34 compute-0 podman[387320]: 2025-12-06 08:07:34.051835289 +0000 UTC m=+0.123800891 container attach cc74d89187a2cb32f151d30af7d94e4c74ca53219f518dc25c1e7f11c1889e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meitner, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 08:07:34 compute-0 systemd[1]: libpod-cc74d89187a2cb32f151d30af7d94e4c74ca53219f518dc25c1e7f11c1889e30.scope: Deactivated successfully.
Dec 06 08:07:34 compute-0 wizardly_meitner[387336]: 167 167
Dec 06 08:07:34 compute-0 conmon[387336]: conmon cc74d89187a2cb32f151 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc74d89187a2cb32f151d30af7d94e4c74ca53219f518dc25c1e7f11c1889e30.scope/container/memory.events
Dec 06 08:07:34 compute-0 podman[387320]: 2025-12-06 08:07:34.056189196 +0000 UTC m=+0.128154828 container died cc74d89187a2cb32f151d30af7d94e4c74ca53219f518dc25c1e7f11c1889e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 08:07:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e9e5f671366b86d04f169e75272dfc0828a4f122c217887ea9f1b26e9437419-merged.mount: Deactivated successfully.
Dec 06 08:07:34 compute-0 podman[387320]: 2025-12-06 08:07:34.10445768 +0000 UTC m=+0.176423292 container remove cc74d89187a2cb32f151d30af7d94e4c74ca53219f518dc25c1e7f11c1889e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:07:34 compute-0 systemd[1]: libpod-conmon-cc74d89187a2cb32f151d30af7d94e4c74ca53219f518dc25c1e7f11c1889e30.scope: Deactivated successfully.
Dec 06 08:07:34 compute-0 podman[387361]: 2025-12-06 08:07:34.266809591 +0000 UTC m=+0.040188207 container create 10e6b363307079b399463d5ba3d4568ac5b051387053f3ec29466f755eac7eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_buck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:07:34 compute-0 systemd[1]: Started libpod-conmon-10e6b363307079b399463d5ba3d4568ac5b051387053f3ec29466f755eac7eb2.scope.
Dec 06 08:07:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:07:34 compute-0 podman[387361]: 2025-12-06 08:07:34.250091509 +0000 UTC m=+0.023470145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc6f64f378cfcd6bd9e7d511dacd545efe787e8877a8358c994642d183913e6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc6f64f378cfcd6bd9e7d511dacd545efe787e8877a8358c994642d183913e6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc6f64f378cfcd6bd9e7d511dacd545efe787e8877a8358c994642d183913e6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc6f64f378cfcd6bd9e7d511dacd545efe787e8877a8358c994642d183913e6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc6f64f378cfcd6bd9e7d511dacd545efe787e8877a8358c994642d183913e6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:34 compute-0 podman[387361]: 2025-12-06 08:07:34.368543635 +0000 UTC m=+0.141922271 container init 10e6b363307079b399463d5ba3d4568ac5b051387053f3ec29466f755eac7eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:07:34 compute-0 podman[387361]: 2025-12-06 08:07:34.378343439 +0000 UTC m=+0.151722055 container start 10e6b363307079b399463d5ba3d4568ac5b051387053f3ec29466f755eac7eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:07:34 compute-0 podman[387361]: 2025-12-06 08:07:34.382204594 +0000 UTC m=+0.155583220 container attach 10e6b363307079b399463d5ba3d4568ac5b051387053f3ec29466f755eac7eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 08:07:34 compute-0 ceph-mon[74339]: pgmap v3445: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.0 KiB/s wr, 70 op/s
Dec 06 08:07:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2083970358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:35 compute-0 wizardly_buck[387377]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:07:35 compute-0 wizardly_buck[387377]: --> relative data size: 1.0
Dec 06 08:07:35 compute-0 wizardly_buck[387377]: --> All data devices are unavailable
Dec 06 08:07:35 compute-0 systemd[1]: libpod-10e6b363307079b399463d5ba3d4568ac5b051387053f3ec29466f755eac7eb2.scope: Deactivated successfully.
Dec 06 08:07:35 compute-0 podman[387361]: 2025-12-06 08:07:35.180609849 +0000 UTC m=+0.953988485 container died 10e6b363307079b399463d5ba3d4568ac5b051387053f3ec29466f755eac7eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_buck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Dec 06 08:07:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc6f64f378cfcd6bd9e7d511dacd545efe787e8877a8358c994642d183913e6f-merged.mount: Deactivated successfully.
Dec 06 08:07:35 compute-0 podman[387361]: 2025-12-06 08:07:35.236465006 +0000 UTC m=+1.009843622 container remove 10e6b363307079b399463d5ba3d4568ac5b051387053f3ec29466f755eac7eb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 08:07:35 compute-0 systemd[1]: libpod-conmon-10e6b363307079b399463d5ba3d4568ac5b051387053f3ec29466f755eac7eb2.scope: Deactivated successfully.
Dec 06 08:07:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:07:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:35.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:07:35 compute-0 sudo[387254]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:35 compute-0 sudo[387405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:35 compute-0 sudo[387405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:35 compute-0 sudo[387405]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:35 compute-0 sudo[387430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:07:35 compute-0 sudo[387430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:35 compute-0 sudo[387430]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:35 compute-0 sudo[387455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:35 compute-0 sudo[387455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:35 compute-0 sudo[387455]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:35 compute-0 nova_compute[251992]: 2025-12-06 08:07:35.451 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:35 compute-0 sudo[387480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:07:35 compute-0 sudo[387480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Dec 06 08:07:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:35.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:35 compute-0 podman[387545]: 2025-12-06 08:07:35.811784471 +0000 UTC m=+0.039295672 container create 961df5b915c35011ef39701c4d42096cc7b55314a8c6e144d1181ba417fb0cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 08:07:35 compute-0 systemd[1]: Started libpod-conmon-961df5b915c35011ef39701c4d42096cc7b55314a8c6e144d1181ba417fb0cb1.scope.
Dec 06 08:07:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:07:35 compute-0 podman[387545]: 2025-12-06 08:07:35.876173178 +0000 UTC m=+0.103684389 container init 961df5b915c35011ef39701c4d42096cc7b55314a8c6e144d1181ba417fb0cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chaum, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 08:07:35 compute-0 podman[387545]: 2025-12-06 08:07:35.882278483 +0000 UTC m=+0.109789674 container start 961df5b915c35011ef39701c4d42096cc7b55314a8c6e144d1181ba417fb0cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chaum, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:07:35 compute-0 podman[387545]: 2025-12-06 08:07:35.885431078 +0000 UTC m=+0.112942269 container attach 961df5b915c35011ef39701c4d42096cc7b55314a8c6e144d1181ba417fb0cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:07:35 compute-0 suspicious_chaum[387561]: 167 167
Dec 06 08:07:35 compute-0 systemd[1]: libpod-961df5b915c35011ef39701c4d42096cc7b55314a8c6e144d1181ba417fb0cb1.scope: Deactivated successfully.
Dec 06 08:07:35 compute-0 podman[387545]: 2025-12-06 08:07:35.792834079 +0000 UTC m=+0.020345290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:07:35 compute-0 podman[387545]: 2025-12-06 08:07:35.888412659 +0000 UTC m=+0.115923850 container died 961df5b915c35011ef39701c4d42096cc7b55314a8c6e144d1181ba417fb0cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 08:07:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfcecfdc507dd7dbbf822166c7bcb1da1bc2b7c537088424cff163102ac38ff1-merged.mount: Deactivated successfully.
Dec 06 08:07:35 compute-0 podman[387545]: 2025-12-06 08:07:35.919782485 +0000 UTC m=+0.147293676 container remove 961df5b915c35011ef39701c4d42096cc7b55314a8c6e144d1181ba417fb0cb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:07:35 compute-0 systemd[1]: libpod-conmon-961df5b915c35011ef39701c4d42096cc7b55314a8c6e144d1181ba417fb0cb1.scope: Deactivated successfully.
Dec 06 08:07:36 compute-0 podman[387583]: 2025-12-06 08:07:36.102802794 +0000 UTC m=+0.043140015 container create 7156cbb22a528a9ca89e54306dcb409d5066cbabad1814313153d96ad56055b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dirac, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 08:07:36 compute-0 systemd[1]: Started libpod-conmon-7156cbb22a528a9ca89e54306dcb409d5066cbabad1814313153d96ad56055b4.scope.
Dec 06 08:07:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:07:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02630d7fdb11efc5cd3b114e80d8365ff0d4f8056d40582a221711b15fab4ae2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02630d7fdb11efc5cd3b114e80d8365ff0d4f8056d40582a221711b15fab4ae2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02630d7fdb11efc5cd3b114e80d8365ff0d4f8056d40582a221711b15fab4ae2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02630d7fdb11efc5cd3b114e80d8365ff0d4f8056d40582a221711b15fab4ae2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:36 compute-0 podman[387583]: 2025-12-06 08:07:36.162633578 +0000 UTC m=+0.102970819 container init 7156cbb22a528a9ca89e54306dcb409d5066cbabad1814313153d96ad56055b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:07:36 compute-0 podman[387583]: 2025-12-06 08:07:36.169015571 +0000 UTC m=+0.109352792 container start 7156cbb22a528a9ca89e54306dcb409d5066cbabad1814313153d96ad56055b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dirac, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 08:07:36 compute-0 podman[387583]: 2025-12-06 08:07:36.173526102 +0000 UTC m=+0.113863343 container attach 7156cbb22a528a9ca89e54306dcb409d5066cbabad1814313153d96ad56055b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dirac, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 08:07:36 compute-0 podman[387583]: 2025-12-06 08:07:36.082935247 +0000 UTC m=+0.023272498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:07:36 compute-0 nice_dirac[387600]: {
Dec 06 08:07:36 compute-0 nice_dirac[387600]:     "0": [
Dec 06 08:07:36 compute-0 nice_dirac[387600]:         {
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             "devices": [
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "/dev/loop3"
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             ],
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             "lv_name": "ceph_lv0",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             "lv_size": "7511998464",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             "name": "ceph_lv0",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             "tags": {
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "ceph.cluster_name": "ceph",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "ceph.crush_device_class": "",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "ceph.encrypted": "0",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "ceph.osd_id": "0",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "ceph.type": "block",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:                 "ceph.vdo": "0"
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             },
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             "type": "block",
Dec 06 08:07:36 compute-0 nice_dirac[387600]:             "vg_name": "ceph_vg0"
Dec 06 08:07:36 compute-0 nice_dirac[387600]:         }
Dec 06 08:07:36 compute-0 nice_dirac[387600]:     ]
Dec 06 08:07:36 compute-0 nice_dirac[387600]: }
Dec 06 08:07:36 compute-0 ceph-mon[74339]: pgmap v3446: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Dec 06 08:07:36 compute-0 systemd[1]: libpod-7156cbb22a528a9ca89e54306dcb409d5066cbabad1814313153d96ad56055b4.scope: Deactivated successfully.
Dec 06 08:07:36 compute-0 podman[387583]: 2025-12-06 08:07:36.965005649 +0000 UTC m=+0.905342900 container died 7156cbb22a528a9ca89e54306dcb409d5066cbabad1814313153d96ad56055b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:07:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-02630d7fdb11efc5cd3b114e80d8365ff0d4f8056d40582a221711b15fab4ae2-merged.mount: Deactivated successfully.
Dec 06 08:07:37 compute-0 podman[387583]: 2025-12-06 08:07:37.027696111 +0000 UTC m=+0.968033342 container remove 7156cbb22a528a9ca89e54306dcb409d5066cbabad1814313153d96ad56055b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:07:37 compute-0 systemd[1]: libpod-conmon-7156cbb22a528a9ca89e54306dcb409d5066cbabad1814313153d96ad56055b4.scope: Deactivated successfully.
Dec 06 08:07:37 compute-0 sudo[387480]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:37 compute-0 sudo[387620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:37 compute-0 sudo[387620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:37 compute-0 sudo[387620]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:37 compute-0 sudo[387645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:07:37 compute-0 sudo[387645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:37 compute-0 sudo[387645]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:37 compute-0 sudo[387670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:37.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:37 compute-0 sudo[387670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:37 compute-0 sudo[387670]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:37 compute-0 sudo[387695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:07:37 compute-0 sudo[387695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.3 KiB/s wr, 61 op/s
Dec 06 08:07:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:37.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:37 compute-0 podman[387762]: 2025-12-06 08:07:37.733390443 +0000 UTC m=+0.065170659 container create 47600a33b325883f01f42c8b9b2f2de43f1244ea5c72aa6eda3dd42d632adf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 08:07:37 compute-0 systemd[1]: Started libpod-conmon-47600a33b325883f01f42c8b9b2f2de43f1244ea5c72aa6eda3dd42d632adf69.scope.
Dec 06 08:07:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:07:37 compute-0 podman[387762]: 2025-12-06 08:07:37.70844788 +0000 UTC m=+0.040228186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:07:37 compute-0 podman[387762]: 2025-12-06 08:07:37.805976772 +0000 UTC m=+0.137757018 container init 47600a33b325883f01f42c8b9b2f2de43f1244ea5c72aa6eda3dd42d632adf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lumiere, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:07:37 compute-0 podman[387762]: 2025-12-06 08:07:37.813907496 +0000 UTC m=+0.145687712 container start 47600a33b325883f01f42c8b9b2f2de43f1244ea5c72aa6eda3dd42d632adf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lumiere, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:07:37 compute-0 podman[387762]: 2025-12-06 08:07:37.817266466 +0000 UTC m=+0.149046722 container attach 47600a33b325883f01f42c8b9b2f2de43f1244ea5c72aa6eda3dd42d632adf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:07:37 compute-0 confident_lumiere[387780]: 167 167
Dec 06 08:07:37 compute-0 systemd[1]: libpod-47600a33b325883f01f42c8b9b2f2de43f1244ea5c72aa6eda3dd42d632adf69.scope: Deactivated successfully.
Dec 06 08:07:37 compute-0 podman[387762]: 2025-12-06 08:07:37.819522977 +0000 UTC m=+0.151303203 container died 47600a33b325883f01f42c8b9b2f2de43f1244ea5c72aa6eda3dd42d632adf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lumiere, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 08:07:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3e884852fae16a9f8969f92fe46ae5b54e47c2f48073fb233d15e311a25e72d-merged.mount: Deactivated successfully.
Dec 06 08:07:37 compute-0 podman[387762]: 2025-12-06 08:07:37.859895716 +0000 UTC m=+0.191675942 container remove 47600a33b325883f01f42c8b9b2f2de43f1244ea5c72aa6eda3dd42d632adf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:07:37 compute-0 podman[387779]: 2025-12-06 08:07:37.86631968 +0000 UTC m=+0.096863425 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 08:07:37 compute-0 systemd[1]: libpod-conmon-47600a33b325883f01f42c8b9b2f2de43f1244ea5c72aa6eda3dd42d632adf69.scope: Deactivated successfully.
Dec 06 08:07:37 compute-0 podman[387776]: 2025-12-06 08:07:37.877033699 +0000 UTC m=+0.100375349 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 08:07:38 compute-0 nova_compute[251992]: 2025-12-06 08:07:38.030 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:38 compute-0 podman[387842]: 2025-12-06 08:07:38.065272589 +0000 UTC m=+0.054399920 container create 807d261f8b12ae6bac171ec824d68f00a8696335df53b819bed6f32a43a4ddc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 08:07:38 compute-0 systemd[1]: Started libpod-conmon-807d261f8b12ae6bac171ec824d68f00a8696335df53b819bed6f32a43a4ddc4.scope.
Dec 06 08:07:38 compute-0 podman[387842]: 2025-12-06 08:07:38.050313245 +0000 UTC m=+0.039440616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:07:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2312e16480646dc51513caf25fec358f275e1bba8dfebe1ebb89e319c3e0ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2312e16480646dc51513caf25fec358f275e1bba8dfebe1ebb89e319c3e0ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2312e16480646dc51513caf25fec358f275e1bba8dfebe1ebb89e319c3e0ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2312e16480646dc51513caf25fec358f275e1bba8dfebe1ebb89e319c3e0ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:07:38 compute-0 podman[387842]: 2025-12-06 08:07:38.174917057 +0000 UTC m=+0.164044398 container init 807d261f8b12ae6bac171ec824d68f00a8696335df53b819bed6f32a43a4ddc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 08:07:38 compute-0 podman[387842]: 2025-12-06 08:07:38.186937931 +0000 UTC m=+0.176065272 container start 807d261f8b12ae6bac171ec824d68f00a8696335df53b819bed6f32a43a4ddc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 08:07:38 compute-0 podman[387842]: 2025-12-06 08:07:38.19059271 +0000 UTC m=+0.179720111 container attach 807d261f8b12ae6bac171ec824d68f00a8696335df53b819bed6f32a43a4ddc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 08:07:38 compute-0 ceph-mon[74339]: pgmap v3447: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.3 KiB/s wr, 61 op/s
Dec 06 08:07:39 compute-0 sharp_wright[387858]: {
Dec 06 08:07:39 compute-0 sharp_wright[387858]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:07:39 compute-0 sharp_wright[387858]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:07:39 compute-0 sharp_wright[387858]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:07:39 compute-0 sharp_wright[387858]:         "osd_id": 0,
Dec 06 08:07:39 compute-0 sharp_wright[387858]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:07:39 compute-0 sharp_wright[387858]:         "type": "bluestore"
Dec 06 08:07:39 compute-0 sharp_wright[387858]:     }
Dec 06 08:07:39 compute-0 sharp_wright[387858]: }
Dec 06 08:07:39 compute-0 systemd[1]: libpod-807d261f8b12ae6bac171ec824d68f00a8696335df53b819bed6f32a43a4ddc4.scope: Deactivated successfully.
Dec 06 08:07:39 compute-0 podman[387842]: 2025-12-06 08:07:39.056868286 +0000 UTC m=+1.045995637 container died 807d261f8b12ae6bac171ec824d68f00a8696335df53b819bed6f32a43a4ddc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:07:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d2312e16480646dc51513caf25fec358f275e1bba8dfebe1ebb89e319c3e0ee-merged.mount: Deactivated successfully.
Dec 06 08:07:39 compute-0 podman[387842]: 2025-12-06 08:07:39.103306998 +0000 UTC m=+1.092434339 container remove 807d261f8b12ae6bac171ec824d68f00a8696335df53b819bed6f32a43a4ddc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 08:07:39 compute-0 systemd[1]: libpod-conmon-807d261f8b12ae6bac171ec824d68f00a8696335df53b819bed6f32a43a4ddc4.scope: Deactivated successfully.
Dec 06 08:07:39 compute-0 sudo[387695]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:07:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:07:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:07:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:07:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 43b327ed-db33-4bb4-bf8f-6ce12921184d does not exist
Dec 06 08:07:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8280a7c5-af61-4ec3-bd36-799652bc9057 does not exist
Dec 06 08:07:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5ea7ed65-f16a-4a60-aaf3-2f2418b931ae does not exist
Dec 06 08:07:39 compute-0 sudo[387892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:39 compute-0 sudo[387892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:39 compute-0 sudo[387892]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:39.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:39 compute-0 sudo[387917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:07:39 compute-0 sudo[387917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:39 compute-0 sudo[387917]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.3 KiB/s wr, 42 op/s
Dec 06 08:07:39 compute-0 nova_compute[251992]: 2025-12-06 08:07:39.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:39.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:39 compute-0 nova_compute[251992]: 2025-12-06 08:07:39.808 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:39 compute-0 nova_compute[251992]: 2025-12-06 08:07:39.808 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:39 compute-0 nova_compute[251992]: 2025-12-06 08:07:39.809 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:39 compute-0 nova_compute[251992]: 2025-12-06 08:07:39.809 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:07:39 compute-0 nova_compute[251992]: 2025-12-06 08:07:39.809 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:07:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:07:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3044460620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:40 compute-0 nova_compute[251992]: 2025-12-06 08:07:40.268 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:07:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:07:40 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:07:40 compute-0 nova_compute[251992]: 2025-12-06 08:07:40.455 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:40 compute-0 nova_compute[251992]: 2025-12-06 08:07:40.506 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:07:40 compute-0 nova_compute[251992]: 2025-12-06 08:07:40.507 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:07:40 compute-0 nova_compute[251992]: 2025-12-06 08:07:40.688 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:07:40 compute-0 nova_compute[251992]: 2025-12-06 08:07:40.690 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3925MB free_disk=20.897048950195312GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:07:40 compute-0 nova_compute[251992]: 2025-12-06 08:07:40.690 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:07:40 compute-0 nova_compute[251992]: 2025-12-06 08:07:40.690 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:07:40 compute-0 nova_compute[251992]: 2025-12-06 08:07:40.931 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 131d5537-9b5a-407d-97af-efc5bd314951 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:07:40 compute-0 nova_compute[251992]: 2025-12-06 08:07:40.932 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:07:40 compute-0 nova_compute[251992]: 2025-12-06 08:07:40.932 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:07:41 compute-0 nova_compute[251992]: 2025-12-06 08:07:41.032 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:07:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:41.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:07:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/847495634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:41 compute-0 nova_compute[251992]: 2025-12-06 08:07:41.498 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:07:41 compute-0 nova_compute[251992]: 2025-12-06 08:07:41.505 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:07:41 compute-0 ceph-mon[74339]: pgmap v3448: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.3 KiB/s wr, 42 op/s
Dec 06 08:07:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3044460620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 21 KiB/s wr, 81 op/s
Dec 06 08:07:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:07:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:41.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:07:41 compute-0 nova_compute[251992]: 2025-12-06 08:07:41.744 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:07:41 compute-0 nova_compute[251992]: 2025-12-06 08:07:41.794 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:07:41 compute-0 nova_compute[251992]: 2025-12-06 08:07:41.795 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:07:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3191702422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/847495634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:42 compute-0 ceph-mon[74339]: pgmap v3449: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 21 KiB/s wr, 81 op/s
Dec 06 08:07:43 compute-0 nova_compute[251992]: 2025-12-06 08:07:43.034 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:07:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:07:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:07:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:07:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:07:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:07:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:43.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 20 KiB/s wr, 44 op/s
Dec 06 08:07:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:43.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:43 compute-0 nova_compute[251992]: 2025-12-06 08:07:43.789 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:44 compute-0 nova_compute[251992]: 2025-12-06 08:07:44.655 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:44 compute-0 ceph-mon[74339]: pgmap v3450: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 529 KiB/s rd, 20 KiB/s wr, 44 op/s
Dec 06 08:07:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:45.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:45 compute-0 nova_compute[251992]: 2025-12-06 08:07:45.459 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 316 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 537 KiB/s rd, 1.5 MiB/s wr, 59 op/s
Dec 06 08:07:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:07:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:45.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:07:46 compute-0 ceph-mon[74339]: pgmap v3451: 305 pgs: 305 active+clean; 316 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 537 KiB/s rd, 1.5 MiB/s wr, 59 op/s
Dec 06 08:07:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:47.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 548 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 06 08:07:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:47.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:47 compute-0 sshd-session[387990]: Connection reset by authenticating user root 91.202.233.33 port 62170 [preauth]
Dec 06 08:07:48 compute-0 nova_compute[251992]: 2025-12-06 08:07:48.037 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:48 compute-0 nova_compute[251992]: 2025-12-06 08:07:48.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:48 compute-0 nova_compute[251992]: 2025-12-06 08:07:48.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:07:48 compute-0 nova_compute[251992]: 2025-12-06 08:07:48.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:07:48 compute-0 ceph-mon[74339]: pgmap v3452: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 548 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 06 08:07:48 compute-0 nova_compute[251992]: 2025-12-06 08:07:48.925 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:07:48 compute-0 nova_compute[251992]: 2025-12-06 08:07:48.925 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:07:48 compute-0 nova_compute[251992]: 2025-12-06 08:07:48.926 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:07:48 compute-0 nova_compute[251992]: 2025-12-06 08:07:48.926 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 131d5537-9b5a-407d-97af-efc5bd314951 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:07:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:49.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 498 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 08:07:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:49.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/856270045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:07:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3251160969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2436695836' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:50 compute-0 sshd-session[387993]: Connection reset by authenticating user root 91.202.233.33 port 62192 [preauth]
Dec 06 08:07:50 compute-0 nova_compute[251992]: 2025-12-06 08:07:50.462 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:50 compute-0 ceph-mon[74339]: pgmap v3453: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 498 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 08:07:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/419021482' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:07:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/228762705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:07:51 compute-0 sudo[387998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:51 compute-0 sudo[387998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:51 compute-0 sudo[387998]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:51 compute-0 sudo[388023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:07:51 compute-0 sudo[388023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:07:51 compute-0 sudo[388023]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:51.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 517 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Dec 06 08:07:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:51.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:51 compute-0 nova_compute[251992]: 2025-12-06 08:07:51.791 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updating instance_info_cache with network_info: [{"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:07:51 compute-0 nova_compute[251992]: 2025-12-06 08:07:51.834 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:07:51 compute-0 nova_compute[251992]: 2025-12-06 08:07:51.835 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:07:51 compute-0 nova_compute[251992]: 2025-12-06 08:07:51.835 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:51 compute-0 nova_compute[251992]: 2025-12-06 08:07:51.836 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:51 compute-0 sshd-session[387996]: Connection reset by authenticating user root 91.202.233.33 port 62208 [preauth]
Dec 06 08:07:52 compute-0 nova_compute[251992]: 2025-12-06 08:07:52.829 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:52 compute-0 ceph-mon[74339]: pgmap v3454: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 517 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Dec 06 08:07:53 compute-0 nova_compute[251992]: 2025-12-06 08:07:53.040 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:53.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:53 compute-0 ovn_controller[147168]: 2025-12-06T08:07:53Z|00741|binding|INFO|Releasing lport dd1c1cd7-6c16-4899-898c-492393f63b35 from this chassis (sb_readonly=0)
Dec 06 08:07:53 compute-0 ovn_controller[147168]: 2025-12-06T08:07:53Z|00742|binding|INFO|Releasing lport bc641a00-a4a7-4082-9aee-a2ad8d4616c8 from this chassis (sb_readonly=0)
Dec 06 08:07:53 compute-0 nova_compute[251992]: 2025-12-06 08:07:53.493 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Dec 06 08:07:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:53.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:54 compute-0 sshd-session[388049]: Invalid user pi from 91.202.233.33 port 22572
Dec 06 08:07:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:54 compute-0 sshd-session[388049]: Connection reset by invalid user pi 91.202.233.33 port 22572 [preauth]
Dec 06 08:07:54 compute-0 nova_compute[251992]: 2025-12-06 08:07:54.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:55 compute-0 ceph-mon[74339]: pgmap v3455: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Dec 06 08:07:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:55.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:07:55 compute-0 nova_compute[251992]: 2025-12-06 08:07:55.466 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Dec 06 08:07:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:55.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:56 compute-0 sshd-session[388052]: Connection reset by authenticating user root 91.202.233.33 port 22586 [preauth]
Dec 06 08:07:56 compute-0 nova_compute[251992]: 2025-12-06 08:07:56.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:56 compute-0 nova_compute[251992]: 2025-12-06 08:07:56.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:07:57 compute-0 ceph-mon[74339]: pgmap v3456: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Dec 06 08:07:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:57.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 305 KiB/s wr, 115 op/s
Dec 06 08:07:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:57.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:58 compute-0 nova_compute[251992]: 2025-12-06 08:07:58.042 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:07:59 compute-0 ceph-mon[74339]: pgmap v3457: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 305 KiB/s wr, 115 op/s
Dec 06 08:07:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:07:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:07:59.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:07:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.328198) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479328266, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 539, "num_deletes": 251, "total_data_size": 570899, "memory_usage": 580592, "flush_reason": "Manual Compaction"}
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479333313, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 426863, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70058, "largest_seqno": 70596, "table_properties": {"data_size": 424076, "index_size": 758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7616, "raw_average_key_size": 20, "raw_value_size": 418292, "raw_average_value_size": 1142, "num_data_blocks": 33, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008448, "oldest_key_time": 1765008448, "file_creation_time": 1765008479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 5155 microseconds, and 2244 cpu microseconds.
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.333356) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 426863 bytes OK
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.333381) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.334638) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.334655) EVENT_LOG_v1 {"time_micros": 1765008479334649, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.334673) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 567847, prev total WAL file size 567847, number of live WAL files 2.
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.335272) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353236' seq:72057594037927935, type:22 .. '6D6772737461740032373738' seq:0, type:0; will stop at (end)
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(416KB)], [155(13MB)]
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479335368, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 14647075, "oldest_snapshot_seqno": -1}
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 10330 keys, 10916982 bytes, temperature: kUnknown
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479426049, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 10916982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10853849, "index_size": 36204, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25861, "raw_key_size": 273492, "raw_average_key_size": 26, "raw_value_size": 10676363, "raw_average_value_size": 1033, "num_data_blocks": 1362, "num_entries": 10330, "num_filter_entries": 10330, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765008479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.426389) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 10916982 bytes
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.427711) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.3 rd, 120.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.6 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(59.9) write-amplify(25.6) OK, records in: 10836, records dropped: 506 output_compression: NoCompression
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.427746) EVENT_LOG_v1 {"time_micros": 1765008479427725, "job": 96, "event": "compaction_finished", "compaction_time_micros": 90803, "compaction_time_cpu_micros": 53155, "output_level": 6, "num_output_files": 1, "total_output_size": 10916982, "num_input_records": 10836, "num_output_records": 10330, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479428178, "job": 96, "event": "table_file_deletion", "file_number": 157}
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008479431115, "job": 96, "event": "table_file_deletion", "file_number": 155}
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.335073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.431287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.431300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.431303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.431306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:07:59.431309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:07:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 102 op/s
Dec 06 08:07:59 compute-0 nova_compute[251992]: 2025-12-06 08:07:59.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:07:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:07:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:07:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:07:59.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:00 compute-0 nova_compute[251992]: 2025-12-06 08:08:00.470 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:01.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:01 compute-0 ceph-mon[74339]: pgmap v3458: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 102 op/s
Dec 06 08:08:01 compute-0 podman[388058]: 2025-12-06 08:08:01.492681152 +0000 UTC m=+0.130656796 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:08:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 102 op/s
Dec 06 08:08:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:08:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:01.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:08:03 compute-0 nova_compute[251992]: 2025-12-06 08:08:03.045 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:03.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:03 compute-0 ceph-mon[74339]: pgmap v3459: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 102 op/s
Dec 06 08:08:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 08:08:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:03.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:03.884 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:03.886 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:03.887 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:05.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:05 compute-0 ceph-mon[74339]: pgmap v3460: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 08:08:05 compute-0 nova_compute[251992]: 2025-12-06 08:08:05.475 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 127 op/s
Dec 06 08:08:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:08:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:05.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:08:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:07.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:07 compute-0 ceph-mon[74339]: pgmap v3461: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 127 op/s
Dec 06 08:08:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 275 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 672 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Dec 06 08:08:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:07.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:08 compute-0 nova_compute[251992]: 2025-12-06 08:08:08.048 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:08 compute-0 podman[388088]: 2025-12-06 08:08:08.390937244 +0000 UTC m=+0.053388441 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:08:08 compute-0 podman[388089]: 2025-12-06 08:08:08.398087167 +0000 UTC m=+0.059005603 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 08:08:08 compute-0 nova_compute[251992]: 2025-12-06 08:08:08.506 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:08 compute-0 ceph-mon[74339]: pgmap v3462: 305 pgs: 305 active+clean; 275 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 672 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Dec 06 08:08:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:08:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:09.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:08:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/840833753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:08:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/840833753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:08:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 275 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 06 08:08:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:09.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:10 compute-0 nova_compute[251992]: 2025-12-06 08:08:10.480 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:10 compute-0 ceph-mon[74339]: pgmap v3463: 305 pgs: 305 active+clean; 275 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 319 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 06 08:08:11 compute-0 sudo[388131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:11 compute-0 sudo[388131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:11 compute-0 sudo[388131]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:11 compute-0 sudo[388156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:11 compute-0 sudo[388156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:11 compute-0 sudo[388156]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:11.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec 06 08:08:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:11.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:12 compute-0 ceph-mon[74339]: pgmap v3464: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Dec 06 08:08:13 compute-0 nova_compute[251992]: 2025-12-06 08:08:13.050 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:08:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:08:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:08:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:08:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:08:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:08:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:13.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 08:08:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:13.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:08:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 55K writes, 204K keys, 55K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 55K writes, 21K syncs, 2.63 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3291 writes, 12K keys, 3291 commit groups, 1.0 writes per commit group, ingest: 15.21 MB, 0.03 MB/s
                                           Interval WAL: 3291 writes, 1292 syncs, 2.55 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 08:08:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:14 compute-0 ceph-mon[74339]: pgmap v3465: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 08:08:15 compute-0 nova_compute[251992]: 2025-12-06 08:08:15.001 251996 DEBUG nova.compute.manager [req-1f08758d-0177-4b67-8f2f-b6c189ca446a req-26d095ec-c24c-46d2-8a1a-1cfcf95a05da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-changed-e01db4de-5597-4913-b15d-568789f0cf17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:08:15 compute-0 nova_compute[251992]: 2025-12-06 08:08:15.001 251996 DEBUG nova.compute.manager [req-1f08758d-0177-4b67-8f2f-b6c189ca446a req-26d095ec-c24c-46d2-8a1a-1cfcf95a05da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Refreshing instance network info cache due to event network-changed-e01db4de-5597-4913-b15d-568789f0cf17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:08:15 compute-0 nova_compute[251992]: 2025-12-06 08:08:15.001 251996 DEBUG oslo_concurrency.lockutils [req-1f08758d-0177-4b67-8f2f-b6c189ca446a req-26d095ec-c24c-46d2-8a1a-1cfcf95a05da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:08:15 compute-0 nova_compute[251992]: 2025-12-06 08:08:15.002 251996 DEBUG oslo_concurrency.lockutils [req-1f08758d-0177-4b67-8f2f-b6c189ca446a req-26d095ec-c24c-46d2-8a1a-1cfcf95a05da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:08:15 compute-0 nova_compute[251992]: 2025-12-06 08:08:15.002 251996 DEBUG nova.network.neutron [req-1f08758d-0177-4b67-8f2f-b6c189ca446a req-26d095ec-c24c-46d2-8a1a-1cfcf95a05da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Refreshing network info cache for port e01db4de-5597-4913-b15d-568789f0cf17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:08:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:15.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:15 compute-0 nova_compute[251992]: 2025-12-06 08:08:15.483 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Dec 06 08:08:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:15.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:16 compute-0 ceph-mon[74339]: pgmap v3466: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Dec 06 08:08:17 compute-0 nova_compute[251992]: 2025-12-06 08:08:17.066 251996 DEBUG nova.network.neutron [req-1f08758d-0177-4b67-8f2f-b6c189ca446a req-26d095ec-c24c-46d2-8a1a-1cfcf95a05da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updated VIF entry in instance network info cache for port e01db4de-5597-4913-b15d-568789f0cf17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:08:17 compute-0 nova_compute[251992]: 2025-12-06 08:08:17.068 251996 DEBUG nova.network.neutron [req-1f08758d-0177-4b67-8f2f-b6c189ca446a req-26d095ec-c24c-46d2-8a1a-1cfcf95a05da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updating instance_info_cache with network_info: [{"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:08:17 compute-0 nova_compute[251992]: 2025-12-06 08:08:17.255 251996 DEBUG oslo_concurrency.lockutils [req-1f08758d-0177-4b67-8f2f-b6c189ca446a req-26d095ec-c24c-46d2-8a1a-1cfcf95a05da 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:08:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:17.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 290 KiB/s wr, 12 op/s
Dec 06 08:08:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:17.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:18 compute-0 nova_compute[251992]: 2025-12-06 08:08:18.052 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:08:18
Dec 06 08:08:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:08:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:08:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.meta', 'backups', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes']
Dec 06 08:08:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:08:18 compute-0 ceph-mon[74339]: pgmap v3467: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 290 KiB/s wr, 12 op/s
Dec 06 08:08:19 compute-0 nova_compute[251992]: 2025-12-06 08:08:19.151 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:19.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s rd, 64 KiB/s wr, 6 op/s
Dec 06 08:08:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:19.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:20 compute-0 nova_compute[251992]: 2025-12-06 08:08:20.486 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:20 compute-0 ceph-mon[74339]: pgmap v3468: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s rd, 64 KiB/s wr, 6 op/s
Dec 06 08:08:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:21.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s rd, 66 KiB/s wr, 6 op/s
Dec 06 08:08:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:21.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 08:08:22 compute-0 ceph-mon[74339]: pgmap v3469: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s rd, 66 KiB/s wr, 6 op/s
Dec 06 08:08:23 compute-0 nova_compute[251992]: 2025-12-06 08:08:23.056 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:23.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 06 08:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:08:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:23.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:08:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:08:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:24 compute-0 ceph-mon[74339]: pgmap v3470: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec 06 08:08:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:25.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:25 compute-0 nova_compute[251992]: 2025-12-06 08:08:25.489 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.3 KiB/s wr, 1 op/s
Dec 06 08:08:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:25.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0043490019542155225 of space, bias 1.0, pg target 1.3047005862646568 quantized to 32 (current 32)
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.6464803764191328 quantized to 32 (current 32)
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:08:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 08:08:26 compute-0 ceph-mon[74339]: pgmap v3471: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.3 KiB/s wr, 1 op/s
Dec 06 08:08:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:08:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:08:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:08:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:08:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:08:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:27.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Dec 06 08:08:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:27.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/118683916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:28 compute-0 nova_compute[251992]: 2025-12-06 08:08:28.073 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:28 compute-0 ceph-mon[74339]: pgmap v3472: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Dec 06 08:08:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:29.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Dec 06 08:08:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:29.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:30 compute-0 nova_compute[251992]: 2025-12-06 08:08:30.494 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:31 compute-0 ceph-mon[74339]: pgmap v3473: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Dec 06 08:08:31 compute-0 sudo[388191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:31 compute-0 sudo[388191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:31 compute-0 sudo[388191]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:08:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:31.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:08:31 compute-0 sudo[388216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:31 compute-0 sudo[388216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:31 compute-0 sudo[388216]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 08:08:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:31.718 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:08:31 compute-0 nova_compute[251992]: 2025-12-06 08:08:31.719 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:31.720 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:08:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:31.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:32 compute-0 podman[388241]: 2025-12-06 08:08:32.425946164 +0000 UTC m=+0.084518933 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Dec 06 08:08:33 compute-0 ceph-mon[74339]: pgmap v3474: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 08:08:33 compute-0 nova_compute[251992]: 2025-12-06 08:08:33.075 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:33.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 08:08:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:33.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:35 compute-0 ceph-mon[74339]: pgmap v3475: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 08:08:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1624875607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3781957031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/282098574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:08:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:35.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:35 compute-0 nova_compute[251992]: 2025-12-06 08:08:35.497 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 08:08:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:35.722 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:08:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:35.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2223726682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:08:37 compute-0 ceph-mon[74339]: pgmap v3476: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 08:08:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:37.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 08:08:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:37.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:38 compute-0 nova_compute[251992]: 2025-12-06 08:08:38.078 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:39 compute-0 ceph-mon[74339]: pgmap v3477: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Dec 06 08:08:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:39.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:39 compute-0 podman[388272]: 2025-12-06 08:08:39.39297915 +0000 UTC m=+0.053277470 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:08:39 compute-0 podman[388273]: 2025-12-06 08:08:39.398722945 +0000 UTC m=+0.057578165 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 06 08:08:39 compute-0 sudo[388306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:39 compute-0 sudo[388306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:39 compute-0 sudo[388306]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:39 compute-0 sudo[388331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:08:39 compute-0 sudo[388331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:39 compute-0 sudo[388331]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:39 compute-0 nova_compute[251992]: 2025-12-06 08:08:39.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:08:39 compute-0 nova_compute[251992]: 2025-12-06 08:08:39.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:39 compute-0 nova_compute[251992]: 2025-12-06 08:08:39.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:39 compute-0 nova_compute[251992]: 2025-12-06 08:08:39.684 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:39 compute-0 nova_compute[251992]: 2025-12-06 08:08:39.684 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:08:39 compute-0 nova_compute[251992]: 2025-12-06 08:08:39.684 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:08:39 compute-0 sudo[388356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:39 compute-0 sudo[388356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:39 compute-0 sudo[388356]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:39.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:39 compute-0 sudo[388382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:08:39 compute-0 sudo[388382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:08:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2677702468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:40 compute-0 nova_compute[251992]: 2025-12-06 08:08:40.160 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:08:40 compute-0 sudo[388382]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:40 compute-0 nova_compute[251992]: 2025-12-06 08:08:40.295 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:08:40 compute-0 nova_compute[251992]: 2025-12-06 08:08:40.296 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:08:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:08:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:08:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:08:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:08:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:08:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:08:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 18e1f5ab-d87c-4015-a520-1e96af827936 does not exist
Dec 06 08:08:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e7984cb3-0016-4b82-81a8-935067e527cd does not exist
Dec 06 08:08:40 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d4644eff-685b-4377-93b7-b2145dfb1dca does not exist
Dec 06 08:08:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:08:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:08:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:08:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:08:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:08:40 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:08:40 compute-0 nova_compute[251992]: 2025-12-06 08:08:40.467 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:08:40 compute-0 nova_compute[251992]: 2025-12-06 08:08:40.468 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3943MB free_disk=20.876216888427734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:08:40 compute-0 nova_compute[251992]: 2025-12-06 08:08:40.468 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:40 compute-0 nova_compute[251992]: 2025-12-06 08:08:40.469 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:40 compute-0 sudo[388459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:40 compute-0 sudo[388459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:40 compute-0 sudo[388459]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:40 compute-0 nova_compute[251992]: 2025-12-06 08:08:40.500 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:40 compute-0 sudo[388484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:08:40 compute-0 sudo[388484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:40 compute-0 sudo[388484]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:40 compute-0 sudo[388509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:40 compute-0 sudo[388509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:40 compute-0 sudo[388509]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:40 compute-0 sudo[388534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:08:40 compute-0 sudo[388534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:40 compute-0 nova_compute[251992]: 2025-12-06 08:08:40.674 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 131d5537-9b5a-407d-97af-efc5bd314951 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:08:40 compute-0 nova_compute[251992]: 2025-12-06 08:08:40.684 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:08:40 compute-0 nova_compute[251992]: 2025-12-06 08:08:40.684 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:08:40 compute-0 nova_compute[251992]: 2025-12-06 08:08:40.764 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:08:41 compute-0 podman[388620]: 2025-12-06 08:08:41.036536069 +0000 UTC m=+0.042475977 container create 226373c6114d3f5f99925b67ea0d3093a60d1ef00d955fb0d730b9be8cb3b3b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 08:08:41 compute-0 systemd[1]: Started libpod-conmon-226373c6114d3f5f99925b67ea0d3093a60d1ef00d955fb0d730b9be8cb3b3b9.scope.
Dec 06 08:08:41 compute-0 ceph-mon[74339]: pgmap v3478: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:08:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2677702468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:08:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:08:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:08:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:08:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:08:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:08:41 compute-0 podman[388620]: 2025-12-06 08:08:41.016351084 +0000 UTC m=+0.022290992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:08:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:08:41 compute-0 podman[388620]: 2025-12-06 08:08:41.13143455 +0000 UTC m=+0.137374478 container init 226373c6114d3f5f99925b67ea0d3093a60d1ef00d955fb0d730b9be8cb3b3b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_thompson, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:08:41 compute-0 podman[388620]: 2025-12-06 08:08:41.13776995 +0000 UTC m=+0.143709868 container start 226373c6114d3f5f99925b67ea0d3093a60d1ef00d955fb0d730b9be8cb3b3b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 08:08:41 compute-0 podman[388620]: 2025-12-06 08:08:41.141585154 +0000 UTC m=+0.147525062 container attach 226373c6114d3f5f99925b67ea0d3093a60d1ef00d955fb0d730b9be8cb3b3b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_thompson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 08:08:41 compute-0 exciting_thompson[388638]: 167 167
Dec 06 08:08:41 compute-0 systemd[1]: libpod-226373c6114d3f5f99925b67ea0d3093a60d1ef00d955fb0d730b9be8cb3b3b9.scope: Deactivated successfully.
Dec 06 08:08:41 compute-0 podman[388620]: 2025-12-06 08:08:41.144422431 +0000 UTC m=+0.150362339 container died 226373c6114d3f5f99925b67ea0d3093a60d1ef00d955fb0d730b9be8cb3b3b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 08:08:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a4ef61204e9524db8afe3459d831a6a032cb461bd8dd38f1f5880a36ed6f65-merged.mount: Deactivated successfully.
Dec 06 08:08:41 compute-0 podman[388620]: 2025-12-06 08:08:41.182129217 +0000 UTC m=+0.188069125 container remove 226373c6114d3f5f99925b67ea0d3093a60d1ef00d955fb0d730b9be8cb3b3b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_thompson, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 08:08:41 compute-0 systemd[1]: libpod-conmon-226373c6114d3f5f99925b67ea0d3093a60d1ef00d955fb0d730b9be8cb3b3b9.scope: Deactivated successfully.
Dec 06 08:08:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:08:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2671105623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:41 compute-0 nova_compute[251992]: 2025-12-06 08:08:41.211 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:08:41 compute-0 nova_compute[251992]: 2025-12-06 08:08:41.218 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:08:41 compute-0 nova_compute[251992]: 2025-12-06 08:08:41.289 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:08:41 compute-0 nova_compute[251992]: 2025-12-06 08:08:41.290 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:08:41 compute-0 nova_compute[251992]: 2025-12-06 08:08:41.291 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:41 compute-0 podman[388664]: 2025-12-06 08:08:41.341438726 +0000 UTC m=+0.040663467 container create c47ae025ec113e3f60a530a46cfde21c50ab6fd10988feccc29f69af9fb3ed29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:08:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:41.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:41 compute-0 systemd[1]: Started libpod-conmon-c47ae025ec113e3f60a530a46cfde21c50ab6fd10988feccc29f69af9fb3ed29.scope.
Dec 06 08:08:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e630948f411efd6ca17f616ed15b735e539738416d509248f1359b08e162096/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e630948f411efd6ca17f616ed15b735e539738416d509248f1359b08e162096/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e630948f411efd6ca17f616ed15b735e539738416d509248f1359b08e162096/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e630948f411efd6ca17f616ed15b735e539738416d509248f1359b08e162096/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e630948f411efd6ca17f616ed15b735e539738416d509248f1359b08e162096/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:41 compute-0 podman[388664]: 2025-12-06 08:08:41.41567422 +0000 UTC m=+0.114898991 container init c47ae025ec113e3f60a530a46cfde21c50ab6fd10988feccc29f69af9fb3ed29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Dec 06 08:08:41 compute-0 podman[388664]: 2025-12-06 08:08:41.324382506 +0000 UTC m=+0.023607267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:08:41 compute-0 podman[388664]: 2025-12-06 08:08:41.422653688 +0000 UTC m=+0.121878419 container start c47ae025ec113e3f60a530a46cfde21c50ab6fd10988feccc29f69af9fb3ed29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_archimedes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:08:41 compute-0 podman[388664]: 2025-12-06 08:08:41.426263075 +0000 UTC m=+0.125487836 container attach c47ae025ec113e3f60a530a46cfde21c50ab6fd10988feccc29f69af9fb3ed29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 08:08:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 308 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Dec 06 08:08:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:41.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:42 compute-0 nice_archimedes[388681]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:08:42 compute-0 nice_archimedes[388681]: --> relative data size: 1.0
Dec 06 08:08:42 compute-0 nice_archimedes[388681]: --> All data devices are unavailable
Dec 06 08:08:42 compute-0 systemd[1]: libpod-c47ae025ec113e3f60a530a46cfde21c50ab6fd10988feccc29f69af9fb3ed29.scope: Deactivated successfully.
Dec 06 08:08:42 compute-0 podman[388664]: 2025-12-06 08:08:42.251296318 +0000 UTC m=+0.950521079 container died c47ae025ec113e3f60a530a46cfde21c50ab6fd10988feccc29f69af9fb3ed29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Dec 06 08:08:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2671105623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e630948f411efd6ca17f616ed15b735e539738416d509248f1359b08e162096-merged.mount: Deactivated successfully.
Dec 06 08:08:42 compute-0 podman[388664]: 2025-12-06 08:08:42.304246357 +0000 UTC m=+1.003471098 container remove c47ae025ec113e3f60a530a46cfde21c50ab6fd10988feccc29f69af9fb3ed29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_archimedes, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:08:42 compute-0 systemd[1]: libpod-conmon-c47ae025ec113e3f60a530a46cfde21c50ab6fd10988feccc29f69af9fb3ed29.scope: Deactivated successfully.
Dec 06 08:08:42 compute-0 sudo[388534]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:42 compute-0 sudo[388707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:42 compute-0 sudo[388707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:42 compute-0 sudo[388707]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:42 compute-0 sudo[388732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:08:42 compute-0 sudo[388732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:42 compute-0 sudo[388732]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:42 compute-0 sudo[388757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:42 compute-0 sudo[388757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:42 compute-0 sudo[388757]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:42 compute-0 sudo[388782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:08:42 compute-0 sudo[388782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:42 compute-0 podman[388848]: 2025-12-06 08:08:42.936176948 +0000 UTC m=+0.049493566 container create a4557bd5b7e9243d2aee9262c2f532bbb4b5a6e520a84f1e2aee33f77ae8cbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:08:42 compute-0 systemd[1]: Started libpod-conmon-a4557bd5b7e9243d2aee9262c2f532bbb4b5a6e520a84f1e2aee33f77ae8cbb5.scope.
Dec 06 08:08:43 compute-0 podman[388848]: 2025-12-06 08:08:42.912959882 +0000 UTC m=+0.026276580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:08:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:08:43 compute-0 podman[388848]: 2025-12-06 08:08:43.021691976 +0000 UTC m=+0.135008604 container init a4557bd5b7e9243d2aee9262c2f532bbb4b5a6e520a84f1e2aee33f77ae8cbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:08:43 compute-0 podman[388848]: 2025-12-06 08:08:43.028679904 +0000 UTC m=+0.141996512 container start a4557bd5b7e9243d2aee9262c2f532bbb4b5a6e520a84f1e2aee33f77ae8cbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:08:43 compute-0 podman[388848]: 2025-12-06 08:08:43.032133218 +0000 UTC m=+0.145449886 container attach a4557bd5b7e9243d2aee9262c2f532bbb4b5a6e520a84f1e2aee33f77ae8cbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 08:08:43 compute-0 relaxed_blackwell[388865]: 167 167
Dec 06 08:08:43 compute-0 systemd[1]: libpod-a4557bd5b7e9243d2aee9262c2f532bbb4b5a6e520a84f1e2aee33f77ae8cbb5.scope: Deactivated successfully.
Dec 06 08:08:43 compute-0 podman[388848]: 2025-12-06 08:08:43.033675499 +0000 UTC m=+0.146992107 container died a4557bd5b7e9243d2aee9262c2f532bbb4b5a6e520a84f1e2aee33f77ae8cbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:08:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:08:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:08:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:08:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:08:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:08:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:08:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f36f35e51f454c044b15a789612e4dc811138af889990e5ad9562d0e7916003-merged.mount: Deactivated successfully.
Dec 06 08:08:43 compute-0 podman[388848]: 2025-12-06 08:08:43.074679816 +0000 UTC m=+0.187996424 container remove a4557bd5b7e9243d2aee9262c2f532bbb4b5a6e520a84f1e2aee33f77ae8cbb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:08:43 compute-0 nova_compute[251992]: 2025-12-06 08:08:43.080 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:43 compute-0 systemd[1]: libpod-conmon-a4557bd5b7e9243d2aee9262c2f532bbb4b5a6e520a84f1e2aee33f77ae8cbb5.scope: Deactivated successfully.
Dec 06 08:08:43 compute-0 podman[388890]: 2025-12-06 08:08:43.241811585 +0000 UTC m=+0.038966991 container create 130dcff648d5b74b7f0cce3ffbdb3cab1ba9198afda111c5988bf7913bb0b7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:08:43 compute-0 ceph-mon[74339]: pgmap v3479: 305 pgs: 305 active+clean; 308 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Dec 06 08:08:43 compute-0 systemd[1]: Started libpod-conmon-130dcff648d5b74b7f0cce3ffbdb3cab1ba9198afda111c5988bf7913bb0b7e1.scope.
Dec 06 08:08:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9eaa4a8b4e4d7922a1bf7c9a1462df764b67586dd5eb2e845b7b33f68967a67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9eaa4a8b4e4d7922a1bf7c9a1462df764b67586dd5eb2e845b7b33f68967a67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9eaa4a8b4e4d7922a1bf7c9a1462df764b67586dd5eb2e845b7b33f68967a67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9eaa4a8b4e4d7922a1bf7c9a1462df764b67586dd5eb2e845b7b33f68967a67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:43 compute-0 podman[388890]: 2025-12-06 08:08:43.319756629 +0000 UTC m=+0.116912045 container init 130dcff648d5b74b7f0cce3ffbdb3cab1ba9198afda111c5988bf7913bb0b7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elion, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 08:08:43 compute-0 podman[388890]: 2025-12-06 08:08:43.2263989 +0000 UTC m=+0.023554326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:08:43 compute-0 podman[388890]: 2025-12-06 08:08:43.329866602 +0000 UTC m=+0.127022008 container start 130dcff648d5b74b7f0cce3ffbdb3cab1ba9198afda111c5988bf7913bb0b7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elion, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:08:43 compute-0 podman[388890]: 2025-12-06 08:08:43.333593742 +0000 UTC m=+0.130749168 container attach 130dcff648d5b74b7f0cce3ffbdb3cab1ba9198afda111c5988bf7913bb0b7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 08:08:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:43.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 308 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 83 op/s
Dec 06 08:08:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:43.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:44 compute-0 affectionate_elion[388907]: {
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:     "0": [
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:         {
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             "devices": [
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "/dev/loop3"
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             ],
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             "lv_name": "ceph_lv0",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             "lv_size": "7511998464",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             "name": "ceph_lv0",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             "tags": {
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "ceph.cluster_name": "ceph",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "ceph.crush_device_class": "",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "ceph.encrypted": "0",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "ceph.osd_id": "0",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "ceph.type": "block",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:                 "ceph.vdo": "0"
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             },
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             "type": "block",
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:             "vg_name": "ceph_vg0"
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:         }
Dec 06 08:08:44 compute-0 affectionate_elion[388907]:     ]
Dec 06 08:08:44 compute-0 affectionate_elion[388907]: }
Dec 06 08:08:44 compute-0 systemd[1]: libpod-130dcff648d5b74b7f0cce3ffbdb3cab1ba9198afda111c5988bf7913bb0b7e1.scope: Deactivated successfully.
Dec 06 08:08:44 compute-0 podman[388890]: 2025-12-06 08:08:44.185823168 +0000 UTC m=+0.982978594 container died 130dcff648d5b74b7f0cce3ffbdb3cab1ba9198afda111c5988bf7913bb0b7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elion, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:08:44 compute-0 nova_compute[251992]: 2025-12-06 08:08:44.284 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9eaa4a8b4e4d7922a1bf7c9a1462df764b67586dd5eb2e845b7b33f68967a67-merged.mount: Deactivated successfully.
Dec 06 08:08:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:44 compute-0 podman[388890]: 2025-12-06 08:08:44.653270452 +0000 UTC m=+1.450425858 container remove 130dcff648d5b74b7f0cce3ffbdb3cab1ba9198afda111c5988bf7913bb0b7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 08:08:44 compute-0 nova_compute[251992]: 2025-12-06 08:08:44.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1075791896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:44 compute-0 sudo[388782]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:44 compute-0 sudo[388927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:44 compute-0 systemd[1]: libpod-conmon-130dcff648d5b74b7f0cce3ffbdb3cab1ba9198afda111c5988bf7913bb0b7e1.scope: Deactivated successfully.
Dec 06 08:08:44 compute-0 sudo[388927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:44 compute-0 sudo[388927]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:44 compute-0 sudo[388952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:08:44 compute-0 sudo[388952]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:44 compute-0 sudo[388952]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:44 compute-0 sudo[388977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:44 compute-0 sudo[388977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:44 compute-0 sudo[388977]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:44 compute-0 sudo[389002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:08:44 compute-0 sudo[389002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:45 compute-0 podman[389069]: 2025-12-06 08:08:45.27612584 +0000 UTC m=+0.040965297 container create 27734bb0c5be80c0e73f754fa0bc017f422bd0a47907f08cf1a11032e61a9d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_darwin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:08:45 compute-0 systemd[1]: Started libpod-conmon-27734bb0c5be80c0e73f754fa0bc017f422bd0a47907f08cf1a11032e61a9d7b.scope.
Dec 06 08:08:45 compute-0 podman[389069]: 2025-12-06 08:08:45.259953043 +0000 UTC m=+0.024792520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:08:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:08:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:45.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:45 compute-0 podman[389069]: 2025-12-06 08:08:45.381969505 +0000 UTC m=+0.146808982 container init 27734bb0c5be80c0e73f754fa0bc017f422bd0a47907f08cf1a11032e61a9d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:08:45 compute-0 podman[389069]: 2025-12-06 08:08:45.39140804 +0000 UTC m=+0.156247497 container start 27734bb0c5be80c0e73f754fa0bc017f422bd0a47907f08cf1a11032e61a9d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:08:45 compute-0 friendly_darwin[389085]: 167 167
Dec 06 08:08:45 compute-0 systemd[1]: libpod-27734bb0c5be80c0e73f754fa0bc017f422bd0a47907f08cf1a11032e61a9d7b.scope: Deactivated successfully.
Dec 06 08:08:45 compute-0 podman[389069]: 2025-12-06 08:08:45.402995153 +0000 UTC m=+0.167834630 container attach 27734bb0c5be80c0e73f754fa0bc017f422bd0a47907f08cf1a11032e61a9d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_darwin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:08:45 compute-0 podman[389069]: 2025-12-06 08:08:45.404061551 +0000 UTC m=+0.168901008 container died 27734bb0c5be80c0e73f754fa0bc017f422bd0a47907f08cf1a11032e61a9d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:08:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-58d8279217f6d7538c3d16350ff73c8ac629c359afcf563b62c09610c9898bc9-merged.mount: Deactivated successfully.
Dec 06 08:08:45 compute-0 podman[389069]: 2025-12-06 08:08:45.438530432 +0000 UTC m=+0.203369909 container remove 27734bb0c5be80c0e73f754fa0bc017f422bd0a47907f08cf1a11032e61a9d7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_darwin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:08:45 compute-0 systemd[1]: libpod-conmon-27734bb0c5be80c0e73f754fa0bc017f422bd0a47907f08cf1a11032e61a9d7b.scope: Deactivated successfully.
Dec 06 08:08:45 compute-0 nova_compute[251992]: 2025-12-06 08:08:45.502 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 102 op/s
Dec 06 08:08:45 compute-0 podman[389109]: 2025-12-06 08:08:45.596366121 +0000 UTC m=+0.023548596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:08:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:45.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:46 compute-0 podman[389109]: 2025-12-06 08:08:46.034623496 +0000 UTC m=+0.461805951 container create 4acb5639aacbe6ce47a5c87e210db9877b81b38871692db36357ec027cd1bce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:08:46 compute-0 ceph-mon[74339]: pgmap v3480: 305 pgs: 305 active+clean; 308 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 83 op/s
Dec 06 08:08:46 compute-0 systemd[1]: Started libpod-conmon-4acb5639aacbe6ce47a5c87e210db9877b81b38871692db36357ec027cd1bce9.scope.
Dec 06 08:08:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/158ec3c55b4d56869beecfab808fa153b3a603d00f1c7162e285157ba75c7073/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/158ec3c55b4d56869beecfab808fa153b3a603d00f1c7162e285157ba75c7073/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/158ec3c55b4d56869beecfab808fa153b3a603d00f1c7162e285157ba75c7073/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/158ec3c55b4d56869beecfab808fa153b3a603d00f1c7162e285157ba75c7073/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.232 251996 DEBUG oslo_concurrency.lockutils [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "interface-131d5537-9b5a-407d-97af-efc5bd314951-e01db4de-5597-4913-b15d-568789f0cf17" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.234 251996 DEBUG oslo_concurrency.lockutils [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "interface-131d5537-9b5a-407d-97af-efc5bd314951-e01db4de-5597-4913-b15d-568789f0cf17" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.259 251996 DEBUG nova.objects.instance [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'flavor' on Instance uuid 131d5537-9b5a-407d-97af-efc5bd314951 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.280 251996 DEBUG nova.virt.libvirt.vif [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416534484',display_name='tempest-TestNetworkBasicOps-server-416534484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416534484',id=191,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcK72NHJQZj1SOlzPXNIyj/EahKYKvzGuI9wWjXw21tyomau5BzaHrS65HPkCW6d+F/TpM4Nf1hp15CwV3oJtnsWhkxf1U/DQj7i/qxu5KN1mZmgoDo9AVVhX47DkyW5Q==',key_name='tempest-TestNetworkBasicOps-1452587742',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:06:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-0jy5h723',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:06:53Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=131d5537-9b5a-407d-97af-efc5bd314951,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.281 251996 DEBUG nova.network.os_vif_util [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.283 251996 DEBUG nova.network.os_vif_util [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=e01db4de-5597-4913-b15d-568789f0cf17,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape01db4de-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.290 251996 DEBUG nova.virt.libvirt.guest [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:70:ec:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape01db4de-55"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.294 251996 DEBUG nova.virt.libvirt.guest [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:70:ec:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape01db4de-55"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.297 251996 DEBUG nova.virt.libvirt.driver [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Attempting to detach device tape01db4de-55 from instance 131d5537-9b5a-407d-97af-efc5bd314951 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.298 251996 DEBUG nova.virt.libvirt.guest [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] detach device xml: <interface type="ethernet">
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <mac address="fa:16:3e:70:ec:42"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <model type="virtio"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <mtu size="1442"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <target dev="tape01db4de-55"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]: </interface>
Dec 06 08:08:46 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 08:08:46 compute-0 podman[389109]: 2025-12-06 08:08:46.350457839 +0000 UTC m=+0.777640364 container init 4acb5639aacbe6ce47a5c87e210db9877b81b38871692db36357ec027cd1bce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_elbakyan, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.350 251996 DEBUG nova.virt.libvirt.guest [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:70:ec:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape01db4de-55"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.356 251996 DEBUG nova.virt.libvirt.guest [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:70:ec:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape01db4de-55"/></interface>not found in domain: <domain type='kvm' id='89'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <name>instance-000000bf</name>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <uuid>131d5537-9b5a-407d-97af-efc5bd314951</uuid>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:name>tempest-TestNetworkBasicOps-server-416534484</nova:name>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:creationTime>2025-12-06 08:07:25</nova:creationTime>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:flavor name="m1.nano">
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:memory>128</nova:memory>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:disk>1</nova:disk>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:swap>0</nova:swap>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:vcpus>1</nova:vcpus>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </nova:flavor>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:owner>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </nova:owner>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:ports>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:port uuid="cbe16ad6-d576-4461-9682-554b48a77542">
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:port uuid="e01db4de-5597-4913-b15d-568789f0cf17">
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </nova:ports>
Dec 06 08:08:46 compute-0 nova_compute[251992]: </nova:instance>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <memory unit='KiB'>131072</memory>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <vcpu placement='static'>1</vcpu>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <resource>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <partition>/machine</partition>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </resource>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <sysinfo type='smbios'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <system>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <entry name='manufacturer'>RDO</entry>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <entry name='serial'>131d5537-9b5a-407d-97af-efc5bd314951</entry>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <entry name='uuid'>131d5537-9b5a-407d-97af-efc5bd314951</entry>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <entry name='family'>Virtual Machine</entry>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </system>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <os>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <boot dev='hd'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <smbios mode='sysinfo'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </os>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <features>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <vmcoreinfo state='on'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </features>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <model fallback='forbid'>Nehalem</model>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <feature policy='require' name='x2apic'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <feature policy='require' name='hypervisor'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <feature policy='require' name='vme'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <clock offset='utc'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <timer name='hpet' present='no'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <on_poweroff>destroy</on_poweroff>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <on_reboot>restart</on_reboot>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <on_crash>destroy</on_crash>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <disk type='network' device='disk'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <auth username='openstack'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <source protocol='rbd' name='vms/131d5537-9b5a-407d-97af-efc5bd314951_disk' index='2'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <host name='192.168.122.100' port='6789'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <host name='192.168.122.102' port='6789'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <host name='192.168.122.101' port='6789'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       </source>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target dev='vda' bus='virtio'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='virtio-disk0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <disk type='network' device='cdrom'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <auth username='openstack'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <source protocol='rbd' name='vms/131d5537-9b5a-407d-97af-efc5bd314951_disk.config' index='1'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <host name='192.168.122.100' port='6789'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <host name='192.168.122.102' port='6789'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <host name='192.168.122.101' port='6789'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       </source>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target dev='sda' bus='sata'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <readonly/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='sata0-0-0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pcie.0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='1' port='0x10'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='2' port='0x11'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.2'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='3' port='0x12'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.3'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='4' port='0x13'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.4'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='5' port='0x14'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.5'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='6' port='0x15'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.6'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='7' port='0x16'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.7'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='8' port='0x17'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.8'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='9' port='0x18'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.9'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='10' port='0x19'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.10'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='11' port='0x1a'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.11'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='12' port='0x1b'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.12'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='13' port='0x1c'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.13'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='14' port='0x1d'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.14'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='15' port='0x1e'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.15'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='16' port='0x1f'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.16'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='17' port='0x20'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.17'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='18' port='0x21'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.18'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='19' port='0x22'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.19'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='20' port='0x23'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.20'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='21' port='0x24'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.21'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='22' port='0x25'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.22'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='23' port='0x26'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.23'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='24' port='0x27'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.24'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='25' port='0x28'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.25'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-pci-bridge'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.26'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='usb'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='sata' index='0'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='ide'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <interface type='ethernet'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <mac address='fa:16:3e:e3:9a:61'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target dev='tapcbe16ad6-d5'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model type='virtio'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <mtu size='1442'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='net0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <interface type='ethernet'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <mac address='fa:16:3e:70:ec:42'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target dev='tape01db4de-55'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model type='virtio'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <mtu size='1442'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='net1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <serial type='pty'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <source path='/dev/pts/0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <log file='/var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/console.log' append='off'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target type='isa-serial' port='0'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <model name='isa-serial'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       </target>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='serial0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <source path='/dev/pts/0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <log file='/var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/console.log' append='off'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target type='serial' port='0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='serial0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </console>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <input type='tablet' bus='usb'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='input0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='usb' bus='0' port='1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </input>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <input type='mouse' bus='ps2'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='input1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </input>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <input type='keyboard' bus='ps2'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='input2'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </input>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <listen type='address' address='::0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </graphics>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <audio id='1' type='none'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <video>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='video0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </video>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <watchdog model='itco' action='reset'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='watchdog0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </watchdog>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <memballoon model='virtio'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <stats period='10'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='balloon0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <rng model='virtio'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <backend model='random'>/dev/urandom</backend>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='rng0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <label>system_u:system_r:svirt_t:s0:c240,c950</label>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c240,c950</imagelabel>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </seclabel>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <label>+107:+107</label>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <imagelabel>+107:+107</imagelabel>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </seclabel>
Dec 06 08:08:46 compute-0 nova_compute[251992]: </domain>
Dec 06 08:08:46 compute-0 nova_compute[251992]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.357 251996 INFO nova.virt.libvirt.driver [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully detached device tape01db4de-55 from instance 131d5537-9b5a-407d-97af-efc5bd314951 from the persistent domain config.
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.357 251996 DEBUG nova.virt.libvirt.driver [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] (1/8): Attempting to detach device tape01db4de-55 with device alias net1 from instance 131d5537-9b5a-407d-97af-efc5bd314951 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.358 251996 DEBUG nova.virt.libvirt.guest [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] detach device xml: <interface type="ethernet">
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <mac address="fa:16:3e:70:ec:42"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <model type="virtio"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <mtu size="1442"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <target dev="tape01db4de-55"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]: </interface>
Dec 06 08:08:46 compute-0 nova_compute[251992]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 08:08:46 compute-0 podman[389109]: 2025-12-06 08:08:46.364635752 +0000 UTC m=+0.791818227 container start 4acb5639aacbe6ce47a5c87e210db9877b81b38871692db36357ec027cd1bce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_elbakyan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 08:08:46 compute-0 kernel: tape01db4de-55 (unregistering): left promiscuous mode
Dec 06 08:08:46 compute-0 NetworkManager[48965]: <info>  [1765008526.4784] device (tape01db4de-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:08:46 compute-0 ovn_controller[147168]: 2025-12-06T08:08:46Z|00743|binding|INFO|Releasing lport e01db4de-5597-4913-b15d-568789f0cf17 from this chassis (sb_readonly=0)
Dec 06 08:08:46 compute-0 ovn_controller[147168]: 2025-12-06T08:08:46Z|00744|binding|INFO|Setting lport e01db4de-5597-4913-b15d-568789f0cf17 down in Southbound
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.488 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:46 compute-0 ovn_controller[147168]: 2025-12-06T08:08:46Z|00745|binding|INFO|Removing iface tape01db4de-55 ovn-installed in OVS
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.491 251996 DEBUG nova.virt.libvirt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Received event <DeviceRemovedEvent: 1765008526.490701, 131d5537-9b5a-407d-97af-efc5bd314951 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 08:08:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:46.494 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:ec:42 10.100.0.29', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '131d5537-9b5a-407d-97af-efc5bd314951', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0019305c-8ebd-4d1b-ac3e-eb77d507b742, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=e01db4de-5597-4913-b15d-568789f0cf17) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:08:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:46.495 158118 INFO neutron.agent.ovn.metadata.agent [-] Port e01db4de-5597-4913-b15d-568789f0cf17 in datapath b39fdb1c-6386-42dc-9c1d-e70684ee69f2 unbound from our chassis
Dec 06 08:08:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:46.496 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b39fdb1c-6386-42dc-9c1d-e70684ee69f2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.497 251996 DEBUG nova.virt.libvirt.driver [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Start waiting for the detach event from libvirt for device tape01db4de-55 with device alias net1 for instance 131d5537-9b5a-407d-97af-efc5bd314951 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.497 251996 DEBUG nova.virt.libvirt.guest [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:70:ec:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape01db4de-55"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 08:08:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:46.501 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[edcdd598-cca1-4e53-877d-b6a742106b71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:46 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:46.502 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2 namespace which is not needed anymore
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.506 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.507 251996 DEBUG nova.virt.libvirt.guest [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:70:ec:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape01db4de-55"/></interface>not found in domain: <domain type='kvm' id='89'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <name>instance-000000bf</name>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <uuid>131d5537-9b5a-407d-97af-efc5bd314951</uuid>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:name>tempest-TestNetworkBasicOps-server-416534484</nova:name>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:creationTime>2025-12-06 08:07:25</nova:creationTime>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:flavor name="m1.nano">
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:memory>128</nova:memory>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:disk>1</nova:disk>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:swap>0</nova:swap>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:vcpus>1</nova:vcpus>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </nova:flavor>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:owner>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </nova:owner>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:ports>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:port uuid="cbe16ad6-d576-4461-9682-554b48a77542">
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:port uuid="e01db4de-5597-4913-b15d-568789f0cf17">
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </nova:ports>
Dec 06 08:08:46 compute-0 nova_compute[251992]: </nova:instance>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <memory unit='KiB'>131072</memory>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <vcpu placement='static'>1</vcpu>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <resource>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <partition>/machine</partition>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </resource>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <sysinfo type='smbios'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <system>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <entry name='manufacturer'>RDO</entry>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <entry name='serial'>131d5537-9b5a-407d-97af-efc5bd314951</entry>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <entry name='uuid'>131d5537-9b5a-407d-97af-efc5bd314951</entry>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <entry name='family'>Virtual Machine</entry>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </system>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <os>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <boot dev='hd'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <smbios mode='sysinfo'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </os>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <features>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <vmcoreinfo state='on'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </features>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <model fallback='forbid'>Nehalem</model>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <feature policy='require' name='x2apic'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <feature policy='require' name='hypervisor'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <feature policy='require' name='vme'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <clock offset='utc'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <timer name='hpet' present='no'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <on_poweroff>destroy</on_poweroff>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <on_reboot>restart</on_reboot>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <on_crash>destroy</on_crash>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <disk type='network' device='disk'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <auth username='openstack'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <source protocol='rbd' name='vms/131d5537-9b5a-407d-97af-efc5bd314951_disk' index='2'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <host name='192.168.122.100' port='6789'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <host name='192.168.122.102' port='6789'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <host name='192.168.122.101' port='6789'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       </source>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target dev='vda' bus='virtio'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='virtio-disk0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <disk type='network' device='cdrom'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <auth username='openstack'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <source protocol='rbd' name='vms/131d5537-9b5a-407d-97af-efc5bd314951_disk.config' index='1'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <host name='192.168.122.100' port='6789'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <host name='192.168.122.102' port='6789'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <host name='192.168.122.101' port='6789'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       </source>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target dev='sda' bus='sata'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <readonly/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='sata0-0-0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pcie.0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='1' port='0x10'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='2' port='0x11'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.2'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='3' port='0x12'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.3'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='4' port='0x13'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.4'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='5' port='0x14'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.5'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='6' port='0x15'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.6'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='7' port='0x16'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.7'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='8' port='0x17'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.8'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='9' port='0x18'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.9'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='10' port='0x19'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.10'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='11' port='0x1a'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.11'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='12' port='0x1b'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.12'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='13' port='0x1c'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.13'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='14' port='0x1d'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.14'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='15' port='0x1e'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.15'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='16' port='0x1f'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.16'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='17' port='0x20'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.17'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='18' port='0x21'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.18'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='19' port='0x22'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.19'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='20' port='0x23'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.20'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='21' port='0x24'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.21'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='22' port='0x25'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.22'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='23' port='0x26'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.23'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='24' port='0x27'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.24'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 08:08:46 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target chassis='25' port='0x28'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.25'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model name='pcie-pci-bridge'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='pci.26'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='usb'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <controller type='sata' index='0'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='ide'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <interface type='ethernet'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <mac address='fa:16:3e:e3:9a:61'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target dev='tapcbe16ad6-d5'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model type='virtio'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <mtu size='1442'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='net0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <serial type='pty'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <source path='/dev/pts/0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <log file='/var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/console.log' append='off'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target type='isa-serial' port='0'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:         <model name='isa-serial'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       </target>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='serial0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <source path='/dev/pts/0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <log file='/var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/console.log' append='off'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <target type='serial' port='0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='serial0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </console>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <input type='tablet' bus='usb'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='input0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='usb' bus='0' port='1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </input>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <input type='mouse' bus='ps2'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='input1'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </input>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <input type='keyboard' bus='ps2'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='input2'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </input>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <listen type='address' address='::0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </graphics>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <audio id='1' type='none'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <video>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='video0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </video>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <watchdog model='itco' action='reset'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='watchdog0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </watchdog>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <memballoon model='virtio'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <stats period='10'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='balloon0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <rng model='virtio'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <backend model='random'>/dev/urandom</backend>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <alias name='rng0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <label>system_u:system_r:svirt_t:s0:c240,c950</label>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c240,c950</imagelabel>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </seclabel>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <label>+107:+107</label>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <imagelabel>+107:+107</imagelabel>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </seclabel>
Dec 06 08:08:46 compute-0 nova_compute[251992]: </domain>
Dec 06 08:08:46 compute-0 nova_compute[251992]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.507 251996 INFO nova.virt.libvirt.driver [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully detached device tape01db4de-55 from instance 131d5537-9b5a-407d-97af-efc5bd314951 from the live domain config.
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.508 251996 DEBUG nova.virt.libvirt.vif [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416534484',display_name='tempest-TestNetworkBasicOps-server-416534484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416534484',id=191,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcK72NHJQZj1SOlzPXNIyj/EahKYKvzGuI9wWjXw21tyomau5BzaHrS65HPkCW6d+F/TpM4Nf1hp15CwV3oJtnsWhkxf1U/DQj7i/qxu5KN1mZmgoDo9AVVhX47DkyW5Q==',key_name='tempest-TestNetworkBasicOps-1452587742',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:06:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-0jy5h723',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:06:53Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=131d5537-9b5a-407d-97af-efc5bd314951,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.508 251996 DEBUG nova.network.os_vif_util [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.509 251996 DEBUG nova.network.os_vif_util [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=e01db4de-5597-4913-b15d-568789f0cf17,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape01db4de-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.509 251996 DEBUG os_vif [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=e01db4de-5597-4913-b15d-568789f0cf17,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape01db4de-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.511 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.511 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape01db4de-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.513 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.514 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.529 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.533 251996 INFO os_vif [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=e01db4de-5597-4913-b15d-568789f0cf17,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape01db4de-55')
Dec 06 08:08:46 compute-0 nova_compute[251992]: 2025-12-06 08:08:46.535 251996 DEBUG nova.virt.libvirt.guest [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:name>tempest-TestNetworkBasicOps-server-416534484</nova:name>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:creationTime>2025-12-06 08:08:46</nova:creationTime>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:flavor name="m1.nano">
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:memory>128</nova:memory>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:disk>1</nova:disk>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:swap>0</nova:swap>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:vcpus>1</nova:vcpus>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </nova:flavor>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:owner>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </nova:owner>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   <nova:ports>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     <nova:port uuid="cbe16ad6-d576-4461-9682-554b48a77542">
Dec 06 08:08:46 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 08:08:46 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 08:08:46 compute-0 nova_compute[251992]:   </nova:ports>
Dec 06 08:08:46 compute-0 nova_compute[251992]: </nova:instance>
Dec 06 08:08:46 compute-0 nova_compute[251992]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 08:08:46 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:08:46 compute-0 podman[389109]: 2025-12-06 08:08:46.611667937 +0000 UTC m=+1.038850382 container attach 4acb5639aacbe6ce47a5c87e210db9877b81b38871692db36357ec027cd1bce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:08:46 compute-0 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[386952]: [NOTICE]   (386956) : haproxy version is 2.8.14-c23fe91
Dec 06 08:08:46 compute-0 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[386952]: [NOTICE]   (386956) : path to executable is /usr/sbin/haproxy
Dec 06 08:08:46 compute-0 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[386952]: [WARNING]  (386956) : Exiting Master process...
Dec 06 08:08:46 compute-0 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[386952]: [ALERT]    (386956) : Current worker (386958) exited with code 143 (Terminated)
Dec 06 08:08:46 compute-0 neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2[386952]: [WARNING]  (386956) : All workers exited. Exiting... (0)
Dec 06 08:08:46 compute-0 systemd[1]: libpod-288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7.scope: Deactivated successfully.
Dec 06 08:08:46 compute-0 podman[389155]: 2025-12-06 08:08:46.775759646 +0000 UTC m=+0.105870418 container died 288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:08:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 102 op/s
Dec 06 08:08:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:08:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:47.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.119 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:48.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.406 251996 DEBUG nova.compute.manager [req-2bbf9d14-4c86-4363-bdc6-6c7e140883ba req-58f5d2be-625d-4305-82e7-d1a5bfd2130e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-vif-unplugged-e01db4de-5597-4913-b15d-568789f0cf17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.407 251996 DEBUG oslo_concurrency.lockutils [req-2bbf9d14-4c86-4363-bdc6-6c7e140883ba req-58f5d2be-625d-4305-82e7-d1a5bfd2130e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "131d5537-9b5a-407d-97af-efc5bd314951-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.407 251996 DEBUG oslo_concurrency.lockutils [req-2bbf9d14-4c86-4363-bdc6-6c7e140883ba req-58f5d2be-625d-4305-82e7-d1a5bfd2130e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.407 251996 DEBUG oslo_concurrency.lockutils [req-2bbf9d14-4c86-4363-bdc6-6c7e140883ba req-58f5d2be-625d-4305-82e7-d1a5bfd2130e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.407 251996 DEBUG nova.compute.manager [req-2bbf9d14-4c86-4363-bdc6-6c7e140883ba req-58f5d2be-625d-4305-82e7-d1a5bfd2130e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] No waiting events found dispatching network-vif-unplugged-e01db4de-5597-4913-b15d-568789f0cf17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:08:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-34bd730cbfa03c301053a09204466f076634cc7900e911fcfb53c416be6d3f40-merged.mount: Deactivated successfully.
Dec 06 08:08:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7-userdata-shm.mount: Deactivated successfully.
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.407 251996 WARNING nova.compute.manager [req-2bbf9d14-4c86-4363-bdc6-6c7e140883ba req-58f5d2be-625d-4305-82e7-d1a5bfd2130e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received unexpected event network-vif-unplugged-e01db4de-5597-4913-b15d-568789f0cf17 for instance with vm_state active and task_state None.
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.411 251996 DEBUG nova.compute.manager [req-2bbf9d14-4c86-4363-bdc6-6c7e140883ba req-58f5d2be-625d-4305-82e7-d1a5bfd2130e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-vif-plugged-e01db4de-5597-4913-b15d-568789f0cf17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.411 251996 DEBUG oslo_concurrency.lockutils [req-2bbf9d14-4c86-4363-bdc6-6c7e140883ba req-58f5d2be-625d-4305-82e7-d1a5bfd2130e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "131d5537-9b5a-407d-97af-efc5bd314951-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.411 251996 DEBUG oslo_concurrency.lockutils [req-2bbf9d14-4c86-4363-bdc6-6c7e140883ba req-58f5d2be-625d-4305-82e7-d1a5bfd2130e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.412 251996 DEBUG oslo_concurrency.lockutils [req-2bbf9d14-4c86-4363-bdc6-6c7e140883ba req-58f5d2be-625d-4305-82e7-d1a5bfd2130e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.412 251996 DEBUG nova.compute.manager [req-2bbf9d14-4c86-4363-bdc6-6c7e140883ba req-58f5d2be-625d-4305-82e7-d1a5bfd2130e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] No waiting events found dispatching network-vif-plugged-e01db4de-5597-4913-b15d-568789f0cf17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.412 251996 WARNING nova.compute.manager [req-2bbf9d14-4c86-4363-bdc6-6c7e140883ba req-58f5d2be-625d-4305-82e7-d1a5bfd2130e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received unexpected event network-vif-plugged-e01db4de-5597-4913-b15d-568789f0cf17 for instance with vm_state active and task_state None.
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.414 251996 DEBUG nova.compute.manager [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-vif-deleted-e01db4de-5597-4913-b15d-568789f0cf17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.414 251996 INFO nova.compute.manager [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Neutron deleted interface e01db4de-5597-4913-b15d-568789f0cf17; detaching it from the instance and deleting it from the info cache
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.414 251996 DEBUG nova.network.neutron [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updating instance_info_cache with network_info: [{"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:08:48 compute-0 ceph-mon[74339]: pgmap v3481: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 102 op/s
Dec 06 08:08:48 compute-0 podman[389155]: 2025-12-06 08:08:48.443545669 +0000 UTC m=+1.773656421 container cleanup 288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:08:48 compute-0 systemd[1]: libpod-conmon-288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7.scope: Deactivated successfully.
Dec 06 08:08:48 compute-0 ecstatic_elbakyan[389126]: {
Dec 06 08:08:48 compute-0 ecstatic_elbakyan[389126]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:08:48 compute-0 ecstatic_elbakyan[389126]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:08:48 compute-0 ecstatic_elbakyan[389126]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:08:48 compute-0 ecstatic_elbakyan[389126]:         "osd_id": 0,
Dec 06 08:08:48 compute-0 ecstatic_elbakyan[389126]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:08:48 compute-0 ecstatic_elbakyan[389126]:         "type": "bluestore"
Dec 06 08:08:48 compute-0 ecstatic_elbakyan[389126]:     }
Dec 06 08:08:48 compute-0 ecstatic_elbakyan[389126]: }
Dec 06 08:08:48 compute-0 systemd[1]: libpod-4acb5639aacbe6ce47a5c87e210db9877b81b38871692db36357ec027cd1bce9.scope: Deactivated successfully.
Dec 06 08:08:48 compute-0 systemd[1]: libpod-4acb5639aacbe6ce47a5c87e210db9877b81b38871692db36357ec027cd1bce9.scope: Consumed 2.129s CPU time.
Dec 06 08:08:48 compute-0 podman[389203]: 2025-12-06 08:08:48.505570632 +0000 UTC m=+0.040765990 container remove 288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:08:48 compute-0 podman[389109]: 2025-12-06 08:08:48.510033123 +0000 UTC m=+2.937215568 container died 4acb5639aacbe6ce47a5c87e210db9877b81b38871692db36357ec027cd1bce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:08:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:48.512 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ee087607-f0e1-4114-9bb7-1bc139317ac3]: (4, ('Sat Dec  6 08:08:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2 (288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7)\n288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7\nSat Dec  6 08:08:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2 (288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7)\n288fdba414e0a8ac158fbd9123bdfe10c17d3cb76b24c9e058cdc7125a4432e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:48.514 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[be016e4d-5bb9-4a68-9690-eff36accf824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:48.515 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb39fdb1c-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:08:48 compute-0 kernel: tapb39fdb1c-60: left promiscuous mode
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.517 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.531 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:48.537 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[458f99c4-18d5-4311-a2d0-1516b2b131c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-158ec3c55b4d56869beecfab808fa153b3a603d00f1c7162e285157ba75c7073-merged.mount: Deactivated successfully.
Dec 06 08:08:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:48.556 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7e455ab3-a19e-4eb9-82ad-018688354533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:48.557 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad701c0-b148-4869-ad12-7596f721bc21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:48 compute-0 podman[389109]: 2025-12-06 08:08:48.568871051 +0000 UTC m=+2.996053496 container remove 4acb5639aacbe6ce47a5c87e210db9877b81b38871692db36357ec027cd1bce9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec 06 08:08:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:48.573 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a3247190-ec92-4b14-afb3-84c9cab965fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 871819, 'reachable_time': 34504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389232, 'error': None, 'target': 'ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:48 compute-0 systemd[1]: run-netns-ovnmeta\x2db39fdb1c\x2d6386\x2d42dc\x2d9c1d\x2de70684ee69f2.mount: Deactivated successfully.
Dec 06 08:08:48 compute-0 systemd[1]: libpod-conmon-4acb5639aacbe6ce47a5c87e210db9877b81b38871692db36357ec027cd1bce9.scope: Deactivated successfully.
Dec 06 08:08:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:48.595 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b39fdb1c-6386-42dc-9c1d-e70684ee69f2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:08:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:48.596 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[4288d9b0-78b1-4006-9d07-73b027d0198d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:48 compute-0 sudo[389002]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:08:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:08:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:08:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:08:48 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1ce46182-bf22-466e-abf1-16f982989e98 does not exist
Dec 06 08:08:48 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8e1a6621-0ea3-41b4-ba98-b29fd734d6f7 does not exist
Dec 06 08:08:48 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 71687001-195e-47c6-8546-6a26d0d3e255 does not exist
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.676 251996 DEBUG nova.objects.instance [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lazy-loading 'system_metadata' on Instance uuid 131d5537-9b5a-407d-97af-efc5bd314951 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:08:48 compute-0 sudo[389233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:48 compute-0 sudo[389233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:48 compute-0 sudo[389233]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:48 compute-0 sudo[389258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:08:48 compute-0 sudo[389258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:48 compute-0 sudo[389258]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.830 251996 DEBUG oslo_concurrency.lockutils [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.831 251996 DEBUG oslo_concurrency.lockutils [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.831 251996 DEBUG nova.network.neutron [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.863 251996 DEBUG nova.objects.instance [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lazy-loading 'flavor' on Instance uuid 131d5537-9b5a-407d-97af-efc5bd314951 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.928 251996 DEBUG nova.virt.libvirt.vif [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416534484',display_name='tempest-TestNetworkBasicOps-server-416534484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416534484',id=191,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcK72NHJQZj1SOlzPXNIyj/EahKYKvzGuI9wWjXw21tyomau5BzaHrS65HPkCW6d+F/TpM4Nf1hp15CwV3oJtnsWhkxf1U/DQj7i/qxu5KN1mZmgoDo9AVVhX47DkyW5Q==',key_name='tempest-TestNetworkBasicOps-1452587742',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:06:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-0jy5h723',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:06:53Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=131d5537-9b5a-407d-97af-efc5bd314951,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.928 251996 DEBUG nova.network.os_vif_util [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converting VIF {"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.929 251996 DEBUG nova.network.os_vif_util [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=e01db4de-5597-4913-b15d-568789f0cf17,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape01db4de-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.932 251996 DEBUG nova.virt.libvirt.guest [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:70:ec:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape01db4de-55"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.935 251996 DEBUG nova.virt.libvirt.guest [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:70:ec:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape01db4de-55"/></interface>not found in domain: <domain type='kvm' id='89'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <name>instance-000000bf</name>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <uuid>131d5537-9b5a-407d-97af-efc5bd314951</uuid>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:name>tempest-TestNetworkBasicOps-server-416534484</nova:name>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:creationTime>2025-12-06 08:08:46</nova:creationTime>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:flavor name="m1.nano">
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:memory>128</nova:memory>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:disk>1</nova:disk>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:swap>0</nova:swap>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:vcpus>1</nova:vcpus>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </nova:flavor>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:owner>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </nova:owner>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:ports>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:port uuid="cbe16ad6-d576-4461-9682-554b48a77542">
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </nova:ports>
Dec 06 08:08:48 compute-0 nova_compute[251992]: </nova:instance>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <memory unit='KiB'>131072</memory>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <vcpu placement='static'>1</vcpu>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <resource>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <partition>/machine</partition>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </resource>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <sysinfo type='smbios'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <system>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <entry name='manufacturer'>RDO</entry>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <entry name='serial'>131d5537-9b5a-407d-97af-efc5bd314951</entry>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <entry name='uuid'>131d5537-9b5a-407d-97af-efc5bd314951</entry>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <entry name='family'>Virtual Machine</entry>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </system>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <os>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <boot dev='hd'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <smbios mode='sysinfo'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </os>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <features>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <vmcoreinfo state='on'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </features>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <model fallback='forbid'>Nehalem</model>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <feature policy='require' name='x2apic'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <feature policy='require' name='hypervisor'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <feature policy='require' name='vme'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <clock offset='utc'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <timer name='hpet' present='no'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <on_poweroff>destroy</on_poweroff>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <on_reboot>restart</on_reboot>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <on_crash>destroy</on_crash>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <disk type='network' device='disk'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <auth username='openstack'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <source protocol='rbd' name='vms/131d5537-9b5a-407d-97af-efc5bd314951_disk' index='2'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <host name='192.168.122.100' port='6789'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <host name='192.168.122.102' port='6789'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <host name='192.168.122.101' port='6789'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       </source>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target dev='vda' bus='virtio'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='virtio-disk0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <disk type='network' device='cdrom'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <auth username='openstack'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <source protocol='rbd' name='vms/131d5537-9b5a-407d-97af-efc5bd314951_disk.config' index='1'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <host name='192.168.122.100' port='6789'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <host name='192.168.122.102' port='6789'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <host name='192.168.122.101' port='6789'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       </source>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target dev='sda' bus='sata'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <readonly/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='sata0-0-0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pcie.0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='1' port='0x10'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='2' port='0x11'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.2'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='3' port='0x12'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.3'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='4' port='0x13'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.4'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='5' port='0x14'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.5'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='6' port='0x15'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.6'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='7' port='0x16'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.7'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='8' port='0x17'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.8'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='9' port='0x18'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.9'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='10' port='0x19'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.10'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='11' port='0x1a'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.11'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='12' port='0x1b'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.12'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='13' port='0x1c'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.13'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='14' port='0x1d'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.14'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='15' port='0x1e'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.15'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='16' port='0x1f'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.16'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='17' port='0x20'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.17'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='18' port='0x21'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.18'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='19' port='0x22'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.19'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='20' port='0x23'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.20'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='21' port='0x24'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.21'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='22' port='0x25'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.22'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='23' port='0x26'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.23'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='24' port='0x27'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.24'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='25' port='0x28'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.25'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-pci-bridge'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.26'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='usb'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='sata' index='0'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='ide'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <interface type='ethernet'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <mac address='fa:16:3e:e3:9a:61'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target dev='tapcbe16ad6-d5'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model type='virtio'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <mtu size='1442'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='net0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <serial type='pty'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <source path='/dev/pts/0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <log file='/var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/console.log' append='off'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target type='isa-serial' port='0'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <model name='isa-serial'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       </target>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='serial0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <source path='/dev/pts/0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <log file='/var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/console.log' append='off'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target type='serial' port='0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='serial0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </console>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <input type='tablet' bus='usb'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='input0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='usb' bus='0' port='1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </input>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <input type='mouse' bus='ps2'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='input1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </input>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <input type='keyboard' bus='ps2'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='input2'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </input>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <listen type='address' address='::0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </graphics>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <audio id='1' type='none'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <video>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='video0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </video>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <watchdog model='itco' action='reset'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='watchdog0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </watchdog>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <memballoon model='virtio'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <stats period='10'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='balloon0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <rng model='virtio'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <backend model='random'>/dev/urandom</backend>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='rng0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <label>system_u:system_r:svirt_t:s0:c240,c950</label>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c240,c950</imagelabel>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </seclabel>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <label>+107:+107</label>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <imagelabel>+107:+107</imagelabel>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </seclabel>
Dec 06 08:08:48 compute-0 nova_compute[251992]: </domain>
Dec 06 08:08:48 compute-0 nova_compute[251992]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.935 251996 DEBUG nova.virt.libvirt.guest [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:70:ec:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape01db4de-55"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.940 251996 DEBUG nova.virt.libvirt.guest [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:70:ec:42"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tape01db4de-55"/></interface>not found in domain: <domain type='kvm' id='89'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <name>instance-000000bf</name>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <uuid>131d5537-9b5a-407d-97af-efc5bd314951</uuid>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:name>tempest-TestNetworkBasicOps-server-416534484</nova:name>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:creationTime>2025-12-06 08:08:46</nova:creationTime>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:flavor name="m1.nano">
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:memory>128</nova:memory>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:disk>1</nova:disk>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:swap>0</nova:swap>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:vcpus>1</nova:vcpus>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </nova:flavor>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:owner>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </nova:owner>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:ports>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:port uuid="cbe16ad6-d576-4461-9682-554b48a77542">
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </nova:ports>
Dec 06 08:08:48 compute-0 nova_compute[251992]: </nova:instance>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <memory unit='KiB'>131072</memory>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <currentMemory unit='KiB'>131072</currentMemory>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <vcpu placement='static'>1</vcpu>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <resource>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <partition>/machine</partition>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </resource>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <sysinfo type='smbios'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <system>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <entry name='manufacturer'>RDO</entry>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <entry name='product'>OpenStack Compute</entry>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <entry name='serial'>131d5537-9b5a-407d-97af-efc5bd314951</entry>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <entry name='uuid'>131d5537-9b5a-407d-97af-efc5bd314951</entry>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <entry name='family'>Virtual Machine</entry>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </system>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <os>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <boot dev='hd'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <smbios mode='sysinfo'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </os>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <features>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <vmcoreinfo state='on'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </features>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <cpu mode='custom' match='exact' check='full'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <model fallback='forbid'>Nehalem</model>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <feature policy='require' name='x2apic'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <feature policy='require' name='hypervisor'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <feature policy='require' name='vme'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <clock offset='utc'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <timer name='pit' tickpolicy='delay'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <timer name='rtc' tickpolicy='catchup'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <timer name='hpet' present='no'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <on_poweroff>destroy</on_poweroff>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <on_reboot>restart</on_reboot>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <on_crash>destroy</on_crash>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <disk type='network' device='disk'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <auth username='openstack'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <source protocol='rbd' name='vms/131d5537-9b5a-407d-97af-efc5bd314951_disk' index='2'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <host name='192.168.122.100' port='6789'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <host name='192.168.122.102' port='6789'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <host name='192.168.122.101' port='6789'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       </source>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target dev='vda' bus='virtio'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='virtio-disk0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <disk type='network' device='cdrom'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <driver name='qemu' type='raw' cache='none'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <auth username='openstack'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <secret type='ceph' uuid='40a1bae4-cf76-5610-8dab-c75116dfe0bb'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <source protocol='rbd' name='vms/131d5537-9b5a-407d-97af-efc5bd314951_disk.config' index='1'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <host name='192.168.122.100' port='6789'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <host name='192.168.122.102' port='6789'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <host name='192.168.122.101' port='6789'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       </source>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target dev='sda' bus='sata'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <readonly/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='sata0-0-0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='0' model='pcie-root'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pcie.0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='1' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='1' port='0x10'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='2' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='2' port='0x11'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.2'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='3' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='3' port='0x12'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.3'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='4' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='4' port='0x13'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.4'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='5' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='5' port='0x14'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.5'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='6' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='6' port='0x15'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.6'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='7' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='7' port='0x16'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.7'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='8' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='8' port='0x17'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.8'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='9' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='9' port='0x18'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.9'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='10' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='10' port='0x19'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.10'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='11' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='11' port='0x1a'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.11'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='12' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='12' port='0x1b'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.12'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='13' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='13' port='0x1c'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.13'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='14' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='14' port='0x1d'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.14'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='15' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='15' port='0x1e'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.15'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='16' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='16' port='0x1f'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.16'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='17' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='17' port='0x20'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.17'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='18' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='18' port='0x21'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.18'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='19' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='19' port='0x22'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.19'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='20' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='20' port='0x23'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.20'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='21' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='21' port='0x24'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.21'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='22' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='22' port='0x25'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.22'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='23' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='23' port='0x26'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.23'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='24' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='24' port='0x27'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.24'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='25' model='pcie-root-port'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-root-port'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target chassis='25' port='0x28'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.25'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model name='pcie-pci-bridge'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='pci.26'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='usb' index='0' model='piix3-uhci'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='usb'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <controller type='sata' index='0'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='ide'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </controller>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <interface type='ethernet'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <mac address='fa:16:3e:e3:9a:61'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target dev='tapcbe16ad6-d5'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model type='virtio'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <driver name='vhost' rx_queue_size='512'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <mtu size='1442'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='net0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <serial type='pty'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <source path='/dev/pts/0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <log file='/var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/console.log' append='off'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target type='isa-serial' port='0'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:         <model name='isa-serial'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       </target>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='serial0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <console type='pty' tty='/dev/pts/0'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <source path='/dev/pts/0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <log file='/var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951/console.log' append='off'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <target type='serial' port='0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='serial0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </console>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <input type='tablet' bus='usb'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='input0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='usb' bus='0' port='1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </input>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <input type='mouse' bus='ps2'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='input1'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </input>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <input type='keyboard' bus='ps2'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='input2'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </input>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <listen type='address' address='::0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </graphics>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <audio id='1' type='none'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <video>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <model type='virtio' heads='1' primary='yes'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='video0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </video>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <watchdog model='itco' action='reset'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='watchdog0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </watchdog>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <memballoon model='virtio'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <stats period='10'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='balloon0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <rng model='virtio'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <backend model='random'>/dev/urandom</backend>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <alias name='rng0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <label>system_u:system_r:svirt_t:s0:c240,c950</label>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c240,c950</imagelabel>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </seclabel>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <label>+107:+107</label>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <imagelabel>+107:+107</imagelabel>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </seclabel>
Dec 06 08:08:48 compute-0 nova_compute[251992]: </domain>
Dec 06 08:08:48 compute-0 nova_compute[251992]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.940 251996 WARNING nova.virt.libvirt.driver [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Detaching interface fa:16:3e:70:ec:42 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tape01db4de-55' not found.
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.941 251996 DEBUG nova.virt.libvirt.vif [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416534484',display_name='tempest-TestNetworkBasicOps-server-416534484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416534484',id=191,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcK72NHJQZj1SOlzPXNIyj/EahKYKvzGuI9wWjXw21tyomau5BzaHrS65HPkCW6d+F/TpM4Nf1hp15CwV3oJtnsWhkxf1U/DQj7i/qxu5KN1mZmgoDo9AVVhX47DkyW5Q==',key_name='tempest-TestNetworkBasicOps-1452587742',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:06:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-0jy5h723',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:06:53Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=131d5537-9b5a-407d-97af-efc5bd314951,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.941 251996 DEBUG nova.network.os_vif_util [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converting VIF {"id": "e01db4de-5597-4913-b15d-568789f0cf17", "address": "fa:16:3e:70:ec:42", "network": {"id": "b39fdb1c-6386-42dc-9c1d-e70684ee69f2", "bridge": "br-int", "label": "tempest-network-smoke--1171047386", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape01db4de-55", "ovs_interfaceid": "e01db4de-5597-4913-b15d-568789f0cf17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.942 251996 DEBUG nova.network.os_vif_util [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=e01db4de-5597-4913-b15d-568789f0cf17,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape01db4de-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.943 251996 DEBUG os_vif [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=e01db4de-5597-4913-b15d-568789f0cf17,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape01db4de-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.944 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.944 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape01db4de-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.945 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.946 251996 INFO os_vif [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:ec:42,bridge_name='br-int',has_traffic_filtering=True,id=e01db4de-5597-4913-b15d-568789f0cf17,network=Network(b39fdb1c-6386-42dc-9c1d-e70684ee69f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape01db4de-55')
Dec 06 08:08:48 compute-0 nova_compute[251992]: 2025-12-06 08:08:48.947 251996 DEBUG nova.virt.libvirt.guest [req-b4cb3291-a95c-4f4e-8755-e60bfae5c3c7 req-181b8506-fce4-49e8-be14-37f6c690bfa6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:name>tempest-TestNetworkBasicOps-server-416534484</nova:name>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:creationTime>2025-12-06 08:08:48</nova:creationTime>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:flavor name="m1.nano">
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:memory>128</nova:memory>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:disk>1</nova:disk>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:swap>0</nova:swap>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:vcpus>1</nova:vcpus>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </nova:flavor>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:owner>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </nova:owner>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   <nova:ports>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     <nova:port uuid="cbe16ad6-d576-4461-9682-554b48a77542">
Dec 06 08:08:48 compute-0 nova_compute[251992]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 08:08:48 compute-0 nova_compute[251992]:     </nova:port>
Dec 06 08:08:48 compute-0 nova_compute[251992]:   </nova:ports>
Dec 06 08:08:48 compute-0 nova_compute[251992]: </nova:instance>
Dec 06 08:08:48 compute-0 nova_compute[251992]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Dec 06 08:08:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:49 compute-0 ovn_controller[147168]: 2025-12-06T08:08:49Z|00746|binding|INFO|Releasing lport bc641a00-a4a7-4082-9aee-a2ad8d4616c8 from this chassis (sb_readonly=0)
Dec 06 08:08:49 compute-0 nova_compute[251992]: 2025-12-06 08:08:49.421 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:49 compute-0 ceph-mon[74339]: pgmap v3482: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 102 op/s
Dec 06 08:08:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:08:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:08:49 compute-0 nova_compute[251992]: 2025-12-06 08:08:49.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:49 compute-0 nova_compute[251992]: 2025-12-06 08:08:49.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:08:49 compute-0 nova_compute[251992]: 2025-12-06 08:08:49.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:08:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 102 op/s
Dec 06 08:08:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:49.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:49 compute-0 nova_compute[251992]: 2025-12-06 08:08:49.831 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:08:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:08:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:50.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:08:50 compute-0 ceph-mon[74339]: pgmap v3483: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 102 op/s
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.799 251996 DEBUG nova.network.neutron [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updating instance_info_cache with network_info: [{"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.847 251996 DEBUG oslo_concurrency.lockutils [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.850 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.850 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.851 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 131d5537-9b5a-407d-97af-efc5bd314951 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.885 251996 DEBUG oslo_concurrency.lockutils [None req-e7167a36-d30b-4b50-9b3d-0aa65c73d1d4 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "interface-131d5537-9b5a-407d-97af-efc5bd314951-e01db4de-5597-4913-b15d-568789f0cf17" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.923 251996 DEBUG nova.compute.manager [req-c822f93e-5b55-40ba-99bb-681e0f3c3185 req-7729c1b2-67bd-44fc-9951-4b7098b037c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-changed-cbe16ad6-d576-4461-9682-554b48a77542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.924 251996 DEBUG nova.compute.manager [req-c822f93e-5b55-40ba-99bb-681e0f3c3185 req-7729c1b2-67bd-44fc-9951-4b7098b037c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Refreshing instance network info cache due to event network-changed-cbe16ad6-d576-4461-9682-554b48a77542. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.924 251996 DEBUG oslo_concurrency.lockutils [req-c822f93e-5b55-40ba-99bb-681e0f3c3185 req-7729c1b2-67bd-44fc-9951-4b7098b037c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.985 251996 DEBUG oslo_concurrency.lockutils [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "131d5537-9b5a-407d-97af-efc5bd314951" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.986 251996 DEBUG oslo_concurrency.lockutils [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.986 251996 DEBUG oslo_concurrency.lockutils [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "131d5537-9b5a-407d-97af-efc5bd314951-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.987 251996 DEBUG oslo_concurrency.lockutils [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.988 251996 DEBUG oslo_concurrency.lockutils [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.990 251996 INFO nova.compute.manager [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Terminating instance
Dec 06 08:08:50 compute-0 nova_compute[251992]: 2025-12-06 08:08:50.992 251996 DEBUG nova.compute.manager [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:08:51 compute-0 kernel: tapcbe16ad6-d5 (unregistering): left promiscuous mode
Dec 06 08:08:51 compute-0 NetworkManager[48965]: <info>  [1765008531.0471] device (tapcbe16ad6-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.054 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:51 compute-0 ovn_controller[147168]: 2025-12-06T08:08:51Z|00747|binding|INFO|Releasing lport cbe16ad6-d576-4461-9682-554b48a77542 from this chassis (sb_readonly=0)
Dec 06 08:08:51 compute-0 ovn_controller[147168]: 2025-12-06T08:08:51Z|00748|binding|INFO|Setting lport cbe16ad6-d576-4461-9682-554b48a77542 down in Southbound
Dec 06 08:08:51 compute-0 ovn_controller[147168]: 2025-12-06T08:08:51Z|00749|binding|INFO|Removing iface tapcbe16ad6-d5 ovn-installed in OVS
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.056 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.061 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:9a:61 10.100.0.10'], port_security=['fa:16:3e:e3:9a:61 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '131d5537-9b5a-407d-97af-efc5bd314951', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '96bfcbe0-efa0-4be3-8868-9b784c4417a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e1d5a37-aa54-4385-9ff9-4f158377e4c1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=cbe16ad6-d576-4461-9682-554b48a77542) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.063 158118 INFO neutron.agent.ovn.metadata.agent [-] Port cbe16ad6-d576-4461-9682-554b48a77542 in datapath 1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0 unbound from our chassis
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.064 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.065 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8028371a-eb75-4f86-a5bf-3549d2b557e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.065 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0 namespace which is not needed anymore
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.075 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:51 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000bf.scope: Deactivated successfully.
Dec 06 08:08:51 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000bf.scope: Consumed 19.327s CPU time.
Dec 06 08:08:51 compute-0 systemd-machined[212986]: Machine qemu-89-instance-000000bf terminated.
Dec 06 08:08:51 compute-0 neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0[386724]: [NOTICE]   (386728) : haproxy version is 2.8.14-c23fe91
Dec 06 08:08:51 compute-0 neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0[386724]: [NOTICE]   (386728) : path to executable is /usr/sbin/haproxy
Dec 06 08:08:51 compute-0 neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0[386724]: [WARNING]  (386728) : Exiting Master process...
Dec 06 08:08:51 compute-0 neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0[386724]: [ALERT]    (386728) : Current worker (386730) exited with code 143 (Terminated)
Dec 06 08:08:51 compute-0 neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0[386724]: [WARNING]  (386728) : All workers exited. Exiting... (0)
Dec 06 08:08:51 compute-0 systemd[1]: libpod-b7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046.scope: Deactivated successfully.
Dec 06 08:08:51 compute-0 podman[389311]: 2025-12-06 08:08:51.195892638 +0000 UTC m=+0.041506171 container died b7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.230 251996 INFO nova.virt.libvirt.driver [-] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Instance destroyed successfully.
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.231 251996 DEBUG nova.objects.instance [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid 131d5537-9b5a-407d-97af-efc5bd314951 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:08:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046-userdata-shm.mount: Deactivated successfully.
Dec 06 08:08:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6892a1eb0f50d2161494fc9ca0a3586016d47a5fb5b76871cb9a24e4ea9455ba-merged.mount: Deactivated successfully.
Dec 06 08:08:51 compute-0 podman[389311]: 2025-12-06 08:08:51.252011203 +0000 UTC m=+0.097624726 container cleanup b7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:08:51 compute-0 systemd[1]: libpod-conmon-b7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046.scope: Deactivated successfully.
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.268 251996 DEBUG nova.virt.libvirt.vif [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-416534484',display_name='tempest-TestNetworkBasicOps-server-416534484',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-416534484',id=191,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcK72NHJQZj1SOlzPXNIyj/EahKYKvzGuI9wWjXw21tyomau5BzaHrS65HPkCW6d+F/TpM4Nf1hp15CwV3oJtnsWhkxf1U/DQj7i/qxu5KN1mZmgoDo9AVVhX47DkyW5Q==',key_name='tempest-TestNetworkBasicOps-1452587742',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:06:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-0jy5h723',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:06:53Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=131d5537-9b5a-407d-97af-efc5bd314951,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.268 251996 DEBUG nova.network.os_vif_util [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.269 251996 DEBUG nova.network.os_vif_util [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:9a:61,bridge_name='br-int',has_traffic_filtering=True,id=cbe16ad6-d576-4461-9682-554b48a77542,network=Network(1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbe16ad6-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.269 251996 DEBUG os_vif [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:9a:61,bridge_name='br-int',has_traffic_filtering=True,id=cbe16ad6-d576-4461-9682-554b48a77542,network=Network(1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbe16ad6-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.270 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.270 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbe16ad6-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.296 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.298 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.301 251996 INFO os_vif [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:9a:61,bridge_name='br-int',has_traffic_filtering=True,id=cbe16ad6-d576-4461-9682-554b48a77542,network=Network(1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbe16ad6-d5')
Dec 06 08:08:51 compute-0 podman[389350]: 2025-12-06 08:08:51.33793129 +0000 UTC m=+0.055034205 container remove b7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.343 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b034e9-437a-4867-b622-c44cf049b473]: (4, ('Sat Dec  6 08:08:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0 (b7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046)\nb7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046\nSat Dec  6 08:08:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0 (b7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046)\nb7b68919fc1020ca0bb2fdb065b835c77cbf9e3c1ec9d10f69525825a89ab046\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.345 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5a64ba34-39ba-4a56-8b3f-d6a7fcf9de10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.346 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1d56bd3d-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:08:51 compute-0 kernel: tap1d56bd3d-50: left promiscuous mode
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.347 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.350 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.352 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7b8eb3-06c4-4625-8e3a-a61189285bfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.362 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.374 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc96ff1-8aa3-44d5-a0a9-1922030b9d0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.375 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1d0cfd-9477-48d7-9e78-31084b1c4bb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.389 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9c637e-25d8-4f4e-8a04-a9a4d96fa48a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 868389, 'reachable_time': 27659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389380, 'error': None, 'target': 'ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.391 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:08:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:08:51.391 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c16ca6-c8b5-408d-b96d-011513d39648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:08:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d1d56bd3d\x2d5dd9\x2d4d72\x2d8ef6\x2d80a2c18f25b0.mount: Deactivated successfully.
Dec 06 08:08:51 compute-0 sudo[389381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:51 compute-0 sudo[389381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:51 compute-0 sudo[389381]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/6449198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2217186686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:51 compute-0 sudo[389406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:08:51 compute-0 sudo[389406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:08:51 compute-0 sudo[389406]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.660 251996 INFO nova.virt.libvirt.driver [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Deleting instance files /var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951_del
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.661 251996 INFO nova.virt.libvirt.driver [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Deletion of /var/lib/nova/instances/131d5537-9b5a-407d-97af-efc5bd314951_del complete
Dec 06 08:08:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 271 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 139 op/s
Dec 06 08:08:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:08:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:51.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.865 251996 INFO nova.compute.manager [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Took 0.87 seconds to destroy the instance on the hypervisor.
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.865 251996 DEBUG oslo.service.loopingcall [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.866 251996 DEBUG nova.compute.manager [-] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:08:51 compute-0 nova_compute[251992]: 2025-12-06 08:08:51.866 251996 DEBUG nova.network.neutron [-] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:08:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:52.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:52 compute-0 ceph-mon[74339]: pgmap v3484: 305 pgs: 305 active+clean; 271 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 139 op/s
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.102 251996 DEBUG nova.compute.manager [req-c00e0fd6-a77d-43ec-bd9c-82f67f38a5dc req-7240f607-722b-4a6e-91ab-8fa253b85b3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-vif-unplugged-cbe16ad6-d576-4461-9682-554b48a77542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.103 251996 DEBUG oslo_concurrency.lockutils [req-c00e0fd6-a77d-43ec-bd9c-82f67f38a5dc req-7240f607-722b-4a6e-91ab-8fa253b85b3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "131d5537-9b5a-407d-97af-efc5bd314951-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.103 251996 DEBUG oslo_concurrency.lockutils [req-c00e0fd6-a77d-43ec-bd9c-82f67f38a5dc req-7240f607-722b-4a6e-91ab-8fa253b85b3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.104 251996 DEBUG oslo_concurrency.lockutils [req-c00e0fd6-a77d-43ec-bd9c-82f67f38a5dc req-7240f607-722b-4a6e-91ab-8fa253b85b3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.104 251996 DEBUG nova.compute.manager [req-c00e0fd6-a77d-43ec-bd9c-82f67f38a5dc req-7240f607-722b-4a6e-91ab-8fa253b85b3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] No waiting events found dispatching network-vif-unplugged-cbe16ad6-d576-4461-9682-554b48a77542 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.105 251996 DEBUG nova.compute.manager [req-c00e0fd6-a77d-43ec-bd9c-82f67f38a5dc req-7240f607-722b-4a6e-91ab-8fa253b85b3b 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-vif-unplugged-cbe16ad6-d576-4461-9682-554b48a77542 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.121 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.386 251996 DEBUG nova.network.neutron [-] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.426 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updating instance_info_cache with network_info: [{"id": "cbe16ad6-d576-4461-9682-554b48a77542", "address": "fa:16:3e:e3:9a:61", "network": {"id": "1d56bd3d-5dd9-4d72-8ef6-80a2c18f25b0", "bridge": "br-int", "label": "tempest-network-smoke--38960524", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbe16ad6-d5", "ovs_interfaceid": "cbe16ad6-d576-4461-9682-554b48a77542", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.451 251996 INFO nova.compute.manager [-] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Took 1.59 seconds to deallocate network for instance.
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.464 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.465 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.465 251996 DEBUG oslo_concurrency.lockutils [req-c822f93e-5b55-40ba-99bb-681e0f3c3185 req-7729c1b2-67bd-44fc-9951-4b7098b037c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.466 251996 DEBUG nova.network.neutron [req-c822f93e-5b55-40ba-99bb-681e0f3c3185 req-7729c1b2-67bd-44fc-9951-4b7098b037c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Refreshing network info cache for port cbe16ad6-d576-4461-9682-554b48a77542 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.467 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.521 251996 DEBUG nova.compute.manager [req-2260c4c6-0483-404f-bfff-a12fd6ec7fc0 req-3719c3ba-b1aa-4c21-8fab-aede940d0c92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-vif-deleted-cbe16ad6-d576-4461-9682-554b48a77542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.536 251996 DEBUG oslo_concurrency.lockutils [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.536 251996 DEBUG oslo_concurrency.lockutils [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.597 251996 DEBUG oslo_concurrency.processutils [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.664 251996 INFO nova.network.neutron [req-c822f93e-5b55-40ba-99bb-681e0f3c3185 req-7729c1b2-67bd-44fc-9951-4b7098b037c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Port cbe16ad6-d576-4461-9682-554b48a77542 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 06 08:08:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 271 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 2.0 MiB/s wr, 56 op/s
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.665 251996 DEBUG nova.network.neutron [req-c822f93e-5b55-40ba-99bb-681e0f3c3185 req-7729c1b2-67bd-44fc-9951-4b7098b037c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:08:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:53.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:53 compute-0 nova_compute[251992]: 2025-12-06 08:08:53.905 251996 DEBUG oslo_concurrency.lockutils [req-c822f93e-5b55-40ba-99bb-681e0f3c3185 req-7729c1b2-67bd-44fc-9951-4b7098b037c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-131d5537-9b5a-407d-97af-efc5bd314951" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:08:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:08:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854929407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:54 compute-0 nova_compute[251992]: 2025-12-06 08:08:54.016 251996 DEBUG oslo_concurrency.processutils [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:08:54 compute-0 nova_compute[251992]: 2025-12-06 08:08:54.026 251996 DEBUG nova.compute.provider_tree [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:08:54 compute-0 nova_compute[251992]: 2025-12-06 08:08:54.084 251996 DEBUG nova.scheduler.client.report [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:08:54 compute-0 nova_compute[251992]: 2025-12-06 08:08:54.190 251996 DEBUG oslo_concurrency.lockutils [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:54 compute-0 nova_compute[251992]: 2025-12-06 08:08:54.214 251996 INFO nova.scheduler.client.report [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance 131d5537-9b5a-407d-97af-efc5bd314951
Dec 06 08:08:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:54.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:54 compute-0 ceph-mon[74339]: pgmap v3485: 305 pgs: 305 active+clean; 271 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 2.0 MiB/s wr, 56 op/s
Dec 06 08:08:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1854929407' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:08:54 compute-0 nova_compute[251992]: 2025-12-06 08:08:54.916 251996 DEBUG oslo_concurrency.lockutils [None req-3cc36ceb-73ba-437b-af09-b67c53d1f347 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:55 compute-0 nova_compute[251992]: 2025-12-06 08:08:55.201 251996 DEBUG nova.compute.manager [req-d037e8f9-0166-4d6e-af92-9792d9c388cd req-f04cc12b-f2a2-412c-89a3-b41826acb478 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received event network-vif-plugged-cbe16ad6-d576-4461-9682-554b48a77542 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:08:55 compute-0 nova_compute[251992]: 2025-12-06 08:08:55.201 251996 DEBUG oslo_concurrency.lockutils [req-d037e8f9-0166-4d6e-af92-9792d9c388cd req-f04cc12b-f2a2-412c-89a3-b41826acb478 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "131d5537-9b5a-407d-97af-efc5bd314951-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:08:55 compute-0 nova_compute[251992]: 2025-12-06 08:08:55.202 251996 DEBUG oslo_concurrency.lockutils [req-d037e8f9-0166-4d6e-af92-9792d9c388cd req-f04cc12b-f2a2-412c-89a3-b41826acb478 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:08:55 compute-0 nova_compute[251992]: 2025-12-06 08:08:55.202 251996 DEBUG oslo_concurrency.lockutils [req-d037e8f9-0166-4d6e-af92-9792d9c388cd req-f04cc12b-f2a2-412c-89a3-b41826acb478 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "131d5537-9b5a-407d-97af-efc5bd314951-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:08:55 compute-0 nova_compute[251992]: 2025-12-06 08:08:55.202 251996 DEBUG nova.compute.manager [req-d037e8f9-0166-4d6e-af92-9792d9c388cd req-f04cc12b-f2a2-412c-89a3-b41826acb478 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] No waiting events found dispatching network-vif-plugged-cbe16ad6-d576-4461-9682-554b48a77542 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:08:55 compute-0 nova_compute[251992]: 2025-12-06 08:08:55.202 251996 WARNING nova.compute.manager [req-d037e8f9-0166-4d6e-af92-9792d9c388cd req-f04cc12b-f2a2-412c-89a3-b41826acb478 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Received unexpected event network-vif-plugged-cbe16ad6-d576-4461-9682-554b48a77542 for instance with vm_state deleted and task_state None.
Dec 06 08:08:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 232 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Dec 06 08:08:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:55.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:56 compute-0 nova_compute[251992]: 2025-12-06 08:08:56.297 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:56.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:56 compute-0 ceph-mon[74339]: pgmap v3486: 305 pgs: 305 active+clean; 232 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Dec 06 08:08:56 compute-0 nova_compute[251992]: 2025-12-06 08:08:56.863 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 06 08:08:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:57.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:08:58 compute-0 nova_compute[251992]: 2025-12-06 08:08:58.124 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:08:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:08:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:08:58.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:08:58 compute-0 nova_compute[251992]: 2025-12-06 08:08:58.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:08:58 compute-0 nova_compute[251992]: 2025-12-06 08:08:58.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:08:58 compute-0 ceph-mon[74339]: pgmap v3487: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 06 08:08:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:08:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 06 08:08:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:08:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:08:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:08:59.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:00 compute-0 nova_compute[251992]: 2025-12-06 08:09:00.091 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:00 compute-0 nova_compute[251992]: 2025-12-06 08:09:00.233 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:00.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:00 compute-0 nova_compute[251992]: 2025-12-06 08:09:00.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:00 compute-0 ceph-mon[74339]: pgmap v3488: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 06 08:09:01 compute-0 nova_compute[251992]: 2025-12-06 08:09:01.299 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 08:09:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:01.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:02.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:02 compute-0 ceph-mon[74339]: pgmap v3489: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 08:09:03 compute-0 nova_compute[251992]: 2025-12-06 08:09:03.127 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:03 compute-0 podman[389461]: 2025-12-06 08:09:03.459776585 +0000 UTC m=+0.104984595 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 06 08:09:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 247 KiB/s rd, 193 KiB/s wr, 55 op/s
Dec 06 08:09:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:03.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:09:03.885 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:09:03.885 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:09:03.885 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:04.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:04 compute-0 ceph-mon[74339]: pgmap v3490: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 247 KiB/s rd, 193 KiB/s wr, 55 op/s
Dec 06 08:09:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 165 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 1.2 MiB/s wr, 111 op/s
Dec 06 08:09:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:05.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1965870762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:09:06 compute-0 nova_compute[251992]: 2025-12-06 08:09:06.229 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008531.2283456, 131d5537-9b5a-407d-97af-efc5bd314951 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:09:06 compute-0 nova_compute[251992]: 2025-12-06 08:09:06.230 251996 INFO nova.compute.manager [-] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] VM Stopped (Lifecycle Event)
Dec 06 08:09:06 compute-0 nova_compute[251992]: 2025-12-06 08:09:06.308 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:06 compute-0 nova_compute[251992]: 2025-12-06 08:09:06.333 251996 DEBUG nova.compute.manager [None req-ffe9400a-278a-42e4-ac42-1910a88c4305 - - - - - -] [instance: 131d5537-9b5a-407d-97af-efc5bd314951] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:09:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:06 compute-0 ceph-mon[74339]: pgmap v3491: 305 pgs: 305 active+clean; 165 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 1.2 MiB/s wr, 111 op/s
Dec 06 08:09:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2693546772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:09:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 08:09:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:07.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:08 compute-0 nova_compute[251992]: 2025-12-06 08:09:08.129 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:09 compute-0 ceph-mon[74339]: pgmap v3492: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 08:09:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4283153530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:09:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4283153530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:09:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 06 08:09:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:09.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:10 compute-0 podman[389492]: 2025-12-06 08:09:10.434356962 +0000 UTC m=+0.094005065 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 08:09:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:10.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:10 compute-0 podman[389493]: 2025-12-06 08:09:10.47091861 +0000 UTC m=+0.118779855 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 08:09:11 compute-0 nova_compute[251992]: 2025-12-06 08:09:11.309 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:11 compute-0 ceph-mon[74339]: pgmap v3493: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Dec 06 08:09:11 compute-0 sudo[389534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:11 compute-0 sudo[389534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:11 compute-0 sudo[389534]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Dec 06 08:09:11 compute-0 sudo[389559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:11 compute-0 sudo[389559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:11 compute-0 sudo[389559]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:11.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:12.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:12 compute-0 ceph-mon[74339]: pgmap v3494: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Dec 06 08:09:12 compute-0 nova_compute[251992]: 2025-12-06 08:09:12.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:12 compute-0 nova_compute[251992]: 2025-12-06 08:09:12.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:09:12 compute-0 nova_compute[251992]: 2025-12-06 08:09:12.684 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:09:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:09:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:09:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:09:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:09:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:09:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:09:13 compute-0 nova_compute[251992]: 2025-12-06 08:09:13.189 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec 06 08:09:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:13.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:14.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:14 compute-0 nova_compute[251992]: 2025-12-06 08:09:14.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:15 compute-0 ceph-mon[74339]: pgmap v3495: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec 06 08:09:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec 06 08:09:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:15.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:16 compute-0 nova_compute[251992]: 2025-12-06 08:09:16.311 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:16.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:17 compute-0 ceph-mon[74339]: pgmap v3496: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Dec 06 08:09:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 766 KiB/s wr, 74 op/s
Dec 06 08:09:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:17.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:18 compute-0 nova_compute[251992]: 2025-12-06 08:09:18.190 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:18.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:09:18
Dec 06 08:09:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:09:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:09:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.log', 'volumes']
Dec 06 08:09:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:09:19 compute-0 ceph-mon[74339]: pgmap v3497: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 766 KiB/s wr, 74 op/s
Dec 06 08:09:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:09:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:19.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:20.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:21 compute-0 ceph-mon[74339]: pgmap v3498: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:09:21 compute-0 nova_compute[251992]: 2025-12-06 08:09:21.313 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 899 KiB/s wr, 91 op/s
Dec 06 08:09:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:21.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/600799336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:22.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:23 compute-0 ceph-mon[74339]: pgmap v3499: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 899 KiB/s wr, 91 op/s
Dec 06 08:09:23 compute-0 nova_compute[251992]: 2025-12-06 08:09:23.192 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 887 KiB/s wr, 17 op/s
Dec 06 08:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:09:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:09:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:23.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:24.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:25 compute-0 ceph-mon[74339]: pgmap v3500: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 887 KiB/s wr, 17 op/s
Dec 06 08:09:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 231 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 205 KiB/s rd, 3.7 MiB/s wr, 70 op/s
Dec 06 08:09:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:25.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:26 compute-0 nova_compute[251992]: 2025-12-06 08:09:26.315 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/295627267' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:09:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1005039998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:09:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:26.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0030258351944909734 of space, bias 1.0, pg target 0.907750558347292 quantized to 32 (current 32)
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:09:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:09:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:09:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:09:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:09:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:09:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:09:27 compute-0 ceph-mon[74339]: pgmap v3501: 305 pgs: 305 active+clean; 231 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 205 KiB/s rd, 3.7 MiB/s wr, 70 op/s
Dec 06 08:09:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Dec 06 08:09:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:27.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:28 compute-0 nova_compute[251992]: 2025-12-06 08:09:28.194 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:28.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:29 compute-0 ceph-mon[74339]: pgmap v3502: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Dec 06 08:09:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Dec 06 08:09:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:29.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:30.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:30 compute-0 ceph-mon[74339]: pgmap v3503: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Dec 06 08:09:31 compute-0 nova_compute[251992]: 2025-12-06 08:09:31.316 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 445 KiB/s rd, 3.9 MiB/s wr, 102 op/s
Dec 06 08:09:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:31.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:31 compute-0 sudo[389594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:31 compute-0 sudo[389594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:31 compute-0 sudo[389594]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:31 compute-0 sudo[389619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:31 compute-0 sudo[389619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:31 compute-0 sudo[389619]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:09:32.050 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:09:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:09:32.051 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:09:32 compute-0 nova_compute[251992]: 2025-12-06 08:09:32.051 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:32.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:32 compute-0 ceph-mon[74339]: pgmap v3504: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 445 KiB/s rd, 3.9 MiB/s wr, 102 op/s
Dec 06 08:09:33 compute-0 nova_compute[251992]: 2025-12-06 08:09:33.249 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 398 KiB/s rd, 3.1 MiB/s wr, 85 op/s
Dec 06 08:09:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:33.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:34 compute-0 podman[389645]: 2025-12-06 08:09:34.467365996 +0000 UTC m=+0.118930549 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller)
Dec 06 08:09:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:34.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:34 compute-0 ceph-mon[74339]: pgmap v3505: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 398 KiB/s rd, 3.1 MiB/s wr, 85 op/s
Dec 06 08:09:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1780192475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 305 active+clean; 184 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 165 op/s
Dec 06 08:09:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:35.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/136787216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/512540478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:36 compute-0 nova_compute[251992]: 2025-12-06 08:09:36.318 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:36.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:36 compute-0 ceph-mon[74339]: pgmap v3506: 305 pgs: 305 active+clean; 184 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 165 op/s
Dec 06 08:09:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 255 KiB/s wr, 116 op/s
Dec 06 08:09:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:37.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:09:38.053 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:09:38 compute-0 nova_compute[251992]: 2025-12-06 08:09:38.250 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:38.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:38 compute-0 ceph-mon[74339]: pgmap v3507: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 255 KiB/s wr, 116 op/s
Dec 06 08:09:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 101 op/s
Dec 06 08:09:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:39.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:40.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:40 compute-0 ceph-mon[74339]: pgmap v3508: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 101 op/s
Dec 06 08:09:41 compute-0 nova_compute[251992]: 2025-12-06 08:09:41.318 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:41 compute-0 podman[389678]: 2025-12-06 08:09:41.395713631 +0000 UTC m=+0.047503083 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:09:41 compute-0 podman[389677]: 2025-12-06 08:09:41.395746692 +0000 UTC m=+0.050112712 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 08:09:41 compute-0 nova_compute[251992]: 2025-12-06 08:09:41.670 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 128 op/s
Dec 06 08:09:41 compute-0 nova_compute[251992]: 2025-12-06 08:09:41.696 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:41 compute-0 nova_compute[251992]: 2025-12-06 08:09:41.697 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:41 compute-0 nova_compute[251992]: 2025-12-06 08:09:41.697 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:41 compute-0 nova_compute[251992]: 2025-12-06 08:09:41.697 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:09:41 compute-0 nova_compute[251992]: 2025-12-06 08:09:41.697 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:09:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:41.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:09:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139757730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.152 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.300 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.302 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4173MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.302 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.302 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.369 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.369 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.388 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:09:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:42.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:09:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1052256903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.831 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.837 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.860 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.886 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:09:42 compute-0 nova_compute[251992]: 2025-12-06 08:09:42.887 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:09:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:09:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:09:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:09:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:09:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:09:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:09:43 compute-0 ceph-mon[74339]: pgmap v3509: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 128 op/s
Dec 06 08:09:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1139757730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1052256903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:43 compute-0 nova_compute[251992]: 2025-12-06 08:09:43.252 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.3 KiB/s wr, 110 op/s
Dec 06 08:09:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:43.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:43 compute-0 nova_compute[251992]: 2025-12-06 08:09:43.867 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:44.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:45 compute-0 ceph-mon[74339]: pgmap v3510: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.3 KiB/s wr, 110 op/s
Dec 06 08:09:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2057202737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:45 compute-0 nova_compute[251992]: 2025-12-06 08:09:45.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.3 KiB/s wr, 110 op/s
Dec 06 08:09:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:45.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:46 compute-0 nova_compute[251992]: 2025-12-06 08:09:46.334 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:46.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:47 compute-0 ceph-mon[74339]: pgmap v3511: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.3 KiB/s wr, 110 op/s
Dec 06 08:09:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Dec 06 08:09:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:47.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:48 compute-0 nova_compute[251992]: 2025-12-06 08:09:48.255 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:48.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:48 compute-0 ceph-mon[74339]: pgmap v3512: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Dec 06 08:09:49 compute-0 sudo[389761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:49 compute-0 sudo[389761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:49 compute-0 sudo[389761]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:49 compute-0 sudo[389786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:09:49 compute-0 sudo[389786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:49 compute-0 sudo[389786]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:49 compute-0 sudo[389811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:49 compute-0 sudo[389811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:49 compute-0 sudo[389811]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:49 compute-0 sudo[389836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 08:09:49 compute-0 sudo[389836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:49 compute-0 nova_compute[251992]: 2025-12-06 08:09:49.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:49 compute-0 nova_compute[251992]: 2025-12-06 08:09:49.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:09:49 compute-0 nova_compute[251992]: 2025-12-06 08:09:49.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:09:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 08:09:49 compute-0 nova_compute[251992]: 2025-12-06 08:09:49.782 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:09:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:49.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:49 compute-0 podman[389934]: 2025-12-06 08:09:49.886268968 +0000 UTC m=+0.062938729 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:09:49 compute-0 podman[389934]: 2025-12-06 08:09:49.982887386 +0000 UTC m=+0.159557117 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:09:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:50.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:50 compute-0 podman[390084]: 2025-12-06 08:09:50.538799516 +0000 UTC m=+0.054427349 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 08:09:50 compute-0 podman[390084]: 2025-12-06 08:09:50.549396822 +0000 UTC m=+0.065024645 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 08:09:50 compute-0 podman[390151]: 2025-12-06 08:09:50.737710753 +0000 UTC m=+0.045934870 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, release=1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9)
Dec 06 08:09:50 compute-0 podman[390151]: 2025-12-06 08:09:50.752350658 +0000 UTC m=+0.060574775 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, name=keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.buildah.version=1.28.2, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, version=2.2.4)
Dec 06 08:09:50 compute-0 ceph-mon[74339]: pgmap v3513: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 08:09:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1999819615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:50 compute-0 sudo[389836]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:09:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:09:50 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:50 compute-0 sudo[390183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:50 compute-0 sudo[390183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:50 compute-0 sudo[390183]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:50 compute-0 sudo[390208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:09:50 compute-0 sudo[390208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:50 compute-0 sudo[390208]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:51 compute-0 sudo[390233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:51 compute-0 sudo[390233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:51 compute-0 sudo[390233]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:51 compute-0 sudo[390258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:09:51 compute-0 sudo[390258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:51 compute-0 nova_compute[251992]: 2025-12-06 08:09:51.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:51 compute-0 sudo[390258]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:09:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:09:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:09:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:09:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:09:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7b7cfcab-d473-4362-ba4c-0c2212b9cfd9 does not exist
Dec 06 08:09:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 58812a6a-f71d-4884-906f-4114e0e6a8b7 does not exist
Dec 06 08:09:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3fcdd705-1c6b-4be1-9957-fe4761bc201c does not exist
Dec 06 08:09:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:09:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:09:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:09:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:09:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:09:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:09:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 08:09:51 compute-0 sudo[390317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:51 compute-0 sudo[390317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:51 compute-0 sudo[390317]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:51 compute-0 sudo[390342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:09:51 compute-0 sudo[390342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:51 compute-0 sudo[390342]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:51 compute-0 sudo[390367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:51 compute-0 sudo[390367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:51 compute-0 sudo[390367]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:51.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:51 compute-0 sudo[390392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:09:51 compute-0 sudo[390392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:51 compute-0 sudo[390417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:51 compute-0 sudo[390417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:51 compute-0 sudo[390417]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:51 compute-0 sudo[390442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:51 compute-0 sudo[390442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:51 compute-0 sudo[390442]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1936916397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:09:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:09:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:09:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:09:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:09:52 compute-0 podman[390507]: 2025-12-06 08:09:52.166036375 +0000 UTC m=+0.041385508 container create 6917a3d339d41bb01518bcba695fe54546d3d4669f612d92b07f3236f44566c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:09:52 compute-0 systemd[1]: Started libpod-conmon-6917a3d339d41bb01518bcba695fe54546d3d4669f612d92b07f3236f44566c1.scope.
Dec 06 08:09:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:09:52 compute-0 podman[390507]: 2025-12-06 08:09:52.147041962 +0000 UTC m=+0.022391075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:09:52 compute-0 podman[390507]: 2025-12-06 08:09:52.255462868 +0000 UTC m=+0.130811981 container init 6917a3d339d41bb01518bcba695fe54546d3d4669f612d92b07f3236f44566c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 08:09:52 compute-0 podman[390507]: 2025-12-06 08:09:52.261172062 +0000 UTC m=+0.136521155 container start 6917a3d339d41bb01518bcba695fe54546d3d4669f612d92b07f3236f44566c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:09:52 compute-0 podman[390507]: 2025-12-06 08:09:52.26406664 +0000 UTC m=+0.139415753 container attach 6917a3d339d41bb01518bcba695fe54546d3d4669f612d92b07f3236f44566c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:09:52 compute-0 fervent_blackburn[390524]: 167 167
Dec 06 08:09:52 compute-0 systemd[1]: libpod-6917a3d339d41bb01518bcba695fe54546d3d4669f612d92b07f3236f44566c1.scope: Deactivated successfully.
Dec 06 08:09:52 compute-0 conmon[390524]: conmon 6917a3d339d41bb01518 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6917a3d339d41bb01518bcba695fe54546d3d4669f612d92b07f3236f44566c1.scope/container/memory.events
Dec 06 08:09:52 compute-0 podman[390507]: 2025-12-06 08:09:52.269772054 +0000 UTC m=+0.145121147 container died 6917a3d339d41bb01518bcba695fe54546d3d4669f612d92b07f3236f44566c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:09:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-59ae02c511eca45ae3dbbeb5bc29852f0332801b07e527a5701f9ebefaeb4171-merged.mount: Deactivated successfully.
Dec 06 08:09:52 compute-0 podman[390507]: 2025-12-06 08:09:52.314715697 +0000 UTC m=+0.190064790 container remove 6917a3d339d41bb01518bcba695fe54546d3d4669f612d92b07f3236f44566c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_blackburn, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 08:09:52 compute-0 systemd[1]: libpod-conmon-6917a3d339d41bb01518bcba695fe54546d3d4669f612d92b07f3236f44566c1.scope: Deactivated successfully.
Dec 06 08:09:52 compute-0 podman[390548]: 2025-12-06 08:09:52.466704368 +0000 UTC m=+0.046430793 container create e1d184250d56a96daa90d428745b0e7f2f5568f105ada4e807d928b3cafa002d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_villani, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 08:09:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:52.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:52 compute-0 systemd[1]: Started libpod-conmon-e1d184250d56a96daa90d428745b0e7f2f5568f105ada4e807d928b3cafa002d.scope.
Dec 06 08:09:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1fe1c3cdc3311db598a401ad3dac08722fac3103b66908ff23809fdde7399e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1fe1c3cdc3311db598a401ad3dac08722fac3103b66908ff23809fdde7399e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1fe1c3cdc3311db598a401ad3dac08722fac3103b66908ff23809fdde7399e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1fe1c3cdc3311db598a401ad3dac08722fac3103b66908ff23809fdde7399e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1fe1c3cdc3311db598a401ad3dac08722fac3103b66908ff23809fdde7399e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:52 compute-0 podman[390548]: 2025-12-06 08:09:52.531062185 +0000 UTC m=+0.110788620 container init e1d184250d56a96daa90d428745b0e7f2f5568f105ada4e807d928b3cafa002d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 08:09:52 compute-0 podman[390548]: 2025-12-06 08:09:52.445765723 +0000 UTC m=+0.025492188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:09:52 compute-0 podman[390548]: 2025-12-06 08:09:52.540303555 +0000 UTC m=+0.120029980 container start e1d184250d56a96daa90d428745b0e7f2f5568f105ada4e807d928b3cafa002d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_villani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:09:52 compute-0 podman[390548]: 2025-12-06 08:09:52.544472277 +0000 UTC m=+0.124198762 container attach e1d184250d56a96daa90d428745b0e7f2f5568f105ada4e807d928b3cafa002d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_villani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:09:52 compute-0 nova_compute[251992]: 2025-12-06 08:09:52.776 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:53 compute-0 ceph-mon[74339]: pgmap v3514: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 08:09:53 compute-0 nova_compute[251992]: 2025-12-06 08:09:53.258 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:53 compute-0 gifted_villani[390564]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:09:53 compute-0 gifted_villani[390564]: --> relative data size: 1.0
Dec 06 08:09:53 compute-0 gifted_villani[390564]: --> All data devices are unavailable
Dec 06 08:09:53 compute-0 systemd[1]: libpod-e1d184250d56a96daa90d428745b0e7f2f5568f105ada4e807d928b3cafa002d.scope: Deactivated successfully.
Dec 06 08:09:53 compute-0 podman[390580]: 2025-12-06 08:09:53.396473917 +0000 UTC m=+0.023748041 container died e1d184250d56a96daa90d428745b0e7f2f5568f105ada4e807d928b3cafa002d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:09:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1fe1c3cdc3311db598a401ad3dac08722fac3103b66908ff23809fdde7399e8-merged.mount: Deactivated successfully.
Dec 06 08:09:53 compute-0 podman[390580]: 2025-12-06 08:09:53.455370416 +0000 UTC m=+0.082644520 container remove e1d184250d56a96daa90d428745b0e7f2f5568f105ada4e807d928b3cafa002d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:09:53 compute-0 systemd[1]: libpod-conmon-e1d184250d56a96daa90d428745b0e7f2f5568f105ada4e807d928b3cafa002d.scope: Deactivated successfully.
Dec 06 08:09:53 compute-0 sudo[390392]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:53 compute-0 sudo[390595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:53 compute-0 sudo[390595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:53 compute-0 sudo[390595]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:53 compute-0 sudo[390620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:09:53 compute-0 sudo[390620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:53 compute-0 sudo[390620]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:53 compute-0 sudo[390645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:53 compute-0 sudo[390645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:53 compute-0 sudo[390645]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:53 compute-0 nova_compute[251992]: 2025-12-06 08:09:53.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:53 compute-0 nova_compute[251992]: 2025-12-06 08:09:53.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:09:53 compute-0 sudo[390670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:09:53 compute-0 sudo[390670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:53.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:54 compute-0 podman[390737]: 2025-12-06 08:09:54.052209161 +0000 UTC m=+0.040412761 container create d8cd9918676f3b4cfc4a58edcffcf16fb136890d1031c2e1291d10623ed72795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 08:09:54 compute-0 systemd[1]: Started libpod-conmon-d8cd9918676f3b4cfc4a58edcffcf16fb136890d1031c2e1291d10623ed72795.scope.
Dec 06 08:09:54 compute-0 podman[390737]: 2025-12-06 08:09:54.034042991 +0000 UTC m=+0.022246581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:09:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:09:54 compute-0 podman[390737]: 2025-12-06 08:09:54.156283219 +0000 UTC m=+0.144486879 container init d8cd9918676f3b4cfc4a58edcffcf16fb136890d1031c2e1291d10623ed72795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 08:09:54 compute-0 podman[390737]: 2025-12-06 08:09:54.168639683 +0000 UTC m=+0.156843293 container start d8cd9918676f3b4cfc4a58edcffcf16fb136890d1031c2e1291d10623ed72795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williamson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 08:09:54 compute-0 podman[390737]: 2025-12-06 08:09:54.173852204 +0000 UTC m=+0.162055864 container attach d8cd9918676f3b4cfc4a58edcffcf16fb136890d1031c2e1291d10623ed72795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williamson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:09:54 compute-0 nifty_williamson[390753]: 167 167
Dec 06 08:09:54 compute-0 systemd[1]: libpod-d8cd9918676f3b4cfc4a58edcffcf16fb136890d1031c2e1291d10623ed72795.scope: Deactivated successfully.
Dec 06 08:09:54 compute-0 podman[390737]: 2025-12-06 08:09:54.178457018 +0000 UTC m=+0.166660618 container died d8cd9918676f3b4cfc4a58edcffcf16fb136890d1031c2e1291d10623ed72795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williamson, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:09:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ffaabace692b50c4d0463a42ce8f50bf0c25da036eaeb7e0ce86dc36c2239bb-merged.mount: Deactivated successfully.
Dec 06 08:09:54 compute-0 podman[390737]: 2025-12-06 08:09:54.227390219 +0000 UTC m=+0.215593789 container remove d8cd9918676f3b4cfc4a58edcffcf16fb136890d1031c2e1291d10623ed72795 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:09:54 compute-0 systemd[1]: libpod-conmon-d8cd9918676f3b4cfc4a58edcffcf16fb136890d1031c2e1291d10623ed72795.scope: Deactivated successfully.
Dec 06 08:09:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:54 compute-0 podman[390777]: 2025-12-06 08:09:54.391572328 +0000 UTC m=+0.047848982 container create 6efaaba87acabf4b73461859738fc579c309206328521f92e815801bdd3fae32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:09:54 compute-0 systemd[1]: Started libpod-conmon-6efaaba87acabf4b73461859738fc579c309206328521f92e815801bdd3fae32.scope.
Dec 06 08:09:54 compute-0 podman[390777]: 2025-12-06 08:09:54.371945909 +0000 UTC m=+0.028222623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:09:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead67fc25ebf753b145dbc23145c9efc9e3ee6d1ce53a3650a938397178adeea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead67fc25ebf753b145dbc23145c9efc9e3ee6d1ce53a3650a938397178adeea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead67fc25ebf753b145dbc23145c9efc9e3ee6d1ce53a3650a938397178adeea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead67fc25ebf753b145dbc23145c9efc9e3ee6d1ce53a3650a938397178adeea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:54.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:54 compute-0 podman[390777]: 2025-12-06 08:09:54.493664514 +0000 UTC m=+0.149941198 container init 6efaaba87acabf4b73461859738fc579c309206328521f92e815801bdd3fae32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:09:54 compute-0 podman[390777]: 2025-12-06 08:09:54.501474185 +0000 UTC m=+0.157750839 container start 6efaaba87acabf4b73461859738fc579c309206328521f92e815801bdd3fae32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:09:54 compute-0 podman[390777]: 2025-12-06 08:09:54.505475162 +0000 UTC m=+0.161751826 container attach 6efaaba87acabf4b73461859738fc579c309206328521f92e815801bdd3fae32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 08:09:55 compute-0 ceph-mon[74339]: pgmap v3515: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]: {
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:     "0": [
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:         {
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             "devices": [
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "/dev/loop3"
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             ],
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             "lv_name": "ceph_lv0",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             "lv_size": "7511998464",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             "name": "ceph_lv0",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             "tags": {
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "ceph.cluster_name": "ceph",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "ceph.crush_device_class": "",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "ceph.encrypted": "0",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "ceph.osd_id": "0",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "ceph.type": "block",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:                 "ceph.vdo": "0"
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             },
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             "type": "block",
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:             "vg_name": "ceph_vg0"
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:         }
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]:     ]
Dec 06 08:09:55 compute-0 flamboyant_wiles[390794]: }
Dec 06 08:09:55 compute-0 systemd[1]: libpod-6efaaba87acabf4b73461859738fc579c309206328521f92e815801bdd3fae32.scope: Deactivated successfully.
Dec 06 08:09:55 compute-0 podman[390777]: 2025-12-06 08:09:55.231083862 +0000 UTC m=+0.887360566 container died 6efaaba87acabf4b73461859738fc579c309206328521f92e815801bdd3fae32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 08:09:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ead67fc25ebf753b145dbc23145c9efc9e3ee6d1ce53a3650a938397178adeea-merged.mount: Deactivated successfully.
Dec 06 08:09:55 compute-0 podman[390777]: 2025-12-06 08:09:55.284045561 +0000 UTC m=+0.940322215 container remove 6efaaba87acabf4b73461859738fc579c309206328521f92e815801bdd3fae32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 08:09:55 compute-0 systemd[1]: libpod-conmon-6efaaba87acabf4b73461859738fc579c309206328521f92e815801bdd3fae32.scope: Deactivated successfully.
Dec 06 08:09:55 compute-0 sudo[390670]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:55 compute-0 sudo[390816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:55 compute-0 sudo[390816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:55 compute-0 sudo[390816]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:55 compute-0 sudo[390841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:09:55 compute-0 sudo[390841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:55 compute-0 sudo[390841]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:55 compute-0 sudo[390866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:55 compute-0 sudo[390866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:55 compute-0 sudo[390866]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:55 compute-0 sudo[390891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:09:55 compute-0 sudo[390891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:09:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:55.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:56 compute-0 podman[390957]: 2025-12-06 08:09:56.031228763 +0000 UTC m=+0.057669507 container create 73f7324ae366c7cd30aa59a52010ee3f73adf0a8a80e4a2aaa59502e5a0a47e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 08:09:56 compute-0 systemd[1]: Started libpod-conmon-73f7324ae366c7cd30aa59a52010ee3f73adf0a8a80e4a2aaa59502e5a0a47e6.scope.
Dec 06 08:09:56 compute-0 podman[390957]: 2025-12-06 08:09:56.00149366 +0000 UTC m=+0.027934454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:09:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:09:56 compute-0 podman[390957]: 2025-12-06 08:09:56.127233323 +0000 UTC m=+0.153674087 container init 73f7324ae366c7cd30aa59a52010ee3f73adf0a8a80e4a2aaa59502e5a0a47e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tu, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:09:56 compute-0 podman[390957]: 2025-12-06 08:09:56.136171284 +0000 UTC m=+0.162612028 container start 73f7324ae366c7cd30aa59a52010ee3f73adf0a8a80e4a2aaa59502e5a0a47e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tu, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:09:56 compute-0 podman[390957]: 2025-12-06 08:09:56.13967716 +0000 UTC m=+0.166117894 container attach 73f7324ae366c7cd30aa59a52010ee3f73adf0a8a80e4a2aaa59502e5a0a47e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:09:56 compute-0 laughing_tu[390973]: 167 167
Dec 06 08:09:56 compute-0 systemd[1]: libpod-73f7324ae366c7cd30aa59a52010ee3f73adf0a8a80e4a2aaa59502e5a0a47e6.scope: Deactivated successfully.
Dec 06 08:09:56 compute-0 conmon[390973]: conmon 73f7324ae366c7cd30aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73f7324ae366c7cd30aa59a52010ee3f73adf0a8a80e4a2aaa59502e5a0a47e6.scope/container/memory.events
Dec 06 08:09:56 compute-0 podman[390957]: 2025-12-06 08:09:56.144610022 +0000 UTC m=+0.171050756 container died 73f7324ae366c7cd30aa59a52010ee3f73adf0a8a80e4a2aaa59502e5a0a47e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 08:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cfafc33fb1403a59a2834d71c04bde68f80704bb60786fdc06530609f2988c5-merged.mount: Deactivated successfully.
Dec 06 08:09:56 compute-0 podman[390957]: 2025-12-06 08:09:56.186689738 +0000 UTC m=+0.213130492 container remove 73f7324ae366c7cd30aa59a52010ee3f73adf0a8a80e4a2aaa59502e5a0a47e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:09:56 compute-0 systemd[1]: libpod-conmon-73f7324ae366c7cd30aa59a52010ee3f73adf0a8a80e4a2aaa59502e5a0a47e6.scope: Deactivated successfully.
Dec 06 08:09:56 compute-0 nova_compute[251992]: 2025-12-06 08:09:56.376 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:56 compute-0 podman[390997]: 2025-12-06 08:09:56.387012604 +0000 UTC m=+0.041329387 container create 214d274af251158c810890553cd33c40d862994d09cd005f504b677de2143601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:09:56 compute-0 systemd[1]: Started libpod-conmon-214d274af251158c810890553cd33c40d862994d09cd005f504b677de2143601.scope.
Dec 06 08:09:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:09:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b61a47e3746cf86b72c81147dea2d6c2858b5b35d5d7de4ef02446bf4f34bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b61a47e3746cf86b72c81147dea2d6c2858b5b35d5d7de4ef02446bf4f34bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b61a47e3746cf86b72c81147dea2d6c2858b5b35d5d7de4ef02446bf4f34bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b61a47e3746cf86b72c81147dea2d6c2858b5b35d5d7de4ef02446bf4f34bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:09:56 compute-0 podman[390997]: 2025-12-06 08:09:56.368910475 +0000 UTC m=+0.023227288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:09:56 compute-0 podman[390997]: 2025-12-06 08:09:56.468193904 +0000 UTC m=+0.122510727 container init 214d274af251158c810890553cd33c40d862994d09cd005f504b677de2143601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 08:09:56 compute-0 podman[390997]: 2025-12-06 08:09:56.478513732 +0000 UTC m=+0.132830525 container start 214d274af251158c810890553cd33c40d862994d09cd005f504b677de2143601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:09:56 compute-0 podman[390997]: 2025-12-06 08:09:56.483502217 +0000 UTC m=+0.137819040 container attach 214d274af251158c810890553cd33c40d862994d09cd005f504b677de2143601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:09:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:56.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:09:57 compute-0 ceph-mon[74339]: pgmap v3516: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:09:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3535291815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:09:57 compute-0 pedantic_spence[391014]: {
Dec 06 08:09:57 compute-0 pedantic_spence[391014]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:09:57 compute-0 pedantic_spence[391014]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:09:57 compute-0 pedantic_spence[391014]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:09:57 compute-0 pedantic_spence[391014]:         "osd_id": 0,
Dec 06 08:09:57 compute-0 pedantic_spence[391014]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:09:57 compute-0 pedantic_spence[391014]:         "type": "bluestore"
Dec 06 08:09:57 compute-0 pedantic_spence[391014]:     }
Dec 06 08:09:57 compute-0 pedantic_spence[391014]: }
Dec 06 08:09:57 compute-0 systemd[1]: libpod-214d274af251158c810890553cd33c40d862994d09cd005f504b677de2143601.scope: Deactivated successfully.
Dec 06 08:09:57 compute-0 podman[390997]: 2025-12-06 08:09:57.403556224 +0000 UTC m=+1.057873037 container died 214d274af251158c810890553cd33c40d862994d09cd005f504b677de2143601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_spence, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:09:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8b61a47e3746cf86b72c81147dea2d6c2858b5b35d5d7de4ef02446bf4f34bb-merged.mount: Deactivated successfully.
Dec 06 08:09:57 compute-0 podman[390997]: 2025-12-06 08:09:57.480061648 +0000 UTC m=+1.134378441 container remove 214d274af251158c810890553cd33c40d862994d09cd005f504b677de2143601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_spence, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 08:09:57 compute-0 systemd[1]: libpod-conmon-214d274af251158c810890553cd33c40d862994d09cd005f504b677de2143601.scope: Deactivated successfully.
Dec 06 08:09:57 compute-0 sudo[390891]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:09:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:09:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:57 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6a81da39-0b8d-4225-8a1f-0d47516317b4 does not exist
Dec 06 08:09:57 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 835a4132-4bac-4fb0-94f8-a78d52f35859 does not exist
Dec 06 08:09:57 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0a840a4c-afad-4c7b-9b02-ee84bba379c5 does not exist
Dec 06 08:09:57 compute-0 sudo[391050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:09:57 compute-0 sudo[391050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:57 compute-0 sudo[391050]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:09:57 compute-0 sudo[391075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:09:57 compute-0 sudo[391075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:09:57 compute-0 sudo[391075]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:09:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:57.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:09:58 compute-0 nova_compute[251992]: 2025-12-06 08:09:58.259 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:09:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:09:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:09:58.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:09:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:58 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:09:58 compute-0 ceph-mon[74339]: pgmap v3517: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:09:58 compute-0 nova_compute[251992]: 2025-12-06 08:09:58.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:09:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:09:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:09:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:09:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:09:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:09:59.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 08:10:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:10:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:00.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:10:00 compute-0 nova_compute[251992]: 2025-12-06 08:10:00.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:00 compute-0 nova_compute[251992]: 2025-12-06 08:10:00.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:10:00 compute-0 ceph-mon[74339]: pgmap v3518: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:10:00 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 08:10:01 compute-0 nova_compute[251992]: 2025-12-06 08:10:01.379 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:10:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:01.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:02.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:02 compute-0 nova_compute[251992]: 2025-12-06 08:10:02.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:02 compute-0 ceph-mon[74339]: pgmap v3519: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:10:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2883649163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:03 compute-0 nova_compute[251992]: 2025-12-06 08:10:03.261 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:10:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3216410970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:03.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:10:03.885 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:10:03.886 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:10:03.886 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:04.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:04 compute-0 ceph-mon[74339]: pgmap v3520: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:10:05 compute-0 podman[391104]: 2025-12-06 08:10:05.463326178 +0000 UTC m=+0.118462108 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:10:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Dec 06 08:10:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:05.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/230461093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:06 compute-0 nova_compute[251992]: 2025-12-06 08:10:06.380 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:06.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:06 compute-0 ceph-mon[74339]: pgmap v3521: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Dec 06 08:10:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 672 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Dec 06 08:10:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:08 compute-0 nova_compute[251992]: 2025-12-06 08:10:08.262 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:08.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:08 compute-0 ceph-mon[74339]: pgmap v3522: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 672 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Dec 06 08:10:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 672 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Dec 06 08:10:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:09.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1560501208' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:10:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1560501208' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:10:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:10.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:10 compute-0 ceph-mon[74339]: pgmap v3523: 305 pgs: 305 active+clean; 175 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 672 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Dec 06 08:10:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1048770933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2754319733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/918454138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:11 compute-0 nova_compute[251992]: 2025-12-06 08:10:11.382 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Dec 06 08:10:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:11.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:12 compute-0 sudo[391134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:12 compute-0 sudo[391134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:12 compute-0 sudo[391134]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:12 compute-0 podman[391158]: 2025-12-06 08:10:12.133882697 +0000 UTC m=+0.058521660 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 08:10:12 compute-0 sudo[391171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:12 compute-0 podman[391159]: 2025-12-06 08:10:12.147581557 +0000 UTC m=+0.070648667 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:10:12 compute-0 sudo[391171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:12 compute-0 sudo[391171]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:12.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:12 compute-0 ceph-mon[74339]: pgmap v3524: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Dec 06 08:10:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:10:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:10:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:10:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:10:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:10:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:10:13 compute-0 nova_compute[251992]: 2025-12-06 08:10:13.265 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Dec 06 08:10:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:13.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:14 compute-0 nova_compute[251992]: 2025-12-06 08:10:14.358 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:10:14.361 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:10:14 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:10:14.364 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:10:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:14.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:14 compute-0 ceph-mon[74339]: pgmap v3525: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Dec 06 08:10:15 compute-0 ovn_controller[147168]: 2025-12-06T08:10:15Z|00750|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 06 08:10:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Dec 06 08:10:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:15.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:16 compute-0 nova_compute[251992]: 2025-12-06 08:10:16.384 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:16.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:16 compute-0 ceph-mon[74339]: pgmap v3526: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Dec 06 08:10:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Dec 06 08:10:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:17.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:18 compute-0 nova_compute[251992]: 2025-12-06 08:10:18.266 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:10:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:18.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:10:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:10:18
Dec 06 08:10:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:10:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:10:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', '.rgw.root']
Dec 06 08:10:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:10:19 compute-0 ceph-mon[74339]: pgmap v3527: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Dec 06 08:10:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 118 op/s
Dec 06 08:10:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:19.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:20.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:21 compute-0 ceph-mon[74339]: pgmap v3528: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 118 op/s
Dec 06 08:10:21 compute-0 nova_compute[251992]: 2025-12-06 08:10:21.386 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.1 MiB/s wr, 150 op/s
Dec 06 08:10:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:21.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:22 compute-0 sshd-session[391227]: Connection reset by authenticating user root 45.140.17.124 port 49070 [preauth]
Dec 06 08:10:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:10:22.368 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:10:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:10:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:22.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:10:23 compute-0 ceph-mon[74339]: pgmap v3529: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.1 MiB/s wr, 150 op/s
Dec 06 08:10:23 compute-0 nova_compute[251992]: 2025-12-06 08:10:23.269 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:10:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:10:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:23.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:24.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:24 compute-0 sshd-session[391230]: Connection reset by authenticating user root 45.140.17.124 port 49100 [preauth]
Dec 06 08:10:25 compute-0 ceph-mon[74339]: pgmap v3530: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:10:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:10:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:10:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:25.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:10:26 compute-0 nova_compute[251992]: 2025-12-06 08:10:26.386 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:10:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:10:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:26.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:27 compute-0 ceph-mon[74339]: pgmap v3531: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:10:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:10:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:10:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:10:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:10:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:10:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 305 active+clean; 170 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 226 KiB/s wr, 59 op/s
Dec 06 08:10:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:27.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:28 compute-0 nova_compute[251992]: 2025-12-06 08:10:28.271 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:28.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:29 compute-0 ceph-mon[74339]: pgmap v3532: 305 pgs: 305 active+clean; 170 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 226 KiB/s wr, 59 op/s
Dec 06 08:10:29 compute-0 sshd-session[391233]: Connection reset by authenticating user root 45.140.17.124 port 20018 [preauth]
Dec 06 08:10:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 170 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 994 KiB/s rd, 226 KiB/s wr, 35 op/s
Dec 06 08:10:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:29.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:30.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:31 compute-0 ceph-mon[74339]: pgmap v3533: 305 pgs: 305 active+clean; 170 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 994 KiB/s rd, 226 KiB/s wr, 35 op/s
Dec 06 08:10:31 compute-0 sshd-session[391238]: Connection reset by authenticating user root 45.140.17.124 port 20032 [preauth]
Dec 06 08:10:31 compute-0 nova_compute[251992]: 2025-12-06 08:10:31.432 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 08:10:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:31.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:32 compute-0 sudo[391243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:32 compute-0 sudo[391243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:32 compute-0 sudo[391243]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:32 compute-0 sudo[391268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:32 compute-0 sudo[391268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:32 compute-0 sudo[391268]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:32.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:33 compute-0 nova_compute[251992]: 2025-12-06 08:10:33.272 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:33 compute-0 sshd-session[391241]: Invalid user test2 from 45.140.17.124 port 20040
Dec 06 08:10:33 compute-0 ceph-mon[74339]: pgmap v3534: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 08:10:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 06 08:10:33 compute-0 sshd-session[391241]: Connection reset by invalid user test2 45.140.17.124 port 20040 [preauth]
Dec 06 08:10:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:10:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:33.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:10:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:34.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:34 compute-0 ceph-mon[74339]: pgmap v3535: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 06 08:10:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 06 08:10:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:35.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:36 compute-0 podman[391295]: 2025-12-06 08:10:36.427800251 +0000 UTC m=+0.086993789 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:10:36 compute-0 nova_compute[251992]: 2025-12-06 08:10:36.433 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:36.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:37 compute-0 ceph-mon[74339]: pgmap v3536: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 06 08:10:37 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1969955887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 06 08:10:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:37.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:38 compute-0 nova_compute[251992]: 2025-12-06 08:10:38.275 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3610654472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:38.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:39 compute-0 ceph-mon[74339]: pgmap v3537: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Dec 06 08:10:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Dec 06 08:10:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:39.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:40 compute-0 ceph-mon[74339]: pgmap v3538: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Dec 06 08:10:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:40.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:41 compute-0 nova_compute[251992]: 2025-12-06 08:10:41.434 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Dec 06 08:10:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:41.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:42 compute-0 podman[391325]: 2025-12-06 08:10:42.397696365 +0000 UTC m=+0.058320826 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:10:42 compute-0 podman[391326]: 2025-12-06 08:10:42.441033961 +0000 UTC m=+0.097618321 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 08:10:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:42.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:42 compute-0 nova_compute[251992]: 2025-12-06 08:10:42.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:42 compute-0 ceph-mon[74339]: pgmap v3539: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Dec 06 08:10:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:10:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:10:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:10:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:10:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:10:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:10:43 compute-0 nova_compute[251992]: 2025-12-06 08:10:43.277 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:43 compute-0 nova_compute[251992]: 2025-12-06 08:10:43.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Dec 06 08:10:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:43.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:44 compute-0 nova_compute[251992]: 2025-12-06 08:10:44.001 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:44 compute-0 nova_compute[251992]: 2025-12-06 08:10:44.002 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:44 compute-0 nova_compute[251992]: 2025-12-06 08:10:44.002 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:44 compute-0 nova_compute[251992]: 2025-12-06 08:10:44.002 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:10:44 compute-0 nova_compute[251992]: 2025-12-06 08:10:44.003 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:10:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2941166648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:44 compute-0 nova_compute[251992]: 2025-12-06 08:10:44.434 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:10:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:44.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:10:44 compute-0 nova_compute[251992]: 2025-12-06 08:10:44.626 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:10:44 compute-0 nova_compute[251992]: 2025-12-06 08:10:44.628 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4193MB free_disk=20.942710876464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:10:44 compute-0 nova_compute[251992]: 2025-12-06 08:10:44.628 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:44 compute-0 nova_compute[251992]: 2025-12-06 08:10:44.629 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:44 compute-0 ceph-mon[74339]: pgmap v3540: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Dec 06 08:10:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2941166648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.327 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.329 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.356 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.404 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.405 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.422 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.469 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.490 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Dec 06 08:10:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:45.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:10:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3041411772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.941 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.946 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.966 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.968 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:10:45 compute-0 nova_compute[251992]: 2025-12-06 08:10:45.968 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:46 compute-0 nova_compute[251992]: 2025-12-06 08:10:46.436 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:46.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:47 compute-0 ceph-mon[74339]: pgmap v3541: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s wr, 0 op/s
Dec 06 08:10:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3041411772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2452691386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 KiB/s wr, 0 op/s
Dec 06 08:10:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:47.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:48 compute-0 nova_compute[251992]: 2025-12-06 08:10:48.279 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:48.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:48 compute-0 nova_compute[251992]: 2025-12-06 08:10:48.968 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:49 compute-0 ceph-mon[74339]: pgmap v3542: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 KiB/s wr, 0 op/s
Dec 06 08:10:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 KiB/s wr, 0 op/s
Dec 06 08:10:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:49.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:50.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:51 compute-0 ceph-mon[74339]: pgmap v3543: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 KiB/s wr, 0 op/s
Dec 06 08:10:51 compute-0 nova_compute[251992]: 2025-12-06 08:10:51.437 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:51 compute-0 nova_compute[251992]: 2025-12-06 08:10:51.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:51 compute-0 nova_compute[251992]: 2025-12-06 08:10:51.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:10:51 compute-0 nova_compute[251992]: 2025-12-06 08:10:51.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:10:51 compute-0 nova_compute[251992]: 2025-12-06 08:10:51.676 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:10:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Dec 06 08:10:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:51.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:52 compute-0 sudo[391414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:52 compute-0 sudo[391414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:52 compute-0 sudo[391414]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:52 compute-0 sudo[391439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:52 compute-0 sudo[391439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:52 compute-0 sudo[391439]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4129445202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2490917646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:52.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:53 compute-0 nova_compute[251992]: 2025-12-06 08:10:53.282 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:53 compute-0 ceph-mon[74339]: pgmap v3544: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.3 KiB/s wr, 0 op/s
Dec 06 08:10:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Dec 06 08:10:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:53.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.391649) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654391699, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1735, "num_deletes": 258, "total_data_size": 3035405, "memory_usage": 3072272, "flush_reason": "Manual Compaction"}
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654407689, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2977267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70597, "largest_seqno": 72331, "table_properties": {"data_size": 2969422, "index_size": 4659, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16291, "raw_average_key_size": 19, "raw_value_size": 2953709, "raw_average_value_size": 3602, "num_data_blocks": 205, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008480, "oldest_key_time": 1765008480, "file_creation_time": 1765008654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 16099 microseconds, and 6376 cpu microseconds.
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.407738) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2977267 bytes OK
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.407762) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.410823) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.410837) EVENT_LOG_v1 {"time_micros": 1765008654410832, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.410852) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3028198, prev total WAL file size 3028198, number of live WAL files 2.
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.411698) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373737' seq:72057594037927935, type:22 .. '6C6F676D0033303331' seq:0, type:0; will stop at (end)
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2907KB)], [158(10MB)]
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654411775, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 13894249, "oldest_snapshot_seqno": -1}
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 10621 keys, 13754876 bytes, temperature: kUnknown
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654513567, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 13754876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13686892, "index_size": 40351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26565, "raw_key_size": 280536, "raw_average_key_size": 26, "raw_value_size": 13501461, "raw_average_value_size": 1271, "num_data_blocks": 1537, "num_entries": 10621, "num_filter_entries": 10621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765008654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.513833) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 13754876 bytes
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.516165) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.4 rd, 135.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 10.4 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(9.3) write-amplify(4.6) OK, records in: 11150, records dropped: 529 output_compression: NoCompression
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.516185) EVENT_LOG_v1 {"time_micros": 1765008654516176, "job": 98, "event": "compaction_finished", "compaction_time_micros": 101868, "compaction_time_cpu_micros": 35110, "output_level": 6, "num_output_files": 1, "total_output_size": 13754876, "num_input_records": 11150, "num_output_records": 10621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654516763, "job": 98, "event": "table_file_deletion", "file_number": 160}
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008654518714, "job": 98, "event": "table_file_deletion", "file_number": 158}
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.411568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.518773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.518779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.518781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.518782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:10:54 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:10:54.518784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:10:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:54.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:54 compute-0 nova_compute[251992]: 2025-12-06 08:10:54.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:55 compute-0 ceph-mon[74339]: pgmap v3545: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Dec 06 08:10:55 compute-0 nova_compute[251992]: 2025-12-06 08:10:55.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:10:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 18 KiB/s wr, 3 op/s
Dec 06 08:10:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:55.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:10:56 compute-0 nova_compute[251992]: 2025-12-06 08:10:56.480 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:56 compute-0 nova_compute[251992]: 2025-12-06 08:10:56.508 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:56 compute-0 nova_compute[251992]: 2025-12-06 08:10:56.509 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:56 compute-0 nova_compute[251992]: 2025-12-06 08:10:56.564 251996 DEBUG nova.compute.manager [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:10:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:56.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:56 compute-0 nova_compute[251992]: 2025-12-06 08:10:56.648 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:56 compute-0 nova_compute[251992]: 2025-12-06 08:10:56.649 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:56 compute-0 nova_compute[251992]: 2025-12-06 08:10:56.654 251996 DEBUG nova.virt.hardware [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:10:56 compute-0 nova_compute[251992]: 2025-12-06 08:10:56.655 251996 INFO nova.compute.claims [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:10:56 compute-0 nova_compute[251992]: 2025-12-06 08:10:56.767 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:10:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075003298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.181 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.189 251996 DEBUG nova.compute.provider_tree [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.220 251996 DEBUG nova.scheduler.client.report [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.283 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.284 251996 DEBUG nova.compute.manager [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.336 251996 DEBUG nova.compute.manager [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.336 251996 DEBUG nova.network.neutron [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.352 251996 INFO nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.370 251996 DEBUG nova.compute.manager [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:10:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Dec 06 08:10:57 compute-0 ceph-mon[74339]: pgmap v3546: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 18 KiB/s wr, 3 op/s
Dec 06 08:10:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2075003298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:10:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Dec 06 08:10:57 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.447 251996 DEBUG nova.compute.manager [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.449 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.449 251996 INFO nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Creating image(s)
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.478 251996 DEBUG nova.storage.rbd_utils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.507 251996 DEBUG nova.storage.rbd_utils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.532 251996 DEBUG nova.storage.rbd_utils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.535 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.627 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.629 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.630 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.631 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.666 251996 DEBUG nova.storage.rbd_utils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:10:57 compute-0 nova_compute[251992]: 2025-12-06 08:10:57.670 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:10:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 21 KiB/s wr, 4 op/s
Dec 06 08:10:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:57.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:58 compute-0 nova_compute[251992]: 2025-12-06 08:10:58.004 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:10:58 compute-0 sudo[391583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:58 compute-0 sudo[391583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:58 compute-0 sudo[391583]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:58 compute-0 nova_compute[251992]: 2025-12-06 08:10:58.086 251996 DEBUG nova.storage.rbd_utils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:10:58 compute-0 sudo[391626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:10:58 compute-0 sudo[391626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:58 compute-0 sudo[391626]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:58 compute-0 sudo[391684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:58 compute-0 sudo[391684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:58 compute-0 sudo[391684]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:58 compute-0 nova_compute[251992]: 2025-12-06 08:10:58.195 251996 DEBUG nova.objects.instance [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:10:58 compute-0 sudo[391712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 08:10:58 compute-0 sudo[391712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:58 compute-0 nova_compute[251992]: 2025-12-06 08:10:58.210 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:10:58 compute-0 nova_compute[251992]: 2025-12-06 08:10:58.210 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Ensure instance console log exists: /var/lib/nova/instances/9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:10:58 compute-0 nova_compute[251992]: 2025-12-06 08:10:58.211 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:10:58 compute-0 nova_compute[251992]: 2025-12-06 08:10:58.211 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:10:58 compute-0 nova_compute[251992]: 2025-12-06 08:10:58.212 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:10:58 compute-0 nova_compute[251992]: 2025-12-06 08:10:58.284 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:10:58 compute-0 sudo[391712]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:10:58 compute-0 ceph-mon[74339]: osdmap e414: 3 total, 3 up, 3 in
Dec 06 08:10:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/19922362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:10:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:10:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:10:58 compute-0 sudo[391775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:58 compute-0 sudo[391775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:58 compute-0 sudo[391775]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:58 compute-0 sudo[391800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:10:58 compute-0 sudo[391800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:58 compute-0 sudo[391800]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:10:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:10:58.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:10:58 compute-0 sudo[391825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:10:58 compute-0 sudo[391825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:58 compute-0 sudo[391825]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:58 compute-0 sudo[391850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:10:58 compute-0 sudo[391850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:10:58 compute-0 nova_compute[251992]: 2025-12-06 08:10:58.879 251996 DEBUG nova.policy [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:10:59 compute-0 sudo[391850]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:10:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:10:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:10:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:10:59 compute-0 ceph-mon[74339]: pgmap v3548: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 21 KiB/s wr, 4 op/s
Dec 06 08:10:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:10:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:10:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2171125071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:10:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:10:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 21 KiB/s wr, 4 op/s
Dec 06 08:10:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:10:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:10:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:10:59.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:11:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:11:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:11:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:11:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:11:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:00 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 766078b1-4bb2-403f-957e-f5b2de593ceb does not exist
Dec 06 08:11:00 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2fe81f33-1e52-4edb-bd1b-14e0db2dbfe8 does not exist
Dec 06 08:11:00 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 29a79ab9-aa6f-4435-bcd5-ddea78f1093f does not exist
Dec 06 08:11:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:11:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:11:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:11:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:11:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:11:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:11:00 compute-0 sudo[391907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:00 compute-0 sudo[391907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:00 compute-0 sudo[391907]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:00 compute-0 sudo[391932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:11:00 compute-0 sudo[391932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:00 compute-0 sudo[391932]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:00 compute-0 sudo[391957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:00 compute-0 sudo[391957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:00 compute-0 sudo[391957]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:00 compute-0 sudo[391982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:11:00 compute-0 sudo[391982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:11:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:11:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:11:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:11:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.504019) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660504074, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 369, "num_deletes": 251, "total_data_size": 254354, "memory_usage": 262632, "flush_reason": "Manual Compaction"}
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660508395, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 252984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72332, "largest_seqno": 72700, "table_properties": {"data_size": 250560, "index_size": 523, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6131, "raw_average_key_size": 19, "raw_value_size": 245661, "raw_average_value_size": 777, "num_data_blocks": 21, "num_entries": 316, "num_filter_entries": 316, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008654, "oldest_key_time": 1765008654, "file_creation_time": 1765008660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 4425 microseconds, and 1769 cpu microseconds.
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.508447) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 252984 bytes OK
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.508469) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.509751) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.509766) EVENT_LOG_v1 {"time_micros": 1765008660509762, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.509782) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 251880, prev total WAL file size 251880, number of live WAL files 2.
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.510184) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(247KB)], [161(13MB)]
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660510214, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 14007860, "oldest_snapshot_seqno": -1}
Dec 06 08:11:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:00.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 10417 keys, 11996221 bytes, temperature: kUnknown
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660588547, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 11996221, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11931052, "index_size": 38048, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26053, "raw_key_size": 277013, "raw_average_key_size": 26, "raw_value_size": 11750671, "raw_average_value_size": 1128, "num_data_blocks": 1432, "num_entries": 10417, "num_filter_entries": 10417, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765008660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:11:00 compute-0 nova_compute[251992]: 2025-12-06 08:11:00.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.588790) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 11996221 bytes
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.697036) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.7 rd, 153.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.1 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(102.8) write-amplify(47.4) OK, records in: 10937, records dropped: 520 output_compression: NoCompression
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.697072) EVENT_LOG_v1 {"time_micros": 1765008660697059, "job": 100, "event": "compaction_finished", "compaction_time_micros": 78398, "compaction_time_cpu_micros": 26504, "output_level": 6, "num_output_files": 1, "total_output_size": 11996221, "num_input_records": 10937, "num_output_records": 10417, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660697268, "job": 100, "event": "table_file_deletion", "file_number": 163}
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008660699403, "job": 100, "event": "table_file_deletion", "file_number": 161}
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.510082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.699453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.699457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.699459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.699460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:11:00 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:11:00.699461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:11:00 compute-0 podman[392047]: 2025-12-06 08:11:00.793392938 +0000 UTC m=+0.062486280 container create e519f8eebfa24f98f49f1c905439d5ba0f6b2ea5de0cea3c580b8f50a887ea47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:11:00 compute-0 systemd[1]: Started libpod-conmon-e519f8eebfa24f98f49f1c905439d5ba0f6b2ea5de0cea3c580b8f50a887ea47.scope.
Dec 06 08:11:00 compute-0 podman[392047]: 2025-12-06 08:11:00.749685823 +0000 UTC m=+0.018779185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:11:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:11:01 compute-0 podman[392047]: 2025-12-06 08:11:01.106881249 +0000 UTC m=+0.375974681 container init e519f8eebfa24f98f49f1c905439d5ba0f6b2ea5de0cea3c580b8f50a887ea47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 08:11:01 compute-0 podman[392047]: 2025-12-06 08:11:01.115006522 +0000 UTC m=+0.384099904 container start e519f8eebfa24f98f49f1c905439d5ba0f6b2ea5de0cea3c580b8f50a887ea47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lumiere, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 08:11:01 compute-0 podman[392047]: 2025-12-06 08:11:01.119834443 +0000 UTC m=+0.388927835 container attach e519f8eebfa24f98f49f1c905439d5ba0f6b2ea5de0cea3c580b8f50a887ea47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:11:01 compute-0 nostalgic_lumiere[392063]: 167 167
Dec 06 08:11:01 compute-0 systemd[1]: libpod-e519f8eebfa24f98f49f1c905439d5ba0f6b2ea5de0cea3c580b8f50a887ea47.scope: Deactivated successfully.
Dec 06 08:11:01 compute-0 podman[392047]: 2025-12-06 08:11:01.121149319 +0000 UTC m=+0.390242691 container died e519f8eebfa24f98f49f1c905439d5ba0f6b2ea5de0cea3c580b8f50a887ea47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7406abe4f784a3e0fd122be5a1974f823b2ba5a9cf40dcd01d9375697b52dd16-merged.mount: Deactivated successfully.
Dec 06 08:11:01 compute-0 podman[392047]: 2025-12-06 08:11:01.241792408 +0000 UTC m=+0.510885750 container remove e519f8eebfa24f98f49f1c905439d5ba0f6b2ea5de0cea3c580b8f50a887ea47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:11:01 compute-0 systemd[1]: libpod-conmon-e519f8eebfa24f98f49f1c905439d5ba0f6b2ea5de0cea3c580b8f50a887ea47.scope: Deactivated successfully.
Dec 06 08:11:01 compute-0 podman[392088]: 2025-12-06 08:11:01.396666902 +0000 UTC m=+0.040343774 container create 937b78f19e682f3eb598f445273647f3990afcfe6a4b22e880c626c6850759e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jackson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:11:01 compute-0 systemd[1]: Started libpod-conmon-937b78f19e682f3eb598f445273647f3990afcfe6a4b22e880c626c6850759e1.scope.
Dec 06 08:11:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95ba5c5ad3147bedc4be966cf0a05cb884258ddb812a8c33de917bdb1532136/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95ba5c5ad3147bedc4be966cf0a05cb884258ddb812a8c33de917bdb1532136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95ba5c5ad3147bedc4be966cf0a05cb884258ddb812a8c33de917bdb1532136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95ba5c5ad3147bedc4be966cf0a05cb884258ddb812a8c33de917bdb1532136/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c95ba5c5ad3147bedc4be966cf0a05cb884258ddb812a8c33de917bdb1532136/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:01 compute-0 podman[392088]: 2025-12-06 08:11:01.377854497 +0000 UTC m=+0.021531389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:11:01 compute-0 podman[392088]: 2025-12-06 08:11:01.473730239 +0000 UTC m=+0.117407131 container init 937b78f19e682f3eb598f445273647f3990afcfe6a4b22e880c626c6850759e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 08:11:01 compute-0 podman[392088]: 2025-12-06 08:11:01.480806462 +0000 UTC m=+0.124483334 container start 937b78f19e682f3eb598f445273647f3990afcfe6a4b22e880c626c6850759e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jackson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 08:11:01 compute-0 nova_compute[251992]: 2025-12-06 08:11:01.482 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:01 compute-0 podman[392088]: 2025-12-06 08:11:01.485198353 +0000 UTC m=+0.128875285 container attach 937b78f19e682f3eb598f445273647f3990afcfe6a4b22e880c626c6850759e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jackson, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:11:01 compute-0 ceph-mon[74339]: pgmap v3549: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 21 KiB/s wr, 4 op/s
Dec 06 08:11:01 compute-0 nova_compute[251992]: 2025-12-06 08:11:01.531 251996 DEBUG nova.network.neutron [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Successfully created port: db2dc841-9ffa-4ba0-9187-49705de50963 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:11:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 916 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 08:11:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:01.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:02 compute-0 practical_jackson[392105]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:11:02 compute-0 practical_jackson[392105]: --> relative data size: 1.0
Dec 06 08:11:02 compute-0 practical_jackson[392105]: --> All data devices are unavailable
Dec 06 08:11:02 compute-0 systemd[1]: libpod-937b78f19e682f3eb598f445273647f3990afcfe6a4b22e880c626c6850759e1.scope: Deactivated successfully.
Dec 06 08:11:02 compute-0 podman[392088]: 2025-12-06 08:11:02.330157904 +0000 UTC m=+0.973834816 container died 937b78f19e682f3eb598f445273647f3990afcfe6a4b22e880c626c6850759e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jackson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 08:11:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c95ba5c5ad3147bedc4be966cf0a05cb884258ddb812a8c33de917bdb1532136-merged.mount: Deactivated successfully.
Dec 06 08:11:02 compute-0 podman[392088]: 2025-12-06 08:11:02.395231664 +0000 UTC m=+1.038908536 container remove 937b78f19e682f3eb598f445273647f3990afcfe6a4b22e880c626c6850759e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:11:02 compute-0 systemd[1]: libpod-conmon-937b78f19e682f3eb598f445273647f3990afcfe6a4b22e880c626c6850759e1.scope: Deactivated successfully.
Dec 06 08:11:02 compute-0 nova_compute[251992]: 2025-12-06 08:11:02.410 251996 DEBUG nova.network.neutron [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Successfully updated port: db2dc841-9ffa-4ba0-9187-49705de50963 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:11:02 compute-0 sudo[391982]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:02 compute-0 nova_compute[251992]: 2025-12-06 08:11:02.438 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:11:02 compute-0 nova_compute[251992]: 2025-12-06 08:11:02.439 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:11:02 compute-0 nova_compute[251992]: 2025-12-06 08:11:02.439 251996 DEBUG nova.network.neutron [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:11:02 compute-0 sudo[392132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:02 compute-0 sudo[392132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:02 compute-0 sudo[392132]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:02 compute-0 ceph-mon[74339]: pgmap v3550: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 916 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 08:11:02 compute-0 nova_compute[251992]: 2025-12-06 08:11:02.532 251996 DEBUG nova.compute.manager [req-617bcb54-2b28-44ef-9bef-87443eff821a req-4a2a5411-9b48-43ba-a54b-43c5bcc69dc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Received event network-changed-db2dc841-9ffa-4ba0-9187-49705de50963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:11:02 compute-0 nova_compute[251992]: 2025-12-06 08:11:02.534 251996 DEBUG nova.compute.manager [req-617bcb54-2b28-44ef-9bef-87443eff821a req-4a2a5411-9b48-43ba-a54b-43c5bcc69dc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Refreshing instance network info cache due to event network-changed-db2dc841-9ffa-4ba0-9187-49705de50963. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:11:02 compute-0 nova_compute[251992]: 2025-12-06 08:11:02.534 251996 DEBUG oslo_concurrency.lockutils [req-617bcb54-2b28-44ef-9bef-87443eff821a req-4a2a5411-9b48-43ba-a54b-43c5bcc69dc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:11:02 compute-0 sudo[392157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:11:02 compute-0 sudo[392157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:02 compute-0 sudo[392157]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:02.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:02 compute-0 sudo[392182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:02 compute-0 sudo[392182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:02 compute-0 sudo[392182]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:02 compute-0 sudo[392207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:11:02 compute-0 sudo[392207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:02 compute-0 nova_compute[251992]: 2025-12-06 08:11:02.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:02 compute-0 nova_compute[251992]: 2025-12-06 08:11:02.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:02 compute-0 nova_compute[251992]: 2025-12-06 08:11:02.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:11:02 compute-0 nova_compute[251992]: 2025-12-06 08:11:02.840 251996 DEBUG nova.network.neutron [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:11:02 compute-0 podman[392273]: 2025-12-06 08:11:02.943228527 +0000 UTC m=+0.036410537 container create 59bcc8178443d84ba824438c72267ca437d106f00594cc38776963e043d0272d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_saha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec 06 08:11:02 compute-0 systemd[1]: Started libpod-conmon-59bcc8178443d84ba824438c72267ca437d106f00594cc38776963e043d0272d.scope.
Dec 06 08:11:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:11:03 compute-0 podman[392273]: 2025-12-06 08:11:03.006667702 +0000 UTC m=+0.099849802 container init 59bcc8178443d84ba824438c72267ca437d106f00594cc38776963e043d0272d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_saha, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:11:03 compute-0 podman[392273]: 2025-12-06 08:11:03.013236181 +0000 UTC m=+0.106418191 container start 59bcc8178443d84ba824438c72267ca437d106f00594cc38776963e043d0272d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_saha, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:11:03 compute-0 podman[392273]: 2025-12-06 08:11:03.016054308 +0000 UTC m=+0.109236318 container attach 59bcc8178443d84ba824438c72267ca437d106f00594cc38776963e043d0272d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:11:03 compute-0 practical_saha[392289]: 167 167
Dec 06 08:11:03 compute-0 systemd[1]: libpod-59bcc8178443d84ba824438c72267ca437d106f00594cc38776963e043d0272d.scope: Deactivated successfully.
Dec 06 08:11:03 compute-0 podman[392273]: 2025-12-06 08:11:03.0194435 +0000 UTC m=+0.112625530 container died 59bcc8178443d84ba824438c72267ca437d106f00594cc38776963e043d0272d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 08:11:03 compute-0 podman[392273]: 2025-12-06 08:11:02.927916669 +0000 UTC m=+0.021098699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:11:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5920292b599d76e9fb73f628f48f77f52ef67f51d0da50344406a2e5ba29b04-merged.mount: Deactivated successfully.
Dec 06 08:11:03 compute-0 podman[392273]: 2025-12-06 08:11:03.066695172 +0000 UTC m=+0.159877182 container remove 59bcc8178443d84ba824438c72267ca437d106f00594cc38776963e043d0272d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:11:03 compute-0 systemd[1]: libpod-conmon-59bcc8178443d84ba824438c72267ca437d106f00594cc38776963e043d0272d.scope: Deactivated successfully.
Dec 06 08:11:03 compute-0 podman[392315]: 2025-12-06 08:11:03.278260647 +0000 UTC m=+0.060589788 container create ffe6ceb2e8676bb57088417fbf2ce835075ee32be2d6d910249bf4a62a53dc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:11:03 compute-0 nova_compute[251992]: 2025-12-06 08:11:03.287 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:03 compute-0 systemd[1]: Started libpod-conmon-ffe6ceb2e8676bb57088417fbf2ce835075ee32be2d6d910249bf4a62a53dc06.scope.
Dec 06 08:11:03 compute-0 podman[392315]: 2025-12-06 08:11:03.257625753 +0000 UTC m=+0.039954944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:11:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e07df2a4d0e627765932d2b8edf64e0ebe3d2fb635b2ae40f378d2ab881c73d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e07df2a4d0e627765932d2b8edf64e0ebe3d2fb635b2ae40f378d2ab881c73d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e07df2a4d0e627765932d2b8edf64e0ebe3d2fb635b2ae40f378d2ab881c73d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e07df2a4d0e627765932d2b8edf64e0ebe3d2fb635b2ae40f378d2ab881c73d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:03 compute-0 podman[392315]: 2025-12-06 08:11:03.380345818 +0000 UTC m=+0.162674979 container init ffe6ceb2e8676bb57088417fbf2ce835075ee32be2d6d910249bf4a62a53dc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:11:03 compute-0 podman[392315]: 2025-12-06 08:11:03.386808294 +0000 UTC m=+0.169137435 container start ffe6ceb2e8676bb57088417fbf2ce835075ee32be2d6d910249bf4a62a53dc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:11:03 compute-0 podman[392315]: 2025-12-06 08:11:03.390886807 +0000 UTC m=+0.173215958 container attach ffe6ceb2e8676bb57088417fbf2ce835075ee32be2d6d910249bf4a62a53dc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_nightingale, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:11:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 916 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 08:11:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:03.886 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:03.887 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:03.888 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:03.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:04 compute-0 magical_nightingale[392332]: {
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:     "0": [
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:         {
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             "devices": [
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "/dev/loop3"
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             ],
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             "lv_name": "ceph_lv0",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             "lv_size": "7511998464",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             "name": "ceph_lv0",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             "tags": {
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "ceph.cluster_name": "ceph",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "ceph.crush_device_class": "",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "ceph.encrypted": "0",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "ceph.osd_id": "0",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "ceph.type": "block",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:                 "ceph.vdo": "0"
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             },
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             "type": "block",
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:             "vg_name": "ceph_vg0"
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:         }
Dec 06 08:11:04 compute-0 magical_nightingale[392332]:     ]
Dec 06 08:11:04 compute-0 magical_nightingale[392332]: }
Dec 06 08:11:04 compute-0 systemd[1]: libpod-ffe6ceb2e8676bb57088417fbf2ce835075ee32be2d6d910249bf4a62a53dc06.scope: Deactivated successfully.
Dec 06 08:11:04 compute-0 podman[392341]: 2025-12-06 08:11:04.251450605 +0000 UTC m=+0.026058463 container died ffe6ceb2e8676bb57088417fbf2ce835075ee32be2d6d910249bf4a62a53dc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_nightingale, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:11:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e07df2a4d0e627765932d2b8edf64e0ebe3d2fb635b2ae40f378d2ab881c73d-merged.mount: Deactivated successfully.
Dec 06 08:11:04 compute-0 podman[392341]: 2025-12-06 08:11:04.302858031 +0000 UTC m=+0.077465879 container remove ffe6ceb2e8676bb57088417fbf2ce835075ee32be2d6d910249bf4a62a53dc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_nightingale, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 08:11:04 compute-0 systemd[1]: libpod-conmon-ffe6ceb2e8676bb57088417fbf2ce835075ee32be2d6d910249bf4a62a53dc06.scope: Deactivated successfully.
Dec 06 08:11:04 compute-0 sudo[392207]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:04 compute-0 sudo[392356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:04 compute-0 sudo[392356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:04 compute-0 sudo[392356]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.411 251996 DEBUG nova.network.neutron [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Updating instance_info_cache with network_info: [{"id": "db2dc841-9ffa-4ba0-9187-49705de50963", "address": "fa:16:3e:f3:4d:4f", "network": {"id": "45d279a8-df8a-401d-a171-8e7cd6ef2787", "bridge": "br-int", "label": "tempest-network-smoke--748535621", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2dc841-9f", "ovs_interfaceid": "db2dc841-9ffa-4ba0-9187-49705de50963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.448 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.449 251996 DEBUG nova.compute.manager [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Instance network_info: |[{"id": "db2dc841-9ffa-4ba0-9187-49705de50963", "address": "fa:16:3e:f3:4d:4f", "network": {"id": "45d279a8-df8a-401d-a171-8e7cd6ef2787", "bridge": "br-int", "label": "tempest-network-smoke--748535621", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2dc841-9f", "ovs_interfaceid": "db2dc841-9ffa-4ba0-9187-49705de50963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.449 251996 DEBUG oslo_concurrency.lockutils [req-617bcb54-2b28-44ef-9bef-87443eff821a req-4a2a5411-9b48-43ba-a54b-43c5bcc69dc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.449 251996 DEBUG nova.network.neutron [req-617bcb54-2b28-44ef-9bef-87443eff821a req-4a2a5411-9b48-43ba-a54b-43c5bcc69dc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Refreshing network info cache for port db2dc841-9ffa-4ba0-9187-49705de50963 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.452 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Start _get_guest_xml network_info=[{"id": "db2dc841-9ffa-4ba0-9187-49705de50963", "address": "fa:16:3e:f3:4d:4f", "network": {"id": "45d279a8-df8a-401d-a171-8e7cd6ef2787", "bridge": "br-int", "label": "tempest-network-smoke--748535621", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2dc841-9f", "ovs_interfaceid": "db2dc841-9ffa-4ba0-9187-49705de50963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:11:04 compute-0 sudo[392381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:11:04 compute-0 sudo[392381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.458 251996 WARNING nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:11:04 compute-0 sudo[392381]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.465 251996 DEBUG nova.virt.libvirt.host [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.466 251996 DEBUG nova.virt.libvirt.host [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.469 251996 DEBUG nova.virt.libvirt.host [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.469 251996 DEBUG nova.virt.libvirt.host [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.470 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.471 251996 DEBUG nova.virt.hardware [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.471 251996 DEBUG nova.virt.hardware [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.472 251996 DEBUG nova.virt.hardware [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.472 251996 DEBUG nova.virt.hardware [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.472 251996 DEBUG nova.virt.hardware [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.473 251996 DEBUG nova.virt.hardware [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.473 251996 DEBUG nova.virt.hardware [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.473 251996 DEBUG nova.virt.hardware [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.473 251996 DEBUG nova.virt.hardware [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.474 251996 DEBUG nova.virt.hardware [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.474 251996 DEBUG nova.virt.hardware [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.477 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:11:04 compute-0 sudo[392406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:04 compute-0 sudo[392406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:04 compute-0 sudo[392406]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:04 compute-0 sudo[392432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:11:04 compute-0 sudo[392432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:04.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:04 compute-0 ceph-mon[74339]: pgmap v3551: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 916 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Dec 06 08:11:04 compute-0 podman[392513]: 2025-12-06 08:11:04.902516647 +0000 UTC m=+0.037762354 container create 44dc333cef6a1bbdcf99a243b6851ef3d207d97e5bdd5cb7f4f4b32e19d5b888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 08:11:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:11:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2928991331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:11:04 compute-0 systemd[1]: Started libpod-conmon-44dc333cef6a1bbdcf99a243b6851ef3d207d97e5bdd5cb7f4f4b32e19d5b888.scope.
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.950 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:11:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:11:04 compute-0 podman[392513]: 2025-12-06 08:11:04.97836093 +0000 UTC m=+0.113606657 container init 44dc333cef6a1bbdcf99a243b6851ef3d207d97e5bdd5cb7f4f4b32e19d5b888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:11:04 compute-0 podman[392513]: 2025-12-06 08:11:04.886575331 +0000 UTC m=+0.021821048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:11:04 compute-0 podman[392513]: 2025-12-06 08:11:04.984595031 +0000 UTC m=+0.119840748 container start 44dc333cef6a1bbdcf99a243b6851ef3d207d97e5bdd5cb7f4f4b32e19d5b888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:11:04 compute-0 blissful_golick[392531]: 167 167
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.988 251996 DEBUG nova.storage.rbd_utils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:11:04 compute-0 podman[392513]: 2025-12-06 08:11:04.989380212 +0000 UTC m=+0.124625969 container attach 44dc333cef6a1bbdcf99a243b6851ef3d207d97e5bdd5cb7f4f4b32e19d5b888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:11:04 compute-0 systemd[1]: libpod-44dc333cef6a1bbdcf99a243b6851ef3d207d97e5bdd5cb7f4f4b32e19d5b888.scope: Deactivated successfully.
Dec 06 08:11:04 compute-0 podman[392513]: 2025-12-06 08:11:04.9915359 +0000 UTC m=+0.126781617 container died 44dc333cef6a1bbdcf99a243b6851ef3d207d97e5bdd5cb7f4f4b32e19d5b888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:11:04 compute-0 nova_compute[251992]: 2025-12-06 08:11:04.994 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:11:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a5c06a91d87244ea8cf77a9141e5348c9bc4e7ebcb45d468d2241a158d7b587-merged.mount: Deactivated successfully.
Dec 06 08:11:05 compute-0 podman[392513]: 2025-12-06 08:11:05.035805291 +0000 UTC m=+0.171050998 container remove 44dc333cef6a1bbdcf99a243b6851ef3d207d97e5bdd5cb7f4f4b32e19d5b888 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:11:05 compute-0 systemd[1]: libpod-conmon-44dc333cef6a1bbdcf99a243b6851ef3d207d97e5bdd5cb7f4f4b32e19d5b888.scope: Deactivated successfully.
Dec 06 08:11:05 compute-0 podman[392596]: 2025-12-06 08:11:05.206881388 +0000 UTC m=+0.048475996 container create acb00f4adf41cb947e3a4480c9f137af5a6907909b0c232a6d95b26daf6ae706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chatelet, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 08:11:05 compute-0 systemd[1]: Started libpod-conmon-acb00f4adf41cb947e3a4480c9f137af5a6907909b0c232a6d95b26daf6ae706.scope.
Dec 06 08:11:05 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:11:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e2ec2b4cccc2ca3d35c30649d617ed72a148bb87259fda079999cda5ade4e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e2ec2b4cccc2ca3d35c30649d617ed72a148bb87259fda079999cda5ade4e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e2ec2b4cccc2ca3d35c30649d617ed72a148bb87259fda079999cda5ade4e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9e2ec2b4cccc2ca3d35c30649d617ed72a148bb87259fda079999cda5ade4e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:05 compute-0 podman[392596]: 2025-12-06 08:11:05.18827366 +0000 UTC m=+0.029868308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:11:05 compute-0 podman[392596]: 2025-12-06 08:11:05.286517906 +0000 UTC m=+0.128112534 container init acb00f4adf41cb947e3a4480c9f137af5a6907909b0c232a6d95b26daf6ae706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 08:11:05 compute-0 podman[392596]: 2025-12-06 08:11:05.292968462 +0000 UTC m=+0.134563080 container start acb00f4adf41cb947e3a4480c9f137af5a6907909b0c232a6d95b26daf6ae706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chatelet, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 08:11:05 compute-0 podman[392596]: 2025-12-06 08:11:05.297474366 +0000 UTC m=+0.139068984 container attach acb00f4adf41cb947e3a4480c9f137af5a6907909b0c232a6d95b26daf6ae706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:11:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:11:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2064527638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.451 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.453 251996 DEBUG nova.virt.libvirt.vif [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:10:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1516061334',display_name='tempest-TestNetworkBasicOps-server-1516061334',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1516061334',id=198,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN7zzAxFHv2BK6lDcH3TkL1dghIY1UDFXKAjdgrHB3jbZay3rOebcvKFFR16Dt5bAsikdU8GpHTMi6/2WJUM95RdVrtVRafvS8V7387L2oERD9LvmL0Pc0856TqRNAinkQ==',key_name='tempest-TestNetworkBasicOps-246940953',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-qsl2kvse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:10:57Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db2dc841-9ffa-4ba0-9187-49705de50963", "address": "fa:16:3e:f3:4d:4f", "network": {"id": "45d279a8-df8a-401d-a171-8e7cd6ef2787", "bridge": "br-int", "label": "tempest-network-smoke--748535621", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2dc841-9f", "ovs_interfaceid": "db2dc841-9ffa-4ba0-9187-49705de50963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.454 251996 DEBUG nova.network.os_vif_util [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "db2dc841-9ffa-4ba0-9187-49705de50963", "address": "fa:16:3e:f3:4d:4f", "network": {"id": "45d279a8-df8a-401d-a171-8e7cd6ef2787", "bridge": "br-int", "label": "tempest-network-smoke--748535621", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2dc841-9f", "ovs_interfaceid": "db2dc841-9ffa-4ba0-9187-49705de50963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.454 251996 DEBUG nova.network.os_vif_util [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:4d:4f,bridge_name='br-int',has_traffic_filtering=True,id=db2dc841-9ffa-4ba0-9187-49705de50963,network=Network(45d279a8-df8a-401d-a171-8e7cd6ef2787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb2dc841-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.456 251996 DEBUG nova.objects.instance [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.474 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:11:05 compute-0 nova_compute[251992]:   <uuid>9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7</uuid>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   <name>instance-000000c6</name>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <nova:name>tempest-TestNetworkBasicOps-server-1516061334</nova:name>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:11:04</nova:creationTime>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <nova:port uuid="db2dc841-9ffa-4ba0-9187-49705de50963">
Dec 06 08:11:05 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <system>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <entry name="serial">9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7</entry>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <entry name="uuid">9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7</entry>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     </system>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   <os>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   </os>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   <features>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   </features>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk">
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       </source>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk.config">
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       </source>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:11:05 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:f3:4d:4f"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <target dev="tapdb2dc841-9f"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7/console.log" append="off"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <video>
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     </video>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:11:05 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:11:05 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:11:05 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:11:05 compute-0 nova_compute[251992]: </domain>
Dec 06 08:11:05 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.476 251996 DEBUG nova.compute.manager [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Preparing to wait for external event network-vif-plugged-db2dc841-9ffa-4ba0-9187-49705de50963 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.476 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.477 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.477 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.478 251996 DEBUG nova.virt.libvirt.vif [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:10:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1516061334',display_name='tempest-TestNetworkBasicOps-server-1516061334',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1516061334',id=198,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN7zzAxFHv2BK6lDcH3TkL1dghIY1UDFXKAjdgrHB3jbZay3rOebcvKFFR16Dt5bAsikdU8GpHTMi6/2WJUM95RdVrtVRafvS8V7387L2oERD9LvmL0Pc0856TqRNAinkQ==',key_name='tempest-TestNetworkBasicOps-246940953',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-qsl2kvse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:10:57Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db2dc841-9ffa-4ba0-9187-49705de50963", "address": "fa:16:3e:f3:4d:4f", "network": {"id": "45d279a8-df8a-401d-a171-8e7cd6ef2787", "bridge": "br-int", "label": "tempest-network-smoke--748535621", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2dc841-9f", "ovs_interfaceid": "db2dc841-9ffa-4ba0-9187-49705de50963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.478 251996 DEBUG nova.network.os_vif_util [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "db2dc841-9ffa-4ba0-9187-49705de50963", "address": "fa:16:3e:f3:4d:4f", "network": {"id": "45d279a8-df8a-401d-a171-8e7cd6ef2787", "bridge": "br-int", "label": "tempest-network-smoke--748535621", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2dc841-9f", "ovs_interfaceid": "db2dc841-9ffa-4ba0-9187-49705de50963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.479 251996 DEBUG nova.network.os_vif_util [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:4d:4f,bridge_name='br-int',has_traffic_filtering=True,id=db2dc841-9ffa-4ba0-9187-49705de50963,network=Network(45d279a8-df8a-401d-a171-8e7cd6ef2787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb2dc841-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.479 251996 DEBUG os_vif [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:4d:4f,bridge_name='br-int',has_traffic_filtering=True,id=db2dc841-9ffa-4ba0-9187-49705de50963,network=Network(45d279a8-df8a-401d-a171-8e7cd6ef2787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb2dc841-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.480 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.481 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.481 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.486 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.486 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb2dc841-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.487 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb2dc841-9f, col_values=(('external_ids', {'iface-id': 'db2dc841-9ffa-4ba0-9187-49705de50963', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:4d:4f', 'vm-uuid': '9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.488 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:05 compute-0 NetworkManager[48965]: <info>  [1765008665.4899] manager: (tapdb2dc841-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.492 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.496 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.498 251996 INFO os_vif [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:4d:4f,bridge_name='br-int',has_traffic_filtering=True,id=db2dc841-9ffa-4ba0-9187-49705de50963,network=Network(45d279a8-df8a-401d-a171-8e7cd6ef2787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb2dc841-9f')
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.548 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.548 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.549 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:f3:4d:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.549 251996 INFO nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Using config drive
Dec 06 08:11:05 compute-0 nova_compute[251992]: 2025-12-06 08:11:05.575 251996 DEBUG nova.storage.rbd_utils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:11:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 06 08:11:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:05.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.027 251996 DEBUG nova.network.neutron [req-617bcb54-2b28-44ef-9bef-87443eff821a req-4a2a5411-9b48-43ba-a54b-43c5bcc69dc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Updated VIF entry in instance network info cache for port db2dc841-9ffa-4ba0-9187-49705de50963. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.028 251996 DEBUG nova.network.neutron [req-617bcb54-2b28-44ef-9bef-87443eff821a req-4a2a5411-9b48-43ba-a54b-43c5bcc69dc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Updating instance_info_cache with network_info: [{"id": "db2dc841-9ffa-4ba0-9187-49705de50963", "address": "fa:16:3e:f3:4d:4f", "network": {"id": "45d279a8-df8a-401d-a171-8e7cd6ef2787", "bridge": "br-int", "label": "tempest-network-smoke--748535621", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2dc841-9f", "ovs_interfaceid": "db2dc841-9ffa-4ba0-9187-49705de50963", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:11:06 compute-0 elastic_chatelet[392613]: {
Dec 06 08:11:06 compute-0 elastic_chatelet[392613]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:11:06 compute-0 elastic_chatelet[392613]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:11:06 compute-0 elastic_chatelet[392613]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:11:06 compute-0 elastic_chatelet[392613]:         "osd_id": 0,
Dec 06 08:11:06 compute-0 elastic_chatelet[392613]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:11:06 compute-0 elastic_chatelet[392613]:         "type": "bluestore"
Dec 06 08:11:06 compute-0 elastic_chatelet[392613]:     }
Dec 06 08:11:06 compute-0 elastic_chatelet[392613]: }
Dec 06 08:11:06 compute-0 systemd[1]: libpod-acb00f4adf41cb947e3a4480c9f137af5a6907909b0c232a6d95b26daf6ae706.scope: Deactivated successfully.
Dec 06 08:11:06 compute-0 podman[392596]: 2025-12-06 08:11:06.084896945 +0000 UTC m=+0.926491563 container died acb00f4adf41cb947e3a4480c9f137af5a6907909b0c232a6d95b26daf6ae706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chatelet, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.165 251996 DEBUG oslo_concurrency.lockutils [req-617bcb54-2b28-44ef-9bef-87443eff821a req-4a2a5411-9b48-43ba-a54b-43c5bcc69dc9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.182 251996 INFO nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Creating config drive at /var/lib/nova/instances/9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7/disk.config
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.187 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5h_8t9zn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:11:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2928991331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:11:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2064527638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:11:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9e2ec2b4cccc2ca3d35c30649d617ed72a148bb87259fda079999cda5ade4e3-merged.mount: Deactivated successfully.
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.320 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5h_8t9zn" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:11:06 compute-0 podman[392596]: 2025-12-06 08:11:06.337887732 +0000 UTC m=+1.179482340 container remove acb00f4adf41cb947e3a4480c9f137af5a6907909b0c232a6d95b26daf6ae706 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chatelet, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:11:06 compute-0 systemd[1]: libpod-conmon-acb00f4adf41cb947e3a4480c9f137af5a6907909b0c232a6d95b26daf6ae706.scope: Deactivated successfully.
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.349 251996 DEBUG nova.storage.rbd_utils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.352 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7/disk.config 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:11:06 compute-0 sudo[392432]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:11:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:11:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e2e059cb-051c-443b-9363-91602d0ceddd does not exist
Dec 06 08:11:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 66f90d17-00df-4e02-83b7-ac05f85ecad9 does not exist
Dec 06 08:11:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0b649fbe-6ab7-4ee6-a73d-8c49af9c938a does not exist
Dec 06 08:11:06 compute-0 sudo[392705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:06 compute-0 sudo[392705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:06 compute-0 sudo[392705]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.511 251996 DEBUG oslo_concurrency.processutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7/disk.config 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.512 251996 INFO nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Deleting local config drive /var/lib/nova/instances/9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7/disk.config because it was imported into RBD.
Dec 06 08:11:06 compute-0 sudo[392739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:11:06 compute-0 sudo[392739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:06 compute-0 sudo[392739]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:06 compute-0 kernel: tapdb2dc841-9f: entered promiscuous mode
Dec 06 08:11:06 compute-0 NetworkManager[48965]: <info>  [1765008666.5743] manager: (tapdb2dc841-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/349)
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.576 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:06 compute-0 ovn_controller[147168]: 2025-12-06T08:11:06Z|00751|binding|INFO|Claiming lport db2dc841-9ffa-4ba0-9187-49705de50963 for this chassis.
Dec 06 08:11:06 compute-0 ovn_controller[147168]: 2025-12-06T08:11:06Z|00752|binding|INFO|db2dc841-9ffa-4ba0-9187-49705de50963: Claiming fa:16:3e:f3:4d:4f 10.100.0.11
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.586 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:06.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.601 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:4d:4f 10.100.0.11'], port_security=['fa:16:3e:f3:4d:4f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45d279a8-df8a-401d-a171-8e7cd6ef2787', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c1b22efd-e5dd-452e-874f-48c563426bca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df697692-73c6-4c24-8d1e-7c6596e7ba06, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=db2dc841-9ffa-4ba0-9187-49705de50963) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.603 158118 INFO neutron.agent.ovn.metadata.agent [-] Port db2dc841-9ffa-4ba0-9187-49705de50963 in datapath 45d279a8-df8a-401d-a171-8e7cd6ef2787 bound to our chassis
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.604 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45d279a8-df8a-401d-a171-8e7cd6ef2787
Dec 06 08:11:06 compute-0 systemd-udevd[392796]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:11:06 compute-0 NetworkManager[48965]: <info>  [1765008666.6210] device (tapdb2dc841-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.619 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[30eee0a8-0303-45f4-befe-ebba686d7f81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.622 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45d279a8-d1 in ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:11:06 compute-0 NetworkManager[48965]: <info>  [1765008666.6230] device (tapdb2dc841-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:11:06 compute-0 podman[392732]: 2025-12-06 08:11:06.623401408 +0000 UTC m=+0.120163547 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller)
Dec 06 08:11:06 compute-0 systemd-machined[212986]: New machine qemu-90-instance-000000c6.
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.624 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45d279a8-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.625 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a65fe342-edfa-4ec8-b3b4-50ddb5020d2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.626 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8e71cc-6314-4347-b0de-d9b68f85b3da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-000000c6.
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.641 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[0611f59d-4b6a-45e8-b571-d72c517a9c90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_controller[147168]: 2025-12-06T08:11:06Z|00753|binding|INFO|Setting lport db2dc841-9ffa-4ba0-9187-49705de50963 ovn-installed in OVS
Dec 06 08:11:06 compute-0 ovn_controller[147168]: 2025-12-06T08:11:06Z|00754|binding|INFO|Setting lport db2dc841-9ffa-4ba0-9187-49705de50963 up in Southbound
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.649 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.665 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d65b14b8-4e02-438e-9d43-57bc43eac568]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.698 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ac2375-a594-4857-8bfb-4488089b821c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.703 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[26181824-8528-44d8-93ce-271932655f62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 NetworkManager[48965]: <info>  [1765008666.7052] manager: (tap45d279a8-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/350)
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.736 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c8649b-643a-47af-9188-f929601c28f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.739 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[987980a1-b2db-4944-9efa-9df3d465737d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 NetworkManager[48965]: <info>  [1765008666.7613] device (tap45d279a8-d0): carrier: link connected
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.765 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[bda84f1e-9e25-42f2-a305-4984d42ee6e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.785 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e33162cd-b467-4ec5-8901-af5fccfae775]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45d279a8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:60:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 893934, 'reachable_time': 16235, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392830, 'error': None, 'target': 'ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.800 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c470bdb5-594b-4e3d-a94c-4d296bf79e8b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:60c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 893934, 'tstamp': 893934}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 392831, 'error': None, 'target': 'ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.817 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[65bbb1de-9ac2-424b-b8fb-53c5747c8963]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45d279a8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:60:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 893934, 'reachable_time': 16235, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 392832, 'error': None, 'target': 'ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.846 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3eff5898-b259-4f47-a6e9-1ff47484c4ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.898 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b024c8-62a8-4abc-a7db-31bc967c8a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.899 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45d279a8-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.900 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.900 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45d279a8-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.902 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:06 compute-0 NetworkManager[48965]: <info>  [1765008666.9028] manager: (tap45d279a8-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Dec 06 08:11:06 compute-0 kernel: tap45d279a8-d0: entered promiscuous mode
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.904 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45d279a8-d0, col_values=(('external_ids', {'iface-id': '87203587-ba9e-4dcf-9130-3f1e1a02f74f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:06 compute-0 ovn_controller[147168]: 2025-12-06T08:11:06Z|00755|binding|INFO|Releasing lport 87203587-ba9e-4dcf-9130-3f1e1a02f74f from this chassis (sb_readonly=0)
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.905 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:06 compute-0 nova_compute[251992]: 2025-12-06 08:11:06.918 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.919 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45d279a8-df8a-401d-a171-8e7cd6ef2787.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45d279a8-df8a-401d-a171-8e7cd6ef2787.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.920 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f2405f-b884-4c50-8866-fd7f03fc3d20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.920 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-45d279a8-df8a-401d-a171-8e7cd6ef2787
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/45d279a8-df8a-401d-a171-8e7cd6ef2787.pid.haproxy
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 45d279a8-df8a-401d-a171-8e7cd6ef2787
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:11:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:06.921 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787', 'env', 'PROCESS_TAG=haproxy-45d279a8-df8a-401d-a171-8e7cd6ef2787', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45d279a8-df8a-401d-a171-8e7cd6ef2787.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.180 251996 DEBUG nova.compute.manager [req-6ba32caa-84ca-4903-b905-f0f5e9e83ae2 req-ff3cf4b0-90a3-418c-bfff-20dc6d43a7a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Received event network-vif-plugged-db2dc841-9ffa-4ba0-9187-49705de50963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.181 251996 DEBUG oslo_concurrency.lockutils [req-6ba32caa-84ca-4903-b905-f0f5e9e83ae2 req-ff3cf4b0-90a3-418c-bfff-20dc6d43a7a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.181 251996 DEBUG oslo_concurrency.lockutils [req-6ba32caa-84ca-4903-b905-f0f5e9e83ae2 req-ff3cf4b0-90a3-418c-bfff-20dc6d43a7a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.181 251996 DEBUG oslo_concurrency.lockutils [req-6ba32caa-84ca-4903-b905-f0f5e9e83ae2 req-ff3cf4b0-90a3-418c-bfff-20dc6d43a7a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.182 251996 DEBUG nova.compute.manager [req-6ba32caa-84ca-4903-b905-f0f5e9e83ae2 req-ff3cf4b0-90a3-418c-bfff-20dc6d43a7a9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Processing event network-vif-plugged-db2dc841-9ffa-4ba0-9187-49705de50963 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:11:07 compute-0 podman[392901]: 2025-12-06 08:11:07.281765938 +0000 UTC m=+0.063832456 container create 77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:11:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Dec 06 08:11:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Dec 06 08:11:07 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Dec 06 08:11:07 compute-0 podman[392901]: 2025-12-06 08:11:07.240193572 +0000 UTC m=+0.022260170 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:11:07 compute-0 systemd[1]: Started libpod-conmon-77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc.scope.
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.374 251996 DEBUG nova.compute.manager [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.375 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008667.3741705, 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.376 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] VM Started (Lifecycle Event)
Dec 06 08:11:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.378 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:11:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/328f500448ba9bd010e53bec5c849f9f0a1cadaa819537ab2d81237bc7b7c3bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.384 251996 INFO nova.virt.libvirt.driver [-] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Instance spawned successfully.
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.384 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:11:07 compute-0 podman[392901]: 2025-12-06 08:11:07.397546314 +0000 UTC m=+0.179612852 container init 77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:11:07 compute-0 podman[392901]: 2025-12-06 08:11:07.402296774 +0000 UTC m=+0.184363292 container start 77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 08:11:07 compute-0 ceph-mon[74339]: pgmap v3552: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Dec 06 08:11:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:11:07 compute-0 ceph-mon[74339]: osdmap e415: 3 total, 3 up, 3 in
Dec 06 08:11:07 compute-0 neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787[392922]: [NOTICE]   (392926) : New worker (392928) forked
Dec 06 08:11:07 compute-0 neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787[392922]: [NOTICE]   (392926) : Loading success.
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.554 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.559 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.613 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.614 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008667.3752117, 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.614 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] VM Paused (Lifecycle Event)
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.619 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.620 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.620 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.621 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.621 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.621 251996 DEBUG nova.virt.libvirt.driver [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:11:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:07.634 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.634 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:07.635 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.701 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.704 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008667.3782008, 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.705 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] VM Resumed (Lifecycle Event)
Dec 06 08:11:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.739 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.741 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.748 251996 INFO nova.compute.manager [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Took 10.30 seconds to spawn the instance on the hypervisor.
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.748 251996 DEBUG nova.compute.manager [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.780 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.818 251996 INFO nova.compute.manager [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Took 11.20 seconds to build instance.
Dec 06 08:11:07 compute-0 nova_compute[251992]: 2025-12-06 08:11:07.843 251996 DEBUG oslo_concurrency.lockutils [None req-bb022b86-4a8e-4b58-8219-d845f0007709 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:07.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:08 compute-0 nova_compute[251992]: 2025-12-06 08:11:08.289 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2249248437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:08.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:11:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1874445790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:11:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:11:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1874445790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:11:09 compute-0 nova_compute[251992]: 2025-12-06 08:11:09.287 251996 DEBUG nova.compute.manager [req-3018cfff-63e5-47f6-a3d5-ee2cd7a24716 req-8777f045-5bcc-4fb6-a91b-ca413b6e1473 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Received event network-vif-plugged-db2dc841-9ffa-4ba0-9187-49705de50963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:11:09 compute-0 nova_compute[251992]: 2025-12-06 08:11:09.287 251996 DEBUG oslo_concurrency.lockutils [req-3018cfff-63e5-47f6-a3d5-ee2cd7a24716 req-8777f045-5bcc-4fb6-a91b-ca413b6e1473 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:09 compute-0 nova_compute[251992]: 2025-12-06 08:11:09.288 251996 DEBUG oslo_concurrency.lockutils [req-3018cfff-63e5-47f6-a3d5-ee2cd7a24716 req-8777f045-5bcc-4fb6-a91b-ca413b6e1473 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:09 compute-0 nova_compute[251992]: 2025-12-06 08:11:09.288 251996 DEBUG oslo_concurrency.lockutils [req-3018cfff-63e5-47f6-a3d5-ee2cd7a24716 req-8777f045-5bcc-4fb6-a91b-ca413b6e1473 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:09 compute-0 nova_compute[251992]: 2025-12-06 08:11:09.289 251996 DEBUG nova.compute.manager [req-3018cfff-63e5-47f6-a3d5-ee2cd7a24716 req-8777f045-5bcc-4fb6-a91b-ca413b6e1473 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] No waiting events found dispatching network-vif-plugged-db2dc841-9ffa-4ba0-9187-49705de50963 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:11:09 compute-0 nova_compute[251992]: 2025-12-06 08:11:09.289 251996 WARNING nova.compute.manager [req-3018cfff-63e5-47f6-a3d5-ee2cd7a24716 req-8777f045-5bcc-4fb6-a91b-ca413b6e1473 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Received unexpected event network-vif-plugged-db2dc841-9ffa-4ba0-9187-49705de50963 for instance with vm_state active and task_state None.
Dec 06 08:11:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:09 compute-0 ceph-mon[74339]: pgmap v3554: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Dec 06 08:11:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1874445790' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:11:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1874445790' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:11:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Dec 06 08:11:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:09.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:10 compute-0 nova_compute[251992]: 2025-12-06 08:11:10.489 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:10.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:10 compute-0 ceph-mon[74339]: pgmap v3555: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Dec 06 08:11:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 190 op/s
Dec 06 08:11:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:11.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:12 compute-0 sudo[392939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:12 compute-0 sudo[392939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:12 compute-0 sudo[392939]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:12 compute-0 sudo[392976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:12 compute-0 sudo[392976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:12 compute-0 sudo[392976]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:12 compute-0 podman[392963]: 2025-12-06 08:11:12.58223329 +0000 UTC m=+0.058320996 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:11:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:12.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:12 compute-0 podman[392964]: 2025-12-06 08:11:12.617966197 +0000 UTC m=+0.094166806 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:11:12 compute-0 ceph-mon[74339]: pgmap v3556: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 190 op/s
Dec 06 08:11:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:11:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:11:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:11:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:11:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:11:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:11:13 compute-0 nova_compute[251992]: 2025-12-06 08:11:13.292 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 190 op/s
Dec 06 08:11:13 compute-0 ceph-mgr[74630]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 08:11:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:13.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Dec 06 08:11:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Dec 06 08:11:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Dec 06 08:11:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:14.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:15 compute-0 ceph-mon[74339]: pgmap v3557: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 190 op/s
Dec 06 08:11:15 compute-0 ceph-mon[74339]: osdmap e416: 3 total, 3 up, 3 in
Dec 06 08:11:15 compute-0 nova_compute[251992]: 2025-12-06 08:11:15.494 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:15.637 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 34 KiB/s wr, 266 op/s
Dec 06 08:11:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:15.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:16 compute-0 nova_compute[251992]: 2025-12-06 08:11:16.462 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:16 compute-0 NetworkManager[48965]: <info>  [1765008676.4627] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Dec 06 08:11:16 compute-0 NetworkManager[48965]: <info>  [1765008676.4640] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Dec 06 08:11:16 compute-0 nova_compute[251992]: 2025-12-06 08:11:16.545 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:16 compute-0 ovn_controller[147168]: 2025-12-06T08:11:16Z|00756|binding|INFO|Releasing lport 87203587-ba9e-4dcf-9130-3f1e1a02f74f from this chassis (sb_readonly=0)
Dec 06 08:11:16 compute-0 nova_compute[251992]: 2025-12-06 08:11:16.554 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:16.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:17 compute-0 nova_compute[251992]: 2025-12-06 08:11:16.999 251996 DEBUG nova.compute.manager [req-47ad25af-278e-4223-b7fb-9a1cd13a46bf req-133aef6b-a478-4dac-a31a-6381c4e16e71 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Received event network-changed-db2dc841-9ffa-4ba0-9187-49705de50963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:11:17 compute-0 nova_compute[251992]: 2025-12-06 08:11:17.000 251996 DEBUG nova.compute.manager [req-47ad25af-278e-4223-b7fb-9a1cd13a46bf req-133aef6b-a478-4dac-a31a-6381c4e16e71 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Refreshing instance network info cache due to event network-changed-db2dc841-9ffa-4ba0-9187-49705de50963. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:11:17 compute-0 nova_compute[251992]: 2025-12-06 08:11:17.000 251996 DEBUG oslo_concurrency.lockutils [req-47ad25af-278e-4223-b7fb-9a1cd13a46bf req-133aef6b-a478-4dac-a31a-6381c4e16e71 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:11:17 compute-0 nova_compute[251992]: 2025-12-06 08:11:17.000 251996 DEBUG oslo_concurrency.lockutils [req-47ad25af-278e-4223-b7fb-9a1cd13a46bf req-133aef6b-a478-4dac-a31a-6381c4e16e71 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:11:17 compute-0 nova_compute[251992]: 2025-12-06 08:11:17.001 251996 DEBUG nova.network.neutron [req-47ad25af-278e-4223-b7fb-9a1cd13a46bf req-133aef6b-a478-4dac-a31a-6381c4e16e71 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Refreshing network info cache for port db2dc841-9ffa-4ba0-9187-49705de50963 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:11:17 compute-0 ceph-mon[74339]: pgmap v3559: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 34 KiB/s wr, 266 op/s
Dec 06 08:11:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 30 KiB/s wr, 261 op/s
Dec 06 08:11:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:17.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:18 compute-0 nova_compute[251992]: 2025-12-06 08:11:18.294 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:18 compute-0 nova_compute[251992]: 2025-12-06 08:11:18.317 251996 DEBUG nova.network.neutron [req-47ad25af-278e-4223-b7fb-9a1cd13a46bf req-133aef6b-a478-4dac-a31a-6381c4e16e71 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Updated VIF entry in instance network info cache for port db2dc841-9ffa-4ba0-9187-49705de50963. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:11:18 compute-0 nova_compute[251992]: 2025-12-06 08:11:18.318 251996 DEBUG nova.network.neutron [req-47ad25af-278e-4223-b7fb-9a1cd13a46bf req-133aef6b-a478-4dac-a31a-6381c4e16e71 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Updating instance_info_cache with network_info: [{"id": "db2dc841-9ffa-4ba0-9187-49705de50963", "address": "fa:16:3e:f3:4d:4f", "network": {"id": "45d279a8-df8a-401d-a171-8e7cd6ef2787", "bridge": "br-int", "label": "tempest-network-smoke--748535621", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2dc841-9f", "ovs_interfaceid": "db2dc841-9ffa-4ba0-9187-49705de50963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:11:18 compute-0 nova_compute[251992]: 2025-12-06 08:11:18.569 251996 DEBUG oslo_concurrency.lockutils [req-47ad25af-278e-4223-b7fb-9a1cd13a46bf req-133aef6b-a478-4dac-a31a-6381c4e16e71 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:11:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:18.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:11:18
Dec 06 08:11:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:11:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:11:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.rgw.root', 'images', '.mgr', 'volumes']
Dec 06 08:11:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:11:18 compute-0 ceph-mon[74339]: pgmap v3560: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 30 KiB/s wr, 261 op/s
Dec 06 08:11:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 30 KiB/s wr, 261 op/s
Dec 06 08:11:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:19.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:20 compute-0 nova_compute[251992]: 2025-12-06 08:11:20.497 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:20.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:20 compute-0 ceph-mon[74339]: pgmap v3561: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 30 KiB/s wr, 261 op/s
Dec 06 08:11:20 compute-0 ovn_controller[147168]: 2025-12-06T08:11:20Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:4d:4f 10.100.0.11
Dec 06 08:11:20 compute-0 ovn_controller[147168]: 2025-12-06T08:11:20Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:4d:4f 10.100.0.11
Dec 06 08:11:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 947 KiB/s rd, 2.5 MiB/s wr, 269 op/s
Dec 06 08:11:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:21.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:22.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:22 compute-0 ceph-mon[74339]: pgmap v3562: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 947 KiB/s rd, 2.5 MiB/s wr, 269 op/s
Dec 06 08:11:23 compute-0 nova_compute[251992]: 2025-12-06 08:11:23.297 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 947 KiB/s rd, 2.5 MiB/s wr, 269 op/s
Dec 06 08:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:11:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:11:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:23.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:24.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:25 compute-0 ceph-mon[74339]: pgmap v3563: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 947 KiB/s rd, 2.5 MiB/s wr, 269 op/s
Dec 06 08:11:25 compute-0 nova_compute[251992]: 2025-12-06 08:11:25.501 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 215 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1014 KiB/s rd, 2.3 MiB/s wr, 282 op/s
Dec 06 08:11:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:25.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002663236611297227 of space, bias 1.0, pg target 0.7989709833891682 quantized to 32 (current 32)
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:11:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:11:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:26.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:11:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:11:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:11:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:11:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:11:27 compute-0 ceph-mon[74339]: pgmap v3564: 305 pgs: 305 active+clean; 215 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1014 KiB/s rd, 2.3 MiB/s wr, 282 op/s
Dec 06 08:11:27 compute-0 nova_compute[251992]: 2025-12-06 08:11:27.576 251996 INFO nova.compute.manager [None req-b706dd25-3a27-4c74-be60-30acd94d38c3 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Get console output
Dec 06 08:11:27 compute-0 nova_compute[251992]: 2025-12-06 08:11:27.583 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:11:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 506 KiB/s rd, 2.2 MiB/s wr, 206 op/s
Dec 06 08:11:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:27.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:28 compute-0 nova_compute[251992]: 2025-12-06 08:11:28.300 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:28 compute-0 ovn_controller[147168]: 2025-12-06T08:11:28Z|00757|binding|INFO|Releasing lport 87203587-ba9e-4dcf-9130-3f1e1a02f74f from this chassis (sb_readonly=0)
Dec 06 08:11:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:28.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:28 compute-0 ceph-mon[74339]: pgmap v3565: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 506 KiB/s rd, 2.2 MiB/s wr, 206 op/s
Dec 06 08:11:28 compute-0 nova_compute[251992]: 2025-12-06 08:11:28.694 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 397 KiB/s rd, 2.2 MiB/s wr, 175 op/s
Dec 06 08:11:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:29.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:30 compute-0 nova_compute[251992]: 2025-12-06 08:11:30.504 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:30.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:30 compute-0 ceph-mon[74339]: pgmap v3566: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 397 KiB/s rd, 2.2 MiB/s wr, 175 op/s
Dec 06 08:11:30 compute-0 ovn_controller[147168]: 2025-12-06T08:11:30Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:4d:4f 10.100.0.11
Dec 06 08:11:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 400 KiB/s rd, 2.2 MiB/s wr, 176 op/s
Dec 06 08:11:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:31.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:32.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:32 compute-0 sudo[393042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:32 compute-0 sudo[393042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:32 compute-0 sudo[393042]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:32 compute-0 sudo[393067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:32 compute-0 sudo[393067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:32 compute-0 sudo[393067]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:33 compute-0 ceph-mon[74339]: pgmap v3567: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 400 KiB/s rd, 2.2 MiB/s wr, 176 op/s
Dec 06 08:11:33 compute-0 sshd-session[393040]: Invalid user sol from 80.94.92.182 port 37060
Dec 06 08:11:33 compute-0 nova_compute[251992]: 2025-12-06 08:11:33.303 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:33 compute-0 ovn_controller[147168]: 2025-12-06T08:11:33Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:4d:4f 10.100.0.11
Dec 06 08:11:33 compute-0 sshd-session[393040]: Connection closed by invalid user sol 80.94.92.182 port 37060 [preauth]
Dec 06 08:11:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 181 KiB/s rd, 65 KiB/s wr, 57 op/s
Dec 06 08:11:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:33.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:34.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:35 compute-0 ceph-mon[74339]: pgmap v3568: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 181 KiB/s rd, 65 KiB/s wr, 57 op/s
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.507 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.686 251996 DEBUG nova.compute.manager [req-c5abcd9c-236f-46c3-ba4d-c86c42fdaaa2 req-78a0bc99-36e1-4f41-af48-acb8d317c5dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Received event network-changed-db2dc841-9ffa-4ba0-9187-49705de50963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.687 251996 DEBUG nova.compute.manager [req-c5abcd9c-236f-46c3-ba4d-c86c42fdaaa2 req-78a0bc99-36e1-4f41-af48-acb8d317c5dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Refreshing instance network info cache due to event network-changed-db2dc841-9ffa-4ba0-9187-49705de50963. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.688 251996 DEBUG oslo_concurrency.lockutils [req-c5abcd9c-236f-46c3-ba4d-c86c42fdaaa2 req-78a0bc99-36e1-4f41-af48-acb8d317c5dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.688 251996 DEBUG oslo_concurrency.lockutils [req-c5abcd9c-236f-46c3-ba4d-c86c42fdaaa2 req-78a0bc99-36e1-4f41-af48-acb8d317c5dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.688 251996 DEBUG nova.network.neutron [req-c5abcd9c-236f-46c3-ba4d-c86c42fdaaa2 req-78a0bc99-36e1-4f41-af48-acb8d317c5dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Refreshing network info cache for port db2dc841-9ffa-4ba0-9187-49705de50963 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:11:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 181 KiB/s rd, 66 KiB/s wr, 57 op/s
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.871 251996 DEBUG oslo_concurrency.lockutils [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.871 251996 DEBUG oslo_concurrency.lockutils [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.871 251996 DEBUG oslo_concurrency.lockutils [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.872 251996 DEBUG oslo_concurrency.lockutils [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.872 251996 DEBUG oslo_concurrency.lockutils [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.873 251996 INFO nova.compute.manager [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Terminating instance
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.873 251996 DEBUG nova.compute.manager [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:11:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:35.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:35 compute-0 kernel: tapdb2dc841-9f (unregistering): left promiscuous mode
Dec 06 08:11:35 compute-0 NetworkManager[48965]: <info>  [1765008695.9373] device (tapdb2dc841-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:11:35 compute-0 ovn_controller[147168]: 2025-12-06T08:11:35Z|00758|binding|INFO|Releasing lport db2dc841-9ffa-4ba0-9187-49705de50963 from this chassis (sb_readonly=0)
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.947 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:35 compute-0 ovn_controller[147168]: 2025-12-06T08:11:35Z|00759|binding|INFO|Setting lport db2dc841-9ffa-4ba0-9187-49705de50963 down in Southbound
Dec 06 08:11:35 compute-0 ovn_controller[147168]: 2025-12-06T08:11:35Z|00760|binding|INFO|Removing iface tapdb2dc841-9f ovn-installed in OVS
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.951 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:35.957 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:4d:4f 10.100.0.11'], port_security=['fa:16:3e:f3:4d:4f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45d279a8-df8a-401d-a171-8e7cd6ef2787', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c1b22efd-e5dd-452e-874f-48c563426bca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df697692-73c6-4c24-8d1e-7c6596e7ba06, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=db2dc841-9ffa-4ba0-9187-49705de50963) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:11:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:35.970 158118 INFO neutron.agent.ovn.metadata.agent [-] Port db2dc841-9ffa-4ba0-9187-49705de50963 in datapath 45d279a8-df8a-401d-a171-8e7cd6ef2787 unbound from our chassis
Dec 06 08:11:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:35.971 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45d279a8-df8a-401d-a171-8e7cd6ef2787, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:11:35 compute-0 nova_compute[251992]: 2025-12-06 08:11:35.970 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:35.973 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1c58c15b-1676-40dc-9d05-047e0ff41209]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:35.973 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787 namespace which is not needed anymore
Dec 06 08:11:36 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c6.scope: Deactivated successfully.
Dec 06 08:11:36 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c6.scope: Consumed 14.512s CPU time.
Dec 06 08:11:36 compute-0 systemd-machined[212986]: Machine qemu-90-instance-000000c6 terminated.
Dec 06 08:11:36 compute-0 nova_compute[251992]: 2025-12-06 08:11:36.110 251996 INFO nova.virt.libvirt.driver [-] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Instance destroyed successfully.
Dec 06 08:11:36 compute-0 nova_compute[251992]: 2025-12-06 08:11:36.111 251996 DEBUG nova.objects.instance [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:11:36 compute-0 neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787[392922]: [NOTICE]   (392926) : haproxy version is 2.8.14-c23fe91
Dec 06 08:11:36 compute-0 neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787[392922]: [NOTICE]   (392926) : path to executable is /usr/sbin/haproxy
Dec 06 08:11:36 compute-0 neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787[392922]: [WARNING]  (392926) : Exiting Master process...
Dec 06 08:11:36 compute-0 neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787[392922]: [ALERT]    (392926) : Current worker (392928) exited with code 143 (Terminated)
Dec 06 08:11:36 compute-0 neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787[392922]: [WARNING]  (392926) : All workers exited. Exiting... (0)
Dec 06 08:11:36 compute-0 systemd[1]: libpod-77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc.scope: Deactivated successfully.
Dec 06 08:11:36 compute-0 podman[393118]: 2025-12-06 08:11:36.127512492 +0000 UTC m=+0.052922068 container died 77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 08:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-328f500448ba9bd010e53bec5c849f9f0a1cadaa819537ab2d81237bc7b7c3bb-merged.mount: Deactivated successfully.
Dec 06 08:11:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc-userdata-shm.mount: Deactivated successfully.
Dec 06 08:11:36 compute-0 podman[393118]: 2025-12-06 08:11:36.168174213 +0000 UTC m=+0.093583789 container cleanup 77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:11:36 compute-0 systemd[1]: libpod-conmon-77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc.scope: Deactivated successfully.
Dec 06 08:11:36 compute-0 podman[393161]: 2025-12-06 08:11:36.229779587 +0000 UTC m=+0.035252734 container remove 77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 08:11:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:36.234 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e0200e9f-9ed4-4a10-a9a2-5a49e2b35e77]: (4, ('Sat Dec  6 08:11:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787 (77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc)\n77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc\nSat Dec  6 08:11:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787 (77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc)\n77c8c16b5695c2344c1ee8f715ce947d82b2abfbf4cd6847e480ee28ffa92ddc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:36.236 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c5085843-7aa9-4042-ac4e-addb2a697bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:36.237 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45d279a8-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:36 compute-0 nova_compute[251992]: 2025-12-06 08:11:36.239 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:36 compute-0 kernel: tap45d279a8-d0: left promiscuous mode
Dec 06 08:11:36 compute-0 nova_compute[251992]: 2025-12-06 08:11:36.255 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:36.258 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7c5d767d-4743-4d25-8fb4-27234c0ee499]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:36.277 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[602dedfb-070f-4771-b51e-d3f9c7ce6a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:36.279 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7d08ba59-3211-423f-95fb-f12c2c8669b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:36.293 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[46a52aab-e9b2-4fba-a87e-2c4c77ea7f26]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 893928, 'reachable_time': 15034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393179, 'error': None, 'target': 'ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:36 compute-0 systemd[1]: run-netns-ovnmeta\x2d45d279a8\x2ddf8a\x2d401d\x2da171\x2d8e7cd6ef2787.mount: Deactivated successfully.
Dec 06 08:11:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:36.298 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45d279a8-df8a-401d-a171-8e7cd6ef2787 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:11:36 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:36.298 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2fc9b3-5de3-4ffb-a9f8-53cc9b519def]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:11:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:36.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:37 compute-0 ceph-mon[74339]: pgmap v3569: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 181 KiB/s rd, 66 KiB/s wr, 57 op/s
Dec 06 08:11:37 compute-0 podman[393181]: 2025-12-06 08:11:37.433373626 +0000 UTC m=+0.091490653 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 08:11:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 18 KiB/s wr, 15 op/s
Dec 06 08:11:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:37.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.248 251996 DEBUG nova.virt.libvirt.vif [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:10:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1516061334',display_name='tempest-TestNetworkBasicOps-server-1516061334',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1516061334',id=198,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN7zzAxFHv2BK6lDcH3TkL1dghIY1UDFXKAjdgrHB3jbZay3rOebcvKFFR16Dt5bAsikdU8GpHTMi6/2WJUM95RdVrtVRafvS8V7387L2oERD9LvmL0Pc0856TqRNAinkQ==',key_name='tempest-TestNetworkBasicOps-246940953',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:11:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-qsl2kvse',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:11:07Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db2dc841-9ffa-4ba0-9187-49705de50963", "address": "fa:16:3e:f3:4d:4f", "network": {"id": "45d279a8-df8a-401d-a171-8e7cd6ef2787", "bridge": "br-int", "label": "tempest-network-smoke--748535621", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2dc841-9f", "ovs_interfaceid": "db2dc841-9ffa-4ba0-9187-49705de50963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.249 251996 DEBUG nova.network.os_vif_util [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "db2dc841-9ffa-4ba0-9187-49705de50963", "address": "fa:16:3e:f3:4d:4f", "network": {"id": "45d279a8-df8a-401d-a171-8e7cd6ef2787", "bridge": "br-int", "label": "tempest-network-smoke--748535621", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2dc841-9f", "ovs_interfaceid": "db2dc841-9ffa-4ba0-9187-49705de50963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.250 251996 DEBUG nova.network.os_vif_util [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:4d:4f,bridge_name='br-int',has_traffic_filtering=True,id=db2dc841-9ffa-4ba0-9187-49705de50963,network=Network(45d279a8-df8a-401d-a171-8e7cd6ef2787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb2dc841-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.250 251996 DEBUG os_vif [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:4d:4f,bridge_name='br-int',has_traffic_filtering=True,id=db2dc841-9ffa-4ba0-9187-49705de50963,network=Network(45d279a8-df8a-401d-a171-8e7cd6ef2787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb2dc841-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.252 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.252 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb2dc841-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.254 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.256 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.259 251996 INFO os_vif [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:4d:4f,bridge_name='br-int',has_traffic_filtering=True,id=db2dc841-9ffa-4ba0-9187-49705de50963,network=Network(45d279a8-df8a-401d-a171-8e7cd6ef2787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb2dc841-9f')
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.285 251996 DEBUG nova.compute.manager [req-45d22d1e-33a0-431c-8a2d-525289be1a65 req-9a0e7b56-ffcf-42d6-ba1b-0a15b938aff5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Received event network-vif-unplugged-db2dc841-9ffa-4ba0-9187-49705de50963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.286 251996 DEBUG oslo_concurrency.lockutils [req-45d22d1e-33a0-431c-8a2d-525289be1a65 req-9a0e7b56-ffcf-42d6-ba1b-0a15b938aff5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.286 251996 DEBUG oslo_concurrency.lockutils [req-45d22d1e-33a0-431c-8a2d-525289be1a65 req-9a0e7b56-ffcf-42d6-ba1b-0a15b938aff5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.286 251996 DEBUG oslo_concurrency.lockutils [req-45d22d1e-33a0-431c-8a2d-525289be1a65 req-9a0e7b56-ffcf-42d6-ba1b-0a15b938aff5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.286 251996 DEBUG nova.compute.manager [req-45d22d1e-33a0-431c-8a2d-525289be1a65 req-9a0e7b56-ffcf-42d6-ba1b-0a15b938aff5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] No waiting events found dispatching network-vif-unplugged-db2dc841-9ffa-4ba0-9187-49705de50963 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.286 251996 DEBUG nova.compute.manager [req-45d22d1e-33a0-431c-8a2d-525289be1a65 req-9a0e7b56-ffcf-42d6-ba1b-0a15b938aff5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Received event network-vif-unplugged-db2dc841-9ffa-4ba0-9187-49705de50963 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.304 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:38.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.645 251996 INFO nova.virt.libvirt.driver [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Deleting instance files /var/lib/nova/instances/9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_del
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.646 251996 INFO nova.virt.libvirt.driver [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Deletion of /var/lib/nova/instances/9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7_del complete
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.724 251996 INFO nova.compute.manager [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Took 2.85 seconds to destroy the instance on the hypervisor.
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.725 251996 DEBUG oslo.service.loopingcall [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.725 251996 DEBUG nova.compute.manager [-] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:11:38 compute-0 nova_compute[251992]: 2025-12-06 08:11:38.725 251996 DEBUG nova.network.neutron [-] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:11:39 compute-0 nova_compute[251992]: 2025-12-06 08:11:39.196 251996 DEBUG nova.network.neutron [req-c5abcd9c-236f-46c3-ba4d-c86c42fdaaa2 req-78a0bc99-36e1-4f41-af48-acb8d317c5dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Updated VIF entry in instance network info cache for port db2dc841-9ffa-4ba0-9187-49705de50963. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:11:39 compute-0 nova_compute[251992]: 2025-12-06 08:11:39.197 251996 DEBUG nova.network.neutron [req-c5abcd9c-236f-46c3-ba4d-c86c42fdaaa2 req-78a0bc99-36e1-4f41-af48-acb8d317c5dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Updating instance_info_cache with network_info: [{"id": "db2dc841-9ffa-4ba0-9187-49705de50963", "address": "fa:16:3e:f3:4d:4f", "network": {"id": "45d279a8-df8a-401d-a171-8e7cd6ef2787", "bridge": "br-int", "label": "tempest-network-smoke--748535621", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb2dc841-9f", "ovs_interfaceid": "db2dc841-9ffa-4ba0-9187-49705de50963", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:11:39 compute-0 ceph-mon[74339]: pgmap v3570: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 18 KiB/s wr, 15 op/s
Dec 06 08:11:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2058364697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 6.7 KiB/s wr, 1 op/s
Dec 06 08:11:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:39.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3957424338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:40.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:41 compute-0 ceph-mon[74339]: pgmap v3571: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 6.7 KiB/s wr, 1 op/s
Dec 06 08:11:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 7.8 KiB/s wr, 28 op/s
Dec 06 08:11:41 compute-0 nova_compute[251992]: 2025-12-06 08:11:41.834 251996 DEBUG nova.compute.manager [req-51693e02-68ce-4d6a-b6bf-b5fd95940337 req-ecc55405-cecc-46eb-8ba5-e6edaed63fd4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Received event network-vif-plugged-db2dc841-9ffa-4ba0-9187-49705de50963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:11:41 compute-0 nova_compute[251992]: 2025-12-06 08:11:41.834 251996 DEBUG oslo_concurrency.lockutils [req-51693e02-68ce-4d6a-b6bf-b5fd95940337 req-ecc55405-cecc-46eb-8ba5-e6edaed63fd4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:41 compute-0 nova_compute[251992]: 2025-12-06 08:11:41.835 251996 DEBUG oslo_concurrency.lockutils [req-51693e02-68ce-4d6a-b6bf-b5fd95940337 req-ecc55405-cecc-46eb-8ba5-e6edaed63fd4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:41 compute-0 nova_compute[251992]: 2025-12-06 08:11:41.835 251996 DEBUG oslo_concurrency.lockutils [req-51693e02-68ce-4d6a-b6bf-b5fd95940337 req-ecc55405-cecc-46eb-8ba5-e6edaed63fd4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:41 compute-0 nova_compute[251992]: 2025-12-06 08:11:41.835 251996 DEBUG nova.compute.manager [req-51693e02-68ce-4d6a-b6bf-b5fd95940337 req-ecc55405-cecc-46eb-8ba5-e6edaed63fd4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] No waiting events found dispatching network-vif-plugged-db2dc841-9ffa-4ba0-9187-49705de50963 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:11:41 compute-0 nova_compute[251992]: 2025-12-06 08:11:41.835 251996 WARNING nova.compute.manager [req-51693e02-68ce-4d6a-b6bf-b5fd95940337 req-ecc55405-cecc-46eb-8ba5-e6edaed63fd4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Received unexpected event network-vif-plugged-db2dc841-9ffa-4ba0-9187-49705de50963 for instance with vm_state active and task_state deleting.
Dec 06 08:11:41 compute-0 nova_compute[251992]: 2025-12-06 08:11:41.853 251996 DEBUG oslo_concurrency.lockutils [req-c5abcd9c-236f-46c3-ba4d-c86c42fdaaa2 req-78a0bc99-36e1-4f41-af48-acb8d317c5dd 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:11:41 compute-0 nova_compute[251992]: 2025-12-06 08:11:41.889 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:41.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:42 compute-0 nova_compute[251992]: 2025-12-06 08:11:42.334 251996 DEBUG nova.network.neutron [-] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:11:42 compute-0 nova_compute[251992]: 2025-12-06 08:11:42.343 251996 DEBUG nova.compute.manager [req-a2e1da89-e9e9-4493-9b4a-4fa9b52cdbd2 req-f96fe6e2-695b-45c2-ab08-9510aa3120bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Received event network-vif-deleted-db2dc841-9ffa-4ba0-9187-49705de50963 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:11:42 compute-0 nova_compute[251992]: 2025-12-06 08:11:42.344 251996 INFO nova.compute.manager [req-a2e1da89-e9e9-4493-9b4a-4fa9b52cdbd2 req-f96fe6e2-695b-45c2-ab08-9510aa3120bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Neutron deleted interface db2dc841-9ffa-4ba0-9187-49705de50963; detaching it from the instance and deleting it from the info cache
Dec 06 08:11:42 compute-0 nova_compute[251992]: 2025-12-06 08:11:42.344 251996 DEBUG nova.network.neutron [req-a2e1da89-e9e9-4493-9b4a-4fa9b52cdbd2 req-f96fe6e2-695b-45c2-ab08-9510aa3120bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:11:42 compute-0 nova_compute[251992]: 2025-12-06 08:11:42.407 251996 INFO nova.compute.manager [-] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Took 3.68 seconds to deallocate network for instance.
Dec 06 08:11:42 compute-0 nova_compute[251992]: 2025-12-06 08:11:42.413 251996 DEBUG nova.compute.manager [req-a2e1da89-e9e9-4493-9b4a-4fa9b52cdbd2 req-f96fe6e2-695b-45c2-ab08-9510aa3120bb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Detach interface failed, port_id=db2dc841-9ffa-4ba0-9187-49705de50963, reason: Instance 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 06 08:11:42 compute-0 nova_compute[251992]: 2025-12-06 08:11:42.478 251996 DEBUG oslo_concurrency.lockutils [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:42 compute-0 nova_compute[251992]: 2025-12-06 08:11:42.479 251996 DEBUG oslo_concurrency.lockutils [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:42.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:42 compute-0 nova_compute[251992]: 2025-12-06 08:11:42.674 251996 DEBUG oslo_concurrency.processutils [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:11:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:11:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:11:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:11:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:11:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:11:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:11:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:11:43 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857437547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:43 compute-0 nova_compute[251992]: 2025-12-06 08:11:43.096 251996 DEBUG oslo_concurrency.processutils [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:11:43 compute-0 nova_compute[251992]: 2025-12-06 08:11:43.102 251996 DEBUG nova.compute.provider_tree [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:11:43 compute-0 nova_compute[251992]: 2025-12-06 08:11:43.125 251996 DEBUG nova.scheduler.client.report [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:11:43 compute-0 nova_compute[251992]: 2025-12-06 08:11:43.178 251996 DEBUG oslo_concurrency.lockutils [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:43 compute-0 nova_compute[251992]: 2025-12-06 08:11:43.210 251996 INFO nova.scheduler.client.report [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7
Dec 06 08:11:43 compute-0 nova_compute[251992]: 2025-12-06 08:11:43.256 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:43 compute-0 nova_compute[251992]: 2025-12-06 08:11:43.308 251996 DEBUG oslo_concurrency.lockutils [None req-cc52d94c-95b8-4a10-98ae-323372a9ee4a d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:43 compute-0 nova_compute[251992]: 2025-12-06 08:11:43.309 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:43 compute-0 ceph-mon[74339]: pgmap v3572: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 7.8 KiB/s wr, 28 op/s
Dec 06 08:11:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3857437547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:43 compute-0 podman[393252]: 2025-12-06 08:11:43.394367236 +0000 UTC m=+0.052053684 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 06 08:11:43 compute-0 podman[393251]: 2025-12-06 08:11:43.418867356 +0000 UTC m=+0.078207619 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 06 08:11:43 compute-0 nova_compute[251992]: 2025-12-06 08:11:43.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Dec 06 08:11:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:43.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:44.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:44 compute-0 ceph-mon[74339]: pgmap v3573: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Dec 06 08:11:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Dec 06 08:11:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:45.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:46.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:46 compute-0 ceph-mon[74339]: pgmap v3574: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 27 op/s
Dec 06 08:11:46 compute-0 nova_compute[251992]: 2025-12-06 08:11:46.983 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:46 compute-0 nova_compute[251992]: 2025-12-06 08:11:46.984 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:46 compute-0 nova_compute[251992]: 2025-12-06 08:11:46.984 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:46 compute-0 nova_compute[251992]: 2025-12-06 08:11:46.984 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:11:46 compute-0 nova_compute[251992]: 2025-12-06 08:11:46.984 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:11:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:11:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640659210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:47 compute-0 nova_compute[251992]: 2025-12-06 08:11:47.459 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:11:47 compute-0 nova_compute[251992]: 2025-12-06 08:11:47.635 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:11:47 compute-0 nova_compute[251992]: 2025-12-06 08:11:47.636 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4145MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:11:47 compute-0 nova_compute[251992]: 2025-12-06 08:11:47.636 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:11:47 compute-0 nova_compute[251992]: 2025-12-06 08:11:47.636 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:11:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Dec 06 08:11:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:47.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/640659210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:48 compute-0 nova_compute[251992]: 2025-12-06 08:11:48.261 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:48 compute-0 nova_compute[251992]: 2025-12-06 08:11:48.307 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:48 compute-0 nova_compute[251992]: 2025-12-06 08:11:48.330 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:11:48 compute-0 nova_compute[251992]: 2025-12-06 08:11:48.330 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:11:48 compute-0 nova_compute[251992]: 2025-12-06 08:11:48.359 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:11:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:48.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:11:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1897870452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:48 compute-0 nova_compute[251992]: 2025-12-06 08:11:48.784 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:11:48 compute-0 nova_compute[251992]: 2025-12-06 08:11:48.791 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:11:48 compute-0 nova_compute[251992]: 2025-12-06 08:11:48.817 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:11:48 compute-0 nova_compute[251992]: 2025-12-06 08:11:48.993 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:48.993 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:11:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:48.995 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:11:48 compute-0 ceph-mon[74339]: pgmap v3575: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Dec 06 08:11:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1897870452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:48 compute-0 nova_compute[251992]: 2025-12-06 08:11:48.998 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:11:48 compute-0 nova_compute[251992]: 2025-12-06 08:11:48.999 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:11:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:11:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:49.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:49 compute-0 nova_compute[251992]: 2025-12-06 08:11:49.993 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:49 compute-0 nova_compute[251992]: 2025-12-06 08:11:49.993 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:50.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:51 compute-0 nova_compute[251992]: 2025-12-06 08:11:51.108 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008696.1068938, 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:11:51 compute-0 nova_compute[251992]: 2025-12-06 08:11:51.108 251996 INFO nova.compute.manager [-] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] VM Stopped (Lifecycle Event)
Dec 06 08:11:51 compute-0 nova_compute[251992]: 2025-12-06 08:11:51.276 251996 DEBUG nova.compute.manager [None req-c2f25a3e-fc5f-4a6d-ac95-ac6c7ee92ec3 - - - - - -] [instance: 9f3ff9fd-f4a2-478d-86ae-b6d3c679b7d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:11:51 compute-0 ceph-mon[74339]: pgmap v3576: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:11:51 compute-0 nova_compute[251992]: 2025-12-06 08:11:51.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:51 compute-0 nova_compute[251992]: 2025-12-06 08:11:51.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:11:51 compute-0 nova_compute[251992]: 2025-12-06 08:11:51.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:11:51 compute-0 nova_compute[251992]: 2025-12-06 08:11:51.726 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:11:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:11:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:51.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:52.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:52 compute-0 sudo[393341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:52 compute-0 sudo[393341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:52 compute-0 sudo[393341]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:52 compute-0 sudo[393366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:11:52 compute-0 sudo[393366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:11:52 compute-0 sudo[393366]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:53 compute-0 nova_compute[251992]: 2025-12-06 08:11:53.265 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:53 compute-0 nova_compute[251992]: 2025-12-06 08:11:53.310 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:53 compute-0 ceph-mon[74339]: pgmap v3577: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:11:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3257912044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:11:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:53.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:55 compute-0 ceph-mon[74339]: pgmap v3578: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:11:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:11:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:55.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3596922978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:11:56 compute-0 nova_compute[251992]: 2025-12-06 08:11:56.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:56 compute-0 nova_compute[251992]: 2025-12-06 08:11:56.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:56.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:57 compute-0 nova_compute[251992]: 2025-12-06 08:11:57.287 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:57 compute-0 ceph-mon[74339]: pgmap v3579: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:11:57 compute-0 nova_compute[251992]: 2025-12-06 08:11:57.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:11:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:11:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:11:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:57.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:11:58 compute-0 nova_compute[251992]: 2025-12-06 08:11:58.267 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:58 compute-0 nova_compute[251992]: 2025-12-06 08:11:58.312 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:11:58 compute-0 ceph-mon[74339]: pgmap v3580: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:11:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:11:58.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:11:58 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:11:58.998 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:11:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:11:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:11:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:11:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:11:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:11:59.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:00.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:00 compute-0 ceph-mon[74339]: pgmap v3581: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:01 compute-0 nova_compute[251992]: 2025-12-06 08:12:01.046 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:01 compute-0 nova_compute[251992]: 2025-12-06 08:12:01.165 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:01 compute-0 nova_compute[251992]: 2025-12-06 08:12:01.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:01.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:02 compute-0 nova_compute[251992]: 2025-12-06 08:12:02.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:02 compute-0 nova_compute[251992]: 2025-12-06 08:12:02.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:02 compute-0 nova_compute[251992]: 2025-12-06 08:12:02.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:12:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:02.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:03 compute-0 ceph-mon[74339]: pgmap v3582: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:03 compute-0 nova_compute[251992]: 2025-12-06 08:12:03.270 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:03 compute-0 nova_compute[251992]: 2025-12-06 08:12:03.314 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:03.888 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:03.888 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:03.889 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:03.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:04.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:05 compute-0 ceph-mon[74339]: pgmap v3583: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:05.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:06.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:06 compute-0 sudo[393399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:06 compute-0 sudo[393399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:06 compute-0 sudo[393399]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:07 compute-0 sudo[393424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:12:07 compute-0 sudo[393424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:07 compute-0 sudo[393424]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:07 compute-0 sudo[393449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:07 compute-0 sudo[393449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:07 compute-0 sudo[393449]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:07 compute-0 sudo[393475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:12:07 compute-0 sudo[393475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:07 compute-0 ceph-mon[74339]: pgmap v3584: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:12:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2454514231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:07 compute-0 sudo[393475]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:12:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:12:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:12:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:12:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:12:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:12:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 32446538-1c06-40ce-89cf-e67e53dcb0e4 does not exist
Dec 06 08:12:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ba667f25-4ec1-425e-8ff4-62e4588e95a4 does not exist
Dec 06 08:12:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 73210fa2-c1bb-4a86-afd4-308564fda96f does not exist
Dec 06 08:12:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:12:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:12:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:12:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:12:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:12:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:12:07 compute-0 sudo[393532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:07.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:07 compute-0 sudo[393532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:07 compute-0 sudo[393532]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:08 compute-0 sudo[393558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:12:08 compute-0 sudo[393558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:08 compute-0 sudo[393558]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:08 compute-0 podman[393556]: 2025-12-06 08:12:08.068746386 +0000 UTC m=+0.096238773 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 06 08:12:08 compute-0 sudo[393600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:08 compute-0 sudo[393600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:08 compute-0 sudo[393600]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:08 compute-0 sudo[393633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:12:08 compute-0 sudo[393633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:08 compute-0 nova_compute[251992]: 2025-12-06 08:12:08.272 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:08 compute-0 nova_compute[251992]: 2025-12-06 08:12:08.314 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:08 compute-0 podman[393697]: 2025-12-06 08:12:08.433275543 +0000 UTC m=+0.040311784 container create 9aa8913167d48c95442033b7b3612f138c099b2b75bd74a03a87de94c92d4285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 08:12:08 compute-0 systemd[1]: Started libpod-conmon-9aa8913167d48c95442033b7b3612f138c099b2b75bd74a03a87de94c92d4285.scope.
Dec 06 08:12:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:12:08 compute-0 podman[393697]: 2025-12-06 08:12:08.500291395 +0000 UTC m=+0.107327646 container init 9aa8913167d48c95442033b7b3612f138c099b2b75bd74a03a87de94c92d4285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 08:12:08 compute-0 podman[393697]: 2025-12-06 08:12:08.506290589 +0000 UTC m=+0.113326820 container start 9aa8913167d48c95442033b7b3612f138c099b2b75bd74a03a87de94c92d4285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:12:08 compute-0 podman[393697]: 2025-12-06 08:12:08.413656686 +0000 UTC m=+0.020692927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:12:08 compute-0 podman[393697]: 2025-12-06 08:12:08.509344692 +0000 UTC m=+0.116381003 container attach 9aa8913167d48c95442033b7b3612f138c099b2b75bd74a03a87de94c92d4285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 08:12:08 compute-0 pensive_goldstine[393713]: 167 167
Dec 06 08:12:08 compute-0 systemd[1]: libpod-9aa8913167d48c95442033b7b3612f138c099b2b75bd74a03a87de94c92d4285.scope: Deactivated successfully.
Dec 06 08:12:08 compute-0 podman[393697]: 2025-12-06 08:12:08.512620672 +0000 UTC m=+0.119656903 container died 9aa8913167d48c95442033b7b3612f138c099b2b75bd74a03a87de94c92d4285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4d114c09dd6eda7f07179ff8a3fc305e2e605c3d38ad3f432fffb1fded42ead-merged.mount: Deactivated successfully.
Dec 06 08:12:08 compute-0 podman[393697]: 2025-12-06 08:12:08.544646528 +0000 UTC m=+0.151682759 container remove 9aa8913167d48c95442033b7b3612f138c099b2b75bd74a03a87de94c92d4285 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goldstine, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:12:08 compute-0 systemd[1]: libpod-conmon-9aa8913167d48c95442033b7b3612f138c099b2b75bd74a03a87de94c92d4285.scope: Deactivated successfully.
Dec 06 08:12:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:08.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:08 compute-0 podman[393739]: 2025-12-06 08:12:08.697478526 +0000 UTC m=+0.045139605 container create 62a62159806adf1767409fa337f3247102cc81ae56da6360d7ec125009cd8cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 08:12:08 compute-0 systemd[1]: Started libpod-conmon-62a62159806adf1767409fa337f3247102cc81ae56da6360d7ec125009cd8cdc.scope.
Dec 06 08:12:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d30e262dc6b51b731ab620af07d2233f35d5baf92ca56c8abf276e9aa4a32c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d30e262dc6b51b731ab620af07d2233f35d5baf92ca56c8abf276e9aa4a32c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d30e262dc6b51b731ab620af07d2233f35d5baf92ca56c8abf276e9aa4a32c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d30e262dc6b51b731ab620af07d2233f35d5baf92ca56c8abf276e9aa4a32c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d30e262dc6b51b731ab620af07d2233f35d5baf92ca56c8abf276e9aa4a32c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:08 compute-0 podman[393739]: 2025-12-06 08:12:08.676162864 +0000 UTC m=+0.023823963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:12:08 compute-0 podman[393739]: 2025-12-06 08:12:08.773628128 +0000 UTC m=+0.121289237 container init 62a62159806adf1767409fa337f3247102cc81ae56da6360d7ec125009cd8cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:12:08 compute-0 podman[393739]: 2025-12-06 08:12:08.780148237 +0000 UTC m=+0.127809316 container start 62a62159806adf1767409fa337f3247102cc81ae56da6360d7ec125009cd8cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:12:08 compute-0 podman[393739]: 2025-12-06 08:12:08.783479308 +0000 UTC m=+0.131140397 container attach 62a62159806adf1767409fa337f3247102cc81ae56da6360d7ec125009cd8cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 08:12:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2454514231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:12:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:12:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:12:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:12:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:12:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:12:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:12:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3619985939' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:12:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:12:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3619985939' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:12:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:09 compute-0 nice_shtern[393755]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:12:09 compute-0 nice_shtern[393755]: --> relative data size: 1.0
Dec 06 08:12:09 compute-0 nice_shtern[393755]: --> All data devices are unavailable
Dec 06 08:12:09 compute-0 systemd[1]: libpod-62a62159806adf1767409fa337f3247102cc81ae56da6360d7ec125009cd8cdc.scope: Deactivated successfully.
Dec 06 08:12:09 compute-0 podman[393739]: 2025-12-06 08:12:09.634816435 +0000 UTC m=+0.982477534 container died 62a62159806adf1767409fa337f3247102cc81ae56da6360d7ec125009cd8cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-93d30e262dc6b51b731ab620af07d2233f35d5baf92ca56c8abf276e9aa4a32c-merged.mount: Deactivated successfully.
Dec 06 08:12:09 compute-0 podman[393739]: 2025-12-06 08:12:09.69060231 +0000 UTC m=+1.038263389 container remove 62a62159806adf1767409fa337f3247102cc81ae56da6360d7ec125009cd8cdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 08:12:09 compute-0 systemd[1]: libpod-conmon-62a62159806adf1767409fa337f3247102cc81ae56da6360d7ec125009cd8cdc.scope: Deactivated successfully.
Dec 06 08:12:09 compute-0 sudo[393633]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:09 compute-0 sudo[393783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:09 compute-0 sudo[393783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:09 compute-0 sudo[393783]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:09 compute-0 ceph-mon[74339]: pgmap v3585: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3619985939' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:12:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3619985939' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:12:09 compute-0 sudo[393808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:12:09 compute-0 sudo[393808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:09 compute-0 sudo[393808]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:09 compute-0 sudo[393833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:09 compute-0 sudo[393833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:09 compute-0 sudo[393833]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:09 compute-0 sudo[393858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:12:09 compute-0 sudo[393858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:09.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:10 compute-0 podman[393924]: 2025-12-06 08:12:10.254862257 +0000 UTC m=+0.043623103 container create 74d981d7457909b793824584fa74332db5709aa123dfc3f3862c75132a66829f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:12:10 compute-0 systemd[1]: Started libpod-conmon-74d981d7457909b793824584fa74332db5709aa123dfc3f3862c75132a66829f.scope.
Dec 06 08:12:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:12:10 compute-0 podman[393924]: 2025-12-06 08:12:10.329134148 +0000 UTC m=+0.117895004 container init 74d981d7457909b793824584fa74332db5709aa123dfc3f3862c75132a66829f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 08:12:10 compute-0 podman[393924]: 2025-12-06 08:12:10.236718252 +0000 UTC m=+0.025479128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:12:10 compute-0 podman[393924]: 2025-12-06 08:12:10.33577618 +0000 UTC m=+0.124537026 container start 74d981d7457909b793824584fa74332db5709aa123dfc3f3862c75132a66829f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_maxwell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:12:10 compute-0 podman[393924]: 2025-12-06 08:12:10.339142421 +0000 UTC m=+0.127903287 container attach 74d981d7457909b793824584fa74332db5709aa123dfc3f3862c75132a66829f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_maxwell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 08:12:10 compute-0 optimistic_maxwell[393940]: 167 167
Dec 06 08:12:10 compute-0 systemd[1]: libpod-74d981d7457909b793824584fa74332db5709aa123dfc3f3862c75132a66829f.scope: Deactivated successfully.
Dec 06 08:12:10 compute-0 podman[393924]: 2025-12-06 08:12:10.340963161 +0000 UTC m=+0.129724007 container died 74d981d7457909b793824584fa74332db5709aa123dfc3f3862c75132a66829f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_maxwell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1799d01048799057d856485f1fd27cc7f6a2183e4575bbcacdd634032867925c-merged.mount: Deactivated successfully.
Dec 06 08:12:10 compute-0 podman[393924]: 2025-12-06 08:12:10.37565865 +0000 UTC m=+0.164419496 container remove 74d981d7457909b793824584fa74332db5709aa123dfc3f3862c75132a66829f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:12:10 compute-0 systemd[1]: libpod-conmon-74d981d7457909b793824584fa74332db5709aa123dfc3f3862c75132a66829f.scope: Deactivated successfully.
Dec 06 08:12:10 compute-0 podman[393964]: 2025-12-06 08:12:10.552033792 +0000 UTC m=+0.039440649 container create b10224ea6551f8e0f8c0da5e330cbda39fcbb2e8ea9865bc4a61e80eebf2ee45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 08:12:10 compute-0 systemd[1]: Started libpod-conmon-b10224ea6551f8e0f8c0da5e330cbda39fcbb2e8ea9865bc4a61e80eebf2ee45.scope.
Dec 06 08:12:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7202466653f9bdc2ed1c8882dae59f19169a0cce85511737b6b9fe2ed99bcd64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:10 compute-0 podman[393964]: 2025-12-06 08:12:10.535725506 +0000 UTC m=+0.023132363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7202466653f9bdc2ed1c8882dae59f19169a0cce85511737b6b9fe2ed99bcd64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7202466653f9bdc2ed1c8882dae59f19169a0cce85511737b6b9fe2ed99bcd64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7202466653f9bdc2ed1c8882dae59f19169a0cce85511737b6b9fe2ed99bcd64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:10 compute-0 podman[393964]: 2025-12-06 08:12:10.643310338 +0000 UTC m=+0.130717215 container init b10224ea6551f8e0f8c0da5e330cbda39fcbb2e8ea9865bc4a61e80eebf2ee45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_feynman, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 08:12:10 compute-0 podman[393964]: 2025-12-06 08:12:10.656779877 +0000 UTC m=+0.144186734 container start b10224ea6551f8e0f8c0da5e330cbda39fcbb2e8ea9865bc4a61e80eebf2ee45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_feynman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 08:12:10 compute-0 podman[393964]: 2025-12-06 08:12:10.660688983 +0000 UTC m=+0.148095840 container attach b10224ea6551f8e0f8c0da5e330cbda39fcbb2e8ea9865bc4a61e80eebf2ee45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_feynman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 08:12:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:10.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:10 compute-0 ceph-mon[74339]: pgmap v3586: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]: {
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:     "0": [
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:         {
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             "devices": [
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "/dev/loop3"
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             ],
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             "lv_name": "ceph_lv0",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             "lv_size": "7511998464",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             "name": "ceph_lv0",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             "tags": {
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "ceph.cluster_name": "ceph",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "ceph.crush_device_class": "",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "ceph.encrypted": "0",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "ceph.osd_id": "0",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "ceph.type": "block",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:                 "ceph.vdo": "0"
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             },
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             "type": "block",
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:             "vg_name": "ceph_vg0"
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:         }
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]:     ]
Dec 06 08:12:11 compute-0 optimistic_feynman[393980]: }
Dec 06 08:12:11 compute-0 systemd[1]: libpod-b10224ea6551f8e0f8c0da5e330cbda39fcbb2e8ea9865bc4a61e80eebf2ee45.scope: Deactivated successfully.
Dec 06 08:12:11 compute-0 podman[393964]: 2025-12-06 08:12:11.482450323 +0000 UTC m=+0.969857180 container died b10224ea6551f8e0f8c0da5e330cbda39fcbb2e8ea9865bc4a61e80eebf2ee45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:12:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7202466653f9bdc2ed1c8882dae59f19169a0cce85511737b6b9fe2ed99bcd64-merged.mount: Deactivated successfully.
Dec 06 08:12:11 compute-0 podman[393964]: 2025-12-06 08:12:11.541837927 +0000 UTC m=+1.029244784 container remove b10224ea6551f8e0f8c0da5e330cbda39fcbb2e8ea9865bc4a61e80eebf2ee45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:12:11 compute-0 systemd[1]: libpod-conmon-b10224ea6551f8e0f8c0da5e330cbda39fcbb2e8ea9865bc4a61e80eebf2ee45.scope: Deactivated successfully.
Dec 06 08:12:11 compute-0 sudo[393858]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:11 compute-0 sudo[394001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:11 compute-0 sudo[394001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:11 compute-0 sudo[394001]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:11 compute-0 sudo[394026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:12:11 compute-0 sudo[394026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:11 compute-0 sudo[394026]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:11 compute-0 sudo[394051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:11 compute-0 sudo[394051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:11 compute-0 sudo[394051]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:11 compute-0 sudo[394076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:12:11 compute-0 sudo[394076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:11.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:12 compute-0 podman[394140]: 2025-12-06 08:12:12.128509294 +0000 UTC m=+0.037697045 container create 9fbf87652267fb5531e7eed7cea4eb17fde0741f02c42c4d9ac54ccdd1bdc8db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:12:12 compute-0 systemd[1]: Started libpod-conmon-9fbf87652267fb5531e7eed7cea4eb17fde0741f02c42c4d9ac54ccdd1bdc8db.scope.
Dec 06 08:12:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:12:12 compute-0 podman[394140]: 2025-12-06 08:12:12.192997796 +0000 UTC m=+0.102185567 container init 9fbf87652267fb5531e7eed7cea4eb17fde0741f02c42c4d9ac54ccdd1bdc8db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:12:12 compute-0 podman[394140]: 2025-12-06 08:12:12.200317615 +0000 UTC m=+0.109505366 container start 9fbf87652267fb5531e7eed7cea4eb17fde0741f02c42c4d9ac54ccdd1bdc8db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dewdney, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 08:12:12 compute-0 podman[394140]: 2025-12-06 08:12:12.203302456 +0000 UTC m=+0.112490217 container attach 9fbf87652267fb5531e7eed7cea4eb17fde0741f02c42c4d9ac54ccdd1bdc8db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:12:12 compute-0 ecstatic_dewdney[394156]: 167 167
Dec 06 08:12:12 compute-0 systemd[1]: libpod-9fbf87652267fb5531e7eed7cea4eb17fde0741f02c42c4d9ac54ccdd1bdc8db.scope: Deactivated successfully.
Dec 06 08:12:12 compute-0 conmon[394156]: conmon 9fbf87652267fb5531e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9fbf87652267fb5531e7eed7cea4eb17fde0741f02c42c4d9ac54ccdd1bdc8db.scope/container/memory.events
Dec 06 08:12:12 compute-0 podman[394140]: 2025-12-06 08:12:12.205480935 +0000 UTC m=+0.114668716 container died 9fbf87652267fb5531e7eed7cea4eb17fde0741f02c42c4d9ac54ccdd1bdc8db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dewdney, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:12:12 compute-0 podman[394140]: 2025-12-06 08:12:12.111676397 +0000 UTC m=+0.020864168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:12:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-66c6700612cc723fe33df134f5540e11ee810467a626548e5e54564b7a825564-merged.mount: Deactivated successfully.
Dec 06 08:12:12 compute-0 podman[394140]: 2025-12-06 08:12:12.239530381 +0000 UTC m=+0.148718132 container remove 9fbf87652267fb5531e7eed7cea4eb17fde0741f02c42c4d9ac54ccdd1bdc8db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 08:12:12 compute-0 systemd[1]: libpod-conmon-9fbf87652267fb5531e7eed7cea4eb17fde0741f02c42c4d9ac54ccdd1bdc8db.scope: Deactivated successfully.
Dec 06 08:12:12 compute-0 podman[394178]: 2025-12-06 08:12:12.438222577 +0000 UTC m=+0.053589996 container create 07c87794fad0465eee1b8f1c938412f8d6166779626d8cdeb2cd2b82d35e5e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:12:12 compute-0 systemd[1]: Started libpod-conmon-07c87794fad0465eee1b8f1c938412f8d6166779626d8cdeb2cd2b82d35e5e5e.scope.
Dec 06 08:12:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06930508747daa3f19b168a3801d34dabd579747bef4341e81924fa9b1a8e7b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06930508747daa3f19b168a3801d34dabd579747bef4341e81924fa9b1a8e7b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06930508747daa3f19b168a3801d34dabd579747bef4341e81924fa9b1a8e7b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06930508747daa3f19b168a3801d34dabd579747bef4341e81924fa9b1a8e7b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:12 compute-0 podman[394178]: 2025-12-06 08:12:12.495483844 +0000 UTC m=+0.110851273 container init 07c87794fad0465eee1b8f1c938412f8d6166779626d8cdeb2cd2b82d35e5e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 08:12:12 compute-0 podman[394178]: 2025-12-06 08:12:12.502443512 +0000 UTC m=+0.117810931 container start 07c87794fad0465eee1b8f1c938412f8d6166779626d8cdeb2cd2b82d35e5e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 08:12:12 compute-0 podman[394178]: 2025-12-06 08:12:12.505225058 +0000 UTC m=+0.120592477 container attach 07c87794fad0465eee1b8f1c938412f8d6166779626d8cdeb2cd2b82d35e5e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 08:12:12 compute-0 podman[394178]: 2025-12-06 08:12:12.414184805 +0000 UTC m=+0.029552304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:12:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:12.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:12 compute-0 sudo[394199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:12 compute-0 sudo[394199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:12 compute-0 sudo[394199]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:12 compute-0 ceph-mon[74339]: pgmap v3587: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:13 compute-0 sudo[394224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:13 compute-0 sudo[394224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:13 compute-0 sudo[394224]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:12:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:12:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:12:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:12:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:12:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:12:13 compute-0 nova_compute[251992]: 2025-12-06 08:12:13.276 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:13 compute-0 nova_compute[251992]: 2025-12-06 08:12:13.317 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:13 compute-0 pensive_lewin[394194]: {
Dec 06 08:12:13 compute-0 pensive_lewin[394194]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:12:13 compute-0 pensive_lewin[394194]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:12:13 compute-0 pensive_lewin[394194]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:12:13 compute-0 pensive_lewin[394194]:         "osd_id": 0,
Dec 06 08:12:13 compute-0 pensive_lewin[394194]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:12:13 compute-0 pensive_lewin[394194]:         "type": "bluestore"
Dec 06 08:12:13 compute-0 pensive_lewin[394194]:     }
Dec 06 08:12:13 compute-0 pensive_lewin[394194]: }
Dec 06 08:12:13 compute-0 systemd[1]: libpod-07c87794fad0465eee1b8f1c938412f8d6166779626d8cdeb2cd2b82d35e5e5e.scope: Deactivated successfully.
Dec 06 08:12:13 compute-0 podman[394178]: 2025-12-06 08:12:13.404823237 +0000 UTC m=+1.020190656 container died 07c87794fad0465eee1b8f1c938412f8d6166779626d8cdeb2cd2b82d35e5e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:12:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-06930508747daa3f19b168a3801d34dabd579747bef4341e81924fa9b1a8e7b1-merged.mount: Deactivated successfully.
Dec 06 08:12:13 compute-0 podman[394178]: 2025-12-06 08:12:13.461889257 +0000 UTC m=+1.077256686 container remove 07c87794fad0465eee1b8f1c938412f8d6166779626d8cdeb2cd2b82d35e5e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 08:12:13 compute-0 systemd[1]: libpod-conmon-07c87794fad0465eee1b8f1c938412f8d6166779626d8cdeb2cd2b82d35e5e5e.scope: Deactivated successfully.
Dec 06 08:12:13 compute-0 sudo[394076]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:12:13 compute-0 podman[394267]: 2025-12-06 08:12:13.520360036 +0000 UTC m=+0.087635003 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:12:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:12:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:12:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:12:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e1ef4fcb-30d3-4e19-8634-c6cddabc5542 does not exist
Dec 06 08:12:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 373cf044-89d7-4096-9ec1-5f513243d63a does not exist
Dec 06 08:12:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b0d5254e-cf2d-4dd1-a677-d669432d7de2 does not exist
Dec 06 08:12:13 compute-0 podman[394279]: 2025-12-06 08:12:13.541333115 +0000 UTC m=+0.097512260 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 08:12:13 compute-0 sudo[394314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:13 compute-0 sudo[394314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:13 compute-0 sudo[394314]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:13 compute-0 sudo[394340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:12:13 compute-0 sudo[394340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:13 compute-0 sudo[394340]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:13.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:12:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:12:14 compute-0 ceph-mon[74339]: pgmap v3588: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:14.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:15.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:16.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:16 compute-0 ceph-mon[74339]: pgmap v3589: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:17.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:18 compute-0 nova_compute[251992]: 2025-12-06 08:12:18.281 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:18 compute-0 nova_compute[251992]: 2025-12-06 08:12:18.318 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:12:18
Dec 06 08:12:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:12:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:12:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.rgw.root', '.mgr', 'backups', 'default.rgw.control']
Dec 06 08:12:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:12:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:18.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:19 compute-0 ceph-mon[74339]: pgmap v3590: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:19.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/308335499' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3595195691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:20.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:21 compute-0 ceph-mon[74339]: pgmap v3591: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:21.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:22.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:23 compute-0 ceph-mon[74339]: pgmap v3592: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:12:23 compute-0 nova_compute[251992]: 2025-12-06 08:12:23.285 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:23 compute-0 nova_compute[251992]: 2025-12-06 08:12:23.319 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:12:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:12:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:23.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:24.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:25 compute-0 ceph-mon[74339]: pgmap v3593: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:12:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 821 KiB/s rd, 12 KiB/s wr, 37 op/s
Dec 06 08:12:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:25.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:12:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:12:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:26.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:26.980 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:12:26 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:26.981 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:12:26 compute-0 nova_compute[251992]: 2025-12-06 08:12:26.982 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:27 compute-0 ceph-mon[74339]: pgmap v3594: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 821 KiB/s rd, 12 KiB/s wr, 37 op/s
Dec 06 08:12:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:12:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:12:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:12:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:12:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:12:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:12:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:27.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:28 compute-0 nova_compute[251992]: 2025-12-06 08:12:28.288 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:28 compute-0 nova_compute[251992]: 2025-12-06 08:12:28.321 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:28 compute-0 ceph-mon[74339]: pgmap v3595: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:12:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:28.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:12:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:29.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:30.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:30 compute-0 ceph-mon[74339]: pgmap v3596: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:12:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:12:31 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:31.982 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:12:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:31.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:32.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:32 compute-0 ceph-mon[74339]: pgmap v3597: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:12:33 compute-0 sudo[394375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:33 compute-0 sudo[394375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:33 compute-0 sudo[394375]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:33 compute-0 sudo[394401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:33 compute-0 sudo[394401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:33 compute-0 sudo[394401]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:33 compute-0 nova_compute[251992]: 2025-12-06 08:12:33.301 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:33 compute-0 nova_compute[251992]: 2025-12-06 08:12:33.322 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:12:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:33.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:34.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:35 compute-0 ceph-mon[74339]: pgmap v3598: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:12:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 77 op/s
Dec 06 08:12:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:35.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:36.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:37 compute-0 ceph-mon[74339]: pgmap v3599: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 77 op/s
Dec 06 08:12:37 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Dec 06 08:12:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 876 KiB/s wr, 47 op/s
Dec 06 08:12:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:37.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:38 compute-0 nova_compute[251992]: 2025-12-06 08:12:38.304 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:38 compute-0 nova_compute[251992]: 2025-12-06 08:12:38.324 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:38 compute-0 podman[394428]: 2025-12-06 08:12:38.444897089 +0000 UTC m=+0.090926001 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:12:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:38.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:39 compute-0 ceph-mon[74339]: pgmap v3600: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 876 KiB/s wr, 47 op/s
Dec 06 08:12:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2996067469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 876 KiB/s wr, 11 op/s
Dec 06 08:12:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:39.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4002310260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:40.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:41 compute-0 ceph-mon[74339]: pgmap v3601: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 876 KiB/s wr, 11 op/s
Dec 06 08:12:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 417 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 08:12:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:41.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:12:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:42.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:12:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:12:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:12:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:12:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:12:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:12:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:12:43 compute-0 nova_compute[251992]: 2025-12-06 08:12:43.308 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:43 compute-0 nova_compute[251992]: 2025-12-06 08:12:43.325 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 417 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 08:12:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:43.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:44 compute-0 podman[394457]: 2025-12-06 08:12:44.38658732 +0000 UTC m=+0.048563280 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:12:44 compute-0 podman[394458]: 2025-12-06 08:12:44.389541641 +0000 UTC m=+0.047892982 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 08:12:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:44 compute-0 nova_compute[251992]: 2025-12-06 08:12:44.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:44 compute-0 nova_compute[251992]: 2025-12-06 08:12:44.697 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:44 compute-0 nova_compute[251992]: 2025-12-06 08:12:44.698 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:44 compute-0 nova_compute[251992]: 2025-12-06 08:12:44.698 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:44 compute-0 nova_compute[251992]: 2025-12-06 08:12:44.698 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:12:44 compute-0 nova_compute[251992]: 2025-12-06 08:12:44.698 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:12:44 compute-0 ceph-mon[74339]: pgmap v3602: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 417 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 08:12:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:44.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:12:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1750020864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.120 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.272 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.274 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4170MB free_disk=20.942890167236328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.274 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.275 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.406 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.407 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.426 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:12:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 418 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Dec 06 08:12:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:12:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3118951938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.921 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.927 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.952 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.953 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:12:45 compute-0 ceph-mon[74339]: pgmap v3603: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 417 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 08:12:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1750020864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:45 compute-0 nova_compute[251992]: 2025-12-06 08:12:45.953 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:45.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.255 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "5ae83b01-2f96-47ca-92c5-66f9500be47d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.256 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.278 251996 DEBUG nova.compute.manager [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.344 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.345 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.351 251996 DEBUG nova.virt.hardware [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.352 251996 INFO nova.compute.claims [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.469 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:12:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:46.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:12:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2206427747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.913 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.919 251996 DEBUG nova.compute.provider_tree [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.943 251996 DEBUG nova.scheduler.client.report [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.946 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.969 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:46 compute-0 nova_compute[251992]: 2025-12-06 08:12:46.970 251996 DEBUG nova.compute.manager [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.019 251996 DEBUG nova.compute.manager [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.020 251996 DEBUG nova.network.neutron [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.038 251996 INFO nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.060 251996 DEBUG nova.compute.manager [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.154 251996 DEBUG nova.compute.manager [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.155 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.156 251996 INFO nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Creating image(s)
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.184 251996 DEBUG nova.storage.rbd_utils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5ae83b01-2f96-47ca-92c5-66f9500be47d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.212 251996 DEBUG nova.storage.rbd_utils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5ae83b01-2f96-47ca-92c5-66f9500be47d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.238 251996 DEBUG nova.storage.rbd_utils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5ae83b01-2f96-47ca-92c5-66f9500be47d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.241 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.270 251996 DEBUG nova.policy [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.307 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.308 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.309 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.309 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.333 251996 DEBUG nova.storage.rbd_utils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5ae83b01-2f96-47ca-92c5-66f9500be47d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:12:47 compute-0 nova_compute[251992]: 2025-12-06 08:12:47.337 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5ae83b01-2f96-47ca-92c5-66f9500be47d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:12:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 403 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Dec 06 08:12:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:47.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:48 compute-0 nova_compute[251992]: 2025-12-06 08:12:48.312 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:48 compute-0 nova_compute[251992]: 2025-12-06 08:12:48.327 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:48 compute-0 nova_compute[251992]: 2025-12-06 08:12:48.435 251996 DEBUG nova.network.neutron [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Successfully created port: 26d5b067-93d9-4736-a51f-3d695f40988b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:12:48 compute-0 ceph-mon[74339]: pgmap v3604: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 418 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Dec 06 08:12:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3118951938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2206427747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:48 compute-0 nova_compute[251992]: 2025-12-06 08:12:48.646 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 5ae83b01-2f96-47ca-92c5-66f9500be47d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:12:48 compute-0 nova_compute[251992]: 2025-12-06 08:12:48.711 251996 DEBUG nova.storage.rbd_utils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image 5ae83b01-2f96-47ca-92c5-66f9500be47d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:12:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:48.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:48 compute-0 nova_compute[251992]: 2025-12-06 08:12:48.801 251996 DEBUG nova.objects.instance [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid 5ae83b01-2f96-47ca-92c5-66f9500be47d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:12:48 compute-0 nova_compute[251992]: 2025-12-06 08:12:48.873 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:12:48 compute-0 nova_compute[251992]: 2025-12-06 08:12:48.873 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Ensure instance console log exists: /var/lib/nova/instances/5ae83b01-2f96-47ca-92c5-66f9500be47d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:12:48 compute-0 nova_compute[251992]: 2025-12-06 08:12:48.874 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:48 compute-0 nova_compute[251992]: 2025-12-06 08:12:48.874 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:48 compute-0 nova_compute[251992]: 2025-12-06 08:12:48.874 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:49 compute-0 ceph-mon[74339]: pgmap v3605: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 403 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Dec 06 08:12:49 compute-0 nova_compute[251992]: 2025-12-06 08:12:49.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 384 KiB/s rd, 1.3 MiB/s wr, 59 op/s
Dec 06 08:12:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:50.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:50 compute-0 nova_compute[251992]: 2025-12-06 08:12:50.250 251996 DEBUG nova.network.neutron [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Successfully updated port: 26d5b067-93d9-4736-a51f-3d695f40988b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:12:50 compute-0 nova_compute[251992]: 2025-12-06 08:12:50.276 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:12:50 compute-0 nova_compute[251992]: 2025-12-06 08:12:50.276 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:12:50 compute-0 nova_compute[251992]: 2025-12-06 08:12:50.276 251996 DEBUG nova.network.neutron [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:12:50 compute-0 nova_compute[251992]: 2025-12-06 08:12:50.436 251996 DEBUG nova.network.neutron [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:12:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:50.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:51 compute-0 nova_compute[251992]: 2025-12-06 08:12:51.338 251996 DEBUG nova.compute.manager [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-changed-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:12:51 compute-0 nova_compute[251992]: 2025-12-06 08:12:51.338 251996 DEBUG nova.compute.manager [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Refreshing instance network info cache due to event network-changed-26d5b067-93d9-4736-a51f-3d695f40988b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:12:51 compute-0 nova_compute[251992]: 2025-12-06 08:12:51.339 251996 DEBUG oslo_concurrency.lockutils [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:12:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 401 KiB/s rd, 3.1 MiB/s wr, 86 op/s
Dec 06 08:12:51 compute-0 nova_compute[251992]: 2025-12-06 08:12:51.967 251996 DEBUG nova.network.neutron [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updating instance_info_cache with network_info: [{"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:12:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:52.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:52 compute-0 ceph-mon[74339]: pgmap v3606: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 384 KiB/s rd, 1.3 MiB/s wr, 59 op/s
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.532 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.533 251996 DEBUG nova.compute.manager [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Instance network_info: |[{"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.533 251996 DEBUG oslo_concurrency.lockutils [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.533 251996 DEBUG nova.network.neutron [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Refreshing network info cache for port 26d5b067-93d9-4736-a51f-3d695f40988b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.538 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Start _get_guest_xml network_info=[{"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.543 251996 WARNING nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.547 251996 DEBUG nova.virt.libvirt.host [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.548 251996 DEBUG nova.virt.libvirt.host [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.554 251996 DEBUG nova.virt.libvirt.host [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.555 251996 DEBUG nova.virt.libvirt.host [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.557 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.557 251996 DEBUG nova.virt.hardware [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.558 251996 DEBUG nova.virt.hardware [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.558 251996 DEBUG nova.virt.hardware [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.559 251996 DEBUG nova.virt.hardware [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.559 251996 DEBUG nova.virt.hardware [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.559 251996 DEBUG nova.virt.hardware [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.560 251996 DEBUG nova.virt.hardware [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.560 251996 DEBUG nova.virt.hardware [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.561 251996 DEBUG nova.virt.hardware [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.561 251996 DEBUG nova.virt.hardware [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.561 251996 DEBUG nova.virt.hardware [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.567 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:12:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:52.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.756 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 08:12:52 compute-0 nova_compute[251992]: 2025-12-06 08:12:52.756 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:12:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:12:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2306060753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.013 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.040 251996 DEBUG nova.storage.rbd_utils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5ae83b01-2f96-47ca-92c5-66f9500be47d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.043 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:12:53 compute-0 sudo[394791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:53 compute-0 sudo[394791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:53 compute-0 sudo[394791]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:53 compute-0 ceph-mon[74339]: pgmap v3607: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 401 KiB/s rd, 3.1 MiB/s wr, 86 op/s
Dec 06 08:12:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1156600947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2306060753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:53 compute-0 sudo[394816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.325 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:53 compute-0 sudo[394816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.329 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:53 compute-0 sudo[394816]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:12:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/393233924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.466 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.468 251996 DEBUG nova.virt.libvirt.vif [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:12:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-208038318',display_name='tempest-TestNetworkBasicOps-server-208038318',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-208038318',id=200,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6LpbDGGsqUYLXXtpD4eX1duN1Tec/27La7VKOPMHJKTUnIjUHvsqzc6p8QoFMm0Jsj2gdxIhoQXUry048nNvgtQkx6G0Y3SARRMFBQVqnPVQKM1z95bezOQcgef6h6Ew==',key_name='tempest-TestNetworkBasicOps-240589395',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-jexoqbsm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:12:47Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5ae83b01-2f96-47ca-92c5-66f9500be47d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.468 251996 DEBUG nova.network.os_vif_util [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.469 251996 DEBUG nova.network.os_vif_util [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:d8:d3,bridge_name='br-int',has_traffic_filtering=True,id=26d5b067-93d9-4736-a51f-3d695f40988b,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d5b067-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.471 251996 DEBUG nova.objects.instance [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ae83b01-2f96-47ca-92c5-66f9500be47d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:12:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.916 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:12:53 compute-0 nova_compute[251992]:   <uuid>5ae83b01-2f96-47ca-92c5-66f9500be47d</uuid>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   <name>instance-000000c8</name>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <nova:name>tempest-TestNetworkBasicOps-server-208038318</nova:name>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:12:52</nova:creationTime>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <nova:port uuid="26d5b067-93d9-4736-a51f-3d695f40988b">
Dec 06 08:12:53 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <system>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <entry name="serial">5ae83b01-2f96-47ca-92c5-66f9500be47d</entry>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <entry name="uuid">5ae83b01-2f96-47ca-92c5-66f9500be47d</entry>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     </system>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   <os>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   </os>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   <features>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   </features>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/5ae83b01-2f96-47ca-92c5-66f9500be47d_disk">
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       </source>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/5ae83b01-2f96-47ca-92c5-66f9500be47d_disk.config">
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       </source>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:12:53 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:90:d8:d3"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <target dev="tap26d5b067-93"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/5ae83b01-2f96-47ca-92c5-66f9500be47d/console.log" append="off"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <video>
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     </video>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:12:53 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:12:53 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:12:53 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:12:53 compute-0 nova_compute[251992]: </domain>
Dec 06 08:12:53 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.917 251996 DEBUG nova.compute.manager [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Preparing to wait for external event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.918 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.918 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.919 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.919 251996 DEBUG nova.virt.libvirt.vif [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:12:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-208038318',display_name='tempest-TestNetworkBasicOps-server-208038318',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-208038318',id=200,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6LpbDGGsqUYLXXtpD4eX1duN1Tec/27La7VKOPMHJKTUnIjUHvsqzc6p8QoFMm0Jsj2gdxIhoQXUry048nNvgtQkx6G0Y3SARRMFBQVqnPVQKM1z95bezOQcgef6h6Ew==',key_name='tempest-TestNetworkBasicOps-240589395',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-jexoqbsm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:12:47Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5ae83b01-2f96-47ca-92c5-66f9500be47d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.920 251996 DEBUG nova.network.os_vif_util [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.920 251996 DEBUG nova.network.os_vif_util [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:90:d8:d3,bridge_name='br-int',has_traffic_filtering=True,id=26d5b067-93d9-4736-a51f-3d695f40988b,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d5b067-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.920 251996 DEBUG os_vif [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:d8:d3,bridge_name='br-int',has_traffic_filtering=True,id=26d5b067-93d9-4736-a51f-3d695f40988b,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d5b067-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.923 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.923 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.924 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.930 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.930 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26d5b067-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.931 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap26d5b067-93, col_values=(('external_ids', {'iface-id': '26d5b067-93d9-4736-a51f-3d695f40988b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:90:d8:d3', 'vm-uuid': '5ae83b01-2f96-47ca-92c5-66f9500be47d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.932 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:53 compute-0 NetworkManager[48965]: <info>  [1765008773.9348] manager: (tap26d5b067-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/354)
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.935 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.940 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:53 compute-0 nova_compute[251992]: 2025-12-06 08:12:53.942 251996 INFO os_vif [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:90:d8:d3,bridge_name='br-int',has_traffic_filtering=True,id=26d5b067-93d9-4736-a51f-3d695f40988b,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d5b067-93')
Dec 06 08:12:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:12:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:54.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:12:54 compute-0 nova_compute[251992]: 2025-12-06 08:12:54.228 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:12:54 compute-0 nova_compute[251992]: 2025-12-06 08:12:54.228 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:12:54 compute-0 nova_compute[251992]: 2025-12-06 08:12:54.229 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:90:d8:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:12:54 compute-0 nova_compute[251992]: 2025-12-06 08:12:54.229 251996 INFO nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Using config drive
Dec 06 08:12:54 compute-0 nova_compute[251992]: 2025-12-06 08:12:54.277 251996 DEBUG nova.storage.rbd_utils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5ae83b01-2f96-47ca-92c5-66f9500be47d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:12:54 compute-0 nova_compute[251992]: 2025-12-06 08:12:54.284 251996 DEBUG nova.network.neutron [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updated VIF entry in instance network info cache for port 26d5b067-93d9-4736-a51f-3d695f40988b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:12:54 compute-0 nova_compute[251992]: 2025-12-06 08:12:54.285 251996 DEBUG nova.network.neutron [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updating instance_info_cache with network_info: [{"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:12:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:54 compute-0 nova_compute[251992]: 2025-12-06 08:12:54.473 251996 DEBUG oslo_concurrency.lockutils [req-9245425a-89c8-48bc-9a75-094cfc20b5e7 req-fb9b3a2d-4e8f-498a-8247-6539fd14bec6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:12:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:54.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/393233924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:54 compute-0 nova_compute[251992]: 2025-12-06 08:12:54.864 251996 INFO nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Creating config drive at /var/lib/nova/instances/5ae83b01-2f96-47ca-92c5-66f9500be47d/disk.config
Dec 06 08:12:54 compute-0 nova_compute[251992]: 2025-12-06 08:12:54.874 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ae83b01-2f96-47ca-92c5-66f9500be47d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57vclrfw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:12:55 compute-0 nova_compute[251992]: 2025-12-06 08:12:55.017 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ae83b01-2f96-47ca-92c5-66f9500be47d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57vclrfw" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:12:55 compute-0 nova_compute[251992]: 2025-12-06 08:12:55.060 251996 DEBUG nova.storage.rbd_utils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image 5ae83b01-2f96-47ca-92c5-66f9500be47d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:12:55 compute-0 nova_compute[251992]: 2025-12-06 08:12:55.065 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ae83b01-2f96-47ca-92c5-66f9500be47d/disk.config 5ae83b01-2f96-47ca-92c5-66f9500be47d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:12:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 08:12:55 compute-0 nova_compute[251992]: 2025-12-06 08:12:55.899 251996 DEBUG oslo_concurrency.processutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ae83b01-2f96-47ca-92c5-66f9500be47d/disk.config 5ae83b01-2f96-47ca-92c5-66f9500be47d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.834s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:12:55 compute-0 nova_compute[251992]: 2025-12-06 08:12:55.900 251996 INFO nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Deleting local config drive /var/lib/nova/instances/5ae83b01-2f96-47ca-92c5-66f9500be47d/disk.config because it was imported into RBD.
Dec 06 08:12:55 compute-0 ceph-mon[74339]: pgmap v3608: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 08:12:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3952924626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:12:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3228740001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1362112174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:12:55 compute-0 kernel: tap26d5b067-93: entered promiscuous mode
Dec 06 08:12:55 compute-0 NetworkManager[48965]: <info>  [1765008775.9571] manager: (tap26d5b067-93): new Tun device (/org/freedesktop/NetworkManager/Devices/355)
Dec 06 08:12:55 compute-0 ovn_controller[147168]: 2025-12-06T08:12:55Z|00761|binding|INFO|Claiming lport 26d5b067-93d9-4736-a51f-3d695f40988b for this chassis.
Dec 06 08:12:55 compute-0 ovn_controller[147168]: 2025-12-06T08:12:55Z|00762|binding|INFO|26d5b067-93d9-4736-a51f-3d695f40988b: Claiming fa:16:3e:90:d8:d3 10.100.0.3
Dec 06 08:12:55 compute-0 nova_compute[251992]: 2025-12-06 08:12:55.957 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:55 compute-0 nova_compute[251992]: 2025-12-06 08:12:55.962 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:55 compute-0 nova_compute[251992]: 2025-12-06 08:12:55.965 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:55 compute-0 systemd-machined[212986]: New machine qemu-91-instance-000000c8.
Dec 06 08:12:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:56.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:56 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-000000c8.
Dec 06 08:12:56 compute-0 systemd-udevd[394917]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:12:56 compute-0 nova_compute[251992]: 2025-12-06 08:12:56.031 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:56 compute-0 ovn_controller[147168]: 2025-12-06T08:12:56Z|00763|binding|INFO|Setting lport 26d5b067-93d9-4736-a51f-3d695f40988b ovn-installed in OVS
Dec 06 08:12:56 compute-0 nova_compute[251992]: 2025-12-06 08:12:56.034 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:56 compute-0 NetworkManager[48965]: <info>  [1765008776.0410] device (tap26d5b067-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:12:56 compute-0 NetworkManager[48965]: <info>  [1765008776.0437] device (tap26d5b067-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:12:56 compute-0 ovn_controller[147168]: 2025-12-06T08:12:56Z|00764|binding|INFO|Setting lport 26d5b067-93d9-4736-a51f-3d695f40988b up in Southbound
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.065 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:d8:d3 10.100.0.3'], port_security=['fa:16:3e:90:d8:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5ae83b01-2f96-47ca-92c5-66f9500be47d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a92fb155-cdbe-4539-a1b3-91333eee3cc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c607db35-6f3d-4821-9124-a70a0a233535, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=26d5b067-93d9-4736-a51f-3d695f40988b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.067 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 26d5b067-93d9-4736-a51f-3d695f40988b in datapath 1bf97b73-354e-4df7-9a72-727cdc64dc43 bound to our chassis
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.069 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1bf97b73-354e-4df7-9a72-727cdc64dc43
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.082 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[153f6c4b-9270-4e44-81f8-489bf8e066d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.084 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1bf97b73-31 in ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.087 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1bf97b73-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.087 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[56bcb6bd-5592-48c9-aa49-63b5142997ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.090 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[40285ca3-4b3d-4623-904a-9cb4e00c18b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.105 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[7db9ede3-41a4-4b89-939e-39686699a57c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.120 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[913bdf46-06a9-4a66-a516-17d06d1413df]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.154 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[c17694bf-c7ef-47c9-85df-93d67420d9ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.160 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[728dacf2-fbee-4b27-86e9-2d1fe2f331a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 NetworkManager[48965]: <info>  [1765008776.1627] manager: (tap1bf97b73-30): new Veth device (/org/freedesktop/NetworkManager/Devices/356)
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.193 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc42a05-35a9-4c77-9ffb-8b39347b52ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.196 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[6af0839d-38d1-4b3d-81c0-c542f1d45470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 NetworkManager[48965]: <info>  [1765008776.2205] device (tap1bf97b73-30): carrier: link connected
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.226 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8ccf3b-2b44-4db9-8965-70e6ca2afe9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.246 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8aae00ff-caf1-45a8-8fb1-64f665d8ef7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1bf97b73-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:e2:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 904880, 'reachable_time': 39495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394950, 'error': None, 'target': 'ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.262 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3b53a7e9-d62b-4770-9422-98beccc3f438]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:e244'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 904880, 'tstamp': 904880}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394951, 'error': None, 'target': 'ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.279 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca58bdb-3055-4602-adff-e3e4cc60bf19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1bf97b73-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:e2:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 231], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 904880, 'reachable_time': 39495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394952, 'error': None, 'target': 'ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.318 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6932807c-33f7-4ae9-9494-488a33190d80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.389 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dd252b9e-8617-4f44-9a9b-f82468038905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.391 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bf97b73-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.391 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.391 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bf97b73-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:12:56 compute-0 kernel: tap1bf97b73-30: entered promiscuous mode
Dec 06 08:12:56 compute-0 NetworkManager[48965]: <info>  [1765008776.3943] manager: (tap1bf97b73-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Dec 06 08:12:56 compute-0 nova_compute[251992]: 2025-12-06 08:12:56.395 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.396 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1bf97b73-30, col_values=(('external_ids', {'iface-id': 'df0319da-86fa-419b-bb2d-0ca654179487'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:12:56 compute-0 ovn_controller[147168]: 2025-12-06T08:12:56Z|00765|binding|INFO|Releasing lport df0319da-86fa-419b-bb2d-0ca654179487 from this chassis (sb_readonly=0)
Dec 06 08:12:56 compute-0 nova_compute[251992]: 2025-12-06 08:12:56.410 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.411 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1bf97b73-354e-4df7-9a72-727cdc64dc43.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1bf97b73-354e-4df7-9a72-727cdc64dc43.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.412 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cdabba07-0e04-4500-9207-95392a083b1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.413 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-1bf97b73-354e-4df7-9a72-727cdc64dc43
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/1bf97b73-354e-4df7-9a72-727cdc64dc43.pid.haproxy
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 1bf97b73-354e-4df7-9a72-727cdc64dc43
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:12:56 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:12:56.414 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'env', 'PROCESS_TAG=haproxy-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1bf97b73-354e-4df7-9a72-727cdc64dc43.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:12:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:12:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:56.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:12:56 compute-0 nova_compute[251992]: 2025-12-06 08:12:56.806 251996 DEBUG nova.compute.manager [req-bf1e95f2-daf9-418d-872e-2be41be7c152 req-3391874f-59c3-4e22-9332-b55395dd9c13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:12:56 compute-0 nova_compute[251992]: 2025-12-06 08:12:56.806 251996 DEBUG oslo_concurrency.lockutils [req-bf1e95f2-daf9-418d-872e-2be41be7c152 req-3391874f-59c3-4e22-9332-b55395dd9c13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:56 compute-0 nova_compute[251992]: 2025-12-06 08:12:56.807 251996 DEBUG oslo_concurrency.lockutils [req-bf1e95f2-daf9-418d-872e-2be41be7c152 req-3391874f-59c3-4e22-9332-b55395dd9c13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:56 compute-0 nova_compute[251992]: 2025-12-06 08:12:56.807 251996 DEBUG oslo_concurrency.lockutils [req-bf1e95f2-daf9-418d-872e-2be41be7c152 req-3391874f-59c3-4e22-9332-b55395dd9c13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:56 compute-0 nova_compute[251992]: 2025-12-06 08:12:56.807 251996 DEBUG nova.compute.manager [req-bf1e95f2-daf9-418d-872e-2be41be7c152 req-3391874f-59c3-4e22-9332-b55395dd9c13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Processing event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:12:56 compute-0 podman[395002]: 2025-12-06 08:12:56.76229524 +0000 UTC m=+0.027961981 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:12:56 compute-0 podman[395002]: 2025-12-06 08:12:56.939353799 +0000 UTC m=+0.205020510 container create 9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 08:12:57 compute-0 systemd[1]: Started libpod-conmon-9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296.scope.
Dec 06 08:12:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21c14f94a811e26621198f008b303f7d8198d259d9d68b53bf636bc6512fb502/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:12:57 compute-0 podman[395002]: 2025-12-06 08:12:57.121166219 +0000 UTC m=+0.386832950 container init 9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:12:57 compute-0 podman[395002]: 2025-12-06 08:12:57.127524841 +0000 UTC m=+0.393191552 container start 9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 06 08:12:57 compute-0 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[395035]: [NOTICE]   (395041) : New worker (395047) forked
Dec 06 08:12:57 compute-0 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[395035]: [NOTICE]   (395041) : Loading success.
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.221 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008777.2206783, 5ae83b01-2f96-47ca-92c5-66f9500be47d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.221 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] VM Started (Lifecycle Event)
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.223 251996 DEBUG nova.compute.manager [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.226 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.229 251996 INFO nova.virt.libvirt.driver [-] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Instance spawned successfully.
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.230 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.361 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.368 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.371 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.372 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.373 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.373 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.374 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.374 251996 DEBUG nova.virt.libvirt.driver [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.507 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.508 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008777.2209842, 5ae83b01-2f96-47ca-92c5-66f9500be47d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.509 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] VM Paused (Lifecycle Event)
Dec 06 08:12:57 compute-0 ceph-mon[74339]: pgmap v3609: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.552 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.555 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008777.225852, 5ae83b01-2f96-47ca-92c5-66f9500be47d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.556 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] VM Resumed (Lifecycle Event)
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.589 251996 INFO nova.compute.manager [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Took 10.43 seconds to spawn the instance on the hypervisor.
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.590 251996 DEBUG nova.compute.manager [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.592 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.600 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.646 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.699 251996 INFO nova.compute.manager [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Took 11.38 seconds to build instance.
Dec 06 08:12:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 08:12:57 compute-0 nova_compute[251992]: 2025-12-06 08:12:57.797 251996 DEBUG oslo_concurrency.lockutils [None req-13230e29-2d98-4447-a8fc-48b1c2fe14d0 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:12:58.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:58 compute-0 nova_compute[251992]: 2025-12-06 08:12:58.332 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:58 compute-0 nova_compute[251992]: 2025-12-06 08:12:58.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:58 compute-0 nova_compute[251992]: 2025-12-06 08:12:58.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:12:58 compute-0 ceph-mon[74339]: pgmap v3610: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Dec 06 08:12:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:12:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:12:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:12:58.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:12:58 compute-0 nova_compute[251992]: 2025-12-06 08:12:58.933 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:12:58 compute-0 nova_compute[251992]: 2025-12-06 08:12:58.965 251996 DEBUG nova.compute.manager [req-864d763d-eedc-4851-848d-c1d4204e8597 req-68fd5e98-2d65-4d2d-9d36-fb384e04e5fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:12:58 compute-0 nova_compute[251992]: 2025-12-06 08:12:58.966 251996 DEBUG oslo_concurrency.lockutils [req-864d763d-eedc-4851-848d-c1d4204e8597 req-68fd5e98-2d65-4d2d-9d36-fb384e04e5fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:12:58 compute-0 nova_compute[251992]: 2025-12-06 08:12:58.966 251996 DEBUG oslo_concurrency.lockutils [req-864d763d-eedc-4851-848d-c1d4204e8597 req-68fd5e98-2d65-4d2d-9d36-fb384e04e5fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:12:58 compute-0 nova_compute[251992]: 2025-12-06 08:12:58.966 251996 DEBUG oslo_concurrency.lockutils [req-864d763d-eedc-4851-848d-c1d4204e8597 req-68fd5e98-2d65-4d2d-9d36-fb384e04e5fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:12:58 compute-0 nova_compute[251992]: 2025-12-06 08:12:58.966 251996 DEBUG nova.compute.manager [req-864d763d-eedc-4851-848d-c1d4204e8597 req-68fd5e98-2d65-4d2d-9d36-fb384e04e5fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] No waiting events found dispatching network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:12:58 compute-0 nova_compute[251992]: 2025-12-06 08:12:58.967 251996 WARNING nova.compute.manager [req-864d763d-eedc-4851-848d-c1d4204e8597 req-68fd5e98-2d65-4d2d-9d36-fb384e04e5fa 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received unexpected event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b for instance with vm_state active and task_state None.
Dec 06 08:12:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:12:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 06 08:13:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:00.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:00.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:01 compute-0 ceph-mon[74339]: pgmap v3611: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec 06 08:13:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Dec 06 08:13:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:02.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:02 compute-0 nova_compute[251992]: 2025-12-06 08:13:02.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:02 compute-0 nova_compute[251992]: 2025-12-06 08:13:02.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:02.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:02 compute-0 ceph-mon[74339]: pgmap v3612: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Dec 06 08:13:03 compute-0 nova_compute[251992]: 2025-12-06 08:13:03.085 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:03 compute-0 NetworkManager[48965]: <info>  [1765008783.0877] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Dec 06 08:13:03 compute-0 NetworkManager[48965]: <info>  [1765008783.0885] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Dec 06 08:13:03 compute-0 nova_compute[251992]: 2025-12-06 08:13:03.160 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:03 compute-0 ovn_controller[147168]: 2025-12-06T08:13:03Z|00766|binding|INFO|Releasing lport df0319da-86fa-419b-bb2d-0ca654179487 from this chassis (sb_readonly=0)
Dec 06 08:13:03 compute-0 nova_compute[251992]: 2025-12-06 08:13:03.170 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:03 compute-0 nova_compute[251992]: 2025-12-06 08:13:03.333 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:03 compute-0 nova_compute[251992]: 2025-12-06 08:13:03.484 251996 DEBUG nova.compute.manager [req-00141c7d-dc58-449c-9b07-dbd568a3eb11 req-f205106a-55b7-473b-a45f-9d8f0e83ec92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-changed-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:03 compute-0 nova_compute[251992]: 2025-12-06 08:13:03.484 251996 DEBUG nova.compute.manager [req-00141c7d-dc58-449c-9b07-dbd568a3eb11 req-f205106a-55b7-473b-a45f-9d8f0e83ec92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Refreshing instance network info cache due to event network-changed-26d5b067-93d9-4736-a51f-3d695f40988b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:13:03 compute-0 nova_compute[251992]: 2025-12-06 08:13:03.485 251996 DEBUG oslo_concurrency.lockutils [req-00141c7d-dc58-449c-9b07-dbd568a3eb11 req-f205106a-55b7-473b-a45f-9d8f0e83ec92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:13:03 compute-0 nova_compute[251992]: 2025-12-06 08:13:03.485 251996 DEBUG oslo_concurrency.lockutils [req-00141c7d-dc58-449c-9b07-dbd568a3eb11 req-f205106a-55b7-473b-a45f-9d8f0e83ec92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:13:03 compute-0 nova_compute[251992]: 2025-12-06 08:13:03.485 251996 DEBUG nova.network.neutron [req-00141c7d-dc58-449c-9b07-dbd568a3eb11 req-f205106a-55b7-473b-a45f-9d8f0e83ec92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Refreshing network info cache for port 26d5b067-93d9-4736-a51f-3d695f40988b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:13:03 compute-0 nova_compute[251992]: 2025-12-06 08:13:03.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:03 compute-0 nova_compute[251992]: 2025-12-06 08:13:03.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:13:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 144 op/s
Dec 06 08:13:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:13:03.889 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:13:03.890 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:13:03.890 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:03 compute-0 nova_compute[251992]: 2025-12-06 08:13:03.934 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:04.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:04.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:05 compute-0 nova_compute[251992]: 2025-12-06 08:13:05.129 251996 DEBUG nova.network.neutron [req-00141c7d-dc58-449c-9b07-dbd568a3eb11 req-f205106a-55b7-473b-a45f-9d8f0e83ec92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updated VIF entry in instance network info cache for port 26d5b067-93d9-4736-a51f-3d695f40988b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:13:05 compute-0 nova_compute[251992]: 2025-12-06 08:13:05.130 251996 DEBUG nova.network.neutron [req-00141c7d-dc58-449c-9b07-dbd568a3eb11 req-f205106a-55b7-473b-a45f-9d8f0e83ec92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updating instance_info_cache with network_info: [{"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:13:05 compute-0 nova_compute[251992]: 2025-12-06 08:13:05.209 251996 DEBUG oslo_concurrency.lockutils [req-00141c7d-dc58-449c-9b07-dbd568a3eb11 req-f205106a-55b7-473b-a45f-9d8f0e83ec92 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:13:05 compute-0 ceph-mon[74339]: pgmap v3613: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 144 op/s
Dec 06 08:13:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 144 op/s
Dec 06 08:13:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:06.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:06 compute-0 ceph-mon[74339]: pgmap v3614: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 144 op/s
Dec 06 08:13:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:06.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 142 op/s
Dec 06 08:13:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:08.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:08 compute-0 nova_compute[251992]: 2025-12-06 08:13:08.336 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:08.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:08 compute-0 nova_compute[251992]: 2025-12-06 08:13:08.937 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:09 compute-0 ceph-mon[74339]: pgmap v3615: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 142 op/s
Dec 06 08:13:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:09 compute-0 podman[395064]: 2025-12-06 08:13:09.471919161 +0000 UTC m=+0.108682013 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec 06 08:13:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 141 op/s
Dec 06 08:13:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:10.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:10.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 247 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 376 KiB/s wr, 162 op/s
Dec 06 08:13:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1088677073' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:13:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1088677073' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:13:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:12.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:12.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:13:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:13:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:13:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:13:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:13:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:13:13 compute-0 nova_compute[251992]: 2025-12-06 08:13:13.337 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:13 compute-0 sudo[395093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:13 compute-0 sudo[395093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:13 compute-0 sudo[395093]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:13 compute-0 sudo[395118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:13 compute-0 sudo[395118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:13 compute-0 sudo[395118]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:13 compute-0 ceph-mon[74339]: pgmap v3616: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 141 op/s
Dec 06 08:13:13 compute-0 ceph-mon[74339]: pgmap v3617: 305 pgs: 305 active+clean; 247 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 376 KiB/s wr, 162 op/s
Dec 06 08:13:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 247 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 364 KiB/s wr, 20 op/s
Dec 06 08:13:13 compute-0 nova_compute[251992]: 2025-12-06 08:13:13.939 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:14 compute-0 sudo[395143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:14 compute-0 sudo[395143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:14 compute-0 sudo[395143]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:14.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:14 compute-0 sudo[395168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:13:14 compute-0 sudo[395168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:14 compute-0 sudo[395168]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:14 compute-0 sudo[395193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:14 compute-0 sudo[395193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:14 compute-0 sudo[395193]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:14 compute-0 sudo[395218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:13:14 compute-0 sudo[395218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:13:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:14 compute-0 sudo[395218]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:13:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:13:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:14.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3653969976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:15 compute-0 podman[395275]: 2025-12-06 08:13:15.393166097 +0000 UTC m=+0.051727926 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 08:13:15 compute-0 podman[395276]: 2025-12-06 08:13:15.404089634 +0000 UTC m=+0.062551350 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 08:13:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 260 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 376 KiB/s rd, 1.5 MiB/s wr, 59 op/s
Dec 06 08:13:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:13:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:16.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:16 compute-0 ceph-mon[74339]: pgmap v3618: 305 pgs: 305 active+clean; 247 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 364 KiB/s wr, 20 op/s
Dec 06 08:13:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:13:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:13:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:13:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:13:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:13:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:13:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:13:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:16.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:17 compute-0 ovn_controller[147168]: 2025-12-06T08:13:17Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:90:d8:d3 10.100.0.3
Dec 06 08:13:17 compute-0 ovn_controller[147168]: 2025-12-06T08:13:17Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:90:d8:d3 10.100.0.3
Dec 06 08:13:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:13:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5206578a-b8c6-4fc2-8506-e38053719673 does not exist
Dec 06 08:13:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5d5723d4-f491-4e66-8b4d-e4a46c72f3e9 does not exist
Dec 06 08:13:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 37809adb-72d1-4ca6-aede-7ac2b8593301 does not exist
Dec 06 08:13:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:13:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:13:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:13:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:13:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:13:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:13:17 compute-0 sudo[395314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:17 compute-0 sudo[395314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:17 compute-0 sudo[395314]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:17 compute-0 sudo[395339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:13:17 compute-0 sudo[395339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:17 compute-0 sudo[395339]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:17 compute-0 sudo[395364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:17 compute-0 sudo[395364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:17 compute-0 sudo[395364]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:17 compute-0 sudo[395389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:13:17 compute-0 sudo[395389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:17 compute-0 ceph-mon[74339]: pgmap v3619: 305 pgs: 305 active+clean; 260 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 376 KiB/s rd, 1.5 MiB/s wr, 59 op/s
Dec 06 08:13:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:13:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:13:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:13:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:13:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:13:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:13:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 598 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Dec 06 08:13:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:18.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:18 compute-0 podman[395454]: 2025-12-06 08:13:18.072805683 +0000 UTC m=+0.051713737 container create 651fe7e502ea92a54115242387ed6731415ea4c3e34bbf9675e6a366f03ca56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:13:18 compute-0 podman[395454]: 2025-12-06 08:13:18.043791844 +0000 UTC m=+0.022699898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:13:18 compute-0 systemd[1]: Started libpod-conmon-651fe7e502ea92a54115242387ed6731415ea4c3e34bbf9675e6a366f03ca56a.scope.
Dec 06 08:13:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:13:18 compute-0 podman[395454]: 2025-12-06 08:13:18.300506408 +0000 UTC m=+0.279414482 container init 651fe7e502ea92a54115242387ed6731415ea4c3e34bbf9675e6a366f03ca56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 08:13:18 compute-0 podman[395454]: 2025-12-06 08:13:18.309634186 +0000 UTC m=+0.288542240 container start 651fe7e502ea92a54115242387ed6731415ea4c3e34bbf9675e6a366f03ca56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:13:18 compute-0 podman[395454]: 2025-12-06 08:13:18.313834451 +0000 UTC m=+0.292742525 container attach 651fe7e502ea92a54115242387ed6731415ea4c3e34bbf9675e6a366f03ca56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:13:18 compute-0 determined_booth[395470]: 167 167
Dec 06 08:13:18 compute-0 systemd[1]: libpod-651fe7e502ea92a54115242387ed6731415ea4c3e34bbf9675e6a366f03ca56a.scope: Deactivated successfully.
Dec 06 08:13:18 compute-0 podman[395454]: 2025-12-06 08:13:18.316803381 +0000 UTC m=+0.295711425 container died 651fe7e502ea92a54115242387ed6731415ea4c3e34bbf9675e6a366f03ca56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:13:18 compute-0 nova_compute[251992]: 2025-12-06 08:13:18.339 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:13:18
Dec 06 08:13:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:13:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:13:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', '.mgr', 'images', 'volumes', 'vms', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta']
Dec 06 08:13:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-fecc9bf3e61c13b5407ce2e2bcdd14828048cd1569000fa74d529d2b13732f31-merged.mount: Deactivated successfully.
Dec 06 08:13:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:18.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:18 compute-0 podman[395454]: 2025-12-06 08:13:18.818361427 +0000 UTC m=+0.797269471 container remove 651fe7e502ea92a54115242387ed6731415ea4c3e34bbf9675e6a366f03ca56a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 08:13:18 compute-0 systemd[1]: libpod-conmon-651fe7e502ea92a54115242387ed6731415ea4c3e34bbf9675e6a366f03ca56a.scope: Deactivated successfully.
Dec 06 08:13:18 compute-0 nova_compute[251992]: 2025-12-06 08:13:18.941 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:18 compute-0 podman[395495]: 2025-12-06 08:13:18.988490549 +0000 UTC m=+0.051571703 container create 3271652a1ec729761d696456086e65688dfba2323e49a9e6fc62bc16d79db3f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 08:13:19 compute-0 systemd[1]: Started libpod-conmon-3271652a1ec729761d696456086e65688dfba2323e49a9e6fc62bc16d79db3f8.scope.
Dec 06 08:13:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:13:19 compute-0 podman[395495]: 2025-12-06 08:13:18.957517797 +0000 UTC m=+0.020598981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cc8c253ca4cba0b7ef0acd0d241e83056187fb9f89b6da47e2f3e0c5fecade/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cc8c253ca4cba0b7ef0acd0d241e83056187fb9f89b6da47e2f3e0c5fecade/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cc8c253ca4cba0b7ef0acd0d241e83056187fb9f89b6da47e2f3e0c5fecade/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cc8c253ca4cba0b7ef0acd0d241e83056187fb9f89b6da47e2f3e0c5fecade/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cc8c253ca4cba0b7ef0acd0d241e83056187fb9f89b6da47e2f3e0c5fecade/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:19 compute-0 podman[395495]: 2025-12-06 08:13:19.080891479 +0000 UTC m=+0.143972653 container init 3271652a1ec729761d696456086e65688dfba2323e49a9e6fc62bc16d79db3f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wiles, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 08:13:19 compute-0 podman[395495]: 2025-12-06 08:13:19.089171814 +0000 UTC m=+0.152252968 container start 3271652a1ec729761d696456086e65688dfba2323e49a9e6fc62bc16d79db3f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 08:13:19 compute-0 podman[395495]: 2025-12-06 08:13:19.107793819 +0000 UTC m=+0.170874973 container attach 3271652a1ec729761d696456086e65688dfba2323e49a9e6fc62bc16d79db3f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wiles, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 08:13:19 compute-0 ceph-mon[74339]: pgmap v3620: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 598 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Dec 06 08:13:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 598 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Dec 06 08:13:19 compute-0 confident_wiles[395512]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:13:19 compute-0 confident_wiles[395512]: --> relative data size: 1.0
Dec 06 08:13:19 compute-0 confident_wiles[395512]: --> All data devices are unavailable
Dec 06 08:13:19 compute-0 systemd[1]: libpod-3271652a1ec729761d696456086e65688dfba2323e49a9e6fc62bc16d79db3f8.scope: Deactivated successfully.
Dec 06 08:13:19 compute-0 podman[395495]: 2025-12-06 08:13:19.943870953 +0000 UTC m=+1.006952107 container died 3271652a1ec729761d696456086e65688dfba2323e49a9e6fc62bc16d79db3f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:13:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:20.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-97cc8c253ca4cba0b7ef0acd0d241e83056187fb9f89b6da47e2f3e0c5fecade-merged.mount: Deactivated successfully.
Dec 06 08:13:20 compute-0 podman[395495]: 2025-12-06 08:13:20.338299218 +0000 UTC m=+1.401380372 container remove 3271652a1ec729761d696456086e65688dfba2323e49a9e6fc62bc16d79db3f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 08:13:20 compute-0 sudo[395389]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:20 compute-0 systemd[1]: libpod-conmon-3271652a1ec729761d696456086e65688dfba2323e49a9e6fc62bc16d79db3f8.scope: Deactivated successfully.
Dec 06 08:13:20 compute-0 sudo[395539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:20 compute-0 sudo[395539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:20 compute-0 sudo[395539]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:20 compute-0 sudo[395564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:13:20 compute-0 sudo[395564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:20 compute-0 sudo[395564]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:20 compute-0 sudo[395589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:20 compute-0 sudo[395589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:20 compute-0 sudo[395589]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:20 compute-0 sudo[395614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:13:20 compute-0 sudo[395614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:20.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:21 compute-0 podman[395679]: 2025-12-06 08:13:20.924753169 +0000 UTC m=+0.020881719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:13:21 compute-0 podman[395679]: 2025-12-06 08:13:21.093169354 +0000 UTC m=+0.189297884 container create 82100bd50fa596fef5f4fb175563c71d33f5f019b20d1d8f29e4178ef9375296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:13:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1773110591' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:13:21 compute-0 systemd[1]: Started libpod-conmon-82100bd50fa596fef5f4fb175563c71d33f5f019b20d1d8f29e4178ef9375296.scope.
Dec 06 08:13:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:13:21 compute-0 podman[395679]: 2025-12-06 08:13:21.253387157 +0000 UTC m=+0.349515687 container init 82100bd50fa596fef5f4fb175563c71d33f5f019b20d1d8f29e4178ef9375296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wiles, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:13:21 compute-0 podman[395679]: 2025-12-06 08:13:21.260167211 +0000 UTC m=+0.356295771 container start 82100bd50fa596fef5f4fb175563c71d33f5f019b20d1d8f29e4178ef9375296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wiles, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:13:21 compute-0 dreamy_wiles[395696]: 167 167
Dec 06 08:13:21 compute-0 systemd[1]: libpod-82100bd50fa596fef5f4fb175563c71d33f5f019b20d1d8f29e4178ef9375296.scope: Deactivated successfully.
Dec 06 08:13:21 compute-0 conmon[395696]: conmon 82100bd50fa596fef5f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82100bd50fa596fef5f4fb175563c71d33f5f019b20d1d8f29e4178ef9375296.scope/container/memory.events
Dec 06 08:13:21 compute-0 podman[395679]: 2025-12-06 08:13:21.429260074 +0000 UTC m=+0.525388604 container attach 82100bd50fa596fef5f4fb175563c71d33f5f019b20d1d8f29e4178ef9375296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:13:21 compute-0 podman[395679]: 2025-12-06 08:13:21.430426546 +0000 UTC m=+0.526555066 container died 82100bd50fa596fef5f4fb175563c71d33f5f019b20d1d8f29e4178ef9375296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wiles, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:13:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-903567adc0ea628148a6fe048509604bee874a379b0e50abf82d37010c59cb3f-merged.mount: Deactivated successfully.
Dec 06 08:13:21 compute-0 podman[395679]: 2025-12-06 08:13:21.554210719 +0000 UTC m=+0.650339249 container remove 82100bd50fa596fef5f4fb175563c71d33f5f019b20d1d8f29e4178ef9375296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_wiles, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 08:13:21 compute-0 systemd[1]: libpod-conmon-82100bd50fa596fef5f4fb175563c71d33f5f019b20d1d8f29e4178ef9375296.scope: Deactivated successfully.
Dec 06 08:13:21 compute-0 podman[395720]: 2025-12-06 08:13:21.705277882 +0000 UTC m=+0.035876155 container create 76d0b59ecc52be702d78b68927b2eb74e74d3cd199c64ef6b9818fb8a2f0b2ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:13:21 compute-0 systemd[1]: Started libpod-conmon-76d0b59ecc52be702d78b68927b2eb74e74d3cd199c64ef6b9818fb8a2f0b2ae.scope.
Dec 06 08:13:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80b1800215e6f09ef344dab3ad7afe90354f96a7dbfde028e681ceba0ddcbcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80b1800215e6f09ef344dab3ad7afe90354f96a7dbfde028e681ceba0ddcbcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80b1800215e6f09ef344dab3ad7afe90354f96a7dbfde028e681ceba0ddcbcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80b1800215e6f09ef344dab3ad7afe90354f96a7dbfde028e681ceba0ddcbcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 320 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 908 KiB/s rd, 3.9 MiB/s wr, 130 op/s
Dec 06 08:13:21 compute-0 podman[395720]: 2025-12-06 08:13:21.689882605 +0000 UTC m=+0.020480898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:13:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:22.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:13:22.251 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=92, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=91) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:13:22 compute-0 nova_compute[251992]: 2025-12-06 08:13:22.251 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:13:22.255 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:13:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:22.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:23 compute-0 nova_compute[251992]: 2025-12-06 08:13:23.341 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:23 compute-0 podman[395720]: 2025-12-06 08:13:23.60169572 +0000 UTC m=+1.932294023 container init 76d0b59ecc52be702d78b68927b2eb74e74d3cd199c64ef6b9818fb8a2f0b2ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_khorana, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:13:23 compute-0 podman[395720]: 2025-12-06 08:13:23.612302249 +0000 UTC m=+1.942900532 container start 76d0b59ecc52be702d78b68927b2eb74e74d3cd199c64ef6b9818fb8a2f0b2ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:13:23 compute-0 podman[395720]: 2025-12-06 08:13:23.62521635 +0000 UTC m=+1.955814623 container attach 76d0b59ecc52be702d78b68927b2eb74e74d3cd199c64ef6b9818fb8a2f0b2ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 08:13:23 compute-0 ceph-mon[74339]: pgmap v3621: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 598 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Dec 06 08:13:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1710177433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:13:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:13:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 320 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 803 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Dec 06 08:13:23 compute-0 nova_compute[251992]: 2025-12-06 08:13:23.943 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:24.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]: {
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:     "0": [
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:         {
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             "devices": [
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "/dev/loop3"
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             ],
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             "lv_name": "ceph_lv0",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             "lv_size": "7511998464",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             "name": "ceph_lv0",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             "tags": {
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "ceph.cluster_name": "ceph",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "ceph.crush_device_class": "",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "ceph.encrypted": "0",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "ceph.osd_id": "0",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "ceph.type": "block",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:                 "ceph.vdo": "0"
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             },
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             "type": "block",
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:             "vg_name": "ceph_vg0"
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:         }
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]:     ]
Dec 06 08:13:24 compute-0 upbeat_khorana[395736]: }
Dec 06 08:13:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:24 compute-0 systemd[1]: libpod-76d0b59ecc52be702d78b68927b2eb74e74d3cd199c64ef6b9818fb8a2f0b2ae.scope: Deactivated successfully.
Dec 06 08:13:24 compute-0 conmon[395736]: conmon 76d0b59ecc52be702d78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76d0b59ecc52be702d78b68927b2eb74e74d3cd199c64ef6b9818fb8a2f0b2ae.scope/container/memory.events
Dec 06 08:13:24 compute-0 podman[395720]: 2025-12-06 08:13:24.457367636 +0000 UTC m=+2.787965949 container died 76d0b59ecc52be702d78b68927b2eb74e74d3cd199c64ef6b9818fb8a2f0b2ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_khorana, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 08:13:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b80b1800215e6f09ef344dab3ad7afe90354f96a7dbfde028e681ceba0ddcbcd-merged.mount: Deactivated successfully.
Dec 06 08:13:24 compute-0 podman[395720]: 2025-12-06 08:13:24.660687649 +0000 UTC m=+2.991285942 container remove 76d0b59ecc52be702d78b68927b2eb74e74d3cd199c64ef6b9818fb8a2f0b2ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec 06 08:13:24 compute-0 systemd[1]: libpod-conmon-76d0b59ecc52be702d78b68927b2eb74e74d3cd199c64ef6b9818fb8a2f0b2ae.scope: Deactivated successfully.
Dec 06 08:13:24 compute-0 sudo[395614]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:24 compute-0 ceph-mon[74339]: pgmap v3622: 305 pgs: 305 active+clean; 320 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 908 KiB/s rd, 3.9 MiB/s wr, 130 op/s
Dec 06 08:13:24 compute-0 ceph-mon[74339]: pgmap v3623: 305 pgs: 305 active+clean; 320 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 803 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Dec 06 08:13:24 compute-0 sudo[395764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:24 compute-0 sudo[395764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:24 compute-0 sudo[395764]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:24.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:24 compute-0 sudo[395789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:13:24 compute-0 sudo[395789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:24 compute-0 sudo[395789]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:24 compute-0 sudo[395814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:24 compute-0 sudo[395814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:24 compute-0 sudo[395814]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:24 compute-0 sudo[395839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:13:24 compute-0 sudo[395839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:25 compute-0 podman[395905]: 2025-12-06 08:13:25.232565215 +0000 UTC m=+0.023963802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:13:25 compute-0 podman[395905]: 2025-12-06 08:13:25.635386498 +0000 UTC m=+0.426784995 container create 34e7136c34268b1614d556488d6e9306fa883d62f20900a639b03f4d813312db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_herschel, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 08:13:25 compute-0 systemd[1]: Started libpod-conmon-34e7136c34268b1614d556488d6e9306fa883d62f20900a639b03f4d813312db.scope.
Dec 06 08:13:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:13:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 888 KiB/s rd, 3.6 MiB/s wr, 148 op/s
Dec 06 08:13:25 compute-0 podman[395905]: 2025-12-06 08:13:25.966769911 +0000 UTC m=+0.758168438 container init 34e7136c34268b1614d556488d6e9306fa883d62f20900a639b03f4d813312db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_herschel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:13:25 compute-0 podman[395905]: 2025-12-06 08:13:25.980054281 +0000 UTC m=+0.771452778 container start 34e7136c34268b1614d556488d6e9306fa883d62f20900a639b03f4d813312db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 08:13:25 compute-0 brave_herschel[395921]: 167 167
Dec 06 08:13:25 compute-0 systemd[1]: libpod-34e7136c34268b1614d556488d6e9306fa883d62f20900a639b03f4d813312db.scope: Deactivated successfully.
Dec 06 08:13:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:26.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:26 compute-0 podman[395905]: 2025-12-06 08:13:26.073647624 +0000 UTC m=+0.865046141 container attach 34e7136c34268b1614d556488d6e9306fa883d62f20900a639b03f4d813312db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_herschel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 08:13:26 compute-0 podman[395905]: 2025-12-06 08:13:26.076962914 +0000 UTC m=+0.868361411 container died 34e7136c34268b1614d556488d6e9306fa883d62f20900a639b03f4d813312db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_herschel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-614f6fb4fe5f22c10738e0459571679797b0099d3de9f92dcb4b43715dbd2e0e-merged.mount: Deactivated successfully.
Dec 06 08:13:26 compute-0 podman[395905]: 2025-12-06 08:13:26.115535022 +0000 UTC m=+0.906933529 container remove 34e7136c34268b1614d556488d6e9306fa883d62f20900a639b03f4d813312db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_herschel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:13:26 compute-0 systemd[1]: libpod-conmon-34e7136c34268b1614d556488d6e9306fa883d62f20900a639b03f4d813312db.scope: Deactivated successfully.
Dec 06 08:13:26 compute-0 podman[395946]: 2025-12-06 08:13:26.267843359 +0000 UTC m=+0.021812904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004240313256095292 of space, bias 1.0, pg target 1.2720939768285875 quantized to 32 (current 32)
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.6464803764191328 quantized to 32 (current 32)
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:13:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 08:13:26 compute-0 podman[395946]: 2025-12-06 08:13:26.758397926 +0000 UTC m=+0.512367461 container create 4efa3c3b726740b5354df6d4c14cae7f5bce8b77b0e7e65030df080a2c33c3c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:13:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:26.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:26 compute-0 systemd[1]: Started libpod-conmon-4efa3c3b726740b5354df6d4c14cae7f5bce8b77b0e7e65030df080a2c33c3c9.scope.
Dec 06 08:13:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7389d8c8ac79d3721fe5d4964903494e53ec969d8641bc0fc9e02d13edf14ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7389d8c8ac79d3721fe5d4964903494e53ec969d8641bc0fc9e02d13edf14ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7389d8c8ac79d3721fe5d4964903494e53ec969d8641bc0fc9e02d13edf14ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7389d8c8ac79d3721fe5d4964903494e53ec969d8641bc0fc9e02d13edf14ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:26 compute-0 podman[395946]: 2025-12-06 08:13:26.993434631 +0000 UTC m=+0.747404196 container init 4efa3c3b726740b5354df6d4c14cae7f5bce8b77b0e7e65030df080a2c33c3c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:13:27 compute-0 podman[395946]: 2025-12-06 08:13:27.000048471 +0000 UTC m=+0.754018026 container start 4efa3c3b726740b5354df6d4c14cae7f5bce8b77b0e7e65030df080a2c33c3c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_benz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:13:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:13:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:13:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:13:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:13:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:13:27 compute-0 podman[395946]: 2025-12-06 08:13:27.400976943 +0000 UTC m=+1.154946578 container attach 4efa3c3b726740b5354df6d4c14cae7f5bce8b77b0e7e65030df080a2c33c3c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_benz, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 08:13:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 775 KiB/s rd, 2.4 MiB/s wr, 118 op/s
Dec 06 08:13:27 compute-0 angry_benz[395963]: {
Dec 06 08:13:27 compute-0 angry_benz[395963]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:13:27 compute-0 angry_benz[395963]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:13:27 compute-0 angry_benz[395963]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:13:27 compute-0 angry_benz[395963]:         "osd_id": 0,
Dec 06 08:13:27 compute-0 angry_benz[395963]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:13:27 compute-0 angry_benz[395963]:         "type": "bluestore"
Dec 06 08:13:27 compute-0 angry_benz[395963]:     }
Dec 06 08:13:27 compute-0 angry_benz[395963]: }
Dec 06 08:13:27 compute-0 systemd[1]: libpod-4efa3c3b726740b5354df6d4c14cae7f5bce8b77b0e7e65030df080a2c33c3c9.scope: Deactivated successfully.
Dec 06 08:13:27 compute-0 podman[395946]: 2025-12-06 08:13:27.916972501 +0000 UTC m=+1.670942036 container died 4efa3c3b726740b5354df6d4c14cae7f5bce8b77b0e7e65030df080a2c33c3c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7389d8c8ac79d3721fe5d4964903494e53ec969d8641bc0fc9e02d13edf14ce-merged.mount: Deactivated successfully.
Dec 06 08:13:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:28.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:28 compute-0 podman[395946]: 2025-12-06 08:13:28.129246447 +0000 UTC m=+1.883215982 container remove 4efa3c3b726740b5354df6d4c14cae7f5bce8b77b0e7e65030df080a2c33c3c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:13:28 compute-0 systemd[1]: libpod-conmon-4efa3c3b726740b5354df6d4c14cae7f5bce8b77b0e7e65030df080a2c33c3c9.scope: Deactivated successfully.
Dec 06 08:13:28 compute-0 sudo[395839]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:13:28 compute-0 nova_compute[251992]: 2025-12-06 08:13:28.344 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:28.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:28 compute-0 nova_compute[251992]: 2025-12-06 08:13:28.945 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 553 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Dec 06 08:13:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:30.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:30.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Dec 06 08:13:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:32.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:32 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:13:32.257 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '92'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:13:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:32.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:33 compute-0 nova_compute[251992]: 2025-12-06 08:13:33.345 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 32 KiB/s wr, 111 op/s
Dec 06 08:13:33 compute-0 nova_compute[251992]: 2025-12-06 08:13:33.947 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:34.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:34.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 32 KiB/s wr, 111 op/s
Dec 06 08:13:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:13:35 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 08:13:35 compute-0 ceph-mon[74339]: paxos.0).electionLogic(63) init, last seen epoch 63, mid-election, bumping
Dec 06 08:13:35 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 08:13:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:36.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:36 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 08:13:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:36.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 72 op/s
Dec 06 08:13:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:38.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:38 compute-0 nova_compute[251992]: 2025-12-06 08:13:38.348 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:38 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:38.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:38 compute-0 nova_compute[251992]: 2025-12-06 08:13:38.949 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:39 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:39 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 511 B/s wr, 63 op/s
Dec 06 08:13:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:40.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:40 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:40 compute-0 podman[396004]: 2025-12-06 08:13:40.443827168 +0000 UTC m=+0.095679911 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:13:40 compute-0 ceph-mds[92997]: mds.beacon.cephfs.compute-0.qqwnku missed beacon ack from the monitors
Dec 06 08:13:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:40.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 06 08:13:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 08:13:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 08:13:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 08:13:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Dec 06 08:13:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 107m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 08:13:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Dec 06 08:13:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Dec 06 08:13:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Dec 06 08:13:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Dec 06 08:13:41 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:13:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:13:41 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:13:41 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev eca101e5-b4b0-44f6-8adf-9a1dea051956 does not exist
Dec 06 08:13:41 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8fa9268e-2f43-46c3-a9c1-aa5468a170db does not exist
Dec 06 08:13:41 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4fdbcd6c-594c-46d6-b72f-5a349ef74c2e does not exist
Dec 06 08:13:41 compute-0 sudo[396033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:41 compute-0 sudo[396032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:41 compute-0 sudo[396033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:41 compute-0 sudo[396032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:41 compute-0 sudo[396033]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:41 compute-0 sudo[396032]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:41 compute-0 sudo[396082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:13:41 compute-0 sudo[396082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:41 compute-0 sudo[396082]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:41 compute-0 sudo[396083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:13:41 compute-0 sudo[396083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:13:41 compute-0 sudo[396083]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec 06 08:13:41 compute-0 ceph-mon[74339]: paxos.0).electionLogic(66) init, last seen epoch 66
Dec 06 08:13:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 268 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 91 op/s
Dec 06 08:13:41 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:42 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:42.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:42 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec 06 08:13:42 compute-0 ceph-mon[74339]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 08:13:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 08:13:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:42.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:13:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:13:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:13:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:13:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:13:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:13:43 compute-0 nova_compute[251992]: 2025-12-06 08:13:43.351 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:43 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 08:13:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 08:13:43 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 08:13:43 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Dec 06 08:13:43 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.sfzyix(active, since 107m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 08:13:43 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 08:13:43 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 06 08:13:43 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 08:13:43 compute-0 ceph-mon[74339]: mon.compute-1 calling monitor election
Dec 06 08:13:43 compute-0 ceph-mon[74339]: mon.compute-0 calling monitor election
Dec 06 08:13:43 compute-0 ceph-mon[74339]: mon.compute-2 calling monitor election
Dec 06 08:13:43 compute-0 ceph-mon[74339]: pgmap v3632: 305 pgs: 305 active+clean; 268 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 91 op/s
Dec 06 08:13:43 compute-0 ceph-mon[74339]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 06 08:13:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/138621338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:43 compute-0 ceph-mon[74339]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec 06 08:13:43 compute-0 ceph-mon[74339]: fsmap cephfs:1 {0=cephfs.compute-0.qqwnku=up:active} 2 up:standby
Dec 06 08:13:43 compute-0 ceph-mon[74339]: osdmap e416: 3 total, 3 up, 3 in
Dec 06 08:13:43 compute-0 ceph-mon[74339]: mgrmap e11: compute-0.sfzyix(active, since 107m), standbys: compute-2.ytlehq, compute-1.nmklwp
Dec 06 08:13:43 compute-0 ceph-mon[74339]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Dec 06 08:13:43 compute-0 ceph-mon[74339]: Cluster is now healthy
Dec 06 08:13:43 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 08:13:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3746927814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 268 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 2.0 MiB/s wr, 28 op/s
Dec 06 08:13:43 compute-0 nova_compute[251992]: 2025-12-06 08:13:43.951 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:44.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:44.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 272 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 194 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Dec 06 08:13:45 compute-0 ceph-mon[74339]: pgmap v3633: 305 pgs: 305 active+clean; 268 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 2.0 MiB/s wr, 28 op/s
Dec 06 08:13:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3413032714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:46.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:46 compute-0 podman[396134]: 2025-12-06 08:13:46.403273111 +0000 UTC m=+0.056872096 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 08:13:46 compute-0 podman[396135]: 2025-12-06 08:13:46.417301352 +0000 UTC m=+0.068099990 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 08:13:46 compute-0 nova_compute[251992]: 2025-12-06 08:13:46.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:46 compute-0 nova_compute[251992]: 2025-12-06 08:13:46.715 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:46 compute-0 nova_compute[251992]: 2025-12-06 08:13:46.716 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:46 compute-0 nova_compute[251992]: 2025-12-06 08:13:46.716 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:46 compute-0 nova_compute[251992]: 2025-12-06 08:13:46.716 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:13:46 compute-0 nova_compute[251992]: 2025-12-06 08:13:46.717 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:13:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:46.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:13:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3656757550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:47 compute-0 nova_compute[251992]: 2025-12-06 08:13:47.189 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:13:47 compute-0 nova_compute[251992]: 2025-12-06 08:13:47.304 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:13:47 compute-0 nova_compute[251992]: 2025-12-06 08:13:47.304 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:13:47 compute-0 nova_compute[251992]: 2025-12-06 08:13:47.463 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:13:47 compute-0 nova_compute[251992]: 2025-12-06 08:13:47.465 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3973MB free_disk=20.897945404052734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:13:47 compute-0 nova_compute[251992]: 2025-12-06 08:13:47.465 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:47 compute-0 nova_compute[251992]: 2025-12-06 08:13:47.465 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:47 compute-0 nova_compute[251992]: 2025-12-06 08:13:47.566 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 5ae83b01-2f96-47ca-92c5-66f9500be47d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:13:47 compute-0 nova_compute[251992]: 2025-12-06 08:13:47.566 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:13:47 compute-0 nova_compute[251992]: 2025-12-06 08:13:47.566 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:13:47 compute-0 nova_compute[251992]: 2025-12-06 08:13:47.609 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:13:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 273 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 241 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Dec 06 08:13:47 compute-0 ceph-mon[74339]: pgmap v3634: 305 pgs: 305 active+clean; 272 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 194 KiB/s rd, 2.1 MiB/s wr, 45 op/s
Dec 06 08:13:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3656757550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:13:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3586938534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:48 compute-0 nova_compute[251992]: 2025-12-06 08:13:48.054 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:13:48 compute-0 nova_compute[251992]: 2025-12-06 08:13:48.059 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:13:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:48.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:48 compute-0 nova_compute[251992]: 2025-12-06 08:13:48.260 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:13:48 compute-0 nova_compute[251992]: 2025-12-06 08:13:48.284 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:13:48 compute-0 nova_compute[251992]: 2025-12-06 08:13:48.285 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:48 compute-0 nova_compute[251992]: 2025-12-06 08:13:48.354 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:48.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:48 compute-0 ceph-mon[74339]: pgmap v3635: 305 pgs: 305 active+clean; 273 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 241 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Dec 06 08:13:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3586938534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:48 compute-0 nova_compute[251992]: 2025-12-06 08:13:48.953 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:49 compute-0 nova_compute[251992]: 2025-12-06 08:13:49.279 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 273 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 241 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Dec 06 08:13:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:50.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:50 compute-0 nova_compute[251992]: 2025-12-06 08:13:50.177 251996 INFO nova.compute.manager [None req-c6fcc5b9-e3a6-4832-b0f1-f45b3dc1d1e1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Get console output
Dec 06 08:13:50 compute-0 nova_compute[251992]: 2025-12-06 08:13:50.184 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:13:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:50.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:51 compute-0 ceph-mon[74339]: pgmap v3636: 305 pgs: 305 active+clean; 273 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 241 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Dec 06 08:13:51 compute-0 nova_compute[251992]: 2025-12-06 08:13:51.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 06 08:13:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:52.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:52 compute-0 nova_compute[251992]: 2025-12-06 08:13:52.330 251996 DEBUG nova.compute.manager [req-a79b88f6-9a11-40cb-bdb9-424f804592de req-e566e887-f798-4b76-b0b2-515a870baded 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-changed-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:52 compute-0 nova_compute[251992]: 2025-12-06 08:13:52.331 251996 DEBUG nova.compute.manager [req-a79b88f6-9a11-40cb-bdb9-424f804592de req-e566e887-f798-4b76-b0b2-515a870baded 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Refreshing instance network info cache due to event network-changed-26d5b067-93d9-4736-a51f-3d695f40988b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:13:52 compute-0 nova_compute[251992]: 2025-12-06 08:13:52.331 251996 DEBUG oslo_concurrency.lockutils [req-a79b88f6-9a11-40cb-bdb9-424f804592de req-e566e887-f798-4b76-b0b2-515a870baded 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:13:52 compute-0 nova_compute[251992]: 2025-12-06 08:13:52.331 251996 DEBUG oslo_concurrency.lockutils [req-a79b88f6-9a11-40cb-bdb9-424f804592de req-e566e887-f798-4b76-b0b2-515a870baded 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:13:52 compute-0 nova_compute[251992]: 2025-12-06 08:13:52.331 251996 DEBUG nova.network.neutron [req-a79b88f6-9a11-40cb-bdb9-424f804592de req-e566e887-f798-4b76-b0b2-515a870baded 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Refreshing network info cache for port 26d5b067-93d9-4736-a51f-3d695f40988b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:13:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:52.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:53 compute-0 nova_compute[251992]: 2025-12-06 08:13:53.356 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:53 compute-0 nova_compute[251992]: 2025-12-06 08:13:53.465 251996 DEBUG nova.compute.manager [req-ab46c2f3-014d-44eb-a080-08002f6f5bc1 req-b39bcabb-e673-4117-b634-6a33fe1ad97e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-vif-unplugged-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:53 compute-0 nova_compute[251992]: 2025-12-06 08:13:53.465 251996 DEBUG oslo_concurrency.lockutils [req-ab46c2f3-014d-44eb-a080-08002f6f5bc1 req-b39bcabb-e673-4117-b634-6a33fe1ad97e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:53 compute-0 nova_compute[251992]: 2025-12-06 08:13:53.465 251996 DEBUG oslo_concurrency.lockutils [req-ab46c2f3-014d-44eb-a080-08002f6f5bc1 req-b39bcabb-e673-4117-b634-6a33fe1ad97e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:53 compute-0 nova_compute[251992]: 2025-12-06 08:13:53.465 251996 DEBUG oslo_concurrency.lockutils [req-ab46c2f3-014d-44eb-a080-08002f6f5bc1 req-b39bcabb-e673-4117-b634-6a33fe1ad97e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:53 compute-0 nova_compute[251992]: 2025-12-06 08:13:53.466 251996 DEBUG nova.compute.manager [req-ab46c2f3-014d-44eb-a080-08002f6f5bc1 req-b39bcabb-e673-4117-b634-6a33fe1ad97e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] No waiting events found dispatching network-vif-unplugged-26d5b067-93d9-4736-a51f-3d695f40988b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:13:53 compute-0 nova_compute[251992]: 2025-12-06 08:13:53.466 251996 WARNING nova.compute.manager [req-ab46c2f3-014d-44eb-a080-08002f6f5bc1 req-b39bcabb-e673-4117-b634-6a33fe1ad97e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received unexpected event network-vif-unplugged-26d5b067-93d9-4736-a51f-3d695f40988b for instance with vm_state active and task_state None.
Dec 06 08:13:53 compute-0 nova_compute[251992]: 2025-12-06 08:13:53.467 251996 INFO nova.compute.manager [None req-d6c3a4f9-3988-47c2-b676-9e0270472162 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Get console output
Dec 06 08:13:53 compute-0 nova_compute[251992]: 2025-12-06 08:13:53.471 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:13:53 compute-0 ceph-mon[74339]: pgmap v3637: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 06 08:13:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2634793710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 262 KiB/s rd, 97 KiB/s wr, 39 op/s
Dec 06 08:13:53 compute-0 nova_compute[251992]: 2025-12-06 08:13:53.954 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:54 compute-0 ovn_controller[147168]: 2025-12-06T08:13:54Z|00767|binding|INFO|Releasing lport df0319da-86fa-419b-bb2d-0ca654179487 from this chassis (sb_readonly=0)
Dec 06 08:13:54 compute-0 ceph-mon[74339]: pgmap v3638: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 262 KiB/s rd, 97 KiB/s wr, 39 op/s
Dec 06 08:13:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2605282351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:13:54 compute-0 nova_compute[251992]: 2025-12-06 08:13:54.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:54 compute-0 nova_compute[251992]: 2025-12-06 08:13:54.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:13:54 compute-0 nova_compute[251992]: 2025-12-06 08:13:54.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:13:54 compute-0 nova_compute[251992]: 2025-12-06 08:13:54.680 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:13:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:54.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:13:54 compute-0 nova_compute[251992]: 2025-12-06 08:13:54.846 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:13:54 compute-0 nova_compute[251992]: 2025-12-06 08:13:54.892 251996 DEBUG nova.network.neutron [req-a79b88f6-9a11-40cb-bdb9-424f804592de req-e566e887-f798-4b76-b0b2-515a870baded 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updated VIF entry in instance network info cache for port 26d5b067-93d9-4736-a51f-3d695f40988b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:13:54 compute-0 nova_compute[251992]: 2025-12-06 08:13:54.893 251996 DEBUG nova.network.neutron [req-a79b88f6-9a11-40cb-bdb9-424f804592de req-e566e887-f798-4b76-b0b2-515a870baded 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updating instance_info_cache with network_info: [{"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:13:54 compute-0 nova_compute[251992]: 2025-12-06 08:13:54.909 251996 DEBUG oslo_concurrency.lockutils [req-a79b88f6-9a11-40cb-bdb9-424f804592de req-e566e887-f798-4b76-b0b2-515a870baded 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:13:54 compute-0 nova_compute[251992]: 2025-12-06 08:13:54.910 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:13:54 compute-0 nova_compute[251992]: 2025-12-06 08:13:54.910 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:13:54 compute-0 nova_compute[251992]: 2025-12-06 08:13:54.910 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5ae83b01-2f96-47ca-92c5-66f9500be47d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.185 251996 DEBUG nova.compute.manager [req-8936af3c-7bb7-4805-8ae6-3ee4217e444d req-9fad85b3-a8cc-4438-82b4-735d7de56d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-changed-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.185 251996 DEBUG nova.compute.manager [req-8936af3c-7bb7-4805-8ae6-3ee4217e444d req-9fad85b3-a8cc-4438-82b4-735d7de56d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Refreshing instance network info cache due to event network-changed-26d5b067-93d9-4736-a51f-3d695f40988b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.185 251996 DEBUG oslo_concurrency.lockutils [req-8936af3c-7bb7-4805-8ae6-3ee4217e444d req-9fad85b3-a8cc-4438-82b4-735d7de56d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.436 251996 INFO nova.compute.manager [None req-9617a79e-d49d-477c-ba3d-9865ca8fc6e1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Get console output
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.441 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.548 251996 DEBUG nova.compute.manager [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.548 251996 DEBUG oslo_concurrency.lockutils [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.549 251996 DEBUG oslo_concurrency.lockutils [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.549 251996 DEBUG oslo_concurrency.lockutils [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.549 251996 DEBUG nova.compute.manager [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] No waiting events found dispatching network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.549 251996 WARNING nova.compute.manager [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received unexpected event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b for instance with vm_state active and task_state None.
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.549 251996 DEBUG nova.compute.manager [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.549 251996 DEBUG oslo_concurrency.lockutils [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.550 251996 DEBUG oslo_concurrency.lockutils [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.550 251996 DEBUG oslo_concurrency.lockutils [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.550 251996 DEBUG nova.compute.manager [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] No waiting events found dispatching network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.550 251996 WARNING nova.compute.manager [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received unexpected event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b for instance with vm_state active and task_state None.
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.550 251996 DEBUG nova.compute.manager [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.550 251996 DEBUG oslo_concurrency.lockutils [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.551 251996 DEBUG oslo_concurrency.lockutils [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.551 251996 DEBUG oslo_concurrency.lockutils [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.551 251996 DEBUG nova.compute.manager [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] No waiting events found dispatching network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:13:55 compute-0 nova_compute[251992]: 2025-12-06 08:13:55.551 251996 WARNING nova.compute.manager [req-534cee29-4336-48b3-bbc6-dc058efbb5b0 req-362c7d5f-8d37-4278-9386-d37b4b0c0c74 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received unexpected event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b for instance with vm_state active and task_state None.
Dec 06 08:13:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 267 KiB/s rd, 100 KiB/s wr, 40 op/s
Dec 06 08:13:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:56.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:13:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:56.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:13:56 compute-0 nova_compute[251992]: 2025-12-06 08:13:56.838 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updating instance_info_cache with network_info: [{"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:13:56 compute-0 nova_compute[251992]: 2025-12-06 08:13:56.858 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:13:56 compute-0 nova_compute[251992]: 2025-12-06 08:13:56.858 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:13:56 compute-0 nova_compute[251992]: 2025-12-06 08:13:56.859 251996 DEBUG oslo_concurrency.lockutils [req-8936af3c-7bb7-4805-8ae6-3ee4217e444d req-9fad85b3-a8cc-4438-82b4-735d7de56d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:13:56 compute-0 nova_compute[251992]: 2025-12-06 08:13:56.859 251996 DEBUG nova.network.neutron [req-8936af3c-7bb7-4805-8ae6-3ee4217e444d req-9fad85b3-a8cc-4438-82b4-735d7de56d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Refreshing network info cache for port 26d5b067-93d9-4736-a51f-3d695f40988b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:13:56 compute-0 ceph-mon[74339]: pgmap v3639: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 267 KiB/s rd, 100 KiB/s wr, 40 op/s
Dec 06 08:13:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 79 KiB/s wr, 24 op/s
Dec 06 08:13:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:13:58.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:58 compute-0 nova_compute[251992]: 2025-12-06 08:13:58.359 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:58 compute-0 nova_compute[251992]: 2025-12-06 08:13:58.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:13:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:13:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:13:58.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:13:58 compute-0 nova_compute[251992]: 2025-12-06 08:13:58.956 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:13:58 compute-0 ceph-mon[74339]: pgmap v3640: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 79 KiB/s wr, 24 op/s
Dec 06 08:13:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:13:59 compute-0 nova_compute[251992]: 2025-12-06 08:13:59.470 251996 DEBUG nova.network.neutron [req-8936af3c-7bb7-4805-8ae6-3ee4217e444d req-9fad85b3-a8cc-4438-82b4-735d7de56d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updated VIF entry in instance network info cache for port 26d5b067-93d9-4736-a51f-3d695f40988b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:13:59 compute-0 nova_compute[251992]: 2025-12-06 08:13:59.470 251996 DEBUG nova.network.neutron [req-8936af3c-7bb7-4805-8ae6-3ee4217e444d req-9fad85b3-a8cc-4438-82b4-735d7de56d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updating instance_info_cache with network_info: [{"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:13:59 compute-0 nova_compute[251992]: 2025-12-06 08:13:59.492 251996 DEBUG oslo_concurrency.lockutils [req-8936af3c-7bb7-4805-8ae6-3ee4217e444d req-9fad85b3-a8cc-4438-82b4-735d7de56d58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:13:59 compute-0 nova_compute[251992]: 2025-12-06 08:13:59.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:59 compute-0 nova_compute[251992]: 2025-12-06 08:13:59.676 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:13:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 33 KiB/s wr, 12 op/s
Dec 06 08:14:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:00.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:00.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:01 compute-0 ceph-mon[74339]: pgmap v3641: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 33 KiB/s wr, 12 op/s
Dec 06 08:14:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/980405708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:01 compute-0 sudo[396221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:01 compute-0 sudo[396221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:01 compute-0 sudo[396221]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:01 compute-0 sudo[396246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:01 compute-0 sudo[396246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:01 compute-0 sudo[396246]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 78 KiB/s rd, 35 KiB/s wr, 40 op/s
Dec 06 08:14:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:02.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.319 251996 DEBUG nova.compute.manager [req-e2fe5ec9-fe99-491a-ad07-87ef39a587ca req-b706c970-b7c4-4d72-97f7-1cfe8b5d7274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-changed-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.319 251996 DEBUG nova.compute.manager [req-e2fe5ec9-fe99-491a-ad07-87ef39a587ca req-b706c970-b7c4-4d72-97f7-1cfe8b5d7274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Refreshing instance network info cache due to event network-changed-26d5b067-93d9-4736-a51f-3d695f40988b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.319 251996 DEBUG oslo_concurrency.lockutils [req-e2fe5ec9-fe99-491a-ad07-87ef39a587ca req-b706c970-b7c4-4d72-97f7-1cfe8b5d7274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.319 251996 DEBUG oslo_concurrency.lockutils [req-e2fe5ec9-fe99-491a-ad07-87ef39a587ca req-b706c970-b7c4-4d72-97f7-1cfe8b5d7274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.320 251996 DEBUG nova.network.neutron [req-e2fe5ec9-fe99-491a-ad07-87ef39a587ca req-b706c970-b7c4-4d72-97f7-1cfe8b5d7274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Refreshing network info cache for port 26d5b067-93d9-4736-a51f-3d695f40988b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.378 251996 DEBUG oslo_concurrency.lockutils [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "5ae83b01-2f96-47ca-92c5-66f9500be47d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.378 251996 DEBUG oslo_concurrency.lockutils [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.378 251996 DEBUG oslo_concurrency.lockutils [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.379 251996 DEBUG oslo_concurrency.lockutils [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.379 251996 DEBUG oslo_concurrency.lockutils [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.380 251996 INFO nova.compute.manager [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Terminating instance
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.381 251996 DEBUG nova.compute.manager [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:14:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:02.417 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=93, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=92) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:14:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:02.418 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.488 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:02 compute-0 kernel: tap26d5b067-93 (unregistering): left promiscuous mode
Dec 06 08:14:02 compute-0 NetworkManager[48965]: <info>  [1765008842.5530] device (tap26d5b067-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:14:02 compute-0 ovn_controller[147168]: 2025-12-06T08:14:02Z|00768|binding|INFO|Releasing lport 26d5b067-93d9-4736-a51f-3d695f40988b from this chassis (sb_readonly=0)
Dec 06 08:14:02 compute-0 ovn_controller[147168]: 2025-12-06T08:14:02Z|00769|binding|INFO|Setting lport 26d5b067-93d9-4736-a51f-3d695f40988b down in Southbound
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.558 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:02 compute-0 ovn_controller[147168]: 2025-12-06T08:14:02Z|00770|binding|INFO|Removing iface tap26d5b067-93 ovn-installed in OVS
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.561 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:02.570 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:90:d8:d3 10.100.0.3'], port_security=['fa:16:3e:90:d8:d3 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5ae83b01-2f96-47ca-92c5-66f9500be47d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a92fb155-cdbe-4539-a1b3-91333eee3cc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c607db35-6f3d-4821-9124-a70a0a233535, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=26d5b067-93d9-4736-a51f-3d695f40988b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:14:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:02.571 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 26d5b067-93d9-4736-a51f-3d695f40988b in datapath 1bf97b73-354e-4df7-9a72-727cdc64dc43 unbound from our chassis
Dec 06 08:14:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:02.572 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1bf97b73-354e-4df7-9a72-727cdc64dc43, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:14:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:02.575 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6178dd43-2116-4cf4-9e60-96b400e057a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:02.576 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43 namespace which is not needed anymore
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.580 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:02 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c8.scope: Deactivated successfully.
Dec 06 08:14:02 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c8.scope: Consumed 16.535s CPU time.
Dec 06 08:14:02 compute-0 systemd-machined[212986]: Machine qemu-91-instance-000000c8 terminated.
Dec 06 08:14:02 compute-0 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[395035]: [NOTICE]   (395041) : haproxy version is 2.8.14-c23fe91
Dec 06 08:14:02 compute-0 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[395035]: [NOTICE]   (395041) : path to executable is /usr/sbin/haproxy
Dec 06 08:14:02 compute-0 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[395035]: [WARNING]  (395041) : Exiting Master process...
Dec 06 08:14:02 compute-0 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[395035]: [ALERT]    (395041) : Current worker (395047) exited with code 143 (Terminated)
Dec 06 08:14:02 compute-0 neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43[395035]: [WARNING]  (395041) : All workers exited. Exiting... (0)
Dec 06 08:14:02 compute-0 systemd[1]: libpod-9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296.scope: Deactivated successfully.
Dec 06 08:14:02 compute-0 podman[396297]: 2025-12-06 08:14:02.725000029 +0000 UTC m=+0.048317463 container died 9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.824 251996 INFO nova.virt.libvirt.driver [-] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Instance destroyed successfully.
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.825 251996 DEBUG nova.objects.instance [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid 5ae83b01-2f96-47ca-92c5-66f9500be47d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296-userdata-shm.mount: Deactivated successfully.
Dec 06 08:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-21c14f94a811e26621198f008b303f7d8198d259d9d68b53bf636bc6512fb502-merged.mount: Deactivated successfully.
Dec 06 08:14:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:02.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.839 251996 DEBUG nova.virt.libvirt.vif [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:12:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-208038318',display_name='tempest-TestNetworkBasicOps-server-208038318',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-208038318',id=200,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO6LpbDGGsqUYLXXtpD4eX1duN1Tec/27La7VKOPMHJKTUnIjUHvsqzc6p8QoFMm0Jsj2gdxIhoQXUry048nNvgtQkx6G0Y3SARRMFBQVqnPVQKM1z95bezOQcgef6h6Ew==',key_name='tempest-TestNetworkBasicOps-240589395',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:12:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-jexoqbsm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:12:57Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=5ae83b01-2f96-47ca-92c5-66f9500be47d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.840 251996 DEBUG nova.network.os_vif_util [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.841 251996 DEBUG nova.network.os_vif_util [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:90:d8:d3,bridge_name='br-int',has_traffic_filtering=True,id=26d5b067-93d9-4736-a51f-3d695f40988b,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d5b067-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.841 251996 DEBUG os_vif [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:d8:d3,bridge_name='br-int',has_traffic_filtering=True,id=26d5b067-93d9-4736-a51f-3d695f40988b,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d5b067-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.844 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.844 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26d5b067-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.845 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.847 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:14:02 compute-0 nova_compute[251992]: 2025-12-06 08:14:02.852 251996 INFO os_vif [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:90:d8:d3,bridge_name='br-int',has_traffic_filtering=True,id=26d5b067-93d9-4736-a51f-3d695f40988b,network=Network(1bf97b73-354e-4df7-9a72-727cdc64dc43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26d5b067-93')
Dec 06 08:14:02 compute-0 podman[396297]: 2025-12-06 08:14:02.883836045 +0000 UTC m=+0.207153479 container cleanup 9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 08:14:02 compute-0 systemd[1]: libpod-conmon-9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296.scope: Deactivated successfully.
Dec 06 08:14:03 compute-0 podman[396355]: 2025-12-06 08:14:03.066854336 +0000 UTC m=+0.161262821 container remove 9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:03.089 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a9296b20-380e-42b3-a2db-110694f803b3]: (4, ('Sat Dec  6 08:14:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43 (9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296)\n9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296\nSat Dec  6 08:14:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43 (9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296)\n9adcc310dbaf1c71057764d9ad8ac83dd53b0b4e5cb56c337145f4d951d45296\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:03 compute-0 nova_compute[251992]: 2025-12-06 08:14:03.092 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:03.091 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f1caecb3-84f8-40a5-a9dd-fbfecbab3ba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:03.094 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bf97b73-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:14:03 compute-0 nova_compute[251992]: 2025-12-06 08:14:03.095 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:03 compute-0 kernel: tap1bf97b73-30: left promiscuous mode
Dec 06 08:14:03 compute-0 nova_compute[251992]: 2025-12-06 08:14:03.109 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:03.112 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[15d566fc-ba7b-4738-a5e3-681a890aa692]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:03.127 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[94fa4d4e-dcbd-408f-8787-208e372dfd71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:03.128 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[46014c02-ec43-46a1-91c0-a1a10e5c163e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:03.143 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9d1e1c73-bc4f-47ef-a954-5d313d620c13]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 904873, 'reachable_time': 38728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396374, 'error': None, 'target': 'ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d1bf97b73\x2d354e\x2d4df7\x2d9a72\x2d727cdc64dc43.mount: Deactivated successfully.
Dec 06 08:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:03.149 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1bf97b73-354e-4df7-9a72-727cdc64dc43 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:03.151 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[cbdf9293-ab21-45d6-a45e-34731267bfbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:03 compute-0 nova_compute[251992]: 2025-12-06 08:14:03.228 251996 DEBUG nova.compute.manager [req-83010659-f76b-455b-b13c-0b3ac02f38f8 req-ed22028b-63f9-4cf6-80e3-6347b977cf8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-vif-unplugged-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:14:03 compute-0 nova_compute[251992]: 2025-12-06 08:14:03.228 251996 DEBUG oslo_concurrency.lockutils [req-83010659-f76b-455b-b13c-0b3ac02f38f8 req-ed22028b-63f9-4cf6-80e3-6347b977cf8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:03 compute-0 nova_compute[251992]: 2025-12-06 08:14:03.229 251996 DEBUG oslo_concurrency.lockutils [req-83010659-f76b-455b-b13c-0b3ac02f38f8 req-ed22028b-63f9-4cf6-80e3-6347b977cf8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:03 compute-0 nova_compute[251992]: 2025-12-06 08:14:03.229 251996 DEBUG oslo_concurrency.lockutils [req-83010659-f76b-455b-b13c-0b3ac02f38f8 req-ed22028b-63f9-4cf6-80e3-6347b977cf8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:03 compute-0 nova_compute[251992]: 2025-12-06 08:14:03.230 251996 DEBUG nova.compute.manager [req-83010659-f76b-455b-b13c-0b3ac02f38f8 req-ed22028b-63f9-4cf6-80e3-6347b977cf8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] No waiting events found dispatching network-vif-unplugged-26d5b067-93d9-4736-a51f-3d695f40988b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:14:03 compute-0 nova_compute[251992]: 2025-12-06 08:14:03.230 251996 DEBUG nova.compute.manager [req-83010659-f76b-455b-b13c-0b3ac02f38f8 req-ed22028b-63f9-4cf6-80e3-6347b977cf8a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-vif-unplugged-26d5b067-93d9-4736-a51f-3d695f40988b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:14:03 compute-0 nova_compute[251992]: 2025-12-06 08:14:03.360 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:03 compute-0 ceph-mon[74339]: pgmap v3642: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 78 KiB/s rd, 35 KiB/s wr, 40 op/s
Dec 06 08:14:03 compute-0 nova_compute[251992]: 2025-12-06 08:14:03.655 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 18 KiB/s wr, 29 op/s
Dec 06 08:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:03.890 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:03.890 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:03.891 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:04.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:04 compute-0 nova_compute[251992]: 2025-12-06 08:14:04.136 251996 DEBUG nova.network.neutron [req-e2fe5ec9-fe99-491a-ad07-87ef39a587ca req-b706c970-b7c4-4d72-97f7-1cfe8b5d7274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updated VIF entry in instance network info cache for port 26d5b067-93d9-4736-a51f-3d695f40988b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:14:04 compute-0 nova_compute[251992]: 2025-12-06 08:14:04.136 251996 DEBUG nova.network.neutron [req-e2fe5ec9-fe99-491a-ad07-87ef39a587ca req-b706c970-b7c4-4d72-97f7-1cfe8b5d7274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updating instance_info_cache with network_info: [{"id": "26d5b067-93d9-4736-a51f-3d695f40988b", "address": "fa:16:3e:90:d8:d3", "network": {"id": "1bf97b73-354e-4df7-9a72-727cdc64dc43", "bridge": "br-int", "label": "tempest-network-smoke--166740708", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26d5b067-93", "ovs_interfaceid": "26d5b067-93d9-4736-a51f-3d695f40988b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:14:04 compute-0 nova_compute[251992]: 2025-12-06 08:14:04.167 251996 DEBUG oslo_concurrency.lockutils [req-e2fe5ec9-fe99-491a-ad07-87ef39a587ca req-b706c970-b7c4-4d72-97f7-1cfe8b5d7274 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-5ae83b01-2f96-47ca-92c5-66f9500be47d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:14:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:04 compute-0 nova_compute[251992]: 2025-12-06 08:14:04.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:04 compute-0 ceph-mon[74339]: pgmap v3643: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 18 KiB/s wr, 29 op/s
Dec 06 08:14:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:04.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.109 251996 INFO nova.virt.libvirt.driver [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Deleting instance files /var/lib/nova/instances/5ae83b01-2f96-47ca-92c5-66f9500be47d_del
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.110 251996 INFO nova.virt.libvirt.driver [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Deletion of /var/lib/nova/instances/5ae83b01-2f96-47ca-92c5-66f9500be47d_del complete
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.158 251996 INFO nova.compute.manager [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Took 2.78 seconds to destroy the instance on the hypervisor.
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.159 251996 DEBUG oslo.service.loopingcall [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.159 251996 DEBUG nova.compute.manager [-] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.159 251996 DEBUG nova.network.neutron [-] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.324 251996 DEBUG nova.compute.manager [req-ba983f08-a6db-4c74-9b65-164c2512ad0f req-3e419c23-8152-4634-8e3b-c0fec2f29260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.324 251996 DEBUG oslo_concurrency.lockutils [req-ba983f08-a6db-4c74-9b65-164c2512ad0f req-3e419c23-8152-4634-8e3b-c0fec2f29260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.324 251996 DEBUG oslo_concurrency.lockutils [req-ba983f08-a6db-4c74-9b65-164c2512ad0f req-3e419c23-8152-4634-8e3b-c0fec2f29260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.325 251996 DEBUG oslo_concurrency.lockutils [req-ba983f08-a6db-4c74-9b65-164c2512ad0f req-3e419c23-8152-4634-8e3b-c0fec2f29260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.325 251996 DEBUG nova.compute.manager [req-ba983f08-a6db-4c74-9b65-164c2512ad0f req-3e419c23-8152-4634-8e3b-c0fec2f29260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] No waiting events found dispatching network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.325 251996 WARNING nova.compute.manager [req-ba983f08-a6db-4c74-9b65-164c2512ad0f req-3e419c23-8152-4634-8e3b-c0fec2f29260 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received unexpected event network-vif-plugged-26d5b067-93d9-4736-a51f-3d695f40988b for instance with vm_state active and task_state deleting.
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:05 compute-0 nova_compute[251992]: 2025-12-06 08:14:05.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:14:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 19 KiB/s wr, 45 op/s
Dec 06 08:14:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:06.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:06 compute-0 nova_compute[251992]: 2025-12-06 08:14:06.214 251996 DEBUG nova.network.neutron [-] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:14:06 compute-0 nova_compute[251992]: 2025-12-06 08:14:06.231 251996 INFO nova.compute.manager [-] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Took 1.07 seconds to deallocate network for instance.
Dec 06 08:14:06 compute-0 nova_compute[251992]: 2025-12-06 08:14:06.289 251996 DEBUG oslo_concurrency.lockutils [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:06 compute-0 nova_compute[251992]: 2025-12-06 08:14:06.290 251996 DEBUG oslo_concurrency.lockutils [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:06 compute-0 nova_compute[251992]: 2025-12-06 08:14:06.356 251996 DEBUG oslo_concurrency.processutils [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:06 compute-0 nova_compute[251992]: 2025-12-06 08:14:06.385 251996 DEBUG nova.compute.manager [req-23c4b3fe-c61d-4adc-864d-58adf99ad96b req-476c048c-1963-44c8-befd-d7cb0b23e0c0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Received event network-vif-deleted-26d5b067-93d9-4736-a51f-3d695f40988b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:14:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:14:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1991368399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:06 compute-0 nova_compute[251992]: 2025-12-06 08:14:06.779 251996 DEBUG oslo_concurrency.processutils [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:14:06 compute-0 nova_compute[251992]: 2025-12-06 08:14:06.785 251996 DEBUG nova.compute.provider_tree [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:14:06 compute-0 nova_compute[251992]: 2025-12-06 08:14:06.808 251996 DEBUG nova.scheduler.client.report [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:14:06 compute-0 nova_compute[251992]: 2025-12-06 08:14:06.831 251996 DEBUG oslo_concurrency.lockutils [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:06.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:06 compute-0 nova_compute[251992]: 2025-12-06 08:14:06.856 251996 INFO nova.scheduler.client.report [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance 5ae83b01-2f96-47ca-92c5-66f9500be47d
Dec 06 08:14:06 compute-0 nova_compute[251992]: 2025-12-06 08:14:06.951 251996 DEBUG oslo_concurrency.lockutils [None req-eb51c2d1-76b0-4fc6-834f-6006507d55e5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "5ae83b01-2f96-47ca-92c5-66f9500be47d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:07 compute-0 ceph-mon[74339]: pgmap v3644: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 19 KiB/s wr, 45 op/s
Dec 06 08:14:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1991368399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:07 compute-0 nova_compute[251992]: 2025-12-06 08:14:07.624 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:07 compute-0 nova_compute[251992]: 2025-12-06 08:14:07.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:07 compute-0 nova_compute[251992]: 2025-12-06 08:14:07.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:14:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 17 KiB/s wr, 55 op/s
Dec 06 08:14:07 compute-0 nova_compute[251992]: 2025-12-06 08:14:07.846 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:08.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:08 compute-0 nova_compute[251992]: 2025-12-06 08:14:08.362 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:08.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:09.421 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '93'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:14:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:09 compute-0 ceph-mon[74339]: pgmap v3645: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 17 KiB/s wr, 55 op/s
Dec 06 08:14:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3462730774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:14:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3462730774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:14:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Dec 06 08:14:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:10.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:10 compute-0 nova_compute[251992]: 2025-12-06 08:14:10.156 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:10 compute-0 nova_compute[251992]: 2025-12-06 08:14:10.272 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:10 compute-0 ceph-mon[74339]: pgmap v3646: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Dec 06 08:14:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:10.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:11 compute-0 podman[396403]: 2025-12-06 08:14:11.449907371 +0000 UTC m=+0.097478740 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 06 08:14:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Dec 06 08:14:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:12.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/607142524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:12.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:12 compute-0 nova_compute[251992]: 2025-12-06 08:14:12.848 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:14:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:14:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:14:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:14:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:14:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:14:13 compute-0 nova_compute[251992]: 2025-12-06 08:14:13.363 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:13 compute-0 ceph-mon[74339]: pgmap v3647: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Dec 06 08:14:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:14:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:14.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:14.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:14 compute-0 ceph-mon[74339]: pgmap v3648: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:14:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 155 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Dec 06 08:14:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:16.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:16.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:17 compute-0 podman[396433]: 2025-12-06 08:14:17.39993855 +0000 UTC m=+0.055444288 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 08:14:17 compute-0 podman[396434]: 2025-12-06 08:14:17.409072928 +0000 UTC m=+0.059574959 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:14:17 compute-0 ceph-mon[74339]: pgmap v3649: 305 pgs: 305 active+clean; 155 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Dec 06 08:14:17 compute-0 nova_compute[251992]: 2025-12-06 08:14:17.690 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:17 compute-0 nova_compute[251992]: 2025-12-06 08:14:17.690 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:14:17 compute-0 nova_compute[251992]: 2025-12-06 08:14:17.704 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:14:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 08:14:17 compute-0 nova_compute[251992]: 2025-12-06 08:14:17.823 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008842.821159, 5ae83b01-2f96-47ca-92c5-66f9500be47d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:14:17 compute-0 nova_compute[251992]: 2025-12-06 08:14:17.823 251996 INFO nova.compute.manager [-] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] VM Stopped (Lifecycle Event)
Dec 06 08:14:17 compute-0 nova_compute[251992]: 2025-12-06 08:14:17.844 251996 DEBUG nova.compute.manager [None req-93f5af0c-4d86-4874-8b63-23b3dda5b56d - - - - - -] [instance: 5ae83b01-2f96-47ca-92c5-66f9500be47d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:14:17 compute-0 nova_compute[251992]: 2025-12-06 08:14:17.852 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:18.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:18 compute-0 nova_compute[251992]: 2025-12-06 08:14:18.365 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:14:18
Dec 06 08:14:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:14:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:14:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['backups', '.mgr', 'images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.meta']
Dec 06 08:14:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:14:18 compute-0 nova_compute[251992]: 2025-12-06 08:14:18.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:18.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3375879178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:14:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2525064073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:14:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.668 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.668 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.669 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.669 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.670 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.670 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.670 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.702 251996 DEBUG nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.703 251996 WARNING nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.703 251996 WARNING nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.703 251996 INFO nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Removable base files: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.703 251996 INFO nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.703 251996 INFO nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/40c8d19f192ebe6ef01b2a3ea96d896752dcd737
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.704 251996 DEBUG nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.704 251996 DEBUG nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Dec 06 08:14:19 compute-0 nova_compute[251992]: 2025-12-06 08:14:19.704 251996 DEBUG nova.virt.libvirt.imagecache [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Dec 06 08:14:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3651: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:14:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:20.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:20 compute-0 ceph-mon[74339]: pgmap v3650: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 08:14:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:20.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:21 compute-0 sudo[396473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:21 compute-0 sudo[396473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:21 compute-0 sudo[396473]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:21 compute-0 sudo[396498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:21 compute-0 sudo[396498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:21 compute-0 sudo[396498]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 08:14:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:22.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:22 compute-0 ceph-mon[74339]: pgmap v3651: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:14:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:22 compute-0 nova_compute[251992]: 2025-12-06 08:14:22.855 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:22.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:23 compute-0 nova_compute[251992]: 2025-12-06 08:14:23.367 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:23 compute-0 ceph-mon[74339]: pgmap v3652: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 08:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:14:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:14:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 06 08:14:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:24.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:24.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:25 compute-0 ceph-mon[74339]: pgmap v3653: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 06 08:14:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Dec 06 08:14:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:26.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:14:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:14:26 compute-0 ceph-mon[74339]: pgmap v3654: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Dec 06 08:14:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:26.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:14:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:14:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:14:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:14:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:14:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 355 KiB/s wr, 74 op/s
Dec 06 08:14:27 compute-0 nova_compute[251992]: 2025-12-06 08:14:27.857 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:28.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:28 compute-0 nova_compute[251992]: 2025-12-06 08:14:28.369 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:28.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:29 compute-0 ceph-mon[74339]: pgmap v3655: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 355 KiB/s wr, 74 op/s
Dec 06 08:14:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:14:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:30.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:30.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:31 compute-0 ceph-mon[74339]: pgmap v3656: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:14:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:14:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:32.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:32 compute-0 ceph-mon[74339]: pgmap v3657: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:14:32 compute-0 nova_compute[251992]: 2025-12-06 08:14:32.860 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:32.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:33 compute-0 nova_compute[251992]: 2025-12-06 08:14:33.370 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 68 op/s
Dec 06 08:14:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:34.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:34 compute-0 nova_compute[251992]: 2025-12-06 08:14:34.274 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "c63c39aa-db19-442c-830d-31f9ddcb5e48" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:34 compute-0 nova_compute[251992]: 2025-12-06 08:14:34.274 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:34 compute-0 nova_compute[251992]: 2025-12-06 08:14:34.292 251996 DEBUG nova.compute.manager [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:14:34 compute-0 nova_compute[251992]: 2025-12-06 08:14:34.410 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:34 compute-0 nova_compute[251992]: 2025-12-06 08:14:34.410 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:34 compute-0 nova_compute[251992]: 2025-12-06 08:14:34.420 251996 DEBUG nova.virt.hardware [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:14:34 compute-0 nova_compute[251992]: 2025-12-06 08:14:34.420 251996 INFO nova.compute.claims [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:14:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:34 compute-0 nova_compute[251992]: 2025-12-06 08:14:34.505 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:34.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:34 compute-0 ceph-mon[74339]: pgmap v3658: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 68 op/s
Dec 06 08:14:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:14:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565198130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:34 compute-0 nova_compute[251992]: 2025-12-06 08:14:34.954 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:14:34 compute-0 nova_compute[251992]: 2025-12-06 08:14:34.961 251996 DEBUG nova.compute.provider_tree [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.004 251996 DEBUG nova.scheduler.client.report [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.037 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.038 251996 DEBUG nova.compute.manager [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.096 251996 DEBUG nova.compute.manager [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.096 251996 DEBUG nova.network.neutron [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.115 251996 INFO nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.133 251996 DEBUG nova.compute.manager [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.219 251996 DEBUG nova.compute.manager [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.220 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.220 251996 INFO nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Creating image(s)
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.246 251996 DEBUG nova.storage.rbd_utils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image c63c39aa-db19-442c-830d-31f9ddcb5e48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.272 251996 DEBUG nova.storage.rbd_utils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image c63c39aa-db19-442c-830d-31f9ddcb5e48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.301 251996 DEBUG nova.storage.rbd_utils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image c63c39aa-db19-442c-830d-31f9ddcb5e48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.306 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.331 251996 DEBUG nova.policy [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd5359905348247d0b9b5b95982e890bb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.370 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.372 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.372 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.373 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.401 251996 DEBUG nova.storage.rbd_utils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image c63c39aa-db19-442c-830d-31f9ddcb5e48_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:14:35 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.404 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c63c39aa-db19-442c-830d-31f9ddcb5e48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 192 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 132 op/s
Dec 06 08:14:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3565198130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:35.998 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef c63c39aa-db19-442c-830d-31f9ddcb5e48_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:36.076 251996 DEBUG nova.storage.rbd_utils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] resizing rbd image c63c39aa-db19-442c-830d-31f9ddcb5e48_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:14:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:36.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:36.120 251996 DEBUG nova.network.neutron [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Successfully created port: 22a348dd-6281-472a-b85e-a65dc1638db1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:36.197 251996 DEBUG nova.objects.instance [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'migration_context' on Instance uuid c63c39aa-db19-442c-830d-31f9ddcb5e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:36.214 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:36.214 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Ensure instance console log exists: /var/lib/nova/instances/c63c39aa-db19-442c-830d-31f9ddcb5e48/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:36.215 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:36.215 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:36.215 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:36.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:36.963 251996 DEBUG nova.network.neutron [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Successfully updated port: 22a348dd-6281-472a-b85e-a65dc1638db1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:36.978 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:36.978 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquired lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:14:36 compute-0 nova_compute[251992]: 2025-12-06 08:14:36.979 251996 DEBUG nova.network.neutron [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:14:36 compute-0 ceph-mon[74339]: pgmap v3659: 305 pgs: 305 active+clean; 192 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 132 op/s
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.071 251996 DEBUG nova.compute.manager [req-e5566b63-0567-4599-a89f-1376652c3380 req-438061b9-4c0a-4a2f-b74a-dfb8bb8caf05 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Received event network-changed-22a348dd-6281-472a-b85e-a65dc1638db1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.072 251996 DEBUG nova.compute.manager [req-e5566b63-0567-4599-a89f-1376652c3380 req-438061b9-4c0a-4a2f-b74a-dfb8bb8caf05 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Refreshing instance network info cache due to event network-changed-22a348dd-6281-472a-b85e-a65dc1638db1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.072 251996 DEBUG oslo_concurrency.lockutils [req-e5566b63-0567-4599-a89f-1376652c3380 req-438061b9-4c0a-4a2f-b74a-dfb8bb8caf05 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.119 251996 DEBUG nova.network.neutron [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:14:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 211 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 684 KiB/s rd, 2.5 MiB/s wr, 83 op/s
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.864 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.952 251996 DEBUG nova.network.neutron [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Updating instance_info_cache with network_info: [{"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.973 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Releasing lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.973 251996 DEBUG nova.compute.manager [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Instance network_info: |[{"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.974 251996 DEBUG oslo_concurrency.lockutils [req-e5566b63-0567-4599-a89f-1376652c3380 req-438061b9-4c0a-4a2f-b74a-dfb8bb8caf05 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.974 251996 DEBUG nova.network.neutron [req-e5566b63-0567-4599-a89f-1376652c3380 req-438061b9-4c0a-4a2f-b74a-dfb8bb8caf05 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Refreshing network info cache for port 22a348dd-6281-472a-b85e-a65dc1638db1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.978 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Start _get_guest_xml network_info=[{"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.983 251996 WARNING nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.991 251996 DEBUG nova.virt.libvirt.host [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:14:37 compute-0 nova_compute[251992]: 2025-12-06 08:14:37.992 251996 DEBUG nova.virt.libvirt.host [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.000 251996 DEBUG nova.virt.libvirt.host [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.001 251996 DEBUG nova.virt.libvirt.host [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.002 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.003 251996 DEBUG nova.virt.hardware [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.003 251996 DEBUG nova.virt.hardware [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.004 251996 DEBUG nova.virt.hardware [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.004 251996 DEBUG nova.virt.hardware [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.004 251996 DEBUG nova.virt.hardware [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.005 251996 DEBUG nova.virt.hardware [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.005 251996 DEBUG nova.virt.hardware [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.005 251996 DEBUG nova.virt.hardware [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.006 251996 DEBUG nova.virt.hardware [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.006 251996 DEBUG nova.virt.hardware [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.006 251996 DEBUG nova.virt.hardware [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.010 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:38.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.372 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:14:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2016091100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.536 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.568 251996 DEBUG nova.storage.rbd_utils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image c63c39aa-db19-442c-830d-31f9ddcb5e48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:14:38 compute-0 nova_compute[251992]: 2025-12-06 08:14:38.574 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:38.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:14:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/91966497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.001 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.002 251996 DEBUG nova.virt.libvirt.vif [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:14:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2096887961',display_name='tempest-TestNetworkBasicOps-server-2096887961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2096887961',id=203,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGLMVHiBFVDbILzmiP2hjkfcRe9nbbI4rX+p9dN2d8el2VZSbgy44eMeyqYZznoUx63aOMZ7aSNernxTJS2o4jBvweg41uY5KZFOccoiCkasts6emqIhof6s0BgnkseSZQ==',key_name='tempest-TestNetworkBasicOps-2111706693',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-2k037ew5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:14:35Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=c63c39aa-db19-442c-830d-31f9ddcb5e48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.003 251996 DEBUG nova.network.os_vif_util [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.004 251996 DEBUG nova.network.os_vif_util [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=22a348dd-6281-472a-b85e-a65dc1638db1,network=Network(00b1f317-7b98-48e8-9683-424e83375985),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22a348dd-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.005 251996 DEBUG nova.objects.instance [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'pci_devices' on Instance uuid c63c39aa-db19-442c-830d-31f9ddcb5e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:14:39 compute-0 ceph-mon[74339]: pgmap v3660: 305 pgs: 305 active+clean; 211 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 684 KiB/s rd, 2.5 MiB/s wr, 83 op/s
Dec 06 08:14:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2016091100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:14:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/91966497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.181 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:14:39 compute-0 nova_compute[251992]:   <uuid>c63c39aa-db19-442c-830d-31f9ddcb5e48</uuid>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   <name>instance-000000cb</name>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <nova:name>tempest-TestNetworkBasicOps-server-2096887961</nova:name>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:14:37</nova:creationTime>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <nova:user uuid="d5359905348247d0b9b5b95982e890bb">tempest-TestNetworkBasicOps-1435471576-project-member</nova:user>
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <nova:project uuid="f4735a799c84437b9dd4ea8778ad2fbb">tempest-TestNetworkBasicOps-1435471576</nova:project>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <nova:port uuid="22a348dd-6281-472a-b85e-a65dc1638db1">
Dec 06 08:14:39 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <system>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <entry name="serial">c63c39aa-db19-442c-830d-31f9ddcb5e48</entry>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <entry name="uuid">c63c39aa-db19-442c-830d-31f9ddcb5e48</entry>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     </system>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   <os>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   </os>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   <features>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   </features>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c63c39aa-db19-442c-830d-31f9ddcb5e48_disk">
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       </source>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/c63c39aa-db19-442c-830d-31f9ddcb5e48_disk.config">
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       </source>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:14:39 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:0f:8a:1b"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <target dev="tap22a348dd-62"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/c63c39aa-db19-442c-830d-31f9ddcb5e48/console.log" append="off"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <video>
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     </video>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:14:39 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:14:39 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:14:39 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:14:39 compute-0 nova_compute[251992]: </domain>
Dec 06 08:14:39 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.183 251996 DEBUG nova.compute.manager [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Preparing to wait for external event network-vif-plugged-22a348dd-6281-472a-b85e-a65dc1638db1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.183 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.184 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.184 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.185 251996 DEBUG nova.virt.libvirt.vif [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:14:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2096887961',display_name='tempest-TestNetworkBasicOps-server-2096887961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2096887961',id=203,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGLMVHiBFVDbILzmiP2hjkfcRe9nbbI4rX+p9dN2d8el2VZSbgy44eMeyqYZznoUx63aOMZ7aSNernxTJS2o4jBvweg41uY5KZFOccoiCkasts6emqIhof6s0BgnkseSZQ==',key_name='tempest-TestNetworkBasicOps-2111706693',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-2k037ew5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:14:35Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=c63c39aa-db19-442c-830d-31f9ddcb5e48,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.185 251996 DEBUG nova.network.os_vif_util [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.185 251996 DEBUG nova.network.os_vif_util [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=22a348dd-6281-472a-b85e-a65dc1638db1,network=Network(00b1f317-7b98-48e8-9683-424e83375985),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22a348dd-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.186 251996 DEBUG os_vif [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=22a348dd-6281-472a-b85e-a65dc1638db1,network=Network(00b1f317-7b98-48e8-9683-424e83375985),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22a348dd-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.186 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.187 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.187 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.190 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.190 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22a348dd-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.191 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap22a348dd-62, col_values=(('external_ids', {'iface-id': '22a348dd-6281-472a-b85e-a65dc1638db1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:8a:1b', 'vm-uuid': 'c63c39aa-db19-442c-830d-31f9ddcb5e48'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.192 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:39 compute-0 NetworkManager[48965]: <info>  [1765008879.1940] manager: (tap22a348dd-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.195 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.198 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.198 251996 INFO os_vif [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=22a348dd-6281-472a-b85e-a65dc1638db1,network=Network(00b1f317-7b98-48e8-9683-424e83375985),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22a348dd-62')
Dec 06 08:14:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.501 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.501 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.502 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] No VIF found with MAC fa:16:3e:0f:8a:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.502 251996 INFO nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Using config drive
Dec 06 08:14:39 compute-0 nova_compute[251992]: 2025-12-06 08:14:39.533 251996 DEBUG nova.storage.rbd_utils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image c63c39aa-db19-442c-830d-31f9ddcb5e48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:14:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 211 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Dec 06 08:14:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:40.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4210566326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4074947016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:40.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:40 compute-0 nova_compute[251992]: 2025-12-06 08:14:40.997 251996 INFO nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Creating config drive at /var/lib/nova/instances/c63c39aa-db19-442c-830d-31f9ddcb5e48/disk.config
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.002 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c63c39aa-db19-442c-830d-31f9ddcb5e48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg3yh786c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.137 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c63c39aa-db19-442c-830d-31f9ddcb5e48/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg3yh786c" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.168 251996 DEBUG nova.storage.rbd_utils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] rbd image c63c39aa-db19-442c-830d-31f9ddcb5e48_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.172 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c63c39aa-db19-442c-830d-31f9ddcb5e48/disk.config c63c39aa-db19-442c-830d-31f9ddcb5e48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:41 compute-0 ceph-mon[74339]: pgmap v3661: 305 pgs: 305 active+clean; 211 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 329 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.343 251996 DEBUG oslo_concurrency.processutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c63c39aa-db19-442c-830d-31f9ddcb5e48/disk.config c63c39aa-db19-442c-830d-31f9ddcb5e48_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.344 251996 INFO nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Deleting local config drive /var/lib/nova/instances/c63c39aa-db19-442c-830d-31f9ddcb5e48/disk.config because it was imported into RBD.
Dec 06 08:14:41 compute-0 kernel: tap22a348dd-62: entered promiscuous mode
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.397 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:41 compute-0 ovn_controller[147168]: 2025-12-06T08:14:41Z|00771|binding|INFO|Claiming lport 22a348dd-6281-472a-b85e-a65dc1638db1 for this chassis.
Dec 06 08:14:41 compute-0 ovn_controller[147168]: 2025-12-06T08:14:41Z|00772|binding|INFO|22a348dd-6281-472a-b85e-a65dc1638db1: Claiming fa:16:3e:0f:8a:1b 10.100.0.12
Dec 06 08:14:41 compute-0 NetworkManager[48965]: <info>  [1765008881.3993] manager: (tap22a348dd-62): new Tun device (/org/freedesktop/NetworkManager/Devices/361)
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.400 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.405 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.419 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:8a:1b 10.100.0.12'], port_security=['fa:16:3e:0f:8a:1b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c63c39aa-db19-442c-830d-31f9ddcb5e48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00b1f317-7b98-48e8-9683-424e83375985', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78dcef83-a2a6-4643-8297-c4b0ef39ffc0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c85a23ec-dc54-431d-9d86-b858cac47687, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=22a348dd-6281-472a-b85e-a65dc1638db1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.421 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 22a348dd-6281-472a-b85e-a65dc1638db1 in datapath 00b1f317-7b98-48e8-9683-424e83375985 bound to our chassis
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.423 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00b1f317-7b98-48e8-9683-424e83375985
Dec 06 08:14:41 compute-0 systemd-machined[212986]: New machine qemu-92-instance-000000cb.
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.441 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5e193124-038e-496b-90d9-b1982c932949]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.442 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00b1f317-71 in ovnmeta-00b1f317-7b98-48e8-9683-424e83375985 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.444 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00b1f317-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.445 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ef372a00-6366-4ae5-aa84-ce3a91e62e43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.446 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fefaf0df-420e-4cb0-ae7e-89ef5ce78652]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-000000cb.
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.468 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[b14dbe87-cb96-4387-a5e5-1b3d465944c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.469 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:41 compute-0 ovn_controller[147168]: 2025-12-06T08:14:41Z|00773|binding|INFO|Setting lport 22a348dd-6281-472a-b85e-a65dc1638db1 ovn-installed in OVS
Dec 06 08:14:41 compute-0 ovn_controller[147168]: 2025-12-06T08:14:41Z|00774|binding|INFO|Setting lport 22a348dd-6281-472a-b85e-a65dc1638db1 up in Southbound
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.475 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:41 compute-0 systemd-udevd[396860]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.486 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0aed33ad-ab8e-4f0a-be42-828085dec82c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 NetworkManager[48965]: <info>  [1765008881.5033] device (tap22a348dd-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:14:41 compute-0 NetworkManager[48965]: <info>  [1765008881.5053] device (tap22a348dd-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.520 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[f83b67cd-7aa7-4f3e-8765-d97c2ee3d732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.526 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[863d05a6-8c39-47c8-9f7c-44189decd461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 systemd-udevd[396870]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:14:41 compute-0 NetworkManager[48965]: <info>  [1765008881.5284] manager: (tap00b1f317-70): new Veth device (/org/freedesktop/NetworkManager/Devices/362)
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.557 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[e3701fad-f736-4867-8cad-6728060390be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.560 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[eeaaff18-7aec-458b-a244-2d7095e603c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 NetworkManager[48965]: <info>  [1765008881.5826] device (tap00b1f317-70): carrier: link connected
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.591 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ae43da93-92e4-4d31-94c6-db41ef504673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 podman[396859]: 2025-12-06 08:14:41.599497911 +0000 UTC m=+0.103093591 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.612 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[98e7b421-bc8b-4ec0-9f8f-27b66a225050]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00b1f317-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:70:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 915417, 'reachable_time': 20119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396913, 'error': None, 'target': 'ovnmeta-00b1f317-7b98-48e8-9683-424e83375985', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.628 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d0734d-c99e-4291-972f-4750bbfede53]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:70f4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 915417, 'tstamp': 915417}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396914, 'error': None, 'target': 'ovnmeta-00b1f317-7b98-48e8-9683-424e83375985', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.648 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[246a04f3-2329-43cc-8a8d-04ceeae2ba1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00b1f317-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:70:f4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 915417, 'reachable_time': 20119, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 396915, 'error': None, 'target': 'ovnmeta-00b1f317-7b98-48e8-9683-424e83375985', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.679 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b78215ad-cf3e-48d3-854b-b1742dc8163b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 sudo[396916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:41 compute-0 sudo[396916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:41 compute-0 sudo[396916]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:41 compute-0 sudo[396936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:41 compute-0 sudo[396936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:41 compute-0 sudo[396936]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.743 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ded945-f540-48ea-9a4c-5414cbf7b041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.744 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00b1f317-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.745 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.745 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00b1f317-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:14:41 compute-0 sudo[396979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:41 compute-0 sudo[396969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:14:41 compute-0 kernel: tap00b1f317-70: entered promiscuous mode
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.791 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:41 compute-0 NetworkManager[48965]: <info>  [1765008881.7923] manager: (tap00b1f317-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Dec 06 08:14:41 compute-0 sudo[396979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.794 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00b1f317-70, col_values=(('external_ids', {'iface-id': '87fca5ce-f232-4474-a8a4-8db880ec3abf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:14:41 compute-0 sudo[396969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.796 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:41 compute-0 ovn_controller[147168]: 2025-12-06T08:14:41Z|00775|binding|INFO|Releasing lport 87fca5ce-f232-4474-a8a4-8db880ec3abf from this chassis (sb_readonly=0)
Dec 06 08:14:41 compute-0 sudo[396979]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:41 compute-0 sudo[396969]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 06 08:14:41 compute-0 nova_compute[251992]: 2025-12-06 08:14:41.817 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.818 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00b1f317-7b98-48e8-9683-424e83375985.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00b1f317-7b98-48e8-9683-424e83375985.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.819 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8f47b1-4cf9-452c-8f5e-d90d227b6a7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.820 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-00b1f317-7b98-48e8-9683-424e83375985
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/00b1f317-7b98-48e8-9683-424e83375985.pid.haproxy
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 00b1f317-7b98-48e8-9683-424e83375985
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:14:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:41.822 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00b1f317-7b98-48e8-9683-424e83375985', 'env', 'PROCESS_TAG=haproxy-00b1f317-7b98-48e8-9683-424e83375985', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00b1f317-7b98-48e8-9683-424e83375985.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:14:41 compute-0 sudo[397022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:41 compute-0 sudo[397022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:41 compute-0 sudo[397022]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:41 compute-0 sudo[397050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:14:41 compute-0 sudo[397050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:42.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.160 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008882.1601744, c63c39aa-db19-442c-830d-31f9ddcb5e48 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.161 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] VM Started (Lifecycle Event)
Dec 06 08:14:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.185 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.189 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008882.1635146, c63c39aa-db19-442c-830d-31f9ddcb5e48 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.190 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] VM Paused (Lifecycle Event)
Dec 06 08:14:42 compute-0 podman[397153]: 2025-12-06 08:14:42.190922658 +0000 UTC m=+0.049081144 container create 412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:14:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:14:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.211 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.215 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:14:42 compute-0 systemd[1]: Started libpod-conmon-412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f.scope.
Dec 06 08:14:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.245 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:14:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/662ca894182bb1d7327bbf92137002fdf783f00d2e21f5b3c4585035beedc4e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:42 compute-0 podman[397153]: 2025-12-06 08:14:42.262530444 +0000 UTC m=+0.120688950 container init 412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 08:14:42 compute-0 podman[397153]: 2025-12-06 08:14:42.166683379 +0000 UTC m=+0.024841865 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:14:42 compute-0 podman[397153]: 2025-12-06 08:14:42.267406366 +0000 UTC m=+0.125564852 container start 412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 08:14:42 compute-0 neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985[397171]: [NOTICE]   (397177) : New worker (397179) forked
Dec 06 08:14:42 compute-0 neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985[397171]: [NOTICE]   (397177) : Loading success.
Dec 06 08:14:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 08:14:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.372 251996 DEBUG nova.compute.manager [req-e388b2e4-a0c6-4f67-a326-4552e625dc29 req-9aef72bb-4565-4606-883b-e6e139b736b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Received event network-vif-plugged-22a348dd-6281-472a-b85e-a65dc1638db1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.372 251996 DEBUG oslo_concurrency.lockutils [req-e388b2e4-a0c6-4f67-a326-4552e625dc29 req-9aef72bb-4565-4606-883b-e6e139b736b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.372 251996 DEBUG oslo_concurrency.lockutils [req-e388b2e4-a0c6-4f67-a326-4552e625dc29 req-9aef72bb-4565-4606-883b-e6e139b736b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.373 251996 DEBUG oslo_concurrency.lockutils [req-e388b2e4-a0c6-4f67-a326-4552e625dc29 req-9aef72bb-4565-4606-883b-e6e139b736b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.373 251996 DEBUG nova.compute.manager [req-e388b2e4-a0c6-4f67-a326-4552e625dc29 req-9aef72bb-4565-4606-883b-e6e139b736b8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Processing event network-vif-plugged-22a348dd-6281-472a-b85e-a65dc1638db1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.373 251996 DEBUG nova.compute.manager [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.377 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008882.3771875, c63c39aa-db19-442c-830d-31f9ddcb5e48 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.377 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] VM Resumed (Lifecycle Event)
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.379 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.382 251996 INFO nova.virt.libvirt.driver [-] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Instance spawned successfully.
Dec 06 08:14:42 compute-0 sudo[397050]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.382 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.409 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.414 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.417 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.418 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.418 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.419 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.419 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.419 251996 DEBUG nova.virt.libvirt.driver [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:14:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 08:14:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.451 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.492 251996 INFO nova.compute.manager [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Took 7.27 seconds to spawn the instance on the hypervisor.
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.493 251996 DEBUG nova.compute.manager [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.567 251996 INFO nova.compute.manager [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Took 8.21 seconds to build instance.
Dec 06 08:14:42 compute-0 nova_compute[251992]: 2025-12-06 08:14:42.585 251996 DEBUG oslo_concurrency.lockutils [None req-999062ab-54ba-4a74-83d2-433d98d1ac9d d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:42.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:43 compute-0 nova_compute[251992]: 2025-12-06 08:14:43.017 251996 DEBUG nova.network.neutron [req-e5566b63-0567-4599-a89f-1376652c3380 req-438061b9-4c0a-4a2f-b74a-dfb8bb8caf05 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Updated VIF entry in instance network info cache for port 22a348dd-6281-472a-b85e-a65dc1638db1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:14:43 compute-0 nova_compute[251992]: 2025-12-06 08:14:43.017 251996 DEBUG nova.network.neutron [req-e5566b63-0567-4599-a89f-1376652c3380 req-438061b9-4c0a-4a2f-b74a-dfb8bb8caf05 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Updating instance_info_cache with network_info: [{"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:14:43 compute-0 nova_compute[251992]: 2025-12-06 08:14:43.041 251996 DEBUG oslo_concurrency.lockutils [req-e5566b63-0567-4599-a89f-1376652c3380 req-438061b9-4c0a-4a2f-b74a-dfb8bb8caf05 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:14:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:14:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:14:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:14:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:14:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:14:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:14:43 compute-0 ceph-mon[74339]: pgmap v3662: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 06 08:14:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:14:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:14:43 compute-0 nova_compute[251992]: 2025-12-06 08:14:43.262 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:43.263 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=94, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=93) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:14:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:43.264 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:14:43 compute-0 nova_compute[251992]: 2025-12-06 08:14:43.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 06 08:14:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:44.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:44 compute-0 nova_compute[251992]: 2025-12-06 08:14:44.230 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:14:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:14:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:44 compute-0 nova_compute[251992]: 2025-12-06 08:14:44.560 251996 DEBUG nova.compute.manager [req-3103cf9f-fabc-434d-84e8-5ef42d0575c4 req-b89dacc8-01bd-473e-8bdd-2914e7fd6a13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Received event network-vif-plugged-22a348dd-6281-472a-b85e-a65dc1638db1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:14:44 compute-0 nova_compute[251992]: 2025-12-06 08:14:44.561 251996 DEBUG oslo_concurrency.lockutils [req-3103cf9f-fabc-434d-84e8-5ef42d0575c4 req-b89dacc8-01bd-473e-8bdd-2914e7fd6a13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:44 compute-0 nova_compute[251992]: 2025-12-06 08:14:44.562 251996 DEBUG oslo_concurrency.lockutils [req-3103cf9f-fabc-434d-84e8-5ef42d0575c4 req-b89dacc8-01bd-473e-8bdd-2914e7fd6a13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:44 compute-0 nova_compute[251992]: 2025-12-06 08:14:44.562 251996 DEBUG oslo_concurrency.lockutils [req-3103cf9f-fabc-434d-84e8-5ef42d0575c4 req-b89dacc8-01bd-473e-8bdd-2914e7fd6a13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:44 compute-0 nova_compute[251992]: 2025-12-06 08:14:44.563 251996 DEBUG nova.compute.manager [req-3103cf9f-fabc-434d-84e8-5ef42d0575c4 req-b89dacc8-01bd-473e-8bdd-2914e7fd6a13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] No waiting events found dispatching network-vif-plugged-22a348dd-6281-472a-b85e-a65dc1638db1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:14:44 compute-0 nova_compute[251992]: 2025-12-06 08:14:44.563 251996 WARNING nova.compute.manager [req-3103cf9f-fabc-434d-84e8-5ef42d0575c4 req-b89dacc8-01bd-473e-8bdd-2914e7fd6a13 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Received unexpected event network-vif-plugged-22a348dd-6281-472a-b85e-a65dc1638db1 for instance with vm_state active and task_state None.
Dec 06 08:14:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:44.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:14:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:14:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:14:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:14:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:14:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:45 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9a778a26-72bb-40f4-b973-f9557dc76f9c does not exist
Dec 06 08:14:45 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev aec3d254-fff0-4621-9277-93c714d3f1aa does not exist
Dec 06 08:14:45 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 76e2b6fd-ea93-45aa-b0dc-81308b432ccd does not exist
Dec 06 08:14:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:14:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:14:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:14:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:14:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:14:45 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:14:45 compute-0 sudo[397202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:45 compute-0 sudo[397202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:45 compute-0 sudo[397202]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:45 compute-0 sudo[397227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:14:45 compute-0 sudo[397227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:45 compute-0 sudo[397227]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:45 compute-0 sudo[397252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:45 compute-0 sudo[397252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:45 compute-0 sudo[397252]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:45 compute-0 sudo[397277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:14:45 compute-0 sudo[397277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:45 compute-0 ceph-mon[74339]: pgmap v3663: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Dec 06 08:14:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:14:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:14:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Dec 06 08:14:46 compute-0 podman[397341]: 2025-12-06 08:14:46.000987273 +0000 UTC m=+0.042025373 container create 5df59f2ffd7bcdb0118639331b2e7f022070a4fc2c1ca460047dfcff19e524ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:14:46 compute-0 systemd[1]: Started libpod-conmon-5df59f2ffd7bcdb0118639331b2e7f022070a4fc2c1ca460047dfcff19e524ef.scope.
Dec 06 08:14:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:14:46 compute-0 podman[397341]: 2025-12-06 08:14:45.981063692 +0000 UTC m=+0.022101812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:14:46 compute-0 podman[397341]: 2025-12-06 08:14:46.089525309 +0000 UTC m=+0.130563419 container init 5df59f2ffd7bcdb0118639331b2e7f022070a4fc2c1ca460047dfcff19e524ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pasteur, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:14:46 compute-0 podman[397341]: 2025-12-06 08:14:46.097707281 +0000 UTC m=+0.138745361 container start 5df59f2ffd7bcdb0118639331b2e7f022070a4fc2c1ca460047dfcff19e524ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:14:46 compute-0 podman[397341]: 2025-12-06 08:14:46.100150637 +0000 UTC m=+0.141188727 container attach 5df59f2ffd7bcdb0118639331b2e7f022070a4fc2c1ca460047dfcff19e524ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 08:14:46 compute-0 cool_pasteur[397358]: 167 167
Dec 06 08:14:46 compute-0 systemd[1]: libpod-5df59f2ffd7bcdb0118639331b2e7f022070a4fc2c1ca460047dfcff19e524ef.scope: Deactivated successfully.
Dec 06 08:14:46 compute-0 conmon[397358]: conmon 5df59f2ffd7bcdb01186 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5df59f2ffd7bcdb0118639331b2e7f022070a4fc2c1ca460047dfcff19e524ef.scope/container/memory.events
Dec 06 08:14:46 compute-0 podman[397341]: 2025-12-06 08:14:46.105638426 +0000 UTC m=+0.146676526 container died 5df59f2ffd7bcdb0118639331b2e7f022070a4fc2c1ca460047dfcff19e524ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 08:14:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:46.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a83d892ec753908170a118307ab4b0ac5bbde566404b64b5658de49384baee3-merged.mount: Deactivated successfully.
Dec 06 08:14:46 compute-0 podman[397341]: 2025-12-06 08:14:46.14000895 +0000 UTC m=+0.181047040 container remove 5df59f2ffd7bcdb0118639331b2e7f022070a4fc2c1ca460047dfcff19e524ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:14:46 compute-0 systemd[1]: libpod-conmon-5df59f2ffd7bcdb0118639331b2e7f022070a4fc2c1ca460047dfcff19e524ef.scope: Deactivated successfully.
Dec 06 08:14:46 compute-0 podman[397381]: 2025-12-06 08:14:46.309905125 +0000 UTC m=+0.044593512 container create 9e0a04a82811976f17e5131023d43ccb8baadc0b26c0bc7ec7a24c78e2082783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_engelbart, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:14:46 compute-0 systemd[1]: Started libpod-conmon-9e0a04a82811976f17e5131023d43ccb8baadc0b26c0bc7ec7a24c78e2082783.scope.
Dec 06 08:14:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e063dbb9c869b60d45db48a7eae992310799974db726fdb80b81be495afb40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e063dbb9c869b60d45db48a7eae992310799974db726fdb80b81be495afb40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e063dbb9c869b60d45db48a7eae992310799974db726fdb80b81be495afb40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e063dbb9c869b60d45db48a7eae992310799974db726fdb80b81be495afb40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e063dbb9c869b60d45db48a7eae992310799974db726fdb80b81be495afb40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:46 compute-0 podman[397381]: 2025-12-06 08:14:46.373007139 +0000 UTC m=+0.107695536 container init 9e0a04a82811976f17e5131023d43ccb8baadc0b26c0bc7ec7a24c78e2082783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_engelbart, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 08:14:46 compute-0 podman[397381]: 2025-12-06 08:14:46.293609523 +0000 UTC m=+0.028297930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:14:46 compute-0 podman[397381]: 2025-12-06 08:14:46.383154976 +0000 UTC m=+0.117843363 container start 9e0a04a82811976f17e5131023d43ccb8baadc0b26c0bc7ec7a24c78e2082783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_engelbart, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:14:46 compute-0 podman[397381]: 2025-12-06 08:14:46.386584288 +0000 UTC m=+0.121272695 container attach 9e0a04a82811976f17e5131023d43ccb8baadc0b26c0bc7ec7a24c78e2082783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_engelbart, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 08:14:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:14:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:14:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:14:46 compute-0 ceph-mon[74339]: pgmap v3664: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Dec 06 08:14:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:14:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:46.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:14:47 compute-0 eager_engelbart[397397]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:14:47 compute-0 eager_engelbart[397397]: --> relative data size: 1.0
Dec 06 08:14:47 compute-0 eager_engelbart[397397]: --> All data devices are unavailable
Dec 06 08:14:47 compute-0 systemd[1]: libpod-9e0a04a82811976f17e5131023d43ccb8baadc0b26c0bc7ec7a24c78e2082783.scope: Deactivated successfully.
Dec 06 08:14:47 compute-0 podman[397381]: 2025-12-06 08:14:47.18802577 +0000 UTC m=+0.922714157 container died 9e0a04a82811976f17e5131023d43ccb8baadc0b26c0bc7ec7a24c78e2082783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 08:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-17e063dbb9c869b60d45db48a7eae992310799974db726fdb80b81be495afb40-merged.mount: Deactivated successfully.
Dec 06 08:14:47 compute-0 podman[397381]: 2025-12-06 08:14:47.33341409 +0000 UTC m=+1.068102477 container remove 9e0a04a82811976f17e5131023d43ccb8baadc0b26c0bc7ec7a24c78e2082783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 08:14:47 compute-0 systemd[1]: libpod-conmon-9e0a04a82811976f17e5131023d43ccb8baadc0b26c0bc7ec7a24c78e2082783.scope: Deactivated successfully.
Dec 06 08:14:47 compute-0 sudo[397277]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:47 compute-0 sudo[397426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:47 compute-0 sudo[397426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:47 compute-0 sudo[397426]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:47 compute-0 podman[397450]: 2025-12-06 08:14:47.54804555 +0000 UTC m=+0.062604751 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 08:14:47 compute-0 sudo[397463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:14:47 compute-0 podman[397451]: 2025-12-06 08:14:47.57603053 +0000 UTC m=+0.089758019 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec 06 08:14:47 compute-0 sudo[397463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:47 compute-0 sudo[397463]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:47 compute-0 sudo[397513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:47 compute-0 sudo[397513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:47 compute-0 sudo[397513]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:47 compute-0 sudo[397539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:14:47 compute-0 sudo[397539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 100 op/s
Dec 06 08:14:48 compute-0 podman[397606]: 2025-12-06 08:14:48.009828125 +0000 UTC m=+0.049260048 container create 56a4d29cc578ff17c24061aea39e21cbfdeec58d915807738752a62da2f2f47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 08:14:48 compute-0 systemd[1]: Started libpod-conmon-56a4d29cc578ff17c24061aea39e21cbfdeec58d915807738752a62da2f2f47a.scope.
Dec 06 08:14:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:14:48 compute-0 podman[397606]: 2025-12-06 08:14:47.9879225 +0000 UTC m=+0.027354483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:14:48 compute-0 podman[397606]: 2025-12-06 08:14:48.088983896 +0000 UTC m=+0.128415849 container init 56a4d29cc578ff17c24061aea39e21cbfdeec58d915807738752a62da2f2f47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:14:48 compute-0 podman[397606]: 2025-12-06 08:14:48.098448283 +0000 UTC m=+0.137880206 container start 56a4d29cc578ff17c24061aea39e21cbfdeec58d915807738752a62da2f2f47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:14:48 compute-0 podman[397606]: 2025-12-06 08:14:48.101610269 +0000 UTC m=+0.141042222 container attach 56a4d29cc578ff17c24061aea39e21cbfdeec58d915807738752a62da2f2f47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 08:14:48 compute-0 cranky_vaughan[397623]: 167 167
Dec 06 08:14:48 compute-0 systemd[1]: libpod-56a4d29cc578ff17c24061aea39e21cbfdeec58d915807738752a62da2f2f47a.scope: Deactivated successfully.
Dec 06 08:14:48 compute-0 podman[397606]: 2025-12-06 08:14:48.104690212 +0000 UTC m=+0.144122135 container died 56a4d29cc578ff17c24061aea39e21cbfdeec58d915807738752a62da2f2f47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:14:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:48.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-66270d2abb690f59beba6bc689dee88f48ee6200519cc55e0e35ffbd64669a6f-merged.mount: Deactivated successfully.
Dec 06 08:14:48 compute-0 podman[397606]: 2025-12-06 08:14:48.143092826 +0000 UTC m=+0.182524749 container remove 56a4d29cc578ff17c24061aea39e21cbfdeec58d915807738752a62da2f2f47a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:14:48 compute-0 systemd[1]: libpod-conmon-56a4d29cc578ff17c24061aea39e21cbfdeec58d915807738752a62da2f2f47a.scope: Deactivated successfully.
Dec 06 08:14:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:14:48.267 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '94'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:14:48 compute-0 podman[397646]: 2025-12-06 08:14:48.311940433 +0000 UTC m=+0.041194591 container create 3990790efa934a20df50f3bad59c96edd5cb26eb79cf4f51090d8b1029a8255c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shirley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:14:48 compute-0 systemd[1]: Started libpod-conmon-3990790efa934a20df50f3bad59c96edd5cb26eb79cf4f51090d8b1029a8255c.scope.
Dec 06 08:14:48 compute-0 nova_compute[251992]: 2025-12-06 08:14:48.376 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/119a811fd8c3483f060ca1ba5a2fe94099e09da28bd5824c445c18f28d106178/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/119a811fd8c3483f060ca1ba5a2fe94099e09da28bd5824c445c18f28d106178/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/119a811fd8c3483f060ca1ba5a2fe94099e09da28bd5824c445c18f28d106178/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/119a811fd8c3483f060ca1ba5a2fe94099e09da28bd5824c445c18f28d106178/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:48 compute-0 podman[397646]: 2025-12-06 08:14:48.292751571 +0000 UTC m=+0.022005749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:14:48 compute-0 podman[397646]: 2025-12-06 08:14:48.405859345 +0000 UTC m=+0.135113503 container init 3990790efa934a20df50f3bad59c96edd5cb26eb79cf4f51090d8b1029a8255c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shirley, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:14:48 compute-0 podman[397646]: 2025-12-06 08:14:48.413305606 +0000 UTC m=+0.142559764 container start 3990790efa934a20df50f3bad59c96edd5cb26eb79cf4f51090d8b1029a8255c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shirley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:14:48 compute-0 podman[397646]: 2025-12-06 08:14:48.417163752 +0000 UTC m=+0.146417930 container attach 3990790efa934a20df50f3bad59c96edd5cb26eb79cf4f51090d8b1029a8255c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Dec 06 08:14:48 compute-0 NetworkManager[48965]: <info>  [1765008888.5006] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Dec 06 08:14:48 compute-0 nova_compute[251992]: 2025-12-06 08:14:48.499 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:48 compute-0 NetworkManager[48965]: <info>  [1765008888.5015] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Dec 06 08:14:48 compute-0 nova_compute[251992]: 2025-12-06 08:14:48.579 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:48 compute-0 ovn_controller[147168]: 2025-12-06T08:14:48Z|00776|binding|INFO|Releasing lport 87fca5ce-f232-4474-a8a4-8db880ec3abf from this chassis (sb_readonly=0)
Dec 06 08:14:48 compute-0 nova_compute[251992]: 2025-12-06 08:14:48.589 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:48 compute-0 nova_compute[251992]: 2025-12-06 08:14:48.686 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:48 compute-0 nova_compute[251992]: 2025-12-06 08:14:48.686 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:48 compute-0 nova_compute[251992]: 2025-12-06 08:14:48.725 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:48 compute-0 nova_compute[251992]: 2025-12-06 08:14:48.726 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:48 compute-0 nova_compute[251992]: 2025-12-06 08:14:48.726 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:48 compute-0 nova_compute[251992]: 2025-12-06 08:14:48.726 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:14:48 compute-0 nova_compute[251992]: 2025-12-06 08:14:48.727 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:48.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.113 251996 DEBUG nova.compute.manager [req-2db8723c-325d-4a78-b847-3c87b2bb872d req-aff9a04f-56a1-4dcc-a569-9bfaad3c44d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Received event network-changed-22a348dd-6281-472a-b85e-a65dc1638db1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.114 251996 DEBUG nova.compute.manager [req-2db8723c-325d-4a78-b847-3c87b2bb872d req-aff9a04f-56a1-4dcc-a569-9bfaad3c44d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Refreshing instance network info cache due to event network-changed-22a348dd-6281-472a-b85e-a65dc1638db1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.114 251996 DEBUG oslo_concurrency.lockutils [req-2db8723c-325d-4a78-b847-3c87b2bb872d req-aff9a04f-56a1-4dcc-a569-9bfaad3c44d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.114 251996 DEBUG oslo_concurrency.lockutils [req-2db8723c-325d-4a78-b847-3c87b2bb872d req-aff9a04f-56a1-4dcc-a569-9bfaad3c44d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.115 251996 DEBUG nova.network.neutron [req-2db8723c-325d-4a78-b847-3c87b2bb872d req-aff9a04f-56a1-4dcc-a569-9bfaad3c44d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Refreshing network info cache for port 22a348dd-6281-472a-b85e-a65dc1638db1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:14:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:14:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1832332214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.161 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:14:49 compute-0 ceph-mon[74339]: pgmap v3665: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 100 op/s
Dec 06 08:14:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1832332214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:49 compute-0 pensive_shirley[397662]: {
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:     "0": [
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:         {
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             "devices": [
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "/dev/loop3"
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             ],
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             "lv_name": "ceph_lv0",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             "lv_size": "7511998464",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             "name": "ceph_lv0",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             "tags": {
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "ceph.cluster_name": "ceph",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "ceph.crush_device_class": "",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "ceph.encrypted": "0",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "ceph.osd_id": "0",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "ceph.type": "block",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:                 "ceph.vdo": "0"
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             },
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             "type": "block",
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:             "vg_name": "ceph_vg0"
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:         }
Dec 06 08:14:49 compute-0 pensive_shirley[397662]:     ]
Dec 06 08:14:49 compute-0 pensive_shirley[397662]: }
Dec 06 08:14:49 compute-0 systemd[1]: libpod-3990790efa934a20df50f3bad59c96edd5cb26eb79cf4f51090d8b1029a8255c.scope: Deactivated successfully.
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.233 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:49 compute-0 podman[397646]: 2025-12-06 08:14:49.234882616 +0000 UTC m=+0.964136774 container died 3990790efa934a20df50f3bad59c96edd5cb26eb79cf4f51090d8b1029a8255c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shirley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-119a811fd8c3483f060ca1ba5a2fe94099e09da28bd5824c445c18f28d106178-merged.mount: Deactivated successfully.
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.258 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.258 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:14:49 compute-0 podman[397646]: 2025-12-06 08:14:49.291926275 +0000 UTC m=+1.021180433 container remove 3990790efa934a20df50f3bad59c96edd5cb26eb79cf4f51090d8b1029a8255c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:14:49 compute-0 systemd[1]: libpod-conmon-3990790efa934a20df50f3bad59c96edd5cb26eb79cf4f51090d8b1029a8255c.scope: Deactivated successfully.
Dec 06 08:14:49 compute-0 sudo[397539]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:49 compute-0 sudo[397708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:49 compute-0 sudo[397708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:49 compute-0 sudo[397708]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:49 compute-0 sudo[397733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:14:49 compute-0 sudo[397733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:49 compute-0 sudo[397733]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.479 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.480 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3964MB free_disk=20.921855926513672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.481 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.481 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:14:49 compute-0 sudo[397758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:49 compute-0 sudo[397758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:49 compute-0 sudo[397758]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:49 compute-0 sudo[397783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:14:49 compute-0 sudo[397783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.582 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance c63c39aa-db19-442c-830d-31f9ddcb5e48 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.582 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.582 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:14:49 compute-0 nova_compute[251992]: 2025-12-06 08:14:49.638 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:14:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 92 op/s
Dec 06 08:14:49 compute-0 podman[397867]: 2025-12-06 08:14:49.869351682 +0000 UTC m=+0.046405731 container create e4bbc33a5b198f33caed095be1d14134551786b7a88e4c0ea6a7dfe3793b97eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 08:14:49 compute-0 systemd[1]: Started libpod-conmon-e4bbc33a5b198f33caed095be1d14134551786b7a88e4c0ea6a7dfe3793b97eb.scope.
Dec 06 08:14:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:14:49 compute-0 podman[397867]: 2025-12-06 08:14:49.934863051 +0000 UTC m=+0.111917130 container init e4bbc33a5b198f33caed095be1d14134551786b7a88e4c0ea6a7dfe3793b97eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_faraday, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:14:49 compute-0 podman[397867]: 2025-12-06 08:14:49.846271445 +0000 UTC m=+0.023325514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:14:49 compute-0 podman[397867]: 2025-12-06 08:14:49.941611425 +0000 UTC m=+0.118665474 container start e4bbc33a5b198f33caed095be1d14134551786b7a88e4c0ea6a7dfe3793b97eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:14:49 compute-0 friendly_faraday[397883]: 167 167
Dec 06 08:14:49 compute-0 systemd[1]: libpod-e4bbc33a5b198f33caed095be1d14134551786b7a88e4c0ea6a7dfe3793b97eb.scope: Deactivated successfully.
Dec 06 08:14:49 compute-0 conmon[397883]: conmon e4bbc33a5b198f33caed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4bbc33a5b198f33caed095be1d14134551786b7a88e4c0ea6a7dfe3793b97eb.scope/container/memory.events
Dec 06 08:14:49 compute-0 podman[397867]: 2025-12-06 08:14:49.947178936 +0000 UTC m=+0.124233005 container attach e4bbc33a5b198f33caed095be1d14134551786b7a88e4c0ea6a7dfe3793b97eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_faraday, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 08:14:49 compute-0 podman[397867]: 2025-12-06 08:14:49.948541493 +0000 UTC m=+0.125595582 container died e4bbc33a5b198f33caed095be1d14134551786b7a88e4c0ea6a7dfe3793b97eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f15ee8119d6780ea3c4541563254768c271bc617e5cf3eb530ba52b0b8dc6ebd-merged.mount: Deactivated successfully.
Dec 06 08:14:50 compute-0 podman[397867]: 2025-12-06 08:14:50.002255433 +0000 UTC m=+0.179309472 container remove e4bbc33a5b198f33caed095be1d14134551786b7a88e4c0ea6a7dfe3793b97eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:14:50 compute-0 systemd[1]: libpod-conmon-e4bbc33a5b198f33caed095be1d14134551786b7a88e4c0ea6a7dfe3793b97eb.scope: Deactivated successfully.
Dec 06 08:14:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:14:50 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1311401847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:50 compute-0 nova_compute[251992]: 2025-12-06 08:14:50.112 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:14:50 compute-0 nova_compute[251992]: 2025-12-06 08:14:50.119 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:14:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:50.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:50 compute-0 nova_compute[251992]: 2025-12-06 08:14:50.140 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:14:50 compute-0 podman[397909]: 2025-12-06 08:14:50.156478722 +0000 UTC m=+0.040073160 container create 2338cf2b28dc1ca32642346498a5521c1e6ecfdcdeb389c0aa2a2cbcb42ca703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 08:14:50 compute-0 nova_compute[251992]: 2025-12-06 08:14:50.180 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:14:50 compute-0 nova_compute[251992]: 2025-12-06 08:14:50.181 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:14:50 compute-0 systemd[1]: Started libpod-conmon-2338cf2b28dc1ca32642346498a5521c1e6ecfdcdeb389c0aa2a2cbcb42ca703.scope.
Dec 06 08:14:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a758d70a46ccf350c71713481b3828bb61daa4bb02a683b56f8c1a432cf1e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a758d70a46ccf350c71713481b3828bb61daa4bb02a683b56f8c1a432cf1e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a758d70a46ccf350c71713481b3828bb61daa4bb02a683b56f8c1a432cf1e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2a758d70a46ccf350c71713481b3828bb61daa4bb02a683b56f8c1a432cf1e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:14:50 compute-0 podman[397909]: 2025-12-06 08:14:50.138923815 +0000 UTC m=+0.022518263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:14:50 compute-0 podman[397909]: 2025-12-06 08:14:50.23848837 +0000 UTC m=+0.122082828 container init 2338cf2b28dc1ca32642346498a5521c1e6ecfdcdeb389c0aa2a2cbcb42ca703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pascal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 08:14:50 compute-0 podman[397909]: 2025-12-06 08:14:50.246446106 +0000 UTC m=+0.130040544 container start 2338cf2b28dc1ca32642346498a5521c1e6ecfdcdeb389c0aa2a2cbcb42ca703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 08:14:50 compute-0 podman[397909]: 2025-12-06 08:14:50.249814688 +0000 UTC m=+0.133409156 container attach 2338cf2b28dc1ca32642346498a5521c1e6ecfdcdeb389c0aa2a2cbcb42ca703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 08:14:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1311401847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:50 compute-0 nova_compute[251992]: 2025-12-06 08:14:50.466 251996 DEBUG nova.network.neutron [req-2db8723c-325d-4a78-b847-3c87b2bb872d req-aff9a04f-56a1-4dcc-a569-9bfaad3c44d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Updated VIF entry in instance network info cache for port 22a348dd-6281-472a-b85e-a65dc1638db1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:14:50 compute-0 nova_compute[251992]: 2025-12-06 08:14:50.468 251996 DEBUG nova.network.neutron [req-2db8723c-325d-4a78-b847-3c87b2bb872d req-aff9a04f-56a1-4dcc-a569-9bfaad3c44d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Updating instance_info_cache with network_info: [{"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:14:50 compute-0 nova_compute[251992]: 2025-12-06 08:14:50.553 251996 DEBUG oslo_concurrency.lockutils [req-2db8723c-325d-4a78-b847-3c87b2bb872d req-aff9a04f-56a1-4dcc-a569-9bfaad3c44d0 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:14:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:50.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:51 compute-0 awesome_pascal[397926]: {
Dec 06 08:14:51 compute-0 awesome_pascal[397926]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:14:51 compute-0 awesome_pascal[397926]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:14:51 compute-0 awesome_pascal[397926]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:14:51 compute-0 awesome_pascal[397926]:         "osd_id": 0,
Dec 06 08:14:51 compute-0 awesome_pascal[397926]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:14:51 compute-0 awesome_pascal[397926]:         "type": "bluestore"
Dec 06 08:14:51 compute-0 awesome_pascal[397926]:     }
Dec 06 08:14:51 compute-0 awesome_pascal[397926]: }
Dec 06 08:14:51 compute-0 systemd[1]: libpod-2338cf2b28dc1ca32642346498a5521c1e6ecfdcdeb389c0aa2a2cbcb42ca703.scope: Deactivated successfully.
Dec 06 08:14:51 compute-0 podman[397909]: 2025-12-06 08:14:51.067537542 +0000 UTC m=+0.951131980 container died 2338cf2b28dc1ca32642346498a5521c1e6ecfdcdeb389c0aa2a2cbcb42ca703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pascal, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 08:14:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2a758d70a46ccf350c71713481b3828bb61daa4bb02a683b56f8c1a432cf1e9-merged.mount: Deactivated successfully.
Dec 06 08:14:51 compute-0 podman[397909]: 2025-12-06 08:14:51.123440471 +0000 UTC m=+1.007034909 container remove 2338cf2b28dc1ca32642346498a5521c1e6ecfdcdeb389c0aa2a2cbcb42ca703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 08:14:51 compute-0 systemd[1]: libpod-conmon-2338cf2b28dc1ca32642346498a5521c1e6ecfdcdeb389c0aa2a2cbcb42ca703.scope: Deactivated successfully.
Dec 06 08:14:51 compute-0 sudo[397783]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:14:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:14:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 339e896a-c2bb-42d6-9862-c41786628f8c does not exist
Dec 06 08:14:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 983c641c-4e79-40a7-8b53-7982609b0255 does not exist
Dec 06 08:14:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev faa2138d-f206-4732-bc9b-6fdad6b71192 does not exist
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.207841) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891207939, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2172, "num_deletes": 252, "total_data_size": 3986395, "memory_usage": 4047296, "flush_reason": "Manual Compaction"}
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891233650, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 3875028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72701, "largest_seqno": 74872, "table_properties": {"data_size": 3865176, "index_size": 6281, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20462, "raw_average_key_size": 20, "raw_value_size": 3845433, "raw_average_value_size": 3880, "num_data_blocks": 274, "num_entries": 991, "num_filter_entries": 991, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008661, "oldest_key_time": 1765008661, "file_creation_time": 1765008891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 25856 microseconds, and 9018 cpu microseconds.
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.233692) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 3875028 bytes OK
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.233715) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.235288) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.235303) EVENT_LOG_v1 {"time_micros": 1765008891235298, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.235320) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 3977623, prev total WAL file size 3977904, number of live WAL files 2.
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.236158) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(3784KB)], [164(11MB)]
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891236228, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 15871249, "oldest_snapshot_seqno": -1}
Dec 06 08:14:51 compute-0 sudo[397962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:14:51 compute-0 sudo[397962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:51 compute-0 sudo[397962]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:51 compute-0 sudo[397987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:14:51 compute-0 sudo[397987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:14:51 compute-0 sudo[397987]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 10875 keys, 13926818 bytes, temperature: kUnknown
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891350722, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 13926818, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13856905, "index_size": 41640, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27205, "raw_key_size": 287397, "raw_average_key_size": 26, "raw_value_size": 13666797, "raw_average_value_size": 1256, "num_data_blocks": 1582, "num_entries": 10875, "num_filter_entries": 10875, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765008891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.351088) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 13926818 bytes
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.352414) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.4 rd, 121.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 11.4 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 11408, records dropped: 533 output_compression: NoCompression
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.352430) EVENT_LOG_v1 {"time_micros": 1765008891352423, "job": 102, "event": "compaction_finished", "compaction_time_micros": 114687, "compaction_time_cpu_micros": 39126, "output_level": 6, "num_output_files": 1, "total_output_size": 13926818, "num_input_records": 11408, "num_output_records": 10875, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891353388, "job": 102, "event": "table_file_deletion", "file_number": 166}
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008891355408, "job": 102, "event": "table_file_deletion", "file_number": 164}
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.236081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.355540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.355544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.355546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.355548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:14:51 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:14:51.355550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:14:51 compute-0 ceph-mon[74339]: pgmap v3666: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 92 op/s
Dec 06 08:14:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:51 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:14:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 97 op/s
Dec 06 08:14:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:52.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:52.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:53 compute-0 nova_compute[251992]: 2025-12-06 08:14:53.378 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:53 compute-0 ceph-mon[74339]: pgmap v3667: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 97 op/s
Dec 06 08:14:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 79 op/s
Dec 06 08:14:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:54.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:54 compute-0 nova_compute[251992]: 2025-12-06 08:14:54.152 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:54 compute-0 nova_compute[251992]: 2025-12-06 08:14:54.246 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:54 compute-0 nova_compute[251992]: 2025-12-06 08:14:54.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:54 compute-0 nova_compute[251992]: 2025-12-06 08:14:54.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:14:54 compute-0 nova_compute[251992]: 2025-12-06 08:14:54.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:14:54 compute-0 nova_compute[251992]: 2025-12-06 08:14:54.865 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:14:54 compute-0 nova_compute[251992]: 2025-12-06 08:14:54.865 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:14:54 compute-0 nova_compute[251992]: 2025-12-06 08:14:54.866 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:14:54 compute-0 nova_compute[251992]: 2025-12-06 08:14:54.866 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c63c39aa-db19-442c-830d-31f9ddcb5e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:14:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:54.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:55 compute-0 ceph-mon[74339]: pgmap v3668: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 79 op/s
Dec 06 08:14:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1644300321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 203 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 972 KiB/s wr, 131 op/s
Dec 06 08:14:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:56.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1336330205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:56 compute-0 ovn_controller[147168]: 2025-12-06T08:14:56Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:8a:1b 10.100.0.12
Dec 06 08:14:56 compute-0 ovn_controller[147168]: 2025-12-06T08:14:56Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:8a:1b 10.100.0.12
Dec 06 08:14:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:14:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:56.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:14:57 compute-0 nova_compute[251992]: 2025-12-06 08:14:57.287 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Updating instance_info_cache with network_info: [{"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:14:57 compute-0 nova_compute[251992]: 2025-12-06 08:14:57.308 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:14:57 compute-0 nova_compute[251992]: 2025-12-06 08:14:57.308 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:14:57 compute-0 ceph-mon[74339]: pgmap v3669: 305 pgs: 305 active+clean; 203 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 972 KiB/s wr, 131 op/s
Dec 06 08:14:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 181 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 1.2 MiB/s wr, 70 op/s
Dec 06 08:14:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:14:58.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:58 compute-0 nova_compute[251992]: 2025-12-06 08:14:58.386 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1889324331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:14:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:14:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:14:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:14:58.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:14:59 compute-0 nova_compute[251992]: 2025-12-06 08:14:59.248 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:14:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:14:59 compute-0 ceph-mon[74339]: pgmap v3670: 305 pgs: 305 active+clean; 181 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 1.2 MiB/s wr, 70 op/s
Dec 06 08:14:59 compute-0 nova_compute[251992]: 2025-12-06 08:14:59.655 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:14:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 181 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 95 KiB/s rd, 1.2 MiB/s wr, 60 op/s
Dec 06 08:15:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:00.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:00 compute-0 nova_compute[251992]: 2025-12-06 08:15:00.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:00.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:01 compute-0 ceph-mon[74339]: pgmap v3671: 305 pgs: 305 active+clean; 181 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 95 KiB/s rd, 1.2 MiB/s wr, 60 op/s
Dec 06 08:15:01 compute-0 nova_compute[251992]: 2025-12-06 08:15:01.788 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 06 08:15:01 compute-0 nova_compute[251992]: 2025-12-06 08:15:01.858 251996 INFO nova.compute.manager [None req-bd43d4b6-3c90-4f6e-ae0b-83bfc5398cd5 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Get console output
Dec 06 08:15:01 compute-0 nova_compute[251992]: 2025-12-06 08:15:01.866 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:15:01 compute-0 sudo[398017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:01 compute-0 sudo[398017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:01 compute-0 sudo[398017]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:01 compute-0 sudo[398042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:01 compute-0 sudo[398042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:01 compute-0 sudo[398042]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:02.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:02 compute-0 nova_compute[251992]: 2025-12-06 08:15:02.272 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Triggering sync for uuid c63c39aa-db19-442c-830d-31f9ddcb5e48 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 06 08:15:02 compute-0 nova_compute[251992]: 2025-12-06 08:15:02.273 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "c63c39aa-db19-442c-830d-31f9ddcb5e48" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:02 compute-0 nova_compute[251992]: 2025-12-06 08:15:02.274 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:02 compute-0 nova_compute[251992]: 2025-12-06 08:15:02.582 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:02.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:03 compute-0 ceph-mon[74339]: pgmap v3672: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Dec 06 08:15:03 compute-0 nova_compute[251992]: 2025-12-06 08:15:03.426 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:03 compute-0 ovn_controller[147168]: 2025-12-06T08:15:03Z|00777|binding|INFO|Releasing lport 87fca5ce-f232-4474-a8a4-8db880ec3abf from this chassis (sb_readonly=0)
Dec 06 08:15:03 compute-0 nova_compute[251992]: 2025-12-06 08:15:03.612 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:03 compute-0 ovn_controller[147168]: 2025-12-06T08:15:03Z|00778|binding|INFO|Releasing lport 87fca5ce-f232-4474-a8a4-8db880ec3abf from this chassis (sb_readonly=0)
Dec 06 08:15:03 compute-0 nova_compute[251992]: 2025-12-06 08:15:03.716 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Dec 06 08:15:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:03.891 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:03.891 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:03.892 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:04.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:04 compute-0 nova_compute[251992]: 2025-12-06 08:15:04.250 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:04 compute-0 nova_compute[251992]: 2025-12-06 08:15:04.868 251996 INFO nova.compute.manager [None req-9cf6048f-0f5d-4254-b973-42fb1a2d1793 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Get console output
Dec 06 08:15:04 compute-0 nova_compute[251992]: 2025-12-06 08:15:04.873 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:15:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:04.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:05 compute-0 ceph-mon[74339]: pgmap v3673: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Dec 06 08:15:05 compute-0 nova_compute[251992]: 2025-12-06 08:15:05.455 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:05 compute-0 NetworkManager[48965]: <info>  [1765008905.4563] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Dec 06 08:15:05 compute-0 NetworkManager[48965]: <info>  [1765008905.4576] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Dec 06 08:15:05 compute-0 nova_compute[251992]: 2025-12-06 08:15:05.526 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:05 compute-0 ovn_controller[147168]: 2025-12-06T08:15:05Z|00779|binding|INFO|Releasing lport 87fca5ce-f232-4474-a8a4-8db880ec3abf from this chassis (sb_readonly=0)
Dec 06 08:15:05 compute-0 nova_compute[251992]: 2025-12-06 08:15:05.530 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Dec 06 08:15:05 compute-0 nova_compute[251992]: 2025-12-06 08:15:05.914 251996 INFO nova.compute.manager [None req-fff5c63f-dcfb-4a33-91db-798322f75dab d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Get console output
Dec 06 08:15:05 compute-0 nova_compute[251992]: 2025-12-06 08:15:05.919 333192 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Dec 06 08:15:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:06.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:06 compute-0 nova_compute[251992]: 2025-12-06 08:15:06.143 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:06 compute-0 nova_compute[251992]: 2025-12-06 08:15:06.144 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:06.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.213 251996 DEBUG nova.compute.manager [req-97d8752e-2f03-4e07-a3b7-391deeb7478c req-aabbc5bc-0d49-4a8e-8e74-969789327eb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Received event network-changed-22a348dd-6281-472a-b85e-a65dc1638db1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.213 251996 DEBUG nova.compute.manager [req-97d8752e-2f03-4e07-a3b7-391deeb7478c req-aabbc5bc-0d49-4a8e-8e74-969789327eb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Refreshing instance network info cache due to event network-changed-22a348dd-6281-472a-b85e-a65dc1638db1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.214 251996 DEBUG oslo_concurrency.lockutils [req-97d8752e-2f03-4e07-a3b7-391deeb7478c req-aabbc5bc-0d49-4a8e-8e74-969789327eb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.214 251996 DEBUG oslo_concurrency.lockutils [req-97d8752e-2f03-4e07-a3b7-391deeb7478c req-aabbc5bc-0d49-4a8e-8e74-969789327eb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.214 251996 DEBUG nova.network.neutron [req-97d8752e-2f03-4e07-a3b7-391deeb7478c req-aabbc5bc-0d49-4a8e-8e74-969789327eb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Refreshing network info cache for port 22a348dd-6281-472a-b85e-a65dc1638db1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.350 251996 DEBUG oslo_concurrency.lockutils [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "c63c39aa-db19-442c-830d-31f9ddcb5e48" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.351 251996 DEBUG oslo_concurrency.lockutils [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.351 251996 DEBUG oslo_concurrency.lockutils [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.351 251996 DEBUG oslo_concurrency.lockutils [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.351 251996 DEBUG oslo_concurrency.lockutils [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.353 251996 INFO nova.compute.manager [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Terminating instance
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.354 251996 DEBUG nova.compute.manager [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:15:07 compute-0 ceph-mon[74339]: pgmap v3674: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:15:07 compute-0 kernel: tap22a348dd-62 (unregistering): left promiscuous mode
Dec 06 08:15:07 compute-0 NetworkManager[48965]: <info>  [1765008907.7442] device (tap22a348dd-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:15:07 compute-0 ovn_controller[147168]: 2025-12-06T08:15:07Z|00780|binding|INFO|Releasing lport 22a348dd-6281-472a-b85e-a65dc1638db1 from this chassis (sb_readonly=0)
Dec 06 08:15:07 compute-0 ovn_controller[147168]: 2025-12-06T08:15:07Z|00781|binding|INFO|Setting lport 22a348dd-6281-472a-b85e-a65dc1638db1 down in Southbound
Dec 06 08:15:07 compute-0 ovn_controller[147168]: 2025-12-06T08:15:07Z|00782|binding|INFO|Removing iface tap22a348dd-62 ovn-installed in OVS
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.752 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:07.761 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:8a:1b 10.100.0.12'], port_security=['fa:16:3e:0f:8a:1b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c63c39aa-db19-442c-830d-31f9ddcb5e48', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00b1f317-7b98-48e8-9683-424e83375985', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f4735a799c84437b9dd4ea8778ad2fbb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78dcef83-a2a6-4643-8297-c4b0ef39ffc0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c85a23ec-dc54-431d-9d86-b858cac47687, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=22a348dd-6281-472a-b85e-a65dc1638db1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:15:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:07.763 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 22a348dd-6281-472a-b85e-a65dc1638db1 in datapath 00b1f317-7b98-48e8-9683-424e83375985 unbound from our chassis
Dec 06 08:15:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:07.764 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00b1f317-7b98-48e8-9683-424e83375985, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:15:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:07.766 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d64ad8ed-ad22-4033-84fd-5aeffc662ed7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:07 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:07.766 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00b1f317-7b98-48e8-9683-424e83375985 namespace which is not needed anymore
Dec 06 08:15:07 compute-0 nova_compute[251992]: 2025-12-06 08:15:07.771 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:07 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000cb.scope: Deactivated successfully.
Dec 06 08:15:07 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000cb.scope: Consumed 14.909s CPU time.
Dec 06 08:15:07 compute-0 systemd-machined[212986]: Machine qemu-92-instance-000000cb terminated.
Dec 06 08:15:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 238 KiB/s rd, 1.2 MiB/s wr, 37 op/s
Dec 06 08:15:07 compute-0 neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985[397171]: [NOTICE]   (397177) : haproxy version is 2.8.14-c23fe91
Dec 06 08:15:07 compute-0 neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985[397171]: [NOTICE]   (397177) : path to executable is /usr/sbin/haproxy
Dec 06 08:15:07 compute-0 neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985[397171]: [WARNING]  (397177) : Exiting Master process...
Dec 06 08:15:07 compute-0 neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985[397171]: [WARNING]  (397177) : Exiting Master process...
Dec 06 08:15:07 compute-0 neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985[397171]: [ALERT]    (397177) : Current worker (397179) exited with code 143 (Terminated)
Dec 06 08:15:07 compute-0 neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985[397171]: [WARNING]  (397177) : All workers exited. Exiting... (0)
Dec 06 08:15:07 compute-0 systemd[1]: libpod-412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f.scope: Deactivated successfully.
Dec 06 08:15:07 compute-0 podman[398096]: 2025-12-06 08:15:07.901733722 +0000 UTC m=+0.044717876 container died 412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 08:15:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f-userdata-shm.mount: Deactivated successfully.
Dec 06 08:15:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-662ca894182bb1d7327bbf92137002fdf783f00d2e21f5b3c4585035beedc4e1-merged.mount: Deactivated successfully.
Dec 06 08:15:07 compute-0 podman[398096]: 2025-12-06 08:15:07.940065203 +0000 UTC m=+0.083049357 container cleanup 412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:15:07 compute-0 systemd[1]: libpod-conmon-412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f.scope: Deactivated successfully.
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.004 251996 INFO nova.virt.libvirt.driver [-] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Instance destroyed successfully.
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.004 251996 DEBUG nova.objects.instance [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lazy-loading 'resources' on Instance uuid c63c39aa-db19-442c-830d-31f9ddcb5e48 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:15:08 compute-0 podman[398127]: 2025-12-06 08:15:08.015848662 +0000 UTC m=+0.056378762 container remove 412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:15:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:08.021 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[16a8b687-08ed-4cc6-b53a-9c77376b59fd]: (4, ('Sat Dec  6 08:15:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985 (412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f)\n412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f\nSat Dec  6 08:15:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-00b1f317-7b98-48e8-9683-424e83375985 (412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f)\n412b616d4f75d84e2d37b38843037ca2492225f2f10ad8bebb5b9e71aeec924f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:08.022 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf43b3e-91f9-4f2b-ac06-f7330aa30665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:08.024 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00b1f317-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.026 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:08 compute-0 kernel: tap00b1f317-70: left promiscuous mode
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.042 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:08.044 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[933044a5-32c3-4df2-a8c2-58505199646c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:08.054 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3e63b988-e02c-41f9-823b-4a7e4746c4a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:08.055 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac453fd-ea4c-4562-b8c0-04a2f8d153a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:08.069 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1f89cb32-2b64-4f85-9feb-f38de45002bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 915410, 'reachable_time': 43454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398154, 'error': None, 'target': 'ovnmeta-00b1f317-7b98-48e8-9683-424e83375985', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d00b1f317\x2d7b98\x2d48e8\x2d9683\x2d424e83375985.mount: Deactivated successfully.
Dec 06 08:15:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:08.072 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00b1f317-7b98-48e8-9683-424e83375985 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:15:08 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:08.074 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[d76ec334-0ec2-42f9-b3b3-2b2835b6cbf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:08.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.379 251996 DEBUG nova.virt.libvirt.vif [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:14:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2096887961',display_name='tempest-TestNetworkBasicOps-server-2096887961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2096887961',id=203,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGLMVHiBFVDbILzmiP2hjkfcRe9nbbI4rX+p9dN2d8el2VZSbgy44eMeyqYZznoUx63aOMZ7aSNernxTJS2o4jBvweg41uY5KZFOccoiCkasts6emqIhof6s0BgnkseSZQ==',key_name='tempest-TestNetworkBasicOps-2111706693',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:14:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f4735a799c84437b9dd4ea8778ad2fbb',ramdisk_id='',reservation_id='r-2k037ew5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1435471576',owner_user_name='tempest-TestNetworkBasicOps-1435471576-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:14:42Z,user_data=None,user_id='d5359905348247d0b9b5b95982e890bb',uuid=c63c39aa-db19-442c-830d-31f9ddcb5e48,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.380 251996 DEBUG nova.network.os_vif_util [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converting VIF {"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.380 251996 DEBUG nova.network.os_vif_util [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=22a348dd-6281-472a-b85e-a65dc1638db1,network=Network(00b1f317-7b98-48e8-9683-424e83375985),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22a348dd-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.381 251996 DEBUG os_vif [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=22a348dd-6281-472a-b85e-a65dc1638db1,network=Network(00b1f317-7b98-48e8-9683-424e83375985),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22a348dd-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.383 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.383 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22a348dd-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.385 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.386 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.389 251996 INFO os_vif [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:8a:1b,bridge_name='br-int',has_traffic_filtering=True,id=22a348dd-6281-472a-b85e-a65dc1638db1,network=Network(00b1f317-7b98-48e8-9683-424e83375985),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22a348dd-62')
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.428 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.488 251996 DEBUG nova.compute.manager [req-011de86b-d4b5-44b3-be3a-c5494fd2617f req-3b685e07-2bb7-40a9-b620-7c482a993453 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Received event network-vif-unplugged-22a348dd-6281-472a-b85e-a65dc1638db1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.489 251996 DEBUG oslo_concurrency.lockutils [req-011de86b-d4b5-44b3-be3a-c5494fd2617f req-3b685e07-2bb7-40a9-b620-7c482a993453 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.489 251996 DEBUG oslo_concurrency.lockutils [req-011de86b-d4b5-44b3-be3a-c5494fd2617f req-3b685e07-2bb7-40a9-b620-7c482a993453 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.489 251996 DEBUG oslo_concurrency.lockutils [req-011de86b-d4b5-44b3-be3a-c5494fd2617f req-3b685e07-2bb7-40a9-b620-7c482a993453 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.489 251996 DEBUG nova.compute.manager [req-011de86b-d4b5-44b3-be3a-c5494fd2617f req-3b685e07-2bb7-40a9-b620-7c482a993453 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] No waiting events found dispatching network-vif-unplugged-22a348dd-6281-472a-b85e-a65dc1638db1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:15:08 compute-0 nova_compute[251992]: 2025-12-06 08:15:08.490 251996 DEBUG nova.compute.manager [req-011de86b-d4b5-44b3-be3a-c5494fd2617f req-3b685e07-2bb7-40a9-b620-7c482a993453 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Received event network-vif-unplugged-22a348dd-6281-472a-b85e-a65dc1638db1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:15:08 compute-0 ceph-mon[74339]: pgmap v3675: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 238 KiB/s rd, 1.2 MiB/s wr, 37 op/s
Dec 06 08:15:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:08.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:15:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1900472723' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:15:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:15:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1900472723' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:15:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:09 compute-0 nova_compute[251992]: 2025-12-06 08:15:09.697 251996 DEBUG nova.network.neutron [req-97d8752e-2f03-4e07-a3b7-391deeb7478c req-aabbc5bc-0d49-4a8e-8e74-969789327eb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Updated VIF entry in instance network info cache for port 22a348dd-6281-472a-b85e-a65dc1638db1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:15:09 compute-0 nova_compute[251992]: 2025-12-06 08:15:09.697 251996 DEBUG nova.network.neutron [req-97d8752e-2f03-4e07-a3b7-391deeb7478c req-aabbc5bc-0d49-4a8e-8e74-969789327eb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Updating instance_info_cache with network_info: [{"id": "22a348dd-6281-472a-b85e-a65dc1638db1", "address": "fa:16:3e:0f:8a:1b", "network": {"id": "00b1f317-7b98-48e8-9683-424e83375985", "bridge": "br-int", "label": "tempest-network-smoke--154479745", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f4735a799c84437b9dd4ea8778ad2fbb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22a348dd-62", "ovs_interfaceid": "22a348dd-6281-472a-b85e-a65dc1638db1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:15:09 compute-0 nova_compute[251992]: 2025-12-06 08:15:09.721 251996 DEBUG oslo_concurrency.lockutils [req-97d8752e-2f03-4e07-a3b7-391deeb7478c req-aabbc5bc-0d49-4a8e-8e74-969789327eb6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-c63c39aa-db19-442c-830d-31f9ddcb5e48" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:15:09 compute-0 nova_compute[251992]: 2025-12-06 08:15:09.788 251996 INFO nova.virt.libvirt.driver [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Deleting instance files /var/lib/nova/instances/c63c39aa-db19-442c-830d-31f9ddcb5e48_del
Dec 06 08:15:09 compute-0 nova_compute[251992]: 2025-12-06 08:15:09.789 251996 INFO nova.virt.libvirt.driver [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Deletion of /var/lib/nova/instances/c63c39aa-db19-442c-830d-31f9ddcb5e48_del complete
Dec 06 08:15:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 224 KiB/s rd, 993 KiB/s wr, 34 op/s
Dec 06 08:15:09 compute-0 nova_compute[251992]: 2025-12-06 08:15:09.835 251996 INFO nova.compute.manager [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Took 2.48 seconds to destroy the instance on the hypervisor.
Dec 06 08:15:09 compute-0 nova_compute[251992]: 2025-12-06 08:15:09.835 251996 DEBUG oslo.service.loopingcall [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:15:09 compute-0 nova_compute[251992]: 2025-12-06 08:15:09.836 251996 DEBUG nova.compute.manager [-] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:15:09 compute-0 nova_compute[251992]: 2025-12-06 08:15:09.836 251996 DEBUG nova.network.neutron [-] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:15:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1900472723' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:15:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1900472723' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:15:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:10.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:10 compute-0 nova_compute[251992]: 2025-12-06 08:15:10.600 251996 DEBUG nova.compute.manager [req-ad7450db-41bd-467d-a025-d6c59b4b7de9 req-fe9c5276-c227-4bec-92e2-367e0a21e87e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Received event network-vif-plugged-22a348dd-6281-472a-b85e-a65dc1638db1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:15:10 compute-0 nova_compute[251992]: 2025-12-06 08:15:10.601 251996 DEBUG oslo_concurrency.lockutils [req-ad7450db-41bd-467d-a025-d6c59b4b7de9 req-fe9c5276-c227-4bec-92e2-367e0a21e87e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:10 compute-0 nova_compute[251992]: 2025-12-06 08:15:10.601 251996 DEBUG oslo_concurrency.lockutils [req-ad7450db-41bd-467d-a025-d6c59b4b7de9 req-fe9c5276-c227-4bec-92e2-367e0a21e87e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:10 compute-0 nova_compute[251992]: 2025-12-06 08:15:10.602 251996 DEBUG oslo_concurrency.lockutils [req-ad7450db-41bd-467d-a025-d6c59b4b7de9 req-fe9c5276-c227-4bec-92e2-367e0a21e87e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:10 compute-0 nova_compute[251992]: 2025-12-06 08:15:10.602 251996 DEBUG nova.compute.manager [req-ad7450db-41bd-467d-a025-d6c59b4b7de9 req-fe9c5276-c227-4bec-92e2-367e0a21e87e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] No waiting events found dispatching network-vif-plugged-22a348dd-6281-472a-b85e-a65dc1638db1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:15:10 compute-0 nova_compute[251992]: 2025-12-06 08:15:10.603 251996 WARNING nova.compute.manager [req-ad7450db-41bd-467d-a025-d6c59b4b7de9 req-fe9c5276-c227-4bec-92e2-367e0a21e87e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Received unexpected event network-vif-plugged-22a348dd-6281-472a-b85e-a65dc1638db1 for instance with vm_state active and task_state deleting.
Dec 06 08:15:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:10.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:10 compute-0 ceph-mon[74339]: pgmap v3676: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 224 KiB/s rd, 993 KiB/s wr, 34 op/s
Dec 06 08:15:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 242 KiB/s rd, 994 KiB/s wr, 61 op/s
Dec 06 08:15:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:12.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:12 compute-0 podman[398176]: 2025-12-06 08:15:12.422688208 +0000 UTC m=+0.079803730 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:15:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:12.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:12 compute-0 ceph-mon[74339]: pgmap v3677: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 242 KiB/s rd, 994 KiB/s wr, 61 op/s
Dec 06 08:15:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:15:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:15:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:15:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:15:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:15:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:15:13 compute-0 nova_compute[251992]: 2025-12-06 08:15:13.422 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:13 compute-0 nova_compute[251992]: 2025-12-06 08:15:13.430 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:13 compute-0 nova_compute[251992]: 2025-12-06 08:15:13.742 251996 DEBUG nova.network.neutron [-] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:15:13 compute-0 nova_compute[251992]: 2025-12-06 08:15:13.765 251996 INFO nova.compute.manager [-] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Took 3.93 seconds to deallocate network for instance.
Dec 06 08:15:13 compute-0 nova_compute[251992]: 2025-12-06 08:15:13.815 251996 DEBUG oslo_concurrency.lockutils [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:13 compute-0 nova_compute[251992]: 2025-12-06 08:15:13.815 251996 DEBUG oslo_concurrency.lockutils [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 08:15:13 compute-0 nova_compute[251992]: 2025-12-06 08:15:13.874 251996 DEBUG nova.compute.manager [req-5b2e303e-4375-4ec9-923a-5b4c9e2b418c req-a7fdfe51-f1ca-496b-9bc6-579d8a470d56 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Received event network-vif-deleted-22a348dd-6281-472a-b85e-a65dc1638db1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:15:13 compute-0 nova_compute[251992]: 2025-12-06 08:15:13.877 251996 DEBUG oslo_concurrency.processutils [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:15:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:14.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:15:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/349789118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:14 compute-0 nova_compute[251992]: 2025-12-06 08:15:14.415 251996 DEBUG oslo_concurrency.processutils [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:15:14 compute-0 nova_compute[251992]: 2025-12-06 08:15:14.421 251996 DEBUG nova.compute.provider_tree [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:15:14 compute-0 nova_compute[251992]: 2025-12-06 08:15:14.439 251996 DEBUG nova.scheduler.client.report [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:15:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:14 compute-0 nova_compute[251992]: 2025-12-06 08:15:14.479 251996 DEBUG oslo_concurrency.lockutils [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:14 compute-0 nova_compute[251992]: 2025-12-06 08:15:14.507 251996 INFO nova.scheduler.client.report [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Deleted allocations for instance c63c39aa-db19-442c-830d-31f9ddcb5e48
Dec 06 08:15:14 compute-0 nova_compute[251992]: 2025-12-06 08:15:14.587 251996 DEBUG oslo_concurrency.lockutils [None req-00fa33c4-b2a5-49d8-97e8-1d28c44accf1 d5359905348247d0b9b5b95982e890bb f4735a799c84437b9dd4ea8778ad2fbb - - default default] Lock "c63c39aa-db19-442c-830d-31f9ddcb5e48" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:14.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:15 compute-0 ceph-mon[74339]: pgmap v3678: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 08:15:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/349789118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 08:15:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:16.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:16.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:17 compute-0 ceph-mon[74339]: pgmap v3679: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Dec 06 08:15:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:15:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:18.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:18 compute-0 nova_compute[251992]: 2025-12-06 08:15:18.199 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:18 compute-0 podman[398228]: 2025-12-06 08:15:18.397162961 +0000 UTC m=+0.055265254 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 06 08:15:18 compute-0 podman[398229]: 2025-12-06 08:15:18.405232789 +0000 UTC m=+0.059243931 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:15:18 compute-0 nova_compute[251992]: 2025-12-06 08:15:18.423 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:18 compute-0 nova_compute[251992]: 2025-12-06 08:15:18.431 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:15:18
Dec 06 08:15:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:15:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:15:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', 'vms', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control']
Dec 06 08:15:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:15:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:18.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:19 compute-0 ceph-mon[74339]: pgmap v3680: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:15:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:15:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:20.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:20 compute-0 ceph-mon[74339]: pgmap v3681: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:15:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:15:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:20.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:15:21 compute-0 nova_compute[251992]: 2025-12-06 08:15:21.059 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:21 compute-0 nova_compute[251992]: 2025-12-06 08:15:21.165 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:15:22 compute-0 sudo[398271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:22 compute-0 sudo[398271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:22 compute-0 sudo[398271]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:22 compute-0 sudo[398296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:22 compute-0 sudo[398296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:22 compute-0 sudo[398296]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:22.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:22.218 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=95, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=94) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:15:22 compute-0 nova_compute[251992]: 2025-12-06 08:15:22.218 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:22.220 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:15:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:15:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 74K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1496 writes, 6399 keys, 1494 commit groups, 1.0 writes per commit group, ingest: 10.05 MB, 0.02 MB/s
                                           Interval WAL: 1496 writes, 1494 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     43.2      2.34              0.33        51    0.046       0      0       0.0       0.0
                                             L6      1/0   13.28 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.2    103.7     88.9      5.96              1.70        50    0.119    400K    27K       0.0       0.0
                                            Sum      1/0   13.28 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.2     74.4     76.0      8.30              2.03       101    0.082    400K    27K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8    127.1    129.5      0.56              0.25        10    0.056     55K   2605       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    103.7     88.9      5.96              1.70        50    0.119    400K    27K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     43.3      2.34              0.33        50    0.047       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.099, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.62 GB write, 0.10 MB/s write, 0.60 GB read, 0.09 MB/s read, 8.3 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 67.90 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000456 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3896,65.06 MB,21.4029%) FilterBlock(102,1.07 MB,0.35165%) IndexBlock(102,1.77 MB,0.581541%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 08:15:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:22.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:23 compute-0 nova_compute[251992]: 2025-12-06 08:15:23.003 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765008908.0013733, c63c39aa-db19-442c-830d-31f9ddcb5e48 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:15:23 compute-0 nova_compute[251992]: 2025-12-06 08:15:23.003 251996 INFO nova.compute.manager [-] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] VM Stopped (Lifecycle Event)
Dec 06 08:15:23 compute-0 nova_compute[251992]: 2025-12-06 08:15:23.025 251996 DEBUG nova.compute.manager [None req-09036e2c-29dd-45d6-a48a-b6704889e4fd - - - - - -] [instance: c63c39aa-db19-442c-830d-31f9ddcb5e48] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:15:23 compute-0 ceph-mon[74339]: pgmap v3682: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:15:23 compute-0 nova_compute[251992]: 2025-12-06 08:15:23.425 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:23 compute-0 nova_compute[251992]: 2025-12-06 08:15:23.433 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:15:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:15:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3683: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:24.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:24.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:25 compute-0 ceph-mon[74339]: pgmap v3683: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:26.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:15:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:15:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:15:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:26.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:15:27 compute-0 ceph-mon[74339]: pgmap v3684: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:15:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:15:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:15:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:15:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:15:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:28.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:28.222 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '95'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:15:28 compute-0 nova_compute[251992]: 2025-12-06 08:15:28.427 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:28 compute-0 nova_compute[251992]: 2025-12-06 08:15:28.435 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:28.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:29 compute-0 ceph-mon[74339]: pgmap v3685: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:30.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:30.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:31 compute-0 ceph-mon[74339]: pgmap v3686: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:32.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:32 compute-0 ceph-mon[74339]: pgmap v3687: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:15:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:32.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:15:33 compute-0 nova_compute[251992]: 2025-12-06 08:15:33.429 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:33 compute-0 nova_compute[251992]: 2025-12-06 08:15:33.437 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:15:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:34.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:15:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:34.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:35 compute-0 ceph-mon[74339]: pgmap v3688: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:36.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:15:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:15:37 compute-0 ceph-mon[74339]: pgmap v3689: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:38.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:38 compute-0 nova_compute[251992]: 2025-12-06 08:15:38.431 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:38 compute-0 nova_compute[251992]: 2025-12-06 08:15:38.439 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:38 compute-0 ceph-mon[74339]: pgmap v3690: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1889620069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:39 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3724633399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:40.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:40.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:41 compute-0 ceph-mon[74339]: pgmap v3691: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:42 compute-0 sudo[398331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:42 compute-0 sudo[398331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:42 compute-0 sudo[398331]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:42.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:42 compute-0 sudo[398356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:42 compute-0 sudo[398356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:42 compute-0 sudo[398356]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:42.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:15:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:15:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:15:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:15:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:15:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:15:43 compute-0 ceph-mon[74339]: pgmap v3692: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:43 compute-0 nova_compute[251992]: 2025-12-06 08:15:43.433 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:43 compute-0 nova_compute[251992]: 2025-12-06 08:15:43.440 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:43 compute-0 podman[398382]: 2025-12-06 08:15:43.46908238 +0000 UTC m=+0.122427837 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:15:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:44.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:44.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:45 compute-0 ceph-mon[74339]: pgmap v3693: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Dec 06 08:15:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 08:15:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:46.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.592 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "177567d6-7f0b-46df-aa8e-60e0089ae786" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.593 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.607 251996 DEBUG nova.compute.manager [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.680 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.680 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.686 251996 DEBUG nova.virt.hardware [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.687 251996 INFO nova.compute.claims [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.806 251996 DEBUG nova.scheduler.client.report [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.822 251996 DEBUG nova.scheduler.client.report [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.823 251996 DEBUG nova.compute.provider_tree [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.844 251996 DEBUG nova.scheduler.client.report [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.865 251996 DEBUG nova.scheduler.client.report [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:15:46 compute-0 nova_compute[251992]: 2025-12-06 08:15:46.895 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:15:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:15:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:46.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:15:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:15:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651028505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:47 compute-0 ceph-mon[74339]: pgmap v3694: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 06 08:15:47 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2651028505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.351 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.358 251996 DEBUG nova.compute.provider_tree [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.373 251996 DEBUG nova.scheduler.client.report [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.400 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.401 251996 DEBUG nova.compute.manager [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.450 251996 DEBUG nova.compute.manager [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.451 251996 DEBUG nova.network.neutron [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.474 251996 INFO nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.494 251996 DEBUG nova.compute.manager [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.602 251996 DEBUG nova.compute.manager [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.603 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.604 251996 INFO nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Creating image(s)
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.633 251996 DEBUG nova.storage.rbd_utils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 177567d6-7f0b-46df-aa8e-60e0089ae786_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.662 251996 DEBUG nova.storage.rbd_utils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 177567d6-7f0b-46df-aa8e-60e0089ae786_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.688 251996 DEBUG nova.storage.rbd_utils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 177567d6-7f0b-46df-aa8e-60e0089ae786_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.691 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.720 251996 DEBUG nova.policy [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0432cb6633e14c1b86fc320e7f3bb880', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.760 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.761 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.761 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.762 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.788 251996 DEBUG nova.storage.rbd_utils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 177567d6-7f0b-46df-aa8e-60e0089ae786_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:15:47 compute-0 nova_compute[251992]: 2025-12-06 08:15:47.792 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 177567d6-7f0b-46df-aa8e-60e0089ae786_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:15:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 06 08:15:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:48.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:48 compute-0 nova_compute[251992]: 2025-12-06 08:15:48.435 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:48 compute-0 nova_compute[251992]: 2025-12-06 08:15:48.441 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:48 compute-0 nova_compute[251992]: 2025-12-06 08:15:48.598 251996 DEBUG nova.network.neutron [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Successfully created port: 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:15:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:48.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:49 compute-0 nova_compute[251992]: 2025-12-06 08:15:49.342 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 177567d6-7f0b-46df-aa8e-60e0089ae786_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:15:49 compute-0 ceph-mon[74339]: pgmap v3695: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 06 08:15:49 compute-0 podman[398527]: 2025-12-06 08:15:49.401936983 +0000 UTC m=+0.056559177 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 08:15:49 compute-0 podman[398528]: 2025-12-06 08:15:49.414069793 +0000 UTC m=+0.063734962 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, container_name=multipathd)
Dec 06 08:15:49 compute-0 nova_compute[251992]: 2025-12-06 08:15:49.579 251996 DEBUG nova.storage.rbd_utils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] resizing rbd image 177567d6-7f0b-46df-aa8e-60e0089ae786_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:15:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:49 compute-0 nova_compute[251992]: 2025-12-06 08:15:49.690 251996 DEBUG nova.objects.instance [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 177567d6-7f0b-46df-aa8e-60e0089ae786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:15:49 compute-0 nova_compute[251992]: 2025-12-06 08:15:49.706 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:15:49 compute-0 nova_compute[251992]: 2025-12-06 08:15:49.706 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Ensure instance console log exists: /var/lib/nova/instances/177567d6-7f0b-46df-aa8e-60e0089ae786/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:15:49 compute-0 nova_compute[251992]: 2025-12-06 08:15:49.707 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:49 compute-0 nova_compute[251992]: 2025-12-06 08:15:49.707 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:49 compute-0 nova_compute[251992]: 2025-12-06 08:15:49.707 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 06 08:15:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:50.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.500 251996 DEBUG nova.network.neutron [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Successfully updated port: 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.542 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.543 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquired lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.543 251996 DEBUG nova.network.neutron [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:15:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1497060737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2115213910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.623 251996 DEBUG nova.compute.manager [req-fe4ca0bd-dff2-478c-85c4-9081d63f53e3 req-b1389efd-c8f4-4c1a-909a-416c823a18de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Received event network-changed-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.624 251996 DEBUG nova.compute.manager [req-fe4ca0bd-dff2-478c-85c4-9081d63f53e3 req-b1389efd-c8f4-4c1a-909a-416c823a18de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Refreshing instance network info cache due to event network-changed-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.624 251996 DEBUG oslo_concurrency.lockutils [req-fe4ca0bd-dff2-478c-85c4-9081d63f53e3 req-b1389efd-c8f4-4c1a-909a-416c823a18de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.678 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.678 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.678 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.678 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.679 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:15:50 compute-0 nova_compute[251992]: 2025-12-06 08:15:50.723 251996 DEBUG nova.network.neutron [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:15:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:50.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:15:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/911365166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.088 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.247 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.248 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4150MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.249 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.249 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.326 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 177567d6-7f0b-46df-aa8e-60e0089ae786 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.327 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.327 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.368 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.577 251996 DEBUG nova.network.neutron [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Updating instance_info_cache with network_info: [{"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:15:51 compute-0 ceph-mon[74339]: pgmap v3696: 305 pgs: 305 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 06 08:15:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/911365166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.613 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Releasing lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.614 251996 DEBUG nova.compute.manager [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Instance network_info: |[{"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.614 251996 DEBUG oslo_concurrency.lockutils [req-fe4ca0bd-dff2-478c-85c4-9081d63f53e3 req-b1389efd-c8f4-4c1a-909a-416c823a18de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.615 251996 DEBUG nova.network.neutron [req-fe4ca0bd-dff2-478c-85c4-9081d63f53e3 req-b1389efd-c8f4-4c1a-909a-416c823a18de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Refreshing network info cache for port 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.619 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Start _get_guest_xml network_info=[{"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.627 251996 WARNING nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.634 251996 DEBUG nova.virt.libvirt.host [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.634 251996 DEBUG nova.virt.libvirt.host [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.639 251996 DEBUG nova.virt.libvirt.host [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.639 251996 DEBUG nova.virt.libvirt.host [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.641 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.641 251996 DEBUG nova.virt.hardware [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.641 251996 DEBUG nova.virt.hardware [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.642 251996 DEBUG nova.virt.hardware [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.642 251996 DEBUG nova.virt.hardware [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.642 251996 DEBUG nova.virt.hardware [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.643 251996 DEBUG nova.virt.hardware [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.643 251996 DEBUG nova.virt.hardware [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.643 251996 DEBUG nova.virt.hardware [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.644 251996 DEBUG nova.virt.hardware [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.644 251996 DEBUG nova.virt.hardware [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.644 251996 DEBUG nova.virt.hardware [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.647 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:15:51 compute-0 sudo[398680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:51 compute-0 sudo[398680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:51 compute-0 sudo[398680]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:51 compute-0 sudo[398706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:15:51 compute-0 sudo[398706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:51 compute-0 sudo[398706]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:51 compute-0 sudo[398731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:51 compute-0 sudo[398731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:51 compute-0 sudo[398731]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:15:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/961934422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.829 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:15:51 compute-0 sudo[398775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:15:51 compute-0 sudo[398775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.838 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.855 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.878 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:15:51 compute-0 nova_compute[251992]: 2025-12-06 08:15:51.879 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:15:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481869598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.094 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.127 251996 DEBUG nova.storage.rbd_utils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 177567d6-7f0b-46df-aa8e-60e0089ae786_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.132 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:15:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:52.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:52 compute-0 sudo[398775]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:15:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:15:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:15:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:15:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:15:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:15:52 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 69a5b0df-fb87-4e11-9f2f-964fc33c6546 does not exist
Dec 06 08:15:52 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 921cc0d4-2324-453d-94da-9b895ee84a43 does not exist
Dec 06 08:15:52 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7293ee67-4955-4909-b87d-98ce70879512 does not exist
Dec 06 08:15:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:15:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:15:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:15:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:15:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:15:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:15:52 compute-0 sudo[398873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:52 compute-0 sudo[398873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:52 compute-0 sudo[398873]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:52 compute-0 sudo[398898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:15:52 compute-0 sudo[398898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:52 compute-0 sudo[398898]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:52 compute-0 sudo[398923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:52 compute-0 sudo[398923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:52 compute-0 sudo[398923]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:15:52 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3720579820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.599 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.602 251996 DEBUG nova.virt.libvirt.vif [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:15:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-1654357624',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-1654357624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=204,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKaZAyJuyUPzASOkR1gDA1fKpANBX/232ZFoOe4S4TqW4oJaPbVRWCH6B3MgYuDyWBOXcKvJA2hYTFzL7aJjwbQRzjOcMySxOVJCemy5U4dU6fSZoXsToGx4Gww1etdCtw==',key_name='tempest-TestSecurityGroupsBasicOps-1382558522',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-6isxqqan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:15:47Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=177567d6-7f0b-46df-aa8e-60e0089ae786,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.602 251996 DEBUG nova.network.os_vif_util [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.603 251996 DEBUG nova.network.os_vif_util [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:3c:41,bridge_name='br-int',has_traffic_filtering=True,id=50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0,network=Network(538c6ce0-da01-452d-90fd-a1413cdabc3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50ea2b6d-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.605 251996 DEBUG nova.objects.instance [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 177567d6-7f0b-46df-aa8e-60e0089ae786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:15:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/961934422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:52 compute-0 ceph-mon[74339]: pgmap v3697: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 06 08:15:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1481869598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:15:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:15:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:15:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:15:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:15:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:15:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:15:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3720579820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:15:52 compute-0 sudo[398948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:15:52 compute-0 sudo[398948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.622 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:15:52 compute-0 nova_compute[251992]:   <uuid>177567d6-7f0b-46df-aa8e-60e0089ae786</uuid>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   <name>instance-000000cc</name>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-1654357624</nova:name>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:15:51</nova:creationTime>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <nova:user uuid="0432cb6633e14c1b86fc320e7f3bb880">tempest-TestSecurityGroupsBasicOps-568463891-project-member</nova:user>
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <nova:project uuid="5d23d1d6ffc142eaa9bee0ef93fe60e4">tempest-TestSecurityGroupsBasicOps-568463891</nova:project>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <nova:port uuid="50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0">
Dec 06 08:15:52 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <system>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <entry name="serial">177567d6-7f0b-46df-aa8e-60e0089ae786</entry>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <entry name="uuid">177567d6-7f0b-46df-aa8e-60e0089ae786</entry>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     </system>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   <os>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   </os>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   <features>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   </features>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/177567d6-7f0b-46df-aa8e-60e0089ae786_disk">
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       </source>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/177567d6-7f0b-46df-aa8e-60e0089ae786_disk.config">
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       </source>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:15:52 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:41:3c:41"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <target dev="tap50ea2b6d-c5"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/177567d6-7f0b-46df-aa8e-60e0089ae786/console.log" append="off"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <video>
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     </video>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:15:52 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:15:52 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:15:52 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:15:52 compute-0 nova_compute[251992]: </domain>
Dec 06 08:15:52 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.623 251996 DEBUG nova.compute.manager [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Preparing to wait for external event network-vif-plugged-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.623 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.623 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.623 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.624 251996 DEBUG nova.virt.libvirt.vif [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:15:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-1654357624',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-1654357624',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=204,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKaZAyJuyUPzASOkR1gDA1fKpANBX/232ZFoOe4S4TqW4oJaPbVRWCH6B3MgYuDyWBOXcKvJA2hYTFzL7aJjwbQRzjOcMySxOVJCemy5U4dU6fSZoXsToGx4Gww1etdCtw==',key_name='tempest-TestSecurityGroupsBasicOps-1382558522',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-6isxqqan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:15:47Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=177567d6-7f0b-46df-aa8e-60e0089ae786,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.624 251996 DEBUG nova.network.os_vif_util [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.625 251996 DEBUG nova.network.os_vif_util [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:3c:41,bridge_name='br-int',has_traffic_filtering=True,id=50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0,network=Network(538c6ce0-da01-452d-90fd-a1413cdabc3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50ea2b6d-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.625 251996 DEBUG os_vif [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:3c:41,bridge_name='br-int',has_traffic_filtering=True,id=50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0,network=Network(538c6ce0-da01-452d-90fd-a1413cdabc3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50ea2b6d-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.626 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.627 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.628 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.631 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.632 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50ea2b6d-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.632 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50ea2b6d-c5, col_values=(('external_ids', {'iface-id': '50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:3c:41', 'vm-uuid': '177567d6-7f0b-46df-aa8e-60e0089ae786'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.634 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:52 compute-0 NetworkManager[48965]: <info>  [1765008952.6353] manager: (tap50ea2b6d-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.637 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.639 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.641 251996 INFO os_vif [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:3c:41,bridge_name='br-int',has_traffic_filtering=True,id=50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0,network=Network(538c6ce0-da01-452d-90fd-a1413cdabc3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50ea2b6d-c5')
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.686 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.686 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.687 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No VIF found with MAC fa:16:3e:41:3c:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.687 251996 INFO nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Using config drive
Dec 06 08:15:52 compute-0 nova_compute[251992]: 2025-12-06 08:15:52.718 251996 DEBUG nova.storage.rbd_utils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 177567d6-7f0b-46df-aa8e-60e0089ae786_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:15:52 compute-0 podman[399034]: 2025-12-06 08:15:52.941544922 +0000 UTC m=+0.044212673 container create b296de6d4ee780deb17aba9733871d8a39a83fd2a534391dd981f56450d7e210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:15:52 compute-0 systemd[1]: Started libpod-conmon-b296de6d4ee780deb17aba9733871d8a39a83fd2a534391dd981f56450d7e210.scope.
Dec 06 08:15:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:15:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:52.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:15:53 compute-0 podman[399034]: 2025-12-06 08:15:52.92195557 +0000 UTC m=+0.024623331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:15:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:15:53 compute-0 podman[399034]: 2025-12-06 08:15:53.078645676 +0000 UTC m=+0.181313477 container init b296de6d4ee780deb17aba9733871d8a39a83fd2a534391dd981f56450d7e210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 08:15:53 compute-0 podman[399034]: 2025-12-06 08:15:53.09018349 +0000 UTC m=+0.192851241 container start b296de6d4ee780deb17aba9733871d8a39a83fd2a534391dd981f56450d7e210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 08:15:53 compute-0 podman[399034]: 2025-12-06 08:15:53.093733286 +0000 UTC m=+0.196401037 container attach b296de6d4ee780deb17aba9733871d8a39a83fd2a534391dd981f56450d7e210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swirles, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 08:15:53 compute-0 bold_swirles[399050]: 167 167
Dec 06 08:15:53 compute-0 systemd[1]: libpod-b296de6d4ee780deb17aba9733871d8a39a83fd2a534391dd981f56450d7e210.scope: Deactivated successfully.
Dec 06 08:15:53 compute-0 podman[399034]: 2025-12-06 08:15:53.099382379 +0000 UTC m=+0.202050140 container died b296de6d4ee780deb17aba9733871d8a39a83fd2a534391dd981f56450d7e210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swirles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4522ac41416ad1bc1a41952ff5f5ba542e39f4eb47d5ae4cf6f32dfa6162224-merged.mount: Deactivated successfully.
Dec 06 08:15:53 compute-0 podman[399034]: 2025-12-06 08:15:53.14725119 +0000 UTC m=+0.249918941 container remove b296de6d4ee780deb17aba9733871d8a39a83fd2a534391dd981f56450d7e210 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swirles, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:15:53 compute-0 systemd[1]: libpod-conmon-b296de6d4ee780deb17aba9733871d8a39a83fd2a534391dd981f56450d7e210.scope: Deactivated successfully.
Dec 06 08:15:53 compute-0 podman[399073]: 2025-12-06 08:15:53.315691075 +0000 UTC m=+0.045389014 container create fdac53b57fc637cafc660b390965acd29637d885b686bd22d52468b3d81cc5a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:15:53 compute-0 systemd[1]: Started libpod-conmon-fdac53b57fc637cafc660b390965acd29637d885b686bd22d52468b3d81cc5a3.scope.
Dec 06 08:15:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4257c2da207b287e6dedba91ecf8294479b9e63e5d7f9eaea476215009a52f77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4257c2da207b287e6dedba91ecf8294479b9e63e5d7f9eaea476215009a52f77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4257c2da207b287e6dedba91ecf8294479b9e63e5d7f9eaea476215009a52f77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:53 compute-0 podman[399073]: 2025-12-06 08:15:53.297174563 +0000 UTC m=+0.026872532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4257c2da207b287e6dedba91ecf8294479b9e63e5d7f9eaea476215009a52f77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4257c2da207b287e6dedba91ecf8294479b9e63e5d7f9eaea476215009a52f77/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:53 compute-0 podman[399073]: 2025-12-06 08:15:53.40236705 +0000 UTC m=+0.132065009 container init fdac53b57fc637cafc660b390965acd29637d885b686bd22d52468b3d81cc5a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec 06 08:15:53 compute-0 podman[399073]: 2025-12-06 08:15:53.410269925 +0000 UTC m=+0.139967854 container start fdac53b57fc637cafc660b390965acd29637d885b686bd22d52468b3d81cc5a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 08:15:53 compute-0 podman[399073]: 2025-12-06 08:15:53.413965205 +0000 UTC m=+0.143663144 container attach fdac53b57fc637cafc660b390965acd29637d885b686bd22d52468b3d81cc5a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:15:53 compute-0 nova_compute[251992]: 2025-12-06 08:15:53.442 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.064 251996 INFO nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Creating config drive at /var/lib/nova/instances/177567d6-7f0b-46df-aa8e-60e0089ae786/disk.config
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.069 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/177567d6-7f0b-46df-aa8e-60e0089ae786/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpudl_c4ww execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:15:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:54.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.194 251996 DEBUG nova.network.neutron [req-fe4ca0bd-dff2-478c-85c4-9081d63f53e3 req-b1389efd-c8f4-4c1a-909a-416c823a18de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Updated VIF entry in instance network info cache for port 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.195 251996 DEBUG nova.network.neutron [req-fe4ca0bd-dff2-478c-85c4-9081d63f53e3 req-b1389efd-c8f4-4c1a-909a-416c823a18de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Updating instance_info_cache with network_info: [{"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.208 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/177567d6-7f0b-46df-aa8e-60e0089ae786/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpudl_c4ww" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:15:54 compute-0 serene_sanderson[399089]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:15:54 compute-0 serene_sanderson[399089]: --> relative data size: 1.0
Dec 06 08:15:54 compute-0 serene_sanderson[399089]: --> All data devices are unavailable
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.239 251996 DEBUG nova.storage.rbd_utils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 177567d6-7f0b-46df-aa8e-60e0089ae786_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.244 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/177567d6-7f0b-46df-aa8e-60e0089ae786/disk.config 177567d6-7f0b-46df-aa8e-60e0089ae786_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:15:54 compute-0 systemd[1]: libpod-fdac53b57fc637cafc660b390965acd29637d885b686bd22d52468b3d81cc5a3.scope: Deactivated successfully.
Dec 06 08:15:54 compute-0 podman[399073]: 2025-12-06 08:15:54.26852351 +0000 UTC m=+0.998221449 container died fdac53b57fc637cafc660b390965acd29637d885b686bd22d52468b3d81cc5a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.275 251996 DEBUG oslo_concurrency.lockutils [req-fe4ca0bd-dff2-478c-85c4-9081d63f53e3 req-b1389efd-c8f4-4c1a-909a-416c823a18de 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:15:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-4257c2da207b287e6dedba91ecf8294479b9e63e5d7f9eaea476215009a52f77-merged.mount: Deactivated successfully.
Dec 06 08:15:54 compute-0 podman[399073]: 2025-12-06 08:15:54.320228975 +0000 UTC m=+1.049926904 container remove fdac53b57fc637cafc660b390965acd29637d885b686bd22d52468b3d81cc5a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:15:54 compute-0 systemd[1]: libpod-conmon-fdac53b57fc637cafc660b390965acd29637d885b686bd22d52468b3d81cc5a3.scope: Deactivated successfully.
Dec 06 08:15:54 compute-0 sudo[398948]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.401 251996 DEBUG oslo_concurrency.processutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/177567d6-7f0b-46df-aa8e-60e0089ae786/disk.config 177567d6-7f0b-46df-aa8e-60e0089ae786_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.401 251996 INFO nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Deleting local config drive /var/lib/nova/instances/177567d6-7f0b-46df-aa8e-60e0089ae786/disk.config because it was imported into RBD.
Dec 06 08:15:54 compute-0 sudo[399151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:54 compute-0 sudo[399151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:54 compute-0 sudo[399151]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:54 compute-0 kernel: tap50ea2b6d-c5: entered promiscuous mode
Dec 06 08:15:54 compute-0 NetworkManager[48965]: <info>  [1765008954.4598] manager: (tap50ea2b6d-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/369)
Dec 06 08:15:54 compute-0 ovn_controller[147168]: 2025-12-06T08:15:54Z|00783|binding|INFO|Claiming lport 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 for this chassis.
Dec 06 08:15:54 compute-0 ovn_controller[147168]: 2025-12-06T08:15:54Z|00784|binding|INFO|50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0: Claiming fa:16:3e:41:3c:41 10.100.0.5
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.461 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.468 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:54 compute-0 sudo[399180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:15:54 compute-0 sudo[399180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:54 compute-0 sudo[399180]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.476 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:3c:41 10.100.0.5'], port_security=['fa:16:3e:41:3c:41 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '177567d6-7f0b-46df-aa8e-60e0089ae786', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-538c6ce0-da01-452d-90fd-a1413cdabc3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '775a4317-6d80-4c7b-b228-dc47d45bf2a3 7c7b5da5-6a38-4cc8-9da6-e9378fe63966', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d6949a2-5de9-4fea-95c2-bae5d7f67c6b, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.477 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 in datapath 538c6ce0-da01-452d-90fd-a1413cdabc3f bound to our chassis
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.478 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 538c6ce0-da01-452d-90fd-a1413cdabc3f
Dec 06 08:15:54 compute-0 systemd-machined[212986]: New machine qemu-93-instance-000000cc.
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.490 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc1c7b5-5069-49c6-b063-d91a312412e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.490 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap538c6ce0-d1 in ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.495 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap538c6ce0-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.496 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd7de06-c0f1-4b3f-9185-2d63a8ca92a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.497 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b3de8482-7612-45e1-afc6-09519869f1e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.508 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd9980c-bf45-41a1-b921-bbef221dbe76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-000000cc.
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.530 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.532 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f86288a2-0d49-46d8-bfe3-47cb56f146d6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 ovn_controller[147168]: 2025-12-06T08:15:54Z|00785|binding|INFO|Setting lport 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 ovn-installed in OVS
Dec 06 08:15:54 compute-0 ovn_controller[147168]: 2025-12-06T08:15:54Z|00786|binding|INFO|Setting lport 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 up in Southbound
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.535 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:54 compute-0 sudo[399216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:54 compute-0 sudo[399216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:54 compute-0 systemd-udevd[399245]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:15:54 compute-0 sudo[399216]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:54 compute-0 NetworkManager[48965]: <info>  [1765008954.5536] device (tap50ea2b6d-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:15:54 compute-0 NetworkManager[48965]: <info>  [1765008954.5545] device (tap50ea2b6d-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.563 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1b989ca2-bf12-4a4e-8603-410846288c49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 systemd-udevd[399250]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.569 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e25c96a6-8878-4c84-9285-c66a867d229d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 NetworkManager[48965]: <info>  [1765008954.5703] manager: (tap538c6ce0-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/370)
Dec 06 08:15:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.599 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd5a6aa-b48c-486a-8549-42e152f2523a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.603 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7275b27e-b7c7-4e5d-b9cd-8cccdf454925]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 sudo[399249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:15:54 compute-0 sudo[399249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:54 compute-0 NetworkManager[48965]: <info>  [1765008954.6228] device (tap538c6ce0-d0): carrier: link connected
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.626 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2db6682b-0b9a-49db-bbe4-9846b166a26b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.778 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5ec18a39-2654-43c4-b833-6692f5dbbce3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap538c6ce0-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:15:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 922721, 'reachable_time': 32415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399300, 'error': None, 'target': 'ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.796 251996 DEBUG nova.compute.manager [req-b2f0be17-b87c-4172-ac1f-ac2ab73c9117 req-b8379f10-f216-46bb-aa51-1e4253a77e51 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Received event network-vif-plugged-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.796 251996 DEBUG oslo_concurrency.lockutils [req-b2f0be17-b87c-4172-ac1f-ac2ab73c9117 req-b8379f10-f216-46bb-aa51-1e4253a77e51 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.796 251996 DEBUG oslo_concurrency.lockutils [req-b2f0be17-b87c-4172-ac1f-ac2ab73c9117 req-b8379f10-f216-46bb-aa51-1e4253a77e51 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.797 251996 DEBUG oslo_concurrency.lockutils [req-b2f0be17-b87c-4172-ac1f-ac2ab73c9117 req-b8379f10-f216-46bb-aa51-1e4253a77e51 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.797 251996 DEBUG nova.compute.manager [req-b2f0be17-b87c-4172-ac1f-ac2ab73c9117 req-b8379f10-f216-46bb-aa51-1e4253a77e51 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Processing event network-vif-plugged-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.814 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9ea415-11fe-4ca5-a895-1cdba215abdb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:159e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 922721, 'tstamp': 922721}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399316, 'error': None, 'target': 'ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.832 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0c96a3e0-6c95-4b17-9301-fd2833f6b2e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap538c6ce0-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:15:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 922721, 'reachable_time': 32415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399325, 'error': None, 'target': 'ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.867 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b0481b02-a0e7-49fc-adf9-5c9b0222ff19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 ceph-mon[74339]: pgmap v3698: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Dec 06 08:15:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2826303976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.939 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f92573ba-fbdb-41b4-b44c-4ebf846f5905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.941 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap538c6ce0-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.942 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.942 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap538c6ce0-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:15:54 compute-0 kernel: tap538c6ce0-d0: entered promiscuous mode
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.944 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:54 compute-0 NetworkManager[48965]: <info>  [1765008954.9461] manager: (tap538c6ce0-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.946 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.949 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap538c6ce0-d0, col_values=(('external_ids', {'iface-id': '0def55ce-ba12-4bf5-a3d3-0df5436060c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.950 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:54 compute-0 ovn_controller[147168]: 2025-12-06T08:15:54Z|00787|binding|INFO|Releasing lport 0def55ce-ba12-4bf5-a3d3-0df5436060c0 from this chassis (sb_readonly=0)
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.951 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.953 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/538c6ce0-da01-452d-90fd-a1413cdabc3f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/538c6ce0-da01-452d-90fd-a1413cdabc3f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.955 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[79260e36-ad97-4e74-97ae-cb7e0a5b1d71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.956 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-538c6ce0-da01-452d-90fd-a1413cdabc3f
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/538c6ce0-da01-452d-90fd-a1413cdabc3f.pid.haproxy
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 538c6ce0-da01-452d-90fd-a1413cdabc3f
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:15:54 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:54.957 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f', 'env', 'PROCESS_TAG=haproxy-538c6ce0-da01-452d-90fd-a1413cdabc3f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/538c6ce0-da01-452d-90fd-a1413cdabc3f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:15:54 compute-0 nova_compute[251992]: 2025-12-06 08:15:54.965 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:54 compute-0 podman[399371]: 2025-12-06 08:15:54.977618944 +0000 UTC m=+0.042646080 container create 4e62ac7e21793787f03fd72d40dff7d1ffc25338f8ba39b83a045c9ca0718a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mcclintock, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 08:15:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:54.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:55 compute-0 systemd[1]: Started libpod-conmon-4e62ac7e21793787f03fd72d40dff7d1ffc25338f8ba39b83a045c9ca0718a2f.scope.
Dec 06 08:15:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:15:55 compute-0 podman[399371]: 2025-12-06 08:15:54.960641912 +0000 UTC m=+0.025669078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.063 251996 DEBUG nova.compute.manager [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.064 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008955.0626297, 177567d6-7f0b-46df-aa8e-60e0089ae786 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.064 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] VM Started (Lifecycle Event)
Dec 06 08:15:55 compute-0 podman[399371]: 2025-12-06 08:15:55.068061911 +0000 UTC m=+0.133089067 container init 4e62ac7e21793787f03fd72d40dff7d1ffc25338f8ba39b83a045c9ca0718a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.073 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.076 251996 INFO nova.virt.libvirt.driver [-] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Instance spawned successfully.
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.076 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:15:55 compute-0 podman[399371]: 2025-12-06 08:15:55.077135997 +0000 UTC m=+0.142163133 container start 4e62ac7e21793787f03fd72d40dff7d1ffc25338f8ba39b83a045c9ca0718a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:15:55 compute-0 podman[399371]: 2025-12-06 08:15:55.080674893 +0000 UTC m=+0.145702059 container attach 4e62ac7e21793787f03fd72d40dff7d1ffc25338f8ba39b83a045c9ca0718a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 08:15:55 compute-0 relaxed_mcclintock[399408]: 167 167
Dec 06 08:15:55 compute-0 systemd[1]: libpod-4e62ac7e21793787f03fd72d40dff7d1ffc25338f8ba39b83a045c9ca0718a2f.scope: Deactivated successfully.
Dec 06 08:15:55 compute-0 conmon[399408]: conmon 4e62ac7e21793787f03f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e62ac7e21793787f03fd72d40dff7d1ffc25338f8ba39b83a045c9ca0718a2f.scope/container/memory.events
Dec 06 08:15:55 compute-0 podman[399371]: 2025-12-06 08:15:55.08422464 +0000 UTC m=+0.149251776 container died 4e62ac7e21793787f03fd72d40dff7d1ffc25338f8ba39b83a045c9ca0718a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mcclintock, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.093 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.118 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.121 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.122 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.122 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.123 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.124 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.124 251996 DEBUG nova.virt.libvirt.driver [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-082619a9cdae6a6d1d420b58f862478223b27365b4b22e3e560ad7cbe3ff27f5-merged.mount: Deactivated successfully.
Dec 06 08:15:55 compute-0 podman[399371]: 2025-12-06 08:15:55.142870763 +0000 UTC m=+0.207897899 container remove 4e62ac7e21793787f03fd72d40dff7d1ffc25338f8ba39b83a045c9ca0718a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_mcclintock, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.148 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.149 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008955.0636652, 177567d6-7f0b-46df-aa8e-60e0089ae786 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.149 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] VM Paused (Lifecycle Event)
Dec 06 08:15:55 compute-0 systemd[1]: libpod-conmon-4e62ac7e21793787f03fd72d40dff7d1ffc25338f8ba39b83a045c9ca0718a2f.scope: Deactivated successfully.
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.178 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.182 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765008955.0674064, 177567d6-7f0b-46df-aa8e-60e0089ae786 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.182 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] VM Resumed (Lifecycle Event)
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.202 251996 INFO nova.compute.manager [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Took 7.60 seconds to spawn the instance on the hypervisor.
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.203 251996 DEBUG nova.compute.manager [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.204 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.215 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.264 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.286 251996 INFO nova.compute.manager [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Took 8.64 seconds to build instance.
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.306 251996 DEBUG oslo_concurrency.lockutils [None req-7c9ba996-e8c5-4086-b472-a9cefcf87275 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:55 compute-0 podman[399442]: 2025-12-06 08:15:55.310765784 +0000 UTC m=+0.042226248 container create 4a29a1bd9a07f5a4d390da560b4cc1c90c016b170fb1acec980893ac5b683780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 08:15:55 compute-0 systemd[1]: Started libpod-conmon-4a29a1bd9a07f5a4d390da560b4cc1c90c016b170fb1acec980893ac5b683780.scope.
Dec 06 08:15:55 compute-0 podman[399465]: 2025-12-06 08:15:55.364517715 +0000 UTC m=+0.055182991 container create ec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:15:55 compute-0 podman[399442]: 2025-12-06 08:15:55.292894899 +0000 UTC m=+0.024355383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:15:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:15:55 compute-0 systemd[1]: Started libpod-conmon-ec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a.scope.
Dec 06 08:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a0f2cb08b8917961a9641132df17e8edfe3566b146955a7e759bfc9551bb6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a0f2cb08b8917961a9641132df17e8edfe3566b146955a7e759bfc9551bb6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a0f2cb08b8917961a9641132df17e8edfe3566b146955a7e759bfc9551bb6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a0f2cb08b8917961a9641132df17e8edfe3566b146955a7e759bfc9551bb6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:55 compute-0 podman[399442]: 2025-12-06 08:15:55.413653479 +0000 UTC m=+0.145113963 container init 4a29a1bd9a07f5a4d390da560b4cc1c90c016b170fb1acec980893ac5b683780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leakey, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 08:15:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:15:55 compute-0 podman[399442]: 2025-12-06 08:15:55.420243099 +0000 UTC m=+0.151703563 container start 4a29a1bd9a07f5a4d390da560b4cc1c90c016b170fb1acec980893ac5b683780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:15:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb8db7aeb2fdb301e11ceef969faf6bbadc0d39e7052ad11c44cd8df8e8ad54/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:55 compute-0 podman[399442]: 2025-12-06 08:15:55.423692682 +0000 UTC m=+0.155153236 container attach 4a29a1bd9a07f5a4d390da560b4cc1c90c016b170fb1acec980893ac5b683780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leakey, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:15:55 compute-0 podman[399465]: 2025-12-06 08:15:55.336912795 +0000 UTC m=+0.027578101 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:15:55 compute-0 podman[399465]: 2025-12-06 08:15:55.43356963 +0000 UTC m=+0.124234916 container init ec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 08:15:55 compute-0 podman[399465]: 2025-12-06 08:15:55.442295707 +0000 UTC m=+0.132961003 container start ec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 08:15:55 compute-0 neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f[399487]: [NOTICE]   (399493) : New worker (399495) forked
Dec 06 08:15:55 compute-0 neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f[399487]: [NOTICE]   (399493) : Loading success.
Dec 06 08:15:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.880 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.881 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:15:55 compute-0 nova_compute[251992]: 2025-12-06 08:15:55.881 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:15:56 compute-0 nova_compute[251992]: 2025-12-06 08:15:56.006 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:15:56 compute-0 nova_compute[251992]: 2025-12-06 08:15:56.007 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:15:56 compute-0 nova_compute[251992]: 2025-12-06 08:15:56.007 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:15:56 compute-0 nova_compute[251992]: 2025-12-06 08:15:56.007 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 177567d6-7f0b-46df-aa8e-60e0089ae786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:15:56 compute-0 awesome_leakey[399480]: {
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:     "0": [
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:         {
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             "devices": [
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "/dev/loop3"
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             ],
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             "lv_name": "ceph_lv0",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             "lv_size": "7511998464",
Dec 06 08:15:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             "name": "ceph_lv0",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             "tags": {
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "ceph.cluster_name": "ceph",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "ceph.crush_device_class": "",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "ceph.encrypted": "0",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "ceph.osd_id": "0",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "ceph.type": "block",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:                 "ceph.vdo": "0"
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             },
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             "type": "block",
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:             "vg_name": "ceph_vg0"
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:         }
Dec 06 08:15:56 compute-0 awesome_leakey[399480]:     ]
Dec 06 08:15:56 compute-0 awesome_leakey[399480]: }
Dec 06 08:15:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:15:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:56.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:15:56 compute-0 systemd[1]: libpod-4a29a1bd9a07f5a4d390da560b4cc1c90c016b170fb1acec980893ac5b683780.scope: Deactivated successfully.
Dec 06 08:15:56 compute-0 podman[399442]: 2025-12-06 08:15:56.211980656 +0000 UTC m=+0.943441110 container died 4a29a1bd9a07f5a4d390da560b4cc1c90c016b170fb1acec980893ac5b683780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leakey, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:15:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1a0f2cb08b8917961a9641132df17e8edfe3566b146955a7e759bfc9551bb6a-merged.mount: Deactivated successfully.
Dec 06 08:15:56 compute-0 podman[399442]: 2025-12-06 08:15:56.267577677 +0000 UTC m=+0.999038141 container remove 4a29a1bd9a07f5a4d390da560b4cc1c90c016b170fb1acec980893ac5b683780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:15:56 compute-0 systemd[1]: libpod-conmon-4a29a1bd9a07f5a4d390da560b4cc1c90c016b170fb1acec980893ac5b683780.scope: Deactivated successfully.
Dec 06 08:15:56 compute-0 sudo[399249]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:56 compute-0 sudo[399521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:56 compute-0 sudo[399521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:56 compute-0 sudo[399521]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:56 compute-0 sudo[399546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:15:56 compute-0 sudo[399546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:56 compute-0 sudo[399546]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:56 compute-0 sudo[399571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:56 compute-0 sudo[399571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:56 compute-0 sudo[399571]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:56 compute-0 sudo[399596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:15:56 compute-0 sudo[399596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:56 compute-0 podman[399660]: 2025-12-06 08:15:56.837078618 +0000 UTC m=+0.040198003 container create c1f3e70a309372d354249bdbea1d16d130ad7c3b238cab49b27887e21a6a790b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hoover, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:15:56 compute-0 systemd[1]: Started libpod-conmon-c1f3e70a309372d354249bdbea1d16d130ad7c3b238cab49b27887e21a6a790b.scope.
Dec 06 08:15:56 compute-0 nova_compute[251992]: 2025-12-06 08:15:56.868 251996 DEBUG nova.compute.manager [req-ad0ed47a-8427-4642-a5a1-010605649e03 req-d9da8481-5c89-4593-a06b-8c3e3784493c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Received event network-vif-plugged-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:15:56 compute-0 nova_compute[251992]: 2025-12-06 08:15:56.869 251996 DEBUG oslo_concurrency.lockutils [req-ad0ed47a-8427-4642-a5a1-010605649e03 req-d9da8481-5c89-4593-a06b-8c3e3784493c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:15:56 compute-0 nova_compute[251992]: 2025-12-06 08:15:56.869 251996 DEBUG oslo_concurrency.lockutils [req-ad0ed47a-8427-4642-a5a1-010605649e03 req-d9da8481-5c89-4593-a06b-8c3e3784493c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:15:56 compute-0 nova_compute[251992]: 2025-12-06 08:15:56.869 251996 DEBUG oslo_concurrency.lockutils [req-ad0ed47a-8427-4642-a5a1-010605649e03 req-d9da8481-5c89-4593-a06b-8c3e3784493c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:15:56 compute-0 nova_compute[251992]: 2025-12-06 08:15:56.869 251996 DEBUG nova.compute.manager [req-ad0ed47a-8427-4642-a5a1-010605649e03 req-d9da8481-5c89-4593-a06b-8c3e3784493c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] No waiting events found dispatching network-vif-plugged-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:15:56 compute-0 nova_compute[251992]: 2025-12-06 08:15:56.870 251996 WARNING nova.compute.manager [req-ad0ed47a-8427-4642-a5a1-010605649e03 req-d9da8481-5c89-4593-a06b-8c3e3784493c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Received unexpected event network-vif-plugged-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 for instance with vm_state active and task_state None.
Dec 06 08:15:56 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:15:56 compute-0 podman[399660]: 2025-12-06 08:15:56.899438173 +0000 UTC m=+0.102557578 container init c1f3e70a309372d354249bdbea1d16d130ad7c3b238cab49b27887e21a6a790b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:15:56 compute-0 podman[399660]: 2025-12-06 08:15:56.905691552 +0000 UTC m=+0.108810937 container start c1f3e70a309372d354249bdbea1d16d130ad7c3b238cab49b27887e21a6a790b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:15:56 compute-0 podman[399660]: 2025-12-06 08:15:56.910151903 +0000 UTC m=+0.113271318 container attach c1f3e70a309372d354249bdbea1d16d130ad7c3b238cab49b27887e21a6a790b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 08:15:56 compute-0 hardcore_hoover[399676]: 167 167
Dec 06 08:15:56 compute-0 systemd[1]: libpod-c1f3e70a309372d354249bdbea1d16d130ad7c3b238cab49b27887e21a6a790b.scope: Deactivated successfully.
Dec 06 08:15:56 compute-0 podman[399660]: 2025-12-06 08:15:56.912200209 +0000 UTC m=+0.115319594 container died c1f3e70a309372d354249bdbea1d16d130ad7c3b238cab49b27887e21a6a790b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 08:15:56 compute-0 podman[399660]: 2025-12-06 08:15:56.820755334 +0000 UTC m=+0.023874739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:15:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:57.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:57 compute-0 ceph-mon[74339]: pgmap v3699: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 06 08:15:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1161032478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb219b8ff0b0009eb730f62e2a9a2b347a5def84a22e39b33351abb081932346-merged.mount: Deactivated successfully.
Dec 06 08:15:57 compute-0 podman[399660]: 2025-12-06 08:15:57.28255632 +0000 UTC m=+0.485675745 container remove c1f3e70a309372d354249bdbea1d16d130ad7c3b238cab49b27887e21a6a790b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hoover, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 08:15:57 compute-0 systemd[1]: libpod-conmon-c1f3e70a309372d354249bdbea1d16d130ad7c3b238cab49b27887e21a6a790b.scope: Deactivated successfully.
Dec 06 08:15:57 compute-0 podman[399701]: 2025-12-06 08:15:57.471523824 +0000 UTC m=+0.057119354 container create 1c1a69ef0918867939c7d432e3d269e36977ef6b5c1eb71f401301e29b383b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:15:57 compute-0 systemd[1]: Started libpod-conmon-1c1a69ef0918867939c7d432e3d269e36977ef6b5c1eb71f401301e29b383b7f.scope.
Dec 06 08:15:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:15:57 compute-0 podman[399701]: 2025-12-06 08:15:57.44820726 +0000 UTC m=+0.033802890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00364d185bd51ed88176abcf49137ca16096729abf3a460859c83e23fac58450/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00364d185bd51ed88176abcf49137ca16096729abf3a460859c83e23fac58450/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00364d185bd51ed88176abcf49137ca16096729abf3a460859c83e23fac58450/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00364d185bd51ed88176abcf49137ca16096729abf3a460859c83e23fac58450/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:15:57 compute-0 podman[399701]: 2025-12-06 08:15:57.552679839 +0000 UTC m=+0.138275399 container init 1c1a69ef0918867939c7d432e3d269e36977ef6b5c1eb71f401301e29b383b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mayer, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:15:57 compute-0 podman[399701]: 2025-12-06 08:15:57.559160344 +0000 UTC m=+0.144755894 container start 1c1a69ef0918867939c7d432e3d269e36977ef6b5c1eb71f401301e29b383b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:15:57 compute-0 podman[399701]: 2025-12-06 08:15:57.562821673 +0000 UTC m=+0.148417263 container attach 1c1a69ef0918867939c7d432e3d269e36977ef6b5c1eb71f401301e29b383b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:15:57 compute-0 nova_compute[251992]: 2025-12-06 08:15:57.635 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:15:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:15:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:15:58.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:15:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2520721463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:15:58 compute-0 friendly_mayer[399717]: {
Dec 06 08:15:58 compute-0 friendly_mayer[399717]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:15:58 compute-0 friendly_mayer[399717]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:15:58 compute-0 friendly_mayer[399717]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:15:58 compute-0 friendly_mayer[399717]:         "osd_id": 0,
Dec 06 08:15:58 compute-0 friendly_mayer[399717]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:15:58 compute-0 friendly_mayer[399717]:         "type": "bluestore"
Dec 06 08:15:58 compute-0 friendly_mayer[399717]:     }
Dec 06 08:15:58 compute-0 friendly_mayer[399717]: }
Dec 06 08:15:58 compute-0 systemd[1]: libpod-1c1a69ef0918867939c7d432e3d269e36977ef6b5c1eb71f401301e29b383b7f.scope: Deactivated successfully.
Dec 06 08:15:58 compute-0 podman[399701]: 2025-12-06 08:15:58.415819087 +0000 UTC m=+1.001414627 container died 1c1a69ef0918867939c7d432e3d269e36977ef6b5c1eb71f401301e29b383b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mayer, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 08:15:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-00364d185bd51ed88176abcf49137ca16096729abf3a460859c83e23fac58450-merged.mount: Deactivated successfully.
Dec 06 08:15:58 compute-0 nova_compute[251992]: 2025-12-06 08:15:58.446 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:58 compute-0 podman[399701]: 2025-12-06 08:15:58.470206524 +0000 UTC m=+1.055802064 container remove 1c1a69ef0918867939c7d432e3d269e36977ef6b5c1eb71f401301e29b383b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:15:58 compute-0 systemd[1]: libpod-conmon-1c1a69ef0918867939c7d432e3d269e36977ef6b5c1eb71f401301e29b383b7f.scope: Deactivated successfully.
Dec 06 08:15:58 compute-0 sudo[399596]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:15:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:15:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:15:58 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:15:58 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ee2a98ed-679b-4c66-add2-c5d8dfc9578d does not exist
Dec 06 08:15:58 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 48c7fc25-af2f-46bd-b82e-f08f77603c7d does not exist
Dec 06 08:15:58 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6b50917f-4b72-420d-8de1-1e9156a94a69 does not exist
Dec 06 08:15:58 compute-0 sudo[399754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:15:58 compute-0 sudo[399754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:58 compute-0 sudo[399754]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:58 compute-0 sudo[399779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:15:58 compute-0 sudo[399779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:15:58 compute-0 sudo[399779]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:58 compute-0 nova_compute[251992]: 2025-12-06 08:15:58.700 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Updating instance_info_cache with network_info: [{"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:15:58 compute-0 nova_compute[251992]: 2025-12-06 08:15:58.732 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:15:58 compute-0 nova_compute[251992]: 2025-12-06 08:15:58.733 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:15:58 compute-0 nova_compute[251992]: 2025-12-06 08:15:58.733 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:15:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:15:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:15:59.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:15:59 compute-0 nova_compute[251992]: 2025-12-06 08:15:59.321 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:59 compute-0 NetworkManager[48965]: <info>  [1765008959.3227] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Dec 06 08:15:59 compute-0 NetworkManager[48965]: <info>  [1765008959.3238] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Dec 06 08:15:59 compute-0 nova_compute[251992]: 2025-12-06 08:15:59.484 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:59 compute-0 ovn_controller[147168]: 2025-12-06T08:15:59Z|00788|binding|INFO|Releasing lport 0def55ce-ba12-4bf5-a3d3-0df5436060c0 from this chassis (sb_readonly=0)
Dec 06 08:15:59 compute-0 nova_compute[251992]: 2025-12-06 08:15:59.499 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:59 compute-0 ceph-mon[74339]: pgmap v3700: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:15:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:15:59 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:15:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:15:59 compute-0 nova_compute[251992]: 2025-12-06 08:15:59.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:15:59 compute-0 nova_compute[251992]: 2025-12-06 08:15:59.688 251996 DEBUG nova.compute.manager [req-eef15713-da66-4153-b15c-7c44914ad4cb req-7025f332-672a-4186-91c5-8c3f7fe9f3e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Received event network-changed-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:15:59 compute-0 nova_compute[251992]: 2025-12-06 08:15:59.688 251996 DEBUG nova.compute.manager [req-eef15713-da66-4153-b15c-7c44914ad4cb req-7025f332-672a-4186-91c5-8c3f7fe9f3e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Refreshing instance network info cache due to event network-changed-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:15:59 compute-0 nova_compute[251992]: 2025-12-06 08:15:59.688 251996 DEBUG oslo_concurrency.lockutils [req-eef15713-da66-4153-b15c-7c44914ad4cb req-7025f332-672a-4186-91c5-8c3f7fe9f3e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:15:59 compute-0 nova_compute[251992]: 2025-12-06 08:15:59.688 251996 DEBUG oslo_concurrency.lockutils [req-eef15713-da66-4153-b15c-7c44914ad4cb req-7025f332-672a-4186-91c5-8c3f7fe9f3e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:15:59 compute-0 nova_compute[251992]: 2025-12-06 08:15:59.689 251996 DEBUG nova.network.neutron [req-eef15713-da66-4153-b15c-7c44914ad4cb req-7025f332-672a-4186-91c5-8c3f7fe9f3e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Refreshing network info cache for port 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:15:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:59.745 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=96, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=95) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:15:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:59.746 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:15:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:15:59.747 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '96'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:15:59 compute-0 nova_compute[251992]: 2025-12-06 08:15:59.748 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:15:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 06 08:16:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:00.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:01.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:01 compute-0 nova_compute[251992]: 2025-12-06 08:16:01.286 251996 DEBUG nova.network.neutron [req-eef15713-da66-4153-b15c-7c44914ad4cb req-7025f332-672a-4186-91c5-8c3f7fe9f3e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Updated VIF entry in instance network info cache for port 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:16:01 compute-0 nova_compute[251992]: 2025-12-06 08:16:01.286 251996 DEBUG nova.network.neutron [req-eef15713-da66-4153-b15c-7c44914ad4cb req-7025f332-672a-4186-91c5-8c3f7fe9f3e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Updating instance_info_cache with network_info: [{"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:16:01 compute-0 nova_compute[251992]: 2025-12-06 08:16:01.417 251996 DEBUG oslo_concurrency.lockutils [req-eef15713-da66-4153-b15c-7c44914ad4cb req-7025f332-672a-4186-91c5-8c3f7fe9f3e3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:16:01 compute-0 ceph-mon[74339]: pgmap v3701: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Dec 06 08:16:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Dec 06 08:16:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:02.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:02 compute-0 sudo[399807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:16:02 compute-0 sudo[399807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:02 compute-0 sudo[399807]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:02 compute-0 sudo[399832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:16:02 compute-0 sudo[399832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:02 compute-0 sudo[399832]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:02 compute-0 nova_compute[251992]: 2025-12-06 08:16:02.639 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:02 compute-0 nova_compute[251992]: 2025-12-06 08:16:02.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:03.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:03 compute-0 ceph-mon[74339]: pgmap v3702: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Dec 06 08:16:03 compute-0 nova_compute[251992]: 2025-12-06 08:16:03.449 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:03 compute-0 nova_compute[251992]: 2025-12-06 08:16:03.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 83 op/s
Dec 06 08:16:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:16:03.891 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:16:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:16:03.892 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:16:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:16:03.892 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:16:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:04.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:04 compute-0 ceph-mon[74339]: pgmap v3703: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 83 op/s
Dec 06 08:16:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:05.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:05 compute-0 nova_compute[251992]: 2025-12-06 08:16:05.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 93 op/s
Dec 06 08:16:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:06.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:06 compute-0 nova_compute[251992]: 2025-12-06 08:16:06.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:07 compute-0 ceph-mon[74339]: pgmap v3704: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 93 op/s
Dec 06 08:16:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1624445191' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:07 compute-0 nova_compute[251992]: 2025-12-06 08:16:07.641 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 91 op/s
Dec 06 08:16:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:08.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:08 compute-0 nova_compute[251992]: 2025-12-06 08:16:08.450 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:08 compute-0 ovn_controller[147168]: 2025-12-06T08:16:08Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:3c:41 10.100.0.5
Dec 06 08:16:08 compute-0 ovn_controller[147168]: 2025-12-06T08:16:08Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:3c:41 10.100.0.5
Dec 06 08:16:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:09.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:16:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2667934728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:16:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:16:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2667934728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:16:09 compute-0 ceph-mon[74339]: pgmap v3705: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 91 op/s
Dec 06 08:16:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:09 compute-0 nova_compute[251992]: 2025-12-06 08:16:09.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:09 compute-0 nova_compute[251992]: 2025-12-06 08:16:09.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:16:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Dec 06 08:16:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:10.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2667934728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:16:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2667934728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:16:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2380515998' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:16:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2380515998' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:16:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:11.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:11 compute-0 ceph-mon[74339]: pgmap v3706: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 86 op/s
Dec 06 08:16:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 197 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 157 op/s
Dec 06 08:16:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:12.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:12 compute-0 nova_compute[251992]: 2025-12-06 08:16:12.678 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:13.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:16:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:16:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:16:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:16:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:16:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:16:13 compute-0 nova_compute[251992]: 2025-12-06 08:16:13.453 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:13 compute-0 ceph-mon[74339]: pgmap v3707: 305 pgs: 305 active+clean; 197 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 157 op/s
Dec 06 08:16:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 197 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Dec 06 08:16:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:14.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:14 compute-0 podman[399863]: 2025-12-06 08:16:14.448652018 +0000 UTC m=+0.106148625 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:16:14 compute-0 ceph-mon[74339]: pgmap v3708: 305 pgs: 305 active+clean; 197 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Dec 06 08:16:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:15.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Dec 06 08:16:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:16.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:16 compute-0 ceph-mon[74339]: pgmap v3709: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Dec 06 08:16:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:17.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:17 compute-0 nova_compute[251992]: 2025-12-06 08:16:17.680 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Dec 06 08:16:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:18.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:18 compute-0 nova_compute[251992]: 2025-12-06 08:16:18.455 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:16:18
Dec 06 08:16:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:16:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:16:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.mgr', 'cephfs.cephfs.meta', 'vms', 'backups']
Dec 06 08:16:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:16:18 compute-0 nova_compute[251992]: 2025-12-06 08:16:18.653 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:18 compute-0 ceph-mon[74339]: pgmap v3710: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Dec 06 08:16:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Dec 06 08:16:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:20.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:20 compute-0 podman[399895]: 2025-12-06 08:16:20.386614789 +0000 UTC m=+0.048976962 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 08:16:20 compute-0 podman[399896]: 2025-12-06 08:16:20.42420203 +0000 UTC m=+0.074503524 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:16:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Dec 06 08:16:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:21.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 123 op/s
Dec 06 08:16:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Dec 06 08:16:22 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Dec 06 08:16:22 compute-0 ceph-mon[74339]: pgmap v3711: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Dec 06 08:16:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:22.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:22 compute-0 sudo[399936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:16:22 compute-0 sudo[399936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:22 compute-0 sudo[399936]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:22 compute-0 sudo[399961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:16:22 compute-0 sudo[399961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:22 compute-0 sudo[399961]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:22 compute-0 nova_compute[251992]: 2025-12-06 08:16:22.682 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:23 compute-0 ceph-mon[74339]: pgmap v3712: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 123 op/s
Dec 06 08:16:23 compute-0 ceph-mon[74339]: osdmap e417: 3 total, 3 up, 3 in
Dec 06 08:16:23 compute-0 nova_compute[251992]: 2025-12-06 08:16:23.457 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:16:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:16:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 62 op/s
Dec 06 08:16:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:24.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:24 compute-0 nova_compute[251992]: 2025-12-06 08:16:24.496 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:25.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:25 compute-0 ceph-mon[74339]: pgmap v3714: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 62 op/s
Dec 06 08:16:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Dec 06 08:16:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:26.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021703206425646754 of space, bias 1.0, pg target 0.6510961927694027 quantized to 32 (current 32)
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003150881723431975 of space, bias 1.0, pg target 0.9452645170295925 quantized to 32 (current 32)
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:16:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:16:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:27.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:16:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:16:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:16:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:16:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:16:27 compute-0 ceph-mon[74339]: pgmap v3715: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Dec 06 08:16:27 compute-0 nova_compute[251992]: 2025-12-06 08:16:27.685 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec 06 08:16:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:28.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:28 compute-0 nova_compute[251992]: 2025-12-06 08:16:28.459 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:29.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec 06 08:16:30 compute-0 ceph-mon[74339]: pgmap v3716: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec 06 08:16:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:30.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:31 compute-0 ceph-mon[74339]: pgmap v3717: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec 06 08:16:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1315743655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3150436561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:31.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 KiB/s rd, 4.1 KiB/s wr, 5 op/s
Dec 06 08:16:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:32.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:32 compute-0 nova_compute[251992]: 2025-12-06 08:16:32.688 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:33.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:33 compute-0 nova_compute[251992]: 2025-12-06 08:16:33.461 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 3.5 KiB/s wr, 4 op/s
Dec 06 08:16:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:34.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:34 compute-0 ceph-mon[74339]: pgmap v3718: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 KiB/s rd, 4.1 KiB/s wr, 5 op/s
Dec 06 08:16:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:34 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Dec 06 08:16:34 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:34.984927) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:16:34 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Dec 06 08:16:34 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008994985023, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1116, "num_deletes": 251, "total_data_size": 1802649, "memory_usage": 1829080, "flush_reason": "Manual Compaction"}
Dec 06 08:16:34 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Dec 06 08:16:34 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008994998485, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 1075464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74874, "largest_seqno": 75988, "table_properties": {"data_size": 1071257, "index_size": 1730, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11354, "raw_average_key_size": 20, "raw_value_size": 1062089, "raw_average_value_size": 1955, "num_data_blocks": 77, "num_entries": 543, "num_filter_entries": 543, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008891, "oldest_key_time": 1765008891, "file_creation_time": 1765008994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:16:34 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 13630 microseconds, and 8084 cpu microseconds.
Dec 06 08:16:34 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:34.998553) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 1075464 bytes OK
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:34.998589) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.000653) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.000677) EVENT_LOG_v1 {"time_micros": 1765008995000669, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.000699) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 1797634, prev total WAL file size 1797634, number of live WAL files 2.
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.001806) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373737' seq:72057594037927935, type:22 .. '6D6772737461740033303239' seq:0, type:0; will stop at (end)
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(1050KB)], [167(13MB)]
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995001888, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 15002282, "oldest_snapshot_seqno": -1}
Dec 06 08:16:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:35.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 10945 keys, 11852890 bytes, temperature: kUnknown
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995087273, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 11852890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11785867, "index_size": 38558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27397, "raw_key_size": 289095, "raw_average_key_size": 26, "raw_value_size": 11597918, "raw_average_value_size": 1059, "num_data_blocks": 1455, "num_entries": 10945, "num_filter_entries": 10945, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765008995, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.087690) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 11852890 bytes
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.090234) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.5 rd, 138.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.3 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(25.0) write-amplify(11.0) OK, records in: 11418, records dropped: 473 output_compression: NoCompression
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.090254) EVENT_LOG_v1 {"time_micros": 1765008995090244, "job": 104, "event": "compaction_finished", "compaction_time_micros": 85494, "compaction_time_cpu_micros": 37004, "output_level": 6, "num_output_files": 1, "total_output_size": 11852890, "num_input_records": 11418, "num_output_records": 10945, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995090650, "job": 104, "event": "table_file_deletion", "file_number": 169}
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008995094120, "job": 104, "event": "table_file_deletion", "file_number": 167}
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.001722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.094238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.094244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.094246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.094248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:16:35 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:16:35.094250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:16:35 compute-0 ceph-mon[74339]: pgmap v3719: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 KiB/s rd, 3.5 KiB/s wr, 4 op/s
Dec 06 08:16:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 287 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1.7 MiB/s wr, 21 op/s
Dec 06 08:16:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:16:36 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2322241667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:16:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:36.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3745854789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:16:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1508888652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:16:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2322241667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:16:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:37.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:37 compute-0 nova_compute[251992]: 2025-12-06 08:16:37.690 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:16:38 compute-0 ceph-mon[74339]: pgmap v3720: 305 pgs: 305 active+clean; 287 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1.7 MiB/s wr, 21 op/s
Dec 06 08:16:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3235342965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:16:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:38.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:38 compute-0 nova_compute[251992]: 2025-12-06 08:16:38.462 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:39.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:39 compute-0 ceph-mon[74339]: pgmap v3721: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:16:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 08:16:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2198310770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:40.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:41.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:41 compute-0 ceph-mon[74339]: pgmap v3722: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Dec 06 08:16:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2035496392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Dec 06 08:16:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:42.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:42 compute-0 sudo[399996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:16:42 compute-0 sudo[399996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:42 compute-0 sudo[399996]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:42 compute-0 sudo[400021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:16:42 compute-0 sudo[400021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:42 compute-0 sudo[400021]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:42 compute-0 nova_compute[251992]: 2025-12-06 08:16:42.694 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:16:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:16:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:16:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:16:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:16:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:16:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:43.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:43 compute-0 nova_compute[251992]: 2025-12-06 08:16:43.466 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:43 compute-0 ceph-mon[74339]: pgmap v3723: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Dec 06 08:16:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Dec 06 08:16:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:44.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:45.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:45 compute-0 podman[400048]: 2025-12-06 08:16:45.423461552 +0000 UTC m=+0.078460722 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Dec 06 08:16:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Dec 06 08:16:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:46.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:46 compute-0 ceph-mon[74339]: pgmap v3724: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Dec 06 08:16:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:47.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:47 compute-0 nova_compute[251992]: 2025-12-06 08:16:47.696 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 132 KiB/s wr, 178 op/s
Dec 06 08:16:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:48.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:48 compute-0 ceph-mon[74339]: pgmap v3725: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Dec 06 08:16:48 compute-0 nova_compute[251992]: 2025-12-06 08:16:48.469 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:49.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:49 compute-0 ceph-mon[74339]: pgmap v3726: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 132 KiB/s wr, 178 op/s
Dec 06 08:16:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 157 op/s
Dec 06 08:16:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:50.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:51 compute-0 ceph-mon[74339]: pgmap v3727: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 157 op/s
Dec 06 08:16:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/950012681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:51 compute-0 podman[400077]: 2025-12-06 08:16:51.384638053 +0000 UTC m=+0.044033098 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:16:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:16:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1802524803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:16:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:16:51 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1802524803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:16:51 compute-0 podman[400078]: 2025-12-06 08:16:51.430851898 +0000 UTC m=+0.084524447 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 08:16:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 28 KiB/s wr, 178 op/s
Dec 06 08:16:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Dec 06 08:16:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:52.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:52 compute-0 nova_compute[251992]: 2025-12-06 08:16:52.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:52 compute-0 nova_compute[251992]: 2025-12-06 08:16:52.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:52 compute-0 nova_compute[251992]: 2025-12-06 08:16:52.680 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:16:52 compute-0 nova_compute[251992]: 2025-12-06 08:16:52.680 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:16:52 compute-0 nova_compute[251992]: 2025-12-06 08:16:52.680 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:16:52 compute-0 nova_compute[251992]: 2025-12-06 08:16:52.680 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:16:52 compute-0 nova_compute[251992]: 2025-12-06 08:16:52.681 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:16:52 compute-0 nova_compute[251992]: 2025-12-06 08:16:52.712 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:53.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:16:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1924030309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:53 compute-0 nova_compute[251992]: 2025-12-06 08:16:53.120 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:16:53 compute-0 nova_compute[251992]: 2025-12-06 08:16:53.252 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000cc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:16:53 compute-0 nova_compute[251992]: 2025-12-06 08:16:53.252 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000cc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:16:53 compute-0 nova_compute[251992]: 2025-12-06 08:16:53.412 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:16:53 compute-0 nova_compute[251992]: 2025-12-06 08:16:53.413 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3925MB free_disk=20.921672821044922GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:16:53 compute-0 nova_compute[251992]: 2025-12-06 08:16:53.414 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:16:53 compute-0 nova_compute[251992]: 2025-12-06 08:16:53.414 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:16:53 compute-0 nova_compute[251992]: 2025-12-06 08:16:53.470 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:53 compute-0 nova_compute[251992]: 2025-12-06 08:16:53.658 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 177567d6-7f0b-46df-aa8e-60e0089ae786 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:16:53 compute-0 nova_compute[251992]: 2025-12-06 08:16:53.659 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:16:53 compute-0 nova_compute[251992]: 2025-12-06 08:16:53.659 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:16:53 compute-0 nova_compute[251992]: 2025-12-06 08:16:53.732 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:16:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Dec 06 08:16:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1802524803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:16:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1802524803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:16:53 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Dec 06 08:16:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.3 KiB/s wr, 116 op/s
Dec 06 08:16:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:16:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1434463726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:54 compute-0 nova_compute[251992]: 2025-12-06 08:16:54.173 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:16:54 compute-0 nova_compute[251992]: 2025-12-06 08:16:54.179 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:16:54 compute-0 nova_compute[251992]: 2025-12-06 08:16:54.195 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:16:54 compute-0 nova_compute[251992]: 2025-12-06 08:16:54.216 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:16:54 compute-0 nova_compute[251992]: 2025-12-06 08:16:54.216 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:16:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:16:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:54.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:16:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:54 compute-0 ceph-mon[74339]: pgmap v3728: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 28 KiB/s wr, 178 op/s
Dec 06 08:16:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1924030309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:54 compute-0 ceph-mon[74339]: osdmap e418: 3 total, 3 up, 3 in
Dec 06 08:16:54 compute-0 ceph-mon[74339]: pgmap v3730: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.3 KiB/s wr, 116 op/s
Dec 06 08:16:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1434463726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3489347069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:16:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3489347069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:16:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:16:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:55.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:16:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 476 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Dec 06 08:16:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:56.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:57.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:57 compute-0 ceph-mon[74339]: pgmap v3731: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 476 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Dec 06 08:16:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1484566160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:57 compute-0 nova_compute[251992]: 2025-12-06 08:16:57.218 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:16:57 compute-0 nova_compute[251992]: 2025-12-06 08:16:57.218 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:16:57 compute-0 nova_compute[251992]: 2025-12-06 08:16:57.219 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:16:57 compute-0 nova_compute[251992]: 2025-12-06 08:16:57.715 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 434 KiB/s rd, 2.6 MiB/s wr, 134 op/s
Dec 06 08:16:58 compute-0 nova_compute[251992]: 2025-12-06 08:16:58.038 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:16:58 compute-0 nova_compute[251992]: 2025-12-06 08:16:58.038 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:16:58 compute-0 nova_compute[251992]: 2025-12-06 08:16:58.039 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:16:58 compute-0 nova_compute[251992]: 2025-12-06 08:16:58.039 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 177567d6-7f0b-46df-aa8e-60e0089ae786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:16:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1500307962' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:16:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:16:58.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:58 compute-0 nova_compute[251992]: 2025-12-06 08:16:58.472 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:16:58 compute-0 sudo[400165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:16:58 compute-0 sudo[400165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:58 compute-0 sudo[400165]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:59 compute-0 sudo[400190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:16:59 compute-0 sudo[400190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:59 compute-0 sudo[400190]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:59 compute-0 sudo[400215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:16:59 compute-0 sudo[400215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:16:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:16:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:16:59.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:16:59 compute-0 sudo[400215]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:59 compute-0 sudo[400240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:16:59 compute-0 sudo[400240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:16:59 compute-0 ceph-mon[74339]: pgmap v3732: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 434 KiB/s rd, 2.6 MiB/s wr, 134 op/s
Dec 06 08:16:59 compute-0 sudo[400240]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:16:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Dec 06 08:16:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:16:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:16:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:16:59 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:16:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:16:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 434 KiB/s rd, 2.6 MiB/s wr, 134 op/s
Dec 06 08:17:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Dec 06 08:17:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Dec 06 08:17:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:17:00 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 544ac5c7-39bd-4c56-a992-63d640fe4a0d does not exist
Dec 06 08:17:00 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev eb3bcbc5-e7bc-487c-957c-009c40adf1a9 does not exist
Dec 06 08:17:00 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 346d0aa2-80f2-4ecd-8714-1f22f17aa0ba does not exist
Dec 06 08:17:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:17:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:17:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:17:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:17:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:17:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:17:00 compute-0 nova_compute[251992]: 2025-12-06 08:17:00.218 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Updating instance_info_cache with network_info: [{"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:17:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:17:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:17:00 compute-0 ceph-mon[74339]: osdmap e419: 3 total, 3 up, 3 in
Dec 06 08:17:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:17:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:17:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:17:00 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:17:00 compute-0 nova_compute[251992]: 2025-12-06 08:17:00.246 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:17:00 compute-0 nova_compute[251992]: 2025-12-06 08:17:00.246 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:17:00 compute-0 nova_compute[251992]: 2025-12-06 08:17:00.247 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:00.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:00 compute-0 sudo[400297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:00 compute-0 sudo[400297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:00 compute-0 sudo[400297]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:00 compute-0 sudo[400322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:17:00 compute-0 sudo[400322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:00 compute-0 sudo[400322]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:00 compute-0 sudo[400347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:00 compute-0 sudo[400347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:00 compute-0 sudo[400347]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:00 compute-0 sudo[400372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:17:00 compute-0 sudo[400372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:00 compute-0 podman[400435]: 2025-12-06 08:17:00.742296005 +0000 UTC m=+0.046564475 container create 2943e7664c9fad9f750ce5de404324fc058237bb9c1cea05da508109fa22bb1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 08:17:00 compute-0 systemd[1]: Started libpod-conmon-2943e7664c9fad9f750ce5de404324fc058237bb9c1cea05da508109fa22bb1c.scope.
Dec 06 08:17:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:17:00 compute-0 podman[400435]: 2025-12-06 08:17:00.812229825 +0000 UTC m=+0.116498315 container init 2943e7664c9fad9f750ce5de404324fc058237bb9c1cea05da508109fa22bb1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:17:00 compute-0 podman[400435]: 2025-12-06 08:17:00.722714853 +0000 UTC m=+0.026983373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:17:00 compute-0 podman[400435]: 2025-12-06 08:17:00.820083839 +0000 UTC m=+0.124352309 container start 2943e7664c9fad9f750ce5de404324fc058237bb9c1cea05da508109fa22bb1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 08:17:00 compute-0 podman[400435]: 2025-12-06 08:17:00.822824043 +0000 UTC m=+0.127092543 container attach 2943e7664c9fad9f750ce5de404324fc058237bb9c1cea05da508109fa22bb1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_fermat, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:17:00 compute-0 nervous_fermat[400452]: 167 167
Dec 06 08:17:00 compute-0 systemd[1]: libpod-2943e7664c9fad9f750ce5de404324fc058237bb9c1cea05da508109fa22bb1c.scope: Deactivated successfully.
Dec 06 08:17:00 compute-0 podman[400435]: 2025-12-06 08:17:00.825676931 +0000 UTC m=+0.129945411 container died 2943e7664c9fad9f750ce5de404324fc058237bb9c1cea05da508109fa22bb1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:17:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e510e7a8454d73af85a3d61e5b721326ae0d1f0087bfbad03c48b68d8035a84-merged.mount: Deactivated successfully.
Dec 06 08:17:00 compute-0 podman[400435]: 2025-12-06 08:17:00.867451535 +0000 UTC m=+0.171720005 container remove 2943e7664c9fad9f750ce5de404324fc058237bb9c1cea05da508109fa22bb1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 08:17:00 compute-0 systemd[1]: libpod-conmon-2943e7664c9fad9f750ce5de404324fc058237bb9c1cea05da508109fa22bb1c.scope: Deactivated successfully.
Dec 06 08:17:01 compute-0 podman[400477]: 2025-12-06 08:17:01.052586925 +0000 UTC m=+0.052938389 container create c70311ff1886c144de53809b278a5395c828c6ae206e18f180075a13c1700bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:17:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:01.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:01 compute-0 systemd[1]: Started libpod-conmon-c70311ff1886c144de53809b278a5395c828c6ae206e18f180075a13c1700bd5.scope.
Dec 06 08:17:01 compute-0 podman[400477]: 2025-12-06 08:17:01.024536693 +0000 UTC m=+0.024888197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:17:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd4582aa371721c3a26450a37b33b1608028a960b0b264af0e202b89310a2dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd4582aa371721c3a26450a37b33b1608028a960b0b264af0e202b89310a2dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd4582aa371721c3a26450a37b33b1608028a960b0b264af0e202b89310a2dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd4582aa371721c3a26450a37b33b1608028a960b0b264af0e202b89310a2dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd4582aa371721c3a26450a37b33b1608028a960b0b264af0e202b89310a2dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:01 compute-0 podman[400477]: 2025-12-06 08:17:01.164141835 +0000 UTC m=+0.164493269 container init c70311ff1886c144de53809b278a5395c828c6ae206e18f180075a13c1700bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:17:01 compute-0 podman[400477]: 2025-12-06 08:17:01.172177134 +0000 UTC m=+0.172528568 container start c70311ff1886c144de53809b278a5395c828c6ae206e18f180075a13c1700bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:17:01 compute-0 podman[400477]: 2025-12-06 08:17:01.17571908 +0000 UTC m=+0.176070504 container attach c70311ff1886c144de53809b278a5395c828c6ae206e18f180075a13c1700bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 08:17:01 compute-0 ceph-mon[74339]: pgmap v3733: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 434 KiB/s rd, 2.6 MiB/s wr, 134 op/s
Dec 06 08:17:01 compute-0 nova_compute[251992]: 2025-12-06 08:17:01.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.2 MiB/s wr, 149 op/s
Dec 06 08:17:01 compute-0 eloquent_carson[400494]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:17:01 compute-0 eloquent_carson[400494]: --> relative data size: 1.0
Dec 06 08:17:01 compute-0 eloquent_carson[400494]: --> All data devices are unavailable
Dec 06 08:17:01 compute-0 systemd[1]: libpod-c70311ff1886c144de53809b278a5395c828c6ae206e18f180075a13c1700bd5.scope: Deactivated successfully.
Dec 06 08:17:02 compute-0 podman[400509]: 2025-12-06 08:17:02.016155191 +0000 UTC m=+0.023172430 container died c70311ff1886c144de53809b278a5395c828c6ae206e18f180075a13c1700bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:17:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fd4582aa371721c3a26450a37b33b1608028a960b0b264af0e202b89310a2dd-merged.mount: Deactivated successfully.
Dec 06 08:17:02 compute-0 podman[400509]: 2025-12-06 08:17:02.064029652 +0000 UTC m=+0.071046891 container remove c70311ff1886c144de53809b278a5395c828c6ae206e18f180075a13c1700bd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 08:17:02 compute-0 systemd[1]: libpod-conmon-c70311ff1886c144de53809b278a5395c828c6ae206e18f180075a13c1700bd5.scope: Deactivated successfully.
Dec 06 08:17:02 compute-0 sudo[400372]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:02 compute-0 sudo[400524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:02 compute-0 sudo[400524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:02 compute-0 sudo[400524]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:02 compute-0 sudo[400549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:17:02 compute-0 sudo[400549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:02 compute-0 sudo[400549]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:02.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:02 compute-0 sudo[400574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:02 compute-0 sudo[400574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:02 compute-0 sudo[400574]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:02 compute-0 sudo[400599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:17:02 compute-0 sudo[400599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:02 compute-0 podman[400663]: 2025-12-06 08:17:02.643430662 +0000 UTC m=+0.035277340 container create 02a4b5296382a1a250d1abc94b398040aeca47ee7b916aa184e21cff88222c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 08:17:02 compute-0 systemd[1]: Started libpod-conmon-02a4b5296382a1a250d1abc94b398040aeca47ee7b916aa184e21cff88222c94.scope.
Dec 06 08:17:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:17:02 compute-0 podman[400663]: 2025-12-06 08:17:02.719779836 +0000 UTC m=+0.111626524 container init 02a4b5296382a1a250d1abc94b398040aeca47ee7b916aa184e21cff88222c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:17:02 compute-0 nova_compute[251992]: 2025-12-06 08:17:02.718 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:02 compute-0 podman[400663]: 2025-12-06 08:17:02.628559788 +0000 UTC m=+0.020406486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:17:02 compute-0 podman[400663]: 2025-12-06 08:17:02.725587094 +0000 UTC m=+0.117433772 container start 02a4b5296382a1a250d1abc94b398040aeca47ee7b916aa184e21cff88222c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 08:17:02 compute-0 podman[400663]: 2025-12-06 08:17:02.728556704 +0000 UTC m=+0.120403402 container attach 02a4b5296382a1a250d1abc94b398040aeca47ee7b916aa184e21cff88222c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:17:02 compute-0 systemd[1]: libpod-02a4b5296382a1a250d1abc94b398040aeca47ee7b916aa184e21cff88222c94.scope: Deactivated successfully.
Dec 06 08:17:02 compute-0 eloquent_bohr[400679]: 167 167
Dec 06 08:17:02 compute-0 conmon[400679]: conmon 02a4b5296382a1a250d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-02a4b5296382a1a250d1abc94b398040aeca47ee7b916aa184e21cff88222c94.scope/container/memory.events
Dec 06 08:17:02 compute-0 podman[400663]: 2025-12-06 08:17:02.732842061 +0000 UTC m=+0.124688759 container died 02a4b5296382a1a250d1abc94b398040aeca47ee7b916aa184e21cff88222c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:17:02 compute-0 sudo[400682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:02 compute-0 sudo[400682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f33d65b9a67ce7015c2c46c4b066bec53393981b7c383b05c1b55033e5393c8-merged.mount: Deactivated successfully.
Dec 06 08:17:02 compute-0 sudo[400682]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:02 compute-0 podman[400663]: 2025-12-06 08:17:02.771705987 +0000 UTC m=+0.163552665 container remove 02a4b5296382a1a250d1abc94b398040aeca47ee7b916aa184e21cff88222c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 08:17:02 compute-0 systemd[1]: libpod-conmon-02a4b5296382a1a250d1abc94b398040aeca47ee7b916aa184e21cff88222c94.scope: Deactivated successfully.
Dec 06 08:17:02 compute-0 sudo[400720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:02 compute-0 sudo[400720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:02 compute-0 sudo[400720]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:02 compute-0 podman[400752]: 2025-12-06 08:17:02.928802784 +0000 UTC m=+0.040363138 container create 62ee90f414ce8befcd0831ead970d4548b883e554e834bbd302ce9c7e4ec1c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_engelbart, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 08:17:02 compute-0 systemd[1]: Started libpod-conmon-62ee90f414ce8befcd0831ead970d4548b883e554e834bbd302ce9c7e4ec1c27.scope.
Dec 06 08:17:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f4eed2859adea0e0f2dac54eaa9273fc23dc00ad483616ff444e68d233c7e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f4eed2859adea0e0f2dac54eaa9273fc23dc00ad483616ff444e68d233c7e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f4eed2859adea0e0f2dac54eaa9273fc23dc00ad483616ff444e68d233c7e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27f4eed2859adea0e0f2dac54eaa9273fc23dc00ad483616ff444e68d233c7e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:02 compute-0 podman[400752]: 2025-12-06 08:17:02.99453249 +0000 UTC m=+0.106092874 container init 62ee90f414ce8befcd0831ead970d4548b883e554e834bbd302ce9c7e4ec1c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 08:17:03 compute-0 podman[400752]: 2025-12-06 08:17:03.000913204 +0000 UTC m=+0.112473568 container start 62ee90f414ce8befcd0831ead970d4548b883e554e834bbd302ce9c7e4ec1c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_engelbart, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 08:17:03 compute-0 podman[400752]: 2025-12-06 08:17:03.004051759 +0000 UTC m=+0.115612113 container attach 62ee90f414ce8befcd0831ead970d4548b883e554e834bbd302ce9c7e4ec1c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 08:17:03 compute-0 podman[400752]: 2025-12-06 08:17:02.913870898 +0000 UTC m=+0.025431272 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:17:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:03.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:03 compute-0 nova_compute[251992]: 2025-12-06 08:17:03.474 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:03 compute-0 nova_compute[251992]: 2025-12-06 08:17:03.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:03 compute-0 modest_engelbart[400768]: {
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:     "0": [
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:         {
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             "devices": [
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "/dev/loop3"
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             ],
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             "lv_name": "ceph_lv0",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             "lv_size": "7511998464",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             "name": "ceph_lv0",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             "tags": {
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "ceph.cluster_name": "ceph",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "ceph.crush_device_class": "",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "ceph.encrypted": "0",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "ceph.osd_id": "0",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "ceph.type": "block",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:                 "ceph.vdo": "0"
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             },
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             "type": "block",
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:             "vg_name": "ceph_vg0"
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:         }
Dec 06 08:17:03 compute-0 modest_engelbart[400768]:     ]
Dec 06 08:17:03 compute-0 modest_engelbart[400768]: }
Dec 06 08:17:03 compute-0 systemd[1]: libpod-62ee90f414ce8befcd0831ead970d4548b883e554e834bbd302ce9c7e4ec1c27.scope: Deactivated successfully.
Dec 06 08:17:03 compute-0 ceph-mon[74339]: pgmap v3735: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.2 MiB/s wr, 149 op/s
Dec 06 08:17:03 compute-0 podman[400779]: 2025-12-06 08:17:03.832645858 +0000 UTC m=+0.024130116 container died 62ee90f414ce8befcd0831ead970d4548b883e554e834bbd302ce9c7e4ec1c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 08:17:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-27f4eed2859adea0e0f2dac54eaa9273fc23dc00ad483616ff444e68d233c7e3-merged.mount: Deactivated successfully.
Dec 06 08:17:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 120 op/s
Dec 06 08:17:03 compute-0 podman[400779]: 2025-12-06 08:17:03.883201691 +0000 UTC m=+0.074685940 container remove 62ee90f414ce8befcd0831ead970d4548b883e554e834bbd302ce9c7e4ec1c27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:17:03 compute-0 systemd[1]: libpod-conmon-62ee90f414ce8befcd0831ead970d4548b883e554e834bbd302ce9c7e4ec1c27.scope: Deactivated successfully.
Dec 06 08:17:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:03.892 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:03.894 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:03.895 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:03 compute-0 sudo[400599]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:03 compute-0 sudo[400795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:03 compute-0 sudo[400795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:03 compute-0 sudo[400795]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:04 compute-0 sudo[400820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:17:04 compute-0 sudo[400820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:04 compute-0 sudo[400820]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:04 compute-0 sudo[400845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:04 compute-0 sudo[400845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:04 compute-0 sudo[400845]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:04 compute-0 sudo[400870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:17:04 compute-0 sudo[400870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:17:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:04.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:17:04 compute-0 podman[400935]: 2025-12-06 08:17:04.439988827 +0000 UTC m=+0.039228156 container create 8c05fd4f8e4ca96765b8481416247711938c93d7984cd5b50a0122dddcf04e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_nash, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:17:04 compute-0 systemd[1]: Started libpod-conmon-8c05fd4f8e4ca96765b8481416247711938c93d7984cd5b50a0122dddcf04e9c.scope.
Dec 06 08:17:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:17:04 compute-0 podman[400935]: 2025-12-06 08:17:04.423343455 +0000 UTC m=+0.022582804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:17:04 compute-0 podman[400935]: 2025-12-06 08:17:04.52439119 +0000 UTC m=+0.123630539 container init 8c05fd4f8e4ca96765b8481416247711938c93d7984cd5b50a0122dddcf04e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_nash, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 08:17:04 compute-0 podman[400935]: 2025-12-06 08:17:04.530422245 +0000 UTC m=+0.129661574 container start 8c05fd4f8e4ca96765b8481416247711938c93d7984cd5b50a0122dddcf04e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:17:04 compute-0 elegant_nash[400951]: 167 167
Dec 06 08:17:04 compute-0 podman[400935]: 2025-12-06 08:17:04.534423933 +0000 UTC m=+0.133663272 container attach 8c05fd4f8e4ca96765b8481416247711938c93d7984cd5b50a0122dddcf04e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_nash, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:17:04 compute-0 systemd[1]: libpod-8c05fd4f8e4ca96765b8481416247711938c93d7984cd5b50a0122dddcf04e9c.scope: Deactivated successfully.
Dec 06 08:17:04 compute-0 podman[400935]: 2025-12-06 08:17:04.536642483 +0000 UTC m=+0.135881812 container died 8c05fd4f8e4ca96765b8481416247711938c93d7984cd5b50a0122dddcf04e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_nash, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:17:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-46e614323af8be1fcdb37e5969eaa0d47d36f152ff53e27f13db6299e8cdbdaa-merged.mount: Deactivated successfully.
Dec 06 08:17:04 compute-0 podman[400935]: 2025-12-06 08:17:04.57037117 +0000 UTC m=+0.169610499 container remove 8c05fd4f8e4ca96765b8481416247711938c93d7984cd5b50a0122dddcf04e9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_nash, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 08:17:04 compute-0 systemd[1]: libpod-conmon-8c05fd4f8e4ca96765b8481416247711938c93d7984cd5b50a0122dddcf04e9c.scope: Deactivated successfully.
Dec 06 08:17:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:04 compute-0 podman[400975]: 2025-12-06 08:17:04.730820698 +0000 UTC m=+0.039237547 container create 58a75ea35797f8264ade7c29b4db9344cec46a5bb25ea527568acb6fec435cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 08:17:04 compute-0 systemd[1]: Started libpod-conmon-58a75ea35797f8264ade7c29b4db9344cec46a5bb25ea527568acb6fec435cfa.scope.
Dec 06 08:17:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a1aa9aa810bf577bb0920f27b9c46c726d1f18885b5ef613905305cb651ae4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a1aa9aa810bf577bb0920f27b9c46c726d1f18885b5ef613905305cb651ae4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a1aa9aa810bf577bb0920f27b9c46c726d1f18885b5ef613905305cb651ae4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a1aa9aa810bf577bb0920f27b9c46c726d1f18885b5ef613905305cb651ae4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:04 compute-0 podman[400975]: 2025-12-06 08:17:04.714305259 +0000 UTC m=+0.022722138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:17:04 compute-0 podman[400975]: 2025-12-06 08:17:04.812010514 +0000 UTC m=+0.120427383 container init 58a75ea35797f8264ade7c29b4db9344cec46a5bb25ea527568acb6fec435cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chaum, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:17:04 compute-0 podman[400975]: 2025-12-06 08:17:04.819522218 +0000 UTC m=+0.127939067 container start 58a75ea35797f8264ade7c29b4db9344cec46a5bb25ea527568acb6fec435cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chaum, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:17:04 compute-0 podman[400975]: 2025-12-06 08:17:04.822573651 +0000 UTC m=+0.130990510 container attach 58a75ea35797f8264ade7c29b4db9344cec46a5bb25ea527568acb6fec435cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 08:17:04 compute-0 ceph-mon[74339]: pgmap v3736: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 120 op/s
Dec 06 08:17:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:05.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:05 compute-0 nova_compute[251992]: 2025-12-06 08:17:05.376 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:05.376 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=97, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=96) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:17:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:05.378 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:17:05 compute-0 youthful_chaum[400991]: {
Dec 06 08:17:05 compute-0 youthful_chaum[400991]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:17:05 compute-0 youthful_chaum[400991]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:17:05 compute-0 youthful_chaum[400991]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:17:05 compute-0 youthful_chaum[400991]:         "osd_id": 0,
Dec 06 08:17:05 compute-0 youthful_chaum[400991]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:17:05 compute-0 youthful_chaum[400991]:         "type": "bluestore"
Dec 06 08:17:05 compute-0 youthful_chaum[400991]:     }
Dec 06 08:17:05 compute-0 youthful_chaum[400991]: }
Dec 06 08:17:05 compute-0 systemd[1]: libpod-58a75ea35797f8264ade7c29b4db9344cec46a5bb25ea527568acb6fec435cfa.scope: Deactivated successfully.
Dec 06 08:17:05 compute-0 podman[400975]: 2025-12-06 08:17:05.665595953 +0000 UTC m=+0.974012812 container died 58a75ea35797f8264ade7c29b4db9344cec46a5bb25ea527568acb6fec435cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chaum, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:17:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-17a1aa9aa810bf577bb0920f27b9c46c726d1f18885b5ef613905305cb651ae4-merged.mount: Deactivated successfully.
Dec 06 08:17:05 compute-0 podman[400975]: 2025-12-06 08:17:05.717890393 +0000 UTC m=+1.026307242 container remove 58a75ea35797f8264ade7c29b4db9344cec46a5bb25ea527568acb6fec435cfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:17:05 compute-0 systemd[1]: libpod-conmon-58a75ea35797f8264ade7c29b4db9344cec46a5bb25ea527568acb6fec435cfa.scope: Deactivated successfully.
Dec 06 08:17:05 compute-0 sudo[400870]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:17:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:17:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:17:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:17:05 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7aa4c927-09e3-4d8a-a5b1-8cff9f49fdb9 does not exist
Dec 06 08:17:05 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b0a3f681-d652-4353-a5ab-be6cd7a4d67b does not exist
Dec 06 08:17:05 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ac4dac21-892e-4803-bb52-659d0d489a5d does not exist
Dec 06 08:17:05 compute-0 sudo[401027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:05 compute-0 sudo[401027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 239 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 94 op/s
Dec 06 08:17:05 compute-0 sudo[401027]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:05 compute-0 sudo[401052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:17:05 compute-0 sudo[401052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:05 compute-0 sudo[401052]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:06.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:06 compute-0 nova_compute[251992]: 2025-12-06 08:17:06.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:17:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:17:06 compute-0 ceph-mon[74339]: pgmap v3737: 305 pgs: 305 active+clean; 239 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 94 op/s
Dec 06 08:17:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:07.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:07 compute-0 nova_compute[251992]: 2025-12-06 08:17:07.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:07 compute-0 nova_compute[251992]: 2025-12-06 08:17:07.720 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/713378447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 08:17:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:08.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:08 compute-0 nova_compute[251992]: 2025-12-06 08:17:08.476 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:08 compute-0 ceph-mon[74339]: pgmap v3738: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 08:17:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:17:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:09.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:17:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:09 compute-0 nova_compute[251992]: 2025-12-06 08:17:09.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:09 compute-0 nova_compute[251992]: 2025-12-06 08:17:09.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:17:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 08:17:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1028973287' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:17:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1028973287' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:17:10 compute-0 ovn_controller[147168]: 2025-12-06T08:17:10Z|00789|binding|INFO|Releasing lport 0def55ce-ba12-4bf5-a3d3-0df5436060c0 from this chassis (sb_readonly=0)
Dec 06 08:17:10 compute-0 nova_compute[251992]: 2025-12-06 08:17:10.161 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:10.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:10 compute-0 ceph-mon[74339]: pgmap v3739: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 08:17:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/28693135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:17:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:11.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.408 251996 DEBUG nova.compute.manager [req-ffde2d95-5ca9-45bc-898b-779c7fe27d8b req-706115ba-4787-4666-b2c7-9231aa71d290 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Received event network-changed-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.408 251996 DEBUG nova.compute.manager [req-ffde2d95-5ca9-45bc-898b-779c7fe27d8b req-706115ba-4787-4666-b2c7-9231aa71d290 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Refreshing instance network info cache due to event network-changed-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.408 251996 DEBUG oslo_concurrency.lockutils [req-ffde2d95-5ca9-45bc-898b-779c7fe27d8b req-706115ba-4787-4666-b2c7-9231aa71d290 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.409 251996 DEBUG oslo_concurrency.lockutils [req-ffde2d95-5ca9-45bc-898b-779c7fe27d8b req-706115ba-4787-4666-b2c7-9231aa71d290 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.409 251996 DEBUG nova.network.neutron [req-ffde2d95-5ca9-45bc-898b-779c7fe27d8b req-706115ba-4787-4666-b2c7-9231aa71d290 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Refreshing network info cache for port 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.515 251996 DEBUG oslo_concurrency.lockutils [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "177567d6-7f0b-46df-aa8e-60e0089ae786" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.516 251996 DEBUG oslo_concurrency.lockutils [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.516 251996 DEBUG oslo_concurrency.lockutils [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.516 251996 DEBUG oslo_concurrency.lockutils [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.516 251996 DEBUG oslo_concurrency.lockutils [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.518 251996 INFO nova.compute.manager [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Terminating instance
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.519 251996 DEBUG nova.compute.manager [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:17:11 compute-0 kernel: tap50ea2b6d-c5 (unregistering): left promiscuous mode
Dec 06 08:17:11 compute-0 NetworkManager[48965]: <info>  [1765009031.5761] device (tap50ea2b6d-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.582 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:11 compute-0 ovn_controller[147168]: 2025-12-06T08:17:11Z|00790|binding|INFO|Releasing lport 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 from this chassis (sb_readonly=0)
Dec 06 08:17:11 compute-0 ovn_controller[147168]: 2025-12-06T08:17:11Z|00791|binding|INFO|Setting lport 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 down in Southbound
Dec 06 08:17:11 compute-0 ovn_controller[147168]: 2025-12-06T08:17:11Z|00792|binding|INFO|Removing iface tap50ea2b6d-c5 ovn-installed in OVS
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.584 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.594 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:3c:41 10.100.0.5'], port_security=['fa:16:3e:41:3c:41 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '177567d6-7f0b-46df-aa8e-60e0089ae786', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-538c6ce0-da01-452d-90fd-a1413cdabc3f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '775a4317-6d80-4c7b-b228-dc47d45bf2a3 7c7b5da5-6a38-4cc8-9da6-e9378fe63966', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d6949a2-5de9-4fea-95c2-bae5d7f67c6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.595 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 in datapath 538c6ce0-da01-452d-90fd-a1413cdabc3f unbound from our chassis
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.597 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 538c6ce0-da01-452d-90fd-a1413cdabc3f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.599 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ea3a42-aeec-4720-b39c-2054af6bc836]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.599 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f namespace which is not needed anymore
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.603 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:11 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000cc.scope: Deactivated successfully.
Dec 06 08:17:11 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000cc.scope: Consumed 17.565s CPU time.
Dec 06 08:17:11 compute-0 systemd-machined[212986]: Machine qemu-93-instance-000000cc terminated.
Dec 06 08:17:11 compute-0 neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f[399487]: [NOTICE]   (399493) : haproxy version is 2.8.14-c23fe91
Dec 06 08:17:11 compute-0 neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f[399487]: [NOTICE]   (399493) : path to executable is /usr/sbin/haproxy
Dec 06 08:17:11 compute-0 neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f[399487]: [WARNING]  (399493) : Exiting Master process...
Dec 06 08:17:11 compute-0 neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f[399487]: [ALERT]    (399493) : Current worker (399495) exited with code 143 (Terminated)
Dec 06 08:17:11 compute-0 neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f[399487]: [WARNING]  (399493) : All workers exited. Exiting... (0)
Dec 06 08:17:11 compute-0 systemd[1]: libpod-ec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a.scope: Deactivated successfully.
Dec 06 08:17:11 compute-0 podman[401104]: 2025-12-06 08:17:11.7374349 +0000 UTC m=+0.042183617 container died ec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.760 251996 INFO nova.virt.libvirt.driver [-] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Instance destroyed successfully.
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.761 251996 DEBUG nova.objects.instance [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'resources' on Instance uuid 177567d6-7f0b-46df-aa8e-60e0089ae786 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a-userdata-shm.mount: Deactivated successfully.
Dec 06 08:17:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbb8db7aeb2fdb301e11ceef969faf6bbadc0d39e7052ad11c44cd8df8e8ad54-merged.mount: Deactivated successfully.
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.778 251996 DEBUG nova.virt.libvirt.vif [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:15:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-1654357624',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-1654357624',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=204,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKaZAyJuyUPzASOkR1gDA1fKpANBX/232ZFoOe4S4TqW4oJaPbVRWCH6B3MgYuDyWBOXcKvJA2hYTFzL7aJjwbQRzjOcMySxOVJCemy5U4dU6fSZoXsToGx4Gww1etdCtw==',key_name='tempest-TestSecurityGroupsBasicOps-1382558522',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:15:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-6isxqqan',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:15:55Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=177567d6-7f0b-46df-aa8e-60e0089ae786,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.779 251996 DEBUG nova.network.os_vif_util [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.780 251996 DEBUG nova.network.os_vif_util [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:3c:41,bridge_name='br-int',has_traffic_filtering=True,id=50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0,network=Network(538c6ce0-da01-452d-90fd-a1413cdabc3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50ea2b6d-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.780 251996 DEBUG os_vif [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:3c:41,bridge_name='br-int',has_traffic_filtering=True,id=50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0,network=Network(538c6ce0-da01-452d-90fd-a1413cdabc3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50ea2b6d-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:17:11 compute-0 podman[401104]: 2025-12-06 08:17:11.781903818 +0000 UTC m=+0.086652535 container cleanup ec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.782 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.783 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50ea2b6d-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.784 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.785 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.790 251996 INFO os_vif [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:3c:41,bridge_name='br-int',has_traffic_filtering=True,id=50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0,network=Network(538c6ce0-da01-452d-90fd-a1413cdabc3f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50ea2b6d-c5')
Dec 06 08:17:11 compute-0 systemd[1]: libpod-conmon-ec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a.scope: Deactivated successfully.
Dec 06 08:17:11 compute-0 podman[401143]: 2025-12-06 08:17:11.84785472 +0000 UTC m=+0.044839869 container remove ec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.853 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9c3441-22c6-4b0f-b1d7-96a10dfc7c34]: (4, ('Sat Dec  6 08:17:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f (ec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a)\nec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a\nSat Dec  6 08:17:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f (ec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a)\nec731687b079e7eb9dcb604e90bfb449b5ede7c8917b88ec4f094d78224aef3a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.855 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[68788a3f-55c6-43d4-83a6-3d8d3935d9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.856 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap538c6ce0-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.858 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:11 compute-0 kernel: tap538c6ce0-d0: left promiscuous mode
Dec 06 08:17:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.872 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.875 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3836e1-2eb8-4fae-9bf5-f917ce8768d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.891 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab39ad1-4043-45dc-a40f-700c18fc4497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.892 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[965897d1-6904-4b51-988a-9edd4dbb6548]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.896 251996 DEBUG nova.compute.manager [req-93b959ec-36d6-480e-ad7e-10999b91a164 req-e598ab1e-0c51-4c33-a60b-a5f3245abc25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Received event network-vif-unplugged-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.896 251996 DEBUG oslo_concurrency.lockutils [req-93b959ec-36d6-480e-ad7e-10999b91a164 req-e598ab1e-0c51-4c33-a60b-a5f3245abc25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.897 251996 DEBUG oslo_concurrency.lockutils [req-93b959ec-36d6-480e-ad7e-10999b91a164 req-e598ab1e-0c51-4c33-a60b-a5f3245abc25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.897 251996 DEBUG oslo_concurrency.lockutils [req-93b959ec-36d6-480e-ad7e-10999b91a164 req-e598ab1e-0c51-4c33-a60b-a5f3245abc25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.897 251996 DEBUG nova.compute.manager [req-93b959ec-36d6-480e-ad7e-10999b91a164 req-e598ab1e-0c51-4c33-a60b-a5f3245abc25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] No waiting events found dispatching network-vif-unplugged-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:17:11 compute-0 nova_compute[251992]: 2025-12-06 08:17:11.898 251996 DEBUG nova.compute.manager [req-93b959ec-36d6-480e-ad7e-10999b91a164 req-e598ab1e-0c51-4c33-a60b-a5f3245abc25 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Received event network-vif-unplugged-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.908 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd5d29b-70fd-4416-8381-a985f9cb6d50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 922714, 'reachable_time': 20636, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401177, 'error': None, 'target': 'ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d538c6ce0\x2dda01\x2d452d\x2d90fd\x2da1413cdabc3f.mount: Deactivated successfully.
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.915 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-538c6ce0-da01-452d-90fd-a1413cdabc3f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:17:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:11.916 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[74d16a7e-25c4-470c-a089-71d07ed07b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/109254007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:17:12 compute-0 nova_compute[251992]: 2025-12-06 08:17:12.171 251996 INFO nova.virt.libvirt.driver [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Deleting instance files /var/lib/nova/instances/177567d6-7f0b-46df-aa8e-60e0089ae786_del
Dec 06 08:17:12 compute-0 nova_compute[251992]: 2025-12-06 08:17:12.172 251996 INFO nova.virt.libvirt.driver [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Deletion of /var/lib/nova/instances/177567d6-7f0b-46df-aa8e-60e0089ae786_del complete
Dec 06 08:17:12 compute-0 nova_compute[251992]: 2025-12-06 08:17:12.222 251996 INFO nova.compute.manager [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Took 0.70 seconds to destroy the instance on the hypervisor.
Dec 06 08:17:12 compute-0 nova_compute[251992]: 2025-12-06 08:17:12.222 251996 DEBUG oslo.service.loopingcall [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:17:12 compute-0 nova_compute[251992]: 2025-12-06 08:17:12.223 251996 DEBUG nova.compute.manager [-] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:17:12 compute-0 nova_compute[251992]: 2025-12-06 08:17:12.223 251996 DEBUG nova.network.neutron [-] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:17:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:12.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:13 compute-0 ceph-mon[74339]: pgmap v3740: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Dec 06 08:17:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:17:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:17:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:17:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:17:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:17:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:17:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:13.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.328 251996 DEBUG nova.network.neutron [req-ffde2d95-5ca9-45bc-898b-779c7fe27d8b req-706115ba-4787-4666-b2c7-9231aa71d290 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Updated VIF entry in instance network info cache for port 50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.329 251996 DEBUG nova.network.neutron [req-ffde2d95-5ca9-45bc-898b-779c7fe27d8b req-706115ba-4787-4666-b2c7-9231aa71d290 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Updating instance_info_cache with network_info: [{"id": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "address": "fa:16:3e:41:3c:41", "network": {"id": "538c6ce0-da01-452d-90fd-a1413cdabc3f", "bridge": "br-int", "label": "tempest-network-smoke--1545075878", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50ea2b6d-c5", "ovs_interfaceid": "50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.350 251996 DEBUG oslo_concurrency.lockutils [req-ffde2d95-5ca9-45bc-898b-779c7fe27d8b req-706115ba-4787-4666-b2c7-9231aa71d290 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-177567d6-7f0b-46df-aa8e-60e0089ae786" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:17:13 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:13.380 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '97'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.470 251996 DEBUG nova.network.neutron [-] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.479 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.490 251996 INFO nova.compute.manager [-] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Took 1.27 seconds to deallocate network for instance.
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.548 251996 DEBUG oslo_concurrency.lockutils [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.548 251996 DEBUG oslo_concurrency.lockutils [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.560 251996 DEBUG nova.compute.manager [req-19e4f3b9-41c0-494a-8ffc-1a60d9ebac5d req-ee84472b-d1a9-4855-ae1c-792d32c927d3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Received event network-vif-deleted-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.595 251996 DEBUG oslo_concurrency.processutils [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.974 251996 DEBUG nova.compute.manager [req-c9adb89d-a4fd-45dc-bc76-903a4e0c7ded req-db2de028-5ca7-4bcb-b5b5-90520da54db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Received event network-vif-plugged-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.975 251996 DEBUG oslo_concurrency.lockutils [req-c9adb89d-a4fd-45dc-bc76-903a4e0c7ded req-db2de028-5ca7-4bcb-b5b5-90520da54db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.975 251996 DEBUG oslo_concurrency.lockutils [req-c9adb89d-a4fd-45dc-bc76-903a4e0c7ded req-db2de028-5ca7-4bcb-b5b5-90520da54db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.975 251996 DEBUG oslo_concurrency.lockutils [req-c9adb89d-a4fd-45dc-bc76-903a4e0c7ded req-db2de028-5ca7-4bcb-b5b5-90520da54db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.975 251996 DEBUG nova.compute.manager [req-c9adb89d-a4fd-45dc-bc76-903a4e0c7ded req-db2de028-5ca7-4bcb-b5b5-90520da54db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] No waiting events found dispatching network-vif-plugged-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:17:13 compute-0 nova_compute[251992]: 2025-12-06 08:17:13.976 251996 WARNING nova.compute.manager [req-c9adb89d-a4fd-45dc-bc76-903a4e0c7ded req-db2de028-5ca7-4bcb-b5b5-90520da54db8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Received unexpected event network-vif-plugged-50ea2b6d-c57a-49f8-80ec-62a2adc3e6a0 for instance with vm_state deleted and task_state None.
Dec 06 08:17:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:17:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93613626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:14 compute-0 nova_compute[251992]: 2025-12-06 08:17:14.039 251996 DEBUG oslo_concurrency.processutils [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:17:14 compute-0 nova_compute[251992]: 2025-12-06 08:17:14.045 251996 DEBUG nova.compute.provider_tree [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:17:14 compute-0 nova_compute[251992]: 2025-12-06 08:17:14.064 251996 DEBUG nova.scheduler.client.report [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:17:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/93613626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:14 compute-0 nova_compute[251992]: 2025-12-06 08:17:14.086 251996 DEBUG oslo_concurrency.lockutils [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:14 compute-0 nova_compute[251992]: 2025-12-06 08:17:14.114 251996 INFO nova.scheduler.client.report [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Deleted allocations for instance 177567d6-7f0b-46df-aa8e-60e0089ae786
Dec 06 08:17:14 compute-0 nova_compute[251992]: 2025-12-06 08:17:14.202 251996 DEBUG oslo_concurrency.lockutils [None req-0237fced-bd21-4eeb-bc5c-45a012fed4f9 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "177567d6-7f0b-46df-aa8e-60e0089ae786" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:14.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:15 compute-0 ceph-mon[74339]: pgmap v3741: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 06 08:17:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/686669092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:17:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:15.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 305 active+clean; 184 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 06 08:17:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:16.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:16 compute-0 podman[401203]: 2025-12-06 08:17:16.451847242 +0000 UTC m=+0.108050035 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true)
Dec 06 08:17:16 compute-0 nova_compute[251992]: 2025-12-06 08:17:16.785 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:16 compute-0 ceph-mon[74339]: pgmap v3742: 305 pgs: 305 active+clean; 184 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 81 op/s
Dec 06 08:17:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.003000080s ======
Dec 06 08:17:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:17.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec 06 08:17:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 1018 KiB/s wr, 62 op/s
Dec 06 08:17:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:18.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:18 compute-0 nova_compute[251992]: 2025-12-06 08:17:18.482 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:17:18
Dec 06 08:17:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:17:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:17:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'images', '.rgw.root', '.mgr', 'vms', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.meta']
Dec 06 08:17:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:17:18 compute-0 ceph-mon[74339]: pgmap v3743: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 1018 KiB/s wr, 62 op/s
Dec 06 08:17:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:19.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 3.6 KiB/s wr, 36 op/s
Dec 06 08:17:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:20.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:20 compute-0 ceph-mon[74339]: pgmap v3744: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 3.6 KiB/s wr, 36 op/s
Dec 06 08:17:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:21.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:21 compute-0 nova_compute[251992]: 2025-12-06 08:17:21.788 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 18 KiB/s wr, 105 op/s
Dec 06 08:17:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:22.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:22 compute-0 podman[401234]: 2025-12-06 08:17:22.391326305 +0000 UTC m=+0.050956216 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:17:22 compute-0 podman[401235]: 2025-12-06 08:17:22.394994125 +0000 UTC m=+0.054453021 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 06 08:17:22 compute-0 sudo[401270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:22 compute-0 sudo[401270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:22 compute-0 sudo[401270]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:22 compute-0 sudo[401295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:22 compute-0 sudo[401295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:22 compute-0 sudo[401295]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:23 compute-0 ceph-mon[74339]: pgmap v3745: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 18 KiB/s wr, 105 op/s
Dec 06 08:17:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:23.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:23 compute-0 nova_compute[251992]: 2025-12-06 08:17:23.533 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:17:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:17:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 16 KiB/s wr, 96 op/s
Dec 06 08:17:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:24.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:25.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:25 compute-0 ceph-mon[74339]: pgmap v3746: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 16 KiB/s wr, 96 op/s
Dec 06 08:17:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 101 op/s
Dec 06 08:17:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:17:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:26.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:17:26 compute-0 nova_compute[251992]: 2025-12-06 08:17:26.402 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:26 compute-0 nova_compute[251992]: 2025-12-06 08:17:26.609 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.178915410385259e-06 of space, bias 1.0, pg target 0.0024536746231155777 quantized to 32 (current 32)
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003150881723431975 of space, bias 1.0, pg target 0.9452645170295925 quantized to 32 (current 32)
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:17:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:17:26 compute-0 nova_compute[251992]: 2025-12-06 08:17:26.759 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009031.7577581, 177567d6-7f0b-46df-aa8e-60e0089ae786 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:17:26 compute-0 nova_compute[251992]: 2025-12-06 08:17:26.759 251996 INFO nova.compute.manager [-] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] VM Stopped (Lifecycle Event)
Dec 06 08:17:26 compute-0 nova_compute[251992]: 2025-12-06 08:17:26.786 251996 DEBUG nova.compute.manager [None req-10b6543d-a549-45d2-9f89-2321fd2ed115 - - - - - -] [instance: 177567d6-7f0b-46df-aa8e-60e0089ae786] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:17:26 compute-0 nova_compute[251992]: 2025-12-06 08:17:26.789 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:17:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:27.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:17:27 compute-0 ceph-mon[74339]: pgmap v3747: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 101 op/s
Dec 06 08:17:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:17:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:17:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:17:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:17:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:17:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 81 op/s
Dec 06 08:17:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:17:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:28.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:17:28 compute-0 nova_compute[251992]: 2025-12-06 08:17:28.534 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:17:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:29.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:17:29 compute-0 ceph-mon[74339]: pgmap v3748: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 81 op/s
Dec 06 08:17:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 06 08:17:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:30.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:31.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:31 compute-0 nova_compute[251992]: 2025-12-06 08:17:31.791 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Dec 06 08:17:31 compute-0 ceph-mon[74339]: pgmap v3749: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 06 08:17:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:32.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:33.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:33 compute-0 ceph-mon[74339]: pgmap v3750: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Dec 06 08:17:33 compute-0 nova_compute[251992]: 2025-12-06 08:17:33.536 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3751: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 KiB/s rd, 6 op/s
Dec 06 08:17:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:34.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:35.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:35 compute-0 ceph-mon[74339]: pgmap v3751: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 KiB/s rd, 6 op/s
Dec 06 08:17:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3752: 305 pgs: 305 active+clean; 191 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 1.5 MiB/s wr, 43 op/s
Dec 06 08:17:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:36.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:36 compute-0 nova_compute[251992]: 2025-12-06 08:17:36.793 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:37.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:37 compute-0 ceph-mon[74339]: pgmap v3752: 305 pgs: 305 active+clean; 191 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 1.5 MiB/s wr, 43 op/s
Dec 06 08:17:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3753: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:17:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:38.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:38 compute-0 nova_compute[251992]: 2025-12-06 08:17:38.539 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:39.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:39 compute-0 ceph-mon[74339]: pgmap v3753: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:17:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3754: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:17:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1944053455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:40.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:40 compute-0 nova_compute[251992]: 2025-12-06 08:17:40.482 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:40 compute-0 nova_compute[251992]: 2025-12-06 08:17:40.483 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:40 compute-0 nova_compute[251992]: 2025-12-06 08:17:40.499 251996 DEBUG nova.compute.manager [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:17:40 compute-0 nova_compute[251992]: 2025-12-06 08:17:40.585 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:40 compute-0 nova_compute[251992]: 2025-12-06 08:17:40.585 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:40 compute-0 nova_compute[251992]: 2025-12-06 08:17:40.592 251996 DEBUG nova.virt.hardware [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:17:40 compute-0 nova_compute[251992]: 2025-12-06 08:17:40.592 251996 INFO nova.compute.claims [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:17:40 compute-0 nova_compute[251992]: 2025-12-06 08:17:40.685 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:17:41 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2842103239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.114 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.120 251996 DEBUG nova.compute.provider_tree [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.140 251996 DEBUG nova.scheduler.client.report [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:17:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:17:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:41.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.167 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.168 251996 DEBUG nova.compute.manager [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.223 251996 DEBUG nova.compute.manager [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.223 251996 DEBUG nova.network.neutron [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:17:41 compute-0 ceph-mon[74339]: pgmap v3754: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:17:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1459922500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2842103239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.265 251996 INFO nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.293 251996 DEBUG nova.compute.manager [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.405 251996 DEBUG nova.compute.manager [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.407 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.407 251996 INFO nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Creating image(s)
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.442 251996 DEBUG nova.storage.rbd_utils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.474 251996 DEBUG nova.storage.rbd_utils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.502 251996 DEBUG nova.storage.rbd_utils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.506 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.573 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.575 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.575 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.576 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.608 251996 DEBUG nova.storage.rbd_utils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.613 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.794 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3755: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:17:41 compute-0 nova_compute[251992]: 2025-12-06 08:17:41.973 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:17:42 compute-0 nova_compute[251992]: 2025-12-06 08:17:42.049 251996 DEBUG nova.storage.rbd_utils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] resizing rbd image 93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:17:42 compute-0 nova_compute[251992]: 2025-12-06 08:17:42.115 251996 DEBUG nova.policy [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0432cb6633e14c1b86fc320e7f3bb880', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:17:42 compute-0 nova_compute[251992]: 2025-12-06 08:17:42.169 251996 DEBUG nova.objects.instance [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 93e8a72f-c895-445c-8c5b-aadf11ffd3b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:17:42 compute-0 nova_compute[251992]: 2025-12-06 08:17:42.185 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:17:42 compute-0 nova_compute[251992]: 2025-12-06 08:17:42.186 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Ensure instance console log exists: /var/lib/nova/instances/93e8a72f-c895-445c-8c5b-aadf11ffd3b5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:17:42 compute-0 nova_compute[251992]: 2025-12-06 08:17:42.186 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:42 compute-0 nova_compute[251992]: 2025-12-06 08:17:42.187 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:42 compute-0 nova_compute[251992]: 2025-12-06 08:17:42.187 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:42.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:43 compute-0 sudo[401519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:43 compute-0 sudo[401519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:43 compute-0 sudo[401519]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:17:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:17:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:17:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:17:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:17:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:17:43 compute-0 sudo[401544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:17:43 compute-0 sudo[401544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:17:43 compute-0 sudo[401544]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:43.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:43 compute-0 ceph-mon[74339]: pgmap v3755: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:17:43 compute-0 nova_compute[251992]: 2025-12-06 08:17:43.540 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3756: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 08:17:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:17:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:44.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:17:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:45 compute-0 nova_compute[251992]: 2025-12-06 08:17:45.028 251996 DEBUG nova.network.neutron [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Successfully created port: 334292eb-9b24-4eb7-aa88-28824964eb71 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:17:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:17:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:45.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:17:45 compute-0 ceph-mon[74339]: pgmap v3756: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 08:17:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3757: 305 pgs: 305 active+clean; 233 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 380 KiB/s rd, 3.8 MiB/s wr, 93 op/s
Dec 06 08:17:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:46.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Dec 06 08:17:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Dec 06 08:17:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Dec 06 08:17:46 compute-0 nova_compute[251992]: 2025-12-06 08:17:46.796 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:46 compute-0 nova_compute[251992]: 2025-12-06 08:17:46.895 251996 DEBUG nova.network.neutron [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Successfully updated port: 334292eb-9b24-4eb7-aa88-28824964eb71 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:17:46 compute-0 nova_compute[251992]: 2025-12-06 08:17:46.959 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:17:46 compute-0 nova_compute[251992]: 2025-12-06 08:17:46.959 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquired lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:17:46 compute-0 nova_compute[251992]: 2025-12-06 08:17:46.959 251996 DEBUG nova.network.neutron [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:17:47 compute-0 nova_compute[251992]: 2025-12-06 08:17:47.018 251996 DEBUG nova.compute.manager [req-842461a7-727b-4786-aa74-ea1ce97bc807 req-7acee47a-f098-4961-b412-31dd88a0b91c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Received event network-changed-334292eb-9b24-4eb7-aa88-28824964eb71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:17:47 compute-0 nova_compute[251992]: 2025-12-06 08:17:47.019 251996 DEBUG nova.compute.manager [req-842461a7-727b-4786-aa74-ea1ce97bc807 req-7acee47a-f098-4961-b412-31dd88a0b91c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Refreshing instance network info cache due to event network-changed-334292eb-9b24-4eb7-aa88-28824964eb71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:17:47 compute-0 nova_compute[251992]: 2025-12-06 08:17:47.019 251996 DEBUG oslo_concurrency.lockutils [req-842461a7-727b-4786-aa74-ea1ce97bc807 req-7acee47a-f098-4961-b412-31dd88a0b91c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:17:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:47.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:47 compute-0 nova_compute[251992]: 2025-12-06 08:17:47.224 251996 DEBUG nova.network.neutron [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:17:47 compute-0 podman[401572]: 2025-12-06 08:17:47.419986408 +0000 UTC m=+0.078121623 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:17:47 compute-0 ceph-mon[74339]: pgmap v3757: 305 pgs: 305 active+clean; 233 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 380 KiB/s rd, 3.8 MiB/s wr, 93 op/s
Dec 06 08:17:47 compute-0 ceph-mon[74339]: osdmap e420: 3 total, 3 up, 3 in
Dec 06 08:17:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Dec 06 08:17:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Dec 06 08:17:47 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Dec 06 08:17:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3760: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 2.7 MiB/s wr, 44 op/s
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.102 251996 DEBUG nova.network.neutron [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Updating instance_info_cache with network_info: [{"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.135 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Releasing lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.136 251996 DEBUG nova.compute.manager [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Instance network_info: |[{"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.136 251996 DEBUG oslo_concurrency.lockutils [req-842461a7-727b-4786-aa74-ea1ce97bc807 req-7acee47a-f098-4961-b412-31dd88a0b91c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.136 251996 DEBUG nova.network.neutron [req-842461a7-727b-4786-aa74-ea1ce97bc807 req-7acee47a-f098-4961-b412-31dd88a0b91c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Refreshing network info cache for port 334292eb-9b24-4eb7-aa88-28824964eb71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.139 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Start _get_guest_xml network_info=[{"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.143 251996 WARNING nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.146 251996 DEBUG nova.virt.libvirt.host [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.147 251996 DEBUG nova.virt.libvirt.host [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.150 251996 DEBUG nova.virt.libvirt.host [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.150 251996 DEBUG nova.virt.libvirt.host [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.151 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.152 251996 DEBUG nova.virt.hardware [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.152 251996 DEBUG nova.virt.hardware [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.152 251996 DEBUG nova.virt.hardware [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.153 251996 DEBUG nova.virt.hardware [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.153 251996 DEBUG nova.virt.hardware [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.153 251996 DEBUG nova.virt.hardware [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.153 251996 DEBUG nova.virt.hardware [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.153 251996 DEBUG nova.virt.hardware [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.154 251996 DEBUG nova.virt.hardware [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.154 251996 DEBUG nova.virt.hardware [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.154 251996 DEBUG nova.virt.hardware [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.157 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:48.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.543 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:17:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2865137747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.595 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:17:48 compute-0 ceph-mon[74339]: osdmap e421: 3 total, 3 up, 3 in
Dec 06 08:17:48 compute-0 ceph-mon[74339]: pgmap v3760: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 2.7 MiB/s wr, 44 op/s
Dec 06 08:17:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2865137747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.624 251996 DEBUG nova.storage.rbd_utils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:17:48 compute-0 nova_compute[251992]: 2025-12-06 08:17:48.628 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:17:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/406431765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.047 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.049 251996 DEBUG nova.virt.libvirt.vif [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:17:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-360306683',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-360306683',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=209,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzGPaWk14xbtW5sHv4GXDiMSG/C5MZExlVtwgnjxxcupj/ss21DUqD2dDVK61uPoLLLdziVAI6yXqU7ErexaGkX9er10rHIWMNub/drUqFoT5/Q97yMArULe40gKdjSAw==',key_name='tempest-TestSecurityGroupsBasicOps-853526666',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-8jdmlew3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:17:41Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=93e8a72f-c895-445c-8c5b-aadf11ffd3b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.049 251996 DEBUG nova.network.os_vif_util [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.050 251996 DEBUG nova.network.os_vif_util [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:bd:a3,bridge_name='br-int',has_traffic_filtering=True,id=334292eb-9b24-4eb7-aa88-28824964eb71,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap334292eb-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.051 251996 DEBUG nova.objects.instance [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 93e8a72f-c895-445c-8c5b-aadf11ffd3b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.069 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:17:49 compute-0 nova_compute[251992]:   <uuid>93e8a72f-c895-445c-8c5b-aadf11ffd3b5</uuid>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   <name>instance-000000d1</name>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-360306683</nova:name>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:17:48</nova:creationTime>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <nova:user uuid="0432cb6633e14c1b86fc320e7f3bb880">tempest-TestSecurityGroupsBasicOps-568463891-project-member</nova:user>
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <nova:project uuid="5d23d1d6ffc142eaa9bee0ef93fe60e4">tempest-TestSecurityGroupsBasicOps-568463891</nova:project>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <nova:port uuid="334292eb-9b24-4eb7-aa88-28824964eb71">
Dec 06 08:17:49 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <system>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <entry name="serial">93e8a72f-c895-445c-8c5b-aadf11ffd3b5</entry>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <entry name="uuid">93e8a72f-c895-445c-8c5b-aadf11ffd3b5</entry>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     </system>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   <os>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   </os>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   <features>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   </features>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk">
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       </source>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk.config">
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       </source>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:17:49 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:32:bd:a3"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <target dev="tap334292eb-9b"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/93e8a72f-c895-445c-8c5b-aadf11ffd3b5/console.log" append="off"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <video>
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     </video>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:17:49 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:17:49 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:17:49 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:17:49 compute-0 nova_compute[251992]: </domain>
Dec 06 08:17:49 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.070 251996 DEBUG nova.compute.manager [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Preparing to wait for external event network-vif-plugged-334292eb-9b24-4eb7-aa88-28824964eb71 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.070 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.070 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.070 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.071 251996 DEBUG nova.virt.libvirt.vif [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:17:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-360306683',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-360306683',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=209,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzGPaWk14xbtW5sHv4GXDiMSG/C5MZExlVtwgnjxxcupj/ss21DUqD2dDVK61uPoLLLdziVAI6yXqU7ErexaGkX9er10rHIWMNub/drUqFoT5/Q97yMArULe40gKdjSAw==',key_name='tempest-TestSecurityGroupsBasicOps-853526666',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-8jdmlew3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:17:41Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=93e8a72f-c895-445c-8c5b-aadf11ffd3b5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.071 251996 DEBUG nova.network.os_vif_util [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.072 251996 DEBUG nova.network.os_vif_util [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:bd:a3,bridge_name='br-int',has_traffic_filtering=True,id=334292eb-9b24-4eb7-aa88-28824964eb71,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap334292eb-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.072 251996 DEBUG os_vif [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:bd:a3,bridge_name='br-int',has_traffic_filtering=True,id=334292eb-9b24-4eb7-aa88-28824964eb71,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap334292eb-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.073 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.073 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.073 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.077 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.078 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap334292eb-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.078 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap334292eb-9b, col_values=(('external_ids', {'iface-id': '334292eb-9b24-4eb7-aa88-28824964eb71', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:bd:a3', 'vm-uuid': '93e8a72f-c895-445c-8c5b-aadf11ffd3b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.079 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:49 compute-0 NetworkManager[48965]: <info>  [1765009069.0808] manager: (tap334292eb-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.084 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.085 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.086 251996 INFO os_vif [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:bd:a3,bridge_name='br-int',has_traffic_filtering=True,id=334292eb-9b24-4eb7-aa88-28824964eb71,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap334292eb-9b')
Dec 06 08:17:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:49.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.155 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.155 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.156 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No VIF found with MAC fa:16:3e:32:bd:a3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.156 251996 INFO nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Using config drive
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.183 251996 DEBUG nova.storage.rbd_utils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.500 251996 DEBUG nova.network.neutron [req-842461a7-727b-4786-aa74-ea1ce97bc807 req-7acee47a-f098-4961-b412-31dd88a0b91c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Updated VIF entry in instance network info cache for port 334292eb-9b24-4eb7-aa88-28824964eb71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.501 251996 DEBUG nova.network.neutron [req-842461a7-727b-4786-aa74-ea1ce97bc807 req-7acee47a-f098-4961-b412-31dd88a0b91c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Updating instance_info_cache with network_info: [{"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.521 251996 DEBUG oslo_concurrency.lockutils [req-842461a7-727b-4786-aa74-ea1ce97bc807 req-7acee47a-f098-4961-b412-31dd88a0b91c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:17:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/406431765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:17:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.702 251996 INFO nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Creating config drive at /var/lib/nova/instances/93e8a72f-c895-445c-8c5b-aadf11ffd3b5/disk.config
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.707 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/93e8a72f-c895-445c-8c5b-aadf11ffd3b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl49cey3o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.844 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/93e8a72f-c895-445c-8c5b-aadf11ffd3b5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl49cey3o" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.881 251996 DEBUG nova.storage.rbd_utils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:17:49 compute-0 nova_compute[251992]: 2025-12-06 08:17:49.886 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/93e8a72f-c895-445c-8c5b-aadf11ffd3b5/disk.config 93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3761: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 2.7 MiB/s wr, 43 op/s
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.049 251996 DEBUG oslo_concurrency.processutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/93e8a72f-c895-445c-8c5b-aadf11ffd3b5/disk.config 93e8a72f-c895-445c-8c5b-aadf11ffd3b5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.050 251996 INFO nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Deleting local config drive /var/lib/nova/instances/93e8a72f-c895-445c-8c5b-aadf11ffd3b5/disk.config because it was imported into RBD.
Dec 06 08:17:50 compute-0 kernel: tap334292eb-9b: entered promiscuous mode
Dec 06 08:17:50 compute-0 NetworkManager[48965]: <info>  [1765009070.1093] manager: (tap334292eb-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/375)
Dec 06 08:17:50 compute-0 ovn_controller[147168]: 2025-12-06T08:17:50Z|00793|binding|INFO|Claiming lport 334292eb-9b24-4eb7-aa88-28824964eb71 for this chassis.
Dec 06 08:17:50 compute-0 ovn_controller[147168]: 2025-12-06T08:17:50Z|00794|binding|INFO|334292eb-9b24-4eb7-aa88-28824964eb71: Claiming fa:16:3e:32:bd:a3 10.100.0.4
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.110 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.119 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.127 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:bd:a3 10.100.0.4'], port_security=['fa:16:3e:32:bd:a3 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '93e8a72f-c895-445c-8c5b-aadf11ffd3b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e18f04-0697-4301-a26d-02786b558075', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '271978dc-b7c6-4e85-b0d0-f54bd930f144 c4946238-f586-46b2-b353-baad07fea15f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=626f221f-4e25-4acd-9bf5-4d267283bf54, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=334292eb-9b24-4eb7-aa88-28824964eb71) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.128 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 334292eb-9b24-4eb7-aa88-28824964eb71 in datapath 14e18f04-0697-4301-a26d-02786b558075 bound to our chassis
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.129 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e18f04-0697-4301-a26d-02786b558075
Dec 06 08:17:50 compute-0 systemd-machined[212986]: New machine qemu-94-instance-000000d1.
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.140 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5c72f6-17ac-4612-8411-3746b913a918]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.141 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14e18f04-01 in ovnmeta-14e18f04-0697-4301-a26d-02786b558075 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.143 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14e18f04-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.143 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[18595430-b77c-4f8f-be9e-ec8805ff5d69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.144 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a58658-be05-4611-91ab-2e84fea6e7e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.157 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0eb28b-61c1-4ca9-8997-f32baef076bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-000000d1.
Dec 06 08:17:50 compute-0 systemd-udevd[401738]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.180 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5271ad65-073b-459d-9a4e-d2bd53f8f5a7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 NetworkManager[48965]: <info>  [1765009070.1911] device (tap334292eb-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:17:50 compute-0 ovn_controller[147168]: 2025-12-06T08:17:50Z|00795|binding|INFO|Setting lport 334292eb-9b24-4eb7-aa88-28824964eb71 ovn-installed in OVS
Dec 06 08:17:50 compute-0 ovn_controller[147168]: 2025-12-06T08:17:50Z|00796|binding|INFO|Setting lport 334292eb-9b24-4eb7-aa88-28824964eb71 up in Southbound
Dec 06 08:17:50 compute-0 NetworkManager[48965]: <info>  [1765009070.1919] device (tap334292eb-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.193 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.217 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[b8bd8d80-fc22-46f5-843e-a03c86f6ccff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.223 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[88b65e59-9ea4-4679-a2e3-6217a0493df9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 NetworkManager[48965]: <info>  [1765009070.2253] manager: (tap14e18f04-00): new Veth device (/org/freedesktop/NetworkManager/Devices/376)
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.256 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[636fd4d6-c229-4a14-b049-e2059c581d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.259 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[91936543-04fa-4a77-830c-46eba0614f09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 NetworkManager[48965]: <info>  [1765009070.2803] device (tap14e18f04-00): carrier: link connected
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.287 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[060b36eb-1dca-4e4f-aa2d-37ca18cee3fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.303 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6e1cd22a-25a8-46d5-8479-f8cdabe91a6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e18f04-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:2e:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 934286, 'reachable_time': 23230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401768, 'error': None, 'target': 'ovnmeta-14e18f04-0697-4301-a26d-02786b558075', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:50.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.317 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[292cfc2c-4c57-47e9-a957-bdd944df903f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:2eaf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 934286, 'tstamp': 934286}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401769, 'error': None, 'target': 'ovnmeta-14e18f04-0697-4301-a26d-02786b558075', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.336 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0abf357e-29e8-4a66-8f1f-908c53a2959d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e18f04-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:2e:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 934286, 'reachable_time': 23230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 401777, 'error': None, 'target': 'ovnmeta-14e18f04-0697-4301-a26d-02786b558075', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.367 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7911b757-42f9-4c7d-918d-113874450f3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.430 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c3529aba-83da-47c1-81a7-ec910cff675f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.431 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e18f04-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.431 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.432 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e18f04-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:17:50 compute-0 NetworkManager[48965]: <info>  [1765009070.4347] manager: (tap14e18f04-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.434 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:50 compute-0 kernel: tap14e18f04-00: entered promiscuous mode
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.436 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.438 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e18f04-00, col_values=(('external_ids', {'iface-id': '43aca8c4-55d8-4309-a0d5-cbba710a5d1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.439 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:50 compute-0 ovn_controller[147168]: 2025-12-06T08:17:50Z|00797|binding|INFO|Releasing lport 43aca8c4-55d8-4309-a0d5-cbba710a5d1f from this chassis (sb_readonly=0)
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.462 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.463 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14e18f04-0697-4301-a26d-02786b558075.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14e18f04-0697-4301-a26d-02786b558075.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.464 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c173d254-4118-481f-8985-f8c02b3a7e1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.465 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-14e18f04-0697-4301-a26d-02786b558075
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/14e18f04-0697-4301-a26d-02786b558075.pid.haproxy
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 14e18f04-0697-4301-a26d-02786b558075
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.465 251996 DEBUG nova.compute.manager [req-4eac19e5-64e2-417c-b151-5e29d66f1319 req-aeb473de-6ee7-4a4f-9ad7-0aac45709673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Received event network-vif-plugged-334292eb-9b24-4eb7-aa88-28824964eb71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.466 251996 DEBUG oslo_concurrency.lockutils [req-4eac19e5-64e2-417c-b151-5e29d66f1319 req-aeb473de-6ee7-4a4f-9ad7-0aac45709673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:50 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:17:50.466 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14e18f04-0697-4301-a26d-02786b558075', 'env', 'PROCESS_TAG=haproxy-14e18f04-0697-4301-a26d-02786b558075', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14e18f04-0697-4301-a26d-02786b558075.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.466 251996 DEBUG oslo_concurrency.lockutils [req-4eac19e5-64e2-417c-b151-5e29d66f1319 req-aeb473de-6ee7-4a4f-9ad7-0aac45709673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.466 251996 DEBUG oslo_concurrency.lockutils [req-4eac19e5-64e2-417c-b151-5e29d66f1319 req-aeb473de-6ee7-4a4f-9ad7-0aac45709673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.467 251996 DEBUG nova.compute.manager [req-4eac19e5-64e2-417c-b151-5e29d66f1319 req-aeb473de-6ee7-4a4f-9ad7-0aac45709673 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Processing event network-vif-plugged-334292eb-9b24-4eb7-aa88-28824964eb71 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.467 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.516 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009070.5157108, 93e8a72f-c895-445c-8c5b-aadf11ffd3b5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.516 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] VM Started (Lifecycle Event)
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.518 251996 DEBUG nova.compute.manager [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.522 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.525 251996 INFO nova.virt.libvirt.driver [-] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Instance spawned successfully.
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.525 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.547 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.550 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.557 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.558 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.558 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.559 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.559 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.560 251996 DEBUG nova.virt.libvirt.driver [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.596 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.597 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009070.5158353, 93e8a72f-c895-445c-8c5b-aadf11ffd3b5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.597 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] VM Paused (Lifecycle Event)
Dec 06 08:17:50 compute-0 ceph-mon[74339]: pgmap v3761: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 2.7 MiB/s wr, 43 op/s
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.627 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.638 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009070.5214047, 93e8a72f-c895-445c-8c5b-aadf11ffd3b5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.639 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] VM Resumed (Lifecycle Event)
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.646 251996 INFO nova.compute.manager [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Took 9.24 seconds to spawn the instance on the hypervisor.
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.646 251996 DEBUG nova.compute.manager [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.680 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.686 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.720 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.739 251996 INFO nova.compute.manager [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Took 10.18 seconds to build instance.
Dec 06 08:17:50 compute-0 nova_compute[251992]: 2025-12-06 08:17:50.766 251996 DEBUG oslo_concurrency.lockutils [None req-236cba51-ddd0-493e-a850-3ae917bc03db 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:50 compute-0 podman[401844]: 2025-12-06 08:17:50.894492887 +0000 UTC m=+0.050646107 container create 37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 08:17:50 compute-0 systemd[1]: Started libpod-conmon-37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5.scope.
Dec 06 08:17:50 compute-0 podman[401844]: 2025-12-06 08:17:50.869848967 +0000 UTC m=+0.026002207 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:17:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a02e412d3e152c3f41c4f7f4ac3f662c6af135971d0ef53b4cb8ec8cee62d4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:17:50 compute-0 podman[401844]: 2025-12-06 08:17:50.987502353 +0000 UTC m=+0.143655593 container init 37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 08:17:50 compute-0 podman[401844]: 2025-12-06 08:17:50.992593882 +0000 UTC m=+0.148747102 container start 37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 08:17:51 compute-0 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[401859]: [NOTICE]   (401863) : New worker (401865) forked
Dec 06 08:17:51 compute-0 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[401859]: [NOTICE]   (401863) : Loading success.
Dec 06 08:17:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:17:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:51.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:17:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3762: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 139 KiB/s rd, 2.7 MiB/s wr, 79 op/s
Dec 06 08:17:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:52.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:52 compute-0 nova_compute[251992]: 2025-12-06 08:17:52.564 251996 DEBUG nova.compute.manager [req-030e9ea9-e7a7-4653-a5b7-edc5d384bddc req-d9a29fff-6a64-47bc-b82d-081430ebe21c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Received event network-vif-plugged-334292eb-9b24-4eb7-aa88-28824964eb71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:17:52 compute-0 nova_compute[251992]: 2025-12-06 08:17:52.565 251996 DEBUG oslo_concurrency.lockutils [req-030e9ea9-e7a7-4653-a5b7-edc5d384bddc req-d9a29fff-6a64-47bc-b82d-081430ebe21c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:52 compute-0 nova_compute[251992]: 2025-12-06 08:17:52.566 251996 DEBUG oslo_concurrency.lockutils [req-030e9ea9-e7a7-4653-a5b7-edc5d384bddc req-d9a29fff-6a64-47bc-b82d-081430ebe21c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:52 compute-0 nova_compute[251992]: 2025-12-06 08:17:52.566 251996 DEBUG oslo_concurrency.lockutils [req-030e9ea9-e7a7-4653-a5b7-edc5d384bddc req-d9a29fff-6a64-47bc-b82d-081430ebe21c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:52 compute-0 nova_compute[251992]: 2025-12-06 08:17:52.566 251996 DEBUG nova.compute.manager [req-030e9ea9-e7a7-4653-a5b7-edc5d384bddc req-d9a29fff-6a64-47bc-b82d-081430ebe21c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] No waiting events found dispatching network-vif-plugged-334292eb-9b24-4eb7-aa88-28824964eb71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:17:52 compute-0 nova_compute[251992]: 2025-12-06 08:17:52.567 251996 WARNING nova.compute.manager [req-030e9ea9-e7a7-4653-a5b7-edc5d384bddc req-d9a29fff-6a64-47bc-b82d-081430ebe21c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Received unexpected event network-vif-plugged-334292eb-9b24-4eb7-aa88-28824964eb71 for instance with vm_state active and task_state None.
Dec 06 08:17:52 compute-0 ceph-mon[74339]: pgmap v3762: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 139 KiB/s rd, 2.7 MiB/s wr, 79 op/s
Dec 06 08:17:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:53.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:53 compute-0 podman[401876]: 2025-12-06 08:17:53.428698252 +0000 UTC m=+0.061755119 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 06 08:17:53 compute-0 podman[401877]: 2025-12-06 08:17:53.447282676 +0000 UTC m=+0.091288850 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 08:17:53 compute-0 nova_compute[251992]: 2025-12-06 08:17:53.593 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:53 compute-0 nova_compute[251992]: 2025-12-06 08:17:53.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:53 compute-0 nova_compute[251992]: 2025-12-06 08:17:53.692 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:53 compute-0 nova_compute[251992]: 2025-12-06 08:17:53.693 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:53 compute-0 nova_compute[251992]: 2025-12-06 08:17:53.693 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:53 compute-0 nova_compute[251992]: 2025-12-06 08:17:53.693 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:17:53 compute-0 nova_compute[251992]: 2025-12-06 08:17:53.694 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:53 compute-0 NetworkManager[48965]: <info>  [1765009073.7183] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Dec 06 08:17:53 compute-0 NetworkManager[48965]: <info>  [1765009073.7194] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Dec 06 08:17:53 compute-0 nova_compute[251992]: 2025-12-06 08:17:53.736 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:53 compute-0 nova_compute[251992]: 2025-12-06 08:17:53.839 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:53 compute-0 ovn_controller[147168]: 2025-12-06T08:17:53Z|00798|binding|INFO|Releasing lport 43aca8c4-55d8-4309-a0d5-cbba710a5d1f from this chassis (sb_readonly=0)
Dec 06 08:17:53 compute-0 nova_compute[251992]: 2025-12-06 08:17:53.852 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3763: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 111 KiB/s rd, 173 KiB/s wr, 36 op/s
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.080 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:17:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622176443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.145 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.188 251996 DEBUG nova.compute.manager [req-95caa716-ddc7-4500-8e60-8e2d399e060a req-07a44001-dc2c-4720-95fa-2c05d19c38b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Received event network-changed-334292eb-9b24-4eb7-aa88-28824964eb71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.189 251996 DEBUG nova.compute.manager [req-95caa716-ddc7-4500-8e60-8e2d399e060a req-07a44001-dc2c-4720-95fa-2c05d19c38b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Refreshing instance network info cache due to event network-changed-334292eb-9b24-4eb7-aa88-28824964eb71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.189 251996 DEBUG oslo_concurrency.lockutils [req-95caa716-ddc7-4500-8e60-8e2d399e060a req-07a44001-dc2c-4720-95fa-2c05d19c38b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.189 251996 DEBUG oslo_concurrency.lockutils [req-95caa716-ddc7-4500-8e60-8e2d399e060a req-07a44001-dc2c-4720-95fa-2c05d19c38b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.189 251996 DEBUG nova.network.neutron [req-95caa716-ddc7-4500-8e60-8e2d399e060a req-07a44001-dc2c-4720-95fa-2c05d19c38b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Refreshing network info cache for port 334292eb-9b24-4eb7-aa88-28824964eb71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.215 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.216 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:17:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:54.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.512 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.513 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3958MB free_disk=20.967212677001953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.513 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.514 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:17:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.756 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 93e8a72f-c895-445c-8c5b-aadf11ffd3b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.757 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.757 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:17:54 compute-0 nova_compute[251992]: 2025-12-06 08:17:54.851 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:17:54 compute-0 ceph-mon[74339]: pgmap v3763: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 111 KiB/s rd, 173 KiB/s wr, 36 op/s
Dec 06 08:17:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1622176443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:55.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:17:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347139014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:55 compute-0 nova_compute[251992]: 2025-12-06 08:17:55.370 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:17:55 compute-0 nova_compute[251992]: 2025-12-06 08:17:55.376 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:17:55 compute-0 nova_compute[251992]: 2025-12-06 08:17:55.532 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:17:55 compute-0 nova_compute[251992]: 2025-12-06 08:17:55.559 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:17:55 compute-0 nova_compute[251992]: 2025-12-06 08:17:55.559 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:17:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3764: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 20 KiB/s wr, 103 op/s
Dec 06 08:17:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3347139014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:56.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:56 compute-0 nova_compute[251992]: 2025-12-06 08:17:56.554 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:56 compute-0 ceph-mon[74339]: pgmap v3764: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 20 KiB/s wr, 103 op/s
Dec 06 08:17:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3542385760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:57.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:57 compute-0 nova_compute[251992]: 2025-12-06 08:17:57.462 251996 DEBUG nova.network.neutron [req-95caa716-ddc7-4500-8e60-8e2d399e060a req-07a44001-dc2c-4720-95fa-2c05d19c38b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Updated VIF entry in instance network info cache for port 334292eb-9b24-4eb7-aa88-28824964eb71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:17:57 compute-0 nova_compute[251992]: 2025-12-06 08:17:57.463 251996 DEBUG nova.network.neutron [req-95caa716-ddc7-4500-8e60-8e2d399e060a req-07a44001-dc2c-4720-95fa-2c05d19c38b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Updating instance_info_cache with network_info: [{"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:17:57 compute-0 nova_compute[251992]: 2025-12-06 08:17:57.485 251996 DEBUG oslo_concurrency.lockutils [req-95caa716-ddc7-4500-8e60-8e2d399e060a req-07a44001-dc2c-4720-95fa-2c05d19c38b2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:17:57 compute-0 nova_compute[251992]: 2025-12-06 08:17:57.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:17:57 compute-0 nova_compute[251992]: 2025-12-06 08:17:57.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:17:57 compute-0 nova_compute[251992]: 2025-12-06 08:17:57.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:17:57 compute-0 nova_compute[251992]: 2025-12-06 08:17:57.814 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:17:57 compute-0 nova_compute[251992]: 2025-12-06 08:17:57.814 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:17:57 compute-0 nova_compute[251992]: 2025-12-06 08:17:57.814 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:17:57 compute-0 nova_compute[251992]: 2025-12-06 08:17:57.814 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 93e8a72f-c895-445c-8c5b-aadf11ffd3b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:17:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3765: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 18 KiB/s wr, 102 op/s
Dec 06 08:17:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/895991980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:17:58.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:58 compute-0 nova_compute[251992]: 2025-12-06 08:17:58.595 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:59 compute-0 ceph-mon[74339]: pgmap v3765: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 18 KiB/s wr, 102 op/s
Dec 06 08:17:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2688482981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:17:59 compute-0 nova_compute[251992]: 2025-12-06 08:17:59.082 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:17:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:17:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:17:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:17:59.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:17:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:17:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3766: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 08:18:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:00.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:00 compute-0 nova_compute[251992]: 2025-12-06 08:18:00.543 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Updating instance_info_cache with network_info: [{"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:18:00 compute-0 nova_compute[251992]: 2025-12-06 08:18:00.562 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:18:00 compute-0 nova_compute[251992]: 2025-12-06 08:18:00.562 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:18:00 compute-0 nova_compute[251992]: 2025-12-06 08:18:00.563 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:01 compute-0 ceph-mon[74339]: pgmap v3766: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 88 op/s
Dec 06 08:18:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:01.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3767: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 98 op/s
Dec 06 08:18:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/320196248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:18:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:02 compute-0 nova_compute[251992]: 2025-12-06 08:18:02.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:03 compute-0 ceph-mon[74339]: pgmap v3767: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 98 op/s
Dec 06 08:18:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/800670191' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:18:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:03.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:03 compute-0 sudo[401965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:03 compute-0 sudo[401965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:03 compute-0 sudo[401965]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:03 compute-0 sudo[401990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:03 compute-0 sudo[401990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:03 compute-0 sudo[401990]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:03 compute-0 ovn_controller[147168]: 2025-12-06T08:18:03Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:bd:a3 10.100.0.4
Dec 06 08:18:03 compute-0 ovn_controller[147168]: 2025-12-06T08:18:03Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:bd:a3 10.100.0.4
Dec 06 08:18:03 compute-0 nova_compute[251992]: 2025-12-06 08:18:03.595 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3768: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Dec 06 08:18:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:18:03.893 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:18:03.895 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:18:03.896 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:04 compute-0 nova_compute[251992]: 2025-12-06 08:18:04.084 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:04.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:04 compute-0 nova_compute[251992]: 2025-12-06 08:18:04.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:05 compute-0 ceph-mon[74339]: pgmap v3768: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Dec 06 08:18:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:05.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3769: 305 pgs: 305 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 137 op/s
Dec 06 08:18:06 compute-0 sudo[402016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:06 compute-0 sudo[402016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:06 compute-0 sudo[402016]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:06.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:06 compute-0 sudo[402041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:18:06 compute-0 sudo[402041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:06 compute-0 sudo[402041]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:06 compute-0 sudo[402066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:06 compute-0 sudo[402066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:06 compute-0 sudo[402066]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:06 compute-0 sudo[402091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:18:06 compute-0 sudo[402091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:06 compute-0 sudo[402091]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 08:18:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 08:18:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:18:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:18:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:18:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:18:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:18:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:18:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev bd734ee0-83bf-4266-a655-212baf17fff9 does not exist
Dec 06 08:18:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e63a8c72-38db-476c-8c33-2cc85fca8ad9 does not exist
Dec 06 08:18:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 03a83a6f-b560-4bcf-ab77-bf20bdbe717b does not exist
Dec 06 08:18:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:18:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:18:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:18:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:18:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:18:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:18:07 compute-0 ceph-mon[74339]: pgmap v3769: 305 pgs: 305 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 137 op/s
Dec 06 08:18:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 08:18:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:18:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:18:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:18:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:18:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:18:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:18:07 compute-0 sudo[402148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:07 compute-0 sudo[402148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:07 compute-0 sudo[402148]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:07.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:07 compute-0 sudo[402173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:18:07 compute-0 sudo[402173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:07 compute-0 sudo[402173]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:07 compute-0 sudo[402198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:07 compute-0 sudo[402198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:07 compute-0 sudo[402198]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:07 compute-0 sudo[402223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:18:07 compute-0 sudo[402223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:07 compute-0 nova_compute[251992]: 2025-12-06 08:18:07.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:07 compute-0 podman[402291]: 2025-12-06 08:18:07.657575986 +0000 UTC m=+0.043086782 container create e5a34f0005c1fba03ab1d45ea419a774180bfbc56cca80f9b4a015e7b18c899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:18:07 compute-0 systemd[1]: Started libpod-conmon-e5a34f0005c1fba03ab1d45ea419a774180bfbc56cca80f9b4a015e7b18c899f.scope.
Dec 06 08:18:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:18:07 compute-0 podman[402291]: 2025-12-06 08:18:07.636602826 +0000 UTC m=+0.022113652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:18:07 compute-0 podman[402291]: 2025-12-06 08:18:07.73869968 +0000 UTC m=+0.124210526 container init e5a34f0005c1fba03ab1d45ea419a774180bfbc56cca80f9b4a015e7b18c899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cohen, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:18:07 compute-0 podman[402291]: 2025-12-06 08:18:07.74608379 +0000 UTC m=+0.131594616 container start e5a34f0005c1fba03ab1d45ea419a774180bfbc56cca80f9b4a015e7b18c899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cohen, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:18:07 compute-0 podman[402291]: 2025-12-06 08:18:07.749617457 +0000 UTC m=+0.135128263 container attach e5a34f0005c1fba03ab1d45ea419a774180bfbc56cca80f9b4a015e7b18c899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cohen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:18:07 compute-0 clever_cohen[402308]: 167 167
Dec 06 08:18:07 compute-0 systemd[1]: libpod-e5a34f0005c1fba03ab1d45ea419a774180bfbc56cca80f9b4a015e7b18c899f.scope: Deactivated successfully.
Dec 06 08:18:07 compute-0 podman[402291]: 2025-12-06 08:18:07.751579269 +0000 UTC m=+0.137090075 container died e5a34f0005c1fba03ab1d45ea419a774180bfbc56cca80f9b4a015e7b18c899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 08:18:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-db64306f6f8b84f0fa9aff4820f5dd1df7c7f767fd25935f8f32605f6476f2f6-merged.mount: Deactivated successfully.
Dec 06 08:18:07 compute-0 podman[402291]: 2025-12-06 08:18:07.792384538 +0000 UTC m=+0.177895344 container remove e5a34f0005c1fba03ab1d45ea419a774180bfbc56cca80f9b4a015e7b18c899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_cohen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:18:07 compute-0 systemd[1]: libpod-conmon-e5a34f0005c1fba03ab1d45ea419a774180bfbc56cca80f9b4a015e7b18c899f.scope: Deactivated successfully.
Dec 06 08:18:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3770: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Dec 06 08:18:07 compute-0 podman[402335]: 2025-12-06 08:18:07.982982146 +0000 UTC m=+0.046402942 container create a09745f0ae94ded36bbae194fbc2c6de2827c2f1b6f299413222911e97516115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_greider, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:18:08 compute-0 systemd[1]: Started libpod-conmon-a09745f0ae94ded36bbae194fbc2c6de2827c2f1b6f299413222911e97516115.scope.
Dec 06 08:18:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf991d598bbdcfdd0417e2044157249d59f7e33c1df3ae5795c32b83d9293b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf991d598bbdcfdd0417e2044157249d59f7e33c1df3ae5795c32b83d9293b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf991d598bbdcfdd0417e2044157249d59f7e33c1df3ae5795c32b83d9293b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf991d598bbdcfdd0417e2044157249d59f7e33c1df3ae5795c32b83d9293b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abf991d598bbdcfdd0417e2044157249d59f7e33c1df3ae5795c32b83d9293b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:08 compute-0 podman[402335]: 2025-12-06 08:18:07.960978128 +0000 UTC m=+0.024398944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:18:08 compute-0 podman[402335]: 2025-12-06 08:18:08.060744599 +0000 UTC m=+0.124165395 container init a09745f0ae94ded36bbae194fbc2c6de2827c2f1b6f299413222911e97516115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_greider, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:18:08 compute-0 podman[402335]: 2025-12-06 08:18:08.072386095 +0000 UTC m=+0.135806891 container start a09745f0ae94ded36bbae194fbc2c6de2827c2f1b6f299413222911e97516115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_greider, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:18:08 compute-0 podman[402335]: 2025-12-06 08:18:08.075959592 +0000 UTC m=+0.139380448 container attach a09745f0ae94ded36bbae194fbc2c6de2827c2f1b6f299413222911e97516115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:18:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:08.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:08 compute-0 nova_compute[251992]: 2025-12-06 08:18:08.597 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:08 compute-0 nova_compute[251992]: 2025-12-06 08:18:08.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:08 compute-0 nova_compute[251992]: 2025-12-06 08:18:08.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:08 compute-0 bold_greider[402351]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:18:08 compute-0 bold_greider[402351]: --> relative data size: 1.0
Dec 06 08:18:08 compute-0 bold_greider[402351]: --> All data devices are unavailable
Dec 06 08:18:08 compute-0 systemd[1]: libpod-a09745f0ae94ded36bbae194fbc2c6de2827c2f1b6f299413222911e97516115.scope: Deactivated successfully.
Dec 06 08:18:08 compute-0 podman[402335]: 2025-12-06 08:18:08.932758508 +0000 UTC m=+0.996179304 container died a09745f0ae94ded36bbae194fbc2c6de2827c2f1b6f299413222911e97516115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-abf991d598bbdcfdd0417e2044157249d59f7e33c1df3ae5795c32b83d9293b3-merged.mount: Deactivated successfully.
Dec 06 08:18:08 compute-0 podman[402335]: 2025-12-06 08:18:08.987699301 +0000 UTC m=+1.051120097 container remove a09745f0ae94ded36bbae194fbc2c6de2827c2f1b6f299413222911e97516115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_greider, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:18:08 compute-0 systemd[1]: libpod-conmon-a09745f0ae94ded36bbae194fbc2c6de2827c2f1b6f299413222911e97516115.scope: Deactivated successfully.
Dec 06 08:18:09 compute-0 sudo[402223]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:18:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3375858047' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:18:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:18:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3375858047' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:18:09 compute-0 sudo[402378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:09 compute-0 sudo[402378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:09 compute-0 sudo[402378]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:09 compute-0 nova_compute[251992]: 2025-12-06 08:18:09.085 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:09 compute-0 sudo[402403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:18:09 compute-0 ceph-mon[74339]: pgmap v3770: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Dec 06 08:18:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3375858047' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:18:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3375858047' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:18:09 compute-0 sudo[402403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:09 compute-0 sudo[402403]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:09.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:09 compute-0 sudo[402429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:09 compute-0 sudo[402429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:09 compute-0 sudo[402429]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:09 compute-0 sudo[402454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:18:09 compute-0 sudo[402454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:09 compute-0 podman[402521]: 2025-12-06 08:18:09.534473574 +0000 UTC m=+0.035572898 container create b2f6000cef46289e9350a8002fead1e463c5a88743875d75d1a2bd8eea1059dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:18:09 compute-0 systemd[1]: Started libpod-conmon-b2f6000cef46289e9350a8002fead1e463c5a88743875d75d1a2bd8eea1059dc.scope.
Dec 06 08:18:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:18:09 compute-0 podman[402521]: 2025-12-06 08:18:09.595200763 +0000 UTC m=+0.096300117 container init b2f6000cef46289e9350a8002fead1e463c5a88743875d75d1a2bd8eea1059dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 08:18:09 compute-0 podman[402521]: 2025-12-06 08:18:09.601548536 +0000 UTC m=+0.102647870 container start b2f6000cef46289e9350a8002fead1e463c5a88743875d75d1a2bd8eea1059dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_yalow, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:18:09 compute-0 unruffled_yalow[402539]: 167 167
Dec 06 08:18:09 compute-0 podman[402521]: 2025-12-06 08:18:09.606311065 +0000 UTC m=+0.107410399 container attach b2f6000cef46289e9350a8002fead1e463c5a88743875d75d1a2bd8eea1059dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:18:09 compute-0 systemd[1]: libpod-b2f6000cef46289e9350a8002fead1e463c5a88743875d75d1a2bd8eea1059dc.scope: Deactivated successfully.
Dec 06 08:18:09 compute-0 podman[402521]: 2025-12-06 08:18:09.606932793 +0000 UTC m=+0.108032127 container died b2f6000cef46289e9350a8002fead1e463c5a88743875d75d1a2bd8eea1059dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_yalow, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:18:09 compute-0 podman[402521]: 2025-12-06 08:18:09.519338522 +0000 UTC m=+0.020437876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:18:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-812b406039ee274ac0c8b35a5f6bc0f312658719daf7bfca4c12c8d5814acf32-merged.mount: Deactivated successfully.
Dec 06 08:18:09 compute-0 podman[402521]: 2025-12-06 08:18:09.639700702 +0000 UTC m=+0.140800026 container remove b2f6000cef46289e9350a8002fead1e463c5a88743875d75d1a2bd8eea1059dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:18:09 compute-0 systemd[1]: libpod-conmon-b2f6000cef46289e9350a8002fead1e463c5a88743875d75d1a2bd8eea1059dc.scope: Deactivated successfully.
Dec 06 08:18:09 compute-0 nova_compute[251992]: 2025-12-06 08:18:09.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:09 compute-0 nova_compute[251992]: 2025-12-06 08:18:09.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:18:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:09 compute-0 podman[402562]: 2025-12-06 08:18:09.800944023 +0000 UTC m=+0.040533753 container create c787386db5d4f257a7b95fcf3f3c282cac19789279457cca35052ab2c75c9842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:18:09 compute-0 systemd[1]: Started libpod-conmon-c787386db5d4f257a7b95fcf3f3c282cac19789279457cca35052ab2c75c9842.scope.
Dec 06 08:18:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc7e80b0d0b3922bdf23e02ce093178a24fb513d19545e7f91941305a3bda2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc7e80b0d0b3922bdf23e02ce093178a24fb513d19545e7f91941305a3bda2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc7e80b0d0b3922bdf23e02ce093178a24fb513d19545e7f91941305a3bda2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbc7e80b0d0b3922bdf23e02ce093178a24fb513d19545e7f91941305a3bda2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:09 compute-0 podman[402562]: 2025-12-06 08:18:09.782584914 +0000 UTC m=+0.022174644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:18:09 compute-0 podman[402562]: 2025-12-06 08:18:09.883413893 +0000 UTC m=+0.123003633 container init c787386db5d4f257a7b95fcf3f3c282cac19789279457cca35052ab2c75c9842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:18:09 compute-0 podman[402562]: 2025-12-06 08:18:09.889071687 +0000 UTC m=+0.128661407 container start c787386db5d4f257a7b95fcf3f3c282cac19789279457cca35052ab2c75c9842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 08:18:09 compute-0 podman[402562]: 2025-12-06 08:18:09.891890443 +0000 UTC m=+0.131480183 container attach c787386db5d4f257a7b95fcf3f3c282cac19789279457cca35052ab2c75c9842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 08:18:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3771: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Dec 06 08:18:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:10.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]: {
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:     "0": [
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:         {
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             "devices": [
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "/dev/loop3"
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             ],
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             "lv_name": "ceph_lv0",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             "lv_size": "7511998464",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             "name": "ceph_lv0",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             "tags": {
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "ceph.cluster_name": "ceph",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "ceph.crush_device_class": "",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "ceph.encrypted": "0",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "ceph.osd_id": "0",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "ceph.type": "block",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:                 "ceph.vdo": "0"
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             },
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             "type": "block",
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:             "vg_name": "ceph_vg0"
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:         }
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]:     ]
Dec 06 08:18:10 compute-0 mystifying_sanderson[402578]: }
Dec 06 08:18:10 compute-0 systemd[1]: libpod-c787386db5d4f257a7b95fcf3f3c282cac19789279457cca35052ab2c75c9842.scope: Deactivated successfully.
Dec 06 08:18:10 compute-0 conmon[402578]: conmon c787386db5d4f257a7b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c787386db5d4f257a7b95fcf3f3c282cac19789279457cca35052ab2c75c9842.scope/container/memory.events
Dec 06 08:18:10 compute-0 podman[402562]: 2025-12-06 08:18:10.663184827 +0000 UTC m=+0.902774537 container died c787386db5d4f257a7b95fcf3f3c282cac19789279457cca35052ab2c75c9842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 08:18:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbc7e80b0d0b3922bdf23e02ce093178a24fb513d19545e7f91941305a3bda2b-merged.mount: Deactivated successfully.
Dec 06 08:18:10 compute-0 podman[402562]: 2025-12-06 08:18:10.713253137 +0000 UTC m=+0.952842857 container remove c787386db5d4f257a7b95fcf3f3c282cac19789279457cca35052ab2c75c9842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 08:18:10 compute-0 systemd[1]: libpod-conmon-c787386db5d4f257a7b95fcf3f3c282cac19789279457cca35052ab2c75c9842.scope: Deactivated successfully.
Dec 06 08:18:10 compute-0 sudo[402454]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:10 compute-0 sudo[402600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:10 compute-0 sudo[402600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:10 compute-0 sudo[402600]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:10 compute-0 sudo[402625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:18:10 compute-0 sudo[402625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:10 compute-0 sudo[402625]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:10 compute-0 sudo[402650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:10 compute-0 sudo[402650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:10 compute-0 sudo[402650]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:10 compute-0 sudo[402675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:18:10 compute-0 sudo[402675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:11.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:11 compute-0 podman[402741]: 2025-12-06 08:18:11.296602574 +0000 UTC m=+0.046008081 container create ba203b975c86f7a2a07fe1ee26edf51d641627ba9e5b01689562b55e82920ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 08:18:11 compute-0 systemd[1]: Started libpod-conmon-ba203b975c86f7a2a07fe1ee26edf51d641627ba9e5b01689562b55e82920ade.scope.
Dec 06 08:18:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:18:11 compute-0 podman[402741]: 2025-12-06 08:18:11.279552921 +0000 UTC m=+0.028958338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:18:11 compute-0 podman[402741]: 2025-12-06 08:18:11.378844458 +0000 UTC m=+0.128249865 container init ba203b975c86f7a2a07fe1ee26edf51d641627ba9e5b01689562b55e82920ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 08:18:11 compute-0 podman[402741]: 2025-12-06 08:18:11.387796321 +0000 UTC m=+0.137201718 container start ba203b975c86f7a2a07fe1ee26edf51d641627ba9e5b01689562b55e82920ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 08:18:11 compute-0 vigorous_mccarthy[402757]: 167 167
Dec 06 08:18:11 compute-0 podman[402741]: 2025-12-06 08:18:11.393194518 +0000 UTC m=+0.142599925 container attach ba203b975c86f7a2a07fe1ee26edf51d641627ba9e5b01689562b55e82920ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 08:18:11 compute-0 systemd[1]: libpod-ba203b975c86f7a2a07fe1ee26edf51d641627ba9e5b01689562b55e82920ade.scope: Deactivated successfully.
Dec 06 08:18:11 compute-0 podman[402741]: 2025-12-06 08:18:11.394331109 +0000 UTC m=+0.143736506 container died ba203b975c86f7a2a07fe1ee26edf51d641627ba9e5b01689562b55e82920ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 08:18:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2dc18c133060d919c79ed07f08668af3681dc622b79e7d279da2b621d094315-merged.mount: Deactivated successfully.
Dec 06 08:18:11 compute-0 podman[402741]: 2025-12-06 08:18:11.427552812 +0000 UTC m=+0.176958209 container remove ba203b975c86f7a2a07fe1ee26edf51d641627ba9e5b01689562b55e82920ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:18:11 compute-0 systemd[1]: libpod-conmon-ba203b975c86f7a2a07fe1ee26edf51d641627ba9e5b01689562b55e82920ade.scope: Deactivated successfully.
Dec 06 08:18:11 compute-0 podman[402781]: 2025-12-06 08:18:11.596561993 +0000 UTC m=+0.045739044 container create 6cb7e3b453529a9842d215b2bdc09251d0604238d6fb4aa0f42cc828341d1f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hugle, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 08:18:11 compute-0 systemd[1]: Started libpod-conmon-6cb7e3b453529a9842d215b2bdc09251d0604238d6fb4aa0f42cc828341d1f25.scope.
Dec 06 08:18:11 compute-0 podman[402781]: 2025-12-06 08:18:11.576552669 +0000 UTC m=+0.025729710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:18:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7848d8ab6ce57eb3a2f84b56195de8abe6ff8acb1f8258653ddfba6b0fa3a974/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7848d8ab6ce57eb3a2f84b56195de8abe6ff8acb1f8258653ddfba6b0fa3a974/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7848d8ab6ce57eb3a2f84b56195de8abe6ff8acb1f8258653ddfba6b0fa3a974/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7848d8ab6ce57eb3a2f84b56195de8abe6ff8acb1f8258653ddfba6b0fa3a974/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:18:11 compute-0 podman[402781]: 2025-12-06 08:18:11.693779344 +0000 UTC m=+0.142956385 container init 6cb7e3b453529a9842d215b2bdc09251d0604238d6fb4aa0f42cc828341d1f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 08:18:11 compute-0 podman[402781]: 2025-12-06 08:18:11.704453163 +0000 UTC m=+0.153630224 container start 6cb7e3b453529a9842d215b2bdc09251d0604238d6fb4aa0f42cc828341d1f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:18:11 compute-0 podman[402781]: 2025-12-06 08:18:11.711921946 +0000 UTC m=+0.161098997 container attach 6cb7e3b453529a9842d215b2bdc09251d0604238d6fb4aa0f42cc828341d1f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:18:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3772: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Dec 06 08:18:11 compute-0 ceph-mon[74339]: pgmap v3771: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Dec 06 08:18:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:12.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:12 compute-0 nice_hugle[402797]: {
Dec 06 08:18:12 compute-0 nice_hugle[402797]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:18:12 compute-0 nice_hugle[402797]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:18:12 compute-0 nice_hugle[402797]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:18:12 compute-0 nice_hugle[402797]:         "osd_id": 0,
Dec 06 08:18:12 compute-0 nice_hugle[402797]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:18:12 compute-0 nice_hugle[402797]:         "type": "bluestore"
Dec 06 08:18:12 compute-0 nice_hugle[402797]:     }
Dec 06 08:18:12 compute-0 nice_hugle[402797]: }
Dec 06 08:18:12 compute-0 systemd[1]: libpod-6cb7e3b453529a9842d215b2bdc09251d0604238d6fb4aa0f42cc828341d1f25.scope: Deactivated successfully.
Dec 06 08:18:12 compute-0 podman[402781]: 2025-12-06 08:18:12.546524919 +0000 UTC m=+0.995701960 container died 6cb7e3b453529a9842d215b2bdc09251d0604238d6fb4aa0f42cc828341d1f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 08:18:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-7848d8ab6ce57eb3a2f84b56195de8abe6ff8acb1f8258653ddfba6b0fa3a974-merged.mount: Deactivated successfully.
Dec 06 08:18:12 compute-0 podman[402781]: 2025-12-06 08:18:12.60433904 +0000 UTC m=+1.053516081 container remove 6cb7e3b453529a9842d215b2bdc09251d0604238d6fb4aa0f42cc828341d1f25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 08:18:12 compute-0 systemd[1]: libpod-conmon-6cb7e3b453529a9842d215b2bdc09251d0604238d6fb4aa0f42cc828341d1f25.scope: Deactivated successfully.
Dec 06 08:18:12 compute-0 sudo[402675]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:18:12 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:18:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:18:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:18:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d550bea3-c156-4395-b61f-35c27285ac72 does not exist
Dec 06 08:18:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f18eec15-dfae-48f3-b62d-521ff7ec212e does not exist
Dec 06 08:18:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 35e7efd3-7291-4bcb-8657-0ab88c8f3e70 does not exist
Dec 06 08:18:13 compute-0 ceph-mon[74339]: pgmap v3772: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Dec 06 08:18:13 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:18:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:18:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:18:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:18:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:18:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:18:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:18:13 compute-0 sudo[402833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:13 compute-0 sudo[402833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:13 compute-0 sudo[402833]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:13 compute-0 sudo[402858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:18:13 compute-0 sudo[402858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:13 compute-0 sudo[402858]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:13.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:13 compute-0 nova_compute[251992]: 2025-12-06 08:18:13.600 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:18:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 59K writes, 217K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s
                                           Cumulative WAL: 59K writes, 22K syncs, 2.61 writes per sync, written: 0.21 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3547 writes, 12K keys, 3547 commit groups, 1.0 writes per commit group, ingest: 11.63 MB, 0.02 MB/s
                                           Interval WAL: 3547 writes, 1513 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:18:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3773: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Dec 06 08:18:14 compute-0 nova_compute[251992]: 2025-12-06 08:18:14.087 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:18:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:14.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.676359) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094676531, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1200, "num_deletes": 256, "total_data_size": 1832364, "memory_usage": 1868176, "flush_reason": "Manual Compaction"}
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094690186, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 1799082, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75989, "largest_seqno": 77188, "table_properties": {"data_size": 1793451, "index_size": 2961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12596, "raw_average_key_size": 19, "raw_value_size": 1781830, "raw_average_value_size": 2810, "num_data_blocks": 131, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008995, "oldest_key_time": 1765008995, "file_creation_time": 1765009094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 13902 microseconds, and 6706 cpu microseconds.
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.690263) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 1799082 bytes OK
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.690294) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.692123) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.692140) EVENT_LOG_v1 {"time_micros": 1765009094692135, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.692159) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1826936, prev total WAL file size 1826936, number of live WAL files 2.
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.693206) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303330' seq:72057594037927935, type:22 .. '6C6F676D0033323831' seq:0, type:0; will stop at (end)
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(1756KB)], [170(11MB)]
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094693340, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 13651972, "oldest_snapshot_seqno": -1}
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 11050 keys, 13516498 bytes, temperature: kUnknown
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094791618, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 13516498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13446814, "index_size": 40963, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27653, "raw_key_size": 292344, "raw_average_key_size": 26, "raw_value_size": 13255159, "raw_average_value_size": 1199, "num_data_blocks": 1555, "num_entries": 11050, "num_filter_entries": 11050, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765009094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.791872) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13516498 bytes
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.793257) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.8 rd, 137.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 11.3 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(15.1) write-amplify(7.5) OK, records in: 11579, records dropped: 529 output_compression: NoCompression
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.793272) EVENT_LOG_v1 {"time_micros": 1765009094793265, "job": 106, "event": "compaction_finished", "compaction_time_micros": 98359, "compaction_time_cpu_micros": 64554, "output_level": 6, "num_output_files": 1, "total_output_size": 13516498, "num_input_records": 11579, "num_output_records": 11050, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094793665, "job": 106, "event": "table_file_deletion", "file_number": 172}
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009094795648, "job": 106, "event": "table_file_deletion", "file_number": 170}
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.692971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.795779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.795788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.795791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.795794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:14 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:14.795798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:15.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:15 compute-0 ceph-mon[74339]: pgmap v3773: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Dec 06 08:18:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3774: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Dec 06 08:18:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:16.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:17.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:17 compute-0 ceph-mon[74339]: pgmap v3774: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Dec 06 08:18:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4049142387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3775: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1009 KiB/s wr, 81 op/s
Dec 06 08:18:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:18.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:18 compute-0 podman[402887]: 2025-12-06 08:18:18.462990796 +0000 UTC m=+0.117940765 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller)
Dec 06 08:18:18 compute-0 nova_compute[251992]: 2025-12-06 08:18:18.602 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:18:18
Dec 06 08:18:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:18:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:18:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.control']
Dec 06 08:18:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:18:18 compute-0 ceph-mon[74339]: pgmap v3775: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1009 KiB/s wr, 81 op/s
Dec 06 08:18:19 compute-0 nova_compute[251992]: 2025-12-06 08:18:19.089 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:19.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:18:19.439 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=98, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=97) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:18:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:18:19.441 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:18:19 compute-0 nova_compute[251992]: 2025-12-06 08:18:19.470 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3776: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 944 KiB/s rd, 14 KiB/s wr, 30 op/s
Dec 06 08:18:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:20.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:21.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:21 compute-0 ceph-mon[74339]: pgmap v3776: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 944 KiB/s rd, 14 KiB/s wr, 30 op/s
Dec 06 08:18:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3777: 305 pgs: 305 active+clean; 313 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 100 op/s
Dec 06 08:18:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:22.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 08:18:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:23.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:23 compute-0 ceph-mon[74339]: pgmap v3777: 305 pgs: 305 active+clean; 313 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 100 op/s
Dec 06 08:18:23 compute-0 sudo[402917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:23 compute-0 sudo[402917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:23 compute-0 sudo[402917]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:23 compute-0 sudo[402942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:23 compute-0 sudo[402942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:23 compute-0 sudo[402942]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:23 compute-0 nova_compute[251992]: 2025-12-06 08:18:23.605 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:18:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:18:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3778: 305 pgs: 305 active+clean; 313 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 MiB/s wr, 69 op/s
Dec 06 08:18:24 compute-0 nova_compute[251992]: 2025-12-06 08:18:24.132 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:24.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:24 compute-0 podman[402967]: 2025-12-06 08:18:24.391925932 +0000 UTC m=+0.052234999 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:18:24 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:18:24.443 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '98'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:18:24 compute-0 podman[402968]: 2025-12-06 08:18:24.451031688 +0000 UTC m=+0.098412425 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:18:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:25.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:25 compute-0 ceph-mon[74339]: pgmap v3778: 305 pgs: 305 active+clean; 313 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 MiB/s wr, 69 op/s
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.270831) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105270855, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 336, "num_deletes": 251, "total_data_size": 176970, "memory_usage": 183704, "flush_reason": "Manual Compaction"}
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105273716, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 175405, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77189, "largest_seqno": 77524, "table_properties": {"data_size": 173268, "index_size": 300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5355, "raw_average_key_size": 18, "raw_value_size": 169094, "raw_average_value_size": 585, "num_data_blocks": 14, "num_entries": 289, "num_filter_entries": 289, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009095, "oldest_key_time": 1765009095, "file_creation_time": 1765009105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 2909 microseconds, and 950 cpu microseconds.
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.273742) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 175405 bytes OK
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.273755) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.274953) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.274964) EVENT_LOG_v1 {"time_micros": 1765009105274960, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.274972) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 174679, prev total WAL file size 174679, number of live WAL files 2.
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.275276) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(171KB)], [173(12MB)]
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105275298, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 13691903, "oldest_snapshot_seqno": -1}
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 10829 keys, 11755778 bytes, temperature: kUnknown
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105336364, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 11755778, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11688983, "index_size": 38586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27141, "raw_key_size": 288445, "raw_average_key_size": 26, "raw_value_size": 11502551, "raw_average_value_size": 1062, "num_data_blocks": 1447, "num_entries": 10829, "num_filter_entries": 10829, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765009105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.336902) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 11755778 bytes
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.338671) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 223.2 rd, 191.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.9 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(145.1) write-amplify(67.0) OK, records in: 11339, records dropped: 510 output_compression: NoCompression
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.338692) EVENT_LOG_v1 {"time_micros": 1765009105338683, "job": 108, "event": "compaction_finished", "compaction_time_micros": 61349, "compaction_time_cpu_micros": 29809, "output_level": 6, "num_output_files": 1, "total_output_size": 11755778, "num_input_records": 11339, "num_output_records": 10829, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105339353, "job": 108, "event": "table_file_deletion", "file_number": 175}
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009105343215, "job": 108, "event": "table_file_deletion", "file_number": 173}
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.275220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.343463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.343470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.343472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.343479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:25 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:18:25.343481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:18:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3779: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 85 op/s
Dec 06 08:18:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:26.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031750549623115578 of space, bias 1.0, pg target 0.9525164886934673 quantized to 32 (current 32)
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004619269670109808 of space, bias 1.0, pg target 1.3857809010329425 quantized to 32 (current 32)
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:18:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 08:18:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:27.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:27 compute-0 ceph-mon[74339]: pgmap v3779: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 85 op/s
Dec 06 08:18:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:18:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:18:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:18:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:18:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:18:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3780: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 85 op/s
Dec 06 08:18:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:28.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:28 compute-0 nova_compute[251992]: 2025-12-06 08:18:28.607 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:29 compute-0 nova_compute[251992]: 2025-12-06 08:18:29.134 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:29.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:29 compute-0 ceph-mon[74339]: pgmap v3780: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 85 op/s
Dec 06 08:18:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3781: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 85 op/s
Dec 06 08:18:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:30.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:31.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:31 compute-0 ceph-mon[74339]: pgmap v3781: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 85 op/s
Dec 06 08:18:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2296411019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:18:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2222883412' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:18:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3782: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 85 op/s
Dec 06 08:18:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:32.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:33.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:33 compute-0 ceph-mon[74339]: pgmap v3782: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.3 MiB/s wr, 85 op/s
Dec 06 08:18:33 compute-0 nova_compute[251992]: 2025-12-06 08:18:33.609 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3783: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 42 KiB/s rd, 1.2 MiB/s wr, 16 op/s
Dec 06 08:18:34 compute-0 nova_compute[251992]: 2025-12-06 08:18:34.136 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:34.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:34 compute-0 ceph-mon[74339]: pgmap v3783: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 42 KiB/s rd, 1.2 MiB/s wr, 16 op/s
Dec 06 08:18:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:35.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3784: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 58 op/s
Dec 06 08:18:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:36.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:37.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:37 compute-0 ceph-mon[74339]: pgmap v3784: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 58 op/s
Dec 06 08:18:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3785: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 75 op/s
Dec 06 08:18:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:38.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:38 compute-0 nova_compute[251992]: 2025-12-06 08:18:38.612 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:39 compute-0 nova_compute[251992]: 2025-12-06 08:18:39.138 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:39.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3786: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 74 op/s
Dec 06 08:18:39 compute-0 ceph-mon[74339]: pgmap v3785: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 75 op/s
Dec 06 08:18:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:40.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:41 compute-0 ceph-mon[74339]: pgmap v3786: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 74 op/s
Dec 06 08:18:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/541583109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:41.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3787: 305 pgs: 305 active+clean; 347 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 219 KiB/s wr, 80 op/s
Dec 06 08:18:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1695264830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:42.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:18:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:18:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:18:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:18:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:18:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:18:43 compute-0 ceph-mon[74339]: pgmap v3787: 305 pgs: 305 active+clean; 347 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 219 KiB/s wr, 80 op/s
Dec 06 08:18:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:43.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:43 compute-0 sudo[403014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:43 compute-0 sudo[403014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:43 compute-0 sudo[403014]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:43 compute-0 sudo[403039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:18:43 compute-0 sudo[403039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:18:43 compute-0 sudo[403039]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:43 compute-0 nova_compute[251992]: 2025-12-06 08:18:43.649 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3788: 305 pgs: 305 active+clean; 347 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 215 KiB/s wr, 79 op/s
Dec 06 08:18:44 compute-0 nova_compute[251992]: 2025-12-06 08:18:44.140 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4041479394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:44.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:18:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2974362627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:18:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:18:44 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2974362627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:18:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Dec 06 08:18:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Dec 06 08:18:45 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Dec 06 08:18:45 compute-0 ceph-mon[74339]: pgmap v3788: 305 pgs: 305 active+clean; 347 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 215 KiB/s wr, 79 op/s
Dec 06 08:18:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2974362627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:18:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2974362627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:18:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:45.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3790: 305 pgs: 305 active+clean; 334 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 616 KiB/s wr, 92 op/s
Dec 06 08:18:46 compute-0 ceph-mon[74339]: osdmap e422: 3 total, 3 up, 3 in
Dec 06 08:18:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:47 compute-0 ceph-mon[74339]: pgmap v3790: 305 pgs: 305 active+clean; 334 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 616 KiB/s wr, 92 op/s
Dec 06 08:18:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:47.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3791: 305 pgs: 305 active+clean; 330 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 308 KiB/s rd, 1.0 MiB/s wr, 72 op/s
Dec 06 08:18:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:48.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:48 compute-0 nova_compute[251992]: 2025-12-06 08:18:48.651 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:49 compute-0 nova_compute[251992]: 2025-12-06 08:18:49.142 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:49 compute-0 ceph-mon[74339]: pgmap v3791: 305 pgs: 305 active+clean; 330 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 308 KiB/s rd, 1.0 MiB/s wr, 72 op/s
Dec 06 08:18:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:49.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:49 compute-0 podman[403067]: 2025-12-06 08:18:49.421643465 +0000 UTC m=+0.079968813 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Dec 06 08:18:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3792: 305 pgs: 305 active+clean; 330 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 308 KiB/s rd, 1.0 MiB/s wr, 72 op/s
Dec 06 08:18:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1304841708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:18:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1304841708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:18:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/274917318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:50.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:51.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:51 compute-0 ceph-mon[74339]: pgmap v3792: 305 pgs: 305 active+clean; 330 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 308 KiB/s rd, 1.0 MiB/s wr, 72 op/s
Dec 06 08:18:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3793: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 457 KiB/s rd, 2.6 MiB/s wr, 166 op/s
Dec 06 08:18:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:52.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:18:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:53.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:18:53 compute-0 ceph-mon[74339]: pgmap v3793: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 457 KiB/s rd, 2.6 MiB/s wr, 166 op/s
Dec 06 08:18:53 compute-0 nova_compute[251992]: 2025-12-06 08:18:53.653 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3794: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 457 KiB/s rd, 2.6 MiB/s wr, 166 op/s
Dec 06 08:18:54 compute-0 nova_compute[251992]: 2025-12-06 08:18:54.144 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:54.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:54 compute-0 nova_compute[251992]: 2025-12-06 08:18:54.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Dec 06 08:18:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Dec 06 08:18:54 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Dec 06 08:18:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:55.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:55 compute-0 podman[403096]: 2025-12-06 08:18:55.400346212 +0000 UTC m=+0.051669464 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:18:55 compute-0 podman[403097]: 2025-12-06 08:18:55.412179384 +0000 UTC m=+0.063522027 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:18:55 compute-0 ceph-mon[74339]: pgmap v3794: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 457 KiB/s rd, 2.6 MiB/s wr, 166 op/s
Dec 06 08:18:55 compute-0 ceph-mon[74339]: osdmap e423: 3 total, 3 up, 3 in
Dec 06 08:18:55 compute-0 nova_compute[251992]: 2025-12-06 08:18:55.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:18:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3796: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 411 KiB/s rd, 2.2 MiB/s wr, 123 op/s
Dec 06 08:18:55 compute-0 nova_compute[251992]: 2025-12-06 08:18:55.959 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:55 compute-0 nova_compute[251992]: 2025-12-06 08:18:55.960 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:55 compute-0 nova_compute[251992]: 2025-12-06 08:18:55.960 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:55 compute-0 nova_compute[251992]: 2025-12-06 08:18:55.960 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:18:55 compute-0 nova_compute[251992]: 2025-12-06 08:18:55.961 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:18:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:56.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:18:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170588744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:56 compute-0 nova_compute[251992]: 2025-12-06 08:18:56.409 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:18:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4170588744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:57 compute-0 nova_compute[251992]: 2025-12-06 08:18:57.111 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:18:57 compute-0 nova_compute[251992]: 2025-12-06 08:18:57.112 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000d1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:18:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:57.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:57 compute-0 nova_compute[251992]: 2025-12-06 08:18:57.254 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:18:57 compute-0 nova_compute[251992]: 2025-12-06 08:18:57.255 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3911MB free_disk=20.897289276123047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:18:57 compute-0 nova_compute[251992]: 2025-12-06 08:18:57.256 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:18:57 compute-0 nova_compute[251992]: 2025-12-06 08:18:57.256 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:18:57 compute-0 nova_compute[251992]: 2025-12-06 08:18:57.431 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 93e8a72f-c895-445c-8c5b-aadf11ffd3b5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:18:57 compute-0 nova_compute[251992]: 2025-12-06 08:18:57.432 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:18:57 compute-0 nova_compute[251992]: 2025-12-06 08:18:57.432 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:18:57 compute-0 ceph-mon[74339]: pgmap v3796: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 411 KiB/s rd, 2.2 MiB/s wr, 123 op/s
Dec 06 08:18:57 compute-0 nova_compute[251992]: 2025-12-06 08:18:57.532 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:18:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3797: 305 pgs: 305 active+clean; 226 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 389 KiB/s rd, 1.8 MiB/s wr, 118 op/s
Dec 06 08:18:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:18:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257723763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:57 compute-0 nova_compute[251992]: 2025-12-06 08:18:57.986 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:18:57 compute-0 nova_compute[251992]: 2025-12-06 08:18:57.992 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:18:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:18:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:18:58.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:18:58 compute-0 nova_compute[251992]: 2025-12-06 08:18:58.444 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:18:58 compute-0 nova_compute[251992]: 2025-12-06 08:18:58.446 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:18:58 compute-0 nova_compute[251992]: 2025-12-06 08:18:58.446 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:18:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4257723763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:18:58 compute-0 nova_compute[251992]: 2025-12-06 08:18:58.656 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:18:59.061 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=99, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=98) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:18:59 compute-0 nova_compute[251992]: 2025-12-06 08:18:59.061 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:59 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:18:59.062 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:18:59 compute-0 nova_compute[251992]: 2025-12-06 08:18:59.146 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:18:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:18:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:18:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:18:59.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:18:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Dec 06 08:18:59 compute-0 ceph-mon[74339]: pgmap v3797: 305 pgs: 305 active+clean; 226 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 389 KiB/s rd, 1.8 MiB/s wr, 118 op/s
Dec 06 08:18:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Dec 06 08:18:59 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Dec 06 08:18:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:18:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3799: 305 pgs: 305 active+clean; 226 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 21 KiB/s wr, 22 op/s
Dec 06 08:19:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:00.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:00 compute-0 nova_compute[251992]: 2025-12-06 08:19:00.448 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:00 compute-0 nova_compute[251992]: 2025-12-06 08:19:00.448 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:19:00 compute-0 nova_compute[251992]: 2025-12-06 08:19:00.449 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:19:00 compute-0 ceph-mon[74339]: osdmap e424: 3 total, 3 up, 3 in
Dec 06 08:19:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/315860512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3436298347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/870741633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:01 compute-0 nova_compute[251992]: 2025-12-06 08:19:01.073 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:19:01 compute-0 nova_compute[251992]: 2025-12-06 08:19:01.074 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:19:01 compute-0 nova_compute[251992]: 2025-12-06 08:19:01.074 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:19:01 compute-0 nova_compute[251992]: 2025-12-06 08:19:01.074 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 93e8a72f-c895-445c-8c5b-aadf11ffd3b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:19:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:01.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:01 compute-0 ceph-mon[74339]: pgmap v3799: 305 pgs: 305 active+clean; 226 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 21 KiB/s wr, 22 op/s
Dec 06 08:19:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3800: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 50 KiB/s rd, 24 KiB/s wr, 71 op/s
Dec 06 08:19:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:02.065 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '99'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:19:02 compute-0 nova_compute[251992]: 2025-12-06 08:19:02.389 251996 DEBUG nova.compute.manager [req-7e7f49b5-7f1c-4070-9fd4-14f3f5e0f8b4 req-e34cde3c-dba1-4f24-bdcf-b3dbe87d22ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Received event network-changed-334292eb-9b24-4eb7-aa88-28824964eb71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:19:02 compute-0 nova_compute[251992]: 2025-12-06 08:19:02.390 251996 DEBUG nova.compute.manager [req-7e7f49b5-7f1c-4070-9fd4-14f3f5e0f8b4 req-e34cde3c-dba1-4f24-bdcf-b3dbe87d22ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Refreshing instance network info cache due to event network-changed-334292eb-9b24-4eb7-aa88-28824964eb71. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:19:02 compute-0 nova_compute[251992]: 2025-12-06 08:19:02.390 251996 DEBUG oslo_concurrency.lockutils [req-7e7f49b5-7f1c-4070-9fd4-14f3f5e0f8b4 req-e34cde3c-dba1-4f24-bdcf-b3dbe87d22ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:19:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:02.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:02 compute-0 nova_compute[251992]: 2025-12-06 08:19:02.739 251996 DEBUG oslo_concurrency.lockutils [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:02 compute-0 nova_compute[251992]: 2025-12-06 08:19:02.739 251996 DEBUG oslo_concurrency.lockutils [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:02 compute-0 nova_compute[251992]: 2025-12-06 08:19:02.740 251996 DEBUG oslo_concurrency.lockutils [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:02 compute-0 nova_compute[251992]: 2025-12-06 08:19:02.740 251996 DEBUG oslo_concurrency.lockutils [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:02 compute-0 nova_compute[251992]: 2025-12-06 08:19:02.740 251996 DEBUG oslo_concurrency.lockutils [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:02 compute-0 nova_compute[251992]: 2025-12-06 08:19:02.742 251996 INFO nova.compute.manager [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Terminating instance
Dec 06 08:19:02 compute-0 nova_compute[251992]: 2025-12-06 08:19:02.742 251996 DEBUG nova.compute.manager [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:19:03 compute-0 ceph-mon[74339]: pgmap v3800: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 50 KiB/s rd, 24 KiB/s wr, 71 op/s
Dec 06 08:19:03 compute-0 kernel: tap334292eb-9b (unregistering): left promiscuous mode
Dec 06 08:19:03 compute-0 NetworkManager[48965]: <info>  [1765009143.0402] device (tap334292eb-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.054 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:03 compute-0 ovn_controller[147168]: 2025-12-06T08:19:03Z|00799|binding|INFO|Releasing lport 334292eb-9b24-4eb7-aa88-28824964eb71 from this chassis (sb_readonly=0)
Dec 06 08:19:03 compute-0 ovn_controller[147168]: 2025-12-06T08:19:03Z|00800|binding|INFO|Setting lport 334292eb-9b24-4eb7-aa88-28824964eb71 down in Southbound
Dec 06 08:19:03 compute-0 ovn_controller[147168]: 2025-12-06T08:19:03Z|00801|binding|INFO|Removing iface tap334292eb-9b ovn-installed in OVS
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.057 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.078 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:03 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000d1.scope: Deactivated successfully.
Dec 06 08:19:03 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000d1.scope: Consumed 16.303s CPU time.
Dec 06 08:19:03 compute-0 systemd-machined[212986]: Machine qemu-94-instance-000000d1 terminated.
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.105 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:bd:a3 10.100.0.4'], port_security=['fa:16:3e:32:bd:a3 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '93e8a72f-c895-445c-8c5b-aadf11ffd3b5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e18f04-0697-4301-a26d-02786b558075', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '271978dc-b7c6-4e85-b0d0-f54bd930f144 c4946238-f586-46b2-b353-baad07fea15f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=626f221f-4e25-4acd-9bf5-4d267283bf54, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=334292eb-9b24-4eb7-aa88-28824964eb71) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.107 158118 INFO neutron.agent.ovn.metadata.agent [-] Port 334292eb-9b24-4eb7-aa88-28824964eb71 in datapath 14e18f04-0697-4301-a26d-02786b558075 unbound from our chassis
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.108 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14e18f04-0697-4301-a26d-02786b558075, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.111 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b79b61c0-70d0-47bd-89a1-ab599ce23b84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.111 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14e18f04-0697-4301-a26d-02786b558075 namespace which is not needed anymore
Dec 06 08:19:03 compute-0 NetworkManager[48965]: <info>  [1765009143.1641] manager: (tap334292eb-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/380)
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.165 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.169 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.179 251996 INFO nova.virt.libvirt.driver [-] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Instance destroyed successfully.
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.180 251996 DEBUG nova.objects.instance [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'resources' on Instance uuid 93e8a72f-c895-445c-8c5b-aadf11ffd3b5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.222 251996 DEBUG nova.virt.libvirt.vif [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:17:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-360306683',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-access_point-360306683',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-acc',id=209,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIzGPaWk14xbtW5sHv4GXDiMSG/C5MZExlVtwgnjxxcupj/ss21DUqD2dDVK61uPoLLLdziVAI6yXqU7ErexaGkX9er10rHIWMNub/drUqFoT5/Q97yMArULe40gKdjSAw==',key_name='tempest-TestSecurityGroupsBasicOps-853526666',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:17:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-8jdmlew3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:17:50Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=93e8a72f-c895-445c-8c5b-aadf11ffd3b5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.223 251996 DEBUG nova.network.os_vif_util [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.223 251996 DEBUG nova.network.os_vif_util [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:bd:a3,bridge_name='br-int',has_traffic_filtering=True,id=334292eb-9b24-4eb7-aa88-28824964eb71,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap334292eb-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.224 251996 DEBUG os_vif [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:bd:a3,bridge_name='br-int',has_traffic_filtering=True,id=334292eb-9b24-4eb7-aa88-28824964eb71,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap334292eb-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.226 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.226 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap334292eb-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.229 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.232 251996 INFO os_vif [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:bd:a3,bridge_name='br-int',has_traffic_filtering=True,id=334292eb-9b24-4eb7-aa88-28824964eb71,network=Network(14e18f04-0697-4301-a26d-02786b558075),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap334292eb-9b')
Dec 06 08:19:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:03.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:03 compute-0 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[401859]: [NOTICE]   (401863) : haproxy version is 2.8.14-c23fe91
Dec 06 08:19:03 compute-0 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[401859]: [NOTICE]   (401863) : path to executable is /usr/sbin/haproxy
Dec 06 08:19:03 compute-0 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[401859]: [WARNING]  (401863) : Exiting Master process...
Dec 06 08:19:03 compute-0 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[401859]: [ALERT]    (401863) : Current worker (401865) exited with code 143 (Terminated)
Dec 06 08:19:03 compute-0 neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075[401859]: [WARNING]  (401863) : All workers exited. Exiting... (0)
Dec 06 08:19:03 compute-0 systemd[1]: libpod-37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5.scope: Deactivated successfully.
Dec 06 08:19:03 compute-0 podman[403213]: 2025-12-06 08:19:03.254818907 +0000 UTC m=+0.044764607 container died 37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:19:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5-userdata-shm.mount: Deactivated successfully.
Dec 06 08:19:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-43a02e412d3e152c3f41c4f7f4ac3f662c6af135971d0ef53b4cb8ec8cee62d4-merged.mount: Deactivated successfully.
Dec 06 08:19:03 compute-0 podman[403213]: 2025-12-06 08:19:03.292369548 +0000 UTC m=+0.082315248 container cleanup 37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 08:19:03 compute-0 systemd[1]: libpod-conmon-37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5.scope: Deactivated successfully.
Dec 06 08:19:03 compute-0 podman[403262]: 2025-12-06 08:19:03.346328964 +0000 UTC m=+0.035598189 container remove 37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.351 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2203e729-ba12-44c3-9128-e9173848c23e]: (4, ('Sat Dec  6 08:19:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075 (37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5)\n37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5\nSat Dec  6 08:19:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14e18f04-0697-4301-a26d-02786b558075 (37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5)\n37619c6b7c1031753ceaea0d65f221033c1475cd92d18d29a254388f857c2de5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.353 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6fcc54-2e5d-4a4b-a346-2d85fdbe5f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.354 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e18f04-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.355 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:03 compute-0 kernel: tap14e18f04-00: left promiscuous mode
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.373 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.377 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0c697861-c8cc-44eb-96f2-10ca137455ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.399 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[edbc17a0-f287-4da2-a3f3-58f8f55ec879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.401 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4e3805-0486-4edc-b318-ebdc7cf067e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.417 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[85471c05-4d75-4051-bb81-cdc17458f9c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 934279, 'reachable_time': 24207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403278, 'error': None, 'target': 'ovnmeta-14e18f04-0697-4301-a26d-02786b558075', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:03 compute-0 systemd[1]: run-netns-ovnmeta\x2d14e18f04\x2d0697\x2d4301\x2da26d\x2d02786b558075.mount: Deactivated successfully.
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.423 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14e18f04-0697-4301-a26d-02786b558075 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.423 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[10faaa9e-94aa-4a55-b6b5-d20b0da27a18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.657 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:03 compute-0 sudo[403280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:03 compute-0 sudo[403280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:03 compute-0 sudo[403280]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.710 251996 INFO nova.virt.libvirt.driver [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Deleting instance files /var/lib/nova/instances/93e8a72f-c895-445c-8c5b-aadf11ffd3b5_del
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.711 251996 INFO nova.virt.libvirt.driver [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Deletion of /var/lib/nova/instances/93e8a72f-c895-445c-8c5b-aadf11ffd3b5_del complete
Dec 06 08:19:03 compute-0 sudo[403305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:03 compute-0 sudo[403305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:03 compute-0 sudo[403305]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.803 251996 DEBUG nova.compute.manager [req-0181a171-51fb-4344-aaa1-6b052aaa89f4 req-65d364e3-2eab-4c41-93ef-a784b5bdbef5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Received event network-vif-unplugged-334292eb-9b24-4eb7-aa88-28824964eb71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.804 251996 DEBUG oslo_concurrency.lockutils [req-0181a171-51fb-4344-aaa1-6b052aaa89f4 req-65d364e3-2eab-4c41-93ef-a784b5bdbef5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.804 251996 DEBUG oslo_concurrency.lockutils [req-0181a171-51fb-4344-aaa1-6b052aaa89f4 req-65d364e3-2eab-4c41-93ef-a784b5bdbef5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.804 251996 DEBUG oslo_concurrency.lockutils [req-0181a171-51fb-4344-aaa1-6b052aaa89f4 req-65d364e3-2eab-4c41-93ef-a784b5bdbef5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.805 251996 DEBUG nova.compute.manager [req-0181a171-51fb-4344-aaa1-6b052aaa89f4 req-65d364e3-2eab-4c41-93ef-a784b5bdbef5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] No waiting events found dispatching network-vif-unplugged-334292eb-9b24-4eb7-aa88-28824964eb71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.805 251996 DEBUG nova.compute.manager [req-0181a171-51fb-4344-aaa1-6b052aaa89f4 req-65d364e3-2eab-4c41-93ef-a784b5bdbef5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Received event network-vif-unplugged-334292eb-9b24-4eb7-aa88-28824964eb71 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.817 251996 INFO nova.compute.manager [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Took 1.07 seconds to destroy the instance on the hypervisor.
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.817 251996 DEBUG oslo.service.loopingcall [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.818 251996 DEBUG nova.compute.manager [-] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:19:03 compute-0 nova_compute[251992]: 2025-12-06 08:19:03.818 251996 DEBUG nova.network.neutron [-] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.894 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.895 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:03.895 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3801: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 21 KiB/s wr, 61 op/s
Dec 06 08:19:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:04.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Dec 06 08:19:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Dec 06 08:19:04 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Dec 06 08:19:05 compute-0 ceph-mon[74339]: pgmap v3801: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 21 KiB/s wr, 61 op/s
Dec 06 08:19:05 compute-0 ceph-mon[74339]: osdmap e425: 3 total, 3 up, 3 in
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.085 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Updating instance_info_cache with network_info: [{"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.109 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.110 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.110 251996 DEBUG oslo_concurrency.lockutils [req-7e7f49b5-7f1c-4070-9fd4-14f3f5e0f8b4 req-e34cde3c-dba1-4f24-bdcf-b3dbe87d22ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.111 251996 DEBUG nova.network.neutron [req-7e7f49b5-7f1c-4070-9fd4-14f3f5e0f8b4 req-e34cde3c-dba1-4f24-bdcf-b3dbe87d22ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Refreshing network info cache for port 334292eb-9b24-4eb7-aa88-28824964eb71 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.113 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.115 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.116 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:05.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.366 251996 DEBUG nova.network.neutron [-] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.388 251996 INFO nova.compute.manager [-] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Took 1.57 seconds to deallocate network for instance.
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.435 251996 DEBUG oslo_concurrency.lockutils [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.435 251996 DEBUG oslo_concurrency.lockutils [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.470 251996 DEBUG nova.compute.manager [req-cd431a57-4b35-42b7-a609-b0038d645fa6 req-34831d22-2418-45a9-85f4-f035a769a247 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Received event network-vif-deleted-334292eb-9b24-4eb7-aa88-28824964eb71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.516 251996 DEBUG oslo_concurrency.processutils [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3803: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 4.9 KiB/s wr, 77 op/s
Dec 06 08:19:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:19:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1645654584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.964 251996 DEBUG nova.compute.manager [req-d8a95dc7-15e7-42e9-9a22-a808d433e35b req-118104a0-8ebc-4e4b-8e93-0809f726a459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Received event network-vif-plugged-334292eb-9b24-4eb7-aa88-28824964eb71 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.965 251996 DEBUG oslo_concurrency.lockutils [req-d8a95dc7-15e7-42e9-9a22-a808d433e35b req-118104a0-8ebc-4e4b-8e93-0809f726a459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.965 251996 DEBUG oslo_concurrency.lockutils [req-d8a95dc7-15e7-42e9-9a22-a808d433e35b req-118104a0-8ebc-4e4b-8e93-0809f726a459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.966 251996 DEBUG oslo_concurrency.lockutils [req-d8a95dc7-15e7-42e9-9a22-a808d433e35b req-118104a0-8ebc-4e4b-8e93-0809f726a459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.966 251996 DEBUG nova.compute.manager [req-d8a95dc7-15e7-42e9-9a22-a808d433e35b req-118104a0-8ebc-4e4b-8e93-0809f726a459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] No waiting events found dispatching network-vif-plugged-334292eb-9b24-4eb7-aa88-28824964eb71 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.966 251996 WARNING nova.compute.manager [req-d8a95dc7-15e7-42e9-9a22-a808d433e35b req-118104a0-8ebc-4e4b-8e93-0809f726a459 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Received unexpected event network-vif-plugged-334292eb-9b24-4eb7-aa88-28824964eb71 for instance with vm_state deleted and task_state None.
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.967 251996 DEBUG oslo_concurrency.processutils [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:05 compute-0 nova_compute[251992]: 2025-12-06 08:19:05.974 251996 DEBUG nova.compute.provider_tree [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:19:06 compute-0 nova_compute[251992]: 2025-12-06 08:19:06.030 251996 DEBUG nova.scheduler.client.report [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:19:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1645654584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:06 compute-0 nova_compute[251992]: 2025-12-06 08:19:06.059 251996 DEBUG oslo_concurrency.lockutils [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:06 compute-0 nova_compute[251992]: 2025-12-06 08:19:06.107 251996 INFO nova.scheduler.client.report [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Deleted allocations for instance 93e8a72f-c895-445c-8c5b-aadf11ffd3b5
Dec 06 08:19:06 compute-0 nova_compute[251992]: 2025-12-06 08:19:06.197 251996 DEBUG oslo_concurrency.lockutils [None req-673573ec-9f33-4de3-8e21-b6883c04fc87 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "93e8a72f-c895-445c-8c5b-aadf11ffd3b5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:06.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:07 compute-0 ceph-mon[74339]: pgmap v3803: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 4.9 KiB/s wr, 77 op/s
Dec 06 08:19:07 compute-0 nova_compute[251992]: 2025-12-06 08:19:07.163 251996 DEBUG nova.network.neutron [req-7e7f49b5-7f1c-4070-9fd4-14f3f5e0f8b4 req-e34cde3c-dba1-4f24-bdcf-b3dbe87d22ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Updated VIF entry in instance network info cache for port 334292eb-9b24-4eb7-aa88-28824964eb71. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:19:07 compute-0 nova_compute[251992]: 2025-12-06 08:19:07.164 251996 DEBUG nova.network.neutron [req-7e7f49b5-7f1c-4070-9fd4-14f3f5e0f8b4 req-e34cde3c-dba1-4f24-bdcf-b3dbe87d22ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Updating instance_info_cache with network_info: [{"id": "334292eb-9b24-4eb7-aa88-28824964eb71", "address": "fa:16:3e:32:bd:a3", "network": {"id": "14e18f04-0697-4301-a26d-02786b558075", "bridge": "br-int", "label": "tempest-network-smoke--446330177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap334292eb-9b", "ovs_interfaceid": "334292eb-9b24-4eb7-aa88-28824964eb71", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:19:07 compute-0 nova_compute[251992]: 2025-12-06 08:19:07.232 251996 DEBUG oslo_concurrency.lockutils [req-7e7f49b5-7f1c-4070-9fd4-14f3f5e0f8b4 req-e34cde3c-dba1-4f24-bdcf-b3dbe87d22ca 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-93e8a72f-c895-445c-8c5b-aadf11ffd3b5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:19:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:07.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3804: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 60 KiB/s rd, 4.7 KiB/s wr, 86 op/s
Dec 06 08:19:08 compute-0 nova_compute[251992]: 2025-12-06 08:19:08.229 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:08.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:08 compute-0 nova_compute[251992]: 2025-12-06 08:19:08.659 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:09 compute-0 ceph-mon[74339]: pgmap v3804: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 60 KiB/s rd, 4.7 KiB/s wr, 86 op/s
Dec 06 08:19:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:09.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3805: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 3.9 KiB/s wr, 72 op/s
Dec 06 08:19:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/570818404' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:19:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/570818404' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:19:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:10.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:10 compute-0 nova_compute[251992]: 2025-12-06 08:19:10.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:10 compute-0 nova_compute[251992]: 2025-12-06 08:19:10.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:11 compute-0 nova_compute[251992]: 2025-12-06 08:19:11.062 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:11 compute-0 ceph-mon[74339]: pgmap v3805: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 3.9 KiB/s wr, 72 op/s
Dec 06 08:19:11 compute-0 nova_compute[251992]: 2025-12-06 08:19:11.249 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:11.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:11 compute-0 nova_compute[251992]: 2025-12-06 08:19:11.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:11 compute-0 nova_compute[251992]: 2025-12-06 08:19:11.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:19:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3806: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Dec 06 08:19:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:12.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:19:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:19:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:19:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:19:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:19:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:19:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:13.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:13 compute-0 nova_compute[251992]: 2025-12-06 08:19:13.273 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:13 compute-0 ceph-mon[74339]: pgmap v3806: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Dec 06 08:19:13 compute-0 sudo[403358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:13 compute-0 sudo[403358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:13 compute-0 sudo[403358]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:13 compute-0 sudo[403383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:19:13 compute-0 sudo[403383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:13 compute-0 sudo[403383]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:13 compute-0 sudo[403408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:13 compute-0 sudo[403408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:13 compute-0 sudo[403408]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:13 compute-0 nova_compute[251992]: 2025-12-06 08:19:13.662 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:13 compute-0 sudo[403433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:19:13 compute-0 sudo[403433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3807: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Dec 06 08:19:14 compute-0 sudo[403433]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:19:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:19:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:19:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:19:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:19:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:19:14 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 847b975d-f764-4847-a2a2-710ce9d8ac7f does not exist
Dec 06 08:19:14 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 76fbd3b8-3fad-4140-a1ed-50ebb9acb813 does not exist
Dec 06 08:19:14 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d00ba7a0-dda6-4113-8cac-0ee93479bec7 does not exist
Dec 06 08:19:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:19:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:19:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:19:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:19:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:19:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:19:14 compute-0 sudo[403489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:14 compute-0 sudo[403489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:14 compute-0 sudo[403489]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:14.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:19:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:19:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:19:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:19:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:19:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:19:14 compute-0 sudo[403514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:19:14 compute-0 sudo[403514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:14 compute-0 sudo[403514]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:14 compute-0 sudo[403539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:14 compute-0 sudo[403539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:14 compute-0 sudo[403539]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:14 compute-0 sudo[403564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:19:14 compute-0 sudo[403564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:14 compute-0 podman[403629]: 2025-12-06 08:19:14.91467391 +0000 UTC m=+0.037236592 container create 2ede3320d547db6ac57163c0836db95c5be9ab3c4789c8ce8499a055ddcd1dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 08:19:14 compute-0 systemd[1]: Started libpod-conmon-2ede3320d547db6ac57163c0836db95c5be9ab3c4789c8ce8499a055ddcd1dee.scope.
Dec 06 08:19:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:19:14 compute-0 podman[403629]: 2025-12-06 08:19:14.897601176 +0000 UTC m=+0.020163878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:19:14 compute-0 podman[403629]: 2025-12-06 08:19:14.996583106 +0000 UTC m=+0.119145808 container init 2ede3320d547db6ac57163c0836db95c5be9ab3c4789c8ce8499a055ddcd1dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 08:19:15 compute-0 podman[403629]: 2025-12-06 08:19:15.002179398 +0000 UTC m=+0.124742080 container start 2ede3320d547db6ac57163c0836db95c5be9ab3c4789c8ce8499a055ddcd1dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:19:15 compute-0 podman[403629]: 2025-12-06 08:19:15.004777048 +0000 UTC m=+0.127339720 container attach 2ede3320d547db6ac57163c0836db95c5be9ab3c4789c8ce8499a055ddcd1dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 06 08:19:15 compute-0 systemd[1]: libpod-2ede3320d547db6ac57163c0836db95c5be9ab3c4789c8ce8499a055ddcd1dee.scope: Deactivated successfully.
Dec 06 08:19:15 compute-0 peaceful_booth[403645]: 167 167
Dec 06 08:19:15 compute-0 conmon[403645]: conmon 2ede3320d547db6ac571 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ede3320d547db6ac57163c0836db95c5be9ab3c4789c8ce8499a055ddcd1dee.scope/container/memory.events
Dec 06 08:19:15 compute-0 podman[403629]: 2025-12-06 08:19:15.007648037 +0000 UTC m=+0.130210719 container died 2ede3320d547db6ac57163c0836db95c5be9ab3c4789c8ce8499a055ddcd1dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_booth, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 08:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-135477de43b665e0f1070b4d5e4b94703c6d3d16fff7ab81d903ad97a35d6eb2-merged.mount: Deactivated successfully.
Dec 06 08:19:15 compute-0 podman[403629]: 2025-12-06 08:19:15.039527762 +0000 UTC m=+0.162090444 container remove 2ede3320d547db6ac57163c0836db95c5be9ab3c4789c8ce8499a055ddcd1dee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 08:19:15 compute-0 systemd[1]: libpod-conmon-2ede3320d547db6ac57163c0836db95c5be9ab3c4789c8ce8499a055ddcd1dee.scope: Deactivated successfully.
Dec 06 08:19:15 compute-0 podman[403670]: 2025-12-06 08:19:15.215437071 +0000 UTC m=+0.058488230 container create defed356a41e929f6451243694e94db019a4bc3df4c131cec37626708192609e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lichterman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:19:15 compute-0 systemd[1]: Started libpod-conmon-defed356a41e929f6451243694e94db019a4bc3df4c131cec37626708192609e.scope.
Dec 06 08:19:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:15.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:15 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa74142cb1386b7c2529d41b370490ad3d96a1b4bd1140542439461723404b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa74142cb1386b7c2529d41b370490ad3d96a1b4bd1140542439461723404b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa74142cb1386b7c2529d41b370490ad3d96a1b4bd1140542439461723404b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa74142cb1386b7c2529d41b370490ad3d96a1b4bd1140542439461723404b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa74142cb1386b7c2529d41b370490ad3d96a1b4bd1140542439461723404b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:15 compute-0 podman[403670]: 2025-12-06 08:19:15.1962595 +0000 UTC m=+0.039310679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:19:15 compute-0 podman[403670]: 2025-12-06 08:19:15.296777161 +0000 UTC m=+0.139828340 container init defed356a41e929f6451243694e94db019a4bc3df4c131cec37626708192609e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:19:15 compute-0 podman[403670]: 2025-12-06 08:19:15.305623831 +0000 UTC m=+0.148674990 container start defed356a41e929f6451243694e94db019a4bc3df4c131cec37626708192609e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 08:19:15 compute-0 podman[403670]: 2025-12-06 08:19:15.310142423 +0000 UTC m=+0.153193632 container attach defed356a41e929f6451243694e94db019a4bc3df4c131cec37626708192609e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lichterman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:19:15 compute-0 ceph-mon[74339]: pgmap v3807: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Dec 06 08:19:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3808: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 KiB/s wr, 37 op/s
Dec 06 08:19:16 compute-0 festive_lichterman[403686]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:19:16 compute-0 festive_lichterman[403686]: --> relative data size: 1.0
Dec 06 08:19:16 compute-0 festive_lichterman[403686]: --> All data devices are unavailable
Dec 06 08:19:16 compute-0 systemd[1]: libpod-defed356a41e929f6451243694e94db019a4bc3df4c131cec37626708192609e.scope: Deactivated successfully.
Dec 06 08:19:16 compute-0 podman[403670]: 2025-12-06 08:19:16.121060953 +0000 UTC m=+0.964112112 container died defed356a41e929f6451243694e94db019a4bc3df4c131cec37626708192609e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 08:19:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aa74142cb1386b7c2529d41b370490ad3d96a1b4bd1140542439461723404b9-merged.mount: Deactivated successfully.
Dec 06 08:19:16 compute-0 podman[403670]: 2025-12-06 08:19:16.173031006 +0000 UTC m=+1.016082165 container remove defed356a41e929f6451243694e94db019a4bc3df4c131cec37626708192609e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lichterman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:19:16 compute-0 systemd[1]: libpod-conmon-defed356a41e929f6451243694e94db019a4bc3df4c131cec37626708192609e.scope: Deactivated successfully.
Dec 06 08:19:16 compute-0 sudo[403564]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:16 compute-0 sudo[403712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:16 compute-0 sudo[403712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:16 compute-0 sudo[403712]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:16 compute-0 sudo[403737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:19:16 compute-0 sudo[403737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:16 compute-0 sudo[403737]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:16 compute-0 sudo[403762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:16 compute-0 sudo[403762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:16 compute-0 sudo[403762]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:16.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:16 compute-0 sudo[403787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:19:16 compute-0 sudo[403787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:16 compute-0 podman[403853]: 2025-12-06 08:19:16.720652772 +0000 UTC m=+0.035925847 container create d7d101acd9b27cbb0f1a692a61d49bafaffadb5360c5a1bfaf6281fac5b34b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_banach, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:19:16 compute-0 systemd[1]: Started libpod-conmon-d7d101acd9b27cbb0f1a692a61d49bafaffadb5360c5a1bfaf6281fac5b34b92.scope.
Dec 06 08:19:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:19:16 compute-0 podman[403853]: 2025-12-06 08:19:16.791844665 +0000 UTC m=+0.107117760 container init d7d101acd9b27cbb0f1a692a61d49bafaffadb5360c5a1bfaf6281fac5b34b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 08:19:16 compute-0 podman[403853]: 2025-12-06 08:19:16.79787655 +0000 UTC m=+0.113149615 container start d7d101acd9b27cbb0f1a692a61d49bafaffadb5360c5a1bfaf6281fac5b34b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:19:16 compute-0 podman[403853]: 2025-12-06 08:19:16.80081265 +0000 UTC m=+0.116085725 container attach d7d101acd9b27cbb0f1a692a61d49bafaffadb5360c5a1bfaf6281fac5b34b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 08:19:16 compute-0 podman[403853]: 2025-12-06 08:19:16.705688576 +0000 UTC m=+0.020961671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:19:16 compute-0 admiring_banach[403870]: 167 167
Dec 06 08:19:16 compute-0 systemd[1]: libpod-d7d101acd9b27cbb0f1a692a61d49bafaffadb5360c5a1bfaf6281fac5b34b92.scope: Deactivated successfully.
Dec 06 08:19:16 compute-0 podman[403853]: 2025-12-06 08:19:16.803236755 +0000 UTC m=+0.118509830 container died d7d101acd9b27cbb0f1a692a61d49bafaffadb5360c5a1bfaf6281fac5b34b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:19:16 compute-0 ceph-mon[74339]: pgmap v3808: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 KiB/s wr, 37 op/s
Dec 06 08:19:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-55578e107997fa3260eb455e3ccbeda6d89093226fa27e023df9226bbf976527-merged.mount: Deactivated successfully.
Dec 06 08:19:16 compute-0 podman[403853]: 2025-12-06 08:19:16.83725266 +0000 UTC m=+0.152525735 container remove d7d101acd9b27cbb0f1a692a61d49bafaffadb5360c5a1bfaf6281fac5b34b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:19:16 compute-0 systemd[1]: libpod-conmon-d7d101acd9b27cbb0f1a692a61d49bafaffadb5360c5a1bfaf6281fac5b34b92.scope: Deactivated successfully.
Dec 06 08:19:16 compute-0 podman[403896]: 2025-12-06 08:19:16.9977549 +0000 UTC m=+0.045074016 container create 71b70c1bcf27cfa4f3e6c929e9e1c99148e146eb3adc850d0b0d4cc403dee2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec 06 08:19:17 compute-0 systemd[1]: Started libpod-conmon-71b70c1bcf27cfa4f3e6c929e9e1c99148e146eb3adc850d0b0d4cc403dee2ec.scope.
Dec 06 08:19:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:19:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862e7d5caeceb49f1022d95a72b500bdc216b45880d9e82ce4d7ed69a0b77993/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862e7d5caeceb49f1022d95a72b500bdc216b45880d9e82ce4d7ed69a0b77993/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862e7d5caeceb49f1022d95a72b500bdc216b45880d9e82ce4d7ed69a0b77993/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862e7d5caeceb49f1022d95a72b500bdc216b45880d9e82ce4d7ed69a0b77993/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:17 compute-0 podman[403896]: 2025-12-06 08:19:17.065195452 +0000 UTC m=+0.112514578 container init 71b70c1bcf27cfa4f3e6c929e9e1c99148e146eb3adc850d0b0d4cc403dee2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 08:19:17 compute-0 podman[403896]: 2025-12-06 08:19:17.070802154 +0000 UTC m=+0.118121270 container start 71b70c1bcf27cfa4f3e6c929e9e1c99148e146eb3adc850d0b0d4cc403dee2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 08:19:17 compute-0 podman[403896]: 2025-12-06 08:19:16.979419961 +0000 UTC m=+0.026739097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:19:17 compute-0 podman[403896]: 2025-12-06 08:19:17.073924669 +0000 UTC m=+0.121243785 container attach 71b70c1bcf27cfa4f3e6c929e9e1c99148e146eb3adc850d0b0d4cc403dee2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ramanujan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:19:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:17.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]: {
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:     "0": [
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:         {
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             "devices": [
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "/dev/loop3"
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             ],
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             "lv_name": "ceph_lv0",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             "lv_size": "7511998464",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             "name": "ceph_lv0",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             "tags": {
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "ceph.cluster_name": "ceph",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "ceph.crush_device_class": "",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "ceph.encrypted": "0",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "ceph.osd_id": "0",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "ceph.type": "block",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:                 "ceph.vdo": "0"
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             },
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             "type": "block",
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:             "vg_name": "ceph_vg0"
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:         }
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]:     ]
Dec 06 08:19:17 compute-0 clever_ramanujan[403913]: }
Dec 06 08:19:17 compute-0 systemd[1]: libpod-71b70c1bcf27cfa4f3e6c929e9e1c99148e146eb3adc850d0b0d4cc403dee2ec.scope: Deactivated successfully.
Dec 06 08:19:17 compute-0 podman[403896]: 2025-12-06 08:19:17.822472374 +0000 UTC m=+0.869791490 container died 71b70c1bcf27cfa4f3e6c929e9e1c99148e146eb3adc850d0b0d4cc403dee2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ramanujan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:19:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-862e7d5caeceb49f1022d95a72b500bdc216b45880d9e82ce4d7ed69a0b77993-merged.mount: Deactivated successfully.
Dec 06 08:19:17 compute-0 podman[403896]: 2025-12-06 08:19:17.882390122 +0000 UTC m=+0.929709238 container remove 71b70c1bcf27cfa4f3e6c929e9e1c99148e146eb3adc850d0b0d4cc403dee2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_ramanujan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 08:19:17 compute-0 systemd[1]: libpod-conmon-71b70c1bcf27cfa4f3e6c929e9e1c99148e146eb3adc850d0b0d4cc403dee2ec.scope: Deactivated successfully.
Dec 06 08:19:17 compute-0 sudo[403787]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3809: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 15 op/s
Dec 06 08:19:17 compute-0 sudo[403935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:17 compute-0 sudo[403935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:17 compute-0 sudo[403935]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:18 compute-0 sudo[403960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:19:18 compute-0 sudo[403960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:18 compute-0 sudo[403960]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:18 compute-0 sudo[403985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:18 compute-0 sudo[403985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:18 compute-0 sudo[403985]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:18 compute-0 sudo[404010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:19:18 compute-0 sudo[404010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:18 compute-0 nova_compute[251992]: 2025-12-06 08:19:18.178 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009143.1768687, 93e8a72f-c895-445c-8c5b-aadf11ffd3b5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:19:18 compute-0 nova_compute[251992]: 2025-12-06 08:19:18.179 251996 INFO nova.compute.manager [-] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] VM Stopped (Lifecycle Event)
Dec 06 08:19:18 compute-0 nova_compute[251992]: 2025-12-06 08:19:18.313 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:18 compute-0 nova_compute[251992]: 2025-12-06 08:19:18.388 251996 DEBUG nova.compute.manager [None req-2312363c-4eee-43fb-8d0f-d146f44d352b - - - - - -] [instance: 93e8a72f-c895-445c-8c5b-aadf11ffd3b5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:19:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:18.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:18 compute-0 podman[404076]: 2025-12-06 08:19:18.464680761 +0000 UTC m=+0.039051792 container create 9c459b6f3674331e4b4b75a4306ddf70ddd3f1e043d647d3ef5205c6673a5426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ellis, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 08:19:18 compute-0 systemd[1]: Started libpod-conmon-9c459b6f3674331e4b4b75a4306ddf70ddd3f1e043d647d3ef5205c6673a5426.scope.
Dec 06 08:19:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:19:18 compute-0 podman[404076]: 2025-12-06 08:19:18.536671446 +0000 UTC m=+0.111042487 container init 9c459b6f3674331e4b4b75a4306ddf70ddd3f1e043d647d3ef5205c6673a5426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ellis, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:19:18 compute-0 podman[404076]: 2025-12-06 08:19:18.446080145 +0000 UTC m=+0.020451176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:19:18 compute-0 podman[404076]: 2025-12-06 08:19:18.543172213 +0000 UTC m=+0.117543244 container start 9c459b6f3674331e4b4b75a4306ddf70ddd3f1e043d647d3ef5205c6673a5426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ellis, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:19:18 compute-0 podman[404076]: 2025-12-06 08:19:18.545809365 +0000 UTC m=+0.120180396 container attach 9c459b6f3674331e4b4b75a4306ddf70ddd3f1e043d647d3ef5205c6673a5426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:19:18 compute-0 jolly_ellis[404093]: 167 167
Dec 06 08:19:18 compute-0 systemd[1]: libpod-9c459b6f3674331e4b4b75a4306ddf70ddd3f1e043d647d3ef5205c6673a5426.scope: Deactivated successfully.
Dec 06 08:19:18 compute-0 conmon[404093]: conmon 9c459b6f3674331e4b4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9c459b6f3674331e4b4b75a4306ddf70ddd3f1e043d647d3ef5205c6673a5426.scope/container/memory.events
Dec 06 08:19:18 compute-0 podman[404076]: 2025-12-06 08:19:18.550233024 +0000 UTC m=+0.124604055 container died 9c459b6f3674331e4b4b75a4306ddf70ddd3f1e043d647d3ef5205c6673a5426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ellis, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:19:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bc9c3788cd93599209e035f82a852271ceee967d131f7fdedf879c7a7b87b4b-merged.mount: Deactivated successfully.
Dec 06 08:19:18 compute-0 podman[404076]: 2025-12-06 08:19:18.582185142 +0000 UTC m=+0.156556173 container remove 9c459b6f3674331e4b4b75a4306ddf70ddd3f1e043d647d3ef5205c6673a5426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ellis, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:19:18 compute-0 systemd[1]: libpod-conmon-9c459b6f3674331e4b4b75a4306ddf70ddd3f1e043d647d3ef5205c6673a5426.scope: Deactivated successfully.
Dec 06 08:19:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:19:18
Dec 06 08:19:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:19:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:19:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'backups', 'vms', '.rgw.root', 'images', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log']
Dec 06 08:19:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:19:18 compute-0 nova_compute[251992]: 2025-12-06 08:19:18.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:18 compute-0 nova_compute[251992]: 2025-12-06 08:19:18.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:19:18 compute-0 nova_compute[251992]: 2025-12-06 08:19:18.664 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:18 compute-0 nova_compute[251992]: 2025-12-06 08:19:18.672 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:19:18 compute-0 podman[404116]: 2025-12-06 08:19:18.727594112 +0000 UTC m=+0.034526108 container create b76462ef0d82df0ca330bcfd117ad283b2cc094144b440613a2c32cc6428d3dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:19:18 compute-0 systemd[1]: Started libpod-conmon-b76462ef0d82df0ca330bcfd117ad283b2cc094144b440613a2c32cc6428d3dc.scope.
Dec 06 08:19:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/805fef374e1d9c471662d13232c2d0be42de1e0fcb34977b663d2f391ce838a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/805fef374e1d9c471662d13232c2d0be42de1e0fcb34977b663d2f391ce838a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/805fef374e1d9c471662d13232c2d0be42de1e0fcb34977b663d2f391ce838a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/805fef374e1d9c471662d13232c2d0be42de1e0fcb34977b663d2f391ce838a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:19:18 compute-0 podman[404116]: 2025-12-06 08:19:18.713725036 +0000 UTC m=+0.020657052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:19:18 compute-0 podman[404116]: 2025-12-06 08:19:18.813531457 +0000 UTC m=+0.120463473 container init b76462ef0d82df0ca330bcfd117ad283b2cc094144b440613a2c32cc6428d3dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:19:18 compute-0 podman[404116]: 2025-12-06 08:19:18.819723895 +0000 UTC m=+0.126655901 container start b76462ef0d82df0ca330bcfd117ad283b2cc094144b440613a2c32cc6428d3dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:19:18 compute-0 podman[404116]: 2025-12-06 08:19:18.822947013 +0000 UTC m=+0.129879059 container attach b76462ef0d82df0ca330bcfd117ad283b2cc094144b440613a2c32cc6428d3dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:19:18 compute-0 ceph-mon[74339]: pgmap v3809: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 15 op/s
Dec 06 08:19:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:19.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:19 compute-0 suspicious_dhawan[404132]: {
Dec 06 08:19:19 compute-0 suspicious_dhawan[404132]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:19:19 compute-0 suspicious_dhawan[404132]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:19:19 compute-0 suspicious_dhawan[404132]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:19:19 compute-0 suspicious_dhawan[404132]:         "osd_id": 0,
Dec 06 08:19:19 compute-0 suspicious_dhawan[404132]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:19:19 compute-0 suspicious_dhawan[404132]:         "type": "bluestore"
Dec 06 08:19:19 compute-0 suspicious_dhawan[404132]:     }
Dec 06 08:19:19 compute-0 suspicious_dhawan[404132]: }
Dec 06 08:19:19 compute-0 systemd[1]: libpod-b76462ef0d82df0ca330bcfd117ad283b2cc094144b440613a2c32cc6428d3dc.scope: Deactivated successfully.
Dec 06 08:19:19 compute-0 podman[404116]: 2025-12-06 08:19:19.630008708 +0000 UTC m=+0.936940734 container died b76462ef0d82df0ca330bcfd117ad283b2cc094144b440613a2c32cc6428d3dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 08:19:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-805fef374e1d9c471662d13232c2d0be42de1e0fcb34977b663d2f391ce838a0-merged.mount: Deactivated successfully.
Dec 06 08:19:19 compute-0 podman[404116]: 2025-12-06 08:19:19.688491266 +0000 UTC m=+0.995423282 container remove b76462ef0d82df0ca330bcfd117ad283b2cc094144b440613a2c32cc6428d3dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_dhawan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 08:19:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:19 compute-0 systemd[1]: libpod-conmon-b76462ef0d82df0ca330bcfd117ad283b2cc094144b440613a2c32cc6428d3dc.scope: Deactivated successfully.
Dec 06 08:19:19 compute-0 sudo[404010]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:19:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:19:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:19:19 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:19:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c0df3753-8cc7-4e92-9806-cd292fa43c67 does not exist
Dec 06 08:19:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1afcac3a-8248-4e68-a458-318710b8c6b8 does not exist
Dec 06 08:19:19 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 843e6079-fc51-4edb-82aa-c20bac29a485 does not exist
Dec 06 08:19:19 compute-0 podman[404154]: 2025-12-06 08:19:19.776927109 +0000 UTC m=+0.122483249 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 08:19:19 compute-0 sudo[404193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:19 compute-0 sudo[404193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:19 compute-0 sudo[404193]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:19 compute-0 sudo[404218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:19:19 compute-0 sudo[404218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:19 compute-0 sudo[404218]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3810: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Dec 06 08:19:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:20.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:20 compute-0 nova_compute[251992]: 2025-12-06 08:19:20.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:20 compute-0 nova_compute[251992]: 2025-12-06 08:19:20.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:19:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:19:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:19:20 compute-0 ceph-mon[74339]: pgmap v3810: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Dec 06 08:19:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:21.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3811: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:19:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:22.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:22 compute-0 nova_compute[251992]: 2025-12-06 08:19:22.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:23 compute-0 ceph-mon[74339]: pgmap v3811: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:19:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:23.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:23 compute-0 nova_compute[251992]: 2025-12-06 08:19:23.316 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:23 compute-0 nova_compute[251992]: 2025-12-06 08:19:23.666 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:19:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:19:23 compute-0 sudo[404245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:23 compute-0 sudo[404245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:23 compute-0 sudo[404245]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:23 compute-0 sudo[404270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:23 compute-0 sudo[404270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:23 compute-0 sudo[404270]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3812: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:19:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:24.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:25 compute-0 ceph-mon[74339]: pgmap v3812: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:19:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:25.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3813: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:19:26 compute-0 podman[404296]: 2025-12-06 08:19:26.403461055 +0000 UTC m=+0.063312801 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:19:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:26.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:26 compute-0 podman[404297]: 2025-12-06 08:19:26.440155023 +0000 UTC m=+0.084992551 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003150881723431975 of space, bias 1.0, pg target 0.9452645170295925 quantized to 32 (current 32)
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:19:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:19:27 compute-0 ceph-mon[74339]: pgmap v3813: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:19:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:27.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:19:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:19:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:19:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:19:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:19:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3814: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 08:19:28 compute-0 nova_compute[251992]: 2025-12-06 08:19:28.350 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:28.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:28 compute-0 nova_compute[251992]: 2025-12-06 08:19:28.668 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:29.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:29 compute-0 ceph-mon[74339]: pgmap v3814: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 06 08:19:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3815: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 08:19:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2426268881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:30.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:31.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:19:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4163239937' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:19:31 compute-0 ceph-mon[74339]: pgmap v3815: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 08:19:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4163239937' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:19:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3816: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 08:19:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:32.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:32 compute-0 ceph-mon[74339]: pgmap v3816: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 08:19:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:33.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:33 compute-0 nova_compute[251992]: 2025-12-06 08:19:33.394 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:33 compute-0 nova_compute[251992]: 2025-12-06 08:19:33.669 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3817: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:19:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:34.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:35 compute-0 ceph-mon[74339]: pgmap v3817: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:19:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:35.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3818: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:19:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:36.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:37 compute-0 ceph-mon[74339]: pgmap v3818: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:19:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:37.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3819: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:19:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/374432018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:19:38 compute-0 nova_compute[251992]: 2025-12-06 08:19:38.397 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:38.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:38 compute-0 nova_compute[251992]: 2025-12-06 08:19:38.671 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:39 compute-0 ceph-mon[74339]: pgmap v3819: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:19:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:39.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3820: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:19:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:40.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:41 compute-0 ceph-mon[74339]: pgmap v3820: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:19:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:41.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3821: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 12 KiB/s wr, 11 op/s
Dec 06 08:19:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3771830527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:42.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:19:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:19:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:19:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:19:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:19:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:19:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:43.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:43 compute-0 nova_compute[251992]: 2025-12-06 08:19:43.400 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:43 compute-0 nova_compute[251992]: 2025-12-06 08:19:43.672 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3822: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 12 KiB/s wr, 11 op/s
Dec 06 08:19:43 compute-0 sudo[404341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:43 compute-0 sudo[404341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:43 compute-0 sudo[404341]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:44 compute-0 ceph-mon[74339]: pgmap v3821: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 12 KiB/s wr, 11 op/s
Dec 06 08:19:44 compute-0 sudo[404366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:19:44 compute-0 sudo[404366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:19:44 compute-0 sudo[404366]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:44.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:45.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2898252828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:45 compute-0 ceph-mon[74339]: pgmap v3822: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 12 KiB/s wr, 11 op/s
Dec 06 08:19:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3533005233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3823: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 43 op/s
Dec 06 08:19:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:46.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:47.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:47 compute-0 ceph-mon[74339]: pgmap v3823: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 43 op/s
Dec 06 08:19:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3824: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:19:48 compute-0 nova_compute[251992]: 2025-12-06 08:19:48.402 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:48.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:48 compute-0 nova_compute[251992]: 2025-12-06 08:19:48.675 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:48 compute-0 ceph-mon[74339]: pgmap v3824: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:19:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:49.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3825: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:19:50 compute-0 podman[404394]: 2025-12-06 08:19:50.441092032 +0000 UTC m=+0.105497708 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:19:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:50.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:51 compute-0 ceph-mon[74339]: pgmap v3825: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:19:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:51.033 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=100, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=99) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:19:51 compute-0 nova_compute[251992]: 2025-12-06 08:19:51.033 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:51.034 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:19:51 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:19:51.035 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '100'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:19:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:51.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3826: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 08:19:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:52.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:53 compute-0 ceph-mon[74339]: pgmap v3826: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 08:19:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:53.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:53 compute-0 nova_compute[251992]: 2025-12-06 08:19:53.405 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:53 compute-0 nova_compute[251992]: 2025-12-06 08:19:53.675 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3827: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Dec 06 08:19:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:54.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:19:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:55.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:19:55 compute-0 nova_compute[251992]: 2025-12-06 08:19:55.750 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3828: 305 pgs: 305 active+clean; 229 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 116 op/s
Dec 06 08:19:56 compute-0 ceph-mon[74339]: pgmap v3827: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Dec 06 08:19:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1638807678' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:19:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3758252918' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:19:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:56.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:56 compute-0 nova_compute[251992]: 2025-12-06 08:19:56.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:19:56 compute-0 nova_compute[251992]: 2025-12-06 08:19:56.692 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:56 compute-0 nova_compute[251992]: 2025-12-06 08:19:56.692 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:56 compute-0 nova_compute[251992]: 2025-12-06 08:19:56.693 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:56 compute-0 nova_compute[251992]: 2025-12-06 08:19:56.693 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:19:56 compute-0 nova_compute[251992]: 2025-12-06 08:19:56.693 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.152 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:57 compute-0 podman[404447]: 2025-12-06 08:19:57.247295281 +0000 UTC m=+0.054207534 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:19:57 compute-0 podman[404448]: 2025-12-06 08:19:57.28890056 +0000 UTC m=+0.082977594 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 08:19:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:57.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.327 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.328 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4106MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.328 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.329 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.429 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.429 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:19:57 compute-0 ceph-mon[74339]: pgmap v3828: 305 pgs: 305 active+clean; 229 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 116 op/s
Dec 06 08:19:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2854615589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.457 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:19:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:19:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3762066553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.892 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.898 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.935 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:19:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3829: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 128 op/s
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.995 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:19:57 compute-0 nova_compute[251992]: 2025-12-06 08:19:57.996 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:19:58 compute-0 nova_compute[251992]: 2025-12-06 08:19:58.408 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:19:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:19:58.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:19:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3762066553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:19:58 compute-0 nova_compute[251992]: 2025-12-06 08:19:58.677 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:19:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:19:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:19:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:19:59.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:19:59 compute-0 ceph-mon[74339]: pgmap v3829: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 128 op/s
Dec 06 08:19:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:19:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3830: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 433 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 06 08:19:59 compute-0 nova_compute[251992]: 2025-12-06 08:19:59.996 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 08:20:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:20:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:00.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:20:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/656197683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:00 compute-0 ceph-mon[74339]: pgmap v3830: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 433 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Dec 06 08:20:00 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 08:20:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:01.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:01 compute-0 nova_compute[251992]: 2025-12-06 08:20:01.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:01 compute-0 nova_compute[251992]: 2025-12-06 08:20:01.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:20:01 compute-0 nova_compute[251992]: 2025-12-06 08:20:01.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:20:01 compute-0 nova_compute[251992]: 2025-12-06 08:20:01.685 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:20:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3831: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Dec 06 08:20:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2368192170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:02.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:03 compute-0 ceph-mon[74339]: pgmap v3831: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Dec 06 08:20:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:03.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:03 compute-0 nova_compute[251992]: 2025-12-06 08:20:03.453 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:03 compute-0 nova_compute[251992]: 2025-12-06 08:20:03.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:03 compute-0 nova_compute[251992]: 2025-12-06 08:20:03.679 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:03.895 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:03.896 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:03.897 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3832: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Dec 06 08:20:04 compute-0 sudo[404511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:04 compute-0 sudo[404511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:04 compute-0 sudo[404511]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:04 compute-0 sudo[404536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:04 compute-0 sudo[404536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:04 compute-0 sudo[404536]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:20:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:04.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:20:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:05 compute-0 ceph-mon[74339]: pgmap v3832: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Dec 06 08:20:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:05.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:05 compute-0 nova_compute[251992]: 2025-12-06 08:20:05.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3833: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Dec 06 08:20:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:20:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:06.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:20:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:07.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:07 compute-0 ceph-mon[74339]: pgmap v3833: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Dec 06 08:20:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3834: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Dec 06 08:20:08 compute-0 nova_compute[251992]: 2025-12-06 08:20:08.467 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:08.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:08 compute-0 nova_compute[251992]: 2025-12-06 08:20:08.681 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:08 compute-0 ceph-mon[74339]: pgmap v3834: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Dec 06 08:20:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:20:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/922478015' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:20:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:20:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/922478015' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:20:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:09.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3835: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 70 op/s
Dec 06 08:20:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/922478015' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:20:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/922478015' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:20:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:20:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:10.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:20:10 compute-0 nova_compute[251992]: 2025-12-06 08:20:10.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:11 compute-0 ceph-mon[74339]: pgmap v3835: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 70 op/s
Dec 06 08:20:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:11.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:11 compute-0 nova_compute[251992]: 2025-12-06 08:20:11.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3836: 305 pgs: 305 active+clean; 267 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 109 op/s
Dec 06 08:20:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:12.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:12 compute-0 nova_compute[251992]: 2025-12-06 08:20:12.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:20:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:20:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:20:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:20:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:20:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:20:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:13.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:13 compute-0 nova_compute[251992]: 2025-12-06 08:20:13.494 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:13 compute-0 ceph-mon[74339]: pgmap v3836: 305 pgs: 305 active+clean; 267 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 109 op/s
Dec 06 08:20:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3180353347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:13 compute-0 nova_compute[251992]: 2025-12-06 08:20:13.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:13 compute-0 nova_compute[251992]: 2025-12-06 08:20:13.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:20:13 compute-0 nova_compute[251992]: 2025-12-06 08:20:13.684 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3837: 305 pgs: 305 active+clean; 267 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 1.9 MiB/s wr, 39 op/s
Dec 06 08:20:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:14.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:14 compute-0 ceph-mon[74339]: pgmap v3837: 305 pgs: 305 active+clean; 267 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 1.9 MiB/s wr, 39 op/s
Dec 06 08:20:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:15.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3838: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 08:20:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:16.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:17 compute-0 ceph-mon[74339]: pgmap v3838: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 06 08:20:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:17.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3839: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 395 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 06 08:20:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:18.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:18 compute-0 nova_compute[251992]: 2025-12-06 08:20:18.527 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:20:18
Dec 06 08:20:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:20:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:20:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'images', 'vms', '.rgw.root']
Dec 06 08:20:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:20:18 compute-0 nova_compute[251992]: 2025-12-06 08:20:18.687 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:19.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:19 compute-0 ceph-mon[74339]: pgmap v3839: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 395 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 06 08:20:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3840: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 395 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 06 08:20:20 compute-0 sudo[404569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:20 compute-0 sudo[404569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:20 compute-0 sudo[404569]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:20 compute-0 sudo[404594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:20:20 compute-0 sudo[404594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:20 compute-0 sudo[404594]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:20 compute-0 sudo[404619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:20 compute-0 sudo[404619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:20 compute-0 sudo[404619]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:20 compute-0 sudo[404644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 08:20:20 compute-0 sudo[404644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:20.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:20 compute-0 ovn_controller[147168]: 2025-12-06T08:20:20Z|00802|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 08:20:20 compute-0 podman[404711]: 2025-12-06 08:20:20.878751076 +0000 UTC m=+0.112840026 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:20:20 compute-0 podman[404764]: 2025-12-06 08:20:20.93853317 +0000 UTC m=+0.060824773 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:20:21 compute-0 podman[404764]: 2025-12-06 08:20:21.037461098 +0000 UTC m=+0.159752701 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 08:20:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:21.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:21 compute-0 ceph-mon[74339]: pgmap v3840: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 395 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 06 08:20:21 compute-0 podman[404921]: 2025-12-06 08:20:21.601620313 +0000 UTC m=+0.047561783 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 08:20:21 compute-0 podman[404921]: 2025-12-06 08:20:21.637533379 +0000 UTC m=+0.083474839 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 08:20:21 compute-0 podman[404986]: 2025-12-06 08:20:21.815904255 +0000 UTC m=+0.046899905 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-type=git, distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived)
Dec 06 08:20:21 compute-0 podman[404986]: 2025-12-06 08:20:21.83453351 +0000 UTC m=+0.065529140 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, version=2.2.4, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.openshift.tags=Ceph keepalived, release=1793)
Dec 06 08:20:21 compute-0 sudo[404644]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:20:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:20:21 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3841: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 395 KiB/s rd, 2.2 MiB/s wr, 81 op/s
Dec 06 08:20:21 compute-0 sudo[405020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:21 compute-0 sudo[405020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:21 compute-0 sudo[405020]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:22 compute-0 sudo[405045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:20:22 compute-0 sudo[405045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:22 compute-0 sudo[405045]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:22 compute-0 sudo[405070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:22 compute-0 sudo[405070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:22 compute-0 sudo[405070]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:22 compute-0 sudo[405095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:20:22 compute-0 sudo[405095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:22.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:22 compute-0 sudo[405095]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:20:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:20:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:20:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:20:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:20:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:22 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d0d9f55e-0b75-4916-8789-20564f7f46b0 does not exist
Dec 06 08:20:22 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev fbaeefb6-0652-4845-9b82-7b5a05ec78af does not exist
Dec 06 08:20:22 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 77a5a932-5b58-4b96-945c-3693669ab120 does not exist
Dec 06 08:20:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:20:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:20:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:20:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:20:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:20:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:20:22 compute-0 sudo[405151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:22 compute-0 sudo[405151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:22 compute-0 sudo[405151]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:22 compute-0 sudo[405176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:20:22 compute-0 sudo[405176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:22 compute-0 sudo[405176]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:22 compute-0 sudo[405201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:22 compute-0 sudo[405201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:22 compute-0 sudo[405201]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:22 compute-0 sudo[405226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:20:22 compute-0 sudo[405226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:22 compute-0 ceph-mon[74339]: pgmap v3841: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 395 KiB/s rd, 2.2 MiB/s wr, 81 op/s
Dec 06 08:20:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:20:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:20:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:20:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:20:22 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:20:23 compute-0 podman[405293]: 2025-12-06 08:20:23.174902044 +0000 UTC m=+0.046897446 container create 308720fab4ca26dc3a1059cf4fd7185035e762562467c8c269b8bea1a2cf15e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sutherland, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:20:23 compute-0 systemd[1]: Started libpod-conmon-308720fab4ca26dc3a1059cf4fd7185035e762562467c8c269b8bea1a2cf15e2.scope.
Dec 06 08:20:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:20:23 compute-0 podman[405293]: 2025-12-06 08:20:23.155279811 +0000 UTC m=+0.027275243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:20:23 compute-0 podman[405293]: 2025-12-06 08:20:23.254697991 +0000 UTC m=+0.126693413 container init 308720fab4ca26dc3a1059cf4fd7185035e762562467c8c269b8bea1a2cf15e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 08:20:23 compute-0 podman[405293]: 2025-12-06 08:20:23.262942795 +0000 UTC m=+0.134938207 container start 308720fab4ca26dc3a1059cf4fd7185035e762562467c8c269b8bea1a2cf15e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sutherland, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:20:23 compute-0 podman[405293]: 2025-12-06 08:20:23.266611725 +0000 UTC m=+0.138607167 container attach 308720fab4ca26dc3a1059cf4fd7185035e762562467c8c269b8bea1a2cf15e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sutherland, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 08:20:23 compute-0 exciting_sutherland[405309]: 167 167
Dec 06 08:20:23 compute-0 systemd[1]: libpod-308720fab4ca26dc3a1059cf4fd7185035e762562467c8c269b8bea1a2cf15e2.scope: Deactivated successfully.
Dec 06 08:20:23 compute-0 podman[405293]: 2025-12-06 08:20:23.268528397 +0000 UTC m=+0.140523809 container died 308720fab4ca26dc3a1059cf4fd7185035e762562467c8c269b8bea1a2cf15e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 08:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-90957c91a874e43d21cc95033d8ba11c0e4cdf7ef5d47206220d8f2dda857aa7-merged.mount: Deactivated successfully.
Dec 06 08:20:23 compute-0 podman[405293]: 2025-12-06 08:20:23.309475299 +0000 UTC m=+0.181470711 container remove 308720fab4ca26dc3a1059cf4fd7185035e762562467c8c269b8bea1a2cf15e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sutherland, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:20:23 compute-0 systemd[1]: libpod-conmon-308720fab4ca26dc3a1059cf4fd7185035e762562467c8c269b8bea1a2cf15e2.scope: Deactivated successfully.
Dec 06 08:20:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:23.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:23 compute-0 podman[405337]: 2025-12-06 08:20:23.511682242 +0000 UTC m=+0.064462632 container create 2e5874a69885944799b65cc7528d0b0a02e49ea73771a4ba7c8ba47b1563402a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 08:20:23 compute-0 nova_compute[251992]: 2025-12-06 08:20:23.562 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:23 compute-0 podman[405337]: 2025-12-06 08:20:23.488866083 +0000 UTC m=+0.041646443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:20:23 compute-0 systemd[1]: Started libpod-conmon-2e5874a69885944799b65cc7528d0b0a02e49ea73771a4ba7c8ba47b1563402a.scope.
Dec 06 08:20:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b36e39460c27f4abbf9803fe74ec3cca3c9367079a5fcc11ec61892c2550b25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b36e39460c27f4abbf9803fe74ec3cca3c9367079a5fcc11ec61892c2550b25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b36e39460c27f4abbf9803fe74ec3cca3c9367079a5fcc11ec61892c2550b25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b36e39460c27f4abbf9803fe74ec3cca3c9367079a5fcc11ec61892c2550b25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b36e39460c27f4abbf9803fe74ec3cca3c9367079a5fcc11ec61892c2550b25/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:23 compute-0 podman[405337]: 2025-12-06 08:20:23.614201728 +0000 UTC m=+0.166982068 container init 2e5874a69885944799b65cc7528d0b0a02e49ea73771a4ba7c8ba47b1563402a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:20:23 compute-0 podman[405337]: 2025-12-06 08:20:23.623576532 +0000 UTC m=+0.176356872 container start 2e5874a69885944799b65cc7528d0b0a02e49ea73771a4ba7c8ba47b1563402a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 08:20:23 compute-0 podman[405337]: 2025-12-06 08:20:23.62682534 +0000 UTC m=+0.179605710 container attach 2e5874a69885944799b65cc7528d0b0a02e49ea73771a4ba7c8ba47b1563402a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 08:20:23 compute-0 nova_compute[251992]: 2025-12-06 08:20:23.688 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:20:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:20:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3842: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 218 KiB/s wr, 42 op/s
Dec 06 08:20:24 compute-0 sudo[405359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:24 compute-0 sudo[405359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:24 compute-0 sudo[405359]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:24 compute-0 sudo[405389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:24 compute-0 sudo[405389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:24 compute-0 sudo[405389]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:24 compute-0 priceless_blackburn[405353]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:20:24 compute-0 priceless_blackburn[405353]: --> relative data size: 1.0
Dec 06 08:20:24 compute-0 priceless_blackburn[405353]: --> All data devices are unavailable
Dec 06 08:20:24 compute-0 systemd[1]: libpod-2e5874a69885944799b65cc7528d0b0a02e49ea73771a4ba7c8ba47b1563402a.scope: Deactivated successfully.
Dec 06 08:20:24 compute-0 podman[405337]: 2025-12-06 08:20:24.42374782 +0000 UTC m=+0.976528160 container died 2e5874a69885944799b65cc7528d0b0a02e49ea73771a4ba7c8ba47b1563402a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:20:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b36e39460c27f4abbf9803fe74ec3cca3c9367079a5fcc11ec61892c2550b25-merged.mount: Deactivated successfully.
Dec 06 08:20:24 compute-0 podman[405337]: 2025-12-06 08:20:24.481834978 +0000 UTC m=+1.034615318 container remove 2e5874a69885944799b65cc7528d0b0a02e49ea73771a4ba7c8ba47b1563402a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_blackburn, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:20:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:24.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:24 compute-0 systemd[1]: libpod-conmon-2e5874a69885944799b65cc7528d0b0a02e49ea73771a4ba7c8ba47b1563402a.scope: Deactivated successfully.
Dec 06 08:20:24 compute-0 sudo[405226]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:24 compute-0 sudo[405431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:24 compute-0 sudo[405431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:24 compute-0 sudo[405431]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:24 compute-0 sudo[405456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:20:24 compute-0 sudo[405456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:24 compute-0 sudo[405456]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:24 compute-0 sudo[405481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:24 compute-0 sudo[405481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:24 compute-0 sudo[405481]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:24 compute-0 sudo[405506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:20:24 compute-0 sudo[405506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:25 compute-0 podman[405572]: 2025-12-06 08:20:25.036808925 +0000 UTC m=+0.045008925 container create 0f4a4096b431d7834ab8fa886bcfab86203422a3899b5f2fb3a6da23b08e1aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 08:20:25 compute-0 systemd[1]: Started libpod-conmon-0f4a4096b431d7834ab8fa886bcfab86203422a3899b5f2fb3a6da23b08e1aca.scope.
Dec 06 08:20:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:20:25 compute-0 podman[405572]: 2025-12-06 08:20:25.090471533 +0000 UTC m=+0.098671563 container init 0f4a4096b431d7834ab8fa886bcfab86203422a3899b5f2fb3a6da23b08e1aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:20:25 compute-0 podman[405572]: 2025-12-06 08:20:25.097358559 +0000 UTC m=+0.105558559 container start 0f4a4096b431d7834ab8fa886bcfab86203422a3899b5f2fb3a6da23b08e1aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:20:25 compute-0 podman[405572]: 2025-12-06 08:20:25.100641648 +0000 UTC m=+0.108841708 container attach 0f4a4096b431d7834ab8fa886bcfab86203422a3899b5f2fb3a6da23b08e1aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Dec 06 08:20:25 compute-0 distracted_franklin[405588]: 167 167
Dec 06 08:20:25 compute-0 systemd[1]: libpod-0f4a4096b431d7834ab8fa886bcfab86203422a3899b5f2fb3a6da23b08e1aca.scope: Deactivated successfully.
Dec 06 08:20:25 compute-0 podman[405572]: 2025-12-06 08:20:25.102297674 +0000 UTC m=+0.110497674 container died 0f4a4096b431d7834ab8fa886bcfab86203422a3899b5f2fb3a6da23b08e1aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:20:25 compute-0 podman[405572]: 2025-12-06 08:20:25.018784785 +0000 UTC m=+0.026984805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:20:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-519050f81a30a8e38c995085b9cf653b33cc52a5f609c940b659b976668128c1-merged.mount: Deactivated successfully.
Dec 06 08:20:25 compute-0 ceph-mon[74339]: pgmap v3842: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 218 KiB/s wr, 42 op/s
Dec 06 08:20:25 compute-0 podman[405572]: 2025-12-06 08:20:25.132842123 +0000 UTC m=+0.141042123 container remove 0f4a4096b431d7834ab8fa886bcfab86203422a3899b5f2fb3a6da23b08e1aca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 08:20:25 compute-0 systemd[1]: libpod-conmon-0f4a4096b431d7834ab8fa886bcfab86203422a3899b5f2fb3a6da23b08e1aca.scope: Deactivated successfully.
Dec 06 08:20:25 compute-0 podman[405613]: 2025-12-06 08:20:25.279494747 +0000 UTC m=+0.036673627 container create 7d6008353cd127b1c4ff62b04ef3f664b742958915fb2659766c1c7bbc446712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 08:20:25 compute-0 systemd[1]: Started libpod-conmon-7d6008353cd127b1c4ff62b04ef3f664b742958915fb2659766c1c7bbc446712.scope.
Dec 06 08:20:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9e33e22c8d8317c7d985b49cc87ca8fada928e0727dbeff23df065d00b04ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9e33e22c8d8317c7d985b49cc87ca8fada928e0727dbeff23df065d00b04ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9e33e22c8d8317c7d985b49cc87ca8fada928e0727dbeff23df065d00b04ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d9e33e22c8d8317c7d985b49cc87ca8fada928e0727dbeff23df065d00b04ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:25.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:25 compute-0 podman[405613]: 2025-12-06 08:20:25.356597932 +0000 UTC m=+0.113776832 container init 7d6008353cd127b1c4ff62b04ef3f664b742958915fb2659766c1c7bbc446712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 08:20:25 compute-0 podman[405613]: 2025-12-06 08:20:25.263319257 +0000 UTC m=+0.020498167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:20:25 compute-0 podman[405613]: 2025-12-06 08:20:25.365457003 +0000 UTC m=+0.122635883 container start 7d6008353cd127b1c4ff62b04ef3f664b742958915fb2659766c1c7bbc446712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mclaren, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 08:20:25 compute-0 podman[405613]: 2025-12-06 08:20:25.368467444 +0000 UTC m=+0.125646324 container attach 7d6008353cd127b1c4ff62b04ef3f664b742958915fb2659766c1c7bbc446712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 08:20:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3843: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 218 KiB/s wr, 42 op/s
Dec 06 08:20:26 compute-0 charming_mclaren[405630]: {
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:     "0": [
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:         {
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             "devices": [
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "/dev/loop3"
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             ],
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             "lv_name": "ceph_lv0",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             "lv_size": "7511998464",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             "name": "ceph_lv0",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             "tags": {
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "ceph.cluster_name": "ceph",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "ceph.crush_device_class": "",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "ceph.encrypted": "0",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "ceph.osd_id": "0",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "ceph.type": "block",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:                 "ceph.vdo": "0"
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             },
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             "type": "block",
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:             "vg_name": "ceph_vg0"
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:         }
Dec 06 08:20:26 compute-0 charming_mclaren[405630]:     ]
Dec 06 08:20:26 compute-0 charming_mclaren[405630]: }
Dec 06 08:20:26 compute-0 systemd[1]: libpod-7d6008353cd127b1c4ff62b04ef3f664b742958915fb2659766c1c7bbc446712.scope: Deactivated successfully.
Dec 06 08:20:26 compute-0 podman[405613]: 2025-12-06 08:20:26.154900729 +0000 UTC m=+0.912079609 container died 7d6008353cd127b1c4ff62b04ef3f664b742958915fb2659766c1c7bbc446712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:20:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d9e33e22c8d8317c7d985b49cc87ca8fada928e0727dbeff23df065d00b04ed-merged.mount: Deactivated successfully.
Dec 06 08:20:26 compute-0 podman[405613]: 2025-12-06 08:20:26.207015814 +0000 UTC m=+0.964194704 container remove 7d6008353cd127b1c4ff62b04ef3f664b742958915fb2659766c1c7bbc446712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mclaren, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:20:26 compute-0 systemd[1]: libpod-conmon-7d6008353cd127b1c4ff62b04ef3f664b742958915fb2659766c1c7bbc446712.scope: Deactivated successfully.
Dec 06 08:20:26 compute-0 sudo[405506]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:26 compute-0 sudo[405653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:26 compute-0 sudo[405653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:26 compute-0 sudo[405653]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:26 compute-0 sudo[405678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:20:26 compute-0 sudo[405678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:26 compute-0 sudo[405678]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:26 compute-0 sudo[405703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:26 compute-0 sudo[405703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:26 compute-0 sudo[405703]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:26 compute-0 sudo[405728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:20:26 compute-0 sudo[405728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:20:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:26.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021692301205099573 of space, bias 1.0, pg target 0.6507690361529872 quantized to 32 (current 32)
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00432773677414852 of space, bias 1.0, pg target 1.298321032244556 quantized to 32 (current 32)
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:20:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 08:20:26 compute-0 podman[405793]: 2025-12-06 08:20:26.798682587 +0000 UTC m=+0.043876942 container create 1c05521f30ed5b042030c7bf15be82d3e0c45ec0e0a8b3be079fd97097c96e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 08:20:26 compute-0 systemd[1]: Started libpod-conmon-1c05521f30ed5b042030c7bf15be82d3e0c45ec0e0a8b3be079fd97097c96e47.scope.
Dec 06 08:20:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:20:26 compute-0 podman[405793]: 2025-12-06 08:20:26.776713321 +0000 UTC m=+0.021907686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:20:26 compute-0 podman[405793]: 2025-12-06 08:20:26.877203451 +0000 UTC m=+0.122397826 container init 1c05521f30ed5b042030c7bf15be82d3e0c45ec0e0a8b3be079fd97097c96e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:20:26 compute-0 podman[405793]: 2025-12-06 08:20:26.884194961 +0000 UTC m=+0.129389296 container start 1c05521f30ed5b042030c7bf15be82d3e0c45ec0e0a8b3be079fd97097c96e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_allen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 08:20:26 compute-0 modest_allen[405810]: 167 167
Dec 06 08:20:26 compute-0 systemd[1]: libpod-1c05521f30ed5b042030c7bf15be82d3e0c45ec0e0a8b3be079fd97097c96e47.scope: Deactivated successfully.
Dec 06 08:20:26 compute-0 podman[405793]: 2025-12-06 08:20:26.887259494 +0000 UTC m=+0.132453839 container attach 1c05521f30ed5b042030c7bf15be82d3e0c45ec0e0a8b3be079fd97097c96e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 08:20:26 compute-0 podman[405793]: 2025-12-06 08:20:26.88747945 +0000 UTC m=+0.132673765 container died 1c05521f30ed5b042030c7bf15be82d3e0c45ec0e0a8b3be079fd97097c96e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_allen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 08:20:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-961adb47ab7eb87778e14d62a4a66bcdaa8f17ab5a426e8cf9726176bc33b1d7-merged.mount: Deactivated successfully.
Dec 06 08:20:26 compute-0 podman[405793]: 2025-12-06 08:20:26.919351126 +0000 UTC m=+0.164545441 container remove 1c05521f30ed5b042030c7bf15be82d3e0c45ec0e0a8b3be079fd97097c96e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:20:26 compute-0 systemd[1]: libpod-conmon-1c05521f30ed5b042030c7bf15be82d3e0c45ec0e0a8b3be079fd97097c96e47.scope: Deactivated successfully.
Dec 06 08:20:27 compute-0 podman[405834]: 2025-12-06 08:20:27.087803303 +0000 UTC m=+0.045490658 container create 8aa6cec76ec10b968f698a885a5c2c51fb56d1eb2b283c8e67c8c20e9c7d462f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sammet, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:20:27 compute-0 systemd[1]: Started libpod-conmon-8aa6cec76ec10b968f698a885a5c2c51fb56d1eb2b283c8e67c8c20e9c7d462f.scope.
Dec 06 08:20:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:20:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce3d1a9c8e7860ba8bf35a26c3b10f15c2c239f7a6fa3585391eab88b1f543f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce3d1a9c8e7860ba8bf35a26c3b10f15c2c239f7a6fa3585391eab88b1f543f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce3d1a9c8e7860ba8bf35a26c3b10f15c2c239f7a6fa3585391eab88b1f543f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce3d1a9c8e7860ba8bf35a26c3b10f15c2c239f7a6fa3585391eab88b1f543f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:27 compute-0 podman[405834]: 2025-12-06 08:20:27.154012421 +0000 UTC m=+0.111699776 container init 8aa6cec76ec10b968f698a885a5c2c51fb56d1eb2b283c8e67c8c20e9c7d462f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sammet, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:20:27 compute-0 podman[405834]: 2025-12-06 08:20:27.067263664 +0000 UTC m=+0.024951069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:20:27 compute-0 podman[405834]: 2025-12-06 08:20:27.162166742 +0000 UTC m=+0.119854097 container start 8aa6cec76ec10b968f698a885a5c2c51fb56d1eb2b283c8e67c8c20e9c7d462f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:20:27 compute-0 podman[405834]: 2025-12-06 08:20:27.165678247 +0000 UTC m=+0.123365602 container attach 8aa6cec76ec10b968f698a885a5c2c51fb56d1eb2b283c8e67c8c20e9c7d462f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sammet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:20:27 compute-0 ceph-mon[74339]: pgmap v3843: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 218 KiB/s wr, 42 op/s
Dec 06 08:20:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:27.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:20:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:20:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:20:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:20:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:20:27 compute-0 podman[405856]: 2025-12-06 08:20:27.416701567 +0000 UTC m=+0.065402237 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 08:20:27 compute-0 podman[405857]: 2025-12-06 08:20:27.443946577 +0000 UTC m=+0.088299180 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:20:27 compute-0 nova_compute[251992]: 2025-12-06 08:20:27.567 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:27 compute-0 nova_compute[251992]: 2025-12-06 08:20:27.567 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:27 compute-0 nova_compute[251992]: 2025-12-06 08:20:27.593 251996 DEBUG nova.compute.manager [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:20:27 compute-0 nova_compute[251992]: 2025-12-06 08:20:27.746 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:27 compute-0 nova_compute[251992]: 2025-12-06 08:20:27.746 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:27 compute-0 nova_compute[251992]: 2025-12-06 08:20:27.754 251996 DEBUG nova.virt.hardware [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:20:27 compute-0 nova_compute[251992]: 2025-12-06 08:20:27.754 251996 INFO nova.compute.claims [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:20:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3844: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 16 KiB/s wr, 7 op/s
Dec 06 08:20:27 compute-0 nova_compute[251992]: 2025-12-06 08:20:27.992 251996 DEBUG oslo_concurrency.processutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:27 compute-0 charming_sammet[405851]: {
Dec 06 08:20:27 compute-0 charming_sammet[405851]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:20:27 compute-0 charming_sammet[405851]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:20:27 compute-0 charming_sammet[405851]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:20:27 compute-0 charming_sammet[405851]:         "osd_id": 0,
Dec 06 08:20:27 compute-0 charming_sammet[405851]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:20:27 compute-0 charming_sammet[405851]:         "type": "bluestore"
Dec 06 08:20:27 compute-0 charming_sammet[405851]:     }
Dec 06 08:20:27 compute-0 charming_sammet[405851]: }
Dec 06 08:20:28 compute-0 systemd[1]: libpod-8aa6cec76ec10b968f698a885a5c2c51fb56d1eb2b283c8e67c8c20e9c7d462f.scope: Deactivated successfully.
Dec 06 08:20:28 compute-0 podman[405834]: 2025-12-06 08:20:28.025134856 +0000 UTC m=+0.982822241 container died 8aa6cec76ec10b968f698a885a5c2c51fb56d1eb2b283c8e67c8c20e9c7d462f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:20:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cce3d1a9c8e7860ba8bf35a26c3b10f15c2c239f7a6fa3585391eab88b1f543f-merged.mount: Deactivated successfully.
Dec 06 08:20:28 compute-0 podman[405834]: 2025-12-06 08:20:28.076540333 +0000 UTC m=+1.034227688 container remove 8aa6cec76ec10b968f698a885a5c2c51fb56d1eb2b283c8e67c8c20e9c7d462f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_sammet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 08:20:28 compute-0 systemd[1]: libpod-conmon-8aa6cec76ec10b968f698a885a5c2c51fb56d1eb2b283c8e67c8c20e9c7d462f.scope: Deactivated successfully.
Dec 06 08:20:28 compute-0 sudo[405728]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:20:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:20:28 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:28 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b3790831-05b5-46be-b6c1-76ac1460f043 does not exist
Dec 06 08:20:28 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b0e962e6-21d0-49d5-a98d-74fe008560df does not exist
Dec 06 08:20:28 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 331b91f9-8b35-49bd-b2d7-9d5e1cde79cf does not exist
Dec 06 08:20:28 compute-0 sudo[405941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:28 compute-0 sudo[405941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:28 compute-0 sudo[405941]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:28 compute-0 sudo[405966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:20:28 compute-0 sudo[405966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:28 compute-0 sudo[405966]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:20:28 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3079866157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:28.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:28 compute-0 nova_compute[251992]: 2025-12-06 08:20:28.504 251996 DEBUG oslo_concurrency.processutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:20:28 compute-0 nova_compute[251992]: 2025-12-06 08:20:28.514 251996 DEBUG nova.compute.provider_tree [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:20:28 compute-0 nova_compute[251992]: 2025-12-06 08:20:28.535 251996 DEBUG nova.scheduler.client.report [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:20:28 compute-0 nova_compute[251992]: 2025-12-06 08:20:28.561 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:28 compute-0 nova_compute[251992]: 2025-12-06 08:20:28.562 251996 DEBUG nova.compute.manager [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:20:28 compute-0 nova_compute[251992]: 2025-12-06 08:20:28.565 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:28 compute-0 nova_compute[251992]: 2025-12-06 08:20:28.645 251996 DEBUG nova.compute.manager [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:20:28 compute-0 nova_compute[251992]: 2025-12-06 08:20:28.645 251996 DEBUG nova.network.neutron [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:20:28 compute-0 nova_compute[251992]: 2025-12-06 08:20:28.677 251996 INFO nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:20:28 compute-0 nova_compute[251992]: 2025-12-06 08:20:28.690 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:28 compute-0 nova_compute[251992]: 2025-12-06 08:20:28.703 251996 DEBUG nova.compute.manager [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:20:28 compute-0 nova_compute[251992]: 2025-12-06 08:20:28.768 251996 INFO nova.virt.block_device [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Booting with volume 51b925ec-146c-45cc-ba59-9db845a18e81 at /dev/vda
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.088 251996 DEBUG nova.policy [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e8feb4540af4e2caa45a88a9202dbe2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:20:29 compute-0 ceph-mon[74339]: pgmap v3844: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 16 KiB/s wr, 7 op/s
Dec 06 08:20:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:29 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:20:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3079866157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.211 251996 DEBUG os_brick.utils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.213 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.224 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.224 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[4925a8eb-cd41-4e22-9780-ccb40c7cb5e9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.226 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.233 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.233 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[8f25b105-76de-4cea-bf62-cc7449638415]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.235 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.242 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.242 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec6de50-25c1-4df8-88a3-55f133610cf1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.244 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[965396e4-be7f-4039-9ebf-538f1245c39a]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.244 251996 DEBUG oslo_concurrency.processutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.271 251996 DEBUG oslo_concurrency.processutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.273 251996 DEBUG os_brick.initiator.connectors.lightos [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.274 251996 DEBUG os_brick.initiator.connectors.lightos [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.274 251996 DEBUG os_brick.initiator.connectors.lightos [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.274 251996 DEBUG os_brick.utils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 08:20:29 compute-0 nova_compute[251992]: 2025-12-06 08:20:29.274 251996 DEBUG nova.virt.block_device [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Updating existing volume attachment record: 0376b513-a863-4882-9990-f00c95971e30 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 08:20:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:29.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3845: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Dec 06 08:20:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:20:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:30.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:20:31 compute-0 ceph-mon[74339]: pgmap v3845: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Dec 06 08:20:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2333218000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:20:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:31.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3846: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 14 KiB/s wr, 0 op/s
Dec 06 08:20:32 compute-0 nova_compute[251992]: 2025-12-06 08:20:32.061 251996 DEBUG nova.compute.manager [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:20:32 compute-0 nova_compute[251992]: 2025-12-06 08:20:32.063 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:20:32 compute-0 nova_compute[251992]: 2025-12-06 08:20:32.063 251996 INFO nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Creating image(s)
Dec 06 08:20:32 compute-0 nova_compute[251992]: 2025-12-06 08:20:32.064 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 08:20:32 compute-0 nova_compute[251992]: 2025-12-06 08:20:32.064 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Ensure instance console log exists: /var/lib/nova/instances/f2b69bc0-d3ae-4f20-9026-421ff6537c3f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:20:32 compute-0 nova_compute[251992]: 2025-12-06 08:20:32.065 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:32 compute-0 nova_compute[251992]: 2025-12-06 08:20:32.065 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:32 compute-0 nova_compute[251992]: 2025-12-06 08:20:32.065 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:32.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:32 compute-0 nova_compute[251992]: 2025-12-06 08:20:32.656 251996 DEBUG nova.network.neutron [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Successfully created port: fdb27d9b-f2d2-4cdc-9682-63a06b53b767 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:20:33 compute-0 ceph-mon[74339]: pgmap v3846: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 14 KiB/s wr, 0 op/s
Dec 06 08:20:33 compute-0 sshd-session[406001]: Connection reset by authenticating user root 45.135.232.92 port 43198 [preauth]
Dec 06 08:20:33 compute-0 nova_compute[251992]: 2025-12-06 08:20:33.314 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:33.316 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=101, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=100) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:20:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:33.319 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:20:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:33.321 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '101'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:20:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:20:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:33.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:20:33 compute-0 nova_compute[251992]: 2025-12-06 08:20:33.567 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:33 compute-0 nova_compute[251992]: 2025-12-06 08:20:33.725 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3847: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 2.0 KiB/s wr, 0 op/s
Dec 06 08:20:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:34.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:34 compute-0 sshd-session[406005]: Invalid user admin from 45.135.232.92 port 43200
Dec 06 08:20:35 compute-0 nova_compute[251992]: 2025-12-06 08:20:35.109 251996 DEBUG nova.network.neutron [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Successfully updated port: fdb27d9b-f2d2-4cdc-9682-63a06b53b767 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:20:35 compute-0 sshd-session[406005]: Connection reset by invalid user admin 45.135.232.92 port 43200 [preauth]
Dec 06 08:20:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3058139754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:35 compute-0 nova_compute[251992]: 2025-12-06 08:20:35.353 251996 DEBUG nova.compute.manager [req-86abcc97-1604-4cf6-886a-0ad14b7467cb req-885c5182-74dc-4455-8bc1-9031f64e60e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Received event network-changed-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:20:35 compute-0 nova_compute[251992]: 2025-12-06 08:20:35.353 251996 DEBUG nova.compute.manager [req-86abcc97-1604-4cf6-886a-0ad14b7467cb req-885c5182-74dc-4455-8bc1-9031f64e60e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Refreshing instance network info cache due to event network-changed-fdb27d9b-f2d2-4cdc-9682-63a06b53b767. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:20:35 compute-0 nova_compute[251992]: 2025-12-06 08:20:35.354 251996 DEBUG oslo_concurrency.lockutils [req-86abcc97-1604-4cf6-886a-0ad14b7467cb req-885c5182-74dc-4455-8bc1-9031f64e60e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:20:35 compute-0 nova_compute[251992]: 2025-12-06 08:20:35.354 251996 DEBUG oslo_concurrency.lockutils [req-86abcc97-1604-4cf6-886a-0ad14b7467cb req-885c5182-74dc-4455-8bc1-9031f64e60e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:20:35 compute-0 nova_compute[251992]: 2025-12-06 08:20:35.354 251996 DEBUG nova.network.neutron [req-86abcc97-1604-4cf6-886a-0ad14b7467cb req-885c5182-74dc-4455-8bc1-9031f64e60e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Refreshing network info cache for port fdb27d9b-f2d2-4cdc-9682-63a06b53b767 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:20:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:35.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:35 compute-0 nova_compute[251992]: 2025-12-06 08:20:35.387 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:20:35 compute-0 nova_compute[251992]: 2025-12-06 08:20:35.722 251996 DEBUG nova.network.neutron [req-86abcc97-1604-4cf6-886a-0ad14b7467cb req-885c5182-74dc-4455-8bc1-9031f64e60e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:20:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3848: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.9 KiB/s rd, 2.5 KiB/s wr, 13 op/s
Dec 06 08:20:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:36.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:36 compute-0 nova_compute[251992]: 2025-12-06 08:20:36.523 251996 DEBUG nova.network.neutron [req-86abcc97-1604-4cf6-886a-0ad14b7467cb req-885c5182-74dc-4455-8bc1-9031f64e60e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:20:36 compute-0 nova_compute[251992]: 2025-12-06 08:20:36.542 251996 DEBUG oslo_concurrency.lockutils [req-86abcc97-1604-4cf6-886a-0ad14b7467cb req-885c5182-74dc-4455-8bc1-9031f64e60e7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:20:36 compute-0 nova_compute[251992]: 2025-12-06 08:20:36.543 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquired lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:20:36 compute-0 nova_compute[251992]: 2025-12-06 08:20:36.543 251996 DEBUG nova.network.neutron [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:20:36 compute-0 ceph-mon[74339]: pgmap v3847: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 2.0 KiB/s wr, 0 op/s
Dec 06 08:20:37 compute-0 sshd-session[406008]: Invalid user emcali from 45.135.232.92 port 60924
Dec 06 08:20:37 compute-0 nova_compute[251992]: 2025-12-06 08:20:37.273 251996 DEBUG nova.network.neutron [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:20:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:37.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:37 compute-0 sshd-session[406008]: Connection reset by invalid user emcali 45.135.232.92 port 60924 [preauth]
Dec 06 08:20:37 compute-0 ceph-mon[74339]: pgmap v3848: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.9 KiB/s rd, 2.5 KiB/s wr, 13 op/s
Dec 06 08:20:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3849: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 06 08:20:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:38.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.569 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.580 251996 DEBUG nova.network.neutron [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Updating instance_info_cache with network_info: [{"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.619 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Releasing lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.620 251996 DEBUG nova.compute.manager [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Instance network_info: |[{"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.622 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Start _get_guest_xml network_info=[{"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-51b925ec-146c-45cc-ba59-9db845a18e81', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '51b925ec-146c-45cc-ba59-9db845a18e81', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'f2b69bc0-d3ae-4f20-9026-421ff6537c3f', 'attached_at': '', 'detached_at': '', 'volume_id': '51b925ec-146c-45cc-ba59-9db845a18e81', 'serial': '51b925ec-146c-45cc-ba59-9db845a18e81'}, 'attachment_id': '0376b513-a863-4882-9990-f00c95971e30', 'guest_format': None, 'delete_on_termination': False, 'disk_bus': 'virtio', 'boot_index': 0, 'device_type': 'disk', 'mount_device': '/dev/vda', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.627 251996 WARNING nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.632 251996 DEBUG nova.virt.libvirt.host [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.633 251996 DEBUG nova.virt.libvirt.host [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.644 251996 DEBUG nova.virt.libvirt.host [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.644 251996 DEBUG nova.virt.libvirt.host [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.645 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.645 251996 DEBUG nova.virt.hardware [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.646 251996 DEBUG nova.virt.hardware [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.646 251996 DEBUG nova.virt.hardware [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.646 251996 DEBUG nova.virt.hardware [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.646 251996 DEBUG nova.virt.hardware [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.646 251996 DEBUG nova.virt.hardware [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.647 251996 DEBUG nova.virt.hardware [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.647 251996 DEBUG nova.virt.hardware [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.647 251996 DEBUG nova.virt.hardware [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.647 251996 DEBUG nova.virt.hardware [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.647 251996 DEBUG nova.virt.hardware [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.678 251996 DEBUG nova.storage.rbd_utils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image f2b69bc0-d3ae-4f20-9026-421ff6537c3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.682 251996 DEBUG oslo_concurrency.processutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:38 compute-0 ceph-mon[74339]: pgmap v3849: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 06 08:20:38 compute-0 nova_compute[251992]: 2025-12-06 08:20:38.727 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:20:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3733810263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.125 251996 DEBUG oslo_concurrency.processutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.186 251996 DEBUG nova.virt.libvirt.vif [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:20:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1178767715',display_name='tempest-TestVolumeBootPattern-server-1178767715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1178767715',id=214,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvUl3Ab4ESWezLZ9mehuTavMygXDhT0chVOH5OGNfzBJ6GphwodjSkpQcbaa1ADoOOfJ6+3BcKIVxorR3UxI6tyiW7Q3SFHkhHBjCjD54foFQ6i6sfCU/p7OcBbQ12cuw==',key_name='tempest-TestVolumeBootPattern-2075529576',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-945l9ppo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:20:28Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=f2b69bc0-d3ae-4f20-9026-421ff6537c3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.187 251996 DEBUG nova.network.os_vif_util [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.188 251996 DEBUG nova.network.os_vif_util [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:ce:9c,bridge_name='br-int',has_traffic_filtering=True,id=fdb27d9b-f2d2-4cdc-9682-63a06b53b767,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb27d9b-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.190 251996 DEBUG nova.objects.instance [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'pci_devices' on Instance uuid f2b69bc0-d3ae-4f20-9026-421ff6537c3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.221 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:20:39 compute-0 nova_compute[251992]:   <uuid>f2b69bc0-d3ae-4f20-9026-421ff6537c3f</uuid>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   <name>instance-000000d6</name>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <nova:name>tempest-TestVolumeBootPattern-server-1178767715</nova:name>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:20:38</nova:creationTime>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <nova:user uuid="8e8feb4540af4e2caa45a88a9202dbe2">tempest-TestVolumeBootPattern-97496240-project-member</nova:user>
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <nova:project uuid="4b2dc4b8729f446a9c7ac69ca446f71d">tempest-TestVolumeBootPattern-97496240</nova:project>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <nova:port uuid="fdb27d9b-f2d2-4cdc-9682-63a06b53b767">
Dec 06 08:20:39 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <system>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <entry name="serial">f2b69bc0-d3ae-4f20-9026-421ff6537c3f</entry>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <entry name="uuid">f2b69bc0-d3ae-4f20-9026-421ff6537c3f</entry>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     </system>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   <os>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   </os>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   <features>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   </features>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/f2b69bc0-d3ae-4f20-9026-421ff6537c3f_disk.config">
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       </source>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <source protocol="rbd" name="volumes/volume-51b925ec-146c-45cc-ba59-9db845a18e81">
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       </source>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:20:39 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <serial>51b925ec-146c-45cc-ba59-9db845a18e81</serial>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:75:ce:9c"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <target dev="tapfdb27d9b-f2"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/f2b69bc0-d3ae-4f20-9026-421ff6537c3f/console.log" append="off"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <video>
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     </video>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:20:39 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:20:39 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:20:39 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:20:39 compute-0 nova_compute[251992]: </domain>
Dec 06 08:20:39 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.223 251996 DEBUG nova.compute.manager [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Preparing to wait for external event network-vif-plugged-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.223 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.223 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.224 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.225 251996 DEBUG nova.virt.libvirt.vif [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:20:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1178767715',display_name='tempest-TestVolumeBootPattern-server-1178767715',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1178767715',id=214,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvUl3Ab4ESWezLZ9mehuTavMygXDhT0chVOH5OGNfzBJ6GphwodjSkpQcbaa1ADoOOfJ6+3BcKIVxorR3UxI6tyiW7Q3SFHkhHBjCjD54foFQ6i6sfCU/p7OcBbQ12cuw==',key_name='tempest-TestVolumeBootPattern-2075529576',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-945l9ppo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:20:28Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=f2b69bc0-d3ae-4f20-9026-421ff6537c3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.225 251996 DEBUG nova.network.os_vif_util [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.226 251996 DEBUG nova.network.os_vif_util [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:ce:9c,bridge_name='br-int',has_traffic_filtering=True,id=fdb27d9b-f2d2-4cdc-9682-63a06b53b767,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb27d9b-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.226 251996 DEBUG os_vif [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:ce:9c,bridge_name='br-int',has_traffic_filtering=True,id=fdb27d9b-f2d2-4cdc-9682-63a06b53b767,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb27d9b-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.227 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.228 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.228 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.234 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.234 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdb27d9b-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.234 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdb27d9b-f2, col_values=(('external_ids', {'iface-id': 'fdb27d9b-f2d2-4cdc-9682-63a06b53b767', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:ce:9c', 'vm-uuid': 'f2b69bc0-d3ae-4f20-9026-421ff6537c3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.236 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:39 compute-0 NetworkManager[48965]: <info>  [1765009239.2373] manager: (tapfdb27d9b-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.237 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.243 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.245 251996 INFO os_vif [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:ce:9c,bridge_name='br-int',has_traffic_filtering=True,id=fdb27d9b-f2d2-4cdc-9682-63a06b53b767,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb27d9b-f2')
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.305 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.305 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.306 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] No VIF found with MAC fa:16:3e:75:ce:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.306 251996 INFO nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Using config drive
Dec 06 08:20:39 compute-0 nova_compute[251992]: 2025-12-06 08:20:39.337 251996 DEBUG nova.storage.rbd_utils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image f2b69bc0-d3ae-4f20-9026-421ff6537c3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:20:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:39.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3850: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 06 08:20:39 compute-0 sshd-session[406011]: Connection reset by authenticating user root 45.135.232.92 port 60938 [preauth]
Dec 06 08:20:40 compute-0 nova_compute[251992]: 2025-12-06 08:20:40.347 251996 INFO nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Creating config drive at /var/lib/nova/instances/f2b69bc0-d3ae-4f20-9026-421ff6537c3f/disk.config
Dec 06 08:20:40 compute-0 nova_compute[251992]: 2025-12-06 08:20:40.352 251996 DEBUG oslo_concurrency.processutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f2b69bc0-d3ae-4f20-9026-421ff6537c3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpizi9li0c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:40 compute-0 nova_compute[251992]: 2025-12-06 08:20:40.495 251996 DEBUG oslo_concurrency.processutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f2b69bc0-d3ae-4f20-9026-421ff6537c3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpizi9li0c" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:20:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:40.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:40 compute-0 nova_compute[251992]: 2025-12-06 08:20:40.535 251996 DEBUG nova.storage.rbd_utils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] rbd image f2b69bc0-d3ae-4f20-9026-421ff6537c3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:20:40 compute-0 nova_compute[251992]: 2025-12-06 08:20:40.538 251996 DEBUG oslo_concurrency.processutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f2b69bc0-d3ae-4f20-9026-421ff6537c3f/disk.config f2b69bc0-d3ae-4f20-9026-421ff6537c3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:40 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3733810263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.027 251996 DEBUG oslo_concurrency.processutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f2b69bc0-d3ae-4f20-9026-421ff6537c3f/disk.config f2b69bc0-d3ae-4f20-9026-421ff6537c3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.029 251996 INFO nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Deleting local config drive /var/lib/nova/instances/f2b69bc0-d3ae-4f20-9026-421ff6537c3f/disk.config because it was imported into RBD.
Dec 06 08:20:41 compute-0 kernel: tapfdb27d9b-f2: entered promiscuous mode
Dec 06 08:20:41 compute-0 ovn_controller[147168]: 2025-12-06T08:20:41Z|00803|binding|INFO|Claiming lport fdb27d9b-f2d2-4cdc-9682-63a06b53b767 for this chassis.
Dec 06 08:20:41 compute-0 ovn_controller[147168]: 2025-12-06T08:20:41Z|00804|binding|INFO|fdb27d9b-f2d2-4cdc-9682-63a06b53b767: Claiming fa:16:3e:75:ce:9c 10.100.0.3
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.095 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:41 compute-0 NetworkManager[48965]: <info>  [1765009241.0970] manager: (tapfdb27d9b-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.098 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.102 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.109 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.121 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:ce:9c 10.100.0.3'], port_security=['fa:16:3e:75:ce:9c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f2b69bc0-d3ae-4f20-9026-421ff6537c3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8cd07b30-a335-4570-957e-3674d9a06120', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=fdb27d9b-f2d2-4cdc-9682-63a06b53b767) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.122 158118 INFO neutron.agent.ovn.metadata.agent [-] Port fdb27d9b-f2d2-4cdc-9682-63a06b53b767 in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a bound to our chassis
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.123 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.137 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[43dcddcd-2d02-456b-958e-4822b3476a0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.138 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb4ef1374-91 in ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:20:41 compute-0 systemd-machined[212986]: New machine qemu-95-instance-000000d6.
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.140 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb4ef1374-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.140 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee89e78-e791-476f-8b91-f042a1162fbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.141 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[16bb8b01-9a8c-4108-832e-7d2db209366d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-000000d6.
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.158 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[2dcff9ae-3411-4f72-9cc0-7256ba13903a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 systemd-udevd[406132]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:20:41 compute-0 ovn_controller[147168]: 2025-12-06T08:20:41Z|00805|binding|INFO|Setting lport fdb27d9b-f2d2-4cdc-9682-63a06b53b767 ovn-installed in OVS
Dec 06 08:20:41 compute-0 ovn_controller[147168]: 2025-12-06T08:20:41Z|00806|binding|INFO|Setting lport fdb27d9b-f2d2-4cdc-9682-63a06b53b767 up in Southbound
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.183 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[054556cc-b8be-4116-9815-7914160f6892]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 NetworkManager[48965]: <info>  [1765009241.1856] device (tapfdb27d9b-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.185 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:41 compute-0 NetworkManager[48965]: <info>  [1765009241.1871] device (tapfdb27d9b-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.218 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1293f6-6f2d-4abb-a7bc-67d9e9d1fe5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.225 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5198a575-3cff-4584-9834-57f9d7f66b90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 NetworkManager[48965]: <info>  [1765009241.2261] manager: (tapb4ef1374-90): new Veth device (/org/freedesktop/NetworkManager/Devices/383)
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.257 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[58804790-ba63-4f70-8e7c-f7446b897385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.259 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[fd443f6e-db67-4c94-974a-dd4d900197d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 NetworkManager[48965]: <info>  [1765009241.2868] device (tapb4ef1374-90): carrier: link connected
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.292 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0f6ea1-1043-4d33-9981-accb5cf23b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.311 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[a0faab51-4fc3-4bab-964f-5a32259764ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 951387, 'reachable_time': 33081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406162, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.327 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9f19e839-9ba3-4540-b4b1-6d510e77c5ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:d4b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 951387, 'tstamp': 951387}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406163, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.348 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0e6b00a2-3cb8-49fa-8906-ddd6f7a7d58d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4ef1374-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:d4:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 951387, 'reachable_time': 33081, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 406165, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:41.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.381 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[225be3da-c7dc-45ed-b103-e72d4133d4f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.442 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f1163f31-dc66-4054-b085-919155b044d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.444 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.445 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.445 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4ef1374-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:20:41 compute-0 NetworkManager[48965]: <info>  [1765009241.4476] manager: (tapb4ef1374-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Dec 06 08:20:41 compute-0 kernel: tapb4ef1374-90: entered promiscuous mode
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.451 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.454 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4ef1374-90, col_values=(('external_ids', {'iface-id': '32c82c25-6496-4edd-ba74-1791824b99ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.455 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:41 compute-0 ovn_controller[147168]: 2025-12-06T08:20:41Z|00807|binding|INFO|Releasing lport 32c82c25-6496-4edd-ba74-1791824b99ab from this chassis (sb_readonly=0)
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.456 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.457 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.458 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9d062aca-22af-48af-8e04-f8967eeead73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.459 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/b4ef1374-9c77-45a7-8776-50aa60c7d84a.pid.haproxy
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID b4ef1374-9c77-45a7-8776-50aa60c7d84a
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:20:41 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:20:41.459 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'env', 'PROCESS_TAG=haproxy-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b4ef1374-9c77-45a7-8776-50aa60c7d84a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.475 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.609 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009241.608408, f2b69bc0-d3ae-4f20-9026-421ff6537c3f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.609 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] VM Started (Lifecycle Event)
Dec 06 08:20:41 compute-0 podman[406239]: 2025-12-06 08:20:41.823136808 +0000 UTC m=+0.048432888 container create 0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 08:20:41 compute-0 systemd[1]: Started libpod-conmon-0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11.scope.
Dec 06 08:20:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:20:41 compute-0 podman[406239]: 2025-12-06 08:20:41.798236521 +0000 UTC m=+0.023532581 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:20:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efa1c5cd0454803620f14d518e9a71e4cbda7db2e904557ad9460ba7020c585a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:41 compute-0 ceph-mon[74339]: pgmap v3850: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 06 08:20:41 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/967755823' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:41 compute-0 podman[406239]: 2025-12-06 08:20:41.903321095 +0000 UTC m=+0.128617155 container init 0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 08:20:41 compute-0 podman[406239]: 2025-12-06 08:20:41.911267081 +0000 UTC m=+0.136563131 container start 0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.926 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:20:41 compute-0 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[406254]: [NOTICE]   (406258) : New worker (406260) forked
Dec 06 08:20:41 compute-0 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[406254]: [NOTICE]   (406258) : Loading success.
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.934 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009241.6095464, f2b69bc0-d3ae-4f20-9026-421ff6537c3f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:20:41 compute-0 nova_compute[251992]: 2025-12-06 08:20:41.934 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] VM Paused (Lifecycle Event)
Dec 06 08:20:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3851: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.082 251996 DEBUG nova.compute.manager [req-0fce00cc-1607-4fa1-b9d0-d6775f2edf9b req-63601e9e-1374-4c7c-b766-79e56dea1eab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Received event network-vif-plugged-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.082 251996 DEBUG oslo_concurrency.lockutils [req-0fce00cc-1607-4fa1-b9d0-d6775f2edf9b req-63601e9e-1374-4c7c-b766-79e56dea1eab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.083 251996 DEBUG oslo_concurrency.lockutils [req-0fce00cc-1607-4fa1-b9d0-d6775f2edf9b req-63601e9e-1374-4c7c-b766-79e56dea1eab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.083 251996 DEBUG oslo_concurrency.lockutils [req-0fce00cc-1607-4fa1-b9d0-d6775f2edf9b req-63601e9e-1374-4c7c-b766-79e56dea1eab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.083 251996 DEBUG nova.compute.manager [req-0fce00cc-1607-4fa1-b9d0-d6775f2edf9b req-63601e9e-1374-4c7c-b766-79e56dea1eab 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Processing event network-vif-plugged-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.084 251996 DEBUG nova.compute.manager [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.090 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.094 251996 INFO nova.virt.libvirt.driver [-] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Instance spawned successfully.
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.095 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.261 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.266 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009242.089036, f2b69bc0-d3ae-4f20-9026-421ff6537c3f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.267 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] VM Resumed (Lifecycle Event)
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.313 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.314 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.315 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.315 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.316 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.317 251996 DEBUG nova.virt.libvirt.driver [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.426 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.430 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:20:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:42.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.611 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.810 251996 INFO nova.compute.manager [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Took 10.75 seconds to spawn the instance on the hypervisor.
Dec 06 08:20:42 compute-0 nova_compute[251992]: 2025-12-06 08:20:42.811 251996 DEBUG nova.compute.manager [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:20:42 compute-0 ceph-mon[74339]: pgmap v3851: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 27 op/s
Dec 06 08:20:42 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2172302351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:20:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:20:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:20:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:20:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:20:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:20:43 compute-0 nova_compute[251992]: 2025-12-06 08:20:43.204 251996 INFO nova.compute.manager [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Took 15.50 seconds to build instance.
Dec 06 08:20:43 compute-0 nova_compute[251992]: 2025-12-06 08:20:43.224 251996 DEBUG oslo_concurrency.lockutils [None req-24a8e481-6714-4d80-bb64-b939a1e93e6f 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:43.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:43 compute-0 sshd-session[406074]: Connection reset by authenticating user root 45.135.232.92 port 60944 [preauth]
Dec 06 08:20:43 compute-0 nova_compute[251992]: 2025-12-06 08:20:43.780 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3852: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:20:44 compute-0 nova_compute[251992]: 2025-12-06 08:20:44.237 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:44 compute-0 sudo[406270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:44 compute-0 sudo[406270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:44 compute-0 sudo[406270]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:44.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:44 compute-0 nova_compute[251992]: 2025-12-06 08:20:44.546 251996 DEBUG nova.compute.manager [req-3e13c709-75fe-44ef-bb78-fde989f54a06 req-ec7dc1b5-1a28-4506-8a76-8b73a2610688 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Received event network-vif-plugged-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:20:44 compute-0 nova_compute[251992]: 2025-12-06 08:20:44.547 251996 DEBUG oslo_concurrency.lockutils [req-3e13c709-75fe-44ef-bb78-fde989f54a06 req-ec7dc1b5-1a28-4506-8a76-8b73a2610688 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:44 compute-0 nova_compute[251992]: 2025-12-06 08:20:44.547 251996 DEBUG oslo_concurrency.lockutils [req-3e13c709-75fe-44ef-bb78-fde989f54a06 req-ec7dc1b5-1a28-4506-8a76-8b73a2610688 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:44 compute-0 nova_compute[251992]: 2025-12-06 08:20:44.547 251996 DEBUG oslo_concurrency.lockutils [req-3e13c709-75fe-44ef-bb78-fde989f54a06 req-ec7dc1b5-1a28-4506-8a76-8b73a2610688 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:44 compute-0 nova_compute[251992]: 2025-12-06 08:20:44.547 251996 DEBUG nova.compute.manager [req-3e13c709-75fe-44ef-bb78-fde989f54a06 req-ec7dc1b5-1a28-4506-8a76-8b73a2610688 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] No waiting events found dispatching network-vif-plugged-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:20:44 compute-0 nova_compute[251992]: 2025-12-06 08:20:44.548 251996 WARNING nova.compute.manager [req-3e13c709-75fe-44ef-bb78-fde989f54a06 req-ec7dc1b5-1a28-4506-8a76-8b73a2610688 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Received unexpected event network-vif-plugged-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 for instance with vm_state active and task_state None.
Dec 06 08:20:44 compute-0 sudo[406295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:20:44 compute-0 sudo[406295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:20:44 compute-0 sudo[406295]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:45.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3853: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 14 KiB/s wr, 70 op/s
Dec 06 08:20:46 compute-0 ceph-mon[74339]: pgmap v3852: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:20:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:46.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:47.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:47 compute-0 ceph-mon[74339]: pgmap v3853: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 14 KiB/s wr, 70 op/s
Dec 06 08:20:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3854: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Dec 06 08:20:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:48.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:48 compute-0 nova_compute[251992]: 2025-12-06 08:20:48.781 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:49 compute-0 nova_compute[251992]: 2025-12-06 08:20:49.239 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:49 compute-0 NetworkManager[48965]: <info>  [1765009249.3422] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Dec 06 08:20:49 compute-0 NetworkManager[48965]: <info>  [1765009249.3430] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Dec 06 08:20:49 compute-0 nova_compute[251992]: 2025-12-06 08:20:49.341 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:49.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:49 compute-0 nova_compute[251992]: 2025-12-06 08:20:49.416 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:49 compute-0 ovn_controller[147168]: 2025-12-06T08:20:49Z|00808|binding|INFO|Releasing lport 32c82c25-6496-4edd-ba74-1791824b99ab from this chassis (sb_readonly=0)
Dec 06 08:20:49 compute-0 nova_compute[251992]: 2025-12-06 08:20:49.425 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:49 compute-0 ceph-mon[74339]: pgmap v3854: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Dec 06 08:20:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:49 compute-0 nova_compute[251992]: 2025-12-06 08:20:49.763 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:49 compute-0 nova_compute[251992]: 2025-12-06 08:20:49.917 251996 DEBUG nova.compute.manager [req-a2753b00-469f-49e9-a49e-01d7ff52efd5 req-18ccedfa-968a-4554-9a6d-1308fc9b9b99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Received event network-changed-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:20:49 compute-0 nova_compute[251992]: 2025-12-06 08:20:49.917 251996 DEBUG nova.compute.manager [req-a2753b00-469f-49e9-a49e-01d7ff52efd5 req-18ccedfa-968a-4554-9a6d-1308fc9b9b99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Refreshing instance network info cache due to event network-changed-fdb27d9b-f2d2-4cdc-9682-63a06b53b767. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:20:49 compute-0 nova_compute[251992]: 2025-12-06 08:20:49.918 251996 DEBUG oslo_concurrency.lockutils [req-a2753b00-469f-49e9-a49e-01d7ff52efd5 req-18ccedfa-968a-4554-9a6d-1308fc9b9b99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:20:49 compute-0 nova_compute[251992]: 2025-12-06 08:20:49.918 251996 DEBUG oslo_concurrency.lockutils [req-a2753b00-469f-49e9-a49e-01d7ff52efd5 req-18ccedfa-968a-4554-9a6d-1308fc9b9b99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:20:49 compute-0 nova_compute[251992]: 2025-12-06 08:20:49.918 251996 DEBUG nova.network.neutron [req-a2753b00-469f-49e9-a49e-01d7ff52efd5 req-18ccedfa-968a-4554-9a6d-1308fc9b9b99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Refreshing network info cache for port fdb27d9b-f2d2-4cdc-9682-63a06b53b767 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:20:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3855: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:20:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:50.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:51.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:51 compute-0 podman[406326]: 2025-12-06 08:20:51.432958209 +0000 UTC m=+0.089113302 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:20:51 compute-0 ceph-mon[74339]: pgmap v3855: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:20:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3856: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:20:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:52.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:52 compute-0 nova_compute[251992]: 2025-12-06 08:20:52.537 251996 DEBUG nova.network.neutron [req-a2753b00-469f-49e9-a49e-01d7ff52efd5 req-18ccedfa-968a-4554-9a6d-1308fc9b9b99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Updated VIF entry in instance network info cache for port fdb27d9b-f2d2-4cdc-9682-63a06b53b767. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:20:52 compute-0 nova_compute[251992]: 2025-12-06 08:20:52.538 251996 DEBUG nova.network.neutron [req-a2753b00-469f-49e9-a49e-01d7ff52efd5 req-18ccedfa-968a-4554-9a6d-1308fc9b9b99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Updating instance_info_cache with network_info: [{"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:20:52 compute-0 nova_compute[251992]: 2025-12-06 08:20:52.852 251996 DEBUG oslo_concurrency.lockutils [req-a2753b00-469f-49e9-a49e-01d7ff52efd5 req-18ccedfa-968a-4554-9a6d-1308fc9b9b99 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:20:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:20:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:53.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:20:53 compute-0 ceph-mon[74339]: pgmap v3856: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:20:53 compute-0 nova_compute[251992]: 2025-12-06 08:20:53.783 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3857: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:20:54 compute-0 nova_compute[251992]: 2025-12-06 08:20:54.242 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:20:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:54.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:20:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:55 compute-0 ovn_controller[147168]: 2025-12-06T08:20:55Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:ce:9c 10.100.0.3
Dec 06 08:20:55 compute-0 nova_compute[251992]: 2025-12-06 08:20:55.347 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:55.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:55 compute-0 ceph-mon[74339]: pgmap v3857: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:20:55 compute-0 nova_compute[251992]: 2025-12-06 08:20:55.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3858: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 23 KiB/s wr, 93 op/s
Dec 06 08:20:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:20:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:56.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:20:56 compute-0 nova_compute[251992]: 2025-12-06 08:20:56.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:20:56 compute-0 nova_compute[251992]: 2025-12-06 08:20:56.680 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:56 compute-0 nova_compute[251992]: 2025-12-06 08:20:56.680 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:56 compute-0 nova_compute[251992]: 2025-12-06 08:20:56.680 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:56 compute-0 nova_compute[251992]: 2025-12-06 08:20:56.680 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:20:56 compute-0 nova_compute[251992]: 2025-12-06 08:20:56.681 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:20:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2101064357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.126 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.244 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000d6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.245 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000d6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:20:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:20:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:57.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.444 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.445 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3912MB free_disk=20.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.445 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.446 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.507 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance f2b69bc0-d3ae-4f20-9026-421ff6537c3f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.507 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.507 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.521 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.536 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.536 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.550 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:20:57 compute-0 ceph-mon[74339]: pgmap v3858: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 23 KiB/s wr, 93 op/s
Dec 06 08:20:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2101064357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.573 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:20:57 compute-0 nova_compute[251992]: 2025-12-06 08:20:57.616 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:20:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3859: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:20:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:20:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1953930024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:58 compute-0 nova_compute[251992]: 2025-12-06 08:20:58.034 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:20:58 compute-0 nova_compute[251992]: 2025-12-06 08:20:58.039 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:20:58 compute-0 nova_compute[251992]: 2025-12-06 08:20:58.191 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:20:58 compute-0 podman[406400]: 2025-12-06 08:20:58.437452995 +0000 UTC m=+0.079468610 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 08:20:58 compute-0 podman[406401]: 2025-12-06 08:20:58.447157039 +0000 UTC m=+0.085485524 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:20:58 compute-0 nova_compute[251992]: 2025-12-06 08:20:58.499 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:20:58 compute-0 nova_compute[251992]: 2025-12-06 08:20:58.499 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:20:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:20:58.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1953930024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:20:58 compute-0 nova_compute[251992]: 2025-12-06 08:20:58.826 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:59 compute-0 nova_compute[251992]: 2025-12-06 08:20:59.244 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:20:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:20:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:20:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:20:59.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:20:59 compute-0 ceph-mon[74339]: pgmap v3859: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 73 op/s
Dec 06 08:20:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:20:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3860: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 501 KiB/s rd, 13 KiB/s wr, 41 op/s
Dec 06 08:21:00 compute-0 nova_compute[251992]: 2025-12-06 08:21:00.500 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:00.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2460462847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:01.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:01 compute-0 ceph-mon[74339]: pgmap v3860: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 501 KiB/s rd, 13 KiB/s wr, 41 op/s
Dec 06 08:21:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1718808240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3861: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 27 KiB/s wr, 44 op/s
Dec 06 08:21:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:02.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:02 compute-0 nova_compute[251992]: 2025-12-06 08:21:02.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:02 compute-0 nova_compute[251992]: 2025-12-06 08:21:02.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:21:02 compute-0 nova_compute[251992]: 2025-12-06 08:21:02.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:21:03 compute-0 nova_compute[251992]: 2025-12-06 08:21:03.301 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:21:03 compute-0 nova_compute[251992]: 2025-12-06 08:21:03.301 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:21:03 compute-0 nova_compute[251992]: 2025-12-06 08:21:03.302 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:21:03 compute-0 nova_compute[251992]: 2025-12-06 08:21:03.302 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f2b69bc0-d3ae-4f20-9026-421ff6537c3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:21:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:03.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:03 compute-0 ceph-mon[74339]: pgmap v3861: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 27 KiB/s wr, 44 op/s
Dec 06 08:21:03 compute-0 nova_compute[251992]: 2025-12-06 08:21:03.828 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:21:03.896 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:21:03.898 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:21:03.899 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:21:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3862: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 27 KiB/s wr, 44 op/s
Dec 06 08:21:04 compute-0 nova_compute[251992]: 2025-12-06 08:21:04.246 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:21:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:04.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:21:04 compute-0 ceph-mon[74339]: pgmap v3862: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 27 KiB/s wr, 44 op/s
Dec 06 08:21:04 compute-0 sudo[406443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:04 compute-0 sudo[406443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:04 compute-0 sudo[406443]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:04 compute-0 sudo[406468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:04 compute-0 sudo[406468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:04 compute-0 sudo[406468]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:05.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3863: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 598 KiB/s rd, 27 KiB/s wr, 45 op/s
Dec 06 08:21:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:06.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:07 compute-0 ceph-mon[74339]: pgmap v3863: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 598 KiB/s rd, 27 KiB/s wr, 45 op/s
Dec 06 08:21:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:07.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3864: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 474 KiB/s rd, 17 KiB/s wr, 28 op/s
Dec 06 08:21:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Dec 06 08:21:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Dec 06 08:21:08 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Dec 06 08:21:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:08.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:08 compute-0 nova_compute[251992]: 2025-12-06 08:21:08.830 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:09 compute-0 ceph-mon[74339]: pgmap v3864: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 474 KiB/s rd, 17 KiB/s wr, 28 op/s
Dec 06 08:21:09 compute-0 ceph-mon[74339]: osdmap e426: 3 total, 3 up, 3 in
Dec 06 08:21:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1292675319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4290350105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:21:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4290350105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:21:09 compute-0 nova_compute[251992]: 2025-12-06 08:21:09.249 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:21:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:09.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:21:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3866: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 197 KiB/s rd, 17 KiB/s wr, 7 op/s
Dec 06 08:21:10 compute-0 nova_compute[251992]: 2025-12-06 08:21:10.018 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Updating instance_info_cache with network_info: [{"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:21:10 compute-0 nova_compute[251992]: 2025-12-06 08:21:10.057 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:21:10 compute-0 nova_compute[251992]: 2025-12-06 08:21:10.058 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:21:10 compute-0 nova_compute[251992]: 2025-12-06 08:21:10.058 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:10 compute-0 nova_compute[251992]: 2025-12-06 08:21:10.058 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:21:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:10.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:21:10 compute-0 nova_compute[251992]: 2025-12-06 08:21:10.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:11 compute-0 ceph-mon[74339]: pgmap v3866: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 197 KiB/s rd, 17 KiB/s wr, 7 op/s
Dec 06 08:21:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:21:11.380 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=102, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=101) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:21:11 compute-0 nova_compute[251992]: 2025-12-06 08:21:11.380 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:11 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:21:11.381 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:21:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:11.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3867: 305 pgs: 305 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 280 KiB/s rd, 1.0 MiB/s wr, 54 op/s
Dec 06 08:21:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:12.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:21:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:21:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:21:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:21:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:21:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:21:13 compute-0 ceph-mon[74339]: pgmap v3867: 305 pgs: 305 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 280 KiB/s rd, 1.0 MiB/s wr, 54 op/s
Dec 06 08:21:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:13.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:13 compute-0 nova_compute[251992]: 2025-12-06 08:21:13.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:13 compute-0 nova_compute[251992]: 2025-12-06 08:21:13.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:13 compute-0 nova_compute[251992]: 2025-12-06 08:21:13.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:21:13 compute-0 nova_compute[251992]: 2025-12-06 08:21:13.831 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3868: 305 pgs: 305 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 280 KiB/s rd, 1.0 MiB/s wr, 54 op/s
Dec 06 08:21:14 compute-0 nova_compute[251992]: 2025-12-06 08:21:14.251 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:14.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:15 compute-0 ceph-mon[74339]: pgmap v3868: 305 pgs: 305 active+clean; 232 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 280 KiB/s rd, 1.0 MiB/s wr, 54 op/s
Dec 06 08:21:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:21:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:15.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:21:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3869: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 06 08:21:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:16.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:17 compute-0 ceph-mon[74339]: pgmap v3869: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 258 KiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 06 08:21:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:17.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3870: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 2.1 MiB/s wr, 174 op/s
Dec 06 08:21:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:18.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:21:18
Dec 06 08:21:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:21:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:21:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root']
Dec 06 08:21:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:21:18 compute-0 nova_compute[251992]: 2025-12-06 08:21:18.832 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:19 compute-0 nova_compute[251992]: 2025-12-06 08:21:19.254 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:21:19.384 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '102'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:21:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:19.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:19 compute-0 ceph-mon[74339]: pgmap v3870: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 2.1 MiB/s wr, 174 op/s
Dec 06 08:21:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3871: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 168 KiB/s rd, 1.8 MiB/s wr, 146 op/s
Dec 06 08:21:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:20.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:21 compute-0 ceph-mon[74339]: pgmap v3871: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 168 KiB/s rd, 1.8 MiB/s wr, 146 op/s
Dec 06 08:21:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:21.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 216 KiB/s rd, 1.8 MiB/s wr, 226 op/s
Dec 06 08:21:22 compute-0 podman[406502]: 2025-12-06 08:21:22.429574358 +0000 UTC m=+0.083906881 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller)
Dec 06 08:21:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:22.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:23 compute-0 ceph-mon[74339]: pgmap v3872: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 216 KiB/s rd, 1.8 MiB/s wr, 226 op/s
Dec 06 08:21:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:23.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:21:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:21:23 compute-0 nova_compute[251992]: 2025-12-06 08:21:23.833 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 964 KiB/s wr, 185 op/s
Dec 06 08:21:24 compute-0 nova_compute[251992]: 2025-12-06 08:21:24.255 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:24.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:24 compute-0 sudo[406527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:24 compute-0 sudo[406527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:24 compute-0 sudo[406527]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:24 compute-0 sudo[406552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:24 compute-0 sudo[406552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:24 compute-0 sudo[406552]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:25 compute-0 ceph-mon[74339]: pgmap v3873: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 964 KiB/s wr, 185 op/s
Dec 06 08:21:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:21:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:25.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:21:25 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 965 KiB/s wr, 185 op/s
Dec 06 08:21:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/359431386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:21:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:26.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0043317353550158194 of space, bias 1.0, pg target 1.2995206065047458 quantized to 32 (current 32)
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:21:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 08:21:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:21:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:21:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:21:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:21:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:21:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:27.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:27 compute-0 ceph-mon[74339]: pgmap v3874: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 965 KiB/s wr, 185 op/s
Dec 06 08:21:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/18612945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/190565723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:21:27 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 71 KiB/s rd, 3.2 KiB/s wr, 117 op/s
Dec 06 08:21:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:28.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:28 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:21:28 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2345884700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:21:28 compute-0 sudo[406579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:28 compute-0 sudo[406579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:28 compute-0 sudo[406579]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:28 compute-0 podman[406603]: 2025-12-06 08:21:28.739817391 +0000 UTC m=+0.049770082 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 08:21:28 compute-0 sudo[406616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:21:28 compute-0 sudo[406616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:28 compute-0 sudo[406616]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:28 compute-0 podman[406604]: 2025-12-06 08:21:28.75009181 +0000 UTC m=+0.060363921 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:21:28 compute-0 sudo[406663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:28 compute-0 sudo[406663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:28 compute-0 sudo[406663]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:28 compute-0 nova_compute[251992]: 2025-12-06 08:21:28.834 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:28 compute-0 sudo[406688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 08:21:28 compute-0 sudo[406688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:29 compute-0 ceph-mon[74339]: pgmap v3875: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 71 KiB/s rd, 3.2 KiB/s wr, 117 op/s
Dec 06 08:21:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2345884700' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:21:29 compute-0 sudo[406688]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:21:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:21:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:29 compute-0 nova_compute[251992]: 2025-12-06 08:21:29.257 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:29 compute-0 sudo[406734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:29 compute-0 sudo[406734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:29 compute-0 sudo[406734]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:29 compute-0 sudo[406759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:21:29 compute-0 sudo[406759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:29 compute-0 sudo[406759]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:29 compute-0 sudo[406784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:29 compute-0 sudo[406784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:29 compute-0 sudo[406784]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:29.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:29 compute-0 sudo[406809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:21:29 compute-0 sudo[406809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:29 compute-0 sudo[406809]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:29 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 84 op/s
Dec 06 08:21:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:30.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:21:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:21:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:21:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:31.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:21:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:21:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:21:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:21:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:21:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:21:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c5552041-b095-4b0f-a513-1451109c9abe does not exist
Dec 06 08:21:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9de5bf1b-2c21-4f29-bb9f-8711b833be0f does not exist
Dec 06 08:21:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a7b7c172-35dc-42a3-8ed8-20351c5a84ca does not exist
Dec 06 08:21:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:21:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:21:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:21:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:21:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:21:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:21:31 compute-0 ceph-mon[74339]: pgmap v3876: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 84 op/s
Dec 06 08:21:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:21:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:21:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:21:31 compute-0 sudo[406866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:31 compute-0 sudo[406866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:31 compute-0 sudo[406866]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:31 compute-0 sudo[406891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:21:31 compute-0 sudo[406891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:31 compute-0 sudo[406891]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:31 compute-0 sudo[406916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:31 compute-0 sudo[406916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:31 compute-0 sudo[406916]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:31 compute-0 sudo[406941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:21:31 compute-0 sudo[406941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:31 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 491 KiB/s rd, 16 KiB/s wr, 109 op/s
Dec 06 08:21:32 compute-0 podman[407007]: 2025-12-06 08:21:32.154933866 +0000 UTC m=+0.041557899 container create ad6a92184e13d56e78a16c66702ace53fa8b6cfe7d668edac24ffc33d84d8581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:21:32 compute-0 systemd[1]: Started libpod-conmon-ad6a92184e13d56e78a16c66702ace53fa8b6cfe7d668edac24ffc33d84d8581.scope.
Dec 06 08:21:32 compute-0 podman[407007]: 2025-12-06 08:21:32.135889419 +0000 UTC m=+0.022513492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:21:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:21:32 compute-0 podman[407007]: 2025-12-06 08:21:32.248263212 +0000 UTC m=+0.134887275 container init ad6a92184e13d56e78a16c66702ace53fa8b6cfe7d668edac24ffc33d84d8581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hawking, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 08:21:32 compute-0 podman[407007]: 2025-12-06 08:21:32.255521708 +0000 UTC m=+0.142145751 container start ad6a92184e13d56e78a16c66702ace53fa8b6cfe7d668edac24ffc33d84d8581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:21:32 compute-0 podman[407007]: 2025-12-06 08:21:32.258007136 +0000 UTC m=+0.144631179 container attach ad6a92184e13d56e78a16c66702ace53fa8b6cfe7d668edac24ffc33d84d8581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hawking, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:21:32 compute-0 elastic_hawking[407023]: 167 167
Dec 06 08:21:32 compute-0 systemd[1]: libpod-ad6a92184e13d56e78a16c66702ace53fa8b6cfe7d668edac24ffc33d84d8581.scope: Deactivated successfully.
Dec 06 08:21:32 compute-0 conmon[407023]: conmon ad6a92184e13d56e78a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad6a92184e13d56e78a16c66702ace53fa8b6cfe7d668edac24ffc33d84d8581.scope/container/memory.events
Dec 06 08:21:32 compute-0 podman[407007]: 2025-12-06 08:21:32.262137899 +0000 UTC m=+0.148761952 container died ad6a92184e13d56e78a16c66702ace53fa8b6cfe7d668edac24ffc33d84d8581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hawking, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:21:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-957519e656c4c1ff00c24948880fa872ad8050276e03c2059d2b8601b9f415ae-merged.mount: Deactivated successfully.
Dec 06 08:21:32 compute-0 podman[407007]: 2025-12-06 08:21:32.294823956 +0000 UTC m=+0.181447989 container remove ad6a92184e13d56e78a16c66702ace53fa8b6cfe7d668edac24ffc33d84d8581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:21:32 compute-0 systemd[1]: libpod-conmon-ad6a92184e13d56e78a16c66702ace53fa8b6cfe7d668edac24ffc33d84d8581.scope: Deactivated successfully.
Dec 06 08:21:32 compute-0 podman[407045]: 2025-12-06 08:21:32.452360737 +0000 UTC m=+0.041421437 container create 5b43f942b408e6e6a05add3df870066ca529658e783c51f2a65437fa274444c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_allen, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:21:32 compute-0 systemd[1]: Started libpod-conmon-5b43f942b408e6e6a05add3df870066ca529658e783c51f2a65437fa274444c9.scope.
Dec 06 08:21:32 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f2651b24c80ad7bda5722edad02f329925030239f1507051f25d932b7b64a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f2651b24c80ad7bda5722edad02f329925030239f1507051f25d932b7b64a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f2651b24c80ad7bda5722edad02f329925030239f1507051f25d932b7b64a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f2651b24c80ad7bda5722edad02f329925030239f1507051f25d932b7b64a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f2651b24c80ad7bda5722edad02f329925030239f1507051f25d932b7b64a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:32 compute-0 podman[407045]: 2025-12-06 08:21:32.435176339 +0000 UTC m=+0.024237059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:21:32 compute-0 podman[407045]: 2025-12-06 08:21:32.538253099 +0000 UTC m=+0.127313809 container init 5b43f942b408e6e6a05add3df870066ca529658e783c51f2a65437fa274444c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 08:21:32 compute-0 podman[407045]: 2025-12-06 08:21:32.54601331 +0000 UTC m=+0.135074010 container start 5b43f942b408e6e6a05add3df870066ca529658e783c51f2a65437fa274444c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_allen, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 08:21:32 compute-0 podman[407045]: 2025-12-06 08:21:32.548993121 +0000 UTC m=+0.138053821 container attach 5b43f942b408e6e6a05add3df870066ca529658e783c51f2a65437fa274444c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:21:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:32.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:21:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:21:33 compute-0 elastic_allen[407062]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:21:33 compute-0 elastic_allen[407062]: --> relative data size: 1.0
Dec 06 08:21:33 compute-0 elastic_allen[407062]: --> All data devices are unavailable
Dec 06 08:21:33 compute-0 systemd[1]: libpod-5b43f942b408e6e6a05add3df870066ca529658e783c51f2a65437fa274444c9.scope: Deactivated successfully.
Dec 06 08:21:33 compute-0 podman[407045]: 2025-12-06 08:21:33.351026969 +0000 UTC m=+0.940087699 container died 5b43f942b408e6e6a05add3df870066ca529658e783c51f2a65437fa274444c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_allen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:21:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-97f2651b24c80ad7bda5722edad02f329925030239f1507051f25d932b7b64a1-merged.mount: Deactivated successfully.
Dec 06 08:21:33 compute-0 podman[407045]: 2025-12-06 08:21:33.408339637 +0000 UTC m=+0.997400337 container remove 5b43f942b408e6e6a05add3df870066ca529658e783c51f2a65437fa274444c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_allen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:21:33 compute-0 systemd[1]: libpod-conmon-5b43f942b408e6e6a05add3df870066ca529658e783c51f2a65437fa274444c9.scope: Deactivated successfully.
Dec 06 08:21:33 compute-0 sudo[406941]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:21:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:33.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:21:33 compute-0 sudo[407090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:33 compute-0 sudo[407090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:33 compute-0 sudo[407090]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:33 compute-0 sudo[407115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:21:33 compute-0 sudo[407115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:33 compute-0 sudo[407115]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:33 compute-0 sudo[407140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:33 compute-0 sudo[407140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:33 compute-0 sudo[407140]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:33 compute-0 sudo[407165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:21:33 compute-0 sudo[407165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:33 compute-0 ceph-mon[74339]: pgmap v3877: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 491 KiB/s rd, 16 KiB/s wr, 109 op/s
Dec 06 08:21:33 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3905464386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:21:33 compute-0 nova_compute[251992]: 2025-12-06 08:21:33.836 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:33 compute-0 podman[407229]: 2025-12-06 08:21:33.968991367 +0000 UTC m=+0.035390943 container create cda5abf5d2c61f793595016e51c62a5ec4c153275081e6960d965b21a1510615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_varahamihira, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec 06 08:21:33 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 442 KiB/s rd, 16 KiB/s wr, 28 op/s
Dec 06 08:21:34 compute-0 systemd[1]: Started libpod-conmon-cda5abf5d2c61f793595016e51c62a5ec4c153275081e6960d965b21a1510615.scope.
Dec 06 08:21:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:21:34 compute-0 podman[407229]: 2025-12-06 08:21:34.032894073 +0000 UTC m=+0.099293669 container init cda5abf5d2c61f793595016e51c62a5ec4c153275081e6960d965b21a1510615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:21:34 compute-0 podman[407229]: 2025-12-06 08:21:34.040375556 +0000 UTC m=+0.106775132 container start cda5abf5d2c61f793595016e51c62a5ec4c153275081e6960d965b21a1510615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:21:34 compute-0 podman[407229]: 2025-12-06 08:21:34.04308717 +0000 UTC m=+0.109486746 container attach cda5abf5d2c61f793595016e51c62a5ec4c153275081e6960d965b21a1510615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_varahamihira, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 08:21:34 compute-0 goofy_varahamihira[407245]: 167 167
Dec 06 08:21:34 compute-0 systemd[1]: libpod-cda5abf5d2c61f793595016e51c62a5ec4c153275081e6960d965b21a1510615.scope: Deactivated successfully.
Dec 06 08:21:34 compute-0 podman[407229]: 2025-12-06 08:21:34.046066341 +0000 UTC m=+0.112465917 container died cda5abf5d2c61f793595016e51c62a5ec4c153275081e6960d965b21a1510615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_varahamihira, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:21:34 compute-0 podman[407229]: 2025-12-06 08:21:33.955061399 +0000 UTC m=+0.021460995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:21:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fa1b95893b5b7423d1349a3c7db5e66dd20d9dc6b22f1e228819833780f6db4-merged.mount: Deactivated successfully.
Dec 06 08:21:34 compute-0 podman[407229]: 2025-12-06 08:21:34.077494585 +0000 UTC m=+0.143894161 container remove cda5abf5d2c61f793595016e51c62a5ec4c153275081e6960d965b21a1510615 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_varahamihira, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 08:21:34 compute-0 systemd[1]: libpod-conmon-cda5abf5d2c61f793595016e51c62a5ec4c153275081e6960d965b21a1510615.scope: Deactivated successfully.
Dec 06 08:21:34 compute-0 podman[407269]: 2025-12-06 08:21:34.244649815 +0000 UTC m=+0.042103664 container create c5542f05212fae08c9d1206d5d84f98e96a2a853eb57cdf0aec869a9732bddb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 06 08:21:34 compute-0 nova_compute[251992]: 2025-12-06 08:21:34.259 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:34 compute-0 systemd[1]: Started libpod-conmon-c5542f05212fae08c9d1206d5d84f98e96a2a853eb57cdf0aec869a9732bddb4.scope.
Dec 06 08:21:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:21:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a14db58918757f3f51df2ddcfd9fb0c01d728e652eda2988125693e7b48d73f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a14db58918757f3f51df2ddcfd9fb0c01d728e652eda2988125693e7b48d73f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a14db58918757f3f51df2ddcfd9fb0c01d728e652eda2988125693e7b48d73f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a14db58918757f3f51df2ddcfd9fb0c01d728e652eda2988125693e7b48d73f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:34 compute-0 podman[407269]: 2025-12-06 08:21:34.226367849 +0000 UTC m=+0.023821688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:21:34 compute-0 podman[407269]: 2025-12-06 08:21:34.327706232 +0000 UTC m=+0.125160061 container init c5542f05212fae08c9d1206d5d84f98e96a2a853eb57cdf0aec869a9732bddb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 08:21:34 compute-0 podman[407269]: 2025-12-06 08:21:34.333011946 +0000 UTC m=+0.130465775 container start c5542f05212fae08c9d1206d5d84f98e96a2a853eb57cdf0aec869a9732bddb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:21:34 compute-0 podman[407269]: 2025-12-06 08:21:34.33609497 +0000 UTC m=+0.133548809 container attach c5542f05212fae08c9d1206d5d84f98e96a2a853eb57cdf0aec869a9732bddb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:21:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:21:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:34.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:21:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:34 compute-0 ceph-mon[74339]: pgmap v3878: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 442 KiB/s rd, 16 KiB/s wr, 28 op/s
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]: {
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:     "0": [
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:         {
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             "devices": [
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "/dev/loop3"
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             ],
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             "lv_name": "ceph_lv0",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             "lv_size": "7511998464",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             "name": "ceph_lv0",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             "tags": {
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "ceph.cluster_name": "ceph",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "ceph.crush_device_class": "",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "ceph.encrypted": "0",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "ceph.osd_id": "0",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "ceph.type": "block",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:                 "ceph.vdo": "0"
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             },
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             "type": "block",
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:             "vg_name": "ceph_vg0"
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:         }
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]:     ]
Dec 06 08:21:35 compute-0 pensive_keldysh[407286]: }
Dec 06 08:21:35 compute-0 systemd[1]: libpod-c5542f05212fae08c9d1206d5d84f98e96a2a853eb57cdf0aec869a9732bddb4.scope: Deactivated successfully.
Dec 06 08:21:35 compute-0 podman[407295]: 2025-12-06 08:21:35.134538651 +0000 UTC m=+0.026862502 container died c5542f05212fae08c9d1206d5d84f98e96a2a853eb57cdf0aec869a9732bddb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:21:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a14db58918757f3f51df2ddcfd9fb0c01d728e652eda2988125693e7b48d73f-merged.mount: Deactivated successfully.
Dec 06 08:21:35 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:21:35 compute-0 podman[407295]: 2025-12-06 08:21:35.181194828 +0000 UTC m=+0.073518669 container remove c5542f05212fae08c9d1206d5d84f98e96a2a853eb57cdf0aec869a9732bddb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 08:21:35 compute-0 systemd[1]: libpod-conmon-c5542f05212fae08c9d1206d5d84f98e96a2a853eb57cdf0aec869a9732bddb4.scope: Deactivated successfully.
Dec 06 08:21:35 compute-0 sudo[407165]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:35 compute-0 sudo[407312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:35 compute-0 sudo[407312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:35 compute-0 sudo[407312]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:35 compute-0 sudo[407337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:21:35 compute-0 sudo[407337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:35 compute-0 sudo[407337]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:35 compute-0 sudo[407362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:35 compute-0 sudo[407362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:35 compute-0 sudo[407362]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:21:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:35.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:21:35 compute-0 sudo[407387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:21:35 compute-0 sudo[407387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:35 compute-0 podman[407452]: 2025-12-06 08:21:35.79674203 +0000 UTC m=+0.044177811 container create ffef665f1e1de60cf100078a6c44a8b7e9edefb1d58ef221c33685869f004f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:21:35 compute-0 systemd[1]: Started libpod-conmon-ffef665f1e1de60cf100078a6c44a8b7e9edefb1d58ef221c33685869f004f93.scope.
Dec 06 08:21:35 compute-0 podman[407452]: 2025-12-06 08:21:35.775853593 +0000 UTC m=+0.023289394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:21:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:21:35 compute-0 podman[407452]: 2025-12-06 08:21:35.888910084 +0000 UTC m=+0.136345875 container init ffef665f1e1de60cf100078a6c44a8b7e9edefb1d58ef221c33685869f004f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 08:21:35 compute-0 podman[407452]: 2025-12-06 08:21:35.898597927 +0000 UTC m=+0.146033708 container start ffef665f1e1de60cf100078a6c44a8b7e9edefb1d58ef221c33685869f004f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:21:35 compute-0 podman[407452]: 2025-12-06 08:21:35.902338409 +0000 UTC m=+0.149774270 container attach ffef665f1e1de60cf100078a6c44a8b7e9edefb1d58ef221c33685869f004f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 08:21:35 compute-0 bold_mirzakhani[407469]: 167 167
Dec 06 08:21:35 compute-0 systemd[1]: libpod-ffef665f1e1de60cf100078a6c44a8b7e9edefb1d58ef221c33685869f004f93.scope: Deactivated successfully.
Dec 06 08:21:35 compute-0 conmon[407469]: conmon ffef665f1e1de60cf100 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ffef665f1e1de60cf100078a6c44a8b7e9edefb1d58ef221c33685869f004f93.scope/container/memory.events
Dec 06 08:21:35 compute-0 podman[407452]: 2025-12-06 08:21:35.906616155 +0000 UTC m=+0.154051946 container died ffef665f1e1de60cf100078a6c44a8b7e9edefb1d58ef221c33685869f004f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 08:21:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb59f9ac136ae98e14ad2f914a0b01341447756ef8c73b81b224426e2d6f868c-merged.mount: Deactivated successfully.
Dec 06 08:21:35 compute-0 podman[407452]: 2025-12-06 08:21:35.945979555 +0000 UTC m=+0.193415336 container remove ffef665f1e1de60cf100078a6c44a8b7e9edefb1d58ef221c33685869f004f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 08:21:35 compute-0 systemd[1]: libpod-conmon-ffef665f1e1de60cf100078a6c44a8b7e9edefb1d58ef221c33685869f004f93.scope: Deactivated successfully.
Dec 06 08:21:35 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 32 KiB/s wr, 88 op/s
Dec 06 08:21:36 compute-0 podman[407492]: 2025-12-06 08:21:36.113184246 +0000 UTC m=+0.041903599 container create e9939f19421761062cca7ddcaa6f749ce88c47f8c2acb63a5c5fedf9a2e28381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meninsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 08:21:36 compute-0 systemd[1]: Started libpod-conmon-e9939f19421761062cca7ddcaa6f749ce88c47f8c2acb63a5c5fedf9a2e28381.scope.
Dec 06 08:21:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c125e856a41374e2ffe605f0293661fef53d8fe60710c14e12e36d1ee9c7dbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c125e856a41374e2ffe605f0293661fef53d8fe60710c14e12e36d1ee9c7dbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c125e856a41374e2ffe605f0293661fef53d8fe60710c14e12e36d1ee9c7dbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c125e856a41374e2ffe605f0293661fef53d8fe60710c14e12e36d1ee9c7dbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:21:36 compute-0 podman[407492]: 2025-12-06 08:21:36.094935291 +0000 UTC m=+0.023654664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:21:36 compute-0 podman[407492]: 2025-12-06 08:21:36.197313242 +0000 UTC m=+0.126032615 container init e9939f19421761062cca7ddcaa6f749ce88c47f8c2acb63a5c5fedf9a2e28381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meninsky, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:21:36 compute-0 podman[407492]: 2025-12-06 08:21:36.203788108 +0000 UTC m=+0.132507451 container start e9939f19421761062cca7ddcaa6f749ce88c47f8c2acb63a5c5fedf9a2e28381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meninsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:21:36 compute-0 podman[407492]: 2025-12-06 08:21:36.207052137 +0000 UTC m=+0.135771490 container attach e9939f19421761062cca7ddcaa6f749ce88c47f8c2acb63a5c5fedf9a2e28381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meninsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:21:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:36.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:36 compute-0 peaceful_meninsky[407508]: {
Dec 06 08:21:36 compute-0 peaceful_meninsky[407508]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:21:36 compute-0 peaceful_meninsky[407508]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:21:36 compute-0 peaceful_meninsky[407508]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:21:36 compute-0 peaceful_meninsky[407508]:         "osd_id": 0,
Dec 06 08:21:36 compute-0 peaceful_meninsky[407508]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:21:36 compute-0 peaceful_meninsky[407508]:         "type": "bluestore"
Dec 06 08:21:36 compute-0 peaceful_meninsky[407508]:     }
Dec 06 08:21:36 compute-0 peaceful_meninsky[407508]: }
Dec 06 08:21:37 compute-0 systemd[1]: libpod-e9939f19421761062cca7ddcaa6f749ce88c47f8c2acb63a5c5fedf9a2e28381.scope: Deactivated successfully.
Dec 06 08:21:37 compute-0 podman[407492]: 2025-12-06 08:21:37.026357264 +0000 UTC m=+0.955076617 container died e9939f19421761062cca7ddcaa6f749ce88c47f8c2acb63a5c5fedf9a2e28381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meninsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Dec 06 08:21:37 compute-0 ceph-mon[74339]: pgmap v3879: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 32 KiB/s wr, 88 op/s
Dec 06 08:21:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c125e856a41374e2ffe605f0293661fef53d8fe60710c14e12e36d1ee9c7dbc-merged.mount: Deactivated successfully.
Dec 06 08:21:37 compute-0 podman[407492]: 2025-12-06 08:21:37.106184693 +0000 UTC m=+1.034904046 container remove e9939f19421761062cca7ddcaa6f749ce88c47f8c2acb63a5c5fedf9a2e28381 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:21:37 compute-0 systemd[1]: libpod-conmon-e9939f19421761062cca7ddcaa6f749ce88c47f8c2acb63a5c5fedf9a2e28381.scope: Deactivated successfully.
Dec 06 08:21:37 compute-0 sudo[407387]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:21:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:21:37 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:37 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8fd1aff3-dcc6-4872-a562-87ad26e98658 does not exist
Dec 06 08:21:37 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 87a6383e-4dc4-4c89-9eb4-aa5e28d7dac0 does not exist
Dec 06 08:21:37 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 243ca0d4-3fed-4eb6-b939-2545ee0154ab does not exist
Dec 06 08:21:37 compute-0 sudo[407544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:37 compute-0 sudo[407544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:37 compute-0 sudo[407544]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:37 compute-0 sudo[407569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:21:37 compute-0 sudo[407569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:37 compute-0 sudo[407569]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:21:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:37.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:21:37 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 106 op/s
Dec 06 08:21:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:21:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:38.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:38 compute-0 nova_compute[251992]: 2025-12-06 08:21:38.837 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:39 compute-0 nova_compute[251992]: 2025-12-06 08:21:39.261 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:39 compute-0 ceph-mon[74339]: pgmap v3880: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 106 op/s
Dec 06 08:21:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:39.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:39 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 32 KiB/s wr, 103 op/s
Dec 06 08:21:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:40.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:41 compute-0 ceph-mon[74339]: pgmap v3881: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 32 KiB/s wr, 103 op/s
Dec 06 08:21:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:41.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:41 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 32 KiB/s wr, 153 op/s
Dec 06 08:21:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:42.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:21:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:21:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:21:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:21:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:21:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:21:43 compute-0 ceph-mon[74339]: pgmap v3882: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 32 KiB/s wr, 153 op/s
Dec 06 08:21:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:43.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:43 compute-0 nova_compute[251992]: 2025-12-06 08:21:43.874 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:43 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 19 KiB/s wr, 128 op/s
Dec 06 08:21:44 compute-0 nova_compute[251992]: 2025-12-06 08:21:44.263 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:44.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:44 compute-0 sudo[407597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:44 compute-0 sudo[407597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:44 compute-0 sudo[407597]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:45 compute-0 sudo[407622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:21:45 compute-0 sudo[407622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:21:45 compute-0 sudo[407622]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:45.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:45 compute-0 ceph-mon[74339]: pgmap v3883: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 19 KiB/s wr, 128 op/s
Dec 06 08:21:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/14057990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:45 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 305 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 168 op/s
Dec 06 08:21:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:46.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2852534258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:21:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:47.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:21:47 compute-0 ceph-mon[74339]: pgmap v3884: 305 pgs: 305 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 168 op/s
Dec 06 08:21:47 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Dec 06 08:21:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:48.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:48 compute-0 nova_compute[251992]: 2025-12-06 08:21:48.875 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:49 compute-0 nova_compute[251992]: 2025-12-06 08:21:49.265 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:49.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:49 compute-0 ceph-mon[74339]: pgmap v3885: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Dec 06 08:21:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:49 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Dec 06 08:21:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:50.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.633602) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310633674, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 2139, "num_deletes": 253, "total_data_size": 3787517, "memory_usage": 3853552, "flush_reason": "Manual Compaction"}
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310654515, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 3696882, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77526, "largest_seqno": 79663, "table_properties": {"data_size": 3687334, "index_size": 5977, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19993, "raw_average_key_size": 20, "raw_value_size": 3668179, "raw_average_value_size": 3762, "num_data_blocks": 261, "num_entries": 975, "num_filter_entries": 975, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009105, "oldest_key_time": 1765009105, "file_creation_time": 1765009310, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 20971 microseconds, and 8047 cpu microseconds.
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.654576) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 3696882 bytes OK
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.654604) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.656023) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.656035) EVENT_LOG_v1 {"time_micros": 1765009310656030, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.656051) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 3778867, prev total WAL file size 3778867, number of live WAL files 2.
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.657046) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(3610KB)], [176(11MB)]
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310657144, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 15452660, "oldest_snapshot_seqno": -1}
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 11279 keys, 13444551 bytes, temperature: kUnknown
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310733573, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 13444551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13373388, "index_size": 41866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 298572, "raw_average_key_size": 26, "raw_value_size": 13177873, "raw_average_value_size": 1168, "num_data_blocks": 1582, "num_entries": 11279, "num_filter_entries": 11279, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765009310, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.734002) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 13444551 bytes
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.735125) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.9 rd, 175.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.2 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 11804, records dropped: 525 output_compression: NoCompression
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.735145) EVENT_LOG_v1 {"time_micros": 1765009310735136, "job": 110, "event": "compaction_finished", "compaction_time_micros": 76540, "compaction_time_cpu_micros": 40284, "output_level": 6, "num_output_files": 1, "total_output_size": 13444551, "num_input_records": 11804, "num_output_records": 11279, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310735875, "job": 110, "event": "table_file_deletion", "file_number": 178}
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009310738116, "job": 110, "event": "table_file_deletion", "file_number": 176}
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.656915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.738206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.738211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.738213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.738214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:21:50 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:21:50.738215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:21:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:51.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:51 compute-0 ceph-mon[74339]: pgmap v3886: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Dec 06 08:21:51 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 305 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Dec 06 08:21:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:52.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:52 compute-0 ceph-mon[74339]: pgmap v3887: 305 pgs: 305 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 163 op/s
Dec 06 08:21:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:53.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:53 compute-0 podman[407653]: 2025-12-06 08:21:53.482478624 +0000 UTC m=+0.136048518 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller)
Dec 06 08:21:53 compute-0 nova_compute[251992]: 2025-12-06 08:21:53.877 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:53 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 305 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 113 op/s
Dec 06 08:21:54 compute-0 nova_compute[251992]: 2025-12-06 08:21:54.267 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:54.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:55.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:55 compute-0 ceph-mon[74339]: pgmap v3888: 305 pgs: 305 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 113 op/s
Dec 06 08:21:55 compute-0 nova_compute[251992]: 2025-12-06 08:21:55.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:55 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 305 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 113 op/s
Dec 06 08:21:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:56.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:57.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:57 compute-0 ceph-mon[74339]: pgmap v3889: 305 pgs: 305 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 113 op/s
Dec 06 08:21:57 compute-0 nova_compute[251992]: 2025-12-06 08:21:57.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:21:57 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 851 KiB/s wr, 75 op/s
Dec 06 08:21:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:21:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:21:58.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:21:58 compute-0 nova_compute[251992]: 2025-12-06 08:21:58.847 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:58 compute-0 nova_compute[251992]: 2025-12-06 08:21:58.847 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:58 compute-0 nova_compute[251992]: 2025-12-06 08:21:58.848 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:21:58 compute-0 nova_compute[251992]: 2025-12-06 08:21:58.848 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:21:58 compute-0 nova_compute[251992]: 2025-12-06 08:21:58.848 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:21:58 compute-0 nova_compute[251992]: 2025-12-06 08:21:58.938 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:59 compute-0 nova_compute[251992]: 2025-12-06 08:21:59.269 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:21:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:21:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2892796958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:59 compute-0 nova_compute[251992]: 2025-12-06 08:21:59.294 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:21:59 compute-0 podman[407704]: 2025-12-06 08:21:59.389034292 +0000 UTC m=+0.048276552 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 06 08:21:59 compute-0 podman[407705]: 2025-12-06 08:21:59.426046017 +0000 UTC m=+0.080582569 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:21:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:21:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:21:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:21:59.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:21:59 compute-0 nova_compute[251992]: 2025-12-06 08:21:59.516 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000d6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:21:59 compute-0 nova_compute[251992]: 2025-12-06 08:21:59.517 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000d6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:21:59 compute-0 ceph-mon[74339]: pgmap v3890: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 851 KiB/s wr, 75 op/s
Dec 06 08:21:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2892796958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:21:59 compute-0 nova_compute[251992]: 2025-12-06 08:21:59.651 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:21:59 compute-0 nova_compute[251992]: 2025-12-06 08:21:59.652 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3910MB free_disk=20.942432403564453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:21:59 compute-0 nova_compute[251992]: 2025-12-06 08:21:59.652 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:21:59 compute-0 nova_compute[251992]: 2025-12-06 08:21:59.652 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:21:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:21:59 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 594 KiB/s wr, 53 op/s
Dec 06 08:22:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:00.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:00 compute-0 nova_compute[251992]: 2025-12-06 08:22:00.969 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance f2b69bc0-d3ae-4f20-9026-421ff6537c3f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:22:00 compute-0 nova_compute[251992]: 2025-12-06 08:22:00.970 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:22:00 compute-0 nova_compute[251992]: 2025-12-06 08:22:00.971 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:22:01 compute-0 nova_compute[251992]: 2025-12-06 08:22:01.191 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:01.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:22:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3420947166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:01 compute-0 nova_compute[251992]: 2025-12-06 08:22:01.635 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:01 compute-0 nova_compute[251992]: 2025-12-06 08:22:01.642 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:22:01 compute-0 ceph-mon[74339]: pgmap v3891: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 594 KiB/s wr, 53 op/s
Dec 06 08:22:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2736176045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:01 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 600 KiB/s wr, 56 op/s
Dec 06 08:22:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:02.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:03 compute-0 nova_compute[251992]: 2025-12-06 08:22:03.191 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:22:03 compute-0 nova_compute[251992]: 2025-12-06 08:22:03.193 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:22:03 compute-0 nova_compute[251992]: 2025-12-06 08:22:03.194 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3420947166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:03 compute-0 ceph-mon[74339]: pgmap v3892: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 600 KiB/s wr, 56 op/s
Dec 06 08:22:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:03.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:03.897 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:03.899 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:03.900 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:03 compute-0 nova_compute[251992]: 2025-12-06 08:22:03.939 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:03 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 142 KiB/s rd, 89 KiB/s wr, 6 op/s
Dec 06 08:22:04 compute-0 nova_compute[251992]: 2025-12-06 08:22:04.154 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:04.155 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=103, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=102) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:22:04 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:04.156 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:22:04 compute-0 nova_compute[251992]: 2025-12-06 08:22:04.269 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "377c47cc-946e-4216-99ad-9e2118dd22dd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:04 compute-0 nova_compute[251992]: 2025-12-06 08:22:04.269 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:04 compute-0 nova_compute[251992]: 2025-12-06 08:22:04.271 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:04 compute-0 nova_compute[251992]: 2025-12-06 08:22:04.306 251996 DEBUG nova.compute.manager [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:22:04 compute-0 nova_compute[251992]: 2025-12-06 08:22:04.418 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:04 compute-0 nova_compute[251992]: 2025-12-06 08:22:04.419 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:04 compute-0 nova_compute[251992]: 2025-12-06 08:22:04.429 251996 DEBUG nova.virt.hardware [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:22:04 compute-0 nova_compute[251992]: 2025-12-06 08:22:04.429 251996 INFO nova.compute.claims [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:22:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3353675565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:04.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:04 compute-0 nova_compute[251992]: 2025-12-06 08:22:04.736 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:05 compute-0 sudo[407784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:05 compute-0 sudo[407784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:05 compute-0 sudo[407784]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:22:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274571355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:05 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:05.157 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '103'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.160 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.167 251996 DEBUG nova.compute.provider_tree [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:22:05 compute-0 sudo[407811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:05 compute-0 sudo[407811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:05 compute-0 sudo[407811]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.194 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.195 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.195 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.415 251996 DEBUG nova.scheduler.client.report [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:22:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:05.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.493 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.518 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.519 251996 DEBUG nova.compute.manager [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:22:05 compute-0 ceph-mon[74339]: pgmap v3893: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 142 KiB/s rd, 89 KiB/s wr, 6 op/s
Dec 06 08:22:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3274571355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.640 251996 DEBUG nova.compute.manager [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.640 251996 DEBUG nova.network.neutron [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.681 251996 INFO nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:22:05 compute-0 nova_compute[251992]: 2025-12-06 08:22:05.717 251996 DEBUG nova.compute.manager [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:22:05 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 285 KiB/s rd, 90 KiB/s wr, 18 op/s
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.024 251996 DEBUG nova.compute.manager [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.025 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.026 251996 INFO nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Creating image(s)
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.058 251996 DEBUG nova.storage.rbd_utils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 377c47cc-946e-4216-99ad-9e2118dd22dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.089 251996 DEBUG nova.storage.rbd_utils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 377c47cc-946e-4216-99ad-9e2118dd22dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.115 251996 DEBUG nova.storage.rbd_utils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 377c47cc-946e-4216-99ad-9e2118dd22dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.119 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.185 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.186 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.187 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.188 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.221 251996 DEBUG nova.storage.rbd_utils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 377c47cc-946e-4216-99ad-9e2118dd22dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.226 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 377c47cc-946e-4216-99ad-9e2118dd22dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.279 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.279 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.280 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.280 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f2b69bc0-d3ae-4f20-9026-421ff6537c3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.308 251996 DEBUG nova.policy [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0432cb6633e14c1b86fc320e7f3bb880', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:22:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:22:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:06.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.614 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 377c47cc-946e-4216-99ad-9e2118dd22dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.680 251996 DEBUG nova.storage.rbd_utils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] resizing rbd image 377c47cc-946e-4216-99ad-9e2118dd22dd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:22:06 compute-0 nova_compute[251992]: 2025-12-06 08:22:06.780 251996 DEBUG nova.objects.instance [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 377c47cc-946e-4216-99ad-9e2118dd22dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:22:07 compute-0 nova_compute[251992]: 2025-12-06 08:22:07.153 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:22:07 compute-0 nova_compute[251992]: 2025-12-06 08:22:07.154 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Ensure instance console log exists: /var/lib/nova/instances/377c47cc-946e-4216-99ad-9e2118dd22dd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:22:07 compute-0 nova_compute[251992]: 2025-12-06 08:22:07.154 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:07 compute-0 nova_compute[251992]: 2025-12-06 08:22:07.154 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:07 compute-0 nova_compute[251992]: 2025-12-06 08:22:07.154 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:22:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:07.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:22:07 compute-0 ceph-mon[74339]: pgmap v3894: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 285 KiB/s rd, 90 KiB/s wr, 18 op/s
Dec 06 08:22:07 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 218 KiB/s rd, 90 KiB/s wr, 18 op/s
Dec 06 08:22:08 compute-0 nova_compute[251992]: 2025-12-06 08:22:08.354 251996 DEBUG nova.network.neutron [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Successfully created port: aa19ef6e-d7de-47d8-bb55-d5d7bd727abe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:22:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:08.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:08 compute-0 nova_compute[251992]: 2025-12-06 08:22:08.942 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:09 compute-0 nova_compute[251992]: 2025-12-06 08:22:09.277 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:09.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:09 compute-0 nova_compute[251992]: 2025-12-06 08:22:09.525 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Updating instance_info_cache with network_info: [{"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:22:09 compute-0 nova_compute[251992]: 2025-12-06 08:22:09.541 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:22:09 compute-0 nova_compute[251992]: 2025-12-06 08:22:09.541 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:22:09 compute-0 nova_compute[251992]: 2025-12-06 08:22:09.542 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:09 compute-0 nova_compute[251992]: 2025-12-06 08:22:09.542 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:09 compute-0 nova_compute[251992]: 2025-12-06 08:22:09.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:09 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 217 KiB/s rd, 6.6 KiB/s wr, 15 op/s
Dec 06 08:22:10 compute-0 ceph-mon[74339]: pgmap v3895: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 218 KiB/s rd, 90 KiB/s wr, 18 op/s
Dec 06 08:22:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/192332593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2209579158' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:22:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2209579158' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:22:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:10.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:10 compute-0 nova_compute[251992]: 2025-12-06 08:22:10.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:10 compute-0 nova_compute[251992]: 2025-12-06 08:22:10.915 251996 DEBUG nova.network.neutron [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Successfully updated port: aa19ef6e-d7de-47d8-bb55-d5d7bd727abe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:22:10 compute-0 nova_compute[251992]: 2025-12-06 08:22:10.937 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "refresh_cache-377c47cc-946e-4216-99ad-9e2118dd22dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:22:10 compute-0 nova_compute[251992]: 2025-12-06 08:22:10.937 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquired lock "refresh_cache-377c47cc-946e-4216-99ad-9e2118dd22dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:22:10 compute-0 nova_compute[251992]: 2025-12-06 08:22:10.937 251996 DEBUG nova.network.neutron [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:22:11 compute-0 nova_compute[251992]: 2025-12-06 08:22:11.048 251996 DEBUG nova.compute.manager [req-63534d92-04a5-4fb6-ac2a-a8981fe695fd req-99aa0b85-4807-4e1d-b94d-bb0a192321f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Received event network-changed-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:11 compute-0 nova_compute[251992]: 2025-12-06 08:22:11.048 251996 DEBUG nova.compute.manager [req-63534d92-04a5-4fb6-ac2a-a8981fe695fd req-99aa0b85-4807-4e1d-b94d-bb0a192321f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Refreshing instance network info cache due to event network-changed-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:22:11 compute-0 nova_compute[251992]: 2025-12-06 08:22:11.048 251996 DEBUG oslo_concurrency.lockutils [req-63534d92-04a5-4fb6-ac2a-a8981fe695fd req-99aa0b85-4807-4e1d-b94d-bb0a192321f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-377c47cc-946e-4216-99ad-9e2118dd22dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:22:11 compute-0 ceph-mon[74339]: pgmap v3896: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 217 KiB/s rd, 6.6 KiB/s wr, 15 op/s
Dec 06 08:22:11 compute-0 nova_compute[251992]: 2025-12-06 08:22:11.173 251996 DEBUG nova.network.neutron [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:22:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:11.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:11 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 305 active+clean; 346 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 238 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.469 251996 DEBUG nova.network.neutron [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Updating instance_info_cache with network_info: [{"id": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "address": "fa:16:3e:b3:eb:9e", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa19ef6e-d7", "ovs_interfaceid": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.501 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Releasing lock "refresh_cache-377c47cc-946e-4216-99ad-9e2118dd22dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.501 251996 DEBUG nova.compute.manager [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Instance network_info: |[{"id": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "address": "fa:16:3e:b3:eb:9e", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa19ef6e-d7", "ovs_interfaceid": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.502 251996 DEBUG oslo_concurrency.lockutils [req-63534d92-04a5-4fb6-ac2a-a8981fe695fd req-99aa0b85-4807-4e1d-b94d-bb0a192321f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-377c47cc-946e-4216-99ad-9e2118dd22dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.502 251996 DEBUG nova.network.neutron [req-63534d92-04a5-4fb6-ac2a-a8981fe695fd req-99aa0b85-4807-4e1d-b94d-bb0a192321f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Refreshing network info cache for port aa19ef6e-d7de-47d8-bb55-d5d7bd727abe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.506 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Start _get_guest_xml network_info=[{"id": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "address": "fa:16:3e:b3:eb:9e", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa19ef6e-d7", "ovs_interfaceid": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.511 251996 WARNING nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.516 251996 DEBUG nova.virt.libvirt.host [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.517 251996 DEBUG nova.virt.libvirt.host [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.524 251996 DEBUG nova.virt.libvirt.host [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.525 251996 DEBUG nova.virt.libvirt.host [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.526 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.527 251996 DEBUG nova.virt.hardware [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.527 251996 DEBUG nova.virt.hardware [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.527 251996 DEBUG nova.virt.hardware [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.528 251996 DEBUG nova.virt.hardware [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.528 251996 DEBUG nova.virt.hardware [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.528 251996 DEBUG nova.virt.hardware [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.529 251996 DEBUG nova.virt.hardware [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.529 251996 DEBUG nova.virt.hardware [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.529 251996 DEBUG nova.virt.hardware [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.529 251996 DEBUG nova.virt.hardware [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.530 251996 DEBUG nova.virt.hardware [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.533 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:12.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:22:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2265643239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:22:12 compute-0 nova_compute[251992]: 2025-12-06 08:22:12.981 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.015 251996 DEBUG nova.storage.rbd_utils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 377c47cc-946e-4216-99ad-9e2118dd22dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.019 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:22:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:22:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:22:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:22:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:22:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:22:13 compute-0 ceph-mon[74339]: pgmap v3897: 305 pgs: 305 active+clean; 346 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 238 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Dec 06 08:22:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/186175830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:22:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/186175830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:22:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2265643239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:22:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:22:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3254266222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.444 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.446 251996 DEBUG nova.virt.libvirt.vif [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:21:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-359828598',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-359828598',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-gen',id=217,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbFho9uZYw/C67SrAZAOll1Yn6PYb0ai5sIBHmAL6dW73Ch+qRjce9a6w2oaJggOsc3UVjACoOu/DVsfm+enspt+H1pyOFCemHZp5ou5LgCYmgT/pXXIRPYRaBdn6h/nw==',key_name='tempest-TestSecurityGroupsBasicOps-443327318',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-yv5ldyi3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:22:05Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=377c47cc-946e-4216-99ad-9e2118dd22dd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "address": "fa:16:3e:b3:eb:9e", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa19ef6e-d7", "ovs_interfaceid": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.446 251996 DEBUG nova.network.os_vif_util [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "address": "fa:16:3e:b3:eb:9e", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa19ef6e-d7", "ovs_interfaceid": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.447 251996 DEBUG nova.network.os_vif_util [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:eb:9e,bridge_name='br-int',has_traffic_filtering=True,id=aa19ef6e-d7de-47d8-bb55-d5d7bd727abe,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa19ef6e-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.448 251996 DEBUG nova.objects.instance [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 377c47cc-946e-4216-99ad-9e2118dd22dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:22:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:13.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.607 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:22:13 compute-0 nova_compute[251992]:   <uuid>377c47cc-946e-4216-99ad-9e2118dd22dd</uuid>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   <name>instance-000000d9</name>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-359828598</nova:name>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:22:12</nova:creationTime>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <nova:user uuid="0432cb6633e14c1b86fc320e7f3bb880">tempest-TestSecurityGroupsBasicOps-568463891-project-member</nova:user>
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <nova:project uuid="5d23d1d6ffc142eaa9bee0ef93fe60e4">tempest-TestSecurityGroupsBasicOps-568463891</nova:project>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <nova:port uuid="aa19ef6e-d7de-47d8-bb55-d5d7bd727abe">
Dec 06 08:22:13 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <system>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <entry name="serial">377c47cc-946e-4216-99ad-9e2118dd22dd</entry>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <entry name="uuid">377c47cc-946e-4216-99ad-9e2118dd22dd</entry>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     </system>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   <os>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   </os>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   <features>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   </features>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/377c47cc-946e-4216-99ad-9e2118dd22dd_disk">
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       </source>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/377c47cc-946e-4216-99ad-9e2118dd22dd_disk.config">
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       </source>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:22:13 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:b3:eb:9e"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <target dev="tapaa19ef6e-d7"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/377c47cc-946e-4216-99ad-9e2118dd22dd/console.log" append="off"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <video>
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     </video>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:22:13 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:22:13 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:22:13 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:22:13 compute-0 nova_compute[251992]: </domain>
Dec 06 08:22:13 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.609 251996 DEBUG nova.compute.manager [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Preparing to wait for external event network-vif-plugged-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.610 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.610 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.610 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.611 251996 DEBUG nova.virt.libvirt.vif [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:21:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-359828598',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-359828598',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-gen',id=217,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbFho9uZYw/C67SrAZAOll1Yn6PYb0ai5sIBHmAL6dW73Ch+qRjce9a6w2oaJggOsc3UVjACoOu/DVsfm+enspt+H1pyOFCemHZp5ou5LgCYmgT/pXXIRPYRaBdn6h/nw==',key_name='tempest-TestSecurityGroupsBasicOps-443327318',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-yv5ldyi3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:22:05Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=377c47cc-946e-4216-99ad-9e2118dd22dd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "address": "fa:16:3e:b3:eb:9e", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa19ef6e-d7", "ovs_interfaceid": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.611 251996 DEBUG nova.network.os_vif_util [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "address": "fa:16:3e:b3:eb:9e", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa19ef6e-d7", "ovs_interfaceid": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.612 251996 DEBUG nova.network.os_vif_util [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:eb:9e,bridge_name='br-int',has_traffic_filtering=True,id=aa19ef6e-d7de-47d8-bb55-d5d7bd727abe,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa19ef6e-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.612 251996 DEBUG os_vif [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:eb:9e,bridge_name='br-int',has_traffic_filtering=True,id=aa19ef6e-d7de-47d8-bb55-d5d7bd727abe,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa19ef6e-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.613 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.613 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.614 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.619 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.619 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa19ef6e-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.620 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa19ef6e-d7, col_values=(('external_ids', {'iface-id': 'aa19ef6e-d7de-47d8-bb55-d5d7bd727abe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:eb:9e', 'vm-uuid': '377c47cc-946e-4216-99ad-9e2118dd22dd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.621 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:13 compute-0 NetworkManager[48965]: <info>  [1765009333.6229] manager: (tapaa19ef6e-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.623 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.628 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.629 251996 INFO os_vif [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:eb:9e,bridge_name='br-int',has_traffic_filtering=True,id=aa19ef6e-d7de-47d8-bb55-d5d7bd727abe,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa19ef6e-d7')
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.786 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.786 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.786 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No VIF found with MAC fa:16:3e:b3:eb:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.787 251996 INFO nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Using config drive
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.817 251996 DEBUG nova.storage.rbd_utils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 377c47cc-946e-4216-99ad-9e2118dd22dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:22:13 compute-0 nova_compute[251992]: 2025-12-06 08:22:13.944 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:13 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 305 active+clean; 346 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 163 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Dec 06 08:22:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Dec 06 08:22:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Dec 06 08:22:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3254266222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:22:14 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Dec 06 08:22:14 compute-0 nova_compute[251992]: 2025-12-06 08:22:14.372 251996 INFO nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Creating config drive at /var/lib/nova/instances/377c47cc-946e-4216-99ad-9e2118dd22dd/disk.config
Dec 06 08:22:14 compute-0 nova_compute[251992]: 2025-12-06 08:22:14.377 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/377c47cc-946e-4216-99ad-9e2118dd22dd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp74apu49u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:14 compute-0 nova_compute[251992]: 2025-12-06 08:22:14.513 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/377c47cc-946e-4216-99ad-9e2118dd22dd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp74apu49u" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:14 compute-0 nova_compute[251992]: 2025-12-06 08:22:14.540 251996 DEBUG nova.storage.rbd_utils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 377c47cc-946e-4216-99ad-9e2118dd22dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:22:14 compute-0 nova_compute[251992]: 2025-12-06 08:22:14.543 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/377c47cc-946e-4216-99ad-9e2118dd22dd/disk.config 377c47cc-946e-4216-99ad-9e2118dd22dd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:14.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:14 compute-0 nova_compute[251992]: 2025-12-06 08:22:14.746 251996 DEBUG nova.network.neutron [req-63534d92-04a5-4fb6-ac2a-a8981fe695fd req-99aa0b85-4807-4e1d-b94d-bb0a192321f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Updated VIF entry in instance network info cache for port aa19ef6e-d7de-47d8-bb55-d5d7bd727abe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:22:14 compute-0 nova_compute[251992]: 2025-12-06 08:22:14.747 251996 DEBUG nova.network.neutron [req-63534d92-04a5-4fb6-ac2a-a8981fe695fd req-99aa0b85-4807-4e1d-b94d-bb0a192321f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Updating instance_info_cache with network_info: [{"id": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "address": "fa:16:3e:b3:eb:9e", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa19ef6e-d7", "ovs_interfaceid": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:22:14 compute-0 nova_compute[251992]: 2025-12-06 08:22:14.772 251996 DEBUG oslo_concurrency.lockutils [req-63534d92-04a5-4fb6-ac2a-a8981fe695fd req-99aa0b85-4807-4e1d-b94d-bb0a192321f9 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-377c47cc-946e-4216-99ad-9e2118dd22dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.196 251996 DEBUG oslo_concurrency.processutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/377c47cc-946e-4216-99ad-9e2118dd22dd/disk.config 377c47cc-946e-4216-99ad-9e2118dd22dd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.197 251996 INFO nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Deleting local config drive /var/lib/nova/instances/377c47cc-946e-4216-99ad-9e2118dd22dd/disk.config because it was imported into RBD.
Dec 06 08:22:15 compute-0 kernel: tapaa19ef6e-d7: entered promiscuous mode
Dec 06 08:22:15 compute-0 NetworkManager[48965]: <info>  [1765009335.2512] manager: (tapaa19ef6e-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/388)
Dec 06 08:22:15 compute-0 ovn_controller[147168]: 2025-12-06T08:22:15Z|00809|binding|INFO|Claiming lport aa19ef6e-d7de-47d8-bb55-d5d7bd727abe for this chassis.
Dec 06 08:22:15 compute-0 ovn_controller[147168]: 2025-12-06T08:22:15Z|00810|binding|INFO|aa19ef6e-d7de-47d8-bb55-d5d7bd727abe: Claiming fa:16:3e:b3:eb:9e 10.100.0.11
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.251 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.262 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:eb:9e 10.100.0.11'], port_security=['fa:16:3e:b3:eb:9e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '377c47cc-946e-4216-99ad-9e2118dd22dd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '353e35c5-d221-4dce-9275-d54187ac5a75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d7af5e0-a504-41e1-b9f6-95b3d5bf94dd, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=aa19ef6e-d7de-47d8-bb55-d5d7bd727abe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.264 158118 INFO neutron.agent.ovn.metadata.agent [-] Port aa19ef6e-d7de-47d8-bb55-d5d7bd727abe in datapath 8e3ac9aa-9766-45cd-a8b5-88cc15193af6 bound to our chassis
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.265 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e3ac9aa-9766-45cd-a8b5-88cc15193af6
Dec 06 08:22:15 compute-0 ovn_controller[147168]: 2025-12-06T08:22:15Z|00811|binding|INFO|Setting lport aa19ef6e-d7de-47d8-bb55-d5d7bd727abe ovn-installed in OVS
Dec 06 08:22:15 compute-0 ovn_controller[147168]: 2025-12-06T08:22:15Z|00812|binding|INFO|Setting lport aa19ef6e-d7de-47d8-bb55-d5d7bd727abe up in Southbound
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.269 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.271 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.282 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8a242cef-24c3-4bbe-af80-aa3ddfebc74c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.284 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e3ac9aa-91 in ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.286 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e3ac9aa-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.286 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2bfce40e-d3c9-4b9b-93b6-809a97dd372a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.288 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d07c0215-d1cb-499b-90db-7bc67c67666a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 systemd-udevd[408143]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:22:15 compute-0 systemd-machined[212986]: New machine qemu-96-instance-000000d9.
Dec 06 08:22:15 compute-0 NetworkManager[48965]: <info>  [1765009335.3031] device (tapaa19ef6e-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:22:15 compute-0 NetworkManager[48965]: <info>  [1765009335.3045] device (tapaa19ef6e-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.306 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb27c9b-9425-449b-aaf5-7c4d90f3c56f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-000000d9.
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.329 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[89488394-f57d-4637-a25d-444dfd6aba58]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.359 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[05e50c41-e461-4322-bf3e-17750b9c3e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.364 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9970d65d-8c04-48f5-bee2-a4faf720f2ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 NetworkManager[48965]: <info>  [1765009335.3657] manager: (tap8e3ac9aa-90): new Veth device (/org/freedesktop/NetworkManager/Devices/389)
Dec 06 08:22:15 compute-0 ceph-mon[74339]: pgmap v3898: 305 pgs: 305 active+clean; 346 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 163 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Dec 06 08:22:15 compute-0 ceph-mon[74339]: osdmap e427: 3 total, 3 up, 3 in
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.394 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[af309854-4db7-4592-8a7f-bb0b85658f39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.397 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[b08cba55-09c4-41bf-b59d-57c635aea498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 NetworkManager[48965]: <info>  [1765009335.4206] device (tap8e3ac9aa-90): carrier: link connected
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.425 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[005bbc9a-77c9-42e0-adfe-fad2453485b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.443 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[456ec504-488f-464d-8d41-061c4898dd12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e3ac9aa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:38:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 245], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 960800, 'reachable_time': 34127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408176, 'error': None, 'target': 'ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.462 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f63fff-2ea3-45f6-a1df-545edbdc03f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:3801'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 960800, 'tstamp': 960800}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408177, 'error': None, 'target': 'ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.482 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[23d773e3-2a7b-4c28-ab0d-c822fd8dd287]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e3ac9aa-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:38:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 245], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 960800, 'reachable_time': 34127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 408178, 'error': None, 'target': 'ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:15.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.516 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4330d9-d20b-49b3-b1db-3489668992d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.574 251996 DEBUG nova.compute.manager [req-a1e5a4f4-5af1-4110-a1ac-085b4e1a38b3 req-ab6a7c8a-df85-4d7b-b893-bb591ea42050 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Received event network-changed-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.574 251996 DEBUG nova.compute.manager [req-a1e5a4f4-5af1-4110-a1ac-085b4e1a38b3 req-ab6a7c8a-df85-4d7b-b893-bb591ea42050 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Refreshing instance network info cache due to event network-changed-fdb27d9b-f2d2-4cdc-9682-63a06b53b767. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.574 251996 DEBUG oslo_concurrency.lockutils [req-a1e5a4f4-5af1-4110-a1ac-085b4e1a38b3 req-ab6a7c8a-df85-4d7b-b893-bb591ea42050 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.575 251996 DEBUG oslo_concurrency.lockutils [req-a1e5a4f4-5af1-4110-a1ac-085b4e1a38b3 req-ab6a7c8a-df85-4d7b-b893-bb591ea42050 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.575 251996 DEBUG nova.network.neutron [req-a1e5a4f4-5af1-4110-a1ac-085b4e1a38b3 req-ab6a7c8a-df85-4d7b-b893-bb591ea42050 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Refreshing network info cache for port fdb27d9b-f2d2-4cdc-9682-63a06b53b767 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.575 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e4ccbb-f9d9-4bd1-9fd1-ab8e04f2f3c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.577 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e3ac9aa-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.577 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.578 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e3ac9aa-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.579 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:15 compute-0 NetworkManager[48965]: <info>  [1765009335.5807] manager: (tap8e3ac9aa-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Dec 06 08:22:15 compute-0 kernel: tap8e3ac9aa-90: entered promiscuous mode
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.586 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e3ac9aa-90, col_values=(('external_ids', {'iface-id': 'e79b0cd9-d776-4f44-8b09-bc4e745f59bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.587 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:15 compute-0 ovn_controller[147168]: 2025-12-06T08:22:15Z|00813|binding|INFO|Releasing lport e79b0cd9-d776-4f44-8b09-bc4e745f59bc from this chassis (sb_readonly=0)
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.590 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.594 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e3ac9aa-9766-45cd-a8b5-88cc15193af6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e3ac9aa-9766-45cd-a8b5-88cc15193af6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.599 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d3ac52-3717-40f2-8db1-e9fb4d266b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.600 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-8e3ac9aa-9766-45cd-a8b5-88cc15193af6
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/8e3ac9aa-9766-45cd-a8b5-88cc15193af6.pid.haproxy
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 8e3ac9aa-9766-45cd-a8b5-88cc15193af6
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.601 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'env', 'PROCESS_TAG=haproxy-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e3ac9aa-9766-45cd-a8b5-88cc15193af6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.602 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.645 251996 DEBUG oslo_concurrency.lockutils [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.646 251996 DEBUG oslo_concurrency.lockutils [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.646 251996 DEBUG oslo_concurrency.lockutils [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.646 251996 DEBUG oslo_concurrency.lockutils [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.646 251996 DEBUG oslo_concurrency.lockutils [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.647 251996 INFO nova.compute.manager [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Terminating instance
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.649 251996 DEBUG nova.compute.manager [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.671 251996 DEBUG nova.compute.manager [req-be41c389-8bd2-43a3-9af4-65a5ea4804db req-9750964b-4fa2-4f58-be70-017eb0118946 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Received event network-vif-plugged-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.672 251996 DEBUG oslo_concurrency.lockutils [req-be41c389-8bd2-43a3-9af4-65a5ea4804db req-9750964b-4fa2-4f58-be70-017eb0118946 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.672 251996 DEBUG oslo_concurrency.lockutils [req-be41c389-8bd2-43a3-9af4-65a5ea4804db req-9750964b-4fa2-4f58-be70-017eb0118946 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.673 251996 DEBUG oslo_concurrency.lockutils [req-be41c389-8bd2-43a3-9af4-65a5ea4804db req-9750964b-4fa2-4f58-be70-017eb0118946 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.673 251996 DEBUG nova.compute.manager [req-be41c389-8bd2-43a3-9af4-65a5ea4804db req-9750964b-4fa2-4f58-be70-017eb0118946 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Processing event network-vif-plugged-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:22:15 compute-0 kernel: tapfdb27d9b-f2 (unregistering): left promiscuous mode
Dec 06 08:22:15 compute-0 NetworkManager[48965]: <info>  [1765009335.9590] device (tapfdb27d9b-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:22:15 compute-0 ovn_controller[147168]: 2025-12-06T08:22:15Z|00814|binding|INFO|Releasing lport fdb27d9b-f2d2-4cdc-9682-63a06b53b767 from this chassis (sb_readonly=0)
Dec 06 08:22:15 compute-0 ovn_controller[147168]: 2025-12-06T08:22:15Z|00815|binding|INFO|Setting lport fdb27d9b-f2d2-4cdc-9682-63a06b53b767 down in Southbound
Dec 06 08:22:15 compute-0 ovn_controller[147168]: 2025-12-06T08:22:15Z|00816|binding|INFO|Removing iface tapfdb27d9b-f2 ovn-installed in OVS
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.970 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:15 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:15.982 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:ce:9c 10.100.0.3'], port_security=['fa:16:3e:75:ce:9c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f2b69bc0-d3ae-4f20-9026-421ff6537c3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b2dc4b8729f446a9c7ac69ca446f71d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8cd07b30-a335-4570-957e-3674d9a06120', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=60eec70d-8996-4225-9077-6d0f2705560a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=fdb27d9b-f2d2-4cdc-9682-63a06b53b767) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:22:15 compute-0 nova_compute[251992]: 2025-12-06 08:22:15.983 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:15 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 305 active+clean; 340 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Dec 06 08:22:16 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000d6.scope: Deactivated successfully.
Dec 06 08:22:16 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000d6.scope: Consumed 17.018s CPU time.
Dec 06 08:22:16 compute-0 systemd-machined[212986]: Machine qemu-95-instance-000000d6 terminated.
Dec 06 08:22:16 compute-0 podman[408244]: 2025-12-06 08:22:16.023144196 +0000 UTC m=+0.052392595 container create 6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 08:22:16 compute-0 NetworkManager[48965]: <info>  [1765009336.0662] manager: (tapfdb27d9b-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Dec 06 08:22:16 compute-0 systemd[1]: Started libpod-conmon-6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc.scope.
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.085 251996 INFO nova.virt.libvirt.driver [-] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Instance destroyed successfully.
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.086 251996 DEBUG nova.objects.instance [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lazy-loading 'resources' on Instance uuid f2b69bc0-d3ae-4f20-9026-421ff6537c3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:22:16 compute-0 podman[408244]: 2025-12-06 08:22:15.996008238 +0000 UTC m=+0.025256657 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:22:16 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d79ccdad7b18c1f5e63cfd2875571e71788ab86952de657fc4072748e3de2f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.110 251996 DEBUG nova.virt.libvirt.vif [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:20:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1178767715',display_name='tempest-TestVolumeBootPattern-server-1178767715',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1178767715',id=214,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOvUl3Ab4ESWezLZ9mehuTavMygXDhT0chVOH5OGNfzBJ6GphwodjSkpQcbaa1ADoOOfJ6+3BcKIVxorR3UxI6tyiW7Q3SFHkhHBjCjD54foFQ6i6sfCU/p7OcBbQ12cuw==',key_name='tempest-TestVolumeBootPattern-2075529576',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:20:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b2dc4b8729f446a9c7ac69ca446f71d',ramdisk_id='',reservation_id='r-945l9ppo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-97496240',owner_user_name='tempest-TestVolumeBootPattern-97496240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:20:43Z,user_data=None,user_id='8e8feb4540af4e2caa45a88a9202dbe2',uuid=f2b69bc0-d3ae-4f20-9026-421ff6537c3f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.110 251996 DEBUG nova.network.os_vif_util [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converting VIF {"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.112 251996 DEBUG nova.network.os_vif_util [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:75:ce:9c,bridge_name='br-int',has_traffic_filtering=True,id=fdb27d9b-f2d2-4cdc-9682-63a06b53b767,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb27d9b-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.112 251996 DEBUG os_vif [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:ce:9c,bridge_name='br-int',has_traffic_filtering=True,id=fdb27d9b-f2d2-4cdc-9682-63a06b53b767,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb27d9b-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.115 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.115 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdb27d9b-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.116 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009336.1116436, 377c47cc-946e-4216-99ad-9e2118dd22dd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.117 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] VM Started (Lifecycle Event)
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.119 251996 DEBUG nova.compute.manager [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.120 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.122 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:22:16 compute-0 podman[408244]: 2025-12-06 08:22:16.124032477 +0000 UTC m=+0.153280906 container init 6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.123 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.125 251996 INFO os_vif [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:ce:9c,bridge_name='br-int',has_traffic_filtering=True,id=fdb27d9b-f2d2-4cdc-9682-63a06b53b767,network=Network(b4ef1374-9c77-45a7-8776-50aa60c7d84a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdb27d9b-f2')
Dec 06 08:22:16 compute-0 podman[408244]: 2025-12-06 08:22:16.130744419 +0000 UTC m=+0.159992818 container start 6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.148 251996 INFO nova.virt.libvirt.driver [-] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Instance spawned successfully.
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.148 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.152 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.159 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:22:16 compute-0 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[408279]: [NOTICE]   (408287) : New worker (408304) forked
Dec 06 08:22:16 compute-0 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[408279]: [NOTICE]   (408287) : Loading success.
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.180 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.180 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.181 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.182 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.182 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.183 251996 DEBUG nova.virt.libvirt.driver [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.188 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.189 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009336.1117053, 377c47cc-946e-4216-99ad-9e2118dd22dd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.189 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] VM Paused (Lifecycle Event)
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.198 158118 INFO neutron.agent.ovn.metadata.agent [-] Port fdb27d9b-f2d2-4cdc-9682-63a06b53b767 in datapath b4ef1374-9c77-45a7-8776-50aa60c7d84a unbound from our chassis
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.200 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4ef1374-9c77-45a7-8776-50aa60c7d84a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.201 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0611904b-95c5-4a63-9d51-590044280c23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.202 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a namespace which is not needed anymore
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.224 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.228 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009336.1226196, 377c47cc-946e-4216-99ad-9e2118dd22dd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.229 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] VM Resumed (Lifecycle Event)
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.268 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.271 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.299 251996 INFO nova.compute.manager [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Took 10.27 seconds to spawn the instance on the hypervisor.
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.300 251996 DEBUG nova.compute.manager [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.302 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:22:16 compute-0 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[406254]: [NOTICE]   (406258) : haproxy version is 2.8.14-c23fe91
Dec 06 08:22:16 compute-0 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[406254]: [NOTICE]   (406258) : path to executable is /usr/sbin/haproxy
Dec 06 08:22:16 compute-0 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[406254]: [WARNING]  (406258) : Exiting Master process...
Dec 06 08:22:16 compute-0 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[406254]: [ALERT]    (406258) : Current worker (406260) exited with code 143 (Terminated)
Dec 06 08:22:16 compute-0 neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a[406254]: [WARNING]  (406258) : All workers exited. Exiting... (0)
Dec 06 08:22:16 compute-0 systemd[1]: libpod-0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11.scope: Deactivated successfully.
Dec 06 08:22:16 compute-0 podman[408332]: 2025-12-06 08:22:16.327415092 +0000 UTC m=+0.043577825 container died 0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 08:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11-userdata-shm.mount: Deactivated successfully.
Dec 06 08:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-efa1c5cd0454803620f14d518e9a71e4cbda7db2e904557ad9460ba7020c585a-merged.mount: Deactivated successfully.
Dec 06 08:22:16 compute-0 podman[408332]: 2025-12-06 08:22:16.363406609 +0000 UTC m=+0.079569352 container cleanup 0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.367 251996 INFO nova.compute.manager [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Took 11.97 seconds to build instance.
Dec 06 08:22:16 compute-0 systemd[1]: libpod-conmon-0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11.scope: Deactivated successfully.
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.407 251996 DEBUG oslo_concurrency.lockutils [None req-dbb6137c-2b89-46a3-a2c3-89ccbfe9372d 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:16 compute-0 podman[408360]: 2025-12-06 08:22:16.427405218 +0000 UTC m=+0.040703736 container remove 0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.433 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[faa1eb39-5b75-493c-8b17-c854259abf55]: (4, ('Sat Dec  6 08:22:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11)\n0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11\nSat Dec  6 08:22:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a (0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11)\n0d4743e166f7afa43d3cb1b7d3fc65a337d09f20d8344223cd32543196fc5e11\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.435 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5f191e84-7d8a-48ca-92ee-0a9473a35306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.436 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4ef1374-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:16 compute-0 kernel: tapb4ef1374-90: left promiscuous mode
Dec 06 08:22:16 compute-0 nova_compute[251992]: 2025-12-06 08:22:16.451 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.454 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[1b58e805-437e-40ba-940e-702fedb19932]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.467 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[59f374a4-d2e8-4479-873b-a521068c01d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.468 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6506ca-25d6-400b-8620-6bfc8cf132f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.489 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb936c5-5ab6-4421-a6dd-cdf51f8fb15a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 951380, 'reachable_time': 15842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408375, 'error': None, 'target': 'ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:16 compute-0 systemd[1]: run-netns-ovnmeta\x2db4ef1374\x2d9c77\x2d45a7\x2d8776\x2d50aa60c7d84a.mount: Deactivated successfully.
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.494 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b4ef1374-9c77-45a7-8776-50aa60c7d84a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:22:16 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:16.494 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[b803f604-4cd0-40e9-b001-19b06372dd19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:16.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:17 compute-0 nova_compute[251992]: 2025-12-06 08:22:17.101 251996 INFO nova.virt.libvirt.driver [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Deleting instance files /var/lib/nova/instances/f2b69bc0-d3ae-4f20-9026-421ff6537c3f_del
Dec 06 08:22:17 compute-0 nova_compute[251992]: 2025-12-06 08:22:17.102 251996 INFO nova.virt.libvirt.driver [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Deletion of /var/lib/nova/instances/f2b69bc0-d3ae-4f20-9026-421ff6537c3f_del complete
Dec 06 08:22:17 compute-0 nova_compute[251992]: 2025-12-06 08:22:17.177 251996 INFO nova.compute.manager [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Took 1.53 seconds to destroy the instance on the hypervisor.
Dec 06 08:22:17 compute-0 nova_compute[251992]: 2025-12-06 08:22:17.178 251996 DEBUG oslo.service.loopingcall [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:22:17 compute-0 nova_compute[251992]: 2025-12-06 08:22:17.178 251996 DEBUG nova.compute.manager [-] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:22:17 compute-0 nova_compute[251992]: 2025-12-06 08:22:17.178 251996 DEBUG nova.network.neutron [-] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:22:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:17.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:17 compute-0 ceph-mon[74339]: pgmap v3900: 305 pgs: 305 active+clean; 340 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Dec 06 08:22:17 compute-0 nova_compute[251992]: 2025-12-06 08:22:17.819 251996 DEBUG nova.compute.manager [req-047c4ed0-6c75-428c-af21-20be151dc96b req-c6c6e456-75fd-4270-ae7d-fb0791af7a4c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Received event network-vif-plugged-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:17 compute-0 nova_compute[251992]: 2025-12-06 08:22:17.820 251996 DEBUG oslo_concurrency.lockutils [req-047c4ed0-6c75-428c-af21-20be151dc96b req-c6c6e456-75fd-4270-ae7d-fb0791af7a4c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:17 compute-0 nova_compute[251992]: 2025-12-06 08:22:17.820 251996 DEBUG oslo_concurrency.lockutils [req-047c4ed0-6c75-428c-af21-20be151dc96b req-c6c6e456-75fd-4270-ae7d-fb0791af7a4c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:17 compute-0 nova_compute[251992]: 2025-12-06 08:22:17.820 251996 DEBUG oslo_concurrency.lockutils [req-047c4ed0-6c75-428c-af21-20be151dc96b req-c6c6e456-75fd-4270-ae7d-fb0791af7a4c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:17 compute-0 nova_compute[251992]: 2025-12-06 08:22:17.820 251996 DEBUG nova.compute.manager [req-047c4ed0-6c75-428c-af21-20be151dc96b req-c6c6e456-75fd-4270-ae7d-fb0791af7a4c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] No waiting events found dispatching network-vif-plugged-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:22:17 compute-0 nova_compute[251992]: 2025-12-06 08:22:17.821 251996 WARNING nova.compute.manager [req-047c4ed0-6c75-428c-af21-20be151dc96b req-c6c6e456-75fd-4270-ae7d-fb0791af7a4c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Received unexpected event network-vif-plugged-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe for instance with vm_state active and task_state None.
Dec 06 08:22:17 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Dec 06 08:22:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:18.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:22:18
Dec 06 08:22:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:22:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:22:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'volumes', 'backups', 'default.rgw.log', 'images', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data']
Dec 06 08:22:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:22:18 compute-0 nova_compute[251992]: 2025-12-06 08:22:18.788 251996 DEBUG nova.compute.manager [req-3d45d24c-77b6-41f4-8c63-1f3ac83bb392 req-7949fc37-28e6-4fec-9782-95f9dff57022 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Received event network-vif-unplugged-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:18 compute-0 nova_compute[251992]: 2025-12-06 08:22:18.789 251996 DEBUG oslo_concurrency.lockutils [req-3d45d24c-77b6-41f4-8c63-1f3ac83bb392 req-7949fc37-28e6-4fec-9782-95f9dff57022 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:18 compute-0 nova_compute[251992]: 2025-12-06 08:22:18.789 251996 DEBUG oslo_concurrency.lockutils [req-3d45d24c-77b6-41f4-8c63-1f3ac83bb392 req-7949fc37-28e6-4fec-9782-95f9dff57022 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:18 compute-0 nova_compute[251992]: 2025-12-06 08:22:18.789 251996 DEBUG oslo_concurrency.lockutils [req-3d45d24c-77b6-41f4-8c63-1f3ac83bb392 req-7949fc37-28e6-4fec-9782-95f9dff57022 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:18 compute-0 nova_compute[251992]: 2025-12-06 08:22:18.790 251996 DEBUG nova.compute.manager [req-3d45d24c-77b6-41f4-8c63-1f3ac83bb392 req-7949fc37-28e6-4fec-9782-95f9dff57022 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] No waiting events found dispatching network-vif-unplugged-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:22:18 compute-0 nova_compute[251992]: 2025-12-06 08:22:18.790 251996 DEBUG nova.compute.manager [req-3d45d24c-77b6-41f4-8c63-1f3ac83bb392 req-7949fc37-28e6-4fec-9782-95f9dff57022 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Received event network-vif-unplugged-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:22:18 compute-0 ceph-mon[74339]: pgmap v3901: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Dec 06 08:22:18 compute-0 nova_compute[251992]: 2025-12-06 08:22:18.946 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:19 compute-0 nova_compute[251992]: 2025-12-06 08:22:19.455 251996 DEBUG nova.network.neutron [req-a1e5a4f4-5af1-4110-a1ac-085b4e1a38b3 req-ab6a7c8a-df85-4d7b-b893-bb591ea42050 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Updated VIF entry in instance network info cache for port fdb27d9b-f2d2-4cdc-9682-63a06b53b767. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:22:19 compute-0 nova_compute[251992]: 2025-12-06 08:22:19.456 251996 DEBUG nova.network.neutron [req-a1e5a4f4-5af1-4110-a1ac-085b4e1a38b3 req-ab6a7c8a-df85-4d7b-b893-bb591ea42050 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Updating instance_info_cache with network_info: [{"id": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "address": "fa:16:3e:75:ce:9c", "network": {"id": "b4ef1374-9c77-45a7-8776-50aa60c7d84a", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1664561964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b2dc4b8729f446a9c7ac69ca446f71d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdb27d9b-f2", "ovs_interfaceid": "fdb27d9b-f2d2-4cdc-9682-63a06b53b767", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:22:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:19.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:19 compute-0 nova_compute[251992]: 2025-12-06 08:22:19.542 251996 DEBUG oslo_concurrency.lockutils [req-a1e5a4f4-5af1-4110-a1ac-085b4e1a38b3 req-ab6a7c8a-df85-4d7b-b893-bb591ea42050 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-f2b69bc0-d3ae-4f20-9026-421ff6537c3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:22:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Dec 06 08:22:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Dec 06 08:22:19 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Dec 06 08:22:19 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 24 KiB/s wr, 51 op/s
Dec 06 08:22:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:20.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:20 compute-0 ceph-mon[74339]: osdmap e428: 3 total, 3 up, 3 in
Dec 06 08:22:20 compute-0 ceph-mon[74339]: pgmap v3903: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 24 KiB/s wr, 51 op/s
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.016 251996 DEBUG nova.compute.manager [req-22413f13-5175-4be0-938d-67596c38b1b1 req-ad9c87fb-6681-4b19-a50f-1847266d2244 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Received event network-vif-plugged-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.017 251996 DEBUG oslo_concurrency.lockutils [req-22413f13-5175-4be0-938d-67596c38b1b1 req-ad9c87fb-6681-4b19-a50f-1847266d2244 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.017 251996 DEBUG oslo_concurrency.lockutils [req-22413f13-5175-4be0-938d-67596c38b1b1 req-ad9c87fb-6681-4b19-a50f-1847266d2244 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.017 251996 DEBUG oslo_concurrency.lockutils [req-22413f13-5175-4be0-938d-67596c38b1b1 req-ad9c87fb-6681-4b19-a50f-1847266d2244 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.017 251996 DEBUG nova.compute.manager [req-22413f13-5175-4be0-938d-67596c38b1b1 req-ad9c87fb-6681-4b19-a50f-1847266d2244 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] No waiting events found dispatching network-vif-plugged-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.017 251996 WARNING nova.compute.manager [req-22413f13-5175-4be0-938d-67596c38b1b1 req-ad9c87fb-6681-4b19-a50f-1847266d2244 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Received unexpected event network-vif-plugged-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 for instance with vm_state active and task_state deleting.
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.118 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.464 251996 DEBUG nova.network.neutron [-] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.487 251996 INFO nova.compute.manager [-] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Took 4.31 seconds to deallocate network for instance.
Dec 06 08:22:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:21.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.572 251996 DEBUG nova.compute.manager [req-1ecb8e4f-b2f7-4381-85ac-9d1fa343cca1 req-0f79b76f-c747-47e9-b5aa-ec7ec9fbc575 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Received event network-vif-deleted-fdb27d9b-f2d2-4cdc-9682-63a06b53b767 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.748 251996 INFO nova.compute.manager [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Took 0.26 seconds to detach 1 volumes for instance.
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.822 251996 DEBUG oslo_concurrency.lockutils [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.822 251996 DEBUG oslo_concurrency.lockutils [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:21 compute-0 nova_compute[251992]: 2025-12-06 08:22:21.932 251996 DEBUG oslo_concurrency.processutils [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:21 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 175 op/s
Dec 06 08:22:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:22:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4093370634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:22 compute-0 nova_compute[251992]: 2025-12-06 08:22:22.351 251996 DEBUG nova.compute.manager [req-7d2f2186-6be9-43be-a3a7-dcf53e84792a req-fc1b6e25-379c-454b-9432-d4d23fb0722a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Received event network-changed-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:22 compute-0 nova_compute[251992]: 2025-12-06 08:22:22.352 251996 DEBUG nova.compute.manager [req-7d2f2186-6be9-43be-a3a7-dcf53e84792a req-fc1b6e25-379c-454b-9432-d4d23fb0722a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Refreshing instance network info cache due to event network-changed-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:22:22 compute-0 nova_compute[251992]: 2025-12-06 08:22:22.352 251996 DEBUG oslo_concurrency.lockutils [req-7d2f2186-6be9-43be-a3a7-dcf53e84792a req-fc1b6e25-379c-454b-9432-d4d23fb0722a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-377c47cc-946e-4216-99ad-9e2118dd22dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:22:22 compute-0 nova_compute[251992]: 2025-12-06 08:22:22.352 251996 DEBUG oslo_concurrency.lockutils [req-7d2f2186-6be9-43be-a3a7-dcf53e84792a req-fc1b6e25-379c-454b-9432-d4d23fb0722a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-377c47cc-946e-4216-99ad-9e2118dd22dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:22:22 compute-0 nova_compute[251992]: 2025-12-06 08:22:22.352 251996 DEBUG nova.network.neutron [req-7d2f2186-6be9-43be-a3a7-dcf53e84792a req-fc1b6e25-379c-454b-9432-d4d23fb0722a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Refreshing network info cache for port aa19ef6e-d7de-47d8-bb55-d5d7bd727abe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:22:22 compute-0 nova_compute[251992]: 2025-12-06 08:22:22.354 251996 DEBUG oslo_concurrency.processutils [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:22 compute-0 nova_compute[251992]: 2025-12-06 08:22:22.361 251996 DEBUG nova.compute.provider_tree [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:22:22 compute-0 nova_compute[251992]: 2025-12-06 08:22:22.572 251996 DEBUG nova.scheduler.client.report [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:22:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:22.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:22 compute-0 nova_compute[251992]: 2025-12-06 08:22:22.850 251996 DEBUG oslo_concurrency.lockutils [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:22 compute-0 nova_compute[251992]: 2025-12-06 08:22:22.892 251996 INFO nova.scheduler.client.report [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Deleted allocations for instance f2b69bc0-d3ae-4f20-9026-421ff6537c3f
Dec 06 08:22:22 compute-0 nova_compute[251992]: 2025-12-06 08:22:22.995 251996 DEBUG oslo_concurrency.lockutils [None req-80f00633-17bf-4620-bd7f-32941fdc879e 8e8feb4540af4e2caa45a88a9202dbe2 4b2dc4b8729f446a9c7ac69ca446f71d - - default default] Lock "f2b69bc0-d3ae-4f20-9026-421ff6537c3f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:23 compute-0 ceph-mon[74339]: pgmap v3904: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 175 op/s
Dec 06 08:22:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4093370634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:23.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:22:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:22:23 compute-0 nova_compute[251992]: 2025-12-06 08:22:23.949 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:23 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 145 op/s
Dec 06 08:22:24 compute-0 podman[408403]: 2025-12-06 08:22:24.468279118 +0000 UTC m=+0.119764395 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 08:22:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:22:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:24.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:22:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:25 compute-0 ceph-mon[74339]: pgmap v3905: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 145 op/s
Dec 06 08:22:25 compute-0 sudo[408429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:25 compute-0 sudo[408429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:25 compute-0 sudo[408429]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:25 compute-0 nova_compute[251992]: 2025-12-06 08:22:25.292 251996 DEBUG nova.network.neutron [req-7d2f2186-6be9-43be-a3a7-dcf53e84792a req-fc1b6e25-379c-454b-9432-d4d23fb0722a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Updated VIF entry in instance network info cache for port aa19ef6e-d7de-47d8-bb55-d5d7bd727abe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:22:25 compute-0 nova_compute[251992]: 2025-12-06 08:22:25.292 251996 DEBUG nova.network.neutron [req-7d2f2186-6be9-43be-a3a7-dcf53e84792a req-fc1b6e25-379c-454b-9432-d4d23fb0722a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Updating instance_info_cache with network_info: [{"id": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "address": "fa:16:3e:b3:eb:9e", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa19ef6e-d7", "ovs_interfaceid": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:22:25 compute-0 sudo[408454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:25 compute-0 sudo[408454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:25 compute-0 sudo[408454]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:25 compute-0 nova_compute[251992]: 2025-12-06 08:22:25.342 251996 DEBUG oslo_concurrency.lockutils [req-7d2f2186-6be9-43be-a3a7-dcf53e84792a req-fc1b6e25-379c-454b-9432-d4d23fb0722a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-377c47cc-946e-4216-99ad-9e2118dd22dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:22:25 compute-0 nova_compute[251992]: 2025-12-06 08:22:25.509 251996 DEBUG nova.compute.manager [req-26739d4f-642b-40cd-93f7-62583644267d req-365b9785-9c3c-49ee-9a52-c9cad37eedbb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Received event network-changed-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:25 compute-0 nova_compute[251992]: 2025-12-06 08:22:25.509 251996 DEBUG nova.compute.manager [req-26739d4f-642b-40cd-93f7-62583644267d req-365b9785-9c3c-49ee-9a52-c9cad37eedbb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Refreshing instance network info cache due to event network-changed-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:22:25 compute-0 nova_compute[251992]: 2025-12-06 08:22:25.510 251996 DEBUG oslo_concurrency.lockutils [req-26739d4f-642b-40cd-93f7-62583644267d req-365b9785-9c3c-49ee-9a52-c9cad37eedbb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-377c47cc-946e-4216-99ad-9e2118dd22dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:22:25 compute-0 nova_compute[251992]: 2025-12-06 08:22:25.510 251996 DEBUG oslo_concurrency.lockutils [req-26739d4f-642b-40cd-93f7-62583644267d req-365b9785-9c3c-49ee-9a52-c9cad37eedbb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-377c47cc-946e-4216-99ad-9e2118dd22dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:22:25 compute-0 nova_compute[251992]: 2025-12-06 08:22:25.510 251996 DEBUG nova.network.neutron [req-26739d4f-642b-40cd-93f7-62583644267d req-365b9785-9c3c-49ee-9a52-c9cad37eedbb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Refreshing network info cache for port aa19ef6e-d7de-47d8-bb55-d5d7bd727abe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:22:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:25.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 KiB/s wr, 103 op/s
Dec 06 08:22:26 compute-0 nova_compute[251992]: 2025-12-06 08:22:26.119 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:26.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031676030616043177 of space, bias 1.0, pg target 0.9502809184812954 quantized to 32 (current 32)
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004331008340312675 of space, bias 1.0, pg target 1.2993025020938025 quantized to 32 (current 32)
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:22:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 08:22:27 compute-0 ceph-mon[74339]: pgmap v3906: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 KiB/s wr, 103 op/s
Dec 06 08:22:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:22:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:22:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:22:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:22:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:22:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:27.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 KiB/s wr, 100 op/s
Dec 06 08:22:28 compute-0 nova_compute[251992]: 2025-12-06 08:22:28.331 251996 DEBUG nova.network.neutron [req-26739d4f-642b-40cd-93f7-62583644267d req-365b9785-9c3c-49ee-9a52-c9cad37eedbb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Updated VIF entry in instance network info cache for port aa19ef6e-d7de-47d8-bb55-d5d7bd727abe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:22:28 compute-0 nova_compute[251992]: 2025-12-06 08:22:28.332 251996 DEBUG nova.network.neutron [req-26739d4f-642b-40cd-93f7-62583644267d req-365b9785-9c3c-49ee-9a52-c9cad37eedbb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Updating instance_info_cache with network_info: [{"id": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "address": "fa:16:3e:b3:eb:9e", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa19ef6e-d7", "ovs_interfaceid": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:22:28 compute-0 nova_compute[251992]: 2025-12-06 08:22:28.408 251996 DEBUG oslo_concurrency.lockutils [req-26739d4f-642b-40cd-93f7-62583644267d req-365b9785-9c3c-49ee-9a52-c9cad37eedbb 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-377c47cc-946e-4216-99ad-9e2118dd22dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:22:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/433424503' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:22:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/433424503' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:22:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:28.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:28 compute-0 nova_compute[251992]: 2025-12-06 08:22:28.950 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:22:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:29.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:22:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:29 compute-0 ceph-mon[74339]: pgmap v3907: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 KiB/s wr, 100 op/s
Dec 06 08:22:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 KiB/s wr, 98 op/s
Dec 06 08:22:30 compute-0 podman[408481]: 2025-12-06 08:22:30.431845805 +0000 UTC m=+0.055416285 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 06 08:22:30 compute-0 podman[408482]: 2025-12-06 08:22:30.46000122 +0000 UTC m=+0.082621215 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 08:22:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:30.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:31 compute-0 nova_compute[251992]: 2025-12-06 08:22:31.081 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009336.0798767, f2b69bc0-d3ae-4f20-9026-421ff6537c3f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:22:31 compute-0 nova_compute[251992]: 2025-12-06 08:22:31.083 251996 INFO nova.compute.manager [-] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] VM Stopped (Lifecycle Event)
Dec 06 08:22:31 compute-0 nova_compute[251992]: 2025-12-06 08:22:31.121 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:31 compute-0 nova_compute[251992]: 2025-12-06 08:22:31.133 251996 DEBUG nova.compute.manager [None req-b46afd67-fdb1-4372-8cc1-e9c840ac90b1 - - - - - -] [instance: f2b69bc0-d3ae-4f20-9026-421ff6537c3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:22:31 compute-0 ceph-mon[74339]: pgmap v3908: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 KiB/s wr, 98 op/s
Dec 06 08:22:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:22:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:31.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:22:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 305 active+clean; 268 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 136 op/s
Dec 06 08:22:32 compute-0 ovn_controller[147168]: 2025-12-06T08:22:32Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:eb:9e 10.100.0.11
Dec 06 08:22:32 compute-0 ovn_controller[147168]: 2025-12-06T08:22:32Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:eb:9e 10.100.0.11
Dec 06 08:22:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:32.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:33 compute-0 ceph-mon[74339]: pgmap v3909: 305 pgs: 305 active+clean; 268 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 136 op/s
Dec 06 08:22:33 compute-0 ovn_controller[147168]: 2025-12-06T08:22:33Z|00817|binding|INFO|Releasing lport e79b0cd9-d776-4f44-8b09-bc4e745f59bc from this chassis (sb_readonly=0)
Dec 06 08:22:33 compute-0 nova_compute[251992]: 2025-12-06 08:22:33.381 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:33.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:33 compute-0 nova_compute[251992]: 2025-12-06 08:22:33.952 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 305 active+clean; 268 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 177 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Dec 06 08:22:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:34.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:35.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:35 compute-0 ceph-mon[74339]: pgmap v3910: 305 pgs: 305 active+clean; 268 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 177 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Dec 06 08:22:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 305 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 06 08:22:36 compute-0 nova_compute[251992]: 2025-12-06 08:22:36.124 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:36.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:37 compute-0 ceph-mon[74339]: pgmap v3911: 305 pgs: 305 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Dec 06 08:22:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:37.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:37 compute-0 sudo[408518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:37 compute-0 sudo[408518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:37 compute-0 sudo[408518]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:37 compute-0 sudo[408543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:22:37 compute-0 sudo[408543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:37 compute-0 sudo[408543]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:37 compute-0 sudo[408568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:37 compute-0 sudo[408568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:37 compute-0 sudo[408568]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:37 compute-0 sudo[408593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:22:37 compute-0 sudo[408593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 08:22:38 compute-0 sudo[408593]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:22:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:22:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:22:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:22:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:22:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:22:38 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:22:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:22:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 683fa806-3ad6-4525-8042-c208e9bfa3ef does not exist
Dec 06 08:22:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4e649bd9-beeb-44d0-ab57-3b0988391c41 does not exist
Dec 06 08:22:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1c6cb953-3d99-4ac2-b252-e7a1a63766d6 does not exist
Dec 06 08:22:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:22:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:22:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:22:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:22:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:22:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:22:38 compute-0 sudo[408649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:38 compute-0 sudo[408649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:38 compute-0 sudo[408649]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:38 compute-0 sudo[408674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:22:38 compute-0 sudo[408674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:38 compute-0 sudo[408674]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:38.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:38 compute-0 sudo[408699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:38 compute-0 sudo[408699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:38 compute-0 sudo[408699]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:38 compute-0 sudo[408724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:22:38 compute-0 sudo[408724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:38 compute-0 podman[408790]: 2025-12-06 08:22:38.98756423 +0000 UTC m=+0.044386285 container create 67f325b1dcf763aa47313c92c1281f93c969a3e5d4f305bb858eca8b6a7c70bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:22:39 compute-0 nova_compute[251992]: 2025-12-06 08:22:39.001 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:39 compute-0 systemd[1]: Started libpod-conmon-67f325b1dcf763aa47313c92c1281f93c969a3e5d4f305bb858eca8b6a7c70bb.scope.
Dec 06 08:22:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:22:39 compute-0 podman[408790]: 2025-12-06 08:22:39.06371601 +0000 UTC m=+0.120538085 container init 67f325b1dcf763aa47313c92c1281f93c969a3e5d4f305bb858eca8b6a7c70bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:22:39 compute-0 podman[408790]: 2025-12-06 08:22:38.969561712 +0000 UTC m=+0.026383797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:22:39 compute-0 podman[408790]: 2025-12-06 08:22:39.070431532 +0000 UTC m=+0.127253587 container start 67f325b1dcf763aa47313c92c1281f93c969a3e5d4f305bb858eca8b6a7c70bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 08:22:39 compute-0 podman[408790]: 2025-12-06 08:22:39.074198054 +0000 UTC m=+0.131020139 container attach 67f325b1dcf763aa47313c92c1281f93c969a3e5d4f305bb858eca8b6a7c70bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:22:39 compute-0 nostalgic_brattain[408806]: 167 167
Dec 06 08:22:39 compute-0 systemd[1]: libpod-67f325b1dcf763aa47313c92c1281f93c969a3e5d4f305bb858eca8b6a7c70bb.scope: Deactivated successfully.
Dec 06 08:22:39 compute-0 conmon[408806]: conmon 67f325b1dcf763aa4731 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67f325b1dcf763aa47313c92c1281f93c969a3e5d4f305bb858eca8b6a7c70bb.scope/container/memory.events
Dec 06 08:22:39 compute-0 podman[408790]: 2025-12-06 08:22:39.07805372 +0000 UTC m=+0.134875785 container died 67f325b1dcf763aa47313c92c1281f93c969a3e5d4f305bb858eca8b6a7c70bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 08:22:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cd9a91c6aa4c281dc2a3a48fcea3d64d758f25f4428b04f73e8016a48b815ec-merged.mount: Deactivated successfully.
Dec 06 08:22:39 compute-0 podman[408790]: 2025-12-06 08:22:39.111431886 +0000 UTC m=+0.168253951 container remove 67f325b1dcf763aa47313c92c1281f93c969a3e5d4f305bb858eca8b6a7c70bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_brattain, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:22:39 compute-0 systemd[1]: libpod-conmon-67f325b1dcf763aa47313c92c1281f93c969a3e5d4f305bb858eca8b6a7c70bb.scope: Deactivated successfully.
Dec 06 08:22:39 compute-0 podman[408831]: 2025-12-06 08:22:39.278311289 +0000 UTC m=+0.040911211 container create 6837f82584bdc0911bba6763aea3675c20ee3ef390a4089b9906cf608140e583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swirles, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 08:22:39 compute-0 systemd[1]: Started libpod-conmon-6837f82584bdc0911bba6763aea3675c20ee3ef390a4089b9906cf608140e583.scope.
Dec 06 08:22:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc18fbb3e96479cce339e98f444f08b69de73489cc7a31cb9752532bba1f60c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc18fbb3e96479cce339e98f444f08b69de73489cc7a31cb9752532bba1f60c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc18fbb3e96479cce339e98f444f08b69de73489cc7a31cb9752532bba1f60c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc18fbb3e96479cce339e98f444f08b69de73489cc7a31cb9752532bba1f60c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc18fbb3e96479cce339e98f444f08b69de73489cc7a31cb9752532bba1f60c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:39 compute-0 podman[408831]: 2025-12-06 08:22:39.262174291 +0000 UTC m=+0.024774243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:22:39 compute-0 podman[408831]: 2025-12-06 08:22:39.365649652 +0000 UTC m=+0.128249594 container init 6837f82584bdc0911bba6763aea3675c20ee3ef390a4089b9906cf608140e583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swirles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:22:39 compute-0 podman[408831]: 2025-12-06 08:22:39.372360445 +0000 UTC m=+0.134960367 container start 6837f82584bdc0911bba6763aea3675c20ee3ef390a4089b9906cf608140e583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swirles, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 08:22:39 compute-0 podman[408831]: 2025-12-06 08:22:39.375609493 +0000 UTC m=+0.138209435 container attach 6837f82584bdc0911bba6763aea3675c20ee3ef390a4089b9906cf608140e583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swirles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:22:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:39.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:39 compute-0 ceph-mon[74339]: pgmap v3912: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 08:22:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:22:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:22:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:22:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:22:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:39.999 251996 DEBUG oslo_concurrency.lockutils [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "377c47cc-946e-4216-99ad-9e2118dd22dd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.001 251996 DEBUG oslo_concurrency.lockutils [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.001 251996 DEBUG oslo_concurrency.lockutils [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.002 251996 DEBUG oslo_concurrency.lockutils [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.002 251996 DEBUG oslo_concurrency.lockutils [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.003 251996 INFO nova.compute.manager [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Terminating instance
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.004 251996 DEBUG nova.compute.manager [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:22:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 08:22:40 compute-0 inspiring_swirles[408847]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:22:40 compute-0 inspiring_swirles[408847]: --> relative data size: 1.0
Dec 06 08:22:40 compute-0 inspiring_swirles[408847]: --> All data devices are unavailable
Dec 06 08:22:40 compute-0 systemd[1]: libpod-6837f82584bdc0911bba6763aea3675c20ee3ef390a4089b9906cf608140e583.scope: Deactivated successfully.
Dec 06 08:22:40 compute-0 podman[408831]: 2025-12-06 08:22:40.183617663 +0000 UTC m=+0.946217585 container died 6837f82584bdc0911bba6763aea3675c20ee3ef390a4089b9906cf608140e583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swirles, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:22:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc18fbb3e96479cce339e98f444f08b69de73489cc7a31cb9752532bba1f60c6-merged.mount: Deactivated successfully.
Dec 06 08:22:40 compute-0 podman[408831]: 2025-12-06 08:22:40.237159968 +0000 UTC m=+0.999759890 container remove 6837f82584bdc0911bba6763aea3675c20ee3ef390a4089b9906cf608140e583 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 08:22:40 compute-0 systemd[1]: libpod-conmon-6837f82584bdc0911bba6763aea3675c20ee3ef390a4089b9906cf608140e583.scope: Deactivated successfully.
Dec 06 08:22:40 compute-0 sudo[408724]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:40 compute-0 kernel: tapaa19ef6e-d7 (unregistering): left promiscuous mode
Dec 06 08:22:40 compute-0 NetworkManager[48965]: <info>  [1765009360.2918] device (tapaa19ef6e-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:22:40 compute-0 ovn_controller[147168]: 2025-12-06T08:22:40Z|00818|binding|INFO|Releasing lport aa19ef6e-d7de-47d8-bb55-d5d7bd727abe from this chassis (sb_readonly=0)
Dec 06 08:22:40 compute-0 ovn_controller[147168]: 2025-12-06T08:22:40Z|00819|binding|INFO|Setting lport aa19ef6e-d7de-47d8-bb55-d5d7bd727abe down in Southbound
Dec 06 08:22:40 compute-0 ovn_controller[147168]: 2025-12-06T08:22:40Z|00820|binding|INFO|Removing iface tapaa19ef6e-d7 ovn-installed in OVS
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.344 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.349 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:eb:9e 10.100.0.11', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '377c47cc-946e-4216-99ad-9e2118dd22dd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d7af5e0-a504-41e1-b9f6-95b3d5bf94dd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=aa19ef6e-d7de-47d8-bb55-d5d7bd727abe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.352 158118 INFO neutron.agent.ovn.metadata.agent [-] Port aa19ef6e-d7de-47d8-bb55-d5d7bd727abe in datapath 8e3ac9aa-9766-45cd-a8b5-88cc15193af6 unbound from our chassis
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.353 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e3ac9aa-9766-45cd-a8b5-88cc15193af6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.354 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf572b7-0f76-464b-9d55-62db24a76105]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.355 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6 namespace which is not needed anymore
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.356 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:40 compute-0 sudo[408875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:40 compute-0 sudo[408875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:40 compute-0 sudo[408875]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:40 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000d9.scope: Deactivated successfully.
Dec 06 08:22:40 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000d9.scope: Consumed 14.254s CPU time.
Dec 06 08:22:40 compute-0 systemd-machined[212986]: Machine qemu-96-instance-000000d9 terminated.
Dec 06 08:22:40 compute-0 sudo[408911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:22:40 compute-0 sudo[408911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:40 compute-0 sudo[408911]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.445 251996 INFO nova.virt.libvirt.driver [-] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Instance destroyed successfully.
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.446 251996 DEBUG nova.objects.instance [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'resources' on Instance uuid 377c47cc-946e-4216-99ad-9e2118dd22dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.468 251996 DEBUG nova.virt.libvirt.vif [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:21:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-359828598',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-359828598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-gen',id=217,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKbFho9uZYw/C67SrAZAOll1Yn6PYb0ai5sIBHmAL6dW73Ch+qRjce9a6w2oaJggOsc3UVjACoOu/DVsfm+enspt+H1pyOFCemHZp5ou5LgCYmgT/pXXIRPYRaBdn6h/nw==',key_name='tempest-TestSecurityGroupsBasicOps-443327318',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:22:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-yv5ldyi3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:22:16Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=377c47cc-946e-4216-99ad-9e2118dd22dd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "address": "fa:16:3e:b3:eb:9e", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa19ef6e-d7", "ovs_interfaceid": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.469 251996 DEBUG nova.network.os_vif_util [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "address": "fa:16:3e:b3:eb:9e", "network": {"id": "8e3ac9aa-9766-45cd-a8b5-88cc15193af6", "bridge": "br-int", "label": "tempest-network-smoke--1070081001", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa19ef6e-d7", "ovs_interfaceid": "aa19ef6e-d7de-47d8-bb55-d5d7bd727abe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.470 251996 DEBUG nova.network.os_vif_util [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b3:eb:9e,bridge_name='br-int',has_traffic_filtering=True,id=aa19ef6e-d7de-47d8-bb55-d5d7bd727abe,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa19ef6e-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.470 251996 DEBUG os_vif [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:eb:9e,bridge_name='br-int',has_traffic_filtering=True,id=aa19ef6e-d7de-47d8-bb55-d5d7bd727abe,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa19ef6e-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.472 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.472 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa19ef6e-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.474 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.475 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.479 251996 INFO os_vif [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b3:eb:9e,bridge_name='br-int',has_traffic_filtering=True,id=aa19ef6e-d7de-47d8-bb55-d5d7bd727abe,network=Network(8e3ac9aa-9766-45cd-a8b5-88cc15193af6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa19ef6e-d7')
Dec 06 08:22:40 compute-0 sudo[408955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:40 compute-0 sudo[408955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:40 compute-0 sudo[408955]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:40 compute-0 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[408279]: [NOTICE]   (408287) : haproxy version is 2.8.14-c23fe91
Dec 06 08:22:40 compute-0 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[408279]: [NOTICE]   (408287) : path to executable is /usr/sbin/haproxy
Dec 06 08:22:40 compute-0 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[408279]: [WARNING]  (408287) : Exiting Master process...
Dec 06 08:22:40 compute-0 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[408279]: [ALERT]    (408287) : Current worker (408304) exited with code 143 (Terminated)
Dec 06 08:22:40 compute-0 neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6[408279]: [WARNING]  (408287) : All workers exited. Exiting... (0)
Dec 06 08:22:40 compute-0 systemd[1]: libpod-6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc.scope: Deactivated successfully.
Dec 06 08:22:40 compute-0 podman[408960]: 2025-12-06 08:22:40.516749283 +0000 UTC m=+0.054238414 container died 6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true)
Dec 06 08:22:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d79ccdad7b18c1f5e63cfd2875571e71788ab86952de657fc4072748e3de2f1-merged.mount: Deactivated successfully.
Dec 06 08:22:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc-userdata-shm.mount: Deactivated successfully.
Dec 06 08:22:40 compute-0 podman[408960]: 2025-12-06 08:22:40.551391245 +0000 UTC m=+0.088880366 container cleanup 6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:22:40 compute-0 sudo[409009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:22:40 compute-0 sudo[409009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:40 compute-0 systemd[1]: libpod-conmon-6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc.scope: Deactivated successfully.
Dec 06 08:22:40 compute-0 podman[409054]: 2025-12-06 08:22:40.619300309 +0000 UTC m=+0.045138907 container remove 6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:22:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:40.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.645 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8a2c4c-42ce-47b8-9151-785eeaa7e4fe]: (4, ('Sat Dec  6 08:22:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6 (6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc)\n6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc\nSat Dec  6 08:22:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6 (6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc)\n6508d4dc97c4f9e5c0cb94f79eb55d62ea0b6308af0cfafeaaf20229febfd5dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.648 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[76a47a63-6e5a-4945-89fa-bf94e49319de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.649 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e3ac9aa-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.649 251996 DEBUG nova.compute.manager [req-c02365e8-7a6a-42fc-9666-e8673cc70e41 req-86561860-f34b-496e-9808-f9aaca236a58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Received event network-vif-unplugged-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.650 251996 DEBUG oslo_concurrency.lockutils [req-c02365e8-7a6a-42fc-9666-e8673cc70e41 req-86561860-f34b-496e-9808-f9aaca236a58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.650 251996 DEBUG oslo_concurrency.lockutils [req-c02365e8-7a6a-42fc-9666-e8673cc70e41 req-86561860-f34b-496e-9808-f9aaca236a58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.650 251996 DEBUG oslo_concurrency.lockutils [req-c02365e8-7a6a-42fc-9666-e8673cc70e41 req-86561860-f34b-496e-9808-f9aaca236a58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.651 251996 DEBUG nova.compute.manager [req-c02365e8-7a6a-42fc-9666-e8673cc70e41 req-86561860-f34b-496e-9808-f9aaca236a58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] No waiting events found dispatching network-vif-unplugged-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.651 251996 DEBUG nova.compute.manager [req-c02365e8-7a6a-42fc-9666-e8673cc70e41 req-86561860-f34b-496e-9808-f9aaca236a58 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Received event network-vif-unplugged-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.652 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:40 compute-0 kernel: tap8e3ac9aa-90: left promiscuous mode
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.657 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[71d959bf-4754-40fd-931c-cf47103c1901]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.669 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.674 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7db781bc-093d-4c84-a453-3a4a39980110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.675 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4a81b29d-a967-4e0d-9132-ef9bcca86ad5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.692 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[db8256af-6986-4c97-be4f-d23bab1d9b52]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 960794, 'reachable_time': 44885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409069, 'error': None, 'target': 'ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:40 compute-0 systemd[1]: run-netns-ovnmeta\x2d8e3ac9aa\x2d9766\x2d45cd\x2da8b5\x2d88cc15193af6.mount: Deactivated successfully.
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.696 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e3ac9aa-9766-45cd-a8b5-88cc15193af6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.696 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[6482d057-d9d7-4697-ab36-2ce66ce17f6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.717 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=104, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=103) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:22:40 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:40.718 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.717 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.787 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:40 compute-0 podman[409110]: 2025-12-06 08:22:40.879901219 +0000 UTC m=+0.036410310 container create f24a58254f1f2146f6986ef188c63e1c4978ef7f34f44ae32d60ee55e23649e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 08:22:40 compute-0 nova_compute[251992]: 2025-12-06 08:22:40.910 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:40 compute-0 systemd[1]: Started libpod-conmon-f24a58254f1f2146f6986ef188c63e1c4978ef7f34f44ae32d60ee55e23649e4.scope.
Dec 06 08:22:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:22:40 compute-0 podman[409110]: 2025-12-06 08:22:40.955351719 +0000 UTC m=+0.111860830 container init f24a58254f1f2146f6986ef188c63e1c4978ef7f34f44ae32d60ee55e23649e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:22:40 compute-0 podman[409110]: 2025-12-06 08:22:40.862699352 +0000 UTC m=+0.019208473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:22:40 compute-0 podman[409110]: 2025-12-06 08:22:40.962559464 +0000 UTC m=+0.119068555 container start f24a58254f1f2146f6986ef188c63e1c4978ef7f34f44ae32d60ee55e23649e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dewdney, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 08:22:40 compute-0 podman[409110]: 2025-12-06 08:22:40.967439447 +0000 UTC m=+0.123948558 container attach f24a58254f1f2146f6986ef188c63e1c4978ef7f34f44ae32d60ee55e23649e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dewdney, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:22:40 compute-0 interesting_dewdney[409126]: 167 167
Dec 06 08:22:40 compute-0 systemd[1]: libpod-f24a58254f1f2146f6986ef188c63e1c4978ef7f34f44ae32d60ee55e23649e4.scope: Deactivated successfully.
Dec 06 08:22:40 compute-0 podman[409110]: 2025-12-06 08:22:40.969893543 +0000 UTC m=+0.126402644 container died f24a58254f1f2146f6986ef188c63e1c4978ef7f34f44ae32d60ee55e23649e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:22:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-30e2244f7091aa6044f55b4f97eea174760b2714017e4da5a088159960eb87dc-merged.mount: Deactivated successfully.
Dec 06 08:22:41 compute-0 podman[409110]: 2025-12-06 08:22:41.000086744 +0000 UTC m=+0.156595845 container remove f24a58254f1f2146f6986ef188c63e1c4978ef7f34f44ae32d60ee55e23649e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:22:41 compute-0 systemd[1]: libpod-conmon-f24a58254f1f2146f6986ef188c63e1c4978ef7f34f44ae32d60ee55e23649e4.scope: Deactivated successfully.
Dec 06 08:22:41 compute-0 ceph-mon[74339]: pgmap v3913: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 368 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Dec 06 08:22:41 compute-0 podman[409151]: 2025-12-06 08:22:41.157630293 +0000 UTC m=+0.050577484 container create bb39a5a16c8841329faeb6474c1e8a0fc9078689f30f96fb553523c7f343428d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 08:22:41 compute-0 systemd[1]: Started libpod-conmon-bb39a5a16c8841329faeb6474c1e8a0fc9078689f30f96fb553523c7f343428d.scope.
Dec 06 08:22:41 compute-0 podman[409151]: 2025-12-06 08:22:41.128579105 +0000 UTC m=+0.021526346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:22:41 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d2a60f556002a4ce0c48f4c4126f8c65d2144ebe77b2ffbb746c4fba097a6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d2a60f556002a4ce0c48f4c4126f8c65d2144ebe77b2ffbb746c4fba097a6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d2a60f556002a4ce0c48f4c4126f8c65d2144ebe77b2ffbb746c4fba097a6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d2a60f556002a4ce0c48f4c4126f8c65d2144ebe77b2ffbb746c4fba097a6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:41 compute-0 podman[409151]: 2025-12-06 08:22:41.240225338 +0000 UTC m=+0.133172559 container init bb39a5a16c8841329faeb6474c1e8a0fc9078689f30f96fb553523c7f343428d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 08:22:41 compute-0 podman[409151]: 2025-12-06 08:22:41.252306046 +0000 UTC m=+0.145253247 container start bb39a5a16c8841329faeb6474c1e8a0fc9078689f30f96fb553523c7f343428d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:22:41 compute-0 podman[409151]: 2025-12-06 08:22:41.25540492 +0000 UTC m=+0.148352131 container attach bb39a5a16c8841329faeb6474c1e8a0fc9078689f30f96fb553523c7f343428d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 08:22:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:41.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 372 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Dec 06 08:22:42 compute-0 focused_blackburn[409168]: {
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:     "0": [
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:         {
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             "devices": [
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "/dev/loop3"
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             ],
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             "lv_name": "ceph_lv0",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             "lv_size": "7511998464",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             "name": "ceph_lv0",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             "tags": {
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "ceph.cluster_name": "ceph",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "ceph.crush_device_class": "",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "ceph.encrypted": "0",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "ceph.osd_id": "0",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "ceph.type": "block",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:                 "ceph.vdo": "0"
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             },
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             "type": "block",
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:             "vg_name": "ceph_vg0"
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:         }
Dec 06 08:22:42 compute-0 focused_blackburn[409168]:     ]
Dec 06 08:22:42 compute-0 focused_blackburn[409168]: }
Dec 06 08:22:42 compute-0 systemd[1]: libpod-bb39a5a16c8841329faeb6474c1e8a0fc9078689f30f96fb553523c7f343428d.scope: Deactivated successfully.
Dec 06 08:22:42 compute-0 podman[409178]: 2025-12-06 08:22:42.10720702 +0000 UTC m=+0.023761766 container died bb39a5a16c8841329faeb6474c1e8a0fc9078689f30f96fb553523c7f343428d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Dec 06 08:22:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0d2a60f556002a4ce0c48f4c4126f8c65d2144ebe77b2ffbb746c4fba097a6e-merged.mount: Deactivated successfully.
Dec 06 08:22:42 compute-0 podman[409178]: 2025-12-06 08:22:42.153691573 +0000 UTC m=+0.070246309 container remove bb39a5a16c8841329faeb6474c1e8a0fc9078689f30f96fb553523c7f343428d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 08:22:42 compute-0 systemd[1]: libpod-conmon-bb39a5a16c8841329faeb6474c1e8a0fc9078689f30f96fb553523c7f343428d.scope: Deactivated successfully.
Dec 06 08:22:42 compute-0 sudo[409009]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:42 compute-0 sudo[409193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:42 compute-0 sudo[409193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:42 compute-0 sudo[409193]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:42 compute-0 sudo[409218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:22:42 compute-0 sudo[409218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:42 compute-0 sudo[409218]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:42 compute-0 sudo[409243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:42 compute-0 sudo[409243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:42 compute-0 sudo[409243]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:42 compute-0 sudo[409268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:22:42 compute-0 sudo[409268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:42.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:42 compute-0 podman[409336]: 2025-12-06 08:22:42.703208251 +0000 UTC m=+0.038922718 container create 67cc1b68301001b84e47e0d46959ecea3119fc2b76830870e2794ec40d71bad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 08:22:42 compute-0 nova_compute[251992]: 2025-12-06 08:22:42.736 251996 DEBUG nova.compute.manager [req-69570451-9731-4731-aea8-2de87d74a1dd req-07aec591-c555-40f3-a1e3-954b30619da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Received event network-vif-plugged-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:42 compute-0 nova_compute[251992]: 2025-12-06 08:22:42.737 251996 DEBUG oslo_concurrency.lockutils [req-69570451-9731-4731-aea8-2de87d74a1dd req-07aec591-c555-40f3-a1e3-954b30619da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:42 compute-0 nova_compute[251992]: 2025-12-06 08:22:42.737 251996 DEBUG oslo_concurrency.lockutils [req-69570451-9731-4731-aea8-2de87d74a1dd req-07aec591-c555-40f3-a1e3-954b30619da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:42 compute-0 nova_compute[251992]: 2025-12-06 08:22:42.737 251996 DEBUG oslo_concurrency.lockutils [req-69570451-9731-4731-aea8-2de87d74a1dd req-07aec591-c555-40f3-a1e3-954b30619da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:42 compute-0 nova_compute[251992]: 2025-12-06 08:22:42.738 251996 DEBUG nova.compute.manager [req-69570451-9731-4731-aea8-2de87d74a1dd req-07aec591-c555-40f3-a1e3-954b30619da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] No waiting events found dispatching network-vif-plugged-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:22:42 compute-0 nova_compute[251992]: 2025-12-06 08:22:42.738 251996 WARNING nova.compute.manager [req-69570451-9731-4731-aea8-2de87d74a1dd req-07aec591-c555-40f3-a1e3-954b30619da3 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Received unexpected event network-vif-plugged-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe for instance with vm_state active and task_state deleting.
Dec 06 08:22:42 compute-0 systemd[1]: Started libpod-conmon-67cc1b68301001b84e47e0d46959ecea3119fc2b76830870e2794ec40d71bad6.scope.
Dec 06 08:22:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:22:42 compute-0 podman[409336]: 2025-12-06 08:22:42.68584911 +0000 UTC m=+0.021563577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:22:42 compute-0 podman[409336]: 2025-12-06 08:22:42.792802055 +0000 UTC m=+0.128516522 container init 67cc1b68301001b84e47e0d46959ecea3119fc2b76830870e2794ec40d71bad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:22:42 compute-0 podman[409336]: 2025-12-06 08:22:42.804527533 +0000 UTC m=+0.140242000 container start 67cc1b68301001b84e47e0d46959ecea3119fc2b76830870e2794ec40d71bad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:22:42 compute-0 podman[409336]: 2025-12-06 08:22:42.808728377 +0000 UTC m=+0.144442864 container attach 67cc1b68301001b84e47e0d46959ecea3119fc2b76830870e2794ec40d71bad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:22:42 compute-0 festive_hypatia[409352]: 167 167
Dec 06 08:22:42 compute-0 systemd[1]: libpod-67cc1b68301001b84e47e0d46959ecea3119fc2b76830870e2794ec40d71bad6.scope: Deactivated successfully.
Dec 06 08:22:42 compute-0 conmon[409352]: conmon 67cc1b68301001b84e47 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67cc1b68301001b84e47e0d46959ecea3119fc2b76830870e2794ec40d71bad6.scope/container/memory.events
Dec 06 08:22:42 compute-0 podman[409336]: 2025-12-06 08:22:42.81397885 +0000 UTC m=+0.149693307 container died 67cc1b68301001b84e47e0d46959ecea3119fc2b76830870e2794ec40d71bad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:22:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-897f320ef908d83bffd03a0bfc58906898759f5e7aad21af4691b72ffe0d3bab-merged.mount: Deactivated successfully.
Dec 06 08:22:42 compute-0 podman[409336]: 2025-12-06 08:22:42.856266359 +0000 UTC m=+0.191980816 container remove 67cc1b68301001b84e47e0d46959ecea3119fc2b76830870e2794ec40d71bad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hypatia, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 06 08:22:42 compute-0 systemd[1]: libpod-conmon-67cc1b68301001b84e47e0d46959ecea3119fc2b76830870e2794ec40d71bad6.scope: Deactivated successfully.
Dec 06 08:22:43 compute-0 podman[409376]: 2025-12-06 08:22:43.037931474 +0000 UTC m=+0.049469224 container create 4478474f6104ad690e3b1c2dc7731f2938b72884536a6befc5f152e71fd22a92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 08:22:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:22:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:22:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:22:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:22:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:22:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:22:43 compute-0 systemd[1]: Started libpod-conmon-4478474f6104ad690e3b1c2dc7731f2938b72884536a6befc5f152e71fd22a92.scope.
Dec 06 08:22:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f1679306551da109b7c6fecf748e46513e7eeaecc6a18b8cae53547baf09561/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f1679306551da109b7c6fecf748e46513e7eeaecc6a18b8cae53547baf09561/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f1679306551da109b7c6fecf748e46513e7eeaecc6a18b8cae53547baf09561/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f1679306551da109b7c6fecf748e46513e7eeaecc6a18b8cae53547baf09561/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:22:43 compute-0 podman[409376]: 2025-12-06 08:22:43.016998376 +0000 UTC m=+0.028536136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:22:43 compute-0 podman[409376]: 2025-12-06 08:22:43.112395267 +0000 UTC m=+0.123933027 container init 4478474f6104ad690e3b1c2dc7731f2938b72884536a6befc5f152e71fd22a92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 08:22:43 compute-0 podman[409376]: 2025-12-06 08:22:43.119019367 +0000 UTC m=+0.130557107 container start 4478474f6104ad690e3b1c2dc7731f2938b72884536a6befc5f152e71fd22a92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 08:22:43 compute-0 podman[409376]: 2025-12-06 08:22:43.122566944 +0000 UTC m=+0.134104704 container attach 4478474f6104ad690e3b1c2dc7731f2938b72884536a6befc5f152e71fd22a92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:22:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/779887381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:43 compute-0 nova_compute[251992]: 2025-12-06 08:22:43.277 251996 INFO nova.virt.libvirt.driver [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Deleting instance files /var/lib/nova/instances/377c47cc-946e-4216-99ad-9e2118dd22dd_del
Dec 06 08:22:43 compute-0 nova_compute[251992]: 2025-12-06 08:22:43.279 251996 INFO nova.virt.libvirt.driver [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Deletion of /var/lib/nova/instances/377c47cc-946e-4216-99ad-9e2118dd22dd_del complete
Dec 06 08:22:43 compute-0 nova_compute[251992]: 2025-12-06 08:22:43.328 251996 INFO nova.compute.manager [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Took 3.32 seconds to destroy the instance on the hypervisor.
Dec 06 08:22:43 compute-0 nova_compute[251992]: 2025-12-06 08:22:43.328 251996 DEBUG oslo.service.loopingcall [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:22:43 compute-0 nova_compute[251992]: 2025-12-06 08:22:43.328 251996 DEBUG nova.compute.manager [-] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:22:43 compute-0 nova_compute[251992]: 2025-12-06 08:22:43.329 251996 DEBUG nova.network.neutron [-] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:22:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:22:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:43.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:22:43 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:22:43.719 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '104'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:22:43 compute-0 sharp_easley[409393]: {
Dec 06 08:22:43 compute-0 sharp_easley[409393]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:22:43 compute-0 sharp_easley[409393]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:22:43 compute-0 sharp_easley[409393]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:22:43 compute-0 sharp_easley[409393]:         "osd_id": 0,
Dec 06 08:22:43 compute-0 sharp_easley[409393]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:22:43 compute-0 sharp_easley[409393]:         "type": "bluestore"
Dec 06 08:22:43 compute-0 sharp_easley[409393]:     }
Dec 06 08:22:43 compute-0 sharp_easley[409393]: }
Dec 06 08:22:43 compute-0 systemd[1]: libpod-4478474f6104ad690e3b1c2dc7731f2938b72884536a6befc5f152e71fd22a92.scope: Deactivated successfully.
Dec 06 08:22:43 compute-0 podman[409376]: 2025-12-06 08:22:43.934614223 +0000 UTC m=+0.946151963 container died 4478474f6104ad690e3b1c2dc7731f2938b72884536a6befc5f152e71fd22a92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 08:22:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f1679306551da109b7c6fecf748e46513e7eeaecc6a18b8cae53547baf09561-merged.mount: Deactivated successfully.
Dec 06 08:22:43 compute-0 podman[409376]: 2025-12-06 08:22:43.986686948 +0000 UTC m=+0.998224688 container remove 4478474f6104ad690e3b1c2dc7731f2938b72884536a6befc5f152e71fd22a92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 08:22:43 compute-0 systemd[1]: libpod-conmon-4478474f6104ad690e3b1c2dc7731f2938b72884536a6befc5f152e71fd22a92.scope: Deactivated successfully.
Dec 06 08:22:44 compute-0 nova_compute[251992]: 2025-12-06 08:22:44.003 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3915: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 110 KiB/s wr, 34 op/s
Dec 06 08:22:44 compute-0 sudo[409268]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:22:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:22:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:22:44 compute-0 ceph-mon[74339]: pgmap v3914: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 372 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Dec 06 08:22:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3962467000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:22:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2e641774-2d61-463c-85de-c217fb62ab00 does not exist
Dec 06 08:22:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0065165f-257d-49a7-bf87-adbe8fd12d8e does not exist
Dec 06 08:22:44 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 34f8c659-4831-4651-b939-73e848e67f0c does not exist
Dec 06 08:22:44 compute-0 sudo[409427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:44 compute-0 sudo[409427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:44 compute-0 sudo[409427]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:44 compute-0 sudo[409452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:22:44 compute-0 sudo[409452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:44 compute-0 sudo[409452]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:44.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:45 compute-0 sudo[409478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:45 compute-0 sudo[409478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:45 compute-0 sudo[409478]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:45 compute-0 sudo[409503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:22:45 compute-0 sudo[409503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:22:45 compute-0 sudo[409503]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:45 compute-0 nova_compute[251992]: 2025-12-06 08:22:45.474 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:45 compute-0 ceph-mon[74339]: pgmap v3915: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 201 KiB/s rd, 110 KiB/s wr, 34 op/s
Dec 06 08:22:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:22:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:22:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:45.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:45 compute-0 nova_compute[251992]: 2025-12-06 08:22:45.878 251996 DEBUG nova.network.neutron [-] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:22:45 compute-0 nova_compute[251992]: 2025-12-06 08:22:45.910 251996 INFO nova.compute.manager [-] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Took 2.58 seconds to deallocate network for instance.
Dec 06 08:22:45 compute-0 nova_compute[251992]: 2025-12-06 08:22:45.964 251996 DEBUG nova.compute.manager [req-55b5624b-31ab-4f58-a22b-f7897e49a729 req-2f018997-d01b-46fc-a50e-c8cf32b232c2 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Received event network-vif-deleted-aa19ef6e-d7de-47d8-bb55-d5d7bd727abe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:22:45 compute-0 nova_compute[251992]: 2025-12-06 08:22:45.991 251996 DEBUG oslo_concurrency.lockutils [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:45 compute-0 nova_compute[251992]: 2025-12-06 08:22:45.991 251996 DEBUG oslo_concurrency.lockutils [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3916: 305 pgs: 305 active+clean; 225 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 215 KiB/s rd, 111 KiB/s wr, 56 op/s
Dec 06 08:22:46 compute-0 nova_compute[251992]: 2025-12-06 08:22:46.039 251996 DEBUG oslo_concurrency.processutils [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:22:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:46 compute-0 nova_compute[251992]: 2025-12-06 08:22:46.495 251996 DEBUG oslo_concurrency.processutils [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:46 compute-0 nova_compute[251992]: 2025-12-06 08:22:46.501 251996 DEBUG nova.compute.provider_tree [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:22:46 compute-0 nova_compute[251992]: 2025-12-06 08:22:46.515 251996 DEBUG nova.scheduler.client.report [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:22:46 compute-0 nova_compute[251992]: 2025-12-06 08:22:46.533 251996 DEBUG oslo_concurrency.lockutils [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3679459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:46 compute-0 nova_compute[251992]: 2025-12-06 08:22:46.564 251996 INFO nova.scheduler.client.report [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Deleted allocations for instance 377c47cc-946e-4216-99ad-9e2118dd22dd
Dec 06 08:22:46 compute-0 nova_compute[251992]: 2025-12-06 08:22:46.637 251996 DEBUG oslo_concurrency.lockutils [None req-cabfb1b7-101d-4d3b-ac38-94ed1bd81c50 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "377c47cc-946e-4216-99ad-9e2118dd22dd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:46.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:47.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:47 compute-0 ceph-mon[74339]: pgmap v3916: 305 pgs: 305 active+clean; 225 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 215 KiB/s rd, 111 KiB/s wr, 56 op/s
Dec 06 08:22:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3917: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 22 KiB/s wr, 30 op/s
Dec 06 08:22:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:48.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:48 compute-0 ceph-mon[74339]: pgmap v3917: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 22 KiB/s wr, 30 op/s
Dec 06 08:22:49 compute-0 nova_compute[251992]: 2025-12-06 08:22:49.005 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:49.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3918: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 06 08:22:50 compute-0 nova_compute[251992]: 2025-12-06 08:22:50.476 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:50.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:51 compute-0 ceph-mon[74339]: pgmap v3918: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 06 08:22:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:51.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3919: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 29 op/s
Dec 06 08:22:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:22:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:52.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:22:53 compute-0 ceph-mon[74339]: pgmap v3919: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 29 op/s
Dec 06 08:22:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:53.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:54 compute-0 nova_compute[251992]: 2025-12-06 08:22:54.008 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3920: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 2.2 KiB/s wr, 23 op/s
Dec 06 08:22:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/877748460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:54.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:55 compute-0 podman[409556]: 2025-12-06 08:22:55.436332611 +0000 UTC m=+0.088065152 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 08:22:55 compute-0 nova_compute[251992]: 2025-12-06 08:22:55.442 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009360.4415169, 377c47cc-946e-4216-99ad-9e2118dd22dd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:22:55 compute-0 nova_compute[251992]: 2025-12-06 08:22:55.442 251996 INFO nova.compute.manager [-] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] VM Stopped (Lifecycle Event)
Dec 06 08:22:55 compute-0 nova_compute[251992]: 2025-12-06 08:22:55.478 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:22:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:22:55 compute-0 nova_compute[251992]: 2025-12-06 08:22:55.606 251996 DEBUG nova.compute.manager [None req-4b8bf7c3-094d-43eb-9041-fcd324fd0bf2 - - - - - -] [instance: 377c47cc-946e-4216-99ad-9e2118dd22dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:22:55 compute-0 ceph-mon[74339]: pgmap v3920: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 2.2 KiB/s wr, 23 op/s
Dec 06 08:22:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3921: 305 pgs: 305 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 2.7 KiB/s wr, 36 op/s
Dec 06 08:22:56 compute-0 nova_compute[251992]: 2025-12-06 08:22:56.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:56.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:56 compute-0 ceph-mon[74339]: pgmap v3921: 305 pgs: 305 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 2.7 KiB/s wr, 36 op/s
Dec 06 08:22:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3922: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 06 08:22:58 compute-0 nova_compute[251992]: 2025-12-06 08:22:58.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:22:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:22:58.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:58 compute-0 nova_compute[251992]: 2025-12-06 08:22:58.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:58 compute-0 nova_compute[251992]: 2025-12-06 08:22:58.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:58 compute-0 nova_compute[251992]: 2025-12-06 08:22:58.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:22:58 compute-0 nova_compute[251992]: 2025-12-06 08:22:58.682 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:22:58 compute-0 nova_compute[251992]: 2025-12-06 08:22:58.683 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.009 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:22:59 compute-0 ceph-mon[74339]: pgmap v3922: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Dec 06 08:22:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:22:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3546864504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.118 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.277 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.279 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4062MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.279 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.280 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.351 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.352 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.470 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:22:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:22:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:22:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:22:59.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:22:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:22:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:22:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3666536907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.911 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.916 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.933 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.956 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:22:59 compute-0 nova_compute[251992]: 2025-12-06 08:22:59.957 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3923: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:23:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3546864504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3666536907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:00 compute-0 nova_compute[251992]: 2025-12-06 08:23:00.479 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:00.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:01 compute-0 ceph-mon[74339]: pgmap v3923: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:23:01 compute-0 podman[409633]: 2025-12-06 08:23:01.39000327 +0000 UTC m=+0.053369751 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 08:23:01 compute-0 podman[409634]: 2025-12-06 08:23:01.395598332 +0000 UTC m=+0.057148734 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:23:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:01.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:01 compute-0 nova_compute[251992]: 2025-12-06 08:23:01.957 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3924: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:23:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:02.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:03.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:03 compute-0 ceph-mon[74339]: pgmap v3924: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:23:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1354063911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4214306056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:23:03.899 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:23:03.899 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:23:03.900 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:04 compute-0 nova_compute[251992]: 2025-12-06 08:23:04.010 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3925: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:23:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:04.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:04 compute-0 ceph-mon[74339]: pgmap v3925: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:23:05 compute-0 nova_compute[251992]: 2025-12-06 08:23:05.481 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:05 compute-0 sudo[409673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:05 compute-0 sudo[409673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:05 compute-0 sudo[409673]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:05.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:05 compute-0 sudo[409698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:05 compute-0 sudo[409698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:05 compute-0 sudo[409698]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:05 compute-0 nova_compute[251992]: 2025-12-06 08:23:05.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:05 compute-0 nova_compute[251992]: 2025-12-06 08:23:05.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:23:05 compute-0 nova_compute[251992]: 2025-12-06 08:23:05.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:23:05 compute-0 nova_compute[251992]: 2025-12-06 08:23:05.777 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:23:05 compute-0 nova_compute[251992]: 2025-12-06 08:23:05.777 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3926: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:23:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:06.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:07 compute-0 ceph-mon[74339]: pgmap v3926: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:23:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:23:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:07.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:23:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3927: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 682 B/s wr, 13 op/s
Dec 06 08:23:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:08.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:09 compute-0 nova_compute[251992]: 2025-12-06 08:23:09.012 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:23:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2801272996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:23:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:23:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2801272996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:23:09 compute-0 ceph-mon[74339]: pgmap v3927: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 682 B/s wr, 13 op/s
Dec 06 08:23:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2801272996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:23:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2801272996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:23:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:09.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3928: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:10 compute-0 nova_compute[251992]: 2025-12-06 08:23:10.482 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:10 compute-0 nova_compute[251992]: 2025-12-06 08:23:10.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:10.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:11 compute-0 ceph-mon[74339]: pgmap v3928: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:11.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:11 compute-0 nova_compute[251992]: 2025-12-06 08:23:11.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3929: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:12.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:12 compute-0 ceph-mon[74339]: pgmap v3929: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:23:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:23:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:23:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:23:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:23:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:23:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:23:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:13.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:23:14 compute-0 nova_compute[251992]: 2025-12-06 08:23:14.015 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:14.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:15 compute-0 ceph-mon[74339]: pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:15 compute-0 nova_compute[251992]: 2025-12-06 08:23:15.483 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:15.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:16.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:17 compute-0 ceph-mon[74339]: pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:23:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:17.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:23:17 compute-0 nova_compute[251992]: 2025-12-06 08:23:17.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:17 compute-0 nova_compute[251992]: 2025-12-06 08:23:17.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:17 compute-0 nova_compute[251992]: 2025-12-06 08:23:17.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:23:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:23:18
Dec 06 08:23:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:23:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:23:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root']
Dec 06 08:23:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:23:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:18.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:19 compute-0 nova_compute[251992]: 2025-12-06 08:23:19.016 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:19.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:19 compute-0 ceph-mon[74339]: pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:20 compute-0 nova_compute[251992]: 2025-12-06 08:23:20.514 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:20.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:20 compute-0 ceph-mon[74339]: pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:21.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2848336609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:22.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:22 compute-0 ceph-mon[74339]: pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:23.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:23:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:23:24 compute-0 nova_compute[251992]: 2025-12-06 08:23:24.017 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:24.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:25 compute-0 ceph-mon[74339]: pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:23:25 compute-0 nova_compute[251992]: 2025-12-06 08:23:25.516 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:25.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:25 compute-0 sudo[409733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:25 compute-0 sudo[409733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:25 compute-0 sudo[409733]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:25 compute-0 sudo[409764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:25 compute-0 sudo[409764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:25 compute-0 sudo[409764]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:25 compute-0 podman[409757]: 2025-12-06 08:23:25.770935895 +0000 UTC m=+0.104400928 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3936: 305 pgs: 305 active+clean; 156 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Dec 06 08:23:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2036688055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:23:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:23:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:26.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008048052763819096 of space, bias 1.0, pg target 0.24144158291457288 quantized to 32 (current 32)
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:23:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:23:27 compute-0 ceph-mon[74339]: pgmap v3936: 305 pgs: 305 active+clean; 156 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Dec 06 08:23:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3111255366' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:23:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:23:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:23:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:23:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:23:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:23:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:27.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3937: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:23:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:28.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:29 compute-0 nova_compute[251992]: 2025-12-06 08:23:29.018 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:29 compute-0 ceph-mon[74339]: pgmap v3937: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:23:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:23:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:29.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:23:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:23:29.674 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=105, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=104) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:23:29 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:23:29.675 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:23:29 compute-0 nova_compute[251992]: 2025-12-06 08:23:29.721 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3938: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:23:30 compute-0 nova_compute[251992]: 2025-12-06 08:23:30.518 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:23:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:30.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:23:31 compute-0 ceph-mon[74339]: pgmap v3938: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:23:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:23:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:31.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:23:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3939: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 985 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 08:23:32 compute-0 podman[409811]: 2025-12-06 08:23:32.390098461 +0000 UTC m=+0.047013528 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 08:23:32 compute-0 podman[409812]: 2025-12-06 08:23:32.400874954 +0000 UTC m=+0.055697864 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:23:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:23:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:32.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:23:32 compute-0 ovn_controller[147168]: 2025-12-06T08:23:32Z|00821|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Dec 06 08:23:33 compute-0 ceph-mon[74339]: pgmap v3939: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 985 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 08:23:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:34 compute-0 nova_compute[251992]: 2025-12-06 08:23:34.020 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3940: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 985 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 08:23:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:34.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:35 compute-0 ceph-mon[74339]: pgmap v3940: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 985 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 08:23:35 compute-0 nova_compute[251992]: 2025-12-06 08:23:35.520 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:35.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3941: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 08:23:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:23:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:36.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:23:37 compute-0 ceph-mon[74339]: pgmap v3941: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec 06 08:23:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:23:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:37.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:23:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3942: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 350 KiB/s wr, 74 op/s
Dec 06 08:23:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:38.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:39 compute-0 nova_compute[251992]: 2025-12-06 08:23:39.080 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:39 compute-0 ceph-mon[74339]: pgmap v3942: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 350 KiB/s wr, 74 op/s
Dec 06 08:23:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:23:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:39.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:23:39 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:23:39.677 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '105'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:23:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3943: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:23:40 compute-0 nova_compute[251992]: 2025-12-06 08:23:40.522 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:40.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:41 compute-0 ceph-mon[74339]: pgmap v3943: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:23:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:41.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3944: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:23:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:42.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:23:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:23:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:23:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:23:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:23:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:23:43 compute-0 ceph-mon[74339]: pgmap v3944: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:23:43 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1091127023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:23:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:43.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:23:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3945: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 994 KiB/s rd, 33 op/s
Dec 06 08:23:44 compute-0 nova_compute[251992]: 2025-12-06 08:23:44.081 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:44 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4095322003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:44.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:44 compute-0 sudo[409856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:44 compute-0 sudo[409856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:44 compute-0 sudo[409856]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:44 compute-0 sudo[409881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:23:44 compute-0 sudo[409881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:44 compute-0 sudo[409881]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:44 compute-0 sudo[409906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:44 compute-0 sudo[409906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:44 compute-0 sudo[409906]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:44 compute-0 sudo[409931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:23:44 compute-0 sudo[409931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:23:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:23:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:45 compute-0 sudo[409931]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:45 compute-0 nova_compute[251992]: 2025-12-06 08:23:45.523 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:45 compute-0 ceph-mon[74339]: pgmap v3945: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 994 KiB/s rd, 33 op/s
Dec 06 08:23:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:45.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:45 compute-0 sudo[409989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:45 compute-0 sudo[409989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:45 compute-0 sudo[409989]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:45 compute-0 sudo[410014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:45 compute-0 sudo[410014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:45 compute-0 sudo[410014]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3946: 305 pgs: 305 active+clean; 184 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 977 KiB/s wr, 72 op/s
Dec 06 08:23:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:23:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:23:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:23:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:23:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:23:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 864b8d62-ffc3-4a0f-9d1d-0222c22707b4 does not exist
Dec 06 08:23:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9e2cd2ca-bd64-4909-9182-ccca80ed76b5 does not exist
Dec 06 08:23:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b1fa68e1-0413-4ca7-9056-84df83ba2c0b does not exist
Dec 06 08:23:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:23:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:23:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:23:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:23:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:23:46 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:23:46 compute-0 sudo[410039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:46 compute-0 sudo[410039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:46 compute-0 sudo[410039]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:46 compute-0 sudo[410064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:23:46 compute-0 sudo[410064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:46 compute-0 sudo[410064]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:46 compute-0 sudo[410089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:46 compute-0 sudo[410089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:46 compute-0 sudo[410089]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:46 compute-0 sudo[410114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:23:46 compute-0 sudo[410114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:23:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:23:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:23:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:23:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:23:46 compute-0 podman[410178]: 2025-12-06 08:23:46.642767953 +0000 UTC m=+0.042744382 container create fc30edc12d00362c80a86e3176428b77b43a89901a0186aef0e6b3f547bc38c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_black, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 08:23:46 compute-0 systemd[1]: Started libpod-conmon-fc30edc12d00362c80a86e3176428b77b43a89901a0186aef0e6b3f547bc38c1.scope.
Dec 06 08:23:46 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:23:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:46.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:46 compute-0 podman[410178]: 2025-12-06 08:23:46.626989924 +0000 UTC m=+0.026966383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:23:46 compute-0 podman[410178]: 2025-12-06 08:23:46.730870157 +0000 UTC m=+0.130846596 container init fc30edc12d00362c80a86e3176428b77b43a89901a0186aef0e6b3f547bc38c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_black, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 08:23:46 compute-0 podman[410178]: 2025-12-06 08:23:46.737635091 +0000 UTC m=+0.137611520 container start fc30edc12d00362c80a86e3176428b77b43a89901a0186aef0e6b3f547bc38c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:23:46 compute-0 podman[410178]: 2025-12-06 08:23:46.740638632 +0000 UTC m=+0.140615061 container attach fc30edc12d00362c80a86e3176428b77b43a89901a0186aef0e6b3f547bc38c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:23:46 compute-0 objective_black[410194]: 167 167
Dec 06 08:23:46 compute-0 systemd[1]: libpod-fc30edc12d00362c80a86e3176428b77b43a89901a0186aef0e6b3f547bc38c1.scope: Deactivated successfully.
Dec 06 08:23:46 compute-0 podman[410178]: 2025-12-06 08:23:46.743989823 +0000 UTC m=+0.143966272 container died fc30edc12d00362c80a86e3176428b77b43a89901a0186aef0e6b3f547bc38c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:23:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad7786d23c5ca46cbf7f1d3247ba931edb2a17b3db36023f3e997c2165441ed7-merged.mount: Deactivated successfully.
Dec 06 08:23:46 compute-0 podman[410178]: 2025-12-06 08:23:46.781216414 +0000 UTC m=+0.181192833 container remove fc30edc12d00362c80a86e3176428b77b43a89901a0186aef0e6b3f547bc38c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_black, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 08:23:46 compute-0 systemd[1]: libpod-conmon-fc30edc12d00362c80a86e3176428b77b43a89901a0186aef0e6b3f547bc38c1.scope: Deactivated successfully.
Dec 06 08:23:46 compute-0 podman[410218]: 2025-12-06 08:23:46.95808012 +0000 UTC m=+0.041506059 container create c343f7778481f750dc094ac9aef407b7c3960fefa660f3a69942f61d47f856f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:23:46 compute-0 systemd[1]: Started libpod-conmon-c343f7778481f750dc094ac9aef407b7c3960fefa660f3a69942f61d47f856f1.scope.
Dec 06 08:23:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a900c1b12ce946601aa64b374ac6ee20aec04001ff7a023d643d9fbca320446c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a900c1b12ce946601aa64b374ac6ee20aec04001ff7a023d643d9fbca320446c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a900c1b12ce946601aa64b374ac6ee20aec04001ff7a023d643d9fbca320446c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a900c1b12ce946601aa64b374ac6ee20aec04001ff7a023d643d9fbca320446c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a900c1b12ce946601aa64b374ac6ee20aec04001ff7a023d643d9fbca320446c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:47 compute-0 podman[410218]: 2025-12-06 08:23:46.940325057 +0000 UTC m=+0.023751026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:23:47 compute-0 podman[410218]: 2025-12-06 08:23:47.038321729 +0000 UTC m=+0.121747678 container init c343f7778481f750dc094ac9aef407b7c3960fefa660f3a69942f61d47f856f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:23:47 compute-0 podman[410218]: 2025-12-06 08:23:47.044622301 +0000 UTC m=+0.128048250 container start c343f7778481f750dc094ac9aef407b7c3960fefa660f3a69942f61d47f856f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 08:23:47 compute-0 podman[410218]: 2025-12-06 08:23:47.253796822 +0000 UTC m=+0.337222801 container attach c343f7778481f750dc094ac9aef407b7c3960fefa660f3a69942f61d47f856f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:23:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:47.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:47 compute-0 ceph-mon[74339]: pgmap v3946: 305 pgs: 305 active+clean; 184 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 977 KiB/s wr, 72 op/s
Dec 06 08:23:47 compute-0 fervent_goldberg[410234]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:23:47 compute-0 fervent_goldberg[410234]: --> relative data size: 1.0
Dec 06 08:23:47 compute-0 fervent_goldberg[410234]: --> All data devices are unavailable
Dec 06 08:23:47 compute-0 systemd[1]: libpod-c343f7778481f750dc094ac9aef407b7c3960fefa660f3a69942f61d47f856f1.scope: Deactivated successfully.
Dec 06 08:23:47 compute-0 podman[410218]: 2025-12-06 08:23:47.878714839 +0000 UTC m=+0.962140778 container died c343f7778481f750dc094ac9aef407b7c3960fefa660f3a69942f61d47f856f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a900c1b12ce946601aa64b374ac6ee20aec04001ff7a023d643d9fbca320446c-merged.mount: Deactivated successfully.
Dec 06 08:23:47 compute-0 podman[410218]: 2025-12-06 08:23:47.93249021 +0000 UTC m=+1.015916169 container remove c343f7778481f750dc094ac9aef407b7c3960fefa660f3a69942f61d47f856f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldberg, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:23:47 compute-0 systemd[1]: libpod-conmon-c343f7778481f750dc094ac9aef407b7c3960fefa660f3a69942f61d47f856f1.scope: Deactivated successfully.
Dec 06 08:23:47 compute-0 sudo[410114]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:48 compute-0 sudo[410262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:48 compute-0 sudo[410262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:48 compute-0 sudo[410262]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3947: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:48 compute-0 sudo[410287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:23:48 compute-0 sudo[410287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:48 compute-0 sudo[410287]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:48 compute-0 sudo[410312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:48 compute-0 sudo[410312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:48 compute-0 sudo[410312]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:48 compute-0 sudo[410337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:23:48 compute-0 sudo[410337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:48 compute-0 podman[410401]: 2025-12-06 08:23:48.506836702 +0000 UTC m=+0.038133096 container create b0aeef1a6030e155ffb1ad4e8b3031fd1d2afc57b27faba4672a40b0834581f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:23:48 compute-0 systemd[1]: Started libpod-conmon-b0aeef1a6030e155ffb1ad4e8b3031fd1d2afc57b27faba4672a40b0834581f6.scope.
Dec 06 08:23:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:23:48 compute-0 podman[410401]: 2025-12-06 08:23:48.580393051 +0000 UTC m=+0.111689465 container init b0aeef1a6030e155ffb1ad4e8b3031fd1d2afc57b27faba4672a40b0834581f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:23:48 compute-0 podman[410401]: 2025-12-06 08:23:48.488897525 +0000 UTC m=+0.020193939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:23:48 compute-0 podman[410401]: 2025-12-06 08:23:48.587215956 +0000 UTC m=+0.118512370 container start b0aeef1a6030e155ffb1ad4e8b3031fd1d2afc57b27faba4672a40b0834581f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:23:48 compute-0 podman[410401]: 2025-12-06 08:23:48.590490375 +0000 UTC m=+0.121786789 container attach b0aeef1a6030e155ffb1ad4e8b3031fd1d2afc57b27faba4672a40b0834581f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 08:23:48 compute-0 pedantic_merkle[410417]: 167 167
Dec 06 08:23:48 compute-0 systemd[1]: libpod-b0aeef1a6030e155ffb1ad4e8b3031fd1d2afc57b27faba4672a40b0834581f6.scope: Deactivated successfully.
Dec 06 08:23:48 compute-0 podman[410401]: 2025-12-06 08:23:48.591774551 +0000 UTC m=+0.123070945 container died b0aeef1a6030e155ffb1ad4e8b3031fd1d2afc57b27faba4672a40b0834581f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5f0479192c05a711b175022db4c1ff2f5acf7da112521b396ea58f2ba0512cb-merged.mount: Deactivated successfully.
Dec 06 08:23:48 compute-0 podman[410401]: 2025-12-06 08:23:48.623180364 +0000 UTC m=+0.154476758 container remove b0aeef1a6030e155ffb1ad4e8b3031fd1d2afc57b27faba4672a40b0834581f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_merkle, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec 06 08:23:48 compute-0 systemd[1]: libpod-conmon-b0aeef1a6030e155ffb1ad4e8b3031fd1d2afc57b27faba4672a40b0834581f6.scope: Deactivated successfully.
Dec 06 08:23:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:48.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:48 compute-0 podman[410440]: 2025-12-06 08:23:48.795186156 +0000 UTC m=+0.039092503 container create e1db6ac9a2f00396b3939b82932735b101a456f4402a46d00f2accb5c0e73081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_montalcini, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:23:48 compute-0 systemd[1]: Started libpod-conmon-e1db6ac9a2f00396b3939b82932735b101a456f4402a46d00f2accb5c0e73081.scope.
Dec 06 08:23:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e3e1bac55027eb13511e2458f3f3d3c07b6da2d735b8c5fc2a509f9e197b88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e3e1bac55027eb13511e2458f3f3d3c07b6da2d735b8c5fc2a509f9e197b88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e3e1bac55027eb13511e2458f3f3d3c07b6da2d735b8c5fc2a509f9e197b88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e3e1bac55027eb13511e2458f3f3d3c07b6da2d735b8c5fc2a509f9e197b88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:48 compute-0 podman[410440]: 2025-12-06 08:23:48.864139429 +0000 UTC m=+0.108045776 container init e1db6ac9a2f00396b3939b82932735b101a456f4402a46d00f2accb5c0e73081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Dec 06 08:23:48 compute-0 podman[410440]: 2025-12-06 08:23:48.870721248 +0000 UTC m=+0.114627595 container start e1db6ac9a2f00396b3939b82932735b101a456f4402a46d00f2accb5c0e73081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:23:48 compute-0 podman[410440]: 2025-12-06 08:23:48.778348848 +0000 UTC m=+0.022255195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:23:48 compute-0 podman[410440]: 2025-12-06 08:23:48.877363509 +0000 UTC m=+0.121269906 container attach e1db6ac9a2f00396b3939b82932735b101a456f4402a46d00f2accb5c0e73081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:23:49 compute-0 nova_compute[251992]: 2025-12-06 08:23:49.132 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:23:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:49.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:23:49 compute-0 ceph-mon[74339]: pgmap v3947: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]: {
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:     "0": [
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:         {
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             "devices": [
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "/dev/loop3"
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             ],
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             "lv_name": "ceph_lv0",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             "lv_size": "7511998464",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             "name": "ceph_lv0",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             "tags": {
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "ceph.cluster_name": "ceph",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "ceph.crush_device_class": "",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "ceph.encrypted": "0",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "ceph.osd_id": "0",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "ceph.type": "block",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:                 "ceph.vdo": "0"
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             },
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             "type": "block",
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:             "vg_name": "ceph_vg0"
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:         }
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]:     ]
Dec 06 08:23:49 compute-0 stupefied_montalcini[410457]: }
Dec 06 08:23:49 compute-0 systemd[1]: libpod-e1db6ac9a2f00396b3939b82932735b101a456f4402a46d00f2accb5c0e73081.scope: Deactivated successfully.
Dec 06 08:23:49 compute-0 podman[410440]: 2025-12-06 08:23:49.682554102 +0000 UTC m=+0.926460449 container died e1db6ac9a2f00396b3939b82932735b101a456f4402a46d00f2accb5c0e73081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_montalcini, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-89e3e1bac55027eb13511e2458f3f3d3c07b6da2d735b8c5fc2a509f9e197b88-merged.mount: Deactivated successfully.
Dec 06 08:23:49 compute-0 podman[410440]: 2025-12-06 08:23:49.736273022 +0000 UTC m=+0.980179369 container remove e1db6ac9a2f00396b3939b82932735b101a456f4402a46d00f2accb5c0e73081 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:23:49 compute-0 systemd[1]: libpod-conmon-e1db6ac9a2f00396b3939b82932735b101a456f4402a46d00f2accb5c0e73081.scope: Deactivated successfully.
Dec 06 08:23:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:49 compute-0 sudo[410337]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:49 compute-0 sudo[410477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:49 compute-0 sudo[410477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:49 compute-0 sudo[410477]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:49 compute-0 sudo[410502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:23:49 compute-0 sudo[410502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:49 compute-0 sudo[410502]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:49 compute-0 sudo[410527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:49 compute-0 sudo[410527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:49 compute-0 sudo[410527]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:49 compute-0 sudo[410552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:23:49 compute-0 sudo[410552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3948: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:50 compute-0 podman[410617]: 2025-12-06 08:23:50.290712884 +0000 UTC m=+0.036978586 container create ce0d8c6aef98d7f24ad5935112324518a8096702ee6a3a54485f86f320222408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:23:50 compute-0 systemd[1]: Started libpod-conmon-ce0d8c6aef98d7f24ad5935112324518a8096702ee6a3a54485f86f320222408.scope.
Dec 06 08:23:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:23:50 compute-0 podman[410617]: 2025-12-06 08:23:50.35721119 +0000 UTC m=+0.103476912 container init ce0d8c6aef98d7f24ad5935112324518a8096702ee6a3a54485f86f320222408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:23:50 compute-0 podman[410617]: 2025-12-06 08:23:50.365178567 +0000 UTC m=+0.111444259 container start ce0d8c6aef98d7f24ad5935112324518a8096702ee6a3a54485f86f320222408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 08:23:50 compute-0 sweet_almeida[410633]: 167 167
Dec 06 08:23:50 compute-0 podman[410617]: 2025-12-06 08:23:50.369155965 +0000 UTC m=+0.115421667 container attach ce0d8c6aef98d7f24ad5935112324518a8096702ee6a3a54485f86f320222408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:23:50 compute-0 podman[410617]: 2025-12-06 08:23:50.274425491 +0000 UTC m=+0.020691203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:23:50 compute-0 systemd[1]: libpod-ce0d8c6aef98d7f24ad5935112324518a8096702ee6a3a54485f86f320222408.scope: Deactivated successfully.
Dec 06 08:23:50 compute-0 podman[410617]: 2025-12-06 08:23:50.370866241 +0000 UTC m=+0.117131953 container died ce0d8c6aef98d7f24ad5935112324518a8096702ee6a3a54485f86f320222408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c45b75ff2d4c521b21f08bbaee632cea0acab4a6bb4c0b3bf4fcde33e52e545d-merged.mount: Deactivated successfully.
Dec 06 08:23:50 compute-0 podman[410617]: 2025-12-06 08:23:50.403877589 +0000 UTC m=+0.150143281 container remove ce0d8c6aef98d7f24ad5935112324518a8096702ee6a3a54485f86f320222408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 08:23:50 compute-0 systemd[1]: libpod-conmon-ce0d8c6aef98d7f24ad5935112324518a8096702ee6a3a54485f86f320222408.scope: Deactivated successfully.
Dec 06 08:23:50 compute-0 nova_compute[251992]: 2025-12-06 08:23:50.524 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:50 compute-0 podman[410657]: 2025-12-06 08:23:50.571039969 +0000 UTC m=+0.043146373 container create 77f4a76843c8350bb387d9cdcd345e68ba9c596c78d778813a33ca0695f61950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kirch, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:23:50 compute-0 systemd[1]: Started libpod-conmon-77f4a76843c8350bb387d9cdcd345e68ba9c596c78d778813a33ca0695f61950.scope.
Dec 06 08:23:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a359766b07ab5b32bcaad93d3981e6f4095be002ec356f3d5a1aaf66fc019e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a359766b07ab5b32bcaad93d3981e6f4095be002ec356f3d5a1aaf66fc019e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a359766b07ab5b32bcaad93d3981e6f4095be002ec356f3d5a1aaf66fc019e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a359766b07ab5b32bcaad93d3981e6f4095be002ec356f3d5a1aaf66fc019e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:23:50 compute-0 podman[410657]: 2025-12-06 08:23:50.643745244 +0000 UTC m=+0.115851658 container init 77f4a76843c8350bb387d9cdcd345e68ba9c596c78d778813a33ca0695f61950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:23:50 compute-0 podman[410657]: 2025-12-06 08:23:50.550586444 +0000 UTC m=+0.022692878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:23:50 compute-0 podman[410657]: 2025-12-06 08:23:50.654317782 +0000 UTC m=+0.126424186 container start 77f4a76843c8350bb387d9cdcd345e68ba9c596c78d778813a33ca0695f61950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kirch, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 08:23:50 compute-0 podman[410657]: 2025-12-06 08:23:50.657644032 +0000 UTC m=+0.129750436 container attach 77f4a76843c8350bb387d9cdcd345e68ba9c596c78d778813a33ca0695f61950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:23:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:23:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:50.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:23:50 compute-0 ceph-mon[74339]: pgmap v3948: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:51 compute-0 youthful_kirch[410673]: {
Dec 06 08:23:51 compute-0 youthful_kirch[410673]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:23:51 compute-0 youthful_kirch[410673]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:23:51 compute-0 youthful_kirch[410673]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:23:51 compute-0 youthful_kirch[410673]:         "osd_id": 0,
Dec 06 08:23:51 compute-0 youthful_kirch[410673]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:23:51 compute-0 youthful_kirch[410673]:         "type": "bluestore"
Dec 06 08:23:51 compute-0 youthful_kirch[410673]:     }
Dec 06 08:23:51 compute-0 youthful_kirch[410673]: }
Dec 06 08:23:51 compute-0 systemd[1]: libpod-77f4a76843c8350bb387d9cdcd345e68ba9c596c78d778813a33ca0695f61950.scope: Deactivated successfully.
Dec 06 08:23:51 compute-0 podman[410657]: 2025-12-06 08:23:51.519897386 +0000 UTC m=+0.992003790 container died 77f4a76843c8350bb387d9cdcd345e68ba9c596c78d778813a33ca0695f61950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kirch, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a359766b07ab5b32bcaad93d3981e6f4095be002ec356f3d5a1aaf66fc019e8-merged.mount: Deactivated successfully.
Dec 06 08:23:51 compute-0 podman[410657]: 2025-12-06 08:23:51.572480585 +0000 UTC m=+1.044586989 container remove 77f4a76843c8350bb387d9cdcd345e68ba9c596c78d778813a33ca0695f61950 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kirch, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 08:23:51 compute-0 systemd[1]: libpod-conmon-77f4a76843c8350bb387d9cdcd345e68ba9c596c78d778813a33ca0695f61950.scope: Deactivated successfully.
Dec 06 08:23:51 compute-0 sudo[410552]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:23:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:23:51 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:51.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b39dd7c9-4f94-4abf-b472-5e15805d476b does not exist
Dec 06 08:23:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4b22355a-8310-4c97-aaf4-1633d94cf0c5 does not exist
Dec 06 08:23:51 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 546b40ad-7493-4731-bf85-18fb60567d6a does not exist
Dec 06 08:23:51 compute-0 sudo[410707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:23:51 compute-0 sudo[410707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:51 compute-0 sudo[410707]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:51 compute-0 sudo[410732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:23:51 compute-0 sudo[410732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:23:51 compute-0 sudo[410732]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3949: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:52 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:23:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:52.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:53.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:53 compute-0 ceph-mon[74339]: pgmap v3949: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3950: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:54 compute-0 nova_compute[251992]: 2025-12-06 08:23:54.134 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:23:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:54.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:23:54 compute-0 ceph-mon[74339]: pgmap v3950: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:23:55 compute-0 nova_compute[251992]: 2025-12-06 08:23:55.527 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:55.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3951: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:56 compute-0 podman[410759]: 2025-12-06 08:23:56.462003073 +0000 UTC m=+0.113190166 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 08:23:56 compute-0 nova_compute[251992]: 2025-12-06 08:23:56.652 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:56 compute-0 nova_compute[251992]: 2025-12-06 08:23:56.713 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "58a69ab9-f433-4e6e-be04-533ca52d4646" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:56 compute-0 nova_compute[251992]: 2025-12-06 08:23:56.713 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:56.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:56 compute-0 nova_compute[251992]: 2025-12-06 08:23:56.740 251996 DEBUG nova.compute.manager [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:23:56 compute-0 nova_compute[251992]: 2025-12-06 08:23:56.823 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:56 compute-0 nova_compute[251992]: 2025-12-06 08:23:56.824 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:56 compute-0 nova_compute[251992]: 2025-12-06 08:23:56.836 251996 DEBUG nova.virt.hardware [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:23:56 compute-0 nova_compute[251992]: 2025-12-06 08:23:56.836 251996 INFO nova.compute.claims [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:23:56 compute-0 nova_compute[251992]: 2025-12-06 08:23:56.936 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:57 compute-0 ceph-mon[74339]: pgmap v3951: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 06 08:23:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:23:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/532581924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.379 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.391 251996 DEBUG nova.compute.provider_tree [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.580 251996 DEBUG nova.scheduler.client.report [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.602 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.603 251996 DEBUG nova.compute.manager [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:23:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:23:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:57.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.656 251996 DEBUG nova.compute.manager [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.656 251996 DEBUG nova.network.neutron [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.680 251996 INFO nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.700 251996 DEBUG nova.compute.manager [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.802 251996 DEBUG nova.compute.manager [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.803 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.804 251996 INFO nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Creating image(s)
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.832 251996 DEBUG nova.storage.rbd_utils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 58a69ab9-f433-4e6e-be04-533ca52d4646_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.859 251996 DEBUG nova.storage.rbd_utils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 58a69ab9-f433-4e6e-be04-533ca52d4646_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.885 251996 DEBUG nova.storage.rbd_utils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 58a69ab9-f433-4e6e-be04-533ca52d4646_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.888 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.955 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.956 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.957 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.958 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.984 251996 DEBUG nova.storage.rbd_utils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 58a69ab9-f433-4e6e-be04-533ca52d4646_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:23:57 compute-0 nova_compute[251992]: 2025-12-06 08:23:57.988 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 58a69ab9-f433-4e6e-be04-533ca52d4646_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3952: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 207 KiB/s rd, 1.2 MiB/s wr, 24 op/s
Dec 06 08:23:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/532581924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:23:58 compute-0 nova_compute[251992]: 2025-12-06 08:23:58.329 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef 58a69ab9-f433-4e6e-be04-533ca52d4646_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:23:58 compute-0 nova_compute[251992]: 2025-12-06 08:23:58.386 251996 DEBUG nova.storage.rbd_utils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] resizing rbd image 58a69ab9-f433-4e6e-be04-533ca52d4646_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:23:58 compute-0 nova_compute[251992]: 2025-12-06 08:23:58.507 251996 DEBUG nova.objects.instance [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 58a69ab9-f433-4e6e-be04-533ca52d4646 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:23:58 compute-0 nova_compute[251992]: 2025-12-06 08:23:58.521 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:23:58 compute-0 nova_compute[251992]: 2025-12-06 08:23:58.521 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Ensure instance console log exists: /var/lib/nova/instances/58a69ab9-f433-4e6e-be04-533ca52d4646/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:23:58 compute-0 nova_compute[251992]: 2025-12-06 08:23:58.522 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:58 compute-0 nova_compute[251992]: 2025-12-06 08:23:58.522 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:58 compute-0 nova_compute[251992]: 2025-12-06 08:23:58.522 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:58 compute-0 nova_compute[251992]: 2025-12-06 08:23:58.593 251996 DEBUG nova.policy [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0432cb6633e14c1b86fc320e7f3bb880', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:23:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:23:58.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:59 compute-0 ceph-mon[74339]: pgmap v3952: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 207 KiB/s rd, 1.2 MiB/s wr, 24 op/s
Dec 06 08:23:59 compute-0 nova_compute[251992]: 2025-12-06 08:23:59.175 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:23:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:23:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:23:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:23:59.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:23:59 compute-0 nova_compute[251992]: 2025-12-06 08:23:59.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:23:59 compute-0 nova_compute[251992]: 2025-12-06 08:23:59.678 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:23:59 compute-0 nova_compute[251992]: 2025-12-06 08:23:59.679 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:23:59 compute-0 nova_compute[251992]: 2025-12-06 08:23:59.679 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:23:59 compute-0 nova_compute[251992]: 2025-12-06 08:23:59.679 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:23:59 compute-0 nova_compute[251992]: 2025-12-06 08:23:59.679 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:23:59 compute-0 nova_compute[251992]: 2025-12-06 08:23:59.712 251996 DEBUG nova.network.neutron [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Successfully created port: b3891f21-193f-4da7-9562-ca6b7dd4f5d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:23:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3953: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Dec 06 08:24:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:24:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1206065533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.151 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:24:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1206065533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.284 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.285 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4088MB free_disk=20.942752838134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.285 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.286 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.404 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance 58a69ab9-f433-4e6e-be04-533ca52d4646 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.404 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.405 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.452 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.529 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:00.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:24:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4088412875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.876 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.882 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.903 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.925 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:24:00 compute-0 nova_compute[251992]: 2025-12-06 08:24:00.925 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:01 compute-0 ceph-mon[74339]: pgmap v3953: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Dec 06 08:24:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4088412875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:01 compute-0 nova_compute[251992]: 2025-12-06 08:24:01.211 251996 DEBUG nova.network.neutron [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Successfully updated port: b3891f21-193f-4da7-9562-ca6b7dd4f5d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:24:01 compute-0 nova_compute[251992]: 2025-12-06 08:24:01.237 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "refresh_cache-58a69ab9-f433-4e6e-be04-533ca52d4646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:24:01 compute-0 nova_compute[251992]: 2025-12-06 08:24:01.237 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquired lock "refresh_cache-58a69ab9-f433-4e6e-be04-533ca52d4646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:24:01 compute-0 nova_compute[251992]: 2025-12-06 08:24:01.237 251996 DEBUG nova.network.neutron [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:24:01 compute-0 nova_compute[251992]: 2025-12-06 08:24:01.608 251996 DEBUG nova.compute.manager [req-ae9d0bef-51e8-4f6b-b3eb-2be0bde83bd6 req-2fd00d35-7f6b-4688-8e8f-9c2dafe8b6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Received event network-changed-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:24:01 compute-0 nova_compute[251992]: 2025-12-06 08:24:01.608 251996 DEBUG nova.compute.manager [req-ae9d0bef-51e8-4f6b-b3eb-2be0bde83bd6 req-2fd00d35-7f6b-4688-8e8f-9c2dafe8b6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Refreshing instance network info cache due to event network-changed-b3891f21-193f-4da7-9562-ca6b7dd4f5d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:24:01 compute-0 nova_compute[251992]: 2025-12-06 08:24:01.608 251996 DEBUG oslo_concurrency.lockutils [req-ae9d0bef-51e8-4f6b-b3eb-2be0bde83bd6 req-2fd00d35-7f6b-4688-8e8f-9c2dafe8b6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-58a69ab9-f433-4e6e-be04-533ca52d4646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:24:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:01.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:01 compute-0 nova_compute[251992]: 2025-12-06 08:24:01.669 251996 DEBUG nova.network.neutron [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:24:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3954: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:02.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:02 compute-0 nova_compute[251992]: 2025-12-06 08:24:02.926 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:03 compute-0 ceph-mon[74339]: pgmap v3954: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:03 compute-0 podman[411021]: 2025-12-06 08:24:03.395140419 +0000 UTC m=+0.052675162 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:24:03 compute-0 podman[411022]: 2025-12-06 08:24:03.426404619 +0000 UTC m=+0.077889598 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.586 251996 DEBUG nova.network.neutron [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Updating instance_info_cache with network_info: [{"id": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "address": "fa:16:3e:9d:d7:bd", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3891f21-19", "ovs_interfaceid": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.615 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Releasing lock "refresh_cache-58a69ab9-f433-4e6e-be04-533ca52d4646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.616 251996 DEBUG nova.compute.manager [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Instance network_info: |[{"id": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "address": "fa:16:3e:9d:d7:bd", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3891f21-19", "ovs_interfaceid": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.616 251996 DEBUG oslo_concurrency.lockutils [req-ae9d0bef-51e8-4f6b-b3eb-2be0bde83bd6 req-2fd00d35-7f6b-4688-8e8f-9c2dafe8b6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-58a69ab9-f433-4e6e-be04-533ca52d4646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.616 251996 DEBUG nova.network.neutron [req-ae9d0bef-51e8-4f6b-b3eb-2be0bde83bd6 req-2fd00d35-7f6b-4688-8e8f-9c2dafe8b6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Refreshing network info cache for port b3891f21-193f-4da7-9562-ca6b7dd4f5d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.619 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Start _get_guest_xml network_info=[{"id": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "address": "fa:16:3e:9d:d7:bd", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3891f21-19", "ovs_interfaceid": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.623 251996 WARNING nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.627 251996 DEBUG nova.virt.libvirt.host [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.628 251996 DEBUG nova.virt.libvirt.host [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.630 251996 DEBUG nova.virt.libvirt.host [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.631 251996 DEBUG nova.virt.libvirt.host [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.632 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.633 251996 DEBUG nova.virt.hardware [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.633 251996 DEBUG nova.virt.hardware [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.633 251996 DEBUG nova.virt.hardware [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.633 251996 DEBUG nova.virt.hardware [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.634 251996 DEBUG nova.virt.hardware [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.634 251996 DEBUG nova.virt.hardware [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.634 251996 DEBUG nova.virt.hardware [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.634 251996 DEBUG nova.virt.hardware [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.634 251996 DEBUG nova.virt.hardware [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.634 251996 DEBUG nova.virt.hardware [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.635 251996 DEBUG nova.virt.hardware [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:24:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:03.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:03 compute-0 nova_compute[251992]: 2025-12-06 08:24:03.638 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:24:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:03.899 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:03.900 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:03.900 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3955: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:24:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774930002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.079 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.104 251996 DEBUG nova.storage.rbd_utils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 58a69ab9-f433-4e6e-be04-533ca52d4646_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.108 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.176 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/158196637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3774930002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:24:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:24:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2615928693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.559 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.561 251996 DEBUG nova.virt.libvirt.vif [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:23:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-272483131',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-272483131',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-gen',id=219,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC8BznMB0Xu/ykT7z/N8+gel7XJlQSn0npteW8yVXEIGRqJoEANmDfv68DjMaYfGNX4b0z8xP/ctyWvQ7TYfkeAXD05ZkFtdVnjgdZHjlMKaS2ob4Ytz/egC3eVPS5uFkw==',key_name='tempest-TestSecurityGroupsBasicOps-1814186017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-cgj7rixt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:23:57Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=58a69ab9-f433-4e6e-be04-533ca52d4646,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "address": "fa:16:3e:9d:d7:bd", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3891f21-19", "ovs_interfaceid": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.561 251996 DEBUG nova.network.os_vif_util [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "address": "fa:16:3e:9d:d7:bd", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3891f21-19", "ovs_interfaceid": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.562 251996 DEBUG nova.network.os_vif_util [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:d7:bd,bridge_name='br-int',has_traffic_filtering=True,id=b3891f21-193f-4da7-9562-ca6b7dd4f5d4,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3891f21-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.563 251996 DEBUG nova.objects.instance [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58a69ab9-f433-4e6e-be04-533ca52d4646 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.599 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:24:04 compute-0 nova_compute[251992]:   <uuid>58a69ab9-f433-4e6e-be04-533ca52d4646</uuid>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   <name>instance-000000db</name>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-272483131</nova:name>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:24:03</nova:creationTime>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <nova:user uuid="0432cb6633e14c1b86fc320e7f3bb880">tempest-TestSecurityGroupsBasicOps-568463891-project-member</nova:user>
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <nova:project uuid="5d23d1d6ffc142eaa9bee0ef93fe60e4">tempest-TestSecurityGroupsBasicOps-568463891</nova:project>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <nova:port uuid="b3891f21-193f-4da7-9562-ca6b7dd4f5d4">
Dec 06 08:24:04 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <system>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <entry name="serial">58a69ab9-f433-4e6e-be04-533ca52d4646</entry>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <entry name="uuid">58a69ab9-f433-4e6e-be04-533ca52d4646</entry>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     </system>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   <os>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   </os>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   <features>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   </features>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/58a69ab9-f433-4e6e-be04-533ca52d4646_disk">
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       </source>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/58a69ab9-f433-4e6e-be04-533ca52d4646_disk.config">
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       </source>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:24:04 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:9d:d7:bd"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <target dev="tapb3891f21-19"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/58a69ab9-f433-4e6e-be04-533ca52d4646/console.log" append="off"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <video>
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     </video>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:24:04 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:24:04 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:24:04 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:24:04 compute-0 nova_compute[251992]: </domain>
Dec 06 08:24:04 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.602 251996 DEBUG nova.compute.manager [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Preparing to wait for external event network-vif-plugged-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.602 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.603 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.603 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.604 251996 DEBUG nova.virt.libvirt.vif [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:23:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-272483131',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-272483131',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-gen',id=219,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC8BznMB0Xu/ykT7z/N8+gel7XJlQSn0npteW8yVXEIGRqJoEANmDfv68DjMaYfGNX4b0z8xP/ctyWvQ7TYfkeAXD05ZkFtdVnjgdZHjlMKaS2ob4Ytz/egC3eVPS5uFkw==',key_name='tempest-TestSecurityGroupsBasicOps-1814186017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-cgj7rixt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:23:57Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=58a69ab9-f433-4e6e-be04-533ca52d4646,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "address": "fa:16:3e:9d:d7:bd", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3891f21-19", "ovs_interfaceid": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.605 251996 DEBUG nova.network.os_vif_util [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "address": "fa:16:3e:9d:d7:bd", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3891f21-19", "ovs_interfaceid": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.606 251996 DEBUG nova.network.os_vif_util [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:d7:bd,bridge_name='br-int',has_traffic_filtering=True,id=b3891f21-193f-4da7-9562-ca6b7dd4f5d4,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3891f21-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.606 251996 DEBUG os_vif [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:d7:bd,bridge_name='br-int',has_traffic_filtering=True,id=b3891f21-193f-4da7-9562-ca6b7dd4f5d4,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3891f21-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.607 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.608 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.608 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.614 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.614 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3891f21-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.615 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3891f21-19, col_values=(('external_ids', {'iface-id': 'b3891f21-193f-4da7-9562-ca6b7dd4f5d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:d7:bd', 'vm-uuid': '58a69ab9-f433-4e6e-be04-533ca52d4646'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.616 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:04 compute-0 NetworkManager[48965]: <info>  [1765009444.6181] manager: (tapb3891f21-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.618 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.623 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.624 251996 INFO os_vif [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:d7:bd,bridge_name='br-int',has_traffic_filtering=True,id=b3891f21-193f-4da7-9562-ca6b7dd4f5d4,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3891f21-19')
Dec 06 08:24:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:04.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.776 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.777 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.777 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] No VIF found with MAC fa:16:3e:9d:d7:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.778 251996 INFO nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Using config drive
Dec 06 08:24:04 compute-0 nova_compute[251992]: 2025-12-06 08:24:04.805 251996 DEBUG nova.storage.rbd_utils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 58a69ab9-f433-4e6e-be04-533ca52d4646_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:24:05 compute-0 nova_compute[251992]: 2025-12-06 08:24:05.348 251996 INFO nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Creating config drive at /var/lib/nova/instances/58a69ab9-f433-4e6e-be04-533ca52d4646/disk.config
Dec 06 08:24:05 compute-0 nova_compute[251992]: 2025-12-06 08:24:05.353 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58a69ab9-f433-4e6e-be04-533ca52d4646/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdky2rvil execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:24:05 compute-0 ceph-mon[74339]: pgmap v3955: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2615928693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:24:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1585537670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:05 compute-0 nova_compute[251992]: 2025-12-06 08:24:05.489 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58a69ab9-f433-4e6e-be04-533ca52d4646/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdky2rvil" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:24:05 compute-0 nova_compute[251992]: 2025-12-06 08:24:05.593 251996 DEBUG nova.storage.rbd_utils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] rbd image 58a69ab9-f433-4e6e-be04-533ca52d4646_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:24:05 compute-0 nova_compute[251992]: 2025-12-06 08:24:05.596 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58a69ab9-f433-4e6e-be04-533ca52d4646/disk.config 58a69ab9-f433-4e6e-be04-533ca52d4646_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:24:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:05.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:05 compute-0 nova_compute[251992]: 2025-12-06 08:24:05.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:05 compute-0 nova_compute[251992]: 2025-12-06 08:24:05.783 251996 DEBUG oslo_concurrency.processutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58a69ab9-f433-4e6e-be04-533ca52d4646/disk.config 58a69ab9-f433-4e6e-be04-533ca52d4646_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:24:05 compute-0 nova_compute[251992]: 2025-12-06 08:24:05.784 251996 INFO nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Deleting local config drive /var/lib/nova/instances/58a69ab9-f433-4e6e-be04-533ca52d4646/disk.config because it was imported into RBD.
Dec 06 08:24:05 compute-0 kernel: tapb3891f21-19: entered promiscuous mode
Dec 06 08:24:05 compute-0 NetworkManager[48965]: <info>  [1765009445.8479] manager: (tapb3891f21-19): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Dec 06 08:24:05 compute-0 nova_compute[251992]: 2025-12-06 08:24:05.851 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:05 compute-0 ovn_controller[147168]: 2025-12-06T08:24:05Z|00822|binding|INFO|Claiming lport b3891f21-193f-4da7-9562-ca6b7dd4f5d4 for this chassis.
Dec 06 08:24:05 compute-0 ovn_controller[147168]: 2025-12-06T08:24:05Z|00823|binding|INFO|b3891f21-193f-4da7-9562-ca6b7dd4f5d4: Claiming fa:16:3e:9d:d7:bd 10.100.0.7
Dec 06 08:24:05 compute-0 nova_compute[251992]: 2025-12-06 08:24:05.854 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:05 compute-0 nova_compute[251992]: 2025-12-06 08:24:05.863 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:05 compute-0 NetworkManager[48965]: <info>  [1765009445.8653] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Dec 06 08:24:05 compute-0 NetworkManager[48965]: <info>  [1765009445.8662] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Dec 06 08:24:05 compute-0 systemd-udevd[411194]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:24:05 compute-0 NetworkManager[48965]: <info>  [1765009445.9015] device (tapb3891f21-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:24:05 compute-0 NetworkManager[48965]: <info>  [1765009445.9025] device (tapb3891f21-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:24:05 compute-0 nova_compute[251992]: 2025-12-06 08:24:05.999 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:06 compute-0 sudo[411197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:06 compute-0 sudo[411197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:06 compute-0 sudo[411197]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:06 compute-0 systemd-machined[212986]: New machine qemu-97-instance-000000db.
Dec 06 08:24:06 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-000000db.
Dec 06 08:24:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3956: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.053 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:d7:bd 10.100.0.7'], port_security=['fa:16:3e:9d:d7:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '58a69ab9-f433-4e6e-be04-533ca52d4646', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189eab69-7772-4260-9abd-06a6f9690645', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '05ba0383-51c9-4f05-9aff-9ce5d118989a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daab64e6-4b17-43eb-9bf1-b22faaf53364, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=b3891f21-193f-4da7-9562-ca6b7dd4f5d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.055 158118 INFO neutron.agent.ovn.metadata.agent [-] Port b3891f21-193f-4da7-9562-ca6b7dd4f5d4 in datapath 189eab69-7772-4260-9abd-06a6f9690645 bound to our chassis
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.056 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 189eab69-7772-4260-9abd-06a6f9690645
Dec 06 08:24:06 compute-0 ovn_controller[147168]: 2025-12-06T08:24:06Z|00824|binding|INFO|Setting lport b3891f21-193f-4da7-9562-ca6b7dd4f5d4 ovn-installed in OVS
Dec 06 08:24:06 compute-0 ovn_controller[147168]: 2025-12-06T08:24:06Z|00825|binding|INFO|Setting lport b3891f21-193f-4da7-9562-ca6b7dd4f5d4 up in Southbound
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.058 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.069 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.068 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[aa7c35c1-dbdb-422b-b4f4-cfb11e51da0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.069 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap189eab69-71 in ovnmeta-189eab69-7772-4260-9abd-06a6f9690645 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:24:06 compute-0 sudo[411225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.072 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap189eab69-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.072 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cdeb9815-1c32-4522-ba46-b710ca7e4588]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.074 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e1accecb-9cc6-4e71-a43b-08f7aee1de07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 sudo[411225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:06 compute-0 sudo[411225]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.088 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[79678e69-1e04-4752-a267-86c8ab9e9c24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.102 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0813d48a-e9e6-4381-b98f-fd019ed5bd5a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.133 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[fce029c9-5a48-4214-9443-a898ed8f10b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.140 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c56fc261-4b5f-42ba-b564-1ed543413e77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 NetworkManager[48965]: <info>  [1765009446.1413] manager: (tap189eab69-70): new Veth device (/org/freedesktop/NetworkManager/Devices/396)
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.173 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[d1228d70-e60a-4f50-be34-92e540880f86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.178 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[2d386242-4e46-4863-a94b-1ac4d50c3536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 NetworkManager[48965]: <info>  [1765009446.1969] device (tap189eab69-70): carrier: link connected
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.200 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[1876e7fa-7857-4768-8b10-379f9153a359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.215 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2338f6-6d6c-4158-8c3b-1b7e2a0e7969]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap189eab69-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:c9:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 971878, 'reachable_time': 30838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411280, 'error': None, 'target': 'ovnmeta-189eab69-7772-4260-9abd-06a6f9690645', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.227 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8c935a-629d-4648-afb5-88996eecde29]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:c9d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 971878, 'tstamp': 971878}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411281, 'error': None, 'target': 'ovnmeta-189eab69-7772-4260-9abd-06a6f9690645', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.242 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d5312e5c-0b59-463f-944b-e32b936d1b88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap189eab69-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:c9:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 971878, 'reachable_time': 30838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 411282, 'error': None, 'target': 'ovnmeta-189eab69-7772-4260-9abd-06a6f9690645', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.264 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2bde503d-00f9-40e4-b0e0-9154e3423f9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.321 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[5e117798-cf8e-4458-8795-5357175ce5bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.322 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap189eab69-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.322 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.323 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap189eab69-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.325 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:06 compute-0 kernel: tap189eab69-70: entered promiscuous mode
Dec 06 08:24:06 compute-0 NetworkManager[48965]: <info>  [1765009446.3283] manager: (tap189eab69-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.328 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.330 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap189eab69-70, col_values=(('external_ids', {'iface-id': '46835515-f38d-4eac-91aa-acd0b877297f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.331 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:06 compute-0 ovn_controller[147168]: 2025-12-06T08:24:06Z|00826|binding|INFO|Releasing lport 46835515-f38d-4eac-91aa-acd0b877297f from this chassis (sb_readonly=0)
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.332 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.334 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/189eab69-7772-4260-9abd-06a6f9690645.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/189eab69-7772-4260-9abd-06a6f9690645.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.335 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[60e5821d-350c-4687-b16a-8916d5980611]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.336 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-189eab69-7772-4260-9abd-06a6f9690645
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/189eab69-7772-4260-9abd-06a6f9690645.pid.haproxy
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 189eab69-7772-4260-9abd-06a6f9690645
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.337 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-189eab69-7772-4260-9abd-06a6f9690645', 'env', 'PROCESS_TAG=haproxy-189eab69-7772-4260-9abd-06a6f9690645', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/189eab69-7772-4260-9abd-06a6f9690645.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.343 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.462 251996 DEBUG nova.compute.manager [req-c491f0fc-774d-448f-a5ba-4dfefec68648 req-a21935ba-8be0-496a-822b-d3bc3e0fff81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Received event network-vif-plugged-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.463 251996 DEBUG oslo_concurrency.lockutils [req-c491f0fc-774d-448f-a5ba-4dfefec68648 req-a21935ba-8be0-496a-822b-d3bc3e0fff81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.463 251996 DEBUG oslo_concurrency.lockutils [req-c491f0fc-774d-448f-a5ba-4dfefec68648 req-a21935ba-8be0-496a-822b-d3bc3e0fff81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.463 251996 DEBUG oslo_concurrency.lockutils [req-c491f0fc-774d-448f-a5ba-4dfefec68648 req-a21935ba-8be0-496a-822b-d3bc3e0fff81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.464 251996 DEBUG nova.compute.manager [req-c491f0fc-774d-448f-a5ba-4dfefec68648 req-a21935ba-8be0-496a-822b-d3bc3e0fff81 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Processing event network-vif-plugged-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.555 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009446.554775, 58a69ab9-f433-4e6e-be04-533ca52d4646 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.556 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] VM Started (Lifecycle Event)
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.558 251996 DEBUG nova.compute.manager [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.562 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.565 251996 INFO nova.virt.libvirt.driver [-] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Instance spawned successfully.
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.566 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.579 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.586 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.589 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.590 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.591 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.591 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.592 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.592 251996 DEBUG nova.virt.libvirt.driver [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.617 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.619 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009446.5548935, 58a69ab9-f433-4e6e-be04-533ca52d4646 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.619 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] VM Paused (Lifecycle Event)
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.652 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.656 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009446.561571, 58a69ab9-f433-4e6e-be04-533ca52d4646 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.656 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] VM Resumed (Lifecycle Event)
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.662 251996 INFO nova.compute.manager [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Took 8.86 seconds to spawn the instance on the hypervisor.
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.663 251996 DEBUG nova.compute.manager [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.681 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.684 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.684 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.685 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.686 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.687 251996 DEBUG nova.network.neutron [req-ae9d0bef-51e8-4f6b-b3eb-2be0bde83bd6 req-2fd00d35-7f6b-4688-8e8f-9c2dafe8b6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Updated VIF entry in instance network info cache for port b3891f21-193f-4da7-9562-ca6b7dd4f5d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.688 251996 DEBUG nova.network.neutron [req-ae9d0bef-51e8-4f6b-b3eb-2be0bde83bd6 req-2fd00d35-7f6b-4688-8e8f-9c2dafe8b6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Updating instance_info_cache with network_info: [{"id": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "address": "fa:16:3e:9d:d7:bd", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3891f21-19", "ovs_interfaceid": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.690 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=106, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=105) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:24:06 compute-0 podman[411356]: 2025-12-06 08:24:06.697491212 +0000 UTC m=+0.049199758 container create 67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 08:24:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:06.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.738 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:24:06 compute-0 systemd[1]: Started libpod-conmon-67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120.scope.
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.740 251996 DEBUG oslo_concurrency.lockutils [req-ae9d0bef-51e8-4f6b-b3eb-2be0bde83bd6 req-2fd00d35-7f6b-4688-8e8f-9c2dafe8b6e8 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-58a69ab9-f433-4e6e-be04-533ca52d4646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.750 251996 INFO nova.compute.manager [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Took 9.96 seconds to build instance.
Dec 06 08:24:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:24:06 compute-0 podman[411356]: 2025-12-06 08:24:06.673444578 +0000 UTC m=+0.025153144 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a9e5b09d612a35202a3606a519309de81a98eb12156579a3a6f52ba40026eb0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:06 compute-0 podman[411356]: 2025-12-06 08:24:06.783957901 +0000 UTC m=+0.135666467 container init 67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:24:06 compute-0 podman[411356]: 2025-12-06 08:24:06.789568263 +0000 UTC m=+0.141276809 container start 67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 08:24:06 compute-0 nova_compute[251992]: 2025-12-06 08:24:06.791 251996 DEBUG oslo_concurrency.lockutils [None req-8a4c3e33-8eb8-4ed4-bcd6-c976c11c037c 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:06 compute-0 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[411371]: [NOTICE]   (411375) : New worker (411377) forked
Dec 06 08:24:06 compute-0 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[411371]: [NOTICE]   (411375) : Loading success.
Dec 06 08:24:06 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:06.848 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:24:07 compute-0 ceph-mon[74339]: pgmap v3956: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:07.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3957: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:08 compute-0 nova_compute[251992]: 2025-12-06 08:24:08.706 251996 DEBUG nova.compute.manager [req-ad428dd7-a0f8-4e9a-8efa-15ebf18de5fa req-d15624ba-abf5-4899-8c92-28519eba573f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Received event network-vif-plugged-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:24:08 compute-0 nova_compute[251992]: 2025-12-06 08:24:08.706 251996 DEBUG oslo_concurrency.lockutils [req-ad428dd7-a0f8-4e9a-8efa-15ebf18de5fa req-d15624ba-abf5-4899-8c92-28519eba573f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:08 compute-0 nova_compute[251992]: 2025-12-06 08:24:08.706 251996 DEBUG oslo_concurrency.lockutils [req-ad428dd7-a0f8-4e9a-8efa-15ebf18de5fa req-d15624ba-abf5-4899-8c92-28519eba573f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:08 compute-0 nova_compute[251992]: 2025-12-06 08:24:08.706 251996 DEBUG oslo_concurrency.lockutils [req-ad428dd7-a0f8-4e9a-8efa-15ebf18de5fa req-d15624ba-abf5-4899-8c92-28519eba573f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:08 compute-0 nova_compute[251992]: 2025-12-06 08:24:08.707 251996 DEBUG nova.compute.manager [req-ad428dd7-a0f8-4e9a-8efa-15ebf18de5fa req-d15624ba-abf5-4899-8c92-28519eba573f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] No waiting events found dispatching network-vif-plugged-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:24:08 compute-0 nova_compute[251992]: 2025-12-06 08:24:08.707 251996 WARNING nova.compute.manager [req-ad428dd7-a0f8-4e9a-8efa-15ebf18de5fa req-d15624ba-abf5-4899-8c92-28519eba573f 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Received unexpected event network-vif-plugged-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 for instance with vm_state active and task_state None.
Dec 06 08:24:08 compute-0 ceph-mon[74339]: pgmap v3957: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:08.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:09 compute-0 nova_compute[251992]: 2025-12-06 08:24:09.178 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:09 compute-0 nova_compute[251992]: 2025-12-06 08:24:09.617 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:09.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1369561069' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:24:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1369561069' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:24:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:09 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:09.851 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '106'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:24:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3958: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:10.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:10 compute-0 ceph-mon[74339]: pgmap v3958: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:24:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:11.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:11 compute-0 nova_compute[251992]: 2025-12-06 08:24:11.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3959: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 08:24:12 compute-0 nova_compute[251992]: 2025-12-06 08:24:12.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:12.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:24:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:24:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:24:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:24:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:24:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:24:13 compute-0 ceph-mon[74339]: pgmap v3959: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 08:24:13 compute-0 nova_compute[251992]: 2025-12-06 08:24:13.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:13.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:13 compute-0 nova_compute[251992]: 2025-12-06 08:24:13.728 251996 DEBUG nova.compute.manager [req-580feec3-b5a4-41ac-98eb-cf5257b5b5d5 req-899826d5-20c1-4886-9280-5b32936e8ce7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Received event network-changed-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:24:13 compute-0 nova_compute[251992]: 2025-12-06 08:24:13.728 251996 DEBUG nova.compute.manager [req-580feec3-b5a4-41ac-98eb-cf5257b5b5d5 req-899826d5-20c1-4886-9280-5b32936e8ce7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Refreshing instance network info cache due to event network-changed-b3891f21-193f-4da7-9562-ca6b7dd4f5d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:24:13 compute-0 nova_compute[251992]: 2025-12-06 08:24:13.728 251996 DEBUG oslo_concurrency.lockutils [req-580feec3-b5a4-41ac-98eb-cf5257b5b5d5 req-899826d5-20c1-4886-9280-5b32936e8ce7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-58a69ab9-f433-4e6e-be04-533ca52d4646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:24:13 compute-0 nova_compute[251992]: 2025-12-06 08:24:13.728 251996 DEBUG oslo_concurrency.lockutils [req-580feec3-b5a4-41ac-98eb-cf5257b5b5d5 req-899826d5-20c1-4886-9280-5b32936e8ce7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-58a69ab9-f433-4e6e-be04-533ca52d4646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:24:13 compute-0 nova_compute[251992]: 2025-12-06 08:24:13.728 251996 DEBUG nova.network.neutron [req-580feec3-b5a4-41ac-98eb-cf5257b5b5d5 req-899826d5-20c1-4886-9280-5b32936e8ce7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Refreshing network info cache for port b3891f21-193f-4da7-9562-ca6b7dd4f5d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:24:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3960: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:24:14 compute-0 nova_compute[251992]: 2025-12-06 08:24:14.222 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:14 compute-0 nova_compute[251992]: 2025-12-06 08:24:14.619 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:14.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:15 compute-0 ceph-mon[74339]: pgmap v3960: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:24:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:15.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3961: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Dec 06 08:24:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:16.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:17 compute-0 ceph-mon[74339]: pgmap v3961: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Dec 06 08:24:17 compute-0 nova_compute[251992]: 2025-12-06 08:24:17.617 251996 DEBUG nova.network.neutron [req-580feec3-b5a4-41ac-98eb-cf5257b5b5d5 req-899826d5-20c1-4886-9280-5b32936e8ce7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Updated VIF entry in instance network info cache for port b3891f21-193f-4da7-9562-ca6b7dd4f5d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:24:17 compute-0 nova_compute[251992]: 2025-12-06 08:24:17.618 251996 DEBUG nova.network.neutron [req-580feec3-b5a4-41ac-98eb-cf5257b5b5d5 req-899826d5-20c1-4886-9280-5b32936e8ce7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Updating instance_info_cache with network_info: [{"id": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "address": "fa:16:3e:9d:d7:bd", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3891f21-19", "ovs_interfaceid": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:24:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:17.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:17 compute-0 nova_compute[251992]: 2025-12-06 08:24:17.780 251996 DEBUG oslo_concurrency.lockutils [req-580feec3-b5a4-41ac-98eb-cf5257b5b5d5 req-899826d5-20c1-4886-9280-5b32936e8ce7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-58a69ab9-f433-4e6e-be04-533ca52d4646" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:24:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3962: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 08:24:18 compute-0 nova_compute[251992]: 2025-12-06 08:24:18.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:18 compute-0 nova_compute[251992]: 2025-12-06 08:24:18.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:24:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:24:18
Dec 06 08:24:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:24:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:24:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'vms', 'cephfs.cephfs.data', '.mgr']
Dec 06 08:24:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:24:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:18.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:19 compute-0 ceph-mon[74339]: pgmap v3962: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 08:24:19 compute-0 nova_compute[251992]: 2025-12-06 08:24:19.224 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:19 compute-0 nova_compute[251992]: 2025-12-06 08:24:19.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:19 compute-0 nova_compute[251992]: 2025-12-06 08:24:19.658 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:19.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3963: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 08:24:20 compute-0 ovn_controller[147168]: 2025-12-06T08:24:20Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:d7:bd 10.100.0.7
Dec 06 08:24:20 compute-0 ovn_controller[147168]: 2025-12-06T08:24:20Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:d7:bd 10.100.0.7
Dec 06 08:24:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:20.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:21 compute-0 ceph-mon[74339]: pgmap v3963: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Dec 06 08:24:21 compute-0 nova_compute[251992]: 2025-12-06 08:24:21.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:21 compute-0 nova_compute[251992]: 2025-12-06 08:24:21.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:24:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:21.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3964: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 06 08:24:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:22.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:23 compute-0 ceph-mon[74339]: pgmap v3964: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Dec 06 08:24:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:23.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:24:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:24:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:24:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:24:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:24:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3965: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 08:24:24 compute-0 nova_compute[251992]: 2025-12-06 08:24:24.226 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:24 compute-0 nova_compute[251992]: 2025-12-06 08:24:24.660 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:24.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:25 compute-0 ceph-mon[74339]: pgmap v3965: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 06 08:24:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:25.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3966: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 08:24:26 compute-0 sudo[411398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:26 compute-0 sudo[411398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:26 compute-0 sudo[411398]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:26 compute-0 sudo[411423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:26 compute-0 sudo[411423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:26 compute-0 sudo[411423]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:26 compute-0 nova_compute[251992]: 2025-12-06 08:24:26.646 251996 DEBUG oslo_concurrency.lockutils [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "58a69ab9-f433-4e6e-be04-533ca52d4646" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:26 compute-0 nova_compute[251992]: 2025-12-06 08:24:26.646 251996 DEBUG oslo_concurrency.lockutils [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:26 compute-0 nova_compute[251992]: 2025-12-06 08:24:26.646 251996 DEBUG oslo_concurrency.lockutils [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:26 compute-0 nova_compute[251992]: 2025-12-06 08:24:26.646 251996 DEBUG oslo_concurrency.lockutils [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:26 compute-0 nova_compute[251992]: 2025-12-06 08:24:26.646 251996 DEBUG oslo_concurrency.lockutils [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:26 compute-0 nova_compute[251992]: 2025-12-06 08:24:26.648 251996 INFO nova.compute.manager [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Terminating instance
Dec 06 08:24:26 compute-0 nova_compute[251992]: 2025-12-06 08:24:26.648 251996 DEBUG nova.compute.manager [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:24:26 compute-0 nova_compute[251992]: 2025-12-06 08:24:26.683 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:26 compute-0 nova_compute[251992]: 2025-12-06 08:24:26.683 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:24:26 compute-0 nova_compute[251992]: 2025-12-06 08:24:26.701 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004335370428531547 of space, bias 1.0, pg target 1.3006111285594641 quantized to 32 (current 32)
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.6464803764191328 quantized to 32 (current 32)
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:24:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 08:24:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:26.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:27 compute-0 kernel: tapb3891f21-19 (unregistering): left promiscuous mode
Dec 06 08:24:27 compute-0 NetworkManager[48965]: <info>  [1765009467.0494] device (tapb3891f21-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:24:27 compute-0 ovn_controller[147168]: 2025-12-06T08:24:27Z|00827|binding|INFO|Releasing lport b3891f21-193f-4da7-9562-ca6b7dd4f5d4 from this chassis (sb_readonly=0)
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.058 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:27 compute-0 ovn_controller[147168]: 2025-12-06T08:24:27Z|00828|binding|INFO|Setting lport b3891f21-193f-4da7-9562-ca6b7dd4f5d4 down in Southbound
Dec 06 08:24:27 compute-0 ovn_controller[147168]: 2025-12-06T08:24:27Z|00829|binding|INFO|Removing iface tapb3891f21-19 ovn-installed in OVS
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.060 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.076 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.078 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:d7:bd 10.100.0.7'], port_security=['fa:16:3e:9d:d7:bd 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '58a69ab9-f433-4e6e-be04-533ca52d4646', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-189eab69-7772-4260-9abd-06a6f9690645', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5d23d1d6ffc142eaa9bee0ef93fe60e4', 'neutron:revision_number': '5', 'neutron:security_group_ids': '00a0042c-61f3-4eca-b95b-854faf78335c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daab64e6-4b17-43eb-9bf1-b22faaf53364, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=b3891f21-193f-4da7-9562-ca6b7dd4f5d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.079 158118 INFO neutron.agent.ovn.metadata.agent [-] Port b3891f21-193f-4da7-9562-ca6b7dd4f5d4 in datapath 189eab69-7772-4260-9abd-06a6f9690645 unbound from our chassis
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.080 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 189eab69-7772-4260-9abd-06a6f9690645, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.081 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[6ce8244b-31d4-4545-be7b-532e97c904f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.081 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-189eab69-7772-4260-9abd-06a6f9690645 namespace which is not needed anymore
Dec 06 08:24:27 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000db.scope: Deactivated successfully.
Dec 06 08:24:27 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000db.scope: Consumed 14.026s CPU time.
Dec 06 08:24:27 compute-0 systemd-machined[212986]: Machine qemu-97-instance-000000db terminated.
Dec 06 08:24:27 compute-0 podman[411450]: 2025-12-06 08:24:27.170047421 +0000 UTC m=+0.088924337 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 06 08:24:27 compute-0 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[411371]: [NOTICE]   (411375) : haproxy version is 2.8.14-c23fe91
Dec 06 08:24:27 compute-0 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[411371]: [NOTICE]   (411375) : path to executable is /usr/sbin/haproxy
Dec 06 08:24:27 compute-0 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[411371]: [WARNING]  (411375) : Exiting Master process...
Dec 06 08:24:27 compute-0 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[411371]: [ALERT]    (411375) : Current worker (411377) exited with code 143 (Terminated)
Dec 06 08:24:27 compute-0 neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645[411371]: [WARNING]  (411375) : All workers exited. Exiting... (0)
Dec 06 08:24:27 compute-0 systemd[1]: libpod-67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120.scope: Deactivated successfully.
Dec 06 08:24:27 compute-0 podman[411498]: 2025-12-06 08:24:27.223580436 +0000 UTC m=+0.049979099 container died 67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:24:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120-userdata-shm.mount: Deactivated successfully.
Dec 06 08:24:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a9e5b09d612a35202a3606a519309de81a98eb12156579a3a6f52ba40026eb0-merged.mount: Deactivated successfully.
Dec 06 08:24:27 compute-0 ceph-mon[74339]: pgmap v3966: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 08:24:27 compute-0 podman[411498]: 2025-12-06 08:24:27.264569299 +0000 UTC m=+0.090967972 container cleanup 67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 08:24:27 compute-0 systemd[1]: libpod-conmon-67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120.scope: Deactivated successfully.
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.280 251996 INFO nova.virt.libvirt.driver [-] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Instance destroyed successfully.
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.282 251996 DEBUG nova.objects.instance [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lazy-loading 'resources' on Instance uuid 58a69ab9-f433-4e6e-be04-533ca52d4646 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.318 251996 DEBUG nova.virt.libvirt.vif [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:23:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-272483131',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-568463891-gen-1-272483131',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-568463891-gen',id=219,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC8BznMB0Xu/ykT7z/N8+gel7XJlQSn0npteW8yVXEIGRqJoEANmDfv68DjMaYfGNX4b0z8xP/ctyWvQ7TYfkeAXD05ZkFtdVnjgdZHjlMKaS2ob4Ytz/egC3eVPS5uFkw==',key_name='tempest-TestSecurityGroupsBasicOps-1814186017',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:24:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5d23d1d6ffc142eaa9bee0ef93fe60e4',ramdisk_id='',reservation_id='r-cgj7rixt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-568463891',owner_user_name='tempest-TestSecurityGroupsBasicOps-568463891-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:24:06Z,user_data=None,user_id='0432cb6633e14c1b86fc320e7f3bb880',uuid=58a69ab9-f433-4e6e-be04-533ca52d4646,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "address": "fa:16:3e:9d:d7:bd", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3891f21-19", "ovs_interfaceid": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.318 251996 DEBUG nova.network.os_vif_util [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converting VIF {"id": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "address": "fa:16:3e:9d:d7:bd", "network": {"id": "189eab69-7772-4260-9abd-06a6f9690645", "bridge": "br-int", "label": "tempest-network-smoke--574787283", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5d23d1d6ffc142eaa9bee0ef93fe60e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3891f21-19", "ovs_interfaceid": "b3891f21-193f-4da7-9562-ca6b7dd4f5d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.319 251996 DEBUG nova.network.os_vif_util [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:d7:bd,bridge_name='br-int',has_traffic_filtering=True,id=b3891f21-193f-4da7-9562-ca6b7dd4f5d4,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3891f21-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.320 251996 DEBUG os_vif [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:d7:bd,bridge_name='br-int',has_traffic_filtering=True,id=b3891f21-193f-4da7-9562-ca6b7dd4f5d4,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3891f21-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.322 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.322 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3891f21-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.325 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.328 251996 INFO os_vif [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:d7:bd,bridge_name='br-int',has_traffic_filtering=True,id=b3891f21-193f-4da7-9562-ca6b7dd4f5d4,network=Network(189eab69-7772-4260-9abd-06a6f9690645),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3891f21-19')
Dec 06 08:24:27 compute-0 podman[411535]: 2025-12-06 08:24:27.338070866 +0000 UTC m=+0.047575003 container remove 67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.344 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4ab639-ec11-42c3-a66f-89a21751a3c0]: (4, ('Sat Dec  6 08:24:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645 (67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120)\n67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120\nSat Dec  6 08:24:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-189eab69-7772-4260-9abd-06a6f9690645 (67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120)\n67de43cde17aea8735cc616e0d6e8fb992cd54bda6ee3dadba8296a5a4364120\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.346 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4665c26d-4a0c-4cac-8179-c6cdb8e2583d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.347 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap189eab69-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.348 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:27 compute-0 kernel: tap189eab69-70: left promiscuous mode
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.360 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.364 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3175683c-be0e-46c2-80dc-5856c32e1319]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.385 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4993116b-b918-47c5-bb27-613305941e48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.387 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f31770a3-1cbf-44dc-a0f0-ee74bd4a7862]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.400 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[e7cf6ff5-6e95-46e5-a46a-4e89eb6e3cda]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 971871, 'reachable_time': 42987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411572, 'error': None, 'target': 'ovnmeta-189eab69-7772-4260-9abd-06a6f9690645', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d189eab69\x2d7772\x2d4260\x2d9abd\x2d06a6f9690645.mount: Deactivated successfully.
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.403 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-189eab69-7772-4260-9abd-06a6f9690645 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:24:27 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:24:27.403 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba0bc5e-5fa6-407d-a775-af91a1d768b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:24:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:24:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:24:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:24:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:24:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:24:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:27.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.714 251996 INFO nova.virt.libvirt.driver [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Deleting instance files /var/lib/nova/instances/58a69ab9-f433-4e6e-be04-533ca52d4646_del
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.715 251996 INFO nova.virt.libvirt.driver [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Deletion of /var/lib/nova/instances/58a69ab9-f433-4e6e-be04-533ca52d4646_del complete
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.790 251996 INFO nova.compute.manager [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Took 1.14 seconds to destroy the instance on the hypervisor.
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.791 251996 DEBUG oslo.service.loopingcall [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.791 251996 DEBUG nova.compute.manager [-] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.791 251996 DEBUG nova.network.neutron [-] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.848 251996 DEBUG nova.compute.manager [req-e1c3351a-44b3-403b-bc33-7d9d0f23e411 req-59ab7d57-364b-49e9-a12f-8c12961ff88e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Received event network-vif-unplugged-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.849 251996 DEBUG oslo_concurrency.lockutils [req-e1c3351a-44b3-403b-bc33-7d9d0f23e411 req-59ab7d57-364b-49e9-a12f-8c12961ff88e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.849 251996 DEBUG oslo_concurrency.lockutils [req-e1c3351a-44b3-403b-bc33-7d9d0f23e411 req-59ab7d57-364b-49e9-a12f-8c12961ff88e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.849 251996 DEBUG oslo_concurrency.lockutils [req-e1c3351a-44b3-403b-bc33-7d9d0f23e411 req-59ab7d57-364b-49e9-a12f-8c12961ff88e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.850 251996 DEBUG nova.compute.manager [req-e1c3351a-44b3-403b-bc33-7d9d0f23e411 req-59ab7d57-364b-49e9-a12f-8c12961ff88e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] No waiting events found dispatching network-vif-unplugged-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:24:27 compute-0 nova_compute[251992]: 2025-12-06 08:24:27.850 251996 DEBUG nova.compute.manager [req-e1c3351a-44b3-403b-bc33-7d9d0f23e411 req-59ab7d57-364b-49e9-a12f-8c12961ff88e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Received event network-vif-unplugged-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:24:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3967: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 08:24:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:28.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.227 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:29 compute-0 ceph-mon[74339]: pgmap v3967: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Dec 06 08:24:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:29.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.707 251996 DEBUG nova.network.neutron [-] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:24:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.785 251996 DEBUG nova.compute.manager [req-57e25af7-37cf-4c1b-82f5-bc9b17339e89 req-f38ba78e-d418-4323-b90b-4b92f25d97f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Received event network-vif-deleted-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.785 251996 INFO nova.compute.manager [req-57e25af7-37cf-4c1b-82f5-bc9b17339e89 req-f38ba78e-d418-4323-b90b-4b92f25d97f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Neutron deleted interface b3891f21-193f-4da7-9562-ca6b7dd4f5d4; detaching it from the instance and deleting it from the info cache
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.786 251996 DEBUG nova.network.neutron [req-57e25af7-37cf-4c1b-82f5-bc9b17339e89 req-f38ba78e-d418-4323-b90b-4b92f25d97f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.808 251996 INFO nova.compute.manager [-] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Took 2.02 seconds to deallocate network for instance.
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.822 251996 DEBUG nova.compute.manager [req-57e25af7-37cf-4c1b-82f5-bc9b17339e89 req-f38ba78e-d418-4323-b90b-4b92f25d97f7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Detach interface failed, port_id=b3891f21-193f-4da7-9562-ca6b7dd4f5d4, reason: Instance 58a69ab9-f433-4e6e-be04-533ca52d4646 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.903 251996 DEBUG oslo_concurrency.lockutils [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.904 251996 DEBUG oslo_concurrency.lockutils [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.959 251996 DEBUG nova.compute.manager [req-9a0b0ad7-932a-4af5-8d37-aea1cfc3d52d req-a54985b3-a31f-45f9-975f-63b691e092ef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Received event network-vif-plugged-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.960 251996 DEBUG oslo_concurrency.lockutils [req-9a0b0ad7-932a-4af5-8d37-aea1cfc3d52d req-a54985b3-a31f-45f9-975f-63b691e092ef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.960 251996 DEBUG oslo_concurrency.lockutils [req-9a0b0ad7-932a-4af5-8d37-aea1cfc3d52d req-a54985b3-a31f-45f9-975f-63b691e092ef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.960 251996 DEBUG oslo_concurrency.lockutils [req-9a0b0ad7-932a-4af5-8d37-aea1cfc3d52d req-a54985b3-a31f-45f9-975f-63b691e092ef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.961 251996 DEBUG nova.compute.manager [req-9a0b0ad7-932a-4af5-8d37-aea1cfc3d52d req-a54985b3-a31f-45f9-975f-63b691e092ef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] No waiting events found dispatching network-vif-plugged-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.961 251996 WARNING nova.compute.manager [req-9a0b0ad7-932a-4af5-8d37-aea1cfc3d52d req-a54985b3-a31f-45f9-975f-63b691e092ef 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Received unexpected event network-vif-plugged-b3891f21-193f-4da7-9562-ca6b7dd4f5d4 for instance with vm_state deleted and task_state None.
Dec 06 08:24:29 compute-0 nova_compute[251992]: 2025-12-06 08:24:29.965 251996 DEBUG oslo_concurrency.processutils [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:24:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3968: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:24:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:24:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1010346848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:30 compute-0 nova_compute[251992]: 2025-12-06 08:24:30.403 251996 DEBUG oslo_concurrency.processutils [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:24:30 compute-0 nova_compute[251992]: 2025-12-06 08:24:30.410 251996 DEBUG nova.compute.provider_tree [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:24:30 compute-0 nova_compute[251992]: 2025-12-06 08:24:30.430 251996 DEBUG nova.scheduler.client.report [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:24:30 compute-0 nova_compute[251992]: 2025-12-06 08:24:30.456 251996 DEBUG oslo_concurrency.lockutils [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:30 compute-0 nova_compute[251992]: 2025-12-06 08:24:30.492 251996 INFO nova.scheduler.client.report [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Deleted allocations for instance 58a69ab9-f433-4e6e-be04-533ca52d4646
Dec 06 08:24:30 compute-0 nova_compute[251992]: 2025-12-06 08:24:30.579 251996 DEBUG oslo_concurrency.lockutils [None req-653dcec6-8345-493e-b201-5203b76ddcd6 0432cb6633e14c1b86fc320e7f3bb880 5d23d1d6ffc142eaa9bee0ef93fe60e4 - - default default] Lock "58a69ab9-f433-4e6e-be04-533ca52d4646" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:24:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:30.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:31 compute-0 ceph-mon[74339]: pgmap v3968: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Dec 06 08:24:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1010346848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:31.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3969: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Dec 06 08:24:32 compute-0 nova_compute[251992]: 2025-12-06 08:24:32.358 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:32.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:33 compute-0 ceph-mon[74339]: pgmap v3969: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 344 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Dec 06 08:24:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:33.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3970: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 24 KiB/s wr, 30 op/s
Dec 06 08:24:34 compute-0 nova_compute[251992]: 2025-12-06 08:24:34.229 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:34 compute-0 podman[411599]: 2025-12-06 08:24:34.409470717 +0000 UTC m=+0.048665413 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 08:24:34 compute-0 podman[411600]: 2025-12-06 08:24:34.420930398 +0000 UTC m=+0.055009385 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:24:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:34.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:35 compute-0 ceph-mon[74339]: pgmap v3970: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 24 KiB/s wr, 30 op/s
Dec 06 08:24:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:35.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3971: 305 pgs: 305 active+clean; 147 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 25 KiB/s wr, 54 op/s
Dec 06 08:24:36 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2290888888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:36 compute-0 nova_compute[251992]: 2025-12-06 08:24:36.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:36.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:37 compute-0 nova_compute[251992]: 2025-12-06 08:24:37.361 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:37 compute-0 ceph-mon[74339]: pgmap v3971: 305 pgs: 305 active+clean; 147 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 25 KiB/s wr, 54 op/s
Dec 06 08:24:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:37.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3972: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Dec 06 08:24:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:38.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:39 compute-0 nova_compute[251992]: 2025-12-06 08:24:39.231 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:39 compute-0 ceph-mon[74339]: pgmap v3972: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Dec 06 08:24:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:39.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3973: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 55 op/s
Dec 06 08:24:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:40.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:41 compute-0 ceph-mon[74339]: pgmap v3973: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 55 op/s
Dec 06 08:24:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:41.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3974: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 55 op/s
Dec 06 08:24:42 compute-0 nova_compute[251992]: 2025-12-06 08:24:42.280 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009467.277745, 58a69ab9-f433-4e6e-be04-533ca52d4646 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:24:42 compute-0 nova_compute[251992]: 2025-12-06 08:24:42.281 251996 INFO nova.compute.manager [-] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] VM Stopped (Lifecycle Event)
Dec 06 08:24:42 compute-0 nova_compute[251992]: 2025-12-06 08:24:42.301 251996 DEBUG nova.compute.manager [None req-7992c5b6-79b9-451b-a8bd-962f9e162f96 - - - - - -] [instance: 58a69ab9-f433-4e6e-be04-533ca52d4646] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:24:42 compute-0 nova_compute[251992]: 2025-12-06 08:24:42.365 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:42 compute-0 nova_compute[251992]: 2025-12-06 08:24:42.675 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:42.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:42 compute-0 nova_compute[251992]: 2025-12-06 08:24:42.795 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:24:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:24:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:24:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:24:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:24:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:24:43 compute-0 ceph-mon[74339]: pgmap v3974: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 55 op/s
Dec 06 08:24:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:43.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3975: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:24:44 compute-0 nova_compute[251992]: 2025-12-06 08:24:44.233 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:44.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:45 compute-0 ceph-mon[74339]: pgmap v3975: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:24:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/142033772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:45.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3976: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:24:46 compute-0 sudo[411645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:46 compute-0 sudo[411645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:46 compute-0 sudo[411645]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:46 compute-0 sudo[411670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:46 compute-0 sudo[411670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:46 compute-0 sudo[411670]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/474660219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:24:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:46.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:47 compute-0 nova_compute[251992]: 2025-12-06 08:24:47.368 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:47 compute-0 ceph-mon[74339]: pgmap v3976: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 06 08:24:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:47.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3977: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 3 op/s
Dec 06 08:24:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:48.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:49 compute-0 nova_compute[251992]: 2025-12-06 08:24:49.235 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:49 compute-0 ceph-mon[74339]: pgmap v3977: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 3 op/s
Dec 06 08:24:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:24:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:49.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:24:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3978: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:50.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:51 compute-0 ceph-mon[74339]: pgmap v3978: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:51.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:52 compute-0 sudo[411698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3979: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:52 compute-0 sudo[411698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:52 compute-0 sudo[411698]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:52 compute-0 sudo[411723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:24:52 compute-0 sudo[411723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:52 compute-0 sudo[411723]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:52 compute-0 sudo[411748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:52 compute-0 sudo[411748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:52 compute-0 sudo[411748]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:52 compute-0 sudo[411773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:24:52 compute-0 sudo[411773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:52 compute-0 nova_compute[251992]: 2025-12-06 08:24:52.408 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:24:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:24:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:52 compute-0 sudo[411773]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:52.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 08:24:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:24:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 08:24:52 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:24:53 compute-0 ceph-mon[74339]: pgmap v3979: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:24:53 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:24:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:53.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:24:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:24:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3980: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:54 compute-0 nova_compute[251992]: 2025-12-06 08:24:54.237 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:24:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:24:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:24:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:24:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:24:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 11334213-4fbf-4821-9a5f-3f84cdf5b805 does not exist
Dec 06 08:24:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a963b90c-4885-4ab1-9021-22925f6e7934 does not exist
Dec 06 08:24:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 88b8644a-c43f-4564-8bf3-e77b194e9e35 does not exist
Dec 06 08:24:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:24:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:24:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:24:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:24:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:24:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:24:54 compute-0 sudo[411830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:54 compute-0 sudo[411830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:54 compute-0 sudo[411830]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:54.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:54 compute-0 sudo[411855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:24:54 compute-0 sudo[411855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:54 compute-0 sudo[411855]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:54 compute-0 sudo[411880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:54 compute-0 sudo[411880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:54 compute-0 sudo[411880]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:54 compute-0 sudo[411905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:24:54 compute-0 sudo[411905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:54 compute-0 ceph-mon[74339]: pgmap v3980: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:24:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:24:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:24:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:24:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:24:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:24:55 compute-0 podman[411972]: 2025-12-06 08:24:55.276017811 +0000 UTC m=+0.039478963 container create deea693006e4514d47caae7411e917e20931c41d94fd48f71922b44c9f8832f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 08:24:55 compute-0 systemd[1]: Started libpod-conmon-deea693006e4514d47caae7411e917e20931c41d94fd48f71922b44c9f8832f6.scope.
Dec 06 08:24:55 compute-0 podman[411972]: 2025-12-06 08:24:55.256719587 +0000 UTC m=+0.020180759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:24:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:24:55 compute-0 podman[411972]: 2025-12-06 08:24:55.382240207 +0000 UTC m=+0.145701349 container init deea693006e4514d47caae7411e917e20931c41d94fd48f71922b44c9f8832f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_greider, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:24:55 compute-0 podman[411972]: 2025-12-06 08:24:55.39487558 +0000 UTC m=+0.158336722 container start deea693006e4514d47caae7411e917e20931c41d94fd48f71922b44c9f8832f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_greider, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:24:55 compute-0 podman[411972]: 2025-12-06 08:24:55.39819823 +0000 UTC m=+0.161659392 container attach deea693006e4514d47caae7411e917e20931c41d94fd48f71922b44c9f8832f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_greider, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:24:55 compute-0 priceless_greider[411988]: 167 167
Dec 06 08:24:55 compute-0 systemd[1]: libpod-deea693006e4514d47caae7411e917e20931c41d94fd48f71922b44c9f8832f6.scope: Deactivated successfully.
Dec 06 08:24:55 compute-0 podman[411972]: 2025-12-06 08:24:55.400347018 +0000 UTC m=+0.163808200 container died deea693006e4514d47caae7411e917e20931c41d94fd48f71922b44c9f8832f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_greider, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:24:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb0db40e5a749fc79fc1c0aa00d2ed13f2d88d05f485ac73537de0bf954a3de2-merged.mount: Deactivated successfully.
Dec 06 08:24:55 compute-0 podman[411972]: 2025-12-06 08:24:55.450293415 +0000 UTC m=+0.213754557 container remove deea693006e4514d47caae7411e917e20931c41d94fd48f71922b44c9f8832f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_greider, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:24:55 compute-0 systemd[1]: libpod-conmon-deea693006e4514d47caae7411e917e20931c41d94fd48f71922b44c9f8832f6.scope: Deactivated successfully.
Dec 06 08:24:55 compute-0 podman[412015]: 2025-12-06 08:24:55.678822154 +0000 UTC m=+0.067732042 container create 866469ff1d13936b18c8fade61188d98b4e31b31f8856000799b018d7111791a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 08:24:55 compute-0 systemd[1]: Started libpod-conmon-866469ff1d13936b18c8fade61188d98b4e31b31f8856000799b018d7111791a.scope.
Dec 06 08:24:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:55.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:55 compute-0 podman[412015]: 2025-12-06 08:24:55.633985446 +0000 UTC m=+0.022895424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:24:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fda68cd7e142eb5c2df5ac95209529ddffcf50d06957ca15664180776617bcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fda68cd7e142eb5c2df5ac95209529ddffcf50d06957ca15664180776617bcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fda68cd7e142eb5c2df5ac95209529ddffcf50d06957ca15664180776617bcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fda68cd7e142eb5c2df5ac95209529ddffcf50d06957ca15664180776617bcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fda68cd7e142eb5c2df5ac95209529ddffcf50d06957ca15664180776617bcc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:55 compute-0 podman[412015]: 2025-12-06 08:24:55.76372956 +0000 UTC m=+0.152639458 container init 866469ff1d13936b18c8fade61188d98b4e31b31f8856000799b018d7111791a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_kowalevski, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:24:55 compute-0 podman[412015]: 2025-12-06 08:24:55.773822155 +0000 UTC m=+0.162732043 container start 866469ff1d13936b18c8fade61188d98b4e31b31f8856000799b018d7111791a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:24:55 compute-0 podman[412015]: 2025-12-06 08:24:55.777283929 +0000 UTC m=+0.166193817 container attach 866469ff1d13936b18c8fade61188d98b4e31b31f8856000799b018d7111791a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_kowalevski, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 08:24:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3981: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:56 compute-0 thirsty_kowalevski[412031]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:24:56 compute-0 thirsty_kowalevski[412031]: --> relative data size: 1.0
Dec 06 08:24:56 compute-0 thirsty_kowalevski[412031]: --> All data devices are unavailable
Dec 06 08:24:56 compute-0 systemd[1]: libpod-866469ff1d13936b18c8fade61188d98b4e31b31f8856000799b018d7111791a.scope: Deactivated successfully.
Dec 06 08:24:56 compute-0 podman[412046]: 2025-12-06 08:24:56.641028683 +0000 UTC m=+0.030513150 container died 866469ff1d13936b18c8fade61188d98b4e31b31f8856000799b018d7111791a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_kowalevski, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 08:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fda68cd7e142eb5c2df5ac95209529ddffcf50d06957ca15664180776617bcc-merged.mount: Deactivated successfully.
Dec 06 08:24:56 compute-0 podman[412046]: 2025-12-06 08:24:56.691092043 +0000 UTC m=+0.080576500 container remove 866469ff1d13936b18c8fade61188d98b4e31b31f8856000799b018d7111791a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:24:56 compute-0 systemd[1]: libpod-conmon-866469ff1d13936b18c8fade61188d98b4e31b31f8856000799b018d7111791a.scope: Deactivated successfully.
Dec 06 08:24:56 compute-0 sudo[411905]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:56.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:56 compute-0 sudo[412062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:56 compute-0 sudo[412062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:56 compute-0 sudo[412062]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:56 compute-0 sudo[412087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:24:56 compute-0 sudo[412087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:56 compute-0 sudo[412087]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:56 compute-0 sudo[412112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:56 compute-0 sudo[412112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:56 compute-0 sudo[412112]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:56 compute-0 sudo[412137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:24:56 compute-0 sudo[412137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:57 compute-0 ceph-mon[74339]: pgmap v3981: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:57 compute-0 podman[412204]: 2025-12-06 08:24:57.317262913 +0000 UTC m=+0.043255355 container create 4198d839e7370d3ee862f5883194154177fe05fb240469bf7030971b170ee7fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:24:57 compute-0 systemd[1]: Started libpod-conmon-4198d839e7370d3ee862f5883194154177fe05fb240469bf7030971b170ee7fe.scope.
Dec 06 08:24:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:24:57 compute-0 podman[412204]: 2025-12-06 08:24:57.393919596 +0000 UTC m=+0.119912048 container init 4198d839e7370d3ee862f5883194154177fe05fb240469bf7030971b170ee7fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:24:57 compute-0 podman[412204]: 2025-12-06 08:24:57.301981418 +0000 UTC m=+0.027973870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:24:57 compute-0 podman[412204]: 2025-12-06 08:24:57.403307911 +0000 UTC m=+0.129300343 container start 4198d839e7370d3ee862f5883194154177fe05fb240469bf7030971b170ee7fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_babbage, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:24:57 compute-0 podman[412204]: 2025-12-06 08:24:57.406653342 +0000 UTC m=+0.132645774 container attach 4198d839e7370d3ee862f5883194154177fe05fb240469bf7030971b170ee7fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_babbage, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:24:57 compute-0 suspicious_babbage[412221]: 167 167
Dec 06 08:24:57 compute-0 systemd[1]: libpod-4198d839e7370d3ee862f5883194154177fe05fb240469bf7030971b170ee7fe.scope: Deactivated successfully.
Dec 06 08:24:57 compute-0 podman[412204]: 2025-12-06 08:24:57.410030194 +0000 UTC m=+0.136022656 container died 4198d839e7370d3ee862f5883194154177fe05fb240469bf7030971b170ee7fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:24:57 compute-0 nova_compute[251992]: 2025-12-06 08:24:57.411 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a98d606a54653f8f7929a9e7feb9bcea54b2056af76b055c4e026fd9db7ea50-merged.mount: Deactivated successfully.
Dec 06 08:24:57 compute-0 podman[412204]: 2025-12-06 08:24:57.449868636 +0000 UTC m=+0.175861078 container remove 4198d839e7370d3ee862f5883194154177fe05fb240469bf7030971b170ee7fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:24:57 compute-0 systemd[1]: libpod-conmon-4198d839e7370d3ee862f5883194154177fe05fb240469bf7030971b170ee7fe.scope: Deactivated successfully.
Dec 06 08:24:57 compute-0 podman[412218]: 2025-12-06 08:24:57.498956339 +0000 UTC m=+0.142580994 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:24:57 compute-0 podman[412271]: 2025-12-06 08:24:57.616469152 +0000 UTC m=+0.041592871 container create 69885982db2a1464b416e0de51aa54246af9213ea89cf4cd01f7fecfb587440b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kowalevski, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:24:57 compute-0 systemd[1]: Started libpod-conmon-69885982db2a1464b416e0de51aa54246af9213ea89cf4cd01f7fecfb587440b.scope.
Dec 06 08:24:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04018c63911ffcb2aaa063c65a871513b95a7e3cf811676992a6380ee87da255/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04018c63911ffcb2aaa063c65a871513b95a7e3cf811676992a6380ee87da255/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04018c63911ffcb2aaa063c65a871513b95a7e3cf811676992a6380ee87da255/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04018c63911ffcb2aaa063c65a871513b95a7e3cf811676992a6380ee87da255/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:57 compute-0 podman[412271]: 2025-12-06 08:24:57.600458117 +0000 UTC m=+0.025581836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:24:57 compute-0 podman[412271]: 2025-12-06 08:24:57.705695016 +0000 UTC m=+0.130818775 container init 69885982db2a1464b416e0de51aa54246af9213ea89cf4cd01f7fecfb587440b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kowalevski, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 08:24:57 compute-0 podman[412271]: 2025-12-06 08:24:57.713247981 +0000 UTC m=+0.138371680 container start 69885982db2a1464b416e0de51aa54246af9213ea89cf4cd01f7fecfb587440b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kowalevski, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 08:24:57 compute-0 podman[412271]: 2025-12-06 08:24:57.716314414 +0000 UTC m=+0.141438173 container attach 69885982db2a1464b416e0de51aa54246af9213ea89cf4cd01f7fecfb587440b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 08:24:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:24:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:57.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:24:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3982: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]: {
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:     "0": [
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:         {
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             "devices": [
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "/dev/loop3"
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             ],
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             "lv_name": "ceph_lv0",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             "lv_size": "7511998464",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             "name": "ceph_lv0",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             "tags": {
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "ceph.cluster_name": "ceph",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "ceph.crush_device_class": "",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "ceph.encrypted": "0",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "ceph.osd_id": "0",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "ceph.type": "block",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:                 "ceph.vdo": "0"
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             },
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             "type": "block",
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:             "vg_name": "ceph_vg0"
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:         }
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]:     ]
Dec 06 08:24:58 compute-0 tender_kowalevski[412287]: }
Dec 06 08:24:58 compute-0 systemd[1]: libpod-69885982db2a1464b416e0de51aa54246af9213ea89cf4cd01f7fecfb587440b.scope: Deactivated successfully.
Dec 06 08:24:58 compute-0 podman[412271]: 2025-12-06 08:24:58.466867144 +0000 UTC m=+0.891990913 container died 69885982db2a1464b416e0de51aa54246af9213ea89cf4cd01f7fecfb587440b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kowalevski, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:24:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-04018c63911ffcb2aaa063c65a871513b95a7e3cf811676992a6380ee87da255-merged.mount: Deactivated successfully.
Dec 06 08:24:58 compute-0 podman[412271]: 2025-12-06 08:24:58.525185749 +0000 UTC m=+0.950309448 container remove 69885982db2a1464b416e0de51aa54246af9213ea89cf4cd01f7fecfb587440b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kowalevski, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 08:24:58 compute-0 systemd[1]: libpod-conmon-69885982db2a1464b416e0de51aa54246af9213ea89cf4cd01f7fecfb587440b.scope: Deactivated successfully.
Dec 06 08:24:58 compute-0 sudo[412137]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:58 compute-0 sudo[412310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:58 compute-0 sudo[412310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:58 compute-0 sudo[412310]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:58 compute-0 nova_compute[251992]: 2025-12-06 08:24:58.663 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:24:58 compute-0 sudo[412335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:24:58 compute-0 sudo[412335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:58 compute-0 sudo[412335]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:58 compute-0 sudo[412360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:24:58 compute-0 sudo[412360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:58 compute-0 sudo[412360]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:58 compute-0 sudo[412385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:24:58 compute-0 sudo[412385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:24:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:24:58.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:59 compute-0 podman[412450]: 2025-12-06 08:24:59.096948761 +0000 UTC m=+0.038257290 container create d927f5b05fd586444da6e2baa2693f8b781a3c372b1beb7d0d27df469ecec7f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_neumann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:24:59 compute-0 systemd[1]: Started libpod-conmon-d927f5b05fd586444da6e2baa2693f8b781a3c372b1beb7d0d27df469ecec7f4.scope.
Dec 06 08:24:59 compute-0 ceph-mon[74339]: pgmap v3982: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:24:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:24:59 compute-0 podman[412450]: 2025-12-06 08:24:59.08033002 +0000 UTC m=+0.021638549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:24:59 compute-0 podman[412450]: 2025-12-06 08:24:59.183431441 +0000 UTC m=+0.124739970 container init d927f5b05fd586444da6e2baa2693f8b781a3c372b1beb7d0d27df469ecec7f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_neumann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:24:59 compute-0 podman[412450]: 2025-12-06 08:24:59.191250392 +0000 UTC m=+0.132558921 container start d927f5b05fd586444da6e2baa2693f8b781a3c372b1beb7d0d27df469ecec7f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_neumann, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 08:24:59 compute-0 podman[412450]: 2025-12-06 08:24:59.194703337 +0000 UTC m=+0.136011896 container attach d927f5b05fd586444da6e2baa2693f8b781a3c372b1beb7d0d27df469ecec7f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_neumann, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 08:24:59 compute-0 gracious_neumann[412467]: 167 167
Dec 06 08:24:59 compute-0 systemd[1]: libpod-d927f5b05fd586444da6e2baa2693f8b781a3c372b1beb7d0d27df469ecec7f4.scope: Deactivated successfully.
Dec 06 08:24:59 compute-0 podman[412450]: 2025-12-06 08:24:59.196414963 +0000 UTC m=+0.137723482 container died d927f5b05fd586444da6e2baa2693f8b781a3c372b1beb7d0d27df469ecec7f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 08:24:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-d29692464b3a7749c8258f2a72422e0c08e040f45c45dc04d6fb1f6e81be705b-merged.mount: Deactivated successfully.
Dec 06 08:24:59 compute-0 podman[412450]: 2025-12-06 08:24:59.234048316 +0000 UTC m=+0.175356845 container remove d927f5b05fd586444da6e2baa2693f8b781a3c372b1beb7d0d27df469ecec7f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec 06 08:24:59 compute-0 systemd[1]: libpod-conmon-d927f5b05fd586444da6e2baa2693f8b781a3c372b1beb7d0d27df469ecec7f4.scope: Deactivated successfully.
Dec 06 08:24:59 compute-0 nova_compute[251992]: 2025-12-06 08:24:59.282 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:24:59 compute-0 podman[412489]: 2025-12-06 08:24:59.382473388 +0000 UTC m=+0.037236373 container create 9ff9c6e3709a4f23abd572b11443fb2be3e5c6d87bc359cb7e20d319b6246655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_agnesi, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:24:59 compute-0 systemd[1]: Started libpod-conmon-9ff9c6e3709a4f23abd572b11443fb2be3e5c6d87bc359cb7e20d319b6246655.scope.
Dec 06 08:24:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0850eda9cab28fe0ca65f58ce97b862fbe860a8862c4a07507f0e1f47cd5fc63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0850eda9cab28fe0ca65f58ce97b862fbe860a8862c4a07507f0e1f47cd5fc63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0850eda9cab28fe0ca65f58ce97b862fbe860a8862c4a07507f0e1f47cd5fc63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0850eda9cab28fe0ca65f58ce97b862fbe860a8862c4a07507f0e1f47cd5fc63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:24:59 compute-0 podman[412489]: 2025-12-06 08:24:59.366582116 +0000 UTC m=+0.021345131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:24:59 compute-0 podman[412489]: 2025-12-06 08:24:59.463233752 +0000 UTC m=+0.117996747 container init 9ff9c6e3709a4f23abd572b11443fb2be3e5c6d87bc359cb7e20d319b6246655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_agnesi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 08:24:59 compute-0 podman[412489]: 2025-12-06 08:24:59.468549535 +0000 UTC m=+0.123312520 container start 9ff9c6e3709a4f23abd572b11443fb2be3e5c6d87bc359cb7e20d319b6246655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_agnesi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 08:24:59 compute-0 podman[412489]: 2025-12-06 08:24:59.471750223 +0000 UTC m=+0.126513238 container attach 9ff9c6e3709a4f23abd572b11443fb2be3e5c6d87bc359cb7e20d319b6246655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 08:24:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:24:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:24:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:24:59.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:24:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.792549) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499792607, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1912, "num_deletes": 252, "total_data_size": 3391782, "memory_usage": 3451312, "flush_reason": "Manual Compaction"}
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499805753, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 1984809, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79664, "largest_seqno": 81575, "table_properties": {"data_size": 1978402, "index_size": 3288, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17032, "raw_average_key_size": 21, "raw_value_size": 1964130, "raw_average_value_size": 2439, "num_data_blocks": 146, "num_entries": 805, "num_filter_entries": 805, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009311, "oldest_key_time": 1765009311, "file_creation_time": 1765009499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 13256 microseconds, and 6198 cpu microseconds.
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.805802) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 1984809 bytes OK
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.805821) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.807413) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.807428) EVENT_LOG_v1 {"time_micros": 1765009499807423, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.807447) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 3383918, prev total WAL file size 3383918, number of live WAL files 2.
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.808497) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303238' seq:72057594037927935, type:22 .. '6D6772737461740033323830' seq:0, type:0; will stop at (end)
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(1938KB)], [179(12MB)]
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499808566, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 15429360, "oldest_snapshot_seqno": -1}
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 11647 keys, 12896119 bytes, temperature: kUnknown
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499905040, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 12896119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12824690, "index_size": 41236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29125, "raw_key_size": 306478, "raw_average_key_size": 26, "raw_value_size": 12624881, "raw_average_value_size": 1083, "num_data_blocks": 1561, "num_entries": 11647, "num_filter_entries": 11647, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765009499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.905348) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 12896119 bytes
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.906778) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.7 rd, 133.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(14.3) write-amplify(6.5) OK, records in: 12084, records dropped: 437 output_compression: NoCompression
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.906799) EVENT_LOG_v1 {"time_micros": 1765009499906789, "job": 112, "event": "compaction_finished", "compaction_time_micros": 96599, "compaction_time_cpu_micros": 50453, "output_level": 6, "num_output_files": 1, "total_output_size": 12896119, "num_input_records": 12084, "num_output_records": 11647, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499907505, "job": 112, "event": "table_file_deletion", "file_number": 181}
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009499910346, "job": 112, "event": "table_file_deletion", "file_number": 179}
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.808353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.910392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.910396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.910399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.910400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:24:59 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:24:59.910402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3983: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:00 compute-0 relaxed_agnesi[412504]: {
Dec 06 08:25:00 compute-0 relaxed_agnesi[412504]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:25:00 compute-0 relaxed_agnesi[412504]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:25:00 compute-0 relaxed_agnesi[412504]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:25:00 compute-0 relaxed_agnesi[412504]:         "osd_id": 0,
Dec 06 08:25:00 compute-0 relaxed_agnesi[412504]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:25:00 compute-0 relaxed_agnesi[412504]:         "type": "bluestore"
Dec 06 08:25:00 compute-0 relaxed_agnesi[412504]:     }
Dec 06 08:25:00 compute-0 relaxed_agnesi[412504]: }
Dec 06 08:25:00 compute-0 systemd[1]: libpod-9ff9c6e3709a4f23abd572b11443fb2be3e5c6d87bc359cb7e20d319b6246655.scope: Deactivated successfully.
Dec 06 08:25:00 compute-0 podman[412489]: 2025-12-06 08:25:00.378174826 +0000 UTC m=+1.032937821 container died 9ff9c6e3709a4f23abd572b11443fb2be3e5c6d87bc359cb7e20d319b6246655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_agnesi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 08:25:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0850eda9cab28fe0ca65f58ce97b862fbe860a8862c4a07507f0e1f47cd5fc63-merged.mount: Deactivated successfully.
Dec 06 08:25:00 compute-0 podman[412489]: 2025-12-06 08:25:00.437137118 +0000 UTC m=+1.091900103 container remove 9ff9c6e3709a4f23abd572b11443fb2be3e5c6d87bc359cb7e20d319b6246655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 08:25:00 compute-0 systemd[1]: libpod-conmon-9ff9c6e3709a4f23abd572b11443fb2be3e5c6d87bc359cb7e20d319b6246655.scope: Deactivated successfully.
Dec 06 08:25:00 compute-0 sudo[412385]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:25:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:25:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:25:00 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:25:00 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b61feb0f-2e41-4a1c-9931-fa99e7168b6d does not exist
Dec 06 08:25:00 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2117360e-0a71-4131-a6fa-8150f51c478a does not exist
Dec 06 08:25:00 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 683bcc52-7ee1-4a59-82ad-11edcf1e1738 does not exist
Dec 06 08:25:00 compute-0 sudo[412538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:25:00 compute-0 sudo[412538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:25:00 compute-0 sudo[412538]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:00 compute-0 sudo[412563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:25:00 compute-0 sudo[412563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:25:00 compute-0 sudo[412563]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:00.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:01 compute-0 ceph-mon[74339]: pgmap v3983: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:25:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:25:01 compute-0 nova_compute[251992]: 2025-12-06 08:25:01.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:01.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:01 compute-0 nova_compute[251992]: 2025-12-06 08:25:01.746 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:01 compute-0 nova_compute[251992]: 2025-12-06 08:25:01.747 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:01 compute-0 nova_compute[251992]: 2025-12-06 08:25:01.747 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:01 compute-0 nova_compute[251992]: 2025-12-06 08:25:01.747 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:25:01 compute-0 nova_compute[251992]: 2025-12-06 08:25:01.747 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:25:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3984: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:25:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681230716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:02 compute-0 nova_compute[251992]: 2025-12-06 08:25:02.216 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:25:02 compute-0 nova_compute[251992]: 2025-12-06 08:25:02.376 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:25:02 compute-0 nova_compute[251992]: 2025-12-06 08:25:02.378 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4070MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:25:02 compute-0 nova_compute[251992]: 2025-12-06 08:25:02.378 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:02 compute-0 nova_compute[251992]: 2025-12-06 08:25:02.378 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:02 compute-0 nova_compute[251992]: 2025-12-06 08:25:02.414 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:02 compute-0 nova_compute[251992]: 2025-12-06 08:25:02.463 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:25:02 compute-0 nova_compute[251992]: 2025-12-06 08:25:02.463 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:25:02 compute-0 nova_compute[251992]: 2025-12-06 08:25:02.482 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:25:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3681230716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:02.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:25:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/991267142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:03 compute-0 nova_compute[251992]: 2025-12-06 08:25:03.084 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:25:03 compute-0 nova_compute[251992]: 2025-12-06 08:25:03.090 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:25:03 compute-0 nova_compute[251992]: 2025-12-06 08:25:03.123 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:25:03 compute-0 nova_compute[251992]: 2025-12-06 08:25:03.152 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:25:03 compute-0 nova_compute[251992]: 2025-12-06 08:25:03.152 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:03.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:03 compute-0 ceph-mon[74339]: pgmap v3984: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/991267142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:25:03.901 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:25:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:25:03.902 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:25:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:25:03.903 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:25:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3985: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:04 compute-0 nova_compute[251992]: 2025-12-06 08:25:04.152 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:04 compute-0 nova_compute[251992]: 2025-12-06 08:25:04.284 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:04 compute-0 ceph-mon[74339]: pgmap v3985: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2341981030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:04.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:05 compute-0 podman[412636]: 2025-12-06 08:25:05.413149958 +0000 UTC m=+0.071158124 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 08:25:05 compute-0 podman[412637]: 2025-12-06 08:25:05.416377296 +0000 UTC m=+0.071683179 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:25:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:05.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3901546734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3986: 305 pgs: 305 active+clean; 125 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 KiB/s rd, 45 KiB/s wr, 2 op/s
Dec 06 08:25:06 compute-0 sudo[412674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:25:06 compute-0 sudo[412674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:25:06 compute-0 sudo[412674]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:06 compute-0 sudo[412699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:25:06 compute-0 sudo[412699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:25:06 compute-0 sudo[412699]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:06 compute-0 nova_compute[251992]: 2025-12-06 08:25:06.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:06 compute-0 nova_compute[251992]: 2025-12-06 08:25:06.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:25:06 compute-0 nova_compute[251992]: 2025-12-06 08:25:06.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:25:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:06.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:06 compute-0 ceph-mon[74339]: pgmap v3986: 305 pgs: 305 active+clean; 125 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 KiB/s rd, 45 KiB/s wr, 2 op/s
Dec 06 08:25:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3298858900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:06 compute-0 nova_compute[251992]: 2025-12-06 08:25:06.880 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:25:06 compute-0 nova_compute[251992]: 2025-12-06 08:25:06.880 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:07 compute-0 nova_compute[251992]: 2025-12-06 08:25:07.417 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:07.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3987: 305 pgs: 305 active+clean; 144 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 539 KiB/s wr, 25 op/s
Dec 06 08:25:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:08.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:09 compute-0 ceph-mon[74339]: pgmap v3987: 305 pgs: 305 active+clean; 144 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 539 KiB/s wr, 25 op/s
Dec 06 08:25:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3461588636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:25:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3417346296' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:25:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3417346296' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:25:09 compute-0 nova_compute[251992]: 2025-12-06 08:25:09.286 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:09.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.798370) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509798489, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 374, "num_deletes": 259, "total_data_size": 234047, "memory_usage": 242448, "flush_reason": "Manual Compaction"}
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509801972, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 232373, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81577, "largest_seqno": 81949, "table_properties": {"data_size": 230045, "index_size": 427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5756, "raw_average_key_size": 18, "raw_value_size": 225311, "raw_average_value_size": 713, "num_data_blocks": 18, "num_entries": 316, "num_filter_entries": 316, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009500, "oldest_key_time": 1765009500, "file_creation_time": 1765009509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 3632 microseconds, and 1075 cpu microseconds.
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.802001) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 232373 bytes OK
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.802015) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.803732) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.803749) EVENT_LOG_v1 {"time_micros": 1765009509803745, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.803766) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 231554, prev total WAL file size 231554, number of live WAL files 2.
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.804137) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323830' seq:72057594037927935, type:22 .. '6C6F676D0033353335' seq:0, type:0; will stop at (end)
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(226KB)], [182(12MB)]
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509804157, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 13128492, "oldest_snapshot_seqno": -1}
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 11433 keys, 13004083 bytes, temperature: kUnknown
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509876439, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 13004083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12933370, "index_size": 41032, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28613, "raw_key_size": 303004, "raw_average_key_size": 26, "raw_value_size": 12736557, "raw_average_value_size": 1114, "num_data_blocks": 1550, "num_entries": 11433, "num_filter_entries": 11433, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765009509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.876714) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13004083 bytes
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.877829) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.4 rd, 179.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.3 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(112.5) write-amplify(56.0) OK, records in: 11963, records dropped: 530 output_compression: NoCompression
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.877850) EVENT_LOG_v1 {"time_micros": 1765009509877840, "job": 114, "event": "compaction_finished", "compaction_time_micros": 72373, "compaction_time_cpu_micros": 29204, "output_level": 6, "num_output_files": 1, "total_output_size": 13004083, "num_input_records": 11963, "num_output_records": 11433, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509878025, "job": 114, "event": "table_file_deletion", "file_number": 184}
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009509881176, "job": 114, "event": "table_file_deletion", "file_number": 182}
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.804061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.881372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.881383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.881387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.881391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:09 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:09.881395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3988: 305 pgs: 305 active+clean; 144 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 539 KiB/s wr, 25 op/s
Dec 06 08:25:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/241042467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:25:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:10.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:11 compute-0 ceph-mon[74339]: pgmap v3988: 305 pgs: 305 active+clean; 144 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 539 KiB/s wr, 25 op/s
Dec 06 08:25:11 compute-0 nova_compute[251992]: 2025-12-06 08:25:11.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:11.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:25:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3989: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:25:12 compute-0 nova_compute[251992]: 2025-12-06 08:25:12.471 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:12.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:25:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:25:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:25:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:25:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:25:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:25:13 compute-0 ceph-mon[74339]: pgmap v3989: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:25:13 compute-0 nova_compute[251992]: 2025-12-06 08:25:13.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:13.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3990: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:25:14 compute-0 nova_compute[251992]: 2025-12-06 08:25:14.288 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:14.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:15 compute-0 ceph-mon[74339]: pgmap v3990: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 06 08:25:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:15.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3991: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 77 op/s
Dec 06 08:25:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:16.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:17 compute-0 nova_compute[251992]: 2025-12-06 08:25:17.474 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:17.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:25:17 compute-0 ceph-mon[74339]: pgmap v3991: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 77 op/s
Dec 06 08:25:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3992: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 93 op/s
Dec 06 08:25:18 compute-0 nova_compute[251992]: 2025-12-06 08:25:18.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:18 compute-0 nova_compute[251992]: 2025-12-06 08:25:18.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:25:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:25:18
Dec 06 08:25:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:25:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:25:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'images', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'backups', '.mgr', 'volumes']
Dec 06 08:25:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:25:18 compute-0 ceph-mon[74339]: pgmap v3992: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 93 op/s
Dec 06 08:25:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:18.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:25:19 compute-0 nova_compute[251992]: 2025-12-06 08:25:19.290 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:19.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3993: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 70 op/s
Dec 06 08:25:20 compute-0 ceph-mon[74339]: pgmap v3993: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 70 op/s
Dec 06 08:25:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:20.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.832209) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520832482, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 358, "num_deletes": 251, "total_data_size": 210389, "memory_usage": 217280, "flush_reason": "Manual Compaction"}
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520836013, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 208199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81950, "largest_seqno": 82307, "table_properties": {"data_size": 205998, "index_size": 364, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5534, "raw_average_key_size": 18, "raw_value_size": 201608, "raw_average_value_size": 676, "num_data_blocks": 16, "num_entries": 298, "num_filter_entries": 298, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009510, "oldest_key_time": 1765009510, "file_creation_time": 1765009520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 3837 microseconds, and 1525 cpu microseconds.
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.836051) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 208199 bytes OK
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.836070) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.837585) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.837602) EVENT_LOG_v1 {"time_micros": 1765009520837596, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.837620) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 208028, prev total WAL file size 208028, number of live WAL files 2.
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.838214) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(203KB)], [185(12MB)]
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520838308, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 13212282, "oldest_snapshot_seqno": -1}
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 11218 keys, 11300421 bytes, temperature: kUnknown
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520942093, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 11300421, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11232608, "index_size": 38644, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28101, "raw_key_size": 299211, "raw_average_key_size": 26, "raw_value_size": 11040957, "raw_average_value_size": 984, "num_data_blocks": 1443, "num_entries": 11218, "num_filter_entries": 11218, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765009520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.942385) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11300421 bytes
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.944029) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.2 rd, 108.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(117.7) write-amplify(54.3) OK, records in: 11731, records dropped: 513 output_compression: NoCompression
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.944045) EVENT_LOG_v1 {"time_micros": 1765009520944037, "job": 116, "event": "compaction_finished", "compaction_time_micros": 103895, "compaction_time_cpu_micros": 59865, "output_level": 6, "num_output_files": 1, "total_output_size": 11300421, "num_input_records": 11731, "num_output_records": 11218, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520944209, "job": 116, "event": "table_file_deletion", "file_number": 187}
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009520946402, "job": 116, "event": "table_file_deletion", "file_number": 185}
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.838034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.946730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.946737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.946740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.946742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:20 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:25:20.946745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:25:21 compute-0 nova_compute[251992]: 2025-12-06 08:25:21.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:21.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:25:21 compute-0 ovn_controller[147168]: 2025-12-06T08:25:21Z|00830|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec 06 08:25:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3994: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 75 op/s
Dec 06 08:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:25:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 18K writes, 82K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1570 writes, 7183 keys, 1568 commit groups, 1.0 writes per commit group, ingest: 10.40 MB, 0.02 MB/s
                                           Interval WAL: 1570 writes, 1568 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     45.6      2.41              0.36        58    0.042       0      0       0.0       0.0
                                             L6      1/0   10.78 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.6    108.7     93.6      6.56              2.01        57    0.115    482K    30K       0.0       0.0
                                            Sum      1/0   10.78 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.6     79.5     80.7      8.97              2.38       115    0.078    482K    30K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0  10.6    142.4    138.7      0.67              0.34        14    0.048     81K   3517       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    108.7     93.6      6.56              2.01        57    0.115    482K    30K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     45.6      2.41              0.36        57    0.042       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.107, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.71 GB write, 0.10 MB/s write, 0.70 GB read, 0.10 MB/s read, 9.0 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.16 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 76.66 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000488 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4388,73.28 MB,24.1043%) FilterBlock(116,1.30 MB,0.427161%) IndexBlock(116,2.09 MB,0.686977%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 08:25:22 compute-0 nova_compute[251992]: 2025-12-06 08:25:22.477 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:22.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:23 compute-0 ceph-mon[74339]: pgmap v3994: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 75 op/s
Dec 06 08:25:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:23.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:25:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:25:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:25:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:25:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:25:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3995: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:25:24 compute-0 nova_compute[251992]: 2025-12-06 08:25:24.293 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:24.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:25 compute-0 ceph-mon[74339]: pgmap v3995: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 08:25:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:25.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:25 compute-0 nova_compute[251992]: 2025-12-06 08:25:25.788 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3996: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Dec 06 08:25:26 compute-0 sudo[412735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:25:26 compute-0 sudo[412735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:25:26 compute-0 sudo[412735]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:26 compute-0 sudo[412760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:25:26 compute-0 sudo[412760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:25:26 compute-0 sudo[412760]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:25:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:25:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:26.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:25:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:25:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:25:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:25:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:25:27 compute-0 nova_compute[251992]: 2025-12-06 08:25:27.480 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:27 compute-0 ceph-mon[74339]: pgmap v3996: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Dec 06 08:25:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:27.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:25:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3997: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 676 KiB/s rd, 28 op/s
Dec 06 08:25:28 compute-0 podman[412786]: 2025-12-06 08:25:28.423254183 +0000 UTC m=+0.078998477 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:25:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:28.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:29 compute-0 nova_compute[251992]: 2025-12-06 08:25:29.294 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:29 compute-0 ceph-mon[74339]: pgmap v3997: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 676 KiB/s rd, 28 op/s
Dec 06 08:25:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:29.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:25:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3998: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 143 KiB/s rd, 9 op/s
Dec 06 08:25:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:30.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:31 compute-0 ceph-mon[74339]: pgmap v3998: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 143 KiB/s rd, 9 op/s
Dec 06 08:25:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:31.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:31 compute-0 nova_compute[251992]: 2025-12-06 08:25:31.826 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:25:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v3999: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 147 KiB/s rd, 14 op/s
Dec 06 08:25:32 compute-0 nova_compute[251992]: 2025-12-06 08:25:32.481 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:32 compute-0 ceph-mon[74339]: pgmap v3999: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 147 KiB/s rd, 14 op/s
Dec 06 08:25:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:32.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:33 compute-0 nova_compute[251992]: 2025-12-06 08:25:33.629 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:25:33.630 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=107, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=106) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:25:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:25:33.631 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:25:33 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:25:33.633 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '107'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:25:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:33.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4000: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.5 KiB/s rd, 9 op/s
Dec 06 08:25:34 compute-0 nova_compute[251992]: 2025-12-06 08:25:34.297 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:34.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:35 compute-0 ceph-mon[74339]: pgmap v4000: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.5 KiB/s rd, 9 op/s
Dec 06 08:25:35 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3315669685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:35.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4001: 305 pgs: 305 active+clean; 139 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 853 B/s wr, 25 op/s
Dec 06 08:25:36 compute-0 podman[412817]: 2025-12-06 08:25:36.394063729 +0000 UTC m=+0.051705586 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 08:25:36 compute-0 podman[412818]: 2025-12-06 08:25:36.410424373 +0000 UTC m=+0.063038913 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:25:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:36.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:37 compute-0 ceph-mon[74339]: pgmap v4001: 305 pgs: 305 active+clean; 139 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 853 B/s wr, 25 op/s
Dec 06 08:25:37 compute-0 nova_compute[251992]: 2025-12-06 08:25:37.484 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:37.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4002: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Dec 06 08:25:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:38.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:39 compute-0 ceph-mon[74339]: pgmap v4002: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Dec 06 08:25:39 compute-0 nova_compute[251992]: 2025-12-06 08:25:39.300 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:25:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:39.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:25:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4003: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 06 08:25:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:40.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:41 compute-0 ceph-mon[74339]: pgmap v4003: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 06 08:25:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:41.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4004: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 06 08:25:42 compute-0 nova_compute[251992]: 2025-12-06 08:25:42.488 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:42.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:25:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:25:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:25:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:25:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:25:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:25:43 compute-0 ceph-mon[74339]: pgmap v4004: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Dec 06 08:25:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:43.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4005: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 08:25:44 compute-0 nova_compute[251992]: 2025-12-06 08:25:44.301 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:44 compute-0 ceph-mon[74339]: pgmap v4005: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 08:25:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:44.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:45.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4006: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 08:25:46 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3770189070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:46 compute-0 sudo[412861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:25:46 compute-0 sudo[412861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:25:46 compute-0 sudo[412861]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:46 compute-0 sudo[412886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:25:46 compute-0 sudo[412886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:25:46 compute-0 sudo[412886]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:46.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:47 compute-0 ceph-mon[74339]: pgmap v4006: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Dec 06 08:25:47 compute-0 nova_compute[251992]: 2025-12-06 08:25:47.544 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:47.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:25:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4007: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 10 op/s
Dec 06 08:25:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:48.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:49.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4008: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:50 compute-0 nova_compute[251992]: 2025-12-06 08:25:50.279 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/375367801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:25:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:50.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:51 compute-0 ceph-mon[74339]: pgmap v4007: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 341 B/s wr, 10 op/s
Dec 06 08:25:51 compute-0 ceph-mon[74339]: pgmap v4008: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:51.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:52 compute-0 nova_compute[251992]: 2025-12-06 08:25:52.546 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:52.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:53 compute-0 ceph-mon[74339]: pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:53.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:25:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:54 compute-0 nova_compute[251992]: 2025-12-06 08:25:54.305 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:54 compute-0 ceph-mon[74339]: pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:54.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:25:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:55.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:25:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:56.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:25:57 compute-0 ceph-mon[74339]: pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:57 compute-0 nova_compute[251992]: 2025-12-06 08:25:57.549 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:57.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:58 compute-0 sshd-session[412919]: banner exchange: Connection from 172.202.118.82 port 38792: invalid format
Dec 06 08:25:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:25:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:25:58.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:25:59 compute-0 nova_compute[251992]: 2025-12-06 08:25:59.307 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:25:59 compute-0 podman[412921]: 2025-12-06 08:25:59.429959436 +0000 UTC m=+0.087547530 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:25:59 compute-0 ceph-mon[74339]: pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:25:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:25:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:25:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:25:59.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:00 compute-0 nova_compute[251992]: 2025-12-06 08:26:00.652 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:00.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:00 compute-0 sudo[412947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:00 compute-0 sudo[412947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:00 compute-0 sudo[412947]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:00 compute-0 sudo[412972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:26:00 compute-0 sudo[412972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:00 compute-0 sudo[412972]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:01 compute-0 sudo[412997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:01 compute-0 sudo[412997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:01 compute-0 sudo[412997]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:01 compute-0 sudo[413022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:26:01 compute-0 sudo[413022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:01 compute-0 sudo[413022]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:26:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:26:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:26:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:26:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:26:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:26:01 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2bd9465d-c96f-4ee1-99bd-9dcd6f4f23cc does not exist
Dec 06 08:26:01 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ef0b1729-bd27-4f89-ab82-ebd98d1c999c does not exist
Dec 06 08:26:01 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7e0ef70c-2487-4b1b-b17b-f6034f791414 does not exist
Dec 06 08:26:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:26:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:26:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:26:01 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:26:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:26:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:26:01 compute-0 ceph-mon[74339]: pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:26:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:26:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:26:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:26:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:26:01 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:26:01 compute-0 sudo[413078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:01 compute-0 sudo[413078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:01 compute-0 sudo[413078]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:01 compute-0 sudo[413103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:26:01 compute-0 sudo[413103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:01 compute-0 sudo[413103]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:01.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:01 compute-0 sudo[413128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:01 compute-0 sudo[413128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:01 compute-0 sudo[413128]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:01 compute-0 sudo[413153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:26:01 compute-0 sudo[413153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:02 compute-0 podman[413220]: 2025-12-06 08:26:02.226910238 +0000 UTC m=+0.040002298 container create 71aa367fa73402b50a4b1c7cb8b04549d897c727c6bd726b59f47179882787a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:26:02 compute-0 systemd[1]: Started libpod-conmon-71aa367fa73402b50a4b1c7cb8b04549d897c727c6bd726b59f47179882787a4.scope.
Dec 06 08:26:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:26:02 compute-0 podman[413220]: 2025-12-06 08:26:02.209333021 +0000 UTC m=+0.022425101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:26:02 compute-0 nova_compute[251992]: 2025-12-06 08:26:02.553 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:02 compute-0 nova_compute[251992]: 2025-12-06 08:26:02.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:02 compute-0 nova_compute[251992]: 2025-12-06 08:26:02.689 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:02 compute-0 nova_compute[251992]: 2025-12-06 08:26:02.690 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:02 compute-0 nova_compute[251992]: 2025-12-06 08:26:02.690 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:02 compute-0 nova_compute[251992]: 2025-12-06 08:26:02.691 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:26:02 compute-0 nova_compute[251992]: 2025-12-06 08:26:02.691 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:02 compute-0 podman[413220]: 2025-12-06 08:26:02.692568948 +0000 UTC m=+0.505661038 container init 71aa367fa73402b50a4b1c7cb8b04549d897c727c6bd726b59f47179882787a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jepsen, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:26:02 compute-0 podman[413220]: 2025-12-06 08:26:02.700174965 +0000 UTC m=+0.513267045 container start 71aa367fa73402b50a4b1c7cb8b04549d897c727c6bd726b59f47179882787a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jepsen, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:26:02 compute-0 heuristic_jepsen[413236]: 167 167
Dec 06 08:26:02 compute-0 systemd[1]: libpod-71aa367fa73402b50a4b1c7cb8b04549d897c727c6bd726b59f47179882787a4.scope: Deactivated successfully.
Dec 06 08:26:02 compute-0 podman[413220]: 2025-12-06 08:26:02.804702654 +0000 UTC m=+0.617794764 container attach 71aa367fa73402b50a4b1c7cb8b04549d897c727c6bd726b59f47179882787a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:26:02 compute-0 podman[413220]: 2025-12-06 08:26:02.806416471 +0000 UTC m=+0.619508531 container died 71aa367fa73402b50a4b1c7cb8b04549d897c727c6bd726b59f47179882787a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:26:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-113327a4557176021559fffc19689448d01bf782676cca56d73ecd2d9bd88942-merged.mount: Deactivated successfully.
Dec 06 08:26:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:02.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:02 compute-0 ceph-mon[74339]: pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:02 compute-0 podman[413220]: 2025-12-06 08:26:02.866858212 +0000 UTC m=+0.679950272 container remove 71aa367fa73402b50a4b1c7cb8b04549d897c727c6bd726b59f47179882787a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_jepsen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:26:02 compute-0 systemd[1]: libpod-conmon-71aa367fa73402b50a4b1c7cb8b04549d897c727c6bd726b59f47179882787a4.scope: Deactivated successfully.
Dec 06 08:26:03 compute-0 podman[413280]: 2025-12-06 08:26:03.03238513 +0000 UTC m=+0.045399395 container create 66879e36817b92bd20f0005b553f216210df10e6df5df24ffd9b0279e95f9fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_blackwell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 08:26:03 compute-0 systemd[1]: Started libpod-conmon-66879e36817b92bd20f0005b553f216210df10e6df5df24ffd9b0279e95f9fcd.scope.
Dec 06 08:26:03 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0a19f80b9283658563bd1589f6f47a2839567af372b88a87053993512cea9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0a19f80b9283658563bd1589f6f47a2839567af372b88a87053993512cea9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0a19f80b9283658563bd1589f6f47a2839567af372b88a87053993512cea9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0a19f80b9283658563bd1589f6f47a2839567af372b88a87053993512cea9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0a19f80b9283658563bd1589f6f47a2839567af372b88a87053993512cea9c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:03 compute-0 podman[413280]: 2025-12-06 08:26:03.015095329 +0000 UTC m=+0.028109614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:26:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:26:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078334676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:03 compute-0 podman[413280]: 2025-12-06 08:26:03.123892195 +0000 UTC m=+0.136906470 container init 66879e36817b92bd20f0005b553f216210df10e6df5df24ffd9b0279e95f9fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec 06 08:26:03 compute-0 podman[413280]: 2025-12-06 08:26:03.129496467 +0000 UTC m=+0.142510722 container start 66879e36817b92bd20f0005b553f216210df10e6df5df24ffd9b0279e95f9fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:26:03 compute-0 podman[413280]: 2025-12-06 08:26:03.133085836 +0000 UTC m=+0.146100131 container attach 66879e36817b92bd20f0005b553f216210df10e6df5df24ffd9b0279e95f9fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_blackwell, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.139 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.283 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.285 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4079MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.285 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.286 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.369 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.369 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.416 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.437 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.438 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.452 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.470 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.486 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:03.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1078334676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:26:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2690275473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:03.901 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:03.902 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:03.902 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.904 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:03 compute-0 clever_blackwell[413297]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:26:03 compute-0 clever_blackwell[413297]: --> relative data size: 1.0
Dec 06 08:26:03 compute-0 clever_blackwell[413297]: --> All data devices are unavailable
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.911 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.926 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.928 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:26:03 compute-0 nova_compute[251992]: 2025-12-06 08:26:03.929 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:03 compute-0 systemd[1]: libpod-66879e36817b92bd20f0005b553f216210df10e6df5df24ffd9b0279e95f9fcd.scope: Deactivated successfully.
Dec 06 08:26:03 compute-0 podman[413280]: 2025-12-06 08:26:03.938428583 +0000 UTC m=+0.951442858 container died 66879e36817b92bd20f0005b553f216210df10e6df5df24ffd9b0279e95f9fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_blackwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 08:26:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-df0a19f80b9283658563bd1589f6f47a2839567af372b88a87053993512cea9c-merged.mount: Deactivated successfully.
Dec 06 08:26:04 compute-0 podman[413280]: 2025-12-06 08:26:04.013224875 +0000 UTC m=+1.026239130 container remove 66879e36817b92bd20f0005b553f216210df10e6df5df24ffd9b0279e95f9fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_blackwell, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 08:26:04 compute-0 systemd[1]: libpod-conmon-66879e36817b92bd20f0005b553f216210df10e6df5df24ffd9b0279e95f9fcd.scope: Deactivated successfully.
Dec 06 08:26:04 compute-0 sudo[413153]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:04 compute-0 sudo[413350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:04 compute-0 sudo[413350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:04 compute-0 sudo[413350]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:04 compute-0 sudo[413375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:26:04 compute-0 sudo[413375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:04 compute-0 sudo[413375]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:04 compute-0 sudo[413400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:04 compute-0 sudo[413400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:04 compute-0 sudo[413400]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:04 compute-0 sudo[413425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:26:04 compute-0 sudo[413425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:04 compute-0 nova_compute[251992]: 2025-12-06 08:26:04.309 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:04 compute-0 podman[413488]: 2025-12-06 08:26:04.603248453 +0000 UTC m=+0.052496036 container create c80c5c15d91aaf89a99974df3ff6ffe2ae6f159086973aa269971279f45374d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:26:04 compute-0 systemd[1]: Started libpod-conmon-c80c5c15d91aaf89a99974df3ff6ffe2ae6f159086973aa269971279f45374d8.scope.
Dec 06 08:26:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:26:04 compute-0 podman[413488]: 2025-12-06 08:26:04.576874827 +0000 UTC m=+0.026122410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:26:04 compute-0 podman[413488]: 2025-12-06 08:26:04.678876378 +0000 UTC m=+0.128123971 container init c80c5c15d91aaf89a99974df3ff6ffe2ae6f159086973aa269971279f45374d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 08:26:04 compute-0 podman[413488]: 2025-12-06 08:26:04.684691806 +0000 UTC m=+0.133939349 container start c80c5c15d91aaf89a99974df3ff6ffe2ae6f159086973aa269971279f45374d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_snyder, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:26:04 compute-0 podman[413488]: 2025-12-06 08:26:04.687896804 +0000 UTC m=+0.137144357 container attach c80c5c15d91aaf89a99974df3ff6ffe2ae6f159086973aa269971279f45374d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_snyder, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 08:26:04 compute-0 systemd[1]: libpod-c80c5c15d91aaf89a99974df3ff6ffe2ae6f159086973aa269971279f45374d8.scope: Deactivated successfully.
Dec 06 08:26:04 compute-0 flamboyant_snyder[413504]: 167 167
Dec 06 08:26:04 compute-0 conmon[413504]: conmon c80c5c15d91aaf89a999 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c80c5c15d91aaf89a99974df3ff6ffe2ae6f159086973aa269971279f45374d8.scope/container/memory.events
Dec 06 08:26:04 compute-0 podman[413488]: 2025-12-06 08:26:04.690515854 +0000 UTC m=+0.139763407 container died c80c5c15d91aaf89a99974df3ff6ffe2ae6f159086973aa269971279f45374d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 08:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fc449660b27f4ca1de486e379b7938b6f0f90977c5d81a7bbdbd061838a3fcb-merged.mount: Deactivated successfully.
Dec 06 08:26:04 compute-0 podman[413488]: 2025-12-06 08:26:04.739367612 +0000 UTC m=+0.188615145 container remove c80c5c15d91aaf89a99974df3ff6ffe2ae6f159086973aa269971279f45374d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:26:04 compute-0 systemd[1]: libpod-conmon-c80c5c15d91aaf89a99974df3ff6ffe2ae6f159086973aa269971279f45374d8.scope: Deactivated successfully.
Dec 06 08:26:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:04.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2690275473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:04 compute-0 ceph-mon[74339]: pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3266479852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:04 compute-0 podman[413529]: 2025-12-06 08:26:04.88287758 +0000 UTC m=+0.037159950 container create 3841518c3a8590ce05a5fa63b8b50d8edc8ff04b7f6b1d944856e6e118b24e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_buck, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:26:04 compute-0 systemd[1]: Started libpod-conmon-3841518c3a8590ce05a5fa63b8b50d8edc8ff04b7f6b1d944856e6e118b24e2d.scope.
Dec 06 08:26:04 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1626a126df926791a0578af71d1ab07d979c88b72c45b82ba8aadddbd80196d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1626a126df926791a0578af71d1ab07d979c88b72c45b82ba8aadddbd80196d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1626a126df926791a0578af71d1ab07d979c88b72c45b82ba8aadddbd80196d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1626a126df926791a0578af71d1ab07d979c88b72c45b82ba8aadddbd80196d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:04 compute-0 podman[413529]: 2025-12-06 08:26:04.945854471 +0000 UTC m=+0.100136831 container init 3841518c3a8590ce05a5fa63b8b50d8edc8ff04b7f6b1d944856e6e118b24e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_buck, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 08:26:04 compute-0 podman[413529]: 2025-12-06 08:26:04.955225806 +0000 UTC m=+0.109508156 container start 3841518c3a8590ce05a5fa63b8b50d8edc8ff04b7f6b1d944856e6e118b24e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_buck, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:26:04 compute-0 podman[413529]: 2025-12-06 08:26:04.959325167 +0000 UTC m=+0.113607507 container attach 3841518c3a8590ce05a5fa63b8b50d8edc8ff04b7f6b1d944856e6e118b24e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:26:04 compute-0 podman[413529]: 2025-12-06 08:26:04.866037343 +0000 UTC m=+0.020319693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:26:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:05 compute-0 tender_buck[413546]: {
Dec 06 08:26:05 compute-0 tender_buck[413546]:     "0": [
Dec 06 08:26:05 compute-0 tender_buck[413546]:         {
Dec 06 08:26:05 compute-0 tender_buck[413546]:             "devices": [
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "/dev/loop3"
Dec 06 08:26:05 compute-0 tender_buck[413546]:             ],
Dec 06 08:26:05 compute-0 tender_buck[413546]:             "lv_name": "ceph_lv0",
Dec 06 08:26:05 compute-0 tender_buck[413546]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:26:05 compute-0 tender_buck[413546]:             "lv_size": "7511998464",
Dec 06 08:26:05 compute-0 tender_buck[413546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:26:05 compute-0 tender_buck[413546]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:26:05 compute-0 tender_buck[413546]:             "name": "ceph_lv0",
Dec 06 08:26:05 compute-0 tender_buck[413546]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:26:05 compute-0 tender_buck[413546]:             "tags": {
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "ceph.cluster_name": "ceph",
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "ceph.crush_device_class": "",
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "ceph.encrypted": "0",
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "ceph.osd_id": "0",
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "ceph.type": "block",
Dec 06 08:26:05 compute-0 tender_buck[413546]:                 "ceph.vdo": "0"
Dec 06 08:26:05 compute-0 tender_buck[413546]:             },
Dec 06 08:26:05 compute-0 tender_buck[413546]:             "type": "block",
Dec 06 08:26:05 compute-0 tender_buck[413546]:             "vg_name": "ceph_vg0"
Dec 06 08:26:05 compute-0 tender_buck[413546]:         }
Dec 06 08:26:05 compute-0 tender_buck[413546]:     ]
Dec 06 08:26:05 compute-0 tender_buck[413546]: }
Dec 06 08:26:05 compute-0 systemd[1]: libpod-3841518c3a8590ce05a5fa63b8b50d8edc8ff04b7f6b1d944856e6e118b24e2d.scope: Deactivated successfully.
Dec 06 08:26:05 compute-0 podman[413529]: 2025-12-06 08:26:05.742353789 +0000 UTC m=+0.896636169 container died 3841518c3a8590ce05a5fa63b8b50d8edc8ff04b7f6b1d944856e6e118b24e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 08:26:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-1626a126df926791a0578af71d1ab07d979c88b72c45b82ba8aadddbd80196d1-merged.mount: Deactivated successfully.
Dec 06 08:26:05 compute-0 podman[413529]: 2025-12-06 08:26:05.802633236 +0000 UTC m=+0.956915586 container remove 3841518c3a8590ce05a5fa63b8b50d8edc8ff04b7f6b1d944856e6e118b24e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 08:26:05 compute-0 systemd[1]: libpod-conmon-3841518c3a8590ce05a5fa63b8b50d8edc8ff04b7f6b1d944856e6e118b24e2d.scope: Deactivated successfully.
Dec 06 08:26:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:05.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:05 compute-0 sudo[413425]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/382677093' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:05 compute-0 sudo[413570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:05 compute-0 sudo[413570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:05 compute-0 sudo[413570]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:05 compute-0 nova_compute[251992]: 2025-12-06 08:26:05.930 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:05 compute-0 sudo[413595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:26:05 compute-0 sudo[413595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:05 compute-0 sudo[413595]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:06 compute-0 sudo[413620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:06 compute-0 sudo[413620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:06 compute-0 sudo[413620]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:06 compute-0 sudo[413645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:26:06 compute-0 sudo[413645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4016: 305 pgs: 305 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 975 KiB/s wr, 1 op/s
Dec 06 08:26:06 compute-0 podman[413710]: 2025-12-06 08:26:06.407255461 +0000 UTC m=+0.037391766 container create aa97079fe08c46e6628519c16d1ac44321af5625e6dceaedacb892376b40690c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:26:06 compute-0 systemd[1]: Started libpod-conmon-aa97079fe08c46e6628519c16d1ac44321af5625e6dceaedacb892376b40690c.scope.
Dec 06 08:26:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:26:06 compute-0 podman[413710]: 2025-12-06 08:26:06.471709492 +0000 UTC m=+0.101845817 container init aa97079fe08c46e6628519c16d1ac44321af5625e6dceaedacb892376b40690c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kare, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:26:06 compute-0 podman[413710]: 2025-12-06 08:26:06.478171109 +0000 UTC m=+0.108307414 container start aa97079fe08c46e6628519c16d1ac44321af5625e6dceaedacb892376b40690c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 08:26:06 compute-0 podman[413710]: 2025-12-06 08:26:06.481218001 +0000 UTC m=+0.111354356 container attach aa97079fe08c46e6628519c16d1ac44321af5625e6dceaedacb892376b40690c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:26:06 compute-0 confident_kare[413727]: 167 167
Dec 06 08:26:06 compute-0 systemd[1]: libpod-aa97079fe08c46e6628519c16d1ac44321af5625e6dceaedacb892376b40690c.scope: Deactivated successfully.
Dec 06 08:26:06 compute-0 podman[413710]: 2025-12-06 08:26:06.391280718 +0000 UTC m=+0.021417043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:26:06 compute-0 podman[413710]: 2025-12-06 08:26:06.486895206 +0000 UTC m=+0.117031531 container died aa97079fe08c46e6628519c16d1ac44321af5625e6dceaedacb892376b40690c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kare, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 08:26:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-78d4fc2318a2827677f86134cbb3ead4a38ab0a901a183bb7c7f9450d0a42e13-merged.mount: Deactivated successfully.
Dec 06 08:26:06 compute-0 podman[413710]: 2025-12-06 08:26:06.526279625 +0000 UTC m=+0.156415930 container remove aa97079fe08c46e6628519c16d1ac44321af5625e6dceaedacb892376b40690c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:26:06 compute-0 podman[413728]: 2025-12-06 08:26:06.527172159 +0000 UTC m=+0.076516749 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:26:06 compute-0 systemd[1]: libpod-conmon-aa97079fe08c46e6628519c16d1ac44321af5625e6dceaedacb892376b40690c.scope: Deactivated successfully.
Dec 06 08:26:06 compute-0 podman[413724]: 2025-12-06 08:26:06.547029719 +0000 UTC m=+0.100720938 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:26:06 compute-0 nova_compute[251992]: 2025-12-06 08:26:06.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:06 compute-0 podman[413788]: 2025-12-06 08:26:06.69136828 +0000 UTC m=+0.038821285 container create f3d0d3a25f64c2dbdf4e7603625d3fa51545e84198d01d283b29b095d999a6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 08:26:06 compute-0 systemd[1]: Started libpod-conmon-f3d0d3a25f64c2dbdf4e7603625d3fa51545e84198d01d283b29b095d999a6e4.scope.
Dec 06 08:26:06 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181376fb88830dcf926d2c42f1c476f7cbeb251e5bdf4f6aa20f5f21dff396d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181376fb88830dcf926d2c42f1c476f7cbeb251e5bdf4f6aa20f5f21dff396d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181376fb88830dcf926d2c42f1c476f7cbeb251e5bdf4f6aa20f5f21dff396d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181376fb88830dcf926d2c42f1c476f7cbeb251e5bdf4f6aa20f5f21dff396d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:06 compute-0 podman[413788]: 2025-12-06 08:26:06.769971676 +0000 UTC m=+0.117424691 container init f3d0d3a25f64c2dbdf4e7603625d3fa51545e84198d01d283b29b095d999a6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 08:26:06 compute-0 podman[413788]: 2025-12-06 08:26:06.675559631 +0000 UTC m=+0.023012636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:26:06 compute-0 podman[413788]: 2025-12-06 08:26:06.778414765 +0000 UTC m=+0.125867760 container start f3d0d3a25f64c2dbdf4e7603625d3fa51545e84198d01d283b29b095d999a6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:26:06 compute-0 podman[413788]: 2025-12-06 08:26:06.781777886 +0000 UTC m=+0.129230921 container attach f3d0d3a25f64c2dbdf4e7603625d3fa51545e84198d01d283b29b095d999a6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:26:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:06.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:06 compute-0 ceph-mon[74339]: pgmap v4016: 305 pgs: 305 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 975 KiB/s wr, 1 op/s
Dec 06 08:26:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3789081481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1855592645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:06 compute-0 sudo[413810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:06 compute-0 sudo[413810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:06 compute-0 sudo[413810]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:06 compute-0 sudo[413835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:06 compute-0 sudo[413835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:06 compute-0 sudo[413835]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:07 compute-0 nova_compute[251992]: 2025-12-06 08:26:07.556 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:07 compute-0 flamboyant_maxwell[413805]: {
Dec 06 08:26:07 compute-0 flamboyant_maxwell[413805]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:26:07 compute-0 flamboyant_maxwell[413805]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:26:07 compute-0 flamboyant_maxwell[413805]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:26:07 compute-0 flamboyant_maxwell[413805]:         "osd_id": 0,
Dec 06 08:26:07 compute-0 flamboyant_maxwell[413805]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:26:07 compute-0 flamboyant_maxwell[413805]:         "type": "bluestore"
Dec 06 08:26:07 compute-0 flamboyant_maxwell[413805]:     }
Dec 06 08:26:07 compute-0 flamboyant_maxwell[413805]: }
Dec 06 08:26:07 compute-0 systemd[1]: libpod-f3d0d3a25f64c2dbdf4e7603625d3fa51545e84198d01d283b29b095d999a6e4.scope: Deactivated successfully.
Dec 06 08:26:07 compute-0 podman[413788]: 2025-12-06 08:26:07.617662353 +0000 UTC m=+0.965115338 container died f3d0d3a25f64c2dbdf4e7603625d3fa51545e84198d01d283b29b095d999a6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-181376fb88830dcf926d2c42f1c476f7cbeb251e5bdf4f6aa20f5f21dff396d6-merged.mount: Deactivated successfully.
Dec 06 08:26:07 compute-0 nova_compute[251992]: 2025-12-06 08:26:07.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:07 compute-0 nova_compute[251992]: 2025-12-06 08:26:07.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:26:07 compute-0 nova_compute[251992]: 2025-12-06 08:26:07.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:26:07 compute-0 nova_compute[251992]: 2025-12-06 08:26:07.675 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:26:07 compute-0 podman[413788]: 2025-12-06 08:26:07.685186788 +0000 UTC m=+1.032639773 container remove f3d0d3a25f64c2dbdf4e7603625d3fa51545e84198d01d283b29b095d999a6e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_maxwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 08:26:07 compute-0 systemd[1]: libpod-conmon-f3d0d3a25f64c2dbdf4e7603625d3fa51545e84198d01d283b29b095d999a6e4.scope: Deactivated successfully.
Dec 06 08:26:07 compute-0 sudo[413645]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:26:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:26:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:26:07 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:26:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4a70e970-81f6-49f5-8fbf-9e63d38cbdb1 does not exist
Dec 06 08:26:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 541c6e5b-80f5-463e-8ebd-57e116c22e80 does not exist
Dec 06 08:26:07 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2a486e3a-1299-4fd0-8b51-a0987f1295e2 does not exist
Dec 06 08:26:07 compute-0 sudo[413891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:07 compute-0 sudo[413891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:07 compute-0 sudo[413891]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:07.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:07 compute-0 sudo[413916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:26:07 compute-0 sudo[413916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:07 compute-0 sudo[413916]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:07 compute-0 sshd-session[412917]: Connection closed by 172.202.118.82 port 38778 [preauth]
Dec 06 08:26:07 compute-0 nova_compute[251992]: 2025-12-06 08:26:07.978 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:07 compute-0 nova_compute[251992]: 2025-12-06 08:26:07.979 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:07 compute-0 nova_compute[251992]: 2025-12-06 08:26:07.995 251996 DEBUG nova.compute.manager [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:26:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4017: 305 pgs: 305 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.093 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.094 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.104 251996 DEBUG nova.virt.hardware [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.105 251996 INFO nova.compute.claims [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.352 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:26:08 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:26:08 compute-0 ceph-mon[74339]: pgmap v4017: 305 pgs: 305 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Dec 06 08:26:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4278581340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:26:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3946932057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.792 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.800 251996 DEBUG nova.compute.provider_tree [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.817 251996 DEBUG nova.scheduler.client.report [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.840 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.841 251996 DEBUG nova.compute.manager [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:26:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:08.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.898 251996 DEBUG nova.compute.manager [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.899 251996 DEBUG nova.network.neutron [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.918 251996 INFO nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:26:08 compute-0 nova_compute[251992]: 2025-12-06 08:26:08.936 251996 DEBUG nova.compute.manager [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.079 251996 DEBUG nova.compute.manager [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.081 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.081 251996 INFO nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Creating image(s)
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.116 251996 DEBUG nova.storage.rbd_utils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.211 251996 DEBUG nova.storage.rbd_utils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.241 251996 DEBUG nova.storage.rbd_utils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.244 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.311 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.324 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.325 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.325 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.326 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "890368a5690a3dbdbb6650dcb9de9e2c9dc5acef" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.356 251996 DEBUG nova.storage.rbd_utils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.360 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.642 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/890368a5690a3dbdbb6650dcb9de9e2c9dc5acef cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.715 251996 DEBUG nova.storage.rbd_utils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] resizing rbd image cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 08:26:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3946932057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2751347995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/110990899' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:26:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/110990899' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:26:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:09.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:09 compute-0 nova_compute[251992]: 2025-12-06 08:26:09.832 251996 DEBUG nova.objects.instance [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lazy-loading 'migration_context' on Instance uuid cef7ff7a-e5a2-4be4-aa0b-d2db15217842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:26:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4018: 305 pgs: 305 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Dec 06 08:26:10 compute-0 nova_compute[251992]: 2025-12-06 08:26:10.268 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 08:26:10 compute-0 nova_compute[251992]: 2025-12-06 08:26:10.268 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Ensure instance console log exists: /var/lib/nova/instances/cef7ff7a-e5a2-4be4-aa0b-d2db15217842/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:26:10 compute-0 nova_compute[251992]: 2025-12-06 08:26:10.269 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:10 compute-0 nova_compute[251992]: 2025-12-06 08:26:10.269 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:10 compute-0 nova_compute[251992]: 2025-12-06 08:26:10.269 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:10 compute-0 ceph-mon[74339]: pgmap v4018: 305 pgs: 305 active+clean; 151 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.7 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Dec 06 08:26:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:10.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3155474753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1140979052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:11.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4019: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 5.3 MiB/s wr, 86 op/s
Dec 06 08:26:12 compute-0 nova_compute[251992]: 2025-12-06 08:26:12.559 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:12 compute-0 nova_compute[251992]: 2025-12-06 08:26:12.626 251996 DEBUG nova.network.neutron [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Successfully created port: fc42b635-dbcf-4627-a718-47b88146ac1c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:26:12 compute-0 nova_compute[251992]: 2025-12-06 08:26:12.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:12 compute-0 ceph-mon[74339]: pgmap v4019: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 5.3 MiB/s wr, 86 op/s
Dec 06 08:26:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:12.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:26:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:26:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:26:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:26:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:26:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:26:13 compute-0 nova_compute[251992]: 2025-12-06 08:26:13.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:13 compute-0 ceph-mgr[74630]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 08:26:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:13.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4020: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 5.3 MiB/s wr, 86 op/s
Dec 06 08:26:14 compute-0 nova_compute[251992]: 2025-12-06 08:26:14.312 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:14 compute-0 nova_compute[251992]: 2025-12-06 08:26:14.613 251996 DEBUG nova.network.neutron [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Successfully updated port: fc42b635-dbcf-4627-a718-47b88146ac1c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:26:14 compute-0 nova_compute[251992]: 2025-12-06 08:26:14.633 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "refresh_cache-cef7ff7a-e5a2-4be4-aa0b-d2db15217842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:26:14 compute-0 nova_compute[251992]: 2025-12-06 08:26:14.634 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquired lock "refresh_cache-cef7ff7a-e5a2-4be4-aa0b-d2db15217842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:26:14 compute-0 nova_compute[251992]: 2025-12-06 08:26:14.634 251996 DEBUG nova.network.neutron [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:26:14 compute-0 nova_compute[251992]: 2025-12-06 08:26:14.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:14 compute-0 nova_compute[251992]: 2025-12-06 08:26:14.713 251996 DEBUG nova.compute.manager [req-f443f305-3044-4b29-842a-29483e1d19e5 req-504131d1-ef23-40fc-b7dc-b6439de988c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Received event network-changed-fc42b635-dbcf-4627-a718-47b88146ac1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:26:14 compute-0 nova_compute[251992]: 2025-12-06 08:26:14.713 251996 DEBUG nova.compute.manager [req-f443f305-3044-4b29-842a-29483e1d19e5 req-504131d1-ef23-40fc-b7dc-b6439de988c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Refreshing instance network info cache due to event network-changed-fc42b635-dbcf-4627-a718-47b88146ac1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:26:14 compute-0 nova_compute[251992]: 2025-12-06 08:26:14.713 251996 DEBUG oslo_concurrency.lockutils [req-f443f305-3044-4b29-842a-29483e1d19e5 req-504131d1-ef23-40fc-b7dc-b6439de988c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-cef7ff7a-e5a2-4be4-aa0b-d2db15217842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:26:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:14.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:15 compute-0 nova_compute[251992]: 2025-12-06 08:26:15.118 251996 DEBUG nova.network.neutron [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:26:15 compute-0 ceph-mon[74339]: pgmap v4020: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 5.3 MiB/s wr, 86 op/s
Dec 06 08:26:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:15.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4021: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.3 MiB/s wr, 144 op/s
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.179 251996 DEBUG nova.network.neutron [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Updating instance_info_cache with network_info: [{"id": "fc42b635-dbcf-4627-a718-47b88146ac1c", "address": "fa:16:3e:c3:36:71", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc42b635-db", "ovs_interfaceid": "fc42b635-dbcf-4627-a718-47b88146ac1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.201 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Releasing lock "refresh_cache-cef7ff7a-e5a2-4be4-aa0b-d2db15217842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.202 251996 DEBUG nova.compute.manager [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Instance network_info: |[{"id": "fc42b635-dbcf-4627-a718-47b88146ac1c", "address": "fa:16:3e:c3:36:71", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc42b635-db", "ovs_interfaceid": "fc42b635-dbcf-4627-a718-47b88146ac1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.202 251996 DEBUG oslo_concurrency.lockutils [req-f443f305-3044-4b29-842a-29483e1d19e5 req-504131d1-ef23-40fc-b7dc-b6439de988c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-cef7ff7a-e5a2-4be4-aa0b-d2db15217842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.202 251996 DEBUG nova.network.neutron [req-f443f305-3044-4b29-842a-29483e1d19e5 req-504131d1-ef23-40fc-b7dc-b6439de988c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Refreshing network info cache for port fc42b635-dbcf-4627-a718-47b88146ac1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.205 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Start _get_guest_xml network_info=[{"id": "fc42b635-dbcf-4627-a718-47b88146ac1c", "address": "fa:16:3e:c3:36:71", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc42b635-db", "ovs_interfaceid": "fc42b635-dbcf-4627-a718-47b88146ac1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'guest_format': None, 'size': 0, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'boot_index': 0, 'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'image_id': '6efab05d-c7cf-4770-a5c3-c806a2739063'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.209 251996 WARNING nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.214 251996 DEBUG nova.virt.libvirt.host [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.214 251996 DEBUG nova.virt.libvirt.host [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.217 251996 DEBUG nova.virt.libvirt.host [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.217 251996 DEBUG nova.virt.libvirt.host [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.218 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.219 251996 DEBUG nova.virt.hardware [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T06:56:26Z,direct_url=<?>,disk_format='qcow2',id=6efab05d-c7cf-4770-a5c3-c806a2739063,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='5ed95c9b17ee4dcb83395850789304e6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T06:56:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.219 251996 DEBUG nova.virt.hardware [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.219 251996 DEBUG nova.virt.hardware [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.220 251996 DEBUG nova.virt.hardware [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.220 251996 DEBUG nova.virt.hardware [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.220 251996 DEBUG nova.virt.hardware [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.220 251996 DEBUG nova.virt.hardware [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.223 251996 DEBUG nova.virt.hardware [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.223 251996 DEBUG nova.virt.hardware [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.223 251996 DEBUG nova.virt.hardware [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.223 251996 DEBUG nova.virt.hardware [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.226 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:26:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.650 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.675 251996 DEBUG nova.storage.rbd_utils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:16 compute-0 nova_compute[251992]: 2025-12-06 08:26:16.679 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:16.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:26:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/920395903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.156 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.157 251996 DEBUG nova.virt.libvirt.vif [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-310190783',display_name='tempest-TestServerMultinode-server-310190783',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-310190783',id=223,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7102dcd5b58d4dec801a71dacc60eaaf',ramdisk_id='',reservation_id='r-8t7fjrcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1864301627',owner_user_name='tempest-TestServerMultinode-1864301627-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:26:08Z,user_data=None,user_id='4989b6252b64443aaec21b075dbc29d9',uuid=cef7ff7a-e5a2-4be4-aa0b-d2db15217842,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc42b635-dbcf-4627-a718-47b88146ac1c", "address": "fa:16:3e:c3:36:71", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc42b635-db", "ovs_interfaceid": "fc42b635-dbcf-4627-a718-47b88146ac1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.158 251996 DEBUG nova.network.os_vif_util [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converting VIF {"id": "fc42b635-dbcf-4627-a718-47b88146ac1c", "address": "fa:16:3e:c3:36:71", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc42b635-db", "ovs_interfaceid": "fc42b635-dbcf-4627-a718-47b88146ac1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.159 251996 DEBUG nova.network.os_vif_util [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:36:71,bridge_name='br-int',has_traffic_filtering=True,id=fc42b635-dbcf-4627-a718-47b88146ac1c,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc42b635-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.160 251996 DEBUG nova.objects.instance [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lazy-loading 'pci_devices' on Instance uuid cef7ff7a-e5a2-4be4-aa0b-d2db15217842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:26:17 compute-0 ceph-mon[74339]: pgmap v4021: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.3 MiB/s wr, 144 op/s
Dec 06 08:26:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1269390' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/920395903' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.528 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:26:17 compute-0 nova_compute[251992]:   <uuid>cef7ff7a-e5a2-4be4-aa0b-d2db15217842</uuid>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   <name>instance-000000df</name>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <nova:name>tempest-TestServerMultinode-server-310190783</nova:name>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:26:16</nova:creationTime>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <nova:user uuid="4989b6252b64443aaec21b075dbc29d9">tempest-TestServerMultinode-1864301627-project-admin</nova:user>
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <nova:project uuid="7102dcd5b58d4dec801a71dacc60eaaf">tempest-TestServerMultinode-1864301627</nova:project>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <nova:root type="image" uuid="6efab05d-c7cf-4770-a5c3-c806a2739063"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <nova:port uuid="fc42b635-dbcf-4627-a718-47b88146ac1c">
Dec 06 08:26:17 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <system>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <entry name="serial">cef7ff7a-e5a2-4be4-aa0b-d2db15217842</entry>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <entry name="uuid">cef7ff7a-e5a2-4be4-aa0b-d2db15217842</entry>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     </system>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   <os>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   </os>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   <features>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   </features>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk">
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       </source>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk.config">
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       </source>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:26:17 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:c3:36:71"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <target dev="tapfc42b635-db"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/cef7ff7a-e5a2-4be4-aa0b-d2db15217842/console.log" append="off"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <video>
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     </video>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:26:17 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:26:17 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:26:17 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:26:17 compute-0 nova_compute[251992]: </domain>
Dec 06 08:26:17 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.529 251996 DEBUG nova.compute.manager [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Preparing to wait for external event network-vif-plugged-fc42b635-dbcf-4627-a718-47b88146ac1c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.530 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.530 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.530 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.531 251996 DEBUG nova.virt.libvirt.vif [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-310190783',display_name='tempest-TestServerMultinode-server-310190783',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-310190783',id=223,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7102dcd5b58d4dec801a71dacc60eaaf',ramdisk_id='',reservation_id='r-8t7fjrcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1864301627',owner_user_name='tempest-TestServerMultinode-1864301627-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:26:08Z,user_data=None,user_id='4989b6252b64443aaec21b075dbc29d9',uuid=cef7ff7a-e5a2-4be4-aa0b-d2db15217842,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc42b635-dbcf-4627-a718-47b88146ac1c", "address": "fa:16:3e:c3:36:71", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc42b635-db", "ovs_interfaceid": "fc42b635-dbcf-4627-a718-47b88146ac1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.531 251996 DEBUG nova.network.os_vif_util [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converting VIF {"id": "fc42b635-dbcf-4627-a718-47b88146ac1c", "address": "fa:16:3e:c3:36:71", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc42b635-db", "ovs_interfaceid": "fc42b635-dbcf-4627-a718-47b88146ac1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.532 251996 DEBUG nova.network.os_vif_util [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:36:71,bridge_name='br-int',has_traffic_filtering=True,id=fc42b635-dbcf-4627-a718-47b88146ac1c,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc42b635-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.532 251996 DEBUG os_vif [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:36:71,bridge_name='br-int',has_traffic_filtering=True,id=fc42b635-dbcf-4627-a718-47b88146ac1c,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc42b635-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.533 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.533 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.533 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.542 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.542 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc42b635-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.542 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc42b635-db, col_values=(('external_ids', {'iface-id': 'fc42b635-dbcf-4627-a718-47b88146ac1c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:36:71', 'vm-uuid': 'cef7ff7a-e5a2-4be4-aa0b-d2db15217842'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.596 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:17 compute-0 NetworkManager[48965]: <info>  [1765009577.5995] manager: (tapfc42b635-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.599 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.604 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.605 251996 INFO os_vif [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:36:71,bridge_name='br-int',has_traffic_filtering=True,id=fc42b635-dbcf-4627-a718-47b88146ac1c,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc42b635-db')
Dec 06 08:26:17 compute-0 ovn_controller[147168]: 2025-12-06T08:26:17Z|00831|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.738 251996 DEBUG nova.network.neutron [req-f443f305-3044-4b29-842a-29483e1d19e5 req-504131d1-ef23-40fc-b7dc-b6439de988c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Updated VIF entry in instance network info cache for port fc42b635-dbcf-4627-a718-47b88146ac1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.739 251996 DEBUG nova.network.neutron [req-f443f305-3044-4b29-842a-29483e1d19e5 req-504131d1-ef23-40fc-b7dc-b6439de988c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Updating instance_info_cache with network_info: [{"id": "fc42b635-dbcf-4627-a718-47b88146ac1c", "address": "fa:16:3e:c3:36:71", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc42b635-db", "ovs_interfaceid": "fc42b635-dbcf-4627-a718-47b88146ac1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.761 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.762 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.762 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] No VIF found with MAC fa:16:3e:c3:36:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.763 251996 INFO nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Using config drive
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.798 251996 DEBUG nova.storage.rbd_utils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:17 compute-0 nova_compute[251992]: 2025-12-06 08:26:17.808 251996 DEBUG oslo_concurrency.lockutils [req-f443f305-3044-4b29-842a-29483e1d19e5 req-504131d1-ef23-40fc-b7dc-b6439de988c6 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-cef7ff7a-e5a2-4be4-aa0b-d2db15217842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:26:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:17.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4022: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 163 op/s
Dec 06 08:26:18 compute-0 nova_compute[251992]: 2025-12-06 08:26:18.293 251996 INFO nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Creating config drive at /var/lib/nova/instances/cef7ff7a-e5a2-4be4-aa0b-d2db15217842/disk.config
Dec 06 08:26:18 compute-0 nova_compute[251992]: 2025-12-06 08:26:18.299 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cef7ff7a-e5a2-4be4-aa0b-d2db15217842/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsi3cij7u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:18 compute-0 nova_compute[251992]: 2025-12-06 08:26:18.440 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cef7ff7a-e5a2-4be4-aa0b-d2db15217842/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsi3cij7u" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:18 compute-0 nova_compute[251992]: 2025-12-06 08:26:18.479 251996 DEBUG nova.storage.rbd_utils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] rbd image cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:26:18 compute-0 nova_compute[251992]: 2025-12-06 08:26:18.484 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cef7ff7a-e5a2-4be4-aa0b-d2db15217842/disk.config cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:18 compute-0 nova_compute[251992]: 2025-12-06 08:26:18.664 251996 DEBUG oslo_concurrency.processutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cef7ff7a-e5a2-4be4-aa0b-d2db15217842/disk.config cef7ff7a-e5a2-4be4-aa0b-d2db15217842_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:18 compute-0 nova_compute[251992]: 2025-12-06 08:26:18.665 251996 INFO nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Deleting local config drive /var/lib/nova/instances/cef7ff7a-e5a2-4be4-aa0b-d2db15217842/disk.config because it was imported into RBD.
Dec 06 08:26:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:26:18
Dec 06 08:26:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:26:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:26:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.control', '.rgw.root', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data']
Dec 06 08:26:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:26:18 compute-0 kernel: tapfc42b635-db: entered promiscuous mode
Dec 06 08:26:18 compute-0 NetworkManager[48965]: <info>  [1765009578.7352] manager: (tapfc42b635-db): new Tun device (/org/freedesktop/NetworkManager/Devices/399)
Dec 06 08:26:18 compute-0 nova_compute[251992]: 2025-12-06 08:26:18.745 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:18 compute-0 ovn_controller[147168]: 2025-12-06T08:26:18Z|00832|binding|INFO|Claiming lport fc42b635-dbcf-4627-a718-47b88146ac1c for this chassis.
Dec 06 08:26:18 compute-0 ovn_controller[147168]: 2025-12-06T08:26:18Z|00833|binding|INFO|fc42b635-dbcf-4627-a718-47b88146ac1c: Claiming fa:16:3e:c3:36:71 10.100.0.4
Dec 06 08:26:18 compute-0 nova_compute[251992]: 2025-12-06 08:26:18.749 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:18 compute-0 systemd-machined[212986]: New machine qemu-98-instance-000000df.
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.782 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:36:71 10.100.0.4'], port_security=['fa:16:3e:c3:36:71 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'cef7ff7a-e5a2-4be4-aa0b-d2db15217842', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7102dcd5b58d4dec801a71dacc60eaaf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '542d1f4a-a306-4d0a-9719-694ed1b1f413', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d50a2125-562d-4707-b67c-0f0d40fd3bbc, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=fc42b635-dbcf-4627-a718-47b88146ac1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.783 158118 INFO neutron.agent.ovn.metadata.agent [-] Port fc42b635-dbcf-4627-a718-47b88146ac1c in datapath 0cfa750a-9a18-4c24-bbf9-75517f0157ee bound to our chassis
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.784 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0cfa750a-9a18-4c24-bbf9-75517f0157ee
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.800 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[df82cf3a-409d-4ba5-b878-dac7ac107f31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.801 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0cfa750a-91 in ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.802 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0cfa750a-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.803 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb2eec5-11c3-4d11-b38d-9ef23d300f0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.803 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9826bb65-c598-4ba9-b993-e314ed4e6cab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:18 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-000000df.
Dec 06 08:26:18 compute-0 ovn_controller[147168]: 2025-12-06T08:26:18Z|00834|binding|INFO|Setting lport fc42b635-dbcf-4627-a718-47b88146ac1c ovn-installed in OVS
Dec 06 08:26:18 compute-0 ovn_controller[147168]: 2025-12-06T08:26:18Z|00835|binding|INFO|Setting lport fc42b635-dbcf-4627-a718-47b88146ac1c up in Southbound
Dec 06 08:26:18 compute-0 nova_compute[251992]: 2025-12-06 08:26:18.813 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.818 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa70308-0f5c-4c26-9f78-fe8f94273e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:18 compute-0 systemd-udevd[414271]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.832 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f698061e-e1d1-434e-8ed4-8309277dc525]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:18 compute-0 NetworkManager[48965]: <info>  [1765009578.8380] device (tapfc42b635-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:26:18 compute-0 NetworkManager[48965]: <info>  [1765009578.8394] device (tapfc42b635-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.861 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[0516877a-103f-467e-afaa-4879c8a99bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:18 compute-0 NetworkManager[48965]: <info>  [1765009578.8674] manager: (tap0cfa750a-90): new Veth device (/org/freedesktop/NetworkManager/Devices/400)
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.866 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b8caf2-02e1-4998-9298-2a5f582a0e9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:18.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.896 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9a0edb-6e45-4b8e-bd8f-2746760c5c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.899 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ea86a1-6038-4cf4-96c8-23042f8bbde1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:18 compute-0 NetworkManager[48965]: <info>  [1765009578.9221] device (tap0cfa750a-90): carrier: link connected
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.927 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[70dd294f-dee0-45f9-b705-46025bcffd5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.943 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f558f47f-bbf4-4692-bb8d-a1b08f40c466]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cfa750a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:93:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 985151, 'reachable_time': 15785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414302, 'error': None, 'target': 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.960 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[2d650fc4-3d12-4d2a-9556-9802fa294e77]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:93e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 985151, 'tstamp': 985151}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414303, 'error': None, 'target': 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:18 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:18.977 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c48d7d91-01ad-4263-b2d9-15da6bdac9a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0cfa750a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:93:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 985151, 'reachable_time': 15785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 414304, 'error': None, 'target': 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:19.006 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[3d31f706-3b9f-4468-964d-182a719feeff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.049 251996 DEBUG nova.compute.manager [req-cd3031a0-8971-4b7f-9c72-ff1871a3dbf5 req-9cb5be91-4afd-4e72-a980-f86666f6f01c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Received event network-vif-plugged-fc42b635-dbcf-4627-a718-47b88146ac1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.049 251996 DEBUG oslo_concurrency.lockutils [req-cd3031a0-8971-4b7f-9c72-ff1871a3dbf5 req-9cb5be91-4afd-4e72-a980-f86666f6f01c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.049 251996 DEBUG oslo_concurrency.lockutils [req-cd3031a0-8971-4b7f-9c72-ff1871a3dbf5 req-9cb5be91-4afd-4e72-a980-f86666f6f01c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.050 251996 DEBUG oslo_concurrency.lockutils [req-cd3031a0-8971-4b7f-9c72-ff1871a3dbf5 req-9cb5be91-4afd-4e72-a980-f86666f6f01c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.050 251996 DEBUG nova.compute.manager [req-cd3031a0-8971-4b7f-9c72-ff1871a3dbf5 req-9cb5be91-4afd-4e72-a980-f86666f6f01c 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Processing event network-vif-plugged-fc42b635-dbcf-4627-a718-47b88146ac1c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:19.078 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[8c925935-22f8-4294-833e-a6b750890d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:19.079 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cfa750a-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:19.079 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:19.080 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0cfa750a-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:19 compute-0 kernel: tap0cfa750a-90: entered promiscuous mode
Dec 06 08:26:19 compute-0 NetworkManager[48965]: <info>  [1765009579.0820] manager: (tap0cfa750a-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.082 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:19.085 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0cfa750a-90, col_values=(('external_ids', {'iface-id': '9c309117-cb51-4d66-b962-5f9b07ea29e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.085 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:19 compute-0 ovn_controller[147168]: 2025-12-06T08:26:19Z|00836|binding|INFO|Releasing lport 9c309117-cb51-4d66-b962-5f9b07ea29e2 from this chassis (sb_readonly=0)
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.087 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:19.088 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0cfa750a-9a18-4c24-bbf9-75517f0157ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0cfa750a-9a18-4c24-bbf9-75517f0157ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:19.089 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4125de-8da8-4c0b-911d-62910511d85b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:19.090 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-0cfa750a-9a18-4c24-bbf9-75517f0157ee
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/0cfa750a-9a18-4c24-bbf9-75517f0157ee.pid.haproxy
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID 0cfa750a-9a18-4c24-bbf9-75517f0157ee
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:26:19 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:19.090 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'env', 'PROCESS_TAG=haproxy-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0cfa750a-9a18-4c24-bbf9-75517f0157ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.100 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.239 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009579.2388473, cef7ff7a-e5a2-4be4-aa0b-d2db15217842 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.240 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] VM Started (Lifecycle Event)
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.242 251996 DEBUG nova.compute.manager [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.246 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.249 251996 INFO nova.virt.libvirt.driver [-] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Instance spawned successfully.
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.250 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.257 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:26:19 compute-0 ceph-mon[74339]: pgmap v4022: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 163 op/s
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.266 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.270 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.271 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.271 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.272 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.272 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.273 251996 DEBUG nova.virt.libvirt.driver [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.303 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.304 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009579.2398767, cef7ff7a-e5a2-4be4-aa0b-d2db15217842 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.304 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] VM Paused (Lifecycle Event)
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.314 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.339 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.342 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009579.244795, cef7ff7a-e5a2-4be4-aa0b-d2db15217842 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.342 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] VM Resumed (Lifecycle Event)
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.367 251996 INFO nova.compute.manager [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Took 10.29 seconds to spawn the instance on the hypervisor.
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.368 251996 DEBUG nova.compute.manager [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.370 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.376 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.435 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.472 251996 INFO nova.compute.manager [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Took 11.42 seconds to build instance.
Dec 06 08:26:19 compute-0 podman[414380]: 2025-12-06 08:26:19.476949067 +0000 UTC m=+0.051318086 container create d4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.506 251996 DEBUG oslo_concurrency.lockutils [None req-55cb74c3-df2f-426a-af3c-ab61dd489715 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:19 compute-0 systemd[1]: Started libpod-conmon-d4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c.scope.
Dec 06 08:26:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:26:19 compute-0 podman[414380]: 2025-12-06 08:26:19.448912296 +0000 UTC m=+0.023281325 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7c961c1618cc46c3c00aa78e9269b9e55681923944749be81c481b13f693a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:26:19 compute-0 podman[414380]: 2025-12-06 08:26:19.558159723 +0000 UTC m=+0.132528772 container init d4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:26:19 compute-0 podman[414380]: 2025-12-06 08:26:19.564063724 +0000 UTC m=+0.138432753 container start d4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 06 08:26:19 compute-0 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[414395]: [NOTICE]   (414401) : New worker (414403) forked
Dec 06 08:26:19 compute-0 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[414395]: [NOTICE]   (414401) : Loading success.
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:19 compute-0 nova_compute[251992]: 2025-12-06 08:26:19.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:26:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:19.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4023: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 151 op/s
Dec 06 08:26:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:20.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:21 compute-0 nova_compute[251992]: 2025-12-06 08:26:21.149 251996 DEBUG nova.compute.manager [req-eca3ef34-1e76-4629-b55d-a1ddf7cf49f9 req-1ba5e8ee-cfea-4153-a076-4ce62a4ad104 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Received event network-vif-plugged-fc42b635-dbcf-4627-a718-47b88146ac1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:26:21 compute-0 nova_compute[251992]: 2025-12-06 08:26:21.150 251996 DEBUG oslo_concurrency.lockutils [req-eca3ef34-1e76-4629-b55d-a1ddf7cf49f9 req-1ba5e8ee-cfea-4153-a076-4ce62a4ad104 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:21 compute-0 nova_compute[251992]: 2025-12-06 08:26:21.151 251996 DEBUG oslo_concurrency.lockutils [req-eca3ef34-1e76-4629-b55d-a1ddf7cf49f9 req-1ba5e8ee-cfea-4153-a076-4ce62a4ad104 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:21 compute-0 nova_compute[251992]: 2025-12-06 08:26:21.151 251996 DEBUG oslo_concurrency.lockutils [req-eca3ef34-1e76-4629-b55d-a1ddf7cf49f9 req-1ba5e8ee-cfea-4153-a076-4ce62a4ad104 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:21 compute-0 nova_compute[251992]: 2025-12-06 08:26:21.152 251996 DEBUG nova.compute.manager [req-eca3ef34-1e76-4629-b55d-a1ddf7cf49f9 req-1ba5e8ee-cfea-4153-a076-4ce62a4ad104 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] No waiting events found dispatching network-vif-plugged-fc42b635-dbcf-4627-a718-47b88146ac1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:26:21 compute-0 nova_compute[251992]: 2025-12-06 08:26:21.152 251996 WARNING nova.compute.manager [req-eca3ef34-1e76-4629-b55d-a1ddf7cf49f9 req-1ba5e8ee-cfea-4153-a076-4ce62a4ad104 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Received unexpected event network-vif-plugged-fc42b635-dbcf-4627-a718-47b88146ac1c for instance with vm_state active and task_state None.
Dec 06 08:26:21 compute-0 ceph-mon[74339]: pgmap v4023: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 151 op/s
Dec 06 08:26:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:21.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4024: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.8 MiB/s wr, 271 op/s
Dec 06 08:26:22 compute-0 nova_compute[251992]: 2025-12-06 08:26:22.599 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:22.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:22 compute-0 nova_compute[251992]: 2025-12-06 08:26:22.994 251996 DEBUG oslo_concurrency.lockutils [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:22 compute-0 nova_compute[251992]: 2025-12-06 08:26:22.996 251996 DEBUG oslo_concurrency.lockutils [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:22 compute-0 nova_compute[251992]: 2025-12-06 08:26:22.996 251996 DEBUG oslo_concurrency.lockutils [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:22 compute-0 nova_compute[251992]: 2025-12-06 08:26:22.996 251996 DEBUG oslo_concurrency.lockutils [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:22 compute-0 nova_compute[251992]: 2025-12-06 08:26:22.996 251996 DEBUG oslo_concurrency.lockutils [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:22 compute-0 nova_compute[251992]: 2025-12-06 08:26:22.997 251996 INFO nova.compute.manager [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Terminating instance
Dec 06 08:26:22 compute-0 nova_compute[251992]: 2025-12-06 08:26:22.998 251996 DEBUG nova.compute.manager [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:26:23 compute-0 kernel: tapfc42b635-db (unregistering): left promiscuous mode
Dec 06 08:26:23 compute-0 NetworkManager[48965]: <info>  [1765009583.0360] device (tapfc42b635-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:26:23 compute-0 ovn_controller[147168]: 2025-12-06T08:26:23Z|00837|binding|INFO|Releasing lport fc42b635-dbcf-4627-a718-47b88146ac1c from this chassis (sb_readonly=0)
Dec 06 08:26:23 compute-0 ovn_controller[147168]: 2025-12-06T08:26:23Z|00838|binding|INFO|Setting lport fc42b635-dbcf-4627-a718-47b88146ac1c down in Southbound
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.047 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:23 compute-0 ovn_controller[147168]: 2025-12-06T08:26:23Z|00839|binding|INFO|Removing iface tapfc42b635-db ovn-installed in OVS
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.050 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.058 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:36:71 10.100.0.4'], port_security=['fa:16:3e:c3:36:71 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'cef7ff7a-e5a2-4be4-aa0b-d2db15217842', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7102dcd5b58d4dec801a71dacc60eaaf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '542d1f4a-a306-4d0a-9719-694ed1b1f413', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d50a2125-562d-4707-b67c-0f0d40fd3bbc, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=fc42b635-dbcf-4627-a718-47b88146ac1c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.062 158118 INFO neutron.agent.ovn.metadata.agent [-] Port fc42b635-dbcf-4627-a718-47b88146ac1c in datapath 0cfa750a-9a18-4c24-bbf9-75517f0157ee unbound from our chassis
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.063 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0cfa750a-9a18-4c24-bbf9-75517f0157ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.064 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.065 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4f89848d-4a3f-44b6-9a67-0b3e6571f3ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.065 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee namespace which is not needed anymore
Dec 06 08:26:23 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000df.scope: Deactivated successfully.
Dec 06 08:26:23 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000df.scope: Consumed 4.213s CPU time.
Dec 06 08:26:23 compute-0 systemd-machined[212986]: Machine qemu-98-instance-000000df terminated.
Dec 06 08:26:23 compute-0 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[414395]: [NOTICE]   (414401) : haproxy version is 2.8.14-c23fe91
Dec 06 08:26:23 compute-0 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[414395]: [NOTICE]   (414401) : path to executable is /usr/sbin/haproxy
Dec 06 08:26:23 compute-0 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[414395]: [WARNING]  (414401) : Exiting Master process...
Dec 06 08:26:23 compute-0 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[414395]: [WARNING]  (414401) : Exiting Master process...
Dec 06 08:26:23 compute-0 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[414395]: [ALERT]    (414401) : Current worker (414403) exited with code 143 (Terminated)
Dec 06 08:26:23 compute-0 neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee[414395]: [WARNING]  (414401) : All workers exited. Exiting... (0)
Dec 06 08:26:23 compute-0 systemd[1]: libpod-d4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c.scope: Deactivated successfully.
Dec 06 08:26:23 compute-0 podman[414437]: 2025-12-06 08:26:23.198304502 +0000 UTC m=+0.042822664 container died d4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:26:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c-userdata-shm.mount: Deactivated successfully.
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.240 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb7c961c1618cc46c3c00aa78e9269b9e55681923944749be81c481b13f693a4-merged.mount: Deactivated successfully.
Dec 06 08:26:23 compute-0 podman[414437]: 2025-12-06 08:26:23.253928243 +0000 UTC m=+0.098446375 container cleanup d4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.253 251996 INFO nova.virt.libvirt.driver [-] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Instance destroyed successfully.
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.254 251996 DEBUG nova.objects.instance [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lazy-loading 'resources' on Instance uuid cef7ff7a-e5a2-4be4-aa0b-d2db15217842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:26:23 compute-0 systemd[1]: libpod-conmon-d4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c.scope: Deactivated successfully.
Dec 06 08:26:23 compute-0 ceph-mon[74339]: pgmap v4024: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.8 MiB/s wr, 271 op/s
Dec 06 08:26:23 compute-0 podman[414475]: 2025-12-06 08:26:23.314873239 +0000 UTC m=+0.040595964 container remove d4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.320 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9a00e1-a31b-42e4-b2e4-3012a58f12d3]: (4, ('Sat Dec  6 08:26:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee (d4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c)\nd4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c\nSat Dec  6 08:26:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee (d4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c)\nd4c95eab11cd96c4572d68c8fc16fa66becc58c6a79c164ccd719da2482a7e4c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.321 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dffcb50b-8e58-4cbd-ad98-9550b9aec108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.322 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0cfa750a-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.324 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:23 compute-0 kernel: tap0cfa750a-90: left promiscuous mode
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.341 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.343 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0c821f4d-88a4-4780-adca-0c30b5efb093]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.354 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ed4e15-c8ee-4c30-8b21-217915ed4fa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.355 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9579c7a9-a071-4925-963e-219d4bad5cbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.367 251996 DEBUG nova.virt.libvirt.vif [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:26:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-310190783',display_name='tempest-TestServerMultinode-server-310190783',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-310190783',id=223,image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:26:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7102dcd5b58d4dec801a71dacc60eaaf',ramdisk_id='',reservation_id='r-8t7fjrcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,admin,reader',image_base_image_ref='6efab05d-c7cf-4770-a5c3-c806a2739063',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1864301627',owner_user_name='tempest-TestServerMultinode-1864301627-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:26:19Z,user_data=None,user_id='4989b6252b64443aaec21b075dbc29d9',uuid=cef7ff7a-e5a2-4be4-aa0b-d2db15217842,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc42b635-dbcf-4627-a718-47b88146ac1c", "address": "fa:16:3e:c3:36:71", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc42b635-db", "ovs_interfaceid": "fc42b635-dbcf-4627-a718-47b88146ac1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.367 251996 DEBUG nova.network.os_vif_util [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converting VIF {"id": "fc42b635-dbcf-4627-a718-47b88146ac1c", "address": "fa:16:3e:c3:36:71", "network": {"id": "0cfa750a-9a18-4c24-bbf9-75517f0157ee", "bridge": "br-int", "label": "tempest-TestServerMultinode-1982834598-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41c7ac10745449b3a23d724093a203c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc42b635-db", "ovs_interfaceid": "fc42b635-dbcf-4627-a718-47b88146ac1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.368 251996 DEBUG nova.network.os_vif_util [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:36:71,bridge_name='br-int',has_traffic_filtering=True,id=fc42b635-dbcf-4627-a718-47b88146ac1c,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc42b635-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.368 251996 DEBUG os_vif [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:36:71,bridge_name='br-int',has_traffic_filtering=True,id=fc42b635-dbcf-4627-a718-47b88146ac1c,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc42b635-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.369 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[d509a1eb-c10f-432f-a3ee-1ac659cd416c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 985144, 'reachable_time': 27276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414492, 'error': None, 'target': 'ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.370 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.370 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc42b635-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d0cfa750a\x2d9a18\x2d4c24\x2dbbf9\x2d75517f0157ee.mount: Deactivated successfully.
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.373 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.373 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0cfa750a-9a18-4c24-bbf9-75517f0157ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:26:23 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:23.373 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2f3f2f-37b0-4d83-8958-8dea1fc70a9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.376 251996 INFO os_vif [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:36:71,bridge_name='br-int',has_traffic_filtering=True,id=fc42b635-dbcf-4627-a718-47b88146ac1c,network=Network(0cfa750a-9a18-4c24-bbf9-75517f0157ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc42b635-db')
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.710 251996 INFO nova.virt.libvirt.driver [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Deleting instance files /var/lib/nova/instances/cef7ff7a-e5a2-4be4-aa0b-d2db15217842_del
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.711 251996 INFO nova.virt.libvirt.driver [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Deletion of /var/lib/nova/instances/cef7ff7a-e5a2-4be4-aa0b-d2db15217842_del complete
Dec 06 08:26:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:23.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:26:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:26:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:26:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:26:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.881 251996 INFO nova.compute.manager [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Took 0.88 seconds to destroy the instance on the hypervisor.
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.882 251996 DEBUG oslo.service.loopingcall [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.882 251996 DEBUG nova.compute.manager [-] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.882 251996 DEBUG nova.network.neutron [-] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.965 251996 DEBUG nova.compute.manager [req-15fe975f-4937-41dc-8d11-598c4c89c226 req-4d2bd953-8f98-4ced-b874-53340db38706 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Received event network-vif-unplugged-fc42b635-dbcf-4627-a718-47b88146ac1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.965 251996 DEBUG oslo_concurrency.lockutils [req-15fe975f-4937-41dc-8d11-598c4c89c226 req-4d2bd953-8f98-4ced-b874-53340db38706 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.965 251996 DEBUG oslo_concurrency.lockutils [req-15fe975f-4937-41dc-8d11-598c4c89c226 req-4d2bd953-8f98-4ced-b874-53340db38706 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.966 251996 DEBUG oslo_concurrency.lockutils [req-15fe975f-4937-41dc-8d11-598c4c89c226 req-4d2bd953-8f98-4ced-b874-53340db38706 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.966 251996 DEBUG nova.compute.manager [req-15fe975f-4937-41dc-8d11-598c4c89c226 req-4d2bd953-8f98-4ced-b874-53340db38706 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] No waiting events found dispatching network-vif-unplugged-fc42b635-dbcf-4627-a718-47b88146ac1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:26:23 compute-0 nova_compute[251992]: 2025-12-06 08:26:23.966 251996 DEBUG nova.compute.manager [req-15fe975f-4937-41dc-8d11-598c4c89c226 req-4d2bd953-8f98-4ced-b874-53340db38706 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Received event network-vif-unplugged-fc42b635-dbcf-4627-a718-47b88146ac1c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:26:24 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Dec 06 08:26:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4025: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 25 KiB/s wr, 197 op/s
Dec 06 08:26:24 compute-0 nova_compute[251992]: 2025-12-06 08:26:24.316 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:24 compute-0 nova_compute[251992]: 2025-12-06 08:26:24.710 251996 DEBUG nova.network.neutron [-] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:26:24 compute-0 nova_compute[251992]: 2025-12-06 08:26:24.732 251996 INFO nova.compute.manager [-] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Took 0.85 seconds to deallocate network for instance.
Dec 06 08:26:24 compute-0 nova_compute[251992]: 2025-12-06 08:26:24.773 251996 DEBUG oslo_concurrency.lockutils [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:24 compute-0 nova_compute[251992]: 2025-12-06 08:26:24.773 251996 DEBUG oslo_concurrency.lockutils [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:24 compute-0 nova_compute[251992]: 2025-12-06 08:26:24.784 251996 DEBUG nova.compute.manager [req-e7b036c7-2208-4ded-905b-1d5a3fb9608f req-e89724a9-cdf2-424e-9fd6-00c5d6d050ba 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Received event network-vif-deleted-fc42b635-dbcf-4627-a718-47b88146ac1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:26:24 compute-0 nova_compute[251992]: 2025-12-06 08:26:24.823 251996 DEBUG oslo_concurrency.processutils [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:26:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:24.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:26:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1229264166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:25 compute-0 nova_compute[251992]: 2025-12-06 08:26:25.264 251996 DEBUG oslo_concurrency.processutils [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:26:25 compute-0 nova_compute[251992]: 2025-12-06 08:26:25.271 251996 DEBUG nova.compute.provider_tree [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:26:25 compute-0 nova_compute[251992]: 2025-12-06 08:26:25.292 251996 DEBUG nova.scheduler.client.report [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:26:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:25 compute-0 ceph-mon[74339]: pgmap v4025: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.2 MiB/s rd, 25 KiB/s wr, 197 op/s
Dec 06 08:26:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1229264166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:25 compute-0 nova_compute[251992]: 2025-12-06 08:26:25.314 251996 DEBUG oslo_concurrency.lockutils [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:25 compute-0 nova_compute[251992]: 2025-12-06 08:26:25.357 251996 INFO nova.scheduler.client.report [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Deleted allocations for instance cef7ff7a-e5a2-4be4-aa0b-d2db15217842
Dec 06 08:26:25 compute-0 nova_compute[251992]: 2025-12-06 08:26:25.420 251996 DEBUG oslo_concurrency.lockutils [None req-12c9efac-3efe-49fa-a109-524225f29a63 4989b6252b64443aaec21b075dbc29d9 7102dcd5b58d4dec801a71dacc60eaaf - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:25.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:26 compute-0 nova_compute[251992]: 2025-12-06 08:26:26.084 251996 DEBUG nova.compute.manager [req-bc2347b2-ebe6-4efb-a73d-959907657b82 req-1f6420a0-5a61-43ad-b6fd-9bae619d13b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Received event network-vif-plugged-fc42b635-dbcf-4627-a718-47b88146ac1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:26:26 compute-0 nova_compute[251992]: 2025-12-06 08:26:26.085 251996 DEBUG oslo_concurrency.lockutils [req-bc2347b2-ebe6-4efb-a73d-959907657b82 req-1f6420a0-5a61-43ad-b6fd-9bae619d13b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:26:26 compute-0 nova_compute[251992]: 2025-12-06 08:26:26.085 251996 DEBUG oslo_concurrency.lockutils [req-bc2347b2-ebe6-4efb-a73d-959907657b82 req-1f6420a0-5a61-43ad-b6fd-9bae619d13b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:26:26 compute-0 nova_compute[251992]: 2025-12-06 08:26:26.085 251996 DEBUG oslo_concurrency.lockutils [req-bc2347b2-ebe6-4efb-a73d-959907657b82 req-1f6420a0-5a61-43ad-b6fd-9bae619d13b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "cef7ff7a-e5a2-4be4-aa0b-d2db15217842-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:26:26 compute-0 nova_compute[251992]: 2025-12-06 08:26:26.086 251996 DEBUG nova.compute.manager [req-bc2347b2-ebe6-4efb-a73d-959907657b82 req-1f6420a0-5a61-43ad-b6fd-9bae619d13b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] No waiting events found dispatching network-vif-plugged-fc42b635-dbcf-4627-a718-47b88146ac1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:26:26 compute-0 nova_compute[251992]: 2025-12-06 08:26:26.086 251996 WARNING nova.compute.manager [req-bc2347b2-ebe6-4efb-a73d-959907657b82 req-1f6420a0-5a61-43ad-b6fd-9bae619d13b7 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Received unexpected event network-vif-plugged-fc42b635-dbcf-4627-a718-47b88146ac1c for instance with vm_state deleted and task_state None.
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4026: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.9 MiB/s wr, 268 op/s
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0032582981458217013 of space, bias 1.0, pg target 0.9774894437465104 quantized to 32 (current 32)
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:26:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:26:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:26.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:27 compute-0 sudo[414535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:27 compute-0 sudo[414535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:27 compute-0 sudo[414535]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:27 compute-0 sudo[414560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:27 compute-0 sudo[414560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:27 compute-0 sudo[414560]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:27 compute-0 ceph-mon[74339]: pgmap v4026: 305 pgs: 305 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.9 MiB/s wr, 268 op/s
Dec 06 08:26:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:26:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:26:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:26:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:26:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:26:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:27.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4027: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Dec 06 08:26:28 compute-0 nova_compute[251992]: 2025-12-06 08:26:28.372 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:28.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:29 compute-0 nova_compute[251992]: 2025-12-06 08:26:29.318 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:29 compute-0 ceph-mon[74339]: pgmap v4027: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Dec 06 08:26:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3779432085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:29.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4028: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 225 op/s
Dec 06 08:26:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:30 compute-0 podman[414588]: 2025-12-06 08:26:30.454076643 +0000 UTC m=+0.108214850 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 08:26:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:30.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:31 compute-0 ceph-mon[74339]: pgmap v4028: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 225 op/s
Dec 06 08:26:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:31.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4029: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 272 op/s
Dec 06 08:26:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:32.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:33 compute-0 nova_compute[251992]: 2025-12-06 08:26:33.375 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:33 compute-0 ceph-mon[74339]: pgmap v4029: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 272 op/s
Dec 06 08:26:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:33.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4030: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 923 KiB/s rd, 2.1 MiB/s wr, 152 op/s
Dec 06 08:26:34 compute-0 nova_compute[251992]: 2025-12-06 08:26:34.321 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:34 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/992866140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:34.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:35 compute-0 ceph-mon[74339]: pgmap v4030: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 923 KiB/s rd, 2.1 MiB/s wr, 152 op/s
Dec 06 08:26:35 compute-0 nova_compute[251992]: 2025-12-06 08:26:35.602 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:35.602 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=108, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=107) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:26:35 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:35.603 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:26:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:35.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4031: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 930 KiB/s rd, 2.1 MiB/s wr, 160 op/s
Dec 06 08:26:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:36.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:37 compute-0 sshd-session[414619]: error: kex_exchange_identification: read: Connection reset by peer
Dec 06 08:26:37 compute-0 sshd-session[414619]: Connection reset by 45.140.17.97 port 6807
Dec 06 08:26:37 compute-0 ceph-mon[74339]: pgmap v4031: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 930 KiB/s rd, 2.1 MiB/s wr, 160 op/s
Dec 06 08:26:37 compute-0 podman[414621]: 2025-12-06 08:26:37.389441132 +0000 UTC m=+0.047481302 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 08:26:37 compute-0 podman[414622]: 2025-12-06 08:26:37.402930268 +0000 UTC m=+0.059908659 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 08:26:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:37.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4032: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 132 KiB/s rd, 319 KiB/s wr, 89 op/s
Dec 06 08:26:38 compute-0 nova_compute[251992]: 2025-12-06 08:26:38.252 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009583.2508564, cef7ff7a-e5a2-4be4-aa0b-d2db15217842 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:26:38 compute-0 nova_compute[251992]: 2025-12-06 08:26:38.252 251996 INFO nova.compute.manager [-] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] VM Stopped (Lifecycle Event)
Dec 06 08:26:38 compute-0 nova_compute[251992]: 2025-12-06 08:26:38.278 251996 DEBUG nova.compute.manager [None req-e2aa05e1-34f5-45e2-9cfe-11eb50ce23e3 - - - - - -] [instance: cef7ff7a-e5a2-4be4-aa0b-d2db15217842] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:26:38 compute-0 nova_compute[251992]: 2025-12-06 08:26:38.377 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:38 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:26:38.605 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '108'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:26:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:38.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:38 compute-0 nova_compute[251992]: 2025-12-06 08:26:38.942 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:39 compute-0 ceph-mon[74339]: pgmap v4032: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 132 KiB/s rd, 319 KiB/s wr, 89 op/s
Dec 06 08:26:39 compute-0 nova_compute[251992]: 2025-12-06 08:26:39.324 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:39.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4033: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 15 KiB/s wr, 54 op/s
Dec 06 08:26:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:40.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:41 compute-0 ceph-mon[74339]: pgmap v4033: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 15 KiB/s wr, 54 op/s
Dec 06 08:26:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:41.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4034: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 15 KiB/s wr, 54 op/s
Dec 06 08:26:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:42.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:26:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:26:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:26:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:26:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:26:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:26:43 compute-0 nova_compute[251992]: 2025-12-06 08:26:43.378 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:43 compute-0 ceph-mon[74339]: pgmap v4034: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 15 KiB/s wr, 54 op/s
Dec 06 08:26:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:43.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4035: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Dec 06 08:26:44 compute-0 nova_compute[251992]: 2025-12-06 08:26:44.326 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:44.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:45 compute-0 ceph-mon[74339]: pgmap v4035: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Dec 06 08:26:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:26:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:45.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:26:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4036: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Dec 06 08:26:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:46.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:47 compute-0 sudo[414662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:47 compute-0 sudo[414662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:47 compute-0 sudo[414662]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:47 compute-0 ceph-mon[74339]: pgmap v4036: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Dec 06 08:26:47 compute-0 sudo[414687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:26:47 compute-0 sudo[414687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:26:47 compute-0 sudo[414687]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:47.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4037: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:48 compute-0 nova_compute[251992]: 2025-12-06 08:26:48.379 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2275139996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:48.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:49 compute-0 nova_compute[251992]: 2025-12-06 08:26:49.329 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:49 compute-0 ceph-mon[74339]: pgmap v4037: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3575716990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:26:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:49.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4038: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:26:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:50.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:26:51 compute-0 ceph-mon[74339]: pgmap v4038: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:51.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4039: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:52.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:53 compute-0 nova_compute[251992]: 2025-12-06 08:26:53.381 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:53 compute-0 ceph-mon[74339]: pgmap v4039: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:53.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4040: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:54 compute-0 nova_compute[251992]: 2025-12-06 08:26:54.388 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:54.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:26:55 compute-0 ceph-mon[74339]: pgmap v4040: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:55.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4041: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:56.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:57.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:57 compute-0 ceph-mon[74339]: pgmap v4041: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4042: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:58 compute-0 nova_compute[251992]: 2025-12-06 08:26:58.381 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:26:58.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:26:58 compute-0 ceph-mon[74339]: pgmap v4042: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:26:59 compute-0 nova_compute[251992]: 2025-12-06 08:26:59.390 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:26:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:26:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:26:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:26:59.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4043: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:00 compute-0 nova_compute[251992]: 2025-12-06 08:27:00.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:00.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:01 compute-0 ceph-mon[74339]: pgmap v4043: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:01 compute-0 podman[414719]: 2025-12-06 08:27:01.444800812 +0000 UTC m=+0.103251386 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 08:27:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:01.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4044: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:02.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:03 compute-0 ceph-mon[74339]: pgmap v4044: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:03 compute-0 nova_compute[251992]: 2025-12-06 08:27:03.383 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:03 compute-0 nova_compute[251992]: 2025-12-06 08:27:03.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:03 compute-0 nova_compute[251992]: 2025-12-06 08:27:03.685 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:27:03 compute-0 nova_compute[251992]: 2025-12-06 08:27:03.685 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:27:03 compute-0 nova_compute[251992]: 2025-12-06 08:27:03.685 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:27:03 compute-0 nova_compute[251992]: 2025-12-06 08:27:03.686 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:27:03 compute-0 nova_compute[251992]: 2025-12-06 08:27:03.686 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:27:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:03.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:27:03.902 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:27:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:27:03.903 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:27:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:27:03.903 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:27:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4045: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:27:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3503660495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:04 compute-0 nova_compute[251992]: 2025-12-06 08:27:04.133 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:27:04 compute-0 nova_compute[251992]: 2025-12-06 08:27:04.286 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:27:04 compute-0 nova_compute[251992]: 2025-12-06 08:27:04.287 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4110MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:27:04 compute-0 nova_compute[251992]: 2025-12-06 08:27:04.287 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:27:04 compute-0 nova_compute[251992]: 2025-12-06 08:27:04.287 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:27:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3503660495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:04 compute-0 nova_compute[251992]: 2025-12-06 08:27:04.391 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:04 compute-0 nova_compute[251992]: 2025-12-06 08:27:04.536 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:27:04 compute-0 nova_compute[251992]: 2025-12-06 08:27:04.536 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:27:04 compute-0 nova_compute[251992]: 2025-12-06 08:27:04.614 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:27:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:04.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:27:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109562262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:05 compute-0 nova_compute[251992]: 2025-12-06 08:27:05.040 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:27:05 compute-0 nova_compute[251992]: 2025-12-06 08:27:05.046 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:27:05 compute-0 nova_compute[251992]: 2025-12-06 08:27:05.062 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:27:05 compute-0 nova_compute[251992]: 2025-12-06 08:27:05.086 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:27:05 compute-0 nova_compute[251992]: 2025-12-06 08:27:05.086 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:27:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:05 compute-0 ceph-mon[74339]: pgmap v4045: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2109562262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:05.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:06 compute-0 nova_compute[251992]: 2025-12-06 08:27:06.087 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4046: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:06.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:07 compute-0 sudo[414792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:07 compute-0 sudo[414792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:07 compute-0 sudo[414792]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:07 compute-0 sudo[414829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:07 compute-0 sudo[414829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:07 compute-0 podman[414816]: 2025-12-06 08:27:07.528790601 +0000 UTC m=+0.081801784 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:27:07 compute-0 sudo[414829]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:07 compute-0 podman[414817]: 2025-12-06 08:27:07.532971664 +0000 UTC m=+0.081952637 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 08:27:07 compute-0 ceph-mon[74339]: pgmap v4046: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1869144123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:07 compute-0 nova_compute[251992]: 2025-12-06 08:27:07.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:07.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4047: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:08 compute-0 sudo[414881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:08 compute-0 sudo[414881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:08 compute-0 sudo[414881]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:08 compute-0 sudo[414906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:27:08 compute-0 sudo[414906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:08 compute-0 sudo[414906]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:08 compute-0 sudo[414931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:08 compute-0 sudo[414931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:08 compute-0 sudo[414931]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:08 compute-0 nova_compute[251992]: 2025-12-06 08:27:08.384 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:08 compute-0 sudo[414956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:27:08 compute-0 sudo[414956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:08 compute-0 ceph-mon[74339]: pgmap v4047: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:08 compute-0 sudo[414956]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:08.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:08 compute-0 sudo[415012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:08 compute-0 sudo[415012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:08 compute-0 sudo[415012]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:09 compute-0 sudo[415037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:27:09 compute-0 sudo[415037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:09 compute-0 sudo[415037]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:09 compute-0 sudo[415062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:09 compute-0 sudo[415062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:09 compute-0 sudo[415062]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:09 compute-0 sudo[415088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Dec 06 08:27:09 compute-0 sudo[415088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:09 compute-0 nova_compute[251992]: 2025-12-06 08:27:09.393 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:09 compute-0 sudo[415088]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:27:09 compute-0 nova_compute[251992]: 2025-12-06 08:27:09.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:09 compute-0 nova_compute[251992]: 2025-12-06 08:27:09.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:27:09 compute-0 nova_compute[251992]: 2025-12-06 08:27:09.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:27:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:09 compute-0 nova_compute[251992]: 2025-12-06 08:27:09.676 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:27:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:27:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:27:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:27:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:27:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:27:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:27:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 43cb6e0c-3114-4368-b2d5-fb91519884a5 does not exist
Dec 06 08:27:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev bf4131bc-f05c-4dd8-ba86-50a19351e6f9 does not exist
Dec 06 08:27:09 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7f6615f8-57f5-46e2-aaad-91cf19e93937 does not exist
Dec 06 08:27:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:27:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:27:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:27:09 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:27:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:27:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:27:09 compute-0 sudo[415132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:09 compute-0 sudo[415132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:09 compute-0 sudo[415132]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3898552235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1091282979' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:27:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1091282979' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:27:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:27:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:27:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:27:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:27:09 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:27:09 compute-0 sudo[415157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:27:09 compute-0 sudo[415157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:09 compute-0 sudo[415157]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:09.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:09 compute-0 sudo[415182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:09 compute-0 sudo[415182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:09 compute-0 sudo[415182]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:10 compute-0 sudo[415207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:27:10 compute-0 sudo[415207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4048: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:10 compute-0 podman[415272]: 2025-12-06 08:27:10.329268858 +0000 UTC m=+0.038881807 container create f7ed3e3e95e0929e1479c1436547f5a6a62039aa2b77e76003de1071e3933683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 06 08:27:10 compute-0 systemd[1]: Started libpod-conmon-f7ed3e3e95e0929e1479c1436547f5a6a62039aa2b77e76003de1071e3933683.scope.
Dec 06 08:27:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:27:10 compute-0 podman[415272]: 2025-12-06 08:27:10.313688696 +0000 UTC m=+0.023301665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:27:10 compute-0 podman[415272]: 2025-12-06 08:27:10.411237396 +0000 UTC m=+0.120850355 container init f7ed3e3e95e0929e1479c1436547f5a6a62039aa2b77e76003de1071e3933683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Dec 06 08:27:10 compute-0 podman[415272]: 2025-12-06 08:27:10.417699151 +0000 UTC m=+0.127312100 container start f7ed3e3e95e0929e1479c1436547f5a6a62039aa2b77e76003de1071e3933683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ganguly, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 06 08:27:10 compute-0 podman[415272]: 2025-12-06 08:27:10.421170875 +0000 UTC m=+0.130783834 container attach f7ed3e3e95e0929e1479c1436547f5a6a62039aa2b77e76003de1071e3933683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:27:10 compute-0 gifted_ganguly[415288]: 167 167
Dec 06 08:27:10 compute-0 systemd[1]: libpod-f7ed3e3e95e0929e1479c1436547f5a6a62039aa2b77e76003de1071e3933683.scope: Deactivated successfully.
Dec 06 08:27:10 compute-0 podman[415272]: 2025-12-06 08:27:10.423392626 +0000 UTC m=+0.133005575 container died f7ed3e3e95e0929e1479c1436547f5a6a62039aa2b77e76003de1071e3933683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ganguly, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 08:27:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-d745a2eee9441cfae2cbf622bc00753b73b4011e39008766a72d4e9bce7f8f86-merged.mount: Deactivated successfully.
Dec 06 08:27:10 compute-0 podman[415272]: 2025-12-06 08:27:10.462159109 +0000 UTC m=+0.171772048 container remove f7ed3e3e95e0929e1479c1436547f5a6a62039aa2b77e76003de1071e3933683 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ganguly, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:27:10 compute-0 systemd[1]: libpod-conmon-f7ed3e3e95e0929e1479c1436547f5a6a62039aa2b77e76003de1071e3933683.scope: Deactivated successfully.
Dec 06 08:27:10 compute-0 podman[415312]: 2025-12-06 08:27:10.630265136 +0000 UTC m=+0.060252409 container create eab8bb630f6df5b49e4b155efe2b1da6a09fcdd119bb6d66ddfa831e3d3d636f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lamport, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 06 08:27:10 compute-0 systemd[1]: Started libpod-conmon-eab8bb630f6df5b49e4b155efe2b1da6a09fcdd119bb6d66ddfa831e3d3d636f.scope.
Dec 06 08:27:10 compute-0 podman[415312]: 2025-12-06 08:27:10.592767547 +0000 UTC m=+0.022754850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:27:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfd107aceeb6b1badfc6a6aa5039d70a7fb4cde392e49c4750c2f30332b3c8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfd107aceeb6b1badfc6a6aa5039d70a7fb4cde392e49c4750c2f30332b3c8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfd107aceeb6b1badfc6a6aa5039d70a7fb4cde392e49c4750c2f30332b3c8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfd107aceeb6b1badfc6a6aa5039d70a7fb4cde392e49c4750c2f30332b3c8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcfd107aceeb6b1badfc6a6aa5039d70a7fb4cde392e49c4750c2f30332b3c8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:10 compute-0 podman[415312]: 2025-12-06 08:27:10.713914098 +0000 UTC m=+0.143901401 container init eab8bb630f6df5b49e4b155efe2b1da6a09fcdd119bb6d66ddfa831e3d3d636f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:27:10 compute-0 podman[415312]: 2025-12-06 08:27:10.721708859 +0000 UTC m=+0.151696142 container start eab8bb630f6df5b49e4b155efe2b1da6a09fcdd119bb6d66ddfa831e3d3d636f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lamport, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:27:10 compute-0 podman[415312]: 2025-12-06 08:27:10.724563147 +0000 UTC m=+0.154550420 container attach eab8bb630f6df5b49e4b155efe2b1da6a09fcdd119bb6d66ddfa831e3d3d636f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:27:10 compute-0 ceph-mon[74339]: pgmap v4048: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:10.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:11 compute-0 nifty_lamport[415328]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:27:11 compute-0 nifty_lamport[415328]: --> relative data size: 1.0
Dec 06 08:27:11 compute-0 nifty_lamport[415328]: --> All data devices are unavailable
Dec 06 08:27:11 compute-0 systemd[1]: libpod-eab8bb630f6df5b49e4b155efe2b1da6a09fcdd119bb6d66ddfa831e3d3d636f.scope: Deactivated successfully.
Dec 06 08:27:11 compute-0 podman[415344]: 2025-12-06 08:27:11.599125216 +0000 UTC m=+0.029900533 container died eab8bb630f6df5b49e4b155efe2b1da6a09fcdd119bb6d66ddfa831e3d3d636f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:27:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcfd107aceeb6b1badfc6a6aa5039d70a7fb4cde392e49c4750c2f30332b3c8f-merged.mount: Deactivated successfully.
Dec 06 08:27:11 compute-0 podman[415344]: 2025-12-06 08:27:11.650867731 +0000 UTC m=+0.081642878 container remove eab8bb630f6df5b49e4b155efe2b1da6a09fcdd119bb6d66ddfa831e3d3d636f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 08:27:11 compute-0 systemd[1]: libpod-conmon-eab8bb630f6df5b49e4b155efe2b1da6a09fcdd119bb6d66ddfa831e3d3d636f.scope: Deactivated successfully.
Dec 06 08:27:11 compute-0 sudo[415207]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:11 compute-0 sudo[415359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:11 compute-0 sudo[415359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:11 compute-0 sudo[415359]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:11 compute-0 sudo[415384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:27:11 compute-0 sudo[415384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:11 compute-0 sudo[415384]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:11 compute-0 sudo[415409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:11 compute-0 sudo[415409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:11 compute-0 sudo[415409]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:11 compute-0 sudo[415434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:27:11 compute-0 sudo[415434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:11.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4049: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:12 compute-0 podman[415498]: 2025-12-06 08:27:12.258870698 +0000 UTC m=+0.041199600 container create afc4793e98cd6bd78c54570f6e8a72b24e0b14af8a6c301d5074fa14cd610756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ride, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:27:12 compute-0 systemd[1]: Started libpod-conmon-afc4793e98cd6bd78c54570f6e8a72b24e0b14af8a6c301d5074fa14cd610756.scope.
Dec 06 08:27:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:27:12 compute-0 podman[415498]: 2025-12-06 08:27:12.329301272 +0000 UTC m=+0.111630224 container init afc4793e98cd6bd78c54570f6e8a72b24e0b14af8a6c301d5074fa14cd610756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 08:27:12 compute-0 podman[415498]: 2025-12-06 08:27:12.238578227 +0000 UTC m=+0.020907179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:27:12 compute-0 podman[415498]: 2025-12-06 08:27:12.336332463 +0000 UTC m=+0.118661365 container start afc4793e98cd6bd78c54570f6e8a72b24e0b14af8a6c301d5074fa14cd610756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 08:27:12 compute-0 practical_ride[415515]: 167 167
Dec 06 08:27:12 compute-0 systemd[1]: libpod-afc4793e98cd6bd78c54570f6e8a72b24e0b14af8a6c301d5074fa14cd610756.scope: Deactivated successfully.
Dec 06 08:27:12 compute-0 podman[415498]: 2025-12-06 08:27:12.350126008 +0000 UTC m=+0.132454940 container attach afc4793e98cd6bd78c54570f6e8a72b24e0b14af8a6c301d5074fa14cd610756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ride, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:27:12 compute-0 podman[415498]: 2025-12-06 08:27:12.351278279 +0000 UTC m=+0.133607181 container died afc4793e98cd6bd78c54570f6e8a72b24e0b14af8a6c301d5074fa14cd610756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 08:27:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4be24c7f56bba0152a6700ae56d8e93f7ac793e95e027d6bdaf19d5c2f7a035-merged.mount: Deactivated successfully.
Dec 06 08:27:12 compute-0 podman[415498]: 2025-12-06 08:27:12.382869017 +0000 UTC m=+0.165197919 container remove afc4793e98cd6bd78c54570f6e8a72b24e0b14af8a6c301d5074fa14cd610756 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ride, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Dec 06 08:27:12 compute-0 systemd[1]: libpod-conmon-afc4793e98cd6bd78c54570f6e8a72b24e0b14af8a6c301d5074fa14cd610756.scope: Deactivated successfully.
Dec 06 08:27:12 compute-0 podman[415541]: 2025-12-06 08:27:12.551530199 +0000 UTC m=+0.052367013 container create 52f84e70cae831958101a7239a35ad02013c20b12b44affd1ad4fb6ff6463870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:27:12 compute-0 systemd[1]: Started libpod-conmon-52f84e70cae831958101a7239a35ad02013c20b12b44affd1ad4fb6ff6463870.scope.
Dec 06 08:27:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:27:12 compute-0 podman[415541]: 2025-12-06 08:27:12.524388342 +0000 UTC m=+0.025225196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38125c061376a208d12e7b5a24f191da04ab72a777ac0b89a474965d1781f48a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38125c061376a208d12e7b5a24f191da04ab72a777ac0b89a474965d1781f48a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38125c061376a208d12e7b5a24f191da04ab72a777ac0b89a474965d1781f48a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38125c061376a208d12e7b5a24f191da04ab72a777ac0b89a474965d1781f48a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:12 compute-0 podman[415541]: 2025-12-06 08:27:12.633927957 +0000 UTC m=+0.134764751 container init 52f84e70cae831958101a7239a35ad02013c20b12b44affd1ad4fb6ff6463870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mclean, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 08:27:12 compute-0 podman[415541]: 2025-12-06 08:27:12.64102236 +0000 UTC m=+0.141859134 container start 52f84e70cae831958101a7239a35ad02013c20b12b44affd1ad4fb6ff6463870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mclean, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:27:12 compute-0 podman[415541]: 2025-12-06 08:27:12.644204297 +0000 UTC m=+0.145041071 container attach 52f84e70cae831958101a7239a35ad02013c20b12b44affd1ad4fb6ff6463870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:27:12 compute-0 nova_compute[251992]: 2025-12-06 08:27:12.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:12.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:27:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:27:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:27:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:27:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:27:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:27:13 compute-0 ceph-mon[74339]: pgmap v4049: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:13 compute-0 nova_compute[251992]: 2025-12-06 08:27:13.387 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]: {
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:     "0": [
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:         {
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             "devices": [
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "/dev/loop3"
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             ],
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             "lv_name": "ceph_lv0",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             "lv_size": "7511998464",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             "name": "ceph_lv0",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             "tags": {
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "ceph.cluster_name": "ceph",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "ceph.crush_device_class": "",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "ceph.encrypted": "0",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "ceph.osd_id": "0",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "ceph.type": "block",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:                 "ceph.vdo": "0"
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             },
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             "type": "block",
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:             "vg_name": "ceph_vg0"
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:         }
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]:     ]
Dec 06 08:27:13 compute-0 pedantic_mclean[415558]: }
Dec 06 08:27:13 compute-0 systemd[1]: libpod-52f84e70cae831958101a7239a35ad02013c20b12b44affd1ad4fb6ff6463870.scope: Deactivated successfully.
Dec 06 08:27:13 compute-0 podman[415541]: 2025-12-06 08:27:13.433608421 +0000 UTC m=+0.934445195 container died 52f84e70cae831958101a7239a35ad02013c20b12b44affd1ad4fb6ff6463870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mclean, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:27:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-38125c061376a208d12e7b5a24f191da04ab72a777ac0b89a474965d1781f48a-merged.mount: Deactivated successfully.
Dec 06 08:27:13 compute-0 podman[415541]: 2025-12-06 08:27:13.488597505 +0000 UTC m=+0.989434279 container remove 52f84e70cae831958101a7239a35ad02013c20b12b44affd1ad4fb6ff6463870 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mclean, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:27:13 compute-0 systemd[1]: libpod-conmon-52f84e70cae831958101a7239a35ad02013c20b12b44affd1ad4fb6ff6463870.scope: Deactivated successfully.
Dec 06 08:27:13 compute-0 sudo[415434]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:13 compute-0 sudo[415579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:13 compute-0 sudo[415579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:13 compute-0 sudo[415579]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:13 compute-0 sudo[415604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:27:13 compute-0 sudo[415604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:13 compute-0 sudo[415604]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:13 compute-0 sudo[415629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:13 compute-0 sudo[415629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:13 compute-0 sudo[415629]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:13 compute-0 sudo[415654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:27:13 compute-0 sudo[415654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:13.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:14 compute-0 podman[415719]: 2025-12-06 08:27:14.019489858 +0000 UTC m=+0.035457504 container create 13ce042cc1d9169062b5cfe998a7d883e518b5cfb3a8ca9b46caf299bc70f5e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hopper, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:27:14 compute-0 systemd[1]: Started libpod-conmon-13ce042cc1d9169062b5cfe998a7d883e518b5cfb3a8ca9b46caf299bc70f5e1.scope.
Dec 06 08:27:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:27:14 compute-0 ovn_controller[147168]: 2025-12-06T08:27:14Z|00840|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 08:27:14 compute-0 podman[415719]: 2025-12-06 08:27:14.004816639 +0000 UTC m=+0.020784305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:27:14 compute-0 podman[415719]: 2025-12-06 08:27:14.101338672 +0000 UTC m=+0.117306348 container init 13ce042cc1d9169062b5cfe998a7d883e518b5cfb3a8ca9b46caf299bc70f5e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:27:14 compute-0 podman[415719]: 2025-12-06 08:27:14.108570247 +0000 UTC m=+0.124537893 container start 13ce042cc1d9169062b5cfe998a7d883e518b5cfb3a8ca9b46caf299bc70f5e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:27:14 compute-0 podman[415719]: 2025-12-06 08:27:14.111773375 +0000 UTC m=+0.127741041 container attach 13ce042cc1d9169062b5cfe998a7d883e518b5cfb3a8ca9b46caf299bc70f5e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Dec 06 08:27:14 compute-0 naughty_hopper[415735]: 167 167
Dec 06 08:27:14 compute-0 systemd[1]: libpod-13ce042cc1d9169062b5cfe998a7d883e518b5cfb3a8ca9b46caf299bc70f5e1.scope: Deactivated successfully.
Dec 06 08:27:14 compute-0 podman[415719]: 2025-12-06 08:27:14.113280206 +0000 UTC m=+0.129247852 container died 13ce042cc1d9169062b5cfe998a7d883e518b5cfb3a8ca9b46caf299bc70f5e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hopper, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 08:27:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4050: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e777fa242088ac2dd054595183b63426ef26566d626be67c9aa21ad0e3825a8-merged.mount: Deactivated successfully.
Dec 06 08:27:14 compute-0 podman[415719]: 2025-12-06 08:27:14.150523607 +0000 UTC m=+0.166491253 container remove 13ce042cc1d9169062b5cfe998a7d883e518b5cfb3a8ca9b46caf299bc70f5e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hopper, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:27:14 compute-0 systemd[1]: libpod-conmon-13ce042cc1d9169062b5cfe998a7d883e518b5cfb3a8ca9b46caf299bc70f5e1.scope: Deactivated successfully.
Dec 06 08:27:14 compute-0 podman[415758]: 2025-12-06 08:27:14.309059634 +0000 UTC m=+0.042544706 container create d48260129c9f95730be39a880543f45189d65af5c4d5263a94de7ac4f6c0f187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_driscoll, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:27:14 compute-0 systemd[1]: Started libpod-conmon-d48260129c9f95730be39a880543f45189d65af5c4d5263a94de7ac4f6c0f187.scope.
Dec 06 08:27:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade64c45cab9e2be1fe376e4352327829b33c6668a1b4c9fed022b3d69480ebc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade64c45cab9e2be1fe376e4352327829b33c6668a1b4c9fed022b3d69480ebc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade64c45cab9e2be1fe376e4352327829b33c6668a1b4c9fed022b3d69480ebc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade64c45cab9e2be1fe376e4352327829b33c6668a1b4c9fed022b3d69480ebc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:27:14 compute-0 podman[415758]: 2025-12-06 08:27:14.386073696 +0000 UTC m=+0.119558788 container init d48260129c9f95730be39a880543f45189d65af5c4d5263a94de7ac4f6c0f187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 08:27:14 compute-0 podman[415758]: 2025-12-06 08:27:14.292276798 +0000 UTC m=+0.025761890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:27:14 compute-0 podman[415758]: 2025-12-06 08:27:14.392769739 +0000 UTC m=+0.126254811 container start d48260129c9f95730be39a880543f45189d65af5c4d5263a94de7ac4f6c0f187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_driscoll, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 06 08:27:14 compute-0 nova_compute[251992]: 2025-12-06 08:27:14.393 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:14 compute-0 podman[415758]: 2025-12-06 08:27:14.398847324 +0000 UTC m=+0.132332436 container attach d48260129c9f95730be39a880543f45189d65af5c4d5263a94de7ac4f6c0f187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 08:27:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:14.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:15 compute-0 intelligent_driscoll[415774]: {
Dec 06 08:27:15 compute-0 intelligent_driscoll[415774]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:27:15 compute-0 intelligent_driscoll[415774]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:27:15 compute-0 intelligent_driscoll[415774]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:27:15 compute-0 intelligent_driscoll[415774]:         "osd_id": 0,
Dec 06 08:27:15 compute-0 intelligent_driscoll[415774]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:27:15 compute-0 intelligent_driscoll[415774]:         "type": "bluestore"
Dec 06 08:27:15 compute-0 intelligent_driscoll[415774]:     }
Dec 06 08:27:15 compute-0 intelligent_driscoll[415774]: }
Dec 06 08:27:15 compute-0 systemd[1]: libpod-d48260129c9f95730be39a880543f45189d65af5c4d5263a94de7ac4f6c0f187.scope: Deactivated successfully.
Dec 06 08:27:15 compute-0 podman[415758]: 2025-12-06 08:27:15.247814167 +0000 UTC m=+0.981299239 container died d48260129c9f95730be39a880543f45189d65af5c4d5263a94de7ac4f6c0f187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:27:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ade64c45cab9e2be1fe376e4352327829b33c6668a1b4c9fed022b3d69480ebc-merged.mount: Deactivated successfully.
Dec 06 08:27:15 compute-0 ceph-mon[74339]: pgmap v4050: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:15 compute-0 podman[415758]: 2025-12-06 08:27:15.305257378 +0000 UTC m=+1.038742450 container remove d48260129c9f95730be39a880543f45189d65af5c4d5263a94de7ac4f6c0f187 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:27:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:15 compute-0 systemd[1]: libpod-conmon-d48260129c9f95730be39a880543f45189d65af5c4d5263a94de7ac4f6c0f187.scope: Deactivated successfully.
Dec 06 08:27:15 compute-0 sudo[415654]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:27:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:27:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 24223017-2756-4540-a9d9-1e0fc4a974f4 does not exist
Dec 06 08:27:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d351cf2e-104f-48c6-a729-2c647e581f15 does not exist
Dec 06 08:27:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d791bf32-c757-40c6-b4c1-47b13b46b40a does not exist
Dec 06 08:27:15 compute-0 sudo[415808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:15 compute-0 sudo[415808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:15 compute-0 sudo[415808]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:15 compute-0 sudo[415833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:27:15 compute-0 sudo[415833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:15 compute-0 sudo[415833]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:15 compute-0 nova_compute[251992]: 2025-12-06 08:27:15.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:15.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4051: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:27:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:16.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:17 compute-0 ceph-mon[74339]: pgmap v4051: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:17.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4052: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:18 compute-0 nova_compute[251992]: 2025-12-06 08:27:18.389 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:27:18
Dec 06 08:27:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:27:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:27:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.rgw.root', 'backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.control']
Dec 06 08:27:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:27:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:18.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:19 compute-0 nova_compute[251992]: 2025-12-06 08:27:19.430 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:19 compute-0 ceph-mon[74339]: pgmap v4052: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:19.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4053: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:20 compute-0 nova_compute[251992]: 2025-12-06 08:27:20.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:20 compute-0 nova_compute[251992]: 2025-12-06 08:27:20.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:27:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:20.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:21 compute-0 ceph-mon[74339]: pgmap v4053: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:21.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4054: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:22.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:23 compute-0 nova_compute[251992]: 2025-12-06 08:27:23.391 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:23 compute-0 nova_compute[251992]: 2025-12-06 08:27:23.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:27:23 compute-0 ceph-mon[74339]: pgmap v4054: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:27:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:27:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:27:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:27:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:27:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:23.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4055: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:24 compute-0 nova_compute[251992]: 2025-12-06 08:27:24.432 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:24.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:25 compute-0 ceph-mon[74339]: pgmap v4055: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:25.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4056: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:27:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:27:26 compute-0 ceph-mon[74339]: pgmap v4056: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:26.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:27:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:27:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:27:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:27:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:27:27 compute-0 sudo[415864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:27 compute-0 sudo[415864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:27 compute-0 sudo[415864]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:27 compute-0 sudo[415889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:27 compute-0 sudo[415889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:27 compute-0 sudo[415889]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:27.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4057: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:28 compute-0 nova_compute[251992]: 2025-12-06 08:27:28.393 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:28.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:29 compute-0 ceph-mon[74339]: pgmap v4057: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:29 compute-0 nova_compute[251992]: 2025-12-06 08:27:29.434 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:29.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4058: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:30.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:31 compute-0 ceph-mon[74339]: pgmap v4058: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:27:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:31.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4059: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Dec 06 08:27:32 compute-0 podman[415917]: 2025-12-06 08:27:32.418943022 +0000 UTC m=+0.078737401 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:27:32 compute-0 ceph-mon[74339]: pgmap v4059: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Dec 06 08:27:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:32.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:33 compute-0 nova_compute[251992]: 2025-12-06 08:27:33.395 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:33.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4060: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Dec 06 08:27:34 compute-0 nova_compute[251992]: 2025-12-06 08:27:34.436 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:34.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:35 compute-0 ceph-mon[74339]: pgmap v4060: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Dec 06 08:27:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:35.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4061: 305 pgs: 305 active+clean; 145 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.0 MiB/s wr, 26 op/s
Dec 06 08:27:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:36.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:37 compute-0 ceph-mon[74339]: pgmap v4061: 305 pgs: 305 active+clean; 145 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.0 MiB/s wr, 26 op/s
Dec 06 08:27:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:37.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4062: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:27:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:27:38 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/891910200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:27:38 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/891910200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:27:38 compute-0 nova_compute[251992]: 2025-12-06 08:27:38.395 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:38 compute-0 podman[415946]: 2025-12-06 08:27:38.435299452 +0000 UTC m=+0.079850150 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:27:38 compute-0 podman[415947]: 2025-12-06 08:27:38.443022252 +0000 UTC m=+0.082508723 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125)
Dec 06 08:27:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:38.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Dec 06 08:27:39 compute-0 ceph-mon[74339]: pgmap v4062: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Dec 06 08:27:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Dec 06 08:27:39 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Dec 06 08:27:39 compute-0 nova_compute[251992]: 2025-12-06 08:27:39.438 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:39.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4064: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 49 op/s
Dec 06 08:27:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Dec 06 08:27:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Dec 06 08:27:40 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Dec 06 08:27:40 compute-0 ceph-mon[74339]: osdmap e429: 3 total, 3 up, 3 in
Dec 06 08:27:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:40.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Dec 06 08:27:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Dec 06 08:27:41 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Dec 06 08:27:41 compute-0 ceph-mon[74339]: pgmap v4064: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 49 op/s
Dec 06 08:27:41 compute-0 ceph-mon[74339]: osdmap e430: 3 total, 3 up, 3 in
Dec 06 08:27:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:41.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4067: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 MiB/s wr, 35 op/s
Dec 06 08:27:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Dec 06 08:27:42 compute-0 ceph-mon[74339]: osdmap e431: 3 total, 3 up, 3 in
Dec 06 08:27:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Dec 06 08:27:42 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Dec 06 08:27:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:42.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:27:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:27:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:27:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:27:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:27:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:27:43 compute-0 ceph-mon[74339]: pgmap v4067: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.5 MiB/s wr, 35 op/s
Dec 06 08:27:43 compute-0 ceph-mon[74339]: osdmap e432: 3 total, 3 up, 3 in
Dec 06 08:27:43 compute-0 nova_compute[251992]: 2025-12-06 08:27:43.397 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:43.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4069: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 KiB/s rd, 838 B/s wr, 7 op/s
Dec 06 08:27:44 compute-0 nova_compute[251992]: 2025-12-06 08:27:44.440 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:44.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:45 compute-0 ceph-mon[74339]: pgmap v4069: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 KiB/s rd, 838 B/s wr, 7 op/s
Dec 06 08:27:45 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1713963602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:27:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:45.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4070: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.8 MiB/s wr, 78 op/s
Dec 06 08:27:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Dec 06 08:27:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Dec 06 08:27:46 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Dec 06 08:27:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:46.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:47 compute-0 ceph-mon[74339]: pgmap v4070: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.8 MiB/s wr, 78 op/s
Dec 06 08:27:47 compute-0 ceph-mon[74339]: osdmap e433: 3 total, 3 up, 3 in
Dec 06 08:27:47 compute-0 sudo[415993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:47 compute-0 sudo[415993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:47 compute-0 sudo[415993]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:47 compute-0 sudo[416018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:27:47 compute-0 sudo[416018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:27:47 compute-0 sudo[416018]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:47.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4072: 305 pgs: 305 active+clean; 240 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.5 MiB/s wr, 88 op/s
Dec 06 08:27:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3032119297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:48 compute-0 nova_compute[251992]: 2025-12-06 08:27:48.399 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:27:48.610 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=109, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=108) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:27:48 compute-0 nova_compute[251992]: 2025-12-06 08:27:48.611 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:48 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:27:48.611 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:27:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:48.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:49 compute-0 ceph-mon[74339]: pgmap v4072: 305 pgs: 305 active+clean; 240 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.5 MiB/s wr, 88 op/s
Dec 06 08:27:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/394519015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:49 compute-0 nova_compute[251992]: 2025-12-06 08:27:49.442 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:49.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4073: 305 pgs: 305 active+clean; 240 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 76 op/s
Dec 06 08:27:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Dec 06 08:27:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Dec 06 08:27:50 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Dec 06 08:27:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:50.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:51 compute-0 ceph-mon[74339]: pgmap v4073: 305 pgs: 305 active+clean; 240 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 76 op/s
Dec 06 08:27:51 compute-0 ceph-mon[74339]: osdmap e434: 3 total, 3 up, 3 in
Dec 06 08:27:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:51.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4075: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.3 MiB/s wr, 93 op/s
Dec 06 08:27:52 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:27:52.613 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '109'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:27:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:52.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:53 compute-0 nova_compute[251992]: 2025-12-06 08:27:53.402 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:53 compute-0 ceph-mon[74339]: pgmap v4075: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.3 MiB/s wr, 93 op/s
Dec 06 08:27:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:53.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4076: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 39 op/s
Dec 06 08:27:54 compute-0 nova_compute[251992]: 2025-12-06 08:27:54.442 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:54 compute-0 nova_compute[251992]: 2025-12-06 08:27:54.832 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Acquiring lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:27:54 compute-0 nova_compute[251992]: 2025-12-06 08:27:54.833 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:27:54 compute-0 nova_compute[251992]: 2025-12-06 08:27:54.853 251996 DEBUG nova.compute.manager [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 08:27:54 compute-0 ceph-mon[74339]: pgmap v4076: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 39 op/s
Dec 06 08:27:54 compute-0 nova_compute[251992]: 2025-12-06 08:27:54.949 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:27:54 compute-0 nova_compute[251992]: 2025-12-06 08:27:54.950 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:27:54 compute-0 nova_compute[251992]: 2025-12-06 08:27:54.959 251996 DEBUG nova.virt.hardware [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 08:27:54 compute-0 nova_compute[251992]: 2025-12-06 08:27:54.960 251996 INFO nova.compute.claims [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Claim successful on node compute-0.ctlplane.example.com
Dec 06 08:27:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:54.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.083 251996 DEBUG oslo_concurrency.processutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:27:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:27:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:27:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2038309903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.572 251996 DEBUG oslo_concurrency.processutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.579 251996 DEBUG nova.compute.provider_tree [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.606 251996 DEBUG nova.scheduler.client.report [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.634 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.636 251996 DEBUG nova.compute.manager [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.704 251996 DEBUG nova.compute.manager [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.705 251996 DEBUG nova.network.neutron [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.737 251996 INFO nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.759 251996 DEBUG nova.compute.manager [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.805 251996 INFO nova.virt.block_device [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Booting with volume cac0df88-28ec-4cc6-ba02-9bd488ce8782 at /dev/vda
Dec 06 08:27:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:27:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:55.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.968 251996 DEBUG os_brick.utils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.971 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.991 283120 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.992 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc33cbb-4d92-4e3c-a398-43bf28592e8b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:27:55 compute-0 nova_compute[251992]: 2025-12-06 08:27:55.994 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.004 283120 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.004 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2b50cf-725b-4860-aaf0-578cd38e9f2d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14d7cbfe12ab', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.006 283120 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.017 283120 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.017 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[17e40f52-cb5e-49d2-8972-2dfabb0f818f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.019 283120 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0449a0-1cc1-4de1-a847-211d2459f8bd]: (4, 'dc45738e-2bb0-4417-914c-a006d79f6275') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.020 251996 DEBUG oslo_concurrency.processutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:27:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2038309903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.061 251996 DEBUG oslo_concurrency.processutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.066 251996 DEBUG os_brick.initiator.connectors.lightos [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.066 251996 DEBUG os_brick.initiator.connectors.lightos [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.067 251996 DEBUG os_brick.initiator.connectors.lightos [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.068 251996 DEBUG os_brick.utils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] <== get_connector_properties: return (98ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14d7cbfe12ab', 'do_local_attach': False, 'nvme_hostid': 'bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'system uuid': 'dc45738e-2bb0-4417-914c-a006d79f6275', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:bf3e0a14-a5f8-4123-aa26-e7cad37b879a', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.068 251996 DEBUG nova.virt.block_device [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updating existing volume attachment record: 8cd113ac-847b-4e13-af9a-7723f57e95f0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 08:27:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4077: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 516 KiB/s rd, 536 KiB/s wr, 14 op/s
Dec 06 08:27:56 compute-0 nova_compute[251992]: 2025-12-06 08:27:56.256 251996 DEBUG nova.policy [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00e2fea2f8f54b1c9af85553820566a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '55d0236410514dd9ad6cdb3e1a5d0ee6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 08:27:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:56.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.021 251996 DEBUG nova.compute.manager [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.023 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.023 251996 INFO nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Creating image(s)
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.024 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.024 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Ensure instance console log exists: /var/lib/nova/instances/ec4b7c40-d407-4a4a-bafa-4d075f29487f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.024 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.025 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.025 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:27:57 compute-0 ceph-mon[74339]: pgmap v4077: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 516 KiB/s rd, 536 KiB/s wr, 14 op/s
Dec 06 08:27:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3327544377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.102 251996 DEBUG nova.network.neutron [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Successfully created port: a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.787 251996 DEBUG nova.network.neutron [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Successfully updated port: a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.808 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Acquiring lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.808 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Acquired lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.809 251996 DEBUG nova.network.neutron [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.878 251996 DEBUG nova.compute.manager [req-a8a46ae0-1da8-4b8a-aa4d-73abfab2f9db req-cdc62d21-4838-40ea-87d9-f67f08e76904 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received event network-changed-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.879 251996 DEBUG nova.compute.manager [req-a8a46ae0-1da8-4b8a-aa4d-73abfab2f9db req-cdc62d21-4838-40ea-87d9-f67f08e76904 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Refreshing instance network info cache due to event network-changed-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:27:57 compute-0 nova_compute[251992]: 2025-12-06 08:27:57.879 251996 DEBUG oslo_concurrency.lockutils [req-a8a46ae0-1da8-4b8a-aa4d-73abfab2f9db req-cdc62d21-4838-40ea-87d9-f67f08e76904 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:27:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:57.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4078: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 506 KiB/s rd, 525 KiB/s wr, 14 op/s
Dec 06 08:27:58 compute-0 nova_compute[251992]: 2025-12-06 08:27:58.403 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:58 compute-0 nova_compute[251992]: 2025-12-06 08:27:58.737 251996 DEBUG nova.network.neutron [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 08:27:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:27:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:27:58.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.495 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.658 251996 DEBUG nova.network.neutron [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updating instance_info_cache with network_info: [{"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.681 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Releasing lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.681 251996 DEBUG nova.compute.manager [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Instance network_info: |[{"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.682 251996 DEBUG oslo_concurrency.lockutils [req-a8a46ae0-1da8-4b8a-aa4d-73abfab2f9db req-cdc62d21-4838-40ea-87d9-f67f08e76904 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.682 251996 DEBUG nova.network.neutron [req-a8a46ae0-1da8-4b8a-aa4d-73abfab2f9db req-cdc62d21-4838-40ea-87d9-f67f08e76904 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Refreshing network info cache for port a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.687 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Start _get_guest_xml network_info=[{"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-cac0df88-28ec-4cc6-ba02-9bd488ce8782', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'cac0df88-28ec-4cc6-ba02-9bd488ce8782', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ec4b7c40-d407-4a4a-bafa-4d075f29487f', 'attached_at': '', 'detached_at': '', 'volume_id': 'cac0df88-28ec-4cc6-ba02-9bd488ce8782', 'serial': 'cac0df88-28ec-4cc6-ba02-9bd488ce8782'}, 'attachment_id': '8cd113ac-847b-4e13-af9a-7723f57e95f0', 'guest_format': None, 'delete_on_termination': False, 'disk_bus': 'virtio', 'boot_index': 0, 'device_type': 'disk', 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.695 251996 WARNING nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.706 251996 DEBUG nova.virt.libvirt.host [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.707 251996 DEBUG nova.virt.libvirt.host [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.712 251996 DEBUG nova.virt.libvirt.host [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.712 251996 DEBUG nova.virt.libvirt.host [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.715 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.715 251996 DEBUG nova.virt.hardware [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T06:56:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25848a18-11d9-4f11-80b5-5d005675c76d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.716 251996 DEBUG nova.virt.hardware [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.717 251996 DEBUG nova.virt.hardware [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.717 251996 DEBUG nova.virt.hardware [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.717 251996 DEBUG nova.virt.hardware [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.718 251996 DEBUG nova.virt.hardware [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.718 251996 DEBUG nova.virt.hardware [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.719 251996 DEBUG nova.virt.hardware [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.720 251996 DEBUG nova.virt.hardware [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.720 251996 DEBUG nova.virt.hardware [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.721 251996 DEBUG nova.virt.hardware [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.775 251996 DEBUG nova.storage.rbd_utils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] rbd image ec4b7c40-d407-4a4a-bafa-4d075f29487f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:27:59 compute-0 nova_compute[251992]: 2025-12-06 08:27:59.781 251996 DEBUG oslo_concurrency.processutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:27:59 compute-0 ceph-mon[74339]: pgmap v4078: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 506 KiB/s rd, 525 KiB/s wr, 14 op/s
Dec 06 08:27:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:27:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:27:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:27:59.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4079: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 506 KiB/s rd, 525 KiB/s wr, 14 op/s
Dec 06 08:28:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec 06 08:28:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366344083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.692 251996 DEBUG oslo_concurrency.processutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.911s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.723 251996 DEBUG nova.virt.libvirt.vif [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:27:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1417535578',display_name='tempest-TestVolumeBackupRestore-server-1417535578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1417535578',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGi74QcvQx1TJ5bn/MOn8g36uTTcLfmxtJnjTjNKqhzlHTumMcvQfyd1LcmjxLXLqetQHWPcDq6WT4b+Ge0BjCBPB+nREeeso33JF9FsmzpFuSdurxhWIDKNHnoRxWIhUQ==',key_name='tempest-TestVolumeBackupRestore-367977014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d0236410514dd9ad6cdb3e1a5d0ee6',ramdisk_id='',reservation_id='r-iegmkqc9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1150314823',owner_user_name='tempest-TestVolumeBackupRestore-1150314823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:27:55Z,user_data=None,user_id='00e2fea2f8f54b1c9af85553820566a6',uuid=ec4b7c40-d407-4a4a-bafa-4d075f29487f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.724 251996 DEBUG nova.network.os_vif_util [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Converting VIF {"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.725 251996 DEBUG nova.network.os_vif_util [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:e0:f8,bridge_name='br-int',has_traffic_filtering=True,id=a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c,network=Network(d45a42e5-9dac-4ea0-bad8-cca7babbbcbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9c2ca9d-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.728 251996 DEBUG nova.objects.instance [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lazy-loading 'pci_devices' on Instance uuid ec4b7c40-d407-4a4a-bafa-4d075f29487f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.749 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] End _get_guest_xml xml=<domain type="kvm">
Dec 06 08:28:00 compute-0 nova_compute[251992]:   <uuid>ec4b7c40-d407-4a4a-bafa-4d075f29487f</uuid>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   <name>instance-000000e0</name>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   <memory>131072</memory>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   <vcpu>1</vcpu>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   <metadata>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <nova:name>tempest-TestVolumeBackupRestore-server-1417535578</nova:name>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <nova:creationTime>2025-12-06 08:27:59</nova:creationTime>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <nova:flavor name="m1.nano">
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <nova:memory>128</nova:memory>
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <nova:disk>1</nova:disk>
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <nova:swap>0</nova:swap>
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <nova:vcpus>1</nova:vcpus>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       </nova:flavor>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <nova:owner>
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <nova:user uuid="00e2fea2f8f54b1c9af85553820566a6">tempest-TestVolumeBackupRestore-1150314823-project-member</nova:user>
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <nova:project uuid="55d0236410514dd9ad6cdb3e1a5d0ee6">tempest-TestVolumeBackupRestore-1150314823</nova:project>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       </nova:owner>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <nova:ports>
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <nova:port uuid="a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c">
Dec 06 08:28:00 compute-0 nova_compute[251992]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:         </nova:port>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       </nova:ports>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     </nova:instance>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   </metadata>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   <sysinfo type="smbios">
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <system>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <entry name="manufacturer">RDO</entry>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <entry name="product">OpenStack Compute</entry>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <entry name="serial">ec4b7c40-d407-4a4a-bafa-4d075f29487f</entry>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <entry name="uuid">ec4b7c40-d407-4a4a-bafa-4d075f29487f</entry>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <entry name="family">Virtual Machine</entry>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     </system>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   </sysinfo>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   <os>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <boot dev="hd"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <smbios mode="sysinfo"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   </os>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   <features>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <acpi/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <apic/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <vmcoreinfo/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   </features>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   <clock offset="utc">
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <timer name="hpet" present="no"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   </clock>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   <cpu mode="custom" match="exact">
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <model>Nehalem</model>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   </cpu>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   <devices>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <disk type="network" device="cdrom">
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <driver type="raw" cache="none"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <source protocol="rbd" name="vms/ec4b7c40-d407-4a4a-bafa-4d075f29487f_disk.config">
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       </source>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <target dev="sda" bus="sata"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <disk type="network" device="disk">
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <source protocol="rbd" name="volumes/volume-cac0df88-28ec-4cc6-ba02-9bd488ce8782">
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <host name="192.168.122.100" port="6789"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <host name="192.168.122.102" port="6789"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <host name="192.168.122.101" port="6789"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       </source>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <auth username="openstack">
Dec 06 08:28:00 compute-0 nova_compute[251992]:         <secret type="ceph" uuid="40a1bae4-cf76-5610-8dab-c75116dfe0bb"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       </auth>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <target dev="vda" bus="virtio"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <serial>cac0df88-28ec-4cc6-ba02-9bd488ce8782</serial>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     </disk>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <interface type="ethernet">
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <mac address="fa:16:3e:4e:e0:f8"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <mtu size="1442"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <target dev="tapa9c2ca9d-21"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     </interface>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <serial type="pty">
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <log file="/var/lib/nova/instances/ec4b7c40-d407-4a4a-bafa-4d075f29487f/console.log" append="off"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     </serial>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <video>
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <model type="virtio"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     </video>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <input type="tablet" bus="usb"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <rng model="virtio">
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <backend model="random">/dev/urandom</backend>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     </rng>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <controller type="usb" index="0"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     <memballoon model="virtio">
Dec 06 08:28:00 compute-0 nova_compute[251992]:       <stats period="10"/>
Dec 06 08:28:00 compute-0 nova_compute[251992]:     </memballoon>
Dec 06 08:28:00 compute-0 nova_compute[251992]:   </devices>
Dec 06 08:28:00 compute-0 nova_compute[251992]: </domain>
Dec 06 08:28:00 compute-0 nova_compute[251992]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.750 251996 DEBUG nova.compute.manager [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Preparing to wait for external event network-vif-plugged-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.752 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Acquiring lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.752 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.753 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.754 251996 DEBUG nova.virt.libvirt.vif [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T08:27:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1417535578',display_name='tempest-TestVolumeBackupRestore-server-1417535578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1417535578',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGi74QcvQx1TJ5bn/MOn8g36uTTcLfmxtJnjTjNKqhzlHTumMcvQfyd1LcmjxLXLqetQHWPcDq6WT4b+Ge0BjCBPB+nREeeso33JF9FsmzpFuSdurxhWIDKNHnoRxWIhUQ==',key_name='tempest-TestVolumeBackupRestore-367977014',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='55d0236410514dd9ad6cdb3e1a5d0ee6',ramdisk_id='',reservation_id='r-iegmkqc9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1150314823',owner_user_name='tempest-TestVolumeBackupRestore-1150314823-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T08:27:55Z,user_data=None,user_id='00e2fea2f8f54b1c9af85553820566a6',uuid=ec4b7c40-d407-4a4a-bafa-4d075f29487f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.755 251996 DEBUG nova.network.os_vif_util [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Converting VIF {"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.756 251996 DEBUG nova.network.os_vif_util [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:e0:f8,bridge_name='br-int',has_traffic_filtering=True,id=a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c,network=Network(d45a42e5-9dac-4ea0-bad8-cca7babbbcbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9c2ca9d-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.757 251996 DEBUG os_vif [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:e0:f8,bridge_name='br-int',has_traffic_filtering=True,id=a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c,network=Network(d45a42e5-9dac-4ea0-bad8-cca7babbbcbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9c2ca9d-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.758 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.759 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.760 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.765 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.766 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa9c2ca9d-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.767 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa9c2ca9d-21, col_values=(('external_ids', {'iface-id': 'a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:e0:f8', 'vm-uuid': 'ec4b7c40-d407-4a4a-bafa-4d075f29487f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.768 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.769 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 08:28:00 compute-0 NetworkManager[48965]: <info>  [1765009680.7711] manager: (tapa9c2ca9d-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.775 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.776 251996 INFO os_vif [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:e0:f8,bridge_name='br-int',has_traffic_filtering=True,id=a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c,network=Network(d45a42e5-9dac-4ea0-bad8-cca7babbbcbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9c2ca9d-21')
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.847 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.847 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.847 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] No VIF found with MAC fa:16:3e:4e:e0:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.848 251996 INFO nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Using config drive
Dec 06 08:28:00 compute-0 nova_compute[251992]: 2025-12-06 08:28:00.885 251996 DEBUG nova.storage.rbd_utils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] rbd image ec4b7c40-d407-4a4a-bafa-4d075f29487f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:28:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:00.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:00 compute-0 ceph-mon[74339]: pgmap v4079: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 506 KiB/s rd, 525 KiB/s wr, 14 op/s
Dec 06 08:28:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2366344083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec 06 08:28:01 compute-0 nova_compute[251992]: 2025-12-06 08:28:01.742 251996 INFO nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Creating config drive at /var/lib/nova/instances/ec4b7c40-d407-4a4a-bafa-4d075f29487f/disk.config
Dec 06 08:28:01 compute-0 nova_compute[251992]: 2025-12-06 08:28:01.749 251996 DEBUG oslo_concurrency.processutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec4b7c40-d407-4a4a-bafa-4d075f29487f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz3_itv3q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:28:01 compute-0 nova_compute[251992]: 2025-12-06 08:28:01.907 251996 DEBUG oslo_concurrency.processutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec4b7c40-d407-4a4a-bafa-4d075f29487f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz3_itv3q" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:28:01 compute-0 nova_compute[251992]: 2025-12-06 08:28:01.939 251996 DEBUG nova.storage.rbd_utils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] rbd image ec4b7c40-d407-4a4a-bafa-4d075f29487f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 08:28:01 compute-0 nova_compute[251992]: 2025-12-06 08:28:01.942 251996 DEBUG oslo_concurrency.processutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec4b7c40-d407-4a4a-bafa-4d075f29487f/disk.config ec4b7c40-d407-4a4a-bafa-4d075f29487f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:28:01 compute-0 nova_compute[251992]: 2025-12-06 08:28:01.966 251996 DEBUG nova.network.neutron [req-a8a46ae0-1da8-4b8a-aa4d-73abfab2f9db req-cdc62d21-4838-40ea-87d9-f67f08e76904 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updated VIF entry in instance network info cache for port a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:28:01 compute-0 nova_compute[251992]: 2025-12-06 08:28:01.967 251996 DEBUG nova.network.neutron [req-a8a46ae0-1da8-4b8a-aa4d-73abfab2f9db req-cdc62d21-4838-40ea-87d9-f67f08e76904 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updating instance_info_cache with network_info: [{"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:28:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:01.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:01 compute-0 nova_compute[251992]: 2025-12-06 08:28:01.991 251996 DEBUG oslo_concurrency.lockutils [req-a8a46ae0-1da8-4b8a-aa4d-73abfab2f9db req-cdc62d21-4838-40ea-87d9-f67f08e76904 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.105 251996 DEBUG oslo_concurrency.processutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec4b7c40-d407-4a4a-bafa-4d075f29487f/disk.config ec4b7c40-d407-4a4a-bafa-4d075f29487f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.105 251996 INFO nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Deleting local config drive /var/lib/nova/instances/ec4b7c40-d407-4a4a-bafa-4d075f29487f/disk.config because it was imported into RBD.
Dec 06 08:28:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4080: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 351 B/s wr, 9 op/s
Dec 06 08:28:02 compute-0 kernel: tapa9c2ca9d-21: entered promiscuous mode
Dec 06 08:28:02 compute-0 NetworkManager[48965]: <info>  [1765009682.1557] manager: (tapa9c2ca9d-21): new Tun device (/org/freedesktop/NetworkManager/Devices/403)
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.159 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:02 compute-0 ovn_controller[147168]: 2025-12-06T08:28:02Z|00841|binding|INFO|Claiming lport a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c for this chassis.
Dec 06 08:28:02 compute-0 ovn_controller[147168]: 2025-12-06T08:28:02Z|00842|binding|INFO|a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c: Claiming fa:16:3e:4e:e0:f8 10.100.0.4
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.164 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.167 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.173 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:e0:f8 10.100.0.4'], port_security=['fa:16:3e:4e:e0:f8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ec4b7c40-d407-4a4a-bafa-4d075f29487f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d0236410514dd9ad6cdb3e1a5d0ee6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '50af7cc2-125d-439a-810c-229d0e5b22a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d51e676-9fc7-404c-a223-5b590dbd406b, chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.175 158118 INFO neutron.agent.ovn.metadata.agent [-] Port a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c in datapath d45a42e5-9dac-4ea0-bad8-cca7babbbcbb bound to our chassis
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.176 158118 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d45a42e5-9dac-4ea0-bad8-cca7babbbcbb
Dec 06 08:28:02 compute-0 systemd-udevd[416192]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.198 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0eec7f32-5632-44ee-8be2-ebfc2e4f2a96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.199 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd45a42e5-91 in ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.202 260599 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd45a42e5-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.202 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[59852600-56cc-4294-8451-c5921451fba9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.203 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[dd48afe9-1da7-41fe-a0da-7eb01efafadc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 systemd-machined[212986]: New machine qemu-99-instance-000000e0.
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.215 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[2e13e469-ac6c-4c00-a741-2c9e45832b14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 NetworkManager[48965]: <info>  [1765009682.2167] device (tapa9c2ca9d-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 06 08:28:02 compute-0 NetworkManager[48965]: <info>  [1765009682.2180] device (tapa9c2ca9d-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.227 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:02 compute-0 ovn_controller[147168]: 2025-12-06T08:28:02Z|00843|binding|INFO|Setting lport a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c ovn-installed in OVS
Dec 06 08:28:02 compute-0 ovn_controller[147168]: 2025-12-06T08:28:02Z|00844|binding|INFO|Setting lport a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c up in Southbound
Dec 06 08:28:02 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-000000e0.
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.232 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.249 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4245a5e8-3813-4734-9972-c6ee039a3a44]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.287 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[27366fae-8c27-451c-9798-745758abb38d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.293 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b2223b89-a7ae-44db-a859-d8af16316dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 NetworkManager[48965]: <info>  [1765009682.2942] manager: (tapd45a42e5-90): new Veth device (/org/freedesktop/NetworkManager/Devices/404)
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.323 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef2c0aa-937b-4603-bb39-78dc7feccb22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.326 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4f0a69-ff22-4901-a41a-7447e2b615ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 NetworkManager[48965]: <info>  [1765009682.3515] device (tapd45a42e5-90): carrier: link connected
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.357 260617 DEBUG oslo.privsep.daemon [-] privsep: reply[ab80e3c2-96ac-40df-9141-a1df0beb26a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.373 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7220f4-2ad2-41bd-a8e4-b28a0aa2399b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd45a42e5-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:f7:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 995493, 'reachable_time': 36260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416225, 'error': None, 'target': 'ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.387 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad3bf9c-d93b-43f4-b98d-6f50ccd06d1b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:f77d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 995493, 'tstamp': 995493}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416226, 'error': None, 'target': 'ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.402 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[b93df950-b680-4864-a38c-9fcb2cda06bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd45a42e5-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:f7:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 255], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 995493, 'reachable_time': 36260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 416227, 'error': None, 'target': 'ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.428 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[badbd34d-52ca-42ec-97f4-ae95d34c63e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.486 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[07b37dd4-2e99-4a56-b8ca-2d5111c86542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.488 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd45a42e5-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.488 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.489 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd45a42e5-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:28:02 compute-0 kernel: tapd45a42e5-90: entered promiscuous mode
Dec 06 08:28:02 compute-0 NetworkManager[48965]: <info>  [1765009682.4915] manager: (tapd45a42e5-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.491 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.495 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd45a42e5-90, col_values=(('external_ids', {'iface-id': 'c25ff352-b2c4-459d-af46-3f06a7d68c84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:28:02 compute-0 ovn_controller[147168]: 2025-12-06T08:28:02Z|00845|binding|INFO|Releasing lport c25ff352-b2c4-459d-af46-3f06a7d68c84 from this chassis (sb_readonly=0)
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.496 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.498 158118 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d45a42e5-9dac-4ea0-bad8-cca7babbbcbb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d45a42e5-9dac-4ea0-bad8-cca7babbbcbb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.499 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[4b239cd6-25d4-4c5b-98d5-93bf73521e16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.500 158118 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: global
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     log         /dev/log local0 debug
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     log-tag     haproxy-metadata-proxy-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     user        root
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     group       root
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     maxconn     1024
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     pidfile     /var/lib/neutron/external/pids/d45a42e5-9dac-4ea0-bad8-cca7babbbcbb.pid.haproxy
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     daemon
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: defaults
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     log global
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     mode http
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     option httplog
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     option dontlognull
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     option http-server-close
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     option forwardfor
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     retries                 3
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     timeout http-request    30s
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     timeout connect         30s
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     timeout client          32s
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     timeout server          32s
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     timeout http-keep-alive 30s
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: listen listener
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     bind 169.254.169.254:80
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:     http-request add-header X-OVN-Network-ID d45a42e5-9dac-4ea0-bad8-cca7babbbcbb
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 08:28:02 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:02.501 158118 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb', 'env', 'PROCESS_TAG=haproxy-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d45a42e5-9dac-4ea0-bad8-cca7babbbcbb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.511 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.649 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.835 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009682.8354485, ec4b7c40-d407-4a4a-bafa-4d075f29487f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.837 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] VM Started (Lifecycle Event)
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.863 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.869 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009682.83557, ec4b7c40-d407-4a4a-bafa-4d075f29487f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.870 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] VM Paused (Lifecycle Event)
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.892 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:28:02 compute-0 podman[416300]: 2025-12-06 08:28:02.893669465 +0000 UTC m=+0.068632256 container create 5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.898 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.920 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:28:02 compute-0 systemd[1]: Started libpod-conmon-5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6.scope.
Dec 06 08:28:02 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9231a6281b3609ca6998013313166b02f67df33fcdef31dcd3ec67287ad40168/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:02 compute-0 podman[416300]: 2025-12-06 08:28:02.860855424 +0000 UTC m=+0.035818275 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 08:28:02 compute-0 podman[416300]: 2025-12-06 08:28:02.962139295 +0000 UTC m=+0.137102076 container init 5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:28:02 compute-0 podman[416300]: 2025-12-06 08:28:02.968621832 +0000 UTC m=+0.143584613 container start 5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:28:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:02.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:02 compute-0 neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb[416316]: [NOTICE]   (416330) : New worker (416339) forked
Dec 06 08:28:02 compute-0 neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb[416316]: [NOTICE]   (416330) : Loading success.
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.993 251996 DEBUG nova.compute.manager [req-90612ed6-2765-4547-915f-5317468dc4ee req-82316b8a-7228-47f9-8e4f-b9edc79f42d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received event network-vif-plugged-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.993 251996 DEBUG oslo_concurrency.lockutils [req-90612ed6-2765-4547-915f-5317468dc4ee req-82316b8a-7228-47f9-8e4f-b9edc79f42d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.993 251996 DEBUG oslo_concurrency.lockutils [req-90612ed6-2765-4547-915f-5317468dc4ee req-82316b8a-7228-47f9-8e4f-b9edc79f42d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.994 251996 DEBUG oslo_concurrency.lockutils [req-90612ed6-2765-4547-915f-5317468dc4ee req-82316b8a-7228-47f9-8e4f-b9edc79f42d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.994 251996 DEBUG nova.compute.manager [req-90612ed6-2765-4547-915f-5317468dc4ee req-82316b8a-7228-47f9-8e4f-b9edc79f42d4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Processing event network-vif-plugged-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.994 251996 DEBUG nova.compute.manager [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.998 251996 DEBUG nova.virt.driver [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] Emitting event <LifecycleEvent: 1765009682.9982855, ec4b7c40-d407-4a4a-bafa-4d075f29487f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:28:02 compute-0 nova_compute[251992]: 2025-12-06 08:28:02.998 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] VM Resumed (Lifecycle Event)
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.000 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.003 251996 INFO nova.virt.libvirt.driver [-] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Instance spawned successfully.
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.003 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 08:28:03 compute-0 podman[416313]: 2025-12-06 08:28:03.027324506 +0000 UTC m=+0.090684285 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.031 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.038 251996 DEBUG nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.039 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.040 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.040 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.040 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.041 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.041 251996 DEBUG nova.virt.libvirt.driver [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.080 251996 INFO nova.compute.manager [None req-f13e8a22-03e9-4b84-bda6-5ac292167cd9 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.143 251996 INFO nova.compute.manager [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Took 6.12 seconds to spawn the instance on the hypervisor.
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.143 251996 DEBUG nova.compute.manager [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.214 251996 INFO nova.compute.manager [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Took 8.31 seconds to build instance.
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.237 251996 DEBUG oslo_concurrency.lockutils [None req-acf92feb-07e5-4694-b955-d1c463e10485 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:03 compute-0 ceph-mon[74339]: pgmap v4080: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 351 B/s wr, 9 op/s
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.676 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.677 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.677 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.678 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:28:03 compute-0 nova_compute[251992]: 2025-12-06 08:28:03.678 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:28:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:03.903 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:03.904 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:03.904 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:03.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:28:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1065690781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:04 compute-0 nova_compute[251992]: 2025-12-06 08:28:04.129 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:28:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4081: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:04 compute-0 nova_compute[251992]: 2025-12-06 08:28:04.210 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000e0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:28:04 compute-0 nova_compute[251992]: 2025-12-06 08:28:04.211 251996 DEBUG nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] skipping disk for instance-000000e0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 08:28:04 compute-0 nova_compute[251992]: 2025-12-06 08:28:04.358 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:28:04 compute-0 nova_compute[251992]: 2025-12-06 08:28:04.359 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4028MB free_disk=20.988277435302734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:28:04 compute-0 nova_compute[251992]: 2025-12-06 08:28:04.359 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:04 compute-0 nova_compute[251992]: 2025-12-06 08:28:04.359 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:04 compute-0 nova_compute[251992]: 2025-12-06 08:28:04.493 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:04 compute-0 nova_compute[251992]: 2025-12-06 08:28:04.590 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Instance ec4b7c40-d407-4a4a-bafa-4d075f29487f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 08:28:04 compute-0 nova_compute[251992]: 2025-12-06 08:28:04.591 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:28:04 compute-0 nova_compute[251992]: 2025-12-06 08:28:04.591 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:28:04 compute-0 nova_compute[251992]: 2025-12-06 08:28:04.636 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:28:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:04.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:05 compute-0 nova_compute[251992]: 2025-12-06 08:28:05.075 251996 DEBUG nova.compute.manager [req-1550f4c3-1802-4daf-9a3d-02cfdecb8a68 req-89fedaef-52ec-4831-9e6a-a50921475029 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received event network-vif-plugged-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:28:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:28:05 compute-0 nova_compute[251992]: 2025-12-06 08:28:05.076 251996 DEBUG oslo_concurrency.lockutils [req-1550f4c3-1802-4daf-9a3d-02cfdecb8a68 req-89fedaef-52ec-4831-9e6a-a50921475029 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422550437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:05 compute-0 nova_compute[251992]: 2025-12-06 08:28:05.077 251996 DEBUG oslo_concurrency.lockutils [req-1550f4c3-1802-4daf-9a3d-02cfdecb8a68 req-89fedaef-52ec-4831-9e6a-a50921475029 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:05 compute-0 nova_compute[251992]: 2025-12-06 08:28:05.078 251996 DEBUG oslo_concurrency.lockutils [req-1550f4c3-1802-4daf-9a3d-02cfdecb8a68 req-89fedaef-52ec-4831-9e6a-a50921475029 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:05 compute-0 nova_compute[251992]: 2025-12-06 08:28:05.078 251996 DEBUG nova.compute.manager [req-1550f4c3-1802-4daf-9a3d-02cfdecb8a68 req-89fedaef-52ec-4831-9e6a-a50921475029 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] No waiting events found dispatching network-vif-plugged-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:28:05 compute-0 nova_compute[251992]: 2025-12-06 08:28:05.079 251996 WARNING nova.compute.manager [req-1550f4c3-1802-4daf-9a3d-02cfdecb8a68 req-89fedaef-52ec-4831-9e6a-a50921475029 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received unexpected event network-vif-plugged-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c for instance with vm_state active and task_state None.
Dec 06 08:28:05 compute-0 nova_compute[251992]: 2025-12-06 08:28:05.096 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:28:05 compute-0 nova_compute[251992]: 2025-12-06 08:28:05.104 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:28:05 compute-0 nova_compute[251992]: 2025-12-06 08:28:05.121 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:28:05 compute-0 nova_compute[251992]: 2025-12-06 08:28:05.148 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:28:05 compute-0 nova_compute[251992]: 2025-12-06 08:28:05.148 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1065690781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:05 compute-0 nova_compute[251992]: 2025-12-06 08:28:05.769 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:05.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4082: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 54 op/s
Dec 06 08:28:06 compute-0 ceph-mon[74339]: pgmap v4081: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3422550437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:06 compute-0 NetworkManager[48965]: <info>  [1765009686.8564] manager: (patch-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Dec 06 08:28:06 compute-0 NetworkManager[48965]: <info>  [1765009686.8571] manager: (patch-br-int-to-provnet-9e78c1a1-68f4-477a-abaa-13a98bde06e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Dec 06 08:28:06 compute-0 nova_compute[251992]: 2025-12-06 08:28:06.858 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:06 compute-0 nova_compute[251992]: 2025-12-06 08:28:06.949 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:06 compute-0 ovn_controller[147168]: 2025-12-06T08:28:06Z|00846|binding|INFO|Releasing lport c25ff352-b2c4-459d-af46-3f06a7d68c84 from this chassis (sb_readonly=0)
Dec 06 08:28:06 compute-0 nova_compute[251992]: 2025-12-06 08:28:06.958 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:06.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:07 compute-0 ceph-mon[74339]: pgmap v4082: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 12 KiB/s wr, 54 op/s
Dec 06 08:28:07 compute-0 nova_compute[251992]: 2025-12-06 08:28:07.345 251996 DEBUG nova.compute.manager [req-3c9bfbde-b97b-445c-a558-fb8f7f0ca166 req-333326b9-aa24-46d3-8988-da82a8100c1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received event network-changed-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:28:07 compute-0 nova_compute[251992]: 2025-12-06 08:28:07.346 251996 DEBUG nova.compute.manager [req-3c9bfbde-b97b-445c-a558-fb8f7f0ca166 req-333326b9-aa24-46d3-8988-da82a8100c1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Refreshing instance network info cache due to event network-changed-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:28:07 compute-0 nova_compute[251992]: 2025-12-06 08:28:07.346 251996 DEBUG oslo_concurrency.lockutils [req-3c9bfbde-b97b-445c-a558-fb8f7f0ca166 req-333326b9-aa24-46d3-8988-da82a8100c1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:28:07 compute-0 nova_compute[251992]: 2025-12-06 08:28:07.346 251996 DEBUG oslo_concurrency.lockutils [req-3c9bfbde-b97b-445c-a558-fb8f7f0ca166 req-333326b9-aa24-46d3-8988-da82a8100c1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:28:07 compute-0 nova_compute[251992]: 2025-12-06 08:28:07.346 251996 DEBUG nova.network.neutron [req-3c9bfbde-b97b-445c-a558-fb8f7f0ca166 req-333326b9-aa24-46d3-8988-da82a8100c1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Refreshing network info cache for port a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:28:07 compute-0 sudo[416404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:07 compute-0 sudo[416404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:07 compute-0 sudo[416404]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:07 compute-0 sudo[416429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:07 compute-0 sudo[416429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:07.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:07 compute-0 sudo[416429]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4083: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:28:08 compute-0 nova_compute[251992]: 2025-12-06 08:28:08.149 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:08 compute-0 nova_compute[251992]: 2025-12-06 08:28:08.978 251996 DEBUG nova.compute.manager [req-41e1e22e-ffae-4f69-987c-7057cb0a085c req-1aafa73f-9e8f-4d99-8c63-3d49ed62e547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received event network-changed-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:28:08 compute-0 nova_compute[251992]: 2025-12-06 08:28:08.979 251996 DEBUG nova.compute.manager [req-41e1e22e-ffae-4f69-987c-7057cb0a085c req-1aafa73f-9e8f-4d99-8c63-3d49ed62e547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Refreshing instance network info cache due to event network-changed-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:28:08 compute-0 nova_compute[251992]: 2025-12-06 08:28:08.980 251996 DEBUG oslo_concurrency.lockutils [req-41e1e22e-ffae-4f69-987c-7057cb0a085c req-1aafa73f-9e8f-4d99-8c63-3d49ed62e547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:28:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:08.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:09 compute-0 ceph-mon[74339]: pgmap v4083: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:28:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2913063228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1864675316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:09 compute-0 podman[416455]: 2025-12-06 08:28:09.421308765 +0000 UTC m=+0.073703613 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 06 08:28:09 compute-0 podman[416456]: 2025-12-06 08:28:09.427523604 +0000 UTC m=+0.080819636 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 08:28:09 compute-0 nova_compute[251992]: 2025-12-06 08:28:09.448 251996 DEBUG nova.network.neutron [req-3c9bfbde-b97b-445c-a558-fb8f7f0ca166 req-333326b9-aa24-46d3-8988-da82a8100c1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updated VIF entry in instance network info cache for port a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:28:09 compute-0 nova_compute[251992]: 2025-12-06 08:28:09.448 251996 DEBUG nova.network.neutron [req-3c9bfbde-b97b-445c-a558-fb8f7f0ca166 req-333326b9-aa24-46d3-8988-da82a8100c1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updating instance_info_cache with network_info: [{"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:28:09 compute-0 nova_compute[251992]: 2025-12-06 08:28:09.468 251996 DEBUG oslo_concurrency.lockutils [req-3c9bfbde-b97b-445c-a558-fb8f7f0ca166 req-333326b9-aa24-46d3-8988-da82a8100c1e 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:28:09 compute-0 nova_compute[251992]: 2025-12-06 08:28:09.469 251996 DEBUG oslo_concurrency.lockutils [req-41e1e22e-ffae-4f69-987c-7057cb0a085c req-1aafa73f-9e8f-4d99-8c63-3d49ed62e547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:28:09 compute-0 nova_compute[251992]: 2025-12-06 08:28:09.469 251996 DEBUG nova.network.neutron [req-41e1e22e-ffae-4f69-987c-7057cb0a085c req-1aafa73f-9e8f-4d99-8c63-3d49ed62e547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Refreshing network info cache for port a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:28:09 compute-0 nova_compute[251992]: 2025-12-06 08:28:09.494 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:09 compute-0 nova_compute[251992]: 2025-12-06 08:28:09.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:09 compute-0 nova_compute[251992]: 2025-12-06 08:28:09.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:28:09 compute-0 nova_compute[251992]: 2025-12-06 08:28:09.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:28:09 compute-0 nova_compute[251992]: 2025-12-06 08:28:09.806 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:28:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:09.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4084: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:28:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1310669518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:28:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1310669518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:28:10 compute-0 nova_compute[251992]: 2025-12-06 08:28:10.772 251996 DEBUG nova.network.neutron [req-41e1e22e-ffae-4f69-987c-7057cb0a085c req-1aafa73f-9e8f-4d99-8c63-3d49ed62e547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updated VIF entry in instance network info cache for port a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:28:10 compute-0 nova_compute[251992]: 2025-12-06 08:28:10.773 251996 DEBUG nova.network.neutron [req-41e1e22e-ffae-4f69-987c-7057cb0a085c req-1aafa73f-9e8f-4d99-8c63-3d49ed62e547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updating instance_info_cache with network_info: [{"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:28:10 compute-0 nova_compute[251992]: 2025-12-06 08:28:10.775 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:10 compute-0 nova_compute[251992]: 2025-12-06 08:28:10.790 251996 DEBUG oslo_concurrency.lockutils [req-41e1e22e-ffae-4f69-987c-7057cb0a085c req-1aafa73f-9e8f-4d99-8c63-3d49ed62e547 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:28:10 compute-0 nova_compute[251992]: 2025-12-06 08:28:10.790 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquired lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:28:10 compute-0 nova_compute[251992]: 2025-12-06 08:28:10.790 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 08:28:10 compute-0 nova_compute[251992]: 2025-12-06 08:28:10.791 251996 DEBUG nova.objects.instance [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ec4b7c40-d407-4a4a-bafa-4d075f29487f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:28:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:10.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:11 compute-0 nova_compute[251992]: 2025-12-06 08:28:11.063 251996 DEBUG nova.compute.manager [req-543fecd6-3eed-4193-9075-40420ace0f08 req-19f097c6-58cc-4688-aac5-190761df96ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received event network-changed-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:28:11 compute-0 nova_compute[251992]: 2025-12-06 08:28:11.064 251996 DEBUG nova.compute.manager [req-543fecd6-3eed-4193-9075-40420ace0f08 req-19f097c6-58cc-4688-aac5-190761df96ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Refreshing instance network info cache due to event network-changed-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:28:11 compute-0 nova_compute[251992]: 2025-12-06 08:28:11.064 251996 DEBUG oslo_concurrency.lockutils [req-543fecd6-3eed-4193-9075-40420ace0f08 req-19f097c6-58cc-4688-aac5-190761df96ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:28:11 compute-0 ceph-mon[74339]: pgmap v4084: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:28:11 compute-0 nova_compute[251992]: 2025-12-06 08:28:11.943 251996 DEBUG nova.network.neutron [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updating instance_info_cache with network_info: [{"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:28:11 compute-0 nova_compute[251992]: 2025-12-06 08:28:11.963 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Releasing lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:28:11 compute-0 nova_compute[251992]: 2025-12-06 08:28:11.964 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 08:28:11 compute-0 nova_compute[251992]: 2025-12-06 08:28:11.965 251996 DEBUG oslo_concurrency.lockutils [req-543fecd6-3eed-4193-9075-40420ace0f08 req-19f097c6-58cc-4688-aac5-190761df96ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:28:11 compute-0 nova_compute[251992]: 2025-12-06 08:28:11.966 251996 DEBUG nova.network.neutron [req-543fecd6-3eed-4193-9075-40420ace0f08 req-19f097c6-58cc-4688-aac5-190761df96ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Refreshing network info cache for port a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:28:11 compute-0 nova_compute[251992]: 2025-12-06 08:28:11.967 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:11.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4085: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:28:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:12.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:28:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:28:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:28:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:28:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:28:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:28:13 compute-0 ceph-mon[74339]: pgmap v4085: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:28:13 compute-0 nova_compute[251992]: 2025-12-06 08:28:13.243 251996 DEBUG nova.network.neutron [req-543fecd6-3eed-4193-9075-40420ace0f08 req-19f097c6-58cc-4688-aac5-190761df96ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updated VIF entry in instance network info cache for port a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:28:13 compute-0 nova_compute[251992]: 2025-12-06 08:28:13.244 251996 DEBUG nova.network.neutron [req-543fecd6-3eed-4193-9075-40420ace0f08 req-19f097c6-58cc-4688-aac5-190761df96ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updating instance_info_cache with network_info: [{"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:28:13 compute-0 nova_compute[251992]: 2025-12-06 08:28:13.267 251996 DEBUG oslo_concurrency.lockutils [req-543fecd6-3eed-4193-9075-40420ace0f08 req-19f097c6-58cc-4688-aac5-190761df96ee 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:28:13 compute-0 nova_compute[251992]: 2025-12-06 08:28:13.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:28:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 62K writes, 230K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 62K writes, 24K syncs, 2.60 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3510 writes, 12K keys, 3510 commit groups, 1.0 writes per commit group, ingest: 13.22 MB, 0.02 MB/s
                                           Interval WAL: 3510 writes, 1479 syncs, 2.37 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:28:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:13.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4086: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:28:14 compute-0 nova_compute[251992]: 2025-12-06 08:28:14.495 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:14.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:15 compute-0 ceph-mon[74339]: pgmap v4086: 305 pgs: 305 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Dec 06 08:28:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:15 compute-0 nova_compute[251992]: 2025-12-06 08:28:15.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:15 compute-0 nova_compute[251992]: 2025-12-06 08:28:15.674 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:15 compute-0 nova_compute[251992]: 2025-12-06 08:28:15.777 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:15 compute-0 sudo[416498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:15 compute-0 sudo[416498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:15 compute-0 sudo[416498]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:15 compute-0 sudo[416523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:28:15 compute-0 sudo[416523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:15 compute-0 sudo[416523]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:15 compute-0 sudo[416548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:15 compute-0 sudo[416548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:15 compute-0 sudo[416548]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:15.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:16 compute-0 sudo[416573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:28:16 compute-0 sudo[416573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4087: 305 pgs: 305 active+clean; 271 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 744 KiB/s wr, 90 op/s
Dec 06 08:28:16 compute-0 ovn_controller[147168]: 2025-12-06T08:28:16Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4e:e0:f8 10.100.0.4
Dec 06 08:28:16 compute-0 ovn_controller[147168]: 2025-12-06T08:28:16Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4e:e0:f8 10.100.0.4
Dec 06 08:28:16 compute-0 sudo[416573]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 08:28:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 08:28:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:28:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:28:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:28:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:28:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:28:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:28:16 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9d85d9a1-9f62-457b-b418-a8da519cb02b does not exist
Dec 06 08:28:16 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ed32084f-d12c-496f-917f-169b5e706e65 does not exist
Dec 06 08:28:16 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9bac973d-f2c6-45d1-8d3d-3d22d75840c8 does not exist
Dec 06 08:28:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:28:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:28:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:28:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:28:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:28:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:28:16 compute-0 sudo[416629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:16 compute-0 sudo[416629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:16 compute-0 sudo[416629]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:16 compute-0 sudo[416654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:28:16 compute-0 sudo[416654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:16 compute-0 sudo[416654]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:16 compute-0 sudo[416679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:16 compute-0 sudo[416679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:16 compute-0 sudo[416679]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:16 compute-0 sudo[416704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:28:16 compute-0 sudo[416704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:16.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:17 compute-0 ceph-mon[74339]: pgmap v4087: 305 pgs: 305 active+clean; 271 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 744 KiB/s wr, 90 op/s
Dec 06 08:28:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 08:28:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:28:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:28:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:28:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:28:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:28:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:28:17 compute-0 podman[416769]: 2025-12-06 08:28:17.208436752 +0000 UTC m=+0.052200740 container create 2b9c737d7e254cdb0e5f473f83d81718f5a44e7f7151723d3d886997610113cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:28:17 compute-0 systemd[1]: Started libpod-conmon-2b9c737d7e254cdb0e5f473f83d81718f5a44e7f7151723d3d886997610113cc.scope.
Dec 06 08:28:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:28:17 compute-0 podman[416769]: 2025-12-06 08:28:17.187574624 +0000 UTC m=+0.031338632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:28:17 compute-0 podman[416769]: 2025-12-06 08:28:17.298896809 +0000 UTC m=+0.142660827 container init 2b9c737d7e254cdb0e5f473f83d81718f5a44e7f7151723d3d886997610113cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:28:17 compute-0 podman[416769]: 2025-12-06 08:28:17.306493336 +0000 UTC m=+0.150257364 container start 2b9c737d7e254cdb0e5f473f83d81718f5a44e7f7151723d3d886997610113cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_franklin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:28:17 compute-0 podman[416769]: 2025-12-06 08:28:17.311771049 +0000 UTC m=+0.155535047 container attach 2b9c737d7e254cdb0e5f473f83d81718f5a44e7f7151723d3d886997610113cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 08:28:17 compute-0 beautiful_franklin[416785]: 167 167
Dec 06 08:28:17 compute-0 systemd[1]: libpod-2b9c737d7e254cdb0e5f473f83d81718f5a44e7f7151723d3d886997610113cc.scope: Deactivated successfully.
Dec 06 08:28:17 compute-0 podman[416769]: 2025-12-06 08:28:17.312809697 +0000 UTC m=+0.156573725 container died 2b9c737d7e254cdb0e5f473f83d81718f5a44e7f7151723d3d886997610113cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_franklin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-941b6c94e4c8a5753fad89ca1c7e8a61b9a3a0098454478351f6297e12a64c30-merged.mount: Deactivated successfully.
Dec 06 08:28:17 compute-0 podman[416769]: 2025-12-06 08:28:17.36150425 +0000 UTC m=+0.205268238 container remove 2b9c737d7e254cdb0e5f473f83d81718f5a44e7f7151723d3d886997610113cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_franklin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:28:17 compute-0 systemd[1]: libpod-conmon-2b9c737d7e254cdb0e5f473f83d81718f5a44e7f7151723d3d886997610113cc.scope: Deactivated successfully.
Dec 06 08:28:17 compute-0 podman[416810]: 2025-12-06 08:28:17.542854066 +0000 UTC m=+0.046571835 container create 6bce95aa0cdf935496a8dfe1e0eef3d87aebe59dc5b17b7a6bfdb840ff984146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_saha, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 08:28:17 compute-0 systemd[1]: Started libpod-conmon-6bce95aa0cdf935496a8dfe1e0eef3d87aebe59dc5b17b7a6bfdb840ff984146.scope.
Dec 06 08:28:17 compute-0 podman[416810]: 2025-12-06 08:28:17.519286776 +0000 UTC m=+0.023004585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:28:17 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732d51b62f91cd4c1b96bd53e1129fd82beeaf7400fdad2f304f2185b4c511f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732d51b62f91cd4c1b96bd53e1129fd82beeaf7400fdad2f304f2185b4c511f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732d51b62f91cd4c1b96bd53e1129fd82beeaf7400fdad2f304f2185b4c511f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732d51b62f91cd4c1b96bd53e1129fd82beeaf7400fdad2f304f2185b4c511f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/732d51b62f91cd4c1b96bd53e1129fd82beeaf7400fdad2f304f2185b4c511f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:17 compute-0 podman[416810]: 2025-12-06 08:28:17.634114406 +0000 UTC m=+0.137832195 container init 6bce95aa0cdf935496a8dfe1e0eef3d87aebe59dc5b17b7a6bfdb840ff984146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_saha, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 08:28:17 compute-0 podman[416810]: 2025-12-06 08:28:17.641931838 +0000 UTC m=+0.145649617 container start 6bce95aa0cdf935496a8dfe1e0eef3d87aebe59dc5b17b7a6bfdb840ff984146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 08:28:17 compute-0 podman[416810]: 2025-12-06 08:28:17.645998648 +0000 UTC m=+0.149716427 container attach 6bce95aa0cdf935496a8dfe1e0eef3d87aebe59dc5b17b7a6bfdb840ff984146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:28:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:17.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4088: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 603 KiB/s rd, 1.1 MiB/s wr, 55 op/s
Dec 06 08:28:18 compute-0 zealous_saha[416826]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:28:18 compute-0 zealous_saha[416826]: --> relative data size: 1.0
Dec 06 08:28:18 compute-0 zealous_saha[416826]: --> All data devices are unavailable
Dec 06 08:28:18 compute-0 systemd[1]: libpod-6bce95aa0cdf935496a8dfe1e0eef3d87aebe59dc5b17b7a6bfdb840ff984146.scope: Deactivated successfully.
Dec 06 08:28:18 compute-0 podman[416810]: 2025-12-06 08:28:18.451004107 +0000 UTC m=+0.954721896 container died 6bce95aa0cdf935496a8dfe1e0eef3d87aebe59dc5b17b7a6bfdb840ff984146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_saha, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:28:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-732d51b62f91cd4c1b96bd53e1129fd82beeaf7400fdad2f304f2185b4c511f0-merged.mount: Deactivated successfully.
Dec 06 08:28:18 compute-0 podman[416810]: 2025-12-06 08:28:18.513738282 +0000 UTC m=+1.017456041 container remove 6bce95aa0cdf935496a8dfe1e0eef3d87aebe59dc5b17b7a6bfdb840ff984146 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_saha, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 08:28:18 compute-0 systemd[1]: libpod-conmon-6bce95aa0cdf935496a8dfe1e0eef3d87aebe59dc5b17b7a6bfdb840ff984146.scope: Deactivated successfully.
Dec 06 08:28:18 compute-0 sudo[416704]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:18 compute-0 sudo[416855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:18 compute-0 sudo[416855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:18 compute-0 sudo[416855]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:18 compute-0 sudo[416880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:28:18 compute-0 sudo[416880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:18 compute-0 sudo[416880]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:28:18
Dec 06 08:28:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:28:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:28:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.log']
Dec 06 08:28:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:28:18 compute-0 sudo[416905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:18 compute-0 sudo[416905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:18 compute-0 sudo[416905]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:18 compute-0 sudo[416930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:28:18 compute-0 sudo[416930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:18.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:19 compute-0 podman[416997]: 2025-12-06 08:28:19.140834228 +0000 UTC m=+0.039418482 container create 21959455ab6f2d90ba40024462dbbc1954afe96aacd5c7c4de318474b2ca9492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 08:28:19 compute-0 systemd[1]: Started libpod-conmon-21959455ab6f2d90ba40024462dbbc1954afe96aacd5c7c4de318474b2ca9492.scope.
Dec 06 08:28:19 compute-0 ceph-mon[74339]: pgmap v4088: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 603 KiB/s rd, 1.1 MiB/s wr, 55 op/s
Dec 06 08:28:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:28:19 compute-0 podman[416997]: 2025-12-06 08:28:19.122868869 +0000 UTC m=+0.021453133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:28:19 compute-0 podman[416997]: 2025-12-06 08:28:19.226511855 +0000 UTC m=+0.125096119 container init 21959455ab6f2d90ba40024462dbbc1954afe96aacd5c7c4de318474b2ca9492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:28:19 compute-0 podman[416997]: 2025-12-06 08:28:19.238433949 +0000 UTC m=+0.137018203 container start 21959455ab6f2d90ba40024462dbbc1954afe96aacd5c7c4de318474b2ca9492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 08:28:19 compute-0 optimistic_moore[417014]: 167 167
Dec 06 08:28:19 compute-0 podman[416997]: 2025-12-06 08:28:19.242620103 +0000 UTC m=+0.141204377 container attach 21959455ab6f2d90ba40024462dbbc1954afe96aacd5c7c4de318474b2ca9492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:28:19 compute-0 systemd[1]: libpod-21959455ab6f2d90ba40024462dbbc1954afe96aacd5c7c4de318474b2ca9492.scope: Deactivated successfully.
Dec 06 08:28:19 compute-0 conmon[417014]: conmon 21959455ab6f2d90ba40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-21959455ab6f2d90ba40024462dbbc1954afe96aacd5c7c4de318474b2ca9492.scope/container/memory.events
Dec 06 08:28:19 compute-0 podman[416997]: 2025-12-06 08:28:19.244260778 +0000 UTC m=+0.142845042 container died 21959455ab6f2d90ba40024462dbbc1954afe96aacd5c7c4de318474b2ca9492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 08:28:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-afdca112b6a5b3bbc133885e46c4b0fc9a7d6cd021e3046cac8958d6dd046ee3-merged.mount: Deactivated successfully.
Dec 06 08:28:19 compute-0 podman[416997]: 2025-12-06 08:28:19.278787145 +0000 UTC m=+0.177371399 container remove 21959455ab6f2d90ba40024462dbbc1954afe96aacd5c7c4de318474b2ca9492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moore, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:28:19 compute-0 systemd[1]: libpod-conmon-21959455ab6f2d90ba40024462dbbc1954afe96aacd5c7c4de318474b2ca9492.scope: Deactivated successfully.
Dec 06 08:28:19 compute-0 podman[417038]: 2025-12-06 08:28:19.479730904 +0000 UTC m=+0.044100699 container create ce9e13e2d454d90d250c841fb52027910578b73656cbc1d8773167e769347a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:28:19 compute-0 nova_compute[251992]: 2025-12-06 08:28:19.498 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:19 compute-0 systemd[1]: Started libpod-conmon-ce9e13e2d454d90d250c841fb52027910578b73656cbc1d8773167e769347a99.scope.
Dec 06 08:28:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d9a542c1262536eb283601ee15d1224cd4182915b443c9d38eabe4758d9d05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:19 compute-0 podman[417038]: 2025-12-06 08:28:19.463168394 +0000 UTC m=+0.027538209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d9a542c1262536eb283601ee15d1224cd4182915b443c9d38eabe4758d9d05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d9a542c1262536eb283601ee15d1224cd4182915b443c9d38eabe4758d9d05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d9a542c1262536eb283601ee15d1224cd4182915b443c9d38eabe4758d9d05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:19 compute-0 podman[417038]: 2025-12-06 08:28:19.574536809 +0000 UTC m=+0.138906624 container init ce9e13e2d454d90d250c841fb52027910578b73656cbc1d8773167e769347a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:28:19 compute-0 podman[417038]: 2025-12-06 08:28:19.583497703 +0000 UTC m=+0.147867518 container start ce9e13e2d454d90d250c841fb52027910578b73656cbc1d8773167e769347a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:28:19 compute-0 podman[417038]: 2025-12-06 08:28:19.586773022 +0000 UTC m=+0.151142817 container attach ce9e13e2d454d90d250c841fb52027910578b73656cbc1d8773167e769347a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:28:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:19.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4089: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 159 KiB/s rd, 1.1 MiB/s wr, 36 op/s
Dec 06 08:28:20 compute-0 gifted_keller[417055]: {
Dec 06 08:28:20 compute-0 gifted_keller[417055]:     "0": [
Dec 06 08:28:20 compute-0 gifted_keller[417055]:         {
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             "devices": [
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "/dev/loop3"
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             ],
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             "lv_name": "ceph_lv0",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             "lv_size": "7511998464",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             "name": "ceph_lv0",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             "tags": {
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "ceph.cluster_name": "ceph",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "ceph.crush_device_class": "",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "ceph.encrypted": "0",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "ceph.osd_id": "0",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "ceph.type": "block",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:                 "ceph.vdo": "0"
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             },
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             "type": "block",
Dec 06 08:28:20 compute-0 gifted_keller[417055]:             "vg_name": "ceph_vg0"
Dec 06 08:28:20 compute-0 gifted_keller[417055]:         }
Dec 06 08:28:20 compute-0 gifted_keller[417055]:     ]
Dec 06 08:28:20 compute-0 gifted_keller[417055]: }
Dec 06 08:28:20 compute-0 systemd[1]: libpod-ce9e13e2d454d90d250c841fb52027910578b73656cbc1d8773167e769347a99.scope: Deactivated successfully.
Dec 06 08:28:20 compute-0 podman[417038]: 2025-12-06 08:28:20.296717038 +0000 UTC m=+0.861086893 container died ce9e13e2d454d90d250c841fb52027910578b73656cbc1d8773167e769347a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-76d9a542c1262536eb283601ee15d1224cd4182915b443c9d38eabe4758d9d05-merged.mount: Deactivated successfully.
Dec 06 08:28:20 compute-0 podman[417038]: 2025-12-06 08:28:20.350683245 +0000 UTC m=+0.915053050 container remove ce9e13e2d454d90d250c841fb52027910578b73656cbc1d8773167e769347a99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 08:28:20 compute-0 systemd[1]: libpod-conmon-ce9e13e2d454d90d250c841fb52027910578b73656cbc1d8773167e769347a99.scope: Deactivated successfully.
Dec 06 08:28:20 compute-0 sudo[416930]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:20 compute-0 sudo[417077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:20 compute-0 sudo[417077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:20 compute-0 sudo[417077]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:20 compute-0 sudo[417102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:28:20 compute-0 sudo[417102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:20 compute-0 sudo[417102]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:20 compute-0 sudo[417127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:20 compute-0 sudo[417127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:20 compute-0 sudo[417127]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:20 compute-0 sudo[417152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:28:20 compute-0 sudo[417152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:20 compute-0 nova_compute[251992]: 2025-12-06 08:28:20.779 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:20.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:21 compute-0 podman[417217]: 2025-12-06 08:28:21.108181733 +0000 UTC m=+0.043583076 container create c4d473be521566e6f3d4931d74b8a6187f03b1a196e954f2e37feac568889530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_varahamihira, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 08:28:21 compute-0 systemd[1]: Started libpod-conmon-c4d473be521566e6f3d4931d74b8a6187f03b1a196e954f2e37feac568889530.scope.
Dec 06 08:28:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:28:21 compute-0 podman[417217]: 2025-12-06 08:28:21.09075446 +0000 UTC m=+0.026155803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:28:21 compute-0 podman[417217]: 2025-12-06 08:28:21.191526317 +0000 UTC m=+0.126927750 container init c4d473be521566e6f3d4931d74b8a6187f03b1a196e954f2e37feac568889530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_varahamihira, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 08:28:21 compute-0 podman[417217]: 2025-12-06 08:28:21.197832828 +0000 UTC m=+0.133234151 container start c4d473be521566e6f3d4931d74b8a6187f03b1a196e954f2e37feac568889530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_varahamihira, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:28:21 compute-0 podman[417217]: 2025-12-06 08:28:21.201912379 +0000 UTC m=+0.137313802 container attach c4d473be521566e6f3d4931d74b8a6187f03b1a196e954f2e37feac568889530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 08:28:21 compute-0 cranky_varahamihira[417234]: 167 167
Dec 06 08:28:21 compute-0 systemd[1]: libpod-c4d473be521566e6f3d4931d74b8a6187f03b1a196e954f2e37feac568889530.scope: Deactivated successfully.
Dec 06 08:28:21 compute-0 conmon[417234]: conmon c4d473be521566e6f3d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4d473be521566e6f3d4931d74b8a6187f03b1a196e954f2e37feac568889530.scope/container/memory.events
Dec 06 08:28:21 compute-0 podman[417217]: 2025-12-06 08:28:21.20634929 +0000 UTC m=+0.141750623 container died c4d473be521566e6f3d4931d74b8a6187f03b1a196e954f2e37feac568889530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_varahamihira, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:28:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ee23e36774daf978969a3ad04ccff586c3c3bd3ad8e5c0eb38131d4cc45cf0b-merged.mount: Deactivated successfully.
Dec 06 08:28:21 compute-0 podman[417217]: 2025-12-06 08:28:21.249512442 +0000 UTC m=+0.184913775 container remove c4d473be521566e6f3d4931d74b8a6187f03b1a196e954f2e37feac568889530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Dec 06 08:28:21 compute-0 ceph-mon[74339]: pgmap v4089: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 159 KiB/s rd, 1.1 MiB/s wr, 36 op/s
Dec 06 08:28:21 compute-0 systemd[1]: libpod-conmon-c4d473be521566e6f3d4931d74b8a6187f03b1a196e954f2e37feac568889530.scope: Deactivated successfully.
Dec 06 08:28:21 compute-0 podman[417258]: 2025-12-06 08:28:21.41179463 +0000 UTC m=+0.042757492 container create 4f11ab5645bdba6f9d5af47723dc2cb152548865a57a7d936401678046df30af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldwasser, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 08:28:21 compute-0 systemd[1]: Started libpod-conmon-4f11ab5645bdba6f9d5af47723dc2cb152548865a57a7d936401678046df30af.scope.
Dec 06 08:28:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9304973b3c5ad2b647d46d51586d4487a90e6a94a1d052d4a17d2253c38f3783/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9304973b3c5ad2b647d46d51586d4487a90e6a94a1d052d4a17d2253c38f3783/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9304973b3c5ad2b647d46d51586d4487a90e6a94a1d052d4a17d2253c38f3783/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9304973b3c5ad2b647d46d51586d4487a90e6a94a1d052d4a17d2253c38f3783/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:28:21 compute-0 podman[417258]: 2025-12-06 08:28:21.482734048 +0000 UTC m=+0.113696920 container init 4f11ab5645bdba6f9d5af47723dc2cb152548865a57a7d936401678046df30af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldwasser, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec 06 08:28:21 compute-0 podman[417258]: 2025-12-06 08:28:21.394780888 +0000 UTC m=+0.025743770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:28:21 compute-0 podman[417258]: 2025-12-06 08:28:21.489696927 +0000 UTC m=+0.120659779 container start 4f11ab5645bdba6f9d5af47723dc2cb152548865a57a7d936401678046df30af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 08:28:21 compute-0 podman[417258]: 2025-12-06 08:28:21.493098679 +0000 UTC m=+0.124061651 container attach 4f11ab5645bdba6f9d5af47723dc2cb152548865a57a7d936401678046df30af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldwasser, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 08:28:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:22.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4090: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.281 251996 DEBUG nova.compute.manager [req-e39263a0-8fbe-4ff4-a2a3-d4c8b23bd65f req-c5b93875-bcfc-4591-851d-54eb70748c4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received event network-changed-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.281 251996 DEBUG nova.compute.manager [req-e39263a0-8fbe-4ff4-a2a3-d4c8b23bd65f req-c5b93875-bcfc-4591-851d-54eb70748c4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Refreshing instance network info cache due to event network-changed-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.281 251996 DEBUG oslo_concurrency.lockutils [req-e39263a0-8fbe-4ff4-a2a3-d4c8b23bd65f req-c5b93875-bcfc-4591-851d-54eb70748c4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.282 251996 DEBUG oslo_concurrency.lockutils [req-e39263a0-8fbe-4ff4-a2a3-d4c8b23bd65f req-c5b93875-bcfc-4591-851d-54eb70748c4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquired lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.282 251996 DEBUG nova.network.neutron [req-e39263a0-8fbe-4ff4-a2a3-d4c8b23bd65f req-c5b93875-bcfc-4591-851d-54eb70748c4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Refreshing network info cache for port a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 08:28:22 compute-0 fervent_goldwasser[417275]: {
Dec 06 08:28:22 compute-0 fervent_goldwasser[417275]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:28:22 compute-0 fervent_goldwasser[417275]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:28:22 compute-0 fervent_goldwasser[417275]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:28:22 compute-0 fervent_goldwasser[417275]:         "osd_id": 0,
Dec 06 08:28:22 compute-0 fervent_goldwasser[417275]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:28:22 compute-0 fervent_goldwasser[417275]:         "type": "bluestore"
Dec 06 08:28:22 compute-0 fervent_goldwasser[417275]:     }
Dec 06 08:28:22 compute-0 fervent_goldwasser[417275]: }
Dec 06 08:28:22 compute-0 podman[417258]: 2025-12-06 08:28:22.399899954 +0000 UTC m=+1.030862836 container died 4f11ab5645bdba6f9d5af47723dc2cb152548865a57a7d936401678046df30af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 08:28:22 compute-0 systemd[1]: libpod-4f11ab5645bdba6f9d5af47723dc2cb152548865a57a7d936401678046df30af.scope: Deactivated successfully.
Dec 06 08:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9304973b3c5ad2b647d46d51586d4487a90e6a94a1d052d4a17d2253c38f3783-merged.mount: Deactivated successfully.
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.433 251996 DEBUG oslo_concurrency.lockutils [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Acquiring lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.434 251996 DEBUG oslo_concurrency.lockutils [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.434 251996 DEBUG oslo_concurrency.lockutils [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Acquiring lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.435 251996 DEBUG oslo_concurrency.lockutils [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.435 251996 DEBUG oslo_concurrency.lockutils [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.436 251996 INFO nova.compute.manager [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Terminating instance
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.438 251996 DEBUG nova.compute.manager [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 08:28:22 compute-0 podman[417258]: 2025-12-06 08:28:22.459677637 +0000 UTC m=+1.090640519 container remove 4f11ab5645bdba6f9d5af47723dc2cb152548865a57a7d936401678046df30af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 08:28:22 compute-0 systemd[1]: libpod-conmon-4f11ab5645bdba6f9d5af47723dc2cb152548865a57a7d936401678046df30af.scope: Deactivated successfully.
Dec 06 08:28:22 compute-0 sudo[417152]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:28:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 08:28:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:28:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:28:22 compute-0 kernel: tapa9c2ca9d-21 (unregistering): left promiscuous mode
Dec 06 08:28:22 compute-0 NetworkManager[48965]: <info>  [1765009702.5972] device (tapa9c2ca9d-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 06 08:28:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:28:22 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev bb28da3b-8a44-489d-85b6-6c1f2e63efd8 does not exist
Dec 06 08:28:22 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 718f5ee3-f924-4aad-ae0d-fdbf7b99f589 does not exist
Dec 06 08:28:22 compute-0 ovn_controller[147168]: 2025-12-06T08:28:22Z|00847|binding|INFO|Releasing lport a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c from this chassis (sb_readonly=0)
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.605 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:22 compute-0 ovn_controller[147168]: 2025-12-06T08:28:22Z|00848|binding|INFO|Setting lport a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c down in Southbound
Dec 06 08:28:22 compute-0 ovn_controller[147168]: 2025-12-06T08:28:22Z|00849|binding|INFO|Removing iface tapa9c2ca9d-21 ovn-installed in OVS
Dec 06 08:28:22 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8150116f-33be-4fd2-b2b8-ab15ece156f1 does not exist
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.607 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.613 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:e0:f8 10.100.0.4'], port_security=['fa:16:3e:4e:e0:f8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ec4b7c40-d407-4a4a-bafa-4d075f29487f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '55d0236410514dd9ad6cdb3e1a5d0ee6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '50af7cc2-125d-439a-810c-229d0e5b22a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d51e676-9fc7-404c-a223-5b590dbd406b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>], logical_port=a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f70aabed700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.617 158118 INFO neutron.agent.ovn.metadata.agent [-] Port a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c in datapath d45a42e5-9dac-4ea0-bad8-cca7babbbcbb unbound from our chassis
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.619 158118 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d45a42e5-9dac-4ea0-bad8-cca7babbbcbb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.623 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[012a02fa-5b4a-4814-8868-900deb7b4b74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.625 158118 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb namespace which is not needed anymore
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.646 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:28:22 compute-0 sudo[417312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:22 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000e0.scope: Deactivated successfully.
Dec 06 08:28:22 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000e0.scope: Consumed 14.067s CPU time.
Dec 06 08:28:22 compute-0 sudo[417312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:22 compute-0 systemd-machined[212986]: Machine qemu-99-instance-000000e0 terminated.
Dec 06 08:28:22 compute-0 sudo[417312]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:22 compute-0 sudo[417351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:28:22 compute-0 sudo[417351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:22 compute-0 sudo[417351]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:22 compute-0 neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb[416316]: [NOTICE]   (416330) : haproxy version is 2.8.14-c23fe91
Dec 06 08:28:22 compute-0 neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb[416316]: [NOTICE]   (416330) : path to executable is /usr/sbin/haproxy
Dec 06 08:28:22 compute-0 neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb[416316]: [WARNING]  (416330) : Exiting Master process...
Dec 06 08:28:22 compute-0 neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb[416316]: [WARNING]  (416330) : Exiting Master process...
Dec 06 08:28:22 compute-0 neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb[416316]: [ALERT]    (416330) : Current worker (416339) exited with code 143 (Terminated)
Dec 06 08:28:22 compute-0 neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb[416316]: [WARNING]  (416330) : All workers exited. Exiting... (0)
Dec 06 08:28:22 compute-0 systemd[1]: libpod-5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6.scope: Deactivated successfully.
Dec 06 08:28:22 compute-0 podman[417380]: 2025-12-06 08:28:22.76420255 +0000 UTC m=+0.042056123 container died 5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 08:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6-userdata-shm.mount: Deactivated successfully.
Dec 06 08:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9231a6281b3609ca6998013313166b02f67df33fcdef31dcd3ec67287ad40168-merged.mount: Deactivated successfully.
Dec 06 08:28:22 compute-0 podman[417380]: 2025-12-06 08:28:22.800701222 +0000 UTC m=+0.078554755 container cleanup 5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:28:22 compute-0 systemd[1]: libpod-conmon-5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6.scope: Deactivated successfully.
Dec 06 08:28:22 compute-0 podman[417415]: 2025-12-06 08:28:22.857085454 +0000 UTC m=+0.038040644 container remove 5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:28:22 compute-0 NetworkManager[48965]: <info>  [1765009702.8597] manager: (tapa9c2ca9d-21): new Tun device (/org/freedesktop/NetworkManager/Devices/408)
Dec 06 08:28:22 compute-0 systemd-udevd[417319]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.864 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2df2f9-d3f8-41b0-af58-52491de5611c]: (4, ('Sat Dec  6 08:28:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb (5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6)\n5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6\nSat Dec  6 08:28:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb (5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6)\n5093049b25b90ee178508e94392a08f1a87104ffd55181378a81ef7d732622a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.865 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[c69cc583-f037-4013-abcf-2249440fcf45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.868 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd45a42e5-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.871 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:22 compute-0 kernel: tapd45a42e5-90: left promiscuous mode
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.883 251996 INFO nova.virt.libvirt.driver [-] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Instance destroyed successfully.
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.884 251996 DEBUG nova.objects.instance [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lazy-loading 'resources' on Instance uuid ec4b7c40-d407-4a4a-bafa-4d075f29487f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.888 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.890 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a48ec5-c9fb-43e9-aa4b-60ed0ebd7d81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.904 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[55fc5cb3-e9cd-4d82-aaea-bab57875c5be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.905 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3dc9a6-9995-4bdb-ba03-a03327f3e335]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.910 251996 DEBUG nova.virt.libvirt.vif [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T08:27:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1417535578',display_name='tempest-TestVolumeBackupRestore-server-1417535578',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1417535578',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGi74QcvQx1TJ5bn/MOn8g36uTTcLfmxtJnjTjNKqhzlHTumMcvQfyd1LcmjxLXLqetQHWPcDq6WT4b+Ge0BjCBPB+nREeeso33JF9FsmzpFuSdurxhWIDKNHnoRxWIhUQ==',key_name='tempest-TestVolumeBackupRestore-367977014',keypairs=<?>,launch_index=0,launched_at=2025-12-06T08:28:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='55d0236410514dd9ad6cdb3e1a5d0ee6',ramdisk_id='',reservation_id='r-iegmkqc9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-1150314823',owner_user_name='tempest-TestVolumeBackupRestore-1150314823-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T08:28:03Z,user_data=None,user_id='00e2fea2f8f54b1c9af85553820566a6',uuid=ec4b7c40-d407-4a4a-bafa-4d075f29487f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.911 251996 DEBUG nova.network.os_vif_util [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Converting VIF {"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.912 251996 DEBUG nova.network.os_vif_util [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:e0:f8,bridge_name='br-int',has_traffic_filtering=True,id=a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c,network=Network(d45a42e5-9dac-4ea0-bad8-cca7babbbcbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9c2ca9d-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.912 251996 DEBUG os_vif [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:e0:f8,bridge_name='br-int',has_traffic_filtering=True,id=a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c,network=Network(d45a42e5-9dac-4ea0-bad8-cca7babbbcbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9c2ca9d-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.914 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.914 251996 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9c2ca9d-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.915 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.917 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.920 251996 INFO os_vif [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:e0:f8,bridge_name='br-int',has_traffic_filtering=True,id=a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c,network=Network(d45a42e5-9dac-4ea0-bad8-cca7babbbcbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9c2ca9d-21')
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.923 260599 DEBUG oslo.privsep.daemon [-] privsep: reply[09bf166f-af3c-4b82-89b9-93491c2f1f13]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 995486, 'reachable_time': 28686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417438, 'error': None, 'target': 'ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.927 158260 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d45a42e5-9dac-4ea0-bad8-cca7babbbcbb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 08:28:22 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:28:22.927 158260 DEBUG oslo.privsep.daemon [-] privsep: reply[9a76ba6c-5199-4f99-890c-e8ee63db509b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 08:28:22 compute-0 systemd[1]: run-netns-ovnmeta\x2dd45a42e5\x2d9dac\x2d4ea0\x2dbad8\x2dcca7babbbcbb.mount: Deactivated successfully.
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.972 251996 DEBUG nova.compute.manager [req-684a5609-95cc-4924-b780-6a0c44e15198 req-7aed27f7-4311-46d7-887d-2ce62c1186a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received event network-vif-unplugged-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.972 251996 DEBUG oslo_concurrency.lockutils [req-684a5609-95cc-4924-b780-6a0c44e15198 req-7aed27f7-4311-46d7-887d-2ce62c1186a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.973 251996 DEBUG oslo_concurrency.lockutils [req-684a5609-95cc-4924-b780-6a0c44e15198 req-7aed27f7-4311-46d7-887d-2ce62c1186a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.973 251996 DEBUG oslo_concurrency.lockutils [req-684a5609-95cc-4924-b780-6a0c44e15198 req-7aed27f7-4311-46d7-887d-2ce62c1186a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.973 251996 DEBUG nova.compute.manager [req-684a5609-95cc-4924-b780-6a0c44e15198 req-7aed27f7-4311-46d7-887d-2ce62c1186a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] No waiting events found dispatching network-vif-unplugged-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:28:22 compute-0 nova_compute[251992]: 2025-12-06 08:28:22.973 251996 DEBUG nova.compute.manager [req-684a5609-95cc-4924-b780-6a0c44e15198 req-7aed27f7-4311-46d7-887d-2ce62c1186a5 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received event network-vif-unplugged-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 08:28:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:22.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:23 compute-0 nova_compute[251992]: 2025-12-06 08:28:23.104 251996 INFO nova.virt.libvirt.driver [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Deleting instance files /var/lib/nova/instances/ec4b7c40-d407-4a4a-bafa-4d075f29487f_del
Dec 06 08:28:23 compute-0 nova_compute[251992]: 2025-12-06 08:28:23.105 251996 INFO nova.virt.libvirt.driver [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Deletion of /var/lib/nova/instances/ec4b7c40-d407-4a4a-bafa-4d075f29487f_del complete
Dec 06 08:28:23 compute-0 nova_compute[251992]: 2025-12-06 08:28:23.201 251996 INFO nova.compute.manager [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Took 0.76 seconds to destroy the instance on the hypervisor.
Dec 06 08:28:23 compute-0 nova_compute[251992]: 2025-12-06 08:28:23.203 251996 DEBUG oslo.service.loopingcall [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 08:28:23 compute-0 nova_compute[251992]: 2025-12-06 08:28:23.204 251996 DEBUG nova.compute.manager [-] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 08:28:23 compute-0 nova_compute[251992]: 2025-12-06 08:28:23.204 251996 DEBUG nova.network.neutron [-] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 08:28:23 compute-0 ceph-mon[74339]: pgmap v4090: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Dec 06 08:28:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:28:23 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:28:23 compute-0 nova_compute[251992]: 2025-12-06 08:28:23.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:28:23 compute-0 nova_compute[251992]: 2025-12-06 08:28:23.777 251996 DEBUG nova.network.neutron [req-e39263a0-8fbe-4ff4-a2a3-d4c8b23bd65f req-c5b93875-bcfc-4591-851d-54eb70748c4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updated VIF entry in instance network info cache for port a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 08:28:23 compute-0 nova_compute[251992]: 2025-12-06 08:28:23.778 251996 DEBUG nova.network.neutron [req-e39263a0-8fbe-4ff4-a2a3-d4c8b23bd65f req-c5b93875-bcfc-4591-851d-54eb70748c4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updating instance_info_cache with network_info: [{"id": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "address": "fa:16:3e:4e:e0:f8", "network": {"id": "d45a42e5-9dac-4ea0-bad8-cca7babbbcbb", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1249179696-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "55d0236410514dd9ad6cdb3e1a5d0ee6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9c2ca9d-21", "ovs_interfaceid": "a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:28:23 compute-0 nova_compute[251992]: 2025-12-06 08:28:23.806 251996 DEBUG oslo_concurrency.lockutils [req-e39263a0-8fbe-4ff4-a2a3-d4c8b23bd65f req-c5b93875-bcfc-4591-851d-54eb70748c4a 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Releasing lock "refresh_cache-ec4b7c40-d407-4a4a-bafa-4d075f29487f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 08:28:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:28:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:28:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:28:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:28:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:28:23 compute-0 nova_compute[251992]: 2025-12-06 08:28:23.985 251996 DEBUG nova.network.neutron [-] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 08:28:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:24.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:24 compute-0 nova_compute[251992]: 2025-12-06 08:28:24.007 251996 INFO nova.compute.manager [-] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Took 0.80 seconds to deallocate network for instance.
Dec 06 08:28:24 compute-0 nova_compute[251992]: 2025-12-06 08:28:24.083 251996 DEBUG nova.compute.manager [req-cf15c736-4052-441e-8e57-f664df13eef7 req-69a754f9-9777-4624-8bf8-bd4250c4fab4 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received event network-vif-deleted-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:28:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4091: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Dec 06 08:28:24 compute-0 nova_compute[251992]: 2025-12-06 08:28:24.217 251996 INFO nova.compute.manager [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Took 0.21 seconds to detach 1 volumes for instance.
Dec 06 08:28:24 compute-0 nova_compute[251992]: 2025-12-06 08:28:24.267 251996 DEBUG oslo_concurrency.lockutils [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:24 compute-0 nova_compute[251992]: 2025-12-06 08:28:24.268 251996 DEBUG oslo_concurrency.lockutils [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:24 compute-0 nova_compute[251992]: 2025-12-06 08:28:24.311 251996 DEBUG oslo_concurrency.processutils [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:28:24 compute-0 nova_compute[251992]: 2025-12-06 08:28:24.500 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:28:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/666306502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:24 compute-0 nova_compute[251992]: 2025-12-06 08:28:24.742 251996 DEBUG oslo_concurrency.processutils [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:28:24 compute-0 nova_compute[251992]: 2025-12-06 08:28:24.751 251996 DEBUG nova.compute.provider_tree [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:28:24 compute-0 nova_compute[251992]: 2025-12-06 08:28:24.768 251996 DEBUG nova.scheduler.client.report [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:28:24 compute-0 nova_compute[251992]: 2025-12-06 08:28:24.843 251996 DEBUG oslo_concurrency.lockutils [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:24 compute-0 ceph-mon[74339]: pgmap v4091: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Dec 06 08:28:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/666306502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:24.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:25 compute-0 nova_compute[251992]: 2025-12-06 08:28:25.050 251996 INFO nova.scheduler.client.report [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Deleted allocations for instance ec4b7c40-d407-4a4a-bafa-4d075f29487f
Dec 06 08:28:25 compute-0 nova_compute[251992]: 2025-12-06 08:28:25.138 251996 DEBUG nova.compute.manager [req-214456e3-d4f1-43a1-b24b-65b21a1f19c9 req-c6655a85-fde7-4f7d-a0f3-a69cab220091 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received event network-vif-plugged-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 08:28:25 compute-0 nova_compute[251992]: 2025-12-06 08:28:25.138 251996 DEBUG oslo_concurrency.lockutils [req-214456e3-d4f1-43a1-b24b-65b21a1f19c9 req-c6655a85-fde7-4f7d-a0f3-a69cab220091 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Acquiring lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:28:25 compute-0 nova_compute[251992]: 2025-12-06 08:28:25.139 251996 DEBUG oslo_concurrency.lockutils [req-214456e3-d4f1-43a1-b24b-65b21a1f19c9 req-c6655a85-fde7-4f7d-a0f3-a69cab220091 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:28:25 compute-0 nova_compute[251992]: 2025-12-06 08:28:25.139 251996 DEBUG oslo_concurrency.lockutils [req-214456e3-d4f1-43a1-b24b-65b21a1f19c9 req-c6655a85-fde7-4f7d-a0f3-a69cab220091 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:25 compute-0 nova_compute[251992]: 2025-12-06 08:28:25.139 251996 DEBUG nova.compute.manager [req-214456e3-d4f1-43a1-b24b-65b21a1f19c9 req-c6655a85-fde7-4f7d-a0f3-a69cab220091 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] No waiting events found dispatching network-vif-plugged-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 08:28:25 compute-0 nova_compute[251992]: 2025-12-06 08:28:25.139 251996 WARNING nova.compute.manager [req-214456e3-d4f1-43a1-b24b-65b21a1f19c9 req-c6655a85-fde7-4f7d-a0f3-a69cab220091 0c7273e6d9c9413aaf1939a555f98d43 6363f866b688477296a48bc0eb3789ff - - default default] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Received unexpected event network-vif-plugged-a9c2ca9d-211d-4dcd-bf35-3c13ace83f9c for instance with vm_state deleted and task_state None.
Dec 06 08:28:25 compute-0 nova_compute[251992]: 2025-12-06 08:28:25.143 251996 DEBUG oslo_concurrency.lockutils [None req-b700d028-3cd7-4d3d-9ff2-cef5d234fe8f 00e2fea2f8f54b1c9af85553820566a6 55d0236410514dd9ad6cdb3e1a5d0ee6 - - default default] Lock "ec4b7c40-d407-4a4a-bafa-4d075f29487f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:28:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:26.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4092: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Dec 06 08:28:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Dec 06 08:28:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Dec 06 08:28:26 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005319384829238787 of space, bias 1.0, pg target 1.595815448771636 quantized to 32 (current 32)
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2957962919342081 quantized to 32 (current 32)
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:28:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Dec 06 08:28:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:26.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:28:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:28:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:28:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:28:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:28:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Dec 06 08:28:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Dec 06 08:28:27 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Dec 06 08:28:27 compute-0 ceph-mon[74339]: pgmap v4092: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Dec 06 08:28:27 compute-0 ceph-mon[74339]: osdmap e435: 3 total, 3 up, 3 in
Dec 06 08:28:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/536961981' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:28:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/536961981' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:28:27 compute-0 nova_compute[251992]: 2025-12-06 08:28:27.915 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:28.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:28 compute-0 sudo[417483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:28 compute-0 sudo[417483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:28 compute-0 sudo[417483]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:28 compute-0 sudo[417508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:28 compute-0 sudo[417508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:28 compute-0 sudo[417508]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4095: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 272 KiB/s rd, 1.6 MiB/s wr, 72 op/s
Dec 06 08:28:28 compute-0 ceph-mon[74339]: osdmap e436: 3 total, 3 up, 3 in
Dec 06 08:28:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1100284927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:28:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1100284927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:28:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:29.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:29 compute-0 nova_compute[251992]: 2025-12-06 08:28:29.501 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:29 compute-0 ceph-mon[74339]: pgmap v4095: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 272 KiB/s rd, 1.6 MiB/s wr, 72 op/s
Dec 06 08:28:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:30.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4096: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 20 KiB/s wr, 20 op/s
Dec 06 08:28:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:31.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:31 compute-0 nova_compute[251992]: 2025-12-06 08:28:31.260 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:31 compute-0 nova_compute[251992]: 2025-12-06 08:28:31.410 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:31 compute-0 ceph-mon[74339]: pgmap v4096: 305 pgs: 305 active+clean; 302 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 20 KiB/s wr, 20 op/s
Dec 06 08:28:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:32.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4097: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 81 KiB/s rd, 24 KiB/s wr, 115 op/s
Dec 06 08:28:32 compute-0 nova_compute[251992]: 2025-12-06 08:28:32.917 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:33.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:33 compute-0 podman[417538]: 2025-12-06 08:28:33.459070078 +0000 UTC m=+0.106642438 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:28:33 compute-0 ceph-mon[74339]: pgmap v4097: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 81 KiB/s rd, 24 KiB/s wr, 115 op/s
Dec 06 08:28:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:34.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4098: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 95 op/s
Dec 06 08:28:34 compute-0 nova_compute[251992]: 2025-12-06 08:28:34.503 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:34 compute-0 ceph-mon[74339]: pgmap v4098: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 65 KiB/s rd, 4.5 KiB/s wr, 95 op/s
Dec 06 08:28:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:35.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e436 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Dec 06 08:28:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Dec 06 08:28:35 compute-0 ceph-mon[74339]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Dec 06 08:28:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:36.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4100: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 4.2 KiB/s wr, 88 op/s
Dec 06 08:28:36 compute-0 ceph-mon[74339]: osdmap e437: 3 total, 3 up, 3 in
Dec 06 08:28:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:37.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:37 compute-0 ceph-mon[74339]: pgmap v4100: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 4.2 KiB/s wr, 88 op/s
Dec 06 08:28:37 compute-0 nova_compute[251992]: 2025-12-06 08:28:37.880 251996 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765009702.8796906, ec4b7c40-d407-4a4a-bafa-4d075f29487f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 08:28:37 compute-0 nova_compute[251992]: 2025-12-06 08:28:37.881 251996 INFO nova.compute.manager [-] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] VM Stopped (Lifecycle Event)
Dec 06 08:28:37 compute-0 nova_compute[251992]: 2025-12-06 08:28:37.901 251996 DEBUG nova.compute.manager [None req-49a65e2d-2372-41dd-b2f4-b4ba1fe4ad46 - - - - - -] [instance: ec4b7c40-d407-4a4a-bafa-4d075f29487f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 08:28:37 compute-0 nova_compute[251992]: 2025-12-06 08:28:37.919 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:38.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4101: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 76 op/s
Dec 06 08:28:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:39.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:39 compute-0 nova_compute[251992]: 2025-12-06 08:28:39.505 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:39 compute-0 ceph-mon[74339]: pgmap v4101: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 76 op/s
Dec 06 08:28:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:40.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4102: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 76 op/s
Dec 06 08:28:40 compute-0 podman[417568]: 2025-12-06 08:28:40.429091698 +0000 UTC m=+0.080657793 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 08:28:40 compute-0 podman[417569]: 2025-12-06 08:28:40.4291545 +0000 UTC m=+0.075476712 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 06 08:28:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:41.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:41 compute-0 ceph-mon[74339]: pgmap v4102: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 3.6 KiB/s wr, 76 op/s
Dec 06 08:28:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:42.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4103: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:42 compute-0 ceph-mon[74339]: pgmap v4103: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:42 compute-0 nova_compute[251992]: 2025-12-06 08:28:42.920 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:43.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:28:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:28:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:28:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:28:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:28:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:28:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:44.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4104: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:44 compute-0 nova_compute[251992]: 2025-12-06 08:28:44.507 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:45.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:45 compute-0 ceph-mon[74339]: pgmap v4104: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:46.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4105: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:47.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:47 compute-0 ceph-mon[74339]: pgmap v4105: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:47 compute-0 nova_compute[251992]: 2025-12-06 08:28:47.922 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:48.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4106: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:48 compute-0 sudo[417609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:48 compute-0 sudo[417609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:48 compute-0 sudo[417609]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:48 compute-0 sudo[417634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:28:48 compute-0 sudo[417634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:28:48 compute-0 sudo[417634]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:49.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:49 compute-0 ceph-mon[74339]: pgmap v4106: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/642553328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:49 compute-0 nova_compute[251992]: 2025-12-06 08:28:49.551 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:50.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4107: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/478041238' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:28:50 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:51.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:51 compute-0 ceph-mon[74339]: pgmap v4107: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:28:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:52.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:28:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4108: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:52 compute-0 nova_compute[251992]: 2025-12-06 08:28:52.924 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:53.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:53 compute-0 ceph-mon[74339]: pgmap v4108: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.493760) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733493883, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 2162, "num_deletes": 254, "total_data_size": 3851716, "memory_usage": 3916176, "flush_reason": "Manual Compaction"}
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733521024, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 3771841, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82308, "largest_seqno": 84469, "table_properties": {"data_size": 3762129, "index_size": 6141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20097, "raw_average_key_size": 20, "raw_value_size": 3742679, "raw_average_value_size": 3819, "num_data_blocks": 269, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009522, "oldest_key_time": 1765009522, "file_creation_time": 1765009733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 27296 microseconds, and 10632 cpu microseconds.
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.521074) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 3771841 bytes OK
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.521095) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.523212) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.523226) EVENT_LOG_v1 {"time_micros": 1765009733523222, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.523244) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 3842940, prev total WAL file size 3842940, number of live WAL files 2.
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.524407) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(3683KB)], [188(10MB)]
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733524484, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 15072262, "oldest_snapshot_seqno": -1}
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 11675 keys, 13164328 bytes, temperature: kUnknown
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733645057, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 13164328, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13091728, "index_size": 42279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29253, "raw_key_size": 309550, "raw_average_key_size": 26, "raw_value_size": 12890361, "raw_average_value_size": 1104, "num_data_blocks": 1592, "num_entries": 11675, "num_filter_entries": 11675, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765009733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.645374) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 13164328 bytes
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.646631) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.9 rd, 109.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.8 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 12198, records dropped: 523 output_compression: NoCompression
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.646649) EVENT_LOG_v1 {"time_micros": 1765009733646640, "job": 118, "event": "compaction_finished", "compaction_time_micros": 120702, "compaction_time_cpu_micros": 52165, "output_level": 6, "num_output_files": 1, "total_output_size": 13164328, "num_input_records": 12198, "num_output_records": 11675, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733647286, "job": 118, "event": "table_file_deletion", "file_number": 190}
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009733649247, "job": 118, "event": "table_file_deletion", "file_number": 188}
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.524274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.649328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.649334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.649336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.649337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:28:53 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:28:53.649339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:28:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:54.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4109: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:54 compute-0 nova_compute[251992]: 2025-12-06 08:28:54.553 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:55.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:28:55 compute-0 ceph-mon[74339]: pgmap v4109: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:56.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4110: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:28:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:57.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:28:57 compute-0 ceph-mon[74339]: pgmap v4110: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:57 compute-0 nova_compute[251992]: 2025-12-06 08:28:57.924 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:28:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:28:58.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4111: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:28:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:28:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:28:59.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:28:59 compute-0 ceph-mon[74339]: pgmap v4111: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:28:59 compute-0 nova_compute[251992]: 2025-12-06 08:28:59.556 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:00.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4112: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:01.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:01 compute-0 ceph-mon[74339]: pgmap v4112: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:02.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4113: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:02 compute-0 nova_compute[251992]: 2025-12-06 08:29:02.925 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:03.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:03 compute-0 ceph-mon[74339]: pgmap v4113: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:03 compute-0 nova_compute[251992]: 2025-12-06 08:29:03.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:29:03.904 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:29:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:29:03.904 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:29:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:29:03.905 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:29:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:04.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4114: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:04 compute-0 podman[417667]: 2025-12-06 08:29:04.475830765 +0000 UTC m=+0.121461782 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:29:04 compute-0 nova_compute[251992]: 2025-12-06 08:29:04.558 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:04 compute-0 nova_compute[251992]: 2025-12-06 08:29:04.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:04 compute-0 nova_compute[251992]: 2025-12-06 08:29:04.717 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:29:04 compute-0 nova_compute[251992]: 2025-12-06 08:29:04.718 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:29:04 compute-0 nova_compute[251992]: 2025-12-06 08:29:04.719 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:29:04 compute-0 nova_compute[251992]: 2025-12-06 08:29:04.720 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:29:04 compute-0 nova_compute[251992]: 2025-12-06 08:29:04.721 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:29:04 compute-0 ovn_controller[147168]: 2025-12-06T08:29:04Z|00850|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 08:29:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:05.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:29:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1708173048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:05 compute-0 nova_compute[251992]: 2025-12-06 08:29:05.238 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:29:05 compute-0 nova_compute[251992]: 2025-12-06 08:29:05.437 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:29:05 compute-0 nova_compute[251992]: 2025-12-06 08:29:05.438 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4113MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:29:05 compute-0 nova_compute[251992]: 2025-12-06 08:29:05.438 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:29:05 compute-0 nova_compute[251992]: 2025-12-06 08:29:05.439 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:29:05 compute-0 ceph-mon[74339]: pgmap v4114: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1708173048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:05 compute-0 nova_compute[251992]: 2025-12-06 08:29:05.507 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:29:05 compute-0 nova_compute[251992]: 2025-12-06 08:29:05.508 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:29:05 compute-0 nova_compute[251992]: 2025-12-06 08:29:05.522 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:29:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:29:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3976858743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:06 compute-0 nova_compute[251992]: 2025-12-06 08:29:06.021 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:29:06 compute-0 nova_compute[251992]: 2025-12-06 08:29:06.029 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:29:06 compute-0 nova_compute[251992]: 2025-12-06 08:29:06.047 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:29:06 compute-0 nova_compute[251992]: 2025-12-06 08:29:06.069 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:29:06 compute-0 nova_compute[251992]: 2025-12-06 08:29:06.070 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:29:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:06.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4115: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3976858743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:29:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:07.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:29:07 compute-0 ceph-mon[74339]: pgmap v4115: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:07 compute-0 nova_compute[251992]: 2025-12-06 08:29:07.927 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:08.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4116: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:08 compute-0 sudo[417741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:08 compute-0 sudo[417741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:08 compute-0 sudo[417741]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:08 compute-0 sudo[417766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:08 compute-0 sudo[417766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:08 compute-0 sudo[417766]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:09.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:09 compute-0 nova_compute[251992]: 2025-12-06 08:29:09.559 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:09 compute-0 ceph-mon[74339]: pgmap v4116: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3761457245' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:29:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3761457245' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:29:10 compute-0 nova_compute[251992]: 2025-12-06 08:29:10.072 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:10 compute-0 nova_compute[251992]: 2025-12-06 08:29:10.072 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:10.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4117: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:10 compute-0 nova_compute[251992]: 2025-12-06 08:29:10.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:10 compute-0 nova_compute[251992]: 2025-12-06 08:29:10.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:29:10 compute-0 nova_compute[251992]: 2025-12-06 08:29:10.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:29:10 compute-0 nova_compute[251992]: 2025-12-06 08:29:10.921 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:29:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:11.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:11 compute-0 podman[417793]: 2025-12-06 08:29:11.418141271 +0000 UTC m=+0.068074080 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 08:29:11 compute-0 podman[417794]: 2025-12-06 08:29:11.495121411 +0000 UTC m=+0.137563117 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:29:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3044378136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:11 compute-0 ceph-mon[74339]: pgmap v4117: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3507224446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:12.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4118: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:12 compute-0 nova_compute[251992]: 2025-12-06 08:29:12.929 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:13.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:29:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:29:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:29:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:29:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:29:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:29:13 compute-0 ceph-mon[74339]: pgmap v4118: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:14.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4119: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:14 compute-0 nova_compute[251992]: 2025-12-06 08:29:14.562 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:15.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:15 compute-0 ceph-mon[74339]: pgmap v4119: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:15 compute-0 nova_compute[251992]: 2025-12-06 08:29:15.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:16.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4120: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:16 compute-0 nova_compute[251992]: 2025-12-06 08:29:16.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:17.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:17 compute-0 ceph-mon[74339]: pgmap v4120: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:17 compute-0 nova_compute[251992]: 2025-12-06 08:29:17.931 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:18.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4121: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:29:18
Dec 06 08:29:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:29:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:29:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.control']
Dec 06 08:29:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:29:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:29:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:19.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:29:19 compute-0 nova_compute[251992]: 2025-12-06 08:29:19.564 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:19 compute-0 ceph-mon[74339]: pgmap v4121: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:20.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4122: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:21.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:21 compute-0 ceph-mon[74339]: pgmap v4122: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:22.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4123: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:22 compute-0 nova_compute[251992]: 2025-12-06 08:29:22.931 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:23 compute-0 sudo[417836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:23 compute-0 sudo[417836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:23 compute-0 sudo[417836]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:23 compute-0 sudo[417861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:29:23 compute-0 sudo[417861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:23 compute-0 sudo[417861]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:23 compute-0 sudo[417887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:23 compute-0 sudo[417887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:23 compute-0 sudo[417887]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:23 compute-0 sudo[417912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:29:23 compute-0 sudo[417912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:23 compute-0 nova_compute[251992]: 2025-12-06 08:29:23.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:23 compute-0 nova_compute[251992]: 2025-12-06 08:29:23.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:29:23 compute-0 sudo[417912]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:29:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:29:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:29:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:29:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:29:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:24.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4124: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:24 compute-0 ceph-mon[74339]: pgmap v4123: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:29:24 compute-0 nova_compute[251992]: 2025-12-06 08:29:24.565 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:29:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:29:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:29:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:29:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:29:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:25 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ec75331c-7df7-4cb7-9124-0212aebb8389 does not exist
Dec 06 08:29:25 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 448f4cb8-9f6d-44b0-9134-4e4fed7cbd2f does not exist
Dec 06 08:29:25 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 34abd632-b789-485a-8562-2dc0ea46f7cd does not exist
Dec 06 08:29:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:29:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:29:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:29:25 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:29:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:29:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:25.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:25 compute-0 sudo[417968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:25 compute-0 sudo[417968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:25 compute-0 sudo[417968]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:25 compute-0 sudo[417993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:29:25 compute-0 sudo[417993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:25 compute-0 sudo[417993]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:25 compute-0 sudo[418019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:25 compute-0 sudo[418019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:25 compute-0 sudo[418019]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:25 compute-0 sudo[418044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:29:25 compute-0 sudo[418044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:25 compute-0 ceph-mon[74339]: pgmap v4124: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:29:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:29:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:29:25 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:25 compute-0 podman[418111]: 2025-12-06 08:29:25.583249814 +0000 UTC m=+0.051854709 container create 9dee861ee5969447cad3ba43b4d4acc07288e669c083b7b578ad6310e505a05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:29:25 compute-0 systemd[1]: Started libpod-conmon-9dee861ee5969447cad3ba43b4d4acc07288e669c083b7b578ad6310e505a05d.scope.
Dec 06 08:29:25 compute-0 podman[418111]: 2025-12-06 08:29:25.558820601 +0000 UTC m=+0.027425526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:29:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:29:25 compute-0 nova_compute[251992]: 2025-12-06 08:29:25.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:25 compute-0 podman[418111]: 2025-12-06 08:29:25.677056563 +0000 UTC m=+0.145661498 container init 9dee861ee5969447cad3ba43b4d4acc07288e669c083b7b578ad6310e505a05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:29:25 compute-0 podman[418111]: 2025-12-06 08:29:25.683142319 +0000 UTC m=+0.151747214 container start 9dee861ee5969447cad3ba43b4d4acc07288e669c083b7b578ad6310e505a05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 08:29:25 compute-0 podman[418111]: 2025-12-06 08:29:25.68758617 +0000 UTC m=+0.156191075 container attach 9dee861ee5969447cad3ba43b4d4acc07288e669c083b7b578ad6310e505a05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:29:25 compute-0 systemd[1]: libpod-9dee861ee5969447cad3ba43b4d4acc07288e669c083b7b578ad6310e505a05d.scope: Deactivated successfully.
Dec 06 08:29:25 compute-0 wizardly_mendel[418128]: 167 167
Dec 06 08:29:25 compute-0 conmon[418128]: conmon 9dee861ee5969447cad3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9dee861ee5969447cad3ba43b4d4acc07288e669c083b7b578ad6310e505a05d.scope/container/memory.events
Dec 06 08:29:25 compute-0 podman[418111]: 2025-12-06 08:29:25.690066857 +0000 UTC m=+0.158671762 container died 9dee861ee5969447cad3ba43b4d4acc07288e669c083b7b578ad6310e505a05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 08:29:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a770c6e799beba625d912794801bafea33400fac9b939d7ffca5fcf7b727b02e-merged.mount: Deactivated successfully.
Dec 06 08:29:25 compute-0 podman[418111]: 2025-12-06 08:29:25.731330257 +0000 UTC m=+0.199935142 container remove 9dee861ee5969447cad3ba43b4d4acc07288e669c083b7b578ad6310e505a05d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mendel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:29:25 compute-0 systemd[1]: libpod-conmon-9dee861ee5969447cad3ba43b4d4acc07288e669c083b7b578ad6310e505a05d.scope: Deactivated successfully.
Dec 06 08:29:25 compute-0 podman[418151]: 2025-12-06 08:29:25.924221398 +0000 UTC m=+0.054022549 container create 31323afae333560e559e020be1e82c7bc8004a4670b55f76be5a1a4f415efb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 08:29:25 compute-0 systemd[1]: Started libpod-conmon-31323afae333560e559e020be1e82c7bc8004a4670b55f76be5a1a4f415efb38.scope.
Dec 06 08:29:25 compute-0 podman[418151]: 2025-12-06 08:29:25.905396716 +0000 UTC m=+0.035197877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:29:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338e00fd2006c2d3dd954b45661af455db33a6793fd82ce8b41da7637bc2635a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338e00fd2006c2d3dd954b45661af455db33a6793fd82ce8b41da7637bc2635a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338e00fd2006c2d3dd954b45661af455db33a6793fd82ce8b41da7637bc2635a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338e00fd2006c2d3dd954b45661af455db33a6793fd82ce8b41da7637bc2635a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338e00fd2006c2d3dd954b45661af455db33a6793fd82ce8b41da7637bc2635a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:26 compute-0 podman[418151]: 2025-12-06 08:29:26.025680274 +0000 UTC m=+0.155481425 container init 31323afae333560e559e020be1e82c7bc8004a4670b55f76be5a1a4f415efb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 08:29:26 compute-0 podman[418151]: 2025-12-06 08:29:26.033804634 +0000 UTC m=+0.163605785 container start 31323afae333560e559e020be1e82c7bc8004a4670b55f76be5a1a4f415efb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 08:29:26 compute-0 podman[418151]: 2025-12-06 08:29:26.036755755 +0000 UTC m=+0.166556946 container attach 31323afae333560e559e020be1e82c7bc8004a4670b55f76be5a1a4f415efb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:29:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:26.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4125: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:29:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:29:26 compute-0 busy_golick[418166]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:29:26 compute-0 busy_golick[418166]: --> relative data size: 1.0
Dec 06 08:29:26 compute-0 busy_golick[418166]: --> All data devices are unavailable
Dec 06 08:29:26 compute-0 systemd[1]: libpod-31323afae333560e559e020be1e82c7bc8004a4670b55f76be5a1a4f415efb38.scope: Deactivated successfully.
Dec 06 08:29:26 compute-0 podman[418151]: 2025-12-06 08:29:26.880815425 +0000 UTC m=+1.010616566 container died 31323afae333560e559e020be1e82c7bc8004a4670b55f76be5a1a4f415efb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:29:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-338e00fd2006c2d3dd954b45661af455db33a6793fd82ce8b41da7637bc2635a-merged.mount: Deactivated successfully.
Dec 06 08:29:26 compute-0 podman[418151]: 2025-12-06 08:29:26.939815468 +0000 UTC m=+1.069616599 container remove 31323afae333560e559e020be1e82c7bc8004a4670b55f76be5a1a4f415efb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_golick, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 08:29:26 compute-0 systemd[1]: libpod-conmon-31323afae333560e559e020be1e82c7bc8004a4670b55f76be5a1a4f415efb38.scope: Deactivated successfully.
Dec 06 08:29:26 compute-0 sudo[418044]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:27 compute-0 sudo[418195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:27 compute-0 sudo[418195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:27 compute-0 sudo[418195]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:29:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:27.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:29:27 compute-0 sudo[418220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:29:27 compute-0 sudo[418220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:27 compute-0 sudo[418220]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:27 compute-0 sudo[418245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:27 compute-0 sudo[418245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:27 compute-0 sudo[418245]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:27 compute-0 sudo[418271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:29:27 compute-0 sudo[418271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:29:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:29:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:29:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:29:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:29:27 compute-0 ceph-mon[74339]: pgmap v4125: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:27 compute-0 podman[418340]: 2025-12-06 08:29:27.634567802 +0000 UTC m=+0.049383203 container create 323daec2a6c1b36e1a942e2daaa9d2e09775331613b71e41484753ab2ea78f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bose, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec 06 08:29:27 compute-0 systemd[1]: Started libpod-conmon-323daec2a6c1b36e1a942e2daaa9d2e09775331613b71e41484753ab2ea78f05.scope.
Dec 06 08:29:27 compute-0 podman[418340]: 2025-12-06 08:29:27.607082975 +0000 UTC m=+0.021898376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:29:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:29:27 compute-0 podman[418340]: 2025-12-06 08:29:27.734439044 +0000 UTC m=+0.149254465 container init 323daec2a6c1b36e1a942e2daaa9d2e09775331613b71e41484753ab2ea78f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bose, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:29:27 compute-0 podman[418340]: 2025-12-06 08:29:27.746200743 +0000 UTC m=+0.161016124 container start 323daec2a6c1b36e1a942e2daaa9d2e09775331613b71e41484753ab2ea78f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bose, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 08:29:27 compute-0 podman[418340]: 2025-12-06 08:29:27.750598413 +0000 UTC m=+0.165413804 container attach 323daec2a6c1b36e1a942e2daaa9d2e09775331613b71e41484753ab2ea78f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bose, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 08:29:27 compute-0 nice_bose[418356]: 167 167
Dec 06 08:29:27 compute-0 systemd[1]: libpod-323daec2a6c1b36e1a942e2daaa9d2e09775331613b71e41484753ab2ea78f05.scope: Deactivated successfully.
Dec 06 08:29:27 compute-0 podman[418362]: 2025-12-06 08:29:27.820975695 +0000 UTC m=+0.041378785 container died 323daec2a6c1b36e1a942e2daaa9d2e09775331613b71e41484753ab2ea78f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:29:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b513ba10813b8c8a0ed052eea0812f19234119d31352034ee4ecb95a4fa8d210-merged.mount: Deactivated successfully.
Dec 06 08:29:27 compute-0 podman[418362]: 2025-12-06 08:29:27.867840259 +0000 UTC m=+0.088243389 container remove 323daec2a6c1b36e1a942e2daaa9d2e09775331613b71e41484753ab2ea78f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:29:27 compute-0 systemd[1]: libpod-conmon-323daec2a6c1b36e1a942e2daaa9d2e09775331613b71e41484753ab2ea78f05.scope: Deactivated successfully.
Dec 06 08:29:27 compute-0 nova_compute[251992]: 2025-12-06 08:29:27.944 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:28 compute-0 podman[418384]: 2025-12-06 08:29:28.069916072 +0000 UTC m=+0.049522457 container create 90527f6f0e59d949bb23b0fd530a6e3afea24afc1145b886de5b42653f048895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:29:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:28.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:28 compute-0 systemd[1]: Started libpod-conmon-90527f6f0e59d949bb23b0fd530a6e3afea24afc1145b886de5b42653f048895.scope.
Dec 06 08:29:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6c7576dc1c2fad502bbc5299041f191e79e76bf0330134ad27fc96db5680b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6c7576dc1c2fad502bbc5299041f191e79e76bf0330134ad27fc96db5680b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6c7576dc1c2fad502bbc5299041f191e79e76bf0330134ad27fc96db5680b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6c7576dc1c2fad502bbc5299041f191e79e76bf0330134ad27fc96db5680b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:28 compute-0 podman[418384]: 2025-12-06 08:29:28.052681387 +0000 UTC m=+0.032287792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:29:28 compute-0 podman[418384]: 2025-12-06 08:29:28.14835765 +0000 UTC m=+0.127964055 container init 90527f6f0e59d949bb23b0fd530a6e3afea24afc1145b886de5b42653f048895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:29:28 compute-0 podman[418384]: 2025-12-06 08:29:28.155847032 +0000 UTC m=+0.135453417 container start 90527f6f0e59d949bb23b0fd530a6e3afea24afc1145b886de5b42653f048895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 08:29:28 compute-0 podman[418384]: 2025-12-06 08:29:28.158815102 +0000 UTC m=+0.138421597 container attach 90527f6f0e59d949bb23b0fd530a6e3afea24afc1145b886de5b42653f048895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:29:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4126: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:28 compute-0 sudo[418406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:28 compute-0 sudo[418406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:28 compute-0 sudo[418406]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:28 compute-0 sudo[418431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:28 compute-0 sudo[418431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:28 compute-0 sudo[418431]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:28 compute-0 nova_compute[251992]: 2025-12-06 08:29:28.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:28 compute-0 nova_compute[251992]: 2025-12-06 08:29:28.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:29:28 compute-0 boring_engelbart[418401]: {
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:     "0": [
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:         {
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             "devices": [
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "/dev/loop3"
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             ],
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             "lv_name": "ceph_lv0",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             "lv_size": "7511998464",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             "name": "ceph_lv0",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             "tags": {
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "ceph.cluster_name": "ceph",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "ceph.crush_device_class": "",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "ceph.encrypted": "0",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "ceph.osd_id": "0",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "ceph.type": "block",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:                 "ceph.vdo": "0"
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             },
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             "type": "block",
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:             "vg_name": "ceph_vg0"
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:         }
Dec 06 08:29:28 compute-0 boring_engelbart[418401]:     ]
Dec 06 08:29:28 compute-0 boring_engelbart[418401]: }
Dec 06 08:29:28 compute-0 systemd[1]: libpod-90527f6f0e59d949bb23b0fd530a6e3afea24afc1145b886de5b42653f048895.scope: Deactivated successfully.
Dec 06 08:29:28 compute-0 podman[418384]: 2025-12-06 08:29:28.896408779 +0000 UTC m=+0.876015184 container died 90527f6f0e59d949bb23b0fd530a6e3afea24afc1145b886de5b42653f048895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:29:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-af6c7576dc1c2fad502bbc5299041f191e79e76bf0330134ad27fc96db5680b3-merged.mount: Deactivated successfully.
Dec 06 08:29:28 compute-0 podman[418384]: 2025-12-06 08:29:28.949706958 +0000 UTC m=+0.929313343 container remove 90527f6f0e59d949bb23b0fd530a6e3afea24afc1145b886de5b42653f048895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:29:28 compute-0 systemd[1]: libpod-conmon-90527f6f0e59d949bb23b0fd530a6e3afea24afc1145b886de5b42653f048895.scope: Deactivated successfully.
Dec 06 08:29:28 compute-0 sudo[418271]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:29 compute-0 sudo[418473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:29 compute-0 sudo[418473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:29 compute-0 sudo[418473]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:29:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:29.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:29:29 compute-0 sudo[418498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:29:29 compute-0 sudo[418498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:29 compute-0 sudo[418498]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:29 compute-0 sudo[418523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:29 compute-0 sudo[418523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:29 compute-0 sudo[418523]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:29 compute-0 sudo[418549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:29:29 compute-0 sudo[418549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:29 compute-0 ceph-mon[74339]: pgmap v4126: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:29 compute-0 podman[418616]: 2025-12-06 08:29:29.539568878 +0000 UTC m=+0.043958627 container create 4e7369e955ca65f8753732b84236e7aae7dcb3ef12e1ebd3d31757cd351bb002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 08:29:29 compute-0 nova_compute[251992]: 2025-12-06 08:29:29.567 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:29 compute-0 systemd[1]: Started libpod-conmon-4e7369e955ca65f8753732b84236e7aae7dcb3ef12e1ebd3d31757cd351bb002.scope.
Dec 06 08:29:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:29:29 compute-0 podman[418616]: 2025-12-06 08:29:29.521708436 +0000 UTC m=+0.026098155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:29:29 compute-0 podman[418616]: 2025-12-06 08:29:29.638142979 +0000 UTC m=+0.142532688 container init 4e7369e955ca65f8753732b84236e7aae7dcb3ef12e1ebd3d31757cd351bb002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_boyd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:29:29 compute-0 podman[418616]: 2025-12-06 08:29:29.64633607 +0000 UTC m=+0.150725769 container start 4e7369e955ca65f8753732b84236e7aae7dcb3ef12e1ebd3d31757cd351bb002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_boyd, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:29:29 compute-0 podman[418616]: 2025-12-06 08:29:29.65007098 +0000 UTC m=+0.154460719 container attach 4e7369e955ca65f8753732b84236e7aae7dcb3ef12e1ebd3d31757cd351bb002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Dec 06 08:29:29 compute-0 epic_boyd[418632]: 167 167
Dec 06 08:29:29 compute-0 systemd[1]: libpod-4e7369e955ca65f8753732b84236e7aae7dcb3ef12e1ebd3d31757cd351bb002.scope: Deactivated successfully.
Dec 06 08:29:29 compute-0 podman[418616]: 2025-12-06 08:29:29.652980109 +0000 UTC m=+0.157369848 container died 4e7369e955ca65f8753732b84236e7aae7dcb3ef12e1ebd3d31757cd351bb002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_boyd, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 08:29:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-03bd5f19d40ceb173912a34e54dd7b952b51cabfc84e4593987b7ec2e28fd218-merged.mount: Deactivated successfully.
Dec 06 08:29:29 compute-0 podman[418616]: 2025-12-06 08:29:29.6926867 +0000 UTC m=+0.197076409 container remove 4e7369e955ca65f8753732b84236e7aae7dcb3ef12e1ebd3d31757cd351bb002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:29:29 compute-0 systemd[1]: libpod-conmon-4e7369e955ca65f8753732b84236e7aae7dcb3ef12e1ebd3d31757cd351bb002.scope: Deactivated successfully.
Dec 06 08:29:29 compute-0 podman[418657]: 2025-12-06 08:29:29.838153776 +0000 UTC m=+0.036697081 container create 236f823c1cc8e1599e409e1c6d38725d5e277a2af98ee0bb371967901d2ef55d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_darwin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 08:29:29 compute-0 systemd[1]: Started libpod-conmon-236f823c1cc8e1599e409e1c6d38725d5e277a2af98ee0bb371967901d2ef55d.scope.
Dec 06 08:29:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:29:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882e1ec988d5ddd75485e8c362f6f7d5f04577e1f3230400289d8fd738c3d876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882e1ec988d5ddd75485e8c362f6f7d5f04577e1f3230400289d8fd738c3d876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882e1ec988d5ddd75485e8c362f6f7d5f04577e1f3230400289d8fd738c3d876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882e1ec988d5ddd75485e8c362f6f7d5f04577e1f3230400289d8fd738c3d876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:29:29 compute-0 podman[418657]: 2025-12-06 08:29:29.82268931 +0000 UTC m=+0.021232645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:29:29 compute-0 podman[418657]: 2025-12-06 08:29:29.930448767 +0000 UTC m=+0.128992112 container init 236f823c1cc8e1599e409e1c6d38725d5e277a2af98ee0bb371967901d2ef55d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 08:29:29 compute-0 podman[418657]: 2025-12-06 08:29:29.938250878 +0000 UTC m=+0.136794183 container start 236f823c1cc8e1599e409e1c6d38725d5e277a2af98ee0bb371967901d2ef55d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 08:29:29 compute-0 podman[418657]: 2025-12-06 08:29:29.942305067 +0000 UTC m=+0.140848372 container attach 236f823c1cc8e1599e409e1c6d38725d5e277a2af98ee0bb371967901d2ef55d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_darwin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 08:29:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:30.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4127: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:30 compute-0 tender_darwin[418673]: {
Dec 06 08:29:30 compute-0 tender_darwin[418673]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:29:30 compute-0 tender_darwin[418673]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:29:30 compute-0 tender_darwin[418673]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:29:30 compute-0 tender_darwin[418673]:         "osd_id": 0,
Dec 06 08:29:30 compute-0 tender_darwin[418673]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:29:30 compute-0 tender_darwin[418673]:         "type": "bluestore"
Dec 06 08:29:30 compute-0 tender_darwin[418673]:     }
Dec 06 08:29:30 compute-0 tender_darwin[418673]: }
Dec 06 08:29:30 compute-0 systemd[1]: libpod-236f823c1cc8e1599e409e1c6d38725d5e277a2af98ee0bb371967901d2ef55d.scope: Deactivated successfully.
Dec 06 08:29:30 compute-0 podman[418657]: 2025-12-06 08:29:30.739156655 +0000 UTC m=+0.937699960 container died 236f823c1cc8e1599e409e1c6d38725d5e277a2af98ee0bb371967901d2ef55d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 06 08:29:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-882e1ec988d5ddd75485e8c362f6f7d5f04577e1f3230400289d8fd738c3d876-merged.mount: Deactivated successfully.
Dec 06 08:29:30 compute-0 podman[418657]: 2025-12-06 08:29:30.790562772 +0000 UTC m=+0.989106077 container remove 236f823c1cc8e1599e409e1c6d38725d5e277a2af98ee0bb371967901d2ef55d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_darwin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:29:30 compute-0 systemd[1]: libpod-conmon-236f823c1cc8e1599e409e1c6d38725d5e277a2af98ee0bb371967901d2ef55d.scope: Deactivated successfully.
Dec 06 08:29:30 compute-0 sudo[418549]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:29:30 compute-0 ceph-mon[74339]: pgmap v4127: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:29:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:31.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8be6f0dc-0d3f-44cf-a7d9-50bc946206fb does not exist
Dec 06 08:29:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 05cc45a6-fcc8-4fac-9e36-333c5ab55392 does not exist
Dec 06 08:29:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 05b7e739-f73a-4efb-af48-552d9c0a10e2 does not exist
Dec 06 08:29:31 compute-0 sudo[418709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:31 compute-0 sudo[418709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:31 compute-0 sudo[418709]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:31 compute-0 sudo[418735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:29:31 compute-0 sudo[418735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:31 compute-0 sudo[418735]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:32.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:29:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4128: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:32 compute-0 nova_compute[251992]: 2025-12-06 08:29:32.950 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:33.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:34.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4129: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:34 compute-0 nova_compute[251992]: 2025-12-06 08:29:34.569 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:29:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:35.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:29:35 compute-0 ceph-mon[74339]: pgmap v4128: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:35 compute-0 podman[418762]: 2025-12-06 08:29:35.478850816 +0000 UTC m=+0.126487365 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 08:29:35 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:35 compute-0 nova_compute[251992]: 2025-12-06 08:29:35.839 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:35 compute-0 nova_compute[251992]: 2025-12-06 08:29:35.840 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:29:35 compute-0 nova_compute[251992]: 2025-12-06 08:29:35.930 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:29:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:36.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4130: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:36 compute-0 ceph-mon[74339]: pgmap v4129: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:37.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:37 compute-0 ceph-mon[74339]: pgmap v4130: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:37 compute-0 nova_compute[251992]: 2025-12-06 08:29:37.952 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:38.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4131: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:39.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:39 compute-0 ceph-mon[74339]: pgmap v4131: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:39 compute-0 nova_compute[251992]: 2025-12-06 08:29:39.570 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:29:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:40.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:29:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4132: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:41 compute-0 ceph-mon[74339]: pgmap v4132: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:29:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:42.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:29:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4133: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:42 compute-0 podman[418794]: 2025-12-06 08:29:42.413982123 +0000 UTC m=+0.065134569 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:29:42 compute-0 podman[418793]: 2025-12-06 08:29:42.428920976 +0000 UTC m=+0.080697999 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:29:42 compute-0 nova_compute[251992]: 2025-12-06 08:29:42.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:29:42 compute-0 ceph-mon[74339]: pgmap v4133: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:42 compute-0 nova_compute[251992]: 2025-12-06 08:29:42.959 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:43.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:29:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:29:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:29:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:29:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:29:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:29:43 compute-0 sshd-session[418833]: Accepted publickey for zuul from 192.168.122.10 port 46044 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 08:29:43 compute-0 systemd-logind[798]: New session 65 of user zuul.
Dec 06 08:29:43 compute-0 systemd[1]: Started Session 65 of User zuul.
Dec 06 08:29:43 compute-0 sshd-session[418833]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 08:29:43 compute-0 sudo[418838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 06 08:29:43 compute-0 sudo[418838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 08:29:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:44.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4134: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:44 compute-0 nova_compute[251992]: 2025-12-06 08:29:44.570 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:45.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:46 compute-0 ceph-mon[74339]: pgmap v4134: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:46 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45353 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:46.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4135: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:46 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45359 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:47 compute-0 ceph-mon[74339]: from='client.45353 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:47 compute-0 ceph-mon[74339]: pgmap v4135: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:47 compute-0 ceph-mon[74339]: from='client.45359 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:47 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37110 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:47 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46204 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:47 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37116 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:47 compute-0 nova_compute[251992]: 2025-12-06 08:29:47.960 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/760709345' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:29:48 compute-0 ceph-mon[74339]: from='client.37110 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:29:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:48.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:29:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4136: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:48 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46216 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 06 08:29:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3959119430' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:29:48 compute-0 sudo[419084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:48 compute-0 sudo[419084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:48 compute-0 sudo[419084]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:48 compute-0 sudo[419109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:29:48 compute-0 sudo[419109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:29:48 compute-0 sudo[419109]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:29:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:49.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:29:49 compute-0 ceph-mon[74339]: from='client.46204 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:49 compute-0 ceph-mon[74339]: from='client.37116 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:49 compute-0 ceph-mon[74339]: pgmap v4136: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:49 compute-0 ceph-mon[74339]: from='client.46216 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3959119430' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:29:49 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3120292559' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:29:49 compute-0 nova_compute[251992]: 2025-12-06 08:29:49.573 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:29:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:50.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:29:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4137: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:51.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:51 compute-0 ceph-mon[74339]: pgmap v4137: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1792754580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/185607013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:29:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:52.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4138: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:52 compute-0 ovs-vsctl[419171]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 06 08:29:52 compute-0 nova_compute[251992]: 2025-12-06 08:29:52.962 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:29:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:53.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:29:53 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45386 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:53 compute-0 virtqemud[251613]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 06 08:29:53 compute-0 virtqemud[251613]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 06 08:29:53 compute-0 virtqemud[251613]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 06 08:29:53 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45392 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Dec 06 08:29:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:54 compute-0 ceph-mon[74339]: pgmap v4138: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:54 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: cache status {prefix=cache status} (starting...)
Dec 06 08:29:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:54.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4139: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:54 compute-0 lvm[419504]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 08:29:54 compute-0 lvm[419504]: VG ceph_vg0 finished
Dec 06 08:29:54 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46240 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:54 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: client ls {prefix=client ls} (starting...)
Dec 06 08:29:54 compute-0 nova_compute[251992]: 2025-12-06 08:29:54.573 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:54 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46255 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:54 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37143 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Dec 06 08:29:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:54 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45416 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:54 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:29:54 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:29:54.687+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:29:54 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: damage ls {prefix=damage ls} (starting...)
Dec 06 08:29:54 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: dump loads {prefix=dump loads} (starting...)
Dec 06 08:29:55 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37158 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:29:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2728014241' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.45386 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.45392 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2589679709' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: pgmap v4139: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.46240 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1676675540' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.46255 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.37143 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/201956814' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.45416 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3727707845' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2728014241' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:55.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Dec 06 08:29:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740306402' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 06 08:29:55 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 06 08:29:55 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 06 08:29:55 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46270 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:29:55 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:29:55.394+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:29:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:29:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3793706944' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 06 08:29:55 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45455 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37185 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:29:55 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:29:55.738+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:29:55 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 06 08:29:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec 06 08:29:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/863163011' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:29:55 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 06 08:29:56 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45473 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.37158 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2740306402' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2337824083' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1783511862' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1262805181' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.46270 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3793706944' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3402814757' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.45455 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.37185 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2945805798' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/863163011' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1809216193' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3166734127' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:29:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:56.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec 06 08:29:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2943769363' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: ops {prefix=ops} (starting...)
Dec 06 08:29:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4140: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:56 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46315 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec 06 08:29:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/291798680' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Dec 06 08:29:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46321 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 06 08:29:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2848158878' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 06 08:29:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4207179364' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37230 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:56 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: session ls {prefix=session ls} (starting...)
Dec 06 08:29:56 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: status {prefix=status} (starting...)
Dec 06 08:29:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Dec 06 08:29:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 06 08:29:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4108570152' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37242 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:29:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:57.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.45473 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2943769363' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2980343332' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: pgmap v4140: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.46315 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/291798680' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2964182196' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3992149126' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.46321 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2848158878' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4207179364' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.37230 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/586738722' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/589179532' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/486751230' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1722038647' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4108570152' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45518 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 08:29:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:29:57.415+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 08:29:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 06 08:29:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/886029708' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Dec 06 08:29:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905335147' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46366 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:57 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 08:29:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:29:57.763+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 08:29:57 compute-0 nova_compute[251992]: 2025-12-06 08:29:57.963 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 08:29:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2015455620' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec 06 08:29:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2206128586' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:29:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:29:58.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:58 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45539 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4141: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.37242 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2792455046' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3902118403' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.45518 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3005104597' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/886029708' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/905335147' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2100372371' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1074309655' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3607517829' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2015455620' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2206128586' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3947966807' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3263639216' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/538665449' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec 06 08:29:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3474173825' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37290 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 08:29:58 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:29:58.413+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 08:29:58 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45551 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46402 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec 06 08:29:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1281743882' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 06 08:29:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/230021667' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:29:58 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45566 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46420 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:29:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:29:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:29:59.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:29:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45581 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec 06 08:29:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701106949' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45587 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.46366 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.45539 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: pgmap v4141: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3474173825' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.37290 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.45551 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1373369456' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.46402 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1008488265' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1281743882' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/230021667' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4289010433' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/209309539' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2701106949' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46435 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 nova_compute[251992]: 2025-12-06 08:29:59.575 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:29:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37329 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 06 08:29:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3810903548' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45599 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46447 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:29:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37341 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 08:30:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45617 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:00.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46468 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 06 08:30:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1118942847' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4142: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37359 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.45566 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.46420 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.45581 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.45587 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.46435 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3078248732' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.37329 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/614316164' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3810903548' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/447824310' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1991419766' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1118942847' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:30:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2329781491' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:12.778007+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383590400 unmapped: 55992320 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:13.778137+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383590400 unmapped: 55992320 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:14.778274+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a305c000/0x0/0x1bfc00000, data 0x542c3ce/0x5642000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383606784 unmapped: 55975936 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4567441 data_alloc: 251658240 data_used: 40271872
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:15.778433+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383606784 unmapped: 55975936 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:16.778614+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383606784 unmapped: 55975936 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:17.778777+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383606784 unmapped: 55975936 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a304d000/0x0/0x1bfc00000, data 0x543b3ce/0x5651000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:18.778895+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383606784 unmapped: 55975936 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:19.779150+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383606784 unmapped: 55975936 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4568097 data_alloc: 251658240 data_used: 40271872
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:20.779269+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383606784 unmapped: 55975936 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:21.779460+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383606784 unmapped: 55975936 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.880715370s of 13.320129395s, submitted: 8
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:22.779596+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383623168 unmapped: 55959552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:23.779739+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383623168 unmapped: 55959552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3046000/0x0/0x1bfc00000, data 0x54423ce/0x5658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:24.779880+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383623168 unmapped: 55959552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4568537 data_alloc: 251658240 data_used: 40271872
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:25.780067+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383623168 unmapped: 55959552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:26.780244+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383623168 unmapped: 55959552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:27.780435+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3046000/0x0/0x1bfc00000, data 0x54423ce/0x5658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383623168 unmapped: 55959552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:28.780592+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383623168 unmapped: 55959552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:29.780844+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383623168 unmapped: 55959552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4568537 data_alloc: 251658240 data_used: 40271872
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:30.781082+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383631360 unmapped: 55951360 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:31.781312+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383631360 unmapped: 55951360 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:32.781508+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383631360 unmapped: 55951360 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3046000/0x0/0x1bfc00000, data 0x54423ce/0x5658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.727558136s of 10.752400398s, submitted: 4
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:33.781701+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383631360 unmapped: 55951360 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:34.781902+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383631360 unmapped: 55951360 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4568537 data_alloc: 251658240 data_used: 40271872
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:35.782035+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383631360 unmapped: 55951360 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:36.782198+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383631360 unmapped: 55951360 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:37.782398+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383639552 unmapped: 55943168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3044000/0x0/0x1bfc00000, data 0x54433ce/0x5659000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:38.782557+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383647744 unmapped: 55934976 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:39.782757+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383647744 unmapped: 55934976 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4569713 data_alloc: 251658240 data_used: 40284160
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d50f12c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:40.782885+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 ms_handle_reset con 0x5636d2a21400 session 0x5636d2d714a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383959040 unmapped: 55623680 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3038000/0x0/0x1bfc00000, data 0x54503ce/0x5666000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:41.783033+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3014000/0x0/0x1bfc00000, data 0x54743ce/0x568a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383959040 unmapped: 55623680 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ec00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:42.783176+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1f400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383959040 unmapped: 55623680 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:43.783304+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384024576 unmapped: 55558144 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:44.783481+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4334000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 ms_handle_reset con 0x5636d4334000 session 0x5636d46f4000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1fc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 ms_handle_reset con 0x5636d3a1fc00 session 0x5636d5301680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1f72800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 ms_handle_reset con 0x5636d1f72800 session 0x5636d4e63e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2a21400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 ms_handle_reset con 0x5636d2a21400 session 0x5636d2eedc20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384024576 unmapped: 55558144 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.615420341s of 11.662307739s, submitted: 14
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d4bb9e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4614421 data_alloc: 251658240 data_used: 40415232
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1fc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 ms_handle_reset con 0x5636d3a1fc00 session 0x5636d28e85a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4334000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 ms_handle_reset con 0x5636d4334000 session 0x5636d5300f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1f72800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 ms_handle_reset con 0x5636d1f72800 session 0x5636d502d860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2a21400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 ms_handle_reset con 0x5636d2a21400 session 0x5636d502c5a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:45.783668+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384040960 unmapped: 55541760 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a2a89000/0x0/0x1bfc00000, data 0x59fe3de/0x5c15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:46.783807+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384040960 unmapped: 55541760 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:47.783973+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384040960 unmapped: 55541760 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:48.784164+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384040960 unmapped: 55541760 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:49.784516+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384368640 unmapped: 55214080 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4619395 data_alloc: 251658240 data_used: 41037824
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 408 heartbeat osd_stat(store_statfs(0x1a2a85000/0x0/0x1bfc00000, data 0x5a000aa/0x5c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:50.784680+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384958464 unmapped: 54624256 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:51.784900+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384958464 unmapped: 54624256 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:52.785161+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384958464 unmapped: 54624256 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1fc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 408 ms_handle_reset con 0x5636d3a1fc00 session 0x5636d2ec01e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:53.785467+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 385114112 unmapped: 54468608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:54.785683+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 408 heartbeat osd_stat(store_statfs(0x1a2a62000/0x0/0x1bfc00000, data 0x5a240aa/0x5c3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 385277952 unmapped: 54304768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4640447 data_alloc: 251658240 data_used: 42713088
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:55.785811+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.738558769s of 10.866394997s, submitted: 20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 386424832 unmapped: 53157888 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:56.785961+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387604480 unmapped: 51978240 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:57.786131+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387604480 unmapped: 51978240 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:58.786293+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a264e000/0x0/0x1bfc00000, data 0x5a25dca/0x5c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387620864 unmapped: 51961856 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a264e000/0x0/0x1bfc00000, data 0x5a25dca/0x5c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T07:59:59.786709+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387620864 unmapped: 51961856 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4683581 data_alloc: 251658240 data_used: 45088768
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:00.786859+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387620864 unmapped: 51961856 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a264e000/0x0/0x1bfc00000, data 0x5a25dca/0x5c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:01.787066+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387735552 unmapped: 51847168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:02.787241+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388784128 unmapped: 50798592 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:03.787410+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388882432 unmapped: 50700288 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:04.787555+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388907008 unmapped: 50675712 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4701911 data_alloc: 251658240 data_used: 46043136
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:05.787691+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388907008 unmapped: 50675712 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2648000/0x0/0x1bfc00000, data 0x5a2a97c/0x5c45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:06.787843+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2648000/0x0/0x1bfc00000, data 0x5a2a97c/0x5c45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.055346489s of 11.168386459s, submitted: 25
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389603328 unmapped: 49979392 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:07.787986+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389365760 unmapped: 50216960 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:08.789172+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389398528 unmapped: 50184192 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:09.789323+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389398528 unmapped: 50184192 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a248d000/0x0/0x1bfc00000, data 0x5be697c/0x5e01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4722773 data_alloc: 251658240 data_used: 46141440
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:10.789474+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d4bb83c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 50176000 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:11.789617+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d631b800 session 0x5636d2ed0960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388923392 unmapped: 50659328 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:12.789755+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388923392 unmapped: 50659328 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:13.789891+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388923392 unmapped: 50659328 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:14.790017+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388923392 unmapped: 50659328 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4537069 data_alloc: 251658240 data_used: 36954112
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a331b000/0x0/0x1bfc00000, data 0x4d5897c/0x4f73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:15.790189+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388939776 unmapped: 50642944 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:16.790319+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d1e67c00 session 0x5636d2ecc780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d1e67000 session 0x5636d2ece5a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3a1ec00 session 0x5636d538ed20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.348403931s of 10.039106369s, submitted: 27
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388939776 unmapped: 50642944 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3a1f400 session 0x5636d4f97680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1f72800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2a21400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d1f72800 session 0x5636d4da83c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d2a21400 session 0x5636d222b680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:17.790447+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a3a5f000/0x0/0x1bfc00000, data 0x461596c/0x482f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 386056192 unmapped: 53526528 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:18.790598+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a3aa7000/0x0/0x1bfc00000, data 0x45cd96c/0x47e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 386056192 unmapped: 53526528 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:19.790775+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 386056192 unmapped: 53526528 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4152855 data_alloc: 234881024 data_used: 15642624
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:20.790970+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d1e67000 session 0x5636d2ec1a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:21.791226+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a50a4000/0x0/0x1bfc00000, data 0x2fd090a/0x31e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:22.791390+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a50a4000/0x0/0x1bfc00000, data 0x2fd090a/0x31e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:23.791527+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a50a4000/0x0/0x1bfc00000, data 0x2fd090a/0x31e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:24.791692+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4149443 data_alloc: 234881024 data_used: 15618048
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:25.791857+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:26.792045+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:27.792231+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:28.792392+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a50a4000/0x0/0x1bfc00000, data 0x2fd090a/0x31e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:29.792606+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4149443 data_alloc: 234881024 data_used: 15618048
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:30.792808+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:31.792988+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:32.793129+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:33.793269+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:34.793397+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a50a4000/0x0/0x1bfc00000, data 0x2fd090a/0x31e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372826112 unmapped: 66756608 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d50a0800 session 0x5636d4516f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3b07400 session 0x5636d5326f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4149443 data_alloc: 234881024 data_used: 15618048
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:35.793565+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.399353027s of 18.679479599s, submitted: 34
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366895104 unmapped: 72687616 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d1e67c00 session 0x5636d2ecf4a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:36.793719+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366895104 unmapped: 72687616 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:37.793881+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366895104 unmapped: 72687616 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:38.794047+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a66c0000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1796f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366895104 unmapped: 72687616 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:39.794301+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366977024 unmapped: 72605696 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878324 data_alloc: 218103808 data_used: 3248128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:40.794487+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366993408 unmapped: 72589312 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:41.794697+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366993408 unmapped: 72589312 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:42.794887+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366993408 unmapped: 72589312 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a7701000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:43.795044+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366993408 unmapped: 72589312 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:44.795409+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367001600 unmapped: 72581120 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878324 data_alloc: 218103808 data_used: 3248128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:45.795561+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.587407112s of 10.158181190s, submitted: 167
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367001600 unmapped: 72581120 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:46.795742+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 368058368 unmapped: 71524352 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:47.795900+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a7701000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364740608 unmapped: 74842112 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:48.796026+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364773376 unmapped: 74809344 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:49.796196+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364789760 unmapped: 74792960 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878324 data_alloc: 218103808 data_used: 3248128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:50.796326+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:51.796549+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a7701000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:52.796752+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:53.796963+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:54.797192+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a7701000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878324 data_alloc: 218103808 data_used: 3248128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:55.797327+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:56.797456+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:57.797636+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:58.797842+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:00:59.798017+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a7701000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878324 data_alloc: 218103808 data_used: 3248128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:00.798175+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:01.798334+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:02.798542+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:03.798689+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:04.798855+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878324 data_alloc: 218103808 data_used: 3248128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:05.799012+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a7701000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:06.799210+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:07.799347+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:08.799504+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:09.799722+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878324 data_alloc: 218103808 data_used: 3248128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:10.799853+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:11.799995+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a7701000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:12.800171+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364797952 unmapped: 74784768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:13.800341+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a7701000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 74776576 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:14.800471+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364806144 unmapped: 74776576 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878324 data_alloc: 218103808 data_used: 3248128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:15.800612+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a7701000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 74768384 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:16.800754+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 74768384 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:17.800878+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 74768384 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:18.801001+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 74768384 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:19.801253+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 74768384 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a7701000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878324 data_alloc: 218103808 data_used: 3248128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:20.801401+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a7701000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 74768384 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:21.801581+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 74768384 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:22.801753+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 74768384 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:23.801871+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 74768384 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a7701000/0x0/0x1bfc00000, data 0x19b68d7/0x1bcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:24.802041+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 74768384 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878324 data_alloc: 218103808 data_used: 3248128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:25.802205+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 74768384 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:26.802401+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364814336 unmapped: 74768384 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:27.803263+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.398212433s of 41.942527771s, submitted: 180
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 74760192 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:28.803421+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d1e67000 session 0x5636d2028d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364822528 unmapped: 74760192 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:29.803625+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a73fb000/0x0/0x1bfc00000, data 0x1cbc8d7/0x1ed3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 74752000 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3902022 data_alloc: 218103808 data_used: 3248128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:30.803757+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 74752000 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:31.803904+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 74752000 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:32.804094+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a73fb000/0x0/0x1bfc00000, data 0x1cbc8d7/0x1ed3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 74752000 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:33.804293+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 74752000 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:34.804429+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2a21400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d2a21400 session 0x5636d2a95a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364830720 unmapped: 74752000 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3902022 data_alloc: 218103808 data_used: 3248128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:35.804543+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a73fb000/0x0/0x1bfc00000, data 0x1cbc8d7/0x1ed3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b07400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3b07400 session 0x5636d502d860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 74743808 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:36.804673+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d50a0800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d50a0800 session 0x5636d4709e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364838912 unmapped: 74743808 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ec00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:37.804801+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.572644234s of 10.033238411s, submitted: 5
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3b69000 session 0x5636d5327860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b69000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3a1ec00 session 0x5636d20281e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 74563584 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:38.804936+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365019136 unmapped: 74563584 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2a21400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:39.805078+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 74285056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:40.805409+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3928984 data_alloc: 218103808 data_used: 6422528
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a73d6000/0x0/0x1bfc00000, data 0x1ce08e7/0x1ef8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 74285056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:41.805525+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 74285056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:42.805686+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 74285056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:43.805770+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 74285056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:44.805895+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 74285056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a73d6000/0x0/0x1bfc00000, data 0x1ce08e7/0x1ef8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:45.806048+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3928984 data_alloc: 218103808 data_used: 6422528
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365297664 unmapped: 74285056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:46.806166+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b07400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3b07400 session 0x5636d5033c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d50a0800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d50a0800 session 0x5636d2ecf860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1f400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3a1f400 session 0x5636d538f860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d2d4af00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d2d70960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ec00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3a1ec00 session 0x5636d538e000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1f400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365305856 unmapped: 74276864 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3a1f400 session 0x5636d5300780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b07400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3b07400 session 0x5636d50f0780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d50a0800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d50a0800 session 0x5636d52965a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:47.806291+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365314048 unmapped: 74268672 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:48.806459+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365314048 unmapped: 74268672 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:49.806651+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365314048 unmapped: 74268672 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:50.806821+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3962357 data_alloc: 218103808 data_used: 6422528
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a704e000/0x0/0x1bfc00000, data 0x2066959/0x2280000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.897373199s of 13.001561165s, submitted: 25
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365182976 unmapped: 74399744 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:51.806992+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365158400 unmapped: 74424320 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a6bc1000/0x0/0x1bfc00000, data 0x24ed959/0x2707000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:52.807168+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d4e5f860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:53.807295+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ec00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3a1ec00 session 0x5636d4486960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:54.807419+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1f400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3a1f400 session 0x5636d4487e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b07400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d3b07400 session 0x5636d4486b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:55.807551+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4012071 data_alloc: 218103808 data_used: 6627328
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a6bad000/0x0/0x1bfc00000, data 0x24ff959/0x2719000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1fc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a6bac000/0x0/0x1bfc00000, data 0x24ff969/0x271a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:56.807679+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:57.807789+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:58.807936+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:01:59.808167+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:00.808312+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4038471 data_alloc: 218103808 data_used: 10326016
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:01.808444+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a6bac000/0x0/0x1bfc00000, data 0x24ff969/0x271a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:02.808573+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a6bac000/0x0/0x1bfc00000, data 0x24ff969/0x271a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:03.808731+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:04.808917+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:05.809050+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4038471 data_alloc: 218103808 data_used: 10326016
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a6bac000/0x0/0x1bfc00000, data 0x24ff969/0x271a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:06.809167+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:07.809283+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73261056 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:08.809396+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.625205994s of 17.796827316s, submitted: 46
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 72359936 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:09.809551+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367230976 unmapped: 72351744 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:10.809697+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4062045 data_alloc: 218103808 data_used: 10694656
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367230976 unmapped: 72351744 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:11.809866+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a6887000/0x0/0x1bfc00000, data 0x2826969/0x2a41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367230976 unmapped: 72351744 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:12.809994+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 368279552 unmapped: 71303168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:13.810190+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 72343552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:14.810328+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a688b000/0x0/0x1bfc00000, data 0x2827969/0x2a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 72343552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:15.810465+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4067517 data_alloc: 218103808 data_used: 10752000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d2a21400 session 0x5636d4f79680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 ms_handle_reset con 0x5636d1e67000 session 0x5636d2a434a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 72343552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:16.810767+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a688b000/0x0/0x1bfc00000, data 0x2827969/0x2a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 72343552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:17.810904+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 72343552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:18.811041+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 72343552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:19.811244+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 72343552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:20.811437+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4067517 data_alloc: 218103808 data_used: 10752000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 72343552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:21.811614+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a688b000/0x0/0x1bfc00000, data 0x2827969/0x2a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 72343552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:22.811780+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 72343552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:23.811940+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a688b000/0x0/0x1bfc00000, data 0x2827969/0x2a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 72343552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:24.812065+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 72343552 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:25.812248+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4067517 data_alloc: 218103808 data_used: 10752000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.991325378s of 17.552423477s, submitted: 75
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 72327168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:26.812416+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 72327168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:27.812574+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d4bb9a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ec00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 ms_handle_reset con 0x5636d3a1ec00 session 0x5636d4516b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 72327168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:28.812719+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1f400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 72327168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:29.812887+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a6887000/0x0/0x1bfc00000, data 0x2829635/0x2a45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b07400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72400896 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:30.813020+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4073131 data_alloc: 218103808 data_used: 10838016
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a6887000/0x0/0x1bfc00000, data 0x2829635/0x2a45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72400896 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:31.813157+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72400896 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:32.813274+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a6887000/0x0/0x1bfc00000, data 0x2829635/0x2a45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72400896 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:33.813407+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a6887000/0x0/0x1bfc00000, data 0x2829635/0x2a45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72400896 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:34.813634+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a6887000/0x0/0x1bfc00000, data 0x2829635/0x2a45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72400896 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:35.813774+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4073131 data_alloc: 218103808 data_used: 10838016
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72400896 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:36.813914+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72400896 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:37.814179+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 ms_handle_reset con 0x5636d3b07400 session 0x5636d21bd860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 ms_handle_reset con 0x5636d3a1f400 session 0x5636d4da83c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72400896 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:38.814760+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a6887000/0x0/0x1bfc00000, data 0x2829635/0x2a45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72400896 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:39.814980+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72400896 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:40.815179+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4073131 data_alloc: 218103808 data_used: 10838016
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367181824 unmapped: 72400896 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:41.815381+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72392704 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:42.815534+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a6887000/0x0/0x1bfc00000, data 0x2829635/0x2a45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72392704 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:43.815680+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 72392704 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:44.815839+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.963266373s of 18.967422485s, submitted: 1
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 ms_handle_reset con 0x5636d1e67000 session 0x5636d2ec12c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367075328 unmapped: 72507392 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:45.815990+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4128109 data_alloc: 218103808 data_used: 10838016
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72491008 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:46.816160+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 412 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d1ee3a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72491008 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:47.816288+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ec00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b07400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 412 ms_handle_reset con 0x5636d3a1fc00 session 0x5636d4486780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 412 ms_handle_reset con 0x5636d631b800 session 0x5636d5297c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72523776 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:48.816435+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d50b3800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a6885000/0x0/0x1bfc00000, data 0x282b355/0x2a48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 412 ms_handle_reset con 0x5636d50b3800 session 0x5636d2a434a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73416704 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:49.816658+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73416704 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:50.816850+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4015460 data_alloc: 218103808 data_used: 6742016
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:51.817014+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73416704 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a6f35000/0x0/0x1bfc00000, data 0x217e2d3/0x2398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:52.817419+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73416704 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:53.817992+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73416704 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:54.818164+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73416704 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:55.818307+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73416704 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4019458 data_alloc: 218103808 data_used: 6750208
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6f32000/0x0/0x1bfc00000, data 0x217fe85/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:56.818470+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73416704 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:57.818614+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73416704 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6f32000/0x0/0x1bfc00000, data 0x217fe85/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:58.818743+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73400320 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:02:59.818922+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.341357231s of 14.472592354s, submitted: 52
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 72335360 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:00.819155+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 72327168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4034598 data_alloc: 218103808 data_used: 7221248
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:01.819295+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 72327168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:02.819437+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 72327168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6f17000/0x0/0x1bfc00000, data 0x217fe85/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6f17000/0x0/0x1bfc00000, data 0x217fe85/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:03.819616+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 72327168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:04.819810+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 72327168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:05.819940+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72417280 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4034700 data_alloc: 218103808 data_used: 7221248
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:06.820072+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72417280 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:07.820322+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72417280 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:08.820458+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72417280 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3b07400 session 0x5636d3a6da40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1ec00 session 0x5636d2a42f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6f2e000/0x0/0x1bfc00000, data 0x2184e85/0x23a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:09.820652+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6f2e000/0x0/0x1bfc00000, data 0x2184e85/0x23a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.340467453s of 10.470830917s, submitted: 26
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6f2e000/0x0/0x1bfc00000, data 0x2184e85/0x23a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:10.820795+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 75841536 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3916688 data_alloc: 218103808 data_used: 3387392
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:11.820968+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 75841536 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d2ec1a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:12.821186+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 75841536 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:13.821370+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 75841536 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a76f8000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:14.821518+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 75841536 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:15.821692+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 75841536 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3909487 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:16.821822+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 75841536 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a76f8000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:17.821998+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 75841536 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:18.822149+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 75841536 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:19.822338+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 75841536 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.096918106s of 10.256064415s, submitted: 4
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d2d70960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:20.822611+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 75841536 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3928089 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:21.822867+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 75841536 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a74eb000/0x0/0x1bfc00000, data 0x1bc8e75/0x1de3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:22.823243+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363749376 unmapped: 75833344 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a74eb000/0x0/0x1bfc00000, data 0x1bc8e75/0x1de3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:23.823382+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363749376 unmapped: 75833344 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:24.823634+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363757568 unmapped: 75825152 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:25.823787+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363757568 unmapped: 75825152 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3928089 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:26.823943+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363757568 unmapped: 75825152 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a74eb000/0x0/0x1bfc00000, data 0x1bc8e75/0x1de3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:27.824284+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363757568 unmapped: 75825152 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:28.824466+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363757568 unmapped: 75825152 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a74eb000/0x0/0x1bfc00000, data 0x1bc8e75/0x1de3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:29.824841+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363757568 unmapped: 75825152 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:30.825079+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3928089 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1fc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:31.825282+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a74eb000/0x0/0x1bfc00000, data 0x1bc8e75/0x1de3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:32.825589+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:33.825772+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:34.825943+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a74eb000/0x0/0x1bfc00000, data 0x1bc8e75/0x1de3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:35.826073+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3942809 data_alloc: 218103808 data_used: 5410816
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:36.826255+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a74eb000/0x0/0x1bfc00000, data 0x1bc8e75/0x1de3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:37.826467+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:38.826609+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:39.826830+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a74eb000/0x0/0x1bfc00000, data 0x1bc8e75/0x1de3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:40.826982+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3942809 data_alloc: 218103808 data_used: 5410816
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a74eb000/0x0/0x1bfc00000, data 0x1bc8e75/0x1de3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:41.827144+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363773952 unmapped: 75808768 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.168073654s of 22.202831268s, submitted: 4
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:42.827335+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363970560 unmapped: 75612160 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:43.827457+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363765760 unmapped: 75816960 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d631b800 session 0x5636d20a4960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d53265a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d50f05a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ec00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1ec00 session 0x5636d20a5680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b07400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3b07400 session 0x5636d4f97a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:44.827609+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d7e4a400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d7e4a400 session 0x5636d47083c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d502de00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d2a42960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ec00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1ec00 session 0x5636d4e5fc20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363798528 unmapped: 75784192 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:45.827779+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 75767808 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4075385 data_alloc: 218103808 data_used: 6123520
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a66a7000/0x0/0x1bfc00000, data 0x2a0be85/0x2c27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:46.827915+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 75767808 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:47.828744+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 75767808 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:48.828871+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 75767808 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b07400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3b07400 session 0x5636d5300f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:49.829145+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363814912 unmapped: 75767808 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d75a5400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d75a5400 session 0x5636d502cd20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:50.829377+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 75915264 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4073301 data_alloc: 218103808 data_used: 6123520
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d50f01e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a66a5000/0x0/0x1bfc00000, data 0x2a0de85/0x2c29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:51.829490+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 75915264 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d538e3c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:52.829636+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 75915264 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ec00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b07400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:53.829774+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 363667456 unmapped: 75915264 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1fc00 session 0x5636d2d710e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:54.829909+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365322240 unmapped: 74260480 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.840703964s of 12.340802193s, submitted: 64
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d3c8be00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:55.830167+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 77406208 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4032949 data_alloc: 218103808 data_used: 10227712
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:56.830318+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 77406208 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6f2c000/0x0/0x1bfc00000, data 0x2185e95/0x23a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:57.830625+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 77406208 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6f2c000/0x0/0x1bfc00000, data 0x2185e95/0x23a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:58.830836+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 77406208 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:03:59.831071+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 77406208 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:00.831253+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 77406208 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4032949 data_alloc: 218103808 data_used: 10227712
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:01.831402+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 77406208 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:02.831574+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 77406208 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:03.831784+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6f2c000/0x0/0x1bfc00000, data 0x2185e95/0x23a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 77406208 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:04.832065+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 77406208 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.463399887s of 10.480900764s, submitted: 8
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:05.832204+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73711616 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4119083 data_alloc: 218103808 data_used: 10817536
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6f2c000/0x0/0x1bfc00000, data 0x2185e95/0x23a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:06.832365+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73711616 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:07.832558+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a64ec000/0x0/0x1bfc00000, data 0x2bb4e95/0x2dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73637888 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:08.832762+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73637888 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:09.832972+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73637888 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:10.833198+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a64dc000/0x0/0x1bfc00000, data 0x2bbee95/0x2ddb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73637888 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4124917 data_alloc: 218103808 data_used: 10633216
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:11.833345+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73637888 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:12.833541+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73637888 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:13.833730+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73637888 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:14.833918+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73637888 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:15.834074+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73629696 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a64dc000/0x0/0x1bfc00000, data 0x2bbee95/0x2ddb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4124933 data_alloc: 218103808 data_used: 10633216
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:16.834229+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73629696 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:17.834387+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a64dc000/0x0/0x1bfc00000, data 0x2bbee95/0x2ddb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73629696 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:18.834545+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73629696 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:19.834722+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73629696 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2a21800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2a21800 session 0x5636d50f10e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d538e780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2a21800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2a21800 session 0x5636d4708960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d523bc20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.760953903s of 15.057385445s, submitted: 90
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d2028d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:20.834887+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1fc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1fc00 session 0x5636d523de00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73965568 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4148350 data_alloc: 218103808 data_used: 10633216
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:21.835098+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73965568 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:22.835317+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73965568 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a611d000/0x0/0x1bfc00000, data 0x2f93ef7/0x31b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:23.835470+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73965568 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a611d000/0x0/0x1bfc00000, data 0x2f93ef7/0x31b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:24.835625+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73965568 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:25.835812+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365625344 unmapped: 73957376 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4148350 data_alloc: 218103808 data_used: 10633216
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:26.836011+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d5033860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2a21800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365625344 unmapped: 73957376 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d21bdc20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:27.836161+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1ec00 session 0x5636d21bd0e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d5032780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3b07400 session 0x5636d53274a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d2ece5a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73801728 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:28.836324+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a60f8000/0x0/0x1bfc00000, data 0x2fb7f07/0x31d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73801728 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:29.836542+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ec00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73801728 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:30.836695+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73605120 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4181264 data_alloc: 234881024 data_used: 14577664
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:31.836911+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a60f8000/0x0/0x1bfc00000, data 0x2fb7f07/0x31d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73588736 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:32.837058+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73588736 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:33.837205+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73588736 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:34.837424+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73588736 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:35.837601+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a60f8000/0x0/0x1bfc00000, data 0x2fb7f07/0x31d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73588736 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4181264 data_alloc: 234881024 data_used: 14577664
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:36.837789+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73580544 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:37.837956+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73580544 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.727260590s of 17.816476822s, submitted: 30
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:38.838140+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73580544 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a60f6000/0x0/0x1bfc00000, data 0x2fb8f07/0x31d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:39.838321+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73580544 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:40.838582+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2a21800 session 0x5636d4e5fa40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d5301680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73580544 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3993274 data_alloc: 218103808 data_used: 7213056
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:41.838717+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d538e780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 364183552 unmapped: 75399168 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:42.838878+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365240320 unmapped: 74342400 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:43.839039+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365363200 unmapped: 74219520 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:44.839224+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a702f000/0x0/0x1bfc00000, data 0x2082ee7/0x229f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:45.839398+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4022013 data_alloc: 218103808 data_used: 8237056
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:46.839585+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a702d000/0x0/0x1bfc00000, data 0x2084ee7/0x22a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:47.839811+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:48.839979+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:49.840176+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a702d000/0x0/0x1bfc00000, data 0x2084ee7/0x22a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:50.840330+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4022013 data_alloc: 218103808 data_used: 8237056
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:51.840476+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:52.840621+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:53.840759+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:54.840930+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:55.841198+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a702d000/0x0/0x1bfc00000, data 0x2084ee7/0x22a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4022173 data_alloc: 218103808 data_used: 8241152
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:56.841391+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74178560 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:57.841612+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74170368 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:58.841815+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74170368 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:04:59.842063+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74170368 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:00.842268+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a702d000/0x0/0x1bfc00000, data 0x2084ee7/0x22a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74170368 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4022173 data_alloc: 218103808 data_used: 8241152
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a702d000/0x0/0x1bfc00000, data 0x2084ee7/0x22a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:01.842438+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74170368 heap: 439582720 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b06c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.654491425s of 24.080133438s, submitted: 67
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3b06c00 session 0x5636d2029e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d53272c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2a21800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2a21800 session 0x5636d5327a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:02.842568+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d2083e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d4f790e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 78282752 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:03.842768+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 78282752 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:04.843012+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 78282752 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:05.843222+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 78282752 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4121430 data_alloc: 218103808 data_used: 8245248
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:06.843397+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a634d000/0x0/0x1bfc00000, data 0x2d64ee7/0x2f81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 78282752 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:07.843530+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 78282752 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d7e4a000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:08.843712+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365510656 unmapped: 78274560 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d7e4a000 session 0x5636d4f781e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d2272f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2a21800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:09.843864+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365518848 unmapped: 78266368 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:10.844041+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6249000/0x0/0x1bfc00000, data 0x2e67f49/0x3085000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 365518848 unmapped: 78266368 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4135743 data_alloc: 218103808 data_used: 8249344
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:11.844182+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 367321088 unmapped: 76464128 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:12.844327+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 75120640 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:13.844454+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 75120640 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:14.844577+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.871390343s of 12.730360031s, submitted: 48
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 75120640 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:15.844722+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 75120640 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6249000/0x0/0x1bfc00000, data 0x2e67f49/0x3085000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4223263 data_alloc: 234881024 data_used: 19980288
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:16.844869+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 75120640 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:17.845029+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d4f78b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 368672768 unmapped: 75112448 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:18.845197+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 368672768 unmapped: 75112448 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:19.845371+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6249000/0x0/0x1bfc00000, data 0x2e67f49/0x3085000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 368672768 unmapped: 75112448 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:20.845512+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 74686464 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4231740 data_alloc: 234881024 data_used: 21028864
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:21.845633+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 74686464 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:22.845789+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6249000/0x0/0x1bfc00000, data 0x2e67f49/0x3085000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 369098752 unmapped: 74686464 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:23.845957+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371048448 unmapped: 72736768 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:24.846093+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371048448 unmapped: 72736768 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5d6a000/0x0/0x1bfc00000, data 0x3346f49/0x3564000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:25.846299+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371048448 unmapped: 72736768 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4269540 data_alloc: 234881024 data_used: 21262336
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:26.846401+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.272833824s of 11.732513428s, submitted: 52
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372097024 unmapped: 71688192 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:27.846536+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372097024 unmapped: 71688192 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:28.846662+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5d61000/0x0/0x1bfc00000, data 0x334ef49/0x356c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372097024 unmapped: 71688192 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:29.846808+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372097024 unmapped: 71688192 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:30.846972+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372097024 unmapped: 71688192 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4270186 data_alloc: 234881024 data_used: 21274624
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:31.847171+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373366784 unmapped: 70418432 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:32.847341+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373366784 unmapped: 70418432 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:33.847480+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5aea000/0x0/0x1bfc00000, data 0x35c6f49/0x37e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373366784 unmapped: 70418432 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:34.847608+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373366784 unmapped: 70418432 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:35.847747+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373383168 unmapped: 70402048 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4286902 data_alloc: 234881024 data_used: 21348352
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:36.847900+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.709306240s of 10.282056808s, submitted: 31
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2a21800 session 0x5636d50f03c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373415936 unmapped: 70369280 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f52c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:37.848028+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 70352896 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a612e000/0x0/0x1bfc00000, data 0x2405f49/0x2623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:38.848128+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 70352896 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:39.848272+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f52c00 session 0x5636d4e62b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 70352896 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:40.848390+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 70352896 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4073402 data_alloc: 218103808 data_used: 9273344
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:41.848589+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 70352896 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6cab000/0x0/0x1bfc00000, data 0x2405f49/0x2623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:42.848704+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 70352896 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:43.848855+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1ec00 session 0x5636d5033e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4e62f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373432320 unmapped: 70352896 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d50f10e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:44.848967+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373448704 unmapped: 70336512 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:45.849161+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373448704 unmapped: 70336512 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3985645 data_alloc: 218103808 data_used: 4415488
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:46.849301+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a7351000/0x0/0x1bfc00000, data 0x1d3ced7/0x1f58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373456896 unmapped: 70328320 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:47.849478+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373456896 unmapped: 70328320 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:48.849614+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.117572784s of 12.014616966s, submitted: 44
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d2940000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d2029a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373456896 unmapped: 70328320 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2a21800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:49.849794+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2a21800 session 0x5636d2eecf00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 72335360 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:50.849961+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a76f7000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 72335360 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3946743 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:51.850176+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 72335360 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:52.850315+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 72335360 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:53.850498+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a76f7000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 72335360 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:54.850666+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 72335360 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:55.850797+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 72335360 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3946743 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:56.850973+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 72335360 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:57.851195+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 72335360 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:58.851601+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 72335360 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a76f7000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:05:59.851771+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 72335360 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:00.851884+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371449856 unmapped: 72335360 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3946743 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:01.852033+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 72318976 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:02.852179+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 72318976 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:03.852461+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 72318976 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:04.852611+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a76f7000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 72318976 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:05.852789+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 72318976 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a76f7000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3946743 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:06.852939+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 72318976 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:07.853164+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 72318976 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:08.853280+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371466240 unmapped: 72318976 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:09.853423+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371474432 unmapped: 72310784 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:10.853582+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a76f7000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371474432 unmapped: 72310784 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3946743 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:11.853726+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371474432 unmapped: 72310784 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:12.853881+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371474432 unmapped: 72310784 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:13.854023+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371474432 unmapped: 72310784 heap: 443785216 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.235162735s of 25.344781876s, submitted: 31
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d20a5680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:14.854157+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371482624 unmapped: 76505088 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:15.854295+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6c28000/0x0/0x1bfc00000, data 0x248be75/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371499008 unmapped: 76488704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4025617 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:16.854616+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371499008 unmapped: 76488704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:17.854746+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6c28000/0x0/0x1bfc00000, data 0x248be75/0x26a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d50f0b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d5032960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371523584 unmapped: 76464128 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:18.854912+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371523584 unmapped: 76464128 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:19.855073+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371523584 unmapped: 76464128 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:20.855219+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d5300960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371802112 unmapped: 76185600 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:21.855376+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4082311 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a65dc000/0x0/0x1bfc00000, data 0x2ad6ed7/0x2cf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371818496 unmapped: 76169216 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:22.855526+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f52c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 371826688 unmapped: 76161024 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:23.855652+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a65dc000/0x0/0x1bfc00000, data 0x2ad6ed7/0x2cf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d46e7400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d46e7400 session 0x5636d45170e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372531200 unmapped: 75456512 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:24.855966+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d502d2c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a65dc000/0x0/0x1bfc00000, data 0x2ad6ed7/0x2cf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372531200 unmapped: 75456512 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:25.856119+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d53005a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.403267860s of 11.564950943s, submitted: 46
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d28e8d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372531200 unmapped: 75456512 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ac00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:26.856267+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4164949 data_alloc: 234881024 data_used: 14618624
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 372563968 unmapped: 75423744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:27.856377+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375316480 unmapped: 72671232 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:28.856505+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a65db000/0x0/0x1bfc00000, data 0x2ad6ee7/0x2cf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375316480 unmapped: 72671232 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:29.856658+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375316480 unmapped: 72671232 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:30.856834+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375316480 unmapped: 72671232 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:31.856982+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4202069 data_alloc: 234881024 data_used: 19927040
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375316480 unmapped: 72671232 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:32.857142+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375316480 unmapped: 72671232 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:33.857280+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375316480 unmapped: 72671232 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a65db000/0x0/0x1bfc00000, data 0x2ad6ee7/0x2cf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:34.857396+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377511936 unmapped: 70475776 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:35.857520+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5b8f000/0x0/0x1bfc00000, data 0x3522ee7/0x373f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5b4e000/0x0/0x1bfc00000, data 0x3561ee7/0x377e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378773504 unmapped: 69214208 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:36.857719+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4298421 data_alloc: 234881024 data_used: 20406272
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378773504 unmapped: 69214208 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:37.857854+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378773504 unmapped: 69214208 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:38.857986+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.204916000s of 13.398816109s, submitted: 64
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379019264 unmapped: 68968448 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:39.858213+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379551744 unmapped: 68435968 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:40.858401+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5603000/0x0/0x1bfc00000, data 0x3aa5ee7/0x3cc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380928000 unmapped: 67059712 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:41.858582+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4354569 data_alloc: 234881024 data_used: 21159936
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380928000 unmapped: 67059712 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:42.858746+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380928000 unmapped: 67059712 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:43.858902+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380928000 unmapped: 67059712 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:44.859087+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380928000 unmapped: 67059712 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:45.859312+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380928000 unmapped: 67059712 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:46.859477+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4354569 data_alloc: 234881024 data_used: 21159936
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a55f7000/0x0/0x1bfc00000, data 0x3aabee7/0x3cc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380928000 unmapped: 67059712 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:47.859696+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380928000 unmapped: 67059712 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:48.859915+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a55f7000/0x0/0x1bfc00000, data 0x3aabee7/0x3cc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380928000 unmapped: 67059712 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:49.860186+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.878812790s of 11.064892769s, submitted: 72
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380977152 unmapped: 67010560 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:50.860348+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380977152 unmapped: 67010560 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:51.860521+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4348441 data_alloc: 234881024 data_used: 21164032
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380977152 unmapped: 67010560 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:52.860760+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d20a4960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1ac00 session 0x5636d4e63680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380977152 unmapped: 67010560 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:53.860908+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e67000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5606000/0x0/0x1bfc00000, data 0x3aabee7/0x3cc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380977152 unmapped: 67010560 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:54.861045+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380985344 unmapped: 67002368 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:55.861262+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380985344 unmapped: 67002368 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:56.861395+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349809 data_alloc: 234881024 data_used: 21323776
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380985344 unmapped: 67002368 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:57.861550+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5606000/0x0/0x1bfc00000, data 0x3aabee7/0x3cc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380985344 unmapped: 67002368 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:58.861711+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380985344 unmapped: 67002368 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:06:59.861910+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380985344 unmapped: 67002368 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:00.862035+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5606000/0x0/0x1bfc00000, data 0x3aabee7/0x3cc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380985344 unmapped: 67002368 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:01.862245+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349809 data_alloc: 234881024 data_used: 21323776
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380985344 unmapped: 67002368 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:02.862442+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380985344 unmapped: 67002368 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:03.862678+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5606000/0x0/0x1bfc00000, data 0x3aabee7/0x3cc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380985344 unmapped: 67002368 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:04.862883+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380985344 unmapped: 67002368 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:05.863080+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.647557259s of 15.686962128s, submitted: 25
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381001728 unmapped: 66985984 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:06.863173+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4356169 data_alloc: 234881024 data_used: 21753856
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381001728 unmapped: 66985984 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:07.863322+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381001728 unmapped: 66985984 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:08.863465+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381001728 unmapped: 66985984 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5606000/0x0/0x1bfc00000, data 0x3aabee7/0x3cc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:09.863710+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381001728 unmapped: 66985984 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:10.863838+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381001728 unmapped: 66985984 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:11.864011+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4356489 data_alloc: 234881024 data_used: 21835776
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5606000/0x0/0x1bfc00000, data 0x3aabee7/0x3cc8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d53001e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d4516780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4335c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4335c00 session 0x5636d29410e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d2199c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d50b2c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381018112 unmapped: 66969600 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d50b2c00 session 0x5636d5230d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d53005a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:12.864135+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4335c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4335c00 session 0x5636d5300960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d20a5680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d50f10e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381026304 unmapped: 66961408 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:13.864273+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381026304 unmapped: 66961408 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:14.864457+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381026304 unmapped: 66961408 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:15.864647+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4e8f000/0x0/0x1bfc00000, data 0x3e11ef7/0x402f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381034496 unmapped: 66953216 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:16.864840+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386124 data_alloc: 234881024 data_used: 21835776
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.855154991s of 10.953269958s, submitted: 35
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d1e67000 session 0x5636d53270e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d2272000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381034496 unmapped: 66953216 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:17.864992+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d502de00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:18.865159+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:19.865349+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:20.865531+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4335c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:21.865750+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4242903 data_alloc: 234881024 data_used: 15097856
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a592d000/0x0/0x1bfc00000, data 0x32ace85/0x34c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:22.865869+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:23.866035+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:24.866170+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:25.866329+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:26.866527+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4266423 data_alloc: 234881024 data_used: 18370560
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a592d000/0x0/0x1bfc00000, data 0x32ace85/0x34c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:27.866733+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:28.866892+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:29.867094+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:30.867297+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:31.867498+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4266423 data_alloc: 234881024 data_used: 18370560
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:32.868040+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377683968 unmapped: 70303744 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a592d000/0x0/0x1bfc00000, data 0x32ace85/0x34c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.316431046s of 16.381662369s, submitted: 21
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:33.868465+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378191872 unmapped: 69795840 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:34.868835+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 68657152 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:35.869155+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379756544 unmapped: 68231168 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a576b000/0x0/0x1bfc00000, data 0x352fe85/0x374b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:36.870302+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379756544 unmapped: 68231168 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4293929 data_alloc: 234881024 data_used: 18391040
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:37.871307+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379084800 unmapped: 68902912 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:38.872566+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379084800 unmapped: 68902912 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:39.874169+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379084800 unmapped: 68902912 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5773000/0x0/0x1bfc00000, data 0x352fe85/0x374b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:40.874309+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379084800 unmapped: 68902912 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5773000/0x0/0x1bfc00000, data 0x352fe85/0x374b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:41.875630+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4289693 data_alloc: 234881024 data_used: 18374656
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:42.876063+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 55K writes, 204K keys, 55K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 55K writes, 21K syncs, 2.63 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3291 writes, 12K keys, 3291 commit groups, 1.0 writes per commit group, ingest: 15.21 MB, 0.03 MB/s
                                           Interval WAL: 3291 writes, 1292 syncs, 2.55 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ecdd0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5636d05ec430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:43.876408+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:44.876992+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5752000/0x0/0x1bfc00000, data 0x3550e85/0x376c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:45.878054+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:46.878209+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4289693 data_alloc: 234881024 data_used: 18374656
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5752000/0x0/0x1bfc00000, data 0x3550e85/0x376c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5752000/0x0/0x1bfc00000, data 0x3550e85/0x376c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:47.879059+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:48.879234+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:49.879550+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:50.879758+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:51.879931+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4289693 data_alloc: 234881024 data_used: 18374656
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.836980820s of 19.065063477s, submitted: 61
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:52.880171+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5752000/0x0/0x1bfc00000, data 0x3550e85/0x376c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:53.880452+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379092992 unmapped: 68894720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a574f000/0x0/0x1bfc00000, data 0x3553e85/0x376f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:54.880603+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 68886528 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a574f000/0x0/0x1bfc00000, data 0x3553e85/0x376f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:55.880871+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 68886528 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:56.881049+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379101184 unmapped: 68886528 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4290265 data_alloc: 234881024 data_used: 18374656
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a574f000/0x0/0x1bfc00000, data 0x3553e85/0x376f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:57.881419+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379109376 unmapped: 68878336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:58.881600+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379117568 unmapped: 68870144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d4708b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:07:59.882019+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379117568 unmapped: 68870144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:00.882181+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a56cc000/0x0/0x1bfc00000, data 0x35d6e85/0x37f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379117568 unmapped: 68870144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a56cc000/0x0/0x1bfc00000, data 0x35d6e85/0x37f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:01.882366+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379117568 unmapped: 68870144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296307 data_alloc: 234881024 data_used: 18374656
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:02.882531+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379125760 unmapped: 68861952 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:03.882708+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379125760 unmapped: 68861952 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:04.882863+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d47092c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379125760 unmapped: 68861952 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a29000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a29000 session 0x5636d4708960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:05.883006+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.093201637s of 13.127966881s, submitted: 8
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379125760 unmapped: 68861952 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a29000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a29000 session 0x5636d2d4b2c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d502c960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a56cc000/0x0/0x1bfc00000, data 0x35d6e85/0x37f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:06.883083+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 68714496 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4300709 data_alloc: 234881024 data_used: 18374656
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:07.883239+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a56a7000/0x0/0x1bfc00000, data 0x35fae95/0x3817000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 68714496 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:08.883383+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 68714496 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:09.883563+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379273216 unmapped: 68714496 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4335c00 session 0x5636d4bb85a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:10.883693+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378290176 unmapped: 69697536 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d538e000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:11.883824+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378298368 unmapped: 69689344 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4232111 data_alloc: 234881024 data_used: 15577088
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:12.883955+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378298368 unmapped: 69689344 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5cb5000/0x0/0x1bfc00000, data 0x2fede85/0x3209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:13.884095+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378298368 unmapped: 69689344 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5cb5000/0x0/0x1bfc00000, data 0x2fede85/0x3209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:14.884233+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378298368 unmapped: 69689344 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:15.884396+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378298368 unmapped: 69689344 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:16.884545+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378298368 unmapped: 69689344 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4232463 data_alloc: 234881024 data_used: 15577088
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5cb5000/0x0/0x1bfc00000, data 0x2fede85/0x3209000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:17.884705+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378298368 unmapped: 69689344 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:18.884836+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378298368 unmapped: 69689344 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.740122795s of 13.922272682s, submitted: 30
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:19.885018+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378724352 unmapped: 69263360 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:20.885158+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f52c00 session 0x5636d4bb8f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d2d4ab40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378732544 unmapped: 69255168 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4da92c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:21.885309+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376004608 unmapped: 71983104 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4022275 data_alloc: 218103808 data_used: 3919872
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a64e2000/0x0/0x1bfc00000, data 0x1d25e85/0x1f41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:22.885525+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376004608 unmapped: 71983104 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:23.885686+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376004608 unmapped: 71983104 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a64e2000/0x0/0x1bfc00000, data 0x1d01e85/0x1f1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:24.885844+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376004608 unmapped: 71983104 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:25.885998+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376004608 unmapped: 71983104 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:26.886175+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376004608 unmapped: 71983104 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4022275 data_alloc: 218103808 data_used: 3919872
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:27.886295+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376004608 unmapped: 71983104 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:28.886468+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a64e2000/0x0/0x1bfc00000, data 0x1d01e85/0x1f1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376004608 unmapped: 71983104 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:29.886635+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376004608 unmapped: 71983104 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:30.886782+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376004608 unmapped: 71983104 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:31.886921+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376004608 unmapped: 71983104 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4022275 data_alloc: 218103808 data_used: 3919872
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d2272f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d20821e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:32.887081+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a64e2000/0x0/0x1bfc00000, data 0x1d01e85/0x1f1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376004608 unmapped: 71983104 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a29000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.600057602s of 13.712610245s, submitted: 45
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:33.887289+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a29000 session 0x5636d4f96000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376012800 unmapped: 71974912 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d2ece1e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:34.887507+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376020992 unmapped: 71966720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d44861e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:35.887701+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376020992 unmapped: 71966720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d5300d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:36.888023+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6a12000/0x0/0x1bfc00000, data 0x2291e75/0x24ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d4f96f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376020992 unmapped: 71966720 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4335c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4046509 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4335c00 session 0x5636d3c8be00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:37.888158+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376176640 unmapped: 71811072 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:38.888364+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376176640 unmapped: 71811072 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:39.888603+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 70918144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:40.888762+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 70918144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a69ed000/0x0/0x1bfc00000, data 0x22b5e85/0x24d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:41.888918+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 70918144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4115153 data_alloc: 218103808 data_used: 12541952
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:42.889066+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 70918144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:43.889193+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 70918144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:44.889347+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 70918144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:45.889483+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 70918144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:46.889638+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 70918144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4115153 data_alloc: 218103808 data_used: 12541952
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a69ed000/0x0/0x1bfc00000, data 0x22b5e85/0x24d1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:47.889747+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 70918144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:48.889916+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 70918144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:49.890174+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377069568 unmapped: 70918144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.681131363s of 16.758640289s, submitted: 19
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:50.890361+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379535360 unmapped: 68452352 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d5327a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d2029e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d2ed14a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d5353000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d5353000 session 0x5636d2daed20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a658d000/0x0/0x1bfc00000, data 0x2715e85/0x2931000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c92c00 session 0x5636d2ecef00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:51.890517+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d21bd0e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d4f790e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d5353000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d5353000 session 0x5636d4e62b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d5300b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 68296704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4187550 data_alloc: 234881024 data_used: 12988416
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:52.890677+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 68296704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:53.890824+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 68296704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:54.890986+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 68296704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a62da000/0x0/0x1bfc00000, data 0x29c7e95/0x2be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4166000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4166000 session 0x5636d2a425a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:55.891199+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 68296704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d5326d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:56.891425+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 68296704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4187550 data_alloc: 234881024 data_used: 12988416
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:57.891556+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 68296704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a62da000/0x0/0x1bfc00000, data 0x29c7e95/0x2be4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:58.891685+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d29401e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d5353000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379691008 unmapped: 68296704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d5353000 session 0x5636d2ec0b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:08:59.891829+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379707392 unmapped: 68280320 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:00.891988+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379846656 unmapped: 68141056 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:01.892186+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d2ece5a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d2e4bc00 session 0x5636d50f05a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 67878912 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4207202 data_alloc: 234881024 data_used: 15687680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.239658356s of 12.413492203s, submitted: 58
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:02.892358+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d3a6cf00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377618432 unmapped: 70369280 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:03.892575+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a7037000/0x0/0x1bfc00000, data 0x1c6be85/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377618432 unmapped: 70369280 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:04.892701+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377618432 unmapped: 70369280 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:05.892809+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377618432 unmapped: 70369280 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:06.892988+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a7037000/0x0/0x1bfc00000, data 0x1c6be85/0x1e87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377618432 unmapped: 70369280 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4036345 data_alloc: 218103808 data_used: 5971968
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:07.893152+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d44863c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377618432 unmapped: 70369280 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:08.893290+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d44861e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:09.893454+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:10.893582+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:11.893728+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3991205 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:12.893940+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a72e8000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:13.894129+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:14.894263+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a72e8000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:15.894373+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a72e8000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:16.894622+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45629 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3991205 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:17.894783+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:18.894947+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a72e8000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:19.895153+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:20.895281+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 70352896 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:21.895409+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377643008 unmapped: 70344704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3991205 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:22.895612+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377643008 unmapped: 70344704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:23.895765+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377643008 unmapped: 70344704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a72e8000/0x0/0x1bfc00000, data 0x19bbe75/0x1bd6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:24.895934+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377643008 unmapped: 70344704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:25.896066+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377643008 unmapped: 70344704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:26.896167+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d53270e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d5353000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d5353000 session 0x5636d20a5680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d5300960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d53005a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377643008 unmapped: 70344704 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.646026611s of 24.730895996s, submitted: 32
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4059270 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d5230d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d4709c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3b400 session 0x5636d523c3c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d2ecc780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d2ec12c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:27.896283+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377651200 unmapped: 70336512 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:28.896463+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6bca000/0x0/0x1bfc00000, data 0x20d7ee6/0x22f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377651200 unmapped: 70336512 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:29.896663+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377659392 unmapped: 70328320 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:30.896808+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377659392 unmapped: 70328320 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:31.897019+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377659392 unmapped: 70328320 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d5297680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4050094 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:32.897191+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d2ecf4a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6bca000/0x0/0x1bfc00000, data 0x20d7ee6/0x22f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377659392 unmapped: 70328320 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:33.897331+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3afe000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3afe000 session 0x5636d44874a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4e5f4a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377659392 unmapped: 70328320 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:34.897517+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377659392 unmapped: 70328320 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:35.897704+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d2eec1e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d75a4400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d75a4400 session 0x5636d4bb8000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d631b400 session 0x5636d5301e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d47b7000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377692160 unmapped: 70295552 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d47b7000 session 0x5636d4709860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d2028000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:36.897865+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d52972c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d3a6cd20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377692160 unmapped: 70295552 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.833073616s of 10.046889305s, submitted: 62
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4161364 data_alloc: 218103808 data_used: 10723328
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d631b400 session 0x5636d523a5a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:37.898066+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373727232 unmapped: 74260480 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6b4c000/0x0/0x1bfc00000, data 0x2155ed7/0x2371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:38.898215+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373727232 unmapped: 74260480 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:39.898416+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373727232 unmapped: 74260480 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:40.898562+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373727232 unmapped: 74260480 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:41.898849+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373727232 unmapped: 74260480 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4056182 data_alloc: 218103808 data_used: 3272704
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6b4c000/0x0/0x1bfc00000, data 0x2155ed7/0x2371000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:42.899076+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d6c93000 session 0x5636d4e3d860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373727232 unmapped: 74260480 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:43.899170+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373727232 unmapped: 74260480 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:44.899460+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373727232 unmapped: 74260480 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:45.899719+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6b4c000/0x0/0x1bfc00000, data 0x2155efa/0x2372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373743616 unmapped: 74244096 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:46.899978+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373743616 unmapped: 74244096 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4114271 data_alloc: 218103808 data_used: 11243520
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:47.900217+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373743616 unmapped: 74244096 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:48.900498+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373743616 unmapped: 74244096 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:49.900794+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373743616 unmapped: 74244096 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:50.900981+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373743616 unmapped: 74244096 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:51.901157+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a6b4c000/0x0/0x1bfc00000, data 0x2155efa/0x2372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x16d3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373743616 unmapped: 74244096 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4114271 data_alloc: 218103808 data_used: 11243520
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:52.901404+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373743616 unmapped: 74244096 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:53.901654+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373743616 unmapped: 74244096 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:54.901872+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 373743616 unmapped: 74244096 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:55.902060+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.707136154s of 18.781089783s, submitted: 22
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376029184 unmapped: 71958528 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5691000/0x0/0x1bfc00000, data 0x2470efa/0x268d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:56.902265+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 71950336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4140087 data_alloc: 218103808 data_used: 11522048
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:57.902500+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 71909376 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5687000/0x0/0x1bfc00000, data 0x247aefa/0x2697000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:58.902684+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 71909376 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:09:59.902886+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 71909376 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:00.903030+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 71909376 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:01.903257+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 71909376 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4146119 data_alloc: 218103808 data_used: 11649024
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:02.903511+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 71909376 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:03.903682+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5687000/0x0/0x1bfc00000, data 0x247aefa/0x2697000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 71909376 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:04.903893+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 71909376 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:05.904092+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 71909376 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:06.904293+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 71909376 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4146119 data_alloc: 218103808 data_used: 11649024
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:07.904429+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376078336 unmapped: 71909376 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:08.904589+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5687000/0x0/0x1bfc00000, data 0x247aefa/0x2697000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 71901184 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:09.904795+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 71901184 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5687000/0x0/0x1bfc00000, data 0x247aefa/0x2697000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:10.904978+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 71901184 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:11.905134+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 71901184 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4146119 data_alloc: 218103808 data_used: 11649024
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:12.905282+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5687000/0x0/0x1bfc00000, data 0x247aefa/0x2697000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 71901184 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:13.905475+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 71901184 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:14.905621+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 71901184 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:15.905777+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5687000/0x0/0x1bfc00000, data 0x247aefa/0x2697000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 71901184 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:16.905987+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376086528 unmapped: 71901184 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4146119 data_alloc: 218103808 data_used: 11649024
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:17.906213+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 71892992 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:18.906373+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 71892992 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:19.906577+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 71892992 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5687000/0x0/0x1bfc00000, data 0x247aefa/0x2697000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.394876480s of 24.501911163s, submitted: 44
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:20.906778+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 71892992 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:21.907045+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 71892992 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4146427 data_alloc: 218103808 data_used: 11649024
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:22.907229+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4e63e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 71892992 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5685000/0x0/0x1bfc00000, data 0x247befa/0x2698000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:23.907408+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 71892992 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:24.907578+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5686000/0x0/0x1bfc00000, data 0x247befa/0x2698000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376094720 unmapped: 71892992 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5686000/0x0/0x1bfc00000, data 0x247befa/0x2698000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:25.907732+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376102912 unmapped: 71884800 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:26.907874+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 ms_handle_reset con 0x5636d3a22000 session 0x5636d4bb8960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d2a941e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 ms_handle_reset con 0x5636d631b400 session 0x5636d2ec12c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d75a4400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 ms_handle_reset con 0x5636d75a4400 session 0x5636d2ec0b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376119296 unmapped: 71868416 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4150857 data_alloc: 218103808 data_used: 11657216
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:27.908056+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d2d714a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376119296 unmapped: 71868416 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:28.908186+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4f96000/0x0/0x1bfc00000, data 0x2b69c28/0x2d88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376119296 unmapped: 71868416 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:29.908381+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376119296 unmapped: 71868416 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:30.908539+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376135680 unmapped: 71852032 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:31.908681+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376135680 unmapped: 71852032 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4206829 data_alloc: 218103808 data_used: 11681792
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4f96000/0x0/0x1bfc00000, data 0x2b69c28/0x2d88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:32.908800+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376135680 unmapped: 71852032 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:33.908952+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376135680 unmapped: 71852032 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:34.909088+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d20283c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 ms_handle_reset con 0x5636d631b400 session 0x5636d4f78960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376135680 unmapped: 71852032 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:35.909242+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d75a4400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 ms_handle_reset con 0x5636d75a4400 session 0x5636d5032b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d7e4b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.372348785s of 15.480269432s, submitted: 29
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4d73c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 ms_handle_reset con 0x5636d4d73c00 session 0x5636d4516d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4bb90e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376143872 unmapped: 71843840 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:36.909375+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4f97000/0x0/0x1bfc00000, data 0x2b69bc6/0x2d87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 414 handle_osd_map epochs [415,415], i have 415, src has [1,415]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 415 ms_handle_reset con 0x5636d7e4b400 session 0x5636d5033860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376152064 unmapped: 71835648 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4211022 data_alloc: 218103808 data_used: 11694080
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:37.909516+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377856000 unmapped: 70131712 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:38.909728+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377888768 unmapped: 70098944 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a4f93000/0x0/0x1bfc00000, data 0x2b6b8e6/0x2d8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [1])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:39.909971+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377872384 unmapped: 70115328 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:40.910148+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377905152 unmapped: 70082560 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:41.910294+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377946112 unmapped: 70041600 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4261938 data_alloc: 234881024 data_used: 18345984
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:42.910451+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377946112 unmapped: 70041600 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:43.910618+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a4f92000/0x0/0x1bfc00000, data 0x2b6b8e6/0x2d8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 415 handle_osd_map epochs [416,416], i have 416, src has [1,416]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377946112 unmapped: 70041600 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:44.910811+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377978880 unmapped: 70008832 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:45.910959+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377995264 unmapped: 69992448 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:46.911073+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.507719040s of 11.348558426s, submitted: 257
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377995264 unmapped: 69992448 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4265376 data_alloc: 234881024 data_used: 18362368
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:47.911252+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377995264 unmapped: 69992448 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:48.911372+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 379379712 unmapped: 68608000 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:49.911545+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4f91000/0x0/0x1bfc00000, data 0x2b6d498/0x2d8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380616704 unmapped: 67371008 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:50.911653+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381050880 unmapped: 66936832 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:51.911842+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d20a4b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381050880 unmapped: 66936832 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d75a4400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305252 data_alloc: 234881024 data_used: 18567168
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:52.911966+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d75a4400 session 0x5636d4bb8b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 69246976 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:53.912118+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 69246976 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:54.912308+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a5155000/0x0/0x1bfc00000, data 0x262d413/0x284b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 69246976 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:55.912461+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 69246976 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a54b2000/0x0/0x1bfc00000, data 0x264e413/0x286c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:56.912589+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a54b2000/0x0/0x1bfc00000, data 0x264e413/0x286c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 69246976 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4158144 data_alloc: 218103808 data_used: 9433088
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:57.912729+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 69246976 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:58.912916+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 69246976 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:10:59.913135+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 69246976 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:00.913313+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 69246976 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:01.913482+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a54b2000/0x0/0x1bfc00000, data 0x264e413/0x286c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378224640 unmapped: 69763072 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4158144 data_alloc: 218103808 data_used: 9433088
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:02.913680+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.899720192s of 15.479844093s, submitted: 202
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378232832 unmapped: 69754880 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a54a4000/0x0/0x1bfc00000, data 0x265c413/0x287a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:03.913825+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378232832 unmapped: 69754880 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:04.913948+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378232832 unmapped: 69754880 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d4f3dc00 session 0x5636d2940000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:05.914164+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d631b400 session 0x5636d2ccc960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378249216 unmapped: 69738496 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:06.914357+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378249216 unmapped: 69738496 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a54a4000/0x0/0x1bfc00000, data 0x265c413/0x287a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:07.914581+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4158756 data_alloc: 218103808 data_used: 9433088
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d2ec05a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375013376 unmapped: 72974336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:08.914799+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375013376 unmapped: 72974336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:09.914985+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375013376 unmapped: 72974336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:10.915176+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375013376 unmapped: 72974336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:11.915338+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a613f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375013376 unmapped: 72974336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:12.915530+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4023412 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375013376 unmapped: 72974336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:13.915673+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375013376 unmapped: 72974336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:14.915881+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375013376 unmapped: 72974336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:15.916044+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a613f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375013376 unmapped: 72974336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:16.916242+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375013376 unmapped: 72974336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:17.916419+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4023412 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375013376 unmapped: 72974336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:18.916570+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375013376 unmapped: 72974336 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:19.916784+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a613f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375021568 unmapped: 72966144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:20.916993+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375021568 unmapped: 72966144 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:21.917161+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375029760 unmapped: 72957952 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:22.917343+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4023412 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375029760 unmapped: 72957952 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:23.917511+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375029760 unmapped: 72957952 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:24.917670+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a613f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375029760 unmapped: 72957952 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:25.917801+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375029760 unmapped: 72957952 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:26.917952+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375029760 unmapped: 72957952 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:27.918161+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4023412 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375029760 unmapped: 72957952 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:28.918368+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375029760 unmapped: 72957952 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:29.918543+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a613f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375046144 unmapped: 72941568 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:30.918690+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375046144 unmapped: 72941568 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:31.918888+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375046144 unmapped: 72941568 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:32.919078+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4023412 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a613f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375046144 unmapped: 72941568 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:33.919223+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375046144 unmapped: 72941568 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:34.919369+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375046144 unmapped: 72941568 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:35.919537+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375046144 unmapped: 72941568 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:36.919714+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a613f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375046144 unmapped: 72941568 heap: 447987712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:37.919858+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.870098114s of 34.938060760s, submitted: 24
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117244 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d2a425a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375054336 unmapped: 77135872 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:38.920002+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375054336 unmapped: 77135872 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:39.920175+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375054336 unmapped: 77135872 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:40.920318+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a546b000/0x0/0x1bfc00000, data 0x2695413/0x28b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375054336 unmapped: 77135872 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:41.920472+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375054336 unmapped: 77135872 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:42.920598+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117490 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375054336 unmapped: 77135872 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:43.920748+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375054336 unmapped: 77135872 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:44.920939+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375054336 unmapped: 77135872 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:45.921136+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a546b000/0x0/0x1bfc00000, data 0x2695413/0x28b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375062528 unmapped: 77127680 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:46.921388+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375062528 unmapped: 77127680 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:47.921524+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117490 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a546b000/0x0/0x1bfc00000, data 0x2695413/0x28b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375062528 unmapped: 77127680 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:48.921763+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375062528 unmapped: 77127680 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:49.921957+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a546b000/0x0/0x1bfc00000, data 0x2695413/0x28b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375062528 unmapped: 77127680 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:50.922184+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375062528 unmapped: 77127680 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:51.922562+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375062528 unmapped: 77127680 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 06 08:30:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661017523' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:52.922804+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4117490 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d75a4400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375062528 unmapped: 77127680 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:53.922950+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375365632 unmapped: 76824576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:54.923867+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:55.924273+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375365632 unmapped: 76824576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a546b000/0x0/0x1bfc00000, data 0x2695413/0x28b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:56.924673+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375365632 unmapped: 76824576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:57.925349+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375365632 unmapped: 76824576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4212530 data_alloc: 234881024 data_used: 16187392
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:58.925879+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375365632 unmapped: 76824576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:11:59.926495+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375365632 unmapped: 76824576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:00.926839+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375365632 unmapped: 76824576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a546b000/0x0/0x1bfc00000, data 0x2695413/0x28b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:01.926984+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375365632 unmapped: 76824576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:02.927179+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375373824 unmapped: 76816384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4212530 data_alloc: 234881024 data_used: 16187392
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:03.927354+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375373824 unmapped: 76816384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:04.927745+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 375373824 unmapped: 76816384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.746597290s of 27.803609848s, submitted: 7
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:05.927884+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378462208 unmapped: 73728000 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:06.928146+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377937920 unmapped: 74252288 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4a26000/0x0/0x1bfc00000, data 0x30da413/0x32f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [0,0,1])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:07.928493+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380084224 unmapped: 72105984 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4a26000/0x0/0x1bfc00000, data 0x30da413/0x32f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4309574 data_alloc: 234881024 data_used: 17457152
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:08.928764+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380084224 unmapped: 72105984 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:09.929156+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380084224 unmapped: 72105984 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:10.929395+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380084224 unmapped: 72105984 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:11.929543+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380084224 unmapped: 72105984 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:12.929687+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380084224 unmapped: 72105984 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4310090 data_alloc: 234881024 data_used: 17494016
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3861000/0x0/0x1bfc00000, data 0x30ff413/0x331d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:13.929823+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a385d000/0x0/0x1bfc00000, data 0x3103413/0x3321000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380084224 unmapped: 72105984 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:14.929962+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380084224 unmapped: 72105984 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:15.930122+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380084224 unmapped: 72105984 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a385d000/0x0/0x1bfc00000, data 0x3103413/0x3321000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:16.930257+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380084224 unmapped: 72105984 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d75a4400 session 0x5636d4bb8000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:17.930377+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380084224 unmapped: 72105984 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4310090 data_alloc: 234881024 data_used: 17494016
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d7e4b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.452485085s of 12.634751320s, submitted: 80
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d7e4b400 session 0x5636d2ecf4a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4e5fc20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d4517a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d631b400 session 0x5636d2ec0d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d75a4400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d75a4400 session 0x5636d502c5a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:18.930532+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380100608 unmapped: 72089600 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:19.930852+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380100608 unmapped: 72089600 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:20.931064+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a370d000/0x0/0x1bfc00000, data 0x3253413/0x3471000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380100608 unmapped: 72089600 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:21.931281+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380100608 unmapped: 72089600 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:22.931425+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380100608 unmapped: 72089600 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330911 data_alloc: 234881024 data_used: 17494016
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:23.931602+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380100608 unmapped: 72089600 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:24.931808+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380100608 unmapped: 72089600 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a370d000/0x0/0x1bfc00000, data 0x3253413/0x3471000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2a21c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d2a21c00 session 0x5636d2272b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:25.931950+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380100608 unmapped: 72089600 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:26.932146+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380100608 unmapped: 72089600 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:27.932266+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a370c000/0x0/0x1bfc00000, data 0x3253436/0x3472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4340934 data_alloc: 234881024 data_used: 19177472
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:28.932511+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:29.932719+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:30.932960+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a370c000/0x0/0x1bfc00000, data 0x3253436/0x3472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:31.933196+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a370c000/0x0/0x1bfc00000, data 0x3253436/0x3472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:32.933383+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4341094 data_alloc: 234881024 data_used: 19185664
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:33.933543+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:34.933721+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:35.933828+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a370c000/0x0/0x1bfc00000, data 0x3253436/0x3472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:36.934040+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a370c000/0x0/0x1bfc00000, data 0x3253436/0x3472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:37.934259+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4341094 data_alloc: 234881024 data_used: 19185664
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:38.934482+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.944215775s of 21.043718338s, submitted: 24
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:39.934660+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a370c000/0x0/0x1bfc00000, data 0x3253436/0x3472000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380223488 unmapped: 71966720 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:40.934806+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381165568 unmapped: 71024640 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:41.934991+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381509632 unmapped: 70680576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:42.935142+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382369792 unmapped: 69820416 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4419240 data_alloc: 234881024 data_used: 19595264
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d631b400 session 0x5636d4e3c000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d75a4400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d75a4400 session 0x5636d4e3c3c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6077400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:43.935255+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d6077400 session 0x5636d4e3d0e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d7e48c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d7e48c00 session 0x5636d4e3cd20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382951424 unmapped: 69238784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:44.935375+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382951424 unmapped: 69238784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:45.935597+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a2c80000/0x0/0x1bfc00000, data 0x3cd8446/0x3ef8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383123456 unmapped: 69066752 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:46.935723+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383246336 unmapped: 68943872 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a2ae3000/0x0/0x1bfc00000, data 0x3e75446/0x4095000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:47.935873+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383246336 unmapped: 68943872 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4454246 data_alloc: 234881024 data_used: 20770816
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d631bc00 session 0x5636d4e3c1e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6077400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d6077400 session 0x5636d47083c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d631b400 session 0x5636d502d2c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d75a4400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:48.936014+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383180800 unmapped: 69009408 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d75a4400 session 0x5636d50325a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d7e48c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d7e48c00 session 0x5636d538e5a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.252954483s of 10.239645958s, submitted: 109
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:49.936219+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384262144 unmapped: 67928064 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:50.936344+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: mgrc ms_handle_reset ms_handle_reset con 0x5636d3a1c800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/798720280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/798720280,v1:192.168.122.100:6801/798720280]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: get_auth_request con 0x5636d631bc00 auth_method 0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: mgrc handle_mgr_configure stats_period=5
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384540672 unmapped: 67649536 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d4da9a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a2a45000/0x0/0x1bfc00000, data 0x3f17446/0x4137000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:51.936549+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6077400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384540672 unmapped: 67649536 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d2d4dc00 session 0x5636d3a6d860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d28e6c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3cc1800 session 0x5636d50f1860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2d4dc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1e000 session 0x5636d50f1680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d75a4400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:52.936701+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 384573440 unmapped: 67616768 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4468680 data_alloc: 234881024 data_used: 20742144
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a2a33000/0x0/0x1bfc00000, data 0x3f23446/0x4143000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d7e48c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:53.936877+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d7e48c00 session 0x5636d45172c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:54.937003+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4334000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d47b6000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a415e000/0x0/0x1bfc00000, data 0x2800446/0x2a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:55.937134+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a415e000/0x0/0x1bfc00000, data 0x2800446/0x2a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:56.937278+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:57.937424+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4191626 data_alloc: 218103808 data_used: 6684672
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:58.937564+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:12:59.937740+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a415e000/0x0/0x1bfc00000, data 0x2800446/0x2a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a415e000/0x0/0x1bfc00000, data 0x2800446/0x2a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:00.937931+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:01.938170+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:02.938396+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4191626 data_alloc: 218103808 data_used: 6684672
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:03.938551+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:04.938679+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a415e000/0x0/0x1bfc00000, data 0x2800446/0x2a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:05.938902+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a415e000/0x0/0x1bfc00000, data 0x2800446/0x2a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377626624 unmapped: 74563584 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.567713737s of 16.769104004s, submitted: 32
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a415e000/0x0/0x1bfc00000, data 0x2800446/0x2a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:06.939029+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381173760 unmapped: 71016448 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:07.939238+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374571008 unmapped: 77619200 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4221200 data_alloc: 218103808 data_used: 6705152
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:08.943049+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374743040 unmapped: 77447168 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d6077400 session 0x5636d4e5e960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:09.943363+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376430592 unmapped: 75759616 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3c04000/0x0/0x1bfc00000, data 0x2d5a446/0x2f7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:10.943544+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376504320 unmapped: 75685888 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:11.943714+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376512512 unmapped: 75677696 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:12.943894+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 376512512 unmapped: 75677696 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4237230 data_alloc: 218103808 data_used: 7118848
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3c04000/0x0/0x1bfc00000, data 0x2d5a446/0x2f7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:13.944035+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:14.944205+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:15.944362+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3b84000/0x0/0x1bfc00000, data 0x2dda446/0x2ffa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:16.944502+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:17.944644+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4241400 data_alloc: 218103808 data_used: 7184384
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:18.944844+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:19.945062+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.114913940s of 13.826477051s, submitted: 68
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:20.945217+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:21.945332+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3b82000/0x0/0x1bfc00000, data 0x2ddc446/0x2ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:22.945476+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46480 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240320 data_alloc: 218103808 data_used: 7184384
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:23.945599+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:24.945751+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:25.945901+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377192448 unmapped: 74997760 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d4334000 session 0x5636d2ecde00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d47b6000 session 0x5636d5327e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d47b6000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:26.946066+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377200640 unmapped: 74989568 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d47b6000 session 0x5636d2a94d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3b5a000/0x0/0x1bfc00000, data 0x2e04446/0x3024000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:27.946211+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377200640 unmapped: 74989568 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4164060 data_alloc: 218103808 data_used: 5271552
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a42d2000/0x0/0x1bfc00000, data 0x2669436/0x2888000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:28.946407+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377200640 unmapped: 74989568 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:29.946544+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377200640 unmapped: 74989568 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:30.946696+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.404729843s of 10.684812546s, submitted: 18
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377200640 unmapped: 74989568 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a42f3000/0x0/0x1bfc00000, data 0x266c436/0x288b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:31.946853+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377200640 unmapped: 74989568 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d52961e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:32.946989+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377208832 unmapped: 74981376 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4163608 data_alloc: 218103808 data_used: 5271552
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:33.947176+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a42f3000/0x0/0x1bfc00000, data 0x266c436/0x288b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377208832 unmapped: 74981376 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:34.947300+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d2ed0b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377233408 unmapped: 74956800 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:35.947466+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377233408 unmapped: 74956800 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4f9e000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:36.947637+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377233408 unmapped: 74956800 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:37.947783+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377241600 unmapped: 74948608 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4049251 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:38.947919+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377241600 unmapped: 74948608 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:39.948090+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377241600 unmapped: 74948608 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:40.948305+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377241600 unmapped: 74948608 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:41.948438+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4334000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.786657333s of 10.997327805s, submitted: 37
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4f9e000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,4])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d4334000 session 0x5636d2ecd680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377257984 unmapped: 74932224 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:42.948753+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377257984 unmapped: 74932224 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4083059 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:43.948943+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377266176 unmapped: 74924032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:44.949095+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377266176 unmapped: 74924032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:45.949306+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b9c000/0x0/0x1bfc00000, data 0x1dc4413/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377274368 unmapped: 74915840 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:46.949471+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377274368 unmapped: 74915840 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:47.949619+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377274368 unmapped: 74915840 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4083059 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:48.949776+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377274368 unmapped: 74915840 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:49.949980+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377274368 unmapped: 74915840 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:50.950149+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377274368 unmapped: 74915840 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b9c000/0x0/0x1bfc00000, data 0x1dc4413/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6077400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:51.950254+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377282560 unmapped: 74907648 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:52.950441+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377282560 unmapped: 74907648 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4112339 data_alloc: 218103808 data_used: 7491584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:53.950734+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377282560 unmapped: 74907648 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:54.950869+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377282560 unmapped: 74907648 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:55.951068+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b9c000/0x0/0x1bfc00000, data 0x1dc4413/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377282560 unmapped: 74907648 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:56.951503+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377282560 unmapped: 74907648 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:57.951680+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377282560 unmapped: 74907648 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b9c000/0x0/0x1bfc00000, data 0x1dc4413/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4112339 data_alloc: 218103808 data_used: 7491584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:58.951904+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377282560 unmapped: 74907648 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:13:59.952152+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377282560 unmapped: 74907648 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:00.952291+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377282560 unmapped: 74907648 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:01.952467+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b9c000/0x0/0x1bfc00000, data 0x1dc4413/0x1fe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377290752 unmapped: 74899456 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:02.952639+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.260578156s of 21.444272995s, submitted: 5
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377741312 unmapped: 74448896 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4175675 data_alloc: 218103808 data_used: 7491584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:03.952807+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380452864 unmapped: 71737344 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:04.952954+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380461056 unmapped: 71729152 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d7e48c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:05.953088+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d7e48c00 session 0x5636d2a42960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380477440 unmapped: 71712768 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:06.953286+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380477440 unmapped: 71712768 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:07.953433+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3f7d000/0x0/0x1bfc00000, data 0x29e3413/0x2c01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380477440 unmapped: 71712768 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d2029c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4213605 data_alloc: 218103808 data_used: 7532544
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:08.953621+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380477440 unmapped: 71712768 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d2ece5a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:09.953792+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380485632 unmapped: 71704576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:10.953928+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4334000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d4334000 session 0x5636d4e621e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d47b6000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d47b6000 session 0x5636d5301c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380485632 unmapped: 71704576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:11.954048+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380485632 unmapped: 71704576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:12.954203+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d6077400 session 0x5636d4487860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380485632 unmapped: 71704576 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4223893 data_alloc: 218103808 data_used: 9113600
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3f7a000/0x0/0x1bfc00000, data 0x29e6413/0x2c04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:13.954330+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 71688192 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:14.954501+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 71688192 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:15.954652+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 71688192 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:16.954781+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 71688192 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:17.954894+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3f7a000/0x0/0x1bfc00000, data 0x29e6413/0x2c04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 71688192 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4238453 data_alloc: 218103808 data_used: 11210752
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:18.955036+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 71688192 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:19.955185+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 71688192 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:20.955390+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3f7a000/0x0/0x1bfc00000, data 0x29e6413/0x2c04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380510208 unmapped: 71680000 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:21.955537+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3f7a000/0x0/0x1bfc00000, data 0x29e6413/0x2c04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380510208 unmapped: 71680000 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:22.957043+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3f7a000/0x0/0x1bfc00000, data 0x29e6413/0x2c04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380510208 unmapped: 71680000 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4238453 data_alloc: 218103808 data_used: 11210752
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:23.957421+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380510208 unmapped: 71680000 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.016658783s of 21.183668137s, submitted: 66
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3f7a000/0x0/0x1bfc00000, data 0x29e6413/0x2c04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d52972c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:24.957596+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378806272 unmapped: 73383936 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:25.957755+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 73449472 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:26.957904+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 73449472 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a450d000/0x0/0x1bfc00000, data 0x2043413/0x2261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:27.958066+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 73449472 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4140659 data_alloc: 218103808 data_used: 7413760
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a450d000/0x0/0x1bfc00000, data 0x2043413/0x2261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:28.958222+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a450d000/0x0/0x1bfc00000, data 0x2043413/0x2261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 73449472 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:29.958444+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 73449472 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:30.958613+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 73449472 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:31.958749+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a450d000/0x0/0x1bfc00000, data 0x2043413/0x2261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 73449472 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:32.958925+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 73449472 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4140659 data_alloc: 218103808 data_used: 7413760
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:33.959218+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378757120 unmapped: 73433088 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:34.959414+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a450d000/0x0/0x1bfc00000, data 0x2043413/0x2261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378757120 unmapped: 73433088 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:35.959642+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a450d000/0x0/0x1bfc00000, data 0x2043413/0x2261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378757120 unmapped: 73433088 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:36.959807+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d1e8c400 session 0x5636d28e8d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378757120 unmapped: 73433088 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:37.959929+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a450d000/0x0/0x1bfc00000, data 0x2043413/0x2261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378757120 unmapped: 73433088 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4140659 data_alloc: 218103808 data_used: 7413760
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:38.960079+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.253051758s of 14.435792923s, submitted: 43
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d222be00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:39.960635+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:40.960769+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:41.960941+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:42.961180+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061675 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:43.961335+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:44.961528+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:45.961685+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:46.961897+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:47.962075+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061675 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:48.962256+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:49.962691+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:50.962869+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:51.963036+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:52.963164+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061675 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:53.963345+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:54.963501+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:55.963677+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:56.963850+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:57.963994+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374267904 unmapped: 77922304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061675 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:58.964170+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374267904 unmapped: 77922304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:59.964379+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374267904 unmapped: 77922304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:00.964512+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374276096 unmapped: 77914112 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:01.964698+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374276096 unmapped: 77914112 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:02.964888+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374276096 unmapped: 77914112 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061675 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:03.965069+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374276096 unmapped: 77914112 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:04.965218+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374276096 unmapped: 77914112 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:05.965371+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 77905920 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:06.965563+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 77905920 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:07.965707+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 77905920 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061675 data_alloc: 218103808 data_used: 3297280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:08.965910+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 77905920 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:09.966093+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 77905920 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:10.966239+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4334000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.619552612s of 32.639488220s, submitted: 7
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 77905920 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:11.966432+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b4d000/0x0/0x1bfc00000, data 0x1a01486/0x1c21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374292480 unmapped: 77897728 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:12.966614+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374292480 unmapped: 77897728 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d4334000 session 0x5636d1ee2b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4070091 data_alloc: 218103808 data_used: 3301376
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:13.966809+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374300672 unmapped: 77889536 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:14.966985+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b4d000/0x0/0x1bfc00000, data 0x1a01486/0x1c21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374300672 unmapped: 77889536 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:15.967145+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374308864 unmapped: 77881344 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:16.967273+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d47b6000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d47b6000 session 0x5636d2029a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4f79680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d2029680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374308864 unmapped: 77881344 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d50334a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4334000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:17.967496+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378265600 unmapped: 73924608 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110023 data_alloc: 218103808 data_used: 3301376
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:18.967652+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d4334000 session 0x5636d2d4b860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4765000/0x0/0x1bfc00000, data 0x1de9486/0x2009000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374325248 unmapped: 77864960 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:19.967838+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374325248 unmapped: 77864960 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:20.967993+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374325248 unmapped: 77864960 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:21.968132+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d47b6000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d47b6000 session 0x5636d4bb9860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4e3c780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374333440 unmapped: 77856768 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:22.968307+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374333440 unmapped: 77856768 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4100263 data_alloc: 218103808 data_used: 3301376
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:23.968470+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d1ee2b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.373261452s of 12.638956070s, submitted: 15
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d28e8d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374636544 unmapped: 77553664 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4334000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:24.968599+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4741000/0x0/0x1bfc00000, data 0x1e0d486/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374636544 unmapped: 77553664 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:25.968717+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d417a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d417a800 session 0x5636d2a42960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4116c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:26.969368+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:27.969526+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:28.969710+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4138629 data_alloc: 218103808 data_used: 7319552
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:29.969919+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a471c000/0x0/0x1bfc00000, data 0x1e314a9/0x2052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:30.970056+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a471c000/0x0/0x1bfc00000, data 0x1e314a9/0x2052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:31.970236+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:32.970374+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d4116c00 session 0x5636d2ed0b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3b02800 session 0x5636d4f96000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:33.970534+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4139369 data_alloc: 218103808 data_used: 7581696
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.952085495s of 10.106675148s, submitted: 15
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:34.970646+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d1e8c400 session 0x5636d5327e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:35.970776+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4740000/0x0/0x1bfc00000, data 0x1e0d486/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:36.970866+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378707968 unmapped: 73482240 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:37.970942+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378707968 unmapped: 73482240 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:38.971058+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4181971 data_alloc: 218103808 data_used: 7675904
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d2ec05a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d45161e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 74121216 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:39.971260+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4235000/0x0/0x1bfc00000, data 0x22da413/0x24f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:40.971400+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:41.971515+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:42.971673+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:43.971810+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4180123 data_alloc: 218103808 data_used: 7405568
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d417a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d417a800 session 0x5636d4e3d860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d1e8c400 session 0x5636d44874a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:44.971913+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.364075661s of 11.115024567s, submitted: 81
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a422d000/0x0/0x1bfc00000, data 0x22e2413/0x2500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:45.972077+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4bb90e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:46.972240+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a426b000/0x0/0x1bfc00000, data 0x22e5413/0x2503000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a426a000/0x0/0x1bfc00000, data 0x22e5423/0x2504000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:47.972367+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d4bb8b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378085376 unmapped: 74104832 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:48.972635+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4180952 data_alloc: 218103808 data_used: 7409664
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3b02800 session 0x5636d2a425a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d417a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d417a800 session 0x5636d4e5e780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4f972c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378331136 unmapped: 73859072 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:49.972799+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378331136 unmapped: 73859072 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:50.972995+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a37de000/0x0/0x1bfc00000, data 0x2d70485/0x2f90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378331136 unmapped: 73859072 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:51.973213+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d50f1c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 73834496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:52.973345+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a37d9000/0x0/0x1bfc00000, data 0x2d721b3/0x2f94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 73834496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:53.973468+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4275684 data_alloc: 218103808 data_used: 7417856
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:54.973579+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:55.973722+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a37d9000/0x0/0x1bfc00000, data 0x2d721b3/0x2f94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:56.973873+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:57.974018+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:58.974206+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4275684 data_alloc: 218103808 data_used: 7417856
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:59.974434+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a37d9000/0x0/0x1bfc00000, data 0x2d721b3/0x2f94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:00.974588+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a37d9000/0x0/0x1bfc00000, data 0x2d721b3/0x2f94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.710718155s of 16.121202469s, submitted: 79
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3a22000 session 0x5636d284a5a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d6c92c00 session 0x5636d523cb40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3b02800 session 0x5636d2eedc20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:01.974711+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3b02800 session 0x5636d2ed0d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a7000/0x0/0x1bfc00000, data 0x2ea4215/0x30c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:02.974864+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:03.974988+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4290403 data_alloc: 218103808 data_used: 7426048
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:04.975182+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:05.975348+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a7000/0x0/0x1bfc00000, data 0x2ea4215/0x30c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:06.975493+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:07.975615+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:08.975771+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4290563 data_alloc: 218103808 data_used: 7430144
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a7000/0x0/0x1bfc00000, data 0x2ea4215/0x30c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [0,1])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:09.975963+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378478592 unmapped: 73711616 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:10.976050+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 72081408 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a5000/0x0/0x1bfc00000, data 0x2ea5215/0x30c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:11.976204+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a5000/0x0/0x1bfc00000, data 0x2ea5215/0x30c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:12.976351+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.107357979s of 12.207020760s, submitted: 35
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4bb8f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:13.976509+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4369271 data_alloc: 234881024 data_used: 18341888
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:14.976665+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:15.976844+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:16.977006+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3a22000 session 0x5636d4f79a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:17.977169+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a5000/0x0/0x1bfc00000, data 0x2ea5215/0x30c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:18.977316+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380346368 unmapped: 71843840 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4369271 data_alloc: 234881024 data_used: 18341888
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a5000/0x0/0x1bfc00000, data 0x2ea5215/0x30c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d6c92c00 session 0x5636d5033860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d5a6b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:19.977981+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380346368 unmapped: 71843840 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:20.978084+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380354560 unmapped: 71835648 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d5a6b400 session 0x5636d46f41e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4f781e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:21.978239+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381493248 unmapped: 70696960 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:22.978393+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381493248 unmapped: 70696960 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 418 ms_handle_reset con 0x5636d3a22000 session 0x5636d2ed14a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.844287872s of 10.037823677s, submitted: 70
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:23.978512+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a3265000/0x0/0x1bfc00000, data 0x32e71b3/0x3509000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380829696 unmapped: 71360512 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4414198 data_alloc: 234881024 data_used: 18370560
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 418 ms_handle_reset con 0x5636d3b02800 session 0x5636d20a4b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 418 ms_handle_reset con 0x5636d6c92c00 session 0x5636d5033c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a39da000/0x0/0x1bfc00000, data 0x286fec3/0x2a92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:24.978638+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381927424 unmapped: 70262784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:25.978747+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381927424 unmapped: 70262784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:26.978891+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381927424 unmapped: 70262784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 418 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4e3d4a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:27.978993+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 418 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4487e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381927424 unmapped: 70262784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:28.979259+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381927424 unmapped: 70262784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4255630 data_alloc: 218103808 data_used: 7487488
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a39da000/0x0/0x1bfc00000, data 0x286fe61/0x2a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:29.979643+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381927424 unmapped: 70262784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 418 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d1fd81e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a39da000/0x0/0x1bfc00000, data 0x286fe61/0x2a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 418 handle_osd_map epochs [419,419], i have 419, src has [1,419]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:30.979758+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381943808 unmapped: 70246400 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:31.979934+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3cd9000/0x0/0x1bfc00000, data 0x2871a13/0x2a94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381943808 unmapped: 70246400 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d3a22000 session 0x5636d2199c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d3b02800 session 0x5636d52961e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:32.980072+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381960192 unmapped: 70230016 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d6c92c00 session 0x5636d4e62b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d6c92c00 session 0x5636d4e5f4a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:33.980178+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382001152 unmapped: 74383360 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349962 data_alloc: 218103808 data_used: 7499776
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.098374367s of 10.435062408s, submitted: 131
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4487860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:34.980337+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382009344 unmapped: 74375168 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d2e4a800 session 0x5636d44874a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:35.980495+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382009344 unmapped: 74375168 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3682000/0x0/0x1bfc00000, data 0x2ec8a13/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:36.980672+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382009344 unmapped: 74375168 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:37.980828+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d3b69000 session 0x5636d4517e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 74366976 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:38.981442+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3682000/0x0/0x1bfc00000, data 0x2ec8a13/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 74366976 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303202 data_alloc: 218103808 data_used: 7434240
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:39.981690+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 74366976 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:40.981849+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 74366976 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3682000/0x0/0x1bfc00000, data 0x2ec8a13/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d52a1400 session 0x5636d2029e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d4334000 session 0x5636d4e621e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:41.982145+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d1e8c400 session 0x5636d28e8d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:42.982595+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:43.982749+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4197298 data_alloc: 218103808 data_used: 3325952
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:44.983281+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:45.983481+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:46.983872+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3fa9000/0x0/0x1bfc00000, data 0x25a2a13/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:47.984031+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:48.984258+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4197458 data_alloc: 218103808 data_used: 3330048
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:49.984510+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377274368 unmapped: 79110144 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:50.984979+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:51.985394+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3fa9000/0x0/0x1bfc00000, data 0x25a2a13/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:52.985546+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:53.985674+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3fa9000/0x0/0x1bfc00000, data 0x25a2a13/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4276338 data_alloc: 234881024 data_used: 14434304
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:54.986033+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:55.986272+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:56.986744+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:57.987342+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3fa9000/0x0/0x1bfc00000, data 0x25a2a13/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:58.987844+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 78307328 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4276338 data_alloc: 234881024 data_used: 14434304
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:59.988265+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 78307328 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:00.988607+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.658170700s of 26.777448654s, submitted: 44
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 78307328 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:01.988827+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381362176 unmapped: 75022336 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3be4000/0x0/0x1bfc00000, data 0x2967a13/0x2b8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:02.989006+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381632512 unmapped: 74752000 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:03.989197+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382074880 unmapped: 74309632 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344532 data_alloc: 234881024 data_used: 16035840
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:04.989666+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382074880 unmapped: 74309632 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:05.989977+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382083072 unmapped: 74301440 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a38ae000/0x0/0x1bfc00000, data 0x2c9ca13/0x2ebf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:06.990295+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382083072 unmapped: 74301440 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:07.990529+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a38ae000/0x0/0x1bfc00000, data 0x2c9ca13/0x2ebf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382083072 unmapped: 74301440 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:08.990672+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382083072 unmapped: 74301440 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344692 data_alloc: 234881024 data_used: 16039936
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a38ae000/0x0/0x1bfc00000, data 0x2c9ca13/0x2ebf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:09.990912+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382099456 unmapped: 74285056 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:10.991217+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382099456 unmapped: 74285056 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d52a1400 session 0x5636d2daed20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d6c92c00 session 0x5636d2eec1e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b69000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d3b69000 session 0x5636d4516780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d3a22000 session 0x5636d2029860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.866822243s of 10.619400978s, submitted: 93
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d1e8c400 session 0x5636d502c780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:11.991335+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382115840 unmapped: 74268672 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a373c000/0x0/0x1bfc00000, data 0x2e0fa13/0x3032000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:12.991529+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382115840 unmapped: 74268672 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:13.991682+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382124032 unmapped: 74260480 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361624 data_alloc: 234881024 data_used: 16044032
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:14.991908+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382124032 unmapped: 74260480 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:15.992076+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382124032 unmapped: 74260480 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:16.992290+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383188992 unmapped: 73195520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3738000/0x0/0x1bfc00000, data 0x2e116df/0x3035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 420 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d3a22000 session 0x5636d502d860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:17.992502+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383229952 unmapped: 73154560 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b69000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d3b69000 session 0x5636d538e000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:18.992755+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d52a1400 session 0x5636d4f79e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383229952 unmapped: 73154560 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4382991 data_alloc: 234881024 data_used: 16052224
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d6c92c00 session 0x5636d2d4b860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:19.992916+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d1e8c400 session 0x5636d20a5680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383254528 unmapped: 73129984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:20.993071+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b69000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383385600 unmapped: 72998912 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:21.993203+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a3730000/0x0/0x1bfc00000, data 0x2fcc3ab/0x303d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383385600 unmapped: 72998912 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:22.993381+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383385600 unmapped: 72998912 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:23.993535+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383385600 unmapped: 72998912 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4393900 data_alloc: 234881024 data_used: 17227776
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:24.993743+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383385600 unmapped: 72998912 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:25.993863+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383393792 unmapped: 72990720 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:26.994018+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.449282646s of 15.555412292s, submitted: 30
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383393792 unmapped: 72990720 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d52a1400 session 0x5636d510e000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:27.994169+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a3731000/0x0/0x1bfc00000, data 0x2fcc3ab/0x303d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383393792 unmapped: 72990720 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:28.994349+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383393792 unmapped: 72990720 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394494 data_alloc: 234881024 data_used: 17227776
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:29.994514+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 72982528 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:30.994639+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 72982528 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:31.994838+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a3731000/0x0/0x1bfc00000, data 0x2fcc3ab/0x303d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 72982528 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:32.994975+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 386228224 unmapped: 70156288 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:33.995147+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 386899968 unmapped: 69484544 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511842 data_alloc: 234881024 data_used: 17502208
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:34.995375+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 386908160 unmapped: 69476352 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2bc7000/0x0/0x1bfc00000, data 0x3d533ab/0x3b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:35.995509+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:36.995720+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:37.995860+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:38.995994+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512802 data_alloc: 234881024 data_used: 17567744
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.949254036s of 12.184158325s, submitted: 111
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:39.996176+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:40.996397+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2bb4000/0x0/0x1bfc00000, data 0x3d743ab/0x3bba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:41.996547+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:42.997366+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 59K writes, 217K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s
                                           Cumulative WAL: 59K writes, 22K syncs, 2.61 writes per sync, written: 0.21 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3547 writes, 12K keys, 3547 commit groups, 1.0 writes per commit group, ingest: 11.63 MB, 0.02 MB/s
                                           Interval WAL: 3547 writes, 1513 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:43.997603+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504022 data_alloc: 234881024 data_used: 17571840
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:44.997857+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2bb4000/0x0/0x1bfc00000, data 0x3d743ab/0x3bba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:45.998086+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2bae000/0x0/0x1bfc00000, data 0x3d7a3ab/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:46.998325+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 391815168 unmapped: 64569344 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a285c000/0x0/0x1bfc00000, data 0x40cc3ab/0x3f12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:47.998578+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 391872512 unmapped: 64512000 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:48.998757+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388677632 unmapped: 67706880 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541881 data_alloc: 234881024 data_used: 19451904
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:49.998950+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388677632 unmapped: 67706880 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d28e6000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.131248474s of 11.183950424s, submitted: 11
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d28e6000 session 0x5636d3a3c1e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:50.999201+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:51.999417+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2074000/0x0/0x1bfc00000, data 0x48b43ab/0x46fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:52.999576+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:53.999730+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4605885 data_alloc: 234881024 data_used: 19456000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:54.999876+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2071000/0x0/0x1bfc00000, data 0x48b73ab/0x46fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:56.000230+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:57.000464+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2071000/0x0/0x1bfc00000, data 0x48b73ab/0x46fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:58.000789+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:59.001034+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4605885 data_alloc: 234881024 data_used: 19456000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:00.001230+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d4f3d000 session 0x5636d5300f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:01.001460+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d3a1b400 session 0x5636d4e5e5a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2071000/0x0/0x1bfc00000, data 0x48b73ab/0x46fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:02.001696+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d1e8c400 session 0x5636d502dc20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d28e6000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.721348763s of 11.780836105s, submitted: 13
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d28e6000 session 0x5636d5327a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:03.001934+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:04.002053+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 391192576 unmapped: 69394432 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4667533 data_alloc: 234881024 data_used: 27430912
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:05.002216+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 391921664 unmapped: 68665344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:06.002570+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 391921664 unmapped: 68665344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:07.002822+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 391921664 unmapped: 68665344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a206d000/0x0/0x1bfc00000, data 0x48b83de/0x4700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:08.003133+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392019968 unmapped: 68567040 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a206e000/0x0/0x1bfc00000, data 0x48b83de/0x4700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:09.003387+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392019968 unmapped: 68567040 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4674049 data_alloc: 234881024 data_used: 28856320
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:10.003589+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392028160 unmapped: 68558848 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:11.003772+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2068000/0x0/0x1bfc00000, data 0x48be3de/0x4706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392028160 unmapped: 68558848 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d3b02800 session 0x5636d4f97e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:12.004029+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d60d1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d60d1400 session 0x5636d5113680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392028160 unmapped: 68558848 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:13.004259+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392028160 unmapped: 68558848 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b68400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.107902527s of 11.172630310s, submitted: 29
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d3b68400 session 0x5636d5297860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:14.004480+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d1e8c400 session 0x5636d50325a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 393093120 unmapped: 67493888 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4646049 data_alloc: 234881024 data_used: 28741632
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d28e6000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:15.004689+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d28e6000 session 0x5636d51123c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 393093120 unmapped: 67493888 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:16.004917+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 422 ms_handle_reset con 0x5636d2e4a800 session 0x5636d538ef00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a2048000/0x0/0x1bfc00000, data 0x44f50fe/0x471d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394534912 unmapped: 66052096 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 422 ms_handle_reset con 0x5636d3b02800 session 0x5636d2ccc960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:17.005138+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394108928 unmapped: 66478080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:18.005724+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394108928 unmapped: 66478080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a1f40000/0x0/0x1bfc00000, data 0x46060fe/0x482e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:19.006606+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d60d1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 422 ms_handle_reset con 0x5636d60d1400 session 0x5636d284b4a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 67764224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4409313 data_alloc: 234881024 data_used: 12996608
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:20.006821+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 67764224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:21.007340+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a323f000/0x0/0x1bfc00000, data 0x330709c/0x352e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 67764224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a323f000/0x0/0x1bfc00000, data 0x330709c/0x352e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:22.007886+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 67764224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:23.008198+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x332809c/0x354f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 67764224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x332809c/0x354f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:24.008523+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 67764224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.350358963s of 11.046576500s, submitted: 123
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4414031 data_alloc: 234881024 data_used: 13004800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:25.008690+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 423 ms_handle_reset con 0x5636d52a1400 session 0x5636d2ec03c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 423 ms_handle_reset con 0x5636d4f3d000 session 0x5636d5300d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d60d1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389373952 unmapped: 71213056 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 423 ms_handle_reset con 0x5636d60d1400 session 0x5636d21bcf00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:26.008847+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389382144 unmapped: 71204864 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:27.008983+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389382144 unmapped: 71204864 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:28.009095+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a3e9a000/0x0/0x1bfc00000, data 0x26abc1b/0x28d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [0,0,1])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389382144 unmapped: 71204864 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:29.009333+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389382144 unmapped: 71204864 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4264034 data_alloc: 218103808 data_used: 4784128
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:30.009697+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 424 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4f78b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389398528 unmapped: 71188480 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:31.010013+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3e96000/0x0/0x1bfc00000, data 0x26b093b/0x28d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389398528 unmapped: 71188480 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:32.010294+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389398528 unmapped: 71188480 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:33.010547+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 424 ms_handle_reset con 0x5636d3b69000 session 0x5636d4bb85a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 424 ms_handle_reset con 0x5636d3a22000 session 0x5636d4bb8f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 424 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4f792c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:34.011058+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.740626335s of 10.004773140s, submitted: 115
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4160032 data_alloc: 218103808 data_used: 3366912
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4b73000/0x0/0x1bfc00000, data 0x19d14ed/0x1bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:35.011362+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:36.011628+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4b73000/0x0/0x1bfc00000, data 0x19d14ed/0x1bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:37.011965+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:38.012277+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:39.012592+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4160032 data_alloc: 218103808 data_used: 3366912
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:40.012843+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4b73000/0x0/0x1bfc00000, data 0x19d14ed/0x1bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:41.013220+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:42.013471+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:43.013673+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d4f3d000 session 0x5636d5033680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d52a1400 session 0x5636d4e5fc20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:44.013929+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4160032 data_alloc: 218103808 data_used: 3366912
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:45.014068+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d60d1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d60d1400 session 0x5636d2028b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:46.014176+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4b73000/0x0/0x1bfc00000, data 0x19d14ed/0x1bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:47.014320+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:48.014552+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4b73000/0x0/0x1bfc00000, data 0x19d14ed/0x1bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.819927216s of 13.827451706s, submitted: 13
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2ec1680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:49.014706+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3a22000 session 0x5636d5297680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d4f3d000 session 0x5636d4e3c780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389939200 unmapped: 74850304 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4249575 data_alloc: 218103808 data_used: 3366912
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:50.014875+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:51.015006+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:52.015164+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:53.015289+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:54.015421+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a409a000/0x0/0x1bfc00000, data 0x24aa54f/0x26d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4249575 data_alloc: 218103808 data_used: 3366912
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:55.015568+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:56.015746+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:57.015892+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a409a000/0x0/0x1bfc00000, data 0x24aa54f/0x26d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:58.016067+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:59.016203+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4249575 data_alloc: 218103808 data_used: 3366912
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:00.016406+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a409a000/0x0/0x1bfc00000, data 0x24aa54f/0x26d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:01.016573+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:02.016713+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:03.016899+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:04.017039+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4249575 data_alloc: 218103808 data_used: 3366912
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:05.017182+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:06.017322+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a409a000/0x0/0x1bfc00000, data 0x24aa54f/0x26d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389971968 unmapped: 74817536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:07.017454+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d52a1400 session 0x5636d20830e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389971968 unmapped: 74817536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:08.017591+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d28e6000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d28e6000 session 0x5636d4f781e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389971968 unmapped: 74817536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:09.017755+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2ec0960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.849130630s of 20.970087051s, submitted: 39
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3a22000 session 0x5636d5300000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389914624 unmapped: 74874880 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4249376 data_alloc: 218103808 data_used: 3366912
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:10.017947+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389914624 unmapped: 74874880 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:11.018140+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389914624 unmapped: 74874880 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:12.018267+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a409a000/0x0/0x1bfc00000, data 0x24aa54f/0x26d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389931008 unmapped: 74858496 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:13.018406+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389931008 unmapped: 74858496 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:14.018536+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a409a000/0x0/0x1bfc00000, data 0x24aa54f/0x26d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389931008 unmapped: 74858496 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4320228 data_alloc: 234881024 data_used: 13238272
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:15.018744+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d2e4a800 session 0x5636d3a3d680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3b02800 session 0x5636d52972c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f53c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d4f53c00 session 0x5636d2940780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d1e8c400 session 0x5636d5327c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389931008 unmapped: 74858496 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:16.018964+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d2e4a800 session 0x5636d3c8a000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3a22000 session 0x5636d4f79e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3b02800 session 0x5636d510e000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636db0f0800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636db0f0800 session 0x5636d5230d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 72744960 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d1e8c400 session 0x5636d502dc20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:17.019179+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3706000/0x0/0x1bfc00000, data 0x2e3c588/0x3068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3706000/0x0/0x1bfc00000, data 0x2e3c588/0x3068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 72744960 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:18.019379+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 72744960 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:19.019560+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3706000/0x0/0x1bfc00000, data 0x2e3c5c1/0x3068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 72744960 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395543 data_alloc: 234881024 data_used: 13238272
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:20.019816+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 72744960 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:21.020017+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 72744960 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:22.020194+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.823556900s of 12.940815926s, submitted: 32
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3066000/0x0/0x1bfc00000, data 0x34dc5c1/0x3708000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [1,0,1])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 395591680 unmapped: 69197824 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:23.020401+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 397041664 unmapped: 67747840 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:24.020594+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4e5e1e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 395132928 unmapped: 69656576 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4526849 data_alloc: 234881024 data_used: 15872000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3a22000 session 0x5636d4f97680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:25.021342+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a298a000/0x0/0x1bfc00000, data 0x3bb85c1/0x3de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a298a000/0x0/0x1bfc00000, data 0x3bb85c1/0x3de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3b02800 session 0x5636d1fd94a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b2d800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 395141120 unmapped: 69648384 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3b2d800 session 0x5636d4da85a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:26.021658+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a298a000/0x0/0x1bfc00000, data 0x3bb85c1/0x3de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 395141120 unmapped: 69648384 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:27.021868+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2989000/0x0/0x1bfc00000, data 0x3bb85e4/0x3de5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396853248 unmapped: 67936256 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:28.021981+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2989000/0x0/0x1bfc00000, data 0x3bb85e4/0x3de5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400375808 unmapped: 64413696 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:29.022181+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400375808 unmapped: 64413696 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4601827 data_alloc: 234881024 data_used: 25722880
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:30.022373+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400523264 unmapped: 64266240 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:31.022534+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400523264 unmapped: 64266240 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:32.022768+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400523264 unmapped: 64266240 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:33.022980+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2968000/0x0/0x1bfc00000, data 0x3bd95e4/0x3e06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400523264 unmapped: 64266240 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:34.023166+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400523264 unmapped: 64266240 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4598487 data_alloc: 234881024 data_used: 25722880
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:35.023335+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400523264 unmapped: 64266240 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:36.023455+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.942391396s of 14.256587982s, submitted: 151
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400621568 unmapped: 64167936 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:37.023635+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400621568 unmapped: 64167936 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d4f3d000 session 0x5636d1ee3a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:38.023834+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d52a1400 session 0x5636d21bd860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400629760 unmapped: 64159744 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:39.024041+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2955000/0x0/0x1bfc00000, data 0x3beb5e4/0x3e18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [0,0,0,2])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404774912 unmapped: 60014592 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3a22000 session 0x5636d2198780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4661869 data_alloc: 234881024 data_used: 25866240
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:40.024306+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404774912 unmapped: 60014592 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:41.024456+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404938752 unmapped: 59850752 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:42.024644+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405069824 unmapped: 59719680 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a219f000/0x0/0x1bfc00000, data 0x43a25e4/0x45cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:43.024908+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:44.025196+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4666355 data_alloc: 234881024 data_used: 26353664
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:45.025393+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:46.025546+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2199000/0x0/0x1bfc00000, data 0x43a85e4/0x45d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:47.025769+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:48.025960+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:49.026144+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4666355 data_alloc: 234881024 data_used: 26353664
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:50.026381+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:51.026521+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:52.026666+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2199000/0x0/0x1bfc00000, data 0x43a85e4/0x45d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:53.026799+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:54.026926+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 59703296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:55.027039+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4666355 data_alloc: 234881024 data_used: 26353664
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2199000/0x0/0x1bfc00000, data 0x43a85e4/0x45d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 59703296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:56.027155+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 59703296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:57.027273+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 59703296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:58.027401+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 59703296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:59.027522+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405094400 unmapped: 59695104 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:00.027712+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4666355 data_alloc: 234881024 data_used: 26353664
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405094400 unmapped: 59695104 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:01.027878+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.164173126s of 24.367654800s, submitted: 90
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4709680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4487680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2199000/0x0/0x1bfc00000, data 0x43a85e4/0x45d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d2e4a800 session 0x5636d5032d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:02.027988+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:03.028195+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:04.028336+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:05.028416+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4463795 data_alloc: 234881024 data_used: 15867904
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:06.028562+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:07.028704+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:08.028799+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:09.028952+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 66191360 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:10.029466+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 66183168 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4463795 data_alloc: 234881024 data_used: 15867904
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:11.029633+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 66183168 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:12.029802+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 66183168 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:13.029938+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398614528 unmapped: 66174976 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [1])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:14.030094+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:15.030248+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4464435 data_alloc: 234881024 data_used: 16064512
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:16.030406+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:17.030560+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:18.030723+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:19.030864+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:20.031145+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4464435 data_alloc: 234881024 data_used: 16064512
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:21.031293+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:22.031417+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:23.031547+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:24.031747+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.178138733s of 23.300649643s, submitted: 45
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:25.031955+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 66084864 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475883 data_alloc: 234881024 data_used: 17014784
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:26.032187+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 66084864 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:27.032459+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 66084864 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:28.032652+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 66084864 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:29.032806+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 66084864 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:30.033026+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 66084864 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475531 data_alloc: 234881024 data_used: 17014784
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:31.033203+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:32.033371+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32eb000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:33.033518+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:34.033653+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:35.033776+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475531 data_alloc: 234881024 data_used: 17014784
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:36.033936+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:37.034033+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32eb000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:38.034180+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.652603149s of 13.687821388s, submitted: 12
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:39.034314+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:40.034472+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d3a22000 session 0x5636d50334a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d4f3d000 session 0x5636d1ee3860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d52a1400 session 0x5636d4f96f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4483613 data_alloc: 234881024 data_used: 17641472
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a32e8000/0x0/0x1bfc00000, data 0x325b21b/0x3486000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d3b02800 session 0x5636d2d71e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4e5f4a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d3a22000 session 0x5636d5300f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d4f3d000 session 0x5636d4f78000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:41.034624+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 66011136 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:42.034772+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 65970176 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:43.034909+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 65970176 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:44.035049+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 65970176 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:45.035183+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398827520 unmapped: 65961984 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521852 data_alloc: 234881024 data_used: 17645568
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:46.035319+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bf6000/0x0/0x1bfc00000, data 0x35ec2ef/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398901248 unmapped: 65888256 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:47.035494+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398901248 unmapped: 65888256 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:48.035639+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398901248 unmapped: 65888256 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.337140083s of 10.989769936s, submitted: 252
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:49.035744+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398909440 unmapped: 65880064 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:50.035878+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521852 data_alloc: 234881024 data_used: 17645568
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bf6000/0x0/0x1bfc00000, data 0x35ec2ef/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:51.035998+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:52.036180+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:53.036338+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:54.036532+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:55.036656+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525194 data_alloc: 234881024 data_used: 17641472
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bf6000/0x0/0x1bfc00000, data 0x35ec2ef/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:56.036788+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d52a1400 session 0x5636d2029c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:57.037376+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4d73000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d4d73000 session 0x5636d5032780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:58.037512+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399007744 unmapped: 65781760 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:59.037641+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2e4a800 session 0x5636d502c960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399007744 unmapped: 65781760 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.992650032s of 10.227743149s, submitted: 112
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d3a22000 session 0x5636d4e3d4a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:00.038177+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399163392 unmapped: 65626112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533294 data_alloc: 234881024 data_used: 17649664
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bca000/0x0/0x1bfc00000, data 0x361a322/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:01.038348+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 65601536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:02.038487+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 65601536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d46e7c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d46e7c00 session 0x5636d5326d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:03.038681+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 65601536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d50f1860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bca000/0x0/0x1bfc00000, data 0x361a322/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d6c93000 session 0x5636d2197e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:04.038826+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d6c93000 session 0x5636d2d4b2c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 65601536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:05.038950+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 65601536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593166 data_alloc: 234881024 data_used: 20439040
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bc9000/0x0/0x1bfc00000, data 0x361a331/0x3795000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:06.039091+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bc1000/0x0/0x1bfc00000, data 0x3ae9331/0x379d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:07.039244+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:08.039365+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:09.039552+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:10.039723+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593486 data_alloc: 234881024 data_used: 20451328
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:11.039884+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bc1000/0x0/0x1bfc00000, data 0x3ae9331/0x379d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:12.040028+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.785018921s of 12.839673996s, submitted: 21
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:13.040154+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 63717376 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:14.040316+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401506304 unmapped: 63283200 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:15.040438+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 63266816 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4626468 data_alloc: 234881024 data_used: 20738048
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:16.040651+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401588224 unmapped: 63201280 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a28c7000/0x0/0x1bfc00000, data 0x3de3331/0x3a97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:17.040778+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402612224 unmapped: 62177280 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:18.040913+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406994944 unmapped: 57794560 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a24fc000/0x0/0x1bfc00000, data 0x41ae331/0x3e62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a24fc000/0x0/0x1bfc00000, data 0x41ae331/0x3e62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:19.041049+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406994944 unmapped: 57794560 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:20.041236+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403800064 unmapped: 60989440 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4672227 data_alloc: 234881024 data_used: 24047616
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:21.041413+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a24dd000/0x0/0x1bfc00000, data 0x41cd331/0x3e81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403808256 unmapped: 60981248 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:22.041549+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403816448 unmapped: 60973056 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:23.041687+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.828656197s of 11.126467705s, submitted: 61
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404881408 unmapped: 59908096 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:24.041829+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404889600 unmapped: 59899904 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a240b000/0x0/0x1bfc00000, data 0x429f331/0x3f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:25.042008+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404889600 unmapped: 59899904 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4684943 data_alloc: 234881024 data_used: 24043520
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:26.042184+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 60727296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:27.042333+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 60727296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:28.042475+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2401000/0x0/0x1bfc00000, data 0x42a9331/0x3f5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 60727296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:29.042647+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 60727296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:30.042878+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 60588032 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4681143 data_alloc: 234881024 data_used: 24047616
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:31.043011+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 60588032 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:32.043174+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 60588032 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:33.043309+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 60588032 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a23fa000/0x0/0x1bfc00000, data 0x42b0331/0x3f64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:34.043433+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 60588032 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d502d2c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.482513428s of 11.543985367s, submitted: 17
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4e3c780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:35.043563+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 60588032 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4681314 data_alloc: 234881024 data_used: 24174592
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d3a22000 session 0x5636d50f05a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:36.043708+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404217856 unmapped: 60571648 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d46e7c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d46e7c00 session 0x5636d538eb40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:37.043881+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 63766528 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:38.044015+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404717568 unmapped: 63750144 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1abc000/0x0/0x1bfc00000, data 0x4bef322/0x48a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:39.044158+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404717568 unmapped: 63750144 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:40.044365+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404717568 unmapped: 63750144 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4755696 data_alloc: 234881024 data_used: 24174592
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d46e7c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d46e7c00 session 0x5636d538e780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:41.044485+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 63766528 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d46f4b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:42.044650+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 63766528 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:43.044789+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1f5b000/0x0/0x1bfc00000, data 0x47512b0/0x4402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 63766528 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4f792c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:44.044943+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404709376 unmapped: 63758336 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a1f5b000/0x0/0x1bfc00000, data 0x47512b0/0x4402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.871521950s of 10.023799896s, submitted: 46
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:45.045145+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404709376 unmapped: 63758336 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4693622 data_alloc: 234881024 data_used: 24039424
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 427 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4f79a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:46.045280+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404725760 unmapped: 63741952 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:47.045460+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 427 ms_handle_reset con 0x5636d3a22000 session 0x5636d4e621e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405168128 unmapped: 63299584 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:48.045606+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406331392 unmapped: 62136320 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:49.045734+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406331392 unmapped: 62136320 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:50.045908+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1f62000/0x0/0x1bfc00000, data 0x41cab82/0x43fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406347776 unmapped: 62119936 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4764956 data_alloc: 251658240 data_used: 32079872
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:51.046094+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406347776 unmapped: 62119936 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:52.046307+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406364160 unmapped: 62103552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:53.046510+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406364160 unmapped: 62103552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:54.046656+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406364160 unmapped: 62103552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1f62000/0x0/0x1bfc00000, data 0x41cab82/0x43fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:55.046794+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406364160 unmapped: 62103552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4764956 data_alloc: 251658240 data_used: 32079872
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:56.046929+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.281738281s of 11.311649323s, submitted: 22
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406364160 unmapped: 62103552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:57.047063+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406380544 unmapped: 62087168 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4486960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2f98000/0x0/0x1bfc00000, data 0x2e56b20/0x3086000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [0,0,0,0,6])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:58.047192+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402259968 unmapped: 66207744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:59.047436+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402259968 unmapped: 66207744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2d13000/0x0/0x1bfc00000, data 0x30d3b20/0x3303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:00.047697+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402595840 unmapped: 65871872 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505656 data_alloc: 234881024 data_used: 17100800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2d13000/0x0/0x1bfc00000, data 0x30d3b20/0x3303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:01.047849+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2c9b000/0x0/0x1bfc00000, data 0x314bb20/0x337b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402595840 unmapped: 65871872 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:02.047982+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:03.048172+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:04.048301+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:05.048457+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505608 data_alloc: 234881024 data_used: 17465344
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:06.048618+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd6000/0x0/0x1bfc00000, data 0x3158b20/0x3388000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:07.048786+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.255873680s of 11.477129936s, submitted: 104
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd6000/0x0/0x1bfc00000, data 0x3158b20/0x3388000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:08.048938+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:09.049072+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:10.049252+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d4bb90e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504180 data_alloc: 234881024 data_used: 17465344
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:11.049386+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400007168 unmapped: 68460544 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:12.049534+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd3000/0x0/0x1bfc00000, data 0x315bb20/0x338b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:13.049694+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d4f97860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:14.049848+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:15.049983+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303742 data_alloc: 218103808 data_used: 6471680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:16.050190+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:17.050320+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4121000/0x0/0x1bfc00000, data 0x200db20/0x223d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:18.050476+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:19.050600+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4121000/0x0/0x1bfc00000, data 0x200db20/0x223d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4121000/0x0/0x1bfc00000, data 0x200db20/0x223d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:20.050912+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303742 data_alloc: 218103808 data_used: 6471680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.177040100s of 13.237181664s, submitted: 14
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d52a1400 session 0x5636d2ecf680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d4f3d000 session 0x5636d2198000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4121000/0x0/0x1bfc00000, data 0x200db20/0x223d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:21.051036+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2eec960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:22.051172+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:23.051313+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:24.051428+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:25.051585+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4225027 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:26.051720+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:27.051893+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:28.052045+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:29.052221+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:30.052389+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4225027 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:31.052537+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:32.052689+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:33.052829+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:34.052996+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:35.053205+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4225027 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:36.053348+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:37.053463+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:38.053628+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:39.053770+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:40.053933+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4225027 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:41.054040+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:42.054179+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:43.054349+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:44.054505+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:45.054646+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4225027 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:46.054799+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:47.054951+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:48.055176+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:49.055507+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:50.055659+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394297344 unmapped: 74170368 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4225027 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:51.055803+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394297344 unmapped: 74170368 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:52.055971+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.362501144s of 31.471702576s, submitted: 43
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d44861e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 73850880 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:53.056090+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 73850880 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:54.056295+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 73850880 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:55.056433+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 73850880 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4262521 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:56.056618+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4366000/0x0/0x1bfc00000, data 0x1dcba8b/0x1ff8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 73850880 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4366000/0x0/0x1bfc00000, data 0x1dcba8b/0x1ff8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:57.056805+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 73850880 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:58.056982+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d52a1400 session 0x5636d28e9a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:59.057159+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:00.057323+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4272926 data_alloc: 218103808 data_used: 4550656
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:01.057499+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4365000/0x0/0x1bfc00000, data 0x1dcbaae/0x1ff9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:02.057664+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:03.057826+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:04.057957+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:05.058890+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285406 data_alloc: 218103808 data_used: 6332416
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:06.059085+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4365000/0x0/0x1bfc00000, data 0x1dcbaae/0x1ff9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:07.059566+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394633216 unmapped: 73834496 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:08.059844+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4365000/0x0/0x1bfc00000, data 0x1dcbaae/0x1ff9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394633216 unmapped: 73834496 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:09.079062+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394633216 unmapped: 73834496 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:10.079460+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394633216 unmapped: 73834496 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285406 data_alloc: 218103808 data_used: 6332416
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:11.079765+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.085372925s of 19.132513046s, submitted: 17
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4365000/0x0/0x1bfc00000, data 0x1dcbaae/0x1ff9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [0,0,1,6,2])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401121280 unmapped: 67346432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:12.081657+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:13.083329+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:14.083664+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:15.084004+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3552000/0x0/0x1bfc00000, data 0x2bddaae/0x2e0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4407902 data_alloc: 218103808 data_used: 8286208
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:16.084264+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:17.084422+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:18.084598+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:19.084754+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:20.084955+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3550000/0x0/0x1bfc00000, data 0x2be0aae/0x2e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404590 data_alloc: 218103808 data_used: 8286208
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:21.085118+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3550000/0x0/0x1bfc00000, data 0x2be0aae/0x2e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:22.085407+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:23.085728+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3550000/0x0/0x1bfc00000, data 0x2be0aae/0x2e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 66945024 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:24.086140+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 66945024 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:25.086414+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401530880 unmapped: 66936832 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404910 data_alloc: 218103808 data_used: 8294400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:26.086729+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401530880 unmapped: 66936832 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:27.086874+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401530880 unmapped: 66936832 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:28.087136+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.468616486s of 16.711784363s, submitted: 112
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d3c8be00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401711104 unmapped: 66756608 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:29.087564+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x315eaae/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401711104 unmapped: 66756608 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:30.087786+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4458452 data_alloc: 218103808 data_used: 8294400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:31.087942+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:32.088328+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:33.088531+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:34.088653+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x315eaae/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:35.088807+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4458452 data_alloc: 218103808 data_used: 8294400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:36.088932+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:37.089279+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:38.089501+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:39.089677+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x315eaae/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:40.090242+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4491092 data_alloc: 234881024 data_used: 12820480
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:41.091421+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x315eaae/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:42.091809+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:43.091986+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:44.092188+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.121622086s of 16.178701401s, submitted: 17
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x315eaae/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:45.092588+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4491204 data_alloc: 234881024 data_used: 12881920
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:46.092820+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401924096 unmapped: 66543616 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:47.093050+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401924096 unmapped: 66543616 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:48.093186+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 62742528 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:49.093372+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1626000/0x0/0x1bfc00000, data 0x396aaae/0x3b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:50.093567+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4560238 data_alloc: 234881024 data_used: 14278656
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:51.093747+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:52.094014+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:53.094202+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:54.094396+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1618000/0x0/0x1bfc00000, data 0x3977aae/0x3ba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.064040184s of 10.258877754s, submitted: 80
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:55.094600+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4553794 data_alloc: 234881024 data_used: 14278656
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:56.094727+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3a22000 session 0x5636d5112000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:57.095177+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d284a5a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:58.095360+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1618000/0x0/0x1bfc00000, data 0x3977aae/0x3ba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:59.095551+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:00.095784+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a23af000/0x0/0x1bfc00000, data 0x2be1aae/0x2e0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4412186 data_alloc: 218103808 data_used: 8355840
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:01.096018+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:02.096246+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:03.096497+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d50f05a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:04.096650+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d5113e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399179776 unmapped: 69287936 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:05.096853+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 69279744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240744 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:06.097017+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 69279744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:07.097216+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 69279744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:08.097405+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 69279744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:09.097613+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 69279744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:10.097841+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240744 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:11.098006+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:12.098135+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:13.098314+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:14.098553+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:15.098746+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240744 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:16.098882+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:17.099042+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:18.099198+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:19.099335+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:20.099522+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240744 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:21.099636+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:22.099770+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:23.099908+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:24.100062+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:25.100185+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240744 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:26.100389+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:27.100593+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:28.100729+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:29.100871+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:30.101048+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240744 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:31.101169+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:32.101314+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:33.101462+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:34.101591+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 69238784 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:35.101765+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.500312805s of 40.577518463s, submitted: 30
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4f970e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d52a1400 session 0x5636d47090e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a33fb000/0x0/0x1bfc00000, data 0x1b95ab4/0x1dc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d51123c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d20a5680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d50321e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4297582 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:36.101942+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:37.102076+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:38.102264+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:39.102476+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:40.102648+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:41.102806+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4297582 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:42.103564+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:43.103726+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:44.103862+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 69206016 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:45.104030+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:46.104156+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4339474 data_alloc: 218103808 data_used: 9203712
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:47.104349+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.480249405s of 12.605928421s, submitted: 45
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d3a6cd20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:48.104519+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:49.104680+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:50.104923+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:51.105144+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4337998 data_alloc: 218103808 data_used: 9203712
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:52.105331+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:53.105536+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:54.105748+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d46e7c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:55.105973+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:56.106162+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4338130 data_alloc: 218103808 data_used: 9203712
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d46e7c00 session 0x5636d2940000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:57.107034+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:58.107226+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399278080 unmapped: 69189632 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:59.107408+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399278080 unmapped: 69189632 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.830513954s of 11.840190887s, submitted: 3
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:00.107683+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399278080 unmapped: 69189632 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:01.107898+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399278080 unmapped: 69189632 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4338130 data_alloc: 218103808 data_used: 9203712
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d47083c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:02.108083+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399278080 unmapped: 69189632 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d2028960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:03.108345+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399286272 unmapped: 69181440 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:04.108524+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399286272 unmapped: 69181440 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:05.108721+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399286272 unmapped: 69181440 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:06.108940+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247334 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:07.109194+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:08.109433+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:09.109635+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:10.109830+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:11.110012+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247334 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:12.110375+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:13.110562+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:14.110709+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:15.110843+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:16.110961+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247334 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:17.111229+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:18.111340+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:19.111479+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-12-06T08:25:20.279399+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _finish_auth 0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:20.290327+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247334 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:21.279640+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399310848 unmapped: 69156864 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:22.279803+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399310848 unmapped: 69156864 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:23.279917+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 69148672 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:24.280046+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 69148672 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:25.280172+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 69148672 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247334 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:26.280319+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 69148672 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:27.280520+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 69148672 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:28.280649+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 69148672 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:29.280862+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:30.281066+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247334 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:31.281248+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:32.281386+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:33.281522+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:34.281705+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4e3da40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d4516d20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ec00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3a1ec00 session 0x5636d3a3d680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:35.281914+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4e62b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.754333496s of 35.839118958s, submitted: 32
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d4f97680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d20821e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:36.282040+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301455 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:37.282169+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d4517680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ee5000/0x0/0x1bfc00000, data 0x20abaed/0x22d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:38.282748+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399376384 unmapped: 69091328 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2d4e000/0x0/0x1bfc00000, data 0x2242aed/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:39.282896+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399376384 unmapped: 69091328 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a23000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3a23000 session 0x5636d2ec10e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d50f0b40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d2d4af00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:40.283053+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399368192 unmapped: 69099520 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d5301860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:41.283178+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399368192 unmapped: 69099520 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4374929 data_alloc: 218103808 data_used: 3399680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4ac00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d50b2000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d50b2000 session 0x5636d2d71e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:42.283316+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399917056 unmapped: 68550656 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2777000/0x0/0x1bfc00000, data 0x2817b72/0x2a47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:43.283458+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399917056 unmapped: 68550656 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e65000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e65000 session 0x5636d3a3c1e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2ec1e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:44.283591+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399917056 unmapped: 68550656 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d50b2000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:45.283715+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399917056 unmapped: 68550656 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:46.283835+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399925248 unmapped: 68542464 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4427327 data_alloc: 218103808 data_used: 10481664
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d2083a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:47.283997+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400089088 unmapped: 68378624 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3a22c00 session 0x5636d4e3c000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2776000/0x0/0x1bfc00000, data 0x2817b81/0x2a48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:48.284155+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400089088 unmapped: 68378624 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e6a800 session 0x5636d5327e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3afec00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.909353256s of 13.144114494s, submitted: 72
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3afec00 session 0x5636d20283c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:49.284279+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400089088 unmapped: 68378624 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2775000/0x0/0x1bfc00000, data 0x2817b91/0x2a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:50.284451+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402071552 unmapped: 66396160 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:51.284625+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402866176 unmapped: 65601536 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473866 data_alloc: 234881024 data_used: 16875520
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2775000/0x0/0x1bfc00000, data 0x2817b91/0x2a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:52.284756+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402866176 unmapped: 65601536 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2775000/0x0/0x1bfc00000, data 0x2817b91/0x2a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2028960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e6a800 session 0x5636d5112780
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:53.284838+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404668416 unmapped: 63799296 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3a22c00 session 0x5636d2daed20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:54.284947+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406855680 unmapped: 61612032 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:55.285070+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406855680 unmapped: 61612032 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:56.285188+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d4e3d4a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406855680 unmapped: 61612032 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4429543 data_alloc: 218103808 data_used: 11780096
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d50b2000 session 0x5636d51132c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a12d6000/0x0/0x1bfc00000, data 0x25ebb1f/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e6a800 session 0x5636d4e5e1e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:57.285310+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406855680 unmapped: 61612032 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:58.285436+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406855680 unmapped: 61612032 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:59.285628+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406855680 unmapped: 61612032 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.875857353s of 11.130379677s, submitted: 107
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4ac00 session 0x5636d4bb9c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d3a3cd20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:00.285783+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2ccc000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2048000/0x0/0x1bfc00000, data 0x19faaae/0x1c28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:01.285905+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:02.286036+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:03.286207+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:04.286350+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:05.286519+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:06.286651+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:07.286813+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:08.286952+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:09.287138+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:10.287403+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:11.287522+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:12.287670+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:13.288030+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:14.288151+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:15.288308+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:16.288436+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:17.288563+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:18.288711+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:19.288856+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:20.289021+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:21.289136+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:22.289475+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:23.289655+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:24.289784+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:25.289921+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:26.290049+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:27.290276+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:28.290422+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:29.290640+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:30.290832+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:31.290965+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:32.291236+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:33.291396+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:34.291518+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:35.291657+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:36.291829+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:37.292075+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:38.292211+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:39.292361+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:40.292540+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403185664 unmapped: 65282048 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:41.292684+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403185664 unmapped: 65282048 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:42.292810+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:43.292956+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:44.293180+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:45.293389+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:46.293532+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:47.293680+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:48.293827+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:49.293967+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:50.294166+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:51.294299+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:52.294438+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:53.294570+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:54.294740+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:55.294861+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:56.295032+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:57.295155+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:58.295334+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 65257472 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d44874a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:59.295492+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e6a800 session 0x5636d45161e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 65257472 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:00.295653+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 65257472 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 61.038486481s of 61.128166199s, submitted: 29
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d502dc20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:01.295774+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 65257472 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4271666 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:02.295908+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 65257472 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:03.296055+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4487680
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4ac00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 65257472 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4ac00 session 0x5636d50f03c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:04.296181+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405331968 unmapped: 63135744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3a22c00 session 0x5636d2029c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e6a800 session 0x5636d28e9a40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:05.296323+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403234816 unmapped: 69435392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1ae5000/0x0/0x1bfc00000, data 0x230aafd/0x2539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:06.296460+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403243008 unmapped: 69427200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4348010 data_alloc: 218103808 data_used: 3395584
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:07.296607+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403243008 unmapped: 69427200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1ae5000/0x0/0x1bfc00000, data 0x230aafd/0x2539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:08.296782+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403243008 unmapped: 69427200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:09.296912+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403251200 unmapped: 69419008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:10.297051+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 430 ms_handle_reset con 0x5636d2e4a800 session 0x5636d2d714a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403259392 unmapped: 69410816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.583721161s of 10.452672958s, submitted: 60
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:11.297165+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4ac00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 71598080 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4399704 data_alloc: 218103808 data_used: 3407872
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 431 ms_handle_reset con 0x5636d6c93000 session 0x5636d502d860
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a1ad9000/0x0/0x1bfc00000, data 0x2310233/0x2543000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:12.297337+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 432 ms_handle_reset con 0x5636d2e4ac00 session 0x5636d538eb40
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 432 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2ecc5a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401080320 unmapped: 71589888 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:13.297470+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401080320 unmapped: 71589888 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 432 ms_handle_reset con 0x5636d1e6a800 session 0x5636d4e62f00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:14.297620+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401088512 unmapped: 71581696 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:15.297813+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4ac00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 432 ms_handle_reset con 0x5636d2e4ac00 session 0x5636d5112000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3c000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a10e7000/0x0/0x1bfc00000, data 0x2cfdf8d/0x2f34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402169856 unmapped: 70500352 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 432 ms_handle_reset con 0x5636d6c93000 session 0x5636d3b730e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:16.298003+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 433 ms_handle_reset con 0x5636d4f3c000 session 0x5636d2d714a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b2c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 433 ms_handle_reset con 0x5636d3b2c400 session 0x5636d20285a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4487619 data_alloc: 218103808 data_used: 3416064
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402194432 unmapped: 70475776 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 433 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4bb85a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:17.298175+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402194432 unmapped: 70475776 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:18.298374+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402194432 unmapped: 70475776 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:19.298507+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402194432 unmapped: 70475776 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a0c14000/0x0/0x1bfc00000, data 0x31d1c75/0x3409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:20.298656+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402194432 unmapped: 70475776 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:21.298789+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4487619 data_alloc: 218103808 data_used: 3416064
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402194432 unmapped: 70475776 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:22.298916+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a0c14000/0x0/0x1bfc00000, data 0x31d1c75/0x3409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 433 handle_osd_map epochs [434,434], i have 434, src has [1,434]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.084393501s of 11.318605423s, submitted: 45
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 70459392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:23.299160+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 70459392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:24.299341+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 70459392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:25.299492+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 70459392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:26.299671+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490593 data_alloc: 218103808 data_used: 3416064
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 70459392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:27.299826+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 70459392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:28.300006+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a0c11000/0x0/0x1bfc00000, data 0x31d3827/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402219008 unmapped: 70451200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:29.300176+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a0c11000/0x0/0x1bfc00000, data 0x31d3827/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402219008 unmapped: 70451200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:30.300385+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402235392 unmapped: 70434816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:31.300524+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490593 data_alloc: 218103808 data_used: 3416064
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402235392 unmapped: 70434816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d1e6a800 session 0x5636d45161e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:32.300646+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4ac00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b2c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402235392 unmapped: 70434816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:33.300792+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402112512 unmapped: 70557696 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:34.300940+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402120704 unmapped: 70549504 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a0c12000/0x0/0x1bfc00000, data 0x31d3827/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:35.301177+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402120704 unmapped: 70549504 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:36.301311+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519020 data_alloc: 218103808 data_used: 7258112
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402120704 unmapped: 70549504 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a0c12000/0x0/0x1bfc00000, data 0x31d3827/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:37.301450+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402120704 unmapped: 70549504 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:38.301597+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a0c12000/0x0/0x1bfc00000, data 0x31d3827/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402120704 unmapped: 70549504 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:39.301767+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402120704 unmapped: 70549504 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:40.301951+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 70541312 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:41.302265+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519020 data_alloc: 218103808 data_used: 7258112
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 70541312 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:42.302461+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 70541312 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:43.302593+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 70541312 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 62K writes, 230K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 62K writes, 24K syncs, 2.60 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3510 writes, 12K keys, 3510 commit groups, 1.0 writes per commit group, ingest: 13.22 MB, 0.02 MB/s
                                           Interval WAL: 3510 writes, 1479 syncs, 2.37 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a0c12000/0x0/0x1bfc00000, data 0x31d3827/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:44.302731+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 70541312 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:45.302879+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.704803467s of 22.738908768s, submitted: 19
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403709952 unmapped: 68960256 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:46.303000+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4613876 data_alloc: 218103808 data_used: 8192000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 68894720 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:47.303140+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a07e7000/0x0/0x1bfc00000, data 0x3aa4827/0x382a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 68894720 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:48.303304+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 68894720 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:49.303462+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 68894720 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:50.303656+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 68894720 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: mgrc ms_handle_reset ms_handle_reset con 0x5636d631bc00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/798720280
Dec 06 08:30:00 compute-0 ceph-osd[84884]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/798720280,v1:192.168.122.100:6801/798720280]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: get_auth_request con 0x5636d1e8c400 auth_method 0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: mgrc handle_mgr_configure stats_period=5
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:51.303839+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4614532 data_alloc: 218103808 data_used: 8208384
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 68894720 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d28e6c00 session 0x5636d45165a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d2d4dc00 session 0x5636d4e62960
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d28e6c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:52.303963+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d75a4400 session 0x5636d44865a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d2e4ac00 session 0x5636d4bb9c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d3b2c400 session 0x5636d2083e00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402857984 unmapped: 69812224 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4117400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a07e6000/0x0/0x1bfc00000, data 0x3aa4827/0x382a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [0,1])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d4117400 session 0x5636d4e3c000
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:53.304276+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402857984 unmapped: 69812224 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:54.304422+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402857984 unmapped: 69812224 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:55.304584+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402857984 unmapped: 69812224 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a03e4000/0x0/0x1bfc00000, data 0x3aa4827/0x382a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:56.304770+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.772432327s of 11.028005600s, submitted: 79
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4ac00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 435 ms_handle_reset con 0x5636d1e6a800 session 0x5636d50f14a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b2c400
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609210 data_alloc: 218103808 data_used: 8187904
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403914752 unmapped: 68755456 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 435 ms_handle_reset con 0x5636d3b2c400 session 0x5636d502d2c0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:57.304946+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 436 ms_handle_reset con 0x5636d2e4ac00 session 0x5636d50f1c20
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e69c00
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401178624 unmapped: 71491584 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 436 ms_handle_reset con 0x5636d2e4a800 session 0x5636d44861e0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 436 ms_handle_reset con 0x5636d1e69c00 session 0x5636d52974a0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:58.305206+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:59.305388+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:00.305574+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a1ff2000/0x0/0x1bfc00000, data 0x19e52c7/0x1c1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a1ff2000/0x0/0x1bfc00000, data 0x19e52c7/0x1c1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:01.305741+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4326015 data_alloc: 218103808 data_used: 3424256
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:02.305933+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a1ff2000/0x0/0x1bfc00000, data 0x19e52c7/0x1c1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:03.306178+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:04.306361+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a1ff2000/0x0/0x1bfc00000, data 0x19e52c7/0x1c1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:05.306513+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:06.306731+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.616170883s of 10.038711548s, submitted: 109
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 71475200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:07.306893+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 71475200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:08.307076+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 71475200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:09.307236+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 71475200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:10.307441+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:11.307594+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:12.307776+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:13.307940+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:14.308062+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:15.308297+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:16.308428+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:17.308628+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:18.308763+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 71458816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:19.308904+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 71458816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:20.309158+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 71458816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:21.309352+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 71450624 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:22.309513+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 71450624 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:23.309654+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 71450624 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:24.309819+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 71450624 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:25.309978+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 71450624 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:26.310190+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401227776 unmapped: 71442432 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:27.310457+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401227776 unmapped: 71442432 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:28.310596+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 71434240 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:29.310810+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 71434240 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:30.311054+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 71434240 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:31.311193+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 71434240 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:32.311374+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 71434240 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:33.311534+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 71434240 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:34.311744+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401244160 unmapped: 71426048 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:35.311903+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:36.312042+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:37.312225+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:38.312418+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:39.312596+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:40.312761+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:41.312970+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:42.313123+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:43.313242+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:44.313405+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:45.313628+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:46.313816+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:47.314045+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:48.314196+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:49.314424+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:50.314691+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:51.314856+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:52.315023+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:53.315157+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:54.315297+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:55.315480+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:56.315624+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:57.315842+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:58.316035+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:59.316162+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:00.316372+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:01.316550+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:02.316773+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:03.316989+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:04.317179+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:05.317343+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:06.317664+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:07.317791+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:08.317964+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:09.318129+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:10.318328+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:11.318491+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:12.318654+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:13.318822+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:14.319015+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:15.319205+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 71352320 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:16.319373+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 71352320 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:17.319561+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 71352320 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:18.319691+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 71344128 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:19.320301+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 71344128 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:20.320466+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 71344128 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:21.321208+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 71344128 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:22.321345+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 71335936 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:23.321490+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 71335936 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:24.321606+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 71335936 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:25.321748+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 71335936 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:26.321873+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:30:00 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:30:00 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 71335936 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:27.322019+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: do_command 'config diff' '{prefix=config diff}'
Dec 06 08:30:00 compute-0 ceph-osd[84884]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: do_command 'config show' '{prefix=config show}'
Dec 06 08:30:00 compute-0 ceph-osd[84884]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 06 08:30:00 compute-0 ceph-osd[84884]: do_command 'counter dump' '{prefix=counter dump}'
Dec 06 08:30:00 compute-0 ceph-osd[84884]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:28.322168+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: do_command 'counter schema' '{prefix=counter schema}'
Dec 06 08:30:00 compute-0 ceph-osd[84884]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 71704576 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:29.322302+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400850944 unmapped: 71819264 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:30:00 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:30.322483+0000)
Dec 06 08:30:00 compute-0 ceph-osd[84884]: do_command 'log dump' '{prefix=log dump}'
Dec 06 08:30:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37371 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:00 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:30:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45650 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46501 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:01.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37389 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45662 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46513 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 06 08:30:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224398132' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37407 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46528 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.45599 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.46447 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.37341 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.45617 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.46468 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: pgmap v4142: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.37359 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4281710221' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.45629 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3661017523' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.46480 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2986239348' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1543744436' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3073358271' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/770934710' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3224398132' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37422 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec 06 08:30:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2197532046' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:30:02 compute-0 crontab[420813]: (root) LIST (root)
Dec 06 08:30:02 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45695 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:30:02 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:30:02.152+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:30:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:02.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4143: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:02 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37428 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46561 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:30:02 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:30:02.732+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:30:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec 06 08:30:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224827068' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.37371 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.45650 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.46501 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.37389 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.45662 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.46513 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.37407 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.46528 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/202662870' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.37422 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2197532046' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.45695 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: pgmap v4143: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.37428 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1970116714' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3704065491' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1705572258' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.46561 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2156604952' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3224827068' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:30:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2894406726' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:30:02 compute-0 nova_compute[251992]: 2025-12-06 08:30:02.965 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:03 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37455 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:30:03 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:30:03.046+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:30:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:03.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec 06 08:30:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1188637612' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec 06 08:30:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/471531764' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec 06 08:30:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314498896' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec 06 08:30:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005305835' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:30:03.905 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:30:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:30:03.907 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:30:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:30:03.907 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:30:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec 06 08:30:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3892246700' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3960838425' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.37455 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3059804696' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1188637612' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/471531764' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3894160054' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2413139653' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2314498896' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1646325394' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3163222241' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4005305835' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/811302128' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3892246700' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:30:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2941519675' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:04.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4144: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec 06 08:30:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3036098789' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:30:04 compute-0 nova_compute[251992]: 2025-12-06 08:30:04.577 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Dec 06 08:30:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1902888652' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:30:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec 06 08:30:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2084276204' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: pgmap v4144: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3035065875' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1241005051' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3156275921' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3036098789' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2252398494' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/564105370' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1800079149' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1902888652' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/841291817' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2084276204' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/859904933' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3241546565' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1062355183' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Dec 06 08:30:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3072314910' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:30:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:05.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Dec 06 08:30:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3605788513' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 06 08:30:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1504983881' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Dec 06 08:30:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/420795564' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45815 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:05 compute-0 nova_compute[251992]: 2025-12-06 08:30:05.686 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:05 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45821 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Dec 06 08:30:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1730310572' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:30:05 compute-0 systemd[1]: Starting Hostname Service...
Dec 06 08:30:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Dec 06 08:30:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766860470' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 podman[421342]: 2025-12-06 08:30:06.005727588 +0000 UTC m=+0.117855642 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:30:06 compute-0 systemd[1]: Started Hostname Service.
Dec 06 08:30:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45827 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45833 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3072314910' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/164530540' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3605788513' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2820554929' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3810317471' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/552085105' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1504983881' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/420795564' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3112676870' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.45815 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.45821 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3515512720' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1730310572' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2766860470' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/871280326' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:06.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4145: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37581 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37587 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45848 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46705 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46711 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:06 compute-0 nova_compute[251992]: 2025-12-06 08:30:06.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:06 compute-0 nova_compute[251992]: 2025-12-06 08:30:06.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:30:06 compute-0 nova_compute[251992]: 2025-12-06 08:30:06.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:30:06 compute-0 nova_compute[251992]: 2025-12-06 08:30:06.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:30:06 compute-0 nova_compute[251992]: 2025-12-06 08:30:06.683 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:30:06 compute-0 nova_compute[251992]: 2025-12-06 08:30:06.684 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:30:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37596 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46717 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45857 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46723 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46735 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:07.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:07 compute-0 ceph-mon[74339]: from='client.45827 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: from='client.45833 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3375895849' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: pgmap v4145: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:07 compute-0 ceph-mon[74339]: from='client.37581 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: from='client.37587 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3724135901' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: from='client.45848 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: from='client.46705 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: from='client.46711 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: from='client.37596 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: from='client.46717 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1512476254' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:30:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1607896249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37620 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.162 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:30:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45872 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.350 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.351 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3813MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.351 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.352 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:30:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46753 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.430 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.430 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.446 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:30:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Dec 06 08:30:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449367434' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37638 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45884 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46765 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37662 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:30:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3106095360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.919 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.928 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.950 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.954 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:30:07 compute-0 nova_compute[251992]: 2025-12-06 08:30:07.955 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:30:08 compute-0 nova_compute[251992]: 2025-12-06 08:30:08.005 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Dec 06 08:30:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031716560' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45896 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46780 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.45857 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.46723 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.46735 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1607896249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.37620 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.45872 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.46753 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/836512384' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/449367434' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.37638 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.45884 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3933450453' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3106095360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2298280578' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4031716560' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:30:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:08.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4146: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:08 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37671 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 06 08:30:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/72862526' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:08 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46792 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37689 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Dec 06 08:30:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2170862911' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 sudo[421936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:08 compute-0 sudo[421936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:08 compute-0 sudo[421936]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:08 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46804 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:08 compute-0 sudo[421965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:08 compute-0 sudo[421965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:08 compute-0 sudo[421965]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:09.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.46765 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.37662 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.45896 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.46780 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1735822335' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: pgmap v4146: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.37671 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1507927948' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/72862526' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.46792 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.37689 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/707535975' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2170862911' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3596632528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:09 compute-0 nova_compute[251992]: 2025-12-06 08:30:09.579 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:09 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.45983 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:10.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='client.46804 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3831316851' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3831316851' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3797364698' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:30:10 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:30:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4147: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Dec 06 08:30:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/98021121' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:30:10 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46918 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:10 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37797 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:10 compute-0 nova_compute[251992]: 2025-12-06 08:30:10.956 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:10 compute-0 nova_compute[251992]: 2025-12-06 08:30:10.956 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Dec 06 08:30:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1089499906' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:30:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:11.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:11 compute-0 ceph-mon[74339]: from='client.45983 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:11 compute-0 ceph-mon[74339]: pgmap v4147: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2092924722' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:30:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/98021121' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:30:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3180610648' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:30:11 compute-0 ceph-mon[74339]: from='client.46918 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:11 compute-0 ceph-mon[74339]: from='client.37797 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4294752078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3066853454' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:30:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1089499906' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:30:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1527899473' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:30:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Dec 06 08:30:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853012714' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:30:11 compute-0 nova_compute[251992]: 2025-12-06 08:30:11.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:11 compute-0 nova_compute[251992]: 2025-12-06 08:30:11.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:30:11 compute-0 nova_compute[251992]: 2025-12-06 08:30:11.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:30:11 compute-0 nova_compute[251992]: 2025-12-06 08:30:11.675 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:30:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Dec 06 08:30:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/570365904' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:30:12 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46040 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:12.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Dec 06 08:30:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692227316' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:30:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4148: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1386011340' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:30:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1853012714' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:30:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1056357027' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:30:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1456843590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1301984105' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:30:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/570365904' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:30:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2186325669' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:30:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3692227316' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:30:12 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37830 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:12 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46963 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:13 compute-0 nova_compute[251992]: 2025-12-06 08:30:13.007 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:30:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:30:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:30:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:30:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:30:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:30:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Dec 06 08:30:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237105517' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:30:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:13.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:13 compute-0 podman[422414]: 2025-12-06 08:30:13.122936179 +0000 UTC m=+0.076523486 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:30:13 compute-0 podman[422413]: 2025-12-06 08:30:13.141001097 +0000 UTC m=+0.095276713 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:30:13 compute-0 ceph-mon[74339]: from='client.46040 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:13 compute-0 ceph-mon[74339]: pgmap v4148: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2466723271' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:30:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/289311643' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:30:13 compute-0 ceph-mon[74339]: from='client.37830 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:13 compute-0 ceph-mon[74339]: from='client.46963 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4122255845' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 08:30:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3681133396' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:30:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3237105517' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:30:13 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46064 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Dec 06 08:30:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2157891544' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 08:30:13 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46993 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:13 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37854 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:14.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4149: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:14 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46076 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Dec 06 08:30:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4020119241' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 08:30:14 compute-0 ceph-mon[74339]: from='client.46064 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2735476104' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 08:30:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2157891544' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 08:30:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3333165368' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 08:30:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1500227487' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 08:30:14 compute-0 nova_compute[251992]: 2025-12-06 08:30:14.581 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:14 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47005 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:14 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37866 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:14 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46085 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:14 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47011 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46088 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:15.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Dec 06 08:30:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3745472718' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Dec 06 08:30:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/483461817' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mon[74339]: from='client.46993 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mon[74339]: from='client.37854 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mon[74339]: pgmap v4149: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:15 compute-0 ceph-mon[74339]: from='client.46076 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4020119241' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mon[74339]: from='client.47005 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mon[74339]: from='client.37866 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4036621259' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2243827790' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3745472718' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/483461817' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 08:30:15 compute-0 nova_compute[251992]: 2025-12-06 08:30:15.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:15 compute-0 nova_compute[251992]: 2025-12-06 08:30:15.681 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Dec 06 08:30:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2132293271' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 08:30:15 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46106 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47035 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:16.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4150: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37896 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46115 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47044 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mon[74339]: from='client.46085 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:16 compute-0 ceph-mon[74339]: from='client.47011 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:16 compute-0 ceph-mon[74339]: from='client.46088 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3844968379' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 08:30:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2132293271' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37905 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:16 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:30:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Dec 06 08:30:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1986429874' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 06 08:30:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:17.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:17 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46142 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Dec 06 08:30:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2293411759' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47071 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: from='client.46106 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: from='client.47035 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: pgmap v4150: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:17 compute-0 ceph-mon[74339]: from='client.37896 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: from='client.46115 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: from='client.47044 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: from='client.37905 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3220172303' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2410749592' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1986429874' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/347281288' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1891305318' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2293411759' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Dec 06 08:30:17 compute-0 ovs-appctl[423554]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 06 08:30:17 compute-0 ovs-appctl[423558]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 06 08:30:17 compute-0 ovs-appctl[423562]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Dec 06 08:30:17 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46148 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37938 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:17 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47077 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:18 compute-0 nova_compute[251992]: 2025-12-06 08:30:18.008 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:18.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4151: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:18 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37947 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 06 08:30:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4062746758' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:30:18 compute-0 nova_compute[251992]: 2025-12-06 08:30:18.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:30:18
Dec 06 08:30:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:30:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:30:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'backups', 'default.rgw.control', 'vms', 'default.rgw.meta']
Dec 06 08:30:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:30:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:19.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 06 08:30:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2117150299' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:30:19 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47107 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:19 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46178 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:19 compute-0 nova_compute[251992]: 2025-12-06 08:30:19.638 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Dec 06 08:30:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3573622413' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 06 08:30:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:20.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Dec 06 08:30:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/496890426' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4152: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:20 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.37977 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 06 08:30:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596404359' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:21.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:22.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4153: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:22 compute-0 ceph-mon[74339]: from='client.46142 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:22 compute-0 ceph-mon[74339]: from='client.47071 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4062746758' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:30:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2634980621' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:30:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Dec 06 08:30:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4179480912' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 nova_compute[251992]: 2025-12-06 08:30:23.011 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:23.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Dec 06 08:30:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/779822408' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47155 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46211 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Dec 06 08:30:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3674858668' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 nova_compute[251992]: 2025-12-06 08:30:23.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:23 compute-0 nova_compute[251992]: 2025-12-06 08:30:23.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.46148 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.37938 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.47077 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: pgmap v4151: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.37947 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1164476600' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/554704831' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3572935360' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/650511931' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2117150299' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.47107 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.46178 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3573622413' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1565949019' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/496890426' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: pgmap v4152: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4171845927' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.37977 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2596404359' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3335022702' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3768634476' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3375488095' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: pgmap v4153: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/370689176' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4179480912' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3762760187' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3695961694' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/779822408' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:30:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:30:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:30:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:30:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:30:23 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38019 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:24.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4154: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:24 compute-0 nova_compute[251992]: 2025-12-06 08:30:24.639 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:24 compute-0 ceph-mon[74339]: from='client.47155 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-0 ceph-mon[74339]: from='client.46211 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3674858668' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/958197419' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/416997609' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-0 ceph-mon[74339]: from='client.38019 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:24 compute-0 ceph-mon[74339]: pgmap v4154: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:25.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Dec 06 08:30:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3599598298' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Dec 06 08:30:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251283217' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:25 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47176 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:25 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46232 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:25 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38043 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:26.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4155: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3599598298' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3655934544' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4124238822' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4251283217' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-0 ceph-mon[74339]: from='client.47176 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Dec 06 08:30:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/782571674' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-0 nova_compute[251992]: 2025-12-06 08:30:26.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38055 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47197 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:30:26 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46250 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47206 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38064 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:27.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:27 compute-0 ceph-mon[74339]: from='client.46232 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-0 ceph-mon[74339]: from='client.38043 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/328269644' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-0 ceph-mon[74339]: pgmap v4155: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/782571674' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3979588669' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-0 ceph-mon[74339]: from='client.38055 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-0 ceph-mon[74339]: from='client.47197 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46256 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:30:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:30:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:30:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:30:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:30:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Dec 06 08:30:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776452059' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Dec 06 08:30:27 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3419545571' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 nova_compute[251992]: 2025-12-06 08:30:28.014 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47236 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:28.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4156: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38091 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mon[74339]: from='client.46250 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mon[74339]: from='client.47206 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mon[74339]: from='client.38064 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mon[74339]: from='client.46256 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2530206311' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/776452059' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/61962800' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3573223227' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3419545571' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1952695033' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47245 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46283 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38103 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46301 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:30:28 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:30:29 compute-0 sudo[425300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:29 compute-0 sudo[425300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:29 compute-0 sudo[425300]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 06 08:30:29 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1117232887' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-0 sudo[425325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:29 compute-0 sudo[425325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:29 compute-0 sudo[425325]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:29.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:29 compute-0 ceph-mon[74339]: from='client.47236 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-0 ceph-mon[74339]: pgmap v4156: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:29 compute-0 ceph-mon[74339]: from='client.38091 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-0 ceph-mon[74339]: from='client.47245 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-0 ceph-mon[74339]: from='client.46283 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-0 ceph-mon[74339]: from='client.38103 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3317472033' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1117232887' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3249964146' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/348320463' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47281 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:29 compute-0 nova_compute[251992]: 2025-12-06 08:30:29.671 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:29 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38136 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47287 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38142 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:30.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4157: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:30 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46328 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46337 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-0 ceph-mon[74339]: from='client.46301 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/939267410' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-0 ceph-mon[74339]: from='client.47281 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/638693939' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3466208961' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:30 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1655787077' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-0 virtqemud[251613]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 06 08:30:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Dec 06 08:30:31 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3615114752' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:31.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:31 compute-0 sudo[425798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:31 compute-0 sudo[425798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:31 compute-0 sudo[425798]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:31 compute-0 sudo[425834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:30:31 compute-0 sudo[425834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:31 compute-0 sudo[425834]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:31 compute-0 sudo[425867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:31 compute-0 sudo[425867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:31 compute-0 sudo[425867]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:31 compute-0 sudo[425897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 08:30:31 compute-0 sudo[425897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:31 compute-0 ceph-mon[74339]: from='client.38136 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-0 ceph-mon[74339]: from='client.47287 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-0 ceph-mon[74339]: from='client.38142 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-0 ceph-mon[74339]: pgmap v4157: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:31 compute-0 ceph-mon[74339]: from='client.46328 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-0 ceph-mon[74339]: from='client.46337 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/222777010' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3615114752' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2817429348' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:31 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/563215958' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Dec 06 08:30:32 compute-0 systemd[1]: Starting Time & Date Service...
Dec 06 08:30:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:32.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4158: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:32 compute-0 systemd[1]: Started Time & Date Service.
Dec 06 08:30:32 compute-0 podman[426036]: 2025-12-06 08:30:32.350085885 +0000 UTC m=+0.147288017 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:30:32 compute-0 podman[426036]: 2025-12-06 08:30:32.453588508 +0000 UTC m=+0.250790630 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:30:33 compute-0 ceph-mon[74339]: pgmap v4158: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:33 compute-0 nova_compute[251992]: 2025-12-06 08:30:33.017 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:33 compute-0 podman[426225]: 2025-12-06 08:30:33.120350894 +0000 UTC m=+0.059006384 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 08:30:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:33.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:33 compute-0 podman[426225]: 2025-12-06 08:30:33.128221957 +0000 UTC m=+0.066877427 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 08:30:33 compute-0 podman[426290]: 2025-12-06 08:30:33.386358413 +0000 UTC m=+0.060136604 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, architecture=x86_64, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, release=1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived)
Dec 06 08:30:33 compute-0 podman[426290]: 2025-12-06 08:30:33.397858004 +0000 UTC m=+0.071636165 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, description=keepalived for Ceph, io.openshift.expose-services=)
Dec 06 08:30:33 compute-0 sudo[425897]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:30:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:30:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:33 compute-0 sudo[426322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:33 compute-0 sudo[426322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:33 compute-0 sudo[426322]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:33 compute-0 sudo[426347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:30:33 compute-0 sudo[426347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:33 compute-0 sudo[426347]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:33 compute-0 sudo[426372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:33 compute-0 sudo[426372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:33 compute-0 sudo[426372]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:33 compute-0 sudo[426397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:30:33 compute-0 sudo[426397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:34 compute-0 sudo[426397]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:30:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:30:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:30:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:30:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:30:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 68cb9676-9cce-479f-958c-ed3b36f199d6 does not exist
Dec 06 08:30:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 681ecdd4-b136-437a-800b-7813a671b090 does not exist
Dec 06 08:30:34 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev edd62718-3b1b-4ff2-af45-54f31bb67758 does not exist
Dec 06 08:30:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:30:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:30:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:30:34 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:30:34 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:30:34 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:30:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4159: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:34.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:34 compute-0 sudo[426455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:34 compute-0 sudo[426455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:34 compute-0 sudo[426455]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:34 compute-0 sudo[426480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:30:34 compute-0 sudo[426480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:34 compute-0 sudo[426480]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:34 compute-0 sudo[426505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:34 compute-0 sudo[426505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:34 compute-0 sudo[426505]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:34 compute-0 sudo[426530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:30:34 compute-0 sudo[426530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:30:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:30:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:30:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:30:34 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:30:34 compute-0 nova_compute[251992]: 2025-12-06 08:30:34.673 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:34 compute-0 podman[426596]: 2025-12-06 08:30:34.868947368 +0000 UTC m=+0.065139809 container create 7d7eea5c5a5483eb38c8a02428594004e06bd2e0cd2fbd72ed4443ea387543f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_haslett, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 08:30:34 compute-0 systemd[1]: Started libpod-conmon-7d7eea5c5a5483eb38c8a02428594004e06bd2e0cd2fbd72ed4443ea387543f6.scope.
Dec 06 08:30:34 compute-0 podman[426596]: 2025-12-06 08:30:34.836945634 +0000 UTC m=+0.033138135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:30:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:30:34 compute-0 podman[426596]: 2025-12-06 08:30:34.961466535 +0000 UTC m=+0.157658996 container init 7d7eea5c5a5483eb38c8a02428594004e06bd2e0cd2fbd72ed4443ea387543f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_haslett, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 08:30:34 compute-0 podman[426596]: 2025-12-06 08:30:34.972432261 +0000 UTC m=+0.168624702 container start 7d7eea5c5a5483eb38c8a02428594004e06bd2e0cd2fbd72ed4443ea387543f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Dec 06 08:30:34 compute-0 podman[426596]: 2025-12-06 08:30:34.976054558 +0000 UTC m=+0.172246989 container attach 7d7eea5c5a5483eb38c8a02428594004e06bd2e0cd2fbd72ed4443ea387543f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_haslett, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:30:34 compute-0 hopeful_haslett[426612]: 167 167
Dec 06 08:30:34 compute-0 podman[426596]: 2025-12-06 08:30:34.979695677 +0000 UTC m=+0.175888108 container died 7d7eea5c5a5483eb38c8a02428594004e06bd2e0cd2fbd72ed4443ea387543f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_haslett, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 08:30:34 compute-0 systemd[1]: libpod-7d7eea5c5a5483eb38c8a02428594004e06bd2e0cd2fbd72ed4443ea387543f6.scope: Deactivated successfully.
Dec 06 08:30:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b0977f9330a3ba42f7028ea34d5a0ceb573c6359f55a4f5e955d1e4d8ed0076-merged.mount: Deactivated successfully.
Dec 06 08:30:35 compute-0 podman[426596]: 2025-12-06 08:30:35.035093272 +0000 UTC m=+0.231285693 container remove 7d7eea5c5a5483eb38c8a02428594004e06bd2e0cd2fbd72ed4443ea387543f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_haslett, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:30:35 compute-0 systemd[1]: libpod-conmon-7d7eea5c5a5483eb38c8a02428594004e06bd2e0cd2fbd72ed4443ea387543f6.scope: Deactivated successfully.
Dec 06 08:30:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:35.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:35 compute-0 podman[426635]: 2025-12-06 08:30:35.241929124 +0000 UTC m=+0.053705110 container create 6a7a9e903e158c5cfcdd634f30fc02a716b25293792817f740604035186de030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 08:30:35 compute-0 systemd[1]: Started libpod-conmon-6a7a9e903e158c5cfcdd634f30fc02a716b25293792817f740604035186de030.scope.
Dec 06 08:30:35 compute-0 podman[426635]: 2025-12-06 08:30:35.213289612 +0000 UTC m=+0.025065618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:30:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4937e9532e23117809a18eb0273ed582efe140b5d5fa54a762f594c816574862/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4937e9532e23117809a18eb0273ed582efe140b5d5fa54a762f594c816574862/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4937e9532e23117809a18eb0273ed582efe140b5d5fa54a762f594c816574862/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4937e9532e23117809a18eb0273ed582efe140b5d5fa54a762f594c816574862/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4937e9532e23117809a18eb0273ed582efe140b5d5fa54a762f594c816574862/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:35 compute-0 podman[426635]: 2025-12-06 08:30:35.3773485 +0000 UTC m=+0.189124506 container init 6a7a9e903e158c5cfcdd634f30fc02a716b25293792817f740604035186de030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:30:35 compute-0 podman[426635]: 2025-12-06 08:30:35.385405086 +0000 UTC m=+0.197181072 container start 6a7a9e903e158c5cfcdd634f30fc02a716b25293792817f740604035186de030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:30:35 compute-0 podman[426635]: 2025-12-06 08:30:35.406311041 +0000 UTC m=+0.218087047 container attach 6a7a9e903e158c5cfcdd634f30fc02a716b25293792817f740604035186de030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:30:35 compute-0 ceph-mon[74339]: pgmap v4159: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4160: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:36.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:36 compute-0 nervous_lovelace[426652]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:30:36 compute-0 nervous_lovelace[426652]: --> relative data size: 1.0
Dec 06 08:30:36 compute-0 nervous_lovelace[426652]: --> All data devices are unavailable
Dec 06 08:30:36 compute-0 systemd[1]: libpod-6a7a9e903e158c5cfcdd634f30fc02a716b25293792817f740604035186de030.scope: Deactivated successfully.
Dec 06 08:30:36 compute-0 podman[426635]: 2025-12-06 08:30:36.288636145 +0000 UTC m=+1.100412161 container died 6a7a9e903e158c5cfcdd634f30fc02a716b25293792817f740604035186de030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:30:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4937e9532e23117809a18eb0273ed582efe140b5d5fa54a762f594c816574862-merged.mount: Deactivated successfully.
Dec 06 08:30:36 compute-0 podman[426635]: 2025-12-06 08:30:36.358910682 +0000 UTC m=+1.170686668 container remove 6a7a9e903e158c5cfcdd634f30fc02a716b25293792817f740604035186de030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lovelace, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Dec 06 08:30:36 compute-0 systemd[1]: libpod-conmon-6a7a9e903e158c5cfcdd634f30fc02a716b25293792817f740604035186de030.scope: Deactivated successfully.
Dec 06 08:30:36 compute-0 sudo[426530]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:36 compute-0 podman[426669]: 2025-12-06 08:30:36.419665232 +0000 UTC m=+0.110664228 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:30:36 compute-0 sudo[426705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:36 compute-0 sudo[426705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:36 compute-0 sudo[426705]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:36 compute-0 sudo[426733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:30:36 compute-0 sudo[426733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:36 compute-0 sudo[426733]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:36 compute-0 sudo[426758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:36 compute-0 sudo[426758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:36 compute-0 sudo[426758]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:36 compute-0 sudo[426783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:30:36 compute-0 sudo[426783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:37 compute-0 podman[426849]: 2025-12-06 08:30:37.086281964 +0000 UTC m=+0.057294528 container create 648a740313188f1895fbd1a683590279916341e4d4f9e32ce509a7ea31675782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curran, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 08:30:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:37.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:37 compute-0 systemd[1]: Started libpod-conmon-648a740313188f1895fbd1a683590279916341e4d4f9e32ce509a7ea31675782.scope.
Dec 06 08:30:37 compute-0 podman[426849]: 2025-12-06 08:30:37.058364911 +0000 UTC m=+0.029377535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:30:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:30:37 compute-0 podman[426849]: 2025-12-06 08:30:37.178396129 +0000 UTC m=+0.149408753 container init 648a740313188f1895fbd1a683590279916341e4d4f9e32ce509a7ea31675782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curran, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 08:30:37 compute-0 podman[426849]: 2025-12-06 08:30:37.18914844 +0000 UTC m=+0.160161024 container start 648a740313188f1895fbd1a683590279916341e4d4f9e32ce509a7ea31675782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:30:37 compute-0 podman[426849]: 2025-12-06 08:30:37.193259541 +0000 UTC m=+0.164272115 container attach 648a740313188f1895fbd1a683590279916341e4d4f9e32ce509a7ea31675782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:30:37 compute-0 adoring_curran[426865]: 167 167
Dec 06 08:30:37 compute-0 systemd[1]: libpod-648a740313188f1895fbd1a683590279916341e4d4f9e32ce509a7ea31675782.scope: Deactivated successfully.
Dec 06 08:30:37 compute-0 conmon[426865]: conmon 648a740313188f1895fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-648a740313188f1895fbd1a683590279916341e4d4f9e32ce509a7ea31675782.scope/container/memory.events
Dec 06 08:30:37 compute-0 podman[426849]: 2025-12-06 08:30:37.19876609 +0000 UTC m=+0.169778694 container died 648a740313188f1895fbd1a683590279916341e4d4f9e32ce509a7ea31675782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 08:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0490b6aad8a4f9bfc173d0dc6cab1a84f86adb5611fc19fb8fee1fcaf3635b6-merged.mount: Deactivated successfully.
Dec 06 08:30:37 compute-0 podman[426849]: 2025-12-06 08:30:37.26437989 +0000 UTC m=+0.235392474 container remove 648a740313188f1895fbd1a683590279916341e4d4f9e32ce509a7ea31675782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_curran, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:30:37 compute-0 systemd[1]: libpod-conmon-648a740313188f1895fbd1a683590279916341e4d4f9e32ce509a7ea31675782.scope: Deactivated successfully.
Dec 06 08:30:37 compute-0 podman[426887]: 2025-12-06 08:30:37.473796303 +0000 UTC m=+0.052875289 container create e9057ded7cb8d61d4c0f261289f2e026f7702de9ccca84043808757510ff833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 08:30:37 compute-0 ceph-mon[74339]: pgmap v4160: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:37 compute-0 systemd[1]: Started libpod-conmon-e9057ded7cb8d61d4c0f261289f2e026f7702de9ccca84043808757510ff833e.scope.
Dec 06 08:30:37 compute-0 podman[426887]: 2025-12-06 08:30:37.458021336 +0000 UTC m=+0.037100322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:30:37 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1c62a1afbdc00bb80b2ee7ede4b452e3361477c90196e4fa2cde6dcf8eaeb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1c62a1afbdc00bb80b2ee7ede4b452e3361477c90196e4fa2cde6dcf8eaeb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1c62a1afbdc00bb80b2ee7ede4b452e3361477c90196e4fa2cde6dcf8eaeb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1c62a1afbdc00bb80b2ee7ede4b452e3361477c90196e4fa2cde6dcf8eaeb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:37 compute-0 podman[426887]: 2025-12-06 08:30:37.608772765 +0000 UTC m=+0.187851771 container init e9057ded7cb8d61d4c0f261289f2e026f7702de9ccca84043808757510ff833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:30:37 compute-0 podman[426887]: 2025-12-06 08:30:37.615756864 +0000 UTC m=+0.194835880 container start e9057ded7cb8d61d4c0f261289f2e026f7702de9ccca84043808757510ff833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:30:37 compute-0 podman[426887]: 2025-12-06 08:30:37.619819124 +0000 UTC m=+0.198898140 container attach e9057ded7cb8d61d4c0f261289f2e026f7702de9ccca84043808757510ff833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:30:38 compute-0 nova_compute[251992]: 2025-12-06 08:30:38.020 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4161: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:38.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]: {
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:     "0": [
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:         {
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             "devices": [
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "/dev/loop3"
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             ],
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             "lv_name": "ceph_lv0",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             "lv_size": "7511998464",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             "name": "ceph_lv0",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             "tags": {
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "ceph.cluster_name": "ceph",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "ceph.crush_device_class": "",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "ceph.encrypted": "0",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "ceph.osd_id": "0",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "ceph.type": "block",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:                 "ceph.vdo": "0"
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             },
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             "type": "block",
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:             "vg_name": "ceph_vg0"
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:         }
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]:     ]
Dec 06 08:30:38 compute-0 intelligent_almeida[426904]: }
Dec 06 08:30:38 compute-0 systemd[1]: libpod-e9057ded7cb8d61d4c0f261289f2e026f7702de9ccca84043808757510ff833e.scope: Deactivated successfully.
Dec 06 08:30:38 compute-0 podman[426887]: 2025-12-06 08:30:38.368620373 +0000 UTC m=+0.947699359 container died e9057ded7cb8d61d4c0f261289f2e026f7702de9ccca84043808757510ff833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:30:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c1c62a1afbdc00bb80b2ee7ede4b452e3361477c90196e4fa2cde6dcf8eaeb1-merged.mount: Deactivated successfully.
Dec 06 08:30:38 compute-0 podman[426887]: 2025-12-06 08:30:38.442754325 +0000 UTC m=+1.021833321 container remove e9057ded7cb8d61d4c0f261289f2e026f7702de9ccca84043808757510ff833e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:30:38 compute-0 systemd[1]: libpod-conmon-e9057ded7cb8d61d4c0f261289f2e026f7702de9ccca84043808757510ff833e.scope: Deactivated successfully.
Dec 06 08:30:38 compute-0 sudo[426783]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:38 compute-0 sudo[426926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:38 compute-0 sudo[426926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:38 compute-0 sudo[426926]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:38 compute-0 sudo[426951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:30:38 compute-0 sudo[426951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:38 compute-0 sudo[426951]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:38 compute-0 sudo[426976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:38 compute-0 sudo[426976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:38 compute-0 sudo[426976]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:38 compute-0 sudo[427001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:30:38 compute-0 sudo[427001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:39 compute-0 podman[427064]: 2025-12-06 08:30:39.119712755 +0000 UTC m=+0.054246345 container create e2195e40dd50b5ed4eadb2fafd6ef0e4ef8c4846584db6c88f42846a74d30152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:30:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:39.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:39 compute-0 systemd[1]: Started libpod-conmon-e2195e40dd50b5ed4eadb2fafd6ef0e4ef8c4846584db6c88f42846a74d30152.scope.
Dec 06 08:30:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:30:39 compute-0 podman[427064]: 2025-12-06 08:30:39.091816052 +0000 UTC m=+0.026349672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:30:39 compute-0 podman[427064]: 2025-12-06 08:30:39.239259002 +0000 UTC m=+0.173792612 container init e2195e40dd50b5ed4eadb2fafd6ef0e4ef8c4846584db6c88f42846a74d30152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Dec 06 08:30:39 compute-0 podman[427064]: 2025-12-06 08:30:39.244965546 +0000 UTC m=+0.179499136 container start e2195e40dd50b5ed4eadb2fafd6ef0e4ef8c4846584db6c88f42846a74d30152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:30:39 compute-0 sharp_lederberg[427081]: 167 167
Dec 06 08:30:39 compute-0 systemd[1]: libpod-e2195e40dd50b5ed4eadb2fafd6ef0e4ef8c4846584db6c88f42846a74d30152.scope: Deactivated successfully.
Dec 06 08:30:39 compute-0 podman[427064]: 2025-12-06 08:30:39.268860211 +0000 UTC m=+0.203393781 container attach e2195e40dd50b5ed4eadb2fafd6ef0e4ef8c4846584db6c88f42846a74d30152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 08:30:39 compute-0 podman[427064]: 2025-12-06 08:30:39.269287772 +0000 UTC m=+0.203821342 container died e2195e40dd50b5ed4eadb2fafd6ef0e4ef8c4846584db6c88f42846a74d30152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:30:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5ef80ebef9e976f5c238cd97947cbfcf5700bf7725c28f4800054f6092dffc0-merged.mount: Deactivated successfully.
Dec 06 08:30:39 compute-0 podman[427064]: 2025-12-06 08:30:39.311345207 +0000 UTC m=+0.245878787 container remove e2195e40dd50b5ed4eadb2fafd6ef0e4ef8c4846584db6c88f42846a74d30152 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_lederberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 08:30:39 compute-0 systemd[1]: libpod-conmon-e2195e40dd50b5ed4eadb2fafd6ef0e4ef8c4846584db6c88f42846a74d30152.scope: Deactivated successfully.
Dec 06 08:30:39 compute-0 podman[427105]: 2025-12-06 08:30:39.496022982 +0000 UTC m=+0.046186508 container create d87687450f29bc63bcc8003b875e58710b69b2e584c382321dd4efc59a099e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Dec 06 08:30:39 compute-0 ceph-mon[74339]: pgmap v4161: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:39 compute-0 systemd[1]: Started libpod-conmon-d87687450f29bc63bcc8003b875e58710b69b2e584c382321dd4efc59a099e50.scope.
Dec 06 08:30:39 compute-0 podman[427105]: 2025-12-06 08:30:39.475343583 +0000 UTC m=+0.025507129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:30:39 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ef87cd2898c6c2df5be1574dd46f242868228af5b4fe34abdba3593f84475d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ef87cd2898c6c2df5be1574dd46f242868228af5b4fe34abdba3593f84475d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ef87cd2898c6c2df5be1574dd46f242868228af5b4fe34abdba3593f84475d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ef87cd2898c6c2df5be1574dd46f242868228af5b4fe34abdba3593f84475d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:39 compute-0 podman[427105]: 2025-12-06 08:30:39.587321475 +0000 UTC m=+0.137485041 container init d87687450f29bc63bcc8003b875e58710b69b2e584c382321dd4efc59a099e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:30:39 compute-0 podman[427105]: 2025-12-06 08:30:39.597238464 +0000 UTC m=+0.147401990 container start d87687450f29bc63bcc8003b875e58710b69b2e584c382321dd4efc59a099e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 08:30:39 compute-0 podman[427105]: 2025-12-06 08:30:39.60342169 +0000 UTC m=+0.153585216 container attach d87687450f29bc63bcc8003b875e58710b69b2e584c382321dd4efc59a099e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 08:30:39 compute-0 nova_compute[251992]: 2025-12-06 08:30:39.676 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4162: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:40.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:40 compute-0 happy_chaplygin[427121]: {
Dec 06 08:30:40 compute-0 happy_chaplygin[427121]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:30:40 compute-0 happy_chaplygin[427121]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:30:40 compute-0 happy_chaplygin[427121]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:30:40 compute-0 happy_chaplygin[427121]:         "osd_id": 0,
Dec 06 08:30:40 compute-0 happy_chaplygin[427121]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:30:40 compute-0 happy_chaplygin[427121]:         "type": "bluestore"
Dec 06 08:30:40 compute-0 happy_chaplygin[427121]:     }
Dec 06 08:30:40 compute-0 happy_chaplygin[427121]: }
Dec 06 08:30:40 compute-0 systemd[1]: libpod-d87687450f29bc63bcc8003b875e58710b69b2e584c382321dd4efc59a099e50.scope: Deactivated successfully.
Dec 06 08:30:40 compute-0 podman[427105]: 2025-12-06 08:30:40.44297601 +0000 UTC m=+0.993139536 container died d87687450f29bc63bcc8003b875e58710b69b2e584c382321dd4efc59a099e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:30:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-60ef87cd2898c6c2df5be1574dd46f242868228af5b4fe34abdba3593f84475d-merged.mount: Deactivated successfully.
Dec 06 08:30:40 compute-0 podman[427105]: 2025-12-06 08:30:40.665876795 +0000 UTC m=+1.216040311 container remove d87687450f29bc63bcc8003b875e58710b69b2e584c382321dd4efc59a099e50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:30:40 compute-0 systemd[1]: libpod-conmon-d87687450f29bc63bcc8003b875e58710b69b2e584c382321dd4efc59a099e50.scope: Deactivated successfully.
Dec 06 08:30:40 compute-0 sudo[427001]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:30:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:41.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:41 compute-0 ceph-mon[74339]: pgmap v4162: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:41 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:30:41 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:41 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a9295769-1da5-411c-9717-981fd5696280 does not exist
Dec 06 08:30:41 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4f2e34f5-fec7-414e-986c-fffa8f36e212 does not exist
Dec 06 08:30:41 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 525f0791-a1ff-4009-b751-8292acb9d0de does not exist
Dec 06 08:30:41 compute-0 sudo[427157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:41 compute-0 sudo[427157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:41 compute-0 sudo[427157]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:41 compute-0 sudo[427182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:30:41 compute-0 sudo[427182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:41 compute-0 sudo[427182]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4163: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:42.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:42 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:30:43 compute-0 nova_compute[251992]: 2025-12-06 08:30:43.025 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:30:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:30:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:30:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:30:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:30:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:30:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:43.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:43 compute-0 ceph-mon[74339]: pgmap v4163: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:43 compute-0 podman[427208]: 2025-12-06 08:30:43.42560742 +0000 UTC m=+0.076444114 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:30:43 compute-0 podman[427209]: 2025-12-06 08:30:43.462092455 +0000 UTC m=+0.097476512 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:30:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4164: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:44.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:44 compute-0 nova_compute[251992]: 2025-12-06 08:30:44.708 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:45.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:46 compute-0 ceph-mon[74339]: pgmap v4164: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4165: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:46.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:47.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:48 compute-0 nova_compute[251992]: 2025-12-06 08:30:48.077 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:48 compute-0 ceph-mon[74339]: pgmap v4165: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4166: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:48.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:49.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:49 compute-0 sudo[427250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:49 compute-0 sudo[427250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:49 compute-0 sudo[427250]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:49 compute-0 sudo[427276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:30:49 compute-0 sudo[427276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:30:49 compute-0 sudo[427276]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:49 compute-0 ceph-mon[74339]: pgmap v4166: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:49 compute-0 nova_compute[251992]: 2025-12-06 08:30:49.708 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4167: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:50.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3827008471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:51.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:51 compute-0 ceph-mon[74339]: pgmap v4167: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2064156881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:30:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4168: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:52.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:53 compute-0 nova_compute[251992]: 2025-12-06 08:30:53.079 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:53.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:53 compute-0 ceph-mon[74339]: pgmap v4168: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4169: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:54.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:54 compute-0 nova_compute[251992]: 2025-12-06 08:30:54.712 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:30:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:55.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:30:55 compute-0 ceph-mon[74339]: pgmap v4169: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:30:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4170: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:56.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:57.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:57 compute-0 ceph-mon[74339]: pgmap v4170: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:58 compute-0 nova_compute[251992]: 2025-12-06 08:30:58.082 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:30:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4171: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:30:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:30:58.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:30:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:30:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:30:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:30:59.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:30:59 compute-0 ceph-mon[74339]: pgmap v4171: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:30:59 compute-0 nova_compute[251992]: 2025-12-06 08:30:59.714 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4172: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:00.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:01.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:01 compute-0 ceph-mon[74339]: pgmap v4172: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4173: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:02.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:02 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 08:31:02 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 08:31:02 compute-0 ceph-mon[74339]: pgmap v4173: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:03 compute-0 nova_compute[251992]: 2025-12-06 08:31:03.085 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:03.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:31:03.907 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:31:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:31:03.908 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:31:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:31:03.908 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:31:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4174: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:04.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:04 compute-0 nova_compute[251992]: 2025-12-06 08:31:04.717 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:05.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:05 compute-0 ceph-mon[74339]: pgmap v4174: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4175: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:06.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:06 compute-0 nova_compute[251992]: 2025-12-06 08:31:06.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:07.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:07 compute-0 ceph-mon[74339]: pgmap v4175: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:07 compute-0 podman[427316]: 2025-12-06 08:31:07.4710604 +0000 UTC m=+0.119234670 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:31:07 compute-0 nova_compute[251992]: 2025-12-06 08:31:07.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:07 compute-0 nova_compute[251992]: 2025-12-06 08:31:07.715 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:31:07 compute-0 nova_compute[251992]: 2025-12-06 08:31:07.716 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:31:07 compute-0 nova_compute[251992]: 2025-12-06 08:31:07.717 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:31:07 compute-0 nova_compute[251992]: 2025-12-06 08:31:07.717 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:31:07 compute-0 nova_compute[251992]: 2025-12-06 08:31:07.717 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:31:08 compute-0 sshd-session[427314]: Connection reset by authenticating user root 91.202.233.33 port 30552 [preauth]
Dec 06 08:31:08 compute-0 nova_compute[251992]: 2025-12-06 08:31:08.130 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:08 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:31:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295295733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4176: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:08 compute-0 nova_compute[251992]: 2025-12-06 08:31:08.227 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:31:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:08.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:08 compute-0 nova_compute[251992]: 2025-12-06 08:31:08.394 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:31:08 compute-0 nova_compute[251992]: 2025-12-06 08:31:08.396 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3912MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:31:08 compute-0 nova_compute[251992]: 2025-12-06 08:31:08.397 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:31:08 compute-0 nova_compute[251992]: 2025-12-06 08:31:08.397 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:31:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3295295733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:09.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.247 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.248 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.262 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.280 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.280 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.292 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.309 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.325 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:31:09 compute-0 sudo[427368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:09 compute-0 sudo[427368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:09 compute-0 sudo[427368]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:09 compute-0 sudo[427394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:09 compute-0 sudo[427394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:09 compute-0 sudo[427394]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:09 compute-0 ceph-mon[74339]: pgmap v4176: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2136885558' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:31:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2136885558' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.719 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:31:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612683571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.762 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.769 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.789 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.791 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:31:09 compute-0 nova_compute[251992]: 2025-12-06 08:31:09.792 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:31:10 compute-0 sshd-session[427366]: Connection reset by authenticating user root 91.202.233.33 port 30562 [preauth]
Dec 06 08:31:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4177: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 08:31:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:10.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/612683571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3684053451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:11.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:11 compute-0 ceph-mon[74339]: pgmap v4177: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Dec 06 08:31:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2578691889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4178: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Dec 06 08:31:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:12.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:12 compute-0 sshd-session[427441]: Connection reset by authenticating user root 91.202.233.33 port 30566 [preauth]
Dec 06 08:31:12 compute-0 nova_compute[251992]: 2025-12-06 08:31:12.793 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:12 compute-0 nova_compute[251992]: 2025-12-06 08:31:12.794 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:12 compute-0 ceph-mon[74339]: pgmap v4178: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Dec 06 08:31:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:31:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:31:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:31:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:31:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:31:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:31:13 compute-0 nova_compute[251992]: 2025-12-06 08:31:13.133 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:13.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:13 compute-0 nova_compute[251992]: 2025-12-06 08:31:13.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:13 compute-0 nova_compute[251992]: 2025-12-06 08:31:13.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:31:13 compute-0 nova_compute[251992]: 2025-12-06 08:31:13.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:31:13 compute-0 nova_compute[251992]: 2025-12-06 08:31:13.690 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:31:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4179: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Dec 06 08:31:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:14.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:14 compute-0 podman[427447]: 2025-12-06 08:31:14.399065874 +0000 UTC m=+0.061594293 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 06 08:31:14 compute-0 podman[427448]: 2025-12-06 08:31:14.404039949 +0000 UTC m=+0.059913009 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125)
Dec 06 08:31:14 compute-0 sshd-session[427444]: Connection reset by authenticating user root 91.202.233.33 port 25888 [preauth]
Dec 06 08:31:14 compute-0 nova_compute[251992]: 2025-12-06 08:31:14.721 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:15.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:15 compute-0 ceph-mon[74339]: pgmap v4179: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Dec 06 08:31:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4180: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Dec 06 08:31:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:16.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:16 compute-0 nova_compute[251992]: 2025-12-06 08:31:16.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:16 compute-0 sshd-session[427483]: Connection reset by authenticating user root 91.202.233.33 port 25904 [preauth]
Dec 06 08:31:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:17.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:17 compute-0 ceph-mon[74339]: pgmap v4180: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Dec 06 08:31:18 compute-0 nova_compute[251992]: 2025-12-06 08:31:18.170 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4181: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Dec 06 08:31:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:18.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:31:18
Dec 06 08:31:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:31:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:31:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['vms', '.rgw.root', '.mgr', 'backups', 'images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control']
Dec 06 08:31:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:31:19 compute-0 ceph-mon[74339]: pgmap v4181: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 0 B/s wr, 97 op/s
Dec 06 08:31:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:19.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:19 compute-0 nova_compute[251992]: 2025-12-06 08:31:19.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:19 compute-0 nova_compute[251992]: 2025-12-06 08:31:19.726 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:20 compute-0 sudo[418838]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:20 compute-0 sshd-session[418837]: Received disconnect from 192.168.122.10 port 46044:11: disconnected by user
Dec 06 08:31:20 compute-0 sshd-session[418837]: Disconnected from user zuul 192.168.122.10 port 46044
Dec 06 08:31:20 compute-0 sshd-session[418833]: pam_unix(sshd:session): session closed for user zuul
Dec 06 08:31:20 compute-0 systemd[1]: session-65.scope: Deactivated successfully.
Dec 06 08:31:20 compute-0 systemd[1]: session-65.scope: Consumed 2min 53.559s CPU time, 1.1G memory peak, read 446.7M from disk, written 399.8M to disk.
Dec 06 08:31:20 compute-0 systemd-logind[798]: Session 65 logged out. Waiting for processes to exit.
Dec 06 08:31:20 compute-0 systemd-logind[798]: Removed session 65.
Dec 06 08:31:20 compute-0 sshd-session[427487]: Accepted publickey for zuul from 192.168.122.10 port 57362 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 08:31:20 compute-0 systemd-logind[798]: New session 66 of user zuul.
Dec 06 08:31:20 compute-0 systemd[1]: Started Session 66 of User zuul.
Dec 06 08:31:20 compute-0 sshd-session[427487]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 08:31:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4182: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 111 op/s
Dec 06 08:31:20 compute-0 sudo[427492]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-12-06-setrzsa.tar.xz
Dec 06 08:31:20 compute-0 sudo[427492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 08:31:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:20.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:20 compute-0 sudo[427492]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:20 compute-0 sshd-session[427491]: Received disconnect from 192.168.122.10 port 57362:11: disconnected by user
Dec 06 08:31:20 compute-0 sshd-session[427491]: Disconnected from user zuul 192.168.122.10 port 57362
Dec 06 08:31:20 compute-0 sshd-session[427487]: pam_unix(sshd:session): session closed for user zuul
Dec 06 08:31:20 compute-0 systemd[1]: session-66.scope: Deactivated successfully.
Dec 06 08:31:20 compute-0 systemd-logind[798]: Session 66 logged out. Waiting for processes to exit.
Dec 06 08:31:20 compute-0 systemd-logind[798]: Removed session 66.
Dec 06 08:31:20 compute-0 sshd-session[427517]: Accepted publickey for zuul from 192.168.122.10 port 57374 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 08:31:20 compute-0 systemd-logind[798]: New session 67 of user zuul.
Dec 06 08:31:20 compute-0 systemd[1]: Started Session 67 of User zuul.
Dec 06 08:31:20 compute-0 sshd-session[427517]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 08:31:20 compute-0 sudo[427521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Dec 06 08:31:20 compute-0 sudo[427521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 08:31:20 compute-0 sudo[427521]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:20 compute-0 sshd-session[427520]: Received disconnect from 192.168.122.10 port 57374:11: disconnected by user
Dec 06 08:31:20 compute-0 sshd-session[427520]: Disconnected from user zuul 192.168.122.10 port 57374
Dec 06 08:31:20 compute-0 sshd-session[427517]: pam_unix(sshd:session): session closed for user zuul
Dec 06 08:31:20 compute-0 systemd[1]: session-67.scope: Deactivated successfully.
Dec 06 08:31:20 compute-0 systemd-logind[798]: Session 67 logged out. Waiting for processes to exit.
Dec 06 08:31:20 compute-0 systemd-logind[798]: Removed session 67.
Dec 06 08:31:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:21.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:21 compute-0 ceph-mon[74339]: pgmap v4182: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 111 op/s
Dec 06 08:31:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4183: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 08:31:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:22.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:23 compute-0 nova_compute[251992]: 2025-12-06 08:31:23.172 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:23.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:23 compute-0 ceph-mon[74339]: pgmap v4183: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Dec 06 08:31:23 compute-0 nova_compute[251992]: 2025-12-06 08:31:23.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:23 compute-0 nova_compute[251992]: 2025-12-06 08:31:23.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:31:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:31:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:31:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:31:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:31:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:31:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4184: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 69 KiB/s rd, 0 B/s wr, 115 op/s
Dec 06 08:31:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:24.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:24 compute-0 nova_compute[251992]: 2025-12-06 08:31:24.725 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:25.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4185: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Dec 06 08:31:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:26.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:26 compute-0 ceph-mon[74339]: pgmap v4184: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 69 KiB/s rd, 0 B/s wr, 115 op/s
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:31:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:31:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:27.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:27 compute-0 ceph-mon[74339]: pgmap v4185: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Dec 06 08:31:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:31:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:31:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:31:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:31:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:31:27 compute-0 nova_compute[251992]: 2025-12-06 08:31:27.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:31:28 compute-0 nova_compute[251992]: 2025-12-06 08:31:28.174 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4186: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 0 B/s wr, 95 op/s
Dec 06 08:31:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:28.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:29.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:29 compute-0 sudo[427551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:29 compute-0 sudo[427551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:29 compute-0 sudo[427551]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:29 compute-0 sudo[427576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:29 compute-0 sudo[427576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:29 compute-0 sudo[427576]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:29 compute-0 ceph-mon[74339]: pgmap v4186: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 0 B/s wr, 95 op/s
Dec 06 08:31:29 compute-0 nova_compute[251992]: 2025-12-06 08:31:29.725 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4187: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 0 B/s wr, 102 op/s
Dec 06 08:31:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:30.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:31.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:31 compute-0 ceph-mon[74339]: pgmap v4187: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 0 B/s wr, 102 op/s
Dec 06 08:31:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4188: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Dec 06 08:31:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:32.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:33 compute-0 nova_compute[251992]: 2025-12-06 08:31:33.224 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:33.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:33 compute-0 ceph-mon[74339]: pgmap v4188: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 64 KiB/s rd, 0 B/s wr, 106 op/s
Dec 06 08:31:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4189: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 81 op/s
Dec 06 08:31:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:34.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:34 compute-0 nova_compute[251992]: 2025-12-06 08:31:34.728 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:35 compute-0 ceph-mon[74339]: pgmap v4189: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 81 op/s
Dec 06 08:31:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:35.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4190: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 81 op/s
Dec 06 08:31:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:36.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:37.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:37 compute-0 ceph-mon[74339]: pgmap v4190: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 0 B/s wr, 81 op/s
Dec 06 08:31:38 compute-0 nova_compute[251992]: 2025-12-06 08:31:38.227 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4191: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Dec 06 08:31:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:38.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:38 compute-0 podman[427606]: 2025-12-06 08:31:38.429202422 +0000 UTC m=+0.090882384 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 08:31:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:39.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:39 compute-0 ceph-mon[74339]: pgmap v4191: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Dec 06 08:31:39 compute-0 nova_compute[251992]: 2025-12-06 08:31:39.729 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4192: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Dec 06 08:31:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:40.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:41.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:41 compute-0 ceph-mon[74339]: pgmap v4192: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Dec 06 08:31:41 compute-0 sudo[427634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:41 compute-0 sudo[427634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:41 compute-0 sudo[427634]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:41 compute-0 sudo[427659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:31:41 compute-0 sudo[427659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:41 compute-0 sudo[427659]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:41 compute-0 sudo[427684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:41 compute-0 sudo[427684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:41 compute-0 sudo[427684]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:42 compute-0 sudo[427709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 08:31:42 compute-0 sudo[427709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4193: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 06 08:31:42 compute-0 sudo[427709]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:31:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:42.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:31:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:42 compute-0 sudo[427757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:42 compute-0 sudo[427757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:42 compute-0 sudo[427757]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:42 compute-0 sudo[427782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:31:42 compute-0 sudo[427782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:42 compute-0 sudo[427782]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:42 compute-0 sudo[427807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:42 compute-0 sudo[427807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:42 compute-0 sudo[427807]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:42 compute-0 sudo[427832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:31:42 compute-0 sudo[427832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:31:43 compute-0 sudo[427832]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:31:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:31:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:31:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:31:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:31:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:31:43 compute-0 sudo[427889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:43 compute-0 sudo[427889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:43 compute-0 sudo[427889]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:43 compute-0 sudo[427914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:31:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:43.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:43 compute-0 sudo[427914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:43 compute-0 nova_compute[251992]: 2025-12-06 08:31:43.268 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:43 compute-0 sudo[427914]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:43 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:31:43 compute-0 sudo[427939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:43 compute-0 sudo[427939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:43 compute-0 sudo[427939]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:43 compute-0 sudo[427964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- inventory --format=json-pretty --filter-for-batch
Dec 06 08:31:43 compute-0 sudo[427964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:43 compute-0 podman[428030]: 2025-12-06 08:31:43.685380866 +0000 UTC m=+0.043372761 container create ac919d8172d13a30f3ff2320536de0e84c28df8670b70794d2a67dde41fc0adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:31:43 compute-0 systemd[1]: Started libpod-conmon-ac919d8172d13a30f3ff2320536de0e84c28df8670b70794d2a67dde41fc0adf.scope.
Dec 06 08:31:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:31:43 compute-0 podman[428030]: 2025-12-06 08:31:43.75998265 +0000 UTC m=+0.117974565 container init ac919d8172d13a30f3ff2320536de0e84c28df8670b70794d2a67dde41fc0adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 08:31:43 compute-0 podman[428030]: 2025-12-06 08:31:43.667557945 +0000 UTC m=+0.025549860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:31:43 compute-0 podman[428030]: 2025-12-06 08:31:43.76850072 +0000 UTC m=+0.126492615 container start ac919d8172d13a30f3ff2320536de0e84c28df8670b70794d2a67dde41fc0adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 08:31:43 compute-0 podman[428030]: 2025-12-06 08:31:43.773402462 +0000 UTC m=+0.131394367 container attach ac919d8172d13a30f3ff2320536de0e84c28df8670b70794d2a67dde41fc0adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 06 08:31:43 compute-0 fervent_jepsen[428046]: 167 167
Dec 06 08:31:43 compute-0 systemd[1]: libpod-ac919d8172d13a30f3ff2320536de0e84c28df8670b70794d2a67dde41fc0adf.scope: Deactivated successfully.
Dec 06 08:31:43 compute-0 conmon[428046]: conmon ac919d8172d13a30f3ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac919d8172d13a30f3ff2320536de0e84c28df8670b70794d2a67dde41fc0adf.scope/container/memory.events
Dec 06 08:31:43 compute-0 podman[428030]: 2025-12-06 08:31:43.779162228 +0000 UTC m=+0.137154123 container died ac919d8172d13a30f3ff2320536de0e84c28df8670b70794d2a67dde41fc0adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Dec 06 08:31:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-accd27b2175fdb4104b788c824b25a24d9ddfda1d216d6877b724aa949e28fbf-merged.mount: Deactivated successfully.
Dec 06 08:31:43 compute-0 podman[428030]: 2025-12-06 08:31:43.820936725 +0000 UTC m=+0.178928620 container remove ac919d8172d13a30f3ff2320536de0e84c28df8670b70794d2a67dde41fc0adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:31:43 compute-0 systemd[1]: libpod-conmon-ac919d8172d13a30f3ff2320536de0e84c28df8670b70794d2a67dde41fc0adf.scope: Deactivated successfully.
Dec 06 08:31:44 compute-0 podman[428070]: 2025-12-06 08:31:44.017061609 +0000 UTC m=+0.065484019 container create b9c3f2cf87887708e61aec4b3c05b22dbcbe9352cc2c2e9c176ab3d56713d2ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_napier, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 08:31:44 compute-0 systemd[1]: Started libpod-conmon-b9c3f2cf87887708e61aec4b3c05b22dbcbe9352cc2c2e9c176ab3d56713d2ee.scope.
Dec 06 08:31:44 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275a737683096166af0e30d30b889e2525751b3401126c3e47cdb171fd443a47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275a737683096166af0e30d30b889e2525751b3401126c3e47cdb171fd443a47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275a737683096166af0e30d30b889e2525751b3401126c3e47cdb171fd443a47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:44 compute-0 podman[428070]: 2025-12-06 08:31:43.995062645 +0000 UTC m=+0.043485095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:31:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/275a737683096166af0e30d30b889e2525751b3401126c3e47cdb171fd443a47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:44 compute-0 podman[428070]: 2025-12-06 08:31:44.097250072 +0000 UTC m=+0.145672492 container init b9c3f2cf87887708e61aec4b3c05b22dbcbe9352cc2c2e9c176ab3d56713d2ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_napier, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 08:31:44 compute-0 podman[428070]: 2025-12-06 08:31:44.103673076 +0000 UTC m=+0.152095476 container start b9c3f2cf87887708e61aec4b3c05b22dbcbe9352cc2c2e9c176ab3d56713d2ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:31:44 compute-0 podman[428070]: 2025-12-06 08:31:44.107202251 +0000 UTC m=+0.155624651 container attach b9c3f2cf87887708e61aec4b3c05b22dbcbe9352cc2c2e9c176ab3d56713d2ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_napier, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 08:31:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:31:44 compute-0 ceph-mon[74339]: pgmap v4193: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 06 08:31:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:44 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:44 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:31:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4194: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:44 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:44.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:44 compute-0 nova_compute[251992]: 2025-12-06 08:31:44.731 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:45 compute-0 ceph-mon[74339]: pgmap v4194: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:45 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:45.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:45 compute-0 adoring_napier[428087]: [
Dec 06 08:31:45 compute-0 adoring_napier[428087]:     {
Dec 06 08:31:45 compute-0 adoring_napier[428087]:         "available": false,
Dec 06 08:31:45 compute-0 adoring_napier[428087]:         "ceph_device": false,
Dec 06 08:31:45 compute-0 adoring_napier[428087]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:         "lsm_data": {},
Dec 06 08:31:45 compute-0 adoring_napier[428087]:         "lvs": [],
Dec 06 08:31:45 compute-0 adoring_napier[428087]:         "path": "/dev/sr0",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:         "rejected_reasons": [
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "Has a FileSystem",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "Insufficient space (<5GB)"
Dec 06 08:31:45 compute-0 adoring_napier[428087]:         ],
Dec 06 08:31:45 compute-0 adoring_napier[428087]:         "sys_api": {
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "actuators": null,
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "device_nodes": "sr0",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "devname": "sr0",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "human_readable_size": "482.00 KB",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "id_bus": "ata",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "model": "QEMU DVD-ROM",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "nr_requests": "2",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "parent": "/dev/sr0",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "partitions": {},
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "path": "/dev/sr0",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "removable": "1",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "rev": "2.5+",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "ro": "0",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "rotational": "1",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "sas_address": "",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "sas_device_handle": "",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "scheduler_mode": "mq-deadline",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "sectors": 0,
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "sectorsize": "2048",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "size": 493568.0,
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "support_discard": "2048",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "type": "disk",
Dec 06 08:31:45 compute-0 adoring_napier[428087]:             "vendor": "QEMU"
Dec 06 08:31:45 compute-0 adoring_napier[428087]:         }
Dec 06 08:31:45 compute-0 adoring_napier[428087]:     }
Dec 06 08:31:45 compute-0 adoring_napier[428087]: ]
Dec 06 08:31:45 compute-0 systemd[1]: libpod-b9c3f2cf87887708e61aec4b3c05b22dbcbe9352cc2c2e9c176ab3d56713d2ee.scope: Deactivated successfully.
Dec 06 08:31:45 compute-0 systemd[1]: libpod-b9c3f2cf87887708e61aec4b3c05b22dbcbe9352cc2c2e9c176ab3d56713d2ee.scope: Consumed 1.212s CPU time.
Dec 06 08:31:45 compute-0 podman[428070]: 2025-12-06 08:31:45.310140348 +0000 UTC m=+1.358562758 container died b9c3f2cf87887708e61aec4b3c05b22dbcbe9352cc2c2e9c176ab3d56713d2ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_napier, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:31:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-275a737683096166af0e30d30b889e2525751b3401126c3e47cdb171fd443a47-merged.mount: Deactivated successfully.
Dec 06 08:31:45 compute-0 podman[428070]: 2025-12-06 08:31:45.388591946 +0000 UTC m=+1.437014346 container remove b9c3f2cf87887708e61aec4b3c05b22dbcbe9352cc2c2e9c176ab3d56713d2ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_napier, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:31:45 compute-0 systemd[1]: libpod-conmon-b9c3f2cf87887708e61aec4b3c05b22dbcbe9352cc2c2e9c176ab3d56713d2ee.scope: Deactivated successfully.
Dec 06 08:31:45 compute-0 sudo[427964]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:31:45 compute-0 podman[429299]: 2025-12-06 08:31:45.437934408 +0000 UTC m=+0.083847795 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:31:45 compute-0 podman[429306]: 2025-12-06 08:31:45.437598519 +0000 UTC m=+0.084340858 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:31:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:45 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:31:45 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4195: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:46.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:46 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:47.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:31:47 compute-0 ceph-mon[74339]: pgmap v4195: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:31:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:31:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:31:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:31:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:31:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:31:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b38dfcd8-ca19-4df3-b7d8-904edf8160a2 does not exist
Dec 06 08:31:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 19d3df7e-d9c3-4eb1-b4a4-30648277880f does not exist
Dec 06 08:31:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 526adcdd-88ff-4009-ad77-d64728358580 does not exist
Dec 06 08:31:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:31:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:31:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:31:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:31:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:31:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:31:47 compute-0 sudo[429349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:47 compute-0 sudo[429349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:47 compute-0 sudo[429349]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:47 compute-0 sudo[429374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:31:47 compute-0 sudo[429374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:47 compute-0 sudo[429374]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:47 compute-0 sudo[429399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:47 compute-0 sudo[429399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:47 compute-0 sudo[429399]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:47 compute-0 sudo[429424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:31:47 compute-0 sudo[429424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:48 compute-0 podman[429490]: 2025-12-06 08:31:48.133047808 +0000 UTC m=+0.036982549 container create feb41b578d63c6bf6e0c5f5f301c9bc6bb11a9d89c7882fe96f875d23d90bcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bouman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 06 08:31:48 compute-0 systemd[1]: Started libpod-conmon-feb41b578d63c6bf6e0c5f5f301c9bc6bb11a9d89c7882fe96f875d23d90bcaf.scope.
Dec 06 08:31:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:31:48 compute-0 podman[429490]: 2025-12-06 08:31:48.117365444 +0000 UTC m=+0.021300205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:31:48 compute-0 podman[429490]: 2025-12-06 08:31:48.214176017 +0000 UTC m=+0.118110788 container init feb41b578d63c6bf6e0c5f5f301c9bc6bb11a9d89c7882fe96f875d23d90bcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bouman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:31:48 compute-0 podman[429490]: 2025-12-06 08:31:48.220417746 +0000 UTC m=+0.124352487 container start feb41b578d63c6bf6e0c5f5f301c9bc6bb11a9d89c7882fe96f875d23d90bcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:31:48 compute-0 podman[429490]: 2025-12-06 08:31:48.223443037 +0000 UTC m=+0.127377778 container attach feb41b578d63c6bf6e0c5f5f301c9bc6bb11a9d89c7882fe96f875d23d90bcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:31:48 compute-0 fervent_bouman[429506]: 167 167
Dec 06 08:31:48 compute-0 systemd[1]: libpod-feb41b578d63c6bf6e0c5f5f301c9bc6bb11a9d89c7882fe96f875d23d90bcaf.scope: Deactivated successfully.
Dec 06 08:31:48 compute-0 conmon[429506]: conmon feb41b578d63c6bf6e0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-feb41b578d63c6bf6e0c5f5f301c9bc6bb11a9d89c7882fe96f875d23d90bcaf.scope/container/memory.events
Dec 06 08:31:48 compute-0 podman[429490]: 2025-12-06 08:31:48.227546428 +0000 UTC m=+0.131481179 container died feb41b578d63c6bf6e0c5f5f301c9bc6bb11a9d89c7882fe96f875d23d90bcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:31:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4196: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e228c0b2b5c0339c8277d4ae6c7180e4b6a3cae48f51771c82b7e1604687def8-merged.mount: Deactivated successfully.
Dec 06 08:31:48 compute-0 podman[429490]: 2025-12-06 08:31:48.26021986 +0000 UTC m=+0.164154601 container remove feb41b578d63c6bf6e0c5f5f301c9bc6bb11a9d89c7882fe96f875d23d90bcaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:31:48 compute-0 nova_compute[251992]: 2025-12-06 08:31:48.271 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:48 compute-0 systemd[1]: libpod-conmon-feb41b578d63c6bf6e0c5f5f301c9bc6bb11a9d89c7882fe96f875d23d90bcaf.scope: Deactivated successfully.
Dec 06 08:31:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:48.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:48 compute-0 podman[429531]: 2025-12-06 08:31:48.419592571 +0000 UTC m=+0.047652557 container create 2f97e292e28a0760a9bf34bdfeff931530657a4ec5c21e4450e09e7d1682310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bassi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 06 08:31:48 compute-0 systemd[1]: Started libpod-conmon-2f97e292e28a0760a9bf34bdfeff931530657a4ec5c21e4450e09e7d1682310a.scope.
Dec 06 08:31:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9851967bc09d80f9d8cda9d0f02b6a9aaf69329587a7d9d321116acfe34cb4e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9851967bc09d80f9d8cda9d0f02b6a9aaf69329587a7d9d321116acfe34cb4e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:48 compute-0 podman[429531]: 2025-12-06 08:31:48.398871192 +0000 UTC m=+0.026931218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9851967bc09d80f9d8cda9d0f02b6a9aaf69329587a7d9d321116acfe34cb4e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9851967bc09d80f9d8cda9d0f02b6a9aaf69329587a7d9d321116acfe34cb4e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9851967bc09d80f9d8cda9d0f02b6a9aaf69329587a7d9d321116acfe34cb4e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:48 compute-0 podman[429531]: 2025-12-06 08:31:48.506426105 +0000 UTC m=+0.134486081 container init 2f97e292e28a0760a9bf34bdfeff931530657a4ec5c21e4450e09e7d1682310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bassi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:31:48 compute-0 podman[429531]: 2025-12-06 08:31:48.516295052 +0000 UTC m=+0.144355038 container start 2f97e292e28a0760a9bf34bdfeff931530657a4ec5c21e4450e09e7d1682310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bassi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 08:31:48 compute-0 podman[429531]: 2025-12-06 08:31:48.519983711 +0000 UTC m=+0.148043717 container attach 2f97e292e28a0760a9bf34bdfeff931530657a4ec5c21e4450e09e7d1682310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:31:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:31:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:31:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:31:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:31:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:31:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:49.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:49 compute-0 jolly_bassi[429547]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:31:49 compute-0 jolly_bassi[429547]: --> relative data size: 1.0
Dec 06 08:31:49 compute-0 jolly_bassi[429547]: --> All data devices are unavailable
Dec 06 08:31:49 compute-0 systemd[1]: libpod-2f97e292e28a0760a9bf34bdfeff931530657a4ec5c21e4450e09e7d1682310a.scope: Deactivated successfully.
Dec 06 08:31:49 compute-0 podman[429531]: 2025-12-06 08:31:49.319327275 +0000 UTC m=+0.947387251 container died 2f97e292e28a0760a9bf34bdfeff931530657a4ec5c21e4450e09e7d1682310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bassi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 06 08:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9851967bc09d80f9d8cda9d0f02b6a9aaf69329587a7d9d321116acfe34cb4e1-merged.mount: Deactivated successfully.
Dec 06 08:31:49 compute-0 podman[429531]: 2025-12-06 08:31:49.376500858 +0000 UTC m=+1.004560854 container remove 2f97e292e28a0760a9bf34bdfeff931530657a4ec5c21e4450e09e7d1682310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bassi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:31:49 compute-0 systemd[1]: libpod-conmon-2f97e292e28a0760a9bf34bdfeff931530657a4ec5c21e4450e09e7d1682310a.scope: Deactivated successfully.
Dec 06 08:31:49 compute-0 sudo[429424]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:49 compute-0 sudo[429576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:49 compute-0 sudo[429576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:49 compute-0 sudo[429576]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:49 compute-0 sudo[429601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:31:49 compute-0 sudo[429601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:49 compute-0 sudo[429601]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:49 compute-0 sudo[429626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:49 compute-0 sudo[429626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:49 compute-0 sudo[429626]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:49 compute-0 sudo[429651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:31:49 compute-0 sudo[429651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:49 compute-0 sudo[429657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:49 compute-0 sudo[429657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:49 compute-0 sudo[429657]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:49 compute-0 sudo[429701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:49 compute-0 sudo[429701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:49 compute-0 sudo[429701]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:49 compute-0 nova_compute[251992]: 2025-12-06 08:31:49.733 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:49 compute-0 ceph-mon[74339]: pgmap v4196: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:49 compute-0 podman[429767]: 2025-12-06 08:31:49.930461059 +0000 UTC m=+0.042994141 container create 79d63e55d72d9abf4fdd563684bf514b9f8f28089670924bd48a845d314b30d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bose, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 08:31:49 compute-0 systemd[1]: Started libpod-conmon-79d63e55d72d9abf4fdd563684bf514b9f8f28089670924bd48a845d314b30d9.scope.
Dec 06 08:31:49 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:31:50 compute-0 podman[429767]: 2025-12-06 08:31:50.000018026 +0000 UTC m=+0.112551118 container init 79d63e55d72d9abf4fdd563684bf514b9f8f28089670924bd48a845d314b30d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bose, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:31:50 compute-0 podman[429767]: 2025-12-06 08:31:50.006937374 +0000 UTC m=+0.119470476 container start 79d63e55d72d9abf4fdd563684bf514b9f8f28089670924bd48a845d314b30d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bose, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:31:50 compute-0 podman[429767]: 2025-12-06 08:31:49.913279445 +0000 UTC m=+0.025812557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:31:50 compute-0 affectionate_bose[429783]: 167 167
Dec 06 08:31:50 compute-0 systemd[1]: libpod-79d63e55d72d9abf4fdd563684bf514b9f8f28089670924bd48a845d314b30d9.scope: Deactivated successfully.
Dec 06 08:31:50 compute-0 conmon[429783]: conmon 79d63e55d72d9abf4fdd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79d63e55d72d9abf4fdd563684bf514b9f8f28089670924bd48a845d314b30d9.scope/container/memory.events
Dec 06 08:31:50 compute-0 podman[429767]: 2025-12-06 08:31:50.011036624 +0000 UTC m=+0.123569706 container attach 79d63e55d72d9abf4fdd563684bf514b9f8f28089670924bd48a845d314b30d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bose, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:31:50 compute-0 podman[429767]: 2025-12-06 08:31:50.01386026 +0000 UTC m=+0.126393342 container died 79d63e55d72d9abf4fdd563684bf514b9f8f28089670924bd48a845d314b30d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 08:31:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-796d935f5fba51234b94044f2fa5505626305b259867350a53fe5196e71a4d7b-merged.mount: Deactivated successfully.
Dec 06 08:31:50 compute-0 podman[429767]: 2025-12-06 08:31:50.056386908 +0000 UTC m=+0.168919990 container remove 79d63e55d72d9abf4fdd563684bf514b9f8f28089670924bd48a845d314b30d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bose, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 08:31:50 compute-0 systemd[1]: libpod-conmon-79d63e55d72d9abf4fdd563684bf514b9f8f28089670924bd48a845d314b30d9.scope: Deactivated successfully.
Dec 06 08:31:50 compute-0 podman[429808]: 2025-12-06 08:31:50.218507263 +0000 UTC m=+0.046874265 container create bd25124a8adf1992df7d91a5a4c0c47acf1ec57a71bb73a96b163a1bf16e0a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 08:31:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4197: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:50 compute-0 systemd[1]: Started libpod-conmon-bd25124a8adf1992df7d91a5a4c0c47acf1ec57a71bb73a96b163a1bf16e0a62.scope.
Dec 06 08:31:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091a8a2f9f312a1599c50bbf23f890bb9bfca9a2f02e7300a56aeb93e3ffe77a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091a8a2f9f312a1599c50bbf23f890bb9bfca9a2f02e7300a56aeb93e3ffe77a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091a8a2f9f312a1599c50bbf23f890bb9bfca9a2f02e7300a56aeb93e3ffe77a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091a8a2f9f312a1599c50bbf23f890bb9bfca9a2f02e7300a56aeb93e3ffe77a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:50 compute-0 podman[429808]: 2025-12-06 08:31:50.196940602 +0000 UTC m=+0.025307634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:31:50 compute-0 podman[429808]: 2025-12-06 08:31:50.29618672 +0000 UTC m=+0.124553742 container init bd25124a8adf1992df7d91a5a4c0c47acf1ec57a71bb73a96b163a1bf16e0a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:31:50 compute-0 podman[429808]: 2025-12-06 08:31:50.302765747 +0000 UTC m=+0.131132749 container start bd25124a8adf1992df7d91a5a4c0c47acf1ec57a71bb73a96b163a1bf16e0a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_stonebraker, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:31:50 compute-0 podman[429808]: 2025-12-06 08:31:50.30581969 +0000 UTC m=+0.134186732 container attach bd25124a8adf1992df7d91a5a4c0c47acf1ec57a71bb73a96b163a1bf16e0a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_stonebraker, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec 06 08:31:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:50.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]: {
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:     "0": [
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:         {
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             "devices": [
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "/dev/loop3"
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             ],
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             "lv_name": "ceph_lv0",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             "lv_size": "7511998464",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             "name": "ceph_lv0",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             "tags": {
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "ceph.cluster_name": "ceph",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "ceph.crush_device_class": "",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "ceph.encrypted": "0",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "ceph.osd_id": "0",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "ceph.type": "block",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:                 "ceph.vdo": "0"
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             },
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             "type": "block",
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:             "vg_name": "ceph_vg0"
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:         }
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]:     ]
Dec 06 08:31:51 compute-0 admiring_stonebraker[429824]: }
Dec 06 08:31:51 compute-0 systemd[1]: libpod-bd25124a8adf1992df7d91a5a4c0c47acf1ec57a71bb73a96b163a1bf16e0a62.scope: Deactivated successfully.
Dec 06 08:31:51 compute-0 podman[429808]: 2025-12-06 08:31:51.097163788 +0000 UTC m=+0.925530780 container died bd25124a8adf1992df7d91a5a4c0c47acf1ec57a71bb73a96b163a1bf16e0a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_stonebraker, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 08:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-091a8a2f9f312a1599c50bbf23f890bb9bfca9a2f02e7300a56aeb93e3ffe77a-merged.mount: Deactivated successfully.
Dec 06 08:31:51 compute-0 podman[429808]: 2025-12-06 08:31:51.145465161 +0000 UTC m=+0.973832163 container remove bd25124a8adf1992df7d91a5a4c0c47acf1ec57a71bb73a96b163a1bf16e0a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_stonebraker, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 08:31:51 compute-0 systemd[1]: libpod-conmon-bd25124a8adf1992df7d91a5a4c0c47acf1ec57a71bb73a96b163a1bf16e0a62.scope: Deactivated successfully.
Dec 06 08:31:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:51 compute-0 sudo[429651]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:51 compute-0 sudo[429847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:51 compute-0 sudo[429847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:51 compute-0 sudo[429847]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:51.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:51 compute-0 sudo[429872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:31:51 compute-0 sudo[429872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:51 compute-0 sudo[429872]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:51 compute-0 sudo[429897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:51 compute-0 sudo[429897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:51 compute-0 sudo[429897]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:51 compute-0 sudo[429922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:31:51 compute-0 sudo[429922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:51 compute-0 podman[429989]: 2025-12-06 08:31:51.706371679 +0000 UTC m=+0.037499012 container create 040337a522afb0e5b700ba156f6b9e0fc493046576e31f5653148c64404c6e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:31:51 compute-0 systemd[1]: Started libpod-conmon-040337a522afb0e5b700ba156f6b9e0fc493046576e31f5653148c64404c6e56.scope.
Dec 06 08:31:51 compute-0 ceph-mon[74339]: pgmap v4197: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:51 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3564539784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:51 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:31:51 compute-0 podman[429989]: 2025-12-06 08:31:51.689472283 +0000 UTC m=+0.020599636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:31:51 compute-0 podman[429989]: 2025-12-06 08:31:51.786623616 +0000 UTC m=+0.117750959 container init 040337a522afb0e5b700ba156f6b9e0fc493046576e31f5653148c64404c6e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 08:31:51 compute-0 podman[429989]: 2025-12-06 08:31:51.795997449 +0000 UTC m=+0.127124802 container start 040337a522afb0e5b700ba156f6b9e0fc493046576e31f5653148c64404c6e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec 06 08:31:51 compute-0 podman[429989]: 2025-12-06 08:31:51.800281675 +0000 UTC m=+0.131409018 container attach 040337a522afb0e5b700ba156f6b9e0fc493046576e31f5653148c64404c6e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:31:51 compute-0 wizardly_heyrovsky[430005]: 167 167
Dec 06 08:31:51 compute-0 systemd[1]: libpod-040337a522afb0e5b700ba156f6b9e0fc493046576e31f5653148c64404c6e56.scope: Deactivated successfully.
Dec 06 08:31:51 compute-0 podman[429989]: 2025-12-06 08:31:51.802508985 +0000 UTC m=+0.133636308 container died 040337a522afb0e5b700ba156f6b9e0fc493046576e31f5653148c64404c6e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ed284651865f294453007983649c6fca7d610d4ea88933edd129e79239fadca-merged.mount: Deactivated successfully.
Dec 06 08:31:51 compute-0 podman[429989]: 2025-12-06 08:31:51.84348723 +0000 UTC m=+0.174614563 container remove 040337a522afb0e5b700ba156f6b9e0fc493046576e31f5653148c64404c6e56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:31:51 compute-0 systemd[1]: libpod-conmon-040337a522afb0e5b700ba156f6b9e0fc493046576e31f5653148c64404c6e56.scope: Deactivated successfully.
Dec 06 08:31:52 compute-0 podman[430030]: 2025-12-06 08:31:52.036205192 +0000 UTC m=+0.058973493 container create 7428603eb576b09165a3dc1aec08035798a2d5bdcd1c102a39b14d4fb67c2fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 08:31:52 compute-0 systemd[1]: Started libpod-conmon-7428603eb576b09165a3dc1aec08035798a2d5bdcd1c102a39b14d4fb67c2fb0.scope.
Dec 06 08:31:52 compute-0 podman[430030]: 2025-12-06 08:31:52.013216772 +0000 UTC m=+0.035985103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:31:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6480f94445574de35031a61e59b0b58e906f1e3df71ae6d212fb73ad3b00ed4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6480f94445574de35031a61e59b0b58e906f1e3df71ae6d212fb73ad3b00ed4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6480f94445574de35031a61e59b0b58e906f1e3df71ae6d212fb73ad3b00ed4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6480f94445574de35031a61e59b0b58e906f1e3df71ae6d212fb73ad3b00ed4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:52 compute-0 podman[430030]: 2025-12-06 08:31:52.13950088 +0000 UTC m=+0.162269211 container init 7428603eb576b09165a3dc1aec08035798a2d5bdcd1c102a39b14d4fb67c2fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_knuth, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:31:52 compute-0 podman[430030]: 2025-12-06 08:31:52.148201725 +0000 UTC m=+0.170970016 container start 7428603eb576b09165a3dc1aec08035798a2d5bdcd1c102a39b14d4fb67c2fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_knuth, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 08:31:52 compute-0 podman[430030]: 2025-12-06 08:31:52.151986987 +0000 UTC m=+0.174755288 container attach 7428603eb576b09165a3dc1aec08035798a2d5bdcd1c102a39b14d4fb67c2fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_knuth, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:31:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4198: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:31:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:52.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:31:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2614627506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:31:52 compute-0 ceph-mon[74339]: pgmap v4198: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:52 compute-0 dazzling_knuth[430048]: {
Dec 06 08:31:52 compute-0 dazzling_knuth[430048]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:31:52 compute-0 dazzling_knuth[430048]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:31:52 compute-0 dazzling_knuth[430048]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:31:52 compute-0 dazzling_knuth[430048]:         "osd_id": 0,
Dec 06 08:31:52 compute-0 dazzling_knuth[430048]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:31:52 compute-0 dazzling_knuth[430048]:         "type": "bluestore"
Dec 06 08:31:52 compute-0 dazzling_knuth[430048]:     }
Dec 06 08:31:52 compute-0 dazzling_knuth[430048]: }
Dec 06 08:31:52 compute-0 systemd[1]: libpod-7428603eb576b09165a3dc1aec08035798a2d5bdcd1c102a39b14d4fb67c2fb0.scope: Deactivated successfully.
Dec 06 08:31:52 compute-0 podman[430030]: 2025-12-06 08:31:52.99946327 +0000 UTC m=+1.022231591 container died 7428603eb576b09165a3dc1aec08035798a2d5bdcd1c102a39b14d4fb67c2fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_knuth, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 08:31:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6480f94445574de35031a61e59b0b58e906f1e3df71ae6d212fb73ad3b00ed4a-merged.mount: Deactivated successfully.
Dec 06 08:31:53 compute-0 podman[430030]: 2025-12-06 08:31:53.055424101 +0000 UTC m=+1.078192402 container remove 7428603eb576b09165a3dc1aec08035798a2d5bdcd1c102a39b14d4fb67c2fb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:31:53 compute-0 systemd[1]: libpod-conmon-7428603eb576b09165a3dc1aec08035798a2d5bdcd1c102a39b14d4fb67c2fb0.scope: Deactivated successfully.
Dec 06 08:31:53 compute-0 sudo[429922]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:31:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:31:53 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:53 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5e44a655-130c-4276-8bdb-8ada644c4863 does not exist
Dec 06 08:31:53 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 54768b1e-e1fe-486b-984a-8f1df128d372 does not exist
Dec 06 08:31:53 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev da660737-f109-41e8-ab94-572b89f1fb7c does not exist
Dec 06 08:31:53 compute-0 sudo[430081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:31:53 compute-0 sudo[430081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:53 compute-0 sudo[430081]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:53 compute-0 sudo[430106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:31:53 compute-0 sudo[430106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:31:53 compute-0 sudo[430106]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:53 compute-0 nova_compute[251992]: 2025-12-06 08:31:53.273 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:53.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4199: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:54 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:31:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:31:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:54.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:31:54 compute-0 nova_compute[251992]: 2025-12-06 08:31:54.787 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:55.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:55 compute-0 ceph-mon[74339]: pgmap v4199: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:31:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4200: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:56.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:57.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:57 compute-0 ceph-mon[74339]: pgmap v4200: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4201: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:58 compute-0 nova_compute[251992]: 2025-12-06 08:31:58.345 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:31:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:31:58.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:58 compute-0 ceph-mon[74339]: pgmap v4201: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:31:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:31:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:31:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:31:59.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:31:59 compute-0 nova_compute[251992]: 2025-12-06 08:31:59.790 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4202: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:00.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:01.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:01 compute-0 ceph-mon[74339]: pgmap v4202: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4203: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:02.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:03.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:03 compute-0 nova_compute[251992]: 2025-12-06 08:32:03.395 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:03 compute-0 ceph-mon[74339]: pgmap v4203: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:32:03.908 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:32:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:32:03.909 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:32:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:32:03.909 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:32:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4204: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:04.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:04 compute-0 ceph-mon[74339]: pgmap v4204: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:04 compute-0 nova_compute[251992]: 2025-12-06 08:32:04.793 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:05.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.187383) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926187462, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 2379, "num_deletes": 257, "total_data_size": 4001372, "memory_usage": 4063184, "flush_reason": "Manual Compaction"}
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926224435, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 3884538, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84470, "largest_seqno": 86848, "table_properties": {"data_size": 3873046, "index_size": 7153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 28458, "raw_average_key_size": 22, "raw_value_size": 3848520, "raw_average_value_size": 2976, "num_data_blocks": 308, "num_entries": 1293, "num_filter_entries": 1293, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009734, "oldest_key_time": 1765009734, "file_creation_time": 1765009926, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 37097 microseconds, and 9276 cpu microseconds.
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.224484) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 3884538 bytes OK
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.224503) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.225815) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.225826) EVENT_LOG_v1 {"time_micros": 1765009926225823, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.225841) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 3990570, prev total WAL file size 3990570, number of live WAL files 2.
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.227065) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353334' seq:72057594037927935, type:22 .. '6C6F676D0033373837' seq:0, type:0; will stop at (end)
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(3793KB)], [191(12MB)]
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926227165, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 17048866, "oldest_snapshot_seqno": -1}
Dec 06 08:32:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4205: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:06.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 12437 keys, 16895758 bytes, temperature: kUnknown
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926363700, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 16895758, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16814690, "index_size": 48891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 328579, "raw_average_key_size": 26, "raw_value_size": 16596656, "raw_average_value_size": 1334, "num_data_blocks": 1868, "num_entries": 12437, "num_filter_entries": 12437, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765009926, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.364067) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 16895758 bytes
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.366203) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.7 rd, 123.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.6 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 12968, records dropped: 531 output_compression: NoCompression
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.366249) EVENT_LOG_v1 {"time_micros": 1765009926366220, "job": 120, "event": "compaction_finished", "compaction_time_micros": 136709, "compaction_time_cpu_micros": 40035, "output_level": 6, "num_output_files": 1, "total_output_size": 16895758, "num_input_records": 12968, "num_output_records": 12437, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926367497, "job": 120, "event": "table_file_deletion", "file_number": 193}
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009926371330, "job": 120, "event": "table_file_deletion", "file_number": 191}
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.226928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.371385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.371392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.371395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.371397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:06 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:06.371400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:07 compute-0 ceph-mon[74339]: pgmap v4205: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:07.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:07 compute-0 nova_compute[251992]: 2025-12-06 08:32:07.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:07 compute-0 nova_compute[251992]: 2025-12-06 08:32:07.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4206: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:08.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:08 compute-0 nova_compute[251992]: 2025-12-06 08:32:08.398 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:08 compute-0 nova_compute[251992]: 2025-12-06 08:32:08.875 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:32:08 compute-0 nova_compute[251992]: 2025-12-06 08:32:08.876 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:32:08 compute-0 nova_compute[251992]: 2025-12-06 08:32:08.877 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:32:08 compute-0 nova_compute[251992]: 2025-12-06 08:32:08.877 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:32:08 compute-0 nova_compute[251992]: 2025-12-06 08:32:08.878 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:32:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:09.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:32:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1861015745' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:32:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:32:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1861015745' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:32:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:32:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1069405486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:09 compute-0 nova_compute[251992]: 2025-12-06 08:32:09.399 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:32:09 compute-0 podman[430159]: 2025-12-06 08:32:09.504745722 +0000 UTC m=+0.156495305 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 08:32:09 compute-0 nova_compute[251992]: 2025-12-06 08:32:09.592 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:32:09 compute-0 nova_compute[251992]: 2025-12-06 08:32:09.593 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4016MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:32:09 compute-0 nova_compute[251992]: 2025-12-06 08:32:09.593 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:32:09 compute-0 nova_compute[251992]: 2025-12-06 08:32:09.593 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:32:09 compute-0 ceph-mon[74339]: pgmap v4206: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1861015745' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:32:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1861015745' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:32:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1069405486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:09 compute-0 sudo[430188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:09 compute-0 sudo[430188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:09 compute-0 sudo[430188]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:09 compute-0 nova_compute[251992]: 2025-12-06 08:32:09.795 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:09 compute-0 sudo[430213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:09 compute-0 sudo[430213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:09 compute-0 sudo[430213]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:09 compute-0 nova_compute[251992]: 2025-12-06 08:32:09.873 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:32:09 compute-0 nova_compute[251992]: 2025-12-06 08:32:09.873 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:32:10 compute-0 nova_compute[251992]: 2025-12-06 08:32:10.091 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:32:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4207: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:10.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:32:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2625983309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:10 compute-0 nova_compute[251992]: 2025-12-06 08:32:10.571 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:32:10 compute-0 nova_compute[251992]: 2025-12-06 08:32:10.576 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:32:10 compute-0 nova_compute[251992]: 2025-12-06 08:32:10.629 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:32:10 compute-0 nova_compute[251992]: 2025-12-06 08:32:10.630 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:32:10 compute-0 nova_compute[251992]: 2025-12-06 08:32:10.631 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:32:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2625983309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:11.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:11 compute-0 ceph-mon[74339]: pgmap v4207: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4208: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:12.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:32:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:32:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:32:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:32:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:32:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:32:13 compute-0 ceph-mon[74339]: pgmap v4208: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2621224397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:13.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:13 compute-0 nova_compute[251992]: 2025-12-06 08:32:13.402 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3688201196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4209: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:14.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:14 compute-0 nova_compute[251992]: 2025-12-06 08:32:14.632 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:14 compute-0 nova_compute[251992]: 2025-12-06 08:32:14.633 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:14 compute-0 nova_compute[251992]: 2025-12-06 08:32:14.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:14 compute-0 nova_compute[251992]: 2025-12-06 08:32:14.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:32:14 compute-0 nova_compute[251992]: 2025-12-06 08:32:14.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:32:14 compute-0 nova_compute[251992]: 2025-12-06 08:32:14.683 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:32:14 compute-0 nova_compute[251992]: 2025-12-06 08:32:14.799 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:15.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:15 compute-0 ceph-mon[74339]: pgmap v4209: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4210: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:16.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:16 compute-0 podman[430265]: 2025-12-06 08:32:16.425155451 +0000 UTC m=+0.077172953 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 08:32:16 compute-0 podman[430264]: 2025-12-06 08:32:16.448167702 +0000 UTC m=+0.101578482 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Dec 06 08:32:16 compute-0 nova_compute[251992]: 2025-12-06 08:32:16.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.003000080s ======
Dec 06 08:32:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:17.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec 06 08:32:17 compute-0 ceph-mon[74339]: pgmap v4210: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4211: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:18.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:18 compute-0 nova_compute[251992]: 2025-12-06 08:32:18.406 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.597333) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938597403, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 368, "num_deletes": 251, "total_data_size": 244197, "memory_usage": 252360, "flush_reason": "Manual Compaction"}
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938601301, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 241741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86849, "largest_seqno": 87216, "table_properties": {"data_size": 239516, "index_size": 388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5636, "raw_average_key_size": 18, "raw_value_size": 235083, "raw_average_value_size": 778, "num_data_blocks": 17, "num_entries": 302, "num_filter_entries": 302, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009927, "oldest_key_time": 1765009927, "file_creation_time": 1765009938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 4000 microseconds, and 1556 cpu microseconds.
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.601340) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 241741 bytes OK
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.601357) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.603010) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.603026) EVENT_LOG_v1 {"time_micros": 1765009938603022, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.603045) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 241790, prev total WAL file size 241790, number of live WAL files 2.
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.603471) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(236KB)], [194(16MB)]
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938603713, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 17137499, "oldest_snapshot_seqno": -1}
Dec 06 08:32:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:32:18
Dec 06 08:32:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:32:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:32:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root']
Dec 06 08:32:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 12229 keys, 15213515 bytes, temperature: kUnknown
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938722580, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 15213515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15135273, "index_size": 46577, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30597, "raw_key_size": 324961, "raw_average_key_size": 26, "raw_value_size": 14922271, "raw_average_value_size": 1220, "num_data_blocks": 1764, "num_entries": 12229, "num_filter_entries": 12229, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765009938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.722857) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 15213515 bytes
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.724461) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.1 rd, 128.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 16.1 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(133.8) write-amplify(62.9) OK, records in: 12739, records dropped: 510 output_compression: NoCompression
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.724475) EVENT_LOG_v1 {"time_micros": 1765009938724469, "job": 122, "event": "compaction_finished", "compaction_time_micros": 118896, "compaction_time_cpu_micros": 50773, "output_level": 6, "num_output_files": 1, "total_output_size": 15213515, "num_input_records": 12739, "num_output_records": 12229, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938724612, "job": 122, "event": "table_file_deletion", "file_number": 196}
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765009938727284, "job": 122, "event": "table_file_deletion", "file_number": 194}
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.603398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.727331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.727334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.727336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.727337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:18 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:32:18.727339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:32:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:19.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:19 compute-0 ceph-mon[74339]: pgmap v4211: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:19 compute-0 nova_compute[251992]: 2025-12-06 08:32:19.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:19 compute-0 nova_compute[251992]: 2025-12-06 08:32:19.727 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:19 compute-0 nova_compute[251992]: 2025-12-06 08:32:19.800 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4212: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:20.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:21.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:21 compute-0 ceph-mon[74339]: pgmap v4212: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4213: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:22.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:23.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:23 compute-0 nova_compute[251992]: 2025-12-06 08:32:23.409 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:23 compute-0 ceph-mon[74339]: pgmap v4213: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:32:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:32:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:32:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:32:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:32:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4214: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:24.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:24 compute-0 nova_compute[251992]: 2025-12-06 08:32:24.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:24 compute-0 nova_compute[251992]: 2025-12-06 08:32:24.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:32:24 compute-0 nova_compute[251992]: 2025-12-06 08:32:24.802 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:25.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:25 compute-0 ceph-mon[74339]: pgmap v4214: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4215: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:26.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:32:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:32:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:27.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:32:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:32:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:32:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:32:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:32:27 compute-0 ceph-mon[74339]: pgmap v4215: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4216: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:28.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:28 compute-0 nova_compute[251992]: 2025-12-06 08:32:28.411 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:29.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:29 compute-0 nova_compute[251992]: 2025-12-06 08:32:29.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:32:29 compute-0 nova_compute[251992]: 2025-12-06 08:32:29.804 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:29 compute-0 sudo[430307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:29 compute-0 sudo[430307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:29 compute-0 sudo[430307]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:29 compute-0 sudo[430332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:29 compute-0 ceph-mon[74339]: pgmap v4216: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:29 compute-0 sudo[430332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:29 compute-0 sudo[430332]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4217: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:30.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:31 compute-0 ceph-mon[74339]: pgmap v4217: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:31.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4218: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:32.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:33.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:33 compute-0 nova_compute[251992]: 2025-12-06 08:32:33.452 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:33 compute-0 ceph-mon[74339]: pgmap v4218: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4219: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:34.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:34 compute-0 nova_compute[251992]: 2025-12-06 08:32:34.806 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:34 compute-0 ceph-mon[74339]: pgmap v4219: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:35.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4220: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:36.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:37.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:37 compute-0 ceph-mon[74339]: pgmap v4220: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4221: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:38.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:38 compute-0 nova_compute[251992]: 2025-12-06 08:32:38.456 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:39.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:39 compute-0 ceph-mon[74339]: pgmap v4221: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:39 compute-0 nova_compute[251992]: 2025-12-06 08:32:39.808 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4222: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:40.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:40 compute-0 podman[430363]: 2025-12-06 08:32:40.484607269 +0000 UTC m=+0.128189061 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 08:32:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:41.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:41 compute-0 ceph-mon[74339]: pgmap v4222: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4223: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:42.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:42 compute-0 ceph-mon[74339]: pgmap v4223: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:32:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:32:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:32:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:32:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:32:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:32:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:43.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:43 compute-0 nova_compute[251992]: 2025-12-06 08:32:43.459 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4224: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:44.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:44 compute-0 nova_compute[251992]: 2025-12-06 08:32:44.810 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:45.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:45 compute-0 ceph-mon[74339]: pgmap v4224: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4225: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:46.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:47 compute-0 ceph-mon[74339]: pgmap v4225: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:47.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:47 compute-0 podman[430393]: 2025-12-06 08:32:47.454170235 +0000 UTC m=+0.093879875 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:32:47 compute-0 podman[430392]: 2025-12-06 08:32:47.478974184 +0000 UTC m=+0.120278947 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:32:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4226: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:48.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:48 compute-0 nova_compute[251992]: 2025-12-06 08:32:48.462 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:49 compute-0 ceph-mon[74339]: pgmap v4226: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:49.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:49 compute-0 nova_compute[251992]: 2025-12-06 08:32:49.812 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:50 compute-0 sudo[430430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:50 compute-0 sudo[430430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:50 compute-0 sudo[430430]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:50 compute-0 sudo[430455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:50 compute-0 sudo[430455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:50 compute-0 sudo[430455]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4227: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:50.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:51.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:51 compute-0 ceph-mon[74339]: pgmap v4227: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4228: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:52.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:53.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:53 compute-0 nova_compute[251992]: 2025-12-06 08:32:53.465 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:53 compute-0 sudo[430482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:53 compute-0 sudo[430482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:53 compute-0 sudo[430482]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:53 compute-0 sudo[430507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:32:53 compute-0 sudo[430507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:53 compute-0 sudo[430507]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:53 compute-0 sudo[430532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:53 compute-0 sudo[430532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:53 compute-0 sudo[430532]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:53 compute-0 sudo[430557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:32:53 compute-0 sudo[430557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2631169994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:54 compute-0 ceph-mon[74339]: pgmap v4228: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4229: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:54 compute-0 sudo[430557]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:54.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:32:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:32:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:32:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:32:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 250df61f-a91c-4b17-9260-557a1c026a75 does not exist
Dec 06 08:32:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0b91a530-7e97-4ec1-9281-a7ea82a9f825 does not exist
Dec 06 08:32:54 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2c11f72b-e978-4fd9-bb48-73ca77b70aca does not exist
Dec 06 08:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:32:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:32:54 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:32:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:32:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:32:54 compute-0 nova_compute[251992]: 2025-12-06 08:32:54.816 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:54 compute-0 sudo[430613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:54 compute-0 sudo[430613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:54 compute-0 sudo[430613]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:54 compute-0 sudo[430638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:32:54 compute-0 sudo[430638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:54 compute-0 sudo[430638]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:54 compute-0 sudo[430663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:54 compute-0 sudo[430663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:54 compute-0 sudo[430663]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:55 compute-0 sudo[430688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:32:55 compute-0 sudo[430688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:55 compute-0 ceph-mon[74339]: pgmap v4229: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:32:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:32:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:32:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:32:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:32:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:32:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1018717692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:32:55 compute-0 podman[430753]: 2025-12-06 08:32:55.303072684 +0000 UTC m=+0.037170454 container create 6c25e2af026637776366eaf75adcfedb3395cee48d20f1d271a5b34fa92a12bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:32:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:55.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:55 compute-0 systemd[1]: Started libpod-conmon-6c25e2af026637776366eaf75adcfedb3395cee48d20f1d271a5b34fa92a12bc.scope.
Dec 06 08:32:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:32:55 compute-0 podman[430753]: 2025-12-06 08:32:55.28694396 +0000 UTC m=+0.021041750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:32:55 compute-0 podman[430753]: 2025-12-06 08:32:55.393943267 +0000 UTC m=+0.128041077 container init 6c25e2af026637776366eaf75adcfedb3395cee48d20f1d271a5b34fa92a12bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:32:55 compute-0 podman[430753]: 2025-12-06 08:32:55.40037156 +0000 UTC m=+0.134469330 container start 6c25e2af026637776366eaf75adcfedb3395cee48d20f1d271a5b34fa92a12bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:32:55 compute-0 podman[430753]: 2025-12-06 08:32:55.403567967 +0000 UTC m=+0.137665777 container attach 6c25e2af026637776366eaf75adcfedb3395cee48d20f1d271a5b34fa92a12bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:32:55 compute-0 agitated_faraday[430769]: 167 167
Dec 06 08:32:55 compute-0 systemd[1]: libpod-6c25e2af026637776366eaf75adcfedb3395cee48d20f1d271a5b34fa92a12bc.scope: Deactivated successfully.
Dec 06 08:32:55 compute-0 conmon[430769]: conmon 6c25e2af026637776366 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c25e2af026637776366eaf75adcfedb3395cee48d20f1d271a5b34fa92a12bc.scope/container/memory.events
Dec 06 08:32:55 compute-0 podman[430753]: 2025-12-06 08:32:55.410787942 +0000 UTC m=+0.144885752 container died 6c25e2af026637776366eaf75adcfedb3395cee48d20f1d271a5b34fa92a12bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3152058245ea3bf283c51f74e4a8f143b9ae90f2993476673e0923d9d194446-merged.mount: Deactivated successfully.
Dec 06 08:32:55 compute-0 podman[430753]: 2025-12-06 08:32:55.452830397 +0000 UTC m=+0.186928167 container remove 6c25e2af026637776366eaf75adcfedb3395cee48d20f1d271a5b34fa92a12bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:32:55 compute-0 systemd[1]: libpod-conmon-6c25e2af026637776366eaf75adcfedb3395cee48d20f1d271a5b34fa92a12bc.scope: Deactivated successfully.
Dec 06 08:32:55 compute-0 podman[430793]: 2025-12-06 08:32:55.654446498 +0000 UTC m=+0.072237101 container create 502b774d4728c2518ef7f94b55ac5d71516eb6272ae495450dca79bbacdd08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gates, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:32:55 compute-0 systemd[1]: Started libpod-conmon-502b774d4728c2518ef7f94b55ac5d71516eb6272ae495450dca79bbacdd08b7.scope.
Dec 06 08:32:55 compute-0 podman[430793]: 2025-12-06 08:32:55.62525091 +0000 UTC m=+0.043041523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:32:55 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92118fc508efb9fbaae609f6f23128987a1f7e1904e31029b2202dedb8be9dee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92118fc508efb9fbaae609f6f23128987a1f7e1904e31029b2202dedb8be9dee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92118fc508efb9fbaae609f6f23128987a1f7e1904e31029b2202dedb8be9dee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92118fc508efb9fbaae609f6f23128987a1f7e1904e31029b2202dedb8be9dee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:32:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92118fc508efb9fbaae609f6f23128987a1f7e1904e31029b2202dedb8be9dee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:32:55 compute-0 podman[430793]: 2025-12-06 08:32:55.779615246 +0000 UTC m=+0.197405899 container init 502b774d4728c2518ef7f94b55ac5d71516eb6272ae495450dca79bbacdd08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gates, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:32:55 compute-0 podman[430793]: 2025-12-06 08:32:55.805581287 +0000 UTC m=+0.223371880 container start 502b774d4728c2518ef7f94b55ac5d71516eb6272ae495450dca79bbacdd08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 08:32:55 compute-0 podman[430793]: 2025-12-06 08:32:55.809546294 +0000 UTC m=+0.227336937 container attach 502b774d4728c2518ef7f94b55ac5d71516eb6272ae495450dca79bbacdd08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gates, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec 06 08:32:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:32:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4230: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:32:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:56.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:32:56 compute-0 fervent_gates[430809]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:32:56 compute-0 fervent_gates[430809]: --> relative data size: 1.0
Dec 06 08:32:56 compute-0 fervent_gates[430809]: --> All data devices are unavailable
Dec 06 08:32:56 compute-0 systemd[1]: libpod-502b774d4728c2518ef7f94b55ac5d71516eb6272ae495450dca79bbacdd08b7.scope: Deactivated successfully.
Dec 06 08:32:56 compute-0 podman[430793]: 2025-12-06 08:32:56.669758031 +0000 UTC m=+1.087548654 container died 502b774d4728c2518ef7f94b55ac5d71516eb6272ae495450dca79bbacdd08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gates, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-92118fc508efb9fbaae609f6f23128987a1f7e1904e31029b2202dedb8be9dee-merged.mount: Deactivated successfully.
Dec 06 08:32:56 compute-0 podman[430793]: 2025-12-06 08:32:56.751588559 +0000 UTC m=+1.169379152 container remove 502b774d4728c2518ef7f94b55ac5d71516eb6272ae495450dca79bbacdd08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gates, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:32:56 compute-0 systemd[1]: libpod-conmon-502b774d4728c2518ef7f94b55ac5d71516eb6272ae495450dca79bbacdd08b7.scope: Deactivated successfully.
Dec 06 08:32:56 compute-0 sudo[430688]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:56 compute-0 sudo[430836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:56 compute-0 sudo[430836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:56 compute-0 sudo[430836]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:57 compute-0 sudo[430861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:32:57 compute-0 sudo[430861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:57 compute-0 sudo[430861]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:57 compute-0 sudo[430886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:57 compute-0 sudo[430886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:57 compute-0 sudo[430886]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:57 compute-0 sudo[430911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:32:57 compute-0 sudo[430911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:32:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:57.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:32:57 compute-0 podman[430974]: 2025-12-06 08:32:57.586916495 +0000 UTC m=+0.052318583 container create a58519fb5bc5ea903e12f008729826da47444c3a4bf75b8c2d2781e20e3e77b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:32:57 compute-0 systemd[1]: Started libpod-conmon-a58519fb5bc5ea903e12f008729826da47444c3a4bf75b8c2d2781e20e3e77b8.scope.
Dec 06 08:32:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:32:57 compute-0 podman[430974]: 2025-12-06 08:32:57.570173963 +0000 UTC m=+0.035576051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:32:57 compute-0 podman[430974]: 2025-12-06 08:32:57.675410423 +0000 UTC m=+0.140812551 container init a58519fb5bc5ea903e12f008729826da47444c3a4bf75b8c2d2781e20e3e77b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moser, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 08:32:57 compute-0 podman[430974]: 2025-12-06 08:32:57.686010499 +0000 UTC m=+0.151412577 container start a58519fb5bc5ea903e12f008729826da47444c3a4bf75b8c2d2781e20e3e77b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:32:57 compute-0 podman[430974]: 2025-12-06 08:32:57.689779161 +0000 UTC m=+0.155181239 container attach a58519fb5bc5ea903e12f008729826da47444c3a4bf75b8c2d2781e20e3e77b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moser, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:32:57 compute-0 brave_moser[430990]: 167 167
Dec 06 08:32:57 compute-0 systemd[1]: libpod-a58519fb5bc5ea903e12f008729826da47444c3a4bf75b8c2d2781e20e3e77b8.scope: Deactivated successfully.
Dec 06 08:32:57 compute-0 podman[430974]: 2025-12-06 08:32:57.694305143 +0000 UTC m=+0.159707221 container died a58519fb5bc5ea903e12f008729826da47444c3a4bf75b8c2d2781e20e3e77b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:32:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bfd0f050a82d23381101e174a5ede1ea36b7434a824ef900d749a745f7f8f5a-merged.mount: Deactivated successfully.
Dec 06 08:32:57 compute-0 podman[430974]: 2025-12-06 08:32:57.743269154 +0000 UTC m=+0.208671232 container remove a58519fb5bc5ea903e12f008729826da47444c3a4bf75b8c2d2781e20e3e77b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_moser, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec 06 08:32:57 compute-0 systemd[1]: libpod-conmon-a58519fb5bc5ea903e12f008729826da47444c3a4bf75b8c2d2781e20e3e77b8.scope: Deactivated successfully.
Dec 06 08:32:57 compute-0 podman[431013]: 2025-12-06 08:32:57.933114758 +0000 UTC m=+0.054319796 container create a2278d53de56198b570a0df8121115be36a0f7b40d6404e0c97b00823a2982fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:32:57 compute-0 systemd[1]: Started libpod-conmon-a2278d53de56198b570a0df8121115be36a0f7b40d6404e0c97b00823a2982fa.scope.
Dec 06 08:32:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a41e39c92a195987c8b08e5bbdfad9a99fe14cf15bc02051eb48d38e977d0d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a41e39c92a195987c8b08e5bbdfad9a99fe14cf15bc02051eb48d38e977d0d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a41e39c92a195987c8b08e5bbdfad9a99fe14cf15bc02051eb48d38e977d0d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a41e39c92a195987c8b08e5bbdfad9a99fe14cf15bc02051eb48d38e977d0d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:32:58 compute-0 podman[431013]: 2025-12-06 08:32:57.910867448 +0000 UTC m=+0.032072526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:32:58 compute-0 podman[431013]: 2025-12-06 08:32:58.015344867 +0000 UTC m=+0.136549915 container init a2278d53de56198b570a0df8121115be36a0f7b40d6404e0c97b00823a2982fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 08:32:58 compute-0 podman[431013]: 2025-12-06 08:32:58.026696915 +0000 UTC m=+0.147901993 container start a2278d53de56198b570a0df8121115be36a0f7b40d6404e0c97b00823a2982fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 08:32:58 compute-0 podman[431013]: 2025-12-06 08:32:58.030745723 +0000 UTC m=+0.151950761 container attach a2278d53de56198b570a0df8121115be36a0f7b40d6404e0c97b00823a2982fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 08:32:58 compute-0 ceph-mon[74339]: pgmap v4230: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4231: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:32:58.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:58 compute-0 nova_compute[251992]: 2025-12-06 08:32:58.467 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:58 compute-0 youthful_bell[431029]: {
Dec 06 08:32:58 compute-0 youthful_bell[431029]:     "0": [
Dec 06 08:32:58 compute-0 youthful_bell[431029]:         {
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             "devices": [
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "/dev/loop3"
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             ],
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             "lv_name": "ceph_lv0",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             "lv_size": "7511998464",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             "name": "ceph_lv0",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             "tags": {
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "ceph.cluster_name": "ceph",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "ceph.crush_device_class": "",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "ceph.encrypted": "0",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "ceph.osd_id": "0",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "ceph.type": "block",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:                 "ceph.vdo": "0"
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             },
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             "type": "block",
Dec 06 08:32:58 compute-0 youthful_bell[431029]:             "vg_name": "ceph_vg0"
Dec 06 08:32:58 compute-0 youthful_bell[431029]:         }
Dec 06 08:32:58 compute-0 youthful_bell[431029]:     ]
Dec 06 08:32:58 compute-0 youthful_bell[431029]: }
Dec 06 08:32:58 compute-0 systemd[1]: libpod-a2278d53de56198b570a0df8121115be36a0f7b40d6404e0c97b00823a2982fa.scope: Deactivated successfully.
Dec 06 08:32:58 compute-0 podman[431013]: 2025-12-06 08:32:58.869592323 +0000 UTC m=+0.990797401 container died a2278d53de56198b570a0df8121115be36a0f7b40d6404e0c97b00823a2982fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Dec 06 08:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a41e39c92a195987c8b08e5bbdfad9a99fe14cf15bc02051eb48d38e977d0d1-merged.mount: Deactivated successfully.
Dec 06 08:32:58 compute-0 podman[431013]: 2025-12-06 08:32:58.944534886 +0000 UTC m=+1.065739934 container remove a2278d53de56198b570a0df8121115be36a0f7b40d6404e0c97b00823a2982fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:32:58 compute-0 systemd[1]: libpod-conmon-a2278d53de56198b570a0df8121115be36a0f7b40d6404e0c97b00823a2982fa.scope: Deactivated successfully.
Dec 06 08:32:58 compute-0 sudo[430911]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:59 compute-0 sudo[431053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:59 compute-0 sudo[431053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:59 compute-0 sudo[431053]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:59 compute-0 sudo[431078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:32:59 compute-0 sudo[431078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:59 compute-0 sudo[431078]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:59 compute-0 ceph-mon[74339]: pgmap v4231: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:32:59 compute-0 sudo[431103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:32:59 compute-0 sudo[431103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:59 compute-0 sudo[431103]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:59 compute-0 sudo[431128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:32:59 compute-0 sudo[431128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:32:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:32:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:32:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:32:59.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:32:59 compute-0 podman[431194]: 2025-12-06 08:32:59.676194893 +0000 UTC m=+0.041135060 container create 46b0b804a3d137f99cd02ceecbb0a63f92b6238da396f9c29905f7396c745b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:32:59 compute-0 systemd[1]: Started libpod-conmon-46b0b804a3d137f99cd02ceecbb0a63f92b6238da396f9c29905f7396c745b34.scope.
Dec 06 08:32:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:32:59 compute-0 podman[431194]: 2025-12-06 08:32:59.656917623 +0000 UTC m=+0.021857830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:32:59 compute-0 podman[431194]: 2025-12-06 08:32:59.757397675 +0000 UTC m=+0.122337852 container init 46b0b804a3d137f99cd02ceecbb0a63f92b6238da396f9c29905f7396c745b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mclaren, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:32:59 compute-0 podman[431194]: 2025-12-06 08:32:59.762623046 +0000 UTC m=+0.127563203 container start 46b0b804a3d137f99cd02ceecbb0a63f92b6238da396f9c29905f7396c745b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:32:59 compute-0 podman[431194]: 2025-12-06 08:32:59.766270285 +0000 UTC m=+0.131210642 container attach 46b0b804a3d137f99cd02ceecbb0a63f92b6238da396f9c29905f7396c745b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 08:32:59 compute-0 funny_mclaren[431210]: 167 167
Dec 06 08:32:59 compute-0 systemd[1]: libpod-46b0b804a3d137f99cd02ceecbb0a63f92b6238da396f9c29905f7396c745b34.scope: Deactivated successfully.
Dec 06 08:32:59 compute-0 podman[431194]: 2025-12-06 08:32:59.769967125 +0000 UTC m=+0.134907322 container died 46b0b804a3d137f99cd02ceecbb0a63f92b6238da396f9c29905f7396c745b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mclaren, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 08:32:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd8982b2c88390e7077e5acfbe604131028005d781b33920af5d6cc0e4d5d066-merged.mount: Deactivated successfully.
Dec 06 08:32:59 compute-0 podman[431194]: 2025-12-06 08:32:59.813618752 +0000 UTC m=+0.178558949 container remove 46b0b804a3d137f99cd02ceecbb0a63f92b6238da396f9c29905f7396c745b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mclaren, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec 06 08:32:59 compute-0 nova_compute[251992]: 2025-12-06 08:32:59.817 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:32:59 compute-0 systemd[1]: libpod-conmon-46b0b804a3d137f99cd02ceecbb0a63f92b6238da396f9c29905f7396c745b34.scope: Deactivated successfully.
Dec 06 08:33:00 compute-0 podman[431233]: 2025-12-06 08:33:00.025856771 +0000 UTC m=+0.052252441 container create 7dcf28cfe43ad912764e81a512d4a68e3930cd8185c55244bec4faf300506fe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_davinci, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 08:33:00 compute-0 systemd[1]: Started libpod-conmon-7dcf28cfe43ad912764e81a512d4a68e3930cd8185c55244bec4faf300506fe9.scope.
Dec 06 08:33:00 compute-0 podman[431233]: 2025-12-06 08:33:00.004457484 +0000 UTC m=+0.030853184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:33:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5590e74df5c838efcce6737246dca75113aa1e024f4055753bb5446fb6462e81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5590e74df5c838efcce6737246dca75113aa1e024f4055753bb5446fb6462e81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5590e74df5c838efcce6737246dca75113aa1e024f4055753bb5446fb6462e81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:33:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5590e74df5c838efcce6737246dca75113aa1e024f4055753bb5446fb6462e81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:33:00 compute-0 podman[431233]: 2025-12-06 08:33:00.140167886 +0000 UTC m=+0.166563586 container init 7dcf28cfe43ad912764e81a512d4a68e3930cd8185c55244bec4faf300506fe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_davinci, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 08:33:00 compute-0 podman[431233]: 2025-12-06 08:33:00.15439545 +0000 UTC m=+0.180791120 container start 7dcf28cfe43ad912764e81a512d4a68e3930cd8185c55244bec4faf300506fe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:33:00 compute-0 podman[431233]: 2025-12-06 08:33:00.158596424 +0000 UTC m=+0.184992194 container attach 7dcf28cfe43ad912764e81a512d4a68e3930cd8185c55244bec4faf300506fe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_davinci, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:33:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4232: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:00.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:00 compute-0 friendly_davinci[431249]: {
Dec 06 08:33:00 compute-0 friendly_davinci[431249]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:33:00 compute-0 friendly_davinci[431249]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:33:00 compute-0 friendly_davinci[431249]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:33:00 compute-0 friendly_davinci[431249]:         "osd_id": 0,
Dec 06 08:33:00 compute-0 friendly_davinci[431249]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:33:00 compute-0 friendly_davinci[431249]:         "type": "bluestore"
Dec 06 08:33:00 compute-0 friendly_davinci[431249]:     }
Dec 06 08:33:00 compute-0 friendly_davinci[431249]: }
Dec 06 08:33:01 compute-0 systemd[1]: libpod-7dcf28cfe43ad912764e81a512d4a68e3930cd8185c55244bec4faf300506fe9.scope: Deactivated successfully.
Dec 06 08:33:01 compute-0 podman[431233]: 2025-12-06 08:33:01.026843707 +0000 UTC m=+1.053239367 container died 7dcf28cfe43ad912764e81a512d4a68e3930cd8185c55244bec4faf300506fe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_davinci, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 08:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-5590e74df5c838efcce6737246dca75113aa1e024f4055753bb5446fb6462e81-merged.mount: Deactivated successfully.
Dec 06 08:33:01 compute-0 podman[431233]: 2025-12-06 08:33:01.090408143 +0000 UTC m=+1.116803803 container remove 7dcf28cfe43ad912764e81a512d4a68e3930cd8185c55244bec4faf300506fe9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_davinci, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:33:01 compute-0 systemd[1]: libpod-conmon-7dcf28cfe43ad912764e81a512d4a68e3930cd8185c55244bec4faf300506fe9.scope: Deactivated successfully.
Dec 06 08:33:01 compute-0 sudo[431128]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:33:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:01.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4233: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:02.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:03.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:03 compute-0 ceph-mon[74339]: pgmap v4232: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:03 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:33:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:33:03 compute-0 nova_compute[251992]: 2025-12-06 08:33:03.526 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:33:03.909 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:33:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:33:03.910 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:33:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:33:03.910 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:33:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4234: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:04.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:04 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:33:04 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d412c39b-0b99-4df1-96c5-0ce464117251 does not exist
Dec 06 08:33:04 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a4664047-b10f-4a99-adb4-2c1d6ccb6512 does not exist
Dec 06 08:33:04 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0021e0be-75e1-4601-bd12-6d3b5cba9ebd does not exist
Dec 06 08:33:04 compute-0 sudo[431287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:33:04 compute-0 sudo[431287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:33:04 compute-0 sudo[431287]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:04 compute-0 nova_compute[251992]: 2025-12-06 08:33:04.857 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:04 compute-0 sudo[431312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:33:04 compute-0 sudo[431312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:33:04 compute-0 sudo[431312]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:05 compute-0 ceph-mon[74339]: pgmap v4233: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:05 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:33:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:05.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4235: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:06 compute-0 ceph-mon[74339]: pgmap v4234: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:33:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:06.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:07.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:08 compute-0 sshd-session[431337]: Connection reset by authenticating user root 45.140.17.124 port 60834 [preauth]
Dec 06 08:33:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4236: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:08 compute-0 ceph-mon[74339]: pgmap v4235: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:08.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:08 compute-0 nova_compute[251992]: 2025-12-06 08:33:08.560 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:08 compute-0 nova_compute[251992]: 2025-12-06 08:33:08.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:09.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:09 compute-0 nova_compute[251992]: 2025-12-06 08:33:09.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:09 compute-0 nova_compute[251992]: 2025-12-06 08:33:09.859 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:10 compute-0 sshd-session[431341]: Connection reset by authenticating user root 45.140.17.124 port 60852 [preauth]
Dec 06 08:33:10 compute-0 sudo[431344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:33:10 compute-0 sudo[431344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:33:10 compute-0 sudo[431344]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:10 compute-0 sudo[431369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:33:10 compute-0 sudo[431369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:33:10 compute-0 sudo[431369]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4237: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:10.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:11.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:11 compute-0 podman[431396]: 2025-12-06 08:33:11.460594509 +0000 UTC m=+0.115208200 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 08:33:12 compute-0 sshd-session[431377]: Connection reset by authenticating user root 45.140.17.124 port 60862 [preauth]
Dec 06 08:33:12 compute-0 ceph-mon[74339]: pgmap v4236: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4238: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:33:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4113851892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:33:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:33:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4113851892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:33:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:12.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:33:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:33:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:33:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:33:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:33:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:33:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 08:33:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:13.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 08:33:13 compute-0 nova_compute[251992]: 2025-12-06 08:33:13.585 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:13 compute-0 ceph-mon[74339]: pgmap v4237: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:13 compute-0 ceph-mon[74339]: pgmap v4238: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4113851892' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:33:13 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4113851892' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:33:13 compute-0 nova_compute[251992]: 2025-12-06 08:33:13.643 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:33:13 compute-0 nova_compute[251992]: 2025-12-06 08:33:13.644 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:33:13 compute-0 nova_compute[251992]: 2025-12-06 08:33:13.644 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:33:13 compute-0 nova_compute[251992]: 2025-12-06 08:33:13.645 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:33:13 compute-0 nova_compute[251992]: 2025-12-06 08:33:13.645 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:33:13 compute-0 sshd-session[431423]: Invalid user pi from 45.140.17.124 port 60878
Dec 06 08:33:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:33:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3772122857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:14 compute-0 nova_compute[251992]: 2025-12-06 08:33:14.160 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:33:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4239: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:14 compute-0 sshd-session[431423]: Connection reset by invalid user pi 45.140.17.124 port 60878 [preauth]
Dec 06 08:33:14 compute-0 nova_compute[251992]: 2025-12-06 08:33:14.387 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:33:14 compute-0 nova_compute[251992]: 2025-12-06 08:33:14.389 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4053MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:33:14 compute-0 nova_compute[251992]: 2025-12-06 08:33:14.390 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:33:14 compute-0 nova_compute[251992]: 2025-12-06 08:33:14.390 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:33:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:14.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3772122857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1660156342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:14 compute-0 nova_compute[251992]: 2025-12-06 08:33:14.862 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:15.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4240: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:16.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:16 compute-0 ceph-mon[74339]: pgmap v4239: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:16 compute-0 nova_compute[251992]: 2025-12-06 08:33:16.484 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:33:16 compute-0 nova_compute[251992]: 2025-12-06 08:33:16.484 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:33:16 compute-0 nova_compute[251992]: 2025-12-06 08:33:16.511 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:33:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:33:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4202559805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:16 compute-0 nova_compute[251992]: 2025-12-06 08:33:16.949 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:33:16 compute-0 nova_compute[251992]: 2025-12-06 08:33:16.955 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:33:16 compute-0 nova_compute[251992]: 2025-12-06 08:33:16.972 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:33:16 compute-0 nova_compute[251992]: 2025-12-06 08:33:16.973 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:33:16 compute-0 nova_compute[251992]: 2025-12-06 08:33:16.973 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:33:17 compute-0 sshd-session[431448]: Connection reset by authenticating user root 45.140.17.124 port 23262 [preauth]
Dec 06 08:33:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:17.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:17 compute-0 ceph-mon[74339]: pgmap v4240: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4202559805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2460955246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4241: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:18 compute-0 podman[431474]: 2025-12-06 08:33:18.391347816 +0000 UTC m=+0.044856672 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:33:18 compute-0 podman[431475]: 2025-12-06 08:33:18.401961893 +0000 UTC m=+0.055011646 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:33:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:18.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:18 compute-0 nova_compute[251992]: 2025-12-06 08:33:18.589 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:33:18
Dec 06 08:33:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:33:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:33:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'images', 'backups', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Dec 06 08:33:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:33:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:19.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:19 compute-0 ceph-mon[74339]: pgmap v4241: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:19 compute-0 nova_compute[251992]: 2025-12-06 08:33:19.863 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4242: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:20.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:20 compute-0 nova_compute[251992]: 2025-12-06 08:33:20.974 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:20 compute-0 nova_compute[251992]: 2025-12-06 08:33:20.975 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:33:20 compute-0 nova_compute[251992]: 2025-12-06 08:33:20.975 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:33:21 compute-0 nova_compute[251992]: 2025-12-06 08:33:21.002 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:33:21 compute-0 nova_compute[251992]: 2025-12-06 08:33:21.002 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:21 compute-0 nova_compute[251992]: 2025-12-06 08:33:21.003 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:21 compute-0 nova_compute[251992]: 2025-12-06 08:33:21.003 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:21 compute-0 nova_compute[251992]: 2025-12-06 08:33:21.003 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:21.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:21 compute-0 ceph-mon[74339]: pgmap v4242: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4243: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:22.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:23.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:23 compute-0 nova_compute[251992]: 2025-12-06 08:33:23.592 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:23 compute-0 ceph-mon[74339]: pgmap v4243: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:33:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:33:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:33:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:33:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:33:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4244: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:24.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:24 compute-0 nova_compute[251992]: 2025-12-06 08:33:24.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:24 compute-0 nova_compute[251992]: 2025-12-06 08:33:24.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:33:24 compute-0 nova_compute[251992]: 2025-12-06 08:33:24.865 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:25.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:25 compute-0 ceph-mon[74339]: pgmap v4244: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4245: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:26.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.491337) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006491397, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 752, "num_deletes": 250, "total_data_size": 1075602, "memory_usage": 1100688, "flush_reason": "Manual Compaction"}
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006500022, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 681489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87217, "largest_seqno": 87968, "table_properties": {"data_size": 678239, "index_size": 1093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8768, "raw_average_key_size": 20, "raw_value_size": 671349, "raw_average_value_size": 1590, "num_data_blocks": 49, "num_entries": 422, "num_filter_entries": 422, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765009939, "oldest_key_time": 1765009939, "file_creation_time": 1765010006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 8981 microseconds, and 4917 cpu microseconds.
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.500313) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 681489 bytes OK
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.500394) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.502637) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.502661) EVENT_LOG_v1 {"time_micros": 1765010006502653, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.502683) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 1071852, prev total WAL file size 1071852, number of live WAL files 2.
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.503938) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323739' seq:72057594037927935, type:22 .. '6D6772737461740033353330' seq:0, type:0; will stop at (end)
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(665KB)], [197(14MB)]
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006504030, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 15895004, "oldest_snapshot_seqno": -1}
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 12164 keys, 12430756 bytes, temperature: kUnknown
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006631532, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 12430756, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12356937, "index_size": 42282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 323825, "raw_average_key_size": 26, "raw_value_size": 12149043, "raw_average_value_size": 998, "num_data_blocks": 1586, "num_entries": 12164, "num_filter_entries": 12164, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765010006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.631815) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 12430756 bytes
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.633129) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.6 rd, 97.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.5 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(41.6) write-amplify(18.2) OK, records in: 12651, records dropped: 487 output_compression: NoCompression
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.633149) EVENT_LOG_v1 {"time_micros": 1765010006633139, "job": 124, "event": "compaction_finished", "compaction_time_micros": 127575, "compaction_time_cpu_micros": 62322, "output_level": 6, "num_output_files": 1, "total_output_size": 12430756, "num_input_records": 12651, "num_output_records": 12164, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006633431, "job": 124, "event": "table_file_deletion", "file_number": 199}
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010006636932, "job": 124, "event": "table_file_deletion", "file_number": 197}
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.503834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.637059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.637068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.637071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.637074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:33:26 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:33:26.637077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:33:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:33:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:27.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:33:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:33:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:33:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:33:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:33:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4246: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:28 compute-0 ceph-mon[74339]: pgmap v4245: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:28.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:28 compute-0 nova_compute[251992]: 2025-12-06 08:33:28.596 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:29 compute-0 ceph-mon[74339]: pgmap v4246: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:29.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:29 compute-0 nova_compute[251992]: 2025-12-06 08:33:29.866 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4247: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:30 compute-0 sudo[431517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:33:30 compute-0 sudo[431517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:33:30 compute-0 sudo[431517]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:30 compute-0 sudo[431542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:33:30 compute-0 sudo[431542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:33:30 compute-0 sudo[431542]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:30.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:30 compute-0 nova_compute[251992]: 2025-12-06 08:33:30.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:33:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:31 compute-0 ceph-mon[74339]: pgmap v4247: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:31.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4248: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:32.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:33 compute-0 ceph-mon[74339]: pgmap v4248: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:33.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:33 compute-0 nova_compute[251992]: 2025-12-06 08:33:33.599 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4249: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:34.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:34 compute-0 nova_compute[251992]: 2025-12-06 08:33:34.897 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:35 compute-0 ceph-mon[74339]: pgmap v4249: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:35.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4250: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:36.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:37.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:37 compute-0 ceph-mon[74339]: pgmap v4250: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4251: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:38.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:38 compute-0 nova_compute[251992]: 2025-12-06 08:33:38.603 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:39.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:39 compute-0 nova_compute[251992]: 2025-12-06 08:33:39.899 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:39 compute-0 ceph-mon[74339]: pgmap v4251: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4252: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:40.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:41 compute-0 ceph-mon[74339]: pgmap v4252: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:41.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4253: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:42 compute-0 podman[431573]: 2025-12-06 08:33:42.467654717 +0000 UTC m=+0.115842447 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:33:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:42.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:43 compute-0 ceph-mon[74339]: pgmap v4253: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:33:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:33:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:33:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:33:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:33:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:33:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:43.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:43 compute-0 nova_compute[251992]: 2025-12-06 08:33:43.635 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4254: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:44.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:44 compute-0 nova_compute[251992]: 2025-12-06 08:33:44.900 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:45.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:45 compute-0 ceph-mon[74339]: pgmap v4254: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4255: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:46.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:47 compute-0 ceph-mon[74339]: pgmap v4255: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:33:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:47.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:33:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4256: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:48.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:48 compute-0 nova_compute[251992]: 2025-12-06 08:33:48.639 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:49 compute-0 podman[431604]: 2025-12-06 08:33:49.388393476 +0000 UTC m=+0.048729915 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 06 08:33:49 compute-0 podman[431605]: 2025-12-06 08:33:49.402255711 +0000 UTC m=+0.057366330 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:33:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:49.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:49 compute-0 ceph-mon[74339]: pgmap v4256: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:49 compute-0 nova_compute[251992]: 2025-12-06 08:33:49.903 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4257: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:50.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:50 compute-0 sudo[431644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:33:50 compute-0 sudo[431644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:33:50 compute-0 sudo[431644]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:50 compute-0 sudo[431669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:33:50 compute-0 sudo[431669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:33:50 compute-0 sudo[431669]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:51.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:51 compute-0 ceph-mon[74339]: pgmap v4257: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4258: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:52.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:52 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/760969643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:53.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:53 compute-0 nova_compute[251992]: 2025-12-06 08:33:53.642 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:54 compute-0 ceph-mon[74339]: pgmap v4258: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4259: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:54.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:54 compute-0 nova_compute[251992]: 2025-12-06 08:33:54.905 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3201809378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:33:55 compute-0 ceph-mon[74339]: pgmap v4259: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:55.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:33:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4260: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:57 compute-0 ceph-mon[74339]: pgmap v4260: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:57.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4261: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:33:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:33:58.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:33:58 compute-0 nova_compute[251992]: 2025-12-06 08:33:58.645 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:33:59 compute-0 ceph-mon[74339]: pgmap v4261: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:33:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:33:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:33:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:33:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:33:59 compute-0 nova_compute[251992]: 2025-12-06 08:33:59.948 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4262: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:00.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:01 compute-0 ceph-mon[74339]: pgmap v4262: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:01.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4263: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:02.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:03 compute-0 ceph-mon[74339]: pgmap v4263: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:03.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:03 compute-0 nova_compute[251992]: 2025-12-06 08:34:03.648 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:34:03.910 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:34:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:34:03.912 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:34:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:34:03.912 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:34:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4264: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:04.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:04 compute-0 nova_compute[251992]: 2025-12-06 08:34:04.994 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:05 compute-0 sudo[431701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:05 compute-0 sudo[431701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:05 compute-0 sudo[431701]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:05 compute-0 sudo[431726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:34:05 compute-0 sudo[431726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:05 compute-0 sudo[431726]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:05.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:05 compute-0 sudo[431751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:05 compute-0 ceph-mon[74339]: pgmap v4264: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:05 compute-0 sudo[431751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:05 compute-0 sudo[431751]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:05 compute-0 sudo[431776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:34:05 compute-0 sudo[431776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:34:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:34:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:05 compute-0 sudo[431776]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4265: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:06.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:34:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:34:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:34:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:34:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:34:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d4818f28-aa28-411e-a093-ffe430ae0131 does not exist
Dec 06 08:34:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 61294f4d-5ce6-4715-a176-654b724d63ce does not exist
Dec 06 08:34:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4c3abf33-f9e2-4ad1-8b8c-17571776888f does not exist
Dec 06 08:34:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:34:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:34:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:34:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:34:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:34:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:34:06 compute-0 sudo[431834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:06 compute-0 sudo[431834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:06 compute-0 sudo[431834]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:34:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:34:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:34:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:34:06 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:34:06 compute-0 sudo[431859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:34:06 compute-0 sudo[431859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:06 compute-0 sudo[431859]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:06 compute-0 sudo[431884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:06 compute-0 sudo[431884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:06 compute-0 sudo[431884]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:06 compute-0 sudo[431909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:34:07 compute-0 sudo[431909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:07.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:07 compute-0 podman[431974]: 2025-12-06 08:34:07.478248535 +0000 UTC m=+0.074064029 container create 0a94db6ee96e30f8241fe18ad0082290f34da4436a78581e67ffe4aa131526ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 08:34:07 compute-0 systemd[1]: Started libpod-conmon-0a94db6ee96e30f8241fe18ad0082290f34da4436a78581e67ffe4aa131526ef.scope.
Dec 06 08:34:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:34:07 compute-0 podman[431974]: 2025-12-06 08:34:07.454504935 +0000 UTC m=+0.050320439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:34:07 compute-0 podman[431974]: 2025-12-06 08:34:07.558277446 +0000 UTC m=+0.154092990 container init 0a94db6ee96e30f8241fe18ad0082290f34da4436a78581e67ffe4aa131526ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:34:07 compute-0 podman[431974]: 2025-12-06 08:34:07.566243691 +0000 UTC m=+0.162059155 container start 0a94db6ee96e30f8241fe18ad0082290f34da4436a78581e67ffe4aa131526ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:34:07 compute-0 podman[431974]: 2025-12-06 08:34:07.569385276 +0000 UTC m=+0.165200820 container attach 0a94db6ee96e30f8241fe18ad0082290f34da4436a78581e67ffe4aa131526ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:34:07 compute-0 distracted_kowalevski[431990]: 167 167
Dec 06 08:34:07 compute-0 systemd[1]: libpod-0a94db6ee96e30f8241fe18ad0082290f34da4436a78581e67ffe4aa131526ef.scope: Deactivated successfully.
Dec 06 08:34:07 compute-0 conmon[431990]: conmon 0a94db6ee96e30f8241f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a94db6ee96e30f8241fe18ad0082290f34da4436a78581e67ffe4aa131526ef.scope/container/memory.events
Dec 06 08:34:07 compute-0 podman[431974]: 2025-12-06 08:34:07.573578428 +0000 UTC m=+0.169393892 container died 0a94db6ee96e30f8241fe18ad0082290f34da4436a78581e67ffe4aa131526ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb7456343252446624def6819d1674649ff911e72132e58d855811bf5b227079-merged.mount: Deactivated successfully.
Dec 06 08:34:07 compute-0 podman[431974]: 2025-12-06 08:34:07.621139452 +0000 UTC m=+0.216954946 container remove 0a94db6ee96e30f8241fe18ad0082290f34da4436a78581e67ffe4aa131526ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kowalevski, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:34:07 compute-0 systemd[1]: libpod-conmon-0a94db6ee96e30f8241fe18ad0082290f34da4436a78581e67ffe4aa131526ef.scope: Deactivated successfully.
Dec 06 08:34:07 compute-0 podman[432015]: 2025-12-06 08:34:07.862062454 +0000 UTC m=+0.065222781 container create 2a3d36f1f4eaa55b362038c3ff10d410ba8ed4209d711d4d37a79a531a10b9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:34:07 compute-0 systemd[1]: Started libpod-conmon-2a3d36f1f4eaa55b362038c3ff10d410ba8ed4209d711d4d37a79a531a10b9f7.scope.
Dec 06 08:34:07 compute-0 podman[432015]: 2025-12-06 08:34:07.838076748 +0000 UTC m=+0.041237095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:34:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/077b3bcd3c0176c65caf6a77c0e056e0a0903c82914b6bb70786b37a3e01227e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/077b3bcd3c0176c65caf6a77c0e056e0a0903c82914b6bb70786b37a3e01227e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/077b3bcd3c0176c65caf6a77c0e056e0a0903c82914b6bb70786b37a3e01227e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/077b3bcd3c0176c65caf6a77c0e056e0a0903c82914b6bb70786b37a3e01227e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/077b3bcd3c0176c65caf6a77c0e056e0a0903c82914b6bb70786b37a3e01227e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:07 compute-0 ceph-mon[74339]: pgmap v4265: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:07 compute-0 podman[432015]: 2025-12-06 08:34:07.957247034 +0000 UTC m=+0.160407431 container init 2a3d36f1f4eaa55b362038c3ff10d410ba8ed4209d711d4d37a79a531a10b9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:34:07 compute-0 podman[432015]: 2025-12-06 08:34:07.969547776 +0000 UTC m=+0.172708083 container start 2a3d36f1f4eaa55b362038c3ff10d410ba8ed4209d711d4d37a79a531a10b9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swanson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:34:07 compute-0 podman[432015]: 2025-12-06 08:34:07.973192144 +0000 UTC m=+0.176352491 container attach 2a3d36f1f4eaa55b362038c3ff10d410ba8ed4209d711d4d37a79a531a10b9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 08:34:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4266: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:08.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:08 compute-0 nova_compute[251992]: 2025-12-06 08:34:08.657 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:08 compute-0 cool_swanson[432032]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:34:08 compute-0 cool_swanson[432032]: --> relative data size: 1.0
Dec 06 08:34:08 compute-0 cool_swanson[432032]: --> All data devices are unavailable
Dec 06 08:34:08 compute-0 systemd[1]: libpod-2a3d36f1f4eaa55b362038c3ff10d410ba8ed4209d711d4d37a79a531a10b9f7.scope: Deactivated successfully.
Dec 06 08:34:08 compute-0 podman[432015]: 2025-12-06 08:34:08.800549544 +0000 UTC m=+1.003709931 container died 2a3d36f1f4eaa55b362038c3ff10d410ba8ed4209d711d4d37a79a531a10b9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swanson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-077b3bcd3c0176c65caf6a77c0e056e0a0903c82914b6bb70786b37a3e01227e-merged.mount: Deactivated successfully.
Dec 06 08:34:08 compute-0 podman[432015]: 2025-12-06 08:34:08.868080267 +0000 UTC m=+1.071240564 container remove 2a3d36f1f4eaa55b362038c3ff10d410ba8ed4209d711d4d37a79a531a10b9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_swanson, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 08:34:08 compute-0 systemd[1]: libpod-conmon-2a3d36f1f4eaa55b362038c3ff10d410ba8ed4209d711d4d37a79a531a10b9f7.scope: Deactivated successfully.
Dec 06 08:34:08 compute-0 sudo[431909]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:08 compute-0 ceph-mon[74339]: pgmap v4266: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:08 compute-0 sudo[432063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:08 compute-0 sudo[432063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:08 compute-0 sudo[432063]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:09 compute-0 sudo[432088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:34:09 compute-0 sudo[432088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:09 compute-0 sudo[432088]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:09 compute-0 sudo[432113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:09 compute-0 sudo[432113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:09 compute-0 sudo[432113]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:09 compute-0 sudo[432138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:34:09 compute-0 sudo[432138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:09.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:09 compute-0 nova_compute[251992]: 2025-12-06 08:34:09.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:09 compute-0 podman[432203]: 2025-12-06 08:34:09.707191734 +0000 UTC m=+0.068284073 container create 06b2ae8c6aa16320a6d859d4b885dd4d30856594492b088593f87f6e14bb3ee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:34:09 compute-0 systemd[1]: Started libpod-conmon-06b2ae8c6aa16320a6d859d4b885dd4d30856594492b088593f87f6e14bb3ee7.scope.
Dec 06 08:34:09 compute-0 podman[432203]: 2025-12-06 08:34:09.679752484 +0000 UTC m=+0.040844903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:34:09 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:34:09 compute-0 podman[432203]: 2025-12-06 08:34:09.792895918 +0000 UTC m=+0.153988327 container init 06b2ae8c6aa16320a6d859d4b885dd4d30856594492b088593f87f6e14bb3ee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:34:09 compute-0 podman[432203]: 2025-12-06 08:34:09.800813491 +0000 UTC m=+0.161905820 container start 06b2ae8c6aa16320a6d859d4b885dd4d30856594492b088593f87f6e14bb3ee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 08:34:09 compute-0 podman[432203]: 2025-12-06 08:34:09.804809199 +0000 UTC m=+0.165901618 container attach 06b2ae8c6aa16320a6d859d4b885dd4d30856594492b088593f87f6e14bb3ee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cohen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:34:09 compute-0 compassionate_cohen[432220]: 167 167
Dec 06 08:34:09 compute-0 systemd[1]: libpod-06b2ae8c6aa16320a6d859d4b885dd4d30856594492b088593f87f6e14bb3ee7.scope: Deactivated successfully.
Dec 06 08:34:09 compute-0 podman[432203]: 2025-12-06 08:34:09.806722981 +0000 UTC m=+0.167815350 container died 06b2ae8c6aa16320a6d859d4b885dd4d30856594492b088593f87f6e14bb3ee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cohen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f5d10bfee1b2bea26eb433497a6137f36af7444ef0f4391e34d3ad133e781e0-merged.mount: Deactivated successfully.
Dec 06 08:34:09 compute-0 podman[432203]: 2025-12-06 08:34:09.850458141 +0000 UTC m=+0.211550500 container remove 06b2ae8c6aa16320a6d859d4b885dd4d30856594492b088593f87f6e14bb3ee7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cohen, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 08:34:09 compute-0 systemd[1]: libpod-conmon-06b2ae8c6aa16320a6d859d4b885dd4d30856594492b088593f87f6e14bb3ee7.scope: Deactivated successfully.
Dec 06 08:34:09 compute-0 nova_compute[251992]: 2025-12-06 08:34:09.992 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:10 compute-0 podman[432245]: 2025-12-06 08:34:10.035349811 +0000 UTC m=+0.051795178 container create b1e4ede92c84bb6a11e2cb4cb809200ad616176e386a29f415984b92468636c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_black, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 08:34:10 compute-0 systemd[1]: Started libpod-conmon-b1e4ede92c84bb6a11e2cb4cb809200ad616176e386a29f415984b92468636c3.scope.
Dec 06 08:34:10 compute-0 podman[432245]: 2025-12-06 08:34:10.015717801 +0000 UTC m=+0.032163188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:34:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895154f0ad95e93bd9748053c93b7d52a4b85b7d055bfad9c4696bedda7dc30c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895154f0ad95e93bd9748053c93b7d52a4b85b7d055bfad9c4696bedda7dc30c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895154f0ad95e93bd9748053c93b7d52a4b85b7d055bfad9c4696bedda7dc30c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/895154f0ad95e93bd9748053c93b7d52a4b85b7d055bfad9c4696bedda7dc30c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:10 compute-0 podman[432245]: 2025-12-06 08:34:10.135998728 +0000 UTC m=+0.152444075 container init b1e4ede92c84bb6a11e2cb4cb809200ad616176e386a29f415984b92468636c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:34:10 compute-0 podman[432245]: 2025-12-06 08:34:10.149512992 +0000 UTC m=+0.165958349 container start b1e4ede92c84bb6a11e2cb4cb809200ad616176e386a29f415984b92468636c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_black, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:34:10 compute-0 podman[432245]: 2025-12-06 08:34:10.153325605 +0000 UTC m=+0.169770972 container attach b1e4ede92c84bb6a11e2cb4cb809200ad616176e386a29f415984b92468636c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_black, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:34:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4267: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2526053359' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:34:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2526053359' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:34:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:10.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:10 compute-0 sudo[432267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:10 compute-0 sudo[432267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:10 compute-0 sudo[432267]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:10 compute-0 sudo[432292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:10 compute-0 sudo[432292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:10 compute-0 sudo[432292]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:10 compute-0 sharp_black[432261]: {
Dec 06 08:34:10 compute-0 sharp_black[432261]:     "0": [
Dec 06 08:34:10 compute-0 sharp_black[432261]:         {
Dec 06 08:34:10 compute-0 sharp_black[432261]:             "devices": [
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "/dev/loop3"
Dec 06 08:34:10 compute-0 sharp_black[432261]:             ],
Dec 06 08:34:10 compute-0 sharp_black[432261]:             "lv_name": "ceph_lv0",
Dec 06 08:34:10 compute-0 sharp_black[432261]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:34:10 compute-0 sharp_black[432261]:             "lv_size": "7511998464",
Dec 06 08:34:10 compute-0 sharp_black[432261]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:34:10 compute-0 sharp_black[432261]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:34:10 compute-0 sharp_black[432261]:             "name": "ceph_lv0",
Dec 06 08:34:10 compute-0 sharp_black[432261]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:34:10 compute-0 sharp_black[432261]:             "tags": {
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "ceph.cluster_name": "ceph",
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "ceph.crush_device_class": "",
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "ceph.encrypted": "0",
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "ceph.osd_id": "0",
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "ceph.type": "block",
Dec 06 08:34:10 compute-0 sharp_black[432261]:                 "ceph.vdo": "0"
Dec 06 08:34:10 compute-0 sharp_black[432261]:             },
Dec 06 08:34:10 compute-0 sharp_black[432261]:             "type": "block",
Dec 06 08:34:10 compute-0 sharp_black[432261]:             "vg_name": "ceph_vg0"
Dec 06 08:34:10 compute-0 sharp_black[432261]:         }
Dec 06 08:34:10 compute-0 sharp_black[432261]:     ]
Dec 06 08:34:10 compute-0 sharp_black[432261]: }
Dec 06 08:34:10 compute-0 systemd[1]: libpod-b1e4ede92c84bb6a11e2cb4cb809200ad616176e386a29f415984b92468636c3.scope: Deactivated successfully.
Dec 06 08:34:10 compute-0 podman[432245]: 2025-12-06 08:34:10.921312343 +0000 UTC m=+0.937757750 container died b1e4ede92c84bb6a11e2cb4cb809200ad616176e386a29f415984b92468636c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_black, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:34:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-895154f0ad95e93bd9748053c93b7d52a4b85b7d055bfad9c4696bedda7dc30c-merged.mount: Deactivated successfully.
Dec 06 08:34:10 compute-0 podman[432245]: 2025-12-06 08:34:10.981554309 +0000 UTC m=+0.997999666 container remove b1e4ede92c84bb6a11e2cb4cb809200ad616176e386a29f415984b92468636c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:34:10 compute-0 systemd[1]: libpod-conmon-b1e4ede92c84bb6a11e2cb4cb809200ad616176e386a29f415984b92468636c3.scope: Deactivated successfully.
Dec 06 08:34:11 compute-0 sudo[432138]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:11 compute-0 sudo[432334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:11 compute-0 sudo[432334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:11 compute-0 sudo[432334]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:11 compute-0 sudo[432359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:34:11 compute-0 sudo[432359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:11 compute-0 sudo[432359]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:11 compute-0 sudo[432384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:11 compute-0 sudo[432384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:11 compute-0 sudo[432384]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:11 compute-0 sudo[432409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:34:11 compute-0 sudo[432409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:11.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:11 compute-0 ceph-mon[74339]: pgmap v4267: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:11 compute-0 nova_compute[251992]: 2025-12-06 08:34:11.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:11 compute-0 podman[432472]: 2025-12-06 08:34:11.677656746 +0000 UTC m=+0.068042647 container create 8fa53d7e94a672741a7f438874e1b03727510b2a3e935eecc001e37a81faaaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 08:34:11 compute-0 nova_compute[251992]: 2025-12-06 08:34:11.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:34:11 compute-0 nova_compute[251992]: 2025-12-06 08:34:11.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:34:11 compute-0 nova_compute[251992]: 2025-12-06 08:34:11.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:34:11 compute-0 nova_compute[251992]: 2025-12-06 08:34:11.683 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:34:11 compute-0 nova_compute[251992]: 2025-12-06 08:34:11.684 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:34:11 compute-0 systemd[1]: Started libpod-conmon-8fa53d7e94a672741a7f438874e1b03727510b2a3e935eecc001e37a81faaaca.scope.
Dec 06 08:34:11 compute-0 podman[432472]: 2025-12-06 08:34:11.651150871 +0000 UTC m=+0.041536832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:34:11 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:34:11 compute-0 podman[432472]: 2025-12-06 08:34:11.77265478 +0000 UTC m=+0.163040681 container init 8fa53d7e94a672741a7f438874e1b03727510b2a3e935eecc001e37a81faaaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:34:11 compute-0 podman[432472]: 2025-12-06 08:34:11.780600695 +0000 UTC m=+0.170986566 container start 8fa53d7e94a672741a7f438874e1b03727510b2a3e935eecc001e37a81faaaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_fermat, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:34:11 compute-0 podman[432472]: 2025-12-06 08:34:11.784424499 +0000 UTC m=+0.174810390 container attach 8fa53d7e94a672741a7f438874e1b03727510b2a3e935eecc001e37a81faaaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_fermat, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 08:34:11 compute-0 youthful_fermat[432489]: 167 167
Dec 06 08:34:11 compute-0 systemd[1]: libpod-8fa53d7e94a672741a7f438874e1b03727510b2a3e935eecc001e37a81faaaca.scope: Deactivated successfully.
Dec 06 08:34:11 compute-0 conmon[432489]: conmon 8fa53d7e94a672741a7f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8fa53d7e94a672741a7f438874e1b03727510b2a3e935eecc001e37a81faaaca.scope/container/memory.events
Dec 06 08:34:11 compute-0 podman[432472]: 2025-12-06 08:34:11.790174104 +0000 UTC m=+0.180559995 container died 8fa53d7e94a672741a7f438874e1b03727510b2a3e935eecc001e37a81faaaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2be609c83f3b46a13c937d8c2baa6773b5ef3f4bc148cfe2c341595637b124d0-merged.mount: Deactivated successfully.
Dec 06 08:34:11 compute-0 podman[432472]: 2025-12-06 08:34:11.832449815 +0000 UTC m=+0.222835716 container remove 8fa53d7e94a672741a7f438874e1b03727510b2a3e935eecc001e37a81faaaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_fermat, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:34:11 compute-0 systemd[1]: libpod-conmon-8fa53d7e94a672741a7f438874e1b03727510b2a3e935eecc001e37a81faaaca.scope: Deactivated successfully.
Dec 06 08:34:12 compute-0 podman[432531]: 2025-12-06 08:34:12.005895206 +0000 UTC m=+0.060131374 container create c60306821bc9781b6de1e88c3dde5493ed27720048811cceab1ec5be429ff490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:34:12 compute-0 systemd[1]: Started libpod-conmon-c60306821bc9781b6de1e88c3dde5493ed27720048811cceab1ec5be429ff490.scope.
Dec 06 08:34:12 compute-0 podman[432531]: 2025-12-06 08:34:11.97861959 +0000 UTC m=+0.032855838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:34:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f233ca779127d591c70c31925b8aeed9402c91306429209cb2cd9bedf963bb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f233ca779127d591c70c31925b8aeed9402c91306429209cb2cd9bedf963bb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f233ca779127d591c70c31925b8aeed9402c91306429209cb2cd9bedf963bb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f233ca779127d591c70c31925b8aeed9402c91306429209cb2cd9bedf963bb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:12 compute-0 podman[432531]: 2025-12-06 08:34:12.113767747 +0000 UTC m=+0.168003975 container init c60306821bc9781b6de1e88c3dde5493ed27720048811cceab1ec5be429ff490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_merkle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 08:34:12 compute-0 podman[432531]: 2025-12-06 08:34:12.129724318 +0000 UTC m=+0.183960486 container start c60306821bc9781b6de1e88c3dde5493ed27720048811cceab1ec5be429ff490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_merkle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 08:34:12 compute-0 podman[432531]: 2025-12-06 08:34:12.133461469 +0000 UTC m=+0.187697687 container attach c60306821bc9781b6de1e88c3dde5493ed27720048811cceab1ec5be429ff490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_merkle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 08:34:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:34:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2055977561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:12 compute-0 nova_compute[251992]: 2025-12-06 08:34:12.169 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:34:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4268: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:12 compute-0 nova_compute[251992]: 2025-12-06 08:34:12.372 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:34:12 compute-0 nova_compute[251992]: 2025-12-06 08:34:12.375 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4006MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:34:12 compute-0 nova_compute[251992]: 2025-12-06 08:34:12.375 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:34:12 compute-0 nova_compute[251992]: 2025-12-06 08:34:12.376 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:34:12 compute-0 nova_compute[251992]: 2025-12-06 08:34:12.506 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:34:12 compute-0 nova_compute[251992]: 2025-12-06 08:34:12.506 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:34:12 compute-0 nova_compute[251992]: 2025-12-06 08:34:12.523 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:34:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:12.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:12 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2055977561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:34:13 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2453581408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:13 compute-0 nova_compute[251992]: 2025-12-06 08:34:13.030 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:34:13 compute-0 nova_compute[251992]: 2025-12-06 08:34:13.037 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:34:13 compute-0 unruffled_merkle[432548]: {
Dec 06 08:34:13 compute-0 unruffled_merkle[432548]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:34:13 compute-0 unruffled_merkle[432548]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:34:13 compute-0 unruffled_merkle[432548]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:34:13 compute-0 unruffled_merkle[432548]:         "osd_id": 0,
Dec 06 08:34:13 compute-0 unruffled_merkle[432548]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:34:13 compute-0 unruffled_merkle[432548]:         "type": "bluestore"
Dec 06 08:34:13 compute-0 unruffled_merkle[432548]:     }
Dec 06 08:34:13 compute-0 unruffled_merkle[432548]: }
Dec 06 08:34:13 compute-0 nova_compute[251992]: 2025-12-06 08:34:13.067 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:34:13 compute-0 nova_compute[251992]: 2025-12-06 08:34:13.069 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:34:13 compute-0 nova_compute[251992]: 2025-12-06 08:34:13.070 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:34:13 compute-0 systemd[1]: libpod-c60306821bc9781b6de1e88c3dde5493ed27720048811cceab1ec5be429ff490.scope: Deactivated successfully.
Dec 06 08:34:13 compute-0 podman[432531]: 2025-12-06 08:34:13.078358151 +0000 UTC m=+1.132594379 container died c60306821bc9781b6de1e88c3dde5493ed27720048811cceab1ec5be429ff490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_merkle, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 06 08:34:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:34:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:34:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:34:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:34:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:34:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:34:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f233ca779127d591c70c31925b8aeed9402c91306429209cb2cd9bedf963bb5-merged.mount: Deactivated successfully.
Dec 06 08:34:13 compute-0 podman[432531]: 2025-12-06 08:34:13.146001247 +0000 UTC m=+1.200237415 container remove c60306821bc9781b6de1e88c3dde5493ed27720048811cceab1ec5be429ff490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 08:34:13 compute-0 systemd[1]: libpod-conmon-c60306821bc9781b6de1e88c3dde5493ed27720048811cceab1ec5be429ff490.scope: Deactivated successfully.
Dec 06 08:34:13 compute-0 sudo[432409]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:34:13 compute-0 podman[432596]: 2025-12-06 08:34:13.215144303 +0000 UTC m=+0.108523420 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 08:34:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:13 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:34:13 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 88e4fdca-6079-4771-815e-f38de1c0e670 does not exist
Dec 06 08:34:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6d74e2fe-d557-4525-9b0d-100003ee1b58 does not exist
Dec 06 08:34:13 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a86c69eb-d451-4304-b362-dd8ec7133f0a does not exist
Dec 06 08:34:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:13.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:13 compute-0 sudo[432632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:13 compute-0 sudo[432632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:13 compute-0 sudo[432632]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:13 compute-0 sudo[432657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:34:13 compute-0 sudo[432657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:13 compute-0 sudo[432657]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:13 compute-0 nova_compute[251992]: 2025-12-06 08:34:13.660 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:14 compute-0 ceph-mon[74339]: pgmap v4268: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2453581408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:14 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:34:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4269: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:14.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:14 compute-0 nova_compute[251992]: 2025-12-06 08:34:14.994 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:15 compute-0 ceph-mon[74339]: pgmap v4269: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:15.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:16 compute-0 nova_compute[251992]: 2025-12-06 08:34:16.071 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:16 compute-0 nova_compute[251992]: 2025-12-06 08:34:16.072 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4270: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:16.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:16 compute-0 nova_compute[251992]: 2025-12-06 08:34:16.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:16 compute-0 nova_compute[251992]: 2025-12-06 08:34:16.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:34:16 compute-0 nova_compute[251992]: 2025-12-06 08:34:16.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:34:16 compute-0 nova_compute[251992]: 2025-12-06 08:34:16.677 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:34:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:17.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:17 compute-0 ceph-mon[74339]: pgmap v4270: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3168349565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/578781008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4271: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:18.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:18 compute-0 nova_compute[251992]: 2025-12-06 08:34:18.663 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:34:18
Dec 06 08:34:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:34:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:34:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'vms', 'volumes', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.mgr']
Dec 06 08:34:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:34:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:19.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:19 compute-0 nova_compute[251992]: 2025-12-06 08:34:19.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:19 compute-0 ceph-mon[74339]: pgmap v4271: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:19 compute-0 nova_compute[251992]: 2025-12-06 08:34:19.996 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4272: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:20 compute-0 podman[432686]: 2025-12-06 08:34:20.432449537 +0000 UTC m=+0.076282801 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:34:20 compute-0 podman[432687]: 2025-12-06 08:34:20.433207037 +0000 UTC m=+0.078627923 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:34:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:20.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:20 compute-0 nova_compute[251992]: 2025-12-06 08:34:20.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:21.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:21 compute-0 ceph-mon[74339]: pgmap v4272: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4273: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:22.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:22 compute-0 nova_compute[251992]: 2025-12-06 08:34:22.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:23.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:23 compute-0 nova_compute[251992]: 2025-12-06 08:34:23.667 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:34:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:34:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4274: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:24.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:24 compute-0 ceph-mon[74339]: pgmap v4273: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:24 compute-0 nova_compute[251992]: 2025-12-06 08:34:24.998 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:25.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:25 compute-0 ceph-mon[74339]: pgmap v4274: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4275: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:26.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:26 compute-0 nova_compute[251992]: 2025-12-06 08:34:26.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:26 compute-0 nova_compute[251992]: 2025-12-06 08:34:26.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:34:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:34:27 compute-0 ceph-mon[74339]: pgmap v4275: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:27.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:34:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:34:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:34:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:34:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:34:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4276: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:28.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:28 compute-0 nova_compute[251992]: 2025-12-06 08:34:28.671 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:29 compute-0 ceph-mon[74339]: pgmap v4276: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:29.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:30 compute-0 nova_compute[251992]: 2025-12-06 08:34:30.000 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4277: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:30.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:30 compute-0 nova_compute[251992]: 2025-12-06 08:34:30.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:30 compute-0 sudo[432732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:30 compute-0 sudo[432732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:30 compute-0 sudo[432732]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:30 compute-0 sudo[432757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:30 compute-0 sudo[432757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:30 compute-0 sudo[432757]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:31.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:31 compute-0 ceph-mon[74339]: pgmap v4277: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4278: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:32.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:33.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:33 compute-0 nova_compute[251992]: 2025-12-06 08:34:33.674 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:33 compute-0 ceph-mon[74339]: pgmap v4278: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4279: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:34.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:34 compute-0 nova_compute[251992]: 2025-12-06 08:34:34.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:34 compute-0 nova_compute[251992]: 2025-12-06 08:34:34.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:34:35 compute-0 nova_compute[251992]: 2025-12-06 08:34:35.002 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:35 compute-0 ceph-mon[74339]: pgmap v4279: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:35.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:35 compute-0 nova_compute[251992]: 2025-12-06 08:34:35.676 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:35 compute-0 nova_compute[251992]: 2025-12-06 08:34:35.676 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:34:35 compute-0 nova_compute[251992]: 2025-12-06 08:34:35.751 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:34:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4280: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:36.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:37 compute-0 ceph-mon[74339]: pgmap v4280: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:37.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4281: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:38.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:38 compute-0 nova_compute[251992]: 2025-12-06 08:34:38.677 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:39.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:39 compute-0 ceph-mon[74339]: pgmap v4281: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:40 compute-0 nova_compute[251992]: 2025-12-06 08:34:40.004 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4282: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:40.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:41.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:41 compute-0 ceph-mon[74339]: pgmap v4282: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4283: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:42.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:42 compute-0 ceph-mon[74339]: pgmap v4283: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:34:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:34:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:34:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:34:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:34:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:34:43 compute-0 podman[432788]: 2025-12-06 08:34:43.421982636 +0000 UTC m=+0.081450909 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 08:34:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:43.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:43 compute-0 nova_compute[251992]: 2025-12-06 08:34:43.730 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4284: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:44.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:45 compute-0 nova_compute[251992]: 2025-12-06 08:34:45.006 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:45.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:46 compute-0 ceph-mon[74339]: pgmap v4284: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4285: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:46.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:47.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4286: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:48 compute-0 ceph-mon[74339]: pgmap v4285: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:48.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:48 compute-0 nova_compute[251992]: 2025-12-06 08:34:48.733 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:49.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:50 compute-0 nova_compute[251992]: 2025-12-06 08:34:50.009 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:50 compute-0 ceph-mon[74339]: pgmap v4286: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4287: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:50.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:51 compute-0 sudo[432818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:51 compute-0 sudo[432818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:51 compute-0 sudo[432818]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:51 compute-0 podman[432843]: 2025-12-06 08:34:51.084927288 +0000 UTC m=+0.065209602 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 08:34:51 compute-0 sudo[432855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:34:51 compute-0 sudo[432855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:34:51 compute-0 sudo[432855]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:51 compute-0 podman[432842]: 2025-12-06 08:34:51.10797887 +0000 UTC m=+0.091367227 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:34:51 compute-0 ceph-mon[74339]: pgmap v4287: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:51.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4288: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:52.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:53.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:53 compute-0 ceph-mon[74339]: pgmap v4288: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:53 compute-0 nova_compute[251992]: 2025-12-06 08:34:53.737 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4289: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:54.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:54 compute-0 ceph-mon[74339]: pgmap v4289: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:55 compute-0 nova_compute[251992]: 2025-12-06 08:34:55.010 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:34:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:55.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:34:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:34:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4290: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:56.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:56 compute-0 nova_compute[251992]: 2025-12-06 08:34:56.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:34:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1331996013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2914100732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:34:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:34:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:57.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:34:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4291: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:34:58.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:34:58 compute-0 nova_compute[251992]: 2025-12-06 08:34:58.740 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:34:59 compute-0 ceph-mon[74339]: pgmap v4290: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:34:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:34:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:34:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:34:59.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:00 compute-0 nova_compute[251992]: 2025-12-06 08:35:00.012 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4292: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:00.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:00 compute-0 ceph-mon[74339]: pgmap v4291: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:01.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:02 compute-0 ceph-mon[74339]: pgmap v4292: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4293: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:02.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:03.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:03 compute-0 nova_compute[251992]: 2025-12-06 08:35:03.747 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:03 compute-0 ceph-mon[74339]: pgmap v4293: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:35:03.911 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:35:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:35:03.912 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:35:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:35:03.912 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:35:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4294: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:04.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:05 compute-0 nova_compute[251992]: 2025-12-06 08:35:05.016 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:05.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4295: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:06.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:07.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:07 compute-0 ceph-mon[74339]: pgmap v4294: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4296: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:08.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:08 compute-0 ceph-mon[74339]: pgmap v4295: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:08 compute-0 nova_compute[251992]: 2025-12-06 08:35:08.817 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:09.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:09 compute-0 nova_compute[251992]: 2025-12-06 08:35:09.734 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:10 compute-0 nova_compute[251992]: 2025-12-06 08:35:10.017 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4297: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:10.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:11 compute-0 sudo[432919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:11 compute-0 sudo[432919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:11 compute-0 sudo[432919]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:11 compute-0 sudo[432944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:11 compute-0 sudo[432944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:11 compute-0 sudo[432944]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:11 compute-0 ceph-mon[74339]: pgmap v4296: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:11.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4298: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:12.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:35:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:35:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:35:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:35:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:35:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:35:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:13.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:13 compute-0 nova_compute[251992]: 2025-12-06 08:35:13.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:13 compute-0 nova_compute[251992]: 2025-12-06 08:35:13.796 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:35:13 compute-0 nova_compute[251992]: 2025-12-06 08:35:13.796 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:35:13 compute-0 nova_compute[251992]: 2025-12-06 08:35:13.796 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:35:13 compute-0 nova_compute[251992]: 2025-12-06 08:35:13.797 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:35:13 compute-0 nova_compute[251992]: 2025-12-06 08:35:13.797 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:35:13 compute-0 nova_compute[251992]: 2025-12-06 08:35:13.840 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:13 compute-0 sudo[432971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:13 compute-0 sudo[432971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:13 compute-0 sudo[432971]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:14 compute-0 sudo[433009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:35:14 compute-0 sudo[433009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:14 compute-0 sudo[433009]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:14 compute-0 podman[432995]: 2025-12-06 08:35:14.046014522 +0000 UTC m=+0.114069040 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 08:35:14 compute-0 sudo[433067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:14 compute-0 sudo[433067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:14 compute-0 sudo[433067]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:14 compute-0 sudo[433094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:35:14 compute-0 sudo[433094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:14 compute-0 nova_compute[251992]: 2025-12-06 08:35:14.183 251996 DEBUG oslo_concurrency.processutils [None req-973963d8-dc59-4fec-ad5a-82e5018c1182 05522e78304c4c4eb4be044936c2fa3e 5ed95c9b17ee4dcb83395850789304e6 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:35:14 compute-0 nova_compute[251992]: 2025-12-06 08:35:14.231 251996 DEBUG oslo_concurrency.processutils [None req-973963d8-dc59-4fec-ad5a-82e5018c1182 05522e78304c4c4eb4be044936c2fa3e 5ed95c9b17ee4dcb83395850789304e6 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:35:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:35:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:35:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295919848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:14 compute-0 nova_compute[251992]: 2025-12-06 08:35:14.327 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:35:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4299: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:14 compute-0 nova_compute[251992]: 2025-12-06 08:35:14.486 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:35:14 compute-0 nova_compute[251992]: 2025-12-06 08:35:14.488 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4038MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:35:14 compute-0 nova_compute[251992]: 2025-12-06 08:35:14.488 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:35:14 compute-0 nova_compute[251992]: 2025-12-06 08:35:14.488 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:35:14 compute-0 sudo[433094]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:14.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 08:35:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:35:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2342233228' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:35:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2342233228' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:35:14 compute-0 ceph-mon[74339]: pgmap v4297: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:14 compute-0 nova_compute[251992]: 2025-12-06 08:35:14.721 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:35:14 compute-0 nova_compute[251992]: 2025-12-06 08:35:14.721 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:35:14 compute-0 nova_compute[251992]: 2025-12-06 08:35:14.762 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:35:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 08:35:14 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:35:15 compute-0 nova_compute[251992]: 2025-12-06 08:35:15.019 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:35:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:35:15 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806479779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:15.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:15 compute-0 nova_compute[251992]: 2025-12-06 08:35:15.538 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.776s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:35:15 compute-0 nova_compute[251992]: 2025-12-06 08:35:15.548 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:35:15 compute-0 nova_compute[251992]: 2025-12-06 08:35:15.670 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:35:15 compute-0 nova_compute[251992]: 2025-12-06 08:35:15.673 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:35:15 compute-0 nova_compute[251992]: 2025-12-06 08:35:15.674 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:35:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4300: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:16 compute-0 ceph-mon[74339]: pgmap v4298: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3295919848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:16 compute-0 ceph-mon[74339]: pgmap v4299: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:35:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:35:16 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2806479779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:16.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:17.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:35:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4301: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:18 compute-0 ceph-mon[74339]: pgmap v4300: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:35:18 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:18.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:18 compute-0 nova_compute[251992]: 2025-12-06 08:35:18.675 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:18 compute-0 nova_compute[251992]: 2025-12-06 08:35:18.676 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:35:18 compute-0 nova_compute[251992]: 2025-12-06 08:35:18.676 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:35:18 compute-0 nova_compute[251992]: 2025-12-06 08:35:18.695 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:35:18 compute-0 nova_compute[251992]: 2025-12-06 08:35:18.696 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:18 compute-0 nova_compute[251992]: 2025-12-06 08:35:18.696 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:35:18
Dec 06 08:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'images', '.mgr', 'volumes', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms']
Dec 06 08:35:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:35:18 compute-0 nova_compute[251992]: 2025-12-06 08:35:18.842 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:19.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:20 compute-0 nova_compute[251992]: 2025-12-06 08:35:20.079 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3215890770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:20 compute-0 ceph-mon[74339]: pgmap v4301: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:20 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/766304778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4302: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:20.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:20 compute-0 nova_compute[251992]: 2025-12-06 08:35:20.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:20 compute-0 nova_compute[251992]: 2025-12-06 08:35:20.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:21 compute-0 podman[433178]: 2025-12-06 08:35:21.397211778 +0000 UTC m=+0.053694230 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:35:21 compute-0 podman[433179]: 2025-12-06 08:35:21.406769426 +0000 UTC m=+0.059835666 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 08:35:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:35:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:21.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:21 compute-0 nova_compute[251992]: 2025-12-06 08:35:21.581 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:35:21.581 158118 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=110, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:ec:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '32:72:e7:89:e0:7d'}, ipsec=False) old=SB_Global(nb_cfg=109) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 08:35:21 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:35:21.582 158118 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 08:35:21 compute-0 ceph-mon[74339]: pgmap v4302: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:35:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4303: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:35:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Cumulative writes: 19K writes, 88K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s
                                           Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.13 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1518 writes, 6600 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 10.63 MB, 0.02 MB/s
                                           Interval WAL: 1518 writes, 1516 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     47.4      2.49              0.39        62    0.040       0      0       0.0       0.0
                                             L6      1/0   11.85 MB   0.0      0.8     0.1      0.6       0.7      0.0       0.0   5.7    109.8     94.7      7.06              2.22        61    0.116    533K    32K       0.0       0.0
                                            Sum      1/0   11.85 MB   0.0      0.8     0.1      0.6       0.8      0.1       0.0   6.7     81.1     82.4      9.55              2.61       123    0.078    533K    32K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7    106.9    108.8      0.58              0.23         8    0.073     50K   2051       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.8     0.1      0.6       0.7      0.0       0.0   0.0    109.8     94.7      7.06              2.22        61    0.116    533K    32K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     47.5      2.49              0.39        61    0.041       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.115, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.77 GB write, 0.10 MB/s write, 0.76 GB read, 0.10 MB/s read, 9.6 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 84.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000893 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5262,80.72 MB,26.5542%) FilterBlock(124,1.43 MB,0.470849%) IndexBlock(124,2.29 MB,0.753844%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 08:35:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:35:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:35:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:35:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:35:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:35:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:22 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8ae50ee5-a6b5-4f31-9cd7-a5a968d06ede does not exist
Dec 06 08:35:22 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 679c761c-a4da-438f-831e-7ea9540dd393 does not exist
Dec 06 08:35:22 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d07590f3-9c84-466d-9795-de9bed2a1703 does not exist
Dec 06 08:35:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:35:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:35:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:35:22 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:35:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:35:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:35:22 compute-0 sudo[433219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:22 compute-0 sudo[433219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:22 compute-0 sudo[433219]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:22.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:22 compute-0 sudo[433244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:35:22 compute-0 sudo[433244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:22 compute-0 sudo[433244]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:22 compute-0 sudo[433269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:22 compute-0 sudo[433269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:22 compute-0 sudo[433269]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:22 compute-0 sudo[433294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:35:22 compute-0 sudo[433294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:23 compute-0 podman[433359]: 2025-12-06 08:35:23.101399953 +0000 UTC m=+0.038536661 container create e2fade2c4d24eeaed41875f8597b86af18214f8bf5cacd1db863e6f5d9d4910c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mestorf, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:35:23 compute-0 systemd[1]: Started libpod-conmon-e2fade2c4d24eeaed41875f8597b86af18214f8bf5cacd1db863e6f5d9d4910c.scope.
Dec 06 08:35:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:35:23 compute-0 podman[433359]: 2025-12-06 08:35:23.084448515 +0000 UTC m=+0.021585253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:35:23 compute-0 podman[433359]: 2025-12-06 08:35:23.316059257 +0000 UTC m=+0.253196015 container init e2fade2c4d24eeaed41875f8597b86af18214f8bf5cacd1db863e6f5d9d4910c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mestorf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:35:23 compute-0 podman[433359]: 2025-12-06 08:35:23.324969217 +0000 UTC m=+0.262105925 container start e2fade2c4d24eeaed41875f8597b86af18214f8bf5cacd1db863e6f5d9d4910c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mestorf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 08:35:23 compute-0 podman[433359]: 2025-12-06 08:35:23.328547644 +0000 UTC m=+0.265684422 container attach e2fade2c4d24eeaed41875f8597b86af18214f8bf5cacd1db863e6f5d9d4910c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mestorf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:35:23 compute-0 modest_mestorf[433375]: 167 167
Dec 06 08:35:23 compute-0 systemd[1]: libpod-e2fade2c4d24eeaed41875f8597b86af18214f8bf5cacd1db863e6f5d9d4910c.scope: Deactivated successfully.
Dec 06 08:35:23 compute-0 podman[433359]: 2025-12-06 08:35:23.334186686 +0000 UTC m=+0.271323414 container died e2fade2c4d24eeaed41875f8597b86af18214f8bf5cacd1db863e6f5d9d4910c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:35:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:23.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c23969796975a6f03bde6270b786fb9e1cb37795fb0cf1a3d2938531fd4b837f-merged.mount: Deactivated successfully.
Dec 06 08:35:23 compute-0 nova_compute[251992]: 2025-12-06 08:35:23.844 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:35:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:35:23 compute-0 podman[433359]: 2025-12-06 08:35:23.927040668 +0000 UTC m=+0.864177386 container remove e2fade2c4d24eeaed41875f8597b86af18214f8bf5cacd1db863e6f5d9d4910c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mestorf, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:35:24 compute-0 systemd[1]: libpod-conmon-e2fade2c4d24eeaed41875f8597b86af18214f8bf5cacd1db863e6f5d9d4910c.scope: Deactivated successfully.
Dec 06 08:35:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:24 compute-0 ceph-mon[74339]: pgmap v4303: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:35:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:35:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:35:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:35:24 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:35:24 compute-0 podman[433399]: 2025-12-06 08:35:24.084836616 +0000 UTC m=+0.043991088 container create d4bcd0cf07d11324d11515c8fd1228849ecda134fd9284257f1cee640e9b3127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jones, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 08:35:24 compute-0 systemd[1]: Started libpod-conmon-d4bcd0cf07d11324d11515c8fd1228849ecda134fd9284257f1cee640e9b3127.scope.
Dec 06 08:35:24 compute-0 podman[433399]: 2025-12-06 08:35:24.066299686 +0000 UTC m=+0.025454078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:35:24 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9deb34c2851237877b936c0fcd515f947a5de16469b9d102b889a4183afd57a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9deb34c2851237877b936c0fcd515f947a5de16469b9d102b889a4183afd57a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9deb34c2851237877b936c0fcd515f947a5de16469b9d102b889a4183afd57a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9deb34c2851237877b936c0fcd515f947a5de16469b9d102b889a4183afd57a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9deb34c2851237877b936c0fcd515f947a5de16469b9d102b889a4183afd57a5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:24 compute-0 podman[433399]: 2025-12-06 08:35:24.225602756 +0000 UTC m=+0.184757158 container init d4bcd0cf07d11324d11515c8fd1228849ecda134fd9284257f1cee640e9b3127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 08:35:24 compute-0 podman[433399]: 2025-12-06 08:35:24.232601705 +0000 UTC m=+0.191756067 container start d4bcd0cf07d11324d11515c8fd1228849ecda134fd9284257f1cee640e9b3127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:35:24 compute-0 podman[433399]: 2025-12-06 08:35:24.235817501 +0000 UTC m=+0.194971883 container attach d4bcd0cf07d11324d11515c8fd1228849ecda134fd9284257f1cee640e9b3127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jones, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:35:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4304: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:24.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:25 compute-0 upbeat_jones[433416]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:35:25 compute-0 upbeat_jones[433416]: --> relative data size: 1.0
Dec 06 08:35:25 compute-0 upbeat_jones[433416]: --> All data devices are unavailable
Dec 06 08:35:25 compute-0 systemd[1]: libpod-d4bcd0cf07d11324d11515c8fd1228849ecda134fd9284257f1cee640e9b3127.scope: Deactivated successfully.
Dec 06 08:35:25 compute-0 podman[433399]: 2025-12-06 08:35:25.057363805 +0000 UTC m=+1.016518217 container died d4bcd0cf07d11324d11515c8fd1228849ecda134fd9284257f1cee640e9b3127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 08:35:25 compute-0 nova_compute[251992]: 2025-12-06 08:35:25.081 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9deb34c2851237877b936c0fcd515f947a5de16469b9d102b889a4183afd57a5-merged.mount: Deactivated successfully.
Dec 06 08:35:25 compute-0 podman[433399]: 2025-12-06 08:35:25.130296904 +0000 UTC m=+1.089451256 container remove d4bcd0cf07d11324d11515c8fd1228849ecda134fd9284257f1cee640e9b3127 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 08:35:25 compute-0 systemd[1]: libpod-conmon-d4bcd0cf07d11324d11515c8fd1228849ecda134fd9284257f1cee640e9b3127.scope: Deactivated successfully.
Dec 06 08:35:25 compute-0 sudo[433294]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:25 compute-0 ceph-mon[74339]: pgmap v4304: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:25 compute-0 sudo[433442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:25 compute-0 sudo[433442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:25 compute-0 sudo[433442]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:25 compute-0 sudo[433467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:35:25 compute-0 sudo[433467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:25 compute-0 sudo[433467]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:25 compute-0 sudo[433492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:25 compute-0 sudo[433492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:25 compute-0 sudo[433492]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:25 compute-0 sudo[433517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:35:25 compute-0 sudo[433517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:25.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:25 compute-0 podman[433581]: 2025-12-06 08:35:25.776469913 +0000 UTC m=+0.033555087 container create ccf40fb9aa6d0a8ed49ffef9335c17de4e6a6b8dc46f6e5d552e555549c42ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Dec 06 08:35:25 compute-0 systemd[1]: Started libpod-conmon-ccf40fb9aa6d0a8ed49ffef9335c17de4e6a6b8dc46f6e5d552e555549c42ccb.scope.
Dec 06 08:35:25 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:35:25 compute-0 podman[433581]: 2025-12-06 08:35:25.829884235 +0000 UTC m=+0.086969429 container init ccf40fb9aa6d0a8ed49ffef9335c17de4e6a6b8dc46f6e5d552e555549c42ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 08:35:25 compute-0 podman[433581]: 2025-12-06 08:35:25.837016978 +0000 UTC m=+0.094102152 container start ccf40fb9aa6d0a8ed49ffef9335c17de4e6a6b8dc46f6e5d552e555549c42ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:35:25 compute-0 podman[433581]: 2025-12-06 08:35:25.840139242 +0000 UTC m=+0.097224416 container attach ccf40fb9aa6d0a8ed49ffef9335c17de4e6a6b8dc46f6e5d552e555549c42ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 08:35:25 compute-0 elastic_mendel[433598]: 167 167
Dec 06 08:35:25 compute-0 systemd[1]: libpod-ccf40fb9aa6d0a8ed49ffef9335c17de4e6a6b8dc46f6e5d552e555549c42ccb.scope: Deactivated successfully.
Dec 06 08:35:25 compute-0 podman[433581]: 2025-12-06 08:35:25.843872482 +0000 UTC m=+0.100957646 container died ccf40fb9aa6d0a8ed49ffef9335c17de4e6a6b8dc46f6e5d552e555549c42ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:35:25 compute-0 podman[433581]: 2025-12-06 08:35:25.761298464 +0000 UTC m=+0.018383658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-182d56893305329b416a6d1b35ab738b5dbad447d7648a9b7b820018ddca0456-merged.mount: Deactivated successfully.
Dec 06 08:35:25 compute-0 podman[433581]: 2025-12-06 08:35:25.874834938 +0000 UTC m=+0.131920112 container remove ccf40fb9aa6d0a8ed49ffef9335c17de4e6a6b8dc46f6e5d552e555549c42ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mendel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 06 08:35:25 compute-0 systemd[1]: libpod-conmon-ccf40fb9aa6d0a8ed49ffef9335c17de4e6a6b8dc46f6e5d552e555549c42ccb.scope: Deactivated successfully.
Dec 06 08:35:26 compute-0 podman[433622]: 2025-12-06 08:35:26.018735052 +0000 UTC m=+0.036681631 container create 76be6f948b3b42f04ec8c1f83ccdcda529f2f72f0164ae413437ecd6411f884c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rosalind, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:35:26 compute-0 systemd[1]: Started libpod-conmon-76be6f948b3b42f04ec8c1f83ccdcda529f2f72f0164ae413437ecd6411f884c.scope.
Dec 06 08:35:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5080387024a12281155a4f34deae21e4f9117bd9b37438d6b61a2d7d764adf78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5080387024a12281155a4f34deae21e4f9117bd9b37438d6b61a2d7d764adf78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5080387024a12281155a4f34deae21e4f9117bd9b37438d6b61a2d7d764adf78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5080387024a12281155a4f34deae21e4f9117bd9b37438d6b61a2d7d764adf78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:26 compute-0 podman[433622]: 2025-12-06 08:35:26.004295993 +0000 UTC m=+0.022242602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:35:26 compute-0 podman[433622]: 2025-12-06 08:35:26.199536082 +0000 UTC m=+0.217482751 container init 76be6f948b3b42f04ec8c1f83ccdcda529f2f72f0164ae413437ecd6411f884c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:35:26 compute-0 podman[433622]: 2025-12-06 08:35:26.208877224 +0000 UTC m=+0.226823803 container start 76be6f948b3b42f04ec8c1f83ccdcda529f2f72f0164ae413437ecd6411f884c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rosalind, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:35:26 compute-0 podman[433622]: 2025-12-06 08:35:26.212343818 +0000 UTC m=+0.230290487 container attach 76be6f948b3b42f04ec8c1f83ccdcda529f2f72f0164ae413437ecd6411f884c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec 06 08:35:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4305: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:26.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:35:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]: {
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:     "0": [
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:         {
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             "devices": [
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "/dev/loop3"
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             ],
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             "lv_name": "ceph_lv0",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             "lv_size": "7511998464",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             "name": "ceph_lv0",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             "tags": {
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "ceph.cluster_name": "ceph",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "ceph.crush_device_class": "",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "ceph.encrypted": "0",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "ceph.osd_id": "0",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "ceph.type": "block",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:                 "ceph.vdo": "0"
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             },
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             "type": "block",
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:             "vg_name": "ceph_vg0"
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:         }
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]:     ]
Dec 06 08:35:26 compute-0 xenodochial_rosalind[433638]: }
Dec 06 08:35:26 compute-0 systemd[1]: libpod-76be6f948b3b42f04ec8c1f83ccdcda529f2f72f0164ae413437ecd6411f884c.scope: Deactivated successfully.
Dec 06 08:35:26 compute-0 podman[433622]: 2025-12-06 08:35:26.998501075 +0000 UTC m=+1.016447654 container died 76be6f948b3b42f04ec8c1f83ccdcda529f2f72f0164ae413437ecd6411f884c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rosalind, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5080387024a12281155a4f34deae21e4f9117bd9b37438d6b61a2d7d764adf78-merged.mount: Deactivated successfully.
Dec 06 08:35:27 compute-0 podman[433622]: 2025-12-06 08:35:27.180750095 +0000 UTC m=+1.198696674 container remove 76be6f948b3b42f04ec8c1f83ccdcda529f2f72f0164ae413437ecd6411f884c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rosalind, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 08:35:27 compute-0 systemd[1]: libpod-conmon-76be6f948b3b42f04ec8c1f83ccdcda529f2f72f0164ae413437ecd6411f884c.scope: Deactivated successfully.
Dec 06 08:35:27 compute-0 sudo[433517]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:27 compute-0 sudo[433662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:27 compute-0 sudo[433662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:27 compute-0 sudo[433662]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:27 compute-0 sudo[433687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:35:27 compute-0 sudo[433687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:27 compute-0 sudo[433687]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:27 compute-0 sudo[433712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:27 compute-0 sudo[433712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:27 compute-0 sudo[433712]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:27 compute-0 sudo[433737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:35:27 compute-0 sudo[433737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:35:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:35:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:35:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:35:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:35:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:27.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:27 compute-0 ceph-mon[74339]: pgmap v4305: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:27 compute-0 podman[433801]: 2025-12-06 08:35:27.792574657 +0000 UTC m=+0.036213568 container create 0464bee93c5ed195787b6fae0c9fe7b4e05a2824524cd2071185751e738c79a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_burnell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:35:27 compute-0 systemd[1]: Started libpod-conmon-0464bee93c5ed195787b6fae0c9fe7b4e05a2824524cd2071185751e738c79a1.scope.
Dec 06 08:35:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:35:27 compute-0 podman[433801]: 2025-12-06 08:35:27.864675774 +0000 UTC m=+0.108314685 container init 0464bee93c5ed195787b6fae0c9fe7b4e05a2824524cd2071185751e738c79a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:35:27 compute-0 podman[433801]: 2025-12-06 08:35:27.870972993 +0000 UTC m=+0.114611904 container start 0464bee93c5ed195787b6fae0c9fe7b4e05a2824524cd2071185751e738c79a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:35:27 compute-0 podman[433801]: 2025-12-06 08:35:27.874086617 +0000 UTC m=+0.117725558 container attach 0464bee93c5ed195787b6fae0c9fe7b4e05a2824524cd2071185751e738c79a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 08:35:27 compute-0 keen_burnell[433818]: 167 167
Dec 06 08:35:27 compute-0 podman[433801]: 2025-12-06 08:35:27.779288479 +0000 UTC m=+0.022927410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:35:27 compute-0 systemd[1]: libpod-0464bee93c5ed195787b6fae0c9fe7b4e05a2824524cd2071185751e738c79a1.scope: Deactivated successfully.
Dec 06 08:35:27 compute-0 podman[433801]: 2025-12-06 08:35:27.876088172 +0000 UTC m=+0.119727083 container died 0464bee93c5ed195787b6fae0c9fe7b4e05a2824524cd2071185751e738c79a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_burnell, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-12c90957e30456cc397f8ba4209ca6b4c6c7687e7d22449cf954865026d28d92-merged.mount: Deactivated successfully.
Dec 06 08:35:27 compute-0 podman[433801]: 2025-12-06 08:35:27.912508815 +0000 UTC m=+0.156147726 container remove 0464bee93c5ed195787b6fae0c9fe7b4e05a2824524cd2071185751e738c79a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:35:27 compute-0 systemd[1]: libpod-conmon-0464bee93c5ed195787b6fae0c9fe7b4e05a2824524cd2071185751e738c79a1.scope: Deactivated successfully.
Dec 06 08:35:28 compute-0 podman[433842]: 2025-12-06 08:35:28.073926881 +0000 UTC m=+0.044449841 container create 90bfbdcbcf4c34eb96fa889b43109e8256b4bea94dd1a156e2c6a5205a25d6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_joliot, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:35:28 compute-0 systemd[1]: Started libpod-conmon-90bfbdcbcf4c34eb96fa889b43109e8256b4bea94dd1a156e2c6a5205a25d6f3.scope.
Dec 06 08:35:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad6243061bf6c9009d16ee55765e4fe3f97bf32248a9b00cee457e7cfaddb1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad6243061bf6c9009d16ee55765e4fe3f97bf32248a9b00cee457e7cfaddb1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad6243061bf6c9009d16ee55765e4fe3f97bf32248a9b00cee457e7cfaddb1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad6243061bf6c9009d16ee55765e4fe3f97bf32248a9b00cee457e7cfaddb1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:35:28 compute-0 podman[433842]: 2025-12-06 08:35:28.05055072 +0000 UTC m=+0.021073660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:35:28 compute-0 podman[433842]: 2025-12-06 08:35:28.162921363 +0000 UTC m=+0.133444303 container init 90bfbdcbcf4c34eb96fa889b43109e8256b4bea94dd1a156e2c6a5205a25d6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:35:28 compute-0 podman[433842]: 2025-12-06 08:35:28.170782825 +0000 UTC m=+0.141305745 container start 90bfbdcbcf4c34eb96fa889b43109e8256b4bea94dd1a156e2c6a5205a25d6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:35:28 compute-0 podman[433842]: 2025-12-06 08:35:28.174422033 +0000 UTC m=+0.144944973 container attach 90bfbdcbcf4c34eb96fa889b43109e8256b4bea94dd1a156e2c6a5205a25d6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:35:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4306: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:28 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:35:28.585 158118 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=feab6d5f-1b29-488a-ae05-1d4fd579aca4, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '110'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 08:35:28 compute-0 nova_compute[251992]: 2025-12-06 08:35:28.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:28 compute-0 nova_compute[251992]: 2025-12-06 08:35:28.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:35:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:28.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:28 compute-0 nova_compute[251992]: 2025-12-06 08:35:28.846 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:28 compute-0 intelligent_joliot[433859]: {
Dec 06 08:35:28 compute-0 intelligent_joliot[433859]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:35:28 compute-0 intelligent_joliot[433859]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:35:28 compute-0 intelligent_joliot[433859]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:35:28 compute-0 intelligent_joliot[433859]:         "osd_id": 0,
Dec 06 08:35:28 compute-0 intelligent_joliot[433859]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:35:28 compute-0 intelligent_joliot[433859]:         "type": "bluestore"
Dec 06 08:35:28 compute-0 intelligent_joliot[433859]:     }
Dec 06 08:35:28 compute-0 intelligent_joliot[433859]: }
Dec 06 08:35:29 compute-0 ceph-mon[74339]: pgmap v4306: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:29 compute-0 systemd[1]: libpod-90bfbdcbcf4c34eb96fa889b43109e8256b4bea94dd1a156e2c6a5205a25d6f3.scope: Deactivated successfully.
Dec 06 08:35:29 compute-0 podman[433842]: 2025-12-06 08:35:29.011850435 +0000 UTC m=+0.982373365 container died 90bfbdcbcf4c34eb96fa889b43109e8256b4bea94dd1a156e2c6a5205a25d6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_joliot, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ad6243061bf6c9009d16ee55765e4fe3f97bf32248a9b00cee457e7cfaddb1b-merged.mount: Deactivated successfully.
Dec 06 08:35:29 compute-0 podman[433842]: 2025-12-06 08:35:29.063999123 +0000 UTC m=+1.034522043 container remove 90bfbdcbcf4c34eb96fa889b43109e8256b4bea94dd1a156e2c6a5205a25d6f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_joliot, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:35:29 compute-0 systemd[1]: libpod-conmon-90bfbdcbcf4c34eb96fa889b43109e8256b4bea94dd1a156e2c6a5205a25d6f3.scope: Deactivated successfully.
Dec 06 08:35:29 compute-0 sudo[433737]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:35:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:29 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:35:29 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0eabc04b-baa4-481e-8f44-f63edd36a111 does not exist
Dec 06 08:35:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d014a61e-080f-4ff2-b099-978ee0d0bf99 does not exist
Dec 06 08:35:29 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f18958f1-fa7e-42fb-aded-c5a7c551247f does not exist
Dec 06 08:35:29 compute-0 sudo[433893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:29 compute-0 sudo[433893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:29 compute-0 sudo[433893]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:29 compute-0 sudo[433918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:35:29 compute-0 sudo[433918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:29 compute-0 sudo[433918]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:29.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:30 compute-0 nova_compute[251992]: 2025-12-06 08:35:30.083 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4307: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:30 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:35:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:30.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:30 compute-0 nova_compute[251992]: 2025-12-06 08:35:30.789 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:31 compute-0 sudo[433944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:31 compute-0 sudo[433944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:31 compute-0 sudo[433944]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:31 compute-0 sudo[433969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:31 compute-0 sudo[433969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:31 compute-0 sudo[433969]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:31.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4308: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:32 compute-0 ceph-mon[74339]: pgmap v4307: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:32.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:32 compute-0 nova_compute[251992]: 2025-12-06 08:35:32.676 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:35:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:33.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:33 compute-0 nova_compute[251992]: 2025-12-06 08:35:33.848 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:34 compute-0 ceph-mon[74339]: pgmap v4308: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4309: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:34.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:35 compute-0 ceph-mon[74339]: pgmap v4309: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:35 compute-0 nova_compute[251992]: 2025-12-06 08:35:35.086 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:35.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4310: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:36.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:37 compute-0 ceph-mon[74339]: pgmap v4310: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:37.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4311: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:38.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:38 compute-0 nova_compute[251992]: 2025-12-06 08:35:38.850 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:39.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:39 compute-0 ceph-mon[74339]: pgmap v4311: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:40 compute-0 nova_compute[251992]: 2025-12-06 08:35:40.088 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4312: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:40.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:41.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:41 compute-0 ceph-mon[74339]: pgmap v4312: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4313: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:42.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:35:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:35:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:35:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:35:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:35:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:35:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:43.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:43 compute-0 ceph-mon[74339]: pgmap v4313: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:43 compute-0 nova_compute[251992]: 2025-12-06 08:35:43.851 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4314: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:44 compute-0 podman[434001]: 2025-12-06 08:35:44.463931063 +0000 UTC m=+0.115922509 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:35:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:44.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:45 compute-0 nova_compute[251992]: 2025-12-06 08:35:45.090 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:45.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:46 compute-0 ceph-mon[74339]: pgmap v4314: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4315: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:46.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:47 compute-0 ceph-mon[74339]: pgmap v4315: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:35:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:47.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:35:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4316: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:48.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:48 compute-0 nova_compute[251992]: 2025-12-06 08:35:48.854 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:49.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:49 compute-0 ceph-mon[74339]: pgmap v4316: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:50 compute-0 nova_compute[251992]: 2025-12-06 08:35:50.092 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4317: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:50.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:51 compute-0 sudo[434031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:51 compute-0 sudo[434031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:51 compute-0 sudo[434031]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:51 compute-0 sudo[434068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:35:51 compute-0 sudo[434068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:35:51 compute-0 sudo[434068]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:51 compute-0 podman[434056]: 2025-12-06 08:35:51.546987443 +0000 UTC m=+0.069425794 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 08:35:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:51.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:51 compute-0 podman[434055]: 2025-12-06 08:35:51.564056064 +0000 UTC m=+0.088142020 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:35:51 compute-0 ceph-mon[74339]: pgmap v4317: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4318: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:52.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:53.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:53 compute-0 ceph-mon[74339]: pgmap v4318: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:53 compute-0 nova_compute[251992]: 2025-12-06 08:35:53.856 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4319: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:54.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/490789410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:55 compute-0 nova_compute[251992]: 2025-12-06 08:35:55.135 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:55.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:55 compute-0 ceph-mon[74339]: pgmap v4319: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/436894287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:35:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:35:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4320: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:56.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:56 compute-0 ceph-mon[74339]: pgmap v4320: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:35:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:57.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:35:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4321: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:35:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:35:58.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:58 compute-0 nova_compute[251992]: 2025-12-06 08:35:58.858 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:35:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:35:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:35:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:35:59.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:35:59 compute-0 ceph-mon[74339]: pgmap v4321: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:00 compute-0 nova_compute[251992]: 2025-12-06 08:36:00.136 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4322: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:00.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:00 compute-0 ceph-mon[74339]: pgmap v4322: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:01.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4323: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:02.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:03.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:03 compute-0 ceph-mon[74339]: pgmap v4323: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:03 compute-0 nova_compute[251992]: 2025-12-06 08:36:03.896 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:36:03.912 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:36:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:36:03.912 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:36:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:36:03.912 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:36:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4324: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:04.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:04 compute-0 ceph-mon[74339]: pgmap v4324: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:05 compute-0 nova_compute[251992]: 2025-12-06 08:36:05.200 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:05.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4325: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:06.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:07 compute-0 ceph-mon[74339]: pgmap v4325: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:07.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4326: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:08.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:08 compute-0 nova_compute[251992]: 2025-12-06 08:36:08.899 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:36:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4294410134' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:36:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:36:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4294410134' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:36:09 compute-0 ceph-mon[74339]: pgmap v4326: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4294410134' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:36:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4294410134' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:36:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:09.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:10 compute-0 nova_compute[251992]: 2025-12-06 08:36:10.201 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4327: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:10 compute-0 nova_compute[251992]: 2025-12-06 08:36:10.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:10.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:11.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:11 compute-0 sudo[434128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:11 compute-0 sudo[434128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:11 compute-0 sudo[434128]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:11 compute-0 sudo[434153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:11 compute-0 sudo[434153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:11 compute-0 sudo[434153]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:11 compute-0 ceph-mon[74339]: pgmap v4327: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4328: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:12.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:36:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:36:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:36:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:36:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:36:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.232266) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173232360, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 1624, "num_deletes": 251, "total_data_size": 2967976, "memory_usage": 3003008, "flush_reason": "Manual Compaction"}
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Dec 06 08:36:13 compute-0 ceph-mon[74339]: pgmap v4328: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173398577, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 2901434, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87970, "largest_seqno": 89592, "table_properties": {"data_size": 2893790, "index_size": 4586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15560, "raw_average_key_size": 20, "raw_value_size": 2878691, "raw_average_value_size": 3738, "num_data_blocks": 201, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010006, "oldest_key_time": 1765010006, "file_creation_time": 1765010173, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 166374 microseconds, and 10145 cpu microseconds.
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.398646) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 2901434 bytes OK
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.398674) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.539472) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.539513) EVENT_LOG_v1 {"time_micros": 1765010173539499, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.539542) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 2961169, prev total WAL file size 2961169, number of live WAL files 2.
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.541062) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(2833KB)], [200(11MB)]
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173541230, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 15332190, "oldest_snapshot_seqno": -1}
Dec 06 08:36:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:13.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 12415 keys, 13371953 bytes, temperature: kUnknown
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173671290, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 13371953, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13295714, "index_size": 44126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31045, "raw_key_size": 329707, "raw_average_key_size": 26, "raw_value_size": 13082674, "raw_average_value_size": 1053, "num_data_blocks": 1660, "num_entries": 12415, "num_filter_entries": 12415, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765010173, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.671632) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 13371953 bytes
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.673527) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.8 rd, 102.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 11.9 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(9.9) write-amplify(4.6) OK, records in: 12934, records dropped: 519 output_compression: NoCompression
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.673549) EVENT_LOG_v1 {"time_micros": 1765010173673539, "job": 126, "event": "compaction_finished", "compaction_time_micros": 130179, "compaction_time_cpu_micros": 77683, "output_level": 6, "num_output_files": 1, "total_output_size": 13371953, "num_input_records": 12934, "num_output_records": 12415, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173674343, "job": 126, "event": "table_file_deletion", "file_number": 202}
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010173677519, "job": 126, "event": "table_file_deletion", "file_number": 200}
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.540931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.677606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.677611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.677613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.677615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:36:13 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:36:13.677617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:36:13 compute-0 nova_compute[251992]: 2025-12-06 08:36:13.902 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4329: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:14.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:15 compute-0 nova_compute[251992]: 2025-12-06 08:36:15.254 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:15 compute-0 podman[434180]: 2025-12-06 08:36:15.46635422 +0000 UTC m=+0.127154653 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 08:36:15 compute-0 ceph-mon[74339]: pgmap v4329: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:15.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:15 compute-0 nova_compute[251992]: 2025-12-06 08:36:15.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:15 compute-0 nova_compute[251992]: 2025-12-06 08:36:15.680 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:36:15 compute-0 nova_compute[251992]: 2025-12-06 08:36:15.680 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:36:15 compute-0 nova_compute[251992]: 2025-12-06 08:36:15.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:36:15 compute-0 nova_compute[251992]: 2025-12-06 08:36:15.681 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:36:15 compute-0 nova_compute[251992]: 2025-12-06 08:36:15.681 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:36:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:36:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3432423705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.113 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:36:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.261 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.262 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4048MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.263 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.263 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.357 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.358 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:36:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4330: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.385 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.408 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.409 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.427 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.453 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.467 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:36:16 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3432423705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:16.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:36:16 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4080446913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.914 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.920 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.935 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.938 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:36:16 compute-0 nova_compute[251992]: 2025-12-06 08:36:16.939 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:36:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:17.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:17 compute-0 ceph-mon[74339]: pgmap v4330: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:17 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4080446913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:17 compute-0 nova_compute[251992]: 2025-12-06 08:36:17.939 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:17 compute-0 nova_compute[251992]: 2025-12-06 08:36:17.940 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4331: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:36:18
Dec 06 08:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'backups', 'cephfs.cephfs.meta', 'images', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'vms', '.rgw.root']
Dec 06 08:36:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:36:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:18.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:18 compute-0 nova_compute[251992]: 2025-12-06 08:36:18.906 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:19 compute-0 ceph-mon[74339]: pgmap v4331: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4153081065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:19.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:19 compute-0 nova_compute[251992]: 2025-12-06 08:36:19.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:19 compute-0 nova_compute[251992]: 2025-12-06 08:36:19.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:36:19 compute-0 nova_compute[251992]: 2025-12-06 08:36:19.659 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:36:19 compute-0 nova_compute[251992]: 2025-12-06 08:36:19.682 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:36:20 compute-0 nova_compute[251992]: 2025-12-06 08:36:20.256 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4332: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:20 compute-0 nova_compute[251992]: 2025-12-06 08:36:20.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:20.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3293839874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:21 compute-0 ceph-mon[74339]: pgmap v4332: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:21.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:21 compute-0 nova_compute[251992]: 2025-12-06 08:36:21.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4333: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:22 compute-0 podman[434255]: 2025-12-06 08:36:22.402798404 +0000 UTC m=+0.059326352 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 08:36:22 compute-0 podman[434256]: 2025-12-06 08:36:22.403947234 +0000 UTC m=+0.061309126 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:36:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:22.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:23 compute-0 ceph-mon[74339]: pgmap v4333: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:23.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:23 compute-0 nova_compute[251992]: 2025-12-06 08:36:23.910 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:36:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:36:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4334: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:24.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:25 compute-0 nova_compute[251992]: 2025-12-06 08:36:25.259 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:25.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:25 compute-0 nova_compute[251992]: 2025-12-06 08:36:25.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:25 compute-0 ceph-mon[74339]: pgmap v4334: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4335: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:26.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:36:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:36:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:36:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:36:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:36:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:36:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:36:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:27.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:27 compute-0 ceph-mon[74339]: pgmap v4335: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4336: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:28 compute-0 nova_compute[251992]: 2025-12-06 08:36:28.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:28 compute-0 nova_compute[251992]: 2025-12-06 08:36:28.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:36:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:28.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:28 compute-0 nova_compute[251992]: 2025-12-06 08:36:28.914 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:29.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:29 compute-0 ceph-mon[74339]: pgmap v4336: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:29 compute-0 sudo[434299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:29 compute-0 sudo[434299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:29 compute-0 sudo[434299]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:29 compute-0 sudo[434324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:36:29 compute-0 sudo[434324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:29 compute-0 sudo[434324]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:29 compute-0 sudo[434349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:29 compute-0 sudo[434349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:29 compute-0 sudo[434349]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:29 compute-0 sudo[434374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:36:29 compute-0 sudo[434374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:30 compute-0 nova_compute[251992]: 2025-12-06 08:36:30.261 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4337: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:30 compute-0 sudo[434374]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:36:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:36:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:36:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:36:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:36:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:36:30 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ef943f08-466f-4d03-b87f-7331db875b27 does not exist
Dec 06 08:36:30 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a8401054-24d2-4f56-afba-131221b227a9 does not exist
Dec 06 08:36:30 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 952f4f9c-3b95-4466-aeb1-ebf2a82ace12 does not exist
Dec 06 08:36:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:36:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:36:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:36:30 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:36:30 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:36:30 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:36:30 compute-0 sudo[434432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:30 compute-0 sudo[434432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:30 compute-0 sudo[434432]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:30 compute-0 sudo[434457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:36:30 compute-0 sudo[434457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:30 compute-0 sudo[434457]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:30 compute-0 sudo[434482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:30 compute-0 sudo[434482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:30 compute-0 sudo[434482]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:30 compute-0 sudo[434507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:36:30 compute-0 sudo[434507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:30.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:31 compute-0 podman[434573]: 2025-12-06 08:36:31.032730654 +0000 UTC m=+0.029801485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:36:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:36:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:36:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:36:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:36:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:36:31 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:36:31 compute-0 podman[434573]: 2025-12-06 08:36:31.516671935 +0000 UTC m=+0.513742746 container create 3609c4e930fcd083c10f525a40a6a0f92e69ee5cb247eba1fb3c82459ee91401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec 06 08:36:31 compute-0 systemd[1]: Started libpod-conmon-3609c4e930fcd083c10f525a40a6a0f92e69ee5cb247eba1fb3c82459ee91401.scope.
Dec 06 08:36:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:31.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:36:31 compute-0 podman[434573]: 2025-12-06 08:36:31.647533447 +0000 UTC m=+0.644604268 container init 3609c4e930fcd083c10f525a40a6a0f92e69ee5cb247eba1fb3c82459ee91401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:36:31 compute-0 podman[434573]: 2025-12-06 08:36:31.654774593 +0000 UTC m=+0.651845384 container start 3609c4e930fcd083c10f525a40a6a0f92e69ee5cb247eba1fb3c82459ee91401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec 06 08:36:31 compute-0 podman[434573]: 2025-12-06 08:36:31.660489337 +0000 UTC m=+0.657560118 container attach 3609c4e930fcd083c10f525a40a6a0f92e69ee5cb247eba1fb3c82459ee91401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 08:36:31 compute-0 trusting_elion[434589]: 167 167
Dec 06 08:36:31 compute-0 systemd[1]: libpod-3609c4e930fcd083c10f525a40a6a0f92e69ee5cb247eba1fb3c82459ee91401.scope: Deactivated successfully.
Dec 06 08:36:31 compute-0 podman[434573]: 2025-12-06 08:36:31.663042926 +0000 UTC m=+0.660113727 container died 3609c4e930fcd083c10f525a40a6a0f92e69ee5cb247eba1fb3c82459ee91401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elion, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d13468fc31a93b0b5c3512b116aff9971b2978cd9800ec56b642eb1e7e3dbd89-merged.mount: Deactivated successfully.
Dec 06 08:36:31 compute-0 podman[434573]: 2025-12-06 08:36:31.708199055 +0000 UTC m=+0.705269836 container remove 3609c4e930fcd083c10f525a40a6a0f92e69ee5cb247eba1fb3c82459ee91401 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_elion, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:36:31 compute-0 systemd[1]: libpod-conmon-3609c4e930fcd083c10f525a40a6a0f92e69ee5cb247eba1fb3c82459ee91401.scope: Deactivated successfully.
Dec 06 08:36:31 compute-0 sudo[434596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:31 compute-0 sudo[434596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:31 compute-0 sudo[434596]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:31 compute-0 sudo[434633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:31 compute-0 sudo[434633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:31 compute-0 sudo[434633]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:31 compute-0 podman[434663]: 2025-12-06 08:36:31.873845346 +0000 UTC m=+0.037388211 container create e28d06a7386cbb55307aabda70ef44f28d1e493aeaff5539243e39f9c0f19a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hypatia, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:36:31 compute-0 systemd[1]: Started libpod-conmon-e28d06a7386cbb55307aabda70ef44f28d1e493aeaff5539243e39f9c0f19a08.scope.
Dec 06 08:36:31 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b938a7df2ac15ef17320ff35623ba86fd93e2ff317a4f83ed8aa5224ef29f68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:31 compute-0 podman[434663]: 2025-12-06 08:36:31.854624547 +0000 UTC m=+0.018167422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b938a7df2ac15ef17320ff35623ba86fd93e2ff317a4f83ed8aa5224ef29f68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b938a7df2ac15ef17320ff35623ba86fd93e2ff317a4f83ed8aa5224ef29f68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b938a7df2ac15ef17320ff35623ba86fd93e2ff317a4f83ed8aa5224ef29f68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b938a7df2ac15ef17320ff35623ba86fd93e2ff317a4f83ed8aa5224ef29f68/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:31 compute-0 podman[434663]: 2025-12-06 08:36:31.969849987 +0000 UTC m=+0.133392882 container init e28d06a7386cbb55307aabda70ef44f28d1e493aeaff5539243e39f9c0f19a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hypatia, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 08:36:31 compute-0 podman[434663]: 2025-12-06 08:36:31.979873538 +0000 UTC m=+0.143416393 container start e28d06a7386cbb55307aabda70ef44f28d1e493aeaff5539243e39f9c0f19a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:36:31 compute-0 podman[434663]: 2025-12-06 08:36:31.983028953 +0000 UTC m=+0.146571848 container attach e28d06a7386cbb55307aabda70ef44f28d1e493aeaff5539243e39f9c0f19a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hypatia, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 08:36:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4338: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:32 compute-0 interesting_hypatia[434681]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:36:32 compute-0 interesting_hypatia[434681]: --> relative data size: 1.0
Dec 06 08:36:32 compute-0 interesting_hypatia[434681]: --> All data devices are unavailable
Dec 06 08:36:32 compute-0 systemd[1]: libpod-e28d06a7386cbb55307aabda70ef44f28d1e493aeaff5539243e39f9c0f19a08.scope: Deactivated successfully.
Dec 06 08:36:32 compute-0 podman[434663]: 2025-12-06 08:36:32.81327833 +0000 UTC m=+0.976821205 container died e28d06a7386cbb55307aabda70ef44f28d1e493aeaff5539243e39f9c0f19a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hypatia, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:36:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b938a7df2ac15ef17320ff35623ba86fd93e2ff317a4f83ed8aa5224ef29f68-merged.mount: Deactivated successfully.
Dec 06 08:36:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:32.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:32 compute-0 podman[434663]: 2025-12-06 08:36:32.883965988 +0000 UTC m=+1.047508853 container remove e28d06a7386cbb55307aabda70ef44f28d1e493aeaff5539243e39f9c0f19a08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec 06 08:36:32 compute-0 systemd[1]: libpod-conmon-e28d06a7386cbb55307aabda70ef44f28d1e493aeaff5539243e39f9c0f19a08.scope: Deactivated successfully.
Dec 06 08:36:32 compute-0 sudo[434507]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:32 compute-0 sudo[434708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:32 compute-0 ceph-mon[74339]: pgmap v4337: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:32 compute-0 sudo[434708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:32 compute-0 sudo[434708]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:33 compute-0 sudo[434733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:36:33 compute-0 sudo[434733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:33 compute-0 sudo[434733]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:33 compute-0 sudo[434758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:33 compute-0 sudo[434758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:33 compute-0 sudo[434758]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:33 compute-0 sudo[434783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:36:33 compute-0 sudo[434783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:33 compute-0 podman[434848]: 2025-12-06 08:36:33.436192812 +0000 UTC m=+0.033933476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:36:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:33.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:33 compute-0 nova_compute[251992]: 2025-12-06 08:36:33.659 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:36:33 compute-0 nova_compute[251992]: 2025-12-06 08:36:33.985 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:34 compute-0 podman[434848]: 2025-12-06 08:36:34.036229917 +0000 UTC m=+0.633970601 container create d7d28981f277bb9c444045c61bbbbc1a376fea3c20815b69beaa850f9a9091d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ardinghelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:36:34 compute-0 systemd[1]: Started libpod-conmon-d7d28981f277bb9c444045c61bbbbc1a376fea3c20815b69beaa850f9a9091d4.scope.
Dec 06 08:36:34 compute-0 ceph-mon[74339]: pgmap v4338: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:36:34 compute-0 podman[434848]: 2025-12-06 08:36:34.122786904 +0000 UTC m=+0.720527588 container init d7d28981f277bb9c444045c61bbbbc1a376fea3c20815b69beaa850f9a9091d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 08:36:34 compute-0 podman[434848]: 2025-12-06 08:36:34.130865131 +0000 UTC m=+0.728605785 container start d7d28981f277bb9c444045c61bbbbc1a376fea3c20815b69beaa850f9a9091d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ardinghelli, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec 06 08:36:34 compute-0 podman[434848]: 2025-12-06 08:36:34.134824858 +0000 UTC m=+0.732565512 container attach d7d28981f277bb9c444045c61bbbbc1a376fea3c20815b69beaa850f9a9091d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:36:34 compute-0 trusting_ardinghelli[434864]: 167 167
Dec 06 08:36:34 compute-0 systemd[1]: libpod-d7d28981f277bb9c444045c61bbbbc1a376fea3c20815b69beaa850f9a9091d4.scope: Deactivated successfully.
Dec 06 08:36:34 compute-0 podman[434848]: 2025-12-06 08:36:34.137306335 +0000 UTC m=+0.735046999 container died d7d28981f277bb9c444045c61bbbbc1a376fea3c20815b69beaa850f9a9091d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ardinghelli, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec 06 08:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-93351a20d7ae83b957afeab761053acf4ea8955d5aa379d4f2607c6f7514f5dc-merged.mount: Deactivated successfully.
Dec 06 08:36:34 compute-0 podman[434848]: 2025-12-06 08:36:34.181857298 +0000 UTC m=+0.779597952 container remove d7d28981f277bb9c444045c61bbbbc1a376fea3c20815b69beaa850f9a9091d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ardinghelli, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:36:34 compute-0 systemd[1]: libpod-conmon-d7d28981f277bb9c444045c61bbbbc1a376fea3c20815b69beaa850f9a9091d4.scope: Deactivated successfully.
Dec 06 08:36:34 compute-0 podman[434888]: 2025-12-06 08:36:34.347873698 +0000 UTC m=+0.043402943 container create 44578f1e8d9e7230b4f7e6e4699f7df9672d11f78bf65dcccb46ee127295431f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 08:36:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4339: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:34 compute-0 systemd[1]: Started libpod-conmon-44578f1e8d9e7230b4f7e6e4699f7df9672d11f78bf65dcccb46ee127295431f.scope.
Dec 06 08:36:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:36:34 compute-0 podman[434888]: 2025-12-06 08:36:34.328975708 +0000 UTC m=+0.024504973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91e339990a261afd040807fdd6a7605178a68eeae166243949bc0d65841e60f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91e339990a261afd040807fdd6a7605178a68eeae166243949bc0d65841e60f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91e339990a261afd040807fdd6a7605178a68eeae166243949bc0d65841e60f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91e339990a261afd040807fdd6a7605178a68eeae166243949bc0d65841e60f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:34 compute-0 podman[434888]: 2025-12-06 08:36:34.437298122 +0000 UTC m=+0.132827387 container init 44578f1e8d9e7230b4f7e6e4699f7df9672d11f78bf65dcccb46ee127295431f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hertz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:36:34 compute-0 podman[434888]: 2025-12-06 08:36:34.444367532 +0000 UTC m=+0.139896767 container start 44578f1e8d9e7230b4f7e6e4699f7df9672d11f78bf65dcccb46ee127295431f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hertz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:36:34 compute-0 podman[434888]: 2025-12-06 08:36:34.448624558 +0000 UTC m=+0.144153793 container attach 44578f1e8d9e7230b4f7e6e4699f7df9672d11f78bf65dcccb46ee127295431f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hertz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 08:36:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:34.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]: {
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:     "0": [
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:         {
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             "devices": [
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "/dev/loop3"
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             ],
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             "lv_name": "ceph_lv0",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             "lv_size": "7511998464",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             "name": "ceph_lv0",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             "tags": {
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "ceph.cluster_name": "ceph",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "ceph.crush_device_class": "",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "ceph.encrypted": "0",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "ceph.osd_id": "0",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "ceph.type": "block",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:                 "ceph.vdo": "0"
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             },
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             "type": "block",
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:             "vg_name": "ceph_vg0"
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:         }
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]:     ]
Dec 06 08:36:35 compute-0 ecstatic_hertz[434904]: }
Dec 06 08:36:35 compute-0 ceph-mon[74339]: pgmap v4339: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:35 compute-0 systemd[1]: libpod-44578f1e8d9e7230b4f7e6e4699f7df9672d11f78bf65dcccb46ee127295431f.scope: Deactivated successfully.
Dec 06 08:36:35 compute-0 podman[434888]: 2025-12-06 08:36:35.26309566 +0000 UTC m=+0.958624945 container died 44578f1e8d9e7230b4f7e6e4699f7df9672d11f78bf65dcccb46ee127295431f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hertz, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:36:35 compute-0 nova_compute[251992]: 2025-12-06 08:36:35.262 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c91e339990a261afd040807fdd6a7605178a68eeae166243949bc0d65841e60f-merged.mount: Deactivated successfully.
Dec 06 08:36:35 compute-0 podman[434888]: 2025-12-06 08:36:35.31573781 +0000 UTC m=+1.011267055 container remove 44578f1e8d9e7230b4f7e6e4699f7df9672d11f78bf65dcccb46ee127295431f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:36:35 compute-0 systemd[1]: libpod-conmon-44578f1e8d9e7230b4f7e6e4699f7df9672d11f78bf65dcccb46ee127295431f.scope: Deactivated successfully.
Dec 06 08:36:35 compute-0 sudo[434783]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:35 compute-0 sudo[434926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:35 compute-0 sudo[434926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:35 compute-0 sudo[434926]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:35 compute-0 sudo[434951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:36:35 compute-0 sudo[434951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:35 compute-0 sudo[434951]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:35 compute-0 sudo[434976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:35 compute-0 sudo[434976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:35 compute-0 sudo[434976]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:35 compute-0 sudo[435001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:36:35 compute-0 sudo[435001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:35.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:35 compute-0 podman[435067]: 2025-12-06 08:36:35.898573211 +0000 UTC m=+0.035295743 container create b44ffd924f53c11e415e016f6031b9e04debb49d151bf7d71bbc2b483428539e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 08:36:35 compute-0 systemd[1]: Started libpod-conmon-b44ffd924f53c11e415e016f6031b9e04debb49d151bf7d71bbc2b483428539e.scope.
Dec 06 08:36:35 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:36:35 compute-0 podman[435067]: 2025-12-06 08:36:35.881699135 +0000 UTC m=+0.018421687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:36:35 compute-0 podman[435067]: 2025-12-06 08:36:35.98595678 +0000 UTC m=+0.122679342 container init b44ffd924f53c11e415e016f6031b9e04debb49d151bf7d71bbc2b483428539e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 08:36:35 compute-0 podman[435067]: 2025-12-06 08:36:35.99152736 +0000 UTC m=+0.128249892 container start b44ffd924f53c11e415e016f6031b9e04debb49d151bf7d71bbc2b483428539e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:36:35 compute-0 fervent_allen[435083]: 167 167
Dec 06 08:36:35 compute-0 podman[435067]: 2025-12-06 08:36:35.995858607 +0000 UTC m=+0.132581169 container attach b44ffd924f53c11e415e016f6031b9e04debb49d151bf7d71bbc2b483428539e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 08:36:35 compute-0 systemd[1]: libpod-b44ffd924f53c11e415e016f6031b9e04debb49d151bf7d71bbc2b483428539e.scope: Deactivated successfully.
Dec 06 08:36:35 compute-0 podman[435067]: 2025-12-06 08:36:35.997559803 +0000 UTC m=+0.134282345 container died b44ffd924f53c11e415e016f6031b9e04debb49d151bf7d71bbc2b483428539e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac7d124c1a294407aa0863f43bccbf93fd82c4d6b3de41474d354d6319cd541c-merged.mount: Deactivated successfully.
Dec 06 08:36:36 compute-0 podman[435067]: 2025-12-06 08:36:36.040197224 +0000 UTC m=+0.176919756 container remove b44ffd924f53c11e415e016f6031b9e04debb49d151bf7d71bbc2b483428539e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:36:36 compute-0 systemd[1]: libpod-conmon-b44ffd924f53c11e415e016f6031b9e04debb49d151bf7d71bbc2b483428539e.scope: Deactivated successfully.
Dec 06 08:36:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:36 compute-0 podman[435109]: 2025-12-06 08:36:36.166818871 +0000 UTC m=+0.021084650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:36:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4340: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:36 compute-0 podman[435109]: 2025-12-06 08:36:36.401360781 +0000 UTC m=+0.255626590 container create d1ea2d18e6e6a8743efee75066f2c3cb150b8f152caca4740e1ae330397f24b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:36:36 compute-0 systemd[1]: Started libpod-conmon-d1ea2d18e6e6a8743efee75066f2c3cb150b8f152caca4740e1ae330397f24b2.scope.
Dec 06 08:36:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dca6b3f5bb9a6bb747a13f831684ec8f8ff572f108c380beb60a8d433de70b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dca6b3f5bb9a6bb747a13f831684ec8f8ff572f108c380beb60a8d433de70b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dca6b3f5bb9a6bb747a13f831684ec8f8ff572f108c380beb60a8d433de70b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dca6b3f5bb9a6bb747a13f831684ec8f8ff572f108c380beb60a8d433de70b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:36:36 compute-0 podman[435109]: 2025-12-06 08:36:36.742526149 +0000 UTC m=+0.596791958 container init d1ea2d18e6e6a8743efee75066f2c3cb150b8f152caca4740e1ae330397f24b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec 06 08:36:36 compute-0 podman[435109]: 2025-12-06 08:36:36.756459505 +0000 UTC m=+0.610725284 container start d1ea2d18e6e6a8743efee75066f2c3cb150b8f152caca4740e1ae330397f24b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_haslett, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:36:36 compute-0 podman[435109]: 2025-12-06 08:36:36.760214626 +0000 UTC m=+0.614480425 container attach d1ea2d18e6e6a8743efee75066f2c3cb150b8f152caca4740e1ae330397f24b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 08:36:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:36.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:37.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:37 compute-0 boring_haslett[435125]: {
Dec 06 08:36:37 compute-0 boring_haslett[435125]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:36:37 compute-0 boring_haslett[435125]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:36:37 compute-0 boring_haslett[435125]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:36:37 compute-0 boring_haslett[435125]:         "osd_id": 0,
Dec 06 08:36:37 compute-0 boring_haslett[435125]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:36:37 compute-0 boring_haslett[435125]:         "type": "bluestore"
Dec 06 08:36:37 compute-0 boring_haslett[435125]:     }
Dec 06 08:36:37 compute-0 boring_haslett[435125]: }
Dec 06 08:36:37 compute-0 systemd[1]: libpod-d1ea2d18e6e6a8743efee75066f2c3cb150b8f152caca4740e1ae330397f24b2.scope: Deactivated successfully.
Dec 06 08:36:37 compute-0 podman[435109]: 2025-12-06 08:36:37.698722827 +0000 UTC m=+1.552988636 container died d1ea2d18e6e6a8743efee75066f2c3cb150b8f152caca4740e1ae330397f24b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:36:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dca6b3f5bb9a6bb747a13f831684ec8f8ff572f108c380beb60a8d433de70b4-merged.mount: Deactivated successfully.
Dec 06 08:36:37 compute-0 podman[435109]: 2025-12-06 08:36:37.774185473 +0000 UTC m=+1.628451282 container remove d1ea2d18e6e6a8743efee75066f2c3cb150b8f152caca4740e1ae330397f24b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_haslett, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:36:37 compute-0 systemd[1]: libpod-conmon-d1ea2d18e6e6a8743efee75066f2c3cb150b8f152caca4740e1ae330397f24b2.scope: Deactivated successfully.
Dec 06 08:36:37 compute-0 sudo[435001]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:36:37 compute-0 ceph-mon[74339]: pgmap v4340: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:36:38 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:36:38 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:36:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev cec672ce-91d4-453b-9613-4cecb7cdc17d does not exist
Dec 06 08:36:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 255b3f54-d61a-4c02-bd47-f076544eb560 does not exist
Dec 06 08:36:38 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 782edc41-c7a0-4251-9304-7f023baf0e92 does not exist
Dec 06 08:36:38 compute-0 sudo[435161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:38 compute-0 sudo[435161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:38 compute-0 sudo[435161]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:38 compute-0 sudo[435186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:36:38 compute-0 sudo[435186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:38 compute-0 sudo[435186]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4341: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:38.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:38 compute-0 nova_compute[251992]: 2025-12-06 08:36:38.990 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:36:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:36:39 compute-0 ceph-mon[74339]: pgmap v4341: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:39.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:40 compute-0 nova_compute[251992]: 2025-12-06 08:36:40.264 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4342: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:40.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:41 compute-0 ceph-mon[74339]: pgmap v4342: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:41.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4343: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:42.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:42 compute-0 ceph-mon[74339]: pgmap v4343: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:36:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:36:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:36:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:36:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:36:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:36:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:43.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:43 compute-0 nova_compute[251992]: 2025-12-06 08:36:43.991 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4344: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:44.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:45 compute-0 nova_compute[251992]: 2025-12-06 08:36:45.265 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:45.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:45 compute-0 ceph-mon[74339]: pgmap v4344: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4345: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:46 compute-0 podman[435215]: 2025-12-06 08:36:46.518415678 +0000 UTC m=+0.161065457 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:36:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:46.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:47.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4346: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:48 compute-0 ceph-mon[74339]: pgmap v4345: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:48.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:48 compute-0 nova_compute[251992]: 2025-12-06 08:36:48.995 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:49.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:50 compute-0 ceph-mon[74339]: pgmap v4346: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:50 compute-0 nova_compute[251992]: 2025-12-06 08:36:50.268 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4347: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:50.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:51 compute-0 ceph-mon[74339]: pgmap v4347: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:51.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:51 compute-0 sudo[435244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:51 compute-0 sudo[435244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:51 compute-0 sudo[435244]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:51 compute-0 sudo[435269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:36:51 compute-0 sudo[435269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:36:51 compute-0 sudo[435269]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4348: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:52.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:53 compute-0 podman[435295]: 2025-12-06 08:36:53.413874366 +0000 UTC m=+0.067484612 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 08:36:53 compute-0 podman[435296]: 2025-12-06 08:36:53.441488572 +0000 UTC m=+0.095641742 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:36:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:53.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:54 compute-0 nova_compute[251992]: 2025-12-06 08:36:53.999 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:54 compute-0 ceph-mon[74339]: pgmap v4348: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4349: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:54.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:55 compute-0 ceph-mon[74339]: pgmap v4349: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2972193943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:55 compute-0 nova_compute[251992]: 2025-12-06 08:36:55.269 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:55.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1615628550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:36:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:36:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4350: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:36:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:56.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:36:57 compute-0 ceph-mon[74339]: pgmap v4350: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:36:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:57.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:36:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4351: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:36:58.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:36:59 compute-0 nova_compute[251992]: 2025-12-06 08:36:59.001 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:36:59 compute-0 ceph-mon[74339]: pgmap v4351: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:36:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:36:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:36:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:36:59.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:00 compute-0 nova_compute[251992]: 2025-12-06 08:37:00.271 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4352: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:00.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:01 compute-0 ceph-mon[74339]: pgmap v4352: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:01.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4353: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:02.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:03 compute-0 ceph-mon[74339]: pgmap v4353: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:03.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:37:03.913 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:37:03.914 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:37:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:37:03.915 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:37:04 compute-0 nova_compute[251992]: 2025-12-06 08:37:04.005 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4354: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:04.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:05 compute-0 nova_compute[251992]: 2025-12-06 08:37:05.274 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:05 compute-0 ceph-mon[74339]: pgmap v4354: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:05.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4355: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:06.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:07 compute-0 ceph-mon[74339]: pgmap v4355: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:07.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4356: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:08.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:09 compute-0 nova_compute[251992]: 2025-12-06 08:37:09.040 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:09 compute-0 ceph-mon[74339]: pgmap v4356: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2873814189' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:37:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2873814189' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:37:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:09.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:10 compute-0 nova_compute[251992]: 2025-12-06 08:37:10.275 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4357: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:10 compute-0 nova_compute[251992]: 2025-12-06 08:37:10.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:10.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:10 compute-0 ceph-mon[74339]: pgmap v4357: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:11.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:12 compute-0 sudo[435341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:12 compute-0 sudo[435341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:12 compute-0 sudo[435341]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:12 compute-0 sudo[435366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:12 compute-0 sudo[435366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:12 compute-0 sudo[435366]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4358: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:12.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:37:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:37:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:37:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:37:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:37:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:37:13 compute-0 ceph-mon[74339]: pgmap v4358: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:13.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:14 compute-0 nova_compute[251992]: 2025-12-06 08:37:14.042 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4359: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:14.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:15 compute-0 nova_compute[251992]: 2025-12-06 08:37:15.277 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:15 compute-0 ceph-mon[74339]: pgmap v4359: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:15.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4360: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:16.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:17 compute-0 podman[435394]: 2025-12-06 08:37:17.413693056 +0000 UTC m=+0.077495643 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:37:17 compute-0 ceph-mon[74339]: pgmap v4360: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:17 compute-0 nova_compute[251992]: 2025-12-06 08:37:17.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:17 compute-0 nova_compute[251992]: 2025-12-06 08:37:17.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:17.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:17 compute-0 nova_compute[251992]: 2025-12-06 08:37:17.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:17 compute-0 nova_compute[251992]: 2025-12-06 08:37:17.694 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:37:17 compute-0 nova_compute[251992]: 2025-12-06 08:37:17.695 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:37:17 compute-0 nova_compute[251992]: 2025-12-06 08:37:17.696 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:37:17 compute-0 nova_compute[251992]: 2025-12-06 08:37:17.696 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:37:17 compute-0 nova_compute[251992]: 2025-12-06 08:37:17.697 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:37:18 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:37:18 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2483443150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:18 compute-0 nova_compute[251992]: 2025-12-06 08:37:18.154 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:37:18 compute-0 nova_compute[251992]: 2025-12-06 08:37:18.357 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:37:18 compute-0 nova_compute[251992]: 2025-12-06 08:37:18.358 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4037MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:37:18 compute-0 nova_compute[251992]: 2025-12-06 08:37:18.359 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:37:18 compute-0 nova_compute[251992]: 2025-12-06 08:37:18.359 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:37:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4361: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:18 compute-0 nova_compute[251992]: 2025-12-06 08:37:18.593 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:37:18 compute-0 nova_compute[251992]: 2025-12-06 08:37:18.593 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:37:18 compute-0 nova_compute[251992]: 2025-12-06 08:37:18.693 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:37:18
Dec 06 08:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', '.rgw.root']
Dec 06 08:37:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:37:18 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2483443150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:18.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:19 compute-0 nova_compute[251992]: 2025-12-06 08:37:19.087 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:37:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1232078863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:19 compute-0 nova_compute[251992]: 2025-12-06 08:37:19.125 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:37:19 compute-0 nova_compute[251992]: 2025-12-06 08:37:19.131 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:37:19 compute-0 nova_compute[251992]: 2025-12-06 08:37:19.148 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:37:19 compute-0 nova_compute[251992]: 2025-12-06 08:37:19.151 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:37:19 compute-0 nova_compute[251992]: 2025-12-06 08:37:19.152 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:37:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:19.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:19 compute-0 ceph-mon[74339]: pgmap v4361: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:19 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1232078863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:20 compute-0 nova_compute[251992]: 2025-12-06 08:37:20.279 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4362: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:20.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/943641337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:21.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:21 compute-0 ceph-mon[74339]: pgmap v4362: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1486169835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4363: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:22.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:23 compute-0 nova_compute[251992]: 2025-12-06 08:37:23.153 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:23 compute-0 nova_compute[251992]: 2025-12-06 08:37:23.153 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:37:23 compute-0 nova_compute[251992]: 2025-12-06 08:37:23.154 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:37:23 compute-0 ceph-mon[74339]: pgmap v4363: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:23 compute-0 nova_compute[251992]: 2025-12-06 08:37:23.513 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:37:23 compute-0 nova_compute[251992]: 2025-12-06 08:37:23.514 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:23 compute-0 nova_compute[251992]: 2025-12-06 08:37:23.514 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:23.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:37:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:37:24 compute-0 nova_compute[251992]: 2025-12-06 08:37:24.090 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4364: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:24 compute-0 podman[435469]: 2025-12-06 08:37:24.434790903 +0000 UTC m=+0.081541661 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:37:24 compute-0 podman[435468]: 2025-12-06 08:37:24.43505364 +0000 UTC m=+0.085724654 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:37:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:24.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:25 compute-0 nova_compute[251992]: 2025-12-06 08:37:25.281 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:25 compute-0 ceph-mon[74339]: pgmap v4364: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:25.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4365: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:37:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:37:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:26.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:37:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:37:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:37:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:37:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:37:27 compute-0 ceph-mon[74339]: pgmap v4365: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:27.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4366: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:28 compute-0 nova_compute[251992]: 2025-12-06 08:37:28.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:28 compute-0 nova_compute[251992]: 2025-12-06 08:37:28.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:37:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:28.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:29 compute-0 nova_compute[251992]: 2025-12-06 08:37:29.095 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:29 compute-0 ceph-mon[74339]: pgmap v4366: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:29.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:30 compute-0 nova_compute[251992]: 2025-12-06 08:37:30.285 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4367: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:30.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:31 compute-0 ceph-mon[74339]: pgmap v4367: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:31.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:32 compute-0 sudo[435512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:32 compute-0 sudo[435512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:32 compute-0 sudo[435512]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:32 compute-0 sudo[435537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:32 compute-0 sudo[435537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:32 compute-0 sudo[435537]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4368: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:32.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:33 compute-0 nova_compute[251992]: 2025-12-06 08:37:33.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:37:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:33.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:33 compute-0 ceph-mon[74339]: pgmap v4368: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:34 compute-0 nova_compute[251992]: 2025-12-06 08:37:34.098 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4369: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:34.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:35 compute-0 ceph-mon[74339]: pgmap v4369: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:35 compute-0 nova_compute[251992]: 2025-12-06 08:37:35.288 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:35.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4370: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:36.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:37 compute-0 ceph-mon[74339]: pgmap v4370: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:37.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4371: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:38 compute-0 sudo[435565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:38 compute-0 sudo[435565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:38 compute-0 sudo[435565]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:38 compute-0 sudo[435590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:37:38 compute-0 sudo[435590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:38 compute-0 sudo[435590]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:38 compute-0 sudo[435615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:38 compute-0 sudo[435615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:38 compute-0 sudo[435615]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:38 compute-0 sudo[435640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:37:38 compute-0 sudo[435640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:38.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:39 compute-0 nova_compute[251992]: 2025-12-06 08:37:39.102 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:39 compute-0 sudo[435640]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:37:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:37:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:37:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:37:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:37:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:37:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev a9d6ed17-6233-40ac-a3bf-7da665586ac0 does not exist
Dec 06 08:37:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev f8590471-b0cc-420b-bbd4-6affd6082286 does not exist
Dec 06 08:37:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 106b40de-3b29-4f28-a1d8-b2cabaad8089 does not exist
Dec 06 08:37:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:37:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:37:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:37:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:37:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:37:39 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:37:39 compute-0 ceph-mon[74339]: pgmap v4371: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:37:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:37:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:37:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:37:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:37:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:37:39 compute-0 sudo[435697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:39 compute-0 sudo[435697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:39 compute-0 sudo[435697]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:39 compute-0 sudo[435722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:37:39 compute-0 sudo[435722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:39 compute-0 sudo[435722]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:39 compute-0 sudo[435747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:39 compute-0 sudo[435747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:39 compute-0 sudo[435747]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:39 compute-0 sudo[435772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:37:39 compute-0 sudo[435772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:39.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:40 compute-0 podman[435837]: 2025-12-06 08:37:40.021166196 +0000 UTC m=+0.049571400 container create 1905b5b5f418ab8f7f641d413fa05e771fbeec5a990e0a474585e827a644f89d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:37:40 compute-0 systemd[1]: Started libpod-conmon-1905b5b5f418ab8f7f641d413fa05e771fbeec5a990e0a474585e827a644f89d.scope.
Dec 06 08:37:40 compute-0 podman[435837]: 2025-12-06 08:37:39.996211932 +0000 UTC m=+0.024617226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:37:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:37:40 compute-0 podman[435837]: 2025-12-06 08:37:40.117381272 +0000 UTC m=+0.145786496 container init 1905b5b5f418ab8f7f641d413fa05e771fbeec5a990e0a474585e827a644f89d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kalam, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 08:37:40 compute-0 podman[435837]: 2025-12-06 08:37:40.12433569 +0000 UTC m=+0.152740904 container start 1905b5b5f418ab8f7f641d413fa05e771fbeec5a990e0a474585e827a644f89d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 08:37:40 compute-0 podman[435837]: 2025-12-06 08:37:40.1280301 +0000 UTC m=+0.156435304 container attach 1905b5b5f418ab8f7f641d413fa05e771fbeec5a990e0a474585e827a644f89d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kalam, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:37:40 compute-0 systemd[1]: libpod-1905b5b5f418ab8f7f641d413fa05e771fbeec5a990e0a474585e827a644f89d.scope: Deactivated successfully.
Dec 06 08:37:40 compute-0 ecstatic_kalam[435853]: 167 167
Dec 06 08:37:40 compute-0 conmon[435853]: conmon 1905b5b5f418ab8f7f64 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1905b5b5f418ab8f7f641d413fa05e771fbeec5a990e0a474585e827a644f89d.scope/container/memory.events
Dec 06 08:37:40 compute-0 podman[435837]: 2025-12-06 08:37:40.131769881 +0000 UTC m=+0.160175095 container died 1905b5b5f418ab8f7f641d413fa05e771fbeec5a990e0a474585e827a644f89d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kalam, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec 06 08:37:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-75461fd4da0c08f71a6eaa4bc84c5c51a24624064e90923407fed72678da5643-merged.mount: Deactivated successfully.
Dec 06 08:37:40 compute-0 podman[435837]: 2025-12-06 08:37:40.17583522 +0000 UTC m=+0.204240444 container remove 1905b5b5f418ab8f7f641d413fa05e771fbeec5a990e0a474585e827a644f89d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:37:40 compute-0 systemd[1]: libpod-conmon-1905b5b5f418ab8f7f641d413fa05e771fbeec5a990e0a474585e827a644f89d.scope: Deactivated successfully.
Dec 06 08:37:40 compute-0 nova_compute[251992]: 2025-12-06 08:37:40.289 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:40 compute-0 podman[435878]: 2025-12-06 08:37:40.345322915 +0000 UTC m=+0.051584204 container create 9667af8c494ad9a361e143e94faebfb91868e962b9afb46bd0f9d001f86a1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec 06 08:37:40 compute-0 systemd[1]: Started libpod-conmon-9667af8c494ad9a361e143e94faebfb91868e962b9afb46bd0f9d001f86a1a7b.scope.
Dec 06 08:37:40 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:37:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4372: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7bbbaa7998f436f4a1e80ccd7ec64d907c1045eb5507fdb0b379a4cb600fa4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7bbbaa7998f436f4a1e80ccd7ec64d907c1045eb5507fdb0b379a4cb600fa4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7bbbaa7998f436f4a1e80ccd7ec64d907c1045eb5507fdb0b379a4cb600fa4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7bbbaa7998f436f4a1e80ccd7ec64d907c1045eb5507fdb0b379a4cb600fa4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7bbbaa7998f436f4a1e80ccd7ec64d907c1045eb5507fdb0b379a4cb600fa4c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:40 compute-0 podman[435878]: 2025-12-06 08:37:40.417369459 +0000 UTC m=+0.123630768 container init 9667af8c494ad9a361e143e94faebfb91868e962b9afb46bd0f9d001f86a1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_brattain, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:37:40 compute-0 podman[435878]: 2025-12-06 08:37:40.422887848 +0000 UTC m=+0.129149137 container start 9667af8c494ad9a361e143e94faebfb91868e962b9afb46bd0f9d001f86a1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_brattain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:37:40 compute-0 podman[435878]: 2025-12-06 08:37:40.329282232 +0000 UTC m=+0.035543541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:37:40 compute-0 podman[435878]: 2025-12-06 08:37:40.426133686 +0000 UTC m=+0.132394995 container attach 9667af8c494ad9a361e143e94faebfb91868e962b9afb46bd0f9d001f86a1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:37:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:40.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:41 compute-0 elated_brattain[435895]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:37:41 compute-0 elated_brattain[435895]: --> relative data size: 1.0
Dec 06 08:37:41 compute-0 elated_brattain[435895]: --> All data devices are unavailable
Dec 06 08:37:41 compute-0 systemd[1]: libpod-9667af8c494ad9a361e143e94faebfb91868e962b9afb46bd0f9d001f86a1a7b.scope: Deactivated successfully.
Dec 06 08:37:41 compute-0 podman[435878]: 2025-12-06 08:37:41.315357155 +0000 UTC m=+1.021618504 container died 9667af8c494ad9a361e143e94faebfb91868e962b9afb46bd0f9d001f86a1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 08:37:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:41.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:41 compute-0 ceph-mon[74339]: pgmap v4372: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7bbbaa7998f436f4a1e80ccd7ec64d907c1045eb5507fdb0b379a4cb600fa4c-merged.mount: Deactivated successfully.
Dec 06 08:37:41 compute-0 podman[435878]: 2025-12-06 08:37:41.793064509 +0000 UTC m=+1.499325818 container remove 9667af8c494ad9a361e143e94faebfb91868e962b9afb46bd0f9d001f86a1a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_brattain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:37:41 compute-0 systemd[1]: libpod-conmon-9667af8c494ad9a361e143e94faebfb91868e962b9afb46bd0f9d001f86a1a7b.scope: Deactivated successfully.
Dec 06 08:37:41 compute-0 sudo[435772]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:41 compute-0 sudo[435924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:41 compute-0 sudo[435924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:41 compute-0 sudo[435924]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:41 compute-0 sudo[435949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:37:41 compute-0 sudo[435949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:41 compute-0 sudo[435949]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:41 compute-0 sudo[435974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:41 compute-0 sudo[435974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:41 compute-0 sudo[435974]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:42 compute-0 sudo[435999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:37:42 compute-0 sudo[435999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:42 compute-0 podman[436065]: 2025-12-06 08:37:42.352010174 +0000 UTC m=+0.042471057 container create 2e51e919d7389653f12df2cb43916ac0d5201243cee8442b5397fd88e4e883fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 08:37:42 compute-0 systemd[1]: Started libpod-conmon-2e51e919d7389653f12df2cb43916ac0d5201243cee8442b5397fd88e4e883fe.scope.
Dec 06 08:37:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4373: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:37:42 compute-0 podman[436065]: 2025-12-06 08:37:42.334688547 +0000 UTC m=+0.025149420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:37:42 compute-0 podman[436065]: 2025-12-06 08:37:42.439079514 +0000 UTC m=+0.129540367 container init 2e51e919d7389653f12df2cb43916ac0d5201243cee8442b5397fd88e4e883fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chatelet, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 08:37:42 compute-0 podman[436065]: 2025-12-06 08:37:42.444603953 +0000 UTC m=+0.135064806 container start 2e51e919d7389653f12df2cb43916ac0d5201243cee8442b5397fd88e4e883fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 08:37:42 compute-0 podman[436065]: 2025-12-06 08:37:42.447409479 +0000 UTC m=+0.137870332 container attach 2e51e919d7389653f12df2cb43916ac0d5201243cee8442b5397fd88e4e883fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chatelet, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:37:42 compute-0 recursing_chatelet[436081]: 167 167
Dec 06 08:37:42 compute-0 systemd[1]: libpod-2e51e919d7389653f12df2cb43916ac0d5201243cee8442b5397fd88e4e883fe.scope: Deactivated successfully.
Dec 06 08:37:42 compute-0 podman[436065]: 2025-12-06 08:37:42.451283104 +0000 UTC m=+0.141743997 container died 2e51e919d7389653f12df2cb43916ac0d5201243cee8442b5397fd88e4e883fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-20f3fbb8892a99fcb7268ee2a1446bc8bbd9763a0c3d37b2a353c3e9a9a0842f-merged.mount: Deactivated successfully.
Dec 06 08:37:42 compute-0 podman[436065]: 2025-12-06 08:37:42.497401129 +0000 UTC m=+0.187861982 container remove 2e51e919d7389653f12df2cb43916ac0d5201243cee8442b5397fd88e4e883fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 08:37:42 compute-0 systemd[1]: libpod-conmon-2e51e919d7389653f12df2cb43916ac0d5201243cee8442b5397fd88e4e883fe.scope: Deactivated successfully.
Dec 06 08:37:42 compute-0 podman[436104]: 2025-12-06 08:37:42.69937385 +0000 UTC m=+0.069351883 container create 7d725b6df0c1a4012e810191dd9159301a163d257901c8ad4d0201698a159129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec 06 08:37:42 compute-0 systemd[1]: Started libpod-conmon-7d725b6df0c1a4012e810191dd9159301a163d257901c8ad4d0201698a159129.scope.
Dec 06 08:37:42 compute-0 podman[436104]: 2025-12-06 08:37:42.67716023 +0000 UTC m=+0.047138273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:37:42 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5207662c53ba5938eb0de3a0756b2553553c6e4be68b8120d6e09f39c7a0cb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5207662c53ba5938eb0de3a0756b2553553c6e4be68b8120d6e09f39c7a0cb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5207662c53ba5938eb0de3a0756b2553553c6e4be68b8120d6e09f39c7a0cb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5207662c53ba5938eb0de3a0756b2553553c6e4be68b8120d6e09f39c7a0cb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:42 compute-0 podman[436104]: 2025-12-06 08:37:42.799916093 +0000 UTC m=+0.169894136 container init 7d725b6df0c1a4012e810191dd9159301a163d257901c8ad4d0201698a159129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec 06 08:37:42 compute-0 podman[436104]: 2025-12-06 08:37:42.813895931 +0000 UTC m=+0.183873954 container start 7d725b6df0c1a4012e810191dd9159301a163d257901c8ad4d0201698a159129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_varahamihira, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:37:42 compute-0 podman[436104]: 2025-12-06 08:37:42.817382765 +0000 UTC m=+0.187360818 container attach 7d725b6df0c1a4012e810191dd9159301a163d257901c8ad4d0201698a159129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:37:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:42.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:37:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:37:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:37:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:37:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:37:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]: {
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:     "0": [
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:         {
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             "devices": [
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "/dev/loop3"
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             ],
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             "lv_name": "ceph_lv0",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             "lv_size": "7511998464",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             "name": "ceph_lv0",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             "tags": {
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "ceph.cluster_name": "ceph",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "ceph.crush_device_class": "",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "ceph.encrypted": "0",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "ceph.osd_id": "0",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "ceph.type": "block",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:                 "ceph.vdo": "0"
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             },
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             "type": "block",
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:             "vg_name": "ceph_vg0"
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:         }
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]:     ]
Dec 06 08:37:43 compute-0 relaxed_varahamihira[436120]: }
Dec 06 08:37:43 compute-0 systemd[1]: libpod-7d725b6df0c1a4012e810191dd9159301a163d257901c8ad4d0201698a159129.scope: Deactivated successfully.
Dec 06 08:37:43 compute-0 podman[436104]: 2025-12-06 08:37:43.602179286 +0000 UTC m=+0.972157339 container died 7d725b6df0c1a4012e810191dd9159301a163d257901c8ad4d0201698a159129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 08:37:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5207662c53ba5938eb0de3a0756b2553553c6e4be68b8120d6e09f39c7a0cb7-merged.mount: Deactivated successfully.
Dec 06 08:37:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:43.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:44 compute-0 nova_compute[251992]: 2025-12-06 08:37:44.107 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:44 compute-0 ceph-mon[74339]: pgmap v4373: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:44 compute-0 podman[436104]: 2025-12-06 08:37:44.3085236 +0000 UTC m=+1.678501623 container remove 7d725b6df0c1a4012e810191dd9159301a163d257901c8ad4d0201698a159129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_varahamihira, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 06 08:37:44 compute-0 systemd[1]: libpod-conmon-7d725b6df0c1a4012e810191dd9159301a163d257901c8ad4d0201698a159129.scope: Deactivated successfully.
Dec 06 08:37:44 compute-0 sudo[435999]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4374: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:44 compute-0 sudo[436143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:44 compute-0 sudo[436143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:44 compute-0 sudo[436143]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:44 compute-0 sudo[436168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:37:44 compute-0 sudo[436168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:44 compute-0 sudo[436168]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:44 compute-0 sudo[436193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:44 compute-0 sudo[436193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:44 compute-0 sudo[436193]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:44 compute-0 sudo[436218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:37:44 compute-0 sudo[436218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:44 compute-0 podman[436283]: 2025-12-06 08:37:44.949759246 +0000 UTC m=+0.046488145 container create 4692c52d411053130faaaac9cdd1842e486147cf2181147990c7968bf3b286ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 08:37:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:44.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:44 compute-0 systemd[1]: Started libpod-conmon-4692c52d411053130faaaac9cdd1842e486147cf2181147990c7968bf3b286ed.scope.
Dec 06 08:37:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:37:45 compute-0 podman[436283]: 2025-12-06 08:37:45.016949599 +0000 UTC m=+0.113678508 container init 4692c52d411053130faaaac9cdd1842e486147cf2181147990c7968bf3b286ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:37:45 compute-0 podman[436283]: 2025-12-06 08:37:45.023663681 +0000 UTC m=+0.120392580 container start 4692c52d411053130faaaac9cdd1842e486147cf2181147990c7968bf3b286ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_austin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 08:37:45 compute-0 podman[436283]: 2025-12-06 08:37:44.931179035 +0000 UTC m=+0.027907984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:37:45 compute-0 podman[436283]: 2025-12-06 08:37:45.028170232 +0000 UTC m=+0.124899221 container attach 4692c52d411053130faaaac9cdd1842e486147cf2181147990c7968bf3b286ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:37:45 compute-0 vigorous_austin[436300]: 167 167
Dec 06 08:37:45 compute-0 systemd[1]: libpod-4692c52d411053130faaaac9cdd1842e486147cf2181147990c7968bf3b286ed.scope: Deactivated successfully.
Dec 06 08:37:45 compute-0 podman[436283]: 2025-12-06 08:37:45.029833298 +0000 UTC m=+0.126562237 container died 4692c52d411053130faaaac9cdd1842e486147cf2181147990c7968bf3b286ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_austin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 08:37:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6ab174a6c03e72aa9d85fa9b65bd08f244f7c39ff8df4eda54cc5e13e082fcf-merged.mount: Deactivated successfully.
Dec 06 08:37:45 compute-0 podman[436283]: 2025-12-06 08:37:45.070948668 +0000 UTC m=+0.167677577 container remove 4692c52d411053130faaaac9cdd1842e486147cf2181147990c7968bf3b286ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:37:45 compute-0 systemd[1]: libpod-conmon-4692c52d411053130faaaac9cdd1842e486147cf2181147990c7968bf3b286ed.scope: Deactivated successfully.
Dec 06 08:37:45 compute-0 podman[436324]: 2025-12-06 08:37:45.225055867 +0000 UTC m=+0.038701726 container create 01bfe0a6de17485feb649641694a7b8fe4a35571e9034ff8a955fe43fc8d745e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:37:45 compute-0 systemd[1]: Started libpod-conmon-01bfe0a6de17485feb649641694a7b8fe4a35571e9034ff8a955fe43fc8d745e.scope.
Dec 06 08:37:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002f266a17d1c7c94ef3a787c03617ddcce28b4a82779fb6f8cbb79c7ad67787/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002f266a17d1c7c94ef3a787c03617ddcce28b4a82779fb6f8cbb79c7ad67787/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002f266a17d1c7c94ef3a787c03617ddcce28b4a82779fb6f8cbb79c7ad67787/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002f266a17d1c7c94ef3a787c03617ddcce28b4a82779fb6f8cbb79c7ad67787/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:37:45 compute-0 nova_compute[251992]: 2025-12-06 08:37:45.291 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:45 compute-0 podman[436324]: 2025-12-06 08:37:45.297713208 +0000 UTC m=+0.111359067 container init 01bfe0a6de17485feb649641694a7b8fe4a35571e9034ff8a955fe43fc8d745e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Dec 06 08:37:45 compute-0 ceph-mon[74339]: pgmap v4374: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:45 compute-0 podman[436324]: 2025-12-06 08:37:45.206334411 +0000 UTC m=+0.019980300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:37:45 compute-0 podman[436324]: 2025-12-06 08:37:45.306392682 +0000 UTC m=+0.120038541 container start 01bfe0a6de17485feb649641694a7b8fe4a35571e9034ff8a955fe43fc8d745e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Dec 06 08:37:45 compute-0 podman[436324]: 2025-12-06 08:37:45.31003467 +0000 UTC m=+0.123680559 container attach 01bfe0a6de17485feb649641694a7b8fe4a35571e9034ff8a955fe43fc8d745e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 06 08:37:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:45.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:46 compute-0 nostalgic_blackwell[436341]: {
Dec 06 08:37:46 compute-0 nostalgic_blackwell[436341]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:37:46 compute-0 nostalgic_blackwell[436341]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:37:46 compute-0 nostalgic_blackwell[436341]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:37:46 compute-0 nostalgic_blackwell[436341]:         "osd_id": 0,
Dec 06 08:37:46 compute-0 nostalgic_blackwell[436341]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:37:46 compute-0 nostalgic_blackwell[436341]:         "type": "bluestore"
Dec 06 08:37:46 compute-0 nostalgic_blackwell[436341]:     }
Dec 06 08:37:46 compute-0 nostalgic_blackwell[436341]: }
Dec 06 08:37:46 compute-0 systemd[1]: libpod-01bfe0a6de17485feb649641694a7b8fe4a35571e9034ff8a955fe43fc8d745e.scope: Deactivated successfully.
Dec 06 08:37:46 compute-0 podman[436324]: 2025-12-06 08:37:46.130316139 +0000 UTC m=+0.943962038 container died 01bfe0a6de17485feb649641694a7b8fe4a35571e9034ff8a955fe43fc8d745e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 08:37:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-002f266a17d1c7c94ef3a787c03617ddcce28b4a82779fb6f8cbb79c7ad67787-merged.mount: Deactivated successfully.
Dec 06 08:37:46 compute-0 podman[436324]: 2025-12-06 08:37:46.180481143 +0000 UTC m=+0.994127002 container remove 01bfe0a6de17485feb649641694a7b8fe4a35571e9034ff8a955fe43fc8d745e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 08:37:46 compute-0 systemd[1]: libpod-conmon-01bfe0a6de17485feb649641694a7b8fe4a35571e9034ff8a955fe43fc8d745e.scope: Deactivated successfully.
Dec 06 08:37:46 compute-0 sudo[436218]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:37:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:37:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:37:46 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:37:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8bb2c7ed-2ea6-4d39-9975-cb8a57e10a30 does not exist
Dec 06 08:37:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0e554b24-b1c2-4bae-be35-f81dad670e65 does not exist
Dec 06 08:37:46 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 88d1e125-e9e6-4ad9-8a0c-54f7e8dee2c9 does not exist
Dec 06 08:37:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:46 compute-0 sudo[436377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:46 compute-0 sudo[436377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:46 compute-0 sudo[436377]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:46 compute-0 sudo[436402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:37:46 compute-0 sudo[436402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:46 compute-0 sudo[436402]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4375: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:46.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:47.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:37:47 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:37:47 compute-0 ceph-mon[74339]: pgmap v4375: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4376: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:48 compute-0 podman[436428]: 2025-12-06 08:37:48.458948958 +0000 UTC m=+0.118303554 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:37:48 compute-0 ceph-mon[74339]: pgmap v4376: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:48.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:49 compute-0 nova_compute[251992]: 2025-12-06 08:37:49.110 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:49.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:50 compute-0 nova_compute[251992]: 2025-12-06 08:37:50.294 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4377: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:50.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:51 compute-0 ceph-mon[74339]: pgmap v4377: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:51.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4378: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:52 compute-0 sudo[436456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:52 compute-0 sudo[436456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:52 compute-0 sudo[436456]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:52 compute-0 sudo[436481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:37:52 compute-0 sudo[436481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:37:52 compute-0 sudo[436481]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:37:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:52.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:37:53 compute-0 ceph-mon[74339]: pgmap v4378: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:53.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:54 compute-0 nova_compute[251992]: 2025-12-06 08:37:54.113 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4379: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:54.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:55 compute-0 nova_compute[251992]: 2025-12-06 08:37:55.297 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:55 compute-0 podman[436508]: 2025-12-06 08:37:55.414153496 +0000 UTC m=+0.065545630 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:37:55 compute-0 podman[436507]: 2025-12-06 08:37:55.420868587 +0000 UTC m=+0.065304853 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:37:55 compute-0 ceph-mon[74339]: pgmap v4379: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/350442536' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:55.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:37:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4380: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/751184748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:37:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:37:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:56.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:37:57 compute-0 ceph-mon[74339]: pgmap v4380: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:57.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4381: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:37:58.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:37:59 compute-0 nova_compute[251992]: 2025-12-06 08:37:59.117 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:37:59 compute-0 ceph-mon[74339]: pgmap v4381: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:37:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:37:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:37:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:37:59.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:00 compute-0 nova_compute[251992]: 2025-12-06 08:38:00.300 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4382: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:00.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:01 compute-0 ceph-mon[74339]: pgmap v4382: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:01.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4383: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:02.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:03.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:03 compute-0 ceph-mon[74339]: pgmap v4383: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:38:03.915 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:38:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:38:03.915 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:38:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:38:03.915 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:38:04 compute-0 nova_compute[251992]: 2025-12-06 08:38:04.123 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4384: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:38:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:04.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:38:05 compute-0 ceph-mon[74339]: pgmap v4384: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:05 compute-0 nova_compute[251992]: 2025-12-06 08:38:05.302 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:05.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4385: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:38:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:07.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:38:07 compute-0 ceph-mon[74339]: pgmap v4385: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:07.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4386: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:09.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:09 compute-0 nova_compute[251992]: 2025-12-06 08:38:09.127 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec 06 08:38:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2004746509' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:38:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec 06 08:38:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2004746509' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:38:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:09.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:10 compute-0 ceph-mon[74339]: pgmap v4386: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2004746509' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:38:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2004746509' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:38:10 compute-0 nova_compute[251992]: 2025-12-06 08:38:10.303 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4387: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:11.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:11 compute-0 ceph-mon[74339]: pgmap v4387: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:38:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:11.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:38:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4388: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:12 compute-0 sudo[436556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:12 compute-0 sudo[436556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:12 compute-0 sudo[436556]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:12 compute-0 nova_compute[251992]: 2025-12-06 08:38:12.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:12 compute-0 sudo[436581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:12 compute-0 sudo[436581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:12 compute-0 sudo[436581]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:13.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:38:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:38:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:38:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:38:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:38:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:38:13 compute-0 ceph-mon[74339]: pgmap v4388: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:13.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:38:13 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 63K writes, 232K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 63K writes, 24K syncs, 2.59 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1120 writes, 2689 keys, 1120 commit groups, 1.0 writes per commit group, ingest: 1.64 MB, 0.00 MB/s
                                           Interval WAL: 1120 writes, 524 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:38:14 compute-0 nova_compute[251992]: 2025-12-06 08:38:14.130 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4389: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:15.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:15 compute-0 nova_compute[251992]: 2025-12-06 08:38:15.307 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:15.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:15 compute-0 ceph-mon[74339]: pgmap v4389: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4390: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:17.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:17 compute-0 ceph-mon[74339]: pgmap v4390: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:17 compute-0 nova_compute[251992]: 2025-12-06 08:38:17.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:17.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4391: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:18 compute-0 nova_compute[251992]: 2025-12-06 08:38:18.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:18 compute-0 nova_compute[251992]: 2025-12-06 08:38:18.700 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:38:18 compute-0 nova_compute[251992]: 2025-12-06 08:38:18.700 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:38:18 compute-0 nova_compute[251992]: 2025-12-06 08:38:18.700 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:38:18 compute-0 nova_compute[251992]: 2025-12-06 08:38:18.701 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:38:18 compute-0 nova_compute[251992]: 2025-12-06 08:38:18.701 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:38:18
Dec 06 08:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'images']
Dec 06 08:38:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:38:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:19.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:19 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:38:19 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62786821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:19 compute-0 nova_compute[251992]: 2025-12-06 08:38:19.118 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:38:19 compute-0 nova_compute[251992]: 2025-12-06 08:38:19.134 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:19 compute-0 nova_compute[251992]: 2025-12-06 08:38:19.289 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:38:19 compute-0 nova_compute[251992]: 2025-12-06 08:38:19.290 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4052MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:38:19 compute-0 nova_compute[251992]: 2025-12-06 08:38:19.290 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:38:19 compute-0 nova_compute[251992]: 2025-12-06 08:38:19.291 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:38:19 compute-0 podman[436631]: 2025-12-06 08:38:19.41957613 +0000 UTC m=+0.083678570 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:38:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:19.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:19 compute-0 nova_compute[251992]: 2025-12-06 08:38:19.761 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:38:19 compute-0 nova_compute[251992]: 2025-12-06 08:38:19.761 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:38:19 compute-0 nova_compute[251992]: 2025-12-06 08:38:19.779 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:38:20 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:38:20 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1095114067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:20 compute-0 nova_compute[251992]: 2025-12-06 08:38:20.212 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:38:20 compute-0 nova_compute[251992]: 2025-12-06 08:38:20.221 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:38:20 compute-0 nova_compute[251992]: 2025-12-06 08:38:20.309 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:20 compute-0 ceph-mon[74339]: pgmap v4391: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:20 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/62786821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4392: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:20 compute-0 nova_compute[251992]: 2025-12-06 08:38:20.423 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:38:20 compute-0 nova_compute[251992]: 2025-12-06 08:38:20.425 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:38:20 compute-0 nova_compute[251992]: 2025-12-06 08:38:20.425 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:38:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:21.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:21 compute-0 nova_compute[251992]: 2025-12-06 08:38:21.425 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1095114067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:21 compute-0 ceph-mon[74339]: pgmap v4392: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1308302036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:21.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4393: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:22 compute-0 ceph-mgr[74630]: [devicehealth INFO root] Check health
Dec 06 08:38:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2898152300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:23.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:23 compute-0 nova_compute[251992]: 2025-12-06 08:38:23.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:23 compute-0 nova_compute[251992]: 2025-12-06 08:38:23.656 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:38:23 compute-0 nova_compute[251992]: 2025-12-06 08:38:23.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:38:23 compute-0 nova_compute[251992]: 2025-12-06 08:38:23.709 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:38:23 compute-0 nova_compute[251992]: 2025-12-06 08:38:23.709 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:23.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:38:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:38:24 compute-0 nova_compute[251992]: 2025-12-06 08:38:24.138 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:24 compute-0 ceph-mon[74339]: pgmap v4393: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4394: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:24 compute-0 nova_compute[251992]: 2025-12-06 08:38:24.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:25.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:25 compute-0 nova_compute[251992]: 2025-12-06 08:38:25.351 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:25.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:25 compute-0 ceph-mon[74339]: pgmap v4394: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:26 compute-0 podman[436684]: 2025-12-06 08:38:26.385298422 +0000 UTC m=+0.044876962 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 08:38:26 compute-0 podman[436685]: 2025-12-06 08:38:26.406541696 +0000 UTC m=+0.063736062 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4395: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:38:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:38:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:27.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:27 compute-0 ceph-mon[74339]: pgmap v4395: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:38:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:38:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:38:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:38:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:38:27 compute-0 nova_compute[251992]: 2025-12-06 08:38:27.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:27.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4396: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:29.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:29 compute-0 nova_compute[251992]: 2025-12-06 08:38:29.143 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:29 compute-0 ceph-mon[74339]: pgmap v4396: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:29 compute-0 nova_compute[251992]: 2025-12-06 08:38:29.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:29 compute-0 nova_compute[251992]: 2025-12-06 08:38:29.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:38:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:29.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:30 compute-0 nova_compute[251992]: 2025-12-06 08:38:30.355 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4397: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:31.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:31.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4398: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:32 compute-0 sudo[436725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:32 compute-0 sudo[436725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:32 compute-0 sudo[436725]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:32 compute-0 sudo[436750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:32 compute-0 sudo[436750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:32 compute-0 sudo[436750]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:32 compute-0 ceph-mon[74339]: pgmap v4397: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:33.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:33.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:34 compute-0 nova_compute[251992]: 2025-12-06 08:38:34.145 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4399: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:34 compute-0 nova_compute[251992]: 2025-12-06 08:38:34.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:38:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:35.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:35 compute-0 ceph-mon[74339]: pgmap v4398: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:35 compute-0 nova_compute[251992]: 2025-12-06 08:38:35.357 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:35.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:36 compute-0 ceph-mon[74339]: pgmap v4399: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4400: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:36 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:37.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:37 compute-0 ceph-mon[74339]: pgmap v4400: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:37.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4401: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:39.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:39 compute-0 nova_compute[251992]: 2025-12-06 08:38:39.148 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:39.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:39 compute-0 ceph-mon[74339]: pgmap v4401: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:40 compute-0 nova_compute[251992]: 2025-12-06 08:38:40.362 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4402: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:41.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:41 compute-0 ceph-mon[74339]: pgmap v4402: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:41 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:41.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4403: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:43.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:38:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:38:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:38:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:38:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:38:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:38:43 compute-0 ceph-mon[74339]: pgmap v4403: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:43.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:44 compute-0 nova_compute[251992]: 2025-12-06 08:38:44.151 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4404: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:45.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:45 compute-0 nova_compute[251992]: 2025-12-06 08:38:45.365 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:45 compute-0 ceph-mon[74339]: pgmap v4404: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4405: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:46 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:46 compute-0 sudo[436782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:46 compute-0 sudo[436782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:46 compute-0 sudo[436782]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:46 compute-0 sudo[436807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:38:46 compute-0 sudo[436807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:46 compute-0 sudo[436807]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:46 compute-0 sudo[436832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:46 compute-0 sudo[436832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:46 compute-0 sudo[436832]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:46 compute-0 sudo[436857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:38:46 compute-0 sudo[436857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:47 compute-0 ceph-mon[74339]: pgmap v4405: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:38:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:47.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:38:47 compute-0 sudo[436857]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Dec 06 08:38:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 08:38:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:38:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:38:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:38:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:38:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:38:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:38:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev fadc9825-5b88-4da3-bab8-3c356ab8cab9 does not exist
Dec 06 08:38:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 72360d66-aad0-4711-ad3b-b785d1e30115 does not exist
Dec 06 08:38:47 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 557be2ed-f245-4e50-b694-0c10d9364665 does not exist
Dec 06 08:38:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:38:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:38:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:38:47 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:38:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:38:47 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:38:47 compute-0 sudo[436913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:47 compute-0 sudo[436913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:47 compute-0 sudo[436913]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:47 compute-0 sudo[436938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:38:47 compute-0 sudo[436938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:47 compute-0 sudo[436938]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:47 compute-0 sudo[436963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:47 compute-0 sudo[436963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:47 compute-0 sudo[436963]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:47 compute-0 sudo[436988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:38:47 compute-0 sudo[436988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:47.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:48 compute-0 podman[437053]: 2025-12-06 08:38:48.01190024 +0000 UTC m=+0.044696117 container create 32c62fc9c87929d2338264ff98a4963e3936f7138e2b5eed7460602a8091bb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Dec 06 08:38:48 compute-0 systemd[1]: Started libpod-conmon-32c62fc9c87929d2338264ff98a4963e3936f7138e2b5eed7460602a8091bb3c.scope.
Dec 06 08:38:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 06 08:38:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:38:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:38:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:38:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:38:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:38:48 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:38:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:38:48 compute-0 podman[437053]: 2025-12-06 08:38:47.993242017 +0000 UTC m=+0.026037914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:38:48 compute-0 podman[437053]: 2025-12-06 08:38:48.108896518 +0000 UTC m=+0.141692395 container init 32c62fc9c87929d2338264ff98a4963e3936f7138e2b5eed7460602a8091bb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 08:38:48 compute-0 podman[437053]: 2025-12-06 08:38:48.117451439 +0000 UTC m=+0.150247316 container start 32c62fc9c87929d2338264ff98a4963e3936f7138e2b5eed7460602a8091bb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Dec 06 08:38:48 compute-0 podman[437053]: 2025-12-06 08:38:48.120873571 +0000 UTC m=+0.153669448 container attach 32c62fc9c87929d2338264ff98a4963e3936f7138e2b5eed7460602a8091bb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 08:38:48 compute-0 wonderful_perlman[437070]: 167 167
Dec 06 08:38:48 compute-0 systemd[1]: libpod-32c62fc9c87929d2338264ff98a4963e3936f7138e2b5eed7460602a8091bb3c.scope: Deactivated successfully.
Dec 06 08:38:48 compute-0 podman[437053]: 2025-12-06 08:38:48.125383803 +0000 UTC m=+0.158179680 container died 32c62fc9c87929d2338264ff98a4963e3936f7138e2b5eed7460602a8091bb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 08:38:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-04e52fc275101fd0422f7f9742d40cf0ccd5d163ce0c4f25f550835f8cc5af70-merged.mount: Deactivated successfully.
Dec 06 08:38:48 compute-0 podman[437053]: 2025-12-06 08:38:48.16750586 +0000 UTC m=+0.200301737 container remove 32c62fc9c87929d2338264ff98a4963e3936f7138e2b5eed7460602a8091bb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:38:48 compute-0 systemd[1]: libpod-conmon-32c62fc9c87929d2338264ff98a4963e3936f7138e2b5eed7460602a8091bb3c.scope: Deactivated successfully.
Dec 06 08:38:48 compute-0 podman[437093]: 2025-12-06 08:38:48.394730433 +0000 UTC m=+0.114391099 container create 334c8c4785b062a402000cc80eb3a9ad1a6f6d31df4e545e10e443b149abd872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 08:38:48 compute-0 podman[437093]: 2025-12-06 08:38:48.303657145 +0000 UTC m=+0.023317801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:38:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4406: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:48 compute-0 systemd[1]: Started libpod-conmon-334c8c4785b062a402000cc80eb3a9ad1a6f6d31df4e545e10e443b149abd872.scope.
Dec 06 08:38:48 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14246b2a40530dee09bde38e475cf330a08f4e76dc7e0190f384cb099ccfb6e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14246b2a40530dee09bde38e475cf330a08f4e76dc7e0190f384cb099ccfb6e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14246b2a40530dee09bde38e475cf330a08f4e76dc7e0190f384cb099ccfb6e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14246b2a40530dee09bde38e475cf330a08f4e76dc7e0190f384cb099ccfb6e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14246b2a40530dee09bde38e475cf330a08f4e76dc7e0190f384cb099ccfb6e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:48 compute-0 podman[437093]: 2025-12-06 08:38:48.54579548 +0000 UTC m=+0.265456136 container init 334c8c4785b062a402000cc80eb3a9ad1a6f6d31df4e545e10e443b149abd872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:38:48 compute-0 podman[437093]: 2025-12-06 08:38:48.554133145 +0000 UTC m=+0.273793771 container start 334c8c4785b062a402000cc80eb3a9ad1a6f6d31df4e545e10e443b149abd872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 08:38:48 compute-0 podman[437093]: 2025-12-06 08:38:48.558211375 +0000 UTC m=+0.277872001 container attach 334c8c4785b062a402000cc80eb3a9ad1a6f6d31df4e545e10e443b149abd872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 06 08:38:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:49.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:49 compute-0 nova_compute[251992]: 2025-12-06 08:38:49.154 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:49 compute-0 ceph-mon[74339]: pgmap v4406: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:49 compute-0 brave_turing[437109]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:38:49 compute-0 brave_turing[437109]: --> relative data size: 1.0
Dec 06 08:38:49 compute-0 brave_turing[437109]: --> All data devices are unavailable
Dec 06 08:38:49 compute-0 systemd[1]: libpod-334c8c4785b062a402000cc80eb3a9ad1a6f6d31df4e545e10e443b149abd872.scope: Deactivated successfully.
Dec 06 08:38:49 compute-0 podman[437093]: 2025-12-06 08:38:49.408282269 +0000 UTC m=+1.127942905 container died 334c8c4785b062a402000cc80eb3a9ad1a6f6d31df4e545e10e443b149abd872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_turing, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 08:38:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-14246b2a40530dee09bde38e475cf330a08f4e76dc7e0190f384cb099ccfb6e4-merged.mount: Deactivated successfully.
Dec 06 08:38:49 compute-0 podman[437093]: 2025-12-06 08:38:49.63841983 +0000 UTC m=+1.358080456 container remove 334c8c4785b062a402000cc80eb3a9ad1a6f6d31df4e545e10e443b149abd872 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:38:49 compute-0 sudo[436988]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:49 compute-0 systemd[1]: libpod-conmon-334c8c4785b062a402000cc80eb3a9ad1a6f6d31df4e545e10e443b149abd872.scope: Deactivated successfully.
Dec 06 08:38:49 compute-0 podman[437134]: 2025-12-06 08:38:49.711577324 +0000 UTC m=+0.233736579 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:38:49 compute-0 sudo[437154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:49 compute-0 sudo[437154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:49 compute-0 sudo[437154]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:49.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:49 compute-0 sudo[437185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:38:49 compute-0 sudo[437185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:49 compute-0 sudo[437185]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:49 compute-0 sudo[437210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:49 compute-0 sudo[437210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:49 compute-0 sudo[437210]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:49 compute-0 sudo[437235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:38:49 compute-0 sudo[437235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:50 compute-0 podman[437299]: 2025-12-06 08:38:50.224003914 +0000 UTC m=+0.037784030 container create 01c12de2abc678b6bb92ac238d9458fe537da2c782c3f628a3d687595a354aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_solomon, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:38:50 compute-0 systemd[1]: Started libpod-conmon-01c12de2abc678b6bb92ac238d9458fe537da2c782c3f628a3d687595a354aa4.scope.
Dec 06 08:38:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:38:50 compute-0 podman[437299]: 2025-12-06 08:38:50.28718381 +0000 UTC m=+0.100963946 container init 01c12de2abc678b6bb92ac238d9458fe537da2c782c3f628a3d687595a354aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_solomon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec 06 08:38:50 compute-0 podman[437299]: 2025-12-06 08:38:50.293660495 +0000 UTC m=+0.107440611 container start 01c12de2abc678b6bb92ac238d9458fe537da2c782c3f628a3d687595a354aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_solomon, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:38:50 compute-0 podman[437299]: 2025-12-06 08:38:50.296675556 +0000 UTC m=+0.110455672 container attach 01c12de2abc678b6bb92ac238d9458fe537da2c782c3f628a3d687595a354aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:38:50 compute-0 romantic_solomon[437315]: 167 167
Dec 06 08:38:50 compute-0 systemd[1]: libpod-01c12de2abc678b6bb92ac238d9458fe537da2c782c3f628a3d687595a354aa4.scope: Deactivated successfully.
Dec 06 08:38:50 compute-0 podman[437299]: 2025-12-06 08:38:50.298377982 +0000 UTC m=+0.112158098 container died 01c12de2abc678b6bb92ac238d9458fe537da2c782c3f628a3d687595a354aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_solomon, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:38:50 compute-0 podman[437299]: 2025-12-06 08:38:50.207698494 +0000 UTC m=+0.021478630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ffafef5f86c5383803f73fb9a983359773102cdaac5451c37e1fdb1aae3cce9-merged.mount: Deactivated successfully.
Dec 06 08:38:50 compute-0 podman[437299]: 2025-12-06 08:38:50.329876972 +0000 UTC m=+0.143657088 container remove 01c12de2abc678b6bb92ac238d9458fe537da2c782c3f628a3d687595a354aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec 06 08:38:50 compute-0 systemd[1]: libpod-conmon-01c12de2abc678b6bb92ac238d9458fe537da2c782c3f628a3d687595a354aa4.scope: Deactivated successfully.
Dec 06 08:38:50 compute-0 nova_compute[251992]: 2025-12-06 08:38:50.367 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4407: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:50 compute-0 podman[437338]: 2025-12-06 08:38:50.480261891 +0000 UTC m=+0.039381514 container create 4d7cdb491808bf28cfd3380502457455cbc72a076758b398093db62d2d5f6235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:38:50 compute-0 systemd[1]: Started libpod-conmon-4d7cdb491808bf28cfd3380502457455cbc72a076758b398093db62d2d5f6235.scope.
Dec 06 08:38:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612e7693836b01e0fbb253b082bc0c98633e4ccac0437f8900f0af5724262d4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612e7693836b01e0fbb253b082bc0c98633e4ccac0437f8900f0af5724262d4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612e7693836b01e0fbb253b082bc0c98633e4ccac0437f8900f0af5724262d4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612e7693836b01e0fbb253b082bc0c98633e4ccac0437f8900f0af5724262d4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:50 compute-0 podman[437338]: 2025-12-06 08:38:50.463473248 +0000 UTC m=+0.022592891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:38:50 compute-0 podman[437338]: 2025-12-06 08:38:50.564596047 +0000 UTC m=+0.123715690 container init 4d7cdb491808bf28cfd3380502457455cbc72a076758b398093db62d2d5f6235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:38:50 compute-0 podman[437338]: 2025-12-06 08:38:50.571156654 +0000 UTC m=+0.130276277 container start 4d7cdb491808bf28cfd3380502457455cbc72a076758b398093db62d2d5f6235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:38:50 compute-0 podman[437338]: 2025-12-06 08:38:50.574418642 +0000 UTC m=+0.133538265 container attach 4d7cdb491808bf28cfd3380502457455cbc72a076758b398093db62d2d5f6235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:38:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:51.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:51 compute-0 keen_kilby[437354]: {
Dec 06 08:38:51 compute-0 keen_kilby[437354]:     "0": [
Dec 06 08:38:51 compute-0 keen_kilby[437354]:         {
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             "devices": [
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "/dev/loop3"
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             ],
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             "lv_name": "ceph_lv0",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             "lv_size": "7511998464",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             "name": "ceph_lv0",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             "tags": {
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "ceph.cluster_name": "ceph",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "ceph.crush_device_class": "",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "ceph.encrypted": "0",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "ceph.osd_id": "0",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "ceph.type": "block",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:                 "ceph.vdo": "0"
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             },
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             "type": "block",
Dec 06 08:38:51 compute-0 keen_kilby[437354]:             "vg_name": "ceph_vg0"
Dec 06 08:38:51 compute-0 keen_kilby[437354]:         }
Dec 06 08:38:51 compute-0 keen_kilby[437354]:     ]
Dec 06 08:38:51 compute-0 keen_kilby[437354]: }
Dec 06 08:38:51 compute-0 systemd[1]: libpod-4d7cdb491808bf28cfd3380502457455cbc72a076758b398093db62d2d5f6235.scope: Deactivated successfully.
Dec 06 08:38:51 compute-0 podman[437338]: 2025-12-06 08:38:51.373796878 +0000 UTC m=+0.932916561 container died 4d7cdb491808bf28cfd3380502457455cbc72a076758b398093db62d2d5f6235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:38:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:51.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:51 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:52 compute-0 ceph-mon[74339]: pgmap v4407: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-612e7693836b01e0fbb253b082bc0c98633e4ccac0437f8900f0af5724262d4a-merged.mount: Deactivated successfully.
Dec 06 08:38:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4408: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:52 compute-0 podman[437338]: 2025-12-06 08:38:52.795663533 +0000 UTC m=+2.354783186 container remove 4d7cdb491808bf28cfd3380502457455cbc72a076758b398093db62d2d5f6235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:38:52 compute-0 systemd[1]: libpod-conmon-4d7cdb491808bf28cfd3380502457455cbc72a076758b398093db62d2d5f6235.scope: Deactivated successfully.
Dec 06 08:38:52 compute-0 sudo[437235]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:52 compute-0 sudo[437377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:52 compute-0 sudo[437377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:52 compute-0 sudo[437377]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:52 compute-0 sudo[437400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:52 compute-0 sudo[437400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:52 compute-0 sudo[437400]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:52 compute-0 sudo[437423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:52 compute-0 sudo[437423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:52 compute-0 sudo[437423]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:52 compute-0 sudo[437450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:38:52 compute-0 sudo[437450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:52 compute-0 sudo[437450]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:53 compute-0 ceph-mon[74339]: pgmap v4408: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:53 compute-0 sudo[437477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:53 compute-0 sudo[437477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:53 compute-0 sudo[437477]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:53.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:53 compute-0 sudo[437502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:38:53 compute-0 sudo[437502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:53 compute-0 podman[437568]: 2025-12-06 08:38:53.384977049 +0000 UTC m=+0.033756602 container create 2e305cd4fb2619a06a67bf843c49882e9c2f0320a0dd8363615b64459003f578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 08:38:53 compute-0 systemd[1]: Started libpod-conmon-2e305cd4fb2619a06a67bf843c49882e9c2f0320a0dd8363615b64459003f578.scope.
Dec 06 08:38:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:38:53 compute-0 podman[437568]: 2025-12-06 08:38:53.37019878 +0000 UTC m=+0.018978353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:38:53 compute-0 podman[437568]: 2025-12-06 08:38:53.46984345 +0000 UTC m=+0.118623083 container init 2e305cd4fb2619a06a67bf843c49882e9c2f0320a0dd8363615b64459003f578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Dec 06 08:38:53 compute-0 podman[437568]: 2025-12-06 08:38:53.475934564 +0000 UTC m=+0.124714127 container start 2e305cd4fb2619a06a67bf843c49882e9c2f0320a0dd8363615b64459003f578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:38:53 compute-0 podman[437568]: 2025-12-06 08:38:53.479417138 +0000 UTC m=+0.128196791 container attach 2e305cd4fb2619a06a67bf843c49882e9c2f0320a0dd8363615b64459003f578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:38:53 compute-0 kind_brown[437585]: 167 167
Dec 06 08:38:53 compute-0 systemd[1]: libpod-2e305cd4fb2619a06a67bf843c49882e9c2f0320a0dd8363615b64459003f578.scope: Deactivated successfully.
Dec 06 08:38:53 compute-0 podman[437568]: 2025-12-06 08:38:53.482041829 +0000 UTC m=+0.130821412 container died 2e305cd4fb2619a06a67bf843c49882e9c2f0320a0dd8363615b64459003f578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:38:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae66c407fb754355b5489513c91d7440ad71c8b654327799095a2f331882d8b2-merged.mount: Deactivated successfully.
Dec 06 08:38:53 compute-0 podman[437568]: 2025-12-06 08:38:53.533030535 +0000 UTC m=+0.181810118 container remove 2e305cd4fb2619a06a67bf843c49882e9c2f0320a0dd8363615b64459003f578 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 08:38:53 compute-0 systemd[1]: libpod-conmon-2e305cd4fb2619a06a67bf843c49882e9c2f0320a0dd8363615b64459003f578.scope: Deactivated successfully.
Dec 06 08:38:53 compute-0 podman[437610]: 2025-12-06 08:38:53.758711696 +0000 UTC m=+0.057039830 container create e66a267c324dd546b6593466399e92d9524f7c460b8e9b7a203a9e05a46361fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 08:38:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:53 compute-0 systemd[1]: Started libpod-conmon-e66a267c324dd546b6593466399e92d9524f7c460b8e9b7a203a9e05a46361fb.scope.
Dec 06 08:38:53 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfa691c924191c72bc651c586947db73aa1dce6366e90ab39e4f6846d5abc0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfa691c924191c72bc651c586947db73aa1dce6366e90ab39e4f6846d5abc0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfa691c924191c72bc651c586947db73aa1dce6366e90ab39e4f6846d5abc0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecfa691c924191c72bc651c586947db73aa1dce6366e90ab39e4f6846d5abc0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:38:53 compute-0 podman[437610]: 2025-12-06 08:38:53.740366261 +0000 UTC m=+0.038694415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:38:53 compute-0 podman[437610]: 2025-12-06 08:38:53.846401913 +0000 UTC m=+0.144730067 container init e66a267c324dd546b6593466399e92d9524f7c460b8e9b7a203a9e05a46361fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_yalow, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:38:53 compute-0 podman[437610]: 2025-12-06 08:38:53.85631004 +0000 UTC m=+0.154638174 container start e66a267c324dd546b6593466399e92d9524f7c460b8e9b7a203a9e05a46361fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:38:53 compute-0 podman[437610]: 2025-12-06 08:38:53.859540027 +0000 UTC m=+0.157868161 container attach e66a267c324dd546b6593466399e92d9524f7c460b8e9b7a203a9e05a46361fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_yalow, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:38:54 compute-0 nova_compute[251992]: 2025-12-06 08:38:54.156 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4409: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:54 compute-0 competent_yalow[437627]: {
Dec 06 08:38:54 compute-0 competent_yalow[437627]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:38:54 compute-0 competent_yalow[437627]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:38:54 compute-0 competent_yalow[437627]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:38:54 compute-0 competent_yalow[437627]:         "osd_id": 0,
Dec 06 08:38:54 compute-0 competent_yalow[437627]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:38:54 compute-0 competent_yalow[437627]:         "type": "bluestore"
Dec 06 08:38:54 compute-0 competent_yalow[437627]:     }
Dec 06 08:38:54 compute-0 competent_yalow[437627]: }
Dec 06 08:38:54 compute-0 systemd[1]: libpod-e66a267c324dd546b6593466399e92d9524f7c460b8e9b7a203a9e05a46361fb.scope: Deactivated successfully.
Dec 06 08:38:54 compute-0 podman[437610]: 2025-12-06 08:38:54.720950746 +0000 UTC m=+1.019278880 container died e66a267c324dd546b6593466399e92d9524f7c460b8e9b7a203a9e05a46361fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_yalow, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 06 08:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecfa691c924191c72bc651c586947db73aa1dce6366e90ab39e4f6846d5abc0a-merged.mount: Deactivated successfully.
Dec 06 08:38:54 compute-0 podman[437610]: 2025-12-06 08:38:54.77925766 +0000 UTC m=+1.077585814 container remove e66a267c324dd546b6593466399e92d9524f7c460b8e9b7a203a9e05a46361fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:38:54 compute-0 systemd[1]: libpod-conmon-e66a267c324dd546b6593466399e92d9524f7c460b8e9b7a203a9e05a46361fb.scope: Deactivated successfully.
Dec 06 08:38:54 compute-0 sudo[437502]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:38:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:38:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:55.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:38:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:38:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:38:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:38:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev b0e37528-e309-4ca0-bfa8-f926c6832495 does not exist
Dec 06 08:38:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 73896eda-d9ba-4164-a220-003d69fb0b49 does not exist
Dec 06 08:38:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 22bbc498-fffb-4028-bf28-aabf0975a550 does not exist
Dec 06 08:38:55 compute-0 nova_compute[251992]: 2025-12-06 08:38:55.371 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:55 compute-0 sudo[437660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:38:55 compute-0 sudo[437660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:55 compute-0 sudo[437660]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:55 compute-0 sudo[437685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:38:55 compute-0 sudo[437685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:38:55 compute-0 sudo[437685]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:55 compute-0 ceph-mon[74339]: pgmap v4409: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:38:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:38:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:55.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4410: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:38:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:38:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:57.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:38:57 compute-0 podman[437712]: 2025-12-06 08:38:57.410398364 +0000 UTC m=+0.059761113 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 08:38:57 compute-0 podman[437711]: 2025-12-06 08:38:57.440067075 +0000 UTC m=+0.087318537 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:38:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:38:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:57.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:38:58 compute-0 ceph-mon[74339]: pgmap v4410: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1865931276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4411: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:38:59.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:38:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/490959542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:38:59 compute-0 ceph-mon[74339]: pgmap v4411: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:38:59 compute-0 nova_compute[251992]: 2025-12-06 08:38:59.161 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:38:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:38:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:38:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:38:59.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:00 compute-0 nova_compute[251992]: 2025-12-06 08:39:00.372 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4412: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:01.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:01.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:01 compute-0 ceph-mon[74339]: pgmap v4412: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4413: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:03.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:03 compute-0 ceph-mon[74339]: pgmap v4413: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:03.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:39:03.915 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:39:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:39:03.916 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:39:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:39:03.916 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:39:04 compute-0 nova_compute[251992]: 2025-12-06 08:39:04.165 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4414: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:39:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:05.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:39:05 compute-0 nova_compute[251992]: 2025-12-06 08:39:05.373 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:05.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4415: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:06 compute-0 ceph-mon[74339]: pgmap v4414: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:07.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:07.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4416: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:08 compute-0 ceph-mon[74339]: pgmap v4415: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:09.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:09 compute-0 nova_compute[251992]: 2025-12-06 08:39:09.168 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:09.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:10 compute-0 ceph-mon[74339]: pgmap v4416: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/869045702' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:39:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/869045702' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:39:10 compute-0 nova_compute[251992]: 2025-12-06 08:39:10.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4417: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:11.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:11 compute-0 ceph-mon[74339]: pgmap v4417: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:11.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4418: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:12 compute-0 nova_compute[251992]: 2025-12-06 08:39:12.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:13 compute-0 sudo[437757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:13 compute-0 sudo[437757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:13 compute-0 sudo[437757]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:39:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:39:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:39:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:39:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:39:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:39:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:13.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:13 compute-0 sudo[437782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:13 compute-0 sudo[437782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:13 compute-0 sudo[437782]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:13 compute-0 ceph-mon[74339]: pgmap v4418: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:13.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:14 compute-0 nova_compute[251992]: 2025-12-06 08:39:14.173 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4419: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:15.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:15 compute-0 nova_compute[251992]: 2025-12-06 08:39:15.376 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:15.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:15 compute-0 ceph-mon[74339]: pgmap v4419: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4420: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:16 compute-0 ceph-mon[74339]: pgmap v4420: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:17.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:17 compute-0 nova_compute[251992]: 2025-12-06 08:39:17.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:17.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4421: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:39:18
Dec 06 08:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'volumes']
Dec 06 08:39:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:39:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:39:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:19.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:39:19 compute-0 nova_compute[251992]: 2025-12-06 08:39:19.177 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:19 compute-0 nova_compute[251992]: 2025-12-06 08:39:19.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:19 compute-0 nova_compute[251992]: 2025-12-06 08:39:19.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:19 compute-0 nova_compute[251992]: 2025-12-06 08:39:19.683 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:39:19 compute-0 nova_compute[251992]: 2025-12-06 08:39:19.684 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:39:19 compute-0 nova_compute[251992]: 2025-12-06 08:39:19.684 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:39:19 compute-0 nova_compute[251992]: 2025-12-06 08:39:19.684 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:39:19 compute-0 nova_compute[251992]: 2025-12-06 08:39:19.684 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:39:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:19.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4422: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:20 compute-0 nova_compute[251992]: 2025-12-06 08:39:20.477 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:20 compute-0 podman[437822]: 2025-12-06 08:39:20.548092615 +0000 UTC m=+0.194315816 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:39:20 compute-0 ceph-mon[74339]: pgmap v4421: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:21.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:21 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:39:21 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3261544347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:21 compute-0 nova_compute[251992]: 2025-12-06 08:39:21.299 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:39:21 compute-0 nova_compute[251992]: 2025-12-06 08:39:21.456 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:39:21 compute-0 nova_compute[251992]: 2025-12-06 08:39:21.457 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4052MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:39:21 compute-0 nova_compute[251992]: 2025-12-06 08:39:21.457 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:39:21 compute-0 nova_compute[251992]: 2025-12-06 08:39:21.457 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:39:21 compute-0 nova_compute[251992]: 2025-12-06 08:39:21.511 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:39:21 compute-0 nova_compute[251992]: 2025-12-06 08:39:21.511 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:39:21 compute-0 nova_compute[251992]: 2025-12-06 08:39:21.685 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:39:21 compute-0 ceph-mon[74339]: pgmap v4422: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3790213319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/509942672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3261544347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:21.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:39:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3223365794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:22 compute-0 nova_compute[251992]: 2025-12-06 08:39:22.095 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:39:22 compute-0 nova_compute[251992]: 2025-12-06 08:39:22.100 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:39:22 compute-0 nova_compute[251992]: 2025-12-06 08:39:22.116 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:39:22 compute-0 nova_compute[251992]: 2025-12-06 08:39:22.117 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:39:22 compute-0 nova_compute[251992]: 2025-12-06 08:39:22.118 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:39:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4423: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3223365794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:23.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:23.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:23 compute-0 ceph-mon[74339]: pgmap v4423: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:39:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:39:24 compute-0 nova_compute[251992]: 2025-12-06 08:39:24.180 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4424: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:24 compute-0 sshd-session[437884]: Connection closed by 193.32.162.146 port 57144
Dec 06 08:39:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:25.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:25 compute-0 nova_compute[251992]: 2025-12-06 08:39:25.479 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:25.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:25 compute-0 ceph-mon[74339]: pgmap v4424: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:26 compute-0 nova_compute[251992]: 2025-12-06 08:39:26.118 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:26 compute-0 nova_compute[251992]: 2025-12-06 08:39:26.119 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:39:26 compute-0 nova_compute[251992]: 2025-12-06 08:39:26.119 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:39:26 compute-0 nova_compute[251992]: 2025-12-06 08:39:26.136 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:39:26 compute-0 nova_compute[251992]: 2025-12-06 08:39:26.136 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:26 compute-0 nova_compute[251992]: 2025-12-06 08:39:26.137 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4425: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:39:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:39:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:27.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:27 compute-0 ceph-mon[74339]: pgmap v4425: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:39:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:39:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:39:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:39:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:39:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:27.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:28 compute-0 podman[437887]: 2025-12-06 08:39:28.397860566 +0000 UTC m=+0.055709434 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 06 08:39:28 compute-0 podman[437888]: 2025-12-06 08:39:28.433130778 +0000 UTC m=+0.079619230 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:39:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4426: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:39:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:29.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:39:29 compute-0 nova_compute[251992]: 2025-12-06 08:39:29.184 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:29 compute-0 ceph-mon[74339]: pgmap v4426: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:29.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4427: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:30 compute-0 nova_compute[251992]: 2025-12-06 08:39:30.481 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:31.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:31 compute-0 ceph-mon[74339]: pgmap v4427: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:31 compute-0 nova_compute[251992]: 2025-12-06 08:39:31.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:31 compute-0 nova_compute[251992]: 2025-12-06 08:39:31.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:39:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:31.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4428: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:33.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:33 compute-0 sudo[437929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:33 compute-0 sudo[437929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:33 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:39:33 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:39:33 compute-0 sudo[437929]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:33 compute-0 sudo[437955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:33 compute-0 sudo[437955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:33 compute-0 sudo[437955]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:33 compute-0 ceph-mon[74339]: pgmap v4428: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:33.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:34 compute-0 nova_compute[251992]: 2025-12-06 08:39:34.188 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4429: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:35.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:35 compute-0 nova_compute[251992]: 2025-12-06 08:39:35.483 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:35 compute-0 ceph-mon[74339]: pgmap v4429: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.002000053s ======
Dec 06 08:39:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:35.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Dec 06 08:39:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4430: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:36 compute-0 nova_compute[251992]: 2025-12-06 08:39:36.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:37.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:37 compute-0 ceph-mon[74339]: pgmap v4430: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4431: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:39.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:39 compute-0 nova_compute[251992]: 2025-12-06 08:39:39.191 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:39 compute-0 ceph-mon[74339]: pgmap v4431: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:39.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4432: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:40 compute-0 nova_compute[251992]: 2025-12-06 08:39:40.484 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:40 compute-0 nova_compute[251992]: 2025-12-06 08:39:40.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:40 compute-0 nova_compute[251992]: 2025-12-06 08:39:40.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:39:40 compute-0 nova_compute[251992]: 2025-12-06 08:39:40.677 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:39:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:41.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:41 compute-0 ceph-mon[74339]: pgmap v4432: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:41.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4433: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:42 compute-0 nova_compute[251992]: 2025-12-06 08:39:42.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:39:42 compute-0 nova_compute[251992]: 2025-12-06 08:39:42.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:39:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:39:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:39:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:39:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:39:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:39:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:39:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:43.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:43 compute-0 ceph-mon[74339]: pgmap v4433: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:39:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:43.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:39:44 compute-0 nova_compute[251992]: 2025-12-06 08:39:44.194 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4434: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:39:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:45.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:39:45 compute-0 nova_compute[251992]: 2025-12-06 08:39:45.486 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:45 compute-0 ceph-mon[74339]: pgmap v4434: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:45.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4435: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:47.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:47.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:47 compute-0 ceph-mon[74339]: pgmap v4435: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4436: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:49.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:49 compute-0 nova_compute[251992]: 2025-12-06 08:39:49.197 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:49.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:49 compute-0 ceph-mon[74339]: pgmap v4436: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4437: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:50 compute-0 nova_compute[251992]: 2025-12-06 08:39:50.488 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:51.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:51 compute-0 podman[437989]: 2025-12-06 08:39:51.445256361 +0000 UTC m=+0.109771673 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:39:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:51.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4438: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:52 compute-0 ceph-mon[74339]: pgmap v4437: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:53.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:53 compute-0 sudo[438016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:53 compute-0 sudo[438016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:53 compute-0 sudo[438016]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:53 compute-0 sudo[438041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:53 compute-0 sudo[438041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:53 compute-0 sudo[438041]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:53 compute-0 ceph-mon[74339]: pgmap v4438: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:53.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:54 compute-0 nova_compute[251992]: 2025-12-06 08:39:54.201 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4439: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:39:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:55.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:39:55 compute-0 nova_compute[251992]: 2025-12-06 08:39:55.491 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:55 compute-0 sudo[438067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:55 compute-0 sudo[438067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:55 compute-0 sudo[438067]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:55.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:55 compute-0 sudo[438092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:39:55 compute-0 sudo[438092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:55 compute-0 sudo[438092]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:55 compute-0 sudo[438117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:55 compute-0 sudo[438117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:55 compute-0 sudo[438117]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:55 compute-0 sudo[438142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:39:55 compute-0 sudo[438142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:56 compute-0 ceph-mon[74339]: pgmap v4439: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:56 compute-0 sudo[438142]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4440: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:39:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:39:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:39:56 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:39:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:39:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:39:57 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4937a903-b7f6-47ce-b2d5-3d41bba2e6c2 does not exist
Dec 06 08:39:57 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3f011ce4-d5bd-461f-983b-862d1120508f does not exist
Dec 06 08:39:57 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 5b465d56-5dcb-45c4-a3e5-c3cccaf4b8d7 does not exist
Dec 06 08:39:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:39:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:39:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:39:57 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:39:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:39:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:39:57 compute-0 sudo[438200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:57 compute-0 sudo[438200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:57 compute-0 sudo[438200]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:57.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:57 compute-0 sudo[438225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:39:57 compute-0 sudo[438225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:57 compute-0 sudo[438225]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:57 compute-0 ceph-mon[74339]: pgmap v4440: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:39:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:39:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:39:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:39:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:39:57 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:39:57 compute-0 sudo[438250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:57 compute-0 sudo[438250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:57 compute-0 sudo[438250]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:39:57 compute-0 sudo[438275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:39:57 compute-0 sudo[438275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:57 compute-0 podman[438341]: 2025-12-06 08:39:57.670436216 +0000 UTC m=+0.058080938 container create fb73eecda64d6dcc269d7407b1e809ebf98ea26350f1d78c281d55bab227dec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 08:39:57 compute-0 systemd[1]: Started libpod-conmon-fb73eecda64d6dcc269d7407b1e809ebf98ea26350f1d78c281d55bab227dec1.scope.
Dec 06 08:39:57 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:39:57 compute-0 podman[438341]: 2025-12-06 08:39:57.632228846 +0000 UTC m=+0.019873598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:39:57 compute-0 podman[438341]: 2025-12-06 08:39:57.740493467 +0000 UTC m=+0.128138209 container init fb73eecda64d6dcc269d7407b1e809ebf98ea26350f1d78c281d55bab227dec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec 06 08:39:57 compute-0 podman[438341]: 2025-12-06 08:39:57.745800651 +0000 UTC m=+0.133445373 container start fb73eecda64d6dcc269d7407b1e809ebf98ea26350f1d78c281d55bab227dec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hypatia, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec 06 08:39:57 compute-0 podman[438341]: 2025-12-06 08:39:57.748509704 +0000 UTC m=+0.136154426 container attach fb73eecda64d6dcc269d7407b1e809ebf98ea26350f1d78c281d55bab227dec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hypatia, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:39:57 compute-0 unruffled_hypatia[438357]: 167 167
Dec 06 08:39:57 compute-0 systemd[1]: libpod-fb73eecda64d6dcc269d7407b1e809ebf98ea26350f1d78c281d55bab227dec1.scope: Deactivated successfully.
Dec 06 08:39:57 compute-0 podman[438341]: 2025-12-06 08:39:57.752053749 +0000 UTC m=+0.139698471 container died fb73eecda64d6dcc269d7407b1e809ebf98ea26350f1d78c281d55bab227dec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.769653) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010397769709, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 2064, "num_deletes": 258, "total_data_size": 3820786, "memory_usage": 3892544, "flush_reason": "Manual Compaction"}
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Dec 06 08:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-25b94584c2d19a571878b822d9ee5657a44ac6812116c651ccbc7c7148c2014a-merged.mount: Deactivated successfully.
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010397790932, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 3754820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89593, "largest_seqno": 91656, "table_properties": {"data_size": 3745477, "index_size": 5900, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18572, "raw_average_key_size": 19, "raw_value_size": 3726930, "raw_average_value_size": 3998, "num_data_blocks": 260, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010174, "oldest_key_time": 1765010174, "file_creation_time": 1765010397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 21342 microseconds, and 8438 cpu microseconds.
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.790989) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 3754820 bytes OK
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.791011) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.792793) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.792809) EVENT_LOG_v1 {"time_micros": 1765010397792804, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.792828) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 3812464, prev total WAL file size 3812464, number of live WAL files 2.
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.793903) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373836' seq:72057594037927935, type:22 .. '6C6F676D0034303430' seq:0, type:0; will stop at (end)
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(3666KB)], [203(12MB)]
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010397793979, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 17126773, "oldest_snapshot_seqno": -1}
Dec 06 08:39:57 compute-0 podman[438341]: 2025-12-06 08:39:57.805695047 +0000 UTC m=+0.193339789 container remove fb73eecda64d6dcc269d7407b1e809ebf98ea26350f1d78c281d55bab227dec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:39:57 compute-0 systemd[1]: libpod-conmon-fb73eecda64d6dcc269d7407b1e809ebf98ea26350f1d78c281d55bab227dec1.scope: Deactivated successfully.
Dec 06 08:39:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:57.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 12818 keys, 17003255 bytes, temperature: kUnknown
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010397904793, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 17003255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16920604, "index_size": 49514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32069, "raw_key_size": 339032, "raw_average_key_size": 26, "raw_value_size": 16696601, "raw_average_value_size": 1302, "num_data_blocks": 1891, "num_entries": 12818, "num_filter_entries": 12818, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765010397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.905315) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 17003255 bytes
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.906565) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.2 rd, 153.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.8 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(9.1) write-amplify(4.5) OK, records in: 13347, records dropped: 529 output_compression: NoCompression
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.906580) EVENT_LOG_v1 {"time_micros": 1765010397906572, "job": 128, "event": "compaction_finished", "compaction_time_micros": 111075, "compaction_time_cpu_micros": 50512, "output_level": 6, "num_output_files": 1, "total_output_size": 17003255, "num_input_records": 13347, "num_output_records": 12818, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010397907217, "job": 128, "event": "table_file_deletion", "file_number": 205}
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010397909073, "job": 128, "event": "table_file_deletion", "file_number": 203}
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.793805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.909126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.909129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.909131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.909133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:39:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:39:57.909134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:39:57 compute-0 podman[438382]: 2025-12-06 08:39:57.975736376 +0000 UTC m=+0.044050820 container create 371fadee2e74ab80960edf0bfc256a5a0c4f7597b815f2ae67366c9c75656552 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:39:58 compute-0 systemd[1]: Started libpod-conmon-371fadee2e74ab80960edf0bfc256a5a0c4f7597b815f2ae67366c9c75656552.scope.
Dec 06 08:39:58 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5101ac5d71942fab4d6f4c1c9ed334b7e7a7b2d460222ee41280d5c5da9eb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5101ac5d71942fab4d6f4c1c9ed334b7e7a7b2d460222ee41280d5c5da9eb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5101ac5d71942fab4d6f4c1c9ed334b7e7a7b2d460222ee41280d5c5da9eb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5101ac5d71942fab4d6f4c1c9ed334b7e7a7b2d460222ee41280d5c5da9eb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5101ac5d71942fab4d6f4c1c9ed334b7e7a7b2d460222ee41280d5c5da9eb0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:39:58 compute-0 podman[438382]: 2025-12-06 08:39:58.052704144 +0000 UTC m=+0.121018598 container init 371fadee2e74ab80960edf0bfc256a5a0c4f7597b815f2ae67366c9c75656552 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 08:39:58 compute-0 podman[438382]: 2025-12-06 08:39:57.958478291 +0000 UTC m=+0.026792785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:39:58 compute-0 podman[438382]: 2025-12-06 08:39:58.059207859 +0000 UTC m=+0.127522303 container start 371fadee2e74ab80960edf0bfc256a5a0c4f7597b815f2ae67366c9c75656552 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec 06 08:39:58 compute-0 podman[438382]: 2025-12-06 08:39:58.062840057 +0000 UTC m=+0.131154521 container attach 371fadee2e74ab80960edf0bfc256a5a0c4f7597b815f2ae67366c9c75656552 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:39:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4441: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3263683812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/409984988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:39:58 compute-0 condescending_dijkstra[438398]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:39:58 compute-0 condescending_dijkstra[438398]: --> relative data size: 1.0
Dec 06 08:39:58 compute-0 condescending_dijkstra[438398]: --> All data devices are unavailable
Dec 06 08:39:58 compute-0 systemd[1]: libpod-371fadee2e74ab80960edf0bfc256a5a0c4f7597b815f2ae67366c9c75656552.scope: Deactivated successfully.
Dec 06 08:39:58 compute-0 podman[438382]: 2025-12-06 08:39:58.89251602 +0000 UTC m=+0.960830484 container died 371fadee2e74ab80960edf0bfc256a5a0c4f7597b815f2ae67366c9c75656552 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 08:39:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c5101ac5d71942fab4d6f4c1c9ed334b7e7a7b2d460222ee41280d5c5da9eb0-merged.mount: Deactivated successfully.
Dec 06 08:39:58 compute-0 podman[438382]: 2025-12-06 08:39:58.947553556 +0000 UTC m=+1.015868000 container remove 371fadee2e74ab80960edf0bfc256a5a0c4f7597b815f2ae67366c9c75656552 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dijkstra, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:39:58 compute-0 systemd[1]: libpod-conmon-371fadee2e74ab80960edf0bfc256a5a0c4f7597b815f2ae67366c9c75656552.scope: Deactivated successfully.
Dec 06 08:39:58 compute-0 sudo[438275]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:58 compute-0 podman[438423]: 2025-12-06 08:39:58.997428522 +0000 UTC m=+0.072139068 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:39:59 compute-0 podman[438414]: 2025-12-06 08:39:59.014275417 +0000 UTC m=+0.092417366 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 06 08:39:59 compute-0 sudo[438463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:59 compute-0 sudo[438463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:59 compute-0 sudo[438463]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:59 compute-0 sudo[438488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:39:59 compute-0 sudo[438488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:59 compute-0 sudo[438488]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:59 compute-0 sudo[438513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:39:59 compute-0 sudo[438513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:59 compute-0 sudo[438513]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:39:59.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:59 compute-0 nova_compute[251992]: 2025-12-06 08:39:59.205 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:39:59 compute-0 sudo[438538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:39:59 compute-0 sudo[438538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:39:59 compute-0 podman[438603]: 2025-12-06 08:39:59.581924508 +0000 UTC m=+0.054843492 container create 3eddca9fa32e57e45d23a3e4d21683e805d2ea800cf1ff69713f654c9f07d4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 08:39:59 compute-0 systemd[1]: Started libpod-conmon-3eddca9fa32e57e45d23a3e4d21683e805d2ea800cf1ff69713f654c9f07d4cb.scope.
Dec 06 08:39:59 compute-0 podman[438603]: 2025-12-06 08:39:59.563694505 +0000 UTC m=+0.036613479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:39:59 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:39:59 compute-0 podman[438603]: 2025-12-06 08:39:59.688217655 +0000 UTC m=+0.161136649 container init 3eddca9fa32e57e45d23a3e4d21683e805d2ea800cf1ff69713f654c9f07d4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 06 08:39:59 compute-0 podman[438603]: 2025-12-06 08:39:59.699360566 +0000 UTC m=+0.172279510 container start 3eddca9fa32e57e45d23a3e4d21683e805d2ea800cf1ff69713f654c9f07d4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:39:59 compute-0 podman[438603]: 2025-12-06 08:39:59.703026946 +0000 UTC m=+0.175945930 container attach 3eddca9fa32e57e45d23a3e4d21683e805d2ea800cf1ff69713f654c9f07d4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 06 08:39:59 compute-0 pedantic_knuth[438619]: 167 167
Dec 06 08:39:59 compute-0 systemd[1]: libpod-3eddca9fa32e57e45d23a3e4d21683e805d2ea800cf1ff69713f654c9f07d4cb.scope: Deactivated successfully.
Dec 06 08:39:59 compute-0 podman[438624]: 2025-12-06 08:39:59.773918039 +0000 UTC m=+0.042675953 container died 3eddca9fa32e57e45d23a3e4d21683e805d2ea800cf1ff69713f654c9f07d4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:39:59 compute-0 ceph-mon[74339]: pgmap v4441: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:39:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-110052a12a3ff3ce6594b95bf16e9e77e3bcad6d94208c8bd7d313ac6fd29c69-merged.mount: Deactivated successfully.
Dec 06 08:39:59 compute-0 podman[438624]: 2025-12-06 08:39:59.816793736 +0000 UTC m=+0.085551640 container remove 3eddca9fa32e57e45d23a3e4d21683e805d2ea800cf1ff69713f654c9f07d4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:39:59 compute-0 systemd[1]: libpod-conmon-3eddca9fa32e57e45d23a3e4d21683e805d2ea800cf1ff69713f654c9f07d4cb.scope: Deactivated successfully.
Dec 06 08:39:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:39:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:39:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:39:59.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:39:59 compute-0 podman[438646]: 2025-12-06 08:39:59.976910708 +0000 UTC m=+0.040058353 container create 1b1131da926e0d4969a9b3fd00747154e2c7a5889412c964332f1e9c4060a966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:40:00 compute-0 ceph-mon[74339]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 06 08:40:00 compute-0 systemd[1]: Started libpod-conmon-1b1131da926e0d4969a9b3fd00747154e2c7a5889412c964332f1e9c4060a966.scope.
Dec 06 08:40:00 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e45af243c6b5ad1b4c5e7a3dd58b596f442be2940b64e7358b50ba251fa4ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e45af243c6b5ad1b4c5e7a3dd58b596f442be2940b64e7358b50ba251fa4ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e45af243c6b5ad1b4c5e7a3dd58b596f442be2940b64e7358b50ba251fa4ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e45af243c6b5ad1b4c5e7a3dd58b596f442be2940b64e7358b50ba251fa4ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:40:00 compute-0 podman[438646]: 2025-12-06 08:40:00.03924216 +0000 UTC m=+0.102389825 container init 1b1131da926e0d4969a9b3fd00747154e2c7a5889412c964332f1e9c4060a966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_albattani, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 08:40:00 compute-0 podman[438646]: 2025-12-06 08:40:00.052294512 +0000 UTC m=+0.115442157 container start 1b1131da926e0d4969a9b3fd00747154e2c7a5889412c964332f1e9c4060a966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec 06 08:40:00 compute-0 podman[438646]: 2025-12-06 08:40:00.055549881 +0000 UTC m=+0.118697546 container attach 1b1131da926e0d4969a9b3fd00747154e2c7a5889412c964332f1e9c4060a966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:40:00 compute-0 podman[438646]: 2025-12-06 08:39:59.959946059 +0000 UTC m=+0.023093724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:40:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4442: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:00 compute-0 nova_compute[251992]: 2025-12-06 08:40:00.493 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]: {
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:     "0": [
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:         {
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             "devices": [
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "/dev/loop3"
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             ],
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             "lv_name": "ceph_lv0",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             "lv_size": "7511998464",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             "name": "ceph_lv0",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             "tags": {
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "ceph.cluster_name": "ceph",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "ceph.crush_device_class": "",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "ceph.encrypted": "0",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "ceph.osd_id": "0",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "ceph.type": "block",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:                 "ceph.vdo": "0"
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             },
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             "type": "block",
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:             "vg_name": "ceph_vg0"
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:         }
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]:     ]
Dec 06 08:40:00 compute-0 mystifying_albattani[438662]: }
Dec 06 08:40:00 compute-0 systemd[1]: libpod-1b1131da926e0d4969a9b3fd00747154e2c7a5889412c964332f1e9c4060a966.scope: Deactivated successfully.
Dec 06 08:40:00 compute-0 conmon[438662]: conmon 1b1131da926e0d4969a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b1131da926e0d4969a9b3fd00747154e2c7a5889412c964332f1e9c4060a966.scope/container/memory.events
Dec 06 08:40:00 compute-0 podman[438646]: 2025-12-06 08:40:00.897125974 +0000 UTC m=+0.960273669 container died 1b1131da926e0d4969a9b3fd00747154e2c7a5889412c964332f1e9c4060a966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-67e45af243c6b5ad1b4c5e7a3dd58b596f442be2940b64e7358b50ba251fa4ca-merged.mount: Deactivated successfully.
Dec 06 08:40:00 compute-0 podman[438646]: 2025-12-06 08:40:00.952979902 +0000 UTC m=+1.016127547 container remove 1b1131da926e0d4969a9b3fd00747154e2c7a5889412c964332f1e9c4060a966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 06 08:40:00 compute-0 systemd[1]: libpod-conmon-1b1131da926e0d4969a9b3fd00747154e2c7a5889412c964332f1e9c4060a966.scope: Deactivated successfully.
Dec 06 08:40:00 compute-0 sudo[438538]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:01 compute-0 sudo[438685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:40:01 compute-0 sudo[438685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:01 compute-0 sudo[438685]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:01 compute-0 sudo[438710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:40:01 compute-0 sudo[438710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:01 compute-0 sudo[438710]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:01 compute-0 sudo[438735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:40:01 compute-0 sudo[438735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:01 compute-0 sudo[438735]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:01.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:01 compute-0 sudo[438760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:40:01 compute-0 sudo[438760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:01 compute-0 ceph-mon[74339]: overall HEALTH_OK
Dec 06 08:40:01 compute-0 podman[438824]: 2025-12-06 08:40:01.575047841 +0000 UTC m=+0.036519197 container create 15f73dd4545a0a3de41b1981d5d75fb56c27a9269782ab5e3e6db6a335efa2d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_meninsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 06 08:40:01 compute-0 systemd[1]: Started libpod-conmon-15f73dd4545a0a3de41b1981d5d75fb56c27a9269782ab5e3e6db6a335efa2d9.scope.
Dec 06 08:40:01 compute-0 podman[438824]: 2025-12-06 08:40:01.559036499 +0000 UTC m=+0.020507855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:40:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:40:01 compute-0 podman[438824]: 2025-12-06 08:40:01.675709708 +0000 UTC m=+0.137181074 container init 15f73dd4545a0a3de41b1981d5d75fb56c27a9269782ab5e3e6db6a335efa2d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 08:40:01 compute-0 podman[438824]: 2025-12-06 08:40:01.687565578 +0000 UTC m=+0.149036934 container start 15f73dd4545a0a3de41b1981d5d75fb56c27a9269782ab5e3e6db6a335efa2d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 08:40:01 compute-0 vigilant_meninsky[438840]: 167 167
Dec 06 08:40:01 compute-0 podman[438824]: 2025-12-06 08:40:01.691021951 +0000 UTC m=+0.152493327 container attach 15f73dd4545a0a3de41b1981d5d75fb56c27a9269782ab5e3e6db6a335efa2d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_meninsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:40:01 compute-0 systemd[1]: libpod-15f73dd4545a0a3de41b1981d5d75fb56c27a9269782ab5e3e6db6a335efa2d9.scope: Deactivated successfully.
Dec 06 08:40:01 compute-0 podman[438824]: 2025-12-06 08:40:01.692299196 +0000 UTC m=+0.153770562 container died 15f73dd4545a0a3de41b1981d5d75fb56c27a9269782ab5e3e6db6a335efa2d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 08:40:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-de8fba3cde4853427341047eb4d139b7edb9605a513833bdd2929e998ac72e29-merged.mount: Deactivated successfully.
Dec 06 08:40:01 compute-0 podman[438824]: 2025-12-06 08:40:01.733533839 +0000 UTC m=+0.195005195 container remove 15f73dd4545a0a3de41b1981d5d75fb56c27a9269782ab5e3e6db6a335efa2d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:40:01 compute-0 systemd[1]: libpod-conmon-15f73dd4545a0a3de41b1981d5d75fb56c27a9269782ab5e3e6db6a335efa2d9.scope: Deactivated successfully.
Dec 06 08:40:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:01.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:01 compute-0 podman[438864]: 2025-12-06 08:40:01.893752803 +0000 UTC m=+0.043129726 container create 36e27163fef743c3a752787f351c3bd5c13bb4faf02314e508d27c0a0e454e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:40:01 compute-0 systemd[1]: Started libpod-conmon-36e27163fef743c3a752787f351c3bd5c13bb4faf02314e508d27c0a0e454e7d.scope.
Dec 06 08:40:01 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7192d11a2c719fac12dc6cfa8cd7b9e62283cc346965e2f260a70cb2443070a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:40:01 compute-0 podman[438864]: 2025-12-06 08:40:01.876660312 +0000 UTC m=+0.026037215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7192d11a2c719fac12dc6cfa8cd7b9e62283cc346965e2f260a70cb2443070a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7192d11a2c719fac12dc6cfa8cd7b9e62283cc346965e2f260a70cb2443070a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:40:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7192d11a2c719fac12dc6cfa8cd7b9e62283cc346965e2f260a70cb2443070a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:40:01 compute-0 podman[438864]: 2025-12-06 08:40:01.988227253 +0000 UTC m=+0.137604156 container init 36e27163fef743c3a752787f351c3bd5c13bb4faf02314e508d27c0a0e454e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:40:02 compute-0 podman[438864]: 2025-12-06 08:40:01.999854397 +0000 UTC m=+0.149231320 container start 36e27163fef743c3a752787f351c3bd5c13bb4faf02314e508d27c0a0e454e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:40:02 compute-0 podman[438864]: 2025-12-06 08:40:02.0051496 +0000 UTC m=+0.154526503 container attach 36e27163fef743c3a752787f351c3bd5c13bb4faf02314e508d27c0a0e454e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:40:02 compute-0 ceph-mon[74339]: pgmap v4442: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.286090) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402286155, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 300, "num_deletes": 251, "total_data_size": 81091, "memory_usage": 86840, "flush_reason": "Manual Compaction"}
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402288189, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 80391, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91657, "largest_seqno": 91956, "table_properties": {"data_size": 78459, "index_size": 159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5111, "raw_average_key_size": 18, "raw_value_size": 74592, "raw_average_value_size": 269, "num_data_blocks": 7, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010398, "oldest_key_time": 1765010398, "file_creation_time": 1765010402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 2094 microseconds, and 767 cpu microseconds.
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.288215) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 80391 bytes OK
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.288226) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.289116) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.289129) EVENT_LOG_v1 {"time_micros": 1765010402289124, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.289145) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 78920, prev total WAL file size 78920, number of live WAL files 2.
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.289463) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(78KB)], [206(16MB)]
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402289491, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 17083646, "oldest_snapshot_seqno": -1}
Dec 06 08:40:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 12585 keys, 14953292 bytes, temperature: kUnknown
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402405559, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 14953292, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14874292, "index_size": 46439, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31493, "raw_key_size": 334889, "raw_average_key_size": 26, "raw_value_size": 14656612, "raw_average_value_size": 1164, "num_data_blocks": 1752, "num_entries": 12585, "num_filter_entries": 12585, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765010402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.405807) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 14953292 bytes
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.407612) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.1 rd, 128.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 16.2 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(398.5) write-amplify(186.0) OK, records in: 13095, records dropped: 510 output_compression: NoCompression
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.407628) EVENT_LOG_v1 {"time_micros": 1765010402407621, "job": 130, "event": "compaction_finished", "compaction_time_micros": 116135, "compaction_time_cpu_micros": 33876, "output_level": 6, "num_output_files": 1, "total_output_size": 14953292, "num_input_records": 13095, "num_output_records": 12585, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402407740, "job": 130, "event": "table_file_deletion", "file_number": 208}
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010402410088, "job": 130, "event": "table_file_deletion", "file_number": 206}
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.289386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.410146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.410154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.410156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.410158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:40:02 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:40:02.410160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:40:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:02 compute-0 nova_compute[251992]: 2025-12-06 08:40:02.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:02 compute-0 funny_cray[438880]: {
Dec 06 08:40:02 compute-0 funny_cray[438880]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:40:02 compute-0 funny_cray[438880]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:40:02 compute-0 funny_cray[438880]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:40:02 compute-0 funny_cray[438880]:         "osd_id": 0,
Dec 06 08:40:02 compute-0 funny_cray[438880]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:40:02 compute-0 funny_cray[438880]:         "type": "bluestore"
Dec 06 08:40:02 compute-0 funny_cray[438880]:     }
Dec 06 08:40:02 compute-0 funny_cray[438880]: }
Dec 06 08:40:02 compute-0 systemd[1]: libpod-36e27163fef743c3a752787f351c3bd5c13bb4faf02314e508d27c0a0e454e7d.scope: Deactivated successfully.
Dec 06 08:40:02 compute-0 podman[438864]: 2025-12-06 08:40:02.842811698 +0000 UTC m=+0.992188601 container died 36e27163fef743c3a752787f351c3bd5c13bb4faf02314e508d27c0a0e454e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7192d11a2c719fac12dc6cfa8cd7b9e62283cc346965e2f260a70cb2443070a-merged.mount: Deactivated successfully.
Dec 06 08:40:02 compute-0 podman[438864]: 2025-12-06 08:40:02.902984011 +0000 UTC m=+1.052360904 container remove 36e27163fef743c3a752787f351c3bd5c13bb4faf02314e508d27c0a0e454e7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:40:02 compute-0 systemd[1]: libpod-conmon-36e27163fef743c3a752787f351c3bd5c13bb4faf02314e508d27c0a0e454e7d.scope: Deactivated successfully.
Dec 06 08:40:02 compute-0 sudo[438760]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:40:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:40:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:40:02 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:40:02 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 770ac8e0-f4a4-4034-b0cc-b438a199ca45 does not exist
Dec 06 08:40:02 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6108b267-1063-4924-869b-ea40391257ea does not exist
Dec 06 08:40:02 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9753a734-6ac3-42eb-96bf-b6c3c4dc4050 does not exist
Dec 06 08:40:03 compute-0 sudo[438915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:40:03 compute-0 sudo[438915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:03 compute-0 sudo[438915]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:03 compute-0 sudo[438940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:40:03 compute-0 sudo[438940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:03 compute-0 sudo[438940]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:03.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:03 compute-0 ceph-mon[74339]: pgmap v4443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:40:03 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:40:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:03.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:40:03.917 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:40:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:40:03.920 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:40:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:40:03.921 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:40:04 compute-0 nova_compute[251992]: 2025-12-06 08:40:04.255 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:05 compute-0 ceph-mon[74339]: pgmap v4444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:05.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:05 compute-0 nova_compute[251992]: 2025-12-06 08:40:05.497 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:05.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4445: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:07.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:07.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:08 compute-0 ceph-mon[74339]: pgmap v4445: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:09.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:09 compute-0 nova_compute[251992]: 2025-12-06 08:40:09.258 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:09 compute-0 ceph-mon[74339]: pgmap v4446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3340009868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:40:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3340009868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:40:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:09.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:10 compute-0 nova_compute[251992]: 2025-12-06 08:40:10.497 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:11.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:11.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:12 compute-0 ceph-mon[74339]: pgmap v4447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:40:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:40:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:40:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:40:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:40:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:40:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:13.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:13 compute-0 sudo[438970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:40:13 compute-0 sudo[438970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:13 compute-0 sudo[438970]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:13 compute-0 ceph-mon[74339]: pgmap v4448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:13 compute-0 sudo[438995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:40:13 compute-0 sudo[438995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:13 compute-0 sudo[438995]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:13.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:14 compute-0 nova_compute[251992]: 2025-12-06 08:40:14.263 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:14 compute-0 nova_compute[251992]: 2025-12-06 08:40:14.748 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:15.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:15 compute-0 nova_compute[251992]: 2025-12-06 08:40:15.499 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:15 compute-0 ceph-mon[74339]: pgmap v4449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:15.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:17.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:17 compute-0 ceph-mon[74339]: pgmap v4450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:17.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:18 compute-0 nova_compute[251992]: 2025-12-06 08:40:18.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:40:18
Dec 06 08:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'images', '.mgr', 'volumes', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data']
Dec 06 08:40:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:40:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:19.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:19 compute-0 nova_compute[251992]: 2025-12-06 08:40:19.266 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:19 compute-0 ceph-mon[74339]: pgmap v4451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:19.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:20 compute-0 nova_compute[251992]: 2025-12-06 08:40:20.501 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:21.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:21 compute-0 nova_compute[251992]: 2025-12-06 08:40:21.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:21 compute-0 nova_compute[251992]: 2025-12-06 08:40:21.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:21 compute-0 nova_compute[251992]: 2025-12-06 08:40:21.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:40:21 compute-0 nova_compute[251992]: 2025-12-06 08:40:21.681 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:40:21 compute-0 nova_compute[251992]: 2025-12-06 08:40:21.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:40:21 compute-0 nova_compute[251992]: 2025-12-06 08:40:21.682 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:40:21 compute-0 nova_compute[251992]: 2025-12-06 08:40:21.683 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:40:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:21.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:22 compute-0 ceph-mon[74339]: pgmap v4452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:40:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073063480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:40:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:22 compute-0 nova_compute[251992]: 2025-12-06 08:40:22.404 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:40:22 compute-0 podman[439046]: 2025-12-06 08:40:22.444914882 +0000 UTC m=+0.097669837 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 08:40:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:22 compute-0 nova_compute[251992]: 2025-12-06 08:40:22.563 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:40:22 compute-0 nova_compute[251992]: 2025-12-06 08:40:22.564 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4037MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:40:22 compute-0 nova_compute[251992]: 2025-12-06 08:40:22.564 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:40:22 compute-0 nova_compute[251992]: 2025-12-06 08:40:22.565 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:40:22 compute-0 nova_compute[251992]: 2025-12-06 08:40:22.646 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:40:22 compute-0 nova_compute[251992]: 2025-12-06 08:40:22.646 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:40:22 compute-0 nova_compute[251992]: 2025-12-06 08:40:22.663 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:40:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:40:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651602309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:40:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3187170638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:40:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2298997144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:40:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2073063480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:40:23 compute-0 ceph-mon[74339]: pgmap v4453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1651602309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:40:23 compute-0 nova_compute[251992]: 2025-12-06 08:40:23.127 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:40:23 compute-0 nova_compute[251992]: 2025-12-06 08:40:23.133 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:40:23 compute-0 nova_compute[251992]: 2025-12-06 08:40:23.148 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:40:23 compute-0 nova_compute[251992]: 2025-12-06 08:40:23.149 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:40:23 compute-0 nova_compute[251992]: 2025-12-06 08:40:23.150 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:40:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:23.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:23.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:40:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:40:24 compute-0 nova_compute[251992]: 2025-12-06 08:40:24.270 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:25.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:25 compute-0 nova_compute[251992]: 2025-12-06 08:40:25.502 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:25 compute-0 ceph-mon[74339]: pgmap v4454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:25.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:40:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:40:27 compute-0 nova_compute[251992]: 2025-12-06 08:40:27.150 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:27 compute-0 nova_compute[251992]: 2025-12-06 08:40:27.151 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:40:27 compute-0 nova_compute[251992]: 2025-12-06 08:40:27.151 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:40:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:27.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:27 compute-0 nova_compute[251992]: 2025-12-06 08:40:27.434 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:40:27 compute-0 nova_compute[251992]: 2025-12-06 08:40:27.434 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:27 compute-0 nova_compute[251992]: 2025-12-06 08:40:27.435 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:40:27 compute-0 ceph-mon[74339]: pgmap v4455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:40:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:40:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:40:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:40:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:27.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:27 compute-0 nova_compute[251992]: 2025-12-06 08:40:27.935 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:29.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:29 compute-0 nova_compute[251992]: 2025-12-06 08:40:29.273 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:29 compute-0 podman[439098]: 2025-12-06 08:40:29.394007567 +0000 UTC m=+0.056226099 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 08:40:29 compute-0 podman[439099]: 2025-12-06 08:40:29.403954836 +0000 UTC m=+0.063344162 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 08:40:29 compute-0 ceph-mon[74339]: pgmap v4456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:29.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:30 compute-0 nova_compute[251992]: 2025-12-06 08:40:30.505 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:31.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:31 compute-0 ceph-mon[74339]: pgmap v4457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:31.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:32 compute-0 nova_compute[251992]: 2025-12-06 08:40:32.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:32 compute-0 nova_compute[251992]: 2025-12-06 08:40:32.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:40:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:33.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:33 compute-0 sudo[439140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:40:33 compute-0 sudo[439140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:33 compute-0 sudo[439140]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:33 compute-0 sudo[439165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:40:33 compute-0 sudo[439165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:33 compute-0 sudo[439165]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:33.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:33 compute-0 ceph-mon[74339]: pgmap v4458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:34 compute-0 nova_compute[251992]: 2025-12-06 08:40:34.276 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:35.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:35 compute-0 nova_compute[251992]: 2025-12-06 08:40:35.506 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:35 compute-0 ceph-mon[74339]: pgmap v4459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:35.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:37.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:37 compute-0 ceph-mon[74339]: pgmap v4460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:37.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:38 compute-0 nova_compute[251992]: 2025-12-06 08:40:38.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:40:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:39.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:39 compute-0 nova_compute[251992]: 2025-12-06 08:40:39.280 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:39 compute-0 ceph-mon[74339]: pgmap v4461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:39.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:40 compute-0 nova_compute[251992]: 2025-12-06 08:40:40.510 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:41.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:41.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:42 compute-0 ceph-mon[74339]: pgmap v4462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:40:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:40:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:40:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:40:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:40:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:40:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:43.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:43 compute-0 ceph-mon[74339]: pgmap v4463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:43.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:44 compute-0 nova_compute[251992]: 2025-12-06 08:40:44.284 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:45.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:45 compute-0 nova_compute[251992]: 2025-12-06 08:40:45.511 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:45 compute-0 ceph-mon[74339]: pgmap v4464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:45.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4465: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:47.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:47.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:48 compute-0 ceph-mon[74339]: pgmap v4465: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4466: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:49.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:49 compute-0 nova_compute[251992]: 2025-12-06 08:40:49.287 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:49.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:50 compute-0 ceph-mon[74339]: pgmap v4466: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4467: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:50 compute-0 nova_compute[251992]: 2025-12-06 08:40:50.515 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:51.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:51 compute-0 ceph-mon[74339]: pgmap v4467: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:51.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4468: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:53.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:53 compute-0 podman[439200]: 2025-12-06 08:40:53.425666775 +0000 UTC m=+0.083143595 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 08:40:53 compute-0 sudo[439225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:40:53 compute-0 sudo[439225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:53 compute-0 sudo[439225]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:53 compute-0 sudo[439250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:40:53 compute-0 sudo[439250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:40:53 compute-0 sudo[439250]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:53 compute-0 ceph-mon[74339]: pgmap v4468: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:53.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:54 compute-0 nova_compute[251992]: 2025-12-06 08:40:54.291 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4469: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:55.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:55 compute-0 nova_compute[251992]: 2025-12-06 08:40:55.516 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:55 compute-0 ceph-mon[74339]: pgmap v4469: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:55.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:40:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4470: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:40:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:57.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:40:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:40:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:57.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4471: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:59 compute-0 ceph-mon[74339]: pgmap v4470: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:40:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:40:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:40:59.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:40:59 compute-0 nova_compute[251992]: 2025-12-06 08:40:59.293 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:40:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:40:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:40:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:40:59.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:41:00 compute-0 podman[439280]: 2025-12-06 08:41:00.393532377 +0000 UTC m=+0.049555867 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:41:00 compute-0 podman[439279]: 2025-12-06 08:41:00.424985006 +0000 UTC m=+0.081619244 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:41:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4472: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:00 compute-0 nova_compute[251992]: 2025-12-06 08:41:00.518 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1709071470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:00 compute-0 ceph-mon[74339]: pgmap v4471: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4092294334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:01.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:01.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:02 compute-0 ceph-mon[74339]: pgmap v4472: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4473: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:03.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:03 compute-0 ceph-mon[74339]: pgmap v4473: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:03 compute-0 sudo[439317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:03 compute-0 sudo[439317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:03 compute-0 sudo[439317]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:03 compute-0 sudo[439342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:41:03 compute-0 sudo[439342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:03 compute-0 sudo[439342]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:03 compute-0 sudo[439367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:03 compute-0 sudo[439367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:03 compute-0 sudo[439367]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:03 compute-0 sudo[439392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Dec 06 08:41:03 compute-0 sudo[439392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:03.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:41:03.917 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:41:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:41:03.918 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:41:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:41:03.918 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:41:04 compute-0 podman[439488]: 2025-12-06 08:41:04.167213959 +0000 UTC m=+0.053540986 container exec 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec 06 08:41:04 compute-0 podman[439488]: 2025-12-06 08:41:04.260466726 +0000 UTC m=+0.146793753 container exec_died 6ea38236040b5ab1f440bc5b9d04bdabbffa6404b87968c907ef776deeab24d0 (image=quay.io/ceph/ceph:v18, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mon-compute-0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:41:04 compute-0 nova_compute[251992]: 2025-12-06 08:41:04.295 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4474: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:05 compute-0 podman[439641]: 2025-12-06 08:41:05.269771517 +0000 UTC m=+0.499423900 container exec 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 08:41:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:05.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:05 compute-0 podman[439662]: 2025-12-06 08:41:05.392327335 +0000 UTC m=+0.104936854 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 08:41:05 compute-0 podman[439641]: 2025-12-06 08:41:05.444986185 +0000 UTC m=+0.674638568 container exec_died 6887fe20f06935b9e07e222fc7df700702068e29cca04887bf0ce2883bc0c94c (image=quay.io/ceph/haproxy:2.3, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-haproxy-rgw-default-compute-0-ybrwqj)
Dec 06 08:41:05 compute-0 nova_compute[251992]: 2025-12-06 08:41:05.520 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:05 compute-0 podman[439707]: 2025-12-06 08:41:05.685509207 +0000 UTC m=+0.064125042 container exec bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20)
Dec 06 08:41:05 compute-0 ceph-mon[74339]: pgmap v4474: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:05 compute-0 podman[439728]: 2025-12-06 08:41:05.779359181 +0000 UTC m=+0.073086444 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vendor=Red Hat, Inc., architecture=x86_64, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.openshift.expose-services=, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:41:05 compute-0 podman[439707]: 2025-12-06 08:41:05.784172831 +0000 UTC m=+0.162788646 container exec_died bf577901bf8d9312161873bed0f8e3ccd63b5e4a97fdc3ea913bb849efddfcb6 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-keepalived-rgw-default-compute-0-fknpoc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, release=1793, version=2.2.4, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, vendor=Red Hat, Inc.)
Dec 06 08:41:05 compute-0 sudo[439392]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:41:05 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:41:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:05.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:06 compute-0 sudo[439740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:06 compute-0 sudo[439740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:06 compute-0 sudo[439740]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:06 compute-0 sudo[439765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:41:06 compute-0 sudo[439765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:06 compute-0 sudo[439765]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:06 compute-0 sudo[439790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:06 compute-0 sudo[439790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:06 compute-0 sudo[439790]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:06 compute-0 sudo[439815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:41:06 compute-0 sudo[439815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4475: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:06 compute-0 sudo[439815]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:41:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:41:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:41:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:41:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:41:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9da22f3a-1abf-4d77-86ad-f9a8f79044be does not exist
Dec 06 08:41:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ee669d36-37ad-4264-b0d0-11f9a31a1b2a does not exist
Dec 06 08:41:06 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 9eb200c0-a540-4b9e-a38d-c6a9b83f5895 does not exist
Dec 06 08:41:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:41:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:41:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:41:06 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:41:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:41:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:41:06 compute-0 sudo[439873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:06 compute-0 sudo[439873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:06 compute-0 sudo[439873]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:06 compute-0 sudo[439898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:41:06 compute-0 sudo[439898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:06 compute-0 sudo[439898]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:07 compute-0 sudo[439923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:07 compute-0 sudo[439923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:07 compute-0 sudo[439923]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:41:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:41:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:41:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:41:07 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:41:07 compute-0 sudo[439948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:41:07 compute-0 sudo[439948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:07.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:07 compute-0 podman[440014]: 2025-12-06 08:41:07.397459262 +0000 UTC m=+0.043686370 container create 5157d33c6716f1385ccd92eee4ed1a99df49a9d07056497a28569feea9ab6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 06 08:41:07 compute-0 systemd[1]: Started libpod-conmon-5157d33c6716f1385ccd92eee4ed1a99df49a9d07056497a28569feea9ab6e99.scope.
Dec 06 08:41:07 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:41:07 compute-0 podman[440014]: 2025-12-06 08:41:07.376869976 +0000 UTC m=+0.023097134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:41:07 compute-0 podman[440014]: 2025-12-06 08:41:07.78125095 +0000 UTC m=+0.427478088 container init 5157d33c6716f1385ccd92eee4ed1a99df49a9d07056497a28569feea9ab6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec 06 08:41:07 compute-0 podman[440014]: 2025-12-06 08:41:07.788067935 +0000 UTC m=+0.434295043 container start 5157d33c6716f1385ccd92eee4ed1a99df49a9d07056497a28569feea9ab6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:41:07 compute-0 silly_chaplygin[440030]: 167 167
Dec 06 08:41:07 compute-0 systemd[1]: libpod-5157d33c6716f1385ccd92eee4ed1a99df49a9d07056497a28569feea9ab6e99.scope: Deactivated successfully.
Dec 06 08:41:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:07.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:07 compute-0 podman[440014]: 2025-12-06 08:41:07.960141188 +0000 UTC m=+0.606368296 container attach 5157d33c6716f1385ccd92eee4ed1a99df49a9d07056497a28569feea9ab6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaplygin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:41:07 compute-0 podman[440014]: 2025-12-06 08:41:07.960676713 +0000 UTC m=+0.606903821 container died 5157d33c6716f1385ccd92eee4ed1a99df49a9d07056497a28569feea9ab6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 06 08:41:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4bca5ad27adf2070469732f3fac3904d2906ad4d6be28d857c0972a5e0af613-merged.mount: Deactivated successfully.
Dec 06 08:41:08 compute-0 podman[440014]: 2025-12-06 08:41:08.009219054 +0000 UTC m=+0.655446172 container remove 5157d33c6716f1385ccd92eee4ed1a99df49a9d07056497a28569feea9ab6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaplygin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 08:41:08 compute-0 systemd[1]: libpod-conmon-5157d33c6716f1385ccd92eee4ed1a99df49a9d07056497a28569feea9ab6e99.scope: Deactivated successfully.
Dec 06 08:41:08 compute-0 ceph-mon[74339]: pgmap v4475: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:08 compute-0 podman[440055]: 2025-12-06 08:41:08.175756678 +0000 UTC m=+0.052983001 container create 3e344aeeee8edc8079933e0d8a37110e650b18bc45d649eb7189115af7a98a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_jones, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 08:41:08 compute-0 systemd[1]: Started libpod-conmon-3e344aeeee8edc8079933e0d8a37110e650b18bc45d649eb7189115af7a98a65.scope.
Dec 06 08:41:08 compute-0 podman[440055]: 2025-12-06 08:41:08.150984819 +0000 UTC m=+0.028211222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:41:08 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ba264f53766c8d6acb5ad7b62d174e774a8df2f412f2cf0e57c8690ff1eeb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ba264f53766c8d6acb5ad7b62d174e774a8df2f412f2cf0e57c8690ff1eeb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ba264f53766c8d6acb5ad7b62d174e774a8df2f412f2cf0e57c8690ff1eeb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ba264f53766c8d6acb5ad7b62d174e774a8df2f412f2cf0e57c8690ff1eeb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1ba264f53766c8d6acb5ad7b62d174e774a8df2f412f2cf0e57c8690ff1eeb3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4476: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:08 compute-0 podman[440055]: 2025-12-06 08:41:08.688919628 +0000 UTC m=+0.566145951 container init 3e344aeeee8edc8079933e0d8a37110e650b18bc45d649eb7189115af7a98a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 08:41:08 compute-0 podman[440055]: 2025-12-06 08:41:08.698750994 +0000 UTC m=+0.575977357 container start 3e344aeeee8edc8079933e0d8a37110e650b18bc45d649eb7189115af7a98a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_jones, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:41:09 compute-0 podman[440055]: 2025-12-06 08:41:09.121280858 +0000 UTC m=+0.998507221 container attach 3e344aeeee8edc8079933e0d8a37110e650b18bc45d649eb7189115af7a98a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_jones, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 08:41:09 compute-0 ceph-mon[74339]: pgmap v4476: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:09.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:09 compute-0 nova_compute[251992]: 2025-12-06 08:41:09.298 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:09 compute-0 dreamy_jones[440071]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:41:09 compute-0 dreamy_jones[440071]: --> relative data size: 1.0
Dec 06 08:41:09 compute-0 dreamy_jones[440071]: --> All data devices are unavailable
Dec 06 08:41:09 compute-0 systemd[1]: libpod-3e344aeeee8edc8079933e0d8a37110e650b18bc45d649eb7189115af7a98a65.scope: Deactivated successfully.
Dec 06 08:41:09 compute-0 podman[440055]: 2025-12-06 08:41:09.533909094 +0000 UTC m=+1.411135437 container died 3e344aeeee8edc8079933e0d8a37110e650b18bc45d649eb7189115af7a98a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec 06 08:41:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1ba264f53766c8d6acb5ad7b62d174e774a8df2f412f2cf0e57c8690ff1eeb3-merged.mount: Deactivated successfully.
Dec 06 08:41:09 compute-0 podman[440055]: 2025-12-06 08:41:09.606430732 +0000 UTC m=+1.483657055 container remove 3e344aeeee8edc8079933e0d8a37110e650b18bc45d649eb7189115af7a98a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_jones, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec 06 08:41:09 compute-0 systemd[1]: libpod-conmon-3e344aeeee8edc8079933e0d8a37110e650b18bc45d649eb7189115af7a98a65.scope: Deactivated successfully.
Dec 06 08:41:09 compute-0 sudo[439948]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:09 compute-0 sudo[440099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:09 compute-0 sudo[440099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:09 compute-0 sudo[440099]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:09 compute-0 sudo[440124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:41:09 compute-0 sudo[440124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:09 compute-0 sudo[440124]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:09 compute-0 sudo[440149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:09 compute-0 sudo[440149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:09 compute-0 sudo[440149]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:09 compute-0 sudo[440174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:41:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:09.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:09 compute-0 sudo[440174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:10 compute-0 podman[440241]: 2025-12-06 08:41:10.26958522 +0000 UTC m=+0.036143446 container create 3c05b2aea2df52fa659d8eb25e386e1dc7c143670e683ffe99179ee09f7d5a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jepsen, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 06 08:41:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/166007238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:41:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/166007238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:41:10 compute-0 systemd[1]: Started libpod-conmon-3c05b2aea2df52fa659d8eb25e386e1dc7c143670e683ffe99179ee09f7d5a0c.scope.
Dec 06 08:41:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:41:10 compute-0 podman[440241]: 2025-12-06 08:41:10.253445374 +0000 UTC m=+0.020003620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:41:10 compute-0 podman[440241]: 2025-12-06 08:41:10.355215561 +0000 UTC m=+0.121773797 container init 3c05b2aea2df52fa659d8eb25e386e1dc7c143670e683ffe99179ee09f7d5a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 08:41:10 compute-0 podman[440241]: 2025-12-06 08:41:10.361685446 +0000 UTC m=+0.128243672 container start 3c05b2aea2df52fa659d8eb25e386e1dc7c143670e683ffe99179ee09f7d5a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jepsen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:41:10 compute-0 podman[440241]: 2025-12-06 08:41:10.365316444 +0000 UTC m=+0.131874670 container attach 3c05b2aea2df52fa659d8eb25e386e1dc7c143670e683ffe99179ee09f7d5a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jepsen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 08:41:10 compute-0 systemd[1]: libpod-3c05b2aea2df52fa659d8eb25e386e1dc7c143670e683ffe99179ee09f7d5a0c.scope: Deactivated successfully.
Dec 06 08:41:10 compute-0 hopeful_jepsen[440257]: 167 167
Dec 06 08:41:10 compute-0 podman[440241]: 2025-12-06 08:41:10.368580172 +0000 UTC m=+0.135138418 container died 3c05b2aea2df52fa659d8eb25e386e1dc7c143670e683ffe99179ee09f7d5a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:41:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c196f678d9433000883c21a457da1caf1a826cd205f5deecf2ee5bdb29d0c37-merged.mount: Deactivated successfully.
Dec 06 08:41:10 compute-0 podman[440241]: 2025-12-06 08:41:10.417948514 +0000 UTC m=+0.184506740 container remove 3c05b2aea2df52fa659d8eb25e386e1dc7c143670e683ffe99179ee09f7d5a0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jepsen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:41:10 compute-0 systemd[1]: libpod-conmon-3c05b2aea2df52fa659d8eb25e386e1dc7c143670e683ffe99179ee09f7d5a0c.scope: Deactivated successfully.
Dec 06 08:41:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4477: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 08:41:10 compute-0 nova_compute[251992]: 2025-12-06 08:41:10.569 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:10 compute-0 podman[440282]: 2025-12-06 08:41:10.572008442 +0000 UTC m=+0.037355729 container create e40fff8ac6ea4cb7c42913fec020765041be49c59c92fe6d342936e2552cbe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:41:10 compute-0 systemd[1]: Started libpod-conmon-e40fff8ac6ea4cb7c42913fec020765041be49c59c92fe6d342936e2552cbe48.scope.
Dec 06 08:41:10 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806324737d5233c9f4ea31462673ad151cc8df0c1c637743ab85f7e1cd33c47a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806324737d5233c9f4ea31462673ad151cc8df0c1c637743ab85f7e1cd33c47a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806324737d5233c9f4ea31462673ad151cc8df0c1c637743ab85f7e1cd33c47a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/806324737d5233c9f4ea31462673ad151cc8df0c1c637743ab85f7e1cd33c47a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:10 compute-0 podman[440282]: 2025-12-06 08:41:10.647910591 +0000 UTC m=+0.113257888 container init e40fff8ac6ea4cb7c42913fec020765041be49c59c92fe6d342936e2552cbe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 08:41:10 compute-0 podman[440282]: 2025-12-06 08:41:10.554831878 +0000 UTC m=+0.020179195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:41:10 compute-0 podman[440282]: 2025-12-06 08:41:10.655377732 +0000 UTC m=+0.120725039 container start e40fff8ac6ea4cb7c42913fec020765041be49c59c92fe6d342936e2552cbe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bartik, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:41:10 compute-0 podman[440282]: 2025-12-06 08:41:10.659521634 +0000 UTC m=+0.124868921 container attach e40fff8ac6ea4cb7c42913fec020765041be49c59c92fe6d342936e2552cbe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bartik, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec 06 08:41:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:11.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]: {
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:     "0": [
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:         {
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             "devices": [
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "/dev/loop3"
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             ],
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             "lv_name": "ceph_lv0",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             "lv_size": "7511998464",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             "name": "ceph_lv0",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             "tags": {
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "ceph.cluster_name": "ceph",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "ceph.crush_device_class": "",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "ceph.encrypted": "0",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "ceph.osd_id": "0",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "ceph.type": "block",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:                 "ceph.vdo": "0"
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             },
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             "type": "block",
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:             "vg_name": "ceph_vg0"
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:         }
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]:     ]
Dec 06 08:41:11 compute-0 dazzling_bartik[440298]: }
Dec 06 08:41:11 compute-0 ceph-mon[74339]: pgmap v4477: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 06 08:41:11 compute-0 systemd[1]: libpod-e40fff8ac6ea4cb7c42913fec020765041be49c59c92fe6d342936e2552cbe48.scope: Deactivated successfully.
Dec 06 08:41:11 compute-0 podman[440282]: 2025-12-06 08:41:11.423773651 +0000 UTC m=+0.889120958 container died e40fff8ac6ea4cb7c42913fec020765041be49c59c92fe6d342936e2552cbe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bartik, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec 06 08:41:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-806324737d5233c9f4ea31462673ad151cc8df0c1c637743ab85f7e1cd33c47a-merged.mount: Deactivated successfully.
Dec 06 08:41:11 compute-0 podman[440282]: 2025-12-06 08:41:11.481575371 +0000 UTC m=+0.946922658 container remove e40fff8ac6ea4cb7c42913fec020765041be49c59c92fe6d342936e2552cbe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Dec 06 08:41:11 compute-0 systemd[1]: libpod-conmon-e40fff8ac6ea4cb7c42913fec020765041be49c59c92fe6d342936e2552cbe48.scope: Deactivated successfully.
Dec 06 08:41:11 compute-0 sudo[440174]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:11 compute-0 sudo[440318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:11 compute-0 sudo[440318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:11 compute-0 sudo[440318]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:11 compute-0 sudo[440343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:41:11 compute-0 sudo[440343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:11 compute-0 sudo[440343]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:11 compute-0 sudo[440368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:11 compute-0 sudo[440368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:11 compute-0 sudo[440368]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:11 compute-0 sudo[440393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:41:11 compute-0 sudo[440393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:11.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:12 compute-0 podman[440459]: 2025-12-06 08:41:12.095767588 +0000 UTC m=+0.019992401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:41:12 compute-0 podman[440459]: 2025-12-06 08:41:12.333899305 +0000 UTC m=+0.258124128 container create 40e438ef5b17c787c982360aeefdb47539f3165335d4bd8a2fdc89695ec91ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Dec 06 08:41:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:12 compute-0 systemd[1]: Started libpod-conmon-40e438ef5b17c787c982360aeefdb47539f3165335d4bd8a2fdc89695ec91ea6.scope.
Dec 06 08:41:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4478: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Dec 06 08:41:12 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:41:13 compute-0 podman[440459]: 2025-12-06 08:41:13.01290322 +0000 UTC m=+0.937128023 container init 40e438ef5b17c787c982360aeefdb47539f3165335d4bd8a2fdc89695ec91ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_noyce, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec 06 08:41:13 compute-0 podman[440459]: 2025-12-06 08:41:13.019240011 +0000 UTC m=+0.943464794 container start 40e438ef5b17c787c982360aeefdb47539f3165335d4bd8a2fdc89695ec91ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_noyce, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 08:41:13 compute-0 silly_noyce[440476]: 167 167
Dec 06 08:41:13 compute-0 systemd[1]: libpod-40e438ef5b17c787c982360aeefdb47539f3165335d4bd8a2fdc89695ec91ea6.scope: Deactivated successfully.
Dec 06 08:41:13 compute-0 conmon[440476]: conmon 40e438ef5b17c787c982 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40e438ef5b17c787c982360aeefdb47539f3165335d4bd8a2fdc89695ec91ea6.scope/container/memory.events
Dec 06 08:41:13 compute-0 podman[440459]: 2025-12-06 08:41:13.034691379 +0000 UTC m=+0.958916192 container attach 40e438ef5b17c787c982360aeefdb47539f3165335d4bd8a2fdc89695ec91ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_noyce, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 08:41:13 compute-0 podman[440459]: 2025-12-06 08:41:13.035081819 +0000 UTC m=+0.959306602 container died 40e438ef5b17c787c982360aeefdb47539f3165335d4bd8a2fdc89695ec91ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:41:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:41:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:41:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:41:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:41:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:41:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:41:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7aa00259a66df38c6136ad6eac1f2e816ec4f8f6990ff5ea2f039a2a54c947a-merged.mount: Deactivated successfully.
Dec 06 08:41:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:13.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:13 compute-0 podman[440459]: 2025-12-06 08:41:13.476357719 +0000 UTC m=+1.400582512 container remove 40e438ef5b17c787c982360aeefdb47539f3165335d4bd8a2fdc89695ec91ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec 06 08:41:13 compute-0 systemd[1]: libpod-conmon-40e438ef5b17c787c982360aeefdb47539f3165335d4bd8a2fdc89695ec91ea6.scope: Deactivated successfully.
Dec 06 08:41:13 compute-0 ceph-mon[74339]: pgmap v4478: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Dec 06 08:41:13 compute-0 podman[440501]: 2025-12-06 08:41:13.61050685 +0000 UTC m=+0.023730521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:41:13 compute-0 podman[440501]: 2025-12-06 08:41:13.792654526 +0000 UTC m=+0.205878177 container create 504cc64fe1b5107d4a034a99303582ec21a6f4bc45aeb77aab68ce4502519ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lewin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:41:13 compute-0 ceph-mgr[74630]: client.0 ms_handle_reset on v2:192.168.122.100:6800/798720280
Dec 06 08:41:13 compute-0 sudo[440515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:13 compute-0 sudo[440515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:13 compute-0 sudo[440515]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:13 compute-0 sudo[440540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:13 compute-0 sudo[440540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:13 compute-0 sudo[440540]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:13.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:14 compute-0 systemd[1]: Started libpod-conmon-504cc64fe1b5107d4a034a99303582ec21a6f4bc45aeb77aab68ce4502519ddf.scope.
Dec 06 08:41:14 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4c0777664bf12d4aec2921745bac9eb29ae6c914f6c193158b55083d5591e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4c0777664bf12d4aec2921745bac9eb29ae6c914f6c193158b55083d5591e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4c0777664bf12d4aec2921745bac9eb29ae6c914f6c193158b55083d5591e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4c0777664bf12d4aec2921745bac9eb29ae6c914f6c193158b55083d5591e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:41:14 compute-0 podman[440501]: 2025-12-06 08:41:14.154628095 +0000 UTC m=+0.567851766 container init 504cc64fe1b5107d4a034a99303582ec21a6f4bc45aeb77aab68ce4502519ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lewin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:41:14 compute-0 podman[440501]: 2025-12-06 08:41:14.161713367 +0000 UTC m=+0.574937018 container start 504cc64fe1b5107d4a034a99303582ec21a6f4bc45aeb77aab68ce4502519ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 08:41:14 compute-0 podman[440501]: 2025-12-06 08:41:14.175766476 +0000 UTC m=+0.588990127 container attach 504cc64fe1b5107d4a034a99303582ec21a6f4bc45aeb77aab68ce4502519ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:41:14 compute-0 nova_compute[251992]: 2025-12-06 08:41:14.302 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4479: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Dec 06 08:41:14 compute-0 dazzling_lewin[440567]: {
Dec 06 08:41:14 compute-0 dazzling_lewin[440567]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:41:14 compute-0 dazzling_lewin[440567]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:41:14 compute-0 dazzling_lewin[440567]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:41:14 compute-0 dazzling_lewin[440567]:         "osd_id": 0,
Dec 06 08:41:14 compute-0 dazzling_lewin[440567]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:41:14 compute-0 dazzling_lewin[440567]:         "type": "bluestore"
Dec 06 08:41:14 compute-0 dazzling_lewin[440567]:     }
Dec 06 08:41:14 compute-0 dazzling_lewin[440567]: }
Dec 06 08:41:14 compute-0 systemd[1]: libpod-504cc64fe1b5107d4a034a99303582ec21a6f4bc45aeb77aab68ce4502519ddf.scope: Deactivated successfully.
Dec 06 08:41:14 compute-0 podman[440501]: 2025-12-06 08:41:14.974556265 +0000 UTC m=+1.387779926 container died 504cc64fe1b5107d4a034a99303582ec21a6f4bc45aeb77aab68ce4502519ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lewin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 06 08:41:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d4c0777664bf12d4aec2921745bac9eb29ae6c914f6c193158b55083d5591e0-merged.mount: Deactivated successfully.
Dec 06 08:41:15 compute-0 podman[440501]: 2025-12-06 08:41:15.291280473 +0000 UTC m=+1.704504124 container remove 504cc64fe1b5107d4a034a99303582ec21a6f4bc45aeb77aab68ce4502519ddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 08:41:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:15.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:15 compute-0 sudo[440393]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:41:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:15 compute-0 systemd[1]: libpod-conmon-504cc64fe1b5107d4a034a99303582ec21a6f4bc45aeb77aab68ce4502519ddf.scope: Deactivated successfully.
Dec 06 08:41:15 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:41:15 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 08e2258b-4719-48d0-8e70-f06eb57ae7bc does not exist
Dec 06 08:41:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 41a31450-c11a-46fe-bdf5-5ee79f9b75c1 does not exist
Dec 06 08:41:15 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2e8a3cd9-9be0-4017-9274-c74c7c82c8d1 does not exist
Dec 06 08:41:15 compute-0 sudo[440602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:15 compute-0 sudo[440602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:15 compute-0 sudo[440602]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:15 compute-0 sudo[440627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:41:15 compute-0 sudo[440627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:15 compute-0 sudo[440627]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:15 compute-0 nova_compute[251992]: 2025-12-06 08:41:15.570 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:15 compute-0 ceph-mon[74339]: pgmap v4479: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Dec 06 08:41:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:15 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:41:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:15.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4480: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 06 08:41:16 compute-0 nova_compute[251992]: 2025-12-06 08:41:16.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:17.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:17.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:18 compute-0 ceph-mon[74339]: pgmap v4480: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 06 08:41:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4481: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 06 08:41:18 compute-0 nova_compute[251992]: 2025-12-06 08:41:18.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:41:18
Dec 06 08:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.meta', 'backups', '.rgw.root', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', '.mgr']
Dec 06 08:41:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:41:19 compute-0 nova_compute[251992]: 2025-12-06 08:41:19.305 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:19.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:19 compute-0 ceph-mon[74339]: pgmap v4481: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Dec 06 08:41:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:41:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:19.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:41:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4482: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 90 op/s
Dec 06 08:41:20 compute-0 nova_compute[251992]: 2025-12-06 08:41:20.573 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:41:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:21.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:41:21 compute-0 ceph-mon[74339]: pgmap v4482: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 90 op/s
Dec 06 08:41:21 compute-0 nova_compute[251992]: 2025-12-06 08:41:21.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:21.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:22 compute-0 nova_compute[251992]: 2025-12-06 08:41:22.243 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:41:22 compute-0 nova_compute[251992]: 2025-12-06 08:41:22.244 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:41:22 compute-0 nova_compute[251992]: 2025-12-06 08:41:22.244 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:41:22 compute-0 nova_compute[251992]: 2025-12-06 08:41:22.244 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:41:22 compute-0 nova_compute[251992]: 2025-12-06 08:41:22.245 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:41:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4483: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Dec 06 08:41:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4159804911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:41:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737536969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:22 compute-0 nova_compute[251992]: 2025-12-06 08:41:22.709 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:41:22 compute-0 nova_compute[251992]: 2025-12-06 08:41:22.898 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:41:22 compute-0 nova_compute[251992]: 2025-12-06 08:41:22.899 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4061MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:41:22 compute-0 nova_compute[251992]: 2025-12-06 08:41:22.899 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:41:22 compute-0 nova_compute[251992]: 2025-12-06 08:41:22.900 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:41:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:23.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:23 compute-0 nova_compute[251992]: 2025-12-06 08:41:23.554 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:41:23 compute-0 nova_compute[251992]: 2025-12-06 08:41:23.554 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:41:23 compute-0 nova_compute[251992]: 2025-12-06 08:41:23.569 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:41:23 compute-0 ceph-mon[74339]: pgmap v4483: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 172 op/s
Dec 06 08:41:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1737536969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:23 compute-0 nova_compute[251992]: 2025-12-06 08:41:23.597 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:41:23 compute-0 nova_compute[251992]: 2025-12-06 08:41:23.598 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:41:23 compute-0 nova_compute[251992]: 2025-12-06 08:41:23.614 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:41:23 compute-0 nova_compute[251992]: 2025-12-06 08:41:23.641 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:41:23 compute-0 nova_compute[251992]: 2025-12-06 08:41:23.657 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:41:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:23.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:41:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:41:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:41:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3380316587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:24 compute-0 nova_compute[251992]: 2025-12-06 08:41:24.092 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:41:24 compute-0 nova_compute[251992]: 2025-12-06 08:41:24.100 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:41:24 compute-0 nova_compute[251992]: 2025-12-06 08:41:24.118 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:41:24 compute-0 nova_compute[251992]: 2025-12-06 08:41:24.121 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:41:24 compute-0 nova_compute[251992]: 2025-12-06 08:41:24.121 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:41:24 compute-0 nova_compute[251992]: 2025-12-06 08:41:24.310 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:24 compute-0 podman[440702]: 2025-12-06 08:41:24.447622212 +0000 UTC m=+0.104295355 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:41:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4484: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Dec 06 08:41:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3839346878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3380316587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:25 compute-0 nova_compute[251992]: 2025-12-06 08:41:25.122 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:25.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:25 compute-0 nova_compute[251992]: 2025-12-06 08:41:25.575 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:25 compute-0 ceph-mon[74339]: pgmap v4484: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Dec 06 08:41:25 compute-0 nova_compute[251992]: 2025-12-06 08:41:25.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:25 compute-0 nova_compute[251992]: 2025-12-06 08:41:25.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:41:25 compute-0 nova_compute[251992]: 2025-12-06 08:41:25.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:41:25 compute-0 nova_compute[251992]: 2025-12-06 08:41:25.819 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:41:25 compute-0 nova_compute[251992]: 2025-12-06 08:41:25.819 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:25.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4485: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Dec 06 08:41:26 compute-0 nova_compute[251992]: 2025-12-06 08:41:26.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:41:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:41:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:27.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:41:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:41:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:41:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:41:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:41:27 compute-0 ceph-mon[74339]: pgmap v4485: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Dec 06 08:41:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:27.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4486: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Dec 06 08:41:29 compute-0 nova_compute[251992]: 2025-12-06 08:41:29.313 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:29.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:29 compute-0 ceph-mon[74339]: pgmap v4486: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Dec 06 08:41:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:29.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4487: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Dec 06 08:41:30 compute-0 nova_compute[251992]: 2025-12-06 08:41:30.578 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:31.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:31 compute-0 podman[440733]: 2025-12-06 08:41:31.394486075 +0000 UTC m=+0.051475211 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 08:41:31 compute-0 podman[440732]: 2025-12-06 08:41:31.411580207 +0000 UTC m=+0.072849308 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:41:31 compute-0 ceph-mon[74339]: pgmap v4487: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Dec 06 08:41:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:31.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4488: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s
Dec 06 08:41:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:33.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:33 compute-0 nova_compute[251992]: 2025-12-06 08:41:33.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:33 compute-0 nova_compute[251992]: 2025-12-06 08:41:33.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:41:33 compute-0 ceph-mon[74339]: pgmap v4488: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 87 op/s
Dec 06 08:41:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:33.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:33 compute-0 sudo[440770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:33 compute-0 sudo[440770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:33 compute-0 sudo[440770]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:34 compute-0 sudo[440795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:34 compute-0 sudo[440795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:34 compute-0 sudo[440795]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:34 compute-0 nova_compute[251992]: 2025-12-06 08:41:34.316 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4489: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 08:41:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:41:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:35.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:41:35 compute-0 nova_compute[251992]: 2025-12-06 08:41:35.580 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:35 compute-0 ceph-mon[74339]: pgmap v4489: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 08:41:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:35.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4490: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 08:41:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:37.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:37 compute-0 ceph-mon[74339]: pgmap v4490: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Dec 06 08:41:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:37.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4491: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:38 compute-0 nova_compute[251992]: 2025-12-06 08:41:38.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:41:39 compute-0 nova_compute[251992]: 2025-12-06 08:41:39.319 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:39.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:39 compute-0 ceph-mon[74339]: pgmap v4491: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:39.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4492: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:40 compute-0 nova_compute[251992]: 2025-12-06 08:41:40.593 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:41:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:41.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:41:41 compute-0 ceph-mon[74339]: pgmap v4492: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:41.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4493: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:41:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:41:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:41:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:41:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:41:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:41:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:43.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:43.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:43 compute-0 ceph-mon[74339]: pgmap v4493: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:44 compute-0 nova_compute[251992]: 2025-12-06 08:41:44.323 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4494: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:41:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:45.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:41:45 compute-0 nova_compute[251992]: 2025-12-06 08:41:45.594 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:45.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4495: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:47 compute-0 ceph-mon[74339]: pgmap v4494: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:47.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:41:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:47.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:41:48 compute-0 ceph-mon[74339]: pgmap v4495: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4496: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:49 compute-0 nova_compute[251992]: 2025-12-06 08:41:49.364 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:49.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:49 compute-0 ceph-mon[74339]: pgmap v4496: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:49.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4497: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:50 compute-0 nova_compute[251992]: 2025-12-06 08:41:50.597 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:41:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:51.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:41:51 compute-0 ceph-mon[74339]: pgmap v4497: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:51.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #210. Immutable memtables: 0.
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.486566) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 210
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512486800, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 1148, "num_deletes": 251, "total_data_size": 1948683, "memory_usage": 1976672, "flush_reason": "Manual Compaction"}
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #211: started
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512499661, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 211, "file_size": 1156403, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91957, "largest_seqno": 93104, "table_properties": {"data_size": 1152133, "index_size": 1793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11184, "raw_average_key_size": 20, "raw_value_size": 1142950, "raw_average_value_size": 2116, "num_data_blocks": 81, "num_entries": 540, "num_filter_entries": 540, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010402, "oldest_key_time": 1765010402, "file_creation_time": 1765010512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 211, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 13121 microseconds, and 7070 cpu microseconds.
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:41:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4498: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.499702) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #211: 1156403 bytes OK
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.499727) [db/memtable_list.cc:519] [default] Level-0 commit table #211 started
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.501577) [db/memtable_list.cc:722] [default] Level-0 commit table #211: memtable #1 done
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.501592) EVENT_LOG_v1 {"time_micros": 1765010512501587, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.501606) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 1943576, prev total WAL file size 1943576, number of live WAL files 2.
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000207.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.502455) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353239' seq:72057594037927935, type:22 .. '6D6772737461740033373831' seq:0, type:0; will stop at (end)
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [211(1129KB)], [209(14MB)]
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512502525, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [211], "files_L6": [209], "score": -1, "input_data_size": 16109695, "oldest_snapshot_seqno": -1}
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #212: 12659 keys, 13008723 bytes, temperature: kUnknown
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512657489, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 212, "file_size": 13008723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12932560, "index_size": 43410, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31685, "raw_key_size": 336585, "raw_average_key_size": 26, "raw_value_size": 12716664, "raw_average_value_size": 1004, "num_data_blocks": 1629, "num_entries": 12659, "num_filter_entries": 12659, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765010512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.658291) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 13008723 bytes
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.663259) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.9 rd, 83.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 14.3 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(25.2) write-amplify(11.2) OK, records in: 13125, records dropped: 466 output_compression: NoCompression
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.663282) EVENT_LOG_v1 {"time_micros": 1765010512663272, "job": 132, "event": "compaction_finished", "compaction_time_micros": 155062, "compaction_time_cpu_micros": 38676, "output_level": 6, "num_output_files": 1, "total_output_size": 13008723, "num_input_records": 13125, "num_output_records": 12659, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000211.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512663759, "job": 132, "event": "table_file_deletion", "file_number": 211}
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010512666355, "job": 132, "event": "table_file_deletion", "file_number": 209}
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.502349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.666445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.666451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.666453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.666454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:41:52 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:41:52.666456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:41:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:41:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:53.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:41:53 compute-0 ceph-mon[74339]: pgmap v4498: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:53.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:54 compute-0 sudo[440830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:54 compute-0 sudo[440830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:54 compute-0 sudo[440830]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:54 compute-0 sudo[440855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:41:54 compute-0 sudo[440855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:41:54 compute-0 sudo[440855]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:54 compute-0 nova_compute[251992]: 2025-12-06 08:41:54.368 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4499: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:55.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:55 compute-0 podman[440881]: 2025-12-06 08:41:55.457094985 +0000 UTC m=+0.108127219 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 08:41:55 compute-0 nova_compute[251992]: 2025-12-06 08:41:55.598 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:55 compute-0 ceph-mon[74339]: pgmap v4499: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:55.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4500: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:57.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:41:57 compute-0 ceph-mon[74339]: pgmap v4500: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:41:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:57.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:41:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4501: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:59 compute-0 nova_compute[251992]: 2025-12-06 08:41:59.374 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:41:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:41:59.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:41:59 compute-0 ceph-mon[74339]: pgmap v4501: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:41:59 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2386261853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:41:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:41:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:41:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:41:59.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4502: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:00 compute-0 nova_compute[251992]: 2025-12-06 08:42:00.640 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1094671400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:01.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:01.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:02 compute-0 ceph-mon[74339]: pgmap v4502: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:02 compute-0 podman[440909]: 2025-12-06 08:42:02.388776209 +0000 UTC m=+0.044274335 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 08:42:02 compute-0 podman[440910]: 2025-12-06 08:42:02.427053112 +0000 UTC m=+0.069717882 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:42:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4503: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:03 compute-0 ceph-mon[74339]: pgmap v4503: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:42:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:03.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:42:03.919 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:42:03.919 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:42:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:42:03.920 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:42:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:03.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:04 compute-0 nova_compute[251992]: 2025-12-06 08:42:04.376 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4504: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:05.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:05 compute-0 ceph-mon[74339]: pgmap v4504: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:05 compute-0 nova_compute[251992]: 2025-12-06 08:42:05.641 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:05.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4505: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:42:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:07.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:42:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:07 compute-0 ceph-mon[74339]: pgmap v4505: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:07.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4506: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:09 compute-0 nova_compute[251992]: 2025-12-06 08:42:09.380 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:09.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:09 compute-0 ceph-mon[74339]: pgmap v4506: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/391879808' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:42:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/391879808' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:42:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:09.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4507: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:10 compute-0 nova_compute[251992]: 2025-12-06 08:42:10.643 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:11.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:11 compute-0 ceph-mon[74339]: pgmap v4507: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:11.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4508: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:42:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:42:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:42:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:42:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:42:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:42:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:13.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:13 compute-0 ceph-mon[74339]: pgmap v4508: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:13.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:14 compute-0 sudo[440953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:14 compute-0 sudo[440953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:14 compute-0 sudo[440953]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:14 compute-0 sudo[440978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:14 compute-0 sudo[440978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:14 compute-0 sudo[440978]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:14 compute-0 nova_compute[251992]: 2025-12-06 08:42:14.453 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4509: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:15.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:15 compute-0 ceph-mon[74339]: pgmap v4509: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:15 compute-0 nova_compute[251992]: 2025-12-06 08:42:15.693 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:15 compute-0 sudo[441003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:15 compute-0 sudo[441003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:15 compute-0 sudo[441003]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:15 compute-0 sudo[441028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:42:15 compute-0 sudo[441028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:15 compute-0 sudo[441028]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:15 compute-0 sudo[441053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:15 compute-0 sudo[441053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:15 compute-0 sudo[441053]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:15 compute-0 sudo[441078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Dec 06 08:42:15 compute-0 sudo[441078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:15.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:16 compute-0 sudo[441078]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:42:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:16 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:42:16 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:16 compute-0 sudo[441124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:16 compute-0 sudo[441124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:16 compute-0 sudo[441124]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:16 compute-0 sudo[441149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:42:16 compute-0 sudo[441149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:16 compute-0 sudo[441149]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:16 compute-0 sudo[441174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:16 compute-0 sudo[441174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:16 compute-0 sudo[441174]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:16 compute-0 sudo[441199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:42:16 compute-0 sudo[441199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4510: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:16 compute-0 nova_compute[251992]: 2025-12-06 08:42:16.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:16 compute-0 sudo[441199]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:42:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:42:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:17 compute-0 ceph-mon[74339]: pgmap v4510: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:17 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:17.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:42:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:42:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:42:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:42:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:42:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4ed758bc-d9a3-4649-9db7-33d48e828d2c does not exist
Dec 06 08:42:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7fdcb651-0ff7-4857-8151-f634665d0288 does not exist
Dec 06 08:42:17 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev dc121127-1779-456d-ada0-550ac87007d3 does not exist
Dec 06 08:42:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:42:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:42:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:42:17 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:42:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:42:17 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:42:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:17.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:17 compute-0 sudo[441255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:17 compute-0 sudo[441255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:17 compute-0 sudo[441255]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:18 compute-0 sudo[441280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:42:18 compute-0 sudo[441280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:18 compute-0 sudo[441280]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:18 compute-0 sudo[441305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:18 compute-0 sudo[441305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:18 compute-0 sudo[441305]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:18 compute-0 sudo[441331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:42:18 compute-0 sudo[441331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:42:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:42:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:42:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:42:18 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:42:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4511: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:18 compute-0 podman[441396]: 2025-12-06 08:42:18.451739443 +0000 UTC m=+0.023986017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:42:18 compute-0 podman[441396]: 2025-12-06 08:42:18.691419623 +0000 UTC m=+0.263666167 container create 4a14e738b6330911e914df404c9bb3c2d4383abc4ad7ffe763a68945354e8511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 06 08:42:18 compute-0 systemd[1]: Started libpod-conmon-4a14e738b6330911e914df404c9bb3c2d4383abc4ad7ffe763a68945354e8511.scope.
Dec 06 08:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:42:18
Dec 06 08:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'images']
Dec 06 08:42:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:42:18 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:42:18 compute-0 podman[441396]: 2025-12-06 08:42:18.774865705 +0000 UTC m=+0.347112279 container init 4a14e738b6330911e914df404c9bb3c2d4383abc4ad7ffe763a68945354e8511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:42:18 compute-0 podman[441396]: 2025-12-06 08:42:18.787953308 +0000 UTC m=+0.360199882 container start 4a14e738b6330911e914df404c9bb3c2d4383abc4ad7ffe763a68945354e8511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:42:18 compute-0 podman[441396]: 2025-12-06 08:42:18.792314526 +0000 UTC m=+0.364561090 container attach 4a14e738b6330911e914df404c9bb3c2d4383abc4ad7ffe763a68945354e8511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:42:18 compute-0 modest_goldberg[441412]: 167 167
Dec 06 08:42:18 compute-0 systemd[1]: libpod-4a14e738b6330911e914df404c9bb3c2d4383abc4ad7ffe763a68945354e8511.scope: Deactivated successfully.
Dec 06 08:42:18 compute-0 podman[441396]: 2025-12-06 08:42:18.79541561 +0000 UTC m=+0.367662204 container died 4a14e738b6330911e914df404c9bb3c2d4383abc4ad7ffe763a68945354e8511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:42:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd1034a5514f7caf57b0467bcd0556bc5ac189e94ef10744b2d2ba01d8da2fca-merged.mount: Deactivated successfully.
Dec 06 08:42:18 compute-0 podman[441396]: 2025-12-06 08:42:18.83470105 +0000 UTC m=+0.406947614 container remove 4a14e738b6330911e914df404c9bb3c2d4383abc4ad7ffe763a68945354e8511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldberg, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec 06 08:42:18 compute-0 systemd[1]: libpod-conmon-4a14e738b6330911e914df404c9bb3c2d4383abc4ad7ffe763a68945354e8511.scope: Deactivated successfully.
Dec 06 08:42:19 compute-0 podman[441437]: 2025-12-06 08:42:19.014492382 +0000 UTC m=+0.041172721 container create b72f2ce36d6d2d41af4e5dfa8b8f97547f66bc91e18e0f962aac7c037b8387d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 08:42:19 compute-0 systemd[1]: Started libpod-conmon-b72f2ce36d6d2d41af4e5dfa8b8f97547f66bc91e18e0f962aac7c037b8387d7.scope.
Dec 06 08:42:19 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050df699990ed10206e7a63c823e5240b123f734e930d36f6752046e822f1d2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050df699990ed10206e7a63c823e5240b123f734e930d36f6752046e822f1d2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050df699990ed10206e7a63c823e5240b123f734e930d36f6752046e822f1d2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050df699990ed10206e7a63c823e5240b123f734e930d36f6752046e822f1d2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050df699990ed10206e7a63c823e5240b123f734e930d36f6752046e822f1d2a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:19 compute-0 podman[441437]: 2025-12-06 08:42:19.08849047 +0000 UTC m=+0.115170849 container init b72f2ce36d6d2d41af4e5dfa8b8f97547f66bc91e18e0f962aac7c037b8387d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 08:42:19 compute-0 podman[441437]: 2025-12-06 08:42:18.995454208 +0000 UTC m=+0.022134597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:42:19 compute-0 podman[441437]: 2025-12-06 08:42:19.09737165 +0000 UTC m=+0.124051999 container start b72f2ce36d6d2d41af4e5dfa8b8f97547f66bc91e18e0f962aac7c037b8387d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_vaughan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 08:42:19 compute-0 podman[441437]: 2025-12-06 08:42:19.101978524 +0000 UTC m=+0.128658863 container attach b72f2ce36d6d2d41af4e5dfa8b8f97547f66bc91e18e0f962aac7c037b8387d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:42:19 compute-0 ceph-mon[74339]: pgmap v4511: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:42:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:19.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:42:19 compute-0 nova_compute[251992]: 2025-12-06 08:42:19.552 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:19 compute-0 wizardly_vaughan[441453]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:42:19 compute-0 wizardly_vaughan[441453]: --> relative data size: 1.0
Dec 06 08:42:19 compute-0 wizardly_vaughan[441453]: --> All data devices are unavailable
Dec 06 08:42:19 compute-0 systemd[1]: libpod-b72f2ce36d6d2d41af4e5dfa8b8f97547f66bc91e18e0f962aac7c037b8387d7.scope: Deactivated successfully.
Dec 06 08:42:19 compute-0 podman[441437]: 2025-12-06 08:42:19.92695641 +0000 UTC m=+0.953636749 container died b72f2ce36d6d2d41af4e5dfa8b8f97547f66bc91e18e0f962aac7c037b8387d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_vaughan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:42:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-050df699990ed10206e7a63c823e5240b123f734e930d36f6752046e822f1d2a-merged.mount: Deactivated successfully.
Dec 06 08:42:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:19.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:19 compute-0 podman[441437]: 2025-12-06 08:42:19.98996442 +0000 UTC m=+1.016644769 container remove b72f2ce36d6d2d41af4e5dfa8b8f97547f66bc91e18e0f962aac7c037b8387d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_vaughan, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 08:42:19 compute-0 systemd[1]: libpod-conmon-b72f2ce36d6d2d41af4e5dfa8b8f97547f66bc91e18e0f962aac7c037b8387d7.scope: Deactivated successfully.
Dec 06 08:42:20 compute-0 sudo[441331]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:20 compute-0 sudo[441483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:20 compute-0 sudo[441483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:20 compute-0 sudo[441483]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:20 compute-0 sudo[441508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:42:20 compute-0 sudo[441508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:20 compute-0 sudo[441508]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:20 compute-0 sudo[441534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:20 compute-0 sudo[441534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:20 compute-0 sudo[441534]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:20 compute-0 sudo[441559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:42:20 compute-0 sudo[441559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4512: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:20 compute-0 podman[441624]: 2025-12-06 08:42:20.612469062 +0000 UTC m=+0.049557479 container create 0fecb1d5401ab17eb86b34288c35cb4c288db62d94f75fb7255c6bde14693b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mahavira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:42:20 compute-0 systemd[1]: Started libpod-conmon-0fecb1d5401ab17eb86b34288c35cb4c288db62d94f75fb7255c6bde14693b50.scope.
Dec 06 08:42:20 compute-0 nova_compute[251992]: 2025-12-06 08:42:20.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:20 compute-0 podman[441624]: 2025-12-06 08:42:20.588293699 +0000 UTC m=+0.025382156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:42:20 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:42:20 compute-0 podman[441624]: 2025-12-06 08:42:20.698045671 +0000 UTC m=+0.135134108 container init 0fecb1d5401ab17eb86b34288c35cb4c288db62d94f75fb7255c6bde14693b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mahavira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:42:20 compute-0 nova_compute[251992]: 2025-12-06 08:42:20.746 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:20 compute-0 podman[441624]: 2025-12-06 08:42:20.752363557 +0000 UTC m=+0.189451974 container start 0fecb1d5401ab17eb86b34288c35cb4c288db62d94f75fb7255c6bde14693b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:42:20 compute-0 podman[441624]: 2025-12-06 08:42:20.756060437 +0000 UTC m=+0.193148854 container attach 0fecb1d5401ab17eb86b34288c35cb4c288db62d94f75fb7255c6bde14693b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mahavira, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:42:20 compute-0 hardcore_mahavira[441641]: 167 167
Dec 06 08:42:20 compute-0 systemd[1]: libpod-0fecb1d5401ab17eb86b34288c35cb4c288db62d94f75fb7255c6bde14693b50.scope: Deactivated successfully.
Dec 06 08:42:20 compute-0 podman[441624]: 2025-12-06 08:42:20.759763897 +0000 UTC m=+0.196852314 container died 0fecb1d5401ab17eb86b34288c35cb4c288db62d94f75fb7255c6bde14693b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 08:42:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5b068c83f3c0f9d11408ff2719406c13d559e01621875b4f4fc9e5e603fede2-merged.mount: Deactivated successfully.
Dec 06 08:42:20 compute-0 podman[441624]: 2025-12-06 08:42:20.79689849 +0000 UTC m=+0.233986897 container remove 0fecb1d5401ab17eb86b34288c35cb4c288db62d94f75fb7255c6bde14693b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 08:42:20 compute-0 systemd[1]: libpod-conmon-0fecb1d5401ab17eb86b34288c35cb4c288db62d94f75fb7255c6bde14693b50.scope: Deactivated successfully.
Dec 06 08:42:20 compute-0 podman[441666]: 2025-12-06 08:42:20.96365305 +0000 UTC m=+0.051451240 container create fdd3855881bb22a364023bcda5e6ddfaf6887caf646a17a20bafdb26f0707012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:42:21 compute-0 systemd[1]: Started libpod-conmon-fdd3855881bb22a364023bcda5e6ddfaf6887caf646a17a20bafdb26f0707012.scope.
Dec 06 08:42:21 compute-0 podman[441666]: 2025-12-06 08:42:20.937801572 +0000 UTC m=+0.025599812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:42:21 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc924c9598f77d6ec660ab6d595faea80a32459058557d2096e7f721c0e649d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc924c9598f77d6ec660ab6d595faea80a32459058557d2096e7f721c0e649d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc924c9598f77d6ec660ab6d595faea80a32459058557d2096e7f721c0e649d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc924c9598f77d6ec660ab6d595faea80a32459058557d2096e7f721c0e649d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:21 compute-0 podman[441666]: 2025-12-06 08:42:21.050182845 +0000 UTC m=+0.137981085 container init fdd3855881bb22a364023bcda5e6ddfaf6887caf646a17a20bafdb26f0707012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:42:21 compute-0 podman[441666]: 2025-12-06 08:42:21.063149456 +0000 UTC m=+0.150947596 container start fdd3855881bb22a364023bcda5e6ddfaf6887caf646a17a20bafdb26f0707012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 06 08:42:21 compute-0 podman[441666]: 2025-12-06 08:42:21.067186644 +0000 UTC m=+0.154984884 container attach fdd3855881bb22a364023bcda5e6ddfaf6887caf646a17a20bafdb26f0707012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:42:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:42:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:21.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:42:21 compute-0 ceph-mon[74339]: pgmap v4512: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:21 compute-0 nova_compute[251992]: 2025-12-06 08:42:21.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:21 compute-0 admiring_benz[441682]: {
Dec 06 08:42:21 compute-0 admiring_benz[441682]:     "0": [
Dec 06 08:42:21 compute-0 admiring_benz[441682]:         {
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             "devices": [
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "/dev/loop3"
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             ],
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             "lv_name": "ceph_lv0",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             "lv_size": "7511998464",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             "name": "ceph_lv0",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             "tags": {
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "ceph.cluster_name": "ceph",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "ceph.crush_device_class": "",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "ceph.encrypted": "0",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "ceph.osd_id": "0",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "ceph.type": "block",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:                 "ceph.vdo": "0"
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             },
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             "type": "block",
Dec 06 08:42:21 compute-0 admiring_benz[441682]:             "vg_name": "ceph_vg0"
Dec 06 08:42:21 compute-0 admiring_benz[441682]:         }
Dec 06 08:42:21 compute-0 admiring_benz[441682]:     ]
Dec 06 08:42:21 compute-0 admiring_benz[441682]: }
Dec 06 08:42:21 compute-0 systemd[1]: libpod-fdd3855881bb22a364023bcda5e6ddfaf6887caf646a17a20bafdb26f0707012.scope: Deactivated successfully.
Dec 06 08:42:21 compute-0 podman[441666]: 2025-12-06 08:42:21.857754381 +0000 UTC m=+0.945552541 container died fdd3855881bb22a364023bcda5e6ddfaf6887caf646a17a20bafdb26f0707012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:42:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cc924c9598f77d6ec660ab6d595faea80a32459058557d2096e7f721c0e649d-merged.mount: Deactivated successfully.
Dec 06 08:42:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:21.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:22 compute-0 podman[441666]: 2025-12-06 08:42:22.016168877 +0000 UTC m=+1.103967027 container remove fdd3855881bb22a364023bcda5e6ddfaf6887caf646a17a20bafdb26f0707012 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_benz, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 06 08:42:22 compute-0 systemd[1]: libpod-conmon-fdd3855881bb22a364023bcda5e6ddfaf6887caf646a17a20bafdb26f0707012.scope: Deactivated successfully.
Dec 06 08:42:22 compute-0 sudo[441559]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:22 compute-0 sudo[441703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:22 compute-0 sudo[441703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:22 compute-0 sudo[441703]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:22 compute-0 sudo[441729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:42:22 compute-0 sudo[441729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:22 compute-0 sudo[441729]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:22 compute-0 sudo[441754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:22 compute-0 sudo[441754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:22 compute-0 sudo[441754]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:22 compute-0 sudo[441779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:42:22 compute-0 sudo[441779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4513: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:22 compute-0 podman[441844]: 2025-12-06 08:42:22.629518821 +0000 UTC m=+0.021928103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:42:22 compute-0 podman[441844]: 2025-12-06 08:42:22.80255351 +0000 UTC m=+0.194962782 container create 5491d5c1148a5a6237c0dc159dd8d996ed750e5ab4851647cc44a92555db97cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brattain, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 08:42:22 compute-0 systemd[1]: Started libpod-conmon-5491d5c1148a5a6237c0dc159dd8d996ed750e5ab4851647cc44a92555db97cd.scope.
Dec 06 08:42:22 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:42:22 compute-0 podman[441844]: 2025-12-06 08:42:22.937052591 +0000 UTC m=+0.329461873 container init 5491d5c1148a5a6237c0dc159dd8d996ed750e5ab4851647cc44a92555db97cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Dec 06 08:42:22 compute-0 podman[441844]: 2025-12-06 08:42:22.943439554 +0000 UTC m=+0.335848806 container start 5491d5c1148a5a6237c0dc159dd8d996ed750e5ab4851647cc44a92555db97cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:42:22 compute-0 upbeat_brattain[441860]: 167 167
Dec 06 08:42:22 compute-0 podman[441844]: 2025-12-06 08:42:22.947231496 +0000 UTC m=+0.339640768 container attach 5491d5c1148a5a6237c0dc159dd8d996ed750e5ab4851647cc44a92555db97cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:42:22 compute-0 systemd[1]: libpod-5491d5c1148a5a6237c0dc159dd8d996ed750e5ab4851647cc44a92555db97cd.scope: Deactivated successfully.
Dec 06 08:42:22 compute-0 podman[441844]: 2025-12-06 08:42:22.948352976 +0000 UTC m=+0.340762228 container died 5491d5c1148a5a6237c0dc159dd8d996ed750e5ab4851647cc44a92555db97cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brattain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec 06 08:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e0af97b51174aa750682116607a9fc8172196a9d737bfb57cb9713da6ceeb9a-merged.mount: Deactivated successfully.
Dec 06 08:42:23 compute-0 podman[441844]: 2025-12-06 08:42:23.251373685 +0000 UTC m=+0.643782937 container remove 5491d5c1148a5a6237c0dc159dd8d996ed750e5ab4851647cc44a92555db97cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_brattain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:42:23 compute-0 systemd[1]: libpod-conmon-5491d5c1148a5a6237c0dc159dd8d996ed750e5ab4851647cc44a92555db97cd.scope: Deactivated successfully.
Dec 06 08:42:23 compute-0 podman[441886]: 2025-12-06 08:42:23.406693076 +0000 UTC m=+0.044804100 container create f7ea0aede50588f3555c36eff5b6da957f6920ddb44ad1581e4e57cdb2edcd72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:42:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:23.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:23 compute-0 systemd[1]: Started libpod-conmon-f7ea0aede50588f3555c36eff5b6da957f6920ddb44ad1581e4e57cdb2edcd72.scope.
Dec 06 08:42:23 compute-0 podman[441886]: 2025-12-06 08:42:23.388816094 +0000 UTC m=+0.026927148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:42:23 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9a8aab982eddd3757069f8f9f361dafe73914d7cc15dfbb494b6c73df9bf6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9a8aab982eddd3757069f8f9f361dafe73914d7cc15dfbb494b6c73df9bf6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9a8aab982eddd3757069f8f9f361dafe73914d7cc15dfbb494b6c73df9bf6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9a8aab982eddd3757069f8f9f361dafe73914d7cc15dfbb494b6c73df9bf6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:42:23 compute-0 podman[441886]: 2025-12-06 08:42:23.563464018 +0000 UTC m=+0.201575062 container init f7ea0aede50588f3555c36eff5b6da957f6920ddb44ad1581e4e57cdb2edcd72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec 06 08:42:23 compute-0 podman[441886]: 2025-12-06 08:42:23.569812429 +0000 UTC m=+0.207923453 container start f7ea0aede50588f3555c36eff5b6da957f6920ddb44ad1581e4e57cdb2edcd72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bardeen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:42:23 compute-0 podman[441886]: 2025-12-06 08:42:23.66061994 +0000 UTC m=+0.298730964 container attach f7ea0aede50588f3555c36eff5b6da957f6920ddb44ad1581e4e57cdb2edcd72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:42:23 compute-0 nova_compute[251992]: 2025-12-06 08:42:23.682 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:42:23 compute-0 nova_compute[251992]: 2025-12-06 08:42:23.685 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:42:23 compute-0 nova_compute[251992]: 2025-12-06 08:42:23.685 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:42:23 compute-0 nova_compute[251992]: 2025-12-06 08:42:23.686 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:42:23 compute-0 nova_compute[251992]: 2025-12-06 08:42:23.686 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:42:23 compute-0 ceph-mon[74339]: pgmap v4513: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:42:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:42:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:23.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:42:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825190703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:24 compute-0 nova_compute[251992]: 2025-12-06 08:42:24.178 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:42:24 compute-0 nova_compute[251992]: 2025-12-06 08:42:24.321 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:42:24 compute-0 nova_compute[251992]: 2025-12-06 08:42:24.323 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3997MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:42:24 compute-0 nova_compute[251992]: 2025-12-06 08:42:24.323 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:42:24 compute-0 nova_compute[251992]: 2025-12-06 08:42:24.324 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:42:24 compute-0 pedantic_bardeen[441902]: {
Dec 06 08:42:24 compute-0 pedantic_bardeen[441902]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:42:24 compute-0 pedantic_bardeen[441902]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:42:24 compute-0 pedantic_bardeen[441902]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:42:24 compute-0 pedantic_bardeen[441902]:         "osd_id": 0,
Dec 06 08:42:24 compute-0 pedantic_bardeen[441902]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:42:24 compute-0 pedantic_bardeen[441902]:         "type": "bluestore"
Dec 06 08:42:24 compute-0 pedantic_bardeen[441902]:     }
Dec 06 08:42:24 compute-0 pedantic_bardeen[441902]: }
Dec 06 08:42:24 compute-0 systemd[1]: libpod-f7ea0aede50588f3555c36eff5b6da957f6920ddb44ad1581e4e57cdb2edcd72.scope: Deactivated successfully.
Dec 06 08:42:24 compute-0 podman[441886]: 2025-12-06 08:42:24.442910654 +0000 UTC m=+1.081021678 container died f7ea0aede50588f3555c36eff5b6da957f6920ddb44ad1581e4e57cdb2edcd72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec 06 08:42:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4514: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:24 compute-0 nova_compute[251992]: 2025-12-06 08:42:24.599 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b9a8aab982eddd3757069f8f9f361dafe73914d7cc15dfbb494b6c73df9bf6e-merged.mount: Deactivated successfully.
Dec 06 08:42:24 compute-0 podman[441886]: 2025-12-06 08:42:24.769595991 +0000 UTC m=+1.407707015 container remove f7ea0aede50588f3555c36eff5b6da957f6920ddb44ad1581e4e57cdb2edcd72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:42:24 compute-0 systemd[1]: libpod-conmon-f7ea0aede50588f3555c36eff5b6da957f6920ddb44ad1581e4e57cdb2edcd72.scope: Deactivated successfully.
Dec 06 08:42:24 compute-0 sudo[441779]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:42:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:24 compute-0 nova_compute[251992]: 2025-12-06 08:42:24.947 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:42:24 compute-0 nova_compute[251992]: 2025-12-06 08:42:24.948 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:42:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:42:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1943936127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3825190703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:24 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:24 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c2a1a38b-66eb-4481-8fd0-ca29941fdf71 does not exist
Dec 06 08:42:24 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7319fb3a-a17c-412f-b67b-2e639a376ae5 does not exist
Dec 06 08:42:24 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0eb3c6be-7fdb-4839-91ad-6a6ea4b0e147 does not exist
Dec 06 08:42:25 compute-0 sudo[441960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:25 compute-0 sudo[441960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:25 compute-0 sudo[441960]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:25 compute-0 sudo[441985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:42:25 compute-0 sudo[441985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:25 compute-0 sudo[441985]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:25 compute-0 nova_compute[251992]: 2025-12-06 08:42:25.116 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:42:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:25.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:25 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:42:25 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688271195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:25 compute-0 nova_compute[251992]: 2025-12-06 08:42:25.560 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:42:25 compute-0 nova_compute[251992]: 2025-12-06 08:42:25.567 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:42:25 compute-0 nova_compute[251992]: 2025-12-06 08:42:25.749 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:25 compute-0 nova_compute[251992]: 2025-12-06 08:42:25.789 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:42:25 compute-0 nova_compute[251992]: 2025-12-06 08:42:25.792 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:42:25 compute-0 nova_compute[251992]: 2025-12-06 08:42:25.793 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:42:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:42:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:25.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:42:26 compute-0 ceph-mon[74339]: pgmap v4514: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:42:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3695553771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2688271195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:42:26 compute-0 podman[442033]: 2025-12-06 08:42:26.427929459 +0000 UTC m=+0.082952341 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:42:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4515: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:26 compute-0 nova_compute[251992]: 2025-12-06 08:42:26.794 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:26 compute-0 nova_compute[251992]: 2025-12-06 08:42:26.795 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:42:26 compute-0 nova_compute[251992]: 2025-12-06 08:42:26.796 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:42:26 compute-0 nova_compute[251992]: 2025-12-06 08:42:26.822 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:42:26 compute-0 nova_compute[251992]: 2025-12-06 08:42:26.823 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:26 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:42:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:27.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:42:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:42:27 compute-0 nova_compute[251992]: 2025-12-06 08:42:27.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:27.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:28 compute-0 ceph-mon[74339]: pgmap v4515: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4516: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:28 compute-0 nova_compute[251992]: 2025-12-06 08:42:28.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:28 compute-0 nova_compute[251992]: 2025-12-06 08:42:28.799 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:29 compute-0 ceph-mon[74339]: pgmap v4516: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:29.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:29 compute-0 nova_compute[251992]: 2025-12-06 08:42:29.604 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:30.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4517: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:30 compute-0 nova_compute[251992]: 2025-12-06 08:42:30.750 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:31.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:31 compute-0 ceph-mon[74339]: pgmap v4517: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:32.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4518: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:33 compute-0 podman[442061]: 2025-12-06 08:42:33.385062758 +0000 UTC m=+0.049763584 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 08:42:33 compute-0 podman[442062]: 2025-12-06 08:42:33.386847087 +0000 UTC m=+0.049490597 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 08:42:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:33.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:33 compute-0 nova_compute[251992]: 2025-12-06 08:42:33.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:33 compute-0 nova_compute[251992]: 2025-12-06 08:42:33.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:42:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:34.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:34 compute-0 ceph-mon[74339]: pgmap v4518: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:34 compute-0 sudo[442100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:34 compute-0 sudo[442100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:34 compute-0 sudo[442100]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:34 compute-0 sudo[442125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:34 compute-0 sudo[442125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:34 compute-0 sudo[442125]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4519: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:34 compute-0 nova_compute[251992]: 2025-12-06 08:42:34.654 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:35 compute-0 ceph-mon[74339]: pgmap v4519: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:35.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:35 compute-0 nova_compute[251992]: 2025-12-06 08:42:35.787 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:36.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4520: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:37.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:37 compute-0 ceph-mon[74339]: pgmap v4520: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:38.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4521: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:42:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:39.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:42:39 compute-0 nova_compute[251992]: 2025-12-06 08:42:39.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:42:39 compute-0 nova_compute[251992]: 2025-12-06 08:42:39.699 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:39 compute-0 ceph-mon[74339]: pgmap v4521: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:40.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4522: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:40 compute-0 nova_compute[251992]: 2025-12-06 08:42:40.802 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:41.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:41 compute-0 ceph-mon[74339]: pgmap v4522: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:42.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4523: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:42:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:42:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:42:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:42:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:42:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:42:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:42:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:43.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:42:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:44.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:44 compute-0 ceph-mon[74339]: pgmap v4523: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4524: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:44 compute-0 nova_compute[251992]: 2025-12-06 08:42:44.702 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:45.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:45 compute-0 nova_compute[251992]: 2025-12-06 08:42:45.845 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:42:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:46.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:42:46 compute-0 ceph-mon[74339]: pgmap v4524: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4525: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:47 compute-0 ceph-mon[74339]: pgmap v4525: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:47.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:48.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4526: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:49.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:49 compute-0 ceph-mon[74339]: pgmap v4526: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:49 compute-0 nova_compute[251992]: 2025-12-06 08:42:49.706 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:50.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4527: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:50 compute-0 nova_compute[251992]: 2025-12-06 08:42:50.849 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:42:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:51.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:42:51 compute-0 ceph-mon[74339]: pgmap v4527: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:52.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4528: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:53.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:53 compute-0 ceph-mon[74339]: pgmap v4528: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:54.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:54 compute-0 sudo[442160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:54 compute-0 sudo[442160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:54 compute-0 sudo[442160]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4529: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:54 compute-0 sudo[442185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:42:54 compute-0 sudo[442185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:42:54 compute-0 sudo[442185]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:54 compute-0 nova_compute[251992]: 2025-12-06 08:42:54.710 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:55.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:55 compute-0 ceph-mon[74339]: pgmap v4529: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:55 compute-0 nova_compute[251992]: 2025-12-06 08:42:55.895 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:42:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:56.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4530: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:57 compute-0 podman[442211]: 2025-12-06 08:42:57.418958316 +0000 UTC m=+0.081501260 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 08:42:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:57.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:42:57 compute-0 ceph-mon[74339]: pgmap v4530: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:42:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:42:58.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:42:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4531: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:42:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:42:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:42:59.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:42:59 compute-0 ceph-mon[74339]: pgmap v4531: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:42:59 compute-0 nova_compute[251992]: 2025-12-06 08:42:59.713 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:00.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4532: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2862912728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:00 compute-0 nova_compute[251992]: 2025-12-06 08:43:00.898 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:01.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:01 compute-0 ceph-mon[74339]: pgmap v4532: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/608419154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:02.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4533: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:03.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:03 compute-0 ceph-mon[74339]: pgmap v4533: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:43:03.921 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:43:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:43:03.924 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:43:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:43:03.924 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:43:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:04.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:04 compute-0 podman[442242]: 2025-12-06 08:43:04.390761982 +0000 UTC m=+0.049914538 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:43:04 compute-0 podman[442241]: 2025-12-06 08:43:04.416009194 +0000 UTC m=+0.077027320 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 06 08:43:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4534: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:04 compute-0 nova_compute[251992]: 2025-12-06 08:43:04.716 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:05.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:05 compute-0 ceph-mon[74339]: pgmap v4534: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:05 compute-0 nova_compute[251992]: 2025-12-06 08:43:05.899 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:06.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4535: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:07.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:07 compute-0 ceph-mon[74339]: pgmap v4535: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:08.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4536: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:09.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:09 compute-0 nova_compute[251992]: 2025-12-06 08:43:09.719 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:09 compute-0 ceph-mon[74339]: pgmap v4536: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3926080530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:43:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3926080530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:43:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:10.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4537: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:10 compute-0 nova_compute[251992]: 2025-12-06 08:43:10.946 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:11.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:12.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:12 compute-0 ceph-mon[74339]: pgmap v4537: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4538: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:43:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:43:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:43:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:43:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:43:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:43:13 compute-0 ceph-mon[74339]: pgmap v4538: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:13.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:14.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4539: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:14 compute-0 sudo[442286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:14 compute-0 sudo[442286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:14 compute-0 sudo[442286]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:14 compute-0 sudo[442311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:14 compute-0 sudo[442311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:14 compute-0 sudo[442311]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:14 compute-0 nova_compute[251992]: 2025-12-06 08:43:14.722 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:15.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:15 compute-0 ceph-mon[74339]: pgmap v4539: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:15 compute-0 nova_compute[251992]: 2025-12-06 08:43:15.948 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:16.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4540: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:16 compute-0 nova_compute[251992]: 2025-12-06 08:43:16.649 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:17.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:17 compute-0 ceph-mon[74339]: pgmap v4540: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:18.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4541: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:43:18
Dec 06 08:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.mgr', 'vms', 'volumes', 'default.rgw.meta', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Dec 06 08:43:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:43:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:19.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:19 compute-0 ceph-mon[74339]: pgmap v4541: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:19 compute-0 nova_compute[251992]: 2025-12-06 08:43:19.726 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:43:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:20.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:43:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4542: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:20 compute-0 nova_compute[251992]: 2025-12-06 08:43:20.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:20 compute-0 nova_compute[251992]: 2025-12-06 08:43:20.951 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:21.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:21 compute-0 nova_compute[251992]: 2025-12-06 08:43:21.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:21 compute-0 nova_compute[251992]: 2025-12-06 08:43:21.687 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:43:21 compute-0 nova_compute[251992]: 2025-12-06 08:43:21.688 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:43:21 compute-0 nova_compute[251992]: 2025-12-06 08:43:21.688 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:43:21 compute-0 nova_compute[251992]: 2025-12-06 08:43:21.688 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:43:21 compute-0 nova_compute[251992]: 2025-12-06 08:43:21.689 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:43:21 compute-0 ceph-mon[74339]: pgmap v4542: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2520334797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:22.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:43:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1253960128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.108 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.278 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.280 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4067MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.280 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.280 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.494 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.495 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:43:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.513 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:43:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4543: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1253960128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2003389108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:43:22 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2416721712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.950 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.959 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.978 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.980 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:43:22 compute-0 nova_compute[251992]: 2025-12-06 08:43:22.980 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:43:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:23.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:23 compute-0 ceph-mon[74339]: pgmap v4543: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2416721712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:43:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:43:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:24.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4544: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:24 compute-0 nova_compute[251992]: 2025-12-06 08:43:24.730 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:24 compute-0 nova_compute[251992]: 2025-12-06 08:43:24.981 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:25 compute-0 sudo[442385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:25 compute-0 sudo[442385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:25 compute-0 sudo[442385]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:25 compute-0 sudo[442410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:43:25 compute-0 sudo[442410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:25 compute-0 sudo[442410]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:25.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:25 compute-0 sudo[442435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:25 compute-0 sudo[442435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:25 compute-0 sudo[442435]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:25 compute-0 sudo[442460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:43:25 compute-0 sudo[442460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:25 compute-0 ceph-mon[74339]: pgmap v4544: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:25 compute-0 nova_compute[251992]: 2025-12-06 08:43:25.951 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:26.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:26 compute-0 sudo[442460]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:43:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:43:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:43:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:43:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:43:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:43:26 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 42c33e96-9607-4134-9475-813680a50088 does not exist
Dec 06 08:43:26 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev c8b03353-2262-4577-b531-2aa78131dead does not exist
Dec 06 08:43:26 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3720c720-c708-4197-93aa-7f8396750e73 does not exist
Dec 06 08:43:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:43:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:43:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:43:26 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:43:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:43:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:43:26 compute-0 sudo[442516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:26 compute-0 sudo[442516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:26 compute-0 sudo[442516]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:26 compute-0 sudo[442541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:43:26 compute-0 sudo[442541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:26 compute-0 sudo[442541]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:26 compute-0 sudo[442566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:26 compute-0 sudo[442566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:26 compute-0 sudo[442566]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:26 compute-0 sudo[442591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:43:26 compute-0 sudo[442591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4545: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:26 compute-0 nova_compute[251992]: 2025-12-06 08:43:26.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:26 compute-0 nova_compute[251992]: 2025-12-06 08:43:26.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:43:26 compute-0 nova_compute[251992]: 2025-12-06 08:43:26.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:43:26 compute-0 nova_compute[251992]: 2025-12-06 08:43:26.675 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:43:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:43:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:43:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:43:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:43:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:43:26 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:43:26 compute-0 podman[442656]: 2025-12-06 08:43:26.857433442 +0000 UTC m=+0.043381282 container create 8dfa9b7dabeaaeeb5e51af25b13e69697195c9a5291fc18657d4632634acdcf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_engelbart, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:43:26 compute-0 systemd[1]: Started libpod-conmon-8dfa9b7dabeaaeeb5e51af25b13e69697195c9a5291fc18657d4632634acdcf0.scope.
Dec 06 08:43:26 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:43:26 compute-0 podman[442656]: 2025-12-06 08:43:26.836148537 +0000 UTC m=+0.022096427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:43:26 compute-0 podman[442656]: 2025-12-06 08:43:26.940264237 +0000 UTC m=+0.126212097 container init 8dfa9b7dabeaaeeb5e51af25b13e69697195c9a5291fc18657d4632634acdcf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_engelbart, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 08:43:26 compute-0 podman[442656]: 2025-12-06 08:43:26.948339305 +0000 UTC m=+0.134287145 container start 8dfa9b7dabeaaeeb5e51af25b13e69697195c9a5291fc18657d4632634acdcf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_engelbart, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:43:26 compute-0 podman[442656]: 2025-12-06 08:43:26.953080184 +0000 UTC m=+0.139028044 container attach 8dfa9b7dabeaaeeb5e51af25b13e69697195c9a5291fc18657d4632634acdcf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 08:43:26 compute-0 systemd[1]: libpod-8dfa9b7dabeaaeeb5e51af25b13e69697195c9a5291fc18657d4632634acdcf0.scope: Deactivated successfully.
Dec 06 08:43:26 compute-0 upbeat_engelbart[442672]: 167 167
Dec 06 08:43:26 compute-0 conmon[442672]: conmon 8dfa9b7dabeaaeeb5e51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8dfa9b7dabeaaeeb5e51af25b13e69697195c9a5291fc18657d4632634acdcf0.scope/container/memory.events
Dec 06 08:43:26 compute-0 podman[442656]: 2025-12-06 08:43:26.95589466 +0000 UTC m=+0.141842510 container died 8dfa9b7dabeaaeeb5e51af25b13e69697195c9a5291fc18657d4632634acdcf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_engelbart, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 08:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-28da0abb6c2842b9a126b5bda8c6c5439a15575ac19337d91359caa404799936-merged.mount: Deactivated successfully.
Dec 06 08:43:26 compute-0 podman[442656]: 2025-12-06 08:43:26.992851887 +0000 UTC m=+0.178799727 container remove 8dfa9b7dabeaaeeb5e51af25b13e69697195c9a5291fc18657d4632634acdcf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_engelbart, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:43:27 compute-0 systemd[1]: libpod-conmon-8dfa9b7dabeaaeeb5e51af25b13e69697195c9a5291fc18657d4632634acdcf0.scope: Deactivated successfully.
Dec 06 08:43:27 compute-0 podman[442698]: 2025-12-06 08:43:27.187174412 +0000 UTC m=+0.045634843 container create 1c84c0c70f150ee56a360691895e188715ddd68e4fb135ca5e37bc01da48bc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 08:43:27 compute-0 systemd[1]: Started libpod-conmon-1c84c0c70f150ee56a360691895e188715ddd68e4fb135ca5e37bc01da48bc2e.scope.
Dec 06 08:43:27 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6bd2736268ceb0df75121f4fcd1100b814a6897932c5c2f8d3560a9f49d395/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6bd2736268ceb0df75121f4fcd1100b814a6897932c5c2f8d3560a9f49d395/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6bd2736268ceb0df75121f4fcd1100b814a6897932c5c2f8d3560a9f49d395/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6bd2736268ceb0df75121f4fcd1100b814a6897932c5c2f8d3560a9f49d395/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6bd2736268ceb0df75121f4fcd1100b814a6897932c5c2f8d3560a9f49d395/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:27 compute-0 podman[442698]: 2025-12-06 08:43:27.16933407 +0000 UTC m=+0.027794521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:43:27 compute-0 podman[442698]: 2025-12-06 08:43:27.266422291 +0000 UTC m=+0.124882752 container init 1c84c0c70f150ee56a360691895e188715ddd68e4fb135ca5e37bc01da48bc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:43:27 compute-0 podman[442698]: 2025-12-06 08:43:27.274586161 +0000 UTC m=+0.133046592 container start 1c84c0c70f150ee56a360691895e188715ddd68e4fb135ca5e37bc01da48bc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:43:27 compute-0 podman[442698]: 2025-12-06 08:43:27.278185308 +0000 UTC m=+0.136645759 container attach 1c84c0c70f150ee56a360691895e188715ddd68e4fb135ca5e37bc01da48bc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:43:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:27.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:43:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:43:27 compute-0 ceph-mon[74339]: pgmap v4545: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:43:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:28.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:43:28 compute-0 distracted_elion[442714]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:43:28 compute-0 distracted_elion[442714]: --> relative data size: 1.0
Dec 06 08:43:28 compute-0 distracted_elion[442714]: --> All data devices are unavailable
Dec 06 08:43:28 compute-0 podman[442698]: 2025-12-06 08:43:28.131254863 +0000 UTC m=+0.989715304 container died 1c84c0c70f150ee56a360691895e188715ddd68e4fb135ca5e37bc01da48bc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:43:28 compute-0 systemd[1]: libpod-1c84c0c70f150ee56a360691895e188715ddd68e4fb135ca5e37bc01da48bc2e.scope: Deactivated successfully.
Dec 06 08:43:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b6bd2736268ceb0df75121f4fcd1100b814a6897932c5c2f8d3560a9f49d395-merged.mount: Deactivated successfully.
Dec 06 08:43:28 compute-0 podman[442698]: 2025-12-06 08:43:28.196519084 +0000 UTC m=+1.054979525 container remove 1c84c0c70f150ee56a360691895e188715ddd68e4fb135ca5e37bc01da48bc2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:43:28 compute-0 systemd[1]: libpod-conmon-1c84c0c70f150ee56a360691895e188715ddd68e4fb135ca5e37bc01da48bc2e.scope: Deactivated successfully.
Dec 06 08:43:28 compute-0 sudo[442591]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:28 compute-0 sudo[442760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:28 compute-0 podman[442731]: 2025-12-06 08:43:28.287726765 +0000 UTC m=+0.114888602 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 08:43:28 compute-0 sudo[442760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:28 compute-0 sudo[442760]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:28 compute-0 sudo[442792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:43:28 compute-0 sudo[442792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:28 compute-0 sudo[442792]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:28 compute-0 sudo[442817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:28 compute-0 sudo[442817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:28 compute-0 sudo[442817]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:28 compute-0 sudo[442842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:43:28 compute-0 sudo[442842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4546: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:28 compute-0 nova_compute[251992]: 2025-12-06 08:43:28.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:28 compute-0 podman[442906]: 2025-12-06 08:43:28.866402744 +0000 UTC m=+0.041156062 container create 94bcf2c2f76d5e3573fb54a22f87f974529e4da40247d3f7d66b7cb158ba113f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:43:28 compute-0 systemd[1]: Started libpod-conmon-94bcf2c2f76d5e3573fb54a22f87f974529e4da40247d3f7d66b7cb158ba113f.scope.
Dec 06 08:43:28 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:43:28 compute-0 podman[442906]: 2025-12-06 08:43:28.94072314 +0000 UTC m=+0.115476468 container init 94bcf2c2f76d5e3573fb54a22f87f974529e4da40247d3f7d66b7cb158ba113f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lumiere, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 08:43:28 compute-0 podman[442906]: 2025-12-06 08:43:28.846439095 +0000 UTC m=+0.021192423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:43:28 compute-0 podman[442906]: 2025-12-06 08:43:28.947053801 +0000 UTC m=+0.121807109 container start 94bcf2c2f76d5e3573fb54a22f87f974529e4da40247d3f7d66b7cb158ba113f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 08:43:28 compute-0 podman[442906]: 2025-12-06 08:43:28.950331829 +0000 UTC m=+0.125085157 container attach 94bcf2c2f76d5e3573fb54a22f87f974529e4da40247d3f7d66b7cb158ba113f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:43:28 compute-0 naughty_lumiere[442922]: 167 167
Dec 06 08:43:28 compute-0 systemd[1]: libpod-94bcf2c2f76d5e3573fb54a22f87f974529e4da40247d3f7d66b7cb158ba113f.scope: Deactivated successfully.
Dec 06 08:43:28 compute-0 podman[442906]: 2025-12-06 08:43:28.952817086 +0000 UTC m=+0.127570414 container died 94bcf2c2f76d5e3573fb54a22f87f974529e4da40247d3f7d66b7cb158ba113f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lumiere, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:43:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b63d4a3eaa5831b54b95b36c75da7d4d0129af1e5fc2b253b8f8c077c7ad762-merged.mount: Deactivated successfully.
Dec 06 08:43:28 compute-0 podman[442906]: 2025-12-06 08:43:28.986360952 +0000 UTC m=+0.161114260 container remove 94bcf2c2f76d5e3573fb54a22f87f974529e4da40247d3f7d66b7cb158ba113f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lumiere, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 08:43:28 compute-0 systemd[1]: libpod-conmon-94bcf2c2f76d5e3573fb54a22f87f974529e4da40247d3f7d66b7cb158ba113f.scope: Deactivated successfully.
Dec 06 08:43:29 compute-0 podman[442946]: 2025-12-06 08:43:29.131458398 +0000 UTC m=+0.043077494 container create 1f9836eda8b8533e6d613a4253909a94d21d62c8aedf71069e78bcbab47be166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:43:29 compute-0 systemd[1]: Started libpod-conmon-1f9836eda8b8533e6d613a4253909a94d21d62c8aedf71069e78bcbab47be166.scope.
Dec 06 08:43:29 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef16b04025418675476f7517d2fc735e335e55a06bef37cbd8c235ba11217b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef16b04025418675476f7517d2fc735e335e55a06bef37cbd8c235ba11217b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef16b04025418675476f7517d2fc735e335e55a06bef37cbd8c235ba11217b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef16b04025418675476f7517d2fc735e335e55a06bef37cbd8c235ba11217b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:29 compute-0 podman[442946]: 2025-12-06 08:43:29.187584833 +0000 UTC m=+0.099203939 container init 1f9836eda8b8533e6d613a4253909a94d21d62c8aedf71069e78bcbab47be166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lamport, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:43:29 compute-0 podman[442946]: 2025-12-06 08:43:29.19598913 +0000 UTC m=+0.107608226 container start 1f9836eda8b8533e6d613a4253909a94d21d62c8aedf71069e78bcbab47be166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 06 08:43:29 compute-0 podman[442946]: 2025-12-06 08:43:29.198889608 +0000 UTC m=+0.110508744 container attach 1f9836eda8b8533e6d613a4253909a94d21d62c8aedf71069e78bcbab47be166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lamport, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec 06 08:43:29 compute-0 podman[442946]: 2025-12-06 08:43:29.111133579 +0000 UTC m=+0.022752715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:43:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:43:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:43:29 compute-0 nova_compute[251992]: 2025-12-06 08:43:29.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:29 compute-0 nova_compute[251992]: 2025-12-06 08:43:29.732 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:29 compute-0 ceph-mon[74339]: pgmap v4546: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:29 compute-0 amazing_lamport[442962]: {
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:     "0": [
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:         {
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             "devices": [
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "/dev/loop3"
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             ],
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             "lv_name": "ceph_lv0",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             "lv_size": "7511998464",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             "name": "ceph_lv0",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             "tags": {
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "ceph.cluster_name": "ceph",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "ceph.crush_device_class": "",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "ceph.encrypted": "0",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "ceph.osd_id": "0",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "ceph.type": "block",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:                 "ceph.vdo": "0"
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             },
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             "type": "block",
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:             "vg_name": "ceph_vg0"
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:         }
Dec 06 08:43:29 compute-0 amazing_lamport[442962]:     ]
Dec 06 08:43:29 compute-0 amazing_lamport[442962]: }
Dec 06 08:43:29 compute-0 systemd[1]: libpod-1f9836eda8b8533e6d613a4253909a94d21d62c8aedf71069e78bcbab47be166.scope: Deactivated successfully.
Dec 06 08:43:29 compute-0 podman[442946]: 2025-12-06 08:43:29.918042147 +0000 UTC m=+0.829661273 container died 1f9836eda8b8533e6d613a4253909a94d21d62c8aedf71069e78bcbab47be166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:43:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-aef16b04025418675476f7517d2fc735e335e55a06bef37cbd8c235ba11217b5-merged.mount: Deactivated successfully.
Dec 06 08:43:29 compute-0 podman[442946]: 2025-12-06 08:43:29.98334603 +0000 UTC m=+0.894965126 container remove 1f9836eda8b8533e6d613a4253909a94d21d62c8aedf71069e78bcbab47be166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lamport, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:43:29 compute-0 systemd[1]: libpod-conmon-1f9836eda8b8533e6d613a4253909a94d21d62c8aedf71069e78bcbab47be166.scope: Deactivated successfully.
Dec 06 08:43:30 compute-0 sudo[442842]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:30.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:30 compute-0 sudo[442983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:30 compute-0 sudo[442983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:30 compute-0 sudo[442983]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:30 compute-0 sudo[443008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:43:30 compute-0 sudo[443008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:30 compute-0 sudo[443008]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:30 compute-0 sudo[443034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:30 compute-0 sudo[443034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:30 compute-0 sudo[443034]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:30 compute-0 sudo[443059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:43:30 compute-0 sudo[443059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4547: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:30 compute-0 podman[443124]: 2025-12-06 08:43:30.553663132 +0000 UTC m=+0.046083614 container create 2afbf20f95c8545023b907d62b3f4fe38106fca0e212cdcf829f43083b5dd4ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:43:30 compute-0 systemd[1]: Started libpod-conmon-2afbf20f95c8545023b907d62b3f4fe38106fca0e212cdcf829f43083b5dd4ac.scope.
Dec 06 08:43:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:43:30 compute-0 podman[443124]: 2025-12-06 08:43:30.625670216 +0000 UTC m=+0.118090678 container init 2afbf20f95c8545023b907d62b3f4fe38106fca0e212cdcf829f43083b5dd4ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:43:30 compute-0 podman[443124]: 2025-12-06 08:43:30.533795616 +0000 UTC m=+0.026216088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:43:30 compute-0 podman[443124]: 2025-12-06 08:43:30.631448462 +0000 UTC m=+0.123868914 container start 2afbf20f95c8545023b907d62b3f4fe38106fca0e212cdcf829f43083b5dd4ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:43:30 compute-0 podman[443124]: 2025-12-06 08:43:30.633518278 +0000 UTC m=+0.125938750 container attach 2afbf20f95c8545023b907d62b3f4fe38106fca0e212cdcf829f43083b5dd4ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jones, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:43:30 compute-0 hungry_jones[443140]: 167 167
Dec 06 08:43:30 compute-0 systemd[1]: libpod-2afbf20f95c8545023b907d62b3f4fe38106fca0e212cdcf829f43083b5dd4ac.scope: Deactivated successfully.
Dec 06 08:43:30 compute-0 podman[443124]: 2025-12-06 08:43:30.636837767 +0000 UTC m=+0.129258239 container died 2afbf20f95c8545023b907d62b3f4fe38106fca0e212cdcf829f43083b5dd4ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jones, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 08:43:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-55e3dbb51a3898151ed0fd76e3d1a6d01445056b2de63781f36d49907b0ec16c-merged.mount: Deactivated successfully.
Dec 06 08:43:30 compute-0 podman[443124]: 2025-12-06 08:43:30.673702072 +0000 UTC m=+0.166122524 container remove 2afbf20f95c8545023b907d62b3f4fe38106fca0e212cdcf829f43083b5dd4ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_jones, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:43:30 compute-0 systemd[1]: libpod-conmon-2afbf20f95c8545023b907d62b3f4fe38106fca0e212cdcf829f43083b5dd4ac.scope: Deactivated successfully.
Dec 06 08:43:30 compute-0 podman[443164]: 2025-12-06 08:43:30.850072942 +0000 UTC m=+0.037984665 container create 0f9a43e85194d40042b4e253b7f6aea34e7a16f6da8548fcebe3423cb7ba8d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:43:30 compute-0 systemd[1]: Started libpod-conmon-0f9a43e85194d40042b4e253b7f6aea34e7a16f6da8548fcebe3423cb7ba8d85.scope.
Dec 06 08:43:30 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c06af8232a5b96359c2a89ca98a10656380880a940361c57b63885638f39347/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c06af8232a5b96359c2a89ca98a10656380880a940361c57b63885638f39347/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c06af8232a5b96359c2a89ca98a10656380880a940361c57b63885638f39347/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c06af8232a5b96359c2a89ca98a10656380880a940361c57b63885638f39347/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:43:30 compute-0 podman[443164]: 2025-12-06 08:43:30.917398239 +0000 UTC m=+0.105309972 container init 0f9a43e85194d40042b4e253b7f6aea34e7a16f6da8548fcebe3423cb7ba8d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 08:43:30 compute-0 podman[443164]: 2025-12-06 08:43:30.924002258 +0000 UTC m=+0.111913971 container start 0f9a43e85194d40042b4e253b7f6aea34e7a16f6da8548fcebe3423cb7ba8d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curran, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:43:30 compute-0 podman[443164]: 2025-12-06 08:43:30.927120722 +0000 UTC m=+0.115032435 container attach 0f9a43e85194d40042b4e253b7f6aea34e7a16f6da8548fcebe3423cb7ba8d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:43:30 compute-0 podman[443164]: 2025-12-06 08:43:30.833920636 +0000 UTC m=+0.021832369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:43:30 compute-0 nova_compute[251992]: 2025-12-06 08:43:30.954 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:31.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:31 compute-0 distracted_curran[443180]: {
Dec 06 08:43:31 compute-0 distracted_curran[443180]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:43:31 compute-0 distracted_curran[443180]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:43:31 compute-0 distracted_curran[443180]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:43:31 compute-0 distracted_curran[443180]:         "osd_id": 0,
Dec 06 08:43:31 compute-0 distracted_curran[443180]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:43:31 compute-0 distracted_curran[443180]:         "type": "bluestore"
Dec 06 08:43:31 compute-0 distracted_curran[443180]:     }
Dec 06 08:43:31 compute-0 distracted_curran[443180]: }
Dec 06 08:43:31 compute-0 systemd[1]: libpod-0f9a43e85194d40042b4e253b7f6aea34e7a16f6da8548fcebe3423cb7ba8d85.scope: Deactivated successfully.
Dec 06 08:43:31 compute-0 ceph-mon[74339]: pgmap v4547: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:31 compute-0 podman[443201]: 2025-12-06 08:43:31.823349041 +0000 UTC m=+0.030855183 container died 0f9a43e85194d40042b4e253b7f6aea34e7a16f6da8548fcebe3423cb7ba8d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curran, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:43:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c06af8232a5b96359c2a89ca98a10656380880a940361c57b63885638f39347-merged.mount: Deactivated successfully.
Dec 06 08:43:31 compute-0 podman[443201]: 2025-12-06 08:43:31.869025144 +0000 UTC m=+0.076531296 container remove 0f9a43e85194d40042b4e253b7f6aea34e7a16f6da8548fcebe3423cb7ba8d85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_curran, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:43:31 compute-0 systemd[1]: libpod-conmon-0f9a43e85194d40042b4e253b7f6aea34e7a16f6da8548fcebe3423cb7ba8d85.scope: Deactivated successfully.
Dec 06 08:43:31 compute-0 sudo[443059]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:43:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:43:31 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:43:31 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:43:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ce887a61-aa53-4c37-8f2c-1f4680849704 does not exist
Dec 06 08:43:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 8e9be58a-a1fd-4cce-b2c7-da7a3e95278c does not exist
Dec 06 08:43:31 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 10f5f954-ab36-4688-9b91-c6cb0309a449 does not exist
Dec 06 08:43:31 compute-0 sudo[443217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:31 compute-0 sudo[443217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:31 compute-0 sudo[443217]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:32 compute-0 sudo[443242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:43:32 compute-0 sudo[443242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:32 compute-0 sudo[443242]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:32.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4548: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:43:32 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:43:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:33.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:33 compute-0 nova_compute[251992]: 2025-12-06 08:43:33.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:33 compute-0 nova_compute[251992]: 2025-12-06 08:43:33.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:43:33 compute-0 ceph-mon[74339]: pgmap v4548: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:34.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4549: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:34 compute-0 nova_compute[251992]: 2025-12-06 08:43:34.770 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:34 compute-0 sudo[443269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:34 compute-0 sudo[443269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:34 compute-0 sudo[443269]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:34 compute-0 sudo[443301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:34 compute-0 sudo[443301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:34 compute-0 sudo[443301]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:34 compute-0 podman[443294]: 2025-12-06 08:43:34.920174914 +0000 UTC m=+0.070897874 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 08:43:34 compute-0 podman[443293]: 2025-12-06 08:43:34.922768074 +0000 UTC m=+0.085158739 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Dec 06 08:43:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:35.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:35 compute-0 nova_compute[251992]: 2025-12-06 08:43:35.997 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:36 compute-0 ceph-mon[74339]: pgmap v4549: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:36.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4550: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:37.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:38 compute-0 ceph-mon[74339]: pgmap v4550: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:38.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4551: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:39 compute-0 ceph-mon[74339]: pgmap v4551: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:39.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:39 compute-0 nova_compute[251992]: 2025-12-06 08:43:39.814 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:40.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4552: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:40 compute-0 nova_compute[251992]: 2025-12-06 08:43:40.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:43:41 compute-0 nova_compute[251992]: 2025-12-06 08:43:41.045 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:41.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:41 compute-0 ceph-mon[74339]: pgmap v4552: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:42.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4553: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:43:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:43:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:43:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:43:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:43:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:43:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:43:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:43.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:43:43 compute-0 ceph-mon[74339]: pgmap v4553: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #213. Immutable memtables: 0.
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.720708) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 213
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623720744, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 1209, "num_deletes": 251, "total_data_size": 2026666, "memory_usage": 2057824, "flush_reason": "Manual Compaction"}
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #214: started
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623737636, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 214, "file_size": 1981695, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93106, "largest_seqno": 94313, "table_properties": {"data_size": 1975905, "index_size": 3120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12323, "raw_average_key_size": 19, "raw_value_size": 1964333, "raw_average_value_size": 3183, "num_data_blocks": 138, "num_entries": 617, "num_filter_entries": 617, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010512, "oldest_key_time": 1765010512, "file_creation_time": 1765010623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 214, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 17048 microseconds, and 5560 cpu microseconds.
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.737751) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #214: 1981695 bytes OK
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.737796) [db/memtable_list.cc:519] [default] Level-0 commit table #214 started
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.741374) [db/memtable_list.cc:722] [default] Level-0 commit table #214: memtable #1 done
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.741387) EVENT_LOG_v1 {"time_micros": 1765010623741383, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.741402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 2021324, prev total WAL file size 2021324, number of live WAL files 2.
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000210.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.742271) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [214(1935KB)], [212(12MB)]
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623742354, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [214], "files_L6": [212], "score": -1, "input_data_size": 14990418, "oldest_snapshot_seqno": -1}
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #215: 12759 keys, 12944536 bytes, temperature: kUnknown
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623842795, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 215, "file_size": 12944536, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12867799, "index_size": 43728, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31941, "raw_key_size": 339338, "raw_average_key_size": 26, "raw_value_size": 12650278, "raw_average_value_size": 991, "num_data_blocks": 1637, "num_entries": 12759, "num_filter_entries": 12759, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765010623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.843051) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12944536 bytes
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.844340) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.1 rd, 128.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.4 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(14.1) write-amplify(6.5) OK, records in: 13276, records dropped: 517 output_compression: NoCompression
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.844361) EVENT_LOG_v1 {"time_micros": 1765010623844351, "job": 134, "event": "compaction_finished", "compaction_time_micros": 100515, "compaction_time_cpu_micros": 35051, "output_level": 6, "num_output_files": 1, "total_output_size": 12944536, "num_input_records": 13276, "num_output_records": 12759, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000214.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623844850, "job": 134, "event": "table_file_deletion", "file_number": 214}
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010623847767, "job": 134, "event": "table_file_deletion", "file_number": 212}
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.742139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.847802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.847807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.847809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.847811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:43:43 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:43:43.847813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:43:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4554: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:44.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:44 compute-0 nova_compute[251992]: 2025-12-06 08:43:44.816 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:43:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:45.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:43:45 compute-0 ceph-mon[74339]: pgmap v4554: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:46 compute-0 nova_compute[251992]: 2025-12-06 08:43:46.080 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4555: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:46.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:47.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:47 compute-0 ceph-mon[74339]: pgmap v4555: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4556: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:43:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:48.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:43:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:49.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:49 compute-0 nova_compute[251992]: 2025-12-06 08:43:49.819 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:49 compute-0 ceph-mon[74339]: pgmap v4556: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4557: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:50.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:51 compute-0 nova_compute[251992]: 2025-12-06 08:43:51.081 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:51.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:52 compute-0 ceph-mon[74339]: pgmap v4557: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4558: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:52.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:53.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:54 compute-0 ceph-mon[74339]: pgmap v4558: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4559: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:54.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:54 compute-0 nova_compute[251992]: 2025-12-06 08:43:54.823 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:54 compute-0 sudo[443369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:54 compute-0 sudo[443369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:54 compute-0 sudo[443369]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:55 compute-0 sudo[443394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:43:55 compute-0 sudo[443394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:43:55 compute-0 sudo[443394]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:55 compute-0 sshd-session[443366]: Connection reset by authenticating user root 45.135.232.92 port 65388 [preauth]
Dec 06 08:43:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:55.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:56 compute-0 ceph-mon[74339]: pgmap v4559: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:56 compute-0 nova_compute[251992]: 2025-12-06 08:43:56.083 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:43:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4560: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:56.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:43:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:57.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:57 compute-0 sshd-session[443419]: Connection reset by authenticating user root 45.135.232.92 port 63068 [preauth]
Dec 06 08:43:58 compute-0 ceph-mon[74339]: pgmap v4560: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:58 compute-0 podman[443425]: 2025-12-06 08:43:58.419575509 +0000 UTC m=+0.073587498 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 08:43:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4561: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:43:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:43:58.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:43:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:43:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:43:59.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:43:59 compute-0 nova_compute[251992]: 2025-12-06 08:43:59.826 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:00 compute-0 ceph-mon[74339]: pgmap v4561: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4562: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:00.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2339240083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:01 compute-0 ceph-mon[74339]: pgmap v4562: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4115477988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:01 compute-0 nova_compute[251992]: 2025-12-06 08:44:01.084 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:01 compute-0 sshd-session[443422]: Connection reset by authenticating user root 45.135.232.92 port 63080 [preauth]
Dec 06 08:44:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:44:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:01.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:44:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4563: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:44:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:02.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:44:03 compute-0 sshd-session[443452]: Invalid user setup from 45.135.232.92 port 63092
Dec 06 08:44:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:03.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:03 compute-0 sshd-session[443452]: Connection reset by invalid user setup 45.135.232.92 port 63092 [preauth]
Dec 06 08:44:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:44:03.921 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:44:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:44:03.921 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:44:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:44:03.922 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:44:03 compute-0 ceph-mon[74339]: pgmap v4563: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4564: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:04.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:04 compute-0 nova_compute[251992]: 2025-12-06 08:44:04.830 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:05 compute-0 podman[443458]: 2025-12-06 08:44:05.393984985 +0000 UTC m=+0.050363330 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 08:44:05 compute-0 podman[443459]: 2025-12-06 08:44:05.435177557 +0000 UTC m=+0.084429050 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:44:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:05.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:05 compute-0 ceph-mon[74339]: pgmap v4564: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:06 compute-0 nova_compute[251992]: 2025-12-06 08:44:06.134 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4565: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:06.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:44:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:07.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:44:07 compute-0 ceph-mon[74339]: pgmap v4565: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4566: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:08.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:44:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:09.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:44:09 compute-0 nova_compute[251992]: 2025-12-06 08:44:09.834 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:09 compute-0 sshd-session[443455]: Connection reset by authenticating user root 45.135.232.92 port 63104 [preauth]
Dec 06 08:44:10 compute-0 ceph-mon[74339]: pgmap v4566: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2208837905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:44:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/2208837905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:44:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4567: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:10.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:11 compute-0 nova_compute[251992]: 2025-12-06 08:44:11.136 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:11.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:12 compute-0 ceph-mon[74339]: pgmap v4567: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4568: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:12.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:44:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:44:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:44:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:44:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:44:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:44:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:44:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:13.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:44:14 compute-0 ceph-mon[74339]: pgmap v4568: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4569: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.003000080s ======
Dec 06 08:44:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:14.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Dec 06 08:44:14 compute-0 nova_compute[251992]: 2025-12-06 08:44:14.837 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:15 compute-0 sudo[443503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:15 compute-0 sudo[443503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:15 compute-0 sudo[443503]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:15 compute-0 sudo[443528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:15 compute-0 sudo[443528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:15 compute-0 sudo[443528]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:15.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:16 compute-0 ceph-mon[74339]: pgmap v4569: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:16 compute-0 nova_compute[251992]: 2025-12-06 08:44:16.139 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4570: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:17 compute-0 ceph-mon[74339]: pgmap v4570: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:17.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:17 compute-0 nova_compute[251992]: 2025-12-06 08:44:17.650 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4571: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:44:18
Dec 06 08:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec 06 08:44:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:44:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:18.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:19.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:19 compute-0 ceph-mon[74339]: pgmap v4571: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:19 compute-0 nova_compute[251992]: 2025-12-06 08:44:19.842 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4572: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:20.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:21 compute-0 nova_compute[251992]: 2025-12-06 08:44:21.140 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:21.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:21 compute-0 ceph-mon[74339]: pgmap v4572: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:21 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3146289939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4573: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:22 compute-0 nova_compute[251992]: 2025-12-06 08:44:22.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:22 compute-0 nova_compute[251992]: 2025-12-06 08:44:22.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:22 compute-0 nova_compute[251992]: 2025-12-06 08:44:22.697 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:44:22 compute-0 nova_compute[251992]: 2025-12-06 08:44:22.697 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:44:22 compute-0 nova_compute[251992]: 2025-12-06 08:44:22.697 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:44:22 compute-0 nova_compute[251992]: 2025-12-06 08:44:22.698 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:44:22 compute-0 nova_compute[251992]: 2025-12-06 08:44:22.698 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:44:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:22.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:22 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/591882036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:44:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/721875890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:23 compute-0 nova_compute[251992]: 2025-12-06 08:44:23.160 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:44:23 compute-0 nova_compute[251992]: 2025-12-06 08:44:23.323 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:44:23 compute-0 nova_compute[251992]: 2025-12-06 08:44:23.324 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4056MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:44:23 compute-0 nova_compute[251992]: 2025-12-06 08:44:23.325 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:44:23 compute-0 nova_compute[251992]: 2025-12-06 08:44:23.325 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:44:23 compute-0 nova_compute[251992]: 2025-12-06 08:44:23.516 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:44:23 compute-0 nova_compute[251992]: 2025-12-06 08:44:23.517 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:44:23 compute-0 nova_compute[251992]: 2025-12-06 08:44:23.539 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:44:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:44:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:23.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:44:23 compute-0 ceph-mon[74339]: pgmap v4573: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:23 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/721875890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:44:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:44:23 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:44:23 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3529626307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:23 compute-0 nova_compute[251992]: 2025-12-06 08:44:23.988 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:44:23 compute-0 nova_compute[251992]: 2025-12-06 08:44:23.994 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:44:24 compute-0 nova_compute[251992]: 2025-12-06 08:44:24.026 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:44:24 compute-0 nova_compute[251992]: 2025-12-06 08:44:24.028 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:44:24 compute-0 nova_compute[251992]: 2025-12-06 08:44:24.028 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:44:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4574: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:24.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:24 compute-0 nova_compute[251992]: 2025-12-06 08:44:24.845 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3529626307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:44:25 compute-0 nova_compute[251992]: 2025-12-06 08:44:25.029 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:25.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:25 compute-0 ceph-mon[74339]: pgmap v4574: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:26 compute-0 nova_compute[251992]: 2025-12-06 08:44:26.141 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4575: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:26 compute-0 nova_compute[251992]: 2025-12-06 08:44:26.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:26 compute-0 nova_compute[251992]: 2025-12-06 08:44:26.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:44:26 compute-0 nova_compute[251992]: 2025-12-06 08:44:26.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:44:26 compute-0 nova_compute[251992]: 2025-12-06 08:44:26.690 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:44:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:26.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:44:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:44:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:44:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:44:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:27.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:44:28 compute-0 ceph-mon[74339]: pgmap v4575: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4576: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:28.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:29 compute-0 podman[443604]: 2025-12-06 08:44:29.438226679 +0000 UTC m=+0.089028524 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 08:44:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:29.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:29 compute-0 nova_compute[251992]: 2025-12-06 08:44:29.877 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:30 compute-0 ceph-mon[74339]: pgmap v4576: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4577: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:30 compute-0 nova_compute[251992]: 2025-12-06 08:44:30.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:30.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:31 compute-0 ceph-mon[74339]: pgmap v4577: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:31 compute-0 nova_compute[251992]: 2025-12-06 08:44:31.142 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:31.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:31 compute-0 nova_compute[251992]: 2025-12-06 08:44:31.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:32 compute-0 sudo[443632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:32 compute-0 sudo[443632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:32 compute-0 sudo[443632]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:32 compute-0 sudo[443657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:44:32 compute-0 sudo[443657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:32 compute-0 sudo[443657]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:32 compute-0 sudo[443682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:32 compute-0 sudo[443682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:32 compute-0 sudo[443682]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:32 compute-0 sudo[443707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:44:32 compute-0 sudo[443707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4578: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:32 compute-0 nova_compute[251992]: 2025-12-06 08:44:32.565 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:32.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Dec 06 08:44:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Dec 06 08:44:32 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:32 compute-0 sudo[443707]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:33.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:33 compute-0 nova_compute[251992]: 2025-12-06 08:44:33.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:33 compute-0 nova_compute[251992]: 2025-12-06 08:44:33.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:44:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:44:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:44:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:44:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:44:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:44:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:33 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 19720aa2-9f36-4c54-ba69-3e4fdac86bc6 does not exist
Dec 06 08:44:33 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 4218ff9b-18c9-493f-af85-723e3b8fec8d does not exist
Dec 06 08:44:33 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 25a32ae7-fbfa-4804-b122-3599669f2d29 does not exist
Dec 06 08:44:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:44:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:44:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:44:33 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:44:33 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:44:33 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:44:33 compute-0 sudo[443764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:33 compute-0 sudo[443764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:33 compute-0 sudo[443764]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:33 compute-0 sudo[443789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:44:33 compute-0 sudo[443789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:33 compute-0 sudo[443789]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:33 compute-0 sudo[443814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:33 compute-0 sudo[443814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:33 compute-0 ceph-mon[74339]: pgmap v4578: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:44:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:44:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:33 compute-0 sudo[443814]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:44:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:44:33 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:44:33 compute-0 sudo[443839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:44:33 compute-0 sudo[443839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:34 compute-0 podman[443907]: 2025-12-06 08:44:34.285379192 +0000 UTC m=+0.052909368 container create 211ea1edd285c12defde436570fe1c246f942771773c2eeae12891261fc83baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec 06 08:44:34 compute-0 systemd[1]: Started libpod-conmon-211ea1edd285c12defde436570fe1c246f942771773c2eeae12891261fc83baf.scope.
Dec 06 08:44:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:44:34 compute-0 podman[443907]: 2025-12-06 08:44:34.254202911 +0000 UTC m=+0.021733137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:44:34 compute-0 podman[443907]: 2025-12-06 08:44:34.361555209 +0000 UTC m=+0.129085385 container init 211ea1edd285c12defde436570fe1c246f942771773c2eeae12891261fc83baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_archimedes, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:44:34 compute-0 podman[443907]: 2025-12-06 08:44:34.368860615 +0000 UTC m=+0.136390761 container start 211ea1edd285c12defde436570fe1c246f942771773c2eeae12891261fc83baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_archimedes, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec 06 08:44:34 compute-0 podman[443907]: 2025-12-06 08:44:34.372436853 +0000 UTC m=+0.139967019 container attach 211ea1edd285c12defde436570fe1c246f942771773c2eeae12891261fc83baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:44:34 compute-0 gracious_archimedes[443924]: 167 167
Dec 06 08:44:34 compute-0 systemd[1]: libpod-211ea1edd285c12defde436570fe1c246f942771773c2eeae12891261fc83baf.scope: Deactivated successfully.
Dec 06 08:44:34 compute-0 conmon[443924]: conmon 211ea1edd285c12defde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-211ea1edd285c12defde436570fe1c246f942771773c2eeae12891261fc83baf.scope/container/memory.events
Dec 06 08:44:34 compute-0 podman[443907]: 2025-12-06 08:44:34.377587521 +0000 UTC m=+0.145117667 container died 211ea1edd285c12defde436570fe1c246f942771773c2eeae12891261fc83baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-7506b878aaf240d1fcb1dfe9df8b8ad1b9b7ade0d485080f575a9641e9ac4c3c-merged.mount: Deactivated successfully.
Dec 06 08:44:34 compute-0 podman[443907]: 2025-12-06 08:44:34.413066829 +0000 UTC m=+0.180596975 container remove 211ea1edd285c12defde436570fe1c246f942771773c2eeae12891261fc83baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_archimedes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec 06 08:44:34 compute-0 systemd[1]: libpod-conmon-211ea1edd285c12defde436570fe1c246f942771773c2eeae12891261fc83baf.scope: Deactivated successfully.
Dec 06 08:44:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4579: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:34 compute-0 podman[443948]: 2025-12-06 08:44:34.600871157 +0000 UTC m=+0.062409505 container create 976ff7bfaa75c6ea090ac6108290f2599e07f70d1ac5d2062c61236c0705218b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Dec 06 08:44:34 compute-0 systemd[1]: Started libpod-conmon-976ff7bfaa75c6ea090ac6108290f2599e07f70d1ac5d2062c61236c0705218b.scope.
Dec 06 08:44:34 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:44:34 compute-0 podman[443948]: 2025-12-06 08:44:34.580224401 +0000 UTC m=+0.041762759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/396c88b8dee1ed0ff13c9fa88526e53fbf18a21c3b400640abd3661ffebafe75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/396c88b8dee1ed0ff13c9fa88526e53fbf18a21c3b400640abd3661ffebafe75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/396c88b8dee1ed0ff13c9fa88526e53fbf18a21c3b400640abd3661ffebafe75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/396c88b8dee1ed0ff13c9fa88526e53fbf18a21c3b400640abd3661ffebafe75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/396c88b8dee1ed0ff13c9fa88526e53fbf18a21c3b400640abd3661ffebafe75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:34 compute-0 podman[443948]: 2025-12-06 08:44:34.689255513 +0000 UTC m=+0.150793871 container init 976ff7bfaa75c6ea090ac6108290f2599e07f70d1ac5d2062c61236c0705218b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_khayyam, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec 06 08:44:34 compute-0 podman[443948]: 2025-12-06 08:44:34.696610571 +0000 UTC m=+0.158148909 container start 976ff7bfaa75c6ea090ac6108290f2599e07f70d1ac5d2062c61236c0705218b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:44:34 compute-0 podman[443948]: 2025-12-06 08:44:34.699595912 +0000 UTC m=+0.161134270 container attach 976ff7bfaa75c6ea090ac6108290f2599e07f70d1ac5d2062c61236c0705218b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 08:44:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:34.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:34 compute-0 nova_compute[251992]: 2025-12-06 08:44:34.881 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:35 compute-0 sudo[443969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:35 compute-0 sudo[443969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:35 compute-0 sudo[443969]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:35 compute-0 sudo[443994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:35 compute-0 sudo[443994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:35 compute-0 sudo[443994]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:35 compute-0 dreamy_khayyam[443964]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:44:35 compute-0 dreamy_khayyam[443964]: --> relative data size: 1.0
Dec 06 08:44:35 compute-0 dreamy_khayyam[443964]: --> All data devices are unavailable
Dec 06 08:44:35 compute-0 systemd[1]: libpod-976ff7bfaa75c6ea090ac6108290f2599e07f70d1ac5d2062c61236c0705218b.scope: Deactivated successfully.
Dec 06 08:44:35 compute-0 podman[443948]: 2025-12-06 08:44:35.520328053 +0000 UTC m=+0.981866401 container died 976ff7bfaa75c6ea090ac6108290f2599e07f70d1ac5d2062c61236c0705218b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_khayyam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 06 08:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-396c88b8dee1ed0ff13c9fa88526e53fbf18a21c3b400640abd3661ffebafe75-merged.mount: Deactivated successfully.
Dec 06 08:44:35 compute-0 podman[443948]: 2025-12-06 08:44:35.583490578 +0000 UTC m=+1.045028916 container remove 976ff7bfaa75c6ea090ac6108290f2599e07f70d1ac5d2062c61236c0705218b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_khayyam, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:44:35 compute-0 systemd[1]: libpod-conmon-976ff7bfaa75c6ea090ac6108290f2599e07f70d1ac5d2062c61236c0705218b.scope: Deactivated successfully.
Dec 06 08:44:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:35 compute-0 sudo[443839]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:35.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:35 compute-0 podman[444030]: 2025-12-06 08:44:35.62137649 +0000 UTC m=+0.070963606 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 06 08:44:35 compute-0 podman[444037]: 2025-12-06 08:44:35.63583247 +0000 UTC m=+0.084661025 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 08:44:35 compute-0 sudo[444081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:35 compute-0 sudo[444081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:35 compute-0 sudo[444081]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:35 compute-0 sudo[444106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:44:35 compute-0 sudo[444106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:35 compute-0 sudo[444106]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:35 compute-0 sudo[444131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:35 compute-0 sudo[444131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:35 compute-0 sudo[444131]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:35 compute-0 sudo[444156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:44:35 compute-0 sudo[444156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:35 compute-0 ceph-mon[74339]: pgmap v4579: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:36 compute-0 podman[444223]: 2025-12-06 08:44:36.158433256 +0000 UTC m=+0.040774642 container create fa62bc8f5bca28590cb804f518c71e5c96e1c653468f3a6bb1ea2ded9593cf4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banach, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 06 08:44:36 compute-0 nova_compute[251992]: 2025-12-06 08:44:36.192 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:36 compute-0 systemd[1]: Started libpod-conmon-fa62bc8f5bca28590cb804f518c71e5c96e1c653468f3a6bb1ea2ded9593cf4c.scope.
Dec 06 08:44:36 compute-0 podman[444223]: 2025-12-06 08:44:36.14081001 +0000 UTC m=+0.023151426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:44:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:44:36 compute-0 podman[444223]: 2025-12-06 08:44:36.34977479 +0000 UTC m=+0.232116196 container init fa62bc8f5bca28590cb804f518c71e5c96e1c653468f3a6bb1ea2ded9593cf4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:44:36 compute-0 podman[444223]: 2025-12-06 08:44:36.35716931 +0000 UTC m=+0.239510696 container start fa62bc8f5bca28590cb804f518c71e5c96e1c653468f3a6bb1ea2ded9593cf4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banach, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:44:36 compute-0 loving_banach[444240]: 167 167
Dec 06 08:44:36 compute-0 systemd[1]: libpod-fa62bc8f5bca28590cb804f518c71e5c96e1c653468f3a6bb1ea2ded9593cf4c.scope: Deactivated successfully.
Dec 06 08:44:36 compute-0 podman[444223]: 2025-12-06 08:44:36.369523223 +0000 UTC m=+0.251864639 container attach fa62bc8f5bca28590cb804f518c71e5c96e1c653468f3a6bb1ea2ded9593cf4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banach, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 08:44:36 compute-0 podman[444223]: 2025-12-06 08:44:36.37013881 +0000 UTC m=+0.252480206 container died fa62bc8f5bca28590cb804f518c71e5c96e1c653468f3a6bb1ea2ded9593cf4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banach, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 06 08:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-02913acd66286c67456dfdd7537b14fca4cef9aeda3761e0f4bbd34a71314335-merged.mount: Deactivated successfully.
Dec 06 08:44:36 compute-0 podman[444223]: 2025-12-06 08:44:36.409257846 +0000 UTC m=+0.291599232 container remove fa62bc8f5bca28590cb804f518c71e5c96e1c653468f3a6bb1ea2ded9593cf4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_banach, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 08:44:36 compute-0 systemd[1]: libpod-conmon-fa62bc8f5bca28590cb804f518c71e5c96e1c653468f3a6bb1ea2ded9593cf4c.scope: Deactivated successfully.
Dec 06 08:44:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4580: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:36 compute-0 podman[444264]: 2025-12-06 08:44:36.569165482 +0000 UTC m=+0.047888304 container create 42363baea163a755256d47cc1234c281e3202db69b0ff77ffe92a312d969f261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:44:36 compute-0 systemd[1]: Started libpod-conmon-42363baea163a755256d47cc1234c281e3202db69b0ff77ffe92a312d969f261.scope.
Dec 06 08:44:36 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05d2467c5fb1546cdfabfee9c3adaad29b7b6ec0f12811f8436328710f1e7db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:36 compute-0 podman[444264]: 2025-12-06 08:44:36.553380685 +0000 UTC m=+0.032103527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05d2467c5fb1546cdfabfee9c3adaad29b7b6ec0f12811f8436328710f1e7db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05d2467c5fb1546cdfabfee9c3adaad29b7b6ec0f12811f8436328710f1e7db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d05d2467c5fb1546cdfabfee9c3adaad29b7b6ec0f12811f8436328710f1e7db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:36 compute-0 podman[444264]: 2025-12-06 08:44:36.662540312 +0000 UTC m=+0.141263154 container init 42363baea163a755256d47cc1234c281e3202db69b0ff77ffe92a312d969f261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 06 08:44:36 compute-0 podman[444264]: 2025-12-06 08:44:36.669277223 +0000 UTC m=+0.148000035 container start 42363baea163a755256d47cc1234c281e3202db69b0ff77ffe92a312d969f261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_solomon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:44:36 compute-0 podman[444264]: 2025-12-06 08:44:36.673676972 +0000 UTC m=+0.152399824 container attach 42363baea163a755256d47cc1234c281e3202db69b0ff77ffe92a312d969f261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:44:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:36.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:37 compute-0 agitated_solomon[444281]: {
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:     "0": [
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:         {
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             "devices": [
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "/dev/loop3"
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             ],
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             "lv_name": "ceph_lv0",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             "lv_size": "7511998464",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             "name": "ceph_lv0",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             "tags": {
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "ceph.cluster_name": "ceph",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "ceph.crush_device_class": "",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "ceph.encrypted": "0",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "ceph.osd_id": "0",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "ceph.type": "block",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:                 "ceph.vdo": "0"
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             },
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             "type": "block",
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:             "vg_name": "ceph_vg0"
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:         }
Dec 06 08:44:37 compute-0 agitated_solomon[444281]:     ]
Dec 06 08:44:37 compute-0 agitated_solomon[444281]: }
Dec 06 08:44:37 compute-0 systemd[1]: libpod-42363baea163a755256d47cc1234c281e3202db69b0ff77ffe92a312d969f261.scope: Deactivated successfully.
Dec 06 08:44:37 compute-0 podman[444264]: 2025-12-06 08:44:37.468601007 +0000 UTC m=+0.947323839 container died 42363baea163a755256d47cc1234c281e3202db69b0ff77ffe92a312d969f261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_solomon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 08:44:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-d05d2467c5fb1546cdfabfee9c3adaad29b7b6ec0f12811f8436328710f1e7db-merged.mount: Deactivated successfully.
Dec 06 08:44:37 compute-0 podman[444264]: 2025-12-06 08:44:37.518405452 +0000 UTC m=+0.997128274 container remove 42363baea163a755256d47cc1234c281e3202db69b0ff77ffe92a312d969f261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_solomon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec 06 08:44:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:37 compute-0 systemd[1]: libpod-conmon-42363baea163a755256d47cc1234c281e3202db69b0ff77ffe92a312d969f261.scope: Deactivated successfully.
Dec 06 08:44:37 compute-0 sudo[444156]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:37 compute-0 sudo[444303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:37 compute-0 sudo[444303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:37 compute-0 sudo[444303]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:37 compute-0 sudo[444328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:44:37 compute-0 sudo[444328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:37 compute-0 sudo[444328]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:37 compute-0 sudo[444353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:37 compute-0 sudo[444353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:37 compute-0 sudo[444353]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:37 compute-0 sudo[444378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:44:37 compute-0 sudo[444378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:37 compute-0 ceph-mon[74339]: pgmap v4580: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:38 compute-0 podman[444443]: 2025-12-06 08:44:38.121327274 +0000 UTC m=+0.043292250 container create d425402658431d0f07a99c67f9480400d3133c7c06ee767dfe27d2c46b1d1b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mestorf, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:44:38 compute-0 systemd[1]: Started libpod-conmon-d425402658431d0f07a99c67f9480400d3133c7c06ee767dfe27d2c46b1d1b3c.scope.
Dec 06 08:44:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:44:38 compute-0 podman[444443]: 2025-12-06 08:44:38.191635841 +0000 UTC m=+0.113600827 container init d425402658431d0f07a99c67f9480400d3133c7c06ee767dfe27d2c46b1d1b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mestorf, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 06 08:44:38 compute-0 podman[444443]: 2025-12-06 08:44:38.197745806 +0000 UTC m=+0.119710782 container start d425402658431d0f07a99c67f9480400d3133c7c06ee767dfe27d2c46b1d1b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 08:44:38 compute-0 practical_mestorf[444460]: 167 167
Dec 06 08:44:38 compute-0 systemd[1]: libpod-d425402658431d0f07a99c67f9480400d3133c7c06ee767dfe27d2c46b1d1b3c.scope: Deactivated successfully.
Dec 06 08:44:38 compute-0 podman[444443]: 2025-12-06 08:44:38.106700579 +0000 UTC m=+0.028665575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:44:38 compute-0 podman[444443]: 2025-12-06 08:44:38.201732953 +0000 UTC m=+0.123697949 container attach d425402658431d0f07a99c67f9480400d3133c7c06ee767dfe27d2c46b1d1b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mestorf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:44:38 compute-0 conmon[444460]: conmon d425402658431d0f07a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d425402658431d0f07a99c67f9480400d3133c7c06ee767dfe27d2c46b1d1b3c.scope/container/memory.events
Dec 06 08:44:38 compute-0 podman[444443]: 2025-12-06 08:44:38.203043539 +0000 UTC m=+0.125008515 container died d425402658431d0f07a99c67f9480400d3133c7c06ee767dfe27d2c46b1d1b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-901d7fe2054e640bdee81b6e312f27fabc39a75931ad854e35f7d21e0fabc175-merged.mount: Deactivated successfully.
Dec 06 08:44:38 compute-0 podman[444443]: 2025-12-06 08:44:38.236369039 +0000 UTC m=+0.158334015 container remove d425402658431d0f07a99c67f9480400d3133c7c06ee767dfe27d2c46b1d1b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mestorf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 06 08:44:38 compute-0 systemd[1]: libpod-conmon-d425402658431d0f07a99c67f9480400d3133c7c06ee767dfe27d2c46b1d1b3c.scope: Deactivated successfully.
Dec 06 08:44:38 compute-0 podman[444483]: 2025-12-06 08:44:38.38872097 +0000 UTC m=+0.038375767 container create f4de55a50fef4cfa848ee0fecc27858dc56a8c1c1c52dc81caef8c875bc19235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:44:38 compute-0 systemd[1]: Started libpod-conmon-f4de55a50fef4cfa848ee0fecc27858dc56a8c1c1c52dc81caef8c875bc19235.scope.
Dec 06 08:44:38 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20af6f7395da070b7550259255d423d729bdf2da1a0da46e9c668e237539b860/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20af6f7395da070b7550259255d423d729bdf2da1a0da46e9c668e237539b860/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20af6f7395da070b7550259255d423d729bdf2da1a0da46e9c668e237539b860/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20af6f7395da070b7550259255d423d729bdf2da1a0da46e9c668e237539b860/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:44:38 compute-0 podman[444483]: 2025-12-06 08:44:38.448452533 +0000 UTC m=+0.098107350 container init f4de55a50fef4cfa848ee0fecc27858dc56a8c1c1c52dc81caef8c875bc19235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gagarin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec 06 08:44:38 compute-0 podman[444483]: 2025-12-06 08:44:38.455686678 +0000 UTC m=+0.105341475 container start f4de55a50fef4cfa848ee0fecc27858dc56a8c1c1c52dc81caef8c875bc19235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:44:38 compute-0 podman[444483]: 2025-12-06 08:44:38.460983401 +0000 UTC m=+0.110638228 container attach f4de55a50fef4cfa848ee0fecc27858dc56a8c1c1c52dc81caef8c875bc19235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:44:38 compute-0 podman[444483]: 2025-12-06 08:44:38.372890054 +0000 UTC m=+0.022544871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:44:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4581: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:38.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:39 compute-0 fervent_gagarin[444500]: {
Dec 06 08:44:39 compute-0 fervent_gagarin[444500]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:44:39 compute-0 fervent_gagarin[444500]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:44:39 compute-0 fervent_gagarin[444500]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:44:39 compute-0 fervent_gagarin[444500]:         "osd_id": 0,
Dec 06 08:44:39 compute-0 fervent_gagarin[444500]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:44:39 compute-0 fervent_gagarin[444500]:         "type": "bluestore"
Dec 06 08:44:39 compute-0 fervent_gagarin[444500]:     }
Dec 06 08:44:39 compute-0 fervent_gagarin[444500]: }
Dec 06 08:44:39 compute-0 systemd[1]: libpod-f4de55a50fef4cfa848ee0fecc27858dc56a8c1c1c52dc81caef8c875bc19235.scope: Deactivated successfully.
Dec 06 08:44:39 compute-0 podman[444483]: 2025-12-06 08:44:39.29053014 +0000 UTC m=+0.940184957 container died f4de55a50fef4cfa848ee0fecc27858dc56a8c1c1c52dc81caef8c875bc19235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gagarin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec 06 08:44:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-20af6f7395da070b7550259255d423d729bdf2da1a0da46e9c668e237539b860-merged.mount: Deactivated successfully.
Dec 06 08:44:39 compute-0 podman[444483]: 2025-12-06 08:44:39.348920606 +0000 UTC m=+0.998575413 container remove f4de55a50fef4cfa848ee0fecc27858dc56a8c1c1c52dc81caef8c875bc19235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_gagarin, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:44:39 compute-0 systemd[1]: libpod-conmon-f4de55a50fef4cfa848ee0fecc27858dc56a8c1c1c52dc81caef8c875bc19235.scope: Deactivated successfully.
Dec 06 08:44:39 compute-0 sudo[444378]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:44:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:39 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:44:39 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 3e3cdfb3-8c5e-418a-96e2-38c1b293958d does not exist
Dec 06 08:44:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1b489c28-770c-4804-93c5-4eea98468865 does not exist
Dec 06 08:44:39 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 1f7b5651-84d3-4b27-b384-bb7f75780d0a does not exist
Dec 06 08:44:39 compute-0 sudo[444534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:39 compute-0 sudo[444534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:39 compute-0 sudo[444534]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:39 compute-0 sudo[444559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:44:39 compute-0 sudo[444559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:39 compute-0 sudo[444559]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:39.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:39 compute-0 nova_compute[251992]: 2025-12-06 08:44:39.896 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:39 compute-0 ceph-mon[74339]: pgmap v4581: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:39 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:44:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4582: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:40 compute-0 nova_compute[251992]: 2025-12-06 08:44:40.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:44:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:40.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:44:41 compute-0 nova_compute[251992]: 2025-12-06 08:44:41.194 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:44:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:41.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:44:41 compute-0 ceph-mon[74339]: pgmap v4582: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4583: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:42 compute-0 nova_compute[251992]: 2025-12-06 08:44:42.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:42 compute-0 nova_compute[251992]: 2025-12-06 08:44:42.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 08:44:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:42.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:44:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:44:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:44:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:44:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:44:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:44:43 compute-0 ceph-mon[74339]: pgmap v4583: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:44:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:44:44 compute-0 nova_compute[251992]: 2025-12-06 08:44:44.329 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 08:44:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4584: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:44.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:44 compute-0 nova_compute[251992]: 2025-12-06 08:44:44.900 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:44:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:45.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:44:45 compute-0 ceph-mon[74339]: pgmap v4584: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:46 compute-0 nova_compute[251992]: 2025-12-06 08:44:46.195 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4585: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:46.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:47 compute-0 ceph-mon[74339]: pgmap v4585: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4586: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:48.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:49.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:49 compute-0 ceph-mon[74339]: pgmap v4586: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:49 compute-0 nova_compute[251992]: 2025-12-06 08:44:49.902 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4587: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:50.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:51 compute-0 nova_compute[251992]: 2025-12-06 08:44:51.197 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:51 compute-0 ceph-mon[74339]: pgmap v4587: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:44:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:51.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:44:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4588: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:52.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:53 compute-0 ceph-mon[74339]: pgmap v4588: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:53.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4589: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:54.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:54 compute-0 nova_compute[251992]: 2025-12-06 08:44:54.907 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:55 compute-0 sudo[444592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:55 compute-0 sudo[444592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:55 compute-0 sudo[444592]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:55 compute-0 sudo[444617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:44:55 compute-0 sudo[444617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:44:55 compute-0 sudo[444617]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:55.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:55 compute-0 ceph-mon[74339]: pgmap v4589: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:55 compute-0 nova_compute[251992]: 2025-12-06 08:44:55.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:44:55 compute-0 nova_compute[251992]: 2025-12-06 08:44:55.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 08:44:56 compute-0 nova_compute[251992]: 2025-12-06 08:44:56.199 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:44:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4590: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:56.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:44:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:57.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:57 compute-0 ceph-mon[74339]: pgmap v4590: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:57 compute-0 sshd-session[444643]: Invalid user ubuntu from 193.32.162.146 port 40216
Dec 06 08:44:58 compute-0 sshd-session[444643]: Connection closed by invalid user ubuntu 193.32.162.146 port 40216 [preauth]
Dec 06 08:44:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4591: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:44:58.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:44:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:44:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:44:59.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:44:59 compute-0 ceph-mon[74339]: pgmap v4591: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:44:59 compute-0 nova_compute[251992]: 2025-12-06 08:44:59.939 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:00 compute-0 podman[444647]: 2025-12-06 08:45:00.455754276 +0000 UTC m=+0.109080455 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:45:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4592: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:00.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:01 compute-0 nova_compute[251992]: 2025-12-06 08:45:01.201 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:45:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:01.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:45:01 compute-0 ceph-mon[74339]: pgmap v4592: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/289764488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4593: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2883025854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:02.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:03.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:03 compute-0 ceph-mon[74339]: pgmap v4593: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:45:03.923 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:45:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:45:03.923 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:45:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:45:03.924 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:45:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4594: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:04.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:04 compute-0 nova_compute[251992]: 2025-12-06 08:45:04.944 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:05.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:05 compute-0 ceph-mon[74339]: pgmap v4594: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:06 compute-0 nova_compute[251992]: 2025-12-06 08:45:06.202 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:06 compute-0 podman[444677]: 2025-12-06 08:45:06.412928809 +0000 UTC m=+0.076466845 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 08:45:06 compute-0 podman[444678]: 2025-12-06 08:45:06.419086495 +0000 UTC m=+0.079549588 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 08:45:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4595: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:06.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:07.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:07 compute-0 ceph-mon[74339]: pgmap v4595: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4596: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:08.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:09.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:09 compute-0 ceph-mon[74339]: pgmap v4596: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3111797188' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:45:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/3111797188' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:45:09 compute-0 nova_compute[251992]: 2025-12-06 08:45:09.990 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4597: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:10 compute-0 nova_compute[251992]: 2025-12-06 08:45:10.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:10.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:11 compute-0 nova_compute[251992]: 2025-12-06 08:45:11.205 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:11.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:11 compute-0 ceph-mon[74339]: pgmap v4597: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4598: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:12.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:45:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:45:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:45:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:45:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:45:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:45:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:13.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:13 compute-0 ceph-mon[74339]: pgmap v4598: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4599: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:14.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:14 compute-0 nova_compute[251992]: 2025-12-06 08:45:14.993 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:15 compute-0 sudo[444717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:15 compute-0 sudo[444717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:15 compute-0 sudo[444717]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:15 compute-0 sudo[444742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:15 compute-0 sudo[444742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:15 compute-0 sudo[444742]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:45:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:15.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:45:15 compute-0 ceph-mon[74339]: pgmap v4599: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:16 compute-0 nova_compute[251992]: 2025-12-06 08:45:16.207 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4600: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:16.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:45:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:17.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:45:18 compute-0 ceph-mon[74339]: pgmap v4600: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4601: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:45:18
Dec 06 08:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.mgr', 'default.rgw.meta', '.rgw.root', 'volumes']
Dec 06 08:45:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:45:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:18.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:19.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:19 compute-0 nova_compute[251992]: 2025-12-06 08:45:19.680 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:19 compute-0 nova_compute[251992]: 2025-12-06 08:45:19.996 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:20 compute-0 ceph-mon[74339]: pgmap v4601: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4602: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:20.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:21 compute-0 nova_compute[251992]: 2025-12-06 08:45:21.209 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:21.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:22 compute-0 ceph-mon[74339]: pgmap v4602: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:45:22 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.0 total, 600.0 interval
                                           Cumulative writes: 21K writes, 94K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
                                           Cumulative WAL: 21K writes, 21K syncs, 1.00 writes per sync, written: 0.14 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1450 writes, 6181 keys, 1449 commit groups, 1.0 writes per commit group, ingest: 9.98 MB, 0.02 MB/s
                                           Interval WAL: 1450 writes, 1449 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     47.1      2.71              0.42        67    0.040       0      0       0.0       0.0
                                             L6      1/0   12.34 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.8    111.0     96.0      7.67              2.46        66    0.116    598K    35K       0.0       0.0
                                            Sum      1/0   12.34 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.8     82.0     83.2     10.38              2.88       133    0.078    598K    35K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2     92.3     92.9      0.83              0.27        10    0.083     65K   2541       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    111.0     96.0      7.67              2.46        66    0.116    598K    35K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     47.1      2.71              0.42        66    0.041       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 8400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.125, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.84 GB write, 0.10 MB/s write, 0.83 GB read, 0.10 MB/s read, 10.4 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5596d2c271f0#2 capacity: 304.00 MB usage: 92.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 15 last_copies: 0 last_secs: 0.000919 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5738,88.47 MB,29.1032%) FilterBlock(134,1.60 MB,0.525078%) IndexBlock(134,2.55 MB,0.838872%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 06 08:45:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4603: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:22.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:23 compute-0 nova_compute[251992]: 2025-12-06 08:45:23.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:23 compute-0 nova_compute[251992]: 2025-12-06 08:45:23.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:23.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:23 compute-0 nova_compute[251992]: 2025-12-06 08:45:23.687 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:45:23 compute-0 nova_compute[251992]: 2025-12-06 08:45:23.687 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:45:23 compute-0 nova_compute[251992]: 2025-12-06 08:45:23.688 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:45:23 compute-0 nova_compute[251992]: 2025-12-06 08:45:23.688 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:45:23 compute-0 nova_compute[251992]: 2025-12-06 08:45:23.688 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:45:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:45:24 compute-0 ceph-mon[74339]: pgmap v4603: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:24 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3671994025' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:45:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/632855294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.125 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.288 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.290 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4061MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.290 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.291 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.401 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.402 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.428 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:45:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4604: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:24.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:24 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:45:24 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2771911123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.893 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.899 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.950 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.952 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:45:24 compute-0 nova_compute[251992]: 2025-12-06 08:45:24.952 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:45:25 compute-0 nova_compute[251992]: 2025-12-06 08:45:25.000 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/632855294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1430437182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:25 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2771911123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:45:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:25.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:25 compute-0 nova_compute[251992]: 2025-12-06 08:45:25.952 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:26 compute-0 ceph-mon[74339]: pgmap v4604: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:26 compute-0 nova_compute[251992]: 2025-12-06 08:45:26.210 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4605: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:26.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:45:27 compute-0 ceph-mon[74339]: pgmap v4605: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:45:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:45:27 compute-0 nova_compute[251992]: 2025-12-06 08:45:27.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:27 compute-0 nova_compute[251992]: 2025-12-06 08:45:27.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:45:27 compute-0 nova_compute[251992]: 2025-12-06 08:45:27.658 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:45:27 compute-0 nova_compute[251992]: 2025-12-06 08:45:27.685 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:45:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:27.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4606: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:29 compute-0 ceph-mon[74339]: pgmap v4606: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:29.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:30 compute-0 nova_compute[251992]: 2025-12-06 08:45:30.004 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4607: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:30 compute-0 nova_compute[251992]: 2025-12-06 08:45:30.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:30.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:31 compute-0 nova_compute[251992]: 2025-12-06 08:45:31.212 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:31 compute-0 podman[444819]: 2025-12-06 08:45:31.419675965 +0000 UTC m=+0.081894191 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:45:31 compute-0 ceph-mon[74339]: pgmap v4607: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:31.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4608: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:32 compute-0 nova_compute[251992]: 2025-12-06 08:45:32.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:32.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:33 compute-0 ceph-mon[74339]: pgmap v4608: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:33.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4609: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:34 compute-0 nova_compute[251992]: 2025-12-06 08:45:34.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:34 compute-0 nova_compute[251992]: 2025-12-06 08:45:34.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:45:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:34.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:35 compute-0 nova_compute[251992]: 2025-12-06 08:45:35.007 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:35 compute-0 sudo[444849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:35 compute-0 sudo[444849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:35 compute-0 sudo[444849]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:45:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:35.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:45:35 compute-0 ceph-mon[74339]: pgmap v4609: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:35 compute-0 sudo[444874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:35 compute-0 sudo[444874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:35 compute-0 sudo[444874]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:36 compute-0 nova_compute[251992]: 2025-12-06 08:45:36.214 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4610: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:36.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:37 compute-0 podman[444901]: 2025-12-06 08:45:37.393741353 +0000 UTC m=+0.052162280 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 08:45:37 compute-0 podman[444900]: 2025-12-06 08:45:37.417994698 +0000 UTC m=+0.078801008 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:45:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:37.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:37 compute-0 ceph-mon[74339]: pgmap v4610: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4611: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:38.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:39.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:39 compute-0 sudo[444938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:39 compute-0 sudo[444938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:39 compute-0 sudo[444938]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:39 compute-0 sudo[444963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:45:39 compute-0 sudo[444963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:39 compute-0 sudo[444963]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:39 compute-0 sudo[444988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:39 compute-0 sudo[444988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:39 compute-0 sudo[444988]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:40 compute-0 nova_compute[251992]: 2025-12-06 08:45:40.010 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:40 compute-0 ceph-mon[74339]: pgmap v4611: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:40 compute-0 sudo[445013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:45:40 compute-0 sudo[445013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:45:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:45:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:40 compute-0 sudo[445013]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec 06 08:45:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:45:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4612: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:40 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Dec 06 08:45:40 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:45:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:40.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:41 compute-0 nova_compute[251992]: 2025-12-06 08:45:41.216 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 06 08:45:41 compute-0 ceph-mon[74339]: pgmap v4612: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:41 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 06 08:45:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:41.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Dec 06 08:45:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Dec 06 08:45:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4613: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:42 compute-0 nova_compute[251992]: 2025-12-06 08:45:42.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:45:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:45:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:45:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:45:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:45:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:42 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ad15bc40-d5bb-4f09-8c93-91d5d0229d1c does not exist
Dec 06 08:45:42 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 0cbd79dc-732d-4b12-afdb-99df867ccc08 does not exist
Dec 06 08:45:42 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 961a378b-8a82-4c91-bf38-9bdaf1d0f005 does not exist
Dec 06 08:45:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:42.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:45:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:45:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:45:42 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:45:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:45:42 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:45:42 compute-0 sudo[445069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:42 compute-0 sudo[445069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:42 compute-0 sudo[445069]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:43 compute-0 sudo[445094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:45:43 compute-0 sudo[445094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:43 compute-0 sudo[445094]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:43 compute-0 ceph-mon[74339]: pgmap v4613: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:45:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:45:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:45:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:45:43 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:45:43 compute-0 sudo[445119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:45:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:45:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:45:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:45:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:45:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:45:43 compute-0 sudo[445119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:43 compute-0 sudo[445119]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:43 compute-0 sudo[445144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:45:43 compute-0 sudo[445144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:43 compute-0 podman[445209]: 2025-12-06 08:45:43.456437383 +0000 UTC m=+0.038059238 container create 7cb18050661b677e8e16215e81a5d6ac05ceef023805f58cfbf1a37161af0ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_robinson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec 06 08:45:43 compute-0 systemd[1]: Started libpod-conmon-7cb18050661b677e8e16215e81a5d6ac05ceef023805f58cfbf1a37161af0ba7.scope.
Dec 06 08:45:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:45:43 compute-0 podman[445209]: 2025-12-06 08:45:43.524636644 +0000 UTC m=+0.106258529 container init 7cb18050661b677e8e16215e81a5d6ac05ceef023805f58cfbf1a37161af0ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:45:43 compute-0 podman[445209]: 2025-12-06 08:45:43.530851272 +0000 UTC m=+0.112473137 container start 7cb18050661b677e8e16215e81a5d6ac05ceef023805f58cfbf1a37161af0ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_robinson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec 06 08:45:43 compute-0 exciting_robinson[445226]: 167 167
Dec 06 08:45:43 compute-0 systemd[1]: libpod-7cb18050661b677e8e16215e81a5d6ac05ceef023805f58cfbf1a37161af0ba7.scope: Deactivated successfully.
Dec 06 08:45:43 compute-0 podman[445209]: 2025-12-06 08:45:43.440224265 +0000 UTC m=+0.021846140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:45:43 compute-0 podman[445209]: 2025-12-06 08:45:43.536408022 +0000 UTC m=+0.118029877 container attach 7cb18050661b677e8e16215e81a5d6ac05ceef023805f58cfbf1a37161af0ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_robinson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec 06 08:45:43 compute-0 conmon[445226]: conmon 7cb18050661b677e8e16 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cb18050661b677e8e16215e81a5d6ac05ceef023805f58cfbf1a37161af0ba7.scope/container/memory.events
Dec 06 08:45:43 compute-0 podman[445209]: 2025-12-06 08:45:43.537310466 +0000 UTC m=+0.118932321 container died 7cb18050661b677e8e16215e81a5d6ac05ceef023805f58cfbf1a37161af0ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_robinson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-236fadf643298980ef09cf2c54e7e87cd9325cde8961f42ea437d18d3d9abd19-merged.mount: Deactivated successfully.
Dec 06 08:45:43 compute-0 podman[445209]: 2025-12-06 08:45:43.578057736 +0000 UTC m=+0.159679591 container remove 7cb18050661b677e8e16215e81a5d6ac05ceef023805f58cfbf1a37161af0ba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_robinson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:45:43 compute-0 systemd[1]: libpod-conmon-7cb18050661b677e8e16215e81a5d6ac05ceef023805f58cfbf1a37161af0ba7.scope: Deactivated successfully.
Dec 06 08:45:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:43.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:43 compute-0 podman[445248]: 2025-12-06 08:45:43.739992985 +0000 UTC m=+0.042049345 container create 720601212bd71a2f4b61dd95a48ee0e725919d71a66d7931fbb84fcb21ac1b5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chandrasekhar, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 06 08:45:43 compute-0 systemd[1]: Started libpod-conmon-720601212bd71a2f4b61dd95a48ee0e725919d71a66d7931fbb84fcb21ac1b5d.scope.
Dec 06 08:45:43 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39279908b012b38ac1652a422e864cbad82145fb1267d915a1d415e2f1af3981/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:43 compute-0 podman[445248]: 2025-12-06 08:45:43.723209663 +0000 UTC m=+0.025266053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39279908b012b38ac1652a422e864cbad82145fb1267d915a1d415e2f1af3981/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39279908b012b38ac1652a422e864cbad82145fb1267d915a1d415e2f1af3981/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39279908b012b38ac1652a422e864cbad82145fb1267d915a1d415e2f1af3981/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39279908b012b38ac1652a422e864cbad82145fb1267d915a1d415e2f1af3981/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:43 compute-0 podman[445248]: 2025-12-06 08:45:43.833226703 +0000 UTC m=+0.135283083 container init 720601212bd71a2f4b61dd95a48ee0e725919d71a66d7931fbb84fcb21ac1b5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:45:43 compute-0 podman[445248]: 2025-12-06 08:45:43.842725939 +0000 UTC m=+0.144782319 container start 720601212bd71a2f4b61dd95a48ee0e725919d71a66d7931fbb84fcb21ac1b5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 06 08:45:43 compute-0 podman[445248]: 2025-12-06 08:45:43.846464999 +0000 UTC m=+0.148521369 container attach 720601212bd71a2f4b61dd95a48ee0e725919d71a66d7931fbb84fcb21ac1b5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chandrasekhar, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:45:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4614: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:44 compute-0 brave_chandrasekhar[445265]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:45:44 compute-0 brave_chandrasekhar[445265]: --> relative data size: 1.0
Dec 06 08:45:44 compute-0 brave_chandrasekhar[445265]: --> All data devices are unavailable
Dec 06 08:45:44 compute-0 systemd[1]: libpod-720601212bd71a2f4b61dd95a48ee0e725919d71a66d7931fbb84fcb21ac1b5d.scope: Deactivated successfully.
Dec 06 08:45:44 compute-0 podman[445248]: 2025-12-06 08:45:44.66458483 +0000 UTC m=+0.966641200 container died 720601212bd71a2f4b61dd95a48ee0e725919d71a66d7931fbb84fcb21ac1b5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chandrasekhar, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-39279908b012b38ac1652a422e864cbad82145fb1267d915a1d415e2f1af3981-merged.mount: Deactivated successfully.
Dec 06 08:45:44 compute-0 podman[445248]: 2025-12-06 08:45:44.718185017 +0000 UTC m=+1.020241407 container remove 720601212bd71a2f4b61dd95a48ee0e725919d71a66d7931fbb84fcb21ac1b5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chandrasekhar, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:45:44 compute-0 systemd[1]: libpod-conmon-720601212bd71a2f4b61dd95a48ee0e725919d71a66d7931fbb84fcb21ac1b5d.scope: Deactivated successfully.
Dec 06 08:45:44 compute-0 sudo[445144]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:44 compute-0 sudo[445292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:44 compute-0 sudo[445292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:44 compute-0 sudo[445292]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:44 compute-0 sudo[445317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:45:44 compute-0 sudo[445317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:44 compute-0 sudo[445317]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:44.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:44 compute-0 sudo[445342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:44 compute-0 sudo[445342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:44 compute-0 sudo[445342]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:44 compute-0 sudo[445367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:45:44 compute-0 sudo[445367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:45 compute-0 nova_compute[251992]: 2025-12-06 08:45:45.014 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:45 compute-0 podman[445434]: 2025-12-06 08:45:45.288017937 +0000 UTC m=+0.041105661 container create ba3276fde5de8139e434130cef61d095fa00e80b52e3d3612e4297648d6c51fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:45:45 compute-0 systemd[1]: Started libpod-conmon-ba3276fde5de8139e434130cef61d095fa00e80b52e3d3612e4297648d6c51fd.scope.
Dec 06 08:45:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:45:45 compute-0 podman[445434]: 2025-12-06 08:45:45.266144756 +0000 UTC m=+0.019232500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:45:45 compute-0 podman[445434]: 2025-12-06 08:45:45.370586825 +0000 UTC m=+0.123674549 container init ba3276fde5de8139e434130cef61d095fa00e80b52e3d3612e4297648d6c51fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:45:45 compute-0 podman[445434]: 2025-12-06 08:45:45.378426046 +0000 UTC m=+0.131513770 container start ba3276fde5de8139e434130cef61d095fa00e80b52e3d3612e4297648d6c51fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:45:45 compute-0 podman[445434]: 2025-12-06 08:45:45.381845758 +0000 UTC m=+0.134933482 container attach ba3276fde5de8139e434130cef61d095fa00e80b52e3d3612e4297648d6c51fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec 06 08:45:45 compute-0 determined_bohr[445450]: 167 167
Dec 06 08:45:45 compute-0 systemd[1]: libpod-ba3276fde5de8139e434130cef61d095fa00e80b52e3d3612e4297648d6c51fd.scope: Deactivated successfully.
Dec 06 08:45:45 compute-0 podman[445434]: 2025-12-06 08:45:45.382946859 +0000 UTC m=+0.136034603 container died ba3276fde5de8139e434130cef61d095fa00e80b52e3d3612e4297648d6c51fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 08:45:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b43763f9a0b1cc1c919ab93f1d936de126aad29afd2cc57958f65cff3e73c6d2-merged.mount: Deactivated successfully.
Dec 06 08:45:45 compute-0 podman[445434]: 2025-12-06 08:45:45.418710994 +0000 UTC m=+0.171798718 container remove ba3276fde5de8139e434130cef61d095fa00e80b52e3d3612e4297648d6c51fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:45:45 compute-0 systemd[1]: libpod-conmon-ba3276fde5de8139e434130cef61d095fa00e80b52e3d3612e4297648d6c51fd.scope: Deactivated successfully.
Dec 06 08:45:45 compute-0 podman[445477]: 2025-12-06 08:45:45.567333306 +0000 UTC m=+0.039259991 container create f2a8b17cc361afa6af1f693701e3b1d27dd674c182806eec86099eacc5a6bc16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:45:45 compute-0 systemd[1]: Started libpod-conmon-f2a8b17cc361afa6af1f693701e3b1d27dd674c182806eec86099eacc5a6bc16.scope.
Dec 06 08:45:45 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62080436f96a6521475bff4fe112484c97feec462ba6c961931e052bbce7cbc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62080436f96a6521475bff4fe112484c97feec462ba6c961931e052bbce7cbc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62080436f96a6521475bff4fe112484c97feec462ba6c961931e052bbce7cbc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62080436f96a6521475bff4fe112484c97feec462ba6c961931e052bbce7cbc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:45 compute-0 podman[445477]: 2025-12-06 08:45:45.641124837 +0000 UTC m=+0.113051532 container init f2a8b17cc361afa6af1f693701e3b1d27dd674c182806eec86099eacc5a6bc16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_perlman, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:45:45 compute-0 podman[445477]: 2025-12-06 08:45:45.550625205 +0000 UTC m=+0.022551920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:45:45 compute-0 podman[445477]: 2025-12-06 08:45:45.646657946 +0000 UTC m=+0.118584641 container start f2a8b17cc361afa6af1f693701e3b1d27dd674c182806eec86099eacc5a6bc16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_perlman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec 06 08:45:45 compute-0 podman[445477]: 2025-12-06 08:45:45.652406811 +0000 UTC m=+0.124333506 container attach f2a8b17cc361afa6af1f693701e3b1d27dd674c182806eec86099eacc5a6bc16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_perlman, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 08:45:45 compute-0 ceph-mon[74339]: pgmap v4614: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:45.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:46 compute-0 nova_compute[251992]: 2025-12-06 08:45:46.217 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:46 compute-0 gallant_perlman[445494]: {
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:     "0": [
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:         {
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             "devices": [
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "/dev/loop3"
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             ],
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             "lv_name": "ceph_lv0",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             "lv_size": "7511998464",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             "name": "ceph_lv0",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             "tags": {
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "ceph.cluster_name": "ceph",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "ceph.crush_device_class": "",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "ceph.encrypted": "0",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "ceph.osd_id": "0",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "ceph.type": "block",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:                 "ceph.vdo": "0"
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             },
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             "type": "block",
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:             "vg_name": "ceph_vg0"
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:         }
Dec 06 08:45:46 compute-0 gallant_perlman[445494]:     ]
Dec 06 08:45:46 compute-0 gallant_perlman[445494]: }
Dec 06 08:45:46 compute-0 systemd[1]: libpod-f2a8b17cc361afa6af1f693701e3b1d27dd674c182806eec86099eacc5a6bc16.scope: Deactivated successfully.
Dec 06 08:45:46 compute-0 podman[445477]: 2025-12-06 08:45:46.394780588 +0000 UTC m=+0.866707313 container died f2a8b17cc361afa6af1f693701e3b1d27dd674c182806eec86099eacc5a6bc16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_perlman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-62080436f96a6521475bff4fe112484c97feec462ba6c961931e052bbce7cbc9-merged.mount: Deactivated successfully.
Dec 06 08:45:46 compute-0 podman[445477]: 2025-12-06 08:45:46.445894777 +0000 UTC m=+0.917821472 container remove f2a8b17cc361afa6af1f693701e3b1d27dd674c182806eec86099eacc5a6bc16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_perlman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec 06 08:45:46 compute-0 systemd[1]: libpod-conmon-f2a8b17cc361afa6af1f693701e3b1d27dd674c182806eec86099eacc5a6bc16.scope: Deactivated successfully.
Dec 06 08:45:46 compute-0 sudo[445367]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4615: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:46 compute-0 sudo[445516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:46 compute-0 sudo[445516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:46 compute-0 sudo[445516]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:46 compute-0 sudo[445541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:45:46 compute-0 sudo[445541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:46 compute-0 sudo[445541]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:46 compute-0 sudo[445566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:46 compute-0 sudo[445566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:46 compute-0 sudo[445566]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:46 compute-0 sudo[445591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:45:46 compute-0 sudo[445591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:46.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:47 compute-0 podman[445656]: 2025-12-06 08:45:47.101766779 +0000 UTC m=+0.034787640 container create e30beeec3fa478d046d70d5cffb2cfaa9f37e0b10d1f40afd5bd1037169d2082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:45:47 compute-0 systemd[1]: Started libpod-conmon-e30beeec3fa478d046d70d5cffb2cfaa9f37e0b10d1f40afd5bd1037169d2082.scope.
Dec 06 08:45:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:45:47 compute-0 podman[445656]: 2025-12-06 08:45:47.164804991 +0000 UTC m=+0.097825872 container init e30beeec3fa478d046d70d5cffb2cfaa9f37e0b10d1f40afd5bd1037169d2082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cohen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:45:47 compute-0 podman[445656]: 2025-12-06 08:45:47.172452317 +0000 UTC m=+0.105473178 container start e30beeec3fa478d046d70d5cffb2cfaa9f37e0b10d1f40afd5bd1037169d2082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cohen, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:45:47 compute-0 podman[445656]: 2025-12-06 08:45:47.176119106 +0000 UTC m=+0.109139987 container attach e30beeec3fa478d046d70d5cffb2cfaa9f37e0b10d1f40afd5bd1037169d2082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cohen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec 06 08:45:47 compute-0 systemd[1]: libpod-e30beeec3fa478d046d70d5cffb2cfaa9f37e0b10d1f40afd5bd1037169d2082.scope: Deactivated successfully.
Dec 06 08:45:47 compute-0 kind_cohen[445673]: 167 167
Dec 06 08:45:47 compute-0 conmon[445673]: conmon e30beeec3fa478d046d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e30beeec3fa478d046d70d5cffb2cfaa9f37e0b10d1f40afd5bd1037169d2082.scope/container/memory.events
Dec 06 08:45:47 compute-0 podman[445656]: 2025-12-06 08:45:47.178080549 +0000 UTC m=+0.111101410 container died e30beeec3fa478d046d70d5cffb2cfaa9f37e0b10d1f40afd5bd1037169d2082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cohen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 08:45:47 compute-0 podman[445656]: 2025-12-06 08:45:47.087059842 +0000 UTC m=+0.020080723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:45:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef001f276b790e535add9339f715eb2b5c2d2f232700bd11d68b956f3b9dc066-merged.mount: Deactivated successfully.
Dec 06 08:45:47 compute-0 podman[445656]: 2025-12-06 08:45:47.219884928 +0000 UTC m=+0.152905789 container remove e30beeec3fa478d046d70d5cffb2cfaa9f37e0b10d1f40afd5bd1037169d2082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:45:47 compute-0 systemd[1]: libpod-conmon-e30beeec3fa478d046d70d5cffb2cfaa9f37e0b10d1f40afd5bd1037169d2082.scope: Deactivated successfully.
Dec 06 08:45:47 compute-0 podman[445697]: 2025-12-06 08:45:47.370528213 +0000 UTC m=+0.041997055 container create 0063dccc0af2523edd069a272d3f0eb7bc6d979f31e04620df4f683f031dbccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec 06 08:45:47 compute-0 systemd[1]: Started libpod-conmon-0063dccc0af2523edd069a272d3f0eb7bc6d979f31e04620df4f683f031dbccd.scope.
Dec 06 08:45:47 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7f988f847e42403ace74c0d1a1f806903637970536b7fc268d42d7a7e03888/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7f988f847e42403ace74c0d1a1f806903637970536b7fc268d42d7a7e03888/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7f988f847e42403ace74c0d1a1f806903637970536b7fc268d42d7a7e03888/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7f988f847e42403ace74c0d1a1f806903637970536b7fc268d42d7a7e03888/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:45:47 compute-0 podman[445697]: 2025-12-06 08:45:47.443277796 +0000 UTC m=+0.114746658 container init 0063dccc0af2523edd069a272d3f0eb7bc6d979f31e04620df4f683f031dbccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 06 08:45:47 compute-0 podman[445697]: 2025-12-06 08:45:47.351950112 +0000 UTC m=+0.023418974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:45:47 compute-0 podman[445697]: 2025-12-06 08:45:47.449645319 +0000 UTC m=+0.121114161 container start 0063dccc0af2523edd069a272d3f0eb7bc6d979f31e04620df4f683f031dbccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:45:47 compute-0 podman[445697]: 2025-12-06 08:45:47.452753052 +0000 UTC m=+0.124221894 container attach 0063dccc0af2523edd069a272d3f0eb7bc6d979f31e04620df4f683f031dbccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:45:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:47 compute-0 ceph-mon[74339]: pgmap v4615: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:47.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:48 compute-0 cool_murdock[445714]: {
Dec 06 08:45:48 compute-0 cool_murdock[445714]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:45:48 compute-0 cool_murdock[445714]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:45:48 compute-0 cool_murdock[445714]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:45:48 compute-0 cool_murdock[445714]:         "osd_id": 0,
Dec 06 08:45:48 compute-0 cool_murdock[445714]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:45:48 compute-0 cool_murdock[445714]:         "type": "bluestore"
Dec 06 08:45:48 compute-0 cool_murdock[445714]:     }
Dec 06 08:45:48 compute-0 cool_murdock[445714]: }
Dec 06 08:45:48 compute-0 systemd[1]: libpod-0063dccc0af2523edd069a272d3f0eb7bc6d979f31e04620df4f683f031dbccd.scope: Deactivated successfully.
Dec 06 08:45:48 compute-0 podman[445697]: 2025-12-06 08:45:48.268051247 +0000 UTC m=+0.939520089 container died 0063dccc0af2523edd069a272d3f0eb7bc6d979f31e04620df4f683f031dbccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec 06 08:45:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e7f988f847e42403ace74c0d1a1f806903637970536b7fc268d42d7a7e03888-merged.mount: Deactivated successfully.
Dec 06 08:45:48 compute-0 podman[445697]: 2025-12-06 08:45:48.319713181 +0000 UTC m=+0.991182023 container remove 0063dccc0af2523edd069a272d3f0eb7bc6d979f31e04620df4f683f031dbccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec 06 08:45:48 compute-0 systemd[1]: libpod-conmon-0063dccc0af2523edd069a272d3f0eb7bc6d979f31e04620df4f683f031dbccd.scope: Deactivated successfully.
Dec 06 08:45:48 compute-0 sudo[445591]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:45:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:45:48 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:48 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev d45fc6cf-98ce-44ad-8976-5fbd3c16ac1e does not exist
Dec 06 08:45:48 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 265ed8b0-7a21-47a8-9de9-b4c79ef1790d does not exist
Dec 06 08:45:48 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 77533c95-fc42-4252-abb2-b0b78bd90878 does not exist
Dec 06 08:45:48 compute-0 sudo[445750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:48 compute-0 sudo[445750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:48 compute-0 sudo[445750]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:48 compute-0 sudo[445775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:45:48 compute-0 sudo[445775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:48 compute-0 sudo[445775]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4616: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:48 compute-0 nova_compute[251992]: 2025-12-06 08:45:48.788 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:45:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:45:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:48.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:45:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:45:49 compute-0 ceph-mon[74339]: pgmap v4616: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:49.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:50 compute-0 nova_compute[251992]: 2025-12-06 08:45:50.019 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4617: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:45:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:50.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:45:51 compute-0 nova_compute[251992]: 2025-12-06 08:45:51.218 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:51.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:52 compute-0 ceph-mon[74339]: pgmap v4617: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4618: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:52.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:45:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:53.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:45:54 compute-0 ceph-mon[74339]: pgmap v4618: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4619: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:54.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:55 compute-0 nova_compute[251992]: 2025-12-06 08:45:55.023 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:55.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:55 compute-0 sudo[445803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:55 compute-0 sudo[445803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:55 compute-0 sudo[445803]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:55 compute-0 sudo[445828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:45:55 compute-0 sudo[445828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:45:55 compute-0 sudo[445828]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:56 compute-0 ceph-mon[74339]: pgmap v4619: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:56 compute-0 nova_compute[251992]: 2025-12-06 08:45:56.220 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:45:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4620: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:56.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:45:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:45:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:57.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:45:58 compute-0 ceph-mon[74339]: pgmap v4620: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4621: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:45:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:45:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:45:58.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:45:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:45:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:45:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:45:59.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:46:00 compute-0 nova_compute[251992]: 2025-12-06 08:46:00.026 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:00 compute-0 ceph-mon[74339]: pgmap v4621: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4622: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:00.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:01 compute-0 nova_compute[251992]: 2025-12-06 08:46:01.222 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:01.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:02 compute-0 ceph-mon[74339]: pgmap v4622: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1869584584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:02 compute-0 podman[445857]: 2025-12-06 08:46:02.440876148 +0000 UTC m=+0.089697291 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 08:46:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4623: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:02.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:03 compute-0 ceph-mon[74339]: pgmap v4623: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1875903531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:03.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:46:03.924 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:46:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:46:03.925 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:46:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:46:03.926 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:46:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4624: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:04.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:05 compute-0 nova_compute[251992]: 2025-12-06 08:46:05.029 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:05.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:05 compute-0 ceph-mon[74339]: pgmap v4624: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:06 compute-0 nova_compute[251992]: 2025-12-06 08:46:06.224 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4625: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:06.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:46:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:07.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:46:07 compute-0 ceph-mon[74339]: pgmap v4625: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:08 compute-0 podman[445887]: 2025-12-06 08:46:08.412129851 +0000 UTC m=+0.054942984 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 08:46:08 compute-0 podman[445886]: 2025-12-06 08:46:08.423322192 +0000 UTC m=+0.072630880 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 08:46:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4626: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:08.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:09.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:09 compute-0 ceph-mon[74339]: pgmap v4626: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1800815749' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:46:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/1800815749' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:46:10 compute-0 nova_compute[251992]: 2025-12-06 08:46:10.032 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4627: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:10.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:11 compute-0 nova_compute[251992]: 2025-12-06 08:46:11.225 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:11.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:11 compute-0 ceph-mon[74339]: pgmap v4627: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4628: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:12.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:46:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:46:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:46:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:46:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:46:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:46:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:13.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:13 compute-0 ceph-mon[74339]: pgmap v4628: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4629: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:14 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:14 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:14 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:14.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:15 compute-0 nova_compute[251992]: 2025-12-06 08:46:15.063 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:15 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:15 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:15 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:15.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:15 compute-0 sudo[445925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:15 compute-0 sudo[445925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:15 compute-0 sudo[445925]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:15 compute-0 ceph-mon[74339]: pgmap v4629: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:16 compute-0 sudo[445950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:16 compute-0 sudo[445950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:16 compute-0 sudo[445950]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:16 compute-0 nova_compute[251992]: 2025-12-06 08:46:16.227 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:16 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4630: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:16 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:16 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:16 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:16.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:17 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:17 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:17 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:17 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:17.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:17 compute-0 ceph-mon[74339]: pgmap v4630: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:18 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4631: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Optimize plan auto_2025-12-06_08:46:18
Dec 06 08:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 08:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] do_upmap
Dec 06 08:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', '.mgr', 'images', 'backups', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms']
Dec 06 08:46:18 compute-0 ceph-mgr[74630]: [balancer INFO root] prepared 0/10 changes
Dec 06 08:46:18 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:18 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:18 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:18.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:19 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:19 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:19 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:19.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:19 compute-0 ceph-mon[74339]: pgmap v4631: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:20 compute-0 nova_compute[251992]: 2025-12-06 08:46:20.066 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:20 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4632: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:20 compute-0 nova_compute[251992]: 2025-12-06 08:46:20.675 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:20 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:20 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:20 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:20.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:21 compute-0 nova_compute[251992]: 2025-12-06 08:46:21.230 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:21 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:21 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:21 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:21.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:22 compute-0 ceph-mon[74339]: pgmap v4632: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:22 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:22 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4633: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:22 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:22 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:22 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:22.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:23 compute-0 nova_compute[251992]: 2025-12-06 08:46:23.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:23 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:23 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:23 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:23.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 08:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:46:23 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:46:24 compute-0 ceph-mon[74339]: pgmap v4633: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:24 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4634: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:24 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:24 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:24 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:24.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:25 compute-0 nova_compute[251992]: 2025-12-06 08:46:25.071 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:25 compute-0 nova_compute[251992]: 2025-12-06 08:46:25.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:25 compute-0 nova_compute[251992]: 2025-12-06 08:46:25.658 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:25 compute-0 nova_compute[251992]: 2025-12-06 08:46:25.691 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:46:25 compute-0 nova_compute[251992]: 2025-12-06 08:46:25.691 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:46:25 compute-0 nova_compute[251992]: 2025-12-06 08:46:25.691 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:46:25 compute-0 nova_compute[251992]: 2025-12-06 08:46:25.691 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 08:46:25 compute-0 nova_compute[251992]: 2025-12-06 08:46:25.692 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:46:25 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:25 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:25 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:25.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:26 compute-0 ceph-mon[74339]: pgmap v4634: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:26 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/75782707' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:46:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659106783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.144 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.230 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.295 251996 WARNING nova.virt.libvirt.driver [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.296 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4059MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.297 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.297 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.373 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.373 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.391 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing inventories for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.411 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating ProviderTree inventory for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.411 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Updating inventory in ProviderTree for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.424 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing aggregate associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.445 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Refreshing trait associations for resource provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433, traits: COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE42,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.463 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 08:46:26 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4635: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:26 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec 06 08:46:26 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/768853376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.897 251996 DEBUG oslo_concurrency.processutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 08:46:26 compute-0 nova_compute[251992]: 2025-12-06 08:46:26.903 251996 DEBUG nova.compute.provider_tree [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed in ProviderTree for provider: e75da5bf-16fa-49b1-b5e1-3aa61daf0433 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 08:46:26 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:26 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:26 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:26.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 4.1076330727712636e-05 of space, bias 1.0, pg target 0.01232289921831379 quantized to 1 (current 1)
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216214172715429 of space, bias 1.0, pg target 0.648642518146287 quantized to 32 (current 32)
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Dec 06 08:46:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1659106783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3420502171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:27 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/768853376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:46:27 compute-0 nova_compute[251992]: 2025-12-06 08:46:27.081 251996 DEBUG nova.scheduler.client.report [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Inventory has not changed for provider e75da5bf-16fa-49b1-b5e1-3aa61daf0433 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 08:46:27 compute-0 nova_compute[251992]: 2025-12-06 08:46:27.085 251996 DEBUG nova.compute.resource_tracker [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 08:46:27 compute-0 nova_compute[251992]: 2025-12-06 08:46:27.085 251996 DEBUG oslo_concurrency.lockutils [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:46:27 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 08:46:27 compute-0 ceph-mgr[74630]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 08:46:27 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:27 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:46:27 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:27.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:46:28 compute-0 ceph-mon[74339]: pgmap v4635: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:28 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4636: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:28 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:28 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:28 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:28.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:29 compute-0 nova_compute[251992]: 2025-12-06 08:46:29.087 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:29 compute-0 nova_compute[251992]: 2025-12-06 08:46:29.087 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 08:46:29 compute-0 nova_compute[251992]: 2025-12-06 08:46:29.087 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 08:46:29 compute-0 nova_compute[251992]: 2025-12-06 08:46:29.105 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 08:46:29 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:29 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:29 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:29.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:30 compute-0 ceph-mon[74339]: pgmap v4636: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:30 compute-0 nova_compute[251992]: 2025-12-06 08:46:30.074 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:30 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4637: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:30 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:30 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:46:30 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:30.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:46:31 compute-0 nova_compute[251992]: 2025-12-06 08:46:31.231 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:31 compute-0 nova_compute[251992]: 2025-12-06 08:46:31.656 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:31 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:31 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:31 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:31.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:32 compute-0 ceph-mon[74339]: pgmap v4637: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:32 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:32 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4638: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:32 compute-0 nova_compute[251992]: 2025-12-06 08:46:32.651 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:32 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:32 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:32 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:32.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:33 compute-0 ceph-mon[74339]: pgmap v4638: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:33 compute-0 podman[446028]: 2025-12-06 08:46:33.42906213 +0000 UTC m=+0.084500062 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 06 08:46:33 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:33 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:33 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:33.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:34 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4639: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:34 compute-0 nova_compute[251992]: 2025-12-06 08:46:34.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:34 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:34 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:34 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:34.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:35 compute-0 nova_compute[251992]: 2025-12-06 08:46:35.077 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:35 compute-0 nova_compute[251992]: 2025-12-06 08:46:35.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:35 compute-0 nova_compute[251992]: 2025-12-06 08:46:35.657 251996 DEBUG nova.compute.manager [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 08:46:35 compute-0 ceph-mon[74339]: pgmap v4639: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:35 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:35 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:35 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:35.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:36 compute-0 sudo[446056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:36 compute-0 sudo[446056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:36 compute-0 sudo[446056]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:36 compute-0 sudo[446081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:36 compute-0 sudo[446081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:36 compute-0 sudo[446081]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:36 compute-0 nova_compute[251992]: 2025-12-06 08:46:36.262 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:36 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4640: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:36 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:36 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:36 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:36.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:37 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:37 compute-0 ceph-mon[74339]: pgmap v4640: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:37 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:37 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:37 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:37.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:38 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4641: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:38 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:38 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:38 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:38.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:39 compute-0 podman[446108]: 2025-12-06 08:46:39.399806438 +0000 UTC m=+0.055370536 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Dec 06 08:46:39 compute-0 podman[446109]: 2025-12-06 08:46:39.436021506 +0000 UTC m=+0.086779344 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:46:39 compute-0 ceph-mon[74339]: pgmap v4641: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:39 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:39 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:39 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:39.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:40 compute-0 nova_compute[251992]: 2025-12-06 08:46:40.080 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:40 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4642: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:40 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:40 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:40 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:40.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:41 compute-0 nova_compute[251992]: 2025-12-06 08:46:41.264 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:41 compute-0 ceph-mon[74339]: pgmap v4642: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:41 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:41 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:41 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:41.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:42 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:42 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4643: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:42 compute-0 nova_compute[251992]: 2025-12-06 08:46:42.657 251996 DEBUG oslo_service.periodic_task [None req-ef1000de-053b-4dec-8e2a-efcef260c8a4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 08:46:42 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:42 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:46:42 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:42.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:46:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:46:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:46:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:46:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:46:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:46:43 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:46:43 compute-0 ceph-mon[74339]: pgmap v4643: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:43 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:43 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:43 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:43.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:44 compute-0 sshd-session[446150]: Accepted publickey for zuul from 192.168.122.10 port 52096 ssh2: ECDSA SHA256:1GGo/sE+V3TagYXky+wz/EbgEVK7d6I++8XwwL4s53E
Dec 06 08:46:44 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4644: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:44 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:44 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:44 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:44.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:45 compute-0 systemd-logind[798]: New session 68 of user zuul.
Dec 06 08:46:45 compute-0 nova_compute[251992]: 2025-12-06 08:46:45.144 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:45 compute-0 systemd[1]: Started Session 68 of User zuul.
Dec 06 08:46:45 compute-0 sshd-session[446150]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 06 08:46:45 compute-0 ceph-mon[74339]: pgmap v4644: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:45 compute-0 sudo[446154]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 06 08:46:45 compute-0 sudo[446154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 08:46:45 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:45 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:46:45 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:45.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:46:46 compute-0 nova_compute[251992]: 2025-12-06 08:46:46.266 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:46 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4645: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:46 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:46 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:46 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:46.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:47 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46769 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:47 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:47 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46775 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:47 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:47 compute-0 ceph-mon[74339]: pgmap v4645: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:47 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:47 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:47 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:47.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:48 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38562 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:48 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Dec 06 08:46:48 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1767899615' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:46:48 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4646: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:48 compute-0 ceph-mon[74339]: from='client.46769 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:48 compute-0 ceph-mon[74339]: from='client.46775 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:48 compute-0 ceph-mon[74339]: from='client.38556 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1260620969' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:46:48 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1767899615' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:46:48 compute-0 sudo[446404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:48 compute-0 sudo[446404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:48 compute-0 sudo[446404]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:48 compute-0 sudo[446429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:46:48 compute-0 sudo[446429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:48 compute-0 sudo[446429]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:48 compute-0 sudo[446454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:48 compute-0 sudo[446454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:48 compute-0 sudo[446454]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:48 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:48 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:48 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:48.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:48 compute-0 sudo[446479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Dec 06 08:46:48 compute-0 sudo[446479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:49 compute-0 sudo[446479]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:49 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47743 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:46:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec 06 08:46:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:46:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec 06 08:46:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:46:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 2f00e130-d4a3-4fc2-b2d0-2f4e38728fe5 does not exist
Dec 06 08:46:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev ec480324-0e46-47c3-8933-23801ef777b9 does not exist
Dec 06 08:46:49 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 7b660d5a-203d-4471-bb1b-4777f4f4b934 does not exist
Dec 06 08:46:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec 06 08:46:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:46:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec 06 08:46:49 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:46:49 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:46:49 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:49 compute-0 sudo[446541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:49 compute-0 sudo[446541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:49 compute-0 sudo[446541]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:49 compute-0 sudo[446566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:46:49 compute-0 sudo[446566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:49 compute-0 sudo[446566]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:49 compute-0 sudo[446591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:49 compute-0 sudo[446591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:49 compute-0 sudo[446591]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:49 compute-0 ceph-mon[74339]: from='client.38562 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:49 compute-0 ceph-mon[74339]: pgmap v4646: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 06 08:46:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:46:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 06 08:46:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 06 08:46:49 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:49 compute-0 sudo[446616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 06 08:46:49 compute-0 sudo[446616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:49 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:49 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:49 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:49.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:49 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47749 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:50 compute-0 podman[446681]: 2025-12-06 08:46:50.070336177 +0000 UTC m=+0.039064230 container create eb51a0b04caa290cc43051a0ca527bc2b12ae045cca0ebd2069d87a742640ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec 06 08:46:50 compute-0 systemd[1]: Started libpod-conmon-eb51a0b04caa290cc43051a0ca527bc2b12ae045cca0ebd2069d87a742640ab9.scope.
Dec 06 08:46:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:46:50 compute-0 podman[446681]: 2025-12-06 08:46:50.050734176 +0000 UTC m=+0.019462249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:46:50 compute-0 nova_compute[251992]: 2025-12-06 08:46:50.147 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:50 compute-0 podman[446681]: 2025-12-06 08:46:50.155852754 +0000 UTC m=+0.124580827 container init eb51a0b04caa290cc43051a0ca527bc2b12ae045cca0ebd2069d87a742640ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dirac, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:46:50 compute-0 podman[446681]: 2025-12-06 08:46:50.163197052 +0000 UTC m=+0.131925105 container start eb51a0b04caa290cc43051a0ca527bc2b12ae045cca0ebd2069d87a742640ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dirac, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Dec 06 08:46:50 compute-0 podman[446681]: 2025-12-06 08:46:50.166735328 +0000 UTC m=+0.135463451 container attach eb51a0b04caa290cc43051a0ca527bc2b12ae045cca0ebd2069d87a742640ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Dec 06 08:46:50 compute-0 cool_dirac[446697]: 167 167
Dec 06 08:46:50 compute-0 podman[446681]: 2025-12-06 08:46:50.170086439 +0000 UTC m=+0.138814512 container died eb51a0b04caa290cc43051a0ca527bc2b12ae045cca0ebd2069d87a742640ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:46:50 compute-0 systemd[1]: libpod-eb51a0b04caa290cc43051a0ca527bc2b12ae045cca0ebd2069d87a742640ab9.scope: Deactivated successfully.
Dec 06 08:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f22f7f5e1c06775d32ffa9f191870394a0835b916af59bc17bb314c83379066c-merged.mount: Deactivated successfully.
Dec 06 08:46:50 compute-0 podman[446681]: 2025-12-06 08:46:50.211066459 +0000 UTC m=+0.179794532 container remove eb51a0b04caa290cc43051a0ca527bc2b12ae045cca0ebd2069d87a742640ab9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:46:50 compute-0 systemd[1]: libpod-conmon-eb51a0b04caa290cc43051a0ca527bc2b12ae045cca0ebd2069d87a742640ab9.scope: Deactivated successfully.
Dec 06 08:46:50 compute-0 podman[446722]: 2025-12-06 08:46:50.366054278 +0000 UTC m=+0.043898130 container create 204718238cfa8337a33e2b0dcecf30ce2653c2d3cf9bd799e117ff84713913dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 06 08:46:50 compute-0 systemd[1]: Started libpod-conmon-204718238cfa8337a33e2b0dcecf30ce2653c2d3cf9bd799e117ff84713913dc.scope.
Dec 06 08:46:50 compute-0 podman[446722]: 2025-12-06 08:46:50.345752718 +0000 UTC m=+0.023596620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:46:50 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a9dcc28b198a5554ab6fa537f146f4449cc5a4869c4c8db9c73ade819e0c08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a9dcc28b198a5554ab6fa537f146f4449cc5a4869c4c8db9c73ade819e0c08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a9dcc28b198a5554ab6fa537f146f4449cc5a4869c4c8db9c73ade819e0c08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a9dcc28b198a5554ab6fa537f146f4449cc5a4869c4c8db9c73ade819e0c08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41a9dcc28b198a5554ab6fa537f146f4449cc5a4869c4c8db9c73ade819e0c08/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:50 compute-0 podman[446722]: 2025-12-06 08:46:50.47356396 +0000 UTC m=+0.151407862 container init 204718238cfa8337a33e2b0dcecf30ce2653c2d3cf9bd799e117ff84713913dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 06 08:46:50 compute-0 podman[446722]: 2025-12-06 08:46:50.481896216 +0000 UTC m=+0.159740078 container start 204718238cfa8337a33e2b0dcecf30ce2653c2d3cf9bd799e117ff84713913dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 06 08:46:50 compute-0 podman[446722]: 2025-12-06 08:46:50.485713659 +0000 UTC m=+0.163557531 container attach 204718238cfa8337a33e2b0dcecf30ce2653c2d3cf9bd799e117ff84713913dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_goldstine, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec 06 08:46:50 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4647: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:50 compute-0 ceph-mon[74339]: from='client.47743 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:50 compute-0 ceph-mon[74339]: from='client.47749 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:50 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/584181407' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec 06 08:46:50 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:50 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:50 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:50.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:51 compute-0 epic_goldstine[446739]: --> passed data devices: 0 physical, 1 LVM
Dec 06 08:46:51 compute-0 epic_goldstine[446739]: --> relative data size: 1.0
Dec 06 08:46:51 compute-0 epic_goldstine[446739]: --> All data devices are unavailable
Dec 06 08:46:51 compute-0 nova_compute[251992]: 2025-12-06 08:46:51.267 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:51 compute-0 systemd[1]: libpod-204718238cfa8337a33e2b0dcecf30ce2653c2d3cf9bd799e117ff84713913dc.scope: Deactivated successfully.
Dec 06 08:46:51 compute-0 podman[446722]: 2025-12-06 08:46:51.273245233 +0000 UTC m=+0.951089085 container died 204718238cfa8337a33e2b0dcecf30ce2653c2d3cf9bd799e117ff84713913dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec 06 08:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-41a9dcc28b198a5554ab6fa537f146f4449cc5a4869c4c8db9c73ade819e0c08-merged.mount: Deactivated successfully.
Dec 06 08:46:51 compute-0 podman[446722]: 2025-12-06 08:46:51.326358342 +0000 UTC m=+1.004202194 container remove 204718238cfa8337a33e2b0dcecf30ce2653c2d3cf9bd799e117ff84713913dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:46:51 compute-0 systemd[1]: libpod-conmon-204718238cfa8337a33e2b0dcecf30ce2653c2d3cf9bd799e117ff84713913dc.scope: Deactivated successfully.
Dec 06 08:46:51 compute-0 sudo[446616]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:51 compute-0 sudo[446783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:51 compute-0 sudo[446783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:51 compute-0 sudo[446783]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:51 compute-0 sudo[446815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:46:51 compute-0 sudo[446815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:51 compute-0 sudo[446815]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:51 compute-0 ovs-vsctl[446847]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 06 08:46:51 compute-0 sudo[446848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:51 compute-0 sudo[446848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:51 compute-0 sudo[446848]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:51 compute-0 sudo[446879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- lvm list --format json
Dec 06 08:46:51 compute-0 sudo[446879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:51 compute-0 ceph-mon[74339]: pgmap v4647: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:51 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:51 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:51 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:51.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:51 compute-0 podman[446974]: 2025-12-06 08:46:51.872268651 +0000 UTC m=+0.021617436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:46:51 compute-0 podman[446974]: 2025-12-06 08:46:51.992471848 +0000 UTC m=+0.141820613 container create a9ea621d0dd47e6ad369d1d1ae06ed3ca28e09f99917519ae4dbfe1c3387a9bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 08:46:52 compute-0 systemd[1]: Started libpod-conmon-a9ea621d0dd47e6ad369d1d1ae06ed3ca28e09f99917519ae4dbfe1c3387a9bc.scope.
Dec 06 08:46:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:46:52 compute-0 podman[446974]: 2025-12-06 08:46:52.105568892 +0000 UTC m=+0.254917687 container init a9ea621d0dd47e6ad369d1d1ae06ed3ca28e09f99917519ae4dbfe1c3387a9bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:46:52 compute-0 podman[446974]: 2025-12-06 08:46:52.114029191 +0000 UTC m=+0.263377956 container start a9ea621d0dd47e6ad369d1d1ae06ed3ca28e09f99917519ae4dbfe1c3387a9bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Dec 06 08:46:52 compute-0 podman[446974]: 2025-12-06 08:46:52.116990022 +0000 UTC m=+0.266338787 container attach a9ea621d0dd47e6ad369d1d1ae06ed3ca28e09f99917519ae4dbfe1c3387a9bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 06 08:46:52 compute-0 vigorous_bouman[446996]: 167 167
Dec 06 08:46:52 compute-0 systemd[1]: libpod-a9ea621d0dd47e6ad369d1d1ae06ed3ca28e09f99917519ae4dbfe1c3387a9bc.scope: Deactivated successfully.
Dec 06 08:46:52 compute-0 podman[446974]: 2025-12-06 08:46:52.120525957 +0000 UTC m=+0.269874722 container died a9ea621d0dd47e6ad369d1d1ae06ed3ca28e09f99917519ae4dbfe1c3387a9bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 08:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e60e0aab012f420861fca278ad84f645c4ad851573850b461ed8ab21aba73767-merged.mount: Deactivated successfully.
Dec 06 08:46:52 compute-0 podman[446974]: 2025-12-06 08:46:52.160827658 +0000 UTC m=+0.310176423 container remove a9ea621d0dd47e6ad369d1d1ae06ed3ca28e09f99917519ae4dbfe1c3387a9bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:46:52 compute-0 systemd[1]: libpod-conmon-a9ea621d0dd47e6ad369d1d1ae06ed3ca28e09f99917519ae4dbfe1c3387a9bc.scope: Deactivated successfully.
Dec 06 08:46:52 compute-0 podman[447093]: 2025-12-06 08:46:52.32627044 +0000 UTC m=+0.046005057 container create 7a5140e15528afa1200c02469668fe65dab626c0b4dfdd70706357e6c1639b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 06 08:46:52 compute-0 virtqemud[251613]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 06 08:46:52 compute-0 systemd[1]: Started libpod-conmon-7a5140e15528afa1200c02469668fe65dab626c0b4dfdd70706357e6c1639b43.scope.
Dec 06 08:46:52 compute-0 virtqemud[251613]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 06 08:46:52 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:46:52 compute-0 podman[447093]: 2025-12-06 08:46:52.303186355 +0000 UTC m=+0.022921002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfcc1d8f2cb1cc926adcd96f0cf8917a84a45db2bd4e6e30d1bb44e5dabbb5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfcc1d8f2cb1cc926adcd96f0cf8917a84a45db2bd4e6e30d1bb44e5dabbb5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfcc1d8f2cb1cc926adcd96f0cf8917a84a45db2bd4e6e30d1bb44e5dabbb5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cfcc1d8f2cb1cc926adcd96f0cf8917a84a45db2bd4e6e30d1bb44e5dabbb5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:52 compute-0 podman[447093]: 2025-12-06 08:46:52.413152464 +0000 UTC m=+0.132887101 container init 7a5140e15528afa1200c02469668fe65dab626c0b4dfdd70706357e6c1639b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cerf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 08:46:52 compute-0 podman[447093]: 2025-12-06 08:46:52.422079237 +0000 UTC m=+0.141813854 container start 7a5140e15528afa1200c02469668fe65dab626c0b4dfdd70706357e6c1639b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:46:52 compute-0 podman[447093]: 2025-12-06 08:46:52.425374356 +0000 UTC m=+0.145108983 container attach 7a5140e15528afa1200c02469668fe65dab626c0b4dfdd70706357e6c1639b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec 06 08:46:52 compute-0 virtqemud[251613]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 06 08:46:52 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:52 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4648: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:52 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46787 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:52 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:52 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:52 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:52.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:52 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: cache status {prefix=cache status} (starting...)
Dec 06 08:46:53 compute-0 lvm[447316]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 08:46:53 compute-0 lvm[447316]: VG ceph_vg0 finished
Dec 06 08:46:53 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38574 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:53 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: client ls {prefix=client ls} (starting...)
Dec 06 08:46:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Dec 06 08:46:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:53 compute-0 admiring_cerf[447125]: {
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:     "0": [
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:         {
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             "devices": [
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "/dev/loop3"
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             ],
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             "lv_name": "ceph_lv0",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             "lv_size": "7511998464",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=40a1bae4-cf76-5610-8dab-c75116dfe0bb,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6b7b52dc-0b4c-403a-a623-fd06da2b6a8e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             "lv_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             "name": "ceph_lv0",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             "tags": {
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "ceph.block_uuid": "Kg9NPl-vcgO-OIBj-vnWn-LIf4-NohH-0Ogysk",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "ceph.cluster_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "ceph.cluster_name": "ceph",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "ceph.crush_device_class": "",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "ceph.encrypted": "0",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "ceph.osd_fsid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "ceph.osd_id": "0",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "ceph.type": "block",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:                 "ceph.vdo": "0"
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             },
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             "type": "block",
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:             "vg_name": "ceph_vg0"
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:         }
Dec 06 08:46:53 compute-0 admiring_cerf[447125]:     ]
Dec 06 08:46:53 compute-0 admiring_cerf[447125]: }
Dec 06 08:46:53 compute-0 systemd[1]: libpod-7a5140e15528afa1200c02469668fe65dab626c0b4dfdd70706357e6c1639b43.scope: Deactivated successfully.
Dec 06 08:46:53 compute-0 podman[447093]: 2025-12-06 08:46:53.258604458 +0000 UTC m=+0.978339095 container died 7a5140e15528afa1200c02469668fe65dab626c0b4dfdd70706357e6c1639b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cerf, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec 06 08:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cfcc1d8f2cb1cc926adcd96f0cf8917a84a45db2bd4e6e30d1bb44e5dabbb5c-merged.mount: Deactivated successfully.
Dec 06 08:46:53 compute-0 podman[447093]: 2025-12-06 08:46:53.33359895 +0000 UTC m=+1.053333567 container remove 7a5140e15528afa1200c02469668fe65dab626c0b4dfdd70706357e6c1639b43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Dec 06 08:46:53 compute-0 systemd[1]: libpod-conmon-7a5140e15528afa1200c02469668fe65dab626c0b4dfdd70706357e6c1639b43.scope: Deactivated successfully.
Dec 06 08:46:53 compute-0 sudo[446879]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:53 compute-0 sudo[447399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:53 compute-0 sudo[447399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:53 compute-0 sudo[447399]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:53 compute-0 sudo[447434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:46:53 compute-0 sudo[447434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:53 compute-0 sudo[447434]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:53 compute-0 sudo[447462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:53 compute-0 sudo[447462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:53 compute-0 sudo[447462]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:53 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38580 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:53 compute-0 sudo[447506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/40a1bae4-cf76-5610-8dab-c75116dfe0bb/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 40a1bae4-cf76-5610-8dab-c75116dfe0bb -- raw list --format json
Dec 06 08:46:53 compute-0 sudo[447506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:53 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: damage ls {prefix=damage ls} (starting...)
Dec 06 08:46:53 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:53 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:53 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:53.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:53 compute-0 ceph-mon[74339]: pgmap v4648: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:53 compute-0 ceph-mon[74339]: from='client.46787 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2292196295' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:53 compute-0 ceph-mon[74339]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:53 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2772597569' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:53 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46823 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:53 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:46:53 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:46:53.869+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:46:53 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Dec 06 08:46:53 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/603513567' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:53 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: dump loads {prefix=dump loads} (starting...)
Dec 06 08:46:54 compute-0 podman[447602]: 2025-12-06 08:46:54.007027743 +0000 UTC m=+0.038260597 container create 9cf9bc0c9b14b96e7b1ac1b92edfc3e5a6a39b58884ff96187097cca591178c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_torvalds, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec 06 08:46:54 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38598 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:54 compute-0 systemd[1]: Started libpod-conmon-9cf9bc0c9b14b96e7b1ac1b92edfc3e5a6a39b58884ff96187097cca591178c6.scope.
Dec 06 08:46:54 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 06 08:46:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:46:54 compute-0 podman[447602]: 2025-12-06 08:46:53.989485198 +0000 UTC m=+0.020718082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:46:54 compute-0 podman[447602]: 2025-12-06 08:46:54.096722292 +0000 UTC m=+0.127955166 container init 9cf9bc0c9b14b96e7b1ac1b92edfc3e5a6a39b58884ff96187097cca591178c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_torvalds, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 06 08:46:54 compute-0 podman[447602]: 2025-12-06 08:46:54.103708292 +0000 UTC m=+0.134941146 container start 9cf9bc0c9b14b96e7b1ac1b92edfc3e5a6a39b58884ff96187097cca591178c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_torvalds, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 06 08:46:54 compute-0 podman[447602]: 2025-12-06 08:46:54.107222617 +0000 UTC m=+0.138455481 container attach 9cf9bc0c9b14b96e7b1ac1b92edfc3e5a6a39b58884ff96187097cca591178c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_torvalds, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec 06 08:46:54 compute-0 epic_torvalds[447629]: 167 167
Dec 06 08:46:54 compute-0 systemd[1]: libpod-9cf9bc0c9b14b96e7b1ac1b92edfc3e5a6a39b58884ff96187097cca591178c6.scope: Deactivated successfully.
Dec 06 08:46:54 compute-0 podman[447602]: 2025-12-06 08:46:54.111262797 +0000 UTC m=+0.142495671 container died 9cf9bc0c9b14b96e7b1ac1b92edfc3e5a6a39b58884ff96187097cca591178c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 06 08:46:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a2eff05d2f287f51d554be9d490f4c0c410ebb01c64340b5ee8824be60ba034-merged.mount: Deactivated successfully.
Dec 06 08:46:54 compute-0 podman[447602]: 2025-12-06 08:46:54.152787911 +0000 UTC m=+0.184020765 container remove 9cf9bc0c9b14b96e7b1ac1b92edfc3e5a6a39b58884ff96187097cca591178c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec 06 08:46:54 compute-0 systemd[1]: libpod-conmon-9cf9bc0c9b14b96e7b1ac1b92edfc3e5a6a39b58884ff96187097cca591178c6.scope: Deactivated successfully.
Dec 06 08:46:54 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 06 08:46:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec 06 08:46:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520104312' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:54 compute-0 podman[447690]: 2025-12-06 08:46:54.313792223 +0000 UTC m=+0.041281519 container create 834b7d615705ddbbf0386dcfbcb05e5e94b4c4497fa82c385f069d3f6cc1b544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 08:46:54 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 06 08:46:54 compute-0 systemd[1]: Started libpod-conmon-834b7d615705ddbbf0386dcfbcb05e5e94b4c4497fa82c385f069d3f6cc1b544.scope.
Dec 06 08:46:54 compute-0 systemd[1]: Started libcrun container.
Dec 06 08:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66719e7a553cbce00d198a4c74d1853699b3fc0eece1d2f824e2dab3273a51fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66719e7a553cbce00d198a4c74d1853699b3fc0eece1d2f824e2dab3273a51fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66719e7a553cbce00d198a4c74d1853699b3fc0eece1d2f824e2dab3273a51fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66719e7a553cbce00d198a4c74d1853699b3fc0eece1d2f824e2dab3273a51fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:46:54 compute-0 podman[447690]: 2025-12-06 08:46:54.296349421 +0000 UTC m=+0.023838737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec 06 08:46:54 compute-0 podman[447690]: 2025-12-06 08:46:54.404553442 +0000 UTC m=+0.132042758 container init 834b7d615705ddbbf0386dcfbcb05e5e94b4c4497fa82c385f069d3f6cc1b544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hodgkin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 06 08:46:54 compute-0 podman[447690]: 2025-12-06 08:46:54.412570249 +0000 UTC m=+0.140059545 container start 834b7d615705ddbbf0386dcfbcb05e5e94b4c4497fa82c385f069d3f6cc1b544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec 06 08:46:54 compute-0 podman[447690]: 2025-12-06 08:46:54.423228278 +0000 UTC m=+0.150717604 container attach 834b7d615705ddbbf0386dcfbcb05e5e94b4c4497fa82c385f069d3f6cc1b544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hodgkin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:46:54 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 06 08:46:54 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4649: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:54 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec 06 08:46:54 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/251249151' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 06 08:46:54 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38631 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:46:54 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:46:54.830+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:46:54 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46856 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 06 08:46:54 compute-0 ceph-mon[74339]: from='client.38574 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mon[74339]: from='client.38580 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mon[74339]: from='client.46823 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/603513567' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2599282265' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mon[74339]: from='client.38598 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/520104312' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/290536756' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1880989875' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/251249151' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:46:54 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1962631154' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:54 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:54 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:54 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:54.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:55 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: ops {prefix=ops} (starting...)
Dec 06 08:46:55 compute-0 nova_compute[251992]: 2025-12-06 08:46:55.152 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:55 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47800 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46868 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:55 compute-0 thirsty_hodgkin[447710]: {
Dec 06 08:46:55 compute-0 thirsty_hodgkin[447710]:     "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e": {
Dec 06 08:46:55 compute-0 thirsty_hodgkin[447710]:         "ceph_fsid": "40a1bae4-cf76-5610-8dab-c75116dfe0bb",
Dec 06 08:46:55 compute-0 thirsty_hodgkin[447710]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:46:55 compute-0 thirsty_hodgkin[447710]:         "osd_id": 0,
Dec 06 08:46:55 compute-0 thirsty_hodgkin[447710]:         "osd_uuid": "6b7b52dc-0b4c-403a-a623-fd06da2b6a8e",
Dec 06 08:46:55 compute-0 thirsty_hodgkin[447710]:         "type": "bluestore"
Dec 06 08:46:55 compute-0 thirsty_hodgkin[447710]:     }
Dec 06 08:46:55 compute-0 thirsty_hodgkin[447710]: }
Dec 06 08:46:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec 06 08:46:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/795653726' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec 06 08:46:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2238563466' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:46:55 compute-0 systemd[1]: libpod-834b7d615705ddbbf0386dcfbcb05e5e94b4c4497fa82c385f069d3f6cc1b544.scope: Deactivated successfully.
Dec 06 08:46:55 compute-0 podman[447690]: 2025-12-06 08:46:55.344512416 +0000 UTC m=+1.072001712 container died 834b7d615705ddbbf0386dcfbcb05e5e94b4c4497fa82c385f069d3f6cc1b544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec 06 08:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-66719e7a553cbce00d198a4c74d1853699b3fc0eece1d2f824e2dab3273a51fe-merged.mount: Deactivated successfully.
Dec 06 08:46:55 compute-0 podman[447690]: 2025-12-06 08:46:55.399549236 +0000 UTC m=+1.127038532 container remove 834b7d615705ddbbf0386dcfbcb05e5e94b4c4497fa82c385f069d3f6cc1b544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 06 08:46:55 compute-0 systemd[1]: libpod-conmon-834b7d615705ddbbf0386dcfbcb05e5e94b4c4497fa82c385f069d3f6cc1b544.scope: Deactivated successfully.
Dec 06 08:46:55 compute-0 sudo[447506]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec 06 08:46:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:46:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec 06 08:46:55 compute-0 ceph-mon[74339]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:46:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Dec 06 08:46:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev e2d154e6-fef4-4db9-b263-3430c1394820 does not exist
Dec 06 08:46:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev da80d6df-9d4c-422b-827b-7f296468f354 does not exist
Dec 06 08:46:55 compute-0 ceph-mgr[74630]: [progress WARNING root] complete: ev 6a935d2b-fd3e-4f27-aa70-d706dff641f9 does not exist
Dec 06 08:46:55 compute-0 sudo[447861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:55 compute-0 sudo[447861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:55 compute-0 sudo[447861]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:55 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47815 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:55 compute-0 sudo[447922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:46:55 compute-0 sudo[447922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:55 compute-0 sudo[447922]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Dec 06 08:46:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38664 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 06 08:46:55 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3973645874' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:55 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:55 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:55 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:55.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:55 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: session ls {prefix=session ls} (starting...)
Dec 06 08:46:55 compute-0 ceph-mon[74339]: pgmap v4649: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='client.38631 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='client.46856 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2165939020' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/795653726' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2238563466' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='mgr.14132 192.168.122.100:0/3880098189' entity='mgr.compute-0.sfzyix' 
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2466793297' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3694447404' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1406254907' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3973645874' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1438137494' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 06 08:46:55 compute-0 ceph-mds[92997]: mds.cephfs.compute-0.qqwnku asok_command: status {prefix=status} (starting...)
Dec 06 08:46:56 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38670 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-0 sudo[448014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:56 compute-0 sudo[448014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 06 08:46:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3553370529' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:46:56 compute-0 sudo[448014]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:56 compute-0 nova_compute[251992]: 2025-12-06 08:46:56.271 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:46:56 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47845 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:46:56 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:46:56.297+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:46:56 compute-0 sudo[448064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 06 08:46:56 compute-0 sudo[448064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 06 08:46:56 compute-0 sudo[448064]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Dec 06 08:46:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46913 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 08:46:56 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:46:56.501+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 08:46:56 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4650: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 06 08:46:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/613699475' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec 06 08:46:56 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/608501659' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:46:56 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:56 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:56 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:56.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.47800 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.46868 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.47815 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.38664 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1433611718' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.38670 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/114419710' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1607452064' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3553370529' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1382535926' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3755701381' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/161953549' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/613699475' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3388070215' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec 06 08:46:56 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/451094596' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47878 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 08:46:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142785465' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46937 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38709 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 08:46:57 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:46:57.345+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 08:46:57 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47896 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:46:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec 06 08:46:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/887353704' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #216. Immutable memtables: 0.
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.589667) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 216
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817589725, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 1916, "num_deletes": 251, "total_data_size": 3517481, "memory_usage": 3580320, "flush_reason": "Manual Compaction"}
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #217: started
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817607671, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 217, "file_size": 3445968, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94314, "largest_seqno": 96229, "table_properties": {"data_size": 3437078, "index_size": 5511, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17756, "raw_average_key_size": 19, "raw_value_size": 3419286, "raw_average_value_size": 3761, "num_data_blocks": 241, "num_entries": 909, "num_filter_entries": 909, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765010624, "oldest_key_time": 1765010624, "file_creation_time": 1765010817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 217, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 18261 microseconds, and 7811 cpu microseconds.
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.607924) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #217: 3445968 bytes OK
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.608021) [db/memtable_list.cc:519] [default] Level-0 commit table #217 started
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.609583) [db/memtable_list.cc:722] [default] Level-0 commit table #217: memtable #1 done
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.609598) EVENT_LOG_v1 {"time_micros": 1765010817609593, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.609615) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 3509506, prev total WAL file size 3509506, number of live WAL files 2.
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000213.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.611040) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600353030' seq:72057594037927935, type:22 .. '6B7600373532' seq:0, type:0; will stop at (end)
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [217(3365KB)], [215(12MB)]
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817611128, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [217], "files_L6": [215], "score": -1, "input_data_size": 16390504, "oldest_snapshot_seqno": -1}
Dec 06 08:46:57 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46955 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #218: 13149 keys, 15263100 bytes, temperature: kUnknown
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817700821, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 218, "file_size": 15263100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15181847, "index_size": 47284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32901, "raw_key_size": 349427, "raw_average_key_size": 26, "raw_value_size": 14955276, "raw_average_value_size": 1137, "num_data_blocks": 1775, "num_entries": 13149, "num_filter_entries": 13149, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765002318, "oldest_key_time": 0, "file_creation_time": 1765010817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3233f64f-2a9e-4588-b218-4397d62e9b9f", "db_session_id": "TCRYIVRGAQK56L3E52U0", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.701052) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 15263100 bytes
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.701988) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.6 rd, 170.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 12.3 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(9.2) write-amplify(4.4) OK, records in: 13668, records dropped: 519 output_compression: NoCompression
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.702003) EVENT_LOG_v1 {"time_micros": 1765010817701996, "job": 136, "event": "compaction_finished", "compaction_time_micros": 89775, "compaction_time_cpu_micros": 42324, "output_level": 6, "num_output_files": 1, "total_output_size": 15263100, "num_input_records": 13668, "num_output_records": 13149, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000217.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817702560, "job": 136, "event": "table_file_deletion", "file_number": 217}
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765010817704542, "job": 136, "event": "table_file_deletion", "file_number": 215}
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.610968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.704569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.704573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.704574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.704575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:46:57 compute-0 ceph-mon[74339]: rocksdb: (Original Log Time 2025/12/06-08:46:57.704577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 08:46:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec 06 08:46:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/831227999' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Dec 06 08:46:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:57 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:57 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:57 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:57.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.47845 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.46913 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: pgmap v4650: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/608501659' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1188300132' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.47878 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1142785465' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1827320869' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2713819647' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/887353704' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/360114230' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/831227999' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2119058871' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46970 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:57 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 06 08:46:57 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1431281258' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec 06 08:46:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3276557058' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.46985 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47938 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47950 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 08:46:58 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:46:58.600+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 08:46:58 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4651: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:46:58 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec 06 08:46:58 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3733909566' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38775 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47000 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:58 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:58 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:46:58.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.46937 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.38709 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.47896 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.46955 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.46970 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2387806077' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1431281258' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3589491132' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3276557058' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3233308557' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3987429521' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2200948100' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3733909566' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1567968785' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec 06 08:46:58 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/364251483' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:46:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38790 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec 06 08:46:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3645184560' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:46:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47012 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:29.958444+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 73449472 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:30.958613+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 73449472 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:31.958749+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a450d000/0x0/0x1bfc00000, data 0x2043413/0x2261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 73449472 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:32.958925+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378740736 unmapped: 73449472 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4140659 data_alloc: 218103808 data_used: 7413760
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:33.959218+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378757120 unmapped: 73433088 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:34.959414+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a450d000/0x0/0x1bfc00000, data 0x2043413/0x2261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378757120 unmapped: 73433088 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:35.959642+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a450d000/0x0/0x1bfc00000, data 0x2043413/0x2261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378757120 unmapped: 73433088 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:36.959807+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d1e8c400 session 0x5636d28e8d20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378757120 unmapped: 73433088 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:37.959929+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a450d000/0x0/0x1bfc00000, data 0x2043413/0x2261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378757120 unmapped: 73433088 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4140659 data_alloc: 218103808 data_used: 7413760
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:38.960079+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.253051758s of 14.435792923s, submitted: 43
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d222be00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:39.960635+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:40.960769+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:41.960941+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:42.961180+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061675 data_alloc: 218103808 data_used: 3297280
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:43.961335+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:44.961528+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374251520 unmapped: 77938688 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:45.961685+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:46.961897+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:47.962075+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061675 data_alloc: 218103808 data_used: 3297280
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:48.962256+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:49.962691+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:50.962869+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:51.963036+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:52.963164+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061675 data_alloc: 218103808 data_used: 3297280
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:53.963345+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:54.963501+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:55.963677+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:56.963850+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374259712 unmapped: 77930496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:57.963994+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374267904 unmapped: 77922304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061675 data_alloc: 218103808 data_used: 3297280
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:58.964170+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374267904 unmapped: 77922304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:14:59.964379+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374267904 unmapped: 77922304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:00.964512+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374276096 unmapped: 77914112 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:01.964698+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374276096 unmapped: 77914112 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:02.964888+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374276096 unmapped: 77914112 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061675 data_alloc: 218103808 data_used: 3297280
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:03.965069+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374276096 unmapped: 77914112 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:04.965218+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374276096 unmapped: 77914112 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:05.965371+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 77905920 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:06.965563+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 77905920 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:07.965707+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 77905920 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061675 data_alloc: 218103808 data_used: 3297280
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:08.965910+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 77905920 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:09.966093+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 77905920 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b8f000/0x0/0x1bfc00000, data 0x19c1413/0x1bdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:10.966239+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4334000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.619552612s of 32.639488220s, submitted: 7
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 77905920 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:11.966432+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b4d000/0x0/0x1bfc00000, data 0x1a01486/0x1c21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374292480 unmapped: 77897728 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:12.966614+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374292480 unmapped: 77897728 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d4334000 session 0x5636d1ee2b40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4070091 data_alloc: 218103808 data_used: 3301376
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:13.966809+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374300672 unmapped: 77889536 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:14.966985+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4b4d000/0x0/0x1bfc00000, data 0x1a01486/0x1c21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374300672 unmapped: 77889536 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:15.967145+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374308864 unmapped: 77881344 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:16.967273+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d47b6000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d47b6000 session 0x5636d2029a40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4f79680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d2029680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374308864 unmapped: 77881344 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d50334a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4334000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:17.967496+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378265600 unmapped: 73924608 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4110023 data_alloc: 218103808 data_used: 3301376
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:18.967652+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d4334000 session 0x5636d2d4b860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4765000/0x0/0x1bfc00000, data 0x1de9486/0x2009000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374325248 unmapped: 77864960 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:19.967838+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374325248 unmapped: 77864960 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:20.967993+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374325248 unmapped: 77864960 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:21.968132+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d47b6000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d47b6000 session 0x5636d4bb9860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4e3c780
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374333440 unmapped: 77856768 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:22.968307+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374333440 unmapped: 77856768 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4100263 data_alloc: 218103808 data_used: 3301376
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:23.968470+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d1ee2b40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.373261452s of 12.638956070s, submitted: 15
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d28e8d20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374636544 unmapped: 77553664 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4334000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:24.968599+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4741000/0x0/0x1bfc00000, data 0x1e0d486/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374636544 unmapped: 77553664 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:25.968717+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d417a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d417a800 session 0x5636d2a42960
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4116c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:26.969368+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:27.969526+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:28.969710+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4138629 data_alloc: 218103808 data_used: 7319552
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:29.969919+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a471c000/0x0/0x1bfc00000, data 0x1e314a9/0x2052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:30.970056+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a471c000/0x0/0x1bfc00000, data 0x1e314a9/0x2052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:31.970236+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:32.970374+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d4116c00 session 0x5636d2ed0b40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3b02800 session 0x5636d4f96000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:33.970534+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4139369 data_alloc: 218103808 data_used: 7581696
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.952085495s of 10.106675148s, submitted: 15
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:34.970646+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d1e8c400 session 0x5636d5327e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:35.970776+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4740000/0x0/0x1bfc00000, data 0x1e0d486/0x202d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 374939648 unmapped: 77250560 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:36.970866+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378707968 unmapped: 73482240 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:37.970942+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378707968 unmapped: 73482240 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:38.971058+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4181971 data_alloc: 218103808 data_used: 7675904
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d2ec05a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d45161e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 74121216 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:39.971260+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a4235000/0x0/0x1bfc00000, data 0x22da413/0x24f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:40.971400+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:41.971515+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:42.971673+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:43.971810+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4180123 data_alloc: 218103808 data_used: 7405568
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d417a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d417a800 session 0x5636d4e3d860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d1e8c400 session 0x5636d44874a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:44.971913+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.364075661s of 11.115024567s, submitted: 81
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a422d000/0x0/0x1bfc00000, data 0x22e2413/0x2500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:45.972077+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4bb90e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:46.972240+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a426b000/0x0/0x1bfc00000, data 0x22e5413/0x2503000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a426a000/0x0/0x1bfc00000, data 0x22e5423/0x2504000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 74113024 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:47.972367+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3a22000 session 0x5636d4bb8b40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378085376 unmapped: 74104832 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:48.972635+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4180952 data_alloc: 218103808 data_used: 7409664
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d3b02800 session 0x5636d2a425a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d417a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d417a800 session 0x5636d4e5e780
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4f972c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378331136 unmapped: 73859072 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:49.972799+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378331136 unmapped: 73859072 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:50.972995+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a37de000/0x0/0x1bfc00000, data 0x2d70485/0x2f90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378331136 unmapped: 73859072 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:51.973213+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d50f1c20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 73834496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:52.973345+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a37d9000/0x0/0x1bfc00000, data 0x2d721b3/0x2f94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 73834496 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:53.973468+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4275684 data_alloc: 218103808 data_used: 7417856
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:54.973579+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:55.973722+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a37d9000/0x0/0x1bfc00000, data 0x2d721b3/0x2f94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:56.973873+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:57.974018+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:58.974206+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4275684 data_alloc: 218103808 data_used: 7417856
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:15:59.974434+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a37d9000/0x0/0x1bfc00000, data 0x2d721b3/0x2f94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378363904 unmapped: 73826304 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:00.974588+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a37d9000/0x0/0x1bfc00000, data 0x2d721b3/0x2f94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.710718155s of 16.121202469s, submitted: 79
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3a22000 session 0x5636d284a5a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d6c92c00 session 0x5636d523cb40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3b02800 session 0x5636d2eedc20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:01.974711+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3b02800 session 0x5636d2ed0d20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a7000/0x0/0x1bfc00000, data 0x2ea4215/0x30c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:02.974864+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:03.974988+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4290403 data_alloc: 218103808 data_used: 7426048
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:04.975182+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:05.975348+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a7000/0x0/0x1bfc00000, data 0x2ea4215/0x30c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:06.975493+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:07.975615+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378445824 unmapped: 73744384 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:08.975771+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4290563 data_alloc: 218103808 data_used: 7430144
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a7000/0x0/0x1bfc00000, data 0x2ea4215/0x30c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [0,1])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:09.975963+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378478592 unmapped: 73711616 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:10.976050+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 72081408 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a5000/0x0/0x1bfc00000, data 0x2ea5215/0x30c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:11.976204+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a5000/0x0/0x1bfc00000, data 0x2ea5215/0x30c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:12.976351+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.107357979s of 12.207020760s, submitted: 35
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4bb8f00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:13.976509+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4369271 data_alloc: 234881024 data_used: 18341888
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:14.976665+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:15.976844+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:16.977006+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3a22000 session 0x5636d4f79a40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:17.977169+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380338176 unmapped: 71852032 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a5000/0x0/0x1bfc00000, data 0x2ea5215/0x30c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:18.977316+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380346368 unmapped: 71843840 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4369271 data_alloc: 234881024 data_used: 18341888
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a36a5000/0x0/0x1bfc00000, data 0x2ea5215/0x30c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d6c92c00 session 0x5636d5033860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d5a6b400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:19.977981+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380346368 unmapped: 71843840 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:20.978084+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380354560 unmapped: 71835648 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d5a6b400 session 0x5636d46f41e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4f781e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:21.978239+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381493248 unmapped: 70696960 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:22.978393+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381493248 unmapped: 70696960 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 418 ms_handle_reset con 0x5636d3a22000 session 0x5636d2ed14a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.844287872s of 10.037823677s, submitted: 70
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:23.978512+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a3265000/0x0/0x1bfc00000, data 0x32e71b3/0x3509000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 380829696 unmapped: 71360512 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4414198 data_alloc: 234881024 data_used: 18370560
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 418 ms_handle_reset con 0x5636d3b02800 session 0x5636d20a4b40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 418 ms_handle_reset con 0x5636d6c92c00 session 0x5636d5033c20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a39da000/0x0/0x1bfc00000, data 0x286fec3/0x2a92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:24.978638+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381927424 unmapped: 70262784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:25.978747+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381927424 unmapped: 70262784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:26.978891+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381927424 unmapped: 70262784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 418 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4e3d4a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:27.978993+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 418 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4487e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381927424 unmapped: 70262784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:28.979259+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381927424 unmapped: 70262784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4255630 data_alloc: 218103808 data_used: 7487488
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a39da000/0x0/0x1bfc00000, data 0x286fe61/0x2a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:29.979643+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381927424 unmapped: 70262784 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 418 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d1fd81e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a39da000/0x0/0x1bfc00000, data 0x286fe61/0x2a91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 418 handle_osd_map epochs [419,419], i have 419, src has [1,419]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:30.979758+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381943808 unmapped: 70246400 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:31.979934+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3cd9000/0x0/0x1bfc00000, data 0x2871a13/0x2a94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381943808 unmapped: 70246400 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d3a22000 session 0x5636d2199c20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d3b02800 session 0x5636d52961e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:32.980072+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381960192 unmapped: 70230016 heap: 452190208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d6c92c00 session 0x5636d4e62b40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d6c92c00 session 0x5636d4e5f4a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:33.980178+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382001152 unmapped: 74383360 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4349962 data_alloc: 218103808 data_used: 7499776
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.098374367s of 10.435062408s, submitted: 131
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4487860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:34.980337+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382009344 unmapped: 74375168 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d2e4a800 session 0x5636d44874a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:35.980495+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382009344 unmapped: 74375168 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3682000/0x0/0x1bfc00000, data 0x2ec8a13/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:36.980672+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382009344 unmapped: 74375168 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:37.980828+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d3b69000 session 0x5636d4517e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1bc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 74366976 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:38.981442+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3682000/0x0/0x1bfc00000, data 0x2ec8a13/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 74366976 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303202 data_alloc: 218103808 data_used: 7434240
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:39.981690+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 74366976 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:40.981849+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 74366976 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3682000/0x0/0x1bfc00000, data 0x2ec8a13/0x30eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d52a1400 session 0x5636d2029e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d4334000 session 0x5636d4e621e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:41.982145+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d1e8c400 session 0x5636d28e8d20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:42.982595+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:43.982749+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4197298 data_alloc: 218103808 data_used: 3325952
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:44.983281+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:45.983481+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:46.983872+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3fa9000/0x0/0x1bfc00000, data 0x25a2a13/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:47.984031+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:48.984258+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377110528 unmapped: 79273984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4197458 data_alloc: 218103808 data_used: 3330048
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:49.984510+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 377274368 unmapped: 79110144 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:50.984979+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:51.985394+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3fa9000/0x0/0x1bfc00000, data 0x25a2a13/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:52.985546+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:53.985674+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3fa9000/0x0/0x1bfc00000, data 0x25a2a13/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4276338 data_alloc: 234881024 data_used: 14434304
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:54.986033+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:55.986272+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:56.986744+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:57.987342+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378068992 unmapped: 78315520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3fa9000/0x0/0x1bfc00000, data 0x25a2a13/0x27c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:58.987844+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 78307328 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4276338 data_alloc: 234881024 data_used: 14434304
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:16:59.988265+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 78307328 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:00.988607+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.658170700s of 26.777448654s, submitted: 44
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 378077184 unmapped: 78307328 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:01.988827+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381362176 unmapped: 75022336 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a3be4000/0x0/0x1bfc00000, data 0x2967a13/0x2b8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:02.989006+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 381632512 unmapped: 74752000 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:03.989197+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382074880 unmapped: 74309632 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344532 data_alloc: 234881024 data_used: 16035840
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:04.989666+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382074880 unmapped: 74309632 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:05.989977+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382083072 unmapped: 74301440 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a38ae000/0x0/0x1bfc00000, data 0x2c9ca13/0x2ebf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:06.990295+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382083072 unmapped: 74301440 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:07.990529+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a38ae000/0x0/0x1bfc00000, data 0x2c9ca13/0x2ebf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382083072 unmapped: 74301440 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:08.990672+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382083072 unmapped: 74301440 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4344692 data_alloc: 234881024 data_used: 16039936
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a38ae000/0x0/0x1bfc00000, data 0x2c9ca13/0x2ebf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:09.990912+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382099456 unmapped: 74285056 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:10.991217+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382099456 unmapped: 74285056 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d52a1400 session 0x5636d2daed20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d6c92c00 session 0x5636d2eec1e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b69000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d3b69000 session 0x5636d4516780
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d3a22000 session 0x5636d2029860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.866822243s of 10.619400978s, submitted: 93
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 ms_handle_reset con 0x5636d1e8c400 session 0x5636d502c780
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:11.991335+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382115840 unmapped: 74268672 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a373c000/0x0/0x1bfc00000, data 0x2e0fa13/0x3032000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:12.991529+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382115840 unmapped: 74268672 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:13.991682+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382124032 unmapped: 74260480 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361624 data_alloc: 234881024 data_used: 16044032
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:14.991908+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382124032 unmapped: 74260480 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:15.992076+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 382124032 unmapped: 74260480 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:16.992290+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383188992 unmapped: 73195520 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a3738000/0x0/0x1bfc00000, data 0x2e116df/0x3035000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 420 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d3a22000 session 0x5636d502d860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:17.992502+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383229952 unmapped: 73154560 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b69000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d3b69000 session 0x5636d538e000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:18.992755+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d52a1400 session 0x5636d4f79e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383229952 unmapped: 73154560 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4382991 data_alloc: 234881024 data_used: 16052224
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c92c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d6c92c00 session 0x5636d2d4b860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:19.992916+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d1e8c400 session 0x5636d20a5680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383254528 unmapped: 73129984 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:20.993071+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b69000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383385600 unmapped: 72998912 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:21.993203+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a3730000/0x0/0x1bfc00000, data 0x2fcc3ab/0x303d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383385600 unmapped: 72998912 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:22.993381+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383385600 unmapped: 72998912 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:23.993535+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383385600 unmapped: 72998912 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4393900 data_alloc: 234881024 data_used: 17227776
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:24.993743+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383385600 unmapped: 72998912 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:25.993863+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383393792 unmapped: 72990720 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:26.994018+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.449282646s of 15.555412292s, submitted: 30
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383393792 unmapped: 72990720 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d52a1400 session 0x5636d510e000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:27.994169+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a3731000/0x0/0x1bfc00000, data 0x2fcc3ab/0x303d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383393792 unmapped: 72990720 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:28.994349+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383393792 unmapped: 72990720 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4394494 data_alloc: 234881024 data_used: 17227776
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:29.994514+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 72982528 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:30.994639+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 72982528 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:31.994838+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a3731000/0x0/0x1bfc00000, data 0x2fcc3ab/0x303d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 383401984 unmapped: 72982528 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:32.994975+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 386228224 unmapped: 70156288 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:33.995147+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 386899968 unmapped: 69484544 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4511842 data_alloc: 234881024 data_used: 17502208
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:34.995375+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 386908160 unmapped: 69476352 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2bc7000/0x0/0x1bfc00000, data 0x3d533ab/0x3b99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:35.995509+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:36.995720+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:37.995860+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:38.995994+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4512802 data_alloc: 234881024 data_used: 17567744
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.949254036s of 12.184158325s, submitted: 111
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:39.996176+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:40.996397+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2bb4000/0x0/0x1bfc00000, data 0x3d743ab/0x3bba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:41.996547+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:42.997366+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 59K writes, 217K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s
                                           Cumulative WAL: 59K writes, 22K syncs, 2.61 writes per sync, written: 0.21 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3547 writes, 12K keys, 3547 commit groups, 1.0 writes per commit group, ingest: 11.63 MB, 0.02 MB/s
                                           Interval WAL: 3547 writes, 1513 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:43.997603+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504022 data_alloc: 234881024 data_used: 17571840
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:44.997857+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2bb4000/0x0/0x1bfc00000, data 0x3d743ab/0x3bba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:45.998086+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 69369856 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2bae000/0x0/0x1bfc00000, data 0x3d7a3ab/0x3bc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:46.998325+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 391815168 unmapped: 64569344 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a285c000/0x0/0x1bfc00000, data 0x40cc3ab/0x3f12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:47.998578+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 391872512 unmapped: 64512000 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:48.998757+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388677632 unmapped: 67706880 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541881 data_alloc: 234881024 data_used: 19451904
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:49.998950+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 388677632 unmapped: 67706880 heap: 456384512 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d28e6000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.131248474s of 11.183950424s, submitted: 11
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d28e6000 session 0x5636d3a3c1e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:50.999201+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:51.999417+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2074000/0x0/0x1bfc00000, data 0x48b43ab/0x46fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:52.999576+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:53.999730+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4605885 data_alloc: 234881024 data_used: 19456000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:54.999876+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2071000/0x0/0x1bfc00000, data 0x48b73ab/0x46fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:56.000230+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:57.000464+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2071000/0x0/0x1bfc00000, data 0x48b73ab/0x46fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:58.000789+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:17:59.001034+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4605885 data_alloc: 234881024 data_used: 19456000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:00.001230+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d4f3d000 session 0x5636d5300f00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:01.001460+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1b400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d3a1b400 session 0x5636d4e5e5a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2071000/0x0/0x1bfc00000, data 0x48b73ab/0x46fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:02.001696+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d1e8c400 session 0x5636d502dc20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d28e6000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.721348763s of 11.780836105s, submitted: 13
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d28e6000 session 0x5636d5327a40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:03.001934+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389890048 unmapped: 70696960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:04.002053+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 391192576 unmapped: 69394432 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4667533 data_alloc: 234881024 data_used: 27430912
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:05.002216+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 391921664 unmapped: 68665344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:06.002570+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 391921664 unmapped: 68665344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:07.002822+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 391921664 unmapped: 68665344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a206d000/0x0/0x1bfc00000, data 0x48b83de/0x4700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:08.003133+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392019968 unmapped: 68567040 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a206e000/0x0/0x1bfc00000, data 0x48b83de/0x4700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:09.003387+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392019968 unmapped: 68567040 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4674049 data_alloc: 234881024 data_used: 28856320
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:10.003589+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392028160 unmapped: 68558848 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:11.003772+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a2068000/0x0/0x1bfc00000, data 0x48be3de/0x4706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392028160 unmapped: 68558848 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d3b02800 session 0x5636d4f97e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:12.004029+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d60d1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d60d1400 session 0x5636d5113680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392028160 unmapped: 68558848 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:13.004259+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392028160 unmapped: 68558848 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b68400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.107902527s of 11.172630310s, submitted: 29
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d3b68400 session 0x5636d5297860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:14.004480+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d1e8c400 session 0x5636d50325a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 393093120 unmapped: 67493888 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4646049 data_alloc: 234881024 data_used: 28741632
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d28e6000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:15.004689+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 ms_handle_reset con 0x5636d28e6000 session 0x5636d51123c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 393093120 unmapped: 67493888 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:16.004917+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 422 ms_handle_reset con 0x5636d2e4a800 session 0x5636d538ef00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a2048000/0x0/0x1bfc00000, data 0x44f50fe/0x471d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394534912 unmapped: 66052096 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 422 ms_handle_reset con 0x5636d3b02800 session 0x5636d2ccc960
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:17.005138+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394108928 unmapped: 66478080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:18.005724+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394108928 unmapped: 66478080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a1f40000/0x0/0x1bfc00000, data 0x46060fe/0x482e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:19.006606+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d60d1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 422 ms_handle_reset con 0x5636d60d1400 session 0x5636d284b4a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 67764224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4409313 data_alloc: 234881024 data_used: 12996608
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:20.006821+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 67764224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:21.007340+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a323f000/0x0/0x1bfc00000, data 0x330709c/0x352e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 67764224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a323f000/0x0/0x1bfc00000, data 0x330709c/0x352e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:22.007886+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 67764224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:23.008198+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x332809c/0x354f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 67764224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x332809c/0x354f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:24.008523+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392822784 unmapped: 67764224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.350358963s of 11.046576500s, submitted: 123
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4414031 data_alloc: 234881024 data_used: 13004800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:25.008690+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 423 ms_handle_reset con 0x5636d52a1400 session 0x5636d2ec03c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 423 ms_handle_reset con 0x5636d4f3d000 session 0x5636d5300d20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d60d1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389373952 unmapped: 71213056 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 423 ms_handle_reset con 0x5636d60d1400 session 0x5636d21bcf00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:26.008847+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389382144 unmapped: 71204864 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:27.008983+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389382144 unmapped: 71204864 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:28.009095+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a3e9a000/0x0/0x1bfc00000, data 0x26abc1b/0x28d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [0,0,1])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389382144 unmapped: 71204864 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:29.009333+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389382144 unmapped: 71204864 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4264034 data_alloc: 218103808 data_used: 4784128
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:30.009697+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 424 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4f78b40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389398528 unmapped: 71188480 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:31.010013+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3e96000/0x0/0x1bfc00000, data 0x26b093b/0x28d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389398528 unmapped: 71188480 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:32.010294+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389398528 unmapped: 71188480 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:33.010547+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 424 ms_handle_reset con 0x5636d3b69000 session 0x5636d4bb85a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 424 ms_handle_reset con 0x5636d3a22000 session 0x5636d4bb8f00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 424 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4f792c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:34.011058+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.740626335s of 10.004773140s, submitted: 115
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4160032 data_alloc: 218103808 data_used: 3366912
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4b73000/0x0/0x1bfc00000, data 0x19d14ed/0x1bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:35.011362+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:36.011628+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4b73000/0x0/0x1bfc00000, data 0x19d14ed/0x1bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:37.011965+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:38.012277+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:39.012592+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4160032 data_alloc: 218103808 data_used: 3366912
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:40.012843+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4b73000/0x0/0x1bfc00000, data 0x19d14ed/0x1bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:41.013220+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389406720 unmapped: 71180288 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:42.013471+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:43.013673+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d4f3d000 session 0x5636d5033680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d52a1400 session 0x5636d4e5fc20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:44.013929+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4160032 data_alloc: 218103808 data_used: 3366912
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:45.014068+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d60d1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d60d1400 session 0x5636d2028b40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:46.014176+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4b73000/0x0/0x1bfc00000, data 0x19d14ed/0x1bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:47.014320+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:48.014552+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4b73000/0x0/0x1bfc00000, data 0x19d14ed/0x1bfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.819927216s of 13.827451706s, submitted: 13
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2ec1680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 71163904 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:49.014706+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3a22000 session 0x5636d5297680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d4f3d000 session 0x5636d4e3c780
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389939200 unmapped: 74850304 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4249575 data_alloc: 218103808 data_used: 3366912
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:50.014875+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:51.015006+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:52.015164+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:53.015289+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:54.015421+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a409a000/0x0/0x1bfc00000, data 0x24aa54f/0x26d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4249575 data_alloc: 218103808 data_used: 3366912
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:55.015568+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:56.015746+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:57.015892+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a409a000/0x0/0x1bfc00000, data 0x24aa54f/0x26d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389947392 unmapped: 74842112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:58.016067+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:18:59.016203+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4249575 data_alloc: 218103808 data_used: 3366912
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:00.016406+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a409a000/0x0/0x1bfc00000, data 0x24aa54f/0x26d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:01.016573+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:02.016713+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:03.016899+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:04.017039+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4249575 data_alloc: 218103808 data_used: 3366912
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:05.017182+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389963776 unmapped: 74825728 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:06.017322+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a409a000/0x0/0x1bfc00000, data 0x24aa54f/0x26d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389971968 unmapped: 74817536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:07.017454+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d52a1400 session 0x5636d20830e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389971968 unmapped: 74817536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:08.017591+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d28e6000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d28e6000 session 0x5636d4f781e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389971968 unmapped: 74817536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:09.017755+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2ec0960
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.849130630s of 20.970087051s, submitted: 39
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3a22000 session 0x5636d5300000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389914624 unmapped: 74874880 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4249376 data_alloc: 218103808 data_used: 3366912
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:10.017947+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389914624 unmapped: 74874880 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:11.018140+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389914624 unmapped: 74874880 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:12.018267+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a409a000/0x0/0x1bfc00000, data 0x24aa54f/0x26d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389931008 unmapped: 74858496 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:13.018406+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389931008 unmapped: 74858496 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:14.018536+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a409a000/0x0/0x1bfc00000, data 0x24aa54f/0x26d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389931008 unmapped: 74858496 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4320228 data_alloc: 234881024 data_used: 13238272
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:15.018744+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d2e4a800 session 0x5636d3a3d680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3b02800 session 0x5636d52972c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f53c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d4f53c00 session 0x5636d2940780
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d1e8c400 session 0x5636d5327c20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 389931008 unmapped: 74858496 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:16.018964+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d2e4a800 session 0x5636d3c8a000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3a22000 session 0x5636d4f79e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3b02800 session 0x5636d510e000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636db0f0800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636db0f0800 session 0x5636d5230d20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 72744960 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d1e8c400 session 0x5636d502dc20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:17.019179+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3706000/0x0/0x1bfc00000, data 0x2e3c588/0x3068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3706000/0x0/0x1bfc00000, data 0x2e3c588/0x3068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 72744960 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:18.019379+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 72744960 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:19.019560+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3706000/0x0/0x1bfc00000, data 0x2e3c5c1/0x3068000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 72744960 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395543 data_alloc: 234881024 data_used: 13238272
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:20.019816+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 72744960 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:21.020017+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 72744960 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:22.020194+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.823556900s of 12.940815926s, submitted: 32
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3066000/0x0/0x1bfc00000, data 0x34dc5c1/0x3708000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [1,0,1])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 395591680 unmapped: 69197824 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:23.020401+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 397041664 unmapped: 67747840 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:24.020594+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4e5e1e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 395132928 unmapped: 69656576 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4526849 data_alloc: 234881024 data_used: 15872000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3a22000 session 0x5636d4f97680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:25.021342+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a298a000/0x0/0x1bfc00000, data 0x3bb85c1/0x3de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a298a000/0x0/0x1bfc00000, data 0x3bb85c1/0x3de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3b02800 session 0x5636d1fd94a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b2d800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 395141120 unmapped: 69648384 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3b2d800 session 0x5636d4da85a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:26.021658+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a298a000/0x0/0x1bfc00000, data 0x3bb85c1/0x3de4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 395141120 unmapped: 69648384 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:27.021868+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2989000/0x0/0x1bfc00000, data 0x3bb85e4/0x3de5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396853248 unmapped: 67936256 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:28.021981+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2989000/0x0/0x1bfc00000, data 0x3bb85e4/0x3de5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400375808 unmapped: 64413696 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:29.022181+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400375808 unmapped: 64413696 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4601827 data_alloc: 234881024 data_used: 25722880
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:30.022373+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400523264 unmapped: 64266240 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:31.022534+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400523264 unmapped: 64266240 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:32.022768+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400523264 unmapped: 64266240 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:33.022980+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2968000/0x0/0x1bfc00000, data 0x3bd95e4/0x3e06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400523264 unmapped: 64266240 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:34.023166+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400523264 unmapped: 64266240 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4598487 data_alloc: 234881024 data_used: 25722880
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:35.023335+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400523264 unmapped: 64266240 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:36.023455+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.942391396s of 14.256587982s, submitted: 151
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400621568 unmapped: 64167936 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:37.023635+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400621568 unmapped: 64167936 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d4f3d000 session 0x5636d1ee3a40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:38.023834+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d52a1400 session 0x5636d21bd860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400629760 unmapped: 64159744 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:39.024041+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2955000/0x0/0x1bfc00000, data 0x3beb5e4/0x3e18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [0,0,0,2])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404774912 unmapped: 60014592 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d3a22000 session 0x5636d2198780
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4661869 data_alloc: 234881024 data_used: 25866240
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:40.024306+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404774912 unmapped: 60014592 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:41.024456+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404938752 unmapped: 59850752 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:42.024644+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405069824 unmapped: 59719680 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a219f000/0x0/0x1bfc00000, data 0x43a25e4/0x45cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:43.024908+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:44.025196+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4666355 data_alloc: 234881024 data_used: 26353664
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:45.025393+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:46.025546+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2199000/0x0/0x1bfc00000, data 0x43a85e4/0x45d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:47.025769+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:48.025960+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:49.026144+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4666355 data_alloc: 234881024 data_used: 26353664
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:50.026381+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:51.026521+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:52.026666+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2199000/0x0/0x1bfc00000, data 0x43a85e4/0x45d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:53.026799+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 59711488 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:54.026926+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 59703296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:55.027039+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4666355 data_alloc: 234881024 data_used: 26353664
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2199000/0x0/0x1bfc00000, data 0x43a85e4/0x45d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 59703296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:56.027155+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 59703296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:57.027273+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 59703296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:58.027401+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 59703296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:19:59.027522+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405094400 unmapped: 59695104 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:00.027712+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4666355 data_alloc: 234881024 data_used: 26353664
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405094400 unmapped: 59695104 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:01.027878+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.164173126s of 24.367654800s, submitted: 90
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4709680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4487680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2199000/0x0/0x1bfc00000, data 0x43a85e4/0x45d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 ms_handle_reset con 0x5636d2e4a800 session 0x5636d5032d20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:02.027988+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:03.028195+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:04.028336+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:05.028416+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4463795 data_alloc: 234881024 data_used: 15867904
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:06.028562+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:07.028704+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:08.028799+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 66199552 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:09.028952+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 66191360 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:10.029466+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 66183168 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4463795 data_alloc: 234881024 data_used: 15867904
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:11.029633+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 66183168 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:12.029802+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 66183168 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:13.029938+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398614528 unmapped: 66174976 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [1])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:14.030094+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:15.030248+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4464435 data_alloc: 234881024 data_used: 16064512
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:16.030406+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:17.030560+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:18.030723+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:19.030864+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:20.031145+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4464435 data_alloc: 234881024 data_used: 16064512
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:21.031293+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:22.031417+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:23.031547+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:24.031747+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 66166784 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.178138733s of 23.300649643s, submitted: 45
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:25.031955+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 66084864 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475883 data_alloc: 234881024 data_used: 17014784
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:26.032187+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 66084864 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:27.032459+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 66084864 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:28.032652+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 66084864 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32e9000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:29.032806+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 66084864 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:30.033026+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 66084864 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475531 data_alloc: 234881024 data_used: 17014784
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:31.033203+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:32.033371+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32eb000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:33.033518+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:34.033653+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:35.033776+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475531 data_alloc: 234881024 data_used: 17014784
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:36.033936+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:37.034033+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a32eb000/0x0/0x1bfc00000, data 0x325954f/0x3483000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:38.034180+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.652603149s of 13.687821388s, submitted: 12
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:39.034314+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:40.034472+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 66076672 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d3a22000 session 0x5636d50334a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d4f3d000 session 0x5636d1ee3860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d52a1400 session 0x5636d4f96f00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b02800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4483613 data_alloc: 234881024 data_used: 17641472
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a32e8000/0x0/0x1bfc00000, data 0x325b21b/0x3486000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d3b02800 session 0x5636d2d71e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4e5f4a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d3a22000 session 0x5636d5300f00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d4f3d000 session 0x5636d4f78000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:41.034624+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 66011136 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:42.034772+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 65970176 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:43.034909+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 65970176 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:44.035049+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 65970176 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:45.035183+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398827520 unmapped: 65961984 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521852 data_alloc: 234881024 data_used: 17645568
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:46.035319+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bf6000/0x0/0x1bfc00000, data 0x35ec2ef/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398901248 unmapped: 65888256 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:47.035494+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398901248 unmapped: 65888256 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:48.035639+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398901248 unmapped: 65888256 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.337140083s of 10.989769936s, submitted: 252
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:49.035744+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398909440 unmapped: 65880064 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:50.035878+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521852 data_alloc: 234881024 data_used: 17645568
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bf6000/0x0/0x1bfc00000, data 0x35ec2ef/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:51.035998+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:52.036180+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:53.036338+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:54.036532+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:55.036656+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525194 data_alloc: 234881024 data_used: 17641472
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bf6000/0x0/0x1bfc00000, data 0x35ec2ef/0x3768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:56.036788+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d52a1400 session 0x5636d2029c20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:57.037376+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 398999552 unmapped: 65789952 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4d73000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d4d73000 session 0x5636d5032780
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:58.037512+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399007744 unmapped: 65781760 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:20:59.037641+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2e4a800 session 0x5636d502c960
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399007744 unmapped: 65781760 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.992650032s of 10.227743149s, submitted: 112
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d3a22000 session 0x5636d4e3d4a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3d000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:00.038177+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399163392 unmapped: 65626112 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4533294 data_alloc: 234881024 data_used: 17649664
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bca000/0x0/0x1bfc00000, data 0x361a322/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:01.038348+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 65601536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:02.038487+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 65601536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d46e7c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d46e7c00 session 0x5636d5326d20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:03.038681+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 65601536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d50f1860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bca000/0x0/0x1bfc00000, data 0x361a322/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d6c93000 session 0x5636d2197e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:04.038826+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d6c93000 session 0x5636d2d4b2c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 65601536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:05.038950+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 65601536 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593166 data_alloc: 234881024 data_used: 20439040
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bc9000/0x0/0x1bfc00000, data 0x361a331/0x3795000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:06.039091+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bc1000/0x0/0x1bfc00000, data 0x3ae9331/0x379d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:07.039244+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:08.039365+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:09.039552+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:10.039723+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593486 data_alloc: 234881024 data_used: 20451328
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:11.039884+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2bc1000/0x0/0x1bfc00000, data 0x3ae9331/0x379d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:12.040028+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.785018921s of 12.839673996s, submitted: 21
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 65527808 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:13.040154+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 63717376 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:14.040316+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401506304 unmapped: 63283200 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:15.040438+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 63266816 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4626468 data_alloc: 234881024 data_used: 20738048
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:16.040651+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401588224 unmapped: 63201280 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a28c7000/0x0/0x1bfc00000, data 0x3de3331/0x3a97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:17.040778+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402612224 unmapped: 62177280 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:18.040913+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406994944 unmapped: 57794560 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a24fc000/0x0/0x1bfc00000, data 0x41ae331/0x3e62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a24fc000/0x0/0x1bfc00000, data 0x41ae331/0x3e62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:19.041049+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406994944 unmapped: 57794560 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:20.041236+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403800064 unmapped: 60989440 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4672227 data_alloc: 234881024 data_used: 24047616
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:21.041413+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a24dd000/0x0/0x1bfc00000, data 0x41cd331/0x3e81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403808256 unmapped: 60981248 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:22.041549+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403816448 unmapped: 60973056 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:23.041687+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.828656197s of 11.126467705s, submitted: 61
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404881408 unmapped: 59908096 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:24.041829+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404889600 unmapped: 59899904 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a240b000/0x0/0x1bfc00000, data 0x429f331/0x3f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:25.042008+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404889600 unmapped: 59899904 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4684943 data_alloc: 234881024 data_used: 24043520
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:26.042184+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 60727296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:27.042333+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 60727296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:28.042475+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a2401000/0x0/0x1bfc00000, data 0x42a9331/0x3f5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 60727296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:29.042647+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404062208 unmapped: 60727296 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:30.042878+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 60588032 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4681143 data_alloc: 234881024 data_used: 24047616
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:31.043011+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 60588032 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:32.043174+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 60588032 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:33.043309+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 60588032 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a23fa000/0x0/0x1bfc00000, data 0x42b0331/0x3f64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:34.043433+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 60588032 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d502d2c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.482513428s of 11.543985367s, submitted: 17
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4e3c780
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:35.043563+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 60588032 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4681314 data_alloc: 234881024 data_used: 24174592
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d3a22000 session 0x5636d50f05a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:36.043708+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404217856 unmapped: 60571648 heap: 464789504 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d46e7c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d46e7c00 session 0x5636d538eb40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:37.043881+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 63766528 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:38.044015+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404717568 unmapped: 63750144 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1abc000/0x0/0x1bfc00000, data 0x4bef322/0x48a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:39.044158+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404717568 unmapped: 63750144 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:40.044365+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404717568 unmapped: 63750144 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4755696 data_alloc: 234881024 data_used: 24174592
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d46e7c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d46e7c00 session 0x5636d538e780
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:41.044485+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 63766528 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d46f4b40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:42.044650+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 63766528 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:43.044789+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a1f5b000/0x0/0x1bfc00000, data 0x47512b0/0x4402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 63766528 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4f792c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:44.044943+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404709376 unmapped: 63758336 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a1f5b000/0x0/0x1bfc00000, data 0x47512b0/0x4402000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.871521950s of 10.023799896s, submitted: 46
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:45.045145+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404709376 unmapped: 63758336 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4693622 data_alloc: 234881024 data_used: 24039424
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 427 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4f79a40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:46.045280+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404725760 unmapped: 63741952 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:47.045460+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 427 ms_handle_reset con 0x5636d3a22000 session 0x5636d4e621e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405168128 unmapped: 63299584 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:48.045606+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406331392 unmapped: 62136320 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:49.045734+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406331392 unmapped: 62136320 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:50.045908+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1f62000/0x0/0x1bfc00000, data 0x41cab82/0x43fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406347776 unmapped: 62119936 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4764956 data_alloc: 251658240 data_used: 32079872
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:51.046094+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406347776 unmapped: 62119936 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:52.046307+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406364160 unmapped: 62103552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:53.046510+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406364160 unmapped: 62103552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:54.046656+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406364160 unmapped: 62103552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1f62000/0x0/0x1bfc00000, data 0x41cab82/0x43fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:55.046794+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406364160 unmapped: 62103552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4764956 data_alloc: 251658240 data_used: 32079872
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:56.046929+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.281738281s of 11.311649323s, submitted: 22
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406364160 unmapped: 62103552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:57.047063+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406380544 unmapped: 62087168 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4486960
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2f98000/0x0/0x1bfc00000, data 0x2e56b20/0x3086000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [0,0,0,0,6])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:58.047192+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402259968 unmapped: 66207744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:21:59.047436+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402259968 unmapped: 66207744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2d13000/0x0/0x1bfc00000, data 0x30d3b20/0x3303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:00.047697+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402595840 unmapped: 65871872 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505656 data_alloc: 234881024 data_used: 17100800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2d13000/0x0/0x1bfc00000, data 0x30d3b20/0x3303000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:01.047849+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2c9b000/0x0/0x1bfc00000, data 0x314bb20/0x337b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402595840 unmapped: 65871872 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:02.047982+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:03.048172+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:04.048301+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:05.048457+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505608 data_alloc: 234881024 data_used: 17465344
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:06.048618+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd6000/0x0/0x1bfc00000, data 0x3158b20/0x3388000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:07.048786+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.255873680s of 11.477129936s, submitted: 104
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd6000/0x0/0x1bfc00000, data 0x3158b20/0x3388000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:08.048938+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:09.049072+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:10.049252+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d4bb90e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399998976 unmapped: 68468736 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504180 data_alloc: 234881024 data_used: 17465344
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:11.049386+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400007168 unmapped: 68460544 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:12.049534+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd3000/0x0/0x1bfc00000, data 0x315bb20/0x338b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:13.049694+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d4f97860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:14.049848+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:15.049983+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303742 data_alloc: 218103808 data_used: 6471680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:16.050190+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:17.050320+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4121000/0x0/0x1bfc00000, data 0x200db20/0x223d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:18.050476+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:19.050600+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4121000/0x0/0x1bfc00000, data 0x200db20/0x223d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4121000/0x0/0x1bfc00000, data 0x200db20/0x223d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:20.050912+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 396435456 unmapped: 72032256 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303742 data_alloc: 218103808 data_used: 6471680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.177040100s of 13.237181664s, submitted: 14
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d52a1400 session 0x5636d2ecf680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d4f3d000 session 0x5636d2198000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4121000/0x0/0x1bfc00000, data 0x200db20/0x223d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:21.051036+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2eec960
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:22.051172+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:23.051313+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:24.051428+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:25.051585+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4225027 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:26.051720+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:27.051893+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:28.052045+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:29.052221+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:30.052389+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394272768 unmapped: 74194944 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4225027 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:31.052537+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:32.052689+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:33.052829+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:34.052996+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:35.053205+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4225027 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:36.053348+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:37.053463+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:38.053628+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:39.053770+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:40.053933+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4225027 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:41.054040+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394280960 unmapped: 74186752 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:42.054179+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:43.054349+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:44.054505+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:45.054646+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4225027 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:46.054799+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:47.054951+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:48.055176+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:49.055507+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394289152 unmapped: 74178560 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:50.055659+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394297344 unmapped: 74170368 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4225027 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:51.055803+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394297344 unmapped: 74170368 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:52.055971+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.362501144s of 31.471702576s, submitted: 43
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4759000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d44861e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 73850880 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:53.056090+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 73850880 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:54.056295+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 73850880 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:55.056433+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 73850880 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4262521 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:56.056618+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4366000/0x0/0x1bfc00000, data 0x1dcba8b/0x1ff8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 73850880 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4366000/0x0/0x1bfc00000, data 0x1dcba8b/0x1ff8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:57.056805+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 73850880 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:58.056982+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d52a1400 session 0x5636d28e9a40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:22:59.057159+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:00.057323+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4272926 data_alloc: 218103808 data_used: 4550656
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:01.057499+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4365000/0x0/0x1bfc00000, data 0x1dcbaae/0x1ff9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:02.057664+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:03.057826+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:04.057957+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:05.058890+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285406 data_alloc: 218103808 data_used: 6332416
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:06.059085+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394625024 unmapped: 73842688 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4365000/0x0/0x1bfc00000, data 0x1dcbaae/0x1ff9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:07.059566+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394633216 unmapped: 73834496 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:08.059844+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4365000/0x0/0x1bfc00000, data 0x1dcbaae/0x1ff9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394633216 unmapped: 73834496 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:09.079062+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394633216 unmapped: 73834496 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:10.079460+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 394633216 unmapped: 73834496 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285406 data_alloc: 218103808 data_used: 6332416
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:11.079765+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.085372925s of 19.132513046s, submitted: 17
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a4365000/0x0/0x1bfc00000, data 0x1dcbaae/0x1ff9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [0,0,1,6,2])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401121280 unmapped: 67346432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:12.081657+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:13.083329+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:14.083664+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:15.084004+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3552000/0x0/0x1bfc00000, data 0x2bddaae/0x2e0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4407902 data_alloc: 218103808 data_used: 8286208
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:16.084264+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:17.084422+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:18.084598+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38796 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:19.084754+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:20.084955+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3550000/0x0/0x1bfc00000, data 0x2be0aae/0x2e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404590 data_alloc: 218103808 data_used: 8286208
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:21.085118+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3550000/0x0/0x1bfc00000, data 0x2be0aae/0x2e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:22.085407+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 66953216 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:23.085728+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a3550000/0x0/0x1bfc00000, data 0x2be0aae/0x2e0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 66945024 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:24.086140+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 66945024 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:25.086414+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401530880 unmapped: 66936832 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404910 data_alloc: 218103808 data_used: 8294400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:26.086729+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401530880 unmapped: 66936832 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:27.086874+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401530880 unmapped: 66936832 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:28.087136+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.468616486s of 16.711784363s, submitted: 112
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d3c8be00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401711104 unmapped: 66756608 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:29.087564+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x315eaae/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401711104 unmapped: 66756608 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:30.087786+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4458452 data_alloc: 218103808 data_used: 8294400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:31.087942+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:32.088328+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:33.088531+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:34.088653+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x315eaae/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:35.088807+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4458452 data_alloc: 218103808 data_used: 8294400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:36.088932+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 66748416 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:37.089279+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:38.089501+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:39.089677+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x315eaae/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:40.090242+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4491092 data_alloc: 234881024 data_used: 12820480
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:41.091421+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x315eaae/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:42.091809+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:43.091986+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:44.092188+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.121622086s of 16.178701401s, submitted: 17
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2fd2000/0x0/0x1bfc00000, data 0x315eaae/0x338c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:45.092588+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 66551808 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4491204 data_alloc: 234881024 data_used: 12881920
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:46.092820+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401924096 unmapped: 66543616 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:47.093050+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401924096 unmapped: 66543616 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:48.093186+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 62742528 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:49.093372+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1626000/0x0/0x1bfc00000, data 0x396aaae/0x3b98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:50.093567+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4560238 data_alloc: 234881024 data_used: 14278656
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:51.093747+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:52.094014+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:53.094202+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:54.094396+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1618000/0x0/0x1bfc00000, data 0x3977aae/0x3ba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.064040184s of 10.258877754s, submitted: 80
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:55.094600+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4553794 data_alloc: 234881024 data_used: 14278656
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:56.094727+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405995520 unmapped: 62472192 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3a22000 session 0x5636d5112000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:57.095177+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d284a5a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:58.095360+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1618000/0x0/0x1bfc00000, data 0x3977aae/0x3ba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:23:59.095551+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:00.095784+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a23af000/0x0/0x1bfc00000, data 0x2be1aae/0x2e0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4412186 data_alloc: 218103808 data_used: 8355840
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:01.096018+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:02.096246+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:03.096497+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d50f05a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 64479232 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:04.096650+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d5113e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399179776 unmapped: 69287936 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:05.096853+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 69279744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240744 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:06.097017+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 69279744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:07.097216+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 69279744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:08.097405+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 69279744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:09.097613+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399187968 unmapped: 69279744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:10.097841+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240744 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:11.098006+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:12.098135+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:13.098314+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:14.098553+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:15.098746+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240744 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:16.098882+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:17.099042+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399196160 unmapped: 69271552 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:18.099198+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:19.099335+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:20.099522+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240744 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:21.099636+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:22.099770+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:23.099908+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:24.100062+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:25.100185+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399204352 unmapped: 69263360 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240744 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:26.100389+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:27.100593+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:28.100729+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:29.100871+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:30.101048+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4240744 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:31.101169+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:32.101314+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:33.101462+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 69246976 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:34.101591+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 69238784 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:35.101765+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.500312805s of 40.577518463s, submitted: 30
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4f970e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d52a1400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d52a1400 session 0x5636d47090e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a33fb000/0x0/0x1bfc00000, data 0x1b95ab4/0x1dc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d51123c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d20a5680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d50321e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4297582 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:36.101942+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:37.102076+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:38.102264+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:39.102476+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:40.102648+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:41.102806+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4297582 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:42.103564+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:43.103726+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 69214208 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:44.103862+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 69206016 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:45.104030+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:46.104156+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4339474 data_alloc: 218103808 data_used: 9203712
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:47.104349+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.480249405s of 12.605928421s, submitted: 45
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d3a6cd20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:48.104519+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:49.104680+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:50.104923+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:51.105144+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4337998 data_alloc: 218103808 data_used: 9203712
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:52.105331+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:53.105536+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:54.105748+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d46e7c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:55.105973+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:56.106162+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4338130 data_alloc: 218103808 data_used: 9203712
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d46e7c00 session 0x5636d2940000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:57.107034+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399269888 unmapped: 69197824 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ff8000/0x0/0x1bfc00000, data 0x1f98aed/0x21c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:58.107226+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399278080 unmapped: 69189632 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:24:59.107408+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399278080 unmapped: 69189632 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.830513954s of 11.840190887s, submitted: 3
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:00.107683+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399278080 unmapped: 69189632 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:01.107898+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399278080 unmapped: 69189632 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4338130 data_alloc: 218103808 data_used: 9203712
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d47083c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:02.108083+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399278080 unmapped: 69189632 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d2028960
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:03.108345+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399286272 unmapped: 69181440 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:04.108524+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399286272 unmapped: 69181440 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:05.108721+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399286272 unmapped: 69181440 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:06.108940+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247334 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:07.109194+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:08.109433+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:09.109635+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:10.109830+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:11.110012+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247334 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:12.110375+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:13.110562+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 69173248 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:14.110709+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:15.110843+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:16.110961+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247334 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:17.111229+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:18.111340+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:19.111479+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-12-06T08:25:20.279399+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _finish_auth 0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:20.290327+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399302656 unmapped: 69165056 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247334 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:21.279640+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399310848 unmapped: 69156864 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:22.279803+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399310848 unmapped: 69156864 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:23.279917+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 69148672 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:24.280046+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 69148672 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:25.280172+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 69148672 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247334 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:26.280319+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 69148672 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:27.280520+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 69148672 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:28.280649+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 69148672 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:29.280862+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:30.281066+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4247334 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:31.281248+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:32.281386+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:33.281522+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:34.281705+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a35ba000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4e3da40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d4516d20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a1ec00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3a1ec00 session 0x5636d3a3d680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:35.281914+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d4e62b40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.754333496s of 35.839118958s, submitted: 32
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d4f97680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d20821e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:36.282040+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4301455 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:37.282169+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 69140480 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d4517680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2ee5000/0x0/0x1bfc00000, data 0x20abaed/0x22d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:38.282748+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399376384 unmapped: 69091328 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2d4e000/0x0/0x1bfc00000, data 0x2242aed/0x2470000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:39.282896+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399376384 unmapped: 69091328 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a23000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3a23000 session 0x5636d2ec10e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d50f0b40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d2d4af00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:40.283053+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399368192 unmapped: 69099520 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d5301860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:41.283178+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399368192 unmapped: 69099520 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4374929 data_alloc: 218103808 data_used: 3399680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4ac00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d50b2000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d50b2000 session 0x5636d2d71e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:42.283316+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399917056 unmapped: 68550656 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2777000/0x0/0x1bfc00000, data 0x2817b72/0x2a47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:43.283458+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399917056 unmapped: 68550656 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e65000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e65000 session 0x5636d3a3c1e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2ec1e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:44.283591+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399917056 unmapped: 68550656 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d50b2000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:45.283715+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399917056 unmapped: 68550656 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:46.283835+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 399925248 unmapped: 68542464 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4427327 data_alloc: 218103808 data_used: 10481664
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d6c93000 session 0x5636d2083a40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:47.283997+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400089088 unmapped: 68378624 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3a22c00 session 0x5636d4e3c000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2776000/0x0/0x1bfc00000, data 0x2817b81/0x2a48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:48.284155+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400089088 unmapped: 68378624 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e6a800 session 0x5636d5327e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3afec00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.909353256s of 13.144114494s, submitted: 72
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3afec00 session 0x5636d20283c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:49.284279+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400089088 unmapped: 68378624 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2775000/0x0/0x1bfc00000, data 0x2817b91/0x2a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:50.284451+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402071552 unmapped: 66396160 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:51.284625+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402866176 unmapped: 65601536 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4473866 data_alloc: 234881024 data_used: 16875520
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2775000/0x0/0x1bfc00000, data 0x2817b91/0x2a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:52.284756+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402866176 unmapped: 65601536 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2775000/0x0/0x1bfc00000, data 0x2817b91/0x2a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2028960
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e6a800 session 0x5636d5112780
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:53.284838+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404668416 unmapped: 63799296 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3a22c00 session 0x5636d2daed20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:54.284947+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406855680 unmapped: 61612032 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:55.285070+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406855680 unmapped: 61612032 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:56.285188+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d4e3d4a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406855680 unmapped: 61612032 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4429543 data_alloc: 218103808 data_used: 11780096
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d50b2000 session 0x5636d51132c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a12d6000/0x0/0x1bfc00000, data 0x25ebb1f/0x281b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e6a800 session 0x5636d4e5e1e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:57.285310+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406855680 unmapped: 61612032 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:58.285436+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406855680 unmapped: 61612032 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:25:59.285628+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 406855680 unmapped: 61612032 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.875857353s of 11.130379677s, submitted: 107
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4ac00 session 0x5636d4bb9c20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d3a3cd20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:00.285783+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2ccc000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a2048000/0x0/0x1bfc00000, data 0x19faaae/0x1c28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:01.285905+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:02.286036+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:03.286207+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:04.286350+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:05.286519+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:06.286651+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:07.286813+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:08.286952+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:09.287138+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:10.287403+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:11.287522+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:12.287670+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:13.288030+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:14.288151+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:15.288308+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:16.288436+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:17.288563+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:18.288711+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:19.288856+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:20.289021+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:21.289136+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:22.289475+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:23.289655+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:24.289784+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:25.289921+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:26.290049+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:27.290276+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:28.290422+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:29.290640+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:30.290832+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403169280 unmapped: 65298432 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:31.290965+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:32.291236+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:33.291396+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:34.291518+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:35.291657+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:36.291829+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:37.292075+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:38.292211+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:39.292361+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403177472 unmapped: 65290240 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:40.292540+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403185664 unmapped: 65282048 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:41.292684+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403185664 unmapped: 65282048 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:42.292810+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:43.292956+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:44.293180+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:45.293389+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:46.293532+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:47.293680+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:48.293827+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:49.293967+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403193856 unmapped: 65273856 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:50.294166+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:51.294299+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:52.294438+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:53.294570+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:54.294740+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:55.294861+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:56.295032+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4268408 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:57.295155+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403202048 unmapped: 65265664 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:58.295334+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 65257472 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2ccbc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2ccbc00 session 0x5636d44874a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:26:59.295492+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e6a800 session 0x5636d45161e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 65257472 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:00.295653+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a206c000/0x0/0x1bfc00000, data 0x19d6a8b/0x1c03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 65257472 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 61.038486481s of 61.128166199s, submitted: 29
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e8c400 session 0x5636d502dc20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:01.295774+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 65257472 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4271666 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:02.295908+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 65257472 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:03.296055+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4487680
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4ac00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 65257472 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d2e4ac00 session 0x5636d50f03c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:04.296181+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3a22c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 405331968 unmapped: 63135744 heap: 468467712 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d3a22c00 session 0x5636d2029c20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 ms_handle_reset con 0x5636d1e6a800 session 0x5636d28e9a40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:05.296323+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403234816 unmapped: 69435392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1ae5000/0x0/0x1bfc00000, data 0x230aafd/0x2539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:06.296460+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403243008 unmapped: 69427200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4348010 data_alloc: 218103808 data_used: 3395584
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:07.296607+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403243008 unmapped: 69427200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a1ae5000/0x0/0x1bfc00000, data 0x230aafd/0x2539000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e8c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:08.296782+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403243008 unmapped: 69427200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:09.296912+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403251200 unmapped: 69419008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:10.297051+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 430 ms_handle_reset con 0x5636d2e4a800 session 0x5636d2d714a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403259392 unmapped: 69410816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.583721161s of 10.452672958s, submitted: 60
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:11.297165+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4ac00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 71598080 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4399704 data_alloc: 218103808 data_used: 3407872
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 431 ms_handle_reset con 0x5636d6c93000 session 0x5636d502d860
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a1ad9000/0x0/0x1bfc00000, data 0x2310233/0x2543000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:12.297337+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 432 ms_handle_reset con 0x5636d2e4ac00 session 0x5636d538eb40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 432 ms_handle_reset con 0x5636d1e8c400 session 0x5636d2ecc5a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401080320 unmapped: 71589888 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:13.297470+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401080320 unmapped: 71589888 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 432 ms_handle_reset con 0x5636d1e6a800 session 0x5636d4e62f00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:14.297620+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401088512 unmapped: 71581696 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:15.297813+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4ac00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 432 ms_handle_reset con 0x5636d2e4ac00 session 0x5636d5112000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3c000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a10e7000/0x0/0x1bfc00000, data 0x2cfdf8d/0x2f34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402169856 unmapped: 70500352 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 432 ms_handle_reset con 0x5636d6c93000 session 0x5636d3b730e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:16.298003+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 433 ms_handle_reset con 0x5636d4f3c000 session 0x5636d2d714a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b2c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 433 ms_handle_reset con 0x5636d3b2c400 session 0x5636d20285a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4487619 data_alloc: 218103808 data_used: 3416064
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402194432 unmapped: 70475776 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 433 ms_handle_reset con 0x5636d2e4a800 session 0x5636d4bb85a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:17.298175+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402194432 unmapped: 70475776 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:18.298374+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402194432 unmapped: 70475776 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:19.298507+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402194432 unmapped: 70475776 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a0c14000/0x0/0x1bfc00000, data 0x31d1c75/0x3409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:20.298656+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402194432 unmapped: 70475776 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:21.298789+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4487619 data_alloc: 218103808 data_used: 3416064
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402194432 unmapped: 70475776 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:22.298916+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a0c14000/0x0/0x1bfc00000, data 0x31d1c75/0x3409000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 433 handle_osd_map epochs [434,434], i have 434, src has [1,434]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.084393501s of 11.318605423s, submitted: 45
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 70459392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:23.299160+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 70459392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:24.299341+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 70459392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:25.299492+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 70459392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:26.299671+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490593 data_alloc: 218103808 data_used: 3416064
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 70459392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:27.299826+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402210816 unmapped: 70459392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:28.300006+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a0c11000/0x0/0x1bfc00000, data 0x31d3827/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402219008 unmapped: 70451200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:29.300176+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a0c11000/0x0/0x1bfc00000, data 0x31d3827/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402219008 unmapped: 70451200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:30.300385+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402235392 unmapped: 70434816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:31.300524+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4490593 data_alloc: 218103808 data_used: 3416064
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402235392 unmapped: 70434816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d1e6a800 session 0x5636d45161e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:32.300646+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4ac00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b2c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402235392 unmapped: 70434816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:33.300792+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402112512 unmapped: 70557696 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:34.300940+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402120704 unmapped: 70549504 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a0c12000/0x0/0x1bfc00000, data 0x31d3827/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:35.301177+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402120704 unmapped: 70549504 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:36.301311+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519020 data_alloc: 218103808 data_used: 7258112
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402120704 unmapped: 70549504 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a0c12000/0x0/0x1bfc00000, data 0x31d3827/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:37.301450+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402120704 unmapped: 70549504 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:38.301597+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a0c12000/0x0/0x1bfc00000, data 0x31d3827/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402120704 unmapped: 70549504 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:39.301767+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402120704 unmapped: 70549504 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:40.301951+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 70541312 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:41.302265+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519020 data_alloc: 218103808 data_used: 7258112
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 70541312 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:42.302461+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 70541312 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:43.302593+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 70541312 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 62K writes, 230K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 62K writes, 24K syncs, 2.60 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3510 writes, 12K keys, 3510 commit groups, 1.0 writes per commit group, ingest: 13.22 MB, 0.02 MB/s
                                           Interval WAL: 3510 writes, 1479 syncs, 2.37 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a0c12000/0x0/0x1bfc00000, data 0x31d3827/0x340c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:44.302731+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 70541312 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:45.302879+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.704803467s of 22.738908768s, submitted: 19
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403709952 unmapped: 68960256 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:46.303000+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4613876 data_alloc: 218103808 data_used: 8192000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 68894720 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:47.303140+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a07e7000/0x0/0x1bfc00000, data 0x3aa4827/0x382a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 68894720 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:48.303304+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 68894720 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:49.303462+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 68894720 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:50.303656+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 68894720 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: mgrc ms_handle_reset ms_handle_reset con 0x5636d631bc00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/798720280
Dec 06 08:46:59 compute-0 ceph-osd[84884]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/798720280,v1:192.168.122.100:6801/798720280]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: get_auth_request con 0x5636d1e8c400 auth_method 0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: mgrc handle_mgr_configure stats_period=5
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:51.303839+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4614532 data_alloc: 218103808 data_used: 8208384
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 68894720 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d28e6c00 session 0x5636d45165a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d2d4dc00 session 0x5636d4e62960
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d28e6c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:52.303963+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d75a4400 session 0x5636d44865a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d6c93000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d2e4ac00 session 0x5636d4bb9c20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d3b2c400 session 0x5636d2083e00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402857984 unmapped: 69812224 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4117400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a07e6000/0x0/0x1bfc00000, data 0x3aa4827/0x382a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bbdf9c6), peers [1,2] op hist [0,1])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 ms_handle_reset con 0x5636d4117400 session 0x5636d4e3c000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:53.304276+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402857984 unmapped: 69812224 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:54.304422+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402857984 unmapped: 69812224 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:55.304584+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402857984 unmapped: 69812224 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a03e4000/0x0/0x1bfc00000, data 0x3aa4827/0x382a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e6a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:56.304770+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.772432327s of 11.028005600s, submitted: 79
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4ac00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 435 ms_handle_reset con 0x5636d1e6a800 session 0x5636d50f14a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d3b2c400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609210 data_alloc: 218103808 data_used: 8187904
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403914752 unmapped: 68755456 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 435 ms_handle_reset con 0x5636d3b2c400 session 0x5636d502d2c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:57.304946+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _renew_subs
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 436 ms_handle_reset con 0x5636d2e4ac00 session 0x5636d50f1c20
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e69c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401178624 unmapped: 71491584 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 436 ms_handle_reset con 0x5636d2e4a800 session 0x5636d44861e0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 436 ms_handle_reset con 0x5636d1e69c00 session 0x5636d52974a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:58.305206+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:27:59.305388+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:00.305574+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a1ff2000/0x0/0x1bfc00000, data 0x19e52c7/0x1c1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a1ff2000/0x0/0x1bfc00000, data 0x19e52c7/0x1c1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:01.305741+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4326015 data_alloc: 218103808 data_used: 3424256
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:02.305933+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a1ff2000/0x0/0x1bfc00000, data 0x19e52c7/0x1c1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:03.306178+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:04.306361+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a1ff2000/0x0/0x1bfc00000, data 0x19e52c7/0x1c1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:05.306513+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401186816 unmapped: 71483392 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:06.306731+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.616170883s of 10.038711548s, submitted: 109
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 71475200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:07.306893+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 71475200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:08.307076+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 71475200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:09.307236+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401195008 unmapped: 71475200 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:10.307441+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:11.307594+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:12.307776+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:13.307940+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:14.308062+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:15.308297+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:16.308428+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:17.308628+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 71467008 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:18.308763+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 71458816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:19.308904+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 71458816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:20.309158+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401211392 unmapped: 71458816 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:21.309352+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 71450624 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:22.309513+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 71450624 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:23.309654+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 71450624 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:24.309819+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 71450624 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:25.309978+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 71450624 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:26.310190+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401227776 unmapped: 71442432 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:27.310457+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401227776 unmapped: 71442432 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:28.310596+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 71434240 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:29.310810+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 71434240 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:30.311054+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 71434240 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:31.311193+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 71434240 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:32.311374+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 71434240 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:33.311534+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 71434240 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:34.311744+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401244160 unmapped: 71426048 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:35.311903+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:36.312042+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:37.312225+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:38.312418+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:39.312596+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:40.312761+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:41.312970+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:42.313123+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:43.313242+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:44.313405+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:45.313628+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:46.313816+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:47.314045+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:48.314196+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:49.314424+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 71401472 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:50.314691+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:51.314856+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:52.315023+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:53.315157+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:54.315297+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:55.315480+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:56.315624+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:57.315842+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 71393280 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:58.316035+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:28:59.316162+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:00.316372+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:01.316550+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:02.316773+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:03.316989+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:04.317179+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:05.317343+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 71385088 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:06.317664+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:07.317791+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:08.317964+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:09.318129+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:10.318328+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:11.318491+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:12.318654+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:13.318822+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 71368704 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:14.319015+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:15.319205+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 71352320 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:16.319373+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 71352320 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:17.319561+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 71352320 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:18.319691+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 71344128 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:19.320301+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 71344128 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:20.320466+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 71344128 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:21.321208+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 71344128 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:22.321345+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 71335936 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:23.321490+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 71335936 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:24.321606+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 71335936 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:25.321748+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 71335936 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:26.321873+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 71335936 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:27.322019+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'config diff' '{prefix=config diff}'
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 71417856 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'config show' '{prefix=config show}'
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'counter dump' '{prefix=counter dump}'
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:28.322168+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'counter schema' '{prefix=counter schema}'
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 71704576 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:29.322302+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400850944 unmapped: 71819264 heap: 472670208 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:30.322483+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'log dump' '{prefix=log dump}'
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'perf dump' '{prefix=perf dump}'
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 82763776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:31.322600+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'perf schema' '{prefix=perf schema}'
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 82821120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:32.322723+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 82821120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:33.322864+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 82821120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:34.322990+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 82821120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:35.323146+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 82821120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:36.323264+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 82821120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:37.323377+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400891904 unmapped: 82821120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:38.323494+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 82812928 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:39.323634+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 82812928 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:40.323784+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 82812928 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:41.323937+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 82812928 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:42.324082+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 82812928 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:43.324163+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 82812928 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:44.324283+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400900096 unmapped: 82812928 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:45.324405+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 82804736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:46.327822+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 82804736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:47.327994+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 82804736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:48.328152+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 82804736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:49.328320+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 82804736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:50.328513+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 82804736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:51.328643+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 82804736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:52.328761+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400908288 unmapped: 82804736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:53.328930+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 82780160 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:54.329061+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 82780160 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:55.329217+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 82780160 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:56.329401+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 82780160 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:57.329563+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 82780160 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:58.329740+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 82780160 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:29:59.329904+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 82780160 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:00.330056+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400932864 unmapped: 82780160 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:01.330196+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 82763776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:02.330476+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 82763776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:03.330678+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 82763776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:04.330867+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 82763776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:05.331080+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 82763776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:06.331343+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 82763776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:07.331680+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 82763776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:08.331876+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400949248 unmapped: 82763776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:09.331998+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400957440 unmapped: 82755584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:10.332165+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400957440 unmapped: 82755584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:11.332316+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400957440 unmapped: 82755584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:12.332498+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400957440 unmapped: 82755584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:13.332650+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400957440 unmapped: 82755584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:14.332778+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400957440 unmapped: 82755584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:15.332965+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400957440 unmapped: 82755584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:16.333149+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:17.333283+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400957440 unmapped: 82755584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:18.333492+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 82739200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:19.333665+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 82739200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:20.333931+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 82739200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:21.334087+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 82739200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:22.334311+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 82739200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:23.334468+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 82739200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:24.334838+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 82739200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:25.335209+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 82739200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:26.335443+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 82722816 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:27.335627+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 82722816 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:28.335802+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 82722816 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:29.336044+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 82722816 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:30.336448+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 82722816 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:31.336729+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 82722816 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330013 data_alloc: 218103808 data_used: 3432448
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:32.336910+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 82722816 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:33.337090+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 82722816 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:34.337293+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401006592 unmapped: 82706432 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1fef000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:35.337474+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 82698240 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 149.447158813s of 149.457839966s, submitted: 24
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:36.337699+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401014784 unmapped: 82698240 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:37.337981+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 82681856 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:38.338240+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 82681856 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1ff0000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:39.338429+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401031168 unmapped: 82681856 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a1ff0000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1bfef9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:40.338584+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401047552 unmapped: 82665472 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:41.338721+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401055744 unmapped: 82657280 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:42.338839+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330005 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401088512 unmapped: 82624512 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:43.338967+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401096704 unmapped: 82616320 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:44.341303+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401104896 unmapped: 82608128 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:45.341417+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401113088 unmapped: 82599936 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.797143459s of 10.023102760s, submitted: 172
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:46.341589+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401121280 unmapped: 82591744 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:47.341706+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401129472 unmapped: 82583552 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:48.341834+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401129472 unmapped: 82583552 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:49.342028+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401129472 unmapped: 82583552 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1,1])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:50.342198+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401145856 unmapped: 82567168 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:51.342310+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401154048 unmapped: 82558976 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:52.342440+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330005 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401178624 unmapped: 82534400 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:53.342659+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401203200 unmapped: 82509824 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:54.342845+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 82493440 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:55.343025+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [0,0,0,0,0,0,2])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401219584 unmapped: 82493440 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 1.625399113s of 10.143717766s, submitted: 172
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:56.343167+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401227776 unmapped: 82485248 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:57.343321+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4330005 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 82477056 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:58.343465+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401244160 unmapped: 82468864 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:30:59.343641+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:00.343804+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:01.343952+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:02.344118+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:03.344251+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:04.344412+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:05.344552+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:06.344679+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:07.344823+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:08.344950+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:09.345164+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:10.345373+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:11.345531+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:12.345761+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:13.345954+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 82460672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:14.346093+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 82452480 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:15.346246+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 82452480 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:16.346388+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 82452480 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:17.346557+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 82452480 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:18.346710+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 82452480 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:19.346836+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 82452480 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:20.347045+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 82444288 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:21.347173+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401268736 unmapped: 82444288 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:22.347330+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 82436096 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:23.347463+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 82436096 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:24.347640+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 82436096 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:25.347789+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401276928 unmapped: 82436096 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:26.347942+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 82427904 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:27.348187+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 82427904 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:28.348362+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 82427904 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:29.348501+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401285120 unmapped: 82427904 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:30.348641+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401293312 unmapped: 82419712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:31.348821+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401293312 unmapped: 82419712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:32.349017+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401293312 unmapped: 82419712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:33.349206+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401293312 unmapped: 82419712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:34.349462+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 82411520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:35.349592+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 82411520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:36.349754+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 82411520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:37.349899+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401301504 unmapped: 82411520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 ms_handle_reset con 0x5636d3a1bc00 session 0x5636d4f785a0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d1e69c00
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:38.350072+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 82395136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:39.350167+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 82395136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:40.350331+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 82395136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:41.350487+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 82395136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:42.350656+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 82395136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:43.350802+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 82395136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:44.350991+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 82395136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:45.351201+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401317888 unmapped: 82395136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:46.351369+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 82386944 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:47.351579+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 82386944 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:48.351740+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 82386944 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:49.351906+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 82386944 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:50.352097+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 82386944 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:51.352307+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 82386944 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:52.352456+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401326080 unmapped: 82386944 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:53.352635+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 82378752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:54.352807+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401342464 unmapped: 82370560 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:55.352950+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401342464 unmapped: 82370560 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:56.353083+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401342464 unmapped: 82370560 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:57.353305+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401342464 unmapped: 82370560 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:58.353451+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401342464 unmapped: 82370560 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:31:59.353640+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401342464 unmapped: 82370560 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:00.353825+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401342464 unmapped: 82370560 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:01.354012+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401342464 unmapped: 82370560 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:02.354151+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401358848 unmapped: 82354176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:03.354280+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401358848 unmapped: 82354176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:04.354444+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401358848 unmapped: 82354176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:05.354627+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401358848 unmapped: 82354176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:06.354786+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401358848 unmapped: 82354176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:07.354926+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401358848 unmapped: 82354176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:08.355204+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401358848 unmapped: 82354176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:09.355403+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401358848 unmapped: 82354176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:10.355621+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401367040 unmapped: 82345984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:11.355808+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401367040 unmapped: 82345984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:12.355976+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401367040 unmapped: 82345984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:13.356204+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401367040 unmapped: 82345984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:14.356377+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401367040 unmapped: 82345984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:15.356591+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401367040 unmapped: 82345984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:16.356834+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401367040 unmapped: 82345984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:17.357010+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401367040 unmapped: 82345984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:18.357203+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 82329600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:19.357391+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 82329600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:20.357645+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 82329600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:21.357801+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 82329600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:22.358013+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 82329600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:23.358283+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 82329600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:24.358485+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 82329600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:25.358618+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401383424 unmapped: 82329600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:26.358772+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401399808 unmapped: 82313216 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:27.358901+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401399808 unmapped: 82313216 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:28.359045+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401399808 unmapped: 82313216 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:29.359183+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401399808 unmapped: 82313216 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:30.359359+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401399808 unmapped: 82313216 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:31.359556+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401399808 unmapped: 82313216 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:32.359710+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401399808 unmapped: 82313216 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:33.359906+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401399808 unmapped: 82313216 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:34.360136+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401408000 unmapped: 82305024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:35.360272+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401408000 unmapped: 82305024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:36.360415+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401408000 unmapped: 82305024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:37.360558+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401408000 unmapped: 82305024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:38.360707+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401408000 unmapped: 82305024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:39.360828+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401408000 unmapped: 82305024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:40.360985+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401408000 unmapped: 82305024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:41.361136+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401408000 unmapped: 82305024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:42.361297+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401424384 unmapped: 82288640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:43.361470+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401424384 unmapped: 82288640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:44.361597+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401432576 unmapped: 82280448 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:45.361741+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401432576 unmapped: 82280448 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:46.361925+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401432576 unmapped: 82280448 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:47.362126+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401432576 unmapped: 82280448 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:48.362247+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401432576 unmapped: 82280448 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:49.362411+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401432576 unmapped: 82280448 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:50.362590+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401440768 unmapped: 82272256 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:51.362715+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401440768 unmapped: 82272256 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:52.362910+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401448960 unmapped: 82264064 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:53.363080+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401448960 unmapped: 82264064 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:54.363219+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401448960 unmapped: 82264064 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:55.363389+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401448960 unmapped: 82264064 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:56.363533+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401448960 unmapped: 82264064 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:57.363691+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401448960 unmapped: 82264064 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:58.363825+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401457152 unmapped: 82255872 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:32:59.364025+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401457152 unmapped: 82255872 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:00.364168+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401457152 unmapped: 82255872 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:01.364367+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401457152 unmapped: 82255872 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:02.364580+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401465344 unmapped: 82247680 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:03.364753+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401465344 unmapped: 82247680 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:04.364911+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401465344 unmapped: 82247680 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:05.365375+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401465344 unmapped: 82247680 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:06.365524+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401473536 unmapped: 82239488 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:07.365698+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401473536 unmapped: 82239488 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:08.365884+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401473536 unmapped: 82239488 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:09.366202+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401473536 unmapped: 82239488 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:10.366378+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401473536 unmapped: 82239488 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:11.366536+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401473536 unmapped: 82239488 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:12.366685+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401473536 unmapped: 82239488 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:13.366862+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401473536 unmapped: 82239488 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:14.366998+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401489920 unmapped: 82223104 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:15.367188+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401489920 unmapped: 82223104 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:16.367407+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401489920 unmapped: 82223104 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:17.367548+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401489920 unmapped: 82223104 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:18.367675+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401489920 unmapped: 82223104 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:19.367802+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401489920 unmapped: 82223104 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:20.368022+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401489920 unmapped: 82223104 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:21.368193+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401489920 unmapped: 82223104 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:22.368396+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 82198528 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:23.368569+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 82198528 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:24.368758+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 82198528 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:25.368905+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 82198528 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:26.369200+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 82198528 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:27.369373+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 82198528 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:28.369500+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 82198528 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:29.369633+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401514496 unmapped: 82198528 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:30.369826+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 82190336 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:31.370004+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 82190336 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:32.370179+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 82190336 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:33.370394+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 82190336 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:34.370593+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 82190336 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:35.370764+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 82190336 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:36.370912+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 82190336 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:37.371153+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401522688 unmapped: 82190336 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:38.371328+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401539072 unmapped: 82173952 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:39.371559+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401539072 unmapped: 82173952 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:40.371736+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401539072 unmapped: 82173952 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:41.371935+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401539072 unmapped: 82173952 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:42.372065+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401539072 unmapped: 82173952 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:43.372280+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401539072 unmapped: 82173952 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:44.372497+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401539072 unmapped: 82173952 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:45.372662+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401539072 unmapped: 82173952 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:46.372859+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401547264 unmapped: 82165760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:47.373016+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401547264 unmapped: 82165760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:48.373218+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401547264 unmapped: 82165760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:49.373384+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401547264 unmapped: 82165760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:50.373582+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401547264 unmapped: 82165760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:51.373800+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401547264 unmapped: 82165760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:52.373940+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401547264 unmapped: 82165760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:53.374152+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401547264 unmapped: 82165760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:54.374308+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401571840 unmapped: 82141184 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:55.374451+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401571840 unmapped: 82141184 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:56.374592+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401571840 unmapped: 82141184 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:57.374743+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401571840 unmapped: 82141184 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:58.374891+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401571840 unmapped: 82141184 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:33:59.375044+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401571840 unmapped: 82141184 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:00.375277+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401571840 unmapped: 82141184 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:01.375414+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401596416 unmapped: 82116608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:02.375584+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401596416 unmapped: 82116608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:03.375728+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401596416 unmapped: 82116608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:04.375871+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401596416 unmapped: 82116608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:05.376001+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401596416 unmapped: 82116608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:06.376146+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401596416 unmapped: 82116608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:07.376258+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401596416 unmapped: 82116608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:08.376393+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401596416 unmapped: 82116608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:09.376492+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401612800 unmapped: 82100224 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:10.376663+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401612800 unmapped: 82100224 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:11.376800+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401612800 unmapped: 82100224 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:12.376968+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401612800 unmapped: 82100224 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:13.377542+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401612800 unmapped: 82100224 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:14.377679+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401612800 unmapped: 82100224 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:15.377900+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401612800 unmapped: 82100224 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:16.378033+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401612800 unmapped: 82100224 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:17.378199+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 82083840 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:18.378824+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 82083840 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:19.378975+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 82083840 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:20.379177+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 82083840 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:21.379389+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 82083840 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:22.379504+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 82083840 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:23.379730+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 82083840 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:24.379881+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401629184 unmapped: 82083840 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:25.380058+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 82075648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:26.380240+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 82075648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:27.380423+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 82075648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:28.380615+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 82075648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:29.380747+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 82075648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:30.380959+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 82075648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:31.381163+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 82075648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:32.381307+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 82075648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:33.381514+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401653760 unmapped: 82059264 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:34.381688+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401653760 unmapped: 82059264 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:35.381872+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401661952 unmapped: 82051072 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:36.382060+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401661952 unmapped: 82051072 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:37.382241+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401661952 unmapped: 82051072 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:38.382425+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401661952 unmapped: 82051072 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:39.382575+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401661952 unmapped: 82051072 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:40.382745+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401661952 unmapped: 82051072 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:41.382881+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401670144 unmapped: 82042880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:42.383046+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401670144 unmapped: 82042880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:43.383195+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401670144 unmapped: 82042880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:44.383332+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401670144 unmapped: 82042880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:45.383537+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401670144 unmapped: 82042880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:46.383752+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401670144 unmapped: 82042880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:47.383924+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401670144 unmapped: 82042880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:48.384075+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401670144 unmapped: 82042880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:49.384226+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401678336 unmapped: 82034688 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:50.384429+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401686528 unmapped: 82026496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:51.384549+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401686528 unmapped: 82026496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:52.384712+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401686528 unmapped: 82026496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:53.384839+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401686528 unmapped: 82026496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:54.384963+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401686528 unmapped: 82026496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:55.385083+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401686528 unmapped: 82026496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:56.385256+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:57.385386+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401686528 unmapped: 82026496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:58.385528+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401702912 unmapped: 82010112 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:34:59.385678+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401702912 unmapped: 82010112 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:00.385865+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401702912 unmapped: 82010112 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:01.385990+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401711104 unmapped: 82001920 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:02.386165+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401711104 unmapped: 82001920 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:03.386299+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401711104 unmapped: 82001920 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:04.386448+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401711104 unmapped: 82001920 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:05.386565+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401711104 unmapped: 82001920 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:06.386742+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401719296 unmapped: 81993728 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:07.386872+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401727488 unmapped: 81985536 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:08.387058+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401727488 unmapped: 81985536 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:09.387243+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401727488 unmapped: 81985536 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:10.387465+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401727488 unmapped: 81985536 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:11.387720+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401727488 unmapped: 81985536 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:12.387981+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401727488 unmapped: 81985536 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:13.388152+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401727488 unmapped: 81985536 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:14.388332+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401735680 unmapped: 81977344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:15.388472+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401735680 unmapped: 81977344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:16.388678+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401735680 unmapped: 81977344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:17.388888+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401735680 unmapped: 81977344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:18.389024+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401735680 unmapped: 81977344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:19.389171+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401735680 unmapped: 81977344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:20.389398+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401735680 unmapped: 81977344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:21.389580+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401735680 unmapped: 81977344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:22.389824+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401752064 unmapped: 81960960 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:23.390026+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401752064 unmapped: 81960960 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:24.390160+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401752064 unmapped: 81960960 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:25.390348+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401752064 unmapped: 81960960 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:26.390568+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401752064 unmapped: 81960960 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:27.390754+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401752064 unmapped: 81960960 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:28.390899+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401752064 unmapped: 81960960 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:29.391038+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401752064 unmapped: 81960960 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:30.391250+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401768448 unmapped: 81944576 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:31.391464+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401768448 unmapped: 81944576 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:32.391607+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401768448 unmapped: 81944576 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:33.391787+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401768448 unmapped: 81944576 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:34.391961+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401768448 unmapped: 81944576 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:35.392175+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401768448 unmapped: 81944576 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:36.392372+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401768448 unmapped: 81944576 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:37.392534+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401768448 unmapped: 81944576 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:38.392660+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401776640 unmapped: 81936384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:39.392802+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401776640 unmapped: 81936384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:40.392968+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401776640 unmapped: 81936384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:41.393164+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401776640 unmapped: 81936384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:42.393358+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401776640 unmapped: 81936384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:43.393535+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401776640 unmapped: 81936384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:44.393693+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401776640 unmapped: 81936384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:45.393828+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401776640 unmapped: 81936384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:46.393992+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401801216 unmapped: 81911808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:47.394188+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401801216 unmapped: 81911808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:48.394563+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401801216 unmapped: 81911808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:49.394754+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401801216 unmapped: 81911808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:50.394950+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401801216 unmapped: 81911808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:51.395077+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401801216 unmapped: 81911808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:52.395173+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401801216 unmapped: 81911808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:53.395327+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401801216 unmapped: 81911808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:54.395454+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401809408 unmapped: 81903616 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:55.395621+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401809408 unmapped: 81903616 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:56.395758+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401809408 unmapped: 81903616 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:57.395893+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401809408 unmapped: 81903616 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:58.396028+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401809408 unmapped: 81903616 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:35:59.396215+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401809408 unmapped: 81903616 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:00.396752+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401809408 unmapped: 81903616 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:01.396893+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401809408 unmapped: 81903616 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:02.397059+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401833984 unmapped: 81879040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:03.397192+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401833984 unmapped: 81879040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:04.397319+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401833984 unmapped: 81879040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:05.397442+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401833984 unmapped: 81879040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:06.397585+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401833984 unmapped: 81879040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:07.397753+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401833984 unmapped: 81879040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:08.397902+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401833984 unmapped: 81879040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:09.398037+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401833984 unmapped: 81879040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:10.398235+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401850368 unmapped: 81862656 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:11.398449+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401850368 unmapped: 81862656 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:12.398607+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401850368 unmapped: 81862656 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:13.398752+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47992 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401850368 unmapped: 81862656 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:14.398910+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401850368 unmapped: 81862656 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:15.399039+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401850368 unmapped: 81862656 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:16.399181+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401850368 unmapped: 81862656 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:17.399335+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401850368 unmapped: 81862656 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:18.399512+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 81846272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:19.399661+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 81846272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:20.399850+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 81846272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:21.400059+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 81846272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:22.400179+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 81846272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:23.400656+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 81846272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:24.400794+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 81846272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:25.400925+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401866752 unmapped: 81846272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:26.401089+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401883136 unmapped: 81829888 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:27.401336+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401883136 unmapped: 81829888 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:28.401495+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401883136 unmapped: 81829888 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:29.401626+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401883136 unmapped: 81829888 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:30.401793+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401883136 unmapped: 81829888 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:31.401931+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401883136 unmapped: 81829888 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:32.402059+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401883136 unmapped: 81829888 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:33.402241+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401883136 unmapped: 81829888 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:34.402418+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 81813504 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:35.402578+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 81813504 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:36.402777+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 81813504 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:37.402957+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 81813504 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:38.403130+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 81813504 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:39.403328+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 81813504 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:40.403496+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 81813504 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:41.403647+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401899520 unmapped: 81813504 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:42.403792+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 81797120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:43.403941+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 81797120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:44.404083+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 81797120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:45.404301+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 81797120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:46.404562+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 81797120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:47.405164+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 81797120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:48.405280+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 81797120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:49.405463+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401915904 unmapped: 81797120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:50.405643+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401932288 unmapped: 81780736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:51.405782+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401932288 unmapped: 81780736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:52.405897+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401932288 unmapped: 81780736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:53.406034+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401932288 unmapped: 81780736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:54.406209+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401932288 unmapped: 81780736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:55.406361+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401932288 unmapped: 81780736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:56.406487+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401932288 unmapped: 81780736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:57.406625+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401932288 unmapped: 81780736 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:58.406756+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 81764352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:36:59.406902+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 81764352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:00.407051+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 81764352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:01.407175+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 81764352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:02.407302+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 81764352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:03.407432+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 81764352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:04.407599+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 81764352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:05.407725+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401948672 unmapped: 81764352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:06.407847+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 81747968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:07.407954+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 81747968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:08.408321+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 81747968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:09.408648+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 81747968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:10.408760+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 81747968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:11.408879+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 81747968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:12.408997+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 81747968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:13.409147+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 81747968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:14.409292+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 81739776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:15.409457+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 81739776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:16.409614+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 81739776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:17.409742+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 81739776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:18.409880+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 81739776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:19.409998+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 81739776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:20.410191+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 81739776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:21.410384+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 81739776 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:22.410540+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401989632 unmapped: 81723392 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:23.410660+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401989632 unmapped: 81723392 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:24.410788+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401989632 unmapped: 81723392 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:25.410912+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401989632 unmapped: 81723392 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:26.411032+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401997824 unmapped: 81715200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:27.411172+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401997824 unmapped: 81715200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:28.411319+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401997824 unmapped: 81715200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:29.411462+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 401997824 unmapped: 81715200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:30.411620+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402006016 unmapped: 81707008 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:31.411754+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402006016 unmapped: 81707008 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:32.411904+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402006016 unmapped: 81707008 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:33.412068+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402006016 unmapped: 81707008 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:34.412211+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402006016 unmapped: 81707008 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:35.412341+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402006016 unmapped: 81707008 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:36.412486+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402006016 unmapped: 81707008 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:37.412623+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402006016 unmapped: 81707008 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:38.412817+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402022400 unmapped: 81690624 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:39.412956+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402022400 unmapped: 81690624 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:40.413173+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402030592 unmapped: 81682432 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:41.413309+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402038784 unmapped: 81674240 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:42.413495+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402038784 unmapped: 81674240 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:43.413662+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 63K writes, 232K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
                                           Cumulative WAL: 63K writes, 24K syncs, 2.59 writes per sync, written: 0.22 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1120 writes, 2689 keys, 1120 commit groups, 1.0 writes per commit group, ingest: 1.64 MB, 0.00 MB/s
                                           Interval WAL: 1120 writes, 524 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402038784 unmapped: 81674240 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:44.413784+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402038784 unmapped: 81674240 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:45.413917+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402046976 unmapped: 81666048 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:46.414045+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402055168 unmapped: 81657856 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:47.414178+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402055168 unmapped: 81657856 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:48.414301+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402055168 unmapped: 81657856 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:49.414407+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402055168 unmapped: 81657856 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:50.414566+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402055168 unmapped: 81657856 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:51.414688+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402055168 unmapped: 81657856 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:52.414849+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402071552 unmapped: 81641472 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:53.414991+0000)
Dec 06 08:46:59 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec 06 08:46:59 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220555448' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402071552 unmapped: 81641472 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:54.415165+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402079744 unmapped: 81633280 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:55.415308+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402079744 unmapped: 81633280 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:56.415435+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402079744 unmapped: 81633280 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:57.415599+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402079744 unmapped: 81633280 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:58.415789+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402079744 unmapped: 81633280 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:37:59.415957+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402079744 unmapped: 81633280 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:00.416256+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402079744 unmapped: 81633280 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:01.416408+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402079744 unmapped: 81633280 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:02.416562+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402087936 unmapped: 81625088 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:03.416712+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402087936 unmapped: 81625088 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:04.416874+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402087936 unmapped: 81625088 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:05.417030+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402087936 unmapped: 81625088 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:06.417202+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402087936 unmapped: 81625088 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:07.417332+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402087936 unmapped: 81625088 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:08.417469+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402096128 unmapped: 81616896 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:09.417617+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402096128 unmapped: 81616896 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:10.417792+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402104320 unmapped: 81608704 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:11.417930+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402104320 unmapped: 81608704 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:12.418173+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402104320 unmapped: 81608704 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:13.418319+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402104320 unmapped: 81608704 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:14.418492+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402104320 unmapped: 81608704 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:15.418691+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402104320 unmapped: 81608704 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:16.418912+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402104320 unmapped: 81608704 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:17.419160+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402104320 unmapped: 81608704 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:18.419290+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 81584128 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:19.419460+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 81584128 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:20.419668+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 81584128 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:21.419821+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 81584128 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:22.419972+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 81584128 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:23.420171+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 81584128 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:24.420357+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 81584128 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:25.420536+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402128896 unmapped: 81584128 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:26.420773+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402137088 unmapped: 81575936 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:27.420923+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402137088 unmapped: 81575936 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:28.421057+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402137088 unmapped: 81575936 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:29.421185+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402137088 unmapped: 81575936 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:30.421409+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402145280 unmapped: 81567744 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:31.421580+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402145280 unmapped: 81567744 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:32.421757+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402145280 unmapped: 81567744 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:33.421906+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402145280 unmapped: 81567744 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:34.422069+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402161664 unmapped: 81551360 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:35.422243+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402161664 unmapped: 81551360 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:36.422404+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402161664 unmapped: 81551360 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:37.422615+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402161664 unmapped: 81551360 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:38.422792+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402161664 unmapped: 81551360 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:39.422954+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402161664 unmapped: 81551360 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:40.423178+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402161664 unmapped: 81551360 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:41.423280+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402161664 unmapped: 81551360 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:42.423429+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402178048 unmapped: 81534976 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:43.423597+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402186240 unmapped: 81526784 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:44.423774+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402186240 unmapped: 81526784 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:45.423921+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402186240 unmapped: 81526784 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:46.424040+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402186240 unmapped: 81526784 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:47.424183+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402186240 unmapped: 81526784 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:48.424335+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402186240 unmapped: 81526784 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:49.424511+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402186240 unmapped: 81526784 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:50.424694+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402202624 unmapped: 81510400 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:51.424853+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402202624 unmapped: 81510400 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:52.424986+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402202624 unmapped: 81510400 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:53.425186+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402202624 unmapped: 81510400 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:54.425392+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402202624 unmapped: 81510400 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:55.425589+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402202624 unmapped: 81510400 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:56.425766+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402202624 unmapped: 81510400 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:57.425940+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402202624 unmapped: 81510400 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:58.426161+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402227200 unmapped: 81485824 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:38:59.426297+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402227200 unmapped: 81485824 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:00.426496+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402227200 unmapped: 81485824 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:01.426673+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402227200 unmapped: 81485824 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:02.426808+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402227200 unmapped: 81485824 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:03.426956+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402227200 unmapped: 81485824 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:04.427134+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402227200 unmapped: 81485824 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:05.427259+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402243584 unmapped: 81469440 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:06.427408+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402243584 unmapped: 81469440 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:07.427569+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402243584 unmapped: 81469440 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:08.427694+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402243584 unmapped: 81469440 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:09.427827+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402243584 unmapped: 81469440 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:10.428016+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402243584 unmapped: 81469440 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:11.428159+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402243584 unmapped: 81469440 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:12.428299+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402243584 unmapped: 81469440 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:13.428454+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402251776 unmapped: 81461248 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:14.428591+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402251776 unmapped: 81461248 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:15.428721+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402251776 unmapped: 81461248 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:16.428838+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402251776 unmapped: 81461248 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:17.429006+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402251776 unmapped: 81461248 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:18.429156+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402251776 unmapped: 81461248 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:19.429459+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402251776 unmapped: 81461248 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:20.429677+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402251776 unmapped: 81461248 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:21.429794+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402268160 unmapped: 81444864 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:22.429982+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 81436672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:23.430119+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 81436672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:24.430257+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 81436672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:25.430424+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 81436672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:26.430668+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 81436672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:27.430813+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 81436672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:28.431001+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402276352 unmapped: 81436672 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:29.431177+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 81412096 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:30.431404+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 81412096 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:31.431535+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 81412096 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:32.431681+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 81412096 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:33.431811+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 81412096 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:34.431950+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 81412096 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:35.432145+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 81412096 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:36.432294+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402300928 unmapped: 81412096 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:37.432448+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 81395712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:38.432644+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 81395712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:39.432939+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 81395712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:40.433228+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 81395712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:41.433377+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 81395712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:42.433564+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 81395712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:43.433711+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 81395712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:44.433872+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 81395712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:45.434015+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 81395712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:46.434155+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 81395712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:47.434301+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 81395712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:48.434420+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402317312 unmapped: 81395712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:49.434570+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402325504 unmapped: 81387520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:50.434737+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402325504 unmapped: 81387520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:51.434897+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402325504 unmapped: 81387520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:52.435032+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402325504 unmapped: 81387520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:53.435193+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402341888 unmapped: 81371136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:54.435411+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402341888 unmapped: 81371136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:55.435615+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402341888 unmapped: 81371136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:56.435822+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:57.435974+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402341888 unmapped: 81371136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:58.436193+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402341888 unmapped: 81371136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:39:59.436378+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402341888 unmapped: 81371136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:00.436597+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402341888 unmapped: 81371136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:01.436773+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402341888 unmapped: 81371136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:02.436954+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402358272 unmapped: 81354752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:03.437141+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402358272 unmapped: 81354752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:04.437282+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402358272 unmapped: 81354752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:05.437450+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402358272 unmapped: 81354752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:06.437649+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402358272 unmapped: 81354752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:07.437800+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402358272 unmapped: 81354752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:08.438219+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402358272 unmapped: 81354752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:09.438422+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402358272 unmapped: 81354752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:10.438679+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402374656 unmapped: 81338368 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:11.438804+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402374656 unmapped: 81338368 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:12.438970+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402382848 unmapped: 81330176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:13.439126+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402382848 unmapped: 81330176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:14.439299+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402382848 unmapped: 81330176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:15.439430+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402391040 unmapped: 81321984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:16.439569+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402391040 unmapped: 81321984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:17.439704+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402391040 unmapped: 81321984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:18.439866+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 81313792 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:19.440018+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 81313792 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:20.440216+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 81313792 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:21.440380+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 81313792 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:22.440541+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 81313792 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:23.440700+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 81313792 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:24.440854+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402407424 unmapped: 81305600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:25.441061+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402407424 unmapped: 81305600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:26.441321+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402432000 unmapped: 81281024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:27.441527+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402432000 unmapped: 81281024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:28.441672+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402432000 unmapped: 81281024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:29.441835+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402432000 unmapped: 81281024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:30.442048+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402432000 unmapped: 81281024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:31.442193+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402432000 unmapped: 81281024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:32.442342+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402432000 unmapped: 81281024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:33.442946+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402432000 unmapped: 81281024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:34.443092+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402448384 unmapped: 81264640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:35.443311+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402448384 unmapped: 81264640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:36.443429+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402448384 unmapped: 81264640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:37.443550+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402448384 unmapped: 81264640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:38.443696+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402448384 unmapped: 81264640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:39.443828+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402448384 unmapped: 81264640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 581.725341797s of 583.914367676s, submitted: 40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:40.443985+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402448384 unmapped: 81264640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:41.444123+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402456576 unmapped: 81256448 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:42.444249+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402464768 unmapped: 81248256 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:43.444402+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402472960 unmapped: 81240064 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:44.444546+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402472960 unmapped: 81240064 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:45.444685+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402497536 unmapped: 81215488 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:46.445083+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402505728 unmapped: 81207296 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:47.445320+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402522112 unmapped: 81190912 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:48.445520+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402571264 unmapped: 81141760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:49.445637+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 402989056 unmapped: 80723968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.159659386s of 10.007906914s, submitted: 296
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:50.445777+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403308544 unmapped: 80404480 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:51.445943+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403308544 unmapped: 80404480 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:52.446066+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403308544 unmapped: 80404480 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:53.446235+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403308544 unmapped: 80404480 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:54.446381+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:55.446530+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:56.446715+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:57.446921+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:58.447092+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:40:59.447298+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:00.447532+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:01.447683+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:02.447854+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:03.448015+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:04.448180+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:05.448334+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:06.448432+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:07.448559+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:08.448682+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:09.448859+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:10.449006+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:11.449384+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:12.449580+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:13.449731+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403341312 unmapped: 80371712 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:14.449865+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403349504 unmapped: 80363520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:15.450008+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403349504 unmapped: 80363520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:16.450177+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403349504 unmapped: 80363520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:17.450358+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403349504 unmapped: 80363520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:18.450521+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403349504 unmapped: 80363520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:19.450640+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403349504 unmapped: 80363520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:20.450824+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403349504 unmapped: 80363520 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:21.451010+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403357696 unmapped: 80355328 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:22.451190+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403365888 unmapped: 80347136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:23.451356+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403365888 unmapped: 80347136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:24.451515+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403365888 unmapped: 80347136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:25.451680+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403365888 unmapped: 80347136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:26.451806+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403365888 unmapped: 80347136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:27.451932+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403365888 unmapped: 80347136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:28.452080+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403365888 unmapped: 80347136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:29.452268+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403365888 unmapped: 80347136 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:30.452457+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403374080 unmapped: 80338944 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:31.452601+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403374080 unmapped: 80338944 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:32.452734+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403374080 unmapped: 80338944 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:33.452871+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403374080 unmapped: 80338944 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:34.453040+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403382272 unmapped: 80330752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:35.453191+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403382272 unmapped: 80330752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:36.453349+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403382272 unmapped: 80330752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:37.453580+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403382272 unmapped: 80330752 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:38.453762+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403398656 unmapped: 80314368 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:39.453911+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403398656 unmapped: 80314368 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:40.454076+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403398656 unmapped: 80314368 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:41.454228+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403398656 unmapped: 80314368 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:42.454377+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403398656 unmapped: 80314368 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:43.454530+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403398656 unmapped: 80314368 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:44.454684+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403398656 unmapped: 80314368 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:45.454848+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403398656 unmapped: 80314368 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:46.455022+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403406848 unmapped: 80306176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:47.455200+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403406848 unmapped: 80306176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:48.455326+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403406848 unmapped: 80306176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47024 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:49.455472+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403406848 unmapped: 80306176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:50.455685+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403406848 unmapped: 80306176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:51.455979+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403406848 unmapped: 80306176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:52.456128+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403406848 unmapped: 80306176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:53.456292+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403406848 unmapped: 80306176 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:54.456419+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403415040 unmapped: 80297984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:55.456635+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403415040 unmapped: 80297984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:56.456794+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403415040 unmapped: 80297984 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:57.457009+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403431424 unmapped: 80281600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:58.457197+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403431424 unmapped: 80281600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:41:59.457366+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403431424 unmapped: 80281600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:00.457540+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403431424 unmapped: 80281600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:01.457731+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403431424 unmapped: 80281600 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:02.457880+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403439616 unmapped: 80273408 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:03.458011+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403439616 unmapped: 80273408 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:04.458157+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403439616 unmapped: 80273408 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:05.458330+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403439616 unmapped: 80273408 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:06.458515+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403439616 unmapped: 80273408 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:07.458661+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403439616 unmapped: 80273408 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:08.458811+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403447808 unmapped: 80265216 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:09.458949+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403447808 unmapped: 80265216 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:10.459194+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403456000 unmapped: 80257024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:11.459386+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403456000 unmapped: 80257024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:12.459548+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403456000 unmapped: 80257024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:13.459813+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403456000 unmapped: 80257024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:14.459966+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403456000 unmapped: 80257024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:15.460170+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403456000 unmapped: 80257024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:16.460343+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403456000 unmapped: 80257024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:17.460482+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403456000 unmapped: 80257024 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:18.460628+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403464192 unmapped: 80248832 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:19.460774+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403464192 unmapped: 80248832 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:20.460998+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403464192 unmapped: 80248832 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:21.461153+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403464192 unmapped: 80248832 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:22.461298+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403464192 unmapped: 80248832 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:23.461493+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403464192 unmapped: 80248832 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:24.461675+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403464192 unmapped: 80248832 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:25.461808+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403464192 unmapped: 80248832 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:26.461949+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403472384 unmapped: 80240640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:27.462157+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403472384 unmapped: 80240640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:28.462322+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403472384 unmapped: 80240640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:29.462470+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403472384 unmapped: 80240640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:30.462639+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403472384 unmapped: 80240640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:31.462770+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403472384 unmapped: 80240640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:32.462972+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403472384 unmapped: 80240640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:33.463157+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403472384 unmapped: 80240640 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:34.463331+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403488768 unmapped: 80224256 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:35.463544+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403488768 unmapped: 80224256 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:36.463712+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403488768 unmapped: 80224256 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:37.463893+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403488768 unmapped: 80224256 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:38.464068+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403488768 unmapped: 80224256 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:39.464235+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403488768 unmapped: 80224256 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:40.464421+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403488768 unmapped: 80224256 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:41.464549+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403488768 unmapped: 80224256 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:42.464731+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403496960 unmapped: 80216064 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:43.464864+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403496960 unmapped: 80216064 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:44.465074+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403496960 unmapped: 80216064 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets getting new tickets!
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:45.465460+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _finish_auth 0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:45.466236+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403546112 unmapped: 80166912 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:46.465603+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403546112 unmapped: 80166912 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:47.465726+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403546112 unmapped: 80166912 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:48.465910+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403546112 unmapped: 80166912 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:49.466040+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403546112 unmapped: 80166912 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:50.466216+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403554304 unmapped: 80158720 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:51.466341+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403554304 unmapped: 80158720 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 ms_handle_reset con 0x5636d631b400 session 0x5636d4f963c0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d4f3c000
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 ms_handle_reset con 0x5636d28e6c00 session 0x5636d2028960
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d631b400
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 ms_handle_reset con 0x5636d6c93000 session 0x5636d4f97a40
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: handle_auth_request added challenge on 0x5636d2e4a800
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:52.466526+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403562496 unmapped: 80150528 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:53.466705+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403562496 unmapped: 80150528 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:54.466855+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403562496 unmapped: 80150528 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:55.467048+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403562496 unmapped: 80150528 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:56.467196+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403570688 unmapped: 80142336 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:57.467382+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403570688 unmapped: 80142336 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:58.467508+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403578880 unmapped: 80134144 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:42:59.467637+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403578880 unmapped: 80134144 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:00.467799+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403578880 unmapped: 80134144 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:01.467953+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403578880 unmapped: 80134144 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:02.468142+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403578880 unmapped: 80134144 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:03.468313+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403578880 unmapped: 80134144 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:04.468464+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403578880 unmapped: 80134144 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:05.468622+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403578880 unmapped: 80134144 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:06.468769+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403595264 unmapped: 80117760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:07.468988+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403595264 unmapped: 80117760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:08.469160+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403595264 unmapped: 80117760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:09.469299+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403595264 unmapped: 80117760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:10.469489+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403595264 unmapped: 80117760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:11.469623+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403595264 unmapped: 80117760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:12.469803+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403595264 unmapped: 80117760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:13.469959+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403595264 unmapped: 80117760 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:14.470124+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403603456 unmapped: 80109568 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:15.470312+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403603456 unmapped: 80109568 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:16.470475+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403603456 unmapped: 80109568 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:17.470630+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403603456 unmapped: 80109568 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:18.470786+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403603456 unmapped: 80109568 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:19.470972+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403603456 unmapped: 80109568 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:20.471167+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403603456 unmapped: 80109568 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:21.471335+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403603456 unmapped: 80109568 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:22.471470+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403628032 unmapped: 80084992 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:23.471752+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403628032 unmapped: 80084992 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:24.472045+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403628032 unmapped: 80084992 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:25.472209+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403628032 unmapped: 80084992 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:26.472367+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403636224 unmapped: 80076800 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:27.472648+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403636224 unmapped: 80076800 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:28.472802+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403636224 unmapped: 80076800 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:29.473156+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403636224 unmapped: 80076800 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:30.473491+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403644416 unmapped: 80068608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:31.473696+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403644416 unmapped: 80068608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:32.473940+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403644416 unmapped: 80068608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:33.474215+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403644416 unmapped: 80068608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:34.474585+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403644416 unmapped: 80068608 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:35.474867+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403652608 unmapped: 80060416 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:36.475027+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403652608 unmapped: 80060416 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:37.475280+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403652608 unmapped: 80060416 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:38.475540+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403660800 unmapped: 80052224 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:39.475736+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403660800 unmapped: 80052224 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:40.475989+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403668992 unmapped: 80044032 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:41.476248+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403668992 unmapped: 80044032 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:42.476465+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403668992 unmapped: 80044032 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:43.476629+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403668992 unmapped: 80044032 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:44.476867+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403668992 unmapped: 80044032 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:45.477159+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403668992 unmapped: 80044032 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:46.477346+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 80027648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:47.477585+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 80027648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:48.477756+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 80027648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:49.477910+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 80027648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:50.478190+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 80027648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:51.478353+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 80027648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:52.478598+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 80027648 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:53.478736+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403701760 unmapped: 80011264 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:54.478941+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403718144 unmapped: 79994880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:55.479209+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403718144 unmapped: 79994880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:56.479386+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403718144 unmapped: 79994880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:57.479558+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403718144 unmapped: 79994880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:58.479685+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403718144 unmapped: 79994880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:43:59.479836+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403718144 unmapped: 79994880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:00.480089+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403718144 unmapped: 79994880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:01.480307+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403718144 unmapped: 79994880 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:02.480455+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403734528 unmapped: 79978496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:03.480655+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403734528 unmapped: 79978496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:04.480796+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403734528 unmapped: 79978496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:05.480934+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403734528 unmapped: 79978496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:06.481071+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403734528 unmapped: 79978496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:07.481175+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403734528 unmapped: 79978496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:08.481313+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403734528 unmapped: 79978496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:09.481486+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403734528 unmapped: 79978496 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:10.481654+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403750912 unmapped: 79962112 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:11.481799+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403750912 unmapped: 79962112 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:12.481965+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403750912 unmapped: 79962112 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:13.482238+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403750912 unmapped: 79962112 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:14.482434+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403750912 unmapped: 79962112 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:15.482555+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403750912 unmapped: 79962112 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:16.482744+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403750912 unmapped: 79962112 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:17.482912+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403759104 unmapped: 79953920 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:18.483064+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403767296 unmapped: 79945728 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:19.483206+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403767296 unmapped: 79945728 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:20.483471+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403767296 unmapped: 79945728 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:21.483621+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403767296 unmapped: 79945728 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:22.483754+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403767296 unmapped: 79945728 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:23.483907+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403767296 unmapped: 79945728 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:24.484075+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403767296 unmapped: 79945728 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:25.484231+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403767296 unmapped: 79945728 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:26.484373+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 79929344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:27.484542+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 79929344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:28.484712+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 79929344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:29.484877+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 79929344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:30.485044+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 79929344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:31.485195+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 79929344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:32.485327+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 79929344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:33.485480+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 79929344 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:34.485710+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403791872 unmapped: 79921152 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:35.485901+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403800064 unmapped: 79912960 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:36.485990+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403800064 unmapped: 79912960 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:37.486168+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403800064 unmapped: 79912960 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:38.486304+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403800064 unmapped: 79912960 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:39.486442+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403808256 unmapped: 79904768 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:40.486594+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403808256 unmapped: 79904768 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:41.486748+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403808256 unmapped: 79904768 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:42.486908+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403824640 unmapped: 79888384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:43.487048+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403824640 unmapped: 79888384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:44.487200+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403824640 unmapped: 79888384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:45.487341+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403824640 unmapped: 79888384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:46.487452+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403824640 unmapped: 79888384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:47.487547+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403824640 unmapped: 79888384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:48.487661+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403824640 unmapped: 79888384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:49.487789+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403824640 unmapped: 79888384 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:50.487958+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403832832 unmapped: 79880192 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:51.488168+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403832832 unmapped: 79880192 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:52.488291+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403832832 unmapped: 79880192 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:53.488394+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:54.488525+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403849216 unmapped: 79863808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:55.488674+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403849216 unmapped: 79863808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:56.488830+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403849216 unmapped: 79863808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:57.488947+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403849216 unmapped: 79863808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:58.489152+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403849216 unmapped: 79863808 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:44:59.489313+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403865600 unmapped: 79847424 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:00.489491+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403873792 unmapped: 79839232 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:01.489649+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:02.489760+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:03.489912+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:04.490031+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:05.490192+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:06.490376+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:07.490482+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:08.490645+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:09.490780+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:10.490930+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:11.491097+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:12.491290+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:13.491453+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 79831040 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:14.491590+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403890176 unmapped: 79822848 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:15.491742+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403906560 unmapped: 79806464 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:16.491870+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403906560 unmapped: 79806464 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:17.492025+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403906560 unmapped: 79806464 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:18.492165+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403906560 unmapped: 79806464 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:19.492315+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403914752 unmapped: 79798272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:20.492507+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403914752 unmapped: 79798272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:21.492618+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403914752 unmapped: 79798272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:22.492758+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403914752 unmapped: 79798272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:23.492934+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403914752 unmapped: 79798272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:24.493078+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403914752 unmapped: 79798272 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:25.493251+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403922944 unmapped: 79790080 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:26.493387+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403922944 unmapped: 79790080 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:27.493535+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403922944 unmapped: 79790080 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:28.493678+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403922944 unmapped: 79790080 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:29.493846+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403922944 unmapped: 79790080 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:30.494036+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403922944 unmapped: 79790080 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:31.494211+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403939328 unmapped: 79773696 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:32.494351+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403939328 unmapped: 79773696 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:33.494468+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403939328 unmapped: 79773696 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:34.494556+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403939328 unmapped: 79773696 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:35.494683+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403939328 unmapped: 79773696 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:36.494852+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403939328 unmapped: 79773696 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:37.495022+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403939328 unmapped: 79773696 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:38.495155+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403939328 unmapped: 79773696 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:39.495301+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403955712 unmapped: 79757312 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:40.495514+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403955712 unmapped: 79757312 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:41.495650+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403955712 unmapped: 79757312 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:42.495823+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403955712 unmapped: 79757312 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:43.496022+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403955712 unmapped: 79757312 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:44.496146+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403955712 unmapped: 79757312 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:45.496222+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403955712 unmapped: 79757312 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:46.496432+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403955712 unmapped: 79757312 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:47.496564+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403963904 unmapped: 79749120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:48.496705+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403963904 unmapped: 79749120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:49.496832+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403963904 unmapped: 79749120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:50.497004+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403963904 unmapped: 79749120 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:51.497145+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403972096 unmapped: 79740928 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:52.497239+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403972096 unmapped: 79740928 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:53.497381+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403972096 unmapped: 79740928 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:54.497478+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403972096 unmapped: 79740928 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:55.497595+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 79724544 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:56.497733+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 79724544 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:57.497898+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403988480 unmapped: 79724544 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:58.498025+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 79716352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:45:59.498196+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 79716352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:00.498373+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 79716352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:01.498549+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 79716352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:02.498680+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403996672 unmapped: 79716352 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:03.498835+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404004864 unmapped: 79708160 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:04.498973+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404004864 unmapped: 79708160 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:05.499132+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 79699968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:06.499312+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 79699968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:07.499489+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 79699968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:08.499614+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 79699968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:09.499787+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 79699968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:10.499975+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404013056 unmapped: 79699968 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:11.500176+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 79683584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:12.500318+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 79683584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:13.500449+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 79683584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:14.500568+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 79683584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:15.500696+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 79683584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:16.500834+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404029440 unmapped: 79683584 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:17.501060+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404037632 unmapped: 79675392 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:18.501279+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404045824 unmapped: 79667200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:19.501429+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404045824 unmapped: 79667200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:20.501611+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404045824 unmapped: 79667200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:21.501779+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404045824 unmapped: 79667200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:22.501916+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404045824 unmapped: 79667200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:23.502043+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404045824 unmapped: 79667200 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:24.502185+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 79659008 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:25.502306+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 06 08:46:59 compute-0 ceph-osd[84884]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 06 08:46:59 compute-0 ceph-osd[84884]: bluestore.MempoolThread(0x5636d06cbb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329933 data_alloc: 218103808 data_used: 3452928
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 404054016 unmapped: 79659008 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:26.502428+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'config diff' '{prefix=config diff}'
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'config show' '{prefix=config show}'
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403947520 unmapped: 79765504 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'counter dump' '{prefix=counter dump}'
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'counter schema' '{prefix=counter schema}'
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:27.502623+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403791872 unmapped: 79921152 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: tick
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_tickets
Dec 06 08:46:59 compute-0 ceph-osd[84884]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T08:46:28.502756+0000)
Dec 06 08:46:59 compute-0 ceph-osd[84884]: prioritycache tune_memory target: 4294967296 mapped: 403939328 unmapped: 79773696 heap: 483713024 old mem: 2845415832 new mem: 2845415832
Dec 06 08:46:59 compute-0 ceph-osd[84884]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a3030000/0x0/0x1bfc00000, data 0x19e6eb1/0x1c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1afaf9c6), peers [1,2] op hist [])
Dec 06 08:46:59 compute-0 ceph-osd[84884]: do_command 'log dump' '{prefix=log dump}'
Dec 06 08:46:59 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:46:59 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:46:59 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:46:59.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:46:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38808 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:46:59 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48007 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47039 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec 06 08:47:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1339204430' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:47:00 compute-0 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.46985 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.47938 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.47950 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: pgmap v4651: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.38775 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.47000 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/951096703' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3645184560' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3234307305' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2843281197' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1574297695' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2220555448' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1819590850' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3285086935' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec 06 08:47:00 compute-0 nova_compute[251992]: 2025-12-06 08:47:00.158 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:47:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38820 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48013 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47051 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec 06 08:47:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/793007941' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4652: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48028 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38835 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:00 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec 06 08:47:00 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3099624559' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:47:00 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:00 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:00 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:00.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38859 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:01 compute-0 crontab[448956]: (root) LIST (root)
Dec 06 08:47:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47084 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.38790 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.47012 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.38796 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.47992 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.47024 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.38808 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.48007 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.47039 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1339204430' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1027309975' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.38820 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.48013 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.47051 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3703404667' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/793007941' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: pgmap v4652: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.48028 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.38835 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1156329899' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3099624559' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: from='client.38859 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47090 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:47:01 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:47:01.140+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:47:01 compute-0 nova_compute[251992]: 2025-12-06 08:47:01.273 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:47:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38874 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48061 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:01 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec 06 08:47:01 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/99096409' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:47:01 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:01 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:01 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:01.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:01 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48073 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec 06 08:47:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935464987' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48094 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.38901 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:47:02 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:47:02.252+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.47084 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3794276524' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.47090 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2117309582' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.38874 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.48061 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2157011927' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1542503518' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2316704263' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/99096409' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.48073 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2007981825' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1940468244' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2097908437' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:47:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec 06 08:47:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3673175152' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48112 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4653: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec 06 08:47:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2803842601' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:47:02 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec 06 08:47:02 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2845692263' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:47:02 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:02 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:02 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:02.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec 06 08:47:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048246434' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2935464987' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.48094 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.38901 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/4158636158' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3673175152' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.48112 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: pgmap v4653: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2840103552' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2803842601' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3017613200' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2303681952' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2845692263' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3584339672' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1343773895' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3048246434' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3063384302' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48139 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mgr[74630]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:47:03 compute-0 ceph-40a1bae4-cf76-5610-8dab-c75116dfe0bb-mgr-compute-0-sfzyix[74626]: 2025-12-06T08:47:03.393+0000 7f67611e6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 08:47:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec 06 08:47:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/180698634' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec 06 08:47:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3812902119' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:47:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Dec 06 08:47:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1767667201' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:47:03 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:03 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000026s ======
Dec 06 08:47:03 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:03.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 06 08:47:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:47:03.925 158118 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 08:47:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:47:03.926 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 08:47:03 compute-0 ovn_metadata_agent[158111]: 2025-12-06 08:47:03.926 158118 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 08:47:03 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec 06 08:47:03 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3674706854' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Dec 06 08:47:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015597887' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.48139 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/180698634' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1736339127' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3550407173' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3812902119' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1311921730' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1767667201' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4246357108' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2300073004' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2655961151' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3674706854' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2991099391' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3522175721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1015597887' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1493012721' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/651100828' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3240636314' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4072605533' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Dec 06 08:47:04 compute-0 podman[449401]: 2025-12-06 08:47:04.425260925 +0000 UTC m=+0.086944446 container health_status 6a344a2a15a05a96ac68579e538c91c1835f678cf4fddf8118c7a0eb4d9e0f2d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 06 08:47:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3104068488' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec 06 08:47:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280998855' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4654: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:04 compute-0 systemd[1]: Starting Hostname Service...
Dec 06 08:47:04 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47204 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47210 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-0 systemd[1]: Started Hostname Service.
Dec 06 08:47:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Dec 06 08:47:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316993960' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:47:04 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Dec 06 08:47:04 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1410012521' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:47:04 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:04 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:04 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:04.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:05 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47231 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47225 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:05 compute-0 nova_compute[251992]: 2025-12-06 08:47:05.161 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:47:05 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Dec 06 08:47:05 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1486880162' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39012 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3104068488' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1280998855' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2684433230' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: pgmap v4654: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.47204 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2338196638' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.47210 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1316993960' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1410012521' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4249903457' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2475362331' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/455727909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1486880162' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3925471560' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47240 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39018 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:05 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39024 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:05 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:05 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.001000027s ======
Dec 06 08:47:05 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:05.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 06 08:47:05 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47249 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39030 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48241 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:06 compute-0 nova_compute[251992]: 2025-12-06 08:47:06.274 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:47:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47270 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mon[74339]: from='client.47231 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mon[74339]: from='client.47225 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mon[74339]: from='client.39012 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3960332756' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mon[74339]: from='client.47240 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mon[74339]: from='client.39018 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3820400097' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1667974750' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mon[74339]: from='client.39024 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mon[74339]: from='client.47249 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2538820321' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/238822674' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39045 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48262 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48256 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4655: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:06 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Dec 06 08:47:06 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133397263' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47288 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48274 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:06 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:06 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:06 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:06.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47303 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Dec 06 08:47:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524758473' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48286 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39069 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.39030 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.48241 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.47270 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/28598356' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.39045 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.48262 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.48256 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: pgmap v4655: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2133397263' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.47288 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/59356258' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.48274 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.39057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.47303 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1524758473' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/1255464285' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:07 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/4031061109' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:47:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Dec 06 08:47:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2661229246' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48301 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39093 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:07 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:07 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:07 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Dec 06 08:47:07 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1542819250' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:47:07 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39120 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:08 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48337 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='client.48286 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='client.39069 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2661229246' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='client.48301 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='client.39093 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/396089789' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/1542819250' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='client.48319 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='client.39120 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1229308044' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:08 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4656: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:08 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48361 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:08 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47384 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:08 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:08 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:08 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:08 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:08.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:09 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Dec 06 08:47:09 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2502847766' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:47:09 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39216 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='client.48337 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2248560303' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:09 compute-0 ceph-mon[74339]: pgmap v4656: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1850885369' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='client.48361 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='client.47384 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/2502847766' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4094367856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.10:0/4094367856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/481958975' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 06 08:47:09 compute-0 ceph-mon[74339]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 06 08:47:09 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:09 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:09 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:09.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Dec 06 08:47:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3265975113' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:47:10 compute-0 nova_compute[251992]: 2025-12-06 08:47:10.164 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:47:10 compute-0 podman[450319]: 2025-12-06 08:47:10.220664104 +0000 UTC m=+0.051880016 container health_status 3865b147faeba9261cd789ea42ab320778add7b254bb25c75680278dd31b80a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 08:47:10 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48445 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:10 compute-0 podman[450336]: 2025-12-06 08:47:10.261951183 +0000 UTC m=+0.090423761 container health_status a749e3f2b417eedc1534b5a5c6306a170b279896ffc97485816e01d12ee8cf2a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 08:47:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Dec 06 08:47:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592466088' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:47:10 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4657: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:10 compute-0 ceph-mon[74339]: from='client.39216 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/254992418' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:47:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/3519464518' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Dec 06 08:47:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3265975113' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:47:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/224885905' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:47:10 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/592466088' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:47:10 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Dec 06 08:47:10 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318357680' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:47:10 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:10 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:10 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:10.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:11 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47456 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:11 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Dec 06 08:47:11 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/246440838' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:47:11 compute-0 nova_compute[251992]: 2025-12-06 08:47:11.275 251996 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 08:47:11 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39255 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:11 compute-0 ceph-mon[74339]: from='client.48445 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:11 compute-0 ceph-mon[74339]: pgmap v4657: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2162878835' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Dec 06 08:47:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2648013193' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:47:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/3318357680' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:47:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1945036162' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Dec 06 08:47:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/246440838' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:47:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/50389982' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Dec 06 08:47:11 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/2461841317' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:47:11 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:11 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:11 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:11.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Dec 06 08:47:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/983149449' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:47:12 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48478 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:12 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47477 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 08:47:12 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Dec 06 08:47:12 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4026056007' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 08:47:12 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4658: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:12 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:12 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:12 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.100 - anonymous [06/Dec/2025:08:47:12.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:13 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39273 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:47:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:47:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:47:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:47:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 08:47:13 compute-0 ceph-mgr[74630]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 08:47:13 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47492 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:13 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48499 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:13 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47498 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:13 compute-0 radosgw[91889]: ====== starting new request req=0x7f463f5576f0 =====
Dec 06 08:47:13 compute-0 radosgw[91889]: ====== req done req=0x7f463f5576f0 op status=0 http_status=200 latency=0.000000000s ======
Dec 06 08:47:13 compute-0 radosgw[91889]: beast: 0x7f463f5576f0: 192.168.122.102 - anonymous [06/Dec/2025:08:47:13.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 06 08:47:14 compute-0 ceph-mon[74339]: from='client.47456 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:14 compute-0 ceph-mon[74339]: from='client.39255 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/1868328670' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Dec 06 08:47:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/983149449' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:47:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.101:0/3140350994' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 08:47:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.100:0/4026056007' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Dec 06 08:47:14 compute-0 ceph-mon[74339]: from='client.? 192.168.122.102:0/2713378083' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Dec 06 08:47:14 compute-0 ceph-mon[74339]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Dec 06 08:47:14 compute-0 ceph-mon[74339]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989435628' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Dec 06 08:47:14 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48514 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:14 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39291 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:14 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.48520 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:14 compute-0 ceph-mgr[74630]: log_channel(cluster) log [DBG] : pgmap v4659: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Dec 06 08:47:14 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.39297 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 08:47:14 compute-0 ceph-mgr[74630]: log_channel(audit) log [DBG] : from='client.47516 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
